ARTH Task 16.1

Naveen Pareek
6 min read · Mar 17, 2021


Pearson using K8s

Description

🔰 Research how Kubernetes is used in industry and which use cases it solves.

Before discussing the use cases of Kubernetes in today’s industry, we need to cover some basics of Kubernetes. So, let’s start…

What is Kubernetes?

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services, that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.

The name Kubernetes originates from Greek, meaning helmsman or pilot. Google open-sourced the Kubernetes project in 2014. Kubernetes combines over 15 years of Google’s experience running production workloads at scale with best-of-breed ideas and practices from the community.

Kubernetes clusters can span hosts across on-premises, public, private, or hybrid clouds. For this reason, Kubernetes is an ideal platform for hosting cloud-native applications that require rapid scaling, like real-time data streaming through Apache Kafka.

Let’s take a look at why Kubernetes is so useful by going back in time.

Reasons to use Kubernetes:

Ø Vendor-agnostic:

Many public cloud providers not only offer managed Kubernetes services but also many cloud products built on top of those services for on-premises application container orchestration. Being vendor-agnostic enables operators to design, build, and manage multi-cloud and hybrid cloud platforms easily and safely without the risk of vendor lock-in. Kubernetes also eliminates the ops team’s worries about a complex multi/hybrid cloud strategy.

Ø Service discovery:

To develop microservices applications, Java developers must control service availability (in terms of whether the application is ready to serve a function) and ensure the service continues living, without any exceptions, in response to the client’s requests. Kubernetes’ service discovery feature means developers don’t have to manage these things on their own anymore.
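As a sketch of how this works, a Service manifest like the following gives a set of pods a stable virtual IP and DNS name inside the cluster (the names `my-app` and `my-app-svc`, and the ports, are illustrative assumptions, not from the article):

```yaml
# Hypothetical Service exposing pods labeled app: my-app.
# Cluster DNS makes it reachable as my-app-svc.<namespace>.svc.cluster.local,
# so clients never need to track individual pod IPs.
apiVersion: v1
kind: Service
metadata:
  name: my-app-svc
spec:
  selector:
    app: my-app        # forwards traffic to any pod carrying this label
  ports:
    - port: 80         # port clients connect to
      targetPort: 8080 # port the application container listens on
```

Kubernetes keeps the Service’s endpoint list in sync as pods come and go, which is what frees developers from managing availability bookkeeping themselves.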

Ø Invocation:

How would your DevOps initiative deploy polyglot, cloud-native apps over thousands of virtual machines? Ideally, dev and ops could trigger deployments for bug fixes, function enhancements, new features, and security patches. Kubernetes’ deployment feature automates this daily work. More importantly, it enables advanced deployment strategies, such as blue-green and canary deployments.

Ø Elasticity:

Autoscaling is the key capability needed to handle massive workloads in cloud environments. By building a container platform, you can increase system reliability for end-users. Kubernetes Horizontal Pod Autoscaler (HPA) allows a cluster to increase or decrease the number of applications (or Pods) to deal with peak traffic or performance spikes, reducing concerns about unexpected system outages.
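As a sketch, an HPA definition using the `autoscaling/v2` API might look like the following; the target Deployment name and the thresholds are illustrative assumptions:

```yaml
# Hypothetical HPA that scales a Deployment between 2 and 10 replicas
# based on average CPU utilization reported by the metrics pipeline.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out when average CPU exceeds 70%
```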

Ø Resilience:

In a modern application architecture, failure-handling code must be written to catch unexpected errors and recover from them quickly, but it takes a lot of time and effort for developers to simulate every occasional error. Kubernetes’ ReplicaSet helps developers solve this problem by ensuring a specified number of Pods are kept alive continuously.
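A minimal ReplicaSet sketch looks like the following (names and image are illustrative). If any of the three pods crashes or its node fails, the controller creates a replacement to restore the desired count:

```yaml
# Hypothetical ReplicaSet keeping three identical pods alive;
# the controller replaces any pod that dies.
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: my-app-rs
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: web
          image: my-app:1.0.0
```

In practice you rarely create ReplicaSets directly; a Deployment manages them for you and adds rollout behavior on top.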

Features of Kubernetes:

Once you’ve got a grasp on the basics of k8s, you’ll likely want to start taking advantage of the advanced functionality and features.

o Helm Charts:

Helm is a package manager for Kubernetes that you can use to streamline the installation and management of k8s applications. It uses charts composed of a description of the package and templates containing k8s manifest files. You use manifest files in k8s to create, modify, and delete resources.
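As a sketch, the root of every Helm chart contains a `Chart.yaml` metadata file like the one below (the chart name and versions are illustrative assumptions):

```yaml
# Hypothetical Chart.yaml - the metadata file at the root of a Helm chart.
apiVersion: v2            # Helm 3 chart API version
name: my-app
description: A chart that packages the my-app Kubernetes manifests
version: 0.1.0            # version of the chart itself
appVersion: "1.0.0"       # version of the application the chart deploys
```

Alongside it sit a `values.yaml` file and a `templates/` directory of parameterized manifest files; `helm install` renders those templates with the supplied values and applies the result to the cluster.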

o Sidecars:

Sidecars are a feature that enables you to run an additional container within a pod to be used as a watcher or proxy. You use this extra container to direct data to be mounted and exposed to other containers in the pod. For example, sidecars could be used to handle logging or authentication for a primary container.
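A sketch of the logging scenario: the pod below runs a main container and a log-shipping sidecar that share an `emptyDir` volume (container names, image, and log path are assumptions for illustration):

```yaml
# Hypothetical pod with a log-shipping sidecar; both containers
# mount the same volume, so the sidecar can read the app's log file.
apiVersion: v1
kind: Pod
metadata:
  name: my-app-with-sidecar
spec:
  volumes:
    - name: logs
      emptyDir: {}       # scratch volume shared by both containers
  containers:
    - name: web
      image: my-app:1.0.0
      volumeMounts:
        - name: logs
          mountPath: /var/log/my-app
    - name: log-shipper  # sidecar: tails logs written by the main container
      image: busybox
      command: ["sh", "-c", "tail -F /var/log/my-app/app.log"]
      volumeMounts:
        - name: logs
          mountPath: /var/log/my-app
```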

o Custom Controllers:

Controllers are loops that regulate the state of your system or resources. With custom controllers, you can accomplish tasks that aren’t included with standard controllers. For example, you can dynamically reload application configurations. Custom controllers can be used with native or custom resource types.

o Custom Scheduling:

K8s comes with a default scheduler for assigning newly created pods to nodes. If this scheduler doesn’t fit your needs or if you would like to run multiple schedulers, you can create a custom scheduler.
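Opting a pod into a custom scheduler is a one-line change in its spec, as in this sketch (the scheduler name `my-scheduler` is an assumption; it must match the name your custom scheduler registers):

```yaml
# Hypothetical pod assigned to a custom scheduler; the default
# scheduler ignores it, and my-scheduler must bind it to a node.
apiVersion: v1
kind: Pod
metadata:
  name: custom-scheduled-pod
spec:
  schedulerName: my-scheduler   # assumed name of the custom scheduler
  containers:
    - name: web
      image: my-app:1.0.0
```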

o Taints and Tolerations:

Taints and tolerations are a feature that enables you to direct nodes to “attract” or “repel” pods. Taints are assigned to nodes and specify that pods that do not tolerate the taint assigned should not be accepted.

Tolerations are assigned to pods and signal to nodes with matching taints that pods can be accepted. This feature is useful if you need to deploy an application on specific hardware or if you want to dedicate a set of nodes to specific users.
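For the dedicated-hardware scenario above, a node could be tainted with `kubectl taint nodes node1 dedicated=gpu:NoSchedule`, and only pods carrying a matching toleration would be scheduled there. A sketch of such a pod (the key/value pair, node name, and image are illustrative assumptions):

```yaml
# Hypothetical pod tolerating a dedicated=gpu:NoSchedule taint,
# so it may be scheduled onto the tainted GPU nodes.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-workload
spec:
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "gpu"
      effect: "NoSchedule"   # must match the effect of the node's taint
  containers:
    - name: trainer
      image: my-gpu-app:1.0.0
```

Note that a toleration only permits scheduling onto tainted nodes; to also attract the pod to those nodes you would combine it with node affinity or a node selector.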

o Health Checking:

You can check the health of pods or applications in k8s by defining probes to be run by a kubelet agent. You can define readiness, liveness, and startup probes.
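A sketch showing all three probe types on one container; the HTTP paths, port, and timings are assumptions about the application, not prescribed values:

```yaml
# Hypothetical pod demonstrating startup, liveness, and readiness probes.
apiVersion: v1
kind: Pod
metadata:
  name: probed-app
spec:
  containers:
    - name: web
      image: my-app:1.0.0
      startupProbe:            # gives a slow-starting app time to boot
        httpGet: { path: /healthz, port: 8080 }
        failureThreshold: 30   # up to 30 x 10s before giving up
        periodSeconds: 10
      livenessProbe:           # kubelet restarts the container on failure
        httpGet: { path: /healthz, port: 8080 }
        periodSeconds: 15
      readinessProbe:          # failing pods are removed from Service endpoints
        httpGet: { path: /ready, port: 8080 }
        periodSeconds: 5
```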

Why Kubernetes and Docker — better together?

In short, use Kubernetes with Docker to:

Make your infrastructure more robust and your app more highly available. Your app will remain online, even if some of the nodes go offline.

Make your application more scalable. If your app starts to get a lot more load and you need to scale out to be able to provide a better user experience, it is simple to spin up more containers or add more nodes to your Kubernetes cluster.

Kubernetes and Docker work together. Docker provides an open standard for packaging and distributing containerized applications. Using Docker, you can build and run containers and store and share container images. One can easily run a Docker build on a Kubernetes cluster, but Kubernetes itself is not a complete solution. To optimize Kubernetes in production, implement additional tools and services to manage security, governance, identity and access along with continuous integration/continuous deployment (CI/CD) workflows and other DevOps practices.

Pearson’s Kubernetes Case study:

Pearson is a global education company serving 75 million learners, and it has set a goal of more than doubling that number, to 200 million, by 2025. A key part of this growth is digital learning experiences, but Pearson was having difficulty scaling and adapting to its growing online audience. It needed an infrastructure platform that could scale quickly and deliver products to market faster.

Their solution was to build a platform that would allow their developers to build, manage, and deploy applications in a completely different way. The team chose Docker container technology and Kubernetes orchestration “because of its flexibility, ease of management and the way it would improve our engineers’ productivity.”

Since deploying their applications on Kubernetes, Pearson has seen substantial improvements in productivity and speed of delivery. In some cases, they have gone from nine months to provision physical assets in a data center to just a few minutes to provision and get a new idea in front of a customer.

Thank You

Keep Learning & growing.
