Kubernetes 101 Overview
A quick primer on Kubernetes


Kubernetes is a container orchestrator used to provision, manage, and scale applications. In other words, Kubernetes lets you manage the lifecycle of containerized applications within a cluster of nodes (a collection of worker machines such as VMs or physical machines).
Kubernetes does not have the concept of an application; it provides simple building blocks that you compose yourself. Kubernetes is a cloud-native platform where the internal resource model is the same as the end-user resource model.

Key Components of Kubernetes

Pods

A Pod is the smallest object that you can create and run in Kubernetes. You can add labels to a Pod to identify a subset of Pods to run operations on. When you are ready to scale your application, you can use a label to tell Kubernetes which Pods to scale. When we talk about an application, we usually refer to a group of Pods. Although an entire application can run in a single Pod, we usually build multiple Pods that talk to each other to make a useful application.
Pod
Creating a Kubernetes Pod

Use the following command to launch a simple busybox container as a Kubernetes Pod:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Pod
metadata:
  name: busybox-sleep
spec:
  containers:
  - name: busybox
    image: busybox
    args:
    - sleep
    - "1000000"
EOF
```
Describe Pod Resource

Use the kubectl describe command to get more information about our running pod:

```shell
kubectl describe pod/busybox-sleep
```
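As a sketch of how labels drive operations (the `app=demo` label here is illustrative, not part of the manifest above), you can label the pod and then select it:

```shell
# Add a label to the running pod ("app=demo" is a made-up example key/value)
kubectl label pod/busybox-sleep app=demo

# List only the pods carrying that label; scaling, deleting, and other
# operations target subsets of pods the same way (-l selects by label)
kubectl get pods -l app=demo
```

The same `-l` selector works with most kubectl verbs, which is how a single command can operate on every pod belonging to one application.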
While it is trivial to launch a pod within Kubernetes, to scale I would need to manually create new pods every time I need the service to be more elastic. This is where Deployments come into play.

Deployments

A Deployment provides declarative updates for Pods and ReplicaSets. You describe a desired state in a Deployment, and the Deployment Controller changes the actual state to the desired state at a controlled rate.
Deployment
Creating a Kubernetes Deployment

Use the following command to launch a set of 3 nginx pods:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx-container
        image: nginx
        ports:
        - containerPort: 80
EOF
```
Describe Deployment Resource

Use the kubectl describe command to get more information about our new deployment:

```shell
kubectl describe deployment.apps/nginx-deploy
```
Now if K8s sees that my nginx deployment only has 2 running pods, it will create an additional pod to meet the deployment specification. So I've now got my nginx containers running within my cluster, but how do outside users and other cluster resources find them? Enter Services.
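That elasticity can be exercised directly. A minimal sketch, assuming the nginx-deploy Deployment created above:

```shell
# Raise the desired state; the Deployment controller creates pods
# until the actual state matches the new replica count of 5
kubectl scale deployment/nginx-deploy --replicas=5

# Watch the replica count converge (READY moves from 3/5 toward 5/5)
kubectl get deployment/nginx-deploy
```

Scaling back down works the same way: set `--replicas=3` and the controller deletes the surplus pods.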

Services

A Service is an abstract way to expose an application running on a set of Pods as a network service. Services use a single DNS name for a set of Pods, and can load-balance across them. The set of Pods targeted by a Service is usually determined by a selector (in our case app=nginx).
Service
Deploying a service

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
  labels:
    app: nginx
spec:
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
EOF
```
Describe Service Resource

Use the kubectl describe command to get more information about our new service:

```shell
kubectl describe service/nginx-svc
```
One thing you will notice when looking at the service config is Type: ClusterIP. Kubernetes comes with 3 primary ways to expose resources via Services. If you do not specifically set a type, Kubernetes will default to ClusterIP.
ClusterIP
Exposes the Service on a cluster-internal IP. Choosing this value makes the Service only reachable from within the cluster. This is the default ServiceType.
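Because a ClusterIP Service is only reachable from inside the cluster, one way to test it is from a throwaway pod. A sketch assuming the nginx-svc Service above and the public curlimages/curl image:

```shell
# Run a one-off pod inside the cluster and curl the service by its DNS name;
# the cluster DNS (CoreDNS) resolves nginx-svc to the service's cluster IP.
# --rm deletes the pod again once the command exits.
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- curl -s http://nginx-svc
```

If everything is wired up, this prints the nginx welcome page from one of the three pods behind the service.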
NodePort
Exposes the Service on each Node's IP at a static port (the NodePort). A ClusterIP Service, to which the NodePort Service routes, is automatically created. You'll be able to contact the NodePort Service from outside the cluster by requesting <NodeIP>:<NodePort>.
Deploying a NodePort type service

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-np
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
EOF
```
Pull NodePort

You can pull the assigned port using the kubectl get command and jq:

```shell
kubectl get svc/nginx-svc-np -o json | jq -r '.spec.ports[]'
```
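To illustrate what that jq filter pulls out, here is a trimmed, illustrative sample of the JSON a NodePort service returns (the nodePort value 30080 is made up; your cluster assigns one from its NodePort range, 30000-32767 by default):

```shell
# Illustrative, hand-written sample of `kubectl get svc ... -o json` output
sample='{"spec":{"ports":[{"protocol":"TCP","port":80,"nodePort":30080}]}}'

# Extract just the assigned NodePort from the sample
echo "$sample" | jq -r '.spec.ports[].nodePort'
```

Combined with any node's IP (for example from `kubectl get nodes -o wide`), that port is what external clients would hit.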
LoadBalancer
Exposes the Service externally using a cloud provider's load balancer. The NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created when using Type: LoadBalancer.
Deploying a LoadBalancer type service

```shell
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc-lb
  labels:
    app: nginx
spec:
  type: LoadBalancer
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
EOF
```
Find the external LoadBalancer IP

```shell
kubectl get svc/nginx-svc-lb -o json | jq -r '.status.loadBalancer.ingress[].ip'
```
Use curl to verify connectivity from outside the cluster to your newly created LoadBalancer service:

```shell
$ kubectl get svc/nginx-svc-lb -o json | jq -r '.status.loadBalancer.ingress[].ip'
169.48.252.133

$ curl 169.48.252.133
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
<style>
    body {
        width: 35em;
        margin: 0 auto;
        font-family: Tahoma, Verdana, Arial, sans-serif;
    }
</style>
</head>
<body>
<h1>Welcome to nginx!</h1>
<p>If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.</p>

<p>For online documentation and support please refer to
<a href="http://nginx.org/">nginx.org</a>.<br/>
Commercial support is available at
<a href="http://nginx.com/">nginx.com</a>.</p>

<p><em>Thank you for using nginx.</em></p>
</body>
</html>
```

Ingress

Exposes HTTP/S routes from outside the cluster to services within the cluster. An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type NodePort or LoadBalancer.
Ingress
Deploy an Ingress

This deploys an Ingress specifying that traffic for test-ingress.cdetesting.com hitting the cluster will be routed to the nginx-svc service within the cluster:

```shell
cat <<EOF | kubectl apply -f -
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: test-ingress.cdetesting.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-svc
            port:
              number: 80
EOF
```
Pointing the domain at the cluster

In order to get test-ingress.cdetesting.com to resolve to my Kubernetes cluster, I created the CNAME test-ingress for the cdetesting.com domain and pointed it at my IKS ingress hostname. You can find your ingress hostname by running the following command:

```shell
ibmcloud ks cluster get --cluster devcluster --json | jq -r .ingressHostname
```
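Before the DNS record propagates (or without creating it at all), you can verify the Ingress rule by presenting the Host header yourself. A sketch, where 169.48.252.133 stands in for your own cluster's ingress IP:

```shell
# Hit the ingress IP directly while sending the hostname the rule matches;
# the ingress controller routes the request to nginx-svc based on that header
curl -H "Host: test-ingress.cdetesting.com" http://169.48.252.133/
```

A request to the same IP without the matching Host header would not match the rule and typically returns the controller's default 404 backend.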

Persistent Storage

Namespaces

Secrets

Kubernetes application deployment workflow

deployment workflow
  1. A user deploys a new application via kubectl. kubectl sends the request to the API Server.
  2. The API Server receives the request and stores it in the data store (etcd). Once the request is written to the data store, the API Server is done with the request.
  3. Watchers detect the resource change and send a notification to the controller to act upon it.
  4. The controller detects the new app and creates new Pods to match the desired number of instances. Any changes to the stored model will be picked up to create or delete Pods.
  5. The scheduler assigns new Pods to a Node based on its criteria, making decisions to run Pods on specific Nodes in the cluster, and modifies the model with the node information.
  6. The kubelet on a node detects a Pod assigned to itself and deploys the requested containers via the container runtime (e.g. Docker). Each Node watches the storage to see which Pods it is assigned to run and takes the necessary actions, like creating or deleting Pods.
  7. kube-proxy manages network traffic for the Pods, including service discovery and load balancing. kube-proxy is responsible for communication between Pods that want to interact.
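Steps 3-4 above are an instance of Kubernetes' reconciliation pattern: a controller repeatedly compares desired state against actual state and acts on the difference. A minimal, self-contained shell sketch of that loop, with plain variables standing in for etcd and real Pods:

```shell
#!/bin/sh
# Desired and actual state; in a real controller these come from the
# API server (backed by etcd), not from shell variables
desired=3
actual=1

# Reconcile loop: keep acting until actual state matches desired state
while [ "$actual" -lt "$desired" ]; do
  actual=$((actual + 1))
  echo "creating pod $actual"   # stand-in for an API call that creates a Pod
done
echo "reconciled: $actual/$desired pods running"
```

The same loop run with `desired` lowered would instead delete Pods; controllers are level-triggered, so it does not matter how the gap between desired and actual state arose.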

Extending Kubernetes

  • Sidecar container: a separate container that performs its own function distinct from the application container.
    • Istio uses a Sidecar proxy to mediate inbound and outbound communication to the workload instance it is attached to.
  • Custom Resource Definitions:
    • IBM Cloud Databases use CRDs to deploy to the IBM Cloud.

Videos / Tutorials / Labs
