Mental Migration: A Cloud Foundry operator’s perspective on starting Kubernetes – Part 1

This will be part of a continuing series of posts as I attempt to carve up, chew, and digest this “Kubernetes” thing everyone keeps talking about. There are a ton of good reference materials on how to get started with a hello-world app on Kubernetes. Go read those. Go on, I’ll wait, open another tab.

My Background: I know CF stuff. I’ve seen so many ways to break/fix/break/fix Cloud Foundry, been doing it for years.

My Objective: Through a series of small steps I’ll translate concepts I know of in Cloud Foundry to Kubernetes.

I eventually got to a point where there is almost too much information out there, and I vapor-locked, not knowing where to really start absorbing Kubernetes. Six years ago with Cloud Foundry it was pretty easy to pick the starting point. Getting the (damn) thing deployed required a carefully handcrafted manifest, a tool called BOSH, and someone with a high enough credit card limit to spin up AWS VMs. I spent a significant part of my first few years helping folks get CF deployed, and that was a full-time job; it was only after that I actually got the chance to USE the platform: push apps, bind services, and all the other fun activities around USING Cloud Foundry.

Kubernetes is a different beastie when it comes to installing. It is mature enough in its lifecycle that with a few clicks inside of GCP you can have a Kubernetes cluster (well, worker nodes, but we’ll save that discussion for later); AWS and Azure have a similar experience. If you are from our corner of the world, there is even a way to deploy Kubernetes with BOSH. Locally you can spin up a tool called minikube, which will use VirtualBox underneath and set up kubectl to automagically set the context to use your minikube cluster. Your blinking cursor awaits you.

What does this mean? We don’t have to spend a few years figuring out the best way of deploying and maintaining Kubernetes. It also means we can just start using the platform much quicker.

Deploy Kubernetes Locally

Before you read any further, please go and install minikube and deploy the sample app here:

Here’s the tl;dr version for macOS users; if any of the steps fail, go back and read the previous two links:

brew cask install minikube
minikube start
kubectl run hello-minikube --image=k8s.gcr.io/echoserver:1.10 --port=8080
kubectl expose deployment hello-minikube --type=NodePort
kubectl get pod

To be fair, I’m not doing this justice; there are dozens of different ways to configure minikube. However, at this point you should have a working Kubernetes cluster.
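To sanity-check the deploy, you can ask minikube for the URL of the NodePort service you just exposed and curl it (the echoserver image simply reflects your request back at you); the exact IP and port will vary on your machine:

kubectl get service hello-minikube
curl $(minikube service hello-minikube --url)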

The equivalent of this in the CF world would have been to:

In the end, each of these gives you a platform where you can deploy and manage containers with a CLI and leverage yml files to do the container deploys.
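For a rough comparison, here’s a sketch of the day-one CF workflow once someone has handed you a running foundation; the API endpoint, org, space, and app name are all placeholders:

cf login -a https://api.example.com
cf target -o my-org -s dev
cf push hello-cf
cf apps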

Control and Runtime Planes

I’m going to generalize and say Cloud Foundry and Kubernetes have similar concepts when it comes to splitting up VMs to have different roles on each platform. Some VMs are used to run containers (which I will call the Runtime Plane) and some VMs run, well, everything else, such as API servers, schedulers, and whatever Locket does in replacing Consul (which I will call the Control Plane).

In the CF world, we can lose a ton of components in the Control Plane and apps will continue to hum along nicely; Pivotal does a good job of describing the Control Plane here. If you lose the BOSH VM, your app is fine. If CredHub decides to keep all the secrets to itself, your app in its container continues blissfully unaware. Touch a cell in the Runtime Plane, though, and developers can get grumpy: this is where the app containers live, and they will need to be scheduled onto surviving cells. When you run out of capacity to run more app containers, you add more cells.

Kubernetes is similarly split into two planes: Master Nodes and Worker Nodes. Master Nodes contain the Control Plane components (apiserver, etcd, scheduler). Typically, if you knock one of these over it may make a sound in the woods, but your app container continues to run your hello-world app. The Worker Nodes in many ways can be thought of as cells: this is where your app containers live. When you run out of capacity to run apps, you add more Worker Nodes.
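You can see this split directly from kubectl: nodes carry a role, and the Control Plane components themselves show up as pods in the kube-system namespace (on minikube everything lives on the single node):

kubectl get nodes
kubectl get pods --namespace=kube-system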

Namespaces, Orgs & Spaces

Cloud Foundry has the concept of Organizations (Orgs) and Spaces, which in general are used to keep groups of folks from touching each other’s stuff (apps). Your marketing developers, when assigned to an org called marketing, will only be able to manage apps & services in that org. Your accounting developers in an org aptly called accounting will only be able to manage apps & services in their org. Within each org, spaces such as “dev”, “test”, and “prod” can be created to help categorize apps within an org. Each app is associated with an org and space, no exceptions. Each org also gets its own quota of resources doled out by the CF admins.

You have to be a bit careful if you log in as a user with cloud_controller.admin access (aka god-mode): you have access to everything but still need to target an org & space to see apps. Everyone has to use the cf CLI to target an org and space:

cf target -o marketing -s dev

Kubernetes has the concept of a namespace, which is not quite the same as a CF Org but can likely be used in a similar fashion. When you installed minikube, a couple of namespaces were created for you. The default namespace is called default. If you deploy an app, it will go to the namespace you are currently targeting. Any resources you create (pods, deployments, PVCs) will belong to the namespace you have as your context. To change the namespace, use the kubectl CLI to set the context:

kubectl config set-context --current --namespace=default
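To mimic the marketing org from the CF example, you could give each team its own namespace and switch your context to it; the namespace name and quota values here are just illustrations:

kubectl create namespace marketing
kubectl get namespaces
kubectl config set-context --current --namespace=marketing

# the CF org quota has a rough analog in a ResourceQuota applied to the namespace
kubectl create quota marketing-quota --namespace=marketing --hard=pods=10,requests.cpu=4,requests.memory=8Gi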

Apps & Pods

In Cloud Foundry, each app instance corresponds to a container running on a cell. If you want to scale your app you wind up with more droplet copies of that container running on (likely) different cells.

Anything with persistent data, such as a postgres database, needs to be bound separately. Communication between containers is handled with Silk. You can cf ssh into these app instances and take a look around the file system and observe the running processes.
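As a concrete example on the CF side (the app name is a placeholder):

cf scale hello-cf -i 3   # three app instances, scheduled onto (likely) different cells
cf ssh hello-cf -i 0     # shell into the first instance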

In Kubernetes, there is the concept of a pod. A pod runs one or more containers. All containers within a pod run on the same worker node. If you want to scale the application, you use replication controllers or replica sets (typically managed through a deployment) to create multiple pods.

Anything with persistent data, such as postgres, can simply be run as a container within a pod. All the containers within the pod can communicate with each other over different ports on localhost with no additional Silk-like CNI required. You can kubectl exec onto a container in a pod and take a look around the file system and observe the running processes.
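The rough kubectl equivalents, using the hello-minikube deployment from earlier (substitute one of the pod names that kubectl get pods returns):

kubectl scale deployment hello-minikube --replicas=3
kubectl get pods
kubectl exec -it <pod-name> -- /bin/sh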

I’ve got a Secret

In BOSH/Cloud Foundry the default secrets generation and store is CredHub. It has a CLI for admins to view & edit secrets, which are nothing more than key+value pairs with the keys at a particular path. cf-deployment is already wired up with a variables: section of the manifest, which BOSH knows how to consume, interacting with CredHub to generate keys and passwords.
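For reference, the variables: section mentioned above looks something like this in a BOSH manifest; the variable names here are illustrative, and BOSH asks CredHub to generate anything that doesn’t already exist at that path:

variables:
- name: cc_db_password
  type: password
- name: nats_password
  type: password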

CredHub is a “newer-ish” component to BOSH and has its own CLI; solutions such as Vault (with the safe CLI) are the alternatives usually seen for managing application secrets.

In Kubernetes, secrets are built into the primary kubectl CLI and are stored in etcd on the Master Nodes. Secrets are managed per namespace and are mounted into a special folder structure within a pod or exposed as environment variables.

In the example below, we’ll create a secret with the kubectl command, reference the secret in a simple deployment, then exec onto the pod to see where the secret is mounted on the file system. I like pickles, and my secret is that I like dill pickles the best; let’s let Postgres know this:

➜ echo -n 'dill' > ./pickles.yummy
➜ cat pickles.yummy
dill%
➜ kubectl create secret generic toppings --from-file=pickles.yummy
secret/toppings created

Suppose you’ve defined a yml with a pod definition similar to the following and applied it (full example is here):

...
      containers:
      - name: hello-postgres
        image: postgres:11.2
        volumeMounts:
        - name: mysecrets
          mountPath: /etc/mysecrets
      volumes:
      - name: mysecrets
        secret:
          secretName: toppings
...
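Assuming you saved the full manifest as something like postgres.yml, applying it and watching the pod come up looks like:

➜ kubectl apply -f postgres.yml
➜ kubectl get pods --watch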

If you connect to the running postgres container, the secret toppings is mounted to the file system at /etc/mysecrets and can be consumed:

➜ kubectl exec -it postgres6meta-0 -- bash
root@postgres6meta-0:/# cat /etc/mysecrets/pickles.yummy
dill

This is just one way of sharing secrets defined through kubectl with containers; for more thorough documentation, reference the secrets docs on kubernetes.io.
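As mentioned earlier, the same secret can also be surfaced as environment variables instead of files; here’s a minimal sketch of the container spec using the same toppings secret (the environment variable name is made up):

...
      containers:
      - name: hello-postgres
        image: postgres:11.2
        env:
        - name: FAVORITE_PICKLE
          valueFrom:
            secretKeyRef:
              name: toppings
              key: pickles.yummy
...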

Next Steps

I have more reading and experimenting to do. I’ll be digging into logging, persistent volume claims, slack orgs, DNS, and ingress/egress bits, and hope to have more soon. If there are other topics along the lines of “cf does x, how do I get Kubernetes to do the same thing?”, let me know in the comments below.

Thanks for reading!
