Deploy Kubernetes to an Existing BOSH Environment

Coming to SpringOnePlatform next week? I (Dr Nic) and many Stark & Wayne team members will be in the Community Hub to talk about Kubernetes/CFCR, Cloud Foundry, Kafka, SHIELD and more. Come find us!

Kubernetes is one of the newest components of a larger Cloud Foundry, deployed with BOSH, and known as CFCR – the Cloud Foundry Container Runtime (previously known as Kubo). But the current documentation at https://docs-cfcr.cfapps.io/ does not work with an existing BOSH environment; it expects to deploy its own bastion/jumpbox and BOSH environment. I already have a BOSH environment. Lots of them. And I want to run Kubernetes on them.

So, I rewrote the deployment manifests so that it is fun and easy to deploy Kubernetes to an existing BOSH environment.

How fun, you might ask? This fun:

git clone https://github.com/drnic/kubo-deployment -b stable-0.9.0
export BOSH_DEPLOYMENT=cfcr
bosh deploy kubo-deployment/manifests/cfcr.yml

That’s it. It will fetch all the BOSH releases, provision infrastructure on your favourite Cloudy IaaS (e.g. AWS, GCP, Azure, vSphere, OpenStack), and run a single Kubernetes API on one master instance and three Kubelets on worker instances.

Before running the commands above, check the Requirements section at the bottom of this post.

Once the deployment completes, bosh instances will show one master and three workers:

Instance                                     Process State  AZ  IPs
master/bde7bc5a-a9fd-4bcc-9ba7-b66752fad159  running        z1  10.10.1.20
worker/4518c694-3328-4538-bc08-dedf8a3bf400  running        z1  10.10.1.22
worker/49d317d0-dff2-44a3-b00c-0406ce8a010e  running        z1  10.10.1.23
worker/e00ac851-fadb-4b7d-94c4-8917042ba6cb  running        z1  10.10.1.21

You can now configure kubectl to target the Kubernetes API running on port 8443.

You’ll need the master/0 host IP and the admin password:

# the master is the first row of the bosh instances JSON output
master_host=$(bosh int <(bosh instances --json) --path /Tables/0/Rows/0/ips)
# the admin password was generated by the BOSH director's Credhub
admin_password=$(bosh int <(credhub get -n "${BOSH_ENVIRONMENT}/${BOSH_DEPLOYMENT}/kubo-admin-password" --output-json) --path=/value)

Finally, set up your local kubectl configuration:

cluster_name="cfcr:${BOSH_ENVIRONMENT}:${BOSH_DEPLOYMENT}"
user_name="cfcr:${BOSH_ENVIRONMENT}:${BOSH_DEPLOYMENT}-admin"
context_name="cfcr:${BOSH_ENVIRONMENT}:${BOSH_DEPLOYMENT}"
kubectl config set-cluster "${cluster_name}" \
  --server="https://${master_host}:8443" \
  --insecure-skip-tls-verify=true
kubectl config set-credentials "${user_name}" --token="${admin_password}"
kubectl config set-context "${context_name}" --cluster="${cluster_name}" --user="${user_name}"
kubectl config use-context "${context_name}"
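
The --insecure-skip-tls-verify flag above is the quick path. As a sketch of a stricter alternative, you could extract the cluster CA from the auto-generated tls-kubernetes certificate in Credhub (assuming its SANs cover the address you are connecting to) and verify against it:

# pull the CA out of the generated certificate credential (sketch)
bosh int <(credhub get -n "${BOSH_ENVIRONMENT}/${BOSH_DEPLOYMENT}/tls-kubernetes" --output-json) \
  --path=/value/ca > cfcr-ca.pem
kubectl config set-cluster "${cluster_name}" \
  --server="https://${master_host}:8443" \
  --certificate-authority=cfcr-ca.pem \
  --embed-certs=true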

To confirm that kubectl is connected and correctly configured for your Kubernetes cluster:

$ kubectl get all
NAME             TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)   AGE
svc/kubernetes   ClusterIP   10.100.200.1   <none>        443/TCP   2h
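
You can also check that the three worker instances have registered themselves as nodes:

# each worker VM should appear as a Kubernetes node
kubectl get nodes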

Deploy Elasticsearch

There is a handy repo filled with example Kubernetes deployments for those of us who don't know anything about such things yet. We just want to see systems running.

Below is the example from https://github.com/kubernetes/examples/tree/master/staging/elasticsearch

git clone https://github.com/kubernetes/examples kubernetes-examples
cd kubernetes-examples
kubectl create -f staging/elasticsearch/service-account.yaml
kubectl create -f staging/elasticsearch/es-svc.yaml
kubectl create -f staging/elasticsearch/es-rc.yaml
kubectl create -f staging/elasticsearch/rbac.yaml

This deploys a 1-instance cluster of Elasticsearch.

$ kubectl get pods
NAME       READY     STATUS    RESTARTS   AGE
es-47mrc   1/1       Running   0          2m
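
If you want to poke at the Elasticsearch HTTP API before any routing exists, kubectl port-forward can tunnel from your workstation to the pod (substitute your own pod name from the output above):

# forward local port 9200 to the pod, then query Elasticsearch
kubectl port-forward es-47mrc 9200:9200 &
curl http://localhost:9200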

Scaling and replication across the cluster is as delightful as:

kubectl scale --replicas=3 rc es

Our cluster has grown:

$ kubectl get pods
NAME       READY     STATUS    RESTARTS   AGE
es-95h78   1/1       Running   0          3m
es-q8q2v   1/1       Running   0          6m
es-qdcnd   1/1       Running   0          3m
$ kubectl get service elasticsearch
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)                         AGE
elasticsearch   LoadBalancer   10.100.200.142   <pending>     9200:32190/TCP,9300:31042/TCP   59s

Cloud Foundry Routing

Accessing our Elasticsearch cluster from outside of Kubernetes – such as from a Cloud Foundry application or from a BOSH deployment – requires a routing layer.

You’re probably reading this because you’ve already got BOSH and Cloud Foundry. So I’ll skip to the good bit – exposing CFCR/Kubernetes services to the HTTP and TCP routing layers.

Redeploy your CFCR BOSH deployment with an additional operator file and some variables.

First, delete the TLS certificates, which do not include the new TCP hostname that we will use to access the Kubernetes API. They will be regenerated automatically when we run bosh deploy again:

credhub delete -n /$BOSH_ENVIRONMENT/$BOSH_DEPLOYMENT/tls-kubernetes
credhub delete -n /$BOSH_ENVIRONMENT/$BOSH_DEPLOYMENT/tls-kubelet

Create a cf-vars.yml with the following YAML format. The values will come from your Cloud Foundry deployment:

kubernetes_master_host: tcp.apps.mycompany.com
kubernetes_master_port: 8443
routing-cf-api-url: https://api.system.mycompany.com
routing-cf-uaa-url: https://uaa.system.mycompany.com
routing-cf-app-domain-name: apps.mycompany.com
routing-cf-client-id: routing_api_client
routing-cf-client-secret: <<credhub get -n my-bosh/cf/uaa_clients_routing_api_client_secret>>
routing-cf-nats-internal-ips: [10.10.1.6,10.10.1.7,10.10.1.8]
routing-cf-nats-port: 4222
routing-cf-nats-username: nats
routing-cf-nats-password: <<credhub get -n my-bosh/cf/nats_password>>
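
The <<credhub get ...>> values are placeholders for you to substitute. Mirroring the lookup pattern used earlier in this post, and assuming your director is aliased my-bosh and your Cloud Foundry deployment is named cf:

# illustrative lookups; adjust the paths to match your environment
routing_api_client_secret=$(bosh int <(credhub get -n /my-bosh/cf/uaa_clients_routing_api_client_secret --output-json) --path=/value)
nats_password=$(bosh int <(credhub get -n /my-bosh/cf/nats_password --output-json) --path=/value)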

Alternatively, you can try a helper script which might be able to use the bosh, cf, and credhub CLIs to look up all the information:

./kubo-deployment/manifests/helper/cf-routing-vars.sh > cf-vars.yml

I find the latter approach only slightly less ugly than the first. In the future, ideally, Cloud Foundry will expose BOSH links to discover its API endpoints, UAA clients, etc. Then this step will go away and there should be no need for a cf-vars.yml file. One day.

Finally, deploy CFCR again with HTTP/TCP routing:

bosh deploy kubo-deployment/manifests/cfcr.yml \
  -o kubo-deployment/manifests/ops-files/cf-routing.yml \
  -l cf-vars.yml

This may fail with the following error, but I'm not yet sure if it's a bad failure or an annoying failure.

Task 1331 | 22:01:56 | Error: Action Failed get_task: Task 674bebba-0054-4262-486d-e386c145d43b result: 1 of 1 post-deploy scripts failed. Failed Jobs: kubernetes-system-specs.

Once this has completed, you can start labeling your Kubernetes services, and the route-sync job will automatically and continuously advertise them to the HTTP or TCP routing tiers.

First, re-configure kubectl to point at our new HTTPS-enabled Kubernetes API endpoint.

You’ll need the new HTTPS hostname and the admin password:

master_host=$(bosh int cf-vars.yml --path /kubernetes_master_host)
admin_password=$(bosh int <(credhub get -n "${BOSH_ENVIRONMENT}/${BOSH_DEPLOYMENT}/kubo-admin-password" --output-json) --path=/value)

Now, set up your local kubectl configuration:

rm ~/.kube/config
cluster_name="cfcr:${BOSH_ENVIRONMENT}:${BOSH_DEPLOYMENT}"
user_name="cfcr:${BOSH_ENVIRONMENT}:${BOSH_DEPLOYMENT}-admin"
context_name="cfcr:${BOSH_ENVIRONMENT}:${BOSH_DEPLOYMENT}"
kubectl config set-cluster "${cluster_name}" \
  --server="https://${master_host}:8443" \
  --insecure-skip-tls-verify=true
kubectl config set-credentials "${user_name}" --token="${admin_password}"
kubectl config set-context "${context_name}" --cluster="${cluster_name}" --user="${user_name}"
kubectl config use-context "${context_name}"

To register an HTTP route https://myelastic.apps.mycompany.com that routes to your Elasticsearch HTTP API:

kubectl label service elasticsearch http-route-sync=myelastic

The route will take a few moments to appear:

$ curl -k https://myelastic.apps.mycompany.com
502 Bad Gateway: Registered endpoint failed to handle the request.
$ curl -k https://myelastic.apps.mycompany.com
{
  "name" : "f6a77df3-a1ae-42a6-b749-87e3a7e88906",
  "cluster_name" : "myesdb",
  "cluster_uuid" : "ErRYBEU8QHChXeOV0NOhsA",
  "version" : {
    "number" : "5.6.2",
    "build_hash" : "57e20f3",
    "build_date" : "2017-09-23T13:16:45.703Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.1"
  },
  "tagline" : "You Know, for Search"
}

To assign yourself public port 9300 on the TCP routing tier, making Elasticsearch reachable at tcp.apps.mycompany.com:9300:

kubectl label service elasticsearch tcp-route-sync=9300
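
As with the HTTP route, this takes a few moments to appear. Assuming nc is installed locally, you can check that the port is accepting connections:

# verbose zero-I/O scan: succeeds once route-sync has advertised the route
nc -vz tcp.apps.mycompany.com 9300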

Cloud Providers

You can integrate your Kubernetes cluster with your underlying cloud IaaS using an operator file. Look in manifests/ops-files/iaas/<your-iaas>/cloud-provider.yml and fill in the variables; an example is sketched below.
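
For example, assuming GCP (the project_id variable below is illustrative; inspect the operator file for the variables it actually requires):

bosh deploy kubo-deployment/manifests/cfcr.yml \
  -o kubo-deployment/manifests/ops-files/iaas/gcp/cloud-provider.yml \
  -v project_id=my-gcp-project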

Requirements

There are a few requirements for your BOSH environment:

  • Credhub/UAA (add -o uaa.yml -o credhub.yml to your bosh create-env installation)
  • Cloud Config with vm_types named minimal, small, and small-highmem as per similar requirements of cf-deployment
  • Cloud Config has a network named default as per similar requirements of cf-deployment
  • BOSH instances must be normal VMs, not garden containers (i.e. CFCR does not deploy to bosh-lite)
  • Ubuntu Trusty stemcell 3468 is already uploaded (it's up to you to keep up to date with the latest 3468.X versions and update your BOSH deployments); an example upload command is shown below
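
For the last requirement, a stemcell upload might look like this, assuming GCP (swap the stemcell name for your IaaS; bosh.io lists them all):

# fetches the latest version; append ?v=3468.X to pin a specific one
bosh upload-stemcell https://bosh.io/d/stemcells/bosh-google-kvm-ubuntu-trusty-go_agent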

If you do not have Credhub, you can add the following flags to the bosh deploy commands above: --vars-store creds.yml -o kubo-deployment/manifests/ops-files/misc/local-config-server.yml. The base manifest assumes Credhub; the local-config-server.yml operator removes the options.organization property that the bosh CLI does not support locally.

For example:

bosh deploy kubo-deployment/manifests/cfcr.yml \
  --vars-store creds.yml \
  -o kubo-deployment/manifests/ops-files/misc/local-config-server.yml

Future of this Work

Thanks very much to George Lestaris (CFCR PM) and Konstantin Semenov (CFCR Anchor) for helping me all week to get this working. You’re awesome.

An experience like bosh deploy manifests/cfcr.yml will appear in upstream https://github.com/cloudfoundry-incubator/kubo-deployment in the coming weeks. I’ll update the blog post for the revised instructions/operator files at that time.

The "Kubo" team has been working very hard on the CFCR/Kubo project all year. It is very exciting to see Kubernetes "just work" and to start playing with it. I look forward to extending this blog post with newer posts in the future. What’s interesting to you?
