Deploy Cloud Foundry to Google Kubernetes in 10 minutes https://www.starkandwayne.com/blog/deploy-cf-for-k8s-to-google-in-10-minutes/ (20 Apr 2020)

Is Cloud Foundry dead? I ask on your behalf because Pivotal – chief cheerleader and contributor to Cloud Foundry – was sold to VMware in 2019, and all Pivotal-cum-VMware staff chant "Tanzu" as the answer to all problems. During 2017, '18, and '19 the vendor ecosystem around "cloud" and "devops" seemed to pine for all things Kubernetes.

Cloud Foundry was not Kubernetes. It worked, sure. It was used by huge companies, great. It was actually used very successfully, at large scale, yes, that's lovely. But it wasn't Kubernetes. A normal person wanting to "try out Cloud Foundry" had to run it themselves. It was a huge thing to try running on your skinny laptop, and the BOSH toolchain for running it on a cloud was confusing and new to everyone.

Kubernetes was smaller, was popular, and it had a nice leveling up experience as you learned it. You could be successful early by deploying nginx pods, and you could keep learning more and more and always feel good about it. Kubernetes had gamified devops.

Cloud Foundry was not Kubernetes. But, this year it is. There are two parallel and converging efforts to bring Cloud Foundry to Kubernetes. Over the last few months, we have looked at KubeCF on this blog. KubeCF and Quarks are an attempt to bring old Cloud Foundry to new Kubernetes, by porting the BOSH releases to Kubernetes.

Today, we look at the new project cf-for-k8s, a new effort to build an all-new Cloud Foundry that is native to Kubernetes. Does it work? Can you use it yet? Are the VMware Tanzu people on to something golden? Let's find out.

Update: the cf-for-k8s release team has published a blog post that covers a lot of important what/how/why.

Update: I've confirmed the tutorial included in this article works for cf-for-k8s v0.2.0 released early May 2020.

What is cf-for-k8s and why is it version 0.1.0?

The Pivotal-cum-VMware Tanzu staff who work on Cloud Foundry have pivoted, finally, and are full steam ahead moving Cloud Foundry to Kubernetes. What's changed, and what are they trying to do?

Bits are being thrown away (gorouter, loggregator), new bits are being written or rewritten, and importantly, a lot of bits are being used from the wider Kubernetes ecosystem, including Istio, Cloud Native Buildpacks, kpack, fluentd, metacontroller, plus all the solid gold that comes bundled with Kubernetes.

The project's new networking diagram shows the new pieces (in green) integrating with the Cloud Foundry components Cloud Controller (the CF API), Eirini (Cloud Foundry apps running as Kubernetes pods), and Kubernetes itself.

The cf-for-k8s repository is the release management tool to bring all the old and new components together so they "Just Work". Recently they released v0.1.0 which gives us our first look at the whirlwind efforts of 2020. How's it looking? It's looking great. I'm very excited.

VMware customers will run a commercial version of cf-for-k8s called Tanzu Application Service (TAS), which is now available for initial testing. The installation instructions for TAS look very similar to the installation instructions for open source cf-for-k8s, so you'll learn a lot from this blog post.

Google Kubernetes

We need a Kubernetes. In this article, I'll do everything on Google Cloud: the Kubernetes cluster, the static IP, and the container registry for Docker images.

Let's use our time wisely and run a simple one-liner to provision our Google Kubernetes cluster (GKE):

git clone https://github.com/starkandwayne/bootstrap-kubernetes-demos
cd bootstrap-kubernetes-demos
export PATH=$PWD/bin:$PATH
bootstrap-kubernetes-demos up --google

This handy command will invoke a long gcloud container clusters create command with many flags set, will wait for the cluster to be provisioned, will set up local kubectl credentials, and will set up a cluster-admin role.
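For the curious, under the hood the script runs something roughly of this shape (the cluster name and flags here are illustrative placeholders, not the script's exact invocation; read bootstrap-kubernetes-demos itself for the real command):

gcloud container clusters create my-cf-demo \
  --region "$(gcloud config get-value compute/region)" \
  --num-nodes 1 \
  --machine-type n1-standard-2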

When it's finished, check that kubectl is pointing to our new cluster.

$ kubectl get nodes
NAME                                          STATUS   ROLES    AGE     VERSION
gke-drnic-y5p82y-default-pool-9ls0   Ready    <none>   6m58s   v1.15.11-gke.9
gke-drnic-y5p82y-default-pool-t4dv   Ready    <none>   6m58s   v1.15.11-gke.9
gke-drnic-y5p82y-default-pool-t8f3   Ready    <none>   6m58s   v1.15.11-gke.9

Static IP

Next, we want a static IP. We need an IP address for incoming traffic to our Cloud Foundry API and to our apps. And we want it to be static so it doesn't change each time we tear down and rebuild our cluster.

$ gcloud compute addresses create cf-for-k8s --region "$(gcloud config get-value compute/region)" --format json | jq -r '.[].address'
34.83.153.141

DNS

Add a wild-card DNS A record entry to your IP address. In my example, I'm setting up *.cf.drnic.starkandwayne.com to my IP using CloudFlare.
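Once the record has propagated, any hostname under the wildcard should resolve to your static IP. A quick sanity check (any subdomain will do):

dig +short anything.cf.drnic.starkandwayne.com
# expect the static IP, e.g. 34.83.153.141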

Configuring Cloud Foundry

With five minutes remaining, we will now host Cloud Foundry on our GKE cluster.

The running Cloud Foundry will use Kubernetes to run apps, build source code into images, route HTTP traffic, stream logs, and more.

Cloud Foundry is now a Kubernetes deployment. We need to generate some random secrets, build the YAML, and deploy it.

git clone https://github.com/cloudfoundry/cf-for-k8s
cd cf-for-k8s
mkdir -p config-values tmp
./hack/generate-values.sh -d cf.drnic.starkandwayne.com > config-values/cf-values.yml

This wrapper script uses the bosh CLI for its handy ability to generate secrets and certificates into a YAML file.

The cf-values.yml will include passwords and x509 certificates based on your system domain. If you change your system domain, remember to regenerate your certificates.

To use your static IP, create the file config-values/static-ip.yml:

#@data/values
---
istio_static_ip: 34.83.153.141

Google Container Registry

The running Cloud Foundry will be able to build developers' source code into OCI/Docker images. We will use Google Container Registry (GCR) to store and retrieve these images.

Your Cloud Foundry developer users won't see or touch GCR or the images. Only Cloud Foundry and Cloud Foundry platform admins (you) will have access to it.

To create a Google service account and grant it permission to read/write to GCR (thanks to https://stackoverflow.com/a/56605528/36170 for the gcloud commands):

export PROJECT="$(gcloud config get-value project)"
export KEY_NAME=cf-for-k8s
gcloud iam service-accounts create ${KEY_NAME} --display-name ${KEY_NAME}
gcloud iam service-accounts keys create --iam-account ${KEY_NAME}@${PROJECT}.iam.gserviceaccount.com tmp/gcloud-key.json
gcloud projects add-iam-policy-binding ${PROJECT} --member serviceAccount:${KEY_NAME}@${PROJECT}.iam.gserviceaccount.com --role roles/storage.admin

A secret tmp/gcloud-key.json file is created.
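Before wiring the key into Cloud Foundry, you can sanity-check that it works against GCR, using the standard _json_key login method for Google Container Registry:

cat tmp/gcloud-key.json | docker login -u _json_key --password-stdin gcr.io
# expect: Login Succeeded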

To use GCR with your service account key, create a file config-values/app-registry.yml:

#@data/values
---
app_registry:
   hostname: gcr.io
   repository: gcr.io/drnic-257704/cf-for-k8s/cf-workloads
   username: _json_key
   password: |-
     {
       ... paste in contents of tmp/gcloud-key.json ...
     }

Note: it is password: |- and not password:. The vertical bar means that the subsequent lines are part of a multiline string, not YAML/JSON data.

The example repository: value above is composed of:

  • gcr.io – the hostname for GCR
  • drnic-257704 – my Google Cloud project ID
  • cf-for-k8s/cf-workloads – an arbitrary path where Cloud Foundry will store all images for all applications that are deployed.

You can also generate this file by running the following command (thanks Ruben Koster for the snippet):

cat << YAML > config-values/app-registry.yml
#@data/values
---
app_registry:
  hostname: gcr.io
  repository: gcr.io/$(gcloud config get-value project)/cf-for-k8s/cf-workloads
  username: _json_key
  password: |
$(cat tmp/gcloud-key.json | sed 's/^/    /g')
YAML

We will revisit GCR in the browser later when we've deployed our first application, and Cloud Foundry stores its first image in GCR.

Starting Cloud Foundry

At this point, we are inside the cf-for-k8s repository, and inside the config-values folder are three files: cf-values.yml, static-ip.yml, and app-registry.yml. Inside the tmp folder is gcloud-key.json.

We are not going to use Helm, which you might be familiar with for composing YAML from values. Instead, we will use ytt. It has many nice features for building large sets of YAML documents, such as for a large Kubernetes deployment.

To build all the YAML that describes our Cloud Foundry deployment:

ytt -f config -f config-values

At the time of writing, for cf-for-k8s v0.1.0, this produces 14,000 lines of YAML. Enjoy.
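If you want to verify that claim for yourself, count the lines:

ytt -f config -f config-values | wc -l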

You could now pipe this ytt command into your favourite "run this YAML on Kubernetes" tool.

For example, kubectl apply:

ytt -f config -f config-values | kubectl apply -f -

But, let's try another new tool, kapp, that will progressively show us the success of almost 250 Kubernetes resources.

kapp deploy -a cf -f <(ytt -f config -f config-values)

This will produce a long list of resources to be created (since we are deploying Cloud Foundry for the first time), and conclude with a yes/no prompt:

Op:      245 create, 0 delete, 0 update, 0 noop
Wait to: 245 reconcile, 0 delete, 0 noop
Continue? [yN]:

Press y to continue.

The kapp deployment tool will then wait for all 36 CRDs to be installed, then move on to wait for 42 namespaces, cluster roles, policies, and webhooks to be successfully installed. Then it waits for the remainder of the 166 resources to complete.

It's quite nice to see kapp progressively show the large deployment in progress.
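Because kapp tracks everything under the application name cf, you can come back later and review what was deployed without re-running ytt:

kapp inspect -a cf    # list the Kubernetes resources kapp manages for the "cf" app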

Accessing your Cloud Foundry for the first time

To access your Cloud Foundry you will need a few things:

  • The cf CLI, which you can download from https://github.com/cloudfoundry/cli (please upgrade if you already have it installed)
  • The API URL for your Cloud Foundry. This is https://api.<your system domain>. For me, this is https://api.cf.drnic.starkandwayne.com.
  • The randomly generated admin secret password stored in config-values/cf-values.yml.

To get your system domain and secret password, look at the top of cf-values.yml:

$ head config-values/cf-values.yml
#@data/values
---
system_domain: "cf.drnic.starkandwayne.com"
app_domains:
#@overlay/append
- "cf.drnic.starkandwayne.com"
cf_admin_password: wough8vdboikelwggbkw

Now we run the cf login command to target and authenticate as the built-in admin user:

cf login -a https://api.cf.drnic.starkandwayne.com --skip-ssl-validation \
  -u admin -p wough8vdboikelwggbkw

You could also get fancy with bosh int --path to pluck values from the cf-values.yml inline:

cf login \
  -a "https://api.$(bosh int config-values/cf-values.yml --path /system_domain)" \
  --skip-ssl-validation \
  -u admin \
  -p "$(bosh int config-values/cf-values.yml --path /cf_admin_password)"

Create an org and a space. If you're new to Cloud Foundry, think of organizations as collections of users and apps that share domain names and billing information. Spaces are akin to Kubernetes namespaces – a useful way to isolate things and allow them to reuse names.

cf create-org test-org
cf create-space -o test-org test-space
cf target -o test-org -s test-space

To deploy a sample NodeJS app we use the famous cf push command (please upgrade your cf CLI first):

cf push test-node-app -p tests/smoke/assets/test-node-app

In another terminal you can watch the source code being converted into an OCI/Docker image using Cloud Native Buildpacks & kpack:

$ cf logs test-node-app
...
OUT Node Engine Buildpack 0.0.158
OUT Resolving Node Engine version
OUT Candidate version sources (in priority order):
OUT -> ""
OUT
OUT Selected Node Engine version (using ): 10.19.0
...

The tests/smoke/assets/test-node-app folder only contains a trivial package.json and server.js HTTP application. Cloud Foundry takes these files, combines them with a secure version of NodeJS and any npm dependencies, and creates an OCI/Docker image. It then runs the image and routes HTTP traffic.

The hostname for the application will be shown. For me it was http://test-node-app.cf.drnic.starkandwayne.com.
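A quick way to confirm the route that Cloud Foundry assigned is a standard cf CLI command (nothing specific to cf-for-k8s):

cf app test-node-app    # shows routes, instances, and resource usage for the app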

Where are my application images?

The cf push command converts the simple NodeJS app into an OCI/Docker image before running the application as a set of pods. The person running cf push doesn't care about Docker images, Pods, or Istio HTTP traffic routing. They just want their app built and running. But you care. So where are the Docker images?

Visit the Google Container Registry. If you used a gcr.io registry URL similar to mine, you'll navigate to cf-for-k8s and then cf-workloads to find the image created by kpack.

If you have Docker locally, and your Docker is authenticated to Google Container Registry, you can pull this image and run it locally. All images are runnable and HTTP traffic is on port 8080.
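If your local Docker is not yet authenticated to GCR, one way to set that up is via the gcloud CLI (assuming it is installed and logged in to the same project):

gcloud auth configure-docker    # adds gcr.io credential helpers to ~/.docker/config.json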

$ docker run -ti -e PORT=8080 -p 8080:8080 \
    gcr.io/drnic-257704/cf-for-k8s/cf-workloads/8f3c28bc-27c8-4a53-913f-4bcadb45ee2b
...
> test-node-app@0.0.1 start /workspace
> node server.js
Console output from test-node-app

In another terminal you can access port 8080:

$ curl http://localhost:8080
Hello World

Tanzu Application Service for Kubernetes

The open source project cf-for-k8s, and all of its feed-in projects, are the raw materials for VMware's new Tanzu Application Service for Kubernetes (TAS). You can download v0.1.0 today and run it with instructions very similar to those above.

Dan Baskette has written up instructions for deploying TAS to Kubernetes in Docker (kind). They should feel very familiar after this blog post, and I've updated this blog post to bring it more in line with the ideas in TAS and Dan's tutorial (I renamed the tmp folder to config-values).

What's next to learn?

I've had my first taste of ytt for wrangling large amounts of YAML and I quite like it. I have gone through all of the examples at https://get-ytt.io/ and have spent time on the #k14s Kubernetes Slack channel (join at https://slack.k8s.io/) asking many questions.

I also quite like the output of kapp over the fire-and-forget-and-OMG-it-didn't-eventually-work style of kubectl apply.

I do need to get better at Istio, which is a core part of Cloud Foundry going forward. The networking components, and how they work, are documented in https://github.com/cloudfoundry/cf-k8s-networking.

The build system for converting application source code into OCI/Docker images is called Cloud Native Buildpacks https://buildpacks.io/, and the subsystem included in Cloud Foundry to do this is kpack. Learn more about kpack on this very blog. We investigated Cloud Native Buildpacks and kpack a year ago.

The path towards cf-for-k8s 1.0 was sketched out in early March, and includes discussion on:

  • What kind of feature parity are we targeting for networking in CF-for-K8s vs CF-for-BOSH?
  • Current dependency on Istio. What's up with that?
  • Migrating from CF/Diego to cf-for-k8s without downtime

There is so much more to learn and more to explain. It is an exciting future for the Cloud Foundry ecosystem – for the platform operators, vendors, contributors, and especially the developer users who love cf push.

If you would like more blog posts or YouTube videos explaining what is going on with Cloud Foundry on Kubernetes, please let us know in the comments.

Kpack, oh kpack, wherefore art thou build logs? https://www.starkandwayne.com/blog/kpack-viewing-build-logs/ (24 Sep 2019)

Any CI/CD is just like a serverless platform or PaaS: you run other people's code. The difference? With a platform you expect the code to work.

With CI/CD you're waiting for things to fail. So you can fix them. Until they work. Hopefully.

And to fix them you need logs.

kpack is a new open source build system from Pivotal that specializes in applying Cloud Native Buildpacks to my apps and pushing out easy-to-use OCI/Docker images. kpack runs entirely within Kubernetes, and allows me to build OCI images from a git repo branch. A new commit results in a new OCI image.

That is unless the buildpacks fail upon my heathen code. Then I must debug thy code, reduce my mistakes to distant memories, and request forgiveness from my build system overlords.

So kpack, where are my logs?

In this article we will look at both kpack's own logs CLI, and how you can find the raw logs from the Kubernetes init containers used to run Cloud Native Buildpacks. I learned about init containers and you can too.

First, let's set up kpack and build something

To install kpack v0.0.4 into a clean kpack namespace (there is a Cleanup section at the end):

kubectl apply -f https://github.com/pivotal/kpack/releases/download/v0.0.4/release-0.0.4.yaml

We will build my sample NodeJS application using a collection of new buildpacks from the Cloud Foundry Buildpacks team, which includes buildpacks for NodeJS applications:

ns=demo
kubectl create ns $ns
kubectl apply -n $ns -f https://raw.githubusercontent.com/starkandwayne/bootstrap-gke/ecbdfc0900ecb58d02be302d968d9d074c59803e/resources/kpack/builder-cflinuxfs3.yaml

Now we need a service account that includes permissions to publish our OCI/docker images to a registry.

Find a sample serviceaccount YAML at https://gist.github.com/drnic/d35eddbef009b2eb8495218a29d4e263. Make your own YAML file, and install it:

kubectl apply -n $ns -f my-serviceaccount.yaml

Finally, to ask kpack to continuously watch and build my sample NodeJS application, create a kpack Image file kpack-image.yaml containing the name under which you wish to publish the Docker image:

apiVersion: build.pivotal.io/v1alpha1
kind: Image
metadata:
  name: sample-app-nodejs
spec:
  builder:
    name: cflinuxfs3-builder
    kind: Builder
  serviceAccount: service-account
  cacheSize: "1.5Gi"
  source:
    git:
      url: https://github.com/starkandwayne/sample-app-nodejs.git
      revision: master
  tag: <my organization>/<my image name>:latest

Apply this file to your namespace and you're done:

kubectl apply -n $ns -f kpack-image.yaml

Your kpack Image will automatically detect the latest Git commit on the sample repository, create a kpack Build, and start doing its Cloud Native Buildpacks magic.

Unless it doesn't. You have no idea. kpack is "native to Kubernetes" which I think means "no UI" and "figure out for yourself if it works".

Logs, damn it

The latest kpack releases include a logs CLI to allow you to watch or replay the logs for a build (git repo + builder/buildpacks -> Docker image). Download the one for your OS, put it in your $PATH, make it executable, and we can see the logs from our first build:

logs -image sample-app-nodejs -namespace $ns -build 1

The output will include the magic of Cloud Native Buildpacks applied to our sample NodeJS app:

...
-----> Node Engine Buildpack 0.0.49
  Node Engine 10.16.3: Contributing to layer
    Downloading from https://buildpacks.cloudfoundry.org/dependencies/node/node-10.16.3-linux-x64-cflinuxfs3-33294d36.tgz
...
-----> Yarn Buildpack 0.0.28
  Yarn 1.17.3: Contributing to layer
...
*** Images:
      starkandwayne/sample-app-nodejs:latest - succeeded
...

From where doth logs cometh?

So we have a kpack logs CLI, but what does it do? Where are these logs?

Take a moment to brush up on init containers. You are now qualified to understand how kpack implements each Build - it creates a pod with a long ordered sequence of init containers. Each step of the Cloud Native Buildpack lifecycle (detect, build, export, etc) is implemented as an independent init container.

Init containers for a pod are run one at a time, until they complete, and the pod's main containers only run once all init containers have finished. A kpack Build is implemented as a pod whose main container does nothing; it's all implemented with init containers.

The STDOUT/STDERR of each init container are the logs we are looking for.

To see the logs for an init container we use the kubectl logs -c <container> flag.

For example, to see the build stage logs (most likely where you will find bugs in how buildpacks are running against your application source code) we'd run:

kubectl logs <build-pod> -c build

The kpack logs CLI is simply discovering the build pod, and displaying the logs for each init container in the correct order. Neat.

The init containers map to the buildpack lifecycle steps:

$ kubectl get pods -n $ns
NAME                                        READY   STATUS      RESTARTS   AGE
sample-app-nodejs-build-1-wnlxs-build-pod   0/1     Completed   0          2m38s
$ pod=sample-app-nodejs-build-1-wnlxs-build-pod
$ kubectl get pod $pod -n $ns -o json | jq -r ".spec.initContainers[].name"
creds-init
source-init
prepare
detect
restore
analyze
build
export
cache

So to get the logs for a complete kpack Build, we just look up the logs for each init container in order.

Enter xargs to allow us to invoke kubectl logs -c <init-container> on each named container above:

kubectl get pod $pod -n $ns -o json | \
  jq -r ".spec.initContainers[].name" | \
  xargs -L1 kubectl logs $pod -n $ns -c
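Putting it together, here is a small variant that looks up the build pod by its name prefix and then dumps each init container's logs in order (this assumes the pod naming pattern shown above; adjust the grep pattern if your Image has a different name):

pod="$(kubectl get pods -n $ns -o name | grep sample-app-nodejs-build- | head -n1 | sed 's|^pod/||')"
kubectl get pod $pod -n $ns -o json | \
  jq -r ".spec.initContainers[].name" | \
  xargs -L1 kubectl logs $pod -n $ns -c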

stern shows all the logs

Another way to view the logs is the stern CLI, which is a very handy way to view logs of pods with multiple containers:

stern $pod -n $ns --container-state terminated

One current downside of stern for this task is that it does not show init container logs first, in the correct order, so debugging them may be confusing.

Cleanup

Delete our demo namespace to remove the kpack image, builds, and pods:

kubectl delete ns demo

To remove kpack namespace and custom resource definitions:

kubectl delete -f https://github.com/pivotal/kpack/releases/download/v0.0.4/release-0.0.4.yaml

Investigating kpack – Continuously Updating Docker Images with Cloud Native Buildpacks https://www.starkandwayne.com/blog/investigating-kpack-automatically-updating-kubernetes-pods-with-buildpacks/ (2 Sep 2019)

Docker images don't grow on trees, but you shouldn't buy them from Etsy either.

What I mean is, you don't want your company running on bespoke artisan Docker images based on source code and upstream dependencies that you can't reproduce 50 times a day, and can't keep continually updated and secure for the next 10 years.

Future You does not want artisan Etsy docker images. Future You wants you to use a build system that will still exist in 10 years.

Cloud Native Buildpacks are part of the answer - their heritage from Heroku and Cloud Foundry means they are already almost a decade old, and almost guaranteed to still be maintained and secure a decade from now. This is critical to the hopes, dreams, and happiness of Future You.

kpack is a Kubernetes native system to automatically, continuously convert your applications or build artifacts into runnable Docker images.

As a tribute to kpack being Kubernetes native, in lieu of me knowing what that means, I will use the word native a lot in this article. English is my native language.

Starting with pack

Before we get to kpack, let's visit the pack CLI from which kpack derives its name.

Future You will be celebrating that you built all your Docker images by combining your Git repositories with the latest Cloud Native Buildpacks.

Current You will do this using the pack CLI.

An example walk-thru of this simple process is at https://buildpacks.io/docs/app-journey/.

git clone https://github.com/buildpack/sample-java-app
cd sample-java-app
pack build myapp
docker run --rm -p 8080:8080 myapp

The pack CLI will start a process on your Docker daemon that automatically discovers the dependencies required to build your application (Java & Maven) and to run it (Java). You didn't need any of these dependencies on your local machine, only pack. Fabulous.

But where will you run pack build for your own production applications, and what will trigger pack build to run when new Git commits are pushed?

Good questions. You could set up a bespoke CI system to watch your Git repos, watch for updates to buildpacks, and run pack build automatically.

Or you could run kpack, configure it for each application Git repo, and walk away forever.

Getting Started with kpack on Kubernetes

kpack uses the same CNB lifecycle system as the pack CLI, combined with the ability to watch for changes in both the source Git repository and the upstream buildpacks. If anything changes, your application is re-built and a new Docker image is created.

Excellent, let's get started.

Install v0.0.3 of kpack into your Kube cluster:

kubectl apply -f <(curl -L https://github.com/pivotal/kpack/releases/download/v0.0.3/release.yaml)

It installs many CRDs, so you know it's good:

$ kubectl api-resources --api-group build.pivotal.io
NAME              SHORTNAMES                    APIGROUP           NAMESPACED   KIND
builders          cnbbuilder,cnbbuilders,bldr   build.pivotal.io   true         Builder
builds            cnbbuild,cnbbuilds,bld        build.pivotal.io   true         Build
images            cnbimage,cnbimages            build.pivotal.io   true         Image
sourceresolvers                                 build.pivotal.io   true         SourceResolver

Download the kpack project for the sample YAML files and the logs CLI (currently kpack does not use go modules, so I am installing into $GOPATH):

git clone https://github.com/pivotal/kpack \
   $GOPATH/src/github.com/pivotal/kpack
cd $GOPATH/src/github.com/pivotal/kpack
dep ensure
go install ./cmd/logs

Ok, we have kpack running "natively" (we don't know what that word means) in Kubernetes, and have a logs command ready to stream some build logs later on.

Building the first application

A pack Builder is a collection of Cloud Native Buildpacks. One of these Builders already exists and is constantly updated with the latest buildpacks, which in turn maintain the latest secure versions of all dependencies. All these wonderful buildpacks are included in the Docker image cloudfoundry/cnb.

We need to tell kpack which upstream Builder image we want to use.

To be fair, you personally don't know what Builder image you want to use, and kpack 0.0.3 does not create a default Builder, so you need to create it, even though you don't know what it is. But perhaps this Builder should have just been created for you when you installed kpack? Anyway, today you need to create a Builder resource to point to the upstream Docker image that contains all the magical buildpacks.

Let's apply the sample Builder which will work with our sample Java applications just nicely:

$ kubectl apply -f samples/builder.yaml
$ kubectl get builds,images,builders,sourceresolvers
NAME                                      AGE
builder.build.pivotal.io/sample-builder   3s
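If you're curious what you just applied, the sample Builder is tiny. A rough sketch is below; the field names are from memory, so treat samples/builder.yaml in your checkout as authoritative:

$ cat samples/builder.yaml
apiVersion: build.pivotal.io/v1alpha1
kind: Builder
metadata:
  name: sample-builder
spec:
  image: cloudfoundry/cnb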

Create a service account for your docker registry and git host. The various samples files assume the ServiceAccount is called service-account, and references secrets for a Git host and a Docker image registry. In the example below I describe my GitHub and Docker Hub registry basic auth secrets.

---
apiVersion: v1
kind: Secret
metadata:
  name: basic-docker-user-pass
  annotations:
    build.pivotal.io/docker: index.docker.io
type: kubernetes.io/basic-auth
stringData:
  username: drnic
  password: ...
---
apiVersion: v1
kind: Secret
metadata:
  name: basic-git-user-pass
  annotations:
    build.pivotal.io/git: https://github.com
type: kubernetes.io/basic-auth
stringData:
  username: drnic
  password: ....
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: service-account
secrets:
  - name: basic-docker-user-pass
  - name: basic-git-user-pass

Apply these secrets the Kubernetes native way with kubectl apply -f my-service-account.yml.

Java/Spring applications can be built from either source code or from a pre-build JAR. Let's do it with a JAR file first, hosted natively on the Internet, with the pre-drafted samples/image_from_blob_url.yaml YAML file.

If you are using a public Docker Hub account then you and I do not have permissions to create sample/image-from-jar, as specified in the sample file. You need to update the YAML file to edit the image name from sample/image-from-jar to <you>/kpack-image-from-jar.

...
spec:
  tag: drnic/kpack-image-from-jar

Apply the new Image and kpack will automatically commence creating the new Docker image, using a Java buildpack.

$ kubectl apply -f samples/image_from_blob_url.yaml
image.build.pivotal.io/sample created
$ kubectl get builds,images,builders,sourceresolvers
NAME                                          IMAGE   SUCCEEDED
build.build.pivotal.io/sample-build-1-xnkq6           Unknown
NAME                            LATESTIMAGE   READY
image.build.pivotal.io/sample                 Unknown
NAME                                      AGE
builder.build.pivotal.io/sample-builder   4m
NAME                                            AGE
sourceresolver.build.pivotal.io/sample-source   4s

Watching the image being built with buildpacks

To tail the logs, use the ./cmd/logs helper app previously installed above as logs:

logs -image sample

The output is similar to our pack build myapp command earlier. This time it is natively running on Kubernetes.

{"level":"info","ts":1566860410.5345762,"logger":"fallback-logger","caller":"creds-init/main.go:40","msg":"Credentials initialized.","commit":"002a41a"}
source-init:main.go:261: Successfully downloaded storage.googleapis.com/build-service/sample-apps/spring-petclinic-2.1.0.BUILD-SNAPSHOT.jar in path "/workspace"
2019/08/26 23:00:35 Unable to read "/root/.docker/config.json": open /root/.docker/config.json: no such file or directory
Trying group 1 out of 6 with 14 buildpacks...
======== Results ========
skip: Cloud Foundry Archive Expanding Buildpack
pass: Cloud Foundry OpenJDK Buildpack
skip: Cloud Foundry Build System Buildpack
pass: Cloud Foundry JVM Application Buildpack
pass: Cloud Foundry Apache Tomcat Buildpack
pass: Cloud Foundry Spring Boot Buildpack
pass: Cloud Foundry DistZip Buildpack
skip: Cloud Foundry Procfile Buildpack
skip: Cloud Foundry Azure Application Insights Buildpack
skip: Cloud Foundry Debug Buildpack
skip: Cloud Foundry Google Stackdriver Buildpack
skip: Cloud Foundry JDBC Buildpack
skip: Cloud Foundry JMX Buildpack
pass: Cloud Foundry Spring Auto-reconfiguration Buildpack
Cache '/cache': metadata not found, nothing to restore
Analyzing image 'index.docker.io/drnic/kpack-image-from-jar@sha256:4ede3a534f5de34372edf4eb026ef784aaf1c7a45e63a6e597083326a37be699'
Writing metadata for uncached layer 'org.cloudfoundry.openjdk:openjdk-jre'
Writing metadata for uncached layer 'org.cloudfoundry.springautoreconfiguration:auto-reconfiguration'
Cloud Foundry OpenJDK Buildpack 1.0.0-M9
  OpenJDK JRE 11.0.3: Reusing cached layer
Cloud Foundry JVM Application Buildpack 1.0.0-M9
  Executable JAR: Contributing to layer
    Writing CLASSPATH to shared
  Process types:
    executable-jar: java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
    task:           java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
    web:            java -cp $CLASSPATH $JAVA_OPTS org.springframework.boot.loader.JarLauncher
Cloud Foundry Spring Boot Buildpack 1.0.0-M9
  Spring Boot 2.1.6.RELEASE: Contributing to layer
    Writing CLASSPATH to shared
  Process types:
    spring-boot: java -cp $CLASSPATH $JAVA_OPTS org.springframework.samples.petclinic.PetClinicApplicatio
    task:        java -cp $CLASSPATH $JAVA_OPTS org.springframework.samples.petclinic.PetClinicApplicatio
    web:         java -cp $CLASSPATH $JAVA_OPTS org.springframework.samples.petclinic.PetClinicApplicatio
Cloud Foundry Spring Auto-reconfiguration Buildpack 1.0.0-M9
  Spring Auto-reconfiguration 2.7.0: Reusing cached layer
Reusing layers from image 'index.docker.io/drnic/kpack-image-from-jar@sha256:4ede3a534f5de34372edf4eb026ef784aaf1c7a45e63a6e597083326a37be699'
Reusing layer 'app' with SHA sha256:f640054e9917dc79f4d1c60d8c649032d4156a91b7a3b047e03cbbe3bb21f596
Reusing layer 'config' with SHA sha256:d4a0ae6271b134dd22f162c48b456abdae0c853c90adfe0d43734be09fa0c728
Reusing layer 'launcher' with SHA sha256:2187c4179a3ddaae0e4ad2612c576b3b594927ba15dd610bbf720197209ceaa6
Reusing layer 'org.cloudfoundry.openjdk:openjdk-jre' with SHA sha256:b4c9e176f3e59c28939bcbdf3cd8d8bcbd25dd396cffc831c50400bda14c8498
Reusing layer 'org.cloudfoundry.jvmapplication:executable-jar' with SHA sha256:4504416ffcfe48c04b303f209a71360ef054d759b7d5b7deae53d34542c066a2
Reusing layer 'org.cloudfoundry.springboot:spring-boot' with SHA sha256:84f04b234d761615aa79ea77b691fe6d2cee0f7921cc28d1d52eadd84774fab7
Reusing layer 'org.cloudfoundry.springautoreconfiguration:auto-reconfiguration' with SHA sha256:41658755805c0452025f24e92ea9c26f736c0661c478e8cd69f5d4b6bf9280b9
*** Images:
      drnic/kpack-image-from-jar - succeeded
      index.docker.io/drnic/kpack-image-from-jar:b1.20190826.225838 - succeeded
*** Digest: sha256:d936cb02755bc835018ba9283b763a1095856b4ef533ed1bac90ddb450dc82ca
Caching layer 'org.cloudfoundry.jvmapplication:executable-jar' with SHA sha256:4504416ffcfe48c04b303f209a71360ef054d759b7d5b7deae53d34542c066a2
Caching layer 'org.cloudfoundry.springboot:spring-boot' with SHA sha256:84f04b234d761615aa79ea77b691fe6d2cee0f7921cc28d1d52eadd84774fab7

Watching kpack-controller logs

If the logs command does nothing, perhaps there is an error in the kpack controller which is attempting to orchestrate your image build.

To watch the kpack controller logs try out this:

kubectl logs -n kpack \
   $(kubectl get pod -n kpack | grep Running | head -n1 | awk '{print $1}') \
   -f

Maybe you'll see the following error:

... {"error": "serviceaccounts \"service-account\" not found"}

You have forgotten to create your secrets and the wrapper service-account ServiceAccount from above. Once these are created, the kpack-controller will automatically resume the buildpack sequence.
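One way to keep an eye on progress without the logs CLI is to watch the Build resources themselves; the SUCCEEDED column moves on from Unknown once a build finishes:

kubectl get builds -w    # -w watches for changes; Ctrl-C to stop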

Docker image created

Once the Image has been built successfully, the LATESTIMAGE attribute is updated to reflect its status in the Docker Registry:

$ kubectl get images sample
NAME     LATESTIMAGE                                                        READY
sample   index.docker.io/drnic/kpack-image-from-jar@sha256:d936cb02755bc...   True

You can see the resulting image at https://hub.docker.com/r/drnic/kpack-image-from-jar/tags

Building an application from its Git repository

Let's create a new Image that will target a public Git repository containing a simple Spring application. This example is the same as our pack build myapp example – building the application image from source code – though the source code is fetched from a Git repository rather than from the local machine.

Create samples/kpack-image-from-git.yml, and remember to change spec.tag to a Docker image you can push to your Registry.

apiVersion: build.pivotal.io/v1alpha1
kind: Image
metadata:
  name: kpack-image-from-git
spec:
  tag: drnic/kpack-image-from-git
  builderRef: sample-builder
  serviceAccount: service-account
  source:
    git:
      url: https://github.com/buildpack/sample-java-app.git
      revision: master

Our Image will use the Git credentials (not required for this public Git repo) from service-account to fetch the repo, and the Docker Registry credentials to push the resulting Docker image.
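
The revision above tracks the master branch, so kpack will keep rebuilding as new commits land. My understanding is that a tag or commit SHA is also accepted here if you want to pin the build to a fixed point; a sketch, with a hypothetical tag:

...
  source:
    git:
      url: https://github.com/buildpack/sample-java-app.git
      revision: v1.0.0  # hypothetical tag; a full commit SHA also works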

To create the Image and watch the buildpack process in action:

kubectl apply -f samples/kpack-image-from-git.yml
logs -kubeconfig ~/.kube/config -image kpack-image-from-git

This time we see half of Maven being downloaded as the buildpack sequence first creates the JAR, and then creates the Docker image with everything necessary for our application to run in any Docker, Kubernetes, or Cloud Foundry environment that supports Docker images.

The resulting Image is again visible in the target Docker Registry. I created mine in the public Docker Hub at https://hub.docker.com/r/drnic/kpack-image-from-git/tags
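
Remember that kpack keeps watching both the Git repository and the upstream buildpacks, so a new commit on the tracked branch (in a repository you can push to) or a buildpack update will trigger another Build without any further action from you. A simple way to watch this happen:

# Watch new Build resources appear as kpack reacts to changes
kubectl get builds --watch

# Then stream the logs of the newest build for this image
logs -image kpack-image-from-git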

Running the Docker image

While we used kpack on Kubernetes to create the Docker image, we can now use that image anywhere that makes us happy.

For example, in Docker itself. Like the old days.

$ docker run -p 8080:8080 -e PORT=8080 drnic/kpack-image-from-git
Unable to find image 'drnic/kpack-image-from-git:latest' locally
latest: Pulling from drnic/kpack-image-from-git
...
Status: Downloaded newer image for drnic/kpack-image-from-git:latest
    |'-_ _-'|       ____          _  _      _                      _             _
    |   |   |      |  _ \        (_)| |    | |                    | |           (_)
     '-_|_-'       | |_) | _   _  _ | |  __| | _ __    __ _   ___ | | __ ___     _   ___
|'-_ _-'|'-_ _-'|  |  _ < | | | || || | / _` || '_ \  / _` | / __|| |/ // __|   | | / _ \
|   |   |   |   |  | |_) || |_| || || || (_| || |_) || (_| || (__ |   < \__ \ _ | || (_) |
 '-_|_-' '-_|_-'   |____/  \__,_||_||_| \__,_|| .__/  \__,_| \___||_|\_\|___/(_)|_| \___/
                                              | |
                                              |_|
:: Built with Spring Boot :: 2.1.3.RELEASE
...
2019-08-26 23:24:15.605  INFO 1 --- [nio-8080-exec-1] o.s.web.servlet.DispatcherServlet        : Completed initialization in 29 ms

We exposed the application on port 8080, so visit http://localhost:8080/ to see the running sample app!
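
Or, from another terminal, a quick smoke test (assuming nothing else is already listening on port 8080):

curl -s http://localhost:8080/ | head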

What happens now?

You've got what you always wanted – an always-up-to-date Docker image that contains your latest source code, combined with the latest, most secure dependencies.

If you're running your application on Docker, then ensure your application now uses the new image.

If you're running your application on Kubernetes, then ensure your pods now use the new image.
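
One minimal way to do that, sketched here with a hypothetical Deployment named kpack-app whose container is called app:

# Point the Deployment's container at the freshly built image
kubectl set image deployment/kpack-app app=index.docker.io/drnic/kpack-image-from-git:latest

# Watch the rollout replace the old pods
kubectl rollout status deployment/kpack-app

If you prefer to pin deployments to an exact build, use the digest shown in the LATESTIMAGE column of kubectl get images instead of the :latest tag.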

If you're running your application on Cloud Foundry, then cf push again.

cf push kpack-app -o drnic/kpack-image-from-git --random-route

Good times, the native way.

Thanks

Thanks to Stephen Levine and Matthew McNew for correcting some factual inaccuracies and for their recent fixes.

The post Investigating kpack – Continuously Updating Docker Images with Cloud Native Buildpacks appeared first on Stark & Wayne.

