Build Docker images inside your Kubernetes cluster with Knative Build

This is the fifth in a collection of articles as I figure out what’s what with Knative for Kubernetes. The full set of articles is:

Previously we’ve built, deployed, and routed traffic to our applications. But what if you just want to create container images (read: Docker images)? Let’s say you have a local project or Git repo and want a container image. Perhaps it has a Dockerfile, or perhaps you want the operatic glory of Cloud Foundry buildpacks to create your container images.

Fortunately we can do all of this with the Knative Build subsystem. It’s fast to install into any Kubernetes cluster, and easy to use.

This article does not assume you’ve followed the preceding four articles. It will guide you through installation, configuration, and your first builds with the Knative Build subsystem.


My personal reason for using Knative Build is that I have a local Spring application with a multi-stage Dockerfile and a Helm chart. I need to iterate on it: rebuild the Docker image, deploy the Helm chart, see if everything works. I’m not a Spring developer, so I approach this task like I would a Rubik’s Cube: keep changing things until it looks pleasing.

About the same time, Gareth Rushgrove was asking if Knative Build could be used standalone.

Fortunately, yes. It is easy to install standalone, and there is a knctl build subcommand to use it without a runtime application (such as we got from knctl deploy).

Assumptions

I’ll assume that you have provisioned Minikube or another Kubernetes cluster, and that your kubectl is targeting it.

Your Kubernetes cluster may need a cluster role/binding. For example, with GKE you’d run something like:

kubectl create clusterrolebinding cluster-admin-binding \
  --clusterrole=cluster-admin \
  --user=$(gcloud config get-value core/account)

Install knctl CLI

To make Knative Build delightful we will use the knctl CLI from Pivotal’s Dmitriy Kalinin. Either install it from a GitHub release, or install it from our Homebrew tap:

brew install starkandwayne/kubernetes/knctl

We will be using the knctl build create subcommand.

Install Knative Build standalone

You can install the Knative Build subsystem standalone into any Kubernetes cluster.

Either apply a specific GitHub release.yaml with kubectl apply:

kubectl apply -f https://github.com/knative/build/releases/download/v0.1.0/release.yaml

Or install all of Knative Build, Knative Serving, and Istio:

knctl install --exclude-monitoring

Or you can install the latest nightly build (which might not be backwards compatible with knctl):

kubectl apply -f https://storage.googleapis.com/knative-releases/build/latest/release.yaml --wait

Two pods represent the Knative Build subsystem:

$ kubectl get pods -n knative-build
NAME                                READY   STATUS    RESTARTS   AGE
build-controller-79d6cc9d57-47j2s   1/1     Running   0          1m
build-webhook-f97d479f9-zp48p       1/1     Running   0          1m
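
To sanity-check a fresh installation without knctl at all, you can submit a minimal Build directly with kubectl. This is only a sketch: the name and the busybox step are illustrative, not from this article’s project.

```yaml
# hello-build.yaml -- a minimal standalone Build (illustrative).
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  name: hello-build
spec:
  steps:
  # A single build step: run one container to completion.
  - name: hello
    image: busybox
    args: ["echo", "hello from Knative Build"]
```

Apply it with kubectl apply -f hello-build.yaml, then watch kubectl get builds until the build reports success in its status conditions.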

Once you’ve finished using Knative Build you can remove it by deleting the knative-build namespace and any Build resources:

kubectl delete ns knative-build
kubectl delete builds --all

Docker Registry credentials

In all Knative Build examples the end product of a Build sequence is a container image (the secret code word for "Docker image"). These images have to live somewhere, such as Docker Hub, GCP Container Registry, Azure Container Registry, or an on-premises/DIY registry like Harbor.

We need to configure Knative Build with our container registry location and credentials. We can use knctl basic-auth-secret create within each applicable Kubernetes namespace.

For Docker Hub, use the --docker-hub flag:

knctl basic-auth-secret create -s registry --docker-hub -u <username> -p <password>

For GCP Container Registry, use the --gcr flag and read the GCP documentation on Service Accounts and JSON Key Files:

knctl basic-auth-secret create -s registry --gcr -u _json_key -p "$(cat keyfile.json)"

For any other container registry without a convenience flag, use the --type and --url flags:

knctl basic-auth-secret create -s registry --type docker --url https://registry.domain.com/ -u <username> -p <password>

Each of these creates a Kubernetes secret named registry:

$ kubectl get secrets
NAME                  TYPE                                  DATA   AGE
default-token-d895q   kubernetes.io/service-account-token   3      30m
registry              kubernetes.io/basic-auth              2      30s

Next, map the container registry secret into a Kubernetes service account, which makes the credentials above available to the pods used by Knative Build.

knctl service-account create --service-account build -s registry

This maps down to a Kubernetes service account:

$ kubectl get serviceaccount
NAME      SECRETS   AGE
build     2         37s
default   1         3h

This build serviceaccount will now be passed to the Knative Build subsystem each time we create container images, so it can push them to our registry.
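
Under the hood, the two knctl commands above boil down to ordinary Kubernetes resources. Below is a sketch of roughly what gets created; the annotation key and the field values are assumptions based on Knative Build’s secret conventions, so compare against kubectl get secret registry -o yaml if in doubt.

```yaml
# Approximately what knctl creates (values illustrative).
apiVersion: v1
kind: Secret
metadata:
  name: registry
  annotations:
    # Tells Knative Build which registry these credentials apply to.
    build.knative.dev/docker-0: https://index.docker.io/v1/
type: kubernetes.io/basic-auth
stringData:
  username: <username>
  password: <password>
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: build
secrets:
- name: registry
```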

Upload local directory with a Dockerfile

Clone a sample Go application and build it from its local directory into a Docker Hub image:

DOCKER_IMAGE=index.docker.io/<my-org>/knative-simple-app
git clone https://github.com/cppforlife/simple-app
cd simple-app
knctl build create \
    --build simple-app --generate-name \
    --directory=$PWD \
    --service-account build \
    --image ${DOCKER_IMAGE:?required}

The --service-account build flag tells Knative Build to use the build serviceaccount in our default namespace, which in turn references the registry secret.

The output shows that your local folder (--directory=$PWD) is uploaded, and that Knative Build then uses the Dockerfile in the project folder to describe how the container image is built.

Name  simple-app-sjm5c
[2018-11-07T09:39:24+10:00] Uploading source code...
[2018-11-07T09:41:05+10:00] Finished uploading source code...
Watching build logs...
build-step-build-and-push | INFO[0000] Downloading base image ruby:2.5
build-step-build-and-push | INFO[0001] Executing 0 build triggers
build-step-build-and-push | INFO[0020] Taking snapshot of full filesystem...
build-step-build-and-push | INFO[0067] WORKDIR /app
build-step-build-and-push | INFO[0067] cmd: workdir
build-step-build-and-push | INFO[0067] Changed working directory to /app
build-step-build-and-push | INFO[0067] Creating directory /app
build-step-build-and-push | INFO[0067] Taking snapshot of files...
build-step-build-and-push | INFO[0067] EXPOSE 8080
build-step-build-and-push | INFO[0067] cmd: EXPOSE
build-step-build-and-push | INFO[0067] Adding exposed port: 8080/tcp
build-step-build-and-push | INFO[0067] Using files from context: [/workspace]
build-step-build-and-push | INFO[0067] COPY . /app
build-step-build-and-push | INFO[0067] Taking snapshot of files...
build-step-build-and-push | INFO[0067] RUN bundle install
build-step-build-and-push | INFO[0067] cmd: /bin/sh
build-step-build-and-push | INFO[0067] args: [-c bundle install]
build-step-build-and-push | Fetching gem metadata from https://rubygems.org/.........
build-step-build-and-push | Using bundler 1.17.1
build-step-build-and-push | Fetching mustermann 1.0.3
build-step-build-and-push | Installing mustermann 1.0.3
build-step-build-and-push | Fetching puma 3.12.0
build-step-build-and-push | Installing puma 3.12.0 with native extensions
build-step-build-and-push | Fetching rack 2.0.6
build-step-build-and-push | Installing rack 2.0.6
build-step-build-and-push | Fetching rack-protection 2.0.4
build-step-build-and-push | Installing rack-protection 2.0.4
build-step-build-and-push | Fetching tilt 2.0.8
build-step-build-and-push | Installing tilt 2.0.8
build-step-build-and-push | Fetching sinatra 2.0.4
build-step-build-and-push | Installing sinatra 2.0.4
build-step-build-and-push | Bundle complete! 2 Gemfile dependencies, 7 gems now installed.
build-step-build-and-push | Bundled gems are installed into `/usr/local/bundle`
build-step-build-and-push | INFO[0073] Taking snapshot of full filesystem...
build-step-build-and-push | INFO[0079] CMD ["bundle", "exec", "rackup", "-p", "8080", "-o", "0.0.0.0", "-s", "puma"]
build-step-build-and-push | ERROR: logging before flag.Parse: E1106 23:42:29.024544       1 metadata.go:142] while reading 'google-dockercfg' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg
build-step-build-and-push | ERROR: logging before flag.Parse: E1106 23:42:29.030329       1 metadata.go:159] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url
build-step-build-and-push | 2018/11/06 23:42:29 existing blob: sha256:31d57ef7a684bffc0decadb0c268cf3c9b582271caab5565148ed4a87d7c4167
build-step-build-and-push | 2018/11/06 23:42:29 existing blob: sha256:72744d0a318b0788001cc4f5f83c6847ba4b753307fadd046b508bbc41eb9e29
build-step-build-and-push | 2018/11/06 23:42:29 existing blob: sha256:4eaef54651ae4849ae525c92042738b1a3901f2712b534229bc6f4fec05ccf7a
build-step-build-and-push | 2018/11/06 23:42:29 existing blob: sha256:193a6306c92af328dbd41bbbd3200a2c90802624cccfe5725223324428110d7f
build-step-build-and-push | 2018/11/06 23:42:29 existing blob: sha256:a587a86c9dcb9df6584180042becf21e36ecd8b460a761711227b4b06889a005
build-step-build-and-push | 2018/11/06 23:42:29 existing blob: sha256:9ba5073d9663574dfee5793c9b2dc34f2ab9069c3efe26d048c9ba3da11c68c8
build-step-build-and-push | 2018/11/06 23:42:29 existing blob: sha256:bc9ab73e5b14b9fbd3687a4d8c1f1360533d6ee9ffc3f5ecc6630794b40257b7
build-step-build-and-push | 2018/11/06 23:42:29 existing blob: sha256:e5c3f8c317dc30af45021092a3d76f16ba7aa1ee5f18fec742c84d4960818580
build-step-build-and-push | 2018/11/06 23:42:30 pushed blob sha256:1f7e7168549c7afcdaf194a74e3b73cd7f49c8aab5669b1acdc1a9ca7c79cd93
build-step-build-and-push | 2018/11/06 23:42:30 pushed blob sha256:8a0aa99a10858311ace1d9424010db0b82e9d31f85143252eb6d0019a0c1a423
build-step-build-and-push | 2018/11/06 23:42:31 pushed blob sha256:4e96c966b4a3ea75562449718bb27806916a561a11ec323a51bb67ce8870a974
build-step-build-and-push | 2018/11/06 23:42:33 pushed blob sha256:ed422cb4630f3d638e7365f161a6ece2625b2266f176327857268e0f8a28e7cb
build-step-build-and-push | 2018/11/06 23:42:33 index.docker.io/drnic/ruby-with-kubernetes-service-catalog:latest: digest: sha256:89b7630c103f943cffdc0e2158db2274c28abbb212e4d9f875e7b3423c6fac31 size: 2062
nop | Nothing to push
Succeeded

You can now start using your new container image for Kubernetes deployments, or for any other purpose.
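
For the curious, knctl build create generates a Build resource behind the scenes. A rough sketch is below; the kaniko executor step matches the build-and-push output above, but the exact step names and the source-upload mechanism vary by knctl version, so treat this as an approximation rather than knctl’s exact output.

```yaml
# Sketch of the generated Build (approximate; not knctl's exact output).
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  generateName: simple-app-
spec:
  serviceAccountName: build   # carries the registry secret
  # knctl also attaches a source section for the uploaded directory,
  # omitted here because its shape is version-specific.
  steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor
    args:
    - --dockerfile=/workspace/Dockerfile
    - --destination=index.docker.io/<my-org>/knative-simple-app
```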

Debugging Knative Build

Currently knctl build create will show errors that occur during the formal build sequence (say, an error during docker build) but not errors that occur before it, such as a mistake in your serviceaccount or registry credentials. You can find yourself just sitting there watching Waiting for new revision to be created… and nothing more.

One option for debugging is to use kail to stream the logs from the Knative Build subsystem:

kail -n knative-build

Then you need to stare deep into the mess of logs and look for errors, such as: "msg":"Failed the resource specific validation{error 25 0 serviceaccounts \"build\" not found}"

Build using Buildpacks

The Cloud Foundry and Heroku approach to building container images is personally very satisfying, and fortunately for us all it is supported by Knative Build via a custom build template.

First, register the build template with the name "buildpack" into your active namespace:

kubectl -n default apply -f https://raw.githubusercontent.com/knative/build-templates/master/buildpack/buildpack.yaml

To use the custom build template, add the --template buildpack flag. Any additional environment variables used by the build template (or the buildpack sequence in this case) can be passed with --template-env NAME=value.

For example, the Cloud Foundry Go Buildpack requires $GOPACKAGENAME (see docs):

knctl build create \
    --build simple-app --generate-name \
    --directory=$PWD \
    --service-account build \
    --image ${DOCKER_IMAGE:?required} \
    --template buildpack \
    --template-env GOPACKAGENAME=main

The output is the same as you’d see from a Cloud Foundry buildpack staging an application:

build-step-build | -----> Go Buildpack version 1.8.26
build-step-build | -----> Installing godep 80
build-step-build |        Download [https://buildpacks.cloudfoundry.org/dependencies/godep/godep-v80-linux-x64-cflinuxfs2-06cdb761.tgz]
build-step-build | -----> Installing glide 0.13.1
build-step-build |        Download [https://buildpacks.cloudfoundry.org/dependencies/glide/glide-v0.13.1-linux-x64-cflinuxfs2-aab48c6b.tgz]
build-step-build | -----> Installing dep 0.5.0
build-step-build |        Download [https://buildpacks.cloudfoundry.org/dependencies/dep/dep-v0.5.0-linux-x64-cflinuxfs2-52c14116.tgz]
build-step-build | -----> Installing go 1.8.7
build-step-build |        Download [https://buildpacks.cloudfoundry.org/dependencies/go/go1.8.7.linux-amd64-cflinuxfs2-fff10274.tar.gz]
build-step-build |        **WARNING** Installing package '.' (default)
build-step-build | -----> Running: go install -tags cloudfoundry -buildmode pie .
...

This can be a lot easier and nicer than writing your own Dockerfile and curating your application’s dependencies for the rest of its life. Buildpacks are fantastic.
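
As a reference point, the same buildpack build expressed as a raw Build resource references the registered template by name and passes its arguments. This is a sketch; the IMAGE argument name comes from the buildpack template in knative/build-templates, and the rest is assumed to mirror the knctl flags above.

```yaml
# Sketch of a Build using the buildpack template (illustrative).
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  generateName: simple-app-
spec:
  serviceAccountName: build
  # knctl also attaches a source section for the uploaded directory,
  # omitted here for brevity.
  template:
    name: buildpack
    arguments:
    - name: IMAGE
      value: index.docker.io/<my-org>/knative-simple-app
    - name: GOPACKAGENAME
      value: main
```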

Build Results

You can look up past builds and view their output:

$ knctl build list
Builds in namespace 'default'
Name              Succeeded  Age
simple-app-6vnsk  true       1m
simple-app-sjm5c  true       5m
$ knctl build show -b simple-app-6vnsk
...

Private Git Repository

In the preceding examples we uploaded a local folder. You can also ask Knative Build to fetch a private Git repository.

For instructions, see the Private Git Secret section of a previous article in our Knative series, which discusses secrets, serviceaccounts, and the --git-url and --git-revision flags.

The build-related flags of knctl deploy are the same as those of knctl build create.
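
In Build-resource terms, the --git-url and --git-revision flags correspond to a git source block. Below is a sketch with hypothetical repository and image names; the serviceaccount must also reference your Git basic-auth secret for a private repository.

```yaml
# Sketch of a Build fetching a private Git repository (names hypothetical).
apiVersion: build.knative.dev/v1alpha1
kind: Build
metadata:
  generateName: private-app-
spec:
  serviceAccountName: build   # must also reference the git basic-auth secret
  source:
    git:
      url: https://github.com/<my-org>/<private-repo>.git
      revision: master
  template:
    name: buildpack
    arguments:
    - name: IMAGE
      value: index.docker.io/<my-org>/private-app
```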

Summary

The ability to build container images using Cloud Foundry buildpacks is golden, and can save your team a lot of time now and for years to come.

Knative Build is a standalone subsystem that you can use to create container images using Cloud Foundry buildpacks, Dockerfiles, or any other build template you want to curate.
