Trying tiny k3s on Google Cloud with k3sup

This tutorial is based on Alex Ellis's k3sup tutorial, converted to use Google Cloud's gcloud commands.

At the end there is a curl | bash -s up all-in-one command, in case you want to do this again in the future.

$ gcloud compute instances create k3s-1 \
    --machine-type n1-standard-1 \
    --tags k3s,k3s-master
Created [].
$ gcloud compute instances create k3s-2 k3s-3 \
    --machine-type n1-standard-1 \
    --tags k3s,k3s-worker
Created [].
Created [].
k3s-1  us-west1-a  n1-standard-1  RUNNING
k3s-2  us-west1-a  n1-standard-1  RUNNING
k3s-3  us-west1-a  n1-standard-1  RUNNING

From Alex’s blog, here is a conceptual diagram of what we are about to do, except with Google Cloud VMs… His blog post was about Digital Ocean, so I assume this diagram has been around a few blog posts.

From the k3s website we can see the internal components that run on the primary (server) VM and the secondary (agent) VMs that join it:

Alex’s k3sup tool uses SSH to access each VM and install k3s inside it. We can use gcloud to set up the private key we need for SSH access.

gcloud compute config-ssh

This creates the ~/.ssh/google_compute_known_hosts file and the ~/.ssh/google_compute_engine key pair (if you don't already have one), plus some hostname entries in ~/.ssh/config that we can ignore for now.
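k3sup also needs the external IP of the master VM (the $primary_server_ip used below). As a sketch, here is one way to look it up with gcloud; the get_external_ip function name is my own, and it assumes gcloud is authenticated with a default zone configured:

```shell
# Hypothetical helper: fetch the external (NAT) IP of a VM by instance name.
# Assumes gcloud is authenticated and a default zone is configured.
get_external_ip() {
  gcloud compute instances describe "$1" \
    --format="get(networkInterfaces[0].accessConfigs[0].natIP)"
}

# primary_server_ip=$(get_external_ip k3s-1)
```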

k3sup install --ip $primary_server_ip --context k3s --ssh-key ~/.ssh/google_compute_engine --user $(whoami)

We can SSH into the VM and confirm it is full of running processes that look like Kubernetes. Eventually you will see something like:

$ gcloud compute ssh k3s-1
user@k3s-1:~$ ps axwf
  925 ?        Ssl    0:15 /usr/local/bin/k3s server --tls-san
  958 ?        Sl     0:03  \_ containerd -c /var/lib/rancher/k3s/agent/etc/containerd/config.toml -a /run/k3s/containerd/containerd.sock --state /run/k3s/con
 1241 ?        Sl     0:00      \_ containerd-shim -namespace -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/
 1259 ?        Ss     0:00      |   \_ /pause
 1340 ?        Sl     0:00      \_ containerd-shim -namespace -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/
 1358 ?        Ssl    0:00      |   \_ /coredns -conf /etc/coredns/Corefile
 1627 ?        Sl     0:00      \_ containerd-shim -namespace -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/
 1655 ?        Ss     0:00      |   \_ /pause
 1728 ?        Sl     0:00      \_ containerd-shim -namespace -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/
 1746 ?        Ss     0:00      |   \_ /pause
 1909 ?        Sl     0:00      \_ containerd-shim -namespace -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/
 1930 ?        Ss     0:00      |   \_ /bin/sh /usr/bin/entry
 1962 ?        Sl     0:00      \_ containerd-shim -namespace -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/
 1980 ?        Ss     0:00      |   \_ /bin/sh /usr/bin/entry
 2010 ?        Sl     0:00      \_ containerd-shim -namespace -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/
 2027 ?        Ss     0:00      |   \_ /bin/sh /usr/bin/entry
 2078 ?        Sl     0:00      \_ containerd-shim -namespace -workdir /var/lib/rancher/k3s/agent/containerd/io.containerd.runtime.v1.linux/
 2095 ?        Ssl    0:00          \_ /traefik --configfile=/config/traefik.toml
user@k3s-1:~$ exit

To access our (currently 1-node) cluster, we first need to open a firewall port so that our kubectl requests can reach the API server on port 6443:

$ gcloud compute firewall-rules create k3s --allow=tcp:6443 --target-tags=k3s
Creating firewall...Created [].
Creating firewall...done.
k3s   default  INGRESS    1000      tcp:6443        False

k3sup install saved a kubeconfig file for the new cluster into the current directory. Point kubectl at it:

export KUBECONFIG=`pwd`/kubeconfig
kubectl get nodes

The output will show our nascent cluster coming into shape:

k3s-1   Ready    master   4m57s   v1.15.4-k3s.1

We can now set up k3s on the remaining nodes, using the external IPs shown in the gcloud output above.

We use k3sup join instead of k3sup install to add nodes to our tiny cluster:

k3sup join --ip $k3s_2_ip --server-ip $primary_server_ip --ssh-key ~/.ssh/google_compute_engine --user $(whoami)
k3sup join --ip $k3s_3_ip --server-ip $primary_server_ip --ssh-key ~/.ssh/google_compute_engine --user $(whoami)

Or, if you have lots of worker nodes you can do them all with:

gcloud compute instances list \
  --filter=tags.items=k3s-worker \
  --format="get(networkInterfaces[0].accessConfigs.natIP)" | \
    xargs -L1 k3sup join \
      --server-ip $primary_server_ip \
      --ssh-key ~/.ssh/google_compute_engine \
      --user $(whoami) \
      --ip

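The pipeline works because xargs -L1 runs one invocation of the command per input line, appending that line as the final argument, so each worker IP becomes its own k3sup join call. A quick sanity check of that behavior, with echo standing in for k3sup:

```shell
# Each input line triggers a separate invocation, with the line appended last.
printf '203.0.113.2\n203.0.113.3\n' | xargs -L1 echo k3sup join --ip
# prints:
#   k3sup join --ip 203.0.113.2
#   k3sup join --ip 203.0.113.3
```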
The resulting cluster is up and running:

$ kubectl get nodes
k3s-3   Ready    worker   4m24s   v1.15.4-k3s.1
k3s-2   Ready    worker   4m59s   v1.15.4-k3s.1
k3s-1   Ready    master   6m8s    v1.15.4-k3s.1
$ kubectl describe node/k3s-1


To clean up the three VMs and the firewall rules:

gcloud compute instances delete k3s-1 k3s-2 k3s-3 --delete-disks all
gcloud compute firewall-rules delete k3s

If you have lots of nodes you can delete them all at once (the --quiet flag skips the per-instance confirmation prompt, which would otherwise get in the way under xargs):

gcloud compute instances list \
  --filter=tags.items=k3s --format="get(name)" | \
xargs gcloud compute instances delete --quiet

All-in-one command

I’ve packaged the create (up) and delete (down) instructions above into a curl | bash helper script via a Gist.

To bring up a 3-node cluster of k3s:

curl -sSL | bash -s up

To tear it down:

curl -sSL | bash -s down
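The -s flag is what makes this work: it tells bash to read the script from standard input while passing the remaining arguments (up or down) through as positional parameters. A hypothetical sketch of the dispatch structure such a script might use (the real Gist may differ):

```shell
# Hypothetical sketch of a curl | bash -s up/down dispatcher.
# The echo bodies stand in for the gcloud and k3sup commands shown earlier.
main() {
  case "$1" in
    up)   echo "create VMs + firewall rule, then k3sup install/join" ;;
    down) echo "delete VMs + firewall rule" ;;
    *)    echo "usage: bash -s up|down" >&2; return 1 ;;
  esac
}
```

The real script would end with main "$@" so that the argument supplied after bash -s reaches the case statement.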
