A long time ago, we at Stark & Wayne faced a problem with jumpboxen. Every environment needs one, and they have to have all the right tooling loaded on them.
We were treating them as part of the infrastructure, which meant that we let client infrastructure teams stand them up: a CentOS VM over here, a stock AWS Ubuntu AMI over there, with the tools we needed loaded by hand, all at different versions.
Then we took a step back and realized that we were evangelizing the very tool that would make this whole nightmare go away: BOSH. And then we built the Jumpbox BOSH Release.
Now we could deploy a jumpbox, pre-loaded with all of the tools we needed, just as easily as we deployed Vault, Concourse, and Cloud Foundry. Need to onboard a new engineer? Put their account name and SSH keys in the manifest and redeploy. Time to upgrade to a newer version of the `cf` CLI? Rebuild the BOSH release and redeploy.
But while redeploying works great for hands-off installations like Cloud Foundry, a jumpbox is an intimate, hands-on kind of thing. If it works, people are actively logged into it, running processes (like `bash`) that tie up the persistent disk. The more people using a jumpbox, the harder it is to find a window where you can even perform a `bosh deploy` for updates.
About two months ago, in early June, we built a thing called the Containers BOSH Release. The point was to stop needlessly repacking things for BOSH, and instead rely on the maturing container technologies of Docker and Docker Compose. Why should I spend a bunch of effort to run processes outside of a container when they are already packaged and ready to be run inside of a container?
A few weeks ago, I was reminded of an effort some colleagues (and former co-workers) of mine undertook to make jumpboxen easier to use and simpler to maintain: Docker.
The idea was breathtakingly simple: rather than package up all the tools that a jumpbox needs as BOSH Release Packages, just package up Docker and wire up the login process to spin up a container of the owner's choosing.
So that's what we did in the SSH-able Docker Containers pull request, merged recently into the Containers BOSH Release. Instead of running a Docker Compose recipe, you can now co-locate the `jumpbox` job, specify your list of users and their chosen Docker images, and have fun.
Here’s an example (cut-down) manifest:
```yaml
instance_groups:
  - name: jb
    instances: 1
    jobs:
      - name: docker
        release: containers
      - name: jumpbox
        release: containers
        properties:
          users:
            - username: alice
              image: 'ubuntu:18.04'
              key: ssh-ed25519 AAAAC3...mK
            - username: bob
              image: 'centos:7'
              key: ssh-ed25519 AAAAC3...fQ
```
That will spin up a single VM, named `jb`, and provision two users, Alice and Bob. Alice is a hardcore hacker, so she wants to use Ubuntu 18.04 LTS. Bob, on the other hand, is a corporate kinda guy, so he's all aboard the CentOS train.
Both of these preferences can peacefully coexist on the jumpbox machine, even if they may not get along so well in real life. But the fun doesn’t stop there! Indeed, why be content with plain vanilla distribution images? Why not explore the world and run our own custom images?
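To make that concrete, here is a sketch of what logging in might look like, assuming the jumpbox VM is reachable at a hypothetical address of 10.0.0.5. Each user's SSH session lands inside a container of their chosen image, so Alice's shell runs on Ubuntu even though the VM underneath is a BOSH stemcell:

```console
$ ssh alice@10.0.0.5
alice@jb:~$ grep PRETTY_NAME /etc/os-release
PRETTY_NAME="Ubuntu 18.04 LTS"
```

Bob, SSH-ing to the same address with his own key, would find himself on CentOS 7 instead.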
I maintain a jumpbox-like container image that houses all of the tools I find useful. I keep it over on Docker Hub, and I can use it for my account:
```yaml
instance_groups:
  - name: jb
    jobs:
      - name: jumpbox
        properties:
          users:
            - username: jhunt
              image: huntprod/jumpbox:jhunt
```
The `jhunt` tag loads up more of my own personalized environment, above and beyond the tools. Don't try this at home if you are not also a jhunt.
If you want to use a more generic, BOSH + Cloud Foundry image, we’ve got you covered there, with the huntprod/cf-jumpbox image. It comes jam-packed with all sorts of goodies, including:
- Spruce, jq, and friends
- curl / wget
- Tools from HashiCorp
- … and loads more!
The best part?
You don’t even have to use someone else’s jumpbox image. You can throw together your own toolbox, customize the environment a little bit, push it to Docker Hub, and deploy away.
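As a sketch of what rolling your own might look like (the base image and tool list here are just illustrations, not what any particular published image ships), a minimal Dockerfile could be:

```dockerfile
# A hypothetical custom jumpbox image -- swap in your own base and tools.
FROM ubuntu:18.04

# Everyday CLI tooling; add whatever your team reaches for daily.
RUN apt-get update && apt-get install -y \
      curl wget jq git vim \
 && rm -rf /var/lib/apt/lists/*

# Drop into an interactive shell when the jumpbox spins up the container.
CMD ["/bin/bash"]
```

Build it, `docker push` it to Docker Hub, point the `image` property in your manifest at it, and redeploy.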
What tools will you install?