GeoffFranks, Author at Stark & Wayne (Cloud-Native Consultants)
https://www.starkandwayne.com/blog/author/geofffranks/

Renaming BOSH Jobs without Cloud Config
https://www.starkandwayne.com/blog/renaming-bosh-jobs-without-cloud-config/
Tue, 31 Jan 2017 02:41:12 +0000

I recently needed to rename some BOSH jobs that were deployed with a BOSH v1 manifest and no Cloud Config. Early in 2016, the bosh rename job command was replaced with the migrated_from feature of the manifest. It's documented pretty well here, so I won't go into the details in this post.

However, if you try to add migrated_from to your v1 manifest, you'll get the following error message:

Error 190014: Deployment manifest instance groups contain 'migrated_from', but it can only be used with cloud-config.

The preferred solution is to convert your deployments to use Cloud Config, but if that isn't immediately an option and you're pressed for time, you can do the following:

  1. Rename the top-level jobs: key to instance_groups:.
  2. In each instance group, rename the templates: key to jobs:.
  3. If you are using Spruce or Spiff to generate a manifest from templates, you
    will need to adjust references accordingly, from jobs.my_job.stuff to
    instance_groups.my_job.stuff. There may be several files that need
    changing, so don't forget to check them all!
  4. Beware of global search/replace. Some properties legitimately contain the
    word templates or jobs (e.g. UAA scopes), and you don't want to change those.
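As a sketch (the instance group and job names here are illustrative, not from any real deployment), the conversion looks like this, with migrated_from pointing at the old job name:

```yaml
# Before (v1-style):
# jobs:
#   - name: old_name
#     templates:
#       - name: my_template

# After (v2-style keys, still without Cloud Config):
instance_groups:
  - name: new_name
    migrated_from:
      - name: old_name   # the job name BOSH should migrate VMs from
    jobs:
      - name: my_template
```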

The post Renaming BOSH Jobs without Cloud Config appeared first on Stark & Wayne.

Some options for adding a custom buildpack to CF
https://www.starkandwayne.com/blog/some-options-for-adding-a-custom-buildpack/
Tue, 10 Jan 2017 19:47:37 +0000

I recently needed to make use of the cf-multi-buildpack (UPDATE: there is also an updated GPL-licensed fork of cf-multi-buildpack). Its instructions say to use the -b https://github.com/pl31/cf-multi-buildpack flag during cf push. However, I wanted to make it explicitly available to all my Cloud Foundry users, added as a custom buildpack for all to see with cf buildpacks.

Here are a couple different ways of getting this done.

Buildpack-Packager

buildpack-packager is a ruby gem with the goal of helping you make cached/offline buildpacks, or bundling buildpacks in general. Installation is as simple as gem install buildpack-packager. Using it is a little more complicated, but still pretty easy:

  1. Create a manifest.yml file for your buildpack (to list out the dependencies and how to find them). Mine looked like this, since cf-multi-buildpack doesn't have any dependencies of its own:

     language: multi           # used for naming your buildpack
     dependencies: []          # List describing all the deps of your buildpack
     url_to_dependency_map: [] # List of regexen to parse name + version of deps
     exclude_files: []         # List of files to not include in the buildpack
    
  2. Create a VERSION file to version the buildpack itself. Contents should just be a semver appropriate to your buildpack.

  3. Build the buildpack. You can build it in cached mode and embed all dependencies: buildpack-packager --cached. Or, you can build it in uncached mode, and embed no dependencies: buildpack-packager --uncached.

I went with cached mode as it was less typing and I had no dependencies to cache:

$ buildpack-packager --cached
Cached buildpack created and saved as /Users/gfranks/code/starkandwayne/cf-multi-buildpack/multi_buildpack-cached-v1.0.zip with a size of 4.0K

From here, adding the buildpack to Cloud Foundry was as simple as cf create-buildpack multi-buildpack 10 multi_buildpack-cached-v1.0.zip.

Manually Zipping

If your buildpack has no dependencies and is relatively simple, it may be easier to create the buildpack manually from source, rather than create the manifest.yml + VERSION files:

$ git clone https://github.com/pl31/cf-multi-buildpack
$ cd cf-multi-buildpack
$ zip -x *.git* -r multi-buildpack-v1.0.zip .
  adding: bin/ (stored 0%)
  adding: bin/compile (deflated 55%)
  adding: bin/detect (deflated 24%)
  adding: bin/release (deflated 35%)
  adding: multi.zip (stored 0%)
  adding: README.md (deflated 41%)

The resultant zipfile can then be uploaded directly to CF via cf create-buildpack.

The post Some options for adding a custom buildpack to CF appeared first on Stark & Wayne.

GitHub-Slack Integrations without giving Slack write permission
https://www.starkandwayne.com/blog/github-slack-integrations-without-giving-slack-write-permission/
Wed, 31 Aug 2016 01:10:40 +0000

Slack has a great integration with GitHub, allowing you to have a bot post all kinds of GitHub activity into your Slack channels. However, the default settings for this require you to give Slack write permission on all your public/private repos. Depending on your level of paranoia, this may not be ideal. Fortunately, there is a relatively easy way around this!

First, add an integration to your channel like you normally would (I go into the channel, find the gear icon, and click "Add an app or integration"). Search the integrations for "github", and select the GitHub integration.

Next, click "Add Configuration" to add a new GitHub-to-Slack channel integration, and select the channel you want notifications in.

Now for the important part! Instead of clicking the big green button that your eye and mouse are inevitably drawn to, read the fine print above it, and click the link to "switch to unauthed mode".

Slack now gives you a WebHook URL that you can take to GitHub and add manually. Copy that URL, update/customize the names and icons and other settings in Slack as you see fit, then pop over to GitHub. For each repository that you wish to integrate with that channel, go to the repository's settings, and add a webhook integration (not a service integration). Paste in the URL, select the radio button for "Let me select individual events", and check the events that you wish to send to Slack.

Now it's time to click the big green button. Voila, you're integrated!
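If you have many repositories, the same webhook can also be added through the GitHub API instead of the settings UI. This is a sketch only: the owner/repo names, the Slack URL, and the chosen events below are placeholders you would replace with your own.

```shell
OWNER=my-org       # placeholder
REPO=my-repo       # placeholder
SLACK_WEBHOOK_URL="https://hooks.slack.com/services/REPLACE/ME"  # from Slack's unauthed mode

# Webhook payload; "push" and "pull_request" stand in for whichever
# individual events you checked in the Slack configuration.
PAYLOAD=$(cat <<EOF
{
  "name": "web",
  "active": true,
  "events": ["push", "pull_request"],
  "config": { "url": "${SLACK_WEBHOOK_URL}", "content_type": "json" }
}
EOF
)
echo "$PAYLOAD"

# Create the webhook (requires a token with admin rights on the repo):
# curl -H "Authorization: token $GITHUB_TOKEN" -d "$PAYLOAD" \
#   "https://api.github.com/repos/$OWNER/$REPO/hooks"
```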

The post GitHub-Slack Integrations without giving Slack write permission appeared first on Stark & Wayne.

Hardening the vcap user’s password on BOSH VMs
https://www.starkandwayne.com/blog/hardening-the-vcap-users-password-on-bosh-vms/
Tue, 23 Aug 2016 21:42:32 +0000

Locking down your BOSH VMs? Here's a handy guide for some options at your disposal for overriding the default password for BOSH's vcap user:

Customize it in your manifest

In each resource pool (or VM type) configuration in your BOSH manifest (or Cloud Config), you can specify env.bosh.password. This overrides the password for the vcap user. The value to put in your manifest is a hash of the password, and should be generated using mkpasswd -s -m sha-512 (you'll need a Linux VM with the whois package installed). The downside to this approach is that it must be done for each resource pool/VM type you deploy. Cloud Config makes this a little easier, since you can re-use VM types across deployments, but it still requires remembering.

Here's a quick example on BOSH-Lite:

$ mkpasswd -s -m sha-512
Password: REDACTED
$6$KhPGar7zCLLtPU$afuBqZMg5PRLM/3opVltVOA7Tm3IZJr14mr6QmECAIioGw5HaJdG2HhhOczDQ2UubHPcZYXHTK6jB6OKyBWBv/
$ cat manifest.yml
...
resource_pools:
  - name: my-job
    cloud_properties: {}
    network: default
    env:
      bosh:
        password: $6$KhPGar7zCLLtPU$afuBqZMg5PRLM/3opVltVOA7Tm3IZJr14mr6QmECAIioGw5HaJdG2HhhOczDQ2UubHPcZYXHTK6jB6OKyBWBv/
    stemcell:
      name: bosh-warden-boshlite-ubuntu-trusty-go_agent
      sha1: 7c1c34df689772c7b14ce85322c4c044fafe7dbe
      url: https://bosh.io/d/stemcells/bosh-warden-boshlite-ubuntu-trusty-go_agent?v=3262.2
      version: 3262.2
...
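If you don't have mkpasswd handy, newer versions of openssl (1.1.1 and later) can generate the same style of SHA-512 crypt hash. The password and salt below are illustrative only:

```shell
# Same hash format as mkpasswd -s -m sha-512: $6$<salt>$<digest>
HASH=$(openssl passwd -6 -salt KhPGar7zCLLtPU s3cret)
echo "$HASH"

# The output is deterministic for a given password+salt, so you can
# sanity-check a value already in a manifest by re-running with its salt.
case "$HASH" in
  '$6$'*) echo "looks like a SHA-512 crypt hash" ;;
  *)      echo "unexpected format" ;;
esac
```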

Have BOSH create a random password for each VM

Newer versions of BOSH (v255.4 and above) have a feature that randomizes the password set for the vcap user on each VM created. This setting works at the director level, and applies to every new VM. On the positive side, you no longer need to remember to specify a new password for each resource pool/VM type. Any time your VMs are recreated, they get new passwords automatically, and no VM should end up with the same vcap password as any other instance. The tradeoff is that no one (including you) will ever know the password BOSH generated for a given VM's vcap user.

To make use of this feature, ensure the following property is set in your BOSH director's manifest, and redeploy:

properties:
  director:
    generate_vm_passwords: true

This feature will eventually be turned on by default in BOSH directors.

Recommendations

We recommend you use the first method to harden the password of the vcap user to a specific password when deploying your BOSH director with bosh-init. At the same time, configure the director to randomly generate passwords for all other VMs' vcap users. You can see this in action in our bosh templates for Genesis.

The post Hardening the vcap user’s password on BOSH VMs appeared first on Stark & Wayne.

Learning to Troubleshoot BOSH using fubar-boshrelease
https://www.starkandwayne.com/blog/learning-to-troubleshoot-bosh-using-fubar-boshrelease/
Wed, 10 Aug 2016 18:50:06 +0000

A short time ago, I created the fubar-boshrelease to provide a deliberately broken BOSH release that exposes operators and developers to various aspects of troubleshooting when things go wrong with BOSH. It starts out as a BOSH release repo that needs to be built, uploaded, deployed, and made to work. However, many things are set up to go wrong, forcing you to investigate each problem and decide how to fix it. In the end, you will have a working BOSH deployment on BOSH-Lite, and will see "You win!" when you curl the specified endpoint.

Hopefully this release is useful as a training exercise, or as a refresher for those who haven't had to deal with any BOSH issues lately. I tried to put as little help in the repo as possible, so that users are forced to research how BOSH releases are built and deployed, learning more by doing than by reading.

Getting Started

To get started, spin up a BOSH-Lite Vagrant VM, and git clone https://github.com/cloudfoundry-community/fubar-boshrelease. From there:

# Move into the boshrelease directory, target our bosh-lite, and add the bosh-lite routes
$ cd fubar-boshrelease
$ bosh target 192.168.50.4 bosh-lite
RSA 1024 bit CA certificates are loaded due to old openssl compatibility
Target set to 'Bosh Lite Director'
Your username: admin
Enter password:
Logged in as 'admin'
$ sudo route add 10.244.0.0/16 192.168.50.4
Password:
# Iterate over generating the deployment manifest, uploading, and deploying:
$ templates/make_manifest warden && bosh create release --force && bosh upload release && bosh -n deploy

From there, investigate and resolve any errors encountered, then lather, rinse, repeat the manifest generation, release creation/uploading, and deployment as needed. Once you think you're done, you can test the following:

$ curl http://10.244.54.2
You win!

If you see any message other than "You win!", or no output at all, there's still more work to be done!
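When a deploy does fail, a few standard commands (BOSH v1 CLI, matching the rest of this post) cover most of the investigation; the job name and index below are examples, not necessarily what this release uses:

$ bosh vms                 # which instances are failing or unresponsive?
$ bosh task last --debug   # full debug log of the most recent task
$ bosh ssh my_job 0        # ssh onto an instance to poke around
# ...then on the VM:
$ sudo /var/vcap/bosh/bin/monit summary
$ ls /var/vcap/sys/log/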

The post Learning to Troubleshoot BOSH using fubar-boshrelease appeared first on Stark & Wayne.

Speeding up bosh create-env in Production with Proto-BOSH
https://www.starkandwayne.com/blog/speeding-up-bosh-init-in-production/
Sun, 10 Jul 2016 19:53:16 +0000

bosh create-env is AWESOME. It lets you deploy BOSH itself using a BOSH manifest, making it really easy to customize your BOSH deployment as you see fit. It allows you to add a backup agent, some monitoring, and some troubleshooting tools, or even swap out the database with an HA alternative.

However, there is one large downside with bosh create-env. When you go to perform any updates to it, it has to recompile everything you deploy to it, which in practice leads to 20-30 minutes of downtime on the BOSH director. While your deployed VMs will keep running during this time, no one can operate on them using bosh. Additionally, if any VMs fail, they won't be resurrected until the bosh create-env update has completed.

Fortunately, the folks at Pivotal's Ops team offered a suggestion a while back that we really like (Thank you, Pivotal!). I'm not sure if they have a name for it, but we have taken to calling it Proto-BOSH. In essence, you use bosh create-env to deploy a BOSH director (proto-BOSH) which will be used only for deploying/updating other BOSH directors (regular BOSH). The regular BOSH directors will then deploy your production services, like Cloud Foundry. Now, when you go to upgrade your production BOSH director, it will stay online during the package compilations and will be down only for the duration of the VM rebuild (if necessary) and job/package updates.

To make this easier to get going, we've created the bosh-genesis-kit. It is a Genesis Kit for deploying your BOSH director using templates that work for both the Proto-BOSH and the subsequent regular BOSH directors. Instructions on bootstrapping environments with the Proto-BOSH methodology can be found in the bosh-genesis-kit README.

The post Speeding up bosh create-env in Production with Proto-BOSH appeared first on Stark & Wayne.

Safely Hiding Sensitive Data in your Concourse Pipelines
https://www.starkandwayne.com/blog/safely-hiding-sensitive-data-in-your-concourse-pipelines/
Wed, 22 Jun 2016 12:41:44 +0000

At Stark & Wayne, we love Concourse pipelines! We use them for testing/releasing CLI utilities, deploying Cloud Foundry apps, building docker images, creating and testing BOSH releases, and vetting changes to BOSH deployments in an automated fashion starting in sandbox environments all the way to production.

Uh-oh! credentials.yml file got committed?

One of the most common challenges we've run into, both internally and with our clients, is securing credentials to ensure people don't accidentally commit sensitive data to our repos. Initially, we tried adding a .gitignore entry for credentials.yml (a file we would reference via fly set-pipeline --load-vars-from). This was mediocre at best: it left creds on disk for long periods of time, required people to remember the .gitignore on newly pipelined repos, and didn't scale well when multiple people collaborated on the same project.

Our current solution addresses all of these issues. Leveraging Spruce and its (( vault )) operator, a script called repipe generates the pipeline config, updates it via fly, and deletes the generated config file on completion or error so that it doesn't live on disk for extended periods. Our entire ci configuration can now be committed to public repos, since no sensitive data is stored in it. As long as collaborators on the project have access to the Vault, any one of them can make changes to the pipeline.
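For illustration (the secret paths, keys, and repo below are made up), a settings file can reference Vault like this, and Spruce resolves the (( vault ... )) operators when the pipeline is set, so nothing sensitive is ever written to the repo:

```yaml
meta:
  github:
    uri:         git@github.com:myorg/myrepo.git             # placeholder repo
    private_key: (( vault "secret/ci/github:private_key" ))  # hypothetical path
  slack:
    webhook:     (( vault "secret/ci/slack:webhook" ))       # hypothetical path
```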

How to Get Started

To get started using this with your current project, take a look at our pipeline-templates repo. It has instructions for setting up your project with one of the currently provided pipeline templates. Even if your pipeline isn't an exact match to the templates currently provided, you can still make use of the repipe script to generate and update your pipeline configs using Vault. You just need to ensure that your project follows the basic file structure described in the README (ci/repipe, ci/pipeline.yml, and ci/settings.yml).

For an example of this in the wild, feel free to browse through Spruce's CI configurations.

The post Safely Hiding Sensitive Data in your Concourse Pipelines appeared first on Stark & Wayne.

Standing up Vault using Genesis
https://www.starkandwayne.com/blog/standing-up-vault-using-genesis/
Tue, 21 Jun 2016 19:43:46 +0000

A few of our recent posts on standing up BOSH deployments using Genesis have revolved around needing Vault to store your credentials safely. The vault-boshrelease makes this fairly straightforward, but there's now a Genesis Vault template to make running Vault even easier!

The procedure is similar to the other Genesis deployments:

$ genesis new deployment --template vault
$ cd vault-deployments
$ genesis new site --template bosh-lite macbook
$ git add macbook; git commit -m "Added macbook site"
$ bosh target bosh-lite
$ genesis new env macbook sandbox
$ cd macbook/sandbox
$ make deploy
$ git add .; git commit -m "Added initial sandbox environment"
# lather, rinse, repeat as needed for all of your sites/environments

Out of the box, you get an HA Vault using Consul as its encrypted backend datastore. However, to start using it, you will need to initialize your Vault. I recommend using the safe CLI for interacting with Vault:

$ safe target "https://<vault ip:8200>" macbook-vault
$ safe vault init

This will output the keys to use when unsealing the Vault, as well as the initial root token. Save these somewhere secure, as the unseal keys will be needed any time the Vault process gets restarted.

Next, we need to unseal the new Vault, using 3 distinct Unseal Keys from the list obtained during safe vault init:

$ safe vault unseal
$ safe vault unseal
$ safe vault unseal

Now that Vault is initialized and unsealed, you can log in and pre-populate the handshake value used by many Genesis templates to detect if Vault is available:

$ safe auth
Authenticating against macbook-vault at https://10.244.9.3:8200
Token:
$ safe set secret/handshake initialized=true
initialized: true
$ safe tree
.
└── secret
    └── handshake
$ safe get secret/handshake
--- # secret/handshake
initialized: "true"

Voila!

The post Standing up Vault using Genesis appeared first on Stark & Wayne.

]]>

Using Genesis to Deploy Cloud Foundry https://www.starkandwayne.com/blog/using-genesis-to-deploy-cloud-foundry/ Wed, 08 Jun 2016 18:16:15 +0000 https://www.starkandwayne.com//using-genesis-to-deploy-cloud-foundry/

In this post, we're going to use Genesis to deploy Cloud Foundry. We will make use of some of Genesis's cool features to generate unique credentials for each deployment, and Vault to keep the credentials out of the saved manifests. We will do this on BOSH-Lite, but templates exist to easily deploy to AWS with a nearly identical process (AWS will require a couple more parameters to be defined).

Prerequisites

These instructions assume that you have working installations of BOSH-Lite and Vault. They also assume you have set the VAULT_ADDR environment variable and successfully authenticated to an unsealed Vault. The vault-boshrelease has some good instructions for getting this up and running quickly. Additionally, the Spruce, Safe, and certstrap command-line utilities are required.

Getting Started

First, we will use Genesis to create a new repo for managing our Cloud Foundry deployments:

genesis new deployment --template cf

This will create a git repository in the current working directory called cf-deployments. This repo will be what you use to manage all of your Cloud Foundry deployments, using the Genesis templates. An upstream remote is created automatically to track changes to the original templates, allowing you to easily pull in updates that are made to it over time.

Secondly, we will create a new site for BOSH-Lite:

$ cd cf-deployments
$ genesis new site --template bosh-lite macbook

This creates a new site directory in the cf-deployments repo, called macbook, based on the bosh-lite site templates. At the time this post was written, both bosh-lite and aws site templates were provided, but more may now exist. If not, you can create your own, and hopefully submit an upstream pull request so others can make use of your work.

Third, create a new environment in our macbook site:

genesis new environment macbook sandbox

This creates a sandbox directory representing the environment inside the cf-deployments/macbook site directory. It also does a couple of nifty things under the hood that are not immediately apparent:

  1. It runs any scripts in cf-deployments/.env_hooks. In this case, there are scripts which will connect to your Vault installation and generate unique secrets/certificates/keys for your Cloud Foundry deployment. Each time you create a new environment, these credentials will be unique, decreasing the risk of environments accidentally communicating with one another.
  2. It will generate a logical name for your BOSH deployment, based on the type of deployment being done (cf), site (macbook), and environment (sandbox): macbook-sandbox-cf.
  3. It grabs the UUID of the currently targeted BOSH director and inserts it into director.yml, so you don't have to.
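As an illustration of what one of those .env_hooks scripts boils down to — the secret path, key name, and hardcoded site/env below are hypothetical, not the actual hook contents — a hook generates a value and pushes it into Vault under the site/env prefix:

```shell
#!/bin/sh
# Hypothetical sketch of a Genesis .env_hooks script. The path and key
# are illustrative; real hooks derive the site/env from Genesis itself.
set -eu

env_hook() {
  site="macbook"
  env_name="sandbox"
  prefix="secret/${site}/${env_name}/cf"

  # generate a unique credential for this environment...
  password=$(head -c 24 /dev/urandom | base64 | tr -d '/+=')
  # ...and store it where the deployment's (( vault )) operators expect it
  safe set "${prefix}/nats" "password=${password}"
}
```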

Next, we will attempt to deploy, to see if there are any parameters that need to be filled out before our environment can be deployed:

$ cd macbook/sandbox
$ make deploy
Refreshing site definitions for macbook/sandbox
Refreshing global definitions for macbook/sandbox
4 error(s) detected:
 - $.networks.cf.subnets: Specify the subnet to use for CF
 - $.properties.cc.security_group_definitions.load_balancer.rules: Specify the rules for allowing access for CF apps to talk to the CF Load Balancer External IPs
 - $.properties.cc.security_group_definitions.services.rules: Specify the rules for allowing access to CF services subnets
 - $.properties.cc.security_group_definitions.user_bosh_deployments.rules: Specify the rules for additional BOSH user services that apps will need to talk to
Failed to merge templates; bailing...
make: *** [manifest] Error 5

It looks like we need to provide some configuration for the network this Cloud Foundry will run on. Let's use the following, which matches what a traditional Cloud Foundry on BOSH-Lite looks like:

$ cat <<EOF > networking.yml
networks:
- name: cf
  subnets:
  - gateway: 10.244.0.1
    range: 10.244.0.0/24
    static:
    - 10.244.0.2 - 10.244.0.100
properties:
  cc:
    security_group_definitions:
    - name: services
      rules:
      - destination: 10.244.0.0 - 10.244.1.255
        protocol: all
    - name: load_balancer
      rules:
      - destination: 10.244.0.34
        protocol: all
    - name: user_bosh_deployments
      rules:
      - destination: 10.244.0.0 - 10.244.255.255
        protocol: all
EOF

Now that that's taken care of, let's try again:

Refreshing site definitions for macbook/sandbox
Refreshing global definitions for macbook/sandbox
<output omitted for brevity - it's deploying a full CF to bosh-lite, after all>

Since there were no errors generating the manifest, Genesis moves on to deploying Cloud Foundry. Now take a look at the generated manifest in cf-deployments/macbook/sandbox/manifests/manifest.yml, and look for the certs and passwords. You will see that they're all REDACTED. Genesis tells Spruce to redact credentials from Vault except in the manifest that is actively being deployed via BOSH (which is cleaned up immediately after). This greatly minimizes the risk that sensitive credentials might be committed into your repo, and accidentally leaked to unauthorized eyes.
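As an illustration of what that redaction looks like in practice (the property shown is just an example of a Vault-backed credential), the committed copy of the manifest contains only placeholders:

```yaml
# manifests/manifest.yml as committed (illustrative excerpt)
properties:
  nats:
    password: REDACTED   # the real value exists only in Vault and in the
                         # short-lived manifest handed to BOSH at deploy time
```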

To find the credentials that were generated for the deployment, you can use safe tree secret/macbook/sandbox/cf to see where the creds are stored, and safe get <path> to retrieve them for use with cf login or nats.

Finally, shout out to @starkandwayne with the #genesis hashtag to let us know you love Genesis!

The Long Term Scenario

When living with this in a real world scenario, you will inevitably need to make modifications and upgrades over time and across a variety of environments. Genesis makes this really easy to manage, since the global and site templates are shared across all your deployments. Many changes you make will go from write-once-per-deployment (hoping you applied all the changes correctly) to write-once, deploy-many.

The workflow goes something like this:

  1. Edit the applicable files in cf-deployments/global, cf-deployments/<site>/site or cf-deployments/<site>/<env>
  2. Run make refresh deploy in the testing environment
  3. Validate that all went well
  4. Lather, rinse, repeat steps 2-3 as needed for the remaining sites to be updated. If you changed environment-level YAML files in step 1, those changes will also need to be replicated in each environment.
  5. Marvel at how much time you're saving in both updating YAML, fretting over whether you added all the properties correctly for each environment, and fixing/redeploying from any mistakes made along the way.

The post Using Genesis to Deploy Cloud Foundry appeared first on Stark & Wayne.

]]>
