To deploy Cloud Foundry (CF) to Bosh-Lite on AWS, we first need to [spin up a Bosh-Lite VM on AWS](#spin-up-a-bosh-lite-vm-on-aws), and then we can [deploy CF using Bosh-Lite](#deploy-cf-using-bosh-lite) on the VM we spun up.
## Spin up a Bosh-Lite VM on AWS
Dr Nic wrote a blog post, Bosh-Lite can be better on AWS, which walks you through spinning up a Bosh-Lite VM on AWS. In this post, we show you step by step how to deploy CF to Bosh-Lite specifically in an AWS VPC, and share the problems you may encounter along the way.
The basic steps are as follows:
- Install Vagrant Plugin
- Clone the Bosh-Lite Repository
- Configure BOSH Environment Variables
- Configure the Vagrantfile
- Vagrant Up and Bosh Login
### Install Vagrant Plugin
Install the Vagrant AWS provider by running the following command:
$ vagrant plugin install vagrant-aws
### Clone the Bosh-Lite Repository
Clone the Bosh-Lite repository with git:
$ git clone https://github.com/cloudfoundry/bosh-lite.git
### Configure BOSH Environment Variables
One example configuration is shown below. AWS-related settings go in the Vagrantfile, which is described in [Configure the Vagrantfile](#configure-the-vagrantfile). The Vagrantfile automatically picks up certain `BOSH_LITE_*` environment variables, such as `BOSH_LITE_NAME`. If `BOSH_LITE_NAME` is not set, it defaults to `vagrant`.
export BOSH_LITE_INSTANCE_TYPE=m3.xlarge       # for example
export BOSH_LITE_SUBNET_ID=your_subnet_id      # VPC only
export BOSH_LITE_SECURITY_GROUP=your_group_id  # the group ID, not the group name
You need to make sure that the security group you are using exists and allows inbound TCP traffic on ports 25555 (for the BOSH director), 22 (for SSH), 80/443 (for Cloud Controller), and 4443 (for Loggregator).
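If you manage the group with the AWS CLI, the following sketch prints the `authorize-security-group-ingress` calls that would open those ports. The group ID is a placeholder, and the wide-open `0.0.0.0/0` CIDR is only for illustration; review the printed commands (and narrow the CIDR to your own IP range) before running them.

```shell
# Placeholder group ID -- substitute your BOSH_LITE_SECURITY_GROUP value.
GROUP_ID="sg-xxxxxxxx"

# The inbound TCP ports Bosh-Lite needs, per the list above.
for PORT in 22 80 443 4443 25555; do
  echo "aws ec2 authorize-security-group-ingress --group-id $GROUP_ID" \
       "--protocol tcp --port $PORT --cidr 0.0.0.0/0"
done
```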
See the official Bosh-Lite documentation on using the AWS provider.
### Configure the Vagrantfile
In this step, we configure the Vagrantfile using the environment variables we exported in [Configure BOSH Environment Variables](#configure-bosh-environment-variables).
config.vm.provider :aws do |aws, override|
  override.vm.box_version = '9000.69.0'
  aws.access_key_id = ENV['BOSH_AWS_ACCESS_KEY_ID'] || ''
  aws.secret_access_key = ENV['BOSH_AWS_SECRET_ACCESS_KEY'] || ''
  aws.keypair_name = ENV['BOSH_LITE_KEYPAIR'] || ''
  aws.instance_type = ENV['BOSH_LITE_INSTANCE_TYPE'] || ''
  aws.subnet_id = ENV['BOSH_LITE_SUBNET_ID'] || ''
  aws.security_groups = ENV['BOSH_LITE_SECURITY_GROUP'] || ''
  aws.associate_public_ip = true
  aws.ami = ''
end

`aws.associate_public_ip = true` enables assigning a public IP to the instance.
You can read more about the Vagrant configuration for the AWS provider here.
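Note that the Vagrantfile above also reads your AWS credentials and key pair name from environment variables, so export them before running `vagrant up`. The values below are placeholders; substitute your own.

```shell
# Read by the Vagrantfile's aws provider block (placeholder values).
export BOSH_AWS_ACCESS_KEY_ID=your_access_key_id
export BOSH_AWS_SECRET_ACCESS_KEY=your_secret_access_key
export BOSH_LITE_KEYPAIR=your_keypair_name
```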
### Vagrant Up and Bosh Login
$ vagrant up --provider=aws
You will see the public IP of the VM you just launched at the end of the `vagrant up` output in your terminal. You can also run `vagrant ssh-config` to see the public IP. Next, run `bosh target YOUR_VM_PUB_IP`, or run `vagrant ssh` and then `bosh target 127.0.0.1`. The `bosh target` command will prompt you to log in; use `admin` as both the username and the password.
One problem I have run into is that `vagrant up --provider=aws` hangs at the following output:
==> default: Waiting for SSH to become available...
If this happens, I suggest you first check your configuration; in particular, verify that the subnet ID and the security group are both set as you expect. If they are correct, you can debug by SSH-ing into your box manually with the following command. This helps you determine whether the problem is with SSH itself or with the Bosh-Lite VM.
$ ssh ubuntu@your_vm_pub_ip -v -i path_to_your_private_key
If the ssh command above outputs `Permission denied`, check whether the public key matching your private key is registered as a key pair in your AWS console. You can derive the public key and its fingerprint as follows:
$ ssh-keygen -y -f path_to_your_private_key >bosh-lite.pub
$ ssh-keygen -l -f bosh-lite.pub -E md5
Then compare the fingerprint of the key you are using with the fingerprint of the key pair in your AWS console. If they do not match, you may want to create a new key pair in your AWS console by importing the public key you just generated.
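If you need to register the key, one way is to import the public key with the AWS CLI. This sketch prints the command for review rather than running it; the key pair name and file path are examples, and depending on your AWS CLI version you may need `fileb://` instead of `file://`.

```shell
# Example values -- substitute your own public key file and key pair name.
PUB_KEY=bosh-lite.pub   # the public key generated with ssh-keygen -y above
KEY_NAME=bosh-lite      # the key pair name to create in AWS

# Print the import command; run it manually once it looks right.
echo "aws ec2 import-key-pair --key-name $KEY_NAME" \
     "--public-key-material file://$PUB_KEY"
```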
## Deploy CF using Bosh-Lite
Now that we have Bosh-Lite running on AWS, let's use it to deploy CF. If you are deploying CF to Bosh-Lite on your local laptop instead, see Deploying Cloud Foundry Locally with bosh-lite with Mac-OSX (Late 2015) by Thomas Mitchell for details.
Most of the steps for deploying CF to Bosh-Lite on AWS are the same as, or similar to, those for deploying CF to Bosh-Lite on your local laptop. The basic steps are as follows:
- Clone the CF-Release
- Generate and Set the Manifest
- Upload Stemcell
- Upload Release
- Deploy & Verify the Deployment Works
### Clone the CF-Release
Clone the cf-release repository with git:
$ git clone https://github.com/cloudfoundry/cf-release.git
### Generate and Set the Manifest
Next let’s generate the manifest for the CF deployment.
First, use the `scripts/update` helper to update the cf-release submodules:
$ cd cf-release
$ ./scripts/update
Second, create a manifest stub file and add the BOSH director UUID (and, if needed, your system domain). You can get the UUID by running `bosh status --uuid`. A minimal stub looks like this:
director_uuid: YOUR_DIRECTOR_UUID
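The stub can also be written from the command line. In this sketch the file name and UUID are placeholders; on a machine targeting a real director, you would substitute the output of `bosh status --uuid`.

```shell
# Placeholder UUID -- on a real director, use: bosh status --uuid
DIRECTOR_UUID="12345678-abcd-ef00-1234-56789abcdef0"

# Write a minimal manifest stub (file name is an example).
cat > cf-stub.yml <<EOF
---
director_uuid: $DIRECTOR_UUID
EOF

# Sanity-check the stub contents.
cat cf-stub.yml
```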
Third, you will generate and set the manifest using the following command:
$ ./scripts/generate-bosh-lite-dev-manifest PATH-TO-MANIFEST-STUB
### Upload Stemcell
You can view a list of publicly available BOSH stemcells on bosh.io; we will use the Warden Bosh-Lite stemcell:
$ bosh upload stemcell https://bosh.io/d/stemcells/bosh-warden-boshlite-ubuntu-trusty-go_agent
### Upload Release
At the time of writing, the latest release is `cf-224.yml`, which we will use as an example:
$ bosh upload release releases/cf-224.yml
### Deploy & Verify the Deployment Works
Now run `bosh deploy` to deploy the uploaded Cloud Foundry release. After the deployment completes, run the following commands to verify it:
$ cf api --skip-ssl-validation https://api.BOSH-LITE-PUBLIC-IP.xip.io
$ cf auth admin admin
$ cf create-org test-org
$ cf target -o test-org
$ cf create-space test-space
$ cf target -s test-space