Ramon Makkelie, Author at Stark & Wayne (Cloud-Native Consultants)
https://www.starkandwayne.com/blog/author/ramonmakkelie/

Getting Started with Microsoft AKS
https://www.starkandwayne.com/blog/getting-started-with-microsoft-aks/
Mon, 27 Jan 2020

Setting up a Kubernetes cluster from scratch is no easy task. Luckily, there are now many managed service offerings which create a Kubernetes cluster for you.
Many of these providers also take responsibility for keeping your cluster up to date.

In this blog post we will take a closer look at AKS, the Kubernetes-as-a-service offering from Microsoft Azure. The Azure team did a great job of automating the configuration of all the different Kubernetes cluster components. With AKS we can deploy a 3-node cluster in about 15 minutes.

So Let's Get To Work!

If you do not already have an Azure account, Microsoft offers a free trial with $200 in credit toward their cloud services. AKS counts as a cloud service, so get out there and get signed up!

With AKS you will only be paying for the virtual machine instances, storage, and networking resources consumed by your Kubernetes cluster; you won't have to pay for the etcd nodes, the API server components, etc.

Prerequisites

Before we get started, make sure to have the following tools installed:

On Linux:

curl -sL https://aka.ms/InstallAzureCLIDeb | sudo bash
az aks install-cli

On macOS:

brew install azure-cli
az aks install-cli

To fully enjoy your kubectl experience, you may want to install autocomplete.
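
For example, to enable completion in a bash shell (standard kubectl functionality; zsh users can substitute zsh for bash):

source <(kubectl completion bash)
echo 'source <(kubectl completion bash)' >> ~/.bashrc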

Spinning Up Our Cluster

After everything is installed, you will need to log into Microsoft Azure using the az CLI:

az login

If you have multiple subscriptions, you can optionally set the subscription you
wish to use by using the az account set command, like this:

az account set --subscription "SUBSCRIPTION-NAME"

We now need to create a resource group. A resource group is a collection of assets which will hold virtual machines, load balancers, networking components, etc. When you are all done with your Kubernetes cluster, you can delete the resource group to trivially clean up after yourself.

To create our resource group:

$ az group create --name my-aks-rg --location eastus

Here, our resource group is named my-aks-rg and is provisioned in the US - East region (eastus).

To provision our new cluster, use az aks create:

$ az aks create \
   --name my-aks \
   --resource-group my-aks-rg \
   --node-count 3 \
   --generate-ssh-keys

This should take approximately 15 minutes.

After the cluster is up and running, we need to tell kubectl how to connect to it so we can start deploying workloads. Luckily, the az CLI provides a painless way of doing this:

$ az aks get-credentials --name my-aks --resource-group my-aks-rg
$ kubectl config current-context
my-aks
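
As a quick sanity check, list the worker nodes (a standard kubectl command; you should see the three nodes requested above in a Ready state):

$ kubectl get nodes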

Verifying Our New Cluster

Let's verify our shiny new cluster by deploying a simple web-based application, complete with a front-end load balancer:

$ kubectl apply -f https://starkandwayne.com/deploy/welcome-to-k8s.yml

We can get the address of the load balancer from the welcome service:

$ kubectl get services -n welcome
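
If you prefer to grab just the address, something like this works once the EXTERNAL-IP column leaves the <pending> state (assuming the Service created by the manifest is named welcome):

$ kubectl get service welcome -n welcome -o jsonpath='{.status.loadBalancer.ingress[0].ip}'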

Copy the external IP from that command's output into your web browser, and you should see:

Go you!

Congratulations!
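
When you are done experimenting, you can tear everything down by deleting the resource group, as mentioned earlier (the --yes flag skips the confirmation prompt and --no-wait returns immediately):

$ az group delete --name my-aks-rg --yes --no-wait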

Cloud Foundry database replatforming from Postgres to MySQL
https://www.starkandwayne.com/blog/cf-migration-from-postgres-to-mysql/
Thu, 13 Sep 2018

So you have a Cloud Foundry deployed with the `cf-release` codebase, and you have noticed that newer releases of Cloud Foundry switched to MySQL. Now you want to upgrade, but first you need to move off PostgreSQL to make future upgrades easier.

The good news is that this can be done fairly easily and without much downtime.
Please note that you should first test the migration in a sandbox/test environment.

LET'S GET STARTED!!

Prerequisites:

go get --insecure github.com/pivotal-cf/pg2mysql/cmd/pg2mysql
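
go get drops the compiled binary into your Go bin directory; make sure that directory is on your PATH so the pg2mysql commands used later resolve (plain shell, nothing specific to pg2mysql):

export PATH="$PATH:$(go env GOPATH)/bin"
which pg2mysql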

Manifest changes:

First, let's copy the current manifest (e.g. cf.yml) that you used to deploy your Cloud Foundry to a file named cf-mysql.yml:

cp cf.yml cf-mysql.yml

Next you will create config files to hold connection information to the ccdb, uaa, and diego databases:

mkdir -p db_configs
touch db_configs/ccdb.yml
touch db_configs/uaadb.yml
touch db_configs/diegodb.yml

Now you will need to populate the contents of each of the files using your favorite editor. For each file you will find the connection information in the cf-mysql.yml file.
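
If you are not sure where to look, a quick grep for the database names will point you at the current PostgreSQL settings (a rough sketch; the exact property names vary between cf-release versions):

grep -n -E 'ccdb|uaadb|diego' cf-mysql.yml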

Note: You'll need to edit anything in ALLCAPS to match your environment.

Edit the file db_configs/ccdb.yml:

mysql:
  database: ccdb
  username: ccadmin
  password: CHANGEME
  host: MYSQL_PROXY_IP
  port: 3306
postgresql:
  database: ccdb
  username: ccadmin
  password: CHANGEME
  host: POSTGRES_IP
  port: 5524
  ssl_mode: disable

Edit the file db_configs/uaadb.yml:

mysql:
  database: uaadb
  username: uaaadmin
  password: CHANGEME
  host: MYSQL_PROXY_IP
  port: 3306
postgresql:
  database: uaadb
  username: uaaadmin
  password: CHANGEME
  host: POSTGRES_IP
  port: 5524
  ssl_mode: disable

Edit the file db_configs/diegodb.yml:

mysql:
  database: diego
  username: diego
  password: CHANGEME
  host: MYSQL_PROXY_IP
  port: 3306
postgresql:
  database: diego
  username: diego
  password: CHANGEME
  host: POSTGRES_IP
  port: 5524
  ssl_mode: disable

Now back in your cf-mysql.yml file, find the postgres job in your manifest:

Change

  - name: postgres
    release: cf

To

  - name: postgres
    release: cf
    provides:
      postgres: {as: pgdb}

Also add the following below the postgres job (please note that the database names/passwords in seeded_databases need to reflect the ones we just used in the previous config files):

- default_networks:
  - name: cf1
  instances: 1
  name: mysql_proxy_z1
  networks:
  - name: cf1
    static_ips:
    - YOURSTATICIP
  resource_pool: small
  properties:
    cf_mysql:
      proxy:
        api_password: CHANGEME
  templates:
  - consumes:
      consul_client: nil
      consul_common: nil
      consul_server: nil
    name: consul_agent
    release: cf
  - name: proxy
    release: cf-mysql
    provides:
      mysql-database: {as: db}
  - name: metron_agent
    release: cf
  update:
    serial: true
- default_networks:
  - name: cf1
  instances: 1
  name: mysql_z1
  networks:
  - name: cf1
  persistent_disk: 102400
  resource_pool: large
  properties:
    cf_mysql:
      mysql:
        admin_password: CHANGEME
        cluster_health:
          password: CHANGEME
        galera_healthcheck:
          endpoint_password: CHANGEME
          db_password: CHANGEME
        seeded_databases:
          - name: ccdb
            username: ccadmin
            password: CHANGEME
          - name: uaadb
            username: uaaadmin
            password: CHANGEME
          - name: diego
            username: diego
            password: CHANGEME
  templates:
  - consumes:
      consul_client: nil
      consul_common: nil
      consul_server: nil
    name: consul_agent
    release: cf
  - name: mysql
    release: cf-mysql
  - name: metron_agent
    release: cf
  update:
    serial: true

Now find 3 entries in the manifest that refer to db_scheme: postgres and change them from:

    address: PG_STATIC_IP
    db_scheme: postgres
    port: 5524

to:

    address: MYSQL_PROXY_STATIC_IP
    db_scheme: mysql
    port: 3306

For the uaa database the naming is a bit different: the scheme name is postgresql, not postgres.

The diego database uses the following properties; change these accordingly as well:

db_host: MYSQL_PROXY_STATIC_IP
db_driver: mysql
db_port: 3306

Deployment:

Now the fun/scary part... deploying.

Deploy your BOSH manifest:

bosh -d YOURDEPLOYMENTNAME deploy cf-mysql.yml

The deployment may fail, but that is often due to drain scripts. In my case it was the api_worker jobs that failed during the redeploy, but it could be any job.
This can be solved by skipping the drain scripts and redeploying.
Here we force the jobs to stop while skipping their drain scripts:

bosh -d YOURDEPLOYMENTNAME stop api_worker_z1 --force --skip-drain
bosh -d YOURDEPLOYMENTNAME stop api_worker_z2 --force --skip-drain
bosh -d YOURDEPLOYMENTNAME deploy cf-mysql.yml

After a successful redeploy we are going to recreate the workers with the new config:

bosh -d YOURDEPLOYMENTNAME recreate api_worker_z1
bosh -d YOURDEPLOYMENTNAME recreate api_worker_z2

At this point the api servers will be pointed to the new mysql databases instead of the older postgres databases. If you log into CF at this point it will appear to be a completely new/clean environment. This is because we still need to migrate the data from postgres to mysql.

To perform the data migration we need to stop every job that connects to the databases:

  • uaa
  • api
  • api_worker
  • clock
  • diego_database

Loop through and stop each one of these, for example:

bosh -d YOURDEPLOYMENTNAME stop uaa_z1
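
If your job names follow the single-AZ z1 naming used above, a small loop saves some typing (the job names here are an assumption; adjust them to match the output of bosh instances for your deployment):

for job in uaa_z1 api_z1 api_worker_z1 clock_global diego_database_z1; do
  bosh -d YOURDEPLOYMENTNAME stop "$job"
done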

Data Migration:

Now for the actual migration. Run pg2mysql with the config files we previously populated:

pg2mysql -c db_configs/uaadb.yml migrate --truncate
pg2mysql -c db_configs/ccdb.yml migrate --truncate
pg2mysql -c db_configs/diegodb.yml migrate --truncate

Side note: the --truncate option will remove all the data in the target database; this is needed because of case-sensitivity differences between PostgreSQL and MySQL.

You can also use the tool to verify that all the data was migrated:

pg2mysql -c db_configs/uaadb.yml verify
pg2mysql -c db_configs/ccdb.yml verify
pg2mysql -c db_configs/diegodb.yml verify

If the tool successfully validates the migration, you are ready to start all the Cloud Foundry VMs we stopped earlier:

bosh -d YOURDEPLOYMENTNAME start

At this point you can check your CF and verify that all apps are up and running.

Please note that it can take a while before all apps are up and running again, depending on how many apps/cells you have.
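
A quick way to keep an eye on things while everything converges (standard bosh and cf CLI commands; the org and space are placeholders):

bosh -d YOURDEPLOYMENTNAME instances
cf target -o YOUR_ORG -s YOUR_SPACE
cf apps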

BUCC: I’ll be there for you, BBL
https://www.starkandwayne.com/blog/bucc-bbl-bffs/
Thu, 17 May 2018

image: NBC, Warner Bros

We wanted to let you know that two of your friends bbl and bucc are now best friends with each other! You can now use bbl to spin up a bucc fully automated on your target cloud environment.

Setup

The first thing we need is the latest bbl cli.

For this example, we are going to use gcp. Open your favorite terminal and create a workspace folder: ~/workspace/besties
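
In plain shell terms, that is:

mkdir -p ~/workspace/besties
cd ~/workspace/besties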

You'll need to set up a service account in gcp. This uses the gcloud sdk to provision and configure the service account.

gcloud iam service-accounts create <service-account-name>
gcloud iam service-accounts keys create --iam-account='<service-account-name>@<project-id>.iam.gserviceaccount.com' <service-account-name>.key.json
gcloud projects add-iam-policy-binding <project-id> --member='serviceAccount:<service-account-name>@<project-id>.iam.gserviceaccount.com' --role='roles/editor'

Make sure you replace <service-account-name> and <project-id> with your values.

Configure BBL

Then you need to export the following bbl environment variables in order to start the process.

export BBL_IAAS=gcp
export BBL_ENV_NAME=banana-env
mkdir $BBL_ENV_NAME && cd $BBL_ENV_NAME && git init

Run bbl plan to see where we are at. This will let us know if we need anything else before we can run bbl up to bootstrap the environment.

bbl plan --lb-type concourse

You will notice that this command is pretty intuitive, as it will ask you what you need to supply next.

In our case with gcp you need:

  • --gcp-service-account-key
  • --gcp-region europe-west4

The service account key is the *.key.json file you created earlier with the gcloud commands. Putting all the parameters together, you end up with this:

bbl plan --lb-type concourse  --gcp-service-account-key ~/<service-account-name>.key.json --gcp-region europe-west4

The region europe-west4 is an example, your region may vary.

A Dash of BUCC

Now, it's time for some bucc magic:

git submodule add https://github.com/starkandwayne/bucc.git bucc
ln -s bucc/bbl/*-director-override.sh .
ln -sr bucc/bbl/terraform/$BBL_IAAS/* terraform/
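
Since we ran git init earlier, this is also a good moment to commit the plan output and the bucc submodule before changing any infrastructure (plain git, nothing bbl-specific):

git add -A
git commit -m "bbl plan with bucc overrides"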

And the next simple step, just like bucc up, is to run:

bbl up

Once it's done, it will have created the following:

  • jumpbox
  • bosh-director (with all the bucc goodies)
  • load-balancer (so we can use concourse/uaa publicly)

What was all that bucc magic?

We copied down the bucc software, and then overlaid a bbl template that is compatible with bucc. Find out more about how bbl works with advanced configuration in their docs.

Friends with Benefits

Now, run the following commands so that all the goodies you get with bbl and bucc are loaded into your command shell.

eval "$(bbl print-env)"
eval "$(bucc/bin/bucc env)"

Run bucc info and you will see the URL of the load balancer that directs to Concourse.

Run bucc fly to set your target and log in to Concourse with the fly CLI.

Run bosh cloud-config and you will notice that the cloud-config is already pre-populated with the environment we just set up.

Jumpbox connected with OAUTH
https://www.starkandwayne.com/blog/jumpbox-connected-with-oauth/
Tue, 15 May 2018

Wouldn't it be nice to have a jumpbox available for your users without needing to maintain a list of users?

Well, we did it again! And made it happen...
It's called oauth-jumpbox

So let's get you up and running...
We are going to use the UAA from BUCC in this example;
if you are not familiar with BUCC, check out our blog post here.

First, you need a working BUCC:

git clone https://github.com/starkandwayne/bucc
cd bucc
bucc up

Upload a cloud-config:

cp src/bosh-deployment/warden/cloud-config.yml .

Add another static IP that we are going to use for the oauth-jumpbox by changing line 21 in cloud-config.yml.

From

static: [10.244.0.34]

To

static:
  - 10.244.0.34
  - 10.244.0.3

Upload our edited cloud-config

bosh update-cloud-config cloud-config.yml

Let's get the latest manifest, which is already configured to use the BUCC UAA:
wget https://raw.githubusercontent.com/cloudfoundry-community/oauth-jumpbox-boshrelease/master/manifests/oauth-jumpbox.yml

Upload the latest stemcell for warden; see https://bosh.io/stemcells/bosh-warden-boshlite-ubuntu-trusty-go_agent
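
For example, using bosh.io's direct-download URL for that stemcell (the /d/ form of the page linked above; any recent warden stemcell version should work):

bosh upload-stemcell https://bosh.io/d/stemcells/bosh-warden-boshlite-ubuntu-trusty-go_agent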

bosh deploy oauth-jumpbox.yml -d oauth-jumpbox

If the deployment succeeded, we can retrieve the generated password from CredHub that we need to use when creating the client in the UAA.

credhub get -n /bucc/oauth-jumpbox/client_secret
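
If you have jq installed and your credhub CLI supports JSON output, you can pull out just the value (purely a convenience; copying it from the regular output works just as well):

credhub get -n /bucc/oauth-jumpbox/client_secret --output-json | jq -r .value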

Take a note of the value and replace MY_SECRET below.

We can now create a client in the UAA.

bucc uaac
uaac client add jumpbox \
     --name jumpbox \
     --scope openid \
     --autoapprove true \
     --authorized_grant_types password,refresh_token \
     --secret "MY_SECRET"

Create a user in the UAA.

bucc uaac
uaac user add test@example.com -p test

Set up routes on your local machine:
bucc routes

Let's log in:
ssh "test@example.com"@10.244.0.3

And now you are logged in via the UAA in a busybox container.

We are really excited to hear your opinions or PRs.

Creating a Local BUCC-Lite With the BUCC-CLI
https://www.starkandwayne.com/blog/creating-a-local-bucc-lite-with-the-bucc-cli/
Wed, 28 Jun 2017

Consider reading our previous blog post Introducing BUCC (BOSH, UAA, Credhub and Concourse), as it explains a lot of why we created BUCC.

In this blog post we will walk through the steps of setting up BUCC on your local machine.

Prerequisites

Get BUCC:

git clone https://github.com/starkandwayne/bucc.git
cd bucc
direnv allow
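
The direnv allow step puts the repository's bin directory (where the bucc wrapper script lives) on your PATH. If you do not use direnv, an equivalent manual step is:

export PATH="$PWD/bin:$PATH"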

Buckle Up

With the bucc up command, we can choose between a lot of CPIs (e.g. aws, vsphere, gcp, etc.).

But here, we are going to use the Virtualbox CPI (which is the default).
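
If you do want to target another IaaS, the CPI is selected via a flag on bucc up, roughly like this (the exact flag name is an assumption here; bucc help will show what your version supports):

bucc up --cpi aws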

Run the bucc up command.

bucc up
Deployment manifest: '/home/workspace/bucc/src/bosh-deployment/bosh.yml'
Deployment state: '/home/workspace/bucc/state/state.json'
Started validating
  Downloading release 'bosh'... Skipped [Found in local cache] (00:00:00)
  Validating release 'bosh'... Finished (00:00:02)
  Downloading release 'bosh-virtualbox-cpi'... Skipped [Found in local cache] (00:00:00)
  Validating release 'bosh-virtualbox-cpi'... Finished (00:00:02)
  Downloading release 'os-conf'... Skipped [Found in local cache] (00:00:00)
  Validating release 'os-conf'... Finished (00:00:00)
  Downloading release 'bosh-warden-cpi'... Skipped [Found in local cache] (00:00:00)
  Validating release 'bosh-warden-cpi'... Finished (00:00:01)
  Downloading release 'garden-runc'... Skipped [Found in local cache] (00:00:00)
  Validating release 'garden-runc'... Finished (00:00:01)
  Downloading release 'uaa'... Skipped [Found in local cache] (00:00:00)
  Validating release 'uaa'... Finished (00:00:01)
  Downloading release 'concourse'... Skipped [Found in local cache] (00:00:00)
  Validating release 'concourse'... Finished (00:00:04)
  Downloading release 'credhub'... Skipped [Found in local cache] (00:00:00)
  Validating release 'credhub'... Finished (00:00:01)
  Validating cpi release... Finished (00:00:00)
  Validating deployment manifest... Finished (00:00:00)
  Downloading stemcell... Skipped [Found in local cache] (00:00:00)
  Validating stemcell... Finished (00:00:11)
Finished validating (00:00:26)
Started installing CPI
  Compiling package 'golang_1.7/21609f611781e8586e713cfd7ceb389cee429c5a'... Finished (00:00:20)
  Compiling package 'virtualbox_cpi/e293cbbb8359fd2cbbb9777b7b91fd142ab6c688'... Finished (00:00:11)
  Installing packages... Finished (00:00:03)
  Rendering job templates... Finished (00:00:00)
  Installing job 'virtualbox_cpi'... Finished (00:00:00)
Finished installing CPI (00:00:35)
Starting registry... Finished (00:00:00)
Uploading stemcell 'bosh-vsphere-esxi-ubuntu-trusty-go_agent/3421.9'... Finished (00:00:13)
Started deploying
  Creating VM for instance 'bosh/0' from stemcell 'sc-e6e04d15-ad19-4172-4ae7-149c42872916'... Finished (00:00:01)
  Waiting for the agent on VM 'vm-9116ed45-d227-4d1e-5efa-b5ce42a85e7f' to be ready... Finished (00:00:41)
  Creating disk... Finished (00:00:00)
  Attaching disk 'disk-3e7d9fea-ddd7-4854-76aa-d0e19b173775' to VM 'vm-9116ed45-d227-4d1e-5efa-b5ce42a85e7f'... Finished (00:00:06)
  Rendering job templates... Finished (00:00:10)
  Compiling package 'libseccomp/7a54b27a61b42980935e863d7060dc5a076b44d0'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'golang_1.7.1/91909d54d203acc915a4392b52c37716e15b5aff'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'apparmor/c8e25d84146677878c699ddc5cdd893030acb26f'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'golang/57b32e5a561e23701e9017d0abed6b9e925ec2ff'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'openjdk_1.8.0/01f437bd5f45eb8e3f1214cc7b5f54bdf9781118'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'golang_1.7/21609f611781e8586e713cfd7ceb389cee429c5a'... Finished (00:00:21)
  Compiling package 'ruby/c1086875b047d112e46756dcb63d8f19e63b3ac4'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'golang_1.7/c82ff355bb4bd412a4397dba778682293cd4f392'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'runc/68f36fbe363fefa5ec8d44b48ee30a56ac6e1e0e'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'worker_version/bc790e9ddcfcaf3f3f6dca527151c94209c35843'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'concourse_version/a5697bd67296d5f0d43afad90218a1b94911dff9'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'mysql/b7e73acc0bfe05f1c6cbfd97bf92d39b0d3155d5'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'libpq/661f5817afe24fa2f18946d2757bff63246b1d0d'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'archive_resource/eb8265d143eb11fe3137b690c40be3c2fd433510'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'generated_signing_key/00b5b02050bc6588cdfe9e523b2ba24a6c9de3c7'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'resource_discovery/0a1b0f4b48b2c8f9f9ed6019935ab3b1919c952a'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'pid_utils/a1f0590ea02d938b933a101c7438985721cd0ab4'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'docker_image_resource/700cb3f33b63a85615f428d98c2101a1f351b479'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'groundcrew/7b24f2447816b7d0df9dbebaf3b0cebbef3d9369'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'baggageclaim/160bf4cb540805dcd306a726ece308f634172c17'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'credhub/576fdf5fceb0cf06153e4efa70fc559c1bc263be'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'virtualbox_cpi/e293cbbb8359fd2cbbb9777b7b91fd142ab6c688'... Finished (00:00:19)
  Compiling package 'bosh_io_stemcell_resource/2fe7524c563d2d0cb21a88c4bc7dc3c06d15265f'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'health_monitor/e9317b2ad349f019e69261558afa587537f06f25'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'tsa/59b899a0c53649a1c67595f124c6d30c00c21051'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'verify_multidigest/8fc5d654cebad7725c34bb08b3f60b912db7094a'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'generated_tsa_host_key/00b5b02050bc6588cdfe9e523b2ba24a6c9de3c7'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'pool_resource/e5f10671fd1938767ae69dc2b660dfa1d14b866f'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'github_release_resource/b31ec6a1d10928f829ab4dc14552df684239c842'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'nginx/2ec2f63293bf6f544e95969bf5e5242bc226a800'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'vagrant_cloud_resource/5f2950486c152ad609fa5ad1d2bc4e8a5f599f18'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'semver_resource/318c561bdbfcb375da31e26a1153266e912d04da'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'tracker_resource/d9f716be444d1451e852a42ef98c3012e629d679'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'davcli/5f08f8d5ab3addd0e11171f739f072b107b30b8c'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'iptables/24e83997945f8817627223c6cee78ca9064f42d5'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'bosh_io_release_resource/1a801e059798d43ab5d6144068deafb90341893e'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'shadow/7a5e46357a33cafc8400a8e3e2e1f6d3a1159cb6'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'git_resource/b5f8f4e2046798e6d357c9df52ec3c28b132a1f4'... Skipped [Package already compiled] (00:00:00)
  Compiling package 's3cli/bb1c1976d221fdadf13a6bc873896cd5e2433580'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'nats/63ae42eb73527625307ff522fb402832b407321d'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'bosh_deployment_resource/fab4efd8b2ed27139eb15ef311f1202f00f7ace8'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'warden_cpi/29ac97b841a747dc238277ffc7d6bf59a278fa37'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'time_resource/96cfa3beb167825c5bc2b33135684dcb70011954'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'generated_worker_key/00b5b02050bc6588cdfe9e523b2ba24a6c9de3c7'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'guardian/c4acb6073abb4e17165253935c923dfbdfbfb188'... Skipped [Package already compiled] (00:00:00)
  Compiling package 's3_resource/522540c90709961062d9ad0978e974a7cf8751b1'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'uaa/517a7bcdf12725b6c1bd39f7c38de9796cd5eb2d'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'postgres/3b1089109c074984577a0bac1b38018d7a2890ef'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'lunaclient/b922e045db5246ec742f0c4d1496844942d6167a'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'tar/f2ea61c537d8eb8cb2d691ce51e8516b28fa5bb7'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'uaa_utils/20557445bf996af17995a5f13bf5f87000600f2e'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'cf_resource/432bc0f191661e40a463f188168b76477ce11198'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'atc/3f57e486f7bdaa1fdd8329157b42c55fd2739e67'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'busybox/fc652425c32d0dad62f45bca18e1899671e2e570'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'director/50af678ba068312e5de229b0558775ebae8d0892'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'btrfs_tools/6856973d0bc2dc673b6740f5e164ba77a77fd4a6'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'postgres-9.4/ded764a075ae7513d4718b7cf200642fdbf81ae4'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'hg_resource/cd6a634e984c26087aedc4ab646678fa3cb0560d'... Skipped [Package already compiled] (00:00:00)
  Compiling package 'fly/71b338bda01b0072b8110e04d454b3b3740402d9'... Skipped [Package already compiled] (00:00:00)
  Updating instance 'bosh/0'... Finished (00:01:50)
  Waiting for instance 'bosh/0' to be running... Finished (00:01:00)
  Running the post-start scripts 'bosh/0'... Finished (00:00:01)
Finished deploying (00:04:45)
Stopping registry... Finished (00:00:00)
Cleaning up rendered CPI jobs... Finished (00:00:00)
Succeeded

This can take up to 15 minutes depending on your internet connection.

We can now test if everything works with the following command:

bucc test

This will try to verify if BOSH, UAA, Credhub and Concourse are working and that we are able to login to those components.

With the following command we can log in to BOSH and start using it:

bucc bosh

From here on out you can use all the bosh2 commands like: bosh vms, bosh releases, etc.
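
For example, a quick smoke test of the freshly configured CLI (all standard bosh commands):

bosh env
bosh stemcells
bosh releases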

You can also check and set up the Concourse that has been deployed on your BUCC:

  • bucc info will show you the URL and login information for the Web UI of Concourse
  • bucc fly will download the fly CLI and connect and log into Concourse

bucc routes will set up a route on Linux/macOS so that traffic intended for BUCC-Lite can be redirected to its locally deployed BOSH VMs.
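
Under the hood this amounts to adding a static route for the warden network via the BOSH-Lite VM, something like the following on Linux (the addresses are the bosh-deployment VirtualBox defaults; bucc routes works them out for you):

sudo ip route add 10.244.0.0/16 via 192.168.50.6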

For all other commands use bucc help.

 bucc help
BUCC (BOSH UAA Credhud Concourse) CLI v0.1
  up -- creates the bucc VM with bosh create-env
  down -- deletes VM with bosh delete-env
  ssh -- ssh into the bucc VM
  env -- sourceable envrionment variables for cli clients
  int -- wrapper for 'bosh2 int' for the bosh manifest
  info -- displays info about bucc deployed endpoints
  vars -- print vars (yaml formatted) for use in pipelines
  bosh -- configures bosh cli
  credhub -- configures credhub cli
  uaac -- configures uaac cli
  fly -- configures fly cli
  routes -- add routes for virtualbox
  test -- check if all systems are operational

Habitat boshrelease
https://www.starkandwayne.com/blog/habitat-boshrelease/
Thu, 15 Jun 2017

So you have familiarized yourself with Habitat, but you're tired of spinning up Docker containers manually and just want to use BOSH to handle it all?

Well, that's why I have built a generic BOSH release for Habitat that should be able to deploy all Habitat services that are currently available.

Please note that this BOSH release uses BOSH links, so you should use it with bosh2. If you do not have bosh2, you can get it up and running within 20 minutes by using bucc.

Getting Started

Upload the latest habitat-boshrelease to your bosh

bosh upload-release https://github.com/cloudfoundry-community/habitat-boshrelease/releases/download/v0.0.2/habitat-0.0.2.tgz
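
You can confirm that the release arrived with a standard listing:

bosh releases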

Example Deployment

For this example, we are going to deploy a PostgreSQL cluster together with SHIELD.

---
name: shield
update:
  canaries: 1
  canary_watch_time: 30000-1200000
  max_in_flight: 5
  serial: false
  update_watch_time: 5000-1200000
instance_groups:
- name: database
  azs:
  - z1
  instances: 3
  persistent_disk_type: 5GB
  vm_type: default
  stemcell: default
  networks:
  - name: default
  jobs:
  - name: habitat
    release: habitat
    provides:
      habitat: {as: hab_postgres}
    consumes:
      habitat: {from: hab_postgres}
  properties:
    hab:
      service: starkandwayne/postgresql
      topology: leader
- name: shield
  azs:
  - z1
  instances: 1
  vm_type: default
  stemcell: default
  update:
    max_in_flight: 1
  networks:
  - name: default
  jobs:
  - name: habitat
    release: habitat
    consumes:
      habitat: {from: hab_postgres}
  properties:
    hab:
      service: starkandwayne/shield
      binds:
        - "database:postgresql.default"
releases:
- name: habitat
  version: latest
stemcells:
- alias: default
  os: ubuntu-trusty
  version: "3363.20"
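
Save the manifest as, for example, shield.yml (the filename here is just an assumption; any name works) and deploy it under the deployment name declared in the manifest:

bosh -d shield deploy shield.yml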

Configuration

Here you provide the service you want to install, which must be available in the Habitat depot:

 hab:
   service: starkandwayne/postgresql

The PostgreSQL db configuration is an example of a service which both provides and consumes itself:

 provides:
   habitat: {as: hab_postgres}
 consumes:
   habitat: {from: hab_postgres}

If you are not familiar with Habitat topologies, see https://www.habitat.sh/docs/run-packages-topologies/

 hab:
   topology: leader
