Introducing Mesosphere DC/OS on BOSH https://www.starkandwayne.com/blog/dcos/ Tue, 26 Sep 2017

DC/OS is an excellent platform for deploying distributed services like Apache Cassandra, Kafka, HDFS, or even SQL Server. It includes many additional features, like container orchestration and resource management, but this blog post will focus on the deployment and maintenance of DC/OS itself for running distributed services.

We have the following goals:

  • An enterprise-grade deployment of DC/OS.
  • The ability to perform upgrades to DC/OS and the virtual machines' OS.
  • Easy deployment and maintenance of virtual machines.
  • To monitor the health of virtual machines and resurrect terminated instances.

Complex systems require more than just ease of deployment. We build tools for production that are easy for operators to maintain and developers to consume. We care about "Day 2" and the impact that has on people.

Why BOSH?

BOSH is a great tool for provisioning and maintaining virtual machines and disks for large distributed systems. It can deploy VMs to AWS, Azure, GCE, vSphere, OpenStack and other infrastructures through CPIs, so targeting a different infrastructure requires very little effort. BOSH also can:

  • Mount persistent disks
  • Scale VM instances and types easily
  • Run post-install scripts
  • Perform health monitoring and resurrection of VMs
  • Use the pre-built, regularly CVE-patched stemcells for Ubuntu & CentOS published on bosh.io

BOSH is also a software/packaging lifecycle management tool. A BOSH manifest combines a BOSH release and a stemcell into a YAML file that describes the software and operating system for a set of servers. BOSH uses this manifest to deploy to the targeted infrastructure, monitoring the health of the virtual machines and resurrecting any that are destroyed.

BOSH was originally created for deploying Cloud Foundry, a PaaS for deploying, scaling and managing stateless applications. It can also be used to deploy services like HAProxy, Redis and etcd so it isn't limited to just deploying Cloud Foundry.

Overview of How to Deploy DC/OS on BOSH

First, you will need to deploy a BOSH Director. Instructions for doing so can be found here: https://bosh.io/docs/init.html

Next, you will need a BOSH release for DC/OS, which describes how the software on a DC/OS deployment should look. The release includes the configuration and declaration of the packages that will run on each type of VM (called a job). The BOSH release for DC/OS is located here; it defines the mesos-agent, mesos-public-agent and mesos-master jobs, which BOSH installs onto the VM instances of those jobs when deployed.

The next step is to create a BOSH Deployment Manifest which contains:

  • The types of VMs to create (instance type, Stemcell/OS, disk).
  • The software that should run on these VMs (releases, packages).
  • The network used by the VMs (IP ranges, subnets, security groups, availability zones).
  • The number of VMs of each type (instance count).

More details:

  • An example of a manifest is here.
  • There are additional instructions for deployment here; a minimal manifest sketch also follows below.
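To make the shape concrete, here is a minimal sketch of such a manifest. This is not the canonical example from the release; the release version, vm_type, and network names below are assumptions you would align with your own cloud-config:

# Sketch of a DC/OS deployment manifest; adjust names to your cloud-config.
name: dcos

releases:
- name: dcos
  version: latest

stemcells:
- alias: default
  os: ubuntu-trusty
  version: latest

instance_groups:
- name: mesos-master
  instances: 3
  jobs:
  - name: mesos-master
    release: dcos
  vm_type: default
  stemcell: default
  persistent_disk: 10240
  networks: [{name: default}]
- name: mesos-agent
  instances: 5
  jobs:
  - name: mesos-agent
    release: dcos
  vm_type: default
  stemcell: default
  networks: [{name: default}]

update:
  canaries: 1
  max_in_flight: 1
  canary_watch_time: 30000-120000
  update_watch_time: 30000-120000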

How to Upgrade DC/OS

This is where the real power of the DC/OS BOSH release shows. If you want to upgrade the version of DC/OS in a deployment, you simply change the version of DC/OS in the deployment manifest and redeploy. All of the version handling is built in.

If there is a newer stemcell (OS) image you would like to use, for a patched CVE or a newer kernel, you upload the new stemcell to the BOSH Director and deploy. BOSH will recreate each of the VMs, reattach the persistent storage and start the necessary components.
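In practice that is just two commands with the v2 CLI (the stemcell URL and deployment name below are examples):

# Upload the newer stemcell, then redeploy; BOSH rolls each VM for you.
bosh upload-stemcell https://bosh.io/d/stemcells/bosh-aws-xen-hvm-ubuntu-trusty-go_agent
bosh -d dcos deploy manifest.yml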

Scaling is simple too. Need more mesos-agents? Modify the instances: count in the deployment manifest and redeploy. One or more VMs will be created and the install scripts will be run automatically from the provision server.
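For example, assuming an instance group named mesos-agent as in the release:

instance_groups:
- name: mesos-agent
  instances: 8   # was 5; BOSH creates the three new VMs on the next deploy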

Summary

Overall, BOSH is a great tool to:

  • Provision, monitor and recreate servers on most known IaaS providers
  • Manage software life cycle
  • Manage DC/OS upgrades

We hope others will find this useful! We are in both dcos-community.slack.com and cloudfoundry.slack.com as @lnguyen so feel free to ask questions there.

Running your own PyPI Server on Cloud Foundry https://www.starkandwayne.com/blog/running-your-own-pypi-server-on-cloud-foundry/ Mon, 01 Jun 2015

I was recently asked if I could run a PyPI server on Cloud Foundry. After a quick Google search I found PyPICloud and discovered that it is possible!

Deploying PyPI to Cloud Foundry

Start by cloning this repo:

git clone https://github.com/mathcamp/pypicloud
cd pypicloud

There is a pending pull request here for one of the missing components (waitress). For now the workaround is to modify setup.py and add waitress yourself:

diff --git a/setup.py b/setup.py
index ae2568a..39d5db7 100644
--- a/setup.py
+++ b/setup.py
@@ -27,6 +27,7 @@ REQUIREMENTS = [
     'pyramid_tm',
     'six',
     'transaction',
+    'waitress',
     'zope.sqlalchemy',
 ]

Next we need to install pypicloud locally with pip so we can run its config generator:

pip install pypicloud
pypicloud-make-config -t server.ini

You will be prompted for the admin user and S3 bucket to store the Python packages. While it is possible to store these files in the local operating system instead of S3, you risk losing these files if the PyPI server is redeployed on Cloud Foundry.

Note: Storing the Python packages on the file system is a terrible idea; you will lose your packages. Only use S3 to store the package files.

The port the PyPI server listens on needs to be configured for Cloud Foundry. In the server.ini that was created, the following lines need to be changed from:

[server:main]
use = egg:waitress#main
host = 0.0.0.0
port = 6543

to:

[server:main]
use = egg:waitress#main
host = 0.0.0.0
port = %(http_port)s

Finally we need to create a Procfile:

web: pserve server.ini http_port=$PORT

Now all we need to do is push to Cloud Foundry:

cf push pypicloud
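Once the app is up, you can point pip at your new index. A quick sanity check, assuming the route name from the push above and pypicloud's /simple/ endpoint:

# Replace YOUR-CF-DOMAIN with your Cloud Foundry apps domain; the package
# must already be uploaded (or pypicloud's PyPI fallback enabled).
pip install --index-url https://pypicloud.YOUR-CF-DOMAIN/simple/ some-package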

That's it!

Multiple BOSHs and users on the same machine. https://www.starkandwayne.com/blog/managing-multiple-boshs/ Thu, 19 Mar 2015

Sometimes you have multiple BOSHs and multiple users on the same jumpbox. This is a huge problem because you can only target one BOSH at a time. This makes me sad :(

We now have a great solution for this! Thanks to zimbatm, we can install direnv. Follow the instructions here.

Now we can add context to the folders where we store our manifests. Run the following command in each folder you want tied to a different BOSH:

cat >> .envrc <<<'export BOSH_CONFIG="${PWD}/.bosh_config"'

That's it! Every time you cd into that folder, it will target that BOSH. Of course, you will still have issues if multiple people are trying to use the same BOSH at once.
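For a concrete flow (the paths and director address are hypothetical, and note that direnv refuses to load an .envrc until you approve it):

mkdir -p ~/deployments/prod-bosh && cd ~/deployments/prod-bosh
echo 'export BOSH_CONFIG="${PWD}/.bosh_config"' >> .envrc
direnv allow                              # approve the new .envrc
bosh target https://192.168.50.4:25555    # saved to ./.bosh_config, not ~/.bosh_config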

Increasing BOSH-lite vm memory size https://www.starkandwayne.com/blog/increasing-bosh-lite-vm-memory-size/ Wed, 25 Feb 2015

The BOSH-lite VM defaults to 6GB of memory. Sometimes you just need more memory to do nefarious things with BOSH.

VM_MEMORY=10240 vagrant up

Done! You have a BOSH-lite VM with 10GB of memory.

Introducing cf info plugin https://www.starkandwayne.com/blog/introduction-cf-info-plugin/ Fri, 05 Dec 2014

The Cloud Foundry CLI has recently added support for plugins.

A piece of info I often forget is which org/space, or for that matter which user, I'm logged in as.

I created this simple plugin that outputs user info so it is easily visible.

How to Install

go get github.com/cloudfoundry-community/info
cf install-plugin $GOPATH/bin/info

Usage

cf info

Sample output

Current User Info
User: admin
Org: dev
Space: dev
API Version: 2.18.0
API Endpoint: https://api.10.244.0.34.xip.io

Note: this requires cf CLI v6.7.0+.

Repo: https://github.com/cloudfoundry-community/info

Edit: turns out that if you just do cf t, it shows the same info, even if you don't try to target anything.

Simple Golang OAuth client for Cloud Foundry https://www.starkandwayne.com/blog/simple-golang-oauth-client-for-cf/ Tue, 11 Nov 2014

Cloud Foundry UAA allows OAuth clients to leverage the existing users of Cloud Foundry. This lets you create apps without maintaining another user database: free single sign-on (SSO) for all your applications!

Golang makes it easy to write applications that use SSO by acting as OAuth clients for UAA (and your pretty/themed login-server).

First we need to add a client to the UAA configuration:

      cf-go-client-example:
        access-token-validity: 1209600
        authorities: scim.write,scim.read,cloud_controller.read,cloud_controller.write,password.write,uaa.admin,uaa.resource,cloud_controller.admin,billing.admin
        authorized-grant-types: authorization_code,client_credentials
        override: true
        redirect-uri: https://cf-go-client-example.10.244.0.34.xip.io/oauth2callback
        refresh-token-validity: 1209600
        scope: openid,cloud_controller.read,cloud_controller.write,password.write,console.admin,console.support
        secret: c1oudc0w

Please note the authorizations example exposes many scopes & authorities. You can scope it back for your use cases.
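If you manage UAA by hand rather than through the deployment manifest, the same client can be registered with the uaac CLI. A rough equivalent of the YAML above, trimmed to the essential scopes (the admin client secret is whatever your UAA was deployed with):

uaac target https://uaa.10.244.0.34.xip.io --skip-ssl-validation
uaac token client get admin -s ADMIN_CLIENT_SECRET
uaac client add cf-go-client-example \
  --secret c1oudc0w \
  --authorized_grant_types authorization_code,client_credentials \
  --scope "openid,cloud_controller.read,cloud_controller.write" \
  --redirect_uri https://cf-go-client-example.10.244.0.34.xip.io/oauth2callback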

main.go

package main
import (
	"github.com/go-martini/martini"
	gooauth2 "github.com/golang/oauth2"
	"github.com/martini-contrib/oauth2"
	"github.com/martini-contrib/sessions"
)
func main() {
	m := martini.Classic()
	oauthOpts := &gooauth2.Options{
		ClientID:     "cf-go-client-example",
		ClientSecret: "c1oudc0w",
		RedirectURL:  "https://cf-go-client-example.10.244.0.34.xip.io/oauth2callback",
		Scopes:       []string{""},
	}
	cf := oauth2.NewOAuth2Provider(oauthOpts, "https://login.10.244.0.34.xip.io/oauth/authorize",
		"https://uaa.10.244.0.34.xip.io/oauth/token")
	m.Handlers(
		sessions.Sessions("my_session", sessions.NewCookieStore([]byte("secret123"))),
		cf,
		oauth2.LoginRequired,
		martini.Logger(),
		martini.Static("public"),
	)
	m.Get("/", func(tokens oauth2.Tokens) string {
		if tokens.IsExpired() {
			return "not logged in, or the access token is expired"
		}
		return "logged in"
	})
	m.Run()
}

That's it! Simple as that.

m := martini.Classic()

We use martini for this because it has great plugins.

oauthOpts := &gooauth2.Options{
		ClientID:     "cf-go-client-example",
		ClientSecret: "c1oudc0w",
		RedirectURL:  "https://cf-go-client-example.10.244.0.34.xip.io/oauth2callback",
		Scopes:       []string{""},
	}
	cf := oauth2.NewOAuth2Provider(oauthOpts, "https://login.10.244.0.34.xip.io/oauth/authorize",
		"https://uaa.10.244.0.34.xip.io/oauth/token")

This sets up our OAuth handler. Note that the redirect URL must match the one set in the UAA manifest or it will not work.

m.Handlers(
		sessions.Sessions("my_session", sessions.NewCookieStore([]byte("secret123"))),
		cf,
		oauth2.LoginRequired,
		martini.Logger(),
		martini.Static("public"),
	)

These handlers force all connections to be authenticated. The cookie session is needed to keep per-user state.

	m.Get("/restrict", oauth2.LoginRequired, func(tokens oauth2.Tokens) string {
		return tokens.Access()
	})

Alternatively, if you don't want every request to be authenticated, you can require login per endpoint; with martini you can chain handlers.

	m.Get("/", func(tokens oauth2.Tokens) string {
		if tokens.IsExpired() {
			return "not logged in, or the access token is expired"
		}
		return "logged in"
	})
	m.Run()

That's it: you have an OAuth client for Cloud Foundry.

The code can be found: https://github.com/cloudfoundry-community/cf-go-client-example

Running Galaxy on Cloud Foundry https://www.starkandwayne.com/blog/running-galaxy-on-cloud-foundry/ Tue, 04 Nov 2014

Galaxy is an open, web-based platform for data-intensive biomedical research.

Setting up app

First we need to clone the repo:

hg clone https://bitbucket.org/galaxy/galaxy-dist/
cd galaxy-dist
hg update stable

Now that we have the repo cloned, we need to do a few things.

First, create a Procfile:

web: sh run.sh

Next we need a requirements.txt. For some reason some of the eggs don't download properly, so we use pip to install them instead:

pyyaml
bioblend
paramiko
simplejson

A manifest.yml is needed for cf-ssh:

---
applications:
- name: galaxy
  memory: 1G
  instances: 1
  services:
  - galaxy-pg

Let's create the app and a service to bind to it. I'm showing this on https://run.pivotal.io/, so your service names may vary. We also need a tool called cf-pancake here; it reads all the services bound to an app and turns them into environment variables, which we use later.

cf cs elephantsql turtle galaxy-pg
cf push galaxy --no-start
cf bs galaxy galaxy-pg
cf-pancake set-env galaxy

Note that a variable named ELEPHANTSQL_URI was set. This is used later on.

Create the config file.

cp config/galaxy.ini.sample config/galaxy.ini

Look for the corresponding lines and change them to the following:

port = PORT
host = 0.0.0.0
...
database_connection = ELEPHANTSQL_URI

Note: PORT and ELEPHANTSQL_URI are placeholders; the next step substitutes in the real values at startup.

Add the following lines to run.sh, right after #!/bin/sh:

sed -i "s|ELEPHANTSQL_URI|$ELEPHANTSQL_URI|g" config/galaxy.ini
sed -i "s/PORT/$PORT/g" config/galaxy.ini

Database migrations take a while, so to run them before starting the app we use cf-ssh and run the server once to create the database tables:

cf-ssh manifest.yml
sh run.sh
cf push galaxy

And that's it. You have Galaxy up and running and can play with it.

I got it up and running on Pivotal Web Services.

Fixing Loggregator problems one problem at a time. https://www.starkandwayne.com/blog/fixing-loggregator-problems-one-problem-at-a-time/ Thu, 30 Oct 2014

Cloud Foundry Loggregator likes to break on me. This is going to be my collection of how I (try to) fix Loggregator.

Problem 1

Background

I've just updated CF from v190 -> v191
CF Version: 191
Date of Problem: 10/30/2014

➜  ~  cf logs galaxy
FAILED
Error dialing loggregator server: websocket: bad handshake.
Please ask your Cloud Foundry Operator to check the platform configuration (loggregator endpoint is wss://loggregator.10.244.0.34.xip.io:4443).

Bad handshake?! Wtf? Let's see what CF_TRACE gives me.

WEBSOCKET REQUEST: [2014-10-30T17:27:48Z]
GET /tail/?app=1fd65ad8-4324-4fcd-9b2e-e8b0e0c6d172 HTTP/1.1
Host: wss://loggregator.10.244.0.34.xip.io:4443
Upgrade: websocket
Connection: Upgrade
Sec-WebSocket-Version: 13
Sec-WebSocket-Key: [HIDDEN]
Authorization: [PRIVATE DATA HIDDEN]
Origin: http://localhost
WEBSOCKET RESPONSE: [2014-10-30T17:27:48Z]
HTTP/1.1 404 Not Found
Date: Thu, 30 Oct 2014 17:27:07 GMT
Content-Length: 81
Content-Type: text/plain; charset=utf-8
X-Cf-Requestid: 0fe6dece-f166-4329-4074-dce4d793c116
X-Cf-Routererror: unknown_route
FAILED
Error dialing loggregator server: websocket: bad handshake.
Please ask your Cloud Foundry Operator to check the platform configuration (loggregator endpoint is wss://loggregator.10.244.0.34.xip.io:4443).
FAILED

My app is not found :(

Investigation

List of jobs upgraded

 Started updating job ha_proxy_z1 > ha_proxy_z1/0
  Started updating job router_z1 > router_z1/0
  Started updating job api_z1 > api_z1/0
  Started updating job nats_z1 > nats_z1/0
  Started updating job api_worker_z1 > api_worker_z1/0
  Started updating job runner_z1
  Started updating job runner_z1 > runner_z1/0
  Started updating job clock_global > clock_global/0
     Done updating job nats_z1 > nats_z1/0 (00:00:23)
     Done updating job api_worker_z1 > api_worker_z1/0 (00:00:29)
     Done updating job ha_proxy_z1 > ha_proxy_z1/0 (00:01:21)
     Done updating job router_z1 > router_z1/0 (00:01:25)
     Done updating job clock_global > clock_global/0 (00:01:31)
     Done updating job api_z1 > api_z1/0 (00:01:57)
     Done updating job runner_z1 > runner_z1/0 (00:03:53)
  Started updating job runner_z1 > runner_z1/1. Done (00:03:18)
  Started updating job runner_z1 > runner_z1/2. Done (00:02:53)
     Done updating job runner_z1 (00:10:04)

Nothing about loggregator...

I got lucky this time and looked at the right machine on the first try. The Loggregator traffic controller logs turned out to be the culprit:

{"timestamp":1414694116.569226742,"process_id":1497,"source":"loggregator trafficcontroller","log_level":"error","message":"Publishing router.register failed: nats: Connection Closed","data":null,"file":"/var/vcap/data/compile/loggregator_trafficcontroller/loggregator/src/github.com/cloudfoundry/loggregatorlib/cfcomponent/registrars/routerregistrar/router_registrar.go","line":118,"method":"github.com/cloudfoundry/loggregatorlib/cfcomponent/registrars/routerregistrar.func·004"}

Solution

# on loggregator traffic controller box
monit stop all
monit start all

So what happened here: it seems NATS was updated while the Loggregator traffic controller still held its old connection. It could no longer talk to NATS to register its route with the router. A simple restart of the traffic controller fixed it.

Terraforming workloads with Docker and Digital Ocean https://www.starkandwayne.com/blog/terraforming-workloads-with-docker-and-digital-ocean/ Thu, 16 Oct 2014

Terraform is a great tool for automating the creation of infrastructure, and it supports IaaS, PaaS, and SaaS products.

Docker is a great tool for creating containers, which make apps portable.

Digital Ocean is a great IaaS with a great API and fast download speeds.

Problem

I'm lazy and my internet is slow. Cloud Foundry is now 3GB+, so building a new release and uploading it to S3 (so the community can download and use it without building and fetching all the blobs themselves) takes two hours from home...

Docker was great for setting up a portable environment that I could use both on my laptop and on Digital Ocean, and Digital Ocean has much faster upload speeds than my home connection. Terraform lets me set up the infrastructure easily, run my workload, and then delete the VM on Digital Ocean.

This generalizes to any workload: build a Docker image that does the work, run it on a throwaway VM, and delete everything once it finishes.

Let's get started!

Terraform is very easy to use.

cf-upload.tf

provider "digitalocean" {
    token = "${var.do_token}"
}
resource "digitalocean_droplet" "docker" {
    image = "docker"
    name = "docker"
    region = "nyc3"
    size = "8gb"
    ssh_keys = ["${var.ssh_key_id}"]
    connection {
        user = "root"
        key_file = "${var.key_path}"
    }
    provisioner "remote-exec" {
        inline = [
        "docker run lnguyen/cf-share-release /workspace/create_release.sh ${var.cf_version} ${var.aws_access_key} ${var.aws_secret_key}",
        ]
    }
}

variables.tf

variable "aws_access_key" {}
variable "aws_secret_key" {}
variable "do_token" {}
variable "key_path" {}
variable "cf_version" {}
variable "ssh_key_id" {}

So what's going on here?

provider "digitalocean" {
    token = "${var.do_token}"
}

This creates our connection to Digital Ocean

resource "digitalocean_droplet" "docker" {
    image = "docker"
    name = "docker"
    region = "nyc2"
    size = "2gb"
    ssh_keys = ["${var.ssh_key_id}"]
    connection {
        user = "root"
        key_file = "${var.key_path}"
    }
    provisioner "remote-exec" {
        inline = [
        "docker run lnguyen/cf-share-release /workspace/create_release.sh ${var.cf_version} ${var.aws_access_key} ${var.aws_secret_key}",
        ]
    }
}

This creates a Docker VM on Digital Ocean and then runs the Docker container.

And that's all you need to run a workload!

I've even made a simple Makefile that will create the VM, run the workload, and delete the VM:

all: plan apply destroy
plan:
	terraform plan -var-file terraform.tfvars -out terraform.tfplan
apply:
	terraform apply -var-file terraform.tfvars
destroy:
	terraform plan -destroy -var-file terraform.tfvars -out terraform.tfplan
	terraform apply terraform.tfplan
clean:
	rm terraform.tfplan
	rm terraform.tfstate

So the whole thing is automated with a simple make. Adding it to a CI server automates it further.

Repo: https://github.com/longnguyen11288/terraform-cf-upload

Conclusion

This is a combination of three great tools that lets us, as developers, automate workloads that need more CPU power or network bandwidth than we have locally. Hopefully this can help you automate a workload!
