Jeremy R Budnack – posts from the Stark & Wayne blog
https://www.starkandwayne.com/blog/author/jeremyrbudnack/

Brokenstack? – Rescuing Your Instance From The Brink of Oblivion
https://www.starkandwayne.com/blog/brokenstack-rescuing-your-instance-from-the-brink-of-oblivion/ | Thu, 05 Jan 2017

Have you ever had OpenStack do something to your instance that put it in an unbootable state? Did YOU do something to your instance that put it into an unbootable state?

Modern IaaS wisdom teaches us to treat instances like "cattle": we should be able to blow one away and replace it at any time. However, we still have dev environments, jump boxes, and the like that end up being treated as "pets". When these instances get in trouble, we panic.

In today's story, we happen upon an OpenStack admin who decided to try migrating such an instance from one node to another to better distribute the memory load. That brings us to another axiom: test OpenStack's migrate feature on a throwaway VM BEFORE attempting to move an instance you care about.

So imagine the ensuing panic when said migration failed with a 401 error. Gulp.

As with any other SNAFU involving nova-compute, we figure out which host we're running on and the virsh instance name:

# nova show 928907ae-4711-4863-9add-cff4f0ff161e
+--------------------------------------+--------------------+
| Property                             | Value              |
+--------------------------------------+--------------------+
| OS-DCF:diskConfig                    | AUTO               |
| OS-EXT-AZ:availability_zone          | nova               |
| OS-EXT-SRV-ATTR:host                 | node-5.domain.tld  |
| OS-EXT-SRV-ATTR:hypervisor_hostname  | node-5.domain.tld  |
| OS-EXT-SRV-ATTR:instance_name        | instance-00000052  |
...
+--------------------------------------+--------------------+

Then SSH directly to the compute node to see what KVM / QEMU's view of the world is.

# virsh list --all
 Id    Name                           State
----------------------------------------------------
 26    instance-00000056              running
 70    instance-00000052              shut down
...

Turns out, OpenStack didn't delete the instance; it just left the instance's folder under a renamed path, like so:

# ls /var/lib/nova/instances
0ecaff2c-d73a-483f-97d4-3425faa8355e
928907ae-4711-4863-9add-cff4f0ff161e_resize
...
# ls -Alh /var/lib/nova/instances/928907ae-4711-4863-9add-cff4f0ff161e_resize
total 11G
-rw------- 1 root root  46K Jan  5 15:03 console.log
-rw-r--r-- 1 root root  11G Jan  5 15:16 disk
-rw-r--r-- 1 root root 410K Jun  1  2016 disk.config
-rw-r--r-- 1 nova nova  162 Jun  1  2016 disk.info
-rw-r--r-- 1 nova nova 2.9K Jan  5 10:22 libvirt.xml

So all we need to do is rename the directory so it no longer has the _resize suffix.
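
With the paths from the listing above, that's a one-liner (a sketch - substitute your own instance UUID):

# mv /var/lib/nova/instances/928907ae-4711-4863-9add-cff4f0ff161e_resize /var/lib/nova/instances/928907ae-4711-4863-9add-cff4f0ff161e

With the directory back under its original name, start the domain and reset Nova's view of the instance: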

# virsh start instance-00000052
Domain instance-00000052 started
# nova reset-state --active 928907ae-4711-4863-9add-cff4f0ff161e

All is well, right?

Give root password for maintenance (or type Control-D to continue):

Not yet. Looks like the OS has decided that something is amiss - possibly a corrupted root filesystem?!?! All you need to do is type the root password and... oh wait, this is a cloud image. You don't KNOW the root password!

NOTE: It was brought to my attention after posting this that the next logical step is to use nova rescue, which essentially boots a rescue instance with the boot disk of the instance in question attached, so you can perform whatever repairs you need. Try that first. If nova rescue does not work for your particular situation - read on.

If this were your desktop, you'd simply pop the CentOS 7 DVD into the drive and attempt recovery. Let's do that!

Back on the compute node, use virsh to add a CD-ROM drive to your instance:

# virsh
virsh # edit instance-00000052

Under <os>, ensure that we're also going to boot from cdrom:

  <os>
    <type arch='x86_64' machine='pc-i440fx-2.0'>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
    <smbios mode='sysinfo'/>
  </os>

Next, add the following device under <devices>:

    <disk type='block' device='cdrom'>
      <driver name='qemu' type='raw'/>
      <target dev='hdc' bus='ide'/>
      <readonly/>
      <address type='drive' controller='0' bus='1' target='0' unit='0'/>
    </disk>

Next, start the instance and attach the ISO:

# virsh start instance-00000052
# virsh attach-disk instance-00000052 /var/lib/nova/workspace/CentOS-7-x86_64-DVD-1611.iso hdc --type cdrom --mode readonly

Then, you can actually go into Horizon, click the Console link for your instance, and operate the console from there. From the console, click Send CTRLALTDEL to restart your instance and boot from the ISO.

You may be tempted to finally say "YAY! I can finally fix the filesystem and boot my instance - almost there." Then some jerk keeps restarting your instance before you can run fsck or xfs_repair. That prankster is nova-compute. To tell him to "cut it out", simply reset the status on the instance after you hit Send CTRLALTDEL.

# nova reset-state --active 928907ae-4711-4863-9add-cff4f0ff161e

Do what you need to do: set the root password (this may help), restart your instance from the local disk, and fix what's wrong.

The underlying problem that caused all of this seemed to be twofold. First, xfs_repair found some errors in the root filesystem and promptly fixed them. Second, a block device I was using for data storage didn't detach cleanly. Early in the process, when virsh start didn't initially work, I went to Horizon and detached the block device, planning to reattach it once I determined all was well with the OS. During boot, however, the OS was still trying to mount that device per its /etc/fstab - and that wasn't apparent from what I was seeing at the console.
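
In hindsight, one guard against exactly this failure mode is the nofail mount option, which lets boot continue when a listed device is absent. A hypothetical /etc/fstab entry for such a data volume (the device, mountpoint, and filesystem here are illustrative):

/dev/vdb    /data    xfs    defaults,nofail    0 2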

When finished, make sure you cleanly power down your instance, go back to the compute node, and use virsh to remove the changes you made to get the cdrom drive to work. Then, start your instance back up using the Horizon UI.

Also - You should probably reset that root password.

Anyway, this may have gone from Buffalo to New York by way of Chicago, but at least now we know what lengths we can go to if something goes south on an instance you care about (and ideally, you SHOULDN'T care about them).

Branding – Changing the Logo on the Cloud Foundry UAA Login Page
https://www.starkandwayne.com/blog/branding-changing-the-logo-on-the-cloud-foundry-uaa-login-page/ | Fri, 04 Nov 2016

Let's say that you just deployed your shiny new Cloud Foundry at work, and are showing it off to your friends. As you and your friends hover over the login page, Bill from Corporate Compliance happens to stop by - he cranes his neck, sees your monitor, and says "Hey, that's great! Where's our logo? Our logo should be on that login page." He whips out his phone and sends you the company standard 200 x 200 png file and says "Yeah, if you could get that logo updated by the end of the week, that'd be greeeat."

Your friends disperse, and like any good BOSH operator with a "can do" attitude, you go directly to the UAA release and start looking at the UAA job's spec file for any properties you can change.

Aha! You see it:

#Branding/Customization
login.branding.company_name:
  description: This name is used on the UAA Pages and in account management related communication in UAA
login.branding.product_logo:
  description: This is a base64 encoded PNG image which will be used as the logo on all UAA pages like Login, Sign Up etc.
login.branding.square_logo:
  description: This is a base64 encoded PNG image which will be used as the favicon for the UAA pages
login.branding.footer_legal_text:
  description: This text appears on the footer of all UAA pages
login.branding.footer_links:
  description: These links appear on the footer of all UAA pages. You may choose to add multiple urls for things like Support, Terms of Service etc.
  example:
    linkDisplayName: linkDisplayUrl

So you look at the image file. You look at your Cloud Foundry manifest. You look again at the spec. You look again at the image file.

"How do I cram this file into my manifest? Can I just paste in the contents of the file?" you wonder aloud. You type cat logo.png, hoping this will be a 5 minute copy/paste job. You frantically hit the mute button as your terminal starts beeping like R2D2 on caffeine pills. Nope!

I'll spare you the rest of the narrative. As specified in the spec file, you need to provide a base64 encoded image in your manifest. To generate this, run the following command:

cat logo.png | base64 | tr -d '\n' > logo.png.base64

To break this down: the base64 command encodes the PNG file, but its output contains newline characters, which we do not want in our manifest - we want it all on one line. The tr command strips them out.
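
Before pasting the result anywhere, you can sanity-check the encoding by round-tripping it - a quick check, assuming GNU coreutils:

base64 -d logo.png.base64 > check.png
cmp logo.png check.png && echo "round trip OK"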

Once you've done this, simply copy the one-liner from the resulting file and paste it into your manifest. If you do it in VIM, for example, it'll look like this (ellipses added for readability):

...
  login:
    branding:
      company_name: Stark & Wayne
      product_logo: /9j/4AAQSkZJRgABAgAAAQABAAD/2wBDAAUDBAQE...

Once you paste your logo, if you arrow up a line, VIM will look like this due to the obscenely long line of text you have just pasted:

...
  login:
    asset_base_url: /resources/trustedanalytics
    branding:
      company_name: Stark & Wayne
@
@
@
@
@
@
@
...

Save your manifest and deploy. Your site is now compliant! For now.
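
Deploying is the usual BOSH two-step - a sketch, assuming the v1 CLI used elsewhere in these posts and a manifest named cf-manifest.yml (the filename is illustrative):

bosh deployment cf-manifest.yml
bosh deploy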

Using the /check_token Endpoint in Cloud Foundry's UAA
https://www.starkandwayne.com/blog/using-the-check-token-endpoint/ | Mon, 24 Oct 2016

The goal of this exercise is to figure out how to use the /check_token endpoint of the UAA to authenticate clients. This is useful if you want to use the UAA to authenticate a particular client (in my case, an AWS Lambda function that calls my API while standing up a CloudFormation stack), as opposed to authenticating individual users.

First, your application needs to be able to talk to the UAA and validate tokens. To enable this, you need to create a new UAA client and secret for your application to use. To do this, we use the uaac command like so:

$ uaac target uaa.mycfenvironment.com --skip-ssl-validation
Target: https://uaa.mycfenvironment.com
$ uaac token client get admin -s myp@ssw0rd
Successfully fetched token via client credentials grant.
Target: https://uaa.mycfenvironment.com
Context: admin, from client admin
$ uaac client add my-api --scope uaa.resource --authorities uaa.resource --authorized_grant_types refresh_token -s myp@ssw0rd1
  scope: uaa.resource
  client_id: my-api
  resource_ids: none
  authorized_grant_types: refresh_token
  autoapprove:
  action: none
  authorities: uaa.resource
  lastmodified: 1477062635696
  id: my-api

Second, whoever (or whatever) is using your application needs to be able to authenticate to it. So, let's add another client with the authorized grant type of "client_credentials". These will be the client credentials used by the Lambda function itself to talk to my API.

$ uaac client add my-api-client --authorized_grant_types client_credentials -s myp@ssw0rd2

To try this out, we turn to curl. First, let's test that our new api client can get a token (obviously, "REDACTED" isn't a valid token):

$ curl -XPOST -u my-api-client:myp@ssw0rd2 http://uaa.mycfenvironment.com/oauth/token --data-urlencode "grant_type=client_credentials" --data-urlencode "response_type=token"
{"access_token":"REDACTED","token_type":"bearer","expires_in":43199,"scope":"uaa.none","jti":"REDACTED"}

When you look at the token you actually get from the API, note that it is a JSON Web Token (JWT). There is software out there that will let you decode such a token (or at least part of one) for debugging purposes. One I use is jwt.io.
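
If you'd rather stay in the terminal, the payload is just base64url-encoded JSON between the first and second dots. A rough sketch (the tr call maps base64url characters back to standard base64; base64 may grumble about missing padding, hence the stderr redirect):

$ TOKEN="<paste the access_token value here>"
$ echo "$TOKEN" | cut -d '.' -f2 | tr '_-' '/+' | base64 -d 2>/dev/null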

Next, let us now play the role of your application. We now take the value returned in the "access_token" field of the previous command, and pretend that we are receiving this token after the client has acquired it from the UAA.

Note above that we use the -u command-line switch to pass the basic auth credentials to the UAA. Here are some other ways you may see credentials passed:

We could pass the credentials as part of the URL, like this:

$ curl -XPOST http://my-api:myp@ssw0rd1@uaa.mycfenvironment.com/check_token...

It works (though note that an @ inside the password itself would have to be percent-encoded as %40), but it is not my first choice.

We could also manually add the Authorization header, like so:

$ echo -n my-api:myp@ssw0rd1 | base64
bXktYXBpOm15cEBzc3cwcmQx
$ curl -XPOST http://uaa.mycfenvironment.com/check_token -H "Authorization: Basic bXktYXBpOm15cEBzc3cwcmQx" ...

There is an easier option than this, but try it at least once. This shows how the Basic Authentication header is formed. Also, it is key that you use the -n in echo, which keeps echo from adding a trailing newline character. Otherwise you'll spend hours wondering why your perfectly valid credentials are not being accepted.

I just use the -u option to let curl generate the header for me. Below is the full call and its results:

$ curl -XPOST -u my-api:myp@ssw0rd1 http://uaa.mycfenvironment.com/check_token -H "Content-Type: application/x-www-form-urlencoded" -d "token=REDACTED"
{"client_id":"my-api-client","exp":1477198408,"authorities":["uaa.none"],"scope":["uaa.none"],"jti":"b460e8ef-3f34-45f5-81b0-2d7c0d66dbfe","aud":["my-api-client"],"sub":"my-api-client","iss":"http://uaa.mycfenvironment.com/oauth/token","iat":1477155208,"cid":"my-api-client","grant_type":"client_credentials","azp":"my-api-client","zid":"uaa","rev_sig":"de07cb25"}

So, you have two takeaways here:

  1. You have taken a token from a client that authenticated with the UAA, authenticated to that same UAA yourself, and verified that the token is valid.
  2. By using curl to step through each stage of the interaction and examine the output, you now better understand how to implement it in code.

For more information on how the UAA APIs work, there is some pretty good documentation that comes with the UAA on GitHub:
https://github.com/cloudfoundry/uaa/blob/master/docs/UAA-APIs.rst

Capture HTTP Request Content in Cloud Foundry Using "Gotcha"
https://www.starkandwayne.com/blog/capture-http-requests-to-your-cloud-foundry/ | Fri, 15 Jul 2016

I was working on an application that talks to a few APIs to set up a lab environment on a system I'm working on. This system happens to use Cloud Foundry to host not only its UI, but also its underlying APIs. The APIs I'm using in this case, while open source, don't have a lot of documentation, as they aren't intended to be publicly facing.

Some of the time I'm able to infer the structure of my API call by watching cf logs as I interact with the UI, which in turn calls the APIs. This works great for things like GET and DELETE. However, what I was not seeing in cf logs was the HTTP request body, as you would send in a PUT or POST.

Since we're dealing with open source, I could trace through their code and figure out how to structure the body. However - there is now a faster way!

Enter Gotcha, another project by one of Stark & Wayne's resident innovators - James Hunt. Gotcha is a MiTM proxy that intercepts traffic meant for your HTTP API and logs exactly what is sent to it. Here's how you use it in Cloud Foundry:

Switch Up The Routes

In order to intercept traffic meant for your API, Gotcha needs to listen on the same hostname that is currently being used by your API. We're then going to use an alternate route to get to the API, so Gotcha can proxy traffic to it.

First, unmap the route from your API:

cf unmap-route my-api testbed.budnack.net --hostname api

Next, map an alternate route:

cf map-route my-api testbed.budnack.net --hostname api-actual

Prep Gotcha for CF Deployment

Now, we're ready to insert Gotcha into the middle of all this.

First, clone the repository:

git clone git@github.com:starkandwayne/gotcha.git

Next, go into the gotcha directory and create a file named Procfile with the following contents:

web: gotcha http://api-actual.testbed.budnack.net $PORT

Notice the $PORT variable here - this allows Gotcha to listen on the port number assigned to it by Cloud Foundry.
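
For reference, the argument shape is simply the backend URL followed by the listen port, so a local smoke test might look like this (a sketch, assuming an API listening on localhost:3000):

gotcha http://localhost:3000 8080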

Next, push the app:

cf push gotcha-my-api -d testbed.budnack.net -n api

Tail The Logs and Send Some Traffic

Once Gotcha is running, you can use cf logs to tail the logs on both the application and on Gotcha.

cf logs my-api
cf logs gotcha-my-api

In this next example, I initiate a data transfer via the application's UI. In the API's logs, I see the POST request come through:

2016-07-14T18:56:01.70-0400 [RTR/0]      OUT api-actual.testbed.budnack.net - [14/07/2016:22:56:01 +0000] "POST /rest/downloader/requests HTTP/1.1" 200 270 1488 "-" "Java/1.8.0_65-" 10.0.0.162:35686 x_forwarded_for:"52.202.197.223" x_forwarded_proto:"http" vcap_request_id:2ff5fdd5-bbfc-4ba6-58fa-bf53d1bb4d5f response_time:0.391775647 app_id:4c972f0e-84bf-4437-ad91-e89545868720

Now let's see that request in Gotcha's logs. As you can see, we get a wealth of information here, including the POST request body we're looking for (the JSON chunk partway through the dump):

2016-07-14T18:56:01.31-0400 [App/0]      ERR POST /rest/downloader/requests HTTP/1.1
2016-07-14T18:56:01.31-0400 [App/0]      ERR Host: api-actual.testbed.budnack.net
2016-07-14T18:56:01.31-0400 [App/0]      ERR User-Agent: Java/1.8.0_65-
2016-07-14T18:56:01.31-0400 [App/0]      ERR Transfer-Encoding: chunked
2016-07-14T18:56:01.31-0400 [App/0]      ERR Accept: application/json, application/*+json
2016-07-14T18:56:01.31-0400 [App/0]      ERR Authorization: bearer REDACTEDFORBLOGPOST
2016-07-14T18:56:01.31-0400 [App/0]      ERR Content-Type: application/json;charset=UTF-8
2016-07-14T18:56:01.31-0400 [App/0]      ERR X-Cf-Applicationid: f1f2b239-d26c-4fd9-9a7a-8d0aca91e17d
2016-07-14T18:56:01.31-0400 [App/0]      ERR X-Cf-Instanceid: 81b4f26753aa4253844f03b8276cb84496a748f7570f4b7ba6c04bce028e1f5b
2016-07-14T18:56:01.31-0400 [App/0]      ERR X-Cf-Requestid: bd01af62-8e4b-4c0e-7fe8-4afd108582f5
2016-07-14T18:56:01.31-0400 [App/0]      ERR X-Forwarded-For: 52.202.197.223, 10.0.0.162
2016-07-14T18:56:01.31-0400 [App/0]      ERR X-Forwarded-Proto: http
2016-07-14T18:56:01.31-0400 [App/0]      ERR X-Request-Start: 1468536961307
2016-07-14T18:56:01.31-0400 [App/0]      ERR X-Vcap-Request-Id: b2d43ea3-3537-43a5-7ac0-02ed197d001f
2016-07-14T18:56:01.31-0400 [App/0]      ERR Accept-Encoding: gzip
2016-07-14T18:56:01.31-0400 [App/0]      ERR 1
2016-07-14T18:56:01.31-0400 [App/0]      ERR {
2016-07-14T18:56:01.31-0400 [App/0]      ERR 10d
2016-07-14T18:56:01.31-0400 [App/0]      ERR "orgUUID":"2bc11e2a-5b04-4fb1-967c-9d78b529a670","source":"https://github.com/trustedanalytics/dataset-reader-sample/raw/master/data/nf-data-application.csv","callback":"http://das.testbed.budnack.net/rest/das/callbacks/downloader/d326167b-edf7-4b20-8d77-ea553efe4df4"}
2016-07-14T18:56:01.31-0400 [App/0]      ERR 0
2016-07-14T18:56:01.71-0400 [RTR/0]      OUT api.testbed.budnack.net - [14/07/2016:22:56:01 +0000] "POST /rest/downloader/requests HTTP/1.1" 200 270 1488 "-" "Java/1.8.0_65-" 10.0.0.162:35685 x_forwarded_for:"52.202.197.223" x_forwarded_proto:"http" vcap_request_id:b2d43ea3-3537-43a5-7ac0-02ed197d001f response_time:0.404644956 app_id:f1f2b239-d26c-4fd9-9a7a-8d0aca91e17d
2016-07-14T18:56:01.71-0400 [App/0]      ERR request took 399.586 ms
2016-07-14T18:56:01.71-0400 [App/0]      ERR HTTP/1.1 200 OK
2016-07-14T18:56:01.71-0400 [App/0]      ERR Content-Length: 1488
2016-07-14T18:56:01.71-0400 [App/0]      ERR Cache-Control: no-cache, no-store, max-age=0, must-revalidate
2016-07-14T18:56:01.71-0400 [App/0]      ERR Connection: keep-alive
2016-07-14T18:56:01.71-0400 [App/0]      ERR Content-Type: application/json;charset=UTF-8
2016-07-14T18:56:01.71-0400 [App/0]      ERR Date: Thu, 14 Jul 2016 22:56:15 GMT
2016-07-14T18:56:01.71-0400 [App/0]      ERR Expires: 0
2016-07-14T18:56:01.71-0400 [App/0]      ERR Pragma: no-cache
2016-07-14T18:56:01.71-0400 [App/0]      ERR Server: nginx/1.11.1
2016-07-14T18:56:01.71-0400 [App/0]      ERR X-Application-Context: api:cloud,multitenant-hdfs,proxy:0
2016-07-14T18:56:01.71-0400 [App/0]      ERR X-Cf-Requestid: bd01af62-8e4b-4c0e-7fe8-4afd108582f5
2016-07-14T18:56:01.71-0400 [App/0]      ERR X-Content-Type-Options: nosniff
2016-07-14T18:56:01.71-0400 [App/0]      ERR X-Frame-Options: DENY
2016-07-14T18:56:01.71-0400 [App/0]      ERR X-Xss-Protection: 1; mode=block
2016-07-14T18:56:01.71-0400 [App/0]      ERR {"source":"https://github.com/trustedanalytics/dataset-reader-sample/raw/master/data/nf-data-application.csv","callback":"http://das.testbed.budnack.net/rest/das/callbacks/downloader/d326167b-edf7-4b20-8d77-ea553efe4df4","id":"51","state":"IN_PROGRESS","downloadedBytes":0,"savedObjectId":null,"objectStoreId":null,"token":"REDACTEDFORBLOGPOST"}
2016-07-14T18:56:01.71-0400 [App/0]      ERR receive response took 0.007 ms
2016-07-14T18:56:01.71-0400 [App/0]      ERR send response took 2.039 ms

There you have it - You can leave this running as-is if it performs well enough, or you can even just re-map the original route back to your API, leaving Gotcha waiting in limbo until you need him again for more MiTM hijinks!
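
Restoring the original routing is just the setup in reverse - the same commands with the hostnames swapped back:

cf unmap-route gotcha-my-api testbed.budnack.net --hostname api
cf map-route my-api testbed.budnack.net --hostname api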

Start Working With HDFS from the Command Line
https://www.starkandwayne.com/blog/working-with-hdfs/ | Tue, 31 May 2016

For the last month or so, I've been working on a couple of projects that have required me to move files in and out of HDFS. It's pretty straightforward once you get the appropriate tools working, but it can be a bit counterintuitive at first (at least it was when I was learning it). Here's how to get started:

Install your tools

In this tutorial, we are working with Cloudera 5.5.1, using an Ubuntu (Trusty Tahr) instance to connect to it. First, we need to add Cloudera's repo to apt:

$ wget http://archive.cloudera.com/cdh5/one-click-install/trusty/amd64/cdh5-repository_1.0_all.deb
$ sudo dpkg -i cdh5-repository_1.0_all.deb
$ sudo apt-get update

Since I use both the hdfs command and FUSE, I just install the FUSE package, which pulls in both tools.

$ sudo apt-get install hadoop-hdfs-fuse

One prerequisite that apt fails to install is Java. If you try running the hdfs command, you'll get the following error:

Error: JAVA_HOME is not set and could not be found.

Let's put Java on there:

$ sudo apt-get install openjdk-7-jre

Set up your config

One little quirk about working with the Hadoop command-line tools is that you need local config files - you can't just provide the URL to your namenode and connect. One exception to this rule is a Go-based library/client written by Colin Marc called (drumroll please...) HDFS.

In Cloudera, you can get the config through the CDH Manager UI:
(screenshot: downloading the client configuration from the CDH Manager UI)

Once you download this zip file, put its contents into a subfolder of /etc/hadoop as follows:

$ sudo unzip hdfs-clientconfig.zip -d /etc/hadoop
Archive:  hdfs-clientconfig.zip
  inflating: /etc/hadoop/hadoop-conf/hdfs-site.xml
  inflating: /etc/hadoop/hadoop-conf/core-site.xml
  inflating: /etc/hadoop/hadoop-conf/topology.map
  inflating: /etc/hadoop/hadoop-conf/topology.py
  inflating: /etc/hadoop/hadoop-conf/log4j.properties
  inflating: /etc/hadoop/hadoop-conf/ssl-client.xml
  inflating: /etc/hadoop/hadoop-conf/hadoop-env.sh
$ sudo mv /etc/hadoop/hadoop-conf /etc/hadoop/conf.cloudera.HDFS

For the HDFS tools to use your configuration, the HADOOP_CONF_DIR environment variable needs to be set. This can simply be added to your favorite shell profile config:

export HADOOP_CONF_DIR="/etc/hadoop/conf.cloudera.HDFS"

Name Resolution

Now that you have your configuration in the right place, make sure you can actually resolve the names it uses. In this Cloudera deployment, that means ensuring one of your Consul DNS servers is listed in /etc/resolv.conf before your externally resolving DNS server.

nameserver 10.10.10.250 <-- this would be consul
nameserver 10.10.0.2 <-- this is your default DNS server

Try a ping:

$ ping cdh-master-0.node.myclouderacluster.consul
PING cdh-master-0.node.myclouderacluster.consul (10.10.10.70) 56(84) bytes of data.
64 bytes from cdh-master-0.node.myclouderacluster.consul (10.10.10.70): icmp_seq=1 ttl=64 time=1.25 ms
64 bytes from cdh-master-0.node.myclouderacluster.consul (10.10.10.70): icmp_seq=2 ttl=64 time=0.899 ms

Try the HDFS client

To make sure your configuration works, let's use the hdfs command to list our top-level directories:

$ hdfs dfs -ls /
Found 4 items
drwxr-xr-x   - hbase  hbase               0 2016-05-03 23:23 /hbase
drwxr-xr-x   - cf     stark               0 2016-05-03 16:16 /org
drwxrwxrwx   - hdfs   supergroup          0 2016-05-24 00:12 /tmp
drwxr-xr-x   - mapred supergroup          0 2016-05-06 00:07 /user

NOTE: If something is wrong, you will either get errors, OR the command will simply return the results of ls in your current working directory.

From here, you can simply read the help for the hdfs command. Most operations are pretty simple.
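
A few operations you'll reach for most often, mirroring their Unix namesakes (the paths here are illustrative):

$ hdfs dfs -mkdir -p /user/jeremy
$ hdfs dfs -put big-file.csv /user/jeremy/
$ hdfs dfs -get /user/jeremy/big-file.csv ./big-file-copy.csv
$ hdfs dfs -rm /user/jeremy/big-file.csv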

Try FUSE

For the next level, let's try mounting HDFS as a usable filesystem. To do this, first create a mountpoint:

$ sudo mkdir -p /hdfs

If you set up everything correctly for the hdfs command as above, you should be able to mount and use your HDFS filesystem like this:

$ sudo hadoop-fuse-dfs dfs://cdh-master-0.node.myclouderacluster.consul:8020 /hdfs
$ ls -Alh /hdfs
total 16K
drwxr-xr-x 10     99 99 4.0K May  3 23:23 hbase
drwxr-xr-x  3     99 99 4.0K May  3 16:16 org
drwxrwxrwx  6 hdfs   99 4.0K May 24 00:12 tmp
drwxr-xr-x 11 mapred 99 4.0K May  6 00:07 user

A Note About Permissions (Security by Obscurity!)

HDFS permissions, by default, are very liberal - without Kerberos, HDFS simply trusts whatever username the client presents. Still, as you browse the tree structure, you may notice that you do not have access to certain files:

$ ls /hdfs/org/some/restricted/folder
ls: cannot open directory /hdfs/org/some/restricted/folder: Permission denied

The fix? Create a user with the same name as the folder's owner:

$ sudo useradd -m theboss
$ sudo su - theboss -l
$ ls /hdfs/org/some/restricted/folder
resumes
salaries
torrents

This may or may not work for you - typically, if real security for HDFS is desired, one would enable Kerberos for the environment.

Does it actually work?

To ensure that this actually works before handing it off to a customer to upload their gargantuan files, I'd suggest uploading a large-ish file and checking that the checksums before and after the upload match:

$ openssl dgst -sha256 big-file.csv
SHA256(big-file.csv)= 646a45f3caed89d7303ae9240c0c3e45e9188e55cf8e65bda8980daa9855be3e
$ cp big-file.csv /hdfs
$ openssl dgst -sha256 /hdfs/big-file.csv
SHA256(big-file.csv)= 646a45f3caed89d7303ae9240c0c3e45e9188e55cf8e65bda8980daa9855be3e

That's it. At this point, you can interact with HDFS as you would any other Linux filesystem.

Run Concourse on bosh-lite on AWS
https://www.starkandwayne.com/blog/run-concourse-on-bosh-lite-on-aws/ | Sun, 18 Oct 2015

This week, I've been tasked with making a change to one of our existing Concourse pipelines. This got me to thinking: A CI pipeline should be treated as if it was versioned code. If I want to change it, I should have a place to try changes out, well out of the way of the developers trying to use said pipeline to get their patches out.

"Well gee", you might say - "why don't you just copy the pipeline, make the requisite changes, and name it something else?" Yeah, I could - but using my own Concourse would be one more step to keep me from being one fat-fingered "fly" command from plunging a digital backhoe into the dev pipeline (beeeep beeeep beeeep CRUNCH). Plus, I hadn't stood up Concourse before, so I wanted to learn.

I stood up Concourse locally in bosh-lite using Vagrant, and it seemed to work well. I checked out concourse.ci for other ways to stand it up. There are good example manifests, but I wanted to use bosh-lite running on AWS, as I didn't want to incur the cost of three VMs just to run a dev Concourse on AWS for a day or two.

Here's how I did it:

Deploy bosh-lite

First, I installed boss-lite, which is a great way to manage your bosh-lites, plus it makes deployment of bosh-lite easier.

Before using boss-lite, make sure you fill out your boss-lite/bosh-lites/.envrc file. It should already be stubbed out - you just need to ensure your AWS key, SSH key, keypair, security group ID, and subnet ID are filled in; a sketch of what that might look like follows.
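
A hypothetical .envrc, purely illustrative - the variable names below are placeholders, so use whatever names the stub in the repo actually provides:

export AWS_ACCESS_KEY_ID=AKIA...                    # your AWS key
export AWS_SECRET_ACCESS_KEY=...                    # your AWS secret
export BOSH_LITE_KEYPAIR=bosh-lite                  # EC2 keypair name (placeholder)
export BOSH_LITE_PRIVATE_KEY=~/.ssh/bosh-lite.pem   # SSH key path (placeholder)
export BOSH_LITE_SECURITY_GROUP=sg-12345678         # security group id (placeholder)
export BOSH_LITE_SUBNET_ID=subnet-12345678          # subnet id (placeholder)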

To create a new bosh-lite, you simply run the following command in your boss-lite/bosh-lites directory:

$ bin/boss-lite new concourse

Once boss-lite creates your new bosh-lite, target it:

$ bosh target <your bosh-lite IP here>

Upload The Stemcell and Releases

We will need to upload releases for both Concourse and Garden. Note that, per Concourse's GitHub releases page, each Concourse release is matched to a specific Garden release, so substitute the correct pair of version numbers for the ones below.

$ export CONCOURSE_VERSION=0.65.1
$ export GARDEN_VERSION=0.307.0
$ bosh upload release https://github.com/concourse/concourse/releases/download/v$CONCOURSE_VERSION/concourse-$CONCOURSE_VERSION.tgz
$ bosh upload release https://github.com/concourse/concourse/releases/download/v$CONCOURSE_VERSION/garden-linux-$GARDEN_VERSION.tgz

Next, upload the stemcell:

$ bosh upload stemcell https://bosh.io/d/stemcells/bosh-warden-boshlite-ubuntu-trusty-go_agent
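Before moving on, it's worth confirming that everything landed. Using the same old-style BOSH CLI as the rest of this post, both commands should list the versions you just uploaded:

$ bosh releases
$ bosh stemcells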

Create Your Manifest

While the folks at concourse.ci give a pretty good sample manifest for deploying to bosh-lite, I had to give it a few tweaks. In addition to having some problems with IP addresses (which may or may not have resulted from me using the wrong version of Garden), I also followed Dr. Nic's blog post on notifying our team on Slack about Concourse errors. My sample manifest is in this gist for anyone who wants to use it.

Deploy

Now that we have a manifest, let's deploy:

$ bosh deployment concourse-aws-boshlite.yml
$ bosh deploy
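Once the deploy finishes, a quick sanity check never hurts - again with the old CLI, and you should see each job reported as running:

$ bosh vms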

If all goes well, you now have Concourse running! Stay tuned though, there is one more step.

Make It Accessible

We need the ability to get to the Concourse API, as well as the UI. For that, we need to run a couple of iptables commands directly on the bosh-lite VM.

First, we can use boss-lite to SSH into our bosh-lite:

$ boss-lite ssh concourse

Next, run the following two commands. NOTE: If you changed the web interface IP in your manifest, you will need to adjust these commands accordingly.

$ sudo iptables -A FORWARD -p tcp -d 10.244.8.2 --dport 8080 -m state --state NEW,ESTABLISHED,RELATED -j ACCEPT
$ sudo iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 8080 -j DNAT --to-destination 10.244.8.2:8080

Now, you should be able to hit your new Concourse from the web browser on your bosh-lite IP on port 8080.
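If a browser isn't handy, a plain HTTP probe from the shell verifies the forwarding just as well (nothing Concourse-specific assumed here - any HTTP response at all means the DNAT rule is working):

$ curl -I http://<your bosh-lite IP here>:8080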

Troubleshooting Notes

A couple of times I've had to completely re-deploy my bosh-lite because of Garden containers that fell over and could not be killed (again, I may have been using the wrong version of Garden at the time - make sure you download the matched versions). The telltale sign is an error like the one below, which indicates some rogue Garden container is holding an IP address that your deployment wants to use:

Error 100: Creating VM with agent ID 'a6cebe66-873d-41a6-99cd-48f14da55088': Creating container: network already acquired: 10.244.8.8/30

If you have deleted your deployment and still suspect something is amiss, you can actually hit the Garden API like this:

$ curl http://<your bosh-lite IP here>:7777/containers

You should get output like this if you have an existing deployment:

{"handles":["c651f731-fd25-4a4e-6877-51c99136f43e","af12d63b-7159-4798-6dc1-9d4beb54bc4d","dd009585-f5dc-428f-4763-00113eb7d03f"]}

If the handle list isn't empty and you don't have any deployments in BOSH, then you probably need to redeploy bosh-lite. Yes - Garden does have a DELETE endpoint, but I have not yet gotten it to work for me.
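For the record, the call I tried looks something like this, with the handle taken from the /containers listing above - as noted, your mileage may vary:

$ curl -X DELETE http://<your bosh-lite IP here>:7777/containers/c651f731-fd25-4a4e-6877-51c99136f43e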

For more information on the Garden API, go here.

The post Run Concourse on bosh-lite on AWS appeared first on Stark & Wayne.

Hey Windows Users – You Too Can Haz SSH! https://www.starkandwayne.com/blog/hey-windows-users-you-too-can-haz-ssh/ Fri, 11 Jul 2014 21:41:48 +0000 https://www.starkandwayne.com//hey-windows-users-you-too-can-haz-ssh/

I have a confession to make: I used to be a .NET developer. Yup, I said it. I used to run Visual Studio. Windows was my primary operating system.

Then I started writing code for those "other" platforms - things like Ruby and Go. This mandated that I change my errant ways and adopt a totally different toolset, so I did what every other developer west of the Mississippi does and went with a MacBook.

Last year, I had to ditch my old MacBook in favor of my dual-boot Windows/Ubuntu laptop. I love using Ubuntu. Having played with desktop Linux for many years, I can say it has come a long way. However, while Ubuntu is great for development work, I challenge you to find any ubiquitous teleconferencing client (other than Skype) that works with Linux.

So up until recently, I've bitten the bullet and used PuTTY on days when I have teleconferences or have to use some other Windows-based tool (like the vSphere Client). As nice a tool as PuTTY is, I just can't get behind the "clicky-clicky" interface, nor the non-standard command options, nor the special *.ppk format I need just to use a private key. I want something that works like it does in OS X or Linux.

Then it happened - in my frustration, I said "what if" and typed "ssh" at the command prompt. There he was: my old friend the SSH command. And it wasn't just PuTTY playing with my emotions - this worked exactly as I wanted it to.
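To prove the point, the same invocation I'd use on OS X or Linux worked verbatim, plain OpenSSH private key and all (the key name and host below are made up for illustration):

$ ssh -i mykey.pem ubuntu@my-jumpbox.example.com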

I found out that somewhere along the line, I installed Git for Windows. When you install this wonderful tool, you come across the following option:
(Screenshot: the Git for Windows installer's "optional UNIX tools" option)

These "optional UNIX tools" is a modest assortment of, say, 144 of the tools you wish Windows had in its own command prompt, without requiring you to install CYGWIN or maintain a separate VM.

Warning - as the dialog box states: "Only use this option if you understand the implications." Fair enough.

So now, Windows is more accommodating to my development needs, AND I no longer have to delay teleconferences through the words "hold on, I have to reboot..."

The post Hey Windows Users – You Too Can Haz SSH! appeared first on Stark & Wayne.

Small to Mid-Range Cloud Foundry – Closing The Gap https://www.starkandwayne.com/blog/small-to-mid-range-cloud-foundry-closing-the-gap/ Fri, 30 May 2014 22:48:42 +0000 https://www.starkandwayne.com//small-to-mid-range-cloud-foundry-closing-the-gap/

There are many ways to run Cloud Foundry. When you search for articles on getting started, you usually find them tailored to two camps of people:

Developers: Articles of this pedigree typically focus on how to get a minimal "development environment" running, and how to get an application deployed on the platform in question. A great way for a developer to get started with Cloud Foundry is to deploy it on their own computer using bosh-lite.

Infrastructure: There is a lot of content out there that helps you deploy Cloud Foundry for production use. Options include large "behind-the-firewall" offerings like vCenter and OpenStack, as well as externally hosted offerings like Amazon Web Services (AWS).

While this covers a large group of people, there is one group (of which I am a part) that seems to be overlooked. It is kind of hard to describe, so let me list my requirements instead:

  • I want to run Cloud Foundry on something that I can keep running all the time - so I don't want to shut it down when I close my laptop.

  • I want to run a small, fairly non-critical application (such as a blog) long-term, so I can get a feel for how to run this platform. I don't just want to deploy an application - I want to know how Cloud Foundry works. How can I break it? What disk locations fill up over time? What happens when the cloud controller, gorouter, dea, or other part dies... and how do I recover from it? Also - How do I back things up and restore them?

Knowing these requirements, let us look at our possible solutions:

  • vSphere: I love vSphere Hypervisor. It is easy to install and easy to use. And it's free!

    If you've played around with BOSH and vSphere though, you were probably as crestfallen as I was upon realizing: BOSH can only point at vCenter, not bare ESXi.

    Well - you have a couple of options. First - if you just need to run Cloud Foundry for a short while, you can just sign up for a 60-day trial license on VMware's site.

    If you need to run something longer term, there is a very inexpensive option for folks who want to run Cloud Foundry on vCenter: vCenter Essentials. You can get it from the VMware Store for less than $600. This gets you a license to use their vCenter Virtual Appliance, PLUS you have enough licenses for 3 hosts. To be clear - you don't even need Essentials Plus. This option is also wonderful for small to medium organizations looking to run a few applications.

    If your budget is REALLY tight... you can actually run bosh-lite on ESXi Free Edition!

  • OpenStack/KVM: OpenStack is an open source Infrastructure-as-a-Service platform that you can run yourself. Fair warning - it has historically been a very involved process, usually taking a couple of days to set up. To make things a bit quicker, the folks at Red Hat have put together their own installer that works great on their operating systems (I've used it on Fedora 20 with no problems). Go to http://openstack.redhat.com/ for more details.

  • Amazon Web Services (AWS): Finally, you can run Cloud Foundry on AWS. For a small fee, Amazon will let you run on their massively distributed infrastructure. When running on AWS, just make sure you take the possible fees into account: not only are there fees for compute time, you can also rack up a bill in data transfers. If properly sized and used, AWS can be a very cost-effective way to go.

So, whether you are a small company writing apps, a hobbyist, or even a closet Open Source guy with a Fedora box hidden in your cubicle - there is no excuse to NOT get started playing with Cloud Foundry.

The post Small to Mid-Range Cloud Foundry – Closing The Gap appeared first on Stark & Wayne.

Running bosh-lite on vSphere Hypervisor Free Edition https://www.starkandwayne.com/blog/running-bosh-lite-on-vsphere-hypervisor-free-edition/ Fri, 30 May 2014 22:38:37 +0000 https://www.starkandwayne.com//running-bosh-lite-on-vsphere-hypervisor-free-edition/

When I was searching for a platform to run my personal blog, I made a list of requirements for this platform:

  1. The application should run on Cloud Foundry. In fact, I want to install Ghost using Dr. Nic's tutorial on Deploying Ghost blog on Cloud Foundry.
  2. I want to run Cloud Foundry on my own hardware so I know how it works under the hood (and because I don't want to pay anyone to run it).
  3. There should be no licensing costs (again - pay as little as possible).
  4. Installing and administering OpenStack seems like overkill for this project.

It turns out this was a fairly restrictive list. Requirement 2 ruled out AWS and Pivotal Web Services. Requirement 3 ruled out vCenter (plus BOSH just doesn't work with bare ESXi anyway). Requirement 4 is fairly obvious.

Then I got to thinking:

bosh-lite is a great way to get BOSH and Cloud Foundry running quickly so you can dig in. It works with VirtualBox, VMware Fusion, and AWS. It does this by using a pre-baked stemcell that these platforms just boot up and run.

vSphere Hypervisor (a.k.a. ESXi) is an easy to use and robust platform for running virtual machines. It's easy to install, runs on consumer-grade hardware, and requires very little maintenance.

VirtualBox has the ability to take an existing image and convert its back-end virtual disk to various formats - including an ESXi VMDK file.

Given these facts, we can actually get bosh-lite to run on ESXi Free in a few easy steps:

  1. Follow the instructions in the [bosh-lite](https://github.com/cloudfoundry/bosh-lite) README to get bosh-lite running in VirtualBox - right up to the point where you run "vagrant up".
  2. Log into the virtual machine and shut it down.
    jbudnack@Pegasus:~/bosh-lite$ vagrant ssh
    vagrant@bosh-lite:~$ sudo shutdown -h now
    
  3. Clone the virtual hard disk, converting it into a fixed ESXi VMDK:
    sudo vboxmanage clonehd /data/jbudnack/VirtualBox\ VMs/bosh-lite_default_1401468204242_17042/boshlite-virtualbox-ubuntu1204ds-disk1.vmdk /data/jbudnack/VirtualBox\ VMs/cloned/bosh-lite-esxi.vmdk --format=VMDK --variant=Fixed,ESX
    
  4. On Your ESXi box, enable SSH:
    ![ESXi SSH](http://i.imgur.com/efYVW70.png)
  5. Using SFTP, upload the VMDK file to one of the datastores on your ESXi box.
    jbudnack@Pegasus:/data/jbudnack/VirtualBox VMs/cloned$ sftp root@10.150.0.50
    Password:
    Connected to 10.150.0.50.
    sftp> cd /vmfs/volumes/ENTERPRISE\ 750\ L_DS1/
    sftp> mput *.vmdk
    
  6. Create a new Virtual Machine, using the newly uploaded file as the back-end disk. **Important**: Ensure that this Virtual Machine has 2 virtual NICs. Also, you may want to put the 2nd adapter on its own virtual switch.
    ![ESXi VM](http://i.imgur.com/CEDVn8D.png)
  7. Start the virtual machine. Log into its console as vagrant (password: vagrant)
  8. Configure the network cards as follows in /etc/network/interfaces:
    # This file describes the network interfaces available on your system
    # and how to activate them. For more information, see interfaces(5).
    # The loopback network interface
    auto lo
    iface lo inet loopback
    auto eth0
    iface eth0 inet static
    address 10.150.0.70  #STATIC IP FOR YOUR ESXi VM NETWORK
    netmask 255.255.255.0
    gateway 10.150.0.1 #DON'T FORGET THE DEFAULT GATEWAY
    #
    # The primary network interface
    pre-up sleep 2
    #VAGRANT-BEGIN
    # The contents below are automatically generated by Vagrant. Do not modify.
    auto eth1
    iface eth1 inet static
    address 192.168.50.4  #DO NOT CHANGE THIS IP.
    netmask 255.255.255.0
    #VAGRANT-END
    
  9. Restart the virtual machine. Once it is finished rebooting, try targeting your new bosh-lite vm:
    jbudnack@Pegasus:/data/jbudnack/VirtualBox VMs/cloned$ bosh target https://10.150.0.70
    Target already set to `Bosh Lite Director'
    jbudnack@Pegasus:/data/jbudnack/VirtualBox VMs/cloned$ bosh login
    Your username: admin
    Enter password: *****
    Logged in as `admin'
    jbudnack@Pegasus:/data/jbudnack/VirtualBox VMs/cloned$ bosh status
    Config
             /home/jbudnack/.bosh_config
    Director
      Name       Bosh Lite Director
      URL        https://10.150.0.70:25555
      Version    1.2200.0 (f71e2276)
      User       admin
      UUID       1283c62e-8e7b-43c2-8f97-f42bf8aba812
      CPI        warden
      dns        disabled
      compiled_package_cache enabled (provider: local)
      snapshots  disabled
    Deployment
      not set
    
  10. And there you go! If you plan on using this for a while, you might want to change the "vagrant" user's password, as well as your director's password. You may also want to take this opportunity to snapshot this VM, in case you ever want to roll back to a clean bosh-lite instance - see the sketch below.
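The snapshot can be taken from the ESXi shell itself. A sketch: the VM ID (42 below) is hypothetical - look yours up with the first command - and it's worth double-checking the argument order against vim-cmd's built-in help on your ESXi version:

# vim-cmd vmsvc/getallvms
# vim-cmd vmsvc/snapshot.create 42 clean-bosh-lite "fresh bosh-lite install" 0 0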

The post Running bosh-lite on vSphere Hypervisor Free Edition appeared first on Stark & Wayne.
