Setting up an SSH tunnel with .ssh/config
https://www.starkandwayne.com/blog/setting-up-an-ssh-tunnel-with-ssh-config/
Wed, 19 Oct 2016 19:55:31 +0000

Recently we had a client whose OpenStack configuration required us to use a SOCKSv5 proxy to access the Horizon Dashboard. Rather than create the tunnel by running ssh -D 8080 -f -C -N ${remote-host} every time, it made more sense to set up the port forwarding in ~/.ssh/config and create a couple of aliases that let us quickly start/check/exit the tunnel.

Configure the Tunnel

Add the following to your ~/.ssh/config file:

Host my-proxy
  Hostname x.x.x.x
  User admin-user
  IdentityFile ~/.ssh/id_rsa
  DynamicForward 8080
  ControlMaster auto
  ControlPath ~/.ssh/sockets/%r@%h:%p

Make sure you:

  • Replace the filepath for the key pair used in IdentityFile as needed.
  • Replace x.x.x.x with the desired IP address, most likely a jumphost.
  • Replace admin-user with the desired user.
  • Know what port you need to forward. Here we are forwarding 8080, but your needs may differ.
  • Create the ~/.ssh/sockets directory if it does not already exist (a one-liner for this follows below).
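
For that last item, something along these lines will do (the chmod is my own precaution, not from the original post):

mkdir -p ~/.ssh/sockets && chmod 700 ~/.ssh/sockets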

Using Aliases to make our lives easier

Put the following in your ~/.bash_profile:

## For My Proxy Tunnel
alias proxy-on='ssh -fN my-proxy'
alias proxy-check='ssh -O check my-proxy'
alias proxy-off='ssh -O exit my-proxy'
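
The aliases won't be available in your current shell until you open a new terminal or reload the profile:

source ~/.bash_profile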

Starting/stopping the proxy

$ proxy-on
{{no output}}
$ proxy-check
Master running (pid=24407)
$ proxy-off
Exit request sent.
$ proxy-check
Control socket connect(/Users/quinn/.ssh/sockets/admin-user@x.x.x.x:22): No such file or directory
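
Beyond proxy-check, you can also verify that traffic actually flows through the tunnel. A quick test with curl (my own addition; example.com is a stand-in for your real target, and the socks5h scheme routes DNS resolution through the tunnel as well):

curl -sx socks5h://localhost:8080 -o /dev/null -w '%{http_code}\n' https://example.com

A 200 here means the SOCKS proxy is answering and forwarding requests.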

Configuring the Proxy in the web browser

If you are using Chrome, you can use the SwitchyOmega browser extension (or its predecessor SwitchySharp) to set up the proxy. The proxy will need to be SOCKSv5, localhost, port 8080. If you are using the SwitchyOmega extension, it will look like the following:

SwitchyOmega Configuration

It's also worth creating an Auto Switch rule so that you don't have to manually toggle to the appropriate proxy setting:

SwitchyOmega Auto Switch

Accessing the desired URL

Once you have the tunnel running (proxy-on) and the auto switch rule in place, all you need to do is go to the desired URL.

If you opted out of the auto-switch, you can toggle between the Direct and Proxy connections in the extension:

SwitchyOmega in browser

(For the curious, the visible browser extensions are Checker Plus for Gmail, Ad Block Plus, 1Password, JSONView, Momentum, and of course SwitchyOmega.)

Setting up Keybase and GPG Tools (Mac)
https://www.starkandwayne.com/blog/setting-up-keybase-and-gpg-tools-mac/
Tue, 30 Aug 2016 19:08:44 +0000

What are GPG Tools?

GPG, or GNU Privacy Guard, is a replacement for Symantec's PGP cryptographic software suite and allows users to encrypt sensitive information. Specifically, GPG Tools includes a utility that integrates with Apple Mail, a GPG Keychain to manage OpenPGP keys, and a command line tool (which I will be using below).

What is Keybase?

Keybase is an identity management service allowing users to manage keys, social media accounts, and devices. The idea is that when you meet someone online and want to exchange secure messages or files with them, you will want some way to establish the other party is who they say they are. Keybase has also created their own file system, which encrypts everything in their mounted drive. It's a new feature and definitely worth reading more about here.

It's worth noting that while Keybase does work with GPG and has a relatively slick CLI, the Keybase CLI lacks equivalents to gpg --list-keys and gpg --export.

Currently Keybase is in alpha, so you'll either need an invite or you can join the wait queue.

Digging Deeper: Key Caveats

Make sure you read Keybase's privacy policy. I'm sure a lot of us are guilty of "leafing" through privacy policies and usage agreements; however, considering that Keybase's purpose is to store sensitive information, you might want to be aware of what they collect and store, where, and how. Some highlights:

Probably obvious data collection, usage, and sharing:

  • The information you provide when signing up, i.e. the name you provide, your location, etc., is all stored. The information you put in your profile, including the names of the accounts you choose to verify, are publicly visible.
  • Data may be shared to comply with the law after receiving a request via a lawful process.
  • Data may be shared as a result of a business transaction, such as a corporate restructuring or merger.
  • Keybase is not responsible for any information you choose to share with third parties. e.g. If you share data with Twitter, it is bound by Twitter's privacy policy not Keybase's.

Some potentially less obvious collection, usage, and sharing:

  • Usage information is automatically collected and does not appear to be disable...able. So Keybase gathers and stores your IP address, host computer preferences, URL of the site that referred you to the service, how you interacted with the Keybase UI, and how long you were logged in for. This information is stored in logs that "may persist for an indefinite period."
  • "We may disclose any information, including your Personal Information and any other information or data collected, stored or processed on our servers, if required to do so by law or in the good-faith belief that such action is necessary...to protect the personal safety of Keybase employees, customers, or the public." I read "good faith" as them asking me to rely on them to Do No Evil with my data, which you may or may not want to do depending on what types of secure information you are interested in sharing.

Since the privacy policy is subject to change, make sure you read the latest version when you sign up and keep your knowledge of it up-to-date.

How to Setup Keybase and GPG

Sign up for Keybase with either your invite or when your lucky # is drawn from the queue. When you choose your name, choose with caution. Remember, Keybase is an identity management service, which means that:

  1. You cannot change your username later, unless you open a new account and delete the old one, which brings me to the next point:
  2. Once a username is used, it cannot be reused, even if the account has been deleted.

To clarify #2: this means if you delete your account, the username does not re-enter the free pool. It is unavailable forever. This makes sense, since you wouldn't want to delete your account and then have someone else come along and pretend to be you. As with all things security: be aware, take care!

Moving along.

Installing Keybase and GPG Tools

You'll need to install GPG Tools and Keybase. The GPG Tools suite is available on their site for Macs. For Keybase, you can either use the Keybase installer which installs an app and the CLI utility, or you can use Homebrew: brew install keybase. Note that you may need to run brew update && brew upgrade keybase. I ultimately used the installer so I could have the app as well to explore later.

Generating your public PGP Key for Keybase

Now that you have your Keybase account and CLI tools, you'll need to generate a public PGP key so you can encrypt/decrypt files and messages. You can do this either with the Keybase CLI or GPG. In my case, I went the GPG route:

==[]=[ 15:34:05 ]=[  quinn@FingerSkillet  ]=[ ~     ]=[]==
$ gpg --gen-key
gpg (GnuPG/MacGPG2) 2.0.30; Copyright (C) 2015 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.
Please select what kind of key you want:
   (1) RSA and RSA (default)
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
Your selection? 1
RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (2048) 4096
Requested keysize is 4096 bits
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 0
Key does not expire at all
Is this correct? (y/N) y
GnuPG needs to construct a user ID to identify your key.
Real name: Quintessence
Email address: myemail[at]example.com
Comment:
You selected this USER-ID:
    "Quintessence <myemail[at]example.com>"
Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
You need a Passphrase to protect your secret key.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key PSEUDOPUB marked as ultimately trusted
public and secret key created and signed.
gpg: checking the trustdb
gpg: 3 marginal(s) needed, 1 complete(s) needed, PGP trust model
gpg: depth: 0  valid:   2  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 2u
gpg: next trustdb check due at 2018-08-19
pub   4096R/PSEUDOPUB 2016-08-24
      Key fingerprint = <REDACTED>
uid       [ultimate] Quintessence <myemail[at]example.com>
sub   4096R/PSEUDOSUB 2016-08-24
==[]=[ 15:37:11 ]=[  quinn@FingerSkillet  ]=[ ~     ]=[]==
$ gpg --list-keys
/Users/quinn/.gnupg/pubring.gpg
-------------------------------
pub   2048D/PGPPUB  2010-08-19 [expires: 2018-08-19]
uid       [ultimate] GPGTools Team <team[at]gpgtools.org>
uid       [ultimate] GPGMail Project Team (Official OpenPGP Key) <gpgmail-devel[at]lists.gpgmail.org>
uid       [ultimate] GPGTools Project Team (Official OpenPGP Key) <gpgtools-org[at]lists.gpgtools.org>
uid       [ultimate] [jpeg image of size 5871]
sub   2048g/PGPSUB1 2010-08-19 [expires: 2018-08-19]
sub   4096R/PGPSUB2 2014-04-08 [expires: 2024-01-02]
pub   4096R/RVMPUB 2014-10-28
uid       [ unknown] Michal Papis (RVM signing) <mpapis[at]gmail.com>
pub   4096R/PSEUDOPUB 2016-08-24
uid       [ultimate] Quintessence <myemail[at]example.com>
sub   4096R/PSEUDOSUB 2016-08-24

To export the public key for Keybase:

==[]=[ 15:42:55 ]=[  quinn@FingerSkillet  ]=[ ~/.gnupg     ]=[]==
$ gpg -a --export PSEUDOPUB
-----BEGIN PGP PUBLIC KEY BLOCK-----
Comment: GPGTools - https://gpgtools.org
...
-----END PGP PUBLIC KEY BLOCK-----

Make sure you use the -a flag for ASCII, otherwise you'll get binary output dumping to your terminal. Everyone's favorite experience, amirite?

Copy/paste the key block into Keybase and choose "command line with keybase" when prompted for how you would like to sign your public key. Keybase will display the appropriate command to sign the public key, but first you will need to log into the Keybase API on your laptop using the Keybase CLI. To do this, you will use the same password you use to log into the Keybase website. When you initially log in with the Keybase CLI you will be prompted to generate a paper key, as below:

==[]=[ 15:55:57 ]=[  quinn@FingerSkillet  ]=[ ~/.gnupg     ]=[]==
$ keybase login
Your keybase username or email address: quintessence
Enter a public name for this device: FingerSkillet
===============================
IMPORTANT: PAPER KEY GENERATION
===============================
During Keybase's alpha, everyone gets a paper key. This is a private key.
  1. you must write it down
  2. the first two words are a public label
  3. it can be used to recover data
  4. it can provision new keys/devices, so put it in your wallet
  5. just like any other device, it'll be revokable/replaceable if you lose it
Your paper key is
       	<REDACTED>
Write it down....now!
Have you written down the above paper key? [y/N] y
Excellent! Is it in your wallet? [y/N] y
✔ Success! You provisioned your device FingerSkillet.
You are logged in as quintessence
  - type `keybase help` for more info.

Now that you have logged in, you can proceed:

==[]=[ 16:26:00 ]=[  quinn@FingerSkillet  ]=[ ~/.gnupg     ]=[]==
$ keybase pgp select <REDACTED>
#    Algo    Key Id             Created   UserId
=    ====    ======             =======   ======
1    4096R   <REDACTED>             Quintessence <myemail[at]example.com>
Choose a key: 1
▶ INFO Bundle unlocked: <REDACTED>
▶ INFO Generated new PGP key:
▶ INFO   user: Quintessence <myemail[at]example.com>
▶ INFO   4096-bit RSA key, ID <REDACTED>, created 2016-08-24
▶ INFO Key <REDACTED> imported

Generating the PGP key with Keybase

The other way to generate your PGP key is with the Keybase CLI, using keybase pgp gen. When you do, you will be given the option to upload the (encrypted) secret key to Keybase, e.g.:

==[]=[ 14:45:05 ]=[  quinn@FingerSkillet  ]=[ ~     ]=[]==
$ keybase pgp gen
Enter your real name, which will be publicly visible in your new key: Quintessence
Enter a public email address for your key: myemail[at]example.com
Enter another email address (or <enter> when done):
Push an encrypted copy of your new secret key to the Keybase.io server? [Y/n]

Depending on your level of paranoia, putting a copy of the secret key on the Keybase server might be asking a bit much. Personally, this behavior is a reason in favor of generating keys with gpg, since Y is the default action.

Note: When you generate a key with keybase pgp gen it will appear in your GPG keyring.
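
You can double-check that with a quick look at the keyring (my addition, not part of the original walkthrough):

# The key generated by `keybase pgp gen` should be listed alongside your others
gpg --list-keys
gpg --list-secret-keys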

Verifying an account with the CLI

Accounts can be verified using either the web UI or the Keybase CLI. For the latter:

==[]=[ 16:31:42 ]=[  quinn@FingerSkillet  ]=[ ~/.gnupg     ]=[]==
$ keybase prove github quintessence
Please publicly post the following Gist, and name it keybase.md
### Keybase proof
I hereby claim:
  * I am quintessence on github.
...
Check Github now? [Y/n] Y
▶ NOTICE Success!
==[]=[ 16:33:46 ]=[  quinn@FingerSkillet  ]=[ ~     ]=[]==
$ keybase prove hackernews quintessence
Please edit your HackerNews profile to contain the following text. Click here: https://news.ycombinator.com/user?id=quintessence
[ my public key: ...
Check HackerNews now? [Y/n] Y
▶ NOTICE Success!

Notice that when you prove you are a user, you are really proving that you can somehow post as that user. In the case of GitHub, that means creating a public Gist. For other sites, e.g. Twitter and Reddit, you create a tweet/post. An example of a more complete profile is that of one of my coworkers here at S&W, James Hunt:

jhunt keybase profile

Following a user

You can see above that jhunt is already following me on Keybase. If I want to follow him as well, I can do so via the command line using his Keybase username:

==[]=[ 15:27:25 ]=[  quinn@FingerSkillet  ]=[ ~     ]=[]==
$ keybase follow jhunt
▶ INFO Identifying jhunt
✔ public key fingerprint: 2BA0 1C9D B438 A64F 214C D2D3 E7B1 C84A EDE5 75A0
✔ admin of DNS zone jameshunt.us: found TXT entry keybase-site-verification=v_ja_-Bvv9kxVH8l0JKdX_yCTFw4TlsZ2bFXoz4g9M0
✔ admin of DNS zone niftylogic.com: found TXT entry keybase-site-verification=QQg_TcUz22MGcRgb5DOcBIzsndgBpsuMZmhU7hQ7jes
✔ admin of DNS zone huntprod.com: found TXT entry keybase-site-verification=qfh_K7mBbkb6JP3WrRLyZiu5bWz8jYCyEHwWopfdPDM
✔ "filefrog" on reddit: https://www.reddit.com/r/KeybaseProofs/comments/4trseg/my_keybase_proof_redditfilefrog_keybasejhunt_z9/
✔ "iamjameshunt" on twitter: https://twitter.com/iamjameshunt/status/755790627552448513
✔ "jhunt" on github: https://gist.github.com/6347ae03c701d50782f25b879b72c394
Is this the jhunt you wanted? [Y/n] Y
Publicly follow? [Y/n] Y

Using the Keybase CLI, I can now see both that he is following me and that I am following him:

==[]=[ 15:59:40 ]=[  quinn@FingerSkillet  ]=[ ~     ]=[]==
$ keybase list-followers
jhunt
==[]=[ 15:59:46 ]=[  quinn@FingerSkillet  ]=[ ~     ]=[]==
$ keybase list-following
jhunt

I can also see that I'm following him now using the Keybase web UI:

following jhunt on keybase

Why follow other users?

Following users cuts out a couple steps when you wish to send encrypted information. For example, let's say I wanted to send jhunt the following message:

keybase encrypt jhunt -m "Check out my blog post!"

If I weren't already following jhunt, the CLI would put me through the same manual verification process to ensure that the jhunt it found was the jhunt I intended to encrypt the message for. When I started following jhunt, Keybase created a signed snapshot of his identity. So now when I encrypt the above message, it uses the signed snapshot instead and doesn't prompt me to verify the user.

For Keybase, following users also helps establish each user's web of trust. When I started following jhunt I verified that I knew him. As more people follow jhunt over time, it lends credibility to the Keybase jhunt user being matched with the correct person. Keybase talks about this a bit more in their doc on following users.

Key Deletion/Revocation

Keys are removed from the Keybase client using keybase drop '<KEY>'; however, this does not remove the key from your GPG keyring. Since keys generated with the Keybase CLI are also stored in the GPG keyring, you will need to revoke the key on the GPG side as well, regardless of whether you used keybase or gpg to generate it; otherwise the key will remain in your GPG keyring.
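
As a sketch of what that GPG-side cleanup could look like (standard gpg commands; DEADBEEF is a hypothetical key ID standing in for yours):

# Generate and import a revocation certificate, marking the key revoked
gpg --gen-revoke DEADBEEF > revoke.asc
gpg --import revoke.asc
# Optionally remove the key material from the keyring entirely
# (the secret key must be deleted before the public key)
gpg --delete-secret-keys DEADBEEF
gpg --delete-keys DEADBEEF

If you published the key anywhere, distribute the revocation certificate as well; deleting locally does not un-publish anything.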

Building a Linux Static Binary with sipcalc, CentOS 7, and Docker
https://www.starkandwayne.com/blog/building-a-linux-static-binary-with-sipcalc-centos-7-and-docker-2/
Thu, 11 Aug 2016 02:19:36 +0000

First: What is sipcalc?

sipcalc is a handy tool that makes networking a bit less painful, e.g.:

$ sipcalc 192.168.0.0/24
-[ipv4 : 192.168.0.0/24] - 0
[CIDR]
Host address		- 192.168.0.0
Host address (decimal)	- 3232235520
Host address (hex)	- C0A80000
Network address		- 192.168.0.0
Network mask		- 255.255.255.0
Network mask (bits)	- 24
Network mask (hex)	- FFFFFF00
Broadcast address	- 192.168.0.255
Cisco wildcard		- 0.0.0.255
Addresses in network	- 256
Network range		- 192.168.0.0 - 192.168.0.255
Usable range		- 192.168.0.1 - 192.168.0.254
-

As it turns out, there are package installers for various distros of Linux here, and you can even run brew install sipcalc on a Mac.

But what about CentOS 7? Well, it turns out that sipcalc is not in the EPEL repository for CentOS 7. (Take a look at EPEL for v6 and EPEL for v7.) Since I want to add this to our jumpbox setup script, I decided I wanted a static binary that could run on any Linux system.

Building the binary with the rpmbuild/centos7 Docker image

Environment: latest version of Docker Toolbox on Mac OS X.

Using the rpmbuild/centos7 Docker image for this project and grabbing the sipcalc tarball:

==[]=[ 20:21:33 ]=[  quinn@MacBook-Pro  ]=[ ~     ]=[]==
$ docker pull rpmbuild/centos7
Using default tag: latest
latest: Pulling from rpmbuild/centos7
3d8673bd162a: Already exists
a3ed95caeb02: Already exists
fe6f78a62503: Already exists
365f5e11f348: Already exists
Digest: sha256:10a62db594c19a0fc6026cab1492d48ba611a52f5b68c07e33a0da9c6c54e039
Status: Image is up to date for rpmbuild/centos7:latest
==[]=[ 20:21:42 ]=[  quinn@MacBook-Pro  ]=[ ~     ]=[]==
$ docker run -it rpmbuild/centos7 /bin/bash
[builder@61c8cc6d83b9 /]$ sudo wget http://www.routemeister.net/projects/sipcalc/files/sipcalc-1.1.6.tar.gz
[builder@61c8cc6d83b9 /]$ tar -xvzf sipcalc-1.1.6.tar.gz && cd sipcalc-1.1.6

The INSTALL file states that we can build and install sipcalc with ./configure && make && make install. So first I'll try to build the binary by running ./configure && make:

[builder@a025336b0eb1 sipcalc-1.1.6]$ ./configure && make
[lengthy output not included]

And now to test the binary:

[builder@a025336b0eb1 sipcalc-1.1.6]$ find . -name sipcalc
./src/sipcalc
[builder@a025336b0eb1 sipcalc-1.1.6]$ ./src/sipcalc 10.20.30.40/17
-[ipv4 : 10.20.30.40/17] - 0
[CIDR]
Host address		- 10.20.30.40
Host address (decimal)	- 169090600
Host address (hex)	- A141E28
Network address		- 10.20.0.0
Network mask		- 255.255.128.0
Network mask (bits)	- 17
Network mask (hex)	- FFFF8000
Broadcast address	- 10.20.127.255
Cisco wildcard		- 0.0.127.255
Addresses in network	- 32768
Network range		- 10.20.0.0 - 10.20.127.255
Usable range		- 10.20.0.1 - 10.20.127.254
-

Right on, we have a working binary! But is it a static binary?

[builder@a025336b0eb1 sipcalc-1.1.6]$ ldd src/sipcalc
	linux-vdso.so.1 =>  (0x00007ffc46768000)
	libnsl.so.1 => /lib64/libnsl.so.1 (0x00007fc6c8f5c000)
	libc.so.6 => /lib64/libc.so.6 (0x00007fc6c8b9a000)
	/lib64/ld-linux-x86-64.so.2 (0x000055777a652000)

Nope. The secret sauce:

[builder@a025336b0eb1 sipcalc-1.1.6]$ sudo yum install glibc-static
[builder@a025336b0eb1 sipcalc-1.1.6]$ CFLAGS=-static ./configure
[builder@a025336b0eb1 sipcalc-1.1.6]$ make clean && make
[builder@a025336b0eb1 sipcalc-1.1.6]$ ldd src/sipcalc
	not a dynamic executable

Static binary win!

Now to make it even easier with a Bash script! Create an executable script named pkg:

#!/bin/bash
set -e

# Fetch and unpack the sipcalc source
cd /tmp
curl -LO http://www.routemeister.net/projects/sipcalc/files/sipcalc-1.1.6.tar.gz
tar -xzvf sipcalc-1.1.6.tar.gz
cd sipcalc-1.1.6

# glibc-static provides the static libc needed for a fully static link
sudo yum install -y glibc-static

# Configure with static linking, build, and copy the binary to the mounted volume
CFLAGS=-static ./configure
make
sudo cp src/sipcalc /srv/sipcalc

Then start up the Docker container to execute the script and exit:

==[]=[ 21:56:57 ]=[  quinn@MacBook-Pro  ]=[ ~     ]=[]==
$ docker run -it -v $PWD:/srv rpmbuild/centos7
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  114k  100  114k    0     0  93450      0  0:00:01  0:00:01 --:--:-- 93503
sipcalc-1.1.6/
sipcalc-1.1.6/TODO
sipcalc-1.1.6/src/
sipcalc-1.1.6/src/sub.c
...
make[2]: Leaving directory `/tmp/sipcalc-1.1.6/src'
make[2]: Entering directory `/tmp/sipcalc-1.1.6'
make[2]: Leaving directory `/tmp/sipcalc-1.1.6'
make[1]: Leaving directory `/tmp/sipcalc-1.1.6'
==[]=[ 21:57:13 ]=[  quinn@MacBook-Pro  ]=[ ~     ]=[]==
$ ls -al | grep sipcalc
-rwxr-xr-x    1 quinn  staff     1058380 Mmm DD 21:57 sipcalc
==[]=[ 21:57:15 ]=[  quinn@MacBook-Pro  ]=[ ~     ]=[]==
$ file sipcalc
sipcalc: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), statically linked, for GNU/Linux 2.6.32, from 'bp', not stripped
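
Note the file output says the binary is "not stripped". If size matters to you, stripping the symbol table is an option (my addition; the original binary ships as-is):

strip sipcalc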

Now to test on an Ubuntu Docker container:

==[]=[ 22:00:31 ]=[  quinn@MacBook-Pro  ]=[ ~     ]=[]==
$ docker run -it -v $PWD:/srv debuild/precise /srv/sipcalc 10.20.30.40/17
Unable to find image 'debuild/precise:latest' locally
latest: Pulling from debuild/precise
765826873799: Pull complete
e7a187926114: Pull complete
fd01d4f3de3b: Pull complete
c704fce22a3c: Pull complete
a3ed95caeb02: Pull complete
6acafd366a73: Pull complete
8026c6cf1ae4: Pull complete
Digest: sha256:2c14957baab89d6595cd9437f9c9d40c76c23f26ab6ab3c77e04542ca5178cff
Status: Downloaded newer image for debuild/precise:latest
-[ipv4 : 10.20.30.40/17] - 0
[CIDR]
Host address		- 10.20.30.40
Host address (decimal)	- 169090600
Host address (hex)	- A141E28
Network address		- 10.20.0.0
Network mask		- 255.255.128.0
Network mask (bits)	- 17
Network mask (hex)	- FFFF8000
Broadcast address	- 10.20.127.255
Cisco wildcard		- 0.0.127.255
Addresses in network	- 32768
Network range		- 10.20.0.0 - 10.20.127.255
Usable range		- 10.20.0.1 - 10.20.127.254
-

BAM. Linux x86_64 compatible sipcalc static binary.

My Shell, My Bell: While/Xargs by Example
https://www.starkandwayne.com/blog/my-shell-my-bell-whilexargs-by-example/
Mon, 14 Mar 2016 17:31:07 +0000

Earlier today I logged into a jumpbox session and was greeted with some lovely error messages:

$ channel 12: open failed: administratively prohibited: open failed
channel 13: open failed: administratively prohibited: open failed
channel 19: open failed: administratively prohibited: open failed
channel 20: open failed: administratively prohibited: open failed
channel 21: open failed: administratively prohibited: open failed
channel 22: open failed: administratively prohibited: open failed
channel 23: open failed: administratively prohibited: open failed
channel 24: open failed: administratively prohibited: open failed
channel 25: open failed: administratively prohibited: open failed

This was a little odd, considering that I hadn't done anything yet. So I looked through a list of my running processes and noticed that I had a ton of ssh-agent and ssh -R sessions still running. That's what I get for scripting and not cleaning up, but that's an issue for another day.

My main issue was figuring out how to get all those PIDs and delete them in a loop, because why would I go through probably a hundred processes one by one? (I may be exaggerating a little... but not by much.)

To create the PID lists, I used:

ps -o pid,cmd -u quinn | grep "ssh-agen[t]  " | awk '{print $1}'
ps -o pid,cmd -u quinn | grep "ssh -R 2022" | awk '{print $1}'

(I actually have ps -o pid,cmd -u ${USERNAME} aliased to myps to make my life even easier.) To delete I could use either a WHILE loop or XARGS. Since the list of processes was relatively small, either would work in this case. For WHILE:

ps -eo pid,cmd -u quinn | grep "ssh-agen[t]  " | awk '{print $1}' | while read PID; do kill ${PID}; done

And for XARGS:

ps -o pid,cmd -u quinn | grep "ssh -R 2022" | awk '{print $1}' | xargs -n1 kill -TERM
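
As an aside, on systems with pgrep/pkill installed, the grep/awk plumbing can be skipped entirely; something like this should be equivalent (my addition, not what I ran at the time):

pkill -TERM -u quinn -f 'ssh -R 2022'

The -f flag matches against the full command line rather than just the process name.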

A Quick XARGS Note

A quick overview of the syntax for the xargs command as I used it: the -n1 tells xargs to "group" the arguments individually, so it's actually running kill -TERM ${PID} for each PID. For situations like rm where you might want multiple arguments at once, e.g. find /home -name '*.bak' | xargs rm to recursively remove all the *.bak files under /home, larger groups can be used. When a grouping size isn't specified, as in the rm example, xargs will simply use the largest group size that particular command can handle.
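
To see the grouping behavior directly, here is a toy demonstration (mine, not from the original incident):

$ echo a b c d e | xargs -n2 echo
a b
c d
e

With -n2, xargs invokes echo with at most two arguments per run; -n1 does the same for kill above, one PID at a time.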

To Bundle or not To Bundle
https://www.starkandwayne.com/blog/to-bundle-or-not-to-bundle/
Wed, 25 Nov 2015 00:40:33 +0000

Short Answer: Not To Bundle

I recently tore down and recreated my local BOSH lite installation. Re-cloned the repos - everything. Imagine my surprise when I ran into this little gem (har har see what I did thar):

$ ./bin/provision_cf
...
/scripts/generate-bosh-lite-dev-manifest
/System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require': cannot load such file -- bundler/setup (LoadError)
	from /System/Library/Frameworks/Ruby.framework/Versions/2.0/usr/lib/ruby/2.0.0/rubygems/core_ext/kernel_require.rb:55:in `require'
Can only target Bosh Lite Director. Please use 'bosh target' before running this script.

This actually ended up being caused by an OpenSSL issue in El Capitan. Specifically, it appears on a cursory Google search that El Capitan doesn't include OpenSSL. To run BOSH lite locally you will need OpenSSL, so the quickest way to fix this if you have Homebrew is:

$ brew link openssl --force
Linking /usr/local/Cellar/openssl/1.0.2d_1... 1548 symlinks created
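
To confirm that Ruby can now find a working OpenSSL, a quick sanity check (my addition):

ruby -ropenssl -e 'puts OpenSSL::OPENSSL_VERSION'

If that prints a version string instead of a LoadError, bundler's OpenSSL dependency should be satisfied.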

Background

Arriving at that solution was not as direct as I would have liked, as there are a couple of ambiguous things going on in this error. The bit about bundler seems to indicate, well, a bundler problem. Then there is the message that follows the ruby/bundler errors, which seems to indicate that I am not targeting a BOSH lite, which of course I am:

$ bosh target
Current target is https://192.168.50.4:25555 (Bosh Lite Director)

Running gem install bundler ; bundle install did update some gems, and also brought me back face-to-face with my old friend the nokogiri install error, but ultimately did not resolve this issue.

So how did I ultimately come across the OpenSSL issue? A co-worker who is Ruby-familiar mentioned the issue about bundler missing OpenSSL on El Capitan and wondered aloud if that could be the cause, and that turned out to be it.

Pre-Flight Checks: Sprucing up Concourse with Test Concourse
https://www.starkandwayne.com/blog/pre-flight-checks-sprucing-up-concourse-with-test-concourse/
Thu, 29 Oct 2015 22:00:24 +0000

As part of a project, a client wants to have a self-deploying Concourse. Basically that means that once everything is set up, the alpha Concourse will deploy the beta Concourse and, if that completes successfully, the beta Concourse will then update/deploy the alpha Concourse. Because automation is shiny.

Current Goal

Ensure that the beta ("test") Concourse mirrors the alpha ("production") Concourse.

What This Means

Basically, both Concourses need to be configured identically except for the necessary deviations for networking.

Important Note

Since the client is using vSphere, I am building this off Concourse's vsphere.yml example manifest.

CDC Pipeline: Concourse Deploys Concourse

Sprucify

For efficiency, I spruced the concourse.yml manifest. What is spruce?

A quick note about spruce

spruce is a CLI tool primarily being developed by Geoff Franks. The goal is to have spruce be the next generation replacement for spiff, which is the current tool used to generate BOSH manifests. Here's its inaugural blog post and here is where the code is located on Github.

A Quick How To

First things first: I defined the static IPs in the concourse network like this:

static:
- x.x.x.50 - x.x.x.60

Now with spruce I can reference the IP address x.x.x.50 as static_ips(0) in the concourse network, likewise x.x.x.51 is static_ips(1), etc. So this:

networks:
  - name: concourse
    static_ips: x.x.x.50

Is the same as this:

networks:
  - name: concourse
    static_ips: (( static_ips(0) ))
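
If a job needs more than one static IP, my understanding is that the same spruce operator takes multiple indices, so (continuing the range above):

networks:
  - name: concourse
    static_ips: (( static_ips(0, 1, 2) ))

would resolve to x.x.x.50 through x.x.x.52.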

Instead of using anchor syntax for the ATC credentials as the Concourse example manifest does, I created meta at the top level and grabbed both values:

meta:
  atc_db_name: atc
  atc_db_role:
    name: atc
    password: ATCPASSWORD
...
properties:
  atc:
    postgresql:
      database: (( grab meta.atc_db_name ))
      role: (( grab meta.atc_db_role ))
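
For illustration, after a spruce merge with --prune meta the grabs above should flatten to plain values - roughly this in the output manifest:

properties:
  atc:
    postgresql:
      database: atc
      role:
        name: atc
        password: ATCPASSWORD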

I grabbed IP addresses referenced outside of the concourse network like so:

consul:
  agent:
    servers:
      lan: (( grab jobs.discovery.networks.[0].static_ips ))

To create alpha.yml, the manifest for the alpha Concourse's BOSH deploy, run:

spruce merge --prune meta alpha-concourse.yml > alpha.yml

Note that the sections in the output will be alphabetized, which can be a little disorienting if you don't expect it.

Now For The Beta Concourse

Creating the beta Concourse manifest is easy as pi(e) since I will only be changing the networking. In fact, there are only two top level keys: the name and networks. That's it!
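
As a sketch (the name and addresses here are made up - your networking will differ), the beta overlay needs nothing more than:

name: beta-concourse
networks:
- name: concourse
  subnets:
  - range: y.y.y.0/24
    gateway: y.y.y.1
    static:
    - y.y.y.50 - y.y.y.60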

To generate beta.yml, the manifest for the beta Concourse's BOSH deploy, run:

spruce merge --prune meta alpha-concourse.yml beta-concourse.yml > beta.yml

Order matters with spruce - in the command above, the second file overrides the first. This priority holds true no matter how many files you merge.
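
A toy example of that precedence:

$ echo "key: from-first"  > a.yml
$ echo "key: from-second" > b.yml
$ spruce merge a.yml b.yml
key: from-second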

Some Cool Things

spruce is very helpful in this project for a couple reasons:

  • If I ever need to change the static IP range in the future I don't need to go through the manifest and fix the IPs in multiple places.
    • Relatedly, this is why the test manifest is so small: since all the static IPs are generated, all I had to do was change the IP ranges in the network. No fishing around for stray "old" IP addresses here!
  • Tucking the ATC credentials under meta was convenient for referencing the credentials throughout the file and since spruce has a prune feature I can eliminate that section from the resulting manifest.

S&W CDC Templates

In our CDC repository I have added:

  • The vSphere spruced concourse templates - AWS and BOSH Lite to follow
  • Scripts in bin/ to make the manifests and deploy the pipeline
  • The Concourse pipeline files for each pipeline.

The post Pre-Flight Checks: Sprucing up Concourse with Test Concourse appeared first on Stark & Wayne.

]]>

I’m the microbosh! No, I’m the microbosh! https://www.starkandwayne.com/blog/im-the-microbosh-no-im-the-microbosh/ Mon, 28 Sep 2015 20:26:04 +0000 https://www.starkandwayne.com//im-the-microbosh-no-im-the-microbosh/

What happens when bosh reports that a disk doesn't exist when it clearly does? This is an issue we encountered using vSphere, and the troubleshooting was a little interesting.

The symptoms

We have a client with a rather large deployment in vSphere. While building the pipeline, we noticed that occasionally a microbosh deployment would fail and we'd have to perform "surgery" on the bosh-deployments.yml file. What we'd see is bosh micro deploy failing like this:

...
Stemcell info
-------------
Name:    bosh-vsphere-esxi-ubuntu-trusty-go_agent
Version: 3087
Will deploy due to configuration changes
  Started prepare for update
  Started prepare for update > Stopping agent services. Done (00:00:01)
  Started prepare for update > Unmount disk. Done (00:00:01)
  Started prepare for update > Detach diskat depth 0 - 20: unable to get local issuer certificate
/var/lib/gems/1.9.1/gems/bosh_vsphere_cpi-2.0.0/lib/cloud/vsphere/disk_provider.rb:52:in `find': Could not find disk with id 'disk-e5b812c1-6e20-4365-aba1-99e24f82b889' (Bosh::Clouds::DiskNotFound)
	from /var/lib/gems/1.9.1/gems/bosh_vsphere_cpi-2.0.0/lib/cloud/vsphere/cloud.rb:321:in `block in detach_disk'
	from /var/lib/gems/1.9.1/gems/bosh_common-1.3072.0/lib/common/thread_formatter.rb:49:in `with_thread_name'
	from /var/lib/gems/1.9.1/gems/bosh_vsphere_cpi-2.0.0/lib/cloud/vsphere/cloud.rb:319:in `detach_disk'
	from /var/lib/gems/1.9.1/gems/bosh_cpi-1.3072.0/lib/cloud/internal_cpi.rb:22:in `invoke_cpi_method'
	from /var/lib/gems/1.9.1/gems/bosh_cpi-1.3072.0/lib/cloud/internal_cpi.rb:10:in `method_missing'
	from /var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.3072.0/lib/bosh/deployer/instance_manager.rb:327:in `block in detach_disk'
	from /var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.3072.0/lib/bosh/deployer/instance_manager.rb:85:in `step'
	from /var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.3072.0/lib/bosh/deployer/instance_manager.rb:326:in `detach_disk'
	from /var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.3072.0/lib/bosh/deployer/instance_manager.rb:190:in `update'
	from /var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.3072.0/lib/bosh/deployer/instance_manager.rb:102:in `block in update_deployment'
	from /var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.3072.0/lib/bosh/deployer/instance_manager.rb:92:in `with_lifecycle'
	from /var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.3072.0/lib/bosh/deployer/instance_manager.rb:102:in `update_deployment'
	from /var/lib/gems/1.9.1/gems/bosh_cli_plugin_micro-1.3072.0/lib/bosh/cli/commands/micro.rb:179:in `perform'
	from /var/lib/gems/1.9.1/gems/bosh_cli-1.3072.0/lib/cli/command_handler.rb:57:in `run'
	from /var/lib/gems/1.9.1/gems/bosh_cli-1.3072.0/lib/cli/runner.rb:56:in `run'
	from /var/lib/gems/1.9.1/gems/bosh_cli-1.3072.0/bin/bosh:19:in `<top (required)>'
	from /usr/local/bin/bosh:23:in `load'
	from /usr/local/bin/bosh:23:in `<main>'

Looking at vSphere, the disk did in fact exist. Not only did it exist, it belonged to the VM CID that is in the bosh-deployments.yml file:

---
instances:
- :id: 1
  :name: microbosh
  :uuid: {{UUID}}
  :stemcell_cid: {{SCID}}
  :stemcell_sha1: {{SSHA}}
  :stemcell_name: bosh-stemcell-3087-vsphere-esxi-ubuntu-trusty-go_agent
  :config_sha1: {{CONF}}
  :vm_cid: vm-36a656b0-b1b3-4fd4-9cf7-63fa95a5bc63
  :disk_cid: disk-e5b812c1-6e20-4365-aba1-99e24f82b889

What did the log say?

The log was a little less than helpful, as the deployment seemed to go through the normal motions of the deploy until encountering an error such as this one:

...
I, [2015-09-28T14:16:35.558692 #28566] [attach_disk(vm-36a656b0-b1b3-4fd4-9cf7-63fa95a5bc63, disk-e5b812c1-6e20-4365-aba1-99e24f82b889)]  INFO -- : Attaching disk
I, [2015-09-28T14:16:37.683131 #28566] [attach_disk(vm-36a656b0-b1b3-4fd4-9cf7-63fa95a5bc63, disk-e5b812c1-6e20-4365-aba1-99e24f82b889)]  INFO -- : Finished attaching disk
I, [2015-09-28T14:17:10.390309 #28566] [0x783ffc]  INFO -- : Director is ready:
...
E, [2015-09-28T15:16:26.775114 #29900] [0xcf5800] ERROR -- : not unmounting disk-e5b812c1-6e20-4365-aba1-99e24f82b889 as it doesn't belong to me: []
I, [2015-09-28T15:16:28.919513 #29900] [detach_disk(vm-36a656b0-b1b3-4fd4-9cf7-63fa95a5bc63, disk-e5b812c1-6e20-4365-aba1-99e24f82b889)]  INFO -- : Detaching disk: disk-e5b812c1-6e20-4365-aba1-99e24f82b889 from vm: vm-36a656b0-b1b3-4fd4-9cf7-63fa95a5bc63
...

So basically the disk would successfully attach to the VM, but when it came time to detach, the VM would respond with "it isn't mine!" I feel like there's a bad joke in here, but I digress.

So what happened?

While we continued to investigate, we started to notice that some of the INFO statuses did not make sense. For example, we saw this VM being created:

...
INFO -- : Setting VM env: {"vm"=>{"name"=>"vm-7c516a12-8ad5-4033-8a92-ff35030e3a2d" ...

while the VM CID in the bosh-deployments.yml file was vm-700f482f-36cf-4c86-95c5-c86a18499c4a. This caused us to investigate a little further.

What happened

Essentially, what appears to have happened is that the original deployment wasn't saved correctly - or at all - in bosh-deployments.yml (at this point it is hard to say which). Every subsequent redeploy then tried to create a new microbosh with the same specifications, including the same IP address. Since vSphere allows multiple VMs to have the same IP without warning, by the time we caught the issue there were over a dozen VMs in vSphere sharing that IP address. This collision is what confused the create/destroy steps of the normal deployment process, which would then fail.

Since we deploy the microboshes via a pipeline - and thus see this issue anew whenever a microbosh deploy fails for any reason - we are adding a script to our vSphere pipeline that detects whether an IP is in use and, if so, whether it is in use only by the VM specified in bosh-deployments.yml.
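
A minimal sketch of what that guard might look like, assuming the govc CLI is installed and pointed at the right vCenter (GOVC_URL and friends set); the IP and file path are placeholders:

#!/bin/bash
# Hypothetical pre-deploy check: fail if the microbosh IP is claimed by any VM
# other than the one recorded in bosh-deployments.yml.
set -euo pipefail

IP="x.x.x.x"                                             # microbosh static IP
EXPECTED=$(awk '/:vm_cid:/ {print $2}' bosh-deployments.yml)

# govc find lists every VM whose guest tools report this IP address.
for vm in $(govc find / -type m -guest.ipAddress "$IP"); do
  if [ "$(basename "$vm")" != "$EXPECTED" ]; then
    echo "IP $IP is also claimed by $vm (expected only $EXPECTED)" >&2
    exit 1
  fi
done

(This works because BOSH names its vSphere VMs after their CIDs, so the vm_cid in bosh-deployments.yml should match the VM name in the vSphere inventory.)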

The post I’m the microbosh! No, I’m the microbosh! appeared first on Stark & Wayne.

]]>

Running a Mac VM on a Mac using VirtualBox https://www.starkandwayne.com/blog/running-a-mac-vm-on-a-mac-using-virtualbox/ Fri, 04 Sep 2015 14:59:01 +0000 https://www.starkandwayne.com//running-a-mac-vm-on-a-mac-using-virtualbox/

Recently, when working with a client, we encountered a situation where it would be beneficial to run a Mac VM on our Mac laptops, so I decided to investigate. I was in luck! It turns out this is actually really easy to do.

To get started, download Yosemite from the App Store.

Fair warning: the download is ~5.5GB. It took me about half an hour to download, but depending on your connection speed your mileage may vary.

My Setup

  • 2015 Macbook Pro
  • 16 GB RAM
  • 2.5 GHz i7
  • 512 GB SSD
  • VirtualBox v4.3.x

The following instructions are heavily borrowed from frdmn's notes blog with additional notes added as needed.

Creating the Mac OS X Disk Image

  1. Install iesd, to customize OS X InstallESD:
    gem install iesd
  2. Turn install image into base system:
    iesd -i "/Applications/Install OS X Yosemite.app" -o yosemite.dmg -t BaseSystem
  3. Convert into UDSP (sparse image) format:
    hdiutil convert yosemite.dmg -format UDSP -o yosemite.sparseimage
  4. Mount the InstallESD:
    hdiutil mount "/Applications/Install OS X Yosemite.app/Contents/SharedSupport/InstallESD.dmg"
  5. Mount the sparse image:
    hdiutil mount yosemite.sparseimage
  6. Copy the base system into the sparse image:
    cp "/Volumes/OS X Install ESD/BaseSystem."* "/Volumes/OS X Base System/"
  7. Unmount InstallESD:
    hdiutil unmount "/Volumes/OS X Install ESD/"
  8. Unmount the sparse image:
    hdiutil unmount "/Volumes/OS X Base System/"
  9. Unmount both mounted disks:
    diskutil unmountDisk $(diskutil list | grep "OS X Base System" -B 4 | head -1)
    diskutil unmountDisk $(diskutil list | grep "OS X Install ESD" -B 4 | head -1)
  • If you have difficulty/receive an error, you can also do this in Disk Utility. "Right" click on the disk image (either InstallESD.dmg or yosemite.sparseimage) and then select Eject Disk Image. Repeat for the other disk as needed.
  10. Convert back to the UDZO compressed image format:
    hdiutil convert yosemite.sparseimage -format UDZO -o yosemitevagrantbox.dmg

Common Error

What to do if you encounter this error on the last step:
hdiutil: detach failed - No such file or directory

  1. Remount the sparse image file. One way to do this is to open Finder and double-click on the sparse image file.
  2. Use hdiutil detach instead of hdiutil unmount:
    hdiutil detach /Volumes/OS\ X\ Base\ System/
  3. Re-run the hdiutil convert command.

Creating the VM in VirtualBox

  1. Click "New" or ctrl+N/cmd+N to create a new VM
  2. Give it a name, select Type: Mac OS X and Version: Mac OS X (64-bit) if these are not populated for you (they will be if you use "Mac" in the name). Click continue.
  3. Default of 2 GB of RAM is adequate for a quick test, but if you plan on using the VM for more than 5 seconds I recommend 4 GB if you can spare it. Otherwise the lag is really frustrating. In any event, click Continue.
  • Note if you change the RAM to 4 GB you will need to change your chipset later.
  1. If you do not already have a virtual hard drive leave the default selection and click "create".
  2. I used VDI, click Continue.
  3. I left the disk as dynamically allocated. Click Continue.
  4. I left the default 20 GB. Click Create.
  5. If you upped your RAM, "right" click on the new VM and click "Settings". Then go to System -> Motherboard to change the chipset to PIIX3. Feel free remove "Floppy" from the boot order while you're in there.
  6. "Right" click on the new VM and click "Start".
  7. Select the disk image as an "optical disk":
    Select disk image
  • You do not need to create an ISO, although if you do it will still run normally. If you would like to create an ISO just run this command:
    hdiutil convert yosemitevagrantbox.dmg -format UDTO -o yosemitevagrantbox && mv yosemitevagrantbox.cdr yosemitevagrantbox.iso
  1. Wait a few minutes while the installer runs. Grab a coffee?
  2. Once the installer starts go ahead and select your language.
  3. Go into Disk Utility and create a formatted partition. Instructions for how to do this are in the "Creating a formatted partition..." section below.
  4. When prompted, install on the partition you created in the previous step.
  5. Go through prompts as normal (iCloud, etc.). I personally didn't sign into iCloud/etc. for a test VM.
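
For those who prefer the command line, here is a rough VBoxManage equivalent of the GUI steps above. It's a sketch - untested across VirtualBox releases - and the VM name, sizes, and ISO path are illustrative:

# Sketch: CLI equivalent of the GUI steps above (names/paths are examples).
VBoxManage createvm --name "yosemite-test" --ostype MacOS_64 --register
VBoxManage modifyvm "yosemite-test" --memory 4096 --chipset piix3 \
  --boot1 dvd --boot2 disk --boot3 none --boot4 none    # no floppy in the boot order
VBoxManage createhd --filename yosemite-test.vdi --size 20480    # 20 GB, dynamically allocated
VBoxManage storagectl "yosemite-test" --name SATA --add sata
VBoxManage storageattach "yosemite-test" --storagectl SATA --port 0 --device 0 \
  --type hdd --medium yosemite-test.vdi
VBoxManage storageattach "yosemite-test" --storagectl SATA --port 1 --device 0 \
  --type dvddrive --medium yosemitevagrantbox.iso
VBoxManage startvm "yosemite-test"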

Creating a formatted partition with Disk Utility

  1. Start Disk Utility:
    Start Disk Utility
  2. Select 1 Partition:
  3. Name & Apply the partition+format:
  4. Click Partition:
    Partition

Make sure to use the partition for the install:

Install Partition

What next?

Enjoy your test VM! Create and destroy at will! Muhahahhaaha.

The post Running a Mac VM on a Mac using VirtualBox appeared first on Stark & Wayne.

]]>

Day in the Life: Troubleshooting with Concourse https://www.starkandwayne.com/blog/troubleshooting-with-concourse-cannot-pull-stemcells/ Wed, 12 Aug 2015 02:30:20 +0000 https://www.starkandwayne.com//troubleshooting-with-concourse-cannot-pull-stemcells/

This is another interesting day in the life of modern platforms (BOSH and Cloud Foundry) and automation (Concourse).

The Problem

Recently we ran into an issue with Concourse. After building a seemingly successful pipeline and using it to deploy the microbosh to AWS, we ran into a snag where the deploys always failed for the same reason: the deployment couldn't find the AWS instance using the ID in the manifest. But why?

Symptoms

Initially diagnosing the behavior was reasonably straightforward: when running the pipeline, an error like the following would appear:

Started deploy micro bosh > Mount disk. Done (00:00:01)instance i-22222222 has invalid disk: Agent reports  while deployer's record shows vol-11111111.

Fix attempt #1

Going into AWS -> EC2 -> Volumes and searching for vol-11111111 would easily pull up the volume, but it was attached to a different instance, i-33333333. In fact, going into Instances and searching for i-22222222 showed that there were no instances with that ID!
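
If you prefer the CLI to the console for this kind of spelunking, the same checks can be run with the AWS CLI (assuming it is installed and credentialed):

$ aws ec2 describe-volumes --volume-ids vol-11111111 --query 'Volumes[0].Attachments'
$ aws ec2 describe-instances --instance-ids i-22222222

The first shows exactly which instance the volume is attached to; the second should error with InvalidInstanceID.NotFound when the instance really is gone.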

This means that for some reason the bosh-deployments.yml file - the "database" for bosh micro deploy state - is wrong. At this point I wasn't yet sure why it had the incorrect state, so I fixed it to match reality according to the AWS Console:

---
instances:
- :id: 1
  :name: microbosh
  :uuid: (( some UUID ))
  :stemcell_cid: (( AMI ))
  :stemcell_sha1: (( some SHA ))
  :stemcell_name: (( some stemcell ))
  :config_sha1: (( some SHA ))
  :vm_cid: i-33333333
  :disk_cid: vol-11111111
disks: []
registry_instances:
- :id: 14
  :instance_id: i-33333333
  :settings: (( bunch of stuff ))

Great! Everything is kosher. Trigger the pipeline aaaaaaannnnnnndddddd…

Started deploy micro bosh > Mount disk. Done (00:00:01)instance i-33333333 has invalid disk: Agent reports  while deployer's record shows vol-11111111

Going back into AWS showed that i-33333333 had been terminated, and inspecting volume vol-11111111 showed it now attached to a new instance, i-44444444 - yet the bosh-deployments.yml file still had i-33333333.

Hmmm.

Fix Attempt #2

Using one of our earlier blog posts as a guide, I cleaned out all the "dynamic bits" and tried triggering the pipeline again. Unfortunately this did not resolve the issue: even though neither the instance_id nor vm_cid fields were even present when I started the pipeline, when it ran the wrong instance ID was populated in both places and the pipeline terminated with the same error.

Fix Attempt #3

At this point I deleted the EC2 instance that was supposed to be attached to the persistent disk. (Note that it is probably obvious that the volume is not set to delete when the instance deletes or else the volume would have been disappearing as well, but you know I double checked that anyway. Because human error, and what not.) Then I created a NEW instance manually using the criteria in the manifest. I updated the bosh-deployments.yml file and did a manual bosh deploy. SUCCESS! I triggered the pipeline to run - SUCCESS!

BUT because of the change I made to the pipeline, the pipeline was triggered to run a second time after the successful completion of my manual run. This time it FAILED.

And the instance ID was wrong again.

Deeper Troubleshooting

Clearly, something a bit deeper was going on in the pipeline itself. Since this particular pipeline pushes its changes to GitHub as a sort of audit trail, I looked through all of its git commits to track down the problem. This is where the problem became a little more obvious.

The commits showed that the problem was rooted between when our pipeline was triggered and where it grabbed the deployments. Basically, the pipeline captured the "state of the universe" at the beginning and used that to populate bosh-deployments.yml, then ran and changed that state - but it kept deploying with the now-outdated bosh-deployments.yml. This, of course, caused the failure.

To prevent the pipeline from triggering prematurely and running with out-of-date information, I updated the resources in the pipeline.yml file to ignore our pipeline-inputs.yml:

resources:
- name: aws-pipeline-changes
  type: git
  source:
    uri: {{pipeline-git-repo}}
    branch: {{pipeline-branch}}
    paths:
    - environments/aws/templates/pipeline
    - environments/aws/templates/bin
    - environments/aws/templates/releases
    ignore_paths:
    - environments/aws/templates/pipeline/pipeline-inputs.yml
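
After editing pipeline.yml the new configuration still has to be pushed to Concourse; with a reasonably recent fly CLI that looks something like this (the target and file names here are illustrative):

$ fly -t ci set-pipeline -p cdc -c pipeline.yml -l pipeline-inputs.yml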

After some cautious optimism I ran the pipeline again. The good news: the original issue was fixed. The bad (ish?) news: it failed with a new error:

unexpected end of JSON input

Welp, at least our bosh-deployments.yml file was fixed. Huzzah.

Fix one Bug Find Another: The JSON Error

The JSON error appeared right at the build stage - before the pipeline would grab anything and do its magic. In the UI, both the stemcell-aws asset and the environment were in orange. When I clicked on stemcell-aws, I saw that it wasn't able to grab the stemcell - it was just dying.

Looking through the resources in pipeline.yml, the stemcell-aws resource was using bosh-io-stemcell. In Concourse itself, that resource is located at bosh-io-stemcell-resource. The assets/check file is where the curl command runs to grab the stemcell:

curl --retry 5 -s -f http://bosh.io/api/v1/stemcells/$name -o $stemcells

So I ran this command on the jump box that hosts our pipeline and it failed. As an important aside, the reason it failed is restrictions on our client's network: only HTTPS connections are allowed, and HTTPS connections are redirected before leaving the company intranet. (And because curl runs with -s -f, a failure writes nothing at all, which is presumably why the resource later chokes on "unexpected end of JSON input".) The fix was as simple as changing the curl command to use HTTPS:

curl --retry 5 -s -f https://bosh.io/api/v1/stemcells/$name -o $stemcells

After making the pull request, someone pointed out that the bosh-io-release resource had a similar line of code, so it would probably hit the same problem eventually. To head that off, we submitted a pull request with the same fix for that resource as well.

Resolved!

After the Concourse team merged our pull request to fix the JSON error, we were able to definitively verify that our initial issue was resolved with a series of successful pipeline deployments. ✌.ʕʘ‿ʘʔ.✌

The post Day in the Life: Troubleshooting with Concourse appeared first on Stark & Wayne.

]]>
