Eirini: Combine the strengths of Cloud Foundry and Kubernetes

Unite!

Summary: IBM's Eirini Project allows Cloud Foundry's runtime component to be replaced with Kubernetes. Combining these two technologies enables both operators and developers to be more productive.

Technology executives and managers at large enterprises have a tough job in 2019 because the landscape of enterprise cloud IT is complex and rapidly changing. Deciding which technologies are right for your business's needs is a daunting task.

One of these decisions is which platform will run your organization's apps. Kubernetes and Cloud Foundry are the two main players in this space, and choosing between them is not easy: each comes with its own strengths and weaknesses for operators and developers alike.

Cloud Foundry's biggest strength is its developer experience: A simple cf push transforms a developer's source code into a running, routable container.
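
As a minimal sketch of that workflow (the API endpoint and app name below are placeholders, not values from this post):

cf login -a https://api.cf.example.com
cf push my-app     # stages the source with a buildpack and starts it in a container
cf app my-app      # shows the running instance, its state, and its route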

Kubernetes' biggest strengths are container orchestration itself and its massive, Google-backed community.

IBM's Eirini project allows its users to take advantage of both of these strengths in a single platform by allowing Cloud Foundry to run its apps on a Kubernetes cluster. In other words, Eirini replaces CF's Diego runtime component with Kubernetes.

With this configuration, Cloud Foundry's developer experience remains unchanged. However, an operations team deploying and operating CF no longer needs to learn how to network, debug, and scale containers running on a niche, CF-dedicated runtime like Diego. They can instead use their existing Kubernetes knowledge and expertise to support both the apps that are running on Cloud Foundry and any other services running directly on Kubernetes that apps consume.

Things get really interesting when Eirini is combined with SUSE Cloud Foundry (SCF), a distribution of Cloud Foundry in which each component of the platform runs inside of a container instead of on a BOSH-deployed VM. SCF can run on a Kubernetes cluster and use Eirini to push its apps to that same Kubernetes cluster. This way, platform operators don't have to spend their time learning BOSH and Diego just to deploy CF. They can instead spend all of their time making your organization better at running Kubernetes.

Behind the scenes, Eirini accomplishes its runtime-switching magic via a construct called the Orchestration Provider Interface (OPI). This interface is inspired by BOSH's Cloud Provider Interface (CPI). In the same way that the CPI lets BOSH's user experience stay the same regardless of the cloud provider (AWS, GCP, Azure) that infrastructure is being provisioned on, Eirini's OPI lets the CF developer experience stay the same regardless of which container runtime the apps actually end up running on.

Eirini takes the CF commands given to the OPI and turns them into runtime-specific commands that result in your app running on Kubernetes as a StatefulSet.
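
If you want to see this for yourself after a push, something like the following should show the app's StatefulSet and pods (the namespace name is an assumption; it depends on how your Eirini deployment is configured):

kubectl get statefulsets --all-namespaces    # pushed CF apps appear here as StatefulSets
kubectl get pods -n eirini                   # "eirini" is a commonly used namespace, but yours may differ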

The Eirini Project's vision is a bit bigger than just integrating Cloud Foundry with Kubernetes, though. The OPI provides the foundation for Cloud Foundry to act as a developer's front-end to any container runtime. This will allow developers to keep using the same cf push experience that they love, even as the world of container runtimes continues to change and develop.

Eirini's development can be followed in the project's GitHub repository, which also links to instructions for deploying to an existing Kubernetes cluster. Those deployment instructions use the SCF + Eirini configuration mentioned above in order to deploy CF to Kubernetes and then push CF's apps to that same Kubernetes cluster.

CredHub: Keys must be PEM-encoded PKCS#1 keys

CredHub keeps your credentials out of your configuration files.

On a recent project, I was adding certificates and their private keys to a CredHub instance so that Concourse could retrieve them to configure and then deploy a Cloud Foundry foundation that would then use these certificates.

To do this, I ran:

credhub set --name /path/to/certificate --certificate "$(cat certchain.pem)" --private "$(cat privatekey.pem)"

However, the CredHub CLI gave me the following error:

The provided key format is not supported. Keys must be PEM-encoded PKCS#1 keys.

I did not immediately know how to fix this because the private keys had been provided to me by a separate team. I could not just re-generate them and check a "Please give me a PKCS#1 key" option, and I wasn't sure what the difference was between PKCS#1 keys and whatever format my keys were in.

Thanks to this CredHub error, I now know that my keys were in PKCS#8 format. This post shows how to convert between PKCS#1 and PKCS#8 and explains the difference between the two.

What are PKCS#1 keys?

PKCS#1 keys are private keys of the form:

-----BEGIN RSA PRIVATE KEY-----
	<Key Payload>
-----END RSA PRIVATE KEY-----

PKCS#8 keys (which the ones I had been provided were) are of the form:

-----BEGIN PRIVATE KEY-----
	<Key Payload>
-----END PRIVATE KEY-----

The difference between these two representations is that PKCS#1 identifies the algorithm (RSA) in its envelope, the first and last lines of the file, while PKCS#8 encodes that algorithm identifier inside the key payload itself.

As their numbers imply, PKCS#8 was released after PKCS#1. Specifying the algorithm in the payload instead of in the envelope allowed for flexibility and compatibility that later formats, such as PKCS#12, built upon.

Converting PKCS#8 to PKCS#1

Now one might think, "Wow, I just need to add RSA to my PKCS#8 keys' envelopes and they'll become PKCS#1 keys??"

Not quite.

PKCS#8 includes the cipher information in its key payload, so a private key in PKCS#8 format has a completely different key payload than the equivalent key in PKCS#1 format. Convert your keys and then look at their differing contents and you will see what I mean.

Converting a PKCS#8 key to a PKCS#1 key is incredibly easy. Simply run:

openssl rsa -in privateKeyPKCS8.pem -out privateKeyPKCS1.pem
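
As a quick sanity check (assuming the file names from the command above), the converted key's envelope should now name RSA:

head -1 privateKeyPKCS1.pem
# -----BEGIN RSA PRIVATE KEY-----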

Now my previous credhub set command works and properly sets the certificate and key:

credhub set --name /path/to/certificate --certificate "$(cat certchain.pem)" --private "$(cat privatekeyPKCS1.pem)"

Converting PKCS#1 back to PKCS#8:

openssl pkcs8 -topk8 -inform PEM -outform PEM -in privateKeyPKCS1.pem -out privateKeyPKCS8.pem -nocrypt

Air Gapping: A Moat for the 21st Century

What is Air Gapped?

An air gapped environment is one that is not accessible from the internet and cannot access the internet.

Enterprises use air gaps to keep nefarious external actors out of their network and to prevent their own engineers from installing arbitrary software from the internet without it first going through a formal review process that ensures it meets the organization's security standards.

Obstacles

An air gapped environment has its share of challenges:

  • A lot of software assumes internet access by default.
  • It adds operational overhead.
  • It slows down operations due to bureaucratic delay.

For app and container platforms such as Cloud Foundry or Kubernetes, this type of environment brings a number of implications for both application developers and platform operators. This blog post will focus on the implications for operators.

Developers

Application developers must write their apps so that they do not rely on internet resources during either the build process or at runtime. In Cloud Foundry, developers should use the offline version of buildpacks so that their apps can be cf pushed successfully without internet access.
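
A small sketch of what that looks like from the developer's side (the buildpack name below is an assumption; use whichever offline buildpacks your operators have installed):

cf buildpacks                                # list the buildpacks installed on the foundation
cf push my-app -b java_buildpack_offline     # pin the app to an offline buildpack so staging never reaches the internet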

For operators, things get more complicated.

Figure 1. External Services for Resources

Operators

By default, installing a platform relies on a number of internet resources: GitHub, Docker Hub, package repositories, and Pivotal Network in the case of a PCF install. In an air gapped environment, each of these resources must instead be provided by an internal service.

Examples of the internal services that an organization can run in order to replace a given external service:

  • An external repository like GitHub/Bitbucket/GitLab can be replaced with an internal “enterprise” instance of the same service.
  • Docker Hub (or any external container registry) can be replaced with VMware's Harbor.
  • Pivotal Network and package repositories can be replaced with an internal S3-compatible blobstore such as MinIO, Dell ECS, or Ceph Object Storage.

Target Resources

The automation or manual processes used to install the platform in the default, internet-based scenario must then be modified to grab resources from these internal locations instead of the internet-based ones. (See Figure 2.)

Figure 2. Use Internal Services for Resources

The same goes for upgrades of platform components: the default automation or manual processes rely on an operator's ability to access the internet to retrieve the new binaries for the upgrade. These binaries must instead come from an internal service, and the automation or manual processes must be updated to reflect this.
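
As one hedged example of what that modification can look like, an upgrade step that normally pulls a BOSH release from the internet would instead point at the internal blobstore (the internal URL below is hypothetical):

# Internet-based default:
#   bosh upload-release https://bosh.io/d/github.com/cloudfoundry/cf-release
# Air gapped equivalent, pulling the same release from the internal blobstore:
bosh upload-release https://blobstore.internal.example.com/releases/cf-release.tgz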

Our theoretical organization's install and upgrade processes are now modified to grab their required resources from internal services instead of trying to use the internet. Great! But that's only half of the problem. How do these internal services get populated with the resources that our installs and upgrades will require in the first place? That depends on the organization and what it will allow based on the security standards it is aiming to meet.

Ideal Scenario

In the ideal situation, the entire environment is still air gapped but the platform operators are allowed to whitelist a VM that can talk to the internet. This VM then acts as the worker VM for automation that downloads the necessary resources from the internet and puts those resources into the appropriate internal service.

Figure 3. The worker updates resources on internal services.
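
As a rough sketch of what that worker's automation does (the registry hostname, MinIO alias, release URL, and image below are placeholders, not values from this post):

# Mirror a container image from Docker Hub into the internal Harbor registry
docker pull nginx:1.16
docker tag nginx:1.16 harbor.internal.example.com/library/nginx:1.16
docker push harbor.internal.example.com/library/nginx:1.16

# Copy a release tarball from GitHub into the internal S3-compatible blobstore
# ("internal-minio" is assumed to be a previously configured MinIO client alias)
curl -LO https://github.com/someorg/somerelease/releases/download/v1.2.3/somerelease-1.2.3.tgz
mc cp somerelease-1.2.3.tgz internal-minio/releases/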

This is the ideal scenario for a few reasons:

  • Adding a new binary, image, or repo to be accessed from within the air gapped environment is a matter of modifying the automation config file to include the desired resource. No humans stand between operators and the resources they need to do their jobs.
  • These automation config files then double as a list of exactly what has been pulled into the environment, as long as:
  1. The automation config files are properly version controlled.
  2. SSH access to the whitelisted VM is sufficiently locked-down.

However, in many organizations, allowing even a small set of VMs to talk to the internet for this purpose is completely off the table. This is especially common in military or government environments. In these cases, the required process for moving resources into the air gapped environment can vary, but it is usually slow and involves people who are not on your team.

Make a Plan

For both of these reasons, it is extremely important for an operations team to plan ahead by keeping a list of exactly which resources an install or upgrade will require. Every binary, container, or repo that is not moved into the air gapped environment on the first pass will cost your team the time it takes to run the process again. It also risks burning your team's political capital with the teams responsible for moving resources into the environment: they likely do not care whether your team is successful, and they may see a stream of "Hey, we forgot this binary, can you move that to server XYZ for us?" requests as unnecessary favors.

When you have a plan and you know the requirements, you can overcome the hurdles of putting an air gap between your environment and external networks while still letting developers use the platform as transparently as they did before.

That is because, as a platform operator, you took the time to make the plan. With external-facing services like the source repository, the container image store, and the object store replaced by internal equivalents, you can still run platform automation and upgrades with ease.

Another Example

Recently we gave a talk at the Cloud Foundry Summit 2019 with a client who has successfully air gapped their environment. Vince White, from Agile Defense, shared the stage with me and we discussed these very things. Watch our talk on YouTube if you’re curious about what we did.

And if you need further help or assistance please feel free to reach out to us.
