Public Bare Metal CF Cheaper than Public Cloud CF! Whoa!

Cloud Foundry Leadership at Cloud Foundry Summit Europe

Cloud Foundry Summit EU is over and, as part of our servant leadership culture at Stark & Wayne, we reflect on what we’ve learned. First, we want to thank the Cloud Foundry Foundation for offering us the opportunity to sponsor the Hands-On Labs (HOL) and, in particular, Chris Clark for keeping everyone organized and on track. We might not have evaluated bare-metal solutions had you not offered us this sponsorship opportunity.


We want to thank our fellow presenters from ResilientScale, Dynatrace, SUSE, and Altoros who used the HOL infrastructure. You became our research subjects, providing the data for this admittedly imprecise analysis.


Most of all, we need to thank Packet for reaching out and offering to partner with us; without them, we might not have had the hardware available.

The HOL environment requires one or more Cloud Foundry installations and one or more Kubernetes clusters. In previous years, we ran everything on Google Cloud Platform (GCP). The Cloud Foundry Foundation informed us of its cost expectations and, given the mix of lab requirements at this summit, we had no reason to expect costs would be any different. We instinctively started with GCP, and the cost followed the same trend as in previous years, but we had neglected to reassess public cloud pricing models to see whether more economical options existed.

What are the characteristics of the Hands-On Lab environment?

  1. The HOL environment lifetime is approximately six (6) weeks.
  2. Usage is low at first and ramps up in the final few weeks.
  3. The infrastructure is reproducible in case of disaster.
  4. One or more Kubernetes clusters exist.
  5. One or more Cloud Foundry environments exist.
  6. The shared Cloud Foundry environment should be up 24/7.
  7. Ingress data substantially exceeds egress data.
  8. Application instances running on the platform are not resource-intensive.
  9. The architecture is torn down after the last lab presentation.

I challenged our European team to see if they could find a better solution. I expected that our team would look at Azure, AWS, Alibaba Cloud, and IBM Cloud in addition to GCP and then come back with their recommendation. To my surprise, they suggested we use Packet, a cloud provider that delivers bare metal infrastructure. We still needed to prove the concept could work, so out came my credit card as I watched how much they racked up in daily costs while building out the CF environment.

After 10 long days of development (at approximately the cost of 3 ⅓ days on GCP), we had a working system. Our smoke test was running 100 instances of the CF-Env application. What about network data transfer costs? Like most cloud providers, Packet does not charge for ingress traffic (i.e., downloading releases is free) or for traffic within a single datacenter. Egress traffic, which costs $0.05/GB at Packet compared to $0.12/GB at GCP[1], accounted for less than 5% of the total spend. By the end of the conference, even including the GCP Kubernetes clusters and the three additional extra-large Packet nodes used in the S&W lab, we still saved approximately 42% compared to the North America CF Summit.
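To put those per-gigabyte rates in perspective, here is a minimal back-of-the-envelope comparison in Python. The rates come from the paragraph above; the transfer volume is an illustrative assumption, not a figure measured in our lab:

```python
# Back-of-the-envelope egress cost comparison.
# Per-GB rates are from this post; the volume is a hypothetical example.
PACKET_EGRESS_USD_PER_GB = 0.05  # Packet egress rate
GCP_EGRESS_USD_PER_GB = 0.12     # GCP worldwide egress, 0-1 TB tier [1]

egress_gb = 500  # assumed egress volume, for illustration only

packet_cost = egress_gb * PACKET_EGRESS_USD_PER_GB
gcp_cost = egress_gb * GCP_EGRESS_USD_PER_GB

print(f"Packet: ${packet_cost:.2f}")  # Packet: $25.00
print(f"GCP:    ${gcp_cost:.2f}")     # GCP:    $60.00
print(f"Egress savings: {100 * (1 - packet_cost / gcp_cost):.0f}%")  # 58%
```

Either way, egress was under 5% of our total spend, so the headline savings came from the compute itself.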

The technical details will come in additional blog posts, where we will introduce the MoltenCore project, but here are some unexpected findings:

  • The complete pipelined installation takes approximately 1 hour, from provisioning the Packet servers through installing BUCC and Open Source Small Footprint Cloud Foundry. On GCP, the same journey can take days: creating the account, getting billing in place, requesting quota increases, and finally installing CF. Installing CF alone takes 3+ hours on GCP.
  • If the latest releases were already uploaded and compiled, the Open Source Cloud Foundry install finished in under 6 minutes on the Packet infrastructure.
  • Our lab proved we could install 12 Open Source Cloud Foundry environments in parallel, in under 30 minutes, on a 3-node cluster (using Packet’s larger m2.xlarge.x86 servers) without any errors (see the sketch after this list).
  • From a purely subjective point of view, applications seemed more responsive.
  • A cf push of an application is as fast as, or faster than, pushing on GCP or Pivotal Web Services (PWS).
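For a flavor of what installing 12 environments in parallel can look like, here is a hypothetical Python driver. This is not the actual MoltenCore pipeline (those details are for the follow-up posts); the environment aliases, deployment name, and manifest path are all placeholders:

```python
# Hypothetical fan-out of parallel Cloud Foundry installs; not the actual
# MoltenCore pipeline. Aliases, deployment name, and manifest are placeholders.
import subprocess
from concurrent.futures import ThreadPoolExecutor

ENVIRONMENTS = [f"cf-{i:02d}" for i in range(1, 13)]  # 12 CF environments

def deploy(env_alias: str) -> int:
    # One BOSH deploy per environment; -n skips the interactive confirmation.
    cmd = ["bosh", "-e", env_alias, "-d", "cf", "deploy", "cf-deployment.yml", "-n"]
    return subprocess.run(cmd).returncode

with ThreadPoolExecutor(max_workers=len(ENVIRONMENTS)) as pool:
    return_codes = list(pool.map(deploy, ENVIRONMENTS))

print("failed deploys:", sum(1 for rc in return_codes if rc != 0))
```

Since subprocess.run blocks until each deploy finishes, the pool’s thread count is what bounds the parallelism.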

What really piqued my interest in considering physical infrastructure for future projects is the ability to get isolation at an affordable cost. If you have strict security requirements, tailoring your CF environment is easier because you do not need to worry about other applications running on the same CF environment. You can install your critical applications, and there will be no noisy neighbors. You can also size your CF environment to your needs, so you do not need to over-allocate.

As this project stands today, using Packet’s physical infrastructure works well for demonstration and development environments. The code is a fantastic starting point for a production system: you can evaluate your use cases and edge cases, then add your own software to productize your CF environment.

If you want to try out a physical infrastructure CF, please reach out to Brian Wong or Joshua “JC” Boliek at Packet. Say you saw this blog post or mention CF Summit EU, and let them know S&W sent you!


  1. At the time of writing this blog, the pricing below was for Google’s Network (Egress) Worldwide Destinations (excluding China & Australia, but including Hong Kong). The pricing was higher for China and Australia.

     Monthly Usage    Price per GB
     0 – 1 TB         $0.12
     1 – 10 TB        $0.11
     10+ TB           $0.08
