Chris Weibel, Author at Stark & Wayne
https://www.starkandwayne.com/blog/author/chrisweibel/

Cloud Foundry TCP Routing – The Rest of the Story
https://www.starkandwayne.com/blog/cloud-foundry-tcp-routing-the-rest-of-the-story/ (Thu, 12 May 2022)

The story is real, but what's the part of TCP Routing you still need to know about?


Photo by Gary Bendig on Unsplash

The documentation to configure Cloud Foundry for TCP Routing is a great reference for getting started on your implementation journey, but there are a few missing pieces which I think I can help fill in if you are deploying on AWS.

Assumptions

  • I need an ELB to listen on tcp ports 40000-40099 and forward the traffic to the tcp-routers. The default range of 1024-1033 is fine for some folks; for others, not so much.
  • I want to use a vm_extension so tcp-routers are added back to the ELB as they are recreated, without manual intervention.
  • I have an AWS account with an ELB listener quota of 100, and I want to use all 100.
  • I want to use Terraform to create the ELB and any necessary supporting resources.

A quick sidebar: why an ELB instead of an NLB?

I'm glad you asked. My goal is to have as many tcp ports as possible on a single load balancer. As of this writing, NLBs have a default quota of 50 target groups, and each target group can manage a single port. A classic ELB has a default quota of 100 listeners. 100 > 50, therefore the ELB wins!

The soft quota limit for ELB listeners is documented at https://docs.aws.amazon.com/servicequotas/latest/userguide/request-quota-increase.html
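If you want to double-check the quota in your own account, the AWS CLI Service Quotas commands can list it; a minimal sketch (the quota name matched below is an assumption, confirm it against the full list output for your account):

  # list the ELB quotas and filter for the Classic Load Balancer listener quota
  aws service-quotas list-service-quotas --service-code elasticloadbalancing \
    --query 'Quotas[?contains(QuotaName, `Listeners per Classic`)].[QuotaName,Value]'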

Steps to Implement

  1. Create the ELB with Terraform 
  2. Use the ELB name to modify the Cloud Config
  3. Create the ops file
  4. Set router groups with cf curl
  5. Use cf cli to create shared domain
  6. Test with an app

Step 1: Create the ELB with Terraform

This is one of the places where the documentation isn't 100% helpful, however the good people who have been maintaining BBL help us out. In particular this chunk of Terraform is a great place to start: https://github.com/cloudfoundry/bosh-bootloader/blob/main/terraform/aws/templates/cf_lb.tf#L244-L1041

Supporting a different range of ports requires a few easy changes. Start by replacing the ingress block in the two security group definitions:

  ingress {
    security_groups = ["${aws_security_group.cf_tcp_lb_security_group.id}"]
    protocol        = "tcp"
    from_port       = 1024
    to_port         = 1123
  }

with

  ingress {
    security_groups = ["${aws_security_group.cf_tcp_lb_security_group.id}"]
    protocol        = "tcp"
    from_port       = 40000
    to_port         = 40099
  }

You'll also need to replace the block of listeners defined in the resource aws_elb.cf_tcp_lb:

  listener {
    instance_port     = 1024
    instance_protocol = "tcp"
    lb_port           = 1024
    lb_protocol       = "tcp"
  }
  ...
  98 bottles of listeners on the wall, 98 bottles of listeners...
  ...
  listener {
    instance_port     = 1123
    instance_protocol = "tcp"
    lb_port           = 1123
    lb_protocol       = "tcp"
  }

With something like:

  listener {
    instance_port     = 40000
    instance_protocol = "tcp"
    lb_port           = 40000
    lb_protocol       = "tcp"
  }
  ...
  98 bottles of listeners on the wall, 98 bottles of listeners...
  ...
  listener {
    instance_port     = 40099
    instance_protocol = "tcp"
    lb_port           = 40099
    lb_protocol       = "tcp"
  }

Don't feel like copy/paste/modifying the same 6 lines of code 99 times? Here's a quick python script that you can run, then copy/paste the results into the Terraform file:

start_port = int(input("Enter starting port (40000):") or "40000")
end_port   = int(input("Enter ending port (40099):") or "40099") + 1

for x in range(start_port, end_port):
    print("  listener {")
    print('    instance_port     =', x)
    print('    instance_protocol = "tcp"')
    print("    lb_port           =", x)
    print('    lb_protocol       = "tcp"')
    print("  }")

Cute, right? Anyway, I called this listeners.py, which I can run with python3 listeners.py; copy in the output and enjoy.

If you are going to use just the section of BBL code highlighted above, with the few changes described, you'll need to provide a couple more values for your Terraform:

  • subnets - No guidance here other than to pick two subnets in your VPC
  • var.env_id - When in doubt, variable "env_id" { default = "starkandwayne"}
  • short_env_id - When in doubt, variable "short_env_id" { default = "sw"}. Shameless plug, I know.

After your terraform run is complete, you'll see output like:

Outputs:

cf_tcp_lb_internal_security_group = sg-0f9b6a5c6d63f1375
cf_tcp_lb_name = sw-cf-tcp-lb
cf_tcp_lb_security_group = sg-0e5cd4f4f262a8d87
cf_tcp_lb_url = sw-cf-tcp-lb-1943122948.us-west-2.elb.amazonaws.com

Register a CNAME with your DNS provider that points tcp.APP_DOMAIN at the ELB. In my case:

  • My apps are in *.apps.codex.starkandwayne.com
  • Therefore I'm using tcp.apps.codex.starkandwayne.com as my TCP url I need to register with DNS
  • So tcp.apps.codex.starkandwayne.com has a CNAME record added for sw-cf-tcp-lb-1943122948.us-west-2.elb.amazonaws.com
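Once the record propagates, a quick sanity check (hostnames here are from my environment):

  $ dig +short CNAME tcp.apps.codex.starkandwayne.com
  sw-cf-tcp-lb-1943122948.us-west-2.elb.amazonaws.com.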

Step 2 - Configure Cloud Config

Add to cloud config:

vm_extensions:
  - name: cf-tcp-router-network-properties
    cloud_properties:
      elbs:
        - sw-cf-tcp-lb  # Your name will be in the terraform output as `cf_tcp_lb_name`

A quick update to the bosh director:

$ bosh -e dev update-config --type cloud --name dev dev.yml
Using environment 'https://10.4.16.4:25555' as user 'admin'

  vm_extensions:
  - name: cf-tcp-router-network-properties
+   cloud_properties:
+     elbs:
+     - sw-cf-tcp-lb

Continue? [yN]:

If you take a peek at cf-deployment, you'll see that the tcp-router instance group already references a vm_extension called cf-tcp-router-network-properties (https://github.com/cloudfoundry/cf-deployment/blob/v20.2.0/cf-deployment.yml#L1433-L1434), so once you configure the cloud config, cf-deployment is already set up to use the extension. This means that whenever a tcp-router instance is recreated, BOSH will automatically add it back to the ELB once it passes the health check.
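If you want to confirm that recreated tcp-router VMs really do land back behind the ELB, one way is to ask AWS directly (assuming your credentials are configured, using the ELB name from the Terraform output):

  $ aws elb describe-instance-health --load-balancer-name sw-cf-tcp-lb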

Step 3 - Create the ops file

Since I need a custom port range, some of the properties in cf-deployment.yml need to be changed.

An example ops file to change ports for the routing release:

- path: /instance_groups/name=api/jobs/name=routing-api/properties/routing_api/router_groups/name=default-tcp?
  type: replace
  value:
    name: default-tcp
    reservable_ports: 40000-40099
    type: tcp

When you include this new ops file in your deployment you'll see the change with:

Task 4856 done
  instance_groups:
  - name: api
    jobs:
    - name: routing-api
      properties:
        routing_api:
          router_groups:
          - name: default-tcp
-           reservable_ports: 1024-1033
+           reservable_ports: 40000-40099

Step 4 - Set router groups via cf curl

Post deployment, however, the routing API still reports the old ports:

$ cf curl /routing/v1/router_groups
[
   {
      "guid": "abe622af-2246-43a2-73f8-79bcb8e0cbb4",
      "name": "default-tcp",
      "type": "tcp",
      "reservable_ports": "1024-1033"
   }
]

To update the router group with the new range of ports:

$ cf curl -X PUT -d '{"reservable_ports":"40000-40099"}' /routing/v1/router_groups/abe622af-2246-43a2-73f8-79bcb8e0cbb4
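Re-running the earlier query should now reflect the change (output abridged, from my environment):

  $ cf curl /routing/v1/router_groups
  ...
  "reservable_ports": "40000-40099"
  ...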

Step 5 - Create shared domain

The DNS is configured for tcp.apps.codex.starkandwayne.com and the name of the router group from the ops file is default-tcp. Using the cf cli, Cloud Foundry can then be configured to map these two together into a shared domain:

cf create-shared-domain tcp.apps.codex.starkandwayne.com --router-group default-tcp

If you run the cf domains command, you'll see the new tcp domain added with type = tcp:

$ cf domains
Getting domains in org system as admin...
name                                status   type   details
apps.codex.starkandwayne.com        shared          
tcp.apps.codex.starkandwayne.com    shared   tcp    
system.codex.starkandwayne.com      owned    

Step 6 - Push an App

To create an app using tcp routing, there are a few options:

  1. With the cf cli v6: push the app with the domain specified and a random port:

     cf push myapp -d tcp.apps.codex.starkandwayne.com --random-route

  2. Create a route for a space, push the app without a route, then map the route to the app:

     $ cf create-route tcp.apps.codex.starkandwayne.com --port 40001
     $ cf push myapp --no-route   # see next section for an example app
     $ cf map-route myapp tcp.apps.codex.starkandwayne.com --port 40001

  3. Create an app manifest which contains routes:, then specify the app manifest in the cf push (cf push -f manifest.yml), with the contents of manifest.yml being:

     applications:
     - name: cf-env
       memory: 256M
       routes:
       - route: tcp.apps.codex.starkandwayne.com

In the previous examples, swap --port with --random-route to have the app push pick any available port instead of a bespoke one. This saves developers from having to guess which ports are still available.

Testing an App

Once the application is pushed, for instance with cf push myapp -d tcp.apps.codex.starkandwayne.com --random-route (the example app here is cf-env), you can use curl to test the access:

$ curl http://tcp.apps.codex.starkandwayne.com:40001

<html><body style="margin:0px auto; width:80%; font-family:monospace"><head><title>Cloud Foundry Environment</title><meta name="viewport" content="width=device-width"></head><h2>Cloud Foundry Environment</h2><div><table><tr><td><strong>BUNDLER_ORIG_BUNDLER_VERSION</strong></td><td>BUNDLER_ENVIRONMENT_PRESERVER_INTENTIONALLY_NIL</tr><tr><td><strong>BUNDLER_ORIG_BUNDLE_BIN_PATH</strong></td><td>BUNDLER_ENVIRONMENT_PRESERVER_INTENTIONALLY_NIL</tr><tr><td><strong>BUNDLER_ORIG_BUNDLE_GEMFILE</strong></td><td>/home/vcap/app/Gemfile</tr><tr><td><strong>BUNDLER_ORIG_GEM_HOME</strong></td><td>/home/vcap/deps/0/gem_home</tr><tr><td><strong>BUNDLER_ORIG_GEM_PATH</strong></td><td>/home/vcap/deps/0/vendor_bundle/ruby/2.7.0:/home/vcap/deps/0/gem_home:/home/vcap/deps/0/bundler</tr><tr><td><strong>BUNDLER_ORIG_MANPATH</strong></td><td>BUNDLER_ENVIRONMENT_PRESERVER_INTENTIONALLY_NIL</tr><tr><td><strong>BUNDLER_ORIG_PATH</strong></td><td>/home/vcap/deps/0/bin:/usr/local/bin:/usr/bin:/bin</tr><tr><td><strong>BUNDLER_ORIG_RB_USER_INSTALL</strong></td><td>BUNDLER_ENVIRONMENT_PRESERVER_INTENTIONALLY_NIL</tr><tr><td><strong>BUNDLER_ORIG_RUBYLIB</strong></td><td>BUNDLER_ENVIRONMENT_PRESERVER_INTENTIONALLY_NIL</tr><tr><td><strong>BUNDLER_ORIG_RUBYOPT</strong></td><td>BUNDLER_ENVIRONMENT_PRESERVER_INTENTIONALLY_NIL</tr><tr><td><strong>BUNDLER_VERSION</strong></td><td>2.2.28</tr><tr><td><strong>BUNDLE_BIN</strong></td><td>/home/vcap/deps/0/binstubs</tr><tr><td><strong>BUNDLE_BIN_PATH</strong></td><td>/home/vcap/deps/0/bundler/gems/bundler-2.2.28/exe/bundle</tr><tr><td><strong>BUNDLE_GEMFILE</strong></td><td>/home/vcap/app/Gemfile</tr><tr><td><strong>BUNDLE_PATH</strong></td><td>/home/vcap/deps/0/vendor_bundle</tr><tr><td><strong>CF_INSTANCE_ADDR</strong></td><td>10.4.23.17:61020</tr><tr><td><strong>CF_INSTANCE_CERT</strong></td><td>/etc/cf-instance-credentials/instance.crt</tr><tr><td><strong>CF_INSTANCE_GUID</strong></td><td>32064364-6709-44b9-4a91-a1f3</tr><tr><td><strong>CF_INSTANCE_INDEX</strong></td><td><pre>0</pre></tr><tr><td><strong>CF_INSTANCE_INTERNAL_IP</strong></td><td>10.255.103.15</tr><tr><td><strong>CF_INSTANCE_IP</strong></td><td>10.4.23.17</tr><tr><td><strong>CF_INSTANCE_KEY</strong></td><td>/etc/cf-instance-credentials/instance.key</tr><tr><td><strong>CF_INSTANCE_PORT</strong></td><td><pre>61020</pre></tr><tr><td><strong>CF_INSTANCE_PORTS</strong></td><td><pre>[
  {
    "external": 61020,
    "internal": 8080,
    "external_tls_proxy": 61022,
    "internal_tls_proxy": 61001
  },
  {
    "external": 61021,
    "internal": 2222,
    "external_tls_proxy": 61023,
    "internal_tls_proxy": 61002
  }
]</pre></tr><tr><td><strong>CF_SYSTEM_CERT_PATH</strong></td><td>/etc/cf-system-certificates</tr><tr><td><strong>DEPS_DIR</strong></td><td>/home/vcap/deps</tr><tr><td><strong>GEM_HOME</strong></td><td>/home/vcap/deps/0/vendor_bundle/ruby/2.7.0</tr><tr><td><strong>GEM_PATH</strong></td><td></tr><tr><td><strong>HOME</strong></td><td>/home/vcap/app</tr><tr><td><strong>INSTANCE_GUID</strong></td><td>32064364-6709-44b9-4a91-a1f3</tr><tr><td><strong>INSTANCE_INDEX</strong></td><td><pre>0</pre></tr><tr><td><strong>LANG</strong></td><td>en_US.UTF-8</tr><tr><td><strong>MEMORY_LIMIT</strong></td><td>256m</tr><tr><td><strong>OLDPWD</strong></td><td>/home/vcap</tr><tr><td><strong>PATH</strong></td><td>/home/vcap/deps/0/vendor_bundle/ruby/2.7.0/bin:/home/vcap/deps/0/bin:/usr/local/bin:/usr/bin:/bin</tr><tr><td><strong>PORT</strong></td><td><pre>8080</pre></tr><tr><td><strong>PWD</strong></td><td>/home/vcap/app</tr><tr><td><strong>RACK_ENV</strong></td><td>production</tr><tr><td><strong>RAILS_ENV</strong></td><td>production</tr><tr><td><strong>RAILS_LOG_TO_STDOUT</strong></td><td>enabled</tr><tr><td><strong>RAILS_SERVE_STATIC_FILES</strong></td><td>enabled</tr><tr><td><strong>RUBYLIB</strong></td><td>/home/vcap/deps/0/bundler/gems/bundler-2.2.28/lib</tr><tr><td><strong>RUBYOPT</strong></td><td>-r/home/vcap/deps/0/bundler/gems/bundler-2.2.28/lib/bundler/setup</tr><tr><td><strong>SHLVL</strong></td><td><pre>1</pre></tr><tr><td><strong>TMPDIR</strong></td><td>/home/vcap/tmp</tr><tr><td><strong>USER</strong></td><td>vcap</tr><tr><td><strong>VCAP_APPLICATION</strong></td><td><pre>{
  "application_id": "2d19faba-0cae-4cb7-8078-67c092cfcc33",
  "application_name": "test",
  "application_uris": [
    "tcp.apps.codex.starkandwayne.com:40001"
  ],
  "application_version": "16d2c062-932b-4902-b874-0ea519e01dd8",
  "cf_api": "https://api.system.codex.starkandwayne.com",
  "host": "0.0.0.0",
  "instance_id": "32064364-6709-44b9-4a91-a1f3",
  "instance_index": 0,
  "limits": {
    "disk": 1024,
    "fds": 16384,
    "mem": 256
  },
  "name": "test",
  "organization_id": "d396b0c6-872f-46a2-a752-bdea51819c06",
  "organization_name": "system",
  "port": 8080,
  "process_id": "2d19faba-0cae-4cb7-8078-67c092cfcc33",
  "process_type": "web",
  "space_id": "4e081328-2ac1-4509-8f51-ffcbfc012165",
  "space_name": "ops",
  "uris": [
    "tcp.apps.codex.starkandwayne.com:40001"
  ],
  "version": "16d2c062-932b-4902-b874-0ea519e01dd8"
}</pre></tr><tr><td><strong>VCAP_APP_HOST</strong></td><td>0.0.0.0</tr><tr><td><strong>VCAP_APP_PORT</strong></td><td><pre>8080</pre></tr><tr><td><strong>VCAP_SERVICES</strong></td><td><pre>{
}</pre></tr><tr><td><strong>_</strong></td><td>/home/vcap/deps/0/bin/bundle</tr></table></div><h2>HTTP Request Headers</h2><div><table><tr><td><strong>accept</strong></td><td>*/*</tr><tr><td><strong>host</strong></td><td>tcp.apps.codex.starkandwayne.com:40001</tr><tr><td><strong>user_agent</strong></td><td>curl/7.79.1</tr><tr><td><strong>version</strong></td><td>HTTP/1.1</tr></table></div></body></html>%

Additional Reading

These links are to the documentation used to put this guide together:

  • Primary documentation on how to configure and deploy CF with TCP Routing here
  • Configuring routes/ports for apps after TCP routing is set up here
  • bbl Terraform to create the tcp load balancer here
  • Troubleshooting tcp app issues here

Good Day!

PS: I grew up listening to Paul Harvey on the radio in my parents' station wagon. You are missed, good sir!

A Sample Windows Cloud Foundry App
https://www.starkandwayne.com/blog/a-sample-windows-cloud-foundry-app/ (Wed, 27 Apr 2022)

Has it really been under your nose all along?


Photo by sydney Rae on Unsplash

Ever try to find a really simple Windows app to test against Cloud Foundry Windows Cells? 

Sometimes the most obvious answer is right under your nose. Inside cf-smoke-tests are the tests used by Cloud Foundry to exercise both the cflinuxfs3 and windows stacks, and they are safe to run against production.

In general, the tests work by creating a test org, space, and quota, pushing an app, scaling it, retrieving logs, and finally tearing it all back down. There are tests for both the cflinuxfs3 and windows stacks; however, cf-deployment only includes the errand for cflinuxfs3 by default.

What all this means is there is a simple Windows Cloud Foundry app inside of the smoke tests. Here is how you use it:

git clone https://github.com/cloudfoundry/cf-smoke-tests.git
cd cf-smoke-tests/assets/dotnet_simple/Published

cf push imarealwindowsapp -s windows -b hwc_buildpack

In the example above, we clone the repo and push an app called imarealwindowsapp; feel free to use whatever name you'd like. To get the url of the app once it is deployed, run the following command and note the routes: value:

$ cf app imarealwindowsapp
Showing health and status for app imarealwindowsapp in org system / space ops as admin...

name:              imarealwindowsapp
requested state:   started
routes:            imarealwindowsapp.apps.codex.starkandwayne.com
last uploaded:     Wed 27 Apr 17:05:55 UTC 2022
stack:             windows
buildpacks:        hwc

type:           web
instances:      1/1
memory usage:   1024M
     state     since                  cpu    memory         disk          details
#0   running   2022-04-27T17:07:00Z   0.1%   100.5M of 1G   44.8M of 1G

To test whether or not it was successful, you can curl the endpoint, adding https:// to the routes: value from the last command's output:

$ curl https://imarealwindowsapp.apps.codex.starkandwayne.com -k

Healthy
It just needed to be restarted!
My application metadata: {"application_id":"b55b34e2-c434-4782-b44e-3f9f469dd70c","application_name":"imarealwindowsapp","application_uris":["imarealwindowsapp.apps.codex.starkandwayne.com"],"application_version":"1bd0703a-4f13-45c8-86cb-0632db5cd6bd","cf_api":"https://api.system.codex.starkandwayne.com","host":"0.0.0.0","instance_id":"f56eaa45-cad2-4ab8-6e75-1ea9","instance_index":0,"limits":{"disk":1024,"fds":16384,"mem":1024},"name":"imarealwindowsapp","organization_id":"d396b0c6-872f-46a2-a752-bdea51819c06","organization_name":"system","port":8080,"process_id":"b55b34e2-c434-4782-b44e-3f9f469dd70c","process_type":"web","space_id":"4e081328-2ac1-4509-8f51-ffcbfc012165","space_name":"ops","uris":["imarealwindowsapp.apps.codex.starkandwayne.com"],"version":"1bd0703a-4f13-45c8-86cb-0632db5cd6bd"}
My port: 8080
My instance index: 0
My custom env variable:

Finally, if you look at the logs you'll see that the app emits a timestamp tick every second, which is what the smoke tests look for to validate logging is working:

$ cf logs imarealwindowsapp
Retrieving logs for app imarealwindowsapp in org system / space ops as admin...

   2022-04-27T17:11:15.44+0000 [APP/PROC/WEB/0] OUT Tick: 1651079475
   2022-04-27T17:11:16.45+0000 [APP/PROC/WEB/0] OUT Tick: 1651079476
   2022-04-27T17:11:17.46+0000 [APP/PROC/WEB/0] OUT Tick: 1651079477
   2022-04-27T17:11:18.47+0000 [APP/PROC/WEB/0] OUT Tick: 1651079478
   2022-04-27T17:11:19.47+0000 [APP/PROC/WEB/0] OUT Tick: 1651079479

If you are curious on how to use this in a bosh errand to run the complete Cloud Foundry Windows Smoke Tests, be sure to visit https://www.starkandwayne.com/blog/adding-windows-smoke-tests-to-cloud-foundry/

Enjoy!

Adding Windows Smoke Tests to Cloud Foundry
https://www.starkandwayne.com/blog/adding-windows-smoke-tests-to-cloud-foundry/ (Tue, 05 Apr 2022)

Opening A Window To Clear the Smoke (Tests)

Photo by Ahmed Zayan on Unsplash

Cloud Foundry has supported running Windows Diego Cells for a few years now, but until recently I had not had a reason to use them.

The instructions for modifying cf-deployment are fairly straightforward: you add the Windows ops files that ship with cf-deployment when you deploy.
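For reference, a deploy sketch using the Windows ops files that ship in the cf-deployment repo (file names as of recent cf-deployment versions; confirm against your checkout):

  bosh -d cf deploy cf-deployment.yml \
    -o operations/windows2019-cell.yml \
    -o operations/use-online-windows2019fs.yml \
    -o operations/use-latest-windows2019-stemcell.yml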

What was missing? I couldn't find a way to run smoke tests against the Windows Diego Cells. The support for Windows exists in the cf-smoke-tests bosh release, so after a quick copy of the existing smoke_tests job from cf-deployment.yml, adding enable_windows_tests: true and windows_stack: windows2016, and a whiff of smoke, here is the ops file that can be included with cf-deployment:

- path: /instance_groups/-
  type: replace
  value:
    azs:
    - z1
    instances: 1
    jobs:
    - name: smoke_tests_windows
      properties:
        bpm:
          enabled: true
        smoke_tests:
          enable_windows_tests: true
          windows_stack: windows2016
          api: "https://api.((system_domain))"
          apps_domain: "((system_domain))"
          client: cf_smoke_tests
          client_secret: "((uaa_clients_cf_smoke_tests_secret))"
          org: cf_smoke_tests_org
          space: cf_smoke_tests_space
          cf_dial_timeout_in_seconds: 300
          skip_ssl_validation: true
      release: cf-smoke-tests
    - name: cf-cli-7-linux
      release: cf-cli
    lifecycle: errand
    name: smoke-tests-windows
    networks:
    - name: default
    stemcell: windows2019
    vm_type: minimal

Once this is deployed, to run the errand:

$ bosh -d cf run-errand smoke_tests_windows

Using environment 'https://192.168.5.56:25555' as user 'admin'
Using deployment 'cf'
Task 134
...
shortened for the sake of scrolling...
...
#############################################################################
      Running smoke tests
      C:\var\vcap\packages\golang-1.13-windows\go\bin\go.exe
      c:\var\vcap\packages\smoke_tests_windows\bin\ginkgo.exe
      [1648831644] - CF-Isolation-Segment-Smoke-Tests - 4/4 specs SSSS SUCCESS! 10.8756455s PASS
      [1648831644] - CF-Logging-Smoke-Tests - 2/2 specs ++ SUCCESS! 1m5.5649573s PASS
      [1648831644] - CF-Runtime-Smoke-Tests - 2/2 specs ++ SUCCESS! 1m9.5844699s PASS
      Ginkgo run 3 suites in 3m9.6523124s
      Tests Suite Passed
      Smoke Tests Complete, exit status 0
Stderr   - 
1 errand(s)
Succeeded

Something which doesn't work

If you add --keep-alive to the bosh run-errand command, you'll need to rerun the run-errand command without the keep-alive option to get subsequent runs of the smoke tests to pass. Part of the scripting moves (instead of copies) some of the files around, so you only get a single attempt to run the tests on a particular vm instance.

Enjoy!

Enabling SSL for Stratos PostgreSQL Connections
https://www.starkandwayne.com/blog/enabling-ssl-for-stratos-postgresql-connections/ (Mon, 04 Apr 2022)

Photo by Ricardo Gomez Angel on Unsplash

Adding the requirement for SSL to Stratos is a fairly easy process. This configuration is highly recommended for production deployments of Stratos on Cloud Foundry.

In the example manifest below, this option is enabled by adding DB_SSL_MODE: "verify-ca" to the bottom of the environment variables:

applications:
  - name: console
    memory: 1512M
    disk_quota: 1024M
    host: console
    timeout: 180
    buildpack: https://github.com/cloudfoundry-incubator/stratos-buildpack#v4.0
    health-check-type: port
    services:
    - console_db
    env:
      CF_API_URL: https://api.bosh-lite.com
      CF_CLIENT: stratos_client
      CF_CLIENT_SECRET: sssshhhitsasecret
      SSO_OPTIONS: "logout, nosplash"
      SSO_WHITELIST: "https://console.bosh-lite.com"
      SSO_LOGIN: true
      DB_SSL_MODE: "verify-ca"

Why this works

The example above relies on a CUPS (user-provided) service instance called console_db which points to an RDS PostgreSQL instance created manually. Creating the CUPS service is as easy as:

cf cups console_db -p '{"uri": "postgres://", "username":"myuser", "password":"mypass", "hostname":"something.xyx.us-west-2.rds.amazon.com", "port":"5432", "dbname":"console_db"}'

Once executed, you can use console_db as the name of the service in manifest.yml for Stratos.

Also take note that I'm using an RDS instance, which means I need the RDS CA in the trusted store of the CF app container which Stratos is running in. This is done by deploying the following ops file against Cloud Foundry:

- type: replace
  path: /instance_groups/name=diego-cell/jobs/name=rep/properties/containers/trusted_ca_certificates/-
  value: &rds-uswest2-ca |-
    -----BEGIN CERTIFICATE-----
    MIIEBjCCAu6gAwIBAgIJAMc0ZzaSUK51MA0GCSqGSIb3DQEBCwUAMIGPMQswCQYD
    VQQGEwJVUzEQMA4GA1UEBwwHU2VhdHRsZTETMBEGA1UECAwKV2FzaGluZ3RvbjEi
    MCAGA1UECgwZQW1hem9uIFdlYiBTZXJ2aWNlcywgSW5jLjETMBEGA1UECwwKQW1h
    em9uIFJEUzEgMB4GA1UEAwwXQW1hem9uIFJEUyBSb290IDIwMTkgQ0EwHhcNMTkw
    ODIyMTcwODUwWhcNMjQwODIyMTcwODUwWjCBjzELMAkGA1UEBhMCVVMxEDAOBgNV
    BAcMB1NlYXR0bGUxEzARBgNVBAgMCldhc2hpbmd0b24xIjAgBgNVBAoMGUFtYXpv
    biBXZWIgU2VydmljZXMsIEluYy4xEzARBgNVBAsMCkFtYXpvbiBSRFMxIDAeBgNV
    BAMMF0FtYXpvbiBSRFMgUm9vdCAyMDE5IENBMIIBIjANBgkqhkiG9w0BAQEFAAOC
    AQ8AMIIBCgKCAQEArXnF/E6/Qh+ku3hQTSKPMhQQlCpoWvnIthzX6MK3p5a0eXKZ
    oWIjYcNNG6UwJjp4fUXl6glp53Jobn+tWNX88dNH2n8DVbppSwScVE2LpuL+94vY
    0EYE/XxN7svKea8YvlrqkUBKyxLxTjh+U/KrGOaHxz9v0l6ZNlDbuaZw3qIWdD/I
    6aNbGeRUVtpM6P+bWIoxVl/caQylQS6CEYUk+CpVyJSkopwJlzXT07tMoDL5WgX9
    O08KVgDNz9qP/IGtAcRduRcNioH3E9v981QO1zt/Gpb2f8NqAjUUCUZzOnij6mx9
    McZ+9cWX88CRzR0vQODWuZscgI08NvM69Fn2SQIDAQABo2MwYTAOBgNVHQ8BAf8E
    BAMCAQYwDwYDVR0TAQH/BAUwAwEB/zAdBgNVHQ4EFgQUc19g2LzLA5j0Kxc0LjZa
    pmD/vB8wHwYDVR0jBBgwFoAUc19g2LzLA5j0Kxc0LjZapmD/vB8wDQYJKoZIhvcN
    AQELBQADggEBAHAG7WTmyjzPRIM85rVj+fWHsLIvqpw6DObIjMWokpliCeMINZFV
    ynfgBKsf1ExwbvJNzYFXW6dihnguDG9VMPpi2up/ctQTN8tm9nDKOy08uNZoofMc
    NUZxKCEkVKZv+IL4oHoeayt8egtv3ujJM6V14AstMQ6SwvwvA93EP/Ug2e4WAXHu
    cbI1NAbUgVDqp+DRdfvZkgYKryjTWd/0+1fS8X1bBZVWzl7eirNVnHbSH2ZDpNuY
    0SBd8dj5F6ld3t58ydZbrTHze7JJOd8ijySAp4/kiu9UfZWuTPABzDa/DSdz9Dk/
    zPW4CXXvhLmE02TA9/HeCw3KEHIwicNuEfw=
    -----END CERTIFICATE-----
- type: replace
  path: /instance_groups/name=diego-cell/jobs/name=cflinuxfs3-rootfs-setup/properties/cflinuxfs3-rootfs/trusted_certs/-
  value: *rds-uswest2-ca

The RDS certs for other AWS regions are documented at https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/UsingWithRDS.SSL.html
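If you need the CA for a different region, the bundles can be pulled straight from the AWS trust store; a sketch (URL pattern current as of this writing):

  curl -O https://truststore.pki.rds.amazonaws.com/us-west-2/us-west-2-bundle.pem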

Verifying that Database Connections are Using SSL

Trust, but verify. By making a psql connection to the RDS instance, you can verify that the connections from Stratos are indeed leveraging SSL. Run the following:

SELECT
  pg_stat_activity.datname,
  pg_stat_ssl.pid,
  usesysid,
  usename,
  application_name,
  client_addr,
  client_hostname,
  client_port,
  ssl,
  cipher,
  bits,
  compression
FROM
  pg_stat_activity,
  pg_stat_ssl
WHERE
  pg_stat_activity.pid = pg_stat_ssl.pid
  AND pg_stat_activity.usename = 'myuser';  -- name of the user you configured in CUPS


  datname   |  pid  | usesysid | usename | application_name | client_addr | client_hostname | client_port | ssl |           cipher            | bits | compression
 ------------+-------+----------+---------+------------------+-------------+-----------------+-------------+-----+-----------------------------+------+-------------
  console_db |  3518 |    16939 | myuser  |                  | 10.244.0.20 |                 |       43104 | t   | ECDHE-RSA-AES256-GCM-SHA384 |  256 | f
  console_db | 22334 |    16939 | myuser  |                  | 10.244.0.20 |                 |       56321 | t   | ECDHE-RSA-AES256-GCM-SHA384 |  256 | f
  console_db | 25259 |    16939 | myuser  | psql             | 10.244.0.99 |                 |       58990 | t   | ECDHE-RSA-AES256-GCM-SHA384 |  256 | f

In the example above, the third connection is the psql client we are running this query from; the other two connections are coming from the Stratos app on the Diego cell.

What doesn't work

You might assume you can set the SSL mode via the URI in the CUPS payload, but this configuration will be ignored:

cf cups console_db -p '{"uri": "postgres://", "username":"myuser", "password":"mypass", "hostname":"something.xyx.us-west-2.rds.amazon.com", "port":"5432", "dbname":"console_db", "sslmode":"verify-ca" }'

This is because the Stratos configuration specifically looks for an environment variable:

db.SSLMode = env.String("DBSSLMODE", "disable")

From https://github.com/cloudfoundry/stratos/blob/master/src/jetstream/datastore/databasecfconfig.go#L81
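As an alternative to the manifest entry, you could also set the variable on an already-pushed app with the standard cf cli, then restage so it takes effect:

  cf set-env console DB_SSL_MODE verify-ca
  cf restage console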

Enjoy!

Scripting the CF API with `cf oauth-token` and Python
https://www.starkandwayne.com/blog/scripting-the-cf-api-with-cf-oauth-token-and-python/ (Mon, 15 Nov 2021)

The Rodney Dangerfield of the CF CLI, show it some respect!

Photo by mk. s on Unsplash

I was assigned a task to take a look at all the environment variables being used for a foundation. Normally I would put together a quick query against the Cloud Controller database, but this data turned out to be encrypted. You can, however, pull the information via the CF CLI; you just need to know how to log in and loop through the results. A bit of Python, a dancing gopher and proper course etiquette are all you need.

Grab your putter and head to the first hole!

Hole 1 - Login

Quick and easy, use the cf CLI to log into your desired CF foundation:

cf login

# or 

cf login --sso

After successful login, the token and other information will be stored in ~/.cf/config.json
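
If you're curious, you can peek at the stored token. A quick sketch, assuming jq is installed (the field name below matches recent cf CLI versions but may vary):

jq -r .AccessToken ~/.cf/config.json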

Hole 2 - Create Python Script

The script below will use the token from the last step which is stored in ~/.cf/config.json

#!/usr/bin/env python3
import requests
from requests.structures import CaseInsensitiveDict

import sys
import warnings


# Disable SSL Warnings - important for KubeCF
if not sys.warnoptions:
    warnings.simplefilter("ignore")

# Command-line arguments: system domain and oauth token
system_domain = sys.argv[1]
token = sys.argv[2]

headers = CaseInsensitiveDict()
headers["Accept"] = "application/json"
headers["Authorization"] = token

apps_url = "https://api." + system_domain + "/v3/apps/?per_page=100"

entries = requests.get(apps_url, headers=headers, verify=False).json()

total_results = entries["pagination"]["total_results"]
total_pages = entries["pagination"]["total_pages"]
current_page = 1
apps = {}

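# Page through the paginated /v3/apps results, collecting each app's environment variables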
while True:
    print("Processing page " + str(current_page) + "/" + str(total_pages))

    for entry in entries["resources"]:

        env_vars_url = entry["links"]["environment_variables"]["href"]
        env_vars = requests.get(env_vars_url, headers=headers, verify=False).json()

        line_label = "app:" + entry["name"]+",env"
        apps[line_label] = []

        for key, value in env_vars["var"].items():
            apps[line_label].append(str(key) + "=" + str(value))


    current_page += 1

    if entries["pagination"]["next"] is None:
        break

    entries = requests.get(entries["pagination"]["next"]["href"], headers=headers, verify=False).json()


for key, value in apps.items():
    print(str(key) + ":" + str(value))

Save this file and call it scrape.py

Hole 3 - Run the Script

This is where the magic of the cf oauth-token happens. After you've logged in successfully, the bearer token is stored locally and can be retrieved on demand and is also refreshed as needed. Cool, huh? More details on the command can be found at https://cli.cloudfoundry.org/en-US/v6/oauth-token.html
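
To make this concrete, here is roughly what the command returns and how the same header can be used straight from curl. The token below is truncated and illustrative, and <system domain> is a placeholder:

cf oauth-token
bearer eyJhbGciOiJSUzI1NiIsImtpZCI6...

curl -k -H "Authorization: $(cf oauth-token)" "https://api.<system domain>/v3/apps?per_page=5"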

If you clicked on that link and were underwhelmed, welcome to the club; you can be excused for having never heard of the command before. Keep going, you'll see how valuable it truly is.

Assuming you named the script scrape.py, you can now run it against your foundation. The first parameter is the system domain of your foundation and the second parameter is the command to retrieve the bearer token:

python3 scrape.py system_url.nip.io "$(cf oauth-token)"

The script will loop through the visible apps and pull back the list of environment variables. For users with cloud_controller.admin or similar permissions, that means all orgs and spaces; otherwise it covers just the orgs and spaces you have permissions to.

The output will look similar to:

Processing page 1/1
app:my-test-app,env:['MYENV=test']

If you'd like an example of also scraping org, space and alternate login combinations I've created this example script. The notes at the top include instructions if you're attempting to run this against a locally deployed KubeCF.

Enjoy! Remember to return your putter and score card back at the front desk.

The post Scripting the CF API with `cf oauth-token` and Python appeared first on Stark & Wayne.

]]>

Hello bosh-lite, My Old Friend https://www.starkandwayne.com/blog/hello-bosh-lite-my-old-friend/ Thu, 11 Nov 2021 17:43:28 +0000 https://www.starkandwayne.com//?p=5855

I've come to talk with you again

Photo by Shubham Dhage on Unsplash

I've written a whole bunch of blogs about running Cloud Foundry on Kubernetes. They all work, to varying degrees of success, and have the support of a smattering of communities (read: companies with large budgets + engineers + open source ideals).

Sometimes you just want BOSH. That sweet, sweet, familiar BOSH. Enter bosh-lite.

(2022-03-09 Update: Shortly after this was published, the bosh-lite maintainers switched the bosh director from being at 192.168.50.6 to 192.168.56.6; the blog was updated to reflect this)

What is bosh-lite?

bosh-lite works by spinning up a VM and placing BOSH on it. From there, you can use it to spin up cf-deployment, zookeeper, or anything else that has a BOSH manifest.

The BOSH director is configured with runtime configs, cloud configs, stemcells, releases and all the other BOSH-like functionality you've come to expect.

Installation

The tool has been around forever, with a very good tutorial on installing and configuring bosh-lite located at https://bosh.io/docs/bosh-lite/. The rest of this blog assumes that you have followed these instructions up to step #7.

I had an issue with a newer version of VirtualBox (6.1.28); what worked for me was version 6.1.26, which is available here.

It ends with an example deployment of zookeeper which I guess is cool, but I'm guessing most BOSH directors are associated with a Cloud Foundry deployment. Read on for tips that maybe aren't obvious from the documentation!

Deploying Cloud Foundry

In cf-deployment-land there is documentation for using BBL to deploy bosh-lite and CF to GCP or AWS. I point these instructions out in case you have an unlimited IAAS budget; however, I'll show you how to deploy this to your Mac.

There is also a Readme within cf-deployment For Operators Deploying CF to local bosh-lite. I'll be using this as the basis for the scripting below.

Clone cf-deployment

For these examples, I've assumed you will clone the CF repo to your home folder:

cd ~

git clone https://github.com/cloudfoundry/cf-deployment.git

Setup BOSH Environment variables

This is spread across a few places in the documentation; I've gathered it all together:

export CREDHUB_SERVER=https://192.168.56.6:8844
export CREDHUB_CLIENT=credhub-admin
export CREDHUB_SECRET=$(bosh interpolate ~/deployments/vbox/creds.yml --path=/credhub_admin_client_secret)
export CREDHUB_CA_CERT="$(bosh interpolate ~/deployments/vbox/creds.yml --path=/credhub_tls/ca )"$'\n'"$( bosh interpolate ~/deployments/vbox/creds.yml --path=/uaa_ssl/ca)"

export BOSH_CLIENT=admin
export BOSH_CLIENT_SECRET="$(bosh int ~/deployments/vbox/creds.yml --path /admin_password)"
export BOSH_CA_CERT="$(bosh interpolate ~/deployments/vbox/creds.yml --path /director_ssl/ca)"
export BOSH_ENVIRONMENT=vbox

bosh alias-env vbox -e 192.168.56.6 --ca-cert <(bosh int ~/deployments/vbox/creds.yml --path /director_ssl/ca)

You might consider adding this to your bash/zsh profile.

Upload Stemcell, Cloud Config and Runtime Config

bosh upload-stemcell --sha1 f399044d2ebe3351f0f1b0b3f97ef11464d283b4 "https://bosh.io/d/stemcells/bosh-warden-boshlite-ubuntu-xenial-go_agent?v=621.125"
bosh update-runtime-config ~/workspace/bosh-deployment/runtime-configs/dns.yml --name dns
bosh update-cloud-config ~/cf-deployment/iaas-support/bosh-lite/cloud-config.yml

As time goes by, CF may complain of wanting a newer stemcell version; update the bosh upload-stemcell command with the requested version. The version and sha1 are listed here; note that bosh-lite uses warden stemcells.

Go Get a Drink

This part will take around 45 minutes to complete. Unless you've changed the IP address of the BOSH director, the following command will deploy CF:

cd ~/cf-deployment
bosh -e 192.168.56.6 -d cf deploy \
  cf-deployment.yml \
  -o operations/bosh-lite.yml \
  -v system_domain=bosh-lite.com 

Log Into CF

This is done in two steps, targeting the API and then logging in, pulling the admin password out of credhub without printing it to the screen:

cf api https://api.bosh-lite.com --skip-ssl-validation
cf login -u admin -p $(credhub get -n $(credhub find -n admin | grep cf_admin | cut -d: -f2) | grep value | cut -d: -f2) -o system -s test
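
If that grep/cut pipeline feels fragile, you can ask CredHub for the value directly. A minimal sketch, assuming the credential lives at the default bosh-lite path of /bosh-lite/cf/cf_admin_password and that jq is installed:

cf login -u admin -p "$(credhub get -n /bosh-lite/cf/cf_admin_password -j | jq -r .value)" -o system -s test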

Tada!

Create a space and start pushing apps.

(Optional) Setup SSH to BOSH Director

Why? I like to be able to SSH to my BOSH Director to poke around. I'm weird. This is not required for the CF deployment, but nice to have:

bosh int ~/deployments/vbox/creds.yml --path /jumpbox_ssh/private_key > ~/deployments/vbox/jumpbox.key
chmod 600 ~/deployments/vbox/jumpbox.key
ssh jumpbox@192.168.56.6 -i ~/deployments/vbox/jumpbox.key

Beware of Dragons

Issue #1 - Reboot

Rebooting your Mac causes bosh-lite to misbehave, badly, unless you take a couple easy steps.

Before shutdown:

  • In Oracle VM VirtualBox Manager select the VM, right click and navigate to Close > Save State

After reboot:

  • In Oracle VM VirtualBox Manager select the VM, right click and navigate to Start > Headless Start

Even performing these steps I can't always bring the BOSH Director back online.

If you have accidentally rebooted the host you can recreate the BOSH director using the state file and use bosh cck to recover the broken CF deployment:

cd ~/deployments/vbox

bosh create-env ~/workspace/bosh-deployment/bosh.yml \
  --state ./state.json \
  -o ~/workspace/bosh-deployment/virtualbox/cpi.yml \
  -o ~/workspace/bosh-deployment/virtualbox/outbound-network.yml \
  -o ~/workspace/bosh-deployment/bosh-lite.yml \
  -o ~/workspace/bosh-deployment/bosh-lite-runc.yml \
  -o ~/workspace/bosh-deployment/uaa.yml \
  -o ~/workspace/bosh-deployment/credhub.yml \
  -o ~/workspace/bosh-deployment/jumpbox-user.yml \
  --vars-store ./creds.yml \
  -v director_name=bosh-lite \
  -v internal_ip=192.168.56.6 \
  -v internal_gw=192.168.56.1 \
  -v internal_cidr=192.168.56.0/24 \
  -v outbound_network_name=NatNetwork --recreate

bosh cck
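
bosh cck will prompt you for a resolution on each problem it finds; recreating the missing VMs is usually the right answer here. If you would rather let the CLI pick the default resolutions, it can run non-interactively:

bosh -d cf cck --auto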

If that doesn't work, you can completely wipe and recreate the BOSH and CF deployment by:

  • Logging into Oracle VM VirtualBox Manager, select the VM, right click and select Remove > Delete all files
  • Delete the state file at ~/deployments/vbox/state.json
  • Rerun the bosh create-env command then redeploy CF.

Issue #2 - Networking issues

If you get:

Deploying:
  Creating instance 'bosh/0':
    Waiting until instance is ready:
      Post "https://mbus:<redacted>@192.168.56.6:6868/agent": dial tcp 192.168.56.6:6868: connect: connection refused

Exit code 1

You probably skipped the step to add the host routing:

sudo route add -net 10.244.0.0/16     192.168.56.6

Final Thought

We all deserve nice things. bosh-lite is one of those nice things for folks who enjoy BOSH and want to use it on their own computer.

Enjoy!

The post Hello bosh-lite, My Old Friend appeared first on Stark & Wayne.

]]>

Running KubeCF using KIND on MacOS https://www.starkandwayne.com/blog/running-kubecf-using-kind-on-macos/ Fri, 01 Oct 2021 21:06:00 +0000 https://www.starkandwayne.com//?p=4241

Photo by Alex Gorzen on Flickr

More Limes, More Coconuts

In previous blog posts I reviewed how to deploy KubeCF on EKS, which gives you a nice stable deployment of KubeCF; the downside is that it costs you money for every hour it runs on AWS.

I'm used to giving Amazon money, but I typically get a small cardboard box in exchange every few days.

So, how do you run KubeCF on your Mac for free(ish)? Tune in below.

There Be Dragons Ahead

You need at least 16GB of memory installed on your Apple MacOS device; the install will use around 11GB of memory once it is fully spun up.

The install is fragile and frustrating at times; this is geared more towards operators who are trying out skunkworks on the platform such as testing custom buildpacks, hacking db queries and other potentially destructive activities. The install does NOT survive reboots and becomes extra brittle after 24+ hours of running. This is not KubeCF's fault; when run on EKS it will happily continue to run without issues. You've been warned!

Overview of Install

  • Install Homebrew and Docker Desktop
  • Install tuntap and start the shim
  • Install kind and deploy a cluster
  • Install and configure metallb
  • Install cf-operator and deploy kubecf
  • Log in and marvel at your creation

The next few sections are borrowed heavily from https://www.thehumblelab.com/kind-and-metallb-on-mac/, I encourage you to skim this document to understand why the tuntap shim is needed and how to verify the configuration for metallb.

Install Homebrew and Docker Desktop

I won't go into great detail as these tools are likely already installed:

   /bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"

   brew install --cask docker
   # Then launch docker from Applications to complete the install and start docker

Install tuntap and start the shim

Running docker on MacOS has some "deficiencies" that can be overcome by installing a networking shim, to perform this install:

brew install git
brew install --cask tuntap
git clone https://github.com/AlmirKadric-Published/docker-tuntap-osx.git
cd docker-tuntap-osx

./sbin/docker_tap_install.sh
./sbin/docker_tap_up.sh

Install kind and deploy a cluster

Sweet'n'Simple:

brew install kind
kind create cluster
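
Before moving on, it's worth a quick sanity check that the cluster is reachable. kind names the kubectl context kind-<cluster name>, so with the default cluster name:

kubectl cluster-info --context kind-kind
kubectl get nodes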

Install and configure metallb

We'll be using metallb as a LoadBalancer resource and open up a local route so that MacOS can route traffic locally to the cluster.

sudo route -v add -net 172.18.0.1 -netmask 255.255.0.0 10.0.75.2

kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/namespace.yaml
kubectl create secret generic -n metallb-system memberlist --from-literal=secretkey="$(openssl rand -base64 128)"
kubectl apply -f https://raw.githubusercontent.com/metallb/metallb/v0.9.3/manifests/metallb.yaml

cat << EOF > metallb-config.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 172.18.0.150-172.18.0.200
EOF
kubectl create -f metallb-config.yaml
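
Before relying on metallb for a LoadBalancer IP, confirm its controller and speaker pods are Running:

kubectl get pods -n metallb-system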

Install cf-operator and deploy kubecf

brew install wget
brew install helm
brew install watch

wget https://github.com/cloudfoundry-incubator/kubecf/releases/download/v2.7.13/kubecf-bundle-v2.7.13.tgz
tar -xvzf kubecf-bundle-v2.7.13.tgz

kubectl create namespace cf-operator
helm install cf-operator \
  --namespace cf-operator \
  --set "global.singleNamespace.name=kubecf" \
  cf-operator.tgz \
 --wait

helm install kubecf \
  --namespace kubecf \
  --set system_domain=172.18.0.150.nip.io \
  --set features.eirini.enabled=false \
  --set features.ingress.enabled=false \
  --set services.router.externalIPs={172.18.0.150} \
  https://github.com/cloudfoundry-incubator/kubecf/releases/download/v2.7.13/kubecf-v2.7.13.tgz

watch kubectl get pods -A

Now, go take a walk; it will take 30-60 minutes for the kubecf helm chart to be fully picked up by the cf-operator CRDs and for the pods to be scheduled and running. When complete, you should see output similar to:

Every 2.0s: kubectl get pods -A

NAMESPACE            NAME                                         READY   STATUS      RESTARTS   AGE
cf-operator          quarks-cd9d4b96f-rtkbt                       1/1     Running     0          40m
cf-operator          quarks-job-6d8d744bc6-pmfnd                  1/1     Running     0          40m
cf-operator          quarks-secret-7d76f854dc-9wp2f               1/1     Running     0          40m
cf-operator          quarks-statefulset-f6dc85fb8-x6jfb           1/1     Running     0          40m
kube-system          coredns-558bd4d5db-ncmh5                     1/1     Running     0          41m
kube-system          coredns-558bd4d5db-zlpgg                     1/1     Running     0          41m
kube-system          etcd-kind-control-plane                      1/1     Running     0          41m
kube-system          kindnet-w4m9n                                1/1     Running     0          41m
kube-system          kube-apiserver-kind-control-plane            1/1     Running     0          41m
kube-system          kube-controller-manager-kind-control-plane   1/1     Running     0          41m
kube-system          kube-proxy-ln6hb                             1/1     Running     0          41m
kube-system          kube-scheduler-kind-control-plane            1/1     Running     0          41m
kubecf               api-0                                        17/17   Running     1          19m
kubecf               auctioneer-0                                 6/6     Running     2          20m
kubecf               cc-worker-0                                  6/6     Running     0          20m
kubecf               cf-apps-dns-76947f98b5-tbfql                 1/1     Running     0          39m
kubecf               coredns-quarks-7cf8f9f58d-msq9m              1/1     Running     0          38m
kubecf               coredns-quarks-7cf8f9f58d-pbkt9              1/1     Running     0          38m
kubecf               credhub-0                                    8/8     Running     0          20m
kubecf               database-0                                   2/2     Running     0          38m
kubecf               database-seeder-0bc49e7bcb1f9453-vnvjm       0/2     Completed   0          38m
kubecf               diego-api-0                                  9/9     Running     2          20m
kubecf               diego-cell-0                                 12/12   Running     2          20m
kubecf               doppler-0                                    6/6     Running     0          20m
kubecf               log-api-0                                    9/9     Running     0          20m
kubecf               log-cache-0                                  10/10   Running     0          19m
kubecf               nats-0                                       7/7     Running     0          20m
kubecf               router-0                                     7/7     Running     0          20m
kubecf               routing-api-0                                6/6     Running     1          20m
kubecf               scheduler-0                                  12/12   Running     2          19m
kubecf               singleton-blobstore-0                        8/8     Running     0          20m
kubecf               tcp-router-0                                 7/7     Running     0          20m
kubecf               uaa-0                                        9/9     Running     0          20m
local-path-storage   local-path-provisioner-547f784dff-r8trf      1/1     Running     1          41m
metallb-system       controller-fb659dc8-dhpnb                    1/1     Running     0          41m
metallb-system       speaker-h9lh9                                1/1     Running     0          41m

Log in and marvel at your creation

To login with the admin uaa user account:

cf api --skip-ssl-validation "https://api.172.18.0.150.nip.io"

acp=$(kubectl get secret \
        --namespace kubecf var-cf-admin-password \
        -o jsonpath='{.data.password}' \
        | base64 --decode)

cf auth admin "${acp}"
cf create-space test -o system
cf target -o system -s test

Or, to use the smoke_tests uaa client account (because you're a rebel or something):

cf api --skip-ssl-validation "https://api.172.18.0.150.nip.io"
myclient=$(kubectl get secret \
        --namespace kubecf var-uaa-clients-cf-smoke-tests-secret \
        -o jsonpath='{.data.password}' \
        | base64 --decode)

cf auth cf_smoke_tests "${myclient}" --client-credentials
cf create-space test -o system
cf target -o system -s test

Cleaning Up and Putting Your Toys Away

If you are done with the deployment of KubeCF you have two options:

  1. Put your creation to sleep. Start Docker Desktop > Dashboard > Select kind-control-plane and click "Stop". Go have fun; when you come back, click "Start". After a few minutes the pods will recreate and become healthy.
  2. Clean up. To remove the cluster and custom routing:

   kind delete cluster
   sudo route delete 172.18.0.0
   ./sbin/docker_tap_uninstall.sh

Debugging Issues

Get used to seeing:

Request error: Get "https://api.172.18.0.150.nip.io": dial tcp 172.18.0.150:443: i/o timeout

The api server is flaky; retry whatever you were doing after verifying that all pods are running, as shown in the Install cf-operator and deploy kubecf section.

Follow up

Have questions? There is an excellent community for KubeCF which can be found at https://cloudfoundry.slack.com/archives/CQ2U3L6DC as kubecf-dev in Slack. You can ping me there via @cweibel

I also have Terraform code which will spin up a VPC + EKS + KubeCF for a more permanent solution to running KubeCF not on a Mac; check out https://github.com/cweibel/example-terraform-eks/tree/main/eks_for_kubecf_v2 for more details.

Enjoy!

The post Running KubeCF using KIND on MacOS appeared first on Stark & Wayne.

]]>

Adding Users To EKS Kubernetes Clusters https://www.starkandwayne.com/blog/adding-users-to-eks-kubernetes-clusters/ Wed, 08 Sep 2021 15:49:50 +0000 https://www.starkandwayne.com//?p=3237

Photo by Pablo García Saldaña on Unsplash

Creating an Amazon EKS Cluster is a fun experience with any number of tools (Terraform, eksctl, AWS Console) to create your first or 100th cluster.

What isn't so much fun? Giving someone other than yourself access to the cluster. 

Option 1: Adding IAM Users and IAM Roles

I won't repeat what AWS has so expertly documented at https://docs.aws.amazon.com/eks/latest/userguide/add-user-role.html. It really is a good article. Go read it. I'll wait.

A couple highlights from the documentation:

  • The user or assumed role which originally created the EKS Cluster always has full access to the cluster. Note that this user/role DOES NOT APPEAR in the configmap.
  • AWS IAM Authenticator does not permit a path in the role ARN used in the configuration map. Therefore, before you specify rolearn, remove the path. For example, change arn:aws:iam::<123456789012>:role/<team>/<developers>/<eks-admin> to arn:aws:iam::<123456789012>:role/<eks-admin>, according to this link.
  • The group mapping to system:masters is also purposeful as it indicates that members of this ARN will have full cluster admin access as documented here. This group is associated to the ClusterRoleBinding named cluster-admin which is associated with a ClusterRole also named helpfully cluster-admin. These are wired up automatically for you in every EKS cluster. 
  • Any combination of IAM roles and IAM users can be added to the configmap; a sketch of the resulting aws-auth entries follows this list.
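
For reference, here's a rough sketch of what those mappings look like in the aws-auth ConfigMap (edited via kubectl edit -n kube-system configmap/aws-auth). The account ID, role and user names below are placeholders:

apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: arn:aws:iam::123456789012:role/eks-admin
      username: eks-admin
      groups:
        - system:masters
  mapUsers: |
    - userarn: arn:aws:iam::123456789012:user/some-user
      username: some-user
      groups:
        - system:masters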

Option 2: Old Fashioned Service Account

Sometimes you just need a kubeconfig not tied to any IAM users or roles which can connect to the cluster for CI/CD or ease of access.

The script below will create a Kubernetes Service Account and generate a kubeconfig file you can target, save it as create-sa.sh:

#!/bin/bash
# Usage: create-sa.sh <service account name> <namespace> <cluster role>
name=$1
namespace=$2
role=$3
clusterName=$namespace
server=$(kubectl config view --minify | grep server | awk '{print $2}')
serviceAccount=$name

# Create the service account and bind it to the requested cluster role
kubectl create -n $namespace sa $serviceAccount
kubectl create clusterrolebinding ${namespace}-${role} --clusterrole ${role} --serviceaccount=$namespace:$serviceAccount

set -o errexit

# Pull the CA cert and token out of the service account's secret
secretName=$(kubectl --namespace $namespace get serviceAccount $serviceAccount -o jsonpath='{.secrets[0].name}')
ca=$(kubectl --namespace $namespace get secret/$secretName -o jsonpath='{.data.ca\.crt}')
token=$(kubectl --namespace $namespace get secret/$secretName -o jsonpath='{.data.token}' | base64 --decode)

echo "
---
apiVersion: v1
kind: Config
clusters:
  - name: ${clusterName}
    cluster:
      certificate-authority-data: ${ca}
      server: ${server}
contexts:
  - name: ${serviceAccount}@${clusterName}
    context:
      cluster: ${clusterName}
      namespace: ${namespace}
      user: ${serviceAccount}
users:
  - name: ${serviceAccount}
    user:
      token: ${token}
current-context: ${serviceAccount}@${clusterName}" > sa.kubeconfig

Pass in the 3 parameters (service account name, namespace, cluster role) to the script; the kubeconfig file sa.kubeconfig will be generated and can then be targeted:

# Run the script
create-sa.sh my-sa kube-system cluster-admin

# Reference the kubeconfig file the script created
export KUBECONFIG=./sa.kubeconfig

# Test the new kubeconfig file
kubectl get nodes  

Credit goes to Alexander Lukyanov who shared this during a fun pairing session!

The post Adding Users To EKS Kubernetes Clusters appeared first on Stark & Wayne.

]]>

Checking the Current Status of BOSH Resurrection https://www.starkandwayne.com/blog/checking-the-current-status-of-bosh-resurrection/ Thu, 05 Aug 2021 19:02:14 +0000 https://www.starkandwayne.com//?p=2790

Photo by Daniel Tuttle on Unsplash.

I recently had an issue where AWS sent notifications that they would be stopping EC2 instances on a certain date and time. Turns out, they meant it.

Normally this is no big deal for BOSH deployed VMs.  BOSH would notice the VM is missing and recreate it through the CPI with its awesome self-healing resurrection feature.

I waited.

Nothing happened.

Huh...

Turns out someone had turned off the resurrection feature with the BOSH CLI command:

bosh update-resurrection off

Ah.  Ok.  To turn resurrection back on I can simply run:

bosh update-resurrection on

But, how do I tell what the current state of resurrection is?  Is it on?  Is it off?  BOSH CLI to the rescue? Nope.  There is no BOSH CLI command to retrieve the current value.

Where is this kept?

Inside the bosh database of course!  There is a table called director_attributes which contains the UUID of the BOSH Director and the current status of resurrection.

This is what it looks like with resurrection disabled:

bosh=# select * from director_attributes;
                value                 |        name         | id
--------------------------------------+---------------------+----
 b0062f10-blah-1224-blah-fd0dcc751234 | uuid                |  1
 true                                 | resurrection_paused |  2
(2 rows)

A quick bosh update-resurrection on to enable resurrection results in the table looking like:

bosh=# select * from director_attributes;
                value                 |        name         | id
--------------------------------------+---------------------+----
 b0062f10-blah-1224-blah-fd0dcc751234 | uuid                |  1
 false                                | resurrection_paused |  2
(2 rows)

Side note:  turning resurrection on means the table value is really false; resurrection off means the table value is really true.  Having the field named resurrection_paused caused me to do a double-take to make sure I had those values correct.

Second note: if there is no row for resurrection, that means the value has not been set yet.  As soon as you run the bosh update-resurrection command, the row will be created.
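
If you'd rather script the check than eyeball the psql output, a one-liner along these lines works (assuming you can already connect to the director's bosh database as in the examples above; no rows returned means the value has never been set):

psql bosh -t -c "select value from director_attributes where name = 'resurrection_paused';"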

The post Checking the Current Status of BOSH Resurrection appeared first on Stark & Wayne.

]]>
