Finding the Docker Image Build Date

It’s become common today to build projects on top of Docker images: somebody finds a sample Dockerfile in a blog post and verifies that it works with their application. As long as you use :latest or :alpine, everything should be good, right?

For example, on a recent project I was helping with, the Dockerfile looked like this:

# Stage 1: build the jar with Maven (note the unpinned :alpine tag)
FROM maven:alpine AS build
COPY src /usr/src/app/src
COPY pom.xml /usr/src/app
COPY configuration/settings.xml /usr/src/app
RUN mvn -s /usr/src/app/settings.xml -f /usr/src/app/pom.xml clean package

# Stage 2: run the jar on an OpenJDK 8 image
FROM openjdk:8-alpine
COPY --from=build /usr/src/app/target/myapp*.jar /usr/app/myapp.jar
EXPOSE 5000
ENTRYPOINT ["java","-jar","/usr/app/myapp.jar"]
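
Whatever tag you settle on, one cheap safeguard is to pin the base image by digest, so a rebuild always uses the exact image you vetted rather than whatever the tag happens to point to that day. A minimal sketch, assuming the image has already been pulled locally:

$ docker inspect -f '{{index .RepoDigests 0}}' maven:alpine   # prints maven@sha256:<digest>

Use the printed maven@sha256:<digest> value in place of maven:alpine in the FROM line.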

JDK 8 itself is not the surprise. Depending on who you believe, maintenance for OpenJDK 8 is planned for at least another 4 years. According to Red Hat – The OpenJDK Life Cycle:

https://access.redhat.com/solutions/4934371

I have a habit of checking Docker images much more closely, and the scary issue I found was that these images had not been updated in two years.

If you want a more exact date, you can use docker inspect:

$ docker inspect -f '{{ .Created }}' maven:alpine
2019-05-11T04:21:07.847377418Z
$ docker inspect -f '{{ .Created }}' openjdk:8-alpine
2019-05-11T01:32:17.777332452Z
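
If you want to audit every base image a Dockerfile references, a small loop does the trick. A minimal sketch, assuming docker is installed and every FROM line names a pullable image (not a build stage or scratch):

#!/bin/sh
# Print the build date of every base image a Dockerfile references.
for image in $(grep -i '^FROM' Dockerfile | awk '{print $2}'); do
  docker pull -q "$image" > /dev/null
  printf '%-25s %s\n' "$image" "$(docker inspect -f '{{ .Created }}' "$image")"
done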

Checking the Alpine project, a number of CVEs were reported in the weeks, and then years, after this image was built:

CVE-2019-1563, CVE-2019-1549, CVE-2019-1547
CVE-2021-3450, CVE-2021-3449, CVE-2021-23841

The OpenJDK project is still receiving maintenance, and the Alpine project is still patching CVEs; the person who built these particular images for Docker Hub simply stopped pushing updates, so you need to find an image that is being actively updated. In this case, doijanky is pushing images under tags like 3.8.2-ibmjava-8-alpine or ibmjava-alpine, which were updated 2021-09-01T06:33:21.362865623Z. That is much better.
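
If you'd rather not pull an image just to read its metadata, skopeo can query the registry directly; this is a sketch, assuming skopeo is installed, and the Created field is what you're after:

$ skopeo inspect docker://docker.io/library/maven:ibmjava-alpine | grep Created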

Of course, you can also look at using buildpacks to avoid writing and maintaining Dockerfiles (and their base images) yourself.
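
A sketch of what that looks like with the pack CLI; the Paketo builder named here is one common choice, not the only one, and myapp is a placeholder image name:

$ pack build myapp --builder paketobuildpacks/builder:base
$ docker run -p 5000:5000 myapp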

Cheers.

HAProxy for OpenShift/OKD IPI install on vSphere

I am a big fan of OpenShift and its open-source upstream, OKD. Kubernetes is a bunch of building blocks and APIs that can be messy to assemble and configure, and Red Hat has done a great job packaging it up.

Installing on most IaaSes is great... except on vSphere, where UPI (User Provisioned Infrastructure) is quite complex and involved, and IPI (Installer Provisioned Infrastructure) is almost perfect, save one major issue!

When you install via IPI on vSphere, OKD/OpenShift boots VMs with DHCP: first a bootstrap node, which grabs the pair of virtual IPs (VIPs) you configured, then the three master nodes. The VIPs are fronted by a pair of keepalived load balancers that track nodes as they boot. Unfortunately, keepalived only checks whether the process is running, not whether the load balancer is actually passing traffic, so it's common to see installations fail with timeouts when the keepalived balancer gets stuck, for example while moving from the bootstrap node to another node. Its traffic just stalls.
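
A quick way to tell whether the API VIP is actually serving traffic, rather than merely being claimed by keepalived, is to probe the Kubernetes API's health endpoint; this assumes unauthenticated access to /readyz is allowed, which it normally is on OpenShift/OKD, and the address below is the API VIP from my lab:

$ curl -ks https://172.29.0.3:6443/readyz   # should print "ok" if the VIP is really working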

The solution I used in my home lab was to set up an HAProxy load balancer on a tiny Debian box, point my DNS at that load balancer, and start it with just the two VIP addresses as backends. I then add the master nodes as they boot and come online, so if the keepalived balancer fails, everything continues to boot happily. When the workers appear, I add them as backups in case keepalived fails. Here is the haproxy.cfg I ended up with:

global
  log /dev/log local0
  log /dev/log local1 notice
  chroot /var/lib/haproxy
  stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
  stats timeout 30s
  user haproxy
  group haproxy
  daemon

  # Default SSL material locations
  ca-base /etc/ssl/certs
  crt-base /etc/ssl/private

  # See: https://ssl-config.mozilla.org/#server=haproxy&server-version=2.0.3&config=intermediate
  ssl-default-bind-ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384
  ssl-default-bind-ciphersuites TLS_AES_128_GCM_SHA256:TLS_AES_256_GCM_SHA384:TLS_CHACHA20_POLY1305_SHA256
  ssl-default-bind-options ssl-min-ver TLSv1.2 no-tls-tickets

defaults
  log global
  mode http
  option httplog
  option dontlognull
  timeout connect 5000
  timeout client  50000
  timeout server  50000
  errorfile 400 /etc/haproxy/errors/400.http
  errorfile 403 /etc/haproxy/errors/403.http
  errorfile 408 /etc/haproxy/errors/408.http
  errorfile 500 /etc/haproxy/errors/500.http
  errorfile 502 /etc/haproxy/errors/502.http
  errorfile 503 /etc/haproxy/errors/503.http
  errorfile 504 /etc/haproxy/errors/504.http

listen stats
  bind 172.29.0.1:9090
  balance
  mode http
  stats enable
  stats auth admin:admin
  stats uri  /haproxy?stats

# OKD
#
frontend https-bootstrap
  mode tcp
  bind 172.29.0.1:22623
  default_backend bootstrap

frontend https-masters
  mode tcp
  bind 172.29.0.1:6443
  default_backend masters

frontend http-apps
  mode http
  bind 172.29.0.2:80
  default_backend http-pool

frontend https-apps
  mode tcp
  bind 172.29.0.2:443
  default_backend https-pool

backend bootstrap
  mode tcp
  balance roundrobin
  server vip1    172.29.0.3:22623  check  # Will move between nodes
  server master1 172.16.1.55:22623 check
  server master2 172.16.1.56:22623 check
  server master3 172.16.1.57:22623 check
  server boot1   172.16.1.58:22623 check  # Bootstrap node

backend masters
  mode tcp
  balance roundrobin
  server vip1    172.29.0.3:6443  check  # Will move between nodes
  server master1 172.16.1.55:6443 check
  server master2 172.16.1.56:6443 check
  server master3 172.16.1.57:6443 check
  server boot1   172.16.1.58:6443 check  # Bootstrap node

backend http-pool
  mode http
  balance leastconn
  server vip2     172.29.0.4:80  check  # Will move between nodes
  server worker1a 172.16.1.157:80 check
  server worker2a 172.16.1.156:80 check
  server worker3a 172.16.1.61:80 check
  server boot1    172.16.1.58:80 check  # Bootstrap node

backend https-pool
  mode tcp
  balance leastconn
  server vip2     172.29.0.4:443  check  # Will move between nodes
  server worker1a 172.16.1.157:443 check
  server worker2a 172.16.1.156:443 check
  server worker3a 172.16.1.61:443 check
  server boot1    172.16.1.58:443 check  # Bootstrap node
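
Each time you edit the file to add or remove nodes, it's worth validating the config before reloading; HAProxy reloads gracefully, so this is safe to do mid-install. Assuming the standard Debian haproxy package and file locations:

$ sudo haproxy -c -f /etc/haproxy/haproxy.cfg   # syntax check only, makes no changes
$ sudo systemctl reload haproxy                 # picks up the new backends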

This isn't a long-term solution, because the D in DHCP stands for "dynamic" and the node addresses will change over time, but for home labs, or places where you don't control the network, this is how you can get going.
