I switched from bash to PowerShell, and it’s going great!

No, I'm not crazy, and no I'm not trolling you! This is for real!

No longer single-platform or closed source, Microsoft's PowerShell Core is now an open source, full-featured, cross-platform (MacOS, Linux, more) shell sporting some serious improvements over the venerable /bin/[bash|zsh] for those souls brave enough to use it as their daily driver.

I made the switch about six months ago and couldn't be happier; it's by far one of the best tooling/workflow decisions I've made in my multi-decade career. PowerShell's consistent naming conventions, built-in documentation system, and object-oriented approach have made me more productive by far, and I've had almost zero challenges integrating it with my day-to-day workflow despite using a mix of both Linux and MacOS.

At a Glance

  • Multi-Platform, Multiple Installation Options:
    • Linux: deb, rpm, AUR, or just unpack a tarball and run it
    • MacOS: Homebrew cask: brew install powershell --cask, Intel x64 and arm64 available, .pkg installers downloadable
      • Only available as a cask, and casks are unavailable on Linux, so I suggest other avenues for you linuxbrew folks out there.
  • POSIX binaries work out of the box as expected
    • ps aux | grep -i someproc works fine
    • No emulation involved whatsoever, no containers, no VMs, no "hax"
    • Could be enhanced via recently-released Crescendo
  • No trust? No problem! It's fully open source!
  • Killer feature: Real CLASSES, TYPES, OBJECTS, METHODS and PROPERTIES. No more string manipulation fragility!

Scripting is much easier and more pleasant with PowerShell because its syntax is very similar to many other scripting languages (unlike bash). PowerShell also wins out when it comes to naming conventions for built-in commands and statements. You can invoke old-school POSIX-only commands through PowerShell and they work just like before, with no changes; so things like ps aux or sudo vim /etc/hosts work out of the box without any change in your workflow at all.
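For instance, native commands and PowerShell cmdlets can even share a pipeline. A couple of quick illustrations (the process name here is just a stand-in):

PS > ps aux | grep -i pwsh        # native binaries and pipes, exactly as in bash
PS > sudo vim /etc/hosts          # sudo and your editor, untouched
PS > ps aux | Select-String pwsh  # or filter native output with a PowerShell cmdlet instead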

I don't have to worry about what version of bash or zsh is installed on the target operating system, nor am I worried about Apple changing that on me by sneaking it into a MacOS upgrade or dropping something entirely via a minor update.

Developer 1: Here's a shell script for that work thing.

Developer 2: It doesn't run on my computer

Developer 1: What version of bash are you using?

Developer 2: Whatever ships with my version of MacOS

Developer 1: Do echo $BASH_VERSION, what's that say?

Developer 2: Uhh, says 3.2

Developer 1: Dear god that's old!

Developer 3: You guys wouldn't have this problem with PowerShell Core

The biggest advantage PowerShell provides, by far, is that it doesn't deal in mere simplistic strings alone, but in full-fledged classes and objects, with methods, properties, and data types. No more fragile grep|sed|awk nonsense! You won't have to worry about breaking everything if you update the output of a PowerShell script! Try changing a /bin/sh script to output JSON by default and see what happens to your automation!
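To make that concrete, here's a tiny sketch: because the pipeline carries objects rather than text, switching the final output format is a one-cmdlet change instead of a rewrite of every downstream grep/sed consumer (the property selection here is just an example):

PS > Get-Process pwsh | Select-Object Name, Id, WorkingSet64                    # objects, rendered as a table
PS > Get-Process pwsh | Select-Object Name, Id, WorkingSet64 | ConvertTo-Json   # the same objects, now as JSON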

PowerShell works exactly as you would expect on Linux and MacOS, right out of the box. Invoking and running compiled POSIX binaries (e.g. ps|cat|vim|less, etc.) works exactly like it does with bash or zsh and you don't have to change that part of your workflow whatsoever (which is good for those of us with muscle memory built over 20+ years!). You can set up command aliases, new shell functions, a personal profile (equivalent of ~/.bashrc), custom prompts and shortcuts - whatever you want! If you can do it with bash, you can do it BETTER with PowerShell.
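As a rough sketch of what that customization looks like, your personal profile (the file pointed to by the automatic variable $PROFILE) is just more PowerShell. The aliases and functions below are illustrative, not prescriptive:

# Example additions to $PROFILE (illustrative only)
Set-Alias ll Get-ChildItem           # roughly `alias ll=ls` in bash terms
function gs { git status @Args }     # a tiny wrapper function
function prompt { "PS $PWD > " }     # customize the prompt by redefining the prompt function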

Taken all together, the case for trying out modern PowerShell is incredibly strong. You'll be shocked at how useful it is! The jolt it'll give your productivity is downright electrifying and it can seriously amp up your quality of life!

Okay, okay, fine: I'll stop with the electricity puns.

I promise nothing.

Nothing wrong with bash/zsh

Let me get this out of the way: There's nothing wrong with bash or zsh. They're fine. They work, they work well, they're fast as hell, and battle-tested beyond measure. I'm absolutely NOT saying they're "bad" or that you're "bad" for using them. I did too, for over 20 years! And I still do every time I hit [ENTER] after typing ssh [...]! They've been around forever, and they're well respected for good reason.

PowerShell is simply different, based on a fundamentally more complex set of paradigms than the authors of bash or zsh could have imagined at the time those projects began. In fact, pwsh couldn't exist in its current state without standing on the shoulders of giants like bash and zsh, so respect, here, is absolutely DUE.

That said, I stand by my admittedly-controversial opinion that PowerShell is just plain better in almost all cases. This post attempts to detail why I'm confident in that statement.

bash and zsh are Thomas Edison minus the evil: basic, safe, known, and respected, if a bit antiquated. PowerShell is like Nikola Tesla: a "foreigner" with a fundamentally unique perspective, providing a more advanced approach that's far ahead of its time.

A Tale of Two PowerShells

You may see references to two flavors of PowerShell out there on the interweb: "Windows PowerShell" and "PowerShell Core":

  • "Windows" PowerShell typically refers to the legacy variant of PowerShell, version 5.1 or earlier, that indeed is Windows-only. It's still the default PowerShell install on Windows 10/11 (as of this writing), but with no new development/releases for this variant since 2016, I don't advise using it.
    • Note that PowerShell ISE - "Integrated Scripting Environment" - relies on this version of PowerShell, and as such, is Windows-Only. It's essentially been replaced by VS Code with an extension.
  • PowerShell Core is what you want: cross-platform, open source, and as of this writing, at version 7.2.

Of the two, you want PowerShell Core, which refers to PowerShell version 6.0 or higher. Avoid all others.

For the remainder of this article, any references to "PowerShell" or pwsh refer exclusively to PowerShell Core. Pretend Windows PowerShell doesn't exist; it shouldn't, and while Microsoft has yet to announce its official EOL, the trend is clear: Core is the future.

PowerShell: More than a Shell

PowerShell is more than simply a shell. It's an intuitive programming environment and scripting language wrapped inside a feature-packed REPL, refined with a deliberate focus on user experience through consistency and predictable patterns, without sacrificing execution speed or efficiency.

Basically, if you can do it in bash or zsh, you can do it - and a whole lot more - in PowerShell. In most cases, you can do it faster and more easily, and the final result (tool, library, etc.) is more maintainable and, thanks to PowerShell Core's multi-platform nature, arguably more portable than bash/zsh scripts (bash and zsh require non-trivial effort to install, update, and configure on Windows).

And with modules from the PowerShell Gallery, it can be extended even further, with secrets management capabilities and even a system automation framework known as "Desired State Configuration" (DSC).

Note: DSC is, as of this writing, a Windows-Only feature. Starting in PowerShell Core 7.2 they moved it out of PowerShell itself and into a separate module to enable future portability. In DSC version 3.0, currently in "preview", it's expected to be available on Linux. Whether or not I'd trust a production Linux machine with this, however, is another topic entirely. Caveat emptor.
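For example, pulling a module down from the Gallery is a one-liner. The SecretManagement module named below is the one I have in mind here, but treat the exact module name as something to verify against the Gallery before running this:

PS > Install-Module -Name Microsoft.PowerShell.SecretManagement -Scope CurrentUser
PS > Get-Command -Module Microsoft.PowerShell.SecretManagement    # list what the module provides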

A Scripting Language So Good It'll Shock You

PowerShell really shines as a fully-featured scripting language with one critical improvement not available in bash or zsh: objects with methods and properties of various data types.

Say goodbye to the arcane insanity that is sed and associated madness! With PowerShell, you don't get back mere strings, you get back honest-to-goodness OBJECTS with properties and methods, each of which corresponds to a data type!

No more being afraid to modify the output of that Perl script from 1998 that's holding your entire infrastructure together because it'll crash everything if you put an extra space in the output, or - *gasp* - output JSON!

Purely for the purposes of demonstration, take a look at these two scripts for obtaining a list of currently running processes that exceed a given amount of memory. I'm no shell script whiz by any means, but even if /usr/bin/ps had a consistent, unified implementation across BSD, MacOS, Linux and other POSIX operating systems, you'd still have a much harder time using bash than you do with PowerShell:

[Screenshot: two scripts, side by side, each listing all processes using over 200 MB of memory. PowerShell (left): 3 lines. bash (right): 25. You do the math.]

Rather than lengthen an article already in the running for "TL;DR of the month", I'll just link to gists for those scripts:

Disclaimer: I never claimed to be a shell script whiz, but I'd be surprised to see any bash/zsh implementation do this more easily without additional tools - which PowerShell clearly doesn't need.

In the case of bash, since we have to manipulate strings directly, the output formatting is absolutely crucial; any changes, and the entire shell script falls apart. This is fundamentally fragile, which makes it error prone, which means it's high-risk. It also requires some external tooling or additional work on the part of the script author to output valid JSON. And if you look at that syntax, you might go blind!

By contrast, what took approximately 25 lines in bash takes only three with PowerShell, and you could shorten that further if readability weren't a concern. Additionally, PowerShell lets you write data to multiple output "channels", such as "Verbose" and "Debug", in addition to STDOUT. This way I can run the above PowerShell script, redirect its output to a file, and still get the diagnostic information on my screen - but NOT in the file - thus separating the two. Put simply, I can emit extra diagnostic information on a per-run basis, without abusing STDERR and without any chance of corrupting the final output, which may be relied upon by other programs (redirection to a file, another process, etc.).
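For the curious, a minimal sketch of the PowerShell side might look something like this. The 200 MB threshold and the Write-Verbose line are mine; the gist versions differ in the details:

Write-Verbose "Scanning for processes over 200 MB..." -Verbose   # goes to the Verbose channel, not STDOUT
Get-Process |
  Where-Object { $_.WorkingSet64 -gt 200MB } |
  Select-Object Name, Id, @{ Name = 'MemoryMB'; Expression = { [math]::Round($_.WorkingSet64 / 1MB) } }

Redirect STDOUT to a file and the Verbose line still shows up on your screen, not in the file.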

Plug In to (optional) Naming Conventions

Unlike the haphazard naming conventions mess that is the *nix shell scripting and command world, the PowerShell community has established a well-designed, explicit, and consistent set of naming conventions for commands issued in the shell, be they available as modules installed by default, obtained elsewhere, or even stuff you write yourself. You're not forced into these naming conventions of course, but once you've seen properly-named commands in action, you'll never want to go back. The benefits become self-evident almost immediately:

*nix shell command or utility | PowerShell equivalent | Description
cd | Set-Location | Change directories
pushd / popd | Push-Location / Pop-Location | Push/pop the location stack
pwd | Get-Location | What directory am I in?
cat | Get-Content | Display the contents of a (generally plain-text) file on STDOUT
which | Get-Command | Find out where a binary or command lives, or which one gets picked up from $PATH first
pbcopy / pbpaste on MacOS (varies on Linux/BSD) | Set-Clipboard / Get-Clipboard | Copy to or paste from the clipboard/paste buffer on your local computer
echo -e "\e[31mRed Text\e[0m" | Write-Host -ForegroundColor Red "Red Text" | Write some text to the console in color (red in this example)

No, you don't literally have to type Set-Location every single time you want to change directories. Good ol' cd still works just fine, as do dozens of common *nix commands. Basically, just use it like you would bash and it "Just Works™".

To see all aliases at runtime, try Get-Alias. To discover commands, try Get-Command *whatever*. Tab-completion is also available out-of-the-box.

See the pattern? All these commands are in the form of Verb-Noun. They all start with what you want to do, then end with what you want to do it TO. Want to WRITE stuff to the HOST's screen? Write-Host. Want to GET what LOCATION (directory) you're currently in? Get-Location. You could also run $PWD | Write-Host to take the automatic variable $PWD - present working directory - and pipe that to the aforementioned echo equivalent. (To simplify it even further, the pipe and everything after it aren't technically required unless in a script!)

Most modules for PowerShell follow these conventions as well, so command discoverability becomes nearly automatic. With known, established, consistent conventions, you'll never wonder what some command is called ever again because it'll be easily predictable.

And if not, there's a real easy way to find out what's what:

Get-Verb
  # Shows established verbs with descriptions of each
Get-Command -Verb *convert*
  # Shows all commands w/ "convert" in the name
  # For example, ConvertFrom-Json, ConvertTo-Csv, etc.
Get-Command -Noun File
  # What's the command to write stuff to a file? 
  # Well, look up all the $VERB-File commands to start!
  # See also: Get-Command *file* for all commands with "file" in the name
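The same convention pays off for anything you write yourself. A hypothetical example (the function name and body are mine, purely for illustration):

function Get-DiskUsage {
  # List filesystem drives with their used/free space, Verb-Noun style
  Get-PSDrive -PSProvider FileSystem | Select-Object Name, Root, Used, Free
}
Get-Command -Noun DiskUsage   # now it's discoverable exactly like a built-in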

Note that cAsE sEnSiTiViTy is a little odd with PowerShell on *nix:

If the command/file is from... | Is it cAsE sEnSiTiVe? | Are its args cAsE sEnSiTiVe?
$PATH or the underlying OS/filesystem | YES | Generally yes (depends on the implementation)
PowerShell itself (cmdlet) | No | Generally no (possible, but not common)

Are commands case-sensitive in PowerShell? It depends.

Note that there are always exceptions to every rule, so there are times the above may fail you. Snowflakes happen. My general rule of thumb, which has never steered me wrong in these cases, is this:

Assume EVERYTHING is cAsE sEnSiTiVe.

If you're wrong, it works. If you're right, it works. Either way, you win!

Documentation: The Path of Least Resistance

Ever tried to write a formatted man page? It's painful:

.PP
The \fB\fCcontainers.conf\fR file should be placed under \fB\fC$HOME/.config/containers/containers.conf\fR on Linux and Mac and \fB\fC%APPDATA%\\containers\\containers.conf\fR on Windows.

.PP
\fBpodman [GLOBAL OPTIONS]\fP

.SH GLOBAL OPTIONS
.SS \fB--connection\fP=\fIname\fP, \fB-c\fP
.PP
Remote connection name

.SS \fB--help\fP, \fB-h\fP
.PP
Print usage statement

This is a small excerpt from a portion of the podman manual page. Note the syntax complexity and ambiguity.

By contrast, you can document your PowerShell functions with plain-text comments right inside the same file:

#!/usr/bin/env pwsh

# /home/myuser/.config/powershell/profile.ps1

<#
.SYNOPSIS
  A short one-liner describing your function
.DESCRIPTION
  You can write a longer description (any length) for display when the user asks for extended help documentation.
  Give all the overview data you like here.
.NOTES
  Miscellaneous notes section for tips, tricks, caveats, warnings, one-offs...
.EXAMPLE
  Get-MyIP # Runs the command, no arguments, default settings
.EXAMPLE
  Get-MyIP -From ipinfo.io -CURL # Runs `curl ipinfo.io` and gives results
#>

function Get-MyIP { ... }

Given the above example, an end-user could simply type help Get-MyIP in PowerShell and be presented with comprehensive help documentation, including examples, within their specified $PAGER (e.g. less or my current favorite, moar). You can even jump straight to the examples if you want:

> Get-Help -Examples Get-History

NAME
    Get-History

SYNOPSIS
    Gets a list of the commands entered during the current session.

    [...]

    --------- Example 2: Get entries that include a string ---------

    Get-History | Where-Object {$_.CommandLine -like "*Service*"}

    [...]

I've long said that if a developer can't be bothered to write at least something useful about how to use their product or tool, it ain't worth much. Usually nothing. Because nobody has time to go spelunking through your code to figure out how to use your tool - if we did, we'd write our own.

That's why anything that makes documentation easier and more portable is a win in my book, and in this category, PowerShell delivers. The syntax summaries and supported arguments list are even generated dynamically by PowerShell! You don't have to write that part at all!
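As an illustration, a param() block is all PowerShell needs in order to build that SYNTAX section for you. The body below is a hypothetical implementation of the Get-MyIP example from earlier - not the real thing - just enough to show the mechanism:

function Get-MyIP {
  [CmdletBinding()]
  param(
    [string] $From = 'ifconfig.me',   # hypothetical default lookup service
    [switch] $CURL                    # shell out to the native curl binary instead
  )
  if ($CURL) { curl $From } else { Invoke-RestMethod -Uri "https://$From" }
}
# `help Get-MyIP` now shows a generated SYNTAX section listing -From and -CURL automatically.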

The One Caveat: Tooling

Most tooling for *nix workflows is stuck pretty hard in sh land. Such tools have been developed, in some cases, over multiple decades, with conventions becoming established unintentionally and somewhat haphazardly, and without much (if any) thought toward the portability of those tools to non-UNIX shells.

And let's face it, that's 100% Microsoft's fault. No getting around the fact that they kept PowerShell a Windows-only, closed-source feature for a very long time, and that being the case, why should developers on non-Windows platforms have bothered? Ignoring it was - note the past tense here - entirely justified.

But now that's all changed. Modern PowerShell isn't Windows-only anymore, and it's fully open source, too. It works on Linux, MacOS, and other UNIX-flavored systems (though on the less common ones you'll likely have to compile from source), along with Windows, of course. bash, while ubiquitous on *nix platforms, is wildly inconsistent in which version is deployed or installed, has no built-in update notification ability, and often requires significant manual work to implement a smooth and stable upgrade path. It's also non-trivial to install on Windows.

PowerShell, by contrast, is available on almost as many platforms (though how well tested it is outside the most popular non-Windows platforms is certainly up for debate), is available to end-users via "click some buttons and you're done" MSI installers for Windows or PKG installers on MacOS, and is just as easy to install on *nix systems as bash is on Windows machines (if not easier in some cases; e.g. WSL).

Additionally, PowerShell has a ton of utilities available out of the box that bash has to rely on external tooling to provide. This means that any bash script that relies on that external tooling can break if said tooling has unaccounted-for implementation differences. If this sounds purely academic, consider the curious case of ps on Linux:

$ man ps
[...]

This version of ps accepts several kinds of options:

       1   UNIX options, which may be grouped and must be preceded by a dash.
       2   BSD options, which may be grouped and must not be used with a dash.
       3   GNU long options, which are preceded by two dashes.

       Options of different types may be freely mixed, but conflicts can appear.
       [...] due to the many standards and ps implementations that this ps is
       compatible with.

       Note that ps -aux is distinct from ps aux. [...]

Source: ps manual from Fedora Linux 35

By contrast, PowerShell implements its own Get-Process cmdlet (a type of shell function, basically) so that you don't even need ps or anything like it at all. The internal implementation of how that function works varies by platform, but the end result is the same on every single one. You don't have to worry about the way it handles arguments snowflaking from Linux to MacOS, because using it is designed to be 100% consistent across all platforms when relying purely on PowerShell's built-in commands.
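So, for example, the following behaves identically on Linux, MacOS, and Windows, with none of the dash/no-dash flag trivia from the ps man page above (the "top five by CPU" task is just a stand-in):

PS > Get-Process | Sort-Object CPU -Descending | Select-Object -First 5 Name, Id, CPU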

And, if you really do need an external tool that is entirely unaware of PowerShell's existence? No problem: you can absolutely (maybe even easily?) integrate existing tools with PowerShell, if you, or the authors of that tool, so desire.

But, IS there such a desire? Does it presently exist?

Probably not.

Open source developers already work for free, on their own time, to solve very complex problems. They do this on top of their normal "day job," not instead of it (well, most, anyway).

Shout-out to FOSS contributors: THANK YOU all, so much, for what you do! Without you, millions of jobs and livelihoods would not exist, so have no doubt that your efforts matter!

It's beyond ridiculous to expect that these unsung heroes would, without even being paid in hugs, let alone real money, add to their already superhuman workload by committing to support a shell they've long thought of as "yet another snowflake" with very limited adoption or potential, from a company they've likely derided for decades, sometimes rightly so. You can't blame these folks for saying "nope" to PowerShell, especially given its origin story as a product from a company that famously "refuses to play well with others."

And therein lies the problem: many sh-flavored tools just don't have any good PowerShell integrations or analogs (yet). That may change over time as more people become aware of just how awesome modern pwsh can be (why do you think I wrote this article!?). But for the time being, tools that developers like myself have used for years, such as rvm, rbenv, asdf, and so on, just don't have any officially supported way to be used within PowerShell.

The good news is that this is a solvable problem, and in more ways than one!

Overload Limitations With Your Own PowerShell Profile

The most actionable of these potential solutions is writing your own pwsh profile code that "fakes" a given command, within PowerShell only, so you can keep the same command/workflow you had in bash or zsh while a compatibility proxy does the real work under the hood.

For a real-world example, here's a very simplistic implementation of a compatibility layer that enables rbenv and bundle commands (Ruby development) in PowerShell (according to my own personal preferences) by delegating to the real commands under the hood:

#
# Notes:
#   1. My $env:PATH has already been modified to find rbenv in this example
#   2. See `help about_Splatting`, or the following article (same thing), to understand @Args
#          https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_splatting?view=powershell-7.2
#          Oversimplification: @Args = "grab whatever got passed to this thing, then throw 'em at this other thing VERBATIM"
#

function Invoke-rbenv {
  rbenv exec @Args
}

function irb {
  Invoke-rbenv irb @Args
}

function gem {
  Invoke-rbenv gem @Args
}

function ruby {
  Invoke-rbenv ruby @Args
}

function bundle {
  Invoke-rbenv bundle @Args
}

function be {
  Invoke-rbenv bundle exec @Args
}

With this in place, I can type out commands like be puma while working on a Rails app, and have that delegated to rbenv's managed version of bundler, which then execs that command for me. And it's all entirely transparent to me!

This is just one example, and an admittedly simplistic one at that. Nonetheless, it proves that using PowerShell as your daily driver is not only possible but entirely practical, even when you need to integrate with other tools that are completely unaware of PowerShell's existence.

But, we can go a step further with the recently-released PowerShell Crescendo. While I have yet to look into this all that much, essentially it provides a way for standard *nix tools to have their output automatically transformed from basic strings into real PowerShell objects at runtime. You have to write some parsing directives to tell PowerShell how to interpret the strings generated by some program, but once that's done you're set: you'll have non-PowerShell tools generating real PowerShell objects without any change to the tools themselves at all.

Jump on the Voltswagon!

If you're not convinced by now, something's wrong with you.

For the rest of you out there, you've got some options for installation:

  1. Use the packages provided by Microsoft (deb, rpm) (sudo required)
  2. Grab a precompiled Linux tarball, unpack it somewhere, and run /path/to/powershell/7.2/pwsh (no sudo required; sketched below)
  3. Mac users can brew install powershell --cask (or download the .pkg installer, which does require sudo)
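For option 2, the tarball route might look roughly like this (version number and paths are illustrative; grab the actual release asset from GitHub):

# Unpack to a user-writable location and run it directly (no sudo needed)
mkdir -p ~/powershell/7.2
tar -xzf powershell-7.2.2-linux-x64.tar.gz -C ~/powershell/7.2
chmod +x ~/powershell/7.2/pwsh    # usually already executable; harmless if so
~/powershell/7.2/pwsh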

Antipattern: Do NOT change your default login shell

Don't do this: chsh -s $(which pwsh)

Modify your terminal emulator profile instead.

Just a quick tip: while PowerShell works fine as a default login shell and you can certainly use it that way, other software may assume your login shell is always bash-like without bothering to check, which can cause minor breakage here and there.

But the real reason I advise against this is more to protect yourself from yourself. If you shoot yourself in the foot with your pwsh configuration and totally bork something, you won't have to worry too much about getting back to a working bash or zsh configuration so you can get work done again, especially if you're in an emergency support role or environment.

When you're first learning, fixing things isn't always a quick or easy process, and sometimes you just don't have time to fiddle with all that, so it's good to have a "backup environment" available just in case you have to act fast to save the day.

Don't interpret this as "PowerShell is easy to shoot yourself in the foot with" - far from it. Its remarkable level of clarity and consistency make it very unlikely that you'll do this, but it's still possible. And rather than just nuking your entire PowerShell config directory and starting from scratch, it's far better to pick it apart and make yourself fix it, because you learn the most when you force yourself through the hard problems. But you won't always have time to do that, especially during your day job, so having a fallback option is always a good idea.

First Steps

Once installed, I recommend you create a new profile specifically for PowerShell in your terminal emulator of choice, then make that the default profile (don't remove or change the existing one if you can help it; again, have a fallback position just in case you screw things up and don't have time to fix it).

Specifically, you want your terminal emulator to run the program pwsh, located wherever you unpacked your tarball. If you installed it via the package manager, it should already be in your system's default $PATH so you probably won't need to specify the location (just pwsh is fine in that case). No arguments necessary.

With that done, run these commands first:

PS > Update-Help
PS > help about_Telemetry

The first will download help documentation from the internet so you can view help files in the terminal instead of having to go to a browser and get a bunch of outdated, irrelevant results from Google (I recommend feeding The Duck instead).

The second will tell you how to disable telemetry from being sent to Microsoft. It's not a crucial thing, and I don't think Microsoft is doing anything shady here at all, but I always advise disabling telemetry in every product you can, every time you can, everywhere you can, just as a default rule.

More importantly, however, this will introduce you to the help about_* documents, which are longer-form help docs that explain a series of related topics instead of just one command. Seeing a list of what's available is nice and easy: just type help about_ then mash the TAB key a few times. It'll ask if you want to display all hundred-some-odd options; say Y. Find something that sounds interesting, then enter the entire article name, e.g. help about_Profiles or help about_Help.

Next, check out my other article on this blog about customizing your PowerShell prompt!

Roll the Dice

bash and zsh are great tools: they're wicked fast, incredibly stable, and have decades of battle-tested, hard-won "tribal knowledge" built around them that's readily available via your favorite search engine.

But they're also antiquated. They're based on a simpler set of ideas that were right for their time, but fundamentally primitive compared to the ideas PowerShell was designed around.

Sooner or later you just have to admit that something more capable exists, and that's when you get to make a choice: stick with what you know, safe in your comfort zone, or roll the dice on something that could potentially revolutionize your daily workflow.

Once I understood just a fraction of the value provided by pwsh, that choice became a no-brainer for me. It's been roughly six months since I switched full-time, and while I still occasionally have a few frustrations here and there, those cases are very few and far between (it's been at least two months since the last time something made me scratch my head and wonder).

But those frustrations are all part of the learning process. I see even more "WTF?" things with bash or zsh than I do with pwsh, by far! Those things are rarely easy to work out, and I struggle with outdated documentation from search results in nearly every case!

But with PowerShell, figuring out how to work around the problem - if indeed it is a problem, and not my own ignorance - is much easier because I'm not dealing with an arcane, arbitrary syntax from hell. Instead, I have a predictable, standardized, consistent set of commands and utilities available to me that are mostly self-documenting and available offline (not some archived forum post from 2006). On top of that, I have real classes and objects available to me, and a built-in debugger (with breakpoints!) that I can use to dig in and figure things out!
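To give a flavor of that debugger (the script path and line number here are made up):

PS > Set-PSBreakpoint -Script ./deploy.ps1 -Line 12
PS > ./deploy.ps1
# Execution pauses at line 12; inspect variables at the debug prompt,
# then type `c` to continue, `s` to step, or `q` to quit.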

So, why are we still using system shells that are based on paradigms from the 1980's? Are you still rocking a mullet and a slap bracelet, too?

Just because "that's the way it's always been" DOESN'T mean that's the way it's always gotta be.

PowerShell is the first real innovation I've seen in our field in a long time. Our industry, awash in "social" networks, surveillance profiteering, user-generated "content", and any excuse to coerce people into subscriptions, repackages decades-old innovations ad infinitum, even when new approaches are within reach, desperately needed, and certain to be profitable.

So in the rare case that something original, actually useful, widely available, and open source finally does see the light of day, I get very intrigued. I get excited. And in this case, I "jumped on the Voltswagon!"

And you should, too!

The author would like to thank Chris Weibel for his help with some of those electricity puns, and Norm Abramovitz for his editorial assistance in refining this article.

The post I switched from bash to PowerShell, and it’s going great! appeared first on Stark & Wayne.

]]>

No, I'm not crazy, and no I'm not trolling you! This is for real!

No longer single-platform or closed source, Microsoft's PowerShell Core is now an open source, full-featured, cross-platform (MacOS, Linux, more) shell sporting some serious improvements over the venerable /bin/[bash|zsh] for those souls brave enough to use it as their daily driver.

I made the switch about six months ago and couldn't be happier; it's by far one of the best tooling/workflow decisions I've made in my multi-decade career. PowerShell's consistent naming conventions, built-in documentation system, and object-oriented approach have made me more productive by far, and I've had almost zero challenges integrating it with my day-to-day workflow despite using a mix of both Linux and MacOS.

At a Glance

  • Multi-Platform, Multiple Installation Options:
    • Linux: deb, rpm, AUR, or just unpack a tarball and run'
    • MacOS: Homebrew cask: brew install powershell --cask, Intel x64 and arm64 available, .pkg installers downloadable
      • Only available as a cask, and casks are unavailable on Linux, so I suggest other avenues for you linuxbrew folks out there.
  • POSIX binaries work out of the box as expected
    • ps aux | grep -i someproc works fine
    • No emulation involved whatsoever, no containers, no VMs, no "hax"
    • Could be enhanced via recently-released Crescendo
  • No trust? No problem! It's fully open source!
  • Killer feature: Real CLASSES, TYPES, OBJECTS, METHODS and PROPERTIES. No more string manipulation fragility!

Scripting is much easier and more pleasant with PowerShell because its syntax is very similar to many other scripting languages (unlike bash). PowerShell also wins out when it comes to naming conventions for built-in commands and statements. You can invoke old-school POSIX-only commands through PowerShell and they work just like before, with no changes; so things like ps aux or sudo vim /etc/hosts work out of the box without any change in your workflow at all.

I don't have to worry about what version of bash or zsh is installed on the target operating system, nor am I worried about Apple changing that on me by sneaking it into a MacOS upgrade or dropping something entirely via a minor update.

Developer 1: Here's a shell script for that work thing.

Developer 2: It doesn't run on my computer

Developer 1: What version of bash are you using?

Developer 2: Whatever ships with my version of MacOS

Developer 1: Do echo $BASH_VERSION, what's that say?

Developer 2: Uhh, says 3.2

Developer 1: Dear god that's old!

Developer 3: You guys wouldn't have this problem with PowerShell Core

The biggest advantage PowerShell provides, by far, is that it doesn't deal in mere simplistic strings alone, but in full-fledged classes and objects, with methods, properties, and data types. No more fragile grep|sed|awk nonsense! You won't have to worry about breaking everything if you update the output of a PowerShell script! Try changing a /bin/sh script to output JSON by default and see what happens to your automation!

PowerShell works exactly as you would expect on Linux and MacOS, right out of the box. Invoking and running compiled POSIX binaries (e.g. ps|cat|vim|less, etc.) works exactly like it does with bash or zsh and you don't have to change that part of your workflow whatsoever (which is good for those of us with muscle memory built over 20+ years!). You can set up command aliases, new shell functions, a personal profile (equivalent of ~/.bashrc), custom prompts and shortcuts - whatever you want! If you can do it with bash, you can do it BETTER with PowerShell.

Taken all together, the case for trying out modern PowerShell is incredibly strong. You'll be shocked at how useful it is! The jolt it'll give your productivity is downright electrifying and it can seriously amp up your quality of life!

Okay, okay, fine: I'll stop with the electricity puns.

I promise nothing.

Nothing wrong with bash/zsh

Let me get this out of the way: There's nothing wrong with bash or zsh. They're fine. They work, they work well, they're fast as hell, and battle-tested beyond measure. I'm absolutely NOT saying they're "bad" or that you're "bad" for using them. I did too, for over 20 years! And I still do every time I hit [ENTER] after typing ssh [...]! They've been around forever, and they're well respected for good reason.

PowerShell is simply different, based on a fundamentally more complex set of paradigms than the authors of bash or zsh could have imagined at the time those projects began. In fact, pwsh couldn't exist in its current state without standing on the shoulders of giants like bash and zsh, so respect, here, is absolutely DUE.

That said, I stand by my admittedly-controversial opinion that PowerShell is just plain better in almost all cases. This post attempts to detail why I'm confident in that statement.

bash and zsh are Thomas Edison minus the evil: basic, safe, known, and respected, if a bit antiquated. PowerShell is like Nikola Tesla: a "foreigner" with a fundamentally unique perspective, providing a more advanced approach that's far ahead of its time.

A Tale of Two PowerShells

You may see references to two flavors of PowerShell out there on the interweb: "Windows PowerShell" and "PowerShell Core":

  • "Windows" PowerShell typically refers to the legacy variant of PowerShell, version 5.1 or earlier, that indeed is Windows-only. It's still the default PowerShell install on Windows 10/11 (as of this writing), but with no new development/releases for this variant since 2016, I don't advise using it.
    • Note that PowerShell ISE - "Integrated Scripting Environment" - relies on this version of PowerShell, and as such, is Windows-Only. It's essentially been replaced by VS Code with an extension.
  • PowerShell Core is what you want: cross-platform, open source, and as of this writing, at version 7.2.

Of the two, you want PowerShell Core, which refers to PowerShell version 6.0 or higher. Avoid all others.

For the remainder of this article, any references to "PowerShell" or pwsh refer exclusively to PowerShell Core. Pretend Windows PowerShell doesn't exist; it shouldn't, and while Microsoft has yet to announce its official EOL, the trend is clear: Core is the future.

PowerShell: More than a Shell

PowerShell is more than simply a shell. It's an intuitive programming environment and scripting language that's been wrapped inside a feature-packed REPL and heavily refined with an intentional focus on better user experience via consistency and patterns without loss of execution speed or efficiency.

Basically, if you can do it in bash or zsh, you can do it - and a whole lot more - in PowerShell. In most cases, you can do it faster and easier, leading to a far more maintainable and portable final result (e.g. tool, library, etc.) that, thanks to PowerShell Core's multi-platform nature, is arguably more portable than bash/zsh (which require non-trivial effort to install/update/configure on Windows).

And with modules from the PowerShell Gallery, it can be extended even further, with secrets management capabilities and even a system automation framework known as "Desired State Configuration" (DSC).

Note: DSC is, as of this writing, a Windows-Only feature. Starting in PowerShell Core 7.2 they moved it out of PowerShell itself and into a separate module to enable future portability. In DSC version 3.0, currently in "preview", it's expected to be available on Linux. Whether or not I'd trust a production Linux machine with this, however, is another topic entirely. Caveat emptor.

A Scripting Language So Good It'll Shock You

PowerShell really shines as a fully-featured scripting language with one critical improvement not available in bash or zsh: objects with methods and properties of various data types.

Say goodbye to the arcane insanity that is sed and associated madness! With PowerShell, you don't get back mere strings, you get back honest-to-goodness OBJECTS with properties and methods, each of which corresponds to a data type!

No more being afraid to modify the output of that Perl script from 1998 that's holding your entire infrastructure together because it'll crash everything if you put an extra space in the output, or - *gasp* - output JSON!

Purely for the purposes of demonstration, take a look at these two scripts for obtaining a list of currently running processes that exceed a given amount of memory. I'm no shell script whiz by any means, but even if /usr/bin/ps had a consistent, unified implementation across BSD, MacOS, Linux and other POSIX operating systems, you'd still have a much harder time using bash than you do with PowerShell:

Screenshot of two scripts, side-by-side, that show all processes using over 200mb of memory.
PowerShell (left): 3 lines. bash (right): 25. You do the math.

Rather than lengthen an article already in the running for "TL;DR of the month", I'll just link to gists for those scripts:

Disclaimer: I never claimed to be a shell script whiz, but I'd be surprised to see any bash/zsh implementation do this easier without additional tools - which PowerShell clearly doesn't need.

In the case of bash, since we have to manipulate strings directly, the output formatting is absolutely crucial; any changes, and the entire shell script falls apart. This is fundamentally fragile, which makes it error prone, which means it's high-risk. It also requires some external tooling or additional work on the part of the script author to output valid JSON. And if you look at that syntax, you might go blind!

By contrast, what took approximately 25-ish lines in bash takes only three with PowerShell, and you could even shorten that if readability wasn't a concern. Additionally, PowerShell allows you to write data to multiple output "channels", such as "Verbose" and "Debug", in addition to STDOUT. This way I can run the above PowerShell script, redirect its output to a file, and still get that diagnostic information on my screen, but NOT in the file, thus separating the two. Put simply, I can output additional information without STDERR on a per-run basis whenever I want, without any chance of corrupting the final output result, which may be relied upon by other programs (redirection to file, another process, etc.)

Plug In to (optional) Naming Conventions

Unlike the haphazard naming conventions mess that is the *nix shell scripting and command world, the PowerShell community has established a well-designed, explicit, and consistent set of naming conventions for commands issued in the shell, be they available as modules installed by default, obtained elsewhere, or even stuff you write yourself. You're not forced into these naming conventions of course, but once you've seen properly-named commands in action, you'll never want to go back. The benefits become self-evident almost immediately:

*nix shell command or utilityPowerShell equivalentDescription
cdSet-LocationChange directories
pushd / popdPush-Location / Pop-Locationpush/pop location stack
pwdGet-LocationWhat directory am I in?
catGet-ContentDisplay contents of a file (generally plain text) on STDOUT
whichGet-CommandFind out where a binary or command is, or see which one gets picked up from $PATH first
pbcopy / pbpaste on MacOS (Linux or BSD, varies)Get-Clipboard / Set-ClipboardRetrieve or Modify the contents of the clipboard/paste buffer on your local computer
echo -e "\e[31mRed Text\e[0mWrite-Host -ForegroundColor Red "Red Text"Write some text to the console in color (red in this example)

No, you don't literally have to type Set-Location every single time you want to change directories. Good 'ol cd still works just fine, as do dozens of common *nix commands. Basically just use it like you would bash and it "Just Works™".

To see all aliases at runtime, try Get-Alias. To discover commands, try Get-Command *whatever*. Tab-completion is also available out-of-the-box.

See the pattern? All these commands are in the form of Verb-Noun. They all start with what you want to do, then end with what you want to do it TO. Want to WRITE stuff to the HOST's screen? Write-Host. Want to GET what LOCATION (directory) you're currently in? Get-Location. You could also run $PWD | Write-Host to take the automatic variable $PWD - present working directory - and pipe that to the aforementioned echo equivalent. (To simplify it even further, the pipe and everything after it aren't technically required unless in a script!)

Most modules for PowerShell follow these conventions as well, so command discoverability becomes nearly automatic. With known, established, consistent conventions, you'll never wonder what some command is called ever again because it'll be easily predictable.

And if not, there's a real easy way to find out what's what:

Get-Verb
  # Shows established verbs with descriptions of each
Get-Command -Verb *convert*
  # Shows all commands w/ "convert" in the name
  # For example, ConvertFrom-Json, ConvertTo-Csv, etc.
Get-Command -Noun File
  # What's the command to write stuff to a file? 
  # Well, look up all the $VERB-File commands to start!
  # See also: Get-Command *file* for all commands with "file" in the name

Note that cAsE sEnSiTiViTy is a little odd with PowerShell on *nix:

If the command/file is from...Is it cAsE sEnSiTiVe?Are its args cAsE sEnSiTiVe?
$PATH or the underlying OS/filesystemYESGenerally Yes
Depends on the implementation
PowerShell Itself (cmdlet)NoGenerally No
Possible, but not common
Are commands case-sensitive in PowerShell? It depends.

Note that there are always exceptions to every rule, so there are times the above may fail you. Snowflakes happen. My general rule of thumb, which has never steered me wrong in these cases, is this:

Assume EVERYTHING is cAsE sEnSiTiVe.

If you're wrong, it works. If you're right, it works. Either way, you win!

Documentation: The Path of Least Resistance

Ever tried to write a formatted man page? It's painful:

.PP
The \fB\fCcontainers.conf\fR file should be placed under \fB\fC$HOME/.config/containers/containers.conf\fR on Linux and Mac and \fB\fC%APPDATA%\\containers\\containers.conf\fR on Windows.

.PP
\fBpodman [GLOBAL OPTIONS]\fP

.SH GLOBAL OPTIONS
.SS \fB--connection\fP=\fIname\fP, \fB-c\fP
.PP
Remote connection name

.SS \fB--help\fP, \fB-h\fP
.PP
Print usage statement

This is a small excerpt from a portion of the podman manual page. Note the syntax complexity and ambiguity.

By contrast, you can document your PowerShell functions with plain-text comments right inside the same file:

#!/usr/bin/env pwsh

# /home/myuser/.config/powershell/profile.ps1

<#
.SYNOPSIS
  A short one-liner describing your function
.DESCRIPTION
  You can write a longer description (any length) for display when the user asks for extended help documentation.
  Give all the overview data you like here.
.NOTES
  Miscellaneous notes section for tips, tricks, caveats, warnings, one-offs...
.EXAMPLE
  Get-MyIP # Runs the command, no arguments, default settings
.EXAMPLE
  Get-MyIP -From ipinfo.io -CURL # Runs `curl ipinfo.io` and gives results
#>

function Get-MyIP { ... }

Given the above example, an end-user could simply type help Get-MyIP in PowerShell and be presented with comprehensive help documentation including examples within their specified $PAGER (e.g. less or my current favorite, moar). You can even just jump straight to the examples if you want, too:

> Get-Help -Examples Get-History

NAME
    Get-History

SYNOPSIS
    Gets a list of the commands entered during the current session.

    [...]

    --------- Example 2: Get entries that include a string ---------

    Get-History | Where-Object {$_.CommandLine -like "*Service*"}

    [...]

I've long said that if a developer can't be bothered to write at least something useful about how to use their product or tool, it ain't worth much. Usually nothing. Because nobody has time to go spelunking through your code to figure out how to use your tool - if we did, we'd write our own.

That's why anything that makes documentation easier and more portable is a win in my book, and in this category, PowerShell delivers. The syntax summaries and supported arguments list are even generated dynamically by PowerShell! You don't have to write that part at all!

The One Caveat: Tooling

Most tooling for *nix workflows is stuck pretty hard in sh land. Such tools have been developed, in some cases, over multiple decades, with conventions unintentionally becoming established in a somewhat haphazard manner, though without much (if any) thought whatsoever toward the portability of those tools to non-UNIX shells.

And let's face it, that's 100% Microsoft's fault. No getting around the fact that they kept PowerShell a Windows-only, closed-source feature for a very long time, and that being the case, why should developers on non-Windows platforms have bothered? Ignoring it was - note the past tense here - entirely justified.

But now that's all changed. Modern PowerShell isn't at all Windows-only anymore, and it's fully open source now, too. It works on Linux, MacOS, and other UNIX-flavored systems, too (though you likely have to compile from source) along with Windows, of course. bash, while ubiquitous on *nix platforms, is wildly inconsistent in which version is deployed or installed, has no built-in update notification ability, and often requires significant manual work to implement a smooth and stable upgrade path. It's also non-trivial to install on Windows.

PowerShell, by contrast, is available on almost as many platforms (though how well tested it is outside the most popular non-Windows platforms is certainly up for debate), is available to end-users via "click some buttons and you're done" MSI installers for Windows or PKG installers on MacOS, and is just as easy to install on *nix systems as bash is on Windows machines (if not easier in some cases; e.g. WSL).

Additionally, PowerShell has a ton of utilities available out-of-the box that bash has to rely on external tooling to provide. This means that any bash script that relies on that external tooling can break if said tooling has unaccounted for implementation differences. If this sounds purely academic, consider the curious case of ps on Linux:

$ man ps
[...]

This version of ps accepts several kinds of options:

       1   UNIX options, which may be grouped and must be preceded by a dash.
       2   BSD options, which may be grouped and must not be used with a dash.
       3   GNU long options, which are preceded by two dashes.

       Options of different types may be freely mixed, but conflicts can appear.
       [...] due to the many standards and ps implementations that this ps is
       compatible with.

       Note that ps -aux is distinct from ps aux. [...]

Source: ps manual from Fedora Linux 35

By contrast, PowerShell implements its own Get-Process cmdlet (a type of shell function, basically) so that you don't even need ps or anything like it at all. The internal implementation of how that function works varies by platform, but the end result is the same on every single one. You don't have to worry about the way it handles arguments snowflaking from Linux to MacOS, because using it is designed to be 100% consistent across all platforms when relying purely on PowerShell's built-in commands.

And, if you really do need an external tool that is entirely unaware of PowerShell's existence? No problem: you can absolutely (maybe even easily?) integrate existing tools with PowerShell, if you, or the authors of that tool, so desire.

But, IS there such a desire? Does it presently exist?

Probably not.

Open source developers already work for free, on their own time, to solve very complex problems. They do this on top of their normal "day job," not instead of it (well, most, anyway).

Shout-out to FOSS contributors: THANK YOU all, so much, for what you do! Without you, millions of jobs and livelihoods would not exist, so have no doubt that your efforts matter!

It's beyond ridiculous to expect that these unsung heroes would, without even being paid in hugs, let alone real money, add to their already superhuman workload by committing to support a shell they've long thought of as "yet another snowflake" with very limited adoption or potential, from a company they've likely derided for decades, sometimes rightly so. You can't blame these folks for saying "nope" to PowerShell, especially given its origin story as a product from a company that famously "refuses to play well with others."

And therein lies the problem: many sh-flavored tools just don't have any good PowerShell integrations or analogs (yet). That may change over time as more people become aware of just how awesome modern pwsh can be (why do you think I wrote this article!?). But for the time being, tools that developers like myself have used for years, such as rvm, rbenv, asdf, and so on, just don't have any officially supported way to be used within PowerShell.

The good news is that this is a solvable problem, and in more ways than one!

Overload Limitations With Your Own PowerShell Profile

The most actionable of these potential solutions is the development of your own pwsh profile code that will sort of fake a given command, within PowerShell only, to allow you to use the same command/workflow you would have in bash or zsh, implemented as a compatibility proxy under the hood within PowerShell.

For a real-world example, here's a very simplistic implementation of a compatibility layer to enable rbenv and bundle commands (Ruby development) in PowerShell (according to my own personal preferences) by delegating to the real such commands under the hood:

#
# Notes:
#   1. My $env:PATH has already been modified to find rbenv in this example
#   2. See `help about_Splatting`, or the following article (same thing), to understand @Args
#          https://docs.microsoft.com/en-us/powershell/module/microsoft.powershell.core/about/about_splatting?view=powershell-7.2
#          Oversimplification: @Args = "grab whatever got passed to this thing, then throw 'em at this other thing VERBATIM"
#

function Invoke-rbenv {
  rbenv exec @Args
}

function irb {
  Invoke-rbenv irb @Args
}

function gem {
  Invoke-rbenv gem @Args
}

function ruby {
  Invoke-rbenv ruby @Args
}

function bundle {
  Invoke-rbenv bundle @Args
}

function be {
  Invoke-rbenv bundle exec @Args
}

With this in place, I can type out commands like be puma while working on a Rails app, and have that delegated to rbenv's managed version of bundler, which then execs that command for me. And it's all entirely transparent to me!

This is just one example and an admittedly simplistic one at that. Nonetheless, it proves that using PowerShell as your daily driver is not only possible but feasible, even when you need to integrate with other tools that are entirely unaware of PowerShell's existence.

But, we can go a step further with the recently-released PowerShell Crescendo. While I have yet to look into this all that much, essentially it provides a way for standard *nix tools to have their output automatically transformed from basic strings into real PowerShell objects at runtime. You have to write some parsing directives to tell PowerShell how to interpret the strings generated by some program, but once that's done you're set: you'll have non-PowerShell tools generating real PowerShell objects without any change to the tools themselves at all.

Jump on the Voltswagon!

If you're not convinced by now, something's wrong with you.

For the rest of you out there, you've got some options for installation:

  1. Use the packages provided by Microsoft (deb, rpm) (sudo required)
  2. Grab a precompiled Linux tarball then unpack it somewhere and run: /path/to/powershell/7.2/pwsh (no sudo required)
  3. Mac users can brew install powershell --cask. (sudo required for .pkg installer)

Antipattern: Do NOT change your default login shell

Don't do this: chsh -s $(which pwsh)

Modify your terminal emulator profile instead.

Just a quick tip: while PowerShell works fine as a default login shell and you can certainly use it this way, other software may break if you do this because it may assume your default login shell is always bash-like and not bother to check. This could cause some minor breakage here and there.

But the real reason I advise against this is more to protect yourself from yourself. If you shoot yourself in the foot with your pwsh configuration and totally bork something, you won't have to worry too much about getting back to a working bash or zsh configuration so you can get work done again, especially if you're in an emergency support role or environment.

When you're first learning, fixing things isn't always a quick or easy process, and sometimes you just don't have time to fiddle with all that, so it's good to have a "backup environment" available just in case you have to act fast to save the day.

Don't interpret this as "PowerShell is easy to shoot yourself in the foot with" - far from it. Its remarkable level of clarity and consistency make it very unlikely that you'll do this, but it's still possible. And rather than just nuking your entire PowerShell config directory and starting from scratch, it's far better to pick it apart and make yourself fix it, because you learn the most when you force yourself through the hard problems. But you won't always have time to do that, especially during your day job, so having a fallback option is always a good idea.

First Steps

Once installed, I recommend you create a new profile specifically for PowerShell in your terminal emulator of choice, then make that the default profile (don't remove or change the existing one if you can help it; again, have a fallback position just in case you screw things up and don't have time to fix it).

Specifically, you want your terminal emulator to run the program pwsh, located wherever you unpacked your tarball. If you installed it via the package manager, it should already be in your system's default $PATH so you probably won't need to specify the location (just pwsh is fine in that case). No arguments necessary.

With that done, run these commands first:

PS > Update-Help
PS > help about_Telemetry

The first will download help documentation from the internet so you can view help files in the terminal instead of having to go to a browser and get a bunch of outdated, irrelevant results from Google (I recommend feeding The Duck instead).

The second will tell you how to disable telemetry from being sent to Microsoft. It's not a crucial thing, and I don't think Microsoft is doing anything shady here at all, but I always advise disabling telemetry in every product you can, every time you can, everywhere you can, just as a default rule.

More importantly, however, this will introduce you to the help about_* documents, which are longer-form help docs that explain a series of related topics instead of just one command. Seeing a list of what's available is easy: just type help about_ then mash the TAB key a few times. It'll ask if you want to display all hundred-some-odd options; say Y. Find something that sounds interesting, then enter the entire article name, e.g. help about_Profiles or help about_Help.
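If mashing TAB isn't your thing, the same list (with one-line summaries) should also be available in a single shot:

PS > Get-Help about_* | Select-Object -Property Name, Synopsis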

Next, check out my other article on this blog about customizing your PowerShell prompt!

Roll the Dice

bash and zsh are great tools: they're wicked fast, incredibly stable, and have decades of battle-tested, hard-won "tribal knowledge" built around them that's readily available via your favorite search engine.

But they're also antiquated. They're built on a set of ideas that were right for their time, but fundamentally primitive compared to the ideas PowerShell was designed around.

Sooner or later you just have to admit that something more capable exists, and that's when you get to make a choice: stick with what you know, safe in your comfort zone, or roll the dice on something that could potentially revolutionize your daily workflow.

Once I understood just a fraction of the value provided by pwsh, that choice became a no-brainer for me. It's been roughly six months since I switched full-time, and while I still occasionally have a few frustrations here and there, those cases are very few and far between (it's been at least two months since the last time something made me scratch my head and wonder).

But those frustrations are all part of the learning process. I see even more "WTF?" things with bash or zsh than I do with pwsh, by far! Those things are rarely easy to work out, and I struggle with outdated documentation from search results in nearly every case!

But with PowerShell, figuring out how to work around the problem - if indeed it is a problem, and not my own ignorance - is much easier because I'm not dealing with an arcane, arbitrary syntax from hell. Instead, I have a predictable, standardized, consistent set of commands and utilities available to me that are mostly self-documenting and available offline (not some archived forum post from 2006). On top of that, I have real classes and objects available to me, and a built-in debugger (with breakpoints!) that I can use to dig in and figure things out!

So, why are we still using system shells based on paradigms from the 1980s? Are you still rocking a mullet and a slap bracelet, too?

Just because "that's the way it's always been" DOESN'T mean that's the way it's always gotta be.

PowerShell is the first real innovation I've seen in our field in a long time. Generally replete with "social" networks, surveillance profiteering, user-generated "content" and any excuse to coerce people into subscriptions, our industry repackages decades-old innovations ad infinitum, even when new approaches are within reach, desperately needed, and certain to be profitable.

So in the rare case that something original, actually useful, widely available, and open source finally does see the light of day, I get very intrigued. I get excited. And in this case, I "jumped on the Voltswagon!"

And you should, too!

The author would like to thank Chris Weibel for his help with some of those electricity puns, and Norm Abramovitz for his editorial assistance in refining this article.

Power Up Your PowerShell Prompt
https://www.starkandwayne.com/blog/power-up-your-powershell-prompt/
Thu, 13 Jan 2022 16:17:54 +0000

So, you’ve heard great and wonderful things about PowerShell and now you’re seriously considering it for your day-to-day, eh?

Well, good!

You should!

But before you drink the Kool-Aid, you’ve probably got some doubts, concerns, and questions. I’m sure one of those, lurking in the back of your mind, is something along the lines of:

“Can I customize my PowerShell prompt?”

Allow me to answer you with a meme:

Clip originally from 1999 comedy “Big Daddy”. Film subject to copyright. Not an endorsement. Displayed for educational, non-commercial purposes only under 17 U.S.C. § 107.

With that doubt soundly purged from your mind, you may now find yourself wondering if you can get your PowerShell prompt looking like all those fancy “powerline” prompts you’ve probably seen in screenshots out there. You’re wondering…

“How far could I take this?”

Answer: About 4.3 lightyears (give or take).

Traveling at the speed of light, it would take more than four years to reach Earth’s closest neighboring star system, Alpha Centauri. And that’s without stopping to use the restroom! (Image credit: NASA)

Okay, so maybe putting a number on it, measured at a hypothetical relative velocity, wasn’t technically correct, but it makes a heck of a point: you can take PowerShell customization way, WAY beyond what anyone would dare consider sane!

Now that you know just about anything’s possible, how do you do it? The short version is this:

  1. Find your $PROFILE on disk.
  2. Override the default prompt function with your own.
  3. Profit!

The $PROFILE and the prompt

Upon startup, PowerShell looks for a special file for the user executing the process called a profile. This is a plain-text file, written in PowerShell’s own scripting language, that allows the user to set a great many things like environment variables, aliases, custom functions, and yes, even their shell prompt.

To get started you need to find where your specific user profile (file) is located on disk.

Locating Your $PROFILE

The location of this file may vary based on platform and configuration, so the easiest way to find where pwsh wants yours to be is just to ask it!

$ pwsh
PowerShell 7.1.5
Copyright (c) Microsoft Corporation.

https://aka.ms/powershell
Type 'help' to get help.

PS > $PROFILE
/Users/jah/.config/powershell/Microsoft.PowerShell_profile.ps1

In this example, since I’m on MacOS and my $HOME is under /Users/jah, we can see that PowerShell is looking for the file in its default location on my platform. Linux users will likely see almost the same thing, with /home in place of /Users.

Be aware that the string output you get from $PROFILE doesn’t necessarily prove that the file itself actually exists; it’s just the path where PowerShell is going to look. It’s still UP TO YOU to create that file.

If this file doesn’t yet exist in the location PowerShell expects, just create it yourself. A quick touch $PROFILE from within PowerShell should do the trick rather easily. (You might need to create the $HOME/.config directory if it doesn’t already exist.)
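If you’d rather stay entirely inside PowerShell (and not worry about the parent directory), New-Item with -Force should handle both in one shot. Just note the caveat in the comment; this is a convenience sketch, not a required step:

# Creates the profile file and any missing parent directories.
# Careful: -Force will also blank out an existing profile, so only run this if the file isn't there yet.
New-Item -ItemType File -Path $PROFILE -Force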

Your $PROFILE file is nothing more than a plain-text UTF-8 encoded file with LF line endings (on *nix systems). You can put as much code, comments, and such in here as you want over the course of time that you use PowerShell. Consider making it part of your “dotfiles” configuration backup/persistence strategy. (Lots of folks find success using personal, private GitHub repositories for that. Just be sure not to commit secrets to history!)

The prompt function

Every time PowerShell needs to show you a prompt, it runs a specially-named function simply called prompt. If you don’t define this yourself, PowerShell uses a built-in default function that is extremely plain and minimal. This is the function we’re going to overwrite.

Let’s kick things off by overriding prompt with our own function: a very simple tweak to change the prompt’s output text color.

Before we proceed, a quick note on terminal emulators. I’m using iTerm2 (which is also what renders the stats bar at the bottom) on MacOS with the SF Mono font (which is, I think, Apple proprietary). It doesn’t contain emoji unicode symbols, so I’ve supplemented that with a Nerd Font, ligatures enabled. You Windows folks should try the new Windows Terminal from Microsoft, and you Linux users out there have more choice in this department than you could shake a stick at. Point is, your choice of terminal, and its configuration, are your responsibility.

Open your $PROFILE file in your favorite text editor and write your own prompt function. Start with this, just to get your feet wet:

function prompt {
  Write-Host ("$env:USER@$(hostname) [$(Get-Location)] >") -NoNewLine -ForegroundColor $(Get-Random -Min 1 -Max 16)
  return " "
}

This code was originally from Microsoft’s docs; I’ve made only minor tweaks to it, nothing more.

Here’s a screenshot of what this looks like in my case using iTerm2 on MacOS:

Now, this isn’t very exciting, but notice something: we’ve told PowerShell to choose a NEW COLOR at random every time it draws the prompt. So hit enter a few times and you get proof that this function runs every time PowerShell is ready for input:

Sure, pretty colors are nice, but this isn’t all that useful yet. Let’s power this up.
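Before we reach for a framework, here’s a sketch of one more hand-rolled step you could take on your own: color the prompt by whether the last command succeeded and shorten the path. This is purely illustrative; tweak it however you like.

function prompt {
  # $? reflects the success of the previous command at the moment the prompt is drawn.
  $color = if ($?) { 'Green' } else { 'Red' }
  # Abbreviate the home directory the way most shells do.
  $cwd = (Get-Location).Path.Replace($HOME, '~')
  Write-Host $cwd -NoNewline -ForegroundColor $color
  return ' > '
}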

A Powerline Prompt With Oh-My-Posh

Give your terminal a good jolt by setting up a nice powerline prompt with a utility called oh-my-posh.
Here’s a sample of what that might look like:

Here I’ve gone into the /System directory on my Mac workstation. My ls command is actually a function I’ve defined in my PowerShell $PROFILE that delegates to the awesome exa utility under the hood as a pseudo-replacement for /bin/ls. This way I can “replace” ls for personal use without the consequences of altering the original and its behavior/output (scripts, etc.).
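A function like that can be as simple as the following. To be clear, this is a sketch rather than my exact configuration, and it assumes exa is already installed and on your $PATH:

function ls {
  # Delegate to exa, passing along whatever arguments were given; add your favorite exa flags here.
  exa @Args
}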

Install Oh-My-Posh

As the oh-my-posh website explains, OMP is a shell-agnostic tool that lets you configure your prompt not just for bash, zsh, or PowerShell, but for any shell that works roughly the same way. This means you can have one configuration defining your prompt, then switch between all three aforementioned shells as you like and get the same prompt in each of them!

So visit the oh-my-posh docs and install OMP for your platform. In my case, this was a series of simple Homebrew commands (brew tap and brew install) that can be copy-pasta’d from the documentation (as of this writing).

BE ADVISED: Ignore Install-Module; Outdated

You may find outdated documentation elsewhere on the web referring to oh-my-posh as a PowerShell-only utility, or telling you to install it directly through PowerShell via Install-Module. DO NOT DO IT THIS WAY. That’s an old, outdated approach back from the days when Oh-My-Posh used to be only for PowerShell. That is no longer the case and installing it this way may be unsupported at any point in the future, so you’re better off avoiding this method entirely, even if you never intend to use anything other than PowerShell.

Just because you can doesn’t mean you should. As with life in general, going down “easy street” will usually bite you in the posterior later on. Same here; don’t fall for it!

Themes

Oh-My-Posh itself provides the ability to make your shell pretty, but for the actual “pretty stuff” itself, you need a compatible theme. Thankfully, OMP distributes a number of very nice, useful themes along with its install that you can re-use or copy-and-tweak to your liking.

If you’re following the brew installation route, you can see those themes in their original, distributed state by asking brew where that is:

brew --prefix oh-my-posh

Now, just tack /themes on the end of whatever that command gives you, and boom! There’s the plethora of themes you can choose from to get started.
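Or, if you’d rather browse them without leaving PowerShell, you can combine the two steps (this assumes the Homebrew install described above):

Get-ChildItem "$(brew --prefix oh-my-posh)/themes"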

In my case, I started with the theme blue-owl.omp.json, but with one tweak: I changed the value for osc99 from true to false. Why? Because that’s telling iTerm2 to sound an audible bell noise every time the theme gets loaded. So in my workflow, that meant that every time I opened a new terminal tab I’d hear that annoying beep noise! Not cool! So I just flipped the bit to remove that annoyance! I wish all life’s annoyances could be so easily eradicated…

You can do the same thing I did, starting with an existing theme, then making small tweaks, or you could go much further with your customizations. However you decide to do things, just make sure you COPY the existing theme file to a new location, instead of overwriting the original! This is because your installation method of choice – Homebrew, in this example – will likely overwrite your changes when it next updates OMP. Then you’d have to restore from backup, or do this all over again! Not what I typically want to be doing on a Saturday afternoon, ya know?

PowerShell Integration

With your theme selected, copied, tweaked to your liking, and saved elsewhere on disk (I chose $HOME/.config), you can now modify the previously mentioned $PROFILE file to tie these things together.

Open up a new PowerShell session and ask it for the path to your $PROFILE on disk again:

> $PROFILE
/Users/jah/.config/powershell/Microsoft.PowerShell_profile.ps1

Sample output only. Your path/response will vary.

Open that file in your text editor of choice. Now, assuming you have NOT already altered your $PATH environment variable to tell PowerShell where to find stuff installed via Homebrew (or other package manager), you can do something like this to construct an array for that value:

# Set Paths
$pth = (
  "$Home/.bin",
  "$Home/.brew/bin",
  "$Home/.brew/sbin",
  "$env:PATH"
)
$env:PATH = ($pth -Join ':')

This is an example only, taken from my personal configuration. I keep one-off scripts/code in ~/.bin as symlinks to other things so I can rename commands, etc. (e.g. nvim -> vim) without actually renaming the files themselves or having to create aliases by modifying code (just a convenience). And I install Homebrew in $HOME/.brew so that it won’t need full disk access. It’s more secure, and in something like 10 years it’s never once actually broken anything for me, even though the Homebrew authors explicitly advise against doing it this way. But that’s just me – you do you!

Be sure you do this BEFORE invoking any call to oh-my-posh. Otherwise, the shell will have no idea what you’re talking about and you’re gonna have a bad time.

With that in place, add the following line just below that snippet, before doing any further customization:

oh-my-posh --init --shell pwsh --config ~/.config/omp.theme.json | Invoke-Expression

Of course, substitute the path provided to the --config argument with the right path to YOUR configuration file.

With that done, save the file and open up a new PowerShell terminal session (new terminal tab).
You’ve now got a fancy new shell prompt in PowerShell!

What’s going on under the hood?

What the above command does is use the oh-my-posh binary, provided with arguments, to generate some PowerShell code. Then, that output is piped from within PowerShell to the Invoke-Expression function. This is essentially an eval() function for pwsh. It’s like saying, “Here’s some string data, now treat it as source code and run it.”

For that reason, an astute observer might find this approach a little uncomfortable, which is pretty understandable. If that’s you, I commend your security awareness and eagle-eyed nature. As a purely academic exercise, here’s the first piece of what that generated code looks like (I had to cut the screenshot because what it generates is kinda long, but you’ll see where I’m going with this):

Sample of generated code that is “eval”‘d by PowerShell
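If you’d rather inspect that output on your own machine than take a cropped screenshot’s word for it, you can dump the generated code to a file and read it before Invoke-Expression ever touches it. The preview filename here is just an example:

oh-my-posh --init --shell pwsh --config ~/.config/omp.theme.json |
  Set-Content ~/omp-init-preview.ps1
Get-Content ~/omp-init-preview.ps1 | Select-Object -First 20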

If you find the Invoke-Expression implementation uncomfortable, you could copy-and-paste that output into another file somewhere, or even put it directly into your $PROFILE, to render attacks against that specific vector impossible. But the cost of doing that is convenience; you’d have to regenerate it every time OMP or the theme is updated, and possibly after some future PowerShell update as well if backward compatibility gets broken at some point. You’d also have to maintain the generated source code itself by backing up yet another file somehow/somewhere.

But that’s up to you. Personally, as long as I’m aware when the oh-my-posh binary on disk gets changed, I’m “comfortable enough” to run it this way. But it’s quite understandable if you don’t share my opinion on this matter. You wouldn’t be “wrong” whatsoever; perhaps “impractical”, but certainly not “wrong”.

Now What?

You’ve got your fancy prompt, so now what? I recommend taking a look at the built-in help documentation from within PowerShell itself to get started. At your (now snazzy!) prompt, do this:

  1. Type help about_
  2. Now STOP. Do NOT press enter.
  3. Instead, whack that TAB key a few times and see what happens!

If you answer y, you’ll get a list of all the about_* files that ship with PowerShell. Each of these contains a very well-written overview of multiple features, settings, and other very useful bits of info on how to better use PowerShell to get stuff done.

Now all you need to do is figure out which file to view. If, like me, you’re a privacy-conscious person, you might want to start with the entry on telemetry:

help about_Telemetry

Next, perhaps you’d like to know more about how to configure PowerShell to your liking:

help about_Profiles

But if you want to see where PowerShell really shines, check out the entry on Methods:

help about_Methods
PowerShell: it’s not just a shell, it’s a fully-baked object-oriented programming environment, too!

Variables, types, classes, methods – PowerShell has it all. The syntax is very approachable and will feel familiar to anyone with even a modest amount of programming experience. There are a few variations in that syntax some consider odd, but they’re very unobtrusive, and in truth it’s far easier to build shell scripts in PowerShell that are distributable and work consistently across implementation versions (and platforms!) than it ever would be using the esoteric vagaries of /bin/sh and friends, especially for those of us who haven’t been writing shell scripts since the days of UNIX System V.

Opinion: PowerShell seems less “snowflake-y” than sh. Note sh’s test brackets [ ], its then keyword, and the fact that, instead of curly braces like dozens of other programming languages, sh ends a conditional logic block with a new line and fi – “if”, backwards. How is this in any way easy or approachable? By contrast, PowerShell draws on ubiquitous conventions and patterns you already know!
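To make that comparison concrete, here’s a trivial conditional written in PowerShell: no test brackets, no then, no fi, just parentheses and braces like most languages you already know.

if (Test-Path $PROFILE) {
  Write-Host 'Profile found'
} else {
  Write-Host 'No profile yet - go create one!'
}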

While PowerShell isn’t as popular, especially outside of the Windows ecosystem, as its storied counterparts of UNIX legend, it’s more capable and has a much shorter learning curve. There’s certainly nothing wrong with using those shells of old, but why settle for a 9-volt battery when you can have a nuclear reactor?

Internship blog – Tanvie Kirane
https://www.starkandwayne.com/blog/internship-blog-tanvie-kirane/
Fri, 17 Dec 2021 19:44:31 +0000

This Fall, I am interning at Stark & Wayne, LLC in Buffalo, NY. Although this is my second internship, it is a lot more than I thought it would be — in the best way possible! I got the opportunity to develop skills in utilizing different UI design tools, as well as overcoming my fear of command line. I have been working a lot with the CTO (Wayne Seguin), our supervisors (Dr. Xiujiao Gao and Tyler Poland), and three other interns.

This internship has been by far the most challenging yet exciting work I have done in my college career. What I loved most about it is the weekly meetings and 1:1’s where we get the opportunity to present our work and gauge perspectives from our supervisors and guest attendees. We also discuss mentorship and career development tips which I think are very important at this stage in my career. The overall work culture is very open and friendly which makes it easier for us interns to gel with the team! 

I classify myself as a calculated risk-taker and have always tried to avoid making mistakes. At the beginning of this internship, I was scared of using the terminal because it’s one (unintentional) move and game over. I mentioned this during a conversation with my supervisors and they helped me realize that it was okay to make mistakes, provided you learn from them. They motivated me to try new things and reach out if anything comes up and, slowly & steadily, I have overcome my fear of using the terminal! Besides this, I learned the benefits of version control, active discussions, and reverse engineering.

As for expectations for the rest of my internship, I am thrilled to be learning more every day! This internship has given me a taste of the real-world industry and also given me a chance to explore problem-solving and designing a product based on client needs and how I can use my expertise and skills in a team to help deliver the best results.

Getting Kubernetes Cert-Manager to work with Cloudflare and Let’s Encrypt
https://www.starkandwayne.com/blog/getting-kubernetes-cert-manager-to-work-with-cloudflare-and-lets-encrypt/
Mon, 01 Nov 2021 18:52:33 +0000

Life was great until we needed to upgrade!


We had our websites and services working just fine with our certificates being upgraded automatically, and then we were forced to upgrade to a later Kubernetes release. Kubernetes 1.22 removed a few features but it should tell us if anything went wrong in the upgrade, right?

We started getting emails about certificates expiring. Well, that was odd, so we ran the cert-manager plugin to renew the certificates. Of course, to get that to work, we needed to upgrade the cert-manager service, which required us to also change our deployment manifest file to match the updated specification for the cert-manager. Isn’t Kubernetes just grand!

We ran the cert-manager renew command, and everything looked like it should work.

We were wrong and now what can we do?

Our expiry date came, and we started getting the following error:

The error message you get when CloudFlare’s DNS Proxy is enabled.

Ok, so the certificate rotation did not work. Now what?

Is this really the right solution?

One of my colleagues said, “Oh, certificate rotation does not work anymore.” So, do the following steps:

  1. Extract the private and public certificate parts
  2. Decode the certificate parts, because they are stored in base64 (see the sketch after this list)
  3. Generate the new certificates
  4. Encode the new certificates back into base64
  5. Push the encoded certificate parts back to the cert-manager
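For context, the first two steps look roughly like this with kubectl; the secret and namespace names below are placeholders, not our real ones:

# Pull the certificate and key out of the TLS secret and decode them from base64.
kubectl get secret my-tls-secret -n my-namespace -o jsonpath='{.data.tls\.crt}' | base64 -d
kubectl get secret my-tls-secret -n my-namespace -o jsonpath='{.data.tls\.key}' | base64 -d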

This solution bothered me to no end. What is the issue? If the cert-manager cannot do its job, then why even have it? Searching the web suggested using another DNS provider. Well, we like Cloudflare because it supports DNS proxying. What is Cloudflare’s definition of a DNS proxy? Their DNS proxy advertises a proxy IP to the world and then forwards traffic from that proxy IP to the actual/real host IP. This lets Cloudflare’s proxy host absorb denial of service attacks instead of your website. Your actual host IP cannot be discovered directly, since the DNS service only knows about the proxy IP address. Well, why does this matter?

We looked at the cert-manager log and discovered log messages like the one below:

E0913 20:53:36.897867       1 sync.go:185] cert-manager/controller/challenges "msg"="propagation check failed" "error"="wrong status code '526', expected '200'" "dnsName"="DNS-name-changed" "resource_kind"="Challenge" "resource_name"="verse-tls-gg4k8-1453563326-1173022923" "resource_namespace"="namespace-name-changed" "resource_version"="v1" "type"="HTTP-01"

Kubectl log for cert-manager

That status code is the same status code we get back from the Cloudflare proxy service. Ah, the cert-manager is trying to renew the certificate using the public internet which is proxied through Cloudflare.

526 Invalid SSL Certificate
Cloudflare could not validate the SSL certificate on the origin web server. Also used by Cloud Foundry’s gorouter.

https://en.wikipedia.org/wiki/List_of_HTTP_status_codes

Cloudflare won’t proxy traffic to a server with an invalid certificate, and since we also redirect HTTP to HTTPS, everything must be served over SSL. So, without looking deeper into the problem, what can we do? It turns out you can turn off DNS proxying through the Cloudflare interface.

If you turn off the proxy status for the affected DNS record in the Cloudflare dashboard, renewing certificates will work.

Another solution, but at least we are using the cert-manager

The new steps we followed were:

  1. Login to Cloudflare
  2. Find your DNS entry and disable DNS proxy
  3. Run the kubectl cert-manager renew command (sketched below)
  4. Wait for the renewal to complete
  5. Reenable DNS proxy
  6. Logout from Cloudflare
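If you have the cert-manager kubectl plugin installed, steps 3 and 4 look roughly like this; the certificate and namespace names are placeholders:

kubectl cert-manager renew my-cert -n my-namespace
# Check on it until the certificate reports Ready again.
kubectl cert-manager status certificate my-cert -n my-namespace
kubectl get certificate my-cert -n my-namespace -w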

Our next step is to do some research and see if we can actually make changes to the certificate renewal process so this manual process can be avoided in the future.

Stark & Wayne Internship Program Fall 2021 Kick-Off
https://www.starkandwayne.com/blog/stark-wayne-internship-program-fall-2021-kick-off/
Fri, 06 Aug 2021 15:30:57 +0000

We're excited to announce the restart of our internship program for this upcoming fall semester! We put our program on hold temporarily during the Covid-19 pandemic and have been looking forward to restarting it with a new group of interns. As before, the goal of the continuing program is to bring in a group of upperclassmen students (grad and undergrad) and enable them in dev/ops culture, agile methodologies, and cloud technology.

The Format

Like previous internships, there are up to four interns joining our team part-time (20 hours) through the fall semester. The interns will own their project for the entire lifecycle from inception and planning, through implementation and testing. The project is run with an agile philosophy using daily stand-ups, weekly demos, two-week sprints, and a backlog. One of the end goals is to teach how software is built in modern companies and teach how to thrive in an agile environment. Additionally, and perhaps most importantly, interns will be sharing their experience by collaborating on blog posts describing the experience during and at the conclusion of their project.

Note: Due to current public health concerns this will be operated as a remote-first internship. Our business space will be open to the interns during normal business hours, however, remote collaboration via Slack and Zoom will be used for most team interactions.

Pick Your Project

In our previous internship cycles, the interns collaborated and picked the project they would work on for the length of the internship. This allows the internship to be flexible and adapt to the group's experience and interests. We will be providing five choices in the roadmap for our existing open-source projects (Tweed, SHIELD, Genesis, Safe).

Here is an example project from summer 2019:

The Kubernetes container orchestration system relies on a single persistent data store for its metadata and configuration needs: etcd. This distributed key-value store is vital to the proper operation of Kubernetes and, assuming identical replacement hardware / component configuration, the data in etcd is all that an operator needs to resurrect a dead cluster.

Design and build a SHIELD Data Protection Plugin for backing up the data in etcd, and restoring data to etcd from those archives. This plugin must work within the existing contract SHIELD has for "target" plugins. To enable this effort, you will need to be able to deploy and validate both Kubernetes and SHIELD. We will provide you hardware for both.

This project is split fairly even between development and operations work. SHIELD plugins are virtually all written in Go.

The goal of the five projects is to lie on the spectrum of purely software development, to mostly operations, with most lying somewhere in the middle.

Want In?

We have started to speak to candidates interested in joining our Fall 2021 internship and are looking to finalize participants by the end of August. If you're interested please email your résumé and a small blurb about why you are interested to internships@starkandwayne.com. We look forward to hearing from you! Given the remote-first nature of the program this fall, candidates outside the Buffalo, NY area are encouraged to apply.

What's Next?

We're excited to share this journey with you over the next few months! Be sure to check out our followup blog about the projects available this semester as well as posts from the interns talking about the project, implementation, and overall experience.

Using a Windows Gaming PC as a (Linux) Docker Host
https://www.starkandwayne.com/blog/converting-a-windows-gaming-pc-to-a-linux-docker-host/
Tue, 11 May 2021 18:30:00 +0000

Docker Desktop is a perfectly serviceable way to use Docker on either MacOS or Windows, but for non-trivial use cases, it leaves much to be desired.

I recently happened upon one such use case that you might think would be rather common: I develop on MacOS, but since my MacBook Pro only has 16GB of memory, I'd like to use another host - in this case, my personal Windows gaming computer, which has a whopping 32GB of memory - as a remote Docker host. I mean, how many of us hackers, nerds, and geeks out there work with our (usually) MacBook laptops during the day, but then flip over to our custom-built Windows boxes after work to blast some aliens in the face?

You'd think this would be pretty well-traveled territory by now, and thus relatively easy to achieve. But you'd be dead wrong. It turns out that getting Docker working as a local network host on Windows is anything but simple; in fact, it apparently requires quite a kludgy hodgepodge of hacks to work.

Disclaimer: I am NOT a Windows admin by any stretch of the imagination. I am fully aware that you can do pretty-much anything with Windows these days that you can with *nix operating systems in terms of configuration, services, etc., but I fully admit that I do not know current best practices to do so. If you know better, @ me.

Also, it should be painfully obvious, but don't use this sort of setup in any mission-critical or production situation.

tl;dr summary

  1. Install WSL 2, then install Docker as a daemon inside that.
  2. Configure said daemon to listen on 0.0.0.0:2375.
  3. Realize that WSL will force it to bind to localhost, not 0.0.0.0, no matter what you do.
  4. Wire up a TCP proxy running in Windows userspace to bind to 0.0.0.0:2375 and use that to shuttle traffic to localhost:2375.
  5. Test connectivity from an external host to verify it works.

Docker Desktop: What NOT to do

Docker Desktop includes a simple way to make your Windows machine a Docker host (or so it would seem): a simple check box in the configuration. You'd think this is all you need, but there's a "gotcha" here: it will only bind Docker to localhost:2375, NOT to 0.0.0.0:2375, meaning that using this option will only make Docker available over the local machine. It'll be unreachable from anywhere else on your network using this option.

Side note: apparently Docker Desktop includes this functionality to make current versions of the daemon available to legacy docker CLI clients. Still, for our purposes, it's pretty-much useless.

At first, I didn't realize this. I used the aforementioned check-box and tried to reach Docker from another machine on my local network, to no avail. Wondering what I'd done wrong, I looked at the Windows Resource Monitor and found that the Docker Desktop application was listening on port 2375, sure enough, but that it was bound to localhost only, so it couldn't receive traffic from anywhere else on my network.

Next, I did what any sane person would do: I asked "The Google"™. I happened upon this article on Docker configuration that implied I could control this behavior by modifying the JSON configuration exposed in the Docker Desktop app. So naturally, I added the following to the Docker configuration:

{  "hosts": ["tcp://0.0.0.0:2375"]
}

Then, I disabled my firewall in both Windows and in my antivirus software just to rule it out as a possibility in case that didn't work. When I restarted Docker, I checked the Network tab of the Resource Monitor again and looked for what was bound to port 2375. Yep, it's Docker Desktop all right, but it's still bound ONLY to localhost! Even with the right configuration according to the documentation, Docker Desktop on Windows just refuses to bind to all network interfaces, making it essentially useless for all but the most trivial of use cases.

Using Windows Subsystem for Linux

After a few more trial-and-error attempts to get the above working failed, out of frustration I decided to nuke the entire Docker installation from within Windows and re-install just the daemon inside WSL. Once I had that installed and configured, I reasoned, I could expose the daemon from within WSL to the network and be able to finally get some work done.

How wrong, how hopelessly wrong I was.

For the uninitiated, "WSL" stands for "Windows Subsystem for Linux" and is, quite possibly, the most amazing thing Microsoft has ever built into their flagship product. The first version of WSL added hooks/shims to the Windows kernel directly so that, when a complied Linux binary made syscalls to what it thought was the Linux kernel, the Windows kernel accepted those syscalls and redirected or re-implemented the logic that was needed in "The Windows Way" such that any Linux binary running had absolutely no idea it was running on Windows. Think virtualization without needing a hypervisor or another OS installed.

As with anything, this was less than perfect. Over time, issues began to arise that Microsoft eventually decided to address with WSL version 2: implementing an extremely lightweight, totally transparent virtual machine within Windows that allowed for the same features, just implemented in a different way. This has some tradeoffs vs. the previous version, but is, for the most part, the recommended way to go these days.

You'll need Windows 10, preferably the most recent build version, to use WSL. You may also need the Pro edition or better (I'm not sure if Home edition is allowed to run WSL or not). You can read more about how to install it here.

Installing the daemon under Linux is straightforward enough. Docker provides excellent documentation on the install process and a great post-install guide, both of which I followed for an Ubuntu 20.04 WSL 2 install. That was simple enough; however, because WSL starts up a little differently than a normal Linux distro, you don't have a working systemd, so you'll need to figure out another way to auto-start the Docker daemon.

Once everything's installed and configured, you can launch the docker daemon manually by running dockerd as root.

However, if you review the Network tab of the Windows Resource Monitor, you won't see dockerd anywhere. Instead, that'll be under the process wslhost, which will be bound only to local loopback. No matter what you do, WSL will not allow you to expose processes inside of it to the network directly. We're still stuck with local loopback only.

So, more magic will be needed to get things working. How about a TCP proxy?

Sticking a Proxy In Front: An Exercise in Madness

At this point I realized there was no way I would ever get Docker exposed to the entire network directly, so I resolved to put a TCP proxy in front of it. "As long as it's just blindly shuttling traffic back and forth," I reasoned, "the docker CLI won't even know the difference."

So, what proxy should we put in place? I had experience with HAproxy, but it looks like they don't have a build for Windows. I needed something in the Windows user space, so that was out.

The other proxy I was familiar with was Envoy, so I looked at their website to see if they had a Windows build available. It turns out that the "documentation" (such as it is) for running Envoy on Windows tells you to compile it from source (at the time this happened; hopefully this changes in the future) and I had neither the time, nor patience, to install a massive Windows development environment just to build Envoy once. That seemed kind of insane.

After some more looking around, I found their FAQ, which linked to this document as the place to download binaries. Only one problem: they're all Docker images! So I'd need Docker to run Docker?!

That made zero sense, so I looked at the project's GitHub releases — no luck. So, I started looking at the project's "issues" to see if anyone else had asked for some help in getting a pre-compiled Windows release, hoping maybe they got a useful answer.

Unfortunately, the most useful response on the issue of providing pre-built Windows binaries for Envoy was this response from one of the maintainers:

See https://www.envoyproxy.io/docs/envoy/latest/faq/binaries. We don't provide binaries, but you can copy them out of the docker container if you want.

The thread does go on, and there are links provided to documentation and a tool for downloading Envoy on Linux machines, but any mention of Windows is entirely missing from any of the documentation that's been linked (as of the time of this writing), making the responses in that thread entirely pointless.

And with that, I was off to go rip apart a Docker image.

Doing Ungodly Things With Docker Images

In my personal opinion, using Windows for development and/or terminal-based work/workflows is still a pain. It's just not a smooth experience, not to mention I'd have to re-install about a bajillion different things, and configure a ton more, just to get a decent workflow set up.

So, I proceeded to grab the official Envoy Docker Image for Windows from Docker Hub while on my Mac (using Docker desktop which was eating over 80% of my memory at the time, just to pull images):

$ docker pull envoyproxy/envoy-windows
Using default tag: latest
Error response from daemon: manifest for envoyproxy/envoy-windows:latest not found: manifest unknown: manifest unknown

All right, fine Docker, be difficult. Let's just force the most recently published version as the tag:

$ docker pull envoyproxy/envoy-windows:v1.18.2
v1.18.2: Pulling from envoyproxy/envoy-windows
4612f6d0b889: Downloading
5ff1512f88ec: Downloading
ecaafce5c67e: Pulling fs layer
d507bef22a1e: Waiting
9bdb36426121: Waiting
e5b3f220cc68: Waiting
e5410c30e7a9: Waiting
4b47829da3b1: Waiting
35eb096aecb4: Waiting
68a2a4ec685d: Waiting
9c1c360fc8b3: Waiting
a08a596feca8: Waiting
image operating system "windows" cannot be used on this platform

...aaaaaand, nope. Docker won't even let you pull the image if there's a mismatch between the host and container operating systems.

After cursing the gods, I decided I wasn't going to give up. I remembered that Docker images are just big tarballs, so I should be able to just download the flat tarball image from Docker Hub and, in theory, unpack it somewhere to browse the filesystem of what would otherwise be a Windows image.

As it turns out, this isn't so easy. There are a lot of custom headers and and tokens involved in retrieving a Docker image, and it's not easy to do manually. Luckily, I stumbled on this script from the Moby project that did exactly what I needed! I retrieved the script and ran it to fetch the Windows Envoy image:

$ ./script.sh win envoyproxy/envoy-windows:v1.18.2
Downloading 'envoyproxy/envoy-windows:v1.18.2@v1.18.2' (12 layers)...
<snip>
Download of images into 'win' complete.
Use something like the following to load the result into a Docker daemon:
  tar -cC 'win' . | docker load

Note: If you get an error message complaining that 'mapfile' is not found, you may need to upgrade your version of Bash. MacOS ships with an old version that's two major releases behind. Alternatively, you could run this on your WSL machine which may already have an updated version of Bash available.

Now, we can browse the win/ directory that this script created for us:

$ ls win
Permissions Size User Date Modified Name
drwxr-xr-x     - jah   4 May 15:42  1a5606bb70dc20fb56456b2583d295bdd2e847c0617da3da68d152bdd6a10b78
drwxr-xr-x     - jah   4 May 15:42  4e6cb5497aca4d83d2b91ef129fa823c225b0c76cefd88f5a96dd6c0fccdd6c7
drwxr-xr-x     - jah   4 May 15:42  6bfb8784732bcc28ef5c20996dbe6f15d3a004bf241ba59575b8af65de0a0aaf
drwxr-xr-x     - jah   4 May 15:42  3712aa599c08d0fb31285318af13e44d988391806d2460167643340c4f3a7123
drwxr-xr-x     - jah   4 May 15:42  698765937dc05ffcc458d8c2653563450bc169a724c62ed6a2c58f23c054b0ff
drwxr-xr-x     - jah   4 May 15:42  a4c3f3e7cef6cd7492338a26b7b307c0cd26e29379655f681d402c1eeaf595b6
drwxr-xr-x     - jah   4 May 15:42  b93d56fb00e644574bb7c2df769bb383d7fa351730393d46239078026bbc8efc
.rw-r--r--  3.7k jah   4 May 15:42  b775d72f61762e116864ab49adc8de32045e001efd1565c7ed3afe984d6e07f0.json
drwxr-xr-x     - jah   4 May 15:42  c42480d1b057b159309c4e55553ba75d84c21dc6c870f7ed77b0744c72e755f5
.rw-r--r--  3.7k jah   4 May 15:40  d00ba7ba582355550f5e42f453d99450754df890dec22fc86adb2520f3f81da2.json
drwxr-xr-x     - jah   4 May 15:42  d59df72a9d52b10ca049b2b6b1ce5b94f6ebb8a100ec71cea71ec7d8c0369383
drwxr-xr-x     - jah   4 May 15:43  d8067d34f431844ea7a3068d31cdb9254f1fcb93bcaf1c182ceebdec17c8d1fc
drwxr-xr-x     - jah   4 May 15:42  ea8955ac8603cc8dbb34e70e0922b59271522839db7d626e0f79f45b954c0d12
drwxr-xr-x     - jah   4 May 15:42  ec233e633fbbcbaf9d6f7ba3496ebc676f9b70ac4b95ba1127c466723976f55a
.rw-r--r--  1.2k jah   4 May 15:43  manifest.json
.rw-r--r--    51 jah   4 May 15:43  repositories

After poking around each directory one-by-one and decompressing layer.tar that was found in each, I eventually found what I was looking for:

$ cd 3712aa599c08d0fb31285318af13e44d988391806d2460167643340c4f3a7123
$ tar xf layer.tar && chmod -R 777 * && tree .
.
├── Files
│   ├── Documents\ and\ Settings -> [Error\ reading\ symbolic\ link\ information]
│   └── Program\ Files
│       └── envoy
│           └── envoy.exe
├── Hives
│   ├── DefaultUser_Delta
│   ├── Sam_Delta
│   ├── Security_Delta
│   ├── Software_Delta
│   └── System_Delta
├── VERSION
├── json
└── layer.tar
4 directories, 10 files

Finally! We found envoy.exe!

After copying envoy.exe out of the container tarball, now I could write a configuration file to proxy TCP traffic from 0.0.0.0:2375 to local loopback on the same port. That config file wound up looking about like this:

static_resources:
  listeners:
    - name: docker-proxy
      address:
        socket_address:
          address: 0.0.0.0
          port_value: 2375
          protocol: TCP
      filter_chains:
        - filters:
            - name: envoy.filters.network.tcp_proxy
              typed_config:
                "@type": type.googleapis.com/envoy.extensions.filters.network.tcp_proxy.v3.TcpProxy
                cluster: docker-actual
                stat_prefix: docker-proxy
  clusters:
    - name: docker-actual
      connect_timeout: 1s
      type: STATIC
      load_assignment:
        cluster_name: docker-actual
        endpoints:
          - lb_endpoints:
            - endpoint:
                address:
                  socket_address:
                    address: 172.25.65.236
                    port_value: 2375
                    protocol: TCP
admin:
  address:
    socket_address:
      address: 0.0.0.0
      port_value: 9901

Important note: Every time WSL launches, it assigns a different IPv4 address to itself. One thing I haven't figured out just yet is how to get it to always use the same IP address. You'll need to get your WSL IP address by running ifconfig eth0 from within a WSL shell and substitute your IP for the one in my example configuration. You'll need to do this every time you reboot, and/or every time you stop and start WSL.
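One partial workaround, and this is a convenience sketch rather than something from my actual setup: you can ask WSL for its current address from the Windows side instead of opening a shell and running ifconfig.

# Ask the default WSL distro for its address(es) and take the first one.
$wslIp = (wsl hostname -I).Trim().Split(' ')[0]
Write-Host "Current WSL 2 address: $wslIp"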

Start it Up

With Envoy finally ready to go, we can start everything up and test it. On my Windows machine I launched a shell into WSL and ran the Docker daemon manually:

$ sudo dockerd --tls=false &

Then, in another terminal window, I launched a PowerShell session and ran Envoy:

PS J:\envoy> .\envoy.exe -c .\envoy.yaml

If all is working as it should, you should be able to see that wslhost is bound to the local loopback on port 2375, and that envoy.exe is bound to "IPv4 unspecified" on 2375 as well:

Resource Monitor

Now I was finally able to use a computer with some significant hardware as my Docker host. All that remained was to export the environment variable and use it!

$ export DOCKER_HOST=tcp://<ip of windows box>:2375
$ docker ps
CONTAINER ID   IMAGE     COMMAND   CREATED   STATUS    PORTS     NAMES

Success!

Closing Thoughts

In case it wasn't obvious, what I did here should NEVER be done in a production situation. This was purely for my own use on my own network for development and testing purposes only.

The setup described herein relies quite a bit on manual work. Every time you reboot the host, you'll need to dig up the internal IPv4 address for WSL, change that entry in the Envoy configuration, start up dockerd within WSL and launch envoy.exe again to connect everything. Right now I'm doing this by hand since, fortunately, I don't have to reboot that Windows machine all too often. That said, I do plan on finding ways to automate the launch of that stuff every time I boot the computer up. I believe that can be accomplished with a BAT file that you call through a Windows "Scheduled Task" (see Task Scheduler app) every time you start the computer, prior to login.
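I haven't built that automation yet, but a scheduled PowerShell script along these lines should cover most of it. Every path below and the __WSL_IP__ token in the template file are assumptions for illustration, not pieces of my actual setup:

# Grab the current WSL 2 address and bake it into a fresh Envoy config from a template.
$wslIp = (wsl hostname -I).Trim().Split(' ')[0]
(Get-Content 'J:\envoy\envoy.template.yaml') -replace '__WSL_IP__', $wslIp |
  Set-Content 'J:\envoy\envoy.yaml'

# Start dockerd inside WSL, then launch Envoy on the Windows side.
Start-Process wsl -ArgumentList '-u', 'root', '--', 'dockerd', '--tls=false'
Start-Process 'J:\envoy\envoy.exe' -ArgumentList '-c', 'J:\envoy\envoy.yaml'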

This process was way harder than it needed to be. From the Windows Docker Desktop application refusing to honor the hosts configuration array, to the Envoy project claiming they have a Windows release of their proxy but making it so incredibly hard to retrieve, to the way Microsoft refuses to expose processes bound to ports in WSL to the network by default, this was a major pain in the posterior. It's fair to criticize all three companies involved here in how hard they've made it just to do what should otherwise be a very simple thing. It's also fair to say that none of this would be possible without the fine job the people at these aforementioned places have done in building excellent tools that allow people like myself to accomplish such feats of "Mad Science" as this.

One thing I will admit, though, is that it wasn't necessary to remove the Docker Desktop application from Windows, since the daemon we installed within WSL wound up being bound only to local loopback anyway. In theory, at least, you should be able to do all of this without using WSL at all (minus retrieving the Envoy Docker image; you need a recent-ish version of bash for that). The important words here are "in theory" - I haven't tested this with the Docker Desktop app for Windows, and have no idea what additional headaches one might run into should they try to use it in this way.

None of this would have been necessary, however, if either:

  • Apple shipped their default MacBook Pro laptops with more than 16GB of RAM, or;
  • Docker didn't require an entire VM with dedicated resources to "fake" the notion of containers on MacOS.

But, we don't live in a perfect world.

The post Using a Windows Gaming PC as a (Linux) Docker Host appeared first on Stark & Wayne.

Implementing Non-Trivial Containerized Systems – Part 4: Adding a Web Interface https://www.starkandwayne.com/blog/implementing-non-trivial-containerized-systems-part-4-adding-a-web-interface/ Tue, 20 Apr 2021 19:17:00 +0000 https://www.starkandwayne.com//implementing-non-trivial-containerized-systems-part-4-adding-a-web-interface/

This is the fourth part of a multi-part series on designing and building non-trivial containerized solutions. We're making a radio station using off-the-shelf components and some home-spun software, all on top of Docker, Docker Compose, and eventually, Kubernetes.

In this part, we've got a working system, our own rebuildable images, and a portable Docker Compose deployment recipe. What more could we want? A web interface for managing the tracks that we're streaming would be nice...

The Icecast web interface is, well, usable, but it does leave a fair amount to be desired. This is the part of systems design where I usually take a step back, see what's missing, and then focus my software development energies on solving those deficiencies.

As a radio station operator, I would like to...:

  1. Exert fine-grained control over the tracks that my radio station plays.
  2. Use my web browser to add new YouTube tracks to my radio station.

That's it; I'm a simple man, with simple needs. I can write a small-ish web app to do these things, and give it read-write access to the /radio volume.

And here it is:

I call it MixBooth. It's on GitHub too.

MixBooth is a small Vue.js front-end backed by an even smaller Go HTTP (REST) API. The backend piece consists of only three endpoints:

GET /playlist

This one retrieves the current playlist. The Vue bits render this on the bottom half, under the Current Track Line-Up heading. The checkboxes are there so that you, intrepid Radio Disk Jockey that you are, can remove tracks from the rotation, and add them back in, by way of our next endpoint:

PUT /playlist

Tired of the same song popping up? Yank the cassette!

Finally, to get new stuff into the station, from YouTube, we have:

POST /upload

For a deeper dive into the code, check out the GitHub repository.
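
To make those endpoints a little more concrete, here's roughly how you'd poke at them with curl. The request bodies are my own guesses for illustration - consult the repository for the actual contract:

# Fetch the current playlist.
curl -s http://localhost:5000/playlist

# Replace the playlist wholesale (payload shape assumed).
curl -s -X PUT http://localhost:5000/playlist \
     -H 'Content-Type: application/json' \
     -d '["/radio/track-one.opus","/radio/track-two.opus"]'

# Ask the backend to ingest a new track from YouTube (field name assumed).
curl -s -X POST http://localhost:5000/upload \
     -H 'Content-Type: application/json' \
     -d '{"url":"https://www.youtube.com/watch?v=<video-id>"}'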

To containerize this thing, we're first going to look at our Dockerfile. Keep in mind I specifically wrote this piece of software to work with this radio station deployment. I was "filling in the gaps" so to speak. This is a long Dockerfile – the longest we've seen so far – so we'll take it in parts.

Our first part is the api stage. Here, we build the Go backend application, using tooling that should be familiar to most Go programmers (even terrible ones like me):

FROM golang:1.15 AS api
WORKDIR /app
COPY go.mod .
COPY go.sum .
COPY main.go .
RUN go build

Our next stage (in the same Dockerfile) builds the Vue component, using tooling that is recognizable to Node developers (but maybe not to the aforementioned Go rockstars):

FROM node:15 AS ux
WORKDIR /app
COPY ux .
RUN yarn install
RUN yarn build

Finally, in the last stage, we'll tie it all back together, copying in assets from our build stages, with some Ubuntu packaging:

FROM ubuntu:20.04
RUN apt-get update \
 && DEBIAN_FRONTEND=noninteractive  apt-get install -y python3 python3-pip ffmpeg \
 && pip3 install youtube-dl \
 && apt-get remove -y python3-pip \
 && apt-get autoremove -y \
 && rm -rf /var/lib/apt/lists/*
COPY --from=api /app/mixbooth /usr/bin/mixbooth
COPY --from=ux  /app/dist     /htdocs
COPY            ingest        /usr/bin
EXPOSE 5000
ENV HTDOCS_ROOT=/htdocs
CMD ["mixbooth"]

Remember: we need Python (and PIP!) for the youtube-dl bits; this web UI literally shells out to run youtube-dl when you ask to ingest new tracks.

In fact, let's take a closer look at that ingest script we're copying in.

#!/bin/bash
set -eu

# Work in a throwaway directory keyed off this shell's PID.
mkdir -p /tmp/ytdl.$$
pushd /tmp/ytdl.$$

for url in "$@"; do
  # Grab the audio track only.
  youtube-dl -x "$url"
  for file in *; do
    # Transcode to Opus, move it into the radio library, and add it to the playlist.
    ffmpeg -i "$file" -vn -c:a libopus "$file.opus"
    mv "$file.opus" "$RADIO_ROOT/"
    echo "$RADIO_ROOT/$file.opus" >> "$RADIO_ROOT/playlist.m3u"
    # Drop the original download so the next URL's pass doesn't re-encode it.
    rm -f "$file"
  done
done

popd
rm -rf /tmp/ytdl.$$

I want to point out that instead of hard-coding the radio files mountpoint to something like /radio, I chose to rely on the $RADIO_ROOT environment variable. We'll use this in our next section, when we add the web interface container into our larger deployment.
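
That also means you can exercise the ingest helper by hand, outside of the web UI, by running the container with the same volume and environment variable. Something along these lines should work (the YouTube URL is a placeholder):

docker run --rm \
  -e RADIO_ROOT=/radio \
  -v "$PWD/radio:/radio" \
  filefrog/mixbooth:latest \
  ingest "https://www.youtube.com/watch?v=<video-id>"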

Composing the Web UI

Let's get this web interface into the mix from a Docker perspective, shall we?

Here's the Compose file we ended up with from the last post:

---
version: '3'
services:
  icecast:
    image: filefrog/icecast2:latest
    ports:
      - '8000:8000'
    environment:
      ICECAST2_PASSWORD: whatever-you-want-it-to-be
  source:
    image: filefrog/liquidsoap:latest
    command:
      - |
        output.icecast(%opus,
          host = "icecast",
          port = 8000,
          password = "whatever-you-want-it-to-be",
          mount = "pirate-radio.opus",
          playlist.safe(reload=120,"/radio/playlist.m3u"))
    volumes:
      - $PWD/radio:/radio

Let's add a new web service, using the published Docker image:

---
version: '3'
services:
  # in addition to the other services, here's a new one:
  web:
    image: filefrog/mixbooth:latest
    environment:
      RADIO_ROOT:      /radio
      MIXBOOTH_STREAM: '//{host}:8000/pirate-radio.opus'
    ports:
      - 5000:5000
    volumes:
      - $PWD/radio:/radio

This will spin up the (public) MixBooth image, and bind it on port 5000. We point to the same host path ($PWD/radio) as we used for the LiquidSoap container – we need them to both be looking at the exact same files, so that we can add new tracks for the stream source to pick up, and modify the playlist it uses.

We also added the $RADIO_ROOT environment variable, since MixBooth has no preconceived notions of where the audio files ought to go, and we set the funny-looking $MIXBOOTH_STREAM environment variable like so:

      MIXBOOTH_STREAM: '//{host}:8000/pirate-radio.opus'

This is highly-specific to what MixBooth does. The embedded player for the radio station is little more than an HTML5 <audio> element with the appropriate stream source elements. The heavy lifting is done by the browser – thankfully! However, much as it was clueless about where the audio tracks should live, it was likewise flummoxed by where, precisely, one would go to listen to those tracks.

This $MIXBOOTH_STREAM environment variable encodes that information, but it does so with some late-binding templating. More precisely, the {host} bit will be replaced, by the Javascript in the visitor's browser, with whatever hostname they used to access the web interface itself. Come in by IP? Hit the Icecast endpoint by IP. Used a domain name and TLS? Listen in secure comfort, oblivious to the numbers that underpin the very Internet.

These were both conscious design decisions made (by me) while implementing this missing piece of the puzzle. By abstracting the site- and station-specific configuration out of the code, and even out of the "configuration" (such as it is), I was able to make the deployment more cohesive and explicit. Since you can't not spell out precisely where the wiring goes, the resulting docker-compose.yml is much easier to understand.

(If you're into software engineering self-reflection, this forthrightness is aimed squarely at reducing Action At A Distance.)

Here's the final Compose file; take it for a spin and see what you think!

---
version: '3'
services:
  icecast:
    image: filefrog/icecast2:latest
    ports:
      - '8000:8000'
    environment:
      ICECAST2_PASSWORD: whatever-you-want-it-to-be
  source:
    image: filefrog/liquidsoap:latest
    command:
      - |
        output.icecast(%opus,
          host = "icecast",
          port = 8000,
          password = "whatever-you-want-it-to-be",
          mount = "pirate-radio.opus",
          playlist.safe(reload=120,"/radio/playlist.m3u"))
    volumes:
      - $PWD/radio:/radio
  web:
    image: filefrog/mixbooth:latest
    environment:
      RADIO_ROOT:      /radio
      MIXBOOTH_STREAM: '//{host}:8000/pirate-radio.opus'
    ports:
      - 5000:5000
    volumes:
      - $PWD/radio:/radio
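
Once it's up, a quick sanity check against the two published ports never hurts - both the Icecast status page and the MixBooth UI should answer:

docker-compose up -d

# Expect HTTP 200s from both services.
curl -sI http://localhost:8000/ | head -n1
curl -sI http://localhost:5000/ | head -n1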

Next time, we'll pick this whole deployment up and dump it onto the nearest convenient Kubernetes cluster – stay tuned!

The post Implementing Non-Trivial Containerized Systems – Part 4: Adding a Web Interface appeared first on Stark & Wayne.

A Decade With Cloud Foundry https://www.starkandwayne.com/blog/a-decade-with-cloud-foundry/ Fri, 02 Apr 2021 21:14:00 +0000 https://www.starkandwayne.com//a-decade-with-cloud-foundry/

Buddhism has its Four Noble Truths. Plato had his Ideals.

We've spent close to a decade running applications on top of the Cloud Foundry platform-as-a-service (PaaS), and here are a few universal truths about application development and deployment that we hope you'll bring to your next Kubernetes project.

Start Strong.

One of the great joys of working with Cloud Foundry is getting up and running quickly, by way of buildpacks. A buildpack is an impressive bit of engineering that can inspect your raw source code and build out the appropriate, containerized application execution unit, automatically. You stand on the shoulders of buildpack authors, to paraphrase the philosopher Bernard of Chartres.

The importance of having a solid foundation cannot be overstated. In Cloud Foundry, buildpacks allow developers to focus almost entirely on their application code – the problem it solves, the solutions it employs. We find that this leads to happier, more productive teams that are faster to market and quicker to respond to changing customer needs.

Kubernetes went a different way: container images. We spend a lot of time crafting and maintaining Dockerfiles and the images that they tie together, and build upon (Bernard of Chartres truly was a visionary in his own right!). While some may argue that buildpacks are inherently superior to Dockerfiles, we think this takes the short view. The best thing about Dockerfiles is that you get to choose your base image.

As an enterprise or other large organization, you absolutely must build solid base images equipped with tools and best practices. This will free up your application teams to spend more of their time improving their applications.

Standardize, Then Evangelize.

Cloud Foundry is all about standard ways of doing things. The only way to get HTTP traffic to an application is to map a route. The only way to integrate data services is through the marketplace. Applications go in spaces, which go inside of organizational namespaces. The list goes on.

There is a whole world of things that Cloud Foundry developers don't have to consider that people working with Kubernetes deal with all the time. What ingress should we use? Do we use Helm or something else? Does each application team get its own namespace? Its own cluster?

For your Kubernetes strategy to work, you have to standardize these bits of information. This effort absolutely must be done with input from the application developers who will ultimately be building applications to run on top of your proposed solution.

As important as having a standard is evangelizing that standard. For this, we often find a lighthouse team, an eager application team that is cross-functional and well-versed in the platform and the applications and frameworks your organization uses. We work with lighthouse teams, empowering them to not only solve their problems on top of the platform, but to document their processes, file issues for deficiencies and troublesome areas, blog about their processes, and generally preach the virtues of the standardized way.

When your application teams see other application teams succeeding on your platform, rate of adoption will be the least of your problems.

Be Flexible.

The great thing about standards is there are so many different ones to choose from!
– me, at several points in my career

Standardization has a dark side: rigidity. Overly-prescriptive standards oppress and stifle innovation and uptake. Brittle standards don't take into account problem spaces outside of those of immediate interest, and make no concessions for extracurricular activities. If your standard platform doesn't bend, it will break – developers will seek solutions outside of the smothering confines of the standard.

We have seen this time and time again with Cloud Foundry. To retain market share, CF has had to grow and evolve as use cases outside of the "push a Ruby/Go/Java web application" sphere are encountered in the wild.

There was a time you had to use buildpacks for every application on Cloud Foundry – then Diego learned how to accept vanilla OCI images.

At one point, the only way to integrate a data service with an application was through the marketplace and a service broker. Now we have User-Provided Services.
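
As a rough sketch of what that looks like in practice (the instance name and credentials below are made up), an externally-managed database can be wired in with a user-provided service and bound just like a marketplace offering:

# Register an existing, externally-managed database as a service instance...
cf create-user-provided-service legacy-db -p '{"uri":"postgres://user:pass@db.example.com:5432/app"}'

# ...then bind it to an app the same way you would a brokered service.
cf bind-service my-app legacy-db
cf restage my-app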

All applications are web applications, right? Nope, and now Cloud Foundry has TCP Routing services for non-HTTP workloads.

Whatever you standardize on, make sure you have enough flexibility to bend to the inexorable winds of change, or be prepared to make a new standard in a few years.

Mind The (Knowledge) Gap.

All platforms attempt to encapsulate and abstract away certain aspects of the system, so that you don't have to know how they work. The flat memory model is a cute abstraction over the incredible monstrosity that is the modern memory architecture. Programming languages are built for beautiful simple machines so that developers don't need to know how the metal in the CPU actually thinks.

PaaSes like Cloud Foundry, and runtime orchestrators like Kubernetes are no different. One of the virtues often extolled by fans of buildpacks is that they relieve the developer of having to know how their code gets deployed. OCI image adherents claim likewise, but for lower layers of the image stack.

Both camps are right, and both camps are wrong. There will always be a knowledge gap, and at some point in your career you and your team will need to cross it. The biggest decision you need to make is when you attempt that.

Buildpacks defer crossing the gap until later. At some point in the future, an application, without any changes, will stop working with the available buildpacks. This is a huge problem for Cloud Foundry teams, because it incentivizes a stagnation of infrastructure – "Please don't upgrade the Java buildpack," say the developers. "Our applications won't build on the latest version, and we don't know why."

Building OCI images forces you to cross the gap sooner. You can't help but solve deployment issues if you hope to get the application image built and shipped. Furthermore, the Dockerfile becomes a record of the steps required to run the application, which will assist future maintenance programmers and operations staff.

Decouple Your Data.

Cloud Foundry solved "the state problem" by pushing state out of the applications and into data services, which you interact with via Cloud Foundry itself (cf create-service, cf bind-service, and friends). This did two things: it greatly simplified the lives of the people who build Cloud Foundry, and it taught application developers to think of their data services as external and swappable.
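
In day-to-day terms, that interaction is just a couple of CLI calls. The offering and plan names below are placeholders - they depend on what your marketplace actually exposes:

cf create-service <offering> <plan> orders-db
cf bind-service orders-app orders-db
cf restage orders-app   # pick up the new binding and credentials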

For Cloud Foundry in particular, this has allowed very large installations to move from one instance of the CF runtime to another – whether that's a version upgrade, a distribution change (proprietary CF → Open Source CF), or an execution runtime swap (remember the move from DEA to Diego?)

With Kubernetes, it's way too easy to run your data services right next to your application workloads. In some cases this is desirable – in development, for instance, you can spin up an ephemeral PostgreSQL instance without any input from your data services team. In production, however, you're usually better off decoupling your data deployments from your application deployments.

The post A Decade With Cloud Foundry appeared first on Stark & Wayne.

Implementing Non-Trivial Containerized Systems – Part 3: Deploying Containers Together https://www.starkandwayne.com/blog/implementing-non-trivial-containerized-systems-part-3-deploying-containers-together/ Tue, 30 Mar 2021 17:07:00 +0000 https://www.starkandwayne.com//implementing-non-trivial-containerized-systems-part-3-deploying-containers-together/


This is the third part of a multi-part series on designing and building non-trivial containerized solutions. We're making a radio station using off-the-shelf components and some home-spun software, all on top of Docker, Docker Compose, and eventually, Kubernetes.

In this part, we're going to take the notes we took during the last part (you did take notes, didn't you?) and distill them down into some infrastructure-as-code. It all starts with Docker Compose.

Welcome back! If you've been following along, you should now have a directory of .opus audio files and two containers streaming audio out to anyone who connects. If you haven't been following along, go read the other parts first. Then come back.

The Dockerfiles we have made so far are eminently portable. Anyone can take those little recipes and rebuild our container images, more or less. The images themselves are also portable – we could docker push them up to DockerHub and let the world enjoy their container-y goodness.

What's not so portable is how we run and wire those containers up. That part is actually quite proprietary, dense, and easily forgotten. Let's rectify that. Let's use Docker Compose.

Docker Compose is a small-ish orchestration system for a single Docker host. It lets us specify how we want our containers run; what ports to forward, what volumes to mount, etc. It also has a few tricks up its sleeve that will make our lives better. Best of all, Docker Compose recipes can be shared with others who also like building bespoke Internet radio stations. We are legion. And we use Docker.

What's in a recipe? It starts, as most things in the container world do, with a YAML file:

$ cat docker-compose.yml---
version: '3'

We'll be using version 3 of the Compose specification. That's not terribly important right now, but there are certain concepts that won't work if we're not on a new enough version.

NOTE: If you have been following along, you've probably got containers still running from our previous experiments. Luckily, we specified the --rm flag when we ran these containers, so all we need to do to clean up is stop them:

$ docker stop icecast
$ docker stop liquid

Docker Compose deals in containers, but it does so in scalable sets of identical containers that it calls "services". For our immediate purposes, we won't be scaling these things out, so a service is effectively a container, and vice versa.

We'll start by adding our first service / container: Icecast2 itself.

---
version: '3'
services:
  icecast:
    image: filefrog/icecast2:latest
    ports:
      - '8000:8000'
    environment:
      ICECAST2_PASSWORD: whatever-you-want-it-to-be

Essentially, we're taking the docker run command and committing it to a file, in YAMLese. To get this container running, we'll use the docker-compose command:

$ docker-compose up

This starts the containers up in "foreground mode" - their output will be multiplexed onto your terminal. As we add more containers (err... services), Docker Compose will start color-coding their output to help us keep them separate.

To get your shell prompt back, terminate the foreground process with a Ctrl-C.

That was so easy, let's add our LiquidSoap service / container into the mix:

---
version: '3'
services:
  icecast:
    image: filefrog/icecast2:latest
    ports:
      - '8000:8000'
    environment:
      ICECAST2_PASSWORD: whatever-you-want-it-to-be
  source:
    image: filefrog/liquidsoap:latest
    command:
      - |
        output.icecast(%opus,
          host = "10.128.0.56",
          port = 8000,
          password = "whatever-you-want-it-to-be",
          mount = "pirate-radio.opus",
          playlist.safe(reload=120,"/radio/playlist.m3u"))
    volumes:
      - $PWD/radio:/radio

This time, we're going to give docker-compose a -d flag, so that it forks into the foreground and keeps on running, while we get our shell prompt back to get EVEN MORE WORK DONE.

$ docker-compose up -d

Since Docker Compose is just creating containers using Docker, we can use the docker CLI to do all the things we're already accustomed to. For example, to see what LiquidSoap is up to, we can check the docker logs:

$ docker logs radio_source_1
docker logs radio_source_1
2021/03/30 13:16:36 >>> LOG START
2021/03/30 13:16:36 [main:3] Liquidsoap 1.4.4
2021/03/30 13:16:36 [main:3] Using: bytes=[distributed with OCaml 4.02 or above] pcre=7.4.6 sedlex=2.3 menhirLib=20201216 dtools=0.4.1 duppy=0.8.0 cry=0.6.4 mm=0.5.0 xmlplaylist=0.1.4 lastfm=0.3.2 ogg=0.5.2 vorbis=0.7.1 opus=0.1.3 speex=0.2.1 mad=0.4.6 flac=0.1.5 flac.ogg=0.1.5 dynlink=[distributed with Ocaml] lame=0.3.4 shine=0.2.1 gstreamer=0.3.0 frei0r=0.1.1 fdkaac=0.3.2 theora=0.3.1 ffmpeg=0.4.3 bjack=0.1.5 alsa=0.2.3 ao=0.2.1 samplerate=0.1.4 taglib=0.3.6 ssl=0.5.9 magic=0.7.3 camomile=1.0.2 inotify=2.3 yojson=1.7.0 faad=0.4.0 soundtouch=0.1.8 portaudio=0.2.1 pulseaudio=0.1.3 ladspa=0.1.5 dssi=0.1.2 camlimages=4.2.6 srt.types=0.1.1 srt.stubs=0.1.1 srt.stubs=0.1.1 srt=0.1.1 lo=0.1.2 gd=1.0a5
2021/03/30 13:16:36 [gstreamer.loader:3] Loaded GStreamer 1.16.2 0
2021/03/30 13:16:36 [frame:3] Using 44100Hz audio, 25Hz video, 44100Hz master.
... etc ...

That was fairly painless. All of the things we can express in terms of a docker run command can be specified, in YAML (of course), for Docker Compose. This means we'll never get to the point of wanting to "wrap up" all the Docker stuff we've been playing with into something more serious.

There are a few things Docker Compose gives us that we don't (easily) get with pure docker run: a bit of DNS. Recall from the last post that when we told LiquidSoap where to stream to, we had to use the actual IP of the Docker host. With Docker Compose, our containers all live on the same virtual bridge, and we get free DNS for connecting them together. From the container, the name icecast will resolved to the internal bridge IP address of the icecast containers. This means we can improve our compose recipe:

---
version: '3'
services:
  icecast:
    image: filefrog/icecast2:latest
    ports:
      - '8000:8000'
    environment:
      ICECAST2_PASSWORD: whatever-you-want-it-to-be
  source:
    image: filefrog/liquidsoap:latest
    command:
      - |
        output.icecast(%opus,
          host = "icecast",
          port = 8000,
          password = "whatever-you-want-it-to-be",
          mount = "pirate-radio.opus",
          playlist.safe(reload=120,"/radio/playlist.m3u"))
    volumes:
- $PWD/radio:/radio

This final version of our recipe is completely portable; you don't have to adhere to my personal network numbering scheme to make use of this deployment asset – just download it and compose it up!

Before I let you go, we should verify that everything is still working. The Icecast web interface should still load, and you should be able to listen to those two goofballs on Rent / Buy / Build (the podcast) talk about how to source the components of your Cloud-Native platform.

Enjoy!

The post Implementing Non-Trivial Containerized Systems – Part 3: Deploying Containers Together appeared first on Stark & Wayne.

]]>