Software Engineer at day. Tech Storyteller at night. Helping people master Containers.
Ivan's here! It's time for the monthly roundup of all things Containers, Kubernetes, and Server-Side craft.
October was a productive month.
Let's get started!
SPONSORED What Are JWTs by Teleport - JWTs are everywhere. Hell, even Kubernetes uses them. So, it's important to understand what they are (e.g., signed) and what they are not (e.g., encrypted). Love it when sponsored content has zero marketing fluff. Do recommend for a thorough read!
My work at Slim.AI requires spending a lot of time debugging containerized workloads (love it 😍). Often, such workloads lack debugging tools because the images are built from a scratch, distroless, or slim base or minified with something like DockerSlim. I've been collecting ways to get into containers for quite a while and recently even wrote a few of them down in a blog post (How To Debug Distroless And Slim Containers).
However, I wasn't fully satisfied with my list - none of the methods I was aware of provided a good enough UX.
What I needed was a workflow as handy as docker exec and kubectl debug combined 🙈 In other words, I needed a short and fast way to run arbitrary commands inside a target container - without copying or installing the debugging tools or restarting the container with an extra volume mount.
Running a "sidecar" container (i.e., one sharing the net, uts, pid, and maybe ipc namespaces) with a special toolkit image (like kubectl debug does) was close to my requirements but didn't feel 100% right - I wanted my debugging tools to have the same root filesystem as the target container (as with docker exec).
Combining the "debug sidecar" approach with chroot-ing the debugger's entrypoint to /proc/1/root (that's where the target container's rootfs can often be found) was almost ideal. But it worked only for statically linked tools like busybox. Any additional tool would require manual installation (and might very well not work in the end because of missing shared libraries).
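The mechanics behind this sidecar-plus-chroot trick can be sketched as follows (the container name myapp is a hypothetical placeholder; the docker invocation is shown as a comment since it needs a running target):

```shell
# Every Linux process exposes its root filesystem at /proc/<pid>/root;
# for the current shell, that's simply "/":
readlink /proc/self/root

# A sidecar that joins the target's pid namespace sees the target's
# entrypoint as PID 1, so the target's rootfs sits at /proc/1/root.
# The manual version of the trick (hypothetical container "myapp"):
#
#   docker run -it --rm --pid container:myapp --network container:myapp \
#     busybox chroot /proc/1/root /bin/sh
#
# The chroot part only works if the tool being run is statically
# linked and reachable from the new root - hence the busybox caveat.
```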
The fun part here is that I've known about Nix and Nixery for quite a while... But only this month did things finally click!
So, behold: cdebug exec -it <target>:
I made a tool... to debug containers 🧙‍♂️ It's like "docker exec", but it works even for containers without a shell (scratch, distroless, slim, etc). The "cdebug exec" command allows you to bring your own toolkit and start a shell inside of a running container. A short demo 👇 pic.twitter.com/82m4vzPYJr
October 23rd 2022
The cdebug exec command mimics the docker exec UX. It starts a sidecar container (using the latest busybox image by default) that shares most of the target's namespaces and chroots the debugger's shell into the target's rootfs. It's super handy and fast, but I wouldn't have bothered automating it if it were only busybox.
The really powerful stuff starts when you pass the "--image nixery.dev/shell/..." flag to cdebug exec. The way Nix packages are packed (no pun intended) makes them extremely portable. And Nixery allows assembling container images from Nix packages on the fly. So, with very little extra effort, it's possible to get almost any tool into the debugger container, and this tool won't break because of chroot.
Here is an example of how to get vim and tshark inside your target container: cdebug exec -it --privileged --image nixery.dev/shell/vim/tshark <target>.
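To see Nixery at work outside of cdebug, you can run such an image directly - the image "name" is just a list of Nix package names (a sketch; it needs network access to nixery.dev):

```shell
# Nixery assembles this image on the fly from the "shell", "vim",
# and "tshark" Nix packages listed in the image path:
docker run -it --rm nixery.dev/shell/vim/tshark
```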
The second most frequent issue that I need to deal with while debugging containers is accessing unexposed (or exposed but not functioning) ports. No doubt, nsenter is fun, but I want to have a UX close to kubectl port-forward. So, I started working on adding the cdebug port-forward command but ended up writing an article on SSH Tunnels instead. Don't ask 😁
But I'm glad I did it!
Forwarding ports with SSH is an ancient trick, but I rely on it in my Cloud Native journey no less than I did in the good old bare-metal days.
Here is the full blog post: A Visual Guide to SSH Tunnels. It has (container-powered) labs!
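For the impatient, the three classic forms of SSH tunnels look like this (user@host and the port numbers are placeholders):

```shell
# Local port forwarding: connections to localhost:8080 on this machine
# are sent to port 80 as seen from the remote host.
ssh -L 8080:localhost:80 user@host

# Remote port forwarding: connections to port 8080 on the remote host
# are sent back to localhost:3000 on this machine.
ssh -R 8080:localhost:3000 user@host

# Dynamic port forwarding: a local SOCKS5 proxy on port 1080 that
# tunnels arbitrary destinations through the remote host.
ssh -D 1080 user@host
```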
The Real Reason Cloud IDE Adoption Is Lagging - Corey Quinn goes on a rant about cloud IDEs, and so many things there echo my own experience. I share the opinion that developer laptops should be just thin clients - it's an efficient, portable, and probably the most secure way to work. But the issue is that "cloud IDEs" made by "we're-going-to-reinvent-the-terminal" (c) startups feel way too opinionated and limiting to me. Coding on a remote VM (using Vim or VS Code with the Remote SSH extension) remains my favorite option. Thanks to Corey for easing my fear of missing out on yet another cool tech (hey, I was sooo late to containers) - I'll wait at least one more year before even thinking of giving a "cloud IDE" a shot.
From dotCloud to Docker - Did you know that the first dotCloud containers ran an SSH server inside to implement docker exec-like functionality? This fabulous read by Jérôme Petazzoni sheds quite some light on the early days of Docker, its evolution, and its design decisions. In particular, it touches upon an important topic: why Docker is a daemon and not a CLI tool. More UNIX-y approaches (like podman) sound attractive, and that's how Docker actually started (it was a CLI called dc). However, when you need to run hundreds of containers on a machine, coordination becomes really tricky. So, if you're thinking of just running a few containerized daemons, podman + systemd can be the way to go. But anything of a larger scale would require a daemon with an API.
The Secure Software Supply Chain by Kelsey Hightower - The whole talk is pretty good (Kelsey's stage performance is as great as always, even though this time I didn't fully understand how the things he was showing actually work), but the particular part where Kelsey talks about open-source contributions being rejected based solely on the citizenship of a contributor scared me to death. Yes, developers' identities must be known - we have definitely outgrown the time when critical components of our software could be built by semi-anonymous folks from the Internet. However, in this (more and more) divided world, the united OSS community of peer contributors from all over the globe has remained the last stronghold. And now it's falling...
The Future of Ops Is Platform Engineering by Charity Majors - The goal: you wrote the code - you run it (and how this goal is achieved may not really matter). Ops/DevOps might already be the past, and Platform Engineering (and Platform Teams) may be the future. But SRE seems to be becoming tangential to the Dev/Ops dichotomy, and it's here to stay - as a form of dev team reinforcement, as observability guardians, and/or as the people responsible for handling incident response and running postmortems.
Kubernetes 1.25: alpha support for running Pods with user namespaces - Great news! This is a major improvement for running secure workloads in Kubernetes: a containerized process can now run as root and keep capabilities that would normally require a privileged Pod, yet do it safely - the capabilities granted in a new user namespace do not apply in the host's initial namespaces.
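A minimal sketch of opting a Pod into a user namespace (assuming Kubernetes 1.25 with the alpha UserNamespacesStatelessPodsSupport feature gate enabled; the Pod and container names are placeholders):

```shell
# The new (alpha) field is spec.hostUsers; "false" asks Kubernetes to
# run the Pod in a fresh user namespace instead of the host's one.
cat > userns-pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: userns-demo
spec:
  hostUsers: false
  containers:
  - name: app
    image: busybox
    command: ["sleep", "infinity"]
EOF
# kubectl apply -f userns-pod.yaml
```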
Progress for unprivileged containers (LWN) - My layman's opinion is that maybe we should stop adding more namespaces and instead start investing more time in improving lightweight virtualization solutions such as AWS Firecracker. This would probably keep the Linux kernel saner and, at the same time, improve the isolation - and hence the security posture - of our workloads.
Faster Multi-Platform Builds: Dockerfile Cross-Compilation Guide - A solid read on efficient cross-platform builds and cross-compilation (these are two different things!) with a good practical part.
Exploiting Protocols for Fun - Matt Rickard again. A curious read on how different protocols are abused. Have you heard about "the filesystem over ping"? Quite an ingenious way to store data in ICMP ping packets.
QUIC Is Not a TCP Replacement - TCP is inefficient for RPC... but historically, TCP has been the only option. QUIC finally brings RPC semantics to the protocol level, and it'll be the new transport layer for HTTP. The winners of the overhaul will be all those numerous RESTful and gRPC APIs. But TCP is here to stay - downloading large chunks of data, for instance, is still much more efficient over TCP.
How does one know when to stop? Here is my recipe: when the word counter hits 1500, it's circa a thousand words too late to stop adding things to a single newsletter issue...
Anyways, I wish you all a productive week! Keep learning!
P.S. The audience of this newsletter and the blog keeps growing (as is my time investment), and it makes me think of bumping up the sponsorship game. If you're interested in becoming a (corporate) sponsor of this newsletter or my blog, hit reply or drop me a message on Twitter or LinkedIn.