Unikernels are unfit for production

January 22, 2016

Recently, I made the mistake of rhetorically asking if I needed to spell out why unikernels are unfit for production. The response was overwhelming: whether people feel that unikernels are wrong-headed and are looking for supporting detail or are unikernel proponents and want to know what the counter-arguments could possibly be, there is clearly a desire to hear the arguments against running unikernels in production.

So, what’s the problem with unikernels? Let’s get a definition first: a unikernel is an application that runs entirely in the microprocessor’s privileged mode. (The exact nomenclature varies; on x86 this would be running at Ring 0.) That is, in a unikernel there is no application at all in a traditional sense; instead, application functionality has been pulled into the operating system kernel. (The idea that there is “no OS” serves to mislead; it is not that there isn’t an operating system but rather that the application has taken on the hardware-interfacing responsibilities of the operating system — it is “all OS”, if a crude and anemic one.) Before we discuss the challenges with this, it’s worth first exploring the motivations for unikernels — if only because they are so thin…

The primary reason to implement functionality in the operating system kernel is for performance: by avoiding a context switch across the user-kernel boundary, operations that rely upon transit across that boundary can be made faster. In the case of unikernels, these arguments are specious on their face: between the complexity of modern platform runtimes and the performance of modern microprocessors, one does not typically find that applications are limited by user-kernel context switches. And as shaky as they may be, these arguments are further undermined by the fact that unikernels very much rely on hardware virtualization to achieve any multi-tenancy whatsoever. As I have expanded on in the past, virtualizing at the hardware layer carries with it an inexorable performance tax: by having the system that can actually see the hardware (namely, the hypervisor) isolated from the system that can actually see the app (the guest operating system) efficiencies are lost with respect to hardware utilization (e.g., of DRAM, NICs, CPUs, I/O) that no amount of willpower and brute force can make up. But it’s not worth dwelling on performance too much; let’s just say that the performance arguments to be made in favor of unikernels have some well-grounded counter-arguments and move on.
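
If you doubt how small the crossing tax is, you can measure a raw user-kernel round-trip yourself. Here is a minimal sketch (the use of SYS_getpid to defeat any library-level caching is an illustrative choice, and your numbers will of course vary by machine and kernel):

    /* Time N raw system calls to estimate the cost of a user-kernel crossing. */
    #define _GNU_SOURCE
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>
    #include <sys/syscall.h>

    int
    main(void)
    {
            const long N = 1000000;
            struct timespec start, end;

            clock_gettime(CLOCK_MONOTONIC, &start);
            for (long i = 0; i < N; i++)
                    (void) syscall(SYS_getpid);    /* forced round-trip into the kernel */
            clock_gettime(CLOCK_MONOTONIC, &end);

            double ns = (end.tv_sec - start.tv_sec) * 1e9 +
                (end.tv_nsec - start.tv_nsec);
            printf("%.0f ns per syscall\n", ns / N);
            return (0);
    }

On a modern x86 part this lands in the tens to low hundreds of nanoseconds per crossing: noise next to what most applications actually spend their cycles doing.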

The other reason given by unikernel proponents is that unikernels are “more secure”, but it’s unclear what the intellectual foundation for this argument actually is. Yes, unikernels often run less software (and thus may have less attack surface) — but there is nothing about unikernels in principle that leads to less software. And yes, unikernels often run new or different software (and are therefore not vulnerable to the OpenSSL vuln-of-the-week) but this security-through-obscurity argument could be made for running any new, abstruse system. The security arguments also seem to whistle past the protection boundary that unikernels very much depend on: the protection boundary between guest OSes afforded by the underlying hypervisor. Hypervisor vulnerabilities emphatically exist; one cannot play up Linux kernel vulnerabilities as a silent menace while simultaneously dismissing hypervisor vulnerabilities as imaginary. To the contrary, by depriving application developers of the tools of a user protection boundary, the principle of least privilege is violated: any vulnerability in an application tautologically roots the unikernel. In the world of container-based deployment, this takes a thorny problem — secret management — and makes it much nastier (and with much higher stakes). At best, unikernels amount to security theater, and at worst, a security nightmare.
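
To make the least-privilege point concrete, consider the most pedestrian privilege-separation idiom in existence: a daemon that needs root only to bind its port, and sheds it before serving a single request. A minimal sketch (the “nobody” account is an illustrative assumption):

    /* The classic idiom: acquire privilege briefly, then drop it for good. */
    #include <pwd.h>
    #include <unistd.h>

    static void
    drop_privileges(void)
    {
            struct passwd *pw = getpwnam("nobody");  /* illustrative unprivileged user */

            if (pw == NULL || setgid(pw->pw_gid) != 0 || setuid(pw->pw_uid) != 0)
                    _exit(1);    /* refuse to run rather than run privileged */
            /* from here on, a compromised request handler is contained */
    }

In a unikernel there is no analogue of this: every instruction in the image already runs at full privilege, so there is nothing left to drop.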

The final reason often given by proponents of unikernels is that they are small — but again, there is nothing tautologically small about unikernels! Speaking personally, I have done kernel implementation on small kernels and big ones; you can certainly have lean systems without resorting to the equivalent of a gastric bypass with a unikernel! (I am personally a huge fan of Alpine Linux as a very lean user-land substrate for Linux apps and/or Docker containers.) And to the degree that unikernels don’t contain much code, it seems more by infancy (and, for the moment, irrelevancy) than by design. But it would be a mistake to measure the size of a unikernel only in terms of its code, and here too unikernel proponents ignore the details of the larger system: because a unikernel runs as a guest operating system, the DRAM allocated by the hypervisor for that guest is consumed in its entirety — even if the app itself isn’t making use of it. Because running out of memory remains one of the most pernicious of application failure modes (especially in dynamic environments), memory sizing tends to be overengineered in that requirements are often blindly doubled or otherwise slopped up. In the unikernel model, any such slop is lost — nothing else can use it because the hypervisor doesn’t know that it isn’t, in fact, in use. (This is in stark contrast to containers in which memory that isn’t used by applications is available to be used by other containers, or by the system itself.) So here again, the argument for unikernels becomes much more nuanced (if not rejected entirely) when the entire system is considered.
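
To put illustrative (and entirely hypothetical) numbers on it: carve a 64 GB host into 32 guests sized at 2 GB apiece, and suppose each app actually touches 1 GB. The unikernel model strands 32 × (2 GB − 1 GB) = 32 GB, fully half the machine, while under containers that same slop remains available to neighboring workloads and to the page cache.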

So those are the reasons for unikernels: perhaps performance, a little security theater, and a software crash diet. As tepid as they are, these reasons constitute the end of the good news from unikernels. Everything else from here on out is bad news: costs that must be borne to get to those advantages, however flimsy.

The disadvantages of unikernels start with the mechanics of an application itself. When the operating system boundary is obliterated, one may have eliminated the interface for an application to interact with the real world of the network or persistent storage — but one certainly hasn’t forsaken the need for such an interface! Some unikernels (like OSv and Rumprun) take the approach of implementing a “POSIX-like” interface to minimize disruption to applications. Good news: apps kinda work! Bad news: did we mention that they need to be ported? And here’s hoping that your app’s “POSIX-likeness” doesn’t extend to fusty old notions like creating a process: there are no processes in unikernels, so if your app depends on this (ubiquitous, four-decades-old) construct, you’re basically hosed. (Or worse than hosed.)
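
To appreciate how deep the process assumption runs, consider daemonization, an idiom so old and so ubiquitous that it is practically load-bearing, and one that a “POSIX-like” unikernel cannot honor because fork() presupposes a second process. A sketch of the familiar pattern:

    /* Four-decades-old daemonization: unimplementable without processes. */
    #include <stdlib.h>
    #include <unistd.h>

    static void
    daemonize(void)
    {
            pid_t pid = fork();    /* no processes means no fork() */

            if (pid < 0)
                    exit(1);
            if (pid > 0)
                    exit(0);       /* parent exits; the child carries on */
            (void) setsid();       /* child becomes its own session leader */
    }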

If this approach seems fringe, things get much further afield with language-specific unikernels like MirageOS that deeply embed a particular language runtime. On the one hand, allowing implementation only in a type-safe language allows for some of the acute reliability problems of unikernels to be circumvented. On the other hand, hope everything you need is in OCaml!

So there are some issues getting your app to work, but let’s say you’re past all this: either the POSIX surface exposed by your unikernel of choice is sufficient for your app (or platform), or it’s already written in OCaml or Erlang or Haskell or whatever. Should you have apps that can be unikernel-borne, you arrive at the most profound reason that unikernels are unfit for production — and the reason that (to me, anyway) strikes unikernels through the heart when it comes to deploying anything real in production: Unikernels are entirely undebuggable. There are no processes, so of course there is no ps, no htop, no strace — but there is also no netstat, no tcpdump, no ping! And these are just the crude, decades-old tools. There is certainly nothing modern like DTrace or MDB. From a debugging perspective, to say this is primitive understates it: this isn’t paleolithic — it is precambrian. As one who has spent my career developing production systems and the tooling to debug them, I find the implicit denial of debugging production systems to be galling, and symptomatic of a deeper malaise among unikernel proponents: total lack of operational empathy. Production problems are simply hand-waved away — services are just to be restarted when they misbehave. This attitude — even when merely implied — is infuriating to anyone who has ever been responsible for operating a system. (And lest you think I’m an outlier on this issue, listen to the applause in my DockerCon 2015 talk after I emphasized the need to debug systems rather than restart them.) And if it needs to be said, this attitude is angering because it is wrong: if a production app starts to misbehave because of a non-fatal condition like (say) listen drops, restarting the app is inducing disruption at the worst possible time (namely, when under high load) and doesn’t drive at all towards the root cause of the problem (an insufficient backlog).
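
To make the listen-drop example concrete: the drop is a kernel-level event, visible only through kernel statistics (on Linux, for instance, it surfaces in the output of netstat -s as listen queue overflows), and the root-cause fix is a one-line change where the socket is set up, not a restart. A sketch, with the backlog values being purely illustrative:

    /* An insufficient listen backlog silently drops connections under load;
     * the fix belongs at the listen() call, not in a restart script. */
    #include <sys/socket.h>

    void
    setup_listener(int fd)
    {
            /* listen(fd, 8);  too shallow: connection bursts overflow the queue */
            if (listen(fd, 1024) != 0)     /* deepened backlog absorbs bursts */
                    return;                /* error handling elided for brevity */
    }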

Now, could one implement production debugging tooling in unikernels? In a word, no: debugging tooling very often crosses the user-kernel boundary, and is most effective when leveraging the ad hoc queries that the command line provides. The organs that provide this kind of functionality have been deliberately removed from unikernels in the name of weight loss; any unikernel that provides sufficiently sophisticated debugging tooling to be used in production would be violating its own dogma. Unikernels are unfit for production not merely as implemented but as conceived: they cannot be understood when they misbehave in production — and by their own assertions, they never will be able to be.
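
Consider what a single ad hoc DTrace invocation buys you on a conventional system: dtrace -n 'syscall:::entry { @[execname] = count(); }' aggregates every system call on a live machine, by process, with nothing prepared in advance and nothing restarted. That query works precisely because it crosses the user-kernel boundary: the very boundary a unikernel has defined out of existence.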

All of this said, I do find some common ground with proponents of unikernels: I agree that the container revolution demands a much leaner, more secure and more efficient run-time than a shared Linux guest OS running on virtual hardware — and at Joyent, our focus over the past few years has been delivering exactly that with SmartOS and Triton. While we see a similar problem as unikernel proponents, our approach is fundamentally different: instead of giving up on the notion of secure containers running on a multi-tenant substrate, we took the already-secure substrate of zones and added to it the ability to natively execute Linux binaries. That is, we chose to leverage advances in operating systems rather than deny their existence, bringing to Linux and Docker not only secure on-the-metal containers, but also critical advances like ZFS, Crossbow and (yes) DTrace. This merits a final reemphasis: our focus on production systems is reflected in everything we do, but most especially in our extensive tooling for debugging production systems — and by bringing this tooling to the larger world of Linux containers, Triton has already allowed for production debugging that we never before would have thought possible!

In the fullness of time, I think that unikernels will be most productive as a negative result: they will primarily serve to demonstrate the impracticality of their approach for production systems. As such, they will join transactional memory and the M-to-N scheduling model as breathless systems software fads that fell victim to the merciless details of reality. But you needn’t take my word for it: as I intimated in my tweet, undebuggable production systems are their own punishment — just kindly inflict them upon yourself and not the rest of us!
