test: vaultwarden microVM

Nick 2025-11-07 13:36:30 -06:00
parent e90d05f83d
commit 7ba592c0c5
43 changed files with 4005 additions and 267 deletions

1
example/doc/.gitignore vendored Executable file

@ -0,0 +1 @@
book

9
example/doc/book.toml Executable file

@ -0,0 +1,9 @@
[book]
authors = ["Astro"]
language = "en"
multilingual = false
src = "src"
title = "microvm.nix"
[output.html]
git-repository-url = "https://github.com/microvm-nix/microvm.nix"

23
example/doc/src/SUMMARY.md Executable file

@ -0,0 +1,23 @@
# Table of Contents
- [Intro](./intro.md)
- [Declaring MicroVMs](./declaring.md)
- [Configuration options](./options.md)
- [Network interfaces](./interfaces.md)
- [Shared directories](./shares.md)
- [Device pass-through](./devices.md)
- [CPU emulation](./cpu-emulation.md)
- [Output options](./output-options.md)
- [MicroVM options reference ⚙️](./microvm-options.md)
- [Running a MicroVM as a package](./packages.md)
- [Preparing a host for declarative MicroVMs](./host.md)
- [A simple network setup](./simple-network.md)
- [Advanced network setup](./advanced-network.md)
- [Routed network setup](./routed-network.md)
- [Host systemd services](./host-systemd.md)
- [Host options reference ⚙️](./host-options.md)
- [Declarative MicroVMs](./declarative.md)
- [Imperative MicroVM management](./microvm-command.md)
- [Deploy via SSH](./ssh-deploy.md)
- [Conventions](./conventions.md)
- [Frequently Asked Questions](./faq.md)

example/doc/src/advanced-network.md

@ -0,0 +1,109 @@
# Advanced network setup
Renting a server in a datacenter usually gets you one IP address. You
must not bridge your local VM traffic together with the physical
Ethernet uplink port. Instead, set up a host-internal bridge for the
Virtual Machines and provide them with Internet access through NAT,
just like your plastic ADSL router at home.
## A bridge to link TAP interfaces
Instead of placing MicroVMs directly on a LAN, one can also use a TAP
interface to get a virtual Ethernet interface on the host. Although it
is possible to [assign an individual IP
configuration](./routed-network.md) to each of these interfaces, let
us avoid the additional configuration effort and create a bridge
instead:
```nix
systemd.network.netdevs."10-microvm".netdevConfig = {
  Kind = "bridge";
  Name = "microvm";
};
systemd.network.networks."10-microvm" = {
  matchConfig.Name = "microvm";
  networkConfig = {
    DHCPServer = true;
    IPv6SendRA = true;
  };
  addresses = [ {
    addressConfig.Address = "10.0.0.1/24";
  } {
    addressConfig.Address = "fd12:3456:789a::1/64";
  } ];
  ipv6Prefixes = [ {
    ipv6PrefixConfig.Prefix = "fd12:3456:789a::/64";
  } ];
};

# Allow inbound traffic for the DHCP server
networking.firewall.allowedUDPPorts = [ 67 ];
```
This configuration will hand out IP addresses to clients on the
bridge. In practice, you are better off leaving out the DHCP server
and its state by opting for declarative, versioned configuration
instead.
Finally, attach the MicroVMs' TAP interfaces to this central bridge.
Make sure your `matchConfig` matches only the interfaces you want!
```nix
systemd.network.networks."11-microvm" = {
  matchConfig.Name = "vm-*";
  # Attach to the bridge that was configured above
  networkConfig.Bridge = "microvm";
};
```
## Provide Internet Access with NAT
IPv4 addresses are exhausted. It is very common to get just one public
IPv4 address for your machine. The solution is to route your internal
virtual machines with *Network Address Translation* (NAT).
You might not get a dedicated /64 IPv6 prefix to route to your
MicroVMs. NAT works for this address family, too!
```nix
networking.nat = {
  enable = true;
  # NAT66 exists and works. But if you have a proper subnet in
  # 2000::/3 you should route that and remove this setting:
  enableIPv6 = true;
  # Change this to the interface with upstream Internet access
  externalInterface = "eth0";
  # The bridge where you want to provide Internet access
  internalInterfaces = [ "microvm" ];
};
```
Check out
[`networking.nat.forwardPorts`](https://search.nixos.org/options?channel=unstable&show=networking.nat.forwardPorts&query=networking.nat.forwardPorts)
to make your MicroVM's services available to networks outside your
host!
## Port forwarding
Isolating your public Internet services is a great use-case for
virtualization. But how does traffic get to you when your MicroVMs
have private IP addresses behind NAT?
NixOS has got you covered with the `networking.nat.forwardPorts`
option! This example forwards TCP ports 80 (HTTP) and 443 (HTTPS) to
other hosts:
```nix
networking.nat = {
  enable = true;
  forwardPorts = [ {
    proto = "tcp";
    sourcePort = 80;
    destination = my-addresses.http-reverse-proxy.ip4;
  } {
    proto = "tcp";
    sourcePort = 443;
    destination = my-addresses.https-reverse-proxy.ip4;
  } ];
};
```

35
example/doc/src/conventions.md Executable file

@ -0,0 +1,35 @@
# Conventions between MicroVM packages and the host
This section describes the interface that is used to run MicroVM
packages with the flake's `host` module. While the **microvm.nix**
flake was designed for single-server usage, you can build different
MicroVM deployments using the information on this page.
| `nixosModule.microvm` option | MicroVM package file | `nixosModules.host` systemd service | Description |
|------------------------------|----------------------------------------|-------------------------------------|-----------------------------------------------------------------------------------------------|
| `microvm.hypervisor` | `bin/microvm-run` | `microvm@.service` | Start script for the main MicroVM process |
| `microvm.hypervisor` | `bin/microvm-shutdown` | `microvm@.service` | Script for graceful shutdown of the MicroVM (i.e. triggering the power button) |
| `microvm.interfaces.*.id` | `share/microvm/tap-interfaces` | `microvm-tap-interfaces@.service` | Names of the tap network interfaces to setup for the proper user |
| `microvm.devices.*.path` | `share/microvm/pci-devices` | `microvm-pci-devices@.service` | PCI devices that must be bound to the **vfio-pci** driver on the host |
| `microvm.shares.*.source` | `share/microvm/virtiofs/${tag}/source` | `microvm-virtiofsd@.service` | Source directory of a **virtiofs** instance by tag |
| `microvm.shares.*.socket` | `share/microvm/virtiofs/${tag}/socket` | `microvm-virtiofsd@.service` | **virtiofsd** socket path by tag |
| `microvm.systemSymlink` | `share/microvm/system` | | `config.system.build.toplevel` symlink, used for comparing versions when running `microvm -l` |
## Generating custom operating system hypervisor packages
Because a microvm.nix runner package completely defines how to run the
Hypervisor, it is possible to define independent packages that
virtualize operating systems other than NixOS.
- Your NixOS configurations should export their runner package as
`config.microvm.declaredRunner` so that it can be picked up either
as [declarative MicroVMs](declarative.md) or by [the microvm
command](microvm-command.md).
- The runner package must have a file layout as described in the table
above.
[microvm-solo5-spt](https://github.com/microvm-nix/microvm-solo5-spt) is an
example of a Flake that can run on a microvm.nix host.
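To see this file layout for yourself, you can build a runner and
inspect it. The commands below are only a sketch: the flake attribute
is an example, and the actual contents depend on the configured
hypervisor, interfaces, and shares.
```bash
# Build a runner package and look at its layout (paths as in the table
# above; which files exist depends on the MicroVM's configuration)
nix build .#nixosConfigurations.my-microvm.config.microvm.declaredRunner
ls result/bin                           # microvm-run, microvm-shutdown
cat result/share/microvm/tap-interfaces
readlink result/share/microvm/system
```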

example/doc/src/cpu-emulation.md

@ -0,0 +1,71 @@
# CPU emulation
It's possible to emulate a CPU if desired. This feature is only
supported by the qemu hypervisor.
**Note:** this feature has a significant performance impact.
## Defining an emulated NixOS system
Call `nixpkgs.lib.nixosSystem` with the following key settings:
- Set the `system` attribute to the host system.
- Add a module that sets `nixpkgs.crossSystem.config` to the guest
system. This lets `microvm.nix` know that it's a cross-system
environment.
- Set `microvm.hypervisor` to `qemu`, given this is the only
hypervisor that supports this feature.
- Set `microvm.cpu` to the desired emulated CPU. You can find a [list
of the available systems
here](https://www.qemu.org/docs/master/system/targets.html).
```nix
# Example flake.nix
{
  inputs = {
    nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
    microvm = {
      url = "github:microvm-nix/microvm.nix";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };

  outputs = { self, nixpkgs, microvm }: {
    nixosConfigurations.emulated-dev = nixpkgs.lib.nixosSystem {
      # host system
      system = "x86_64-linux";
      modules = let
        guestSystem = "aarch64-unknown-linux-gnu";
        # you can use packages in the guest machine with cross system configuration
        pkgs = import nixpkgs {
          system = "x86_64-linux";
          crossSystem.config = guestSystem;
        };
      in [
        microvm.nixosModules.microvm
        {
          nixpkgs.crossSystem.config = guestSystem;
          microvm = {
            # you can choose what CPU will be emulated by qemu
            cpu = "cortex-a53";
            hypervisor = "qemu";
          };
          environment.systemPackages = with pkgs; [ cowsay htop ];
          services.getty.autologinUser = "root";
          system.stateVersion = "24.11";
        }
      ];
    };
  };
}
```
You can run the example with `nix run
.#nixosConfigurations.emulated-dev.config.microvm.declaredRunner`.
As shown in this example, you can use system packages on the guest
system by using nixpkgs with a proper `crossSystem` configuration.

73
example/doc/src/declarative.md Executable file

@ -0,0 +1,73 @@
# Declarative MicroVMs
Provided your NixOS host [includes the host nixosModule](./host.md),
options are available to build MicroVMs together with the host.
You can choose whether your MicroVMs should be managed in a fully
declarative way, or whether you only want the initial deployment to be
declarative (with subsequent imperative updates using the
[microvm command](./microvm-command.md)).

microvm.nix distinguishes between fully-declarative configurations
and declarative deployment by allowing you to specify either
a full `config` or just a `flake`, respectively.
## Fully declarative
You can create fully declarative VMs by defining their NixOS system
configuration directly in place. This is very similar to how
nixos-containers work, if you are familiar with those.
```nix
# microvm refers to microvm.nixosModules
{ microvm, ... }: {
  imports = [ microvm.host ];
  microvm.vms = {
    my-microvm = {
      # The package set to use for the microvm. This also determines the microvm's architecture.
      # Defaults to the host system's package set if not given.
      pkgs = import nixpkgs { system = "x86_64-linux"; };
      # (Optional) A set of special arguments to be passed to the MicroVM's NixOS modules.
      #specialArgs = {};
      # The configuration for the MicroVM.
      # Multiple definitions will be merged as expected.
      config = {
        # It is highly recommended to share the host's nix-store
        # with the VMs to prevent building huge images.
        microvm.shares = [{
          source = "/nix/store";
          mountPoint = "/nix/.ro-store";
          tag = "ro-store";
          proto = "virtiofs";
        }];
        # Any other configuration for your MicroVM
        # [...]
      };
    };
  };
}
```
## Declarative deployment
Why *deployed*? The per-MicroVM subdirectory under `/var/lib/microvms`
is only created if it did not exist before. This behavior is intended
to ensure the existence of MicroVMs that are critical to operation. To
update them later, you will have to use the [imperative microvm
command](./microvm-command.md).
```nix
microvm.vms = {
  my-microvm = {
    # Host build-time reference to where the MicroVM NixOS is defined
    # under nixosConfigurations
    flake = self;
    # Specify from where to let `microvm -u` update later on
    updateFlake = "git+file:///etc/nixos";
  };
};
```
Note that building MicroVMs with the host increases build time and
closure size of the host's system.

40
example/doc/src/declaring.md Executable file

@ -0,0 +1,40 @@
# Declaring NixOS MicroVMs
![Demo](demo.gif)
microvm.nix creates virtual machine disk images and runner script
packages for the entries of the `nixosConfigurations` section of a
`flake.nix` file.
## The `microvm` module
To add MicroVM functionality, a NixOS system configuration is
augmented by importing this flake's `nixosModules.microvm`:
```nix
# Example flake.nix
{
  inputs.nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
  inputs.microvm.url = "github:microvm-nix/microvm.nix";
  inputs.microvm.inputs.nixpkgs.follows = "nixpkgs";

  outputs = { self, nixpkgs, microvm }: {
    # Example nixosConfigurations entry
    nixosConfigurations.my-microvm = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        # Include the microvm module
        microvm.nixosModules.microvm
        # Add more modules here
        {
          networking.hostName = "my-microvm";
          microvm.hypervisor = "cloud-hypervisor";
        }
      ];
    };
  };
}
```
To get you started quickly, a Flake template is included. Run `nix
flake init -t github:microvm-nix/microvm.nix` in a new project directory.

BIN
example/doc/src/demo.gif Executable file

Binary file not shown.


53
example/doc/src/devices.md Executable file

@ -0,0 +1,53 @@
# Device pass-through
PCI and USB pass-through is supported on some hypervisors. Permission
setup is automatic for declared `"pci"` devices, but manual for
`"usb"` devices.
## Example PCI pass-through
Guest example:
```nix
microvm.devices = [ {
  bus = "pci";
  path = "0000:06:00.1";
} {
  bus = "pci";
  path = "0000:06:10.4";
} ];
```
Permission setup on the host is provided by systemd template unit
`microvm-pci-devices@.service`.
## Example USB pass-through
### In the guest
```nix
microvm.devices = [
  # RTL2838UHIDIR
  # Realtek Semiconductor Corp. RTL2838 DVB-T
  { bus = "usb"; path = "vendorid=0x0bda,productid=0x2838"; }
  # Sonoff Zigbee 3.0 USB Dongle Plus
  # Silicon Labs CP210x UART Bridge
  { bus = "usb"; path = "vendorid=0x10c4,productid=0xea60"; }
];
```
### On the host
USB device paths are not directly translatable to udev rules. Set up
the permissions yourself:
```nix
services.udev.extraRules = ''
  # RTL2838UHIDIR
  # Realtek Semiconductor Corp. RTL2838 DVB-T
  SUBSYSTEM=="usb", ATTR{idVendor}=="0bda", ATTR{idProduct}=="2838", GROUP="kvm"
  # Sonoff Zigbee 3.0 USB Dongle Plus
  # Silicon Labs CP210x UART Bridge
  SUBSYSTEM=="usb", ATTR{idVendor}=="10c4", ATTR{idProduct}=="ea60", GROUP="kvm"
'';
```

153
example/doc/src/faq.md Executable file

@ -0,0 +1,153 @@
# Frequently Asked Questions
## Are there elaborate example setups?
microvm.nix is used in these open-source infrastructure projects:
- [C3D2 services](https://gitea.c3d2.de/c3d2/nix-config)
- [DD-IX services](https://github.com/dd-ix/nix-config)
Let us know if you know more!
## Can I support the development and maintenance of this project?
[Sponsor](https://github.com/sponsors/astro)
## How to centralize logging with journald?
That is possible without even requiring a network transport by just
making the journals available to the host as a share. Because journald
identifies hosts by their `/etc/machine-id`, we propose to use static
content for that file. Add a NixOS module like the following to your
MicroVM configuration:
```nix
environment.etc."machine-id" = {
  mode = "0644";
  text =
    # change this to suit your flake's interface
    self.lib.addresses.machineId.${config.networking.hostName} + "\n";
};
microvm.shares = [ {
  # On the host
  source = "/var/lib/microvms/${config.networking.hostName}/journal";
  # In the MicroVM
  mountPoint = "/var/log/journal";
  tag = "journal";
  proto = "virtiofs";
  socket = "journal.sock";
} ];
```
Finally, make the MicroVM journals available to your host. The
`machine-id` of each MicroVM must be known:
```nix
systemd.tmpfiles.rules = map (vmHost:
  let
    machineId = self.lib.addresses.machineId.${vmHost};
  in
  # creates a symlink of each MicroVM's journal under the host's /var/log/journal
  "L+ /var/log/journal/${machineId} - - - - /var/lib/microvms/${vmHost}/journal/${machineId}"
) (builtins.attrNames self.lib.addresses.machineId);
```
Once your MicroVM's journal data is visible in the
`/var/log/journal/$machineId/` directories, `journalctl` can pick it
up using the `-m`/`--merge` switch.
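With the journals merged, a single MicroVM can then be followed by
matching on its hostname (the name below is illustrative):
```bash
# Follow one MicroVM's journal from the host
journalctl --merge -f _HOSTNAME=my-microvm
```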
## Can I build with hypervisors from the host's nixpkgs instead of the MicroVM's?
Yes. This scenario is enabled through the flake's `lib.buildRunner`
function. See the [`nix run
microvm#build-microvm`](https://github.com/microvm-nix/microvm.nix/blob/main/pkgs/build-microvm.nix)
script that you will need to customize to fit your deployment scenario.
## How can I deploy imperatively from Continuous Integration?
Do this by integrating into your automation what the `microvm` command
does.
```nix
environment.systemPackages = [ (
  # Provide a manual updating script that fetches the latest
  # updated+built system from Hydra
  pkgs.writeShellScriptBin "update-microvm" ''
    if [ $# -lt 1 ]; then
      NAMES="$(ls -1 /var/lib/microvms)"
    else
      NAMES="$@"
    fi

    for NAME in $NAMES; do
      echo MicroVM $NAME
      cd /var/lib/microvms/$NAME
      # Is this truly the flake that is being built on Hydra?
      if [ "$(cat flake)" = "git+https://gitea.example.org/org/nix-config?ref=flake-update" ]; then
        NEW=$(curl -sLH "Accept: application/json" https://hydra.example.org/job/org/nix-config/$NAME/latest | ${pkgs.jq}/bin/jq -er .buildoutputs.out.path)
        nix copy --from https://nix-cache.example.org $NEW

        if [ -e booted ]; then
          nix store diff-closures $(readlink booted) $NEW
        elif [ -e current ]; then
          echo "NOT BOOTED! Diffing to old current:"
          nix store diff-closures $(readlink current) $NEW
        else
          echo "NOT BOOTED?"
        fi

        CHANGED=no
        if ! [ -e current ]; then
          ln -s $NEW current
          CHANGED=yes
        elif [ "$(readlink current)" != $NEW ]; then
          rm -f old
          cp --no-dereference current old
          rm -f current
          ln -s $NEW current
          CHANGED=yes
        fi
      fi

      if [ "$CHANGED" = "yes" ]; then
        systemctl restart microvm@$NAME
      fi
      echo
    done
  ''
) ];
```
## Can I include my host's `<nixpkgs>` channel when building the VM?
Use the following configuration if you build your MicroVM with
`--impure` from channels, not Flakes:
```nix
nix.nixPath = [
  "nixpkgs=${builtins.storePath <nixpkgs>}"
];
```
## How do I let the `microvm` user access block devices?
You can re-add the following line, which was removed from microvm.nix,
to your host's NixOS configuration:
```nix
users.users.microvm.extraGroups = [ "disk" ];
```
The more secure solution would be writing custom
`services.udev.extraRules` that assign ownership/permissions to the
individually used block devices.
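As a sketch of that approach (the device name and group are
illustrative; adjust them to the block devices your MicroVMs actually
use):
```nix
services.udev.extraRules = ''
  # Example: grant the kvm group access to one specific block device
  SUBSYSTEM=="block", KERNEL=="sdb1", GROUP="kvm", MODE="0660"
'';
```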
## My virtiofs-shared sops-nix /run/secrets disappears when the host is updated!
A workaround may be setting `sops.keepGenerations = 0;`, effectively
stopping sops-nix from ever removing old generations in
`/run/secrets.d/`.
That means that you still must reboot all MicroVMs to pick up any
updated secrets.

47
example/doc/src/host-systemd.md Executable file

@ -0,0 +1,47 @@
# systemd services on a MicroVM host
The `host` nixosModule provides a few systemd services for additional
bringup that is not performed when running a MicroVM interactively
from a package.
## `install-microvm-${name}.service`
Creates and prepares a subdirectory under `/var/lib/microvms` for
[declarative MicroVMs](./declarative.md) according to the
`microvm.vms` option.
If the MicroVM subdirectory under `/var/lib/microvms` already exists,
**and** the MicroVM is configured to be built from a flake's
`nixosConfigurations`, this systemd unit will be skipped. The reason
for this behavior is that it is easier to update with the [`microvm`
command](./microvm-command.md) instead of restarting all virtual
machines on a host when doing `nixos-rebuild switch`.
## `microvm-tap-interfaces@.service`
Creates TAP virtual network interfaces for the user that will run MicroVMs.
## `microvm-macvtap-interfaces@.service`
Creates MACVTAP virtual network interfaces for the user that will run MicroVMs.
## `microvm-pci-devices@.service`
Prepares PCI devices for passthrough
([VFIO](https://www.kernel.org/doc/html/latest/driver-api/vfio.html)).
## `microvm-virtiofsd@.service`
Starts a fleet of virtiofsd servers, one for each `virtiofs`
mountpoint in `microvm.shares`.
## `microvm@.service`
Runs the actual MicroVM through
`/var/lib/microvms/%i/current/bin/microvm-run` where `%i` is the
MicroVM name.
## `microvms.target`
Depends on a `microvm@.service` instance for each MicroVM configured
in `microvm.autostart`.
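These units can be inspected with the usual systemd tooling, for
example (the VM name is illustrative):
```bash
# Check one MicroVM's service and the autostart target
systemctl status microvm@my-microvm.service
systemctl list-dependencies microvms.target
journalctl -u microvm@my-microvm.service -f
```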

56
example/doc/src/host.md Executable file

@ -0,0 +1,56 @@
# Preparing a NixOS host for declarative MicroVMs
**microvm.nix** adds the following configuration for servers to
host MicroVMs reliably:
- a `/var/lib/microvms` state directory with one subdirectory per MicroVM
- systemd services `microvm-tap-interfaces@` to setup TAP network interfaces
- systemd services `microvm-virtiofsd@` to start virtiofsd instances
- systemd services `microvm@` to start a MicroVM
- configuration options to [declaratively build MicroVMs with the host
system](./declarative.md)
- tools to [manage MicroVMs imperatively](./microvm-command.md)
Prepare your host by including the microvm.nix `host` nixosModule:
```nix
# Your server's flake.nix
{
  inputs.nixpkgs.url = "github:nixos/nixpkgs/nixos-unstable";
  inputs.microvm.url = "github:microvm-nix/microvm.nix";
  inputs.microvm.inputs.nixpkgs.follows = "nixpkgs";

  outputs = { self, nixpkgs, microvm }: {
    # Example nixosConfigurations entry
    nixosConfigurations.server1 = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        # Include the microvm host module
        microvm.nixosModules.host
        # Add more modules here
        {
          networking.hostName = "server1";
          # try to automatically start these MicroVMs on bootup
          microvm.autostart = [
            "my-microvm"
            "your-microvm"
            "their-microvm"
          ];
        }
      ];
    };
  };
}
```
## Preparing a non-Flakes host
If you really cannot migrate to Flakes easily, just import the `host`
module directly in your NixOS configuration:
```nix
imports = [ (builtins.fetchGit {
  url = "https://github.com/microvm-nix/microvm.nix";
} + "/nixos-modules/host") ];
```

89
example/doc/src/interfaces.md Executable file

@ -0,0 +1,89 @@
# Network interfaces
Declare a MicroVM's virtual network interfaces like this in its NixOS
configuration:
```nix
{
  microvm.interfaces = [ {
    type = "tap";
    # interface name on the host
    id = "vm-a1";
    # Ethernet address of the MicroVM's interface, not the host's
    #
    # Locally administered addresses have one of 2/6/A/E in the second nibble.
    mac = "02:00:00:00:00:01";
  } ];
}
```
## `type = "user"`
User-mode networking is provided only by qemu and kvmtool. It gives
your MicroVM outgoing connectivity without any further setup.
As kvmtool seems to lack a built-in DHCP server, additional static IP
configuration is necessary inside the MicroVM.
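A minimal sketch of such a static guest configuration with
systemd-networkd; the addresses shown are QEMU's conventional
user-mode defaults and merely illustrative (kvmtool uses a different
subnet):
```nix
# Static guest addressing for user-mode networking (values are
# examples; 10.0.2.x reflects QEMU's usual user-mode defaults)
systemd.network.enable = true;
systemd.network.networks."20-user" = {
  matchConfig.Type = "ether";
  networkConfig = {
    Address = [ "10.0.2.15/24" ];
    Gateway = "10.0.2.2";
    DNS = [ "10.0.2.3" ];
  };
};
```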
## `type = "tap"`
Use a virtual tuntap Ethernet interface. Its name is the value of
`id`.
Some Hypervisors may be able to automatically create these interfaces
when running as root, which we advise against. Instead, create the
interfaces before starting a MicroVM:
```bash
sudo ip tuntap add $IFACE_NAME mode tap user $USER
```
**Note:** add `multi_queue` to this command line if the VM is configured
with more than one CPU core.
When running MicroVMs through the `host` module, the tap network
interfaces are created through a systemd service dependency.
Extend the generated script in the guest configuration like this:
```nix
microvm.binScripts.tap-up = lib.mkAfter ''
  ${lib.getExe' pkgs.iproute2 "ip"} link set dev 'vm-ixp-as11201p' master 'ixp-peering'
'';
```
## `type = "macvtap"`
*MACVTAP* interfaces attach to a host's physical network interface,
joining the same Ethernet segment with a separate MAC address.
Before running a MicroVM interactively from a package, do the
following steps manually:
```bash
# Parent interface:
LINK=eth0
# MACVTAP interface, as specified under microvm.interfaces.*.id:
ID=microvm1
# Create the interface
sudo ip l add link $LINK name $ID type macvtap mode bridge
# Obtain the interface index number
IFINDEX=$(cat /sys/class/net/$ID/ifindex)
# Grant yourself permission
sudo chown $USER /dev/tap$IFINDEX
```
When running MicroVMs through the `host` module, the macvtap network
interfaces are created through a systemd service dependency. For each
interface with `type = "macvtap"`, a `link` attribute with the parent
interface and a `mode` attribute for the MACVTAP filtering mode must
be specified.
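A sketch of such an interface declaration; placing `link` and `mode`
under a `macvtap` attribute set is an assumption here, so check the
MicroVM options reference for the exact location:
```nix
microvm.interfaces = [ {
  type = "macvtap";
  id = "microvm1";
  mac = "02:00:00:00:00:02";
  # Assumed attribute placement; see the options reference
  macvtap = {
    link = "eth0";   # parent interface on the host
    mode = "bridge"; # MACVTAP filtering mode
  };
} ];
```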
## `type = "bridge"`
This mode lets qemu create a tap interface and attach it to a bridge.
The `qemu-bridge-helper` binary needs to be set up with the proper
permissions. See the `host` module for that. qemu will be run
*without* `-sandbox on` in order for this contraption to work.

50
example/doc/src/intro.md Executable file

@ -0,0 +1,50 @@
# Intro
**microvm.nix** is a Flake to run lightweight NixOS virtual machines
on NixOS. Starting with the reasons why in the remainder of this
chapter, this handbook guides you through provisioning MicroVMs on
your NixOS machine.
## Compartmentalization
NixOS makes running services a breeze. Being able to quickly roll back
configuration is a life-saver. Not so much, however, on systems that
are shared by multiple services, where maintenance of one affects the
others. Increase stability by partitioning services into virtual NixOS
systems that can be updated individually.

**microvm.nix** can isolate the /nix/store to exactly what is required
for the guest's NixOS: the root filesystem is a read-only
erofs/squashfs filesystem that includes only the binaries of your
configuration. Of course, that holds true only until you mount the
host's /nix/store as a share for faster build times, or mount the
store with a writable overlay for Nix builds inside the VM.
## The Case Against Containers
Linux containers are not a single technology but a plethora of kernel
features that isolate various system resources so that the contained
processes appear to run on a system of their own. It is still one
shared Linux kernel with a huge attack surface.
Virtual machines on the other hand run their own OS kernel, reducing
the attack surface to the hypervisor and its device drivers. The
resource usage however incurs some overhead when compared with
containers, with memory allocation being especially inflexible.
**microvm.nix** is a tool that helps you build the guest's OS and run
it in ways that are easier than writing a `Dockerfile`, once you know
how to put a NixOS config into a `flake.nix` file.
## Just Virtual Machines?
Full virtualization has been available for a long time with QEMU and
VirtualBox. The *MicroVM* machine type highlights that virtualization
overhead has been reduced a lot by replacing emulated devices with
*virtio* interfaces that have been optimized for this environment.
This Flake lets you run your MicroVMs not only on QEMU but also on
other Hypervisors that have been explicitly authored for
*virtio*. Some of them are written in Rust, a programming language
that is renowned for being safer than C.

example/doc/src/microvm-command.md

@ -0,0 +1,80 @@
# Imperative MicroVM management with the `microvm` command
Compartmentalizing services in an infrastructure landscape allows us to
conduct maintenance individually and without affecting unrelated
MicroVMs. The `microvm` command helps with that.
## Create a MicroVM
You can specify this MicroVM's source flake with `-f`. If omitted, the
tool will assume `git+file:///etc/nixos`. The source flakeref will be
kept in `/var/lib/microvms/*/flake` for updating the MicroVM later.
```bash
microvm -f git+https://... -c my-microvm
```
### Enabling MicroVM autostart
Extension of the host's systemd units must happen declaratively in the
host's NixOS configuration:
```nix
microvm.autostart = [
  "myvm1"
  "myvm2"
  "myvm3"
];
```
## Update a MicroVM
*Updating* does not refresh your packages but simply rebuilds the
MicroVM. Use `nix flake update` to get new package versions.
```bash
microvm -u my-microvm
```
Until ways have been found to safely transfer the profile into the
target /nix/store, and subsequently activate it, you must restart the
MicroVM for the update to take effect.
Use the `-R` flag to automatically restart if an update was built.
## List MicroVMs
Listing your MicroVMs is as trivial as `ls -1 /var/lib/microvms`.
For more insight, the following command will read the current system
version of all MicroVMs and compare it to what the corresponding flake
evaluates to. It is therefore quite slow to run, yet very useful for
an overview of which VMs can be updated.
```bash
microvm -l
```
If you want a faster overview of booted and current versions, run
this instead:
```bash
ls -l /var/lib/microvms/*/{current,booted}/share/microvm/system
```
## Removing MicroVMs
First, stop the MicroVM:
```bash
systemctl stop microvm@$NAME
```
If you don't use absolute filesystem paths for sockets, volumes, or
shares, all MicroVM state is kept under `/var/lib/microvms/$NAME/`.
The `microvm@.service` systemd service template depends on the
existence of this directory.
```bash
rm -rf /var/lib/microvms/$NAME
```

24
example/doc/src/options.md Executable file

@ -0,0 +1,24 @@
# Configuration options
By including the `microvm` module a set of NixOS options is made
available for customization. These are the most important ones:
| Option | Purpose |
|--------------------------------|-----------------------------------------------------------------------------------------------------|
| `microvm.hypervisor` | Hypervisor to use by default in `microvm.declaredRunner` |
| `microvm.vcpu` | Number of Virtual CPU cores |
| `microvm.mem` | RAM allocation in MB |
| `microvm.interfaces` | Network interfaces |
| `microvm.volumes` | Block device images |
| `microvm.shares` | Shared filesystem directories |
| `microvm.devices` | PCI/USB devices for host-to-vm passthrough |
| `microvm.socket` | Control socket for the Hypervisor so that a MicroVM can be shut down cleanly |
| `microvm.user` | (qemu only) User account which Qemu will switch to when started as root |
| `microvm.forwardPorts` | (qemu user-networking only) TCP/UDP port forwarding |
| `microvm.kernelParams` | Like `boot.kernelParams` but will not end up in `system.build.toplevel`, saving you rebuilds |
| `microvm.storeOnDisk` | Enables the store on the boot squashfs even in the presence of a share with the host's `/nix/store` |
| `microvm.writableStoreOverlay` | Optional string of the path where all writes to `/nix/store` should go to. |
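
For orientation, here is a small sketch combining several of these
options; all values are illustrative and the field names follow the
examples elsewhere in this handbook:
```nix
microvm = {
  hypervisor = "cloud-hypervisor";
  vcpu = 2;
  mem = 2048; # MB
  interfaces = [ {
    type = "tap";
    id = "vm-example";
    mac = "02:00:00:00:00:01";
  } ];
  shares = [ {
    tag = "ro-store";
    source = "/nix/store";
    mountPoint = "/nix/.ro-store";
  } ];
  volumes = [ {
    image = "persist.img";
    mountPoint = "/var";
    size = 1024;
  } ];
};
```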
See [the options declarations](
https://github.com/microvm-nix/microvm.nix/blob/main/nixos-modules/microvm/options.nix)
for a full reference.

example/doc/src/output-options.md

@ -0,0 +1,35 @@
# MicroVM output options
Hypervisor runners are provided in the `config` generated by a
nixosSystem for you to use inside and outside your configuration.
| Option | Purpose |
|--------------------------|-----------------------------------------------------------|
| `microvm.declaredRunner` | Runner package selected according to `microvm.hypervisor` |
| `microvm.runners` | Attribute set of runner packages per known Hypervisor. |
The `microvm.declaredRunner` selects the hypervisor according to the
configured `microvm.hypervisor`.
```bash
nix run .#nixosConfigurations.my-microvm.config.microvm.declaredRunner
```
The `microvm.runners` option provides a runner for each known
Hypervisor regardless of the `microvm.hypervisor` config setting. To
build *my-microvm* for Firecracker for example:
```bash
nix run .#nixosConfigurations.my-microvm.config.microvm.runners.firecracker
```
## Configure `microvm.hypervisor`, use `microvm.declaredRunner`!
One of the `microvm.runners` is picked by `microvm.declaredRunner` by
evaluating `microvm.hypervisor`.
You may switch the Hypervisor quickly, but use `declaredRunner` in
production. Any other NixOS configuration that evaluates the
`microvm.hypervisor` option can be wrong when you pick from
`microvm.runners` directly. One example would be the defaults set by
`microvm.optimize`.

25
example/doc/src/packages.md Executable file

@ -0,0 +1,25 @@
# Running a MicroVM as a package
Quickly running a MicroVM interactively is great for testing. You get
to interact with its console.
There are drawbacks: no preparation for TAP network interfaces is done
and no virtiofsd is started. These can be worked around by relying on
9p shares and qemu's `user` network interfaces.
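A sketch of a guest configuration that runs well straight from a
package under these constraints (all values are illustrative):
```nix
# User-mode networking and a 9p store share need neither TAP setup
# nor virtiofsd on the host
microvm.interfaces = [ {
  type = "user";
  id = "qemu";
  mac = "02:00:00:00:00:01";
} ];
microvm.shares = [ {
  proto = "9p";
  tag = "ro-store";
  source = "/nix/store";
  mountPoint = "/nix/.ro-store";
} ];
```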
## Immediately running a nixosConfiguration
To run a `nixosConfiguration` off your Flake directly use:
```bash
nix run .#nixosConfigurations.my-microvm.config.microvm.declaredRunner
```
## Add a runner package to your Flake
To add this runner permanently add a package like this to the outputs
of your `flake.nix`:
```nix
packages.x86_64-linux.my-microvm = self.nixosConfigurations.my-microvm.config.microvm.declaredRunner;
```
You can then run the MicroVM with a simple `nix run .#my-microvm`.

153
example/doc/src/routed-network.md Executable file

@ -0,0 +1,153 @@
# Routed network setup
## Motivation
In bridged setups the Virtual Machines share the same Ethernet
segment. A compromised VM still has raw network access, allowing it to
send a lot of funny packets that cause problems for other
VMs. Examples:
- Forging MAC addresses
- Running rogue DHCP servers
- ARP/NDP spoofing
- Meddling with link-local multicast
This can be avoided by unsharing the Ethernet segments, i.e. by
removing the bridge.
## Addressing
Compared to one Ethernet segment where we assign a large subnet like
`10.0.0.0/24`, we will now only deal with *Host Routes* where the
prefix length is `/32` for IPv4 and `/128` for IPv6. Note that this
way we no longer lose precious space to a subnet's network and
broadcast addresses.
## Host configuration
Using systemd-networkd, a VM's tap interface is configured with static
addresses and the corresponding host routes. We do this for up to
`maxVMs`. Increasing this number will create as many `.network`
configuration files, so it's relatively cheap.
```nix
{ lib, ... }:
let
  maxVMs = 64;
in
{
  networking.useNetworkd = true;

  systemd.network.networks = builtins.listToAttrs (
    map (index: {
      name = "30-vm${toString index}";
      value = {
        matchConfig.Name = "vm${toString index}";
        # Host's addresses
        address = [
          "10.0.0.0/32"
          "fec0::/128"
        ];
        # Setup routes to the VM
        routes = [ {
          Destination = "10.0.0.${toString index}/32";
        } {
          Destination = "fec0::${lib.toHexString index}/128";
        } ];
        # Enable routing
        networkConfig = {
          IPv4Forwarding = true;
          IPv6Forwarding = true;
        };
      };
    }) (lib.genList (i: i + 1) maxVMs)
  );
}
```
## NAT
For NAT configuration on the host we're not going to specify each
potential tap interface. That would create a lot of firewall rules. To
avoid this additional complexity, use a single subnet that matches all
your VMs' addresses:
```nix
{
  networking.nat = {
    enable = true;
    internalIPs = [ "10.0.0.0/24" ];
    # Change this to the interface with upstream Internet access
    externalInterface = "enp0s3";
  };
}
```
## Virtual Machine configuration
We no longer rely on DHCP for this non-standard setup. To produce IPv4
and IPv6 addresses let's assign a number `index` to each MicroVM. Make
sure that this number is **not reused** by two VMs!
We suggest creating some sort of central configuration file that
contains each VM's network `index` in one place. That should make
reuses obvious. If that list becomes too long, write a NixOS
assertion!
```nix
{ lib, ... }:
let
  # Change this by VM!
  index = 5;
  mac = "00:00:00:00:00:01";
in
{
  microvm.interfaces = [ {
    id = "vm${toString index}";
    type = "tap";
    inherit mac;
  } ];

  networking.useNetworkd = true;

  systemd.network.networks."10-eth" = {
    matchConfig.MACAddress = mac;
    # Static IP configuration
    address = [
      "10.0.0.${toString index}/32"
      "fec0::${lib.toHexString index}/128"
    ];
    routes = [ {
      # A route to the host
      Destination = "10.0.0.0/32";
      GatewayOnLink = true;
    } {
      # Default route
      Destination = "0.0.0.0/0";
      Gateway = "10.0.0.0";
      GatewayOnLink = true;
    } {
      # Default route
      Destination = "::/0";
      Gateway = "fec0::";
      GatewayOnLink = true;
    } ];
    networkConfig = {
      # DNS servers no longer come from DHCP nor Router
      # Advertisements. Perhaps you want to change the defaults:
      DNS = [
        # Quad9.net
        "9.9.9.9"
        "149.112.112.112"
        "2620:fe::fe"
        "2620:fe::9"
      ];
    };
  };
}
```
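As suggested above, keeping every VM's `index` in one central
attribute set makes reuse easy to spot, and a NixOS assertion can
catch it automatically. A minimal sketch (names are illustrative):
```nix
{ lib, ... }:
let
  # Defined once, e.g. in your flake, and imported by every MicroVM
  vmIndices = {
    vm1 = 1;
    vm2 = 2;
    vm5 = 5;
  };
  indices = builtins.attrValues vmIndices;
in {
  assertions = [ {
    assertion = builtins.length (lib.unique indices) == builtins.length indices;
    message = "Duplicate MicroVM network index detected!";
  } ];
}
```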

84
example/doc/src/shares.md Executable file

@ -0,0 +1,84 @@
# Shares
Persistent file-systems are provided by both volumes and shares.
Volumes are block devices inside the virtual machine, yielding fast
performance, but their mounted file-systems require exclusive access.
Shares allow mounting an arbitrary directory tree from the host.
In `microvm.shares` elements the `proto` field allows either of two
values:
- `9p` (default) is built into many hypervisors, allowing you to
quickly share a directory tree
- `virtiofs` requires a separate virtiofsd service which is started as
a prerequisite when you start MicroVMs through a systemd service
that comes with the `microvm.nixosModules.host` module.
If you want to run from the command-line, start `bin/virtiofsd-run`
separately.
Expect `virtiofs` to yield better performance over `9p`.
```nix
microvm.shares = [ {
  proto = "virtiofs";
  tag = "home";
  # Source path can be absolute or relative
  # to /var/lib/microvms/$hostName
  source = "home";
  mountPoint = "/home";
} ];
```
<div class="warning">
When sharing a path that is on ZFS with virtiofs, the dataset must
have options
<code>-o xattr=sa -o acltype=posixacl</code>
</div>
## Sharing a host's `/nix/store`
If a share with `source = "/nix/store"` is defined, size and build
time of the stage1 squashfs for `/dev/vda` will be reduced
drastically.
```nix
microvm.shares = [ {
  tag = "ro-store";
  source = "/nix/store";
  mountPoint = "/nix/.ro-store";
} ];
```
## Writable `/nix/store` overlay
An optional writable layer will be mounted if the path
`microvm.writableStoreOverlay` is set. Make sure that the path is
located on a writable filesystem.
**Caveat:** The Linux overlay filesystem is very picky about the
filesystems that can be the upper (writable) layer. 9p/virtiofs shares
don't work currently, so resort to using a volume for that:
```nix
{ config, ... }:
{
  microvm.writableStoreOverlay = "/nix/.rw-store";

  microvm.volumes = [ {
    image = "nix-store-overlay.img";
    mountPoint = config.microvm.writableStoreOverlay;
    size = 2048;
  } ];
}
```
<div class="warning">
The Nix database will forget all built packages after a reboot; it
will contain only what is needed for the VM's NixOS system. Until this
has been solved, it is recommended to just delete and recreate the
overlay after MicroVM shutdown or before startup.
</div>

113
example/doc/src/simple-network.md Executable file

@ -0,0 +1,113 @@
# A simple network setup
While networking infrastructure is out of scope for the **microvm.nix**
flake, here is some guidance for providing the MicroVMs on your NixOS
machine with internet access.
Use this for your local LAN where IP addresses are free and
plentiful. If not, head over to the
[advanced networking](./advanced-network.md) page.
Because we already use systemd for MicroVM startup, let's pick
`systemd-networkd`:
```nix
networking.useNetworkd = true;
```
## A bridge to link TAP interfaces
To make your MicroVM reachable, the host will place its Ethernet port (`eno1`)
on a bridge (`br0`). This bridge will have the MicroVM's TAP interface attached
to it - directly placing the MicroVM on your local network.
Note that the addresses provided below are examples and you must
adjust them to your network settings. Also note that `eno1` must be
attached to the same bridge as the `vm-*` TAP interfaces that you will
specify in the MicroVM definition.
```nix
systemd.network.enable = true;
systemd.network.networks."10-lan" = {
  matchConfig.Name = ["eno1" "vm-*"];
  networkConfig = {
    Bridge = "br0";
  };
};
systemd.network.netdevs."br0" = {
  netdevConfig = {
    Name = "br0";
    Kind = "bridge";
  };
};
systemd.network.networks."10-lan-bridge" = {
  matchConfig.Name = "br0";
  networkConfig = {
    Address = ["192.168.1.2/24" "2001:db8::a/64"];
    Gateway = "192.168.1.1";
    DNS = ["192.168.1.1"];
    IPv6AcceptRA = true;
  };
  linkConfig.RequiredForOnline = "routable";
};
```
Now that the host is configured, you can define a MicroVM to have a static IP
address with:
```nix
microvm = {
  #...add additional MicroVM configuration here
  interfaces = [
    {
      type = "tap";
      id = "vm-test1";
      mac = "02:00:00:00:00:01";
    }
  ];
};
systemd.network.enable = true;
systemd.network.networks."20-lan" = {
  matchConfig.Type = "ether";
  networkConfig = {
    Address = ["192.168.1.3/24" "2001:db8::b/64"];
    Gateway = "192.168.1.1";
    DNS = ["192.168.1.1"];
    IPv6AcceptRA = true;
    DHCP = "no";
  };
};
```
## Docker and systemd-network
If you use the above `systemd.network` bridge config and wish to run
Docker containers inside your microvms using `virtualisation.docker`,
you may need to add the following snippet to stop `systemd-networkd` from
managing the bridged `veth*` interfaces Docker creates for each container.
Without this, network access inside the containers will be broken.
```nix
systemd.network.networks."19-docker" = {
  matchConfig.Name = "veth*";
  linkConfig = {
    Unmanaged = true;
  };
};
```
## Advanced: Improving Performance
If you prioritize network performance over inter-VM communication on
the virtual bridge, have a look into these alternatives:
- Network interfaces with `type = "macvtap"` are supported in
microvm.nix. While they're technically tap devices, they attach to
an external Ethernet port, eliminating the `br0` bridge.
- Server Ethernet cards support SR-IOV: set up Virtual Function
devices for PCI passthrough into MicroVMs.

66
example/doc/src/ssh-deploy.md Executable file

@ -0,0 +1,66 @@
# Deploying via SSH
By running either from packages or through systemd services
microvm.nix tries to support a wholesome Nix workflow: develop and
test on your local laptop, then deploy to staging and later to
production.
Let's explore the alternative ways before detailing our approach:
- You could build
`.#nixosConfiguration.my-microvm.config.microvm.declaredRunner`
locally, then `nix copy` it to the target host for
installation. This comes at the expense of your laptop's battery
time and it can also become quite network-heavy.
- You may transfer each change to the remote host and build entirely
remotely. There you're going to have a repository state that is going
to confuse fellow operators. Also, your local `--override-input`
parameters will become meaningless on the remote filesystem.
## microvm.deploy.rebuild
The *easy* interface that is named after `nixos-rebuild` combines the
two scripts that are described below:
- First, we evaluate locally and build remotely with
`microvm.deploy.installOnHost`
- Depending on whether the host's `/nix/store` is mounted and SSH is
running in the MicroVM:
- We either run `microvm.deploy.sshSwitch` as described below
- Alternatively, we restart the MicroVM's systemd service on the
host
Because it needs to know about both the host and the MicroVM, their
SSH addresses must come before the actual `switch` argument:
```bash
nix run .#nixosConfigurations.my-microvm.config.microvm.deploy.rebuild root@example.com root@my-microvm.example.com switch
```
## microvm.deploy.installOnHost
This script will evaluate only the system's derivations locally. It
then transfers these and their dependencies to the remote system so
the actual build can be performed there.
Just like [the microvm command](microvm-command.md), it then installs
the MicroVM under `/var/lib/microvms/$NAME` so that the systemd
services of the `host` module can pick it up.
It is irrelevant whether you create a new MicroVM or update an
existing one.
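Presumably it is invoked with the host's SSH address, analogous to the
rebuild example above; check the script itself for the exact calling
convention:
```bash
# Assumed invocation (verify against the actual script)
nix run .#nixosConfigurations.my-microvm.config.microvm.deploy.installOnHost root@example.com
```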
## microvm.deploy.sshSwitch
Once the host has an updated MicroVM in its `/nix/store` (see above)
the new system must be activated. For a proper state, this script does
a bit more in the MicroVM than just `switch-to-configuration`:
- First, the `config.networking.hostName` is compared to the running
system for safety reasons.
- The Nix database registration will be imported which is important if
you build packages into a `microvm.writableStoreOverlay`.
- The new system is installed into `/nix/var/nix/profiles/system`
which is optional but expected by some Nix tooling.
- Finally, `switch-to-configuration` is run with the provided
parameter (e.g. `switch`).