@push.rocks/smartvm is a TypeScript control layer for Amazon Firecracker. It downloads and caches Firecracker binaries, resolves bootable base images, starts microVMs through the Firecracker Unix-socket API, creates host networking, stages ephemeral drives, applies optional egress controls, and cleans up the mess when a VM is done.
The point is simple: Firecracker gives you strong, lightweight VM isolation. smartvm gives you a programmable way to use it without writing the same shell scripts, socket calls, TAP setup, bridge setup, and cleanup code for every project.
## Why Firecracker Needs a Control Layer
Firecracker is intentionally small. It is a virtual machine monitor, not a platform. That is part of its strength. It exposes an HTTP API over a Unix domain socket and expects the host to provide the rest: kernel image, root filesystem, network devices, boot arguments, process lifecycle, and cleanup.
That leaves every user building the same operational layer:
- Find or download the right Firecracker binary.
- Prepare a kernel and root filesystem.
- Start the VMM with an API socket.
- Wait for the socket and API to become ready.
- Send pre-boot config in the right order.
- Create TAP devices and connect them to host networking.
- Decide how VM traffic leaves the host.
- Stop the VMM and remove sockets, TAPs, bridges, routes, and temp files.
smartvm wraps that host plumbing behind typed TypeScript classes.
| Class | Responsibility |
|---|---|
| `SmartVM` | top-level entry point that ties the classes below together |
| `ImageManager` | Firecracker binaries, manual kernels, manual rootfs files |
| `BaseImageManager` | cached Firecracker CI or hosted base image bundles |
| `NetworkManager` | TAPs, bridge, NAT, firewall, WireGuard egress |
| `MicroVM` | one Firecracker VM and its state machine |
| `FirecrackerProcess` | child process and socket readiness |
| `SocketClient` | HTTP over Unix socket |
| `VMConfig` | config validation and payload conversion |
Your application describes the VM. smartvm handles the host-side sequence.
## The Minimal Boot Path
The happy path is short: create a SmartVM, resolve a base image, create a VM, start it, and clean it up.
```typescript
import { SmartVM } from '@push.rocks/smartvm';

const smartvm = new SmartVM({
  dataDir: '/tmp/.smartvm',
  runtimeDir: '/dev/shm/.smartvm/runtime',
});

const baseImage = await smartvm.ensureBaseImage({ preset: 'latest' });

const vm = await smartvm.createVM({
  id: 'hello-firecracker',
  bootSource: {
    kernelImagePath: baseImage.kernelImagePath,
    bootArgs: baseImage.bootArgs,
  },
  machineConfig: {
    vcpuCount: 1,
    memSizeMib: 256,
  },
  drives: [
    {
      driveId: 'rootfs',
      pathOnHost: baseImage.rootfsPath,
      isRootDevice: true,
      isReadOnly: baseImage.rootfsIsReadOnly,
    },
  ],
});

try {
  await vm.start();
  console.log(vm.state); // running
  console.log(await vm.getVersion());
  console.log(await vm.getInfo());
} finally {
  if (vm.state === 'running' || vm.state === 'paused') {
    await vm.stop();
  }
  await vm.cleanup();
  await smartvm.cleanup();
}
```
That snippet hides a lot of host work. Firecracker is downloaded or reused. A base image bundle is resolved and cached. A per-VM runtime directory is created. A Unix socket path is assigned. Writable drives are staged if needed. Firecracker is started as a child process. The API socket is polled until ready. Configuration is sent through the socket. The instance is booted. Cleanup tears down the process and host resources owned by the instance.
## Real Isolation, Real Host Requirements
smartvm is TypeScript, but it is not pretending that Firecracker is portable JavaScript. Firecracker is a Linux/KVM technology.
You need a Linux host with `/dev/kvm`. If you use networking, you need privileges for TAP devices, bridges, IP forwarding, iptables, policy routing, and optional WireGuard setup. The host also needs common system tools such as `curl`, `tar`, `ip`, `sysctl`, and `iptables`; WireGuard routing additionally needs `wg`.
That constraint is useful to state clearly. smartvm is not a local toy VM emulator. It is a typed operations layer for real microVMs on a real Linux host.
## Ephemeral by Default
The most important design choice in smartvm is its runtime model. VMs are treated as disposable execution units unless you opt into persistence.
By default, runtime files go into tmpfs when available:
| Artifact | Default location | Behavior |
|---|---|---|
| Firecracker binaries | `/tmp/.smartvm/bin` | Cached for reuse |
| Base images | `/tmp/.smartvm/base-images` | Cached with retention |
| VM sockets | `/dev/shm/.smartvm/runtime/<vmId>/firecracker.sock` | Per-VM, deleted on cleanup |
| Writable drives | `/dev/shm/.smartvm/runtime/<vmId>/drives/*` | Copied before boot, deleted on cleanup |
| Read-only drives | Original path | Reused directly unless explicitly staged |
The default is: immutable base image, staged writable copy, no accidental mutation of cached root filesystems.
```typescript
const vm = await smartvm.createVM({
  bootSource: {
    kernelImagePath: baseImage.kernelImagePath,
    bootArgs: baseImage.bootArgs,
  },
  machineConfig: {
    vcpuCount: 1,
    memSizeMib: 256,
  },
  drives: [
    {
      driveId: 'rootfs',
      pathOnHost: '/images/rootfs.ext4',
      isRootDevice: true,
      isReadOnly: false,
      ephemeral: true,
    },
  ],
});
```
If you need persistent state, say so explicitly:
```typescript
const vm = await smartvm.createVM({
  bootSource: {
    kernelImagePath: '/images/vmlinux',
    bootArgs: 'console=ttyS0 reboot=k panic=1 pci=off',
  },
  machineConfig: {
    vcpuCount: 2,
    memSizeMib: 512,
  },
  drives: [
    {
      driveId: 'state',
      pathOnHost: '/var/lib/my-vm/state.ext4',
      isRootDevice: true,
      isReadOnly: false,
      ephemeral: false,
    },
  ],
});
```
This default is the right bias for worker pools, test isolation, CI jobs, sandboxed execution, disposable service instances, and workloads where durable state belongs in an external database, object store, or explicit persistent volume.
## Base Images Without Checking Rootfs Files Into Git
Firecracker needs a kernel image and a root filesystem. smartvm has two layers for that.
ImageManager is the lower-level helper for binaries, kernels, rootfs downloads, blank rootfs creation, and cloning. It is useful when you already know what files you want.
BaseImageManager is the higher-level path. It resolves bootable Firecracker CI image bundles through presets or a project-owned hosted manifest.
```typescript
const latest = await smartvm.ensureBaseImage();

const lts = await smartvm.ensureBaseImage({ preset: 'lts' });

const hosted = await smartvm.ensureBaseImage({
  preset: 'hosted',
  manifestUrl: 'https://assets.example.com/smartvm/manifest.json',
});
```
The presets are deliberately pragmatic:
| Preset | Meaning |
|---|---|
| `latest` | Resolve the latest Firecracker release and matching CI demo artifacts |
| `lts` | Use the pinned Firecracker CI train v1.7 and Firecracker v1.7.0 |
| `hosted` | Use your own manifest with kernel/rootfs artifact metadata |
Cached base image bundles are stored under `/tmp/.smartvm/base-images` by default. The cache keeps two bundles unless configured otherwise. Cached artifacts are verified by size and SHA256 before reuse. Hosted URL artifacts require SHA256 hashes, which is the right default when a VM root filesystem is being pulled from a remote location.
The hosted manifest format is plain JSON:
```json
{
  "schemaVersion": 1,
  "bundleId": "smartvm-minimal-v1-x86_64",
  "arch": "x86_64",
  "firecrackerVersion": "v1.15.1",
  "rootfsType": "squashfs",
  "rootfsIsReadOnly": true,
  "bootArgs": "console=ttyS0 reboot=k panic=1 pci=off ro rootfstype=squashfs",
  "kernel": {
    "url": "https://assets.example.com/smartvm/vmlinux",
    "fileName": "vmlinux",
    "sha256": "0000000000000000000000000000000000000000000000000000000000000000"
  },
  "rootfs": {
    "url": "https://assets.example.com/smartvm/rootfs.squashfs",
    "fileName": "rootfs.squashfs",
    "sha256": "0000000000000000000000000000000000000000000000000000000000000000"
  }
}
```
That gives projects a clean path from quick-start CI images to controlled production images.
## A Strict MicroVM State Machine
MicroVM represents one Firecracker instance. Its lifecycle is intentionally narrow:
```
created -> configuring -> running -> paused -> stopped
                                  -> error
```
The public methods enforce valid states. You cannot pause a VM that has not started. You cannot snapshot a running VM; snapshots require paused. You cannot call high-level API helpers before the socket client exists.
```typescript
await vm.start();
await vm.pause();

await vm.createSnapshot({
  snapshotPath: '/tmp/vm.snapshot',
  memFilePath: '/tmp/vm.mem',
  snapshotType: 'Full',
});

await vm.resume();
await vm.stop();
await vm.cleanup();
```
During start(), smartvm sends Firecracker config in the right order: logger, metrics, machine config, boot source, drives, network interfaces, vsock, balloon, MMDS config, then InstanceStart.
That sequencing is easy to get wrong in ad hoc code. Here it is centralized and covered by tests.
## TypeScript Config, Firecracker Payloads
Firecracker's API uses snake_case payloads. TypeScript projects usually use camelCase. VMConfig handles validation and transformation.
```typescript
import { VMConfig } from '@push.rocks/smartvm';

const vmConfig = new VMConfig({
  bootSource: {
    kernelImagePath: '/images/vmlinux',
    bootArgs: 'console=ttyS0 reboot=k panic=1 pci=off',
  },
  machineConfig: {
    vcpuCount: 2,
    memSizeMib: 256,
  },
  drives: [
    {
      driveId: 'rootfs',
      pathOnHost: '/images/rootfs.ext4',
      isRootDevice: true,
    },
  ],
});

const validation = vmConfig.validate();
if (!validation.valid) {
  throw new Error(validation.errors.join('; '));
}

console.log(vmConfig.toBootSourcePayload());
console.log(vmConfig.toMachineConfigPayload());
console.log(vmConfig.toDrivePayload(vmConfig.config.drives![0]));
```
The constructor clones caller-provided config. Internal normalization does not mutate the object you passed in, except where MicroVM intentionally stages drive paths before boot on its internal config copy.
## Networking: TAPs, Bridges, NAT, and Deterministic Guests
Firecracker gives the VM a network interface if you provide a TAP device. It does not configure the host network for you. NetworkManager owns that part.
By default, it uses bridge `svbr0` and subnet `172.30.0.0/24`. It normalizes CIDR input, assigns the first usable address as the gateway, starts guest allocation at the second usable address, derives deterministic locally administered MAC addresses, caps TAP names at the Linux interface name length limit, and configures NAT through the host default route.
Automatic mode is minimal:
```typescript
const vm = await smartvm.createVM({
  bootSource: {
    kernelImagePath: baseImage.kernelImagePath,
    bootArgs: baseImage.bootArgs,
  },
  machineConfig: {
    vcpuCount: 1,
    memSizeMib: 256,
  },
  drives: [
    {
      driveId: 'rootfs',
      pathOnHost: baseImage.rootfsPath,
      isRootDevice: true,
      isReadOnly: baseImage.rootfsIsReadOnly,
    },
  ],
  networkInterfaces: [{ ifaceId: 'eth0' }],
});
```
If your guest image configures networking through kernel boot arguments, NetworkManager can give you the static guest data:
```typescript
const tap = await smartvm.networkManager.createTapDevice('net-vm', 'eth0');

const vm = await smartvm.createVM({
  id: 'net-vm',
  bootSource: {
    kernelImagePath: baseImage.kernelImagePath,
    bootArgs: `${baseImage.bootArgs} ${smartvm.networkManager.getGuestNetworkBootArgs(tap)}`,
  },
  machineConfig: {
    vcpuCount: 1,
    memSizeMib: 256,
  },
  drives: [
    {
      driveId: 'rootfs',
      pathOnHost: baseImage.rootfsPath,
      isRootDevice: true,
      isReadOnly: baseImage.rootfsIsReadOnly,
    },
  ],
  networkInterfaces: [
    {
      ifaceId: 'eth0',
      hostDevName: tap.tapName,
      guestMac: tap.mac,
    },
  ],
});
```
There is no hidden DHCP server. The guest still needs to configure its interface, either through the image itself or through boot arguments.
## Egress Firewall Policies
Version 1.4.0 added configurable VM egress firewall policies. The policy applies to the manager's VM subnet, and rules are evaluated in order before the default action.
```typescript
const smartvm = new SmartVM({
  firewall: {
    egress: {
      defaultAction: 'deny',
      rules: [
        { action: 'allow', to: '1.1.1.1', protocol: 'udp', ports: 53, comment: 'DNS' },
        { action: 'allow', to: '203.0.113.0/24', protocol: 'tcp', ports: [443] },
      ],
    },
  },
});
```
The implementation uses an owned iptables chain, jumped to from FORWARD, for traffic originating in the VM subnet. Rules support IPv4 destinations or CIDR ranges; the protocols `all`, `tcp`, `udp`, and `icmp`; and destination ports for TCP and UDP rules.
This is useful for sandboxed execution. A VM can be allowed to reach DNS and a narrow API endpoint without getting unrestricted internet access.
## WireGuard Egress Without WireGuard in the Guest
The same release added host-side WireGuard routing. This is important because it keeps the guest image simple. The VM does not need WireGuard installed. smartvm policy-routes packets from the VM subnet through a host WireGuard interface.
Managed interface mode creates and removes the WireGuard interface:
```typescript
const smartvm = new SmartVM({
  wireguard: {
    interfaceName: 'svwg0',
    routeTable: 51820,
    failClosed: true,
    config: `
[Interface]
PrivateKey = <private-key>
Address = 10.70.0.2/32
MTU = 1420

[Peer]
PublicKey = <public-key>
AllowedIPs = 0.0.0.0/0
Endpoint = vpn.example.com:51820
PersistentKeepalive = 25
`,
  },
});
```
Existing interface mode uses an interface managed outside smartvm:
```typescript
const smartvm = new SmartVM({
  wireguard: {
    existingInterface: 'wg0',
    routeTable: 51820,
    failClosed: true,
  },
});
```
`routeAllVmTraffic` defaults to true. `failClosed` defaults to true, so VM traffic is dropped if it would otherwise escape through a non-WireGuard egress path. Managed configs reject unsafe hook fields such as `PostUp`, `PreDown`, and `SaveConfig`; policy routing is owned by smartvm instead.
## MMDS, Vsock, Ballooning, and Snapshots
smartvm exposes Firecracker features beyond the minimal boot path.
MMDS lets the host provide structured metadata to the guest:
```typescript
await vm.setMetadata({
  instance: {
    id: 'api-worker-1',
    region: 'local',
  },
  config: {
    mode: 'ephemeral',
  },
});

console.log(await vm.getMetadata());
```
VM config also supports vsock devices, balloon devices, logger config, metrics config, drive and network rate limiters, drive path hot updates, network interface updates, and snapshot creation.
Snapshots are intentionally explicit. Diff snapshots require machineConfig.trackDirtyPages before boot, and restore is exposed as a low-level API rather than a magic lifecycle shortcut. That is the right tradeoff for a library touching VM process state and host files.
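As a sketch of what that opt-in looks like, combining the `trackDirtyPages` prerequisite named above with the `createSnapshot` call shown earlier (the `'Diff'` value is an assumption mirroring Firecracker's `snapshot_type`; the `'Full'` variant is the one demonstrated in this article):

```typescript
// Opt into dirty-page tracking at create time; diff snapshots require it.
const vm = await smartvm.createVM({
  bootSource: {
    kernelImagePath: baseImage.kernelImagePath,
    bootArgs: baseImage.bootArgs,
  },
  machineConfig: {
    vcpuCount: 1,
    memSizeMib: 256,
    trackDirtyPages: true, // must be set before boot
  },
  drives: [
    {
      driveId: 'rootfs',
      pathOnHost: baseImage.rootfsPath,
      isRootDevice: true,
      isReadOnly: baseImage.rootfsIsReadOnly,
    },
  ],
});

await vm.start();
await vm.pause(); // snapshots require the paused state

await vm.createSnapshot({
  snapshotPath: '/tmp/vm.diff.snapshot',
  memFilePath: '/tmp/vm.diff.mem',
  snapshotType: 'Diff', // assumption: mirrors Firecracker's snapshot_type
});
```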
## Errors You Can Handle
Package-level failures use SmartVMError with structured codes.
```typescript
import { SmartVMError } from '@push.rocks/smartvm';

try {
  await vm.start();
} catch (err) {
  if (err instanceof SmartVMError) {
    console.error(err.code);
    console.error(err.statusCode);
    console.error(err.details);
  }
  throw err;
}
```
The codes cover VM state errors, invalid config, socket timeouts, Firecracker API timeouts, binary lookup failures, download failures, base image manifest problems, subnet and interface validation, firewall and WireGuard validation, bridge setup, TAP creation, rootfs operations, and VM start failures.
That matters operationally. A caller can distinguish `INVALID_CONFIG` from `SOCKET_TIMEOUT`, or `INVALID_WIREGUARD_CONFIG` from `WIREGUARD_SETUP_FAILED`, without parsing strings.
## Where It Fits
smartvm is for systems that need stronger isolation than a normal process but do not want to build a VM platform from scratch.
Sandboxed code execution can run untrusted or semi-trusted workloads in short-lived microVMs with explicit network egress policy.
CI and test infrastructure can boot clean Linux environments with cached base images and disposable writable drives.
Edge workloads can run isolated service instances with host-controlled networking and WireGuard routing.
Developer tooling can automate Firecracker without spreading shell scripts across repositories.
Internal platforms can expose a higher-level job or worker API while keeping Firecracker lifecycle details in one library.
The common pattern is ephemeral compute: boot from a known image, inject metadata, run the workload, collect results externally, clean the VM.
## The Takeaway
Firecracker is powerful because it is small. It does not try to be Kubernetes, Docker, Vagrant, or a cloud control plane. But using it directly means owning a lot of host integration code.
@push.rocks/smartvm gives TypeScript projects that missing layer: typed VM config, Firecracker binary and image management, Unix-socket API calls, strict lifecycle handling, tmpfs-backed runtime artifacts, TAP/bridge networking, egress firewalling, host-side WireGuard routing, and cleanup.
If your system needs real Linux microVM isolation but your control plane is TypeScript, smartvm is the bridge between the two.