Raspberry Pi 5 deployment¶
The Pi 5 isn’t just a deploy target — it’s the numerical audit rig
for llamaclaw. Every function in esml is tested on both x86_64 and
aarch64; subtle BLAS differences catch ill-conditioned code that x86
Accelerate silently accepts.
Hardware recommendation¶
- Raspberry Pi 5 (8 GB minimum, 16 GB preferred for Perseus)
- Active cooler (the official kit works)
- 1 TB NVMe via PCIe HAT — puts /home and the Ollama models off the SD card
- Ethernet (Wi-Fi works, but NVMe-sized model downloads are large)
1. Base OS¶
Flash Raspberry Pi OS 64-bit (Bookworm or later). First boot: set
hostname to zeus.local, create user perseus, enable SSH.
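If you would rather script those first-boot settings over SSH than click through the imager, the equivalent commands look roughly like this (a sketch; exact tooling varies slightly by OS image):

```shell
# Set the hostname the rest of this guide assumes
sudo hostnamectl set-hostname zeus

# Create the perseus user and give it sudo
sudo adduser --gecos "" perseus
sudo usermod -aG sudo perseus

# Make sure the SSH daemon starts on every boot
sudo systemctl enable --now ssh
```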
2. Move /home to NVMe¶
See llamaclaw/deploy/yodavision/zeus/APPLY_NOTES.md
for the full migration. SD card stays boot + root; all user data + Ollama
models live on NVMe.
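APPLY_NOTES.md is the authoritative procedure; as a rough sketch, the migration boils down to the steps below (the device name /dev/nvme0n1p1 and the filesystem label are assumptions — check lsblk first):

```shell
# Format and mount the NVMe partition (verify the device with lsblk first!)
sudo mkfs.ext4 -L nvme-home /dev/nvme0n1p1
sudo mkdir -p /mnt/nvme
sudo mount /dev/nvme0n1p1 /mnt/nvme

# Copy /home, preserving permissions, hard links, ACLs, and xattrs
sudo rsync -aHAX /home/ /mnt/nvme/

# Mount it over /home on every boot
echo 'LABEL=nvme-home /home ext4 defaults,noatime 0 2' | sudo tee -a /etc/fstab
sudo mount -a
```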
3. Bootstrap¶
ssh perseus@zeus.local
curl -fsSL https://install.llamaclaw.org | bash -s -- --with-pi
The --with-pi flag pulls Perseus (llamaclaw/perseus:e2b), wires up the
systemd units from llamaclaw/deploy,
and installs the kernel tuning: a udev rule that sets the BFQ scheduler on the NVMe, plus sysctl tweaks.
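For reference, the NVMe scheduler tweak is typically a one-line udev rule of this shape (the file path and match strings here are illustrative, not the exact rule the installer ships):

```
# /etc/udev/rules.d/60-nvme-bfq.rules (illustrative)
ACTION=="add|change", KERNEL=="nvme[0-9]n[0-9]", ATTR{queue/scheduler}="bfq"
```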
4. Verify¶
ssh perseus@zeus.local
esml doctor
# ─── ESML Doctor ───
# ✓ Python 3.12.3
# ✓ R 4.4.1
# ✓ Ollama 0.1.40
# ✓ Perseus perseus:e2b (9.6 GB)
# ✓ 41 datasets in SQLite
# ✓ NVMe /home mounted (1 TB, 620 GB free)
# ✓ Luci systemd unit enabled
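If you automate the check (assuming esml doctor follows the usual convention of a non-zero exit code on failure), you can gate deploys on it:

```shell
# Fail fast if zeus does not pass its health check
if ssh perseus@zeus.local "esml doctor" >/dev/null 2>&1; then
  echo "zeus healthy"
else
  echo "zeus failed its doctor check" >&2
  exit 1
fi
```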
5. systemd units¶
Three services come from llamaclaw/deploy/raspberry-pi/:
| Unit | What |
|---|---|
| esml-ollama.service | Keeps the Ollama daemon running (restart-on-failure) |
| perseus-relay.service | Perseus HTTP API at port 8421 for network access |
| (Luci agent unit) | Optional: Luci AI agent (see below) |
Enable the two always-on services (Luci's unit is optional; see below):
sudo systemctl enable --now esml-ollama.service perseus-relay.service
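As a sketch of what one of these units looks like (the ExecStart path and exact directives are assumptions — the units shipped in llamaclaw/deploy/raspberry-pi/ are authoritative), note the Restart policy also mentioned under the security posture:

```
# perseus-relay.service (illustrative sketch)
[Unit]
Description=Perseus HTTP relay
After=network-online.target esml-ollama.service

[Service]
User=perseus
ExecStart=/usr/local/bin/perseus-relay --port 8421
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```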
6. Talk to Luci (optional)¶
Luci is the Pi-side AI agent from llamaclaw/luci. It’s sandboxed:
no agent-to-agent calls, no public endpoints, exec approvals required.
# Interactive
ssh -t perseus@zeus.local "zeroclaw agent"
# One-shot
ssh perseus@zeus.local "zeroclaw agent -m 'what is your status?'"
7. Run llamaclaw/tide remotely¶
ssh -t perseus@zeus.local "docker run --rm -it --network host ghcr.io/llamaclaw/tide:latest"
Or build kronos natively (ARM-first, ~8 MB binary):
ssh perseus@zeus.local
git clone git@github.com:llamaclaw/kronos.git && cd kronos
cargo build --release
./target/release/kronos
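If the Pi is busy, the same binary can be cross-compiled from an x86_64 box and copied over. A sketch for a Debian-family build host (standard target triple and linker package; you also need to point Cargo at the cross linker):

```shell
# On the x86_64 build host
rustup target add aarch64-unknown-linux-gnu
sudo apt install gcc-aarch64-linux-gnu

# Tell Cargo which linker to use for the aarch64 target
mkdir -p .cargo
cat >> .cargo/config.toml <<'EOF'
[target.aarch64-unknown-linux-gnu]
linker = "aarch64-linux-gnu-gcc"
EOF

cargo build --release --target aarch64-unknown-linux-gnu

# Ship the binary to the Pi
scp target/aarch64-unknown-linux-gnu/release/kronos perseus@zeus.local:~/
```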
Security posture¶
- SSH is key-only (no password auth). The deploy key on ruhelavansh-oss/esml is separate from the user-level key on llamaclaw (different auth paths for the monorepo mirror vs the new ecosystem).
- ufw firewall active; only SSH and the Perseus-relay port are exposed.
- Luci runs as the dedicated Linux user luci (not root, not perseus).
- All systemd units use Restart=on-failure with exponential backoff.
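That firewall posture, expressed as ufw commands (8421 is the Perseus-relay port from step 5):

```shell
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow OpenSSH       # key-only auth is enforced separately in sshd_config
sudo ufw allow 8421/tcp      # Perseus relay
sudo ufw enable
```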
Pi isn’t just for deployment¶
Every llamaclaw CI run should eventually be reproduced on a Pi, because
aarch64 BLAS raises LinAlgError on matrices that x86 silently
tolerates. See
yodavision/mistakes/arm_vs_x86_numerical_divergence.md
for the pattern we discovered on 2026-04-17: two pytest failures on Pi
that never showed on Mac, both caught by ARM’s stricter numerics.
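You do not need physical hardware to get an early signal: with QEMU user emulation registered, Docker can run an aarch64 test image on an x86_64 dev box. A sketch (the esml image name is an assumption; emulated BLAS behavior is close to, but not identical to, a real Pi):

```shell
# One-time: register qemu binfmt handlers for foreign architectures
docker run --privileged --rm tonistiigi/binfmt --install arm64

# Run the test suite under emulated aarch64
docker run --rm --platform linux/arm64 ghcr.io/llamaclaw/esml:latest pytest -x
```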