ZeroLagHub – File System & File Browser Strategy
Date: 2026-02-28
Status: Planning — not yet implemented
Next Action: Stub file endpoints in existing agent, prove end-to-end, extract later
Context
This document captures the architectural decisions and UX direction for file system access in ZeroLagHub game and dev containers. It is a planning document, not an implementation spec.
Use Cases
Game Server Owners (non-technical)
- Edit `server.properties` and config files
- Upload mod `.jar` files to `/mods`
- Restore deleted mods from `/mods-removed`
- Download log files for debugging
Developers / BYOS (technical)
- Full shell access
- File transfer (upload/download)
- SFTP access to dev container filesystem
These are two distinct personas with different needs and different solutions.
Architecture Decision
Game Server File Access
Handled via agent file endpoints + portal file browser UI.
No SSH required. The agent exposes REST file management endpoints. The API proxies them behind auth + ownership enforcement (same pattern as all other agent endpoints). The portal renders a file browser panel.
Dev Container File Access
Handled via WebSSH2 + SFTP, proxied through the API.
Developers get a real SSH2 session with SFTP channel. No direct container access from the browser — API proxy maintains the security boundary (DEC-008).
Agent File Endpoints (Planned)
- `GET /game/files?path=` — list directory
- `GET /game/files/download?path=` — download file
- `POST /game/files/upload?path=` — upload file
- `DELETE /game/files?path=` — delete file
- `PATCH /game/files` — rename file
Mirrored in the API under `/api/game/servers/:id/files/*`.
Security Requirements
- Hard-rooted to `serverRoot` — no path traversal outside the container root
- HTTPS only
- Auth + ownership enforced at the API layer
- Upload size limits enforced
- No execution of uploaded files
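The hard-rooting requirement can be enforced with a small join helper. A sketch, assuming a Go agent; `securePath` and the example root are illustrative names, not the agent's actual API:

```go
package main

import (
	"errors"
	"fmt"
	"path/filepath"
	"strings"
)

// securePath joins an untrusted request path onto serverRoot and rejects
// any result that escapes the root (e.g. via "../" segments).
// Hypothetical helper; the real agent may name or structure this differently.
func securePath(serverRoot, reqPath string) (string, error) {
	abs := filepath.Join(serverRoot, reqPath) // Join also runs filepath.Clean
	rel, err := filepath.Rel(serverRoot, abs)
	if err != nil || rel == ".." || strings.HasPrefix(rel, ".."+string(filepath.Separator)) {
		return "", errors.New("path escapes server root")
	}
	return abs, nil
}

func main() {
	root := "/srv/game" // illustrative container root
	p, _ := securePath(root, "mods/example.jar")
	fmt.Println(p) // /srv/game/mods/example.jar
	_, err := securePath(root, "../../etc/passwd")
	fmt.Println(err != nil) // true: traversal rejected
}
```

Checking the `filepath.Rel` result, rather than string-prefix matching on the raw input, catches traversal that survives cleaning (e.g. `a/../../etc`).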
Implementation Sequencing
Do not split the agent yet.
Land file endpoints in the existing agent first. Prove the feature end-to-end. Once the surface area is clear and stable, extract if needed.
SFTP is the exception — if SFTP access for dev containers is implemented, it warrants its own separate process from day one due to SSH server complexity. It does not belong in the main game agent.
Phased Approach
- File endpoints in existing agent (stub → prove → harden)
- Portal file browser UI wired to API proxy
- SFTP as separate agent process for dev containers (separate binary, separate port, separate systemd unit)
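The "separate binary, separate port, separate systemd unit" shape for phase 3 might look like the following hypothetical unit; every name, path, and port here is a placeholder, not a committed convention:

```ini
# /etc/systemd/system/zlh-sftp-agent.service (hypothetical)
[Unit]
Description=ZeroLagHub SFTP agent (dev containers)
After=network.target

[Service]
# "current" symlink follows the same versioned-release layout as the main agent.
ExecStart=/opt/zlh/sftp-agent/current/zlh-sftp-agent --port 2222
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
```

Keeping it as its own unit means SSH server crashes or upgrades never take down the game agent's HTTP surface.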
Frontend File Browser Direction
Layout
Split-pane layout: directory tree on the left, file detail and actions on the right. It slides in as a panel rather than replacing the console view, so the server console remains visible.
Navigation
Breadcrumb-based. Flat navigation within each directory rather than deep tree expansion: clicking a folder replaces the view instead of expanding in place.
File Listing
Columns: name, size, modified date, type icon. For .jar files: status badge (enabled / disabled / removed).
Actions
Context menu or per-row three-dot menu. Actions: download, delete, rename. For .jar files: enable / disable toggle. Drag-to-upload supported, file picker fallback.
In-Browser Editor
Plain textarea or Monaco for text files (`server.properties`, `.json`, `.txt`). Binary files get a download link only. Not required for launch.
mods-removed Surface
"Recently removed" section or toggle to show /mods-removed alongside active mods. This makes soft delete visible and gives users a restore path without knowing the underlying filesystem layout.
What to Avoid
- Deep expand/collapse tree for mod directory (use flat list + filter)
- In-browser zip/unzip
- Making the file browser the primary surface — mod manager stays primary, file browser is secondary/advanced
Per-Container Web Server Decision
Do not run Nginx or Caddy per container.
The agent already runs an HTTP server. Serving static file browser assets from the agent directly keeps per-container footprint minimal. No additional process, no config management, no extra memory overhead.
Caddy or Nginx per container would make sense if you needed per-container SSL termination or direct browser access without the API proxy. ZLH's architecture routes everything through the API proxy (zlh-proxy handles SSL at the edge), so a local web server adds a layer without adding capability.
Resilience Notes
The file agent (when extracted) should follow the same binary resilience pattern as the main agent:
- Versioned release layout (`releases/<version>/`)
- `current` symlink pointing to the active binary
- Previous version retained on disk
- Systemd watchdog flips `current` back to the previous version on health check failure
- No dependency on artifact server for rollback — local fallback only
This keeps the file service self-healing without operator intervention, consistent with ZLH's overall design goal.
Related Documents
- `docs/architecture/mod-deployment-safety.md` — mod lifecycle and rollback model
- `docs/architecture/dev-to-game-artifact-pipeline.md` — dev container promotion pipeline
- `OPEN_THREADS.md` — file browser listed as next major feature
- `Frontend/TerminalView_Component.md` (knowledge-base) — terminal implementation reference
- WebSSH2: https://github.com/billchurch/webssh2 — SFTP + SSH2 for dev containers