diff --git a/PROJECT_CONTEXT.md b/PROJECT_CONTEXT.md
index a094b26..c035295 100644
--- a/PROJECT_CONTEXT.md
+++ b/PROJECT_CONTEXT.md
@@ -8,6 +8,7 @@ Current operating strengths:
 - custom agent architecture
 - open-source-oriented stack
 - developer-friendly game and dev environment focus
+- integrated hosted IDE / dev container surface through the API
 
 System posture: stable, controlled expansion phase.
@@ -80,10 +81,33 @@ System posture: stable, controlled expansion phase.
 - Crash recovery: backoff 30s/60s/120s, resets if uptime ≥ 30s, `error` state after repeated failures
 - Crash observability: exit code, signal, uptime, log tail, classification
 - Real Minecraft readiness probing exists in `internal/minecraft/readiness.go`
+- File handlers are live for both `game` and `dev` containers
+- File edits create shadow copies for revert when supported by the agent file policy
+
+### Minecraft runtime behavior
+- `vanilla`
+  - implemented as the internal `vanilla-fabric` profile
+  - downloads `minecraft/fabric//server.jar`
+  - installs FabricProxy-Lite
+  - installs version-matched Fabric API from `minecraft/fabric/fabric-api//fabric-api.jar`
+  - installs FabricProxy-Lite config
+- `fabric`
+  - downloads `minecraft/fabric//server.jar`
+  - no proxy jar
+  - no Fabric API injection
+  - no proxy config
+- `forge` / `neoforge`
+  - use installer-based setup
+  - first boot avoids readiness ping gating until post-install files are generated
+  - agent waits for `server.properties`, stops the process, enforces server properties, then restarts through the readiness-aware path
+  - extended readiness timeout for first-start / post-processing flow
+
+This split is important: ZeroLagHub `vanilla` is not plain Mojang jar delivery; it is the Fabric-based vanilla profile used for proxy compatibility and platform behavior. Normal Fabric is plain Fabric jar delivery only.
 
 ### Backup boundary
 - Agent-owned game backups are local, app-aware rollback backups
 - Current implemented game backup scope is local Minecraft backup create/list/restore/delete plus pre-restore checkpoint hardening
+- Restore is real and validated in practice; API now starts restore asynchronously and Portal should rely on polled status rather than a long-lived restore POST
 - PBS / platform backups are the durability and disaster-recovery layer
 - Do not treat offsite/PBS durability work as agent implementation work unless ownership changes
 
@@ -167,6 +191,9 @@ Headscale/Tailscale for SSH, VS Code Remote, local tools. Constraints: no exit n
 - Portal uses the API-mediated hosted IDE flow
 - Portal uses the API websocket bridge for console access
 - Portal no longer relies on stale DB-only state for console availability
+- Portal uses API-mediated game files, backups, and restore flows
+- Restore start is async at the API layer; frontend should treat `202 Accepted` as success and then poll status
+- API normalizes backup timestamps and filters pre-restore checkpoints out of the default backup list
 - Game publish flow remains untouched by dev routing work
 
 ---
 
@@ -209,7 +236,7 @@ Repo-specific active work now lives under:
 - `Codex/Agent/*`
 
 High-level active themes:
-1. Backup contract normalization and live validation
+1. Backup / restore UX and status polish
 2. Dev access / SSH / hosted IDE hardening
 3. Service discovery and provisioning validation
 4. Email notifications and launch polish