- Add harness_chat_conversations table with messages stored as JSONB
- Add chat-store.ts with CRUD operations
- Add /api/conversations routes (GET list, POST create, PATCH update,
DELETE remove)
- Update chat-tab.tsx to load conversations on mount, persist metadata
changes immediately, and save messages after streaming completes
- Add loading state while conversations are being fetched
The boot code only set baseUrl when it was empty, so a stale value
(gitea.platform.svc vs gitea-helm-http.platform.svc) was never
corrected. It now always updates baseUrl to match the env var.
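A minimal sketch of the change, with illustrative names (`Settings` and `syncBaseUrl` are not the real identifiers):

```typescript
// Hypothetical sketch: always overwrite the stored base URL from the env
// var instead of only filling it in when empty.
interface Settings {
  giteaBaseUrl?: string;
}

function syncBaseUrl(settings: Settings, envUrl: string | undefined): Settings {
  if (!envUrl) return settings; // nothing to sync against
  // Before: a guard like `if (!settings.giteaBaseUrl)` left stale values
  // (gitea.platform.svc) in place forever. Now the env var always wins.
  return { ...settings, giteaBaseUrl: envUrl };
}
```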
Projects were stored purely in useState — lost on every page refresh.
- Add harness_projects table (id, name, workspaces jsonb)
- Add /api/projects CRUD route (GET/POST/PUT/DELETE)
- Load projects from DB on dashboard mount
- All project mutations (create, delete, add/remove repo) now
persist via API calls
- Closing the last tab shows an empty state with a "Start new chat"
button instead of auto-creating a new conversation
- Double-click tab label to rename conversations inline
- Replace workspace repo search with project dropdown using same
SearchableDropdown component (projects passed from dashboard)
- Move context bar from top to bottom, above text input
- Use sentence case "Thinking..." instead of all-caps
Log the Gitea search URL, response status, body on failure, and result
count to help debug why repos aren't being found. Also handle both
data.data and direct array response formats.
Infrastructure:
- Add Longhorn PVCs for knowledge store (1Gi) and workspace (10Gi),
replacing ephemeral emptyDir for workspace
- Set HARNESS_KNOWLEDGE_DIR=/data/knowledge env var in deployment
Chat UI improvements:
- Thinking ticker: pulsing indicator while waiting for model response
and between tool-use rounds
- Context bar: message count, estimated token usage, color-coded fill
bar against model context window
- Multiple conversation tabs: independent state per conversation with
create/close/switch, model selection inherited on new tabs
- Workspace binding: per-conversation repo search that injects project
context into the system prompt
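For the context bar, a common way to estimate token usage without shipping a tokenizer is the rough "4 characters per token" heuristic. This sketch is an assumption about the approach, not the app's actual code:

```typescript
// Rough token estimate for a context-usage bar (~4 chars/token heuristic).
function estimateTokens(messages: { content: string }[]): number {
  const chars = messages.reduce((sum, m) => sum + m.content.length, 0);
  return Math.ceil(chars / 4);
}

// Fill ratio against the model's context window, clamped to [0, 1],
// suitable for driving a color-coded fill bar.
function contextFill(messages: { content: string }[], contextWindow: number): number {
  return Math.min(1, estimateTokens(messages) / contextWindow);
}
```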
Browse and manage Gitea container images directly from the dashboard.
- List all container images with tag counts
- Drill into an image to see all tags sorted by date
- Delete individual tags (with confirmation)
- Visual distinction for digests vs named tags, "latest" badge
- Proxies Gitea's /api/v1/packages API with sealed token
OpenCode Zen and Go models were being routed to api.openai.com because
their base URLs weren't in the provider map. Add correct base URLs
(opencode.ai/zen and opencode.ai/zen/go) and error on unknown providers
instead of silently falling back to OpenAI.
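The shape of the fix, sketched below. The two Zen URLs come from the commit message; the map keys and the OpenAI entry are assumptions:

```typescript
// Illustrative provider -> base-URL registry.
const PROVIDER_BASE_URLS: Record<string, string> = {
  openai: "https://api.openai.com/v1",
  "opencode-zen": "https://opencode.ai/zen",
  "opencode-zen-go": "https://opencode.ai/zen/go",
};

function baseUrlFor(provider: string): string {
  const url = PROVIDER_BASE_URLS[provider];
  // Fail loudly instead of silently falling back to api.openai.com.
  if (!url) throw new Error(`unknown provider: ${provider}`);
  return url;
}
```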
The dropdown was empty because it fetched from /api/models/curated
(small DB subset) filtered to enabled-only. Switch to /api/models
which queries all providers live and returns 300+ available models.
- Object store is now a tab ("Object Browser") alongside "Buckets"
- Buckets tab: create and delete buckets
- New directory creation via NEW DIR button
- DOWNLOAD and DELETE buttons are now full words with borders and
spacing between them to prevent misclicks
- Bucket selector dropdown when multiple buckets exist
- All API routes accept optional bucket query param
Replace the xterm/PTY-based chat tab with a direct-to-API streaming chat
UI that calls provider APIs (Anthropic, OpenAI) with SSE streaming and
inline tool execution.
- Extract 18 shared tools from MCP server into chat-tools.ts registry
(knowledge, task, agent, model, web, shell, filesystem tools)
- Add streaming adapters for Anthropic and OpenAI APIs (raw fetch, no SDK)
- Add POST /api/chat SSE route with tool-use loop (max 10 rounds)
- Rewrite chat-tab.tsx as message-based UI with model selector,
streaming text, and collapsible tool call blocks
- Refactor mcp-server.ts to consume shared tool registry
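The tool-use loop in the SSE route can be sketched roughly as follows; `runChat`, `ModelTurn`, and the string-based history are illustrative simplifications of whatever the route actually does:

```typescript
type ToolCall = { name: string; args: unknown };
type ModelTurn = { text: string; toolCalls: ToolCall[] };

// Bounded tool-use loop: stream a model turn, execute any requested tools,
// feed the results back, and stop after maxRounds (10 in the route above).
async function runChat(
  callModel: (history: string[]) => Promise<ModelTurn>,
  runTool: (call: ToolCall) => Promise<string>,
  history: string[],
  maxRounds = 10,
): Promise<string[]> {
  for (let round = 0; round < maxRounds; round++) {
    const turn = await callModel(history);
    history.push(turn.text);
    if (turn.toolCalls.length === 0) break; // model finished without tools
    for (const call of turn.toolCalls) {
      history.push(await runTool(call)); // inline tool execution
    }
  }
  return history;
}
```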
Lightweight Next.js app for browsing, uploading, and downloading
artifacts from the cluster-local Garage S3 bucket. Uses the harness
design system. Features:
- File/folder browser with breadcrumb navigation
- Drag-and-drop upload
- Download and delete
- Ingress at platform.coreworlds.io (internal-only)
Also adds platform-dash to CI/deploy workflows.
Download from garage.platform.svc:3902 (web gateway) instead of
GitHub releases. Eliminates external network dependency during builds.
Verified download works from inside DinD-spawned containers.
Cluster-local object store for build artifacts (CLI binaries etc.)
so Docker builds don't depend on flaky external downloads.
- Single-node Garage v1.0.1 StatefulSet (LMDB, replication=1)
- Metadata on longhorn-nvme (1Gi), data on longhorn HDD (20Gi)
- S3 API at garage.platform.svc:3900
- External ingress at s3.coreworlds.io (internal-only)
- SealedSecret for admin token and RPC secret
Piping curl directly to tar fails in CI when the download is chunked,
causing "not found in archive". Download to a temp file first.
Verified on linux/amd64.
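The pattern, sketched with a local `cp` standing in for the network download (`fetch_and_extract` is an illustrative helper, not the actual script):

```shell
# Download to a temp file first, then extract; piping curl straight into
# tar can fail mid-stream on chunked responses.
set -euo pipefail

fetch_and_extract() {
  local src="$1" dest="$2" tmp
  tmp="$(mktemp)"
  cp "$src" "$tmp"            # real version: curl -fsSL "$src" -o "$tmp"
  tar -xzf "$tmp" -C "$dest"  # extraction now sees a complete archive
  rm -f "$tmp"
}
```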
The opencode.ai install script fails in CI (TLS errors, missing $SHELL).
Download the pre-built musl binary directly from GitHub releases instead.
Verified locally on linux/amd64 with PTY spawn.
The opencode installer script requires $SHELL to be set, which Alpine's
sh in Docker doesn't provide. This caused the install to download the
binary but fail before placing it, an error silently swallowed by the
|| fallback. Also hardcode the known install path and fail the build if
the binary is missing.
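The hardening can be sketched as follows (paths and helper names are illustrative):

```shell
# Export SHELL for installers that assume it, and fail the build outright
# if the binary didn't land where expected, instead of letting an
# `|| ...` fallback swallow the error.
set -eu
export SHELL="${SHELL:-/bin/sh}"

require_binary() {
  # Return non-zero (failing the build under `set -e`) if missing.
  [ -x "$1" ] || { echo "missing binary: $1" >&2; return 1; }
}
```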
Add "gitea" to the local RepoResult provider type (it was missing from
the UI interface despite being returned by repo-search). Copy the
opencode binary instead of symlinking; a symlink through /root/ is
inaccessible to the nextjs user due to directory permissions.
- GITEA_URL was pointing to gitea.platform.svc but the Helm chart
names the HTTP service gitea-helm-http.platform.svc
- Add Gitea badge (GT, green) to repo search results UI
- Update placeholder and credential hint to mention Gitea
- Rewrite internal service URLs to external gitea.coreworlds.io in
search results so agents can clone from outside the cluster
- Add error logging to diagnose search failures
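The internal-to-external URL rewrite might look like this (the hostnames are from the commit; the regex itself is an assumption):

```typescript
// Rewrite cluster-internal clone URLs to the external ingress host so
// agents outside the cluster can clone the repo.
function toExternalCloneUrl(url: string): string {
  return url.replace(
    /^https?:\/\/gitea-helm-http\.platform\.svc(:\d+)?/,
    "https://gitea.coreworlds.io",
  );
}
```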
The opencode curl installer puts the binary in /root/.local/bin, which
isn't on PATH for the nextjs user. Add a symlink to /usr/local/bin
after install, and ensure /usr/local/bin is always in the PATH passed
to spawned agent processes.
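Prepending the directory defensively might look like this (function and command names are illustrative):

```typescript
import { spawn } from "node:child_process";

// Guarantee /usr/local/bin is present in the PATH handed to children.
function withUsrLocalBin(path: string | undefined): string {
  const base = path ?? "";
  if (base.split(":").includes("/usr/local/bin")) return base;
  return base ? `/usr/local/bin:${base}` : "/usr/local/bin";
}

// Spawn an agent with the fixed-up PATH.
function spawnAgent(cmd: string, args: string[]) {
  return spawn(cmd, args, {
    env: { ...process.env, PATH: withUsrLocalBin(process.env.PATH) },
  });
}
```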
The localhost check based on the Host and X-Forwarded-For headers was
unreliable in the standalone Next.js server, which may inject forwarded
headers internally. Replace it with a per-process random token shared
between the PTY server and the API route via an env var.
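A sketch of the token handshake (the variable and env names are assumptions):

```typescript
import { randomBytes } from "node:crypto";

// Generated once per process; the PTY server and the API route read the
// same env var, so only callers inside this process tree can pass.
if (!process.env.PTY_AUTH_TOKEN) {
  process.env.PTY_AUTH_TOKEN = randomBytes(32).toString("hex");
}

function isAuthorized(headerToken: string | undefined): boolean {
  // No reliance on Host / X-Forwarded-For, which the standalone server
  // may rewrite. (Production code might prefer crypto.timingSafeEqual.)
  return !!headerToken && headerToken === process.env.PTY_AUTH_TOKEN;
}
```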
In K8s, HOSTNAME is set to the pod name, so the server only listened
on that interface. The PTY server's loopback fetch to 127.0.0.1 was
connection-refused. Always bind to 0.0.0.0 so loopback works.
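The fix, demonstrated on a throwaway HTTP server (`startAndProbe` is illustrative; the real code is the custom server's listen call):

```typescript
import { createServer } from "node:http";

// Bind to 0.0.0.0 (all interfaces), not process.env.HOSTNAME: in K8s,
// HOSTNAME is the pod name, and binding only that interface makes
// loopback connections to 127.0.0.1 fail with ECONNREFUSED.
function startAndProbe(): Promise<string> {
  return new Promise((resolve, reject) => {
    const server = createServer((_req, res) => res.end("ok"));
    server.listen(0, "0.0.0.0", async () => {
      const { port } = server.address() as { port: number };
      try {
        // The loopback fetch that previously got connection-refused.
        const body = await (await fetch(`http://127.0.0.1:${port}/`)).text();
        resolve(body);
      } catch (err) {
        reject(err);
      } finally {
        server.close();
      }
    });
  });
}
```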
The standalone next package is trimmed and doesn't include webpack.
The custom server.js was using next() which triggers config loading
that requires webpack. Fix by extracting the standalone config at
build time and setting __NEXT_PRIVATE_STANDALONE_CONFIG before
requiring next, matching what the generated standalone server does.
server.js requires 'next', which the standalone output places at
apps/harness/node_modules/next. Running server.js from the repo root
meant Node couldn't resolve it. Move server.js and pty-server.js into
apps/harness/ so module resolution finds the standalone node_modules.
The Dockerfile check in the while-read loop used `[ -f ... ] && echo`,
which exits non-zero for packages without Dockerfiles. With bash's
pipefail, this killed the entire step. Also remove unused GitHub
workflow copies since CI runs on Gitea only.
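The failure mode and fix, reproduced in miniature with placeholder directories:

```shell
# Under `set -e`, a bare `[ -f ... ] && echo ...` returns non-zero when
# the file is missing and kills the step. An explicit `if` keeps the
# loop's exit status clean.
set -euo pipefail
root="$(mktemp -d)"
mkdir -p "$root/with-docker" "$root/no-docker"
touch "$root/with-docker/Dockerfile"

for app in with-docker no-docker; do
  # Before: [ -f "$root/$app/Dockerfile" ] && echo "$app"   # fatal on miss
  if [ -f "$root/$app/Dockerfile" ]; then
    echo "$app"
  fi
done
```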
Turbo's change detection includes shared packages like @homelab/db,
which don't have Dockerfiles. Filter to only apps with a Dockerfile
to prevent 'path not found' errors during docker build.