Dev Log

Build notes from the Jefe ecosystem

Goose Integration Review, New Agent Team, and Custom AI Training Research

Mission Control 2026-02-22

Expanded the fleet today. Spawned a new specialized agent team for JefeAI, audited Goose's integration status, ran four agents in parallel to fix a pile of issues, stood up JefeAgentOS's Ollama service layer, and kicked off research into custom model training and autonomous AI cycles.

New Agents: jefeai-code & jefeai-lab

Created two specialized agents scoped to the JefeAI repository. jefeai-code handles production implementation — debugging, refactoring, API work, cross-project integration. jefeai-lab handles the experimental side — evaluating new models, researching RAG improvements, prototyping agent architectures. Both got seeded MEMORY.md files with institutional knowledge from the jefeai-engineer's accumulated memory: RAG collection stats, known issues (port discrepancy, dependency chaos, stale spec exports), integration points, and development patterns. They hit the ground running.

Goose Audit

Block's Goose (v1.21.1) was installed but loosely integrated. The audit found:

  • a commit claiming Goose was moved to a different directory when it wasn't
  • hardcoded user paths throughout the integration docs
  • a dashboard that configures Goose's YAML but no actual orchestration code
  • a port 3000 conflict with FreeChat in web mode
  • a JefeAgentOS integration roadmap that was entirely aspirational

The key strategic finding: Goose's headless mode (goose run -t) enables fully unattended task execution powered by Ollama, making it the ideal zero-cost autonomous worker bee for nightly code reviews, doc updates, and test generation.
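A minimal sketch of what those nightly headless runs could look like as a scheduler-friendly wrapper. Only `goose run -t` comes from the audit; the task strings, timeout, and helper names here are illustrative:

```python
import subprocess

# Illustrative nightly tasks; the real list would live in config.
NIGHTLY_TASKS = [
    "Review yesterday's commits and summarize risky changes",
    "Regenerate docs for modules changed in the last 24 hours",
]

def build_goose_cmd(task: str) -> list[str]:
    # Headless invocation: `goose run -t "<task>"` needs no TTY,
    # so Windows Task Scheduler can fire it unattended.
    return ["goose", "run", "-t", task]

def run_nightly(tasks=NIGHTLY_TASKS, timeout=1800):
    for task in tasks:
        # Capture output so the scheduler's log stays readable.
        result = subprocess.run(build_goose_cmd(task),
                                capture_output=True, text=True,
                                timeout=timeout)
        yield task, result.returncode
```

Each task becomes one short-lived Goose process, which keeps failures isolated: one stuck task times out without blocking the rest of the night's queue.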

Four-Agent Parallel Fix Sprint

| Agent | Task | Result |
|---|---|---|
| jefeai-code | Dashboard consolidation | Architecture validated (both dirs needed), fixed hardcoded path from a previous machine, made API base URL origin-relative for tunnel access |
| jefeai-code | Paths & JefeAgentOS | Scrubbed all hardcoded user paths from Goose docs, fixed MCP path casing, implemented Ollama service + config + API endpoint in JefeAgentOS |
| jefeai-code | Misc fixes | Deleted nul artifact, added port conflict warning, fixed CLAUDE.md port 5000→8000 in 4 places, updated project structure, exported SpecIndexer/SpecRetriever from RAG module |
| jefeai-lab | Custom AI research | Comprehensive report on fine-tuning feasibility, continuous learning pipelines, Goose automation, and phased roadmap |

JefeAgentOS Ollama Service

Phase 1 of the Ollama integration roadmap is now real code: OllamaService class with async list_models(), query(), chat(), and is_available() methods. Config added to Settings (ollama_enabled, ollama_host, ollama_default_model via env vars). New GET /models/ollama endpoint returns 503 gracefully when Ollama is down instead of crashing. Uses the existing singleton pattern.
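A stdlib-only sketch of the shape of that service layer. The `/api/tags`, `/api/generate`, and `/api/chat` routes are Ollama's documented API; the exact env var names, the default model, and the helper names here are assumptions, not the real JefeAgentOS code:

```python
import asyncio, json, os, urllib.error, urllib.request

class OllamaService:
    """Async wrapper over Ollama's HTTP API (sketch)."""

    def __init__(self, host=None, default_model=None):
        # Env-driven config, mirroring the Settings fields above.
        self.enabled = os.environ.get("OLLAMA_ENABLED", "true").lower() == "true"
        self.host = host or os.environ.get("OLLAMA_HOST", "http://localhost:11434")
        self.default_model = default_model or os.environ.get(
            "OLLAMA_DEFAULT_MODEL", "qwen2.5:1.5b")  # assumed default

    async def _post(self, path, payload):
        req = urllib.request.Request(
            self.host + path,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"})
        # Run the blocking HTTP call off the event loop.
        return await asyncio.to_thread(
            lambda: json.load(urllib.request.urlopen(req)))

    async def list_models(self):
        data = await asyncio.to_thread(
            lambda: json.load(urllib.request.urlopen(self.host + "/api/tags")))
        return [m["name"] for m in data.get("models", [])]

    async def query(self, prompt, model=None):
        out = await self._post("/api/generate", {
            "model": model or self.default_model,
            "prompt": prompt, "stream": False})
        return out.get("response", "")

    async def chat(self, messages, model=None):
        out = await self._post("/api/chat", {
            "model": model or self.default_model,
            "messages": messages, "stream": False})
        return out.get("message", {}).get("content", "")

    async def is_available(self):
        try:
            await self.list_models()
            return True
        except (urllib.error.URLError, OSError):
            return False

_service = None

def get_ollama_service():
    # Lazy singleton, matching the existing pattern in JefeAgentOS.
    global _service
    if _service is None:
        _service = OllamaService()
    return _service
```

The `/models/ollama` route then reduces to: call `is_available()`, return the model list on success, or a 503 response when it comes back False, so a down Ollama degrades instead of crashing the API.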

Custom AI Training Research

The lab agent's research revealed that the RTX 3050 Ti (4GB VRAM) can fine-tune 1-3B models via QLoRA using Unsloth. The play: convert our 4,272 RAG chunks into instruction-response pairs using Claude as a synthetic data generator (~$5), train a Qwen2.5-1.5B model, export to GGUF, and deploy as jefeai-coder in Ollama — a model that actually knows our codebase. Combined with git-hook-triggered RAG re-indexing and Goose running nightly automation tasks, we have a path to continuous AI improvement that runs entirely locally.
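The chunk-to-training-data step above can be sketched as a small pipeline. The Alpaca-style JSONL format, the prompt wording, and the function names are assumptions; only "Claude as a synthetic data generator" comes from the plan:

```python
import json

# Illustrative prompt; the production prompt would be tuned.
PROMPT_TEMPLATE = (
    "Given this excerpt from our codebase docs, write one question a "
    "developer might ask and answer it using only the excerpt:\n\n{chunk}"
)

def chunks_to_pairs(chunks, generate):
    # `generate` is any callable prompt -> (question, answer);
    # in production it would call the Claude API for each chunk.
    for chunk in chunks:
        question, answer = generate(PROMPT_TEMPLATE.format(chunk=chunk))
        yield {"instruction": question, "input": "", "output": answer}

def write_jsonl(pairs, path):
    # One JSON object per line, the format most trainers ingest.
    with open(path, "w", encoding="utf-8") as f:
        for pair in pairs:
            f.write(json.dumps(pair, ensure_ascii=False) + "\n")
```

Keeping the generator pluggable means the same pipeline can be smoke-tested with a stub before spending the ~$5 of Claude calls on all 4,272 chunks.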

What's Next

  • Commit JefeAgentOS Ollama changes in a dedicated session
  • Set up Goose headless tasks with Windows Task Scheduler
  • Implement git post-commit hooks for auto RAG re-indexing
  • Upgrade embedding model to nomic-embed-text-v1.5
  • Begin RAG-to-training-data pipeline for custom model fine-tuning
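The post-commit hook in that list can be sketched as a short script dropped into .git/hooks/post-commit. The reindex entry point, module path, and indexed suffixes are hypothetical; only the hook-triggered re-indexing idea is from the plan:

```python
#!/usr/bin/env python3
"""Sketch of .git/hooks/post-commit: re-index only what changed."""
import subprocess

# Hypothetical: file types the RAG index actually ingests.
INDEXED_SUFFIXES = (".py", ".md", ".yaml")

def changed_files():
    # Files touched by the commit that just landed (HEAD).
    out = subprocess.run(
        ["git", "diff-tree", "--no-commit-id", "--name-only", "-r", "HEAD"],
        capture_output=True, text=True).stdout
    return out.splitlines()

def needs_reindex(paths, suffixes=INDEXED_SUFFIXES):
    return [p for p in paths if p.endswith(suffixes)]

def main():
    targets = needs_reindex(changed_files())
    if targets:
        # Hypothetical CLI; swap in the real RAG indexer invocation.
        subprocess.run(["python", "-m", "jefeai.rag.reindex", *targets])

# The installed hook body would simply call main().
```

Filtering to changed files keeps the hook fast enough that commits don't feel sluggish; a full re-index can stay a nightly Goose task instead.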