Documentation

Localcode is a full-featured AI coding agent that runs in your terminal. It connects to any LLM provider (local or cloud), reads your codebase, edits files, runs commands, and iterates until the job is done. It comes with 139 specialized agents across engineering, design, testing, security, DevOps, and more — each with deep domain expertise.

Unlike cloud-only coding assistants, Localcode runs locally with Ollama — your code never leaves your machine. No subscriptions. No cloud lock-in. No telemetry unless you opt in.

Installation

From npm (recommended)

npm install -g @localcode/cli

From source

git clone https://github.com/thealxlabs/localcode.git
cd localcode
npm install
npm run build
npm link

Requirements

Node.js and npm (to install and run the CLI). Ollama is needed only if you want to run models locally.

Quick Start

  1. Install: npm install -g @localcode/cli
  2. Launch: Run localcode in your terminal
  3. Choose provider: Pick Ollama for free local AI, or connect a cloud provider
  4. Start coding: Describe what you need — Localcode handles the rest

Configuration

Localcode uses ~/.localcode/settings.json for global config and .localcode/settings.json for project-specific config. The two are merged, with project-specific settings taking precedence over global ones.

{
  "provider": {
    "provider": "ollama",
    "model": "qwen2.5-coder:7b",
    "baseUrl": "http://localhost:11434",
    "temperature": 0.3,
    "maxTokens": 8192
  },
  "agentDispatch": {
    "enabled": true,
    "requireApproval": false,
    "maxConcurrentAgents": 5,
    "dispatchStrategy": "smart",
    "qualityGate": true,
    "maxRetries": 3
  },
  "permissions": {
    "fileEdit": "allow",
    "fileWrite": "allow",
    "bash": "allow",
    "bashPatterns": {
      "git *": "allow",
      "npm test*": "allow"
    }
  },
  "session": {
    "autoSave": true,
    "autoSaveInterval": 30,
    "maxHistorySize": 1000,
    "autoCompact": true,
    "compactThreshold": 50
  },
  "git": {
    "enabled": true,
    "autoCommit": false,
    "autoStash": false
  },
  "memory": {
    "enabled": true,
    "autoExtract": true,
    "persistentMemory": true
  },
  "mcp": {
    "enabled": true,
    "servers": {},
    "autoConnect": true,
    "timeout": 30000
  }
}
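The merge of global and project settings can be pictured with a small sketch (a hypothetical helper, not Localcode's actual implementation): nested sections are merged key by key, with the overriding side winning.

```javascript
// Hypothetical deep-merge sketch, not Localcode's actual code: keys from
// `override` win, and plain nested objects (like the "provider" section)
// are merged recursively instead of being replaced wholesale.
function mergeSettings(base, override) {
  const isObj = v => v && typeof v === 'object' && !Array.isArray(v);
  const out = { ...base };
  for (const [key, value] of Object.entries(override)) {
    out[key] = isObj(value) && isObj(out[key]) ? mergeSettings(out[key], value) : value;
  }
  return out;
}

// A config that only overrides the model keeps every other key intact:
const baseCfg = { provider: { provider: 'ollama', model: 'qwen2.5-coder:7b', temperature: 0.3 } };
const overrideCfg = { provider: { model: 'llama3.1:8b' } };
const merged = mergeSettings(baseCfg, overrideCfg);
// merged.provider.model is 'llama3.1:8b'; merged.provider.temperature is still 0.3
```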

Environment Variables

| Variable | Description |
| --- | --- |
| OPENAI_API_KEY | OpenAI API key (auto-loaded) |
| ANTHROPIC_API_KEY | Anthropic API key (auto-loaded) |
| GROQ_API_KEY | Groq API key (auto-loaded) |
| LOCALCODE_PROVIDER | Default provider override |
| LOCALCODE_MODEL | Default model override |
| LOCALCODE_WORKDIR | Default working directory |
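For example, a shell profile might pre-load an API key and override the defaults (placeholder values throughout):

```shell
# ~/.profile (or equivalent) — placeholder values, adjust to your setup
export GROQ_API_KEY="gsk_..."                     # auto-loaded at startup
export LOCALCODE_PROVIDER="groq"                  # overrides the configured provider
export LOCALCODE_MODEL="llama-3.3-70b-versatile"  # overrides the configured model
export LOCALCODE_WORKDIR="$HOME/projects/my-app"
```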

Providers

Localcode supports any OpenAI-compatible API endpoint. Switch providers mid-session with /provider <name>.

| Provider | Setup | Local? | Default Model |
| --- | --- | --- | --- |
| Ollama | Install Ollama, run localcode | Yes | qwen2.5-coder:7b |
| OpenAI | Set OPENAI_API_KEY | No | gpt-4o |
| Anthropic | Set ANTHROPIC_API_KEY | No | claude-sonnet-4-5 |
| Groq | Set GROQ_API_KEY | No | llama-3.3-70b-versatile |

Tools

Localcode has 10 built-in tools that agents use autonomously. Each tool requires permission before execution (configurable in settings).

| Tool | Description | Parameters |
| --- | --- | --- |
| read_file | Read file contents with line numbers | path |
| write_file | Create or overwrite a file | path, content |
| patch_file | Edit part of a file (old_str → new_str) | path, old_str, new_str |
| delete_file | Delete a file | path |
| move_file | Move/rename a file | source, destination |
| run_shell | Run any shell command | command, cwd |
| list_dir | List directory contents (recursive) | path, recursive |
| search_files | Grep-like search across the project | pattern, path, case_insensitive |
| find_files | Find files by name pattern | pattern, path |
| git_operation | Run git commands | args |
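To make patch_file's old_str → new_str semantics concrete, here is a sketch that assumes the old string must occur exactly once so the edit is unambiguous (an assumption about its behavior, not Localcode's actual code):

```javascript
// Illustrative old_str → new_str patching; assumes patch_file requires
// old_str to occur exactly once so the edit site is unambiguous.
function applyPatch(content, oldStr, newStr) {
  const first = content.indexOf(oldStr);
  if (first === -1) throw new Error('old_str not found in file');
  const second = content.indexOf(oldStr, first + oldStr.length);
  if (second !== -1) throw new Error('old_str is ambiguous (multiple matches)');
  return content.slice(0, first) + newStr + content.slice(first + oldStr.length);
}
```

Requiring a unique match is a common safeguard in string-based editors: it forces the model to quote enough context that the wrong occurrence cannot be edited by accident.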

Commands

Type / to see all available commands. Localcode has 60+ slash commands organized by category.

Session Commands

| Command | Description |
| --- | --- |
| /clear | Clear conversation history |
| /compact | Summarize & compress conversation |
| /checkpoint | Save a checkpoint |
| /restore | Restore a checkpoint |
| /retry | Regenerate last response |
| /copy | Copy last response to clipboard |
| /export | Export conversation to markdown |
| /undo | Undo last file change |
| /status | Show session info |
| /exit | Exit |

Agent Commands

| Command | Description |
| --- | --- |
| /agent | Browse and activate agents |
| /agents | List all agents by category |
| /orchestrate | Run multi-agent pipeline |
| /nexus | Full NEXUS pipeline |
| /swarm | Parallel agent swarm |

Provider Commands

| Command | Description |
| --- | --- |
| /provider | Switch AI provider |
| /apikey | Set API key for current provider |
| /model | Change model |
| /models | List available models |
| /cost | Show estimated session cost |

Utility Commands

| Command | Description |
| --- | --- |
| /commit | AI-generated git commit |
| /review | AI code review of current changes |
| /diff | Show session file changes |
| /doctor | Health check |
| /memory | Show and manage memory files |
| /hooks | Show configured hooks |
| /mcp | Manage MCP servers |
| /benchmarks | Run performance benchmarks |
| /settings | Show current settings |
| /telemetry | View/toggle telemetry |
| /rate-limit | View rate limit status |

Permissions

Every file write, patch, and shell command asks for permission first, with three approval modes available.

Configure per-tool and per-command pattern in settings:

"permissions": {
  "fileEdit": "allow",
  "fileWrite": "allow",
  "bash": "allow",
  "bashPatterns": {
    "git *": "allow",
    "npm test*": "allow"
  }
}
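The bashPatterns keys look like glob prefixes where * matches any sequence of characters. One plausible matching rule, sketched below, is an assumption and not the documented algorithm:

```javascript
// Sketch: match a shell command against glob-style permission patterns,
// where '*' matches any run of characters. Hypothetical helpers.
function matchesPattern(command, pattern) {
  // Escape regex metacharacters in the literal parts, then turn '*' into '.*'.
  const regex = new RegExp(
    '^' +
      pattern
        .split('*')
        .map(p => p.replace(/[.*+?^${}()|[\]\\]/g, '\\$&'))
        .join('.*') +
      '$'
  );
  return regex.test(command);
}

function decide(command, patterns, fallback = 'ask') {
  for (const [pattern, verdict] of Object.entries(patterns)) {
    if (matchesPattern(command, pattern)) return verdict;
  }
  return fallback; // no pattern matched: fall back to the default mode
}

const patterns = { 'git *': 'allow', 'npm test*': 'allow' };
// decide('git status', patterns) -> 'allow'
// decide('rm -rf /tmp/x', patterns) -> 'ask'
```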

Agents Overview

Localcode comes with 139 specialized agents across engineering, testing, security, DevOps, design, marketing, product, and strategy. Each agent has deep domain expertise encoded in its system prompt.

Engineering (30+ agents)

AI Engineer, Senior Developer, Software Architect, Backend Architect, Frontend Developer, Database Optimizer, API Designer, Code Reviewer, Git Workflow Master, and 21 more.

Testing (10+ agents)

API Tester, Test Results Analyzer, Reality Checker, Test-Driven Developer, Integration Tester, E2E Test Engineer, and 4 more.

Security (8+ agents)

Security Engineer, Threat Detection, Compliance Auditor, Blockchain Security, Penetration Tester, and 3 more.

DevOps (10+ agents)

DevOps Automator, SRE, Infrastructure Maintainer, Cloud Architect, CI/CD Engineer, and 5 more.

Auto-Dispatch

Agents are automatically dispatched based on task context. No manual switching needed:

User: "Fix the authentication bug"
→ Auto-dispatches: security-engineer, backend-architect, testing-reality-checker

User: "Optimize the database queries"
→ Auto-dispatches: database-optimizer, performance-benchmarker

Configure in settings: agentDispatch.enabled, agentDispatch.requireApproval.
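As a mental model, the smart strategy can be pictured as routing keywords in the task to agent teams. The toy router below is invented for illustration; the real dispatcher is certainly more sophisticated than substring matching:

```javascript
// Toy keyword router approximating auto-dispatch; not the real "smart"
// strategy. Agent names are taken from the examples above.
const dispatchTable = [
  { keywords: ['auth', 'login', 'security'], agents: ['security-engineer', 'backend-architect'] },
  { keywords: ['database', 'query', 'sql'], agents: ['database-optimizer', 'performance-benchmarker'] },
];

function dispatch(task) {
  const text = task.toLowerCase();
  const agents = new Set(); // de-duplicates agents matched by several rules
  for (const rule of dispatchTable) {
    if (rule.keywords.some(k => text.includes(k))) {
      rule.agents.forEach(a => agents.add(a));
    }
  }
  return [...agents];
}

// dispatch('Optimize the database queries')
//   -> ['database-optimizer', 'performance-benchmarker']
```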

Orchestration

Run multi-agent pipelines with /orchestrate "task" <mode>.

NEXUS Pipeline

The full NEXUS pipeline coordinates multiple agents through 7 phases:

  1. Discovery — Requirements gathering, competitive analysis
  2. Strategy — Architecture decisions, technology selection
  3. Foundation — Project scaffolding, CI/CD setup
  4. Build — Core implementation with parallel agents
  5. Hardening — Security audit, performance optimization, testing
  6. Launch — Deployment, documentation, monitoring
  7. Operate — Ongoing maintenance, incident response

Quality gates are enforced between phases, and Dev↔QA loops run for all implementation tasks.
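The phase-gate flow can be sketched as sequential stages separated by gate checks (illustrative only; runPhase and passesGate are stand-ins for the real agent work and quality gates):

```javascript
// Illustrative phase runner: each phase must pass its quality gate before
// the next one starts. Phase names are from the NEXUS pipeline above.
const phases = ['Discovery', 'Strategy', 'Foundation', 'Build', 'Hardening', 'Launch', 'Operate'];

function runPipeline(runPhase, passesGate) {
  const completed = [];
  for (const phase of phases) {
    runPhase(phase); // dispatch agents for this phase
    if (!passesGate(phase)) {
      throw new Error(`Quality gate failed after ${phase}`);
    }
    completed.push(phase);
  }
  return completed;
}
```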

Plugins

Create custom commands by dropping .js files in ~/.localcode/plugins/:

// ~/.localcode/plugins/my-plugin.js
export default {
  name: 'my-plugin',
  trigger: '/mycommand',
  description: 'Does something useful',
  async execute(args, context) {
    context.addDisplay({ role: 'assistant', content: `Hello ${args}!` });
  }
};

Plugins are sandboxed with a 30-second execution timeout and blocked from accessing dangerous modules (eval, child_process, fs, net, http).

Hooks

Custom logic that runs before/after tool use:

// ~/.localcode/hooks.json
{
  "PreToolUse": [{
    "matcher": "run_shell",
    "hooks": [{ "type": "command", "command": "echo 'Running: $COMMAND'" }]
  }],
  "PostToolUse": [{
    "matcher": "write_file",
    "hooks": [{ "type": "command", "command": "prettier --write $FILE_PATH" }]
  }],
  "Notification": [{
    "matcher": ".*",
    "hooks": [{ "type": "command", "command": "notify-send 'Localcode' '$MESSAGE'" }]
  }]
}
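The $COMMAND, $FILE_PATH, and $MESSAGE placeholders suggest simple variable substitution from the tool-call context before the hook command runs. The sketch below assumes that is the mechanism; it is not Localcode's actual code:

```javascript
// Sketch: expand $VAR placeholders in a hook command from the tool-call
// context. Unknown placeholders are left untouched. Hypothetical helper;
// the variable names come from the hooks example above.
function expandHookCommand(template, vars) {
  return template.replace(/\$([A-Z_]+)/g, (match, name) =>
    name in vars ? vars[name] : match
  );
}

// expandHookCommand('prettier --write $FILE_PATH', { FILE_PATH: 'src/app.ts' })
//   -> 'prettier --write src/app.ts'
```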

MCP Servers

Connect external tools via Model Context Protocol:

{
  "mcp": {
    "enabled": true,
    "servers": {
      "filesystem": {
        "command": "npx",
        "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/project"]
      }
    },
    "autoConnect": true,
    "timeout": 30000
  }
}

MCP servers auto-reconnect with exponential backoff if they disconnect.
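Exponential backoff doubles the wait between reconnect attempts up to a cap. The sketch below uses invented values (1 s base, 2× factor, 30 s cap), not Localcode's actual parameters:

```javascript
// Sketch of exponential backoff delays for MCP reconnects. Base delay,
// growth factor, and cap are invented values, not Localcode's.
function backoffDelays(attempts, baseMs = 1000, factor = 2, capMs = 30000) {
  return Array.from({ length: attempts }, (_, i) =>
    Math.min(baseMs * factor ** i, capMs)
  );
}

// backoffDelays(6) -> [1000, 2000, 4000, 8000, 16000, 30000]
```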

Memory

Localcode maintains persistent memory across sessions.

Telemetry

Telemetry is opt-in and anonymous. No PII is collected. Data is stored locally in ~/.localcode/telemetry.log.

Tracked events: command usage, tool execution, errors, session start/end, agent dispatch, provider switches.

Disable with /telemetry off or in settings:

{
  "telemetry": {
    "enabled": false
  }
}

Rate Limiting

Localcode includes built-in rate limiting to protect you from burning API credits.

Check status with /rate-limit.
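A common way to build such a limiter is a token bucket: a burst allowance that refills at a steady rate. The sketch below is illustrative, with invented parameters, and is not Localcode's implementation:

```javascript
// Illustrative token bucket: `capacity` requests allowed in a burst,
// refilled at `refillPerSec` tokens per second. Invented parameters.
class TokenBucket {
  constructor(capacity, refillPerSec) {
    this.capacity = capacity;
    this.refillPerSec = refillPerSec;
    this.tokens = capacity;
    this.last = Date.now();
  }
  tryAcquire(now = Date.now()) {
    // Refill based on time elapsed since the last call, capped at capacity.
    const elapsed = (now - this.last) / 1000;
    this.tokens = Math.min(this.capacity, this.tokens + elapsed * this.refillPerSec);
    this.last = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true; // request may proceed
    }
    return false; // rate limited: caller should wait or queue
  }
}
```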

Security

Localcode includes comprehensive security checks for shell commands.

Benchmarks

Run performance benchmarks with /benchmarks or localcode --benchmarks.

Results are saved to benchmarks.md in your working directory.