Oh My Opencode QuickStart for OpenCode: Install, Configure, Run
Install Oh My Opencode and ship faster.
Oh My Opencode turns OpenCode into a multi-agent coding harness: an orchestrator delegates work to specialist agents that run in parallel.

The shortest mental model you need: install the plugin, then use the keyword ultrawork (or ulw) when you need full-power multi-agent execution.
This quickstart covers installation and day-one configuration. Once you are up and running:
- Specialised agents deep dive — every agent explained with model routing and self-hosting guidance
- Oh My Opencode experience — honest benchmarks and real-world billing caveats
For the broader AI coding toolchain, see the AI developer tools overview. If you prefer a sandboxed, browser-capable alternative to OpenCode, OpenHands takes a different architectural approach to agentic coding.
Why Oh My Opencode exists
OpenCode is an open source AI coding agent that runs as a terminal UI and handles non-interactive CLI runs for scripting and automation.
Oh My Opencode pushes that ceiling much higher. It adds a full orchestration layer built around the “Sisyphus” agent, multi-specialist collaboration, and tighter tool integration. The result is a system where complex engineering tasks — the kind that would normally require you to babysit several prompts — get planned, delegated, and verified in a single workflow.
Two concepts come up constantly, so nail these before anything else:
- Ultrawork mode: a keyword-triggered workflow switch. Adding ultrawork or ulw to your prompt activates parallel background execution, LSP integration, context management, and automatic task decomposition — it is not just a stylistic choice.
- Specialist agents: named roles like @oracle for architecture and @librarian for research and documentation. You can call them explicitly, or let the orchestrator route to them automatically.
This article uses Oh My Opencode, oh-my-opencode, and Oh My OpenCode interchangeably — they all refer to the same plugin in the OpenCode ecosystem.
Prerequisites and terminology
You need OpenCode installed and working first. See our OpenCode Quickstart to learn how to install, configure and start it.
By default opencode starts the terminal UI when run without arguments, and you can also use opencode run ... for non-interactive execution.
Oh My Opencode requires at least one provider configured. OpenCode loads providers from credentials, environment variables, or a project .env. The quickest way to get that sorted:
opencode auth login
opencode auth list
Credentials live in ~/.local/share/opencode/auth.json.
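If you want to script that check, here is a minimal sketch, assuming the default credential path above:

```shell
# Check whether OpenCode credentials exist at the default location.
AUTH_FILE="$HOME/.local/share/opencode/auth.json"
if [ -f "$AUTH_FILE" ]; then
  echo "credentials found at $AUTH_FILE"
else
  echo "no credentials yet - run: opencode auth login"
fi
```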
One naming quirk worth knowing upfront: the upstream GitHub repository is now branded as oh-my-openagent (“previously oh-my-opencode”), but the plugin and package name in configs is still oh-my-opencode. Both names float around in community posts — they are the same thing.
Installation quickstart
There are multiple routes (CLI installer, manual repo clone, or “let an agent install it”). The right choice depends on whether you want a repeatable installer flow or full manual control.
bunx is Bun’s npx equivalent, designed to run npm packages quickly without a separate global install step. If you do not have Bun installed yet, install it first:
curl -fsSL https://bun.com/install | bash
Automatic install with OpenCode
You can paste a prompt into OpenCode and have it install Oh My Opencode automatically:
- Start OpenCode, Cursor, or Claude Code
- Copy, paste, and execute this prompt (see the repository for installation details: https://github.com/code-yeongyu/oh-my-openagent)
Install and configure oh-my-opencode by following the instructions here:
https://raw.githubusercontent.com/code-yeongyu/oh-my-openagent/refs/heads/dev/docs/guide/installation.md
The agent will ask a few questions about which LLM subscriptions you have and will register Oh My Opencode in OpenCode's plugins.
Manual path: run the Oh My Opencode installer with bunx or npx
The interactive installer asks about your providers and generates an “optimal” configuration automatically; there is also a non-interactive mode for automation and agents.
Interactive install:
bunx oh-my-opencode install
# or if you prefer npm tooling
npx oh-my-opencode install
Non-interactive install (useful for provisioning scripts, CI images, or “agent installs”):
bunx oh-my-opencode install --no-tui \
--claude=<yes|no|max20> \
--openai=<yes|no> \
--gemini=<yes|no> \
--copilot=<yes|no>
What the installer does, at a high level:
- registers the plugin in OpenCode configuration
- generates model assignments using provider availability and fallback rules
- writes oh-my-opencode configuration alongside OpenCode config files
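The non-interactive flags lend themselves to provisioning scripts. A minimal sketch — the OMO_* environment variables are hypothetical names invented for this example; the installer flags are the documented ones:

```shell
# Hypothetical CI provisioning step: assemble the non-interactive
# install command from environment toggles, then run it.
CLAUDE="${OMO_CLAUDE:-no}"      # yes | no | max20
OPENAI="${OMO_OPENAI:-yes}"     # yes | no
GEMINI="${OMO_GEMINI:-no}"
COPILOT="${OMO_COPILOT:-no}"

CMD="bunx oh-my-opencode install --no-tui \
  --claude=$CLAUDE --openai=$OPENAI --gemini=$GEMINI --copilot=$COPILOT"

echo "$CMD"
# In a real pipeline you would execute it, e.g.: eval "$CMD"
```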
Manual path: clone the repo and run the install script
There is also a “human” manual flow: clone the repo, install dependencies, then run an install script which detects your OpenCode config location, makes a backup, and merges settings.
git clone https://github.com/code-yeongyu/oh-my-opencode.git ~/.oh-my-opencode
cd ~/.oh-my-opencode
npm install
npm run install
If the script cannot find your OpenCode config automatically, set the config path environment variable explicitly before re-running the installer.
Quick verification checklist
After installation, validate three things:
- your Oh My Opencode config file exists in a recognised location
- specialist agents can be invoked
- ultrawork keyword triggering works as expected
Examples to copy:
# Show the plugin config file (try the location that matches your setup)
cat .opencode/oh-my-opencode.json
cat ~/.config/opencode/oh-my-opencode.json
# Quick agent test
opencode run "Use @oracle to analyse the architecture of the current project"
# Quick ultrawork trigger test
opencode run "ulw list all configuration files in this project"
If all three pass, you are genuinely ready — not just installed.
Configuration and tuning
Where configuration lives and how precedence works
Oh My Opencode looks for configuration files in a priority order, with project-level config taking precedence over user-level config. The documented search order is:
- .opencode/oh-my-opencode.json (project-level, highest priority)
- ~/.config/opencode/oh-my-opencode.json (user-level on macOS and Linux)
- %APPDATA%\opencode\oh-my-opencode.json (user-level on Windows)
OpenCode itself also supports multiple config sources and merges them rather than replacing them, with later sources overriding earlier ones only when keys conflict. This matters because you typically want a stable global baseline plus per-repo overrides.
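For example, you might keep a conservative global baseline and tighten one key per repo — a sketch with illustrative values:

```jsonc
// ~/.config/opencode/oh-my-opencode.json — stable global baseline
{
  "sisyphus": { "enabled": true, "max_concurrent_tasks": 2 }
}

// .opencode/oh-my-opencode.json — project override; only the
// conflicting key changes, everything else merges from the baseline
{
  "sisyphus": { "max_concurrent_tasks": 1 }
}
```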
JSONC support for sane configs
Both OpenCode and Oh My Opencode support JSONC, which means you can keep comments and trailing commas in config files without breaking parsing. This is especially helpful for documenting why certain limits or models were chosen.
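A tiny example of what JSONC buys you (illustrative values):

```jsonc
{
  // Cap parallelism: our provider plan throttles above 3 concurrent tasks
  "sisyphus": {
    "max_concurrent_tasks": 2,  // trailing commas are valid JSONC
  },
}
```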
The “ultrawork” keyword and what it enables
ultrawork (or ulw) is the mode switch you will reach for most.
Drop it into your task description and the system enables background parallel execution, LSP integration, context management, and automatic task decomposition. It is not a hint to the model — it is a hard trigger that changes how the entire run is structured.
Concurrency and background tasks
Oh My Opencode exposes concurrency controls for background execution via sisyphus.max_concurrent_tasks.
Do not chase high concurrency here. Each provider plan has a ceiling — Claude Max20 tops out at 3 parallel tasks, for example — and pushing past it produces flaky completions and hard-to-debug timeouts. Start conservative and only increase after clean runs.
A minimal concurrency config looks like this:
{
"sisyphus": {
"enabled": true,
"max_concurrent_tasks": 2,
"task_timeout": 300
}
}
A concurrency of 2 is a safe starting point for most plans.
Per-agent and per-category model routing
Oh My Opencode is built around the idea that different tasks benefit from different models, so the configuration supports:
- per-agent model selection, variants, and prompt extensions
- category-based routing for “quick” vs “deep” work
- fallback models when your preferred provider is unavailable
The specialised agents deep dive covers the full model routing system in detail — including why certain agents must not be switched to the wrong model family, the complete fallback chains, and how to safely override models for self-hosted setups.
Model resolution is layered: your explicit override wins, then provider fallback, then system default. The fallback chain starts with your native providers and can drop down to alternatives like GitHub Copilot if nothing else is available — which means a misconfigured agent rarely fails outright; it just silently uses a weaker model.
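The layering amounts to a first-non-empty lookup. An illustrative sketch, not the plugin's actual code, with placeholder model names:

```shell
# Illustrative model resolution: explicit override, then provider
# fallback, then system default. First non-empty value wins.
resolve_model() {
  override="$1"; fallback="$2"; default="$3"
  for candidate in "$override" "$fallback" "$default"; do
    if [ -n "$candidate" ]; then
      printf '%s\n' "$candidate"
      return 0
    fi
  done
}

# No explicit override, so the provider fallback is chosen:
resolve_model "" "github-copilot/fallback-model" "opencode/default"
```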
A practical starter config you can adapt (JSONC recommended):
{
"$schema": "https://raw.githubusercontent.com/code-yeongyu/oh-my-opencode/master/assets/oh-my-opencode.schema.json",
// Keep concurrency conservative until you trust your provider limits
"sisyphus": {
"enabled": true,
"max_concurrent_tasks": 2,
"task_timeout": 300
},
// Agent tuning
"agents": {
"oracle": {
"enabled": true,
"model": "openai/gpt-5.2",
"variant": "high",
"temperature": 0.7,
"prompt_append": "Provide concise tradeoffs and a clear recommendation"
},
"librarian": {
"enabled": true,
"model": "google/gemini-3.1-pro"
}
},
// Category routing for fast queries vs deeper work
"categories": {
"quick": { "model": "opencode/gpt-5-nano" },
"visual-engineering": { "model": "google/gemini-3.1-pro" }
}
}
The $schema field is worth keeping — it unlocks autocomplete in most editors and catches typos in agent names before you waste a run on a broken config.
MCP integration and hooks
Oh My Opencode supports MCP (Model Context Protocol) servers via a configuration block where you define the server command, args, and environment variables.
In practice, the easier path is to manage MCPs directly through OpenCode using the opencode mcp command group — it handles adding, listing, and OAuth authentication without you touching config files manually.
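If you do manage MCP servers in config, the block follows the command/args/env shape described above. A hedged sketch — the top-level key, server name, and values are assumptions for illustration; check the plugin docs for the exact schema:

```jsonc
{
  "mcp": {
    // "github" is a hypothetical server name for this example
    "github": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"],
      "env": { "GITHUB_TOKEN": "${GITHUB_TOKEN}" }
    }
  }
}
```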
Workflow examples you can copy
The two examples below are intentionally “real repo” shaped: they show how to trigger orchestration, how to aim specialist agents, and how to keep results verifiable.
Example: Architecture review with Oracle and a repo map
Use this when you join a new codebase or inherit a service and need a quick, actionable view of the architecture. Oracle is a read-only specialist — it cannot make changes, only advise.
Run it from your project root:
opencode
OpenCode will start its terminal UI by default when launched without arguments.
Then ask Oracle to produce an architecture analysis and a concrete refactoring or modularisation plan:
Use @oracle to analyse the architecture of this repository.
Identify the key bounded contexts, major dependency edges, and the highest risk components.
End with a refactoring plan that can be executed in small PRs.
For logging or scripting, skip the TUI entirely and use opencode run:
opencode run "Use @oracle to analyse the architecture of this repository and propose a staged refactor plan"
opencode run passes the prompt directly and exits when done — exactly what you want for CI steps or automation pipelines.
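A minimal CI step sketch, assuming opencode is on PATH — the guard lets the step degrade gracefully in environments where it is not installed:

```shell
# Run a non-interactive Oracle review; record a status instead of
# failing hard when opencode is unavailable in this environment.
if command -v opencode >/dev/null 2>&1; then
  if opencode run "Use @oracle to flag architectural risks in this repository"; then
    status=ok
  else
    status=failed
  fi
else
  status=skipped  # opencode not installed in this environment
fi
echo "oracle review: $status"
```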
Example: Full-feature implementation with ultrawork orchestration
Use this when you need a feature that touches multiple files or layers, or when you want background tasks and specialist agents enabled automatically.
Start with an ultrawork prompt:
opencode run "ultrawork implement a new /health endpoint with readiness and liveness checks, add tests, update documentation, and verify it runs locally"
The ultrawork keyword activates all configured specialist agents, background task parallelism, and deeper tooling in one shot. You do not need separate prompts for planning, implementing, and testing.
When you use ultrawork, you can still explicitly steer specialist behaviour. For example, you might force the architecture design first:
opencode run "ultrawork ask @oracle to design the endpoint contract and dependencies first, then implement with minimal changes and add tests"
Combining ultrawork with explicit agent mentions like @oracle is the sweet spot: you get full orchestration but still control which specialist handles the critical design decisions.
One thing to get right for larger ultrawork tasks: keep concurrency inside your provider limits. Start at 2, run a few tasks cleanly, then decide whether to increase. Pushing past your plan’s ceiling does not make things faster — it makes them unreliable.
Troubleshooting and best practices
Ultrawork keyword does not trigger
If ulw or ultrawork is being ignored, treat it as a signal to check plugin loading and version compatibility. There has been at least one reported regression where keyword detection did not inject the expected ultrawork mode prompt, with investigation pointing to the dedicated keyword-detector hook.
In practice, the fastest fixes are:
- confirm the plugin is registered and config files are present (see the verification checklist earlier)
- upgrade OpenCode or the plugin if you are on a version known to regress keyword detection
Installation script cannot find your OpenCode config
Verify OpenCode is installed (opencode --version), then set an explicit config path environment variable before re-running the installer.
Too many background tasks or throttling issues
If you hit rate limits, timeouts, or flaky completions, the fix is always the same: reduce concurrency and re-run. Chasing higher parallelism when your provider is already throttling you is a losing battle.
When you should not use ultrawork
Ultrawork is powerful, but it is not free — every parallel agent costs tokens and counts against your rate limits. Reserve it for large refactors, full-stack feature work, and complex multi-step tasks. For a small single-file edit or a quick question, a plain prompt is faster and cheaper.
A solid rule of thumb:
- use normal prompts for small edits and straightforward questions
- use ultrawork when you want the system to decompose, delegate, execute, and verify across the repo
Compliance note about provider terms
Some community discussion around this ecosystem touches on provider restrictions and terms of service, particularly around third-party OAuth and compatibility layers. These are real constraints — review your provider’s terms before running automated or parallel workflows at scale, and do not assume that a working integration means a permitted one.
Ready to go deeper? The specialised agents deep dive explains every agent’s role, model requirements, and self-hosting options. For honest performance data and community-reported billing risks, see the Oh My Opencode experience article.