
OpenCode Go Models Benchmark 2026: Mimo, Kimi K2.5, MiniMax, GLM-5 Compared

We ran every opencode-go subscription model on identical coding tasks and measured response time, code quality, and output structure. Mimo v2 Omni came out on top.

6 min read
[Chart: Response time benchmark — opencode-go/*, same task, median of 3. Bars (time / score): mimo-v2-omni 6.7s / 78, minimax-m2.5 8.3s / 70, mimo-v2-pro 8.6s / 80, minimax-m2.7 19.1s / 68, kimi-k2.5 21.4s / 72, glm-5 23.9s / 62. Legend: fast tier, balanced tier, low quality score.]

OpenCode's Go-tier subscription unlocks a set of models billed separately from the major providers — running on OpenCode's own inference infrastructure. We ran all six currently available opencode-go/* models on the same coding task and measured what actually matters: how fast they respond and whether the output is correct.

The Test

Each model received the same prompt via opencode run -m <model>: implement a Python find_duplicates(lst) function that returns elements appearing more than once, preserving order of first duplicate occurrence, with 3 edge cases in the docstring and a 2-sentence efficiency explanation.

This task tests three things simultaneously: correct algorithm selection (needs a set for O(n) time), docstring completeness, and the ability to explain trade-offs in plain language. No streaming was used — we measured wall-clock time to first complete response.
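A reference implementation meeting that spec might look like the sketch below. This is one valid O(n) answer written for illustration, not any model's actual output; it uses a dict as an ordered set (insertion order is guaranteed in Python 3.7+), which avoids the redundant-second-set pattern discussed later.

```python
def find_duplicates(lst):
    """Return elements appearing more than once, ordered by first duplicate occurrence.

    Edge cases:
        - Empty list: returns [].
        - No duplicates: returns [].
        - Unhashable elements (e.g. nested lists) raise TypeError, since a set is used.

    Args:
        lst: A list of hashable elements.

    Returns:
        A list of the duplicated elements, each appearing once, ordered by the
        position of each element's second occurrence (its first duplicate).
    """
    seen = set()
    dups = {}  # dict used as an ordered set: keys record duplicates in order
    for item in lst:
        if item in seen:
            dups.setdefault(item, None)  # record only the first duplicate occurrence
        else:
            seen.add(item)
    return list(dups)
```

Set membership and dict insertion are both O(1) on average, so the whole pass is O(n) time and O(n) space, which is the algorithm choice the scoring rewarded.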

Note
All runs were made sequentially from the same machine in the same region. No caching. Each model received a fresh context with no prior messages. Times are median of 3 runs.
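A measurement harness along these lines reproduces the setup (a sketch, assuming the `opencode` CLI is on PATH and exits after printing the full non-streaming response; the function and variable names here are illustrative, not part of any published tooling):

```python
import statistics
import subprocess
import time

def time_run(model: str, prompt: str) -> float:
    """Wall-clock one non-streaming completion via the opencode CLI."""
    start = time.perf_counter()
    subprocess.run(
        ["opencode", "run", "-m", model, prompt],
        capture_output=True,  # discard output; we only time the call
        check=True,
    )
    return time.perf_counter() - start

def benchmark(model: str, prompt: str, runs: int = 3) -> float:
    """Median of sequential wall-clock timings, fresh context each run."""
    return statistics.median(time_run(model, prompt) for _ in range(runs))
```

Median of three is a deliberately small sample: it smooths a single cold-start outlier without averaging it into the result, but it will not catch bimodal latency.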

Results

| Model | Time | Score | Tier |
|---|---|---|---|
| mimo-v2-omni (fastest) | 6.7s | 78 | fast |
| minimax-m2.5 | 8.3s | 70 | fast |
| mimo-v2-pro (best doc) | 8.6s | 80 | balanced |
| minimax-m2.7 | 19.1s | 68 | balanced |
| kimi-k2.5 | 21.4s | 72 | balanced |
| glm-5 | 23.9s | 62 | fast |

Key Findings

1. Mimo v2 Omni is the fast-tier winner

At 6.7 seconds, mimo-v2-omni was 3.5× faster than glm-5 and produced a correct, well-explained implementation. It beat minimax-m2.5 (8.3s) on speed while matching it on quality. For latency-sensitive tasks — quick search, exploration, first-pass generation — this is now the default fast-tier pick.
2. Mimo v2 Pro has the best structured output

The mimo-v2-pro variant produced the cleanest docstring, including explicit Args and Returns sections that the other models omitted. If you're generating library code or anything that feeds into documentation pipelines, the slightly slower 8.6s response is worth it. Use mimo-v2-omni where latency counts, mimo-v2-pro where structure matters.
3. MiniMax M2.7 is slower than M2.5 with no quality gain

This was the most surprising result. minimax-m2.7 took 19.1 seconds — more than double minimax-m2.5's 8.3s — with no measurable quality improvement on this task. Unless you need its larger 256K context window, prefer M2.5.
4. GLM-5 produced the weakest code

Beyond being the slowest (23.9s), glm-5 used a redundant added set alongside the seen set, making the implementation unnecessarily complex. Skip GLM-5 and M2.7 unless you have a specific reason (e.g., M2.7's 256K context window). The Mimo models dominate this tier on both speed and quality.
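The redundant-set pattern looks roughly like this (a reconstruction of the style described, not GLM-5's verbatim output). The result is correct, but the added set only mirrors what duplicates already contains; a single ordered structure, such as a dict used as an ordered set, tracks the same facts once.

```python
def find_duplicates(lst):
    # Redundant-set style: `added` duplicates the membership information
    # already carried by `duplicates`, so the same fact is tracked twice.
    seen, added = set(), set()
    duplicates = []
    for item in lst:
        if item in seen and item not in added:
            duplicates.append(item)
            added.add(item)
        seen.add(item)
    return duplicates
```

Both versions run in O(n), so the scoring penalty here is about complexity and readability rather than asymptotics.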

Mimo v2 Omni at 6.7s is 3.5× faster than GLM-5 with a higher quality score. Speed and correctness, not a trade-off.

Using opencode-go Models with 0dai

0dai's delegation policy automatically routes tasks to the right model tier. To pin a specific opencode-go model for swarm tasks, set the model in your config:

# ~/.config/opencode/config.json
{
  "model": "opencode-go/mimo-v2-omni"
}

# Or per-run:
opencode run -m opencode-go/mimo-v2-pro "your task"

You can also check the full model table any time with the CLI:

0dai models                 # all supported models
0dai models --available     # only installed CLIs

Conclusion

The opencode-go subscription adds real value if you're already using OpenCode — the Mimo models in particular outperform the older Kimi and MiniMax entries on both speed and output quality. Start with mimo-v2-omni for fast tasks and mimo-v2-pro for anything requiring structured documentation. Skip GLM-5 and M2.7 unless you have specific reasons.

Try 0dai

AI agents that know your project

Shared context, session roaming, and multi-agent swarm for Claude Code, Codex, Gemini, Aider, and OpenCode, all from a single ai/ directory. Install in seconds.

npm install -g @0dai-dev/cli && 0dai init