OpenCode's Go-tier subscription unlocks a set of models billed separately from the major providers — running on OpenCode's own inference infrastructure. We ran all six currently available opencode-go/* models on the same coding task and measured what actually matters: how fast they respond and whether the output is correct.
The Test
Each model received the same prompt via opencode run -m <model>: implement a Python find_duplicates(lst) function that returns elements appearing more than once, preserving order of first duplicate occurrence, with 3 edge cases in the docstring and a 2-sentence efficiency explanation.
This task tests three things simultaneously: correct algorithm selection (needs a set for O(n) time), docstring completeness, and the ability to explain trade-offs in plain language. No streaming was used — we measured wall-clock time to first complete response.
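For reference, one implementation that satisfies every requirement in the prompt looks like this (our own sketch for comparison, not any model's output):

```python
def find_duplicates(lst):
    """Return the elements of lst that appear more than once,
    ordered by first duplicate occurrence.

    Edge cases:
        - [] returns [] (empty input)
        - [1, 2, 3] returns [] (no duplicates)
        - [1, 1, 1, 2, 2] returns [1, 2] (triplicates reported once)
    """
    counts = {}
    duplicates = []
    for item in lst:
        counts[item] = counts.get(item, 0) + 1
        if counts[item] == 2:  # the moment item first becomes a duplicate
            duplicates.append(item)
    return duplicates
```

A single pass with hash-based counting gives O(n) time and O(n) space. Appending an element exactly when its count reaches two is what preserves the order of first duplicate occurrence.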
Results
| Model | Time | Score | Tier | Notes |
|---|---|---|---|---|
| mimo-v2-omni | 6.7s | 78 | fast | Fastest, correct O(n), clear efficiency note |
| minimax-m2.5 | 8.3s | 70 | fast | Clean, concise, correct |
| mimo-v2-pro | 8.6s | 80 | balanced | Best docstring (Args/Returns) |
| minimax-m2.7 | 19.1s | 68 | balanced | Correct but 2× slower than m2.5 |
| kimi-k2.5 | 21.4s | 72 | balanced | Adds an extra set, still correct |
| glm-5 | 23.9s | 62 | fast | Slowest; redundant added set |
Key Findings
Mimo v2 Omni is the fast-tier winner
mimo-v2-omni was 3.5× faster than glm-5 and produced a correct, well-explained implementation. It beat minimax-m2.5 (8.3s) on speed and edged it on quality (78 vs. 70). For latency-sensitive tasks — quick search, exploration, first-pass generation — this is now the default fast-tier pick.
Mimo v2 Pro has the best structured output
The mimo-v2-pro variant produced the cleanest docstring, including explicit Args and Returns sections that the other models omitted. If you're generating library code or anything that feeds into documentation pipelines, the slightly slower 8.6s response is worth it. Use mimo-v2-omni where latency counts and mimo-v2-pro where structure matters.
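To make "structured output" concrete, the docstring shape in question looks roughly like this; the Args/Returns layout is reconstructed for illustration, not quoted from the model:

```python
def find_duplicates(lst):
    """Return the elements of lst that appear more than once.

    Results are ordered by each element's first duplicate occurrence.

    Args:
        lst: An iterable of hashable values.

    Returns:
        A list of the duplicated values, each listed once.
    """
    # Body elided; see the reference implementation under The Test.
```

That structure is what documentation tooling can pick up directly, which is where the extra couple of seconds of latency pays for itself.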
MiniMax M2.7 is slower than M2.5 with no quality gain
minimax-m2.7 took 19.1 seconds — more than double minimax-m2.5's 8.3s — with no measurable quality improvement on this task. Unless you need its larger 256K context window, prefer M2.5.
GLM-5 produced the weakest code
glm-5 used a redundant second set (added) alongside its seen set, making the implementation unnecessarily complex; the pattern is sketched below. Skip GLM-5 and M2.7 unless you have a specific reason (e.g., M2.7's 256K context window). The Mimo models dominate this tier on both speed and quality: Mimo v2 Omni at 6.7s is 3.5× faster than GLM-5 with a higher quality score — speed and correctness, not a trade-off.
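A reconstruction of that two-set pattern (not GLM-5's verbatim output); the single count dict in the sketch under The Test does the same job without the second set:

```python
def find_duplicates_two_sets(lst):
    seen = set()   # everything encountered so far
    added = set()  # duplicates already appended to the result
    result = []
    for item in lst:
        if item in seen and item not in added:
            result.append(item)
            added.add(item)
        seen.add(item)
    return result
```

Still correct and still O(n), but it carries an extra structure and an extra membership check for no benefit.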
Using opencode-go Models with 0dai
0dai's delegation policy automatically routes tasks to the right model tier. To pin a specific opencode-go model for swarm tasks, set the model in your config:
```
# ~/.config/opencode/config.json
{
  "model": "opencode-go/mimo-v2-omni"
}

# Or per-run:
opencode run -m opencode-go/mimo-v2-pro "your task"
```

You can also check the full model table at any time with the CLI:
```
0dai models             # all supported models
0dai models --available # only installed CLIs
```

Conclusion
The opencode-go subscription adds real value if you're already using OpenCode — the Mimo models in particular outperform the older Kimi and MiniMax entries on both speed and output quality. Start with mimo-v2-omni for fast tasks and mimo-v2-pro for anything requiring structured documentation. Skip GLM-5 and M2.7 unless you have specific reasons.