Show HN: adamsreview – better multi-agent PR reviews for Claude Code
I built adamsreview, a Claude Code plugin that runs deeper, multi-stage PR reviews using parallel sub-agents, validation passes, persistent JSON state, and optional ensemble review via Codex CLI and PR bot comments.
On my own PRs, it has been catching dramatically more real bugs than Claude’s built-in /review, /ultrareview, CodeRabbit, Greptile, and Codex’s built-in review, while producing fewer false positives.
adamsreview is six Claude Code slash commands packaged as a plugin: review, codex-review, add, promote, walkthrough, and fix. I modeled it after the built-in /review command and extended it meaningfully.
You can clear context between review stages because state is stored in JSON artifacts on disk, with built-in scripts to keep it updated.
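To give a rough idea of the shape, a finding in the state file might look something like this (illustrative only, not the exact schema; see the repo for the real format):

    {
      "pr": 42,
      "stage": "validation",
      "findings": [
        {
          "id": "F-001",
          "file": "src/auth.ts",
          "severity": "high",
          "status": "needs-human-review",
          "summary": "Token expiry is never rechecked on refresh"
        }
      ]
    }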
The walkthrough command uses Claude’s AskUserQuestion feature to walk you through uncertain findings or items needing human review one by one. Then, the fix command dispatches per-fix-group agents and re-reviews the work with Opus, reverting any regressions before committing survivors.
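A typical session looks something like this (exact invocation may vary with your plugin setup):

    /review        # multi-stage review; writes JSON state to disk
    /walkthrough   # step through uncertain findings one at a time
    /fix           # dispatch fix agents, re-review with Opus, commit what survives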
It runs against your regular Claude Code subscription (Max plan recommended), unlike /ultrareview, which charges against your Extra Usage pool.
I would love feedback from Claude Code users, pro devs, and anyone with strong opinions about AI code reviews.
Repo: https://github.com/adamjgmiller/adamsreview
Install:

    /plugin marketplace add adamjgmiller/adamsreview
    /plugin install adamsreview@adamsreview
The best code review improvement I have made in my workflow with Claude is using tuicr (https://tuicr.dev).
It runs locally, YOU review all the code yourself, and feed that back to Claude.
Agents reviewing AI code always felt dirty to me, especially when working on production (non-disposable) code.
The video actually convinced me that this might be an interesting tool. I'm going to try it myself for a small one-shot project and see how well it performs.
TUI-based reviews on their own are already interesting. I had never considered it, I guess.
> Runs against your regular Claude Code subscription (Max plan recommended) — unlike /ultrareview, which charges against your Extra Usage pool.
How expensive is it to run in your experience? In $ or tokens?
Great project! I’ve built something similar, not very clean and polished, but focused on deterministic orchestration of multiple agents via TypeScript, because a coordinating agent was notoriously bad at things such as fetching relevant tickets and other context. One thing I still struggle with, though, is the actual instructions for the review itself. They are either too vague, leading to superficial or overly broad reviews, or too specific and thus not applicable to different kinds of PRs…
We seem to be fighting complexity with complexity. Does it really help?
"I pay Claude, to use Claude, to write instructions for Claude, to review code from Claude"
Have we all just given up?
You forgot “to use Claude to write an HN post to promote..”
s/Claude/Intern/g
Holy vibe coding, Batman, this looks like a repository with just a bazillion prompts, of which there are already a million.
Seems like it would create a lot of friction and burn a lot of tokens.
That looks like a fair bit of ceremony for what it does. Is this representative of the output? https://github.com/adamjgmiller/adamsreview/pull/3