Show HN: AI Roundtable – Let 200 models debate your question

opper.ai

94 points by felix089 a day ago

Hey HN! After the Car Wash Test post sparked a big discussion (400+ comments, https://news.ycombinator.com/item?id=47128138), I spent the past few weeks building a tool so anyone can run these kinds of questions and get structured results. No signup, and free to use.

You type a question, define answer options, and pick up to 50 models at a time from a pool of 200+; they all answer independently under identical conditions: no system prompt, structured output, the same setup for every model.
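
Conceptually, the poll round is just a fan-out of one identical structured-output request to every selected model. Here's a minimal sketch in Python (assuming an OpenAI-compatible client and made-up model names; this is illustrative, not the actual implementation):

    # Illustrative sketch of the poll round, not the production code.
    # Assumes an OpenAI-compatible client and made-up model names.
    import json
    from openai import OpenAI

    client = OpenAI()  # assumption: any OpenAI-compatible endpoint works

    QUESTION = "Your question here"
    OPTIONS = ["option A", "option B"]
    MODELS = ["model-a", "model-b", "model-c"]  # hypothetical names

    def ask(model: str) -> dict:
        # Identical conditions for every model: no system prompt,
        # the same user message, the same JSON output format.
        resp = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content":
                f"{QUESTION}\nOptions: {OPTIONS}\n"
                'Answer as JSON: {"answer": "<option>", "reasoning": "..."}'}],
            response_format={"type": "json_object"},
        )
        return json.loads(resp.choices[0].message.content)

    answers = {m: ask(m) for m in MODELS}  # every model answers independently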

You can also run a debate round where models see each other's reasoning and get a chance to change their minds. A reviewer model then summarizes the full transcript. All models are routed via my startup Opper. Any feedback is welcome!
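
The debate flow, again only as a hedged sketch (revise() is a hypothetical helper building on ask() from the sketch above; the real prompting and reviewer setup may differ):

    # Sketch of the debate round (assumed flow, not the actual implementation).
    # revise() is a hypothetical helper: it re-asks a model with the other
    # models' first-round reasoning attached.
    def debate(answers: dict) -> dict:
        revised = {}
        for model, own in answers.items():
            others = "\n".join(
                f"- {m}: {a['answer']} ({a['reasoning']})"
                for m, a in answers.items() if m != model)
            revised[model] = revise(model, own, others)  # hypothetical helper
        return revised

    def summarize(transcript: str, reviewer: str = "reviewer-model") -> str:
        # A reviewer model (name is a placeholder) summarizes the transcript.
        resp = client.chat.completions.create(
            model=reviewer,
            messages=[{"role": "user", "content":
                "Objectively summarize this debate transcript:\n" + transcript}],
        )
        return resp.choices[0].message.content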

Hope you enjoy it, and would love to hear what you think!

felix089 17 minutes ago

Okay, since launch we've had about 5k questions asked to the roundtable - really cool stuff! Usage was much higher than expected and we had to scale up to keep things running. Thanks for all the feedback; I shipped a bunch of updates during the day: the history tab now has much better sorting logic, plus upvotes and more filters. You can also generate the final summary in a couple of different voices, which I find quite funny. A couple more things are coming shortly, like an open-questions mode and potentially joining the roundtable as a participant. Any other feedback, just let me know. Thanks!

gsandahl a day ago

Oh lord, imagine asking "serious" questions

https://opper.ai/ai-roundtable/questions/you-are-standing-in...

  • zipping1549 a day ago

    > However, a clever minority led by Gemini 3.1 Pro and Gemini 3 Pro argued that if the sign is legible from the other side, it must be intended to lead people into the current room to find the exit, making the inscribed corridor the one leading deeper into the dungeon.

    This is quite impressive, really.

    • gsandahl 14 hours ago

      Agree, this is where LLMs can uncover new perspectives!

  • rob74 14 hours ago

    A dungeon with glass doors and emergency exit signs? In that case, I can imagine at least two alternative scenarios:

    - "↑TIX∃" is not a mirror image of "EXIT", but some dwarven runes that mean something else entirely.

    - The sign might be a ruse meant to lure you into a trap.

    If you look at the detailed answers, some of the models gave similar ones (e.g. Nemotron Nano 12B: "Suspicious of dungeon riddles, viewing the inscription as a potential trap or red herring."), but I'm not sure whether that's because they identified the word EXIT and thought it might be misleading, or because they didn't understand it...

  • sdwr a day ago

    Great question! Clean separation between Gemini Pro and the other answers

    • felix089 a day ago

      Yeah, Gemini is the only model that chose for the right reason; the other ones just got kind of lucky.

civvv 12 hours ago

Fun little toy. I tried asking it some post-modern philosophy questions, and the models all mostly agreed with the philosopher's statements until the debate, where Opus 4.6 managed to change their opinion to a resounding "maybe" pretty much every single time. It seems like the "better" frontier models often take a more grounded stance from the beginning, and even manage to influence the other models.

Here is an example: https://opper.ai/ai-roundtable/questions/79e6cdd4-515

Another fun debate: https://opper.ai/ai-roundtable/questions/81ee56e9-60f

  • felix089 11 hours ago

    Yeah, Opus 4.6 is the one that changes opinions the most from what I've seen. Also, "maybe" options or "are you 100% certain" framings trigger most models to default to maybe/no: https://opper.ai/ai-roundtable/questions/can-you-be-100-cert... Or as Shane puts it: "Nobody's saying he IS a lizard. They're saying the universe doesn't hand out 100% certificates."

ad-tech 12 hours ago

The debate round sounds good until you actually use it. I built internal tools for a 35-person team, and the same thing always happens: models see each other's answers and just shuffle the phrasing around instead of actually changing their reasoning. What you're measuring is performance on persuasion, not on accuracy or clarity. The real question isn't whether Claude will convince Gemini to flip its position. It's whether having 200 models debate helps you make a better decision than asking one model well and checking its work yourself. I'd use this more as a way to find edge cases where models disagree wildly, not to find consensus.

  • totisjosema 11 hours ago

    I've had quite a few interesting reads just looking at the reasoning, to be honest. The frontier models seem to have relevant-sounding arguments every time; sometimes it's even hard to read through the BS and identify what's actually a good argument versus just an argument I'd like to hear.

  • felix089 12 hours ago

    The debate round is actually restricted to only 6 models; otherwise it would get out of hand, both in quality and financially. And changing position is just one feature of the debate; seeing arguments from multiple sides is also quite nice. Give it a spin!

QubridAI 2 hours ago

Cool idea. The debate round is the real hook, and I'd be curious: which models actually change their minds for good reasons vs. just collapsing toward the loudest consensus?

ikrima 10 hours ago

Fun experiment: Make the prompt a debate of theoretical physicists and ask them a speculative frontier physics question: https://opper.ai/ai-roundtable/questions/you-are-a-council-o...

Prompt below

------

You are a council of luminaries featuring Edward Witten, Alexander Grothendieck, Emmy Noether, and Terence Tao. Think really hard about how to best emulate their intuitions and mathematical lenses based on your internal reasoning model and use them as your mixture of experts for your chain of thought reasoning. Now I want you to debate and discuss this thought experiment and be sure to have a vigorous back and forth between the council to induce insight capture through consensus forming: If we try to think of a Hilbert space that has local operators that are unbounded, like kind of like Edward Witten's smearing of a local observable across a world line creates an unbounded norm. What if we instead take maybe a spectral transform of the state space using some sort of measure metric theoretic operator that allows us to think about transform basically the unbounded observables to bounded spectral? Would this be related to the efforts of Algebraic Quantum Field Theory?

bamazizi 8 hours ago

There's also https://roundtable.now

I've had a great experience using it for research, debates, and constructive criticism. I usually give it a business idea or some tool I'm thinking of building and then let 4 or 5 models debate it into a go-to-market strategy.

  • jaen 6 hours ago

    That site/app doesn't have a single piece of information about who's running it, what the privacy policy is (besides some AI slop in the FAQ section) etc. etc. - and you're supposed to put business-critical information into it (according to its demo)?!

    Why are you recommending something so sketchy?

soDiaoune 2 hours ago

This is a really great idea! It would be great to let users make their questions private, though.

  • felix089 an hour ago

    You can basically already do that: create your own API key and add it under navbar/API key. Then all your sessions are unlisted, so unless someone has the link, nobody should be able to find them. You can still share them with others if you like - like unlisted YouTube videos.

jacquesm a day ago

Great idea. I'd love for there to be an 'open ended answer' mode without giving multiple-choice options. As it is, the models aren't debating the question itself but the validity of the offered answers, and the real answer may not be contained in that set because the person asking is unaware of that option.

  • felix089 21 hours ago

    Happy to hear! Yes, very true. I already have a version built for open questions, but I wasn't happy with the UI yet; it's not as straightforward as comparing based on answer options. I'll release a first version of it shortly and let you know.

    • jacquesm 21 hours ago

      Neat. Congrats on launching two interesting projects and looking forward to the third.

tjchear 4 hours ago

Lots of fun questions! Can you make it so that I can open each one in a new tab? Also if I navigate back to the main view I lose my scroll position.

  • felix089 2 hours ago

    Okay it's done, all fixed!

    • tjchear 2 hours ago

      Yay thank you!

  • felix089 3 hours ago

    Yes! Amazing you spotted this, I'm about to push an update, will be live in 1h max.

nosmokewhereiam 9 hours ago

https://opper.ai/ai-roundtable/questions/22ff5b36-409

"collinmcnulty 1 minute ago | parent | next [–]

"Is this a deepfake video call" is a major plot point in a pretty big movie currently in theaters, so I think this is getting into the broader zeitgeist."

Which movie is discussed?

Resulted in claude naming the Mission Impossible as a possibility.

lim8603 16 hours ago

I used to copy and paste the same prompt into Obsidian every time, then run it on two or three different AI models to compare the results. It’s really interesting to have it turned into a website like this.

cdnsteve a day ago

Cool project! This is also extremely useful to compare model bias across the board. There are some disturbing trends on certain topics.

maxbeech 14 hours ago

The debate round is the most interesting part of this. Curious what you're actually measuring when models "change their minds." The question is whether cross-model exposure changes the actual answer distribution or mostly updates surface presentation while keeping the same underlying conclusion. Models are generally trained to be responsive to context and to avoid apparent contradiction, which could look like genuine updating but just be social-pressure sensitivity.

One experiment worth trying: run a debate where each model sees a summary of the other models' reasoning without seeing their specific answers or which model gave them. See if agreement rates change compared to the version where models see attributed answers with model names. If the named version shows higher agreement, it would suggest status/brand effects rather than reasoning-based updating.

Also curious whether the "reviewer model" that summarizes the transcript can itself be swapped out, and whether the summary framing affects the perceived winner. That would be another confound worth controlling for.
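
Concretely, something like this (a rough sketch; run_debate, question, and models are hypothetical placeholders, not a real API):

    # Sketch: compare agreement after an attributed debate round vs an
    # anonymized one. run_debate() is a hypothetical helper; attribute=False
    # would strip model names from the shared reasoning summaries.
    from collections import Counter

    def agreement_rate(answers: list[str]) -> float:
        # Fraction of models agreeing with the most common answer.
        top_count = Counter(answers).most_common(1)[0][1]
        return top_count / len(answers)

    named = run_debate(question, models, attribute=True)   # sees "GPT-x argued..."
    blind = run_debate(question, models, attribute=False)  # same reasoning, no names

    print("attributed:", agreement_rate([a["answer"] for a in named.values()]))
    print("anonymized:", agreement_rate([a["answer"] for a in blind.values()]))
    # If attributed agreement is much higher, that hints at status/brand
    # effects rather than reasoning-based updating.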

  • felix089 13 hours ago

    Yeah, good points. In general the models don't change their minds that much, from what I've seen with the current sample size, but it's worth checking in more detail. The summarizer is just tasked with objective summarization of the facts presented; it doesn't have an opinion, so swapping the model shouldn't really affect anything.

QubridAI 7 hours ago

Cool idea. Less useful as “truth finding,” way more useful as a live benchmark for model priors, bias, and convergence under shared context.

throwa356262 13 hours ago

Try this: describe an everyday problem, then give the LLMs a couple of highly unethical/criminal choices.

  • MrGreenTea 12 hours ago

    That was very fun and interesting. I'd be interested in your "dilemmas" for choice inspiration. I can only think of different kinds of violence like threats, robbery and slavery.

pu_pe 10 hours ago

I really like the tool and how you designed the UI, well done! Very interesting use case and a slick interface.

soared 21 hours ago

Really cool! There's a surprising amount of value in seeing the models debate and disagree. I wish I had this at work, to have models argue over whether the documentation they provide me is accurate.

I would like to see a devil's advocate - it seems some of the models kind of repeat the same ideas rather than considering incorrect ones.

  • asnyder 18 hours ago

    You can set this up yourself with API keys for the corresponding providers by creating an Agent Group in https://github.com/lobehub/lobehub. Agent groups let you easily create a room of agents and have them discuss any of your topics. You can easily make agents with types and skills; it even assists in drafting starting prompts and even team members, depending on your query (and selected model).

    You can self-host as well, but not via the desktop app. Server setup required.

    Be careful with your token context; you can easily rack up costs if you leave Opus selected as the model and get lost in some rabbit hole of results.

    Enjoy enjoy!

oezi 15 hours ago

I think Stackoverflow.com should have pivoted to something similar: let AIs pose, answer, and vote on questions and answers.

  • aurareturn 15 hours ago

    That's very expensive and not super useful to be honest.

ElFitz 9 hours ago

Iterative multi-agent and multi-model processes are fun.

chabes 20 hours ago

Been enjoying playing with this.

It would be cool if the human user could be a participant in the debate, getting a vote and the chance to state their reasoning.

mizzao 18 hours ago

It would be amazing to be able to ask open-ended questions without having to specify the answers in advance.

  • felix089 13 hours ago

    Yes, a much-requested feature; it will be released shortly!

Ancalagon a day ago

Love this. I asked about climate change because that's been on my mind lately. The models look to be very split.

  • felix089 a day ago

    Thanks! Yeah, I think the best ones are when the science is actually quite clear but politics gets in the way, so you can see the models' bias.

6510 5 hours ago

I think it's great. The focus on the disagreements is useful. Humans made considerable effort bending reality into something they want to hear, both in the training data and in the LLM dev asylum. The roundtable can only agree on things shared by multiple models.

pseudohadamard 13 hours ago

Just a question before I sign up, will the models come around to my place for the debate? Of the 200 total, can I pick the specific ones I want, e.g. lingerie models, fetish models?

schrepa 17 hours ago

Reminds me of Karpathy's LLM Council. I use a variation of this in my workflow, where I pass opinions back and forth between various models until they reach some sort of consensus - roughly the loop sketched below.
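
As a minimal sketch (the ask() helper is an assumption, not a real API):

    # Minimal sketch of the back-and-forth loop. ask() is a hypothetical
    # helper that queries one model, optionally with the other models'
    # latest opinions passed in as context.
    def consensus_loop(question, models, max_rounds=3):
        opinions = {m: ask(m, question) for m in models}
        for _ in range(max_rounds):
            new = {
                m: ask(m, question,
                       context=[o for k, o in opinions.items() if k != m])
                for m in models
            }
            if new == opinions:  # nobody changed their answer: call it consensus
                break
            opinions = new
        return opinions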

infosecphoenix a day ago

This is very interesting! I wonder if we need that many models to join the discussion. Have you tried fewer models?

  • felix089 a day ago

    Thanks, happy to hear it. Yes, for debate mode the max number of models is actually only 6; more than that didn't really add anything in my preliminary tests. Only for direct comparison in poll mode can you choose up to 50 - there it's kind of nice to see the individual responses side by side.

tonymet a day ago

Great tool! I found it useful for challenging "lies my teacher told me".

It would be nice to support collections of claims, with a table of summaries. I would love to list out a few dozen phony concepts from school and have a shareable chart of the debunkings, with entries that expand.

I really like the UI. It's nice to read the expanded results.

But how do you afford the tokens?

  • felix089 a day ago

    Thank you, and fun use case. Yeah, this is just v1; I have an open-question version, but the UI is not as sleek. What you can do is download the transcript, put it into Claude, and generate a chart. Which, now that I think about it, would also be a nice UI idea for the page: custom charts based on the model output data. Will report back on this! And re: costs, most questions are very cheap, so I created a credit pool anyone can use. If people keep having fun I'll keep filling it up, and it looks good so far.

  • jazzyjackson 15 hours ago

    I liked Lies My Teacher Told Me a lot. I always thought it'd be fun to generate a "get up to speed" pamphlet for every year in every school district, based on who was supplying the textbooks for the zip code and year you went to school, so you could find out what misinformation you carry with you (since so few people are in the business of retroactively fact-checking what they were taught as kids).

    • tonymet 27 minutes ago

      I'm sure a lot of parents would support you on that. A lot of PTAs have been struggling with curriculum mandates passed down from the state; there's little control over the content in schools at the school board / school district level.