Cynddl 2 days ago

Is it just me, or do they very carefully avoid reporting performance on GPT-5.4 Pro, only the default GPT-5.4? They also very carefully left Anthropic models out of their comparison.

I went back to the BixBench benchmark they mentioned. I couldn't find official results for Anthropic models, but I found a project taking Opus 4.6 from 65.3% to 92.0% (which would be above GPT-Rosalind) with nearly 200 carefully crafted skills [1]. There also appear to be competitor models with scores on par with this tuned GPT.

[1] https://github.com/jaechang-hits/SciAgent-Skills

  • jadusm 2 days ago

    BixBench seems like a really interesting/useful idea, but most of the value for a layperson (like me) is comparing the results of different models on the benchmark. From what I can find, there is no centralised, regularly updated set of model results. Shame.

furyofantares 2 days ago

I'm all for naming things in honor of Rosalind Franklin, but this seems like incredibly misplaced hubris instead.

  • peyton 2 days ago

    > GPT‑Rosalind is now available … for qualified customers …

    It’s kind of gross to make money off her name (if that’s what’s happening) posthumously. It’s a complicated story anyway. IIRC her sister referred to it as “the Cult of Rosalind” when people were cashing in on books about her.

    • bombcar 2 days ago

      I'd rather the AI companies make up names, or name their products things like "Clod", than use my name (if they were to ask), as no matter how good it looks today, eventually it'll be some form of laughingstock.

      • Sanzig 2 days ago

        Claude is most likely a nod to Claude Shannon, father of information theory and an early AI pioneer.

        • bombcar 2 days ago

          The real hubris will be to name a model Turing, or Alan if you’re a bit more discreet.

          • ben_w 1 day ago

            I had to double check they hadn't already done so; the GPT-3 models were called ada, babbage, curie, and davinci.

            • bombcar 1 day ago

              At least GPT is pretty "unique" and they've not polluted search (except for those looking for the GUID Partition Table, RIP).

              Any name you pick will immediately override anything that came before. Naming a model Socrates would confuse searches, for example (and it's why I hate the rename of iTunes to "Music", which is a generic term!).

an0malous 2 days ago

“GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert.”

Sam Altman, August 2025

https://www.bbc.com/news/articles/cy5prvgw0r1o

  • falcor84 2 days ago

    What of it?

    For me too, it was around that time last year, with GPT-5, Claude Sonnet 4.5 and then Gemini 3 that I started feeling that these models are clearly becoming great at reasoning. I'm not at all opposed to saying that they are around PhD-level on at least some domains.

    • kmaitreys 2 days ago

      I think there's a big difference between sounding like someone and being someone. The models are indeed excellent at pretending.

      • 0123456789ABCDE 2 days ago

        Exactly. This is what the whole RL thing is optimizing for, even if that's not the intent.

      • falcor84 1 day ago

        I don't think that sama was arguing that ChatGPT actually passed a PhD thesis defense. But arguably, it could make for an interesting benchmark.

        • kmaitreys 1 day ago

          Please do not get swayed by nor defend the words vomited by a snake oil salesman.

          Also, what benchmark? How will you design it?

huslage 2 days ago

I work for a life sciences company. It will be a long time before anyone trusts a generative model to do the actual science when mathematically provable models are as good as they are today. There is room for AI in the field, but it's not in the science directly.

  • oofbey 2 days ago

    What would be a good use of AI? Writing code to do the modeling?

    • ben_w 1 day ago

      Not yet, I think.

      Earlier this year I tried to do this for a much simpler target than bioscience, a Farnsworth fusor. Even though I started off with ~"which open source physics libraries do you recommend we use for this?" and it gave me a list, instead of actually bothering to use any of the libraries it suggested, it decided to roll its own simulation code, and the code it wrote very obviously didn't work.

      It may *assist* with coding, but I don't think it could code for them yet.

modeless 2 days ago

The voiceover in the promo video on this page seems to be AI generated, with some weird artifacts. Right at the beginning it sounds like it says "cormbiying structure daya retrieval and lirrachure search".

shwn2989 2 days ago

I prefer GPT-5 Pro, which I found to be expert at coding and reasoning.

tonfreed 2 days ago

Who's at fault when it suggests feeding someone cyanide?

  • falcor84 2 days ago

    > We want to make these capabilities available to the scientists and research organizations best positioned to advance human health, while maintaining strong safeguards against biological misuse. The Life Sciences model is launching through a trusted-access deployment structure for qualified Enterprise customers in the U.S. to start, with controls around eligibility, access management, and organizational governance.

    I'm absolutely ok with a legitimate lab scientist conducting biochemical research getting suggestions about substances that are generally considered dangerous but might be appropriate for their study, and it'll be up to the scientist to discern whether it is indeed appropriate to use.

jostmey 2 days ago

The real issue isn’t finding therapies but getting them tested in clinical trials

  • XenophileJKO 2 days ago

    I would argue that as long as we still have failed trials, there is room to improve trial vetting.

  • Gethsemane 1 day ago

    I somewhat agree, in that most of these life-science-adjacent demos are essentially "find good drug targets for $DISEASE", which mostly overfit to existing, well-classified drugs and targets. The biggest gains IMO will be in improved connectors with autonomous lab platforms, better sharing and annotation of relevant data sets, and yes, also improving the pathway to clinical trials.

    At the moment, it feels like releases like this overcommit and overpromise on "PhD level reasoning", which I wouldn't say is the absolute bottleneck in clinical research.

spwa4 1 day ago

If you have something like this, how about demonstrating (as opposed to claiming) what it can do in a way that really helps? Make a cheap vaccine against the new resistant forms of TB, or, if you truly want to impress, against HIV. DON'T get it approved at all; just publish how it would work, maybe with a simulation (so it can't be patented). This shouldn't even be so hard to allow: it's just really hard to make money on either of those vaccines, as rich first-world countries have little need for them (HIV, perhaps, but vaccines don't make much money, and a TB vaccine definitely doesn't), so you're not "getting in the way of business" by doing it.

Why? AI's reputation would be greatly improved by saving a few tens of millions of lives (per year, I might add). And either of those advances would do just that.

Oh, and another reason: do either of these things and you'll have very rich businesses coming out of every hole, screaming to become your customer. Guaranteed.