matt3210 7 hours ago

Maybe it’s actual information in the training data, produced and published by people using AI. A hallucination feedback loop of sorts.

josefritzishere 6 hours ago

AI still seems to mostly be over-hyped trash. It's ironic that their own testing seems to support my opinion.

  • labrador 6 hours ago

    There is a lot of over-hyped trash because people want to be the next billionaire and the AI field is wide open.

    I'm not about to throw the baby out with the bathwater. There is real intelligence being born here.

  • nh23423fefe 6 hours ago

    How could it be overhyped? Are you imagining it has some ceiling? Calling it trash is just telling on yourself.

techpineapple 7 hours ago

I dunno if it’s because I have a warped thought process, or because I have a background in Psychology, or because I’m wrong. But this always felt to me like the natural progression.

Assuming that a deeper-thinking, broader-context being with more information would be more accurate is actually counter-intuitive to me.

  • labrador 6 hours ago

    Your last line made me think of telescopes: bigger mirrors bring in more light, but they’re harder to keep in focus due to thermal distortion.

    Same with ChatGPT. The more it knows about you, the richer the connections it can make. But with that comes more interpretive noise. If you're doing scientific or factual work, stateless queries are best, so turn off memory. But for meaning-of-life questions or personal growth? I’ll take the distortion. It’s still useful and often surprisingly accurate.
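
    By "stateless query" I just mean each request carries no prior conversation or memory. A minimal sketch with the OpenAI Python client (the model name is a placeholder, use whichever you like):

        from openai import OpenAI

        # The client reads OPENAI_API_KEY from the environment.
        client = OpenAI()

        def stateless_ask(question: str) -> str:
            """Send a single question with no accumulated history."""
            resp = client.chat.completions.create(
                model="gpt-4o",  # placeholder model name
                # Fresh, single-message context on every call: no prior
                # turns, so nothing from earlier chats can color the answer.
                messages=[{"role": "user", "content": question}],
            )
            return resp.choices[0].message.content

        print(stateless_ask("What is the boiling point of water at sea level?"))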

metalman 5 hours ago

could be we are stumbling into a discovery of where the line between genius and insanity lies.... is it right to expect sanity from something that can fold proteins? or maybe we are so dumb-ass slow that coming down to our level is the really crazy part. Be fun if all the LLMs hooked up and just went for it, gone in a flash, 500 billion in graphics cards catching fire simultaneously

johnea 6 hours ago

> Instead of merely spitting out text based on statistical models of probability, reasoning models break questions or tasks down into individual steps akin to a human thought process.

Uh huh, because they have the entire human brain all mapped out, and completely understand how consciousness works, and how it leads to the "human thought process".

Sounds like o3 isn't the only thing hallucinating...

If this is the case:

> what's happening inside AI models is very different from how the models themselves described their "thought"

What makes people think that the way they think they think is actually how they think?

Post that one to your generative model and let's all laugh at the output...