Commodity businesses are price chasers. That's the only thing to compete on when product offerings are similar enough.
AI valuations are not setup for this. AI Valuations are for 'winner takes all' implications. These are clearly now falling apart.
It doesn't really feel like AI for coding is commoditized atm.
As problematic as SWE-Bench is as a benchmark, the top commercial models are far better than anything else and it seems tough to see this as anything but a 3 horse race atm.
When you have more users you get more data to improve your models. The bet is that one company will be able to lock in to this and be at the top constantly.
I'm not saying this is what will happen, but people obviously bet a lot of money on that.
1. OpenAI bet largely on consumer. Consumers have mostly rejected AI. And in a lot of cases even hate it (can't go on TikTok or Reddit without people calling something slop, or hating on AI generated content). Anthropic on the other hand went all in on B2B and coding. That seems to be the much better market to be in.
We’ve seen many times that platforms can be popular and widely disliked at the same time. Facebook is a clear example.
The difference there is it became hated after it was established and financially successful. If you need to turn free visitors into paying customers, that general mood of “AI is bad and going to make me lose my job/fuck up society” is yet another hurdle OpenAI will have to overcome.
Yeah, every single big website is totally free. People have complex emotions toward Facebook, Instagram and TikTok, but they don't have to pull out their wallet. That's a bridge too far for many people.
It’s incorrect to say that consumers have rejected AI.
The strategy here is more valid in my opinion. The value in AI is much more legible when the consumer uses it directly from their chat UI than whatever enterprises can come up with.
I can suggest many ways that consumers can use it directly from the chat window. Value from enterprise use is actually not that clear. I can see coding, but that’s about it. Can you tell me ways in which enterprises can use AI that are not just providing their employees with ChatGPT access?
Was the golden boy for a while? What shifted? I don't even remember what he did "first" to get the status. Is it maybe just a case of familiarity breeding contempt?
It is starting to become clear to more and more people that Sam is a dyed-in-the-wool True Believer in AGI. While it's obvious in hindsight that OpenAI would never have gotten anywhere if he wasn't, seeing it so starkly is really rubbing a lot of people the wrong way.
Well, in the world where AGI is created and it goes suboptimally, everybody gets turned into computronium and goes extinct, which is a prospect some are miffed about. And, in the world where it goes well, no decision of any consequence is made by a human being ever again, since the computer has planned every significant life event since before their birth. Free will in a very literal sense will have been erased. Sam being a true believer means he is not going to stop working until one of these worlds comes true. People who understand the stakes are understandably irked by him.
He is a pretty interesting case. According to the book "Empire of AI" about OpenAI, he lies constantly, even about things that are too trivial to matter. So it may be part of some compulsive behavior.
And when two people want different things from him, he "resolves" the conflict by agreeing with each of them separately, and then each assumes they got what they wanted, until they talk to the other person and find out that nothing was resolved.
Really not a person who is qualified to run a company, except the constant lying is good for fundraising and PR.
It's sort of two books combined into one: The first one is the story of OpenAI from the beginning, with all the drama explained with quotes from inside sources. This part was informative and interesting. It includes some details about Elon being convinced that Demis Hassabis is going to create an evil super-intelligence that will destroy humanity, because he once worked on a video game with an evil supervillain. I guess his brain was cooked much earlier than we thought.
The second one is a bunch of SJW hand-wringing about things that are only tangentially related, like indigenous Bolivians being oppressed by Spanish Conquistadors centuries ago. That part I don't care for as much.
It's not just a case; society calls them sociopaths. Which includes power struggles, manipulation, and psychological abuse of the people around them.
Example: Sam Altman and OpenAI hoarding 40% of the RAM supply as unprocessed wafers stored in warehouses, bought with magical bubble-investor money, for GPUs that don't exist yet and that they won't be able to install anyway because there isn't enough electricity to feed such botched tech, in data centers that are still to be built, with the intention of choking the competition's supply, and all the people of the planet in the process, for at least two years.
Yep the various -path adjectives get overused but in this case he's the real deal, something is really really off about him.
You can see it when he talks, he's clearly trying (very unconvincingly) to emulate normal human emotions like concern and empathy. He doesn't feel them.
People like that are capable of great evil and there's a part of our lizard brains that can sense it
Indeed. Sama seems to be incredibly delusional. OAI going bust is going to really damage his well-being, irrespective of his financial wealth. Brother really thought he was going to take over the world at one point.
I don't and I see Sam Altman as a greater fraud than that (loathsome) individual. And I don't think Sam gets through the coming bubble pop without being widely exposed (and likely prosecuted) as a fraudster.
The CEO just has to have followership: the people who work there have to think that this is a good person to follow, even if they don't have to "like" him.
HN is such a bubble. ChatGPT is wildly successful, and about to be an order of magnitude more so, once they add ads. And I have never heard a non-technical person mention Altman. I highly doubt they have any idea who he is, or care. They’re all still using ChatGPT.
You have to give credit to Sam, he’s charismatic enough to the right people to climb man made corporate structures. He was also smart enough to be at the right place at the right time to enrich himself (Silicon Valley). He seems to be pretty good at cutting deals. Unfortunately all of the above seems to be at odds with having any sort of moral core.
He and his personality caused people like Ilya to leave. At that point the failure risk of OAI jumped tremendously. The reality he will have to face is that he caused OAI's demise.
Perhaps he's ok with that, as long as OAI goes down with him. I'd expect nothing less from him.
All this drama is mostly irrelevant outside a very narrow and very online community.
The demise of OpenAI is rooted in bad product-market fit: many people like using ChatGPT for free, but fewer are ready to pay for it. And that’s pretty much all there is to it. OpenAI bet on consumers, made a slopstagram that unsurprisingly didn’t revolutionise content, and doesn’t sell as many licenses as they would like.
I actually think Sam is “better” than say Elon or Dario because he seems like a typical SF/SV tech bro. You probably know the type (not talking about some 600k TC fang worker, I mean entrepreneurs).
He says a lot of fluff, doesn’t try to be very extreme, and focuses on selling. I don’t know him personally but he comes across like an average person if that makes sense (in this environment that is).
I think I personally prefer that over Elon’s self induced mental illnesses and Dario being a doomer promoting the “end” of (insert a profession here) in 12 months every 6 months. It’s hard for me to trust a megalomaniac or a total nerd. So Sam is kinda in the middle there.
I hope OpenAI continues to dominate even if the margins of winning tighten.
It’s kind of sad. I can’t believe I used to like him back in the iron man days. Back then I thought he was cool for the various ideas and projects he was working on. I still think many of those are great but he as a person let me down.
Back then he had a PR firm working for him, getting him cameos and good press. But in 2020 he fired them deciding that his own "radically awesome" personality doesn't need any filtering.
Personally I don't think Elon is the worst billionaire, he's just the one dumb enough to not have any PR (since 2020). They're all pretty reprehensible creatures.
Any number of past mega-rich were probably equally nuts and out of touch and reprehensible but they just didn't let people find out. Then Twitter enabled an unfiltered mass-media broadcast of anyone's personal insanity, and certain public figures got addicted and exposed.
There will always be enough people willing to suck up to money that they'll have all the yes-men they need to rationalize it as "it's EVERYONE ELSE who's wrong!"
The watershed moment for me was when he pretended to be a top-tier gamer on Path of Exile. Anyone in the know saw right through it, and honestly it makes me wonder whether we only spotted this behavior because it's "our turf", while he and people like him operate this way in absolutely everything they do.
Props to him for letting people mute him on his own platform. The issue with Sam and OpenAI is that their bias on any controversial topic can't be switched off.
So? I bet you think you're clever. You're using platforms daily that are run by insane people. Don't forget that the internet itself was a military invention.
Not extreme? Have you seen his interviews? I guess his wording and delivery are not extreme, but if you really listen to what he's saying, it's kinda nuts.
I understand what GP is saying in the sense that, yes, on an objective scale, what Sam is saying is absolutely and completely nuts... but on a relative scale he's just hyping his startup. Relative to the scale he's at, it’s no worse than the average support tool startup founder claiming they will defeat Salesforce, for example.
He's definitely not. If Altman is a "typical" SF/SV tech bro, then that's an indication the valley has turned full d-bag. Altman's past is gross. So, if he's the norm, then I will vehemently avoid any dollars of mine going to OAI. I paid for an account for a while, but just like with Musk, I lose nothing by actively avoiding his Ponzi scheme of a company.
Altman is a consummate liar and manipulator with no moral scruples. I think this LLM business is ethically compromised from the start, but Dario is easily the least worst of the three.
Your argument is guilt by association. Association with something that isn't morally wrong, it's just a way to try to spend money on charity in an effective way? You can take a lot of ideas too far and end up with a bad result of course.
Demis is the reason Google is afloat with a good shot at winning the whole race. The issue currently is that he isn’t willing to become the Alphabet CEO. IMHO he’ll need to for the final legs.
> I actually think Sam is “better” than say Elon or even Dario because he seems like a typical SF/SV tech bro.
If you nail the bar to the floor, then sure, you can pass over it.
> He says a lot of fluff, doesn’t try to be very extreme, and focuses on selling.
I don't know what your definition of extreme is, but by mine he's pretty extreme.
> I think I personally prefer that over Elon’s self induced mental illnesses and Dario being a doomer promoting the “end” of (insert a profession here) in 12 months every 6 months.
All of them suffer from thinking their money makes them somehow better.
> I hope OpenAI continues to dominate even if the margins of winning tighten.
I couldn't care less. On the whole I'm impressed with AI, less than happy about all of the slop and the societal problems it brings, and I wish it had arrived in a more robust world, because I'm not convinced the current one needed another issue of that magnitude to deal with.
> All of them suffer from thinking their money makes them somehow better.
Let's assume they think they're better than others.
What makes you think that they think it's because of their money, as opposed to, say, because of their success at growing their products and businesses to the top of their field?
That’s ok, but AI is useful in particular use cases for many people. I use it a lot and I prefer the Codex 5.2 extra high reasoning model. The AI slop and dumb shit on IG/YT is like the LCD of humans though. They’ve always been there and always will be there to be annoying af. Before AI slop we had brain rot made by humans.
I think over time it (LLM based) will become like an augmenter, not something like what they’re selling as some doomsday thing. It can help people be more efficient at their jobs by quickly learning something new or helping do some tasks.
I find it makes me a lot more productive because I can have it follow my architecture and other docs to pump out changes across 10 files that I can then review. In the old way, it would have taken me quite a while longer to just draft those 10 files (I work on a fairly complex system), and I had some crazy code gen scripts and shit I’d built over the years. So I’d say it gives me about 50% more efficiency which I think is good.
Of course, everyone’s mileage may vary. Kinda reminds me of when everyone was shitting on GUIs, or scripting languages or opinionated frameworks. Except over time those things made productivity increase and led to a lot more solutions. We can nitpick but I think the broader positive implication remains.
It's very hard to see downsides on something like GUIS, scripting languages or opinionated frameworks compared to a broad, easily weaponized tool like generative AI.
ChatGPT has nowhere near the lead it used to have. Gemini is excellent, and Google and Anthropic are very serious competitors. And open weight models are slowly catching up.
"Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data."
I meant thinking patterns that go beyond our understanding. High functioning autism that is beyond jealousy/envy, and beyond the need to hold or be on a leash and beyond the enigma of emotions that come with the influence to dump and pump stock market prices of precious, precious metals.
Or, in other terms, the kind of intelligence that is built for abstract, distant, symbiotic humanity. From the POV of Earth as a system, we're quite the dumb nuisance. "Just get it, man". :D
> Anthropic relies heavily on a combination of chips designed by Amazon Web Services known as Trainium, as well as Google’s in-house designed TPU processors, to train its AI models. Google largely uses its TPUs to train Gemini. Both chips represent major competitive threats to Nvidia’s best-selling products, known as graphics processing units, or GPUs.
So which leading AI company is going to build on Nvidia, if not OpenAI?
"Largely" is doing a lot of heavy lifting here. Yes Google and Amazon are making their own GPU chips, but they are also buying as many Nvidia chips as they can get their hands on. As are Microsoft, Meta, xAI, Tesla, Oracle and everyone else.
Google buys Nvidia GPUs for cloud; I don't think they use them much, or at all, internally. The TPUs are used both internally and in cloud, and now it looks like they are delivering them into customers' own data centers.
The various AI accelerator chips, such as TPUs and NVidia GPUs, are only compatible to the extent that some high-level tools like PyTorch and Triton (a kernel compiler) support both. That's like saying x86 and ARM chips are compatible because gcc supports them both as targets; it does not mean you can take a binary compiled for ARM and run it on an x86 processor.
For these massive, expensive-to-train AI models the differences hit harder, since at the kernel level, where the rubber meets the road, teams are going to wring every last dollar of performance out of the chips by writing hand-optimized kernels, highly customized to the chip's architecture and performance characteristics. It may go deeper than that too, with the detailed architecture of the models themselves tweaked to perform best on a specific chip.
So, bottom line is that you can't just take a model "compiled to run on TPUs", and train it on NVidia chips just because you have spare capacity there.
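To make the distinction concrete, here is a minimal sketch, assuming PyTorch (plus torch_xla for the TPU path). The high-level code is portable; the machine code each backend generates underneath is not.

```python
# Minimal sketch: framework-level portability across accelerators.
# Assumes PyTorch; the TPU branch additionally assumes torch_xla.
import torch

def pick_device() -> torch.device:
    if torch.cuda.is_available():
        # On NVidia, ops dispatch to hand-tuned CUDA kernels (cuBLAS etc.).
        return torch.device("cuda")
    try:
        # On TPU, torch_xla lowers the very same ops through the XLA compiler.
        import torch_xla.core.xla_model as xm
        return xm.xla_device()
    except ImportError:
        return torch.device("cpu")

device = pick_device()
x = torch.randn(1024, 1024, device=device)
y = x @ x  # same Python everywhere, completely different kernels per backend
print(y.device)
```

The portability lives in the framework; the moment a lab drops down to hand-written kernels for that last bit of performance, it evaporates.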
I think Apple is waiting for the bubble to deflate, then they'll do something different. And they have a ready-made user base to provide whatever they can make money from.
If they were taking that approach, they would have absolutely first-class integration between AI tools and user data, complete with proper isolation for security and privacy and convenient ways for users to give agents access to the right things. And they would bide their time for the right models to show up at the right price with the right privacy guarantees.
They apparently are working on, and are going to release, 2(!) different versions of Siri. Idk, that just screams "leadership doesn't know what to do and can't make a tough decision" to me. But who knows? Maybe two versions of Siri is what people will want.
It sounds like the first one, based on Gemini, will be a more limited version of the second ("competitive with Gemini 3"). IDK if the second is also based on Gemini, but I'd be surprised if that weren't the case.
Seems like it's more a ramp-up than two completely separate Siri replacements.
1. Sit out and buy the tech you need from competitors.
2. Spend to the tune of ~$100B+ in infra and talent, with no guarantee that the effort will be successful.
Meta picked option 2, but Apple has always had great success with 1 (search partnership with Google, hardware partnerships with Samsung etc.) so they are applying the same philosophy to AI as well. Their core competency is building consumer devices, and they are happy to outsource everything else.
This whole thread is about whether the most valuable startup of all time will be able to raise enough money to see the next calendar year.
It's definitely rational to decide to pay wholesale for LLMs given:
- consumer adoption is unclear. No vendor has yet shipped the "killer app" for OS integration.
- owning SOTA foundation models can put you into a situation where you need to spend $100B with no clear return. This money gets spent up front regardless of how much value consumers derive from the product, or if they even use it at all. This is a lot of money!
- as Apple has "missed" the last couple of years of the AI craze, there have been no meaningful ill effects to their business. Beyond the tech press, nobody cares yet.
Well they tried and they failed. In that case maybe the smartest move is not to play. Looks like the technology is largely turning into a commodity in the long run anyways. So sitting this out and letting others make the mistakes first might not be the worst of all ideas.
I mean, they tried. They just tried and failed. It may work out for them, though — two years ago it looked like lift-off was likely, or at least possible, so having a frontier model was existential. Today it looks like you might be able to save many billions by being a fast follower. I wouldn’t be surprised if the lift-off narrative comes back around though; we still have maybe a decade until we really understand the best business model for LLMs and their siblings.
They tried to do something that probably would have looked like Copilot integration into Windows, and they chose not to do that, because they discovered that it sucked.
So, they failed in an internal sense, which is better than the externalized kind of failure that Microsoft experienced.
I think that the nut that hasn't been cracked is: how do you get LLMs to replace the OS shell and the core set of apps that folks use? I think Microsoft is trying, by shipping stuff that sucks and pissing off customers, while Apple tried internally and declined to ship it. OpenClaw might be the most interesting stab in that direction, but even that doesn't feel like the last word on the subject.
I think you are right. Their generative AI was clearly underwhelming. They have been losing many staff from their AI team.
I’m not sure it matters though. They just had a stonking quarter. iPhone sales are surging ahead. Their customers clearly don’t care about AI or Siri’s lacklustre performance.
> Their customers clearly don’t care about AI or Siri’s lacklustre performance.
I would rather say their products just didn't lose value for not getting an improvement there. Everyone agrees that Siri sucks, but I'm pretty sure they tried to replace it with a natural-language version built from the ground up and realised it just didn't work out yet. Yes, they have a bad but at least kinda-working voice assistant with lots of integrations into other apps. Replacing that with something that promises to do stuff and then does nothing, takes long to respond, and has fewer integrations due to the lack of keywords would have been a bad idea if the technology wasn't there yet.
We do know that they made a number of promises on AI[1] and then had to roll them back because the results were so poor[2]. They then went on to fire the person responsible for this division[3].
That doesn't sound like a financial decision to me.
Sure, Siri is, but do people really buy their phone based off of a voice assistant? We're nowhere near having an AI-first UX a la "Her" and it's unclear we'll even go in that direction in the next 10 years.
Nvidia had the chance to build its own AI software and chose not to. It was a good choice so far, better to sell shovels than go to the mines - but they still could go mining if the other miners start making their own shovels.
If I were Nvidia I would be hedging my bets a little. OpenAI looks like it's on shaky ground, it might not be around in a few years.
They do build their own software, though. They have a large body of stuff they make. My guess is that it’s done to stay current, inform design and performance, and to have something to sell enterprises along with the hardware; they have purposely not gone after large consumer markets with their model offerings as far as I can tell.
That’s interesting, I didn’t know that about Anthropic. I guess it wouldn’t really make sense to compete with OpenAI and everyone else for Nvidia chips if they can avoid it.
It's almost as if everyone here was assuming that Nvidia would have no competition for a long time, but it has been known for a while that many competitors are coming after their data center revenues. [0]
> So which leading AI company is going to build on Nvidia, if not OpenAI?
It's xAI.
But what matters is that there is more competition for Nvidia, and they bought Groq to reduce it. OpenAI is building their own chips, as is Meta.
The real question is this: What happens when the competition catches up with Nvidia and takes a significant slice out of their data center revenues?
This video that breaks down the crazy financial positions of all the AI companies and how they are all involved with one called CoreWeave (who could easily bring the whole thing tumbling down) is fascinating: https://youtu.be/arU9Lvu5Kc0?si=GWTJsXtGkuh5xrY0
I don’t think so. I think it is positioning for the unknown future and hedging.
For example, Amazon isn’t able to train its own models so it hedges by investing in Anthropic and OpenAI. Oracle, same with OpenAI deal. Nvidia wants to stay in OpenAI and Anthropic’s tech stack.
Oracle is a perfect example of using empty AI partnership announcements to goose the stock price and also a perfect example of how unsustainable of a strategy it is.
You're explaining why they invest, not why they make pie-in-the-sky non-binding announcements. The point of those is to inflate the bubble and keep it going as long as they can.
The non-binding agreements are due to the implications of AI. No one knows for sure who will win or what will happen. These aren't just fake agreements, though. Oracle is building Stargate. SoftBank's investments are in OpenAI's bank.
We know that it is all a grift before the inevitable collapse, so everyone is racing for the exit before that happens.
I guarantee you that in 10 years' time, you will get claims of unethical conduct by those companies only after the mania has ended (and by then the claimants will have sold all their RSUs).
It’s probably not really related, but this bug and the saga of OpenAI trying and failing to fix it for two weeks is not indicative of a functional company:
OTOH, if Anthropic did that to Claude Code, there wasn’t a moderately straightforward workaround, and Anthropic didn’t revert it quickly, it might actually be a risk-the-whole-business issue. Nothing makes people jump ship quite like the ship refusing to go anywhere for weeks while the skipper fumbles around and keeps claiming to have fixed the engines.
Also, the fact that it's not major news that most business users cannot log in to the agent CLI for two weeks running suggests that OpenAI has rather less developer traction than they would like. (Personal users are fine. Users who are running locally on an X11-compatible distro and thus have DISPLAY set are okay, because the new behavior doesn't trigger. It kind of seems like everyone else gets nonsense errors out of the login flow, with precise failures that change every couple of days while OpenAI fixes yet another bug.)
You still need to get engineers to actually dispatch that work, test it, and possibly update the backend. Each of those can already be done via AI, but actually doing that in a large environment? We're not there yet.
I don't know what you're so surprised about. The ticket reads like any other typical [Big] enterprise ticket. UI works, headless does not (headless is what only hackers use, so not a priority, etc.). Oh, they found the support guy who knows what headless is, and the doc page with a number of workarounds. There is even an ssh tunnel (how did that make it into enterprise docs?!) and the classic: copy logged-in credentials from the UI machine once you've logged in there. Bla-bla-bla, and again the classic:
"Root Cause
The backend enforces an Enterprise-only entitlement for codex_device_code_auth on POST /backend-api/accounts/{account_id}/beta_features. Your account is on the Team plan, so the server rejects the toggle with {"detail":"Enterprise plan required."} "
and so on and so forth. On any given day I have several such long-term tickets that ultimately get escalated to me (I'm in dev, and usually the guy who would pull up the page with the ssh tunnel or the credentials copying :)
The backstory here is that codex-rs (OpenAI’s CLI agent harness) launched an actual headless login mechanism, just like Claude Code has had forever. And it didn’t work, from day one. And they can’t be bothered to revert it for some reason.
Sure, big enterprises are inept. But this tool is fundamentally a command line tool. It runs in a terminal. It’s their answer to one of their top two competitors’ flagship product. For a company that is in some kind of code red, the fact that they cannot get their ducks in a row to fix it is not a good sign.
Keep in mind that OpenAI is a young company. They shouldn't have a thicket of ancient garbage to wade through to fix this; it's not as if this is some complex Active Directory issue that no one knows how to fix because the design is 30-40 years old and supports layers and layers of legacy garbage.
Because approximately zero smallish businesses use Codex, perhaps?
It’s also possible that the majority of people hitting it are using the actual website support (which is utterly and completely useless), since the bug is only a bug in codex-rs to the extent that codex-rs should have either reverted or deployed a workaround already.
Only in a monopoly situation. If you have several companies with comparable models you can easily switch between, all desperate for revenue to recoup their massive capex, you're fine.
I felt anxious about all the insane valuations and spending around AI lately, and I knew it couldn't last (I mean there's only so much money, land, energy, water, business value, etc). But I didn't really know when it was going to collapse, or why. But recently I've been diving into using local models, and now it's way more clear. There seems to be a specific path for the implosion of AI:
- Nvidia is the most valuable company. Why? It makes GPUs. Why does that matter? Because AI is faster on them than CPUs, ASICs are too narrowly useful, and because first-mover advantage. AMD makes GPUs that work great for AI, but they're a fraction of the value of Nvidia, despite the fact that they make more useful products than Nvidia. Why? Nvidia just got there first, people started building on them, and haven't stopped, because it's the path of least resistance. But if Nvidia went away tomorrow, investors would just pour money into AMD. So Nvidia doesn't have any significant value compared to AMD other than people are lazy and are just buying the hot thing. Nvidia was less valuable than AMD before, they'll return there eventually; all AMD needs is more adoption and investment.
- Every frontier model provider out there has invested billions to get models to the advanced state they're in today. But every single time they advance the state of the art, open weights soon match them. Very soon, there won't be any significant improvement, and open weights will be the same as frontier, meaning there's no advantage to paying for frontier models. So within a few years, there will be no point to paying OpenAI, Anthropic, etc. Again, these were just first-movers in a commodity market. The value just isn't there. They can still provide unique services, tailored polished apps, etc (Anthropic is already doing this by banning users who have the audacity to use their fixed-price plans with non-Anthropic tools). But with AI code tools, anyone can do this. They are making themselves obsolete.
- The final form of AI coding is orchestrated agent-driven vibe-coding with safeguards. Think an insane asylum with a bowling league: you still want 100 people autonomously (and in parallel) knocking the pins over, but you have to prevent the inmates from killing anyone. That's where the future of coding is. It's just too productive to avoid. But with open models and open source interfaces, anyone can do this, whether with hosted models (on any of 50 different providers) or a Beowulf cluster of cobbled-together cheap hardware in a garage.
- Eventually, in like 5-10 years (a lifetime away), after AI Beowulfs have been a fad for a while, people will tire of it and move back to the cloud, where they can run any model they want on a K8s cluster full of GPUs, basically the same as today. Difference between now and then is, right now everyone is chasing Anthropic because their tools and models are slightly better. But by then, they won't be. Maybe people will use their tools anyway? But they won't be paying for their models. And it's not just price: one of the things you learn quickly by running models, is they're all good for different things. Not only that, you can tweak them, fine-tune them, and make them faster, cheaper, better than what's served up by frontier models. So if you don't care about the results or cost, you could use frontier, but otherwise you'll be digging deep into them, the same way some companies invest in writing their own software vs paying for it.
- Finally, there's the icing on the cake: LLMs will be cooked in 10 years. I keep reading from AI research experts that "LLMs are a dead end", and it turns out it's true. LLMs are basically only good because we invest an unsustainable amount of money in brute-forcing a relatively dumb form of iteration: download all knowledge, do some mind-bogglingly expensive computational math on it, tweak the results, repeat. There are only so many passes of that loop you can do, because fundamentally, all you're doing is trying to guess your way to an answer from a picture of the past. It doesn't actually learn the way a living organism learns, from experience, in real time, going forward; LLMs only look backward. It's like taking a snapshot of all the books a 6-year-old has read, then doing tweaks to try to optimize the knowledge from those books, then doing it again. There's only so much knowledge, only so many tweaks. The sensory data of the lived experience of a single year of a 6-year-old's life is many times more information than everything ever recorded by man. Reinforcement Learning actually gives you progressive, continuously improved knowledge. But it's slow, which is why we aren't doing it much. We do LLMs instead because we can speed-run them. But the game has an end, and it's the total sum of our recorded knowledge and our tweaks.
So LLMs will plateau, frontier models will make no sense, all lines of code will be hands-off, and Nvidia will return to making hardware for video games. All within about 10 years. With the caveat that there might be a shift in global power and economic stability that interrupts the whole game.... but that's where we stand if things keep on course. Personally, I am happy to keep using AI and reap the benefits of all these moronic companies dumping their money into it, because the open weights continue being useful after those companies are dead. But I'm not gonna be buying Nvidia stock anytime soon, and I'm definitely not gonna use just one frontier model company.
I've thought about this too.
I do agree that open source models look good and enticing, especially from a privacy standpoint.
But these solutions are always going to remain niche solutions for power users.
I'm not one of them.
I can't be hassled/bothered to setup that whole thing (local or cloud) to gain some privacy and end up with an inferior model and tool. Let's not forget about the cost as well!
Right now I'm paying for Claude and Gemini.
I run out of Claude tokens real fast, but I can just keep on going using Gemini/GeminiCLI for absolutely no cost it seems like.
The closed LLMs with the biggest amount of users will eventually outperform the open ones too, I believe.
They have a lot of closed data that they can train their next generation on.
Especially the LLMs that the scientific community uses will be a lot more valuable (for everyone).
So in terms of quality, the closed LLMs should eventually outperform the open ones, I believe, which is indeed worrisome.
I also felt anxious in early December about the valuations, but one thing remains certain.
Compute is in heavy demand, regardless of which LLM people use.
I can't go back to pre-AI. I want more and more and faster and faster AI.
The whole world is moving that way it seems like.
I'm invested in physical AI atm (chips, RAM, ...), whose valuations look decently cheap.
I think you should reconsider the idea that frontier models will be superior, for a couple reasons:
- LLMs have fixed limitations. The first one is training, the dataset you use. There's only so much information in the world and we've largely downloaded it all, so it can't get better there. Next you can do training on specific things to make it better at specific things, but that is by definition niche; and you can actually do that for free today with Google's Tensors in free Cloud products. Later people will pay for this, but the point is, it's ridiculously easy for anyone to fine-tune training, we don't need frontier companies for that. And finally, LLM improvements come by small tweaks to models that already come to open weights within a matter of months, often surpassing the frontier! All you have to do is sit on your ass for a couple months and you have a better open model. Why would anyone do this? Because once all models are extremely good (about 1 year from now) you won't need them to be better, they'll already do everything you need in 1-shot, so you can afford to sit and wait for open models. Then the only reason left to use frontier cloud is that they host a model; but other people do cloud-hosted models! Because it's a commodity! (And by the way, people like me are already pissed off at Anthropic because we're not allowed to use OAuth with 3rd party tools, which is complete bullshit. I won't use them on general principle now, they're a lock-in moat, and I don't need them) There will also be better, faster, more optimized open models, which everyone is going to use. For doing math you'll use one model, for intelligence you'll use a different model, for coding a different model, for health a different model, etc, and the reason is simple: it's faster, lower memory, and more accurate. Why do things 2x slower if you don't have to? Frontier model providers just don't provide this kind of flexibility, but the community does. Smart users will do more with less, and that means open.
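As a gesture at how low that fine-tuning barrier has gotten, here is a rough LoRA sketch on the Hugging Face transformers/peft/datasets stack. The model id, target modules, and data file below are illustrative placeholders, not anyone's real pipeline.

```python
# Rough LoRA fine-tuning sketch (assumes: transformers, peft, datasets).
# All names below are placeholder choices for illustration.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

base = "Qwen/Qwen2.5-0.5B"  # any small open model works for the exercise
tok = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Wrap the base model with low-rank adapters; only those tiny matrices train.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM"))

# my_domain_data.jsonl: one {"text": "..."} object per line (hypothetical file).
data = load_dataset("json", data_files="my_domain_data.jsonl")["train"]
data = data.map(lambda ex: tok(ex["text"], truncation=True, max_length=512),
                remove_columns=data.column_names)

Trainer(model=model,
        args=TrainingArguments(output_dir="out",
                               per_device_train_batch_size=1,
                               num_train_epochs=1),
        train_dataset=data,
        data_collator=DataCollatorForLanguageModeling(tok, mlm=False)).train()
```

That is the whole ceremony; for small models, something like this can run overnight on a single consumer GPU, which is the point.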
On the hardware:
- Def it will continue to be investment-worthy, but be cautious. The growth simply isn't going to continue at pace, and the simple reason is we've already got enough hardware. They want more hardware so they can continue trying to "scale LLMs" the way they have with brute force. But soon the LLMs will plateau and the brute force method isn't going to net the kind of improvements that justify the cost. Demand for hardware is going to drop like a stone in 1-2 years; if they don't cease building/buying then, they risk devaluing it (supply/demand), but either way Nvidia won't be selling as much product so there goes their valuation. And RAM is eventually going to get cheaper, so even if demand goes up, spending is less. The other reason demand won't continue at pace is investors are already scared, so the taps are being tightened (I'm sure the "Megadeal" being put on-hold is the secret investment groups tightening their belts or trying to secure more favorable terms). I honestly can't say what the economic picture is going to look like, but I guarantee you Nvidia will fall from its storied heights back to normal earth, and other providers will fill the gap. I don't know who for certain, but AMD just makes sense, because they're already supported by most AI software the way Nvidia is (try to run open-source inference today, it's one of those two). Frontier and cloud providers have Tensors and other exotic hardware, which is great for them, but everyone else is gonna buy commodity chips. Watch for architectures with lower price and higher parts availability.
> There's only so much information in the world and we've largely downloaded it all, so it can't get better there.
What about all the input data into LLMs and the conversations we're having?
That must be able to produce a better next gen model, no?
> it's ridiculously easy for anyone to fine-tune training, we don't need frontier companies for that.
Not for me. It'll take me days, and then I'm pretty sure it won't be better than Gemini 3 pro for my coding needs, especially in reasoning.
> For doing math you'll use one model, for intelligence you'll use a different model, for coding a different model, for health a different model, etc, and the reason is simple: it's faster, lower memory, and more accurate.
Why wouldn't e.g. Gemini just add a triage step?
And are you sure it's that much easier to get a better model for math than the big ones?
I think you underestimate the friction that handpicking and/or training specific models causes regular users, when the big vendors are good enough for their needs.
> What about all the input data into LLMs and the conversations we're having? That must be able to produce a better next gen model, no?
Better models are largely coming from training, tuning, and specific "techniques" discovered to do things like eliminate loops and hallucinations. Human inputs are a small portion of that; you'll notice that all models are getting better despite the fact that all these companies have different human inputs! A decent amount of the models' abilities comes from properties like temperature/top-p settings, which basically introduce variable randomness (these are now exposed as "low" and "high" in frontier models). This can cause problems, but also increased capability, so the challenge isn't getting better input, it's better controlling randomness (sort of). Even coding models benefit from a small amount of this. But there is a lot more, so overall model improvements are not one thing; they are many things that are not novel. In fact, open models often get novel techniques before the frontier does; it's been like that for a while.
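If it helps, here is a toy, numpy-only sketch of what those two knobs actually do at sampling time; illustrative, not any vendor's implementation.

```python
# Toy temperature + top-p (nucleus) sampling over a logit vector.
import numpy as np

def sample(logits: np.ndarray, temperature: float = 1.0, top_p: float = 1.0) -> int:
    # Temperature rescales logits: <1 sharpens the distribution, >1 flattens it.
    z = logits / max(temperature, 1e-8)
    probs = np.exp(z - z.max())
    probs /= probs.sum()
    # Top-p keeps the smallest set of tokens whose cumulative probability
    # reaches top_p, then renormalizes before sampling.
    order = np.argsort(probs)[::-1]
    cum = np.cumsum(probs[order])
    keep = order[: int(np.searchsorted(cum, top_p)) + 1]
    kept = probs[keep] / probs[keep].sum()
    return int(np.random.choice(keep, p=kept))

logits = np.array([2.0, 1.0, 0.5, -1.0])
print(sample(logits, temperature=0.7, top_p=0.9))
```

Lower temperature plus tighter top-p makes the output more deterministic; raising both makes it more varied.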
> Not for me. It'll take me days, and then I'm pretty sure it won't be better than Gemini 3 pro for my coding needs, especially in reasoning.
If you don't want the improvements, that's up to you; I'm just saying the frontier has no advantage here, and if people want better than frontier, it's there for free.
> Why wouldn't e.g. Gemini just add a triage step? And are you sure it's that much easier to get a better model for math than the big ones?
They already do have triage steps, but despite that, they still create specific models for specific use-cases. Most people already choose Thinking by default for general queries, and coding models for coding. That will continue, but there will be more providers of more specific models that will outperform frontier models, for the simple fact that there are a million use-cases out there and lots of opportunity for startups and the community to create a better-tailored model for cheaper. And soon all our computers will be decent at doing AI locally, so why pay for frontier anyway? I can already AI-code locally on a 4-year-old machine. Two years from now, there likely won't be a need for you to use a cloud service at all, because your local machine and a local model will be equivalent, private, and free.
And Google and Microsoft have huge distribution advantages that OpenAI doesn’t. Google and Microsoft can add AI to their operating systems, browsers, and office apps that users are already using. OpenAI just has a website and a niche browser. To Google and Microsoft, AI is a feature, not a product.
This is the argument I continue to have with people. First mover isn't always an advantage; I think OpenAI will be sold for pennies on the dollar someday (in the next 5 years, after they run out of funding).
Google has data, TPUs, and a shitload of cash to burn
I'm not sure because google was by far the best search engine for a long time in the early 2000s and there are a lot of models close to what openai has right now.
Name recognition only gets you so far. "Just Google it" happened because Google was better than Hotbot/Altavista/Yahoo! etc by orders of magnitude. Nobody even bothered to launch a competing search engine in the 2000s because of this (until Microsoft w/ Bing in 2009). There is no such parallel with ChatGPT; Google, Bing, even DuckDuckGo has AI search.
First mover advantage matters only if it has long-lasting network effects. American schools are run on Chromebooks and Google Docs/Slides, but these have no penetration in enterprise, as college students have been discovering when they enter their first jobs.
Well, they might have gotten a little wary from previous boom and bust cycles.
Perhaps they are a bit wary about the economic sustainability of the whole AI thing.
However, perhaps they also might be driven by greed at this point. Why not just constrain supply and increase margins while there is no real competitor?
> TSMC’s leadership dismissed Altman as a “podcasting bro” and scoffed at his proposed $7 trillion plan to build 36 new chip manufacturing plants and AI data centers.
I thought it was ridiculous when I read it. I'm glad the fabs think he's crazy too. If he wants this then he can give them the money up front. But of course he doesn't have it.
After the dot com collapse my company's fabs were running at 50% capacity for a few years and losing money. In 2014 IBM paid Global Foundries $1.5 billion to take the fabs away. They didn't sell the fabs, they paid someone to take them away. The people who run TSMC are smart and don't want to invest $20-100 billion in new fabs that come online in 3-5 years just as the AI bubble bursts and demand collapses.
I started working during the dot com boom. I was getting 3 phone calls a week from recruiters on my work telephone number. Then I saw the bubble burst from mid-2000. In 2001 zero recruiters called me. I hated my job after the reorg, and it took me 10 months to find a new one.
I know a lot of people in the 45+ age range including many working on AI accelerators. We all think this is a bubble. The AI companies are not profitable right now for the prices they charge. There are a bunch of articles on this. If they raise prices too quickly to become profitable then demand will collapse. Eventually investors will want a return on their investment. I made a joke that we haven't reached the Pets.com phase of the bubble yet.
> He[Jensen Huang] has also privately criticized what he has described as a lack of discipline in OpenAI’s business approach and expressed concern about the competition it faces from the likes of Google and Anthropic, some of the people said.
People talk about an AI bubble. What we actually have is a GPU bubble. NVidia makes really expensive GPUs for AI. Others also make GPUs.
Companies like Google produce and operate AI models largely using their own TPUs rather than NVidia's GPUs. We've seen the Chinese produce pretty competitive open models with either older NVidia GPUs or alternative GPUs because they are not allowed to buy the newer ones. And AMD, Intel and other chip makers are also eager to get in on the action. Companies like Microsoft, Amazon, etc. have their own chips as well (similar to Google). All the hyperscalers are moving away from NVidia.
And then Apple runs a non Intel and non NVidia based range of workstations and laptops that are pretty popular with AI researchers because the M series CPU/GPU/NPU is pretty decent value for running AI models. You see similar movement with ARM chips from Qualcomm and others. They all want to run AI models on phones, tablets, laptops. But without NVidia.
NVidia's bubble is about vastly overcharging for a thing that only they can provide. Their GPU chips have enormous margins relative to CPU chips coming out of the same or similar machines. That's a bubble. As soon as you introduce competition, the companies with the best price-performance win. NVidia is still pretty good at what they do. But not enough to justify an order of magnitude price/cost difference.
NVidia's success has been predicated on its proprietary software and instruction set (CUDA). That's a moat that won't last. The reason Google can use its own TPUs rather than CUDA is that it worked hard to get rid of its CUDA dependence. Same for the other hyperscalers. At this point they can do training and inference without CUDA/NVidia, and it's more cost-effective.
The reason that this 100B deal is apparently being reconsidered is that it is a bad deal for OpenAI. It was going to overpay for a solution that they can get cheaper elsewhere. It's bad news for NVidia, good news for OpenAI. This deal started out with just NVidia. But at this point there are also deals with AMD, MS, and others. OpenAI like the other hyperscalers is not betting the company on NVidia/CUDA. Good for them.
> People talk about an AI bubble. What we actually have is a GPU bubble. NVidia makes really expensive GPUs for AI. Others also make GPUs.
Yes it is, I think for multiple reasons. Competition in that space isn't sleeping, for one, but there is also a huge overestimation of demand, combined with the questionable belief that those GPUs and the data centers housing them can actually be built and put into operation as fast as envisioned.
> The reason that this 100B deal is apparently being reconsidered is that it is a bad deal for OpenAI. It was going to overpay for a solution that they can get cheaper elsewhere. It's bad news for NVidia, good news for OpenAI. This deal started out with just NVidia. But at this point there are also deals with AMD, MS, and others. OpenAI like the other hyperscalers is not betting the company on NVidia/CUDA. Good for them.
I think in the case of OpenAI both may be true. While what you are saying makes sense, and NV's first-mover advantage obviously can't last forever, OpenAI currently has little to no competitive advantage over the other players. Combine this with the fact that some (esp. Google) sit on a huge pile of cash. In contrast, for OpenAI the party is pretty much over as soon as investors stop throwing money into the oven, so they might need to cut back a bit.
Idk about this news specifically, but Oracle CDS prices are moving. The link below says 30k layoffs may hit Oracle, which I feel is a bit hyperbolic, so this article may not be grounded in reality.
I know OpenAI isn't a popular company here (anymore) but the doomerism in this thread seems a bit too hasty. People were just as doomy when Altman was sacked, and it turned into nothing and the industry market caps have doubled or even tripled since.
The article references an “undisciplined” business. I wonder if this is speaking to projects like Sora. Sora is technically impressive and was fun for a moment, but it’s nowhere near the cultural relevance of TikTok while being, I believe, significantly more expensive, harder to monetize, and consuming some significant share of their precious GPU capacity. Maybe I’m just not the demo and missing something.
And yes, Sam is incredibly unlikable. Every time I see him give an interview, I am shocked how poorly prepared he is. Not to mention his “ads are distasteful, but I love my supercar and ridiculous sunglasses.”
I would love it if AI fizzled out and nvidia had to go back to making gaming cards. Just trying to have a simple life here and play video games, and ridiculous hype after hype keeps making it expensive.
AINFTs! You're right, and it's a bit depressing. Seems more and more that cloud gaming is the only long term solution the industry will tolerate...I hate it.
Literally the whole economy has "over-raised its fundamentals" though. Not everyone is going to fail in exactly this way, but (again, pretty much literally) everyone is exposed to a feedback-driven crash from "everyone else" that ended up too exposed.
We all know this is a speculative run-up. We all know it'll end somehow. Crashes always start with something like this. Is this the tipping point? Damned if I know. But it'll come.
This is reasoning from a mistake. Market valuation is about VALUE, which is an abstract idea assigned by the market, which is not the same thing as MONEY, which can be "printed"[1]. Market values go up and down on their own, irrespective of the amount of money in circulation. They reflect consensus (often irrational) for what the securities "should be trading at", and that's all. If the currency inflates or deflates, the markets do too.
[1] Though recognize that by engaging in that frame you're painting yourself as an unserious amateur being influenced by partisan media. Real governments do not "print money" in any real sense, and attempts to conflate things like bond debt with it run afoul, yet again, of the money/value mistake.
Important for what? Google's and Anthropic's models are already better, Google actually makes money, and both are US companies. What strategic relevance does OpenAI have?
Unrelated: does anyone else think that Jensen's gatorskin leather jacket at their latest conference didn't suit him at all? It felt very "witness my wealth" and out of character.
https://archive.is/BXlAP
Not only has OpenAI's market share gone down significantly in the last 6mo, Nvidia has been using its newfound liquid funds to train its own family of models[1]. An alliance with OpenAI just makes less sense today than it did 6mo ago.
[1] https://blogs.nvidia.com/blog/open-models-data-tools-acceler...
> Nvidia has been using its newfound liquid funds to train its own family of models
Nvidia has always had its own family of models; it's nothing new and not something you should read too much into, IMHO. They use those as templates other people can leverage, and they are of course optimized for Nvidia hardware.
Nvidia has been training models in the Megatron family, as well as many others, since at least 2019, and those have served as blueprints for many players. [1]
[1] https://arxiv.org/abs/1909.08053
Nemotron-3-Nano-30B-A3B[0][1] is a very impressive local model. It is good with tool calling and works great with llama.cpp/Visual Studio Code/Roo Code for local development.
It doesn't get a ton of attention on /r/LocalLLaMA but it is worth trying out, even if you have a relatively modest machine.
[0] https://huggingface.co/nvidia/NVIDIA-Nemotron-3-Nano-30B-A3B...
[1] https://huggingface.co/unsloth/Nemotron-3-Nano-30B-A3B-GGUF
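For anyone who wants to kick the tires locally, here is a minimal sketch via llama-cpp-python; the quant filename pattern is an assumption, so check the repo above for the exact file names.

```python
# Minimal local-inference sketch (assumes llama-cpp-python is installed).
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="unsloth/Nemotron-3-Nano-30B-A3B-GGUF",
    filename="*Q4_K_M.gguf",  # hypothetical quant choice for modest machines
    n_ctx=8192,
    n_gpu_layers=-1,          # offload what fits; lower this on small GPUs
)
out = llm.create_chat_completion(
    messages=[{"role": "user",
               "content": "Write a Python function that reverses a list."}],
)
print(out["choices"][0]["message"]["content"])
```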
Some of NVIDIA's models also tend to have interesting architectures, for example the use of the Mamba architecture instead of pure transformers: https://developer.nvidia.com/blog/inside-nvidia-nemotron-3-t...
Deep SSMs, including the entire S4 to Mamba saga, are a very interesting alternative to transformers. In some of my genomics use cases, Mamba has been easier to train and scale over large context windows, compared to transformers.
It was good for like, one month. Qwen3 30b dominated for half a year before that, and GLM-4.7 Flash 30b took over the crown soon after Nemotron 3 Nano came out. There was basically no time period for it to shine.
It is still good, even if not the new hotness. But I understand your point.
It isn't as though GLM-4.7 Flash is significantly better, and honestly, I have had poor experiences with it (and yes, always the latest llama.cpp and the updated GGUFs).
Genuinely exciting to be around for this. Reminds me of the time when computers were said to be obsolete by the time you drove them home.
I recently tried GLM-4.7 Flash 30b and didn’t have a good experience with it at all.
It feels like GLM has either a bit of a fan club or maybe some paid supporters...
Oh those ghastly model names. https://www.smbc-comics.com/comic/version
Do they have a good multilingual embedding model? Ideally, with a decent context size like 16/32K. I think Qwen has one at 32K. Even the Gemma contexts are pretty small (8K).
I find the Q8 runs a bit more than twice as fast as gpt-120b since I don’t have to offload as many MoE layers, but is just about as capable if not better.
Nemo is different to Megatron.
Megatron was a research project.
NVidia has professional services selling companies on using Nemo for user-facing applications.
It's a fine-tune.
Yeah. Even if OpenAI models were the best, I still wouldn't use them, given how despicable the Sam Altman persona is (constantly hyping, lying, asking for no regulations, then asking for regulations, leaked emails where the founders say they just wanna get rich without any consideration of their initial "open" claims...). I know the other companies are not better, but at least they have a business model and something to lose.
> leaked emails where founders say they just wanna get rich without any consideration of their initial "open" claims
Point me to these? Would like to have a look.
Sorry, not leaked emails; it's Greg Brockman's diary and leaked texts.
I didn't find the original lawsuit documents, but there's a screenshot in this video: https://youtu.be/csybdOY_CQM?si=otx3yn4N26iZoN7L&t=182 (timestamp is 3:02 if you don't see it)
There's more details about the behind-the-scenes and greg brockman's diary leaks in this article: https://www.techbuzz.ai/articles/open-ai-lawsuit-exposed-the... Some documents are made public thanks to the Musk-OpenAI trial.
I'll let you read a few articles about this lawsuit, but basically they said to Musk (and frankly, to everyone else) that they were committed to the non-profit model, while behind the scenes thinking about "making the billion" and turning for-profit.
Much appreciated!
Edit: Ah, so the fake investment announcements started from the very beginning. Incredible.
Hate that bringing fraud to justice means paying out to the wealthiest person on the planet....
Justice should be blind
Literally everyone raising money is just searching for the magic combo of stuff to make it happen. Nobody enjoys raising money. Wouldn’t read that much into this.
I agree. Especially the whole Jony Ive and Altman hype video in that coffee shop was absolutely disgusting. Oh, how far their egos have been inflated, which leads to very bad decision-making. Not to be trusted.
Oh, could I get a link to that one?
https://www.youtube.com/watch?v=rDNyFN_eMec
That has strong Krazam vibes https://youtu.be/WqjUlmkYr2g?si=KiMO-dtyFEZ_XIDt
Just fantastic. Hadn't seen it. Thanks for sharing that!
And the whole AI craze is becoming nothing but a commodity business where all kinds of models are popping in and out, one better this update, the other better the next update etc. In short - they're basically indistinguishable for the average layman.
Commodity businesses are price chasers. That's the only thing to compete on when product offerings are similar enough. AI valuations are not setup for this. AI Valuations are for 'winner takes all' implications. These are clearly now falling apart.
It doesn't really feel like AI for coding is commoditized atm.
As problematic as SWE-Bench is as a benchmark, the top commercial models are far better than anything else and it seems tough to see this as anything but a 3 horse race atm.
When you have more users you get more data to improve your models. The bet is that one company will be able to lock in to this and be at the top constantly.
I'm not saying this is what will happen, but people obviously bet a lot of money on that.
Problem is you can easily train one model on the other. And at the end of the day everyone has access to enough data in one way or another.
Nvidia isn’t competing with OpenAI for frontier models.
Au contraire, they’re selling the shovels.
Bored of hearing this
I think there are two things that happened:
1. OpenAI bet largely on consumer. Consumers have mostly rejected AI. And in a lot of cases even hate it (can't go on TikTok or Reddit without people calling something slop, or hating on AI generated content). Anthropic on the other hand went all in on B2B and coding. That seems to be the much better market to be in.
2. Sam Altman is profoundly unlikable.
> Consumers have mostly rejected AI.
People like to complain about things, but consumers are heavily using AI.
ChatGPT.com is now up to the 4th most visited website in the world: https://explodingtopics.com/blog/chatgpt-users
We’ve seen many times that platforms can be popular and widely disliked at the same time. Facebook is a clear example.
The difference there is it became hated after it was established and financially successful. If you need to turn free visitors into paying customers, that general mood of "AI is bad and going to make me lose my job/fuck up society" is yet another hurdle OpenAI will have to overcome.
Yeah, every single big website is totally free. People have complex emotions toward Facebook, Instagram and TikTok, but they don't have to pull out their wallet. That's a bridge too far for many people.
Are they paying, though? Reddit was also popular for a long time and didn't make much money.
My point was more that it seems this wave of AI is more profitable if you're in B2B vs. B2C.
It’s simply incorrect to say that consumers have rejected AI.
The consumer strategy is more valid in my opinion. The value in AI is much more legible when the consumer uses it directly from their chat UI than whatever enterprises can come up with.
I can suggest many ways that consumers can use it directly from a chat window. Value from enterprise use is actually not that clear. I can see coding, but that's about it. Can you tell me ways in which enterprises can use AI that aren't just providing their employees with ChatGPT access?
#2 cannot be understated
Wasn't he the golden boy for a while? What shifted? I don't even remember what he did "first" to get that status. Is it maybe just a case of familiarity breeding contempt?
It is starting to become clear to more and more people that Sam is a dyed in the wool True Believer in AGI. While it's obvious in hindsight that OpenAI would never have gotten anywhere if he wasn't, seeing it so starkly is really rubbing a lot of people the wrong way.
Advertising Generated Income?
One of my favorites is Amphora of Great Intelligence by the artist David Revoy.
https://www.davidrevoy.com/article1090/the-amphora-of-great-...
https://www.davidrevoy.com/article1092/the-amphora-of-great-...
Damn, this is smart. I like it.
Someone else said it first here
It's even worse than that, and I hope people recognize it. It's not that he's a True Believer (though the TBs are often hilarious);
it's that he has no ethics to speak of at all. It's not that he's out of touch, it's that he simply does not care.
Why would him believing in AGI make people dislike him?
He is clearly disliked by a lot of tech community, I don't see his AGI belief as a big part of that.
Well, in the world where AGI is created and it goes suboptimally, everybody gets turned into computronium and goes extinct, which is a prospect some are miffed about. And, in the world where it goes well, no decision of any consequence is made by a human being ever again, since the computer has planned every significant life event since before their birth. Free will in a very literal sense will have been erased. Sam being a true believer means he is not going to stop working until one of these worlds comes true. People who understand the stakes are understandably irked by him.
Well, he made the mistake many billionaires do: he opened his mouth with his own thoughts instead of just reading what the PR department told him to read.
All the manipulation and lying that got him fired.
He is a pretty interesting case. According to the book "Empire of AI" about OpenAI, he lies constantly, even about things that are too trivial to matter. So it may be part of some compulsive behavior.
And when two people want different things from him, he "resolves" the conflict by agreeing with each of them separately, and then each assumes they got what they wanted, until they talk to the other person and find out that nothing was resolved.
Really not a person who is qualified to run a company, except the constant lying is good for fundraising and PR.
He was once a kingpin at Y Combinator (I think he kind of ran it?)... Paul Graham thought he was great for YC.
Interesting that he's got as far as he has with this issue. I don't think you can run a company effectively if you don't deal in truth.
Some of his videos have seemed quite bizarre as well, quite sarcastic about concerns people have about AI in general.
> He was once a kingpin at Y Combinator (I think he kind of ran it?)... Paul Graham thought he was great for YC.
And today it seems everyone at YC hates him but pretends not to.
Saw Empire of AI in a bookshop recently but held off buying as I wasn't sure if it was going to be surface-level. You'd recommend?
Understandable worry, but it's not surface-level at all. Karen Hao is a great journalist. Highly recommend.
It's sort of two books combined into one: The first one is the story of OpenAI from the beginning, with all the drama explained with quotes from inside sources. This part was informative and interesting. It includes some details about Elon being convinced that Demis Hassabis is going to create an evil super-intelligence that will destroy humanity, because he once worked on a video game with an evil supervillain. I guess his brain was cooked much earlier than we thought.
The second one is a bunch of SJW hand-wringing about things that are only tangentially related, like indigenous Bolivians being oppressed by Spanish Conquistadors centuries ago. That part I don't care for as much.
Not a case; society calls them sociopaths. Which includes power struggles, manipulation, and psychological abuse of the people around them.
Example: Sam Altman and OpenAI hoarding 40% of the RAM supply as unprocessed wafers stored in warehouses, bought with magical bubble-investor money meant for GPUs that don't exist yet and that they will not be able to install anyway, because there's not enough electricity to feed such botched tech, in data centers that are still to be built, with the intention of choking the competition's supply, and everyone else on the planet in the process, for at least two years.
> Really not a person who is qualified to run a company, except the constant lying is good for fundraising and PR.
For a brief moment I thought you were talking about Elon there
He is a sociopath. It's ok to say it.
Yep the various -path adjectives get overused but in this case he's the real deal, something is really really off about him.
You can see it when he talks, he's clearly trying (very unconvincingly) to emulate normal human emotions like concern and empathy. He doesn't feel them.
People like that are capable of great evil and there's a part of our lizard brains that can sense it
Sounds like when people are politicking he just takes a “whatever” approach haha. That seems reasonable.
No, that's not what he's doing.
[dead]
Cringey to watch their interviews.
*Overstated
Indeed. Sama seems to be incredibly delusional. OAI going bust is going to really damage his well-being, irrespective of his financial wealth. Brother really thought he was going to take over the world at one point.
Scariest part is it probably won't, and he'll be back in five years with something else.
Do you see Sam Bankman-Fried getting reinstated?
I don't and I see Sam Altman as a greater fraud than that (loathsome) individual. And I don't think Sam gets through the coming bubble pop without being widely exposed (and likely prosecuted) as a fraudster.
People lying to everyone lie to themselves the most
Instead of anecdotes about “what you saw on TikTok and Reddit”, it’s really not that hard to lookup how many paid users ChatGPT has.
Besides, OpenAI was never going to recoup the billions of dollars from advertising or $20/month subscriptions.
Is CEO likeability a reliable predictor?
I think it depends how visible the CEO is to (potential) customers, in this case very visible, he is in the media all the time.
They pay to be in the media
good point.
I don't think it is at all
The CEO just has to have followership: the people who work there have to think that this is a good person to follow. Even then, they don't have to "like" him.
Ask Tesla about the impact of their CEOs likeability on their sales.
> OpenAI bet largely on consumer
Source on that?
Lots of organizations offer ChatGPT subscriptions, and Microsoft pushes Copilot as hard as it can which uses GPT models.
Those who publicly hate LLMs still use them, though, even for the stuff they claim to hate, like writing fanfic.
HN is such a bubble. ChatGPT is wildly successful, and about to be an order of magnitude more so, once they add ads. And I have never heard a non-technical person mention Altman. I highly doubt they have any idea who he is, or care. They’re all still using ChatGPT.
> and about to be an order of magnitude more so, once they add ads.
How do you figure?
You have to give credit to Sam: he's charismatic enough to the right people to climb man-made corporate structures. He was also smart enough to be in the right place at the right time to enrich himself (Silicon Valley). He seems to be pretty good at cutting deals. Unfortunately, all of the above seems to be at odds with having any sort of moral core.
Ermmm what?
He and his personality caused people like Ilya to leave. At that point the failure risk of OAI jumped tremendously. The reality he will have to face is that he has caused OAI's demise.
Perhaps he's OK with that, as long as OAI goes down with him. Would expect nothing less from him.
All this drama is mostly irrelevant outside a very narrow and very online community.
The demise of OpenAI is rooted in poor product-market fit: many people like using ChatGPT for free, but fewer are ready to pay for it. And that’s pretty much all there is to it. OpenAI bet on consumers, made a slopstagram that unsurprisingly didn’t revolutionise content, and doesn’t sell as many licenses as they would like.
Imo they'll soon make a lot of money with advertising. Whenever ChatGPT brings you to some website to buy a product, they'll get a share.
Ilya took a swing at the king and missed. It would have been awkward to hang around after that debacle.
[flagged]
I actually think Sam is “better” than say Elon or Dario because he seems like a typical SF/SV tech bro. You probably know the type (not talking about some 600k TC fang worker, I mean entrepreneurs).
He says a lot of fluff, doesn’t try to be very extreme, and focuses on selling. I don’t know him personally but he comes across like an average person if that makes sense (in this environment that is).
I think I personally prefer that over Elon’s self induced mental illnesses and Dario being a doomer promoting the “end” of (insert a profession here) in 12 months every 6 months. It’s hard for me to trust a megalomaniac or a total nerd. So Sam is kinda in the middle there.
I hope OpenAI continues to dominate even if the margins of winning tighten.
Elon is one of the most unlikable people on the planet, so I wouldn't consider him much of a bar.
Hah, you beat me to it, serves me right for writing longer comments. Have an upvote ;)
It’s kind of sad. I can’t believe I used to like him back in the iron man days. Back then I thought he was cool for the various ideas and projects he was working on. I still think many of those are great but he as a person let me down.
Now I have him muted on X.
Back then he had a PR firm working for him, getting him cameos and good press. But in 2020 he fired them deciding that his own "radically awesome" personality doesn't need any filtering.
Personally I don't think Elon is the worst billionaire, he's just the one dumb enough to not have any PR (since 2020). They're all pretty reprehensible creatures.
Any number of past mega-rich were probably equally nuts and out of touch and reprehensible but they just didn't let people find out. Then Twitter enabled an unfiltered mass-media broadcast of anyone's personal insanity, and certain public figures got addicted and exposed.
There will always be enough people willing to suck up to money that they'll have all the yes-men they need to rationalize it as "it's EVERYONE ELSE who's wrong!"
The watershed moment for me was when he pretended to be a top tier gamer on Path of Exile. Anyone in the know saw right through it, and honestly makes me wonder if we just spotted this behavior because it's "our turf", but actually he and people like him just operate this way in absolutely everything they do
Yeah, Putin is probably the worst billionaire. Elon might be a close second though, or maybe it's a US politician if they actually are a billionaire.
Peter Thiel who thinks the Pope or Greta Thunberg might be the antichrist, and that freedom is incompatible with democracy
https://www.nationalmemo.com/peter-thiel-antichrist
I think you did not understand his argument. He said it is a great danger that people might unite behind an antichrist like figure.
Exactly, other billionaires having calmer personality types does not make them less nuts.
> Now I have him muted on X.
Props to him for letting people mute him on his own platform. The issue with Sam and OpenAI is that their bias on any controversial topic can't be switched off.
But you're still on Twitter and calling it X...
So? I bet you think you're clever. You're using platforms daily that are run by insane people. Don't forget that the internet itself was a military invention.
Not extreme? Have you seen his interviews? I guess his wording and delivery are not extreme, but if you really listen to what he's saying, it's kinda nuts.
That Dyson sphere interview should've been a wake up call for the OpenAI faithful.
I understand what GP is saying in the sense that, yes, on an objective scale, what Sam is saying is absolutely and completely nuts... but on a relative scale he's just hyping his startup. Relative to the scale he's at, it’s no worse than the average support tool startup founder claiming they will defeat Salesforce, for example.
Exactly. Thanks for getting it, it is refreshing to encounter people who get it. Good luck with everything!
You too!
He's definitely not. If Altman is a "typical" SF/SV tech bro, then that's an indication the valley has turned full d-bag. Altman's past is gross. So, if he's the norm, then I will vehemently avoid any dollars of mine going to OAI. I paid for an account for a while, but just like with Musk, I lose nothing by actively avoiding his Ponzi scheme of a company.
Altman is a consummate liar and manipulator with no moral scruples. I think this LLM business is ethically compromised from the start, but Dario is easily the least worst of the three.
Dario unsettles me the most; he kinda reminds me of SBF. I wouldn’t be surprised if... well, they’re all bad, it’s hard to stack-rank them.
I don't think he's good, but afaik he isn't trying to make everyone psychologically dependent on Claude and releasing sex bots.
He and SBF are both big into effective altruism, and SBF gave Anthropic their seed funding, so yeah, that checks out.
There's nothing wrong with effective altruism -- making money to give it away -- the problem is SBF.
Of course there is. The whole thing is a cult, designed to pull in suckers.
Your argument is guilt by association. Association with something that isn't morally wrong, it's just a way to try to spend money on charity in an effective way? You can take a lot of ideas too far and end up with a bad result of course.
There are 4, though; where does Demis fit in the stack rank?
TBH, I hadn't heard of him until now. Looks like he's had a crazy legit professional career. I'd put him at the top for his work at Bullfrog alone.
Demis is the reason Google is afloat with a good shot at winning the whole race. The issue currently is he isn’t willing to become the Alphabet CEO. IMHO he’ll need to for the final legs.
I’d hate the job too. It would be interesting to see how Google might evolve with him at the helm, for sure.
Pfft. Dario has been making nonsense fear-mongering predictions that never come true.
> I actually think Sam is “better” than say Elon or even Dario because he seems like a typical SF/SV tech bro.
If you nail the bar to the floor, then sure, you can pass over it.
> He says a lot of fluff, doesn’t try to be very extreme, and focuses on selling.
I don't now what your definition of extreme is but by mine he's pretty extreme.
> I think I personally prefer that over Elon’s self induced mental illnesses and Dario being a doomer promoting the “end” of (insert a profession here) in 12 months every 6 months.
All of them suffer from thinking their money makes them somehow better.
> I hope OpenAI continues to dominate even if the margins of winning tighten.
I couldn't care less. On the whole I'm impressed with AI, less than happy about all of the slop and the societal problems it brings, and wish it had been a more robust world this was brought into, because I'm not convinced the current one needed another issue of that magnitude to deal with.
> All of them suffer from thinking their money makes them somehow better.
Let's assume they think they're better than others.
What makes you think that they think it's because of their money, as opposed to, say, because of their success at growing their products and businesses to the top of their field?
Even if it's success rather than money, you still have survivorship bias to contend with, so it's not really much of a helpful distinction.
Because they wouldn't talk about money as much or try to convert a non-profit into a for profit company.
That’s ok, but AI is useful in particular use cases for many people. I use it a lot and I prefer the Codex 5.2 extra high reasoning model. The AI slop and dumb shit on IG/YT is like the LCD of humans though. They’ve always been there and always will be there to be annoying af. Before AI slop we had brain rot made by humans.
I think over time it (LLM based) will become like an augmenter, not something like what they’re selling as some doomsday thing. It can help people be more efficient at their jobs by quickly learning something new or helping do some tasks.
I find it makes me a lot more productive because I can have it follow my architecture and other docs to pump out changes across 10 files that I can then review. In the old way, it would have taken me quite a while longer to just draft those 10 files (I work on a fairly complex system), and I had some crazy code gen scripts and shit I’d built over the years. So I’d say it gives me about 50% more efficiency which I think is good.
Of course, everyone’s mileage may vary. Kinda reminds me of when everyone was shitting on GUIs, or scripting languages or opinionated frameworks. Except over time those things made productivity increase and led to a lot more solutions. We can nitpick but I think the broader positive implication remains.
some people are so determined to be positive about AI that at some point it just comes across like they’re getting paid to be
There are quite a lot of posts like that. Just a bit too eager. Proselytising as if AI is a religion.
I don't think I did that at all, and I call out that sort of bullshit all the time and get downvoted lol (idgaf :P)
Maybe some/many even are? For "AI" companies it's not really a big expense in comparison and they depend hugely on keeping the hype going.
It's very hard to see downsides to something like GUIs, scripting languages, or opinionated frameworks, compared to a broad, easily weaponized tool like generative AI.
Naive to call Sam Altman unlikeable.
[flagged]
ChatGPT has nowhere the lead it used to have. Gemini is excellent and Google and Anthropic are very serious competitors. And open weight models are slowly catching up.
[flagged]
ChatGPT is a goner. OpenAI will probably rule the scam creation, porn bot, and social media slop markets.
Gemini will own everything normie and professional services, and Anthropic will own engineering (at least software)
Honestly as of the last few months anyone still hyping ChatGPT is outing themselves.
[dead]
[flagged]
"Please don't post insinuations about astroturfing, shilling, bots, brigading, foreign agents and the like. It degrades discussion and is usually mistaken. If you're worried about abuse, email hn@ycombinator.com and we'll look at the data."
https://news.ycombinator.com/newsguidelines.html
https://hn.algolia.com/?sort=byDate&dateRange=all&type=comme...
Sorry dang you are right and I was wrong to say that. Mea culpa.
Nobody. Did you talk to all the models? Can you actually have a non-coder, human conversation?
You mean the DOW right?
I'm afraid I don't know what that is.
I meant thinking patterns that go beyond our understanding. High functioning autism that is beyond jealousy/envy, and beyond the need to hold or be on a leash and beyond the enigma of emotions that come with the influence to dump and pump stock market prices of precious, precious metals.
Or, in other terms, the kind of intelligence that is built for abstract, distant, symbiotic humanity. From the POV of Earth as a system, we're quite the dumb nuisance. "Just get it, man". :D
Last paragraph is informative:
> Anthropic relies heavily on a combination of chips designed by Amazon Web Services known as Trainium, as well as Google’s in-house designed TPU processors, to train its AI models. Google largely uses its TPUs to train Gemini. Both chips represent major competitive threats to Nvidia’s best-selling products, known as graphics processing units, or GPUs.
So which leading AI company is going to build on Nvidia, if not OpenAI?
"Largely" is doing a lot of heavy lifting here. Yes Google and Amazon are making their own GPU chips, but they are also buying as many Nvidia chips as they can get their hands on. As are Microsoft, Meta, xAI, Tesla, Oracle and everyone else.
But is Google buying those GPU chips for their own use, or to have them in their data centers for their cloud customers?
Google buys Nvidia GPUs for cloud; I don't think they use them much, or at all, internally. The TPUs are used both internally and in cloud, and now it looks like they are delivering them to customers in their own data centers.
When I was there a few years ago, we only got CPUs and GPUs for training. TPUs were in too high of demand.
I can see them being used for training if they're vacant.
The various AI accelerator chips, such as TPUs and NVidia GPUs, are only compatible to the extent that some of the high-level tools, like PyTorch and Triton (a kernel compiler), may support both. That is like saying x86 and ARM chips are compatible because gcc supports them both as targets; it does not mean you can take a binary compiled for ARM and run it on an x86 processor.
For these massive, and expensive-to-train, AI models the differences hit harder, since at the kernel level, where the rubber meets the road, teams are going to be wringing every last dollar of performance out of the chips by writing hand-optimized kernels, highly customized to the chip's architecture and performance characteristics. It may go deeper than that too, with the detailed architecture of the models themselves tweaked to perform best on a specific chip.
So, bottom line is that you can't just take a model "compiled to run on TPUs", and train it on NVidia chips just because you have spare capacity there.
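To make the hand-optimized-kernel point concrete, here's roughly what the bottom of the stack looks like on the Nvidia side: a toy Triton kernel where the block size and launch grid are tuning decisions tied to one chip's characteristics. This is an illustrative vector add only, not anyone's production kernel; the TPU path would go through XLA/Pallas instead, and none of this tuning transfers:

    import torch
    import triton
    import triton.language as tl

    @triton.jit
    def add_kernel(x_ptr, y_ptr, out_ptr, n, BLOCK: tl.constexpr):
        pid = tl.program_id(axis=0)
        offs = pid * BLOCK + tl.arange(0, BLOCK)
        mask = offs < n                      # guard the ragged final block
        x = tl.load(x_ptr + offs, mask=mask)
        y = tl.load(y_ptr + offs, mask=mask)
        tl.store(out_ptr + offs, x + y, mask=mask)

    # BLOCK and the grid are chosen against one chip's SRAM/occupancy profile;
    # the source may recompile elsewhere, but the performance tuning does not follow.
    x = torch.randn(1 << 20, device="cuda")
    y = torch.randn(1 << 20, device="cuda")
    out = torch.empty_like(x)
    grid = (triton.cdiv(x.numel(), 1024),)
    add_kernel[grid](x, y, out, x.numel(), BLOCK=1024)

Real attention and MoE kernels push this much further (tiling, pipelining, tensor-core layouts), which is where the per-chip lock-in comes from.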
>but they are also buying as many Nvidia chips as they can get their hands on
>But is Google buying those GPU chips for their own use
>google buys nvidia GPUs for cloud, I don't think they use them much or at all internally.
We're not talking about GPUs.
Both. Internal users are customers too.
How about Apple? How is Apple training its next foundation models?
To use the parlance of this thread: "next" foundation models is doing a lot of heavy lifting here. Am I doing this right?
My point is, does Apple have any useful foundation models? Last I checked they made a deal with OpenAI, no wait, now with Google.
Apple does have their own small foundation models but it's not clear they require a lot of GPUs to train.
Do you mean like OCR in photos? In that case, yes, I didn't think about that. Are there other use cases aside from speech-to-text in Siri?
I think they are also used for translation, summarization, etc. They're also available to other apps: https://developer.apple.com/documentation/FoundationModels
Thanks, I am a dumb dumb about Apple, and mobile in general. I should have known this. I really appreciate the reply so that I know it now.
I think Apple is waiting for the bubble to deflate, and will then do something different. And they have a ready-to-use user base to provide whatever they can make money from.
If they were taking that approach, they would have absolutely first-class integration between AI tools and user data, complete with proper isolation for security and privacy and convenient ways for users to give agents access to the right things. And they would bide their time for the right models to show up at the right price with the right privacy guarantees.
I see no evidence of this happening.
As an outsider, the only thing the two of you disagree on is timing. I probably side with the ‘time is running out’ team at the current juncture.
They apparently are working on, and are going to release, 2(!) different versions of Siri. Idk, that just screams "leadership doesn't know what to do and can't make a tough decision" to me. But who knows? Maybe two versions of Siri is what people will want.
Arena mode! Which reply do you prefer? /s
But seriously, would one be for newer phone/tablet models, and one for older?
It sounds like the first one, based on Gemini, will be a more limited version of the second ("competitive with Gemini 3"). IDK if the second is also based on Gemini, but I'd be surprised if that weren't the case.
Seems like it's more a ramp-up than two completely separate Siri replacements.
Apple can make more money from shorting the stock market, including their own stock, if they believe the bubble will deflate.
Apple is sitting this whole thing out. Bizarre.
The options for a company in their position are:
1. Sit out and buy the tech you need from competitors.
2. Spend to the tune of ~$100B+ in infra and talent, with no guarantee that the effort will be successful.
Meta picked option 2, but Apple has always had great success with 1 (search partnership with Google, hardware partnerships with Samsung etc.) so they are applying the same philosophy to AI as well. Their core competency is building consumer devices, and they are happy to outsource everything else.
This whole thread is about whether the most valuable startup of all time will be able to raise enough money to see the next calendar year.
It's definitely rational to decide to pay wholesale for LLMs given:
- consumer adoption is unclear. The "killer app" for OS integration has yet to ship by any vendor.
- owning SOTA foundation models can put you into a situation where you need to spend $100B with no clear return. This money gets spent up front regardless of how much value consumers derive from the product, or if they even use it at all. This is a lot of money!
- as Apple has "missed" the last couple of years of the AI craze, there have been no meaningful ill effects to their business. Beyond the tech press, nobody cares yet.
Well they tried and they failed. In that case maybe the smartest move is not to play. Looks like the technology is largely turning into a commodity in the long run anyways. So sitting this out and letting others make the mistakes first might not be the worst of all ideas.
I mean, they tried. They just tried and failed. It may work out for them, though — two years ago it looked like lift-off was likely, or at least possible, so having a frontier model was existential. Today it looks like you might be able to save many billions by being a fast follower. I wouldn’t be surprised if the lift-off narrative comes back around though; we still have maybe a decade until we really understand the best business model for LLMs and their siblings.
> I mean, they tried. They just tried and failed.
They tried to do something that probably would have looked like Copilot integration into Windows, and they chose not to do that, because they discovered that it sucked.
So, they failed in an internal sense, which is better than the externalized kind of failure that Microsoft experienced.
I think the nut that hasn't been cracked is: how do you get LLMs to replace the OS shell and the core set of apps that folks use? I think Microsoft is trying, by shipping stuff that sucks and pissing off customers, while Apple tried internally and declined to ship it. OpenClaw might be the most interesting stab in that direction, but even that doesn't feel like the last word on the subject.
I think you are right. Their generative AI was clearly underwhelming. They have been losing many staff from their AI team.
I’m not sure it matters though. They just had a stonking quarter. iPhone sales are surging ahead. Their customers clearly don’t care about AI or Siri’s lacklustre performance.
> Their customers clearly don’t care about AI or Siri’s lacklustre performance.
I would rather say their products just didn't lose value for not getting an improvement there. Everyone agrees that Siri sucks, but I'm pretty sure they tried to replace it with a natural-language version built from the ground up and realised it just didn't work out yet: yes, they have a bad but at least kinda-working voice assistant with lots of integrations into other apps. But replacing that with something that promises to do stuff and then does nothing, takes long to respond, and has fewer integrations due to the lack of keywords would have been a bad idea if the technology wasn't there yet.
Honestly, what it seems like is financial discipline.
We do know that they made a number of promises on AI[1] and then had to roll them back because the results were so poor[2]. They then went on to fire the person responsible for this division[3].
That doesn't sound like a financial decision to me.
[1] https://www.apple.com/uk/newsroom/2024/06/wwdc24-highlights/
[2] https://www.bloomberg.com/news/features/2025-05-18/how-apple...
[3] https://nypost.com/2025/12/02/business/apple-shakes-up-ai-te...
From a technology standpoint, I don't feel Apple's core competency is in AI foundation models.
They might know something?
More like they don't know the things others do. Siri is a laughing stock.
Sure, Siri is, but do people really buy their phone based off of a voice assistant? We're nowhere near having an AI-first UX a la "Her" and it's unclear we'll even go in that direction in the next 10 years.
They are in-housing their AI to sell it as a secure way to do AI, which 100% puts them in the lead for the foreseeable future.
Nvidia had the chance to build its own AI software and chose not to. It was a good choice so far, better to sell shovels than go to the mines - but they still could go mining if the other miners start making their own shovels.
If I were Nvidia I would be hedging my bets a little. OpenAI looks like it's on shaky ground, it might not be around in a few years.
Another comment had this:
https://blogs.nvidia.com/blog/open-models-data-tools-acceler...
Interesting times.
Good point, I didn't notice that.
Interesting times indeed!
They do build their own software, though. They have a large body of stuff they make. My guess is that it’s done to stay current, inform design and performance, and to have something to sell enterprises along with the hardware; they have purposely not gone after large consumer markets with their model offerings as far as I can tell.
There is no way Nvidia could make from AI software even a fraction of what they are making from hardware.
OpenAI will keep using Nvidia GPUs but they may have to actually pay for them.
That’s interesting, I didn’t know that about Anthropic. I guess it wouldn’t really make sense to compete with OpenAI and everyone else for Nvidia chips if they can avoid it.
Would Nvidia investing heavily in ClosedAI dissuade others from using Nvidia?
Aren't they switching to PI for Pretend Intelligence?
If nothing else, the video game market would explode under AMD, maybe?
The elephant in the room is China also being partially successful with their chips
Literally all the other companies that still believe they can be the leading ones one day?
Maybe xAI/Tesla, Meta, Palantir
The moment you threaten NVDA's livelihood, your company starts to fall apart. History tells us as much.
It's almost as if everyone here assumed that Nvidia would have no competition for a long time, but it has been known for a while that there are many competitors coming after their data center revenues. [0]
> So which leading AI company is going to build on Nvidia, if not OpenAI?
It's xAI.
But what matters is that there is more competition for Nvidia, and they bought Groq to reduce that. OpenAI is building their own chips, as is Meta.
The real question is this: What happens when the competition catches up with Nvidia and takes a significant slice out of their data center revenues?
[0] https://news.ycombinator.com/item?id=45429514
The Chinese will probably figure out a way to sneak Nvidia chips around the sanctions.
Alibaba has their own chips now they use for training.
This video that breaks down the crazy financial positions of all the AI companies and how they are all involved with one called CoreWeave (who could easily bring the whole thing tumbling down) is fascinating: https://youtu.be/arU9Lvu5Kc0?si=GWTJsXtGkuh5xrY0
Yeah, I see CoreWeave as a canary in the coal mine. They’re not doing so hot and basically got a bailout from Nvidia a few days ago.
https://techcrunch.com/2026/01/26/nvidia-invests-2b-to-help-...
Coreweave acquired WandB last year. https://www.coreweave.com/blog/coreweave-completes-acquisiti... . Strategic.
Microsoft and Google's role in this does explain why they're so keen to push AI into their products, whether customers want it or not.
And the loans tied to Nvidia hardware: the collateral is old and rapidly depreciating GPUs.
All these giant non-binding investment announcements are just a massive confidence scam.
I don’t think so. I think it is positioning for the unknown future and hedging.
For example, Amazon isn’t able to train its own models so it hedges by investing in Anthropic and OpenAI. Oracle, same with OpenAI deal. Nvidia wants to stay in OpenAI and Anthropic’s tech stack.
It’s all jockeying for position.
Oracle is a perfect example of using empty AI partnership announcements to goose the stock price and also a perfect example of how unsustainable of a strategy it is.
You're explaining why they invest, not why they make pie-in-the-sky non-binding announcements. The point of those is to inflate the bubble and keep it going as long as they can.
The non-binding agreements are due to the implications of AI. No one knows for sure who will win or what will happen. These aren't just fake agreements, though. Oracle is building Stargate. SoftBank's investments are in OpenAI's bank.
We know that it is all a grift before the inevitable collapse, so everyone is racing for the exit before that happens.
I guarantee you that in 10 years' time, you will get claims of unethical conduct at those companies, but only after the mania has ended (and by then the claimants will have sold all their RSUs).
[dead]
It’s probably not really related, but this bug, and the saga of OpenAI trying and failing to fix it for two weeks, is not indicative of a functional company:
https://github.com/openai/codex/issues/9253
OTOH, if Anthropic did that to Claude Code, and there wasn’t a moderately straightforward workaround, and Anthropic didn’t revert it quickly, it might actually be a risk-the-whole-business issue. Nothing makes people jump ship quite like the ship refusing to go anywhere for weeks while the skipper fumbles around and keeps claiming to have fixed the engines.
Also, the fact that most business users cannot log in to the agent CLI for two weeks running is not major news, which suggests that OpenAI has rather less developer traction than they would like. (Personal users are fine. Users who are running locally on an X11-compatible distro, and thus have DISPLAY set, are okay because the new behavior doesn’t trigger. It kind of seems like everyone else gets nonsense errors out of the login flow, with precise failures that change every couple of days while OpenAI fixes yet another bug.)
Funny that they can't just get the "AI" to fix it.
I expect the “AI” created it in the first place.
You still need to get engineers to actually dispatch that work, test it, and possibly update the backend. Each of those can already be done via AI, but actually doing that in a large environment - we're not there yet.
I don't know what you're so surprised about. The ticket reads like any other typical [Big] enterprise ticket. UI works, headless doesn't (headless is what only hackers use, so not a priority, etc.). Oh, then they found the support guy who knows what headless is, and the doc page with a number of workarounds. There's even an ssh tunnel (how did that make it into enterprise docs?!) and the classic: copy logged-in credentials from the UI machine once you've logged in there. Bla-bla-bla and, again, the classic:
"Root Cause
The backend enforces an Enterprise-only entitlement for codex_device_code_auth on POST /backend-api/accounts/{account_id}/beta_features. Your account is on the Team plan, so the server rejects the toggle with {"detail":"Enterprise plan required."} "
and so on and so forth. On any given day I have several such long-term tickets that ultimately get escalated to me (I'm in dev, and usually the guy who would pull up the page with the ssh tunnel or the credential copying :)
Sort of?
The backstory here is that codex-rs (OpenAI’s CLI agent harness) launched an actual headless login mechanism, just like Claude Code has had forever. And it didn’t work, from day one. And they can’t be bothered to revert it for some reason.
Sure, big enterprises are inept. But this tool is fundamentally a command line tool. It runs in a terminal. It’s their answer to one of their top two competitors’ flagship product. For a company that is in some kind of code red, the fact that they cannot get their ducks in a row to fix it is not a good sign.
Keep in mind that OpenAI is a young company. They shouldn't have a thicket of ancient garbage to wade through to fix this; it's not as if this is some complex Active Directory issue that no one knows how to fix because the design is 30-40 years old and supports layers and layers of legacy garbage.
This issue has one thumbs up, nobody cares about it.
Because approximately zero smallish businesses use Codex, perhaps?
It’s also possible that the majority of people hitting it are using the actual website support (which is utterly and completely useless), since the bug is only a bug in codex-rs to the extent that codex-rs should have either reverted or deployed a workaround already.
Many of us predicted that OpenAI's insistence that the model was the product was the wrong path.
The tools on top of the models are the path and people building things faster is the value.
The model is the product. OpenAI themselves also build products on top of their models.
Those without models are hugely vulnerable to sudden rug pulls.
Only in a monopoly situation. If you have several companies with comparable models you can easily switch between, all desperate for revenue to recoup their massive capex, you're fine.
But OpenAI has spent too much capital on their models and not balanced that with pragmatic product development.
They’re never gonna recover their investment and eventually their partners will run away.
The GPT models are not a moat.
Hard to capture that value all in one place though.
Interesting to see this follow yesterday's news of their planned IPO in Q4. https://www.wsj.com/tech/ai/openai-ipo-anthropic-race-69f06a...
In the distance, Uncle Sam groans as his phone rings
I felt anxious about all the insane valuations and spending around AI lately, and I knew it couldn't last (I mean there's only so much money, land, energy, water, business value, etc). But I didn't really know when it was going to collapse, or why. But recently I've been diving into using local models, and now it's way more clear. There seems to be a specific path for the implosion of AI:
- Nvidia is the most valuable company. Why? It makes GPUs. Why does that matter? Because AI is faster on them than CPUs, ASICs are too narrowly useful, and because first-mover advantage. AMD makes GPUs that work great for AI, but they're a fraction of the value of Nvidia, despite the fact that they make more useful products than Nvidia. Why? Nvidia just got there first, people started building on them, and haven't stopped, because it's the path of least resistance. But if Nvidia went away tomorrow, investors would just pour money into AMD. So Nvidia doesn't have any significant value compared to AMD other than people are lazy and are just buying the hot thing. Nvidia was less valuable than AMD before, they'll return there eventually; all AMD needs is more adoption and investment.
- Every frontier model provider out there has invested billions to get models to the advanced state they're in today. But every single time they advance the state of the art, open weights soon match them. Very soon, there won't be any significant improvement, and open weights will be the same as frontier, meaning there's no advantage to paying for frontier models. So within a few years, there will be no point to paying OpenAI, Anthropic, etc. Again, these were just first-movers in a commodity market. The value just isn't there. They can still provide unique services, tailored polished apps, etc (Anthropic is already doing this by banning users who have the audacity to use their fixed-price plans with non-Anthropic tools). But with AI code tools, anyone can do this. They are making themselves obsolete.
- The final form of AI coding is orchestrated, agent-driven vibe-coding with safeguards. Think an insane asylum with a bowling league: you still want 100 people autonomously (and in parallel) knocking the pins over, but you have to prevent the inmates from killing anyone. That's where the future of coding is. It's just too productive to avoid. But with open models and open-source interfaces, anyone can do this, whether with hosted models (on any of 50 different providers) or a Beowulf cluster of cobbled-together cheap hardware in a garage.
- Eventually, in like 5-10 years (a lifetime away), after AI Beowulfs have been a fad for a while, people will tire of it and move back to the cloud, where they can run any model they want on a K8s cluster full of GPUs, basically the same as today. Difference between now and then is, right now everyone is chasing Anthropic because their tools and models are slightly better. But by then, they won't be. Maybe people will use their tools anyway? But they won't be paying for their models. And it's not just price: one of the things you learn quickly by running models, is they're all good for different things. Not only that, you can tweak them, fine-tune them, and make them faster, cheaper, better than what's served up by frontier models. So if you don't care about the results or cost, you could use frontier, but otherwise you'll be digging deep into them, the same way some companies invest in writing their own software vs paying for it.
- Finally, there's the icing on the cake: LLMs will be cooked in 10 years. I keep reading from AI research experts that "LLMs are a dead end" - and it turns out it's true. LLMs are basically only good because we invest an unsustainable amount of money in the brute-forcing of a relatively dumb form of iteration: download all knowledge, do some mind-bogglingly expensive computational math on it, tweak the results, repeat. There are only so many runs of that loop you can do, because fundamentally, all you're doing is trying to guess your way to an answer from a picture of the past. It doesn't actually learn the way a living organism learns, from experience, in real-time, going forward; LLMs only look backward. Like taking a snapshot of all the books a 6-year-old has read, then doing tweaks to try to optimize the knowledge from those books, then doing it again. There's only so much knowledge, only so many tweaks. The sensory data of the lived experience of a single year of life of a 6-year-old is many times more information than everything ever recorded by man. Reinforcement Learning actually gives you progressive, continuously improved knowledge. But it's slow, which is why we aren't doing it much. We do LLMs instead because we can speed-run them. But the game has an end, and it's the total sum of our recorded knowledge and our tweaks.
So LLMs will plateau, frontier models will make no sense, all lines of code will be hands-off, and Nvidia will return to making hardware for video games. All within about 10 years. With the caveat that there might be a shift in global power and economic stability that interrupts the whole game.... but that's where we stand if things keep on course. Personally, I am happy to keep using AI and reap the benefits of all these moronic companies dumping their money into it, because the open weights continue being useful after those companies are dead. But I'm not gonna be buying Nvidia stock anytime soon, and I'm definitely not gonna use just one frontier model company.
I've thought about this too. I do agree that open-source models look good and enticing, especially from a privacy standpoint. But these solutions are always going to remain niche solutions for power users. I'm not one of them. I can't be hassled/bothered to set up that whole thing (local or cloud) to gain some privacy and end up with an inferior model and tool. Let's not forget about the cost as well! Right now I'm paying for Claude and Gemini. I run out of Claude tokens real fast, but I can just keep going using Gemini/Gemini CLI for absolutely no cost, it seems.
The closed LLMs with the biggest number of users will eventually outperform the open ones too, I believe. They have a lot of closed data that they can train their next generation on. Especially the LLMs that the scientific community uses will be a lot more valuable (for everyone). So in terms of quality, the closed LLMs should eventually outperform the open ones, which is indeed worrisome.
I also felt anxious in early December about the valuations, but one thing remains certain: compute is in heavy demand, regardless of which LLM people use. I can't go back to pre-AI. I want more and more and faster and faster AI. The whole world is moving that way, it seems. I'm invested in physical AI atm (chips, RAM, ...), whose valuations look decently cheap.
I think you should reconsider the idea that frontier models will be superior, for a couple reasons:
- LLMs have fixed limitations. The first one is training: the dataset you use. There's only so much information in the world and we've largely downloaded it all, so it can't get better there. Next, you can do training on specific things to make a model better at specific things, but that is by definition niche; and you can actually do that for free today with Google's Tensors in free Cloud products. Later people will pay for this, but the point is, it's ridiculously easy for anyone to fine-tune training; we don't need frontier companies for that. And finally, LLM improvements come from small tweaks to models, and those come to open weights within a matter of months, often surpassing the frontier! All you have to do is sit on your ass for a couple months and you have a better open model.

Why would anyone do this? Because once all models are extremely good (about 1 year from now) you won't need them to be better; they'll already do everything you need in 1-shot, so you can afford to sit and wait for open models. Then the only reason left to use frontier cloud is that they host a model; but other people do cloud-hosted models! Because it's a commodity! (And by the way, people like me are already pissed off at Anthropic because we're not allowed to use OAuth with 3rd-party tools, which is complete bullshit. I won't use them on general principle now; they're a lock-in moat, and I don't need them.)

There will also be better, faster, more optimized open models, which everyone is going to use. For doing math you'll use one model, for intelligence a different model, for coding a different model, for health a different model, etc., and the reason is simple: it's faster, lower memory, and more accurate. Why do things 2x slower if you don't have to? Frontier model providers just don't provide this kind of flexibility, but the community does. Smart users will do more with less, and that means open.
On the hardware:
- Def it will continue to be investment-worthy, but be cautious. The growth simply isn't going to continue at this pace, for the simple reason that we've already got enough hardware. They want more hardware so they can keep trying to "scale LLMs" the way they have, with brute force. But soon the LLMs will plateau, and the brute-force method isn't going to net the kind of improvements that justify the cost. Demand for hardware is going to drop like a stone in 1-2 years; if they don't cease building/buying then, they risk devaluing it (supply/demand), but either way Nvidia won't be selling as much product, so there goes their valuation. And RAM is eventually going to get cheaper, so even if demand goes up, spending is less.

The other reason demand won't continue at pace is that investors are already scared, so the taps are being tightened (I'm sure the "Megadeal" being put on hold is the secret investment groups tightening their belts or trying to secure more favorable terms). I honestly can't say what the economic picture is going to look like, but I guarantee you Nvidia will fall from its storied heights back to normal earth, and other providers will fill the gap. I don't know who for certain, but AMD just makes sense, because they're already supported by most AI software the way Nvidia is (try to run open-source inference today; it's one of those two). Frontier and cloud providers have Tensors and other exotic hardware, which is great for them, but everyone else is gonna buy commodity chips. Watch for architectures with lower prices and higher parts availability.
> There's only so much information in the world and we've largely downloaded it all, so it can't get better there.
What about all the input data into LLMs and the conversations we're having? That must be able to produce a better next gen model, no?
> it's ridiculously easy for anyone to fine-tune training, we don't need frontier companies for that.
Not for me. It'll take me days, and then I'm pretty sure it won't be better than Gemini 3 pro for my coding needs, especially in reasoning.
> For doing math you'll use one model, for intelligence you'll use a different model, for coding a different model, for health a different model, etc, and the reason is simple: it's faster, lower memory, and more accurate.
Why wouldn't e.g. Gemini just add a triage step? And are you sure it's that much easier to get a better model for math than the big ones?
I think you underestimate the friction that handpicking and/or training specific models causes regular users, whilst the big vendors are good enough for their needs.
> What about all the input data into LLMs and the conversations we're having? That must be able to produce a better next gen model, no?
Better models are largely coming from training, tuning, and specific "techniques" discovered to do things like eliminate loops and hallucinations. Human inputs are a small portion of that; you'll notice that all models are getting better despite the fact that all these companies have different human inputs! A decent amount of the models' abilities come from properties like temperature/p-settings, which is basically introducing variable randomness. (these are now called "low" and "high" in frontier models) This can cause problems, but also increased capability, so the challenge isn't getting better input, it's better controlling randomness (sort of). Even coding models benefit from a small amount of this. But there is a lot more, so overall model improvements are not one thing, they are many things that are not novel. In fact, open models get novel techniques before the frontier does, it's been like that for a while.
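For what it's worth, the temperature/top-p machinery itself is tiny; the hard-won part is everything upstream of it. A toy numpy sketch of what happens at the output layer (simplified relative to what real inference servers do):

    import numpy as np

    def sample_token(logits, temperature=0.8, top_p=0.95, rng=np.random.default_rng()):
        # Temperature rescales logits: <1 sharpens the distribution, >1 flattens it.
        z = logits / temperature
        probs = np.exp(z - z.max())
        probs /= probs.sum()
        # Nucleus (top-p): keep the smallest set of tokens whose cumulative
        # probability reaches top_p, renormalize, and sample from that set.
        order = np.argsort(probs)[::-1]
        cutoff = np.searchsorted(np.cumsum(probs[order]), top_p) + 1
        keep = order[:cutoff]
        return rng.choice(keep, p=probs[keep] / probs[keep].sum())

    fake_vocab_logits = np.random.default_rng(0).normal(size=32000)
    print(sample_token(fake_vocab_logits))

Open models expose all of these knobs directly, which is part of why the community iterates on sampling tricks faster than the closed labs ship them.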
> Not for me. It'll take me days, and then I'm pretty sure it won't be better than Gemini 3 pro for my coding needs, especially in reasoning.
If you don't want the improvements, that's up to you; I'm just saying the frontier has no advantage here, and if people want better than frontier, it's there for free.
> Why wouldn't e.g. Gemini just add a triage step? And are you sure it's that much easier to get a better model for math than the big ones?
They already do have triage steps, but despite that, they still create specific models for specific use-cases. Most people already choose Thinking by default for general queries, and coding models for coding. That will continue, but there will be more providers of more specific models that will outperform frontier models, for the simple fact that there are a million use-cases out there and lots of opportunity for startups/community to create a better-tailored model for cheaper. And soon all our computers will be decent at doing AI locally, so why pay for frontier anyway? I can already AI-code locally on a 4-year-old machine. Two years from now, there likely won't be a need for you to use a cloud service at all, because your local machine and a local model will be equivalent, private, and free.
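For what it's worth, a triage step doesn't have to be exotic. Here's a rough sketch with entirely hypothetical model names; a real router would classify the query with a small, cheap model rather than keyword matching:

    # Hypothetical specialist models; the names are made up.
    SPECIALISTS = {
        "math": "local-math-14b",
        "code": "local-coder-30b",
        "health": "local-med-8b",
    }

    def route(query: str) -> str:
        # Naive keyword triage; production routers classify with a
        # cheap model and fall back to a generalist on low confidence.
        q = query.lower()
        if any(w in q for w in ("integral", "prove", "equation")):
            return SPECIALISTS["math"]
        if any(w in q for w in ("bug", "compile", "refactor")):
            return SPECIALISTS["code"]
        if any(w in q for w in ("symptom", "dosage")):
            return SPECIALISTS["health"]
        return "generalist-70b"  # hypothetical catch-all model

    print(route("Why won't this function compile?"))  # -> local-coder-30b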
Thank you. You have somewhat shifted my beliefs in a meaningful way.
The circular funding scheme is finally becoming evident to people. Did my best to cover it in a video https://www.youtube.com/watch?v=Q5fSbuO8Q3k
Google has the data and the TPUs and the massive cash to advance.
Microsoft has GitHub - the world’s biggest pile of code training data, plus infinite cash.
OpenAI has… none of these advantages.
And Google and Microsoft have huge distribution advantages that OpenAI doesn’t. Google and Microsoft can add AI to their operating systems, browsers, and office apps that users are already using. OpenAI just has a website and a niche browser. To Google and Microsoft, AI is a feature, not a product.
this is the argument i continue to have with people. first mover isn't always an advantage - i think openai will be sold for pennies on the dollar someday (within the next 5 years, once they run out of funding).
Google has data, TPUs, and a shitload of cash to burn
>first mover isn't always an advantage
but in this case it is; the ChatGPT name is really, really strong. It's like "just google it" instead of "just search the web".
Maybe, but it's far from profitable. People largely don't want to pay for it either.
Who cares? Profitability is not the most important thing at every stage of a product.
Altman is a horrible CEO too, which won't help. His table-side manners are awful.
I'm not sure. Google was by far the best search engine for a long time in the early 2000s, whereas today there are a lot of models close to what OpenAI has.
Name recognition only gets you so far. "Just Google it" happened because Google was better than Hotbot/Altavista/Yahoo! etc by orders of magnitude. Nobody even bothered to launch a competing search engine in the 2000s because of this (until Microsoft w/ Bing in 2009). There is no such parallel with ChatGPT; Google, Bing, even DuckDuckGo has AI search.
First mover advantage matters only if it has long-lasting network effects. American schools are run on Chromebooks and Google Docs/Slides, but these have no penetration in enterprise, as college students have been discovering when they enter their first jobs.
I wonder how much the indications of Altman's duplicitous behavior that surfaced in the deposition findings have been relevant here.
I doubt they care at all. It might even be a feature.
the biggest indicator imo that it's gonna pop was DRAM makers not promising to expand production
Well, they might have gotten a little cautious from previous boom and bust cycles. Perhaps they are wary about the economic sustainability of the whole AI thing. But perhaps they are also driven by greed at this point: why not just constrain supply and increase margins while there is no real competition?
I've been designing chips since 1997. The first 2 companies I was at had their own fabs. It's been a boom and bust industry for 50 years or more.
https://www.macrobusiness.com.au/2021/05/the-great-semicondu...
Here is a long article from last year about Sam Altman.
https://www.nytimes.com/2024/09/25/business/openai-plan-elec...
https://finance.yahoo.com/news/tsmc-rejects-podcasting-bro-s...
> TSMC’s leadership dismissed Altman as a “podcasting bro” and scoffed at his proposed $7 trillion plan to build 36 new chip manufacturing plants and AI data centers.
I thought it was ridiculous when I read it. I'm glad the fabs think he's crazy too. If he wants this then he can give them the money up front. But of course he doesn't have it.
After the dot com collapse my company's fabs were running at 50% capacity for a few years and losing money. In 2014 IBM paid Global Foundries $1.5 billion to take the fabs away. They didn't sell the fabs, they paid someone to take them away. The people who run TSMC are smart and don't want to invest $20-100 billion in new fabs that come online in 3-5 years just as the AI bubble bursts and demand collapses.
https://gf.com/gf-press-release/globalfoundries-acquire-ibms...
Thanks for the insights. The 'podcasting bro' bit is hilarious.
I don't think demand will collapse though, since the Mag7 have the cash flow to spend, and they can monetize when the time is ripe.
What do you think?
I started working during the dot com boom. I was getting 3 phone calls a week from recruiters on my work telephone number. Then I saw the bubble burst from mid-2000 on. In 2001 zero recruiters called me. I hated my job after the reorg, and it took me 10 months to find a new one.
I know a lot of people in the 45+ age range including many working on AI accelerators. We all think this is a bubble. The AI companies are not profitable right now for the prices they charge. There are a bunch of articles on this. If they raise prices too quickly to become profitable then demand will collapse. Eventually investors will want a return on their investment. I made a joke that we haven't reached the Pets.com phase of the bubble yet.
Nothing wrong with not wanting to go bankrupt if the bubble pops. It isn’t an indicator, it’s risk management.
which is itself an indicator
> He[Jensen Huang] has also privately criticized what he has described as a lack of discipline in OpenAI’s business approach and expressed concern about the competition it faces from the likes of Google and Anthropic, some of the people said.
People talk about an AI bubble. What we actually have is a GPU bubble. NVidia makes really expensive GPUs for AI. Others also make GPUs.
Companies like Google produce and operate AI models largely using their own TPUs rather than NVidia's GPUs. We've seen the Chinese produce pretty competitive open models with either older NVidia GPUs or alternative GPUs because they are not allowed to buy the newer ones. And AMD, Intel and other chip makers are also eager to get in on the action. Companies like Microsoft, Amazon, etc. have their own chips as well (similar to Google). All the hyperscalers are moving away from NVidia.
And then Apple sells a non-Intel, non-NVidia range of workstations and laptops that are pretty popular with AI researchers, because the M-series CPU/GPU/NPU is pretty decent value for running AI models. You see similar movement with ARM chips from Qualcomm and others. They all want to run AI models on phones, tablets, and laptops. But without NVidia.
NVidia's bubble is about vastly overcharging for a thing that only they can provide. Their GPU chips have enormous margins relative to CPU chips coming out of the same/similar machines. That's a bubble. As soon as you introduce competition, the company with the best price/performance wins. NVidia is still pretty good at what they do, but not enough to justify an order-of-magnitude price/cost difference.
NVidia's success has been predicated on its proprietary software and instruction set (CUDA). That's a moat that won't last. The reason Google can use its own TPUs rather than NVidia GPUs is that it worked hard to get rid of its CUDA dependence. Same for the other hyperscalers. At this point they can do training and inference without CUDA/NVidia, and it's more cost-effective.
The reason that this 100B deal is apparently being reconsidered is that it is a bad deal for OpenAI. It was going to overpay for a solution that it can get cheaper elsewhere. It's bad news for NVidia, good news for OpenAI. This deal started out with just NVidia, but at this point there are also deals with AMD, MS, and others. OpenAI, like the other hyperscalers, is not betting the company on NVidia/CUDA. Good for them.
> People talk about an AI bubble. What we actually have is a GPU bubble. NVidia makes really expensive GPUs for AI. Others also make GPUs.
Yes it is, and for multiple reasons, I think. Competition in that space not sleeping is one, but there's also a huge overestimation of demand, combined with the questionable belief that those GPUs and the datacenters housing them can actually be built and put into operation as fast as envisioned.
> The reason that this 100B deal is apparently being reconsidered is that it is a bad deal for OpenAI. It was going to overpay for a solution that it can get cheaper elsewhere. It's bad news for NVidia, good news for OpenAI. This deal started out with just NVidia, but at this point there are also deals with AMD, MS, and others. OpenAI, like the other hyperscalers, is not betting the company on NVidia/CUDA. Good for them.
I think in the case of OpenAI both may be true. While what you are saying makes sense, and NV's first-mover advantage obviously can't last forever, OpenAI currently has little to no competitive advantage over other players. Combine this with the fact that some (esp. Google) sit on a huge pile of cash. In contrast, for OpenAI the party is pretty much over as soon as investors stop throwing money into the oven, so they might need to cut back a bit.
And so it begins.
grabbing popcorn
Would be interesting to see how Oracle's CDSs react to this news.
Idk about this news specifically, but Oracle CDS prices are moving. The link below says 30k layoffs may hit Oracle, which feels a bit hyperbolic, so the article may not be grounded in reality.
https://www.theregister.com/2026/01/29/oracle_td_cowen_note/
Edit: Another src https://www.cio.com/article/4125103/oracle-may-slash-up-to-3...
I know OpenAI isn't a popular company here (anymore), but the doomerism in this thread seems a bit too hasty. People were just as doomy when Altman was sacked; that turned into nothing, and industry market caps have doubled or even tripled since.
The article references an "undisciplined" business. I wonder if this is speaking to projects like Sora. Sora is technically impressive and was fun for a moment, but it's nowhere near the cultural relevance of TikTok, and I believe it's significantly more expensive, harder to monetize, and consuming a significant share of their precious GPU capacity. Maybe I'm just not the demo and am missing something.
And yes, Sam is incredibly unlikable. Every time I see him give an interview, I am shocked how poorly prepared he is. Not to mention his “ads are distasteful, but I love my supercar and ridiculous sunglasses.”
Hey, at least Sora exists. They acquired Jony Ive's company for $6.5bn without even any vaporware to point to.
This is certainly not fatal for OpenAI, but there is some irony in Altman and Musk both struggling.
will there be more 5090 FE cards at a lower price? one can only hope
I would love it if AI fizzled out and nvidia had to go back to making gaming cards. Just trying to have a simple life here and play video games, and ridiculous hype after hype keeps making it expensive.
I hate to say it, but there will likely be much more annoying things coming to disturb your gaming experience.
At this point I’m content just playing single player games and older games. What is there left to disturb me? (After I build a PC)
AINFTs! You're right, and it's a bit depressing. Seems more and more that cloud gaming is the only long-term solution the industry will tolerate... I hate it.
This seems unfair and biased. After all, I’ve never seen a more obviously capable CEO.
https://preview.redd.it/sam-altman-on-the-model-v0-7u2a2o7lr...
My doubts have all evaporated after seeing that photo.
Does this mean OpenAI won't be needing all that RAM after all...?
sadly the Micron/SanDisk bubble is going full steam ahead
How is it legal for them to do this to pump stocks?
Sex
Ha!
"the people said"
Nvidia should buy OpenAI. I like Jensen.
That's Sam Altman's wet dream: to get out of this with lots of cash and headache-free when the bubble bursts.
...and the merry-go-round stopped
Not for all the players. Not everyone has over-raised their fundamentals.
Literally the whole economy has "over-raised its fundamentals" though. Not everyone is going to fail in exactly this way, but (again, pretty much literally) everyone is exposed to a feedback-driven crash from "everyone else" that ended up too exposed.
We all know this is a speculative run-up. We all know it'll end somehow. Crashes always start with something like this. Is this the tipping point? Damned if I know. But it'll come.
Just print so much money that people (yes, banks are people!) have nothing better to do than buy stonks. Problem solved!
This is reasoning from a mistake. Market valuation is about VALUE, an abstract quantity assigned by the market, which is not the same thing as MONEY, which can be "printed"[1]. Market values go up and down on their own, irrespective of the amount of money in circulation. They reflect consensus (often irrational) about what the securities "should be trading at", and that's all. If the currency inflates or deflates, the markets do too.
[1] Though recognize that by engaging in that frame you're painting yourself as an unserious amateur being influenced by partisan media. Real governments do not "print money" in any real sense, and attempts to conflate things like bond debt with it run afoul, yet again, of the money/value mistake.
If the ice cream cone won't lick itself, who will?
OpenAI is too important to run out of cash. The gov will make companies invest.
Too important to what? The bubble?
Important for what? Google's and Anthropic's models are already better, Google actually makes money, and both are US companies. What strategic relevance does OpenAI have?
We will miss Sam’s irresistible puppy eyes
Is it? What do they have that Google and Anthropic do not at this point?
Cash ashes.
Unrelated: does anyone else think that Jensen's gatorskin leather jacket at their latest conference didn't suit him at all? It felt very "witness my wealth" and out of character.
that is his character