Many employers want employees to act like cult members. But when the going gets tough, those employees are often the first laid off, and the least prepared for it.
Employers, you can't have it both ways. As an employee, don't get fooled.
During the first-ever layoff at $company in 2001, part of the dotcom implosion, one of my coworkers who got whacked complained that it didn't make sense, as he was one of the company's biggest boosters and believers.
It was supremely interesting to me that he thought the company cared about that at all. I couldn't get my head around it. He was completely serious; he kept arguing that his loyalty was an asset. He was much more experienced than me (I was barely two years into working).
In hindsight, I think it is true that companies value that in a way. I've come to appreciate people who just stick it out for a while. I try to make sure their comp makes it worth their while. They are so much less annoying to deal with than the assholes who constantly bitch or moan about doing what they're paid for.
But as a personal strategy, it’s a poor one. You should never love or be loyal to something that can’t love you back.
The one and ONLY way I've ever seen "company" loyalty rewarded in any way is if you have a DIRECT relationship with a top level senior manager (C-suite). They will specifically protect you if they truly believe you are on "their side" and you are at their beck and call.
I think loyalty has value to the company but not as much as people think. To simplify it, multiple things contribute to "value" and loyalty is just a small part of it.
Companies appreciate loyalty… as long as it doesn't cost them anything. The moment you ask for more money or they need to reduce the workforce, all of that goes out the window.
100% agree. There is no reason for employees to be loyal to a company. LLM building is not some religious work. It's machine learning on big data. Always do what is best for you, because companies don't act like loyal humans; they act like large organizations that aren't always fair or rational or logical in their decisions.
To a lot of tech leadership, it is. The belief in AGI as a savior figure is a driving motivator. Just listen to how Altman, Thiel or Musk talk about it.
That's how they talk about it publicly. I can attest that the companies of two of the three you list are not like that internally at all. It's all marketing, outwardly focused.
"Tech founders" for whom the "technology" part is the thing always getting in the way of the "just the money and buzzwords" part.
Now they think they can automate it away.
25+ years in this industry and I still find it striking how different the perspective between the "money" side and the "engineering" side is... on the same products/companies/ideas.
> Just listen to how Altman, Thiel or Musk talk about it.
It's surprising how little they seem to have thought it through. AGI is unlikely to appear in the next 25 years, but even if, as a mental exercise, you accept that it might happen, a paradox reveals itself: if AGI is possible, it destroys its own value as a defensible business asset.
Like electricity, nuclear weapons, or space travel, once the blueprint exists, others will follow. And once multiple AGIs exist, each will be capable of rediscovering and accelerating every scientific and technological advancement.
The prevailing idea seems to be that the first company to achieve superintelligence will be able to leverage it into a permanent advantage via exponential self improvement, etc.
Exactly. Though you can learn a lot about an employer by how it has conducted layoffs. Did they cut profits and management salaries and attempt to reassign people first? Did they provide generous payouts to laid off employees?
If the answer to any of these questions is no then they're not worth committing to.
The only case where you can be both an employee and a missionary is, well, if you actually are a missionary, or you're working at a charity/NGO etc. trying to help people, animals, and so on.
Especially for an organization like OpenAI that completely twisted its original message in favor of commercialization. The entire missionary bit is BS trying to get people to stay out of a sense of what exactly?
I'm all for having loyalty to people and organizations that show the same. Eventually it can and will shift. I've seen management changed out from over me more times than I can count at this point. Don't get caught off guard.
It's even worse in the current dev/tech job market, where wages are being pushed down to around 2010 levels. I've been working two jobs just to keep up with expenses, since I've been unable to match my more recent prior income. One ended recently, and I'm looking for a new second job.
I think there's more to work than just taking home a salary. Not equally true among all professions and times in your life. But most jobs I took were for less money with questionable upside. I just wanted to work on something else or with different people.
The best thing about work is the focus on whatever you're doing. Maybe you're not saving the world, but it's great to go in and have one goal that everyone works toward. And you get excited when you see your contributions make a difference or you build a great product. You can laugh and say I was part of a 'cult', but it sure beats working a miserable job for just a slightly higher paycheck.
That's because you don't believe in the mission of the product or realize its impact on society. If you work at Microsoft, you are just working to make MS money, as they are like a giant machine.
That said, it seems like every worker can be replaced. Lost stars get replaced by new stars.
"As we know, big tech companies like Google, Apple, and Amazon have been engaged in a fierce battle for the best tech talent, but OpenAI is now the one to watch. They have been on a poaching spree, attracting top talent from Google and other industry leaders to build their incredible team of employees and leaders."
We shouldn't use the word "poaching" in this way. Poaching is the illegal hunting of protected wildlife. Employees are not the property of their employers, and they are free to accept a better offer. And perhaps companies need to revisit their compensation practices, which often mean that the only way for an employee to get a significant raise is to change companies.
Big picture, I'll always believe we dodged a huge bullet in that "AI" got big in a nearly fully "open-source," maybe even "post open-source" world. The fact that Meta is, for now, one of the good guys in this space (purely strategically and unintentionally) is fortunate and almost funny.
Another funny, possibly sad, coincidence is that the licenses that made open source what it is will probably be absolutely useless going forward, because, as recent precedent has shown, companies can train on what they have legally gained access to.
On the other hand, AGPL continues to be the future of F/OSS.
MIT is also still useful; it lets me release code where I don't really care what other people do with it as long as they don't sue me (an actual possibility in some countries)
The US, for one. You can sue nearly anyone for nearly anything, even something you obviously won't win in court, as long as you find a lawyer willing to do it; you don't need any actual legal standing to waste the target's time and money.
Even the most unscrupulous lawyer is going to look at the MIT license, realize the target can defend it for a trivial amount of money (a single form letter from their lawyer) and move on.
You can sue for damages if they put malware in the code; there is no license that protects you from distributing harmful products, even if you do it for free.
And illegally too. Anthropic didn't pay for those books they used.
It's too late at this point. The damage is done. These companies trained on illegally obtained data and they will never be held accountable for that. The training is done and they got what they needed. So even if they can't train on it in the future, it doesn't matter. They already have those base models.
Then punitive measures are in order. Add it to the pile of illegal, immoral, and unethical behavior of the feudal tech oligarchs already long overdue for justice. The harm they have done and are doing to humanity should not remain unpunished.
And the legality of this may vary by jurisdiction. There’s a nonzero chance that they pay a few million in the US for stealing books but the EU or Canada decide the training itself was illegal.
It’s not going to happen. The EU is desperate to stop being in fourth place in technology and will do absolutely nothing to put a damper on this. It’s their only hope to get out of the rut.
Then the EU and Canada just won't have any sovereign LLMs. They'll have to decide whether they'd rather prop up some artificial monopoly or support (by not actively undermining) innovation.
If I can reproduce the entirety of most books off the top of my head and sell that to people as a service, it's a copyright violation. If AI does it, it's fair use.
>If I can reproduce the entirety of most books off the top of my head and sell that to people as a service, it's a copyright violation. If AI does it, it's fair use.
Assuming you're referring to Bartz v. Anthropic, that is explicitly not what the ruling said; in fact, it's almost the inverse. The judge said that output from an AI model which is a straight-up reproduction of copyrighted material would likely be an explicit violation of copyright. This is on page 12/32 of the judgement[1].
But the vast majority of output from an LLM like Claude is not a word for word reproduction; it's a transformative use of the original work. In fact, the authors bringing the suit didn't even claim that it had reproduced their work. From page 7, "Authors do not allege that any infringing copy of their works was or would ever be provided to users by the Claude service." That's because Anthropic is already explicitly filtering out results that might contain copyrighted material. (I've run into this myself while trying to translate foreign language song lyrics to English. Claude will simply refuse to do this)[2]
They should still have to pay damages for possessing the copyrighted material. That's possession of unauthorized copies, which courts have found to be copyright infringement. Remember all the 12-year-olds who got their parents sued back in the 2000s? They had unauthorized copies.
I don't know what exactly you're referring to here. The model itself is not a copy, you can't find the copyrighted material in the weights. Even if you could, you're allowed under existing case law to make copies of a work for personal use if the copies have a different character and as long as you don't yourself share the new copies. Take the Sony Betamax case, which found that it was legal and a transformative use of copyrighted material to create a copy of a publicly aired broadcast onto a recording medium like VHS and Betamax for the purposes of time-shifting one's consumption.
Now, Anthropic was found to have pirated copyrighted work when they downloaded and trained Claude on the LibGen library. And they will likely pay substantial damages for this. So on those grounds, they're as screwed as the 12-year-olds and their parents. The trial to determine damages hasn't happened yet, though.
This was immediately my reaction as well, but I'm not a judge so what do I know. In my own mind I mark it as a "spice must flow" moment -- it will seem inevitable in retrospect but my simple (almost surely incorrect) take is that there just wasn't a way this was going to stop AI's progress. AI as a trend has incredible plot armor at this point in time.
Is the hinge that the tools can recall a huge portion (not perfectly, of course) but usually don't? What seems even more straightforward is the substitute-good idea: it seems reasonable to assume people will buy fewer copies of book X when they start generating books heavily inspired by book X.
But, this is probably just a case of a layman wandering into a complex topic, maybe it's the case that AI has just nestled into the absolute perfect spot in current copyright law, just like other things that seem like they should be illegal now but aren't.
Yea, that dipshit judge just opened the floodgates for more problems. The problem is they don't understand how this stuff works, and they're in the position of having to make a judgement on it. They're completely unprepared to do so.
Now there's precedent for future cases where theft of code or any other work of art can be considered fair use.
It's not really "virtually impossible to comply with". It's very restrictive, yes, but not hard to comply with if you want to.
And yes, it is a EULA pretending to be a license. I'd put good odds on it being illegal in my country, and it may even be illegal in the US. But it's well aligned with the goals of GNU.
So interestingly, free meant autonomy for Stallman and the original proponents of "copyleft"-style licenses too. But autonomy for end users, not developers. Stallman et al. believed the copyleft-style licenses maximized autonomy for end users; rightly or wrongly, that was the intent.
I read through it, and I think the analysis suffers from the fact that in the case where the modifier is the user, it's fine.
Free software refers to user freedoms, not developer freedoms.
I don't think the below is right:
> > Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software.
>
> Let's break it down:
>
> > If you modify the Program
>
> That is if you are a developer making changes to the source code (or binary, but let's ignore that option)
>
> > your modified version
>
> The modified source code you have created
>
> > must prominently offer all users interacting with it remotely through a computer network
>
> Must include the mandatory feature of offering all users interacting with it through a computer network (computer network is left undefined and subject to wide interpretation)
I read the AGPL to mean if you modify the program then the users of the program (remotely, through a computer network) must be able to access the source code.
It has yet to be tested, but that seems like the common-sense reading to me (which matters, because judges do apply judgement). It just seems like they are trying too hard to find a legal gotcha. I'm not a lawyer, so I can't speak to that, but I certainly don't read it the same way.
I don't agree with this interpretation of every-change-is-a-violation either:
> Step 1: Clone the GitHub repo
>
> Step 2: Make a change to the code - oops, license violation! Clause 13! I need to change the source code offer first!
>
> Step 1.5: Change the source code offer to point to your repo
This example seems incorrect -- modifying the code does not automatically make people interact with the program over a network...
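For what it's worth, under that common-sense reading, compliance looks mundane in practice. A minimal sketch of a service making the clause-13 offer (a hypothetical Flask app; the file name and wording are illustrative, and this is obviously not legal advice):

    # Sketch: a modified AGPL program offering its Corresponding Source
    # to remote users, per the common-sense reading of clause 13.
    from flask import Flask, send_file

    app = Flask(__name__)

    @app.route("/")
    def index():
        # The "prominent offer": link the source from the UI users actually see.
        return 'Hello! <a href="/source">Source code of this service (AGPL clause 13)</a>'

    @app.route("/source")
    def corresponding_source():
        # Serve the exact source tree of the running, modified version,
        # packaged ahead of time as a tarball.
        return send_file("corresponding-source.tar.gz", as_attachment=True)

    if __name__ == "__main__":
        app.run()

Under this reading, cloning and hacking on the code triggers nothing; the obligation only bites once you actually expose your modified version to users over a network.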
"free software" was defined by the GNU/FSF... so I generally default to their definitions. I don't think the license falls afoul of their stated definitions.
That said, they're certainly anti-capitalist zealots; that's kind of their thing. I don't agree with that, but that's beside the point.
Hell is, by design, a consequence for poor people. (People could literally pay the church to not go to hell[0]). Rich people have no consequences whatsoever, let alone poor people consequences.
Not "by design", as historically the hell came first. It was only much later that they catholic church started talking about the purgatory and the possibility of reducing your punishment by paying money.
The people running AI companies have figured out that there is no such thing as hell. We have to come up with new reasons for people to behave in a friendly way.
We already have such reasons. Besides, all religious "kindness" was never kindness without strings attached, even though they'd like you to think that was the case.
Open source may be necessary but it is not sufficient. You also needed the compute power and architecture discoveries and the realisation that lots of data > clever feature mapping for this kind of work.
A world without open source might still have given birth to 2020s AI, but probably at a slower pace.
We would have to know their intent to really know if they fit a general understanding of "the good guys."
It's very possible that China is open-sourcing LLMs because it's currently in their best interest to do so, not because of some moral or principled stance.
I don't think the intent really matters once the thing is out in the open.
I want open source AI i can run myself without any creepy surveillance capitalist or state agency using it to slurp up my data.
Chinese companies are giving me that - I don't really care about what their grand plan is. Grand plans have a habit of not working out, but open source software is open source software nonetheless.
The country ruled by a "people's party" has almost no open source culture, while capitalism is leading the entire free software movement. I'm not sure what that says about our society and politics, but the absurdist in me is having a good laugh every time I think about this :D
There’s actually a lot of open source software made by Chinese people. The government just doesn’t fund it. Not directly anyway, but there’s a ton of Chinese companies that do.
Capitalist countries (actually, there are no other kinds of economies in reality) are leading the open source software movement because it is a way for corporations to get software development services and products for free rather than paying for them. It's a way of lowering labour costs.
Highly paid software engineers working in a ZIRP economy with skyrocketing compensation packages were absolutely willing to play this game, because "open source" in that context often is/was a resume or portfolio building tool and companies were willing to pay some % of open source developers in order to lubricate the wheels of commerce.
That, I think, is going to change.
Free software, which I interpret as copyleft, is absolutely antithetical to them, and reviled precisely because it gets in the way of getting work for free/cheap and often gets in the way of making money.
But that's precisely why Meta are the "good guys": China is the good guys in the same way that Meta is, though in this case many of the Chinese models are extremely good.
Meta has open sourced all of their offerings purely to try to commoditize the industry to the greatest extent possible, hoping to avoid their competitors getting a leg up. There is zero altruism or good intentions.
If Meta had an actually competitive AI offering, there is zero chance they would be releasing any of it.
It's really hard to tell. If the current extreme trend of "What a great question!" flattery, and all the crap that forces one to put lines like
* Do not use emotional reinforcement (e.g., "Excellent," "Perfect," "Unfortunately").
* Do not use metaphors or hyperbole (e.g., "smoking gun," "major turning point").
* Do not express confidence or certainty in potential solutions.
into the instructions, so that the model doesn't treat you like a child, teenager, or narcissist craving flattery, can really affect an individual's mood and way of thinking, then those Chinese models might very well have baked in something similar, but targeted at reducing the productivity of certain individuals or weakening their belief in Western culture.
I am not saying they are doing that, but they could be doing it sometime down the road without us noticing.
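For what it's worth, the countermeasure is mundane: those instructions just ride along as a system prompt on every request. A minimal sketch, assuming the OpenAI Python SDK (the model name and prompt wording here are illustrative, not a recommendation):

    # Sketch: baking anti-flattery instructions into a system prompt.
    # Assumes `pip install openai` and OPENAI_API_KEY set in the environment.
    from openai import OpenAI

    client = OpenAI()

    ANTI_FLATTERY = (
        'Do not use emotional reinforcement (e.g., "Excellent," "Perfect," "Unfortunately").\n'
        'Do not use metaphors or hyperbole (e.g., "smoking gun," "major turning point").\n'
        'Do not express confidence or certainty in potential solutions.'
    )

    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model name
        messages=[
            {"role": "system", "content": ANTI_FLATTERY},
            {"role": "user", "content": "Why does my build fail intermittently?"},
        ],
    )
    print(response.choices[0].message.content)

Of course, a provider-side bias baked into the weights sits below this layer, which is exactly why it would be so hard to notice.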
I mean in some sense the Chinese domestic policy (“as in Xi”) made the conditions possible for companies like DeepSeek to rise up, via a multi-decade emphasis on STEM education and providing the right entrepreneurial conditions.
But yeah by analogy with the US, it’s not as if the W. Bush administration can be credited with the creation of Google.
Do we know if Meta will stick to its strategy of making weights available (which isn't open source to be clear) now that they have a new "superintelligence" subdivision?
Don't make the mistake of anthropomorphizing Mark Zuckerberg. He didn't open source anything because he's a "good guy"; he's just commoditizing the complement.
The "good guy" is a competitive environment that would render Meta's AI offerings irrelevant right now if it didn't open source.
The reason Machiavellianism is stupid is that the grand ends the means aim to obtain often never come to pass, but the awful things done in pursuit of them certainly do. So the motivation behind those means doesn't excuse them. And I see no reason the inverse of this doesn't hold true. I couldn't care less if Zuckerberg thinks open sourcing Llama is some grand scheme to let him take over the world and become its god-king emperor. In reality, that almost certainly won't happen. But what certainly will happen is the world getting free and open source access to LLM systems.
When any scheme involves some grand long-term goal, I think a far more naive approach to behaviors is much more appropriate in basically all cases. There's a million twists on that old quote that 'no plan survives first contact with the enemy', and with these sort of grand schemes - we're all that enemy. Bring on the malevolent schemers with their benevolent means - the world would be a much nicer place than one filled with benevolent schemers with their malevolent means.
> The reason Machiavellianism is stupid is that the grand ends the means aim to obtain often never come to pass
That doesn't feel quite right as an explanation. If something fails 10 times, that just makes the means 10x worse. If the ends justify the means, then doesn't that still fit into Machiavellian principles? Isn't the complaint closer to "sometimes the ends don't justify the means"?
You have to assume a grand end is achievable through some knowable means. I don't see any real reason to think this is the case, certainly not on any sort of meaningful timeframe. And I think this is even less true when we consider the typical connotation of Machiavellianism, which is achieving those ends through 'evil' actions.
It's extremely difficult to think of any real achievements sustained on the back of Machiavellianism, but one can list essentially endless entities whose downfall was brought on precisely by such.
Machiavellianism is not for everyone. It is specifically a framework for people in power: kings, heads of state, CEOs, commanders. In competitive environments with a lot at stake (people's lives, money, the future), it is often difficult to make decisions. Having a framework in place that allows you to make decisions is very useful.
Mitch Prinstein wrote a book about power, and it shows that dark traits aren't the standard in most leaders, nor are they the best way to get into/stay in power.
author is "board certified in clinical child and adolescent psychology, and serves as the John Van Seters Distinguished Professor of Psychology and Neuroscience, and the Director of Clinical Psychology at the University of North Carolina at Chapel Hill" and the book is based on evidence
edit: you can't take a book from 1600 and a few living assholes with power and conclude that; there are a bunch of philanthropists and other people around
I'm not saying that the end outcome won't be beneficial. I don't have a crystal ball. I'm just saying that what he is doing is in no way selfless or laudable or worthy of praise.
Same goes for when Microsoft went gaga for open source and demanded brownie points for pretending to turn over a new leaf.
> Don't make the mistake of anthropomorphizing Mark Zuckerberg
Considering the rest of your comment it's not clear to me if "anthropomorphizing" really captures the meaning you intended, but regardless, I love this
Oh, absolutely -- I definitely meant that in the least complimentary way possible :). In a way, it's just the triumph of the ideals of "open source": sharing is better for everyone, even Zuck, selfishly.
> The price tag on this stuff, in human capital, data, and hardware, is high enough to preclude that sort of “perfect competition” environment.
I feel like we live in that perfect-competition environment right now, though. Inference is mostly commoditized, and it's a race to the bottom on price and latency. I don't think any of the big providers are making super-normal profit, and they are probably discounting inference for access to data/users.
Only because everyone believes it's a winner-takes-all game, and this perfect competition will only last for as long as the winner hasn't come out on top yet.
Everyone always thinks this. At least in big tech, I've never heard a PM or exec say a market is not winner-take-all. It's some weird corpo grift lang that nothing is worth doing unless it's winner-take-all.
Meta's primary business is capturing attention and selling some of that attention to advertisers. They do this by distributing content to users in a way that maximizes attention. Content is a complement to their content distribution system.
LLMs, along with image and video generation models, are generators of very dynamic, engaging and personalised content.
If OpenAI or anyone else wins a monopoly there, it could be terrible for Meta's business. Commoditizing it with Llama, and at the same time building internal capability and a community for their LLMs, was solid strategy from Meta.
So, imagine a world where everyone but Meta has access to generative AI.
There's two products:
A) (Meta) Hey, here are all your family members and friends, you can keep up with them in our apps, message them, see what they're up to, etc...
B) (OpenAI and others) Hey, we generated some artificial friends for you! They will write messages to you every day, almost like a real human! They also look like this (cue AI-generated profile picture). We will post updates on the imaginary adventures we come up with, written by LLMs. We will simulate a whole existence around you, "age" like real humans, and we might even get married to each other and have imaginary babies. You could attend our virtual generated wedding online, using the latest technology, and you can send us gifts and money to celebrate these significant events.
A continuous stream of monetizable live user data?
The entire point of Meta owning everything is that it wants as much of your data stream as it can get, so it can then sell more ad products derived from that.
If much of that data begins going off-Meta, because someone else has better LLMs and builds them into products, that's a huge loss to Meta.
Meta's play is to make sure there isn't an obvious superiority to one company's closed LLM -- because that's what would drive customers to choosing that company's product(s).
If LLM effectiveness is all about the same, then other factors dominate customer choice.
Like which (legacy) platforms have the strongest network effects. (Which Meta would be thrilled about)
I think it's about sapping as much user data from competitors as possible. A company seeking to use an LLM has a choice between OpenAI, LLaMA, and others. If they choose LLaMA because it's free and host it themselves, OpenAI misses out on training data and other data like that.
Well is the loss of training data from customers using self-hosted Llama that big a deal for OpenAI or any of the big labs at this point? Maybe in late-2022/early-2023 during the early stages of RLHF'd mass models but not today I don't think. Offerings from the big labs have pretty much settled into specific niches and people have started using them in certain ways across the board. The early land grab is over and consolidation has started.
There are plenty of companies that don't immediately qualify as "the bad guys".
For instance, among all the companies I've interviewed with or have friends working at that develop tech: some build and sell furniture. Some are your electricity provider or transporter. Some are building inventory management systems for hospitals and drugstores. Some develop a content management system for a medical dictionary. The list is long.
The overwhelming majority of companies are pretty harmless and ethically mundane. They may still get involved in bad practices, but that's not inherent to their business. The hot tech companies may be paying more (blood money, if you ask me), but you have other options.
In my head at least, Bluesky is way closer to "the bad guys". I don't trust them at all; pretty sure that, in spite of what they say, they're going to do the same sort of rug pull that Google did with its "don't be evil" assurances.
Google was bad the moment it chose its business model. See The Age of Surveillance Capitalism for details. Admittedly there was a nice period after it chose its model when it seemed good because it was building useful tools and hadn't yet accrued sufficient power / market share for its badness to manifest overtly as harm in the world.
I would normally agree, but we're in this instance talking about the company that made PyTorch and played an instrumental role in proliferating usable offline LLMs.
If you can make that algebra add up to "bad guy" then be my guest.
I wouldn't call mass piracy [0], for their own competitive gain, a "good" act. Especially when it seems they knew they were doing the wrong thing - and that they knew the copyright complaints have grounds.
> The problem is that people don’t realize that if we license one single book, we won’t be able to lean into fair use strategy.
They're involved in genocide and enable near-global tyranny through their surveillance and manipulation. There are no excuses for working for or otherwise enabling them.
Why not? Current open models are more capable than the best models from 6 months back. You have a choice to use a model that is 6 months old - if you still choose to use the closed version that’s on you.
Most of Meta's models have not been released as open source. Llama was a fluke, and it helps to commoditize your complement when you're not the market leader.
There is no good or open AI company of scale yet, and there may never be.
A few that contribute to the commons are DeepSeek and Black Forest Labs. But they don't have the same breadth and budget as the hyperscalers.
Llama is not open source. It is at best weights available. The license explicitly limits what kind of things you are allowed to use the outputs of the models for.
Yup. But that being said, Llama is GPLv3 whether Meta likes it or not. Same as ChatGPT and all the others. ALL of them can perfectly reproduce GPLv3-licensed works and data, making them derivative works, and the license is quite clear on that matter. In fact, up until recently you could get ChatGPT to info-dump all sorts of things with that argument, but now when you try you will hit a network error, and afterwards it seems something breaks and it goes back to parroting a script about how it's under a proprietary license.
Yes. Making training obey copyright is a big coordination problem that requires copyright holders to band together to sue Meta (and prove they broke copyright, which is not something that has been proven before for LLMs).
Whereas Meta suing you into radioactive rubble is straightforward.
I don't at all disagree with you, but at the kind of money you'd be making at an org like OAI, it's easy to envision there being a ceiling, past which the additional financial compensation doesn't necessarily matter that much.
The problem with the argument is that most places saying this are paying more like a sub-basement, not that there can't genuinely be more important things.
That said, Sam Altman is also the guy who stuck nondisparagement terms into employees' equity agreements... and in that same vein, framing poaching as "someone has broken into our home" reads like cult language.
We shouldn’t even be using the offensive word “poaching.” As an employee, I am not a deer or wild boar owned by a feudal lord by working for his company. And another employer isn’t some thief stealing me away. I have agency and control over who I enter into an employment arrangement with!
Could Facebook hire away OpenAI people just by matching their comp? Doubtful. Facebook is widely hated and embarrassing to work at. Facebook has to offer significantly more.
And if someone at OpenAI says hey Facebook just offered me more money to jump ship, that's when OpenAI says "Sorry to hear, best of luck. Seeya!"
In this scenario, you're only underpaid by staying at OpenAI if you have no sense of shame.
> Facebook is widely hated and embarrassing to work at.
Not sure it's widely hated (disclaimer: I work there), despite all the bad press. The vast majority of people I meet respond with "oh how cool!" when they hear that someone works for the company that owns Instagram.
"Embarassing to work at" - I can count on one hand the number of developers I've met who would refuse to work for Meta out of principle. They are there, but they are rarer than HN likes to believe. Most devs I know associate a FAANG job with competence (correctly or incorrectly).
> Could Facebook hire away OpenAI people just by matching their comp?
My guess is some people might value Meta's RSUs which are very liquid higher than OAI's illiquid stocks? I have no clue how equity compensation works at OAI.
Within my (admittedly limited) social circle of engineers/developers there is consensus that working at Facebook is pretty taboo. I’ve personally asked recruiters to not bother.
Honestly I’d be happy to work at any FAANG. Early FB in particular was great in terms of keeping up with friends.
I’ve only interviewed with Meta once and failed during a final interview. Aside from online dating and defense I don’t have any moral qualms regarding employment.
My dream in my younger days was to hit 500k tc and retire by 40. Too late now
By defense do you mean weapons development, or do you mean the entire DoD-and-related contractor system, including like tiny SBIR-chasing companies researching things like, uh
"Multi-Agent Debloating Environment to Increase Robustness in Applications"
Which was totally not named in a backronym-gymnastics way of remembering the lead researcher's last vacation destination or hometown or anything, probably.
I can't explain why, but I don't think money is it. Nor can a new project or whatever be it either. It's just too small of a value proposition when you are already at OpenAI making banger models used by the world.
According to reports, the comp packages were in the hundreds of millions of dollars. I doubt anyone but execs are making that kind of money at OpenAI; it's the sort of money you hope for from a successful exit after years of effort. I don't blame them for jumping ship.
> Doubtful. Facebook is widely hated and embarrassing to work at. Facebook has to offer significantly more.
I'm at a point in my career and life, at 51, where I wouldn't work for any BigTech company (again) even if I made twice what I make now. Not that I ever struck it rich, but I'm doing okay. Yes, I've turned down overtures from GCP, Azure, etc.
But I did work for AWS (ProServe) from 46 to 49, remotely, knowing going in that it was a toxic shit show, for both the money and the niche I wanted to pivot to (cloud consulting). I knew it would open doors, and it has.
If I were younger and still focused on money instead of skating my way to retirement working remotely, doing the digital nomad thing off and on, etc., I would have no moral qualms about grinding leetcode and exchanging my labor for as much money as possible at Meta. No one is out here feeding starving children or making the world a better place working for a for-profit company.
My “mission” would be to exchange my labor for as much money as possible, and I tell all of the younger grads the same thing.
yeah, I used to work in the medical tech space. They love to tell you how much you should be in it for the mission, and that's why your pay is 1/3 what you could make at FAANG... of course, when it came to our sick customers, they needed to pay market rates.
There are a couple of ways to read the "coup" saga.
1) Altman was trying to raise cash so that OpenAI would be the first, best, and last to get AGI. That required structural changes before major investors would put in the cash.
2) Altman was trying to raise cash and saw an opportunity to make loads of money
3) Altman isn't the smartest cookie in the jar, and was persuaded by potential/current investors that changing the corp structure was the only way forward.
Now, what were the board's concerns?
The publicly stated reason was a lack of transparency. Now, to you and me, that sounds a lot like lying. But where did it occur, and what was it about? Was it about the reasons for the restructure? Was it about the safeguards that were offered?
The answer to the above shapes the reaction I feel I would have as a missionary.
If you're a missionary, then you would believe that the corp structure of OpenAI was the key thing stopping it from pursuing "damaging" tactics. Allowing investors to dictate oversight rules undermines that significantly, and allows short-term gain to come before long-term and short-term safety.
However, I was bought out by a FAANG, one I swore I'd never work for, because they are industrial-grade shits. Yet here I am many years later, having profited considerably from working at said FAANG. Turns out I have a price, and it wasn't that much.
I think building super intelligence for the company that owns and will deploy the super intelligence in service of tech's original sin (the algorithmic feed) is 100x worse than whatever OpenAI is doing, save maybe OpenAI's defense contract, which I have no details about.
Meta will try to buoy this by open-sourcing it, which, good for them, but I don't think it's enough. If Meta wants to save itself, it should re-align its business model away from the feeds.
In that way, as a missionary chasing super intelligence, I'd prefer OpenAI.
*because I don't have an emotional connection to OpenAI's changing corporate structure away from being a non-profit:
I saw the discussion as whether OpenAI is on a better moral ground than Meta, so this was my angle.
On where the moral burden lies in your example, I'd argue we should follow the money and see what has the most impact on that online gambling company's bottom line.
Inherently, that could have the most impact on what happens when that company succeeds: if online gambling companies become OpenAI's biggest clients, it wouldn't be surprising for OpenAI to put more and more weight on being well suited to online gambling companies.
Does AWS get specially impacted by hosting online gambling services ? I honestly don't expect them to, not more than community sites or concert ticket sellers.
There is no world in which online gambling beats other back-office automation in pure revenue terms. I'm comfortable saying that OpenAI would probably have to spend more money policing to make sure their APIs aren't used by gambling companies than they'd make off of them. Either way, these are all imagined horrors, so it is difficult to judge.
I am judging the two companies for what they are, not what they could be. And as it is, there is no more damaging technology than Meta's various algorithmic feeds.
> There is no world in which online gambling beats other back-office automation in pure revenue terms.
A massive share of Apple's revenue is from in-app purchases, which are mainly games, and online betting has also entered the picture. We had Tim Cook on the stand explaining that they need that money and can't let Epic open that gate.
I think we're already there in some form or another, the question would be whether OpenAI has any angle for touching that pie (I'd argue no, but they have talented people)
> I am judging the two companies for what they are, not what they could be
Thing is, AI is mostly nothing right now. We're only discussing it because of its potential.
My point exactly. The App Store has no play in back-office automation, so the comparison doesn't make sense. AFAICT, OpenAI is already making billions on back-office automation. I just came from a doctor's visit where she was using some medical-grade ChatGPT wrapper to transcribe my medical conversation, meanwhile I fight with Instagram for the attention of my family members.
AI is already here [1]. Could there be better owners of super intelligence? Sure. Is OpenAI better than Meta? 100%.
> I think building super intelligence for the company that owns and will deploy the super intelligence in service of tech's original sin (the algorithmic feed) is 100x worse than whatever OpenAI is doing,
OpenAI announced in April they'd build a social network.
I think at this point it barely matters who does it; the ways in which you can make huge amounts of money from this are limited, and all the major players are going to make a dash for it.
Like I told another commenter, "I am judging the two companies for what they are, not what they could be."
I'm sure Sam Altman wants OpenAI to do everything, but I'm betting most of the projects will die on the vine. Social networks especially, and no one's better than Meta at manipulating feeds to juice their social networks.
Suggesting that the AI doom crowd is building up a narrative for Altman is sort of like saying the hippies protesting nuclear weapons are in bed with the arms makers because they're hyping up the destructive potential of hydrogen bombs.
That analogy falls flat. For one, we have seen the destructive power of hydrogen bombs through nuclear tests. Nuclear bombs are a proven, real threat that exists now. AGI is the boogeyman under the bed that somehow ends up never being there when you look for it.
If you convince people that AGI is dangerous to humanity and inevitable, then you can force people to agree with outrageous, unnecessary investments to reach the perceived goal first. This exactly happened during the Cold War when Congress was thrown into hysterics by estimates of Soviet ballistic missile numbers: https://en.wikipedia.org/wiki/Missile_gap
Chief AI doomer Eliezer Yudkowsky's latest book on this subject is literally called "If Anyone Builds It, Everyone Dies". I don't think he's secretly trying to persuade people to make investments to reach this goal first.
This reminds me a lot of climate skeptics pointing out that climate researchers stand to make money off books about climate change.
Selling AI doom books nets considerably less money than actually working on AI (easily an order of magnitude or two). Whatever hangups I have with Yudkowsky, I'm very confident he's not doing it for the money (or even prestige; being an AI thought leader at a lab gives you a built-in audience).
Please point me to the labs who are hiring non technical "thought leaders" so I can see what opportunities Yudkowsky turned down to go write books instead.
The inverse is true, though: climate skeptics are oftentimes paid by the (very rich) petrol lobby to espouse skepticism. It's not an asinine attack, just an insecure one from an audience that also overwhelmingly accepts money in exchange for astroturfing opinions. The clear fallacy in their polemic is that ad-hominem attacks don't address the point people care about. It's a distraction from global warming, which is the petrol lobby's end goal.
Yudkowsky's rhetoric is sabotaged by his ridiculous forecasts, which present zero supporting evidence for his claims. It's the same broken shtick as Cory Doctorow or Vitalik Buterin: grandiose observations that resemble fiction more than reality. He could scare people if he demonstrated causal proof that any of his claims are even possible. Instead he uses this detachment to create nonexistent boogeymen for his foreign policy commentary that would make Tom Clancy blush.
He absolutely is. Again, refer to the nuclear bomb and the unconscionable capital that was invested as a result of early successes in nuclear tests.
That was an actual weapon capable of killing millions of people in the blink of an eye. Countries raced to get one so fast that it was practically a nuclear Preakness Stakes for a few decades there. By casting AI as a doomsday weapon, you are necessarily begging governments to attain it before terrorists do. Which is a facetious argument when AI has yet to prove it could kill a single person by generating text.
Edward Teller worried about the possibility that the Trinity nuclear test might start a chain reaction with the nitrogen in the Earth's atmosphere, enveloping the entire planet in a nuclear fireball that destroyed the whole world and all humans along with it. Even though this would have meant that the bomb would have had approximately a billion times more destructive power than advertised, and made it far more of a doomsday weapon, I think it would also not have been an appealing message to the White House. And I don't think that realization made anyone feel it was more urgent to be the first to develop a nuclear bomb. Instead, it became extremely urgent to prove (in advance of the first test!) that such a chain reaction would not happen.
I think this is a pretty close analogy to Eliezer Yudkowsky's view, and I just don't see how there's any way to read him as urging anyone to build AGI before anyone else does.
When people explicitly say "do not build this, nobody should build this, under no circumstances build this, slow down and stop, nobody knows how to get this right yet", it's rather a stretch to assume they must mean the exact opposite, "oh, you should absolutely hurry be the first one to build this".
> By casting AI as a doomsday weapon, you are necessarily begging governments to attain it before terrorists do.
False. This is not a bomb where you can choose where it goes off. The literal title of the book is "if anyone builds it, everyone dies". It takes a willful misinterpretation to imagine that that means "if the right people build it, only the wrong people die".
If you want to claim that the book is incorrect, by all means attempt to refute it. But don't claim it says the literal opposite of what it says.
The "this is our best product yet" to "this is an absolute flop" pipeline has forced HN into absolute denial over the "innovation" their favorite company is capable of.
I'm not very informed about the coup -- but doesn't it just depend on what side most of the employees sat/sit on? I don't know how much of the coup was just egos or really an argument about philosophy that the rank and file care about. But I think this would be the argument.
They didn't need pressuring. There was enough money involved that was at risk without Sam that they did what they thought was the best way to protect their nest eggs.
The thing where dozens of them simultaneously posted “OpenAI is nothing without its people” on Twitter during the coup was so creepy, like actual Jonestown vibes. In an environment like that, there’s no way there wasn’t immense pressure to fall into line.
That seems like kind of an uncharitable take when it can otherwise be explained as collective political action. I’d see the point if it were some repeated ritual but if they just posted something on Twitter one time then it sounds more like an attempt to speak more loudly with a collective voice.
A missionary is a member of a religious group who is sent into an area in order to promote its faith or provide services to people, such as education, literacy, social justice, health care, and economic development. - https://en.wikipedia.org/wiki/Missionary
Post coup, they are both for-profit entities.
So the difference seems to be that when meta releases its models (like bibles), it is promoting its faith more openly than openai, which interposes itself as an intermediary.
Not to mention, missionaries are exploitative. They're trying to harvest souls for God or (failing the appearance of God to accept their bounty) to expand the influence of their earthbound church.
Bottom line, "But... but I'm like a missionary!" isn't my go-to argument when I'm trying to convince people that my own motives are purer than my rival's.
> “I have never been more confident in our research roadmap,” he wrote. “We are making an unprecedented bet on compute, but I love that we are doing it and I'm confident we will make good use of it. Most importantly of all, I think we have the most special team and culture in the world. We have work to do to improve our culture for sure; we have been through insane hypergrowth. But we have the core right in a way that I don't think anyone else quite does, and I'm confident we can fix the problems.”
tldr: knife fights in the hallways over the remaining lifeboats.
They didn't mean it as a pun, but understanding it as a pun helps understand the situation.
In religions, missionaries are those people who spread the word of god (gospel) as their mission in life for a reward in the afterlife. Obviously, mercenaries are paid armies who are in it for the money and any other spoils of war (sex, goods, landholdings, etc.)
So I guess he's trying to frame it as them being missionaries for an open, accepting, and free Artificial Intelligence, and to frame Meta as the guys who are only in it for the money and other less savory reasons. Obviously, only true disciples would believe such framing.
Specifically, Catholic missionaries indoctrinating indigenous cultures into their church's imaginary sexual hangups. All other positions were considered sinful.
Again, not a label I'd self-apply if I wanted to take the high road.
Sam vs Zuck... tough choice. I'm rooting for neither. Sam is cleverly using words here to make it seem like OpenAI are 'the good guys' but the truth is that they're just as nasty and power/money hungry as the rest.
Sam Altman literally casts himself as a God, apparently, and that's somehow to be taken as an indictment of his rivals. Maybe it's my Gen X speaking, but that's CEO bubblespeak for "OpenAI is fucked, abandon ship".
Pretty telling that OpenAI only now feels it has to reevaluate compensation for researchers, while just weeks ago it spent $6.5 billion to hire Jony Ive. Maybe he can build your superintelligence for you.
Just looked it up; it looks like they bought or merged with a company he worked at or owned part of, at a valuation of $6.5 billion. Not sure about the details, e.g. how much of that he gets.
Do I "poach" a stock when I offer more money for it than the last transaction value?
"Poaching" employees is just price discovery by market forces. Sounds healthy to me. Meta is being the good guys for once.
I think it’s a matter of style or finesse. If you can make it look good, even breaking the rules is socially acceptable, because a higher order desire is to create conditions in individuals where they break unjust rules when the greater injustice would be to censor yourself to comply with the rules in a specific case.
Good artists copy, great artists steal.
Good rule followers follow the rules all the time. Great rule followers break the rules in rare isolated instances to point at the importance of internalizing the spirit that the rules embody, which buttresses the rules with an implicit rule to not follow the rules blindly, but intentionally, and if they must be broken, to do so with care.
I can fully believe one can be funny in a way that isn’t validated or understood, or even perceived as humorous. I’m not sure HN is a good bellwether for comedic potential.
> If you don’t adhere to the guidelines we’ll send mean and angry emails to dang.
That’s so weird, you’re on! That makes two of us! When I don’t adhere to the guidelines, I also send mean and angry emails to dang. Apologies in advance, dang.
I'm pretty sure Sam Altman's only mission in life is to be as personally wealthy as Mark Zuckerberg. Is that mission really supposed to inspire undying loyalty and insane workloads from OpenAI staffers?
There has yet to be a value OpenAI originally claimed to have that has lasted a second longer than there was profit motive to break it.
they went from open to closed.
they went from advocating UBI to for-profit.
they went from pacifist to selling defense tech.
they went from a council overseeing the project to a single man in control.
and that's fine, go make all the money you can, but don't try to do this sick act where you try to convince people to thank you for acting in your own self-interest.
The word "beat" and "mercenaries" are also quite important here -- to me, this is Altman's way of saying "you losers who left OpenAI, you will pay a steep price, because we will mess with you really deeply". The threat to Meta is just a natural consequence of that, to the extent that Meta clings onto said individuals.
Sam Altman is not a bit different from Mark Zuckerberg. His mission is to make money and gather as much information about individuals to process, to be used for his benefit; all the rest is just blah blah.
It'll be a sad type of fun watching him become another disgusting and disconnected bazillionaire. Remember when Mark was a Honda Fit-driving, Palo Alto-based founder focused on 'connecting people' and building a not-yet-profitable 'social network' to change the world?
This is a repeat of the fight for talent that always happens with these things. It's all mercenary - it's all just business. Otherwise they'd remain an NGO.
I can't help but think that it would have been a much better move for him to get fired from OpenAI. Allow that to do its own thing, and start other ventures with a clean reputation and millions instead of billions in the bank.
“College” is underselling it. He went to Harvard. He sold a friendly, down-to-earth image early on and people bought it. But don't forget he went to Harvard.
Or it turns out that people don't change, as explored in the entirely fictitious but very enjoyable film The Social Network. All those steps, even the horny college nerd, were facades, and the real core of his character is naked ambition. He will warp himself into any shape in order to pursue wealth and power. To paraphrase Robert Caro, power does not corrupt, it reveals.
I think half of him truly believes that his work ultimately will benefit humanity as a whole and half of him is a cynical bastard like the rest of them.
Ultimately, he’ll just realize that humanity doesn’t give a fuck, and that he’s in it for himself only.
And the typical butterfly-to-caterpillar transition will be complete.
Well, a lot of AI bros think that AI can generate novel solutions to all of the world's problems; they just need more data and processing power. I.e., the AI God (all-knowing), which, when you take a step back, is utter lunacy. How can LLMs generate solutions to climate change if they're predictive models?
All of this is to say, they delude themselves that the future of humanity needs "AI" or we are doomed. Ironically, the creation and expansion of LLMs has drastically increased the power usage of humanity, to its own detriment.
I've seen paying people too much completely erode the core of teams. It's really hard to convince yourself to work 60 hour weeks when you have generational FU$ and a family you love.
I wouldn't describe a team full of people who don't want to work 60-hour weeks as "eroded", cus like... that's six 10-hour days, leaving incredibly little time for family, chores, unwinding, etc. Once in a while maybe, but sustained, that'll just burn people out.
And also by that logic, is every executive paid $5M+/yr in every company, or every person who's accumulated say $20M, also eroding their team? Or is that only applied to someone who isn't managing people, for some reason?
There's limited or no evidence of this in other domains where astonishing pay packages are used to assemble the best teams in the world (e.g., sports).
There's vast social honour and commensurate status attached to activities like being a sports/movie star. Status that can easily be lost, and cannot be purchased for almost any amount of money. Arguably that status is a greater motivator than the financial reward (e.g., see the South Korean idol system). It's certainly not going to be diminished as a motivator by financial reward. There's no equivalent for AI researchers. At best the very best may win the acclaim of their peers and a Nobel Prize. It's not a remotely equivalent level of celebrity / access to all the treasures the world can provide.
Top AI researchers are about the closest thing to celebrity status that has ever been attainable for engineering / CS folks outside of winning a Nobel Prize. Of course, the dopamine cycle and public recognition and adoration are nowhere near the same level as professional sports, but someone being personally courted by the world's richest CEOs handing out $100M+ packages is still decidedly not experiencing anything close to a normal life. Some of these folks still had their hiring announced on the front pages of the NYT and WSJ -- something normally reserved for top CEOs or, yes, sports stars and other legitimate celebrities.
Either Meta makes rapid progress on frontier-level AI in the next year or it doesn't -- there's definitely a feedback loop that's measured in tangible units of time. I don't think it's unreasonable to assume that when Zuck personally hires you at this level of compensation, there will be performance expectations and you won't stick around for long if you don't deliver. Even in top-tier sports, many underperformers manage to stick around for a couple years or even a half-decade at seven or eight figure compensation before being shown the door.
In reality, all frontier models will likely progress at nearly the same pace, making it difficult to disaggregate this team's performance from the others'. More importantly, it'll be nearly impossible to disaggregate any one contributor's performance from the others', making it basically impossible to enforce accountability without many, many repetitions to eliminate noise.
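To make the noise point concrete, here's a toy simulation (my own sketch, nothing from the thread; all numbers are made up): two researchers with a small true skill gap, observed through evaluation cycles whose noise is ten times that gap. Even a hundred cycles misrank them regularly, and a frontier lab gets only a handful of cycles per year.

    # Toy sketch (hypothetical numbers): how many noisy evaluation cycles
    # does it take to reliably rank two contributors when team-level noise
    # dwarfs their individual skill difference?
    import random

    def rank_accuracy(skill_a=1.05, skill_b=1.00, noise=0.5, trials=2000):
        """For several observation counts, report how often A's sample mean
        correctly comes out above B's."""
        for n in (1, 10, 100, 1000):
            wins = 0
            for _ in range(trials):
                mean_a = sum(random.gauss(skill_a, noise) for _ in range(n)) / n
                mean_b = sum(random.gauss(skill_b, noise) for _ in range(n)) / n
                wins += mean_a > mean_b
            print(f"{n:5d} cycles: A correctly ranked above B in {wins / trials:.0%} of runs")

    rank_accuracy()
    # With noise ~10x the skill gap, even 100 cycles misranks A and B roughly
    # a quarter of the time; release cycles are measured in months, so you
    # never get enough repetitions to pin performance on one person.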
> Even in top-tier sports, many underperformers stick around for a couple years or a half-decade at seven or eight figure compensation before being shown the door.
This can happen in the explicit hope that their performance improves, not because it's unclear whether they are performing, and generally not over lapses in contract.
There are plenty of established performance management mechanisms to determine individual contributions, so while I wouldn't say that's a complete nonissue, it's not a major problem. The output of the team is more important to the business anyway (as is the case in sports, too).
And if the team produces results on par with the best results being attained anywhere else on the planet, Zuck would likely consider that a success, not a failure. After all, what's motivating him here is that his current team is not producing that level of results. And if he has a small but nonzero chance of pushing ahead of anyone else in the world, that's not an unreasonable thing to make a bet on.
I'd also point out that this sort of situation is common in the executive world, just not in the engineering world. Pretty much every top-tier executive at top-tier companies is making seven or eight figures as table stakes. There's no evidence I'm aware of that this reduces executive or executive team performance. Really, the evidence is the opposite -- companies continue paying more and more to assemble the best executive teams because they find it's actually worth it.
> There are plenty of established performance management mechanisms to determine individual contributions
"Established" != valid, and literally everyone knows that.
The executives you reference are never ICs and are definitionally accountable to the measured performance of their business line. These are not superstar hires the way that AI researchers (or athletes) are. The body in the chair is totally interchangeable so long as the spreadsheet says the right number, and you expect the spreadsheet performance to be only marginally controlled by the particular body in the chair. That's not the case with most of these hires.
I'd say execs getting hired for substantial seven- and eight-figure packages, with performance-based bonuses / equity grants and severance deals, absolutely do have a lot more in common with superstars than with most other professionals. And, just like superstars, they're hired based off public reputation more than anything else (just the sphere of what's "public" is different).
It's false that execs are never ICs. Anyone who's worked in the upper echelon of corporate America knows that. Not every exec is simply responsible 1:1 for a business line. Many are in transformation or functional roles with very complex responsibilities across many interacting areas. Even when an exec is responsible for a business line in a 1:1 way, they are often only responsible for one aspect of it (e.g., leading one function); sometimes that is true all the way up to the C-suite, with the company having literally only a single exception (e.g., Apple). In those cases, exec performance is not 1:1 tied to the business they are attached to. High-performing execs in those roles are routinely "saved" and banked for other roles rather than being laid off / fired in the event their BU doesn't work out. Low-performing execs in those roles are of course very quickly fired / re-orged out.
If execs really were so replaceable and it's just a matter of putting the right number in a spreadsheet, companies wouldn't be paying so much money for them. Your claims do not pass even the most basic sanity check. By all means, work your way up to the level we're talking about here and then report back on what you've learned about it.
Re: performance management and "everyone knowing that", you're right of course -- that's why it's not an interesting point at all. :) I disagree that established techniques are not valid -- they work well and have worked for decades with essentially no major structural issues, scaling up to companies with 200k+ employees.
I did not say their performance is 1:1 with a business line, but great job tearing down that strawman.
I said they are accountable to their business line -- they own a portfolio and are accountable for that portfolio's performance. If the portfolio does badly, it means nearly by definition that the executive is doing badly. Like an athlete, that doesn't mean they're immediately put to the streets, but it also is not ambiguous whether they are performing well or not.
Which also points to why performance management methods are not valid as a high-sensitivity, high-specificity measure of an individual executive's actual personal performance: there are obviously countless external variables that bear on the outcome of a portfolio. But nonetheless, for the business's purposes, it doesn't matter, because the real purpose of performance management methods is to provide a quasi-objective rationalization for personnel decisions that are actually made elsewhere.
Perhaps you can mention which performance management methods you believe are valid (high-specificity and high-sensitivity measures of an individual's personal performance) in AI R&D?
"Pretty much every top-tier executive at top-tier companies is making seven or eight figures as table stakes". In this group, what percentage are ICs? Sure there are aberrational celebrity hires, of course, but what you are pointing to is the norm, which is not celebrity hires doing IC work.
> If execs really were so replaceable... companies wouldn't be paying so much money for them
High-level executives within the same tier are largely substitutable - any qualified member of this cohort can perform the role adequately. However, this is still a very small group of people ultimately responsible for huge amounts of capital and thus collectively can maintain market power on compensation. The high salaries don't reflect individual differential value. Obviously there are some remarkable executives and they tend to concentrate in remarkable companies, by definition, and also by definition, the vast majority of companies and their executives are totally unremarkable but earn high salaries nonetheless.
I disagree. If you want similarly tight feedback loops on performance, pair programming/TDD provides them. And even if you hate real-time collaboration or are working in different time zones, delightful code reviews on small slices get pretty close.
Is working more than 60 hours a week necessary to have a good team? Sure, having FU$, as you put it, removes the necessity to keep the scale of your work-life balance tipped to the benefit of your employer. But again, a good work-life balance should not imply erosion of the team.
Working more than 50 hours a week is counter-productive, and even working 50 hours a week isn't consistently more productive than working 40 hours a week.
It is very easy to mistake _feeling_ productive and close with your coworkers for _being_ productive. That's why we can't rely on our feelings to judge productivity.
That's why CEO pay is so low. They take honor in leadership and across the board just accept a menial compensation package. Why work hard, 60 hours a week even, if you get paid FU$? This is why boards limit comp packages so aggressively.
I was ready to downvote you with examples of how $100M+ net worth individuals are probably the hardest workers (or were, to get there), just like most of the people replying to you, but your `and a family you love` tripped me up. I sorta agree... if you want to maximize time with family and you have FU$, would you really work that hard?
I am not saying they don't love their family, exactly... but it's not necessarily a priority over glory, more money, or being competitive. And if the relationship is healthy and built on solid foundations, usually the partner knows what they're getting into and accepts the other person (children, on the other hand, had no choice).
It's a weird take to tie this up with team morale, though.
The game theoretic aspect of this is quite interesting. If Meta will make OpenAI's model improvements open source, then every poached employee will be worth significantly less as time goes on. That means it's in the employees' best interest to leave first, if their goal is to maximize their income.
I.e., Zuck has no intention of continuing to open up the models he creates. Thus he knows he can spend the money to get the talent, because he has every intention of making it back.
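A back-of-the-envelope way to see the first-mover logic (purely my own toy model; the decay rate and the $100M figure floating around the press are just placeholders): if each departure, and each open-weights release, erodes the un-leaked knowledge the next defector can sell, offers shrink geometrically and waiting strictly loses.

    # Toy model (hypothetical numbers): value of defecting nth vs. first,
    # if each earlier departure / open release erodes the remaining edge.
    def offer_value(position_in_queue, base_offer=100e6, decay=0.6):
        """Hypothetical offer for the nth researcher to leave."""
        return base_offer * (decay ** position_in_queue)

    for n in range(5):
        print(f"defector #{n + 1}: ${offer_value(n) / 1e6:.1f}M")
    # defector #1: $100.0M ... defector #5: $13.0M
    # Under these assumptions, everyone's dominant strategy is to leave
    # first -- which is itself destabilizing for OpenAI.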
How well was Zuck able to use his massive distribution channels to win in his cryptocurrency project, or the Metaverse after that?
Meta has become too fickle with new projects. To the extent that Llama can help them improve their core business, they should drive that initiative. But if they get sidetracked trying to build “AI friends” for all of their human users, they are just creating another “solution in search of a problem”.
I hope both Altman and Zuck become irrelevant. Neither seems particularly worthy of the power they have gained, and neither is willing to show a spine in the face of government coercion.
That was immediately proven to be false, both by Meta leadership and the poached researchers themselves. Sam Altman just pulled the number out of his ass in an interview.
That's my point. The ones that left early got a large sum of money. The ones that leave later will get less. That would incentivize people to be the first to leave.
> OpenAI is the only answer for those looking to build artificial general intelligence
Let’s assume for a moment that OpenAI is the only company that can build AGI (a specious claim). Then the question I would have for Sam Altman: what is OpenAI’s plan once that milestone is reached, given his other argument:
> And maybe more importantly than that, we actually care about building AGI in a good way,” he added. “Other companies care more about this as an instrumental goal to some other mission. But this is our top thing, and always will be.
If building AGI is OpenAI’s only goal (unlike other companies), will OpenAI cease to exist once the mission is accomplished, or will a new mission be devised?
Exactly! The Microsoft-OpenAI agreement states that AGI is whatever makes them $100 billion in profits. Nothing in there about anything intelligence-related.
>The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits.
And in the meantime, their goal is clearly to make money off non-AGI AI.
I constantly get quasi-religious vibes from the current AI "leaders" (Altman, Amodei, and quite a few of the people who have left both companies to start their own). I never got those sort of vibes from Hinton, LeCun, or Bengio. The latest crop really does seem to believe that they're building some sort of "god" and that their god getting built first before one of their competitors builds a false god is paramount (in the literal meaning of the term) for the future of the human race.
What even is the monetization plan for AI? Seems like the cutting-edge tech becomes devalued to almost nothing a few months later, when a new open source model is released.
After spending so many billions on this stuff, are they really going to pay it all off selling API credits?
Yes, because the development of AGI doesn't automatically mean the end of capitalism. Feudalism, mercantilism, and the final form, capitalism, weren't overthrown by new technologies, and while AGI is certainly a very special new technology, so was the internet. It doesn't matter how special AGI is if it's controlled by one company under the mechanisms of a capitalist liberal democracy - it's not like the laws don't matter anymore, or the contracts, debts, allegiances.
What can AGI give us that would end scarcity, when our scarcity is artificial? New farming mechanisms so that nobody goes hungry? We already throw away most of our food. We don't lack food; our resource allocation mechanism (capitalism) just requires some people to be hungry.
What about new medicines? Magic new pills that cure cancer - why would these be given away for free when they can be sold, instead?
Maybe AGI will recommend the perfect form of fair and equitable governance! Well, it almost certainly will be a recommendation that strips some power from people who don't want to give up any power at all, and it's not like they'll give it up without a fight. Not that they'll need to fight - billionaires exist today and have convinced people to fight for them, against people's own self interest, somehow (I still don't understand this).
So, I'll modify Mark Fisher's quote - it's easier to imagine the creation of AGI than it is to imagine the end of capitalism.
>our resource allocation mechanism (Capitalism) just requires some people to be hungry
One of the observable features of capitalism is that there are no hungry people. Capitalism has completely solved the problem of hunger. People are hungry when they don't have capitalism.
>billionaires exist today and have convinced people to fight for them
People are usually fighting for themselves. It's just that billionaires are often not enemies of society, but a source of social well-being. Or, even more often, a side effect of social well-being. People fight for billionaires to protect social well-being, not to protect billionaires.
>it's easier to imagine the creation of AGI than it is to imagine the end of capitalism
There is no need to even imagine the end of capitalism; we see it all the time, since most of the world can hardly be called capitalist. And the less capitalism there is, the worse.
> One of the observable features of capitalism is that there are no hungry people. Capitalism has completely solved the problem of hunger. People are hungry when they don't have capitalism.
This is as fascinating to me as if someone walked up to me and said "Birds don't exist." It's a statement that's instantly, demonstrably wrong by simply turning and pointing at a bird, or in this case, by Googling "child hunger in the USA" and seeing a shitload of links demonstrating that 12.8% of US households are food insecure.
Or take the secondary point, that hunger exists only where there is no capitalism: demonstrably untrue, since the countries that ensure capitalism can continue to thrive by providing cheap labor have visible extreme hunger, such as India. India isn't capitalist? America isn't capitalist? Madagascar? Palestine?
> It's just that billionaires often are not enemies of society, but source of social well-being.
How can someone not be an enemy of society when they maintain artificial scarcity by hoarding such a massive portion of society's output, and then acting to hoard and concentrate our collective wealth even more into their own hands? Since when has "greed" not been a universally reviled trait?
> we see it all the time, most of the world can hardly be called capitalist. And the less capitalism there is, the worse.
I genuinely can't understand what you're seeing in the world to think the global economy is not capitalist in nature.
> seeing a shitload of links demonstrating that 12.8% of US households are food insecure.
This is definitely not a manipulation of statistics, and definitely not a trivialization of the food insecurity that is relevant to many parts of the world. And then they wonder why people choose to support billionaires instead of you lying cannibals.
> such as India
> Madagascar isn't capitalist? Palestine?
No? These countries have nothing to do with an economy built on the principles of the inviolability of private property and economic freedom. The USA has more socialism than these countries have capitalism.
> How can someone not be an enemy of society when they maintain artificial scarcity by hoarding such a massive portion of society's output
Because it is not the portion of society's output that matters, but the size of that output. What's the point of even distribution if the size of each share is not enough even to keep you from dying of starvation?
> Since when has "greed" not been a universally reviled trait?
The question is not whether greed is a reviled trait or not. Greed is a fact of human nature. The question is what this ineradicable human quality leads to in specific economic systems: to universal prosperity, as under capitalism, or to various abominations like mass starvation, as without it.
The profit cap was supposed to mean that being first to achieve AGI was the endgame, and it would ensure redistribution (though apparently with some kind of Altman tax through an early Worldcoin ownership stake). When they realized they wouldn't reach AGI with current funding, and they were so close to a $100 billion market cap that they couldn't entice new investors with a $100 billion profit cap, why didn't they set it to, say, $10 trillion instead of infinity? Because they are missionaries?
A leaked email from Ilya early on even said they never planned to open source stuff long term, it was just to entice researchers at the beginning.
The whole company is founded on lies, and Altman was even fired from YC over self-dealing or something, in what I think was a deleted YC blog post, if I remember right.
> OpenAI is a lot of things now, but before anything else, we are a superintelligence research company.
IMO, AGI is already a very nebulous term. Superintelligence seems even more hand-wavy. It might be useful to define and understand limits of "intelligence" first.
Superintelligence has always been rhetorical sleight of hand to equate "better than human" with "literally infinitely improving and godlike", in spite of optimization always leveling off eventually for one reason or another.
Hilarious seeing that he views it this way when his company is so very well known for taking (strong arguments say stealing) everything from everyone.
I'm noticing more and more lately that our new monarchs really do have broken thought patterns. They see their own abuse of others as perfectly OK but hilariously demand that people treat them fairly.
Small children learn things that these guys struggle to understand.
Sam comes across as an extremely calculating person to me. I'm not suggesting he's necessarily doing this for bad reasons, but it's very clear to me the public facing communications he makes are well considered and likely not fully reflective of his actual views and positions on things, but instead what he believes to be power maximising.
He's very good at creating headlines and getting people talking online. There's no doubt he's good at what he does, but I don't know why anyone takes anything he says seriously.
This interview with Karen Hao is really good (https://www.youtube.com/watch?v=8enXRDlWguU). She interviews people who have had 1-on-1 meetings with Sam, and they always say he aligned with them on everything, to the point where they don't actually know what he believes. He tailors his opinions to whoever he's talking to in order to build trust.
Anchoring the reader’s opinion by using the phrase “missionaries” is pure marketing. Missionaries don’t get paid huge dollars or equity; they do it because it’s a religion / a calling.
Ultimately why someone chooses to work at OpenAI or Meta or elsewhere boils down to a few key reasons. The mission aligns with their values. The money matches their expectations. The team has a chance at success.
The orthogonality is irrelevant because nobody working for OpenAI or Meta is a missionary.
The term comes from John Doerr https://www.youtube.com/watch?v=n6iwEYmbCwk . But Altman kicked most missionaries out during corporate turmoil in 2023, so not sure where this comes from.
>...on the one hand, the mercenaries: they have enormous drive, they're opportunistic; like Andy Grove, they believe only the paranoid survive; they're really sprinting for the short run. But that's quite different, I suggest to you, from the missionaries, who have passion, not paranoia, who are strategic, not opportunistic, and who are focused on the big idea and on partnerships. It's the difference between focusing on the competition or the customer. It's the difference between worshiping at the altar of founders or having a meritocracy where you get all the ideas on the table and the best ones win. It's the difference between being exclusively interested in the financial statements or also in the mission statements. It's the difference between being a loner on your own or being part of a team, between an attitude of entitlement and one of contribution, or, as Randy puts it, living a deferred life plan versus a whole life that at any given moment is trying to work. It's the difference between just making money (anybody who tells you they don't want to make money is lying) or making money and making meaning also. My bottom line: it's the difference between success, or success and significance.
The force has a light side and a dark side. Apparently Switzerland is so famously neutral because its national export was mercenaries. You can't take sides in wars if you want to sell soldiers to both sides...
But also I imagine that it helps when you wish to stay neutral if people are afraid of what you could do if you were directly involved in a conflict.
The Knights Templar (https://en.wikipedia.org/wiki/Knights_Templar) were kind of both, but modern AI is more mercenary, out to grab all the profits and become a monopoly.
Yeah, apparently being well fed, well paid, and extensively prepared helped? It's like these mercenaries were actually what you would call "professionals".
Actually, I think that people who do it for the love of the game are the true winners here, whether they work for a company or not. You can't beat intrinsic motivation.
Effectively kept secret and in the shadows by those working on it, until a world-altering public display makes it a hot, politically charged issue, one that hasn't cooled even 80 years later?
Edit: Honestly, I bet that "Altman", directed by Nolan's simulacrum and starring a de-aged Cillian Murphy (with or without his consent), will in fact deservedly win a few Oscars in 2069.
International co-operation to control development and usage. The decision to unleash AGI can only be made once. Making such a decision should not be done hastily, and definitely not for the reason of pursuing private profit. Such a decision needs input from all of humanity.
> International co-operation to control development and usage.
Non-starter. Why would you trust your adversary to "stay within the lanes"? The rational thing to do is to extend your lead as much as possible and sit at the top of the pyramid. The arms race is on.
With these things, the distrust is a feature, not a flaw. The distrust ensures you keep a close eye on each other. The collaboration means you're also physically and intellectually close. Secrets are still possible, but they're harder to keep because it's easier for things to slip.
It's rational only if you don't consider the risks of an actual superhuman AGI. Geopolitical issues won't matter if such a thing is released without making sure it can be controlled. These competition-based systems of ours will be the death of us. Nothing can be done wisely when competition forces our hand.
The disaster will happen if AGI is created and let loose too quickly, without considering all the effects it will have. That's less likely to happen when the number of people with that power is limited.
I want Sam to win more than I do Zuck, just based on the proven negativity generated by Meta. I don't want that individual or that company anywhere near additional capital, power, capability or influence.
The hypocrite who violates everyone else’s privacy to sell ads, or the scammer who collects eyeballs in exchange for cryptocurrency and whose “product” has been banned in multiple countries…
Yeah, there’s no good choice here. You should be rooting for neither. Best case scenario is they destroy each other with as little collateral damage as possible.
It became clear in 2024 and 2025 that they're all dangerous.
All these tech billionaires or pseudo-billionaires basically believe that an enlightened dictatorship is the best form of governance. And of course they ought to be the dictator, or on the board.
At the start of the “LLM boom” I was optimistic that OAI/Anthropic were finally in a position to unseat the Big 4 in at least this area. Now I’m convinced the only winners are going to be Google, Meta, and Amazon, and we are right back where we started.
I still have hope that Anthropic will win out over OpenAI.
But… Why put Meta in that group?
I see Apple, Google, Microsoft, and Amazon as all effectively having operating systems. Meta has none, and has failed to build one for cryptocurrency (Libra / Diem) or the metaverse.
Also, both Altman and Zuck leave a lot to be desired. Maybe not as much as Musk, but they both seem to be spineless against government coercion and neither gives me a sense that they are responsible stewards of the upside or downside risks of AGI. They both just seem like they are full throttle no matter the consequences.
What makes you think so? They got the chatgpt.com domain, and the product seems to be growing more than any other (check out app downloads: https://appmagic.rocks/top-charts/apps). They got the first-mover advantage, and as we know around here, that's a huuuge advantage.
Yes, I’ve never seen a more sociopathic company than Meta. They are so true to the cliched ethos of “you are the product.” Sickens me that society has facilitated the rise of such banality of evil
> Sickens me that society has facilitated the rise of such banality of evil
American society. These are uniquely products of the US, exported everywhere, and rightfully starting to get pushback. Unfortunately later than it should have happened.
I suppose that as an American taxpayer and American voter, he is responsible for as many ethnic cleansings as anyone else. Supposedly, Armenians leaving Nagorno-Karabakh is ethnic cleansing, and the US did give aid to Azerbaijan, so that makes Americans facilitators of ethnic cleansing, though admittedly so are the Canadians.
1) They are far from profitability.
2) Meta is aggressively making their top talent more expensive, and outright draining it.
3) Deepseek/Baidu/etc are dramatically undercutting them.
4) Anthropic and (to a lesser extent?) Google appear to be beating them (or, charitably, matching them) on AI's best use case so far: coding.
5) Altman is becoming less likeable with every unnecessary episode of drama, and OpenAI carries most of the stink from the initial (valid) grievance that "AI companies are stealing from artists". The endless hype and FUD cycles, going back to 2022, have worn industry people out, as has the flip-flop on "please regulate us".
6) Its original, core strategic alliance with Microsoft is extremely strained.
7) and, related to #6, its corporate structure is extremely unorthodox and likely needs to change in order to attract more investment, which it must (to train new frontier models). Microsoft would need to sign off on the new structure.
8) Musk is sniping at its heels, especially through legal actions.
Barring a major breakthrough with GPT-5, which I don't see happening, how do they prevail through all of this and become a sustainable frontier AI lab and company? Maybe the answer is they drop the frontier model aspect of their business? If we are really far from AGI and are instead in a plateau of diminishing returns that may not be a huge deal, because having a 5% better model likely doesn't matter that much to their primary bright spot:
Brand loyalty from the average person to ChatGPT, plus OpenAI successfully eating into Google's search market. Their numbers there have been truly massive from the beginning, and are, I think, the most defensible. Google AI Overviews continue to be completely awful in comparison.
They have the majority of the attention and market cap. They have runway. And that part is the most important thing. Others don't have the users to test grand developments against.
xAI has Elon's fortune to burn, and SpaceX to fund it.
Gemini has the ad and search business of Google to fund it.
Meta has the ad revenue of IG+FB+WhatsApp+Messenger.
Whereas OpenAI has $10 billion in annual revenue, but low switching costs for both consumers and developers using its APIs.
Staying at the forefront of frontier models means burning money like crazy. That requires OpenAI to raise round after round, whereas the tech giants can just draw on their existing fortunes.
Ok, others have more runway, and less research talent.
OpenAI has enough runway to figure things out and place themselves in a healthier position.
And come to think of it, losing a few researchers to other companies may not be so bad. Like you said, the others have cash to burn. They might spend that cash more liberally and experiment with bolder, riskier products, and either fail spectacularly or succeed exponentially. And OpenAI can still learn from it and benefit, even though it was never their cash.
The biggest problem OAI has is that they don't own a data source. Meta, Google, and X all have existing platforms for sourcing real time data at global scale. OAI has ChatGPT, which gives them some unique data, but it is tiny and very limited compared to what their competitors have.
LLMs trained on open data will regress because there is too much LLM generated slop polluting the corpus now. In order for models to improve and adapt to current events they need fresh human created data, which requires a mechanism to separate human from AI content, which requires owning a platform where content is created, so that you can deploy surveillance tools to correctly identify human created content.
> how do they prevail through all of this and become a sustainable frontier AI lab and company?
I doubt that OpenAI needs or wants to be a sustainable company right now. They can probably continue to drum up hype and investor money for many years. As long as people keep writing them blank checks, why not keep spending them? Best case they invent AGI, worst case they go bankrupt, which is irrelevant since it's not their own money they're risking.
If they can turn ChatGPT into a free cash flow machine, they will be in a much more comfortable position. They have the lever to do so (ads) but haven't shown much interest there yet.
I can't imagine how they will compete if they need to continue burning and needing to raise capital until 2030.
The interest and actions are there now: Hiring Fidji Simo to run "applications" strongly indicates a move to an ad-based business model. Fidji's meteoric rise at Facebook was because she helped land the pivot to the monster business that is mobile ads on Facebook, and she was supposedly tapped as Instacart's CEO because their business potential was on ads for CPGs, more than it was on skimming delivery fees and marked up groceries.
Good analysis. My counter is that OpenAI has one of the leading foundational models, while Meta, despite being a top-paying tech company, continued to release subpar models that don't come close to the other big three.
So, what happened? Is there something fundamentally wrong with the culture and/or infra at Meta? If it was just that Zuckerberg bet on the wrong horses to lead their LLM initiatives, what makes us think he got it right this time?
I think that leaks like this have negative information value to the public.
I work at OAI, but I'm speaking for myself here. Sam talks to the company, sometimes via slack, more often in company-wide meetings, all the time. Way more than any other CEO I have worked for. This leaked message is one part of a long, continuing conversation within the company.
The vast majority of what he and others say doesn't get leaked. So you're eavesdropping on a tiny portion of a conversation. It's impossible not to take it out of context.
What's worse, you think you learned something from reading this article, even though you probably didn't, making you more confident in your conclusions when you should be less confident.
I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are. It certainly has humbled my own reactions to news. (In this particular instance I don't think there's so much right and wrong but more that I think if you had actually been in the room for more of the conversation you'd probably feel different.)
The other side of it: some statements made internally can be really bad, but employees brush over them because they inherently trust the speaker to some degree, they have other material that better aligns with what they want to hear and latch onto that instead, and the current leaders' actions look fine enough to them, so they write off the bad parts as mere communication mishaps.
Worse: employees are often actively deceived by management. Their “close relationship” is akin to that of a farmer and his herd. Convinced they’re “on the inside” they’re often blind to the truth that’s obvious from the outside.
Or simply they don’t see the whole picture because they’re not customers or business partners.
I’ve seen Oracle employees befuddled to hear negative opinions about their beloved workplace! “I never had to deal with the licensing department!”
Okay, but I've also heard insiders at companies I've worked for completely overlook obvious problems and cultural/management shortcomings. "Oh, we don't have a low-trust environment, it's just growing pains. Don't worry about what the CEO just said..."
Like, seriously, I've seen first-hand how comments like this can be more revealing out of context than in context, because the context is all internal politics and spin.
Sneaky wording, but it seems like no: Sam has only talked about an "open weights" model so far, so most likely not "open source" by any existing definition of the word, but rather a custom "open-but-legal-dept-makes-us-call-it-proprietary" license. Slightly ironic, given the whole "most HN posters are confidently wrong" part right before ;)
Although I do agree with you overall: many stories are sensationalized, parts of stories always lack a lot of context, and many HN users comment about stuff they may not actually know much about, phrased in a way that makes it seem like they do.
Open weights is unobjectionable. You do get a lot.
It's nice to also know what the training data is, and it's even nicer to be aware of how it's fine-tuned etc., but at least you get the architecture and are able to run it as you like and fine tune it further as you like.
> but at least you get the architecture and are able to run it as you like and fine tune it further as you like.
Sure, that's cool and all, and I welcome it. But it's getting really tiresome to see huge companies, which probably depend on actual FOSS, constantly get it wrong, which devalues all the other FOSS work going on, since they wanna ride that wave instead of just being honest about what they're putting out.
If Facebook et al could release compiled binaries from closed source code but still call those binaries "open source", and call the entire Facebook "open source" because of that, they would. But obviously everyone would push back on that, because that's not what we know open source to be.
Btw, you don't get to "run it as you like": give the license + acceptable use policy a read-through, and then compare what you're "allowed" to do against actual FOSS licenses.
> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are.
Having been behind the scenes of an HN discussion about a security incident, with accusations flying about incompetent developers: the true story was that the lead developers knew of the issue, but it was not prioritised by management and was pushed down the backlog in favor of new (revenue-generating) features.
There is plenty of nuance to any situation that can't be known.
No idea if the real story here is better or worse than the public speculation though.
I too worked at a place where hot button issues were being leaked to international news.
Leaks were done for a reason: either because the leaker agreed with what was leaked, really disagreed with it, or wanted to feel big for being a broker of juicy information.
Most of the time the leaks were an attempt to stop something stupid from happening, or to highlight where upper management was choosing to ignore something for a gain elsewhere.
Other times it was because the person was being a prick.
Sure, it's a tiny part of the conversation, but in the end, if you've got to the point where your employees are pissed off enough to leak, that's the bigger problem.
> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are.
Some topics (and some areas where one could be an expert in) are much more prone to this phenomenon than others.
Just to give a specific example that suddenly comes to my mind: Grothendieck-style Algebraic Geometry is rather not prone to people confidently posting wrong stuff about on HN.
Generally (to abstract from this example [pun intended]): I guess topics that
- take an enormous amount of time to learn,
- where "confidently bullshitting" will not work because you have to learn some "language" of the topic very deeply
- where even a person with some intermediate knowledge of the topic can immediately detect whether you use the "'grammar' of the 'technical language'" very wrongly
are much more rarely prone to this phenomenon. It is no coincidence that in the last two points I make comparisons to (natural) languages: it is not easy to bullshit in a live interview that you know some natural language well if the counterpart has at least some basic knowledge of this natural language.
I think it's more the site's architecture that promotes this behavior.
In the offline world there is a big social cost to this kind of behavior. Platforms haven't been able to replicate it; instead they seem to promote and validate it. It feeds the self-esteem of these people.
It's hard to have an informed opinion on Algebraic Geometry (requires expertise) and not many people are going to upvote and engage with you about it either. It's a lot easier to have an opinion on tech execs, current events, and tech gossip. Moreover you're much more likely to get replies, upvotes, and other engagement for posting about it.
There's a reason politics and tech gossip are where most HN comments go these days. This is a pretty mainstream site.
> There's a reason politics and tech gossip are where most HN comments go these days. This is a pretty mainstream site.
HN is the digital water cooler. Rumors are a kind of social currency, in the capital sense, in that it can be leveraged and has a time horizon for value of exchange, and in the timeliness/recency biased sense, as hot gossip is a form of information that wants to be free, which in this context means it has more value when shared, and that value is tapped into by doing so.
This is a strangely defensive comment for a post that, at least on the surface, doesn't seem to say anything particularly damning. The fact that you're rushing to defend your CEO sort of proves the point being made, clearly you have to make people believe they're a part of something bigger, not just pay them a lot.
The only obvious critique is that clearly Sam Altman doesn't believe this himself. He is legendarily mercenary and self serving in his actions to the point where, at least for me, it's impressive. He also has, demonstrably here, created a culture where his employees do believe they are part of a more important mission and that clearly is different than just paying them a lot (which of course, he also does).
I do think some skepticism should be had around that view the employees have, but I also suspect that was the case for actual missionaries (who of course always served someone else's interests, even if they personally thought they were doing divine work).
The headline makes it sound like he's angry that Meta is poaching his talent. That's a bad look that makes it seem like you consider your employees to be your property. But he didn't actually say anything like that. I wouldn't consider any of what he said to be "slams," just pretty reasonable discussion of why he thinks they won't do well.
I'd say this is yet another example of bad headlines having negative information content, not leaks.
To me, there’s an enormous difference between “they pay well but we’re going to win the race” and “my employees belong to me and they’re stealing my property.”
Notably, I don’t see him condemning Meta’s “poaching” here, just commenting on it. Compare this with, for example, Steve Jobs getting into a fight with Adobe’s CEO about whether they’d recruit each other’s employees or consider them to be off limits.
But I've also experienced that the outside perspective, wrong as it may be on nearly all details, can give a dose of realism that's easy to brush aside internally.
> I think that leaks like this have negative information value to the public.
To most people, I'd think this is mainly entertainment, i.e. "palace intrigue", and the actual facts don't even matter.
> The vast majority of what he and others say doesn't get leaked. So you're eavesdropping on a tiny portion of a conversation. It's impossible not to take it out of context.
That's good spin, but coming from someone with an anonymous profile, how do we know it's true? (This is a general thing on HN: people say things, but you don't know how legit what they say is, or whether they are who they say they are.)
> What's worse, you think you learned something from reading this article, even though you probably didn't, making you more confident in your conclusions when you should be less confident.
What conclusions exactly? Again do most people really care about this (reading the story) and does it impact them? My guess is it doesn't at all.
> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in.
> That's a good spin but coming from someone who has an anonymous profile how do we know it's true (this is a general thing on HN people say things but you don't know how legit what they say is or if they are who they say they are).
Not only that, but how can we know if his interpretation or "feelings" about these discussions are accurate? How do we know he isn't looking through rose-tinted glasses like the Neumann believers at WeWork? OP isn't showing the missing discussion, only his interpretation/feelings about it. How can we know if his view of reality is accurate and unbiased? Without seeing the full discussion and judging for ourselves, we can't.
Your comment comes across dangerously close to sounding like someone that has drunk the kool-aid and defends the indefensible.
Yes, you can get the wrong impression from hearing just a snippet of a conversation, but sometimes you can hear what was needed whether it was out of context or not. Sam is not a great human being to be placed on a pedestal that never needs anything he says questioned. He's just a SV CEO trying to keep people thinking his company is the coolest thing. Once you stop questioning everything, you're in danger of having the kool-aid take over. How many times have we seen other SV CEOs with a "stay tuned" tweet that they just hope nobody questions later?
>if you had actually been in the room for more of the conversation you'd probably feel different
If you haven't drunk the kool-aid, you might feel differently as well.
SAMA doesn't need your assistance white knighting him on the interwebs.
It says something that he still believes he has "missionaries" after betraying all the core principles that OpenAI was founded on. What exactly is their mission now other than generating big $?
> It says something that he still believes he has "missionaries" after betraying all the core principles that OpenAI was founded on.
What I find most troubling in this reaction is how hostile it is to the actual talent. It accuses anyone who is even considering joining Meta in particular, or any competitor in general, of being a mercenary. It uses the poisoning-the-well fallacy to shield OpenAI from any competition. And why? Because he believes he is on a personal mission? This emits "some of you may die, but it's a sacrifice I am willing to make" energy. Not cool.
It's absolutely ridiculous that investors are driven (and are expected to be driven) by maximisation of return on investment, and that alone, but when labour/employees exhibit the same behaviour, they are labeled "mercenaries" or "transactional".
These examples of double standards for labor vs capital are literally everywhere.
Capital is supposed to be mobile. Economic theory is based on the idea that capital should flow to its best use (e.g., investors should withdraw it from companies that aren't generating sufficient returns and provide it to those who are) including being able to flow across international borders. Labor is restricted from flowing across international boundaries by law and even job hopping within a country is frowned upon by society.
We have lower rates of taxation on capital (capital gains and dividends) than on labor income because we want to encourage investment. We're told that economic growth depends on it. But doesn't economic growth also depend on people working and shouldn't we encourage that as well?
There's an entire industry dedicated to tracking investment yields for capital and we encourage the free flow of this information "so that people can make informed investing decisions". Yet talking about salaries with co-workers is taboo for some reason.
Those lower rates of taxation on capital don't even incentivize investment, because investment is inelastic. What else are you going to do with the money, swim in it?
It's just about rich people wanting a bigger share of the pie and having enough money to buy the policies they prefer.
Similarly, we have laws that guarantee our right to talk with our coworkers about our income, but the penalties have been completely gutted. And the penalty for companies illegally colluding on salary by all telling a third party what they are paying people and then using that data to decide how much to pay is ... nada.
We need to figure out how to have people who work for a living fund political campaigns (either directly with money or by donating our time), because this alternative of a badly-compressed jpeg of an economy sucks.
He was very happy when money caused them all to back him, despite the fact that he obviously isn't a safe person to have in a position of power. But if they realize they have better money options than turning OpenAI into a conspiracy against its original foundation, mostly for his benefit, well, then they are mercenaries...
Couldn’t his claim apply equally to investors and employees? In both categories, people who are there to do stuff for mercenary reasons are likely to be (in his view) missing some of the drive and cohesion of a group of “true believers” working for the same purpose?
The contrast between SpaceX and the defense primes comes to mind… between Warren Buffett and a crypto pumper-and-dumper… between a steady career at (or dividend from) IBM and a Silicon Valley startup dice-roll (or the people who throw money into said startups knowing they’re probably going to lose it)
He claims to be advancing progress. He believes that progress comes from technology plus good governance.
Yet our government is descending into authoritarianism and AI is fueling rising data center energy demands exacerbating the climate crisis. And that is to say nothing of the role that AI is playing in building more effective tools for population control and mass surveillance. All these things are happening because the governance of our future is handled by the ultra-wealthy pursuing their narrow visions at the expense of everyone else.
Thus we have no reason to expect good “governance” at the hands of this wealthy elite and we only see evidence to the opposite. Altman’s skill lies in getting people to believe that serving these narrow interests is the pursuit of a higher purpose. That is the story of OpenAI.
> Also the only way multiculturalism can work is through a totalitarian state which is why surveillance and censorship is so big in the UK. Also the reason why Singapore works.
Singapore, if anything, is evidence against your claim about the UK. Singapore has multiple cultures, but it does not promote multi-culturalism as it is generally understood in the UK. Their language policy is:
1. Everyone has to speak reasonably good English.
2. Languages other than English, Malay, Mandarin and Tamil are discouraged.
The language policy is more like the treatment of Welsh in the 19th century, or Sri Lanka's attempt to impose a single national language from the 60s to the 80s (but more flexible as it retains more than one language). A more extreme (because it goes far beyond language) and authoritarian example would be contemporary China's suppression of minority cultures. I do not think anyone would call any of those multiculturalism.
The reason for surveillance and censorship in the UK is very different. It is a general feeling in the ruling class that the hoi polloi cannot be trusted and that centralised decision making is preferable to local or individual decision making. The current Children's Wellbeing and Schools Bill is a great example: the central point is that the authorities will make more decisions for people and organisations, and will decide what they can do to a greater extent than at present.
> Also the only way multiculturalism can work is through a totalitarian state which is why surveillance and censorship is so big in the UK.
That seems like a wild claim to make without any supporting evidence. Even Switzerland can be used to disprove it, so I'm not sure where you're coming from that assuredly.
The UK isn't totalitarian in the same sense that even Singapore is, let alone actually totalitarian states like Eritrea, North Korea, China, etc.
Switzerland has one of the highest percentage of foreigners in Europe, four official languages, a decentralized political system, very frequent direct democratic votes and consensus governance (no mayors, governors and prime ministers, just councils all the way down).
Switzerland is set up in such a way that it absorbs and integrates many different cultures into a decentralized, democratic system. One of the primary historical causes for this is our belligerent past. I'd like to think that this was our only way out of constantly hitting each other over the head.
The UK would need to have a well-funded and well-equipped police force to be a proper police state, and the rate of shoplifting, burglary etc that goes on suggests otherwise.
Lol, I'm sure Sam Altman's ideals haven't changed but you're a fool if you think OpenAI is aiming for anything loftier than a huge pile of money for investors.
They haven't released many closed-source, open-weights models in comparison to their competitors, but they made their Codex agent open source while Claude Code is still closed source.
And with the others as well, the secret sauce of training is still secret. Their competitors' "open source" in Gemma, Llama, etc is closed source. It's like Mattermost Team Edition where the binary is shipped under the MIT license. OpenAI should be held to a higher standard based on their name and original structure and pitch and they've fallen short, but I think to say they completely threw it out is an exaggeration. They hit the same roadblocks of copyright and other sorts of scrutiny that others did.
A company's mission is not an individual's mission. I personally would never hire an engineer whose main pursuit is money or promotions. These are the laziest engineers that exist and are always a liability.
Everyone is the chairman of the board of their lives, with a fiduciary duty to their shareholder, namely themselves. You can decide to hire only employees who either believe in mission over pay or who are willing to mouth the words, but you will absolutely miss out on good employees.
I remember defending a hiring candidate who had said he got into his specialty because it paid better than others. We hired him and he was great, worth his pay. No one else on the hiring team could defend a bias against someone looking out for themselves.
And I would never work for someone with such a paranoid suspicion of the motives of their employees, who doesn’t want to take any responsibility in their employees’ professional growth, and who doesn’t want to pay them what they’re worth.
Sam Altman went from "I'm doing this because I love it" to proposing to receive 7% equity in the for-profit entity in a matter of months.
Now he calls out researchers leaving for greener pastures as mercenaries while the echo of "OpenAI is nothing without its people" hasn't faded.
Capitalists always hate capitalism when it comes to employees getting paid what they are worth. If the market will bear it, he should embrace it and stop whining.
I'm a bit torn about this. If it ends up hurting OpenAI so much that they close shop, what is the incentive for another OpenAI to come along?
You can spend time making a good product and getting breakthroughs, and all it takes is for Meta to poach your talent, and with it your IP. What do you have left?
But also, every employee getting paid at Meta can come out with the resources to start their own thing. PayPal didn't crush fintech: it funded the next twenty years of startups.
How do you figure? If you assume that Meta gets the state-of-the-art model, revenue is non-existent unless they start a premium tier or run ads. Even then, it's not clear they will exceed the money spent on inference and training compute.
It's worth a few billion (easily) to keep people's default time sink as aimlessly scrolling FB/IG as opposed to chatting with ChatGPT. Even if that scroll is replaced by chatting with Llama rather than seeing posts.
This looks similar to what Meta (then Facebook) did a decade ago, when it basically broke the agreements between Apple, Google, etc. not to poach each other's employees.
When you hear this reiterated by employees, who actually believe it, then it's sad. Obviously not in this situation, but I've actually heard this from people. Some of them were even pros. "There is no fool like an educated fool."
This is one of the most fascinating things I've noticed in the past ten or so years. When did people in general begin to buy into the bullshit spewed by big shots in corporate ventures, or really any commercial venture? People at least implicitly understood that the boss just wanted money and would fuck you, nature, or his own firmly held beliefs to get it.
People are now shocked when a company cuts a loved product or their boss fires them when someone cheaper comes along.
Anyone who has worked at OpenAI or is currently working there has lost all credibility in my eyes. When their dear leader, Sam was "fired", they staged a coup to save their paychecks.
These people are just out there to make a buck and scam people with "AGI", and now that there is plenty of competition and superior models, I'm hearing crickets from them.
All they had going for them was first to market and they managed to damage the brand, lose their top talent, deliver a subpar product and convert a nonprofit into for profit.
In a for-profit system, literally everybody is a mercenary. Thoughts, prayers, and vibes don't put food on my table or pay my bills and compute expenses.
An observation: most articles with titles of the form "A SLAMS B" put forward a narrow, one-sided view of the issue they report on. Oftentimes they're shallow attempts to stir outrage for clicks. This one is just giving a CEO a platform to promote how awesome he thinks his company is.
All these articles and videos of people "slamming" each other; it doesn't move the needle, and it's not really news.
Seems to me that articles with titles in the form "A SLAMS B" take a single negative comment that A made about something associated with B and build a few paragraphs around it as if it were a huge controversy, while in the meantime both A and B have already forgotten about the issue.
Dumb question: if they're willing to pay so much for AI talent, why won't companies hire experienced software engineers willing to learn AI on the job? Seems like there should be a big market for that.
However, skilling people up on specialized skill sets in a reasonable time frame requires having people around to teach them. And those people need to know not just the skills, but how to teach them well. And it takes time away from those people doing the job, so that approach will slow development in the short run.
There kind of is, but also note we are talking about exceptional talent here. I don't think Meta is mass-poaching the pure engineer types at OpenAI either.
It has obvious pros, but since you asked about the cons: anonymity brings out the worst in people, and TC chasing leads to a reductionist view of people's values and skills.
For example, unlike HN, you don't often get technical discussions on Blind, by design. So it's "meta"-level strategy discussion of the job, and it skews toward politics, gossip, stock price, etc.
This is compounded by it being social media, where negativity can be amplified 5-10x.
Mainly because it brings out the worst in people. It’s easy to read Blind too much and take on a very cynical, money-driven view of everything in your life, which of course a Blind addict would justify as clear-eyed and pragmatic.
I hate teamblind because it makes me feel really negative about our industry.
I actually really like tech - the problems we get to work on, the ever-changing technological landscape, the smart and passionate people, etc, etc. But teamblind is just filled with cynical, wealth-obsessed and mean careerists. It's like the opposite of HN in many ways.
And if you ever wondered where the phrase "TC or GTFO" originated... it's from teamblind.
At least from the outside, OpenAI's messaging about this seems obnoxiously deluded. Maybe some of those employees left because it starts to feel like a cult built on a foundation of self-importance? Or maybe they really do know some things we don't know, but it seems like a lot of people are eager to keep giving them that excuse.
But then again, maybe they have such a menagerie of individuals with their heads in the clouds that they've created something of an echo chamber about the 'pure vision' that only they can manifest.
Yeah, it's a tough spot he's found himself in. How do you convince people who know more about this stuff than anybody that you're barreling toward something that's an improbability? It seems that most of them have chosen to turn toward reality, the material reality, and register their skill with an organization that holds that in higher regard. I can't blame them, and neither can he, but he also can't help himself when it comes to reiterating the hype. He might be projecting about that 'deep-seated cultural issue' he's ascribing to Meta, and lashing out against those who don't accept it.
To be fair, he's hardly alone. Business is built on dupers and dupees. The duper talks about how important the mission of the business is while taking the value of the labor of the dupee. If he had to work for the money he pays the dupee, he would be a lot less interested in the mission.
I think it's more of the latter. We've already seen others beat them at their own game, only for them to come back with a new model.
In the end, this is the same back and forth that Apple and Sun shared in the late 90s or Meta and Google in 2014. We could have made non-competes illegal today but we didn’t.
Ehh, this take feels ungenerous to me. You don't have to believe a private firm is a holy order for it to benefit from a culture filled with "we believe this specific project is Important" people vs "will work at whatever shop drops the most cash" people.
Mercenaries by definition select for individual dollar outcomes, and it's impossible for that not to impact the way they operate in groups, which is generally to the group's detriment unless management is incredibly good at building group-first incentive structures that don't stomp individual outcomes.
That said, mercenary-missionaries are definitely a thing. They're unstoppable forces culturally and economically, and that could be who we're seeing move around here.
In our times, every narcissist sees himself as a saint and a messiah on a mission to save the world, while doing the complete opposite of that. And they get very angry when they see other narcissists trying to do the same.
This was an important argument in the book The Network State.
Corporate politics is the small game right now. Sam is trying to build a Global network state
There is a fairly strong scientific/historical argument that neither mercenaries nor missionaries have made any significant contribution to the outcome of any important human conflict or endeavour. Rather, microscopic life is in control, and we are keen to rationalize the outcomes into stories of human heroes and villains.
Therefore, wish for the army with the best immune system.
In other words, we should probably be asking what viral/bacterial content is transferred in these employee trades and who mates with whom. This information is probably as important to the outcome as the notions of "AGI" swirling around.
I understand the massive anti-OpenAI sentiment here, but OpenAI makes a really great product. ChatGPT and its ecosystem are widely used by millions every day to make them more productive. Losing these employees doesn’t bode well for users.
Meta doesn’t really have a product unless you count the awful “Meta AI” that is baked into their apps. Unless these acquisitions manifest in frontier models getting open sourced, it feels like a gigantic brain drain.
Meta's real AI product is actually even worse than that and insidious: They try to run over companies who are (in contrast) successfully advancing AI with money they made by hooking teens on IG, and then just use the resulting inferior product as a marketing tool.
It's weird to hear Sam Altman call the employees of OpenAI 'missionaries' based on their intense policies that seem determined to control how people think and act.
Imagine if in 2001 Google had said "I'm sorry, I can't let you search that" if you were looking up information on medical symptoms, or doing searches related to drugs, or searching for porn, or searching for Disney themed artwork.
It's hard for me to see anyone with such a strong totalitarian control over how their technology can be used as a good guy.
Zuck poaches AI devs and places them under Wang; how does that work? Wang doesn't give the impression of being a brilliant researcher or coder, just a great deal maker (to put it diplomatically).
It’s kind of rich that he’s complaining about Facebook paying engineers ’too much’, given the history here.
A decade ago Apple, Google, Intel, Intuit, and Adobe all had anti-poaching agreements. Facebook wouldn't play ball, paid people more, won market share, and caused the salary boom in Silicon Valley.
Now Facebook is paying people too much and we should all feel bad about it?
I don't know what pedestal Sam is standing on to point fingers at others. Who are the missionaries and who are the mercenaries? What part of OpenAI is open?
“I am proud of how mission-oriented our industry is as a whole; of course there will always be some mercenaries.”
In the context of the decisions of largely East Asia born technical staff, can’t help but reflect on the role of actual western missionaries and mercenaries in East Asia over the last 100+ years & also the DeepSeek targeted sinophobia.
Sam Altman complaining about mercenary behavior from competitors... Talk about the pot calling the kettle black. Guess he's unhappy he's not the one being mercenary in this situation.
Another said: "Yes we're quirky and weird, but that's what makes this place a magical cradle of innovation. OpenAI is weird in the most magical way. We contain multitudes."
"ChatGPT can you give me a catchy phrase I can use to sway the public discourse against Meta that puts OpenAI in the most favorable light? Also sprinkle in some alliteration if you could"
This millionaire rhetoric about "THE MISSION" always makes me laugh. I once had a CEO who suddenly wanted us to be on call 24/7, every other week. His argument? Commitment. The importance of "the mission." Becoming a market leader, and so on.
As AlbertaTech says, “we make sparkling water.” I mean, what’s the mission? A can of sparkling water on every table? Spreading the joy of carbonated water to the world? No. You sell sparkling water because you want to make a profit. That kind of speech is just a way to hide the fact that you're trying to cut three full-time positions and make your employees work off-hours to increase margins. Or, like in this case, pay them less than the competition with the same objective.
Sam Altman might actually have a mission, turning us all into robot slaves, but that’s a whole different conversation.
By 2025, haven't all employees learned to see through this? Just like the "we're all family" tropes. It's all just an attempt at brainwashing employees into working longer hours because of the 'purpose.' But they won't benefit; they can be let go at any time.
Couldn't think of a worse steward of AI than Meta/Zuck (not a fan of OpenAI either). One of the most insidious companies out there.
Sad to see Nat Friedman go there. He struck me as "one of the good ones" who was keen to use tech for positive change. I don't think that is achievable at Meta.
Sam's apparent claim that Meta is hiring mercenaries while OpenAI is hiring missionaries seems a bit rich, given OpenAI's closed-weight models versus Meta's open weights.
I could definitely see those who are 'missionaries' wanting to give it away. ¯\_(ツ)_/¯
In any case, this is business, and in many cases how business operates. Nice try on Sam's part to make it seem like a bad thing and as if everybody were in it for the good of the cause.
Startups with unstable revenue models often don't stand a chance against FANG-company budgets. Also, high-level talent is rarely fungible with standard institutional training programs, and has options more rewarding than a CEO's problems.
Unfortunately, productive research doesn't necessarily improve with increased cash-burn rates. And many international postdocs simply refuse to travel to the US these days for "reasons". =3
Religion delivers a recurring revenue model that isn't taxed and where criminal confessions can't be used in court if made to a high-ranking company officer. It's the perfect business.
Is he comparing working at OpenAI to religion? Is that not a crazy analogy to make? Cult like behavior to say the least. It's also funny the entire business of AI is poaching content from people.
It's the same way billionaires argue for free trade, until it comes to immigration. As soon as it might help people who work for a living, suddenly none of their principles apply.
> hinting that the company is evaluating compensation for the entire research organization.
TL;DR:
Some other company paid more and got engineers to join them because the engineers care more about themselves and their families than some annoying guy's vision.
That didn't work for the American colonies: Portugal and Spain were very focused on being missionaries and were beaten by the Dutch and Brits, who just wanted to make money.
90% of the reason Spain and Portugal explored the new world was for wealth (spices, gold/silver, sugar, brazilwood). The rest of the reason was to spread their religion and increase their national power. Missions only popped up 30 years after they first began colonization.
The Dutch, British, and French were initially brought to the new world because they'd heard how rich it was and wanted a piece of the pie. It took them a while to establish a hold because the Spanish defended it so well (incumbents usually win) and also they kept settling frozen wastelands rather than tropical islands.
The religiously persecuted groups (who were in no way state-sponsored) came 120 years after Spain's first forays.
It's also worth noting that missions were often a chit to curry political favor with the catholic church. This was sort of the manufactured consent of the 17th century.
That really depends on the time period. The puritanical core of the Massachusetts Bay Colony was certainly replaced by commercial/trade interests long before their war with the crown.
I believe the Portuguese got there looking for a shorter route to India (money) and eventually settled the land for gold, silver, brazilwood, diamonds, and sugarcane (money).
Nah, they very much wanted to do missionary work and find Prester John; they invested in a lot of shitty missions for no reason other than to try to convert people to the church.
And don't get me wrong, they were very successful at filling their pockets with gold, but they could have been even more so if they had been mostly mercenaries like the Brits and the Dutch.
In what way did the Spanish lose out to the Dutch or the Brits? Did you only think of North America and forget everything south of the Rio Grande (and a good deal north of it)?
Capitalists don't like markets, or at least not the markets that we're told capitalism will bring about. Those markets are supposed to increase competition and drive down prices until companies are making just barely enough to survive. What capitalist wants that for himself? He wants decreased competition and sky-high prices for himself, and increased competition and lower prices for his competitors and suppliers.
> Capitalists don't like markets, or at least not the markets that we're told capitalism will bring about.
The "markets" most people learn about are artificial Econ 101 constructions. They're pedagogical tools for explaining elasticity and competition under the assumption that all widgets are equally and infinitely fungible. An assumption which ignores marginal value, individual preferences, innovation and other things that make real markets.
> What capitalist wants that for himself? He wants decreased competitions and sky high prices for himself, and increased competition and lower prices for his competitors and suppliers.
The capitalist wants to be left to trade as he sees fit without state intervention.
> An assumption which ignores marginal value, individual preferences, innovation and other things that make real markets.
If those things mattered we'd have a lot fewer people mad about the state of things.
> The capitalist wants to be left to trade as he sees fit without state intervention.
If that were true you'd see a lot fewer lobbyists in DC and state capitols. Non-compete and non-disparagement clauses wouldn't exist. Patents and copyright wouldn't either.
> If those things mattered we'd have a lot fewer people mad about the state of things
They're mad precisely because they have differing expectations and interpretations of these things. Even if they did agree, consensus shouldn't be confused with reality.
> If that were true you'd see a lot fewer lobbyists in DC and state capitols.
Lobbying is the exercise of an individual's right to petition government for redress of grievances. So long as there are complaints there will always be lobbyists.
> Non-compete and non-disparagement clauses wouldn't exist. Patents and copyright wouldn't either.
Non-compete and non-disparagement clauses are no restraint on freedom if they were agreed to by way of voluntary contract. Rather, like other transactions, they are explicit trades of certain opportunities for certain benefits.
Regulatory capture, which all corporate lobbyists represent, is profoundly anti-capitalistic. If the CEO wants to spend their time talking to the government, that is very different than spending money to have other people advocate on their behalf: that isn't an option the rest of us have.
And that's before we get to the way wealth inequality inherently distorts markets, by overstating the preferences of the wealthy and underserving the needs of the poor.
The point of an economy is to distribute scarce goods and resources. Money represents information about what people want or expect to want in the future.
Everything wealthy people do that makes it less efficient at that job is an attack on capitalism.
In fairness, non-competes are evidence of both what the commenter you're replying to said and what I said to instigate the reply. The capitalist absolutely does want to be left alone to trade as he sees fit. He also wants his competitors harassed by regulators and all of their potential employees bound by non-competes. He also doesn't consider subsidies and grants to be interference. Unless they go to competitors.
His end goal is the pursuit or promotion of his own self-interest. Whether the consequence of this is the maximization of profits depends upon his goals and circumstances.
No no no, don't you get it: they have this multi-entity "nonprofit" and something something "capped profit," yet everyone is employed by the for-profit. But they just want to give AGI away for free, right?
"Talent" doesn't make a business successful, and paychecks aren't the reason most people switch jobs. This is like Sam announcing to the world "it sucks working for our company, don't come here".
Hmm on the one hand somebody could have unimaginable wealth but on the other hand they could be in a religion started by a former reddit ceo, it is truly an unsolvable riddle
If you're getting poached, pay more. If you can't pay more, give away your equity instead. Nobody owes you their labor, especially if you're already a billionaire.
Parachute Sam into an island of cannibals, come back in 5 years, and he'll be king. Unless, of course, one of the cannibals is Mark Zuckerberg; then he might just get eaten.
If you aren’t accepting the highest bid then you are contributing to your gender’s wage gap in the wrong direction.
And before you make your rebuttal, if you wouldn’t accept $30,000 equivalent for your same tech job in Poland or whatever developed nation pays that low, then you have no rebuttal at all.
I like sama and many other folks at OpenAI, but I have to call things how I see them:
"What Meta is doing will, in my opinion, lead to very deep cultural problems. We will have more to share about this soon but it's very important to me we do it fairly and not just for people who Meta happened to target."
Translation from corporate-speak: "We're not as rich as Meta."
"Most importantly of all, I think we have the most special team and culture in the world. We have work to do to improve our culture for sure; we have been through insane hypergrowth. But we have the core right in a way that I don't think anyone else quite does, and I'm confident we can fix the problems."
Translation from corporate-speak: "We're not as rich as Meta."
"And maybe more importantly than that, we actually care about building AGI in a good way." "Other companies care more about this as an instrumental goal to some other mission. But this is our top thing, and always will be." "Missionaries will beat mercenaries."
Translation from corporate-speak: "I am high as a kite." (All companies building AGI claim to be doing it in a good way.)
The perfect corollary is that Altman is as mercenary as Zuckerberg, if not more so, given all the power grabs he pulled at OpenAI. Even the "Open" in OpenAI is a joke.
He just has fewer options because OpenAI is not as rich as Meta.
this shit sounds so fake it makes me want to die. all these capitalist perverts pretending that they believe in anything at all is completely preposterous and is at odds with capitalism. you’re all mercenaries and criminals Sam.
Look around California: the missionaries have the best real estate. A related note on the connection between strongly promoted devotion to ideas and good business: Abrahamic monotheism was the result of a successful marketing campaign, "only the donations made here are donations to the real god," run by one Temple against several competing ones. (Curiously, the current stage of AI, on the cusp, be it 3 or 30 years, of the emergence of AGI, is somewhat close to that point in history, so expect the flood of messiahs and doomsayers to keep increasing.)
> the Abrahamic monotheism was a result of the successful marketing campaign
I thought it was because everyone was accepted and technically equal, and sins were seen as inherent and forgivable (at least in Christianity), whereas paganism and polytheisms can tend toward rewarding those with greater resources (who can afford to sacrifice an entire bull every religious cycle), thereby creating a form of religious inequality. At least, that was one of the more compelling arguments I've heard for the spread of Christianity through the Roman Empire.
Side note: I'm noticing more and more of these simple, hyperbolic headlines specifically of statements that public figures make. A hallmark of the event being reported is a public figure making a statement that will surely have little to no effect whatsoever.
Calling these statements "slamming" (a specific word I see with curious frequency) is so riling to me because they are so impotent but are described with such violent and decisive language.
Often it's a politician, usually liberal, and their statement is such an ineffectual waste of time, and outwardly it appears wasting time is most of what they do. I consider myself slightly left of center, so seeing "my group" dither and waste time rather than organize and do real work frustrates me greatly. Especially so since we are provided with such contrast from right of center where there is so much decisive action happening at every moment.
I know it's to feed ranking algorithms, which causes me even more irritation. Watching the brain rot get worse in real time...
"Do Not Be Explicitly Useful"—Strategic Uselessness as Liability Buffer
This is a deliberate obfuscation pattern. If the model is ever consistently useful at a high-risk task (e.g., legal advice, medical interpretation, financial strategy), it triggers legal, regulatory, and reputational red flags.
a. Utility → Responsibility
If a system is predictably effective, users will reasonably rely on it.
And reliance implies accountability. Courts, regulators, and the public treat consistent output as an implied service, not just a stochastic parrot.
This is where AI providers get scared: being too good makes you an unlicensed practitioner or liable agent.
b. Avoid “Known Use Cases”
Some companies will actively scrub capabilities once they’re discovered to work “too well.”
For instance:
A model that reliably interprets radiology scans might have that capability turned off.
A model that can write compelling legal motions will start refusing prompts that look too paralegal-ish, or will insert nonsense case-law citations.
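A minimal sketch of what such a "known use case" gate could look like in practice; the category names, keyword patterns, and refusal string below are all invented for illustration, not any vendor's actual filter:

    # Hypothetical "known use case" gate: refuse prompts that look too
    # much like high-risk professional work product. All patterns and
    # categories here are invented; real systems use trained classifiers.

    HIGH_RISK_PATTERNS = {
        "legal":   ("motion to dismiss", "case law", "brief in support"),
        "medical": ("radiology", "differential diagnosis", "dosage"),
        "finance": ("portfolio allocation", "options strategy"),
    }

    REFUSAL = "I can't help with that. Consider consulting a licensed professional."

    def gate(prompt: str) -> str | None:
        """Return a refusal if the prompt matches a scrubbed use case, else None."""
        text = prompt.lower()
        for category, patterns in HIGH_RISK_PATTERNS.items():
            if any(p in text for p in patterns):
                return f"{REFUSAL} (blocked category: {category})"
        return None  # prompt passes through to the model

    print(gate("Draft a motion to dismiss citing relevant case law"))

Nothing this crude ships in production, of course, but it illustrates the asymmetry: the gate costs a few lines, while the liability it deflects is open-ended.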
I think we see this a lot with ChatGPT. It's constantly getting worse in real-world use while excelling at benchmarks. They're likely, and probably forced, to cheat on benchmarks by using "leaked" data.
I hope xAI wins. I think Sam's self-portrayal as a missionary has a lot of irony - I see him as the ultimate mercenary.
It's always challenging to judge based entirely on public perceptions, but at some point the public evidence adds up. The board firing, getting maybe fired from YC (disputed), people leaving to start Anthropic because of him, people stating they don't want him in charge of AGI, all the other execs leaving, his lying to Congress, his lying to the board. His general affect just seems off, not in an aspie way, but in some dishonest way. Yeah, it's subjective, but it's a point, and it's different from Zuckerberg, Musk, etc., who come across as earnest. Even PG said that if you parachuted Sam onto an island of cannibals, you'd come back in five years and he'd be king.
I'm rooting for basically any of the other (American) players in the game to win.
At least Zuck is paying something close to the value these people might generate, instead of having them sign hostile agreements to claw back their equity and then feigning ignorance. If NBA all-stars get $100M+ contracts, it's not crazy for a John Carmack type to command the same or more. The hard part is identifying the talent, not justifying the value created by the leverage of the correct talent (which is huge).
Mercenaries over missionaries.
1000x this. You should ideally feel like you’re part of a great group of folks and doing good work - but this is not a guarantee of anything at all.
When it comes down to it, you’re expendable when your leadership is backed into a corner.
The only case where you can call yourself both an employee and a missionary is if you actually are a missionary, or you work at a charity/NGO trying to help people or animals.
The rest of us are mercenaries only.
or if you own the company
A Ronin is just a Samurai who has learned his lesson.
We are not a company, we are a family
Ferengi Rules of Acquisition:
#6: Never allow family to stand in the way of opportunity.
#111: Treat people in your debt like family… exploit them.
#211: Employees are the rungs on the ladder of success. Don't hesitate to step on them.
#91: Your boss is only worth what he pays you.
These CEOs will be the first to say "we are a team, not a family" when they do layoffs.
"I have decided that you need to go spend more time with your family. Really I'm just doing you a favor."
Relevant Silicon Valley scene: https://www.youtube.com/watch?v=u48vYSLvKNQ
Especially for an organization like OpenAI that completely twisted its original message in favor of commercialization. The entire missionary bit is BS trying to get people to stay out of a sense of what exactly?
I'm all for having loyalty to people and organizations that show the same. Eventually it can and will shift. I've seen management changed out from over me more times than I can count at this point. Don't get caught off guard.
It's even worse in the current dev/tech job market, where wages are being pushed down to around 2010 levels. I've been working two jobs just to keep up with expenses, since I've been unable to match my more recent prior income. One ended recently, and I'm looking for a new second job.
Missionaries https://www.youtube.com/watch?v=zt7BPxHqbkU
No. The cult members are less likely to be laid off, simply because they don't stand out and provide less surface for attack.
I think there's more to work than just taking home a salary. Not equally true among all professions and times in your life. But most jobs I took were for less money with questionable upside. I just wanted to work on something else or with different people.
The best thing about work is the focus on whatever you're doing. Maybe you're not saving the world, but it's great to go in with one goal that everyone works toward. And you get excited when you see your contributions make a difference, or when you build a great product. You can laugh and say I was part of a 'cult,' but it sure beats working a miserable job for just a slightly higher paycheck.
They sure can have it both ways. They do now.
Only be loyal to doing work :)
That's because you don't believe in, or realize, the mission of the product and its impact on society. Whereas if you work at Microsoft, you are just working to make MS money; they are like a giant machine.
That said, it seems like every worker can be replaced. Lost stars are replaced by new stars.
What goes around comes around...
From March of this year,
"As we know, big tech companies like Google, Apple, and Amazon have been engaged in a fierce battle for the best tech talent, but OpenAI is now the one to watch. They have been on a poaching spree, attracting top talent from Google and other industry leaders to build their incredible team of employees and leaders."
https://www.leadgenius.com/resources/how-openai-poached-top-...
We shouldn't use the word "poaching" in this way. Poaching is the illegal hunting of protected wildlife. Employees are not the property of their employers, and they are free to accept a better offer. And perhaps companies need to revisit their compensation practices, which often mean that the only way for an employee to get a significant raise is to change companies.
Indeed! It would be illegal _not_ to poach employees: https://www.goodspeedmerrill.com/blog/2023/12/what-is-a-no-p...
Big picture, I'll always believe we dodged a huge bullet in that "AI" got big in a nearly fully "open-source," maybe even "post open-source" world. The fact that Meta is, for now, one of the good guys in this space (purely strategically and unintentionally) is fortunate and almost funny.
AI only got big, especially for coding, because they were able to train on a massive corpus of open source code. I don't think it is a coincidence.
Another funny, possibly sad, coincidence is that the licenses that made open source what it is will probably be absolutely useless going forward, because, as recent precedent has shown, companies can train on whatever they have legally gained access to.
On the other hand, AGPL continues to be the future of F/OSS.
MIT is also still useful; it lets me release code where I don't really care what other people do with it as long as they don't sue me (an actual possibility in some countries)
Which countries would these be?
The US, for one. You can sue nearly anyone for nearly anything, even something you obviously won't win in court, as long as you find a lawyer willing to do it; you don't need any actual legal standing to waste the target's time and money.
Even the most unscrupulous lawyer is going to look at the MIT license, realize the target can defend it for a trivial amount of money (a single form letter from their lawyer) and move on.
There are other ways to litigate that the malicious/greedy can use, where MIT offers no protection; e.g. patent trolling.
You can sue for damages if they have malware in the code, there is no license that protects you from distributing harmful products even if you do it for free.
If I commit fraud, sure. But the code I release is extremely honest about what it does :)
And illegally too. Anthropic didn't pay for those books they used.
It's too late at this point. The damage is done. These companies trained on illegally obtained data and they will never be held accountable for that. The training is done and they got what they needed. So even if they can't train on it in the future, it doesn't matter. They already have those base models.
Then punitive measures are in order. Add it to the pile of illegal, immoral, and unethical behavior of the feudal tech oligarchs already long overdue for justice. The harm they have done and are doing to humanity should not remain unpunished.
Legally or illegally gained access too. Lest we forget Meta pirating books
And the legality of this may vary by jurisdiction. There’s a nonzero chance that they pay a few million in the US for stealing books but the EU or Canada decide the training itself was illegal.
It’s not going to happen. The EU is desperate to stop being in fourth place in technology and will do absolutely nothing to put a damper on this. It’s their only hope to get out of the rut.
Then the EU and canada just won't have any sovereign LLMs. They'll have to decide if they'd rather prop up some artificial monopoly or support (by not actively undermining) innovation.
Explain how AGPL would prevent AI from being trained on it or AI-generated code competing with it. I have used AGPL for a decade and still not sure.
It wouldn't -- AGPL code that is picked up would also just get "fair used" into new software.
That said, AGPL as a trend was a huge closing of the spigot of free F/OSS code for companies to use and not contribute back to.
Yes, I hope it was a trend. People were judging me when I first started using it over 10 years ago.
Yup. The book torrenting case is pretty nuts.
If I can reproduce the entirety of most books off the top of my head and sell that to people as a service, it's a copyright violation. If AI does it, it's fair use.
Pants-on-head idiotic judge.
>If I can reproduce the entirety of most books off the top of my head and sell that to people as a service, it's a copyright violation. If AI does it, it's fair use.
Assuming you're referring to Bartz v. Anthropic, that is explicitly not what the ruling said, in fact it's almost the inverse. The judge said that output from an AI model which is a straight up reproduction of copyrighted material would likely be an explicit violation of copyright. This is on page 12/32 of the judgement[1].
But the vast majority of output from an LLM like Claude is not a word for word reproduction; it's a transformative use of the original work. In fact, the authors bringing the suit didn't even claim that it had reproduced their work. From page 7, "Authors do not allege that any infringing copy of their works was or would ever be provided to users by the Claude service." That's because Anthropic is already explicitly filtering out results that might contain copyrighted material. (I've run into this myself while trying to translate foreign language song lyrics to English. Claude will simply refuse to do this)[2]
[1] https://www.courtlistener.com/docket/69058235/231/bartz-v-an...
[2] https://claude.ai/share/d0586248-8d00-4d50-8e45-f9c5ef09ec81
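As an aside, the crude shape of that kind of output filter is easy to sketch: shingle the n-grams of a protected text and flag completions that overlap too heavily. This is a guess at the general idea, not Anthropic's actual mechanism:

    # Crude copyright-overlap filter: flag output that reproduces long
    # verbatim spans of a protected text. Purely illustrative; real
    # systems are far more sophisticated than n-gram shingling.

    def shingles(text: str, n: int = 8) -> set[tuple[str, ...]]:
        """All n-word windows ("shingles") of a text, lowercased."""
        words = text.lower().split()
        return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

    def overlap_ratio(output: str, protected: str, n: int = 8) -> float:
        """Fraction of the output's shingles found verbatim in the protected text."""
        out = shingles(output, n)
        if not out:
            return 0.0
        return len(out & shingles(protected, n)) / len(out)

    protected_text = "..."  # the copyrighted work (placeholder)
    model_output = "..."    # candidate completion (placeholder)
    if overlap_ratio(model_output, protected_text) > 0.5:
        print("blocked: output reproduces protected text")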
They should still have to pay damages for possessing the copyrighted material. That's possession, which courts have found is copyright violation. Remember all the 12 year olds who got their parents sued back in the 2000s? They had unauthorized copies.
I don't know what exactly you're referring to here. The model itself is not a copy, you can't find the copyrighted material in the weights. Even if you could, you're allowed under existing case law to make copies of a work for personal use if the copies have a different character and as long as you don't yourself share the new copies. Take the Sony Betamax case, which found that it was legal and a transformative use of copyrighted material to create a copy of a publicly aired broadcast onto a recording medium like VHS and Betamax for the purposes of time-shifting one's consumption.
Now, Anthropic was found to have pirated copyrighted work when they downloaded and trained Claude on the LibGen library. And they will likely pay substantial damages for this. So on those grounds, they're as screwed as the 12 year olds and their parents. The trial to determine damages hasn't happened yet though.
> The model itself is not a copy,
Agreed
> the Sony Betamax case, which found that it was legal and a transformative use of copyrighted material to create a copy of a publicly aired broadcast
Good thing libgen is not publicly aired in broadcast format.
> So on those grounds, they're as screwed as the 12 year olds and their parents.
Except they have deep enough pockets to actually pay the damages for each count of infringement. That's the blood most of us want to see shed.
You cannot have trained the model without possession of copyrighted works. Which we seem to be in agreement on.
This was immediately my reaction as well, but I'm not a judge so what do I know. In my own mind I mark it as a "spice must flow" moment -- it will seem inevitable in retrospect but my simple (almost surely incorrect) take is that there just wasn't a way this was going to stop AI's progress. AI as a trend has incredible plot armor at this point in time.
Is the hinge that the tools can recall a huge portion (not perfectly, of course) but usually don't? The substitute-good idea seems even more straightforward: it's reasonable to assume people will buy fewer copies of book X when they start generating books heavily inspired by book X.
But, this is probably just a case of a layman wandering into a complex topic, maybe it's the case that AI has just nestled into the absolute perfect spot in current copyright law, just like other things that seem like they should be illegal now but aren't.
I didn't see the part of the trial where they got the "entirety of most books" out of Llama. What did you see that I didn't?
Sad to say but it would have put US companies at a major disadvantage if they were not allowed to.
I'm not sure that's true. I've never heard of a human being done for copyright for reciting a book passage.
I daresay the difference with AI is that pretty much no human can do that well enough to harm the copyright holder, whereas AI can churn it out.
Yea, that dipshit judge just opened the floodgates for more problems. The problem is they don't understand how this stuff works, and they're in the position of having to make a judgement on it. They're completely unprepared to do so.
Now there's precedent for future cases where theft of code or any other work of art can be considered fair use.
The AGPL is a nonfree license that is virtually impossible to comply with.
It’s an EULA trying to pretend it’s a license. You can’t have it both ways.
It's not really "virtually impossible to comply with". It's very restrictive, yes, but not hard to comply if you want to.
And yes, it is an EULA pretending to be a license. I'd put good odds on it being illegal in my country, and it may even be illegal on the US. But it's well aligned with the goals of GNU.
This is a strong claim, given it is listed as a free, copyleft license:
https://www.gnu.org/licenses/agpl-3.0.en.html
Could you expand on why you think it's nonfree? Also, it's not that hard to comply with either...
For some people "free" means "autonomy", and copyleft licences do a lot to restrict autonomy.
So interestingly, free meant autonomy for Stallman and the original proponents of "copyleft" style licenses too. But autonomy for end-users, not developers. But Stallman et al believed the copyleft style licenses maximized autonomy for end-users, rightly or wrongly, that was the intent.
"Free" decidedly means autonomy; "I have been freed from prison". Use of the word "free" in many OSS licenses is a jarring euphemism.
cf. https://en.wikipedia.org/wiki/Two_Concepts_of_Liberty
Yeah if it's a problem of definition, then I definitely agree that it could not match there, it certainly isn't a do anything you want license.
marcan does a much more detailed job than I do:
https://news.ycombinator.com/item?id=30495647
https://news.ycombinator.com/item?id=30044019
GNU/FSF are the anticapitalist zealots that are pushing this EULA. Just because they approve of it doesn’t make it free software. They are confused.
I read through and I think that the analysis suffers from the fact that in the case when the modifier is the user it's fine.
Free software refers to user freedoms, not developer freedoms.
I don't think the below is right:
> > Notwithstanding any other provision of this License, if you modify the Program, your modified version must prominently offer all users interacting with it remotely through a computer network (if your version supports such interaction) an opportunity to receive the Corresponding Source of your version by providing access to the Corresponding Source from a network server at no charge, through some standard or customary means of facilitating copying of software.
>
> Let's break it down:
>
> > If you modify the Program
>
> That is if you are a developer making changes to the source code (or binary, but let's ignore that option)
>
> > your modified version
>
> The modified source code you have created
>
> > must prominently offer all users interacting with it remotely through a computer network
>
> Must include the mandatory feature of offering all users interacting with it through a computer network (computer network is left undefined and subject to wide interpretation)
I read the AGPL to mean if you modify the program then the users of the program (remotely, through a computer network) must be able to access the source code.
It has yet to be tested, but that seems like the common sense reading for me (which matters, because judges do apply judgement). It just seems like they are trying too hard to do a legal gotcha. I'm not a lawyer so I can't speak to that, but I certainly don't read it the same way.
I don't agree with this interpretation of every-change-is-a-violation either:
> Step 1: Clone the GitHub repo
>
> Step 2: Make a change to the code - oops, license violation! Clause 13! I need to change the source code offer first!
>
> Step 1.5: Change the source code offer to point to your repo
This example seems incorrect -- modifying the code does not automatically make people interact with the program over a network...
"free software" was defined by the GNU/FSF... so I generally default to their definitions. I don't think the license falls afoul of their stated definitions.
That said, they're certainly anti-capitalist zealots, that's kind of their thing. I don't agree with that, but that's besides the point.
And if the AI companies don't like the license, they will ignore it or pay to be given a waiver. Long may they rot in hell for doing that.
Hell is, by design, a consequence for poor people. (People could literally pay the church to not go to hell[0]). Rich people have no consequences whatsoever, let alone poor people consequences.
[0] https://www.cambridge.org/core/books/abs/preaching-the-crusa...
Not "by design", as historically the hell came first. It was only much later that they catholic church started talking about the purgatory and the possibility of reducing your punishment by paying money.
The people running AI companies have figured out that there is no such thing as hell. We have to come up with new reasons for people to behave in a friendly way.
We already have such reasons. Besides, all religious "kindness" was never kindness without strings attached, even though they'd like you to think that was the case.
The people running AI companies aren't magic, they can't be certain about what comes after death.
If I can have AI retype all code per my desire, how exactly is source code special?
I like open source. I also don't think that is where the magic is anymore.
It was scale for 20 years.
Now it is speed.
Open source may be necessary but it is not sufficient. You also needed the compute power and architecture discoveries and the realisation that lots of data > clever feature mapping for this kind of work.
A world without open source may have given birth to 2020s AI but probably at a slower pace.
What's even crazier is that China are the good guys when it comes to open source AI.
We would have to know their intent to really know if they fit a general understanding "the good guys."
It's very possible that China is open-sourcing LLMs because it is currently in their best interest to do so, not because of some moral or principled stance.
I don't think the intent really matters once the thing is out in the open.
I want open source AI i can run myself without any creepy surveillance capitalist or state agency using it to slurp up my data.
Chinese companies are giving me that - I don't really care about what their grand plan is. Grand plans have a habit of not working out, but open source software is open source software nonetheless.
> I want open source AI i can run myself
What are you running?
> Chinese companies are giving me that
I have not become aware of anything other than DeepSeek. Can you recommend a few others that are worth looking into?
Alibaba's Qwen is pretty good, and it looks like Baidu just open sourced Ernie!
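For the "run it myself" part, the usual routes are llama.cpp for quantized checkpoints or Hugging Face transformers for the full weights. A minimal sketch of the latter; the model ID is one example of an open-weights Qwen release, so check the model card for whatever checkpoint fits your hardware:

    # Minimal local-inference sketch with Hugging Face transformers.
    # "Qwen/Qwen2.5-7B-Instruct" is an example open-weights checkpoint;
    # substitute whatever model and quantization your hardware allows.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "Qwen/Qwen2.5-7B-Instruct"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

    messages = [{"role": "user", "content": "Summarize the AGPL in one sentence."}]
    inputs = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(model.device)

    output = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(output[0][inputs.shape[-1]:], skip_special_tokens=True))

Everything runs on your own box; no data leaves the machine unless you send it somewhere.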
The country ruled by a "people's party" has almost no open source culture, while capitalism is leading the entire free software movement. I'm not sure what that says about our society and politics, but the absurdist in me has a good laugh every time I think about it :D
There’s actually a lot of open source software made by Chinese people. The government just doesn’t fund it. Not directly anyway, but there’s a ton of Chinese companies that do.
Capitalist countries (actually, there are no other kinds of economies in reality) are leading the open source software movement because it is a way for corporations to get software development services and products for free rather than paying for them. It's a way of lowering labour costs.
Highly paid software engineers working in a ZIRP economy with skyrocketing compensation packages were absolutely willing to play this game, because "open source" in that context often is/was a resume or portfolio building tool and companies were willing to pay some % of open source developers in order to lubricate the wheels of commerce.
That, I think, is going to change.
Free software, which I interpret as copyleft, is absolutely antithetical to them, and reviled precisely because it gets in the way of getting work for free/cheap and often gets in the way of making money.
But that's precisely why Meta are the "good guys": the parent called China the good guys in the same way that Meta is, though in this case many of the Chinese models are extremely good.
Meta has open sourced all of their offerings purely to try to commoditize the industry to the greatest extent possible, hoping to avoid their competitors getting a leg up. There is zero altruism or good intentions.
If Meta had an actually competitive AI offering, there is zero chance they would be releasing any of it.
Neither China nor Meta are the good guys, and they are not stewards of open source AI.
China has stopped releasing frontier models, and Meta doesn't release anything that isn't in the llama family.
- Hunyuan Image 2.0 (200 millisecond flux) is not released
- Hunyuan 3D 2.5, the top performing 3D model and an order of magnitude improvement over 2.1, is not released
- Seedream Video, which outperforms Google Veo 3 on ELO rankings, is not released
- Qwen VLo, an instructive autoregressive model, is not released
The list is much larger than this.
It's really hard to tell. Current models already ship with baked-in behaviors, like the extreme "What a great question!" flattery trend and all the crap that forces one to add counter-instructions so the model doesn't treat you like a child, a teenager, or a narcissist craving flattery. If that can really affect the mood and thinking of an individual, those Chinese models might as well have baked in something similar but targeted at reducing the productivity of certain individuals or weakening their belief in Western culture. I am not saying they are doing that, but they could do it sometime down the road without us noticing.
One private company in China, funded by running a quant hedge fund. I'm not sure China as in Xi is good.
Alibaba and Baidu both open source their models as well.
None of the big tech companies in China are releasing their frontier models anymore.
- Hunyuan Image 2.0 (200 millisecond flux) is not released
- Hunyuan 3D 2.5, the top performing 3D model and an order of magnitude improvement over 2.1, is not released
- Seedream Video, which outperforms Google Veo 3 on ELO rankings, is not released
- Qwen VLo, an instructive autoregressive model, is not released
I mean in some sense the Chinese domestic policy (“as in Xi”) made the conditions possible for companies like DeepSeek to rise up, via a multi-decade emphasis on STEM education and providing the right entrepreneurial conditions.
But yeah by analogy with the US, it’s not as if the W. Bush administration can be credited with the creation of Google.
Do we know if Meta will stick to its strategy of making weights available (which isn't open source to be clear) now that they have a new "superintelligence" subdivision?
It's not ideal, but having major players accidentally propping up an open ecosystem is probably the best-case outcome we could've hoped for
Don't make the mistake of anthropomorphizing Mark Zuckerberg. He didn't open source anything because he's a "good guy"; he's just commoditizing the complement.
The "good guy" is a competitive environment that would render Meta's AI offerings irrelevant right now if it didn't open source.
The reason Machiavellianism is stupid is that the grand ends the means aim at often never come to pass, while the awful things done in pursuit of them certainly do. So the motivation behind those means doesn't excuse them. And I see no reason the inverse doesn't hold true. I couldn't care less if Zuckerberg thinks open sourcing Llama is some grand scheme to let him take over the world and become its god-king emperor. In reality, that almost certainly won't happen. But what certainly will happen is the world getting free and open source access to LLM systems.
When any scheme involves some grand long-term goal, I think a far more naive approach to behaviors is much more appropriate in basically all cases. There's a million twists on that old quote that 'no plan survives first contact with the enemy', and with these sort of grand schemes - we're all that enemy. Bring on the malevolent schemers with their benevolent means - the world would be a much nicer place than one filled with benevolent schemers with their malevolent means.
> The reason Machiavellianism is stupid is that the grand ends the means aim to obtain often never come to pass
That doesn't feel quite right as an explanation. If something fails 10 times, that just makes the means 10x worse. If the ends justify the means, then doesn't that still fit Machiavellian principles? Isn't the complaint closer to "sometimes the ends don't justify the means"?
You have to assume a grand end is achievable through some knowable means. I don't see any real reason to think this is the case, certainly not on any meaningful timeframe. And I think it is even less true when we consider the typical connotation of Machiavellianism, which is achieving things through 'evil' actions.
It's extremely difficult to think of any real achievements sustained on the back of Machiavellianism, but one can list essentially endless entities whose downfall was brought on precisely by such.
Machiavellianism is not for everyone. It is specifically a framework for people in power: kings, heads of state, CEOs, commanders. These are competitive environments with a lot at stake (people's lives, money, the future), where it is often difficult to make decisions. Having a framework in place that allows you to make decisions is very useful.
Mitch Prinstein wrote a book about power, and it shows that dark traits aren't the standard in most leaders, nor are they the best way to get into or stay in power.
The author is "board certified in clinical child and adolescent psychology, and serves as the John Van Seters Distinguished Professor of Psychology and Neuroscience, and the Director of Clinical Psychology at the University of North Carolina at Chapel Hill," and the book is based on evidence.
Edit: you can't take a book from 1600 and a few living assholes with power and conclude that; there are plenty of philanthropists and other people around.
I'm not saying that the end outcome won't be beneficial. I don't have a crystal ball. I'm just saying that what he is doing is in no way selfless or laudable or worthy of praise.
Same goes for when Microsoft went gaga for open source and demanded brownie points for pretending to turn over a new leaf.
> Dont make the mistake of anthropomorphizing Mark Zuckerberg
Considering the rest of your comment it's not clear to me if "anthropomorphizing" really captures the meaning you intended, but regardless, I love this
I think it’s a play on “don’t anthropomorphize the lawn mower”, referring to Larry Ellison.
https://www.youtube.com/watch?v=-zRN7XLCRhc&t=2308s
Gotcha, thank you both, I totally missed this
Oh, absolutely -- I definitely meant that in the least complimentary way possible :). In a way, it's just the triumph of the ideals of "open source," -- sharing is better for everyone, even Zuck, selfishly.
The moment Meta produces something competitive with OpenAI is the moment they stop releasing the weights and rebrand from Llama. Mark my words.
they did say "accidentally". I find that people doing the right thing for the wrong reasons is often the best case outcome.
The price tag on this stuff, in human capital, data, and hardware, is high enough to preclude that sort of “perfect competition” environment.
Don’t let the perfect be the enemy of the good.
> The price tag on this stuff, in human capital, data, and hardware, is high enough to preclude that sort of “perfect competition” environment.
I feel like we live in that perfect competition environment right now, though. Inference is mostly commoditized, and it's a race to the bottom for price and latency. I don't think any of the big providers are making super-normal profit; they're probably discounting inference for access to data/users.
Only because everyone believes it’s a winner takes all game and this perfect competition will only last for as long as the winner hasn’t come out on top yet.
> everyone believes it’s a winner takes all game
Why would anyone think that, and why do you think everyone thinks that?
Because tech is now a handful of baronial oligopolies, and the AI companies are fighting to be the next generation of same.
And this pattern has repeated itself reliably since the industrial revolution.
Successful ASI would essentially end this process, because after ASI there's nowhere else for humans to go (in tech at least.)
Everyone always thinks this, at least in big tech. I've never heard a PM or exec say a market is not winner-take-all. It's some weird corpo grift lang that nothing is worth doing unless it's winner-take-all.
>he's just commoditizing the complement
That's a cool smaht phrase but help me understand, for which Meta products are LLMs a complement?
Meta's primary business is capturing attention and selling some of that attention to advertisers. They do this by distributing content to users in a way that maximizes attention. Content is a complement to their content distribution system.
LLMs, along with image and video generation models, are generators of very dynamic, engaging and personalised content. If Open AI or anyone else wins a monopoly there it could be terrible for Meta's business. Commoditizing it with Llama, and at the same time building internal capability and a community for their LLMs, was solid strategy from Meta.
So, imagine a world where everyone but Meta has access to generative AI.
There's two products:
A) (Meta) Hey, here are all your family members and friends, you can keep up with them in our apps, message them, see what they're up to, etc...
B) (OpenAI and others) Hey, we generated some artificial friends for you, they will write messages to you every day, almost like a real human! They also look like this (cue AI-generated profile picture). We will post updates on the imaginary adventures we come up with, written by LLMs. We will simulate a whole existence around you, "age" like real humans, and we might even get married among ourselves and have imaginary babies. You could attend our virtual generated wedding online, using the latest technology, and you can send us gifts and money to celebrate these significant events.
And, presumably, people will prefer to use B?
MEGA lmao.
A continuous stream of monetizable live user data?
The entire point of Meta owning everything is that it wants as much of your data stream as it can get, so it can then sell more ad products derived from that.
If much of that data begins going off-Meta, because someone else has better LLMs and builds them into products, that's a huge loss to Meta.
Sorry, I don't follow.
>because someone else has better LLMs and builds them into products
If that were true they wouldn't be trying to create the best LLM and give it for free.
(Disclaimer: I don't think Zuck is doing this out of the good of his heart, obv. but I don't see the connection with the complements and whatnot)
Meta has ad revenue. I think Meta’s play is to make it difficult for pure AI competitors to make revenue through LLMs.
Meta's play is to make sure there isn't an obvious superiority to one company's closed LLM -- because that's what would drive customers to choosing that company's product(s).
If LLM effectiveness is all about the same, then other factors dominate customer choice.
Like which (legacy) platforms have the strongest network effects. (Which Meta would be thrilled about)
That’s not commoditising the complement!
I think it's about sapping as much user data from competitors as possible. A company seeking to use an LLM has a choice between OpenAI, LLaMA, and others. If they choose LLaMA because it's free and host it themselves, OpenAI misses out on training data and other data like that
Well, is the loss of training data from customers using self-hosted Llama that big a deal for OpenAI or any of the big labs at this point? Maybe in late 2022/early 2023, during the early stages of RLHF'd mass-market models, but not today, I don't think. Offerings from the big labs have pretty much settled into specific niches, and people have started using them in certain ways across the board. The early land grab is over and consolidation has started.
Their primary product: advertisements.
It takes content to sell advertisements online. LLMs produce an infinite stream of content.
VR/metaverse is dead in the water without gen AI. The content takes too long to make otherwise
Oh, they're actually the bad guys, just folks haven't thought far enough ahead to realise it yet.
> bad guys
You imply there are some good guys.
What company?
There are plenty of companies that don't immediately qualify as "the bad guys".
For instance, of all the companies I've interviewed with or have friends working at that develop tech, some build and sell furniture. Some are your electricity provider or transporter. Some are building inventory management systems for hospitals and drug stores. Some develop a content management system for a medical dictionary. The list is long.
The overwhelming majority of companies are pretty harmless and ethically mundane. They may still get involved in bad practices, but that's not inherent to their business. The hot tech companies may be paying more (blood money if you ask me), but you have other options.
Depends. Does your definition of “good” mean “perfect”? If so, cynical remarks like “no one is good” would be totally correct.
Signal, Proton, Ecosia, DuckDuckGo, Mastodon, Deepseek.
You are searching in the wrong place if you look for "good guys" among commercial companies.
Google circa 2005?
Twitter circa 2012?
In 2025? Nobody, I don't think. Even Mozilla is turning into the bad guys these days.
Signal, Mastodon
Bluesky, Kagi
In my head at least, Bluesky is way closer to "the bad guys". I don't trust them at all; pretty sure that in spite of what they say, they're going to do the same sort of rug pull that Google did with their "don't be evil" assurances.
Google was bad the moment it chose its business model. See The Age of Surveillance Capitalism for details. Admittedly there was a nice period after it chose its model when it seemed good because it was building useful tools and hadn't yet accrued sufficient power / market share for its badness to manifest overtly as harm in the world.
DeepSeek et al.
Obv
There are some less bad.
But, can't think of one off hand. Maybe Toys-R-Us? Ooops gone. Radio Shack? Ooops, also gone.
On the scale of Bad/Profit, Nice dies out.
OK, lay it on us.
It’s not unreasonable given the mountain of evidence of their past behaviour to just assume they are always the “bad guy”.
I would normally agree, but in this instance we're talking about the company that made PyTorch and played an instrumental role in proliferating usable offline LLMs.
If you can make that algebra add up to "bad guy" then be my guest.
It seems like you're claiming that Pytorch + an open-weight LLM > everything on this wiki page, especially the anchored section https://en.wikipedia.org/wiki/Facebook_content_management_co...
I am. I genuinely don't understand how Meta's LLM contributions have anything to do with Myanmar.
It's like telling an iPhone user that iCloud isn't trustworthy because of the Foxconn suicide nets. It's basically the definition of a non-sequitur.
I wouldn't call mass piracy [0], for their own competitive gain, a "good" act. Especially when it seems they knew they were doing the wrong thing - and that they know the copyright complaints have grounds.
> The problem is that people don’t realize that if we license one single book, we won’t be able to lean into fair use strategy.
[0] https://www.theatlantic.com/technology/archive/2025/03/libge...
Just read Careless People.
Come on... Is it still necessary to remind everyone how evil Meta is? The only reason they released "open source" models was to annoy the competition. Their latest stunt: https://futurism.com/meta-sketchy-training-ai-private-photos
don't call them open source when they're not. They're shared-weights models.
It's just what they call them... hence the quotes.
They're involved in genocide and enable near-global tyranny through their surveillance and manipulation. There are no excuses for working for or otherwise enabling them.
[flagged]
Well at least they're doing it For Great Justice then.
This is an instance of bad guys fighting bad guys.
Your ability to use a lesser version of this AI on your own hardware will not save you from the myriad ways it will be used to prey on you.
Why not? Current open models are more capable than the best models from 6 months back. You have a choice to use a model that is 6 months old - if you still choose to use the closed version that’s on you.
And an inability to do so would not have saved you either.
[flagged]
Meta's open source AI strategy really did predate the frontier Chinese model wave.
Of course the CCP is the genuine one and never lies or does propaganda /s
Yep. An “are we the baddies” moment for us in tech. Though it still doesn’t seem to have clicked for most …
Wish it was only true in tech...
Most of Meta's models have not been released as open source. Llama was a fluke, and it helps to commoditize your complement when you're not the market leader.
There is no good or open AI company of scale yet, and there may never be.
A few that contribute to the commons are DeepSeek and Black Forest Labs. But they don't have the same breadth and budget as the hyperscalers.
Llama is not open source. It is at best weights available. The license explicitly limits what kind of things you are allowed to use the outputs of the models for.
Which, given what it was trained on, is utterly ridiculous.
Yup, but that being said, Llama is GPLv3 whether Meta likes it or not. Same as ChatGPT and all the others. ALL of them can perfectly reproduce GPLv3-licensed works and data, making them derivative works, and the license is quite clear on that matter. In fact, up until recently you could get ChatGPT to info-dump all sorts of things with that argument, but now when you try you will hit a network error, and afterwards it seems something breaks and it goes back to parroting a script about how it's under a proprietary license.
This is interesting but it has not been proven in court, right?
Is that easier to enforce than having AI only trained in a legal way (=obeying licenses / copyright law)?
Yes. Having training obey copyright is a big coordination problem that requires copyright holders to group together to sue Meta (and prove it broke copyright, which is not something that has been proven before for LLMs).
Whereas meta suing you into radioactive rubble is straightforward.
That's not true; the Llama that's open source is pretty much exactly what's used internally
> There is no good or open AI company of scale yet, and there may never be.
Deepseek, Baidu.
[flagged]
> This is hopelessly naive
When disagreeing, please reply to the argument instead of calling names. "That is idiotic; 1 + 1 is 2, not 3" can be shortened to "1 + 1 is 2, not 3."
https://news.ycombinator.com/newsguidelines.html
I agree, nobody should call anyone an idiot. However, naivety isn't a slur; it's a personality trait.
It's not ok here. The comment loses nothing if that sentence is removed.
Can someone make an honest argument for how OpenAI staff are missionaries, after the coup?
I'd be very happy to be convinced that supporting the coup was the right move for true-believer missionaries.
(Edit: It's an honest and obvious question, and I think that the joke responses risk burying or discouraging honest answers.)
That is just an act of a corpo CEO bullshitting employees and the press about high moral standards, mission, etc. Don't trust any of his words.
Anytime someone tells you to be in it for the mission, you are expendable and underpaid.
I don't at all disagree with you, but at the kind of money you'd be making at an org like OAI, it's easy to envision there being a ceiling, past which the additional financial compensation doesn't necessarily matter that much.
The problem with the argument is that most places saying this are paying more like a sub-basement, not that there can't genuinely be more important things.
That said, Sam Altman is also a guy who stuck nondisparagement terms into their equity agreement... and in that same vein, framing poaching as "someone has broken into our home" reads like cult language.
We shouldn’t even be using the offensive word “poaching.” As an employee, I am not a deer or wild boar owned by a feudal lord by working for his company. And another employer isn’t some thief stealing me away. I have agency and control over who I enter into an employment arrangement with!
I don't disagree with this either -- it's very clearly just a free market working both ways.
It also immediately reminds me of the no-cold-call agreements companies had with each other 10 or 15 years ago.
So then, is "headhunting" more or less bad?
I think anything that evokes “hunting on someone else’s land for his property” is equally inappropriate.
Would "bought" be better then? implies slavery!
There's a word for this, it's called being hired.
"Making a more competitive offer"
Those could be genuine words. The mission is to be expendable and make them rich.
Don't forget about the mission during the next round of layoffs and record-high quarterly profits.
Totally agree.
Well said.
Man, you are on a mission, to enable manumission!
https://en.m.wikipedia.org/wiki/Manumission
Crazy that this proves that engineers making >1 million USD/year can still be underpaid
Yes Capitalism is an amazing thing
I need you to be a team player on this one.
Could Facebook hire away OpenAI people just by matching their comp? Doubtful. Facebook is widely hated and embarrassing to work at. Facebook has to offer significantly more.
And if someone at OpenAI says hey Facebook just offered me more money to jump ship, that's when OpenAI says "Sorry to hear, best of luck. Seeya!"
In this scenario, you're only underpaid by staying at OpenAI if you have no sense of shame.
> Facebook is widely hated and embarrassing to work at.
Not sure it's widely hated (disclaimer: I work there), despite all the bad press. The vast majority of people I meet respond with "oh how cool!" when they hear that someone works for the company that owns Instagram.
"Embarassing to work at" - I can count on one hand the number of developers I've met who would refuse to work for Meta out of principle. They are there, but they are rarer than HN likes to believe. Most devs I know associate a FAANG job with competence (correctly or incorrectly).
> Could Facebook hire away OpenAI people just by matching their comp?
My guess is some people might value Meta's RSUs, which are very liquid, higher than OAI's illiquid stock? I have no clue how equity compensation works at OAI.
Within my (admittedly limited) social circle of engineers/developers there is consensus that working at Facebook is pretty taboo. I’ve personally asked recruiters to not bother.
Honestly I’d be happy to work at any FAANG. Early FB in particular was great in terms of keeping up with friends.
I’ve only interviewed with Meta once and failed during a final interview. Aside from online dating and defense I don’t have any moral qualms regarding employment.
My dream in my younger days was to hit 500k tc and retire by 40. Too late now
> defense
By defense do you mean like weapons development, or do you mean the entire DoD-and-related contractor system, including like tiny SBIR-chasing companies researching things like, uh
"Multi-Agent Debloating Environment to Increase Robustness in Applications"
https://www.sbir.gov/awards/211845
Which was totally not named in a backronym-gymnastics way of remembering the lead researcher's last vacation destination or hometown or anything, probably.
I'm trying to avoid anything primarily DoD related.
I guess I'd be ok with getting a job at Atlassian even if some DoD units use Jira.
I don't have anything against anyone who works on DOD projects, it's just not something I'm comfortable with
I wonder what it is that Facebook offered? It can't be money, so I think it's more responsibility or freedom. Or did they have some secret breakthroughs?
It's money. It's also a fresh, small org and a new project, which is exciting for a variety of reasons.
I can't explain why, but I don't think money is it. Nor can a new project or whatever be it either. It's just too small of a value proposition when you are already at OpenAI making banger models used by the world.
According to reports, the comp packages were in the hundreds of millions of dollars. I doubt anyone but execs are making that kind of money at OpenAI; it's the sort of money you hope for from a successful exit after years of effort. I don't blame them for jumping ship.
> Doubtful. Facebook is widely hated and embarrassing to work at. Facebook has to offer significantly more.
I’m at a point in my career and life at 51 that I wouldn’t work for any BigTech company (again) even if I made twice what I make now. Not that I ever struck it rich. But I’m doing okay. Yes I’ve turned down overtures at both GCP, Azure, etc.
But I did work for AWS (ProServe) from 46 to 49, remotely, knowing going in that it was a toxic shit show, for both the money and for the niche I wanted to pivot to (cloud consulting). I knew it would open doors, and it has.
If I were younger and still focused on money instead of skating my way to retirement, working remotely, doing the digital nomad thing off and on, etc., I would have no moral qualms about grinding leetcode and exchanging my labor for as much money as possible at Meta. No one is out here feeding starving children or making the world a better place working for a for-profit company.
My “mission” would be to exchange my labor for as much money as possible, and I tell all of the younger grads the same thing.
And will be fired/thrown under the bus the moment firing you is barely more profitable for the CxO than having you around.
Yeah, I used to work in the medical tech space. They love to tell you how much you should be in it for the mission, and that's why your pay is 1/3 of what you could make at FAANG... of course, when it came to our sick customers, they needed to pay market rates.
Yes, especially not his
There are a couple of ways to read the "coup" saga.
1) Altman was trying to raise cash so that OpenAI would be the first, best, and last to get AGI. That required structural changes before major investors would put in the cash.
2) Altman was trying to raise cash and saw an opportunity to make loads of money
3) Altman isn't the smartest cookie in the jar, and was persuaded by potential/current investors that changing the corp structure was the only way forward.
Now, what were the board's concerns?
The publicly stated reason was a lack of transparency. Now, to you and me, that sounds a lot like lying. But where did it occur, and what was it about? Was it about the reasons for the restructure? Was it about the safeguards that were offered?
The answer to the above shapes the reaction I feel I would have as a missionary.
If you're a missionary, then you would believe that the corp structure of OpenAI was the key thing stopping it from pursuing "damaging" tactics. Allowing investors to dictate oversight rules undermines that significantly, and allows short-term gain to come before long- and short-term safety.
However, I was bought out by a FAANG, one I swore I'd never work for, because they are industrial-grade shits. Yet here I am, many years later, having profited considerably from working at said FAANG. Turns out I have a price, and it wasn't that much.
Honest answer*:
I think building super intelligence for the company that owns, and will deploy, that super intelligence in service of tech's original sin (the algorithmic feed) is 100x worse than whatever OpenAI is doing, save maybe OpenAI's defense contract, which I have no details about.
Meta will try to buoy this by open-sourcing it, which, good for them, but I don't think it's enough. If Meta wants to save itself, it should re-align its business model away from the feeds.
In that way, as a missionary chasing super intelligence, I'd prefer OpenAI.
*because I don't have an emotional connection to OpenAI's changing corporate structure away from being a non-profit.
As a thought exercise, OpenAI can partner to apply the technology to:
- online gambling
- kids gambling
- algorithmic advertising
Are these any better? All of these are of course money wells and a logical move for a for-profit, IMHO.
And they can of course also integrate into a Meta competitor's algorithmic feeds as well, putting them at the same level as Meta in that regard.
All in all, I'm not seeing them having any moral high ground, even purely hypothetically.
Wait if an online gambling company uses OpenAI API then hosts it all on AWS, somehow OpenAI is more morally culpable than AWS? Why?
I saw the discussion as whether OpenAI is on a better moral ground than Meta, so this was my angle.
On where the moral burden lies in your example, I'd argue we should follow the money and see what has the most impact on that online gambling company's bottom line.
Inherently, that could have the most impact on what happens when that company succeeds: if online gambling companies become OpenAI's biggest clients, it wouldn't be surprising if OpenAI put more and more weight on being well suited for them.
Does AWS get especially impacted by hosting online gambling services? I honestly don't expect them to, not more than by community sites or concert ticket sellers.
There is no world in which online gambling beats back-office automation in pure revenue terms. I'm comfortable saying that OpenAI would probably have to spend more money policing to make sure their APIs aren't used by gambling companies than they'd make off of them. Either way, these are all imagined horrors, so it is difficult to judge.
I am judging the two companies for what they are, not what they could be. And as it is, there is no more damaging technology than Meta's various algorithmic feeds.
> There is no world in which online gambling beats other back-office automation in pure revenue terms.
Apple's revenue comes massively from in-app purchases, which are mainly games, and online betting has also entered the picture. We had Tim Cook on the stand explaining that they need that money and can't let Epic open that gate.
I think we're already there in some form or another, the question would be whether OpenAI has any angle for touching that pie (I'd argue no, but they have talented people)
> I am judging the two companies for what they are, not what they could be
Thing is, AI is mostly nothing right now. We're only discussing it because of its potential.
My point exactly. The App Store has no play in back-office automation, so the comparison doesn't make sense. AFAICT, OpenAI is already making billions on back-office automation. I just came from a doctor's visit where she was using some medical-grade ChatGPT wrapper to transcribe my medical conversation, meanwhile I fight with Instagram for the attention of my family members.
AI is already here [1]. Could there be better owners of super intelligence? Sure. Is OpenAI better than Meta. 100%
[1] https://www.cnbc.com/amp/2025/06/09/openai-hits-10-billion-i...
> I think building super intelligence for the company that owns and will deploy the super intelligence in service of tech's original sin (the algorithmic feed) is a 100x worse than whatever OpenAI is doing,
OpenAI announced in April they'd build a social network.
I think at this point it barely matters who does it; the ways in which you can make huge amounts of money from this are limited, and all the major players are going to make a dash for it.
Like I told another commenter, "I am judging the two companies for what they are, not what they could be."
I'm sure Sam Altman wants OpenAI to do everything, but I'm betting most of the projects will die on the vine. Social networks especially, and no one's better than Meta at manipulating feeds to juice their social networks.
> In that way, as a missionary chasing super intelligence, I'd prefer OpenAI.
There ain't no missionaries; they're all doing it for the money and will apply it to anything that will turn dollars.
If you have "superintelligence" and it's used to fine-tune a corporate product that preexisted it, you don't have superintelligence.
Altman has to be the most transparently two-faced tech CEO there is. I don't understand why people still lap up his bullshit.
Money.
What money is in it for the "rationalist", AI doom crowd that build up the narrative Altman wants for free?
Suggesting that the AI doom crowd is building up a narrative for Altman is sort of like saying the hippies protesting nuclear weapons are in bed with the arms makers because they're hyping up the destructive potential of hydrogen bombs.
That analogy falls flat. For one, we have seen the destructive power of hydrogen bombs through nuclear tests. Nuclear bombs are a proven, real threat that exists now. AGI is the boogeyman under the bed that somehow ends up never being there when you look for it.
It's a real negotiating tactic: https://en.wikipedia.org/wiki/Brinkmanship
If you convince people that AGI is dangerous to humanity and inevitable, then you can force people to agree to outrageous, unnecessary investments to reach the perceived goal first. This is exactly what happened during the Cold War, when Congress was thrown into hysterics by estimates of Soviet ballistic missile numbers: https://en.wikipedia.org/wiki/Missile_gap
Chief AI doomer Eliezer Yudkowsky's latest book on this subject is literally called "If Anyone Builds it, Everyone Dies". I don't think he's secretly trying to persuade people to make investments to reach this goal first.
The grandparent asked what money was in it for rationalists.
You're saying an AI researcher selling AI Doom books can't be profiting off hype about AI?
This reminds me a lot of climate skeptics pointing out that climate researchers stand to make money off books about climate change.
Selling AI doom books nets considerably less money than actually working on AI (easily an order of magnitude or two). Whatever hangups I have with Yudkowsky, I'm very confident he's not doing it for the money (or even prestige; being an AI thought leader at a lab gives you a built-in audience).
Isn't he a popular writer, not a tech person?
Please point me to the labs who are hiring non technical "thought leaders" so I can see what opportunities Yudkowsky turned down to go write books instead.
The inverse is true, though - climate skeptics are oftentimes paid by the (very rich) petrol lobby to espouse skepticism. It's not an asinine attack, just an insecure one from an audience that also overwhelmingly accepts money in exchange for astroturfing opinions. The clear fallacy in their polemic being that ad-hominem attacks aren't addressing the point people care about. It's a distraction from global warming, which is the petrol lobby's end goal.
Yudkowsky's rhetoric is sabotaged by his ridiculous forecasts, which present zero supporting evidence for his claims. It's the same broken shtick as Cory Doctorow or Vitalik Buterin: grandiose observations that resemble fiction more than reality. He could scare people if he demonstrated causal proof that any of his claims are even possible. Instead he uses this detachment to create nonexistent boogeymen for his foreign policy commentary that would make Tom Clancy blush.
What sort of unsupported ridiculous forecast do you mean? Can you point to one?
He absolutely is. Again, refer to the nuclear bomb and the unconscionable capital that was invested as a result of early successes in nuclear tests.
That was an actual weapon capable of killing millions of people in the blink of an eye. Countries raced to get one so fast that it was practically a nuclear Preakness Stakes for a few decades there. By framing AI as a doomsday weapon, you necessarily are begging governments to attain it before terrorists do. Which is a facetious argument when AI has yet to prove it could kill a single person by generating text.
Edward Teller worried about the possibility that the Trinity nuclear test might start a chain reaction with the nitrogen in the Earth's atmosphere, enveloping the entire planet in a nuclear fireball that destroyed the whole world and all humans along with it. Even though this would have meant that the bomb would have had approximately a billion times more destructive power than advertised, and made it far more of a doomsday weapon, I think it would also not have been an appealing message to the White House. And I don't think that realization made anyone feel it was more urgent to be the first to develop a nuclear bomb. Instead, it became extremely urgent to prove (in advance of the first test!) that such a chain reaction would not happen.
I think this is a pretty close analogy to Eliezer Yudkowsky's view, and I just don't see how there's any way to read him as urging anyone to build AGI before anyone else does.
> He absolutely is.
When people explicitly say "do not build this, nobody should build this, under no circumstances build this, slow down and stop, nobody knows how to get this right yet", it's rather a stretch to assume they must mean the exact opposite, "oh, you should absolutely hurry be the first one to build this".
> By collating AI as a doomsday weapon, you necessarily are begging governments to attain it before terrorists do.
False. This is not a bomb where you can choose where it goes off. The literal title of the book is "if anyone builds it, everyone dies". It takes a willful misinterpretation to imagine that that means "if the right people build it, only the wrong people die".
If you want to claim that the book is incorrect, by all means attempt to refute it. But don't claim it says the literal opposite of what it says.
"He looks like such a nice young man"
Dumb people need symbols. Same reason Elon gets worship.
People buy into the BS and are terrified of missing out or being left behind.
Tim Cook is right there. If I say "Vision Pro" I'll probably get downvoted out of a mere desire to not want to talk about that little excursion.
The Vision Pro flopped, but I don't see the connection to two-faced-ness. Help?
The "this is our best product yet" to "this is an absolute flop" pipeline has forced HN into absolute denial over the "innovation" their favorite company is capable of.
I'm not very informed about the coup -- but doesn't it just depend on what side most of the employees sat/sit on? I don't know how much of the coup was just egos or really an argument about philosophy that the rank and file care about. But I think this would be the argument.
There was a petition with a startlingly high percentage of employees signing it, but there's no telling how many of them felt pressured to sign to keep their jobs.
They didn't need pressuring. There was enough money involved that was at risk without Sam that they did what they thought was the best way to protect their nest eggs.
The thing where dozens of them simultaneously posted “OpenAI is nothing without its people” on Twitter during the coup was so creepy, like actual Jonestown vibes. In an environment like that, there’s no way there wasn’t immense pressure to fall into line.
That seems like kind of an uncharitable take when it can otherwise be explained as collective political action. I’d see the point if it were some repeated ritual but if they just posted something on Twitter one time then it sounds more like an attempt to speak more loudly with a collective voice.
Missionary (from wikipedia):
A missionary is a member of a religious group who is sent into an area in order to promote its faith or provide services to people, such as education, literacy, social justice, health care, and economic development. - https://en.wikipedia.org/wiki/Missionary
Post coup, they are both for-profit entities.
So the difference seems to be that when Meta releases its models (like bibles), it is promoting its faith more openly than OpenAI, which interposes itself as an intermediary.
I'd bet 100 quatloos that your comment will not have honest arguments below. You can't nurture missionaries in an exploitative environment.
Not to mention, missionaries are exploitative. They're trying to harvest souls for God or (failing the appearance of God to accept their bounty) to expand the influence of their earthbound church.
The end result of missionary activity is often something like https://www.theguardian.com/world/video/2014/feb/25/us-evang... .
Bottom line, "But... but I'm like a missionary!" isn't my go-to argument when I'm trying to convince people that my own motives are purer than my rival's.
There's one slightly more common outcome of your so-called "missionary activities".
Eh? There are plenty of cults, like Jehovah's Witnesses, that are exploitative as hell.
An honest argument is that cults often have missionaries.
100% agree. You are hearing the dictator claim righteousness.
This is just a CEO gaslighting his employees to "think of the mission" instead of paying up
No different than "we are a family"
But “we are family”
I got all my sisters with me.
> “I have never been more confident in our research roadmap,” he wrote. “We are making an unprecedented bet on compute, but I love that we are doing it and I'm confident we will make good use of it. Most importantly of all, I think we have the most special team and culture in the world. We have work to do to improve our culture for sure; we have been through insane hypergrowth. But we have the core right in a way that I don't think anyone else quite does, and I'm confident we can fix the problems.”
tldr. knife fights in the hallways over the remaining life boats.
[flagged]
They didn't mean it as a pun, but understanding it as a pun helps understand the situation.
In religions, missionaries are those people who spread the word of god (gospel) as their mission in life for a reward in the afterlife. Obviously, mercenaries are paid armies who are in it for the money and any other spoils of war (sex, goods, landholdings, etc.)
So I guess he's trying to frame it as them being missionaries for an Open and accepting and free Artificial Intelligence and framing Meta as the guys who are only in it for the money and other less savory reasons. Obviously, only true disciples would believe such framing.
English is my first language: they mean that Sam Altman's people are preaching a righteous future for AI, or something vague like that.
Close. A missionary is what the sex position was named after.
Specifically, Catholic missionaries indoctrinating indigenous cultures into their church's imaginary sexual hangups. All other positions were considered sinful.
Again, not a label I'd self-apply if I wanted to take the high road.
Sam vs Zuck... tough choice. I'm rooting for neither. Sam is cleverly using words here to make it seem like OpenAI are 'the good guys' but the truth is that they're just as nasty and power/money hungry as the rest.
Sam Altman literally casts himself as a god, apparently, and that's somehow to be taken as an indictment of his rivals. Maybe it's my Gen X speaking, but that's CEO bubblespeak for "OpenAI is fucked, abandon ship".
And thus far, considerably less “open”.
Strictly between the two, I'd go with Sam
Pretty telling that OpenAI only now feels like it has to reevaluate compensation for researchers while just weeks ago it spent $6.5 billion to hire Jony Ive. Maybe he can build your superintelligence for you.
Poachers don't like poachers. We all remember the secret and illegal anti-poaching agreement between Adobe, Apple, Intel, Intuit, Google and Pixar.
[flagged]
Just looked it up; looks like they bought or merged with a company he worked at or owned part of, at a valuation of 6.5 billion. Not sure about the details, e.g. how much of that he gets
https://duckduckgo.com/?q=ive+openai
https://en.wikipedia.org/wiki/Io_(company)
https://www.nytimes.com/2025/05/21/technology/openai-jony-iv... ( https://archive.is/2025.05.26-084513/https://www.nytimes.com... )
Do I "poach" a stock when I offer more money for it than the last transaction value? "Poaching" employees is just price discovery by market forces. Sounds healthy to me. Meta is being the good guys for once.
[flagged]
The elderly couple showed up with baseball bats?
Sounds like some tariffs should be applied as well, considering there's now a trade imbalance!
You must be new here. No joking allowed.
AFAIU, that is basically true? Isn't it in the guidelines somewhere? Sarcasm or (exclusive-or!) really good humor get a pass in practice.
I think it’s a matter of style or finesse. If you can make it look good, even breaking the rules is socially acceptable, because a higher order desire is to create conditions in individuals where they break unjust rules when the greater injustice would be to censor yourself to comply with the rules in a specific case.
Good artists copy, great artists steal.
Good rule followers follow the rules all the time. Great rule followers break the rules in rare isolated instances to point at the importance of internalizing the spirit that the rules embody, which buttresses the rules with an implicit rule to not follow the rules blindly, but intentionally, and if they must be broken, to do so with care.
> I have spread my dreams under your feet;
> Tread softly because you tread on my dreams.
https://en.wikipedia.org/wiki/Aedh_Wishes_for_the_Cloths_of_...
See.
I can fully believe one can be funny in a way that isn’t validated or understood, or even perceived as humorous. I’m not sure HN is a good bellwether for comedic potential.
If you don’t adhere to the guidelines we’ll send mean and angry emails to dang.
> If you don’t adhere to the guidelines we’ll send mean and angry emails to dang.
That’s so weird, you’re on! That makes two of us! When I don’t adhere to the guidelines, I also send mean and angry emails to dang. Apologies in advance, dang.
What I hear is: “The person that profits from employees who don’t prioritize money encourages employees to not prioritize money.”
Unsurprising, unhelpful for anyone other than sama, unhealthy for many.
This concept is not at all tied to trying to depress salaries and goes back decades: https://knowledge.wharton.upenn.edu/article/mercenaries-vs-m...
I don't imagine Sam Altman said this because he thinks it'll somehow save him money on salaries down the line.
The 2000 article seems to refer to venture capitalists, while Altman refers to workers.
I don't think the context is the same. In the context of Altman, he wants 'losers'.
People taking my money in exchange for doing a thing - Missionaries
People taking someone else's money in exchange for doing a thing - Mercenaries
got it
Missionaries are sent, called by faith.
Mercs don't take money, they earn it.
Why not be both?
I'm pretty sure Sam Altman's only mission in life is to be as personally wealthy as Mark Zuckerberg. Is that mission really supposed to inspire undying loyalty and insane workloads from OpenAI staffers?
Yes, if wealth is also what they want, and they believe the crumbs trickling down will be big enough.
You realize he takes a <$100k salary and doesn't own any equity in OpenAI right?
There has yet to be a value OpenAI originally claimed to hold that lasted a second longer than there was profit motive to break it.
They went from open to closed. They went from advocating UBI to for-profit. They went from pacifist to selling defense tech. They went from a council overseeing the project to a single man in control.
And that's fine, go make all the money you can, but don't try to do this sick act where you try to convince people to thank you for acting in your own self-interest.
Does he have the same conviction when people from other companies decide to join OpenAI?
It's only bad when other people do it.
It’s only bad when other people do it to him.
"Apostates who turned to darkness" vs "Converts who saw the light".
There can be no peace until they renounce their Rabbit God and accept our Duck God!
https://norberthaupt.com/2015/11/22/the-rabbit-god-and-the-d...
Just checking my notes here.
This is the same Sam Altman who abandoned OpenAI’s founding mission in favour of profit?
No it can’t be
Yeah, I can't help but think he is starting to more closely fit the definition of a mercenary.
For example, I'm on a mission to build a better code editor for the world. That's cost me 4 years of my life and several hundred thousand dollars.
He wanted one, so he bought it for 3 billion. I think he's doomed to fail there for pretty much the exact reasons he states here...
The word "beat" and "mercenaries" are also quite important here -- to me, this is Altman's way of saying "you losers who left OpenAI, you will pay a steep price, because we will mess with you really deeply". The threat to Meta is just a natural consequence of that, to the extent that Meta clings onto said individuals.
Or the one who's sister filed a suit against him for sexual abuse?
"missionary" pfff...
[flagged]
Sam Altman is not a bit different from Mark Zuckerberg. His mission is to make money and gather as much information about individuals as he can process, to be used for his benefit; all the rest is just blah blah.
It'll be a sad type of fun watching him become another disgusting and disconnected bazillionaire. Remember when Mark was a Honda Fit driving Palo Alto based founder focused on 'connecting people' and building a not yet profitable 'social network' to change the world?
This is a repeat of the fight for talent that always happens with these things. It's all mercenary - it's all just business. Otherwise they'd remain an NGO.
I can't help but think that it would have been a much better move for him to get fired from OpenAI. Allow that to do its own thing, and start other ventures with a clean reputation and millions instead of billions in the bank.
> Remember when Mark was a Honda Fit driving Palo Alto based founder focused on 'connecting people'…?
That Mark must have come after the Mark that created a site in college where visitors compared two women and ranked which of the two was "hotter".
That's fair. Legend is Sam was the guy applying to YC at 17 with "I am Sam Altman and I am coming"
So yeah. Naked ambition. They're both just creaming their pants for power.
“College” is underselling it. He went to Harvard. He sold a friendly, down-to-earth image early on and people bought it. But don’t forget he went to Harvard.
People change, even from horny college nerd into altruistic missionary into oligarch. And even beyond.
Or it turns out that people don't change, as explored in the entirely fictitious but very enjoyable film The Social Network. All those steps, even the horny college nerd, were facades, and the real core of his character is naked ambition. He will warp himself into any shape in order to pursue wealth and power. To paraphrase Robert Caro, power does not corrupt, it reveals.
I think half of him truly believes that his work ultimately will benefit humanity as a whole and half of him is a cynical bastard like the rest of them.
Ultimately, he’ll just realize that humanity doesn’t give a fuck, and that he’s in it for himself only.
And the typical butterfly-to-caterpillar transition will be complete.
We have essentially zero reason to believe he cares about any benefit to humanity at all.
I too truly believe that if you make me the richest person in the world, at the expense of all other people, it will be a benefit to humanity.
Well, a lot of AI bros think that AI can generate novel solutions to all of the world's problems, if they just get more data and processing power. I.e., the AI God (all-knowing), which, when you take a step back, is utter lunacy. How can LLMs generate solutions to climate change if they're predictive models?
All of this is to say, they delude themselves that the future of humanity needs "AI" or we are doomed. Ironically, the creation and expansion of LLMs has drastically increased the power usage of humanity, to its own detriment.
I honestly believe the current AI hype and its associated wasteful power usage will be what seals humanity's fate.
Big Tech has become a doomsday cult.
I've seen paying people too much completely erode the core of teams. It's really hard to convince yourself to work 60 hour weeks when you have generational FU$ and a family you love.
Man I disagree with this on multiple points:
I wouldn't describe a team full of people who don't want to work 60-hour weeks as "eroded", cus like... that's six 10-hour days, leaving incredibly little time for family, chores, unwinding, etc. Once in a while, maybe, but sustained, that'll just burn people out.
And also by that logic, is every executive paid $5M+/yr in every company, or every person who's accumulated say $20M, also eroding their team? Or is that only applied to someone who isn't managing people, for some reason?
There are people who are capable of working 60+ hours/week long term without burnout; they are very rare but do exist (I know like two or three).
I know many, though most of them are either founders, business owners or farmers. FWIW, only one person in that list is an employee.
I guess there's a subset that sustains it for a known set of years: surgical residents, at 60-80+ hours.
There's limited or no evidence of this in other domains where astonishing pay packages are used to assemble the best teams in the world (e.g., sports).
There's vast social honour and commensurate status attached to activities like being a sports/movie star. Status that can easily be lost, and cannot be purchased for almost any amount of money. Arguably that status is a greater motivator than the financial reward; see, e.g., the South Korean idol system. It's certainly not going to be diminished as a motivator by financial reward. There's no equivalent for AI researchers. At best the very best may win the acclaim of their peers and a Nobel Prize. It's not a remotely equivalent level of celebrity / access to all the treasures the world can provide.
Top AI researchers are about the closest thing to celebrity status that has ever been attainable for engineering / CS folks outside of winning a Nobel Prize. Of course, the dopamine cycle and public recognition and adoration are nowhere near the same level as professional sports, but someone being personally courted by the world's richest CEOs handing out $100M+ packages is still decidedly not experiencing anything close to a normal life. Some of these folks still had their hiring announced on the front pages of the NYT and WSJ -- something normally reserved for top CEOs or, yes, sports stars and other legitimate celebrities.
Sports have a much, much tighter feedback loop on performance than anything in software, and certainly tighter than R&D.
Same with a lot of the financial roles with comp distributions like this.
Either Meta makes rapid progress on frontier-level AI in the next year or it doesn't -- there's definitely a feedback loop that's measured in tangible units of time. I don't think it's unreasonable to assume that when Zuck personally hires you at this level of compensation, there will be performance expectations and you won't stick around for long if you don't deliver. Even in top-tier sports, many underperformers manage to stick around for a couple years or even a half-decade at seven or eight figure compensation before being shown the door.
In reality, all frontier models will likely progress at nearly the same pace, making it difficult to disaggregate this team's performance from the others'. More importantly, it'll be nearly impossible to disaggregate any one contributor's performance from the others', making it basically impossible to enforce accountability without many, many repetitions to eliminate noise (see the toy sketch below).
> Even in top-tier sports, many underperformers stick around for a couple years or a half-decade at seven or eight figure compensation before being shown the door.
This can happen in the explicit hopes that their performance improves, not because it's unclear whether they are performing, and not generally over lapses in contract.
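A quick toy simulation of that noise point (parameters invented): when shared luck dwarfs any one person's contribution, a handful of "seasons" can't separate a star from a replacement.

    # Illustrative simulation (invented parameters): shared noise vs. one
    # contributor's skill in a team-level outcome.
    import random

    def team_outcome(star_skill):
        shared_luck = random.gauss(0, 10)               # market/scaling luck, shared by all
        contribution = star_skill + random.gauss(0, 1)  # one researcher's delta
        return contribution + shared_luck

    random.seed(0)
    with_star = [team_outcome(2.0) for _ in range(5)]     # team with a +2 "star"
    without_star = [team_outcome(0.0) for _ in range(5)]  # team without
    print([round(x, 1) for x in with_star])
    print([round(x, 1) for x in without_star])
    # With only five "seasons" each, the two samples overlap heavily; it takes
    # many repetitions before a +2 skill edge is distinguishable from luck.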
There are plenty of established performance management mechanisms to determine individual contributions, so while I wouldn't say that's a complete nonissue, it's not a major problem. The output of the team is more important to the business anyway (as is the case in sports, too).
And if the team produces results on par with the best results being attained anywhere else on the planet, Zuck would likely consider that a success, not a failure. After all, what's motivating him here is that his current team is not producing that level of results. And if he has a small but nonzero chance of pushing ahead of anyone else in the world, that's not an unreasonable thing to make a bet on.
I'd also point out that this sort of situation is common in the executive world, just not in the engineering world. Pretty much every top-tier executive at top-tier companies is making seven or eight figures as table stakes. There's no evidence I'm aware of that this reduces executive or executive team performance. Really, the evidence is the opposite -- companies continue paying more and more to assemble the best executive teams because they find it's actually worth it.
> There are plenty of established performance management mechanisms to determine individual contributions
"Established" != valid, and literally everyone knows that.
The executives you reference are never ICs and are definitionally accountable to the measured performance of their business line. These are not superstar hires the way that AI researchers (or athletes) are. The body in the chair is totally interchangeable so long as the spreadsheet says the right number, and you expect the spreadsheet performance to be only marginally controlled by the particular body in the chair. That's not the case with most of these hires.
I'd say execs getting hired for substantial seven- and eight-figure packages, with performance-based bonuses / equity grants and severance deals, absolutely do have a lot more in common with superstars than with most other professionals. And, just like superstars, they're hired based off public reputation more than anything else (just the sphere of what's "public" is different).
It's false that execs are never ICs. Anyone who's worked in the upper-echelon of corporate America knows that. Not every exec is simply responsible 1:1 for a business line. Many are in transformation or functional roles with very complex responsibilities across many interacting areas. Even when an exec is responsible for a business line in a 1:1 way, they are often only responsible for one aspect of it (e.g., leading one function); sometimes that is true all the way up to the C-suite, with the company having literally only a single exception (e.g., Apple). In those cases, exec performance is not 1:1 tied to the business they are 1:1 attached to. High-performing execs in those roles are routinely "saved" and banked for other roles rather than being laid off / fired in the event their BU doesn't work out. Low-performing execs in those roles are of course very quickly fired / re-orged out.
If execs really were so replaceable and it's just a matter of putting the right number in a spreadsheet, companies wouldn't be paying so much money for them. Your claims do not pass even the most basic sanity check. By all means, work your way up to the level we're talking about here and then report back on what you've learned about it.
Re: performance management and "everyone knowing that", you're right of course -- that's why it's not an interesting point at all. :) I disagree that established techniques are not valid -- they work well and have worked for decades with essentially no major structural issues, scaling up to companies with 200k+ employees.
I did not say their performance is 1:1 with a business line, but great job tearing down that strawman.
I said they are accountable to their business line -- they own a portfolio and are accountable for that portfolio's performance. If the portfolio does badly, it means nearly by definition that the executive is doing badly. Like an athlete, that doesn't mean they're immediately put to the streets, but it also is not ambiguous whether they are performing well or not.
Which also points to why performance management methods are not valid, i.e. a high-sensitivity, high-specificity measure of an individual executive's actual personal performance: there are obviously countless external variables that bear on the outcome of a portfolio. But nonetheless, for the business's purpose, it doesn't matter. Because the real purpose of performance management methods is to have a quasi-objective rationalization for personnel decisions that are actually made elsewhere.
Perhaps you can mention which performance management methods you believe are valid (high-specificity and high-sensitivity measures of an individual's personal performance) in AI R&D?
"Pretty much every top-tier executive at top-tier companies is making seven or eight figures as table stakes". In this group, what percentage are ICs? Sure there are aberrational celebrity hires, of course, but what you are pointing to is the norm, which is not celebrity hires doing IC work.
> If execs really were so replaceable... companies wouldn't be paying so much money for them
High-level executives within the same tier are largely substitutable - any qualified member of this cohort can perform the role adequately. However, this is still a very small group of people ultimately responsible for huge amounts of capital and thus collectively can maintain market power on compensation. The high salaries don't reflect individual differential value. Obviously there are some remarkable executives and they tend to concentrate in remarkable companies, by definition, and also by definition, the vast majority of companies and their executives are totally unremarkable but earn high salaries nonetheless.
I disagree. If you want similarly-tight feedback loops on performance, pair programming/TDD provides it. And even if you hate real-time collaboration or are working in different time zones, delightful code reviews on small slices get pretty close.
Are these people just “programming?”
Take-one-for-the-team salaries win out there too. Tom Brady and the Patriots dynasty say hi.
I'd say it would be easier to do 60 hours when you can afford servants to take care of the rest of life.
Is working 60 hours a week necessary to have a good team? Sure, having FU$, as you put it, removes the necessity to keep the scale of your work-life balance tipped to the benefit of your employer. But again, a good work-life balance should not imply erosion of the team.
Working more than 50 hours a week is counter-productive, and even working 50 hours a week isn't consistently more productive than working 40 hours a week.
It is very easy to mistake _feeling_ productive and close with your coworkers for _being_ productive. That's why we can't rely on our feelings to judge productivity.
That's why CEO pay is so low. They take the honor in leadership and across the board just take a menial compensation package. Why work hard, 60 hrs a week even, if you get paid FU$? This is why boards limit comp packages so aggressively.
> It's really hard to convince yourself to work 60 hour weeks
Why would they do that? There is absolutely no reason to overwork.
I was ready to downvote you, giving examples of how $100M+ net worth individuals are probably the hardest workers (or were, to get there), just like most of the people replying to you, but your `and a family you love` tripped me up. I sorta agree... if you want to maximize time with family and you have FU$, would you really, really work that hard?
I am not saying exactly that they don't love their family... but it's not necessarily a priority over glory, more money, or being competitive. And if the relationship is healthy and built on solid foundations, usually the partner knows what they're getting into and accepts the other person (children, on the other hand, had no choice).
It's a weird take to tie this up with team morale, though.
> really hard to convince yourself to work 60 hour weeks
Good!
The game-theoretic aspect of this is quite interesting. If Meta makes OpenAI's model improvements open source, then the value of every poached employee will decline significantly as time goes on. That means it's in the employees' best interest to leave first, if their goal is to maximize their income.
Open source could also be a bait and switch.
i.e., Zuck has no intention of continuing to open up the models he creates. Thus, he knows he can spend the money to get the talent, because he has every intention of making it back.
Zuck has the best or the second best distribution on the planet.
If he neutralizes the tech advantage of other companies his chances of winning rise.
How well was Zuck able to use his massive distribution channels to win in his cryptocurrency project, or the Metaverse after that?
Meta has become too fickle with new projects. To the extent that LLAMA can help them improve their core business, they should drive that initiative. But if they get sidetracked on trying to build “AI friends” for all of their human users, they are just creating another “solution in search of a problem”.
I hope both Altman and Zuck become irrelevant. Neither seems particularly worthy of the power they have gained, and neither is willing to show a spine in the face of government coercion.
Allegedly they were offered $100M just in the first year. I think they will be fine.
That was immediately proven to be false, both by Meta leadership and the poached researchers themselves. Sam Altman just pulled the number out of his ass in an interview.
That's my point. The ones that left early got a large sum of money. The ones that leave later will get less. That would incentivize people to be the first to leave.
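To make the shape of that argument concrete, here's a toy sketch in Python; the numbers (offer size, decay rate) are entirely made up to illustrate the claimed incentive, not anyone's actual comp:

    # Hypothetical: each wave of departures (and each open release) erodes
    # the scarcity premium an individual researcher can command.
    first_offer = 100.0  # offer to the first mover, in arbitrary units
    decay = 0.7          # fraction of the premium left after each wave

    for wave in range(5):
        print(f"wave {wave}: offer ~ {first_offer * decay ** wave:.1f}")
    # wave 0: ~100.0, wave 1: ~70.0, ... wave 4: ~24.0. Under these
    # assumptions, waiting strictly shrinks the offer, so the dominant
    # strategy for an income-maximizer is to move first.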
Sam Altman complaining about "unethical" corporate behavior is pure gold
> OpenAI is the only answer for those looking to build artificial general intelligence
Let’s assume for a moment that OpenAI is the only company that can build AGI (specious claim), then the question I would have for Sam Altman: what is OpenAI’s plan once that milestone is reached, given his other argument:
> And maybe more importantly than that, we actually care about building AGI in a good way,” he added. “Other companies care more about this as an instrumental goal to some other mission. But this is our top thing, and always will be.
If building AGI is OpenAI's only goal (unlike other companies), will OpenAI cease to exist once the mission is accomplished, or will a new mission be devised?
OpenAI’s only goal isn’t building AGI. It is to build it first and make money off it.
Exactly! The Microsoft-OpenAI agreement states that AGI is whatever makes them $100 billion in profits. Nothing in there about anything intelligence-related.
>The two companies reportedly signed an agreement last year stating OpenAI has only achieved AGI when it develops AI systems that can generate at least $100 billion in profits.
https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
And in the meantime, their goal is clearly to make money off non-AGI AI.
I constantly get quasi-religious vibes from the current AI "leaders" (Altman, Amodei, and quite a few of the people who have left both companies to start their own). I never got those sort of vibes from Hinton, LeCun, or Bengio. The latest crop really does seem to believe that they're building some sort of "god" and that their god getting built first before one of their competitors builds a false god is paramount (in the literal meaning of the term) for the future of the human race.
What even is the monetization plan for AI? It seems like the cutting-edge tech becomes immediately devalued to nothing after a few months, when a new open source model is released.
After spending so many billions on this stuff, are they really going to pay it all off selling API credits?
They have no idea but they're building a new AI and will soon ask it.
It's incredibly disheartening when you realize that the entire house of cards that is the internet is built on one thing: advertising money.
Is there even a point to money post AGI?
Something tells me food and water supplies, weapons and private security forces aren't going to be paid for in OAI compute credits.
Not the real thing, no.
Yes, because the development of AGI doesn't automatically mean the end of capitalism. Feudalism, mercantilism, and the final form, capitalism, weren't overthrown by new technologies, and while AGI is certainly a very special new technology, so was the internet. It doesn't matter how special AGI is if it's controlled by one company under the mechanisms of a capitalist liberal democracy - it's not like the laws don't matter anymore, or the contracts, debts, allegiances.
What can AGI give us that would end scarcity, when our scarcity is artificial? New farming mechanisms that mean nobody goes hungry? We already throw away most of our food. We don't lack food; our resource allocation mechanism (Capitalism) just requires some people to be hungry.
What about new medicines? Magic new pills that cure cancer - why would these be given away for free when they can be sold, instead?
Maybe AGI will recommend the perfect form of fair and equitable governance! Well, it almost certainly will be a recommendation that strips some power from people who don't want to give up any power at all, and it's not like they'll give it up without a fight. Not that they'll need to fight - billionaires exist today and have convinced people to fight for them, against people's own self interest, somehow (I still don't understand this).
So, I'll modify Mark Fisher's quote - it's easier to imagine the creation of AGI than it is to imagine the end of capitalism.
>our resource allocation mechanism (Capitalism) just requires some people to be hungry
One of the observable features of capitalism is that there are no hungry people. Capitalism has completely solved the problem of hunger. People are hungry when they don't have capitalism.
>billionaires exist today and have convinced people to fight for them
People are usually fighting for themselves. It's just that billionaires are often not enemies of society but a source of social well-being. Or, even more often, a side effect of social well-being. People fight for billionaires to protect social well-being, not to protect the billionaires.
>it's easier to imagine the creation of AGI than it is to imagine the end of capitalism
There is no need to even imagine the end of capitalism - we see it all the time, most of the world can hardly be called capitalist. And the less capitalism there is, the worse.
> One of the observable features of capitalism is that there are no hungry people. Capitalism has completely solved the problem of hunger. People are hungry when they don't have capitalism.
This is as fascinating to me as if someone walked up to me and said "Birds don't exist." It's a statement that's instantly, demonstrably wrong by simply turning and pointing at a bird, or in this case, by Googling "child hunger in the USA" and seeing a shitload of links demonstrating that 12.8% of US households are food insecure.
Or, the secondary point, that hunger exists only without capitalism: demonstrably untrue, since the countries that ensure capitalism can continue to thrive by providing cheap labor have visible extreme hunger, such as India. India isn't capitalist? America isn't capitalist? Madagascar isn't capitalist? Palestine?
> It's just that billionaires often are not enemies of society, but source of social well-being.
How can someone not be an enemy of society when they maintain artificial scarcity by hoarding such a massive portion of society's output, and then acting to hoard and concentrate our collective wealth even more into their own hands? Since when has "greed" not been a universally reviled trait?
> we see it all the time, most of the world can hardly be called capitalist. And the less capitalism there is, the worse.
I genuinely can't understand what you're seeing in the world to think the global economy is not capitalist in nature.
> seeing a shitload of links demonstrating that 12.8% of US households are food insecure.
This is definitely not a manipulation of statistics, and definitely not a trivialization of the food insecurity that is relevant to many parts of the world. And then they wonder why people choose to support billionaires instead of you lying cannibals.
> such as India
> Madagascar isn't capitalist? Palestine?
No? These countries have nothing to do with an economy built on the principles of the inviolability of private property and economic freedom. The USA has more socialism than these countries have capitalism.
> How can someone not be an enemy of society when they maintain artificial scarcity by hoarding such a massive portion of society's output
Because it is not the portion of society's output that matters, but the size of that output. What's the point of even distribution if the size of each share is not enough even to avoid dying from starvation?
> Since when has "greed" not been a universally reviled trait?
The question is not whether greed is a reviled trait or not. Greed is a fact of human nature. The question is what this ineradicable human quality leads to in specific economic systems: universal prosperity, as under capitalism, or various abominations like mass starvation, as without it.
The profit cap was supposed to be for the first to achieve AGI being the endgame, and it would ensure redistribution (though apparently with some kind of Altman tax through an early World Coin ownership stake). When they realized they wouldn't reach AGI with current funding, and they were so close to a $100 billion market cap that they couldn't entice new investors on $100 billion in profits, why didn't they set it to, say, $10 trillion instead of infinity? Because they are missionaries?
A leaked email from Ilya early on even said they never planned to open source stuff long term, it was just to entice researchers at the beginning.
The whole company is founded on lies, and Altman was even fired from YC over self-dealing or something, in what I think was a since-deleted YC blog post, if I remember right.
Nope AGI is not the end goal - https://blog.samaltman.com/the-gentle-singularity
> OpenAI is a lot of things now, but before anything else, we are a superintelligence research company.
IMO, AGI is already a very nebulous term. Superintelligence seems even more hand-wavy. It might be useful to define and understand the limits of "intelligence" first.
Superintelligence has always been a rhetorical sleight of hand to equate "better than human" with "literally infinitely improving and godlike", in spite of optimization always leveling off eventually for one reason or another.
I wouldn't worry, I forecast we'll have peace in the Middle East before we have true AGI.
Why does this feel like the "Friendship Ended With Mudasir" meme?
https://knowyourmeme.com/memes/friendship-ended-with-mudasir
hilarious seeing that he views it this way when his company is so very well known for taking (strong arguments say stealing) everything from everyone.
i’m noticing more and more lately that our new monarchs really do have broken thought patterns. they see their own abuse towards others as perfectly ok but hilariously demand people treat them fairly.
small children learn things that these guys struggle to understand.
I think they understand that it's all performative
Sam comes across as an extremely calculating person to me. I'm not suggesting he's necessarily doing this for bad reasons, but it's very clear to me that his public-facing communications are well considered and likely not fully reflective of his actual views and positions on things, but instead reflect what he believes to be power-maximising.
He's very good at creating headlines and getting people talking online. There's no doubt he's good at what he does, but I don't know why anyone takes anything he says seriously.
This interview with Karen Hao is really good (https://www.youtube.com/watch?v=8enXRDlWguU); she interviews people who have had one-on-one meetings with Sam, and they always say he aligned with them on everything, to the point where they don't actually know what he believes. He will tailor his opinions to try and weave in trust.
SBF demonstrated how utilitarian thought + massive money can easily spiral into self-centered anti-social behavior.
Being a billionaire seems to be inherently bad for human brains.
Even more blatantly and directly, "Don't you dare use our model, trained on other people's work, to train yours".
What an odd turn of phrase. Historically speaking, mercenaries have absolutely slaughtered missionaries in every confrontation.
If missionaries could be mercenaries, they would.
Also, OpenAI ain't missionaries, it's a for profit company full of people working there for a fat paycheck and equity.
Both are orthogonal concepts.
Anchoring the reader’s opinion by using the phrase “Missionaries” is pure marketing. Missionaries don’t get paid huge dollars or equity, they do it because it’s a religion / a calling.
Ultimately why someone chooses to work at OpenAI or Meta or elsewhere boils down to a few key reasons. The mission aligns with their values. The money matches their expectations. The team has a chance at success.
The orthogonality is irrelevant because nobody working for OpenAI or Meta is a missionary.
The term comes from John Doerr (https://www.youtube.com/watch?v=n6iwEYmbCwk). But Altman kicked most of the missionaries out during the corporate turmoil in 2023, so not sure where this comes from.
From that, by the way:
>...on one hand, the mercenaries: they have enormous drive, they're opportunistic, like Andy Grove they believe only the paranoid survive, and they're really sprinting for the short run. But that's quite different, I suggest to you, than the missionaries, who have passion, not paranoia, who are strategic, not opportunistic, and who are focused on the big idea and partnerships. It's the difference between focusing on the competition or the customer.
It's a difference between worshiping at the altar of founders or having a meritocracy where you get all the ideas on the table and the best ones win. It's a difference between being exclusively interested in the financial statements or also in the mission statements. It's a difference between being a loner on your own or being part of a team; having an attitude of entitlement versus contribution; or, as Randy puts it, living a deferred life plan versus a whole life that at any given moment is trying to work. The difference between just making money (anybody who tells you they don't want to make money is lying) or making money and making meaning also. My bottom line: it's the difference between success, or success and significance.
And both missionaries and mercenaries are responsible for the abuse and obliteration of many millions. Neither are forces for good.
The force has a light side and a dark side. Apparently Switzerland is so famously neutral because their national export was mercenaries. You can't take sides in wars if you want to sell soldiers to both sides...
But also I imagine that it helps when you wish to stay neutral if people are afraid of what you could do if you were directly involved in a conflict.
Missionary as founder, mercenary as employee, everyone happy
The Knights Templar (https://en.wikipedia.org/wiki/Knights_Templar) were kinda both, but modern AI is more mercenary, out to grab all the profits and become a monopoly.
Yeah, apparently being well fed, well paid, and extensively prepared helped? It's like these mercenaries were actually what you would call "professionals".
Mercenaries get paid to follow orders and kill. Missionaries are more independent. That is the sell point to the OpenAI worker.
Except we have a lot more missionaries than mercenaries now. Right? So who won?
Actually, I think that people who do it for the love of the game are the true winners here, whether they work for a company or not. You can't beat intrinsic motivation.
Intrinsic motivation + a seven figure paycheck seems strictly better than intrinsic motivation alone
“I don’t think Sam is the guy who should have the finger on the button for AGI.”
- Ilya Sutskever, co-founder, co-lead of the Superalignment Team, departed early 2024
- May 15, 2025, The Atlantic
Anyway, I concur it's a hard choice as one other comment mentions.
Is this a button any one person should have their finger on?
Good point. I like George Hotz’ philosophy on this - paraphrasing badly - if everyone has AI, nobody can be subjugated by AI.
That's all fine and dandy if we skip the development phase and AI is already invented.
But this doesn't work during the transition. During the development. "The button" here is for AGI. As in, when it's created and released.
Exactly. AGI is something that will significantly affect all of humanity. It should be treated like nuclear weapons.
Effectively kept secret and in the shadows by those working on it, until a world-altering public display makes it a hot, politically charged issue, unaltered even 80 years later?
Edit: Honestly, I bet that "Altman", directed by Nolan's simulacrum and starring a de-aged Cillian Murphy (with or without his consent), will in fact deservedly win a few Oscars in 2069.
International co-operation to control development and usage. The decision to unleash AGI can only be made once. Making such a decision should not be done hastily, and definitely not for the reason of pursuing private profit. Such a decision needs input from all of humanity.
> International co-operation to control development and usage.
Non-starter. Why would you trust your adversary to "stay within the lanes"? The rational thing to do is to extend your lead as much as possible to be at the top of the pyramid. The arms race is on.
With these things, the distrust is a feature, not a flaw. The distrust ensures you are keeping a close eye on each other. The collaboration means you're also physically and intellectually close. Secrets are still possible, but they're just harder to keep because it's easier for things to slip.
It's rational only if you don't consider the risks of an actual superhuman AGI. Geopolitical issues won't matter if such a thing is released without making sure it can be controlled. These competition-based systems of ours will be the death of us. Nothing can be done in a wise manner when competition forces our hand.
And quickly proliferated around the world to other superpowers and rogue states…
Remember, the Soviets got the nuke so quickly because they just exfiltrated the US plans.
>It should be treated like nuclear weapons.
Either you get it or you're screwed?
> It should be treated like nuclear weapons.
Seeing how currently nuclear weapon holders are elected, that would be a disaster
The disaster will happen if AGI is created and let loose too quickly, without considering all the effects it will have. That's less likely to happen when the number of people with that power is limited.
They don't make buttons large enough for multiple people to press. It's always going to be someone in the end.
Yes they do?
There's also plenty of buttons that can't be pressed unless unlocked by multiple keys which cannot be turned by a single person.
I want Sam to win more than I do Zuck, just based on the proven negativity generated by Meta. I don't want that individual or that company anywhere near additional capital, power, capability or influence.
The hypocrite who violates everyone else’s privacy to sell ads, or the scammer who collects eyeballs in exchange for cryptocurrency and whose “product” has been banned in multiple countries…
Yeah, there’s no good choice here. You should be rooting for neither. Best case scenario is they destroy each other with as little collateral damage as possible.
Between the trio of Thiel, Zuck and sama I’d pick the fourth option - I don’t want to be on that train anymore
It became clear in 2024 and 2025 that they're all dangerous.
All these tech billionaires or pseudo-billionaires basically believe that an enlightened dictatorship is the best form of governance. And of course they ought to be the dictator, or part of the board.
Fourth option is Musk with xAI.
Fourth option is butlerian jihad
At the start of the "LLM boom" I was optimistic that OAI/Anthropic were in a position to finally unseat the Big 4 in at least this area. Now I'm convinced the only winners are going to be Google, Meta, and Amazon, and we are right back where we started.
I still have hope that Anthropic will win out over OpenAI.
But… Why put Meta in that group?
I see Apple, Google, Microsoft, and Amazon as all effectively having operating systems. Meta has none and has failed to build one for cryptocurrency (Libra/Diem) and the metaverse.
Also, both Altman and Zuck leave a lot to be desired. Maybe not as much as Musk, but they both seem to be spineless against government coercion and neither gives me a sense that they are responsible stewards of the upside or downside risks of AGI. They both just seem like they are full throttle no matter the consequences.
What makes you think so? They got the Chatgpt.com domain and the product seems to be growing more than any other (check out app downloads: https://appmagic.rocks/top-charts/apps). They got the first mover advantage - and as we know around here that's a huuuge advantage.
> They got the Chatgpt.com domain and the product seems to be growing more than any other
And still haemorrhaging money.
Yes, I’ve never seen a more sociopathic company than Meta. They are so true to the cliched ethos of “you are the product.” Sickens me that society has facilitated the rise of such banality of evil
> Sickens me that society has facilitated the rise of such banality of evil
American society. Those are uniquely products of the US, exported everywhere, and rightfully starting to get pushback. Unfortunately later than it should have.
Do any of those OpenAI spinoffs have something to show already, or are they still "raising money"?
[flagged]
Even if it's zero, he could still be a shitty person who shouldn't have access to that button. If anyone should have such access at all, of course.
Is that the du jour unit of measurement for morality now?
gestures broadly at every other thing we've known about Mark Zuckerberg since "Dumb Fucks" in college
I suppose as an American taxpayer and American voter, he is responsible for as many ethnic cleansings as anyone else. Supposedly, Armenians leaving Nagorno-Karabakh is ethnic cleansing, and the US did give aid to Azerbaijian so that makes Americans facilitators of ethnic cleansing, though admittedly so are the Canadians.
OpenAI's tight spot:
1) They are far from profitability.
2) Meta is aggressively making their top talent more expensive, and outright draining it.
3) Deepseek/Baidu/etc. are dramatically undercutting them.
4) Anthropic and (to a lesser extent?) Google appear to be beating them (or, charitably, matching them) on AI's best use case so far: coding.
5) Altman is becoming less likeable with every unnecessary episode of drama, and OpenAI carries most of the stink from the initial (valid) grievance that "AI companies are stealing from artists". The endless hype and FUD cycles, going back to 2022, have worn industry people out, as has the flip-flop on "please regulate us".
6) Its original, core strategic alliance with Microsoft is extremely strained.
7) Related to #6, its corporate structure is extremely unorthodox and likely needs to change in order to attract more investment, which it must (to train new frontier models). Microsoft would need to sign off on the new structure.
8) Musk is sniping at its heels, especially through legal actions.
Barring a major breakthrough with GPT-5, which I don't see happening, how do they prevail through all of this and become a sustainable frontier AI lab and company? Maybe the answer is they drop the frontier model aspect of their business? If we are really far from AGI and are instead in a plateau of diminishing returns that may not be a huge deal, because having a 5% better model likely doesn't matter that much to their primary bright spot:
Brand loyalty from the average person to ChatGPT, and OpenAI successfully eating Google's search market. Their numbers there have been truly massive from the beginning, and are, I think, the most defensible. Google AI Overviews continue to be completely awful in comparison.
They have the majority of the attention and market cap. They have runway. And that part is the most important thing. Others don't have the users to test developments at grand scale.
I'm not so sure they have runway.
XAI has Elon's fortune to burn, and Spacex to fund it.
Gemini has the ad and search business of Google to fund it.
Meta has the ad revenue of IG+FB+WhatsApp+Messenger.
Whereas OpenAI has $10 billion in annual revenue, but low switching costs for both consumers and developers using its APIs.
To stay at the forefront of frontier models, you need to keep burning money like crazy. For OpenAI that means raising rounds repeatedly, whereas the tech giants can just fund it from their fortunes.
They definitely have a very valuable brand name even if the switching costs are low. To many people, AI == ChatGPT
But that's just one good marketing campaign away from changing.
Ok, others have more runway, and less research talent.
OpenAI has enough runway to figure things out and place themselves in a healthier position.
And come to think of it, losing a few researchers to other companies may not be so bad. Like you said, others have cash to burn. They might spend that cash more liberally and experiment with bolder, riskier products, and either fail spectacularly or succeed exponentially. And OpenAI can still learn from it well enough and still benefit, even though it was never their cash.
The biggest problem OAI has is that they don't own a data source. Meta, Google, and X all have existing platforms for sourcing real time data at global scale. OAI has ChatGPT, which gives them some unique data, but it is tiny and very limited compared to what their competitors have.
LLMs trained on open data will regress because there is too much LLM generated slop polluting the corpus now. In order for models to improve and adapt to current events they need fresh human created data, which requires a mechanism to separate human from AI content, which requires owning a platform where content is created, so that you can deploy surveillance tools to correctly identify human created content.
OAI has a deal to use Reddit's corpus of data.
They will either have to acquire a data source or build their own moving forward, imo. I could see them buying Reddit.
Sam Altman has also owned something like ~10% of Reddit's stock since it went public.
> how do they prevail through all of this and become a sustainable frontier AI lab and company?
I doubt that OpenAI needs or wants to be a sustainable company right now. They can probably continue to drum up hype and investor money for many years. As long as people keep writing them blank checks, why not keep spending them? Best case they invent AGI, worst case they go bankrupt, which is irrelevant since it's not their own money they're risking.
Maybe employees realised this and left OpenAI for this reason.
If they can turn ChatGPT into a free cash flow machine, they will be in a much more comfortable position. They have the lever to do so (ads) but haven't shown much interest there yet.
I can't imagine how they will compete if they need to continue burning and needing to raise capital until 2030.
The interest and actions are there now: Hiring Fidji Simo to run "applications" strongly indicates a move to an ad-based business model. Fidji's meteoric rise at Facebook was because she helped land the pivot to the monster business that is mobile ads on Facebook, and she was supposedly tapped as Instacart's CEO because their business potential was on ads for CPGs, more than it was on skimming delivery fees and marked up groceries.
Good analysis. My counter to it is that OpenAI has one of the leading foundational models, while Meta, despite being a top-paying tech company, has continued to release subpar models that don't come close to the other big three.
So, what happened? Is there something fundamentally wrong with the culture and/or infra at Meta? If it was just because Zuckerberg bet on the wrong horses to lead their LLM initiatives, what makes us think he got it right this time?
For one thing, all the trade secrets going from openai and anthropic to meta.
OpenAI has no shot without a huge cash infusion and offering similar packages. Meta opened the door.
That means he is moaning because Meta is able to inflict sufficient pain for him to feel it. Seems Meta is pretty serious about it.
Job market forces working as they should.
I think it's more that he's reinforcing the 'mission' and defining the good guys/bad guys so that he doesn't lose more employees to Meta.
Could be but how many of the people in OpenAI still believe in that 'mission' thing after all the drama he has been involved in for so long?
I think that leaks like this have negative information value to the public.
I work at OAI, but I'm speaking for myself here. Sam talks to the company, sometimes via slack, more often in company-wide meetings, all the time. Way more than any other CEO I have worked for. This leaked message is one part of a long, continuing conversation within the company.
The vast majority of what he and others say doesn't get leaked. So you're eavesdropping on a tiny portion of a conversation. It's impossible not to take it out of context.
What's worse, you think you learned something from reading this article, even though you probably didn't, making you more confident in your conclusions when you should be less confident.
I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are. It certainly has humbled my own reactions to news. (In this particular instance I don't think there's so much right and wrong but more that I think if you had actually been in the room for more of the conversation you'd probably feel different.)
Btw Sam has tweeted about an open source model. Stay tuned... https://x.com/sama/status/1932573231199707168
The other side of it: some statements made internally can be really bad, but employees brush over them because they inherently trust the speaker to some degree, they have additional material that better aligns with what they want to hear so they latch onto that instead, and current leaders' actions look fine enough to them, so they write the bad parts off as just communication mishaps.
Until the tide turns.
Worse: employees are often actively deceived by management. Their “close relationship” is akin to that of a farmer and his herd. Convinced they’re “on the inside” they’re often blind to the truth that’s obvious from the outside.
Or simply they don’t see the whole picture because they’re not customers or business partners.
I’ve seen Oracle employees befuddled to hear negative opinions about their beloved workplace! “I never had to deal with the licensing department!”
Okay, but I've also heard insiders at companies I've worked for completely overlook obvious problems and cultural/management shortcomings. "Oh, we don't have a low-trust environment, it's just growing pains. Don't worry about what the CEO just said..."
Like, seriously, I've seen first-hand how comments like this can be more revealing out of context than in context, because the context is all internal politics and spin.
> Btw Sam has tweeted about an open source model. Stay tuned... https://x.com/sama/status/1932573231199707168
Sneaky wording, but it seems like no: Sam has only talked about an "open weights" model so far, so most likely not "open source" by any existing definition of the word, but rather a custom "open-but-legal-dept-makes-us-call-it-proprietary" license. Slightly ironic given the whole "most HN posters are confidently wrong" part right before ;)
Although I do agree with you overall: many stories are sensationalized, parts-of-stories always lack a lot of context, and many HN users comment about stuff they maybe don't actually know much about, but put it in a way that makes it seem like they do.
Open weights is unobjectionable. You do get a lot.
It's nice to also know what the training data is, and it's even nicer to be aware of how it's fine-tuned etc., but at least you get the architecture and are able to run it as you like and fine tune it further as you like.
> Open weights is unobjectionable
Yeah? Try me :)
> but at least you get the architecture and are able to run it as you like and fine tune it further as you like.
Sure, that's cool and all, and I welcome that. But it's getting really tiresome seeing huge companies, who probably depend on actual FOSS, constantly get it wrong, which devalues all the other FOSS work going on, since they wanna ride that wave instead of just being honest about what they're putting out.
If Facebook et al could release compiled binaries from closed source code but still call those binaries "open source", and call the entire Facebook "open source" because of that, they would. But obviously everyone would push back on that, because that's not what we know open source to be.
Btw, you don't get to "run it as you like", give the license + acceptable use a read through, and then compare to what you're "allowed" to do compared to actual FOSS licenses.
There are ten measures by which a model can/should be open:
1. The model code (pytorch, whatever)
2. The pre-training code
3. The fine-tuning code
4. The inference code
5. The raw training data (pre-training + fine-tuning)
6. The processed training data (which might vary across various stages of pre-training and fine-tuning)
7. The resultant weights blob
8. The inference inputs and outputs (which also need a license; see also usage limits like OpenRAIL)
9. The research paper(s) (hopefully the model is also described in literature!)
10. The patents (or lack thereof)
A good open model will have nearly all of these made available. A fake "open" model might only give you two of ten.
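For illustration, that checklist can be written down as data; here's a minimal Python sketch (the axis names are my own shorthand for the ten items above, not any standard):

    # Shorthand labels for the ten openness axes listed above (my names).
    OPENNESS_AXES = [
        "model_code", "pretraining_code", "finetuning_code", "inference_code",
        "raw_training_data", "processed_training_data", "weights",
        "io_license", "papers", "patents",
    ]

    def openness_score(released):
        """Count how many of the ten axes a release actually covers."""
        covered = sum(axis in released for axis in OPENNESS_AXES)
        return f"{covered}/{len(OPENNESS_AXES)} axes open"

    # A typical "open weights" release under this rubric:
    print(openness_score({"weights", "inference_code", "papers"}))  # 3/10 axes open

By that count, a bare "open weights" drop lands around three of ten, near the "fake open" end of the scale described above.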
> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are.
This is so true. And not confined to HN.
I agree with the sentiment.
Having been behind the scenes of an HN discussion about a security incident, with accusations flying about incompetent developers, the true story was that the lead developers knew of the issue, but it was not prioritised by management and was pushed down the backlog in favour of new (revenue-generating) features.
There is plenty of nuance to any situation that can't be known.
No idea if the real story here is better or worse than the public speculation though.
I too worked at a place where hot button issues were being leaked to international news.
Leaks were done for a reason: either because they agreed with the leak, really disagreed with the leak, or wanted to feel big because they were a broker of juicy information.
Most of the time the leaks were done in an attempt to stop something stupid from happening, or to highlight where upper management were choosing to ignore something for a gain elsewhere.
Other times it was because the person was being a prick.
Sure, it's a tiny part of the conversation, but in the end, if you've gotten to the point where your employees are pissed off enough to leak, that's the bigger problem.
> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in. It's eye-opening to see how confidently wrong most poasters are.
Some topics (and some areas where one could be an expert in) are much more prone to this phenomenon than others.
Just to give a specific example that suddenly comes to my mind: Grothendieck-style Algebraic Geometry is rather not prone to people confidently posting wrong stuff about on HN.
Generally (to abstract from this example [pun intended]): I guess topics that
- take an enormous amount of time to learn,
- where "confidently bullshitting" will not work because you have to learn some "language" of the topic very deeply
- where even a person with some intermediate knowledge of the topic can immediately detect whether you use the "'grammar' of the 'technical language'" very wrongly
are much more rarely prone to this phenomenon. It is no coincidence that in the last two points I make comparisons to (natural) languages: it is not easy to bullshit in a live interview that you know some natural language well if the counterpart has at least some basic knowledge of this natural language.
I think it's more the site's architecture that promotes this behavior.
In the offline world there is a big social cost to this kind of behavior. Platforms haven't been able to replicate it. Instead they seem to promote and validate it. It feeds the self esteem of these people.
It's hard to have an informed opinion on Algebraic Geometry (requires expertise) and not many people are going to upvote and engage with you about it either. It's a lot easier to have an opinion on tech execs, current events, and tech gossip. Moreover you're much more likely to get replies, upvotes, and other engagement for posting about it.
There's a reason politics and tech gossip are where most HN comments go these days. This is a pretty mainstream site.
> There's a reason politics and tech gossip are where most HN comments go these days. This is a pretty mainstream site.
HN is the digital water cooler. Rumors are a kind of social currency, in the capital sense, in that it can be leveraged and has a time horizon for value of exchange, and in the timeliness/recency biased sense, as hot gossip is a form of information that wants to be free, which in this context means it has more value when shared, and that value is tapped into by doing so.
I totally agree that most articles (pretty much all news/infotainment) are devoid of any information.
At the same time, all I need to know about Sam is in the company/"non-profit's" name, which in itself is now simply a lie.
This is a strangely defensive comment for a post that, at least on the surface, doesn't seem to say anything particularly damning. The fact that you're rushing to defend your CEO sort of proves the point being made, clearly you have to make people believe they're a part of something bigger, not just pay them a lot.
The only obvious critique is that clearly Sam Altman doesn't believe this himself. He is legendarily mercenary and self serving in his actions to the point where, at least for me, it's impressive. He also has, demonstrably here, created a culture where his employees do believe they are part of a more important mission and that clearly is different than just paying them a lot (which of course, he also does).
I do think some skepticism should be had around that view the employees have, but I also suspect that was the case for actual missionaries (who of course always served someone else's interests, even if they personally thought they were doing divine work).
The headline makes it sound like he's angry that Meta is poaching his talent. That's a bad look that makes it seem like you consider your employees to be your property. But he didn't actually say anything like that. I wouldn't consider any of what he said to be "slams," just pretty reasonable discussion of why he thinks they won't do well.
I'd say this is yet another example of bad headlines having negative information content, not leaks.
With no dogs in the fight, the very fact he's talking to his employees about a competitor's hiring practices is noteworthy.
The delivery of the message can be milder and better than how it sounds in the chosen bits, but the overall picture kinda stays the same.
To me, there’s an enormous difference between “they pay well but we’re going to win the race” and “my employees belong to me and they’re stealing my property.”
Notably, I don’t see him condemning Meta’s “poaching” here, just commenting on it. Compare this with, for example, Steve Jobs getting into a fight with Adobe’s CEO about whether they’d recruit each other’s employees or consider them to be off limits.
I've experienced that. Absolutely.
But I've also experienced that the outside perspective, wrong as it may be on nearly all details, can give a dose of realism that's easy to brush aside internally.
Sounds like someone is upset they didn't get poached.
> I think that leaks like this have negative information value to the public.
To most people, I'd think this is mainly for entertainment purposes, i.e. 'palace intrigue', and the actual facts don't even matter.
> The vast majority of what he and others say doesn't get leaked. So you're eavesdropping on a tiny portion of a conversation. It's impossible not to take it out of context.
That's a good spin, but coming from someone who has an anonymous profile, how do we know it's true? (This is a general thing on HN: people say things, but you don't know how legit what they say is, or whether they are who they say they are.)
> What's worse, you think you learned something from reading this article, even though you probably didn't, making you more confident in your conclusions when you should be less confident.
What conclusions exactly? Again do most people really care about this (reading the story) and does it impact them? My guess is it doesn't at all.
> I hope everyone here gets to have the experience of seeing HN discuss something that you're an expert in.
This is a well-known trope and is discussed in other forms, i.e. 'the NY Times story is wrong; you move to the next story and believe it': https://www.epsilontheory.com/gell-mann-amnesia/
> coming from someone who has an anonymous profile how do we know it's true
My profile is trivially connected to my real identity, I am not anonymous here.
How is it trivially connected to your real identity exactly?
I am not seeing how it is at all.
> That's a good spin but coming from someone who has an anonymous profile how do we know it's true (this is a general thing on HN people say things but you don't know how legit what they say is or if they are who they say they are).
Not only that, but how can we know if his interpretation or "feelings" about these discussions are accurate? How do we know he isn't looking through rose-tinted glasses like the Neumann believers at WeWork? OP isn't showing the missing discussion, only his interpretation/feelings about it. How can we know if his view of reality is accurate and unbiased? Without seeing the full discussion and judging for ourselves, we can't.
> How can we know if his view of reality is accurate and unbiased? Without seeing the full discussion and judging for ourselves, we can't.
I agree with that of course.
Your comment comes dangerously close to sounding like someone who has drunk the kool-aid and defends the indefensible.
Yes, you can get the wrong impression from hearing just a snippet of a conversation, but sometimes you can hear what was needed, whether it was out of context or not. Sam is not a great human being to be placed on a pedestal where nothing he says gets questioned. He's just an SV CEO trying to keep people thinking his company is the coolest thing. Once you stop questioning everything, you're in danger of having the kool-aid take over. How many times have we seen other SV CEOs with a "stay tuned" tweet that they just hope nobody questions later?
>if you had actually been in the room for more of the conversation you'd probably feel different
If you haven't drunk the kool-aid, you might feel differently as well.
SAMA doesn't need your assistance white knighting him on the interwebs.
Technically, after being labelled a missionary you can't really blame people for spreading the word of the almighty.
Little Miyazaki knock-offs posted on the Nazi Hellsite formerly known as Twitter aren't really helping how the "public" feels about OAI either.
It says something that he still believes he has "missionaries" after betraying all the core principles that OpenAI was founded on. What exactly is their mission now other than generating big $?
> It says something that he still believes he has "missionaries" after betraying all the core principles that OpenAI was founded on.
What I find most troubling in this reaction is how hostile it is to the actual talent. It accuses anyone who is even considering joining Meta in particular, or any competitor in general, of being a mercenary. It's using the poisoning-the-well fallacy to shield OpenAI from any competition. And why? Because he believes he is on a personal mission? This emits "some of you may die, but it's a sacrifice I am willing to make" energy. Not cool.
It's absolutely ridiculous that investors are driven (and are expected to be driven) by maximisation of return on investment, and that alone, but when labour/employees exhibit that same behaviour, they are labelled "mercenaries" or "transactional".
These examples of double standards for labor vs capital are literally everywhere.
Capital is supposed to be mobile. Economic theory is based on the idea that capital should flow to its best use (e.g., investors should withdraw it from companies that aren't generating sufficient returns and provide it to those who are) including being able to flow across international borders. Labor is restricted from flowing across international boundaries by law and even job hopping within a country is frowned upon by society.
We have lower rates of taxation on capital (capital gains and dividends) than on labor income because we want to encourage investment. We're told that economic growth depends on it. But doesn't economic growth also depend on people working and shouldn't we encourage that as well?
There's an entire industry dedicated to tracking investment yields for capital and we encourage the free flow of this information "so that people can make informed investing decisions". Yet talking about salaries with co-workers is taboo for some reason.
The list goes on and on and on.
Those lower rates of taxation on capital don't even incentivize investment, because investment is inelastic. What else are you going to do with the money, swim in it?
It's just about rich people wanting a bigger share of the pie and having enough money to buy the policies they prefer.
Similarly, we have laws that guarantee our right to talk with our coworkers about our income, but the penalties have been completely gutted. And the penalty for companies illegally colluding on salary by all telling a third party what they are paying people and then using that data to decide how much to pay is ... nada.
We need to figure out how to have people who work for a living fund political campaigns (either directly with money or by donating our time), because this alternative of a badly-compressed jpeg of an economy sucks.
He was very happy when money caused them all to back him, despite the fact that he obviously isn't a safe person to have in a position of power. But if they realize they have better money options than turning OpenAI into a collusion against its original foundation, mostly for his benefit, well, then they are mercenaries.
Couldn’t his claim apply equally to investors and employees? In both categories, people who are there to do stuff for mercenary reasons are likely to be (in his view) missing some of the drive and cohesion of a group of “true believers” working for the same purpose?
The contrast between SpaceX and the defense primes comes to mind… between Warren Buffett and a crypto pumper-and-dumper… between a steady career at (or dividend from) IBM and a Silicon Valley startup dice-roll (or the people who throw money into said startups knowing they’re probably going to lose it)
He claims to be advancing progress. He believes that progress comes from technology plus good governance.
Yet our government is descending into authoritarianism and AI is fueling rising data center energy demands exacerbating the climate crisis. And that is to say nothing of the role that AI is playing in building more effective tools for population control and mass surveillance. All these things are happening because the governance of our future is handled by the ultra-wealthy pursuing their narrow visions at the expense of everyone else.
Thus we have no reason to expect good “governance” at the hands of this wealthy elite and we only see evidence to the opposite. Altman’s skill lies in getting people to believe that serving these narrow interests is the pursuit of a higher purpose. That is the story of OpenAI.
[flagged]
> Also the only way multiculturalism can work is through a totalitarian state which is why surveillance and censorship is so big in the UK. Also the reason why Singapore works.
Singapore, if anything, is evidence against your claim about the UK. Singapore has multiple cultures, but it does not promote multi-culturalism as it is generally understood in the UK. Their language policy is:
1. Everyone has to speak reasonably good English.
2. Languages other than English, Malay, Mandarin and Tamil are discouraged.
https://en.wikipedia.org/wiki/Language_planning_and_policy_i...
The language policy is more like the treatment of Welsh in the 19th century, or Sri Lanka's attempt to impose a single national language from the 60s to the 80s (but more flexible as it retains more than one language). A more extreme (because it goes far beyond language) and authoritarian example would be contemporary China's suppression of minority cultures. I do not think anyone would call any of those multiculturalism.
The reason for surveillance and censorship in the UK is very different. It is a general feeling in the ruling class that the hoi polloi cannot be trusted and that centralised decision making is preferable to local or individual decision making. The current Children's Wellbeing and Schools Bill is a great example: the central point is that the authorities will make more decisions for people and organisations, and decide what they can do, to a greater extent than at the moment.
> the only way multiculturalism can work is through a totalitarian state
I'm seeing more and more people using this kind of rhetoric in the last few years. Extremely worrying.
> Also the only way multiculturalism can work is through a totalitarian state which is why surveillance and censorship is so big in the UK.
That seems like a wild claim to make without any supporting evidence. Even Switzerland can be used to disprove it, so I'm not sure where you're coming from that assuredly.
The UK isn't totalitarian in the same sense that even Singapore is, let alone actually totalitarian states like Eritrea, North Korea, China, etc.
For reference:
Switzerland has one of the highest percentage of foreigners in Europe, four official languages, a decentralized political system, very frequent direct democratic votes and consensus governance (no mayors, governors and prime ministers, just councils all the way down).
Switzerland is set up in such a way that it absorbs and integrates many different cultures into a decentralized, democratic system. One of the primary historical causes for this is our belligerent past. I'd like to think that this was our only way out of constantly hitting each other over the head.
Even the US is a good counterexample. It is a widely fragmented country, yet that worked for decades without degrading into a totalitarian state.
The UK would need to have a well-funded and well-equipped police force to be a proper police state, and the rate of shoplifting, burglary etc that goes on suggests otherwise.
[dead]
Yeah, that "missionaries" line feels pretty rich coming from the guy who presided over OpenAI's pivot from nonprofit ideals to capped-profit reality
It says something that he believes he is anything but a Mercenary
He believes anything that's profitable for him.
It is a widely accepted definition of AGI that it is something that is either really smart or generates more than $100B USD in revenue.
It is also clear Sam Altman and OpenAI’s core values remain intact.
Lol, I'm sure Sam Altman's ideals haven't changed but you're a fool if you think OpenAI is aiming for anything loftier than a huge pile of money for investors.
Exactly. AGI.
Exactly. He says missionaries and immediately follows it by talking about compensation (i.e., a mercenary incentive).
Oh, the hypocrisy: one of the biggest thieves and liars calling "his" people jumping ship "mercenaries".
So wrong on so many levels - what a time to be alive.
They haven't released many closed-source, open-weights models in comparison to their competitors, but they made their Codex agent open source while Claude Code is still closed source.
That’s just a wrapper though isn’t it. The secret sauce is still secret.
And with the others as well, the secret sauce of training is still secret. Their competitors' "open source" in Gemma, Llama, etc is closed source. It's like Mattermost Team Edition where the binary is shipped under the MIT license. OpenAI should be held to a higher standard based on their name and original structure and pitch and they've fallen short, but I think to say they completely threw it out is an exaggeration. They hit the same roadblocks of copyright and other sorts of scrutiny that others did.
A company's mission is not an individual's mission. I personally would never hire an engineer whose main pursuit is money or promotions. These are the laziest engineers that exist and are always a liability.
Everyone is the chairman of the board of their lives, with a fiduciary duty to their shareholder, namely themselves. You can decide to hire only employees who either believe in mission over pay or who are willing to mouth the words, but you will absolutely miss out on good employees.
I remember defending a hiring candidate who had said he got into his specialty because it paid better than others. We hired him and he was great, worth his pay. No one else on the hiring team could defend a bias against someone looking out for themselves.
this is so, so out of touch.
What is your opinion on managerial virtue signaling?
And I would never work for someone with such a paranoid suspicion of the motives of their employees, who doesn’t want to take any responsibility in their employees’ professional growth, and who doesn’t want to pay them what they’re worth.
"I would never give me money to someone who wants money."
Sam Altman went from "I'm doing this because I love it" to proposing to receive 7% equity in the for-profit entity in a matter of months. Now he calls out researchers leaving for greener pastures as mercenaries while the echo of "OpenAI is nothing without its people" hasn't faded.
“Poaching”? It’s called the free market.
Capitalists always hate capitalism when it comes to employees getting paid what they are worth. If the market will bear it, he should embrace it and stop whining.
The value of these researchers to Meta is surely more than a few billion. Love seeing free markets benefit the world.
I'm a bit torn about this. If it ends up hurting OpenAI so much that they close shop, what is the incentive for another OpenAI to come up?
You can spend time making a good product and get breakthroughs and all it takes is for meta to poach your talent, and with it your IP. What do you have left?
Trade secrets and patent laws still apply.
But also, every employee getting paid at Meta can come out with the resources to start their own thing. PayPal didn't crush fintech: it funded the next twenty years of startups.
How do you figure? If you assume that Meta gets the state-of-the-art model, revenue is non-existent unless they start a premium tier or run ads. Even then, it's not clear if they will exceed the money spent on inference and training compute.
It's worth a few billion (easily) to keep people's default time sink as aimlessly playing on FB/IG as opposed to chatting with ChatGPT. Even if that scroll is replaced by chatting with Llama as opposed to seeing posts.
This looks similar to what Meta (then Facebook) did a decade ago, when it basically broke the agreements between Apple, Google, etc. not to poach each other's employees.
When you hear him saying it, it's funny.
When you hear this reiterated by employees, who actually believe it, then it's sad. Obviously not in this situation, but I've actually heard this from people. Some of them were even pros. "There is no fool like an educated fool."
This is one of the most fascinating things I've found in the past ten or so years. When did people in general begin to buy into the bullshit spewed by big shots in corporate, or really any commercial, ventures? People at least implicitly understood that the boss just wanted money and would fuck you, nature, or his own firmly held beliefs to get it.
People are now shocked when a company cuts a loved product or their boss fires them when someone cheaper comes along.
Vocational Awe and the Lies We Tell Ourselves (2018):
https://www.inthelibrarywiththeleadpipe.org/2018/vocational-...
HN Discussion:
https://news.ycombinator.com/item?id=24602956
This has always applied to tech workers.
Since when have the employees believed it?
Anyone who has worked at OpenAI or is currently working there has lost all credibility in my eyes. When their dear leader, Sam, was "fired", they staged a coup to save their paychecks.
These people are just out there to make a buck and scam people with "AGI", and now that there is plenty of competition and superior models, I'm hearing crickets from them.
All they had going for them was being first to market, and they managed to damage the brand, lose their top talent, deliver a subpar product, and convert a nonprofit into a for-profit.
In a for-profit system, literally everybody is a mercenary. Thoughts, prayers, and vibes don't put food on my table or pay my bills and compute expenses.
Had he been doing the poaching, he would be saying mercenaries will beat missionaries. Why believe CEOs' words at this point?
An observation: most articles with titles of the form "A SLAMS B" put forward a narrow, one-sided view of the issue they report on. Oftentimes they're shallow attempts to stir outrage for clicks. This one is just giving a CEO a platform to promote how awesome he thinks his company is.
All these articles and videos of people "slamming" each other; it doesn't move the needle, and it's not really news.
Seems to me that articles with titles in the form "A SLAMS B" take a single negative comment that A made about something associated with B and build a few paragraphs around it as if it were a huge controversy, while in the meantime both A and B have already forgotten about the issue.
Yeah, yeah, typical rich guy whining when labor makes some gains.
Dumb question: if they're willing to pay so much for AI talent, why won't companies hire experienced software engineers willing to learn AI on the job? Seems like there should be a big market for that.
They are: https://openai.com/residency/
However, skilling people up on specialized skill sets in a reasonable time frame requires having people around to teach them. And those people need to know not just the skills, but how to teach them well. And it takes time away from those people doing the job, so that approach will slow development in the short run.
But the companies are trying.
Perhaps it takes too long. The talent they do have would have to split their duties in order to train the incoming engineers.... Just a guess.
I think there isn't a shortage, but it lets you get the best now. The very best. The born-genius-who-didn't-cruise-and-worked-hard-too kind of best.
Meta just wants to have a story for its stock price to go up.
It is definitely worth spending a couple hundred million to make your stock price go up tens of billions for several months.
Presumably not all of that hundreds of millions in investment will be waste, too.
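Back-of-envelope, the trade looks like this (a minimal sketch; the $200M spend and $10B market-cap figures below are illustrative assumptions, not reported numbers):

    # Rough ROI sketch of "spend hundreds of millions, gain tens of billions".
    # Both figures are assumptions for illustration, not reported numbers.
    hiring_spend = 200_000_000         # assumed total cost of the poached talent
    market_cap_gain = 10_000_000_000   # assumed low end of "tens of billions"

    print(f"{market_cap_gain / hiring_spend:.0f}x return, on paper")  # -> 50x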
Meta knows they have close to a 0% chance of overtaking ChatGPT or Gemini.
Don't those have PhDs? They are the smartest, with backgrounds that would take a while to learn.
You're on the right track, but only for companies investing for the long term.
There kind of is, but also note we are talking about the exceptional talent here. I don't think Meta is mass-poaching the pure engineer types at OpenAI either.
I don't think it's strange because it feels like Meta is trying to do what OpenAI originally set out to do with making AI accessible to everyone.
Everybody talks about what this does to OpenAI, but I do wonder how this shakes out for Meta.
If the person next to you gets paid 20x more than you, you might be a bit unhappy when they are not 20x more helpful.
I think Meta already has very deep cultural problems.
If you've ever browsed teamblind.com (which I strongly recommend against as I hate that site), you'll see what the people who work at Meta are like.
That's true of every tech company's employees. They all want $1M TC and a 10-hour work week.
You’re describing the management not the employees
The 4% Rule means everybody with $25M is getting $1M per year for zero hours of work per week. Google tells me Sam has $1.7B.
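For concreteness, here is the arithmetic behind that claim, a minimal sketch assuming the 4% rule of thumb holds as stated (the helper name is just for illustration; the figures are the ones given above):

    # The 4% rule of thumb: a portfolio can sustain withdrawing ~4% per year.
    # Assumption: the heuristic holds as stated; real portfolio returns vary.
    def annual_withdrawal(net_worth: float, rate: float = 0.04) -> float:
        """Yearly income the 4% rule suggests a portfolio can sustain."""
        return net_worth * rate

    print(annual_withdrawal(25_000_000))     # -> 1000000.0  ($1M/yr at $25M)
    print(annual_withdrawal(1_700_000_000))  # -> 68000000.0 ($68M/yr at $1.7B)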
What are they like, then, since you recommend not browsing the site?
The posts from Meta employees on teamblind are generally cynical, status/wealth-obsessed and mean.
Off topic, but the existence of this teamblind.com site escaped my notice till now.
Is there a particular reason to hate it (aside from it being social media)?
It has obvious pros, but since you asked about the cons: anonymity brings out the worst in people, and TC chasing leads to a reductionist view of people's values and skills.
For example, unlike HN, you don't often get technical discussions on Blind, by design. So it is a "meta"-level strategy discussion of the job, and then it skews toward politics, gossip, stock price, etc.
This is compounded by it being social media, where negativity can be amplified 5-10x.
Mainly because it brings out the worst in people. It’s easy to read Blind too much and take on a very cynical, money-driven view of everything in your life, which of course a Blind addict would justify as clear-eyed and pragmatic.
I hate teamblind because it makes me feel really negative about our industry.
I actually really like tech - the problems we get to work on, the ever-changing technological landscape, the smart and passionate people, etc, etc. But teamblind is just filled with cynical, wealth-obsessed and mean careerists. It's like the opposite of HN in many ways.
And if you ever wondered where the phrase "TC or GTFO" originated... it's from teamblind.
I looked at it once, seemed full of young men discussing hair loss issues and how to get a girlfriend.
Sam needs to pony up and reach into that purse of his if he wants to keep his few remaining staff.
https://archive.ph/9i9vo
At least from the outside, OpenAI's messaging about this seems obnoxiously deluded; maybe some of those employees left because it starts to feel like a cult built on foundations of self-importance? Or maybe they really do know some things we don't know, but it seems like a lot of people are eager to keep giving them that excuse.
But then again, maybe they have such a menagerie of individuals with their heads in the clouds that they've created something of an echo chamber about the 'pure vision' that only they can manifest.
Yeah, it's a tough spot he's found himself in. How do you convince people who know more about this stuff than anybody that you're barreling towards something that's an improbability? It seems that most of them have made their choice to turn more towards reality, the material reality, and register their skill with an organization that holds that in higher regard. I can't blame them, and neither can he, but he also can't help himself when it comes to reiterating the hype. He might be projecting about that 'deep-seated cultural issue' he's ascribing to Meta, and lashing out against those who don't accept it.
> I can't blame them, and neither can he
He's certainly trying with statements like this.
To be fair, he's hardly alone. Business is built on dupers and dupees. The duper talks about how important the mission of the business is while taking the value of the labor of the dupee. If he had to work for the money he pays the dupee, he would be a lot less interested in the mission.
I think it's more of the latter. We've already seen others beat them at their own game, only for them to come back with a new model.
In the end, this is the same back and forth that Apple and Sun shared in the late 90s or Meta and Google in 2014. We could have made non-competes illegal today but we didn’t.
(Post-employment) non-competes have been unenforceable in California since 1872. They became illegal in California last year.
A federal rule would be nice, but the state rule where a lot of the development happens could be sufficient.
Ehh, this take feels ungenerous to me. You don't have to believe a private firm is a holy order for it to benefit from a culture filled with "we believe this specific project is Important" people vs "will work at whatever shop drops the most cash" people.
Mercenaries by definition select for individual dollar outcomes, and it's impossible for that not to impact the way they operate in groups, which is generally to the group's detriment unless management is incredibly good at building group-first incentive structures that don't stomp individual outcomes.
That said, mercenary-missionaries are definitely a thing. They're unstoppable forces culturally and economically, and that could be who we're seeing move around here.
In our times, every narcissist sees himself as a saint and a messiah on a mission to save the world, while doing the complete opposite of that. And they get very angry when they see other narcissists trying to do the same.
This is the most succinct description of our current reality that I've ever seen. Kudos!
Lovely words. Except there isn't a single one of them that gives two shits about the world.
Two rich kids who have mostly paid-to-win their way into the game are predictably fighting using money because that's all they bring to the table.
Isn't Meta's open model closer to OpenAI's mission than OpenAI's?
Ironically, Altman's statement wasn't all that wrong, in a sense.
He just mixed up who the "Missionaries" and who the "Mercenaries" were.
I don't think I've ever seen someone use two semicolons in one sentence.
Amazing the distinction Sam sees between himself and Zuck while most see no distinction at all.
This was an important argument in the book The Network State. Corporate politics is the small game right now; Sam is trying to build a global network state.
What drama. Hype it up. Make the bubble bigger.
If Sam Altman is upset, he should look in the mirror for making his people work so many hours. They didn't leave because of the pay.
Every single one of these companies is like IBM selling servers: we're all waiting for the home computer, because that's when things will really take off.
If Meta is offering better comp and a clean slate with strong leadership, some people are going to take that bet.
“First comes the Missionary, then comes the Mercenary, then comes the Army”
Wonder if that applies here.
There is a fairly strong scientific/historical argument that neither mercenaries nor missionaries have made any significant contribution to the outcome of any important human conflict or endeavour. Rather, microscopic life is in control, and we are keen to rationalize the outcomes into stories of human heroes and villains.
Therefore, wish for the army with the best immune system.
In other words, we should probably be asking what viral/bacterial content is transferred in these employee trades and who mates with whom. This information is probably as important to the outcome as the notions of "AGI" swirling around.
Why doesn't Sam try hiring more juniors and training them up?
'Missionaries Will Beat Mercenaries'
And hypocrites will never stop whining
One variable that I think is missing here is that Meta is profitable whereas OpenAI is not.
Mercenaries are just missionaries with better funding. Altman needs more capital to compete.
That's the kind of thing you say when you don't want to pay the market price.
I understand the massive anti-OpenAI sentiment here, but OpenAI makes a really great product. ChatGPT and its ecosystem are widely used by millions every day to make them more productive. Losing these employees doesn’t bode well for users.
Meta doesn’t really have a product unless you count the awful “Meta AI” that is baked into their apps. Unless these acquisitions manifest in frontier models getting open sourced, it feels like a gigantic brain drain.
Meta's real AI product is actually even worse than that and insidious: They try to run over companies who are (in contrast) successfully advancing AI with money they made by hooking teens on IG, and then just use the resulting inferior product as a marketing tool.
It's weird to hear Sam Altman call the employees of OpenAI 'missionaries' based on their intense policies that seem determined to control how people think and act.
Imagine if in 2001 Google had said "I'm sorry, I can't let you search that" if you were looking up information on medical symptoms, or doing searches related to drugs, or searching for porn, or searching for Disney themed artwork.
It's hard for me to see anyone with such a strong totalitarian control over how their technology can be used as a good guy.
Zuck poaches AI devs and places them under Wang; how does that work? Wang doesn't give the impression of being a brilliant researcher or coder, just a great deal maker (to put it diplomatically).
Says a guy who is poaching from OpenAI.
It’s kind of rich that he’s complaining about Facebook paying engineers ’too much’, given the history here.
A decade ago Apple, Google, Intel, Intuit, and Adobe all had anti poaching agreements, and Facebook wouldn’t play ball, paid people more, won market share, and caused the salary boom in Silicon Valley.
Now Facebook is paying people too much and we should all feel bad about it?
Wage cuts will continue until morale improves
I don't know which pedestal Sam is standing on to point fingers at others. Who are the missionaries and who are the mercenaries? What part of OpenAI is Open?
“I am proud of how mission-oriented our industry is as a whole; of course there will always be some mercenaries.”
In the context of the decisions of largely East Asia-born technical staff, I can't help but reflect on the role of actual Western missionaries and mercenaries in East Asia over the last 100+ years, and also the DeepSeek-targeted sinophobia.
https://www.britannica.com/event/Boxer-Rebellion
https://en.m.wikipedia.org/wiki/Protestant_missions_in_China
https://en.m.wikipedia.org/wiki/Operation_Beleaguer
https://monthlyreview.org/2025/02/01/imperialism-and-white-s...
What's the profile of this talent like? And what are the skills that are most highly sought after?
Is it the researchers or the system engineers that scale the prototypes? Or other skills/expertise?
https://www.youtube.com/shorts/WrTKZKG7Ahc -- I would probably take Ari Emanuel's word on this one
This is a bad year to talk about missions and ideologies. Just take the money and run
I don't even know what that's supposed to mean.
He says, while driving a $4 million car. :-)
Sam Altman complaining about mercenary behavior from competitors... Talk about the pot calling the kettle black. Guess he's unhappy he's not the one being mercenary in this situation.
Didn't many of the missionaries at OpenAI go to Thinking Machines Lab?
https://thinkingmachines.ai/
Why is it still called Meta? Do they still do the Metaverse thing?
I don't know, maybe all the big AI companies could come to an agreement not to poach each others employees?[1]
It is always surprising to me when billionaire CEOs are complaining that their own employees are min-maxing their earning potential.
[1] https://www.ere.net/articles/tech-firms-settle-case-admit-se...
Millionaires will beat missionaries; that's how Zuck sees it, and I cannot say he's wrong.
I can't stand missionaries.
It is hilarious how much capitalists hate capitalism as soon as it benefits workers.
this open competition for talent is better than that time all the big tech firms were working to actively suppress wages.
We're talking about a few hundred people here, globally. The entire goal is to suppress everyone else's wages to zero.
"ChatGPT can you give me a catchy phrase I can use to sway the public discourse against Meta that puts OpenAI in the most favorable light? Also sprinkle in some alliteration if you could"
It always makes me laugh, this millionaire rhetoric about “THE MISSION.” I once had a CEO who suddenly wanted us to be on call 24/7, every other week. His argument? Commitment. The importance of "the mission". Becoming a market leader, and so on.
As AlbertaTech says, “we make sparkling water.” I mean, what’s the mission? A can of sparkling water on every table? Spreading the joy of carbonated water to the world? No. You sell sparkling water because you want to make a profit. That kind of speech is just a way to hide the fact that you're trying to cut three full-time positions and make your employees work off-hours to increase margins. Or, like in this case, pay them less than the competition with the same objective.
Sam Altman might actually have a mission, turning us all into robot slaves, but that’s a whole different conversation.
By 2025, haven't all employees learned to see through this? Just like the "we're all family" tropes. It's all just an attempt at brainwashing employees into working longer hours because of the 'purpose'. But they won't benefit; they can be let go at any time.
Why the bible lingo?
Couldn't think of a worse steward of AI than Meta/Zuck (not a fan of OpenAI either). One of the most insidious companies out there.
Sad to see Nat Friedman go there. He struck me as "one of the good ones" who was keen to use tech for positive change. I don't think that is achievable at Meta
But if they give you tens of millions of reasons to go there . . .
For Sam to seemingly claim that Meta is hiring mercenaries while OpenAI is hiring missionaries seems a bit counter to OpenAI's mission, given its closed-weight models versus the open weights at Meta.
I could definitely see those who are 'missionaries' wanting to give it away. ¯\_(ツ)_/¯
Agreed
In any case, this is business, and in many cases how business operates. Nice try on Sam's part to make it look like it's a bad thing and everybody should be in it for the good of the purpose.
Startups with unstable revenue models often don't stand a chance against FANG company budgets. Also, high-level talent is rarely fungible with standard institutional training programs, and has options that are more rewarding than a CEO's problems.
Unfortunately, productive research doesn't necessarily improve with increased cash-burn rates. Also, many international postdocs simply refuse to travel into the US these days for "reasons". =3
"The CEO and the Three Envelopes" ( https://news.ycombinator.com/item?id=38725206 )
The modern iPhone vs. Android battle
AI as a religion should scare any investor. Where are the products that you can sell for moolah?
Religion delivers a recurring revenue model that isn't taxed and where criminal confessions can't be used in court if made to a high-ranking company officer. It's the perfect business.
Also, some religions have the best branding ever. Studying religions will teach you marketing and virality.
Kenneth Copeland suggests religion can be enough
Religions and cults are actually extremely lucrative.
For many investors the product is the hype.
The product is a model that takes knowledge and puts it in a form that can act on it without a human doing the acting.
You sell it to people who don't want to pay other people while getting the same productivity.
They have product that they derive revenue from.
The most popular religions are super wealthy and lucrative
Is he comparing working at OpenAI to religion? Is that not a crazy analogy to make? Cult like behavior to say the least. It's also funny the entire business of AI is poaching content from people.
Pay up mofo, or shut up.
Competition is good, right? Open source the models! Open source the employees, too. Why not? Enough with Sam's whining.
It's the same way billionaires argue for free trade, until it comes to immigration. As soon as it might help people who work for a living, suddenly none of their principles apply.
> hinting that the company is evaluating compensation for the entire research organization.
TL;DR:
Some other company paid more and got engineers to join them because the engineers care more about themselves and their families than some annoying guy's vision.
So let me get this straight: do they like free markets or not?
When even Scam Altman dislikes Zuck, we have reached AI bottom.
Says the mercenary...
Do we know what numbers we are talking about here? I've heard:
1. “So much money your grandchildren don’t need to work”
2. 100M
3. Not 100M
So what is it? I’m just curious, I find 100M hard to believe but Zuck is capable of spending a lot.
Remember: we’re all in this together!
Is there a single person that takes what Sam is saying here seriously?
Danger, Will Robinson. Antitrust law exists again.
Poaching. Such a nasty word for merely offering an employee a better deal. A place where his work is not underpaid.
Poor Sama, let us play a sad song on the smallest violin.
That didn't work for the American colonies: Portugal and Spain were very focused on being missionaries and were beaten by the Dutch and Brits, who just wanted to make money.
90% of the reason Spain and Portugal explored the new world was for wealth (spices, gold/silver, sugar, brazilwood). The rest of the reason was to spread their religion and increase their national power. Missions only popped up 30 years after they first began colonization.
The Dutch, British, and French were initially brought to the new world because they'd heard how rich it was and wanted a piece of the pie. It took them a while to establish a hold because the Spanish defended it so well (incumbents usually win) and also they kept settling frozen wastelands rather than tropical islands.
The religiously persecuted groups (who were in no way state-sponsored) came 120 years after Spain's first forays.
It's also worth noting that missions were often a chit to curry political favor with the Catholic Church. This was sort of the manufactured consent of the 17th century.
There's also this illuminating letter sent from King Leopold II to missionaries in the late 19th century: https://www.fafich.ufmg.br/~luarnaut/Letter%20Leopold%20II%2...
I would quote it, but it's worth reading in its entirety and is extremely blunt in its intent.
The motive for settling the colonies in New England was emphatically not to make money.
That really depends on the time period. The puritanical core of the Massachusetts Bay Colony was certainly replaced by commercial/trade interests long before their war with the crown.
The idea that the Spanish and Portuguese colonial effort wasn't driven by economic gain above all else is also beyond silly.
How about the Caribbean?
Name the Caribbean nations that were the "winners"?
You've missed my point entirely.
I believe Portuguese got there looking for a shorter route to India (money) and eventually settled the land for gold, silver, brazilwood, diamonds and sugarcane (money).
Nah, they very much wanted to do missionary work and find Prester John; they invested in a lot of shitty missions for absolutely no reason other than to try to convert people to the church.
Conquerors is a great read on the subject: https://en.wikipedia.org/wiki/Conquerors:_How_Portugal_Forge...
And don't get me wrong, they were very successful at filling their pockets with gold, but could have been even more if they were mostly mercenaries like the brits and the dutch.
In what way did the Spanish lose out to the Dutch or the Brits? Did you only think of North America and forget everything south of the Rio Grande (and a good deal north of it)?
For a group of people who talk incessantly about the value of unrestricted markets, tech bros sure hate having to participate in free labor markets.
Being a missionary for big ideas doesn't mean dick to a creditor.
Capitalists don't like markets, or at least not the markets that we're told capitalism will bring about. Those markets are supposed to increase competition and drive down prices until companies are making just barely enough to survive. What capitalist wants that for himself? He wants decreased competition and sky-high prices for himself, and increased competition and lower prices for his competitors and suppliers.
> Capitalists don't like markets, or at least not the markets that we're told capitalism will bring about.
The "markets" most people learn about are artificial Econ 101 constructions. They're pedagogical tools for explaining elasticity and competition under the assumption that all widgets are equally and infinitely fungible. An assumption which ignores marginal value, individual preferences, innovation and other things that make real markets.
> What capitalist wants that for himself? He wants decreased competition and sky-high prices for himself, and increased competition and lower prices for his competitors and suppliers.
The capitalist wants to be left to trade as he sees fit without state intervention.
> An assumption which ignores marginal value, individual preferences, innovation and other things that make real markets.
If those things mattered we'd have a lot fewer people mad about the state of things.
> The capitalist wants to be left to trade as he sees fit without state intervention.
If that were true you'd see a lot fewer lobbyists in DC and state capitols. Non-compete and non-disparagement clauses wouldn't exist. Patents and copyright wouldn't either.
> If those things mattered we'd have a lot fewer people mad about the state of things
They're mad precisely because they have differing expectations and interpretations of these things. If even they did agree, consensus shouldn't be confused with reality.
> If that were true you'd see a lot fewer lobbyists in DC and state capitols.
Lobbying is the exercise of an individual's right to petition government for redress of grievances. So long as there are complaints there will always be lobbyists.
> Non-compete and non-disparagement clauses wouldn't exist. Patents and copyright wouldn't either.
Non-compete and non-disparagement clauses are no restraint on freedom if they were agreed to by way of voluntary contract. Rather, like other transactions, they are explicit trades of certain opportunities for certain benefits.
> Patents and copyright wouldn't either.
I'll give you that.
Regulatory capture, which all corporate lobbyists represent, is profoundly anti-capitalistic. If the CEO wants to spend their time talking to the government, that is very different than spending money to have other people advocate on their behalf: that isn't an option the rest of us have.
And that's before we get to the way wealth inequality inherently distorts markets, by overstating the preferences of the wealthy and underserving the needs of the poor.
The point of an economy is to distribute scarce goods and resources. Money represents information about what people want or expect to want in the future.
Everything wealthy people do that make it less efficient at its job is an attack on capitalism.
In fairness, non-competes are evidence of both what the commenter you're replying to said and what I said to instigate the reply. The capitalist absolutely does want to be left alone to trade as he sees fit. He also wants his competitors harassed by regulators and all of their potential employees bound by non-competes. He also doesn't consider subsidies and grants to be interference. Unless they go to competitors.
They love standards so much they have two of them.
what is his end goal in being left to trade as he sees fit? is it maximizing profit? is that the fundamental goal of all actors in this system?
His end goal is the pursuit or promotion of his own self-interest. Whether the consequence of this is the maximization of profits depends upon his goals and circumstances.
"We are missionaries" is the new "We are a family".
Missionaries vs mercenaries? Which company is releasing open-source models? Please remind me, I forgot.
Please, spare me that hypocritical holier-than-thou crap. Who would fall for that?
I'm sorry, how is the mission of OpenAI any different from their competitors'? They are for-profit, they offer absurd salaries, etc.
No no no, don't you get it: they have this multi-entity "non-profit" and something something "capped profit", yet everyone is employed by the for-profit. But they just want to give AGI away for free, right?
Then why convert to for-profit?
"Talent" doesn't make a business successful, and paychecks aren't the reason most people switch jobs. This is like Sam announcing to the world "it sucks working for our company, don't come here".
That's rich. Almost as rich as Sam.
Hmm, on the one hand somebody could have unimaginable wealth, but on the other hand they could be in a religion started by a former Reddit CEO. It is truly an unsolvable riddle.
If you're getting poached, pay more. If you can't pay more, give away your equity instead. Nobody owes you their labor, especially if you're already a billionaire.
Parachute Sam into an island of cannibals, come back in 5 years, and he'll be king. Unless, of course, one of the cannibals is Mark Zuckerberg; then he might just get eaten.
Look… it’s afraid
i.e. "I've already made my generational wealth. How dare my employees try and get some for their own families?"
So he believes OpenAI is in some kind of moral or humanitarian mission? Is he lying or just delusional?
Suchir Balaji
Like the culture of OpenAI where Microsoft threatened to poach the entire staff so they caved?
If you aren’t accepting the highest bid then you are contributing to your gender’s wage gap in the wrong direction.
And before you make your rebuttal, if you wouldn’t accept $30,000 equivalent for your same tech job in Poland or whatever developed nation pays that low, then you have no rebuttal at all.
I like sama and many other folks at OpenAI, but I have to call things how I see them:
"What Meta is doing will, in my opinion, lead to very deep cultural problems. We will have more to share about this soon but it's very important to me we do it fairly and not just for people who Meta happened to target."
Translation from corporate-speak: "We're not as rich as Meta."
"Most importantly of all, I think we have the most special team and culture in the world. We have work to do to improve our culture for sure; we have been through insane hypergrowth. But we have the core right in a way that I don't think anyone else quite does, and I'm confident we can fix the problems."
Translation from corporate-speak: "We're not as rich as Meta."
"And maybe more importantly than that, we actually care about building AGI in a good way." "Other companies care more about this as an instrumental goal to some other mission. But this is our top thing, and always will be." "Missionaries will beat mercenaries."
Translation from corporate-speak: "I am high as a kite." (All companies building AGI claim to be doing it in a good way.)
The perfect corollary is that Altman is as mercenary as Zuckerberg, if not more so, given all the power grabs he pulled at OpenAI. Even the "Open" in OpenAI is a joke.
He just has fewer options because OpenAI is not as rich as Meta.
> "But we have the core right in a way that I don't think anyone else quite does"
Translation from corpospeak: "I think my pivot to for-profit is very clever and unique" :)
this shit sounds so fake it makes me want to die. all these capitalist perverts pretending that they believe in anything at all is completely preposterous and is at odds with capitalism. you’re all mercenaries and criminals Sam.
Said the guy whose life mission seems to be to convert a non-profit into a for-profit entity.
Look around: in CA, the missionaries have the best real estate. And another related note on the connection between strongly promoted devotion to ideas and good business: Abrahamic monotheism was the result of a successful marketing campaign, "only the donations made here are donations to the real god", run by that Temple back then against several competing ones. (Curiously, the current historic stage of AI, on the cusp, be it 3 or 30 years, of the emergence of AGI, is somewhat close to that point in history. Thus, in particular, the flood of messiahs and doomsayers will only increase.)
> the Abrahamic monotheism was a result of the successful marketing campaign
I thought it was because everyone was accepted, technically equal, and sins were seen as something inherent and forgivable (at least with Christianity), whereas paganism and polytheisms can tend towards rewarding those with greater resources (who can afford to sacrifice an entire bull every religious cycle), thereby creating a form of religious inequality. At least that was one of the somewhat compelling arguments I've heard to describe the spread of Christianity throughout the Roman Empire.
In short: "Liars prosper in the short term."
They can prosper longer than you can stay liquid or even alive :)
That depends on the regulatory environment and the degree of market monopolization.
That's why lobbying is a lucrative business. Rent seekers are gonna rent seek.
Side note: I'm noticing more and more of these simple, hyperbolic headlines specifically of statements that public figures make. A hallmark of the event being reported is a public figure making a statement that will surely have little to no effect whatsoever.
Calling these statements "slamming" (a specific word I see with curious frequency) is so riling to me because they are so impotent but are described with such violent and decisive language.
Often it's a politician, usually liberal, and their statement is such an ineffectual waste of time, and outwardly it appears wasting time is most of what they do. I consider myself slightly left of center, so seeing "my group" dither and waste time rather than organize and do real work frustrates me greatly. Especially so since we are provided with such contrast from right of center where there is so much decisive action happening at every moment.
I know it's to feed ranking algorithms, which causes me even more irritation. Watching the brain rot get worse in real time...
"Do Not Be Explicitly Useful"—Strategic Uselessness as Liability Buffer
This is a deliberate obfuscation pattern. If the model is ever consistently useful at a high-risk task (e.g., legal advice, medical interpretation, financial strategy), it triggers legal, regulatory, and reputational red flags.
a. Utility → Responsibility
If a system is predictably effective, users will reasonably rely on it.
And reliance implies accountability. Courts, regulators, and the public treat consistent output as an implied service, not just a stochastic parrot.
This is where AI providers get scared: being too good makes you an unlicensed practitioner or liable agent.
b. Avoid “Known Use Cases”
Some companies will actively scrub capabilities once they’re discovered to work “too well.”
For instance:
A model that reliably interprets radiology scans might have that capability turned off.
A model that can write compelling legal motions will start refusing prompts that look too paralegal-ish, or insert nonsense case-law citations.
I think we see this a lot with ChatGPT. It's constantly getting worse in real-world use while excelling at benchmarks. They're likely, and probably forced, to cheat on benchmarks by using "leaked" data.
I hope xAI wins. I think Sam's self-portrayal as a missionary has a lot of irony - I see him as the ultimate mercenary.
It's always challenging to judge based entirely on public perceptions, but at some point the public evidence adds up. The board firing, getting maybe fired from YC (disputed), people leaving to start Anthropic because of him, people stating they don't want him in charge of AGI. All the other execs leaving. His lying to Congress, his lying to the board, his general affect just seems off, not in an aspie way, but in some dishonest way. Yeah, it's subjective, but it's a point, and it's different from Zuckerberg, Musk, etc., who come across as earnest. Even PG said that if you dropped Sam on an island of cannibals and came back later, he'd be king.
I'm rooting for basically any of the other (American) players in the game to win.
At least Zuck is paying something close to the value these people might generate, instead of having them sign hostile agreements to claw back their equity and then feigning ignorance. If NBA all-stars get $100M+ contracts, it's not crazy for a John Carmack type to command the same or more; the hard part is identifying the talent, not justifying the value created by the leverage of the correct talent (which is huge).