This is hilarious. But the compiler itself is working; it's just that the path to the stdlib isn't being passed properly.
https://github.com/anthropics/claudes-c-compiler/issues/1#is...
Except it's... all wrong: this dependency-free compiler has a hard dependency on gcc (even as it claims to be a drop-in replacement), it has so many hardcoded paths, etc.
The negativity around the lack of perfection, for something that was literal fiction just a few years ago, is amazing.
If more people were able to step back and think about the potential growth over the next 5-10 years, I think the discussion would be very different.
I am grateful to be able to witness all this amazing progress play out, but I am also concerned about the wide-ranging implications.
> think about the potential growth for the next 5-10 years,
I thought about it and it doesn't seem that bright. The problem is not that LLMs generate inferior code faster; it's that at some point some people will be convinced that this code is good enough and can be used in production. At that point, the programming skills of the population will devolve and fewer people will understand what's going on. Human programmers will only work in financial institutions etc.; the rest will be a mess. Why? Because generated code is starting to be a commodity and the buyer doesn't understand how bad it is.
So we're at the stage where global companies decided it was a fantastic idea to outsource the production of everything to China, and individuals are buying Chinese plastic gadgets en masse. Why? Because it's very cheap compared to the real thing.
This is what the kids call “cope”, but it comes from a very real place of fear and insecurity.
Not the kind of insecurity you get from your parents mind you, but the kind where you’re not sure you’re going to be able to preserve your way of life.
> Not the kind of insecurity you get from your parents mind you
I don't get this part. At least my experience is the opposite: it's basically the basic function of parents to give their child the sense of security.
That’s the joke.gif
My hot take is that portions of both the pro- and anti-LLM factions are indulging in the copium. That LLMs can regurgitate a functioning compiler means they have exceeded the abilities of many developers, and wholeheartedly embracing or rejecting LLMs isn't going to save those who have been exceeded from being devalued.
The only safety lies in staying ahead of LLMs or migrating to a field that's out of reach of them.
Sorry, but I think you have it the other way around.
The ones against it understand fully what the tech means for them and their loved ones. Even if the tech doesn't deliver on all of its original promises (which is looking more and more unlikely), it still has enough capabilities to severely affect the lives of a large portion of the population.
I would argue that the ones who are inhaling "copium" are the ones who are hyping the tech. They are coping/hoping that if the tech partially delivers what it promises, they get to continue to live their lives the same way, or even an improved version. Unless they already have underground private bunkers with a self-sustaining ecosystem, they are in for a rude awakening, because at some point they are going to need to go out and do the grocery shopping.
There is a massive difference between a result like this when it's a research project and when it's being pushed by billion-dollar companies as the solution to all of humanity's problems.
In business, as a product, results are all that matter.
As a research and development effort, it's exciting and interesting as a milestone on the path to something revolutionary.
But I don't think it's ready to deliver value. Building a compiler that almost works is of no business value.
No one can correctly quantify what these models can and can't do. That leads to the people in charge completely overselling them (automating all white-collar jobs, doing all software engineering, etc.) and the people threatened by those statements firing back when the models inevitably fail at doing what was promised.
They are very capable, but it's very hard to explain to what degree. It is even harder to quantify what they will be able to do in the future and what inherent limits exist. Again, this leads the people benefiting from the tech to claim that there are no limits.
The truth is that we just don't know. And there are too few good folks out there who are actually reasonable about it, because the ones who know are working on the tech and benefit from more hype. Karpathy is one of the few who left the rocket and gives a still-optimistic but reasonable perspective.
The negativity is around the unceasing hype machine.
Schadenfreude predates AI by millennia. Humans gonna human.
It’s a fear response.
It could also be that, so often, the claims of what LLMs achieve are so overstated that people feel the need to take it down a notch.
I think lofty claims ultimately hurt the perception of AI. If I wanted to believe AI was going nowhere, I would listen to people like Sam Altman, who seems to believe in something more akin to a religion than a pragmatic approach. That, to me, does not breed confidence. Surely, if the product is good, it would not require evangelism or outright deceit? For example, claiming this implementation was "clean room". Words have meaning.
This feat was very impressive, no doubt. But with each exaggeration, people lose faith. They begin to wonder - what is true, and what is marketing? What is real, and what is a cheap attempt by companies to rake in whatever cold, hard AI cash they can? Is this opportunistic, like viral pneumonia, or something we should really be looking at?
No.
While many comments here are in reaction to other comments:
Some people hype up LLMs without admitting any downsides. So, naturally, others get irritated with that.
Some people anti-hype LLMs without admitting any upsides. So, naturally, others get irritated with that.
I want people to write comments which are measured and reasonable.
This reply is argumentum ad personam. We could reverse it and say GenAI companies push this hype down our throats because of fear that they are burning cash with no moat but these kinds of discussions lead nowhere. It's better to focus on core arguments.
I think it's a good antidote to the hype train. These things are impressive but still limited; hearing solely about the hype is also a problem.
"We can now expensively generate useless things! Why are you not more impressed?!"
How does a statistical model become "perfect" instead of merely approaching it? What do you even mean by "perfect"?
We already have determinism in all machines without this wasteful layer of slop and indirection, and we're all sick and tired of the armchair philosophy.
It's very clear where LLMs will be used and it's not as a compiler. All disagreements with that are either made in bad faith or deeply ignorant.
It is wild that this is getting flagged!
Wait, why IS this flagged? It's a fairly straight-up tech topic - granted, somewhat in a humorous vein, but still valid?
Flagging is the new downvote, with extra power. No one can say no to you; if enough people (who knows how many - 1, 5, 20? Definitely an order of magnitude fewer than upvotes, at least) do it, the system automatically hides the post. And unless the mods care, the system can be abused very easily.
I’ve seen posts with 500+ upvotes that were still flagged. I think the balance and automation around flagging is completely off and too easily abused.
>enough people
it's less than 5 :)
Ah, two megapixel PNG screenshots of console text (one hidpi, too!), and of some IDE also showing text (plus a lot of empty space)... Great, great job, everyone.
Would appreciate unflagging this.
It really can replace human engineers, mistakes and all. I've definitely written an "example" that I didn't actually test, only to find out it doesn't work.
I wonder if it feels the same embarrassment and shame I do, too.
Why is this flagged?
HNers generally flag anything with negative sentiment or portrayal of Anthropic, Apple, and Tesla.
Seems like a nothingburger? Mostly a spammy GitHub thread of people not reading the rest of the responses.
> Works if you supply the correct include path(s)
> Can confirm, works fine:
> You could arguably fault ccc's driver for not specifying the include path to find the native C library on this system.
> (I followed the instructions in the BUILDING_LINUX.txt file in the repo and got the kernel built for RISC-V. You can find the build I made here if someone is just interested in the binaries)
>> Works if you supply the correct include path(s)
The location of Standard C headers does not need to be supplied to a conformant compiler.
>> You could arguably fault ccc's driver for not specifying the include path to find the native C library on this system.
This is not a good implementation decision for a compiler which is not the C compiler distributed with the OS. Even though Standard C headers have well-defined names and public contracts, how they are defined is very much compiler-specific.
So this defect is a "somethingburger."
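To make the failure mode concrete, here is a hypothetical sketch of how a compiler driver's header lookup might work. This is not ccc's actual code; the directory list (including the Debian/Ubuntu multiarch path) is an assumption for illustration:

    #include <stdio.h>

    /* Directories a driver might bake in or probe at startup; the
       exact set is compiler- and distro-specific, which is the point. */
    static const char *include_dirs[] = {
        "/usr/local/include",
        "/usr/include/x86_64-linux-gnu",  /* multiarch layout, assumed */
        "/usr/include",
    };

    /* Write the first existing path for `header` into buf; 0 on success. */
    static int resolve_header(const char *header, char *buf, size_t n) {
        for (size_t i = 0; i < sizeof include_dirs / sizeof *include_dirs; i++) {
            snprintf(buf, n, "%s/%s", include_dirs[i], header);
            FILE *f = fopen(buf, "r");
            if (f) { fclose(f); return 0; }
        }
        return -1;
    }

    int main(void) {
        char path[512];
        if (resolve_header("stdio.h", path, sizeof path) == 0)
            printf("found: %s\n", path);
        else
            printf("stdio.h not found: the search list is wrong for this system\n");
        return 0;
    }

If a list like this is hardcoded for the wrong distro layout, every #include <...> fails even though the compiler proper works, which matches the errors people are reporting.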
Well, this compiler was written to build Linux as a proof of concept. You don't need a libc to build the kernel. Was it claimed anywhere that it is a fully compliant C compiler?
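To illustrate, kernels are normally built freestanding, with their own headers and helpers instead of libc. A minimal sketch of what that style of code looks like (the build flags and names below are the usual GCC/Clang conventions, assumed here rather than taken from ccc's docs):

    /* Kernel-style code defines its own types and helpers instead of
       including libc headers, so the compiler never needs to know
       where the host's <stdio.h> lives.
       Typical build: cc -ffreestanding -nostdlib -c kstring.c */

    typedef unsigned long ksize_t;  /* kernel's own size type */

    /* The kernel ships its own memcpy rather than pulling in libc's. */
    void *kmemcpy(void *dst, const void *src, ksize_t n) {
        unsigned char *d = dst;
        const unsigned char *s = src;
        while (n--)
            *d++ = *s++;
        return dst;
    }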
That's kind of moving the goalposts, no? They set out to build a C compiler that could compile the kernel, and it can do that just fine.
Searching for "compliant" in the README doesn't indicate that building a "conformant compiler" was even the goal here, so I'm not sure why that should suddenly become a requirement.
The anti-AI crowd proves that they do need replacing as programmers since it was user error. Opus 4.6/ChatGPT 5.3 xhigh is superior to the vast majority of programmers. Talk about grasping for straws.
They're literally following the first few lines of the README, exactly as instructed by Claude. I don't think it's unreasonable to point out the issue.
This will do the rounds on the front page of Reddit with no mention of the user's C library paths being the root cause, despite the clear error message stating exactly that.
They had GCC to use as an oracle/source of truth. Humans intervened multiple times. Clearly writing C compilers is a huge part of its training data—the literal definition of training on test data.
Wake me up when a model trained only on data through the year 1950 can write a C compiler.
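For anyone unfamiliar with the "oracle" framing, it usually means differential testing: the reference compiler's behavior defines correctness. A rough sketch of the idea, where the compiler names, file names, and the harness itself are illustrative assumptions rather than the actual workflow used here:

    #include <stdio.h>
    #include <stdlib.h>

    /* Compile the same test case with the oracle (gcc) and the
       candidate (ccc), run both, and compare observable output. */
    int main(void) {
        if (system("gcc test.c -o ref") != 0)
            return 1;  /* the test case itself is bad */
        if (system("ccc test.c -o out") != 0) {
            puts("candidate failed to compile the test case");
            return 1;
        }
        if (system("./ref > ref.txt") != 0 || system("./out > out.txt") != 0)
            return 1;
        if (system("cmp -s ref.txt out.txt") == 0)
            puts("match: candidate agrees with the oracle");
        else
            puts("MISMATCH: candidate diverges from the oracle");
        return 0;
    }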