ndesaulniers an hour ago

I spent a good part of my career (nearly a decade) at Google working on getting Clang to build the linux kernel. https://clangbuiltlinux.github.io/

This LLM did it in (checks notes):

> Over nearly 2,000 Claude Code sessions and $20,000 in API costs

It may build, but does it boot (which was also a significant and distinct next milestone)? (Also, will it blend?) Looks like yes!

> The 100,000-line compiler can build a bootable Linux 6.9 on x86, ARM, and RISC-V.

The next milestone is:

Is the generated code correct? The jury is still out on that one for production compilers. And then you have performance of generated code.

> The generated code is not very efficient. Even with all optimizations enabled, it outputs less efficient code than GCC with all optimizations disabled.

Still a really cool project!

  • shakna 41 minutes ago

    > Opus was unable to implement a 16-bit x86 code generator needed to boot into 16-bit real mode. While the compiler can output correct 16-bit x86 via the 66/67 opcode prefixes, the resulting compiled output is over 60kb, far exceeding the 32k code limit enforced by Linux. Instead, Claude simply cheats here and calls out to GCC for this phase

    Does it really boot...?

    • ndesaulniers 29 minutes ago

      > Does it really boot...?

      They don't need 16-bit x86 support for the RISC-V or ARM ports, so yes, but it depends on what 'it' we're talking about here.

      Also, FWIW, GCC doesn't directly assemble to machine code either; it shells out to GAS (GNU Assembler). This blog post calls it "GCC assembler and linker" but to be more precise the author should edit this to "GNU binutils assembler and linker." Even then GNU binutils contains two linkers (BFD and GOLD), or did they excise GOLD already (IIRC, there was some discussion a few years ago about it)?

      • shakna 7 minutes ago

        Yeah, didn't mention gas or ld, for similar reasons. I agree that a compiler doesn't necessarily "need" those.

        I don't agree that all the claims are backed up by their own comments, which means that there's probably other places where it falls down.

        It's... misrepresentation.

        Like Chicken is a Scheme compiler. But they're very up front that it depends on a C compiler.

        Here, they wrote a C compiler that is at least sometimes reliant on having a different C compiler around. So is the project at 50%? 75%?

        Even if it's 99%, that's not the same story as the one they tried to write. And if they had written that tale instead, it would be more impressive, rather than "There's some holes. How many?"

  • beambot 25 minutes ago

    This is getting close to a Ken Thompson "Trusting Trust" era -- AI could soon embed itself into the compilers themselves.

    • bopbopbop7 21 minutes ago

      A pay to use non-deterministic compiler. Sounds amazing, you should start.

      • Aurornis 5 minutes ago

        Application-specific AI models can be much smaller and faster than the general purpose, do-everything LLM models. This allows them to run locally.

        They can also be made to be deterministic. Some extra care is required to avoid computation paths that lead to numerical differences on different machines, but this can be accomplished reliably with small models that use integer math and use kernels that follow a specific order of operations. You get a lot more freedom to do these things on the small, application-specific models than you do when you're trying to run a big LLM across different GPU implementations in floating point.
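
        A tiny C illustration of the float-vs-integer point (my own sketch, not from the comment): floating-point addition is not associative, so different accumulation orders (as on different GPU reduction trees) can give different results, while integer accumulation is exact and order-independent:

        ```c
        #include <stdio.h>

        int main(void) {
            /* Float addition is not associative: grouping changes the result
             * because 1.0f is below the rounding granularity near 1e8. */
            float a = 1e8f, b = -1e8f, c = 1.0f;
            printf("%g vs %g\n", (a + b) + c, a + (b + c)); /* 1 vs 0 */

            /* Integer accumulation is exact, so any order agrees. */
            long x = 100000000L, y = -100000000L, z = 1L;
            printf("%ld vs %ld\n", (x + y) + z, x + (y + z)); /* 1 vs 1 */
            return 0;
        }
        ```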

      • ndesaulniers 14 minutes ago

        Some people care more about compile times than the performance of generated code. Perhaps even the correctness of generated code. Perhaps more so than determinism of the generated code. Different people in different contexts can have different priorities. Trying to make everyone happy can sometimes lead to making no one happy. Thus dichotomies like `-O2` vs `-Os`.

        EDIT (since HN is preventing me from responding):

        > Some people care more about compiler speed than the correctness?

        Yeah, I think plenty of people writing code in languages that have concepts like Undefined Behavior technically don't care as much about correctness as they may claim, as it's pretty hard to write large volumes of code without indirectly relying on UB somewhere. What is correct in such cases was left up to the interpretation of the implementer by ISO WG14.
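
        To illustrate (a hypothetical sketch, not from the comment): a common way code silently relies on UB is testing for signed overflow after the fact. `x + 1 > x` is undefined when `x == INT_MAX`, so an optimizer may fold the test to constant true; the well-defined version tests before overflowing:

        ```c
        #include <limits.h>
        #include <stdio.h>

        /* Well-defined overflow check: test *before* incrementing.
         * The tempting `x + 1 > x` is UB at INT_MAX and may be folded
         * by the optimizer to a constant 1 ("true"). */
        static int can_increment(int x) {
            return x < INT_MAX;
        }

        int main(void) {
            printf("%d %d\n", can_increment(5), can_increment(INT_MAX));
            return 0;
        }
        ```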

        • bopbopbop7 7 minutes ago

          Some people care more about compiler speed than the correctness? I would love to meet these imaginary people that are fine with a compiler that is straight up broken. Emitting working code is the baseline, not some preference slider.

        • chasd00 5 minutes ago

          a compiler introducing bugs into code it compiles is a nightmare thankfully few have faced. The only thing worse would be a CPU bug like the legendary Pentium bug. Imagine you compile something like Postgres only to have it crash in some unpredictable way. How long do you stare at Postgres source before suspecting the compiler? What if this compiler was used to compile code in software running all over cloud stacks? Bugs in compilers are very bad news, they have to be correct.

    • ndesaulniers 9 minutes ago

      We're already starting to see people experimenting with applying AI towards register allocation and inlining heuristics. I think that many fields within a compiler are still ripe for experimentation.

      https://llvm.org/docs/MLGO.html

  • zaphirplane 39 minutes ago

    What were the challenges, out of interest? Was some of it the use of GCC extensions, which needed equivalents in Clang and porting over to those equivalents?

    • ndesaulniers 23 minutes ago

      `asm goto` was the big one. The x86_64 maintainers broke the clang builds very intentionally just after we had gotten x86_64 building (with necessary patches upstreamed) by requiring compiler support for that GNU C extension. This was right around the time of meltdown+spectre, and the x86_64 maintainers didn't want to support fallbacks for older versions of GCC (and ToT Clang at the time) that lacked `asm goto` support. `asm goto` requires plumbing throughout the compiler, and I've learned more about register allocation than I particularly care...
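
      For readers unfamiliar with the extension, a minimal sketch of `asm goto` (my own toy example, not kernel code; assumes x86-64 with GCC or a recent Clang), which lets inline assembly branch directly to a C label:

      ```c
      #include <stdio.h>

      /* `asm goto` lets the asm template jump to C labels (referenced via
       * %l[label]); no output operands are allowed in its original form. */
      static int is_nonzero(int x) {
          asm goto("test %0, %0\n\t"
                   "jnz %l[nonzero]"
                   : /* no outputs */
                   : "r"(x)
                   : "cc"
                   : nonzero);
          return 0;
      nonzero:
          return 1;
      }

      int main(void) {
          printf("%d %d\n", is_nonzero(0), is_nonzero(7));
          return 0;
      }
      ```

      The kernel uses this pattern for things like static branches, where the taken/not-taken path can be patched at runtime.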

      Fixing some UB in the kernel sources, lots of plumbing to the build system (particularly making it more hermetic).

      Getting the rest of the LLVM binutils substitutes to work in place of GNU binutils was also challenging. Rewriting a fair amount of 32b ARM assembler to be "unified syntax" in the kernel. Linker bugs are hard to debug. Kernel boot failures are hard to debug (thank god for QEMU+gdb protocol). Lots of people worked on many different parts here, not just me.

      https://github.com/ClangBuiltLinux/linux/issues for a good historical perspective. https://github.com/ClangBuiltLinux/linux/wiki/Talks,-Present... for talks on the subject. Keynoting LLVM conf was a personal highlight (https://www.youtube.com/watch?v=6l4DtR5exwo).

  • phillmv an hour ago

    i mean… your work also went into the training set, so it's not entirely surprising that it spat a version back out!

    • underdeserver 41 minutes ago

      Anthropic's version is in Rust though, so at least a little different.

      • ndesaulniers 18 minutes ago

        There are parts of the LLVM architecture that are long in the tooth, IMO (as is the language it's implemented in).

        I had hoped one day to re-implement parts of LLVM itself in Rust; in particular, I've been curious whether we could concurrently compile C (and parse C in parallel, or lazily), approaches that haven't been explored in LLVM, and that I think might be safer to do in Rust. I don't know enough about grammars to know if it's technically impossible, but a healthy dose of ignorance can sometimes lead to breakthroughs.

        LLVM is pretty well designed for test. I was able to implement a lexer for C in Rust that could lex the Linux kernel, and use clang to cross-check my implementation (I would compare my interpretation of the token stream against clang's). Just having a standard module system for reusable pieces seems like perhaps a better way to compose a toolchain, but maybe folks with more experience with rustc have scars and would disagree?

      • rwmj 34 minutes ago

        It's not really important in latent space / conceptually.

NitpickLawyer 3 hours ago

This is a much more reasonable take than the cursor-browser thing. A few things that make it pretty impressive:

> This was a clean-room implementation (Claude did not have internet access at any point during its development); it depends only on the Rust standard library. The 100,000-line compiler can build Linux 6.9 on x86, ARM, and RISC-V. It can also compile QEMU, FFmpeg, SQlite, postgres, redis

> I started by drafting what I wanted: a from-scratch optimizing compiler with no dependencies, GCC-compatible, able to compile the Linux kernel, and designed to support multiple backends. While I specified some aspects of the design (e.g., that it should have an SSA IR to enable multiple optimization passes) I did not go into any detail on how to do so.

> Previous Opus 4 models were barely capable of producing a functional compiler. Opus 4.5 was the first to cross a threshold that allowed it to produce a functional compiler which could pass large test suites, but it was still incapable of compiling any real large projects.

And the very open points about limitations (and hacks, as cc loves hacks):

> It lacks the 16-bit x86 compiler that is necessary to boot [...] Opus was unable to implement a 16-bit x86 code generator needed to boot into 16-bit real mode. While the compiler can output correct 16-bit x86 via the 66/67 opcode prefixes, the resulting compiled output is over 60kb, far exceeding the 32k code limit enforced by Linux. Instead, Claude simply cheats here and calls out to GCC for this phase

> It does not have its own assembler and linker;

> Even with all optimizations enabled, it outputs less efficient code than GCC with all optimizations disabled.

Ending with a very down to earth take:

> The resulting compiler has nearly reached the limits of Opus’s abilities. I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.

All in all, I'd say it's a cool little experiment, impressive even with the limitations, and a good test case, as the author says: "The resulting compiler has nearly reached the limits of Opus’s abilities". Yeah, that's fair, but still highly impressive IMO.

  • geraneum 3 hours ago

    > This was a clean-room implementation

    This is really pushing it, considering it’s trained on… internet, with all available c compilers. The work is already impressive enough, no need for such misleading statements.

    • raincole an hour ago

      It's not a clean-room implementation, but not because it's trained on the internet.

      It's not a clean-room implementation because of this:

      > The fix was to use GCC as an online known-good compiler oracle to compare against

      • Calavar 2 minutes ago

        By the classical definition of a clean room implementation, it's something that's made by looking at the output but not at the source.

        I agree that having a reference compiler available is a huge caveat though. They're developing against a programmatic checker for a spec that's already had millions of man-hours put into it. This is an optimal scenario for agentic coding, but the vast majority of problems that people are going to want to tackle with agentic coding are not going to look like that.

      • regularfry 25 minutes ago

        That doesn't stop it being "clean-room" in any practical sense.

    • antirez 3 hours ago

      The LLM does not contain a verbatim copy of whatever it saw during the pre-training stage. It may remember certain over-represented parts; otherwise it has knowledge about a huge number of topics that is similar to the way you remember things you know very well. And, indeed, if you give it access to the internet or the source code of GCC and other compilers, it will implement such a project N times faster.

      • halxc 2 hours ago

        We all saw verbatim copies in the early LLMs. They "fixed" it by implementing filters that trigger rewrites on blatant copyright infringement.

        It is a research topic for heaven's sake:

        https://arxiv.org/abs/2504.16046

        • Aurornis a minute ago

          Simple logic will demonstrate that you can't fit every document in the training set into the parameters of an LLM.

          Citing a random arXiv paper from 2025 doesn't mean "they" used this technique. It was someone's paper that they uploaded to arXiv, which anyone can do.

        • RyanCavanaugh 2 hours ago

          The internet is hundreds of billions of terabytes; a frontier model is maybe half a terabyte. While they are certainly capable of doing some verbatim recitations, this isn't just a matter of teasing out the compressed C compiler written in Rust that's already on the internet (where?) and stored inside the model.

          • philipportner an hour ago

            This seems related, it may not be a codebase but they are able to extract "near" verbatim books out of Claude Sonnet.

            https://arxiv.org/pdf/2601.02671

            > For Claude 3.7 Sonnet, we were able to extract four whole books near-verbatim, including two books under copyright in the U.S.: Harry Potter and the Sorcerer’s Stone and 1984 (Section 4).

          • seba_dos1 8 minutes ago

            > The internet is hundreds of billions of terabytes; a frontier model is maybe half a terabyte.

            The lesson here is that the Internet compresses pretty well.

          • mft_ 26 minutes ago

            (I'm not needlessly nitpicking, as I think it matters for this discussion)

            A frontier model (e.g. latest Gemini, Gpt) is likely several-to-many times larger than 500GB. Even Deepseek v3 was around 700GB.

            But your overall point still stands, regardless.

        • ben_w 2 hours ago

          We saw partial copies of large or rare documents, and full copies of smaller widely-reproduced documents, not full copies of everything. An e.g. 1 trillion parameter model is not a lossless copy of a ten-petabyte slice of plain text from the internet.

          The distinction may not have mattered for copyright laws if things had gone down differently, but the gap between "blurry JPEG of the internet" and "learned stuff" is more obviously important when it comes to e.g. "can it make a working compiler?"

          • tza54j 2 hours ago

            We are here in a clean room implementation thread, and verbatim copies of entire works are irrelevant to that topic.

            It is enough to have read even parts of a work for something to be considered a derivative.

            I would also argue that language models that need gargantuan amounts of training material in order to work can, by definition, only output derivative works.

            It does not help that certain people in this thread (not you) edit their comments to backpedal and make the followup comments look illogical, but that is in line with their sleazy post-LLM behavior.

            • ben_w an hour ago

              > It is enough to have read even parts of a work for something to be considered a derivative.

              For IP rights, I'll buy that. Not as important when the question is capabilities.

              > I would also argue that language models who need gargantuan amounts of training material in order to work by definition can only output derivative works.

              For similar reasons, I'm not going to argue against anyone saying that all machine learning today, doesn't count as "intelligent":

              It is perfectly reasonable to define "intelligence" to be the inverse of how many examples are needed.

              ML partially makes up for being (by this definition) thick as an algal bloom, by being stupid so fast it actually can read the whole internet.

          • antirez 2 hours ago

            Besides, the fact that an LLM may recall parts of certain documents, like I can recall the incipits of certain novels, does not mean that when you ask the LLM to do other kinds of work, work that is not recalling stuff, it will mix such things in verbatim. The LLM knows what it is doing in a variety of contexts, and uses that knowledge to produce stuff. The fact that it is bitter for many people that LLMs can do things that replace humans does not mean (and it is not true) that this happens mainly through memorization. What coding agents can do today has no explanation in memorization of verbatim material. So it's not a matter of copyright. Certain folks are fighting the wrong battle.

            • shakna 28 minutes ago

              During a "clean room" implementation, the implementor is generally selected for not being familiar with the workings of what they're implementing, and banned from researching using it.

              Because it _has_ been enough, that if you can recall things, that your implementation ends up not being "clean room", and trashed by the lawyers who get involved.

              I mean... It's in the name.

              > The term implies that the design team works in an environment that is "clean" or demonstrably uncontaminated by any knowledge of the proprietary techniques used by the competitor.

              If it can recall... Then it is not a clean room implementation. Fin.

          • philipportner an hour ago

            Granted, these are some of the most widely spread texts, but just fyi:

            https://arxiv.org/pdf/2601.02671

            > For Claude 3.7 Sonnet, we were able to extract four whole books near-verbatim, including two books under copyright in the U.S.: Harry Potter and the Sorcerer’s Stone and 1984 (Section 4).

            • ben_w 44 minutes ago

              Already aware of that work, that's why I phrased it the way I did :)

              Edit: actually, no, I take that back, that's just very similar to some other research I was familiar with.

          • boroboro4 2 hours ago

            While I mostly agree with you, it's worth noting that modern LLMs are trained on 10-30T tokens, which is quite comparable to their size (especially given how compressible the data is).

        • soulofmischief an hour ago

          The point is that it's a probabilistic knowledge manifold, not a database.

      • PunchyHamster an hour ago

        So it will copy most of the code while adding subtle bugs.

    • inchargeoncall an hour ago

      [flagged]

      • teaearlgraycold an hour ago

        With just a few thousand dollars of API credits you too can inefficiently download a lossy copy of a C compiler!

  • modeless 3 hours ago

    There seem to still be a lot of people who look at results like this and evaluate them purely based on the current state. I don't know how you can look at this and not realize that it represents a huge improvement over just a few months ago, that there have been continuous improvements for many years now, and that there is no reason to believe progress is stopping here. If you project out just one year, even assuming progress stops after that, the implications are staggering.

    • zamadatix an hour ago

      The improvements in tool use and agentic loops have been fast and furious lately, delivering great results. The model growth itself is feeling more linear lately, but what you can do with models as part of an overall system has been increasing in growth rate and that has been delivering a lot of value. It matters less if the model natively can keep infinite context or figure things out on its own in one shot so long as it can orchestrate external tools to achieve that over time.

    • nozzlegear an hour ago

      Every S-curve looks like an exponential until you hit the bend.

      • NitpickLawyer an hour ago

        We've been hearing this for 3 years now. And especially 25 was full of "they've hit a wall, no more data, running out of data, plateau this, saturated that". And yet, here we are. Models keep on getting better, at more broad tasks, and more useful by the month.

        • nozzlegear 44 minutes ago

          > We've been hearing this for 3 years now

          Not from me you haven't!

          > "they've hit a wall, no more data, running out of data, plateau this, saturated that"

          Everyone thought Moore's Law was infallible too, right until they hit that bend. What hubris to think these AI models are different!

          But you've probably been hearing that for 3 years too (though not from me).

          > Models keep on getting better, at more broad tasks, and more useful by the month.

          If you say so, I'll take your word for it.

          • Cyphase 34 minutes ago

            25 is 2025.

            • nozzlegear 15 minutes ago

              Oh my bad, the way it was worded made me read it as the name of somebody's model or something.

          • torginus 33 minutes ago

            Except for Moore's law, everyone knew decades ahead of what the limits of Dennard scaling are (shrinking geometry through smaller optical feature sizes), and roughly when we would get to the limit.

            Since then, all improvements came at a tradeoff, and there was a definite flattening of progress.

            • nozzlegear 12 minutes ago

              > Since then, all improvements came at a tradeoff, and there was a definite flattening of progress.

              Idk, that sounds remarkably similar to these AI models to me.

        • fmbb 23 minutes ago

          > And yet, here we are.

          I dunno. To me it doesn’t even look exponential any more. We are at most on the straight part of the incline.

          • bopbopbop7 17 minutes ago

            People are confusing exponential improvement with the exponential pre-IPO marketing budget increase at Anthropic and OpenAI.

      • raincole an hour ago

        This quote would be more impactful if people haven't been repeating it since gpt-4 time.

        • kimixa 29 minutes ago

          People have also been saying we'd be seeing the results of 100x quality improvements in software, with a corresponding decrease in cost, since gpt-4 time.

          So where is that?

        • nozzlegear 42 minutes ago

          I agree, I have been informed that people have been repeating it for three years. Sadly I'm not involved in the AI hype bubble so I wasn't aware. What an embarrassing faux pas!

    • chasd00 35 minutes ago

      i have to admit, even if model and tooling progress stopped dead today the world of software development has forever changed and will never go back.

  • gmueckl 3 hours ago

    The result is hardly a clean room implementation. It was rather a brute force attempt to decompress fuzzily stored knowledge contained within the network and it required close steering (using a big suite of tests) to get a reasonable approximation to the desired output. The compression and storage happened during the LLM training.

    Prove this statement wrong.

    • libraryofbabel 2 hours ago

      Nobody disputes that the LLM was drawing on knowledge in its training data. Obviously it was! But you'll need to be a bit more specific with your critique, because there is a whole spectrum of interpretations, from "it just decompressed fuzzily-stored code verbatim from the internet" (obviously wrong, since the Rust-based C compiler it wrote doesn't exist on the internet) all the way to "it used general knowledge from its training about compiler architecture and x86 and the C language."

      Your post is phrased like it's a two sentence slam-dunk refutation of Anthropic's claims. I don't think it is, and I'm not even clear on what you're claiming precisely except that LLMs use knowledge acquired during training, which we all agree on here.

    • NitpickLawyer 3 hours ago

      > Prove this statement wrong.

      If all it takes is "trained on the Internet" and "decompress stored knowledge", then surely gpt3, 3.5, 4, 4.1, 4o, o1, o3, o4, 5, 5.1, 5.x should have been able to do it, right? Claude 2, 3, 4, 4.1, 4.5? Surely.

      • shakna 33 minutes ago

        Well, "reimplement the c4 compiler (C in four functions)" is absolutely something older models can do, because most were trained on that quite small program; it's about 20kb.

        But reimplementing that isn't impressive, because it's not a clean-room implementation if the model was trained on that data and is regurgitating the effort.

      • gmueckl an hour ago

        This comparison is only meaningful with comparable numbers of parameters and context window tokens. And then it would mainly test the efficiency and accuracy of the information encoding. I would argue that this is the main improvement over all model generations.

      • hn_acc1 an hour ago

        Are you really asking for "all the previous versions were implemented so poorly they couldn't even do this simple, basic LLM task"?

      • geraneum 3 hours ago

        Perhaps 4.5 could also do it? We don’t know really until we try. I don’t trust the marketing material as much. The fact that the previous version (smaller versions) couldn’t or could do it does not really disprove that claim.

    • Marha01 3 hours ago

      Even with 1 TB of weights (probable size of the largest state of the art models), the network is far too small to contain any significant part of the internet as compressed data, unless you really stretch the definition of data compression.

      • jesse__ 2 hours ago

        This sounds very wrong to me.

        Take the C4 training dataset, for example. The uncompressed, uncleaned size of the dataset is ~6TB, and it contains an exhaustive English-language scrape of the public internet from 2019. The cleaned (still uncompressed) dataset is significantly less than 1TB.

        I could go on, but, I think it's already pretty obvious that 1TB is more than enough storage to represent a significant portion of the internet.

        • FeepingCreature an hour ago

          This would imply that the English internet is not much bigger than 20x the English Wikipedia.

          That seems implausible.

      • kgeist an hour ago

        A lot of the internet is duplicate data, low quality content, SEO spam etc. I wouldn't be surprised if 1 TB is a significant portion of the high-quality, information-dense part of the internet.

        • FeepingCreature an hour ago

          I would be extremely surprised if it was that small.

      • gmueckl an hour ago

        This is obviously wrong. There is a bunch of knowledge embedded in those weights, and some of it can be recalled verbatim. So, by virtue of this recall alone, training is a form of lossy data compression.

    • 0xCMP an hour ago

      I challenge anyone to try building a C compiler without a big suite of tests. Zig is the most recent attempt and they had an extensive test suite. I don't see how that is disqualifying.

      If you're testing a model I think it's reasonable that "clean room" have an exception for the model itself. They kept it offline and gave it a sandbox to avoid letting it find the answers for itself.

      Yes the compression and storage happened during the training. Before it still didn't work; now it does much better.

      • hn_acc1 an hour ago

        The point is: for a NEW project, no one has an extensive test suite. And if an extensive test suite exists, it's probably because the product that uses it already exists.

        Now, if it could translate the C standard INTO an extensive test suite that actually captures most corner cases and doesn't generate false positives (again, without internet access and without using gcc as an oracle), that would be a different story.

    • brutalc 3 hours ago

      No one needs to prove you wrong. That's just personal insecurity trying to justify one's own worth.

  • panzi 42 minutes ago

    > clean-room implementation

    Except it's trained on all the source out there, so I assume on GCC and clang. I wonder how similar the code is to either.

  • dyauspitr 42 minutes ago

    > Claude did not have internet access at any point during its development

    Why is this even desirable? I want my LLM to take into account everything there is out there and give me the best possible output.

    • simonw 17 minutes ago

      It's desirable if you're trying to build a C compiler as a demo of coding agent capabilities without all of the Hacker News commenters saying "yeah but it could just copy implementation details from the internet".

itay-maman an hour ago

My first reaction: wow, incredible.

My second reaction: still incredible, but noting that a C compiler is one of the most rigorously specified pieces of software out there. The spec is precise, the expected behavior is well-defined, and test cases are unambiguous.

I'm curious how well this translates to the kind of work most of us do day-to-day where requirements are fuzzy, many edge cases are discovered on the go, and what we want to build is a moving target.

  • ndesaulniers an hour ago

    > C compiler is one of the most rigorously specified pieces of software out there

    /me Laughs in "unspecified behavior."
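
    The textbook instance of unspecified (as opposed to undefined) behavior is evaluation order; a minimal sketch (my own example, not from the thread):

    ```c
    #include <stdio.h>

    static int f(void) { puts("f"); return 1; }
    static int g(void) { puts("g"); return 2; }

    int main(void) {
        /* The order in which the operands f() and g() are evaluated is
         * unspecified: a conforming compiler may print "f" then "g", or
         * "g" then "f". The sum itself is always 3. */
        int sum = f() + g();
        printf("%d\n", sum);
        return 0;
    }
    ```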

    • ori_b 37 minutes ago

      There's undefined behavior, which is quite well specified. What do you mean by unspecified behavior? Do you have an example?

    • irishcoffee 10 minutes ago

      Undefined is absolutely clear in the spec.

      Unspecified is whatever you want it to mean. I am also laughing, having never heard "unspecified" before.

201984 2 hours ago
  • Philpax an hour ago

    The issue is that it's missing the include paths. The compiler itself is fine.

  • krupan an hour ago

    Thank you. That was a long article that opened with a claim backed by no proof, then dismissed it as not the most interesting thing under discussion, when in fact it's the baseline of the whole discussion.

  • Retr0id an hour ago

    Looks like these users are just missing glibc-devel or equivalent?

    • delusional an hour ago

      Naa, it looks like it's failing to include the standard system include directories. If you take them from gcc and pass them as -I, it'll compile.

      • Retr0id an hour ago

        Can confirm (on aarch64 host)

            $ ./target/release/ccc-arm -I /usr/include/ -I /usr/local/include/ -I /usr/lib/gcc/aarch64-redhat-linux/15/include/ -o hello hello.c 
        
            $ ./hello
            Hello from CCC!
        • u8080 an hour ago

          Seems this non-artificial-intelligence model is just too limited to understand the concept of an include path.

          • dyauspitr 39 minutes ago

            It’s machine specific

      • zamadatix an hour ago

        Hmm, I didn't have to do that. https://i.imgur.com/OAEtgvr.png

        But yeah, either way it just needs to know where to find the stdlib.

        • Retr0id an hour ago

          Probably depends on where your distro puts stuff by default, I think it has a few of the common include paths hardcoded.

          • zamadatix an hour ago

            Makes sense for the behavior.

btown 3 hours ago

> This was a clean-room implementation (Claude did not have internet access at any point during its development); it depends only on the Rust standard library. The 100,000-line compiler can build Linux 6.9 on x86, ARM, and RISC-V. It can also compile QEMU, FFmpeg, SQlite, postgres, redis, and has a 99% pass rate on most compiler test suites including the GCC torture test suite. It also passes the developer's ultimate litmus test: it can compile and run Doom.

This is incredible!

But it also speaks to the limitations of these systems: while these agentic systems can do amazing things when automatically-evaluable, robust test suites exist... you hit diminishing returns when you, as a human orchestrator of agentic systems, are making business decisions as fast as the AI can bring them to your attention. And that assumes the AI isn't just making business assumptions with the same lack of context, compounded with motivation to seem self-reliant, that a non-goal-aligned human contractor would have.

  • _qua 3 hours ago

    Interesting how the concept of a clean room implementation changes when the agent has been trained on the entire internet already

    • falcor84 3 hours ago

      To the best of my knowledge, there's no Rust-based compiler that comes anywhere close to 99% on the GCC torture test suite, or able to compile Doom. So even if it saw the internals of GCC and a lot of other compilers, the ability to recreate this step-by-step in Rust is extremely impressive to me.

  • falcor84 3 hours ago

    Agreed, but the next step is having an AI agent actually run the business and get the business context it needs as a human would. Obviously we're not quite there, but with the rapid progress on benchmarks like Vending-Bench [0], and especially with this team's approach, it doesn't seem far-fetched anymore.

    As a particular near-term step, I imagine that it won't be long before we see a SaaS company using an AI product manager, which can spawn agents to directly interview users as they utilize the app, independently propose and (after getting approval) run small product experiments, and come up with validated recommendations for changing the product roadmap. I still remember Tay, and wouldn't give something like that the keys to the kingdom any time soon, but as long as there's a human decision maker at the end, I think that the tech is already here.

    [0] https://andonlabs.com/evals/vending-bench-2

Havoc 2 hours ago

Cool project, but they really could have skipped the mention of clean room. Something trained on every copyrighted thing known to mankind is the opposite of clean room

  • cheema33 34 minutes ago

    As others have pointed out, humans train on existing codebases as well. And then use that knowledge to build clean room implementations.

    • mxey 14 minutes ago

      That’s the opposite of clean-room. The whole point of clean-room design is that you have your software written by people who have not looked into the competing, existing implementation, to prevent any claim of plagiarism.

      “Typically, a clean-room design is done by having someone examine the system to be reimplemented and having this person write a specification. This specification is then reviewed by a lawyer to ensure that no copyrighted material is included. The specification is then implemented by a team with no connection to the original examiners.”

    • regularfry 17 minutes ago

      What they don't do is read the product they're clean-rooming. That's kinda disqualifying. Impossible to know if the GCC source is in 4.6's training set but it would be kinda weird if it wasn't.

    • pizlonator 16 minutes ago

      Not the same.

      I have read nowhere near as much code (or anything) as what Claude has to read to get to where it is.

      And I can write an optimizing compiler that isn't slower than GCC -O0

    • cermicelli 24 minutes ago

      If that's what clean room means to you, then I know AI can definitely replace you, as even ChatGPT is better than that.

      (prompt: what does a clean room implementation mean?)

      From ChatGPT without login BTW!

      > A clean room implementation is a way of building something (usually software) without copying or being influenced by the original implementation, so you avoid copyright or IP issues.

      > The core idea is separation.

      > Here’s how it usually works:

      > The basic setup

      > Two teams (or two roles):

      > Specification team (the “dirty room”)

      > Looks at the original product, code, or behavior

      > Documents what it does, not how it does it

      > Produces specs, interfaces, test cases, and behavior descriptions

      > Implementation team (the “clean room”)

      > Never sees the original code

      > Only reads the specs

      > Writes a brand-new implementation from scratch

      > Because the clean team never touches the original code, their work is considered independently created, even if the behavior matches.

      > Why people do this

      > Reverse-engineering legally

      > Avoid copyright infringement

      > Reimplement proprietary systems

      > Create open-source replacements

      > Build compatible software (file formats, APIs, protocols)

      I really am starting to think we have achieved AGI. > Average (G)Human Intelligence

      LMAO

  • benjiro an hour ago

    Hot take:

    If you try to reimplement something in a clean room, it's a step-by-step process using your own accumulated knowledge as the basis. That knowledge you hold in your brain all too often includes code that may be copyrighted, from the companies you worked at.

    Is it any different for a LLM?

    The fact that the LLM is trained on more data does not change that when you work for a company, leave it, and take that accumulated knowledge to a different company, you are by definition taking that knowledge (which may be copyrighted) and implementing it somewhere else. It's only an issue if you copy the code directly, or do the implementation as a 1:1 copy. LLMs do not make 1:1 copies of the original.

    At what point is being trained on copyrighted data any different from a human trained on copyrighted data, who then reimplements it in a transformative way? The big difference is that the LLM can hold more data across more fields than a human, true... But if we look at specializations, it can come back to the same, no?

    • cermicelli 18 minutes ago

      If you have worked on a related copyrighted work you can't work on a clean room implementation. You will be sued. There are lots of people who have tried and found out.

      They weren't trillion-dollar AI companies able to bankroll the defense, sure. But invoking "clean room" while training on copyrighted material isn't even an argument; it's just nonsense meant to prove something when no one asked.

rwmj 42 minutes ago

The interesting thing here is what's this code worth (in money terms)? I would say it's worth only the cost of recreation, apparently $20,000, and not very much more. Perhaps you can add a bit for the time taken to prompt it. Anyone who can afford that can use the same prompt to generate another C compiler, and another one and another one.

GCC and Clang are worth much much more because they are battle-tested compilers that we understand and know work, even in a multitude of corner cases, over decades.

In future there's going to be lots and lots of basically worthless code, generated and regenerated over and over again. What will distinguish code that provides value? It's going to be code - however it was created, could be AI or human - that has actually been used and maintained in production for a long time, with a community or company behind it, bugs being triaged and fixed and so on.

  • kingstnap 12 minutes ago

    The code isn't worth money. This is an experiment. The knowledge that something like this is even possible is what is worth money.

    If you had known in 2022 that a transformer could pull this off, even with all its flawed code, you would have been floored.

    Keep in mind that just a few years ago, the state of the art in what these LLMs could do was questions of this nature:

    Suppose g(x) = f⁻¹(x), g(0) = 5, g(4) = 7, g(3) = 2, g(7) = 9, g(9) = 6; what is f(f(f(6)))?

    The above is from the "Sparks of AGI" paper on GPT-4, where they were floored that it could coherently reason through the three steps of inverting things (6 -> 9 -> 7 -> 4), while GPT-3.5 was still spitting out a nonsense argument of this form:

    f(f(f(6))) = f(f(g(9))) = f(f(6)) = f(g(7)) = f(9).

    This is from March 2023, and it was genuinely very surprising at the time that these pattern-matching machines trained on next-token prediction could do this. Something like an LSTM can't do anything like this at all, btw; nowhere close.
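    For what it's worth, the inversion is easy to check mechanically: each given pair (x, g(x)) tells us f(g(x)) = x. A quick sketch:

```python
# Check the inversion from the quoted problem.
# g(x) = f^-1(x), so each pair (x, g(x)) gives f(g(x)) = x.
g = {0: 5, 4: 7, 3: 2, 7: 9, 9: 6}
f = {gx: x for x, gx in g.items()}  # invert the lookup table

x = 6
for _ in range(3):
    x = f[x]  # 6 -> 9 -> 7 -> 4
print(x)  # prints 4
```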

    To me it's very surprising that the C compiler works. It takes a ton of effort to build such a thing. I can imagine the flaws actually getting better over the next year as we push the goalposts out.

whinvik 3 hours ago

It's weird to see the expectation that the result should be perfect.

All said and done, that its even possible is remarkable. Maybe these all go into training the next Opus or Sonnet and we start getting models that can create efficient compilers from scratch. That would be something!

  • regularfry 15 minutes ago

    This is firmly where I am. "The wonder is not how well the dog dances, it is that it dances at all."

  • minimaxir 3 hours ago

    A symptom of the increasing backlash against generative AI (both in creative industries and in coding) is that any flaw in the resulting product becomes grounds to call it AI slop, even if it's very explicitly upfront that it's an experimental demo/proof of concept and not the NEXT BIG THING being hyped by influencers. That nuance is dead even outside of social media.

    • stonogo 3 hours ago

      AI companies set that expectation when their CEOs ran around telling anyone who would listen that their product is a generational paradigm shift that will completely restructure both labor markets and human cognition itself. There is no nuance in their own PR, so why should they benefit from any when their product can't meet those expectations?

      • minimaxir 3 hours ago

        Because it leads to poor and nonconstructive discourse that doesn't educate anyone about the implications of the tech, which is expected on social media but has annoyingly leaked to Hacker News.

        There's been more than enough drive-by comments from new accounts/green names even in this HN submission alone.

        • krupan an hour ago

          It does lead to poor non-constructive discourse. That's why we keep calling those CEOs to task on it. Why are you not?

          • dwaltrip an hour ago

            The CEOs aren't here in the comments.

karmakaze 17 minutes ago

I'm not particularly impressed that it can turn C into an SSA IR or assembly etc. The optimizations, however sophisticated, are where anything impressive would be. Then again, we have lots of examples in the training set, I would expect. C compilers are probably the most popular of all compilers. What would be more impressive is for it to have made a compiler for a well-defined language that isn't very close to a popular one.

What I am impressed by is that the task it completed had many steps and the agent didn't get lost or caught in a loop in the many sessions and time it spent doing it.

akrauss 3 hours ago

I would like to see the following published:

- All prompts used

- The structure of the agent team (which agents / which roles)

- Any other material that went into the process

This would be a good source for learning, even though I'm not ready to spend $20k just to replicate the experiment.

  • password4321 an hour ago

    Yes unfortunately these days most are satisfied with just the sausage and no details about how it was made.

ks2048 2 hours ago

It's cool that you can look at the git history to see what it did. Unfortunately, I do not see any of the human written prompts (?).

First 10 commits, "git log --all --pretty=format:%s --reverse | head",

  Initial commit: empty repo structure
  Lock: initial compiler scaffold task
  Initial compiler scaffold: full pipeline for x86-64, AArch64, RISC-V
  Lock: implement array subscript and lvalue assignments
  Implement array subscript, lvalue assignments, and short-circuit evaluation
  Add idea: type-aware codegen for correct sized operations
  Lock: type-aware codegen for correct sized operations
  Implement type-aware codegen for correct sized operations
  Lock: implement global variable support
  Implement global variable support across all three backends
OsrsNeedsf2P 3 hours ago

This is like a working version of the Cursor blog post. The evidence - it compiling the Linux kernel - is much more impressive than a browser that didn't even compile (until someone manually intervened).

  • ben_w 3 hours ago

    It certainly slightly spoils what I was planning to be a fun little April Fool's joke (a daft but complete programming language). Last year's AI wasn't good enough to get me past the compiler-compiler even for the most fundamental basics, now it's all this.

    I'll still work on it, of course. It just won't be so surprising.

storus 20 minutes ago

Now, this is fairly "easy" since there are a multitude of implementations/specs all over the Internet. How about trying to design a new language that is unquestionably better/safer/faster for low-level systems programming than C/Rust/Zig? ML is great at aping existing stuff, but how about pushing it to invent something valuable instead?

underdeserver 37 minutes ago

> when agents started to compile the Linux kernel, they got stuck. [...] Every agent would hit the same bug, fix that bug, and then overwrite each other's changes.

> [...] The fix was to use GCC as an online known-good compiler oracle to compare against. I wrote a new test harness that randomly compiled most of the kernel using GCC, and only the remaining files with Claude's C Compiler. If the kernel worked, then the problem wasn’t in Claude’s subset of the files. If it broke, then it could further refine by re-compiling some of these files with GCC. This let each agent work in parallel

This is a remarkably creative solution! Nicely done.
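The quoted strategy is essentially delta debugging with GCC as a known-good oracle. A toy sketch of the narrowing step (the file list and `boot_ok` stand-in are illustrative, not the author's actual harness):

```python
# Sketch of the GCC-as-oracle bisection described above. All names here
# (KERNEL_FILES, boot_ok) are hypothetical; the real harness drives
# actual kernel builds and boot tests.
KERNEL_FILES = [f"file{i}.c" for i in range(16)]
BROKEN = {"file11.c"}  # pretend CCC miscompiles this one file

def boot_ok(ccc_files):
    """Oracle: the kernel 'boots' iff no miscompiled file was built with CCC;
    everything outside ccc_files is built with known-good GCC."""
    return not (set(ccc_files) & BROKEN)

def bisect_bad_file(files):
    """Halve the set of CCC-compiled files until one culprit remains."""
    suspects = list(files)
    while len(suspects) > 1:
        half = suspects[: len(suspects) // 2]
        if boot_ok(half):                             # first half is fine...
            suspects = suspects[len(suspects) // 2:]  # ...so the bug is in the rest
        else:
            suspects = half
    return suspects[0]

print(bisect_bad_file(KERNEL_FILES))  # prints file11.c
```

With the oracle in hand, each agent can narrow a boot failure to a single file in O(log n) builds instead of fighting over the whole tree.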

danfritz an hour ago

Ha yes classic showcase of:

1) an obvious greenfield project

2) a well-defined spec which is definitely in the training data

3) an end result which lands you 90% of the way to the finish

Now comes the hard part: the last 10%. Still not impressed here. Since fixing issues at the end was impossible without introducing bugs, I have doubts about the quality.

I'm glad they do call it out in the end. That's fair

jcalvinowens 3 hours ago

How much of this result is effectively plagiarized open source compiler code? I don't understand how this is compelling at all: obviously it can regurgitate things that are nearly identical in capability to already existing code it was explicitly trained on...

It's very telling how all these examples are all "look, we made it recreate a shitter version of a thing that already exists in the training set".

  • Philpax 3 hours ago

    What Rust-based compiler is it plagiarising from?

    • rubymamis 3 hours ago
      • jsnell 2 hours ago

        Did you actually look at these?

        > https://github.com/jyn514/saltwater

        This is just a frontend. It uses Cranelift as the backend. It's missing some fairly basic language features like bitfields and variadic functions. And if I'm reading the documentation right, it requires all the source code to be in a single file...

        > https://github.com/ClementTsang/rustcc

        This will compile basically no real-world code. The only supported data type is "int".

        > https://github.com/maekawatoshiki/rucc

        This is just a frontend. It uses LLVM as the backend.

      • Philpax 2 hours ago

        Look at what those compilers are capable of compiling and to which targets, and compare it to what this compiler can do. Those are wonderful, and I have nothing but respect for them, but they aren't going to be compiling the Linux kernel.

        • rubymamis 2 hours ago

          I just did a quick Google search only on GitHub, maybe there are better ones out there on the internet?

    • lossolo 2 hours ago

      Language doesn't really matter; that's not how things are mapped in the latent space. It only needs to know how to do it in one language.

      • HDThoreaun 5 minutes ago

        Ok you can say this about literally any compiler though. The authors of every compiler have intimate knowledge of other compilers, how is this different?

    • jcalvinowens 3 hours ago

      Being written in rust is meaningless IMHO. There is absolutely zero inherent value to something being written in rust. Sometimes it's the right tool for the job, sometimes it isn't.

      • modeless 3 hours ago

        It means that it's not directly copying existing C compiler code which is overwhelmingly not written in Rust. Even if your argument is that it is plagiarizing C code and doing a direct translation to Rust, that's a pretty interesting capability for it to have.

        • jcalvinowens 3 hours ago

          Surely you agree that directly copying existing code into a different language is still plagiarism?

          I completely agree that "rewrite this existing codebase into a new language" could be a very powerful tool. But the article is making much bolder claims. And the result was more limited in capability, so you can't even really claim they've achieved the rewrite skill yet.

      • Philpax 3 hours ago

        Please don't open a bridge to the Rust flamewar from the AI flamewar :-)

        • jcalvinowens 3 hours ago

          Hahaha, fair enough, but I refuse to be shy about having this opinion :)

  • jeroenhd 3 hours ago

    The fact that it couldn't stick to the 16-bit ABI and had to cheat by calling out to GCC to get the system to boot says a lot.

    Without enough examples to copy from (despite CPU manuals being in the training set), the approach failed. I wonder how well it'll do if you throw a new or imaginary instruction set/CPU architecture at it; I bet it'll fail in similar ways.

    • jsnell 2 hours ago

      "Couldn't stick to the ABI ... despite CPU manuals being available" is a bizarre interpretation. What the article describes is the generated code being too large. That's an optimization problem, not a "couldn't follow the documentation" problem.

      And it's a bit of a nasty optimization problem, because the result is all or nothing. Implementing enough optimizations to get from 60kB to 33kB is useless, all the rewards come from getting to 32kB.

    • jcalvinowens 2 hours ago

      IMHO a new architecture doesn't really make it any more interesting: there are too many examples of adding new architectures in the existing codebases. Maybe if the new machine had some bizarre novel property, I suppose, but I can't come up with a good example.

      If the model were retrained without any of the existing compilers/toolchains in its training set, and it could still do something like this, that would be very compelling to me.

  • anematode 3 hours ago

    Honestly, probably not a lot. Not that many C compilers are compatible with all of GCC's weird features, and the ones that are, I don't think are written in Rust. Hell, even clang couldn't compile the Linux kernel until ~10 years ago. This is a very impressive project.

jwpapi 12 minutes ago

This is my favorite article this year. Just very insightful and honest. The learnings are worth thousands for me.

cuechan an hour ago

> The compiler is an interesting artifact on its own [...]

It's funny because, by (most) definitions, it is not an artifact:

> a usually simple object (such as a tool or ornament) showing human workmanship or modification as distinguished from a natural object

epolanski 2 hours ago

However it was achieved, building such a complex project as a C compiler on a $20k budget, in full autonomy, is quite impressive.

Imho some commenters focus so much on the (many, and honestly also acknowledged by the blog post) cons that they forget to be genuinely impressed by the steps forward.

personjerry 15 minutes ago

> Over nearly 2,000 Claude Code sessions and $20,000 in API costs

Well there goes my weekend project plans

yu3zhou4 2 hours ago

At this point, I genuinely don't know what to learn next to not become obsolete when another Opus version gets released

  • missingdays an hour ago

    Learn to fix bugs, it's gonna be more relevant than ever

  • RivieraKid an hour ago

    I agree. I don't understand why there are so many software engineers who are excited about this. I would only be excited if I were a founder in addition to being a software engineer.

hmry 24 minutes ago

If I, a human, read the source code of $THING and then later implement my own version, that's not a "clean-room" re-implementation. The whole point of "clean-room" is that no single person has access to both the original code and the new code. That way, you can legally prove that no copyright infringement took place.

But when an AI does it, now it counts? Opus is trained on the source code of Clang, GCC, TCC, etc. So this is absolutely not "clean-room".

  • rishabhaiover 18 minutes ago

    What life does one lead to be this sore in life

    • hmry 11 minutes ago

      Just tired of AI companies having more rights than natural people when it comes to copyright infringement. Let us have some of the fun too!

      • rishabhaiover 7 minutes ago

        I apologize for making that assumption.

  • bmandale 19 minutes ago

    That's not the only way to protect yourself from accusations of copyright infringement. I remember reading that the GNU utils were designed to be as performant as possible in order to force themselves to structure the code differently from the unix originals.

jhallenworld 12 minutes ago

Does it make a conforming preprocessor?

polskibus an hour ago

So did the Linux kernel compiled with this compiler actually work? Does it behave the same as a GCC-compiled Linux (just slower, due to the non-optimized generated code)?

gignico 3 hours ago

> To stress test it, I tasked 16 agents with writing a Rust-based C compiler, from scratch, capable of compiling the Linux kernel. Over nearly 2,000 Claude Code sessions and $20,000 in API costs, the agent team produced a 100,000-line compiler that can build Linux 6.9 on x86, ARM, and RISC-V.

If you don't care about code quality, maintainability, readability, conformance to the specification, and performance of the compiler and of the compiled code, please, give me your $20,000, I'll give you your C compiler written from scratch :)

  • chasd00 22 minutes ago

    > If you don't care about code quality, maintainability, readability, conformance to the specification, and performance of the compiler and of the compiled code, please, give me your $20,000, I'll give you your C compiler written from scratch :)

    I don't know if you could. Let's say you get a check for $20k: how long will it take you to make an equivalently performing, compliant compiler? Are you going to put your life on pause until it's done, for $20k? Who's going to pay your bills when the $20k is gone after 3 months?

  • minimaxir 3 hours ago

    There is an entire Evaluation section that addresses that criticism (both in agreement and disagreement).

  • 52-6F-62 3 hours ago

    If we're just writing off the billions in up front investment costs, they can just send all that my way while we're at it. No problem. Everybody happy.

throwaway2027 an hour ago

Next time can you build a Rust compiler in C? It doesn't even have to check things or have a borrow checker, as long as it reduces the compile times so it's like a fast debug iteration compiler.

exitcode0000 an hour ago

Cool article, interesting to read about their challenges. I've tasked Claude with building an Ada83 compiler targeting LLVM IR - which has gotten pretty far.

I am not using teams though and there is quite a bit of knowledge needed to direct it (even with the test suite).

geooff_ an hour ago

Maybe I'm naive, but I find these re-engineering-a-complex-product posts underwhelming. C compilers exist, and realistically Claude's training corpus contains a ton of C compiler code. The task is already perfectly defined. There exists a benchmark of well-adopted codebases that can be used to prove whether this is a working solution. Half the difficulty in making something is proving it works and is complete.

IMO a simpler novel product that humans enjoy is 10x more impressive than rehashing a solved problem, regardless of difficulty.

  • bs7280 an hour ago

    I don't see this as just an exercise in making a new useful thing, but as benchmarking the SOTA models' ability to create a massive* project on their own, with some verifiable metrics of success. I believe they were able to build FFmpeg with this Rust compiler?

    How much would it cost to pay someone to make a C compiler in rust? A lot more than $20k

    * massive meaning "total context needed" >> model context window

  • stephc_int13 an hour ago

    This is a nice benchmark IMO. I would be curious to see how competitors and improved models would compare.

    • NitpickLawyer an hour ago

      And how long will it take before an open model recreates this? The "vibe" consensus before "thinking" models really took off was that open was ~6mo behind SotA. With the massive RL improvements over the past 6 months, I've thought the gap was actually increasing. This will be a nice little verifiable test going forward.

small_model 3 hours ago

How about we get the LLMs to collaborate and design a perfect programming language for LLM coding? It would be terse (fewer tokens), easy for pattern searches, etc., and very fast to build and iterate on.

  • WarmWash 3 hours ago

    I cannot decide if LLMs would be excellent at writing in pure binary (why waste all that context on superfluous variable names and function symbols) or be absolutely awful at writing pure binary (would get hopelessly lost without the huge diversification of tokens).

    • anematode 3 hours ago

      Binary is wayyy less information dense than normal code, so it wouldn't work well at all.

    • small_model 2 hours ago

      We would still need the language to be human-readable, but it could be very dense. They could build the ultimate std lib that goes directly to the kernel, so a call like spawn is all the tokens it needs to start a coroutine, for example.

  • copperx 3 hours ago

    I'm surprised by the assumption that LLMs would design such a language better than humans. I don't think that's the case.

  • hagendaasalpine 2 hours ago

    what about APL et al (BQN), information dense(?)

stephc_int13 an hour ago

They should add this to the benchmark suite, and create a custom eval for how good the resulting compiler is, as well as how maintainable the source code is.

  • snek_case an hour ago

    This would be an expensive benchmark to run on a regular basis, though I guess for the big AI labs it's nothing. Code quality is hard to objectively measure, however.

throwaway2027 3 hours ago

I think it's funny how I, and I assume many others, tried to do the same thing; they probably saw it being a popular query, or had the same idea.

stephc_int13 an hour ago

It means that if you already have, or are willing to build, a very robust test suite, and the task is a complicated but already-solved problem, you can get a sub-par implementation for a semi-reasonable amount of money.

This is not entirely ridiculous.

falloutx 3 hours ago

So it copied one of the C compilers? This was always possible but now you need to pay $1000 in API costs to Anthropic

  • Rudybega an hour ago

    It wrote the compiler in Rust. As far as I know, there aren't any Rust based C compilers with the same capabilities. If you can find one that can compile the Linux kernel or get 99% on the GCC torture test suite, I would be quite surprised. I couldn't in a search.

    Maybe read the article before being so dismissive.

    • hgs3 an hour ago

      > As far as I know, there aren't any Rust based C compilers with the same capabilities.

      If you trained on a neutral representation like an AST or IR, then the source language shouldn't matter. *

      * I'm not familiar with how Anthropic builds their models, but training this way should nullify PL differences.

    • falloutx 42 minutes ago

      Why does the language of the compiler matter? It's a solved problem, and since other implementations are already available, anyone can transpile them to Rust.

      • Rudybega 38 minutes ago

        Direct transpilation would create a ton of unsafe code (this repo doesn't have any) and fixing that would require a lot of manual fixes from the model. Even that would be a massive achievement, but it's not how this was created.

  • chucksta 3 hours ago

    Add a 0 and double it

    |Over nearly 2,000 Claude Code sessions and $20,000 in API cost

    • lossyalgo 29 minutes ago

      One more reason RAM prices will continue to go up.

IshKebab 42 minutes ago

> I tried (hard!) to fix several of the above limitations but wasn’t fully successful. New features and bugfixes frequently broke existing functionality.

This has been my experience of vibe coding too. Good for getting started, but you quickly reach the point where fixing one thing breaks another and you have to finish the project yourself.

7734128 3 hours ago

I'm sure this is impressive, but it's probably not the best test case, given how many C compilers are out there and how they have presumably been featured in the training data.

This is almost like asking me to invent a pathfinding algorithm when I've been taught Dijkstra's and A*.

  • NitpickLawyer 3 hours ago

    It's a bit disappointing that people are still re-hashing the same "it's in the training data" line from 3 years ago. It's not like any LLM could regurgitate millions of LoC 1:1 from any training set... That's not how it works.

    A pertinent quote from the article (which is a really nice read, I'd recommend reading it fully at least once):

    > Previous Opus 4 models were barely capable of producing a functional compiler. Opus 4.5 was the first to cross a threshold that allowed it to produce a functional compiler which could pass large test suites, but it was still incapable of compiling any real large projects. My goal with Opus 4.6 was to again test the limits.

    • simonw an hour ago

      This is a good rebuttal to the "it was in the training data" argument - if that's how this stuff works, why couldn't Opus 4.5 or any of the other previous models achieve the same thing?

    • wmf 3 hours ago

      In this case it's not reproducing training data verbatim but it probably is using algorithms and data structures that were learned from existing C compilers. On one hand it's good to reuse existing knowledge but such knowledge won't be available if you ask Claude to develop novel software.

      • RobMurray 2 hours ago

        How often do you need to invent novel algorithms or data structures? Most human written code is just rehashing existing ideas as well.

        • notnullorvoid 27 minutes ago

          I wouldn't say I need to invent much that is strictly novel, though I often iterate on what exists and delve into novel-ish territory. That being said I'm definitely in a minority where I have the luxury/opportunity to work outside the monotony of average programming.

          The part I find concerning is that I wouldn't be in the place I am today without spending a fair amount of time in that monotony, really delving in to understand it, and slowly pushing outside its boundary. If I were starting programming today, I can confidently say I would've given up.

        • lossolo 2 hours ago

          They're very good at reiterating, that's true. The issue is that without the people outside of "most humans" there would be no code and no civilization. We'd still be sitting in trees. That is real intelligence.

          • ben_w an hour ago

            Why's that the issue?

            "This AI can do 99.99%* of all human endeavours, but without that last 0.01% we'd still be in the trees", doesn't stop that 99.99% getting made redundant by the AI.

            * vary as desired for your preference of argument, regarding how competent the AI actually is vs. how few people really show "true intelligence". Personally I think there's a big gap between them: paradigm-shifting inventiveness is necessarily rare, and AI can't fill in all the gaps under it yet. But I am very uncomfortable with how much AI can fill in for.

    • lossolo 2 hours ago

      They couldn't do it because they weren't fine-tuned for multi-agent workflows, which basically means they were constrained by their context window.

      How many agents did they use with previous Opus? 3?

      You've chosen an argument that works against you, because they actually could do that if they were trained to.

      Give them the same post-training (recipes/steering) and the same datasets, and voila, they'll be capable of the same thing. What do you think is happening there? Did Anthropic inject magic ponies?

    • zephen 3 hours ago

      > It's a bit disappointing that people are still re-hashing the same "it's in the training data" old thing from 3 years ago.

      They only have to keep reiterating this because people are still pretending the training data doesn't contain all the information that it does.

      > It's not like any LLM could 1for1 regurgitate millions of LoC from any training set... This is not how it works.

      Maybe not any old LLM, but Claude gets really close.

      https://arxiv.org/pdf/2601.02671v1

    • falloutx 2 hours ago

      They can literally print out entire books line by line.

    • lunar_mycroft 3 hours ago

      LLMs can regurgitate almost all of the Harry Potter books, among others [0]. Clearly, these models can actually regurgitate large amounts of their training data, and reconstructing any gaps would be a lot less impressive than implementing the project truly from scratch.

      (I'm not claiming this is what actually happened here, just pointing out that memorization is a lot more plausible/significant than you say)

      [0] https://www.theregister.com/2026/01/09/boffins_probe_commerc...

      • StilesCrisis 3 hours ago

        The training data doesn't contain a Rust based C compiler that can build Linux, though.

sho_hn 3 hours ago

Nothing in the post about whether the compiled kernel boots.

  • chews 3 hours ago

    video does show it booting.

gre 3 hours ago

There's a terrible bug where, once it compacts, it sometimes pulls in .o or binary files and immediately fills your entire context. Then it compacts again... 10 minutes and your token budget is gone for the 5-hour period. Edit: hooks that prevent it from reading binary files can't stop this.

Please fix.. :)

light_hue_1 3 hours ago

> This was a clean-room implementation (Claude did not have internet access at any point during its development);

This is absolutely false and I wish the people doing these demonstrations were more honest.

It had access to GCC! Not only that, using GCC as an oracle was critical and had to be built in by hand.

Like the web browser project this shows how far you can get when you have a reference implementation, good benchmarks, and clear metrics. But that's not the real world for 99% of people, this is the easiest scenario for any ML setting.

  • rvz an hour ago

    > This is absolutely false and I wish the people doing these demonstrations were more honest.

    That's because the "testing" was not done independently, so anything could be made to look misleading. Hence:

    > Written by Nicholas Carlini, a researcher on our Safeguards team.

dmitrygr 3 hours ago

> The generated code is not very efficient. Even with all optimizations enabled, it outputs less efficient code than GCC with all optimizations disabled.

Worse than "-O0" takes skill...

So then, it produced something much worse than tcc (which is better than gcc -O0), an equivalent of which one man can produce in under two weeks. So even all those tokens and dollars did not equal two weeks of one man's work.

Except the one man could at least explain such arbitrary and shitty code as this:

https://github.com/anthropics/claudes-c-compiler/blob/main/s...

why x9? who knows?!

Oh god the more i look at this code the happier I get. I can already feel the contracts coming to fix LLM slop like this when any company who takes this seriously needs it maintained and cannot...

  • ben_w 3 hours ago

    I'm trying to recall a quote. Some war where all defeats were censored in the news, possibly Paris was losing to someone. It was something along the lines of "I can't help but notice how our great victories keep getting closer to home".

    Last year I tried using an LLM to make a joke language, I couldn't even compile the compiler the source code was so bad. Before Christmas, same joke language, a previous version of Claude gave me something that worked. I wouldn't call it "good", it was a joke language, but it did work.

    So it sucks at writing a compiler? Yay. The gloriously indefatigable human mind wins another battle against the mediocre AI, but I can't help but notice how the battles keep getting closer to home.

    • sjsjsbsh 3 hours ago

      > but I can't help but notice how the battles keep getting closer to home

      This has been true for all of (known) human history. I’m gonna go ahead and make another bold prediction: tech will keep getting better.

      The issue with this blog post is it’s mostly marketing.

  • sebzim4500 3 hours ago

    Can one man really make a C compiler in one week that can compile Linux, SQLite, etc.?

    Maybe I'm underestimating the simplicity of the C language, but that doesn't sound very plausible to me.

    • dmitrygr 3 hours ago

      Yes, if you do not care to optimize. Source: done it

      • Philpax 3 hours ago

        I would love to see the commit log on this.

        • rustystump 3 hours ago

          Implementing just enough to conform to a language is not as difficult as it seems. Making it fast is hard.

        • dmitrygr 3 hours ago

          did this before i knew how to git, back in college. target was ARMv5

          • Philpax 3 hours ago

            Great. Did your compiler support three different architectures (four, if you include x86 in addition to x86-64) and compile and pass the test suite for all of this software?

            > Projects that compile and pass their test suites include PostgreSQL (all 237 regression tests), SQLite, QuickJS, zlib, Lua, libsodium, libpng, jq, libjpeg-turbo, mbedTLS, libuv, Redis, libffi, musl, TCC, and DOOM — all using the fully standalone assembler and linker with no external toolchain. Over 150 additional projects have also been built successfully, including FFmpeg (all 7331 FATE checkasm tests on x86-64 and AArch64), GNU coreutils, Busybox, CPython, QEMU, and LuaJIT.

            Writing a C compiler is not that difficult, I agree. Writing a C compiler that can compile a significant amount of real software across multiple architectures? That's significantly more non-trivial.

  • bwfan123 an hour ago

    > I can already feel the contracts coming to fix LLM slop

    First, the agents will attempt to fix issues on their own. Most easy problems will be fixed or worked around in this manner. The hard problems will require a deeper causal model of how things work. For those, the agents will give up. But the code-base will have evolved to a point where no one understands what's going on, including the agents and their human handlers. Expect your phone to ring at that point, and prepare to ask for a ransom.

  • small_model 3 hours ago

    Claude is only a few years old so we should compare it to a 3 year old human's C compiler

    • zephen 3 hours ago

      Claude contains the entire wisdom of the internet, such as it is.

  • sjsjsbsh 3 hours ago

    > I can already feel the contracts coming to fix LLM slop like this when any company who takes this seriously needs it maintained and cannot

    Honest question, do you think it’d be easier to fix or rewrite from scratch? With domains I’m intimately familiar with, I’ve come very close to simply throwing the LLM code out after using it to establish some key test cases.

    • dmitrygr 2 hours ago

      Rewrite is what I’ve been doing so far in such cases. Takes fewer hours

hrgadyx 3 hours ago

[flagged]

  • falcor84 3 hours ago

    They didn't "steal" open source code any more than I stole my copy of The Odyssey.

sjsjsbsh 3 hours ago

> So, while this experiment excites me, it also leaves me feeling uneasy. Building this compiler has been some of the most fun I’ve had recently, but I did not expect this to be anywhere near possible so early in 2026

What? Didn’t cursed lang do something similar like 6 or 7 months ago? These bombastic marketing tactics are getting tired.

  • ebiester 3 hours ago

    Do you not see the difference between a toy language and a clean room implementation that can compile Linux, QEMU, Postgres, and sqlite? (No, it doesn't have the assembler and linker.)

    That's for $20,000.

    • falloutx 2 hours ago

      People have built compilers for free; with $20,000 you could even hire a couple of devs for a year in low-income countries.

  • jsnell 3 hours ago

    No? That was a frontend for a toy language using LLVM as the backend. This is a totally self-contained compiler that's capable of compiling the Linux kernel. What's the part that you think is similar?

trilogic 3 hours ago

Can it create employment? How is this making life better? I understand the achievement, but come on, wouldn't it be something if you created employment for 10,000 people with your $20,000!

Microsoft, OpenAI, Anthropic, xAI: all solving the wrong problems, your problems, not the collective ones.

  • jeffbee 3 hours ago

    "Employment" is not intrinsically valuable. It is an emergent property of one way of thinking about economic systems.

    • trilogic 3 hours ago

      For employment I mean "WHATEVER LEADS TO REWARD COLLECTIVE HUMANS TO SURVIVE".

      Call it as you wish, but I am certainly not talking about coding values.

      • falcor84 3 hours ago

        I'm struggling to even parse the syntax of "WHATEVER LEADS TO REWARD COLLECTIVE HUMANS TO SURVIVE", but assuming that you're talking about resource allocation, my answer is UBI or something similar to it. We only need to "reward" for action when the resources are scarce, but when resources are plentiful, there's no particular reason not to just give them out.

        I know it's "easier to imagine an end to the world than an end to capitalism", but to quote another dreamer: "Imagine all the people sharing all the world".

        • swexbe 29 minutes ago

          Except resources won't be plentiful for a long while, since AI is only impacting the service sector. You can't eat a service, and you can't live in one. SaaS will get very cheap though...

  • mofeien 2 hours ago

    Obviously a human in the loop is always needed and this technology that is specifically trained to excel at all cognitive tasks that humans are capable of will lead to infinite new jobs being created. /s

chvid 3 hours ago

100,000 lines of code for something that is literally a textbook task?

I guess if it had only created 1,000 lines it would be easy to see where those lines came from.

  • falcor84 3 hours ago

    > literally a textbook task

    Generating a 99% compliant C compiler is not a textbook task in any university I've ever heard of. There's a vast difference between a toy compiler and one that can actually compile Linux and Doom.

    From a bit of research now, there are only three other compilers that can compile an unmodified Linux kernel: GCC, Clang/LLVM and Intel's oneAPI. I can't find any other compiler implementation that came close.

    • cv5005 2 hours ago

      That's because you need to implement a bunch of GCC-specific behavior that Linux relies on. A 100% standards-compliant C23 compiler can't compile Linux.
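      To make that concrete, here is a small sketch (mine, not from the article) of a few of the GNU C extensions the kernel leans on; a strictly conforming compiler rejects every one of them:

      ```c
      #include <stdio.h>

      /* Statement expressions + __typeof__: the kernel's min()/max()
         macros use this pattern to evaluate each argument exactly once. */
      #define max(a, b) ({ __typeof__(a) _a = (a); \
                           __typeof__(b) _b = (b); \
                           _a > _b ? _a : _b; })

      /* __attribute__ annotations drive alignment, section placement,
         and inlining decisions throughout the kernel tree. */
      static int answer __attribute__((aligned(16))) = 40;

      int main(void) {
          /* Computed goto ("labels as values") shows up in the
             kernel's interpreters and tracing code. */
          void *resume = &&done;
          printf("%d\n", max(answer, answer + 2));
          goto *resume;
          printf("never printed\n");
      done:
          return 0;
      }
      ```

      Any compiler that wants to build an unmodified kernel has to reimplement this whole dialect, not just ISO C.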

  • anematode 3 hours ago

    A simple C89 compiler is a textbook task; a GCC-compatible compiler targeting multiple architectures that can pass 99% of the GCC torture test suite is absolutely not.
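    For a sense of the gap: torture-style suites hammer on ABI and codegen corners, not parsing. A hedged sketch (my own example, not from the suite) of the kind of well-defined program a quick toy compiler often miscompiles; by-value struct passing and varargs are classic trouble spots:

    ```c
    #include <stdarg.h>
    #include <stdio.h>

    /* Struct passed and returned by value: the calling convention for
       this differs per architecture and is a frequent codegen bug. */
    struct pair { long a; long b; };

    static struct pair swap(struct pair p) {
        struct pair q;
        q.a = p.b;
        q.b = p.a;
        return q;
    }

    /* Variadic functions exercise the platform's va_list layout. */
    static long sum(int n, ...) {
        va_list ap;
        long s = 0;
        int i;
        va_start(ap, n);
        for (i = 0; i < n; i++)
            s += va_arg(ap, long);
        va_end(ap);
        return s;
    }

    int main(void) {
        struct pair p;
        p.a = 1;
        p.b = 2;
        p = swap(p);
        printf("%ld %ld %ld\n", p.a, p.b, sum(3, 10L, 20L, 30L));
        return 0;
    }
    ```

    Getting this right on one target is easy; getting it right on x86-64, AArch64, and RISC-V simultaneously is where the work is.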

  • wmf 3 hours ago

    This has multiple backends and a long tail of C extensions that are not in the textbook.

fxtentacle 3 hours ago

You could hire a reasonably skilled dev in India for a week for $1k, or you could pay $20k in LLM tokens, spend 2 hours writing essays explaining what you want, and then get a buggy mess.

  • Philpax 3 hours ago

    No human developer, not even Fabrice Bellard, could reproduce this specific result in a week. A subset of it, sure, but not everything this does.