stingraycharles 1 week ago

Interesting to see this when the current top post on HN is someone worrying about Bun after it was acquired by Anthropic. The top comment there argues: “Anthropic does experiments on their own codebase, the Bun team is not gonna do the same vibe coding experiments”.

Yet here we are, with what looks like a massive vibe-coding undertaking.

Time will tell how this will turn out. Would be nice if the Bun maintainers could give some clarification about what they’re doing here, and why they’re doing this.

andkenneth 1 week ago

They recently tried to upstream an improvement to zig, but were prevented from doing so because zig has a hard and fast "no AI code" rule. Whether you think this response is trying to put pressure on zig or whether they're just moving for practical reasons is up to you.

It's probably a bit of both.

  • pton_xd 1 week ago

    Anthropic just needs to buy Zig! Problem solved.

  • endospore 1 week ago

    Makes me wonder why zig announced the strict LLM rule recently. I'm afraid one reason could be that zig doesn't want to accept code from the bun fork in the first place (because of LLM usage, deviation and other reasons)

    • ai_critic 1 week ago

      It's a combination of pragmatism (not wanting to wade through slop, not wanting to shove out newbie developers) and politics (usual contemporary techie progressive stuff that's now oddly anti-technology).

      • Onavo 1 week ago

        I like your username.

      • happymellon 1 week ago

        > usual contemporary techie progressive stuff that's now oddly anti-technology

        You can be against a particular technology without being "anti-technology".

        See DRM/surveillance/bad self driving implementations.

      • wiseowise 1 week ago

        > usual contemporary techie progressive stuff that's now oddly anti-technology

        Just because a thing exists doesn’t mean you have to use it for everything. You don’t use asbestos blanket? Why are you so against asbestos?

        • a96 1 week ago

          Against blankets would be even more like that argument.

    • neomantra 1 week ago

      One non-obvious reason is that an important aspect of their community is shepherding new contributors [1]. LLMs crushing everything would undermine that. More obvious is all the toil for maintainers dealing with LLM PRs (broadly, it's an issue). The Zig maintainers prefer to put their energy into improving people and fostering those relationships.

      [1] https://kristoff.it/blog/contributor-poker-and-ai/

      • lowbloodsugar 1 week ago

        Yeah, I remember when the lazy bastards started writing programs using compilers instead of learning assembly language. Now I don’t have a single colleague who can write assembly. There’s whole generations now who can’t code assembly. Most don’t even know what a register is. Hope Zig holds against this latest attempt to make everyone stupid.

        • wtetzner 1 week ago

          Using an LLM isn't analogous to using a higher level language.

          • brabel 1 week ago

            That’s funny because it’s exactly, literally the same. The difference is it’s not deterministic. That may be a problem but it’s still a higher level language, just a much higher level language than anything before.

            • xigoi 1 week ago

              The main difference is that the input to an LLM is in an ambiguous language.

              • brabel 1 week ago

                A programming language is allowed to be ambiguous, I don’t know of a definition that excludes that!

                • xigoi 1 week ago

                  All programming languages I know of provide at least some guarantees about the program’s behavior.

                • skydhash 1 week ago

                  The language spec may be, but an implementation is never ambiguous. When you encounter undefined behavior in the spec, that's when you look at your compiler/interpreter docs.

                • a_shiine 3 days ago

                  The kind of thoughts you form when you've only ever vibe-coded.

            • bigstrat2003 1 week ago

              > That’s funny because it’s exactly, literally the same. The difference is it’s not deterministic.

              So it is not, by your own admission, "exactly, literally the same".

              • amoss 1 week ago

                Take it gently, the poor thing doesn't understand the difference between code and talking about code.

            • wiseowise 1 week ago

              So by your logic all the PMs, managers and customers are programmers, right? After all, there’s a human compiler that takes their input and produces a program?

              • brabel 1 week ago

                They are programmers when they write a prompt and get runnable code as a result, yes… but not if they ask a human to write the code, because if you have an intermediate, manual step between the text and the running code, you don't have an automated process, and hence it's no longer even an application, let alone a “compiler”.

                • matt_kantor 1 week ago

                  Why does it matter if a human or a machine is responsible for turning the prompt into code?

                  If there's a black box which I can send C code into one side of and get faithful machine code out the other, I'd call that box a "compiler". I wouldn't rename it if I later find out that there are little elves inside doing the translation.

                  • brabel 1 week ago

                    Sorry but that’s a childish take.

            • sandrello 1 week ago

              I assume you're some sort of programmer and I genuinely wonder how in the world can someone in good faith downplay non-determinism and ambiguity when talking about a programming language.

              High-level languages can certainly yield inefficient code when compiled, or maybe different code among different compilers, but they're always meant to allow their users to know exactly what to expect from what they put together in their programs. I've always considered this a hard fact, I simply cannot wrap my head around working in a way that forces me to abandon this basic assumption.

        • gls2ro 1 week ago

          Generating AI code/PR is not the same as using compilers because of at least two things:

          - the scale of how much and how fast you can generate code with AI vs. how fast you can write code for a compiler

          - the mental model of what is being generated and how much the contributor understands and owns the generated code

        • gertop 1 week ago

          Your analogy falls apart because the "lazy bastards" still knew how to program and understood the code they were working on.

          Vibe-coders often don't read, let alone understand, the code they send in PRs.

          • merlindru 1 week ago

            I don't think most JavaScript devs know how to read C code, let alone assembly, so I think the comparison is apt. Is it not?

            • wtetzner 1 week ago

              The JavaScript developers are checking in JavaScript code that they ostensibly understand. That is not the same as prompting an LLM to generate Zig that they don't understand, and expecting someone to merge it.

              • merlindru 1 week ago

                ah, i see what you're saying. fair point! though the argument was that LLMs essentially are a yet higher level programming language (or, rather, let you write in a higher level language).

                • wtetzner 1 week ago

                  They do let you write in a higher-level language, but it's not really analogous to a higher-level programming language. The ambiguity and lack of determinism makes prompting fundamentally different from using a high level programming language.

        • uncircle 1 week ago

          To add to the other commenters, loads of people don’t know assembly, which speaks to the quality of the average developer. The ones that still understand assembly to this day tend to be better developers, writing faster and more efficient code.

          • DeathArrow 1 week ago

            >The ones that still understand assembly to this day tend to be better developers, writing faster and more efficient code.

            That is if you use something like C, C++, Java, .NET, or Go. With JavaScript and Python I don't think knowing assembly would make any difference, because it's hard to optimize code in those languages for how the CPU and memory work.

            • uncircle 1 week ago

              Knowing assembly in this day and age is the result of being curious and wanting to understand how computers work, which means knowledge of algorithms, data structures, etc.

              The same applies to vibe coding: the best "vibe coder" will paradoxically be the person with enough knowledge and curiosity to understand programming, how computers work, and the subject at hand; one who could write the whole thing from scratch, and so has enough judgement to review generated code.

              Of course the vast majority will be mediocre vibe coders, and even worse programmers; at least that's the direction we're going.

              • Chris2048 1 week ago

                > wanting to understand how computers work, which means knowledge of algorithms, data structures, etc.

                It's possible to know in general terms, how computers work, and what assembly is without "knowing assembly" in the sense of being familiar with using/debugging it as a programming language.

            • skydhash 1 week ago

              Knowing assembly doesn’t mean you would spend your time writing assembly (aka being familiar with opcodes and architecture optimizations). But in the process, you get familiar with the working of the computer hardware and the OS that sits on top of it. That is always useful knowledge especially when needing to deal with binary format and protocols or FFI.

              • Chris2048 1 week ago

                > But in the process..

                Then it's sufficient to know assembly, but not necessary.

                This is compatible with "[developers] that still understand assembly to this day tend to be better developers", but not with "[on developers who] don’t know assembly, which speaks to [their] quality".

          • crysin 1 week ago

            I'd be very surprised if the "average" developer across the board were anything but a JavaScript/TypeScript-only developer. I have no expectation, or really even hope, that the average developer I work with has ever written a line of assembly.

        • wiseowise 1 week ago

          There’s a big difference between (mostly) deterministic compiler and non-deterministic LLMs.

      • bbor 1 week ago

        Well said! I don't think either party is really at fault here, but if Anthropic wanted to contribute non-negligible amounts of code over time then it's an absolute dealbreaker.

        Sucks for people who were invested in contributing to Bun and don't like working with AI tools to be sure, but I think the writing was on the wall for them pretty much immediately post-acquisition. You must admit, it's hard to predict that 100% of source lines will be written by AI if you're not walking the walk!

      • Dylan16807 1 week ago

        That's a solid reason to keep LLMs away from the kind of tasks that help with onboarding. But a patch series from a competent team that changes 3000 lines should probably be evaluated on its own merits. Or at least, the collaboration-based reasons to reject AI don't apply and the real reason would be something else.

        (Though I don't know if this particular patch series would get accepted on its own merits.)

        • riffraff 1 week ago

          The recent article explained the bun patch would have been refused on technical merits as it's intrinsically incorrect, to be able to work properly it required some language changes.

        • bboozzoo 1 week ago

          > patch series from a competent team that changes 3000 lines should probably be

          split into a bunch of much smaller changes?

          • Dylan16807 1 week ago

            I don't understand your suggestion. If you take an ugly patch series that changes 3000 lines and organize it into small quality changes, it's still a patch series that changes 3000 lines.

            There's no reason to assume my generic statement was talking about the ugly version rather than the nicely organized version.

            • wolfi1 1 week ago

              perhaps not all of these 3000 line changes make sense?

        • moomoo11 1 week ago

          I mean in an authoritarian system you wouldn’t make a one off exception like that.

      • heavyset_go 1 week ago

        It's important that developers have an accurate mental model of how things work, are structured and why.

        LLMs promote a decoupling of mental models and the actual codebase.

        As much as some may want to believe, just reviewing what the LLM outputs is not equivalent to thinking about implementation details, motivations, exactly how and why things are, and how and why they work the way they do, and then writing it yourself. The process itself is what instills that knowledge in you.

        • sucrosesucrose 1 week ago

          Exactly. This is what many ai-sloppers ignore. Mental models are crucial. Nothing substitutes for having the program itself in your brain and being able to "mentally debug" it when something breaks.

    • KingMob 1 week ago

      Possibly, but the Zig creator is active on Lobste.rs, where he's been vocally anti-LLM for a year now, so the timing could just be a coincidence.

    • foresterre 1 week ago

      There are other reasons why a project like Zig might not want to accept LLM generated contributions.

      Zig, as a programming language, has a multiplier codebase: a bug may affect a significantly larger portion of users than most libraries or binaries will, since it's a fundamental building block of everything that uses Zig. That alone could be worth the extra scrutiny on every individual commit.

      There's also the usual arguments: copyright ethics, environmental ethics and maintainer burden.

      • esperent 1 week ago

        > has a multiplier codebase. A bug may affect a significant larger portion of users than most libraries or binaries will

        Couldn't you say exactly the same about bun?

        • emaro 1 week ago

          Sure, but Bun is now owned by a company whose entire shtick is creating AI models. That shifts priorities.

        • mert-kurttutan 1 week ago

          It might be one of the reasons they want to migrate to Rust, i.e. to have the compiler handle many of these memory-related issues. Personally I've only used Bun in a few personal projects, but if you check the issue reports, you'll see more memory bugs reported than for, say, Deno.

    • DeathArrow 1 week ago

      >Makes me wonder why zig announced the strict LLM rule recently.

      I guess there are 2 philosophies in software development: move fast and break things, or move at a pace that guarantees everything is rock solid.

      Most commercial software, Anthropic included, takes the former path, while most infrastructure teams take the latter.

      I guess Linux and FreeBSD kernels are also not accepting LLM based contributions yet.

      • brabel 1 week ago

        > move fast and break things and move at a pace that guarantees everything is rock solid.

        Zig is famous for taking the former path! Anyone using Zig for a few years knows every release breaks things, and they are still making huge changes which I would classify as “moving fast”, like the recent IO changes!

        • lukaslalinsky 1 week ago

          Exactly, and Zig 0.16 is explicitly a release with known issues, just count the number of TODOs in the std.Io namespace.

      • jeltz 1 week ago

        > I guess Linux and FreeBSD kernels are also not accepting LLM based contributions yet.

        PostgreSQL, a famously slow-moving and rock-solid project, accepts LLM-based contributions. But they are held to the same high standard: if you cannot explain the patch you submitted, it will likely get rejected.

    • xydone 1 week ago

      The LLM rule has been a thing for a very long time at this point.

  • wg0 1 week ago

    So if tomorrow Rust denied the "improvement" to upstream Rust then what's the next language they plan to vibe code it in?

    • echelon 1 week ago

      Rust is legit one of the best languages to "vibe code" in.

      The emitted code has a lower defect rate since the language incorporates strong types and built-in error handling. Other pros include native code and portability, but the downside is the compile time.

      • nvader 1 week ago

        Excellent comment.

        As for that downside, the compile time is somewhat offset once you're using agents (and especially parallel agents) anyway. Since all of your edits cost a round-trip API call to a third-party server, you can accept a slightly slower compile step.

      • wg0 1 week ago

        This could be a subjective feeling with no real data to back it up.

        People say the same about Go: that its type system and limited feature set make it the most AI-friendly language. But there too, it seems like a hunch rather than a proven fact.

        • Onavo 1 week ago

          If we are gonna go down that rabbit hole, then the natural conclusion is Haskell.

          • boxed 1 week ago

            Which seems pretty reasonable tbh. Claude Code is amazing with Elm in my experience.

          • robocat 1 week ago

            How good are LLMs at understanding Haskell errors and then dealing with them?

            The last time I had a go with Haskell, the errors reminded me so much of hellish terminal compilers from the 80s and 90s that I quickly gave up. Been there, not doing that again.

        • treyd 1 week ago

          The thing is that this argument doesn't work with Go because its type system (and the whole language, really) is much less expressive and compiler gives a lot less feedback to the LLM. So it tends to have to write more unit tests and do more cycles of testing (and spend more tokens) to get it right.

          • wg0 1 week ago

            The argument about type systems is absurd anyway. The types in a program aren't a universal vocabulary that the LLM would already know, like the words of the English language. They are unique to each program and domain, so an LLM can't be better at them.

            Let me elaborate: it's like the proficiency of LLMs in writing English vs. writing Swahili or Kurdish.

            The types of a program are like Swahili or Kurdish, or even worse, because those languages still have a sizeable chunk of the Internet and digital archives behind them, while the types of a program are specific to it.

            • treyd 1 week ago

              Studies have shown that natural human languages are all more or less equally expressive in terms of bits per second while speaking. There's lots of different ways they can be structured but they tend to follow common rules that have been well-characterized by linguists. They can be used to describe formal mathematical statements, but are not rigorously formal languages themselves.

              Programming languages, in contrast, are constructed and vary much more in their designs. They are formal languages, making them closer to math than spoken language. LLMs being able to describe concepts more thoroughly and precisely through more expressive semantics obviously makes some languages more suitable than others.

              The type system of a language is just one aspect of it that allows the language to provide guarantees to the LLM (and the user) about correctness of the code it's writing.

              I am not speaking about specific types in specific programs. I am talking about the ability to describe complex constraints that LLMs (and humans) end up using to make writing correct code easier and more productive. Some programming languages absolutely are more effective at this than others, and that's always been true even before LLMs.

        • solumunus 1 week ago

          Well those people are simply wrong. Go and Rust type systems don't even remotely compare. Go types suck.

      • J_Shelby_J 1 week ago

        Downside: CC and Codex will write, compile, and fix in a loop until it has a monstrosity rather than designing something smarter.

    • petre 1 week ago

      C obviously.

      • wg0 1 week ago

        I was hoping for Bash, because why not? It's the AI that has to write and maintain it anyway, and Anthropic employees aren't limited by the 5-hour/7-day usage limits anyway, I suppose.

      • psychoslave 1 week ago

        You missed the part where everyone is going to run their own vibe-coded assembly tools[1].

        So the next step will be that bun gets re-written from scratch at every iteration; the repository will only contain the specs for the LLMs.

        Caching the generated code locally will be authorized for some transition period, but as it's obviously very dangerous to let people tweak what exactly their computers are doing, forbidding such a practice via a mandatory secure-boot mode is already planned. Only nazi pedophiles would do otherwise anyway, so the enactment of the companion law is an obvious go-to.

        [1] https://news.ycombinator.com/item?id=47997947

        • wiseowise 1 week ago

          Democratizing knowledge btw

          • psychoslave 1 week ago

            Not sure I understand what you mean here.

    • f33d5173 1 week ago

      Rust is a significantly more mature language. Adopting Zig has to be done on the assumption that the language will change significantly as your project evolves, and if those changes don't agree with your project's goals, you're left in something of a lurch. Rust is basically finished, and adopting it can be done on the assumption that it won't change very much. I don't know what their initial logic for adopting Zig was, but I think porting to a more mature language was inevitable, unless by some miracle Zig happened to rapidly mature in exactly the direction they wanted.

    • a96 1 week ago

      Javascript

  • abacadaba 1 week ago

    seems easier to fork zig

    • kimos 1 week ago

      Then that becomes an ongoing effort. The rewrite is once. (Good idea or not)

  • rdmsr0 1 week ago

    Even if AI had not been used, the changes would not have been upstreamed, see https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio... tl;dr the supposed improvements are not sound and the zig compiler has already gotten a whole lot faster

    • abtinf 1 week ago

      That is a devastating comment. I will now be extremely skeptical of bun.

    • NewJazz 1 week ago

      What a sober, detailed forum post.

    • nechuchelo 1 week ago

      This should be the top comment in the whole thread. AI is not the point, the PR is just not of a good quality.

    • abpin 1 week ago

      Thanks, that is the answer.

  • parchley 1 week ago

    Read the previous discussions on the topic. Your summary is a sensationalist lie, since their change was apparently a smoking pile of hot garbage, and Zig already had similar performance gains in a newer release.

  • DeathArrow 1 week ago

    >They recently tried to upstream an improvement to zig, but were prevented from doing so because zig has a hard and fast "no AI code" rule.

    And will Rust team accept their vibe coded patches?

    • Hendrikto 1 week ago

      Very likely not, if they are of similarly low quality.

    • kibwen 1 week ago

      No. The Rust project developers are more lenient when it comes to developing patches with AI assistance, but the amount of leniency one receives is proportional to the amount of pre-existing trust a contributor has with the project, and every PR still has to be reviewed by an independent human. A stranger dumping a zillion lines of slop in a PR is a one-way ticket to having your PR politely closed.

  • giancarlostoro 1 week ago

    Probably more so going with a native language that is reliable and battle-tested. Rust runs in Firefox and in production systems across major orgs, so this is not surprising.

  • sevenzero 1 week ago

    I see that as a win for Zig.

  • SkiFire13 1 week ago

    > but were prevented from doing so because zig has a hard and fast "no AI code" rule

    The patch would have been rejected either way because it was out of date and conflicted with other work going on.

  • norman784 1 week ago

    Not only because the AI part, here's a discussion [0] about it

    [0] https://ziggit.dev/t/bun-s-zig-fork-got-4x-faster-compilatio...

    • alethic 1 week ago

      In the context of this post, it's absolutely hilarious that they're vibe-porting their Zig codebase to Rust.

      I love Rust, but you couldn't pick a language with slower compile times... XD

      • jeroenhd 1 week ago

        Compiling Rust is actually quite fast in my experience. The problem with many Rust projects is that they pull in dependencies left, right, and center. Pulling in Tokio makes your project compile an entire thread-management system even if you're just compiling Hello World, and simple one-liners containing macros can easily expand into dozens of lines of code each.

        Linking is also slow, and the extreme amounts of metadata produced for LLVM almost serves as a benchmark for LLVM's throughput, but that's all in an effort to produce faster, better binaries in the end.

        On godbolt.org, a Rust Hello World compiles and runs in about 250ms; Zig's Hello World compiles and runs in 600ms. Of course, Zig is still an unfinished language, so optimisations like these are probably not a priority, but when it comes to lines of code per second, the difference isn't as big as people make it out to be.

        What will make the most difference is how many crates the rewrite will pull in. The PORTING.md file specifies "No `tokio`, `rayon`, `hyper`, `async-trait`, `futures`" for the second phase, which should definitely get rid of the excessive compile time many people associate with Rust projects.

        • Tehnix 1 week ago

          >Compiling Rust is actually quite fast in my experience

          I guess it's all relative.

          I find Rust's compile times abhorrent, and it's objectively slower than many, many other languages that also pull in dependencies left, right, and center. I guess that just means Rust scales very badly with the amount of code.

          I'd put it at a bit better than Haskell, but honestly not by much.

          I really wish Rust would focus much more on compile times, or on making smaller parallel compilation units. It's quite a chore to have to keep splitting your program into smaller and smaller crates just to not sit and wait for an eternity.

          As a comparison my CI job for Rust takes 14m running on a 16vCPU machine while my much larger TypeScript project compiles in 1m on a 2vCPU machine. I know people that have to spend quite a lot of work on keeping compile times manageable for Rust (nix, smaller crates, aggressive caching, etc etc).

          Rust still brings me enough value that I'll stick with it, but one can still dream of a better future :)

          • tcfhgj 1 week ago

            idk, maybe you can do it, but does your TypeScript project compile to machine code?

        • pierrelgol 1 week ago

          That's true, but then there's also the case of working on the Zig compiler, which is roughly a million LOC, and with `--watch -fincremental` you can get 200ms recompiles even if you change some of the most-called functions. Meanwhile, even a 5k–10k-line Rust project can take 30s to recompile on minor changes. So the impact on velocity can be quite high. I love both languages, but the Zig compiler is undeniably faster than the Rust compiler, by multiple orders of magnitude.

          • satvikpendem 1 week ago

            Rust also has incremental compiling and is pretty fast, I haven't experienced 30 second compile times when using cargo watch. See also, cranelift, which is supposed to make compile times even faster.

            • pierrelgol 1 week ago

              From my own testing, even with incremental compilation, on a codebase 10x smaller than the Zig compiler, like the Helix text editor, almost all changes on my machine take 2–3s, and with cranelift it's more like 4–5s.

              So it's definitely a faster feedback loop, and honestly completely bearable, but it's not 200ms.

            • pierrelgol 1 week ago

              The problem is not just that Rust takes a few seconds longer once; it compounds across the edit/debug cycle. If you make around 800 save/check iterations in a day, then a 2.5–3.5s feedback loop costs roughly 33–47 minutes of waiting per day. The same number of iterations at 200ms costs about 2.7 minutes.

              So the practical delta is around half an hour to three quarters of an hour per day, or multiple hours per week. That directly affects flow state and experimentation speed. Over the span of a month, that's two full days' worth of work waiting for the compiler. Or, using my company's figure for an average engineer's hourly cost, it's roughly 2550 per month, or almost 30k per year. Obviously that's a bit exaggerated, since you don't spend a full year refactoring and working like that, but even a tenth of it is still a big lump of money scaled to a few teams.

              This needs to be taken with a huge pinch of salt, because Rust provides other benefits that offset the fact that it's painfully slow to compile, but it's still worth noting.
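              The arithmetic above is easy to sanity-check; a quick script (using the comment's assumed figures of 800 rebuilds/day and the stated per-build times) reproduces roughly the same numbers:

```python
# Back-of-the-envelope check of the compile-wait estimate above.
# Assumed inputs (from the comment, not measured): ~800 save/check
# iterations per day, 2.5-3.5 s per Rust rebuild vs ~0.2 s incremental.

def daily_wait_minutes(iterations: int, seconds_per_build: float) -> float:
    """Minutes per day spent waiting on the compiler."""
    return iterations * seconds_per_build / 60

slow_lo = daily_wait_minutes(800, 2.5)  # ~33.3 min
slow_hi = daily_wait_minutes(800, 3.5)  # ~46.7 min
fast = daily_wait_minutes(800, 0.2)     # ~2.7 min

print(f"slow: {slow_lo:.1f}-{slow_hi:.1f} min/day, fast: {fast:.1f} min/day")
print(f"delta: {slow_lo - fast:.1f}-{slow_hi - fast:.1f} min/day")
```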

        • tracker1 1 week ago

          Having a gen 5 nvme helps significantly.

    • mehdix 1 week ago

      That reply was educational indeed. Thanks for sharing.

  • jeltz 1 week ago

    I don't see why they think it would work, when their patch set was rejected because it was not correct, did not go in a direction the Zig authors were interested in, and touched an area where they are already working hard on improvements. It would have been much better if the bun team had joined forces and helped out, instead of vibe coding a broken PoC patch that can never get merged. Compilation speed is one of the current main focuses of Zig, and changing the type system to make that possible was a big part of 0.16.

    Anyone can hack up a quick PoC, even without LLMs, the hard part is writing code that is correct and maintainable.

    • wiseowise 1 week ago

      > It would have been much better if the bun team joined forces and helped out instead of vibe coding a broken PoC patch that never can get merged

      Bold of you to assume they have the expertise.

      • jeltz 1 week ago

        I think they do. Building bun is a complex task, and engineers who can do that should also be able to figure out how to help out with a compiler. It is just a matter of immersing yourself in the code and being willing to put in the hours and hard work. Sure, they may not be able to help design the type resolution, but there is other work that needs doing that any skilled engineer can do.

      • merlindru 1 week ago

        Bun folks routinely contribute to WebKit, and bun itself is an incredibly impressive project, so I don't think they're lacking expertise

    • rdmsr0 1 week ago

      Side note, but I think using LLMs like this to write PoCs in existing projects is actually a good idea to prove whatever you had in mind is feasible and worth it to pour time into. Obviously you need to not vibecode the entire thing once you're past that point though...

    • naasking 1 week ago

      > It would have been much better if the bun team joined forces and helped out

      Submitting patches is joining forces and helping out.

      • dspillett 1 week ago

        Submitting patches that are correct and match the project's desired standards¹ is joining forces and helping out.

        --------

        [1] And align with the project's direction. This part is of course much more subjective so could very easily be an honest misunderstanding of the situation.

  • rob74 1 week ago

    Yeah, now that I think about it, having a major project written in a language that doesn't accept AI contributions now owned by a major AI company was a recipe for dis... er, conflict.

    I'm not a huge fan of Rust, but I guess having a project like Bun in an actually memory safe language is probably a win? Guess it depends on how good Claude is at writing Rust code...

  • codethief 1 week ago

    > but were prevented from doing so because zig has a hard and fast "no AI code" rule

    No, they were prevented from doing so because the Zig devs didn't like the proposed changes and are preparing a more comprehensive improvement.

  • TiredOfLife 1 week ago

    > They recently tried to upstream an improvement to zig

    They didn't.

  • Hendrikto 1 week ago

    The Zig maintainers did a pretty in-depth review of the PR, and laid out multiple technical reasons for why it would not get merged. They did not reject it simply for being vibe-coded (though that is likely the cause of it sucking).

  • fridder 1 week ago

    Not only that, but Zig was already working on a similar improvement of their own.

malisper 1 week ago

> what looks like a massive undertaking for vibe coding

fwiw, I suspect it's less of an undertaking than you may think. I've been playing with AI to rewrite Postgres in Rust[0] over the past couple of weeks and I found the AI to be exceptional at doing rewrites. Having an existing codebase you can reference prevents a lot of the problems you have with vibecoding: you have an existing architecture that works well and a test suite that you can test against.

Over the course of a month I've gone from nothing to passing over 95% of the Postgres test suite. Given Jarred built Bun, I bet he'll be able to go much faster

[0] https://github.com/malisper/pgrust

  • nailer 1 week ago

    > I suspect it's less of an undertaking than you may think... having an existing codebase you can reference prevents a lot of the problems you have with vibecoding.

    That's because it's not vibe coding - stingraycharles doesn't seem to understand what vibe coding is. Vibe coding was defined here https://x.com/karpathy/status/1886192184808149383

    > There's a new kind of coding I call “vibe coding”, where you fully give in to the vibes, embrace exponentials, and forget that the code even exists.

    This is very far from Anthropic's migration plans.

    • andai 1 week ago

      Yeah, it's a distinction worth making, and the language for making it kind of sucks. Vibe coding means "AI does the whole thing" or "I use tab autocomplete" depending on who you ask. It's not a very useful term anymore; we need better ones.

      My benchmark is basically, "are you letting the AI drive."

      In this case, an AI appears to have written the migration guide...

      • wrs 1 week ago

        It was and is a perfectly good term, but people started using it without regard for its definition. I don't know why people wouldn't misuse a "better" term the same way.

        • kelnos 1 week ago

          In this case I think the current zeitgeist (at least among zoomers and younger millennials) really loves the word "vibe". Once they hear of the term "vibe coding", they just want to be able to say it, even if what they're doing isn't really vibe coding.

          And then that leaks outside their social and age groups, because other people hear the incorrect usage, get confused, and incorporate that confusion into their own use of the term.

          • uncircle 1 week ago

            Waiting until they decide to call non-assisted programming ‘unc coding’

            • kelnos 1 week ago

              As someone who might be described as an "unc", I had to look up what "unc" meant.

      • c0rruptbytes 1 week ago

        i mean AI docs are usually the result of collabs between users and AI using /plan

        with superpowers, i see a lot of specs -> impl plan -> execute plan

        • andai 1 week ago

          Yeah. It "might be" that a human actually looked at it. There's just no way to know anymore. So it rightly doesn't inspire confidence.

    • bitwize 1 week ago

      "Vibe coding" = "let Dario take the wheel" as ThePrimeagen puts it.

    • brabel 1 week ago

      You are right, but recently vibe coding has become a demeaning term for AI-assisted code among anti-AI people. It's interesting seeing how quickly words evolve on the internet as they spread to different demographics.

    • Jtarii 1 week ago

      That is one person's definition of vibe coding, not "the definition" of vibe coding. Words have multiple meanings.

      • andai 1 week ago

        Just going off vibes and not even looking at the code was the original definition. But "different people say the same thing but mean different things" is kind of the problem I was getting at.

      • nailer 1 week ago

        It’s the person that created the term’s definition.

        • Jtarii 1 week ago

          Language and culture don't work like that.

          Inventing a term doesn't give you exclusive rights to provide the definition.

          • nailer 1 week ago

            Yes but it's been a little over 14 months.

Avicebron 1 week ago

I imagine claude is better at Rust than Zig?

  • fcarraldo 1 week ago

    Contributors and maintainers will also be easier to find in Rust than Zig.

    Zig is a great language and I want to see it succeed, but this is a prudent move for Bun.

    • versecafe 1 week ago

      This is likely irrelevant given bun has stopped taking community PRs entirely and Jarred is pitching that human contributors should be banned.

      • etoxin 1 week ago

        There are like 1,713 open PRs on the Bun repo. I'm assuming all are from Claude or robobun? I guess this gives us an insight into what the claude-code workflow looks like. Crazy times.

      • jabedude 1 week ago

        Where is a source for either of these extraordinary claims?

        • csande17 1 week ago
          • shadowfiend 1 week ago

            The gp's interpretation of that tweet is such a completely incorrect reading as to make one think it's likely disingenuous.

            • slopinthebag 1 week ago

              > I expect OSS to go the opposite direction: no human contribution allowed.

              How is it an incorrect interpretation? Jared is indeed pitching/suggesting/predicting that human contribution will not be allowed in the near future, i.e. banned.

              • Philpax 1 week ago

                A prediction is not a policy.

                • sandbags 1 week ago

                  When you use the word “allowed” it becomes a policy.

                  • NewsaHackO 1 week ago

                    No, it doesn't.

                    • sandbags 1 week ago

                      What would be required for you to see it as a policy?

              • kelnos 1 week ago

                "Pitching" generally means that the person making the pitch is endorsing and pushing for it. (This might also be a regional word meaning/usage difference type thing.)

                The person upthread should have said "predicting".

          • lioeters 1 week ago

            Wow, didn't realize how bad the situation was. Completely lost any respect and trust I had in the Bun project and its lead dev.

          • kelnos 1 week ago

            What a weird take. I do a ton of OSS, and the act of writing code is what makes it fun for me. If I were forced to use an LLM to write all my OSS code, I would just not do it anymore.

          • jdiaz97 1 week ago

            he clearly works for an LLM provider now

          • Imustaskforhelp 1 week ago

            Jesus christ, this is the thing that should be talked about more. What an abysmally bad take. This actually makes me worry about the fate of bun more than any other thing discussed here.

    • unclad5968 1 week ago

      I don't think Zig is different enough from rust or any other systems language for it to matter. If you can write rust you can write Zig.

      • jaggederest 1 week ago

        Anthropic makes claude, claude can write Rust like a champ and struggles at Zig. It's a straightforward "training data" argument.

        I think there are even longer term plays that Anthropic should be looking at, in this space, but it seems like they've decided rust is the right thing, so fair play. I would be (am!) thinking about making an LLM optimized high level language that you can generate / train on intensively because you control the language spec.

        • dnautics 1 week ago

          claude does not struggle with zig? not in my hands anyways.

        • aabhay 1 week ago

          Claude doesn’t write Rust like a champ. It’s still miles better at JS and Python than it is at Rust. It can do macros and single-file optimizations, but it's gotten really stuck in type hell and tried to dyn everything on multiple occasions for me.

          • vlovich123 1 week ago

            Claude struggling at Rust: not getting types correct, using the wrong abstractions, not implementing things correctly

            Claude struggling at Zig: the above + memory safety issues if you run “fast” mode.

            It is generally true that Rust code tends to be written in a way that lets the compiler catch issues at compile time. The same is not as true for Zig, Python, or JS.
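
            To make that concrete, here's a small illustrative Rust sketch (the function names are invented for the example) of the kind of bug the compiler rejects outright: using a value after ownership has moved, which in Zig or C could compile fine and become a use-after-free at runtime.

```rust
// Illustrative only: Rust's ownership rules turn a whole class of memory
// bugs (use-after-free, double-free) into compile errors.
fn consume(s: String) -> usize {
    s.len() // `s` is dropped (its buffer freed) when this function returns
}

fn demo() -> usize {
    let s = String::from("hello");
    let n = consume(s); // ownership of `s` moves into `consume`
    // println!("{}", s); // uncommenting this line fails to compile:
    //                    // error[E0382]: borrow of moved value: `s`
    n
}
```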

          • fireant 1 week ago

            As a human occasionally writing Rust I've also frequently got stuck in type hell.

      • speed_spread 1 week ago

        I'm reminded of the old joke "how to shoot yourself in the foot in 25 different languages". The first one was "C - you shoot yourself in the foot." Zig remains very close to that philosophy.

        So the difference is not in writing new stuff but in maintaining the existing codebase. Rust's rigidity makes it potentially harder to break stuff compared to Zig's general flexibility. As a project grows and matures, different types of contributors naturally come in and it's unreasonable to expect everyone to learn about historical footguns that may have accumulated.

        • hnlmorg 1 week ago

          Oh man. That joke takes me back.

    • chrisweekly 1 week ago

      100%. For many people, Bun is the only reason they've even heard of Zig. I'm not in a position to comment intelligently on comparative language features per se, but when it comes to mindshare and community size, Rust is a clear winner.

      • majormajor 1 week ago

        fwiw before today I'd heard of Zig and not Bun :D

        something JS-adjacent could certainly be more known than an obscure language but are that many people using drop-in node replacements?

        • Dylan16807 1 week ago

          fwiw I knew about both but I had no idea Bun was written in Zig.

        • chrisweekly 1 week ago

          alt runtimes are still pretty niche, but deno and bun do have some degree of adoption. For Bun, the runtime is actually sometimes perceived as unwanted baggage (e.g. a consulting client of mine wanted to pursue bun for its build tooling but had no interest in changing the runtime). IMHO, node (with Vite and PNPM) is the right call for the vast majority.

    • TheRoque 1 week ago

      Why didn't they use Rust in the first place then ? All this was true before AI

      • tux1968 1 week ago

        Anthropic only acquired Bun in December of last year. They weren't there in the first place, to make the decision.

      • epolanski 1 week ago

        Zig has some advantages for such projects, especially in the beginning.

        Among them:

        - much easier to iterate on (due to the language being simpler and compilation much faster)

        - native C/C++ interops (Zig can compile C and C++ and mix it with Zig) which is crucial for a node-replacement runtime that runs an open source JS engine

        - fewer dependencies and trivial static linking

        I guess that now that they've been acquired by Anthropic there's this combination of having both in-house Rust talent, AI which does better on Rust, and the funding and resources necessary to undertake such a migration.

        • dolmen 1 week ago

          Also: Anthropic bought Bun to not depend on node.js. But now they are dependent on Zig, which is a moving target and hostile to them because it does not accept their contributions.

          • 12_throw_away 1 week ago

            > But now they are dependent on Zig which [...] is hostile to them because not accepting their contributions

            I'm struggling to figure out how to even start interrogating this notion. What does this mean?

    • GuB-42 1 week ago

      I wouldn't call any port "prudent". In general, taking mature software and doing a major rewrite is one of the riskiest things you can do. It is a large-scale attempt to fix what isn't broken.

      Sometimes it is worth it, but it may also kill projects. A risky move. And AI doesn't help its cause. AI can save a lot of time when making ports, it is one of the things it does best, but it doesn't protect from regressions.

      I am not using Bun in production, but if I was, I would consider it a risk. Not because of Rust vs Zig, but for changing things that work.

  • allthetime 1 week ago

    Zig is a moving target. 0.15 -> 0.16 includes some massive structural changes concerning IO and async/threading.

    Claude has absolutely no idea what it's doing with bleeding edge zig unless you feed it source and guide it closely (in which case it's useful for focused work) - I'm building a game engine & tcp/udp servers with it and it requires a hands-on approach and actually understanding what's being built.

    I imagine these are not really concerns with rust at this point.

    In my ideal world the team behind bun would be putting in the work to keep up with modern zig, but it's starting to look like they are running mostly on vibes in which case rust might be a better choice.

    • rudedogg 1 week ago

      > it requires a hands-on approach and actually understanding what's being built.

      I think this is true regardless of what language you’re using.

      I’ve built a lot in Zig and there’s no difference between vibing stuff in it versus TypeScript/React. Claude can “one-shot” them both, and will mimic existing code or grep the standard library to figure everything out.

      • dns_snek 1 week ago

        The code may run but it's rarely idiomatic. For example they almost never define functions inside the struct/union/enum namespace unless it already exists and follows that style, i.e. I expect "foo.bar()" but they make it "FooMod.bar(foo)".

    • 10000truths 1 week ago

      > unless you feed it source

      Which isn't particularly difficult - the language docs and std source come with the installation, so all you need to do is tell Claude where those directories are in your skill/plugin/CLAUDE.md.

      > and guide it closely (in which case it's useful for focused work)

      It does struggle sometimes with writing code that compiles and uses the APIs correctly. My approach to that so far has been to write test blocks describing the desired interface + semantics, and asking Claude to (`zig test` -> fix errors) in a loop until all the tests pass.

      • allthetime 1 week ago

        You're already at a disadvantage having to stuff the context and spend extra tokens coercing the model in the correct direction compared to it already knowing what to do (rust, ts, go, etc.)

        Here, I just did a quick test with claude.

        1. "make a simple tcp echo server that uses rust"

        compiles and runs - took a few seconds to generate.

        2. "make a simple tcp echo server that uses zig"

        result: compile error, took literal minutes of spinning and thinking to generate

        response: "ziglang.org isn't in the allowed domains. Let me check if there's another way, or just verify the code compiles conceptually and present it clean."

        /opt/homebrew/Cellar/zig/0.15.2/lib/zig/std/Io/Writer.zig:1200:9: error: ambiguous format string; specify {f} to call format method, or {any} to skip it

        3. "make a simple tcp echo server that uses zig 0.16"

        result: compile error:

        zig build-exe main.zig main.zig:30:21: error: no field named 'io' in struct 'process.Init.Minimal' const io = init.io; ^~

        4. "make a simple tcp echo server that uses zig 0.15"

        result: compile error

        zig build-exe main.zig /nix/store/as1zlvrrwwh69ii56xg6yd7f6xyjx8mv-zig-0.15.2/lib/std/Io/Writer.zig:1200:9: error: ambiguous format string; specify {f} to call format method, or {any} to skip it

        Rust took seconds and just works. Zig examples took minutes and don't work out of the box. The DX & velocity isn't even close.
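
        For scale, the working Rust answer to that prompt is only a couple dozen lines of std. A sketch along these lines (not the actual Claude output, and the function names are my own) compiles on stable Rust with no dependencies:

```rust
use std::io::{Read, Write};
use std::net::{TcpListener, TcpStream};
use std::thread;

// Echo every byte a client sends back to it until the connection closes.
fn handle_client(mut stream: TcpStream) {
    let mut buf = [0u8; 1024];
    while let Ok(n) = stream.read(&mut buf) {
        if n == 0 || stream.write_all(&buf[..n]).is_err() {
            break; // EOF or write failure: drop the connection
        }
    }
}

// Bind to `addr` and serve each client on its own thread.
fn run_echo_server(addr: &str) -> std::io::Result<()> {
    let listener = TcpListener::bind(addr)?;
    for stream in listener.incoming() {
        let stream = stream?;
        thread::spawn(move || handle_client(stream));
    }
    Ok(())
}
```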

        • dimator 1 week ago

          i mean, if zig is doing its best (inadvertently) at shooing off slop jockeys, then i already have more confidence that:

          1. the language and stdlib are written by people who know what they're doing
          2. packages in the ecosystem, at the barest level, are written by those who didn't leave after a few compile errors they couldn't reason about

          • Philpax 1 week ago

            The agents will churn their way through the errors. The new users whose learning material is out of date, as well as the existing users that have an insurmountable task in updating their code, will give up instead.

            I think the changes are improvements, but there's a real cost to language churn, and every time it happens, the graveyard of projects grows just that little bit larger.

        • andai 1 week ago

          I guess now we can't make new programming languages anymore.

    • jedisct1 1 week ago

      The Rust ecosystem is also a moving target.

      Virtually all crates are still at version 0.x and introduce constant breaking changes: https://00f.net/2025/10/17/state-of-the-rust-ecosystem/

      If you don’t want to use obsolete versions of dependencies, you need to explicitly tell the model that. Then you have to hope it can adopt new APIs it wasn’t trained on, rewrite existing code to handle the breaking changes, and keep your fingers crossed that nothing else breaks in the process.

      LLMs perform much better with Go, not only because of the lack of hidden control flow (LLMs can deal with that, but it costs a lot of tokens) but mainly because both the language and its dependencies introduce very few breaking changes.

      • adastra22 1 week ago

        This hasn’t been true for some months. Claude has gotten better about using the latest versions of crates, and when it does encounter a breaking change from what it expects, it is usually very quick about finding the change in the docs or crate source code.

        What you are talking about used to be a pain point, but is now pretty much gone.

        Rust can be a real superpower for AI-assisted dev work, because the compiler outputs very good errors, and the type system catches most safety bugs.

  • kllrnohj 1 week ago

    I would expect all LLMs to be better at Rust than Zig - a strong, thorough compiler will simply prevent more mistakes, and the benefit of a "simple" language decreases the larger the code base gets. The more abstractions exist, the less value "no hidden control flow" or "no hidden allocations" from the standard library provides, and that's before you add the mother of all abstractions of vibe coding.

    • pizlonator 1 week ago

      I have no doubt that LLMs are good at Rust.

      But I can’t reconcile the reasoning about “strong, thorough compiler” with the fact that LLMs are also fantastic at Ruby.

      They also write really great posix shell (including very sophisticated scripts) and python.

      Something more subtle is going on.

      • josephg 1 week ago

        They do work well. But I still see the occasional type related issue or bug from refactoring that claude will introduce into javascript and python code. It seems to be happening less and less frequently as the models get better. But, the rust compiler catches real bugs in LLM code. I consider that a win.

        Has anyone made any cross-language benchmarks for LLMs? I wonder if rust's conceptual complexity makes it harder for LLMs to write? If all you care about is working software, which language is best for LLMs? Python, because there's more example code? Go or Java, because they're simpler languages? Ruby, because it's terse? Rust, because of the compiler? I'd love to see a comparison!

NewsaHackO 1 week ago

But why should they? This just seems like the groundwork for an initial refactor and moving from one language to another. They haven't actually committed to switching from Zig to Rust yet. I mean, I get if you are an investor and you want to see if they are using their time effectively, but why would it matter to anyone else?

  • SergeAx 1 week ago

    Lots of people, me included, heavily invested their time and expertise into Bun, using it as a daily driver, to bundle production code or even using it in production as a JS/TS runtime. Of course, we are interested in Bun to stay a useful tool. The Anthropic acquisition was worrying enough on its own.

    • NewsaHackO 1 week ago

      But there isn't any change in someone's expertise in Bun, currently - just in how it's developed. Why would they have to pull you into a daily stand-up about their development process?

      • SergeAx 1 week ago

        Bun may become unusable after Anthropic's meddling with it. In that case the expertise would be wasted. It's not a big deal for most users, but still.

  • stingraycharles 1 week ago

    They’re not required to do so, but like I said, it would be nice, because it removes a lot of speculation. And development is in the open, so people notice what they’re doing.

sureglymop 1 week ago

To be fair, this seems to be Bun's original creator themselves experimenting. Unclear if there's any relation to the Anthropic acquisition. But I think it's best we refrain from speculating prematurely if we just don't know.

simultsop 1 week ago

The industry does not shape itself based on HN top posts, nor media buzz. Remember YouTube's birth: necessity, available tech, fresh talent.

I believe we now have all three, but we fail at choosing.

elktown 1 week ago

"Show me the incentive and I'll show you the outcome" is usually the overarching law of software dev/design/arch.

  • stingraycharles 1 week ago

    What do you mean by that in this context?

    • elktown 1 week ago

      That the incentives have changed after being bought by Anthropic. So don't be surprised by a sudden change of heart.

      • stingraycharles 1 week ago

        Change of heart about what?

        Sorry if I’m being pedantic, but I’m not aware of Bun having made any statements about AI assisted coding before.

debarshri 1 week ago

I think it is ok to use or build vibe coded tools if they are built by experts in the domain who take ownership.

  • andkenneth 1 week ago

    I think if it's well built by experts it doesn't deserve the "vibe coded" label even if it was built with agentic tools.

splittydev 1 week ago

Honestly, this kind of thing seems to work quite well with vibe coding. If I remember correctly, the Ladybird JS engine was "vibe-ported" to Rust as well, and it passed 100% of the original test suite, in addition to new Rust tests.

faangguyindia 1 week ago

anthropic just wanted Codex-like bragging rights: Codex is developed in Rust, so now they're going to write Bun in Rust, and then Claude Code can claim to be built on Rust.

nailer 1 week ago

> what looks like a massive undertaking for vibe coding

It doesn’t look like that at all. Do you think that all use of AI is vibe coding?

  • MarsIronPI 1 week ago

    It depends on what you mean by "vibe coding". Is AI coding based on an existing implementation vibe coding? What about only from a natural-language spec? How does manual reviewing affect whether or not it's vibe coding?

    • matt_kantor 1 week ago

      > How does manual reviewing affect whether or not it's vibe coding?

      I think the most commonly-accepted definition of "vibe coding" is when you "forget that the (generated) code even exists"[0]. So vibe-ness entirely hinges upon whether you're manually reviewing. If you make/prompt changes based on what you observe in the generated code (rather than only based on runtime behavior), then you're not "vibe coding".

      I think the other things you mentioned are orthogonal to vibe-ness.

      [0]: https://en.wikipedia.org/wiki/Vibe_coding#Definition

  • lmm 1 week ago

    In practice all use of AI rapidly becomes vibe coding. Even if someone says they're going to carefully manually review everything that's generated, within a couple of days they get bored and just click approve.

    • p-e-w 1 week ago

      Not to mention that manually writing code is itself a process of understanding. It cannot be replicated by reading code, no matter how carefully.

    • jmull 1 week ago

      While I'm sure you're speaking for many, this is definitely not true across the board.

    • markatto 1 week ago

      This is just a matter of priorities - I use LLMs to write code every day and I have never put a single line of code up for review that I didn’t read and understand.

      • pineapple_opus 1 week ago

        I used to do this and then test manually to validate everything works as expected in my small open source project. But over time I saw that some bugs crept in which I was unable to track, since I was only testing manually. So I wrote some e2e tests with Playwright, and I think that gives a bit of relief (at least).

  • allthetime 1 week ago

    what would you call a fully uncommented commit with

    "+27,939Lines changed: 27939 additions & 0 deletions"

    of new rust code

    • heddhunter 1 week ago

      Just another Monday in 2026.

    • vips7L 1 week ago

      The blind leading the blind.

    • geodel 1 week ago

      I'm sure it will be called Systems Programming. Because Rust.

    • LamaOfRuin 1 week ago

      The commit would look exactly like that if it was a 100% deterministic transpilation (like Golang did with their original C implementation?).

      This is obviously very different from that, but the way the commit looks doesn't make it so.

      • kelnos 1 week ago

        The question isn't whether or not you'd get the same line count with a non-LLM tool. The question of whether or not it's vibe-coded depends on whether or not the committer actually reviewed and understood the new code. And with a 75k line difference, that seems unlikely.

        • nailer 1 week ago

          > The question of whether or not it's vibe-coded depends on whether or not the committer actually reviewed and understood the new code

          Why? Do you think large changes not made by LLMs are also reviewed line by line?

  • stingraycharles 1 week ago

    I think the definition of vibe coding is a bit fluid, in this case I just meant it to be “code fully generated by AI, possibly not fully reviewed by human eyes”. I agree that this definitely not “coding based purely off vibes”, and the approach looks legit.

  • WD-42 1 week ago

    Did you look at the branch? This is vibed, even with the most liberal definition

    https://github.com/oven-sh/bun/compare/claude/phase-a-port

    This single commit is 65k lines of additions

    https://github.com/oven-sh/bun/commit/ffa6ce211a0267161ae48b...

    • nailer 1 week ago

      The definition is at https://x.com/karpathy/status/1886192184808149383 and no, that does not match what is in the branch. Systematically migrating a code base using an LLM does not match the definition of vibe coding.

      There's a decent article by Simon Willison that talks about this: https://simonwillison.net/2025/Mar/19/vibe-coding/

      > I’m seeing people apply the term “vibe coding” to all forms of code written with the assistance of AI. I think that both dilutes the term and gives a false impression of what’s possible with responsible AI-assisted programming.

      • WD-42 1 week ago

        You're right, all 750k lines of code added in a single day - definitely reviewed and completely understood.

      • brailsafe 1 week ago

        This is just a coined term; definitions evolve over time based on usage

        • gschizas 1 week ago

          All language is "coined terms". The point is that if you dilute the definition of a term, you make the term useless. Evolution of a term doesn't happen automatically; correcting terms such as these pushes the evolution in a more useful direction. Also, evolution of language is not a magic spell that automatically forgives people for making language mistakes.

        • kelnos 1 week ago

          Then "vibe coding" is a useless term, if it just means "LLM-assisted coding". We might as well just say "LLM-assisted coding" or "AI coding" or whatever.

          As much as I find the word "vibe" generally annoying (in all contexts), I actually really like "vibe coding" as "LLM did everything and I didn't even look at it". It's a succinct, useful way to describe that mode of doing things. Diluting it down to "LLM-assisted coding" makes it useless.

          • dolebirchwood 1 week ago

            > Then "vibe coding" is a useless term

            You're absolutely right.

          • brailsafe 1 week ago

            Nah, I'm not big on these "it either matches the way ___ used it or it's useless" binaries. The term is the term, it's recent, and people are using various forms of the others you mentioned. People use it loosely, people use it specifically, this is the way for many colloquial terms, and definitions form around them and expand over time or change.

            It sort of surprises me how uptight people are getting about a term that was mentioned on X last year and has since been tossed around to loosely imply that a machine did between zero and all of the work. Just because it doesn't match exactly does not mean it's useless; it maps to a concept. If the details are important and ambiguous, then elaborate.

      • Dylan16807 1 week ago

        The dilution of the term is a real problem sometimes.

        But pointing your AI at an entire codebase to transpile pretty much entirely by itself? Yeah vibe coding is a fitting term.

        Even if you wrote it a small essay on how to Rust. That improves the situation but doesn't change the core autonomy/hope of the task.

      • rzmmm 1 week ago

        Here is the Wiktionary definition for curiosity.

        > (programming, neologism) A method of programming in which a developer generates code by repeatedly prompting a large language model.

        https://en.wiktionary.org/wiki/vibe_coding

        • dolebirchwood 1 week ago

          Thanks. That helps us know not to take Wiktionary seriously.

pstuart 1 week ago

Porting from one typed language to another seems like a perfect use for LLMs. I can see the appeal of both languages and why to consider such an action (e.g., rust is a mainstream PL vs zig's cult status (no slight intended)).

  • rtpg 1 week ago

    I think the big difficulty here is that Rust's ownership model in particular tends to require certain kinds of control flow to avoid a bunch of weird churning/copying, which makes it not as straightforward of a port target from other imperative languages.

    Like maybe you get the LLM to try _really hard_ to churn through everything, but this feels like a big case of "perils of the lack of laziness".

    Of course if you have a good idea for how to deal with allocations etc "idiomatically" already maybe that works out well. And to the credit of the port guide writer bun seems to have its explicit allocations that are already mapping pretty well to Rust.
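
    As a hypothetical illustration of that restructuring (the function here is invented for the example): a direct port of "hold a pointer to one element, then write through it" trips Rust's borrow checker, so the control flow gets reorganized around an index instead:

```rust
// Illustrative sketch: in Zig/C you might keep a pointer to the max element
// and mutate through it while still referencing the array. In Rust the
// immutable borrow from the search would conflict with the mutation, so the
// idiomatic port finds the index first, then mutates.
fn scale_largest(values: &mut [f64], factor: f64) {
    let max_index = values
        .iter()
        .enumerate()
        .max_by(|(_, a), (_, b)| a.partial_cmp(b).unwrap())
        .map(|(i, _)| i);
    if let Some(i) = max_index {
        values[i] *= factor; // the borrow from iter() ended above, so this compiles
    }
}
```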

    • pstuart 1 week ago

      This is all wild conjecture, but I'd assume that teaching the LLM to do that mapping is an achievable goal, and then it gets close to automatic -- effectively slurp the source AST into a rust AST and render.

      My only experience with ports so far is Python to Go, and it's been near flawless (just enough stupid shit to make me feel justified to be in the loop).

      • spem-in-allium 1 week ago

        I'm porting a large-ish Delphi application to C#. It's been pretty hands-off except for converting to async and some language capability mismatch.

      • rtpg 1 week ago

        It really isn't if you don't have the right abstractions.

        Especially for memory management, the right and wrong abstractions in Rust can mean a factor of 5 or 10 difference in difficulty. With the right memory management abstraction your code can be a straight-line port (or even cleaner!); with the wrong one you're going to spend a lot of tokens watching a machine spin in circles trying to untie itself.

        GC'd languages don't have this problem, though obviously you can still generate a stupid amount of pain for yourself by doing something wrong.