It's still early days. There are still a lot of quality issues with LLM generated code. But the success/fail rate will probably improve over time.
The usual sorts of fuzzing and static analyses, using AddressSanitizer and ThreadSanitizer, plus a bug bounty program to encourage external researchers to report issues. (I work on Firefox security; I also fixed 2 of the bugs linked in the blog post.)
How many of those are false positives though? Probably just over 5000?
You get bug bounties if you report the kind of bugs Mythos identified. There's a reason no-one collected bounties from the "5000 defects" Coverity identified.
The Mythos reports have several examples of chaining a whole bunch of logic in different parts of the program together to exploit something very subtle. The Coverity reports aren't anything like that. These tools aren't remotely in the same league or even universe.
The best general purpose one, anyway. Specialty tools can be much better for their niches. Heck, compiler warnings are one such niche tool, and some of them are quite good.
Firefox developers do fix issues found by Coverity. I haven't looked at the results in over a decade, but the last time I did there were a few code patterns we used in a lot of places which Coverity didn't like (but were actually okay the way we were doing them) which resulted in a colossal number of false positives.
That's this post... and while it's more detail on the same headline (271 bugs), these discussions look the same as two weeks ago (and the same as what all the bloggers and podcasters discussed).
Let's see how this will improve daily SOC work. I still don't see what the big difference is between Mythos and Opus, security-wise. I'm confident that this kind of vulnerability detection is a long-term improvement. But does Mythos specifically make such a big difference compared to "normal" models? I would love to see what the actual difference is.
Among other things, Mythos seems better at "let me find, weaponize, and stack vulnerabilities until I get end-to-end from untrusted content to root", rather than just finding one thing in a specific identified area.
Quantifying the abilities of an LLM is a hard research problem, so I'm not sure if I can describe it in any great way, but Mythos did seem to be fairly clever about putting together things from different domains to find problems.
For instance, in one of the included bugs (2022034) it figured out that a floating point value being sent over IPC could be modified by an attacker in such a way that it would be interpreted by the JS engine as an arbitrary pointer, due to the way the JS engine uses a clever representation of values called NaN-boxing. This is not beyond the realm of a human researcher to find, but it did nicely combine different domains of security.
As the person responsible for accidentally introducing that security problem (and then fixing it after the Mythos report), while I am aware of NaN-boxing (despite not being a JS engine expert), I was focused more on the other more complex parts of this IPC deserialization code so I hadn't really thought about the potential problems in this context. It is just a floating point value, what could go wrong?
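To illustrate the NaN-boxing angle with a toy sketch of my own (the tag layout, names, and deserializer below are invented for illustration and are not SpiderMonkey's actual encoding): NaN-boxing packs every JS value into 64 bits, storing real doubles directly and everything else, including object pointers, in the payload bits of NaN patterns. So a "floating point value" whose raw bits come from a hostile process can decode as a tagged pointer if nothing canonicalizes it.

    #include <cstdint>
    #include <cstdio>

    // Illustrative NaN-boxing layout (NOT SpiderMonkey's real one):
    // values whose top 16 bits reach kObjectTag are "object pointers",
    // anything below that range is treated as an ordinary double.
    constexpr uint64_t kObjectTag = 0xFFF9000000000000ULL;

    struct BoxedValue {
      uint64_t bits;
      bool isDouble() const { return (bits & 0xFFFF000000000000ULL) < kObjectTag; }
      void* asPointer() const { return reinterpret_cast<void*>(bits & 0x0000FFFFFFFFFFFFULL); }
    };

    // Hypothetical IPC deserializer that trusts the wire bits.
    BoxedValue ReadDoubleFromIPC(uint64_t wireBits) {
      BoxedValue v;
      v.bits = wireBits;  // BUG: never checks that wireBits encodes a canonical double
      return v;
    }

    int main() {
      // A compromised child process sends bits that land in the "object" range.
      BoxedValue v = ReadDoubleFromIPC(kObjectTag | 0x414141414141ULL);
      if (!v.isDouble())
        std::printf("engine now holds pointer %p\n", v.asPointer());
    }

The conceptual fix is to only construct values through a path that canonicalizes incoming doubles (e.g. collapsing every NaN to a single canonical bit pattern), so that no wire bits can ever land in the pointer range.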
Okay, so far it makes sense to me. But is the deal with JS and floating point values, which isn't something super special or super rare, only detected and identified by Mythos, while Opus wouldn't get to this point?
There doesn't have to be a huge qualitative discontinuity between Opus and Mythos. It's just that Mythos has reached a threshold where it's finally smart enough that putting it in a loop and asking it to find bugs is suddenly really effective. Especially at the beginning, Mozilla wasn't doing anything particularly clever with prompts. Mythos is just smart enough that the hit rate on obvious prompts is high enough to matter. (Maybe you can get similar performance out of Opus 4.6 with really smart prompts, but AFAICT nobody had managed it until Mythos.)
Results similar to Mythos have been duplicated by weaker models.
Think it's more a case of Mythos raising widespread awareness that tireless LLMs can be weaponized to dig through code and find that one tiny flaw nobody spotted.
The report I saw kind of seemed to be pointing at a flaw and asking "do you see it?" which is not the same thing. I felt a pretty large difference between Opus 4.6's results and Mythos's, so I would be surprised if even weaker models did anywhere near as well. I'd like to see these results, if they are using a decent methodology.
Of course, even the reports with flawed methodology could be suggesting that a great harness + weak model might achieve a similar level of results as a mediocre harness + strong model. But I'd want to see solid evidence for that.
More, many more. See the bug reports linked in the post. I checked a few, and all of them had an exploit in them, and there is definitely more than one bug listed.
Well, depending on how you define "exploit"; some might only read arbitrary pointers or just read out of bounds. Those would be useful primitives in a chain of vulnerabilities, not exploits themselves. You'll have to read through the first comments yourself, but if you're hoping that this is all nonsense and ignorable hype, you're going to be disappointed.
> “That’s the key thing that has unlocked our ability to operate at the scale we’ve been operating at now,” he said. “It gives the engineer a crank they can pull that says: ‘Yep, this has the problem,’ and then you can iterate on the code and know clearly when you’ve fixed it and eventually land the test case in the tree such that you don’t regress it.”
I don't understand much of this paragraph:
* "a crank they can pull that says: ‘Yep, this has the problem,’": as in, ring an alarm? Does the LLM ring th alarm?
* "you can iterate on the code and know clearly when you’ve fixed it": Isn't that true of most bugs, assuming you do the normal thing and generate a test case? And I thought the LLM output test cases itself: "It will craft test cases. We have our existing fuzzing systems and tools to be able to run those tests" And are they claiming the LLM facilitates iterating?
* "and eventually land the test case in the tree": Don't you create the test case before the fix? And just a few words earlier they seemed to be working on the fix, not the test case. And see the prior point about test cases.
* "such that you don’t regress it.”: How is the LLM helping here?
Maybe I'm missing some fundamental unwritten assumption?
Mostly I think this just means that having a test case makes it easier to fix and verify. You can't actually take for granted having a test case when fixing a security bug. Sometimes you only have a crash stack or maybe a vague and hypothetical static analysis result.
> eventually land the test case
This is just a reference to the fact that we don't land test cases for security bugs immediately in the public repository, to make it harder for attackers. You are right that the LLM only helps with creating the initial test case. Things like running the test case in automation is part of the standard development process.
I don't find that number very high. In a project of the size of Firefox, a new version of a compiler with stricter warnings or a draconian interpretation of the C standard can easily find 200 new bugs.
New tools find new bugs, but the oligarchy newspapers report on Mythos and not on clang-22.0.
The raw number of things found by Claude (Opus or Mythos) was much higher and would be more comparable to a new clang warning. I vaguely remember seeing a number early on in this process that was in the mid-thousands. The 271 is a small, validated subset of that. None of the 271 were deemed false positives iiuc. Most instances of a new clang warning will be false positives. (Same as most of the raw problems reported by the AI.)
It is still unclear and open for speculation as to what percentage of all security bugs in Firefox today are being found by the AIs (as opposed to not being found at all). It might be that AI is very good at certain types of problems, even if we can't put our finger on what those types are, and that after the initial wave of bug reports the AI findings will slow to a trickle even while many many other bugs remain in the codebase. Or it might be that AI really does detect most instances of some class of problems and all those bugs will now be gone forever, never to return as long as Mozilla keeps paying the token monster. This is closely related to the oft-asked question "are we better or worse off after both attackers and defenders have access to this new capability?"
Again, and this is important:
A bug is a bug. A “potential vulnerability” is a bug. A vulnerability is verifiable as having security implications with a proof of concept or other substantial evidence.
Words matter. Bugs matter. It’s important to fix large amounts of bugs, just as it always has been, and has been done. Let that be impressive on its own, because it IS impressive.
Mythos didn’t write 271 PoC for vulnerabilities and demonstrate code path reachability with security implications. Mythos found 271 valid bugs. Let that be enough.
I was a bit confused by your definitions, but here's how Mozilla broke out [1] the 271, um, things:
> As additional context, we apply security severity ratings from critical to low to indicate the urgency of a bug:
> * sec-critical and sec-high are assigned to vulnerabilities that can be triggered with normal user behavior, like browsing to a web page. We make no technical difference between these, but sec-critical bugs are reserved for issues that are publicly disclosed or known to be exploited in the wild.
> * sec-moderate is assigned to vulnerabilities that would otherwise be rated sec-high but require unusual and complex steps from the victim.
> * sec-low is assigned to bugs that are annoying but far from causing user harm (e.g, a safe crash).
> Of the 271 bugs we announced for Firefox 150: 180 were sec-high, 80 were sec-moderate, and 11 were sec-low.
Mozilla uses the term "vulnerability" for even sec-high, even though they say right below that it doesn't mean the same thing as a practical exploit. And on their definitional page, they classify even sec-low as "vulnerabilities" [2].
Words are tools that get their utility from collective meaning. I'd be interested in where you received your semantics from and whether they match up or disagree with Mozilla's.
[1] https://hacks.mozilla.org/2026/05/behind-the-scenes-hardenin...
[2] https://wiki.mozilla.org/Security_Severity_Ratings/Client
> Mozilla uses the term "vulnerability" for even sec-high, even though they say right below that it doesn't mean the same thing as a practical exploit.
That’s not evident in what you pasted at all.
What you pasted says
> sec-critical and sec-high are assigned to vulnerabilities that can be triggered with normal user behavior […] We make no technical difference between these […] sec-critical bugs are reserved for issues that are publicly disclosed or known to be exploited in the wild.
> sec-low is assigned to bugs that are annoying but far from causing user harm (e.g, a safe crash).
From this one infers that the "180 were sec-high" bugs found are actually exploitable but not known to have been exploited in the wild, and are NOT mere annoying bugs.
The difference between 180 and 271 does nothing to deflate the significance, or lack thereof, of the implication re: Mythos.
Yes, it is not in what I pasted, as I said, "even though they say right below". If you don't believe me then click on either of the links.
Presumably there are (implicit?) "sec-none" things, like [a] from the recently released 150.0.2 [b], which makes absolutely no mention of "Security Impact" or "Severity" in the bug report, unlike [c], which is listed in the Mozilla weblog post [2].
Security issues are mentioned in the Release Notes [b], which point to a completely different document [d].
Perhaps sometimes a bug is 'just' a bug, and not a vulnerability.
[a] https://bugzilla.mozilla.org/show_bug.cgi?id=2034980 ; "Can't highlight image scans in Firefox 150+"
[b] https://www.firefox.com/en-CA/firefox/150.0.2/releasenotes/
[c] https://bugzilla.mozilla.org/show_bug.cgi?id=2024918
[d] https://www.mozilla.org/en-US/security/advisories/mfsa2026-4...
I work at Mozilla; I fixed a bunch of these bugs.
In general, I would say that our use of "vulnerability" lines up with what jerrythegerbil calls "potential vulnerability". (In cases with a POC, we would likely use the word "exploit".) Our goal is to keep Firefox secure. Once it's clear that a particular bug might be exploitable, it's usually not worth a lot of engineering effort to investigate further; we just fix it. We spend a little while eyeballing things for the purpose of sorting into sec-high, sec-moderate, etc, and to help triage incoming bugs, but if there's any real question, we assume the worst and move on.
So were all 271 bugs exploitable? Absolutely not. But they were all security bugs according to the normal standards that we've been applying for years.
(Partial exception: there were some bugs that might normally have been opened up, but were kept hidden because Mythos wasn't public information yet. But those bugs would have been marked sec-other, and not included in the count.)
So if you think we're guilty of inflating the number of "real" vulnerabilities found by Mythos, bear in mind that we've also been consistently inflating the baseline. The spike in the Firefox Security Fixes by Month graph is very, very real: https://hacks.mozilla.org/2026/05/behind-the-scenes-hardenin...
I'm not a security dev or researcher or anything, but as an outsider my understanding matches how Mozilla uses the terms. Though words used by specialists and the general public can often differ...
What types of vulnerabilities was it finding? Cross-site scripting, privilege escalation, etc.? Mostly memory corruption, or any JavaScript logic bugs?
I work on SpiderMonkey, so I mostly looked at the JS bugs. It was a smorgasbord of various things. Broadly speaking I'd say the most impressive bugs were TOCTOU issues, where we checked something and later acted on it, and the testcase found a clever way to invalidate the result of the check in between.
If you look closely at, say, this patch, you might get a sense of what I mean (although the real cleverness is in the testcase, which we have not made public): https://hg-edge.mozilla.org/integration/autoland/rev/c29515d...
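To make the TOCTOU shape concrete, here is a deliberately tiny, entirely hypothetical C++ sketch (not the code the patch touches): a bounds check is performed, attacker-controllable script is allowed to run, and then the stale result of the check is used.

    #include <cstdint>
    #include <functional>
    #include <vector>

    struct Buffer {
      std::vector<uint8_t> data;
    };

    // Hypothetical API: runScript can execute arbitrary JS (e.g. via a getter),
    // which may shrink or reallocate the buffer.
    void WriteByte(Buffer& buf, size_t index, const std::function<void()>& runScript) {
      if (index >= buf.data.size())  // time of check
        return;
      runScript();                   // script runs: the buffer may be freed or resized here
      buf.data[index] = 0x41;        // time of use: the check no longer holds
    }

    int main() {
      Buffer buf;
      buf.data.resize(64);
      // The "clever testcase" is whatever makes the callback invalidate the check.
      WriteByte(buf, 32, [&] {
        buf.data.clear();
        buf.data.shrink_to_fit();    // releases the backing store mid-call
      });
    }

The usual fix is to re-validate after any point where script or GC can run, or to avoid holding raw indices and pointers across such points.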
> although the real cleverness is in the testcase, which we have not made public
What is the point of keeping it private? I'd bet feeding this patch to Opus and asking it to look for the specific TOCTOU issue fixed by the patch will make it come up with a testcase sooner or later.
Possibly! One of the many areas that might need rethinking in the age of AI (that started in February of this year) is how long security bugs should be hidden. We live in interesting times.
The same is also true of a good security researcher, and has been for a long time. The question is mostly whether it takes long enough to come up with a testcase that we've managed to ship the fix to all affected releases, and given people some time to update. (And maybe LLMs do change the calculus there! We'll have to wait and see.)
Very cool, thank you.
I'd say it leans towards memory corruption kinds of issues, as those are easiest to pass the validator, thanks to AddressSanitizer. I think there's a lot of potential for making the validator more sophisticated. Like maybe you add a JS function that will only crash when run in the parent process and have a validator that checks for that specific crash, as a way for the LLM to "prove" that it managed to run arbitrary JS in the parent. Would that turn up subtler issues? Maybe.
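As a rough sketch of what "the validator" means in this loop (my guess at the general shape, not Mozilla's actual harness; the binary name, flags, and signature strings are placeholders): run the candidate testcase against an instrumented build and only accept the bug if the output contains the crash signature you're looking for.

    #include <stdio.h>   // POSIX popen/pclose
    #include <array>
    #include <string>

    // Hypothetical validator: returns true if running the testcase produces the
    // signature we care about (an ASan report, or a marker string that could
    // only ever be printed by code running in the parent process).
    bool TestcaseTriggers(const std::string& testcasePath, const std::string& signature) {
      std::string cmd = "./firefox-asan --headless " + testcasePath + " 2>&1";
      FILE* pipe = popen(cmd.c_str(), "r");
      if (!pipe) return false;

      std::string output;
      std::array<char, 4096> chunk;
      while (fgets(chunk.data(), static_cast<int>(chunk.size()), pipe))
        output += chunk.data();
      pclose(pipe);

      return output.find(signature) != std::string::npos;
    }

    int main() {
      bool ok = TestcaseTriggers("poc.html", "AddressSanitizer: heap-use-after-free");
      printf(ok ? "reproduced\n" : "did not reproduce\n");
    }

The "prove you ran JS in the parent" idea from above would just swap the signature for a marker only a compromised parent process could print.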
Can you elaborate why those bugs weren't found by e.g. fuzzing in the past?
I'm genuinely curious what "types" of implementation mistakes these were, like whether e.g. it was library usage bugs, state management bugs, control flow bugs etc.
Would love to see a writeup about these findings, maybe Mythos hinted us towards that better fuzzing tools are needed?
Fuzzing isn't good at things like dealing with code behind a CRC check, whereas the audit-based approach using an LLM can see the sketchy code, then calculate the CRC itself to come up with a test case. I think you end up having to write custom fuzzing harnesses to get at the vulnerable parts of the code. (This is an example from a talk by somebody at Anthropic.)
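A toy illustration of that point (mine, not the one from the talk): random mutations essentially never satisfy the checksum, so a blind fuzzer can't reach the buggy copy, while anything that reads the code can just compute the CRC for its crafted payload.

    #include <cstdint>
    #include <cstring>
    #include <vector>

    // Toy CRC32 (the exact polynomial doesn't matter for the point).
    uint32_t Crc32(const uint8_t* p, size_t n) {
      uint32_t crc = 0xFFFFFFFFu;
      for (size_t i = 0; i < n; i++) {
        crc ^= p[i];
        for (int b = 0; b < 8; b++)
          crc = (crc >> 1) ^ (0xEDB88320u & (0u - (crc & 1)));
      }
      return ~crc;
    }

    // Packet layout: [4-byte CRC][1-byte length][payload...]
    void ParsePacket(const std::vector<uint8_t>& pkt) {
      if (pkt.size() < 5) return;
      uint32_t expected;
      std::memcpy(&expected, pkt.data(), 4);
      if (Crc32(pkt.data() + 4, pkt.size() - 4) != expected)
        return;                               // random mutations almost always die here
      uint8_t len = pkt[4];
      uint8_t buf[16];
      std::memcpy(buf, pkt.data() + 5, len);  // BUG: len can exceed 16 (and the packet)
      (void)buf;
    }

    int main() {
      // An auditor (human or LLM) just computes the CRC for a crafted packet
      // with len = 255; a fuzzer rarely guesses 4 matching checksum bytes.
      std::vector<uint8_t> pkt(5 + 255, 0x41);
      pkt[4] = 255;
      uint32_t crc = Crc32(pkt.data() + 4, pkt.size() - 4);
      std::memcpy(pkt.data(), &crc, 4);
      ParsePacket(pkt);  // reaches the buggy memcpy
    }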
That being said, I think there's a lot of potential for synergy here: if LLMs make writing code easier, that includes fuzzers, so maybe fuzzers will also end up finding a lot more bugs. I saw somebody on Twitter say they used an LLM to write a fuzzer for Chrome and found a number of security bugs that they reported.
If I had to guess, I'd say that AI is better at finding TOCTOU bugs than fuzzing because it starts by looking at the code and trying to find problems with it, which naturally leads it to experiment with questions like "is there any way to make this assumption false?", whereas fuzzing is more brute force. Fuzzing can explore way more possible states, but AI is better at picking good ones.
In this particular sense, AI tends to find bugs that are closer to what we'd see from a human researcher reading the code. Fuzz bugs are often more "here's a seemingly innocuous sequence of statements that randomly happen to collide three corner cases in an unexpected way".
Outside of SpiderMonkey, my understanding is that many of the best vulnerabilities were in code that is difficult to fuzz effectively for whatever reason.
How about this: a "vulnerability" is a "vulnerability", but after it has been identified and verified to cause a problem, that's when it should be called a "bug", because it could make the software do unwanted things.
At Mozilla, everything is called a bug. It's what other systems call an "issue". So it's too late for your terminology at Mozilla. (Example: I have a bug to improve the HTML output of my static analysis tool. There is nothing incorrect or flawed about the current output.)
At Mozilla, but not everywhere: exploits are a subset of vulnerabilities are a subset of bugs.
When I worked at Mozilla, _everything_ was called a bug, whether it was a software issue, a problem in the office or some paperwork missing.
Much as GitHub calls everything an "issue" and GitLab a "work item".
Mythos did in fact write PoCs for all bugs that crash with demonstration of memory-unsafe behavior (e.g. use-after-free, out-of-bounds reads/writes, etc).
For us this is substantial enough evidence to consider it a security vulnerability at that point, unless shown otherwise, and it has always been this way (also for fuzzing bugs).
Is that number of crashing bugs with PoC available/written down anywhere?
> Mythos did in fact write PoCs for all bugs that crash with demonstration of memory-unsafe behavior (e.g. use-after-free, out-of-bounds reads/writes, etc).
But report [1] says that "Some of these bugs showed evidence of memory corruption...", which implies that the majority of these (which includes the 271 bugs from Mythos) don't have evidence at all. Do I not understand something?
> For us this is substantial enough evidence to consider it a security vulnerability at that point
Mythos is supposed to be pretty good at writing actual exploits, so (as I understand it) there shouldn't be any serious problem with checking whether a bug is a vulnerability or not.
[1] https://www.mozilla.org/en-US/security/advisories/mfsa2026-3...
> But report [1] says that "Some of these bugs showed evidence of memory corruption...", which implies that the majority of these (which includes the 271 bugs from Mythos) don't have evidence at all. Do I not understand something?
I'm guessing a bit, but for example: out of bounds reads are not memory corruption. Assertion failures in debug builds are also usually not memory corruption, and I'd guess that many of these bugs were found through assertions. (Some parts of Firefox like the SpiderMonkey JS engine make heavy use of assertions, and that's the biggest signal used for defect validation. An assertion firing is almost always treated as a real and serious problem. Though with our harness, Opus and Mythos try to come up with an exploit PoC anyway.)
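To make that distinction concrete, a tiny sketch (my own examples, not taken from the actual reports): an out-of-bounds read leaks data without corrupting anything, a debug assertion aborts before memory is ever touched, and only an out-of-bounds write actually corrupts memory.

    #include <cassert>
    #include <cstdio>

    int Lookup(const int* table, int size, int index) {
      assert(index >= 0 && index < size);  // debug build: aborts here - a signal, not corruption
      return table[index];                 // release build: OOB *read*, leaks adjacent memory
    }

    void Store(int* table, int /*size*/, int index, int value) {
      table[index] = value;                // OOB *write*: this one actually corrupts memory
    }

    int main() {
      int table[4] = {1, 2, 3, 4};
      std::printf("%d\n", Lookup(table, 4, 7));  // ASan flags this as a stack OOB read
    }

All three shapes get treated as security bugs under the triage described above, but only the write is memory corruption in the strict sense, which is presumably the narrow meaning the advisory's "evidence of memory corruption" wording refers to.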
> Mythos didn’t write 271 PoC for vulnerabilities
I think the word you're looking for is exploit?
This isn’t true anywhere people have to make decisions about what to work on first.
It may be worth noting that Claude can and will (if it believes you own the code, at least) produce PoC exploits for exploitable bugs that it finds.
My only source for this is personal experience, and no, I can't share any evidence of it.
I dismissed the earlier non-technical blog post as shameless product boosterism for Anthropic. The linked hacks blog (which is a better source than this article) is a welcome release. It's hard to deny there's something real to this now, I think. Mozilla's internal definition of a "vulnerability" is also probably more widely applied than what many would intuit, but it is good that these issues are being taken seriously and fixed.
At the same time, other companies like AISLE are matching Mythos on vulnerabilities using older models but their own harness: https://aisle.com/blog/aisle-matches-anthropic-mythos-on-fre...
So while Mythos certainly is real, I think you could do the same with Deepseek pro, GPT 5.5, etc...
Agreed. The earlier blog post did not explicitly claim this, but I think casual viewers were prompted to believe that the Magic of Mythos (TM) went and found (and fixed??) a bunch of vulnerabilities with minimal human guidance, and even contrasted this with their fuzzing infrastructure and made it sound (to me) like it was casting shade on it.
This new post makes it pretty clear that this was all bolted on top of their existing fuzzing infrastructure, and really just used to get more and better initial hits for a very skilled team to look at. I assume Anthropic was giving them a very good deal on inference for the positive PR, but I believe these other reports and suspect Mozilla did not really need them.
As the Hacks.Mozilla article notes: "We began with small-scale experiments prompting the harness to look for sandbox escapes with Claude Opus 4.6. Even with this model, we identified an impressive amount of previously-unknown vulnerabilities which required complex reasoning over multiprocess browser engine code."
> The linked hacks blog
https://hacks.mozilla.org/2026/05/behind-the-scenes-hardenin...
Original source: https://news.ycombinator.com/item?id=48051079
It's better because it actually lists a sample of Bugzilla reports that were made public. This topic was discussed previously (36 comments two weeks ago: https://news.ycombinator.com/item?id=47885042), but the part about bug reports being made public is brand new.
I hope to see the day when (or if) the LLMs get so good at spotting and fixing bugs that all that’s left for the Firefox engineers to do is to focus on adding new features.
This isn’t sarcasm. Firefox deserves to be used more. Most people I know don’t use it because “Chrome does almost everything better”, and Firefox can’t compete with the other browsers’ roadmaps.
> Firefox deserves to be used more
Totally agree. I even go as far as choosing which websites I make purchases on depending on whether they work on FF, or writing to support occasionally to tell them the site isn't supported or a feature isn't working properly, and that fixing it would be appreciated.
I know it pretty much always goes nowhere, but I feel it's what I can do to keep the browser somehow on the radar.
> Firefox deserves to be used more
Part of the problem is, when they stop working on fixing bugs, they start doing Mr Robot things... We just want a web browser. Nobody asked for pocket, or AI...
If they use AI to fix all the bugs, then what else is for them to do, other than maintain syntax compatibility with the various languages they build with? They're just going to go back to making the browser trash again.
I have an old APK of Firefox pinned on mobile. I do this because I genuinely believe that for my very limited use cases, the browser has become actively worse.
(Don't worry- I use the system browser for any site I don't fully trust.)
Chrome ain't better in any meaningful way for >99% of use cases. Heck, I am a dev and I use FF with ublock origin only, for the past... 10 years?
Same with my wife: after I explained things to her and she understood how different the internet experience can be, that's been her primary browser.
So please don't frame the argument like 'here is a crappy underdog, but please use it because monopoly is bad and Google is a bit evil'; it's a first-class experience in everything I have ever thrown at it. Triple that on mobile: by far the best and most useful mobile experience, bar none.
They've only linked a few tickets, so maybe when we see all 271 actual distinct things the insight won't apply, but all those I examined ended up as some C++ code with a nasty bug in it.
Firefox is written in several languages; only about 25% of it is C++, but every single one of these issues seems to touch the C++.
A general limitation of this approach is that it is only as good as your validator, and there's nothing easier to validate than a test case that creates, say, an AddressSanitizer use-after-free. For subtler issues, will we have to write more specific validators, or will the LLM become better at coming up with other dangerous conditions it can verify? We'll see.
> A general limitation of this approach is that it is only as good as your validator, and there's nothing easier to validate than a test case that creates, say, an AddressSanitizer use-after-free
Sure, but surely AddressSanitizer would also detect the same problem in the C or Rust, which together also make up about 25% of Firefox, so... ?
It's possible Mythos is a lot better at finding vulnerabilities in C++ code than it is for other languages. After all, these models are also based on pre-existing security analysis.
From what I can tell, a lot of these bugs were hardly C++-specific, they just happened in C++ code. Even the most secure Rust can't magically catch things like TOCTOU issues.
> Even the most secure Rust can't magically catch things like TOCTOU issues
I suppose it depends on what the word "magically" means. A ToCToU race happens because you imagined things wouldn't change but they did, and in Rust you actually do write fewer patterns with this mistake because of the Mutable xor Aliased rule. If we have at least one immutable reference to a Goose then Rust isn't OK with anybody mutating the Goose: your safe Rust can't do that and unsafe Rust mustn't do that. So the ToCToU race caused by "Oops, I forgot somebody else might change the Goose" is less likely, because you were made to wrestle with this problem during design - the safe Rust where you just forgot about this doesn't compile.
Reading this article in the context of the Zig folks refusing to even consider LLM-generated bugs certainly shapes my perspective on what technologies will be in my toolchain.
Both are right, and it depends on which model you use and who submits those bugs. The capabilities of leading models went from 99% noise to 99% valid bugs in essentially a few months. Some projects are flooded with the former and need to take precautions to avoid what are essentially DoS attacks on the maintainers.
When I was at PalmSource, I tried to get budget for Coverity or Fortify (static code analysis tools). "Too expensive," my management chain said. I spent another year putting together a lower-cost deal limited to scanning just the network stack. "No, it's based on BSD and BSD is inherently secure," my management chain said (neither is true, btw).
I eventually left and wound up at Mozilla where there were a number of /* flawfinder ignore */ comments scattered throughout the code.
My guess is that Mythos just ignored the "flawfinder ignore" directives and reported the known vulnerabilities in the code.
The code is open. If you can prove that's the case you'll have a real news story...
What are people's thoughts on how this could affect static analysis tools? I know they are very different beasts, but often they achieve the same goal. Static analysis tools can be slow, and they report lots of false positives.
I wonder if these models will get good + cheap enough so that people rarely reach for static analysis.
I've been thinking about this. Static analysis tools can also be much faster and most are fully deterministic, so including them in CI can catch bugs or latent bugs before they have a chance to land.
I maintain a static analysis tool used in Firefox's CI. False positives have to be fixed or annotated as non-problems in order for you to land a patch in our tree. That means permitting zero positives (false or true), which is a strict threshold. This is a conscious tradeoff; it requires weakening the analysis and accepting some false negatives (missed bugs) in order to keep the signal-to-noise ratio high enough that people don't just ignore it and annotate everything away, or stop running it. Nearly all static analysis tools have to do this balancing act.
AI, as commonly used, is given more leeway. It's kind of fundamental that it must be allowed to hallucinate false positives; that's the source of much of its power. Which means you need layers of verification and validation on top of it. It can be slow, and you'll never be able to say "it catches 100% of the errors of this particular form: ...", and yet it catches so much stuff.
Data point: my analysis didn't cover one case that I erroneously thought was unlikely to produce true positives (real bugs), and that was more complex to implement than seemed worth the trouble. Opus or Mythos, I'm not sure which, started reporting vulnerabilities stemming from that case, so I scrambled and extended the analysis to cover the gap. It took me long enough to implement that, by the time I had a full scan of the source tree, Claude had already found every important problem the scan reported. The static analysis found several others, and I still honestly don't know whether any of them could ever be triggered in practice.
I still think there's value in the static analysis. Some of those occurrences of the problematic pattern might be reachable now through paths too tricky for the AI to construct. Some of them might turn into real problems when other code changes. It seems worth having fixes for all of them now for both possibilities, and also for the lesser reason of not wanting the AI to waste time trying to exploit them. At the same time, clearly the cost/benefit balance has shifted.
They could also team up: if I relax my standards and allow my analysis to write an additional warnings report of suspected problems, with the clear expectation that they might be false alarms, then I could feed that list to an AI to validate them. Essentially, feed slop to the slop machine and have it nondeterministically filter out the diamonds in the rough.
Food for thought...
LLMs are much better at using tools than replacing tools. The tools are generally a lot faster than trying to achieve the same result with an LLM.
Using LLM coding tools to stay on top of static analysis tool output works very well and adding some guard rails that enforce that there are no issues is probably a good idea. Just like adding CI checks to make sure everything is clean.
As for false positives, it depends on the tool. I tend to avoid tools that generate mostly noise. Most of these tools allow you to disable rules if they produce a lot of noise. Or you can just tell the LLM to fix all the issues. When it's cheaper to fix things than to argue with the rule, just fix it. That used to be really expensive when you had to do that manually. Now it isn't.
I recently did this to an Ansible code base that I needed to refresh after not touching it for a few years. It had hundreds of ansible-lint issues; mostly deprecation warnings and some other non-fatal warnings. And 10 minutes later I had zero. Mostly they probably weren't very serious ones, but it's a form of technical debt. If you have to fix hundreds of warnings manually, you are probably not going to do it. But if you can wave a magic wand and it all goes away, why not? I adjusted the guard rails so it now always runs ansible-lint and fixes any issues. It only takes a few seconds extra.
One interesting possibility I've seen discussed[0] is enhancing the static analysis tools to find bug shapes that LLMs originally discover. There's no doubt static analysis tools are faster and cheaper, so they scale better to huge codebases and may allow generalizing the LLM's methods.
[0] https://lwn.net/Articles/1068968/
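One cheap version of that idea is to freeze the "shape" of a confirmed finding into a check that re-runs on every commit. The pattern below is invented for illustration (a message- or packet-derived length flowing straight into memcpy), not taken from the Firefox reports, and a regex is only a crude stand-in for a proper AST- or dataflow-based check:

    import pathlib
    import re
    import sys

    # Invented bug shape for illustration: a length that looks like it came
    # from a message/packet being passed to memcpy on the same line.
    SHAPE = re.compile(r"memcpy\s*\([^;]*\b(msg|packet|ipc)\w*len\b", re.IGNORECASE)

    hits = []
    for path in pathlib.Path("src").rglob("*.cpp"):
        for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
            if SHAPE.search(line):
                hits.append(f"{path}:{lineno}: {line.strip()}")

    # Crude, but it will catch re-introductions of an already-understood shape.
    print("\n".join(hits))
    sys.exit(1 if hits else 0)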
> Anyone building software can start using a harness with a modern model to find bugs and harden their code today. We recommend getting started now.
From what I understand, that is a recipe for getting quickly banned by commercial LLM providers?
Really, it was not that Opus could not do all of this; there was just no incentive to fix the bugs. Mythos represented a real marketing use case, so yes, thanks for spending money to fix this, but it is not sustainable.
In the latest Mission Impossible, saving the world depends on recovering the original software of an escaped superhuman AGI from a sunken Russian submarine. Luther writes a "poison pill" that, given the original source, will instantly one-shot the AI. We were left to wonder how this magical code could have been written, but now we know: Luther just wrote a Mythos prompt that handed it the source code and asked for an immutable critical exploit.
Curious if people think LLMs will lead to more secure or less secure software in five years.
Both. The skilled will use them to find problems, the unskilled will use them to slopcode insecure software the skilled will have to fix.
That depends on which side has more money.
One of the biggest issues in security historically imo is vendors who think, well nobody will ever find this bug so we can deprioritize fixing it. LLMs will prevent vendors lying to themselves which will lead to more secure software.
In 5 years attackers have an advantage but in the long run I think more secure if developers use LLMs on software to find and fix all of the worse remotely exploitable bugs before release. LLMs are going to force devs to be much more security conscious.
I’m just happy we’re talking about security.
That will make software safer alone.
Kinda like how home-improvement stores, power tools, easily available hardware, and YouTube tutorials led to both incredibly amazing, durable furniture and janky, ugly, even dangerous furniture.
More tools for more people equals more stuff being made on a wider range.
More secure software, but in the same way that the population is net healthier after a plague.
It will probably wipe out a few categories of issues, which is probably a good thing. And the things that are still insecure can also be translated to some other language.
Translating things to Rust manually was already a thing before LLMs came into the picture. Now, with LLMs, that's only going to get easier and faster. The long-term value is going to come from getting on top of the mountain of technical debt in the form of existing C/C++ code bases that are responsible for the vast majority of memory exploits, buffer overflows, and other issues that, despite decades of attention, are still being found across major code bases on a regular basis.
Mozilla finding these issues comes on the back of a quarter century of some very competent engineers trying to do the right thing and using all the tools at their disposal to prevent these issues from happening. I have a lot of respect for that team and the contributions it has made over the years to improve tools, testing/verification practices, etc. The issue is not their effort or competence.
The job of taking an existing system that is well covered by tests, well documented/specified, etc., and producing a new one that can function as a drop-in replacement is now something that can be considered. A few years ago that would have translated into absolutely massive project cost and risk. Now it's something you can kick off on a Friday afternoon. Worst case it doesn't work; best case you end up with a much better implementation.
It's still early days. There are still a lot of quality issues with LLM generated code. But the success/fail rate will probably improve over time.
More secure, at least in the cases where the tools are properly applied.
But it also represents more easily available opportunities for blackhats to abuse against the projects where these tools were not being applied.
I'm curious how Mozilla did bug finding before Mythos. Did they use any non-AI bug-finding tools?
The usual sorts of fuzzing and static analyses, using AddressSanitizer and ThreadSanitizer. Also a bug bounty program to try to encourage external researchers to report issues. (I work on Firefox security; I also fixed 2 of the bugs linked in the blog post.)
Coverity (similar to lint) scans various open source software products for vulnerabilities.
see https://www.blackduck.com/static-analysis-tools-sast/coverit...
and for Firefox-related alleged defects, see https://scan.coverity.com/projects/firefox
You have to create an account to view the actual reported defects.
There are just over 5000 reported defects still outstanding. I don't know how many of them overlap with the 271 Mythos-reported defects.
How many of those are false positives though? Probably just over 5000?
You get bug bounties if you report the kind of bugs Mythos identified. There's a reason no-one collected bounties from the "5000 defects" Coverity identified.
The Mythos reports have several examples of chaining a whole bunch of logic in different parts of the program together to exploit something very subtle. The Coverity reports aren't anything like that. These tools aren't remotely in the same league or even universe.
Yeah, fuzzing, sanitizers, and bug bounties were our main pre-AI tools for finding bugs.
It's just sad that Coverity represents the best working C++ static analysis tool.
The best general purpose one, anyway. Specialty tools can be much better for their niches. Heck, compiler warnings are one such niche tool, and some of them are quite good.
Firefox developers do fix issues found by Coverity. I haven't looked at the results in over a decade, but the last time I did there were a few code patterns we used in a lot of places which Coverity didn't like (but were actually okay the way we were doing them) which resulted in a colossal number of false positives.
16-day-old story
Wired: Mozilla Used Anthropic's Mythos to Find and Fix 271 Bugs in Firefox (41 points, 18 comments) https://news.ycombinator.com/item?id=47853649
Ars: Mozilla: Anthropic's Mythos found 271 security vulnerabilities in Firefox 150 (33 points, 8 comments) https://news.ycombinator.com/item?id=47855384
No, it's a new post, see also
https://hacks.mozilla.org/2026/05/behind-the-scenes-hardenin...
That's this post... and while it's more detail on the same headline (271 bugs), these discussions look the same as two weeks ago (and the same as what all the bloggers and podcasters discussed).
Let's see how this will improve daily SOC work. I still don't see what the big difference is between Mythos and Opus, security-wise. I'm confident that this kind of vuln detection is a long-term improvement. But does Mythos specifically make such a big difference compared to "normal" models? I would love to see what the actual difference is.
Among other things, Mythos seems better at "let me find, weaponize, and stack vulnerabilities until I get end-to-end from untrusted content to root", rather than just finding one thing in a specific identified area.
Quantifying the abilities of an LLM is a hard research problem, so I'm not sure if I can describe it in any great way, but Mythos did seem to be fairly clever about putting together things from different domains to find problems.
For instance, in one of the included bugs (2022034) it figured out that a floating point value being sent over IPC could be modified by an attacker in such a way that it would be interpreted by the JS engine as an arbitrary pointer, due to the way the JS engine uses a clever representation of values called NaN-boxing. This is not beyond the realm of a human researcher to find, but it did nicely combine different domains of security.
As the person responsible for accidentally introducing that security problem (and then fixing it after the Mythos report), while I am aware of NaN-boxing (despite not being a JS engine expert), I was focused more on the other more complex parts of this IPC deserialization code so I hadn't really thought about the potential problems in this context. It is just a floating point value, what could go wrong?
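For anyone unfamiliar with the technique, here is a deliberately simplified sketch of the general NaN-boxing idea (the tag value and layout below are invented for illustration and are not SpiderMonkey's actual encoding): a 64-bit slot holds either an ordinary double or, when the high bits form a particular NaN pattern, a tag plus a payload such as a pointer. A raw double copied in from IPC without canonicalization can therefore masquerade as a boxed pointer.

    import struct

    POINTER_TAG = 0xFFFC  # invented tag meaning "this NaN payload is a pointer"

    def double_from_bits(bits):
        return struct.unpack("<d", struct.pack("<Q", bits))[0]

    def bits_from_double(value):
        return struct.unpack("<Q", struct.pack("<d", value))[0]

    # Attacker crafts a "double" whose bit pattern is tag | payload.
    evil = double_from_bits((POINTER_TAG << 48) | 0x41414141)

    # On the wire it is just a floating point number (a NaN)...
    print(evil)  # nan

    # ...but if the receiver stores those 64 bits verbatim into a NaN-boxed
    # value slot, the engine later sees the tag and treats the payload as a
    # pointer chosen by the attacker.
    bits = bits_from_double(evil)
    print(hex(bits >> 48), hex(bits & 0xFFFFFFFFFFFF))

    # The fix is to canonicalize on receipt: collapse every incoming NaN to a
    # single canonical bit pattern before the value can ever be boxed.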
Okay, so far it makes sense to me. But is the deal with JS and floating point values, which isn't something super special or super rare, only detected and identified by Mythos, while Opus wouldn't get to this point?
There doesn't have to be a huge qualitative discontinuity between Opus and Mythos. It's just that Mythos has reached a threshold where it's finally smart enough that putting it in a loop and asking it to find bugs is suddenly really effective. Especially at the beginning, Mozilla wasn't doing anything particularly clever with prompts. Mythos is just smart enough that the hit rate on obvious prompts is high enough to matter. (Maybe you can get similar performance out of Opus 4.6 with really smart prompts, but AFAICT nobody had managed it until Mythos.)
Results similar to Mythos have been duplicated by weaker models.
I think it's more a case of Mythos raising widespread awareness that tireless LLMs can be weaponized to dig through code and find that one tiny flaw nobody spotted.
The report I saw kind of seemed to be pointing at a flaw and asking "do you see it?" which is not the same thing. I felt a pretty large difference between Opus 4.6's results and Mythos's, so I would be surprised if even weaker models did anywhere near as well. I'd like to see these results, if they are using a decent methodology.
Of course, even the reports with flawed methodology could be suggesting that a great harness + weak model might achieve a similar level of results as a mediocre harness + strong model. But I'd want to see solid evidence for that.
A vuln finder is useful only if it respects the humans on the other end. Every bogus report taxes the same scarce attention needed for the real bugs.
I just hope they don't start ignoring human created bug reports, as there are still many that haven't been fixed for years.
I still don't know the exploit count for Mythos. Is it zero, one, or more?
More, many more. See the bug reports linked in the post. I checked a few, and all of them had an exploit in them, and there is definitely more than one bug listed.
Well, it depends on how you define "exploit": some might only read arbitrary pointers or just read out of bounds. Those would be useful primitives in a chain of vulnerabilities, not exploits themselves. You'll have to read through the first comments yourself, but if you're hoping that this is all nonsense and ignorable hype, you're going to be disappointed.
> “That’s the key thing that has unlocked our ability to operate at the scale we’ve been operating at now,” he said. “It gives the engineer a crank they can pull that says: ‘Yep, this has the problem,’ and then you can iterate on the code and know clearly when you’ve fixed it and eventually land the test case in the tree such that you don’t regress it.”
I don't understand much of this paragraph:
* "a crank they can pull that says: ‘Yep, this has the problem,’": as in, ring an alarm? Does the LLM ring th alarm?
* "you can iterate on the code and know clearly when you’ve fixed it": Isn't that true of most bugs, assuming you do the normal thing and generate a test case? And I thought the LLM output test cases itself: "It will craft test cases. We have our existing fuzzing systems and tools to be able to run those tests" And are they claiming the LLM facilitates iterating?
* "and eventually land the test case in the tree": Don't you create the test case before the fix? And just a few words earlier they seemed to be working on the fix, not the test case. And see the prior point about test cases.
* "such that you don’t regress it.”: How is the LLM helping here?
Maybe I'm missing some fundamental unwritten assumption?
Mostly I think this just means that having a test case makes it easier to fix and verify. You can't actually take for granted having a test case when fixing a security bug. Sometimes you only have a crash stack or maybe a vague and hypothetical static analysis result.
> eventually land the test case
This is just a reference to the fact that we don't land test cases for security bugs immediately in the public repository, to make it harder for attackers. You are right that the LLM only helps with creating the initial test case. Things like running the test case in automation are part of the standard development process.
Thank you; that makes sense.
Related:
The zero-days are numbered
https://news.ycombinator.com/item?id=47853277
I don't find that number very high. In a project of the size of Firefox, a new version of a compiler with stricter warnings or a draconian interpretation of the C standard can easily find 200 new bugs.
New tools find new bugs, but the oligarchy newspapers report on Mythos and not on clang-22.0.
The raw number of things found by Claude (Opus or Mythos) was much higher and would be more comparable to a new clang warning. I vaguely remember seeing a number early on in this process that was in the mid-thousands. The 271 is a small, validated subset of that. None of the 271 were deemed false positives iiuc. Most instances of a new clang warning will be false positives. (Same as most of the raw problems reported by the AI.)
It is still unclear and open for speculation as to what percentage of all security bugs in Firefox today are being found by the AIs (as opposed to not being found at all). It might be that AI is very good at certain types of problems, even if we can't put our finger on what those types are, and that after the initial wave of bug reports the AI findings will slow to a trickle even while many many other bugs remain in the codebase. Or it might be that AI really does detect most instances of some class of problems and all those bugs will now be gone forever, never to return as long as Mozilla keeps paying the token monster. This is closely related to the oft-asked question "are we better or worse off after both attackers and defenders have access to this new capability?"