mrshu 4 hours ago

GitHub no longer publishes aggregate numbers so here they are parsed out. It looks like they are down to a single 9 at this point across all services:

https://mrshu.github.io/github-statuses/

  • everfrustrated 3 hours ago

    Nice! I still remember the old GitHub status page which showed and reported on their uptime. No surprises they took it offline and replaced it with the current one when it started reporting the truth.

    EDIT: You mention this with archive.org links! Love it! https://mrshu.github.io/github-statuses/#about

  • Anon1096 3 hours ago

    > It looks like they are down to a single 9 at this point across all services

    That's not at all how you measure uptime. The per area measures are cool but the top bar measuring across all services is silly.

    I'm unsure what they're targeting; across the board it seems to be mostly 99.5+ with the exception of Copilot. Just doing the math, three 99.5% services (independent, which I'm aware they aren't fully) bring you down to an overall "single 9" 98.5% healthy status, but that's not meaningful to anyone.
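
    For illustration, a rough back-of-the-envelope version of that math (a sketch assuming independent failures, which as I said they aren't fully):

        # Combined availability of N services, assuming failures are independent
        # (they aren't fully in practice, so treat this as a rough sketch)
        per_service = 0.995   # ~99.5% availability per service
        services = 3
        combined = per_service ** services
        print(f"{combined:.4f}")  # 0.9851 -> roughly the "single 9" 98.5% figure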

    • munk-a 3 hours ago

      It depends on whether the outages overlap or not. If they don't overlap, then that is indeed how you do it, since some of your services being unavailable means your service is not fully available.

      • reed1234 3 hours ago

        They are overlapped. You can hover over the bars and some bars have multiple issues.

      • mynameisvlad 2 hours ago

        I mean, there's a big difference between primary Git operations being down and Copilot being down. Any SLAs are probably per-service, not as a whole, and I highly doubt that someone just using a subset of services cares that one of the other services is down.

        Copilot seems to be the worst offender, and 99% of people using Github likely couldn't care less.

  • onionisafruit 3 hours ago

    It's interesting to see that copilot has the worst overall. I use copilot completions constantly and rarely notice issues with it. I suspect incidents aren't added until after they resolve.

    • nightpool 3 hours ago

      Completions run using a much simpler model that Github hosts and runs themselves. I think most of the issues with Copilot that I've seen are upstream issues with one or two individual models (e.g. the Twitter API goes down so the Grok model is unavailable, etc)

  • jeltz 3 hours ago

    Do I misunderstand, or does your page count today's downtime as minor? I would not count the web UI being mostly unusable as minor. Does this mean GitHub understates how bad incidents are? Or has your page just not yet been updated to include it?

    • onionisafruit 3 hours ago

      Today's isn't accounted for on that page yet

  • gordonhart 3 hours ago

    Great project, thanks for building and sharing!

showerst 6 hours ago

If you'd asked me a few years ago whether anything could be an existential threat to GitHub's dominance in the tech community, I'd have quickly said no.

If they don't get their ops house in order, this will go down as an all-time own goal in our industry.

  • panarky 5 hours ago

    Github lost at least one 9, if not two, since last year's "existential" migration to Azure.

    • imglorp 3 hours ago

      I'm pretty sure they don't GAF about GH uptime as long as they can keep training models on it (0.5 /s), but Azure is revenue friction so might be a real problem.

      Something this week about "oops we need a quality czar": https://news.ycombinator.com/item?id=46903802

      • joshstrange 3 hours ago

        > (0.5 /s),

        Does this mean you are only half-sarcastic/half-joking? Or did I interpret that wrong?

        • imglorp 3 hours ago

          Yes that's it.

    • showerst 5 hours ago

      I'm sympathetic to ops issues, and particularly sympathetic to ops issues that are caused by brain-dead corporate mandates, but you don't get to be an infrastructure company and have this uptime record.

      It's extra galling that they advertise all the new buzzword laden AI pipeline features while the regular website and actions fail constantly. Academically I know that it's not the same people building those as fixing bugs and running infra, but the leadership is just clearly failing to properly steer the ship here.

    • arianvanp 5 hours ago

      They didn't migrate yet.

      • Krutonium 5 hours ago

        Fucking REALLY?!

        • panarky 5 hours ago

          Migrations of Actions and Copilot to Azure completed in 2024.

          Pages and Packages completed in 2025.

          Core platform and databases began in October 2025 and are in progress, with traffic split between the legacy Github data center and Azure.

          • ifwinterco 5 hours ago

            That's probably partly why things have got increasingly flaky - until they finish there'll be constant background cognitive load and surface area for bugs from the fact everything (especially the data) is half-migrated

            • panarky 5 hours ago

              You'd think so, and we don't know about today's incident yet, but recent Github incidents have been attributed specifically to Azure, and Azure itself has had a lot of downtime recently that lasts for many hours.

              • ifwinterco 4 hours ago

                True, the even simpler explanation is what they've migrated to is itself just unreliable

          • pipo234 5 hours ago

            This has those Hotmail migration vibes from the early 2000s.

            • jeffrallen 3 hours ago

              And yet, somehow my wife still has a hotmail.com address 25 years later.

    • sgt 5 hours ago

      Is there any reason why Github needs 99.99% uptime? You can continue working with your local repo.

      • degenerate 5 hours ago

        Many teams work exclusively in GitHub (ticketing, boards, workflows, dev builds). People also have entire production build systems on GitHub. There's a lot more than git repo hosting.

        • munk-a 3 hours ago

          It's especially painful for anyone who uses Github actions for CI/CD - maybe the release you just cut never actually got deployed to prod because their internal trigger didn't fire... you need to watch it like a hawk.

          • jamiecurle an hour ago

            I waited 2.5 hours for a webhook from the registry_packages endpoint today.

            I'm grateful it arrived, but two and a half hours feels less than ideal.

      • amonith 4 hours ago

        I'm a firm believer that almost nothing except public services needs that kind of uptime... We've introduced ridiculous amounts of complexity to our infra to achieve this and we've contributed to the increasing costs of both services and development itself (the barrier of entry for current juniors is insane compared to what I've had to deal with in my early 20s).

        • nosman 4 hours ago

          What do you mean by public services?

          All kinds of companies lose millions of dollars of revenue per day, if not per hour, if their sites are not stable... apple, amazon, google, Shopify, uber, etc etc.

          Those companies have decided the extra complexity is worth the reliability.

          Even if you're operating a tech company that doesn't need to have that kind of uptime, your developers probably need those services to be productive, and you don't want them just sitting there either.

          • amonith a minute ago

            By public services I mean only important things like healthcare, law enforcement, fire department. Definitely not stores and food delivery. You can wait an hour or even a couple of hours for that.

            > Those companies have decided the extra complexity is worth the reliability.

            Companies always want more money and yes it makes sense economically. I'm not disagreeing with that. I'm just saying that nobody needs this. I grew up in a world where this wasn't a thing and no, life wasn't worse at all.

      • babo 5 hours ago

        As an example, a Go build could fail anywhere if a dependency module from GitHub is not available.

        • sethops1 4 hours ago

          Any module that is properly tagged and contains an OSS license gets stored in Google's module cache indefinitely. As long as it was go-get-ed once before, you can pull it again without going to GitHub (or any other VCS host).

        • toastal 2 hours ago

          Does go build not support mirrors so you can define a fallback repository? If not, why?

      • badgersnake 5 hours ago

        Lots of teams embraced actions to run their CI/CD, and GitHub reviews as part of their merge process. And copilot. Basically their SOC2 (or whatever) says they have to use GitHub.

        I’m guessing they’re regretting it.

        • swiftcoder 5 hours ago

          > Basically their SOC2 (or whatever) says they have to use GitHub

          Our SOC2 doesn't specify GitHub by name, but it does require we maintain a record of each PR having been reviewed.

          I guess in extremis we could email each other patch diffs, and CC the guy responsible for the audit process with the approval...

          • bostik 3 hours ago

            Every product vendor, especially those that are even within shouting distance of security, has a wet dream: to have their product explicitly named in corporate policies.

            I have cleaned up more than enough of them.

          • sgt 4 hours ago

            Does SOC2 itself require that or just yours? I'm not too familiar with SOC2 but I know ISO 27001 quite well, and there's no PR specific "requirements" to speak of. But it is something that could be included in your secure development policy.

            • badgersnake 3 hours ago

              Yeah, it’s what you write in the policy.

              • swiftcoder 3 hours ago

                And it's pretty common to write in the policy, because it's pretty much a gimme, and it lets you avoid writing a whole bunch of other equivalent quality measures in the policy.

          • onraglanroad 5 hours ago

            The Linux kernel uses an email based workflow. You can digitally sign email and add it to an immutable store that can be reviewed.

      • nullstyle 5 hours ago

        The money i pay them is the reason

      • theappsecguy 5 hours ago

        What if you need to deploy to production urgently...

      • ajross 4 hours ago

        I think this is being downvoted unfairly. I mean, sure, as a company accepting payment for services, being down for a few hours every few months is notably bad by modern standards.

        But the inward-looking point is correct: git itself is a distributed technology, and development using it is distributed and almost always latency-tolerant. To the extent that github's customers have processes that are dependent on services like bug tracking and reporting and CI to keep their teams productive, that's a bug with the customer's processes. It doesn't have to be that way and we as a community can recognize that even if the service provider kinda sucks.

        • progmetaldev 3 hours ago

          There are still some processes that require a waterfall method for development, though. One example would be if you have a designer, and also have a front-end developer that is waiting for a feature to be complete before they come in and start their development. I know on HN it's common for people to be full-stack developers, or for front-end developers to be able to work with a mockup and write the code before a designer gets involved, but there are plenty of companies that don't work that way. Even if a company is working in an agile manner, there still may come a time where work stalls until some part of a system is finished by another team/team-member, especially in a monorepo. Of course they could change the organization of their project, but the time suck of doing that (like going with microservices) is probably going to waste quite a bit more time than GitHub's downtime ever will.

          • ajross 3 hours ago

            > There are still some processes that require a waterfall method for development, though

            Not on the 2-4 hour latency scale of a GitHub outage though. I mean, sure, if you have a process that requires the engineering talent to work completely independently on day-plus timescales and/or do all their coordination offline, then you're going to have a ton of trouble staffing[1] that team.

            But if your folks can't handle talking with the designers over chat or whatnot to backfill the loss of the issue tracker for an afternoon, then that's on you.

            [1] It can obviously be done! But it's isomorphic to "put together a Linux-style development culture", very non-trivial.

        • toastal 2 hours ago

          Git being snapshot-based, it has some issues being distributed in practice: patch order matters, which means in most cases with more than two folks you basically need some centralized authoritative server to resolve the order of patches for meaningful use, since the hash is used in so many contexts.

          • ajross an hour ago

            That's... literally what a merge collision is. The tooling for that predates git by decades. The solutions are all varying levels of non-trivial and involve tradeoffs, but none of them require 24/7 cloud service availability.

      • esafak 5 hours ago

        Are you kidding? I need my code to pass CI and get reviewed so I can move on, otherwise the PRs just keep piling up. You might as well say that when the lights go out, you can still do paperwork.

        • koreth1 5 hours ago

          > otherwise the PRs just keep piling

          Good news! You can't create new PRs right now anyway, so they won't pile.

        • munk-a 3 hours ago

          When in doubt - schedule a meeting about how you're unable to do work to keep doing work!

  • bartread 5 hours ago

    Yeah, I'm literally looking at GitLab's "Migrate from GitHub" page on their docs site right now. If there's a way to import issues and projects I could be sold.

    • stevekemp 2 hours ago

      If you're considering moving away from github due to problems with reliability/outages, then any migration to gitlab will not make you happy.

    • gslepak 3 hours ago

      > If there's a way to import issues and projects I could be sold.

      That is what that feature does. It imports issues and code and more (not sure about "projects", don't use that feature on Github).

  • thinkingtoilet 4 hours ago

    This is obviously empty speculation, but I wonder if the mindless rush to AI has anything to do with the increase in outages we've seen recently.

    • iLoveOncall 20 minutes ago

      It does. I work at Amazon and I can see the increase in outages or major issues since AI has been pushed.

    • ezst 3 hours ago

      Or maybe the mindless rush to host it in azure?

  • throwaway5752 2 hours ago

    This is Microsoft. They forced a move to Azure, and then prioritized AI workloads higher. I'm sure the training read workloads on GH are nontrivial.

    They literally have the golden goose, the training stream of all software development, dependencies, trending tool usage.

    In an age of model providers trying to train their models and keep them current, the value of GitHub should easily be in the high tens of billions or more. The CEO of Microsoft should be directly involved at this point; their franchise is at risk on multiple fronts now. Windows 11 is extremely bad. GitHub is going to lose its foundational role in modern development shortly, and early indications are that they hitched their wagon to the wrong foundational model provider.

  • jbmilgrom 5 hours ago

    I viscerally dislike GitHub so much at this point. I don't know how they come back from this. Major opportunity for a competitor here to come around with AI-native features like context versioning.

mholt 6 hours ago

Of course they're down while I'm trying to address a "High severity" security bug in Caddy but all I'm getting is a unicorn when loading the report.

(Actually there's 3 I'm currently working, but 2 are patched already, still closing the feedback loop though.)

I have a 2-hour window right now that is toddler free. I'm worried that the outage will delay the feedback loop with the reporter(s) into tomorrow and ultimately delay the patches.

I can't complain though -- GitHub sustains most of my livelihood so I can provide for my family through its Sponsors program, and I'm not a paying customer. (And yet, paying would not prevent the outage.) Overall I'm very grateful for GitHub.

  • cced 5 hours ago

    Which security bug(s) are you referring to?

    • NewJazz 3 hours ago

      Presumably bugs that may still be under embargo

  • gostsamo 5 hours ago

    have you considered moving or having at least an alternative? asking as someone using caddy for personal hosting who likes to have their website secure. :)

    • mholt 5 hours ago

      We can of course host our code elsewhere, the problem is the community is kind of locked-in. It would be very "expensive" to move, and would have to be very worthwhile. So far the math doesn't support that kind of change.

      Usually an outage is not a big deal, I can still work locally. Today I just happen to be in a very GH-centric workflow with the security reports and such.

      I'm curious how other maintainers maintain productivity during GH outages.

      • gostsamo 5 hours ago

        Yep, I get you about the community.

        As an alternative, I was thinking mainly of a secondary repo and CI in case GitHub stops being reliable, not just because of the current instability but because of GitHub as a provider overall. I'm from the EU and recently I catch myself evaluating every US company I interact with, and I'm starting to realize that mine might not be the only risk vector to consider. Wondering how other people think about it.

    • Nextgrid 5 hours ago

      > have you considered moving or having at least an alternative

      Not who you're responding to, but my 2 cents: for a popular open-source project reliant on community contributions there is really no alternative. It's similar to social media - we all know it's trash and noxious, but if you're any kind of public figure you have to be there.

      • jeltz 5 hours ago

        Several quite big projects have moved to Codeberg. I have no idea how it has worked out for them.

        • Zambyte 41 minutes ago

          Zig has been doing fine since switching to Codeberg

        • mring33621 3 hours ago

          LOL Codeberg's 'Explore' link is 503 for me!

      • malfist 5 hours ago

        N.I.N.A. (Nighttime Imaging 'N' Astronomy) is on bitbucket and it seems to be doing really well.

        Edit: Nevermind, looks like they migrated to github since the last time I contributed

      • gostsamo 5 hours ago

        I get that, but if we all rely on the defaults, there couldn't be any alternatives.

    • indigodaddy 5 hours ago

      You are talking to the maintainer of caddy :)

      Edit- oh you probably meant an alternative to GitHub perhaps..

      • gostsamo 5 hours ago

        no worries, misunderstandings happen.

stefankuehnel 6 hours ago

You can literally watch GitHub explode bit by bit. Take a look at the GitHub Status History; it's hilarious: https://www.githubstatus.com/history.

  • 12_throw_away 4 hours ago

    14 incidents in February! It's February 9th! Glad to see the latest great savior phase of the AI industrial complex [1] is going just as well as all the others!

    [1] https://www.theverge.com/tech/865689/microsoft-claude-code-a...

    • chrisandchris 2 hours ago

      An interesting thing I notice is that people don't like companies that only post about outages when half the world is affected ... but they also don't like companies that post about "minor issues", e.g.:

      > During this time, workflows experienced an average delay of 49 seconds, and 4.7% of workflow runs failed to start within 5 minutes.

      That's for sure not perfect, but it also means there was a ~95% chance that if you re-ran the job, it would run and not fail to start. Another incident is about notifications being late. I'm sure all the others have similar issues that people notice, but nobody writes about them. So a simple "too many incidents" doesn't make the stats bad - only an unstable service does.
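
      To put a number on that (a quick sketch assuming a re-run's chance of starting on time is independent of the first attempt, which during an incident it often isn't):

          # 4.7% of workflow runs failed to start within 5 minutes
          p_fail = 0.047
          print(f"single attempt starts on time: {1 - p_fail:.1%}")       # ~95.3%
          print(f"both attempts fail to start:   {p_fail ** 2:.2%}")      # ~0.22%
          print(f"at least one attempt starts:   {1 - p_fail ** 2:.1%}")  # ~99.8%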

    • jeffrallen 2 hours ago

      At this point they are probably going to crash their status system. "No one ever expected more than 50 incidents in a month!"

  • munk-a 6 hours ago

    You know what I think would reverse the trend? More vibe coding!

    • slyzmud 5 hours ago

      I know you are joking, but I'm sure there is at least one director or VP inside GitHub pushing a new salvation project that must use AI to solve all the problems, when actually the most likely reason is that engineers are drowning in tech debt.

      • munk-a 5 hours ago

        Upper management in Microsoft has been bragging about their high percentage of AI generated code lately - and in the meantime we've had several disastrous Windows 11 updates with the potential to brick your machine and a slew of outages at github. I'm sure it might be something else but it's clear part of their current technical approach is utterly broken.

        • riddlemethat 4 hours ago

          CoPilot has done more for Linux than anyone expected. I switched. I'm switching my elderly parents away next before they fall victim.

        • LeifCarrotson 4 hours ago

          Utterly broken - perhaps, but apparently that's not exclusive with being highly profitable, so why should they care?

          • munk-a 3 hours ago

            When I first typed up my comment I said "their current business approach" and then corrected it to technical since - yea, in the short term it probably isn't hurting their pocket books too much. The issue is that it seems like a lot more folks are seriously considering switching off Windows - we'll see if this actually is the year of the linux desktop (it never seems to be in the end) but it certainly seems to be souring their brand reputation in a major way.

          • Aperocky 4 hours ago

            For the time being. Does anyone want Windows 11 for real?

            The inertia is not permanent.

          • moffkalast 4 hours ago

            Cause it's finally the year of Linux on desktop.

      • risyachka 5 hours ago

        It’s not a joke. This is funny because it is true.

      • OtomotO 5 hours ago

        Better to replace management by AI.

        Computers can produce spreadsheets even better and they can warm the air around you even faster.

        • Sharlin 4 hours ago

          I mean, the strengths of LLMs were always a much better match for the management than for technical work:

          * writing endless reports and executive summaries

          * pretending to know things that they don't

          * not complaining if you present their ideas as yours

          * sycophancy and fawning behavior towards superiors

        • chrisjj 5 hours ago

          Plus they don't take stock options!

        • mrweasel 4 hours ago

          Honestly AI management would probably be better. "You're a competent manager, you're not allowed to break or circumvent workers right laws, you must comply with our CSR and HR policies, provide realistic estimates and deliver stable and reliable products to our customers." Then just watch half the tech sector break down, due to a lack of resources, or watch as profit is just cut in half.

    • alansaber 5 hours ago

      All the cool kids move fast and break things. Why not the same for core infrastructure providers? Let's replace our engineers with markdown files named after them.

    • brookst 5 hours ago

      This kind of thing never happened before LLMs!

    • akulbe 5 hours ago

      No, the reason it's happening is because they must be vibe coding! :P

      • akulbe 5 hours ago

        [flagged]

        • badgersnake 5 hours ago

          No because you missed the joke.

    • re-thc 5 hours ago

      That's not good enough. You need SKILLS!

  • elondemirock 4 hours ago

    I'm happy that they're being transparent about it. There's no good way to take downtime, but at least they don't try to cover it up. We can adjust and they'll make it better. I'm sure a retro is on its way; it's been quite the bumpy month.

  • krrishd 3 hours ago

    I was sort of hoping this would be a year-to-date visualization similar to Github profile contribution graphs...

  • melodyogonna 5 hours ago

    I think this will continue to happen until they finish migrating to Azure

  • hnthrowaway0315 6 hours ago

    Someone should make a timeline chart from that, lol.

adamcharnock 4 hours ago

We've migrated to Forgejo over the last couple of weeks. We position ourselves[0] as an alternative to the big cloud providers, so it seemed very silly that a critical piece of our own infrastructure could be taken out by a GitHub or Azure outage.

It has been a pretty smooth process. Although we have done a couple of pieces of custom development:

1) We've created a Firecracker-based runner, which will run CI jobs in Firecracker VMs. This brings the Forgejo Actions running experience much more closely into line with GitHub's environment (VM, rather than container). We hope to contribute this back shortly, but also drop me a message if this is of interest.

2) We're working up a proposal[1] to add environments and variable groups to Forgejo Actions. This is something we expect to need for some upcoming compliance requirements.

I really like Forgejo as a project, and I've found the community to be very welcoming. I'm really hoping to see it grow and flourish :D

[0]: https://lithus.eu, adam@

[1]: https://codeberg.org/forgejo/discussions/issues/440

PS. We are also looking at offering this as a managed service to our clients.

  • cyberpunk an hour ago

    Why .eu if you're in London? Where are your servers located and who hosts them?

MattIPv4 6 hours ago

Status page currently says the only issue is notification delays, but I have been getting a lot of Unicorn pages while trying to access PRs.

Edit: Looks like they've got a status page up now for PRs, separate from the earlier notifications one: https://www.githubstatus.com/incidents/smf24rvl67v9

Edit: Now acknowledging issues across GitHub as a whole, not just PRs.

  • priteau 6 hours ago

    They added the following entry:

    Investigating - We are investigating reports of impacted performance for some GitHub services. Feb 09, 2026 - 15:54 UTC

    But I saw it appear just a few minutes ago, it wasn't there at 16:10 UTC.

    • priteau 6 hours ago

      And just now:

      Investigating - We are investigating reports of degraded performance for Pull Requests Feb 09, 2026 - 16:19 UTC

  • twistedpair 2 hours ago

    I cannot approve PRs because the JSON API is returning HTML error pages. Something is really hosed over there.

  • salmon 6 hours ago

    Yep, trying to access commit details is just returning the unicorn page for me

  • mephos 5 hours ago

    git operations are down too.

petetnt 5 hours ago

GitHub has had customer visible incidents large enough to warrant status page updates almost every day this year (https://www.githubstatus.com/history).

This should not be normal for any service, even at GitHub's size. There's a joke that your workday usually stops around 4pm, because that's when GitHub Actions goes down every day.

I wish someone inside the house cared to comment on why the services barely stay up and what kinds of actions they are planning to take to fix this issue, which has been going on for years but has definitely accelerated in the past year or so.

  • huntaub 5 hours ago

    It's 100% because the number of operations happening on Github has likely 100x'd since the introduction of coding agents. They built Github for one kind of scale, and the problem is that they've all of a sudden found themselves with a new kind of scale.

    That doesn't normally happen to platforms of this size.

    • data-ottawa 5 hours ago

      A major platform lift and shift does not help. They are always incredibly difficult.

      There are probably tons of baked in URLs or platform assumptions that are very easy to break during their core migration to Azure.

      • lelanthran 3 hours ago

        > A major platform lift and shift does not help.

        ISTR that the lift-n-shift started like ... 3 years ago? That much of it was already shifted to Azure ... 2 years ago?

        The only thing that changed in the last 1 year (if my above two assertions are correct (which they may not be)) is a much-publicised switch to AI-assisted coding.

cedws 6 hours ago

Screw GitHub, seriously. This unreliability is not acceptable. If I’m in a position where I can influence what code forge we use in future I will do everything in my power to steer away from GitHub.

  • edoceo 5 hours ago

    Forge feature parity is easy to find. But GH has that discoverability feature and the social cues from stars/forks.

    One solution I see is (eg) internal forge (Gitlab/gitea/etc) and then mirrored to GH for those secondary features.

    Which is funny. If GH was better we'd just buy their better plan. But as it stands we buy from elsewhere and just use GH free plans.

    • coffeebeqn 4 hours ago

      Every company I’ve worked at in the last 10 years used GH for internal codebase hosting, PRs, and sometimes CI. Discoverability doesn’t really come into the picture for those users, and you can still fork things from GitHub even if you don’t host your core code infra on it

    • regularfry 5 hours ago

      Stars are just noise. All they tell you is how online the demographics of that ecosystem are.

      Mirroring is probably the way forward.

jbpadgett 6 hours ago

3 outages in 3 months straight according to their own status history. https://www.githubstatus.com/history

  • hnthrowaway0315 6 hours ago

    I wonder who left the team recently. Must be someone loaded with shadow knowledge. Or maybe they sent the devops/dev work to another continent.

    • jsheard 5 hours ago

      They're in the process of moving from "legacy" infra to Azure, so there's a ton of churn happening behind the scenes. That's probably why things keep exploding.

      • estimator7292 5 hours ago

        I don't know jack about shit here, but genuinely: why migrate a live production system piecewise? Wouldn't it be far more sane to start building a shadow copy on Azure and let that blow up in isolation while real users keep using the real service on """legacy""" systems that still work?

        • chickenpotpie 5 hours ago

          Because it's significantly harder to isolate problems and you'll end up in this loop

          * Deploy everything

          * It explodes

          * Rollback everything

          * Spend two weeks finding the problem in one system and then fix it

          * Deploy everything

          * It explodes

          * Rollback everything

          * Spend two weeks finding a new problem that was created while you were fixing the last problem

          * Repeat ad nauseam

          Migrating iteratively gives you a foundation to build upon with each component

          • wizzwizz4 5 hours ago

            So… create your shadow system piecewise? There is no reason to have "explode production" in your workflow, unless you are truly starved for resources.

            • paulddraper an hour ago

              Does this shadow system have usage?

              Does it handle queries, trigger CI actions, run jobs?

        • throwway120385 5 hours ago

          Why would you avoid a perfect opportunity to test a bunch of stuff on your customers?

        • toast0 4 hours ago

          If you make it work, migrating piecewise should be less change/risk at each junction than a big jump between here and there of everything at once.

          But you need to have pieces that are independent enough to run some here and some there, and ideally pieces that can fail without taking down the whole system.

        • literallyroy 5 hours ago

          That’s a safer approach but will cause teams to need to test in two infrastructures (old world and new) til the entire new environment is ready for prime time. They’re hopefully moving fast and definitely breaking things.

        • paulddraper 5 hours ago

          A few reasons:

          1. Stateful systems (databases, message brokers) are hard to switch back-and-forth; you often want to migrate each one as few times as possible.

          2. If something goes sideways -- especially performance-wise -- it can be hard to tell the reason if everything changed.

          3. It takes a long time (months/years) to complete the migration. By doing it incrementally, you can reap the advantages of the new infra, and avoid maintaining two things.

          ---

          All that said, GitHub is doing something wrong.

      • helterskelter 5 hours ago

        It took me a second to realize this wasn't sarcasm.

      • hnthrowaway0315 5 hours ago

        Are they just going to tough through the process and whatever...

    • perdomon 5 hours ago

      I think it's more likely the introduction of the ability to say "fix this for me" to your LLM + "lgtm" PR reviews. That or MS doing their usual thing to acquired products.

    • persedes 3 hours ago

      Rumors I've heard were that GitHub is mostly run by contractors? That might explain the chaos more than simple vibe coding (which probably aggravates it).

    • arccy 5 hours ago

      nah, they're just showing us how to vibecode your way to success

      • hnthrowaway0315 5 hours ago

        If the $$$ they saved > the $$$ they lose then yeah it is a success. Business only cares about $$$.

        • collingreen 4 hours ago

          Definitely. The devil is in the details though since it's so damn hard to quantify the $$$ lost when you have a large opinionated customer base that holds tremendous grudges. Doubly so when it's a subscription service with effectively unlimited lifetime for happy accounts.

          Business by spreadsheet is super hard for this reason - if you try to charge the maximum you can before people get angry and leave then you're a tiny outage/issue/controversy/breach from tipping over the wrong side of that line.

          • hnthrowaway0315 4 hours ago

            Yeah, but who cares about the long term? In the long term we are all dead. A CEO only needs to be good for 5-10 years max, pump up the stock price, get applause everywhere, and be called the smartest guy in the world.

  • bartread 5 hours ago

    I think the last major outage wasn't even two weeks ago. We've got about another 2 weeks to finish our MVP and get it launched and... this really isn't helpful. I'm getting pretty fed up with the unreliability.

  • aloisdg 5 hours ago

    Sure it is not vibe coding related

RomanPushkin 6 hours ago

Looks like AI replacement of engineering force in action.

  • alexeiz 6 hours ago

    You're absolutely right! Sorry I deleted your database.

    • gunapologist99 6 hours ago

      I can help you restore from backups if you will tell me where you backed it up.

      You did back it up, right? Right before you ran me with `--allow-dangerously-skip-permissions` and gave me full access to your databases and S3 buckets?

      • jpalawaga 5 hours ago

        You're right! Let's just quickly promote your only read replica to the new primar---oops!

        • whatevermom3 3 hours ago

          I was laughing really hard until I remembered it happened to me a few months ago and I wasn't having fun at that time.

        • guluarte 3 hours ago

          Good news: I optimized your infrastructure costs to zero. Bad news: I did it by deleting everything. You're welcome.

      • lelanthran 3 hours ago

        > I can help you restore from backups if you will tell me where you backed it up.

        "Whoops, now that one is nuked too. You have any more backups I can practice my shell commands on?"

      • SAI_Peregrinus 3 hours ago

        I'm very sorry I deleted your `backups` bucket, despite being specifically instructed not to touch the `backups` bucket.

  • dmix 5 hours ago

    Github is moving to Microsoft Azure which is causing all of this downtime AFAIK

    • DeepYogurt 5 hours ago

      That's cover. They've been doing that since microsoft bought them

      • ifwinterco 5 hours ago

        Yeah but that's exactly the issue - that whole time dev time will have been getting chewed up on the migration when it could have been spent elsewhere

  • rvz 5 hours ago

    More like Tay.ai and Zoe.ai AIs still arguing amongst themselves not being able to keep the service online for Microsoft after they replaced their human counterparts.

razwall 6 hours ago

They're overwhelmed with all the vibecoded apps people are pushing after watching the Super Bowl.

  • ddtaylor 5 hours ago

    Their network stack is run by OpenAI and is now advertising cool new ways for us to stay connected in a fun way with Mobile Co (TM).

pimpl 5 hours ago

What are good alternatives to GitHub for private repos + actions? I'm considering moving my company off of it because of reliability issues.

  • mfenniak 5 hours ago

    It probably depends on your scale, but I'd suggest self-hosting a Forgejo instance, if it's within your domain expertise to run a service like that. It's not hard to operate, it will be blazing fast, it provides most of the same capabilities, and you'll be in complete control over the costs and reliability.

    A few people have replied to you mentioning Codeberg, but that service is intended for Open Source projects, not private commercial work.

    • palata 4 hours ago

      This. I have been using Codeberg and self-hosting Forgejo runners and I'm happy. For personal projects though, I don't know for a company.

      Also very happy with SourceHut, though it is quite different (Forgejo looks like a clone of GitHub, really). The SourceHut CI is really cool, too.

  • yoyohello13 5 hours ago

    We self-host Gitlab at work and it's amazing. CI/CD is great and it has never once gone down.

  • ai-christianson 5 hours ago

    If you want to go really minimal you can do raw git+ssh and hooks (pre/post commit, etc).

    • chasd00 5 hours ago

      i would imagine that's what everyone is doing instead of sitting on their hands. Setup a different remote and have your team push/pull to/from it until Github comes back up. I mean you could probably use ngrok and setup a remote on your laptop in a pinch. You shouldn't be totally blocked except for things like automated deployments or builds tied specifically to github.com

      Distributed source control is distributable.

      • peartickle 5 hours ago

        It's also fun when a Jr. on the team distributes the .env file via Git...

        • mrweasel 4 hours ago

          Couldn't you avoid that with .gitignore and pre-commit hooks? A determined Jr. can still mess it up, but you can minimize the risk.
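
          Something along these lines, for example - a minimal sketch of a pre-commit hook (written in Python purely for illustration; the file patterns and message are made up) that you'd drop into .git/hooks/pre-commit and make executable:

              #!/usr/bin/env python3
              # Minimal pre-commit hook sketch: refuse to commit staged .env files.
              import subprocess
              import sys

              # Files staged for this commit
              staged = subprocess.run(
                  ["git", "diff", "--cached", "--name-only"],
                  capture_output=True, text=True, check=True,
              ).stdout.splitlines()

              blocked = [f for f in staged if f == ".env" or f.endswith("/.env")]
              if blocked:
                  print("Refusing to commit env files: " + ", ".join(blocked), file=sys.stderr)
                  sys.exit(1)  # a non-zero exit aborts the commit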

  • lelanthran 2 hours ago

    > What are good alternatives to GitHub for private repos + actions? I'm considering moving my company off of it because of reliablity issues.

    Dunno about actions[1], but I've been using a $5/m DO droplet for the last 5 years for my private repo. If it ever runs out of disk space, an additional 100GB of mounted storage is an extra $10/m

    I've put something on it (Gitea, I think) that has the web interface for submitting PRs, reviewing them, merging them, etc.

    I don't think there is any extra value in paying more to a git hosting SaaS for a single user, than I pay for a DO droplet for (at peak) 20 users.

    ----------------------

    [1] Tried using Jenkins, but alas, a $5/m DO droplet is insufficient to run Jenkins. I mashed up shell scripts + Makefiles in a loop, with a `sleep 60` between iterations.

  • Defelo 4 hours ago

    I've been using https://radicle.xyz/ + https://radicle-ci.liw.fi/ (in combination with my own ci adapter for nix flakes) for about half a year now for (almost) all my public and private repos and so far I really like it.

    • rirze 2 hours ago

      +1, I like the idea of a peer-distributed code forge. I've been using it as well.

  • Kelteseth 5 hours ago

    Gitlab.com. CI is super nice and easily self hostable.

    • misnome 5 hours ago

      And their status history isn't much better. It's just that they are so much smaller it's not Big News.

      • plagiarist 4 hours ago

        For me it is their history of high-impact easily avoidable security bugs. I have no idea why "send a reset password link to an address from an unauthenticated source" was possible at all.

    • MYEUHD 5 hours ago

      I heard that it's hard to maintain self-hosted Gitlab instances

      • 12_throw_away 4 hours ago

        Nah at a small scale it's totally fine, and IME pretty pain-free after you've got it running. The biggest pain points are A) It's slow, B) between auth, storage, and CI runners, you have a lot of unavoidable configuration to do, and C) it has a lot of different features so the docs are MASSIVE.

      • cortesoft 5 hours ago

        Not really. About average in terms of infrastructure maintenance. Have been running our orgs instance for 5 years or so, half that time with premium and half the time with just the open source version, running on kubernetes... ran it in AWS at first, then migrated to our own infrastructure.

      • throwuxiytayq 5 hours ago

        I type docker pull like once a month and that's it.

      • Kelteseth 5 hours ago

        Uhm no? We have been self-hosting Gitlab for 6 years now with monthly updates and almost zero issues, just apt update && apt upgrade.

  • ramon156 5 hours ago

    Codeberg is close to what i need

  • jruz 5 hours ago

    I left for codeberg.org and my own ci runner with woodpecker. Soooo much faster than github

  • estimator7292 5 hours ago

    At my last job I ran a GitLab instance on a tiny AWS server and ran workers on old desktop PCs in the corner of the office.

    It's pretty nice if you don't mind it being some of the heaviest software you've ever seen.

    I also tried gitea, but uninstalled it when I encountered nonsense restrictions with the rationale "that's how GitHub does it". It was okay, pretty lightweight, but locking out features purely because "that's what GitHub does" was just utterly unacceptable to me.

    • NewJazz 5 hours ago

      One thing that always bothered me about gitea is they wouldn't even dog food for a long time. GitLab has been developing on GitLab since forever, basically.

  • theredbeard 5 hours ago

    Gitlab.com is the obvious rec.

  • xigoi 4 hours ago

    SourceHut.

  • ewuhic 5 hours ago

    Don't listen to the clueless suggesting Gitlab. It's forgejo (not gitea) or tangled, that's it.

    • tenacious_tuna 5 hours ago

      > clueless suggesting Gitlab

      ad hominem isn't a very convincing argument, and as someone who also enjoys forgejo, it doesn't make me feel good to see it used as the justification by another recommender.

    • Zetaphor 5 hours ago

      Can you offer some explanation as to why Forgejo and Tangled over Gitlab or Gitea?

      I personally use Gitea, so I'd appreciate some additional information.

      • tenacious_tuna an hour ago

        I'm not OP, but: Forgejo is much more lightweight than GitLab for my use case, and it was cited as a more maintained version of Gitea, but that's just anecdote from my brain and I don't have sources, so take that with a truckload of salt.

        I'd had a gitea instance before and it was appealing insofar as having the ability to mirror from or to a public repo, it had docker container registry capability, it ties into oauth, etc; I'm sure gitlab has much/all of that too, but forgejo's tiny, tiny footprint was very appealing for my resource-constrained selfhosted environment.

      • rhdunn 4 hours ago

        From [1] "Forgejo was created in October 2022 after a for profit company took over the Gitea project."

        Forgejo became a hard fork in 2024, with both projects diverging. If you're using it for local hosting I don't personally see much of a difference between them, although that may change as the two projects evolve.

        [1] https://forgejo.org/compare-to-gitea/

      • xigoi 4 hours ago

        GitLab is slow as fuck and the UI is cluttered with corporate nonsense.

feverzsj 5 hours ago

Seems Microsoft is going downhill after all in AI.

  • oxag3n 4 hours ago

    It's already there - most CS students have second-hand experience with MS products.

  • gtowey 2 hours ago

    It's all been downhill at Microsoft since windows 3.1

  • behnamoh 5 hours ago

    I'm fine with that!

mikert89 6 hours ago

pretty clear that companies like microsoft are actually terrible at engineering, their core products were built 30 years ago. any changes now are generally extremely incremental and quickly rolled back when there's an issue. trying to innovate at github shows just how bad they are.

  • shimman 6 hours ago

    It's not just MSFT, it's all of big tech. They basically run as a cartel, destroy competition through illegal means, engage in regulatory capture, and ensure their fiefdoms reign supreme.

    All the more reason why they should be sliced and diced into oblivion.

    • mikert89 5 hours ago

      yeah i have worked at a few FAANG, honestly stunning how entrenched and bad some of the products are. internally, they are completely incapable of making any meaningful product changes, the whole thing will break

    • jpalawaga 5 hours ago

      to be fair, git is one of the most easily replaced pieces of tech.

      just add a new git remote and push. less so for issues and pulls, but at least your dev team/ci doesn't end up blocked.

  • swiftcoder 5 hours ago

    It's a general curse of anything that becomes successful at a BigCorp.

    The engineers who built the early versions were folks at the top of their field, and compensated accordingly. Those folks have long since moved on, and the whole thing is maintained by a mix of newcomers and whichever old hands didn't manage to promote out, while the PMs shuffle the UX to justify everyone's salary...

    • mikert89 5 hours ago

      im not even sure id say they were "top", id more just say its a different type of engineer, that either doesnt get promoted to a big impact role at a place like microsoft, or leaves on their own.

jnhbgvjkb 11 minutes ago

If you were looking for a signal to leave github, then this is it.

JamesTRexx 5 hours ago

Sorry, my fault. I tried to download a couple of CppCon presentations from their stash. Should have known better than to touch anything C++. ducks

  • alexeiz 3 hours ago

    There are new slides? There goes the rest of my work day.

mentalgear 6 hours ago

Seems like MS copilot is vibe-ing it again! Some other major cloud provider outages come to mind that never happened before the "vibe" era.

romshark 5 hours ago

GitHub is slowly turning into the Deutsche Bahn of git providers.

ecshafer 6 hours ago

Well, it's a day that ends in Y.

Github is down so often now, especially actions, I am not sure how so many companies are still relying on them.

  • bigfishrunning 5 hours ago

    Migration costs are a thing

    • Zambyte 5 hours ago

      So are the costs of downtime.

tapoxi 6 hours ago

Is it really that much better than alternatives to justify these constant outages?

  • dsagent 6 hours ago

    We're starting to have that convo in our org. This is just getting worse and worse for Github.

    Hosting .git is not that complicated of a problem in isolation.

  • bigfishrunning 5 hours ago

    No, but it has momentum left over from when it was much better. The Microsoft downslide will continue until there's no one left

  • shimman 2 hours ago

    Yes, for personal projects I just self-host an instance of forgejo with dokploy. Everything else I deploy on codeberg, which is also an instance of forgejo.

  • jeltz 5 hours ago

    Not any longer. It used to but the outages have become very common. I am thinking about moving all my personal stuff to Codeberg.

  • azangru 5 hours ago

    I love its UI (apart from its slowness, of course). I find it much cleaner than Gitlab's.

  • tacker2000 5 hours ago

    I've been using Bitbucket for years with no issues.

    • onraglanroad 4 hours ago

      The great advantage of Bitbucket is that it's so painfully slow you can't tell if it's down or not.

  • riffic 6 hours ago

    self-host your own services. There are a lot of alternatives to GitHub.

trollbridge an hour ago

Fortunately, git is quite resilient and you can work offline and even do pull requests with your peers without GitHub.

twistedpair 3 hours ago

In the age of Claude Code et al, my honest biggest bottleneck is GH downtime. I've got a dozen PRs I'm working on, but it's all frozen up, daily, with GH outages.

Are the other providers (GitLab, CircleCI, Harness) offering much better uptime? Saying this as someone that's been GH-exclusive since 2010.

danelski 6 hours ago

I wonder what the value is of having a dedicated X (formerly Twitter) status account post-2023, when people without an account will see a mix of entries from 2018, 2024, and 2020 in no particular order upon opening it. Is it just there so everyone can quickly share their post announcing they're back?

byte_surgeon 5 hours ago

Just remove all that copilot nonsense and focus on uptime... I would like to push some code.

porise 6 hours ago

Take it away from Microsoft. Not sure how this isn't an antitrust issue anyway.

  • burningChrome 6 hours ago

    At their core, antitrust cases are about monopolies and how companies use anti-competitive conduct to maintain their monopoly.

    Github isn't the only source control software in the market. Unless they're doing something obvious and nefarious, its doubtful the justice department will step in when you can simply choose one of many others like Bitbucket, Sourcetree, Gitlab, SVN, CVS, Fossil, DARCS, or Bazaar.

    There's just too much competition in the market right now for the govt to do anything.

    • datsci_est_2015 4 hours ago

      Minimal changes have occurred to the concept of “antitrust” since its inception as a form of societal justice against corporations, at least per my understanding.

      I doubt policymakers in the early 1900s could have predicted the impact of technology and globalization on the corporate landscape, especially vis a vis “vertical integration”.

      Personally, I think vertical integration is a pretty big blind spot in laws and policies that are meant to ensure that consumers are not negatively impacted by anticompetitive corporate practices. Sure, "competition" may exist, but the market activity often shifts meaningfully in a direction that is harmful to consumers once the biggest players swallow another piece of the supply chain (or product concept), and not just their competitors.

      • ianburrell 3 hours ago

        There was a change in the enforcement of antitrust law in the 1970s. Consumer welfare, which came to mean lower prices, is the standard. Effectively normal competition is fine and takes egregious behavior to be violation. It even assumes that big companies are more efficient which makes up for lack of competition.

        The other change is reluctance to break up companies. AT&T break up was big deal. Microsoft survived being broken up in its antitrust trial. Tech companies can only be broken up vertically, but maybe the forced competition would be enough.

    • mrweasel 4 hours ago

      Picking something other than Github may also have the positive effect that you're less of a target for drive by AI patches.

    • porise 6 hours ago

      Can they use Github to their advantage to maintain a monopoly if they are nefarious? Think about it.

      • afavour 5 hours ago

        Unfortunately the question is "have they", not "can they".

    • 01HNNWZ0MV43FF 5 hours ago

      > you can simply choose one of many others

      Not really. It's a network effect, like Facebook. Value scales quadratically with the number of users, because nobody wants to "have to check two apps".

      We should buy out monopolies like the Chinese government does. If you corner the market, then you get a little payout and a "You beat capitalism! Play again?" prize. Other companies can still compete but the customers will get a nice state-funded high-quality option forever.

      • StilesCrisis 4 hours ago

        Forever, for sure, definitely. State sponsored projects are never subject to the whims of uninformed outsiders.

  • palata 4 hours ago

    > Not sure how this isn't an antitrust issue anyway.

    Simple: the US stopped caring about antitrust decades ago.

  • brendanfinan 6 hours ago

    It's not an antitrust issue because antitrust laws aren't enforced in the U.S.

  • kgwxd 6 hours ago

    That's on every individual that decided to "give it" to Microsoft. Git was made precisely to make this problem go away.

    • cedws 5 hours ago

      Git is like 10% of building software.

      • kgwxd 3 hours ago

        If GitHub is doing 90% more than Git does, "GitHub" is a terrible name for it.

  • that_guy_iain 6 hours ago

    Not sure how having downtime is an anti-competition issue. I'm also not sure how you think you can take things away from people? Do you think someone just gave them GitHub and can now take it away? Who are you expecting to take it away? Also, does your system have 100% uptime?

    • porise 6 hours ago

      Companies used to be forced to sell parts of their business when antitrust was involved. The issue isn't the downtime, they should never have been allowed to own this in the first place.

      There was just a recent case with Google to decide if they would have to sell Chrome. Of course the Judge ruled no. Nowadays you can have a monopoly in 20 adjacent industries and the courts will say it's fine.

      • that_guy_iain 4 hours ago

        You've been banging on about this for a while, I think this is my third time responding to one of your accounts. There is no antitrust issue, how are they messing with other competitors? You never back up your reasoning. How many accounts do you have active since I bet all the downvotes are from you?

        • porise 4 hours ago

          I've had two accounts. I changed because I don't like the history (maybe one other person has the same opinion I did?). Anyways it's pretty obvious why this is an issue. Microsoft has a historical issue with being brutal to competition. There is no oversight as to what they do with the private data on GitHub. It's absolutely an antitrust issue. Do you need more reasoning?

          • that_guy_iain 3 hours ago

            Didn't you just privately tell me it was 4 accounts? Maybe that was someone else hating on Windows 95. But you need an active reason, not what they did 20 years ago.

            • porise 3 hours ago

              Nope. If someone did that it should be reported if it's against the rules here.

  • alimbada 6 hours ago

    Do you also post "Take it away from $OWNER" every time your open source software breaks?

    • otikik 6 hours ago

      If he posted every time GitHub broke, he would certainly have posted a bunch of times.

    • porise 6 hours ago

      What antitrust issue does my open source software have?

      • alimbada 6 hours ago

        What does antitrust have to do with the GitHub services downtime?

        • abdullahkhalids 5 hours ago

          The more stable/secure a monopoly is in its position the less incentive it has to deliver high quality services.

          If a company can build a monopoly (or oligopoly) in multiple markets, it can then use these monopolies to build stability for them all. For example, Google uses ads on the Google Search homepage to build a browser near-monopoly and uses Chrome to push people to use Google Search homepage. Both markets have to be attacked simultaneously by competitors to have a fighting chance.

        • fsflover 5 hours ago

          It regularly breaks the workflow for thousands of FLOSS projects.

ZpJuUuNaQ5 6 hours ago

It's a funny coincidence - I pushed a commit adding a link to an image in the README.md, opened the repo page, clicked on the said image, and got the unicorn page. The site did not load anymore after that.

petterroea 4 hours ago

When I was a summer intern 10 years ago, I remember there without fail always being a day when GitHub was down, every summer. Good times.

1vuio0pswjnm7 4 hours ago

I am able to access github.com at 140.82.112.3 no problem

I am able to access api.github.com at 20.205.243.168 no problem

No problem with githubusercontent.com either

CamT 5 hours ago

It feels like GitHub's shift to these "AI writes code for you while you sleep!" features will appeal to a less technical crowd that lacks awareness of the broader source code hosting and CI ecosystem. Combined with their operational incompetence of late (calling it how I see it), that will see their dominance as the default source code solution for folks maintaining production software fade away.

Hopefully the hobbyists are willing to shell out for tokens as much as they expect.

koreth1 5 hours ago

The biggest thing tying my team to GitHub right now is that we use Graphite to manage stacked diffs, and as far as I can tell, Graphite doesn't support anything but GitHub. What other tools are people using for stacked-diff workflows (especially code review)?

Gerrit is the other option I'm aware of but it seems like it might require significant work to administer.

  • satya71 5 hours ago

    I use git town. Fits my brain a lot better.

zurfer 6 hours ago

To be fair, I think usage has increased a lot because of coding agents, and some things that worked well until now can't scale to the next 10x level.

  • jcdcflo 5 hours ago

    Maybe they need to sort things out for people who pay through the nose for it cause I ain't comforted by vibe coders slowing us down.

BhavdeepSethi 5 hours ago

I wonder if GH charges for the runners during their downtime. Last week a lot of them would retry multiple times and then fail.

0xbadcafebee 5 hours ago

List of company-friendly managed-host alternatives? SSO, auditing, user management, billing controls, etc?

I would love to pay Codeberg for managed hosting + support. GitLab is an ugly overcomplicated behemoth... Gitea offers "enterprise" plans but do they have all the needed corporate features? Bitbucket is a joke, never going back to that.

alfanick 6 hours ago

Oh! It's not my GitLab@Hetzner that's not working, it's GitHub. Just when I decided to opensource my project.

  • rvz 5 hours ago

    Well done for self-hosting.

jablongo 3 hours ago

It looks like one of my employees got her whole account deleted or banned without warning during this outage. Hopefully this is resolved as service returns.

bovermyer 4 hours ago

Meanwhile, Codeberg and Worktree are both online and humming along.

Codeberg gets hit by a fair few attacks every year, but they're doing pretty well, given their resources.

I am _really_ enjoying Worktree so far.

  • rmunn 4 hours ago

    For anyone else having trouble finding Worktree's site because you keep getting "how to use git-worktree" results, it's https://worktree.ca/

    • bovermyer 4 hours ago

      Sorry! I should have included a link, since it's relatively unknown.

Culonavirus 6 hours ago

Azure Screen of Death?

  • esafak 5 hours ago

    Kids don't even know this. Lucky them.

    • coffeebeqn 4 hours ago

      They will soon, given MS's direction

aqme28 5 hours ago

The saddest part to me is that their status update page and twitter are both out of date. I get a full 500 on github.com and yet all I see on their status page is an "incident with pull requests" and "copilot policy propagation delays."

altern8 31 minutes ago

One reason for the drop in the overall uptime number could be that over time they add more and more services that can go down and affect the stats.

Just saying.

Tade0 5 hours ago

I don't know if it's related, but for the past week I've been getting pages cut off at some point, as if something closed the connection mid-transfer.

Today, when I was trying to see the contribution timeline of one project, it didn't render.

bigbuppo 5 hours ago

On the plus side, it's git, so developers can at least get back to work without too much hassle as long as they don't need the CI/CD side of things immediately.
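
For what it's worth, a minimal sketch of riding out an outage with plain git (the branch and file names below are made up): keep committing locally, and a bundle lets you hand commits to a teammate with no hosting service in the middle.

    # keep committing locally; nothing below touches github.com
    git add -A
    git commit -m "WIP during the outage"

    # bundle up everything origin doesn't have yet
    git bundle create outage.bundle origin/main..feature/foo

    # teammate side: a bundle file can stand in for a remote
    git bundle verify outage.bundle
    git fetch outage.bundle feature/foo:outage-review

    # once GitHub is back, push as usual
    git push origin feature/foo

This assumes the teammate already has the commits behind origin/main, which is the usual case on an active repo.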

ilikerashers 6 hours ago

Yeap, getting this for the last 20 minutes. Everything green on their status pages.

edverma2 5 hours ago

Anyone have alternatives to recommend? We will be switching after this. Already moved to self-hosted action runners and we are early-stage so switching cost is fairly low.

  • akshitgaur2005 5 hours ago

    Codeberg, if your product/project is open source, otherwise try out Tangled.org and Radicle!!

    Radicle is the most exciting out of these, imo!

ascendantlogic 5 hours ago

So what's the moneyline on all these outages being the result of vibe-coded LLM-as-software-engineer/LLM-as-platform-engineer executive cost cutting mandates?

huntertwo 6 hours ago

Microslop strikes again!

tigerlily 6 hours ago

So, what're people's alt stacks for replacing GitHub?

  • nostrapollo 5 hours ago

    We're mirroring to Gitea + Jenkins.

    It's definitely some extra devops time, but claude code makes it easy to get over the config hurdles.
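
    For anyone curious, the mirroring side is basically a bare clone plus a scheduled push; a rough sketch, where the org/repo names and the Gitea URL are placeholders:

        # one-time: take a bare mirror clone of the GitHub repo
        git clone --mirror https://github.com/org/repo.git
        cd repo.git
        git remote add gitea https://gitea.example.com/org/repo.git

        # repeat on a schedule: refresh from GitHub, then push everything to Gitea
        git fetch --prune origin
        git push --mirror gitea

    Run the last two commands from a cron job or a CI step and the mirror stays current; Gitea can also pull-mirror from GitHub on a schedule if you'd rather configure it on that end.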

  • throw_m239339 5 hours ago

    Wait a minute, isn't Git supposed to be... distributed?

    • swiftcoder 5 hours ago

      Yeah, but things with "Hub" in their name don't tend to be very distributed

      • esafak 5 hours ago

        Thanks for underscoring the beautiful oxymoron.

    • arcologies1985 5 hours ago

      Issues, CI, and downloads for built binaries aren't part of vanilla Git. CI in particular can be hard if you maintain a multi-platform project and don't want to have to buy a new Mac every few years.

      • swiftcoder 5 hours ago

        Probably worth taking an honest look at whether your CI could just be an SQS queue and a Mac mini running under your desk, though
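
        A rough sketch of the shape I mean, assuming the queue already exists and the box has the aws CLI and jq installed (the queue URL and the test command are placeholders):

            #!/bin/sh
            # poll the job queue; each message body is a commit SHA to test
            QUEUE_URL="https://sqs.us-east-1.amazonaws.com/123456789012/ci-jobs"
            while true; do
              msg=$(aws sqs receive-message --queue-url "$QUEUE_URL" --wait-time-seconds 20)
              [ -z "$msg" ] && continue
              sha=$(echo "$msg" | jq -r '.Messages[0].Body')
              handle=$(echo "$msg" | jq -r '.Messages[0].ReceiptHandle')
              git fetch origin && git checkout "$sha" && make test   # your build/test step
              aws sqs delete-message --queue-url "$QUEUE_URL" --receipt-handle "$handle"
            done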

        • arcologies1985 4 hours ago

          For my OSS work that is about $699 over my budget

          • swiftcoder 3 hours ago

            Yeah, fair enough (though you can often pick up an M1 Mini for <$300 these days)

kachapopopow 4 hours ago

I made this joke 10 hours ago: "I wonder if you opened https://github.com/claude in like 1000's of browsers / unique ips would it bring down github since it does seem to try until timeout"

Coincidence? I think not!

thewhitetulip 6 hours ago

Has anyone noticed that in the past year we have seen a LOT of outages?

  • thesmart 6 hours ago

    Yes. Feels like every other week.

    • thewhitetulip 5 hours ago

      That goes against all the gushing posts about how AI is great. I use all the frontier models, and sure, they're a bit helpful.

      But I don't understand: if they're that good, why are we getting an outage every other week? AWS had an outage unresolved for about 9+ hours!

davidfekke 4 hours ago

I guess Bill Gates has a virus.

canterburry 4 hours ago

I wonder if the incident root cause analysis will point to vibe coding?

rileymichael 5 hours ago

The incident has now expanded to include webhooks, git operations, Actions, general page loads and API requests, issues, and pull requests. They're effectively down hard.

Hopefully it's down all day. We need more incidents like this to happen for people to get a glimpse of the future.

  • swiftcoder 5 hours ago

    And hey, it's about the best excuse for not getting work done that I can think of

yoyohello13 5 hours ago

Azure infra rock solid as always.

elcapitan 6 hours ago

Maybe we should post when it's up

parvardegr 5 hours ago

Damn, I was also trying to push and deploy a critical bug fix that was needed within minutes.

  • rvz 5 hours ago

    Well unfortunately, you have to wait for GitHub to get back online to push that critical bug fix. If that were me, I would find that unacceptable.

    Self hosting would be a better alternative, as I said 5 years ago. [0]

    [0] https://news.ycombinator.com/item?id=22867803

an0malous 6 hours ago

I think this is an indicator of a broader trend where tech companies put less value on quality and stability and more value on shipping new features. It’s basically the enshittification of tech

GeneralGrevous 3 hours ago

Fix this or I will send my droid army. #greenpurplelifesmatter #Imcocoforcocoapuffs #ihatejedi

jcdcflo 5 hours ago

We replaced everything except the git part because of reliability issues. Pages… gone. Actions… gone. KB… gone. Tickets… gone.

Maybe they need to get more humans involved, because GitHub has been down at least once a week for a while now.

zingerlio 5 hours ago

I was wondering why my AUR packages won’t update, just my luck.

CodingJeebus 6 hours ago

Do they publish proper post-mortems? I feel like that's gotta be the bare minimum nowadays for such critical digital infrastructure.

The new-fangled copilot/agentic stuff I do read about on HN is meaningless to me if the core competency is lost here.

semiinfinitely 5 hours ago

They put too much AI in it, not enough engineering rigor

EToS 5 hours ago

sorry all, i took a month off and then opened github.com

nusaru 5 hours ago

I look forward to the day that jjhub becomes available...

peab 6 hours ago

Is it just me, or are critical services like GitHub, AWS, Google, etc., down more often than they used to be these days?

gpmcadam 6 hours ago

> Monday

Beyond a meme at this point

simianwords 4 hours ago

Monolith looking like a good idea now?

unboxingelf 5 hours ago

1 engineer, 1 month, 1 million lines of code.

dbingham 5 hours ago

Github's two biggest selling points were its feature set (Pull Requests, Actions) and its reliability.

With the latter no longer a thing, and with so many other people building on Github's innovations, I'm starting to seriously consider alternatives. Not something I would have said in the past, but when Github's outages start to seriously affect my ability to do my own work, I can no longer justify continuing to use them.

Github needs to get its shit together. You can draw a pretty clear line between Microsoft deciding it was all in on AI and the decline in Github's service quality. So I would argue that for Github to get its shit back together, it needs to ditch the AI and focus on high quality engineering.

thinkindie 6 hours ago

it's Monday therefore Github is down.

arnvald 5 hours ago

GitHub is the new Internet Explorer 6. A Microsoft product so dominant in its category that it's going to hold everyone back for years to come.

Just when open source development has to deal with the biggest shift in years and maintainers need a tool that will help them fight the AI slop and maintain the software quality, GitHub not only can't keep up with the new requirements, they struggle to keep their product running reliably.

Paying customers will start moving off to GitLab and other alternatives, but GitHub is so dominant in open source that maintainers won't move anywhere; they'll just keep burning out more than before.

wrxd 6 hours ago

Copilot, what have you done again?

seneca 5 hours ago

GitHub has a long history of being extremely unstable. Several years ago they were down all the time, much like recently. They seemed to stabilize quite a bit around the MS acquisition era, and now seem to be returning to their old instability patterns.

ilovefrog 2 hours ago

When I'm on a W-2 this is good, but when I'm contracting this is bad

hit8run 6 hours ago

They should have just scaled a proper Rails monolith instead of this React/Java/whatever mixed mess. But hey, probably Microslop is vibe-coding everything to Rust now!

  • edoceo 5 hours ago

    Team is doing resume-driven development

blibble 6 hours ago

presumably slophub's now dogfooding GitHub Agentic Workflows?

sama004 6 hours ago

3 incidents in feb already lmao

thesmart 6 hours ago

Can we please demand that Github provide mirror APIs to competitors? We're just asking for an extinction-level event. "Oops, our AI deleted the world's open source."

Any public source code hosting service should be able to subscribe to public repo changes. It belongs to the authors, not to Microsoft.

  • munk-a 5 hours ago

    The history of tickets and PRs would be a major loss - but a beauty of git is that if at least one dev has the repo checked out then you can easily rehost the code history.
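
    As a sketch, rehosting from any dev's clone is just pointing the remote somewhere new and pushing everything (the URL below is a stand-in for whichever host you pick):

        # point the existing clone at a new, empty repo elsewhere
        git remote set-url origin git@newhost.example.com:org/repo.git

        # push every branch and tag so the full code history is rehosted
        git push --all origin
        git push --tags origin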

    • 1313ed01 5 hours ago

      It would be nice to have some sort of widespread standard for doing issue tracking, reviews, and CI in the repo, synced with the repo to all its clones (and fully from version-managed text files and scripts), rather than in external, centralized web tools.

  • small_model 5 hours ago

    Every repo usually has at least one local copy somewhere; worst case, a few old repos disappear.

  • _flux 5 hours ago

    Making it even easier to snipe accidentally committed credentials?

  • kgwxd 5 hours ago

    No, we can't. Hence Git. Use it the right way, or prepare for the fallout. Anyone looking for a good way to prepare for that, I suggest Git.

thesmart 6 hours ago

It's really pathetic for however many trillions MSFT is valued at.

If we had a government worth anything, they ought to pass a law requiring that competitors be provided mirror APIs so the entire world isn't shut off from source code for a day. We're just asking for a worldwide disaster.

guluarte 4 hours ago

vibe coding too much?

ChrisArchitect 5 hours ago

Related incidents:

Incident with Pull Requests https://www.githubstatus.com/incidents/smf24rvl67v9

Copilot Policy Propagation Delays https://www.githubstatus.com/incidents/t5qmhtg29933

Incident with Actions https://www.githubstatus.com/incidents/tkz0ptx49rl0

Degraded performance for Copilot Coding Agent https://www.githubstatus.com/incidents/qrlc0jjgw517

Degraded Performance in Webhooks API and UI, Pull Requests https://www.githubstatus.com/incidents/ffz2k716tlhx

gamblor956 an hour ago

MS is now all in on agentic coding.

Github stability worse than ever. Windows 11 and Office stability worse than ever. Features that were useful for decades on computers with low resources are now "too hard" to implement.

Coincidence?

Hamuko 5 hours ago

I get the feeling that most of these GitHub downtimes are during US working hours, since I don't remember being impacted by them during work. Only noticed it now as I was looking up a repo in my free time.

iamleppert 6 hours ago

Good thing we have LLM agents now. Before, this kind of behavior was tolerable. Now it's pretty easy to switch over to other providers. The threat of "but it will take them a lot of effort to switch to someone else" is getting weaker every day.

  • camdenreslink 5 hours ago

    Are we sure LLM agents aren't the cause of these increasing outages?

GeneralGrevous 3 hours ago

fix it or I will send robot to your house blud #greenpurplelifesmade #Imcocoforcocoapuffs

ruined 6 hours ago

tangled is up B]

retinaros 6 hours ago

migrating to azure kills businesses

dmix 5 hours ago

Welcome to Microsoft Github

DetroitThrow 6 hours ago

GitHub downtime is going from once a month (unacceptable) to twice a month (what the fuck?)

iamsyr 6 hours ago

The next name after Cloudflare

charles_f 5 hours ago

That pink "Unicorn!" joke is something that should be reconsidered. When your services are down you're probably causing a lot of people a lot of stress ; I don't think it's the time to be cute and funny about it.

EDIT: my bad, seems to be their server's name.

  • frou_dh 5 hours ago

    One of Reddit's cutesy error pages (presumably the one for Internal Server Error or similar) is an illustration that says "You broke reddit". I know it's a joke, but I have wondered what effect that might have on a particularly anxiety-prone person who takes it literally and thinks they've done something that's taken the site down and inconvenienced millions of other people. Seems a bit dodgy for a mainstream site to assume all of its users have the dev knowledge to identify a joking accusation.

  • demothrowaway 5 hours ago

    Even if it is their server name, I completely agree with your point. The image is not appropriate when your multi-billion revenue service is yet again failing to meet even a basic level of reliability, preventing people from doing their jobs and generally causing stress and bad feeling all round.

  • jeltz 5 hours ago

    I am personally totally fine with it, but I see your point. GitHub is a bit too big to be breaking this often with a cutesy error message, even if it is a reference to their web server.

  • Brian_K_White 5 hours ago

    That stupid "Aww, Snap!" message I think it's one of the browsers does.