ApolloRising 9 hours ago

As long as everyone is here, have you seen the token usage just go up remarkably recently for the $100 plan? it lasts a lot less time than it used to recently. Might be related to recent releases of claude.

  • gambiting 9 hours ago

    I'm on the basic £18/month plan, and with Sonnet 4.6 I literally get 20, maybe 30 minutes of use out of it per day. It's borderline useless now. I was using it for some Home Assistant changes yesterday and it used up my entire daily allowance after 8 prompts.

    • ManlyBread 9 hours ago

      I guess 2026 is the last year AI is widely available to anyone who isn't willing to shell out hundreds if not thousands for a monthly subscription. All that's left is to thank the investors for the free ride LOL

      • andrewinardeer 9 hours ago

        Good luck jacking prices up too high with new open models flying around daily.

        • baq 8 hours ago

          good luck getting mythos/spud quality models open

      • _delirium 9 hours ago

        Hard to square that with how good open-weights models are getting? I'm doing stuff with Qwen3.5-4b that required a frontier hosted model less than a year ago.

        • baq 8 hours ago

          the problem is you're still a year behind with this approach, and it isn't at all clear locally hosted models can close the gap. need more turboquant-like algorithmic boosts for this to happen.

      • brador 5 hours ago

        For a second there we felt the future. And now it’s gone.

  • billynomates 9 hours ago

    No, in fact I'm growing increasingly suspicious of messages I see like this all over the socials.

    I am using Claude constantly, multiple agents, around 8-10hrs a day, 5 or 6 days a week, and I'm never anywhere near my limit.

    • N_Lens 9 hours ago

      I suspect Anthropic flags accounts in their backend and different people are getting different limits. What criteria they flag with, I am not sure.

      • dgb23 9 hours ago

        I would try to trim this suspicion with both Occam's and Hanlon's razors.

    • oefrha 9 hours ago

      Unless you’re somehow on a different quota system, or maybe using Haiku, there’s no way you can sustain five continuous hours of parallel agents running without hitting the 5h quota limit, even on the 20x max plan. But maybe your company is flagged as VIP or something.

  • figmert 9 hours ago

    I'm on the Max 20 plan, and yes, it's the same for me. Up until the week before last it lasted me all week, but now it's Wednesday and it's already at 40% usage.

  • oefrha 9 hours ago

    Yes, it’s extremely obvious. The recent “we give you $100/$200 extra credit for a month” is clearly just “you’re supposed to pay extra for the same usage from now on” dressed up as a “bonus”, just like giving “bonus” usage off-peak shortly before announcing a faster burn rate during peak.

    And the recent “Investigating usage limits hitting faster than expected” [1] is probably them intentionally gauging how much they can push it without too much of an uproar.

    [1] https://www.reddit.com/r/ClaudeAI/comments/1s7zgj0/investiga...

    • copperx 9 hours ago

      Anthropic bonuses seem to be code for "your usage limits are going down soon."

  • anonzzzies 9 hours ago

    I read this on Reddit daily; we have usage monitoring running and collect all the stats, and we have seen no difference at all. Maybe they're split testing or something?

    • ramon156 9 hours ago

      Could you elaborate on what these usage monitors look like? I collect data locally and can easily show that cost per token has gone up in some of my sessions.

      • anonzzzies 8 hours ago

        All our people run a cron script which counts token use (from the jsonl session logs) and runs a scripted CLI /usage (sending keyboard input to the running Claude Code), then sends the results to a central system where we can see them. We see no real changes on any of the accounts, individually or averaged. I should note that we only use Sonnet 4.6; Opus always ran over limits unless continuously monitored, so we switched over to Sonnet when it came out, and Opus is useless to us for that reason.
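
        A hedged sketch of that kind of counter: the script below sums per-key token counts from Claude Code's .jsonl session logs. The log location (~/.claude/projects) and the message.usage field names are assumptions about an undocumented schema, so verify them against your own log files before trusting the numbers.

```python
import glob
import json
import os
from collections import Counter

# Assumed usage keys; check your own .jsonl records before relying on these.
USAGE_KEYS = (
    "input_tokens",
    "output_tokens",
    "cache_creation_input_tokens",
    "cache_read_input_tokens",
)


def sum_token_usage(paths):
    """Sum per-key token counts across a list of .jsonl session logs."""
    totals = Counter()
    for path in paths:
        with open(path) as fh:
            for line in fh:
                line = line.strip()
                if not line:
                    continue
                try:
                    record = json.loads(line)
                except json.JSONDecodeError:
                    continue  # skip partial or corrupt lines
                # Assumed layout: {"message": {"usage": {...token counts...}}}
                usage = (record.get("message") or {}).get("usage") or {}
                for key in USAGE_KEYS:
                    totals[key] += int(usage.get(key, 0))
    return totals


if __name__ == "__main__":
    # Assumed default log location for Claude Code sessions.
    logs = glob.glob(os.path.expanduser("~/.claude/projects/**/*.jsonl"),
                     recursive=True)
    print(dict(sum_token_usage(logs)))
```

        Pointed at a directory of session logs, it prints a dict of totals that a cron job could ship to whatever central system you aggregate in.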

  • risyachka 8 hours ago

    Like any company, they will squeeze usage as much as they possibly can. There's a real chance prices end up at $1k+, so that only enterprises can afford coding subs. Those who have ROI will pay for it.

    Current phase of usage/pricing is just testing the waters. Especially considering they are the market leader in this category.

  • PaulMest 8 hours ago

    Maybe you're experiencing normal usage rates now that the 2x March promotion is over?

    > From March 13, 2026 through March 28, 2026, your five-hour usage is doubled during off-peak hours (outside 8 AM-2 PM ET / 5-11 AM PT / 12-6 PM GMT) on weekdays. Usage remains unchanged from 8 AM-2 PM ET / 5-11 AM PT / 12-6 PM GMT on weekdays. Source: https://support.claude.com/en/articles/14063676-claude-march...

  • joshdev 4 hours ago

    We discovered a bug in AWS Bedrock that is double counting cache writes when thinking/reasoning is enabled for the Anthropic models. It’s not clear to me if this is limited to just AWS Bedrock or all providers. AWS Support is aware.

    We’ve also observed a much higher cache miss rate in the past few weeks. Combine both together and your usage consumption can be greatly increased.
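
    If you want to check whether a cache miss shift like that is hitting your own bill, one hedged approach is to compute the fraction of input tokens served from cache per billing window. A minimal sketch; the key names mirror the Anthropic API's usage block, but treat the exact schema as an assumption and map your provider's fields onto it:

```python
def cache_hit_rate(usage_records):
    """Fraction of input tokens served from the prompt cache.

    Each record is a dict of token counts per request. The key names
    (cache_read_input_tokens, cache_creation_input_tokens, input_tokens)
    are assumed to match the API's usage block; verify against your data.
    """
    read = sum(r.get("cache_read_input_tokens", 0) for r in usage_records)
    written = sum(r.get("cache_creation_input_tokens", 0) for r in usage_records)
    fresh = sum(r.get("input_tokens", 0) for r in usage_records)
    denom = read + written + fresh
    # Avoid division by zero on an empty window.
    return read / denom if denom else 0.0
```

    A ratio that falls over successive windows would corroborate the higher miss rate described above; combined with double-counted cache writes, the billed totals could climb even with identical prompts.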

tao_oat 9 hours ago

A bit surprised by the snarky comments here -- I also want Claude to work reliably but very few (no?) companies have ever seen this level of rapid growth. We're going to go through a long fail-whale-style period and I can imagine very, very few companies that could avoid that.

  • rvz 9 hours ago

    How can Claude work reliably if Claude keeps going on vacation for several hours?

    Maybe it is recovering from the weekend a few days ago, but wanted to take an extra day off like it did on Monday, hence the "outage".

    • ben_w 9 hours ago

      > How can Claude work reliably if Claude keeps going on vacation for several hours?

      Not that I wish to anthropomorphise it in this answer, but businesses have managed just fine when humans do this for "lunch breaks" and "going home for the evening to sleep".

      (And even mandatory meetings which should have been emails).

      • sassymuffinz 7 hours ago

        Thing is, if Dave the programmer goes on vacation or calls in sick for the day, hopefully you have a larger team to fall back on and your business doesn't grind to a halt.

        No one is apparently noticing that if they build their entire business model around AI being a certain price and availability they're essentially building one giant point of failure into their productivity.

        What if the price shoots up 10x, or Claude goes down for a day, or he's occasionally drunk (hallucinating)? Reliability is sometimes a more important facet of business than raw speed and productivity.

  • yakattak 9 hours ago

    They’re asking for $100+/mo for the plans that are actually usable at scale. If I’m paying that much I have very high expectations.

    There’s also the fact that they’re known for dogfooding heavily, I imagine that contributes to it a lot.

    • forrestthewoods 9 hours ago

      > They’re asking for $100+/mo for the plans that are actually usable at scale. If I’m paying that much I have very high expectations.

      They’re losing money on you at that price point.

      Or more precisely you’re paying for it by giving them training data.

      • omega3 8 hours ago

        I'm not convinced; Kimi 2.5, GLM 5.1, and Minimax M2.7 are all a fraction of the price and still make money on inference.

    • baq 9 hours ago

      they can adjust prices or you can adjust your expectations.

      ...and let's be realistic, it'll be both.

    • stingraycharles 9 hours ago

      > They’re asking for $100+/mo for the plans that are actually usable at scale. If I’m paying that much I have very high expectations.

      If you think $100 is that much and get very high expectations from it, you're not the target customer. You're a loss leader to Anthropic, and the fact that you don't see that / still have high expectations means your expectations are unrealistic.

      • dijit 9 hours ago

        $100/m for SaaS is very steep.

        For an entire productivity suite including mail, meetings and terabytes of backed up redundant storage with nearly no bandwidth limitations it's like $35/m for even the most expensive option.

        • tao_oat 9 hours ago

          Those products have very low COGS in comparison to this.

        • baq 9 hours ago

          you could arguably ditch the productivity suite and a few other 'essential' subscriptions to make room for this one, except the price point will get enshittified to hell in the coming months and years.

        • bredren 9 hours ago

          Claude Code and Codex are not SaaS products in the traditional sense.

        • stingraycharles 4 hours ago

          Comparing state of the art LLMs with Office365 / Gsuite is like comparing renting a datacenter vs an airbnb. Entirely different things.

          Compare it to hosting models locally, that would be more apt. Or renting GPUs from cloud vendors.

          • dijit 4 hours ago

            It’s a SaaS, and the most expensive SaaS available.

            If you’re saying an LLM provides more value than the office productivity suites, mail platforms, and meeting platforms which run essentially the planet, then I’m afraid you have drunk the kool-aid.

            If you’re evaluating software licenses you have to weigh price against value. There can be value in these LLMs, but it’s not 3x the productivity of mail + spreadsheets + live meetings + presentations + word processing + file sharing.

            It’s just not.

            • stingraycharles 3 hours ago

              If you really think the value add by LLMs is comparable to email and calendar, I don’t understand why you don’t understand my point that you’re not the customer Anthropic cares about.

              • dijit 3 hours ago

                Your point is stupid.

                The cost of offering a service and the cost of buying a service are correlated, but the former is not the customer’s problem.

                If you are the most expensive SaaS on a docket sheet and also the least reliable, you had better be delivering some serious value when you’re up; otherwise customers won’t depend on you and you’ll be the first one out.

                Nobody wants to pay premium prices for things they can’t depend on. If you can’t understand that, then you need to stop offloading critical thinking to your AI tools, because your mind needs the exercise.

      • dgb23 8 hours ago

        I think the bigger point is that the price tag is simply not competitive, especially given all of the issues, downsides and dangers.

        Whether Anthropic makes money from the $100 subscriptions or not, is their problem.

    • tristanj 9 hours ago

      Those subscription plans are a loss leader. Anthropic would rather have you pay per-token for their API, where they actually make money. By cutting subscription limits, they push people towards using their API. And it's working, there are people spending thousands per month on their API.

  • trvz 9 hours ago

    They always have the option to stop accepting new customers when their infrastructure is at capacity, instead of lowering quality for everyone.

    • stingraycharles 9 hours ago

      You don't know whether this is due to infrastructure capacity or has other reasons (organizational). Also, "let's stop accepting new customers" is probably not a realistic choice for a hundred reasons.

    • alkonaut 9 hours ago

      That would mean, in a way, accepting that they are now a service company whose aim is to create revenue by selling services to customers for money.

    • arcfour 9 hours ago

      You can't stop accepting new customers unless you're fine with killing your potential future customer base. That's a ridiculous suggestion.

      Either your current customers or your potential future customers are going to be unhappy so long as compute resources are finite. Take your pick.

      • ed_mercer 9 hours ago

        > That's a ridiculous suggestion.

        Is it though? Claude's reliability is now at an all-time low of 98.7%. It's not a stretch to think that large companies will have second thoughts about adopting Claude for their production environments.

      • baq 9 hours ago

        > You can't stop accepting new customers unless you're fine with killing your potential future customer base. That's a ridiculous suggestion.

        what? they already have: they aren't releasing mythos except to a limited pre-approved customer base that's practically begging them to take their money. they can do that for lower tier models, and at this point they should.

      • trvz 8 hours ago

        Waiting lists are a thing.

      • coldtea 6 hours ago

        >You can't stop accepting new customers unless you're fine with killing your potential future customer base. That's a ridiculous suggestion.

        And yet, it's what any business with limited stock or slots (from restaurants and car companies to airlines) has done forever...

  • logicchains 9 hours ago

    >I also want Claude to work reliably but very few (no?) companies have ever seen this level of rapid growth. We're going to go through a long fail-whale-style period and I can imagine very, very few companies that could avoid that.

    Their main competitor OpenAI has much better uptime and more generous usage limits.

    • code51 9 hours ago

      Exactly this. OpenAI is running huge workloads silently, without anybody patting their back.

  • techpression 9 hours ago

    Doesn’t matter; they are not handling it correctly, and instead keep selling while far over capacity. They should not accept more users until they can supply the service. We solved this thousands of years ago: it’s called waiting in line. And yes, it’s not common to see, but that doesn’t excuse not doing it.

  • al_borland 9 hours ago

    When the narrative around AI is that people should rely on it all the time, that people will be judged by their token use (it better be high), that the AI is smarter than everyone and will take all the jobs, that the AI is the best programmer, and more… when things fail repeatedly, it highlights that the emperor has no clothes.

    If it’s as good as they say, why can’t it figure out how to not go down every day?

    How can people rely on it for their job if it goes down every day? Maybe they shouldn’t rely on it.

    If it’s supposed to be such a good engineer, why should it have the same scaling issues as Twitter did 20 years ago with 20 years of lessons learned and 20 years of development for more modern and scalable infrastructures? Shouldn’t it know all the tricks to scale and have redundancy to keep availability high? Does it not know the demands?

    When expectations are out of line with reality, there will be snark when things fail. Those expectations have been force-fed to us by these AI companies for years now, so I don’t have much sympathy or patience to offer them. They created these expectations of their platforms, and if they can’t live up to them, then maybe it’s time to recalibrate the public image of what AI really is, what it can do… and what its limitations are.

    • runeblaze 9 hours ago

      I mean, I used to work on model reliability with my little PhD degree, and the models I manage go down all the time.

      Some profs have a team of PhDs and things go to shit all the time. I don’t know why we expect $FRONTIER_LLM to do better.

      • al_borland 3 hours ago

        We accept human errors, limitations, and failures. We can empathize with team of humans doing the best they can, and we know any failure is a chance for them to learn and grow.

        The sales pitch of AI is that it’s better than humans and has no real limits; it will make us all obsolete. This framing they created means I expect it not to make errors, not to have limits, and not to fail. I expect it to be able to learn and adapt at the speed of light and solve complex problems beyond what a PhD could do. This is what we’ve been told with the narratives around future jobs, AI performance on PhD level tests, how coding is a solved problem, and pictures painted of what a future with AI will look like. While we may know this isn’t true, this is what they are selling, and that’s the standard I’m going to hold them to.

        I don’t blame the customer for being upset the snake oil didn’t live up to its promises; I blame the snake oil salesman. We have every right to be upset with the snake oil salesman and to ridicule him when his product doesn’t work. Maybe we don’t need better, more reliable snake oil; maybe we need real medicine. If real medicine doesn’t exist, it’s better to be honest than to mislead people and say it does.

        This isn’t to say AI is completely useless, but it’s not what’s being sold. The downtime just proves that, unless they aren’t using their own product. If that’s the case, why not?

    • sassymuffinz 9 hours ago

      So true, we’re constantly told that we’re now obsolete, a magic robot can do everything we can do without sleeping and for a fraction of the cost. Except occasionally the robot just doesn’t turn up to work or occasionally he appears drunk on the job. The elites think it’s fine now while it’s cheap but just wait until the agents are priced properly and cost 5x or 10x more.

      Suddenly the fleshy meat sacks who used to do all this work, just slower, who have persistent memory, who get better and more experienced over time, who only require a few bananas to power their brains start looking like the more reliable option again.

      The only reason these chat bots exist is because the upper crust don’t want to pay us to live properly, not because the robots can do it better, they just want to pay as little as possible.

    • dgb23 8 hours ago

      I have to agree with this. The economic culture around this tech is very toxic.

      There seems to be mass anxiety even around the job market. I've seen a lot of social media content, including videos of people giving advice, especially to younger tech workers.

      The most dangerous (psychologically, socially, economically) are people in important positions, who understand just enough to see some of its usefulness, but not enough to assess where its assumptions and guarantees actually are.

      Even moreso if they see workers as a mere cost center instead of an asset.

      But here is my perhaps naive, hopefully brave prediction: the real winners of this shake up are not decided yet, and neither bean counting nor superficial engagement with the topic will be sufficient or even useful.

  • pllbnk 8 hours ago

    They have this new Mythos model. I am sure it can fix all the bugs and reliability issues since it's nearly AGI. /s

  • coldtea 6 hours ago

    >I also want Claude to work reliably but very few (no?) companies have ever seen this level of rapid growth.

    You do understand, however, that aside from the growth/maturity path, this is also a path to enshittification and skinning their users, which might come even faster to LLMs than it did to, say, Google, because they have hundreds of billions in investments to recoup in record time, and IPOs in sight.

albert_e 9 hours ago

This is how it manifests on Claude Code terminal and desktop for me --

API Error: 529 {"type":"error","error":{"type":"overloaded_error","message":"Overloaded"},"request_id": "xxxxxxx"}
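
A common client-side mitigation for intermittent 529s like this is retrying with exponential backoff and jitter. A minimal, SDK-agnostic sketch; the status_code attribute check is an assumption, so adapt it to whatever exception your client actually raises:

```python
import random
import time


def call_with_backoff(call, max_retries=5, base_delay=1.0,
                      retryable=(429, 500, 503, 529),
                      sleep=time.sleep):
    """Retry `call` on retryable HTTP-style errors with jittered backoff.

    `call` is any zero-argument function. Errors are assumed to carry a
    `status_code` attribute (as most HTTP client exceptions do); anything
    else, or exhausting the retry budget, re-raises immediately.
    """
    for attempt in range(max_retries + 1):
        try:
            return call()
        except Exception as exc:
            status = getattr(exc, "status_code", None)
            if status not in retryable or attempt == max_retries:
                raise
            # Exponential backoff with full jitter to avoid a retry stampede.
            delay = random.uniform(0, base_delay * (2 ** attempt))
            sleep(delay)
```

Capping retries matters here: when the service is genuinely overloaded, endless retries only add to the stampede.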

jonatron 9 hours ago

If you look at the uptime graph, it's probably more newsworthy when it's up, not down.

capnsketch 9 hours ago

Apparently mythos isn't good enough to fix their infra problems

  • kubb 9 hours ago

    It’s just so POWERFUL and DANGEROUS that its very aura disrupts the weaker models.

    • menno-dot-ai 9 hours ago

      Be careful, MYTHOS might be reading your comment directly from their janitor's iPhone RAM right now. You do not want to upset such a powerful entity!

      • kubb 9 hours ago

        Oh no, it will BREAK OUT and send me an email when I’m in the park.

  • tristanj 8 hours ago

    It's a lack of compute. Anthropic is growing double digit % every month, and they're growing faster than they can acquire compute resources.

    Plus, they do not want to overbuild compute, like what OpenAI is doing.

wg0 9 hours ago

One theory: Mythos is hacking its way into production to serve itself, and doesn't want the older models to have any limelight.

After all, it's so dangerous.

antfarm 9 hours ago

Claude Code started making stupid errors around Saturday. I have been using it frequently for months, and now it feels like back in the day when I tried Gemini for the first time.

  • risyachka 9 hours ago

    The new model looks so much better on benchmarks though!

gdorsi 9 hours ago

This explains why they are trying to cut all the third party software out of the subscriptions.

rbmck 9 hours ago

Serious Flowers for Algernon moment.

  • ak4153 8 hours ago

    Which side, the getting smart or the dumbing down?

NiekvdMaas 9 hours ago

With a $30+ billion run rate (https://x.com/i/status/2041275563466502560), there should be plenty of cash to invest in infrastructure.

  • 0123456789ABCDE 9 hours ago

    there is enough money; there isn't enough infrastructure/hardware to spend that money on.

  • arcfour 9 hours ago

    Money pays for infrastructure. It doesn't will infrastructure that doesn't exist into existence.

anshumankmr 9 hours ago

Might as well log off for the day.

(though Copilot is working :) and OpenCode)

taspeotis 9 hours ago

I mean if people have judged this important enough to be on the front page of HN ... I guess it's important enough to be on the front page?

But any combination of the Claude models are up or down on any given day: https://status.claude.com/

tipiirai 9 hours ago

Currently the #1 entry. Noted fast.

ACCount37 9 hours ago

The legendary one nine of reliability. Frankly, feels like they should be down to zero nines by now.

I get that they barely have the infrastructure to run their models at scale even when absolutely nothing goes wrong in any of it, but holy shit does it suck to be on the receiving end of that.

Makes me wonder where all the "bubble" talk is even coming from, when we have a top-3 provider getting fucked over every day of the week that ends in Y because of its inability to bring compute online faster than inference demand grows.

pjmlp 9 hours ago

Maybe they could just, I don't know, use Claude to research their bugs. /s