analognoise 9 minutes ago

I think we should be limiting data centers because of their power draw and water usage, so I’m against it.

I don’t particularly want to compete with Microsoft for power, or OpenAI for water.

akersten 6 hours ago

Political parties hitching their wagon to "AI good" or "AI bad" aside, I'm actually a huge fan of this sort of anti-law. Legislators have lately been far too eager to write laws about computers, the Internet, and other things they barely understand. A law that puts a damper on all that might give them time to focus on things that actually matter to their constituents instead of beating the tired old drum of "we've got to do something about this new tech."

  • HWR_14 2 hours ago

    The problem is when companies dodge responsibility for what their AI does, and these laws prevent updating the law to handle that. If your employees reject black loan applicants instantly, that's a winnable lawsuit. If your AI happens to reject all black loan applicants, you can hide behind the algorithm.
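
    To make the "algorithmic bias audit" idea concrete, here's a minimal sketch of a disparate-impact check (the "four-fifths rule" from US employment law). The data and names are made up for illustration, not anything from the article:

        # Sketch of a disparate-impact check on loan decisions.
        # All data below is hypothetical.
        def approval_rate(decisions):
            """Fraction approved (True = approved)."""
            return sum(decisions) / len(decisions)

        majority_group = [True, True, False, True, True]
        protected_group = [False, False, True, False, False]

        ratio = approval_rate(protected_group) / approval_rate(majority_group)
        print(f"selection-rate ratio: {ratio:.2f}")
        # Under the four-fifths rule, a ratio below 0.8 flags
        # possible disparate impact worth investigating.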

  • Herring 5 hours ago

    Everybody gangsta until AI deletes 90% of white collar jobs.

Alive-in-2025 5 hours ago

Because this comes from ALEC (per the article on this), I'd say it's a bad idea, meant to support poorly conceived rules. ALEC advocates for policies that benefit conservatives and large corporations. They push policies that persecute minorities and make it much harder for states to prosecute business misconduct or discrimination. In this case it looks like this is intended to block government regulation of AI and computing algorithms. https://www.commoncause.org/issues/alec/

  • tdeck 4 hours ago

    My theory is that this will benefit Flock.

Qwertious 7 hours ago

No mention of DRM. Shame.

Herring 7 hours ago

Background:

Trump signed an Executive Order (Dec 2025) preempting state AI safety laws, threatening to withhold $42.5B in broadband funding from states that refuse to comply (specifically targeting Colorado and California).

In response, New York enacted the "RAISE Act" after the EO was issued. It sets strict safety, transparency, and reporting protocols for frontier models.

California is enforcing its "Transparency in Frontier AI Act" (Sept 2025) regardless of the federal threat. It requires developers of large AI models (over 10^26 FLOPs of training compute) to publicly disclose safety frameworks, report "catastrophic risk" incidents, protect whistleblowers, etc.
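
For a sense of scale, here's a back-of-the-envelope estimate of that 10^26 threshold; the per-GPU throughput is my own ballpark assumption, not a figure from the article or the law:

    # Rough scale of a 10^26 FLOP training-compute threshold.
    # Assumes an H100-class GPU sustaining ~1e15 FLOP/s (dense BF16,
    # perfect utilization); both figures are ballpark assumptions.
    threshold_flop = 1e26
    gpu_flop_per_s = 1e15
    seconds_per_year = 365 * 24 * 3600

    gpu_years = threshold_flop / (gpu_flop_per_s * seconds_per_year)
    print(f"~{gpu_years:,.0f} GPU-years at full utilization")  # ~3,171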

Big Tech (OpenAI, Google, Andreessen Horowitz) is siding with Trump on this one. They prefer one weak federal law to 50 strict state laws.

This post:

Red states are creating deregulation zones. If a big tech company has data centers in Montana and CA tries to impose an audit on its model, the company can sue, claiming its "Civil Rights" in Montana are being infringed by California's overreach.

Red states are tying "Compute" to the First Amendment (free expression), basically teeing the question up for the Supreme Court.

Future implications:

The US continues to split into two distinct operating environments. https://www.economist.com/interactive/briefing/2022/09/03/am...

tehjoker 7 hours ago

The goals of this law:

"So, hypothetically, in a state with a right-to-compute law on the books, any bill put forward to limit AI or computation, even to prevent harm, could be halted while the courts worked it out. That could include laws limiting data centers as well.

“The government has to prove regulation is absolutely necessary and there’s no less restrictive way to do it,” Wilcox said. “Most oversight can’t clear that bar. That’s the point. Pre-deployment safety testing? Algorithmic bias audits? Transparency requirements? All would face legal challenge. "

My take: This sounds incredibly pro-industry and anti-democratic.

  • free_bip 20 minutes ago

    My take (IANAL): the law should probably be ruled unconstitutional for restricting the ability of Congress to pass laws. I believe there is precedent for this but I can't remember where.

  • eikenberry 5 hours ago

    These laws would have one upside: open models would remain open and available. A big problem with at least some of the proposed AI regulation is that it could outlaw an increasingly important aspect of general-purpose computing for the majority of people.

    • HWR_14 2 hours ago

      I don't know of any proposed laws that limit models, only proposed laws that limit the deployment of models.

  • Smar 6 hours ago

    And scary. Really scary.

    • tehjoker 6 hours ago

      It's really funny how, for all the talk of AI safety, what has resulted is precisely the series of steps one would take if one were intentionally designing some kind of dystopian AI system.

cyanydeez 5 hours ago

Someone Gödel-number this so we have a right to pirate and repair.
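
The quip works because Gödel numbering is a real construction: any sequence of bytes can be encoded as a single (enormous) integer and recovered exactly, so a blanket "right to compute on numbers" would cover reconstructing any file. A minimal sketch; the names are made up, purely illustrative:

    # Encode arbitrary bytes as one integer and back: a crude Gödel numbering.
    def godel_encode(data: bytes) -> int:
        # Prefix 0x01 so leading zero bytes survive the round trip.
        return int.from_bytes(b"\x01" + data, "big")

    def godel_decode(number: int) -> bytes:
        raw = number.to_bytes((number.bit_length() + 7) // 8, "big")
        return raw[1:]  # strip the 0x01 prefix

    blob = b"firmware image, repair manual, or anything else"
    n = godel_encode(blob)
    assert godel_decode(n) == blob
    print(f"the file is 'just' a {n.bit_length()}-bit number")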

j-bos 8 hours ago

> Similar to how free speech doesn't mean you can yell “Fire!” in a crowded theater

While I appreciate bringing attention to ongoing changes in the tech/legal landscape, I'll get my rundowns from a source that doesn't blindly repeat this broken assertion. Doesn't speak well of their research practices.

  • AnthonyMouse 8 hours ago

    Yeah, that quote was "mere dicta" from the first day (the case wasn't about shouting fire in a theater, it was about distributing pamphlets opposing the draft), and the actual holding of the case the quote is from was overturned more than half a century ago.

    Hasn't stopped every authoritarian from parroting the quote whenever they want to censor something.

    • comex 7 hours ago

      Despite its history, it’s still a valid example of an exception to the First Amendment under current law. The problem is that most people who cite it are using it as an analogy for something else that isn’t.

      • AnthonyMouse 7 hours ago

        > Despite its history, it’s still a valid example of an exception to the First Amendment under current law.

        Is it though? If you're putting on a play and a character shouts "Fire!" in the script, e.g. in a play criticizing that very decision, can the government punish you for putting on the play because of the risk it could cause a panic? If there is actually a fire in the theater, can they punish you for telling people? What if there isn't actually a fire but you genuinely believe there is?

        Not only is it useless as an analogy for doing any reasoning; the thing itself is so overbroad that even the unqualified literal interpretation prohibits more than would actually be permissible.

        • deathanatos 6 hours ago

          None of your examples is what is meant by "Shouting fire in a crowded theatre." The quote is expressly about falsely shouting fire, not as part of the play, not as an honest act of attempting to alert people to a dangerous situation. The quote with more context is clear: "The most stringent protection of free speech would not protect a man falsely shouting fire in a theatre and causing a panic..."

          > If there is actually a fire in the theater, can they punish you for telling people? What if there isn't actually a fire but you believe that there is?

          (IANAL) The law usually takes circumstances into consideration and, AIUI, usually comes to reasonable conclusions in cases like this. The Wikipedia article on this quote[1] goes into that:

          > Ultimately, whether it is legal in the United States to falsely shout "fire" in a theater depends on the circumstances in which it is done and the consequences of doing it. The act of shouting "fire" when there are no reasonable grounds for believing one exists is not in itself a crime, and nor would it be rendered a crime merely by having been carried out inside a theatre, crowded or otherwise. If it causes a stampede and someone is killed as a result, then the act could amount to a crime, such as involuntary manslaughter, assuming the other elements of that crime are made out. Similarly, state laws such as Colorado Revised Statute § 18-8-111 classify knowingly "false reporting of an emergency," including false alarms of fire, as a misdemeanor if the occupants of the building are caused to be evacuated or displaced, and a felony if the emergency response results in the serious bodily injury or death of another person.

          (It continues with other jurisdictions and situations.)

          [1]: https://en.wikipedia.org/wiki/Shouting_fire_in_a_crowded_the...

          • AnthonyMouse 4 hours ago

            > None of your examples is what is meant by "Shouting fire in a crowded theatre."

            Which is exactly the point, because they nevertheless literally are "shouting fire in a crowded theater".

            > The quote with more context is clear: "The most stringent protection of free speech would not protect a man falsely shouting fire in a theatre and causing a panic..."

            Which is likewise why the people trying to use the quote all but universally omit the qualifiers -- it would otherwise be clear that, even in the context of Schenck, the constraint was intended to be narrow.

            And even with the qualifiers, the original quote still doesn't do well with the first example or the third. Imposing a prior restraint under the hypothetical argument that people could get confused and panic is going to be a weak case when the reason someone is doing it is to criticize the government, and it's quite objectionable to punish people for speech they genuinely believe to be true just because they've made an honest mistake.