I spent last week getting my parents set up with a new phone plan. Standard stuff. But my dad also asked me if the video he saw of a celebrity endorsing a crypto investment was real. It wasn't. It was a deepfake. And honestly? I almost couldn't tell either.
That little moment is why I can't stop thinking about what's happening with AI regulation right now. Because the federal government is actively trying to make it harder for states to protect people like my dad from exactly this kind of thing. And nobody seems to be talking about it in terms normal humans can understand.
So let me try.
What Actually Happened
On December 11, 2025, President Trump signed an executive order aimed at limiting state governments' powers to regulate AI, reinforcing a hands-off approach that prioritizes speed and innovation over guardrails. The core argument is simple: a 50-state "patchwork of different regulatory regimes" creates compliance challenges and stifles innovation.
Okay, fair. I get it. If you're a startup trying to build something cool, dealing with 50 different rule books sounds miserable. In 2025, more than 1,000 AI-related bills were introduced across all U.S. states and territories. That's genuinely chaotic.
But the solution isn't to nuke consumer protections from orbit. And that's kind of what's happening.
The executive order uses federal funding as leverage to limit state AI regulation by authorizing agencies to condition discretionary grants on states refraining from enacting AI laws deemed inconsistent with the order's policy. Translation: play ball or lose your money. States with "onerous" AI laws could face lawsuits and withholding of certain federal funding, including grants under the $42.5 billion Broadband Equity, Access and Deployment program.
Think about that. Texas alone was approved for $1.27 billion in broadband deployment funds under the program, and as one policy expert put it: "If it came down to, you pick, keep the AI law or connect the disconnected in vulnerable and rural communities, that's a tremendously hard political decision."
Thirty-six state attorneys general have voiced opposition to any moratorium on state AI laws, warning that a broad federal moratorium would freeze states' ability to respond as new risks emerge. This isn't a blue state, red state thing. Governors from both parties, including Florida's Ron DeSantis and California's Gavin Newsom, have opposed federal preemption of state AI laws.
The Consumer Problem Nobody Wants to Talk About
Here's where I put on my "regular person who uses technology every day" hat. Because while D.C. debates regulatory frameworks, real people are getting wrecked.
In 2025, deepfake-related losses from fraud and scams in the US reached $1.1 billion, tripling from $360 million in 2024. That is not a typo. Tripled. The FTC found that consumers lost more than $12.5 billion to fraud overall in 2024, while nearly 60% of companies reported an increase in losses from 2024 to 2025.
Reports of AI-related incidents rose 50% year-over-year from 2022 to 2024, and in the first 10 months of 2025, incidents had already surpassed the 2024 total. Out of 346 AI incidents recorded in 2025, 179 involved deepfakes. A 2025 iProov study found that only 0.1% of participants correctly identified all fake and real media shown to them.
Read that last one again. Virtually nobody can reliably spot a deepfake anymore. Not you, not me, not your extremely online nephew.
And the states trying to do something about this? Those are the exact laws the White House is calling "onerous." State AI laws designed to protect children, limit facial recognition, and prohibit discrimination against protected groups are all being evaluated for preemption, with the anti-discrimination laws specifically being deemed onerous.
My buddy Audrey would call this a structural power grab by Big Tech. She's not entirely wrong, even if she makes everything sound like a sociology lecture. The Brennan Center pointed out it's not a coincidence that this push comes after the AI industry poured millions into campaigns and super PAC donations. But I don't need a political theory to feel uneasy about this. I just need to watch my dad almost get scammed by a fake celebrity video.
The Weird Plot Twist: The "Deregulation" Bill Is Actually Huge
Here's the part that made me laugh out loud. Senator Marsha Blackburn's proposed TRUMP AMERICA AI Act represents the most ambitious congressional attempt to establish unified federal AI governance, seeking to codify the December executive order while creating a comprehensive regulatory framework that would preempt certain state AI laws.
The bill's full name? I'm not making this up. "The Republic Unifying Meritocratic Performance Advancing Machine Intelligence by Eliminating Regulatory Interstate Chaos Across American Industry Act." Someone worked overtime on that acronym.
But here's the twist: the bill's most striking feature is the gap between its stated deregulatory purpose and its actual regulatory density. Despite the executive order framing state laws as "cumbersome" and promising a "minimally burdensome national standard," the Act establishes mandatory duty of care obligations, multiple overlapping liability theories, and required participation in DOE evaluation programs.
The bill has generated opposition from both sides: technology industry advocates, who see regulatory overreach, and progressive groups, who see preemption of state consumer protections. When both sides hate it, that's usually a sign of something interesting happening.
For consumers? Some of this is actually good. The bill places a duty of care on AI developers to prevent and mitigate foreseeable harm and requires regular risk assessments of how algorithms contribute to psychological, physical, financial, and exploitative harms. Individuals would be able to sue companies that use their personal data for AI training without explicit consent.
That last one? Huge. That's the kind of thing that actually protects regular people.
The Verdict
Look, I'm not anti-innovation. My wallet will confirm that I buy too many gadgets. But my one question is always the same: does this make my actual daily life better?
Deregulating AI without strong consumer protections does not make my life better. It makes scammers' lives better. Fraud losses from generative AI are expected to rise from $12.3 billion in 2024 to $40 billion by 2027. That's the trajectory we're on.
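For the number-minded: if you take those two cited figures at face value, the implied growth rate is brutal. A quick back-of-the-envelope check in Python (the dollar amounts are the ones quoted above; the compounding math is mine):

```python
# Implied compound annual growth rate (CAGR) of generative-AI fraud losses,
# using the figures cited above: $12.3B in 2024 projected to $40B by 2027.
start = 12.3   # billions USD, 2024
end = 40.0     # billions USD, 2027 projection
years = 3      # 2024 -> 2027

# CAGR = (end / start)^(1/years) - 1
cagr = (end / start) ** (1 / years) - 1
print(f"Implied annual growth: {cagr:.1%}")  # roughly 48% per year
```

In other words, for that projection to hold, fraud losses would have to grow by nearly half again every single year. That's the trajectory we're talking about.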
What I want is dead simple. One clear set of federal rules that protects consumers from deepfake fraud, requires transparency about when I'm talking to an AI, and gives people the right to sue when their data gets used without permission. I don't care if it comes from Sacramento or D.C. I care that it exists.
Organizations should not confuse deregulation with reduced risk. AI-related harm won't arrive as an "AI claim." It will surface through product liability, privacy complaints, and consumer protection issues.
The risk is real. The harm is measurable. And right now, the adults in the room are arguing about jurisdiction while regular people are losing real money. Skip the turf war. Protect the consumer. That's it. That's the whole column.