A federal judge named Nina Wang just told every attorney in her courtroom: if you used ChatGPT, Harvey.AI, or Google Gemini on this filing, you sign off on it and you verify every citation yourself. No exceptions. The attorneys who challenged this lost. The court called it routine procedural certification, the same category as a certificate of service. Not a free speech crisis. Paperwork.

I think Judge Wang is right, and I think the rest of the professional world should be embarrassed it took a courtroom to get here.

The "It Slows Us Down" Argument Is Real, and It Still Doesn't Win

Okay, fair point to the skeptics: mandatory disclosure does create friction. Provenance tracking, contributor sign-offs, human-review logs. For a solo practitioner working 60-hour weeks, that is a real cost. I get it.

But here is what that friction is actually buying you. Earlier this year, a federal judge ruled that submitting documents to an AI platform whose terms of service do not guarantee confidentiality waives attorney-client privilege. Gone. The disclosure burden is annoying. Accidentally handing your client's trade secrets to a public AI system is a malpractice lawsuit. Pick your problem.

The EU published its Second Draft Code of Practice on AI Transparency in March 2026 and landed on the same basic principle: humans must oversee the labeling process, and organizations should log every step where AI generated or modified content. California is requiring pre-use notices for automated decision tools by January 1, 2027. The federal government issued a Legislative Framework in March 2026 urging Congress to set minimum standards before the state patchwork gets worse. Every major regulatory body is pointing the same direction.
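And if "log every step" sounds abstract, it is not. A provenance trail can be as dumb as an append-only file. Here is a minimal sketch in Python of what one entry per AI touch could look like; the function name, the fields, and the JSONL format are my own assumptions, not anything the EU draft prescribes.

```python
from datetime import datetime, timezone
import json

def log_ai_step(doc_id: str, tool: str, action: str, human_reviewer: str | None) -> None:
    """Append one provenance entry each time AI generates or modifies content."""
    entry = {
        "doc_id": doc_id,
        "tool": tool,                      # e.g. "Gemini"
        "action": action,                  # "generated" or "modified"
        "human_reviewer": human_reviewer,  # None until a person signs off
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    with open("provenance.jsonl", "a") as f:
        f.write(json.dumps(entry) + "\n")

# One entry per AI touch, so the review trail can be reconstructed later.
log_ai_step("market-analysis-q1", "Gemini", "generated", human_reviewer=None)
```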

This Is Not About Lawyers. It's About Anyone Getting Paid to Know Things.

Think about your doctor's office. Your financial advisor. The consultant your company hired for $400 an hour. If any of them handed you a report, a diagnosis summary, or a market analysis that was quietly 80% AI-generated with no human verification, would you want to know? Obviously yes. You are paying for their judgment, not their ability to prompt GPT-4.

The disclosure question is not really about AI being bad. It is about whether the person charging you professional rates actually reviewed what they gave you. A surgeon who uses robotic tools still has to sign the operative report. A CPA who uses tax software still certifies the return. The tool does not remove the accountability. Neither should AI.

Devon Reyes will tell you this creates compliance theater, that bad actors will just check the box and move on. He is not wrong that enforcement is underdeveloped. The March 2026 federal framework is still waiting on Congress, and sanctions for false certification are basically theoretical right now. But "hard to enforce perfectly" is not the same as "pointless." Seatbelt laws were hard to enforce in 1970. We kept them.

The specific ask here is simple: every regulated profession, starting with law and medicine, should adopt the Wang standard now. Sign off on AI use. Verify citations personally. Log the review. Do not wait for Congress to finish arguing about it, because based on the last 6 months, that could take until 2031.
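For the "too much friction" crowd, here is roughly how little machinery that takes. This is a minimal sketch, assuming a small internal tool: ReviewRecord, log_review, and every field name are hypothetical, not drawn from any court order or regulation. The one part worth stealing is the guard clause, which refuses to record a certification where AI was used but citations were not verified.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class ReviewRecord:
    filing_id: str            # internal identifier for the document
    ai_tools_used: list[str]  # e.g. ["ChatGPT"]; empty list if none
    citations_verified: bool  # the reviewer personally checked each one
    reviewer: str             # the human whose name goes on the filing
    signed_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def log_review(record: ReviewRecord, path: str = "review_log.jsonl") -> None:
    """Append a signed review, refusing any certification that skipped verification."""
    if record.ai_tools_used and not record.citations_verified:
        raise ValueError("AI was used but citations were not verified")
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_review(ReviewRecord(
    filing_id="2026-cv-0142-motion-to-dismiss",
    ai_tools_used=["ChatGPT"],
    citations_verified=True,
    reviewer="J. Attorney",
))
```

Everything else in there is plumbing. The guard clause is the standard.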

You already trust your doctor more because they have to sign things. Turns out signatures matter. Who knew.