A Brooklyn jury just handed dispatchers $3.1 million after their employer forced them back to the office, even though technology-enabled remote work had been functioning just fine. That case wasn't specifically about AI. But the underlying logic is spreading fast: the way your employer uses technology is increasingly a legal question, not just an HR preference.
So: can your boss force you to use AI tools at work? Yes, almost certainly. Requiring specific software is well within normal management rights in the U.S., Canada, and the UK. Your employer already decides what laptop you use, what Slack workspace you're in, what CRM you log calls into. Adding an AI writing assistant or a productivity tracker to that list is, legally speaking, unremarkable.
That answer is technically correct and completely misses the point.
When the Tool Becomes the Manager
A 2026 GAO report found that algorithmic monitoring is increasingly being used to make employment decisions, including discipline, promotion, and termination, with what the GAO called "minimal human review." One quoted scenario described workers with chronic pain flagged as idle because, to an algorithm with no setting for a bad pain day, a gap in keystrokes looks exactly like slacking.
That's the actual issue. Requiring you to use an AI tool is one thing. Letting that tool generate the performance score that gets you fired is something else entirely. The tool becomes the boss, and the boss has no idea who you are.
Plaintiffs are already testing this in court. In Falsch v. Fitch Solutions, lawyers argued that shifting algorithmic performance metrics created constructive discharge: essentially, that the system was weaponized to make conditions intolerable until someone quit. Whether or not that case wins, it names the behavior. That matters.
Notice Is Not Protection
States are moving. California's AB 1898 would require employers to give written notice when AI is used in employment decisions or workplace surveillance. Georgia's HB 1351 would require state agencies to notify employees when AI influences personnel matters. Some states are even banning fully automated decisions in workers' comp claims, requiring a human to make the final call.
I'll give the employer-rights crowd their fair point: mandating an AI drafting tool is genuinely no different from mandating Microsoft Word, and workers shouldn't have veto power over every software rollout. Fine.
But notice laws don't protect you. Being told that an AI scored your performance is cold comfort when you've already lost your job over a metric you never saw and couldn't contest. Knowing the algorithm exists is not the same as having any power over what it does to your career.
Canada is in an even murkier place. As of 2026 there is no comprehensive federal AI law covering private-sector employers; existing human-rights codes apply only "by impact, not by name"; and Canadian guidance quietly notes that your employer may be storing your AI-generated performance data on servers outside the country, triggering privacy obligations most HR departments haven't thought about yet.
What workers actually need is a right to human review before an adverse employment decision, not the right to refuse the tool. That's the specific thing legislators should be writing into these bills. The EU already moves in this direction: GDPR Article 22 restricts decisions based solely on automated processing that significantly affect individuals. American and Canadian law should catch up.
The Brooklyn jury awarded $3.1 million and called it a reasonable-accommodation problem. In three years, the same verdict will be about someone whose algorithmic productivity score got them fired without a single human reading it.
Your boss can hand you the AI. They shouldn't be allowed to let it hand you a pink slip.