Meta has 70,000+ employees spread across dozens of offices and time zones. The company built a photorealistic AI avatar of Mark Zuckerberg to interact with them directly. The reaction has been predictable: creepy, dystopian, accountability dodge. My reaction is different. A company that spent $135 billion on AI infrastructure and is pushing toward 1 billion Meta AI users should absolutely be running its own tools on its own people first. That's not a crisis. That's dogfooding.

I get the visceral discomfort. Nobody wants to ask their boss a question and get a chatbot wearing his face. But strip away the uncanny valley and look at the engineering problem. How does a single executive communicate consistent strategy, priorities, and context to an organization that size? The traditional answers are all-hands meetings (which scale terribly), email blasts (which nobody reads), and a chain of middle managers playing telephone with the CEO's intent. Every engineer who has watched a product vision get mangled through four layers of management knows this failure mode.

The Real Question Is Whether It Ships Useful Output

The interesting evaluation isn't philosophical. It's functional. Does the avatar actually convey accurate, up-to-date strategic context? Can employees query it about priorities and get answers that match what Zuckerberg would say? Does it reduce the latency between a leadership decision and the team understanding why that decision was made?

These are measurable things. You can A/B test them. You can survey employees who used the avatar against those who didn't and compare alignment on company priorities. If Meta is tracking employee typing patterns and mouse clicks (which, per reports, they are), they certainly have the instrumentation to measure whether this tool moves the needle on execution speed.
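To make "you can A/B test them" concrete, here is a minimal sketch of the kind of comparison that paragraph describes: employees who used the avatar versus those who didn't, scored on a hypothetical "how clearly do you understand current priorities?" survey item (1–5 scale), with a permutation test for the gap. All data, names, and scales here are invented for illustration; nothing below reflects Meta's actual instrumentation.

```python
import random
import statistics

def alignment_gap(avatar_group, control_group, n_resamples=10_000, seed=0):
    """Difference in mean alignment score between avatar users and a
    control group, plus a two-sided permutation p-value for that gap.
    Inputs are hypothetical survey scores on a 1-5 scale."""
    observed = statistics.mean(avatar_group) - statistics.mean(control_group)
    pooled = list(avatar_group) + list(control_group)
    rng = random.Random(seed)
    n = len(avatar_group)
    extreme = 0
    for _ in range(n_resamples):
        rng.shuffle(pooled)  # relabel groups at random
        diff = statistics.mean(pooled[:n]) - statistics.mean(pooled[n:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return observed, extreme / n_resamples

# Invented survey responses, purely illustrative
avatar_users = [4, 5, 4, 3, 5, 4, 4, 5, 3, 4]
control      = [3, 3, 4, 2, 3, 4, 3, 2, 3, 3]
gap, p_value = alignment_gap(avatar_users, control)
```

The point of the sketch is only that the question is empirical: a positive, statistically meaningful gap argues for keeping the tool; a null result argues for killing it.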

Fair point from the skeptics: Meta's track record on internal tools inspiring trust is not great. The employee surveillance reporting, where workers called the monitoring "creepy," landed just weeks ago. And the Avocado model failure, where Meta's own frontier AI couldn't compete and they considered licensing Google's Gemini, suggests the AI org isn't infallible. Acknowledged. But a failed research model and a successful internal communication tool are different products solving different problems. Llama has 1.2 billion downloads. Meta knows how to ship AI that people actually use.

The deeper objection, the one Audrey Liang will make eloquently, is that this avatar insulates Zuckerberg from direct accountability. That it's a buffer between a CEO with dual-class voting control and the employees living with his decisions. I take that seriously. But the alternative isn't some fantasy where Zuckerberg personally answers questions from 70,000 people. The alternative is the status quo: filtered messages, stale memos, and the organizational drift that kills large companies slowly.

Dogfooding Isn't Optional When You're Selling the Dog Food

Meta is telling every business on the planet to integrate AI into their workflows. If Meta won't use its own AI for one of the hardest problems in organizational management (CEO-to-employee communication at scale), why should anyone else trust the pitch? Satya Nadella rebuilt Microsoft's internal tooling around Copilot before selling it externally. That wasn't vanity. It was product validation.

The $375 million New Mexico verdict, the $4.2 million LA ruling, the 2,400 pending youth safety cases, the 11% stock drop. Meta has real accountability problems. They are about ad targeting, algorithmic harm, child safety failures, and a governance structure that concentrates power in one person's hands. Those are serious. An AI avatar that helps employees understand company direction faster is not one of them.

Conflating a communication tool with a governance crisis dilutes the actual criticism. Meta should answer for its algorithms optimizing scam ads. It should answer for platform design that harms minors. Spending outrage on an internal avatar means less pressure on the things that matter.

Ship the tool. Measure whether it works. If employees report better strategic clarity and faster execution, keep it. If it's just a talking head that repeats platitudes, kill it like any other feature that doesn't perform. That's how you evaluate engineering. Not by how it looks in a headline.