The Expertise Paradox
What Elon Musk's DOGE experience reveals about the collision between different types of expertise—and why every industry is about to learn the same lesson
There was Elon Musk at the May 30 press conference with President Trump, admitting that cutting government spending was "more of an uphill climb than I anticipated." Here's a man who revolutionized electric vehicles and private space travel, explaining why he couldn't deliver on his $2 trillion promise. This isn't a story about failure—it's about the collision between different types of expertise.
While Musk's experience with DOGE grabbed headlines, it's actually a preview of something much bigger happening across every industry as AI reshapes how work gets done. It reveals the expertise paradox: the assumption that deep expertise in one domain automatically translates to competence in another domain—especially when those domains intersect or when new technology bridges them.
The Expertise Paradox Goes Both Ways
I learned this lesson long before Musk's government adventure. In the early 2000s, I watched political operatives who thought they could make content "go viral" on social media the same way they'd release a press statement or buy a TV commercial. They didn't understand that digital platforms had their own rules, their own rhythms, their own communities.
Then in 2010, I saw it flip the other direction. Ad representatives from a tech company decided to pitch the National Republican Senatorial Committee on a nationwide ad buy during a midterm cycle. They lacked the political understanding to know that "nationwide" isn't a thing in anything but a presidential cycle—we only cared about a handful of competitive states. The tech experts understood their platform perfectly but missed the fundamental realities of how political campaigns actually work.
This pattern has only accelerated. After 2016, tech companies like Facebook staffed up on non-tech expertise—human rights specialists, child safety experts, terrorism researchers. They realized that the people building products didn't understand the nuances of how those products might cause harm in the real world.
But here's what's happening now: after the 2024 election, you see companies like Meta pulling back on that expertise, trying to put more power back in the hands of product people because the non-product people are seen as slowing down the process too much. It's the classic tension between needing to move fast for competition and wanting to be safe.
The problem is that experts from outside tech aren't used to moving at the speed of tech, nor do they know how to talk to product people. It helps when domain experts can think in frameworks that product people understand—user journeys instead of regulatory requirements, risk-ranked priorities instead of comprehensive lists of concerns, and success metrics that translate domain expertise into measurable outcomes.
When Binary Thinking Meets Human Systems
This pattern reveals something deeper about how different types of expertise clash. My favorite example happened toward the end of my tenure at Facebook in 2020. An engineer who had been on a political content project for only a few days started lecturing me about the importance of maintaining neutrality between Republicans and Democrats. He didn't know what my role was at the company, had no understanding of the expertise of others in the room, yet he was confident he knew exactly what needed to be done.
This captures something fundamental about how many tech people approach complex human systems: they think in binaries. Ones and zeros. If I do this, then this will happen. But that doesn't work in government, politics, or most human systems.
You can have a meeting with a policymaker behind closed doors where they say one thing, then watch them say something completely different to the press the next minute. As Soren Dayton perfectly explains in his recent piece on "inside game vs. outside vibes," there's the political game—balancing what constituents want, what colleagues want, what will get press and raise money—and then there's the inside game of wheeling and dealing in committee markups and backroom negotiations where legislation actually gets made. It's like playing chess and checkers at the same time on the same board, where a move in one game impacts what you can do in the other.
The thing about government that tech leaders miss is that what looks like inefficiency is often a feature, not a bug. The checks and balances, the deliberation, the seemingly redundant processes—these exist because government directly impacts people's lives in ways where you cannot just make moves and fix them later. People want stability amid change. They want thoughtful transitions. They want to be brought along.
Many tech leaders have operated with no real checks and balances, no pushback. That's something at the root of many people's issues with tech, and why leaders like Elon struggle when they encounter systems designed with built-in resistance to rapid change.
When DOGE teams "misidentified issues such as fraud due to misunderstandings of agency data structures," they were seeing patterns in data and assuming corruption when the explanation was something else. For example, Musk claimed that millions of dead people were getting Social Security benefits when in fact there were two different databases—one that tracks everyone who has ever been issued a Social Security number (including deceased people, retained for record-keeping) and another that tracks active benefit recipients.
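To make the mix-up concrete, here's a minimal sketch in Python. The datasets, field names, and values are invented for illustration and don't reflect actual Social Security Administration schemas; the point is only that querying a record-keeping file as if it were a payment roll produces inflated "fraud" counts.

```python
# Hypothetical, simplified illustration -- not the actual SSA schemas.
# Two separate datasets: a master file of everyone ever issued an SSN
# (deceased people are retained for record-keeping) and a roll of
# people actively receiving benefit payments.

ssn_master_file = [
    {"ssn": "001-01-0001", "name": "A. Smith", "deceased": True},
    {"ssn": "001-01-0002", "name": "B. Jones", "deceased": False},
]

active_beneficiaries = {"001-01-0002"}  # SSNs currently being paid

# Naive query: count "dead people on Social Security" by scanning the
# master file alone -- this conflates "has a record" with "gets a check".
naive_flags = [p for p in ssn_master_file if p["deceased"]]

# Correct query: a record is only suspicious if the person is marked
# deceased AND appears on the active payment roll.
real_flags = [
    p for p in ssn_master_file
    if p["deceased"] and p["ssn"] in active_beneficiaries
]

print(len(naive_flags))  # inflated count from querying the wrong dataset
print(len(real_flags))   # actual overlap, typically near zero
```

The code is trivial; the domain knowledge is not. Knowing which of the two datasets answers the question "who is being paid?" is exactly the kind of expertise the DOGE teams were missing.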
Domain Experts Learning AI vs. AI Experts Learning Domains
This brings us to the heart of how expertise needs to evolve in the age of AI. The solution isn't to keep AI out of complex human systems—that's neither possible nor desirable. But it's also not to let AI experts reinvent every domain from scratch.
I learned this personally when Facebook was building our political ad transparency library. Initially, I thought I should run the initiative and was really upset when I wasn't chosen, thinking it diminished my domain knowledge. But I wasn't giving enough credit to my lack of product management know-how. I had to check my ego and realize this wasn't the world telling me I wasn't who I thought I was—it was telling me that I was exactly who I thought I was: a political ad expert, not a person who knew how to manage engineers to build products.
The magic happened when the product team and others in ad policy and sales came together. It required constant communication, trust, and thoughtful debate and compromises. When you find people who genuinely want you as part of the process, titles and roles don't matter.
A domain expert learning AI means understanding how models are trained, how datasets are assembled, how to fine-tune, and what various outputs could be. This allows them to input their expertise where it matters most. An AI expert learning a domain will never understand every topic in the world and all its nuances well enough to grasp the implications of the AI's outputs on those fields.
Healthy vs. Destructive Disruption
This distinction matters because not all change is created equal. Healthy disruption understands that people will be upset and displaced, but treats those people with respect while ultimately creating a seamless transition that makes most people's lives better. Those people often don't even realize it's happening—they just know their lives have improved.
Destructive disruption is DOGE: coming in with the proverbial sledgehammer, taking action without thinking it through. Destructive disruption doesn't treat the humans impacted with respect, and it doesn't make people's lives better.
For instance, I think higher education is about to experience a profound disruption of its own. It could happen in the next four years when what colleges are offering stops setting students up for success in the world they'll be entering. This will happen because university leadership and professors are still scared of AI, don't know what ethical boundaries exist for using it, and don't use it themselves to teach students. Universities need to immediately implement AI training for staff and professors—building the plane while flying it, since the technology keeps changing.
Panic Responsibly About Expertise
Many people are panicking about how AI will disrupt everything, but they don't know enough to judge whether experts like Anthropic's Dario Amodei are correct when they make dire economic predictions. I think it's irresponsible for leaders to ring alarm bells without proposing solutions. That's what "panicking responsibly" means: separate the signal from the noise, identify where you can take action, and be part of the solution rather than just adding to the anxiety.
For tech leaders entering new industries: hire experts who understand that domain and listen to them, but also embed them with your engineers so the experts understand how products are built. Know that there are two games at play—the public positioning and the behind-the-scenes reality—and you need to understand both.
The companies and leaders who succeed in this new landscape will be those who can hold two things in tension: the transformative potential of AI and the irreplaceable value of human expertise in every domain it touches.
We're all beginners again as AI changes how work gets done. The question is whether we'll approach that beginner's mind with curiosity and humility, or with the confidence that got Musk his black eye.
I am more cynical. I think this group believes blitz-scaling is the solution to everything and does not know, or does not care to know, that the trade-offs are harming people, in part because the function of government is fundamentally different from inventing a service or maximizing shareholder profit.
In other words, our government requires civic character, and these people don't have it. They view themselves as "great men," special characters in a Civilization game whose only contribution is some multiplicative special bonus.