The Roadmaps Hiding in Plain Sight
While predicting the future is near impossible, we at least have a sense of the direction tech is heading
I've been thinking about how the next few years might unfold in our increasingly AI-driven world. While predicting the future with precision feels impossible—especially with rapid technological evolution and an unpredictable political landscape—the tech industry and the White House have already laid out their roadmaps.
These aren't secrets. They're public declarations in earnings calls, official submissions to government agencies, memos, and product rollouts. Yet somehow, we're still caught off guard when these visions materialize.
The Stages of Tech Overwhelm
Many of us are cycling through what I call the "stages of overwhelm and change" with technology—essentially the grief cycle for our pre-AI world:
Shock: That stunned feeling when you first realize how capable AI has become
Denial: "This can't transform my industry."
Anger: Frustration at how quickly everything is changing
Bargaining: "Maybe if we just regulate it enough..."
Depression: Feeling helpless in the face of massive technological shifts
Testing: Cautiously experimenting with AI tools to see what they can do
Acceptance: Embracing the new reality and finding ways to adapt and thrive
Add to this all of the other changes the Trump administration has enacted since taking office only 80 days ago. Recognizing where you are in this cycle can help you move forward constructively. Most of us fluctuate between these stages, and that's perfectly normal.
The Maps Are Right in Front of Us
Looking at recent submissions to the White House's AI Action Plan, the tech industry has mapped out its vision for the next few years, even if the specifics remain uncertain. (Note: Meta has not yet made its submission public.) Civil society, academia, the media, and other organizations can use these when figuring out where they want to contribute:
Energy Revolution: Tech giants are planning unprecedented infrastructure buildouts. Microsoft is reviving the Three Mile Island nuclear site (renamed Crane Clean Energy Center) to power its data centers. Anthropic warns that "by 2027, training a single frontier AI model will require networked computing clusters drawing approximately five gigawatts of power"—equivalent to powering a small city.
Copyright Battles: The fight over AI training data is intensifying. Google argues for preserving "access to openly available data." At the same time, the Association of American Publishers counters that "the United States can provide singular AI leadership by prioritizing intellectual property and AI together." This isn't just a legal dispute—it's about who controls the raw materials of the AI economy.
Jobs Transformation: OpenAI's submission describes AI as creating "a flywheel of more freedom leading to more productivity, prosperity, and yet more innovation." The AI 2027 report notes that by July 2027, "there's never been a better time to be a consultant on integrating AI into your business," suggesting massive opportunities for those who position themselves at the intersection of human and machine work.
Information Management: As Google reimagines search with AI, we face profound questions about information access and trust. The Bloomberg report about Google's AI search overhaul reveals the company is working to "wake people up to what this new reality might look like"—a world where information isn't searched for but synthesized and personalized in real-time.
Ethical Guardrails: Without appropriate frameworks, the gap between AI winners and losers will "widen dramatically, not over decades, but months." Organizations and individuals need clear ethics and governance frameworks to navigate these waters responsibly.
Short-Term Political Milestones to Watch
Several imminent political developments will shape how tech navigates its increasingly complex relationship with governments:
Meta's Antitrust Trial (April 14): Zuckerberg is reportedly working to cut a deal with Trump ahead of this crucial trial, which could fundamentally reshape Meta's business model
TikTok Divestment Deadline (June 19): After being extended 75 days, this deadline looms large for the platform used by over 170 million Americans
EU Regulatory Actions: Ongoing fines against Meta and X (formerly Twitter) signal Europe's willingness to confront tech giants; both companies are hoping Trump will intervene.
Elon Musk's Government Role: How long will the mercurial tech leader stay involved in Trump's administration, and what influence will he wield?
These developments largely hinge on tech's relationship with Trump and what his administration ultimately decides to do—a significant source of uncertainty in an already unpredictable landscape. Cecilia Kang notes in her recent NYT piece that tech's courting of Trump has yet to pay off, but I think everyone is waiting to see what the administration does on AI.
A Public Roadmap for Government AI Use
It's not just tech; the White House has long previewed its plans, most recently last week when it released new guidance directing federal agencies to adopt AI rapidly while safeguarding public trust. Among the things this memo outlines are:
Transparency and Public Oversight: Agencies must publicly post AI strategies and use case inventories, creating a window for public analysis, critique, and engagement.
Risk & Accountability: High-impact AI systems must be tested, reviewed, and monitored—and shut down if they don't meet safety, privacy, or fairness standards.
Open Code, Shared Models: Wherever possible, agencies are directed to open source AI models and datasets—allowing external researchers and developers to inspect and improve them.
Generative AI Policy and Media Integrity: Agencies must create policies for generative AI, which raises urgent questions about misinformation, authenticity, and automated content.
Building a Trusted AI Ecosystem: Agencies are encouraged to consult external experts, opening the door for advocates, academics, and civic technologists to shape federal AI governance.
Moving fast is paramount, as deadlines for some of these actions, such as appointing a chief AI officer, begin as early as June 2.
2028: The First True AI Election
I also, of course, need to look ahead to the 2028 election. Campaigning will begin in early 2027, and it will likely be the first substantially AI-driven presidential election:
Hyper-personalized political messaging tailored through AI that knows voters' preferences, concerns, and emotional triggers
Candidates with AI avatars campaigning 24/7 across multiple platforms
Debates augmented by real-time AI fact-checking and sentiment analysis
Unprecedented disinformation challenges, where synthetic media becomes indistinguishable from reality
Mark Zuckerberg's recent earnings call emphasis on personalized AI that "really understands you as a person" offers a preview of this future—AI systems that understand voters better than they know themselves, potentially reshaping democratic processes in profound ways. All of these areas will need help and input in developing ethical and legal guidelines for the use of AI in campaigning.
The Tech Sovereign States
The tech landscape isn't just changing technologically—it's reorganizing power. Tech companies have long acted somewhat like their own countries. When Facebook started hiring policy people around the globe, Slate likened them to ambassadors. Facebook’s Oversight Board has been compared to a Supreme Court.
Politico's description of Mark Zuckerberg's and other tech CEOs' purchases of Washington, DC, properties as "personal embassies" symbolizes this shift. These aren't just homes; they're power bases for CEOs whose companies reach billions of people globally, establishing direct channels to government power centers.
Panic Responsibly
This leads me to my mantra: panic responsibly.
We shouldn’t be paralyzed by fear. Instead, we need to:
Focus on the signal, not the noise: Identify meaningful trends within the chaos
Distinguish between what we can and cannot control: Channel energy toward areas where we can make a difference
Recognize the opportunity within disruption: Leadership gaps create openings for new voices
Take incremental action: Even small steps toward engagement with these technologies matter
What You Can Do Now
Here are concrete steps to consider:
Learn the basics: Spend time with AI tools to understand their capabilities and limitations. As the AI 2027 report suggests, consulting expertise in AI integration will be increasingly valuable.
Develop organizational ethics: Create clear policies for AI use in your organization. (I've developed an ethics and transparency statement that can serve as a starting point, and my AI training workshops have helped organizations navigate these issues.)
Engage with policy discussions: Brainstorm ways your organization can contribute to these conversations. Create the pitch documents to find funding. Think outside the box. Support organizations working on responsible AI governance.
Demand transparency: Press companies for precise information about their AI systems and data usage.
Think critically: Question claims made about these technologies and their supposed inevitability.
Join communities: Connect with others working to ensure these technologies benefit humanity.
The tech industry and the administration already have a direction they are moving in. The question is: Will we let them navigate unchallenged and alone, or will we help steer the course?
As I said in my recent keynote, these challenges present an "opportunity for leadership, opportunity for stories, opportunity to do things differently." The maps are right in front of us. Now, we need to decide where we want to go.