Satya Nadella – How Microsoft is preparing for AGI

What does it take to run the engines behind AI models larger than anything we've seen before? How does a software giant like Microsoft pivot from selling licenses to shepherding the future of cognitive amplification and superintelligence? These questions and more unfold in this podcast episode with Satya Nadella, where he opens up about Microsoft's grand vision, strategy, and challenges on the road to Artificial General Intelligence (AGI).

Scaling Infrastructure for a New Era of AI

Nadella paints a vivid picture of Microsoft's new Fairwater 2 data center, a cutting-edge facility with as much network optics as all of Azure's data centers combined had just a few years ago. This isn't merely about housing racks and servers; it's a strategic bet that AI's computational demand will grow 10x every 18 to 24 months, far beyond the capacity used to train models like GPT-5. The design allows disparate sites and regions to connect, aggregate compute power, and run enormous training jobs in parallel, supported by a blistering petabit network linking multiple data centers. But what happens when new chips arrive with radically different power and cooling needs? Microsoft is committed to building adaptable, fungible infrastructure that can evolve with rapid innovation cycles rather than being stuck on one generation or architecture.
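To make the scale of that bet concrete, here is a back-of-the-envelope sketch of how 10x growth every 18 to 24 months compounds over a few years. The function and the five-year horizon are purely illustrative, not figures from the episode:

```python
def compute_multiplier(years: float, period_months: float) -> float:
    """Total growth factor after `years`, assuming demand grows 10x
    every `period_months` months (compounding continuously in periods)."""
    periods = (years * 12) / period_months
    return 10 ** periods

# Over 5 years, 10x every 24 months compounds to roughly 316x;
# at the faster 18-month cadence it is roughly 2,154x.
slow = compute_multiplier(5, 24)   # ~316
fast = compute_multiplier(5, 18)   # ~2154
```

Even at the slower cadence, the implied demand curve dwarfs ordinary hardware refresh cycles, which is why fleet fungibility matters so much.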

This industrial shift requires mastering both capital and knowledge intensity. Nadella insists that success depends not just on Moore's Law or raw computing power, but on software sophistication—scheduling workloads, optimizing resources dynamically, and orchestrating a fleet capable of serving varied AI needs across the globe. With AI workloads growing more complex, the balance between training capacity and serving inference demands becomes a critical part of Microsoft's strategy, ensuring they build a hyperscale platform for all users, not just one or two mega AI labs.
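The training-versus-inference balance Nadella describes can be sketched as a toy allocation policy. The numbers and the greedy "serve inference first, but protect a training floor" rule are invented for illustration; real fleet schedulers also weigh latency targets, locality, and hardware generations:

```python
def allocate(total_gpus: int, inference_demand: int, train_floor: int) -> dict:
    """Split a GPU fleet between inference and training.

    Inference is served first, but a minimum reservation (`train_floor`)
    is always held back so training capacity never starves.
    """
    reserved = min(train_floor, total_gpus)
    inference = min(inference_demand, total_gpus - reserved)
    training = total_gpus - inference
    return {"inference": inference, "training": training}

# Slack fleet: inference demand fits, training gets the remainder.
allocate(1000, 700, 200)   # inference 700, training 300
# Tight fleet: inference is capped so the training floor survives.
allocate(1000, 900, 200)   # inference 800, training 200
```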

The AI Revolution

While market cap and capex numbers skyrocket, Nadella remains cautiously optimistic that AI, despite its staggering pace, is still in its "early innings." He acknowledges scaling laws driving model improvements alongside a need for real scientific breakthroughs. He embraces the metaphor from Raj Reddy that AI is ultimately a "cognitive amplifier" or a "guardian angel," a tool augmenting human capabilities rather than a mystical replacement.

But if AI is a tool, will it simply become the "software company's software," or something far more transformational? Nadella suggests a future where AI is not just embedded in applications like Excel or GitHub but evolves into autonomous agents capable of working independently with their own provisioned computing environments. This heralds a transition from AI-assisted humans to AI agents working alongside humans or even entirely on their own, fundamentally reshaping workflows and productivity.

Business Models in the Era of AI Agents

Microsoft's transition from licensing Windows to subscription services like Microsoft 365 already showed how digital delivery changed economics and market reach. Yet with AI, the story grows more complex. The high cost of AI compute—the "Cost of Goods Sold"—challenges traditional SaaS models with their low incremental user costs.

Nadella explains that Microsoft approaches this by reimagining subscriptions and consumption models, offering tiered access, usage-based pricing, and bundles that mix multiple AI agents under a "Mission Control" interface within GitHub. This scaffolding allows users to orchestrate various AI assistants in parallel, each performing parts of a coding or knowledge workflow while maintaining observability and control.
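A minimal sketch of the hybrid subscription-plus-consumption model described above: a flat tier bundles a monthly quota of usage, with metered overage beyond it. Tier names, prices, and units are invented for illustration and imply nothing about actual Microsoft pricing:

```python
# Hypothetical tiers: flat monthly price, included usage quota, overage rate.
TIERS = {
    "basic": {"price": 10.0, "included_units": 100, "overage_rate": 0.05},
    "pro":   {"price": 39.0, "included_units": 500, "overage_rate": 0.04},
}

def monthly_bill(tier: str, units_used: int) -> float:
    """Flat subscription fee plus metered charges for usage over quota."""
    t = TIERS[tier]
    overage = max(0, units_used - t["included_units"])
    return round(t["price"] + overage * t["overage_rate"], 2)

monthly_bill("basic", 80)   # under quota: just the flat fee, 10.0
monthly_bill("pro", 600)    # 100 units over quota: 39.0 + 4.0 = 43.0
```

The design choice mirrors the COGS problem: the flat fee preserves SaaS predictability while the metered component passes heavy compute costs through to heavy users.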

The market for AI coding assistants alone has exploded from around $500 million in revenue just a year ago to potentially $5-6 billion by the end of this year. Yet competition is fierce and fast-moving, with new entrants launching rivals constantly. Microsoft's strength lies in integrating AI models into robust ecosystems and infrastructure like GitHub, VS Code, and the Azure cloud. The value capture is no longer about owning the model alone but about controlling the broader tooling, data, identity, and deployment stacks.

Models, Scaffolding, and the Value Split

A recurring tension in the discussion is where the economic value resides: with the AI models themselves or with the platforms and scaffolding built around them. Nadella views this as a dynamic balance. Models will continue to be open and competitive, especially as open-source variants rise. But to unlock true value, you must integrate them into applications with deep domain-specific knowledge—like Microsoft's Excel Agent, which understands formulas beyond just text.

He envisions that winning in AI will mean building vertically integrated stacks that fuse custom models, infrastructure, and application-level intelligence, all woven together with layers of tooling that offer control, observability, and the ability to blend multiple AI services. Microsoft bets on being both a hyperscale infrastructure provider and a model innovator, supported by its strategic partnership with OpenAI and internal R&D. This balanced portfolio helps commoditize and democratize AI while also preserving opportunities for differentiation and value creation.

The Future of AI Agents

Looking ahead, Nadella foresees a paradigm where AI agents use tools autonomously—beyond just augmenting human users—to perform complex, long-duration tasks involving data migration, workflow orchestration, and interdisciplinary integration. These agents will need fully provisioned computing environments, identity management, security controls, and enterprise-grade storage solutions.
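What "fully provisioned" might mean for an agent can be sketched as a simple data structure capturing identity, compute, storage, and autonomy level. Every field name here is hypothetical; no real Microsoft or Azure API is implied:

```python
from dataclasses import dataclass, field

@dataclass
class AgentEnvironment:
    """Hypothetical provisioning record for an autonomous AI agent."""
    agent_id: str
    identity: str                     # e.g. a managed service identity
    vcpus: int
    memory_gb: int
    storage_gb: int
    autonomy: str = "supervised"      # "supervised" | "semi" | "full"
    allowed_tools: list = field(default_factory=list)

# An agent provisioned for a long-running data-migration task.
env = AgentEnvironment(
    agent_id="migration-agent-01",
    identity="svc-migration@example.com",
    vcpus=8, memory_gb=32, storage_gb=256,
    allowed_tools=["sql", "blob-storage"],
)
```

The point of the sketch is that agents, like employees, need identity, entitlements, and resource budgets that IT can audit, which is exactly the per-agent management problem the next paragraph describes.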

Microsoft aims to be the infrastructure and platform that powers these next-gen agents, recognizing that provisioning and managing these agents will become a core part of enterprise IT. The shift from per-user pricing to per-agent pricing reflects this evolution, as organizations deploy thousands or millions of AI agents working at different autonomy levels.

Challenges in Talent and Competition

Despite access to OpenAI's models and intellectual property, Nadella acknowledges that Microsoft's own AI model, known as MAI, faces fierce competition, currently ranking only 36th in Chatbot Arena. To close this gap, Microsoft is investing heavily in talent acquisition, R&D, and building its own world-class superintelligence team. The company aims to combine the strengths of OpenAI's models with in-house innovations to push the frontier of multimodal AI.

He stresses the importance of infrastructure that supports multiple model families to avoid technological lock-in or obsolescence, especially given breakthroughs like mixture-of-experts architectures or radically new chips. Microsoft's ambition is to remain open and agile, hosting an ecosystem where various models can coexist, compete, and complement each other.

Geopolitical Realities and Sovereignty in AI

An often-overlooked dimension is how geopolitics shapes the future of AI. Nadella points out that unlike earlier technology eras, AI infrastructure and data sovereignty have become strategic priorities for nations worldwide. Microsoft is uniquely experienced in building sovereign cloud services meeting European Union privacy standards and is working to support data residency and agency needs globally.

He acknowledges the growing bipolarity of the tech landscape—particularly between the U.S. and China—and foresees a future where no single AI model dominates globally. Countries will insist on multi-model ecosystems and local control, making the AI market less winner-take-all and more distributed. This decentralization protects against concentration risk while enabling countries to leverage AI's benefits without ceding too much control.

Industrial Execution: From Vision to Reality

Nadella's reflections circle back to the enormous industrial challenge Microsoft faces. This is a capital-intensive, knowledge-rich business where massive compute investments depreciate rapidly and infrastructure must be rebuilt every few years. He highlights the urgency of speed—the 90-day rollout from build to workload in the Atlanta data center is a testament to their focus on "speed-of-light execution."

Balancing supply chains, power availability, regional demand, and rapidly evolving hardware is akin to managing a giant industrial operation. Microsoft has deliberately paused expansion in some areas to maintain fleet fungibility and avoid being tied down to outdated generations of hardware, ensuring a flexible, sustainable growth path.

A Thought to Carry Forward

So, what emerges from this conversation? It's clear Microsoft is betting not on a single "winner-take-all" AI but on an ecosystem of models, platforms, and autonomous agents, all supported by hyperscale infrastructure adaptable across geographies and business needs. Nadella's vision balances pragmatism with ambition: AI will amplify human potential in complex, incremental steps, embedded deeply into tools we already use, while infrastructure and policy shape an inclusive, sovereign, and competitive AI future.
