Earlier this month at CES 2026 in Las Vegas, NVIDIA Founder and CEO Jensen Huang delivered a keynote that underscored a pivotal moment for the technology industry. Artificial intelligence is no longer an incremental evolution—it represents a full platform reset.
As Jensen explained, every 10 to 15 years the computing industry reinvents itself. Entire ecosystems shift. Applications are rebuilt. Infrastructure is reimagined. Today, AI is driving the next reset—and it’s happening at unprecedented speed and scale.
What makes this moment different is that AI isn’t just changing applications. It’s fundamentally reshaping how computing itself works. Software is trained rather than explicitly programmed. Models reason in real time. Systems generate outputs dynamically and increasingly interact with the physical world. This transformation has given rise to what Jensen described as the AI factory: an industrial-scale system designed to produce intelligence continuously.
A Stack Rebuilt from the Ground Up
One of the most compelling frameworks from Jensen’s keynote was his description of the five-layer AI stack—a reminder that every breakthrough in AI depends on what lies beneath it.
The stack consists of:
1. Land, power, and shell
2. Chips (GPUs, CPUs, networking)
3. Infrastructure
4. Models
5. Applications
While much of the industry focuses on the top layers—models and applications—Jensen made it clear that the foundation matters just as much. In fact, nothing above can function without the first layer being engineered correctly.
Power, he emphasized, is not an afterthought. It is the first layer of AI.
Why Power Has Become a Strategic Layer
The computational demands of AI are unlike those of traditional data centers. Training and inference workloads are highly dynamic, dense, and increasingly continuous. As models grow larger and reasoning becomes more complex, compute intensity rises—and power demand changes rapidly in response.
AI factories face challenges such as:
· Rapidly fluctuating loads
· Extreme rack-level density
· Minimal tolerance for instability or interruption
At the same time, the pace of innovation is accelerating. Faster training enables faster deployment. Faster deployment means earlier access to new capabilities—and a competitive edge. In this environment, power systems can no longer be static or purely reactive. They must be responsive, efficient, and designed to scale alongside AI itself.
Simply put, AI cannot advance faster than the power infrastructure supporting it.
Layer One: Where EPC Power Fits
This is where EPC Power plays a foundational role.
Jensen’s framing places land, power, and shell at the base of the AI stack for a reason. Before chips, models, or applications can deliver value, power systems must provide a stable and resilient foundation. As projected data center capacity for AI surges, power is becoming a major constraint on development. The grid can handle the increased demand most of the time; it is only during a few peak hours each year that it risks becoming overloaded. According to a 2025 Duke University study, battery storage with a two-hour duration can enable nearly 100 GW of new data center load. EPC Power inverters are built to power exactly that kind of storage.
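As a rough back-of-the-envelope illustration of why a two-hour duration can be enough (the facility size, event count, and event length below are hypothetical assumptions, not figures from the Duke study), a battery sized to a site’s full load can carry it through short grid-stress events while supplying only a small fraction of its annual energy:

# Illustrative sizing check: all numbers are hypothetical assumptions.
site_load_mw = 500            # assumed AI campus load
battery_power_mw = 500        # battery rated to cover the full site load
battery_energy_mwh = 1000     # "two-hour" duration: 500 MW x 2 h
event_hours = 2               # assumed longest single grid-stress event
events_per_year = 10          # assumed number of stress events per year

ride_through_mwh = site_load_mw * event_hours
print(ride_through_mwh <= battery_energy_mwh)   # True: one full event fits in one charge

annual_site_mwh = site_load_mw * 8760
battery_share = (events_per_year * ride_through_mwh) / annual_site_mwh
print(f"{battery_share:.2%}")   # about 0.23%: the battery covers a tiny slice of annual energy

The point of the sketch is the ratio: the grid serves the site almost all year, and storage only has to bridge the handful of hours when it cannot.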
Modern AI facilities increasingly rely on complex combinations of grid power, on-site generation, and energy storage. These resources must operate together seamlessly, even as demand fluctuates and system density increases. Without proper coordination, on-site generators can trip offline, and utility interconnections may not tolerate rapid swings in power draw. Power electronics are no longer passive components; they actively shape system stability and performance.
EPC Power’s inverter technology is designed to support this first layer of the AI stack by enabling:
· Stable power delivery under highly dynamic conditions
· Smooth interaction between the grid and on-site energy resources
· Infrastructure built for how AI systems actually behave—not how data centers historically operated
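As a purely generic illustration of the first two behaviors in the list above (this is not EPC Power’s control algorithm, and the ramp limit and load profile are invented for the example), a battery-inverter system can limit how quickly the grid-facing demand changes by absorbing or supplying the difference:

# Simplified illustration of ramp-rate limiting with a battery + inverter.
# Generic logic with illustrative numbers; not a real controller.

MAX_GRID_RAMP_MW = 20   # assumed maximum change in grid draw per time step

def smooth_grid_draw(load_profile_mw):
    """Return (grid_draw, battery_power) per step.
    Positive battery power = discharging, negative = charging."""
    grid_draw = []
    battery_power = []
    previous_grid = load_profile_mw[0]
    for load in load_profile_mw:
        # Limit how fast the grid-facing demand is allowed to change
        step = max(-MAX_GRID_RAMP_MW, min(MAX_GRID_RAMP_MW, load - previous_grid))
        grid = previous_grid + step
        grid_draw.append(grid)
        battery_power.append(load - grid)   # battery makes up the shortfall or absorbs the excess
        previous_grid = grid
    return grid_draw, battery_power

# Example: an AI training cluster stepping between roughly 100 MW and 400 MW
load = [100, 400, 400, 100, 350, 350, 120]
grid, battery = smooth_grid_draw(load)
print(grid)      # grid draw changes by at most 20 MW per step
print(battery)   # the battery supplies or absorbs the rest

A real controller also has to respect the battery’s power and energy limits and coordinate with on-site generation; the sketch only shows the smoothing idea.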
As the layers above accelerate, the margin for error at the foundation continues to shrink.
AI Is Expanding Beyond the Screen
Another theme from Jensen’s keynote was the rise of physical AI—intelligence that interacts with the real world through automation, robotics, and industrial systems. Driven by both technological breakthroughs and real-world pressures like global labor shortages, AI is moving from digital environments into physical infrastructure.
As this transition continues, reliable and responsive power becomes even more critical. Physical systems cannot pause, reboot, or tolerate instability. The intelligence may live in models and chips, but the reliability depends on the foundation beneath them.
Building the AI Future from the Ground Up
Jensen Huang’s CES keynote made one thing clear: the future of AI is being built from the ground up. While innovation at the top of the stack captures attention, long-term success depends on the strength of the layers below.
Power is not merely supporting AI—it is enabling it.
As the AI stack continues to evolve, the first layer must be designed with the same level of innovation and foresight as every layer above it. Because no matter how advanced the model or how powerful the GPU, the AI factory only performs as well as the power foundation it’s built on.
The future of AI starts with power.
Footnote: Insights referenced from NVIDIA Founder and CEO Jensen Huang’s CES 2026 keynote.
Duke University study cited above: https://nicholasinstitute.duke.edu/sites/default/files/publications/rethinking-load-growth.pdf