What Does It Take to Build and Operate an AI Factory?
AI is pushing data center infrastructure to new limits. Next-generation compute platforms, rack-scale systems, and advanced cooling technologies are reshaping how AI environments are designed and deployed.
This workshop explores what it actually takes to build and operate AI factories inside the data center — from system architecture to the operational execution required to run these environments at scale.
How the Workshop Will Explore These Challenges
Compute Platforms
Next-generation AI accelerators and compute platforms are driving new levels of power density and reshaping infrastructure design inside the data center.
Rack-Scale Systems
AI deployments are moving beyond individual servers toward integrated rack-scale systems built for massive parallel compute.
Cooling Technologies
As compute density rises, traditional air cooling approaches are reaching their limits, accelerating adoption of advanced liquid cooling technologies.
Execution at Scale
Deploying AI infrastructure requires advanced modeling, integration across multiple technology layers, and disciplined operational execution.
Panelists
Greg Stover, Vertiv | Josh Claman, Accelsius | Rob Curtis, AMD | Nathan Mallamace, Supermicro | Kourosh Nemati, NVIDIA
Sherman Ikemoto, Cadence | Al Nichols, Silverback
Please note that workshop attendance requires a Data Center World pass.