In recent years, the prevailing trend in data center construction has shifted toward consolidation and retirement. Companies have streamlined operations by migrating substantial workloads to the cloud, thereby reducing growth in the number of active data centers. However, this trajectory has taken a turn with the explosive rise of artificial intelligence (AI), particularly generative AI. The global AI infrastructure market, encompassing data centers, networks, and supporting hardware, is expected to reach $422.55 billion by 2029, with an annual growth rate of 44% over the next six years, according to Data Bridge Market Research.
Enterprises are increasingly choosing to run AI processing on-premises, driven by factors such as data requirements and governance. The shift is often fueled by the nature of the data involved: business-critical or highly sensitive information that cannot, or should not, be moved to the cloud. Consequently, a well-rounded AI strategy may demand an on-premises data center. Accommodating AI workloads, however, requires a departure from traditional design principles, influencing how data centers are designed, built, and operated.
Key Considerations for AI-Centric Data Center Migration
Location, Location, Location
The location of a data center is paramount, especially when accommodating AI workloads, given their heightened compute and network intensity. This makes meticulous planning before breaking ground essential.
Know Your Power Draw
Traditional data centers, running databases and business applications, usually operate with a power draw of 8 kW to 10 kW per rack, easily cooled by fans. AI processing, by contrast, requires significantly more power, upwards of 70 kW per rack, with some industry experts anticipating future needs approaching 300 kW per rack. Understanding the power draw is crucial for planning and ensuring adequate support for AI workloads.
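To see how quickly these per-rack figures compound at facility scale, here is a minimal sizing sketch. The per-rack numbers come from the article; the rack count and the PUE (Power Usage Effectiveness) multiplier are illustrative assumptions.

```python
# Rough facility power sizing using the per-rack figures above.
# Rack count and PUE are illustrative assumptions, not article figures.

def facility_power_kw(racks: int, kw_per_rack: float, pue: float = 1.5) -> float:
    """Total facility draw: IT load scaled by Power Usage Effectiveness."""
    return racks * kw_per_rack * pue

traditional = facility_power_kw(racks=100, kw_per_rack=10)  # ~10 kW/rack
ai = facility_power_kw(racks=100, kw_per_rack=70)           # ~70 kW/rack

print(f"Traditional: {traditional / 1000:.1f} MW")  # 1.5 MW
print(f"AI:          {ai / 1000:.1f} MW")           # 10.5 MW
```

At the same footprint, the AI hall draws seven times the power of the traditional one, which is the figure to bring to the utility before breaking ground.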
Check the Local Power and Water Supply
Verify that the local power utility can meet the heightened power requirements. Additionally, account for the substantial water needs of liquid-cooled AI data centers by checking the availability of a sufficient water supply and considering more advanced closed-loop cooling systems.
Consider the Consumption Model
Explore consumption models offered by major hardware OEMs, which allow payment based on usage rather than upfront acquisition costs. This model is gaining popularity due to its cost-effective and flexible nature.
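A simple way to evaluate such a model is a break-even comparison between upfront purchase and pay-per-use. The sketch below is purely illustrative; all dollar figures and the time horizon are hypothetical assumptions, not vendor pricing.

```python
# Hypothetical break-even sketch: upfront purchase vs. pay-per-use
# consumption model. All dollar figures are illustrative assumptions.

def cumulative_cost_purchase(months: int, capex: float, opex_per_month: float) -> float:
    """Buy upfront, then pay ongoing operating costs."""
    return capex + months * opex_per_month

def cumulative_cost_consumption(months: int, rate_per_month: float) -> float:
    """Pay only a usage-based monthly rate."""
    return months * rate_per_month

capex, opex, rate = 1_000_000, 10_000, 45_000
# First month (within 10 years) where buying becomes cheaper, if any.
breakeven = next(
    (m for m in range(1, 121)
     if cumulative_cost_purchase(m, capex, opex) <= cumulative_cost_consumption(m, rate)),
    None,
)
print(breakeven)  # 29
```

Under these assumed numbers, buying pays off only after roughly two and a half years, which is why the consumption model appeals to organizations uncertain how long current AI hardware will remain useful.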
Isolate the AI
Ensure proper isolation and containment of AI equipment to maintain consistent thermodynamics. Racks with similar power envelopes should run side by side so that varying pressure and temperature ranges do not affect neighboring equipment.
Keep the Floor in Mind
For liquid-cooled AI systems, a raised floor may not be necessary. Consider a slab floor that can accommodate the weight of the equipment more effectively.
Plan Ahead
Anticipate future needs by provisioning sufficient cooling, capacity, and power for both current and projected growth. Communicate growth projections to management to avoid potential obsolescence issues.
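One way to turn growth projections into a concrete capacity check is to apply a compound annual growth rate to the current IT load and compare it against provisioned power. The starting load, growth rate, and provisioned capacity below are illustrative assumptions.

```python
# Capacity projection sketch: compound annual growth applied to the current
# IT load, checked against provisioned power. All figures are assumptions.

def projected_load_kw(current_kw: float, annual_growth: float, years: int) -> float:
    """Current load grown at a compound annual rate for the given horizon."""
    return current_kw * (1 + annual_growth) ** years

current, provisioned = 2_000.0, 6_000.0  # kW
for year in range(1, 6):
    load = projected_load_kw(current, annual_growth=0.30, years=year)
    flag = "OK" if load <= provisioned else "OVER CAPACITY"
    print(f"Year {year}: {load:,.0f} kW ({flag})")
```

At 30% annual growth, this hypothetical facility exceeds its provisioned 6 MW within five years, which is exactly the kind of projection worth putting in front of management early.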
Expect Hardware Churn
Traditional data center operators have been able to extend the lifespan of their equipment well beyond the warranty period. With AI technology, however, the accelerated pace of innovation leads to faster obsolescence. Design the AI section of the data center for ease of modernization and upgrades, distinct from the traditional hardware.
In summary, the field of data center construction is changing, with a shift toward on-premises artificial intelligence processing that breaks with established design conventions. A seamless transition requires addressing critical factors such as location, increased power requirements, adaptable infrastructure design, and strategic planning. A flexible approach to equipment upgrades and obsolescence becomes crucial as AI's influence grows. Taking these factors into account will help guarantee the successful integration of AI into data centers and strengthen their resilience in an ever-changing technological landscape.