Thermal management where it really matters: your data hall
Highlights
- Cooling can consume up to 40% of facility power, making thermal performance an increasingly limiting factor on growth for AI-era data halls
- AI is driving unprecedented thermal volatility, with U.S. data center electricity demand projected to rise 50% from 2025 to 2027, placing new pressure on how quickly and confidently facilities can scale
- Johnson Controls thermal management begins at the chip and increasingly depends on an integrated chain spanning heat capture, transfer, rejection, and reuse, rather than on isolated system decisions
AI chip density has fundamentally reshaped the thermal characteristics of the data hall. LLM training factories create long plateaus of high heat, while inference factories can produce dramatic swings in thermal output. U.S. data center electricity demand is on track to rise roughly 50% from 2025 to 2027, a trajectory that signals severe grid pressure for hyperscale markets.
Against this backdrop, cooling is emerging as one of the most consequential variables operators can influence. Cooling systems account for up to 40% of a data center's total energy, energy that could otherwise power AI compute, making thermal performance inseparable from capacity, efficiency, and growth planning.
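As a rough illustration of what that 40% share implies, the sketch below walks the arithmetic for a hypothetical 10 MW facility. The figures are invented for illustration only and are not measurements from any specific site or Johnson Controls deployment.

```python
# Illustrative only: hypothetical facility size and cooling fractions,
# not data from any real data center.

facility_power_mw = 10.0  # total facility power draw (hypothetical)

for cooling_fraction in (0.40, 0.30, 0.20):
    cooling_mw = facility_power_mw * cooling_fraction
    available_for_it_mw = facility_power_mw - cooling_mw
    print(f"cooling at {cooling_fraction:.0%}: "
          f"{cooling_mw:.1f} MW on cooling, "
          f"{available_for_it_mw:.1f} MW left for IT and other loads")
```

Every point of cooling overhead recovered in this simplified view is power that can be redirected to compute, which is why thermal performance shows up directly in capacity planning.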
Across AI and high-density environments, systems will continue to change, but the underlying requirement remains constant: significantly greater thermal and energy management supported by more sophisticated controls. Johnson Controls is building the infrastructure required to scale AI reliably, efficiently, and at pace, starting where performance constraints are felt first: inside the data hall.
From chip to ambient: designing thermal performance for the AI era
As AI rack densities surge, operators are shifting from traditional air cooling toward liquid-based architectures. Modern thermal management begins at the chip, where heat is first captured, often through cold plates that bring liquid directly to the GPU surface. From there, the thermal chain moves through transfer elements such as Johnson Controls Silent-Aire Coolant Distribution Units (CDUs), which circulate liquid between the chip-level capture devices and the facility’s higher-level cooling loops, and ultimately to high-efficiency chillers.
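To see why liquid is the natural medium at these densities, a back-of-the-envelope sizing of the coolant loop helps. The sketch below uses the standard sensible-heat relation Q = m_dot * cp * dT with hypothetical rack power and loop temperatures; it is a teaching example, not a specification for Silent-Aire CDUs or any facility loop.

```python
# Minimal sketch: estimate the coolant flow a cold-plate loop needs to
# absorb a given rack heat load. All numbers are hypothetical examples,
# not Silent-Aire CDU or YORK chiller specifications.

RACK_HEAT_KW = 120.0   # hypothetical high-density AI rack load
CP_COOLANT = 4.18      # kJ/(kg*K), roughly water-like coolant
DENSITY_KG_L = 1.0     # kg/L, approximate for water-based coolant
DELTA_T_K = 10.0       # supply-to-return temperature rise across the loop

# Q = m_dot * cp * dT  ->  m_dot = Q / (cp * dT)
mass_flow_kg_s = RACK_HEAT_KW / (CP_COOLANT * DELTA_T_K)
volume_flow_l_min = mass_flow_kg_s / DENSITY_KG_L * 60.0

print(f"~{mass_flow_kg_s:.2f} kg/s (~{volume_flow_l_min:.0f} L/min) "
      f"to remove {RACK_HEAT_KW:.0f} kW at a {DELTA_T_K:.0f} K rise")
```

Even this simplified estimate shows why moving the same heat with air would require enormous volumes of airflow, while a modest liquid flow handles it at the chip.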
This chip-to-ambient approach is now essential, as conventional air systems can no longer manage the heat intensity or rapid thermal fluctuations created by high-density AI workloads.
Coordinating thermal performance for scale
Operators face a systemic thermal challenge, not a component-level one, so they must implement end-to-end management of the thermal chain, monitoring each stage and managing overall PUE, WUE, and resiliency.
High-density AI racks can create intense, rapidly changing heat loads that must be captured, transferred, and rejected as part of a coordinated chain. That chain begins at the chip, where the flow of coolant to the cold plate manifolds is controlled by Johnson Controls Silent-Aire CDUs. When combined with a YORK YVAM chiller, the transfer and rejection phases can also be effectively managed, and operators can achieve PUE performance near, and in some climates below, 1.2.
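PUE is simply total facility power divided by IT power, so a figure near 1.2 puts a hard ceiling on how much energy everything outside the racks, cooling included, is allowed to consume. The quick check below is generic arithmetic with a hypothetical IT load, not a performance guarantee for any specific YVAM or CDU configuration.

```python
# PUE = total facility power / IT power.
# Hypothetical IT load; shows what a PUE near 1.2 implies for overhead.

it_load_mw = 8.0  # hypothetical IT (compute) load

for pue in (1.5, 1.3, 1.2):
    total_mw = it_load_mw * pue
    overhead_mw = total_mw - it_load_mw  # cooling, power distribution, etc.
    print(f"PUE {pue}: {total_mw:.1f} MW total, "
          f"{overhead_mw:.1f} MW of non-IT overhead")
```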
A CDU platform that scales from ~500 kW to over 10 MW, supports in-row, perimeter, and hybrid topologies, and integrates with YORK chiller solutions and optimized controls reflects how operators are translating architectural flexibility into measurable improvements across the thermal chain.
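One practical consequence of that scaling range is that the same platform family can cover very different deployment sizes. The sizing sketch below assumes a hypothetical 1 MW per-unit capacity and an N+1 redundancy policy purely for illustration; actual Silent-Aire unit capacities and redundancy schemes vary by model and design.

```python
import math

# Hypothetical exercise: how many CDUs a hall might need at different IT
# loads, assuming an example 1 MW unit capacity and N+1 redundancy.
# Illustrative assumptions, not Silent-Aire product specifications.

UNIT_CAPACITY_MW = 1.0

def cdus_required(it_load_mw: float, spare_units: int = 1) -> int:
    """Units needed to cover the load, plus spare units for redundancy."""
    return math.ceil(it_load_mw / UNIT_CAPACITY_MW) + spare_units

for load in (0.5, 2.0, 10.0):
    print(f"{load:>4.1f} MW hall -> {cdus_required(load)} CDUs (N+1)")
```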
Engineering data centers with confidence
Industry analysts have ranked Johnson Controls among the leading thermal management providers worldwide, reflecting our commitment to innovating ahead of the curve and delivering solutions that meaningfully free power for computing.
Johnson Controls' experience is grounded in hyperscale facilities, where our teams work directly with operators to solve the real thermal challenges created by rising rack densities, tighter power limits, and rapidly shifting workloads. That proximity shapes how we design: anticipating the next generation of heat loads, building for higher-temperature liquid loops, and supporting the resiliency strategies operators need for mission-critical AI infrastructure.
As AI workloads intensify, the operators who can demonstrate more tokens per watt will define the next era of digital infrastructure. We are committed to helping you build that advantage, starting in your data hall, where thermal performance matters most.
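"Tokens per watt" is, at root, throughput divided by the power spent producing it, and thermal overhead sits directly in the denominator. The sketch below uses invented throughput and power figures solely to show how a lower facility overhead moves the metric; it is not benchmark data.

```python
# Illustrative only: invented throughput and power figures showing how
# facility overhead (cooling included) dilutes tokens per watt.
# "Tokens per watt" here means tokens per second per facility watt,
# i.e. tokens per joule of facility energy.

tokens_per_second = 2_000_000  # hypothetical cluster-wide inference rate
it_power_w = 4_000_000         # hypothetical IT power draw (4 MW)

for pue in (1.5, 1.2):
    facility_power_w = it_power_w * pue
    tokens_per_watt = tokens_per_second / facility_power_w
    print(f"PUE {pue}: {tokens_per_watt:.3f} tokens/s per facility watt")
```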
Scale your AI infrastructure with confidence
FAQs
Why are operators shifting from air cooling to liquid cooling?
The heat produced by modern GPUs exceeds what air systems can reliably manage. Liquid cooling brings coolant directly to the chip, enabling stable heat capture even under sustained training loads or rapid inference spikes.
What does "chip-to-chiller" thermal management mean?
It refers to managing heat across the entire chain, from chip-level capture, through CDUs and facility loops, to final rejection at the chiller, as a single coordinated system rather than a set of isolated components.
How do Silent-Aire CDUs support high-density AI workloads?
Silent-Aire CDUs regulate coolant flow to the chip, stabilize thermal loads, and provide flexibility across in-row, perimeter, and hybrid configurations. This creates the conditions for improved density, faster deployment, stronger uptime, and better power allocation to IT.
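To illustrate what "regulating coolant flow" means in practice, here is a deliberately simplified proportional control loop that nudges pump flow to hold a target return temperature. It is a teaching sketch with hypothetical setpoints and gains, not a representation of the actual Silent-Aire control logic.

```python
# Toy proportional controller for a CDU-style flow loop. Setpoints, gains,
# and limits are hypothetical; real CDU controls are far more sophisticated.

TARGET_RETURN_C = 45.0              # desired coolant return temperature
KP = 0.05                           # fractional flow adjustment per degree C of error
MIN_FLOW, MAX_FLOW = 50.0, 400.0    # pump flow limits in L/min

def adjust_flow(current_flow_l_min: float, measured_return_c: float) -> float:
    """Raise flow when the return runs hot, lower it when the return runs cold."""
    error_c = measured_return_c - TARGET_RETURN_C
    new_flow = current_flow_l_min + KP * error_c * current_flow_l_min
    return max(MIN_FLOW, min(MAX_FLOW, new_flow))

flow = 150.0
for measured in (52.0, 49.0, 46.0, 45.2):  # simulated return temperatures
    flow = adjust_flow(flow, measured)
    print(f"return {measured:.1f} C -> flow {flow:.1f} L/min")
```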