AI infrastructure and increased rack densities
A Q&A with Aaron Lewis of Johnson Controls
Highlights
- The AI boom is accelerating innovation in heat transfer, with continued iterations in liquid cooling expected over the next two to three years
- When it comes to rethinking the data center thermal chain, you must look at the entire ecosystem of components
- It’s incredibly important to ensure that the equipment that draws the most energy (the chiller) operates in the most efficient way possible
- On-site power generation is becoming a necessity in some instances and some geographies, which presents an opportunity to build more sustainable data centers
Rapid increases in AI chip density, along with escalating cooling requirements, are drastically changing the characteristics of data halls. According to AFCOM’s 2026 State of the Data Center Report, average density climbed to 27 kW per rack this year, up from 16 kW last year. With AI becoming more prominent, there is no sign of this trend slowing down.
Added to this, the International Energy Agency (IEA) predicts that data center electricity demand could more than double by 2030, with individual IT racks consuming several hundred kilowatts each. Modern data centers need dynamic thermal management systems that can adapt to intense and variable AI loads.
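For a sense of scale, the sketch below translates those per-rack figures into a whole-hall heat load. The 500-rack hall size is a hypothetical assumption for illustration; only the per-rack densities come from the report cited above.

```python
# A rough, illustrative sketch of what the reported density jump means for
# the cooling plant. The per-rack figures come from the article above; the
# 500-rack hall size is a hypothetical assumption, not a reported number.

RACKS = 500                  # hypothetical data hall size
DENSITY_LAST_YEAR_KW = 16    # average rack density last year (AFCOM)
DENSITY_THIS_YEAR_KW = 27    # average rack density this year (AFCOM)

def hall_heat_load_kw(racks: int, kw_per_rack: float) -> float:
    """Nearly all IT power ends up as heat the cooling plant must reject."""
    return racks * kw_per_rack

before = hall_heat_load_kw(RACKS, DENSITY_LAST_YEAR_KW)
after = hall_heat_load_kw(RACKS, DENSITY_THIS_YEAR_KW)
print(f"Heat load: {before / 1000:.1f} MW -> {after / 1000:.1f} MW "
      f"(+{(after - before) / before:.0%})")
```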
It was against this industry backdrop that Aaron Lewis – Chief Commercial Officer, Global Data Center Solutions at Johnson Controls – was interviewed by Intelligent Data Centres at Data Centre World 2026 in London.
During a quickfire Q&A session, Lewis was asked about the future of data center cooling and how companies like Johnson Controls are rethinking the thermal chain. The interview started on the topic of AI infrastructure and what Lewis sees as the most imminent changes.
"The AI demand and what's happening with rack densities is driving additional innovation in heat transfer."
How is the AI boom reshaping the design and operations of modern data centers? What changes will we see in the next two to three years?
Aaron Lewis: With the AI boom, the biggest changes that we're seeing in data center design are really about heat transfer. What we've seen is the transition from fully air-cooled data centers to introducing mechanical cooling and then into liquid cooling with CDUs (coolant distribution units).
I expect what we're going to see is continued iterations on liquid cooling, whether that's through water or some type of dielectric fluid – or maybe even refrigerant directly to the chip – which will help us transfer heat even more effectively.
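As context for why liquid loops handle these densities, here is a minimal sketch of the standard sizing relation behind a CDU water loop, Q = ṁ·cp·ΔT. The 100 kW loop load and 10 K temperature rise are assumed illustrative values, not figures from the interview.

```python
# Minimal sketch of the sizing relation behind a CDU liquid-cooling loop:
# Q = m_dot * c_p * delta_T. The heat load and temperature rise used in the
# example call are assumed illustrative values.

WATER_CP = 4186.0        # specific heat of water, J/(kg*K)
WATER_DENSITY = 1000.0   # kg/m^3, approximate near room temperature

def coolant_flow_lpm(heat_load_w: float, delta_t_k: float) -> float:
    """Volumetric flow (L/min) needed to absorb heat_load_w at a delta_t_k rise."""
    mass_flow_kg_s = heat_load_w / (WATER_CP * delta_t_k)
    return mass_flow_kg_s / WATER_DENSITY * 1000 * 60  # m^3/s -> L/min

# e.g. a 100 kW rack loop with a 10 K coolant temperature rise
print(f"{coolant_flow_lpm(100_000, 10):.0f} L/min")  # ~143 L/min
```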
Is on-site power generation becoming a necessity rather than an option, and what opportunity exists in recovering and reusing waste heat?
Aaron Lewis: This is a hot topic right now. On-site power generation is a necessity today in some geographies where electrical input is at a long lead time or not even available from the grid. In those instances, you have to look at some type of on-site generation.
But outside of it being a necessity, I think of it more as an opportunity because it allows us to provide a more sustainable solution to data centers. Because anywhere you are using on-site power generation, you're also generating waste heat. You can use that waste heat to power chillers, like an absorption machine. So, that waste heat can provide free cooling into the data center, reducing the electrical load and doing it sustainably.
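To put rough numbers on that idea: a single-effect absorption chiller typically converts driving heat into cooling at a COP of around 0.7. The sketch below applies that ratio; the generator size and recoverable heat-to-power ratio are assumptions for illustration, not figures from the interview.

```python
# Back-of-the-envelope for waste-heat-driven cooling. The electrical capacity
# and heat-to-power ratio are hypothetical; the ~0.7 COP is typical for a
# single-effect absorption chiller, not a Johnson Controls specification.

GEN_ELECTRICAL_MW = 10.0   # hypothetical on-site generator, electrical output
HEAT_TO_POWER_RATIO = 1.0  # recoverable waste heat per MW electrical (assumed)
ABSORPTION_COP = 0.7       # typical single-effect absorption chiller

waste_heat_mw = GEN_ELECTRICAL_MW * HEAT_TO_POWER_RATIO
cooling_mw = waste_heat_mw * ABSORPTION_COP
print(f"~{cooling_mw:.1f} MW of cooling from heat that would otherwise be rejected")
```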
As AI workloads push power densities beyond what traditional air cooling can handle, how is Johnson Controls rethinking the thermal chain from chip to chiller?
Aaron Lewis: When you think about the efficiency of the transfer of heat all the way from chip to ambient, you must first look at the individual components. We start by making the chiller as efficient as we can. We also have to look at the CDUs and how the heat transfers directly from the chip. The next step is to look at that whole ecosystem of components and how we can make it more efficient.
Most of the work today is done by the chiller, and we are using magnetic-bearing compressor technology to eliminate oil and reduce inefficiencies in the chiller system. We also want to make sure that we're doing this across a broad range of temperatures for the data center. This ensures that the largest mechanical piece of equipment (the chiller), which draws the most energy, operates in the most efficient way possible.
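To illustrate why the chiller dominates the energy picture, the sketch below estimates compressor input power as heat load divided by COP at a few chilled-water setpoints; liquid-cooled IT tolerates warmer supply water, which raises COP. All COP values here are assumed for illustration, not YORK specifications.

```python
# Why chiller efficiency matters most: chiller input power = heat rejected / COP,
# and COP improves as the chilled-water setpoint rises (liquid-cooled IT can
# tolerate warmer water). COPs below are assumed illustrative values.

HEAT_LOAD_MW = 13.5  # e.g. the hypothetical 500-rack hall at 27 kW/rack

# (chilled-water supply temperature degC, assumed COP)
scenarios = [(7, 5.0), (18, 7.5), (30, 10.0)]

for setpoint_c, cop in scenarios:
    compressor_mw = HEAT_LOAD_MW / cop
    print(f"{setpoint_c:>2} degC supply: COP {cop:>4.1f} -> "
          f"{compressor_mw:.2f} MW of chiller input power")
```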
"The speed at which technology is changing and the innovations that are coming into the market are something we've never seen before. But the fundamentals don’t change."
How are AI-driven rack densities and cooling demands reshaping customer requirements globally?
Aaron Lewis: The AI demand and what's happened with the rack densities is driving additional innovation in heat transfer. Because of this, we're seeing new entrants to the market – new partner availabilities and new technologies – that we can take advantage of to make sure that we can transfer heat in the most efficient way possible.
What it's done for us globally, aside from driving innovation, is that it’s also requiring us to scale more effectively so that we can use our global manufacturing footprint to build at the rate that these data centers are asking for. And the last thing is around consistency. This means making sure that we can provide a consistent design and a consistent product across regions so that they're able to standardize on their end and know that they're always getting the product that will serve their data center the best. So in a nutshell, it’s about three things: innovating, scaling and being consistent.
What will be the most significant change in data center infrastructure over the next five years, and how is Johnson Controls positioning itself to lead it?
Aaron Lewis: The speed at which technology is changing and the innovations that are coming into the market are something we've never seen before. But the fundamentals don’t change: YORK and Johnson Controls have been the leaders in complex heat transfer applications for over 150 years. We evaluate all the new technologies and innovations coming to market to see if they're suitable for this application and make sure that they can be applied at scale.
For 150 years, we've been at the front edge of this discussion. I expect five years from now, whatever that new technology is, we'll also be at the front edge of that.
FAQs
What is heat transfer in data centers?
Heat transfer in data centers is the controlled movement of heat away from IT equipment, particularly high-density computing hardware (CPUs/GPUs), to prevent overheating and malfunction. This maintains reliability, efficiency and safe operation.
What does chip to ambient mean in data centers?
Chip to ambient in data centers describes the full thermal pathway, from capturing heat at high-density computing hardware (the processors) all the way through to rejecting that heat to the outside, ambient environment.
What is on-site power generation in data centers?
On-site power generation for data centers means that the data center has its own power-producing system or electricity supply located at the facility or next to it, rather than relying solely on the public grid. This power can be primary, supplemental or backup – depending on the design. Common types of on-site power generation include diesel or natural gas generators, solar panels, wind turbines, fuel cells, gas turbines and even nuclear plants.