The Physics of the Problem
Air cooling has a fundamental physical limit. When rack density exceeds approximately 30 to 40 kilowatts, the volume of air required to carry the heat away becomes impractical within standard data center floor layouts. For a fixed supply-to-return temperature rise, the required airflow scales linearly with heat load, and the fans needed to move it consume meaningful power themselves, often 10 to 15% of total facility load at high densities.
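A back-of-the-envelope heat balance makes the constraint concrete. The sketch below estimates the airflow a rack needs at a given heat load; the air properties and the 15-degree supply-to-return temperature rise are illustrative assumptions, not figures from this note.

    # Rough airflow estimate for air-cooled racks (illustrative assumptions).
    # Heat removed by air: Q = rho * V_dot * c_p * delta_T
    # => V_dot = Q / (rho * c_p * delta_T)

    RHO_AIR = 1.2        # kg/m^3, air density at typical room conditions (assumed)
    CP_AIR = 1005.0      # J/(kg*K), specific heat of air (assumed)
    DELTA_T = 15.0       # K, assumed supply-to-return temperature rise

    def required_airflow_cfm(rack_kw: float) -> float:
        """Volumetric airflow (cubic feet per minute) to remove rack_kw of heat."""
        v_dot_m3s = (rack_kw * 1000.0) / (RHO_AIR * CP_AIR * DELTA_T)
        return v_dot_m3s * 2118.88  # convert m^3/s to CFM

    for kw in (10, 40, 120):
        print(f"{kw:>4} kW rack -> ~{required_airflow_cfm(kw):,.0f} CFM")
    # Roughly 1,200 CFM at 10 kW, 4,700 CFM at 40 kW, 14,000 CFM at 120 kW:
    # the airflow scales linearly, which is why very dense racks outrun air cooling.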
NVIDIA's Blackwell B200 GPU delivers roughly 4.5 petaflops of dense FP8 performance and has a thermal design power (TDP) of about 1,000 watts. An 8-GPU server such as the DGX B200 therefore draws 8 kilowatts from the GPUs alone, before accounting for CPUs, networking, power distribution, and other components, and NVIDIA's rack-scale GB200 NVL72 systems run at approximately 120 kilowatts per rack. A standard air-cooled data center cannot accommodate this density without heavily overprovisioning raised-floor space and CRAC capacity, in effect trading compute floor space for air handling.
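For a sense of where a figure like 120 kilowatts per rack comes from, the sketch below rolls up an illustrative power budget for a 72-GPU rack-scale system. The per-component numbers are assumptions chosen to show the arithmetic, not published specifications.

    # Illustrative rack power rollup (all non-GPU budgets are assumptions).
    rack_budget_watts = {
        "gpus (72 x ~1,000 W)":        72 * 1000,
        "host cpus and memory":        18_000,   # assumed
        "nvlink / network switches":   10_000,   # assumed
        "power conversion and fans":   12_000,   # assumed overhead
        "misc (storage, management)":   4_000,   # assumed
    }

    total_kw = sum(rack_budget_watts.values()) / 1000.0
    print(f"Illustrative rack total: ~{total_kw:.0f} kW")
    # ~116 kW with these assumptions, in the ballpark of the ~120 kW figure above.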
The industry responded to this physics constraint exactly as you would expect: liquid cooling became not a premium option but an operational requirement for high-density AI compute.
Cooling Technologies
Two primary liquid cooling approaches have emerged at scale.
Direct Liquid Cooling (DLC) routes chilled water or coolant through cold plates mounted directly on the processor package. The coolant absorbs heat from the chip and carries it to a facility-level heat rejection system. This approach can be retrofitted to existing data center infrastructure more easily than immersion alternatives, making it the dominant near-term adoption path for hyperscalers.
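The same heat-balance relation explains why liquid is so much more effective per unit of flow than air. The sketch below assumes water-like coolant properties and a 10-degree temperature rise across the cold plates (both assumed values) to estimate the coolant flow a DLC loop needs for a given rack load.

    # DLC coolant flow estimate (illustrative; assumes water-like coolant).
    # Q = rho * V_dot * c_p * delta_T  =>  V_dot = Q / (rho * c_p * delta_T)

    RHO_WATER = 998.0     # kg/m^3, density of water (assumed coolant)
    CP_WATER = 4186.0     # J/(kg*K), specific heat of water (assumed)
    DELTA_T = 10.0        # K, assumed temperature rise across the cold plates

    def coolant_flow_lpm(rack_kw: float) -> float:
        """Coolant flow in liters per minute needed to carry away rack_kw of heat."""
        v_dot_m3s = (rack_kw * 1000.0) / (RHO_WATER * CP_WATER * DELTA_T)
        return v_dot_m3s * 1000.0 * 60.0  # convert m^3/s to L/min

    print(f"120 kW rack -> ~{coolant_flow_lpm(120):,.0f} L/min of coolant")
    # ~170 L/min of water versus ~14,000 CFM of air for the same 120 kW load.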
Immersion cooling submerges server hardware in dielectric fluid — either single-phase (the fluid remains liquid throughout) or two-phase (the fluid boils and condenses as part of the cooling cycle). Immersion achieves higher thermal efficiency than DLC but requires specialized infrastructure and is harder to service. It is best suited for new construction rather than retrofits.
NVIDIA has made liquid cooling a design-in requirement for Blackwell-based systems at scale. The company now includes liquid-cooled rear-door heat exchangers or direct-to-chip cold plates in its reference designs, effectively signaling to the market that air cooling is insufficient for its current-generation products.
Adoption Curves and Market Structure
Liquid cooling penetration in data centers is accelerating from a low base. IDC and other research firms estimate that direct liquid cooling represented less than 5% of new data center cooling capacity in 2022. Penetration is expected to reach 20 to 30% by 2027, driven almost entirely by AI compute density requirements.
The leading suppliers of direct liquid cooling infrastructure include Vertiv, which has integrated cooling solutions across its rack and facility products; CoolIT Systems, a private company that has emerged as a preferred DLC supplier to several hyperscalers; and Schneider Electric, which has been building liquid cooling into its broader data center infrastructure portfolio.
The rear-door heat exchanger market — large heat exchangers that mount on the rear of standard racks and capture exhaust air heat before it enters the room — is served by a broader set of players, with Stulz, Rittal, and several others competing for hyperscaler contracts.
Sizing the Opportunity
The total addressable market for data center cooling is substantial and growing rapidly. Global data center cooling infrastructure spending was approximately $20 billion annually in 2023. With liquid cooling's share expanding and total compute density rising, the addressable market for liquid cooling specifically is projected to compound at 25 to 35% annually through 2028.
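As a worked example of what those growth rates imply, the sketch below compounds a hypothetical liquid-cooling revenue base at the low and high ends of the projected range. The $2 billion starting point is an assumption for illustration, not a figure from this note.

    # Compounding example for the projected 25-35% growth range, 2023-2028.
    base_2023_usd_bn = 2.0  # illustrative assumed liquid-cooling base, not a sourced figure

    for cagr in (0.25, 0.35):
        size_2028 = base_2023_usd_bn * (1 + cagr) ** 5
        print(f"At {cagr:.0%} CAGR: ~${size_2028:.1f}B by 2028")
    # Roughly $6.1B at 25% and $9.0B at 35% from a $2B base.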
Importantly, the shift to liquid cooling does not eliminate spending on traditional air conditioning — it creates additional spending on liquid infrastructure atop existing cooling budgets. Data centers are adding liquid cooling to supplement, not replace, their air systems during the transition period.
The companies we track most closely in this space are those with direct relationships with the hyperscaler mechanical-engineering teams that drive the design-in process, validated qualification on current-generation GPU platforms, and the manufacturing scale to support multi-gigawatt campus builds. Cooling decisions for a major data center campus are made 18 to 24 months before construction completes, so companies already in those design discussions have significant revenue visibility.
Investment Framework
We evaluate cooling infrastructure companies against three criteria: hyperscaler qualification status and design-in relationships, backlog as a percentage of annual revenue, and gross margin trajectory in a constrained supply environment. Companies that score well on all three criteria represent the highest-conviction positions within this theme.