Image: ChatGPT imagines the story from arsTECHNICA:
On Friday, Politico reported that one of the country’s biggest data center developments had guzzled nearly 30 million gallons of water without paying for it. Even worse, the water grab came at a time when nearby drought-stricken residents were warned to restrict their personal water consumption, and some reported sudden decreases in water pressure.
An investigation conducted by utility officials in Georgia’s Fayette County found that the Quality Technology Services (QTS) facility had two industrial-scale water hookups that weren’t being monitored. “One water connection had been installed without the utility’s knowledge, and the other was not linked to the company’s account and therefore wasn’t being billed,” Politico reported.
MY PROMPT: As this story documents, there is definitely a problem with the water demands of the nation’s data centers. What I would like to know is how the water is being used and, if it is primarily for cooling purposes, why it can’t be recirculated through a system of cooling towers.
Your instinct about recirculation is correct — and the industry knows it. Here is a complete explanation of how data center water is actually used and why the problem is more complicated than it first appears:
Image: Cooling tower diagram
How Water Is Actually Used
Data centers need cooling at two levels simultaneously:
Server level — individual chips and processors generate intense localized heat that must be drawn away directly
Building level — the entire facility must maintain a stable ambient temperature to prevent equipment failure
The dominant cooling method for large, modern data centers is evaporative cooling — and this is where the water disappears. Here is the process: water absorbs heat from the servers, travels to a cooling tower, and is then exposed to moving air. A portion of that water evaporates into the atmosphere, carrying the heat with it. The evaporated water is gone — it cannot be recaptured without enormous additional energy expenditure.
This is the core of the problem. The water is not being used in the conventional sense — it is being consumed by phase change. It leaves the system as water vapor.
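To put rough numbers on that phase-change loss, here is a minimal back-of-the-envelope sketch in Python. The latent-heat value and the one-megawatt load are assumptions for illustration, not figures from the Politico story:

```python
# Back-of-the-envelope: water evaporated to reject 1 MW of heat continuously.
# Assumed figure: latent heat of vaporization of water near cooling-tower
# temperatures is roughly 2,400 kJ/kg.

LATENT_HEAT_KJ_PER_KG = 2_400
GALLONS_PER_M3 = 264.17

def evaporation_kg_per_s(heat_load_mw: float) -> float:
    """Mass of water that must evaporate each second to carry away the heat."""
    return heat_load_mw * 1_000 / LATENT_HEAT_KJ_PER_KG   # 1 MW = 1,000 kJ/s

rate = evaporation_kg_per_s(1.0)
m3_per_day = rate * 86_400 / 1_000                         # 1,000 kg of water is about 1 m^3
gallons_per_year = m3_per_day * 365 * GALLONS_PER_M3

print(f"{rate:.2f} kg/s evaporated")                       # ~0.42 kg/s
print(f"{m3_per_day:.0f} m^3 per day")                     # ~36 m^3/day
print(f"{gallons_per_year/1e6:.1f} million gallons/year")  # ~3.5 million gallons per MW-year
```

At roughly 3.5 million gallons per megawatt-year of rejected heat, the nearly 30 million gallons in the story corresponds to less than ten megawatt-years of heat rejected this way.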
Why Not Just Recirculate It?
Your instinct is entirely sound — and closed-loop systems do exactly what you describe. The distinction is critical:
| System Type | How It Works | Water Loss |
| --- | --- | --- |
| Open-loop evaporative | Water absorbs heat, then partially evaporates in a cooling tower | High — evaporated water is lost permanently |
| Closed-loop recirculating | Water circulates in sealed pipes, transfers heat to a secondary exchanger, never exposed to air | Minimal — same water reused continuously |
| Air/dry cooling | No water — mechanical chillers and fans only | Zero water — but uses significantly more electricity |
Closed-loop systems can reduce freshwater consumption by up to 70% compared to evaporative methods. Oracle has publicly advertised that its AI data center cooling systems are “filled once, recirculated continuously” with no ongoing water consumption.
So Why Isn’t Everyone Using Closed-Loop Systems?
Three reasons drive the industry toward open evaporative systems despite the water cost:
1. Energy economics. Evaporative cooling rejects heat far more efficiently than mechanical air cooling. In hot climates like Phoenix, the electricity savings from evaporative cooling are substantial — and electricity is currently the more expensive and politically visible resource. Data centers are essentially trading water for kilowatts, and until recently that trade looked favorable.
2. AI’s unprecedented heat density. The new generation of AI chips — Nvidia’s H100 and H200 GPUs — generates heat at densities that air cooling simply cannot handle. This is a genuine engineering constraint, not a preference. The heat output per square foot of a modern AI server rack is roughly 10 times that of a conventional server rack, forcing the industry toward more aggressive cooling strategies at the very moment of its most rapid expansion.
3. Cost and complexity of retrofitting. The majority of existing data centers were built around open evaporative systems. Converting them to closed-loop requires significant capital investment in new infrastructure — sealed cooling loops, liquid-to-liquid heat exchangers, and higher-grade pumping systems. New construction can be designed for closed-loop from the ground up, but the installed base is largely open-loop.
The Cooling Tower Blowdown Problem
Even in partially recirculating systems, there is a chemistry problem your question correctly identifies. As water circulates and partially evaporates, dissolved minerals and salts concentrate in the remaining water. Left unchecked, this causes scaling and corrosion in the pipes. To prevent this, operators periodically purge a portion of the circulating water — called blowdown — and replace it with fresh water. That blowdown water is discharged rather than recirculated.
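The standard water balance for a cooling tower makes the trade-off explicit. A minimal sketch, using the textbook relationship between evaporation, blowdown, and "cycles of concentration" (the CoC values below are assumed typical operating points, not measurements):

```python
# Standard cooling-tower water balance (drift losses ignored):
#   makeup   = evaporation + blowdown
#   blowdown = evaporation / (CoC - 1)
# where CoC ("cycles of concentration") is how far minerals are allowed to
# concentrate before water is purged.

def tower_water_balance(evaporation_m3_day: float, coc: float):
    blowdown = evaporation_m3_day / (coc - 1)
    makeup = evaporation_m3_day + blowdown
    return makeup, blowdown

for coc in (3.0, 6.0):                                    # assumed typical operating points
    makeup, blowdown = tower_water_balance(36.0, coc)     # ~36 m^3/day evaporated per MW
    print(f"CoC {coc:.0f}: makeup {makeup:.0f} m^3/day, blowdown {blowdown:.0f} m^3/day")
# Running at higher CoC cuts blowdown but concentrates scale-forming minerals,
# which is exactly the chemistry problem water treatment has to manage.
```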
Advanced water treatment systems can now recapture and re-treat blowdown water for reuse, reducing this loss significantly — but these systems add cost and operational complexity that most operators have historically avoided.
The Bottom Line
You are correct that recirculation is technically feasible, and the industry is slowly moving in that direction under regulatory and community pressure. The honest answer to why it has not happened faster is a combination of energy-cost accounting that historically ignored water cost, AI’s unprecedented thermal demands, the capital expense of retrofitting existing facilities, and — as the Ars Technica story illustrates — regulatory frameworks that have simply not required it. A data center that can draw nearly 30 million gallons of municipal water without initially paying for it, as happened in Fayette County, Georgia, has until recently faced no meaningful economic incentive to invest in closed-loop efficiency.
MY PROMPT: How could liquid nitrogen be used for data center cooling? Are there cost or mechanical barriers?
Liquid nitrogen cooling for data centers is a genuinely fascinating engineering proposition — and it is moving from theory toward early practical application. Here is a complete picture:
Image: Data center server rack
How It Would Work
Liquid nitrogen (LN2) boils at -196°C (-321°F) and absorbs an enormous amount of heat as it transitions from liquid to gas — a property called latent heat of vaporization. In a data center application, LN2 would flow through heat exchangers in direct contact with server chips or rack-level cooling loops, absorbing heat so rapidly that chips could be kept at cryogenic temperatures rather than merely cool ones.
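For a sense of scale, here is a rough sketch of how much LN2 a one-megawatt heat load would boil off, using rounded textbook property values (latent heat about 199 kJ/kg, gas specific heat about 1.04 kJ/kg·K, liquid density about 808 kg/m³); treat the output as order-of-magnitude only:

```python
# How much liquid nitrogen does 1 MW of server heat boil off?
# Credits both the boiling itself and warming the cold gas back to room temperature.

LATENT_HEAT_N2 = 199      # kJ/kg at the 77 K boiling point
CP_N2_GAS = 1.04          # kJ/(kg*K)
LN2_DENSITY = 808         # kg/m^3

def ln2_kg_per_s(heat_kw: float, warm_gas_to_k: float = 300.0) -> float:
    heat_absorbed_per_kg = LATENT_HEAT_N2 + CP_N2_GAS * (warm_gas_to_k - 77.0)
    return heat_kw / heat_absorbed_per_kg

rate = ln2_kg_per_s(1_000.0)                                        # 1 MW heat load
print(f"{rate:.1f} kg/s of LN2 boiled off")                         # ~2.3 kg/s
print(f"~{rate / LN2_DENSITY * 3_600:.0f} m^3 of liquid per hour")  # ~10 m^3/hour per MW
```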
Two distinct use cases have emerged in current research:
1. Emergency/backup cryogenic cooling — LN2 stored in insulated tanks serves as a failsafe when primary cooling systems fail, providing rapid heat absorption on demand without requiring electrical power. This is the nearest-term practical application.
2. Continuous cryogenic chip operation — Running CMOS chips at temperatures below -153°C dramatically increases their performance and power efficiency. At cryogenic temperatures, electron mobility in semiconductors improves significantly, meaning chips run faster and waste less energy as heat. This is the more transformative — and more distant — application.
The Performance Upside Is Real
Extreme overclockers have used LN2 for decades to set world benchmark records. Every major CPU and GPU performance record has been set under liquid nitrogen cooling. The chip physics are well understood: colder semiconductors perform better, consume less power for equivalent workloads, and generate less heat in a self-reinforcing cycle. For AI GPU clusters running at maximum density, this is not a trivial benefit.
The Cost and Mechanical Barriers
Despite the physics being favorable, several serious barriers currently prevent widespread adoption:
1. LN2 production is energy-intensive.
Liquid nitrogen is produced by compressing and cooling atmospheric air until nitrogen liquefies — a process that itself consumes substantial electricity. The energy cost of producing enough LN2 to continuously cool a large data center could offset — or exceed — the efficiency gains from cryogenic chip operation. This is the fundamental thermodynamic problem: you cannot cool for free.
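A rough energy ledger makes the point concrete. The production figure below is an assumed ballpark for a large air-separation plant (figures in the 0.4–0.8 kWh per kilogram range are commonly quoted), so treat the ratio as illustrative:

```python
# Energy spent making LN2 versus heat it removes (continuing the figures above).

PRODUCTION_KWH_PER_KG = 0.5        # assumed electricity to produce 1 kg of LN2
HEAT_ABSORBED_KJ_PER_KG = 431      # boiling at 77 K plus warming the gas to ~300 K

heat_removed_kwh_per_kg = HEAT_ABSORBED_KJ_PER_KG / 3_600
ratio = PRODUCTION_KWH_PER_KG / heat_removed_kwh_per_kg

print(f"Heat removed per kg of LN2: {heat_removed_kwh_per_kg:.2f} kWh")  # ~0.12 kWh
print(f"Electricity spent per kWh of heat removed: ~{ratio:.1f} kWh")    # roughly 4x
```

By contrast, a conventional chiller moves several kilowatt-hours of heat per kilowatt-hour of electricity, which is why continuous LN2 cooling only pencils out if the cryogenic performance gains themselves pay for the energy.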
2. It evaporates immediately and cannot easily be recirculated.
Unlike water, LN2 absorbs heat and boils off as nitrogen gas — it is consumed in the cooling process. Recapturing and re-liquefying the nitrogen requires the same energy-intensive compression process, plus additional equipment. Closed-loop LN2 systems exist in laboratory settings but add significant mechanical complexity and cost.
3. Infrastructure requirements are severe.
Cryogenic systems require heavily insulated pipes, specialized seals and connectors rated for extreme cold, and materials that do not become brittle at -196°C. Standard data center infrastructure — cables, connectors, server chassis, rack hardware — would need to be redesigned from the ground up for cryogenic operation. The retrofit cost for existing facilities would be prohibitive.
4. Condensation and ice formation.
Running chips at cryogenic temperatures in a normal atmosphere causes moisture in the air to condense and freeze on the hardware — the same problem that makes extreme overclocking with LN2 a careful, time-limited exercise rather than a continuous operation. Solving this requires either sealing the entire server environment in dry nitrogen gas or building hermetically controlled enclosures — again, massive infrastructure investment.
5. Safety.
LN2 in large quantities displaces oxygen rapidly and silently. A significant leak in an enclosed data center could create a suffocation hazard for personnel within minutes. This requires sophisticated gas monitoring systems, emergency ventilation, and strict safety protocols that add operational complexity and cost.
What Is Actually Being Developed
The most practical near-term hybrid approach involves liquid air cooling rather than pure LN2. Liquid air — produced off-peak using surplus or renewable electricity and stored in insulated tanks — is released through heat exchangers when cooling demand peaks, evaporating and absorbing heat before venting as harmless atmospheric gas. This allows data centers to store “cold” the way a battery stores electricity — produced cheaply when energy is available, consumed when needed.
For quantum computing, cryogenic cooling is already essential — quantum processors require temperatures near absolute zero (-273°C), making LN2 a comparatively warm intermediate step in those systems. The infrastructure developed for quantum computing data centers will likely be the proving ground for broader cryogenic cooling adoption.
Bottom Line
Liquid nitrogen cooling is not a fantasy — it is a real engineering option with genuine performance advantages that the industry is actively exploring for high-density AI applications. The barriers are not theoretical; they are practical and economic: LN2 production consumes significant energy, the infrastructure to contain and circulate cryogenic fluids is expensive and complex, and evaporation losses mean it is not truly recirculable without additional energy input. The most likely path forward is hybrid systems — cryogenic cooling used selectively for the highest-heat-density components, combined with conventional closed-loop systems for the broader facility — rather than wholesale replacement of current cooling architectures.
MY PROMPT: Does the same principle apply to the use of Freon gas?
The freon/refrigerant question is actually the most relevant and immediately practical of the three cooling approaches — because refrigerant-based cooling is already widely used in data centers, and the industry is actively managing its limitations right now.
The Key Distinction From Liquid Nitrogen
The critical difference between freon-type refrigerants and liquid nitrogen is that refrigerants operate in a closed, recirculating loop — which is precisely what you were asking about with water cooling. Here is how it works:
The refrigerant cycle has four stages that repeat continuously:
Evaporator — liquid refrigerant absorbs heat from servers and boils into gas
Compressor — the gas is compressed, raising its temperature further
Condenser — the hot gas releases its heat to the outside environment and returns to liquid
Expansion valve — pressure drops, the liquid cools, and the cycle begins again
No refrigerant is consumed or lost in normal operation. Unlike evaporative water cooling, nothing escapes to the atmosphere during the cooling process itself. This is the closed-loop recirculation you correctly identified as the right direction.
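The thermodynamic upside of this cycle is easy to sketch: its theoretical efficiency is bounded by the Carnot coefficient of performance, which depends only on the two temperatures the loop works between. The temperatures below are assumptions for illustration:

```python
# Ideal (Carnot) coefficient of performance for a vapor-compression cycle.
# Real chillers achieve only a fraction of this, but the heat moved is still
# typically several times the electricity consumed.

def carnot_cop(evaporator_c: float, condenser_c: float) -> float:
    t_cold = evaporator_c + 273.15
    t_hot = condenser_c + 273.15
    return t_cold / (t_hot - t_cold)

# Assumed: chilled loop around 15 C, heat rejected outdoors at around 45 C.
print(f"Ideal COP: {carnot_cop(15.0, 45.0):.1f}")   # ~9.6
```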
What Refrigerants Are Currently Used
The data center industry primarily uses two refrigerants:
R-410A — standard in small to mid-size data centers, up to roughly 300–400 kW of cooling capacity per unit
R-134a — used in larger chiller systems at hyperscale facilities
A newer and highly promising two-phase approach involves immersing servers directly in dielectric refrigerant fluid — the servers literally sit in a bath of boiling liquid that absorbs heat through phase change, then the vapor is recaptured, condensed, and returned. Microsoft, Google, and others have been actively piloting and expanding this approach. It uses zero water, has an extremely small physical footprint, and handles heat densities that air cooling cannot approach.
The Real Barriers — Environmental Regulation
Here is where the freon story gets complicated. Traditional refrigerants work well mechanically but carry a severe environmental liability:
Global Warming Potential (GWP). HFC refrigerants like R-410A and R-134a have GWP values thousands of times higher than CO₂. When systems leak during service, repair, or end-of-life disposal — which they inevitably do — the escaped refrigerant is an extraordinarily potent greenhouse gas. A data center with hundreds of tons of refrigerant charge represents a significant environmental liability even if it never uses a drop of water.
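The arithmetic on leaks is simple and sobering. The GWP values below are commonly cited 100-year figures (they shift slightly between IPCC assessment reports), and the leak quantity is an assumed example:

```python
# CO2-equivalent impact of refrigerant leaks.

GWP_100YR = {"R-410A": 2_088, "R-134a": 1_430}   # commonly cited 100-year values

def leak_co2e_tonnes(refrigerant: str, leaked_kg: float) -> float:
    return leaked_kg * GWP_100YR[refrigerant] / 1_000

# Assumed example: 500 kg of R-410A lost across service events in a year.
print(f"~{leak_co2e_tonnes('R-410A', 500):.0f} tonnes CO2e")   # ~1,000 tonnes
```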
Regulatory phase-out. The European Union’s F-Gas Regulation is mandating a phase-down of high-GWP refrigerants through 2030. The United States is following a similar trajectory under the AIM Act. Data centers built today around R-410A face the near-certainty of having to retrofit to compliant refrigerants within the decade.
Replacement refrigerant trade-offs. The leading low-GWP replacements — called A2L refrigerants — are mildly flammable. This introduces a new set of infrastructure requirements: gas leak detection systems, automatic isolation valves, active ventilation, and redesigned mechanical rooms with ignition controls. The engineering is manageable but adds cost and complexity that conventional HFC systems did not require.
Practical Summary
| Coolant | Recirculates? | Water Use | Key Problem |
| --- | --- | --- | --- |
| Evaporative water | Partially | High — evaporation loss | Water consumption |
| Closed-loop water | Yes | Minimal | Requires infrastructure investment |
| LN2 | No — boils off | None | Energy to produce; infrastructure cost |
| Refrigerant (HFC) | Yes — fully | None | High GWP; regulatory phase-out |
| Low-GWP refrigerant | Yes — fully | None | Mildly flammable; transition cost |
| Two-phase immersion | Yes — fully | None | High upfront cost; server redesign |
The refrigerant path — particularly two-phase immersion cooling with modern low-GWP fluids — is arguably the most elegant long-term answer to both the water problem and the heat density problem simultaneously. It uses no water, recirculates completely, handles extreme heat densities, and shrinks the physical footprint of a data center dramatically. The barriers are transition cost, the flammability requirements of next-generation refrigerants, and the fact that existing server hardware was not designed for immersion — though manufacturers are beginning to address that directly.
MY PROMPT: How will the benefits vs. liabilities of the data center problem ultimately be solved?
This is ultimately a problem that will be solved — but not quickly, not cheaply, and not by any single solution. The resolution will come through a convergence of engineering advances, regulatory pressure, market economics, and geographic discipline. Here is the honest long view:
The Engineering Trajectory Is Clear
The cooling technology path is already settled in principle — the industry is moving decisively toward liquid cooling in all its forms, with air cooling receding to legacy status. By 2024, liquid-based cooling had already captured 46% of the data center cooling market, and it is projected to dominate all new AI and hyperscale builds. The progression looks like this:
Near term (now–2027): Direct-to-chip liquid cooling becomes standard for AI GPU racks. Nvidia’s next-generation Feynman GPUs, expected around 2028, are projected to push power per chip to roughly 4.4 kW — a level that makes air cooling physically impossible (see the sketch after this list).
Mid term (2027–2032): Two-phase immersion cooling — servers submerged in dielectric fluid — scales from pilot deployments to mainstream adoption for highest-density facilities. Water consumption approaches zero in these environments.
Longer term: Direct-to-silicon cooling, where microchannels are embedded directly into chip architecture to remove heat at the transistor level, enables heat flux densities that no external cooling method can match. TSMC has demonstrated this approach can reduce cooling infrastructure requirements by nearly half.
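On the rack-density arithmetic referenced above: the per-chip figure comes from the roadmap projection, while the GPUs-per-rack count and the air-cooled ceiling are assumptions for illustration.

```python
# Why air cooling fails at projected AI rack densities (illustrative numbers).

WATTS_PER_GPU = 4_400            # projected power per next-generation accelerator
GPUS_PER_RACK = 72               # assumed dense rack configuration
AIR_COOLED_CEILING_KW = 20       # rough practical limit for an air-cooled rack

rack_kw = WATTS_PER_GPU * GPUS_PER_RACK / 1_000
print(f"Projected rack load: ~{rack_kw:.0f} kW")                        # ~317 kW
print(f"~{rack_kw / AIR_COOLED_CEILING_KW:.0f}x the air-cooled limit")  # ~16x
```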
The Energy Problem Is Harder Than the Water Problem
Water consumption, as serious as it is, may actually be the easier problem to solve — because closed-loop liquid cooling already exists and works. The harder problem is electricity.
Lawrence Berkeley National Laboratory projects data centers may consume up to 12% of all U.S. electricity by 2028. In Virginia — already the most data-center-dense region in the world — roughly a quarter of the state’s electricity already goes to data centers. This is not sustainable on any grid that also needs to power homes, hospitals, factories, and transportation.
The resolution here involves three parallel tracks:
1. Hardware efficiency gains — each successive generation of AI chips does more computation per watt. The trajectory is favorable but not fast enough to offset the sheer volume of new builds.
2. On-site power generation — data centers are increasingly building dedicated power infrastructure: natural gas turbines, fuel cells (Bloom Energy is a leading provider), and co-located solar and battery storage that insulate them from grid dependency.
3. Nuclear — this is the most significant emerging development. Microsoft, Google, Amazon, and Oracle have all signed contracts for dedicated nuclear power supply to their data centers — including investments in small modular reactors (SMRs) that are specifically sized for data center baseload. Nuclear is the only carbon-free power source that can provide the 24/7, weather-independent baseload that AI data centers require at the scale they are demanding.
The Geographic Discipline Problem
One of the most underappreciated aspects of the problem is that data centers are being built in the wrong places — and market forces alone will not fix this.
Phoenix, Las Vegas, Dallas, and Northern Virginia are among the most data-center-dense regions in the country — all of them in or near water-stressed or grid-stressed environments. The reason is not engineering logic. It is real estate economics, fiber infrastructure, and proximity to existing cloud campuses that attract more campuses.
The sustainable resolution requires building new capacity in locations with:
Abundant renewable energy (Pacific Northwest hydro, Great Plains wind, desert solar)
Cold climates that enable free-air cooling for significant portions of the year (Scandinavia, Iceland, and northern Canada are already major data center destinations for this reason)
Water availability from non-stressed sources
Existing or expandable grid infrastructure
Regulatory frameworks — zoning restrictions, water permit requirements, mandatory environmental impact assessments — will increasingly force this geographic discipline on an industry that has resisted it.
The Heat Recovery Opportunity
One development that rarely gets sufficient attention is waste heat recovery — turning data centers from pure consumers into net contributors to local energy systems. Several European cities, including Norderstedt in Germany, are already piping waste heat from data centers into municipal district heating networks, effectively using the buildings as large-scale radiators that heat homes in winter. At scale, this transforms the data center’s thermal liability into a community asset.
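The scale of the opportunity is easy to estimate, since essentially all of a data center's electrical load ends up as heat. All of the inputs in this sketch are assumed round numbers (in practice, recovered heat usually has to be upgraded with heat pumps before it is useful for district heating):

```python
# Order-of-magnitude estimate of district heating from data center waste heat.

IT_LOAD_MW = 10                        # assumed facility IT load
RECOVERABLE_FRACTION = 0.7             # assumed share of heat captured at useful temperature
HOME_HEAT_MWH_PER_YEAR = 12            # rough annual heat demand of one home

annual_heat_mwh = IT_LOAD_MW * 8_760 * RECOVERABLE_FRACTION
print(f"~{annual_heat_mwh:,.0f} MWh of recoverable heat per year")                    # ~61,000 MWh
print(f"Enough for roughly {annual_heat_mwh / HOME_HEAT_MWH_PER_YEAR:,.0f} homes")    # ~5,000 homes
```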
The Regulatory Pressure Curve
The ultimate forcing function will be regulatory, not technological. The EU is already requiring data centers above a certain size to report energy and water consumption, achieve minimum efficiency ratings, and — in several member states — demonstrate waste heat recovery plans as a condition of permitting. The United States is following more slowly, but state-level action in California, Virginia, and Arizona is accelerating as communities begin to understand the local infrastructure costs of large data center campuses.
Bottom Line
The problem will not be solved so much as it will be managed into acceptable bounds through a combination of forces no single actor controls:
Engineering closes the water gap through immersion and direct-to-chip cooling
Nuclear and on-site generation addresses the electricity gap
Geographic discipline — enforced by regulation and economics — moves new capacity to sustainable locations
Heat recovery converts thermal waste into community benefit
Chip efficiency gains slow the growth of demand, though they will not reverse it
The honest answer to your original question — why isn’t this already solved? — is that until very recently, water was essentially free, electricity was cheap, and no regulator was paying attention. The Ars Technica story you shared is a symptom of that era ending. The data center industry built itself on the assumption that its resource consumption was someone else’s problem. That assumption is now being challenged from every direction simultaneously — and the industry, to its credit, is responding faster than most heavy industries have when faced with comparable pressure.
MY PROMPT: Has anyone discussed limiting data center construction to colder climates, like Canada, which I suppose would be a little more complicated?
Yes — this conversation is happening at every level from academic research to state gubernatorial pitches, and Canada specifically is already being named as a priority destination. Here is the full picture:
The Conversation Is Already Well Underway
As of early 2026, the cold-climate data center concept has moved from theoretical to actively competitive. The Week ran a piece in January 2026 titled “Most data centers are being built in the wrong climate” — noting that of 8,808 operational data centers worldwide, only about 600 sit in climates cool enough to stay below the upper cooling threshold year-round, meaning the vast majority of the water and energy problem is being created by location choices, not engineering necessity.
Canada’s Specific Advantages
Canada is increasingly prominent in infrastructure planning conversations for several compounding reasons:
Climate — Cold ambient temperatures allow free-air cooling for large portions of the year, eliminating or drastically reducing the need for mechanical cooling. A 10-megawatt data center in Finland uses 10–20 cubic meters of water per year. The same facility in a hot climate uses tens of millions of liters. The differential is staggering.
Hydroelectric power — Quebec, British Columbia, and Manitoba have some of the largest hydroelectric systems in the world, providing clean, abundant, baseload electricity at among the lowest rates in North America — precisely what data centers need.
Political stability and rule of law — Hyperscale operators building $5–10 billion campuses require long-term regulatory predictability. Canada scores well on every institutional stability metric.
Proximity to U.S. population centers — Southern Ontario and Quebec are within acceptable latency range of the northeastern U.S. market, which is the world’s largest data consumption zone.
Water abundance — Canada holds approximately 20% of the world’s fresh surface water. Water stress is simply not a constraint in most of the country.
Canadian data center operator eStruxture has been explicitly marketing these advantages to hyperscale customers, describing their facilities as “harnessing the power of the cold” with free-cooling infrastructure already built in.
Nordic Countries Are Already Proving the Model
The most advanced real-world validation of the cold-climate thesis is already operating in Scandinavia:
Facebook/Meta built a major data center near the Arctic Circle in Luleå, Sweden, using cold Nordic air as primary cooling with near-zero water consumption
atNorth — recently acquired by Equinix and the Canada Pension Plan Investment Board for $4 billion — operates eight data centers across Iceland, Denmark, Sweden, and Finland entirely on cold-climate free cooling and renewable energy
Northern Data operates in Norway and Sweden, leveraging cold climates for high-performance computing
Several of these facilities pipe waste heat directly into municipal district heating networks, heating homes and businesses with what would otherwise be a liability
The Real Complications You Correctly Anticipated
Your instinct that it would be “a little more complicated” is well-founded. The barriers are real:
1. Latency.
Data center location is partly constrained by physics. Light travels through fiber optic cable at roughly two-thirds of its speed in vacuum, so every thousand miles of distance adds on the order of 16 milliseconds of round-trip latency before any routing overhead. For financial trading systems, real-time AI inference, video conferencing, and interactive applications, that latency is operationally significant. Truly remote Arctic locations simply cannot serve latency-sensitive applications regardless of their other advantages. Workloads divide into two categories: latency-sensitive (must be near users) and latency-tolerant (bulk AI training, archival storage, batch processing) — and only the second category can be relocated freely to cold climates.
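A quick sketch shows how geography translates into milliseconds. The speed figure is the standard approximation for light in fiber, and the distances are assumed straight-line examples (real fiber routes are longer):

```python
# Round-trip fiber latency from distance alone (no routing or switching overhead).

SPEED_IN_FIBER_KM_S = 200_000      # roughly c divided by the fiber's refractive index (~1.5)

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / SPEED_IN_FIBER_KM_S * 1_000

for label, km in [("Montreal to New York", 530), ("Iceland to New York", 4_200)]:
    print(f"{label}: ~{round_trip_ms(km):.0f} ms round trip")
# ~5 ms and ~42 ms respectively: fine for bulk AI training, marginal for
# latency-sensitive interactive workloads.
```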
2. Power grid infrastructure.
Cold, sparsely populated regions have power grids sized for local residential and light industrial demand — not for facilities drawing 500–1,000 megawatts continuously. Building the transmission infrastructure to connect a major data center campus to a remote cold-climate location can cost as much as the campus itself, and in Canada specifically, provincial grid interconnection involves multiple regulatory jurisdictions.
3. Fiber connectivity.
Submarine and terrestrial fiber networks were built along routes connecting population centers. Remote cold-climate locations often require building new long-haul fiber routes — expensive, slow to permit, and difficult to maintain in permafrost and extreme weather conditions.
4. Labor and logistics.
Data centers require not only construction workers but ongoing skilled technical staff — and cold, remote regions simply do not have deep labor markets for data center operations and maintenance. Equipment delivery and servicing in extreme weather adds operational complexity and cost.
5. Permafrost.
In the most extreme cold locations, permafrost makes conventional foundation construction problematic — and warming permafrost from climate change is actively destabilizing existing Arctic infrastructure.
The Alaska Pitch
In a striking illustration of how seriously this is being taken, Alaska Governor Mike Dunleavy personally appeared at Data Center World Power in Texas to pitch Alaska as a data center destination — emphasizing climate-driven cooling efficiency, abundant freshwater, and available land. He framed Alaska not as a frontier novelty but as a practical economic option for next-generation infrastructure. That a sitting governor is making this pitch to industry audiences signals that the geographic reorientation conversation has moved from academic to political economy.
Bottom Line
The cold-climate solution is real, proven, and increasingly competitive — but it will not replace warm-climate data centers. It will complement them by capturing the large and growing fraction of workloads — AI model training, bulk storage, batch processing — that are genuinely latency-tolerant and can be located anywhere with power and connectivity. For those workloads, building in Canada, the Nordic countries, or Alaska rather than Phoenix or Northern Virginia is not merely environmentally preferable — it is increasingly economically rational, as water costs, energy costs, and regulatory pressure in warm climates continue to rise while cold-climate infrastructure investment brings those regions’ costs down.