Gregor: How much AI infrastructure do we need in Germany in the future?
Jonathan: There is certainly significant demand in Germany for AI infrastructure. The open question is whether that demand will be met within Germany or served from regions where energy is cheaper. A major factor is Germany’s constrained energy grid: generation itself is not the primary issue, but grid transmission and pricing are critical hurdles.
German AI workloads, such as machine learning tasks from heavy industries like automotive manufacturing, are already being processed in the Nordics, where power costs are lower. This reflects a division of AI workloads: latency-sensitive and sovereign workloads must remain in Germany, whereas non-latency-sensitive tasks, such as model training, can be and often are processed outside Germany, particularly in northern Europe.
Germany, however, faces a shortfall in capacity for sovereign and domestic use cases. Currently, Frankfurt, the primary hub for data centers, has all its capacity spoken for. While new substation (“Umspannwerk”) capacity has been promised for Frankfurt and surrounding areas, such as Hanau, this may not materialize until 2030–2032. Established availability zones like Sossenheim and Hattersheim are already fully utilized, and constructing new capacity is challenging due to pre-leased projects and lengthy permitting processes.
Gregor: Where can we build further data centers then?
Jonathan: Opportunities may exist to the east of Hanau and in pockets of energy availability in Berlin, but these projects are still in early stages and will take years to complete. Germany’s permitting system is complex and slow, further extending timelines. While time and money can solve these problems, these solutions are likely years away—potentially towards the end of the decade. Additionally, while capital for data center construction is available, reinforcing the grid requires public agency involvement, approvals, and significant investments, which add complexity and delay.
Gregor: When you mention “capacity”, you mean the available power supply?
Jonathan: Indeed. The underlying theory is that AI infrastructure follows energy infrastructure. Globally, this is evident, with about three-quarters of such decisions being influenced by energy considerations and the remaining one-quarter by data sovereignty and security requirements. In high-cost energy jurisdictions, sovereignty or latency requirements necessitate local infrastructure. Conversely, when latency sensitivity and sovereignty constraints are absent, AI workloads migrate to regions with cheaper energy.
In terms of energy, Germany has renewable power available from the north. However, the transmission infrastructure, particularly Südlink, is not yet sufficient to bring this energy to key areas like Frankfurt. This situation is not unique to Germany; similar challenges exist in the United States and other countries. Grid reinforcement is a multi-year process requiring substantial effort and investment to meet the growing demand for data centers in Frankfurt and other hubs.
Demand for compute capacity is currently outstripping supply, but it’s worth noting that AI infrastructure is also riding the wave of inflated expectations seen on the Gartner Hype Cycle. The current enthusiasm surrounding generative AI is at its peak, which influences both demand and perception of infrastructure needs.
Gregor: Do you think the demand for AI data centers is more like a hallucination itself?
Jonathan: Decoupling AI demand from regular hyperscale demand can be challenging, but I foresee huge growth. Most of this demand falls into two main groups. First, there is the massive demand driven by the Internet giants: Oracle, Microsoft, Amazon, Meta, and Google. This accounts for the largest increments of demand. Then there is the demand from large enterprises, an order of magnitude smaller but still significant. For instance, the top 10 corporations in Germany are purchasing GPUs in what would traditionally be considered substantial quantities, but their requirements pale in comparison to those of the major Internet giants.
In addition, there is growing demand from AI platform startups. In Europe, companies like Northern Data stand out. Headquartered in Frankfurt, much of their compute is located in Sweden, with their largest compute node situated in the Nordics. Despite being a German company, they demonstrate how AI startups are increasingly relying on compute capacity outside their home countries. Another example is Nebius, which has also been acquiring substantial capacity, often around 5 megawatts. While this used to be considered large, the Internet giants are now demanding 20 to 40 megawatts per site in Europe, with triple-digit megawatt requirements in the United States.
Gregor: Where will these new data centers be built then?
Jonathan: The compute infrastructure that has been established, particularly in Frankfurt, is not going anywhere—it will continue to grow as the primary hub for compute in the region. As hyperscalers develop their AI strategies, they are largely expanding their AI clusters where their cloud compute clusters are already located. This allows them the flexibility to repurpose that capacity for other workloads, such as enterprise cloud, social platforms, e-commerce, and gaming, if AI demand stabilizes or decreases in the future. This strategic positioning reflects their anticipation of sustained demand.
While some suggest we might be at the peak of the AI hype cycle, the spending patterns of Google, Amazon, Meta, and Microsoft tell a different story. These companies have indicated that their investments in GPUs, servers, and data centers will continue to rise, with spending in 2026 projected to exceed current levels. This suggests that demand will remain elevated for at least another year, if not longer.
Gregor: Do you think these spending patterns are sustainable?
Jonathan: A significant question many are asking is what the return on investment will be for the triple-digit billions of dollars already poured into AI infrastructure. At this point, there’s no clear answer. It will likely take years, rather than quarters, to determine which use cases justify these investments—if any. Are the returns coming from cost savings, productivity enhancements, or entirely new revenue streams? Potential use cases, such as AI-driven drug discovery in biotech or self-driving cars, hold promise, but concrete results are still elusive.
This uncertainty places hyperscalers like Oracle, Microsoft, Amazon, Meta, and Google in a difficult position. They must continue spending regardless because they can’t afford to miss out on future innovation. This is a classic example of FOMO (fear of missing out) investing. Hyperscalers recognize that AI may not end up as a standalone driver of revenue but as an embedded component in existing applications. For instance, recommendation engines like those used by Netflix, YouTube, and Meta have become more effective due to AI, leading to higher monetization. Meta, for example, has leveraged improved recommendation algorithms to boost its per-click advertising revenue, though such impacts haven’t been fully quantified.
Gregor: Sounds like hyperscalers behave like teenagers … attending parties they don’t enjoy and buying stuff they may not need …
Jonathan: Indeed. We are in the early stage of AI’s development and hyperscalers have no choice but to invest heavily. Over the past 12–18 months, large-scale GPU orders have been placed and are now being deployed in data centers worldwide. However, these GPUs have a limited lifespan, typically around three years, before they need to be replaced. This raises an important future question: when the replenishment cycle begins around 2027 or 2028, will these companies decide that their investments have paid off and upgrade to even more advanced GPUs from NVIDIA, AMD, or others? Or will they pause their AI expansion and repurpose their existing infrastructure for other use cases, such as enterprise cloud or B2B services?
Gregor: In which regions of Germany is the FOMO money flowing then?
Jonathan: For example, Microsoft has major plans for facilities west of Cologne, as well as around Frankfurt and possibly Berlin. Other hyperscalers are also committing to large projects, often partnering with third-party developers for 10- to 15-year leases. These long-term commitments reflect their belief in future demand, but they are strategically located near existing cloud clusters. This approach ensures flexibility—if AI doesn’t deliver the expected returns, the infrastructure can be repurposed for other cloud-related applications.
Conversely, the trend of building AI-only data centers in remote areas is limited. While model training doesn’t require low latency and could theoretically be done in regions like Iceland or the far Nordics, these projects carry greater risk. If the demand for training large language models diminishes in the next few years, such facilities could become stranded assets, as they are less suited for latency-sensitive applications. For this reason, hyperscalers like Oracle, Microsoft, Amazon, and Meta are hesitant to make significant investments in these remote locations. Some smaller investments may occur, but the majority of AI infrastructure will remain near existing cloud hubs to mitigate risk and ensure broader utility.
In summary, the next few years will be critical in evaluating the effectiveness of these massive AI investments. Until then, hyperscalers are making calculated bets, balancing the potential of transformative AI applications with the flexibility to pivot their infrastructure for other uses.
Gregor: Thank you for your time, Jonathan.
