Demand Dictates Supply: Go Big Or Go Home

By Justin Heyes – May 1st 2024

In an increasingly digital world, it is no secret that demand for datacenters is at an all-time high, and it continues to climb as society adopts new technology to enhance both personal and professional life. Indeed, current projections for generative AI predict a significant disruption to the industry over the coming years.

And this is the case globally. In a recent update to its state of the industry report, Data Center Knowledge concluded that rack density had doubled in the past eight years, yet 60% of respondents were still actively working to increase rack density further. With the advent of AI applications, this demand is expected to keep growing at an exponential rate. Meeting it will take a concerted effort and is arguably the industry's highest priority as it scrambles to bring new capacity online.

Build Bigger, Build More

One go-to solution for the industry has been to increase the number of facilities, expanding the scope and scale of hyperscale datacenters in a bid to get ahead of the curve. The number of active hyperscale facilities worldwide reached 992 at the end of 2023 (Synergy Research Group) and has since surpassed 1,000. The same report forecasts that an additional 120-130 hyperscale datacenters will come online each year over the next decade.

There is evidence to back this, as we observe some of the largest property investors in the world aggressively expanding their datacenter portfolios. In December, Digital Realty launched a US$7 billion joint venture with Blackstone to develop more hyperscale capacity, with Blackstone having previously pledged to spend up to US$8 billion on its datacenter subsidiary, QTS, to prepare for the “once in a generation” AI boom.

The market leaders Amazon, Microsoft, and Google are also notably ramping up investments. Despite already accounting for 60% of all hyperscale datacenter capacity between them, Amazon is reported to be planning to spend US$150 billion over the next 15 years, Google is breaking ground on a new development in Norway and has revealed plans for a US$1 billion datacenter campus in the UK, and Microsoft intends to double the new datacenter capacity it brings online this year.

This is also reflected across the APAC region, where demand for capacity continues to ramp up both within the region and internationally as lower-latency services are delivered to relatively new markets. Anyone paying attention will have noticed GDS building across the region, with datacenters in Malaysia and Hong Kong and announcements for Japan; Bridge Data Centres expanding operations in Malaysia as well as Thailand; and even AIMS moving into the Vietnamese market, to name just a few examples.

An AI Led Future

However, with all this construction in the pipeline, some argue that it may not come quickly enough. Rapid evolution has always driven innovation in our industry; the difference now is the sheer speed at which it is happening. A proposed alternative is to start deploying AI racks, as generative AI requires more densely clustered and performance-intensive infrastructure than the frameworks we are familiar with today. The front runner for these new AI racks, the NVIDIA DGX H100, is certainly capable of increasing rack density, improving the processing and storage of data to an extent only previously dreamed of; however, the system itself will consume up to 10.2 kilowatts per rack. If retrofitted into existing datacenters, such racks could easily outstrip the power and cooling densities those facilities were designed to support.
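To make the retrofit problem concrete, here is a rough back-of-the-envelope sketch comparing how many racks a fixed power budget can feed at a legacy design density versus DGX H100-class racks. The 1 MW hall budget and the 6 kW legacy density are illustrative assumptions, not figures from the article; only the 10.2 kW maximum draw comes from NVIDIA's published specification.

```python
# Back-of-the-envelope: rack counts a fixed critical-power budget can feed.
# Assumptions (illustrative, not from the article): a 1 MW hall and a
# 6 kW/rack legacy design density. The 10.2 kW figure is NVIDIA's stated
# maximum power draw for a DGX H100 system.

HALL_POWER_KW = 1_000        # assumed critical power budget for one hall
LEGACY_RACK_KW = 6.0         # assumed legacy design density per rack
DGX_H100_RACK_KW = 10.2      # maximum system power of one DGX H100

legacy_racks = HALL_POWER_KW // LEGACY_RACK_KW    # whole racks at legacy density
dgx_racks = HALL_POWER_KW // DGX_H100_RACK_KW     # whole racks at DGX density

print(f"Legacy racks supported:   {legacy_racks:.0f}")   # ~166
print(f"DGX H100 racks supported: {dgx_racks:.0f}")      # ~98
```

Under these assumptions, the same power envelope feeds roughly 40% fewer racks, before any additional cooling overhead is counted.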

An Era Of Megastructures

It seems inevitable, then, that despite the industry's intentions to become more sustainable and lessen its impact on the climate, we must once again look to increase capacity first and innovate for green later. All forecasts point to an era of larger hyperscale projects utilising AI racks, despite their propensity to guzzle resources, until the industry finds a stasis point.

One realisation of this future has already been provided by Scott Data, who have created a datacenter spanning 110,000 square feet, equipped to handle more than 3,000 NVIDIA H100 GPUs. While this would once have been considered overkill, the new facility is perhaps the most instructive case study for how future facilities will have to be constructed.
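For a sense of the power such a facility implies, a simple estimate follows, assuming the commonly cited 700 W maximum draw of an H100 SXM module and an illustrative PUE of 1.4; neither figure comes from the article.

```python
# Rough estimate of the facility power implied by 3,000 H100 GPUs.
# Assumptions (illustrative, not from the article): 700 W per H100 SXM
# at full load, and a PUE of 1.4 to cover cooling and distribution.

GPU_COUNT = 3_000
H100_WATTS = 700          # commonly cited maximum TDP for an H100 SXM
PUE = 1.4                 # assumed facility overhead multiplier

it_load_mw = GPU_COUNT * H100_WATTS / 1e6   # GPUs alone: 2.1 MW
facility_mw = it_load_mw * PUE              # ~2.9 MW including overhead

print(f"GPU load: {it_load_mw:.1f} MW, facility estimate: {facility_mw:.1f} MW")
```

That is roughly 2.1 MW for the GPUs alone, and around 3 MW once cooling and distribution overhead are included, which underlines why access to ample power now dominates site selection.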

Additionally, this invites further speculation about future site-selection requirements. Structures such as the one envisioned by Scott Data force a recalculation not only of land requirements but of access to ample power and water. That arguably frustrates the conventional wisdom of building close to population centres, pointing instead to more remote locations where facilities can operate without disruption, since training and tuning AI workloads are not typically latency-sensitive.
