The disrupted datacenter: facilities will be optimized for applications

Most datacenters built to date offer a largely uniform space for IT systems. Traditional planning and design seeks the lowest common denominator in requirements to create a standard facility, with some options for special cases such as additional close-coupled cooling, accommodating non-standard cabinets, or extra uninterruptible power supply (UPS) capacity for added protection.

This practice may be about to change, because uniform facilities carry inherent inefficiencies arising from mismatches between the facility and the IT it houses. As IT infrastructures and workloads scale up and become ever more critical to business, systems technology is evolving fast – and so must facilities.

Some of these changes represent a major challenge. For example, specialized machine-learning clusters and real-time analytics engines may require high-density cabinets that must be positioned strategically in the facility. Meanwhile, other technologies, such as silicon photonics interconnects that allow system components to be spread out without loss of performance, will support a more flexible and optimized layout and facility design. Ultimately, the pressure to hold down costs will drive architects toward more finely tuned, non-uniform datacenter designs. This application-optimized datacenter represents a major departure from current planning, design and build practices.

The application-optimized datacenter is one of more than a dozen technologies that we are evaluating as part of our upcoming "Disruptive Technologies in the Datacenter" report, a follow-up to our widely read and referenced 2013 report.

The 451 Take

IT infrastructures of the coming years will differ vastly from existing ones, not only in their architecture but also in the variety of facility designs that they will need. In response to these changes, operators will need the ability to mix different facility configurations (which typically exist in separate sites today) in a single datacenter – this will be necessary in order to drive costs further down. Doing this will necessitate novel planning, design and build processes that are incompatible with existing, more rigid, practices. The disruption will be organizational, which makes it particularly challenging – and difficult to copy from those who have adopted it successfully.

Technology and context

Better application specificity, as opposed to generic facility designs, will likely play a role in the future of the datacenter industry. Generic (uniform or homogeneous) facilities, which prioritize flexibility of white-space configuration over cost and operational efficiency (whether energy or space), seek to satisfy all requirements in a shared space, which leads to economically sub-optimal engineering and operational choices – be it redundancy levels, power density or cooling modes. Today, whether a system provides storage for data backup, long-term archives, hosted desktop infrastructure, high-performance engineering and analytics clusters or core enterprise applications critical to the business, it receives the same facility service (excepting a relatively small number of high-performance computing centers – see below).

Advancements in datacenter and IT technologies, many of which we assessed in other reports on disruptive datacenter technologies, make application-optimized datacenter infrastructure – which minimizes waste from design and operational mismatches – a viable alternative to uniform design. An application-specific infrastructure is distinct from a generic one in that it is tailored to satisfy the specific, clearly defined capacity, density, reliability, climatic and security requirements of major IT and business use cases. We view application-optimized datacenters as an evolution of the multiple redundancy levels (multi-tier) concept, in which data halls (and micro-modular cabinets) vary in many significant ways, driven by the use case.

Datacenters dedicated to a given set of applications are not completely new. High-performance computing centers are typically built with lower redundancy to save cost, but at much higher power density than average, and often use specialty cooling to support those densities. Core enterprise sites, on the other hand, value reliability over cost and efficiency, and tend to run mission-critical business applications on which the company's existence relies. In the past, as a general rule, these systems had no need to talk to each other and, as a result, were usually sited separately. But enterprises and service providers are increasingly running a rich set of workloads from a single datacenter or IT infrastructure.

Technology is rapidly changing IT architectures too. New enterprise IT systems are becoming denser and increasingly making use of a variety of accelerators to boost applications. Primary storage tiers will soon be exclusively solid-state. Next-generation applications are built on a fabric of microservices (software modules) that can be deployed and scaled flexibly; these are agnostic to the underlying infrastructure and availability is not dependent on individual servers – isolated system failures cost next to nothing.

To accommodate these very different requirements, datacenter owners and designers will find it more economical to develop a flexible framework (e.g., a prepared site with all utility works, an optional shell and a shared modular power plant) in which various types of infrastructure can be installed easily and rapidly. As an example, for some applications we envisage all-silicon 'compute' halls with much more relaxed climatic settings (e.g., temperatures allowed to drift in a wide band) and no mechanical refrigeration (chillers or direct expansion) deployed at all. Elsewhere in the same facility, extreme-density clusters running engineering simulations will favor full immersion cooling in a dedicated section. 'Storage' halls that hold secondary copies of production data, as well as long-term archives on hard drives and tape, will maintain a tightly controlled environment to minimize media failure rates.
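As a rough illustration of how such a flexible framework might catalogue its hall types, the short Python sketch below models a handful of hall profiles. All names, temperature bands, densities and redundancy levels are illustrative assumptions, not a reference design.

```python
# Hypothetical sketch: cataloguing application-specific hall profiles within a
# 'flexible framework'. All names and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class HallProfile:
    name: str
    cooling: str            # cooling approach for this section
    temp_band_c: tuple      # allowable temperature range (low, high) in Celsius
    kw_per_rack: float      # design power density
    redundancy: str         # e.g., N, N+1, 2N

PROFILES = [
    HallProfile("all-silicon compute hall", "chiller-free, free-air", (18, 32), 10, "N+1"),
    HallProfile("extreme-density simulation section", "full immersion", (20, 40), 50, "N"),
    HallProfile("storage/archive hall", "tightly controlled mechanical", (20, 24), 4, "N+1"),
    HallProfile("mission-critical enterprise hall", "concurrently maintainable", (20, 25), 6, "2N"),
]

for p in PROFILES:
    print(f"{p.name}: {p.cooling}; {p.temp_band_c[0]}-{p.temp_band_c[1]}C; "
          f"{p.kw_per_rack} kW/rack; redundancy {p.redundancy}")
```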

At the same time, the most mission-critical systems that continue to depend on the facility infrastructure for availability will reside in a highly redundant section of the facility (such as Tier IV per The Uptime Institute's definition). Some application accelerators (for example, analytics engines running on FPGAs, GPUs or specialty silicon) may be strategically positioned in the infrastructure to optimize connectivity and drive utilization – such cabinets can be put into micro-modular datacenter (MMDC) units (packaged IT racks encapsulated in their own facility), so as not to interfere with the airflow of lower-density racks around them.

Such optimizations will be made possible by a set of technologies converging on the datacenter. Inexpensive silicon photonic interconnects will allow IT architects to develop novel system designs in which compute nodes connect to shelves of hard disk drives and tape libraries in another data hall without considerable latency or bandwidth penalties. A server will no longer be confined to a chassis, but will be 'composed' of physically decoupled pools of resources, such as compute, memory, storage, network and accelerators – fully realizing the promise of software-defined infrastructure. On the facility side, some operators will prefer more granular capacity increments, which the ongoing industrialization of facilities infrastructure using prefabricated and modular subsystems (e.g., modular IT rooms, electrical skids, packaged cooling plants) makes possible. Distributed power topologies, such as those of the Open Compute Project and Open19, which can integrate battery packs and UPS functionality, also bring a build-as-you-go strategy closer. We also envisage custom-designed MMDC cabinets being deployed in vertical-specific roles in support of the Internet of Things boom in manufacturing, healthcare, logistics, retail, mining, oil and gas, and other industries.
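For a purely conceptual view of what 'composing' a server from decoupled pools could look like, the Python sketch below carves a logical server out of shared resource pools. The pool names, sizes and the compose() helper are hypothetical and do not represent any vendor's API.

```python
# Conceptual illustration (hypothetical): composing a logical server from
# physically decoupled resource pools connected over a fast fabric.

POOLS = {
    "cpu_cores": 512,
    "memory_gb": 8_192,
    "nvme_tb": 200,
    "gpu_accelerators": 32,
}

def compose(request: dict) -> dict:
    """Carve a logical server out of the shared pools if capacity allows."""
    if any(POOLS[k] < v for k, v in request.items()):
        raise RuntimeError("insufficient pooled capacity")
    for k, v in request.items():
        POOLS[k] -= v            # resources are reserved, not physically moved
    return dict(request)

analytics_node = compose({"cpu_cores": 32, "memory_gb": 512, "gpu_accelerators": 4})
print("Composed logical server:", analytics_node)
print("Remaining pooled capacity:", POOLS)
```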

In summary, we expect some of the future datacenters to be much more tiered (internally varied) than today's uniform facilities. Differences will not be limited to resiliency, the classical understanding of tiers in a datacenter environment, although that alone will be a significant source of cost optimization. Power density, cooling technology and climatic settings (temperature and humidity), and choice of fire suppression will all play a part.

Drivers for adoption

The 'big picture' issue pertaining to the future of datacenters is cost pressure. This remains the case even though build and operational costs have dropped considerably over the past 10 years as the datacenter construction industry has grown in scale and design and build practices have matured. A historical cost analysis performed by Schneider Electric suggests a 1MW highly available facility can be built for about half of what it cost a decade ago, while the same state-of-the-art datacenter can achieve an annualized power usage effectiveness (PUE) of 1.2 (20% energy overhead on top of the IT load) or better in most North American and European locations, compared with an average of about 2 in 2007. In other words, the datacenter industry has become far more efficient in both building and operations.
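To put those PUE figures in concrete terms, the back-of-the-envelope Python calculation below shows how much non-IT overhead each ratio implies; the 1MW IT load is assumed purely for illustration.

```python
# Back-of-the-envelope PUE comparison, assuming a 1MW (1,000 kW) IT load.
# PUE = total facility power / IT power, so overhead = IT load * (PUE - 1).

IT_LOAD_KW = 1_000  # assumed IT load for illustration

def facility_overhead_kw(it_load_kw: float, pue: float) -> float:
    """Power consumed by cooling, power distribution losses and other non-IT loads."""
    return it_load_kw * (pue - 1.0)

for pue in (2.0, 1.2):  # ~2007 industry average vs. a modern state-of-the-art build
    overhead = facility_overhead_kw(IT_LOAD_KW, pue)
    print(f"PUE {pue}: {overhead:.0f} kW of non-IT overhead "
          f"({overhead / IT_LOAD_KW:.0%} of the IT load)")
```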

But where will operators go for further savings, vital for their long-term competitiveness and profitability? If the industry is to repeat the same feat over the next 10 years, operators will need to become smarter about matching facility infrastructure to diverse IT and business requirements. The traditional approach leads to substantial waste and imbalance: many systems that run non-critical applications are overprotected with redundancy, while the most business-critical applications may have to settle for lower-than-ideal infrastructure reliability and climatic controls because of the one-size-fits-all approach. Another major source of capital and operational excess in traditional uniform facility designs is cooling: most servers are kept in a tightly controlled environment and are arguably overcooled – needlessly so, industry data suggests. For parts of the infrastructure, operators should be able to adopt relaxed climatic settings and shed all mechanical cooling in the majority of locations. A further typical mismatch is between design power density and actual power density, which results in wasted space (for high-density systems) or stranded capacity (the data hall is full, but power and cooling capacity is underutilized).
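The stranded-capacity effect is easy to quantify with a simple sketch; the density figures and rack count below are hypothetical, chosen only to illustrate the mismatch described above.

```python
# Illustrative (hypothetical numbers): a data hall designed for 8 kW per rack
# but populated with racks that draw only 4 kW on average fills its floor
# space while leaving half of the power and cooling capacity stranded.

DESIGN_KW_PER_RACK = 8.0   # assumed design power density
ACTUAL_KW_PER_RACK = 4.0   # assumed actual average draw
RACK_POSITIONS = 250       # assumed hall capacity by floor space

design_capacity_kw = DESIGN_KW_PER_RACK * RACK_POSITIONS
actual_load_kw = ACTUAL_KW_PER_RACK * RACK_POSITIONS
stranded_kw = design_capacity_kw - actual_load_kw

print(f"Provisioned power/cooling capacity: {design_capacity_kw:.0f} kW")
print(f"Actual IT load with the floor full: {actual_load_kw:.0f} kW")
print(f"Stranded capacity: {stranded_kw:.0f} kW "
      f"({stranded_kw / design_capacity_kw:.0%} of what was built)")
```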

All of this suggests that, in pursuit of lower capital and operational costs, datacenter owners will develop facilities that are more application-specific and increasingly modular, to match infrastructure design to use cases more closely. Many individual workloads will lack the scale to justify dedicated design effort. But 451 Research believes that operators will be able to cluster requirements into larger groups for which they can install a more optimized infrastructure within the framework of a site.

Impediments to adoption

Application-optimized datacenter design is not a technology per se, but the result of putting many techniques and technologies to use for more economical results. And here lies the challenge: planning processes and design methodologies will have to change to flexibly accommodate potentially very different requirements across different parts of the facility. It will also necessitate much closer collaboration between facility and IT teams to plan and develop a 'flexible framework' (internal technical standards for available design and build options, protocols for execution) in which all teams can operate. Such changes will struggle against organizational inertia and will likely cause friction. In an industry that is conditioned to minimize risk, most engineers and managers are not motivated to abandon their known ways.

Another hindrance to more granular, application-optimized facilities is the prevalence of procurement policies that prioritize price discounts and favor buying in bulk. Many larger datacenter owners prefer to build in increments of 2-3MW of critical load to reap cost benefits on the purchase and installation of equipment. Even though this could be partially addressed by installing big-ticket items, such as generators and switchgear, up-front to the full extent of the final (design) load, a more gradual and heterogeneous buildup of capacity will be seen as an added complexity and a cost premium by many datacenter owners.

This development will also change the way datacenters look. Further optimization to accommodate more specific requirements and drive costs down will make datacenters more industrial, more functional and less concerned with aesthetics or human comfort. A facility divided into smaller, application-specific sections – some with little walkable space and high-temperature operations – will be, to some managers and colocation customers, a less attractive concept than one with large, contiguous data halls and easily accessible equipment. This factor should not be underestimated, because datacenters have often been an object of corporate pride.

The 451 Group's research into the disruptive impact of application-optimized datacenters is ongoing, and our assessment will be published in mid-2017. We welcome informed input on this and other disruptive datacenter technologies we are evaluating, including silicon photonics, DMaaS, open source hardware designs, direct liquid cooling, chiller-free cooling, software-defined power, distributed resiliency, datacenter as a machine and microgrids. Please email disrupted.datacenter@451research.com to participate.



Daniel Bizo

Senior Analyst, Datacenter Technologies

Andy Lawrence

Research Vice President - Datacenter Technologies (DCT) & Eco-Efficient IT
