Uptime Institute survey findings on enterprise workloads, resiliency and new technologies

Uptime Institute, which is part of the 451 Group, conducted its annual Datacenter Industry Survey examining relevant industry trends, major challenges and emerging technologies. This report reviews some of the key findings.

451 Research has long argued that a hybrid datacenter strategy would be the most likely path taken by a majority of enterprises, which is reflected in the survey: a majority of respondents are in the process of identifying or moving certain applications off-premises. However, the results also suggest that business- and mission-critical applications may not be moving off-premises as fast as some in the industry have speculated, with 65% of workloads still running in enterprise-owned datacenters. Uptime highlighted this figure as having remained relatively steady since 2014.

The 451 Take

The move by enterprises toward hybrid datacenter environments, with workload deployments spread across on-premises, colocation and public cloud facilities, is ongoing. This is providing significant opportunities for suppliers in the colocation and cloud sectors, while also creating challenges (and opportunities) in the enterprise datacenter sector. We believe most large enterprises with significant legacy IT investments will continue to run critical workloads in high-redundancy premium facilities, often their own, for the foreseeable future. Operators of company-owned datacenters therefore tend to be risk-averse, and emerging technologies tend to gain acceptance and be implemented over the longer term. That does not preclude enterprise operators from investing in their own datacenters. Because they now compete for workloads with public cloud and colocation providers, the pressure on enterprise operators to run efficiently and maintain high availability has never been greater. That may mean upgrading facilities or considering emerging technologies.

Key Uptime Institute survey takeaways

The Uptime Institute survey included over 1,000 datacenter professionals and IT practitioners (the majority being senior executives, facility managers and IT managers) across different verticals and geographies. Responses were weighted toward colocation providers (representing 29%), with financial services (16%) and telecommunications (13%) the next two largest verticals. The demographics were fairly dispersed on a regional basis, with North America at 35%, Europe at 23% and Asia-Pacific at 17%.

Enterprise IT assets are becoming ever more distributed as companies embrace hybrid IT strategies, but as we highlighted earlier, maybe not as fast as some had anticipated. A majority of the IT assets (65%) remain in enterprise-owned datacenters, with 22% at colocation providers and 13% deployed in the cloud. This compares with the 2016 Uptime survey breakdown of 71%, 20% and 9%, respectively.

The majority (60%) of enterprise IT managers are consolidating servers – resulting in a flat or shrinking server footprint. This is partly explained by increased server performance (newer processors) and utilization (virtualization). For datacenter suppliers, this trend is not new and is not all bad news. The Uptime survey identified that half of the facilities teams in enterprise datacenters claim to be upgrading their infrastructure, and overall budgets for datacenter spending in 2017 remain solid, with 75% of respondents citing increased or stable budgets compared with 2016.

The shrinking enterprise server footprint is also explained by workloads migrating to cloud computing, with 43% of IT managers planning to meet increased IT capacity demand with public cloud deployments. We anticipate a continued distribution of IT assets off-premises, generating faster relative growth in cloud and colocation capacity, as reflected in our 451 Research Datacenter Monitor service forecasts. Unsurprisingly, 67% of those surveyed anticipate some workloads moving to the cloud. While this does not directly quantify the percentage of workloads moving off-premises, we believe that it is highly likely that the pace of datacenter capacity growth in both colocation and cloud could increase beyond 2020.

The decision-making process to determine which workloads are candidates to move off-premises (and the corresponding selection of the cloud or colocation provider to host those workloads) can be daunting and includes factors that may be variable or dynamic, increasing the complexity. To that end, an overwhelming 70% of those surveyed by Uptime believe their internal cloud and colocation selection process needs improvement, with 15% describing their process as 'incoherent.'

Despite increased server virtualization and utilization, the Uptime survey did not show a meaningful increase in rack power densities. The largest share of racks (34%) drew less than 4kW, and 83% drew less than 8kW. We believe that the adoption of more efficient server management tools (increasing server utilization) and open-sourced hardware designs (such as those from the Open Compute Project) will result in a gradual increase in rack densities, with the potential for the pace to accelerate a few years out. Other drivers increasing density today include applications such as AI/machine learning, high-performance computing (HPC) and big data. The use of power-intensive graphics processing units (GPUs) in specialized applications and hyperconverged infrastructure could also push rack densities higher.
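To make the density bands above concrete, the arithmetic is simply the summed power draw of the IT gear installed per rack. The figures below are illustrative assumptions, not survey data:

```python
# Illustrative sketch (hypothetical figures, not from the Uptime survey):
# rack power density is the summed draw of the installed IT equipment.
servers_per_rack = 20        # assumed 1U servers in a partially filled 42U rack
watts_per_server = 350       # assumed average draw per server under load

rack_density_kw = servers_per_rack * watts_per_server / 1000
print(f"Typical rack: {rack_density_kw:.1f} kW")   # 7.0 kW -- inside the <8kW band

# Densification scenario: multi-GPU nodes for AI/ML workloads
gpu_nodes_per_rack = 4
watts_per_gpu_node = 3000    # assumed draw of a multi-GPU server
gpu_rack_kw = gpu_nodes_per_rack * watts_per_gpu_node / 1000
print(f"GPU rack: {gpu_rack_kw:.1f} kW")           # 12.0 kW -- well above 8kW
```

Even a modest number of GPU nodes pushes a rack past the 8kW threshold that 83% of today's racks sit below, which is why these workloads are flagged as density drivers.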

The Open Compute Project has made little headway in the enterprise sector: 40% of those surveyed by Uptime had never heard of it, and only 2% have deployed OCP hardware in production. That result fits with our 451 Research estimate that OCP-based hardware represents about 2-3% of global datacenter spending (servers, storage, networking). We've published a number of reports discussing the maturing OCP ecosystem, as well as the current challenges for adoption. While the value proposition of OCP-based designs (lower cost, highly efficient, scalable) is becoming better known, overcoming the adoption hurdles (such as procurement, test and certification, and support and maintenance) is key to increasing deployments of open-sourced designs outside of the hyperscalers.

Availability and resiliency

One-quarter of companies experienced an unplanned datacenter outage in the last year. Notably, members of the Uptime Institute Network experienced half as many instances, which speaks to the value of using a third-party authority to review datacenter design/build, of sharing expertise with peers and experts, and of disciplined maintenance and operations (M&O). When an outage occurs, most organizations (90%) conduct root cause analysis, but only 60% report that they measure the cost of downtime as a business metric. Therefore, while most operators are learning how to better avoid future downtime by using root cause analysis to improve operating procedures, many are very likely underestimating the business (not just economic) impact of downtime. Unexpected outages remain a top concern: only 8% of respondents believe that their company management was less concerned about IT outages than a year ago.
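For the 40% of organizations not yet measuring downtime cost as a business metric, even a simple model captures the direct costs. The function and all inputs below are hypothetical illustrations, not a methodology from the survey:

```python
# Hypothetical sketch of downtime cost as a business metric. All names and
# figures are illustrative assumptions, not drawn from the Uptime survey.
def downtime_cost(outage_hours, revenue_per_hour, recovery_labor_cost,
                  sla_penalties=0.0):
    """Direct cost of an unplanned outage. Deliberately excludes the
    harder-to-quantify business impact (reputation, customer churn)
    that the survey suggests many operators underestimate."""
    return outage_hours * revenue_per_hour + recovery_labor_cost + sla_penalties

cost = downtime_cost(outage_hours=4, revenue_per_hour=50_000,
                     recovery_labor_cost=20_000, sla_penalties=35_000)
print(f"Estimated direct outage cost: ${cost:,.0f}")  # $255,000
```

Tracking even this direct figure per incident gives management a concrete number to weigh against redundancy and M&O investments.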

Most companies surveyed (68%) claim to deploy a multi-site, software-based IT resiliency strategy that is capable of live workload failovers. This number, we believe, does not necessarily mean synchronized, seamless failover, but may involve a number of strategies, including disaster recovery as a service and some forms of remote backup. Despite this resiliency trend, 73% of respondents anticipate continued investment to ensure high availability and physical redundancy in owned datacenters, which remain critical to their IT infrastructure (as opposed to transitioning to lower-availability, less redundant datacenters). Key decision-making regarding resiliency strategies lies undeniably with the IT team (83% of respondents).

The same percentage of organizations deploying a multi-site IT resiliency strategy (68%) are highly confident their strategy will work as expected in the case of an outage. Conversely, 31% of respondents were not entirely confident in their resiliency strategy (while 1% responded with no confidence). Critical to any resiliency strategy will be the balance of legacy applications versus cloud-native applications. Many legacy critical applications are not written to comply with the requirements of software-based resiliency environments, and it may not be feasible to re-architect these applications.

Upgrades and emerging technologies

How to upgrade a datacenter for improved efficiency, controls and automation? One idea is to scrap the BMS for a new one. Datacenters are typically designed for a 15-20-year lifecycle, with a BMS chosen up front in the design phase. Aging BMSs are struggling to keep up with new datacenter technologies and are creating a challenge for over half of the survey's respondents. Upgrading a datacenter BMS is complex and has its risks. Nonetheless, 60% of respondents are up for the challenge and are planning to upgrade their BMS, while 32% will simply maintain their legacy systems (for as long as possible). We believe this suggests an opportunity to implement a combined BMS and DCIM solution.

A second idea is to upgrade the cooling system (typically from an aging chiller design). Two-thirds of the respondents are planning to either upgrade to new cooling technologies (including economization, adiabatic cooling and heat wheels) or refresh their existing system with updated chiller equipment – with responses split about evenly between the two. The remaining third plan to simply ride it out with their existing chillers.

The combined capital and operational costs associated with cooling data halls to within a narrow and low temperature band, still the norm at many enterprise and multi-tenant sites, can represent between one-third and one-half of the total lifecycle cost of a facility, depending on engineering choices. In a recent report in our 'Disrupted Datacenter' series, we analyzed chiller-free cooling technologies that are more efficient and can be implemented along with the adoption of wider ASHRAE temperature bands. While some operators are hesitant to eschew their chiller plants, advanced cooling technologies such as direct liquid cooling (DLC) at the rack or server level (19% of responses), in-rack or in-row cooling (41%), direct air economization (31%) and indirect economization (29%) are being adopted. We were a bit surprised to see more organizations deploying direct air versus indirect air solutions.
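The economics behind wider temperature bands can be sketched with a simple PUE-based estimate. All inputs below are illustrative assumptions, not figures from the report:

```python
# Illustrative sketch of why cooling dominates facility cost: facility
# overhead energy (mostly cooling) estimated from PUE. All figures are
# hypothetical assumptions, not from the Uptime survey or 451 Research.
it_load_kw = 1000          # assumed IT load of the facility
pue_tight = 1.8            # assumed PUE with a narrow, low temperature band
pue_wide = 1.3             # assumed PUE with wider ASHRAE bands / economization
price_per_kwh = 0.10       # assumed electricity price, USD
hours_per_year = 8760

def annual_overhead_cost(pue):
    # Energy consumed beyond the IT load itself at the given PUE
    overhead_kw = it_load_kw * (pue - 1)
    return overhead_kw * hours_per_year * price_per_kwh

saving = annual_overhead_cost(pue_tight) - annual_overhead_cost(pue_wide)
print(f"Annual overhead saving at wider bands: ${saving:,.0f}")
```

At these assumed figures the saving works out to roughly $438,000 per year for a 1MW IT load, before counting the avoided capital cost of chiller plant, which illustrates why chiller-free designs attract attention.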

Other emerging technologies surveyed included lithium-ion batteries. Anecdotally, concerns surrounding lithium-ion batteries in a datacenter remain relatively high among datacenter operators and managers. Nonetheless, the Uptime survey shows 10% of organizations have already installed lithium-ion batteries and 26% are considering them. This is a fairly high adoption rate in our view, considering that lithium-ion-based UPS systems became commercially available (for early adopters) only around 2010 and, with improved chemistries, technologies and pricing, became competitive with lead-acid-based systems on a TCO basis only in the past 18-24 months. This also takes into account an adoption cycle that includes retrofitting the UPS and battery systems.
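The TCO argument turns on replacement cycles: lithium-ion strings cost more up front but typically last longer than lead-acid. The comparison below is a hypothetical sketch with assumed prices and lifetimes, not vendor or survey data:

```python
import math

# Hypothetical TCO comparison over an assumed 15-year UPS service life.
# All prices, lifetimes and opex figures are illustrative assumptions only.
def battery_tco(capex, life_years, horizon_years=15, annual_opex=0.0):
    """Total cost over the horizon: the initial install plus any
    end-of-life replacements, plus recurring operating cost."""
    installs = math.ceil(horizon_years / life_years)  # initial set + swaps
    return installs * capex + horizon_years * annual_opex

lead_acid = battery_tco(capex=100_000, life_years=5, annual_opex=8_000)   # assumed cooling/maintenance burden
lithium   = battery_tco(capex=200_000, life_years=15, annual_opex=3_000)  # assumed lighter footprint and monitoring
print(f"Lead-acid 15-yr TCO:   ${lead_acid:,.0f}")   # $420,000
print(f"Lithium-ion 15-yr TCO: ${lithium:,.0f}")     # $245,000
```

Under these assumptions the lead-acid strings are replaced twice over the horizon, which is how a technology with double the capex can still win on lifecycle cost.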

Most geographic regions had mid- to high-single-digit lithium-ion adoption rates, while Africa and the Middle East stood out at 35% adoption. The survey results suggest positive traction for this emerging technology. However, 64% of respondents answered that they were not considering it, so there is still some work to be done by datacenter suppliers to assuage operators' concerns about the (perceived) risks and promote the TCO advantages.

There is growing interest in on-site power generation, reflecting a desire to reduce dependency on the power grid. The Datacenters & Critical Infrastructure (DCI) team at 451 Research was recently rebranded to align with our expanded scope of coverage, including distributed asset management, energy management and other related areas. Given the importance of datacenter availability, improving energy management systems and the massive (and increasing) size of some hyperscale datacenters, technologies such as on-site power generation are moving to the forefront. On-site power generation (including renewables and cogeneration) has been installed by well over 10% of respondents in every geographic region (Russia and CIS were highest at 33%) and by 21% of the entire Uptime survey pool. An additional 22% of respondents are considering deployment.



Jeffrey Fidacaro

Senior Analyst, Datacenter Technologies

Matt Stansberry

Director of Content and Publications
