Why Hyperconverged Infrastructure is the Future of Enterprise Computing

by Jaap van Duijvenbode on May 12, 2017

With the explosion in the amount of digital data corporations generate and use, the traditional storage marketplace is undergoing a fundamental transformation. The demands on corporate data centers are increasing rapidly due to the growth of applications such as big data analytics, internet commerce, and data mobility. Yet IT budgets aren’t nearly keeping pace.

Because traditional storage is now seen as too complex, too expensive, and too rigid for the rapidly expanding requirements being placed on corporate data centers today, software-defined storage (SDS) is enjoying fast-growing acceptance as the dominant standard for enterprise storage. Now, IT leaders are taking the next logical step by applying the software-centric paradigm of the SDS model to the data center as a whole. The new approach is called Hyperconverged Infrastructure (HCI), and many IT professionals consider it to be the gateway to the fully software-defined data center (SDDC).

With HCI, CIOs and other IT leaders see the opportunity to substantially lower both the cost and the management complexity associated with traditional data center operations, while providing the increased flexibility and agility that today’s business environment demands.

Although HCI hardly existed a few years ago, today it is one of the fastest growing segments of the IT marketplace. The research firm Gartner expects the HCI market to grow from essentially zero in 2012 to $5 billion by 2019.

What is Hyperconverged Infrastructure?

Gartner defines HCI as “a platform offering shared compute and storage resources, based on software-defined storage, software-defined compute, commodity hardware and a unified management interface.” John Abbott, an analyst at 451 Research, sees it as “software that you can load onto industry-standard x86 servers, in turn turning those servers into a clustered, scalable pool of compute and storage.”

Corporate data centers have traditionally been defined by three main functional areas: servers to run applications, storage arrays to store the data, and networks as the communication backbone tying it all together. HCI combines all of these elements into unified packages, or appliances, each of which contains its own compute, storage, and networking resources. This allows a degree of workload optimization and interoperability that was not possible when each function constituted a separate platform that had to be externally integrated with the others.

HCI is implemented as a cluster of nodes consisting of appliances that incorporate industry-standard servers (normally x86) and commodity storage hardware. Although each node contains physical compute, storage, and networking resources, the real secret sauce is the software. As with SDS, which is one of the basic underpinnings of the hyperconverged model, HCI abstracts intelligence out of hardware and into a software layer that controls all the devices as a single pool of resources. The focus is on uniform management of the compute, storage, and networking functions through software. Both server virtual machine (VM) workloads and storage controller functions run alongside one another on each node.
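
The pooling idea described above can be sketched in a few lines of code. This is purely illustrative, with hypothetical names, and not drawn from any specific HCI product: each node contributes its own compute and storage, and the software layer exposes the cluster as one aggregate pool.

```python
# Minimal sketch of HCI-style resource pooling (all names are illustrative).
from dataclasses import dataclass

@dataclass
class Node:
    cpu_cores: int
    ram_gb: int
    storage_tb: float

class Cluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def add_node(self, node):
        # Scaling out is simply adding another commodity node to the pool.
        self.nodes.append(node)

    @property
    def capacity(self):
        # The management layer reports one pooled total, hiding which
        # physical node any given resource lives on.
        return {
            "cpu_cores": sum(n.cpu_cores for n in self.nodes),
            "ram_gb": sum(n.ram_gb for n in self.nodes),
            "storage_tb": sum(n.storage_tb for n in self.nodes),
        }

cluster = Cluster([Node(32, 256, 10.0), Node(32, 256, 10.0)])
cluster.add_node(Node(32, 256, 10.0))
print(cluster.capacity)  # one pooled view: 96 cores, 768 GB RAM, 30.0 TB
```

Workloads and storage requests are served from this pooled view, which is why adding a node grows both compute and storage capacity at once.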

Why Enterprise Adoption of HCI Is Gaining Momentum

The driving force behind the increasing enterprise adoption of HCI is the array of advantages it provides over traditional data center approaches. These include the following:

  • Enhanced Ease of Use: As the underlying set of data center processes becomes more and more complex with a large variety of sophisticated and resource-intensive workloads, one of the most important functions of HCI is the simplification of data center management. With HCI, IT administrators are presented with a unified and intuitive management interface, even while backend complexity continues to grow.
  • Policy-Based Management: HCI is policy based. By implementing policy directives at the software level, administrators avoid having to configure each VM and storage array individually. In a large system consisting of hundreds or even thousands of virtual machines, manually configuring each of those VMs would be prohibitively time consuming and error prone. But with HCI, all of the VMs can be uniformly and instantly configured by applying policies defined in the HCI software. The result is not only a substantial reduction in the number of IT staff members required to support the organization’s IT infrastructure, but also a reduction in downtime caused by configuration errors or incompatibilities.
  • CapEx and OpEx Savings: HCI appliances are implemented with relatively inexpensive commercial off-the-shelf (COTS) server and storage hardware (a category that now includes solid state drives or SSDs), rather than the far more costly proprietary storage arrays and controllers common in traditional storage solutions. Because an extensive ecosystem for these x86-compatible devices is already well established, costs for both initial acquisition and ongoing support of HCI appliances can be substantially less than with traditional solutions.
  • Increased Reliability: Because it is expressly designed to run on COTS hardware, HCI software anticipates a relatively high rate of device failure and is built to accommodate it transparently, without impacting running workloads. This is achieved by automatically distributing replicas of the data across nodes, so that if one node fails, another is already prepared to take over its functions.
  • Enhanced Data Safety and Security: Because of the HCI model’s unified point of control, best-of-breed data protection practices, including access control, encryption, remote data replication, and sophisticated backup and disaster recovery strategies, can be applied system-wide through software.
  • Increased Agility: With HCI, upgrades and reconfigurations of data center resources to accommodate changes in the business or technological environment can be made quickly through software.
  • Effectively Unlimited Scalability: HCI systems are inherently highly scalable, and can increase or decrease storage capacity simply by adding or subtracting nodes. Because characteristics of physical storage devices are not directly exposed to users, but are managed exclusively by software, HCI allows individual storage units to be transparently added or removed without workloads even being aware that it’s happening.
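
The policy-based approach described in the list above can be made concrete with a short sketch. All names and fields here are hypothetical, meant only to show how one software-level policy definition replaces per-VM manual configuration:

```python
# Illustrative sketch of policy-based management: define a policy once,
# apply it uniformly to every VM. Names and fields are hypothetical.
from dataclasses import dataclass, field

@dataclass
class StoragePolicy:
    replicas: int          # copies of each data block kept on distinct nodes
    encryption: bool
    backup_schedule: str

@dataclass
class VirtualMachine:
    name: str
    settings: dict = field(default_factory=dict)

def apply_policy(vms, policy):
    # One software-level directive replaces per-VM manual configuration,
    # eliminating a whole class of drift- and typo-induced outages.
    for vm in vms:
        vm.settings.update(
            replicas=policy.replicas,
            encryption=policy.encryption,
            backup_schedule=policy.backup_schedule,
        )

gold = StoragePolicy(replicas=3, encryption=True, backup_schedule="hourly")
fleet = [VirtualMachine(f"vm-{i:04d}") for i in range(1000)]
apply_policy(fleet, gold)
print(fleet[0].settings)  # every VM now carries identical policy settings
```

Note how the `replicas` field also captures the reliability point: the policy layer, not the administrator, decides how many copies of each data block exist and where they live.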

Because of features such as these, many CIOs and other IT leaders believe that HCI is becoming an indispensable factor in meeting the rapidly expanding IT infrastructure needs of modern enterprises.

Remote Sites Present a Special Challenge

One of the highest profile use cases for HCI is in the management of the IT infrastructure of companies with widely dispersed remote or branch office (ROBO) locations. Such sites can be extremely important to an organization. About half of all employees work in ROBOs, and in many cases those locations are the main points of contact with customers. So, providing a first class IT infrastructure for remote locations is a critical objective for many enterprises.

But it’s not easy.

Historically, ROBOs have implemented and managed their own IT operations so as to meet the particular needs of that location. This meant, for example, that each branch office might have its own heterogeneous collection of servers and storage devices. Each site would also be responsible for its own data backup and disaster recovery solutions, as well as for ongoing support of its IT infrastructure.

Data generated in ROBOs may be extremely important for the company as a whole, especially when it involves customer interactions. But making that information available to the main office and other branch locations on a timely basis, while maintaining its integrity by keeping the various copies of the data in sync with one another, can be a difficult task. Often remote site WAN or internet connections don’t have sufficient bandwidth to allow updated data to be quickly disseminated across the company’s network. And ensuring that simultaneous changes in different locations don’t result in dangerous data inconsistencies has often proven to be problematic.

In addition, ROBOs often don’t have the resources to implement first class data security, backup, and disaster recovery solutions, nor the expertise to ensure that they are correctly and consistently utilized. The result is that irreplaceable information generated at the site is vulnerable to being lost or compromised.

What administrators need in order to meet these ROBO challenges is a means of centralizing IT management and data storage provisioning for all remote sites. And that’s what HCI offers.

HCI Fundamentally Changes The Way ROBO IT Is Managed

The software-centric nature of HCI offers the promise of unifying and simplifying the management chaos inherent in having to support a number of different locations, each doing its own IT thing. Handling disparate resources as a single entity is a fundamental aspect of the HCI paradigm. And it doesn’t matter where in the world those resources are located. Conceptually, it makes no difference in an HCI implementation if the servers, storage arrays, and network connections being managed by the software are all together in one data center, or are scattered across different continents.

The key to managing the IT infrastructure of a company with a number of ROBO sites as a unified whole is the consolidation of all the organization’s data into a central repository. Because the HCI software presents a unified view of the central file server to remote locations, ROBO users interact with that centralized storage as if it were local.

Rather than each branch office storing its own data on site, users work from a local cache that retains only the information they most frequently work with. When changes are made (or when data is newly entered) at the ROBO, only the differences from what is contained in the corporate repository are transmitted back to the main office, thus minimizing bandwidth requirements while ensuring that the central copy of the data is always up to date. A top-of-the-line HCI package will also include centralized file locking to ensure that simultaneous incompatible changes to the same dataset are prevented.
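
The delta-transfer idea can be illustrated with a toy example. Real products use far more sophisticated differencing than this block-by-block comparison; the sketch (with hypothetical names) only shows the bandwidth-saving principle of shipping changed blocks rather than whole files:

```python
# Toy sketch of delta transfer: the branch compares its cached copy against
# the new version, block by block, and ships only the blocks that changed.
BLOCK_SIZE = 4  # deliberately tiny so the example stays readable

def blocks(data: bytes):
    return [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]

def compute_delta(old: bytes, new: bytes):
    """Return (index, block) pairs for blocks that differ from the cache."""
    old_blocks = blocks(old)
    delta = []
    for i, blk in enumerate(blocks(new)):
        if i >= len(old_blocks) or old_blocks[i] != blk:
            delta.append((i, blk))
    return delta

cached = b"quarterly sales: 100 units"   # branch office's local cache
edited = b"quarterly sales: 250 units"   # after a local edit
delta = compute_delta(cached, edited)
# Only the changed block crosses the WAN, not the whole file.
print(len(delta), "of", len(blocks(edited)), "blocks transmitted")  # 1 of 7
```

In practice the same comparison runs in the other direction as well, so the branch cache picks up only the blocks that changed centrally.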

Since HCI software is designed for use in a standard x86-compatible environment, ROBOs can run HCI instances on their local server clusters without having to install any specialized devices. This, in effect, provides a standard hardware configuration that can be centrally supported through software, eliminating the need to have expert IT support staff on site in remote locations.

Because all ROBO-generated or modified data is immediately reflected back to the central repository, there is no need for separate local storage. There's also no need for ROBO sites to concern themselves with data backup and disaster recovery, since best-of-breed implementations of these functions can be managed through software from a central site, encompassing an organization’s entire far-flung IT infrastructure.

HCI is Indeed the Future of Enterprise Computing

According to a 2016 report by 451 Research, which focused on companies planning major server and storage replacement projects over the year, 86 percent planned on increasing their purchases of HCI products. That indicates that HCI is gaining momentum in its penetration of corporate data centers. As the challenges IT leaders face become ever more demanding, the ascendancy of HCI in enterprise computing can only increase.
