Converged vs Hyperconverged Infrastructure: How They Differ, Why It Matters

by Andrew Mullen on July 18, 2017

Today’s constantly changing business environment puts companies under intense pressure to deliver new applications and functionality quickly. Traditional data centers are simply not built for that kind of flexibility and agility. Historically, the compute, storage, and networking components of the data center have been physically distinct and separately managed, and as a company’s IT infrastructure grows, the task of managing and coordinating these disparate entities grows even faster.

The search for a better way to accommodate explosive growth in functionality without requiring equally explosive growth in hardware and staff resources has led to a focus on converged architectures, which offer simpler, faster, more flexible, and less costly deployment than traditional approaches.

The principal converged architectures currently in use are Converged Infrastructure and Hyperconverged Infrastructure. The two have similarities, but are not the same.

Converged Infrastructure (CI)

In a converged infrastructure implementation, server, storage, and networking resources are packaged together into a single, integrated bundle. In effect, customers purchase an entire server environment as a single unit, rather than having to acquire and integrate each component individually. A CI bundle is typically sold either as a pre-assembled unit, or as a reference architecture that customers can assemble on their own.

The distinguishing feature of CI is that the specified hardware components are selected, configured, validated, and certified by the vendor to work together as a unit. This vastly simplifies the task of configuring and tuning individual server and storage components on the data center floor to get them to work well together.

Hyperconverged Infrastructure (HCI)

HCI builds on the CI model by adding an overarching software layer. The individual compute, storage, and networking components are normally even more tightly integrated than with CI.

Both the hardware appliance as a whole, and its constituent components, are directly controlled at a granular level by the software. Users interact with a uniform, standardized, “single pane of glass” software portal that presents them with the same interface no matter which particular hardware devices are being employed.

Each HCI appliance functions as a node in a cluster, and because of the common software interface, all nodes present the same logical “look” to users. HCI systems scale out in both computational and storage capacity simply by adding nodes.

How CI and HCI Compare

Both CI and HCI are designed to simplify data center management. Because the hardware is assembled into pre-configured, pre-tested bundles, both require substantially less on-site tuning and offer faster deployment, simpler operation, and less complex management than traditional solutions. Both are also easier to configure optimally for specific workloads. HCI, because of its tighter hardware integration and, especially, its software-defined underpinnings, can be configured more easily, more rapidly, and more extensively than CI.

Because all elements of an HCI implementation are controlled by software, advanced functionality can be implemented through software from a central location. For example, a state-of-the-art data backup, restore, and disaster recovery regime can be instituted once in software, and applied throughout the organization’s IT infrastructure. And because the entire infrastructure can be managed through a common software interface, both the size and skill level of the staff required to implement and support an HCI implementation are greatly reduced.
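The "implement once, apply everywhere" idea can be sketched in a few lines. This is a minimal illustration, not any vendor's actual API: `BackupPolicy` and `ClusterManager` are hypothetical names standing in for the HCI software layer and its single management interface.

```python
# Sketch of "define once, apply everywhere": a single backup policy is
# pushed to every node through one management interface.
# All class and node names are hypothetical, for illustration only.
from dataclasses import dataclass


@dataclass(frozen=True)
class BackupPolicy:
    interval_hours: int
    retention_days: int
    replicate_offsite: bool


class ClusterManager:
    """Stands in for the HCI 'single pane of glass' control layer."""

    def __init__(self, nodes):
        self.nodes = nodes
        self.policies = {}

    def apply_policy(self, policy: BackupPolicy):
        # One call configures every node in the cluster identically.
        for node in self.nodes:
            self.policies[node] = policy


mgr = ClusterManager(["node-1", "node-2", "node-3"])
mgr.apply_policy(BackupPolicy(interval_hours=6, retention_days=30,
                              replicate_offsite=True))
print(all(p.interval_hours == 6 for p in mgr.policies.values()))  # True
```

The point is that the policy lives in one place in software; adding a node to the cluster means it simply picks up the same policy, with no per-device configuration.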

HCI scales out in storage capacity simply by adding nodes, whereas CI normally scales up by adding storage to each unit. The scale-up approach is eventually limited by the ability of the storage controller to handle more drives without becoming a bottleneck. In contrast, each node added to an HCI cluster brings its own compute and networking resources, allowing practically unlimited scalability. That, however, can lead to waste: if a workload has adequate compute power but needs more storage, adding HCI nodes to increase storage capacity also brings in additional CPU, RAM, and networking hardware that may be underutilized, if not entirely wasted.
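The two scaling limits described above can be made concrete with a toy capacity model. All figures here are assumed for illustration (controller IOPS ceiling, per-drive load, per-node specs), not vendor numbers:

```python
# Hypothetical capacity model contrasting CI scale-up with HCI scale-out.
# All numbers are illustrative assumptions, not vendor figures.
import math

CONTROLLER_IOPS_LIMIT = 200_000   # assumed ceiling of one CI storage controller
IOPS_PER_DRIVE = 5_000            # assumed per-drive load at peak


def ci_scale_up(drives_needed: int) -> int:
    """Scale-up: usable drives are capped when the controller saturates."""
    max_drives = CONTROLLER_IOPS_LIMIT // IOPS_PER_DRIVE
    return min(drives_needed, max_drives)


def hci_scale_out(storage_needed_tb: float, node_storage_tb: float = 20.0,
                  node_cpu_cores: int = 32) -> dict:
    """Scale-out: each added node brings storage *and* compute.

    Reports the compute that comes along for the ride, which may be
    underutilized if only storage was needed.
    """
    nodes = math.ceil(storage_needed_tb / node_storage_tb)
    return {
        "nodes": nodes,
        "storage_tb": nodes * node_storage_tb,
        "cpu_cores_added": nodes * node_cpu_cores,  # potential waste
    }


print(ci_scale_up(60))       # capped at 40 drives by the controller
print(hci_scale_out(90.0))   # 5 nodes: 100 TB of storage, 160 cores added
```

Under these assumed numbers, the CI controller caps out at 40 drives no matter how many are requested, while the HCI cluster meets any storage target but overshoots on CPU, the trade-off the paragraph above describes.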

Use Cases for CI and HCI

The original and ideal use case for CI is VDI (Virtual Desktop Infrastructure). CI is usually the favored approach for large VDI deployments, ranging from about 500 desktops up to 10,000 or more.

CI is also favored in situations where administrators need direct control over individual elements of the infrastructure. Note, however, that because components are specified and configured by the vendor, administrators who use CI bundles as intended, as unified, integrated packages, have limited freedom to substitute different hardware components or to tweak the implementation to fit the demands of specific workloads.

HCI, on the other hand, is designed to be tailored to the workload. Because its deployment step size (the minimum amount of hardware required to scale the solution) is small, HCI deployments can be more closely fitted to the requirements of particular workloads. This is especially useful for remote and branch office (ROBO) locations, where HCI can be implemented with as little as two or three nodes, or can be used to consolidate local data into a central, universally accessible corporate instance.

In today’s environment, corporate information is often isolated and segregated, whether in the servers of individual business units, or in ROBOs. HCI is critical to the task of consolidating such data in a centralized corporate repository. Such consolidation also enables extensive virtualization of workloads, which in turn reduces the number of physical servers required.

Although as HCI matures it’s becoming more usable for general purpose workloads, some experts caution that it is not yet suited to real-time applications, such as OLTP, that have high IOPS and low latency requirements.

The HCI Cost Advantage

A major advantage of HCI is that its high level of virtualization typically reduces the amount of hardware required, thus lowering costs. That was the experience of Baystate Health of Springfield, Mass. The organization was contemplating building a new $8.5 million data center, but decided to explore whether HCI might better meet its needs. After calculating the reductions that could be achieved not only in direct hardware costs, but also in data center space, cooling, and power requirements, Baystate decided to completely replace its legacy infrastructure with HCI. By doing so it saved $3.5 million in construction costs, cut its storage costs by half, and cut its non-storage hardware costs by 20 percent.

CI or HCI?

If you are deploying on a large scale, have expert staff, and need a high level of performance and control of individual data center components, CI might well be the best approach for you.

If, however, you are aiming at fast and easy deployment on a smaller scale with simplified management, and you need to minimize the costs of both hardware and staff, or if your focus is on consolidating ROBO site data, you should take a very close look at HCI.
