Watch out for HCI’s Hidden Costs

KEY TAKEAWAYS

  • Enterprise customers who have experimented with conventional HCI have uncovered some unexpected gotchas that add to infrastructure costs.
  • Beware of the “HCI tax”: buying both storage AND compute when you only need one resource or the other.
  • Conventional HCI stores two or three copies of each data block, which increases overall storage costs.

Hyperconverged infrastructure (HCI) sounds like a great idea, but inflexible configuration rules and escalating costs may make it an expensive choice for enterprises.

It seems as if every few years there’s a new infrastructure approach that promises to revolutionize the enterprise data center. You probably know from experience that many of these trends don’t live up to the initial hype and some even end up taking your operations in the wrong direction. So it is with conventional hyperconverged infrastructure (HCI) today.

Conventional HCI deploys storage on top of a hypervisor. While this may make sense for small deployments, Tintri strongly believes that most enterprises will be better served with separate servers and virtualization-centric storage when deploying infrastructure at scale.

When Tintri was founded, we evaluated various architectures and decided that a virtualization-centric, federated pool of storage driven by analytics was the best approach in terms of balancing cost, performance, and complexity. This is the first in a series of three blogs examining areas where conventional HCI may come up short and why Tintri’s CONNECT architecture with its web services building blocks is better suited to enterprise cloud.

Despite claims to the contrary, conventional HCI can increase deployment costs in a number of ways:

  • Requirement for balanced nodes
  • Increased software licensing costs
  • Increased storage costs

Balanced Nodes

Conventional HCI implementations generally require a similar CPU, memory, and storage configuration on all the nodes in a cluster. It may be possible in some implementations to have storage-heavy or compute-heavy nodes, but that is not considered a best practice because the imbalance can cause storage hot spots and bottlenecks. Nutanix and VMware vSAN, for example, both still recommend balanced nodes.

As a result, anyone who follows best practices ends up purchasing storage when they need compute, or compute when they need storage. You can end up spending more and leaving valuable resources sitting idle. This effect is so well known that it is often referred to as the “HCI tax.”
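To make the effect concrete, here is a minimal back-of-the-envelope sketch in Python comparing scaling compute with balanced HCI nodes versus adding compute-only servers in a disaggregated design. The node specs and prices are hypothetical placeholders, not vendor figures.

```python
# Hypothetical illustration of the "HCI tax". Node specs and prices are
# invented for this example, not taken from any vendor price list.

BALANCED_NODE = {"vcpus": 64, "tb_storage": 20, "price": 40_000}  # assumed HCI node
COMPUTE_NODE = {"vcpus": 64, "tb_storage": 0, "price": 25_000}    # assumed compute-only server

def ceil_div(a, b):
    return -(-a // b)

def hci_expansion(extra_vcpus):
    """Add vCPUs by buying balanced HCI nodes (storage comes along for the ride)."""
    nodes = ceil_div(extra_vcpus, BALANCED_NODE["vcpus"])
    return nodes * BALANCED_NODE["price"], nodes * BALANCED_NODE["tb_storage"]

def disaggregated_expansion(extra_vcpus):
    """Add vCPUs with compute-only servers; existing storage already suffices."""
    nodes = ceil_div(extra_vcpus, COMPUTE_NODE["vcpus"])
    return nodes * COMPUTE_NODE["price"]

hci_price, idle_tb = hci_expansion(256)          # workload needs 256 more vCPUs only
separate_price = disaggregated_expansion(256)

print(f"Balanced HCI nodes: ${hci_price:,} plus {idle_tb} TB of storage that sits idle")
print(f"Compute-only servers: ${separate_price:,}")
print(f"'HCI tax' in this example: ${hci_price - separate_price:,}")
```

With these made-up numbers, the balanced-node rule adds $60,000 and 80 TB of unneeded storage to a purely compute-driven expansion; the actual gap depends entirely on your node configurations and pricing.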

Licensing Costs

Conventional HCI can also add to your licensing costs. In most implementations, storage on each node is controlled by a dedicated virtual machine. So, the more storage you have, the higher your virtualization licensing costs.

Other software licensing costs can go up as well. For example, Microsoft SQL Server and Oracle Database are licensed based on the number of CPU cores on a node. It doesn’t matter whether those cores are actually being used for storage; your license costs the same. And because you’re dedicating significant resources on each node to storage, you end up needing more nodes to get enough vCPUs for all your database instances.
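A hedged sketch of that licensing math follows. The core counts, the cores assumed to be reserved for a storage controller VM, and the per-core license price are all placeholders for illustration; real reservations and prices vary by product and contract.

```python
# Hypothetical per-core database licensing math for an HCI cluster.
# Core counts, controller-VM reservation, and license price are assumptions.

CORES_PER_NODE = 32
STORAGE_VM_CORES = 8          # assumed cores consumed by the storage controller VM
LICENSE_PER_CORE = 7_000      # placeholder per-core database license cost

def nodes_needed(db_cores_required, reserved_per_node):
    usable = CORES_PER_NODE - reserved_per_node
    return -(-db_cores_required // usable)   # ceiling division

for reserved in (0, STORAGE_VM_CORES):
    nodes = nodes_needed(128, reserved)          # databases need 128 cores of work
    licensed_cores = nodes * CORES_PER_NODE      # every core on each node is licensed
    print(f"{reserved} cores reserved for storage: {nodes} nodes, "
          f"{licensed_cores} licensed cores, ${licensed_cores * LICENSE_PER_CORE:,}")
```

In this made-up scenario, reserving eight cores per node for storage pushes the same database footprint from four nodes to six, and the licensing bill rises accordingly.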

Storage Costs

Many conventional HCI implementations, including Nutanix, VMware vSAN, and NetApp HCI, store multiple (two or three) copies of each block of data to protect against failures. Naturally this increases the total amount of storage you’ll need—and thus your total cost.
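The arithmetic behind that is simple; the short sketch below shows how keeping two or three copies divides raw capacity into unique data (the 100 TB raw figure is arbitrary).

```python
# Unique data you can store after replication: raw capacity divided by the
# number of copies kept. The 100 TB raw figure is an arbitrary example.

raw_tb = 100
for copies in (2, 3):
    print(f"{copies} copies: {raw_tb} TB raw holds {raw_tb / copies:.0f} TB of unique data")
```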

What Happened to My Flash Storage Capacity?

Many conventional HCI adopters have been surprised by how little usable storage capacity they end up with. With two or three copies of every data block, capacity gets consumed quickly, and many of these HCI implementations add further overhead:

  • Used capacity is generally recommended to stay below 70% to avoid rebalancing operations that add performance overhead.
  • Disk firmware upgrades may require additional free space in the cluster equivalent to the used capacity of the largest disk group.
  • One or two additional nodes with full storage capacity may be needed as spares.

All of this can drop the usable capacity in a conventional HCI environment below 30%. That’s a lot of wasted resources and spend, especially for all-flash configurations. Flash is still an expensive resource at scale.
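A rough, hedged calculation shows how the overheads stack up to a figure in that range. The cluster size, per-node capacity, spare node, and disk-group headroom below are assumptions chosen for illustration, not measurements of any specific product.

```python
# Back-of-the-envelope usable-capacity estimate for a hypothetical 8-node
# all-flash HCI cluster. All figures are illustrative assumptions.

NODES = 8
RAW_TB_PER_NODE = 24
COPIES = 2                    # two copies of every data block
FILL_CEILING = 0.70           # keep used capacity below ~70% to avoid rebalancing
LARGEST_DISK_GROUP_TB = 24    # headroom assumed for disk firmware upgrades
SPARE_NODES = 1               # one node's worth of capacity held as a spare

raw = NODES * RAW_TB_PER_NODE                    # 192 TB of flash purchased
pool = raw - SPARE_NODES * RAW_TB_PER_NODE       # capacity after spares
pool -= LARGEST_DISK_GROUP_TB                    # firmware-upgrade headroom
pool *= FILL_CEILING                             # stay under the fill ceiling
unique_data = pool / COPIES                      # divide by copies kept

print(f"Raw flash purchased: {raw} TB")
print(f"Unique data you can actually store: {unique_data:.1f} TB "
      f"({unique_data / raw:.0%} of raw)")
```

With these assumptions the cluster lands around 26% of raw capacity even with only two copies of each block; a third copy pushes the number lower still.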

To minimize the impact, many teams opt for just two copies of data to increase usable space, but that can spell disaster if a component fails while another node is down for maintenance.

The Advantage of Best-of-Breed Architecture

Infrastructure that is architected with separate, best-of-breed servers and virtualization-centric storage avoids these cost-related challenges. With storage independent from compute, it’s much easier to get the right mix of resources, and you have more flexibility to pick the best compute and storage for your particular workloads. Compute and storage get better every year in terms of performance and density, and purchasing them separately gives you more flexibility to mix new and old hardware as needed.

Tintri offers all-flash storage arrays and cloud management software built on a web services architecture. Together, these building blocks deliver virtualization-centric operations with guaranteed performance, in-depth analytics, and a federated scale-out architecture. This web services approach to infrastructure simplifies your data center and makes autonomous operations a reality.

Next time we’ll look at conventional HCI performance.
