Is The Spotlight Now On Pioneering Hyper Convergence Technology?


-By Mr. Prasad Pimple, Head of Department.

A new wave of hyper convergence improves Virtual Machine (VM) performance by using server flash for key storage functions. But it has its drawbacks, and separating storage performance from capacity overcomes them. Application and database workloads are increasing, and more virtual machines in a data center put pressure on back-end shared storage, creating performance bottlenecks. To handle such IOPS and latency pressure, the enterprise solid-state drive is the right answer: it provides additional acceleration and improves application performance by cutting access latency by almost 10x or more. Mission-critical workloads (e.g. SAP, OLTP databases, Oracle) may demand sub-millisecond latency for data access along with high IOPS.

But another performance challenge also exists: network latency. Every transaction going to and from a VM must traverse various checkpoints, including a host bus adapter (HBA) on the server, the LAN, storage controllers, and the storage fabric. To address this, many companies are placing active data on the host instead of on back-end storage to shorten the distance (and time) for each read/write operation.
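The point about checkpoints can be made concrete with a little arithmetic: end-to-end I/O latency is roughly the sum of every hop a request traverses, so removing the network and array hops shortens the path dramatically. A minimal sketch, using purely hypothetical per-hop figures (not vendor measurements):

```python
# Illustrative only: assumed per-hop latencies in microseconds.
# The hop names mirror the checkpoints described above (HBA, LAN,
# storage controller, storage fabric, then the media itself).
SHARED_STORAGE_PATH = {
    "hba": 10,
    "lan": 150,
    "storage_controller": 250,
    "storage_fabric": 90,
    "array_media": 500,
}

# With active data on host flash, the request never leaves the server.
HOST_FLASH_PATH = {
    "server_flash_media": 100,
}

def path_latency_us(path: dict) -> int:
    """Total latency of one I/O as the sum of its hops (microseconds)."""
    return sum(path.values())

shared = path_latency_us(SHARED_STORAGE_PATH)  # total over the wire
local = path_latency_us(HOST_FLASH_PATH)       # host-local flash
print(f"shared={shared}us local={local}us speedup={shared / local:.0f}x")
```

With these assumed numbers the host-local path comes out roughly an order of magnitude faster, which is in line with the "10x or more" improvement the paragraph above describes; real figures depend entirely on the hardware involved.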

Moving data closer to VMs at the server tier reduces latency. Hyperconvergence puts solid-state storage inside servers. In this respect, it brings incremental performance gains to several applications, like VDI. But architecturally, it introduces various drawbacks, particularly around flexibility, cost, and scale. Perhaps most significantly, it causes substantial disruption to the data center.
Let’s look a little closer at these hyper convergence challenges and how to overcome them.

Hyper Convergence Hangover

As explained, hyper convergence improves VM performance by leveraging server flash for key storage I/O functions. But combining the functions conventionally provided by two discrete systems — servers and storage — requires a complete overhaul of the IT environment currently in place. It creates new business processes (e.g. new vendor relationships, deployment models, upgrade cycles) and introduces new products and technology to the data center, which creates disruption for any non-greenfield deployment. For example, the storage administrator may need to re-implement data services such as snapshots, cloning, and replication, restructure processes for audit/compliance, and undergo training to become familiar with a new user interface and/or tool.
Another major challenge with hyper convergence is scaling. The de facto way to scale a hyper converged environment is simply to add another appliance, and customers have no other choice once their existing IT infrastructure is rendered obsolete. This restricts the administrator's ability to precisely allocate resources to meet the desired level of performance without also adding capacity.

This might work for some applications where performance and capacity typically go hand in hand, but it’s an inefficient way to support other applications, like virtualized databases, where that is not the case. For instance, consider a service supported by a four-node cluster of hyper converged systems. To reach the desired performance threshold, an additional appliance must be added. While the inclusion of the fifth box delivers the desired performance, it forces the end user to also buy unneeded capacity.
This over provisioning is unfortunate for several reasons: It is an unnecessary hardware investment that can require superfluous software licenses, consume valuable data center real estate, and increase environmental (i.e. power and cooling) load.
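The over-provisioning effect is easy to quantify. A minimal sketch with hypothetical appliance specifications (the per-appliance IOPS and TB figures below are assumptions for illustration, not real product numbers):

```python
import math

# Assumed per-appliance specs: performance and capacity are coupled,
# so every appliance bought for IOPS also brings capacity along.
APPLIANCE_IOPS = 100_000  # hypothetical IOPS per appliance
APPLIANCE_TB = 40         # hypothetical usable capacity per appliance (TB)

def appliances_needed(target_iops: int) -> int:
    """Appliances required to meet an IOPS target (round up)."""
    return math.ceil(target_iops / APPLIANCE_IOPS)

def overprovisioned_tb(target_iops: int, needed_tb: int) -> int:
    """Capacity bought beyond what the workload actually needs."""
    nodes = appliances_needed(target_iops)
    return nodes * APPLIANCE_TB - needed_tb

# A workload needing 450,000 IOPS but only 120 TB of data:
nodes = appliances_needed(450_000)
excess = overprovisioned_tb(450_000, 120)
print(f"{nodes} appliances, {excess} TB of unneeded capacity")
```

With these assumed figures, hitting the IOPS target forces a fifth appliance and leaves 80 TB of paid-for capacity idle — along with the software licenses, rack space, and power/cooling load that come with it, as noted above.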

Finally, hyper converged systems restrict choice. They are typically delivered by a vendor who requires the use of specific hardware (and accompanying software for data services). Or they are packaged to adhere to precisely defined specifications that preclude customization. In both scenarios, deployment options are limited. Organizations with established dual-vendor sourcing strategies or architects desiring a more flexible tool to design their infrastructure will need to make significant concessions to adopt this rigid model.

More to follow on the same topic.
Stay Tuned.

For more information regarding Hyper Convergence or VM,
Visit our website at www.netlabindia.com
Or Contact Mr. Prasad Pimple at prasad@netlabindia.com

Disclaimer:

All content provided on this blog is for informational purposes only. The owner of this blog makes no representations as to the accuracy or completeness of any information provided in this blog.


