HDS Offers Up a Three-Tiered Virtualization Vision

Back in the early 2000s, as the storage virtualization debate raged, HDS took what was then a novel approach to storage virtualization: it decoupled its storage controllers from its back-end disks and enabled them to virtualize storage from both HDS and other providers. Fast forward to 2011, and HDS has again given the industry something new to chew on. It offered up a new, three-tiered vision for virtualization that builds upon its existing storage controller virtualization approach but expands it to also virtualize the emerging content and information layers that enterprises are creating.

A decade ago it was heavily debated where the best place was to do block-based storage virtualization: in an appliance, on the host, in the network or on a storage controller. While all of these approaches persist, enterprises appear to be coalescing around storage controller virtualization as their preferred mechanism for virtualizing their storage infrastructure.

This trend toward storage controller virtualization has become so pronounced that at the HDS Influencer Summit I attended this week, HDS no longer even refers to it as “storage controller virtualization” or even “storage virtualization.” Instead it avoids all of the baggage associated with those terms and calls it “infrastructure virtualization.”

Yet it is what HDS envisions virtualizing next that captured the imagination and sparked much discussion among the analysts, bloggers and reporters attending this event. Rather than just satisfying itself with virtualizing storage at the block level, the context with which HDS has historically been associated, its acquisitions over the last few years of Archivas, Parascale and, most recently, BlueArc have enabled HDS to substantially expand how and what it can virtualize.

HDS plans to use these new technologies to build upon the infrastructure virtualization functionality already found in its storage controllers to deliver the higher layers of virtualization that enterprise organizations will expect and need in the years to come to manage their rapidly expanding data stores.

The next, or second, layer above the infrastructure layer is what HDS refers to as “Content Virtualization.” This new level of virtualization will reside in the I/O path (HDS’ storage controllers) and index all data passing through, both structured and unstructured. This indexed information will then be used to create an object file so that any data residing in an organization can theoretically be found, thereby eliminating the silos of information that applications inadvertently create today.
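To make the idea more concrete, here is a minimal sketch of what in-path content indexing could look like. HDS has not published an API for this; the ContentIndex class, its methods and all field names below are my own hypothetical illustration of how writes passing through a controller could be indexed and later searched across application silos.

```python
import hashlib
import time

class ContentIndex:
    """Hypothetical metadata index that a controller sitting in the
    I/O path could populate as writes stream through it."""

    def __init__(self):
        self._entries = {}  # object_id -> metadata dict

    def index_write(self, volume, offset, payload, tags=None):
        """Record metadata for a chunk of data passing through the I/O path."""
        object_id = hashlib.sha1(f"{volume}:{offset}".encode()).hexdigest()
        self._entries[object_id] = {
            "volume": volume,
            "offset": offset,
            "size": len(payload),
            "indexed_at": time.time(),
            # Tags might come from content inspection (file type, keywords, ...)
            "tags": set(tags or ()),
        }
        return object_id

    def search(self, tag):
        """Find every object carrying a given tag, regardless of which
        application or silo originally wrote it."""
        return [oid for oid, meta in self._entries.items() if tag in meta["tags"]]

# Usage: two different "applications" write data; one search spans both silos.
idx = ContentIndex()
idx.index_write("vol-email", 4096, b"...", tags={"email", "contract"})
idx.index_write("vol-crm", 8192, b"...", tags={"crm", "contract"})
print(idx.search("contract"))  # returns both objects despite separate silos
```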

Where HDS sees the biggest challenge is in creating an object file that scales to hold all of the metadata about all of the data its customers are storing today. As HDS specifically intends to continue targeting enterprise customers, some of whom already have data stores reaching into the tens and even hundreds of petabytes, the scale requirements are daunting. Yet most solutions used to index data today are only designed to index about 50 TB of data, a tiny fraction of what this new content level of virtualization will need to support.
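To put that gap in perspective, a rough back-of-envelope calculation; the average object size and per-object metadata figures are my own assumptions, not HDS numbers:

```python
# Back-of-envelope: how far a 100 PB enterprise store outstrips a 50 TB indexer.
# The 1 MB average object size and 1 KB of metadata per object are assumptions.
PB = 10 ** 15
TB = 10 ** 12

data_store = 100 * PB
typical_indexer_limit = 50 * TB
avg_object_size = 10 ** 6        # 1 MB per object (assumed)
metadata_per_object = 10 ** 3    # 1 KB of metadata per object (assumed)

objects = data_store // avg_object_size
metadata_total = objects * metadata_per_object

print(f"scale gap: {data_store // typical_indexer_limit:,}x")   # 2,000x
print(f"objects to index: {objects:,}")                         # 100,000,000,000
print(f"metadata alone: {metadata_total / TB:.0f} TB")          # 100 TB
```

Under these assumptions the metadata alone (100 TB) would already be double the entire capacity of a typical 50 TB indexer.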

But even as HDS works on the architecture to deliver the underlying solution that will hold all of this metadata, it is also developing new technology that will peer into, understand and then index all types of stored data, including audio, video and images. To achieve this feat, HDS is collaborating more closely with its parent company, Hitachi, Ltd., to identify and develop technology that will enable it to index these types of files.

The end game of this initiative is to provide organizations with a single portal through which they can search their entire enterprise data store. This will enable them to respond more quickly to eDiscovery requests as well as improve their ability to spot trends and do market research.

However, it is the next layer of virtualization, the Information Cloud, which could then be created on top of this Content Layer, that piqued the interest of most in attendance. Driving the need for this information layer of virtualization is the growing amount of information being gathered from collection devices like sensors and video surveillance cameras, and the challenge of making sense of that data in order to take action.

A specific example HDS cited where an early form of this type of Information Virtualization already exists is in Japan. Japan has installed sensors on its railroads that monitor movement on the tracks for events like earthquakes and feed that data back to a central database in real time. As this data rolls in, it is indexed by HDS technology and stored in a database where it is immediately analyzed.

During a recent series of earthquakes, as this analysis occurred, the software detected in the data that vibrations were occurring on the tracks, determined that an earthquake was underway, and immediately brought trains to a halt to prevent accidents and possible derailments.
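To illustrate the kind of real-time analysis described above, here is a minimal sketch of threshold-based quake detection over a sensor feed. The threshold, window, quorum and halt_trains callback are purely hypothetical and bear no relation to how Hitachi's actual system works.

```python
from collections import deque

VIBRATION_THRESHOLD = 3.5   # assumed level separating a quake from a passing train
WINDOW = 10                 # recent readings per sensor to consider
QUORUM = 3                  # sensors that must agree before halting trains

class QuakeDetector:
    """Minimal sketch: flag an earthquake when several sensors report
    sustained vibration above a threshold, then halt trains."""

    def __init__(self, halt_trains):
        self._readings = {}          # sensor_id -> deque of recent readings
        self._halt_trains = halt_trains

    def ingest(self, sensor_id, vibration):
        window = self._readings.setdefault(sensor_id, deque(maxlen=WINDOW))
        window.append(vibration)
        # A sensor "trips" when its recent average exceeds the threshold.
        tripped = [
            sid for sid, w in self._readings.items()
            if len(w) == WINDOW and sum(w) / WINDOW > VIBRATION_THRESHOLD
        ]
        if len(tripped) >= QUORUM:
            self._halt_trains(tripped)

# Simulated feed: three track sensors report sustained strong vibration.
detector = QuakeDetector(halt_trains=lambda sensors: print("HALT trains:", sensors))
for _ in range(WINDOW):
    for sid in ("s-101", "s-102", "s-103"):
        detector.ingest(sid, 5.0)
```

Requiring a quorum of sensors rather than a single reading is one simple way such a system could avoid halting trains on the vibration of a single passing freight car.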

The HDS Influencer Summit did not answer every question or address every concern I had about its storage strategy (nor did I really expect it to). But it did accomplish what I believe HDS set out to do: establish that HDS does more than just build great hardware and is in fact building out a software and services strategy that rivals any of its competitors in the enterprise storage space.

So while the jury is still out on whether HDS will be able to execute on and deliver all of these new layers of virtualization it outlined this week, HDS’ new three-tiered virtualization vision is certainly one that organizations of any size would be wise to adopt as their own.
