
Pulling Back the Covers on Sub-volume Optimization

The appeal of sub-volume optimization to end-users is plain: it promises to lower storage capacity and management costs while increasing application efficiency. But before users succumb to its allure, they need to verify vendor claims about its benefits, as vendors are stretching those claims to make this technology fit more environments than it currently does. It is therefore incumbent upon end-users to pull back the covers on sub-volume optimization and determine exactly what it can and cannot deliver.

Sub-volume optimization has already been identified by some as one of the potentially big storage trends of the next decade. By way of example, at the June 2010 BDevent, Highland Capital’s Peter Bell listed sub-volume optimization as one of the top emerging technologies that his company is keeping its eye on.

While it was unclear from his presentation whether Highland Capital has actually invested any money in storage companies that are developing this technology, what is of note is that Bell still considers sub-volume optimization an emerging technology. This viewpoint represents a significant departure from what storage vendors are publicly promoting and would have end-users believe.

Storage vendors are the first to point out the benefits of sub-volume optimization. It is intended to automatically and dynamically place the most (or least) active segments of application data on the right tier of storage at the right time, in such a way that it meets an application’s changing requirements.

Deployed in this fashion, organizations achieve the best of all worlds. They can procure an optimal mix of SSD and SATA disk; lower their overall storage costs, since it is cheaper for enterprises to buy a small amount of SSD alongside SATA drives than to buy all FC drives; and reduce the amount of time they spend managing storage, since application data is automatically placed on the right tier.
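To see why the cost argument holds, consider a back-of-the-envelope comparison. All figures below are invented placeholders for illustration, not vendor quotes; actual per-GB prices vary widely by vendor, drive size and generation.

```python
# Hypothetical cost comparison: a tiered SSD/SATA mix vs. an all-FC
# configuration for 100 TB of usable capacity. Every price here is an
# assumed, illustrative figure.

CAPACITY_GB = 100_000

PRICE_SSD = 10.00   # $/GB (assumed)
PRICE_FC = 2.50     # $/GB (assumed)
PRICE_SATA = 0.50   # $/GB (assumed)

def mixed_cost(ssd_fraction: float) -> float:
    """Cost of a tiered configuration with the given fraction of SSD."""
    ssd_gb = CAPACITY_GB * ssd_fraction
    sata_gb = CAPACITY_GB - ssd_gb
    return ssd_gb * PRICE_SSD + sata_gb * PRICE_SATA

all_fc_cost = CAPACITY_GB * PRICE_FC

for frac in (0.05, 0.10, 0.15, 0.20):
    cost = mixed_cost(frac)
    print(f"{int(frac * 100)}% SSD mix: ${cost:,.0f} vs. all-FC ${all_fc_cost:,.0f}")
```

Under these assumed prices, even a 20:80 SSD-to-SATA ratio comes in below the all-FC configuration, and the smaller mixes come in well below it; of course, the crossover point shifts with the actual prices in play.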

The interest in sub-volume optimization is further driven by the performance gains that SSDs provide and the difficulty users have in justifying their deployment for all but a few select applications. Introducing sub-volume optimization in conjunction with SSDs into a storage system now makes it possible to extend the performance benefits of SSDs to every application using that storage system.

On the surface, this approach makes perfect sense, which is why nearly every storage vendor is adding sub-volume optimization to its storage systems as a feature. However, simply “adding” it does not mean it works as users expect, is optimized for their environments, or that all of its “gotchas” are documented or understood.

For instance, here are just some of the assumptions that storage vendors hope users make regarding sub-volume optimization:

  • The software is there and it works. Few companies buy a new version of Microsoft software until Microsoft releases its first patch, and the same logic applies here. The sub-volume optimization software available from many storage vendors is, in many instances, in its first release, with few if any customers using it in production or willing to act as a reference.
  • Managing SSDs is the same as managing cache. SSDs are being positioned as a second tier of cache, but this approach is not as simple as it sounds. Cache is often used as the initial target for writes before they are de-staged to disk, and the write performance of cache is substantially better than that of SSDs, so this is not a simple swap and replace.
  • Vendors can provide an SSD/SATA system configuration that matches your requirements. Storage vendors are still largely guessing at the right mix of SSD and SATA for your environment. While 95:5, 90:10, 85:15 and even 80:20 ratios of SATA to SSD are often cited, without any information about how a specific set of applications performs, it is difficult if not impossible for storage vendors to provide an SSD/SATA ratio that is optimal for your environment.
  • SSDs work great in all situations. SSDs work great for reads but are less than optimal for writes. And while a single SSD will outperform a single HDD in all circumstances, in write-intensive environments, or in environments with large databases (over 1 TB) that generate large amounts of random reads, it is difficult to predict where the next read will come from. If the system cannot predict the next block or blocks of data to be read and place that data on the SSD before the read occurs, SSD still might not be the right solution for your environment.
  • The movement of data between tiers will occur seamlessly. Moving one block of data from one tier to another is easy; automatically and dynamically moving hundreds, thousands or millions of blocks of data in real time so it is on the right tier of storage and then updating the index that tracks where the data is located is very complex and needs to be carefully orchestrated such that it does not interrupt application processing. At this time few if any vendors are even attempting this sophisticated level of sub-volume optimization.
  • The right data will be on the right tier at the right time. Probably the biggest presumption that storage vendors hope users make is this: that sub-volume optimization software will place the right data on the right tier of storage just before or as the application needs it. However that is a BIG assumption. If sub-volume optimization only occurs on a scheduled basis or according to pre-set policies (as is the primary way it is implemented now), it presumes that the schedule is correct or that the person who set up the policies understands the behavior of the application well enough to predict when it will need data.
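The last two assumptions above can be made concrete with a minimal sketch of what scheduled, policy-based sub-volume optimization has to do: score each block's recent activity, retier hot and cold blocks, and keep the block-location index consistent with every move. All names and thresholds here are hypothetical, and the sketch deliberately ignores the hard part the article describes, namely making the data copy and index update atomic with respect to in-flight I/O.

```python
# A toy model (hypothetical names and thresholds) of policy-based,
# scheduled sub-volume optimization across an SSD and a SATA tier.

from dataclasses import dataclass, field


@dataclass
class TieredVolume:
    hot_threshold: int = 100        # assumed I/O count that makes a block "hot"
    io_counts: dict = field(default_factory=dict)   # block id -> recent I/Os
    tier_index: dict = field(default_factory=dict)  # block id -> "ssd" | "sata"

    def record_io(self, block: int) -> None:
        """Count an I/O against a block; new blocks initially land on SATA."""
        self.io_counts[block] = self.io_counts.get(block, 0) + 1
        self.tier_index.setdefault(block, "sata")

    def rebalance(self) -> int:
        """Scheduled pass: retier blocks by past activity; returns moves made."""
        moves = 0
        for block, count in self.io_counts.items():
            target = "ssd" if count >= self.hot_threshold else "sata"
            if self.tier_index[block] != target:
                # A real array must make the block copy and this index update
                # atomic with respect to in-flight I/O; this sketch does not.
                self.tier_index[block] = target
                moves += 1
        self.io_counts = {b: 0 for b in self.io_counts}  # reset the history
        return moves
```

Note that `rebalance` only ever reacts to *past* activity between scheduled passes, which is precisely the "right data on the right tier at the right time" assumption: a block becomes hot, gets read from SATA for a while, and is promoted to SSD only after the fact.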

Sub-volume optimization is a storage technology that promises to be very disruptive in the coming decade but, as Highland Capital’s Bell points out, it is still an emerging storage technology. Right now storage vendors are either downplaying these limitations or hoping that users will overlook them and buy into the concept in the near term.

The good news is that behind the scenes storage vendors recognize the current limitations of sub-volume optimization and are working to rectify these problems. In an upcoming blog entry, I will take a closer look at one of these solutions, 3PAR’s Adaptive Optimization, and how its sub-volume optimization implementation addresses many of the limitations of other storage vendors’ offerings.

