Nanoseconds, Stubborn SAS, and Other Takeaways from the Flash Memory Summit 2019

Every year at the Flash Memory Summit held in Santa Clara, CA, attendees get a firsthand look at the technologies that will impact the next generation of storage. This year many of the innovations centered on forthcoming interconnects that will better deliver on the performance that flash offers today. Here are DCIG’s main takeaways from this year’s event.

Takeaway #1 – Nanosecond Response Times Demonstrated

PCI Express (PCIe) fabrics can deliver nanosecond response times using resources (CPU, memory, storage) situated in different physical enclosures. In a meeting with DCIG, PCIe fabric provider Dolphin Interconnect Solutions demonstrated how an application could access resources (CPU, flash storage, and memory) on different devices across a PCIe fabric in nanoseconds. Separately, GigaIO announced 500-nanosecond end-to-end latency using its PCIe FabreX switches. While everyone else at the show was boasting about microsecond response times, Dolphin and GigaIO introduced nanoseconds into the conversation. Both companies ship their solutions today.

Takeaway #2 – Impact of NVMe/TCP Standard Confirmed

Ever since we heard the industry planned to port NVMe-oF to TCP, DCIG expected this would accelerate the overall adoption of NVMe-oF. Toshiba confirmed our suspicions. In discussing its KumoScale product with DCIG, it shared that it has seen a 10x jump in sales since the industry ratified the NVMe/TCP standard. This growth stems from the reasons DCIG cited in a previous blog entry: TCP is well understood, Ethernet is widely deployed, it is low cost, and it uses the infrastructure organizations already have in place.

Takeaway #3 – Fibre Channel Market Healthy, Driven by Enterprise All-flash Arrays

According to FCIA leaders, the Fibre Channel (FC) market is healthy. FC vendors are selling 8 million ports per year. The enterprise all-flash array market is driving FC infrastructure sales, and 32 Gb FC is shipping in volume. Indeed, DCIG’s research revealed 37 all-flash arrays that support 32 Gb FC connectivity.

Front-end connectivity is often the bottleneck in all-flash array performance, so doubling the speed of those connections can double the performance of the array. Beyond 32 Gb FC, the FCIA has already ratified the 64 Gb FC standard and is working on 128 Gb FC. Consequently, FC has a long future in enterprise data centers.
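
To illustrate why the front-end connection matters so much, here is a minimal Python sketch (the throughput figures are assumptions for illustration, not measurements of any array) showing that delivered performance tracks the narrowest stage in the data path:

```python
# Back-of-the-envelope sketch with assumed, illustrative numbers: the delivered
# throughput of an array is bounded by the slowest stage in its data path.
def delivered_throughput(frontend_gbps: float, controller_gbps: float, backend_gbps: float) -> float:
    """Return the effective throughput, limited by the narrowest stage."""
    return min(frontend_gbps, controller_gbps, backend_gbps)

# Example: a 16 Gb FC front end capping an otherwise faster array.
before = delivered_throughput(frontend_gbps=16, controller_gbps=40, backend_gbps=48)
after = delivered_throughput(frontend_gbps=32, controller_gbps=40, backend_gbps=48)
print(before, after)  # 16 -> 32: doubling the front end doubles delivered throughput
# If the controller or back end were the bottleneck instead, the same upgrade would help far less.
```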

FC-NVMe brings the benefits of NVMe-oF to Fibre Channel networks. FC-NVMe reduces protocol overhead, enabling GEN 5 (16 Gb FC) infrastructure to accomplish the same amount of work while consuming about half the CPU of standard FC.

Takeaway #4 – PCIe Will Not be Denied

All resources (CPU, memory, and flash storage) can connect with one another and communicate over PCIe. Further, using PCIe eliminates the overhead associated with introducing storage protocols (FC, InfiniBand, iSCSI, SCSI); all these resources speak the PCIe protocol natively. With the PCIe 5.0 standard formally ratified in May 2019 and discussions about PCIe 6.0 underway, the future seems bright for the growing adoption of this protocol. Further, AMD and Intel have both thrown their support behind it.

Takeaway #5 – SAS Will Stubbornly Hang On

DCIG’s research finds that over 75% of AFAs support 12 Gb/s SAS now. This predominance makes the introduction of 24G a logical next step for these arrays. SAS is a proven, mature, and economical interconnect, and few applications can yet drive the performance limits of 12 Gb SAS, much less the forthcoming 24G standard. Adding to the likelihood that 24G moves forward, the SCSI Trade Association (STA) reported that the recent 24G plug fest went well.

Editor’s Note: This blog entry was updated on August 9, 2019, to correct grammatical mistakes and add some links.



NVMe: Four Key Trends Set to Drive Its Adoption in 2019 and Beyond

Storage vendors hype NVMe for good reason. It enables all-flash arrays (AFAs) to fully deliver on flash’s performance characteristics. Already NVMe serves as an interconnect between AFA controllers and their back end solid state drives (SSDs) to help these AFAs unlock more of the performance that flash offers. However, the real performance benefits that NVMe can deliver will be unlocked as a result of four key trends set to converge in the 2019/2020 time period. Combined, these will open the doors for many more companies to experience the full breadth of performance benefits that NVMe provides for a much wider swath of applications running in their environment.

Many individuals have heard about the performance benefits of NVMe. Using it, companies can reduce latency, with response times measured in a few hundred microseconds or less. Further, applications can leverage the many more channels that NVMe offers to drive throughput to hundreds of GBs per second and achieve millions of IOPS. These types of performance characteristics have many companies eagerly anticipating NVMe’s widespread availability.
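
As a rough way to reason about how latency and IOPS relate, Little’s Law ties throughput to outstanding I/Os and response time. The sketch below uses assumed, illustrative values rather than benchmarks of any specific NVMe device or array:

```python
# Little's Law: throughput (IOPS) ~= outstanding I/Os / average latency.
# Illustrative numbers only; not measurements of any particular device or array.
def iops(outstanding_ios: int, latency_seconds: float) -> float:
    return outstanding_ios / latency_seconds

# 256 outstanding I/Os completing in 200 microseconds each:
print(f"{iops(256, 200e-6):,.0f} IOPS")  # ~1,280,000 IOPS
# The same queue depth at 2 ms of latency yields only ~128,000 IOPS.
print(f"{iops(256, 2e-3):,.0f} IOPS")
```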

To date, however, few companies have experienced the full breadth of performance characteristics that NVMe offers. This stems from:

  • The lack of AFAs on the market that fully support NVMe (about 20%).
  • The relatively small performance improvements that NVMe offers over existing SAS-attached solid-state drives (SSDs).
  • The high level of difficulty and cost associated with deploying NVMe in existing data centers.

This is poised to change in the next 12-24 months as four key trends converge that will open up NVMe to a much wider audience.

  1. Large storage vendors getting ready to enter the NVMe market. AFA providers such as Tegile (Western Digital), iXsystems, Huawei, Lenovo, and others ship products that support NVMe. These vendors represent the leading edge of where NVMe innovation has occurred. However, their share of the storage market remains relatively small compared to providers such as Dell EMC, HPE, IBM, and NetApp. As these large storage providers enter the market with AFAs that support NVMe, expect market acceptance and adoption of NVMe to take off.
  2. The availability of native NVMe drivers on all major operating systems. The only two major enterprise operating systems that currently have native NVMe drivers are Linux and VMware. However, until Microsoft and, to a lesser degree, Solaris offer native NVMe drivers, many companies will have to hold off on deploying NVMe in their environments. The good news is that all these major OS providers are actively working on NVMe drivers. Further, expect that the availability of these drivers will closely coincide with the availability of NVMe AFAs from the major storage providers and the release of the NVMe-oF TCP standard.
  3. NVMe-oF TCP protocol standard set to be finalized later in 2018. Connecting the AFA controller to its backend SSDs via NVMe is only one half – and the much easier part – of solving the performance problem. The much larger and more difficult problem is easily connecting hosts to AFAs over existing storage networks, as it is currently difficult to set up and scale NVMe-oF. The establishment of the NVMe-oF TCP standard will remedy this and facilitate the introduction and use of NVMe-oF between hosts and AFAs using TCP/IP over existing Ethernet storage networks.
  4. The general availability of NVMe-oF TCP offload cards. To realize the full performance benefits of NVMe-oF using TCP, companies are advised to use NVMe-oF TCP offload cards. Using standard Ethernet cards with no offload engine, companies will still see high throughput but very high CPU utilization (up to 50 percent). Using the forthcoming NVMe-oF TCP offload cards, performance increases by anywhere from 33 to 150 percent versus native TCP cards while introducing only nominal amounts of latency (single- to double-digit microseconds).

The business need for NVMe technology is real. While today’s all-flash arrays have tremendously accelerated application performance, NVMe stands poised to unleash another round of up to 10x or more performance improvements. But to do that, a mix of technologies, standards, and programming changes to existing operating systems must converge for mass adoption in enterprises to occur. This combination of events seems poised to happen in the next 12-24 months.




Seven Key Differentiators between Dell EMC VMAX and HPE 3PAR StoreServ Systems

Dell EMC and Hewlett Packard Enterprise are enterprise technology stalwarts. Thousands of enterprises rely on VMAX or 3PAR StoreServ arrays to support their most critical applications and to store their most valuable data. Although VMAX and 3PAR predated the rise of the all-flash array (AFA), both vendors have adapted and optimized these products for flash memory. They deliver all-flash array performance without sacrificing the data services enterprises rely on to integrate with a wide variety of enterprise applications and operating environments.

While VMAX and 3PAR StoreServ can both support hybrid flash plus hard disk configurations, the focus is now on all-flash. Nevertheless, the ability of these products to support multiple tiers of storage will be advantageous in a future that includes multiple classes of flash memory along with other non-volatile storage class memory technologies.

Both the VMAX and 3PAR StoreServ can meet the storage requirements of most enterprises, yet differences remain. DCIG compares the current AFA configurations from Dell EMC and HPE in its latest DCIG Pocket Analyst Report. This report will help enterprises determine which product best fits their business requirements.

DCIG updated its research on all-flash arrays during the first half of 2018. In so doing, DCIG identified many similarities between the VMAX and 3PAR StoreServ products including:

  • Extensive support for VMware APIs, including VMware Virtual Volumes (VVols)
  • Integration with multiple virtualization management consoles
  • Rich data replication and data protection offerings
  • Support for a wide variety of client operating systems
  • Unified SAN and NAS

DCIG also identified significant differences between the VMAX and 3PAR StoreServ products including:

  • Data center footprint
  • High-end history
  • Licensing simplicity
  • Mainframe connectivity
  • Performance resources
  • Predictive analytics
  • Raw and effective storage density


To see which vendor has the edge in each of these categories and why, you can access the latest 4-page Pocket Analyst Report from DCIG that analyzes and compares all-flash arrays from Dell EMC and HPE. This report is currently available for purchase on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available at no charge at some future time.




Five Key Differentiators between the Latest NetApp AFF A-Series and Hitachi Vantara VSP F-Series Platforms

Every enterprise-class all-flash array (AFA) delivers sub-1 millisecond response times on standard 4K and 8K performance benchmarks, high levels of availability, and a relatively full set of core data management features. As such, enterprises must examine AFA products closely to determine the differentiators between them. When DCIG compared the newest AFAs from leading providers such as Hitachi Vantara and NetApp in its latest DCIG Pocket Analyst Report, differences between them quickly emerged.

Both Hitachi Vantara and NetApp refreshed their respective F-Series and A-Series lines of all-flash arrays (AFAs) in the first half of 2018. Many of the similarities between the products from these providers persisted, in that they both continue to natively offer:

  • Unified SAN and NAS interfaces
  • Extensive support for VMware APIs, including VMware Virtual Volumes (VVols)
  • Integration with popular virtualization management consoles
  • Rich data replication and data protection offerings

However, the latest AFA product refreshes from each of these two vendors also introduced some key areas where they diverge. While some of these changes reinforced the strengths of their respective product lines, other changes provided key insights into how these two vendors see the AFA market shaping up. The result is some key differences in product functionality between the products from these two vendors that will impact them in the years to come.


To help enterprises select the solution that best fits their needs, there are five key ways that the latest AFA products from Hitachi Vantara and NetApp differentiate themselves from one another. These five key differentiators include:

  1. Data protection and data reduction
  2. Flash performance optimization
  3. Predictive analytics
  4. Public cloud support
  5. Storage networking protocols

To see which vendor has the edge in each of these categories and why, you can access the latest 4-page Pocket Analyst Report from DCIG that analyzes and compares these newest all-flash arrays from Hitachi Vantara and NetApp. This report is currently available for sale on DCIG’s partner site: TechTrove. You may also register on the TechTrove website to be notified should this report become available at no charge at some future time.




Six Best Practices for Implementing All-flash Arrays

Almost any article published today related to enterprise data storage will talk about the benefits of flash memory. However, while many organizations now use flash in their enterprise, most are only now starting to deploy it at a scale where it hosts more than a handful of their applications. As organizations look to deploy flash more broadly in their enterprises, here are six best practices to keep in mind as they do so.

The six best practices outlined below are united by a single overarching principle: the data center is not merely a collection of components; it is an interdependent system. Therefore, the results achieved by changing any key component will be constrained by its interactions with the performance limits of other components. Optimal results come from optimizing the data center as a system.


Best Practice #1: Focus on Accelerating Applications

Business applications are the reason businesses run data centers. Therefore, accelerating applications is a useful focus in evaluating data center infrastructure investments. Eliminating storage performance bottlenecks by implementing an all-flash array (AFA) may reveal bottlenecks elsewhere in the infrastructure, including in the applications themselves.

Getting the maximum performance benefit from an AFA may require more or faster connections to the data center network, changes to how the network is structured, and other network configuration details. Application servers may require new network adapters, more DRAM, adjustments to cache sizes, and other server configuration details. Applications may require configuration changes or even some level of recoding. Some AFAs include utilities that will help identify the bottlenecks wherever they occur along the data path.

Best Practice #2: Mind the Failure Domain

Consolidation can yield dramatic savings, but it is prudent to consider the failure domain, and how much of an organization’s infrastructure should depend on any one component—including an all-flash array. While all the all-flash arrays that DCIG covers in its All-flash Array Buyer’s Guides are “highly available” by design, some are better suited to deliver high availability than others. Be sure the one you select matches your requirements and your data center design.

Best Practice #3: Use Quality of Service Features and Multi-tenancy to Consolidate Confidently

Quality of Service (QoS) features enable an array to give critical business applications priority access to storage resources. Multi-tenancy allocates resources to specific business units and/or departments and limits the percentage of resources that they can consume on the all-flash array at one time. Together, these features protect the array from being monopolized by any one application or bad actor.
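
As a purely hypothetical illustration of the idea (the tenant names, limits, and numbers below are invented for this sketch and do not correspond to any vendor’s actual QoS API), QoS plus multi-tenancy amounts to enforcing per-tenant ceilings so no single workload can monopolize the array:

```python
# Hypothetical sketch of per-tenant QoS enforcement; the tenants, limits, and
# numbers are invented for illustration and are not any vendor's actual API.
TENANT_LIMITS = {           # maximum share of array IOPS each tenant may consume
    "erp": 0.40,            # critical business application gets the largest share
    "analytics": 0.30,
    "test-dev": 0.10,
}
ARRAY_MAX_IOPS = 1_000_000

def admit(tenant: str, requested_iops: int, current_iops: int) -> int:
    """Grant only as many IOPS as keep the tenant under its configured ceiling."""
    ceiling = int(TENANT_LIMITS.get(tenant, 0.05) * ARRAY_MAX_IOPS)
    return max(0, min(requested_iops, ceiling - current_iops))

print(admit("test-dev", requested_iops=300_000, current_iops=50_000))  # capped at 50,000
```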

Best Practice #4: Pursue Automation

Automation can dramatically reduce the amount of time spent on routine storage management and enable new levels of IT agility. This is where features such as predictive analytics come into play. They help remove the risk associated with managing all-flash arrays in complex, consolidated environments. For instance, they can identify problems before they impact production applications and proactively take steps to resolve them.

Best Practice #5: Realign Roles and Responsibilities

Implementing an all-flash storage strategy involves more than technology. It can, and should, reshape roles and responsibilities within the central IT department and between central IT, developers, and business unit technologists. Thinking through the possible changes with the various stakeholders can reduce fear, eliminate obstacles, and uncover opportunities to create additional value for the business.

Best Practice #6: Conduct a Proof of Concept Implementation

A good proof of concept can validate feature claims and uncover performance-limiting bottlenecks elsewhere in the infrastructure. However, the key to implementing a good proof of concept is having an environment where you can accurately host and test your production workloads on the AFA.

A Systems Approach Will Yield the Best Result

Organizations that approach the AFA evaluation from a systems perspective, recognizing that the data center is an interdependent system that includes hardware, software, and people, and that apply these six best practices during an all-flash array purchase decision are far more likely to achieve the objectives that prompted them to look at all-flash arrays in the first place.

DCIG is preparing a series of all-flash array buyer’s guides that will help organizations considering the purchase of an all-flash array. DCIG buyer’s guides accelerate the evaluation process and facilitate better-informed decisions. Look for these buyer’s guides beginning in the second quarter of 2018. Visit the DCIG web site to discover more articles that provide actionable analysis for your data center infrastructure decisions.




Two Most Disruptive Storage Technologies at the NAB 2018 Show

The exhibit halls at the annual National Association of Broadcasters (NAB) show in Las Vegas always contain eye-popping displays highlighting recent technological advances as well as what is coming down the path in the world of media and entertainment. But behind NAB’s glitz and glamour lurks a hard, cold reality: every word recorded, every picture taken, and every scene filmed must be stored somewhere, usually multiple times, and be available at a moment’s notice. It was in these halls at the NAB show that DCIG identified two start-ups with storage technologies poised to disrupt business as usual.

Storbyte. Walking the floor at NAB, a tall, blond individual literally yanked me by the arm as I was walking by and asked me if I had ever heard of Storbyte. Truthfully, the answer was No. This person turned out to be Steve Groenke, Storbyte’s CEO, and what ensued was a great series of conversations while at NAB.

Storbyte has come to market with an all-flash array. However, it took a very different approach to solve the problems of longevity, availability, and sustainable high write performance in SSDs and the storage systems built with them. What makes it so disruptive is that it created a product that meets the demand for extreme sustained write performance by slowing down flash, and it does so at a fraction of the cost of other all-flash arrays.

In looking at today’s all-flash designs, every flash vendor is actively pursuing high performance storage. The approach they take is to maximize the bandwidth to each SSD. This means their systems must use PCIe attached SSDs addressed via the new NVMe protocol.

Storbyte chose to tackle the problem differently. Its initial target customers had continuous, real-time capture and analysis requirements and routinely burned through the most highly regarded enterprise-class SSDs in about seven months. Two things killed NAND flash in these environments: heat and writes.

To address this problem, Storbyte reduces heat and the number of writes that each flash module experiences by incorporating sixteen mSATA SSDs into each of its Eco*Flash SSDs. Further, Storbyte slows down the CPUs in each of the mSATA modules on its system and then wide-stripes writes across all of them. According to Storbyte, this requires only about 25% of the available CPU on each mSATA module, so they use less power. By also managing the writes, Storbyte simultaneously extends the life of each mSATA module on its Eco*Flash drives.

The end result is a low-cost, high-performance, very dense, power-efficient all-flash array built using flash cards that rely upon “older”, “slower”, consumer-grade mSATA flash memory modules, yet can drive 1.6 million IOPS in a 4U system. More notably, its systems cost about a quarter of what competitive “high performance” all-flash arrays cost while packing more than a petabyte of raw flash memory capacity in 4U of rack space and using less power than almost any other all-flash array.

Wasabi. Greybeards in the storage world may recognize the Wasabi name as a provider of iSCSI SANs. Well, right name but different company. The new Wasabi recently came out of stealth mode as a low-cost, high-performance cloud storage provider. By low cost, we mean 1/5 the cost of Amazon’s slowest offering (Glacier), and by high performance, 6x the speed of Amazon’s highest-performing S3 offering. In other words, you can have your low-cost cloud storage and eat it too.

What makes its offering so compelling is that it offers storage capacity at $4.99/TB per month. That’s it. No additional egress charges every time you download files. No complicated monthly statements to decipher to figure out how much you are spending and where. No costly storage architects to hire to figure out how to tier data to optimize performance and costs. This translates into one fast cloud storage tier at a much lower cost than the Big 3 (Amazon AWS, Google Cloud, and Microsoft Azure).

Granted, Wasabi is a cloud storage provider start-up so there is an element of buyer beware. However, it is privately owned and well-funded. It is experiencing explosive growth with over 1600 customers in just its few months of operation. It anticipates raising another round of funding. It already has data centers scattered throughout the United States and around the world with more scheduled to open.

Even so, past horror stories about cloud providers shutting their doors give every company pause about using a relatively unknown quantity to store their data. In these cases, Wasabi recommends that companies use its solution as their secondary cloud.

Its cloud offering is fully S3 compatible, and most companies want a cloud alternative anyway. In this instance, store copies of your data with both Amazon and Wasabi. Once stored, run any queries, production workloads, etc. against the Wasabi cloud. The Amazon egress charges that your company avoids by accessing its data on the Wasabi cloud will more than justify taking the risk of storing the data you routinely access on Wasabi. Then, in the unlikely event Wasabi does go out of business (not that it has any plans to do so), companies still have a copy of their data with Amazon that they can fail back to.

This argument seems to resonate well with prospects. While I could not substantiate these claims, Wasabi said that they are seeing multi-petabyte deals coming their way on the NAB show floor. By using Wasabi instead of Amazon in the use case just described, these companies can save hundreds of thousands of dollars per month just by avoiding Amazon’s egress charges while mitigating their risk associated with using a start-up cloud provider such as Wasabi.

Editor’s Note: The spelling of Storbyte was corrected on 4/24.




NVMe: Setting Realistic Expectations for 2018

Non-volatile Memory Express (NVMe) has captured the fancy of the enterprise storage world. Implementing NVMe on all-flash arrays or hyper-converged infrastructure appliances carries with it the promise that companies can leverage these solutions to achieve sub-millisecond response times, drive millions of IOPS, and deliver real-time application analytics and transaction processing. But differences persist between what NVMe promises for these solutions and what they can deliver. Here is a practical look at what NVMe delivers on these solutions in 2018.

First and foremost, NVMe is an exciting and needed breakthrough for delivering on the performance characteristics of flash as of early 2018. Unlike the SCSI protocol that it replaces, which was designed and implemented with mechanical hard disk drives (HDDs) in mind, NVMe comes to market intended for use with today’s flash-based systems. In fact, as evidence of the biggest difference between SCSI and NVMe, NVMe cannot even interface with HDDs. NVMe is intended to speak flash.

As part of speaking flash, NVMe no longer concerns itself with the limitations of mechanical HDDs. By way of example, HDDs can only handle one command at a time. Whether it is a read or a write, the entire HDD is committed to completing that one command before it can start processing the next one and it only has one channel delivering that command to it.

The limits of flash, and by extension NVMe, are exponentially higher. NVMe can support 65,535 queues into the flash media and stack up to 64,000 commands per queue. In other words, over 4 billion commands can theoretically be issued to a single flash device at any time.
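
The arithmetic behind that figure is straightforward; the snippet below simply multiplies the queue count by the per-queue command limit cited above:

```python
# NVMe's theoretical command ceiling, using the figures cited above.
queues = 65_535               # maximum I/O queues per NVMe controller
commands_per_queue = 64_000   # commands outstanding per queue (as cited above)

theoretical_outstanding = queues * commands_per_queue
print(f"{theoretical_outstanding:,}")  # 4,194,240,000 -- over 4 billion commands
```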

Of course, just because NVMe can support over 4 billion commands does not mean that any product or application currently comes close to doing that. Should they ever do so, and they probably will at some point, it is plausible that published IOPS numbers might be in the range of tens or hundreds of millions of IOPS. But as of early 2018, everyone must still develop and mature their infrastructure and applications to support that type of throughput. Further, NVMe as a protocol must still continue to mature its interface to support those kinds of workloads.

So as of early 2018, here is what enterprises can realistically expect from NVMe:

1. If you want NVMe on your all-flash array, you have a short list from which to choose. NVMe-capable all-flash arrays that have NVMe interfaces to all SSDs are primarily available from Dell EMC, Huawei, Pivot3, Pure Storage, and Tegile. The number of all-flash arrays that currently support NVMe remains in the minority, with only 18% of the 100+ all-flash arrays that DCIG evaluated supporting NVMe connectivity to all back-end SSDs.


The majority of AFAs currently shipping support a 3, 6, or 12 Gb SAS interface to their backend flash media for good reason: few applications can take full advantage of NVMe’s capabilities. As both applications and NVMe mature, expect the number of AFAs that support NVMe to increase.

2. Your connectivity between your server and shared storage array will likely remain the same in 2018. Enterprises using NAS protocols such as CIFS or NFS or SAN protocols such as FC or iSCSI should expect to continue doing so for 2018 and probably for the next few years. While new standards such as NVMe-oF are emerging and provide millions of IOPS when implemented, as evidenced by early solutions from providers such as E8 Storage, NVMe is not yet well suited to act as a shared storage protocol between servers and AFA arrays. For now, NVMe remains best suited for communication between storage array controllers and their back-end flash media or on servers that have internal flash drives. To use NVMe for any other use cases in enterprise environments is, at this point, premature.

3. NVMe is a better fit for hyper-converged infrastructure solutions than AFAs for now. Enterprises expecting a performance boost from their use of NVMe will likely see it whether they deploy it in hyper-converged infrastructure or AFA solutions. However, enterprises must connect to AFAs using existing storage protocols such as those listed above. Conversely, applications running on hyper-converged infrastructure solutions that support NVMe may see better performance than those running on AFAs. With AFAs, protocol translation over a NAS or SAN must still occur across the storage network to get to the NVMe-enabled AFA. Hyper-converged infrastructure solutions negate the need for this additional protocol conversion.

4. NVMe will improve performance, but verify your applications are ready. Stories about the performance improvements that NVMe offers are real and validated in the real world. However, these same users also find that some of their applications using NVMe-based all-flash arrays are not getting the full benefit they expected because, in part, the applications cannot handle the performance. Some users report that they have uncovered wait times built into their applications because the applications were designed to work with slower HDDs. Until the applications themselves are updated to account for AFAs by having those preconfigured wait times removed or minimized, the applications may become the new choke point that prevents enterprises from reaping the full performance benefits that NVMe has to offer.

NVMe is almost without doubt the future for communicating with flash media. But in early 2018, enterprises need to set realistic expectations as to how much of a performance boost NVMe will provide when deployed. Sub-millisecond response times are certainly a realistic expectation and maybe almost a necessity at this point to justify the added expense of using an NVMe array since many SAS-based arrays may achieve this same metric. Further, once an enterprise commits to using NVMe, one also makes the commitment to only using flash media since NVMe provides no option to interface with HDDs.




Five Ways to Measure Simplicity on All-flash Arrays

Simplicity is one of those terms that I love to hate. On one hand, people generally want the products that they buy to be “simple” to deploy and manage so they can “set them and forget them.” The problem that emerges when doing product evaluations, especially when evaluating all-flash arrays (AFAs), is determining what features contribute to making AFAs simple to deploy and manage. The good news is that over the last few years five key features have emerged that organizations can use to measure the simplicity of an AFA to select the right one for their environment.

Simplicity is one of those attributes that everyone generally recognizes when they see it. However, it can be challenging to quantify exactly what features contribute to making a product simple to deploy and manage. This difficulty stems from the fact that the definition of what constitutes simplicity is subjective and varies from organization to organization and even from individual to individual.

Individuals and organizations may look at multiple features to ascertain the simplicity of a product. As they apply their definitions and interpretations of simplicity to AFAs, arriving at a conclusion about what simplicity means that everyone agrees upon can be problematic.

The good news is that as the adoption of AFAs has increased, the list of features that deliver on simplicity, and which one should look for, has coalesced into a list you can get your arms around. The five features that contribute toward delivering on this ideal of simplicity on all-flash arrays are:

  1. All-inclusive software licensing. Nothing is worse than trying to figure out how many or what type of software licenses you need on your AFA. Many AFAs now solve this dilemma by including software licenses for all the features on their array. While some still do tie licensing to storage capacity, number of hosts, processing power, or some mix thereof, the overhead and time associated with managing software licenses on each array should be much less than in the past.
  2. Evergreen. The capital costs associated with hardware refreshes that occur every 3-5 years put a large hole in corporate budgets in the year that they hit. More AFAs now include “evergreen” options that, when purchased as part of their support contracts, refresh the existing hardware at its end of life, usually three years.
  3. Pre-built integration with automation frameworks. As organizations look to automate the management of their IT infrastructure, AFAs are falling right in line. While using web-based GUIs to manage an AFA is handy, AFAs that can be discovered and managed as part of the organization’s broader automation framework make it more seamless for organizations to quickly roll new AFAs into their environment, discover them, and put them into production (see the sketch after this list).
  4. Proactive maintenance. The last thing any IT manager wants is a notification in the middle of the night, while on vacation, or on a weekend that there is an application performance problem or a hardware failure. Many AFAs now proactively maintain their products using software that constantly optimizes performance or identifies and remediates hardware problems before they impact production applications. While IT managers may still be notified of these proactive activities performed by the AFA, the unpredictable, reactive nature of managing them is greatly reduced.
  5. Scale-out architectures. Hardware upgrades and refreshes as well as the data migrations that are often associated with performing those routine system admin activities have been a bugaboo for years in enterprise data centers. New scale-out architectures, sometimes referred to as web scale, now found on many AFAs mitigate if not put an end to the long hours and application disruptions that performing these activities have historically caused.
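
To make item 3 concrete, the fragment below sketches what rolling a new AFA into an automation framework might look like. The REST endpoint, payload fields, and token are hypothetical placeholders invented for this sketch, not any vendor’s actual management API:

```python
# Hypothetical sketch of provisioning storage on an AFA through a REST API as part
# of an automation framework. The endpoint, fields, and token are placeholders,
# not a real vendor API.
import requests

ARRAY_API = "https://afa.example.internal/api/v1"   # hypothetical management endpoint
HEADERS = {"Authorization": "Bearer <token>"}        # placeholder credential

def create_volume(name: str, size_gib: int) -> dict:
    """Ask the array to carve out a volume; returns the array's JSON response."""
    payload = {"name": name, "size_gib": size_gib}
    response = requests.post(f"{ARRAY_API}/volumes", json=payload, headers=HEADERS, timeout=30)
    response.raise_for_status()
    return response.json()

# A configuration-management or orchestration tool would call this same API so new
# arrays can be discovered and put into production automatically.
```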

This list may not represent a comprehensive list of all the features that make an AFA simple to deploy and manage. However, it does represent the primary features that individuals and organizations should review to verify that an AFA will be easy to deploy and manage in their environment.

These features and many others are what DCIG takes into consideration as it prepares each of its Buyer’s Guides. Further, licensing the DCIG Competitive Intelligence Portal, a SaaS offering from DCIG, includes DCIG research. You can then use this research as a starting point to initiate and/or augment your own research. This Portal serves to centralize your internal competitive intelligence, which can then be easily shared throughout your organization to whoever needs it, wherever they need it. To learn more, click here to have someone from DCIG contact you.




Software-defined Data Centers Have Arrived – Sort of

Today, more than ever, organizations are looking to move to software-defined data centers. Whether they adopt software-defined storage, networking, computing, servers, security, or all of them as part of this initiative, they are starting to conclude that a software-defined world trumps the existing hardware-defined one. While I agree with this philosophy in principle, organizations need to carefully dip their toes into the software-defined waters and not dive in head-first.

The concept of software-defined data centers is really nothing new. This topic has been discussed for decades and was the subject of one of the first articles I ever published 15 years ago (though the technology was more commonly called virtualization at that time). What is new, however, is the fact that the complementary, supporting set of hardware technologies needed to enable the software-defined data center now exists.

More powerful processors, higher capacity memory, higher bandwidth networks, scale-out architectures, and other technologies have each contributed, in part, to making software-defined data centers a reality. The recent availability of solid state drives (SSDs) was perhaps the technology that ultimately enabled this concept to go from the drawing board into production. SSDs reduce data access times from milliseconds to microseconds, helping to remove one of the last remaining performance bottlenecks to making software-defined data centers a reality.

Yet as organizations look to replace their hardware-defined infrastructure with a software-defined data center, they must still proceed carefully. Hardware-defined infrastructures may currently cost a lot more than software-defined data centers, but they do offer distinct benefits that software-defined solutions are still hard-pressed to match.

For instance, the vendors who offer the purpose-built appliances for applications, backup, networking, security, or storage used in hardware-defined infrastructures typically provide hardware compatibility lists (HCLs). Each HCL names the applications, operating systems, firmware, etc., with which the appliance is certified to interact and for which the vendor will provide support. Deviate from that HCL and your ability to get support suddenly gets sketchy.

Even HCLs are problematic due to the impossibly large number of possible configurations that exist in enterprise environments, which vendors can never thoroughly vet and test.

This has led to the emergence of converged infrastructures. Using these, vendors guarantee that all components in the stack (applications, servers, network, and storage along with their firmware and software) are tested and certified to work together. So long as organizations use the vendor approved and tested hardware and software component in this stack and keep them in sync with the vendor specifications, they should have a reliable solution.

Granted, obtaining solutions that satisfy these converged infrastructure requirements costs more. But for many enterprises, paying the premium is worth it. This testing helps to eliminate situations such as one I experienced many years ago.

We discovered in the middle of a system wide SAN upgrade that a FC firmware driver on all the UNIX systems could not detect the LUNs on the new storage systems. Upgrading this driver required us to spend nearly two months with individuals coming in every weekend to apply this fix across all these servers before we could implement and use the new storage systems.

Software-defined data centers may still encounter these types of problems. Even though the software itself may work fine, it cannot account for all the hardware in the environment or guarantee interoperability with it. Further, since software-defined solutions tend to go into low-cost and/or rapidly changing environments, there is a good possibility that the HCLs and/or converged solutions they do offer are limited in scope and may not have been subjected to the extensive testing that production environments require.

The good news is that software-defined data centers are highly virtualized environments. As such, copies of production environments can be made and tested very quickly. This flexibility mitigates the dangers of creating unsupported, untested production environments. It also provides organizations an easier, faster means to fail back to the original configuration should the new configuration not work as expected.

But here’s the catch. While software-defined data centers provide flexibility, someone must still possess the skills and knowledge to make the copies, perform the tests, and do the failbacks and recoveries if necessary. Further, software-defined data centers eliminate neither their reliance on underlying hardware components nor the individuals who create and manage them.

Since interoperability with the hardware is not a given and people are known to be unpredictable and/or unreliable from time to time, the whole system could go down or function unpredictably without a clear path to resolution. Further, if one encounters interoperability issues initially or at some point in the future, the situation may get thornier. Organizations may have to ask and answer questions such as:

  1. When the vendors start finger pointing, who owns the problem and who will fix it?
  2. What is the path to resolution?
  3. Who has tested the proposed solution?
  4. How do you back out if the proposed solution goes awry?

Software-defined data centers are rightfully creating a lot of buzz, but they are still not the be-all and end-all. While the technology now exists at all levels of the data center to make it practical to deploy this architecture and for companies to realize significant hardware savings in their data center budgets, the underlying best practices and support needed to successfully implement software-defined data centers are still playing catch-up. Until those are fully in place or you have full assurances of support by a third party, organizations are advised to proceed with caution on any software-defined initiative, data center or otherwise.




Difficult to Find any Sparks of Interest or Innovation in HDDs Anymore

In early November DCIG finalized its research into all-flash arrays and, in the coming weeks and months, will be announcing its rankings in its various Buyer’s Guide Editions as well as in its new All-flash Array Product Ranking Bulletins. It is as DCIG prepares to release its all-flash array rankings that we find ourselves remarking on just how quickly interest in HDD-based arrays has declined this year alone. While we are not ready to declare HDDs dead by any stretch, finding any sparks that represent interest or innovation in hard disk drives (HDDs) is getting increasingly difficult.


The rapid decline of interest in HDDs over the last 18 months, and certainly the last six months, is stunning. When flash first started gaining market acceptance in enterprise storage arrays around 2010, there was certainly speculation that flash could replace HDDs. But the disparity in price per GB between disk and flash was great at the time and forecast to remain that way for many years. As such, I saw no viable path for flash to replace disk in the near term.

Fast forward to late 2016, and flash’s drop in price per GB, coupled with the introduction of technologies such as compression and deduplication in enterprise storage arrays, has brought its price down to where it now approaches HDDs. Then factor in the reduced power and cooling costs, flash’s increased life span (5 years or longer in many cases), the improved performance, and intangibles such as the elimination of noise in data centers, and suddenly the feasibility of all-flash data centers does not seem so far-fetched.

Some vendors are even working behind the scenes to make the case for flash even more compelling. They plan to eliminate the upfront capital costs associated with deploying flash and are instead working on flash deployments that charge monthly based on how much capacity your organization uses.

Recent statistics support this rapid adoption. Trendfocus announced that it found a 101% quarter-over-quarter increase in the number of enterprise PCIe units shipped, that the capacity of all shipped SSDs is approaching 14 exabytes, and that the total number of SATA and SAS SSDs shipped topped 4 million units. Couple those numbers with CEOs from providers such as Kaminario and Nimbus Data both publicly saying that list prices for their all-flash units have dropped below the $1/GB price point, and it is no wonder that flash is dousing any sparks of interest that companies have in buying HDDs or that vendors have in innovating in HDD technology.

Is DCIG declaring disk dead? Absolutely not. In talking with providers of integrated and hybrid cloud backup appliances, deduplicating backup appliances, and archiving appliances, they still cannot yet justify replacing HDDs with flash. Or at least not yet.

One backup appliance provider tells me his company watches the prices of flash like a hawk and re-evaluates the price of flash versus HDDs about every six months to see if it makes sense to replace HDDs with flash. The threshold that makes it compelling for his company to use flash in lieu of HDDs has not yet been crossed and may still be some time away.

While flash has certainly dropped in price even as it simultaneously increases in capacity, companies should not expect to store their archive and backup data on flash in the next few years. The recently announced Samsung 15.36TB SSD, available for around $10,000, is ample proof of that. Despite its huge capacity, it still costs around 65 cents/GB, compared to roughly a nickel per GB for 8TB HDDs – about one-tenth the cost.

That said, circle the year 2020 as a potential tipping point. That year, Samsung anticipates releasing a 100TB flash drive. If that flash drive stays at the same $10,000 price point, it will put flash within striking range of HDDs on a price-per-GB basis, or make it so low in cost per GB that most shops will no longer care about the slight price differential between HDDs and flash. That price point, coupled with flash’s lower operating costs and longer life, may finally put out whatever sparks of interest or innovation are left in HDDs.
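
The price-per-GB comparison above works out as follows. The drive prices are the figures cited in this article, and the 2020 line is simply an extrapolation under the stated assumption that the $10,000 price point holds:

```python
# Price-per-GB arithmetic using the figures cited above (list prices, not street prices).
ssd_15tb_price, ssd_15tb_gb = 10_000, 15_360        # Samsung 15.36 TB SSD
hdd_price_per_gb = 0.05                              # ~a nickel per GB for 8 TB HDDs

ssd_price_per_gb = ssd_15tb_price / ssd_15tb_gb
print(f"SSD: ${ssd_price_per_gb:.2f}/GB vs HDD: ${hdd_price_per_gb:.2f}/GB")  # ~$0.65 vs $0.05

# Projection cited above: a 100 TB flash drive at the same $10,000 price point.
projected = 10_000 / 100_000
print(f"Projected 2020 flash: ${projected:.2f}/GB")  # $0.10/GB, within striking range of HDDs
```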




Server-based Storage Makes Accelerating Application Performance Insanely Easy

In today’s enterprise data centers, when one thinks performance, one thinks flash. That’s great. But that thought process can lead organizations to think that “all-flash arrays” are the only option they have to get high levels of performance for their applications. That thinking is now outdated. The latest server-based storage solution from Datrium illustrates how accelerating application performance just became insanely easy: simply click a button instead of resorting to upgrading hardware in the environment.

As flash transforms the demands of application owners, organizations want more options to cost-effectively deploy and manage it. These include:

  • Putting lower cost flash on servers, as it performs better on servers than across a SAN.
  • Hyper-converged solutions, which have become an interesting approach to server-based storage. However, concerns remain about fixed compute/capacity scaling requirements and server hardware lock-in.
  • All-flash arrays, which have taken off in large part because they provide a pool of shared flash storage accessible to multiple servers.

Now a fourth, viable flash option has appeared on the market. While I have always had some doubts about server-based storage solutions that employ server-side software, today I changed my viewpoint after reviewing Datrium’s DVX Server-powered Storage System.

Datrium has obvious advantages over arrays as it leverages the vast, affordable, and often under-utilized server resources. But unlike hyper-converged systems, it scales flexibly and does not require a material change in server sourcing.

To achieve this end, Datrium has taken a very different approach with its “server-powered” storage system design. In effect, Datrium splits speed from durable capacity in a single end-to-end system. Storage performance and data services tap host compute and flash cache, driven by Datrium software that is uploaded to the virtual host. It then employs its DVX appliance, an integrated external storage appliance that permanently holds data and ensures the DVX system protects application data in the event of a server or flash failure.

This approach has a couple of meaningful advantages versus traditional arrays:

  • Faster flash-based performance given it is local to the server versus accessed across a SAN
  • Lower cost since server flash drives cost far less than flash drives found on an all-flash array.

But it also addresses some concerns that have been raised about hyper-converged systems:

  • Organizations may independently scale compute and capacity
  • It plugs into an organization’s existing infrastructure.

Datrium Offers a New Server-based Storage Paradigm


Datrium DVX provides the different approach needed to create a new storage paradigm. It opens new doors for organizations to:

  1. Leverage excess CPU cycles and flash capacity on ESX servers. ESX servers now exhibit the same characteristics that the physical servers they replaced once did: they have excess, idle CPU. By deploying server-based storage software at the hypervisor level, organizations can harness this excess, idle CPU to improve application performance.
  2. Capitalize on lower-cost server-based flash drives. Regardless of where flash drives reside (server-based or array-based), they deliver high levels of performance. However, server-based flash costs much less than array-based flash while providing greater flexibility to add more capacity going forward.

Accelerating Application Performance Just Became Insanely Easy

Access to excess server-based memory, CPU, and flash combines to offer another feature that array-based flash can never deliver: push-button application performance. By default, when the Datrium storage software installs on the ESX hypervisor, it limits itself to 20 percent of the vCPU available to each VM. However, not every VM uses all of its available vCPU, with many VMs using only 10-40 percent of their available resources.

Using Datrium’s DIESL Hyperdriver Software version 1.0.6.1, VM administrators can non-disruptively tap into these latent vCPU cycles. Using Datrium’s new Insane Mode, they may increase the available vCPU cycles a VM can access from 20 to 40 percent with the click of a button. While the host VM must have latent vCPU cycles available to accomplish this, it is a feature that array-based flash would be hard-pressed to ever offer, and one it likely could never deliver with the click of a button.

Server-based storage designs have shown a lot of promise over the years but have not really had the infrastructure available to them to build a runway to success. That has now changed, and Datrium is one of the first to recognize this fundamental shift in data center infrastructure and bring a product to market that capitalizes on it. As evidenced by the Insane Mode in its latest software release, organizations may now harness next-generation server-based storage designs and accelerate application performance while dramatically lowering complexity and costs in their environment.




DCIG 2015-16 All-Flash Array Buyer’s Guide Now Available

DCIG is pleased to announce the September 29 release of the DCIG 2015-16 All-Flash Array Buyer’s Guide that weights, scores and ranks more than 100 features of twenty-eight (28) all-flash arrays or array series from eighteen (18) enterprise storage providers.


The marketplace for all-flash arrays is both rapidly growing and highly competitive. Many changes have taken place in the all-flash array marketplace in the 18 months since the release of the DCIG 2014-15 Flash Memory Storage Array Buyer’s Guide in March of 2014. We have witnessed substantial increases in capacity, storage density, and performance. Over this same period, AFAs have established a track record of dramatic application acceleration and proven reliability.

All-Flash Arrays Now Replacing Traditional Enterprise Arrays in Mainstream Businesses

When we prepared the previous edition of this Buyer’s Guide, multiple vendors indicated that prospective customers were looking to move to an all-flash environment for their critical business applications. These same vendors report that enterprises are now looking to use flash memory not just for critical applications, but for all active workloads in the data center. In a recent study [1] by 451 Research, 22% of respondents have already implemented an all-flash array. Of those, 57% were using the array to speed up multiple applications and 26% had fully replaced legacy arrays.

The return on investment (ROI) of using flash for all active workloads already made sense in 2014; and subsequent improvements in all-flash performance and flash prices make the ROI of moving to all-flash storage compelling. As a result, organizations will increasingly replace primary enterprise storage systems with all-flash arrays. The DCIG 2015-16 All-Flash Array Buyer’s Guide will help those organizations accelerate the all-flash array selection process.

Enterprises wanting to change storage vendors will discover a robust and competitive marketplace. Multiple vendors have created new storage architectures designed from the ground up for flash memory and have created new expectations around ease-of-use and analytics-based proactive support.

Enterprises that are generally happy with their current storage vendor and storage system (performance issues aside) are likely to find an all-flash version of the storage system is available. Such businesses can realize some or all of the benefits of an AFA without the risk associated with migrating to a new storage architecture, and without having to re-implement data protection strategies.

A Systemic Opportunity to Speed Up the Business

The purchase of an all-flash array (AFA) is most easily justified and will have the greatest benefit if approached as a systemic data center and business opportunity. Organizations taking this approach may discover that “flash is free”. That is, the return on investment within the IT budget is rapid, and accelerating all enterprise applications creates the opportunity to reduce costs and increase opportunities across the entire business. As Eric Pearson, the CIO of InterContinental Hotels Group, was quoted in Pat Gelsinger’s VMworld 2015 keynote, “It’s no longer the big beating the small. It’s the fast beating the slow.” [2]

Who’s Who of All-Flash Array Providers

Vendors with products included in this guide are AMI, Dell, EMC, Fujitsu, Hitachi Data Systems, HP, Huawei, IBM, iXsystems, Kaminario, NetApp, Nimbus Data, Oracle, Pure Storage, SolidFire, Tegile, Violin Memory and X-IO Technologies.

The DCIG 2015-16 All-Flash Array Buyer’s Guide top 10 solutions include (in alphabetical order):

  • AMI StorTrends 3600i Series
  • Dell Compellent SC8000
  • HP 3PAR StoreServ 20000 Series
  • HP 3PAR StoreServ 7000c Series
  • Hitachi Data Systems HUS VM
  • IBM FlashSystem V9000
  • NetApp AFF8000 Series
  • Pure Storage FlashArray//m Series
  • SolidFire SF Series
  • Tegile IntelliFlash T3000 Series

The HP 3PAR StoreServ 20000 Series earned the Best-in-Class ranking among all all-flash arrays evaluated in this Buyer’s Guide. The HP 3PAR StoreServ 20000 Series stood out by offering the following capabilities:

  • Achieved the Best-in-Class rank in 3 out of 4 categories; meaning it has the most comprehensive set of features expected of a primary enterprise storage array
  • Multi-protocol SAN, NAS and object access, with support for data migration to OpenStack-based clouds; meaning it can handle any workload
  • Provides up to 46 TB raw flash capacity per rack unit (TB/U) making it one of the highest density arrays in this guide
  • Robust VMware and Microsoft technology support including VMware VVols and Microsoft SCVMM, ODX and SMB3

About the DCIG 2015-16 All-Flash Array Buyer’s Guide

DCIG creates Buyer’s Guides in order to help end users accelerate the product research and selection process; driving cost out of the research process while simultaneously increasing confidence in the results.

The DCIG 2015-16 All-Flash Array Buyer’s Guide achieves the following objectives:

  • Provides an objective, third party evaluation of products that evaluates features from an end user’s perspective
  • Provides insight into the state of the all-flash array (AFA) marketplace
  • Identifies the significant benefits organizations should look to achieve through an AFA implementation
  • Identifies key features organizations should be aware of as they evaluate AFA’s
  • Provides brief observations about the distinctive features of each array
  • Ranks each array in each ranking category and presents the results in easy to understand ranking tables that enable organizations to get an “at-a-glance” overview of the AFA marketplace
  • Provides a standardized one-page data sheet for each array so organizations may quickly do side-by-side product comparisons and get to a short list of products that may meet their requirements
  • Provides a solid foundation for getting competitive bids from different providers that are based on “apples-to-apples” comparisons

The DCIG 2015-16 All-Flash Array Buyer’s Guide is available immediately to subscribing users of the DCIG Analysis Portal. Individuals who have not yet subscribed to the DCIG Analysis Portal may test drive the DCIG Analysis Portal as well as download this Guide by following this link.

 

1 Coulter, Marco. “Flash Storage Outlook.” Proc. of Flash Memory Summit 2015, Santa Clara, CA. Flash Memory Summit, 12 Aug. 2015. Web. 28 Aug. 2015. <https://www.flashmemorysummit.com/English/Collaterals/Proceedings/2015/20150812_S203D_Coulter.pdf>.

2 Pat Gelsinger on Stage at VMworld 2015, 15:50. YouTube. YouTube, 01 Sept. 2015. <https://www.youtube.com/watch?v=U6aFO0M0bZA&list=PLeFlCmVOq6yt484cUB6N4LhXZnOso5VC7&index=3>.




DCIG All-Flash Array Buyer’s Guide will Reveal Dynamic Flash Memory Storage Marketplace

The DCIG analyst team is in the final stages of preparing a fresh snapshot of the all-flash array (AFA) marketplace. We began covering this nascent storage array category in 2012. At that time, storage appliances that permanently store data on flash memory were commonly being referred to as either flash memory storage arrays or as all-flash arrays. In the time since the publication of the DCIG 2014-15 Flash Memory Storage Array Buyer’s Guide, the storage industry has embraced the term all-flash array. For that reason the forthcoming refresh of the buyer’s guide will be called the DCIG 2015-16 All-Flash Array Buyer’s Guide.

More than terminology has changed over the last eighteen (18) months. Although we are still in the process of receiving final data updates from storage vendors, the fresh data DCIG compiled on forty-nine (49) arrays from eighteen (18) storage vendors currently shows that all-flash array vendors have substantially reduced the barriers to all-flash array adoption.

Consider the following facts drawn from comparing the 2014-15 and the 2015-16 data:

  • Flash capacity is up 2x to 4x. Compared with the arrays in the 2014-15 edition, the average raw flash memory capacity nearly quadrupled from 117 TB to 445 TB. Median and maximum raw flash memory capacities more than doubled to 88 TB and 3.9 PB per array. Effective capacity after deduplication and compression is a multiple of these numbers.
  • Flash density is up 50%. Average storage density rose 50%, from 14 TB/U to 21.5 TB/U. Median density is now 19.2 TB/U. Maximum density is 45 TB/U. These are raw flash densities; effective density is a multiple of these numbers. This means an all-flash array can store more data in less space than a traditional array. The combination of all-flash storage density and performance can result in a 10x reduction of the storage footprint in a data center.
  • Entry prices are down 50% to less than $25,000. The entry point list price is less than half what it was in 2014. Several all-flash arrays now carry a starting list price of less than $25,000, placing all-flash performance within the reach of many more businesses.
  • Majority of shipping configurations are under $250,000 list price. Among vendors that reported the list price of a typical configuration as ordered by customers, three (3) report a list price under $100,000; three (3) between $100,000 and $150,000; and thirteen (13) between $150,000 and $250,000.

Although $/GB is probably the least favorable way of evaluating flash memory costs, it is a metric familiar to many storage purchasers. Multiple vendors now claim a cost per GB—after deduplication and compression—of $2 or less. This compares favorably with traditional 15K HDD costs, though still a multiple of the cost for NL-SAS HDDs.
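
For readers who want to sanity-check such claims, the arithmetic is simple. The sketch below (Python, using hypothetical placeholder inputs rather than any vendor’s actual pricing) shows how a raw $/GB figure translates into an effective $/GB once a data reduction ratio is applied.

    # Back-of-the-envelope effective $/GB for an all-flash array.
    # All inputs are hypothetical placeholders, not vendor pricing.
    list_price_usd = 250_000   # assumed list price of a configured array
    raw_capacity_tb = 50       # assumed raw flash capacity
    data_reduction = 4.0       # assumed deduplication + compression ratio

    raw_per_gb = list_price_usd / (raw_capacity_tb * 1000)
    effective_per_gb = raw_per_gb / data_reduction
    print(f"Raw: ${raw_per_gb:.2f}/GB; effective at {data_reduction:.0f}:1: ${effective_per_gb:.2f}/GB")

With these assumed inputs the effective cost lands in the range vendors cite; the result’s sensitivity to the data reduction ratio is also why per-workload reduction rates matter so much in flash pricing discussions.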

We are looking forward to finalizing our analysis of all-flash array features and presenting the resulting snapshot of this dynamic marketplace. We expect to release the DCIG 2015-16 All-Flash Array Buyer’s Guide by the end of this month.

The Buyer’s Guide will be available to subscribing users of the DCIG Analysis Portal. All DCIG Buyer’s Guides are currently available for download at no charge to any end-user who registers for the DCIG Analysis Portal (resellers and vendors may test drive it for up to 30 days).




HP 3PAR StoreServ 8000 Series Lays Foundation for Flash Lift-off

Almost any hybrid or all-flash storage array will accelerate performance for the applications it hosts. Yet many organizations need a storage array that scales beyond just accelerating the performance of a few hosts. They want a solution that both solves their immediate performance challenges and serves as a launch pad to using flash more broadly in their environment.

Yet putting flash in legacy storage arrays is not the right approach to accomplish this objective. Enterprise-wide flash deployments require purpose-built hardware backed by Tier-1 data services. The HP 3PAR StoreServ 8000 series provides a fundamentally different hardware architecture and complements this architecture with mature software services. Together these features provide organizations the foundation they need to realize flash’s performance benefits while positioning them to expand their use of flash going forward.

A Hardware Foundation for Flash Success

Organizations almost always want to immediately realize the performance benefits of flash and the HP 3PAR StoreServ 8000 series delivers on this expectation. While flash-based storage arrays use various hardware options for flash acceleration, the 8000 series complements the enterprise-class flash HP 3PAR StoreServ 20000 series while separating itself from competitive flash arrays in the following key ways:

  • Scalable, Mesh-Active architecture. An Active-Active controller configuration and a scale-out architecture are considered the best of traditional and next-generation array architectures. The HP 3PAR StoreServ 8000 series brings these options together with its Mesh-Active architecture which provides high-speed, synchronized communication between the up-to-four controllers within the 8000 series.
  • No internal performance bottlenecks. One of the secrets to the 8000’s ability to successfully transition from managing HDDs to SSDs and still deliver on flash’s performance benefits is its programmable ASIC. The HP 3PAR ASIC, now in its 5th generation, is programmed to manage flash and optimize its performance, enabling the 8000 series to achieve over 1 million IOPS.
  • Lower costs without compromise. Organizations may use lower-cost commercial MLC SSDs (cMLC SSDs) in any 8000 series array. Leveraging its Adaptive Sparing technology and Gen5 ASIC, the array then optimizes capacity utilization within cMLC SSDs to achieve high levels of performance, extend media lifespan (backed by a 5-year warranty), and increase usable drive capacity by up to 20 percent.
  • Designed for enterprise consolidation. The 8000 series offers both 16Gb FC and 10Gb Ethernet host-facing ports. These give organizations the flexibility to connect performance-intensive applications using Fibre Channel or cost-sensitive applications via either iSCSI or NAS using the 8000 series’ File Persona feature. Using the 8000 Series, organizations can start with configurations as small as 3TB of usable flash capacity and scale to 7.3TB of usable flash capacity.

A Flash Launch Pad

As important as hardware is to experiencing success with flash on the 8000 series, HP made a strategic decision to ensure its converged flash and all-flash 8000 series models deliver the same mature set of data services that it has offered on its all-HDD HP 3PAR StoreServ systems. This frees organizations to move forward in their consolidation initiatives knowing that they can meet enterprise resiliency, performance, and high availability expectations even as the 8000 series scales over time to meet future requirements.

For instance, as organizations consolidate applications and their data on the 8000 series, they will typically consume less storage capacity using the 8000 series’ native thin provisioning and deduplication features. While storage savings vary, HP finds these features usually result in about a 4:1 data reduction ratio, which helps drive down the effective price of flash on an 8000 series array to as low as $1.50/GB.

Maybe more importantly, organizations will see minimal to no slowdown in application performance even as they implement these features, as they may be turned on even when running mixed production workloads. The 8000 series compacts data and accelerates application performance by again leveraging its Gen5 ASICs to do system-wide striping and optimize flash media for performance.

Having addressed these initial business concerns around cost and performance, the 8000 series also brings along the HP 3PAR StoreServ’s existing data management services that enable organizations to effectively manage and protect mission-critical applications and data. Some of these options include:

  • Accelerated data protection and recovery. Using HP’s Recovery Manager Central (RMC), organizations may accelerate and centralize application data protection and recovery. RMC can schedule and manage snapshots on the 8000 series and then directly copy those snapshots to and from HP StoreOnce without the use of a third-party backup application.
  • Continuous application availability. The HP 3PAR Remote Copy software either asynchronously or synchronously replicates data to another location. This provides recovery point objectives (RPOs) of minutes or seconds, or even non-disruptive application failover.
  • Delivering on service level agreements (SLAs). The 8000 series’ Quality of Service (QoS) feature ensures high priority applications get access to the resources they need over lower priority ones, including the ability to set sub-millisecond response times for these applications. However, QoS also ensures lower priority applications are still serviced and not crowded out by higher priority applications.
  • Data mobility. HP 3PAR StoreServ creates a federated storage pool to facilitate non-disruptive, bi-directional data movement between any of up to four (4) midrange or high end HP 3PAR arrays.

Onboarding Made Fast and Easy

Despite the benefits that flash technology offers and the various hardware and software features that the 8000 series provides to deliver on flash’s promise, migrating data to the 8000 series is sometimes viewed as the biggest obstacle to its adoption. As organizations may already have a storage array in their environment, moving its data to the 8000 series can be both complicated and time-consuming. To deal with these concerns, HP provides a relatively fast and easy process for organizations to migrate data to the 8000 series.

In as few as five steps, existing hosts may discover the 8000 series and then access their existing data on their old array through the 8000 series without requiring the use of any external appliance. As hosts switch to using the 8000 series as their primary array, Online Import non-disruptively copies data from the old array to the 8000 series in the background. As it migrates the data, the 8000 series also reduces the storage footprint by as much as 75 percent using its thin-aware functionality, which copies only blocks that contain data rather than every block in a volume.
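
The footprint savings come from copying only allocated blocks. The following minimal sketch, written in Python purely to illustrate the idea (it is not HP’s Online Import code, and the block size and allocation map below are assumptions), shows why a volume that is only 25 percent full migrates with roughly a 75 percent reduction in data moved.

    # Illustrative thin-aware copy: move only blocks that contain data.
    BLOCK_SIZE = 4096  # hypothetical block size

    def thin_copy(blocks, is_allocated):
        """Yield (index, block) pairs for allocated blocks only."""
        for i, blk in enumerate(blocks):
            if is_allocated(i):        # e.g., consult the source volume's allocation map
                yield i, blk

    volume = [b"\x00" * BLOCK_SIZE] * 100     # a 100-block volume...
    allocated = set(range(25))                # ...of which only 25 blocks hold data
    copied = list(thin_copy(volume, lambda i: i in allocated))
    print(f"Copied {len(copied)} of {len(volume)} blocks "
          f"({100 * (1 - len(copied) / len(volume)):.0f}% less data moved)")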

Maybe most importantly, data migrations from EMC, HDS or HP EVA arrays (and others to come) to the 8000 series may occur in real time. Hosts read data from volumes on either the old array or the new 8000 series, while all writes go to the 8000 series. Once all data is migrated, access to volumes on the old array is discontinued.

Achieve Flash Lift-off Using the HP 3PAR StoreServ 8000 Series

Organizations want to introduce flash into their environment but they want to do so in a manner that lays a foundation for their broader use of flash going forward without creating a new storage silo that they need to manage in the near term.

The HP 3PAR StoreServ 8000 series delivers on these competing requirements. Its robust hardware and mature data services work hand-in-hand to provide both the high levels of performance and Tier-1 resiliency that organizations need to reliably and confidently use flash now and then expand its use in the future. Further, they can achieve lift-off with flash as they can proceed without worrying about how they will either keep their mission-critical apps online or cost-effectively migrate, protect or manage their data once it is hosted on flash.




The Performance of a $500K Hybrid Storage Array Goes Toe-to-Toe with Million Dollar All-Flash and High End Storage Arrays

On March 17, 2015, the Storage Performance Council (SPC) updated its “Top Ten” list of SPC-2 results that includes performance metrics going back almost three (3) years to May 2012. Noteworthy in these updated results is that the three storage arrays ranked at the top are, in order, a high end mainframe-centric, monolithic storage array (the HP XP7, OEMed from Hitachi), an all-flash storage array (from startup Kaminario, the K2 box) and a hybrid storage array (Oracle ZFS Storage ZS4-4 Appliance). Making these performance results particularly interesting is that the hybrid storage array, the Oracle ZFS Storage ZS4-4 Appliance, can essentially go toe-to-toe from a performance perspective with both the million dollar HP XP7 and Kaminario K2 arrays and do so at approximately half of their cost.

Right now there is a great deal of debate in the storage industry about which of these three types of arrays – all-flash, high end or hybrid – can provide the highest levels of performance. In recent years, all-flash and high end storage arrays have gone neck-and-neck, though all-flash arrays are now generally seen as taking the lead and pulling away.

However, when price becomes a factor (and when isn’t price a factor?) such that enterprises have to look at price and performance, suddenly hybrid storage arrays surface as very attractive alternatives for many enterprises. Granted, hybrid storage arrays may not provide all of the performance of either all-flash or high end arrays, but they can certainly deliver superior performance at a much lower cost.

This is what makes the recently updated Top Ten results on the SPC website so interesting. While the published SPC results by no means cover every storage array on the market, they do provide enterprises with some valuable insight into:

  • How well hybrid storage arrays can potentially perform
  • How comparable their storage capacity is to high-end and all-flash arrays
  • How much more economical hybrid storage arrays are

Looking at how the three arrays that currently sit atop the SPC-2 Top Ten list were configured for this test, they were comparable in at least one of the ways enterprises examine when making a buying decision: all three had comparable amounts of raw capacity.

Raw Capacity

  • High-End HP XP7: 230TB
  • All-Flash Kaminario K2: 179TB
  • Hybrid Oracle ZFS Storage ZS4-4 Appliance: 175TB

Despite using comparable amounts of raw capacity for testing purposes, they got to these raw capacity totals using decidedly different media. The high end, mainframe-centric HP XP7 used 768 300GB 15K SAS HDDs to get to its 230TB total while the all-flash Kaminario K2 used 224 solid state drives (SSDs) to get to its 179TB total. The Oracle ZS4-4 stood out from these other two storage arrays in two ways. First, it used 576 300GB 10K SAS HDDs. Second, its storage media costs were a fraction of the other two. Comparing strictly list prices, its media costs were only about 16% of the cost of the HP XP7 and 27% of the cost of the Kaminario K2.

These arrays also differed in terms of how many and what types of storage networking ports they each used. The HP XP7 and the Kaminario K2 used 64 and 56 8Gb FC ports, respectively, for connectivity between the servers and their storage arrays. The Oracle ZS4-4 needed only 16 ports for connectivity, though it used InfiniBand for server-storage connectivity as opposed to 8Gb FC. The HP XP7 and Oracle ZS4-4 also used cache (512GB and ~3TB respectively) while the Kaminario K2 used no cache at all. It instead used a total of 224 solid state drives (SSDs) packaged in 28 flash nodes (eight 800GB SSDs in each flash node).

This is not meant to disparage the configuration or architecture of any of these three storage arrays, as each one uses proven technologies in its design. Yet what is notable are the end results when these three arrays, in these configurations, are subjected to the same SPC-2 performance benchmark tests.

While the HP XP7 and Kaminario K2 came out on top from an overall performance perspective, it is interesting to note how well the Oracle ZS4-4 performs and what its price/performance ratio is when compared to the high end HP XP7 and the all-flash Kaminario K2. It provides 75% to over 90% of the performance of these other arrays at a cost per MB that is up to 46% less.
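
A simple way to reason about such comparisons is SPC-2’s own price-performance metric, dollars per SPC-2 MBPS. The sketch below uses placeholder prices and throughput figures, not the published results, simply to show how relative performance and $/MBPS fall out of two numbers per system.

    # Illustrative price/performance comparison in the style of SPC-2's $/MBPS metric.
    # The figures are placeholders, not the published SPC-2 results.
    systems = {
        "High-end array":  {"price_usd": 2_000_000, "spc2_mbps": 40_000},
        "All-flash array": {"price_usd": 1_500_000, "spc2_mbps": 36_000},
        "Hybrid array":    {"price_usd":   800_000, "spc2_mbps": 31_000},
    }

    top_mbps = max(s["spc2_mbps"] for s in systems.values())
    for name, s in systems.items():
        print(f"{name}: {s['spc2_mbps'] / top_mbps:.0%} of top throughput "
              f"at ${s['price_usd'] / s['spc2_mbps']:.2f}/MBPS")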

SPC-2 Top Ten Results. Source: “Top Ten” SPC-2 Results, https://www.storageperformance.org/results/benchmark_results_spc2_top-ten

It is easy for enterprises to become enamored with all-flash arrays or remain transfixed on high-end arrays because of their proven and perceived performance characteristics and benefits. But these recent SPC-2 performance benchmarks illustrate that hybrid storage arrays such as the Oracle ZFS Storage ZS4-4 Appliance can deliver levels of performance comparable to million-dollar all-flash and high-end arrays at half their cost, numbers that any enterprise can take to the bank.




The Three Biggest Challenges to Realizing Flash’s Full Potential and Micron’s Strategy to Overcome Them

At a recent analyst briefing, Micron Storage leaders identified at least three critical transitions that must take place in order to unleash the full potential of flash memory in the data center:

  1. the transition from planar to 3D NAND, enabling a jump in global production capacity and in the capacity of individual storage devices
  2. the transition from SATA and SAS interfaces to PCIe/NVMe to increase bandwidth and reduce latency
  3. the transition to treating flash—and other future non-volatile RAM technologies—as a large pool of persistent memory rather than as a disk replacement

The Transition to 3D NAND

The transition to 3D NAND production is well underway, with 3D NAND displacing planar flash by the end of 2016 for all four NAND manufacturers and across every product category. 3D NAND will lead to greater storage densities. At the August 2014 Flash Memory Summit we saw a prototype PCIe card carrying 64TB of NAND, but no date was given for general availability. Although no specific product announcements were made during the briefing, one Micron staffer suggested we might see flash devices with 16TB raw flash capacity by the end of 2015.

The Transition to PCIe/NVMe

The existing SATA/SAS disk-drive interface is already a bottleneck for flash storage performance, and the continuing growth in per-device flash capacity makes that bottleneck more and more of an issue. Happily, the transition of the flash storage ecosystem to PCIe/NVMe during 2015 will allow for up to 4 lanes of PCIe to each flash device along with much improved queuing mechanisms. NVMe drives have the potential to deliver 10x the performance of current SATA SSDs.
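
The raw interface math explains much of this. As a rough check (encoding overheads included, protocol and queuing effects ignored), four lanes of PCIe 3.0 offer roughly six to seven times the usable bandwidth of a SATA 3.0 link; the additional gains NVMe drives show in practice come from deeper queues and lower protocol overhead.

    # Rough interface bandwidth comparison (approximate; encoding overhead included).
    sata3_mb_s = 6e9 * (8 / 10) / 8 / 1e6          # SATA 3.0: 6 Gb/s, 8b/10b -> ~600 MB/s
    pcie3_lane_mb_s = 8e9 * (128 / 130) / 8 / 1e6  # PCIe 3.0: 8 GT/s/lane, 128b/130b -> ~985 MB/s
    pcie3_x4_mb_s = 4 * pcie3_lane_mb_s            # x4 NVMe device -> ~3.9 GB/s

    print(f"SATA 3.0: ~{sata3_mb_s:.0f} MB/s")
    print(f"PCIe 3.0 x4: ~{pcie3_x4_mb_s:.0f} MB/s "
          f"(~{pcie3_x4_mb_s / sata3_mb_s:.1f}x SATA)")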

The Transition to Treating Flash as Persistent Memory

The transition to treating flash as persistent memory will be much more complex, but has the greatest potential to transform the data center through another 10x-100x jump in storage performance. Micron’s multi-pronged strategy to accelerate this transition includes:

  • creating high-trust partnerships with OEMs,
  • engaging with enterprise end users to gain a deeper understanding of their critical workloads, and
  • collaborating with key players up and down the technology stack—from operating system providers to application developers—to achieve a virtual re-integration of the entire data center technology stack.

In Micron’s view, this reintegration will not be achieved by owning all the parts, as it was in the early days of mainframe computing, but through reinvigorated collaboration and a set of win/win partnerships.

No doubt there will be many challenges and naysayers along the way, but Micron Storage seems to be well-positioned to facilitate just such a virtual reintegration, and has taken multiple steps in the last 12 months to realize this vision. Micron Storage has assembled a dream team of technology pros from leading enterprise storage/server/compute providers including Dell, EMC and Intel. These professionals understand enterprise requirements, and all appear to have left their prior companies on good terms and with relationships intact.

Creating High-trust Partnerships

Progress will be much quicker if key participants can learn to trust one another and collaborate much more deeply, especially in research and development. When I took over the leadership of IT at Buena Vista University I told my staff and my peers over and over, “If we will trust one another and think strategically, we can accomplish phenomenal things together.” They believed me and put that belief into action, creating the nation’s first wireless community (eBVyou) just 12 months after the original 802.11 WiFi standard was adopted.

The first fruits of the “high-trust partnerships with OEMs” element of Micron’s strategy were revealed during the February 19 announcement of the IBM FlashSystem V9000 and the FlashSystem 900. IBM credited Micron’s responsive collaboration during the research and development process with accelerating development; in particular helping IBM to gain the fine-grained visibility into flash cell health that enables the arrays to place the hottest data in the healthiest flash cells. This more intelligent approach to wear leveling enabled IBM to confidently transition from eMLC to Micron’s MLC flash while increasing the warranty on the resulting flash modules to a full 7 years.

Engaging with Enterprise End-users

Realizing the full potential of flash memory will require vertical integration across the technology stack—a process that will probably take years to play out. Nevertheless, there are near term opportunities to transform the performance of specific workloads by better understanding those workloads and then collaborating with two or three companies to bring a solution to market.

For example, Ed Doller, VP of Storage Technology, told me his team experimented with running a search directly on the CPU embedded in their SSD controller and achieved an 8x improvement in performance compared to running the search using a server’s primary CPU. This experiment demonstrates a truism that Ed shared with us, namely that as hardware gets more sophisticated, programmability emerges. Micron wants to leverage that programmability to address enterprise requirements with and through suitable partners.
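
The value of that kind of offload is easy to model, even though the actual SSD-controller programming model is not public. The toy sketch below simply contrasts how many records cross the host interface when filtering happens on the host versus near the data; every name and number in it is a hypothetical stand-in, not Micron’s implementation.

    # Conceptual model of near-data search: apply the predicate where the data lives
    # and return only matches, instead of shipping every record to the host CPU.
    records = [{"id": i, "temp": i % 100} for i in range(100_000)]

    def predicate(record):
        return record["temp"] > 95

    host_transferred = len(records)                          # host-side filter: everything moves
    device_matches = [r for r in records if predicate(r)]    # device-side filter: only matches move
    device_transferred = len(device_matches)

    print(f"Host-side search moves {host_transferred:,} records; "
          f"device-side search moves {device_transferred:,} "
          f"({host_transferred // device_transferred}x less)")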

Collaborating to Accelerate Storage Evolution

In order to achieve orders-of-magnitude improvements across all types of workloads, the data center technology stack will need to stop addressing non-volatile memory through block-oriented disk-based constructs and start addressing it through highly parallel page-based persistent memory constructs. This will require changes in operating systems and in applications.
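
To make the contrast concrete, the sketch below uses Python’s mmap module against an ordinary file as a stand-in for a persistent-memory region (a real deployment would use a DAX-mounted file or a persistent-memory library; the file name here is hypothetical). The point is the access model: byte-granular loads and stores instead of reading and rewriting whole blocks.

    import mmap
    import os

    path = "pmem_demo.bin"               # hypothetical file standing in for a persistent-memory region
    with open(path, "wb") as f:
        f.write(b"\x00" * 4096)          # back the mapping with one 4 KiB "block"

    # Block-oriented access: read an entire block just to inspect one byte.
    with open(path, "rb") as f:
        block = f.read(4096)
        old_value = block[123]

    # Memory-oriented access: map the file and address individual bytes directly.
    with open(path, "r+b") as f:
        with mmap.mmap(f.fileno(), 0) as region:
            region[123:124] = b"\x2a"    # byte-granular update, no block read-modify-write in app code
            new_value = region[123]

    print(old_value, new_value)          # 0 42
    os.remove(path)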

Although moving forward in some areas will require high-trust partnerships, Micron can move the operating system component forward by making direct, meaningful contributions to the Unix/Linux storage stack. Micron recently joined the Linux Foundation to signal this intention, and at the February 19 briefing announced that they are creating a storage software design center in Austin, TX to foster collaborations.

The Necessity of Unlearning and Relearning

Re-inventing the data center infrastructure will be a complex, long-term task. The greatest barriers to progress may well be human rather than technical. One barrier to progress is what we already know—our customary approach to thinking about storage. The people designing next generation data center technologies will need to unlearn a lot of what they know about current systems design and return to the origins of computing, an environment characterized primarily by compute and memory.

Darren Thomas, VP of the Storage Business Unit at Micron, has articulated just such a strategic vision—redefining the future of storage—based on recognizing the opportunities and then creating high-trust win/win partnerships with key participants outside of Micron. As one of just a handful of DRAM and NAND manufacturers in the world, Micron clearly has a lot at stake in this transformation; and just as clearly Micron has decided to take an active role in shaping and accelerating that future.

All of the dynamics highlighted above suggest that storage will continue to be a locus of innovation and a lever for transformation in the enterprise data center for years to come. DCIG will continue to provide periodic snapshots of this dynamic marketplace through its feature-oriented buyer’s guides, including the forthcoming DCIG 2015-16 Hybrid Storage Array Buyer’s Guide in late March and the DCIG 2015-16 Flash Memory Storage Array Buyer’s Guide in May of this year.




DCIG Announces Calendar of Planned Buyer’s Guide Releases in the First Half of 2015

At the beginning of 2014, I started the year with the theme: “it’s an exciting time to be part of the DCIG team”. This was due to the explosive growth we saw in website visits and popularity of our Buyer’s Guides. That hasn’t changed. DCIG Buyer’s Guides continue to grow in popularity, but what’s even more exciting is the diversity of our new products and services. This year’s theme is diversity: a range of different things. DCIG is expanding…again…in different directions.

In the past year, we have added a number of offerings to our repertoire of products and services. In addition to producing our popular Buyer’s Guides and well-known blogs, we now offer Competitive Research Services, Executive Interviews, Executive White Papers, Lead Generation, Special Reports and Webinars. Even more unique, DCIG now also offers an RFP/RFI Analysis Software Suite. This suite gives anyone (vendor, end-user or technology reseller) the ability to license the same software that DCIG uses internally to develop its Buyer’s Guides. In this way, you may use the software to do your internal technology assessments with your own scores and rankings so that the results align more closely with your specific business needs.

While we diversify our portfolio, it’s important to note that we also increased our Buyer’s Guide publication output by nearly 40% over 2013, to thirteen (13) guides. We also contracted for over 30 Competitive Advantage reports in 2014. This success is largely due to a well-planned timeline, more clearly defined processes, and the addition of new analysts. The team is busy, and here is a sneak peek at the Buyer’s Guides they are currently working on during the first half of 2015 (in order of target release date):

Hybrid Storage Array: A hybrid storage array is a physical storage appliance that dynamically places data in a storage pool combining flash memory and HDD storage (and in some cases NVRAM and/or DRAM) resources, by intelligently caching data and metadata and/or by automatically moving data from one performance tier to another. The design goal of a hybrid storage array is typically to provide the sub-2-millisecond response times associated with flash memory storage arrays at a capacity and cost similar to HDD-based arrays.

SDS Server SAN: A new Buyer’s Guide for DCIG, the SDS Server SAN is a collection of servers combining compute, memory and internal DAS storage, which removes the need for external storage in a virtualized environment. The SDS Server SAN software provides the glue between the compute and storage portions of the environment, allowing for clustering of not only the virtual hosts but the underlying file system as well. SDS Server SANs typically bundle compute and storage, use SSDs as a caching tier and SAS and/or SATA HDDs for data storage, and support one or more hypervisors.

Hybrid Cloud Backup Appliance: A Hybrid Cloud Backup Appliance is a physical appliance that comes prepackaged with server, storage and backup software. What sets this Buyer’s Guide apart from the Integrated Backup Appliance Buyer’s Guide is that a Hybrid Cloud Backup Appliance must support backup both locally and to cloud providers. In this new Buyer’s Guide, DCIG evaluates which cloud provider or providers the appliance natively supports, the options it offers to back up to the cloud, and the options available to recover data and/or applications with a cloud provider.

Private Cloud Storage Array: A private cloud storage array is a physical storage appliance located behind an organization’s firewall that enables the delivery of storage as a service to end users within an enterprise. Private cloud storage brings the benefits of public cloud storage to the enterprise—rapid provisioning/de-provisioning of storage resources through self-service tools and automated management, scalability, and REST API support for cloud-native apps—while still meeting corporate data protection, security and compliance requirements.

Flash Memory Storage Array: The Flash Memory Storage Array Buyer’s Guide is a refresh of the 2014 edition. A flash memory storage array is a solid-state storage system that contains multiple flash memory drives instead of hard disk drives.

Unified Communications: Another new guide for DCIG. Unified communications (UC) is any system that integrates real-time and non-real-time enterprise communication services such as voice, messaging, instant messaging, presence, audio and video conferencing and mobility features. The purpose of UC is to provide a consistent user interface and experience across multiple devices and media types.

Watch the latter half of the year as DCIG plans to refresh Buyer’s Guides on the following topics:

  • Big Data Tape Library
  • Deduplicating Backup Appliance
  • High End Storage Array
  • Integrated Backup Appliance
  • Midrange Unified Storage
  • SDS Storage Virtualization
  • Virtual Server Backup Software

We also have other topics that we are evaluating as the basis for new Buyer’s Guides so look for announcements on their availability in the latter half of this year.




Oracle Brings out the Big Guns, Rolls out the FS1 Flash Storage System

Dedicating a single flash-based storage array to improving the performance of a single application may be appropriate for siloed or small SAN environments. However, this is NOT an architecture that enterprises want to leverage when hosting multiple applications in larger SAN environments, especially if the flash-based array has only a few or unproven data management services behind it. The new Oracle FS1 Series Flash Storage System addresses these concerns by providing enterprises the levels of performance and the mature and robust data management services that they need to move flash-based arrays from the fringes of their SAN environments into their core.

Throwing flash memory at existing storage performance problems has been the de facto approach of most hybrid and flash memory storage arrays released to date. While these flash-based arrays deliver performance improvements of 3x to as much as 20x or more over traditional hard disk drive (HDD) based arrays, they are frequently deployed as a single array dedicated to improving the performance of a single application.

This approach breaks down in enterprise environments that want to attach multiple (and ideally all) applications to a single flash-based array. While performance improvements will likely still occur in this scenario, many flash-based arrays lack any intelligence to prioritize I/O originating from different applications.

This results in I/O traffic from higher priority applications being given the same priority as lower priority applications, based on the assumption that flash-based arrays are “so fast” they can service all I/O from all applications equally well. Meanwhile, I/Os from mission-critical applications wait as I/Os from lower priority applications get served.

The drawbacks with this approach are three-fold:

  • Enterprises want a guarantee that their most mission critical applications such as Oracle E-Business Suite will get the performance that they need when they need it over other, lower tier applications. In today’s environments, no such guarantees exist.
  • Should an application’s performance requirements change over time, today’s flash-based storage arrays have no way to natively detect these changes.
  • Business owners of lower-tier applications will internally campaign to connect their applications to these flash-based arrays as they will, by default, get the same performance as higher tier applications. This further impacts the ability of mission-critical applications to get their I/Os served in a timely manner.

Even should these arrays deliver the performance that all of these applications need, the data management services they offer are either, at best, immature or, at worst, incomplete and insufficient to meet enterprise demands. This is why today’s flash-based arrays fall short and what sets the Oracle FS1 Series Flash Storage System apart.

The Oracle FS1 was architected specifically for flash media as its primary storage, with HDD support a secondary focus. Further distinguishing it from other all-flash arrays, the FS1 offers up to two tiers of flash with an optional, additional two tiers of disk to provide a four-tier storage architecture in which data is intelligently and automatically moved between tiers.

The Oracle FS1 comes equipped with the specific technologies that today’s enterprises need to justify deploying a flash storage array into the heart of their SAN environment. It can simultaneously and successfully handle multiple different application workloads with the high levels of predictable performance that meets their specific needs.

On the hardware side, it delivers the high end specifications that enterprises expect. Architected to support as many as sixteen (16) nodes in a single logical system, the FS1 lets enterprises start with a configuration as small as two (2) nodes and then scale it to host petabytes of flash capacity in a single logical configuration. This is more than twice the size of EMC’s XtremIO (6 nodes) or an HP 3PAR StoreServ 7400 (8 nodes). In the 16 node configuration, internal Oracle tests have already shown the FS1 capable of supporting up to 80 GBps of throughput and 2 million IOPS.

It is the software side, with its native data management services, that puts the FS1 in a class by itself. Most flash-based storage arrays have either minimal data management services or, if they do offer them, the services are, at best, immature. The Oracle FS1 provides the mature, full suite of data management services that enterprises want and need to justify deploying a flash-based solution into their SAN environments.

Further, Oracle makes it easy and practical for enterprises to take advantage of this extensive set of data management services as they are included with every Oracle FS1 as part of the base system. In this way, any enterprise that deploys an Oracle FS1 has immediate access to its rich set of software features.

Consider these highlights:

  • Storage profiles associated with each application. The key to prioritizing application I/O and putting the right data on the right disk is to first establish the application priority and then associate it with the right tier or tiers of disk. To deliver on this requirement, the Oracle FS1 offers pre-defined, pre-tested and pre-tuned storage profiles that are optimized for specific applications.

Using these profiles, enterprises may, with a single click, associate each application with a specific storage profile that is optimized for that application’s specific capacity, performance and cost requirements. For example, demanding Oracle Database applications may be provisioned with “Premium Priority” high performance storage profiles that consist of tiers of flash disk. This priority level ensures that mission-critical applications receive the low latency service they require. Conversely, lower tier, less demanding applications may be associated with medium or low priority storage profiles that provision tiers of performance and capacity-oriented HDDs.

  • Application I/O prioritization. Associating applications with storage profiles eliminates the need for the Oracle FS1 to rely on the traditional “cross your fingers and hope for the best” means of application I/O prioritization. Knowing the priority of applications enables the FS1 to receive and prioritize I/Os according to the application sending them.

As it simultaneously receives I/Os from multiple different applications, it recognizes which I/Os are associated with high priority applications and services them first. This “priority in, priority out” option eliminates the risk and uncertainty associated with the “first in, first out” methodology predominantly found on most flash-based arrays today.

  • Adds business value to application I/O management. The prevalent I/O queue management technique used in flash and HDD storage systems is “first in, first out” just as it was with the first hard disk drive – the IBM 305 RAMAC – in 1958.

The world has changed a bit since then. The Oracle FS1 recognizes that different applications have different value to the enterprise, and the FS1’s QoS Plus takes this into account as it prioritizes I/O. As the FS1 receives I/Os from multiple different applications, it recognizes which I/Os are associated with “Premium Priority” (high business value) applications and services them first. This “priority in, priority out” option eliminates the risk and uncertainty associated with “first in, first out” I/O queue management.

The Oracle FS1 QoS Plus delivers further business value by placing data across up to four different storage layers (performance and capacity flash media along with performance and capacity HDDs). QoS Plus collects detailed information on each application’s storage usage profile, evaluates data for movement to different storage tiers, then combines that with auto-tiering to automatically migrate data to the most cost-effective media (flash or disk) from a $/IOP and $/GB standpoint, based on the application data’s usage profile AND the value of that data to the business.

  • Capacity optimization. To dynamically optimize data placement on available storage capacity, the Oracle FS1 stores data in 640K chunks. Storing data in chunks of this size lets it optimally use the available flash and HDD capacity without creating the management overhead that can come with smaller 4K chunks. It also minimizes the waste that can occur at the other extreme, as some storage arrays store and move data in chunks as large as 1GB (1,600x larger than the FS1’s).

The FS1 then tracks the performance of individual 640K chunks over time using workload-driven heat maps. If chunks that reside on flash are infrequently accessed or not accessed at all, they may get moved down to lower tiers of flash or disk; conversely, chunks that reside on HDDs may become more active over time, in which case they may get moved to a higher tier of disk or even flash. (A simplified sketch of this heat-map-driven placement appears after this list.)

  • Isolate data in containers. The Oracle FS1’s Storage Domains software enables the creation of multiple, virtual storage systems within a single Oracle FS1, a feature not readily available in other flash storage systems. Each storage domain is a “data container” which isolates data from other storage domains.

With Storage Domains, multiple unique environments with individually custom-tailored QoS Plus settings can reside on a single physical FS1, reducing power, cooling, and management expense. This multi-tenancy capability is ideal for private or public cloud deployments, regulatory compliance requirements, or chargeback models.

  • Optimized for Oracle Database environments. The Oracle FS1 Series supports all major operating systems, hypervisors and applications. However, enterprises running Oracle Database in their environments will experience benefits that no other vendor’s flash- or HDD-based array can offer.

By supporting Oracle Database’s Advanced Data Optimization (ADO) and Hybrid Columnar Compression (HCC), enterprises achieve levels of performance and capacity optimization for Oracle Database that other vendors’ flash-based arrays cannot provide because they are not co-engineered for deep levels of integration with Oracle Database.
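
To illustrate how business priority and chunk-level heat can combine to drive placement, here is a simplified sketch. It is not Oracle’s algorithm; the tier names, thresholds, and scoring below are assumptions made purely for illustration, with only the 640K chunk size taken from the description above.

    # Simplified, hypothetical heat-map-driven tiering over fixed-size chunks.
    CHUNK_SIZE = 640 * 1024   # 640K chunks, per the FS1 description above

    def choose_tier(accesses_per_day: int, app_priority: int) -> str:
        """Weight raw chunk heat by application priority (1 = low .. 3 = premium)."""
        score = accesses_per_day * app_priority
        if score >= 1000:
            return "performance flash"
        if score >= 100:
            return "capacity flash"
        if score >= 10:
            return "performance HDD"
        return "capacity HDD"

    # The same moderately warm chunk lands on different tiers depending on
    # the business priority of the application that owns it.
    print(choose_tier(accesses_per_day=50, app_priority=3))  # capacity flash
    print(choose_tier(accesses_per_day=50, app_priority=1))  # performance HDD

The key point the sketch captures is that placement is not driven by heat alone: a lower-priority application with the same access pattern ends up on less expensive media, which is how I/O from mission-critical applications keeps its claim on the fastest tiers.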

The Oracle FS1 Series Flash Storage System breaks new ground in the flash-based SAN array battleground by delivering more than just high levels of performance, which is where other flash-based storage arrays often start and stop. The Oracle FS1 stands apart from its competitors by providing a highly available and scalable architecture backed by a mature and proven suite of data management services that are part of its base system, not separately licensed options. With the Oracle FS1, enterprises can finally move ahead with their plans to bring flash storage arrays into the core of their SAN environments to run multiple applications and workloads, rather than deploying them as single-application point products.
