DCIG Will Provide Update on All-flash Array Advances at Flash Memory Summit 2019

Flash Memory Summit is the world’s largest storage industry event featuring the trends, innovations, and influencers driving the adoption of flash memory. DCIG will again present at the Summit this year. DCIG’s presentation will draw from its independent research into all-flash arrays and the Competitive Intelligence that DCIG performs on behalf of its clients.

The session will highlight recent developments in all-flash arrays and the rapidly changing competitive landscape for these products. Ken Clipperton, DCIG’s Lead Analyst for Storage, will speak on Tuesday, August 6th, from 9:45-10:50 AM. The session is called BMKT-101B-1: Annual Update on Flash Arrays.

Just as DCIG does in its reports, Mr. Clipperton will discuss both the “What” and the “So what?” of these advances in all-flash arrays. The presentation will cover the changes occurring in all-flash arrays, the value they create for organizations implementing them, and the key topic areas that DCIG focuses on in its competitive intelligence reports.

Mr. Clipperton will cover the following topics:

  • Advances in front-end connectivity to the storage network/application servers
  • Advances in back-end connectivity to storage media
  • Integration of storage-class memory
  • Integrations with other elements in the data center
  • Cloud connectivity
  • Delivery models
  • Predictive analytics
  • Proactive support
  • Licensing
  • Storage-as-a-Service (OpEx model)
  • Guarantee programs
  • Expectations about developments in the near-term future

If you will be at FMS, we hope that you will be able to attend this session and then stick around to introduce yourself and share your perspectives on where the AFA marketplace is heading.

Whether or not you are able to attend FMS or DCIG's session at the Summit, we invite you to sign up for our newsletter. To request more information about DCIG's Competitive Intelligence services, click on this link.

Be sure to check back on the DCIG website after the event to get our take on the Summit and the products we believe deserve “Best in Show” honors.




Fast Network Connectivity Key to Unlocking All-flash Array Performance

The current generation of all-flash arrays offers enough performance to saturate the network connections between the arrays and application servers in the data center. In many scenarios, the key limiter to all-flash array performance is storage network bandwidth. Therefore, all-flash array vendors have been quick to adopt the latest advances in storage network connectivity.

Fast Networks are Here, and Faster Networks are Coming

Chart showing current and future Ethernet speeds

Ethernet is now available with connection speeds up to 400 Gb per second. Fibre Channel now reaches speeds up to 128 Gb per second. As discussed during a recent SNIA presentation, the roadmaps for both technologies forecast another 2x to 4x increase in performance.
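To put these speeds in perspective, here is a rough back-of-the-envelope sketch. The per-SSD throughput figure is an illustrative assumption rather than a measured value, and protocol overhead is ignored:

```python
def links_saturated(ssd_read_gbps: float, ssd_count: int, link_gbps: float) -> float:
    """Return how many links of the given speed a shelf of SSDs can saturate.

    All speeds are in gigabits per second; protocol overhead is ignored.
    """
    return (ssd_read_gbps * ssd_count) / link_gbps

# Assume a single NVMe SSD streams roughly 3 GB/s (~24 Gb/s) of reads.
ssd_gbps = 24
print(links_saturated(ssd_gbps, 24, 32))    # a 24-SSD shelf vs. 32 Gb FC links
print(links_saturated(ssd_gbps, 24, 100))   # the same shelf vs. 100 GbE links
```

Even with generous allowances for overhead, a single shelf of NVMe SSDs can outrun many 32 Gb FC or 100 Gb Ethernet links, which is why array vendors race to adopt each new connectivity generation.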

While the fastest connections are generally used to create a storage network fabric among data center switches, many all-flash arrays support fast storage network connectivity.

All-flash Arrays Embrace Fast Network Connectivity

DCIG’s research into all-flash arrays identified thirty-seven (37) models that support 32 Gb FC, seventeen (17) that support 100 Gb Ethernet, and ten (10) that support 100 Gb InfiniBand connectivity. These include products from Dell EMC, FUJITSU Storage, Hitachi Vantara, Huawei, Kaminario, NEC Storage, NetApp, Nimbus Data, Pure Storage and Storbyte.

Summary chart of AFA connectivity support

Source: DCIG

Other Drivers of Fast Network Connectivity

Although all-flash storage is a key driver behind fast network connectivity, there are also several other significant drivers. Each of these has implications for the optimal balance between compute, storage, network bandwidth, and the cost of creating and managing the infrastructure.

These other drivers of fast networking include:

  • Faster servers that offer more capacity and performance density per rack unit
  • Increasing volumes of data that require increasing bandwidth
  • Increasing east-west traffic between servers in the data center due to scale-out infrastructure and distributed cloud-native applications
  • The growth of GPU-enabled AI and data mining
  • Larger data centers, especially cloud and co-location facilities that may house tens of thousands of servers
  • Fatter pipes that yield more efficient fabrics with fewer switches and cables

Predominant All-Flash Array Connectivity Use Cases

How an all-flash array connects to the network is frequently based on the type of organization deploying the array. While there are certainly exceptions to the rule, the predominant connection methods and use cases can be summarized as follows:

  • Ethernet = Cloud and Service Provider data centers
  • Fibre Channel = Enterprise data centers
  • InfiniBand = HPC environments

Recent advances in network connectivity–and the adoption of these advances by all-flash array providers–create new opportunities to increase the amount of work an all-flash array can accomplish. Therefore, organizations intending to acquire all-flash storage should consider each product's embrace of fast network connectivity as an important part of the evaluation process.




HPE Predicts Sunny Future for Cloudless Computing

Antonio Neri, CEO of HPE, declared at its Discover event last week that HPE is transforming into a consumption-driven company that will deliver “Everything as a Service” within three years. In addition, Neri put forward the larger concept of “cloudless” computing. Are these announcements a tactical response to the recent wave of public cloud adoption by enterprises, or are they something more strategic?

“Everything as a Service” is Part of a Larger Cloudless Computing Strategy

“Everything as a Service” is, in fact, part of a larger “cloudless” computing strategy that Neri put forth. Cloudless. Do we really need to add yet another term to our technology dictionaries? Yes, we probably do.

picture of Antonio Neri with the word Cloudless in the background

HPE CEO, Antonio Neri, describing Cloudless Computing at HPE Discover

“Cloudless” is intentionally jarring, just like the term “serverless”. And just as “serverless” applications actually rely on servers, so also “cloudless” computing will rely on public clouds. The point is not that cloud goes away, but that it will no longer be consumed as a set of walled gardens requiring individual management by enterprises and applications.

Enterprises are indeed migrating to the cloud, massively. Attractions of the cloud include flexibility, scalability of performance and capacity, access to innovation, and its pay-per-use operating cost model. But managing and optimizing the hybrid and multi-cloud estate is challenging on multiple fronts including security, compliance and cost.

Cloudless computing is more than a management layer on top of today's multi-cloud environment. The cloudless future HPE envisions is one where the walls between the clouds are gone, replaced by a service mesh that will provide an entirely new way of consuming and paying for resources in a truly open marketplace.

Insecure Infrastructure is a Barrier to a Cloudless Future

Insecure infrastructure is a huge issue. We recently learned that more than a dozen of the largest global telecom firms were compromised for as much as seven years without knowing it. This was more than a successful spearphishing expedition. Bad actors compromised the infrastructure at a deeper level. In light of such revelations, how can we safely move toward a cloudless future?

Foundations of a Cloudless Future

Trust based on zero trust. The trust fabric is really about confidence. Confidence that infrastructure is secure. HPE has long participated in the Trusted Computing Group (TCG), developing open standards for hardware-based root of trust technology and the creation of interoperable trusted computing platforms. At HPE they call the result “silicon root of trust” technology. This technology is incorporated into HPE ProLiant Gen10 servers.

Memory-driven computing. Memory-driven computing will be important to cloudless computing because it enables real-time integration of supply chain, customer, and financial status data.

Instrumented infrastructure. Providers of services in the mesh must have an instrumented infrastructure. Providers will use the machine data in multiple ways, including analytics, automation, and billing. After all, you have to see it in order to measure it, manage it and bill for it.

Infrastructure providers have created multiple ways to instrument their systems. Lenovo TruScale measures and bills based on power consumption. In HPE’s case, it uses embedded instrumentation and the resulting machine data for predictive analytics (HPE InfoSight), billing (HPE GreenLake) and cost optimization (HPE Consumption Analytics Portal).

Cloudless Computing Coming Next Year

HPE is well positioned to deliver on the “everything as a service” commitment. It has secure hardware. It has memory-driven composable infrastructure. It has an instrumented infrastructure across the entire enterprise stack. It has InfoSight analytics. It has consumption analytics. It has its Pointnext services group.

However, achieving the larger vision of a cloudless future will involve tearing down some walls with participation from a wide range of participants. Neri acknowledged the challenges, yet promised that HPE will deliver cloudless computing just one year from now. Stay tuned.




Lenovo TruScale and Nutanix Enterprise Cloud Accelerate Enterprise Transformation

Digital transformation is an enterprise imperative. Enabling that transformation is the focus of Lenovo’s TruScale data center infrastructure services. The combination of TruScale infrastructure services and Nutanix application services creates a powerful accelerant for enterprise transformation.

Cloud is the Transformation Trigger

Many enterprises are seeking to go to the cloud, or at least to gain the benefits associated with the cloud. These benefits include:

  • pay-as-you-go operational costs instead of large capital outlays
  • agility to rapidly deploy new applications
  • flexibility to adapt to changing business requirements

For many IT departments, the trigger for serious consideration of a move to the cloud is when the CFO no longer wants to approve IT acquisitions. Unfortunately, the journey to the cloud often comes with a loss of control over both costs and data assets. Thus many enterprise IT leaders are seeking a path to cloud benefits without sacrificing control of costs and data.

TruScale Brings True Utility Computing to Data Center Infrastructure

The Lenovo Data Center Group focused on the needs of these enterprise customers by asking themselves:

  • What are customers trying to do?
  • What would be a winning consumption model for customers?

The answer they came up with is Lenovo TruScale Infrastructure Services.

Nutanix invited DCIG analysts to attend the recent .NEXT conference. While there we met with many participants in the Nutanix ecosystem, including an interview with Laura Laltrello, VP and GM of Lenovo Data Center Services. This article, and DCIG’s selection of Lenovo TruScale as one of three Best of Show products at the conference, is based largely on that interview.

As noted in the DCIG Best of Show at Nutanix .NEXT article, TruScale introduces true utility computing to the data center. Lenovo bills TruScale clients a monthly management fee plus a utilization charge based on the power consumed by the Lenovo-managed IT infrastructure. Clients can commit to a certain level of usage and be billed a lower rate for that baseline. This is similar to reserved instances on Amazon Web Services, except that customers pay only for actual usage, not reserved capacity.
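As a sketch of how such a utility billing model can work, consider the hypothetical function below. The function name, rates, and fee are illustrative assumptions, not Lenovo's actual pricing:

```python
def utility_invoice(management_fee: float,
                    kwh_used: float,
                    committed_kwh: float,
                    base_rate: float,
                    overage_rate: float) -> float:
    """Hypothetical utility-style monthly bill: a flat management fee plus a
    power-based utilization charge. Usage up to the committed baseline is
    billed at the (lower) base rate; usage beyond it at the overage rate.
    Only actual usage is billed -- an unused commitment costs nothing extra."""
    billable_base = min(kwh_used, committed_kwh)
    overage = max(kwh_used - committed_kwh, 0.0)
    return management_fee + billable_base * base_rate + overage * overage_rate

# A month with 1,200 kWh consumed against a 1,000 kWh commitment:
bill = utility_invoice(management_fee=1000.0, kwh_used=1200.0,
                       committed_kwh=1000.0, base_rate=0.10, overage_rate=0.15)
# fee + 1,000 kWh at the base rate + 200 kWh of overage
```

Because the bill is computed from metered consumption, a customer that stays under its commitment simply pays less that month, unlike a reserved-capacity model where the commitment itself is charged.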

infographic summarizing Lenovo TruScale features

Source: Lenovo

This power consumption-based approach is especially appealing to enterprises and service providers for which one or more of the following holds true:

  • Their data center workloads tie directly to revenue.
  • They want IT to focus on enabling digital transformation, not infrastructure management.
  • They need to retain possession or secure control of their data.

Lenovo TruScale Offers Everything as a Service

TruScale can manage everything as a service, including both hardware and software. Lenovo works with its customers to figure out which licensing programs make the most sense for the customer. Where feasible, TruScale includes software licensing as part of the service.

Lenovo Monitors and Manages Data Center Infrastructure

TruScale does not require companies to install any extra software. Instead, it gets its power utilization data from the management processor already embedded in Lenovo servers. It then passes this power consumption data to the Lenovo operations center(s) along with alerts and other sensor data.

Lenovo uses the data it collects to trigger support interventions. Lenovo services professionals handle all routine maintenance including installing firmware updates and replacing failed components to ensure maximum uptime. Thus, Lenovo manages data center infrastructure below the application layer.

Lenovo Provides Continuous Infrastructure (and Cost) Visibility

Lenovo also uses the data it collects to provide near real-time usage data to customers via a dashboard. This dashboard graphically presents performance versus key metrics including actual vs budget. In short, Lenovo’s approach to utility data center computing provides a distinctive and easy means to deploy and manage infrastructure across its entire lifecycle.

Lenovo Integrates with Nutanix Prism

Lenovo TruScale infrastructure services cover the entire range of Lenovo ThinkSystem and ThinkAgile products. The software-defined infrastructure products include pre-integrated solutions for Nutanix, Azure HCI, Azure Stack and VMware.

Lenovo has taken extra steps to integrate its products with Nutanix. These include:

  • ThinkAgile XClarity Integrator for Nutanix is available via the Nutanix Calm marketplace. It works in concert with Prism to integrate server data and alerts into the Prism management console.
  • ThinkAgile Network Orchestrator is an industry-first integration between Lenovo switches and Prism. It reduces error and downtime by automatically changing physical switch configurations when changes are made to virtual Nutanix networks.

Nutanix Automates the Application Layer

Nutanix software simplifies the deployment and management of enterprise applications at scale. The following graphic, taken from the opening keynote, lists each Nutanix component and summarizes its function.

image showing summary list of Nutanix services

Source: Nutanix

The Nutanix .NEXT conference featured many customers telling how Nutanix has transformed their data center operations. Their statements about Nutanix include:

“stable and reliable virtual desktop infrastructure”

“a private cloud with all the benefits of public, under our roof and able to keep pace with our ambitions”

“giving me irreplaceable time and memories with family”

“simplicity, ease of use, scale”

Lenovo TruScale + Nutanix = Accelerated Enterprise Transformation

I was not initially a fan of the term “digital transformation.” It felt like yet another slogan that really meant, “Buy more of my stuff.” But practical applications of machine learning and artificial intelligence are here now and truly do present significant new opportunities (or threats) for enterprises in every industry. Consequently, and more than at any time in the past, the IT department has a crucial role to play in the success of every company.

Enterprises need their IT departments to transition from being “Information Technology” departments to “Intelligent Transformation” departments. TruScale and Nutanix each enable such a transition by freeing up IT staff to focus on the business rather than on technology. Together, the combination of TruScale infrastructure services and Nutanix application services creates a powerful accelerant for enterprise transformation.

Transform and thrive.

 

Disclosure: As noted above, Nutanix invited DCIG analysts to attend the .NEXT conference. Nutanix covered most of my travel expenses. However, neither Nutanix nor Lenovo sponsored this article.

Updated on 5/24/2019.




Convincing a Skeptical Buyer that Your Product is the Best

Every company tends to believe that its products are the best in whatever market it serves. Nothing wrong with that mindset – it helps your company sell its products and succeed. However, convincing a skeptical buyer of the superiority of your company's product changes the dynamics of the conversation. He or she expects you to back up your claims with facts before being persuaded to buy from you.

As a provider of competitive content for many years now, DCIG has learned a lot about how to conduct competitive research and deliver the results in a compelling and informative manner. Here are a few insights to help you convince an undecided buyer that your product is the best.

  1. Stay focused on the positive. Stay positive in all the communications you have about your products and your competitor’s products. Your prospective buyer may not agree with the glowing assessment of your product. However, one sure way to turn them off is to disparage your competitor’s product in any way.

Disparaging your competitor’s product becomes especially perilous in this age of instant communications and mobile devices. As fast as you can make a claim about your competitor’s product, your prospective buyer can search the internet and validate your assertion. If he or she finds your claim incorrect or out-of-date, you will, at best, look petty and uninformed. At worst, you may lose the buyer’s trust.

Even if you absolutely, unequivocally know your competitor does not offer a feature that your product does, stay positive. Use it as an opportunity to explain why your product offers the features it does and articulate the various use cases it solves.

  2. Present all competitive information in a high-quality, professional manner. Excel spreadsheets and Word documents serve as great tools for aggregating and storing your raw competitive data. The danger comes from presenting that raw data without first taking the time and effort to analyze it properly and then present it professionally.

Analyzing it, organizing it, and then presenting it in a professional manner take additional time and expertise above and beyond the time and expertise required to collect the data. These steps may even prompt you to go back and re-validate some of your data and initial assumptions.

  3. Use a third party to validate competitive research. Even if you collect all the competitive data and take the time to prepare it professionally, presenting yourself as the source of the data about both your own product and your competitor's products can create doubts in the buyer's mind. At a minimum, the buyer will question the data's validity and objectivity.

Here is where having a third party review your data, validate your conclusions, and ideally even present the information can add significant value. A third party can help you identify potential biases in the data-gathering stage, double-check your work, and save you the time, hassle, and expense of putting together a professional presentation that lays out the differences between your product and your competitor's. This third-party validation will heighten the value of the competitive content when you share it with your skeptical buyer.

Your product is the best and you know it. Maybe even your competitor knows it. However, at the end of the day, it only matters if your prospective buyer comes to that same conclusion. Presenting the right information objectively and professionally will go a long way toward persuading a skeptical buyer that you have the right product for his or her needs. If this sounds like a challenge you face, DCIG would love to help. Feel free to reach out to DCIG by contacting us at this email address.




DCIG’s ISC West 2019 Best of Show in Video Surveillance

ISC West—the International Security Conference and Exposition—provides insight into some of the biggest trends in the security industry. The conference attracted more than 30,000 attendees and nearly 1,000 vendors earlier this month. DCIG analysts planned our attendance at this year’s conference with a focus on video surveillance, especially video analytics. We had an eye-opening experience.

Artificial Intelligence is More Than a Buzzword

Artificial intelligence was one of the major themes of the conference. We saw some disappointing examples of vendors stretching to apply the artificial intelligence (AI) label to their products. In contrast, other vendors said AI is just a buzzword with no real projects demonstrating value in the field.

From what we gleaned from our experience at ISC West, video analytics in the field has advanced significantly. AI is more than a buzzword. The ability of AI to generate insight and value from surveillance video has moved from a diamond in the rough to a multi-faceted gem.

DCIG identified three companies for “Best of Show” awards in various facets of video surveillance infrastructure.

Briefcam Proves the Benefits of Video Analytics in the Field

BriefCam logo

The best example that we encountered of video analytics yielding actionable intelligence is Briefcam. At their booth, former law enforcement officer Johnmichael O’Hare demonstrated how he had used Briefcam to quickly condense four hours of surveillance video to a heat map that instantly revealed a house being used to sell drugs. He said they raided the house the next day, resulting in multiple arrests and the seizure of a significant quantity of dangerous drugs.

Briefcam is a multi-faceted tool. For example, Johnmichael demonstrated how Briefcam could be used to rapidly analyze traffic flows and add value to video for law enforcement and city planners.

Pivot3 Hyperconverged Infrastructure Handles Video Surveillance Workloads at Enterprise Scale

We found Briefcam through a mention at the Pivot3 booth. It turns out that Briefcam and Pivot3 have been partnering since 2011 to deliver integrated surveillance video storage and analytics. Pivot3 provides a hyperconverged infrastructure (HCI) that can handle video surveillance workloads at scale. The Pivot3 solution incorporates NVIDIA GPUs into its intelligent storage architecture to accelerate video analytics. The scalability of the Pivot3 HCI is important for deployments that may scale to thousands of cameras and other IoT endpoints.

Razberi Technologies EndpointDefender Secures IoT Infrastructure

razberi technologies logo

Speaking of IoT, an important element of deploying and managing a video surveillance infrastructure is securing that infrastructure. Hackers gaining control of security cameras in homes is creepy. Hackers gaining control of security cameras within enterprise networks and critical facilities is spooky on a whole different level.

Cameras destined for enterprise deployments are supposed to be more secure than devices intended for the home. Nevertheless, they depend on manufacturers to incorporate appropriate security features in firmware, and on installers properly configuring the devices during installation. These and other IoT devices depend on the infrastructure to protect the IoT devices from attacks. Prudent companies also protect the infrastructure from attacks that improperly secured IoT devices make possible. This is where EndpointDefender from Razberi Technologies comes in.

EndpointDefender secures cameras and other IoT devices at the edge, even those deemed insecure. A gentleman at the Razberi Technologies booth told of a situation where an integrator had installed hundreds of video cameras just before a federal agency issued a warning that the cameras were not secure. Rather than replacing all the video cameras, the integrator was able to replace the standard Ethernet switches with razberi EndpointDefender network appliances to harden the connected cameras and protect the network from the cybersecurity threats posed by the cameras.

Apparently, we were not the only ones who were impressed by Razberi Technologies and the EndpointDefender. The technology won SIA’s 2019 Cybersecurity New Product Showcase Award at the conference.

Analytics Can Move Video Surveillance from Cost Center to Strategic Asset

Many organizations implemented video surveillance as an operational tool and are now managing more than a petabyte of video surveillance storage. AI-enabled analytics tools are now available that in many cases can turn that operational cost center into a strategic asset. It is time to up-level our thinking and discussions within the enterprise about video surveillance.

DCIG will continue to cover developments in video surveillance and cybersecurity. If you haven’t already done so, please sign up for the weekly DCIG Newsletter so that we can keep you informed of these developments.




DCIG 2019-20 Enterprise Deduplication Backup Target Appliance Buyer’s Guide Now Available

DCIG is pleased to announce the availability of its 2019-20 Enterprise Deduplication Backup Target Appliance Buyer’s Guide, which helps enterprises assess the enterprise deduplication backup target appliance marketplace and identify which appliance may be the best fit for their environment. This Buyer’s Guide includes data sheets for 19 enterprise deduplication backup target appliances that achieved rankings of Recommended and Excellent. These products are available from five vendors: Cohesity, Dell EMC, ExaGrid, HPE, and NEC.

Enterprises rarely want to talk about the make-up of the infrastructure of their data centers anymore. They prefer to talk about artificial intelligence, cloud adoption, data analytics, machine learning, software-defined data centers, and uninterrupted business operations. As part of those discussions, they want to leverage current technologies to drive new insights into their business and, ultimately, create new opportunities for business growth or cost savings because their underlying data center technologies work as expected.

The operative phrase here becomes “works as expected”, especially as it relates to Enterprise Deduplication Backup Target Appliances. Expectations as to the exact features that an enterprise deduplication backup target appliance should deliver can vary widely.

If an enterprise only wants an enterprise deduplication backup target appliance that meets traditional data center requirements, every appliance covered in this Buyer’s Guide satisfies those needs. Each one can:

  • Serve as a target for backup software.
  • Analyze and break apart data in backup streams to optimize deduplication ratios.
  • Replicate backup data to other sites.
  • Replicate data to the cloud for archive, disaster recovery, and long-term data retention.
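The second capability above, breaking apart data in backup streams so that duplicate pieces are stored only once, can be illustrated with a toy sketch. Real appliances use far more sophisticated variable-size, content-defined chunking; this fixed-size version only conveys the fingerprint-and-store idea:

```python
import hashlib

def dedupe_stream(backup_stream: bytes, chunk_size: int = 4096):
    """Toy fixed-size-chunk deduplication: split the stream into chunks,
    store each unique chunk once (keyed by its SHA-256 fingerprint), and
    keep a recipe of fingerprints from which the stream can be rebuilt."""
    store = {}    # fingerprint -> chunk bytes (each unique chunk kept once)
    recipe = []   # ordered fingerprints describing the original stream
    for i in range(0, len(backup_stream), chunk_size):
        chunk = backup_stream[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # duplicate chunks add nothing new
        recipe.append(digest)
    return store, recipe

# Two identical "backups" back to back deduplicate almost perfectly:
data = b"A" * 8192 + b"B" * 8192
store, recipe = dedupe_stream(data + data)
print(len(recipe), len(store))  # 8 chunks referenced, only 2 stored
```

Restoring the stream is just concatenating `store[d]` for each digest `d` in the recipe, which is also how replication to another site or the cloud can ship only chunks the target does not already hold.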

While the appliances from each provider use different techniques to accomplish these objectives, and some perform these tasks better than others depending on the use case, each one does deliver on these objectives.

But for enterprises looking for a solution that enables them to meet their broader, more strategic objectives, only a couple of providers covered in this Buyer’s Guide appear to be taking the appropriate steps to position enterprises for the software-defined hybrid data center of the future. Appliances from these providers better position enterprises to perform next-generation data lifecycle management tasks while still providing the features necessary to accomplish traditional backup and recovery tasks.

It is in this context that DCIG presents its DCIG 2019-20 Enterprise Deduplication Backup Target Appliance Buyer’s Guide. As in the development of all prior DCIG Buyer’s Guides, DCIG has already done the heavy lifting for enterprise technology buyers by:

  • Identifying a common technology need with competing solutions
  • Scanning the environment to identify available products in the marketplace
  • Gathering normalized data about the features each product supports
  • Providing an objective, third-party evaluation of those features from an end-user perspective
  • Describing key product considerations and important changes in the marketplace
  • Presenting DCIG’s opinions and product feature data in a way that facilitates the rapid comparisons of various products and product features

The products that DCIG ranks as Recommended in this Guide are as follows (in alphabetical order):

Access to this Buyer’s Guide edition is available immediately through either of the following DCIG partner sites:

TechTrove

HPE




Ways Persistent Memory is Showing Up in Enterprise Storage in 2019

Persistent Memory is bringing a revolution in performance, cost and capacity that will change server, storage system, data center and software design over the next decade. This article describes some ways storage vendors are integrating persistent memory into enterprise storage systems in 2019.

Intel Optane DC Persistent Memory Modules (PMM)

As noted in the second article in this series–NVMe-oF Delivering While Persistent Memory Remains Mostly a Promise–the lack of a standard DIMM format for persistent memory is a key barrier to the development of NVDIMMs. Nevertheless, Intel recently announced general availability of pre-standard Optane DIMMs, branded Intel Optane DC Persistent Memory Modules (PMM).

Intel supports multiple modes for accessing Optane PMM. Each mode exposes different capabilities for systems to exploit. In “Memory Mode” DRAM acts as a hot-data cache in front of the Optane capacity tier. Somewhat strangely, in memory mode the Optane provides a large pool of volatile memory. A second mode for Optane PMM is called “App Direct Mode”. In App Direct Mode, Optane is persistent memory, and applications write to the Optane using load/store memory semantics.
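The load/store idea behind App Direct Mode can be sketched with Python's mmap against an ordinary file as a stand-in. A real deployment would map a file from a DAX-mounted persistent-memory filesystem, typically via the PMDK libraries; the path below is a placeholder assumption:

```python
import mmap

# Stand-in path; a real pmem setup would use e.g. a file on a DAX mount
# such as /mnt/pmem. Any file demonstrates the mechanics, but only a true
# DAX mapping gives byte-addressable persistence without a page cache.
PMEM_FILE = "/tmp/pmem_demo.bin"

with open(PMEM_FILE, "wb") as f:
    f.truncate(4096)                  # reserve one page of "persistent" space

with open(PMEM_FILE, "r+b") as f:
    pm = mmap.mmap(f.fileno(), 4096)
    pm[0:5] = b"hello"                # store: a plain memory write, no write() call
    pm.flush()                        # flush the mapped range so the data persists
    assert pm[0:5] == b"hello"        # load: a plain memory read
    pm.close()
```

The essential difference from block storage is that persistence comes from flushing stores to the mapped range rather than from issuing I/O system calls, which is what lets applications treat the media as memory.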

NetApp demonstrates one way this technology can be integrated into existing enterprise storage systems. It uses Optane DIMMs in application servers as part of the NetApp Memory Accelerated (MAX) Data solution. MAX Data writes to Optane PMM in App Direct Mode as the hot storage tier. The solution tiers cold data to NetApp AFF all-flash arrays. With NetApp MAX, applications do not need to be rewritten to take advantage of Optane. Instead, the solution presents the Optane memory as POSIX-compliant storage.

Storage Vendors are Using Optane SSDs in Multiple Ways

As noted in the first article in this series, multiple storage system providers are taking advantage of Optane SSDs. Some storage vendors, such as HPE, use the Optane SSDs to provide a large ultra-low-latency read cache. Some vendors, including E8 Storage, use Optane SSDs as primary storage. Still others use Optane SSDs as the highest performing tier of storage in a multi-tiered storage environment.

A startup called VAST Data recently emerged from stealth. Its solution uses Optane SSDs as a write buffer and metadata store in front of the primary storage pool. It uses the least expensive flash memory–currently QLC SSDs–as the only capacity tier. The architecture also disaggregates storage processing from the storage pool by running the logic in containers on servers that talk to the storage nodes via NVMe-oF.

MRAM is Being Embedded Into Storage Components

At the SNIA Persistent Memory Summit, one presenter said that the largest uses of MRAM in the data center are in enterprise SSDs, RAID controllers, storage accelerator add-in cards and network adapters. For example, IBM uses MRAM in its FlashCore Modules, its most recent generation of 2.5-inch U.2 SSDs. The MRAM replaced the supercapacitor-plus-DRAM combination used in the prior generation of SSDs, simplifying the design and enabling more capacity in less space without risk of data loss.

Persistent Memory Will Impact All Aspects of Data Processing

Technology companies have invested many millions of dollars into the development of a variety of persistent memory technologies. Some of these technologies exist only in the laboratories of these companies. But today, multiple vendors are incorporating Intel’s Optane 3D XPoint and MRAM into a variety of data center products.

We are in the very early phases of a persistent-memory-enabled revolution in performance, cost and capacity that will change server, storage system, data center and software design over the next decade. Although some aspects of this revolution are being held back by a lack of standards, multiple vendors are now shipping storage class memory as part of their enterprise storage systems. The revolution has begun.

 

This is the third in a series of articles about Persistent Memory and its use in enterprise storage. The second article in the series is NVMe-oF Delivering While Persistent Memory Remains Mostly a Promise.

This article was updated on 4/5/2019 to add a link to the prior article in the series.




NVMe-oF Delivering While Persistent Memory Remains Mostly a Promise

The SNIA Persistent Memory Summit held in late January 2019 provided a good view into the current state of the industry. Some key technologies and standards related to persistent memory are moving forward more slowly than expected. Others are finally transitioning from promise to products. This article summarizes a few key takeaways from the event as they relate to enterprise storage systems.

Great Performance Gains Possible Without Modifying Software

One point the presenters at this SNIA-sponsored event took pains to make clear is that great performance gains from storage class memory are possible without making any changes to the software that uses the storage. For example, a machine learning test using Optane to extend server memory capacity allowed a standard host to complete 3x more analytics models.

These results are being obtained due to the efforts of SNIA and its member organizations. They developed the SNIA NVM Programming Model and a set of persistent memory libraries. Both Microsoft Windows and multiple Linux variants take advantage of these libraries to enable any application running on those operating systems to benefit from persistent memory.

Optane is a Gap Filler in the Storage Hierarchy, Not a DRAM Replacement

Slide from Intel’s presentation at the SNIA PM Summit, showing Optane’s place in the storage hierarchy between DRAM and NAND SSDs

One fact made clear across multiple presentations is that Optane (Intel’s brand name for 3D XPoint persistent memory) fills an important gap in the storage hierarchy, but falls short as a non-volatile replacement for DRAM. Every storage medium has strengths and weaknesses. Optane has excellent read latency and bandwidth, so deploying it as a persistent read-cache as HPE is doing may be its primary use case in enterprise storage systems.

MRAM is Shipping Now and Being Embedded Into Many Products

The main surprise for me from the event was the extent to which MRAM has become a real product. In addition to Everspin and Avalanche, both Intel and Samsung have announced that they are ready to ship STT-MRAM (spin-transfer torque magnetic RAM) in commercial production volumes.

MRAM offers read/write speeds similar to DRAM, and enough endurance to be used as a DRAM replacement in many scenarios. The initial focus of MRAM shipments is embedded devices, where the necessary surrounding standards are already in place. MRAM’s capacity, endurance and low power draw make it a great fit with the requirements of next-generation embedded edge devices.

Kevin Conley, CEO of Everspin, presenting the memory technology landscape at the PM Summit

Kevin Conley, CEO of Everspin Technologies, gave an especially helpful presentation describing the characteristics of MRAM and how it fits into the memory technology landscape. He stated that MRAM is currently being used in enterprise SSDs, RAID controllers and storage accelerator cards. His 10-minute presentation begins approximately 13 minutes into this video recording.

Persistent Memory Moving Onto the NIC

One new use case for persistent memory is to place it on network interface cards. The idea is to persist writes on the NIC before the data leaves the host server, eliminating the network and back-end storage system from the write-latency equation. It will be interesting to see how providers will integrate this capability into their storage solutions.

MRAM Memory Sticks Waiting on DDR5 and NVDIMM-P Standards

One factor holding back MRAM and other storage-class memories from being used in the familiar DIMM format is the lack of critical standards. NVDIMM-P is the standard for placing non-volatile memory on DIMMs, and the DDR5 standard will permit large-capacity DIMMs. Both standards were originally expected to be completed in 2018, but that did not happen, and no firm date for their completion was provided at the Summit.

Not all are waiting for the standards to be finalized. Intel is shipping its Optane DC Persistent Memory in DDR4-compatible DIMM format without waiting for the NVDIMM-P standard. The modules are available in capacities of 128, 256, and 512 GB, a foretaste of what NVDIMM-P will do for memory capacities. While it is good to see some pre-standard NVDIMM products being introduced, the NVDIMM-P and DDR5 standards will be key to the broad adoption of persistent memory, just as the CCITT Group 3 and IEEE 802.3 standards were to fax and networking.

NVDIMM-N Remains the Predominant Non-Volatile Memory Technology for 2019 and 2020

The predominant technology for providing non-volatile memory on the memory bus is based on the NVDIMM-N standard. These NVDIMMs pair DRAM with flash memory and a battery or capacitor. The DRAM handles all I/O until a shutdown or power loss triggers a copy of the DRAM contents to the flash memory.

NVDIMM-N modules provide the performance of DRAM and the persistence of flash memory. This makes them excellent for use as a write-cache, as iXsystems and Western Digital do in their respective TrueNAS and IntelliFlash enterprise storage arrays.
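The NVDIMM-N behavior described above can be reduced to a toy model (the class and method names below are hypothetical; real modules implement this in hardware with capacitor-backed save/restore logic):

```python
class ToyNVDIMM:
    """Minimal model of NVDIMM-N behavior: I/O is served entirely from
    DRAM, and a power-loss event triggers a capacitor-powered copy of
    the DRAM contents to flash, restored on the next power-up."""

    def __init__(self):
        self.dram, self.flash = {}, {}

    def write(self, addr, value):
        self.dram[addr] = value          # normal I/O never touches flash

    def power_loss(self):
        self.flash = dict(self.dram)     # capacitor powers the save
        self.dram = {}                   # DRAM contents are lost

    def power_restore(self):
        self.dram = dict(self.flash)     # contents restored at boot
```

In this model, normal writes never pay a flash latency penalty; flash is touched only during the save triggered by power loss, which is why NVDIMM-N delivers DRAM performance with flash persistence.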

NVMe-oF Delivers in 2019 and 2020

If the DDR5 and NVDIMM-P standards are published by the end of 2019, we may see MRAM and other storage class memory technologies in enterprise storage systems by 2021. In the meantime, enterprise storage providers will focus on integrating NVMe and NVMe-oF into their products to provide advances in storage performance. Multiple vendors are already shipping NVMe-oF compliant products. These include E8 Storage, Pavilion Data Systems, Kaminario, and Pure Storage.

Learn More About Persistent Memory

DCIG focuses most of its efforts on enterprise technology that is currently available in the marketplace. Nevertheless, we believe that persistent memory will have significant implications for servers, storage and data center designs within the technology planning horizons of most enterprises. As such, it is important for anyone involved in enterprise information technology to understand those implications.

You can learn more about persistent memory from the people and organizations that are driving the industry forward. SNIA is making all the presentations from the Persistent Memory Summit available for viewing at https://www.snia.org/pm-summit.

DCIG will continue to cover developments in persistent memory, especially as it makes its way into enterprise technology products. If you haven’t already done so, please sign up for the weekly DCIG Newsletter so that we can keep you informed of these developments.

 

This is the second in a series of articles about Persistent Memory and its use in enterprise storage. The first article in the series is Caching vs Tiering with Storage Class Memory and NVMe – A Tale of Two Systems. The third article is Ways Persistent Memory is Showing Up in Enterprise Storage in 2019.

This article was updated on 4/1/2019 to add more detail about MRAM and NVDIMM-P, and on 4/5/2019 to add links to the other articles in the series.




DCIG Introduces Two New Offerings in 2019

DCIG often gets so busy covering all the new and emerging technologies in multiple markets that we can neglect to inform our current and prospective clients of new offerings that DCIG has brought to market. Today I address this oversight.

While many of you know DCIG for its Buyer’s Guides, blogs, and executive white papers, DCIG now offers the following two assets that companies can contract DCIG to create:

1.      DCIG Competitive Intelligence Reports. These reports start with a subset of the information DCIG gathers as part of creating its Buyer’s Guides. Each report compares features from two to five selected products and examines how the products deliver on those features. The purpose of these reports is not to declare which feature implementation is “best”. Rather, they examine how each product implements the selected features and the most appropriate use cases for those features.

2.      DCIG Content Bundle. In today’s world, people consume the same content in multiple ways. Some prefer to hear it via podcasts. Some prefer to watch it on video. Some want to digest it in bite size chunks in blog entries. Still others want the whole enchilada in the form of a white paper. To meet these various demands, DCIG delivers the same core set of content in all four of these formats as part of its newly created content bundle.

If any of these new offerings pique your interest, let us know! We would love to have the opportunity to explain how they work and provide you with a sample of these offerings. Simply click on this link to send us an email to inquire about these services.




Caching vs Tiering with Storage Class Memory and NVMe – A Tale of Two Systems

Dell EMC announced that it will soon add Optane-based storage to its PowerMAX arrays, and that PowerMAX will use Optane as a storage tier, not “just” a cache. This statement implies that using Optane as a storage tier is superior to using it as a cache. But is it?

PowerMAX will use Storage Class Memory as Tier in All-NVMe System

Some people criticized Dell EMC for taking an all-NVMe approach, thereby eliminating hybrid (flash memory plus HDD) configurations. Yet the all-NVMe decision gave its engineers an opportunity to architect PowerMAX around the inherent parallelism of NVMe. Dell EMC’s design imperative for the PowerMAX is performance over efficiency. And it does perform:

  • 290 microsecond latency
  • 150 GB per second of throughput
  • 10 million IOPS

These results were achieved with standard flash memory NVMe SSDs. The numbers will get even better when Dell EMC adds Optane-based storage class memory (SCM) as a tier. Once SCM has been added to the array, Dell EMC’s fully automated storage tiering (FAST) technology will monitor array activity and automatically move the most active data to the SCM tier and less active data to the flash memory SSDs.

The intelligence of the tiering algorithms will be key to delivering great results in production environments. Indeed, Dell EMC states that, “Built-in machine learning is the only cost-effective way to leverage SCM”.

HPE “Memory-Driven Flash” uses Storage Class Memory as Cache

HPE is one of many vendors taking the caching path to integrating SCM into their products. It recently began shipping Optane-based read caching via 750 GB NVMe SCM Module add-in cards. In testing, HPE 3PAR 20850 arrays equipped with this “HPE Memory-Driven Flash” delivered:

  • Sub-200 microseconds of latency for most IO
  • Nearly 100% of IO in under 300 microseconds
  • 75 GB per second of throughput
  • 4 million IOPS

These results were achieved with standard 12 Gb SAS SSDs providing the bulk of the storage capacity. HPE Memory-Driven Flash is currently shipping for HPE 3PAR Storage, with availability on HPE Nimble Storage later in 2019.

An advantage of the caching approach is that even a relatively small amount of SCM can enable a storage system to deliver SCM-level performance by dynamically caching hot data, even when most of the data resides on much slower and less expensive media. As with tiering, the intelligence of the algorithms is key to delivering great results in production environments.

The performance HPE is achieving with SCM is good news for other arrays based on caching-oriented storage operating systems. In particular, ZFS-based products such as those offered by Tegile, iXsystems and OpenDrives, should see substantial performance gains when they switch to using SCM for the L2ARC read cache.

What is Best – Tier or Cache?

I favor the caching approach. Caching is more dynamic than tiering, responding to workloads immediately rather than waiting for a tiering algorithm to move active data to the fastest tier on some scheduled basis. A tiering-based system may completely miss out on the opportunity to accelerate some workloads. I also favor caching because I believe it will bring the benefits of SCM within reach of more organizations.
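The responsiveness difference is easy to see in a toy model. The sketch below (illustrative only, not any vendor's actual algorithm) shows a minimal LRU read cache: a block is promoted into the SCM cache the moment it is accessed, with no scheduled migration pass.

```python
from collections import OrderedDict

class SCMReadCache:
    """Toy LRU read cache illustrating why caching reacts immediately:
    hot data is promoted on every access, not on a tiering schedule."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.cache = OrderedDict()   # block_id -> data, in LRU order

    def read(self, block_id, backing_store):
        if block_id in self.cache:                 # hit: served at SCM latency
            self.cache.move_to_end(block_id)
            return self.cache[block_id], "scm"
        data = backing_store[block_id]             # miss: served at flash latency
        self.cache[block_id] = data                # promote immediately
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)         # evict the coldest block
        return data, "flash"
```

A tiering-based system, by contrast, would leave the same block on flash until the next scheduled analysis-and-move cycle, which is exactly the lag described above.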

Whether using SCM as a capacity tier or as a cache, the intelligence of the algorithms that automate the placement of data is critical. Many storage vendors talk about using artificial intelligence and machine learning (AI/ML) in their storage systems. SCM provides a new, large, persistent, low-latency class of storage for AI/ML to work with in order to deliver more performance in less space and at a lower cost per unit of performance.

The right way to integrate NVMe and SCM into enterprise storage is simply to do so, whether as a tier, as a cache, or as both, and then use automated, intelligent algorithms to make the most of the storage class memory that is available.

Prospective enterprise storage array purchasers should take a close look at how the systems use (or plan to use) storage class memory and how they use AI/ML to inform caching and/or storage tiering decisions to deliver cost-effective performance.

 

This is the first in a series of articles about Persistent Memory and its use in enterprise storage. The second article in the series is NVMe-oF Delivering While Persistent Memory Remains Mostly a Promise.

Revised on 4/5/2019 to add the link to the next article in the series.




Your Data Center is No Place for a Space Odyssey

The first movie I remember seeing in a theater was 2001: A Space Odyssey. If you saw it, I am guessing that you remember it, too. At the core of the story is HAL, a sophisticated computer that controls everything on a space ship en route to Jupiter. The movie is ultimately a story of artificial intelligence gone awry.

When the astronauts realize that HAL has become dangerous due to a malfunction, they decide they need to turn HAL off. I still recall the chill I experienced when one of the astronauts issues the command, “Open the pod bay doors please, HAL.” And HAL responds with, “I’m sorry, Dave. I’m afraid I can’t do that.”

Artificial Intelligence is Real Today, but not Perfect

Today, we are finally experiencing voice interaction with a computer that feels as sophisticated as what that movie depicted more than 50 years ago. But sometimes with unintended or unexpected consequences.

Artificial intelligence (AI) is great, except when it is not. My sister recently purchased a vehicle with collision avoidance technology built in. Surprisingly, it engaged the emergency stop procedure on a rural highway when no traffic was approaching. Fortunately, there was no vehicle following close behind or this safety feature might have actually caused an accident. (The dealer eventually accepted the return of the vehicle.)

Artificial Intelligence in Data Center Infrastructure Products

Artificial intelligence and machine learning technologies are being incorporated into data center infrastructure products. Some of these implementations are delivering measurable value to the customers who use these products. AI/ML enabled capabilities may include:

  • AI/ML enabled by default… Yay!
  • Cloud-based analytics…Yay!
  • Proactive fault remediation… Yay!
  • Recommendations… Yay!
  • Totally autonomous operations… I’m not sure about that.

Examples of Artificial Intelligence and Machine Learning Done Right

  • HPE InfoSight – all the “Yay!” items above. For example, HPE claims that with InfoSight, 86% of problems are predicted and automatically resolved before customers even realize there is an issue.
  • HPE Memory-Driven Flash is now shipping for HPE 3PAR arrays. It is implemented as a 750 GB NVMe Intel Optane SSD add-in card that provides an extremely low-latency read cache. The read cache uses sophisticated caching algorithms to complete nearly all I/O operations in under 300 microseconds. Yet, system administrators can enable this cache per volume, giving humans the opportunity to specify which workloads are of the highest value to the business.
  • Pivot3 Dynamic QoS provides policy-based quality of service management based on the business value of workloads. The system automatically applies a set of default policies, and dynamically enforces those policies. But administrators can change the policies and change which workloads are assigned to each policy on-the-fly.

When evaluating the AI/ML capabilities of data center infrastructure products, enterprises should look for products that enable AI/ML by default, yet which humans can override based on site-specific priorities, preferably on a granular basis.

After all, when a critical line of business application is not getting the priority it deserves, the last thing you want to hear from your infrastructure is, “I’m sorry, Dave. I’m afraid I can’t do that.”

 




Number of Appliances Dedicated to Deduplicating Backup Data Shrinks even as Data Universe Expands

One would think that with the continuing explosion in the amount of data being created every year, the number of appliances that can reduce the amount of data stored by deduplicating it would be increasing. That statement is both true and flawed. On one hand, the number of backup and storage appliances that can deduplicate data has never been higher and continues to increase. On the other hand, the number of vendors that create physical target-based appliances dedicated to the deduplication of backup data continues to shrink.

Data Universe Expands

In November 2018, IDC released a report in which it estimated that the amount of data created, captured, and replicated will increase five-fold from the current 33 zettabytes (ZB) to about 175 ZB in 2025. Whether or not one agrees with that estimate, there is little doubt that there are more ways than ever in which data gets created. These include:

  • Endpoint devices such as PCs, tablets, and smart phones
  • Edge devices such as sensors that collect data
  • Video and audio recording devices
  • Traditional data centers
  • The creation of data through the backup, replication and copying of this created data
  • The creation of metadata that describes, categorizes, and analyzes this data

All these sources and means of creating data mean there is more data than ever under management. But as this occurs, the number of products originally developed to control this data growth – hardware appliances that specialize in the deduplication of backup data after it is backed up, such as those from Dell EMC, ExaGrid, and HPE – has shrunk in recent years.

Here are the top five reasons for this trend.

1. Deduplication has Moved onto Storage Arrays.

Many storage arrays, both primary and secondary, give companies the option to deduplicate data. While these arrays may not achieve the same deduplication ratios as appliances purpose-built for the deduplication of backup data, their combination of lower cost and high storage capacity offsets the inability of their deduplication software to fully optimize backup data.

2. Backup software offers deduplication capabilities.

Rather than waiting to deduplicate backup data on a hardware appliance, almost all enterprise backup software products can deduplicate data on either the client or the backup server before storing it. This eliminates the need for a storage device dedicated to deduplicating data.
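Conceptually, client-side deduplication boils down to chunking the data, hashing each chunk, and storing each unique chunk only once. A minimal fixed-block sketch follows (real products use variable-length chunking and far more robust indexing than this):

```python
import hashlib

def dedupe_chunks(data, chunk_size=4096):
    """Toy fixed-block deduplication, as a backup client might perform
    before sending data to the backup server (illustrative only)."""
    store, recipe = {}, []
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        if digest not in store:      # only unique chunks consume capacity
            store[digest] = chunk
        recipe.append(digest)        # the recipe rebuilds the original stream
    return store, recipe
```

Re-running a backup of mostly unchanged data adds recipe entries but few new chunks, which is where the capacity savings come from.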

3. Virtual appliances that perform deduplication are on the rise.

Some providers, such as Quest Software, have exited the physical deduplication backup target appliance market and re-emerged with virtual appliances that deduplicate data. These give companies new flexibility to use hardware from any provider they want and implement their software-defined data center strategy more aggressively.

4. Newly created data may not deduplicate well or at all.

Much of the new data that companies create may not deduplicate well, or at all. Audio and video files may not change and will only deduplicate if full backups are performed, which may be rare. Encrypted data will not deduplicate at all. In these circumstances, deduplication appliances are rarely, if ever, needed.

5. Multiple backup copies of the data may not be needed.

Much of the data collected from edge and endpoint devices may only need a couple of copies of data, if that. Audio and video files may also fall into this same category of not needing to retain more than a couple copies of data. To get the full benefits of a target-based deduplication appliance, one needs to backup the same data multiple times – usually at least six times if not more. This reduced need to backup and retain multiple copies of data diminishes the need for these appliances.

Remaining Deduplication Appliances More Finely Tuned for Enterprise Requirements

The reduction in the number of vendors shipping physical target-based deduplication backup appliances seems almost counter-intuitive in light of the ongoing explosion in data growth that we are witnessing. But when one considers the mix of data being created and its corresponding data protection and retention requirements, the decrease in the number of target-based deduplication appliances available is understandable.

The upside is that the vendors who do remain and the physical target-based deduplication appliances that they ship are more finely tuned for the needs of today’s enterprises. They are larger, better suited for recovery, have more cloud capabilities, and account for some of these other broader trends mentioned above. These factors and others will be covered in the forthcoming DCIG Buyer’s Guide on Enterprise Deduplication Backup Appliances.




Leading Hyperconverged Infrastructure Solutions Diverge Over QoS

Hyperconvergence is Reshaping the Enterprise Data Center

Virtualization largely shaped the enterprise data center landscape for the past ten years. Hyper-converged infrastructure (HCI) is beginning to have the same type of impact, re-shaping the enterprise data center to fully capitalize on the benefits that virtualizing the infrastructure affords them.

Hyperconverged Infrastructure Defined

DCIG defines a hyperconverged infrastructure (HCI) as a solution that pre-integrates virtualized compute, storage and data protection functions along with a hypervisor and scale-out cluster management software. HCI vendors may offer their solutions as turnkey appliances, installable software or as an instance running on public cloud infrastructure. The most common physical instantiation of—and unit of scaling for—hyperconverged infrastructure is a 1U or 2U rack-mountable appliance containing 1–4 cluster nodes.

HCI Adoption Exceeding Analyst Forecasts

Hyperconverged Infrastructure (HCI)–and the software-defined storage (SDS) technology that is a critical component of these solutions–is still in the early stages of adoption. Yet according to IDC data, spending on HCI already exceeds $5 billion annually and is growing at a rate that substantially outpaces many analyst forecasts.

Graph comparing analyst forecasts with actual hyperconverged sales growth

HCI Requirements for Next-Generation Datacenter Adoption

The success of initial HCI deployments in reducing complexity, speeding time to deployment, and lowering costs compared to traditional architectures has opened the door to an expanded role in the enterprise data center. Indeed, HCI is rapidly becoming the core technology of the next-generation enterprise data center. In order to succeed as a core technology these HCI solutions must meet a new and demanding set of expectations. These expectations include:

  • Simplified management, including at scale
  • Workload consolidation, including mission-critical

The Role of Quality of Service in Simplifying Management and Consolidating Workloads

Three performance elements that are candidates for quality of service (QoS) management are latency, IOPS, and throughput. Some HCI solutions address all three elements, others manage just a single element.

HCI solutions also take varied approaches to managing QoS in terms of fixed assignments versus relative priority. The fixed assignment approach involves assigning minimum, maximum and/or target values per volume. The relative priority approach involves assigning each volume to a priority group–like Gold, Silver or Bronze.
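The two approaches can be contrasted in a small sketch (the volume names, limits, and weights below are hypothetical):

```python
# Fixed assignment: per-volume minimum/maximum values.
FIXED_LIMITS = {"vol1": {"min_iops": 1000, "max_iops": 5000}}

# Relative priority: each volume belongs to a weighted priority group.
PRIORITY_GROUPS = {"Gold": 3, "Silver": 2, "Bronze": 1}

def throttle_fixed(volume, requested_iops):
    """Fixed assignment: clamp the request to the per-volume cap.
    (The minimum would act as a reserved floor; only the cap is shown.)"""
    return min(requested_iops, FIXED_LIMITS[volume]["max_iops"])

def share_by_priority(volume_groups, total_iops):
    """Relative priority: divide capacity in proportion to group weight."""
    weights = {v: PRIORITY_GROUPS[g] for v, g in volume_groups.items()}
    total_weight = sum(weights.values())
    return {v: total_iops * w // total_weight for v, w in weights.items()}
```

Note the trade-off visible even in this sketch: the fixed approach throttles a volume regardless of whether anyone else needs the capacity, while the relative approach only divides what is actually contended.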

Superior QoS technology creates business value by driving down operating expenses (OPEX). It dramatically reduces the amount of time IT staff must spend troubleshooting service level agreement (SLA) related problems.

Superior QoS also creates business value by driving down capital expenses (CAPEX). It enables more workloads to be confidently consolidated onto less hardware. The more intelligent it is, the less over-provisioning (and over-purchasing) of hardware will be required.

Finally, QoS can be applied to workload performance alone or to performance and data protection to meet service level agreements in both domains.

How Some Popular Hyperconverged Infrastructure Solutions Diverge Over QoS

DCIG is in the process of updating its research on hyperconverged infrastructure solutions. In the process we have observed that these solutions take very divergent approaches to quality of service.

Cisco HyperFlex offers QoS on the NIC, which is useful for converged networking, but does not offer storage QoS that addresses application priority within the solution itself.

Dell EMC VxRail QoS is very basic. Administrators can assign fixed IOPS limits per volume. Workloads using those volumes get throttled even when there is no resource contention, yet still compete for IOPS with more important workloads. This approach to QoS does protect a cluster from a rogue application consuming too many resources, but is probably a better fit for managed service providers than for most enterprises.

Nutanix “Autonomic QoS” automatically prioritizes user applications over back end operations whenever contention occurs. Nutanix AI/ML technology understands common workloads and prioritizes different kinds of IO from a given application accordingly. This approach offers great appeal because it is fully automatic. However, it is global and not user configurable.

Pivot3 offers intelligent policy-based QoS. Administrators assign one of five QoS policies to each volume when it is created. In addition to establishing priority, each policy assigns targets for latency, IOPS and throughput. Pivot3’s Intelligence Engine then prioritizes workloads in real-time based on those policies. The administrator assigning the QoS policy to the volume must know the relative importance of the associated workload; but once the policy has been assigned, performance management is “set it and forget it”. Pivot3 QoS offers other advanced capabilities including applying QoS to data protection and the ability to change QoS settings on-the-fly or on a scheduled basis.

QoS Ideal = Automatic, Intelligent and Configurable

The ideal quality of service technology would be automatic and intelligent, yet configurable. Though none of these hyperconverged solutions may fully realize that ideal, Nutanix and Pivot3 both bring significant elements of this ideal to market as part of their hyperconverged infrastructure solutions.

Enterprises considering HCI as a replacement for existing core data center infrastructure should give special attention to how the solution implements quality of service technology. Superior QoS technology will reduce OPEX by simplifying management and reduce CAPEX by consolidating many workloads onto the solution.




The Early Implications of NVMe/TCP on Ethernet Network Designs

The ratification in November 2018 of the NVMe/TCP standard officially opened the doors for NVMe/TCP to begin to find its way into corporate IT environments. Earlier this week I had the opportunity to listen in on a SNIA-hosted webinar that provided an update on NVMe/TCP’s latest developments and its implications for enterprise IT. Here are four key takeaways from that presentation and how these changes will impact corporate data center Ethernet network designs.

First, NVMe/TCP will accelerate the deployment of NVMe in enterprises.

NVMe is already available in networked storage environments using competing protocols such as RDMA, which ships as RoCE (RDMA over Converged Ethernet). The challenge is that no one (well, very few anyway) uses RDMA in any meaningful way in their environment, so using RoCE to run NVMe never gained, and will likely never gain, much momentum.

The availability of NVMe over TCP changes that. Companies already understand TCP, deploy it everywhere, and know how to scale and run it over their existing Ethernet networks. NVMe/TCP will build on this legacy infrastructure and knowledge.

Second, any latency that NVMe/TCP introduces still pales in comparison to existing storage networking protocols.

Running NVMe over TCP does introduce latency versus using RoCE. However, the latency that TCP introduces is nominal and will likely be measured in microseconds in most circumstances. Most applications will not even detect this level of latency, given the substantial jump in performance that running NVMe natively over TCP will provide versus existing storage protocols such as iSCSI and FC.

Third, the introduction of NVMe/TCP will require companies to implement Ethernet network designs that minimize latency.

Ethernet networks may implement buffering in Ethernet switches to handle periods of peak workloads. Companies will need to modify that network design technique when deploying NVMe/TCP as buffering introduces latency into the network and NVMe is highly latency sensitive. Companies will need to more carefully balance how much buffering they introduce on Ethernet switches.

Fourth, get familiar with the term “incast collapse” on Ethernet networks and how to mitigate it.

NVMe can support up to 64,000 queues, and every queue that NVMe opens initiates a TCP session. Here is where challenges may eventually surface. Simultaneously opening multiple queues results in multiple TCP sessions initiating at the same time, which could cause all of these sessions to arrive at a common congestion point in the Ethernet network simultaneously. The network remedies the resulting congestion by having all of the TCP sessions back off at the same time, an event known as incast collapse, which introduces latency into the network.
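A back-of-the-envelope model shows why: whenever the combined burst from many simultaneous sessions exceeds the switch buffer at the congestion point, packets are dropped and every affected session backs off together. The numbers below are purely illustrative:

```python
def incast_drops(sessions, burst_per_session, buffer_capacity):
    """Toy incast model: if simultaneous bursts overflow the switch
    buffer, the excess is dropped and every affected TCP session
    backs off at once (units are arbitrary packets)."""
    offered = sessions * burst_per_session
    dropped = max(0, offered - buffer_capacity)
    synchronized_backoff = dropped > 0
    return dropped, synchronized_backoff
```

With only a handful of sessions the buffer absorbs the burst; with NVMe/TCP potentially opening thousands of queues per host, synchronized bursts, and thus synchronized backoff, become far more likely.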

Source: University of California-Berkeley

Historically, this has been a very specialized and rare occurrence in networking due to the low probability that such an event would ever take place. But the introduction of NVMe/TCP into the network makes such an event much more likely to occur, especially as more companies deploy NVMe/TCP in their environments.

The Ratification of the NVMe/TCP Standard

Ratification of the NVMe/TCP standard potentially makes every enterprise data center a candidate for storage systems that can deliver dramatically better performance to their workloads. Until the performance demands of every workload in a data center are met instantaneously, some workload requests will queue up behind a bottleneck in the data center infrastructure.

Just as introducing flash memory into enterprise storage systems revealed bottlenecks in storage operating system software and storage protocols, NVMe/TCP-based storage systems will reveal bottlenecks in data center networks. Enterprises seeking to accelerate their applications by implementing NVMe/TCP-based storage systems may discover bottlenecks in their networks that must be addressed before they see the full benefits that NVMe/TCP-based storage can deliver.

To view this presentation in its entirety, follow this link.




Three Hallmarks of an Effective Competitive Intelligence System

Across more than twenty years as an IT Director, I had many sales people incorrectly tell me that their product was the only one that offered a particular benefit. Did their false claims harm their credibility? Absolutely. Were they trying to deceive me? Possibly. But it is far more likely that they sincerely believed their claims. 

What they lacked was not truthfulness but accuracy. They lacked accurate and up-to-date information about the current capabilities of competing products in the marketplace. Their competitive intelligence system had failed them.

When DCIG was recruiting me to become an analyst, I asked DCIG’s founder, Jerome Wendt, what the most surprising things were that he had learned since founding DCIG. One of the three things he mentioned in his response was the degree to which vendors lack knowledge of the product features and capabilities of their key competitors.

Reasons Vendors Lack Good Competitive Intelligence

There are many reasons why vendors lack good competitive intelligence. These include:

  • They are focused on delivering and enhancing their own product to meet the perceived needs of current and prospective customers.
  • Collecting and maintaining accurate data about even key competitors’ products can be time consuming and challenging.
  • Staff transitions may result in a loss of data continuity.

Benefits of an Effective Competitive Intelligence System

An effective competitive intelligence system increases sales by enabling partners and sales personnel to quickly grasp key product differentiators and how those translate into business benefits. Thus, it enhances the onboarding of new personnel and their opportunity for success.

Three Hallmarks of an Effective Competitive Intelligence System

The hallmarks of an effective competitive intelligence system center around three themes: data, insight and communication.

Regarding Data, the system must:

  • Capture current, accurate data about key competitor products
  • Provide data continuity across staff transitions
  • Provide analyses that surface commonalities and differences between products


Regarding Insight, the system must:

  • Clearly identify product differentiators
  • Clearly articulate the business benefits of those differentiators


Regarding Communication, the system must:

  • Provide concise content that enables partners and sales personnel to quickly grasp key product differentiators and how those translate into business benefits for CxOs and line of business executives
  • Bridge the gap between sales and marketing with messages that are tailored to be consistent with product branding
  • Provide the content at the right time and in the right format

Whatever combination of software, services and competitive intelligence personnel a company employs, an effective competitive intelligence system is an important asset for any company seeking to thrive in a competitive marketplace.

DCIG’s Competitive Intelligence Track Record

DCIG Buyer’s Guides

Since 2010, DCIG Buyer’s guides have provided hundreds of thousands with an independent look at the many products in each market DCIG covers. Each Buyer’s Guide gives decision makers insight into the features that merit particular attention, what is available now and key directions in the marketplace. DCIG produces Buyer’s Guides based on our larger bodies of research in data protection, enterprise storage and converged infrastructure.

DCIG Pocket Analyst Reports

DCIG leverages much of the Buyer’s Guide research methodology–and the competitive intelligence platform that supports that research–to create focused reports that highlight the differentiators between two products that are frequently making it onto the same short lists.

Our Pocket Analyst Reports are published and made available for sale on a third party website to substantiate the independence of each report. Vendors can license these reports for use in lead generation, internal sales training and for use with prospective clients. 

DCIG Competitive Intelligence Reports

DCIG also uses its Competitive Intelligence Platform to produce reports for internal use by our clients. These concise reports enable partners and sales personnel to quickly grasp key product differentiators and how those translate into business benefits that make sense to CxOs and line of business executives. Because these reports are for internal use, the client can have substantial input into the messaging.

DCIG Battle Cards

Each DCIG Battle Card is a succinct 2-page document that compares the client’s product or product family to one other product or product family. The client and DCIG collaborate to identify the key product features to compare, the key strengths that the client’s product offers over the competing product, and the appropriate messaging to include on the battle card. Content may be contributed by the client for inclusion on the battle card. The battle card is only for the internal use of the client and its partners and may not be distributed.

DCIG Competitive Intelligence Platform

The DCIG Competitive Intelligence (CI) Platform is a multi-tenant, platform-as-a-service (PaaS) offering backed by support from DCIG analysts. The DCIG Competitive Intelligence Platform offers the flexibility to centrally store data and compare features on competitive products. Licensees receive the ability to centralize competitive intelligence data in the cloud with the data made available internally to their employees and partners via reports prepared by DCIG analysts.

DCIG Competitive Intelligence platform and associated analyst services strengthen the competitive intelligence capabilities of our clients. Sometimes in unexpected ways…

  • Major opportunity against a competitor never faced before
  • Strategic supplier negotiation and positioning of competitor


In each case, DCIG analysis identified differentiators and third-party insights that helped close the deal.




HCI Comparison Report Reveals Key Differentiators Between Dell EMC VxRail and Nutanix NX

Many organizations view hyper-converged infrastructure (HCI) as the data center architecture of the future. Dell EMC VxRail and Nutanix NX appliances are two leading options for creating the enterprise hybrid cloud. Visibility into their respective data protection ecosystems, enterprise application certifications, solution integration, support for multiple hypervisors, scalability and maturity should help organizations choose the most appropriate solution for them.

HCI Appliances Deliver Radical Simplicity

Hyper-converged infrastructure appliances radically simplify the data center architecture. These pre-integrated appliances accelerate and simplify infrastructure deployment and management. They combine and virtualize compute, memory, storage and networking functions from a single vendor in a scale-out cluster. Thus, the stakes are high for vendors such as Dell EMC and Nutanix as they compete to own this critical piece of data center real estate.

In the last several years, HCI has also emerged as a key enabler for cloud adoption. These solutions provide connectivity to public and private clouds, and offer their own cloud-like properties. Ease of scaling, simplicity of management, plus non-disruptive hardware upgrades and data migrations are among the features that enterprises love about these solutions.

HCI Appliances Are Not All Created Equal

Many enterprises are considering HCI solutions from Dell EMC and Nutanix. A cursory examination of these two vendors and their solutions quickly reveals similarities between them. For example, both companies control the entire hardware and software stacks of their HCI appliances. Also, both providers pretest firmware and software updates and automate cluster-wide roll-outs.

Nevertheless, important differences remain between the products. Due to the high level of interest in these products, DCIG published an initial comparison in November 2017. Both providers recently enhanced their offerings. Therefore, DCIG refreshed its research and has released an updated head-to-head comparison of the Dell EMC VxRail and Nutanix NX appliances.

Updated DCIG Pocket Analyst Report Reveals Key HCI Differentiators

In this updated report, DCIG identifies six ways the HCI solutions from these two providers currently differentiate themselves from one another. This succinct, 4-page report includes a detailed feature matrix as well as insight into key differentiators between these two HCI solutions such as:

  • Breadth of ecosystem
  • Enterprise applications certified
  • Multi-hypervisor flexibility
  • Scalability
  • Solution integration
  • Vendor maturity

DCIG is pleased to make this updated DCIG Pocket Analyst Report available for purchase for $99.95 via the TechTrove marketplace. The report is temporarily also available free of charge with registration from the Unitrends website.




VMware vSphere and Nutanix AHV Hypervisors: An Updated Head-to-Head Comparison

Many organizations view hyper-converged infrastructure appliances (HCIAs) as foundational for the cloud data center architecture of the future. However, as part of an HCIA solution, one must also select a hypervisor to run on this platform. The VMware vSphere and Nutanix AHV hypervisors are two capable choices but key differences exist between them.

In the last several years, HCIAs have emerged as a key enabler for cloud adoption. Aside from the connectivity to public and private clouds that these solutions often provide, they offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises adopt HCIA solutions in general as well as HCIA solutions from providers like Nutanix, they must still evaluate key features in these solutions. One variable that enterprises should pay specific attention to is the hypervisors available to run on these HCIA solutions.

Unlike some other HCIA solutions, Nutanix gives organizations the flexibility to choose which hypervisor they want to run on their HCIA platform: the widely adopted VMware vSphere, or Nutanix’s own Acropolis hypervisor (AHV).

What is not always so clear is which one they should host on the Nutanix platform. Each hypervisor has its own set of benefits and drawbacks. To help organizations make a more informed choice as to which hypervisor is the best one for their environment, DCIG is pleased to make available its updated DCIG Pocket Analyst Report, a head-to-head comparison of the VMware vSphere and Nutanix AHV hypervisors.

This succinct, 4-page report includes a detailed product matrix as well as insight into seven key differentiators between these two hypervisors and which one is best positioned to deliver on key cloud and data center considerations such as:

  • Data protection ecosystem
  • Support for Guest OSes
  • Support for VDI platforms
  • Certified enterprise applications
  • Fit with corporate direction
  • More favorable licensing model
  • Simpler management

This DCIG Pocket Analyst Report is available for purchase for $99.95 via the TechTrove marketplace. The report is temporarily also available free of charge with registration from the Unitrends website.




Dell EMC VxRail vs Nutanix NX: Six Key HCIA Differentiators

Many organizations view hyper-converged infrastructure appliances (HCIAs) as the data center architecture of the future. Dell EMC VxRail and Nutanix NX appliances are two leading options for creating the enterprise hybrid cloud. Visibility into their respective data protection ecosystems, enterprise application certifications, solution integration, support for multiple hypervisors, scalability and maturity should help organizations choose the most appropriate solution for them.

Hyper-converged infrastructure appliances (HCIA) radically simplify the next generation of data center architectures. Combining and virtualizing compute, memory, stor­age, networking, and data protection func­tions from a single vendor in a scale-out cluster, these pre-integrated appliances accelerate and simplify infrastructure deploy­ment and management. As such, the stakes are high for vendors such as Dell EMC and Nutanix that are competing to own this critical piece of data center infrastructure real estate.

In the last several years, HCIAs have emerged as a key enabler for cloud adoption. These solutions provide connectivity to public and private clouds, and offer their own cloud-like properties. Ease of scaling, simplicity of management, and non-disruptive hardware upgrades and data migrations highlight the list of features that enterprises are coming to know and love about these solutions.

But as enterprises consider HCIA solutions from providers such as Dell EMC and Nutanix, they must still evaluate key features available on these solutions as well as the providers themselves. A cursory examination of these two vendors and their respective solutions quickly reveals similarities between them. For example, both companies control the entire hardware and software stacks of their respective HCIA solutions. Both pre-test firmware and software updates holistically and automate cluster-wide roll-outs.

Despite these similarities, differences between them remain. To help enterprises select the product that best fits their needs, DCIG published its first comparison of these products in November 2017. There is a high level of interest in these products, and both providers recently enhanced their offerings. Therefore, DCIG refreshed its research and has released an updated head-to-head comparison of the Dell EMC VxRail and Nutanix NX appliances.

In this updated report, DCIG identifies six ways the HCIA solutions from these two providers currently differentiate themselves from one another. This succinct, 4-page report includes a detailed product matrix as well as insight into key differentiators between these two HCIA solutions such as:

  • Breadth of ecosystem
  • Enterprise applications certified
  • Multi-hypervisor flexibility
  • Scalability
  • Solution integration
  • Vendor maturity

DCIG is pleased to make this updated DCIG Pocket Analyst Report available for purchase for $99.95 via the TechTrove marketplace.




Purpose-built Backup Cloud Service Providers: A Practical Starting Point for Cloud Backup and DR

The shift is on toward using cloud service providers for an increasing number of production IT functions with backup and DR often at the top of the list of the tasks that companies first want to deploy in the cloud. But as IT staff seeks to “Check the box” that they can comply with corporate directives to have a cloud solution in place for backup and DR, they also need to simultaneously check the “Simplicity,” “Cost-savings,” and “It Works” boxes.

Benefits of Using a Purpose-built Backup Cloud Service Provider

Cloud service providers purpose-built for backup and DR put companies in the best position to check all those boxes for this initial cloud use case. These providers solve their clients’ immediate challenges of easily and cost-effectively moving backup data off-site and then retaining it long term.

Equally important, addressing the off-site backup challenge in this way positions companies to do data recovery with the purpose-built cloud service provider, a general purpose cloud service provider, or on-premises. It also sets the stage for companies to regularly test their disaster recovery capabilities so they can perform them when necessary.

Choosing a Purpose-built Backup Cloud Service Provider

To choose the right cloud service provider for corporate backup and DR requirements, companies want to be cost conscious. But they also want to experience success and not put their corporate data or their broader cloud strategy at risk. A purpose-built cloud service provider such as Unitrends and its Forever Cloud solution frees companies to aggressively and confidently move ahead with a cloud deployment for their backup and DR needs.


Join Our Webinar for a Deeper Look at the Benefits of Using a Purpose-built Backup Cloud Service Provider

Join me next Wednesday, October 17, 2018, at 2 pm EST, for a webinar where I take a deeper look at both purpose-built and general purpose cloud service providers. In this webinar, I examine why service providers purpose-built for cloud backup and DR can save companies both money and time while dramatically improving the odds they can succeed in their cloud backup and DR initiatives. You can register by following this link.

 
