The Undocumented Risks of Data Migrations

All organizations hate introducing risk into their production IT environments. Yet organizations of all sizes regularly take undocumented risks whenever they migrate data from one storage target to another. The need for a data migration arises any time they introduce a new storage array or storage target into their environment. While the risk level varies from one organization to the next, none is completely immune.

IT staff, regardless of their role in the organization, generally dislike migrating data from one storage target to another. The degree to which they dislike these data migrations frequently depends upon the risks associated with completing them.

In a best-case scenario, IT staff find them an annoyance. They have the expertise, software, and processes in place to successfully execute upon them. While they do not “love” them, they do not find them overly burdensome either.

In a worst-case scenario, IT staff loathe them. They may need to use multiple different software tools to migrate data. They lack the skills and expertise to manage these tools and complete the data migration quickly. They must manage a process that involves many people and touches multiple groups.

Love them or hate them, data migrations create risk. Data migrations require that IT staff perform tasks that may be poorly documented or undocumented because they perform them infrequently. Consider the steps a typical migration involves:

Step 1 – Pre-data Migration Prep

  • Create network routing between the servers and new storage array (New zones on FC switch)
  • Assign storage capacity to the servers (LUN masking)
  • Determine which software tool(s) will be used to migrate data
  • Contact appropriate people and groups
  • Schedule change control
  • Find a date that will work for everyone
  • Verify existing servers can see new storage

Step 2 – Data Migration

  • Initiate the migration
  • Verify all data was successfully moved (see the verification sketch after these steps)
  • Verify applications run as well as, or better than, they did on the old storage

Step 3 – Post-data Migration Clean-up

  • Remove unneeded network routes (Remove unneeded zones)
  • Remove unneeded storage assignments on storage arrays (Remove unneeded LUN masking)
  • Delete data on the old array
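
As a simple illustration of the verification step above, the following Python sketch compares file checksums between the old and new storage targets. It assumes both targets are mounted on the same host at hypothetical paths (/mnt/old_array and /mnt/new_array); in practice, use whatever verification tooling your migration software provides.

```python
import hashlib
from pathlib import Path

def checksum(path, chunk_size=1 << 20):
    """Return the SHA-256 digest of a file, read in 1 MiB chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_migration(old_root, new_root):
    """Return relative paths that are missing or differ on the new target."""
    old_root, new_root = Path(old_root), Path(new_root)
    mismatches = []
    for old_file in old_root.rglob("*"):
        if not old_file.is_file():
            continue
        relative = old_file.relative_to(old_root)
        new_file = new_root / relative
        if not new_file.is_file() or checksum(old_file) != checksum(new_file):
            mismatches.append(str(relative))
    return mismatches

if __name__ == "__main__":
    # Hypothetical mount points for the old and new storage targets.
    problems = verify_migration("/mnt/old_array", "/mnt/new_array")
    if problems:
        print(f"{len(problems)} file(s) missing or mismatched on the new target:")
        for p in problems:
            print(f"  {p}")
    else:
        print("All files verified on the new storage target.")
```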

Aggravating the situation, everyone expects data migrations to occur without disrupting production applications. These expectations include applications experiencing minimal or no performance impact while the migrations take place.

To top it off, the responsible IT staff may have to work nights and weekends in addition to fulfilling their normal responsibilities. This combination of balancing competing priorities, working extra hours, and managing a complex, unfamiliar task introduces risks that companies may fail to fully grasp.

Data migrations represent what many IT staff view as both an unavoidable task and a necessary evil. However, as more organizations look toward the next generation of storage solutions to store their data, they should look for solutions that handle this task transparently and non-disruptively. By selecting cloud storage, software-defined storage, or a scale-out storage solution as their next storage target, they can ideally make their next data migration their last data migration.




Nanoseconds, Stubborn SAS, and Other Takeaways from the Flash Memory Summit 2019

Every year at the Flash Memory Summit held in Santa Clara, CA, attendees get a firsthand look at the technologies that will impact the next generation of storage. This year many of the innovations centered on forthcoming interconnects that will better deliver on the performance that flash offers today. Here are DCIG’s main takeaways from this year’s event.

Takeaway #1 – Nanosecond Response Times Demonstrated

PCI Express (PCIe) fabrics can deliver nanosecond response times using resources (CPU, memory, storage) situated in different physical enclosures. In a meeting with DCIG, PCIe provider Dolphin Interconnect Solutions demonstrated how an application could access resources (CPU, flash storage, and memory) on different devices across a PCIe fabric in nanoseconds. Separately, GigaIO announced 500 nanosecond end-to-end latency using its PCIe FabreX switches. While everyone else at the show was boasting about microsecond response times, Dolphin and GigaIO introduced nanoseconds into the conversation. Both companies ship their solutions today.

Takeaway #2 – Impact of NVMe/TCP Standard Confirmed

Ever since we heard the industry planned to port NVMe-oF to TCP, DCIG expected this would accelerate the overall adoption of NVMe-oF. Toshiba confirmed our suspicions. In discussing its Kumoscale product with DCIG, Toshiba shared that it has seen a 10x jump in sales since the industry ratified the NVMe/TCP standard. This stems from all the reasons DCIG stated in a previous blog entry: TCP is well understood, Ethernet is widely deployed, its cost is low, and it uses the infrastructure organizations already have in place.

Takeaway #3 – Fibre Channel Market Healthy, Driven by Enterprise All-flash Array

According to FCIA leaders, the Fibre Channel (FC) market is healthy. FC vendors are selling 8 million ports per year. The enterprise all-flash array market is driving FC infrastructure sales, and 32 Gb FC is shipping in volume. Indeed, DCIG’s research revealed 37 all-flash arrays that support 32 Gb FC connectivity.

Front-end connectivity is often the bottleneck in all-flash array performance, so doubling the speed of those connections can double the performance of the array. Beyond 32 Gb FC, the FCIA has already ratified the 64 Gb standard and is working on a 128 Gb FC standard. Consequently, FC has a long future in enterprise data centers.

FC-NVMe brings the benefits of NVMe-oF to Fibre Channel networks. FC-NVMe reduces protocol overhead, enabling GEN 5 (16 Gb FC) infrastructure to accomplish the same amount of work while consuming about half the CPU of standard FC.

Takeaway #4 – PCIe Will Not be Denied

All resources (CPU, memory, and flash storage) can connect with one another and communicate over PCIe. Further, using PCIe eliminates the overhead that storage protocols (FC, InfiniBand, iSCSI, SCSI) introduce, since all these resources already speak the PCIe protocol. With the PCIe 5.0 standard formally ratified in May 2019 and discussions about PCIe 6.0 occurring, the future seems bright for the growing adoption of this protocol. Further, AMD and Intel have both thrown their support behind it.

Takeaway #5 – SAS Will Stubbornly Hang On

DCIG’s research finds that over 75% of AFAs support 12 Gb/second SAS now. This predominance makes the introduction of 24G a logical next step for these arrays. SAS is a proven, mature, and economical interconnect, and few applications can yet drive the performance limits of 12 Gb, much less the forthcoming 24G standard. Adding to the likelihood that 24G moves forward, the SCSI Trade Association (STA) reported that the recent 24G plug fest went well.

Editor’s Note: This blog entry was updated on August 9, 2019, to correct grammatical mistakes and add some links.



Definitions Help Bring Sanity to the Cloud

The more DCIG covers various enterprise technologies, the more it sees the term “cloud” permeating the literature that vendors produce to describe their products. Vendors use the term “cloud” very liberally to describe their products’ capabilities. To bring some sanity to the many uses of “cloud” one encounters, here are the definitions DCIG uses to assess each product’s cloud capabilities.

The Cloud May Not Mean What You Think It Means

Read any vendor’s literature and one will encounter one or more of the following phrases:

  • Our product supports the cloud.
  • Our product is cloud-enabled.
  • It offers cloud data management.

The list goes on, with each cloud capability sounding more wonderful than the last. But what does each of these phrases mean? And does that meaning align with what the product alleges it does?

DCIG finds that vendors ascribe different meanings to the same cloud term when describing their products. This inconsistency is so prevalent that, in our internal conversations about various products and their respective cloud capabilities, we stop one another to confirm that a product delivers on our internally agreed-upon meaning of a specific term.

Ascribing Meaning to Cloud Terms

Here are the definitions of six commonly used cloud terms and phrases that we use internally to determine whether a product falls into a specific category.

Cloud

If we encounter the use of the word cloud to describe a product, it means it offers a single common pool of resources, no more, no less. This is a safe term to use when describing a product. To infer the product can scale, is highly available, etc. is to grant it attributes that it may or may not possess.

Cloud-enabled

This is the one cloud term that we currently deem meaningless. One can interpret this term to apply to public or private clouds. It can imply connectivity to clouds of many types. It can imply that the product possesses characteristics to enable future cloud … something. If a vendor uses this term to describe its product, ask the vendor to explain what it means in the context of its product.

Cloud-based

This is a new term we see more frequently. We interpret this term to mean the product is delivered as software-as-a-service hosted in the cloud. One still needs to determine whose cloud the software is hosted in and whether that cloud meets your company’s service level requirements.

Cloud Connectivity

We define this term as the product offering connectivity to a cloud storage offering such as Amazon S3 or an S3-compatible provider. These products can store files or objects with the cloud provider and retrieve them.
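
To make this concrete, here is a minimal sketch using the AWS SDK for Python (boto3) to store and retrieve a single object in Amazon S3. The bucket and file names are hypothetical, and the same pattern applies to any S3-compatible provider by pointing the client at that provider’s endpoint.

```python
import boto3

# Hypothetical bucket and object names; for an S3-compatible provider,
# add endpoint_url="https://<provider-endpoint>" to the client() call.
s3 = boto3.client("s3")
bucket = "example-connectivity-bucket"
key = "backups/2019-08-01/catalog.db"

# Store a local file as an object with the cloud provider...
s3.upload_file("catalog.db", bucket, key)

# ...and retrieve it again later.
s3.download_file(bucket, key, "catalog-restored.db")
```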

“Cloud Support” or “Supports the Cloud”

These two terms we define as offering support for cloud storage offerings from Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and any S3-compatible storage offering. The use of this term implies broader support for cloud storage offerings than the term cloud connectivity.

Cloud Data Management

This is perhaps the cloud term that is currently most overused and abused. We currently define it as one step beyond cloud support. A product that offers cloud data management can connect to multiple cloud storage offerings and manage the storage features within each cloud storage offering. In the case of AWS, it can connect to Amazon S3 and then manage data placement across its various storage tiers. The more cloud storage offerings to which it can connect and storage tiers it can manage within them, the more robust the product is.
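
As a rough illustration of managing data placement within a cloud storage offering, the sketch below (again boto3 against Amazon S3, with hypothetical names) moves a single object from the default storage class to the lower-cost Glacier tier by copying it over itself with a new storage class. Bucket lifecycle policies offer a more automated way to do the same thing at scale.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-managed-bucket"    # hypothetical bucket
key = "archives/2018/q4-report.pdf"  # hypothetical object

# Re-tier the object in place; S3 applies the new storage class
# when the object is copied over itself.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key},
    StorageClass="GLACIER",
)
```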

Defining Cloud Terms is an Imperative

Regardless of whether you agree with DCIG’s definition of these cloud terms, this post should highlight the importance of defining commonly used cloud terms. As we have found internally, failure to do so can quickly result in miscommunication and misunderstandings. DCIG’s analysts take existing definitions and either use them or adapt them to better support our analysis.

Even if you must come up with your own definitions for these terms, the sooner you do so, the better. These will equip you in the same way to understand the differences between products in the market, ask the right questions about them, and make better choices about cloud offerings.




DRaaS Providers Come of Age

As more organizations embrace a cloud-first model, everything in their IT infrastructure comes under scrutiny, including backup and recovery. A critical examination of this component of their infrastructure often prompts them to identify their primary objectives for recovery. In this area, they ultimately want simplified application recoveries that meet their recovery point and recovery time objectives. To deliver this improved recovery experience, organizations may now turn to a new generation of disaster-recovery-as-a-service (DRaaS) offerings.

A Laundry List of DRaaS’ Past Shortcomings

DRaaS may not be the first solution that comes to mind when organizations look to improve their recovery experience. They may not even believe DRaaS solutions can address their recovery challenges. Instead, DRaaS may imply that organizations must first:

  1. Figure out how to pay for it
  2. Accept there is no certainty of success
  3. Do an in-depth evaluation of their IT infrastructure and applications
  4. Re-create their environment at a DR site
  5. Perform time-consuming tests to prove DRaaS works
  6. Dedicate IT staff for days or weeks to gather information and perform DR tests

This perception about DRaaS may have held true at some level in the past. However, any organization that still adheres to this view needs to take a fresh look at how DRaaS providers now deliver their solutions.

The Evolution of DRaaS Providers

DRaaS providers have evolved in four principal ways to take the pain out of DRaaS and deliver the simplified recovery experiences that organizations seek.

1. They recognize recovery experiences are not all or nothing events.

In other words, DRaaS providers now make provisions in their solutions to do partial on-premises recoveries. In the past, organizations may have only called upon DRaaS providers when they needed a complete off-site DR of all applications. While some DRaaS providers still operate that way, that no longer applies to all of them.

Now organizations may call upon a DRaaS provider to help with recoveries even when they experience just a partial outage. This application recovery may occur on an on-premises backup appliance provided by the DRaaS provider as part of its offering.

2. They use clouds to host recoveries.

Some DRaaS providers may still make physical hosts available for some application recoveries. However, most make use of purpose-built or general-purpose clouds for application recoveries. DRaaS providers use these cloud resources to host an organization’s applications to perform DR testing or a real DR. Once completed, they can re-purpose the cloud resources for DR and DR testing for other organizations.

3. They gather the needed information for recovery and build out the templates needed for recovery.

Knowing what information to gather and then using that data to recreate a DR site can be a painstaking and lengthy process. While DRaaS providers have not eliminated this task, they shorten the time and effort required to do it. They know the right questions to ask and data to gather to ensure they can recover your environment at their site. Using this data, they build templates that they can use to programmatically recreate your IT environment in their cloud.
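
As a simplified sketch of what such a template-driven recovery might look like, assume (purely for illustration) that the provider recovers into AWS and has captured each workload’s image, size, and recovery priority. The Python below walks that hypothetical template and requests the instances in priority order.

```python
import boto3

# Hypothetical recovery template built from the information the DRaaS
# provider gathered about your environment.
RECOVERY_TEMPLATE = [
    {"name": "erp-db",   "ami": "ami-0abc1234", "type": "r5.2xlarge", "priority": 1},
    {"name": "erp-app",  "ami": "ami-0def5678", "type": "m5.xlarge",  "priority": 2},
    {"name": "file-srv", "ami": "ami-0fed9876", "type": "m5.large",   "priority": 3},
]

def recover(template, region="us-east-1"):
    """Launch each workload in priority order in the provider's cloud."""
    ec2 = boto3.client("ec2", region_name=region)
    for entry in sorted(template, key=lambda e: e["priority"]):
        ec2.run_instances(
            ImageId=entry["ami"],
            InstanceType=entry["type"],
            MinCount=1,
            MaxCount=1,
            TagSpecifications=[{
                "ResourceType": "instance",
                "Tags": [{"Key": "Name", "Value": entry["name"]}],
            }],
        )
        print(f"Requested recovery instance for {entry['name']}")

if __name__ == "__main__":
    recover(RECOVERY_TEMPLATE)
```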

4. They can perform most or all the DR on your behalf.

When a disaster strikes, the stress meter for IT staff goes straight through the roof. This stems, in part, from the fact that few, if any, of them have ever been called upon to perform a DR. As a result, they have no practical experience in performing one.

In response to this common shortfall, a growing number of DRaaS providers perform the entire DR, or minimally assist with it. Once they have recovered the applications, they turn control of the applications over to the company. At that point, the company may resume its production operations running in the DRaaS provider’s site.

DRaaS Providers Come of Age

Organizations should have a healthy fear of disasters and the challenge that they present for recovery. To pretend that disasters never happen ignores the realities that those in Southern California and Louisiana may face right now. Disasters do occur and organizations must prepare to respond.

DRaaS providers now provide a means for organizations to implement viable DR plans. They provide organizations with the means to recover on-premises or off-site and can do the DR on their behalf. Currently, small and midsize organizations remain the best fit for today’s DRaaS providers. However, today’s DRaaS solutions foreshadow what should become available in the next 5-10 years for large enterprises as well.




Five More Reasons Why Organizations Choose DCIG to Create Competitive Intelligence Reports

Last month I shared five reasons why organizations choose DCIG to prepare competitive intelligence reports on their behalf. However, that list represented only a glimpse into why companies select DCIG. This month, I share five more reasons why companies engage DCIG to produce these reports to equip their sales staff and partners.

Companies want and need professional, well-researched reports that they can use for equipping their internal sales staff and resellers. They may also use these reports to educate current and prospective clients so they ask the right questions and identify the right solution. In conversations with our clients, here are five more reasons they cite for using DCIG to create competitive intelligence reports.

1. DCIG knows how to safely present competitive information in a public forum. 

Companies may fail to realize that federal civil laws govern the publication of content that compares their products to competitors’ products. While no one goes to jail, violating these laws opens the door for a competitor to file a lawsuit against your company that can drag on for years or even decades. DCIG knows how to navigate these waters and help your company avoid these circumstances.

2. Provides succinct content that is relevant, to the point, and easy to read.

 DCIG analysts are former end-users who successfully made the case for technical solutions. They translated the technical benefits in ways that make sense to business management. DCIG also works with value added resellers. This background informs DCIG on the types of questions that these individuals will likely pose and want answered. DCIG combines this background with its years of writing experience to prepare and deliver concise, easy-to-digest reports.

3. Avoid an internal competitive intelligence crisis.

DCIG does not look to replace a company’s internal competitive intelligence team – it looks to augment it. DCIG regularly encounters companies who experience turnover in this area or who have no one dedicated to it full time. Using DCIG provides companies with consistency, continuity, and a common repository for their internal competitive intelligence function.

4. Frees your staff to focus on responding to internal sales inquiries and understanding customer needs.

Understanding and responding to the inquiries of your customers and sales staff should take precedence over continually researching your competitors’ features. The reason is simple – companies must prioritize the needs right in front of them. Conversely, DCIG regularly covers and aggregates the product features of your competitors. It simply makes sense on multiple levels for companies to outsource this function to a third party.

5. DCIG Competitive Intelligence Reports summarize the distinctive benefits of your solution.

Every company wants to distinguish its product from others in its space. That is the product’s value add. However, that approach only works if one can accurately articulate one’s differentiators. Saying a product stands apart in a feature offering only to find out it does not may result in lost credibility. It’s even worse if a current or prospective customer brings this oversight to your attention. Using DCIG, you can better mitigate the possibility of that occurrence.

Companies rightfully conclude that they can perform their own competitive intelligence. They know, probably better than any analyst firm, who their primary competitors are, and the features their own products offer that result in them winning deals. However, safely and objectively presenting that information in a professional format may require more expertise or time than your team possesses.

This is where DCIG can and has helped other organizations. If this is where your company can use some help, let us know!  You can contact DCIG by filling out this form on DCIG’s website or emailing us.




Seven Basic Questions to Screen Cloud Storage Offerings

Using cloud storage often represents the first way that most companies adopt the cloud. They leverage cloud storage to archive their data, serve as a backup target, share files, or retain data long term. These approaches offer a low-risk means for companies to get started in the cloud. However, with more cloud storage offerings available than ever, companies need to ask and answer more pointed questions to screen them.

50+ Cloud Storage Offerings

As recently as a few years ago, one could count on one hand the number of cloud storage offerings. Even now, companies may find themselves hard pressed to name more than five or six of them.

The truth of the matter is companies have more than 50 cloud storage offerings from which to choose. These offerings range from general-purpose cloud providers such as Amazon, Microsoft, and Google to specialized providers such as Degoo, hubiC, Jottacloud, and Wasabi.

The challenge companies now face is, “How do I screen these cloud storage offerings to make the right choice for me?” Making the best selection from these multiple cloud storage offerings starts by first asking and answering basic questions about your requirements.

Seven Basic Questions

Seven basic questions you should ask and answer to screen these offerings include:

  1. What type or types of data will you store in their cloud? If you only have one type of data (backups, files, or photos) to store in the cloud, a specialized cloud storage provider may best meet your needs. If you have multiple types of data (archival, backups, block, file, and/or object) to store in the cloud, a general-purpose cloud storage provider may better fit your requirements.
  2. How much data will you store in the cloud? Storing a few GBs or even a few hundred GBs of data in the cloud may not incur significant cloud storage costs. When storing hundreds of terabytes or petabytes of data in the cloud, a cloud offering with multiple tiers of storage and pricing may be to your advantage.
  3. How much time do you have to move the data to the cloud? Moving a few GBs of data to the cloud may not take very long. Moving terabytes of data (or more) may take days, weeks or even months. In these circumstances, look for cloud providers that offer tools to ingest data at your site that they can securely truck back to their site.
  4. How much time do you have to manage the cloud? No one likes to think about managing data in the cloud. Cloud providers count on this inaction as this is when cloud storage costs add up. If you have no plans to optimize data placement or the data management costs outweigh the benefits, identify a cloud storage provider that either does this work for you or makes its storage so simple to use you do not have to manage it.
  5. How often will you retrieve data from the cloud and how much will you retrieve? If you expect to retrieve a lot of data from the cloud, identify whether the cloud provider charges egress (data exit) fees and how much it charges (see the cost sketch after this list).
  6. What type of performance do you need? Storing data on lower cost, lower tiers of storage may sound great until you need that data. If waiting multiple days to retrieve it could impact your business, keep your data on the higher performing storage tiers.
  7. What type of availability do you need? Check with your cloud storage provider to verify what uptime guarantees it provides for the region where your data resides.
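
For questions 2 and 5, a back-of-the-envelope model helps put numbers to the answers. The Python sketch below estimates a monthly bill from the amount stored and the amount retrieved; the per-GB rates are placeholders, not any provider’s actual pricing.

```python
def estimate_monthly_cost(stored_gb, retrieved_gb,
                          storage_rate_per_gb=0.023,  # placeholder $/GB-month
                          egress_rate_per_gb=0.09):   # placeholder $/GB retrieved
    """Rough monthly cloud storage cost: capacity charges plus egress fees."""
    return stored_gb * storage_rate_per_gb + retrieved_gb * egress_rate_per_gb

# Example: 200 TB stored and 5 TB retrieved in a month.
print(f"${estimate_monthly_cost(200_000, 5_000):,.2f} per month")
```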

A Good Baseline

There are many more questions that companies can and should ask to select the right cloud storage offering for them. However, these seven basic questions should provide the baseline set of information companies need to screen any cloud storage offering.

If your company needs help in doing a competitive assessment of cloud storage providers, DCIG can help. You can contact DCIG by filling out this form on DCIG’s website or emailing us.




The Journey to the Cloud Begins by Speaking the Same Cloud Language

Every company wants to make the right cloud decision for their business. As a result, more companies than ever ask their vendors to describe the cloud capabilities of their products. However, as you ask your vendors cloud questions, verify that you both use the same cloud language. You may find that how you and your vendors define the cloud differ significantly which can quickly result in communication breakdowns.

Technology providers feel intense pressure to remain relevant in a rapidly changing space. As more companies adopt the cloud, they want to make sure they are part of the conversation. As such, one should have no problem identifying products that support the cloud. However, some vendors take more liberties than others in how they apply the term cloud when describing their products’ features.

The Language of Cloud

For better or worse, the term “cloud” can mean almost anything. A simple definition for the cloud implies the ability to access needed resources over a computer network. Hence, any product that claims “cloud support” means it can simply access resources over a computer network, regardless of where they reside.

Pictures of blue skies and fluffy clouds that often accompany vendor descriptions of “cloud support” for their products do not help clarify the situation. These pictures can lead one to assume that a product provides more robust cloud support than it truly delivers.

By way of example, almost every enterprise backup product claims support for the cloud. However, the breadth and depth of cloud support that each one offers varies widely. To assess the true scope of each one’s cloud support, one first needs to understand the language they use to describe the cloud.

For instance, if you plan to use the cloud for long-term backup retention, most enterprise backup products connect to a cloud. The key here is to be very clear about what they mean by “connectivity to the cloud”. Key questions you should ask include:

  • Does the product’s cloud support include connectivity to any large general-purpose cloud providers such as AWS, Azure, or Google?
  • Does the product need to work with all three of these cloud providers?
  • Does its support include any S3-compliant public cloud?
  • Does the product support more cost-effective public cloud options such as Wasabi?
  • Does its cloud support refer to a purpose-built cloud for backup and DR such as Unitrends offers?

Getting questions like these answered will provide you the insight you need to determine whether the cloud capabilities of a vendor’s products match your requirements. That understanding can only occur if you both first speak the same language.

Multi-cloud can be Just as Cloudy

As companies connect to the cloud, many find they want the option to connect to multiple different clouds. This option gives them more power to negotiate prices as well as flexibility to deploy resources where they run best. But here again, one needs to drill down on exactly how a product delivers on its multi-cloud support.

Key questions that you must ask when evaluating a product’s multi-cloud capabilities include:

  1. To which public clouds can the product connect, if any?
  2. To which private clouds can it connect, if any?
  3. Can it connect and use multiple clouds simultaneously?
  4. Can it connect and use public and private clouds at the same time?
  5. Does it offer any capabilities beyond connectivity to manage each cloud’s features?

The Journey to the Cloud Begins by Speaking the Same Cloud Language

Companies today want, more than ever, to start their journey to the cloud. To begin that journey, you must first speak the same cloud language as your vendors. Start by defining what the cloud means to you, or what you think it means to you.

This may even require you to engage some of your preferred vendors or partners to help you draft that definition. Regardless of how you arrive at your definition of the cloud, the sooner you do, the sooner you can ask the right questions, understand the answers given to you, and get the clarity you need to choose products that support the cloud in the way that you expect.




Four Ways to Achieve Quick Wins in the Cloud

More companies than ever want to use the cloud as part of their overall IT strategy. To do so, they often look to achieve some quick wins in the cloud to demonstrate its value. Achieving these quick wins also serves to give them some practical hands on experience in the cloud. Incorporating the cloud into your backup and disaster recovery (DR) processes may serve as the best way to get these wins.

Any company hoping to get some quick wins in the cloud should first define what a “win” looks like. For the purposes of this blog entry, a win consists of:

  • Fast, easy deployments of cloud resources
  • Minimal IT staff involvement
  • Improved application processes or workflows
  • The same or lower costs

Here are four ways for companies to achieve quick wins in the cloud through their backup and DR processes:

#1 – Take a Non-disruptive Approach

When possible, leverage your company’s existing backup infrastructure to store copies of data in the cloud. Nearly every enterprise backup product, whether backup software or a deduplication backup appliance, interfaces with public clouds. These products can store backup data in the cloud without disrupting your existing environment.

Using these products, companies can get exposure to the public cloud’s core compute and storage services. These are the cloud services companies are most apt to use initially and represent the most mature of the public cloud offerings.

#2 – Deduplicate Backup Data Whenever Possible

Public cloud providers charge monthly for every GB of data that companies store in their respective clouds. The more data that your company stores in the cloud, the higher these charges become.

Deduplicating data reduces the amount of data that your company stores in the cloud. In so doing, it also helps to control and reduce your company’s monthly cloud storage costs.
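
To illustrate the principle (not any particular vendor’s implementation), the short sketch below deduplicates a set of backup blocks by content hash and reports how much capacity the reduction saves, which translates directly into lower monthly storage charges.

```python
import hashlib

def dedupe_size(blocks):
    """Return (logical_bytes, unique_bytes) for a list of data blocks."""
    seen = set()
    logical = unique = 0
    for block in blocks:
        logical += len(block)
        digest = hashlib.sha256(block).digest()
        if digest not in seen:
            seen.add(digest)
            unique += len(block)
    return logical, unique

# Toy example: mostly repeated blocks, as is typical of successive backups.
blocks = [b"A" * 4096] * 900 + [b"B" * 4096] * 100
logical, unique = dedupe_size(blocks)
print(f"Logical data: {logical:,} bytes, stored after deduplication: {unique:,} bytes")
print(f"Reduction ratio: {logical / unique:.0f}:1")
```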

#3 – Tier Your Backup Data

Many public cloud storage providers offer multiple tiers of storage. The default storage tier they offer does not, however, represent their most cost-effective option. That tier is designed for data that needs high levels of availability and moderate levels of performance.

Backup data tends to only need these features for the first 24 – 72 hours after it is backed up. After that, companies can often move it to lower cost tiers of cloud storage. Note that these lower cost tiers of storage come with decreasing levels of availability and performance. While the vast majority of backups (over 99%) fall into this category, check whether any application recoveries required data more than three days old before moving backups to lower tiers of storage.
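
One common way to automate this tiering on Amazon S3 is a bucket lifecycle rule. The boto3 sketch below uses hypothetical bucket and prefix names along with the three-day window discussed above, moving backups to Glacier after 3 days and expiring them after a year; other clouds offer comparable policies under different names.

```python
import boto3

s3 = boto3.client("s3")
bucket = "example-backup-bucket"  # hypothetical bucket name

# Move backups to a colder tier after the 3-day window discussed above,
# then expire them at the end of a one-year retention period.
s3.put_bucket_lifecycle_configuration(
    Bucket=bucket,
    LifecycleConfiguration={
        "Rules": [{
            "ID": "tier-and-expire-backups",
            "Filter": {"Prefix": "backups/"},
            "Status": "Enabled",
            "Transitions": [{"Days": 3, "StorageClass": "GLACIER"}],
            "Expiration": {"Days": 365},
        }]
    },
)
```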

#4 – Actively Manage Your Cloud Backup Environment

Applications and data residing in the cloud differ from your production environment in one important way. Every GB of data consumed and every hour that an application runs incur costs. This differs from on-premises environments where all existing hardware represents a sunk cost. As such, there is less incentive to actively manage existing hardware resources since any resources recouped only represent a “soft” savings.

This does not apply in the cloud. Proactively managing and conserving cloud resources translates into real savings. To realize these savings, companies need to look to products such as Quest Foglight. It helps them track where their backup data resides in the cloud and identify the application processes they have running. This, in turn, helps them manage and control their cloud costs.

Companies rightfully want to adopt the cloud for the many benefits that it offers and, ideally, achieve a quick win in the process. Storing backup data in the cloud and moving DR processes to the cloud provides the quick win in the cloud that many companies initially seek. As they do so, they should also ensure they put the appropriate processes and software in place to manage and control their usage of cloud resources.




Five Reasons Why Organizations Choose DCIG to Create Competitive Intelligence Reports

At a high level, anyone can prepare a competitive intelligence report. All one needs is an Excel spreadsheet, a web browser, access to the Internet, a list of your competitors, and a list of product features. Then, boom, just like that, you have a report. However, companies that engage DCIG to create Competitive Intelligence Reports want much more than an Excel spreadsheet with a list of features and check marks in order to truly empower their sales staff and partners.

5 Reasons Companies Choose DCIG for Competitive Intelligence Reports

These companies want professional reports that they can use for equipping and educating their internal sales staff and resellers. They also use reports to educate their current and prospective clients. In talking with our clients about why they choose DCIG to create these reports, here are five reasons they commonly cite:

#1 – Validate their assumptions and findings about their competitors’ products’ features.

Many clients who engage DCIG to create competitive intelligence reports have already done some and, in some cases, a lot of research into their competitors’ products and the features they offer. However, they find it helpful to have an analyst firm double check and validate their research.

#2 – DCIG routinely covers and communicates with their competitors. 

DCIG has nearly 20 years of experience covering enterprise technology products and communicating with the companies that provide them. This has resulted in DCIG having contacts with hundreds of technology companies and thousands of professionals within those companies. Further, DCIG has information about thousands of products and their features in DCIG’s product database. We often have the information that our clients need to validate their research, or we can quickly identify someone who can help get the competitive information they seek.

#3 – Companies want to speak with DCIG.

DCIG’s analysis and reports are distributed to and read by thousands of people every month through DCIG’s newsletter and on DCIG’s website. Further, DCIG’s content is often picked up by third party websites such as Storage Newsletter. As a result, companies often want to speak to DCIG and brief us on their products in anticipation of this type of coverage. This ensures that DCIG has the latest information about the products from multiple companies.

#4 – Companies want objective, credible, third-party content.

Your customers, partners, and even your own sales force will find the presentation of competitive information by an analyst firm more objective and credible than if you present it. This makes it more likely that they read it, understand it, and use it in the field.

#5 – DCIG translates technology into understandable business benefits.

This perhaps reflects the primary reason companies engage with DCIG to produce Competitive Intelligence Reports. An Excel spreadsheet with a list of features and check marks is only the starting point for DCIG as it builds out its Competitive Intelligence Reports – not its end game. Every DCIG Competitive Intelligence Report explains why certain features matter and under what circumstances.

These reports also include questions that companies can ask their prospective customers to determine if these features matter to them. These questions help both customers and the companies selling the products get more quickly to the two best answers, “Yes, I want to buy it,” or, “No, I do not.” This saves all parties involved time, money, and energy.

Present Competitive Intelligence from an Objective Third Party

Companies rightfully conclude that they can perform their own competitive intelligence. They may know, perhaps better than an analyst firm, who their competitors are and what features they offer that result in them winning specific deals. However, safely and objectively presenting that same information in a professional format that your customers, partners, and sales force can use often requires more time than your existing team has.

This is where DCIG can help and has helped other organizations. If this is where your competitive intelligence team, product manager, marketing team, or internal product evaluation team recognizes a need, let us know!  You can contact DCIG by filling out this form on DCIG’s website or emailing us.




Best of Show at Nutanix .NEXT

Every time DCIG attends a conference, we attempt to meet with as many exhibitors as possible to get an overview of their solutions and the key business challenges they solve. We then identify three that best address these challenges. After attending the Nutanix .NEXT event last week in Anaheim, CA, DCIG awarded Best of Show to these three products.

Best of Show Award Winner #1: Nutanix Mine

Nutanix Mine was one of the announcements made during the opening keynote at the Nutanix .NEXT conference that prompted spontaneous applause from the audience in attendance. That applause came for good reason.

Source: Nutanix

Companies that standardize on Nutanix do not really want to introduce another HCI platform just to host their data protection software. Using Nutanix Mine, companies get all the HCI benefits that Nutanix offers on a platform they can then use to host the data protection solution of their choice.

Data protection providers such as HYCU, Commvault, Unitrends, and Veritas all announced their intentions to use Nutanix Mine as an option to host their data protection software. Further, other data protection providers in attendance at Nutanix .NEXT privately shared with DCIG that they plan to adopt Mine as a platform hosting option at some point in the future, one even going so far as to say it views Mine as a platform it must adopt.

Best of Show Award Winner #2: Lenovo TruScale

Lenovo TruScale introduces enterprises to utility data center computing. Lenovo bills TruScale clients a monthly management fee plus a utilization charge. It bases this charge on the power consumed by the Lenovo-managed IT infrastructure.

Source: Lenovo

This power consumption-based approach is especially appealing to enterprises and service providers for which one or more of the following holds true:

  • Data center workloads tie directly to revenue.
  • They want IT to focus on enabling digital transformation, not infrastructure management.
  • They need to retain possession or secure control of their data.

TruScale does not require companies to install any extra software. TruScale gets its power utilization data from the management processor already embedded in Lenovo servers. It then passes this power consumption data to the Lenovo operations center(s) along with alerts and other sensor data.

Lenovo uses the data to trigger support interventions and to provide near real-time usage data to customers via a portal. The portal graphically presents performance versus key metrics, including actual vs. budgeted usage. Lenovo’s approach to utility data center computing provides a distinctive and easy means of adopting this technology while simultaneously simplifying billing. (Note: DCIG will be publishing another blog entry very shortly that more thoroughly examines Lenovo TruScale.)

Best of Show Award Winner #3: HYCU 4.0

I have known about HYCU for a while so its tight integration with Nutanix AHV is not the motivation for DCIG awarding HYCU Best of Show. Rather, the testimony that a staff member from Nutanix’s internal IT department shared about Nutanix’s own experience running HYCU to protect its data center caught my attention.

Source: HYCU

Nutanix internally deployed HYCU in three data centers across the United States and in its smaller data centers in its global offices. HYCU protects over 3,500 VMs, including both Linux and Windows VMs, with no agents installed. It provides both file and VM level restores and uses Active Directory for its RBAC (role-based access control).

Nutanix evaluated data protection products from other data protection providers. Nutanix chose HYCU over all of them. That is a strong testimonial and endorsement of HYCU by Nutanix when almost any other data protection provider would give its eye teeth to be Nutanix’s internal go-to provider of backup software.




HYCU Continues Its March Towards Becoming the Default Nutanix Backup Solution

Any time a new operating system platform comes to market, one backup solution tends to lead in providing a robust set of data protection features that companies can quickly, easily, and economically deploy. It happened with Unix. It happened with Windows and VMware. Now it is happening again with the Nutanix Acropolis operating system (AOS) as HYCU continues to make significant product enhancements in its march to become the default backup solution for Nutanix-centric environments.

I greatly respect any emerging technology provider that can succeed at any level in the hyper-competitive enterprise space. To compete and win in the enterprise market, it must execute simultaneously on multiple levels. Minimally, it must have solid technology, a compelling message, and competent engineering, marketing, management, sales, and support teams to back the product up. Nutanix delivers on all these fronts.

However, companies can sometimes overlook the value of the partner community that must simultaneously develop when a new platform such as Nutanix AOS comes to market. If companies such as HYCU, Intel, Microsoft, SAP and others did not commit resources to form technology alliances with Nutanix, it would impede Nutanix’s ability to succeed in the marketplace.

Of these alliances, Nutanix’s alliance with HYCU merits attention. While Nutanix does have technology alliances with other backup providers, HYCU is the only one of these providers that has largely hitched its wagon to the Nutanix train. As a result, as Nutanix goes, so largely goes HYCU.

Given that Nutanix continues to rock the hyperconverged infrastructure (HCI) market space, this bodes well for HYCU – assuming HYCU matches Nutanix’s pace of innovation step-for-step. Based upon the announcement that HYCU made at this week’s Nutanix .NEXT conference in Anaheim, CA, it is clear that HYCU fully understands the opportunities in front of it and capitalizes on them in its latest 4.0 release. Consider:

  • HYCU supports and integrates with Nutanix Mine beginning in the second half of 2019. Emerging data protection providers such as Cohesity and Rubrik have (rightfully) made a lot of noise about using HCI platforms (and especially theirs) for data protection use cases. In the face of this noise, HYCU, with its HYCU-X announcement in late 2018, grasped that it could use Nutanix to meet this use case. The question was, “Did Nutanix want to position AOS as a platform for data protection software and secondary enterprise workloads?”

The short answer is yes. The May 8 Nutanix Mine announcement makes it clear that Nutanix has no intention of conceding the HCI platform space to competitors that focus primarily on data protection. Further, Nutanix’s technology alliance with HYCU immediately pays dividends. Companies can select backup software that is fully integrated with Nutanix AOS, obtaining it and managing it in almost the same way as if Nutanix had built its own backup software. Further, HYCU is the only data protection solution ready to ship now when Nutanix goes GA with Mine in the second half of 2019.

  • Manage HYCU through the Nutanix Prism management interface. Nutanix Prism is the interface used to manage Nutanix AOS environments. With the forthcoming release of HYCU 4.0, companies may natively administer HYCU through the Nutanix Prism interface as part of their overall Nutanix AOS management experience.
  • Support for Nutanix Files. The scale-out characteristics of Nutanix make it very appealing for companies to use it for purposes other than simply hosting their VMs. Nutanix Files is a perfect illustration as companies can use Nutanix to host their unstructured data to get the availability, performance, and flexibility that traditional NAS providers increasingly struggle to deliver in a cost-effective manner.

HYCU 4.0’s support for Nutanix Files includes NFS support and changed file tracking. This feature eliminates the overhead of file system scans, automates protection of newly created VMs with a default policy, and should serve to accelerate the speed of incremental backups.

  • Protects physical Windows servers. Like it or not, physical Windows servers remain a fixture in many corporate environments and companies must protect them. To address this persistent need, HYCU 4.0 introduces protection for physical Windows servers so as companies look to adopt HYCU to protect their expanding Nutanix environment, they can “check the box”, so to speak, to extend their use of HYCU to protect their physical Windows environment.

The Nutanix Mine announcement represents yet another marketplace into which Nutanix will extend the reach of its AOS platform to provide a consistent, single cloud platform that companies may use. As Nutanix makes its Mine offering available, companies may note that Nutanix mentions multiple data protection providers who plan to come to market with solutions running on Nutanix Mine.

However, “running on Nutanix Mine” and “optimized and fully integrated with Nutanix” are two very different phrases. Of the providers who were mentioned by Nutanix that will run on Nutanix Mine, only HYCU has moved in lockstep with Nutanix AOS almost since HYCU’s inception. In so doing, HYCU has well positioned itself to become the default backup solution for Nutanix environments due to the many ways HYCU has adopted and deeply ingrained Nutanix’s philosophy of simplicity into its product’s design.




Breaking Down Scalable Data Protection Appliances

Scalable data protection appliances have arguably emerged as one of the hottest backup trends in quite some time, possibly since the introduction of deduplication into the backup process. These appliances offer backup software, cloud connectivity, replication, and scalable storage in a single, logical converged or hyperconverged infrastructure platform offering that simplifies backup while positioning a company to seamlessly implement the appliance as part of its disaster recovery strategy, or even to create a DR solution for the first time.

As the popularity of these appliances increases, so does the number of product offerings and the differences between them. To help a company break down the differences between these scalable data protection appliances, here are some commonalities between them as well as three key features to evaluate how they differ.

Features in Common

At a high level, these products generally share the following seven features:

  1. All-inclusive licensing that includes most if not all the software features available on the appliance.
  2. Backup software that an organization can use to protect applications in its environment.
  3. Connectivity to general-purpose clouds for off-site long-term data retention.
  4. Deduplication technologies to reduce the amount of data stored on the appliance.
  5. Replication to other appliances on-premises, off-site, and even in the cloud to lay the foundation for disaster recovery.
  6. Rapid application recovery which often includes the appliance’s ability to host one or more virtual machines (VMs) on the appliance.
  7. Scalable storage that enables a company to quickly and easily add more storage capacity to the appliance.

 

It is only when a company begins to drill down into each of these features that it starts to observe noticeable differences in how each provider delivers them.

All-inclusive Licensing

For instance, on the surface, all-inclusive licensing sounds straightforward. If a company buys the appliance, it obtains the software with it. That part of the statement holds true. The key question that a company must ask is, “How much capacity does the all-inclusive licensing cover before I have to start paying more?”

That answer will vary by provider. Some providers such as Cohesity and Rubrik charge by the terabyte. As the amount of data on its appliance under management by its software grows, so do the licensing costs. In contrast, StorageCraft licenses the software on its OneXafe appliance by the node. Once a company licenses StorageCraft’s software for a node, the software license covers all data stored on that node (up to 204 TB raw).
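
A quick model makes the difference between the two licensing approaches concrete. The Python sketch below compares a hypothetical per-terabyte license against a hypothetical per-node license as protected capacity grows; the prices are illustrative placeholders, not quotes from any vendor, and only the 204 TB per-node capacity comes from the text above.

```python
import math

def per_tb_cost(protected_tb, price_per_tb=500):
    """Licensing billed on the capacity under management (placeholder $/TB)."""
    return protected_tb * price_per_tb

def per_node_cost(protected_tb, tb_per_node=204, price_per_node=40_000):
    """Licensing billed per node, each covering a fixed raw capacity (placeholder $)."""
    nodes = max(1, math.ceil(protected_tb / tb_per_node))
    return nodes * price_per_node

for tb in (50, 200, 400, 800):
    print(f"{tb:>4} TB  per-TB: ${per_tb_cost(tb):>9,}   per-node: ${per_node_cost(tb):>9,}")
```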

Deduplication

Deduplication software is another technology available on these appliances that a company might assume is implemented essentially the same way across all these available offerings. That assumption would be incorrect.

Each of these appliances implements deduplication in slightly different ways. Cohesity gives a company a few ways to implement deduplication. These include deduplicating data when it backs up data using its own backup software, or deduplicating data backed up to its appliance by another backup product. A company may, at its discretion, choose in this latter use case to deduplicate either inline or post-process.

StorageCraft deduplicates data using its backup software on the client and also offers inline deduplication for data backed up to its appliance by another backup product. Rubrik only deduplicates data backed up by its Cloud Data Management software. HYCU uses the deduplication technology natively found in the Nutanix AHV hypervisor.

Scalable Storage

A third area of differentiation between these appliances shows up in how they scale storage. While scale-out architectures get a lot of the press, that is only one scalable storage option available to a company. The scale-out architecture, such as employed by Cohesity, HYCU, and Rubrik, entails adding more nodes to an existing configuration.

Using a scale-up architecture, such as is available on the Asigra TrueNAS appliance from iXsystems, a company can add more disk drives to an existing chassis. Still another provider, StorageCraft, uses a combination of both architectures in its OneXafe appliance. One can add more drives to an existing node or add more nodes to an existing OneXafe deployment.

Scalable data protection appliances are changing the backup and recovery landscape by delivering both the simplicity of management and the breadth of features that companies have long sought. However, as a cursory examination into three of their features illustrates, the differences between the features on these appliances can be significant. This makes it imperative that a company first break down the features on any of these scalable data protection appliances that it is considering for purchase to ensure it obtains the appliance with the most appropriate feature set for its requirements.




Convincing a Skeptical Buyer that Your Product is the Best

Every company tends to believe that its products are the best in whatever market it services. Nothing wrong with that mindset – it helps your company sell its products and succeed. However, convincing a skeptical buyer of the superiority of your company’s product changes the dynamics of the conversation. He or she expects you to provide some facts to back up your claims to persuade him or her to buy from you.

As a provider of competitive content for many years now, DCIG has learned a lot about how to conduct competitive research and deliver the results in a compelling and informative manner. Here are a few insights to help you convince an undecided buyer that your product is the best.

  1. Stay focused on the positive. Stay positive in all the communications you have about your products and your competitor’s products. Your prospective buyer may not agree with the glowing assessment of your product. However, one sure way to turn them off is to disparage your competitor’s product in any way.

Disparaging your competitor’s product becomes especially perilous in this age of instant communications and mobile devices. As fast as you can make a claim about your competitor’s product, your prospective buyer can search the internet and validate your assertion. If he or she finds your claim incorrect or out-of-date, you will, at best, look petty and uninformed. At worst, you may lose the buyer’s trust.

Even if you absolutely, unequivocally know your competitor does not offer a feature that your product does, stay positive. Use it as an opportunity to explain why your product offers the features it does and articulate the various use cases it solves.

  2. Present all competitive information in a high quality, professional manner. Excel spreadsheets and Word documents serve as great tools to aggregate and store your raw competitive data. The danger comes from presenting that raw data without first taking the time and effort to properly analyze it and then present it professionally.

Analyzing it, organizing it, and then presenting it in a professional manner take additional time and expertise above and beyond the time and expertise required to collect the data. These steps may even prompt you to go back and re-validate some of your data and initial assumptions.

  3. Use a third party to validate competitive research. Even assuming you collect all the competitive data and take the time to professionally prepare it, when you present yourself to the prospective buyer as the source of the data about both your product and your competitor’s product, it can create doubts in the buyer’s mind. In that situation, the buyer will minimally question the data’s validity and objectivity.

Here is where having a third party to review your data, validate your conclusions, and even ideally present the information can add significant value. It can help you identify potential biases in the data-gathering stage, serve to double-check your work, and save you the time, hassle, and expense of putting together a professional presentation that lays out the differences between your product and your competitor’s. This third-party validation will heighten the value of the competitive content when you share it with your skeptical buyer.

Your product is the best and you know it. Maybe even your competitor knows it. However, at the end of the day, it only matters if your prospective buyer comes to that same conclusion. Presenting the right information in an objective manner in a professional context will go a long way toward persuading a skeptical buyer that you have the right product for his or her needs. If this sounds like a challenge that you have, DCIG would love to help. Feel free to reach out to DCIG by contacting us at this email address.




DCIG 2019-20 Enterprise Deduplication Backup Target Appliance Buyer’s Guide Now Available

DCIG is pleased to announce the availability of its 2019-20 Enterprise Deduplication Backup Target Appliance Buyer’s Guide, which helps enterprises assess the enterprise deduplication backup target appliance marketplace and identify which appliance may be the best fit for their environment. This Buyer’s Guide includes data sheets for 19 enterprise deduplication backup target appliances that achieved rankings of Recommended and Excellent. These products are available from five vendors: Cohesity, Dell EMC, ExaGrid, HPE, and NEC.

Enterprises rarely want to talk about the make-up of the infrastructure of their data centers anymore. They prefer to talk about artificial intelligence, cloud adoption, data analytics, machine learning, software-defined data centers, and uninterrupted business operations. As part of those discussions, they want to leverage current technologies to drive new insights into their business and, ultimately, create new opportunities for business growth or cost savings because their underlying data center technologies work as expected.

The operative phrase here becomes “works as expected”, especially as it relates to Enterprise Deduplication Backup Target Appliances. Expectations as to the exact features that an enterprise deduplication backup target appliance should deliver can vary widely.

If an enterprise only wants an enterprise deduplication backup target appliance that meets traditional data center requirements, every appliance covered in this Buyer’s Guide satisfies those needs. Each one can:

  • Serve as a target for backup software.
  • Analyze and break apart data in backup streams to optimize deduplication ratios.
  • Replicate backup data to other sites.
  • Replicate data to the cloud for archive, disaster recovery, and long-term data retention.

While the appliances from each provider use different techniques to accomplish these objectives, and some perform these tasks better than others depending on the use case, each one does deliver on these objectives.

But for enterprises looking for a solution that enables them to meet their broader, more strategic objectives, only a couple of providers covered in this Buyer’s Guide appear to be taking the appropriate steps to position enterprises for the software-defined hybrid data center of the future. Appliances from these providers better position enterprises to perform next-generation data lifecycle management tasks while still providing the features necessary to accomplish traditional backup and recovery tasks.

It is in this context that DCIG presents its DCIG 2019-20 Enterprise Deduplication Backup Target Appliance Buyer’s Guide. As in the development of all prior DCIG Buyer’s Guides, DCIG has already done the heavy lifting for enterprise technology buyers by:

  • Identifying a common technology need with competing solutions
  • Scanning the environment to identify available products in the marketplace
  • Gathering normalized data about the features each product supports
  • Providing an objective, third-party evaluation of those features from an end-user perspective
  • Describing key product considerations and important changes in the marketplace
  • Presenting DCIG’s opinions and product feature data in a way that facilitates the rapid comparisons of various products and product features

The products that DCIG ranks as Recommended in this Guide are as follows (in alphabetical order):

Access to this Buyer’s Guide edition is available immediately by following this link to any of the following DCIG partner sites:

TechTrove

HPE




Analytics, Microservices and Scalable Storage Finding Their Way onto Backup Appliances

Companies of all sizes pay more attention to their backup and recovery infrastructure than perhaps ever before. While they still rightfully prioritize their production infrastructure over their backup infrastructure, companies seem to recognize that they can use backups as more than just insurance policies to recover their production data. This is resulting in cutting-edge innovations such as analytics, microservices, and scalable storage finding their way into backup solutions in general and backup appliances specifically.

One of the challenges associated with getting innovative technologies into data protection solutions stems from cost. Companies tend to see backup as a cost of doing business rather than as an investment in the company’s future. This viewpoint results in them spending only what they must on backup.

Since new technologies tend to be more costly, this inhibits the introduction and adoption of new technologies on backup appliances. More than one backup provider has told me in briefings that they would gladly introduce more innovative technologies into their backup solutions; there simply has not been the budget-backed demand for them.

For these reasons, backup may never become a maelstrom of innovation. Nevertheless, changes in how companies want to manage and use their backup infrastructure are driving the introduction of some of the latest and greatest technologies into backup solutions. Here are three that DCIG has observed finding their way onto backup appliances.

  1. Analytics. Backup is ripe for the picking when it comes to applying analytics such as artificial intelligence and machine learning technologies to the backup process. Companies keep multiple copies of data in a single data store for multiple years. This would lead one to believe there is more value that companies can glean from their backup data repositories than just data recovery … if they can just figure out what that value proposition is.

As it turns out, cybercriminals might be the ones who do the most to help companies derive more value from their backup data stores. As cybercriminals get more sophisticated in their malware attacks, companies are turning to backups to help them detect the presence of malware in their production environment by analyzing backup data.

Asigra incorporates cybersecurity software into its Cloud Backup software, which can analyze data for the presence of malware as it is backed up or recovered. Unitrends also uses analytics to detect the presence of ransomware in backups: it compares changes between backups and looks for unusual or unexpected activity in the data from one backup to the next.
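In highly simplified form, the comparison such analytics perform between successive backups looks something like the sketch below: if an unusually high proportion of files changed between two backups, the backup is flagged for review. The 40 percent threshold and the use of per-file content hashes are illustrative assumptions, not the actual detection logic Asigra or Unitrends employ.

    def flag_possible_ransomware(previous_backup, current_backup, change_threshold=0.4):
        """Flag unusual churn between two successive backups.

        previous_backup and current_backup are dicts of {file path: content hash}.
        Normal workloads rarely rewrite most of their files between backups,
        whereas ransomware that encrypts data changes nearly every file it touches.
        """
        common = set(previous_backup) & set(current_backup)
        if not common:
            return False
        changed = sum(1 for path in common
                      if previous_backup[path] != current_backup[path])
        change_rate = changed / len(common)
        return change_rate > change_threshold   # flag for human review, not automatic action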

  2. Microservices. Using microservices to accelerate application development has been one of the hottest topics in technology for the last few years. Here again, microservices have been slow to gain a foothold in data protection, though StorageCraft, with its OneXafe solution, represents one of the first to find a practical application for them.

StorageCraft gives companies multiple ways to use their OneXafe appliance. If they use SSDs in its all-flash 5410 model, companies can use it for production file services. But OneXafe was originally designed for use as a secondary storage appliance in the form of a deduplication backup target.

The underlying software platform upon which StorageCraft built OneXafe lends itself well to running data protection microservices in Docker containers. Companies may enable this data protection service on the OneXafe platform at any time. Using OneXafe’s architecture, its ShadowXafe data protection service has access to all the storage across all the nodes in a OneXafe deployment without the design and storage limitations inherent in competing products.
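For readers less familiar with how a containerized data protection service differs from a monolithic backup application, the sketch below shows how such a microservice, packaged as a Docker image, could be switched on programmatically using the Docker SDK for Python. The image name, container name, and mount point are hypothetical placeholders; OneXafe enables its ShadowXafe service through its own management interface rather than exposing a step like this to administrators.

    import docker  # Docker SDK for Python: pip install docker

    def enable_protection_service(image="example/backup-service:latest",
                                  storage_path="/mnt/scale-out-storage"):
        """Start a hypothetical data protection microservice as a Docker container."""
        client = docker.from_env()
        return client.containers.run(
            image,                                # hypothetical image name
            name="data-protection-service",
            detach=True,                          # run in the background like any other service
            restart_policy={"Name": "always"},    # restart automatically if a node reboots
            volumes={storage_path: {"bind": "/data", "mode": "rw"}},
        )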

  3. Scalable storage. Scalable storage represents the trend being most rapidly adopted by data protection solutions. Companies struggle to forecast data growth in their production environment. Having to manage the corresponding storage growth in their backup environment only adds to their headaches.

More backup appliances than ever give companies the flexibility they need to easily scale their storage without the corresponding headaches that they have faced in the past. Some examples of this innovation include:

  • Asigra has partnered with iXsystems to deliver the Asigra TrueNAS appliance that scales to over 10PB.
  • StorageCraft merged with ExaBlox and now offers its OneXafe appliance which likewise can scale up to over a petabyte in a single logical configuration.
  • Others like HYCU have partnered with Nutanix to deliver this scalable storage functionality.

Very few IT professionals choose IT out of a desire to merely “feed the beast.” Most want to innovate and adopt new and exciting technologies that create new value for their organizations. Today, the cybersecurity threats they face from outside their organization and the internal needs to simplify the management of their data protection environment have created an opportunity for savvy IT professionals to add value by adopting backup appliances that leverage analytics, microservices, and scalable storage.




Make the Right Choice between Scale-out and Scale-up Backup Appliance Architectures

Companies are always on the lookout for simpler, more cost-effective methods to manage their infrastructure. This explains, in part, the emergence of scale-out architectures over the last few years as a preferred means of implementing backup appliances. As scale-out architectures gain momentum, it behooves companies to take a closer look at the benefits and drawbacks of both scale-out and scale-up architectures to make the best choice for their environment.

Backup appliances primarily ship in two architectures: scale-out and scale-up. A scale-out architecture consists of nodes that are logically grouped together using software that the vendor provides. Each node ships with preconfigured amounts of memory, compute, network ports, and storage capacity. The maximum raw capacities of these backup appliances range from a few dozen terabytes to nearly twelve petabytes.

In contrast, a scale-up architecture places a controller with compute, memory, and network ports in front of storage shelves. A storage shelf may be internal or external to the appliance. Each storage shelf holds a fixed number of disk drives.

Backup appliances based on a scale-up architecture usually require lower amounts of storage capacity for an initial deployment. If an organization needs more capacity, it adds more disk drives to these storage shelves, up to a fixed upper limit. Backup appliances that use this scale-up architecture range from a few terabytes to multiple petabytes of maximum raw capacity.

Scale-out Benefits and Limitations

A scale-out architecture, sometimes referred to as a hyper-converged infrastructure (HCI), enables a company to purchase more nodes as it needs them. Each node it acquires adds more memory, compute, network interfaces, and storage capacity to the existing solution. This approach addresses enterprise needs to complete increased backup workloads in the same window of time, since more hardware resources are available to them.

This approach also addresses concerns about product upgrades. Because all nodes reside in a single configuration, as existing nodes age or run out of capacity, new nodes with higher levels of performance and more capacity can be introduced into the scale-out architecture.

Additionally, an organization may account for and depreciate each node individually. While the solution’s software can logically group physical nodes together, there is no requirement to treat all the physical nodes as a single entity. By treating each node as its own physical entity, an organization can depreciate it over a three- to five-year period (or whatever period its accounting rules allow). This approach mitigates the need to depreciate newly added hardware over a shorter time frame, as is sometimes required when adding capacity to scale-up appliances.

The flexibility of scale-out solutions can create some management overhead. When using a scale-out architecture, an enterprise should verify that, as the number of nodes in the configuration increases, the solution has a means to automatically load balance workloads and distribute backup data across all its available nodes. If not, an enterprise may find it spends an increasing amount of time balancing backup jobs across its available nodes.

An enterprise should also verify that all the nodes work together as one collective entity. For instance, an enterprise should verify that the scale-out solution offers “global deduplication”. This feature deduplicates data across all the nodes in the system, regardless of which node the data resides on. If it does not offer this feature, the solution will still deduplicate the data, but only on each individual node.
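The difference between global and per-node deduplication is easiest to see side by side, as in the sketch below: with a single shared fingerprint index, a chunk already stored on any node is never stored again; with per-node indexes, the same chunk can be written to every node that receives it. The data structures here are illustrative assumptions only, not any vendor’s implementation.

    import hashlib

    def store_chunk_global(chunk, global_index, nodes, node_id):
        """Global deduplication: one fingerprint index shared by every node."""
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in global_index:
            nodes[node_id].append(chunk)     # first copy lands on the node that received it
            global_index[fp] = node_id
        return global_index[fp]              # later copies become references, wherever they arrive

    def store_chunk_per_node(chunk, node_indexes, nodes, node_id):
        """Per-node deduplication: each node knows only about its own chunks."""
        fp = hashlib.sha256(chunk).hexdigest()
        if fp not in node_indexes[node_id]:  # a chunk held by another node is still stored again here
            nodes[node_id].append(chunk)
            node_indexes[node_id].add(fp)
        return node_id

In the per-node case, the same chunk backed up through two different nodes is stored twice, which is why global deduplication typically yields better overall reduction ratios in multi-node configurations.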

Finally, an enterprise should keep its eye on the possibility of “node sprawl” when using these solutions. These solutions make it easy to grow but an enterprise needs to plan for the optimal way to add each node as individual nodes can vary widely in their respective capacity and performance characteristics.

Scale-up Benefits and Limitations

Backup appliances that use a scale-up architecture have their own sets of benefits and limitations. Three factors currently work in their favor; these appliances are:

  1. Mature
  2. Well-understood
  3. Widely adopted and used

One other, broader backup industry trend also works in favor of scale-up architectures. More enterprises use snapshots as their primary backup technique. Using these snapshots as the source for the backup frees enterprises to do backups at almost any time of the day. This helps mitigate the night and weekend performance bottleneck that can occur when all backups must run at the same time against one of these appliances as the backup target.

A company may encounter the following challenges when working with scale-up appliances:

First, it must size and configure the appliance correctly. This requires an enterprise to have a good understanding of its current and anticipated backup workloads, its total amount of data to backup, and its data retention requirements. Should it overestimate its requirements, it may end up with an appliance oversized for its environment. Should it underestimate its requirements, backup jobs may not complete on time or it may run out of capacity, requiring it to buy another appliance sooner than it anticipated.
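A rough, first-pass sizing calculation often looks something like the sketch below. The formula and the sample figures are deliberately simplified assumptions intended only to show the moving parts; actual sizing should rely on the vendor’s own sizing tools and an assessment of real backup workloads.

    def estimate_required_capacity_tb(full_backup_tb, weekly_fulls_retained,
                                      daily_change_rate, daily_incrementals_retained,
                                      dedup_ratio, annual_growth_rate, years=3):
        """Rough estimate of the usable capacity a backup target needs.

        Logical data = retained weekly fulls plus retained daily incrementals,
        reduced by the expected deduplication ratio and grown for future data growth.
        """
        logical_tb = (full_backup_tb * weekly_fulls_retained
                      + full_backup_tb * daily_change_rate * daily_incrementals_retained)
        physical_tb = logical_tb / dedup_ratio
        return physical_tb * (1 + annual_growth_rate) ** years

    # Example: 100 TB of fulls, 4 weekly fulls and 30 daily incrementals retained,
    # 2% daily change, 10:1 deduplication, 20% annual growth, sized for 3 years.
    print(round(estimate_required_capacity_tb(100, 4, 0.02, 30, 10, 0.20), 1), "TB")  # ~79.5 TB

Underestimating any of these inputs is what leads to buying another appliance sooner than planned.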

Second, all storage capacity sits behind a single controller. This architecture necessitates that the controller be sufficiently sized to meet all current and future backup workloads. Even though the appliance may support the addition of more disk drives, all backup jobs will still need to run through the same controller. Depending on the amount of data and how quickly backup jobs need to complete, this could bottleneck performance and slow backup and recovery jobs.

Make the Right Backup Appliance Choice

The right choice between these two architectures may come down to how well you understand your own environment. If a company expects to experience periods of rapid or unexpected data growth, a scale-out appliance will often be the better approach. In these scenarios, look to appliances from Cohesity, Commvault, ExaGrid, NEC, and StorageCraft.

If a company expects more predictable or minimal data growth in its environment, scale-up backup solutions such as the Asigra TrueNAS and Unitrends appliances will likely better match its requirements.




DCIG Introduces Two New Offerings in 2019

DCIG often gets so busy covering all the new and emerging technologies in multiple markets that we can neglect to inform our current and prospective clients of new offerings that DCIG has brought to market. Today I address this oversight.

While many of you know DCIG for its Buyer’s Guides, blogs, and executive white papers, DCIG now offers the following two assets that companies can contract DCIG to create:

1. DCIG Competitive Intelligence Reports. These reports start by taking a subset of the information we gather as part of creating the DCIG Buyer’s Guides. They compare features from two to five selected products and examine how each product delivers on those features. The purpose of these reports is not to declare which feature implementation is “best”. Rather, they examine how each product implements these select features and the most appropriate use case for them.

2. DCIG Content Bundle. In today’s world, people consume the same content in multiple ways. Some prefer to hear it via podcasts. Some prefer to watch it on video. Some want to digest it in bite-sized chunks in blog entries. Still others want the whole enchilada in the form of a white paper. To meet these various demands, DCIG delivers the same core set of content in all four of these formats as part of its newly created content bundle.

If any of these new offerings pique your interest, let us know! We would love to have the opportunity to explain how they work and provide you with a sample of these offerings. Simply click on this link to send us an email to inquire about these services.




The New Need to Create a Secondary Perimeter to Detect Malware’s Presence

Malware – and specifically ransomware – regularly makes headlines, with some business somewhere in the world reporting that its data has been encrypted by it. Because of this routine occurrence, companies need to acknowledge that their standard first-line defenses, such as cybersecurity and backup software, no longer completely suffice to detect malware. To augment these defenses, companies need to take new steps to shore them up which, for many, will start with creating a secondary perimeter around their backup stores to detect the presence of malware.

The companies getting infected by malware are not what one would classify as “small.” By way of example, a story appeared earlier this week about an 800-bed hospital in Malvern, Australia, that had the medical records of 15,000 of its cardiology patients compromised and encrypted at the end of January 2019.

While I am unfamiliar with both this hospital’s IT staff and procedures and the details of this incident, one can make two educated observations about its IT operations:

  • One, the hospital is sufficiently large that it likely had anti-virus software and firewalls in place that, in a perfect world, would have detected the malware and thwarted it.
  • Two, it probably did regular backups of its production data (nightly or weekly). Even if the malware attack did succeed, it should have been able to use those backups to recover.

So, the questions become:

  1. Why is this hospital, or any company for that matter, still susceptible to something as theoretically preventable as a malware attack in the form of ransomware?
  2. Why could the hospital not use its backups to recover?

Again, sufficient details are not yet publicly available about this attack to know with certainty why these defenses failed or if they were even in place. If one or both of these defenses were not in place, then this hospital was susceptible to becoming a victim of this sort of attack. But even if both, or just one, of these defenses were in place, it begs the question, “Why did one or both of these defenses not suffice?”

The short answer is that both of these defenses remain susceptible to malware attacks, whether used separately or together. This deficiency does not necessarily originate with poorly designed anti-virus software, backup software, or firewalls. Rather, malware’s rapid evolution and maturity challenge the ability of cybersecurity and backup software providers to keep pace with it.

A 2017 study published by G DATA security experts revealed they discovered a new malware strain about every four seconds.  This massive number of malware strains makes it improbable that anti-virus software and firewalls can alone identify every new strain of malware as it enters a company.  Malware’s rapid evolution can also result in variations of documented ransomware strains such as Locky, NotPetya, and WannaCry slipping through undetected.

Backup software is also under attack by malware. Strains of malware now exist that may remain dormant and undetected for some time. Once inside a company, such malware first infects production files over a period of days, weeks, or even months before it detonates. During the malware’s incubation period, companies will back up these infected production files. At the same time, as part of their normal backup operations, they will delete their expiring backups.

After a few weeks or months of routine backup operations, all backups created during this time will contain infected production files. Then when the malware does detonate in the production environment, companies may get caught in a Zero-day Attack Loop.

Using cybersecurity software on the perimeter of corporate IT infrastructures and backup software inside them does help companies detect and prevent malware attacks as well as recover from them. However, the latest strains of malware reflect its continuing evolution and growing sophistication, which better equip them to bypass these existing corporate countermeasures, as evidenced by the attack on this hospital in Australia and others too numerous to mention around the world.

For these reasons, backup software that embeds artificial intelligence, machine learning, and, yes, even cybersecurity software is entering the marketplace. Using these products, companies can create a secondary defense perimeter inside their company around their data stores, providing another means to detect existing and new strains of malware and better positioning them to recover successfully from malware attacks when they do occur.




Tips to Selecting the Best Cloud Backup Solution

The cloud has gone mainstream with more companies than ever looking to host their production applications with general-purpose cloud providers such as the Google Cloud Platform (GCP). As this occurs, companies must identify backup solutions architected for the cloud that capitalize on the native features of each provider’s cloud offering to best protect their virtual machines (VMs) hosted in the cloud.

Companies that move their applications and data to the cloud must orchestrate the protection of those applications and data once they arrive there. GCP and other cloud providers offer highly available environments and replicate data between data centers in the same region. They also provide options in their clouds for companies to configure their applications to automatically fail over, fail back, scale up, and scale back down, as well as create snapshots of their data.

To fully leverage these cloud features, companies must identify an overarching tool that orchestrates the management of these availability, backup, and recovery features and integrates with their applications to create application-consistent backups. Here are a few tips to help companies select the right cloud backup solution for their needs.

Simple to Start and Stop

The cloud gives companies the flexibility and freedom to start and stop services as needed and then only pay for these services as they use them. The backup solution should give companies the same ease to start and stop these services. It should only bill companies for the applications it protects during the time it protects them.

The simplicity of the software’s deployment should also extend to its configuration and ongoing management. Companies can quickly select and deploy the compute, networking, storage, and security services cloud providers offer. The software should make it just as easy for companies to select and configure it for the backup of VMs, and to optionally turn it off if needed.

Takes Care of Itself

When companies select any cloud provider’s service, companies get the benefits of the service without the maintenance headaches associated with owning it. For example, when companies choose to host data on GCP’s Cloud Storage service, they do not need to worry about administering Google’s underlying IT infrastructure. The tasks of replacing faulty HDDs, maintaining HDD firmware, keeping its Cloud Storage OS patched, etc. fall to Google.

In the same way, when companies select backup software, they want its benefits without the overhead of patching it, updating it, and managing it long term. The backup software should be available and run as any other cloud service. However, in the background, the backup software provider should take care of its software’s ongoing maintenance and updates.

Integrates with the Cloud Provider’s Identity Management Services

Companies use services such as LDAP or Microsoft AD to control access to corporate IT resources. Cloud providers also have their own identity management services that companies can use to control their employees’ access to cloud resources.

The backup software will ideally integrate with the cloud provider’s native identity management services to simplify its management and ensure that those who administer the backup solution have permission to access VMs and data in the cloud.

Integrates with the Cloud Provider’s Management Console

Companies want to make their IT environments easier to manage. For many, that begins with a single pane of glass to manage their infrastructure. In cloud environments, companies must adhere to this philosophy as cloud providers offer dozens of cloud services that individuals can view and access through that cloud provider’s management console.

To ensure cloud administrators remain aware that the backup solution is available as an option, let alone use it, the backup software must integrate with the cloud provider’s default management console. In this way, these individuals can remember to use it and easily incorporate its management into their overall job responsibilities.

Controls Cloud Costs

It should come as no great surprise that cloud providers make their money when companies use their services. The more of their services that companies use, the more the cloud providers charge. It should also not shock anyone that the default services cloud providers offer may be among their most expensive.

The backup software can help companies avoid racking up unneeded costs in the cloud. The backup software will primarily consume storage capacity in the cloud, so it should offer features that help manage these costs. Aside from having policies in place to tier backup data across these different storage classes as it ages, it should also provide options to archive, compress, deduplicate, and even delete data. Ideally, it will also spin up cloud compute resources only when needed and shut them down once backup jobs complete to further control costs in the cloud.
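As one concrete illustration of this kind of cost control on GCP, the sketch below uses the google-cloud-storage client library to attach lifecycle rules to a bucket holding backup data so that aging backups move to colder, cheaper storage classes and eventually expire. The bucket name and the age thresholds are assumptions for illustration; a backup solution that manages costs for you would apply equivalent policies on your behalf rather than require manual steps like this.

    from google.cloud import storage  # pip install google-cloud-storage

    def apply_backup_lifecycle(bucket_name="example-backup-bucket"):
        """Tier aging backup objects to cheaper storage classes and expire old ones."""
        client = storage.Client()
        bucket = client.get_bucket(bucket_name)

        # After 30 days, move backups to Nearline; after 90 days, to Coldline.
        bucket.add_lifecycle_set_storage_class_rule("NEARLINE", age=30)
        bucket.add_lifecycle_set_storage_class_rule("COLDLINE", age=90)
        # Delete backups once they pass their retention period (365 days here).
        bucket.add_lifecycle_delete_rule(age=365)

        bucket.patch()  # persist the updated lifecycle configuration on the bucket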

HYCU Brings the Benefits of Cloud to Backup

Companies choose the cloud for simple reasons: flexibility, scalability, and simplicity. They already experience these benefits when they choose the cloud’s existing compute, networking, storage, and security services. So, they may rightfully wonder, why should the software service they use to orchestrate their backup experience in the cloud be any different?

In short, it should not be any different. As companies adopt and adapt to the cloud’s consumption model, they will expect all services they consume in the cloud to follow its billing and usage model. Companies should not give backup a pass on this growing requirement.

HYCU is the first backup and recovery solution that companies can choose when protecting applications and data on the Google Cloud Platform to follow these basic principles of consuming cloud services. By integrating with GCP’s identity management services, being simple to start and stop, and helping companies control their costs, among others, HYCU exemplifies how easy backup and recovery can and should be in the cloud. HYCU provides companies with the breadth of backup services that their applications and data hosted in the cloud need while relieving them of the responsibility to continue to manage and maintain it.




Number of Appliances Dedicated to Deduplicating Backup Data Shrinks even as Data Universe Expands

One would think that with the continuing explosion in the amount of data being created every year, the number of appliances that can reduce the amount of data stored by deduplicating it would be increasing. That statement is both true and flawed. On one hand, the number of backup and storage appliances that can deduplicate data has never been higher and continues to increase. On the other hand, the number of vendors that create physical target-based appliances dedicated to the deduplication of backup data continues to shrink.

Data Universe Expands

In November 2018 IDC released a report where it estimated the amount of data that will be created, captured, and replicated will increase five-fold from the current 33 zettabytes (ZBs) to about 175 ZBs in 2025. Whether one agrees with that estimate, there is little doubt that there are more ways than ever in which data gets created. These include:

  • Endpoint devices such as PCs, tablets, and smart phones
  • Edge devices such as sensors that collect data
  • Video and audio recording devices
  • Traditional data centers
  • The creation of data through the backup, replication and copying of this created data
  • The creation of metadata that describes, categorizes, and analyzes this data

All these sources and means of creating data mean there is more data than ever under management. But as this occurs, the number of products originally developed to control this data growth – hardware appliances that specialize in deduplicating backup data after it is backed up, such as those from Dell EMC, ExaGrid, and HPE – has shrunk in recent years.

Here are the top five reasons for this trend.

1. Deduplication has Moved onto Storage Arrays.

Many storage arrays, both primary and secondary, give companies the option to deduplicate data. While these arrays may not achieve the same deduplication ratios as appliances purpose-built for deduplicating backup data, their combination of lower costs and high levels of storage capacity offsets the inability of their deduplication software to fully optimize backup data.

2. Backup software offers deduplication capabilities.

Rather than waiting to deduplicate backup data on a hardware appliance, almost all enterprise backup software products can deduplicate data on either the client or the backup server before storing it. This eliminates the need for a storage device dedicated to deduplicating data.

3. Virtual appliances that perform deduplication are on the rise.

Some providers, such as Quest Software, have exited the physical deduplication backup target appliance market and re-emerged with virtual appliances that deduplicate data. These give companies new flexibility to use hardware from any provider they want and implement their software-defined data center strategy more aggressively.

4. Newly created data may not deduplicate well or at all.

A lot of the new data that companies create may not deduplicate well or at all. Audio or video files may not change and will only deduplicate if full backups are done – which may be rare. Encrypted data will not deduplicate at all. In these circumstances, deduplication appliances are rarely, if ever, needed.

5. Multiple backup copies of the data may not be needed.

Much of the data collected from edge and endpoint devices may only need a couple of copies, if that. Audio and video files may also fall into this category of not needing more than a couple of retained copies. To get the full benefits of a target-based deduplication appliance, one needs to back up the same data multiple times – usually at least six times, if not more. This reduced need to back up and retain multiple copies of data diminishes the need for these appliances.
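A back-of-the-envelope calculation, sketched below, shows why retention counts matter so much: the effective deduplication ratio grows with the number of near-identical copies retained, so data kept in only one or two copies sees little benefit. The change rates used here are illustrative assumptions.

    def effective_dedup_ratio(copies_retained, change_rate_between_copies):
        """Approximate the dedup ratio for multiple near-identical backup copies.

        Logical data = all retained copies; physical data = one full copy plus
        the changed portion of each subsequent copy.
        """
        logical = copies_retained
        physical = 1 + (copies_retained - 1) * change_rate_between_copies
        return logical / physical

    # Six retained copies of largely static data (2% change between copies) dedupe well...
    print(round(effective_dedup_ratio(6, 0.02), 1))   # ~5.5:1
    # ...while two copies of unchanging video data see only a modest benefit.
    print(round(effective_dedup_ratio(2, 0.0), 1))    # 2.0:1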

Remaining Deduplication Appliances More Finely Tuned for Enterprise Requirements

The reduction in the number of vendors shipping physical target-based deduplication backup appliances seems almost counter-intuitive in light of the ongoing explosion in data growth that we are witnessing. But when one considers the nature of much of the data being created and its corresponding data protection and retention requirements, the decrease in the number of target-based deduplication appliances available is understandable.

The upside is that the vendors who do remain and the physical target-based deduplication appliances that they ship are more finely tuned for the needs of today’s enterprises. They are larger, better suited for recovery, have more cloud capabilities, and account for some of these other broader trends mentioned above. These factors and others will be covered in the forthcoming DCIG Buyer’s Guide on Enterprise Deduplication Backup Appliances.

Bitnami