Seven Basic Questions to Screen Cloud Storage Offerings

Using cloud storage often represents the first way that most companies adopt the cloud. They use it to archive data, as a backup target, to share files, or for long-term data retention. These approaches offer a low-risk means for companies to get started in the cloud. However, with more cloud storage offerings available than ever, companies need to ask and answer more pointed questions to screen them.

50+ Cloud Storage Offerings

As recently as a few years ago, one could count on one hand the number of cloud storage offerings. Even now, companies may find themselves hard pressed to name more than five or six of them.

The truth of the matter is that companies have more than 50 cloud storage offerings from which to choose. These offerings range from general-purpose cloud providers such as Amazon, Microsoft, and Google to specialized providers such as Degoo, hubiC, Jottacloud, and Wasabi.

The challenge companies now face is, “How do I screen these cloud storage offerings to make the right choice for me?” Making the best selection from these multiple cloud storage offerings starts by first asking and answering basic questions about your requirements.

Seven Basic Questions

Seven basic questions you should ask and answer to screen these offerings include:

  1. What type or types of data will you store in their cloud? If you only have one type of data (backups, files, or photos) to store in the cloud, a specialized cloud storage provider may best meet your needs. If you have multiple types of data (archival, backups, block, file, and/or object) to store in the cloud, a general-purpose cloud storage provider may better fit your requirements.
  2. How much data will you store in the cloud? Storing a few GBs or even a few hundred GBs of data in the cloud may not incur significant cloud storage costs. When storing hundreds of terabytes or petabytes of data in the cloud, a cloud offering with multiple tiers of storage and pricing may be to your advantage.
  3. How much time do you have to move the data to the cloud? Moving a few GBs of data to the cloud may not take very long. Moving terabytes of data (or more) may take days, weeks or even months. In these circumstances, look for cloud providers that offer tools to ingest data at your site that they can securely truck back to their site.
  4. How much time do you have to manage the cloud? No one likes to think about managing data in the cloud. Cloud providers count on this inaction as this is when cloud storage costs add up. If you have no plans to optimize data placement or the data management costs outweigh the benefits, identify a cloud storage provider that either does this work for you or makes its storage so simple to use you do not have to manage it.
  5. How often will you retrieve data from the cloud and how much will you retrieve? If you expect to retrieve a lot of data from the cloud, identify whether the cloud provider charges egress (data exit) fees and how much it charges. (The cost sketch after this list shows how quickly those fees can add up.)
  6. What type of performance do you need? Storing data on lower cost, lower tiers of storage may sound great until you need that data. If waiting multiple days to retrieve it could impact your business, keep your data on the higher performing storage tiers.
  7. What type of availability do you need? Check with your cloud storage provider to verify what uptime guarantees it provides for the region where your data resides.
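
To put numbers behind questions 2 and 5, here is a minimal cost sketch. The per-GB rates in it are illustrative assumptions, not any provider's published pricing; substitute the figures from the offerings you are screening.

    # Rough monthly cost estimator for screening cloud storage offerings.
    # The default rates are assumptions for illustration, not provider pricing.

    def estimate_monthly_cost(stored_tb, retrieved_tb,
                              storage_rate_per_gb=0.023,  # assumed standard-tier rate, $/GB-month
                              egress_rate_per_gb=0.09):   # assumed egress (data exit) fee, $/GB
        """Return (storage_cost, egress_cost, total) in dollars per month."""
        storage_cost = stored_tb * 1024 * storage_rate_per_gb
        egress_cost = retrieved_tb * 1024 * egress_rate_per_gb
        return storage_cost, egress_cost, storage_cost + egress_cost

    # Example: 200 TB stored and 10 TB retrieved per month.
    storage, egress, total = estimate_monthly_cost(200, 10)
    print(f"Storage: ${storage:,.0f}  Egress: ${egress:,.0f}  Total: ${total:,.0f}/month")

Run with a few different retrieval scenarios, the same offering can look either inexpensive or costly, which is exactly why questions 2 and 5 belong together.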

A Good Baseline

There are many more questions that companies can and should ask to select the right cloud storage offering for them. However, these seven basic questions should provide the baseline set of information companies need to screen any cloud storage offering.

If your company needs help in doing a competitive assessment of cloud storage providers, DCIG can help. You can contact DCIG by filling out this form on DCIG’s website or emailing us.




Four Ways to Achieve Quick Wins in the Cloud

More companies than ever want to use the cloud as part of their overall IT strategy. To do so, they often look to achieve some quick wins in the cloud to demonstrate its value. Achieving these quick wins also gives them practical, hands-on experience in the cloud. Incorporating the cloud into your backup and disaster recovery (DR) processes may serve as the best way to get these wins.

Any company hoping to get some quick wins in the cloud should first define what a “win” looks like. For the purposes of this blog entry, a win consists of:

  • Fast, easy deployments of cloud resources
  • Minimal IT staff involvement
  • Improved application processes or workflows
  • The same or lower costs

Here are four ways for companies to achieve these quick wins in the cloud through their backup and DR processes:

#1 – Take a Non-disruptive Approach

When possible, leverage your company’s existing backup infrastructure to store copies of data in the cloud. Nearly all enterprise backup products, whether backup software or deduplication backup appliances, interface with public clouds. These products can store backup data in the cloud without disrupting your existing environment.

Using these products, companies can get exposure to the public cloud’s core compute and storage services. These are the cloud services companies are most apt to use initially and represent the most mature of the public cloud offerings.

#2 – Deduplicate Backup Data Whenever Possible

Public cloud providers charge monthly for every GB of data that companies store in their respective clouds. The more data that your company stores in the cloud, the higher these charges become.

Deduplicating data reduces the amount of data that your company stores in the cloud. In so doing, it also helps to control and reduce your company’s monthly cloud storage costs.
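
As a simple illustration of that effect, the sketch below compares what a month of backups would cost with and without deduplication. The 10:1 reduction ratio and the per-GB rate are assumptions chosen only to show the arithmetic, not vendor or provider figures.

    # Illustrative only: how a deduplication ratio shrinks billed cloud capacity.
    # The 10:1 ratio and $0.023/GB-month rate are assumptions, not vendor figures.

    logical_backup_tb = 100        # what the backup software protects
    dedupe_ratio = 10              # assumed 10:1 data reduction
    rate_per_gb_month = 0.023      # assumed cloud storage rate

    monthly_cost_raw = logical_backup_tb * 1024 * rate_per_gb_month
    monthly_cost_deduped = (logical_backup_tb / dedupe_ratio) * 1024 * rate_per_gb_month

    print(f"Without deduplication: ${monthly_cost_raw:,.0f}/month")
    print(f"With {dedupe_ratio}:1 deduplication: ${monthly_cost_deduped:,.0f}/month")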

#3 – Tier Your Backup Data

Many public cloud storage providers offer multiple tiers of storage. The default storage tier they offer does not, however, represent their most cost-effective option. This default tier is designed for data that needs high levels of availability and moderate levels of performance.

Backup data tends to need these features only for the first 24 to 72 hours after it is backed up. After that, companies can often move it to lower-cost tiers of cloud storage. Note that these lower-cost tiers come with decreasing levels of availability and performance. While the vast majority of backups (over 99 percent) fall into this category, before moving data to lower tiers, check whether any application recoveries have required data more than three days old.
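
For companies whose backups land in Amazon S3, a lifecycle rule can automate this tiering. The sketch below is a minimal example, assuming a bucket named example-backups, a backups/ prefix, a three-day transition to Glacier, and a 90-day retention window; all four values are assumptions to adjust to your own recovery patterns.

    import boto3

    # Sketch: transition backup objects to a colder, cheaper tier after 3 days
    # and expire them after 90. Bucket, prefix, and day counts are assumptions.
    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="example-backups",                        # hypothetical bucket name
        LifecycleConfiguration={
            "Rules": [
                {
                    "ID": "tier-backups-after-3-days",
                    "Filter": {"Prefix": "backups/"},    # hypothetical prefix
                    "Status": "Enabled",
                    "Transitions": [
                        {"Days": 3, "StorageClass": "GLACIER"}
                    ],
                    "Expiration": {"Days": 90},          # assumed retention window
                }
            ]
        },
    )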

#4 – Actively Manage Your Cloud Backup Environment

Applications and data residing in the cloud differ from your production environment in one important way: every GB of data consumed and every hour that an application runs incurs a cost. This differs from on-premises environments, where all existing hardware represents a sunk cost. As such, there is less incentive to actively manage existing hardware resources since any resources recouped only represent a “soft” savings.

This does not apply in the cloud. Proactively managing and conserving cloud resources translates into real savings. To realize these savings, companies need to look to products such as Quest Foglight, which helps them track where their backup data resides in the cloud and identify the application processes they have running. This, in turn, helps them manage and control their cloud costs.

Companies rightfully want to adopt the cloud for the many benefits that it offers and, ideally, achieve a quick win in the process. Storing backup data in the cloud and moving DR processes to the cloud provides the quick win in the cloud that many companies initially seek. As they do so, they should also ensure they put the appropriate processes and software in place to manage and control their usage of cloud resources.




HPE Expands Its Big Tent for Enterprise Data Protection

When it comes to the mix of data protection challenges that exist within enterprises today, these companies would love to identify a single product that they can deploy to solve all of them. I hate to be the bearer of bad news, but that single product does not yet exist. That said, enterprises will find a steadily improving ecosystem of products that increasingly work well together to address this challenge, with HPE at the forefront of putting up a big tent that brings these products together and delivers them as a single solution.

Having largely solved their backup problems at scale, enterprises have new freedom to analyze and address their broader enterprise data protection challenges. As they look to bring long-term data retention, data archiving, and multiple types of recovery (single applications, site failovers, disaster recoveries, and others) under one big tent for data protection, they find they often need to deploy multiple products.

This creates a situation where each product addresses specific pain points that enterprises have. However, multiple products equate to multiple management interfaces that each have their own administrative policies with minimal or no integration between them. This creates a thornier problem – enterprises are left to manage and coordinate the hand-off of the protection and recovery of data between these different individual data protection products.

A few years ago, HPE started to build a “big tent” to tackle these enterprise data protection and recovery issues. It laid the foundation with its HPE 3PAR StoreServ storage arrays, StoreOnce deduplication storage systems, and Recovery Manager Central (RMC) software to help companies coordinate and centrally manage:

  • Snapshots on 3PAR StoreServ arrays
  • Replication between 3PAR StoreServ arrays
  • The efficient movement of data between 3PAR and StoreOnce systems for backup, long term retention, and fast recoveries

This week HPE expanded its big tent of data protection to give companies more flexibility to protect and recover their data more broadly across their enterprise. It did so in the following ways:

  • HPE RMC 6.0 can directly recover data to HPE Nimble storage arrays. Recoveries from backups can be a multi-step process that may require data to pass through the backup software and the application server before it lands on the target storage array. Beginning December 2018, companies can use RMC to directly recover data to HPE Nimble storage arrays from an HPE StoreOnce system without going through the traditional recovery process, just as they can already do with HPE 3PAR StoreServ storage arrays.
  • HPE StoreOnce can directly send and retrieve deduplicated data from multiple cloud providers. Companies sometimes fail to consider that general-purpose cloud service providers such as Amazon Web Services (AWS) or Microsoft Azure make no provisions to optimize the data stored with them, such as deduplicating it. Using HPE StoreOnce’s new direct support for AWS, Azure, and Scality, companies can use StoreOnce to first compress and deduplicate data before they store it in the cloud.
  • Integration between Commvault and HPE StoreOnce systems. Out of the gate, companies can use Commvault to manage StoreOnce operations such as replicating data between StoreOnce systems as well as moving data directly from StoreOnce systems to the cloud. Moreover, as the relationship between Commvault and HPE matures, companies will also be able to use HPE StoreOnce Catalyst, HPE’s client-based deduplication software agent, in conjunction with Commvault to back up data on server clients where data may not reside on HPE 3PAR or Nimble storage. Using the HPE StoreOnce Catalyst software, Commvault will deduplicate data at the source before sending it to an HPE StoreOnce system.

Source: HPE

Of the three announcements that HPE made this week, the new relationship with Commvault, alongside its pre-existing relationships with Micro Focus (formerly HPE Data Protector) and Veritas, best demonstrates HPE’s commitment to helping enterprises build a big tent for their data protection and recovery initiatives. Storing data on HPE 3PAR and Nimble arrays and using RMC to manage backups and recoveries on StoreOnce systems certainly accelerates and simplifies these functions when companies can do so. But by working with these other partners, HPE shows that it recognizes companies will not store all their data on its systems and that it will accommodate them so they can create a single, larger data protection and recovery solution for their enterprise.




Two Hot Technologies to Consider for Your 2019 Budgets

Hard to believe, but the first day of autumn is just two days away, and with fall weather always comes cooler temperatures (which I happen to enjoy!). This means people are staying inside a little more and doing those fun, end-of-year activities that everyone enjoys, such as planning their 2019 budgets. As you do so, solutions from BackupAssist and StorMagic are two hot new technologies to consider making room for in the New Year.

BackupAssist 365

BackupAssist 365 backs up files and emails stored in the cloud. While backup of cloud-based data may seem rather ho-hum in today’s artificial intelligence, blockchain, and digital transformation obsessed world, it solves a real-world problem that organizations of nearly every size face: how to cost-effectively and simply protect all those pesky files and emails that people store in cloud applications such as DropBox, Office 365, Google Drive, OneDrive, Gmail, Outlook, and others.

To do so, BackupAssist 365 adopted two innovative yet practical approaches to protect files and emails.

  • First, it interfaces directly with these various cloud providers to back up this data. Using your login permissions (which you provide when configuring the software), BackupAssist 365 accesses data directly in the cloud. This negates the need for your server, PC, or laptop to be turned on when these backups occur, so backups can happen at any time.
  • Second, it does cloud-to-local backup. In other words, rather than running up the data transfer and network costs that come with backing up to another cloud, it backs the data up to local storage on your site. While that may seem a little odd in today’s cloud-centric world, companies can get a great deal of storage capacity for nominal amounts of money. Since it only does an initial full backup and then differential backups thereafter, the ongoing data transfer costs are nominal and the amount of storage capacity needed onsite is equally small.

Perhaps the best part about BackupAssist 365 is its cost (or lack thereof). BackupAssist 365 licenses its software on a per-user basis, with each user email account counting as one user license. However, that one email account covers the backup of that user’s data in any cloud service the user relies on. Further, the cost is only $1/month per user, with decreasing costs for greater numbers of users. In fact, the cost is so low on a per-user basis that companies may not even need to budget for this service. They can just start using it and expense their credit cards to keep it below corporate radar screens.

StorMagic SvSAN

The StorMagic SvSAN touches on two other hot technology trends that I purposefully (or not so purposefully) left out above: hyperconverged infrastructure (HCI) and edge computing. However, unlike many of the HCI and edge computing plays in the marketplace, such as Cisco HyperFlex, Dell EMC VxRail, and Nutanix, StorMagic has not forgotten about the cost constraints that branch, remote, and small offices face.

As Cisco, Dell EMC, Nutanix, and others chase the large enterprise data center opportunities, they often leave remote, branch, and small offices with two choices: pay up or find another solution. Many offices of this size are opting to find alternative solutions.

This is where StorMagic primarily plays. For a less well-known player, they play much bigger than they may first appear. Through partnerships with large providers such as Cisco and Lenovo among others, StorMagic comes to market with highly available, two-server systems that scale across dozens, hundreds, or even thousands of remote sites. To get a sense of StorMagic’s scalability, walk into any of the 2,000+ Home Depots in the United States or Mexico and ask to look at the computer system that hosts their compute and storage. If the Home Depot lets you and you can find it, you will find a StorMagic system running somewhere in the store.

The other big challenge that each StorMagic system addresses is security. Because its systems can be deployed almost anywhere, in any environment, they are susceptible to theft. In fact, in talking to one of its representatives, he shared a story where someone drove a forklift through the side of a building and stole a computer system at one of its customer sites. Not that it mattered. To counter these types of threats, StorMagic encrypts all the data on its HCI solutions with its own FIPS 140-2 compliant software.

Best of all, to get these capabilities, companies do not have to break the bank to acquire one of these systems. The list price for the Standard Edition of the SvSAN software, which includes 2TB of usable storage, high availability, and remote management, is $2,500.

As companies look ahead and plan their 2019 budgets, they need to take care of their operational requirements but they may also want to dip their toes in the water to get the latest and greatest technologies. These two technologies give companies the opportunities to do both. Using BackupAssist 365, companies can quickly and easily address their pesky cloud file and email backup challenges while StorMagic gives them the opportunity to affordably and safely explore the HCI and edge computing waters.




Analytics, Automation and Hybrid Clouds among the Key Takeaways from VMworld 2018

At early VMworld shows, stories emerged of attendees scurrying from booth to booth on the exhibit floor looking for VM data protection and hardware solutions to address the early challenges that VMware ESXi presented. Fast forward to the 2018 VMworld show and the motivation behind attending training sessions and visiting vendor booths has changed significantly. Now attendees want solutions that bring together their private and public clouds, offer better ways to analyze and automate their virtualized environments, and deliver demonstrable cost savings and/or revenue opportunities after deploying them.

The entrance to the VMworld 2018 exhibit hall greeted attendees a little differently this year than in years past. Granted, there were still some of the usual suspects, such as Dell EMC and HPE, that have reserved booths at this show for many years. But right alongside them were relative newcomers (to the VMworld show anyway) such as Amazon Web Services and OVHcloud.

Then, as one traversed the exhibit hall floor and visited the booths of the vendors immediately behind them, the data protection and hardware themes of the early VMworld shows persisted, though the messaging and many of the vendor names have changed since the early days of this show.

Companies such as Cohesity, Druva, and Rubrik represent the next generation of data protection solutions for vSphere while companies such as Intel and Trend Micro have a more pronounced presence on the VMworld show floor. Together these exhibitors reflect the changing dynamics of what is occurring in today’s data centers and what the current generation of organizations are looking for vendors to provide for their increasingly virtualized environments. Consider:

  1. Private and public cloud are coming together to become hybrid. The theme of hybrid clouds with applications that can span both public and private clouds began with VMworld’s opening keynote announcing the availability of Amazon Relational Database Service (Amazon RDS) on VMware. Available in the coming months, this functionality will free organizations to automate the setup of Microsoft SQL Server, Oracle, PostgreSQL, MariaDB and MySQL databases in their traditional VMware environments and then migrate them to the AWS cloud. Those interested in trying out this new service can register here for a preview.
  2. Analytics will pave the way for increasing levels of automation. As organizations of all sizes adopt hybrid environments, the only way they can effectively manage those environments at scale is to automate their management. This begins with analytics tools that capture the data points coming in from the underlying hardware, the operating systems, the applications, the public clouds to which they attach, the databases, and the devices that feed them data.

Evidence of the growing presence of the analytics tools that enable this automation was everywhere at VMworld. One good example is Runecast, which analyzes the logs of these environments and then scours blogs, white papers, forums, and other online sources for best practices to advise companies on how to best configure their environments. Another is Login VSI, which does performance benchmarking and forecasting to anticipate how VDI patches and upgrades will impact the current infrastructure.

  3. The cost savings and revenue opportunities for these hybrid environments promise to be staggering. One of the more compelling segments in one of the keynotes covered the savings that many companies initially achieved deploying vSphere. Below is one graphic that appeared at the 8:23 mark in the video of the second day’s keynote, where a company reduced its spend on utility charges by over $60,000 per month, an 84% reduction in cost. Granted, this example was for illustration purposes, but it seemed in line with other stories I have heard anecdotally.

Source: VMware

But as companies move into this hybrid world that combines private and public clouds, the value proposition changes. While companies may still see cost savings going forward, it is more likely that they will realize and achieve new opportunities that were simply not possible before. For instance, they may deliver automated disaster recoveries and high availability for many more or all their applications. Alternatively, they will be able to bring new products and services to market much more quickly or perform analysis that simply could not have been done before because they have access to resources that were unavailable to them in a cost-effective or timely manner.




CloudShift Puts Flawless DR on Corporate Radar Screens

Hyper-converged Infrastructure (HCI) solutions excel at delivering on many of the attributes commonly associated with private clouds. Consequently, the concepts of hyper-convergence and private clouds have become, in many respects, almost inextricably linked.

But for an HCI solution to not have a clear path forward for public cloud support … well, that’s almost anathema in the increasingly hybrid cloud environments found in today’s enterprises. That’s what makes this week’s CloudShift announcement from Datrium notable: it begins to clarify how Datrium will go beyond backup to the public cloud as part of its DVX solution, and it puts the concept of flawless DR on corporate radar screens.

Source: Datrium

HCI is transforming how organizations manage their on-premise infrastructure. By combining compute, data protection, networking, storage, and server virtualization into a single pre-integrated solution, HCI solutions eliminate many of the headaches associated with traditional IT infrastructures while delivering the “cloud-like” speed and ease of deployment that enterprises want.

However, enterprises increasingly want more than “cloud-like” abilities from their on-premise HCI solution. They also want the flexibility to move the virtual machines (VMs) they host on their HCI solution into public cloud environments if needed. Specifically, if they run disaster recovery (DR) tests, perform an actual DR, or need to move a specific workload that is experiencing high throughput into the public cloud, having the flexibility to move VMs into and out of the cloud as needed is highly desirable.

Datrium answers the call for public cloud integration with its recent CloudShift announcement. However, Datrium did not just come out with a me-too answer for public clouds by announcing it will support the AWS cloud. Rather, it delivered what most enterprises are looking for at this stage in their journey to a hybrid cloud environment: a means to seamlessly incorporate the cloud into their overall DR strategy.

The goals behind its CloudShift announcement are three-fold:

  1. Build on the existing Datrium DVX platform that already manages the primary copy of data as well as its backups. With the forthcoming availability of CloudShift in the first half of 2019, it will complete the primary-to-backup-to-cloud circle that companies want.
  2. Make DR work flawlessly. If there are two words that together often represent an oxymoron, they are “flawless DR.” By bringing primary, backup, and cloud together and managing them as one holistic piece, companies can begin to (someday soon, ideally in this lifetime) view flawless DR as the norm instead of the exception.
  3. Orchestrated DR failover and failback. “DR failover and failback” just rolls off the tongue; it is simple to say and everyone understands what it means. But executing a successful DR failover and failback in today’s world tends to get very pricey and very complex. By rolling the management of primary, backup, and cloud under one roof and then continually performing compliance checks on the execution environment to ensure it meets the RPO and RTO of the DR plan, Datrium gives companies a higher degree of confidence that DR failovers and failbacks only occur when they are supposed to and that, when they occur, they will succeed.

Despite many technology advancements in recent years, enterprise-wide, turnkey DR capabilities with orchestrated failover and failback between on-premises and the cloud are still largely the domain of high-end enterprises that have the expertise to pull it off and are willing to commit large amounts of money to establish and maintain a (hopefully) functional DR capability. Datrium’s CloudShift announcement puts the industry on notice that reliable, flawless DR that will meet the budget and demands of a larger number of enterprises is on its way.




Too Many Fires, Poor Implementations, and Cost Overruns Impeding Broader Public Cloud Adoption

DCIG’s analysts (myself included) have lately spent a great deal of time getting up close and personal on the capabilities of public cloud providers such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. We have also spent time talking to individuals deploying cloud solutions. As we have done so, we recognize that the capabilities of these cloud offerings should meet and exceed the expectations of most organizations regardless of their size. However, impeding cloud adoption are three concerns that have little to do with the technical capabilities of these public cloud solutions.

Anyone who spends any time studying the capabilities of any of these cloud offerings for the first time will walk away impressed. Granted, each offering has its respective strengths and weaknesses. However, when one examines each of these public cloud offerings and their respective infrastructures and compares them to the data centers that most companies own and manage, the differences are stark. The offerings from these public cloud providers win hands down. This might explain why organizations of all sizes are adopting the cloud at some level.

The more interesting dilemma is why organizations are not adopting public cloud offerings at a faster pace and why some early adopters are even starting to leave the cloud. While this is not an exhaustive list of reasons, here are three key concerns that have come out of our conversations and observations that are impeding cloud adoption.

Too many fires. Existing data centers are a constant target for budget cutbacks, are frequently understaffed, and too often lack any clear, long-term vision to guide their development. This combination of factors has led to costly, highly complex, inflexible data centers that need a lot of people to manage them. This situation exists at the exact moment when the business side of the house expects the data center to become simpler, more cost-effective, and more flexible to manage. While in-house data center IT staff may want to respond to these business requests, they are often consumed with putting out the fires caused by the complexity of the existing data center. This leaves them little or no time to explore and investigate new solutions.

Poor implementations. The good news is that public cloud offerings have a very robust feature set. The bad news is that all these features make them daunting to learn and very easy to set up incorrectly. If anything, the ease and low initial costs of most public cloud providers may work against the adoption of public cloud solutions. They have made it so easy and inexpensive for companies to get into the cloud that companies may try it out without really understanding all the options available to them and the ramifications of the decisions they make. This can easily lead to poor application implementations in the cloud and potentially introduce more costs and complexity, not less. The main upside here is that because creating and taking down virtual private clouds with these providers is relatively easy, even a poor setup can be rectified by creating a new virtual private cloud that better meets your needs.

Cloud cost overruns. Part of the reason companies live with and even mask the complexity of their existing data centers is that they can control their costs. Even if an application needs more storage, compute, networking, or power, they can sometimes move hardware and software around on the back end to mask these costs until the next fiscal quarter or year rolls around, when they go to the business to ask for approval to buy more. Once applications and data are in the cloud and start to grow, these costs become exposed almost immediately. Since cloud providers bill based upon monthly usage, companies need to closely monitor their applications and data in the cloud: identifying which ones are starting to incur additional charges, knowing what options are available to lower those charges, and assessing the practicality of making those changes.
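
One hedged example of that monitoring: the sketch below uses the AWS Cost Explorer API via boto3 to break down the last 30 days of spend by service. The 30-day window and the decision about which services warrant action are assumptions left to the reader.

    import boto3
    from datetime import date, timedelta

    # Sketch: report recent AWS spend by service so growth is visible before
    # the monthly bill arrives. The 30-day window is an arbitrary assumption.
    ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer

    end = date.today()
    start = end - timedelta(days=30)

    response = ce.get_cost_and_usage(
        TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
    )

    for period in response["ResultsByTime"]:
        for group in period["Groups"]:
            service = group["Keys"][0]
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            if amount > 0:
                print(f"{service}: ${amount:,.2f}")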

Anyone who honestly assesses the capabilities available from the major public cloud providers will find they can deliver next-gen features better than what most organizations can do on their own. That said, companies either need to find the time to first educate themselves about these cloud providers or identify someone they trust to help them down the cloud path. While these three issues are impeding cloud adoption, they should not be stopping it as they still too often do. The good news is that even if a company does poorly implement its environment in the cloud the first time around (and a few will), the speed and flexibility with which public cloud providers can build out new virtual private clouds and tear down existing ones means the company can cost-effectively improve it.




Four Implications of Public Cloud Adoption and Three Risks to Address

Businesses are finally adopting the public cloud because a large and rapidly growing catalog of services is now available from multiple cloud providers. These two factors have many implications for businesses. This article addresses four of these implications plus several cloud-specific risks.

Implication #1: No enterprise IT department will be able to keep pace with the level of services innovation available from cloud providers

The battle is over. Cloud wins. Deal with it.

Dealing with it does not necessarily mean that every business will move every workload to the cloud. It does mean that it is time for business IT departments to build awareness of the services available from public cloud providers. One way to do this is to tap into the flow of service updates from one or more of the major cloud providers.

For Amazon Web Services, I like What’s New with AWS. Easy filtering by service category is combined with sections for featured announcements, featured video announcements, and one-line listings of the most recent announcements from AWS. The one-line listings include links to service descriptions and to longer-form articles on the AWS blog.

For Microsoft Azure, I like Azure Updates. As its subtitle says, “One place. All updates.” The Azure Updates site provides easy filtering by product, update type and platform. I especially like the ability to filter by update type for General Availability and for Preview. The site also includes links to the Azure roadmap, blog and other resources. This site is comprehensive without being overwhelming.

For Google Cloud Platform, its blog may be the best place to start. The view can be filtered by label, including by announcements. This site is less functional than the AWS and Microsoft Azure resources cited above.

For IBM Cloud, the primary announcements resource is What’s new with IBM Cloud. Announcements are presented as one-line listings with links to full articles.

Visit these sites, subscribe to their RSS feeds, or follow them via social media platforms. Alternatively, subscribe to their weekly or monthly newsletters via email. Once a business has workloads running in one of the public clouds, at a minimum an IT staff member should follow that provider’s updates site.
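
Teams that prefer to pull these announcements into their own tooling can poll a provider’s feed with a few lines of code. The sketch below uses the feedparser library and assumes the AWS “What’s New” RSS URL shown; verify the URL on the provider’s site before relying on it.

    import feedparser  # pip install feedparser

    # Sketch: print the most recent AWS "What's New" announcements.
    # The feed URL is an assumption; confirm it on the provider's updates page.
    FEED_URL = "https://aws.amazon.com/about-aws/whats-new/recent/feed/"

    feed = feedparser.parse(FEED_URL)
    for entry in feed.entries[:10]:
        print(f"{entry.get('published', 'n/a')} - {entry.title}")
        print(f"  {entry.link}")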

Implication #2: Pressure will mount on Enterprise IT to connect business data to public cloud services

The benefits of bringing public cloud services to bear on the organization’s data will create pressure on enterprise IT departments to connect business data to those services. There are many options for accomplishing this objective, including:

  1. All-in with one public cloud
  2. Hybrid: on-prem plus one public
  3. Hybrid: on-prem plus multiple public
  4. Multi-cloud (e.g. AWS + Azure)

The design of the organization and the priorities of the business should drive the approach taken to connect business data with cloud services.

Implication #3: Standard data protection requirements now extend to data and workloads in the public cloud

No matter what approach is taken when embracing the public cloud, standard data protection requirements extend to data and workloads in the cloud. Address these requirements up front. Explore alternative solutions and select one that meets the organization’s data protection requirements.

Implication #4: Cloud Data Protection and DRaaS are on-ramps to public cloud adoption

For most organizations, the transition to the cloud will be a multi-phased process. Data protection solutions that can send backup data to the cloud are a logical early phase. Disaster recovery as a service (DRaaS) offerings represent another relatively low-risk path to the cloud that may be more robust and/or lower cost than existing disaster recovery setups. These solutions move business data into public cloud repositories. As such, cloud data protection and DRaaS may be considered on-ramps to public cloud adoption.

Once corporate data has been backed up or replicated to the cloud, tools are available to extract and transform the data into formats that make it available for use/analysis by that cloud provider’s services. With proper attention, this can all be accomplished in ways that comply with security and data governance requirements. Nevertheless, there are risks to be addressed.

Risk to Address #1: Loss of change control

The benefit of rapid innovation has a downside. Any specific service may be upgraded or discarded by the provider without much notice. Features used by a business may be enhanced or deprecated. This can force changes in other software that integrates with the service, or in procedures used by staff and the associated documentation for those procedures.

For example, Office 365 and Google G Suite features can change without much notice. This creates a “Where did that menu option go?” experience for end users. Some providers reduce this pain by providing a quick tutorial for new features within the application itself. Others provide online learning centers that make new-feature tutorials easy to discover.

Accept this risk as an unavoidable downside to rapid innovation. Where possible, manage the timing of these releases to an organization’s users, giving them advance notice of the changes along with access to tutorials.

Risk to Address #2: Dropped by provider

A risk that may not be obvious to many business leaders is that of being dropped by a cloud service provider. A business with unpopular opinions might have services revoked, sometimes with little notice. Consider how quickly the movement to boycott the NRA resulted in severed business-to-business relationships. Even an organization as large as the US Military faces this risk. As was highlighted in recent news, Google will not renew its military AI project due in large part to pressure from Google employees.

Mitigate this risk through contracts and architecture. This is perhaps one argument in favor of a hybrid on-prem plus cloud approach to the public cloud versus an all-in approach.

Risk to Address #3: Unpredictable costs

It can be difficult to predict the costs of running workloads in the public cloud, and these costs can change rapidly. Address this risk by setting cost thresholds that trigger an alert. Consider subscribing to a service such as Nutanix Beam to gain granular visibility into and optimization of public cloud costs.
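
As one concrete way to set such a threshold on AWS, the sketch below creates a CloudWatch alarm on the account’s EstimatedCharges billing metric. It assumes billing alerts are enabled for the account, that the threshold is $5,000, and that an SNS topic already exists to receive the notification; all three are assumptions, not recommendations.

    import boto3

    # Sketch: alarm when estimated monthly AWS charges cross a threshold.
    # Billing metrics live in us-east-1 and must be enabled for the account.
    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    cloudwatch.put_metric_alarm(
        AlarmName="monthly-spend-over-5000-usd",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,                      # the billing metric updates every few hours
        EvaluationPeriods=1,
        Threshold=5000.0,                  # assumed budget threshold in USD
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:cost-alerts"],  # hypothetical SNS topic
    )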

It’s time to get real about the public cloud

Many businesses are ready to embrace the public cloud. IT departments should make themselves aware of services that may create value for their business. They should also work through the implications of moving corporate data and workloads to the cloud, and make plans for managing the attendant risks.




Two Insights into Why Enterprises are Finally Embracing Public Cloud Computing

In between my travels, doing research, and taking some time off in May, I also spent time getting up to speed on Amazon Web Services by studying for the AWS Certified Solutions Architect Associate exam in anticipation of DCIG doing more public cloud-focused competitive research. While I know it is no secret that cloud adoption has taken off in recent years, what has puzzled me during this time is, “Why is it only now that enterprises have finally started to embrace public cloud computing?”

From my first days as an IT user I believed that all organizations would eventually embrace cloud computing in some form. That belief was further reinforced as I came to understand virtualization and its various forms (compute, network, and storage.) But what has perplexed me to one degree or another ever since then is why enterprises have not more fully invested in these various types of virtualization and embraced the overall concept of cloud computing sooner.

While there are various reasons for this, I sense the biggest reason is that most organizations view IT as a cost center. Granted, they see the value that IT has brought and continues to bring to their business. However, most organizations do not necessarily want to provide technology services. They would rather look to others to provide the IT technologies that they need and then consume them when they are sufficiently robust and mature for their needs.

Of course, establishing exactly when a technology satisfies these conditions varies for each industry. Some might rightfully argue that cloud computing has been around for a decade or more and that many organizations already use it.

But using public cloud computing for test, development, or even some limited production deployments within an organization is one thing. Making public cloud computing the preferred or even the only choice for hosting new and existing applications is quite another. When this change in policy occurs within an enterprise, then one can say the enterprise has embraced public cloud computing. To date, only relatively few enterprises have embraced cloud computing at scale, but I recently ran across two charts that help explain why this is changing.

The first chart I ran across was in one of the training videos I watched. This video included a graphic that showed the number of new service announcements and updates that AWS made each year from 2011-2017.

Source: A Cloud Guru

It was when I saw the amount of innovation and change that has occurred in the past three years at AWS that I got a better understanding of why enterprises have started to embrace cloud computing at scale. Based on these numbers, AWS made nearly five service announcements and/or updates every business day of 2017.

Many businesses would consider themselves fortunate to make five changes every month, much less every day. But this level of innovation and change also explains why public cloud providers are pulling away from traditional data centers in terms of the capabilities they can offer. It also explains why enterprises can have more confidence in public cloud providers and move more of their production applications there. This level of innovation also inherently communicates the high degrees of stability and maturity that enterprises often prioritize.

The other chart brought to my attention is found on Microsoft’s website and provides a side-by-side comparison of Microsoft Azure to AWS. This chart provides a high-level overview of the offerings from both of these providers and how their respective offerings compare and contrast.

Most notable about this chart is that it means organizations have another competitive cloud computing offering available from a large, stable provider. In this way, as an enterprise embraces the idea of cloud computing in general and chooses a specific provider of these services, it can do so knowing it has a viable secondary option should the initial provider become too expensive, change offerings, or withdraw an offering that it currently uses or plans to use.

Traditional enterprise data centers are not going away. However, as evidenced by the multitude of enhancements that AWS, Microsoft Azure, and others have made in the past few years, their cloud offerings surpass the levels of auditing, flexibility, innovation, maturity, and security found in many corporate data centers. These features, coupled with organizations having multiple cloud providers from which to choose, provide insight into why enterprises are lowering their resistance to adopting public cloud computing and embracing it more wholeheartedly.




Amazon AWS, Google Cloud, Microsoft Azure and … now Nutanix Xi Cloud Services?!

Amazon, Google, and Microsoft have staked their claims as the Big 3 providers of enterprise cloud services with their respective AWS, Cloud, and Azure offerings. Enter Nutanix. It has from Day 1 sought to emulate AWS with its on-premise cloud offering. But with the announcements made at its .NEXT conference last week in New Orleans, companies can look for Nutanix to deliver cloud services both on- and off-premise that should fundamentally change how enterprises view Nutanix going forward.

There is little dispute that Amazon AWS is the unquestioned leader in cloud services, with Microsoft, Google, and IBM possessing viable offerings in this space. Yet where each of these providers still tends to fall short is in addressing enterprises’ need to maintain a hybrid cloud environment for the foreseeable future.

Clearly most enterprises want to incorporate public cloud offerings into their overall corporate data center design and, by Nutanix’s own admission, the adoption of the public cloud is only beginning in North America. But there is already evidence from early cloud adopters that the costs associated with maintaining all their applications with public cloud providers outweigh the benefits. However, these same enterprises are hesitant to bring these applications back on-premise because they like, and I daresay have even become addicted to, the ease of managing applications and data that the cloud provides them.

This is where Nutanix, at its recent .NEXT conference, made a strong case for becoming the next cloud solution on which enterprises should place their bets. Three technologies it announced during this conference particularly stood out to me as evidence that Nutanix is doing more than bringing another robust cloud offering to market; it is also addressing the nagging enterprise need for a cloud solution that can be managed the same way on-premise and off. Consider:

1. Beam takes the mystery out of where all the money and data in the cloud have gone. A story I repeatedly hear is how complicated billing statements are from AWS and how easy it is for these costs to exceed corporate budgets. Another story I often hear is that it is so easy for corporate employees to get started in the cloud that they can easily run afoul of corporate governance. These stories, true or not, likely impede broader cloud adoption by many companies.

This is where Beam sheds some light on the picture. For those companies already using the cloud, Beam provides visibility into the cloud to address both the cost and data governance concerns. Since Beam is a separate, standalone product available from Nutanix, organizations can quickly gain visibility into how much money they are spending on the cloud and who is spending it, and perform audits to ensure compliance with HIPAA, ISO, PCI-DSS, CIS, NIST, and SOC-2. Those organizations not already using the cloud can implement Beam in conjunction with their adoption of cloud services to monitor and manage their usage of it. Beam currently supports AWS and Azure, with support for Nutanix Xi and Google Cloud in the works.

2. Xi brings together the management of on- and off-premise clouds without compromise. Make no mistake: Nutanix’s recently announced Xi cloud services offering is not yet on the same standing as AWS, Azure, or Google Cloud. In fact, by Nutanix’s own admission, Xi is “still coming” as an offering. That said, Nutanix addresses a lingering concern that persists among enterprise users: they want the same type of cloud experience on-premise and off. The Nutanix Acropolis Hypervisor (AHV), accompanied by its forthcoming Xi cloud services offering, stands poised to deliver that, giving companies the flexibility to seamlessly (relatively speaking) move applications and data between on- and off-premise locations without changing how they are managed.

3. Netsil is “listen” spelled backwards, which is just one more reason to pay attention to this technology. Every administrator’s worst nightmare is troubleshooting an issue in the cloud. In today’s highly virtualized, inter-dependent application world, identifying the root cause of a Sev 1 problem can make even the most ardent supporter of virtualized, serverless compute environments long for the “simpler” days of standalone servers.

Thank God solutions such as Netsil are now available. Netsil tackles the thorny issue of microsegmentation – how applications within containers, virtual machines, and physical machines communicate, interact, and wall off one another – by identifying their respective dependencies on each other. This takes much of the guesswork out of troubleshooting these environments and gives enterprises more confidence to deploy multiple applications on fewer hosts. While Netsil is “still coming” per Nutanix, this type of technology is one that enterprises should find almost a necessity, both to maximize their use of resources in the cloud and to give them peace of mind that they have tools at their disposal to solve the challenges that will inevitably arise.




Two Most Disruptive Storage Technologies at the NAB 2018 Show

The exhibit halls at the annual National Association of Broadcasters (NAB) show in Las Vegas always contain eye-popping displays highlighting recent technological advances as well as what is coming down the path in the world of media and entertainment. But behind NAB’s glitz and glamour lurks a hard, cold reality: every word recorded, every picture taken, and every scene filmed must be stored somewhere, usually multiple times, and be available at a moment’s notice. It was in these halls at the NAB show that DCIG identified two start-ups with storage technologies poised to disrupt business as usual.

Storbyte. Walking the floor at NAB, a tall, blond individual literally yanked me by the arm as I was walking by and asked me if I had ever heard of Storbyte. Truthfully, the answer was No. This person turned out to be Steve Groenke, Storbyte’s CEO, and what ensued was a great series of conversations while at NAB.

Storbyte has come to market with an all-flash array. However, it took a very different approach to solve the problems of longevity, availability, and sustainable high write performance in SSDs and the storage systems built with them. What makes it so disruptive is that it created a product that meets the demand for extreme sustained write performance by slowing down flash, and it does so at a fraction of what other all-flash arrays cost.

In looking at today’s all-flash designs, every flash vendor is actively pursuing high performance storage. The approach they take is to maximize the bandwidth to each SSD. This means their systems must use PCIe attached SSDs addressed via the new NVMe protocol.

Storbyte chose to tackle the problem differently. Its initial target customers had continuous, real-time capture and analysis requirements as they routinely burned through the most highly regarded enterprise class SSDs in about seven months. Two things killed NAND flash in these environments: heat and writes.

To address this problem, Storbyte reduces heat and the number of writes that each flash module experiences by incorporating sixteen mSATA SSDs into each of its Eco*Flash SSDs. Further, Storbyte slows down the CPUs in each of the mSATA modules on its system and then wide-stripes writes across all of them. According to Storbyte, this only requires about 25% of the available CPU on each mSATA module, so they use less power. By also managing the writes, Storbyte simultaneously extends the life of each mSATA module on its Eco*Flash drives.

The end result is a low cost, high performance, very dense, power-efficient all-flash array built using flash cards that rely upon “older”, “slower”, consumer-grade mSATA flash memory modules that can drive 1.6 million IOPS on a 4U system. More notably, its systems cost about a quarter of that of competitive “high performance” all-flash arrays while packing more than a petabyte of raw flash memory capacity in 4U of rack space that use less power than almost any other all-flash array.

Wasabi. Greybeards in the storage world may recognize the Wasabi name as a provider of iSCSI SANs. Well, right name but different company. The new Wasabi recently came out of stealth mode as a low-cost, high-performance cloud storage provider. By low cost, we mean 1/5 the cost of Amazon’s slowest offering (Glacier), and by high performance, 6x the speed of Amazon’s highest performing S3 offering. In other words, you can have your low-cost cloud storage and eat it too.

What makes its offering so compelling is that it offers storage capacity at $4.99/TB per month. That’s it. No additional egress charges for every time you download files. No complicated monthly statements to decipher to figure out how much you are spending and where. No costly storage architects to hire to figure out how to tier data to optimize performance and costs. This translates into one fast cloud storage tier at a much lower cost than the Big 3 (Amazon AWS, Google Cloud, and Microsoft Azure.)

Granted, Wasabi is a cloud storage provider start-up so there is an element of buyer beware. However, it is privately owned and well-funded. It is experiencing explosive growth with over 1600 customers in just its few months of operation. It anticipates raising another round of funding. It already has data centers scattered throughout the United States and around the world with more scheduled to open.

Even so, past horror stories about cloud providers shutting their doors give every company pause about using a relatively unknown quantity to store their data. For these cases, Wasabi recommends that companies use its solution as a secondary cloud.

Its cloud offering is fully S3 compatible, and most companies want a cloud alternative anyway. In this instance, store copies of your data with both Amazon and Wasabi. Once stored, run any queries, production workloads, and the like against the Wasabi cloud. The Amazon egress charges that your company avoids by accessing its data on the Wasabi cloud will more than justify the risk of storing the data you routinely access on Wasabi. Then, in the unlikely event Wasabi does go out of business (not that it has any plans to do so), companies still have a copy of data with Amazon that they can fail back to.

This argument seems to resonate well with prospects. While I could not substantiate these claims, Wasabi said that they are seeing multi-petabyte deals coming their way on the NAB show floor. By using Wasabi instead of Amazon in the use case just described, these companies can save hundreds of thousands of dollars per month just by avoiding Amazon’s egress charges while mitigating their risk associated with using a start-up cloud provider such as Wasabi.
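
To see why avoided egress drives that math, the sketch below compares an assumed storage-plus-egress price model against an assumed flat per-TB model with no egress fee. The rates are illustrative placeholders, not quotes from Amazon, Wasabi, or anyone else.

    # Illustrative comparison: storage plus per-GB egress versus a flat rate
    # with no egress fee. All rates are assumptions, not provider quotes.

    stored_tb = 500       # data kept in the cloud
    retrieved_tb = 100    # data read back each month

    # Assumed storage-plus-egress pricing
    storage_per_gb = 0.023
    egress_per_gb = 0.09
    metered_cost = stored_tb * 1024 * storage_per_gb + retrieved_tb * 1024 * egress_per_gb

    # Assumed flat-rate pricing with no egress charges
    flat_per_tb = 4.99
    flat_cost = stored_tb * flat_per_tb

    print(f"Storage + egress model: ${metered_cost:,.0f}/month")
    print(f"Flat-rate model:        ${flat_cost:,.0f}/month")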

Editor’s Note: The spelling of Storbyte was corrected on 4/24.




Cool New Features and Offerings from AWS

Amazon has made significant progress in the last few years to dispel the notion that Amazon Web Services’ (AWS) primary purpose is as a repository for archives and backups. During this time, it has demonstrated time and time again that it is well suited to host even the most demanding production applications. However, what companies may still fail to realize is just how far AWS has moved beyond being a leading provider of cloud storage services. Here are some recent cool new features and offerings available from AWS that indicate how far it has come in positioning itself to host enterprise applications of any type as well as satisfy specific enterprise demands.

  • Take a tour of Amazon’s data centers – virtually. As organizations look to host their mission critical applications, sensitive data, and regulated data with third party providers such as Amazon, the individuals who make these types of decisions to outsource this data have a natural inclination to want to physically inspect the data centers where this data is kept.

While opening up one’s data center to visitors may sound good on the surface, parading every Tom, Dick, and Harry through a “secure site” potentially makes a secure site insecure. To meet this demand, Amazon now gives individuals the opportunity to take virtual tours of its data centers. Follow this link to take this tour.

  • Get the infrastructure features you need when you need them at the price you want. One of the most challenging and frustrating aspects of managing any application within a data center is adapting to the application’s changing infrastructure requirements. In traditional data centers, applications are assigned specified amounts of CPU, memory, and storage when they are initially created. However, the needs and behavior of the application begin to change almost as soon as it is deployed, and trying to manually adapt the infrastructure to these constantly changing requirements is, at best, a fool’s game.

Amazon Auto Scaling changes this paradigm. Users of this service can set target utilization levels for multiple resources to maintain optimal application performance and availability even as application workloads fluctuate. The beauty of this service is that it rewards users for using it, since it only charges them for the resources they use. In this way, users get better performance, optimize the capacity available to them, and use only the right resources at the right time to control costs (the scaling-policy sketch after this list illustrates the idea).

  • Amazon has its own Linux release. Watch out Red Hat, SUSE, and Ubuntu: there is a new version of Linux in town. While DCIG has not yet taken the opportunity to evaluate how Amazon Linux 2 compares to these existing, competing versions of Linux, perhaps what makes Amazon’s release of Linux most notable is that it runs both on-premise and in the Amazon cloud. Further, it makes one wonder just how far Amazon will develop this version of Linux and whether it will eventually compete head-to-head with the likes of VMware vSphere and Microsoft Hyper-V.
  • Corporate world: Meet Alexa. Many of us are already familiar with the commercials that promote a consumer version of Alexa that enables us to order groceries, get answers to questions, and automate certain tasks about the home. But now Alexa has grown up and is entering the corporate world. Using Alexa for Business, companies can begin to perform mundane, business-oriented tasks such as managing calendars, setting up meetings, reserving conference rooms, and dialing into meetings.



Veritas Delivering on its 360 Data Management Strategy While Performing a 180

Vendors first started bandying about the phrase “cloud data management” a year or so ago. While that phrase caught my attention, specifics as to what one should expect when acquiring a “cloud data management” solution remained nebulous at best. Fast forward to this week’s Veritas Vision 2017 and I finally encountered a vendor that was providing meaningful details as to what cloud data management encompasses while simultaneously performing a 180 behind the scenes.

Ever since I first heard the term cloud data management a year or so ago, I have loved it. If there was ever a marketing phrase that captured the essence of how every end-user secretly wants to manage all of its data, while the vendor or vendors promising to deliver it commit to absolutely nothing, this phrase nailed it. A vendor could shape and mold that definition however it wanted and know that end-users would listen to the pitch, even if deep down the users knew it was marketing spin at its best.

Of course, Veritas promptly blew up these preconceived notions of mine this week at Vision 2017. While at the event, Veritas provided specifics about its cloud data management strategy that rang true, if for no other reason than that they had a high degree of veracity to them. Sure, Veritas may refer to its current strategy as “360 Data Management.” But to my ears it sure sounded like someone had finally articulated, in a meaningful way, what cloud data management means and the way in which they could deliver on it.

Source: Veritas

The above graphic is the one that Veritas repeatedly rolls out when it discusses its 360 Data Management strategy. While Veritas is notable as one of the few vendors that can articulate the particulars of its data management strategy, more importantly that strategy has three components that currently make it more viable than many of its competitors’. Consider:

  1. Its existing product portfolio maps very neatly into its 360 Data Management strategy. One might argue (probably rightfully so) that Veritas derived its 360 Data Management strategy from the product portfolio it has built up over the years. However, many of these same critics have also contended that Veritas has been nothing but a company with an amalgamation of point products and no comprehensive vision. Well, guess what, the world changed over the past 12-24 months and it bent decidedly in the direction of software. Give Veritas some credit. It astutely recognized this shift, saw that its portfolio aligned damn well with how enterprises want to manage their data going forward, and had the chutzpah to craft a vision that it could deliver based upon the products it had in-house.
  2. It is not resting on its laurels. Last year when Veritas first announced its 360 Data Management strategy, I admit, I inwardly groaned a bit. In its first release, all it did was essentially mine the data in its own NetBackup catalogs. Hello, McFly! Veritas is only now thinking of this? To its credit, this past week it expanded the list of products that its Information Map connectors can access to more than 20. These include Microsoft Exchange, Microsoft SharePoint, and Google Cloud, among others. Again, I must applaud Veritas for its efforts on this front. While this news may not be momentous or earth-shattering, it visibly reflects a commitment to delivering on and expanding the viability of its 360 Data Management strategy beyond just NetBackup catalogs.
  3. The cloud plays very well in this strategy. Veritas knows that it plays in the enterprise space and it also knows that enterprises want to go to the cloud. While nowhere in its vision image above does it overtly say “cloud”, guess what? It doesn’t have to. It screams, “Cloud!” This is why many of its announcements at Veritas Vision around its CloudMobility, Information Map, NetBackup Catalyst, and other products talk about efficiently moving data to and from the cloud and then monitoring and managing it whether it resides on-premises, in the cloud, or both.

One other change it has made internally (and this is where the 180 initially comes in) is how it communicates this vision. When Veritas was part of Symantec, it stopped sharing its roadmap with current and prospective customers. In this area, Veritas has made a 180: customers who ask and sign a non-disclosure agreement (NDA) with Veritas can gain access to this roadmap.

Veritas may communicate that the only 180 it has made in the 18 months or so since it was spun out of Symantec is its new freedom to share its roadmap with current and prospective customers. While that may be true, the real 180 is that it has successfully put together a cohesive vision that articulates the value of the products in its portfolio in a context that enterprises are desperate to hear. Equally impressive, Veritas’ software-first focus positions it better than its competitors to enable enterprises to realize this ideal.

 




VMware Shows New Love for Public Clouds and Containers

In recent months and years, many have come to question VMware’s commitment to public clouds and containers used by enterprise data centers (EDCs). No one disputes that VMware has a solid footprint in EDCs and that it is in no immediate danger of being displaced. However, many have wondered how or if it will engage with public cloud providers such as Amazon as well as how it would address threats posed by Docker. At VMworld 2017, VMware showed new love for these two technologies that should help to alleviate these concerns.

Public cloud offerings such as those available from Amazon and container technologies such as what Docker offers have captured the fancy of enterprise organizations, and for good reasons. Public clouds provide an ideal means for organizations of all sizes to create practical hybrid private-public clouds for disaster recovery and failover. Similarly, container technologies expedite and simplify application testing and development, and they give organizations new options to deploy applications into production with even fewer resources and less overhead than virtual machines require.

However, the rapid adoption and growth of these two technologies in the last few years among enterprises had left VMware somewhat on the outside looking in. While VMware had its own public cloud offering, vCloud Air, it did not compete very well with the likes of Amazon Web Services (AWS) and Microsoft Azure as vCloud Air was primarily a virtualization platform. This feature gap probably led to VMware’s decision to create a strategic alliance with Amazon in October 2016 to run its vSphere-based cloud services on AWS and its subsequent decision in May 2017 to divest itself of vCloud Air altogether and sell it to OVH.

This strategic partnership between AWS and VMware became a reality at VMworld 2017 with the announcement of the initial availability of VMware Cloud on AWS. Using VMware Cloud Foundation, administrators can use a single interface to manage their vSphere deployments whether they reside locally or in Amazon’s cloud. The main caveat is that this service is currently only available in the AWS US West region. VMware expects to roll this program out to the rest of AWS’s regions worldwide in 2018.

VMware’s pricing for this offering is as follows:

Region: US West (Oregon)     On-Demand (hourly)   1 Year Reserved*   3 Year Reserved*
List Price ($ per host)      $8.3681              $51,987            $109,366
Effective Monthly**          $6,109               $4,332             $3,038
Savings Over On-Demand       n/a                  30%                50%

*Coming Soon. Pricing Option Available at Initial Availability: Redeem HPP or SPP credits for on-demand consumption model.
**Effective monthly pricing is shown to help you calculate the amount of money that a 1-year and 3-year term commitment will save you over on-demand pricing. When you purchase a term commitment, you are billed for every hour during the entire term that you select, regardless of whether the instances are running or not.

Source: VMware
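For readers who want to check the math, the short sketch below reproduces how the Effective Monthly row is derived: the on-demand figure assumes roughly 730 hours in a month, while the reserved figures simply spread the one-time term price across 12 or 36 months. The hour count is my assumption, and VMware’s published savings percentages are rounded.

```python
# Sketch of the arithmetic behind the "Effective Monthly" row above.
HOURS_PER_MONTH = 730          # ~8,760 hours per year / 12 months

on_demand_hourly = 8.3681      # list price per host, per hour
one_year_reserved = 51_987     # one-time list price per host
three_year_reserved = 109_366  # one-time list price per host

on_demand_monthly = on_demand_hourly * HOURS_PER_MONTH   # ~$6,109
one_year_monthly = one_year_reserved / 12                # ~$4,332
three_year_monthly = three_year_reserved / 36            # ~$3,038

print(f"On-demand:       ${on_demand_monthly:,.0f}/month per host")
print(f"1-year reserved: ${one_year_monthly:,.0f}/month "
      f"({1 - one_year_monthly / on_demand_monthly:.0%} savings)")
print(f"3-year reserved: ${three_year_monthly:,.0f}/month "
      f"({1 - three_year_monthly / on_demand_monthly:.0%} savings)")
```

Keep in mind that, per VMware’s footnote, a term commitment is billed for every hour of the term whether the hosts are running or not, so the savings only materialize if the capacity is genuinely needed for the full term.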

The other big news coming out of VMworld was its response to the threat/opportunity presented by container technologies. To tackle this issue, it partnered with Pivotal Software, Inc., and collaborated with Google Cloud to offer the new Pivotal Container Service (PKS) that combines the Pivotal Cloud Foundry and VMware’s software-defined data center infrastructure offerings.

Source: Pivotal Software

One of the major upsides of this offering is a defined, supported code level that enterprises can use for testing and development. Container technologies are experiencing a tremendous amount of change and innovation. While this may foretell great things for container platforms, this degree of innovation makes it difficult for enterprises to do predictable and meaningful application testing and development when the underlying code base is changing so swiftly.

With Google, Pivotal, and VMware partnering to deliver this platform, enterprises have access to a more predictable, stable, and supported container code base than what they might obtain independently. Further, they can have more confidence that the platform on which they test their code will work in VMware environments in the months and years to come.

VMware’s commitment to public cloud and container providers has been somewhat unclear over the past few years. But what VMware made clear at this year’s VMworld is that it no longer views cloud and container providers such as Amazon and Google as threats. Rather, it finally embraced what its customers already understood: VMware excels at virtualization, while Amazon and Google excel at cloud and container technologies. At VMworld 2017, it admitted to itself and the whole world that if you can’t beat them, join them, which was the right move for VMware and the customers it seeks to serve.




The End Game for Cloud Data Protection Appliances is Recovery


The phrase “Cloud Data Protection Appliance” is included in the name of DCIG’s forthcoming Buyer’s Guide, but the end game of each appliance covered in that Guide is squarely recovery. While successful recoveries have theoretically always been the objective of backup appliances, vendors too often only paid lip service to that ideal, as most of their new product features centered on providing better means for doing backups. Recent technology advancements have flipped this premise on its head.

Multiple reasons exist as to why these appliances can focus more fully on this end game of recovery, though five key enablers have emerged in the last few years. These include:

  1. The low price point of using disk as a backup target (as opposed to tape)
  2. The general availability of private and public cloud providers
  3. The use of deduplication to optimize storage capacity
  4. The widespread availability of snapshot technologies on hypervisors, operating systems, and storage arrays
  5. The widespread enterprise adoption of hypervisors such as VMware ESX and Microsoft Hyper-V, as well as the growing adoption of container technologies such as Docker and Kubernetes

While there are other contributing technologies, these five more so than the others give these appliances new freedom to deliver on backup’s original promise: successful recoveries. By way of example:

  • The backup appliance is used for local application recoveries. Over 80 percent of the appliances that DCIG evaluated now support the instant recovery of an application on a virtual machine on the appliance. This frees enterprises to start the recovery of the application on the appliance itself before moving the application to its primary host. Enterprises can even opt to recover and run the application on the appliance for an extended time for test and development or to simply host the application until the production physical machine on which the application resides recovers.
  • Application conversions and migrations. All these appliances support the backup of virtual machines and their recovery as virtual machines, but fully 88 percent of the software on these appliances supports the backup of a physical machine and its recovery to a virtual machine. This feature gives enterprises access to a tool that they can use to migrate applications from physical to virtual machines as a matter of course or in the event of a disaster. Further, 77 percent of them support recovery of virtual machines to physical machines. While that may seem counterintuitive, not every application runs well on virtual machines, and some may need functionality only found when running on a physical machine.
  • Location of backup data. By storing data in the cloud (even if only using the cloud as a backup target), enterprises know where their backup data is located. This is not trivial. Too many enterprises do not even know exactly what physical gear they have in their data center, much less where their data is located. While many enterprises still need to concern themselves with the various international regulations governing data’s physical location when storing data in the cloud, at least they know with which cloud provider they stored the data and how to access it. As anyone who uses or has used tape may recall, tracking down lost tapes, misplaced tapes, or even existing tapes can quickly become like trying to find a needle in a haystack. Even using disk is not without its challenges. Many enterprises may have to use multiple disk targets to store their backup data, and trying to identify exactly which disk device holds what data may not be as simple as it sounds.
  • Recovering in the cloud. This end game of recovering in the cloud, whether it is recovering a single file, a single application, or an entire data center, may appeal to enterprises more so than any other option on these appliances. The ability to virtually create and have access to a secondary site from which they can recover data or even perform a disaster recovery and run one or more applications removes a dark cloud of unspoken worry that hangs over many enterprises today. The fact that they can use that recovery in the cloud as a stepping stone to potentially hosting applications or their entire data center in the cloud is an added benefit.

Enterprises should be very clear about the opportunities that today’s cloud data protection appliances offer them. Near term, these appliances provide a means to easily connect to one or more cloud providers, get backup data offsite, and even recover data or applications in the cloud. But the long-term ramifications of using these appliances to store data in the cloud are much more significant. They represent the bridge to recovering and even potentially hosting more of their applications and data with one or more cloud providers. Organizations should therefore give this end game of recovery specific attention both when they choose a cloud data protection appliance and when they choose the cloud provider(s) to which the appliance connects.

To receive regular updates when blog entries like this one post to DCIG’s website, follow this link to subscribe to DCIG’s newsletter.




A Business Case for ‘Doing Something’ about File Data Management

The business case for organizations with petabytes of file data under management to classify that data and then place it across multiple tiers of storage has never been greater. By distributing this data across disk, flash, tape, and the cloud, they stand to realize significant cost savings. The catch is finding a cost-effective solution that makes it easier to administer and manage file data than simply storing it all on flash storage. This is where a solution such as what Quantum now offers comes into play.

Organizations love the idea of spending less money on primary storage – especially when they have multiple petabytes of file data residing on flash storage. Further, most organizations readily acknowledge that much of the file data residing on flash storage could reside on lower cost, lower performing media such as disk, the cloud, or even tape with minimal to no impact on business operations, provided that infrequently or never accessed files can still be retrieved relatively quickly and easily if required.

The problem they encounter is that the “cure” of file data management is worse than the “disease” of inaction. Their concerns focus on the file data management solution itself. Specifically, can they easily implement and then effectively use it in such a way that they derive value from it in both the short and long term? This uncertainty about implementing a file data management solution that is easier than the status quo of “doing nothing” prompts organizations to do exactly that: nothing.

Quantum, in partnership with DataFrameworks and its ClarityNow! software, gives companies new motivation to act. Other data management and archival solutions give companies the various parts and pieces that they need to manage their file data. However, they leave it up to the customer and their integrators and/or consultants to implement it.

Quantum and DataFrameworks differ in that they offer the integrated, turnkey, end-to-end solution that organizations need in order to proceed with confidence. Quantum has integrated DataFrameworks’ ClarityNow! software with its Xcellis scale-out storage and Artico archive gateway products to put companies on a fast track to effective file data management.

Source: Quantum

The Xcellis scale-out storage product was added to the Quantum product portfolio in 2015. Yet while the product is relatively new, the technology it uses is not – it bundles a server and storage with Quantum’s StorNext advanced data management software which has existed for years. Quantum packages it with its existing storage products to create an appliance-based solution for faster, more seamless deployments in organizations. Then, by giving organizations the option to include the DataFrameworks ClarityNow! software as part of the appliance, organizations get, in one fell swoop, the file data classification and management features they need in an appliance-based offering.

To give organizations a full range of cost-effective storage options, Quantum enables them to store data to the cloud, other disk storage arrays, and/or tape. As individuals store file data on the Xcellis scale-out storage and files age and/or become inactive, the ClarityNow! software recognizes these traits and others to proactively copy and/or move files to another storage tier. Alternatively, the Artico archive gateway can be used in a NAS environment to move files onto the appropriate tier or tiers of storage based on preset policies.
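To illustrate the general idea behind age-based tiering policies, here is a conceptual sketch only; it is not ClarityNow! code or its actual policy engine, and the paths and idle-day threshold are assumptions chosen for illustration.

```python
# Conceptual sketch of an age-based tiering policy (not ClarityNow! code):
# flag files untouched for N days and relocate them to a colder storage tier.
import os
import time
import shutil

HOT_TIER = "/mnt/xcellis/projects"      # hypothetical primary (flash/disk) path
COLD_TIER = "/mnt/archive/projects"     # hypothetical archive or cloud-gateway target
MAX_IDLE_DAYS = 180                     # assumed policy threshold

cutoff = time.time() - MAX_IDLE_DAYS * 86400

for root, _dirs, files in os.walk(HOT_TIER):
    for name in files:
        src = os.path.join(root, name)
        # Use last-access time as a rough proxy for whether a file is still active.
        if os.stat(src).st_atime < cutoff:
            dst = os.path.join(COLD_TIER, os.path.relpath(src, HOT_TIER))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)       # a real tiering engine would leave a stub or link behind
```

A production solution layers policy management, cataloging, and transparent recall on top of this basic loop, which is precisely the work organizations prefer to buy rather than build.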

It should be noted that this solution particularly makes sense in environments that have at least a few petabytes of data and potentially even tens or hundreds of petabytes of file data under management. It is only when an organization has this amount of file data under management that it makes sense to proceed with a robust file data management solution backed by enterprise IT infrastructure such as what Quantum offers.

It is time for organizations that have seen their file data stores swell to petabyte levels, and that are still doing nothing, to re-examine that position. Quantum, with its Xcellis scale-out storage solution and its integration with DataFrameworks ClarityNow!, has taken significant strides to make it easier than ever for organizations to deploy the type of file data management solution they need and derive the value they expect. In so doing, organizations can finally see the benefits of “doing something” to bring the costs and headaches associated with file data management under control as opposed to simply “doing nothing.”

To subscribe and receive regular updates like this from DCIG, follow this link to subscribe to DCIG’s newsletter.

Note: This blog entry was originally published on June 28, 2017.



Exercise Caution Before Making Any Assumptions about Cloud Data Protection Products

There are two assumptions that IT professionals need to exercise caution before making when evaluating cloud data protection products. One is to assume that all products share some feature or features in common. The other is to assume that one product possesses some feature or characteristic that no other product on the market offers. As DCIG’s recent research into cloud data protection products shows, one cannot safely make either of these assumptions, even on features such as deduplication, encryption, and replication that one might expect to be universally adopted by these products in comparable ways.

The feature that best illustrates this point is deduplication. One would almost surely think that after the emphasis put on deduplication over the past decade, every product would now support deduplication. That conclusion would be true. But how each product implements deduplication can vary greatly. For example:

  1. Block-level deduplication is still not universally adopted. A few products still only deduplicate at the file level. (A minimal sketch of the block-level approach appears after this list.)
  2. In-line deduplication is also not universally available. Further, post-process deduplication is becoming more readily available as organizations want to do more with their copies of data after they back it up.
  3. Only about 2 in 5 products offer the flexibility to recognize the data in backup streams and apply the most appropriate deduplication algorithm.

Source: DCIG; 175 products
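As a point of reference for how block-level deduplication differs from file-level deduplication, here is a minimal, generic sketch of fixed-block deduplication. It is illustrative only and does not represent how any particular product DCIG evaluated implements the feature; shipping products typically add variable or content-defined chunking, compression, and far more scalable indexes.

```python
# Minimal sketch of fixed-block deduplication: split a stream into fixed-size
# blocks, hash each block, and store a block only the first time it is seen.
import hashlib
import io

BLOCK_SIZE = 4096  # bytes per fixed-size block

def deduplicate(stream, store):
    """Return the list of block hashes (the 'recipe') needed to rebuild the stream."""
    recipe = []
    while True:
        block = stream.read(BLOCK_SIZE)
        if not block:
            break
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:        # only new, unique blocks consume capacity
            store[digest] = block
        recipe.append(digest)
    return recipe

def rebuild(recipe, store):
    """Reassemble the original data from its recipe and the block store."""
    return b"".join(store[digest] for digest in recipe)

# Backing up two nearly identical streams stores each unique block only once.
store = {}
first = deduplicate(io.BytesIO(b"A" * 8192 + b"B" * 4096), store)
second = deduplicate(io.BytesIO(b"A" * 8192 + b"C" * 4096), store)
assert rebuild(first, store) == b"A" * 8192 + b"B" * 4096
print(f"{len(first) + len(second)} blocks referenced, {len(store)} stored")
```

A file-level approach, by contrast, only skips files that are identical in their entirety, which is why the granularity of deduplication matters so much to capacity savings.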

Deduplication is not the only feature that differs between these products. As organizations look to centralize data protection in their infrastructure and then keep a copy of data offsite with cloud providers, features such as encryption and replication have taken on greater importance in these products and are more readily available than ever before. However, here again one cannot assume that all cloud data protection products support each of these features.

On the replication side, DCIG found this feature to be universally supported across the products it evaluated. Further, these products all implement the option for organizations to schedule replication to occur at certain times (every five minutes, on the hour, etc.).

However, when organizations get beyond this baseline level of replication, differences again immediately appear. For instance, just over 75 percent of the products perform continuous data replication (replicating data immediately after the write occurs at the primary site), while less than 20 percent support synchronous replication.

Organizations also need to pay attention to the fan-in and fan-out options that these products provide. While all support 1:1 replication configurations, only 75 percent of the products support fan-in replication (N:1) and only 71 percent support fan-out replication (1:N). The number of products that support replication across multiple hops drops even further – down to less than 40 percent.

Source: DCIG; 176 products

Encryption is another feature that has become widely used in recent years as organizations have sought to centralize backup storage in their data centers as well as store data with cloud providers. In support of these initiatives, over 95 percent of the products support AES 256-bit encryption for data at rest, while nearly 80 percent of them support this level of encryption for data in flight.
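For readers who want to see what AES-256 protection of data at rest amounts to in practice, the snippet below is a generic sketch using the Python cryptography library’s AES-GCM implementation. It is not drawn from any of the evaluated products, and in-flight protection is typically handled separately (for example, by TLS).

```python
# Generic sketch of AES-256 encryption for data at rest using AES-GCM
# (authenticated encryption). Not taken from any evaluated product.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key; keep it in a KMS or vault
aesgcm = AESGCM(key)

def encrypt_block(plaintext: bytes) -> bytes:
    nonce = os.urandom(12)                  # unique 96-bit nonce per encryption
    return nonce + aesgcm.encrypt(nonce, plaintext, None)

def decrypt_block(blob: bytes) -> bytes:
    nonce, ciphertext = blob[:12], blob[12:]
    return aesgcm.decrypt(nonce, ciphertext, None)

sealed = encrypt_block(b"backup data destined for the cloud")
assert decrypt_block(sealed) == b"backup data destined for the cloud"
```

The practical differentiator between products is less the cipher itself than where the keys live and who controls them, which is worth probing with any vendor.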

Deduplication, encryption, and replication are features that organizations of almost any size almost universally expect to find on any cloud data protection product they are considering for their environment. Further, as DCIG’s research into these products reveals, nearly all of them support these features in some capacity. However, they certainly do not give organizations the same number of options to deploy and leverage those features, and it is these differences in the breadth of feature functionality that organizations need to be keenly aware of as they make their buying decisions.




A Full Embrace of the Cloud Has Occurred … Now the Challenge is to Successfully Get and Stay There

Ever since I got my first job in IT in the mid-1990s, everyone has used a cloud in some form. Whether they referred to it as outsourcing, virtualization, central IT, or in some other way, the cloud existed and grew, but it did little to stem the adoption of distributed computing. Yet at some point over the past few years, the parallel growth of these two approaches stopped and the cloud forged ahead. This shift indicates that companies have now fully embraced the cloud but remain unclear about how best and how soon to transition their IT infrastructure to the cloud and then manage it once it is there.

One of my first jobs in IT was as a system administrator at a police department in Kansas. During my time there, I was intimately involved in a project that involved setting up a cloud that enabled the department, along with other police departments throughout the state, to communicate with state agencies. Setting this cloud up would enable our department along with others to run background checks as well as submit daily crime reports. While we did not at the time refer to this statewide network as a cloud, it did provide a means to send and receive data and centrally store it.

However, the data that the police department sent, received, and stored with various state agencies represented only a fraction of the total data that the department generated and used daily. There were also photos, files, Excel spreadsheets, accident and incident reports, and many other types of data that officers and civilians in the police department needed and used to perform their daily duties. Since the state agencies did not need this data it was up to the police department to manage and house it.

This example is a microcosm of what happened everywhere. Private and public organizations would choose to store some data locally and only store certain data with cloud providers which, in the police department’s case, were the systems provided by the various state agencies.

The big change that has occurred this decade and particularly over the past two years is that the need to host any applications or data on-premise has essentially vanished. This change has freed organizations of all sizes to fully embrace the cloud by hosting most if not all internal application processing and data storage with cloud providers.

Technology now largely exists at the application, compute, network, operating system, security, and storage layers that makes it more cost-effective and efficient to host all applications and data with cloud providers than to continue hosting them on-premises. Further, the plethora of powerful endpoint mobile devices available as phones, desktops, tablets, and/or laptops, along with ever larger network pipes, makes it easier than ever to access and manipulate centrally stored data anywhere at any time.

Organizations must accept … and probably largely have … that the technologies exist in the cloud to support even their most demanding applications. Further, these technologies are often more mature, cost-effective, and efficient than what they possess in-house.

The challenges before them are to now identify and execute upon the following:

  1. Identify the right cloud provider or providers for them
  2. Securely and successfully migrate their existing applications and data to the cloud
  3. Manage their applications and data once hosted in the cloud

These objectives represent a fundamental shift in how organizations think and make decisions about their applications and data, and the IT infrastructure that supports them. This “cloud-first” view means that organizations must assume all new applications and data will end up in whole or in part in the cloud either initially or over time. As such, the new questions they must ask and answer are:

  • How soon should their applications and data end up in the cloud?
  • How much of their data should they put in the cloud versus retaining a copy onsite?
  • If they choose not to put an application or data in the cloud, why not?

Organizations have officially embraced the cloud and what it offers, as evidenced by the “cloud-first” policies that many have implemented that require them to deploy all new applications and data with cloud providers. However, migrating existing applications and data to public, private, or hybrid clouds, successfully managing all migrated applications and data in the cloud, and determining when to bring cloud-based applications and data back out of the cloud becomes more complicated.

Helping organizations understand these challenges and make the right choices will become a point of emphasis in DCIG’s blogs, research, and publications going forward, so that they can successfully migrate their data to the cloud and then have a good experience once they get there.




BackupAssist 10.0 Brings Welcomed Flexibility for Cloud Backup to Windows Shops

Today’s backup mantra seems to be backup to the cloud or bust! But backup to the cloud is more than just redirecting backup streams from a local file share to a file share presented by a cloud storage provider and clicking the “Start” button. Organizations must examine to which cloud storage providers they can send their data as well as how their backup software packages and sends the data to the cloud. BackupAssist 10.0 answers many of these tough questions about cloud data protection that businesses face while providing them some welcomed flexibility in their choice of cloud storage providers.

Recently I was introduced to BackupAssist, a backup software company that hails from Australia, and had the opportunity to speak with its founder and CEO, Linus Chang, about BackupAssist’s 10.0 release. The big news in this release was BackupAssist’s introduction of cloud-independent backup that gives organizations the freedom to choose any cloud storage provider to securely store their Windows backup data.

The flexibility to choose from multiple cloud storage providers as a target when doing backup in today’s IT environment has become almost a prerequisite. Organizations increasingly want the ability to choose between one or more cloud storage providers for cost and redundancy reasons.

Further, availability, performance, reliability, and support can vary widely by cloud storage provider. These features may even vary by the region of the country in which an organization resides as large cloud storage providers usually have multiple data centers located in different regions of the country and world. This can result in organizations having very different types of backup and recovery experiences depending upon which cloud storage provider they use and the data center to which they send their data.

These factors and others make it imperative that today’s backup software give organizations more freedom of choice in cloud storage providers, which is exactly what BackupAssist 10.0 provides. By giving organizations the freedom to choose from Amazon S3 and Microsoft Azure among others, they can select the “best” cloud storage provider for them. However, since the factors that constitute the “best” cloud storage provider can and probably will change over time, BackupAssist 10.0 gives organizations the flexibility to adapt as the situation warrants.

Source: BackupAssist

To ensure organizations experience success when they back up to the cloud, BackupAssist has also introduced three other cloud-specific features, which include:

  1. Compresses and deduplicates data. Capacity usage and network bandwidth consumption are the two primary factors that drive up cloud storage costs. By introducing compression and deduplication in this release, BackupAssist 10.0 helps organizations better keep these variable costs associated with using cloud storage under control.
  2. Insulated encryption. Every so often stories leak out about how government agencies subpoena cloud providers and ask for the data of their clients. Using this feature, organizations can fully encrypt their backup data to make it inaccessible to anyone who does not hold the encryption key.
  3. Resilient transfers. Nothing is worse than having a backup two-thirds or three-quarters complete only to have a hiccup in the network connection or on the server itself interrupt the backup and force one to restart it from the beginning. Minimally, this is annoying and disruptive to business operations. Over time, restarting backup jobs and resending the same backup data to the cloud can run up networking and storage costs. BackupAssist 10.0 ensures that if a backup job gets interrupted, it can resume from the point where it stopped while only sending the data still required to complete the backup. (A generic sketch of this resume-from-checkpoint approach appears after this list.)
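To make the resilient-transfer concept concrete, here is a generic sketch of a resume-from-checkpoint upload loop. It is not BackupAssist’s implementation; the upload_chunk call and the checkpoint file are placeholders for whatever API the chosen cloud storage provider exposes.

```python
# Generic sketch of a resumable, chunked upload (not BackupAssist's code).
# A checkpoint file records how many bytes have been confirmed, so an
# interrupted job resends only the remaining chunks.
import os
import json

CHUNK_SIZE = 8 * 1024 * 1024  # 8 MiB per chunk

def upload_chunk(data: bytes, offset: int) -> None:
    """Placeholder for the provider-specific upload call (S3, Azure Blob, etc.)."""
    raise NotImplementedError

def resumable_upload(path: str, checkpoint: str) -> None:
    offset = 0
    if os.path.exists(checkpoint):                      # resume where the last run stopped
        offset = json.load(open(checkpoint))["offset"]

    with open(path, "rb") as backup:
        backup.seek(offset)
        while chunk := backup.read(CHUNK_SIZE):
            upload_chunk(chunk, offset)                 # only unsent data crosses the wire
            offset += len(chunk)
            json.dump({"offset": offset}, open(checkpoint, "w"))

    os.remove(checkpoint)                               # job completed; clear the checkpoint
```

The design choice that matters is recording progress only after each chunk is confirmed, so a network hiccup costs at most one chunk of rework rather than the entire backup.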

In its 10.0 release, BackupAssist makes needed enhancements to ensure it remains a viable, cost-effective backup solution for businesses wishing to protect their applications running on Windows Server. While these businesses should keep some copies of data on local disk for faster backups and recoveries, the value of efficiently and cost-effectively keeping copies of their data offsite with cloud storage providers cannot be ignored. The 10.0 version of BackupAssist gives them the versatility to store data locally, in the cloud, or both, along with new flexibility to choose, at any time, the cloud storage provider that most closely aligns with their business and technical requirements.




DCIG Quick Look: Acquisition of SimpliVity Fits Right into HPE’s Broader Hybrid IT Strategy

Last week HPE announced its acquisition of SimpliVity, a provider of enterprise hyper-converged infrastructure solutions. While that announcement certainly made news in the IT industry, the broader implications of this acquisition signaled that enterprise IT providers such as HPE could no longer sit on the sidelines and merely be content to partner with providers such as SimpliVity as hyper-converged solutions rapidly become a growing percentage of enterprise IT. If HPE wanted its fair share of this market, it was imperative that it act sooner rather than later to ensure it remained a leading player in this rapidly growing market.

The good news is that in acquiring SimpliVity, HPE chose a product that aligns well with its existing hybrid IT strategy.

HPE Hybrid IT Strategy

The challenge with this strategy as defined is that HPE’s existing Hyper Converged 380 simply did not have the chops to scale into the enterprise. It was a great solution for the midmarket environments for which it was intended, offering fast deployments, simplified ongoing management, and right-sizing for these environments. But when looking at enterprise IT environments that demand greater scalability and data management services, the HPE HC 380 did not quite fit the bill.

Enter HPE’s acquisition of SimpliVity.

SimpliVity stormed onto the hyper-converged infrastructure market a few years ago. While it certainly had success in the midmarket with its products, it more importantly also had success in the enterprise market which many of its competitors failed to breach. I was even present at some of its analyst conferences where end-users openly talked about leveraging SimpliVity to displace enterprise level converged infrastructure solutions. Heady times indeed, for a company so new to the market.

Clearly HPE took notice, probably in part because I suspect some of these displacements involved SimpliVity cutting in on its turf. However, an enterprise caliber hyper-converged infrastructure offering was a gap that HPE needed to fill in its enterprise product portfolio anyway. In this case, if you can’t beat ‘em, buy ‘em, which is exactly what HPE is in the process of doing.

In the analyst briefing that I attended last week, it was obvious that HPE had a clear vision of how it intended to merge SimpliVity into its portfolio of offerings. In talking about how SimpliVity would fit into its hybrid IT strategy, HPE explained how enterprise IT would consist of the traditional IT stack (servers, storage, and networking), hyper-converged infrastructure, and the cloud.

Yet that emerging enterprise IT framework did not dissuade HPE. Rather, HPE recognized that it could cleanly incorporate SimpliVity into this IT architecture by creating what it terms “composable workloads” that encompass its existing 3PAR StoreServ and cloud platforms as well as the new SimpliVity platform. Using its tools (which I assume are still to be developed), application workloads can dynamically be placed on any of these platforms and then moved if and when needed.

Further adding to SimpliVity’s appeal was the availability of its enterprise data management services. Better still, HPE uncovered that these services worked and were in use. In researching SimpliVity prior to the acquisition, HPE found that many if not most of SimpliVity’s existing customers used its compression and deduplication services as well as its built-in data protection features. In other words, SimpliVity did more than “check the box” that it offered these features. It delivered them in a manner that companies could confidently deploy and use in their environments.

HPE admitted that SimpliVity’s offering still had some work to do in the areas of ease of use and speed of deployment to match enterprise expectations in those areas. But considering that HPE has probably done as much or more work than almost any other enterprise provider to deliver on these types of expectations with its 3PAR and StoreOnce lines of storage, SimpliVity should experience great success near and long term in its forthcoming home at HPE.
