Seven Basic Questions to Screen Cloud Storage Offerings

Cloud storage is often the first way companies adopt the cloud. They use it to archive data, serve as a backup target, share files, or retain data long term. These approaches offer a low-risk means for companies to get started in the cloud. However, with more cloud storage offerings available than ever, companies need to ask and answer more pointed questions to screen them.

50+ Cloud Storage Offerings

As recently as a few years ago, one could count on one hand the number of cloud storage offerings. Even now, companies may find themselves hard pressed to name more than five or six of them.

The truth of the matter is that companies have more than 50 cloud storage offerings from which to choose. These offerings range from general-purpose cloud providers such as Amazon, Microsoft, and Google to specialized providers such as Degoo, hubiC, Jottacloud, and Wasabi.

The challenge companies now face is, “How do I screen these cloud storage offerings to make the right choice for me?” Making the best selection from these multiple cloud storage offerings starts by first asking and answering basic questions about your requirements.

Seven Basic Questions

Seven basic questions you should ask and answer to screen these offerings include:

  1. What type or types of data will you store in their cloud? If you only have one type of data (backups, files, or photos) to store in the cloud, a specialized cloud storage provider may best meet your needs. If you have multiple types of data (archival, backups, block, file, and/or object) to store in the cloud, a general-purpose cloud storage provider may better fit your requirements.
  2. How much data will you store in the cloud? Storing a few GBs or even a few hundred GBs of data in the cloud may not incur significant cloud storage costs. When storing hundreds of terabytes or petabytes of data in the cloud, a cloud offering with multiple tiers of storage and pricing may be to your advantage (see the cost sketch after this list).
  3. How much time do you have to move the data to the cloud? Moving a few GBs of data to the cloud may not take very long. Moving terabytes of data (or more) may take days, weeks or even months. In these circumstances, look for cloud providers that offer tools to ingest data at your site that they can securely truck back to their site.
  4. How much time do you have to manage the cloud? No one likes to think about managing data in the cloud. Cloud providers count on this inaction as this is when cloud storage costs add up. If you have no plans to optimize data placement or the data management costs outweigh the benefits, identify a cloud storage provider that either does this work for you or makes its storage so simple to use you do not have to manage it.
  5. How often will you retrieve data from the cloud and how much will you retrieve? If you expect to retrieve a lot of data from the cloud, identify if the cloud provider charges egress (data exit) fees and how much it charges.
  6. What type of performance do you need? Storing data on lower cost, lower tiers of storage may sound great until you need that data. If waiting multiple days to retrieve it could impact your business, keep your data on the higher performing storage tiers.
  7. What type of availability do you need? Check with your cloud storage provider to verify what uptime guarantees it provides for the region where your data resides.
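
To put questions 2 and 5 in concrete terms, the back-of-the-envelope sketch below estimates a monthly bill from stored capacity, tier pricing, and egress. All of the per-GB rates are illustrative assumptions, not any provider's actual price list.

```python
# Rough monthly cloud storage cost estimator.
# All rates are illustrative assumptions, not real provider pricing.

TIER_PRICE_PER_GB = {          # $/GB-month (hypothetical)
    "standard": 0.023,
    "infrequent": 0.0125,
    "archive": 0.004,
}
EGRESS_PRICE_PER_GB = 0.09     # $/GB retrieved (hypothetical)


def monthly_cost(gb_by_tier: dict[str, float], egress_gb: float) -> float:
    """Return the estimated monthly cost for stored capacity plus egress."""
    storage = sum(TIER_PRICE_PER_GB[tier] * gb for tier, gb in gb_by_tier.items())
    return storage + EGRESS_PRICE_PER_GB * egress_gb


if __name__ == "__main__":
    # 200 TB split across tiers, with 5 TB retrieved during the month
    estimate = monthly_cost(
        {"standard": 20_000, "infrequent": 80_000, "archive": 100_000},
        egress_gb=5_000,
    )
    print(f"Estimated monthly cost: ${estimate:,.2f}")
```

Even a rough model like this makes it clear how much a multi-tier offering or a provider with no egress fees can change the answers to questions 2 and 5.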

A Good Baseline

There are many more questions that companies can and should ask to select the right cloud storage offering for them. However, these seven basic questions should provide the baseline set of information companies need to screen any cloud storage offering.

If your company needs help in doing a competitive assessment of cloud storage providers, DCIG can help. You can contact DCIG by filling out this form on DCIG’s website or emailing us.




The Journey to the Cloud Begins by Speaking the Same Cloud Language

Every company wants to make the right cloud decision for its business. As a result, more companies than ever ask their vendors to describe the cloud capabilities of their products. However, as you ask your vendors cloud questions, verify that you both use the same cloud language. You may find that how you and your vendors define the cloud differs significantly, which can quickly result in communication breakdowns.

Technology providers feel intense pressure to remain relevant in a rapidly changing space. As more companies adopt the cloud, they want to make sure they are part of the conversation. As such, one should have no problem identifying products that support the cloud. However, some vendors take more liberties than others in how they apply the term cloud to describe their products' features.

The Language of Cloud

For better or worse, the term “cloud” can mean almost anything. A simple definition for the cloud implies the ability to access needed resources over a computer network. Hence, any product that claims “cloud support” may mean only that it can access resources over a computer network, regardless of where they reside.

Pictures of blue skies and fluffy clouds that often accompany vendor descriptions of “cloud support” for their products do not help clarify the situation. These pictures can lead one to assume that a product provides more robust cloud support than it truly delivers.

By way of example, almost every enterprise backup product claims support for the cloud. However, the breadth and depth of cloud support that each one offers varies widely. To assess the true scope of each one’s cloud support, one first needs to understand the language they use to describe the cloud.

For instance, if you plan to use the cloud for long-term backup retention, most enterprise backup products connect to a cloud. The key here is to be very clear about what they mean by “connectivity to the cloud.” Key questions you should ask include:

  • Does the product’s cloud support include connectivity to any large general-purpose cloud providers such as AWS, Azure, or Google?
  • Does the product need to work with all three of these cloud providers?
  • Does its support include any S3-compliant public cloud? (See the sketch after this list.)
  • Does the product support more cost-effective public cloud options such as Wasabi?
  • Does its cloud support refer to a purpose-built cloud for backup and DR such as Unitrends offers?
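
One practical way to test what a vendor means by “S3-compliant” is to point a standard S3 client at the provider's endpoint. The sketch below uses Python and boto3; the endpoint URL, credentials, and bucket name are placeholders (Wasabi's endpoint appears only as an example of an S3-compatible target), so treat this as a connectivity check under those assumptions rather than a vendor-specific recipe.

```python
# Minimal check that a provider speaks the S3 API: list buckets, then write one object.
# Endpoint, credentials, and bucket name are placeholders for your own values.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.wasabisys.com",   # example S3-compatible endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

print([b["Name"] for b in s3.list_buckets()["Buckets"]])

s3.put_object(
    Bucket="my-backup-bucket",                 # hypothetical bucket
    Key="backups/connectivity-test.txt",
    Body=b"connectivity test",
)
```

If a product claims S3 support, it should work against this kind of generic endpoint with nothing more than a URL and credentials.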

Getting questions like these answered will provide the insight you need to determine whether the cloud capabilities of a vendor's products match your requirements. That understanding can only occur if you both first speak the same language.

Multi-cloud can be Just as Cloudy

As companies connect to the cloud, many find they want the option to connect to multiple different clouds. This option gives them more power to negotiate prices as well as flexibility to deploy resources where they run best. But here again, one needs to drill down on exactly how a product delivers on its multi-cloud support.

Key questions that you must ask when evaluating a product’s multi-cloud capabilities include:

  1. To which public clouds can the product connect, if any?
  2. To which private clouds can it connect, if any?
  3. Can it connect to and use multiple clouds simultaneously?
  4. Can it connect to and use public and private clouds at the same time?
  5. Does it offer any features beyond just connectivity to manage the cloud’s features?

The Journey to the Cloud Begins by Speaking the Same Cloud Language

Companies today want more than ever to start their journey to the cloud. To begin that journey, you and your vendors must first speak the same language. Start by defining what the cloud means to you, or what you think it means to you.

This may even require you to engage some of your preferred vendors or partners to help you draft that definition. Regardless of how you arrive at your definition of the cloud, the sooner you do, the sooner you can ask the right questions, understand the answers given to you, and get the clarity you need to choose products that support the cloud in the way that you expect.




Four Ways to Achieve Quick Wins in the Cloud

More companies than ever want to use the cloud as part of their overall IT strategy. To do so, they often look to achieve some quick wins in the cloud to demonstrate its value. Achieving these quick wins also gives them practical hands-on experience in the cloud. Incorporating the cloud into your backup and disaster recovery (DR) processes may serve as the best way to get these wins.

Any company hoping to get some quick wins in the cloud should first define what a “win” looks like. For the purposes of this blog entry, a win consists of:

  • Fast, easy deployments of cloud resources
  • Minimal IT staff involvement
  • Improved application processes or workflows
  • The same or lower costs

Here are four ways for companies to achieve quick wins in the cloud through their backup and DR processes:

#1 – Take a Non-disruptive Approach

When possible, leverage your company’s existing backup infrastructure to store copies of data in the cloud. Nearly all enterprise backup products, whether backup software or deduplicating backup appliances, interface with public clouds. These products can store backup data in the cloud without disrupting your existing environment.

Using these products, companies can get exposure to the public cloud’s core compute and storage services. These are the cloud services companies are most apt to use initially and represent the most mature of the public cloud offerings.

#2 – Deduplicate Backup Data Whenever Possible

Public cloud providers charge monthly for every GB of data that companies store in their respective clouds. The more data that your company stores in the cloud, the higher these charges become.

Deduplicating data reduces the amount of data that your company stores in the cloud. In so doing, it also helps to control and reduce your company’s monthly cloud storage costs.
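
A quick worked example shows how much this one factor moves the monthly bill. The 20:1 reduction ratio and the $0.023/GB-month rate below are assumptions for illustration, not measured values.

```python
# Illustrative only: how a deduplication ratio changes monthly cloud storage spend.
PRICE_PER_GB_MONTH = 0.023        # hypothetical standard-tier rate
logical_backup_gb = 100_000       # 100 TB of logical backup data
dedupe_ratio = 20                 # assumed 20:1 reduction

raw_cost = logical_backup_gb * PRICE_PER_GB_MONTH
deduped_cost = (logical_backup_gb / dedupe_ratio) * PRICE_PER_GB_MONTH
print(f"Without deduplication: ${raw_cost:,.0f}/month")
print(f"With 20:1 deduplication: ${deduped_cost:,.0f}/month")
```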

#3 – Tier Your Backup Data

Many public cloud storage providers offer multiple tiers of storage. The default storage tier they offer does not, however, represent their most cost-effective option; it is designed for data that needs high levels of availability and moderate levels of performance.

Backup data tends to only need these features for the first 24 – 72 hours after it is backed up. After that, companies can often move it to lower cost tiers of cloud storage. Note that these lower cost tiers of storage come with decreasing levels of availability and performance. While many backups (over 99%) fall into this category, check to see if any application recoveries occurred that required data over three days old before moving it to lower tiers of storage.
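
On clouds that expose S3-style lifecycle rules, this kind of tiering can be automated rather than handled by hand. The sketch below shows one way to do it with boto3 against a hypothetical bucket and prefix; the three-day and one-year thresholds are examples you would tune to your own recovery history, and other clouds offer comparable lifecycle mechanisms under different names.

```python
# Sketch: automatically move aging backup objects to colder S3 storage classes.
# Bucket name, prefix, and day thresholds are hypothetical examples.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-backup-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-aging-backups",
                "Filter": {"Prefix": "backups/"},
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 3, "StorageClass": "GLACIER"},         # after the 72-hour window
                    {"Days": 365, "StorageClass": "DEEP_ARCHIVE"},  # long-term retention
                ],
            }
        ]
    },
)
```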

#4 – Actively Manage Your Cloud Backup Environment

Applications and data residing in the cloud differ from your production environment in one important way: every GB of data consumed and every hour that an application runs incurs a cost. This differs from on-premises environments, where all existing hardware represents a sunk cost. As such, there is less incentive to actively manage existing hardware resources since any resources recouped only represent a “soft” savings.

This does not apply in the cloud. Proactively managing and conserving cloud resources translates into real savings. To realize these savings, companies need to look to products such as Quest Foglight. It helps them track where their backup data resides in the cloud and identify the application processes they have running. This, in turn, helps them manage and control their cloud costs.

Companies rightfully want to adopt the cloud for the many benefits that it offers and, ideally, achieve a quick win in the process. Storing backup data in the cloud and moving DR processes to the cloud provides the quick win in the cloud that many companies initially seek. As they do so, they should also ensure they put the appropriate processes and software in place to manage and control their usage of cloud resources.




Lenovo TruScale and Nutanix Enterprise Cloud Accelerate Enterprise Transformation

Digital transformation is an enterprise imperative. Enabling that transformation is the focus of Lenovo’s TruScale data center infrastructure services. The combination of TruScale infrastructure services and Nutanix application services creates a powerful accelerant for enterprise transformation.

Cloud is the Transformation Trigger

Many enterprises are seeking to go to the cloud, or at least to gain the benefits associated with the cloud. These benefits include:

  • pay-as-you-go operational costs instead of large capital outlays
  • agility to rapidly deploy new applications
  • flexibility to adapt to changing business requirements

For many IT departments, the trigger for serious consideration of a move to the cloud is when the CFO no longer wants to approve IT acquisitions. Unfortunately, the journey to the cloud often comes with a loss of control over both costs and data assets. Thus many enterprise IT leaders are seeking a path to cloud benefits without sacrificing control of costs and data.

TruScale Brings True Utility Computing to Data Center Infrastructure

The Lenovo Data Center Group focused on the needs of these enterprise customers by asking themselves:

  • What are customers trying to do?
  • What would be a winning consumption model for customers?

The answer they came up with is Lenovo TruScale Infrastructure Services.

Nutanix invited DCIG analysts to attend the recent .NEXT conference. While there we met with many participants in the Nutanix ecosystem, including an interview with Laura Laltrello, VP and GM of Lenovo Data Center Services. This article, and DCIG’s selection of Lenovo TruScale as one of three Best of Show products at the conference, is based largely on that interview.

As noted in the DCIG Best of Show at Nutanix .NEXT article, TruScale literally introduces utility data center computing. Lenovo bills TruScale clients a monthly management fee plus a utilization charge. It bases this charge on the power consumed by the Lenovo-managed IT infrastructure. Clients can commit to a certain level of usage and be billed a lower rate for that baseline. This is similar to reserved instances on Amazon Web Services, except that customers only pay for actual usage, not reserved capacity.
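
As a rough illustration of how that kind of utility billing works, the sketch below computes a monthly charge from a committed baseline plus metered overage. The fee structure and every rate in it are assumptions invented for the example; actual TruScale pricing is negotiated per contract.

```python
# Illustrative only: utility-style billing with a committed usage baseline.
# The fee structure and all rates are hypothetical, not Lenovo's actual pricing.
def utility_bill(measured_usage: float,
                 committed_baseline: float,
                 baseline_rate: float,
                 overage_rate: float,
                 management_fee: float) -> float:
    """Pay for actual usage: a lower rate up to the commitment, a higher rate beyond it."""
    billable_baseline = min(measured_usage, committed_baseline)
    overage = max(0.0, measured_usage - committed_baseline)
    return management_fee + billable_baseline * baseline_rate + overage * overage_rate


# Example: 12,000 metered units against a 10,000-unit commitment
print(utility_bill(12_000, 10_000,
                   baseline_rate=0.08, overage_rate=0.11, management_fee=1_500))
```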

infographic summarizing Lenovo TruScale features

Source: Lenovo

This power consumption-based approach is especially appealing to enterprises and service providers for which one or more of the following holds true:

  • Their data center workloads tie directly to revenue.
  • They want IT to focus on enabling digital transformation, not infrastructure management.
  • They need to retain possession of, or secure control over, their data.

Lenovo TruScale Offers Everything as a Service

TruScale can manage everything as a service, including both hardware and software. Lenovo works with its customers to figure out which licensing programs make the most sense for the customer. Where feasible, TruScale includes software licensing as part of the service.

Lenovo Monitors and Manages Data Center Infrastructure

TruScale does not require companies to install any extra software. Instead, it gets its power utilization data from the management processor already embedded in Lenovo servers. It then passes this power consumption data to the Lenovo operations center(s) along with alerts and other sensor data.

Lenovo uses the data it collects to trigger support interventions. Lenovo services professionals handle all routine maintenance including installing firmware updates and replacing failed components to ensure maximum uptime. Thus, Lenovo manages data center infrastructure below the application layer.

Lenovo Provides Continuous Infrastructure (and Cost) Visibility

Lenovo also uses the data it collects to provide near real-time usage data to customers via a dashboard. This dashboard graphically presents performance against key metrics, including actual versus budgeted spend. In short, Lenovo’s approach to utility data center computing provides a distinctive and easy means to deploy and manage infrastructure across its entire lifecycle.

Lenovo Integrates with Nutanix Prism

Lenovo TruScale infrastructure services cover the entire range of Lenovo ThinkSystem and ThinkAgile products. The software-defined infrastructure products include pre-integrated solutions for Nutanix, Azure HCI, Azure Stack, and VMware.

Lenovo has taken extra steps to integrate its products with Nutanix. These include:

  • ThinkAgile XClarity Integrator for Nutanix is available via the Nutanix Calm marketplace. It works in concert with Prism to integrate server data and alerts into the Prism management console.
  • ThinkAgile Network Orchestrator is an industry-first integration between Lenovo switches and Prism. It reduces error and downtime by automatically changing physical switch configurations when changes are made to virtual Nutanix networks.

Nutanix Automates the Application Layer

Nutanix software simplifies the deployment and management of enterprise applications at scale. The following graphic, taken from the opening keynote, lists each Nutanix component and summarizes its function.

image showing summary list of Nutanix services

Source: Nutanix

The Nutanix .NEXT conference featured many customers telling how Nutanix has transformed their data center operations. Their statements about Nutanix include:

“stable and reliable virtual desktop infrastructure”

“a private cloud with all the benefits of public, under our roof and able to keep pace with our ambitions”

“giving me irreplaceable time and memories with family”

“simplicity, ease of use, scale”

Lenovo TruScale + Nutanix = Accelerated Enterprise Transformation

I was not initially a fan of the term “digital transformation.” It felt like yet another slogan that really meant, “Buy more of my stuff.” But practical applications of machine learning and artificial intelligence are here now and truly do present significant new opportunities (or threats) for enterprises in every industry. Consequently, and more than at any time in the past, the IT department has a crucial role to play in the success of every company.

Enterprises need their IT departments to transition from being “Information Technology” departments to “Intelligent Transformation” departments. TruScale and Nutanix each enable such a transition by freeing up IT staff to focus on the business rather than on technology. Together, the combination of TruScale infrastructure services and Nutanix application services creates a powerful accelerant for enterprise transformation.

Transform and thrive.

 

Disclosure: As noted above, Nutanix invited DCIG analysts to attend the .NEXT conference. Nutanix covered most of my travel expenses. However, neither Nutanix nor Lenovo sponsored this article.

Updated on 5/24/2019.




HYCU Continues Its March Towards Becoming the Default Nutanix Backup Solution

Any time a new operating system platform comes to market, one backup solution tends to lead in providing a robust set of data protection features that companies can quickly, easily, and economically deploy. It happened with Unix. It happened with Windows and VMware. Now it is happening again with the Nutanix Acropolis operating system (AOS) as HYCU continues to make significant product enhancements in its march to become the default backup solution for Nutanix-centric environments.

I greatly respect any emerging technology provider that can succeed at any level in the hyper-competitive enterprise space. To compete and win in the enterprise market, it must execute simultaneously on multiple levels. Minimally, it must have solid technology, a compelling message, and competent engineering, marketing, management, sales, and support teams to back the product up. Nutanix delivers on all these fronts.

However, companies can sometimes overlook the value of the partner community that must simultaneously develop when a new platform such as Nutanix AOS comes to market. If companies such as HYCU, Intel, Microsoft, SAP and others did not commit resources to form technology alliances with Nutanix, it would impede Nutanix’s ability to succeed in the marketplace.

Of these alliances, Nutanix’s alliance with HYCU merits attention. While Nutanix does have technology alliances with other backup providers, HYCU is the only one of these providers that has largely hitched its wagon to the Nutanix train. As a result, as Nutanix goes, so largely goes HYCU.

Given that Nutanix continues to rock the hyperconverged infrastructure (HCI) market space, this bodes well for HYCU – assuming HYCU matches Nutanix’s pace of innovation step-for-step. Based upon the announcement that HYCU made at this week’s Nutanix .NEXT conference in Anaheim, CA, it is clear that HYCU fully understands the opportunities in front of it and capitalizes on them in its latest 4.0 release. Consider:

  • HYCU supports and integrates with Nutanix Mine beginning in the second half of 2019. Emerging data protection providers such as Cohesity and Rubrik have (rightfully) made a lot of noise about using HCI platforms (and especially theirs) for data protection use cases. In the face of this noise, HYCU, with its HYCU-X announcement in late 2018, grasped that it could use Nutanix to meet this use case. The question was, “Did Nutanix want to position AOS as a platform for data protection software and secondary enterprise workloads?”

The short answer is Yes. The Nutanix Mine May 8 announcement makes it clear that Nutanix has no intention of conceding the HCI platform space to competitors that focus primarily on data protection. Further, Nutanix’s technology alliance with HYCU immediately pays dividends. Companies can select backup software that is fully integrated with the Nutanix AOS, obtaining it and managing it in almost the same way as if Nutanix had built its own backup software. Further, HYCU is the only data protection solution ready now to ship when Nutanix goes GA with Mine in the second half of 2019.

  • Manage HYCU through the Nutanix Prism management interface. Nutanix Prism is the interface used to manage Nutanix AOS environments. With the forthcoming release of HYCU 4.0, companies may natively administer HYCU through the Nutanix Prism interface as part of their overall Nutanix AOS management experience.
  • Support for Nutanix Files. The scale-out characteristics of Nutanix make it very appealing for companies to use it for purposes other than simply hosting their VMs. Nutanix Files is a perfect illustration as companies can use Nutanix to host their unstructured data to get the availability, performance, and flexibility that traditional NAS providers increasingly struggle to deliver in a cost-effective manner.

HYCU 4.0’s support for Nutanix Files includes NFS support and changed file tracking. This feature eliminates the overhead of file system scans, automates protection of newly created VMs with a default policy, and should serve to accelerate the speed of incremental backups.

  • Protects physical Windows servers. Like it or not, physical Windows servers remain a fixture in many corporate environments, and companies must protect them. To address this persistent need, HYCU 4.0 introduces protection for physical Windows servers. As companies look to adopt HYCU to protect their expanding Nutanix environment, they can “check the box,” so to speak, to extend their use of HYCU to protect their physical Windows environment as well.

The Nutanix Mine announcement represents yet another marketplace into which Nutanix will extend the reach of its AOS platform to provide a consistent, single cloud platform that companies may use. As Nutanix makes its Mine offering available, companies may note that Nutanix mentions multiple data protection providers who plan to come to market with solutions running on Nutanix Mine.

However, “running on Nutanix Mine” and “optimized and fully integrated with Nutanix” are two very different phrases. Of the providers who were mentioned by Nutanix that will run on Nutanix Mine, only HYCU has moved in lockstep with Nutanix AOS almost since HYCU’s inception. In so doing, HYCU has well positioned itself to become the default backup solution for Nutanix environments due to the many ways HYCU has adopted and deeply ingrained Nutanix’s philosophy of simplicity into its product’s design.




Tips to Selecting the Best Cloud Backup Solution

The cloud has gone mainstream with more companies than ever looking to host their production applications with general-purpose cloud providers such as the Google Cloud Platform (GCP). As this occurs, companies must identify backup solutions architected for the cloud that capitalize on the native features of each provider’s cloud offering to best protect their virtual machines (VMs) hosted in the cloud.

Companies that move their applications and data to the cloud must orchestrate the protection of those applications and data once they reside there. GCP and other cloud providers offer highly available environments and replicate data between data centers in the same region. They also provide options in their clouds for companies to configure their applications to automatically fail over, fail back, scale up, and scale back down, as well as create snapshots of their data.

To fully leverage these cloud features, companies must identify an overarching tool that orchestrates the management of these availability, backup and recovery features as well as integrates with their applications to create application-consistent backups. To select the right cloud backup solution for them, here are a few tips to help companies do so.

Simple to Start and Stop

The cloud gives companies the flexibility and freedom to start and stop services as needed and then only pay for these services as they use them. The backup solution should give companies the same ease to start and stop these services. It should only bill companies for the applications it protects during the time it protects them.

The simplicity of the software’s deployment should also extend to its configuration and ongoing management. Companies can quickly select and deploy the compute, networking, storage, and security services cloud providers offer. The backup software should make it just as easy for companies to select it and configure it to back up their VMs, and optionally turn it off if needed.

Takes Care of Itself

When companies select any cloud provider’s service, companies get the benefits of the service without the maintenance headaches associated with owning it. For example, when companies choose to host data on GCP’s Cloud Storage service, they do not need to worry about administering Google’s underlying IT infrastructure. The tasks of replacing faulty HDDs, maintaining HDD firmware, keeping its Cloud Storage OS patched, etc. fall to Google.

In the same way, when companies select backup software, they want its benefits without the overhead of patching it, updating it, and managing it long term. The backup software should be available and run as any other cloud service. However, in the background, the backup software provider should take care of its software’s ongoing maintenance and updates.

Integrates with the Cloud Provider’s Identity Management Services

Companies use services such as LDAP or Microsoft AD to control access to corporate IT resources. Cloud providers also have their own identity management services that companies can use to control their employees’ access to cloud resources.

The backup software will ideally integrate with the cloud provider’s native identity management services to simplify its management and ensure that those who administer the backup solution have permission to access VMs and data in the cloud.

Integrates with the Cloud Provider’s Management Console

Companies want to make their IT environments easier to manage. For many, that begins with a single pane of glass to manage their infrastructure. In cloud environments, companies must adhere to this philosophy as cloud providers offer dozens of cloud services that individuals can view and access through that cloud provider’s management console.

To ensure cloud administrators remain aware that the backup software is available as an option, let alone use it, the backup software must integrate with the cloud provider’s default management console. In this way, these individuals can remember to use it and easily incorporate its management into their overall job responsibilities.

Controls Cloud Costs

It should come as no great surprise that cloud providers make their money when companies use their services. The more of their services that companies use, the more the cloud providers charge. It should also not shock anyone that the default services cloud providers offer may be among their most expensive.

The backup software can help companies avoid racking up unneeded costs in the cloud. Since the backup software will primarily consume storage capacity in the cloud, it should offer features that help manage these costs. Aside from having policies in place to tier backup data as it ages across these different storage types, it should also provide options to archive, compress, deduplicate, and even delete data. Ideally, it will also spin up cloud compute resources when needed and shut them down once backup jobs complete to further control costs in the cloud.

HYCU Brings the Benefits of Cloud to Backup

Companies choose the cloud for simple reasons: flexibility, scalability, and simplicity. They already experience these benefits when they choose the cloud’s existing compute, networking, storage, and security services. So, they may rightfully wonder, why should the software service they use to orchestrate their backup experience in the cloud be any different?

In short, it should not be any different. As companies adopt and adapt to the cloud’s consumption model, they will expect all services they consume in the cloud to follow its billing and usage model. Companies should not give backup a pass on this growing requirement.

HYCU is the first backup and recovery solution for protecting applications and data on the Google Cloud Platform that follows these basic principles of consuming cloud services. By integrating with GCP’s identity management services, being simple to start and stop, and helping companies control their costs, among other capabilities, HYCU exemplifies how easy backup and recovery can and should be in the cloud. HYCU provides companies with the breadth of backup services that their applications and data hosted in the cloud need while relieving them of the responsibility to manage and maintain it.




Best Practices for Getting Ready to Go “All-in” on the Cloud

Ensuring that an application migration to the cloud goes well, or even deciding whether a company should migrate a specific application to the cloud, requires a thorough understanding of each application. This understanding should encompass what resources the application currently uses as well as how it behaves over time. Here is a list of best practices that a company can put in place for its on-premises applications before it moves any of them to the cloud.

  1. Identify all applications running on-premises. A company may assume it knows what applications it has running in its data center environment. However, it is better to be safe than sorry. Take inventory of the on-premises environment and actively monitor it to establish a baseline. During this time, identify any new virtual or physical machines that come online.
  2. Quantify the resources used by these applications and when and how they use them. This step ensures that a company has a firm handle on the resources each application will need in the cloud, how much of these resources each one will need, and what types of resources it will need. For instance, simply knowing one needs to move a virtual machine (VM) to the cloud is insufficient. A company needs to know how much CPU, memory, and storage each VM needs; when the application runs; its run-time behavior; and, its periods of peak performance to choose the most appropriate VM instance type in the cloud to host it.
  3. Identify which applications will move and which will stay. Test and development applications will generally top the list of applications that a company will move to the cloud first. This approach gives a company the opportunity to become familiar with the cloud, its operations, and billing. Then a company should prioritize production applications starting with the ones that have the lowest level of impact to the business. Business and mission critical applications should be some of the last ones that a company moves. Applications that will stay on-premises are often legacy applications or those that cloud providers do not support.
  4. Map each application to the appropriate VM instance in the cloud. To make the best choice requires that a company know both its application requirements and the offerings available from the cloud provider. This can take some time to quantify, as Amazon Web Services (AWS) offers over 90 different VM instance types on which a company may choose to host an application while Microsoft Azure offers over 150 VM instance types. Further, each of these providers’ VMs may be deployed as an on-demand, reserved, or spot instance, each with access to multiple types of storage. A company may even look to move to serverless compute. Selecting the most appropriate VM instance type for each application requires that a company know at the outset the capacity and performance requirements of each VM as well as its data protection requirements. This information will ensure a company can select the best VM to host it as well as appropriately configure the VM’s CPU, data protection, memory, and storage settings (see the sketch after this list).
  5. Determine which general-purpose cloud provider to use. Due to the multiple VM instance types each cloud provider offers and the varying costs of each VM instance type, it behooves a company to explore which cloud provider can best deliver the hosting services it needs. This decision may come down to price. Once it maps each of its applications to a cloud provider’s VM instance type, a company should be able to get an estimate of what its monthly cost will be to host its applications in each provider’s cloud.
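
As a concrete illustration of step 4, the sketch below picks the cheapest instance type from a small, hypothetical catalog that satisfies a VM's measured CPU and memory needs. The catalog entries and prices are made-up placeholders; in practice you would load the real instance lists and prices from whichever provider you are evaluating.

```python
# Sketch: map a VM's measured requirements to the cheapest adequate instance type.
# The catalog below is a made-up placeholder, not real AWS or Azure pricing.
from dataclasses import dataclass


@dataclass
class InstanceType:
    name: str
    vcpus: int
    memory_gb: int
    hourly_price: float


CATALOG = [
    InstanceType("gp.small", 2, 8, 0.05),
    InstanceType("gp.large", 4, 16, 0.10),
    InstanceType("mem.xlarge", 4, 32, 0.15),
    InstanceType("gp.2xlarge", 8, 32, 0.20),
]


def pick_instance(required_vcpus: int, required_memory_gb: int) -> InstanceType:
    """Return the cheapest catalog entry that meets the VM's peak requirements."""
    candidates = [i for i in CATALOG
                  if i.vcpus >= required_vcpus and i.memory_gb >= required_memory_gb]
    if not candidates:
        raise ValueError("No instance type in the catalog meets these requirements")
    return min(candidates, key=lambda i: i.hourly_price)


# Example: a VM that peaks at 3 vCPUs and 12 GB of memory
print(pick_instance(3, 12).name)   # -> gp.large
```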

Companies have good reasons for wanting to go “all-in” on the cloud as part of their overall business and IT strategies. But integral to both strategies, a company must also have a means to ensure the stability of this new hybrid cloud environment as well as provide assurances that its cloud costs will be managed and controlled over time. By going “all-in” on software such as Quest Software’s Foglight, a company can have confidence that its decision to go “all-in” on the cloud will succeed initially and then continue to pay off over time.

A recent white paper by DCIG provides more considerations for going all-in on the cloud to succeed both initially and over time. This paper is available to download by following this link to Quest Software’s website.




Purpose-built Backup Cloud Service Providers: A Practical Starting Point for Cloud Backup and DR

The shift is on toward using cloud service providers for an increasing number of production IT functions with backup and DR often at the top of the list of the tasks that companies first want to deploy in the cloud. But as IT staff seeks to “Check the box” that they can comply with corporate directives to have a cloud solution in place for backup and DR, they also need to simultaneously check the “Simplicity,” “Cost-savings,” and “It Works” boxes.

Benefits of Using a Purpose-built Backup Cloud Service Provider

Cloud service providers purpose-built for backup and DR put companies in the best position to check all those boxes for this initial cloud use case. These providers solve their clients’ immediate challenges of easily and cost-effectively moving backup data off-site and then retaining it long term.

Equally important, addressing the off-site backup challenge in this way positions companies to do data recovery with the purpose-built cloud service provider, a general purpose cloud service provider, or on-premises. It also sets the stage for companies to regularly test their disaster recovery capabilities so they can perform them when necessary.

Choosing a Purpose-built Backup Cloud Service Provider

To choose the right cloud service provider for corporate backup and DR requirements, companies want to be cost-conscious. But they also want to experience success and not put their corporate data or their broader cloud strategy at risk. A purpose-built cloud service provider such as Unitrends, with its Forever Cloud solution, frees companies to aggressively and confidently move ahead with a cloud deployment for their backup and DR needs.


Join Our Webinar for a Deeper Look at the Benefits of Using a Purpose-built Backup Cloud Service Provider

Join me next Wednesday, October 17, 2018, at 2 pm EST, for a webinar where I take a deeper look at both purpose-built and general purpose cloud service providers. In this webinar, I examine why service providers purpose-built for cloud backup and DR can save companies both money and time while dramatically improving the odds they can succeed in their cloud backup and DR initiatives. You can register by following this link.

 




HPE Expands Its Big Tent for Enterprise Data Protection

When it comes to the mix of data protection challenges that exist within enterprises today, these companies would love to identify a single product that they can deploy to solve all their challenges. I hate to be the bearer of bad news, but that single product does not yet exist. That said, enterprises will find a steadily improving ecosystem of products that increasingly work well together to address this challenge, with HPE at the forefront of putting up a big tent that brings these products together and delivers them as a single solution.

Having largely solved their backup problems at scale, enterprises have new freedom to analyze and address their broader enterprise data protection challenges. As they look to bring long term data retention, data archiving, and multiple types of recovery (single applications, site fail overs, disaster recoveries, and others) under one big tent for data protection, they find they often need to deploy multiple products.

This creates a situation where each product addresses specific pain points that enterprises have. However, multiple products equate to multiple management interfaces that each have their own administrative policies with minimal or no integration between them. This creates a thornier problem – enterprises are left to manage and coordinate the hand-off of the protection and recovery of data between these different individual data protection products.

A few years ago HPE started to build a “big tent” to tackle these enterprise data protection and recovery issues. It laid the foundation with its HPE 3PAR StoreServ storage arrays, StoreOnce deduplication storage systems, and Recovery Manager Central (RMC) software to help companies coordinate and centrally manage:

  • Snapshots on 3PAR StoreServ arrays
  • Replication between 3PAR StoreServ arrays
  • The efficient movement of data between 3PAR and StoreOnce systems for backup, long term retention, and fast recoveries

This week HPE expanded its big tent of data protection to give companies more flexibility to protect and recover their data more broadly across their enterprise. It did so in the following ways:

  • HPE RMC 6.0 can directly recover data to HPE Nimble storage arrays. Recoveries from backups can be a multi-step process that may require data to pass through the backup software and the application server before it lands on the target storage array. Beginning December 2018, companies can use RMC to directly recover data to HPE Nimble storage arrays from an HPE StoreOnce system without going through the traditional recovery process, just as they can already do to HPE 3PAR StoreServ storage arrays.
  • HPE StoreOnce can directly send and retrieve deduplicated data from multiple cloud providers. Companies sometimes fail to consider that general purpose cloud service providers such as Amazon Web Services (AWS) or Microsoft Azure make no provisions to optimize data stored with them, such as deduplicating it. Using HPE StoreOnce’s new direct support for AWS, Azure, and Scality, companies can use StoreOnce to first compress and deduplicate data before they store the data in the cloud.
  • Integration between Commvault and HPE StoreOnce systems. Out of the gate, companies can use Commvault to manage StoreOnce operations such as replicating data between StoreOnce systems as well as moving data directly from StoreOnce systems to the cloud. Moreover, as this relationship between Commvault and HPE matures, companies will also be able to use HPE’s StoreOnce Catalyst, HPE’s client-based deduplication software agent, in conjunction with Commvault to back up data on server clients where data may not reside on HPE 3PAR or Nimble storage. Using the HPE StoreOnce Catalyst software, Commvault will deduplicate data on the source before sending it to an HPE StoreOnce system.

Source: HPE

Of these three announcements that HPE made this week, the new relationship with Commvault, which accompanies HPE’s pre-existing relationships with Micro Focus (formerly HPE Data Protector) and Veritas, demonstrates HPE’s commitment to helping enterprises build a big tent for their data protection and recovery initiatives. Storing data on HPE 3PAR and Nimble arrays and using RMC to manage their backups and recoveries on StoreOnce systems certainly accelerates and simplifies these functions when companies can do so. But by working with these other partners, HPE recognizes that companies will not store all their data on its systems and shows that it will accommodate them so they can create a single, larger data protection and recovery solution for their enterprise.




Two Hot Technologies to Consider for Your 2019 Budgets

Hard to believe but the first day of autumn is just two days away and with fall weather always comes cooler temperatures (which I happen to enjoy!) This means people are staying inside a little more and doing those fun, end of year activities that everyone enjoys – such as planning their 2019 budgets. As you do so, solutions from BackupAssist and StorMagic are two hot new technologies for companies to consider making room for in the New Year.

BackupAssist 365

BackupAssist 365 backs up files and emails stored in the cloud. While backup of cloud-based data may seem rather ho-hum in today’s artificial intelligence, blockchain-obsessed, digital transformation focused world, it solves a real-world problem that organizations of nearly every size face: how to cost-effectively and simply protect all those pesky files and emails that people store in cloud applications such as DropBox, Office 365, Google Drive, OneDrive, Gmail, Outlook and others.

To do so, BackupAssist 365 adopted two innovative yet practical approaches to protect files and emails.

  • First, it interfaces directly with these various cloud providers to back up this data. Using your login permissions (which you provide when configuring the software), BackupAssist 365 accesses data directly in the cloud. This negates the need for your server, PC, or laptop to be turned on when these backups occur, so backups can occur at any time.
  • Second, it does cloud-to-local backup. In other words, rather than running up the data transfer and network costs that come with backing up to another cloud, it backs the data up to local storage at your site. While that may seem a little odd in today’s cloud-centric world, companies can get a great deal of storage capacity for nominal amounts of money. Since it only does an initial full backup and then differential backups thereafter, the ongoing data transfer costs are nominal and the amount of storage capacity that one should need onsite is equally small.

Perhaps the best part about BackupAssist 365 is its cost (or lack thereof). BackupAssist 365 licenses its software on a per-user basis, with each user email account counting as one user license. However, this one email account covers the backup of that user’s data in any cloud service used by that user. Further, the cost is only $1/month per user, with a decreasing cost for greater numbers of users. In fact, the cost is so low on a per-user basis that companies may not even need to budget for this service. They can just start using it and expense their credit cards to keep it below corporate radar screens.

StorMagic SvSAN

The StorMagic SvSAN touches on another two hot technology trends that I purposefully (or not so purposefully) left out above: hyperconverged infrastructure or HCI and edge computing. However, unlike many of the HCI and edge computing plays in the marketplace such as Cisco HyperFlex, Dell EMC VxRail, and Nutanix, StorMagic has not forgotten about cost constraints that branch, remote, and small offices face.

As Cisco, Dell EMC, Nutanix and others chase the large enterprise data center opportunities, they often leave remote, branch, and small offices with two choices: pay up or find another solution. Many of these size offices are opting to find alternative solutions.

This is where StorMagic primarily plays. For a less well-known player, they play much bigger than they may first appear. Through partnerships with large providers such as Cisco and Lenovo among others, StorMagic comes to market with highly available, two-server systems that scale across dozens, hundreds, or even thousands of remote sites. To get a sense of StorMagic’s scalability, walk into any of the 2,000+ Home Depots in the United States or Mexico and ask to look at the computer system that hosts their compute and storage. If the Home Depot lets you and you can find it, you will find a StorMagic system running somewhere in the store.

The other big challenge that each StorMagic system addresses is security. Because its systems can be deployed almost anywhere in any environment, they are susceptible to theft. In fact, in talking to one of its representatives, he shared a story where someone drove a forklift through the side of a building and stole a computer system at one of its customer sites. Not that it mattered. To counter these types of threats, StorMagic encrypts all the data on its HCI solutions with its own software that is FIPS 140-2 compliant.

Best of all, to get these capabilities, companies do not have to break the bank to acquire one of these systems. The list price for the Standard Edition of the SvSAN software, which includes 2TB of usable storage, high availability, and remote management, is $2,500.

As companies look ahead and plan their 2019 budgets, they need to take care of their operational requirements but they may also want to dip their toes in the water to get the latest and greatest technologies. These two technologies give companies the opportunities to do both. Using BackupAssist 365, companies can quickly and easily address their pesky cloud file and email backup challenges while StorMagic gives them the opportunity to affordably and safely explore the HCI and edge computing waters.




Analytics, Automation and Hybrid Clouds among the Key Takeaways from VMworld 2018

At early VMworld shows, stories emerged of attendees scurrying from booth to booth on the exhibit floor looking for VM data protection and hardware solutions to address the early challenges that VMware ESXi presented. Fast forward to the 2018 VMworld show, and the motivations behind attending training sessions and visiting vendor booths have changed significantly. Now attendees want solutions that bring together their private and public clouds, offer better ways to analyze and automate their virtualized environments, and deliver demonstrable cost savings and/or revenue opportunities after deploying them.

The entrance to the VMworld 2018 exhibit hall greeted attendees a little differently this year than in years past. Granted, there were still some of the usual suspects such as Dell EMC and HPE that have reserved booths at this show for many years. But right alongside them were relative newcomers (to the VMworld show anyway) such as Amazon Web Services and OVHcloud.

Then as one traversed the exhibit hall floor and visited the booths of the vendors immediately behind them, the data protection and hardware themes of the early VMworld shows persisted in these booths, though the messaging and many of the vendor names have changed since the early days of this show.

Companies such as Cohesity, Druva, and Rubrik represent the next generation of data protection solutions for vSphere, while companies such as Intel and Trend Micro have a more pronounced presence on the VMworld show floor. Together these exhibitors reflect the changing dynamics of what is occurring in today’s data centers and what the current generation of organizations is looking for vendors to provide for their increasingly virtualized environments. Consider:

  1. Private and public cloud are coming together to become hybrid. The theme of hybrid clouds with applications that can span both public and private clouds began with VMworld’s opening keynote announcing the availability of Amazon Relational Database Service (Amazon RDS) on VMware. Available in the coming months, this functionality will free organizations to automate the setup of Microsoft SQL Server, Oracle, PostgreSQL, MariaDB and MySQL databases in their traditional VMware environments and then migrate them to the AWS cloud. Those interested in trying out this new service can register here for a preview.
  2. Analytics will pave the way for increasing levels of automation. As organizations of all sizes adopt hybrid environments, the only way they can effectively manage those environments at scale is to automate their management. This begins with the use of analytics tools that capture the data points coming in from the underlying hardware, the operating systems, the applications, the public clouds to which they attach, the databases, the devices which feed them data, and more.

Evidence of the growing presence of these analytics tools that enable this automation was everywhere at VMworld. One good example is Runecast, which analyzes the logs of these environments and then also scours blogs, white papers, forums, and other online sources for best practices to advise companies on how best to configure their environments. Another is Login VSI, which does performance benchmarking and forecasting to anticipate how VDI patches and upgrades will impact the current infrastructure.

  3. The cost savings and revenue opportunities for these hybrid environments promise to be staggering. One of the more compelling segments in one of the keynotes was the savings that many companies initially achieved deploying vSphere. Below is one graphic that appeared at the 8:23 mark in this video of the second day’s keynote, where a company reduced its spend on utility charges by over $60,000 per month, an 84% reduction in cost. Granted, this example was for illustration purposes, but it seemed in line with other stories I have anecdotally heard.

Source: VMware

But as companies move into this hybrid world that combines private and public clouds, the value proposition changes. While companies may still see cost savings going forward, it is more likely that they will realize and achieve new opportunities that were simply not possible before. For instance, they may deliver automated disaster recoveries and high availability for many more or all their applications. Alternatively, they will be able to bring new products and services to market much more quickly or perform analysis that simply could not have been done before because they have access to resources that were unavailable to them in a cost-effective or timely manner.




CloudShift Puts Flawless DR on Corporate Radar Screens

Hyper-converged Infrastructure (HCI) solutions excel at delivering on many of the attributes commonly associated with private clouds. Consequently, the concepts of hyper-convergence and private clouds have become, in many respects, almost inextricably linked.

But for an HCI solution to not have a clear path forward for public cloud support … well, that’s almost anathema in the increasingly hybrid cloud environments found in today’s enterprises. That’s what makes this week’s CloudShift announcement from Datrium notable – it begins to clarify how Datrium plans to go beyond backup to the public cloud as part of its DVX solution, and it puts the concept of flawless DR on corporate radar screens.

Source: Datrium

HCI solutions are transforming how organizations manage their on-premises infrastructure. By combining compute, data protection, networking, storage and server virtualization into a single pre-integrated solution, they eliminate many of the headaches associated with traditional IT infrastructures while delivering the “cloud-like” speed and ease of deployment that enterprises want.

However, enterprises increasingly want more than “cloud-like” abilities from their on-premises HCI solution. They also want the flexibility to move the virtual machines (VMs) they host on their HCI solution into public cloud environments if needed. Specifically, if they run disaster recovery (DR) tests, perform an actual DR, or need to move a specific workload that is experiencing high throughput into the public cloud, having the flexibility to move VMs into and out of the cloud as needed is highly desirable.

Datrium answers the call for public cloud integration with its recent CloudShift announcement. However, Datrium did not just come out with a #MeToo answer for public clouds by announcing it will support the AWS cloud. Rather, it delivered what most enterprises are looking for at this stage in their journey to a hybrid cloud environment: a means to seamlessly incorporate the cloud into their overall DR strategy.

The goals behind its CloudShift announcement are three-fold:

  1. Build on the existing Datrium DVX platform that already manages the primary copy of data as well as its backups. With the forthcoming availability of CloudShift in the first half of 2019, it will complete the primary to backup to cloud circle that companies want.
  2. Make DR work flawlessly. If there are two words together that often represent an oxymoron, it is “flawless DR”. By bringing all primary, backup and cloud together and managing them as one holistic piece, companies can begin to someday soon (ideally in this lifetime) view flawless DR as the norm instead of the exception.
  3. Orchestrated DR failover and failback. DR failover and failback just rolls off the tongue – it is simple to say and everyone understands what it means. But executing a successful DR failover and failback in today’s world tends to get very pricey and very complex. By Datrium rolling the management of primary, backup and cloud under one roof and then continually performing compliance checks on the execution environment to ensure that it meets the RPO and RTO of the DR plan, companies can have a higher degree of confidence that DR failovers and failbacks only occur when they are supposed to and that, when they occur, they will succeed (a simplified sketch of such a check follows this list).
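
As a simplified illustration of what such a compliance check involves, the sketch below compares the age of each workload's most recent recovery point against its RPO target. It is a generic sketch over assumed data structures, not Datrium's actual implementation.

```python
# Generic sketch of an RPO compliance check; not Datrium's actual implementation.
from datetime import datetime, timedelta, timezone

# Hypothetical inventory: workload -> (latest recovery point, RPO target)
RECOVERY_POINTS = {
    "erp-db":   (datetime(2018, 8, 29, 6, 0, tzinfo=timezone.utc), timedelta(hours=4)),
    "web-tier": (datetime(2018, 8, 29, 11, 30, tzinfo=timezone.utc), timedelta(hours=1)),
}


def rpo_violations(now: datetime) -> list[str]:
    """Return the workloads whose newest recovery point is older than their RPO."""
    return [name for name, (latest, rpo) in RECOVERY_POINTS.items()
            if now - latest > rpo]


if __name__ == "__main__":
    # At 13:00 UTC both example workloads have fallen outside their RPO windows.
    print(rpo_violations(datetime(2018, 8, 29, 13, 0, tzinfo=timezone.utc)))
```

An orchestration layer would run a check like this continuously and refuse to report a DR plan as ready while any workload remains out of compliance.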

Despite many technology advancements in recent years, enterprise-wide, turnkey DR capabilities with orchestrated failover and failback between on-premises and the cloud are still largely the domain of high-end enterprises that have the expertise to pull it off and are willing to commit large amounts of money to establish and maintain a (hopefully) functional DR capability. Datrium’s CloudShift announcement puts the industry on notice that reliable, flawless DR that will meet the budget and demands of a larger number of enterprises is on its way.




Orchestrated Backup IN the Cloud Arrives with HYCU for GCP

Companies are either moving or have moved to the cloud with backup TO the cloud being one of the primary ways they plan to get their data and applications into the cloud. But orchestrating the backup of their applications and data once they reside IN the cloud… well, that requires an entirely different set of tools with few, if any, backup providers yet offering features in their respective products that deliver on this requirement. That ends today with the introduction of HYCU for GCP (Google Cloud Platform).

Listen to the podcast associated with this blog entry.

Regardless of which public cloud platform you use to host your data and/or applications – Amazon Web Services (AWS), Microsoft Azure, GCP, or some other platform – they all provide companies with multiple native backup utilities to protect data that resides in their cloud. The primary tools include snapshots, replication, and versioning, and GCP is no different.

What makes these tools even more appealing is that they are available at a cloud user’s fingertips: they can turn them on with the click of a button, and they only pay for what they use. These tools give organizations access to levels of data availability, data protection, and even disaster recovery that they previously had no easy means to deliver, and they do so for any data or application hosted with the cloud provider.
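
To make this concrete, here is a minimal sketch of invoking one of these native utilities, a Compute Engine persistent disk snapshot in GCP, programmatically. It assumes the google-api-python-client library, Application Default Credentials, and hypothetical project, zone, and disk names; the gcloud CLI or the console button accomplishes the same thing.

```python
# A minimal sketch of GCP's native snapshot capability, using the
# google-api-python-client library and Application Default Credentials.
# The project, zone, and disk names below are hypothetical placeholders.
from googleapiclient import discovery

compute = discovery.build('compute', 'v1')

# Request a snapshot of a persistent disk; GCP handles the rest.
operation = compute.disks().createSnapshot(
    project='my-project',          # hypothetical project ID
    zone='us-central1-a',
    disk='my-app-disk',            # hypothetical disk name
    body={'name': 'my-app-disk-snapshot-1'}
).execute()

print(operation['status'])
```

Protecting one disk this way is trivial; doing it consistently for every VM, owned by many different users, is the orchestration gap discussed next.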

But the problem in this scenario is not application and/or data backup. The catch is how an organization does this at scale in such a way that it can orchestrate and manage the backups of all its applications and data on a cloud platform such as GCP for all its users. The short answer is: it cannot.

This is a problem that HYCU for GCP addresses head-on. HYCU previously established a beachhead in Nutanix environments thanks to its tight integration with AHV. This integration positions HYCU well to extend those same benefits to any public cloud partner of Nutanix. The fact that Nutanix and Google announced a strategic alliance last year at the Nutanix .NEXT conference to build and operate hybrid clouds certainly helped HYCU prioritize GCP over the other public cloud providers for backup orchestration.

Leveraging HYCU in the GCP, companies immediately gain three benefits:

  1. Subscribe to HYCU directly from the GCP Marketplace. Rather than having to first acquire HYCU separately and then install it in the GCP, companies can buy it in the GCP Marketplace. This accelerates and simplifies HYCU’s deployment in the GCP while simultaneously giving companies access to a corporate grade backup solution that orchestrates and protects VMs in the GCP.
  2. Takes advantage of the native backup features in GCP. GCP has its own native snapshots that can be used for backup and recovery. HYCU capitalizes on these and puts them at the fingertips of admins, who can then manage and orchestrate backups and recoveries for all corporate VMs residing in GCP.
  3. Frees organizations to confidently expand their deployment of applications and data in GCP. While GCP obviously has the tools to back up and recover data and applications in GCP, managing them at scale was going to be, at best, cumbersome and, at worst, impossible. HYCU for GCP frees companies to more aggressively deploy applications and data at scale in GCP, knowing that they can centrally manage their protection and recovery.

Backup TO the cloud is great, and almost every backup provider offers that functionality. But backup IN the cloud – where the backup and recovery of a company’s applications and data in the cloud is centrally managed – now, that is something that stands apart from the competition. Thanks to HYCU for GCP, companies no longer have to deploy data and applications in the Google Cloud Platform in a way that requires each of their users or admins to assume backup and recovery responsibilities for their applications and data. Instead, companies can deploy knowing they now have a tool in place that centrally manages their backups and recoveries.




Too Many Fires, Poor Implementations, and Cost Overruns Impeding Broader Public Cloud Adoption

DCIG’s analysts (myself included) have lately spent a great deal of time getting up close and personal on the capabilities of public cloud providers such as Amazon Web Services (AWS), Microsoft Azure, and Google Cloud. We have also spent time talking to individuals deploying cloud solutions. As we have done so, we recognize that the capabilities of these cloud offerings should meet and exceed the expectations of most organizations regardless of their size. However, impeding cloud adoption are three concerns that have little to do with the technical capabilities of these public cloud solutions.

Anyone who spends any time studying the capabilities of any of these cloud offerings for the first time will walk away impressed. Granted, each offering has its respective strengths and weaknesses. However, when one examines each of these public cloud offerings and their respective infrastructures and compares them to the data centers that most companies own and manage, the differences are stark. The offerings from these public cloud providers win hands down. This might explain why organizations of all sizes are adopting the cloud at some level.

The more interesting dilemma is why organizations are not adopting public cloud offerings at a faster pace and why some early adopters are even starting to leave the cloud. While this is not an exhaustive list of reasons, here are three key concerns that have come out of our conversations and observations that are impeding cloud adoption.

Too many fires. Existing data centers are a constant target for budget cutbacks and understaffing, and too often lack any clear, long-term vision to guide their development. This combination of factors has led to costly, highly complex, inflexible data centers that need a lot of people to manage them. This situation exists at the exact moment when the business side of the house expects the data center to become simpler, more cost-effective, and more flexible to manage. While in-house data center IT staff may want to respond to these business requests, they often are consumed with putting out the fires caused by the complexity of the existing data center. This leaves them little or no time to explore and investigate new solutions.

Poor implementations. The good news is that public cloud offerings have a very robust feature set. The bad news is that all these features make the offerings daunting to learn and easy to set up incorrectly. If anything, the ease and low initial cost of most public cloud providers may work against the adoption of public cloud solutions. They have made it so easy and inexpensive for companies to get into the cloud that companies may try it out without really understanding all the options available to them and the ramifications of the decisions they make. This can easily lead to poor application implementations in the cloud and potentially introduce more costs and complexity – not less. The main upside here is that because creating and taking down virtual private clouds with these providers is relatively easy, even a poor setup can be rectified by creating a new virtual private cloud that better meets your needs.

Cloud cost overruns. Part of the reason companies live with and even mask the complexity of their existing data centers is that they can control their costs. Even if an application needs more storage, compute, networking, power – whatever – they can sometimes move hardware and software around on the back end to mask these costs until the next fiscal quarter or year rolls around, when they go to the business to ask for approval to buy more. Once applications and data are in the cloud and start to grow, these costs become exposed almost immediately. Since cloud providers bill based upon monthly usage, companies need to closely monitor their applications and data in the cloud: identifying which ones are starting to incur additional charges, knowing what options are available to lower those charges, and assessing the practicality of making those changes.
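
As one hedged illustration of what that monitoring can look like on AWS, the sketch below uses the Cost Explorer API via boto3 to break a month’s spend down by service. The date range, the region, and what counts as an “additional charge” are assumptions for the example, not prescriptions.

```python
# A minimal sketch of per-service cost monitoring on AWS using boto3's
# Cost Explorer client. Dates are hard-coded placeholders; a real job
# would compute them and feed the results into alerting or reporting.
import boto3

ce = boto3.client('ce', region_name='us-east-1')

response = ce.get_cost_and_usage(
    TimePeriod={'Start': '2018-06-01', 'End': '2018-07-01'},
    Granularity='MONTHLY',
    Metrics=['UnblendedCost'],
    GroupBy=[{'Type': 'DIMENSION', 'Key': 'SERVICE'}]
)

# Print each service's cost for the month, largest first.
groups = response['ResultsByTime'][0]['Groups']
for group in sorted(groups,
                    key=lambda g: float(g['Metrics']['UnblendedCost']['Amount']),
                    reverse=True):
    service = group['Keys'][0]
    amount = float(group['Metrics']['UnblendedCost']['Amount'])
    print(f'{service}: ${amount:,.2f}')
```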

Anyone who honestly assesses the capabilities available from the major public cloud providers will find they deliver next-gen features better than most organizations can on their own. That said, companies either need to find the time to educate themselves about these cloud providers or identify someone they trust to help them down the cloud path. While these three issues are impeding cloud adoption, they should not be stopping it, as they still too often do. The good news is that even if a company does poorly implement its environment in the cloud the first time around (and a few will), the speed and flexibility with which public cloud providers build out new virtual private clouds and tear down existing ones means the company can cost-effectively improve it.




Proven Investment Principles Can Guide Your Cloud Strategy

Living in Omaha, Nebraska, one cannot help but be influenced by Berkshire Hathaway and its CEO, Warren Buffett, one of the wealthiest men in the world, when it comes to making investment decisions. However, the process that Berkshire Hathaway uses to make investment decisions has multiple other applications that include guiding you in making decisions about your cloud strategy.

If there is a company and an individual that epitomize the concept of “buy and hold”, they are Berkshire Hathaway and Warren Buffett. Their basic premise is that you thoroughly research a stock before making an investment decision. As part of that research, you investigate the financials of the company, its management team, its reputation, and the products and/or services it offers. Then you determine the type of growth that company is likely to experience in the future. Once that decision is made, you buy and hold the stock for a long time.

However, buy-and-hold is not the only principle that Warren Buffett follows. His first rule of investing is: Never lose money.

Companies should apply variations of both these principles when creating a cloud strategy. Once a company initiates and/or moves applications and/or data into the cloud, odds are that it will “buy and hold” them in the cloud for a long time, assuming service levels and pricing continue to make sense. The more applications and data it stores with a cloud provider, the more difficult it becomes to bring them back on-premise. Further, a company can easily lose track of what data and applications it has stored in the cloud.

The good and bad news is that public cloud providers such as Amazon, Google, and Microsoft have made and continue to make it easier than ever to get started with your cloud strategy as well as to migrate existing applications and data to the cloud. This ease of implementation can prompt organizations to bypass or shortcut the due diligence that they should perform before placing applications and data in the cloud. Unfortunately, this approach leaves them without clearly defined plans to manage their cloud estate once it is in place.

To avoid this situation, here are some “investment” principles to follow when creating a cloud strategy to improve your chances of getting the return from the cloud that you expect.

  1. Give preference to proven, supported services from the cloud provider for critical applications and data. When most organizations move, they need to start with the basics such as compute, networking, security, and storage. These services are the bread and butter of IT and are the foundation of public cloud providers. They have been around for years, are stable, and are likely not going anywhere. Organizations can feel confident about using these cloud services for both existing and new applications and data and should expect them to be around for a long time to come.
  2. Shy away from “speculative” technologies. Newly and recently introduced Amazon services such as Lambda (serverless computing), Machine Learning, Polly (text-to-voice), and Rekognition (visual analysis of images and videos), among others, sound (and are) exciting and fun to learn about and use. However, they are also the ones that cloud providers may abruptly change or even cancel. While some organizations use them in production, companies just moving to the cloud may only want to use them with their test and dev applications, or stay away altogether until they are confident these services are stable and will be available indefinitely.
  3. Engage with a trusted advisor. Some feedback that DCIG has heard is that companies want a more orchestrated roll-out of their computing services in the cloud than they have had on-premise. To answer that need, cloud providers are working to build out partner networks with individuals certified in their technologies, including help with the initial design and deployment of new apps and data in the cloud as well as the subsequent migration of existing applications and data to the cloud.
  4. Track and manage your investment. A buy-and-hold philosophy does not mean you ignore your investment after you purchase it. Track cloud services like any other investment: take the time to understand and manage the billing. Due to the multiple options provided by each cloud service, you may need to periodically or even frequently change how you use a service or even move some applications and/or data back on-premise.

As organizations look to create a cloud strategy and make it part of how they manage their applications and data, they should take a conservative approach. Primarily adopt cloud technologies that are stable, that you understand, and which you can safely, securely, and confidently manage. Leave more “speculative” technologies for test and dev or until such a time that your organization has a comfort level with the cloud. While the cloud can certainly save you money, time, and hassle if you implement a cloud strategy correctly, its relative ease of adoption can also cost you much more if you pursue it in a haphazard manner.




Four Implications of Public Cloud Adoption and Three Risks to Address

Businesses are finally adopting the public cloud because a large and rapidly growing catalog of services is now available from multiple cloud providers. These two factors have many implications for businesses. This article addresses four of these implications plus several cloud-specific risks.

Implication #1: No enterprise IT department will be able to keep pace with the level of services innovation available from cloud providers

The battle is over. Cloud wins. Deal with it.

Dealing with it does not necessarily mean that every business will move every workload to the cloud. It does mean that it is time for business IT departments to build awareness of the services available from public cloud providers. One way to do this is to tap into the flow of service updates from one or more of the major cloud providers.

For Amazon Web Services, I like What’s New with AWS. Easy filtering by service category is combined with sections for featured announcements, featured video announcements, and one-line listings of the most recent announcements from AWS. The one-line listings include links to service descriptions and to longer form articles on the AWS blog.

For Microsoft Azure, I like Azure Updates. As its subtitle says, “One place. All updates.” The Azure Updates site provides easy filtering by product, update type and platform. I especially like the ability to filter by update type for General Availability and for Preview. The site also includes links to the Azure roadmap, blog and other resources. This site is comprehensive without being overwhelming.

For Google Cloud Platform, its blog may be the best place to start. The view can be filtered by label, including by announcements. This site is less functional than the AWS and Microsoft Azure resources cited above.

For IBM Cloud, the primary announcements resource is What’s new with IBM Cloud. Announcements are presented as one-line listings with links to full articles.

Visit these sites, subscribe to their RSS feeds, or follow them via social media platforms. Alternatively, subscribe to their weekly or monthly newsletters via email. Once a business has workloads running in one of the public clouds, at a minimum an IT staff member should follow that provider’s updates site.
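
For teams that prefer automation to manually checking these sites, a small script can poll a provider’s announcement feed on a schedule. The sketch below is a minimal example using Python’s feedparser package against what is believed to be the AWS What’s New RSS feed; verify the feed URL before relying on it.

```python
# A minimal sketch of following a cloud provider's announcements via RSS.
# Requires the third-party feedparser package. The feed URL below is
# believed to be AWS's "What's New" feed; verify it before relying on it.
import feedparser

FEED_URL = 'https://aws.amazon.com/about-aws/whats-new/recent/feed/'

feed = feedparser.parse(FEED_URL)

# Print the ten most recent announcements with their links.
for entry in feed.entries[:10]:
    print(f'{entry.title}\n  {entry.link}')
```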

Implication #2: Pressure will mount on Enterprise IT to connect business data to public cloud services

The benefits of bringing public cloud services to bear on the organization’s data will create pressure on enterprise IT departments to connect business data to those services. There are many options for accomplishing this objective, including:

  1. All-in with one public cloud
  2. Hybrid: on-prem plus one public
  3. Hybrid: on-prem plus multiple public
  4. Multi-cloud (e.g. AWS + Azure)

The design of the organization and the priorities of the business should drive the approach taken to connect business data with cloud services.

Implication #3: Standard data protection requirements now extend to data and workloads in the public cloud

No matter what approach is taken when embracing the public cloud, standard data protection requirements extend to data and workloads in the cloud. Address these requirements up front. Explore alternative solutions and select one that meets the organization’s data protection requirements.

Implication #4: Cloud Data Protection and DRaaS are on-ramps to public cloud adoption

For most organizations the transition to the cloud will be a multi-phased process. Data protection solutions that can send backup data to the cloud are a logical early phase. Disaster recovery as a service (DRaaS) offerings represent another relatively low-risk path to the cloud that may be more robust and/or lower cost than existing disaster recovery setups. These solutions move business data into public cloud repositories. As such, cloud data protection and DRaaS may be considered on-ramps to public cloud adoption.

Once corporate data has been backed up or replicated to the cloud, tools are available to extract and transform the data into formats that make it available for use/analysis by that cloud provider’s services. With proper attention, this can all be accomplished in ways that comply with security and data governance requirements. Nevertheless, there are risks to be addressed.

Risk to Address #1: Loss of change control

The benefit of rapid innovation has a downside. Any specific service may be upgraded or discarded by the provider without much notice. Features used by a business may be enhanced, deprecated, or removed. This can force changes in other software that integrates with the service, in procedures used by staff, and in the associated documentation for those procedures.

For example, Office 365 and Google G Suite features can change without much notice. This creates a “Where did that menu option go?” experience for end users. Some providers reduce this pain by providing a quick tutorial for new features within the application itself. Others provide online learning centers that make new feature tutorials easy to discover.

Accept this risk as an unavoidable downside to rapid innovation. Where possible, manage the timing of these releases to an organization’s users, giving them advance notice of the changes along with access to tutorials.

Risk to Address #2: Dropped by provider

A risk that may not be obvious to many business leaders is that of being dropped by a cloud service provider. A business with unpopular opinions might have services revoked, sometimes with little notice. Consider how quickly the movement to boycott the NRA resulted in severed business-to-business relationships. Even an organization as large as the US Military faces this risk. As was highlighted in recent news, Google will not renew its military AI project due in large part to pressure from Google employees.

Mitigate this risk through contracts and architecture. This is perhaps one argument in favor of a hybrid on-prem plus cloud approach to the public cloud versus an all-in approach.

Risk to Address #3: Unpredictable costs

It can be difficult to predict the costs of running workloads in the public cloud, and these costs can change rapidly. Address this risk by setting cost thresholds that trigger an alert. Consider subscribing to a service such as Nutanix Beam to gain granular visibility into and optimization of public cloud costs.
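
As a hedged sketch of what such a cost threshold can look like on AWS, the snippet below creates a CloudWatch alarm on the account’s estimated monthly charges using boto3. The $10,000 threshold, the SNS topic ARN, and the six-hour evaluation period are placeholder assumptions, and AWS only publishes billing metrics in us-east-1 after billing alerts are enabled for the account.

```python
# A minimal sketch of a cost-threshold alert on AWS: a CloudWatch alarm on
# the account's EstimatedCharges metric. Billing metrics must be enabled
# for the account and are published only in us-east-1. The threshold and
# SNS topic ARN below are placeholders.
import boto3

cloudwatch = boto3.client('cloudwatch', region_name='us-east-1')

cloudwatch.put_metric_alarm(
    AlarmName='monthly-spend-over-10k',
    Namespace='AWS/Billing',
    MetricName='EstimatedCharges',
    Dimensions=[{'Name': 'Currency', 'Value': 'USD'}],
    Statistic='Maximum',
    Period=21600,                  # evaluate every six hours
    EvaluationPeriods=1,
    Threshold=10000.0,             # alert once estimated charges exceed $10,000
    ComparisonOperator='GreaterThanThreshold',
    AlarmActions=['arn:aws:sns:us-east-1:123456789012:billing-alerts']
)
```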

It’s time to get real about the public cloud

Many businesses are ready to embrace the public cloud. IT departments should make themselves aware of services that may create value for their business. They should also work through the implications of moving corporate data and workloads to the cloud, and make plans for managing the attendant risks.




Two Insights into Why Enterprises are Finally Embracing Public Cloud Computing

In between my travels, doing research, and taking some time off in May, I also spent time getting up to speed on Amazon Web Services by studying for the AWS Certified Solutions Architect – Associate exam in anticipation of DCIG doing more public cloud-focused competitive research. While I know it is no secret that cloud adoption has taken off in recent years, what has puzzled me during this time is, “Why is it only now that enterprises have finally started to embrace public cloud computing?”

From my first days as an IT user I believed that all organizations would eventually embrace cloud computing in some form. That belief was further reinforced as I came to understand virtualization and its various forms (compute, network, and storage). But what has perplexed me to one degree or another ever since then is why enterprises have not more fully invested in these various types of virtualization and embraced the overall concept of cloud computing sooner.

While there are various reasons for this, I sense the biggest reason is that most organizations view IT as a cost center. Granted, they see the value that IT has brought and continues to bring to their business. However, most organizations do not necessarily want to provide technology services. They would rather look to others to provide the IT technologies that they need and then consume them when they are sufficiently robust and mature for their needs.

Of course, establishing exactly when a technology satisfies these conditions varies for each industry. Some might rightfully argue that cloud computing has been around for a decade or more and that many organizations already use it.

But using public cloud computing for test, development, or even some limited production deployments within an organization is one thing. Making public cloud computing the preferred or even the only choice for hosting new and existing applications is quite another. When this change in policy occurs within an enterprise, one can say that the enterprise has embraced public cloud computing. To date, only relatively few enterprises have embraced cloud computing at scale, but I recently ran across two charts that help explain why this is changing.

The first chart I ran across was in one of the training videos I watched. This video included a graphic that showed the number of new service announcements and updates that AWS made each year from 2011-2017.

Source: A Cloud Guru

It was when I saw the amount of innovation and change that has occurred in the past three years at AWS that I got a better understanding of why enterprises have started to embrace cloud computing at scale. Based on these numbers, AWS made nearly five service announcements and/or updates every business day of 2017.

Many businesses would consider themselves fortunate to make five changes every month, much less every day. But this level of innovation and change also explains why public cloud providers are pulling away from traditional data centers in terms of the capabilities they can offer. It also explains why enterprises can have more confidence in public cloud providers and move more of their production applications there. This level of innovation also communicates the high degrees of stability and maturity that enterprises often prioritize.

The other chart brought to my attention is found on Microsoft’s website and offers a side-by-side comparison of Microsoft Azure to AWS. It provides a high-level overview of the offerings from both providers and how their respective offerings compare and contrast.

Most notable about this chart is that it means organizations have another competitive cloud computing offering available from a large, stable provider. In this way, as an enterprise embraces the idea of cloud computing in general and chooses a specific provider of these services, it can do so with the knowledge that it has a viable secondary option should the initial provider become too expensive, change offerings, or withdraw an offering that it currently uses or plans to use.

Traditional enterprise data centers are not going away. However, as evidenced by the multitude of enhancements that AWS, Microsoft Azure, and others have made in the past few years, their cloud offerings surpass the levels of auditing, flexibility, innovation, maturity, and security found in many corporate data centers. These capabilities, coupled with organizations having multiple cloud providers from which to choose, provide insight into why enterprises are lowering their resistance to adopting public cloud computing and embracing it more wholeheartedly.




Amazon AWS, Google Cloud, Microsoft Azure and … now Nutanix Xi Cloud Services?!

Amazon, Google, and Microsoft have staked their claims as the Big 3 providers of enterprise cloud services with their respective AWS, Cloud, and Azure offerings. Enter Nutanix. It has from Day 1 sought to emulate AWS with its on-premise cloud offering. But with the announcements made at its .NEXT conference last week in New Orleans, companies can look for Nutanix to deliver cloud services both on- and off-premise that should fundamentally change how enterprises view Nutanix going forward.

There is little dispute that Amazon AWS is the unquestioned leader in cloud services, with Microsoft, Google, and IBM possessing viable offerings in this space. Yet where each of these providers still tends to fall short is in addressing enterprise needs to maintain a hybrid cloud environment for the foreseeable future.

Clearly most enterprises want to incorporate public cloud offerings into their overall corporate data center design and, by Nutanix’s own admission, the adoption of the public cloud is only beginning in North America. But already there is evidence from early adopters of the cloud that the costs associated with maintaining all their applications with public cloud providers outweigh the benefits. However, these same enterprises are hesitant to bring these applications back on-premise because they like, and I daresay have even become addicted to, the ease of managing applications and data that the cloud provides them.

This is where Nutanix, at its recent .NEXT conference, made a strong case for becoming the next cloud solution on which enterprises should place their bets. Three technologies it announced during this conference particularly stood out to me as evidence that Nutanix is doing more than bringing another robust cloud offering to market; it is addressing the nagging enterprise need for a cloud solution that can be managed in the same way on-premise and off. Consider:

1. Beam takes the mystery out of where all the money and data in the cloud has gone. A story I repeatedly hear is how complicated billing statements are from AWS and how easy it is for these costs to exceed corporate budgets. Another story I often hear is that it is so easy for corporate employees to get started in the cloud that they can easily run afoul of corporate governance. These stories, true or not, likely impede broader cloud adoption by many companies.

This is where Beam sheds some light on the picture. For those companies already using the cloud, Beam provides visibility into the cloud to address both the cost and data governance concerns. Since Beam is a separate, standalone product available from Nutanix, organizations can quickly gain visibility into how much money they are spending on the cloud and who is spending it, and perform audits to ensure compliance with HIPAA, ISO, PCI-DSS, CIS, NIST, and SOC-2. Organizations not already using the cloud can implement Beam in conjunction with their adoption of cloud services to monitor and manage their usage of it. Beam currently supports AWS and Azure, with support for Nutanix Xi and Google Cloud in the works.

2. Xi brings together the management of on- and off-premise clouds without compromise. Make no mistake – Nutanix’s recently announced Xi cloud services offering is not yet on the same standing as AWS, Azure, or Google Cloud. In fact, by Nutanix’s own admission Xi is “still coming” as an offering. That said, Nutanix addresses a lingering concern that persists among enterprise users: they want the same type of cloud experience on-premise and off. The Nutanix Acropolis Hypervisor (AHV), accompanied by the forthcoming Xi cloud services offering, stands poised to deliver that, giving companies the flexibility to seamlessly (relatively speaking) move applications and data between on- and off-premise locations without changing how they are managed.

3. Netsil is “listen” spelled backwards, which is just one more reason to pay attention to this technology. Every administrator’s worst nightmare is having to troubleshoot an issue in the cloud. In today’s highly virtualized, inter-dependent application world, identifying the root cause of a Sev 1 problem can make even the most ardent supporter of virtualized, serverless compute environments long for the “simpler” days of standalone servers.

Thank God solutions such as Netsil are now available. Netsil tackles the thorny issue of microsegmentation – how applications within containers, virtual machines, and physical machines communicate, interact, and wall off one another – by identifying their respective dependencies on each other. This takes much of the guesswork out of troubleshooting these environments and gives enterprises more confidence to deploy multiple applications on fewer hosts. While Netsil is “still coming” per Nutanix, this type of technology is one that enterprises should find almost a necessity, both to maximize their use of resources in the cloud and to give them peace of mind that they have tools at their disposal to solve the challenges that will inevitably arise.




Hackers Say Goodbye to Ransomware and Hello to Bitcoin Mining

Ransomware gets a lot of press – and for good reason – because when hackers break through your firewalls, encrypt your data, and make you pay up or else lose your data, it rightfully gets people’s attention. But hackers probably have less desire than most to be in the public eye, and sensationalized ransomware headlines bring them unwanted attention. That’s why some hackers have said goodbye to the uncertain payout associated with holding your data for ransom and instead look to access your servers to do some bitcoin mining using your CPUs.

A week or so ago a friend of mine who runs an Amazon Web Services (AWS) consultancy and reseller business shared a story with me about one of his clients who hosts a large SaaS platform in AWS.

His client had mentioned to him in the middle of the week that the applications on one of his test servers were running slow. While my friend was intrigued, he did not give it much thought at the time. This client was not using his managed services offering, which meant that he was not necessarily responsible for troubleshooting their performance issues.

Then the next day his client called him back and said that now all his servers hosting this application – test, dev, client acceptance, and production – were running slow. This piqued his interest, so he offered resources to help troubleshoot the issue. The client then allowed his staff to log into these servers to investigate.

Upon logging into these servers, they discovered that all the instances running at 100% CPU also ran a Drupal web application. This did not seem right, especially considering that it was early on a Saturday morning when the applications should mostly be idle.

After doing a little more digging around on each server, they discovered a mysterious multi-threaded process running on each one that was consuming all of its CPU resources. Further, the process had also opened a network port to a server located in Europe. Even more curious, the executable that launched the process had been deleted after the process started. It was as if someone was trying to cover their tracks.
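
For admins who want to hunt for this pattern themselves, the following is a hedged, Linux-oriented sketch (not the consultancy’s actual procedure) that uses the psutil package to flag busy processes whose launching executable has since been deleted and that hold open network connections. The 50% CPU threshold is an arbitrary assumption.

```python
# A minimal, Linux-oriented sketch: flag busy processes whose executable
# on disk has been deleted and that hold open network connections.
# Requires the third-party psutil package and sufficient privileges to
# inspect other users' processes.
import os
import time
import psutil

# Prime per-process CPU counters, then sample over one second.
procs = list(psutil.process_iter(['pid', 'name']))
for p in procs:
    try:
        p.cpu_percent(interval=None)
    except psutil.Error:
        pass
time.sleep(1)

for p in procs:
    try:
        cpu = p.cpu_percent(interval=None)
        if cpu < 50:               # arbitrary "suspiciously busy" threshold
            continue
        # On Linux, /proc/<pid>/exe points at "... (deleted)" if the
        # binary was removed after launch -- as in the incident described.
        exe_link = os.readlink(f'/proc/{p.pid}/exe')
        remote_peers = [c.raddr for c in p.connections(kind='inet') if c.raddr]
        if exe_link.endswith('(deleted)'):
            print(f'Suspicious: pid={p.pid} name={p.info["name"]} '
                  f'cpu={cpu:.0f}% exe={exe_link} peers={remote_peers}')
    except (psutil.Error, OSError):
        continue
```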

At this point, suspecting the servers had all been hacked, they checked to see if there were any recent security alerts. Sure enough. On March 28, 2018, Drupal issued a security advisory (SA-CORE-2018-002, later assigned CVE-2018-7600) warning that if you were not running Drupal 7.58 or Drupal 8.5.1, your servers were vulnerable to hackers who could remotely execute code on them.

However, what got my friend’s attention is that these hackers did not want his client’s data. Rather, they wanted his client’s processing power to do bitcoin mining, which is exactly what these servers had been doing for a few days on behalf of these hackers. To help the client, they killed the bitcoin mining process on each of these servers and then called the client to advise them to patch Drupal ASAP.

The story does not end there. In this case, his client did not patch Drupal quickly enough. Sometime after they killed the bitcoin mining processes, another hacker leveraged that same Drupal security flaw and performed the same hack. By the time his client came to work on Monday, there were bitcoin mining processes running on those servers that again consumed all their CPU cycles.

What they found especially interesting was how the executable file that the new hackers had installed worked. In reviewing its code, they found that the first thing it did was kill any pre-existing bitcoin mining processes started by other hackers. This freed all the CPU resources to handle the bitcoin mining processes started by the new hackers. The hackers were literally fighting each other over access to the compromised system’s resources.

Two takeaways from this story:

  1. Everyone is rightfully worried about ransomware, but bitcoin mining may not hit corporate radar screens. I doubt that hackers want the FBI, CIA, Interpol, MI6, Mossad, or any other law enforcement or intelligence agency hunting them down any more than you or I do. While hacking servers and “stealing” CPU cycles is still a crime, it probably sits much further down on the priority list of most companies as well as these agencies.

A bitcoin mining hack may go unnoticed for long periods of time, and even when noticed it may not be reported by companies or prosecuted by these agencies, because it is easy to perceive this type of hack as a victimless crime. Yet every day the hacker’s bitcoin mining processes go unnoticed and remain active, the more bitcoin the hackers earn. Further, one should assume hackers will only become more sophisticated going forward. Expect hackers to figure out how to run bitcoin mining processes that do not consume all available CPU cycles so they remain active and unnoticed for longer periods of time.

  2. Hosting your data and processes in the cloud does not protect your data and your processes against these types of attacks. AWS has utilities available to monitor and detect these rogue processes. That said, organizations still need someone to implement these tools and then monitor and manage them.

Companies may be relieved to hear that some hackers have stopped targeting their data and are instead targeting their processors to use them for bitcoin mining. However, there are no victimless crimes. Your pocketbook will still get hit in cases like this, as Amazon will bill you for using these resources.

In cases like this, if companies start to see their AWS bills going through the roof, it may not be the result of growth in their businesses. It may be that their servers have been hacked and they are paying to finance some hacker’s bitcoin mining operation. To avoid this scenario, companies should ensure they have the right internal people and processes in place to keep their applications up-to-date, to protect their infrastructure from attacks, and to monitor their infrastructure whether hosted on-premise or in the cloud.




Two Most Disruptive Storage Technologies at the NAB 2018 Show

The exhibit halls at the annual National Association of Broadcasters (NAB) show in Las Vegas always contain eye-popping displays highlighting recent technological advances as well as what is coming down the path in the world of media and entertainment. But behind NAB’s glitz and glamour lurks a hard, cold reality: every word recorded, every picture taken, and every scene filmed must be stored somewhere, usually multiple times, and be available at a moment’s notice. It was in these halls at the NAB show that DCIG identified two start-ups with storage technologies poised to disrupt business as usual.

Storbyte. Walking the floor at NAB, a tall, blond individual literally yanked me by the arm as I was walking by and asked me if I had ever heard of Storbyte. Truthfully, the answer was No. This person turned out to be Steve Groenke, Storbyte’s CEO, and what ensued was a great series of conversations while at NAB.

Storbyte has come to market with an all-flash array. However, it took a very different approach to solving the problems of longevity, availability, and sustained high write performance in SSDs and the storage systems built with them. What makes it so disruptive is that it created a product that meets the demand for extreme sustained write performance by slowing down flash, and it does so at a fraction of what other all-flash arrays cost.

In looking at today’s all-flash designs, every flash vendor is actively pursuing high-performance storage. The approach they take is to maximize the bandwidth to each SSD, which means their systems must use PCIe-attached SSDs addressed via the new NVMe protocol.

Storbyte chose to tackle the problem differently. Its initial target customers had continuous, real-time capture and analysis requirements and routinely burned through the most highly regarded enterprise-class SSDs in about seven months. Two things killed NAND flash in these environments: heat and writes.

To address this problem, Storbyte reduces heat and the number of writes that each flash module experiences by incorporating sixteen mSATA SSDs into each of its Eco*Flash SSDs. Further, Storbyte slows down the CPUs in each of the mSATA modules in its system and then wide-stripes writes across all of them. According to Storbyte, this only requires about 25% of the available CPU on each mSATA module, so the modules use less power. By also managing the writes, Storbyte simultaneously extends the life of each mSATA module in its Eco*Flash drives.
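
As a purely conceptual illustration of wide-striping (and not a representation of Storbyte’s firmware), the sketch below deals incoming write chunks round-robin across sixteen modules so that no single module absorbs the full write and heat load; the chunk size is an arbitrary assumption.

```python
# A purely conceptual sketch of wide-striping: spreading a stream of write
# chunks round-robin across many flash modules so no single module absorbs
# all of the write (and heat) load. This illustrates the general technique,
# not Storbyte's implementation.
from typing import List

CHUNK_SIZE = 4096          # bytes per stripe unit (illustrative)
MODULE_COUNT = 16          # mirrors the sixteen mSATA modules per Eco*Flash SSD

def wide_stripe(data: bytes, module_count: int = MODULE_COUNT) -> List[List[bytes]]:
    """Split data into chunks and deal them round-robin across modules."""
    queues: List[List[bytes]] = [[] for _ in range(module_count)]
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]
    for index, chunk in enumerate(chunks):
        queues[index % module_count].append(chunk)
    return queues

# Example: a 1 MB write lands as 16 chunks on each of the 16 modules,
# so per-module write traffic (and wear) is 1/16th of the incoming stream.
queues = wide_stripe(b'\x00' * (1024 * 1024))
print([len(q) for q in queues])
```

Spreading writes this way is what allows each module to run slower and cooler while the array as a whole still sustains a high aggregate write rate.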

The end result is a low-cost, high-performance, very dense, power-efficient all-flash array built using flash cards that rely upon “older”, “slower”, consumer-grade mSATA flash memory modules, yet can drive 1.6 million IOPS from a 4U system. More notably, its systems cost about a quarter of what competitive “high performance” all-flash arrays cost, pack more than a petabyte of raw flash memory capacity into 4U of rack space, and use less power than almost any other all-flash array.

Wasabi. Greybeards in the storage world may recognize the Wasabi name as a provider of iSCSI SANs. Well, right name but different company. The new Wasabi recently came out of stealth mode as a low cost, high performance cloud storage provider. By low cost, we mean 1/5 of the cost of Amazon’s slowest offering (Glacier), and by high performance, we mean 6x the speed of Amazon’s highest performing S3 offering. In other words, you can have your low cost cloud storage and eat it too.

What makes its offering so compelling is that it offers storage capacity at $4.99/TB per month. That’s it. No additional egress charges every time you download files. No complicated monthly statements to decipher to figure out how much you are spending and where. No costly storage architects to hire to figure out how to tier data to optimize performance and costs. This translates into one fast cloud storage tier at a much lower cost than the Big 3 (Amazon AWS, Google Cloud, and Microsoft Azure).

Granted, Wasabi is a cloud storage provider start-up so there is an element of buyer beware. However, it is privately owned and well-funded. It is experiencing explosive growth with over 1600 customers in just its few months of operation. It anticipates raising another round of funding. It already has data centers scattered throughout the United States and around the world with more scheduled to open.

Even so, past horror stories about cloud providers shutting their doors give every company pause about using a relatively unknown quantity to store their data. In these cases, Wasabi recommends that companies use its solution as a secondary cloud.

Its cloud offering is fully S3 compatible, and most companies want a cloud alternative anyway. In this instance, store copies of your data with both Amazon and Wasabi. Once the data is stored, run any queries, production workloads, etc. against the Wasabi cloud. The Amazon egress charges that your company avoids by accessing its data on the Wasabi cloud will more than justify taking the risk of storing the data you routinely access on Wasabi. Then, in the unlikely event Wasabi does go out of business (not that it has any plans to do so), companies still have a copy of their data with Amazon that they can fail back to.

This argument seems to resonate well with prospects. While I could not substantiate these claims, Wasabi said that it was seeing multi-petabyte deals come its way on the NAB show floor. By using Wasabi instead of Amazon in the use case just described, these companies can save hundreds of thousands of dollars per month just by avoiding Amazon’s egress charges while mitigating the risk associated with using a start-up cloud provider such as Wasabi.
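
To put the egress math in rough perspective, here is a back-of-the-envelope sketch under assumed numbers: a hypothetical workload that re-reads 3 PB per month and an assumed blended AWS internet egress rate of $0.05 per GB (actual AWS pricing is tiered and changes over time, so real savings will vary).

```python
# A back-of-the-envelope illustration (not Wasabi's or Amazon's published
# numbers): what avoiding egress fees could mean for a workload that
# re-reads multiple petabytes per month. The $0.05/GB rate is an assumed
# blended AWS internet egress price circa 2018.
ASSUMED_EGRESS_PER_GB = 0.05          # USD, assumption for illustration
GB_PER_PB = 1024 ** 2                 # gibibyte-based petabyte

monthly_reads_pb = 3                  # hypothetical multi-petabyte workload
monthly_egress_cost = monthly_reads_pb * GB_PER_PB * ASSUMED_EGRESS_PER_GB
print(f'Avoided egress fees: ${monthly_egress_cost:,.0f} per month')
# -> roughly $157,000 per month under these assumptions
```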

Editor’s Note: The spelling of Storbyte was corrected on 4/24.

Bitnami