HPE Predicts Sunny Future for Cloudless Computing

Antonio Neri, CEO of HPE, declared at its Discover event last week that HPE is transforming into a consumption-driven company that will deliver “Everything as a Service” within three years. In addition, Neri put forward the larger concept of “cloudless” computing. Are these announcements a tactical response to the recent wave of public cloud adoption by enterprises, or are they something more strategic?

“Everything as a Service” is Part of a Larger Cloudless Computing Strategy

“Everything as a Service” is, in fact, part of a larger “cloudless” computing strategy that Neri put forth. Cloudless. Do we really need to add yet another term to our technology dictionaries? Yes, we probably do.

[Image: HPE CEO Antonio Neri describing cloudless computing at HPE Discover]

“Cloudless” is intentionally jarring, just like the term “serverless”. And just as “serverless” applications actually rely on servers, “cloudless” computing will still rely on public clouds. The point is not that the cloud goes away, but that it will no longer be consumed as a set of walled gardens that enterprises and applications must manage individually.

Enterprises are indeed migrating to the cloud, massively. Attractions of the cloud include flexibility, scalability of performance and capacity, access to innovation, and a pay-per-use operating cost model. But managing and optimizing a hybrid, multi-cloud estate is challenging on multiple fronts, including security, compliance, and cost.

Cloudless computing is more than a management layer on top of today’s multi-cloud environment. The cloudless future HPE envisions is one where the walls between the clouds are gone, replaced by a service mesh that provides an entirely new way of consuming and paying for resources in a truly open marketplace.
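To make the marketplace idea concrete, consider a minimal sketch, in Python, of what provider-neutral consumption might look like. Everything here is hypothetical (the provider names, prices, and broker API are invented for illustration): the application states what it needs, and a broker in the mesh chooses the provider, rather than the application targeting any single cloud’s API.

```python
from dataclasses import dataclass

# Hypothetical offers a marketplace broker might aggregate from providers.
# Names and prices are illustrative, not real quotes.
@dataclass
class Offer:
    provider: str
    resource: str        # e.g., "vcpu-hour", "gb-month"
    unit_price: float    # price per unit consumed

class MarketplaceBroker:
    """Toy broker: the application states what it needs;
    the broker, not the application, chooses the cloud."""
    def __init__(self, offers):
        self.offers = offers

    def provision(self, resource: str, units: float) -> dict:
        candidates = [o for o in self.offers if o.resource == resource]
        if not candidates:
            raise LookupError(f"no provider offers {resource}")
        best = min(candidates, key=lambda o: o.unit_price)
        return {"provider": best.provider,
                "resource": resource,
                "units": units,
                "cost": round(units * best.unit_price, 2)}

broker = MarketplaceBroker([
    Offer("cloud-a", "vcpu-hour", 0.045),
    Offer("cloud-b", "vcpu-hour", 0.039),
    Offer("cloud-a", "gb-month", 0.021),
])
print(broker.provision("vcpu-hour", 200))
# {'provider': 'cloud-b', 'resource': 'vcpu-hour', 'units': 200, 'cost': 7.8}
```

In a true open marketplace, the broker’s choice could also weigh compliance, locality, and performance, not just price; the point is that the selection logic moves out of each application and into the mesh.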

Insecure Infrastructure is a Barrier to a Cloudless Future

Insecure infrastructure is a huge issue. We recently learned that more than a dozen of the largest global telecom firms were compromised for as long as seven years without knowing it. This was more than a successful spearphishing expedition: bad actors compromised the infrastructure at a deeper level. In light of such revelations, how can we safely move toward a cloudless future?

Foundations of a Cloudless Future

Trust based on zero trust. The trust fabric is really about confidence: confidence that the infrastructure is secure. HPE has long participated in the Trusted Computing Group (TCG), developing open standards for hardware-based root of trust technology and the creation of interoperable trusted computing platforms. HPE calls the result its “silicon root of trust” technology, which is incorporated into HPE ProLiant Gen10 servers.
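To see the idea behind a hardware root of trust (a toy Python illustration, not HPE’s actual implementation), consider how a TPM-style “extend” operation chains boot-stage measurements together: tamper with any stage, and the final register value no longer matches the known-good one.

```python
import hashlib

def extend(register: bytes, measurement: bytes) -> bytes:
    # TPM-style extend: new = H(old || H(component)),
    # so the register depends on every stage measured so far.
    return hashlib.sha256(register + hashlib.sha256(measurement).digest()).digest()

# Hypothetical boot stages measured in order by a silicon root of trust.
boot_chain = [b"immutable-boot-rom", b"uefi-firmware", b"bootloader", b"os-kernel"]

register = bytes(32)  # starts at all zeros, like a TPM PCR
for stage in boot_chain:
    register = extend(register, stage)
expected = register  # recorded once from a known-good boot

# A later boot with tampered firmware yields a different register value.
register = bytes(32)
for stage in [b"immutable-boot-rom", b"tampered-firmware", b"bootloader", b"os-kernel"]:
    register = extend(register, stage)
print("trusted boot" if register == expected else "attestation failed")
# attestation failed
```

Because the first measurement is taken by immutable silicon rather than by software, even firmware-level compromises of the kind described above become detectable.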

Memory-driven computing. Memory-driven computing will be important to cloudless computing because real-time integration of supply chain, customer, and financial data depends on it.

Instrumented infrastructure. Providers of services in the mesh must have an instrumented infrastructure. Providers will use the machine data in multiple ways, including analytics, automation, and billing. After all, you have to see it in order to measure it, manage it, and bill for it.

Infrastructure providers have created multiple ways to instrument their systems. Lenovo TruScale measures and bills based on power consumption. HPE uses embedded instrumentation and the resulting machine data for predictive analytics (HPE InfoSight), billing (HPE GreenLake), and cost optimization (HPE Consumption Analytics Portal).
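As a rough sketch of how instrumented machine data can drive consumption billing (the readings, rate, and formula below are invented for illustration; they are not TruScale’s or GreenLake’s actual models):

```python
# Hypothetical hourly power readings (watts) reported by an instrumented server.
hourly_watts = [310, 295, 420, 388, 350, 305]

RATE_PER_KWH = 0.42  # illustrative billing rate, not a real price

# The same telemetry stream feeds multiple consumers: billing here,
# but also analytics (trends) and automation (alerts on anomalies).
kwh = sum(hourly_watts) / 1000  # one reading per hour -> watt-hours -> kWh
invoice = kwh * RATE_PER_KWH
peak = max(hourly_watts)

print(f"consumed {kwh:.3f} kWh, invoice ${invoice:.2f}, peak draw {peak} W")
```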

Cloudless Computing Coming Next Year

HPE is well positioned to deliver on the “everything as a service” commitment. It has secure hardware. It has memory-driven composable infrastructure. It has an instrumented infrastructure across the entire enterprise stack. It has InfoSight analytics. It has consumption analytics. It has its Pointnext services group.

However, achieving the larger vision of a cloudless future will involve tearing down some walls, with participation from a wide range of industry players. Neri acknowledged the challenges, yet promised that HPE will deliver cloudless computing just one year from now. Stay tuned.

Time: The Secret Ingredient behind an Effective AI or ML Product

In 2019, the level of interest that companies expressed in using artificial intelligence (AI) and machine learning (ML) exploded. Their interest is justifiable. These technologies gather the almost endless streams of data coming off the scads of devices that companies deploy everywhere, analyze it, and turn it into useful information. But time is the secret ingredient that companies must account for as they select an effective AI or ML product.

Data Collection Must Precede AI and ML

The premise behind the deployment of AI and ML technologies is sound. Every device that a company deploys, in whatever form it takes (video camera, storage array, server, network switch, automatic door opener, whatever), has some type of software on it. This software serves two purposes:

  1. Operating the device
  2. Gathering data about the device’s operations, health, and potentially even the environment in which it operates

Purpose 1 initially drove the development and deployment of the device’s software, while Purpose 2 was sometimes characterized as a necessary evil: a way to identify and resolve issues with the device before operations were impacted. But with more devices Internet-enabled, the data each device gathers no longer needs to remain stranded on the device. It can be centralized.

Devices can now send their data to a central data repository, often hosted and supported by the device manufacturer, though companies can do this data collection and aggregation on their own.
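In a minimal sketch, a device agent might report telemetry like this (the endpoint and payload shape are made up; real agents vary by manufacturer):

```python
import json
import time
import urllib.request

# Hypothetical central repository endpoint; not a real service.
REPOSITORY_URL = "https://telemetry.example.com/ingest"

def report_telemetry(device_id: str, metrics: dict) -> None:
    """Send one telemetry sample from a device to the central repository."""
    payload = json.dumps({
        "device_id": device_id,
        "timestamp": time.time(),
        "metrics": metrics,   # e.g., temperatures, error counts, latency
    }).encode("utf-8")
    request = urllib.request.Request(
        REPOSITORY_URL, data=payload,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(request, timeout=5)

report_telemetry("array-0042", {"read_latency_ms": 1.8, "media_errors": 0})
```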

This is where AI and ML come into the picture. Once the data is collected, manufacturers use AI or ML software to analyze the aggregated data. This analysis can reveal broader trends and patterns that would be undetectable if the data remained siloed on individual devices.
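A toy example, with invented numbers, shows why: each device’s daily error count looks like noise in isolation, but grouping the aggregated fleet data by firmware version exposes a problem release.

```python
from collections import defaultdict
from statistics import mean

# Invented aggregated telemetry: (device_id, firmware_version, daily_error_count)
fleet = [
    ("dev-01", "4.2.1", 1), ("dev-02", "4.2.1", 0), ("dev-03", "4.2.1", 2),
    ("dev-04", "4.3.0", 9), ("dev-05", "4.3.0", 11), ("dev-06", "4.3.0", 8),
]

errors_by_firmware = defaultdict(list)
for _, firmware, errors in fleet:
    errors_by_firmware[firmware].append(errors)

# Each device's count alone looks like noise; the grouped averages do not.
for firmware, counts in sorted(errors_by_firmware.items()):
    print(f"firmware {firmware}: mean daily errors {mean(counts):.1f}")
# firmware 4.2.1: mean daily errors 1.0
# firmware 4.3.0: mean daily errors 9.3
```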

Only Time Can Deliver an Effective AI or ML Strategy

But here is the key to choosing a product that truly delivers on AI and ML: the value these technologies bring relies upon having devices deployed and in production in the field for some time. New vendors, new products, and even new deployments, even when they offer AI and ML features, may not provide meaningful insights until they have collected and analyzed a large amount of data from those devices. This can take months or perhaps even years.

Only after data is collected and analyzed does the full value of AI or ML technologies become evident. Initially, they may help anticipate and prevent some issues. But their effectiveness at anticipating and predicting issues will be limited until they have months’ or years’ worth of data at their disposal to analyze.
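One way to picture this is a learning curve: train the same failure-prediction model on progressively longer histories and watch its holdout accuracy change. The sketch below uses scikit-learn on synthetic data purely for illustration; in practice, the telemetry and failure labels come from the installed base.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in for years of telemetry: 2 features per sample,
# label 1 = device later failed. Real data would come from the field.
n_total = 5000
X = rng.normal(size=(n_total, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n_total) > 1).astype(int)

X_test, y_test = X[-1000:], y[-1000:]  # fixed holdout, never trained on
for label, n in [("3 months", 50), ("1 year", 400), ("3 years", 4000)]:
    model = LogisticRegression().fit(X[:n], y[:n])
    acc = accuracy_score(y_test, model.predict(X_test))
    print(f"trained on {label} of data -> holdout accuracy {acc:.2f}")
```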

Evidence of this can be seen at companies such as HPE Nimble and Unitrends, among others. Each has improved its ability to support its clients and resolve issues before companies even know they have them. For example, HPE Nimble and Unitrends each use their respective technologies to identify and resolve many hardware issues before those issues impact production.

In each case, the provider needed to collect a great deal of data over multiple years and analyze it before it could proactively and confidently take the appropriate actions to predict and resolve specific issues.

This element of time gives manufacturers who offer AI and ML and already have large numbers of devices in the field a substantial head start in the race to lead in AI and ML. Those just deploying these technologies will still need to gather data over some period of time from multiple data points before they can provide the broad analytics that companies need and are coming to expect.
