
SMBs Get Control Over Their Virtualized Environments; Interview with Scale Computing Part V

Companies of all sizes are implementing virtualization. But as they do, small and midsized businesses (SMBs) in particular are finding they are ill-equipped to deal with the levels of complexity it introduces. In this fifth and final part of my interview series with Scale Computing’s Global Solution Architect, Alan Conboy, and its EVP and GM, Patrick Conte, Alan discusses what Scale Computing has done to make its HC3 simple for SMBs to introduce and manage virtualization at all layers in their environment.
Jerome: Alan, you have made it plain that the Scale HC3 is easy to grow. So how does it make it simple for SMBs to manage virtualization once they have introduced it into their environment?
Alan: When an SMB looks at virtualization, they want to implement it for a series of very simple reasons.

  • They need an architecture that is easier to manage and more cost effective than dedicated physical servers
  • They want an architecture that allows for the concepts of DR and high availability
  • They want an architecture that abstracts applications away from hardware failure

Scale Computing has taken the hypervisor layer which traditionally has had to run on external servers and moved it into a purpose-built architecture. By pulling the hypervisor in as a kernel module running as a single stream, a single stack, within the running kernel on every node, Scale has achieved something nobody else has: a one-to-one relationship between IOPS and IO.
This is where it gets very interesting. If you are already running one of our existing M series clusters, you get all of this functionality at no additional cost. It’s just a firmware update, as we have removed all of the intervening layers.
Now you can point a browser at any node in the cluster, log in, and you are presented immediately with all facets of your data center in action. From this console, one gets a current and historical view of load levels and capacity utilization, event history, what is running where from a compute perspective, and storage utilization.
Important from a management perspective, spinning up a new virtual machine becomes a trivial exercise. Choose how much storage, compute and RAM you need for the new VM and then hit next. You may recall that Scale has NAS functionality built in, so one of the tasks it performs by default is creating a share called ISO that maps directly to your desktop. With that, the virtual machine has been created.
What this realistically means for an SMB is that they now have control over all of the activities in their compute environment: they can identify what is running where, quickly load balance and rebalance, and manage all of their storage protocols.

In part I of this interview series, we examined how complexity in midmarket IT solutions is driving the need for a hyper converged infrastructure.

In part II of this interview series, we discuss how Scale Computing is positioned to meet the specific needs of small and midsized businesses.
In part III of this interview series, we discuss how Scale Computing drives out costs in its scale-out architecture.
In part IV of this interview series, we discuss how Scale Computing delivers the high levels of availability that nearly every SMB seeks in its computing environment.
