
Driving Down Costs in Scale-out Computing Architectures; Interview with Scale Computing Part III

Scale-out architectures sound great on the surface to small and midsized businesses (SMBs). These systems offer non-disruptive operation along with ease of scaling and management. However, the cost of these systems may be anything but easy on the SMB pocketbook. In this third part of my interview series with Scale Computing’s Global Solution Architect, Alan Conboy, and its EVP and GM, Patrick Conte, we discuss what Scale Computing has done to drive out costs in its scale-out HC3 solution.

Jerome: One of the challenges with scale-out solutions is making them cost effective for SMBs. What does Scale Computing offer on that front to help out those who may be cash-strapped?

Patrick: Scale does a combination of things. The Scale HC3 includes native replication, snapshots, and the kinds of features that you would expect from a fully featured SAN/NAS scale-out solution, which you evaluated in your DCIG 2011 Scale-Out Storage Buyer’s Guide. Those feature sets all exist on the Scale R and M series platforms. While their features have improved and been extended since that report came out, what you see with Scale is an extension of those capabilities based on clustering.

Scale Computing is all about leveraging the capabilities of an active-active cluster that can act as a delivery mechanism to SMBs for storage protocols, compute cycles and virtualization. But Scale does it in a way specific to the SMB space.

It is easy to grow. It is easy to manage. It inherently takes advantage of Moore’s law. It is highly fault tolerant. If you lose any component at any point in time everything just keeps going. In essence, we bring the concepts of Lego bricks, if you will, to the data center.

Every node in the Scale cluster starts life as a commodity x64 system. Every node contributes 8 or 12 logical cores. Every node contributes 32 or 64 gigabytes of RAM. Every node contains a quad network interface card, either 1Gb or 10Gb. Every node has dual redundant hot-swap power supplies. Every node contributes four HDD spindles which are user selectable: they may be 7200, 10K or 15K RPM.
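Because every node contributes the same classes of resources, the cluster's total capacity is just the sum over its members. A minimal sketch of that aggregation (the `Node` class and field values here are illustrative, not Scale's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Node:
    """One commodity x64 node in a hypothetical HC3-style cluster."""
    cores: int          # 8 or 12 logical cores
    ram_gb: int         # 32 or 64 GB of RAM
    spindles: int = 4   # four user-selectable HDDs per node

def cluster_totals(nodes):
    """Aggregate the resources every node contributes to the cluster."""
    return {
        "cores": sum(n.cores for n in nodes),
        "ram_gb": sum(n.ram_gb for n in nodes),
        "spindles": sum(n.spindles for n in nodes),
    }

# A three-node starter cluster of 8-core / 32GB nodes:
totals = cluster_totals([Node(8, 32), Node(8, 32), Node(8, 32)])
print(totals)  # {'cores': 24, 'ram_gb': 96, 'spindles': 12}
```

Growing the cluster is then just appending another `Node` to the list, which is the Lego-brick model Patrick describes.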

Now when you stand up a Scale cluster, Scale performs the following tasks internally:

1. It creates the logical cluster entity. Scale pulls the resources together and creates that single logical entity from the multiple physical boxes.
2. It maps a DME and file system straight from the computing resources across all drives in all nodes of the cluster.
3. It creates a single global namespace, metadata managed. This is in turn mounted on every node in the cluster the same way in the same location.
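The three steps above can be sketched in a few lines of Python. Everything here (function names, the four-drive assumption, the mount path) is hypothetical shorthand for the process Patrick describes, not Scale's actual internals:

```python
def form_cluster(nodes, drives_per_node=4):
    """Illustrative sketch of the three cluster-setup steps described above."""
    # 1. Create the single logical cluster entity from the physical boxes.
    cluster = {"nodes": list(nodes)}
    # 2. Map a file system across all drives in all nodes of the cluster.
    cluster["drives"] = [(node, drive)
                        for node in nodes
                        for drive in range(drives_per_node)]
    # 3. Create one metadata-managed global namespace, mounted identically
    #    on every node (modeled here as a single shared mount path).
    cluster["namespace_mount"] = "/mnt/scale"  # same path on every node
    return cluster

cluster = form_cluster(["node1", "node2", "node3"])
print(len(cluster["drives"]))  # 12 drives backing one namespace
```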

This has some inherent advantages. First and foremost, it aggregates the throughput and IOPS capabilities of all of the drives and all of the nodes together into the greater whole of the cluster, while simultaneously turning every node in the cluster into yet another path to any data object that you create.

Jerome: So you spray all network and storage traffic across all available resources in your cluster all of the time?

Patrick: We absolutely federate all of the workloads together. Scale has taken the time to do something that physically cannot be done with VMware ESX and cannot realistically be accomplished with Hyper-V, even with Failover Cluster Manager. Scale is delivering this functionality from an appliance perspective. In other words, Scale is coding to specific sets of hardware. Taking this approach means we do not have to compromise.

This technique has developed into a state machine that Scale runs on every node in its cluster. The state machine keeps track, from the user space perspective all the way down to the discrete chipset and connection level, of what that node and every other node in the cluster is up to.

This information is all mapped over the back end using unrouted Layer 2 (1Gb or 10Gb, VLANs, physical switching, redundant physical switching, whatever; it is entirely up to the end user). Node One knows exactly what the state of the various chipsets in Node Four is. Node Three knows exactly who is connected to what through Node Two. Every node acts as a load sharing point, a failover or load balancing point, if you will, for every other node in the cluster in full parallel.
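A toy model of that per-node state exchange might look like the following. The class and the `broadcast` helper are hypothetical; the point is simply that after an exchange over the back-end network, every node holds the last known state of every peer:

```python
class NodeState:
    """Toy model of the per-node state machine described above."""
    def __init__(self, name):
        self.name = name
        self.local = {"chipset": "ok", "connections": []}
        self.peers = {}  # peer name -> last known state of that node

def broadcast(nodes):
    """Over the unrouted Layer-2 back end, each node learns every peer's state."""
    for node in nodes:
        for peer in nodes:
            if peer is not node:
                node.peers[peer.name] = dict(peer.local)

nodes = [NodeState(f"node{i}") for i in range(1, 5)]
nodes[3].local["chipset"] = "degraded"   # a chipset fault on Node Four
broadcast(nodes)

# Node One now knows the chipset state on Node Four:
print(nodes[0].peers["node4"]["chipset"])  # degraded
```

With every node holding the full cluster picture, any node can serve as a failover or load-balancing point for any other, which is the property Patrick describes.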

This is where things start to get interesting. Inside that global namespace every single block of data that is committed to the cluster, regardless of whether the data is originating from outside the cluster and coming in through a storage protocol, or originating on the cluster through a virtual machine, is committed to a minimum of two separate drives on two separate nodes in the cluster. This essentially turns that global namespace into a cluster-wide, cluster-based, RAID-10 style implementation.
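The two-copy placement rule can be sketched as a small function. The placement scheme below (modulo arithmetic over a node list) is a simplified stand-in for whatever Scale actually does; the invariant it preserves is the one Patrick states, namely that the two copies of a block never share a node:

```python
def place_block(block_id, nodes, drives_per_node=4):
    """Pick two drives on two *different* nodes for each committed block,
    a RAID-10-style mirroring sketch (hypothetical placement scheme)."""
    # Deterministically choose the primary node from the block id...
    primary = block_id % len(nodes)
    # ...and mirror to the next node so the copies never share a node.
    mirror = (primary + 1) % len(nodes)
    drive = block_id % drives_per_node
    return [(nodes[primary], drive), (nodes[mirror], drive)]

replicas = place_block(7, ["node1", "node2", "node3"])
print(replicas)
# The two copies always land on two separate nodes:
assert replicas[0][0] != replicas[1][0]
```

Losing any single drive, or any single node, therefore leaves at least one intact copy of every block elsewhere in the cluster.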

In part I of this interview series, we examined how complexity in midmarket IT solutions is driving the need for a hyperconverged infrastructure.

In part II of this interview series, we discuss how Scale Computing is positioned to meet the specific needs of small and midsized businesses.

In part IV of this interview series, we discuss how Scale Computing delivers the high levels of availability that nearly every SMB seeks in its computing environment.
