Every organization wants more storage capacity at a lower cost per GB. What makes them nervous is the growing risk of data loss when a hard disk drive (HDD) fails in their virtualized environment. In this fourth part of my interview series with Scale Computing’s Global Solution Architect, Alan Conboy, and its EVP and GM, Patrick Conte, Alan explains what Scale Computing has done to eliminate the exposure window associated with failed HDDs.
Jerome: A growing concern with all of today’s disk-based storage architectures is the rebuild time associated with ever-larger hard disk drives. What is Scale Computing doing on that front to mitigate rebuild times, the risk of another disk failing during a rebuild, and the performance overhead associated with doing a disk rebuild?
Alan: Scale has taken what it considers a very elegant approach to addressing this concern. In a four-node cluster, every block of data that happens to be on Drive 3 of Node 3 has a mirrored, active copy spread across 12 spindles that are not on that node. If that drive drops out, Scale is still talking in full parallel to those other drives without the overhead of parity calculations.
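To make that geometry concrete, here is a minimal Python sketch of how mirroring (rather than parity) spreads copies across a cluster. It assumes a four-node cluster with four drives per node and a random placement policy; it is an illustration of the concept, not Scale’s actual placement algorithm.

```python
import random
from collections import defaultdict

NODES = 4            # assumed: four-node cluster
DRIVES_PER_NODE = 4  # assumed: four spindles per node (16 total)

def place_mirror(primary_node, rng):
    """Place the mirror copy on any drive that is NOT on the primary's node."""
    other_nodes = [n for n in range(NODES) if n != primary_node]
    return rng.choice(other_nodes), rng.randrange(DRIVES_PER_NODE)

rng = random.Random(42)
blocks = []
for _ in range(10_000):
    primary = (rng.randrange(NODES), rng.randrange(DRIVES_PER_NODE))
    blocks.append((primary, place_mirror(primary[0], rng)))

# Simulate losing one drive (say, the third drive on the third node).
failed = (2, 2)
survivors = defaultdict(int)
for primary, mirror in blocks:
    if primary == failed:
        survivors[mirror] += 1   # this spindle still holds a live copy

print(f"Mirrors of the failed drive's data live on {len(survivors)} spindles")
```

Because no mirror is ever placed on the primary’s own node, the surviving copies end up spread across the 12 spindles on the other three nodes, which is what lets the rebuild read from all of them in parallel.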
Scale uses a file system that is data aware, so it does not have to recreate white space. We measured how long it took to recover a failed drive under a 500-user Exchange 2010 load: only 23 minutes to full redundancy, with no impact whatsoever on performance.
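As a rough back-of-the-envelope illustration of why data awareness matters, the numbers below are assumptions of mine (drive size, data actually in use, rebuild throughput), not Scale’s published figures: a data-aware rebuild only re-mirrors the blocks that hold data, while a traditional rebuild reconstructs every sector of the raw drive.

```python
# Assumed numbers for illustration only -- not Scale's published figures.
DRIVE_CAPACITY_GB = 2000       # raw size of the failed drive
USED_DATA_GB = 300             # data actually living on that drive
REBUILD_THROUGHPUT_MBPS = 250  # aggregate rebuild throughput in MB/s

def rebuild_minutes(gigabytes: float, throughput_mbps: float) -> float:
    """Time to copy the given amount of data at the given throughput."""
    return gigabytes * 1024 / throughput_mbps / 60

print(f"Full-drive rebuild: {rebuild_minutes(DRIVE_CAPACITY_GB, REBUILD_THROUGHPUT_MBPS):.0f} min")
print(f"Data-aware rebuild: {rebuild_minutes(USED_DATA_GB, REBUILD_THROUGHPUT_MBPS):.0f} min")
```

Under these assumed numbers, rebuilding only the used blocks takes roughly a fifth of the time of a full-drive rebuild, which is the effect Alan is describing.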
Scale has taken that huge exposure window that exists in all SMBs and collapsed it down to virtually nothing. This concept also applies to losing an entire node in the Scale cluster. Every node is aware of every other node through the state machine, so should a node go away for whatever reason, the remaining nodes are instantly aware of it.
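That membership idea can be pictured with a simple heartbeat model. The sketch below is a generic illustration, not Scale’s state machine: each node tracks when it last heard from each peer and flags any peer whose heartbeat has gone stale.

```python
import time

HEARTBEAT_TIMEOUT = 3.0  # assumed: seconds of silence before a peer is declared gone

class MembershipView:
    """Toy cluster-membership tracker: one instance lives on every node."""
    def __init__(self, peers):
        now = time.monotonic()
        self.last_seen = {peer: now for peer in peers}

    def heartbeat(self, peer):
        """Record that we just heard from this peer."""
        self.last_seen[peer] = time.monotonic()

    def missing_peers(self):
        """Peers that have been silent longer than the timeout."""
        now = time.monotonic()
        return [p for p, seen in self.last_seen.items()
                if now - seen > HEARTBEAT_TIMEOUT]

view = MembershipView(["node1", "node2", "node3", "node4"])
view.heartbeat("node2")
# If node3 stops sending heartbeats, every other node's view will soon
# report it in missing_peers() and recovery can begin immediately.
```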
Jerome: That is pretty powerful. That technology also has application in non-disruptively introducing faster processors and more capacity into the cluster, does it not?
Alan: Correct. Every 18 months, Intel, Western Digital, Seagate and others introduce newer, faster, more powerful cores, bigger platters that spin faster, and so on, even as prices remain constant. That’s Moore’s Law.
To keep up, every 18 months or so Scale comes out with new nodes based on the technology growth that is taking place. This means that when you need to grow with the HC3X platform, Scale leverages its inherent two-way replication of data. An organization can take the new node out of the box, stick it in the rack, cable it up, enter the appropriate IP addresses, and hit the “Join Cluster” button. The whole cluster gets extended: the capacity, the CPU, and the RAM associated with that new node are immediately added to the pool and made available.
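Here is a toy model of what “added to the pool” means in practice. The Node and Cluster classes and the node sizes are made up for illustration; the point is simply that a joined node’s CPU, RAM, and capacity immediately widen the shared pool.

```python
from dataclasses import dataclass

@dataclass
class Node:
    ip: str
    cores: int
    ram_gb: int
    capacity_tb: float

class Cluster:
    """Toy model: a joined node's resources immediately widen the shared pool."""
    def __init__(self, nodes):
        self.nodes = list(nodes)

    def join(self, node: Node):
        self.nodes.append(node)

    @property
    def pool(self):
        return {
            "cores": sum(n.cores for n in self.nodes),
            "ram_gb": sum(n.ram_gb for n in self.nodes),
            "capacity_tb": sum(n.capacity_tb for n in self.nodes),
        }

cluster = Cluster(Node(f"10.0.0.{i}", cores=8, ram_gb=64, capacity_tb=8.0)
                  for i in range(1, 4))
print("before:", cluster.pool)
cluster.join(Node("10.0.0.4", cores=16, ram_gb=128, capacity_tb=16.0))  # newer, beefier node
print("after: ", cluster.pool)
```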
In Part I of this interview series, we examined how complexity in midmarket IT solutions is driving the need for a hyper-converged infrastructure.
In Part II of this interview series, we discuss how Scale Computing is positioned to meet the specific needs of small and midsized businesses.
In Part III of this interview series, we discuss how Scale Computing drives out costs in its scale-out architecture.
In the fifth and final part of this interview series, I conclude my conversation with Scale Computing as we discuss how Scale makes it easier for SMBs to manage their virtualization deployment once it is in place.