Over the last twelve months, a trend has emerged toward implementing flash drives as a new tier of memory. Driven by the lower cost of flash relative to RAM, plus the growing realization that not all of an application's data requires the performance boost flash provides, more organizations are looking to deploy flash in this role. But as more solid state drive (SSD) manufacturers try to fit their SSDs into this new use case, the trick for users is figuring out which product architecture is the best fit.
Four main factors are driving the adoption of flash as a new memory tier:
- DRAM costs about 9x more than flash
- SSDs cost roughly 10x more than HDDs
- Only 5-20% of application data is sufficiently active to benefit from flash's performance
- Deploying a mix of DRAM, SSD and SATA HDDs can be more economical and perform better than a mix of DRAM with FC and SATA HDDs
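The economics above can be sketched with a back-of-envelope cost model. All of the $/GB figures and capacity sizes below are illustrative assumptions, not vendor pricing; only the ratios mirror the article's claims (DRAM roughly 9x flash, flash roughly 10x SATA HDD). The premise that the FC configuration needs more DRAM to match performance is likewise an assumption for illustration.

```python
# Assumed, illustrative prices ($/GB) -- ratios follow the article's claims.
PRICE_PER_GB = {
    "dram": 45.0,      # 9x the price of flash
    "flash": 5.0,      # 10x the price of SATA HDD
    "fc_hdd": 1.5,     # assumed 15K FC drive
    "sata_hdd": 0.5,
}

def tier_cost(gb_by_tier):
    """Total cost of a capacity mix expressed as {tier: GB}."""
    return sum(PRICE_PER_GB[tier] * gb for tier, gb in gb_by_tier.items())

total_gb = 10_000            # total data set (assumed)
hot_gb = total_gb * 0.10     # only 5-20% of data is active; assume 10%
cold_gb = total_gb - hot_gb

# Mix A: small DRAM cache + flash tier for hot data + SATA for the rest
flash_mix = tier_cost({"dram": 64, "flash": hot_gb, "sata_hdd": cold_gb})

# Mix B: no flash tier, so (assumed) more DRAM plus FC HDDs for hot data
fc_mix = tier_cost({"dram": 512, "fc_hdd": hot_gb, "sata_hdd": cold_gb})

print(f"DRAM+flash+SATA: ${flash_mix:,.0f}")
print(f"DRAM+FC+SATA:    ${fc_mix:,.0f}")
```

Under these assumptions the flash-tiered mix comes out well ahead, because a modest amount of flash substitutes for a much larger (and far more expensive) DRAM footprint.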
For these business and technical reasons, implementing SSDs as a new tier of memory is resonating with users. However, what is not so obvious is which SSD architecture is the right choice for use as a memory tier, since not all SSDs are architected the same way.
SSD architectures can be broadly classified in two ways:
- Storage controller approach
- Memory controller approach
The storage controller approach implements SSDs so they look and function like HDDs. While I have previously illustrated some of the risks of implementing flash in this way, the two main problems are the added cost of constructing the SSD and the new risks posed by soft errors that can occur within it.
The extra costs result from manufacturers embedding processors, DRAM and firmware into the SSD. These are needed to make flash look like an HDD to an operating system.
Adding these extra components also introduces the possibility of soft errors, and many SSD products lack sufficient intelligence to detect soft errors should they occur and then correct them. The few SSDs that do possess this level of sophistication carry a much higher price tag.
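To make the detect-and-correct idea concrete, here is a minimal sketch of one redundancy scheme: store multiple copies and majority-vote on read, so a single flipped bit is both detected and corrected. Real SSD controllers use proper ECC codes rather than triple copies; this is purely illustrative of the principle.

```python
def write_protected(bits):
    """Store three independent copies of the data (illustrative, not ECC)."""
    return [bits[:] for _ in range(3)]

def read_corrected(copies):
    """Majority vote per bit position corrects any single-copy bit flip."""
    return [1 if sum(copy[i] for copy in copies) >= 2 else 0
            for i in range(len(copies[0]))]

stored = write_protected([1, 0, 1, 1])
stored[1][2] ^= 1                     # simulate a soft error in one copy
print(read_corrected(stored))         # recovers the original [1, 0, 1, 1]
```

A controller without this kind of logic would silently return whichever copy it read, corrupted bit and all, which is exactly the soft-error risk described above.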
In addition, two other concerns emerge when SSDs based on the storage controller approach are used as a new memory tier.
- They are significantly slower than native flash. SSDs implemented using the storage controller architecture require data to traverse their internal controllers, which can force the data to take as many as nine (9) additional hops before reaching its final destination. Data translations also have to occur at each layer as the data moves from the ATA or SCSI protocols to flash and back again.
- Users must make an unpleasant RAID configuration decision. Because an SSD may fail, users have to select an appropriate RAID configuration to protect the data on it, and in the case of SSDs no RAID option is particularly attractive. Any RAID implementation requires the purchase of more SSDs, and each RAID configuration comes with trade-offs: RAID 0 sacrifices reliability and redundancy; RAID 1 sacrifices capacity; and RAID 5 introduces an additional performance hit.
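The capacity and redundancy trade-offs among those RAID levels can be sketched as follows; the drive count and size are illustrative.

```python
def raid_summary(level, drives, drive_gb):
    """Return (usable_gb, drive_failures_tolerated) for common RAID levels."""
    if level == 0:   # striping: full capacity, no redundancy
        return drives * drive_gb, 0
    if level == 1:   # mirroring: half the capacity survives one failure
        return drives * drive_gb // 2, 1
    if level == 5:   # striping + parity: lose one drive's worth of capacity,
        return (drives - 1) * drive_gb, 1  # plus a write-performance penalty
    raise ValueError(f"unsupported RAID level: {level}")

for level in (0, 1, 5):
    usable, tolerated = raid_summary(level, drives=4, drive_gb=200)
    print(f"RAID {level}: {usable} GB usable, survives {tolerated} failure(s)")
```

With four 200 GB SSDs, RAID 0 offers no protection at all, RAID 1 forfeits half the (expensive) flash capacity, and RAID 5 preserves more capacity but at a write-performance cost, which is why none of the options is attractive.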
Introducing SSDs that use a storage controller as a memory tier is feasible, but using them in this manner is akin to putting a square peg in a round hole. It is for this reason that SSDs with a memory controller architecture are much better suited to this emerging use case as a memory tier.
What makes an SSD solution that uses a memory controller architecture appealing as a new tier of memory is that it takes three (3) steps to lower costs, increase performance and decrease risk.
- First, Fusion-io eliminates the need to manage an SSD like an HDD. Flash is no longer put into a box and configured to look like an HDD to the operating system. Instead, Fusion-io collapses flash onto a PCI-Express card that is inserted directly into a PCI slot on the server backplane.
- Second, Fusion-io removes the need to place SSDs in a RAID configuration by building redundancy into the flash array itself. The card includes multiple redundant flash chips that can dynamically replace any chip that becomes defective, without requiring a user to replace the PCI-Express card.
- Third, and probably most important, it performs address translations the same way that virtual memory address translations are performed. This technique removes the numerous embedded address translations that storage controller SSDs require and expedites processing, since only one translation needs to occur.
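The third step can be sketched as a single table lookup, much like a page table mapping virtual pages to physical pages. The table contents and page size below are hypothetical; the point is that one lookup resolves a logical address to a flash location, rather than a chain of protocol translations.

```python
PAGE_SIZE = 4096

# Hypothetical mapping: logical page number -> physical flash page number
translation_table = {0: 17, 1: 3, 2: 42}

def translate(logical_addr):
    """Resolve a logical address to a flash address in one lookup."""
    page, offset = divmod(logical_addr, PAGE_SIZE)
    return translation_table[page] * PAGE_SIZE + offset

print(translate(4100))   # logical page 1, offset 4 -> 3 * 4096 + 4 = 12292
```

By contrast, a storage controller SSD would route the same request through its driver, embedded processor and flash translation layer, translating the address at each boundary before reaching flash.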
These steps make SSDs that use a memory controller architecture particularly well suited to act as this new memory tier because they do more than just act like virtual memory. They behave as if they are virtual memory, since there is no intermediary bus or hierarchy of controllers that data must first traverse. Data can flow directly onto a Fusion-io ioDrive because it is specifically architected to act like DRAM and speak the same language that DRAM does.
Leveraging virtual memory to complement DRAM has long been a technique used to accelerate application performance. But this new option to introduce SSDs that communicate in the same language as DRAM has a dramatic and positive impact on how applications perform while forcing organizations to rethink how they will architect their storage infrastructures going forward.
To successfully execute on this vision and implement SSDs as a new memory tier, organizations must choose the right SSD architecture: one designed to function as a memory controller, not a storage controller. It is for these reasons that organizations that want to leverage SSDs as a new memory tier should look to the Fusion-io ioDrive, as it was architected and designed to solve exactly these types of problems.