As I mentioned in several of the blogs I posted last week while attending Storage Networking World (SNW), there was a heavy emphasis on solid state drives (SSDs) at the conference. However, during the many presentations I attended and conversations I had about this technology, SSD vendors revealed some key “gotchas” about SSDs. They also shared how SSDs stand to impact the hard disk drive (HDD) market as well as the market for memory. So here, in no particular order, are some of the new challenges and opportunities that SSDs create, as well as what to watch out for.
No Management Software. This was probably the most glaring deficiency in many of the new SSD products that I saw at SNW. Every new SSD provider was focused on the IOPS and throughput that its appliance could provide, but there was little or no talk about what options users had to manage the data once it was stored on SSD. A couple of the SSD providers I spoke with are having discussions with storage virtualization providers such as FalconStor Software and RELDATA for this exact reason.
Potential for data loss. Store data to disk or tape and put it on the shelf, or even just turn off the storage system on which it is stored, and one can be reasonably sure that the data will be there when it is turned back on. SSD comes with no such assurance.
One SSD provider told me that on a brand new SSD, one can be reasonably certain that data written to it will be there for 10 years if it is powered off. However, an SSD that is 80% “worn” (i.e. – has had 80% of its allocation of writes consumed) may only be able to preserve the data for about a year.
This may be of particular concern in situations where an SSD is physically placed in a server and the server is turned off for a period of time. However, I also got the sense that this behavior is in a state of flux: SSD providers are working hard to overcome the deficiency, and the potential for data loss may vary widely by provider depending on what steps each has taken to correct it.
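The retention figures quoted above can be turned into a simple back-of-the-envelope model. The linear interpolation between the two quoted data points (10 years when new, about 1 year at 80% worn) is my own illustrative assumption, not any vendor's specification:

```python
# Toy model of SSD power-off data retention vs. wear, using the two
# figures quoted above: ~10 years of retention when new, falling to
# ~1 year at 80% wear. Linear interpolation between those points is
# an illustrative assumption, not a vendor specification.

def estimated_retention_years(wear_fraction,
                              new_retention=10.0,
                              worn_retention=1.0,
                              worn_point=0.8):
    """Estimate power-off retention (in years) at a given wear level."""
    if not 0.0 <= wear_fraction <= 1.0:
        raise ValueError("wear_fraction must be between 0 and 1")
    slope = (worn_retention - new_retention) / worn_point
    # Clamp at zero so a fully worn drive never reports negative years.
    return max(new_retention + slope * wear_fraction, 0.0)

print(estimated_retention_years(0.0))  # 10.0 years when brand new
print(estimated_retention_years(0.8))  # 1.0 year at 80% worn
```

Even a crude model like this makes the operational point clear: the more worn the drive, the shorter the window for leaving it powered off.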
Predictive failures. This same “wearing” of SSDs that leads to data loss is not an entirely bad thing. Unlike traditional HDDs, where you are essentially rolling the dice as to when the drive will fail, SSD failures are far more predictable.
While SSDs still have the potential to fail at any time, since they have no moving parts the potential for unexpected failure is far lower. Rather, SSD providers are finding that by monitoring the number of writes to an SSD, they can predict when it is coming to the end of its life and advise users to proactively replace the drive.
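The monitoring approach described above is straightforward to sketch: count writes against the drive's rated write endurance and flag the drive as it approaches end of life. The class, field names, rated limit, and 90% replacement threshold below are all illustrative assumptions, not any vendor's actual implementation:

```python
# Sketch of write-count-based predictive replacement: track cumulative
# writes against a drive's rated write endurance and flag the drive
# for proactive replacement as it nears end of life. All names and
# thresholds here are illustrative assumptions.

class WearMonitor:
    def __init__(self, rated_write_limit, replace_threshold=0.9):
        self.rated_write_limit = rated_write_limit
        self.replace_threshold = replace_threshold
        self.writes = 0

    def record_writes(self, count):
        """Accumulate the number of writes issued to the drive."""
        self.writes += count

    @property
    def wear_fraction(self):
        """Fraction of the rated write endurance consumed so far."""
        return self.writes / self.rated_write_limit

    def needs_replacement(self):
        """Advise proactive replacement once wear crosses the threshold."""
        return self.wear_fraction >= self.replace_threshold

monitor = WearMonitor(rated_write_limit=1_000_000)
monitor.record_writes(950_000)
print(monitor.needs_replacement())  # True: 95% of rated writes consumed
```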
Saturation of existing network interconnects. One SSD provider, WhipTail Technologies, spoke about how its SSD appliance could easily saturate its two 4 Gb/sec FC connections and even suspected that the new 8 Gb/sec FC standard could be a bottleneck in performance-intensive environments. (WhipTail was not yet using 8 Gb FC because it had not found the 8 Gb FC drivers sufficiently mature.) This is leading it to look more seriously at introducing an InfiniBand interface into its appliance.
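A quick back-of-the-envelope calculation shows why saturation happens so easily. A 4 Gb/sec FC link delivers roughly 400 MB/sec of payload per direction (after 8b/10b encoding overhead), and 8 Gb/sec FC roughly 800 MB/sec; the appliance throughput figure in the example is a hypothetical number of my own, not WhipTail's:

```python
# Back-of-the-envelope check on why an SSD appliance can saturate its
# Fibre Channel links. Approximate payload rates per link direction:
# 4GFC ~400 MB/s, 8GFC ~800 MB/s (8b/10b encoding overhead included).
# The 1,600 MB/s appliance figure below is a hypothetical assumption.

FC_PAYLOAD_MB_S = {"4GFC": 400, "8GFC": 800}

def links_needed(appliance_mb_s, link_type):
    """How many FC links of a given speed a throughput level requires."""
    per_link = FC_PAYLOAD_MB_S[link_type]
    return -(-appliance_mb_s // per_link)  # ceiling division

print(links_needed(1600, "4GFC"))  # 4 links: two 4GFC ports fall short
print(links_needed(1600, "8GFC"))  # 2 links even at the newer standard
```

At those rates it is easy to see why a flash appliance outruns a pair of 4 Gb ports, and why InfiniBand starts to look attractive.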
Be suspicious of Iometer results. Most SSD vendors had Iometer prominently displayed in their booths at SNW, showing results of 100,000+ IOPS. Users need to exercise caution here and not be unduly swayed by these results, as Iometer can be configured in a number of different ways. One can assume that to achieve the results displayed in these booths, Iometer was configured in the most optimal way possible, probably with 100% reads and zero writes.
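A rough way to see how much a 100%-read configuration can flatter the numbers: if a device sustains R read IOPS and W write IOPS in isolation, a mixed workload is bounded approximately by the weighted harmonic mean of the two. The device figures below are illustrative assumptions, not measurements from any vendor's booth:

```python
# Why a 100%-read Iometer run can overstate real-world IOPS. If a
# device does R read IOPS and W write IOPS in isolation, a mixed
# workload is bounded roughly by the weighted harmonic mean of the
# two. The 100K-read / 10K-write figures are illustrative assumptions.

def mixed_iops(read_iops, write_iops, read_fraction):
    """Weighted harmonic mean estimate of IOPS for a read/write mix."""
    return 1.0 / (read_fraction / read_iops +
                  (1.0 - read_fraction) / write_iops)

print(round(mixed_iops(100_000, 10_000, 1.0)))  # 100000: the booth demo
print(round(mixed_iops(100_000, 10_000, 0.7)))  # 27027: a 70/30 mix
```

Even a modest 30% write component drags the headline number down dramatically, which is exactly why the workload mix behind any quoted IOPS figure matters.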
Replace memory on servers. Much attention has been given to SSDs replacing HDDs in the near future, but it is also conceivable that SSDs could replace memory, or at least reduce the amount of memory that systems need. While SSD is not as fast as DRAM, it may be a “good enough” replacement for memory on enough application servers that organizations can justify making the switch and eliminating both HDDs and extra memory on them altogether.