Procrastination is sometimes the best policy when it comes to making a storage purchase. After all, what enterprise organization has not benefited by waiting and then getting storage at a lower price? But when flexible storage solutions such as the IBM N series and Real-time Compression become part of the equation, the question that enterprise organizations need to answer is less about price. Rather, they must answer the question, “Will this storage solution move us forward or backward?”
New storage array models generally break down into two categories.
The first consists of traditional storage arrays. These are for the most part based upon an architecture that has remained largely unchanged for over 20 years, with limited capacity, connectivity and performance.
This architecture is problematic because each array tends to become a storage silo. When an organization’s capacity requirements exceed the maximum capabilities of a storage array, a new one is needed, with each array and the data on it managed independently of the others. This makes the traditional model complex, costly and cumbersome to manage.
While the traditional architecture could in principle be adapted to overcome these limitations, that is unlikely for two reasons:
- They are rooted in legacy architectures with heavy microcode investments. Legacy microcode connotes stability and reliability but makes code changes difficult to implement.
- Fundamental changes to how they manage storage require investments in new products or the purchase of emerging technologies. This approach almost never works: because of microcode conflicts, the new technology is never fully integrated, so the new feature becomes a new array.
This second outcome – new features becoming new storage arrays – results in new storage silos, as is readily seen in many vendors’ storage portfolios. Rather than one logical storage system that contains all of the functionality an enterprise organization needs, a different storage array exists for each application. As a result there is one model for archiving, one for backup, one for NAS and yet another for SAN. This creates multiple storage silos with multiple points of administration.
This architectural design precludes enterprise organizations from achieving revolutionary improvements in cost, flexibility and administration until they break from this model. But in making this break, they must also select technology that is architected to solve today’s storage problems.
To address these concerns, a more flexible category of storage has emerged that is architected to:
- Cost the same as or less than traditional storage arrays
- Be implemented and put into production swiftly
- Utilize existing storage capacity more efficiently and effectively
- Minimize or eliminate many of the ongoing data and storage management tasks
- Not put an organization’s core business and IT operations at risk
Storage arrays that fall into this flexible category are at the front end of meeting the demands of today’s business and IT requirements. By eliminating the risks associated with their deployment and the headaches around day-to-day storage management, flexible storage solutions enable organizations to think about storage from the perspective of business enablement.
This mindset of efficient storage management becomes even more important in light of the new forces that impact business and IT today, which include:
- Cloud computing
- Data reduction
- Disaster recovery
- High availability
- Limited IT budgets
- Power and space efficiency
- Virtualizing network, server and storage resources
But in order for enterprise organizations to move to this more flexible model, they first need to change how they think about buying storage. The similarity between traditional storage arrays has led enterprise organizations to pit one vendor against another in order to get a better storage price.
This works up to a point, but continuing the practice may actually hurt enterprise organizations, as they may end up with the wrong storage architecture for their business and IT needs. In this scenario, purchasing a traditional storage array may move an organization backward, as traditional storage architectures are not well suited to the new challenges that enterprises face.
So what storage solutions fall into this more flexible category? The IBM N series and IBM Real-time Compression Appliance are already staking out a claim to this category as proven solutions in business-critical environments.
Marvell Technology Group is a prime example as it saw a split in storage architectures occurring in late 2008. Marvell wanted to be proactive and take advantage of IBM’s more flexible storage offerings but at the same time it did not want to put its production operations at risk.
Marvell wisely observed that the flexibility IBM offered was important, but it put a priority on reliability and stability. Kishore Fernandes, Marvell’s Senior Manager of Engineering Computing Infrastructure, says, “Many of our engineering applications run for days so if the storage system delivers excellent performance for six days but fails on the seventh, we have to re-start the job – and we’ve lost six days.”
Marvell’s decision to implement the IBM N6070 was predicated upon it wanting – and getting – a storage solution that gave it the best of both worlds. It still got the “new” features associated with traditional storage arrays that it wanted – more performance, more capacity, and enterprise reliability. But it also got the flexibility of IBM’s storage.
For instance, Marvell took advantage of the IBM N6070’s deduplication feature. As a result, it realized a 92% reduction in storage-related power costs, a 91% reduction in storage-related cooling costs, and a 90% smaller storage footprint, even as it increased its storage utilization from 50% to 80%.
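To see how figures of this order come about, the back-of-the-envelope arithmetic below sketches the interaction between deduplication and utilization. The logical data size and the 10:1 deduplication ratio are assumptions chosen for illustration – only the 50% and 80% utilization figures come from the text, and Marvell’s actual configuration may differ.

```python
# Hedged sketch: how deduplication plus higher utilization can shrink
# physical storage footprint. All inputs except the utilization figures
# are illustrative assumptions, not Marvell's actual numbers.

logical_tb = 100.0        # data as applications see it (assumed)

# Before: no deduplication, 50% utilization (per the article)
physical_before = logical_tb / 0.50            # 200 TB raw capacity

# After: assume a 10:1 deduplication ratio, 80% utilization
physical_after = (logical_tb / 10.0) / 0.80    # 12.5 TB raw capacity

reduction = 1 - physical_after / physical_before
print(f"{reduction:.0%}")  # prints 94%
```

Under these assumed inputs the footprint shrinks by roughly 94%, the same order of magnitude as the ~90% reduction cited above; power and cooling savings track the footprint because fewer spindles draw less power.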
IBM also gives those organizations who are not yet comfortable with deduplicating production data the option to implement the IBM N series and Real-time Compression as a single solution. This solution gives them the flexibility of the N series, the data reduction they seek and even a potential boost in application performance.
ESG Labs recently tested the IBM Real-time Compression solution and found that file copy operations running in a Windows guest operating system completed 28% faster than when just using the N series. ESG also tested the IBM Real-time Compression in front of an online transaction processing (OLTP) database and saw an increase in the number of user level transactions by 15% versus just using native storage.
It is a given that enterprise organizations are going to buy new storage arrays for any number of reasons, but cost should no longer be the determining factor. The demands that enterprise organizations are placing on their IT infrastructures are increasing even as their budgets stay the same or even shrink. To meet these competing demands, they need to derive more value from the storage solutions they purchase, which can only be accomplished when they procure storage solutions that store and manage data more efficiently.
Enterprises considering their next steps in storage are obliged to first step back and answer a much more basic question: “What storage architecture is best suited to move us forward?” Those enterprises that do NOT have cloud computing, data reduction and server virtualization on their corporate roadmap may find that they can stick with more traditional storage architectures. But if any of these new technologies are on their roadmap or already in their data center, then deploying more flexible storage solutions like the IBM N series and Real-time Compression Appliance is a decision that they can no longer procrastinate on making.