This is one of my favorite times of the year as I look back on some of the most popular blog entries on DCIG's site in the past year based on the number of page views. What makes it so intriguing for me is that it is similar to looking at a big wrapped gift under the Christmas tree and not knowing exactly what is in it. Every year I am never completely sure until this week which blog entries will make up the Top Ten most-read on DCIG's site. This year is no exception.
Category: Storage Management
We can all get caught up in the hoopla of new and slick storage technology features and lose sight of some of the most important and basic details that keep our storage fabrics up and humming. Among these are the Fibre Channel cabling infrastructures and the distance limitations incurred by continued increases in FC speeds.
Organizations have a proclivity to look at storage arrays primarily in the context of how much storage capacity they offer. But as storage arrays add features such as deduplication and thin provisioning, storage efficiency is taking on new importance as an evaluation criterion when selecting a storage array. This is raising questions as to what role, if any, a storage array's storage efficiency features should play in the final buying decision.
The real news this past week out of EMC World is not that EMC has decoupled its VMAX or Symmetrix controller heads from its back-end disk drives, added some bells and whistles, and called the result "VPLEX". The big news in my mind is that this decoupling puts the storage industry on notice that EMC has officially begun its transformation from a disk vendor into a provider of storage intelligence.
Upon arriving at Symantec Vision on Wednesday morning, it quickly became evident that the messaging at this year's event focused on how the business world is shifting from a Systems-Centric View of data management (policies and governance are set according to the physical devices on which data resides, such as servers, networking and storage) to an Information-Centric View (policies and governance are set independent of the storage device on which the data resides).
New Deduplication and Role-Based Access Features Close ARCserve Product Gaps; New Free SRM Feature may be Hidden Jewel in r12.5
Backup software is, if nothing else, a "Me-Too" space, with each vendor adding new features to each release of its product to try to match what its competitors are doing as well as adding a few new twists of its own to differentiate itself from the crowd. Today's CA announcement of ARCserve r12.5 continues this trend. To remain competitive, r12.5 adds data deduplication as a core component of ARCserve, improves users' abilities to recover guest VMs on virtual server operating systems and more tightly integrates ARCserve with popular applications. CA seeks to differentiate ARCserve from competitors with new native SRM reporting capabilities and by providing assurance that organizations can restore their deduplicated backup data.
Highlights from the Spring SNW 2009 Virtualization Summit; “Our Electric Bill is Our Biggest Data Center Line Item”
You can’t talk about storage these days without including virtualization somewhere in the conversation. The Spring 2009 SNW was no different, as one of its Summits was devoted to virtualization. The Tuesday, April 7, Virtualization Summit proved very interesting even though it was dominated by vendors. Some of the better data points that came out of this Summit were from TheInfoPro and Boston Medical Center. Also, interesting tidbits on SSD are emerging, as SSD appears to solve performance challenges for VMware access to storage in high I/O environments as well as in performance-intensive development environments.
Granularity of Control and Hypervisor Communication Becoming the Prerequisites for Virtual Machine File System Defragmentation
2009 is shaping up as the year of server virtualization. The hype around Citrix XenServer, Microsoft Hyper-V and VMware ESX Server is giving way to the reality of companies actually virtualizing their production servers as a means to improve energy efficiencies and slash infrastructure costs. But as companies virtualize these servers, many are leaving the familiarity of direct attached storage (DAS) and entering the world of networked storage for the first time. This is creating new challenges, especially for Windows servers using utilities such as defragmenters that will begin to operate on virtual machines (VMs) and defragment each VM’s associated file system.
In the computer industry, Diskeeper is as synonymous with disk defragmentation as Microsoft is with Windows. In fact, any knowledgeable Microsoft Windows administrator knows that defragmenting a disk drive can provide application performance boosts of up to 176 percent, if you believe some reports. That makes Diskeeper a must-have in the eyes of some shops with performance-intensive applications running on Windows servers. However, as more enterprises virtualize their servers and disk drives, how does Diskeeper's technology remain relevant? To get some answers to these questions, I recently spoke to Derek De Vette, VP of Public Affairs for Diskeeper Corporation.
SNW Reflections: Recommendations for Weathering the Forecasted Economic Downturn; “Loyalty Goes out the Window”
I just got back to Omaha after spending the last three days at Storage Networking World (SNW) and used the time on my flight home to reflect upon some of the conversations I had during my time there. While I still plan to do more blog entries in the coming days around the technologies that I reviewed at SNW, I first wanted to share some of the thoughts and feelings of those in attendance about how they think the economic crisis will affect tech in general and how companies should prepare to act in 2009. In particular, I wanted to share the thoughts of those who have weathered economic downturns in the past and how users have responded to them.