On the server virtualization side, VMware vCenter has emerged as a central console that first discovers and then centrally manages VMware VMs across an environment. On the storage side, similar storage array management software like NEC Storage Manager is now available to complement VMware vCenter: it discovers NEC D, M and S Series storage arrays and then administers their advanced storage software features.
According to IDC, revenue from external disk storage systems totaled over $18 billion in 2010. But what that IDC number does not fully reflect is the growing impact that midrange arrays are having on organizations of all sizes, or how well they are positioned to deliver the other key feature that organizations now want in their virtualized environments: reliability. Among the midrange arrays available, the new NEC M100 storage array is better positioned than most to deliver on both fronts.
Genesis Hosting Uses a Proof of Concept to Reap the Tangible and Intangible Value of Building an In-house Reference Architecture
Virtualization is sweeping through data centers of all sizes and, as it does, it introduces levels of complexity that organizations are ill-equipped to handle. To mitigate this, reference architectures are emerging as a technique to standardize which hardware and software are deployed, under what circumstances, and how they are managed.
Dedupe is an easy concept to grasp. At its most basic level it reduces storage requirements and can improve backup and recovery times. It seems like a “win-win” scenario and, for the most part, it is. But let’s not lose sight of the fact that dedupe is still in its infancy and is being continually fine-tuned and changed. That should keep us from becoming lackadaisical in our perception of this still-maturing technology.
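The basic mechanics behind dedupe can be illustrated in a few lines. The sketch below is a simplified, hypothetical model of fixed-block deduplication (real products use variable-length chunking and far more sophisticated indexing); the chunk size, function names and in-memory dictionary "store" are all illustrative assumptions, not any vendor's implementation.

```python
import hashlib

def dedupe(data: bytes, chunk_size: int = 4096):
    """Split data into fixed-size chunks and keep each unique chunk only once."""
    store = {}    # chunk fingerprint -> chunk bytes (the unique-chunk store)
    recipe = []   # ordered list of fingerprints needed to rebuild the data
    for i in range(0, len(data), chunk_size):
        chunk = data[i:i + chunk_size]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # duplicate chunks are not stored again
        recipe.append(digest)
    return store, recipe

def rehydrate(store, recipe) -> bytes:
    """Reassemble the original data stream from the chunk store and recipe."""
    return b"".join(store[d] for d in recipe)

# Ten identical 4 KB blocks deduplicate down to a single stored chunk.
data = b"x" * (4096 * 10)
store, recipe = dedupe(data)
assert rehydrate(store, recipe) == data
assert len(store) == 1 and len(recipe) == 10
```

The example also hints at why dedupe is not maintenance-free: the recovered data is only as good as the chunk store and its index, which is one reason the technology continues to be fine-tuned.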
Recently Kelly Polanski (another DCIG analyst) and I had a rather lengthy discussion about the value of keeping archive and backup data on disk versus tape long term. We both agreed that using disk in some form as an initial backup target makes sense in most environments, but as we started to debate the merits of keeping data on disk versus tape long term, the issue became cloudier. While DCIG has previously argued that eDiscovery is becoming a more compelling reason to keep archive and/or backup data on disk long term, our concerns centered on the fact that some disk-based archival and backup storage systems can become as problematic as tape.
Over the last few months DCIG has spent a fair amount of time researching and documenting specific reasons why tape will not die. Green IT is the reason we most often hear cited for retaining tape, though new disk-based deduplication and replication technologies, coupled with new disk storage system designs based on grid storage architectures, can offset some of those concerns. So before organizations conclude that after 30, 90 or 180 days they should immediately move their archival and backup data, deduplicated or otherwise, from disk to tape just to save money, they should weigh the intangible eDiscovery savings that keeping data on disk provides and that tape does not always make feasible.
Almost 3 years ago now, Robin Harris over at Storagemojo.com started posting the list prices for different vendors’ products so customers would have at least a starting point when comparing product prices. Though I suspect the list prices associated with these vendors’ offerings have changed since he originally posted some of them, what I found especially remarkable is how difficult it is to ascertain what a deduplication solution will cost an organization. The difficulty in pricing deduplication solutions had less to do with making sure you are getting deduplication than with making sure you include in your configuration all of the options your environment needs, such as failover, NAS or VTL interfaces, data retention periods or replication, so you can effectively compare different solutions.
Innovation within the data center seems to be on the lips of IT managers, vendors and analysts alike. Innovation, it is said, will pull us through this economic downturn even as organizations experience cutbacks in budgets and staff amid general doom and gloom. These innovations include maturing technologies such as virtualization, grid computing and deduplication, coupled with management initiatives like consolidation, outsourcing and reduced expansion. Together they help organizations continue to cut costs and stay on budget while creating more efficient data centers that are ready for whatever tomorrow brings.
Are deduplication guarantees really something you can take to the bank? As more companies look toward using disk in general as a backup target, and deduplicating systems specifically, deduplication guarantees are emerging as a way to influence users’ decisions to deploy deduplicating systems. But in these tightening economic times, deduplication guarantees do not necessarily guarantee money in the bank, and they may shift your attention away from more critical evaluation criteria such as system reliability, scalability and performance.
Data Migration and User Account Management: Grid Storage Tackles the Hidden Issues of Storage System Management
Having managed multiple types of storage systems from multiple storage vendors, I have seen two flaws that are common across many vendors’ storage systems: the inability to transparently migrate data to subsequent generations of their own hardware, and the inability to share administrative permissions with other like storage systems from the same vendor. How acute this problem is depends on how many storage systems a company manages and how often it replaces them. However, any administrator responsible for managing five, ten or more storage systems in today’s enterprise corporations understands exactly what I am talking about.