Overcoming Death by Data

Every organization knows its data stores are growing by 30%, 50% or more annually and, as they do, archiving is taking on a greater role in helping organizations store and manage this data more economically. What organizations can fail to consider, however, is the downside of not having an archival data store that can scale to meet their current and future storage requirements. Today, for example, science departments across the nation are grappling with the inability to cost-effectively manage and scale their archived data stores. Their experience offers enterprise organizations some insight into the types of problems they can avoid if they act now.
An article that appeared recently in The Chronicle of Education illustrates how digital data growth and the freedom to store research data are beginning to negatively impact the science community. While the growth of distributed computing has been a boon for researchers, letting them do their research and generate the data needed to support their work more cost-effectively, it is proving extremely difficult for these same individuals and the institutions they work for to manage all of this data cost-effectively. The problems these institutions are running into in managing the data include:

  • Accessing data that suddenly becomes relevant again. Researchers can never be sure when, or if, the results of specific tests they conducted in the past will become relevant again in the future. But if that research data is suddenly deemed pertinent, the question everyone needs to answer is, “Where is that data physically located?” Then the hunt is on to find the specific computer and storage device on which the research was stored.
  • Cost-effective data storage. On the surface, it can make financial sense for institutional researchers to store data of unknown value on an inexpensive internal or external hard drive with plenty of capacity. Yet once the data is stored on such a drive, it becomes for all intents and purposes inaccessible, since it was not centrally stored in a place where anyone else could find it if they needed it.
  • The secretive nature of researchers. The research community is highly competitive, so researchers have a motive to keep both desirable and undesirable research data hidden to protect their ideas and findings. If past research data suddenly becomes valuable, the researcher wants to make sure no one else claims credit for it or sells it without permission.

So what does all of this have to do with enterprise organizations? The same problems confronting science departments across the nation are already starting to show up in the enterprise. For example, multiple repositories of information are scattered around the typical enterprise: experience suggests six to ten significant repositories, and possibly many more once research and development departments are considered.
Granted, it is unlikely that most enterprise organizations will suddenly find any of their stored data valuable for resale anytime soon, but there are many other reasons to track information. Core business data (financial and manufacturing) must be managed effectively so that organizations can track and trend customers and their buying patterns; if the data is scattered, buying patterns cannot be identified and trends will never appear. In addition, in today’s economy, enterprise organizations must deal with how to cost-effectively manage growing production file and email data stores, access and search archived data stores that become relevant again (think eDiscovery), and keep archived data secure from unauthorized eyes, whether during eDiscovery or in the normal course of business.
Further, storing and managing archived data is a function best performed centrally, for two reasons. First, it avoids the scenario in which finding data archived and managed by individuals or business units becomes like finding the proverbial needle in a haystack. Second, and equally important, enterprise organizations can avoid a “negative inference” instruction from a judge during litigation. In these circumstances, a judge advises a jury to look unfavorably upon a defendant that cannot produce requested data and to infer that the defendant is withholding information material to the case.
The good news for enterprise organizations willing to take action is that viable archiving solutions are available that can help them avoid the types of problems the educational institutions and their researchers are encountering. The Permabit Enterprise Archive is a prime example. It is purpose-built to be a central repository of information, and it can safely scale to manage petabytes of data. Its newest release, the Data Center Series, has driven the cost of storage down to as low as $1/GB, and it can securely and centrally store data for different individuals and business units while maintaining the confidentiality of each individual’s and business unit’s data.
Cost-effectively and efficiently managing all of the archived data created in this new age of inexpensive computers, networks and storage is by no means a given, as many businesses and educational institutions are discovering. Enterprises are fortunate in that they can learn from these examples and overcome the death by data these institutions are experiencing. But that is only possible if they select solutions, such as the Permabit Enterprise Archive, that give them a means to centrally, economically and securely scale their enterprise archived data stores.
