
The Problem with Data Storage: Way Too Much Information

Ara Trembly
Insurance Experts' Forum, January 11, 2010

For most of us, when we run across a friend or acquaintance, it’s not unusual to ask, “How are you doing?”  But what do we expect in response to that question? 

Frankly, we anticipate hearing something like: “okay,” “good,” “fine,” “terrific” or at least “not bad.” Sometimes, though, we get a longer, more involved answer that includes our friend’s troubles with hellish in-laws, uncooperative spouses and children, lunkhead bosses, friends who don’t call (not you, of course)—not to mention an assortment of major and minor physical ailments. The point is that if our friend turns a polite greeting into an invitation to vent on every problem known to man, we feel like we’re getting too much information (TMI)—also known in Internet parlance as WTMI (way too much information).

TMI and WTMI aren’t just annoyances for humans, however.  Data processing systems can also suffer from these maladies, especially when we insist on exponentially increasing the amount of information we feed to these systems.

Aberdeen Group recently surveyed 173 organizations, and 99% of them indicated that their data volume is growing. The average reported growth rate was just over 30% per year. The firm noted that while the cost of storage continues to fall, “the cost of managing a rapidly growing volume of data, supporting the required IT infrastructure, and preventing enterprise data from growing in an uncontrolled sprawl is not shrinking. Capacity must keep pace with demand and the kind of structured data management that yields actionable business intelligence is ever more critical to maintaining a competitive edge.”

Yet, Aberdeen pointed out, IT departments are under increasing pressure to reduce infrastructure, energy, and support costs.  “Store more and spend less” would seem to be the mantra, and don’t forget that all these data need to be instantly accessible for marketing, analytics and other critical systems. 

But 30% annual growth is not chicken feed. In money terms, if you banked $10,000 at 30% interest (a fantasy, I realize), you would have close to $138,000 in 10 years. In physical terms, if you start with 10,000 paper documents and increase their volume by 30% annually, after the same 10 years you would end up with roughly 138,000 documents, and perhaps a dilemma as to where to store them (you’d need almost 14 times the storage space). It goes without saying, too, that searching through 138,000 documents takes longer than searching through 10,000. So if our systems could talk, they would tell us that increasing data volume at this rate is WTMI.
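For anyone who wants to verify those figures, the math is simple compound growth. Here is a short, purely illustrative Python sketch (the function and variable names are mine, chosen only for this example):

    # Compound growth at 30% per year, starting from 10,000 units
    # (dollars or documents), over 10 years.
    def compound(start, rate, years):
        # Value of `start` after `years` of growth at `rate` per year.
        return start * (1 + rate) ** years

    final = compound(10_000, 0.30, 10)
    print(f"After 10 years: {final:,.0f}")          # roughly 137,858
    print(f"Growth factor: {final / 10_000:.1f}x")  # roughly 13.8x

That growth factor of roughly 13.8 is where the “almost 14 times the storage space” figure comes from.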

This is a particularly vexing problem for insurance enterprises, since insurance companies live and die on their data, its accuracy and its availability. A number of solutions have been floated, the latest being some form of virtualization or cloud computing. But these solutions have problems of their own: they depend on a healthy, functioning Internet, and they carry the increased security risks of storing critical data in the ether.

At one time, electronic data storage was straightforward. You saved files to tapes, disks, magnetic drives or some other medium, and you could retrieve data in a reasonable time. Yet despite advances in capacity and access speed, the flood of data today threatens to overwhelm our ability to control it. And the more we trust the Internet to help with storage needs, the less control we have and the more risk we take! Quite a conundrum.

This is why the primary hope for securely storing critical information should and must lie with technologies that increase capacity and improve access times without requiring that the data leave the enterprise. As I have noted in this space, GE and several others have made encouraging announcements of this kind. All of us in insurance, financial services and other data-intensive industries should be paying attention.

Meanwhile, enterprises must also pay attention to prioritizing data storage in such a way that the most critical data are stored in the most secure places.  And if certain information is not so critical, perhaps a virtual solution is the best idea.  Sound judgments need to be made, however, because a mistake in these practices could lead to catastrophic consequences. 

Ara C. Trembly (www.aratremblytechnology.com) is the founder of Ara Trembly, The Tech Consultant, and a longtime observer of technology in insurance and financial services.

Readers are encouraged to respond to Ara at ara@aratremblytechnology.com.

The opinions posted in this blog do not necessarily reflect those of Insurance Networking News or SourceMedia.
