Blog

Insurers Lag in Data Testing

Syed Haider
Insurance Experts' Forum, August 27, 2012

The insurance industry depends on reliable and timely data. This data originates from a variety of systems, such as policy issuance, customer service, billing and claims. Aggregating and integrating large volumes of data from so many sources is no trivial problem for data architects. Yet even more challenging than building such projects is testing them.

While the application-testing landscape is fertile with a variety of fresh ideas and offerings, the terrain of data testing remains surprisingly barren. A quick search on Amazon will demonstrate the curious truth: the number of texts dedicated to testing data applications is woefully low when compared with those covering software testing. Why is that, one wonders, especially if one is tasked with architecting a data project that has few or no application/user-interface (UI) layers? A few quick explanations do come to mind:

First, software engineering as a discipline takes a holistic view of application development and doesn't treat database development as a separate concern deserving any kind of special treatment. Consequently, we see a healthy proliferation of testing tools, patterns and best practices for the UI and application layers overall, with few caveats for data testing. We see notions such as continuous integration, test-driven development and code coverage firmly established in the application development communities. Yet the testers and developers of non-UI, data-heavy applications, such as data warehouses and business intelligence, struggle to get even rudimentary automation right.
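What "rudimentary automation" might look like for a data layer can be sketched in a few lines. The example below is purely illustrative: the `policy` and `claim` tables, and the referential-integrity rule they check, are assumptions for demonstration, not anything drawn from a real insurer's schema. The idea is simply that a data-quality rule, like a unit test, should be a repeatable, assertable script rather than a manual spot check.

```python
import sqlite3

def orphan_claims(conn):
    """Return claim IDs whose policy_id has no matching policy row."""
    rows = conn.execute(
        """
        SELECT c.claim_id
        FROM claim c
        LEFT JOIN policy p ON p.policy_id = c.policy_id
        WHERE p.policy_id IS NULL
        ORDER BY c.claim_id
        """
    ).fetchall()
    return [r[0] for r in rows]

if __name__ == "__main__":
    # In-memory stand-in for a real warehouse connection.
    conn = sqlite3.connect(":memory:")
    conn.executescript(
        """
        CREATE TABLE policy (policy_id INTEGER PRIMARY KEY);
        CREATE TABLE claim  (claim_id INTEGER PRIMARY KEY, policy_id INTEGER);
        INSERT INTO policy VALUES (1), (2);
        INSERT INTO claim  VALUES (10, 1), (11, 2), (12, 99);  -- claim 12 is orphaned
        """
    )
    # The assertion is the test: a nonempty result means broken data, just as
    # a failing unit test means broken code.
    assert orphan_claims(conn) == [12]
    print("referential-integrity check found the orphan as expected")
```

Run nightly against the real warehouse, a handful of such checks already gives a data team the equivalent of a smoke-test suite.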

Second, database vendors have by and large placed little emphasis on enforcing any development discipline on their platforms. With a near-total absence of built-in support for quality, it can become a challenge to enact even basic quality controls for a project. Consider, for instance, concepts like 'project', 'build' or 'unit test'. Such notions were introduced in software development to improve the manageability of complex projects and to establish a quality baseline. (For instance, a broken build isn't considered working software.) No such concepts exist in the database platforms, making the task of organizing large code bases and tracking quality an extremely daunting one.

Third, tools and technologies like extract, transform, load (ETL) and BI that are integral to most contemporary data architectures are relatively new and undergoing frequent revisions. Built-in support for testing is slowly emerging in this space, though it's nowhere near as comprehensive as the well-entrenched techniques in the application development arena.
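Even without vendor support, the "transform" step of an ETL job can often be isolated into a pure function and tested like any other code. The sketch below assumes a hypothetical transform for raw policy records; the field names and cleansing rules are invented for illustration.

```python
def transform_policy_row(row):
    """Hypothetical ETL transform: cleanse one raw policy record.

    Raw feeds arrive as strings with inconsistent casing and padding;
    the transform types and normalizes each field.
    """
    return {
        "policy_id": int(row["policy_id"]),
        "state": row["state"].strip().upper(),
        "premium": round(float(row["premium"]), 2),
    }

if __name__ == "__main__":
    raw = {"policy_id": "42", "state": " mi ", "premium": "1200.50"}
    clean = transform_policy_row(raw)
    # Because the transform is a pure function, it can be exercised with
    # fixture rows, no database or ETL server required.
    assert clean == {"policy_id": 42, "state": "MI", "premium": 1200.5}
    print("transform test passed")
```

Factoring transforms out of proprietary ETL tooling this way is a design choice: the logic becomes testable in an ordinary unit-test harness, at the cost of keeping it synchronized with whatever the ETL platform executes in production.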

As a result of some of the observations above, we witness a tendency among IT managers to treat data testing the old-fashioned way: staffing it with second-class resources and sinking man-hours into repetitive, manual processes, hours that could otherwise have gone into more impactful quality assurance.

Yet these teams start feeling overwhelmed as data volumes and integrations grow and customer demands become more complex. Consequently, data quality suffers, and it's all downhill from there.

Sound familiar? We’ve been there too, and over the years have learned some common-sense remedies that we’ll be sharing in the second part of this blog.

Syed Haider is an architect with X by 2, a technology company in Farmington Hills, Mich., specializing in software and data architecture and transformation projects for the insurance industry.

Readers are encouraged to respond to Syed using the “Add Your Comments” box below. He can also be reached at shaider@xby2.com.

This blog was exclusively written for Insurance Networking News. It may not be reposted or reused without permission from Insurance Networking News.

The opinions of bloggers on www.insurancenetworking.com do not necessarily reflect those of Insurance Networking News.

Comments (1)

I agree, data is the heart of any system and should be at the heart of testing also. This whitepaper discusses some of the practical approaches to the challenge. http://www.origsoft.com/whitepapers/data-strategies-application-testing/

Posted by: George W | August 29, 2012 7:27 AM


