Market Insights: Data Dominance in Insurance.

The insurance market is witnessing significant investment in accessing, engineering, and mining data, making data the dominant theme when discussing high-level macro trends. Based on recent market insights research and what we’re hearing, here are some of the data trends and challenges we see in the market:

The Dominance of Data in the Market

There is a noticeable shift towards using data to make better underwriting decisions and increase profitability. Most market players are investing in solutions to extract and structure data from slips and risk engineering or claims reports.

Organisations are also investing in third-party data sources to enrich data insights. Only a handful are looking at the impact of real-time data on risk, risk management and associated new service propositions. That said, several new business models based on real-time data are currently in stealth mode.

Some are being strategic with their data, building out data models and linking them to technology upgrades in order to become data-driven or data-supported.



Obsession with Data Extraction and Structured Data

Most insurers are looking for solutions to scrape data from slips/MRCs, capturing better-structured data and avoiding rekeying. In the short term, the introduction of the new MRC v3 format and data points will require new instructions for human data entry roles, or retraining of AI models. In the long term, however, adherence to the structure and format of the new-style MRC v3 will likely make extraction easier through consistent layout and formatting.

Those looking to extract data from MRC v3 to service the creation of the CDR v3.2 will face challenges. Because a large number of the data points are conditional (based on territory or coverage requirements, for example), successful implementations will need to consider both the extraction of the data and the CDR rules determining which data is or is not required for any given scenario.
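The interplay between extraction and conditional requirements can be sketched as a small rules check. This is a minimal illustration only: the field names and conditions below are hypothetical and do not reflect the actual CDR v3.2 specification.

```python
# Hypothetical sketch of conditional data-point requirements, loosely
# modelled on territory/coverage-dependent rules. Field names and the
# rules themselves are illustrative, not the real CDR v3.2 schema.

def required_fields(risk: dict) -> set:
    """Return the set of data points required for a given risk scenario."""
    required = {"unique_market_reference", "inception_date", "insured_name"}

    # Illustrative conditional rule: a US-exposed risk needs extra fields.
    if risk.get("territory") == "US":
        required |= {"state_of_filing", "surplus_lines_broker"}

    # Illustrative conditional rule: property coverage needs location data.
    if risk.get("coverage") == "property":
        required |= {"location_schedule", "total_insured_value"}

    return required


def missing_fields(extracted: dict, risk: dict) -> set:
    """Compare the data points extracted from a slip against what the
    conditional rules say is required for this scenario."""
    return required_fields(risk) - set(extracted)


risk = {"territory": "US", "coverage": "property"}
extracted = {"unique_market_reference": "B0001ABC123",
             "inception_date": "2024-01-01"}
print(sorted(missing_fields(extracted, risk)))
```

The point of the sketch is that extraction alone is not enough: the same extracted slip can be complete for one scenario and incomplete for another, so the conditional rules have to be evaluated alongside it.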

The sheer quantity of data points and a lack of clear direction or understanding from internal teams do not appear to be setting projects up for success.

Data Science Capabilities and Talent Shortages

Regardless of an organisation's size, levels of data capability, skills and understanding are low. This is compounded by industry-wide talent shortages for these skills, and by limited data literacy across functions such as broking and underwriting, which now also need to be data-savvy.

Where tried, we have heard that bringing in Data Scientists has not been successful, due both to a general lack of London Market knowledge among these individuals and to the wider organisation's lack of cultural readiness to accept data-led recommendations alone.

The current priority is Data Engineering skills to extract, organise and standardise the data before Data Science can be enabled. Successful outcomes will be driven by partnering skills with domain knowledge, rather than expecting a single discipline to cover the full breadth of knowledge.



Data Deluge and the Challenge of Making Sense of Data

Many market players struggle to make sense of all the data: what is important or meaningful versus what is noise. Even the fundamental question of how to create a business case for new solutions or datasets is proving challenging for some.

Using the example of real-time data from Internet of Things (IoT) sensors, one respondent noted that some 300 data points could be pulled from a single sensor, and yet only 60 of those are meaningful from a risk management or risk pricing lens.
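That filtering step, keeping roughly 60 meaningful data points out of some 300, amounts to an allowlist over the raw payload. A minimal sketch, with entirely hypothetical field names:

```python
# Hypothetical sketch of separating signal from noise in an IoT sensor
# payload. The field names and the allowlist are illustrative only.

# Fields judged meaningful from a risk management / risk pricing lens.
MEANINGFUL_FIELDS = {"temperature_c", "water_level_mm", "vibration_hz"}

def filter_payload(payload: dict) -> dict:
    """Keep only the data points relevant to risk, dropping the noise."""
    return {k: v for k, v in payload.items() if k in MEANINGFUL_FIELDS}

raw = {
    "temperature_c": 21.4,        # relevant, e.g. to cold-chain risk
    "water_level_mm": 3.0,        # relevant to escape-of-water claims
    "vibration_hz": 0.2,          # relevant to machinery breakdown
    "firmware_version": "2.1.7",  # device housekeeping, not a risk signal
    "battery_pct": 87,            # device housekeeping, not a risk signal
}

print(filter_payload(raw))
```

The hard part in practice is not the filter itself but agreeing, per line of business, which fields belong on the allowlist: the business-case question raised above.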

Data Innovation and the Need for Developing New Products

Most insurers are focused on how to “clean their (data) house” before being able to consider how best to drive new insights, new products, or new services from that data. They are caught up in “the administration of underwriting” and are concerned with aligning new datasets to existing underwriting products.

Whilst on the one hand there is a data deluge, on the other there is a lack of data to enable the development of new products on emerging risks. This has led to slow innovation and thereby a further divergence from client needs. This "product-first" approach, coupled with the conservative availability of limits for new products, has stifled much-needed data-led product innovation.



Data Ownership and the Rise of Individual Peer-to-Peer Data Exchanges

With the recent announcement of the WTW Neuron platform, there appears to be a shift towards individual peer-to-peer API-led data exchanges, moving away from relying on the market to facilitate this. It is the data-savvy driving change. However, this is not without its issues: the multi-faceted data requirements – data that a broker needs vs data that the underwriter needs vs risk data vs risk exposure data vs regulatory/compliance data vs accounting data – are challenging and need a complex series of partnership and data-sharing agreements.

The London insurance market is seeing significant investment in data. Most market players are prioritising the extraction, organisation, and standardisation of data to enable better decision-making, increase profitability, and develop new products. However, there are significant challenges in building data science capabilities, making sense of data, and creating a business case for new solutions. It also remains to be seen whether the advent of the CDR v3.2 and MRC v3 will drive much-needed change, finally moving away from rekeying from Word documents, and whether data will flow between systems via APIs.

by Hélène Stanway.

Get in touch with r10's Advisory team for help with the CDR and your Future@Lloyd's digital transformation journey, including understanding, defining and implementing your strategic priorities.
