2 Important Tips for Insurers’ Data Migration Processes
Many life and pensions senior managers say they have been involved with one or more data migrations in the past, and that those migrations:
- Just didn’t work
- Went over budget
- Were delivered late, or not at all
- Only migrated some of the data
- Left the old system still running
- Moved lots of old data that nobody looks at
- All of the above…
This blog post will examine why such projects often run into trouble, or fail completely, and present two main tips for successful data migration. These tips can help ensure a successful and repeatable process for insurance providers.
1. Sometimes ‘Messy’ is OK
Don’t try to fix the unfixable.
Most modern policy administration systems are designed for capturing new business. They have sophisticated validation in place to ensure that only ‘clean’ business is accepted, but they usually have no mechanism for loading sub-standard data en masse.
It is usually not possible, for example, to record a new life policy without capturing the date of birth of the life insured. Many older systems were not built with such rigorous validation, and it is very common for these sorts of gaps to exist in the data being migrated. The usual approach to this issue is a ‘data clean-up’ exercise prior to migration that aims to plug the gaps in the data so that it can be accepted by the target system.
This approach is flawed and is one of the major reasons why migrations fail. If the date of birth for a customer is missing, then it cannot usually be deduced from other data. It is likely that the only way the information can be obtained is by contacting the customer (assuming that the contact details for the customer are correct and that the customer will actually respond to such a request).
This is a cumbersome, lengthy and expensive process that is likely to provide a poor return on the time and money invested.
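One pragmatic alternative, in the spirit of the heading above, is to accept the gaps: migrate such records as-is and flag the missing fields for later follow-up, rather than blocking the whole migration on them. The sketch below illustrates the idea; the record structure and field names are hypothetical, not taken from any particular policy administration system.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class MigratedPolicy:
    policy_number: str
    date_of_birth: Optional[str]          # may legitimately be missing in legacy data
    quality_flags: list = field(default_factory=list)

def prepare_for_migration(raw: dict) -> MigratedPolicy:
    """Accept the record as-is; flag gaps instead of rejecting the record."""
    policy = MigratedPolicy(
        policy_number=raw["policy_number"],
        date_of_birth=raw.get("date_of_birth"),
    )
    if policy.date_of_birth is None:
        # Flag for follow-up rather than failing validation outright.
        policy.quality_flags.append("MISSING_DOB")
    return policy

# A legacy record with no date of birth still migrates, carrying its flag.
legacy = {"policy_number": "P-1001"}
migrated = prepare_for_migration(legacy)
print(migrated.quality_flags)  # ['MISSING_DOB']
```

The flags can then drive a targeted, post-migration follow-up with only the affected customers, instead of a blanket clean-up exercise that delays the whole project.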
2. Don’t Load Directly into the Database
A common strategy to get around the missing data problem is to try to load the data directly to the new system’s database, bypassing the target system’s validation completely. This is a high-risk approach that requires an intimate knowledge of how the new system populates its data structure when data is entered. If you get it wrong, your data may still load, but the new system will fail when trying to process the data. This will cause major problems in day-to-day operations.
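A minimal sketch of why this goes wrong (the schema and function names are invented for illustration): many systems maintain derived data alongside the main record when business is entered through the application layer. A direct database insert writes the main record but silently skips everything else, and downstream processing then fails.

```python
import sqlite3

# A toy target system: entering a policy through the application layer
# also writes a derived billing row that batch processing relies on.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE policy (id TEXT PRIMARY KEY, premium REAL)")
conn.execute("CREATE TABLE billing_schedule (policy_id TEXT, amount REAL)")

def enter_policy(policy_id: str, premium: float) -> None:
    """The system's own entry path: validates and keeps derived data in step."""
    if premium <= 0:
        raise ValueError("premium must be positive")
    conn.execute("INSERT INTO policy VALUES (?, ?)", (policy_id, premium))
    conn.execute("INSERT INTO billing_schedule VALUES (?, ?)",
                 (policy_id, premium / 12))

# Direct-to-database load: the policy row appears, but the derived
# billing row the system expects is missing.
conn.execute("INSERT INTO policy VALUES ('P-2001', 120.0)")

# The same policy entered through the application layer is complete.
enter_policy("P-2002", 120.0)

rows = conn.execute(
    "SELECT p.id FROM policy p "
    "LEFT JOIN billing_schedule b ON p.id = b.policy_id "
    "WHERE b.policy_id IS NULL"
).fetchall()
print(rows)  # only the directly loaded policy lacks a billing schedule
```

Real systems have far more of this hidden machinery than a two-table example, which is exactly why loading through the system’s own interfaces, however slow, is the safer route.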