Big Bang Data Migration (and Beyond)

Big Bang Data Migration

The 'Big-Bang' data migration has served us well for decades, but I am afraid it has had its day.

“What do you mean?” I hear you cry:

No more 48-hour sleepless shifts? No more frantic late-night calls to ask operations for “just one more hour”? No post-traumatic big-bang stress disorder? No more embarrassing system rollbacks?

No. I think it’s time to say farewell to the Big-Bang data migration for a number of reasons. This post will look at why Big-Bang has had its day and, more importantly, what we’re going to replace it with.

Introducing the Big-Bang Data Migration Strategy

If you’re starting out with data migration, let’s first explain what we mean by a Big-Bang data migration project.

A Big-Bang data migration execution is where you move an entire dataset from the legacy to the target system in one operation. This is typically carried out over a weekend or a planned downtime period. So on the Friday the business users are working in the legacy system, but come Monday morning they’re switched over to the target system.

Sometimes this is a seamless exercise for the business, as they’re not even aware of the switch. Other times it’s a big change: new applications, new front-end screens, new operating procedures and so on.

Whilst this sounds risky, a lot of companies operate a fail-safe strategy whereby they "parallel run" the old and new systems in tandem. Sometimes poor souls have to "double-key" data into the legacy system so that the target and legacy environments stay synchronised. In recent years there have been technological advances that make it easier to keep target and legacy systems in sync.

If the target system’s data and functionality are tested following migration and found to be a total mess, then the business can still switch back to the old system, defiantly say "I told you so" and advise the project leader to update their CV post-haste.

Challenges with a Big-Bang Data Migration Strategy

Data Migration Data Volumes Are Increasing (To Put It Mildly)

We’re in a world of Big Data (as if you didn’t already know) and, whilst Hadoop isn’t a reality in most companies, there is certainly a much larger volume of data to manage, even in mid-sized organisations.

What does this mean?

Well, it’s going to get harder and harder to migrate the volumes of data we’re seeing in most companies, even with the latest ETL and data movement software.

Before I get flamed by ETL vendors, the point I’m making is that even if you get a 48-hour downtime window in which to migrate your data, you still need to set aside time within it for testing and for a rollback strategy if things go wrong.

I was getting pretty close to the wire back in 2002 (even using Ab Initio and high-end servers), so I think it’s fair to say that in another few years, given the huge volumes we’re seeing today, the old ETL warhorses are going to struggle, particularly when you factor in the next issue.

COTS and Cloud APIs Increasingly Common

I’ve been involved with several projects that have hit a metaphorical brick wall during data migration because of a slow, complex API that can severely put the brakes on your migration throughput. Telecoms is a classic example.

In Telco land there are several systems that have a very complex API (application programming interface) because the underlying data model is a complex mashup of function and data relationships that traditional ETL tools simply cannot reach.

So, you have a situation where your data can be extracted with an ETL solution (or equivalent) at insanely high speeds through your extract, transformation and cleansing stages, only to screech to a halt when it meets an API that loads at sickeningly slow speeds.
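To make that bottleneck concrete, here is a minimal sketch of a rate-limited load stage. It isn’t any particular vendor’s API - the batch size, rate limit and function names are all hypothetical - but it shows why the load side, not the ETL tool, ends up dictating overall throughput.

```python
# Minimal sketch of an API-constrained load stage. All names and figures are
# hypothetical; in practice the rate limit comes from the target system's API.
import time

API_BATCH_SIZE = 500          # records the target API accepts per call (assumed)
API_CALLS_PER_SECOND = 2      # rate limit imposed by the target system (assumed)

def load_batch(batch):
    """Stub: push one batch of records through the target system's load API."""
    pass  # e.g. an HTTP POST to the provisioning system's load endpoint

def load_all(records):
    """Feed records to the API in batches, never exceeding its rate limit."""
    min_interval = 1.0 / API_CALLS_PER_SECOND
    for start in range(0, len(records), API_BATCH_SIZE):
        began = time.monotonic()
        load_batch(records[start:start + API_BATCH_SIZE])
        elapsed = time.monotonic() - began
        if elapsed < min_interval:
            # the target API, not the extract stage, sets the pace
            time.sleep(min_interval - elapsed)
```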

This is a very real problem and something you need to consider pre-migration, when you’re signing off which products you’ll buy and which specialists you’ll hire for the project.

For example, imagine you have an engineering system for a gas company containing 800,000,000 fixed assets and transactions. This needs to be migrated to a sophisticated new provisioning system whose API only supports 1,000 asset loads a second. That sounds impressive until you realise the physical migration alone will take over nine days of uninterrupted loading, before any allowance for failed loads, integrity failures and so on.
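The arithmetic is worth spelling out, because the result always surprises people. A quick back-of-the-envelope check, using the hypothetical figures above:

```python
# Back-of-the-envelope load-time estimate using the hypothetical figures above.
total_records = 800_000_000      # fixed assets and transactions to migrate
api_rate_per_second = 1_000      # sustained load rate the target API supports

seconds = total_records / api_rate_per_second
days = seconds / (60 * 60 * 24)

print(f"Physical load time: {days:.1f} days of uninterrupted loading")
# Physical load time: 9.3 days of uninterrupted loading
```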

Cloud computing is another challenge: it’s increasingly popular, yet many cloud platforms don’t offer the relational data access we’re typically accustomed to. Instead they force the user through some kind of API or load gateway that can seriously slow down load performance.

Alternatives to a Big-Bang Data Migration

So if Big-Bang is on its way out (and I believe it is), what other options are available?

Well, the first option is to design our applications differently in the first place. Part of the reason we have so many issues is that organisations build applications that tightly couple data with function and interface. By de-coupling these links, via SOA for example, you can make the application migrations of the future far simpler to execute.

For companies doing a COTS-to-COTS migration, of course, this isn’t an option. If you’re moving from “Big Billing System X” to “Big Billing System Y” then you’re stuck with what you’ve got.

The solution here is to look at iterative data migrations and phase in your migration over a period of weeks or even months.

The Iterative Data Migration Strategy

Iterative data migration goes by a number of names - phased data migration, trickle-feed data migration, synchronised data migration and so on. In essence, though, it all means the same thing: we’re going to move data in smaller increments until there is nothing left to move.

This is still quite a rare operation in data migration practitioner circles. I’ve worked with some companies who specialise in this area but it’s fair to say that most systems integrators and product vendors have been focused on the tried and tested Big-Bang projects.

There are two main challenges with an iterative data migration strategy:

  1. How can we keep our target and source systems synchronised until the full migration is complete?

  2. How can we coordinate the migration of distinct groups of business users and functionality without breaking overall business continuity?

So iterative data migration is as much a lesson in business integrity as data integrity. We effectively need to run two systems in tandem without screwing up either of them. Easier said than done.

Challenge 1: Keeping Data In-Sync Between Source and Target

There are a few things to consider here. Firstly, will your business users be using an entirely new application, or simply using the old application but with the migrated data? We typically perform data migration during a wholesale application change, but not always. With the prevalence of web application front-ends, our data may have resided on an old mainframe but now needs to reside on an Oracle database housed in a private cloud. If we’re carrying out an iterative data migration, how will your web app know which database to source its information from?
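One common answer is a thin routing layer that knows which regions (or customers, or accounts) have already been cut over, and points the application at the right database. A minimal sketch, with purely hypothetical connection names and a made-up control list:

```python
# Minimal sketch of a data-source router for an iterative migration.
# The connection identifiers and migrated-region list are hypothetical; in
# practice the list would be a control table maintained by the migration team.

LEGACY_DSN = "legacy-mainframe-gateway"
TARGET_DSN = "oracle-private-cloud"

migrated_regions = {"north", "scotland"}   # regions already cut over to the target

def data_source_for(region: str) -> str:
    """Return the connection the web app should read this region's data from."""
    return TARGET_DSN if region in migrated_regions else LEGACY_DSN

print(data_source_for("north"))   # oracle-private-cloud
print(data_source_for("wales"))   # legacy-mainframe-gateway
```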

In the situation where you’re moving to a completely new application, the problem really becomes one of running business operations across both systems, but with some form of data sync in place.

Sometimes this sync is mono-directional. Perhaps you migrate a particular region across from the source to the target system and re-route that region’s web and phone enquiries to the new target system. The old system might still need an update of the target’s transactions, so you sync back from target to legacy.

Bi-directional sync is also increasingly required on iterative migrations. For example, if you’re migrating a retail stock management system and you decide to migrate iteratively by region, you will obviously have stock-in-transit information that links regions together. Both the legacy and target systems need the latest data for their view of the world to make sense.
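To make the sync idea concrete, here is a minimal sketch of a mono-directional sync pass that copies new target-system transactions back to the legacy system for one migrated region. Every function and field here is a hypothetical stand-in; in a real project this would be a CDC feed, a replication tool or a scheduled ETL job.

```python
# Minimal sketch of a mono-directional sync pass (target -> legacy) for one
# migrated region. All functions and fields are hypothetical stand-ins.
from datetime import datetime

def fetch_new_target_transactions(region, since):
    """Stub: return target-system transactions created after `since`."""
    return [
        {"id": 101, "region": region, "amount": 42.50},
        {"id": 102, "region": region, "amount": 18.00},
    ]

def apply_to_legacy(txn):
    """Stub: write a target transaction back into the legacy system."""
    print(f"Legacy updated with target transaction {txn['id']}")

def sync_region(region, last_sync):
    """Copy target-side changes for a migrated region back to the legacy
    system, so un-migrated regions still see a consistent view of the data."""
    for txn in fetch_new_target_transactions(region, since=last_sync):
        apply_to_legacy(txn)

sync_region("north", last_sync=datetime(2024, 1, 1))
```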

Challenge 2: Migrating the Business Iteratively

This calls for a pragmatic approach. You’re never going to achieve this perfectly, so it will require some temporary procedures to manage the split between source and target systems.

For example, you could update your call-handling process so that if a customer rings about a particular type of product or service, they get re-routed to agents who are working in the target system.

If you have a nationwide parts system, you could migrate iteratively by manufacturer, e.g. Ford or BMW. When customers ring up about a specific manufacturer, your users can be trained to use the appropriate system.

If you’re migrating patient data from regional systems to a new national healthcare system, you may elect to migrate one region at a time. In this situation you would in effect be running a Big Bang migration for each region, but perhaps mono-syncing patient data back from the target system to the legacy regional system (so that the other regional systems can still pull data from it). Eventually, when all of the regional systems have migrated to the national system, you can shut them down and have all business users on the target system.

The Bright New Future for the Iterative Migration Strategy

So, as we have seen, the days are probably numbered for the Big-Bang approach, certainly on large and complex data migrations.

Yes, they’re still popular but I really don’t see how we can survive the next decade without a major move towards iterative, agile data migration strategies.

The complexity and sheer scale of modern data volumes may mean you have to think outside of the box on your next data migration project.

About the Author

Dylan Jones

Co-Founder/Contributor - Data Migration Pro
Principal/Coach - myDataBrand

Dylan is the former editor and co-founder of Data Migration Pro. A former Data Migration Consultant, he has 20+ years’ experience of helping organisations deliver complex data migration, data quality and other data-driven initiatives.

He is now the founder and principal at myDataBrand, a specialist coaching, training and advisory firm that helps specialist data management consultancies and software vendors attract more clients.