How to avoid a data migration traffic-jam: the load-rate bottleneck explained

The key to successful data migration is to identify areas of major risk well in advance.

In this best-practice article we explore the dangers and impacts caused by the “load-rate bottleneck” scenario.

By following some simple guidelines, we will help you spot future data migration traffic-jams well in advance, leaving you ample time to remedy the problem.

Introduction

When creating a plan for any project it is important to identify potential bottlenecks: those people, process or technology factors that no amount of money, resource or ingenuity will remove.

One type of bottleneck that has become increasingly common in recent years, driven by changes in target platform architectures, is the load-rate bottleneck.

What is the load-rate bottleneck?

The load-rate is the speed at which you can physically load data into a target system.

There is typically a point beyond which the system simply cannot load data any faster, given your budgetary or technology limitations.

Quick example: imagine a target interface that loads an average of 10 objects per second. If we have 10,000 objects to migrate, the estimated migration duration would be 1,000 seconds, or just under 17 minutes.
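To make the arithmetic explicit, here is a minimal Python sketch of the same estimate (the figures are illustrative only):

```python
def estimated_duration_seconds(object_count: int, load_rate_per_sec: float) -> float:
    """Estimate total load time for a given sustained load-rate."""
    return object_count / load_rate_per_sec

seconds = estimated_duration_seconds(10_000, 10)
print(f"{seconds:,.0f} seconds (~{seconds / 60:.1f} minutes)")  # 1,000 seconds (~16.7 minutes)
```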

The traditional approach

In traditional data migration architectures, the typical approach is for the migration tooling (e.g. ETL/ELT platforms or bespoke scripts) to load data directly into a relational database.

Even with the use of a staging area to store the data during the migration process, the rate of data transfer from legacy to target environments can be extremely high. We can get really clever by dropping constraints and indexes so that data loads are even faster.
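Here is a minimal sketch of that drop-load-rebuild pattern. SQLite is used purely so the example is self-contained and runnable; a real migration would target an enterprise RDBMS, but the principle is the same:

```python
import sqlite3

# Classic bulk-load pattern: drop indexes before a large insert,
# then rebuild them once afterwards, avoiding per-row index maintenance.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE assets (id INTEGER, name TEXT)")
conn.execute("CREATE INDEX idx_assets_name ON assets (name)")

rows = [(i, f"asset-{i}") for i in range(100_000)]

conn.execute("DROP INDEX idx_assets_name")                     # remove index overhead
conn.executemany("INSERT INTO assets VALUES (?, ?)", rows)     # bulk load
conn.execute("CREATE INDEX idx_assets_name ON assets (name)")  # rebuild once at the end
conn.commit()
```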

ETL vendors have consistently increased the performance of their products in an attempt to gain a competitive edge. This has resulted in significant benefits for the end-customers in the form of ever increasing data transfer rates, greater parallelisation and more effective technology utilisation.

But this kind of throughput is largely dependent on easy access to the underlying database. And this is where the problem lies.

Challenges of the new approach

There is a new breed of target systems that presents a headache for integration and migration designers. Many new COTS (Commercial Off-The-Shelf) systems are built on architectures (e.g. SOA) that prevent direct access to the underlying database.

These platforms are great for organisations because they abstract the underlying technologies, allowing exciting new services to be integrated quickly (in theory) and creating far more agile and competitive solutions for the business to exploit.

However, there is now a catch for the unwary migration planner.

Loading data into these systems must now be done through an approved API or data gateway. Any other route (e.g. loading directly into the database) could actually invalidate your support agreement, and is often so complex that it simply is not feasible.
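As a sketch of what API-mediated loading looks like in practice (the endpoint, payload shape and batch size below are entirely hypothetical, not from any particular product):

```python
import requests

API_URL = "https://target.example.com/api/v1/assets"  # hypothetical gateway endpoint
BATCH_SIZE = 50

def load_objects(objects: list, session: requests.Session | None = None) -> None:
    """Push objects through the approved API in batches.

    Note that the gateway, not the database, now dictates the achievable load-rate.
    """
    session = session or requests.Session()
    for start in range(0, len(objects), BATCH_SIZE):
        batch = objects[start:start + BATCH_SIZE]
        response = session.post(API_URL, json={"objects": batch}, timeout=30)
        response.raise_for_status()  # fail fast on rejected batches
```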

As a result of these new target interfaces, you may find that your data migration project effectively hits a “brick wall”, where no amount of people, technology or investment will improve your data migration load-rate.

With reduced load-rates come reduced migration implementation options. “Big-bang weekends”, where hundreds of millions of objects are migrated over a long weekend using powerful ETL solutions and high-end servers, could increasingly become a thing of the past.

Migration planners may need to explore more flexible and agile approaches that involve migrating data during live operations to cope with this lack of downtime. Many operations centres, for example, never have downtime, so the window of opportunity to extract and load the data can be extremely limited indeed.

Take a typical scenario for a large utilities organisation that is aiming to migrate 25,000,000 inventory assets as part of a new corporate acquisition.

Imagine that the business grants 2 hours of downtime per day during which data may be migrated from the live platform to the new target platform.

That gives only 14 hours of available extraction and load time per week!

Now, let us assume a rather pessimistic target load-rate of 100 objects per second.

With these figures we can estimate the following:

14 × 60 × 60 = 50,400 available seconds for data transfer per week.
50,400 × 100 = 5,040,000 objects can be migrated per week, giving an estimated loading period of roughly 5 weeks (25,000,000 ÷ 5,040,000 ≈ 4.96).

But, examine the impact if our load-rate drops to 10 objects per second:

50,400 × 10 = 504,000 objects per week, leading to a loading period of approximately 50 weeks (25,000,000 ÷ 504,000 ≈ 49.6).

Nearly an entire year just to load the data, and this doesn’t even include any design, development or testing.
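The same back-of-envelope maths expressed as a short Python sketch, so you can plug in your own figures (the constants simply restate the scenario above):

```python
def loading_weeks(total_objects: int, window_hours_per_week: float, rate_per_sec: float) -> float:
    """Estimate the loading period in weeks given a weekly downtime window."""
    objects_per_week = window_hours_per_week * 3600 * rate_per_sec
    return total_objects / objects_per_week

TOTAL_OBJECTS = 25_000_000
WINDOW_HOURS = 14  # 2 hours per day, 7 days a week

print(f"At 100 obj/sec: {loading_weeks(TOTAL_OBJECTS, WINDOW_HOURS, 100):.1f} weeks")  # ~5.0
print(f"At 10 obj/sec:  {loading_weeks(TOTAL_OBJECTS, WINDOW_HOURS, 10):.1f} weeks")   # ~49.6
```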

The problem is made worse by the fact that many system vendors do not publish target load-rate figures. The onus is often on the client or integrator to establish the interface performance for their particular configuration.

Admittedly, this example may be extreme, and load-rates may be far higher given high-end servers and efficient tuning. The fact remains, however, that many of the new target systems currently being deployed will never perform at the same speed as the traditional bulk-load operations that many organisations have become accustomed to.

The previous calculation also shows that loading time is inversely proportional to load-rate, so at the lower end a drop of just a few objects per second can add weeks to the overall loading time.

This clearly presents a risk and if we are following best practice, it’s a risk that should be managed.

So what’s the answer?

The solution, as is so often the case in data migration projects, is to hope for the best but plan for the worst.

Be fully prepared: obtain trusted, accurate intelligence on load-rate performance for your particular configuration at the earliest possible opportunity.

Gather as much information as you can around the things you can determine. For example, here are some common questions you may wish to pose to the business and technical teams right from the outset of the project:

  • What is the window of opportunity the business is willing to grant for export and load operations in the live environments?

  • How many objects does the business require for migration?

  • Are the objects all uniformly complex or do they vary?

  • Does the target system have different application interfaces depending on migration operation or object type? (For example, some systems use a completely different interface for loading certain types of business objects; you may have to load data via one interface and update data via another.)

  • What typical load-rate figures can you obtain from the target system vendor or from other professionals (e.g. via networking with other members of Data Migration Pro)?

  • Which consultants are available who specialise in the particular interface you are loading into? (Approach them well in advance, as they may prove difficult to secure.)

  • Does the business have a drop-dead date for go-live or is a phased approach acceptable?

  • Is your data migration technology capable of interfacing to these new systems or will you need to invest in additional tools or bespoke software?

It is advisable to create a pilot project to establish indicative load-rates. Business sponsors may not see the value in this so early in the project, as it may require significant investment to lease high-end servers and hire the necessary expertise. You should therefore map out all the project sponsors and explain to them the risks involved and what a major project delay would mean for their corporate objectives.
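A pilot need not be elaborate: at its simplest it times the loading of a representative sample through the real interface. A minimal sketch, where `load_batch` is a hypothetical stand-in for whatever loading mechanism your target mandates:

```python
import time

def measure_load_rate(objects: list, load_batch, batch_size: int = 100) -> float:
    """Time the loading of a representative sample and return objects/second.

    `load_batch` is a placeholder for the mandated loading mechanism
    (approved API, vendor tool, etc.) in your environment.
    """
    start = time.perf_counter()
    for i in range(0, len(objects), batch_size):
        load_batch(objects[i:i + batch_size])
    elapsed = time.perf_counter() - start
    return len(objects) / elapsed
```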

Never wait until the development or testing phase to commence load-rate testing. By then it may well be too late to hit the final go-live date if you do encounter the load-rate bottleneck.

If you have no access to indicative figures and the business will not sponsor an assessment, then the only option is to create both an optimistic and a pessimistic set of values to help you build a risk profile for your project. Examine the two sets of figures and arrive at an estimated loading period based on business and technical limitations. Ensure this loading period is included in the project Gantt chart, because you may now have to re-prioritise or de-scope the migration.
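Using the estimating sketch from earlier, the two scenarios can sit side by side in the risk profile (the rates below are placeholders for your own optimistic and pessimistic estimates):

```python
def loading_weeks(total_objects, window_hours_per_week, rate_per_sec):
    return total_objects / (window_hours_per_week * 3600 * rate_per_sec)

optimistic = loading_weeks(25_000_000, 14, 150)   # assumed best-case vendor figure
pessimistic = loading_weeks(25_000_000, 14, 10)   # assumed worst-case observed rate
print(f"Estimated loading period: {optimistic:.1f} to {pessimistic:.1f} weeks")
```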

As already mentioned, leverage the community within Data Migration Pro and other organisations to find others who have implemented similar projects, so you can at least gain some confidence in your figures if the vendor or solution partner is not forthcoming with statistics from previous engagements.

Summary

Organisations that implement service-oriented architectures and certain COTS solutions must be mindful that there is a potential downside.

The increased flexibility and agility that these systems afford may mean that some organisations pay the price in the form of a delayed go-live due to longer loading times.

As ever, effective planning and a healthy aversion to risk could be invaluable resources on your next project.

 About the Author


Dylan Jones

Co-Founder/Contributor - Data Migration Pro
Principal/Coach - myDataBrand

Dylan is the former editor and co-founder of Data Migration Pro. A former Data Migration Consultant, he has 20+ years’ experience of helping organisations deliver complex data migration, data quality and other data-driven initiatives.

He is now the founder and principal at myDataBrand, a specialist coaching, training and advisory firm that helps specialist data management consultancies and software vendors attract more clients.