Logical and conceptual data models are a rarity in data migration projects, but when adopted correctly they can deliver significant benefits to both the business and technical communities.
In this post we take a closer look at some of the reasons why they can prove so useful.
What are logical and conceptual models?
Most people involved in the more technical aspects of data migration will be well versed in the use of physical data models. Physical models give us a blueprint of how data is to be stored and related in a particular database system. That level of detail makes physical models far too complex for our needs, particularly in the earlier analysis and discovery phases of a migration.
In contrast, logical and conceptual models provide greater insight into the underlying relationships of the data at a level that is meaningful to the business.
Logical models provide some of the vital information we need about the entities that matter to the business but without the “noise” of a physical model.
Here are some of the typical elements we record in a logical model:
- Business name of an entity
- Relationships between entities
- Unique key for an entity
- List of attributes
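As a minimal sketch of the elements above, a logical-model entry could be represented like this (the entity and attribute names are hypothetical examples, not from any real project; in practice these models live in a modelling tool, not code):

```python
from dataclasses import dataclass, field

@dataclass
class Entity:
    """One entry in a logical model: the business view of an entity."""
    business_name: str                       # business name of the entity
    unique_key: list[str]                    # attributes forming the unique key
    attributes: list[str] = field(default_factory=list)   # list of attributes
    relationships: list[str] = field(default_factory=list)  # related entities

# Hypothetical example entry
customer = Entity(
    business_name="Customer",
    unique_key=["customer_number"],
    attributes=["customer_number", "name", "segment"],
    relationships=["Order"],
)
print(customer.business_name, customer.unique_key)
```

Note that nothing here says how the data is physically stored; that is exactly the "noise" the logical model leaves out.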
The conceptual model has to be one of the most under-utilised yet most beneficial resources in any data migration.
A conceptual model records very high-level business objects and defines their relationships. For example, we may begin with a simple model that records the relationships between subjects such as:
- Equipment etc.
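Stripped to its essence, a conceptual model is little more than a set of subjects and the relationships between them. A minimal sketch (the subject names other than "Equipment" and the relationship wording are hypothetical illustrations):

```python
# High-level subjects and the relationships between them -- the whole
# conceptual model, with no attributes, keys, or storage detail.
subjects = {"Customer", "Service", "Equipment"}

relationships = [
    ("Customer", "subscribes to", "Service"),
    ("Service", "is delivered by", "Equipment"),
]

for source, verb, target in relationships:
    print(f"{source} {verb} {target}")
```

Even a handful of such statements is often enough to anchor an early workshop discussion.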
Why are logical and conceptual data models so vital to data migration projects?
At this point some of you may be wondering how such high-level models can add value in a technically focused activity like data migration.
Here are some of the many ways these models can add value to a data migration project:
- Structuring exploratory business workshops: Every data migration has a learning curve where the project team begins to learn more about how the legacy (and target) businesses operate. By starting your workshop with a discovery activity to create a conceptual data model, you immediately focus attention on the data, and you get all parties actively discussing the legacy and target environments at a level that does not descend into minute technical detail.
- Resolving “turf wars” and political issues early in the project: By creating a more business-focused model we can identify where ownership conflicts may be taking place. In any migration there are political sensitivities, as major business change is often a driver for the migration process. By creating higher-level maps we can quickly ascertain where there are overlaps of data ownership that may lead to issues when we wish to decommission the legacy environment.
- A useful tool for scoping the legacy landscape and allocating appropriate resources: By creating high-level business objects we get a far more manageable view of what data is involved in our migration. If we are only presented with a physical model, its sheer complexity and lack of business context can delay the scoping process considerably.
- Great for partitioning workload and “chunking” the migration: Another byproduct of the scoping exercise is that the workload becomes easier to partition. With the project broken down into higher-level subject areas, we can identify the necessary expertise for each domain and create more accurate plans for each area.
- Helps find “data gaps” and “hidden” data stores early in the project: By creating an initial conceptual model, followed by lower-level logical models, we can identify very quickly where there are gaps in the legacy data stores that may need further investigation. For example, our business analysts may indicate that the business manages a relationship between a particular service and a customer segment that does not appear to be found in the physical systems in scope. Very often this data is held in a private store in the form of local paper records or spreadsheets etc.
- Provides more focused business process analysis: Opinion is mixed on whether business process analysis should form part of a data migration project, but there is no escaping the fact that a deep understanding of both the legacy and target environments is highly beneficial to delivering a “fit-for-purpose” migration. Identifying entity relationships at a more business-focused level makes deeper analysis of the business processes far easier and more relevant.
- Helps to prioritise migration design and build: Progressive and incremental migrations are now increasingly favoured over the traditional “big-bang” approach. Progressive migrations need a great deal of focus on what data to migrate so that the business can gain maximum advantage. By creating conceptual and logical data models we can more easily converse with the business to understand which data items are critical to success on the target platform and which items can be delayed.
- Helps to align target application build with the target migration design: Very few migrations have a “firmed-up” target structure at project kick-off. Most target schemas vary significantly throughout the migration build, and this can cause severe delays and introduce obvious risk. A common logical data model that is version-controlled and jointly agreed between both parties provides a much easier means of communication than complex spreadsheets or design documents.
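The “data gap” check described in the list above can be reduced to a simple comparison: the relationships the business describes versus the relationships actually implemented in the physical schema. A minimal sketch, with all entity names hypothetical:

```python
# Relationships the business says exist (from the logical model)
logical_relationships = {
    ("Service", "CustomerSegment"),   # described by the analysts
    ("Customer", "Order"),
}

# Relationships actually implemented as foreign keys in the legacy database
physical_foreign_keys = {
    ("Customer", "Order"),
}

# Anything the business describes but the database lacks is a candidate
# "data gap" -- data possibly held in spreadsheets or paper records.
gaps = logical_relationships - physical_foreign_keys
for parent, child in sorted(gaps):
    print(f"Gap: no physical link found between {parent} and {child}")
```

In a real project the physical side would be harvested from the database catalogue rather than typed in by hand, but the comparison itself stays this simple.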
Have you used logical or conceptual data models in your data migration projects? Were they a help or a hindrance? Why not share your views below?