D365FO Data Migration Strategy
Introduction
Data migration is the process of obtaining data from the existing (legacy) systems and mapping it to the data requirements of the new system. Generally, this is the result of introducing a new system or a new location for the data. The business driver is usually an application migration or consolidation in which legacy systems are replaced or augmented by new applications that will share the same dataset. These days, data migrations are often started as firms move from on-premises infrastructure and applications to cloud-based storage and applications to optimize or transform their business. The data mapping (or the lack thereof) will highlight the transformations required as well as the cleansing activities needed to ensure the new system(s) is/are populated with current (and not redundant) data.
These data migration activities are driven by the Key Users, as it is the business that owns the existing data, knows best how the data is currently used, and will ultimately own the data in the new system.
Development and testing of the migration follows an iterative approach and will be performed a number of times during the test cycle.
The migration strategy aims to do the following:
In order to achieve the above objectives, we aim to:
Migration Planning
A detailed data migration plan is the essential first step in a successful data migration project: selecting, preparing, extracting, transforming and transferring data of the correct form and quality. At a high level, data migration consists of the following activities:
Analyse
The data first needs to be analysed and the relevant information retrieved from the data sources in a structured way.
Pre-process
In this step the data is transformed into a format that can be processed more easily and effectively for the purpose of the user. There are a number of different tools and methods used for pre-processing, including:
Extract
This process involves retrieval of data from the data packages. The data is extracted in order to process it further and migrate it to a data repository (such as a data warehouse or a data lake). It is common to transform the data as part of this process.
Load
Once the data is extracted and transformed into the specified format, the packages load the data into staging tables.
Validate
Once the data is uploaded to staging, the data validation checklist process is executed to validate the data. This validation is performed at table level as well as at data entity level.
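As an illustration of this step, the sketch below assumes the source extract and the staged data are available as CSV files and runs two simple checks: a table-level row count comparison and an entity-level mandatory-field check. The file names, column names and mandatory-field list are placeholders to adapt to your entities.

```python
import csv

# Hypothetical extract/staging files and rules -- adjust to your entities.
SOURCE_FILE = "legacy_customers.csv"
STAGING_FILE = "staging_customers.csv"
MANDATORY_FIELDS = ["CUSTOMERACCOUNT", "CUSTOMERGROUPID", "SALESCURRENCYCODE"]

def load_rows(path):
    with open(path, newline="", encoding="utf-8-sig") as f:
        return list(csv.DictReader(f))

source_rows = load_rows(SOURCE_FILE)
staging_rows = load_rows(STAGING_FILE)

# Table-level check: the staging row count should match the source extract.
if len(source_rows) != len(staging_rows):
    print(f"Row count mismatch: source={len(source_rows)}, staging={len(staging_rows)}")

# Entity-level check: mandatory fields must be populated before moving to target.
for i, row in enumerate(staging_rows, start=1):
    missing = [f for f in MANDATORY_FIELDS if not (row.get(f) or "").strip()]
    if missing:
        print(f"Row {i} ({row.get('CUSTOMERACCOUNT', '?')}): missing {', '.join(missing)}")
```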
Target
Once the validation is completed, the data is moved from the staging tables to the target tables.
Data Types
The legacy system data needs to be categorised as below; the migration activity is organised around these categories.
Configuration data contains the company and system setup information. In D365FO this is called the golden configuration, which usually lives in the gold environment. Starting from a vanilla system, the base company setups need to be completed before beginning the DM activity.
e.g. system and module parameters
Setup Data is an extension of the setup and configuration and can have multiple values. Depending on the volumes, this information can be set up manually, or extracted from the old systems or from site "companies" that are already live and then loaded into the new site company account.
e.g. payment terms, shipping terms
Transactional data is the data currently in the legacy system that will be extracted, transformed to adhere to the new validation rules, and then loaded into the new system.
As part of the data migration strategy, [Company] must decide whether only current or open transactions will be migrated, or whether historical data will be migrated as well. This is defined below in Table 5.
e.g. postings, orders, order lines, journals
The following section provides information on Master Data, i.e. data which changes infrequently. This table will need to be re-evaluated and adjusted for each site deployment.
e.g. customers, vendors, products
Process and best practices
The data migration process begins with the configuration and Setup Data, which is copied from the Base data environment into the Data Migration (and all other) environments periodically. The Master and Transactional data will then be migrated into the Data Migration environment.
Once the data migration has been completed successfully, the Key Users will test, validate and verify the data so that they are able to sign off the data take-on at go-live. The Key Users liaise with the Data Migration lead to correct extracts, templates or scripts if need be. Migration will be iterative, and key users will need to validate the data after each iteration and initiate any needed changes.
Below is detailed information about the overall process and best practices.
Environments
For any standard data migration, two environments are mainly required to keep configuration and transactional data isolated. The environment called 'Gold' holds the configuration data; as per standard best practice, this environment should not be used for any transactional data upload.
The second environment, called 'DM', is used to hold master and transactional data on top of the configuration, which means the base data should be copied from 'Gold' first, before importing other data.
Below is the high-level data flow.
Data Flow
The diagram below describes the detailed data flow into the FO system from the various environments during implementation.
Migration steps
The following are the steps for the data migration:
1. Default data templates: Dynamics 365 Finance and Operations comes with a set of standard data templates that include commonly used entities that are sequenced to provide a starting point for users to export data from the system. Standard templates are updated with each release of D365FO to reflect changes in the application.
2. Custom templates: Custom data templates can be created either manually or by modifying default templates to meet specific business needs. For instance, you can load template ID '190 – My Template' from LCS and update it with the necessary entities based on your requirements. If you need to change the template ID, you have the option to either rename the existing template or create a new one and import the updated data.
In this step, we may have to consider data transformation, where data lengths and types should match those expected by FO.
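A minimal sketch of that kind of transformation is shown below, assuming a hand-maintained mapping of target field lengths and types; the field names, limits and file names are illustrative rather than taken from a specific FO entity.

```python
import csv

# Illustrative target field definitions: (max length, type converter).
FIELD_RULES = {
    "NAME": (100, str),
    "CREDITLIMIT": (None, float),
    "PAYMENTTERMS": (10, str),
}

def transform_row(row):
    """Trim and convert a legacy row so lengths and types match the FO template."""
    out = {}
    for field, (max_len, caster) in FIELD_RULES.items():
        value = (row.get(field) or "").strip()
        if caster is not str:
            # Non-numeric input will raise here, surfacing bad data early.
            value = caster(value) if value else caster()
        elif max_len and len(value) > max_len:
            print(f"Truncating {field}: '{value[:20]}...' exceeds {max_len} chars")
            value = value[:max_len]
        out[field] = value
    return out

with open("legacy_vendors.csv", newline="", encoding="utf-8-sig") as src, \
     open("fo_vendor_template.csv", "w", newline="", encoding="utf-8") as dst:
    writer = csv.DictWriter(dst, fieldnames=list(FIELD_RULES))
    writer.writeheader()
    for row in csv.DictReader(src):
        writer.writerow(transform_row(row))
```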
Note: if, during the data migration process, customization is ongoing on entities that are in scope for DM, re-mapping needs to be done frequently in line with the changes.
The DevOps structure below can be followed for the DM activity:
Configuration data
Setup Data
Master Data
Transactional data
Within each high-level step there will be sub-steps, e.g. customer records will need to be uploaded before customer addresses can be uploaded, and vendor accounts will need to be created before vendor bank account details can be uploaded.
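These dependencies can be made explicit and the upload order derived from them. The sketch below uses Python's standard graphlib (Python 3.9+) on an illustrative, incomplete dependency map; the resulting order can be copied into the cutover documentation.

```python
from graphlib import TopologicalSorter

# Each entity maps to the entities that must be loaded before it (illustrative only).
dependencies = {
    "Customer addresses": {"Customers"},
    "Customer bank accounts": {"Customers"},
    "Vendor bank accounts": {"Vendors"},
    "Customers": {"Customer groups", "Terms of payment"},
    "Vendors": {"Vendor groups", "Terms of payment"},
}

# static_order() yields a valid upload sequence that respects every dependency.
upload_order = list(TopologicalSorter(dependencies).static_order())
print(" -> ".join(upload_order))
```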
The sequence of these activities should be stated in the Cutover Register, together with the duration, the responsible person, and any dependencies.
Recommendations
High-level strategic approach
The following steps will be used to assess the specific needs for each site, and the decisions will be recorded within the site-specific Data Management plan. See Appendix A.
See Table 2, Table 3 and Table 4 above
Migration cycle
Precursors
Configuration & setup data to be entered or seeded during the design phase/process
Setup Data – supporting tables entered/uploaded as part of design
Cycle 1
Upload sample Master Data
Customers
Products
Vendors
Employees
etc.
Test/verify sample data
Run migration test scripts including:
Data inspection
Run reports
Transact
Review cycle 1
Validate the approach, strategy, data quality and tools used. If necessary:
Further data cleansing
Further education/training
Change migration approach (tool)
Modify templates
Adjust Base data
At this point an assessment will be made to determine if the results are sufficiently acceptable for the migration to move on to cycle 2 or if cycle 1 should be repeated. If the results are deemed acceptable, then this migration will be replicated into the CRP1 environment.
Cycle 2
Upload volume Master Data
Customers
Products
Vendors
Employees
etc.
Test/verify volume data
Run migration test scripts including:
Data inspection
Run reports
Transact
Review cycle 2
Validate the approach, strategy, data quality and tools used. If necessary:
Further data cleansing
Further education/training
Change migration approach (tool)
Modify templates
Adjust Master data
Cycle 3
Upload volume Master Data and representative transactional data
Customers
Products
Vendors
Employees
Sales orders
Production orders
Purchase orders
etc.
Test/verify volume data
Run migration test scripts including:
Data inspection
Run reports
Transact
Review cycle 3
Validate the approach, strategy, data quality and tools used. If necessary:
Further data cleansing
Further education/training
Change migration approach (tool)
Modify templates
Adjust Base data
It is intended that at this stage the process is sufficiently refined that it can be signed off as fit for purpose to use for the go-live. If the results are not acceptable, further migration test cycles will be performed. At this point the process should be acceptable for use in creating data in the CRP2 environment.
Deployment Phase
The following data migration activities are performed during deployment (prior to go-live):
Migration of data (from the source or legacy systems) to the new system's production environment is the final critical activity of data migration, and is key to project success as well as user adoption. Data migrated into Production has to be timed accurately so as to ensure that users see the most recent data when the system is available to them. This activity is usually split into:
Initial migration of data into Production
The initial migration of data into production is performed to avoid overloading the systems during the go-live weekend, and is usually done prior to the actual go-live. This data typically includes Master Data.
Final data migration into Production
The final data migration is done over the weekend prior to go-live. The final data load will consist of all transactional data.
Sign-off
The last step in the data migration process is sign-off. The system cannot be transferred into operation if the data take-on reports do not balance and are not signed off by the key users.
Risks and mitigation
Effective data migration strikes the optimal balance between data accuracy, migration speed, low or no downtime, and minimum cost. The following are data migration success factors, requirements and/or possible risks:
During the data migration process, data loss can occur. When the data is migrated to the new system or target system, some of the data may not migrate over from the source system.
This risk can be mitigated by counting records and cross-verifying data between the source and target systems.
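A simple reconciliation along those lines is sketched below, assuming record counts per entity have been collected from both the legacy extracts and the target system (for example via entity exports); the entity names and counts are placeholders.

```python
# Record counts collected from the legacy extracts and from the target system
# (e.g. via D365FO entity exports) -- values are illustrative.
source_counts = {"Customers": 10412, "Vendors": 3120, "Released products": 8750}
target_counts = {"Customers": 10412, "Vendors": 3118, "Released products": 8750}

for entity, expected in source_counts.items():
    actual = target_counts.get(entity, 0)
    status = "OK" if actual == expected else f"MISMATCH ({expected - actual} missing)"
    print(f"{entity:20} source={expected:7} target={actual:7}  {status}")
```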
Even when the data migration process is done efficiently, semantics errors can occur. For example, if there is a field called “daily average” in the source data, the information from that field could be migrated over into a different column or field in the target system. This inconsistency in data could result in many issues for organizations and IT businesses that need to migrate their data to new environments. For this reason, data migration testing is highly recommended when migrating large amounts of business data.
The risk of extended downtime comes when the data migration process takes longer than expected. During the migration process, the source system is not active, so this poses potential risks for organizations and stakeholders.
This risk can be mitigated by using a non-production environment; it is highly recommended not to perform any data migration activity directly on production.
When organizations apply rules and validations on the target system, data corruption can occur. Unwanted data can be migrated into the new system, leading to potential crashes and data corruption, which can result in errors for the end user that uses the application.
To deal with this risk, the D365FO data consistency check can be run, which provides details of all data-related issues.
The target application, or target system, can be unstable for a number of reasons, including improper development, improper coding of the new application, or improper coding of business requirements into the new system. These issues increase the risk of the new system being unable to fulfill business needs for end users.
Data migration testing in the DM environment can combat these risks, as any data entity related changes need to be tested and verified there before the actual data import.
With orchestration risk, the issue occurs when the processes of data migration are not performed in order. The order of data migration is extremely important, especially since there are varied dependencies between the various business objects.
Sequencing needs to be defined in the data import project for all dependent data entities to mitigate this risk.
Interference risk is particularly problematic when multiple stakeholders make use of the application during the migration process simultaneously. An example of this risk affecting business operations can occur when a stakeholder locks a particular table, making it impossible for other stakeholders to make use of the same table.
To mitigate this risk, data migration operations should be performed during non-business hours.
With configuration mismatch risks, the issue always lies with the target application. The target system may not be in sync with the base configuration setup, which may cause configuration mismatch issues.
To mitigate this risk, the target application should be synced with the base configuration frequently, as soon as there is any configuration change. A DevOps task needs to be maintained to track such changes.
During data import, many records may fail due to data validation or due to missing associated master data.
In such cases, the DM team usually coordinates with the data provider/client to correct the data by highlighting the error records.
If there are many issues with the data, the whole DM may be delayed; to avoid this, the process below can be followed.
Data Automation and testing
Overall, data migration is largely a manual process as it requires a lot of human interaction, but a certain level of automation can be achieved by using the D365FO data task automation feature.
Data task automation lets you easily repeat many types of data tasks and validate the outcome of each task. Data task automation is very useful for projects that are in the implementation phase. For example, you can automate the creation and configuration of data projects. You can also configure and trigger the execution of import/export operations, such as the setup of demo data and golden configuration data, and other tasks that are related to data migration. You can also create automated testing of data entities by using task outcome validation.
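DTA itself is configured through manifests inside the application; for scripted package import outside the client, the standard Data management package REST API can be used. The sketch below is a hedged illustration: the OData actions (GetAzureWriteUrl, ImportFromPackage), their parameters and response shapes are used as we understand them from Microsoft's documentation and should be verified for your version, and all URLs, IDs and file names are placeholders.

```python
import json
import requests

# Placeholder values -- replace with your environment, AAD app, and data project.
ENV = "https://yourenv.operations.dynamics.com"
TOKEN = "<bearer token obtained via Azure AD client credentials>"
HEADERS = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}
ACTIONS = f"{ENV}/data/DataManagementDefinitionGroups/Microsoft.Dynamics.DataEntities"

# 1. Ask FO for a writable blob URL and upload the data package (zip) to it.
resp = requests.post(f"{ACTIONS}.GetAzureWriteUrl",
                     headers=HEADERS,
                     json={"uniqueFileName": "CustomersPackage"})
blob_url = json.loads(resp.json()["value"])["BlobUrl"]
with open("CustomersPackage.zip", "rb") as pkg:
    requests.put(blob_url, data=pkg, headers={"x-ms-blob-type": "BlockBlob"})

# 2. Trigger the import into the named data project / legal entity.
resp = requests.post(f"{ACTIONS}.ImportFromPackage",
                     headers=HEADERS,
                     json={"packageUrl": blob_url,
                           "definitionGroupId": "Customer import",
                           "executionId": "",   # let FO generate one
                           "execute": True,
                           "overwrite": True,
                           "legalEntityId": "USMF"})
print("Execution id:", resp.json().get("value"))
```

The returned execution id can then be polled until the import reaches a terminal status, for example via the GetExecutionSummaryStatus action documented alongside ImportFromPackage.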
Data packages from the Shared Asset Library or Project Asset Library can be downloaded and imported automatically into D365FO using Data Task Automation (DTA), which is available in the Data management workspace. The high level data flow diagram is shown below:
Detailed information about data automation and testing is available from the link below.
Use cases / Data management scenarios
Standard data migration
The standard data migration process can be followed for a fresh D365FO implementation where you have to migrate the data from a non-AX legacy system. The key work in this scenario is to extract the required data from the legacy system and map it to AX.
Migration from legacy AX system
Below is the process recommended by Microsoft for data migration from a legacy AX system. If you are upgrading your system from AX 2012 to D365FO, you can use the LCS data upgrade tool; however, Microsoft still recommends following the process below.
There are general guidelines that need to be followed in any project while importing the data.
Challenges and solutions
Joint activity
Certain clients require a joint activity for DM, where the client owns the migration of sensitive data such as customer data, credit cards and email addresses, and the rest of the data migration is owned by Hitachi.
Below are the recommendations for that scenario.
DM activity owned by the client
In many projects, the client wants to take ownership of the data migration, with Hitachi providing support on data mapping and templates. The points below can be considered in this scenario.
In this method, the entered data can be exported through the associated entities, and the same entities can be referenced for further data import.
e.g. for a customer or vendor we have dependent data or entities related to currency, address, customer or vendor group, etc.
By using this approach, the dependent data gets identified.
High volume data migration
If you have a high volume of data to import, consider the following points for the data import process:
Set-based operation: A normal data import process first imports the data into staging, validates it, and then copies the records one by one into the main table. The set-based option available on a data entity allows the system to move the data from staging to the main table in one shot, which helps with performance.
Note: Since this option skips validations and defaulting of values, complete validation is required before importing any data, and the user also has to provide all the default values in the template itself.
Bundling: In bundling, we split the data into multiple files, which allows parallel execution. For example, if you have to import 10K customers, through bundling you can run multiple import jobs for the same data entity by splitting the file, one per job (a file-splitting sketch follows this list).
Entity import execution parameters: In D365FO > Data management > Data import/export framework parameters > Entity settings > Entity import execution parameters, you can define a threshold size and task size, which allows the system to create multiple tasks for sets of records.
Sequencing: In D365, the definition group provides the option to define data entity sequence and grouping, which helps to run the operations in sequence and in parallel. To import large volumes of data, we can leverage this functionality by defining the sequence of dependent entities.
In the example below, A, B and X, Y belong to the same level, which means they will execute in parallel, and C, E will execute once level 1 is completed.
Re-write custom entities for customer-specific scenarios: We always prefer to use standard data entities to import all types of data, but to deal with customer-specific data, a custom data entity with only the required fields can be designed, which avoids a lot of standard processing and can be used for fast data import.
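Referring back to the bundling point above, here is a minimal sketch of splitting one large source file into several bundle files that can be fed to parallel import jobs for the same entity; the file names and bundle size are illustrative.

```python
import csv

BUNDLE_SIZE = 2500  # illustrative; tune to your entity and environment

def split_into_bundles(source_path, prefix):
    """Split one large CSV into several smaller files that can be imported in parallel."""
    with open(source_path, newline="", encoding="utf-8-sig") as src:
        reader = csv.reader(src)
        header = next(reader)
        bundle, writer, count, out = 0, None, 0, None
        for row in reader:
            if count % BUNDLE_SIZE == 0:
                if out:
                    out.close()
                bundle += 1
                out = open(f"{prefix}_{bundle:02}.csv", "w", newline="", encoding="utf-8")
                writer = csv.writer(out)
                writer.writerow(header)   # repeat the header in every bundle
            writer.writerow(row)
            count += 1
        if out:
            out.close()
    return bundle

bundles = split_into_bundles("customers_full.csv", "customers_bundle")
print(f"Created {bundles} bundle files for parallel import jobs")
```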
Azure Data Factory
Azure Data Factory is a cloud-based data integration service that allows you to create data-driven workflows in the cloud for orchestrating and automating data movement and data transformation.
However, it is more useful for integration than for data migration, as data migration is not a recurring job.
Azure Data Factory does not store any data itself. It allows you to create data-driven workflows to orchestrate the movement of data between supported data stores and the processing of data using compute services in other regions or in an on-premises environment. It also allows you to monitor and manage workflows using both programmatic and UI mechanisms.
Data migration activities with Data Factory
By using Data Factory, data migration can occur between two cloud data stores and between an on-premises data store and a cloud data store.
Copy Activity in Data Factory copies data from a source data store to a sink data store. Azure supports various data stores as sources or sinks, such as Azure Blob storage, Azure Cosmos DB (DocumentDB API), Azure Data Lake Store, Oracle, Cassandra, etc.
The following diagram shows the relationship between the components in Data Factory.
Import and export data in D365 using Azure Data Factory
Azure Data Factory has a connector called the "Dynamics AX connector", which is used to connect Azure Data Factory to D365 F&O.
This Dynamics AX connector is supported for the following activities:
Specifically, this Dynamics AX connector supports copying data from Dynamics AX using the OData protocol with Service Principal authentication.
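The OData-plus-service-principal pattern that the connector relies on can also be exercised directly for ad hoc checks. Below is a minimal sketch using the Azure AD client credentials flow; the tenant, application, environment URL, entity name (CustomersV3) and selected fields are placeholders/assumptions to adapt to your setup.

```python
import requests

# Placeholder values for the Azure AD app registration and the FO environment.
TENANT_ID = "<tenant-guid>"
CLIENT_ID = "<app-registration-client-id>"
CLIENT_SECRET = "<app-registration-secret>"
ENV = "https://yourenv.operations.dynamics.com"

# 1. Client credentials flow: acquire a token for the FO environment (resource).
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "resource": ENV,
    },
)
access_token = token_resp.json()["access_token"]

# 2. Read a data entity through the OData endpoint (CustomersV3 is an example entity).
odata_resp = requests.get(
    f"{ENV}/data/CustomersV3",
    headers={"Authorization": f"Bearer {access_token}"},
    params={"$top": "5", "$select": "CustomerAccount,OrganizationName"},
)
for customer in odata_resp.json().get("value", []):
    print(customer["CustomerAccount"], "-", customer.get("OrganizationName"))
```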
OData
OData is a standard protocol for creating and consuming data. The purpose of OData is to provide a protocol that is based on Representational State Transfer (REST) for create, read, update, and delete (CRUD) operations. OData applies web technologies such as HTTP and JavaScript Object Notation (JSON) to provide access to information from various programs. OData provides the following benefits:
The following links are helpful for understanding data import and export using Azure Data Factory.