Large-Scale S/4HANA: Brownfield Conversion, Scale-Out Design, Near-Zero Downtime, Active-Active HA & Business Continuity
Shaik Ahmed
Chief Technology Officer | Board Member @ Clogration | Ex- Americas SAP Cloud Practice Head at Cognizant | 100% Client Value | Results-Driven | Proven Track Record.
The 2027 ECC maintenance deadline is approaching, and S/4HANA adoption is at its peak compared to previous years. Alongside the SME sector, large-scale enterprises are now also moving towards RISE with SAP to leverage its many benefits.
We have already helped big customers with their very large HANA and/or S/4HANA systems: the first 120 TB ECC move to the Azure cloud, the first 48+ TB ECC utility system moved to S/4HANA on RISE with SAP, the first 32 TB S/4HANA running on Azure, and a native HANA scale-out on Azure with three 12 TB VM nodes as part of a sidecar system.
We are well prepared, and building competence on a regular basis, for handling very large S/4HANA systems (40-80 TB, S/4HANA scale-out environments) with our focused expert engineers. Converting to a very large HANA database, and maintaining it afterwards, is very demanding and requires a lot of skill. On top of that, if the customer's business demands lower downtime during the migration/brownfield conversion project, then deciding on the approach and method requires several technical discussions and POCs alongside commercial considerations.
There are different tools and services available for minimizing downtime in such complex projects, such as SAP NZDT (delivered as a service), SNP (CrystalBridge), and Natuvion (Smart Brownfield). A few are generally available from SAP, like DoC (Downtime-optimized Conversion) and DoDMO (Downtime-optimized DMO), and can be used in a migration/conversion project. We bring experts together with a framework of centralized learnings and blend it with SAP best practices and recommendations to handle such complex, very large HANA databases.
Let's pick an instance where an on-prem ECC system with a non-HANA database of ~125 TB+ needs to move to S/4HANA on RISE with SAP or any public cloud (Azure, AWS, GCP). The HANA sizing report gives a memory estimate of 60 TB+ (column store + workspace). Below are a few key considerations which must be part of any project that includes a very large HANA scale-out environment.
- DMOVE2S4 (with DoC) has minimum bandwidth & latency requirements (OSS Note 3434358)
- Use the latest version of the HANA sizing report to generate sizing (OSS Note 1872170)
- Perform S/4HANA scale-out sizing carefully (OSS Note 2428711)
- Table placement groups & reorg parameter SQL scripts (OSS Note 2408419); a placement sketch follows after this list
- Don't forget the SUMTOOLBOX tool, which has been underrated so far (OSS Note 3092738)
- Potential impact of SLT during migration/conversion (OSS Note 2755741)
- Optimized partitioning strategy during migration/conversion (OSS Note 2396601)
- Executing & monitoring S/4HANA DoC (OSS Note 3480132)
- DoC XCLA uptime (OSS Note 2778832)
- DoDMO (OSS Note 2547309)
- Data management on technical tables (OSS Note 2388483)
- Conversion/migration approach discussions & a minimum of one POC
- Evaluate the existing EWA & Readiness Check reports
- Always use the latest version of SUM
- Plan for ~25% additional source database CPU usage during migration/conversion
- Plan for ~20% additional source database space due to DB triggers
- Additional file system space for the SUM, log, download & trans directories
- Consider an enterprise backup tool that provides the HANA backup snapshot feature
- HCMT tool readiness recommendations on parameters, CPUs, NUMA, etc.
- Latest saptune to automatically apply OS-level parameters
- Use the latest HANA SPS & revision
- Use the latest cluster/Pacemaker packages for the HANA scale-out HSR setup (if in scope)
- 10 Gbps network connectivity between on-prem & cloud (especially during migration)
- Table partitioning strategy and execution
- Table grouping & placement for the target S/4HANA (HANA scale-out environment)
- S/4HANA scale-out node size & count, keeping business growth in mind
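As referenced in the list above, OSS Note 2408419 delivers the table placement and reorg parameters as SQL scripts. A minimal sketch of what such statements look like; the schema name and threshold values below are illustrative only, and the actual statements must come from the note matching your S/4HANA release and node count:

-- Illustrative table placement rule for a scale-out landscape
-- (example values, not the ones from OSS Note 2408419)
ALTER SYSTEM ALTER TABLE PLACEMENT (SCHEMA_NAME => 'SAPHANADB')
  SET (MIN_ROWS_FOR_PARTITIONING => 1500000000,
       INITIAL_PARTITIONS        => 3,
       LOCATION                  => 'all');

-- Verify the placement rules currently active in the system
SELECT * FROM SYS.TABLE_PLACEMENT;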
Given the above instance, let's see how the DMOVE2S4 (with DoC) option moves an on-prem ECC system on a non-HANA database to S/4HANA on Azure (as an example). To perform DoC, a standard conversion run is required as a prerequisite to capture the configuration transports for FI, MM & ML. The DoC run then uses these transports to perform those operations during business uptime.
HANA table partitioning during migration/conversion
SUM is a great tool for updates and upgrades, but when it comes to HANA migration, especially for very large HANA databases, SUM applies a very basic table partitioning strategy during migration/conversion. We hope to see this addressed and translated into a more efficient strategy in future SUM versions. In some cases tables are skipped for partitioning entirely in the target HANA database, because SUM's basic logic does not consider them.
To overcome this, design the HANA table partitioning strategy very carefully, considering the HANA scale-out environment and NSE usage. As a general principle, in our opinion any table larger than 50 GB or with more than 500 million records is a candidate for partitioning. Follow SAP's recommendations for the partition type, and use multi-level partitioning to take advantage of NSE in an optimized way. Evaluate the SQL statements hitting the larger HANA tables, and their WHERE-clause fields, to decide the partitioning column. A partition must improve SELECT and INSERT performance as well as delta merge operations.
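As a sketch of that principle: candidates can be found with a simple catalog query, and a multi-level hash/range partition can then be applied per table. The schema, table, hash column, and year ranges below are illustrative; derive the real ones from your SQL workload analysis and SAP's partitioning recommendations.

-- Candidate finder: column tables above ~50 GB or ~500M records (our rule of thumb)
SELECT SCHEMA_NAME, TABLE_NAME, RECORD_COUNT,
       ROUND(TABLE_SIZE / 1024 / 1024 / 1024, 1) AS SIZE_GB
  FROM M_TABLES
 WHERE TABLE_TYPE = 'COLUMN'
   AND (TABLE_SIZE > 50 * 1024 * 1024 * 1024 OR RECORD_COUNT > 500000000)
 ORDER BY TABLE_SIZE DESC;

-- Illustrative multi-level partitioning: HASH on a WHERE-clause field,
-- RANGE on fiscal year so that old ranges can later be moved to NSE
ALTER TABLE "SAPHANADB"."ACDOCA"
  PARTITION BY HASH ("RBUKRS") PARTITIONS 4,
  RANGE ("GJAHR")
  (PARTITION '2018' <= VALUES < '2021',
   PARTITION '2021' <= VALUES < '2024',
   PARTITION OTHERS);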
HANA scale-out during migration/conversion
The SUM tool checks whether the HANA target is scale-out for a very large database and, if selected, performs landscape reorganization according to the standard table grouping and placement; generic placement rules are applied via SUM. We can also create table groups and placements in a controlled way outside SUM in the lower systems and optimize them after performance testing. The same distribution plan can then be exported and supplied to the production conversion run by setting a breakpoint at the SUM phase EU_CLONE_MIG_UT_RUN, after which SUM continues and fills the tables on the respective nodes.
Check the HANA sizing report carefully for the table group names and the estimated memory requirement of each group. A table group should not exceed a node's memory, and should leave enough room within the node for workspace. If a group does exceed the node size, the same table group would have to be placed across two nodes, which requires a lot of analysis and SAP guidance, since in general a single table group cannot be put on multiple nodes.
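A minimal sketch of that check on the target system, joining the table classification against the column-store runtime view to see how much memory each table group actually occupies per node (the schema name is illustrative):

-- Memory footprint per table group and host on the scale-out target
SELECT g.GROUP_NAME,
       t.HOST,
       ROUND(SUM(t.MEMORY_SIZE_IN_TOTAL) / 1024 / 1024 / 1024, 1) AS MEM_GB
  FROM SYS.TABLE_GROUPS g
  JOIN M_CS_TABLES t
    ON t.SCHEMA_NAME = g.SCHEMA_NAME
   AND t.TABLE_NAME  = g.TABLE_NAME
 WHERE g.SCHEMA_NAME = 'SAPHANADB'
 GROUP BY g.GROUP_NAME, t.HOST
 ORDER BY MEM_GB DESC;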
Native Storage Extension (NSE)
Native Storage Extension is a great feature for handling the warm storage of a very large HANA database. The SAP HANA sizing report already suggests some tables for NSE and an NSE buffer cache size, but blindly setting those tables as page loadable is not a good idea. In our experience, NSE needs to be planned carefully: it is a double-edged sword that hits performance hard if not executed well. To design faultless NSE, the functional and technical teams must partner and work together, as it requires data and business knowledge as well. NSE needs to be tested in a lower environment to finalize the list of tables and the buffer cache size that can be adopted in the production system.
In our experience, avoid putting change document and application log tables in NSE. Instead, explore large tables where multi-level partitioning with RANGE on period or fiscal year is used, and put the old ranges in NSE. NSE is easy compared to doing Data Aging in SAP HANA. The NSE Advisor provides column and table load-unit recommendations: it determines the temperature of the data and derives recommendations from it. From S/4HANA 2021 onwards, a few tables are offloaded and part of NSE storage by default.
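A minimal sketch of that pattern, assuming the table already has the fiscal-year RANGE partitions from the earlier example. The partition ID and buffer cache size below are illustrative; confirm partition IDs in the partition monitoring views and derive the cache size from the sizing report and lower-environment tests.

-- Move a cold fiscal-year range (partition ID is an example) to NSE
ALTER TABLE "SAPHANADB"."ACDOCA" ALTER PARTITION 1 PAGE LOADABLE;

-- Size the NSE buffer cache (value in MB; SYSTEM layer shown for brevity)
ALTER SYSTEM ALTER CONFIGURATION ('indexserver.ini', 'SYSTEM')
  SET ('buffer_cache_cs', 'max_size') = '409600' WITH RECONFIGURE;

-- Monitor buffer cache usage and NSE Advisor recommendations
SELECT * FROM M_BUFFER_CACHE_STATISTICS;
SELECT * FROM M_CS_NSE_ADVISOR;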
Reach out to us for help with your S/4HANA brownfield conversion.
Sivakumar Varadananjayan Mukul Sharma Subhajit Sengupta Kiran Vaidya Ganesh Anand T T Sivashankar Ramakrishnan Nitin Laddha Mahadevan Subramaniam
#BigSizeS4HANABrownfieldConversion #SAPonRISE #Hyperscalers #Azure #AWS #GCP #OnPremSolution #ScaleOut #ScaleUp #BusinessContinuity #InfraCostSavings #nZDT
SAP S/4 HANA Project Implementation Lead at Tata Consultancy Services
4 months ago · Thank you
SPM-SAP BASIS Enterprise Platform Services
4 months ago · Insightful, thanks for sharing
SAP Enterprise Cloud Architect and Program Manager at Cognizant
4 months ago · Thanks Shaik Ahmed for sharing.
Transformative SAP on AWS Technology Leader | Driving Cloud Migration and Modernisation @ AWS
4 months ago · Thanks for sharing this article. I'm curious whether the figures presented are based on general estimates, actual customer experiences, or proofs of concept; it would be great to have more context on that. I was also looking for a comparison of downtime durations across the various iterations and approaches you considered for the 120 TiB systems.