Technology Refresh During A Public and Private Cloud Journey
Srinivasa (Chuck) Chakravarthy
Managing Director, West Lead for HiTech XaaS Practice at Accenture
Many corporations and their CIOs are today at the beginning, in the middle, or in some cases at the end of their journey to the Cloud. In highly regulated industries, this journey moves at a slower pace, understandably so given the scrutiny such industries face around data privacy (HIPAA, PII, PCI) and the mission-critical nature of the services they provide. For such industries, an on-premises Private Cloud combined with one or two Public Cloud providers is not an unusual strategy. What complicates this journey, however, is the conundrum of technology refreshes and upgrades created by its longer duration: these companies must maintain a substantive on-prem presence while migrating to the Cloud, do so in a manner compatible with the services provided by the Public Cloud, and make it easy for their developers to use a consistent set of tools, techniques, and common guardrails for security and data privacy. Furthermore, these companies must make tough calls and constant tradeoffs among cost factors: one-time costs (migration, modernization, refresh, upgrade); ongoing operating cost; efficiency (doing more with less); timeline (deadlines for Public Cloud migration driven by spend commitments to Hyperscalers and/or data center lease expirations); regulatory requirements to upgrade or refresh out-of-support items; and expiration of migration/modernization credits.
Compute, Storage, and Networking Tech Refreshes
Today, Compute refreshes come with "Public Cloud"-like commercial options, including "pay only for what you consume" as well as a capex option. From a technology angle, one option is Hyperconverged Infrastructure (HCI), a software-defined, unified system that combines all the elements of a traditional data center: storage, compute, networking, and management. Another is to construct a hybrid architecture of traditional shared-disk (persistent data) and shared-nothing compute (query processing), the pattern used by modern data platforms such as Snowflake. However, keep in mind compatibility issues with Operating System (OS) versions as well as Hypervisor versions when undertaking such a refresh; any systems you have been operating on extended support may need an upgrade. When upgrading your Host OS, it is important to consider any dependencies between data and applications. Applications running in virtual machines (VMs) may require certain files stored on the Host OS or rely on features only available in specific OS versions. Identifying and addressing all relevant dependencies will help ensure a smooth transition when upgrading your Host OS.
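To make the compatibility check concrete, here is a minimal sketch in Python, assuming a hypothetical hypervisor-to-guest-OS support matrix; the version strings, VM names, and matrix entries are placeholders, so substitute your vendor's published compatibility data before relying on anything like this.

```python
# Minimal sketch of a pre-refresh compatibility check.
# The compatibility matrix below is illustrative only; replace it with the
# hypervisor vendor's published support matrix for your target versions.

SUPPORTED_GUESTS = {
    # hypothetical mapping: target hypervisor version -> supported guest OS versions
    "hypervisor-7.x": {"rhel-7", "rhel-8", "windows-2016", "windows-2019"},
    "hypervisor-8.x": {"rhel-8", "rhel-9", "windows-2019", "windows-2022"},
}

def refresh_blockers(target_hypervisor: str, vm_inventory: dict[str, str]) -> list[str]:
    """Return VMs whose guest OS is not supported on the target hypervisor."""
    supported = SUPPORTED_GUESTS.get(target_hypervisor, set())
    return [vm for vm, guest_os in vm_inventory.items() if guest_os not in supported]

if __name__ == "__main__":
    # hypothetical inventory pulled from your CMDB or virtualization manager
    inventory = {"app-vm-01": "rhel-7", "db-vm-02": "rhel-8", "legacy-vm-03": "windows-2012"}
    blockers = refresh_blockers("hypervisor-8.x", inventory)
    print("VMs needing an OS upgrade before the refresh:", blockers)
```

Running a check like this against your full VM inventory before the refresh surfaces the extended-support stragglers early, instead of discovering them mid-migration.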
When you evaluate a Storage refresh, you are looking at Block, File, or Object storage, or some combination of the three. A Block storage refresh primarily serves OS and database workloads, single-server high-performance use cases, and synchronous data replication; a refresh here could therefore improve your application and/or database performance. An Object storage refresh is helpful when your data volumes are growing exponentially and you want scalability with cost effectiveness, as well as Backup and Disaster Recovery, where lower performance is acceptable. A File storage refresh is useful if your document collaboration needs are growing substantively and/or your compliance document archival, with its required permissions, is increasing rapidly. Key considerations in a Storage refresh are lifecycle management (moving data from high-performing, expensive storage to lower-performing, cheaper storage as it ages) and compatibility with the Public Cloud, especially Object Storage such as S3 or Azure Blob Storage, to allow for seamless data access. Note, however, that the latter may require Server Message Block (SMB) to HTTP/HTTPS converters, especially if you are upgrading your NAS drives.
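As one concrete example of lifecycle management on the Public Cloud side, the sketch below uses boto3 to attach an S3 lifecycle rule that tiers aging objects to cheaper storage classes; the bucket name, prefix, and day thresholds are assumptions for illustration and should be tuned to your own retention and access patterns.

```python
# Minimal sketch: an S3 lifecycle rule that moves aging data to cheaper
# storage classes. Bucket name, prefix, and day thresholds are assumptions.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="example-archive-bucket",            # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "tier-aging-data",
                "Filter": {"Prefix": "reports/"},  # hypothetical prefix
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},  # warm tier
                    {"Days": 180, "StorageClass": "GLACIER"},     # cold tier
                ],
                "Expiration": {"Days": 2555},      # roughly 7-year retention, then delete
            }
        ]
    },
)
```

On-premises arrays offer equivalent tiering policies under different names; the point is to pick a refresh target where aging data moves down the cost curve automatically rather than by manual migration.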
A Networking technology refresh or upgrade is a whole topic by itself, and it is difficult to do it justice in a blog, so I will look at a subset here. Consider a Load Balancer (LB) technology refresh or upgrade. If you are moving your on-premises environment toward a more Cloud-like model, you are probably looking for automation and self-service, versus a legacy environment where (a) several networking teams maintain lists of VIPs and pool members in spreadsheets; (b) network administrators must consider application dependencies, perform manual capacity and tenancy assessments to decide where to place new VIPs, and, if necessary, order additional hardware to manually provision application services; and (c) once load balancers are picked or purchased, administrators must manually configure the network parameters, including physical connections, VLANs, and IP configurations, before they can provision the VIP. Additionally, you may be looking for elasticity: the ability to scale LBs up or down as required.
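To illustrate the automation and self-service target state, here is a hypothetical sketch of provisioning a VIP and its pool through a load balancer controller's REST API instead of spreadsheets and manual configuration; the controller URL, endpoint paths, payload fields, and token are placeholders, since every vendor's API differs.

```python
# Hypothetical sketch of self-service VIP provisioning through a load
# balancer controller's REST API. Endpoints, payload fields, and the token
# are placeholders; treat this as a pattern, not a real vendor integration.
import requests

CONTROLLER = "https://lb-controller.example.internal"   # hypothetical controller
HEADERS = {"Authorization": "Bearer <api-token>"}        # placeholder credential

def provision_vip(app_name: str, vip_address: str, members: list[str]) -> None:
    """Create a server pool and a VIP via API instead of tracking them in spreadsheets."""
    pool = requests.post(
        f"{CONTROLLER}/api/pools",
        json={"name": f"{app_name}-pool", "members": members, "port": 443},
        headers=HEADERS,
        timeout=30,
    )
    pool.raise_for_status()

    vip = requests.post(
        f"{CONTROLLER}/api/virtual-services",
        json={"name": f"{app_name}-vip", "address": vip_address,
              "pool": pool.json()["id"], "port": 443},
        headers=HEADERS,
        timeout=30,
    )
    vip.raise_for_status()

provision_vip("payments", "10.20.30.40", ["10.20.31.11", "10.20.31.12"])
```

Once VIP creation is an API call, it can be wrapped in a self-service portal or pipeline, and capacity placement can be decided by the controller rather than by a manual assessment.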
Another consideration is the ability to handle Hybrid Cloud. Here you may want to stay away from legacy Load Balancers that simply package their software to run on VMs and instead gravitate toward modern software-defined architectures that deliver consistent application services, with central management, visibility, security, and application analytics that are common across hybrid environments. The performance and elasticity of these platforms are also consistent across different data center and cloud environments. With these modern solutions, enterprises can use intelligent hybrid cloud traffic management and application scaling across their data centers and the public cloud, all under central management. Enterprises developing container-based microservices applications get full-stack L4-L7 services, including service discovery, service proxying, interactive application maps (showing traffic between each microservice), and micro-segmentation.
When you are looking at a Firewall refresh, you may want to go beyond a Next-Generation Firewall (NGFW), which adds dimensions such as users, groups, and applications, URL filtering, and threat and data protection, to a cloud-generation firewall, because NGFWs may not conform to Zero Trust principles.
With higher NIC speeds (to accommodate significant growth in server compute power), the top-of-rack (leaf) switches may need to be upgraded. Failing to upgrade your legacy core (spine) switches in turn will cause oversubscription ratios to move unfavorably, introducing excess congestion and unpredictable latency. In legacy networks, admins tend to log in to each switch, router, or firewall individually; configurations are applied uniquely to each node and backed up as plain text files stored locally on the admin's computer. These workflows are error prone, leading to typos and inefficiencies such as lost backups and slow maintenance windows, and configuration errors can open security gaps that a cyber adversary could exploit. Modern networks leverage more advanced tooling and technologies that solve most of these problems. Updating your networking can add benefits including Infrastructure as Code, so your configurations are centralized, and automation, which allows managing and updating multiple nodes at the same time.
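To make the oversubscription point concrete, here is a small worked example; the port counts and link speeds are illustrative assumptions, not a design recommendation.

```python
# Illustrative oversubscription calculation for a leaf switch.
# Port counts and link speeds are assumptions chosen for the example.

def oversubscription_ratio(server_ports: int, server_speed_gbps: int,
                           uplink_ports: int, uplink_speed_gbps: int) -> float:
    """Downstream (server-facing) bandwidth divided by upstream (spine-facing) bandwidth."""
    return (server_ports * server_speed_gbps) / (uplink_ports * uplink_speed_gbps)

# Before the refresh: 48 x 10G server ports, 6 x 40G uplinks -> 2:1
print(oversubscription_ratio(48, 10, 6, 40))    # 2.0

# After a NIC refresh to 25G with the same 40G spine uplinks -> 5:1
print(oversubscription_ratio(48, 25, 6, 40))    # 5.0

# Upgrading the spine uplinks to 100G restores a healthier ratio -> 2:1
print(oversubscription_ratio(48, 25, 6, 100))   # 2.0
```

The middle case is the trap described above: refreshing servers and leaf ports without touching the spine more than doubles the contention for uplink bandwidth.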
Tradeoffs
Technology refreshes and upgrades require tradeoffs among several factors, including but not limited to the ones discussed above: one-time and ongoing costs, efficiency (doing more with less), timeline pressures (Hyperscaler spend commitments, data center lease expirations, expiring migration/modernization credits), and regulatory requirements to move off out-of-support components.
Conclusion
Doing technology refreshes and/or upgrades in a dynamic environment, when you are trying to modernize your legacy data centers while also migrating to the Public Cloud, requires a well-thought-out programmatic approach, discipline, communication with the key stakeholders impacted by the activity, and strong governance. This blog is not an exhaustive view but merely an attempt to highlight key points.
Disclaimer: Note that these views are solely those of the author and not representative of the views of any of his current and past employers.