Server 2022 IaC, CaC Migrations
Utilizing Modern Cloud Engineering and Modernize on Demand cloud methodologies, I decided sometime after Christmas to take on the task of upgrading my @127.0.0.1 datacenter and decommissioning some old hardware that is no longer capable of supporting Windows Server 2022 Datacenter as a hypervisor. So as the Christmas holiday food coma wore off, I plotted my current lab architecture's demise. I decided to do this upgrade by creating an "easy button" for it. As you can see from my screenshot, it was not so easy: what I expected to accomplish in a few hours ended up taking a few evenings and getting quite involved, as I kept evolving my "easy button" to do all of this while avoiding the "click-ops" approach of doing it manually. In the end, many failures resulted in a well-baked pipeline that I can utilize again. Not everything is fully automated yet, but my goal was to automate as much as possible in this MVP and figure out the rest later... Hmm, Agile? :)
So here is the explanation of the above screen from my Jenkins development node, which I use as a standalone "DevOps in a box" virtual machine (IaaS) rather than ADO (PaaS) pipelines. It's an Ubuntu server with Jenkins, Ansible, and Terraform that I only have running when I am developing code, so it's very cost effective, as Visual Studio subscription Azure credits cover its usage. I use IaaS Jenkins because it also allows me to deploy a controller and/or agent node in my private cloud directly, unaffected by network failures, or, as I like to architect them, as fully redundant XaaS infrastructure that can automatically survive failures of network, storage, identity, and other landing zones in each cloud. Finally, running Jenkins as IaaS allowed me to really understand how Jenkins works under the hood in the open source community, and by contrast how ADO delivers similar pipelines in a managed fashion.
On December 30th my last good patch of DC1 (2019 DC) and DC2 (2016 DC) occurred. What I had schemed up and coded was my first batch of automation to bring up two new Server 2022 VMs on my hypervisors, HV1 (2019 DC) and HV2 (2016 DC). The Terraform for each of these was already written, so it was just a variable change and off to the races. I copied my .vhds from the imaging, staging, and configuration server (BTW, I can do containers here too, not only VMs/IaaS). Then I executed the CaC pipelines, and they completed without issues: building, joining, and promoting the servers with some PowerShell help and Ansible executing it on the remote hosts over secure WinRM. These CaC pipelines worked flawlessly and gave my lab two new domain controllers, ready for DCPROMO activities.
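For the curious, here is roughly what the "promote" portion of that PowerShell boils down to. This is a minimal sketch rather than the exact script from my pipeline: the domain name, credential prompt, and DSRM prompt are placeholders, and in the real pipeline Ansible runs it on the remote host over secure WinRM and injects the secrets from a vault.

```powershell
# Sketch of the promotion step run on each new Server 2022 VM.
# 'lab.local', the credential prompt, and the DSRM prompt are placeholders.
$domain = 'lab.local'
$dsrm   = Read-Host 'DSRM password' -AsSecureString

# Install the AD DS role plus management tools
Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools

# Promote this server as an additional domain controller (with DNS) in the existing domain
Install-ADDSDomainController `
    -DomainName $domain `
    -InstallDns `
    -Credential (Get-Credential "$domain\Administrator") `
    -SafeModeAdministratorPassword $dsrm `
    -Force
```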
I took backups of my current DCs at this time and shut them down to upgrade the hypervisors next. Here came the first issue: I was migrating to new HW that would still not be supported (the network cards, coincidentally). I looked for the correct 2022 drivers, which have not been written or updated yet. I wasn't stuck, just "paws'd" (that's for my cats that leave footprints on top of the server racks in the basement). I ordered two USB 1Gb Ethernet interfaces on Amazon and awaited their arrival. In the meantime I turned directory services back on for my lab until I could try again. My rollback contingency plan was executed with extreme prejudice!
On January 6th my new network interfaces arrived and I built the new hypervisors as planned. I migrated the .vhds to them and will ultimately retire the old 2019 HV1 and 2016 HV2 machines and HW you still see below.
The new machine HW and Server 2022 have been rock solid for a few days now, with no issues observed in the upgrades on physical and/or virtual hardware. The DCs have been set up as before, and their domain policies deploy our CaC desired configuration, based on tagging, to every new machine added to the lab's Active Directory domain. We start by deploying a desired configuration for secure WinRM remoting and finish with a Datadog agent install and enrollment into our observability platform of choice, as sketched below. This can be extended to any scenario to give multiple easy buttons or pipelines for ad hoc CaC scenarios in infrastructure, whether public, private, hybrid, or multi-cloud.
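To make the "tag it and it configures itself" idea concrete, here is a minimal sketch of what that per-machine baseline amounts to, assuming a self-signed certificate for the WinRM HTTPS listener and the public Datadog MSI; in my lab this runs as the desired-configuration payload pushed by policy, and the API key comes from a vault rather than the placeholder shown here.

```powershell
# 1. Secure WinRM remoting: HTTPS listener on 5986, no unencrypted or basic auth.
#    A self-signed cert is used for illustration; swap in a CA-issued certificate.
$cert = New-SelfSignedCertificate -DnsName $env:COMPUTERNAME -CertStoreLocation Cert:\LocalMachine\My
New-Item -Path WSMan:\localhost\Listener -Transport HTTPS -Address * `
    -CertificateThumbPrint $cert.Thumbprint -Force
Set-Item -Path WSMan:\localhost\Service\AllowUnencrypted -Value $false
Set-Item -Path WSMan:\localhost\Service\Auth\Basic -Value $false
New-NetFirewallRule -DisplayName 'WinRM over HTTPS' -Direction Inbound `
    -Protocol TCP -LocalPort 5986 -Action Allow

# 2. Observability: install the Datadog agent and enroll it with an API key.
#    URL and MSI properties follow Datadog's public install docs; <DD_API_KEY> is a placeholder.
$msi = "$env:TEMP\datadog-agent-7-latest.amd64.msi"
Invoke-WebRequest -Uri 'https://s3.amazonaws.com/ddagent-windows-stable/datadog-agent-7-latest.amd64.msi' -OutFile $msi
Start-Process msiexec -Wait -ArgumentList "/qn /i `"$msi`" APIKEY=`"<DD_API_KEY>`" SITE=`"datadoghq.com`""
```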
Some cleanup work still remains, but basically, in the end, I did this to retire old, expensive-to-run HW in favor of newer HW for my hypervisors and home systems, which will hopefully reduce the cost of operating a private cloud lab in your home @127.0.0.1.
About the Author:
Dave Chianese is an Offering Lead at Avanade for the Northeast region. He is dedicated to helping customers realize their full potential in deploying products and services while being cost effective, multi-cloud capable, and flexible. Have a question on the how, now that you've seen the what? Feel free to reach out on LinkedIn or via e-mail.