HomeLab Imaging Server: iPXEv4
Introduction -->
One of the issues that I continue to run into as an IT HomeLab enthusiast is "Flash Drive Sprawl." The more flavors of software, utilities, and bootable operating systems I want to experiment with, the more flash drives I need to have lying around. At some point, it becomes too much and I lose track of where my bootable USBs are and what they store! There are certain tools that can help mitigate this, with Ventoy being a personal favorite...but I wanted something more enterprise-grade. So, today, I decided to deploy a bootable network media server or, as it is often called, an Imaging (or PXE) Server.
Infrastructure -->
The Software: Netboot.xyz
Moving right along, I decided on an open-source, iPXE-based solution called Netboot.xyz, a popular imaging platform that can be found here on GitHub. This solution supports iPXE across both IPv4 and IPv6, and it supports both UEFI and Legacy BIOS hosts. It is dynamic, simple, and lightweight, and it can even pull ISOs across the internet at boot time from remote hosts, whether that means GitHub repos, remote storage, or elsewhere!
The Platform: Proxmox & Docker-Compose
Since I already had Proxmox and a Portainer host running in my Lab, I decided to do a simple YAML deployment for Netboot.xyz. This was both fast and nearly error-proof, considering that orchestration was being handled by a developer-maintained image. To begin, I launched the Portainer web app and deployed a new stack. Then, in the web editor, I wrote up a YAML file to deploy the container that would house Netboot.xyz. The YAML for my deployment is as follows:
version: "2.1"
services:
netbootxyz:
image: lscr.io/linuxserver/netbootxyz:latest
container_name: netbootxyz
environment:
- PUID=1000
- PGID=1000
- TZ=Etc/UTC
# - MENU_VERSION=1.9.9 #optional
- PORT_RANGE=30000:30010 #optional
- SUBFOLDER=/ #optional
volumes:
- /home/tsell/netboot_xyz/config:/config
- /home/tsell/netboot_xyz/assets:/assets
ports:
- 3000:3000
- 69:69/udp
- 8080:80 #optional
restart: unless-stopped
Outside of a few changes, this YAML configuration file is essentially identical to the stock recommended configuration provided on the LinuxServer.io reference page. In terms of changes, I only altered what was necessary. Firstly, I commented out the MENU_VERSION line so that, rather than pinning a specific release, the container grabs the latest version of the menus (and, with the :latest image tag, the web app GUI) each time the stack is redeployed. Secondly, I of course hard-coded my own explicit volume mappings for the Docker container. The directories /home/tsell/netboot_xyz/config and /home/tsell/netboot_xyz/assets were created beforehand by hand, using the mkdir command in the console of the Ubuntu Server host running Docker Compose:
mkdir -p /home/tsell/netboot_xyz/config
mkdir -p /home/tsell/netboot_xyz/assets
Other than these changes, the port mappings were standard, including the well-known port 69/UDP for TFTP, the Trivial File Transfer Protocol that PXE uses to transfer boot files over the network.
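As a quick sanity check once the stack is running, you can pull one of those boot files over TFTP from another machine. This is a minimal sketch, assuming your curl build includes TFTP support; the server address is a placeholder, not my actual network:

# Fetch a default boot file over TFTP (replace 192.168.1.50 with your Docker host's IP).
curl -o netboot.xyz.kpxe tftp://192.168.1.50/netboot.xyz.kpxe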
Deploying The System
From here, all that was left was to deploy the stack and pull the image, which was quick and easy. Once the container came online, I was able to navigate to it within my web browser and was presented with the minimal GUI it ships with.
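For anyone deploying the same stack outside of Portainer, the equivalent steps from a shell on the Docker host look roughly like this, assuming the YAML above is saved as docker-compose.yml in the current directory:

# Bring the stack up in the background and confirm the container is running.
docker compose up -d
docker ps --filter name=netbootxyz
docker logs --tail 20 netbootxyz
# The web GUI should now answer on port 3000.
curl -I http://localhost:3000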
At this point, the deployment was complete. The software ships with no local ISOs on board, but it does come with multiple default boot options that pull ISOs down live from hosts on the internet, most commonly GitHub.
Configurations -->
In terms of post-deployment configuration, there was really only one thing to be done: point the DHCP server in pfSense at my Netboot.xyz container for boot files during PXE session requests. For my network, this had to be done on both my back-end infrastructure VLAN and my production end-user VLAN. It also required a firewall rule.
To configure this, I opened the pfSense WebGUI in my browser and navigated to the DHCP service for each of my two interfaces. For both, I specified which host should receive PXE session requests, and then supplied the file names of the default boot files that Netboot.xyz offers for the various system architectures. For example, UEFI systems request a boot file called "netboot.xyz.efi," while Legacy BIOS systems use one called "netboot.xyz.kpxe."
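For the curious, pfSense's ISC DHCP server turns those GUI fields into per-architecture boot file logic. The raw dhcpd.conf equivalent looks roughly like the sketch below; the subnet and next-server address are placeholders, not my actual network:

# Hypothetical dhcpd.conf equivalent of pfSense's network-booting fields.
option arch code 93 = unsigned integer 16;      # client architecture (RFC 4578)
subnet 192.168.20.0 netmask 255.255.255.0 {
    next-server 192.168.20.10;                  # the Netboot.xyz Docker host
    if option arch = 00:07 or option arch = 00:09 {
        filename "netboot.xyz.efi";             # x86-64 UEFI clients
    } else {
        filename "netboot.xyz.kpxe";            # Legacy BIOS clients
    }
}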
The firewall rule that I wrote for this project simply allows hosts in VLAN200, my end-user VLAN, to reach the imaging server over TFTP for PXE sessions.
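A quick way to confirm the rule is behaving is to probe the TFTP port from a client on VLAN200. Another sketch with a placeholder server address; since TFTP is UDP, nmap reporting "open|filtered" is the expected pass state:

# From a VLAN200 host: check that UDP/69 on the imaging server is reachable.
sudo nmap -sU -p 69 192.168.20.10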
After these configurations, the solution was fully deployed, and I was able to conduct a test boot.
Usage -->
As is tradition, I christened my new imaging server by having it pull down a copy of Ubuntu Server 23.10 from GitHub and install it to a stock, empty Proxmox virtual machine. It worked perfectly and as expected. Now that I have it up and running, I wanted to discuss the practical use case here and, in the interest of brevity, I have broken it down into two segments: "Decreasing Flash Drive Sprawl" and "Utility / Troubleshooting."
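For reference, a throwaway PXE test VM like that one can be created from the Proxmox node's shell. The VM ID, storage, and bridge names below are examples, not prescriptions:

# Create an empty VM that boots from the network first, then start it.
qm create 9100 --name pxe-test --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:16 --boot order=net0
qm start 9100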
Decreasing Flash Drive Sprawl:
Starting with the easiest practical improvement brought on by this deployment, there is the aforementioned eradication of Flash Drive Sprawl. I no longer have to sacrifice flash drives to becoming bootable media devices, and I can return to using them as they were intended. This will save me time, money, and frustration, and having my booting capabilities and ISOs centralized is a better organizational choice for my future projects.
Utility / Troubleshooting:
This benefit is a bit more interesting to me. Since PXE does not have to be used for installing operating systems, and can instead preload bootable media of all kinds, this solution makes a great candidate for hosting all of my IT-Swiss-Army-Knife bootable utilities for troubleshooting PCs and servers! Tools such as Memtest86, Hiren's BootCD, Clonezilla, GParted, and more can now be loaded onto hosts in a live session, over PXE, without the need for a USB drive (or even a USB port!). Network access is all that is required, and images can even be cached locally, as sketched below.
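Because the compose file maps an /assets volume, those utility images can be stored on the LAN rather than fetched from the internet at every boot. Netboot.xyz's web GUI can download assets for you; the sketch below just illustrates the volume mapping by hand, with a purely hypothetical URL and filename:

# Stage a utility image in the local assets share (URL is a placeholder).
wget -O /home/tsell/netboot_xyz/assets/gparted-live.iso \
  https://example.com/gparted-live.iso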
Closing Thoughts -->
This article is on the shorter side because, well, this project was not that difficult. We live in a day and age where IaC (Infrastructure as Code) and dynamic deployment workflows allow us as technology professionals to quickly, and elastically, stand up solutions that might have taken our predecessors hours to configure and get off the ground. This is a blessing, but I often wonder what we may lose in that as well. As far as it concerns my personal gains, though, I still learned much from this project. I learned how TFTP exchanges boot files in the pre-boot environment of a host. I learned how to stand up yet another YAML-based web app in Docker. I even learned a little bit during the ever-so-brief troubleshooting phase of this project...
In terms of the troubleshooting itself, there was really only one thing that stood in my way with this project: Secure Boot. VMs whose firmware came preloaded with Microsoft's signed Secure Boot keys consistently refused to execute iPXE's unsigned boot files, killing the handoff to the end-host devices. This is not surprising, but it is disappointing. To get UEFI devices to boot properly, I had to disable Secure Boot, or at least prevent devices from preloading keys for Windows operating systems. After doing this, though, the solution worked like a dream.
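On Proxmox specifically, those preloaded keys live on the VM's EFI vars disk and can be left out when that disk is created. A sketch, assuming a hypothetical VM ID of 9100 and local-lvm storage:

# Recreate the EFI vars disk without Microsoft's pre-enrolled Secure Boot keys.
qm set 9100 --delete efidisk0
qm set 9100 --efidisk0 local-lvm:1,efitype=4m,pre-enrolled-keys=0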
I think that this will be one of the most used and practical services that I have deployed into my Lab, and I look forward to using it to my academic advantage as I continue to learn more about Information Technology!
Thank you for reading,
Tyler Sell