Old Machines, New Tricks: Building TensorFlow v2.16 from Scratch
Patrick Hamilton
CTO Internet 2.0 | Director & Boardmember (US) | Cybersecurity & Technology Expert | Machine Learning & Neural Network Specialist | Financial Institutions & Critical Infrastructure | Solution Architect | CISSP
"New Year, New Me" in the World of Machine Learning
As we roll into the new year, it's time to gear up for an adventure in the wild and wonderful world of machine learning. In my tech journey, which feels longer than a marathon of sci-fi movies, I've seen machine learning transform industries like a wizard's spell. Its importance in today's tech-driven world is as clear as the screen of a brand new smartphone. Whether you're a seasoned pro looking to jazz up your skillset or a curious cat looking for a brainy hobby, diving into machine learning now is as timely as getting a new calendar. So, buckle up, and let's decode the enigma of machine learning together – it's going to be a ride!
TensorFlow and Beyond
Enter TensorFlow, Google Brain team's gift to humanity, a cornerstone in the magic land of machine learning. It’s like the Swiss Army knife for AI – versatile and handy. But remember, it's just the appetizer in our machine learning feast. We'll also take a stroll through PyTorch park and peek into other intriguing tech bushes along the way.
Old Dogs can Learn New Tricks
Here's a fun fact: hardware problems can be solved with cloud servers, but what about that old computer gathering dust in the corner? You know, the one that was once the star of your gaming battles but now can't even run <insert your favorite old game here> smoothly. That old friend can still have a second life. It's overkill as a paperweight and too sentimental to be a doorstop. Its CPU might not be the sharpest tool in the shed for machine learning. Yet it has those video cards, and those video cards have GPUs, those precious Graphics Processing Units. And perhaps there is more than one video card installed. It's those GPUs that hold the power.
Installing TensorFlow on this vintage beauty? You'll likely bump into the infamous "Illegal instruction (core dumped)" error – a classic 'old hardware meets new software' sitcom scenario. This happens because the prebuilt, installable TensorFlow binaries require the capabilities of some techno-babble things called AVX and AVX2 instructions, which older CPUs simply don't have. But fear not! Building TensorFlow from source is like giving your computer a secret potion to bypass this hiccup. It's not just about squeezing the juice out of your old gear; it's about strapping a jetpack to it and seeing how high it can fly, metaphorically speaking, of course.
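Curious whether your own CPU is the one being snubbed? A quick sketch using standard Linux tools – it checks the kernel-reported CPU flags, and if it prints nothing, your CPU lacks AVX and the prebuilt binaries will crash exactly as described:
grep -wo 'avx\|avx2' /proc/cpuinfo | sort -u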
Baby Steps with TensorFlow
We're starting with baby steps – building TensorFlow for the CPU. It's like learning to walk before you can run into the GPU wonderland. You already know how to crawl… and trying to run before learning to walk will lead to days or weeks of dealing with unexpected issues. I'm guiding you with an Intel-based CPU and Nvidia video cards, a popular combination. Building TensorFlow from source can be trickier than a cat trying to catch a laser dot. And PyTorch? It's like a walk in the park in comparison. We'll tackle TensorFlow v2.16.0, the latest and greatest. The official release, v2.15.0, is playing hard to get with its source code, throwing a tantrum with dependencies and whatnot. So, we're going for the cooler, friendlier v2.16.
Here's the official treasure map, I mean, documentation for building TensorFlow from source, for your reference: https://www.tensorflow.org/install/source
The Instructions: Keeping it simple or More is Less, your choice
The best case is to install the latest Ubuntu Desktop release, version 22.04. It's best to avoid Windows, or, if you can, set the computer up to dual boot Windows and Ubuntu. How to do this depends on your system's BIOS (whether UEFI or Legacy is in use) and the available drive space. I was able to shrink the Windows partition and then install Ubuntu on a motherboard that uses a Legacy BIOS, which should give you an indication that even such an old system is capable of running machine learning operations. If you need help figuring out how to install Ubuntu or with the instructions below, please leave a comment. For the installation of Ubuntu Desktop v22.04, I went with the Minimal Installation and chose not to install third-party applications.
The following instructions are step-by-step Linux commands to build out TensorFlow v2.16. After installing Ubuntu Desktop v22.04:
PERFORM UPDATES:
1.0. After installing Ubuntu, go to the Software Updater:
1.1. Set Ubuntu Software > Download from: Main Server
1.2. Click on Close
1.3. At the popup, click on Reload
2.0. Open Software Updater again (it closes after the reload):
2.1. Go to the Additional Drivers tab
2.2. Select: Using NVIDIA driver metapackage from nvidia-driver-535 (proprietary, tested)
2.3. Click on Apply Changes
2.4. Click on Close
3.0. Open Software Updater once more (it closes again):
3.1. Wait for the update popup, or, if a notification error appears because Ubuntu Pro is not loaded, click on it and then click on Show Updates
3.2. At the Software Updater, click on Install Now
3.3. Click on Restart Now
4.0. After the reboot, log back in and open a Terminal
5.0. To check for Nvidia cards, type:
lspci | grep -i nvidia
5.1. To check the Nvidia driver version, type:
nvidia-smi | grep "Driver Version" | awk '{print $6}' | cut -c1-
5.2. Or just:
nvidia-smi
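If nvidia-smi comes up empty even after the reboot, one extra sanity check worth knowing is whether the Nvidia kernel module actually loaded; a quick sketch:
lsmod | grep nvidia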
SETUP ENVIRONMENT AND PYTHON:
1.0. To check the current version of Python (should be v3.10), type:
python3 -V
2.0. Add the repositories:
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo add-apt-repository ppa:deadsnakes/ppa
3.0. Update the system:
sudo apt update -y && sudo apt upgrade -y
4.0. Install the required packages:
sudo apt install git python-is-python3 python3-pip python3-dev patchelf -y
5.0. To verify the installation (should be at a higher version, such as v3.10.12):
python -V
6.0. Set the PATH now; type:
nano ~/.bashrc
6.1. At the end of the file, add the following line (using $HOME avoids hard-coding your username):
export PATH="$PATH:$HOME/.local/bin"
6.2.????? Save and exit (CTRL-O, CTRL-X)
7.0. To apply the changes:
source ~/.bashrc
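To confirm the new entry took effect, you can split the PATH apart and look for it; a quick sketch (the exact home directory will vary with your username):
echo "$PATH" | tr ':' '\n' | grep ".local/bin"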
INSTALL BAZELISK:
TensorFlow requires the use of Bazel, which is a powerful build tool, like a super-smart recipe book for software, that helps organize and compile large code bases efficiently. Bazelisk is a helpful companion that automatically manages Bazel versions, ensuring you always use the right 'recipe' for projects like TensorFlow, without getting into technical hassles.
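For the curious: Bazelisk decides which Bazel release to fetch by reading a small .bazelversion file at the root of the project being built. Once you clone the TensorFlow repository later in these steps, you can peek at it; a sketch, assuming the file is still present in the v2.16 sources:
cat tensorflow/.bazelversion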
1.0. At the terminal, go to the Downloads folder, such as:
cd Downloads
2.0. Download Bazelisk and move it into place:
wget https://github.com/bazelbuild/bazelisk/releases/download/v1.19.0/bazelisk-linux-amd64
chmod +x bazelisk-linux-amd64
sudo mv bazelisk-linux-amd64 /usr/local/bin/bazel
3.0. Set a Path for Bazelisk (note: /usr/local/bin is usually already on the PATH, in which case this step can be skipped):
nano ~/.bashrc
3.1. Add at the end of the file, similar to before:
export PATH=/usr/local/bin:$PATH
3.2. Save and exit (CTRL-O, CTRL-X)
4.0. To apply the changes:
source ~/.bashrc
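To confirm Bazelisk is now reachable under the name bazel, a quick check; note that the version command may prompt Bazelisk to download a Bazel release the first time, which is simply it doing its job:
which bazel
bazel --version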
INSTALL CLANG v16:
Another requirement is for a compiler, such as Clang, which is a programming tool that acts like a translator, converting human-written code into a language that computers understand. It's essential for building TensorFlow because it ensures the code is translated accurately and efficiently, making the software run smoothly on your machine.
1.0. Download the Clang install script:
wget https://apt.llvm.org/llvm.sh
2.0. Make the script executable:
chmod +x llvm.sh
3.0. Execute the script:
sudo ./llvm.sh 16
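To confirm Clang 16 landed where the configure step will later expect it, a quick sanity check (the path assumes the standard layout used by the apt.llvm.org packages installed above):
/usr/lib/llvm-16/bin/clang --version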
PREPARE FOR TENSORFLOW:
1.0. Install the dependencies:
pip install -U --user pip numpy wheel packaging requests opt_einsum
pip install -U --user keras_preprocessing --no-deps
***** NOW FOR A TRICK *****
Even though those dependencies get installed, they are apparently not all of the dependencies required, or they may not be the right versions. If you skip this step, the compile is likely to fail. The trick here is to first install the official TensorFlow v2.15.0 package...
But we are building our own TensorFlow v2.16, so why install this package, which (depending on your CPU) may not even run? The reason is that it automatically pulls the correct dependencies into the environment; we will uninstall TensorFlow v2.15 later, and the dependencies will remain.
Run the following command:
pip install tensorflow==2.15.0 --upgrade
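As mentioned, this v2.15.0 install is temporary. When the time comes to remove it (right before installing your own wheel, if you want to be tidy), it is just a standard pip uninstall; a sketch of the command for later, so do not run it yet:
pip uninstall -y tensorflow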
DOWNLOAD TENSORFLOW REPOSITORY:
1.0. Now we will need to pull the TensorFlow repository:
git clone https://github.com/tensorflow/tensorflow.git
cd tensorflow/
2.0. Normally the next step would be to "checkout" a version of TensorFlow, but we are going to skip this, as we will be building the latest and greatest version, 2.16.
CHECK FOR CPU FLAGS:
1.0. This is the part that helps optimize TensorFlow for your CPU. Type:
grep flags -m1 /proc/cpuinfo | cut -d ":" -f 2 | tr '[:upper:]' '[:lower:]' | { read FLAGS; OPT="-march=native"; for flag in $FLAGS; do case "$flag" in "sse4_1" | "sse4_2" | "ssse3" | "fma" | "cx16" | "popcnt" | "avx" | "avx2") OPT+=" -m$flag";; esac; done; MODOPT=${OPT//_/\.}; echo "$MODOPT"; }
The result should be something like this, which is what will be used:
-march=native -mssse3 -mcx16 -msse4.1 -msse4.2 -mpopcnt
Generally, the "-march=native" is sufficient to use.
Another method is to use the CPU family name, with this command:
cat /sys/devices/cpu/caps/pmu_name
The result would be something such as "nehalem", which can be passed directly as the optimization flag (note that the architecture name is lowercase):
-march=nehalem
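Another way to eyeball the same information is lscpu, which prints the CPU model and its full flag list in one go; a quick sketch:
lscpu | grep -E 'Model name|Flags'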
CONFIGURE TENSORFLOW v2.16 for CPU Only:
1.0. Type the following command to configure for the build:
./configure
2.0. For the list of questions, use the following (a non-interactive alternative is sketched after this list):
Python Location: Default
Python Library: Default
TensorFlow with ROCm: N
TensorFlow with CUDA: N
Clang as Compiler: Y
Clang Path: /usr/lib/llvm-16/bin/clang
Optimization Flags: -march=native -mssse3 -mcx16 -msse4.1 -msse4.2 -mpopcnt -Wno-gnu-offsetof-extensions
Android Builds: N
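If you would rather not answer the prompts by hand, TensorFlow's configure script also reads its answers from environment variables. A minimal sketch, assuming the variable names used by configure.py in recent releases (PYTHON_BIN_PATH, TF_NEED_ROCM, TF_NEED_CUDA, TF_NEED_CLANG, CLANG_COMPILER_PATH, CC_OPT_FLAGS, TF_SET_ANDROID_WORKSPACE) are unchanged in v2.16; any question not covered by a variable will still be asked interactively:
PYTHON_BIN_PATH=$(which python3) \
TF_NEED_ROCM=0 TF_NEED_CUDA=0 TF_NEED_CLANG=1 \
CLANG_COMPILER_PATH=/usr/lib/llvm-16/bin/clang \
CC_OPT_FLAGS="-march=native -mssse3 -mcx16 -msse4.1 -msse4.2 -mpopcnt -Wno-gnu-offsetof-extensions" \
TF_SET_ANDROID_WORKSPACE=0 ./configure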
NOTES:
BUT WAIT, WE NEED TO FIX THE BAZEL CONFIGURATION FILE!
Unfortunately, there is a bug in the generated Bazel configuration file: it contains a duplicate "-Wno-gnu-offsetof-extensions" entry, which needs to be deleted:
nano .tf_configure.bazelrc
Scroll down to the first line containing "-Wno-gnu-offsetof-extensions" and delete it. Once deleted, save and exit from the Nano editor.
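If you prefer a one-liner to hand-editing, the duplicate can also be found and removed from the shell; a sketch using GNU sed (it deletes only the first matching line, so run the grep before and after to double-check):
grep -n 'Wno-gnu-offsetof-extensions' .tf_configure.bazelrc
sed -i '0,/Wno-gnu-offsetof-extensions/{/Wno-gnu-offsetof-extensions/d;}' .tf_configure.bazelrc
grep -n 'Wno-gnu-offsetof-extensions' .tf_configure.bazelrc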
COMPILE TENSORFLOW:
Now here is the big part…
Part 1 - Build the package-builder:
sudo bazel build --config=opt //tensorflow/tools/pip_package:build_pip_package
This is going to take a long time, from a couple to several hours… so take the time to do something else, such as studying or coding the ML script you'll run once this is completed.
Part 2 - Build the package:
sudo ./bazel-bin/tensorflow/tools/pip_package/build_pip_package /tmp/tensorflow_pkg
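When the package builder finishes, the wheel should land in /tmp/tensorflow_pkg; a quick look to confirm it is there (the exact filename will vary with your Python version and platform):
ls -lh /tmp/tensorflow_pkg/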
INSTALL TENSORFLOW:
Assuming no errors occurred, now is the time to install your individualized, customized and optimized TensorFlow:
python3 -m pip install /tmp/tensorflow_pkg/tensorflow*.whl
VERIFICATION:
1.0. To verify that TensorFlow has installed, first exit out of the tensorflow directory, otherwise you will get an error (TensorFlow cannot be imported while you are inside the tensorflow source directory):
cd ..
1.1. Verify the install:
pip show tensorflow
1.2. Run Python3 commands to test (a small smoke test is sketched after these steps); type:
python3
import tensorflow as tf
print(tf.__version__)
1.3. To exit from Python, type exit() or press CTRL-D
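Beyond printing the version, a tiny computation makes a nice smoke test that the compiled kernels actually run; a minimal sketch you can run straight from the shell (it just sums a random matrix, so the number itself will differ every run):
python3 -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])).numpy())"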
Conclusion: The End of Our Tech Trek (For Now)
And just like that, we've danced through the digits and dallied with the details of building TensorFlow. Hopefully this has been filled with some laughs, a few head-scratches, and maybe even a eureka moment or two. Remember, diving into machine learning and wrestling with TensorFlow is like learning to cook a gourmet meal – it might get messy, but the end result is oh-so-satisfying.
As we wrap up this chapter, don't forget to pat yourself on the back; you've taken a bold step in befriending some of the coolest tech tools out there. Stay tuned for the instructions on compiling TensorFlow for use with GPUs. There will, of course, be more tech adventures where we'll continue to demystify the world of AI and machine learning with a pinch of humor and a dash of simplicity. Until then, keep tinkering, keep learning, and maybe even laugh at the occasional error message – it's all part of the fun!