Running InstructLab on a Gaming PC - Part 1

This article assumes the reader knows what InstructLab is. If you do not, see the InstructLab announcement. The execution steps I went through, including all the snags I hit, are captured in this article. If you really want to understand the details of how InstructLab uses a novel synthetic-data-based alignment tuning method for Large Language Models (LLMs), please read the Large-Scale Alignment for ChatBots paper.

As a long-term IBMer and self-professed geek, I've been eagerly awaiting the chance to dive into InstructLab ever since it was little more than an internal rumor. When it was finally announced, I knew I had to jump in immediately. Wondering what this means for watsonx.ai? Check this out!

As I prepared to participate in this democratization of model training with InstructLab, I realized that I needed to venture beyond my comfort zone - and my trusty M2 Mac with 24 GB RAM. According to Wikipedia, Microsoft Windows dominates the desktop and laptop market with a whopping 72.22% share, followed by Apple's macOS at 14.73%, desktop Linux at 3.88%, and Google's ChromeOS at 2.45%.

Whether the percentages are accurate or not, it's clear that there are a lot of Windows die-hards out there, and I wanted to make sure InstructLab is accessible to them. But, as the project is still in its early stages, there's currently no documentation available for running InstructLab on Windows. So, I donned my experimental hat and set out to find a Windows machine that's both widely used and powerful enough for the task.

Gaming rigs seemed like the perfect fit - not the über-fancy setups of pro gamers, but a decent machine with solid GPU acceleration. And then, I laid eyes on my son's gaming rig ... let's just say, a little bit of gentle maternal persuasion may have been involved in its "acquisition" (but I'll vehemently deny any accusations of emotional blackmail!).

Now, as I dive headfirst into this experiment, I've decided to skip PowerShell and stick with the good ol' Command Prompt. With its NVIDIA GeForce RTX 3080 GPU and Intel Core i9 processor, this rig is more than capable of handling the demands of InstructLab. So, buckle up, folks! It's time to get down to business, see where this gaming machine adventure takes us, and make model training more accessible to everyone.

Installing InstructLab

As I began my experiment, I realized that my son's gaming rig didn't have Python installed - not surprising, given that he's not a programmer! After reviewing the Linux instructions and GPU acceleration documentation, I decided to go with Python 3.11. Then, I followed the instructions on the iLab page, carefully adapting them for Windows by substituting Windows-specific syntax and commands as needed.

mkdir instructlab
cd instructlab
python -m venv venv
venv\Scripts\activate
python -m pip install --upgrade pip setuptools        

Diving Right in

I decided to dive right in and run the pip install command, figuring it would be the best way to identify what was missing.

pip install git+https://github.com/instructlab/instructlab.git@stable -C cmake.args="-DLLAMA_CUBLAS=on"        

However, I soon hit a snag. As you can see, I attempted to add the CMake arguments for my NVIDIA GPU to the end of the command, hoping it would magically work. But, as expected, that didn't quite pan out. It turned out that I was missing some essential development dependencies on this machine, including CMake itself, which led to several dependency errors.

Error due to missing dependencies:

Building wheel for llama_cpp_python (pyproject.toml) did not run successfully.
  │ exit code: 1
  ╰─> [20 lines of output]
      *** scikit-build-core 0.9.4 using CMake 3.29.3 (wheel)
      *** Configuring CMake...
      2024-05-21 20:25:51,358 - scikit_build_core - WARNING - Can't find a Python library, got libdir=None, ldlibrary=None, multiarch=None, masd=None
      loading initial cache file …………..
      -- Building for: NMake Makefiles
      CMake Error at CMakeLists.txt:3 (project):
        Running
         'nmake' '-?'
        failed with:
         no such file or directory
      CMake Error: CMAKE_C_COMPILER not set, after EnableLanguage
      CMake Error: CMAKE_CXX_COMPILER not set, after EnableLanguage
      -- Configuring incomplete, errors occurred!
      *** CMake configuration failed        

Solving the Dev Dependencies

Fortunately, the solution was straightforward. I simply needed to download and install the Build Tools for Visual Studio. The key was to select the right workloads and components during the installation process. This meant checking the boxes for the 'C++ build tools' workload, as well as the optional components for CMake, MSBuild, and the Windows 10 SDK. Additionally, I had to install the Desktop development with C++ package, which was necessary for CUDA support.

Visual Studio installer

Next, I needed to verify that CMake had been installed correctly. To do this, I ran the command cmake --version in my Command Prompt. If you're following along and find that CMake isn't installed, don't worry - just head to the CMake website and download and install it.
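If you want to run the same check, it's just:

cmake --version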

Setting up environment variables can sometimes be a bit tricky. If you're comfortable with tinkering around, you can locate the vcvarsall.bat file and run it from your Command Prompt instance. In my case, the file was located at

C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\Build\vcvarsall.bat.
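If you go that route, note that vcvarsall.bat expects a target architecture argument. For a 64-bit build I would run something like the following (the x64 argument is my assumption based on standard Visual Studio tooling, not something spelled out in the InstructLab docs):

"C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\Build\vcvarsall.bat" x64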

However, if you'd rather skip the hassle or if Windows has become a bit unfamiliar, there's a handy shortcut for you: the 'Developer Command Prompt for Visual Studio'. Simply press the Windows key + S to open the Windows search, type in "Developer Command Prompt for Visual Studio", and open it up. Then, navigate to your desired folder using the trusty old cd command, and you're all set.

You should be able to run the pip install command now.
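For reference, that's the same command from the earlier attempt:

pip install git+https://github.com/instructlab/instructlab.git@stable -C cmake.args="-DLLAMA_CUBLAS=on"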

Hardware Acceleration

Taking advantage of hardware acceleration is a great idea! But, if you're working with an unfamiliar machine, it's essential to know your GPU driver details. Here are two ways to find out:

GPU Driver

Method 1: Using WMIC

Open a Command Prompt and run the following command:

wmic path win32_videocontroller get name,driverdate,driverversion        

This will display the name, driver date, and driver version of your GPU.

Method 2: Using Device Manager

Press the Windows key + X and select Device Manager. Then:

  • Locate the "Display Adapters" section and expand it.
  • Right-click on your graphics card and select "Properties".
  • Go to the "Driver" tab to see the driver provider, date, version, and other details.

Either method will help you identify your GPU driver.

In my case, my GPU is a GeForce RTX 3080, but it was running an outdated driver. I updated it by downloading the latest driver from NVIDIA. Note: If you're following along, make sure to choose the Studio Driver option when updating your driver. This will ensure you get the right driver for your needs.

You can confirm the install went well by running:

nvidia-smi        

This command will display the current driver version, CUDA version, and other information about your NVIDIA GPU, confirming that the latest Studio Driver is installed.

nvidia-smi

The CUDA Conundrum

NVIDIA CUDA is a powerful platform that lets you tap into the power of your GPU to speed up your workflow. It works by offloading tasks that don't need to be done in a specific order, freeing up your CPU to focus on other things. By running these tasks in parallel on the GPU, you can get more done in less time. Think of it like having a super-fast coprocessor that works with your CPU to get the job done!
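Just to make the "offloading" idea concrete, here's a tiny PyTorch sketch (it assumes the PyTorch install we do a bit further down, so it's illustration only, not an InstructLab step) that moves a matrix multiplication from the CPU to the GPU:

import torch

# Build two large matrices on the CPU
a = torch.randn(4096, 4096)
b = torch.randn(4096, 4096)

# Offload the same multiplication to the CUDA GPU
a_gpu = a.to("cuda")
b_gpu = b.to("cuda")
result = a_gpu @ b_gpu  # runs in parallel on the GPU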

From the output of nvidia-smi, I realized I need CUDA Version 12.5. But do I even have CUDA installed?

Let's check it out.

nvcc --version         

Uh-oh! The response wasn't exactly what I was hoping for. It seemed that I didn't have CUDA installed after all. No worries, though! I headed over to the NVIDIA Developer site and downloaded the CUDA Toolkit. With that installed, I should be good to go!


Next, I got the appropriate version of PyTorch.

Source: pytorch.org


PyTorch for CUDA 12.5 wasn't available for Windows yet, so I had to go with the closest approximation. One thing I love about PyTorch, though, is how they provide a convenient pip command to get started. So, I went ahead and ran it.
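At the time, the command pytorch.org generated for the closest CUDA build (CUDA 12.1) looked roughly like this - treat the cu121 index URL as my assumption and copy whatever command the site gives you for your setup:

pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu121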

How do I verify that my installation was successful? Easy peasy! I just opened a Python shell, imported torch, and checked for CUDA availability. If everything went smoothly, I should see True, indicating that CUDA is available. And that's it! I can exit the Python shell using exit().
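Here's roughly what that quick check looks like from the Command Prompt:

python
>>> import torch
>>> torch.cuda.is_available()
True
>>> exit()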

Now, our prep work is done. Remember when we installed the Build Tools for Visual Studio and the "Desktop development with C++ package" earlier? Well, this is where it all comes together. If you are following along and didn't install the "Desktop development with C++ package" earlier, you might run into some issues. So, make sure you've got it installed before proceeding. Trust me, you don't want to skip this step!

Back to the InstructLab Instructions

Now that we've got CUDA set up, let's get back to the InstructLab instructions. A heads-up: when you're reinstalling, the docs say to install llama_cpp_python==0.2.55, but when I tried it, InstructLab had already moved on to a newer version. If you get an error due to a version mix-up, just follow the version suggested in the error message. In my case, I installed 0.2.75.

pip cache remove llama_cpp_python
pip install --force-reinstall llama_cpp_python==0.2.75 -C cmake.args="-DLLAMA_CUBLAS=on"        

And it worked out ... almost.

CMake Error: No CUDA Toolset Found

I hit a snag when running CMake above - it found the CUDAToolkit but complained about a missing CUDA toolset. After some head-scratching and Googling, I found a solution. I copied these four CUDA-related files

  • CUDA 12.5.props
  • CUDA 12.5.targets
  • CUDA 12.5.xml
  • Nvda.Build.CudaTasks.v12.5.dll

from: C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.5\extras\visual_studio_integration\MSBuildExtensions

to: C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Microsoft\VC\v170\BuildCustomizations
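If you'd rather do the copy from the Command Prompt instead of File Explorer, something along these lines should work - this is my own shortcut, it needs an elevated (Administrator) prompt because the destination is under Program Files, and the wildcard assumes the MSBuildExtensions folder contains only those integration files:

copy "C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v12.5\extras\visual_studio_integration\MSBuildExtensions\*" "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\MSBuild\Microsoft\VC\v170\BuildCustomizations"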

Now I could re-install llama_cpp_python successfully.

As per the ilab steps, we need to reinstall instructlab now.

pip install instructlab -C cmake.args="-DLLAMA_CUBLAS=on"        
Installation? Check!
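If you want a quick sanity check that the ilab command-line tool is now available inside the venv (my own habit, not a step from the docs), just ask it for its help text:

ilab --help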

