Communication Runtimes in Parallel Programming with GPUs in HPC Clusters
(Note: This is compiled for my personal reference, not intended as a traditional article for a wider audience.)
(NVSHMEM explained in detail)
Communication runtimes are a vital part of parallel programming.
Parallel programming involves dividing a computational task into smaller sub-tasks that can be executed simultaneously across multiple processing units, such as CPUs or GPUs. It’s crucial in high-performance computing (HPC) and AI workloads because it speeds up computation by leveraging multiple processors.
Communication runtimes are libraries or frameworks that facilitate data exchange between parallel tasks running on different processors, which is especially important in distributed computing environments. Examples include NVSHMEM, MPI, NCCL, and UCX.
NVSHMEM
NVSHMEM is a parallel programming interface based on OpenSHMEM that provides efficient and scalable communication for NVIDIA GPU clusters. NVSHMEM creates a global address space for data that spans the memory of multiple GPUs and can be accessed with fine-grained GPU-initiated operations, CPU-initiated operations, and operations on CUDA streams.
Efficient, Strong Scaling
NVSHMEM enables long-running kernels that include both communication and computation, reducing overheads that can limit an application’s performance when strong scaling.
Low Overhead
One-sided communication primitives reduce overhead by allowing the initiating process or GPU thread to specify all information required to complete a data transfer. This low-overhead model enables many GPU threads to communicate efficiently.
Naturally Asynchronous
Asynchronous communications make it easier for programmers to interleave computation and communication, thereby increasing overall application performance.
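To make the one-sided, asynchronous model concrete, here is a minimal sketch (not from the original text) of a kernel fragment that issues a non-blocking put and keeps computing while the transfer is in flight. The buffer names, sizes, and peer choice are illustrative assumptions; remote must point into NVSHMEM symmetric memory (allocated with nvshmem_malloc).

#include <nvshmem.h>
#include <nvshmemx.h>

/* `remote` must be a symmetric address (allocated with nvshmem_malloc) so the
 * initiating GPU thread can name the destination on the peer PE directly. */
__global__ void compute_and_exchange(float *local, float *remote, int n, int peer) {
    /* ... compute the first half of `local` ... */

    /* One-sided, non-blocking put: the initiating thread supplies everything
     * needed to complete the transfer (destination, source, size, peer). */
    nvshmem_float_put_nbi(remote, local, n / 2, peer);

    /* ... keep computing on the second half while the transfer proceeds ... */

    /* Complete all outstanding puts issued by this thread before the data is reused. */
    nvshmem_quiet();
}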
NVSHMEM Advantages:
Increase Performance
Convolution is a compute-intensive kernel that’s used in a wide variety of applications, including image processing, machine learning, and scientific computing. Spatial parallelization decomposes the domain into sub-partitions that are distributed over multiple GPUs with nearest-neighbor communications, often referred to as halo exchanges.
In the Livermore Big Artificial Neural Network (LBANN) deep learning framework, spatial-parallel convolution is implemented using several communication methods, including MPI and NVSHMEM. The MPI-based halo exchange uses the standard send and receive primitives, whereas the NVSHMEM-based implementation uses one-sided put, yielding significant performance improvements on Lawrence Livermore National Laboratory’s Sierra supercomputer.
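As a rough illustration of the one-sided halo exchange described above (this is not LBANN code), the sketch below pushes ghost cells to the two nearest neighbors of a periodic 1-D decomposition using stream-ordered puts. The slab layout and names are assumptions.

#include <cuda_runtime.h>
#include <nvshmem.h>
#include <nvshmemx.h>

/* Each PE owns a slab laid out as [left halo | interior | right halo], where
 * `slab` was allocated with nvshmem_malloc so it is symmetric on all PEs. */
void halo_exchange(float *slab, size_t interior, size_t halo, cudaStream_t stream) {
    int mype  = nvshmem_my_pe();
    int npes  = nvshmem_n_pes();
    int left  = (mype - 1 + npes) % npes;
    int right = (mype + 1) % npes;

    /* One-sided put of my leftmost interior cells into the left neighbor's right halo. */
    nvshmemx_float_put_on_stream(slab + halo + interior, slab + halo, halo, left, stream);
    /* One-sided put of my rightmost interior cells into the right neighbor's left halo. */
    nvshmemx_float_put_on_stream(slab, slab + interior, halo, right, stream);

    /* Synchronize all PEs on the stream before the next compute step consumes the halos. */
    nvshmemx_barrier_all_on_stream(stream);
}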
(Chart: Efficient Strong-Scaling on Sierra Supercomputer)
(Chart: Efficient Strong-Scaling on NVIDIA DGX SuperPOD)
Accelerate Time to Solution
Reducing the time to solution for high-performance, scientific computing workloads generally requires a strong-scalable application. QUDA is a library for lattice quantum chromodynamics (QCD) on GPUs, and it’s used by the popular MIMD Lattice Computation (MILC) and Chroma codes.
NVSHMEM-enabled QUDA avoids CPU-GPU synchronization for communication, thereby reducing critical-path latencies and significantly improving strong-scaling efficiency.
Simplify Development
The conjugate gradient (CG) method is a popular numerical approach to solving systems of linear equations, and CGSolve is an implementation of this method in the Kokkos programming model. The CGSolve kernel showcases the use of NVSHMEM as a building block for higher-level programming models like Kokkos.
NVSHMEM enables efficient multi-node and multi-GPU execution using Kokkos global array data structures without requiring explicit code for communication between GPUs. As a result, NVSHMEM-enabled Kokkos significantly simplifies development compared to using MPI and CUDA.
(Chart: Productive Programming of Kokkos CGSolve)
NVSHMEM Installation Guide
==============================
This NVIDIA NVSHMEM Installation Guide provides step-by-step instructions to download and install NVSHMEM 3.0.6.
Overview:
NVIDIA NVSHMEM is a programming interface that implements a Partitioned Global Address Space (PGAS) model across a cluster of NVIDIA GPUs. NVSHMEM provides an easy-to-use interface to allocate memory that is symmetrically distributed across the GPUs. In addition to a CPU-side interface, NVSHMEM also provides a CUDA kernel-side interface that allows NVIDIA CUDA threads to access any location in the symmetrically-distributed memory.
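As a small illustration of the kernel-side interface (an illustrative sketch, not taken from the guide), the kernel below reads a value from another PE's copy of a symmetrically allocated buffer; the names are assumptions.

#include <nvshmem.h>
#include <nvshmemx.h>

/* `counter` must be a symmetric address (allocated with nvshmem_malloc on every
 * PE), so the same pointer names the corresponding buffer on any PE. */
__global__ void read_peer_counter(int *counter, int *result, int peer) {
    /* Kernel-side one-element get from the remote PE's copy of `counter`. */
    *result = nvshmem_int_g(counter, peer);
}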
Software Requirements:
NVSHMEM requires the following software:
64-bit Linux.
For a complete compatibility matrix, see the NVIDIA CUDA Installation Guide for Linux.
A C++ Compiler with C++11 support.
CUDA 10.2 or later.
CMake version 3.19 or later
(Optional) InfiniBand GPUDirect Async (IBGDA) transport
Requires Mellanox OFED >= 5.0
Requires nvidia.ko >= 510.40.3. There are two operational modes supported: default and CPU-assisted.
- In the default case, nvidia.ko must be loaded with PeerMappingOverride=1 by setting the following option in /etc/modprobe.d/nvidia.conf: options nvidia NVreg_RegistryDwords="PeerMappingOverride=1;"
- In the CPU-assisted case, PeerMappingOverride is not required.
Requires nvidia-peermem >= 510.40.3
For more information, see: GPUDirect Async.
(Optional) Mellanox OFED.
This software is required to build the IBRC transport. If OFED is unavailable, NVSHMEM can be built with NVSHMEM_IBRC_SUPPORT=0 set in the environment.
(Optional) nvidia-peermem for GPUDirect RDMA.
A PMI-1 (for example, Hydra), PMI-2 (for example, Slurm), or a PMIx (for example, Open MPI) compatible launcher.
(Optional) GDRCopy v2.0 or later.
This software is required for atomics support on non-NVLink connections.
It is not required when NVSHMEM_IBRC_SUPPORT=0 and NVSHMEM_UCX_SUPPORT=0 are set at compile time.
(Optional) UCX version 1.10.0 or later.
This software is required to build the UCX transport.
UCX must be configured with --enable-mt and --with-dm.
(Optional) libfabric 1.15.0.0 or later
(Optional) NCCL 2.0 or later.
(Optional) PMIx 3.1.5 or later.
System Requirements:
The CUDA MPS Service is optional. When using multiple processes per GPU, the CUDA MPS server must be configured on the system to support the complete NVSHMEM API. To avoid deadlock situations, the total GPU utilization shared between the processes must be capped at 100% or lower.
Refer to Multi-Process Service for more information about how to configure the MPS server.
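For reference, below is a hedged sketch of one way to start MPS with a utilization cap; the directories and the 50% value are illustrative assumptions (the per-process cap depends on how many processes share each GPU).

# Start the MPS control daemon (illustrative paths).
export CUDA_MPS_PIPE_DIRECTORY=/tmp/nvidia-mps
export CUDA_MPS_LOG_DIRECTORY=/tmp/nvidia-mps-log
nvidia-cuda-mps-control -d

# Cap the default active thread percentage so that the processes sharing a GPU
# stay within 100% total utilization (for example, 50% each for two processes).
echo "set_default_active_thread_percentage 50" | nvidia-cuda-mps-control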
Installation
Downloading NVSHMEM
Download and extract the NVSHMEM txz archive from https://developer.download.nvidia.com/compute/redist/nvshmem/version-number/source (for example, https://developer.download.nvidia.com/compute/redist/nvshmem/2.11.0/source/ ). The extracted directory contains the following files and subdirectories:
Building And Installing NVSHMEM
Using the NVSHMEM cmake build system:
NVSHMEM now supports building only with cmake version 3.19 or later; the previous make-based build system has been deprecated and removed.
The cmake build system is backwards compatible with the environment variables used in the original Makefile; the same environment will produce a comparable build with either build system.
Cmake natively supports some environment and cmake variables for discovering NVSHMEM dependencies (for example, MPI and CUDA). These native settings can be used within the context of NVSHMEM, but describing them is outside the scope of this document.
Additionally, with the exception of NVSHMEM_HOME (which was superseded by NVSHMEM_PREFIX for the install prefix) all previous environment variables are respected when passed as cmake variables.
The steps outlined below describe typical build steps for NVSHMEM when using cmake:
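(A minimal sketch; the paths and option values are illustrative assumptions, and the exact variables available may differ between releases.)

cd nvshmem_src                      # directory extracted from the txz archive (name varies by version)
mkdir build && cd build
cmake -DCUDA_HOME=/usr/local/cuda \
      -DNVSHMEM_PREFIX=/opt/nvshmem \
      -DNVSHMEM_MPI_SUPPORT=1 \
      -DMPI_HOME=/path/to/mpi ..
make -j install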
Using NVSHMEM In Your Applications
Launching NVSHMEM Programs
NVSHMEM supports the following methods to launch your application: a PMI-1 (for example, Hydra), PMI-2 (for example, Slurm), or PMIx (for example, Open MPI or Slurm) compatible launcher, or bootstrapping from an existing MPI or OpenSHMEM job.
The PMI-1 and PMI-2 clients are included in NVSHMEM and are built automatically as part of the build process. A PMIx client must be provided by the user by installing Open PMIx or by using the PMIx client that is installed by Open MPI or Slurm. When you build Open MPI, include the --enable-install-libpmix configure option. When you build NVSHMEM, set NVSHMEM_PMIX_SUPPORT=1 and PMIX_HOME=/path/to/openmpi.
To select the correct PMI library at runtime, set NVSHMEM_BOOTSTRAP_PMI to PMI, PMI-2, or PMIx. To bootstrap NVSHMEM by using MPI or OpenSHMEM, start the application in the typical way, initialize MPI or OpenSHMEM, and then call the nvshmemx_init_attr function to inform NVSHMEM that it is running as part of an existing MPI or OpenSHMEM job.
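A minimal sketch of the MPI bootstrap path described above, using the standard nvshmemx_init_attr pattern with the NVSHMEMX_INIT_WITH_MPI_COMM flag (error handling omitted):

#include <mpi.h>
#include <nvshmem.h>
#include <nvshmemx.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);

    /* Hand the MPI communicator to NVSHMEM so it attaches to the existing MPI job. */
    nvshmemx_init_attr_t attr;
    MPI_Comm comm = MPI_COMM_WORLD;
    attr.mpi_comm = &comm;
    nvshmemx_init_attr(NVSHMEMX_INIT_WITH_MPI_COMM, &attr);

    /* ... NVSHMEM allocations, kernel launches, and collectives go here ... */

    nvshmem_finalize();
    MPI_Finalize();
    return 0;
}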
Using NVSHMEM with Multiple Processes-Per-GPU
Starting with release 2.5.0, NVSHMEM supports multiple processes per GPU (MPG), which does not require additional configuration and can be run with or without the CUDA Multi-process Service (MPS) enabled.
If MPS is not enabled, however, only a limited subset of the NVSHMEM API is supported.
To enable complete NVSHMEM MPG support, the NVIDIA MPS server must be installed and running on the system. To enable support for the complete API, the MPS server must also be configured to limit the total GPU utilization to at most 100%.
The NVSHMEM library will automatically detect when it runs on a system with more processes than GPUs and fan out the processes accordingly. It also automatically detects the presence of the MPS server daemon and GPU utilization configuration and enables the APIs accordingly. If an unsupported API is used in a limited MPG run, an error message will be printed, and the application will exit.
Using NVSHMEM With Your C or C++ Program
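A minimal host-side skeleton for a standalone (non-MPI) NVSHMEM C/C++ program, assuming a PMI-based launcher; see the "Hello World" example below for a complete program:

#include <nvshmem.h>
#include <nvshmemx.h>

int main(void) {
    nvshmem_init();                              /* initialize the NVSHMEM runtime */
    /* ... select a GPU, allocate symmetric memory with nvshmem_malloc(),
     *     launch CUDA kernels that call the NVSHMEM device API ... */
    nvshmem_finalize();                          /* shut down NVSHMEM before exit */
    return 0;
}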
Using NVSHMEM With Your MPI or OpenSHMEM Program
To run a Hybrid MPI + NVSHMEM program, use the mpirun launcher in the MPI installation.
Similarly, NVSHMEM can be used from OpenSHMEM programs, and you must use the corresponding launcher for the OpenSHMEM library. The only OpenSHMEM implementation tested so far is OSHMEM in Open MPI. Other OpenSHMEM implementations, such as Sandia OpenSHMEM (SOS), should also work, but they have not been tested. To run a hybrid OpenSHMEM/NVSHMEM job, use the oshrun launcher in the Open MPI installation or follow the launcher specification of your OpenSHMEM library.
NVSHMEM relies on a plug-in system for bootstrapping. By default, an MPI bootstrap plug-in is built for NVSHMEM and is installed in $(NVSHMEM_HOME)/lib. If this directory is not in your dynamic linker search path, you might need to add it to $LD_LIBRARY_PATH. This MPI plug-in is selected automatically at runtime if the nvshmemx_init_attr initialization function is used to request the MPI bootstrap, or if NVSHMEM_BOOTSTRAP="MPI" is set.
The source code of the MPI bootstrap plug-in is installed in $(NVSHMEM_HOME)/share/nvshmem/src/bootstrap-plugins and can be built separately from the NVSHMEM library (for example, to support additional MPI libraries). Custom bootstrap plugins are also possible and should implement the interface that is defined in $(NVSHMEM_HOME)/include/nvshmem_bootstrap.h. Plug-ins must be built as relocatable shared objects.
After the external plug-in library is built, it can be specified to NVSHMEM at runtime by setting NVSHMEM_BOOTSTRAP="plugin" and NVSHMEM_BOOTSTRAP_PLUGIN="[name of plugin]". For example, NVSHMEM_BOOTSTRAP="MPI" is equivalent to NVSHMEM_BOOTSTRAP="plugin" with NVSHMEM_BOOTSTRAP_PLUGIN="nvshmem_bootstrap_mpi.so".
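For example, the two equivalent ways of requesting the MPI bootstrap described above can be expressed as environment settings:

export NVSHMEM_BOOTSTRAP=MPI
# ...or, spelled out explicitly as a plug-in:
export NVSHMEM_BOOTSTRAP=plugin
export NVSHMEM_BOOTSTRAP_PLUGIN=nvshmem_bootstrap_mpi.so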
Running Performance Tests
Before you can run performance tests, you first must build them.
If the NVSHMEM library was built with NVSHMEM_MPI_SUPPORT=1, set the CUDA_HOME, NVSHMEM_HOME and MPI_HOME environment variables to build NVSHMEM performance tests:
CUDA_HOME=<path to supported CUDA installation>
NVSHMEM_HOME=<path to directory where NVSHMEM is installed>
MPI_HOME=<path to MPI installation>
If NVSHMEM was built with MPI and OpenSHMEM support (NVSHMEM_MPI_SUPPORT=1 and NVSHMEM_SHMEM_SUPPORT=1), MPI and OpenSHMEM support must also be enabled when you build perftest/.
Build without SHMEM interoperability: To build NVSHMEM performance tests without SHMEM interoperability, set the environment variable NVSHMEM_SHMEM_SUPPORT to 0. By default, performance tests are installed under perftest/perftest_install. To install to a different path, set NVSHMEM_PERFTEST_INSTALL to point to the correct path.
Update LD_LIBRARY_PATH to point to $CUDA_HOME/lib64, $MPI_HOME/lib, and $NVSHMEM_HOME/lib.
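For example, with a bash-style shell:

export LD_LIBRARY_PATH="$CUDA_HOME/lib64:$MPI_HOME/lib:$NVSHMEM_HOME/lib:$LD_LIBRARY_PATH"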
Assuming Hydra is installed under HYDRA_HOME, run performance tests as NVSHMEM jobs, hybrid MPI+NVSHMEM jobs, or hybrid OpenSHMEM+NVSHMEM jobs with the following commands (using perftest/device/pt-to-pt/put.cu as an example):
NVSHMEM job using Hydra (PMI-1)
$HYDRA_HOME/bin/nvshmrun -n <up to number of P2P or InfiniBand NIC accessible GPUs> $NVSHMEM_PERFTEST_INSTALL/device/pt-to-pt/shmem_put_bw
NVSHMEM job using Slurm
srun -n <up to number of P2P or InfiniBand NIC accessible GPUs> $NVSHMEM_PERFTEST_INSTALL/device/pt-to-pt/shmem_put_bw
Note:
If Slurm was built with a PMI that does not match the NVSHMEM default (for example, Slurm was built with PMIx support but NVSHMEM_DEFAULT_PMIX=1 was not set when building NVSHMEM), NVSHMEM_BOOTSTRAP_PMI can be used to override the default. Possible values are PMIX, PMI-2, and PMI. The Slurm --mpi= option to srun can be used to tell Slurm which PMI interface to use.
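For example (a hedged sketch; the plugin name accepted by --mpi depends on how Slurm was built):

NVSHMEM_BOOTSTRAP_PMI=PMIX srun --mpi=pmix -n 2 $NVSHMEM_PERFTEST_INSTALL/device/pt-to-pt/shmem_put_bw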
Hybrid MPI/NVSHMEM job
$MPI_HOME/bin/mpirun -n <up to number of GPUs accessible by P2P or InfiniBand NIC> -x NVSHMEMTEST_USE_MPI_LAUNCHER=1 $NVSHMEM_PERFTEST_INSTALL/device/pt-to-pt/shmem_put_bw
Hybrid OpenSHMEM/NVSHMEM job
$MPI_HOME/bin/oshrun -n <up to number of GPUs accessible by P2P or InfiniBand NIC> -x USE_SHMEM_IN_TEST=1 $NVSHMEM_PERFTEST_INSTALL/device/pt-to-pt/shmem_put_bw
“Hello World” Example
1. Save the following code as nvshmemHelloWorld.cu:
#include <stdio.h>
#include <cuda.h>
#include <nvshmem.h>
#include <nvshmemx.h>

__global__ void simple_shift(int *destination) {
    int mype = nvshmem_my_pe();
    int npes = nvshmem_n_pes();
    int peer = (mype + 1) % npes;

    /* One-sided put: write my PE id into the symmetric buffer on the next PE. */
    nvshmem_int_p(destination, mype, peer);
}

int main(void) {
    int mype_node, msg;
    cudaStream_t stream;

    nvshmem_init();
    /* Use the node-local PE id to pick a GPU on this host. */
    mype_node = nvshmem_team_my_pe(NVSHMEMX_TEAM_NODE);
    cudaSetDevice(mype_node);
    cudaStreamCreate(&stream);

    /* Allocate one int from the symmetric heap on every PE. */
    int *destination = (int *) nvshmem_malloc(sizeof(int));

    simple_shift<<<1, 1, 0, stream>>>(destination);
    nvshmemx_barrier_all_on_stream(stream);
    cudaMemcpyAsync(&msg, destination, sizeof(int), cudaMemcpyDeviceToHost, stream);
    cudaStreamSynchronize(stream);

    printf("%d: received message %d\n", nvshmem_my_pe(), msg);

    nvshmem_free(destination);
    nvshmem_finalize();
    return 0;
}
2. Build nvshmemHelloWorld.cu with one of the following commands:
When using dynamic linking:
nvcc -rdc=true -ccbin g++ -gencode=$NVCC_GENCODE -I $NVSHMEM_HOME/include nvshmemHelloWorld.cu -o nvshmemHelloWorld.out -L $NVSHMEM_HOME/lib -lnvshmem_host -lnvshmem_device
When using static linking:
nvcc -rdc=true -ccbin g++ -gencode=$NVCC_GENCODE -I $NVSHMEM_HOME/include nvshmemHelloWorld.cu -o nvshmemHelloWorld.out -L $NVSHMEM_HOME/lib -lnvshmem -lnvidia-ml -lcuda -lcudart
Where arch=compute_70,code=sm_70 is the value of NVCC_GENCODE for V100 GPUs.
3. Run the nvshmemHelloWorld sample with one of the following commands:
When running on one host with two GPUs (connected by PCIe, NVLink, or InfiniBand):
$HYDRA_HOME/bin/nvshmrun -n 2 -ppn 2 ./nvshmemHelloWorld.out
When running on two hosts with one GPU per host that is connected by InfiniBand:
$HYDRA_HOME/bin/nvshmrun -n 2 -ppn 1 --hosts hostname1,hostname2 ./nvshmemHelloWorld.out
In an HPC or AI workload, you might use MPI for general process communication across a cluster, NCCL for efficient multi-GPU operations, UCX for low-latency communication across different hardware, and NVSHMEM for direct memory access across GPUs.