Performance Improvement in Numpy 2.x
Patrick Nicolas
Director of Data Engineering @ aidéo technologies | software & data engineering, operations, and machine learning.
The performance of training geometric learning and complex deep learning models directly affects time to market and development costs. Numpy 2.x, combined with Apple's Accelerate framework and the latest BLAS/LAPACK libraries, can reduce execution time by up to 38%.
What you will learn: Enhanced performance in linear algebra computations using Numpy 2.x with the ILP64 interface to the Accelerate framework and the LAPACK 3.9.1 library.
Overview
Non-linear deep learning and geometric learning depend on geodesics, Riemannian metrics, and exponential maps to train models and predict outcomes. These operations are computationally intensive, making them ideal candidates for performance enhancements in the latest release of Numpy, version 2.1 [1].
Linear algebra operations, on the other hand, rely on the BLAS and LAPACK libraries, which can be optimized for specific platforms [2].
This study is performed on a macOS laptop with an Apple M2 Max chip, utilizing optimizations of various mathematical libraries such as LAPACK and BLAS from the Accelerate framework [3]. Specifically, we use the ILP64 interface with the LAPACK 3.9.1 library alongside Numpy 2.1 [4].
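Before benchmarking, it is worth confirming which BLAS/LAPACK implementation a given Numpy installation is actually linked against. A minimal check, assuming only that Numpy is installed (the exact output format varies by Numpy version and platform):

```python
import numpy as np

# Report the Numpy version and the BLAS/LAPACK build configuration.
# On an Apple Silicon build of Numpy 2.x this typically mentions the
# Accelerate framework; elsewhere it usually reports OpenBLAS or MKL.
print(np.__version__)
np.show_config()
```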
In the following two sections, we will assess the performance enhancements in Numpy 2.1 when utilizing the ILP64 interface to the Accelerate-provided LAPACK library on macOS. The evaluation will cover two scenarios:
- Basic Linear Algebra operations
- Fast Fourier Transform
Linear Algebra
Let's compare the performance of Numpy 2.1, backed by the Accelerate LAPACK 3.9.1 library, against Numpy 1.26 by performing the following computations: sigmoid, element-wise addition, element-wise multiplication, and dot product.
Implementation
Let's start by defining the keys for our linear algebra operators, along with the imports used throughout the code snippets.

from typing import AnyStr, Dict, List, Self
import numpy as np

SIGMOID: AnyStr = 'sigmoid'
ADD: AnyStr = 'add'
MUL: AnyStr = 'mul'
DOT: AnyStr = 'dot'
We wrap the basic operations on Numpy arrays in the class LinearAlgebraEval, which has two constructors: the default __init__ and the alternative build class method.
The four methods/operations, sigmoid, add, mul, and dot product, are straightforward and self-explanatory.
class LinearAlgebraEval(object):
    def __init__(self, x: np.array, shape: List[int]) -> None:
        self.x = x.reshape(shape)

    @classmethod
    def build(cls, size: int) -> Self:
        _x = np.random.uniform(0.0, 100.0, size)
        return cls(_x, [10, 100, -1])

    def sigmoid(self, a: float) -> np.array:
        return 1.0 / (1.0 + np.exp(-self.x * a))

    def add(self, y: np.array) -> np.array:
        return self.x + y.reshape(self.x.shape)

    def mul(self, y: np.array) -> np.array:
        z = 2.0 * self.x * y.reshape(self.x.shape)
        return z * z

    def dot(self, y: np.array) -> np.array:
        return np.dot(self.x.reshape(-1), y.reshape(-1))
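Independently of the class above, the BLAS dispatch for the dot product can be sanity-checked with a minimal, standalone micro-benchmark; the array size and seed below are arbitrary illustrative choices:

```python
import time
import numpy as np

# Time a single large dot product. np.dot on 1-D float64 arrays
# dispatches to the BLAS backend selected when Numpy was built
# (Accelerate, OpenBLAS, MKL, ...).
rng = np.random.default_rng(42)
x = rng.uniform(0.0, 100.0, 1_000_000)
y = rng.uniform(0.0, 100.0, 1_000_000)

start = time.perf_counter()
result = np.dot(x, y)
duration = time.perf_counter() - start
print(f"dot product computed in {duration:.6f} s")
```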
The performance evaluation is conducted by an instance of the LinearAlgebraPerfEval class. The key method __call__ relies on a dictionary with the operation as key and the list of execution times for that operation as value.
class LinearAlgebraPerfEval(object):
    def __init__(self, sizes: List[int]) -> None:
        self.sizes = sizes

    def __call__(self) -> Dict[AnyStr, List[float]]:
        _perf = {}
        lin_algebra = LinearAlgebraEval.build(self.sizes[0])
        _perf[SIGMOID] = [LinearAlgebraPerfEval.__performance(lin_algebra, SIGMOID)]
        _perf[ADD] = [LinearAlgebraPerfEval.__performance(lin_algebra, ADD)]
        _perf[MUL] = [LinearAlgebraPerfEval.__performance(lin_algebra, MUL)]
        _perf[DOT] = [LinearAlgebraPerfEval.__performance(lin_algebra, DOT)]

        for index in range(1, len(self.sizes)):
            # Alternative constructor
            lin_algebra = LinearAlgebraEval.build(self.sizes[index])
            # The method __record executes each of the four operations for
            # a variable size of Numpy array.
            _perf = LinearAlgebraPerfEval.__record(_perf, lin_algebra, SIGMOID)
            _perf = LinearAlgebraPerfEval.__record(_perf, lin_algebra, ADD)
            _perf = LinearAlgebraPerfEval.__record(_perf, lin_algebra, MUL)
            _perf = LinearAlgebraPerfEval.__record(_perf, lin_algebra, DOT)
        return _perf
The private helper method __record, described in the Appendix, executes and times each of the four operations for a set of Numpy arrays of increasing size. The timing relies on the timeit decorator, also described in the Appendix.
Evaluation
Let's plot the execution times for these four operations, with 3D arrays of shape (10, 100, -1) whose total size varies from 1 million to 60 million values.
Numpy 1.26 with OpenBLAS
Numpy 2.1 with ILP64 BLAS/LAPACK 3.9.1
Numpy 2.1 and ILP64 have no impact on the performance of the sigmoid, but they reduce the average execution time of addition, multiplication, and dot product on these large arrays by 25 to 40%.
Fast Fourier Transform
Let's define a signal as a sum of 4 sinusoidal functions with 4 different frequency modes.
The signal is then sampled in the interval [0, 1] with a varying number of data points.
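Under the assumption of unit amplitudes, the sampled signal can be sketched directly; the frequencies match those used later in the benchmark, and the sample count of 1,024 is an arbitrary illustrative choice:

```python
import numpy as np

# s(x) = sum_i sin(2*pi*f_i*x), sampled at evenly spaced points in [0, 1)
frequencies = [4, 7, 11, 17]
num_samples = 1024
x = np.arange(0, 1.0, 1.0 / num_samples)
signal = sum(np.sin(2 * np.pi * f * x) for f in frequencies)
print(signal.shape)   # → (1024,)
```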
Implementation
We encapsulate the generation, sampling, and extraction of the frequency spectrum within the FFTEval class.
The constructor iteratively adds sine functions to implement the desired formula.
The compute method samples the signal over the interval [0, 1] and calls the Numpy fft.fftfreq method to generate the frequency distribution. The timing of the execution in the method compute uses the timeit decorator.
class FFTEval(object):
    def __init__(self, frequencies: List[int], samples: int) -> None:
        self.F = 1.0 / samples
        self.x = np.arange(0, 1.0, self.F)
        pi_2 = 2 * np.pi
        # Compose the sinusoidal signal/function
        self.signal = np.sin(pi_2 * frequencies[0] * self.x)
        if len(frequencies) > 1:
            for f in frequencies[1:]:
                self.signal += np.sin(pi_2 * f * self.x)

    @timeit
    def compute(self) -> (np.array, np.array):
        num_samples = len(self.signal)
        # Select the positive amplitudes and frequencies
        num_half_samples = num_samples // 2
        # Invoke the Numpy FFT function to extract frequencies
        freqs = np.fft.fftfreq(num_samples, self.F)
        _freqs = freqs[:num_half_samples]
        _amplitudes = np.abs(np.fft.fft(self.signal)[:num_half_samples]) / num_samples
        return _freqs, _amplitudes
The class FFTPerfEval implements the execution and collection of the execution times for the FFT with a varying number of samples in the interval [0, 1].
class FFTPerfEval(PerfEval):
    def __init__(self, sizes: List[int]) -> None:
        super(FFTPerfEval, self).__init__(sizes)

    def __call__(self) -> List[float]:
        durations = []
        # Collect the execution time for each of the numbers of
        # samples defined in the constructor
        for samples in self.sizes:
            durations.append(FFTPerfEval.__compute(samples))
        return durations

    @staticmethod
    def __compute(sz: int) -> float:
        frequencies = [4, 7, 11, 17]
        fft_eval = FFTEval(frequencies, sz)
        # compute is decorated with @timeit, so it returns the elapsed time
        return fft_eval.compute()
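As a standalone sanity check, independent of the classes above, the following sketch confirms that the FFT recovers the four generating frequencies (same frequency list as in the benchmark; the sample count of 1,024 is an arbitrary choice):

```python
import numpy as np

frequencies = [4, 7, 11, 17]
num_samples = 1024
F = 1.0 / num_samples
x = np.arange(0, 1.0, F)
signal = sum(np.sin(2 * np.pi * f * x) for f in frequencies)

# Keep only the positive half of the spectrum
half = num_samples // 2
freqs = np.fft.fftfreq(num_samples, F)[:half]
amplitudes = np.abs(np.fft.fft(signal)[:half]) / num_samples

# The four largest amplitudes sit exactly at the generating frequencies
peaks = [float(f) for f in sorted(freqs[np.argsort(amplitudes)[-4:]])]
print(peaks)   # → [4.0, 7.0, 11.0, 17.0]
```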
Evaluation
Numpy 1.26 with OpenBLAS
Numpy 2.1 with BLAS/LAPACK 3.9.1
Relative performance improvement
Let's evaluate the relative performance improvement of Numpy 2.1 over Numpy 1.26.
The switch to Numpy 2.1 with ILP64 on macOS 14.6.1 shows an average improvement of 33%.
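The relative improvement per operation is (t_old - t_new) / t_old. A minimal sketch of this aggregation, using illustrative hypothetical timings rather than the measured values from the plots:

```python
# Hypothetical timings in seconds, for illustration only
baseline = [0.84, 1.62, 3.10]   # e.g. Numpy 1.26 + OpenBLAS
improved = [0.55, 1.10, 2.05]   # e.g. Numpy 2.1 + Accelerate ILP64

# Relative gain per measurement, in percent
gains = [100.0 * (b - n) / b for b, n in zip(baseline, improved)]
average_gain = sum(gains) / len(gains)
print(round(average_gain, 1))   # → 33.5
```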
References
[1] Numpy
[2] LAPACK
------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design, and end-to-end deployment and support, with extensive knowledge in machine learning. He has been director of data engineering at Aideo Technologies since 2017. He is the author of "Scala for Machine Learning" (Packt Publishing, ISBN 978-1-78712-238-3) and of the Geometric Learning in Python newsletter on LinkedIn.
Appendix
Standard timing decorator in Python
def timeit(func):
    import time

    def wrapper(*args, **kwargs):
        start_time = time.time()
        func(*args, **kwargs)
        end_time = time.time()
        return end_time - start_time
    return wrapper
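A minimal usage sketch (the busy_sum function is hypothetical): because the wrapper returns end_time - start_time, the decorated call yields the elapsed time and discards the function's own result.

```python
import time

def timeit(func):
    def wrapper(*args, **kwargs):
        start_time = time.time()
        func(*args, **kwargs)
        end_time = time.time()
        return end_time - start_time
    return wrapper

@timeit
def busy_sum(n: int) -> int:
    return sum(range(n))

duration = busy_sum(1_000_000)
print(f"busy_sum took {duration:.6f} s")
```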
Methods to compute and record the execution time for the basic linear algebra operations. The execution time is recorded through the timeit decorator.
    @staticmethod
    def __record(perf: Dict[AnyStr, List[float]],
                 lin_algebra: LinearAlgebraEval,
                 op: AnyStr) -> Dict[AnyStr, List[float]]:
        duration = LinearAlgebraPerfEval.__performance(lin_algebra, op)
        lst = perf[op]
        lst.append(duration)
        perf[op] = lst
        return perf

    @staticmethod
    @timeit
    def __performance(lin_algebra: LinearAlgebraEval, op: AnyStr) -> float:
        match op:
            case "sigmoid":
                return lin_algebra.sigmoid(8.5)
            case "add":
                return lin_algebra.add(lin_algebra.x)
            case "mul":
                return lin_algebra.mul(lin_algebra.x)
            case "dot":
                return lin_algebra.dot(lin_algebra.x)