Fractal Dimension of Images in Python
Patrick Nicolas
Director Data Engineering @ aidéo technologies | software & data engineering, operations, and machine learning.
Configuring the parameters of a 2D convolutional neural network, such as kernel size and padding, can be challenging because it largely depends on the complexity of an image or its specific sections. Fractals help quantify the complexity of important features and boundaries within an image and ultimately guide the data scientist in optimizing his/her model.
What you will learn: How to evaluate the complexity of an image using the fractal dimension index.
Overview
Fractal dimension
A fractal dimension is a measure used to describe the complexity of fractal patterns or sets by quantifying the ratio of change in detail relative to the change in scale [ref 1].
Initially, fractal dimensions were used to characterize intricate geometric forms where detailed patterns were more significant than the overall shape. For ordinary geometric shapes, the fractal dimension theoretically matches the familiar Euclidean or topological dimension.
However, the fractal dimension can take non-integer values. If a set's fractal dimension exceeds its topological dimension, it is considered to exhibit fractal geometry [ref 2].
There are many approaches to compute the fractal dimension of an image [ref 1]. This article describes the concept and implementation of one of them, the box counting method, in Python.
Box counting method
The box counting method is similar to the perimeter measuring technique we applied to coastlines. However, instead of measuring length, we overlay the image with a grid and count how many squares in the grid cover any part of the image. We then repeat this process with progressively finer grids, each with smaller squares [ref 3]. By continually reducing the grid size, we capture the pattern's structure with greater precision.
If N is the number of measurement units (yardsticks in 1D, squares in 2D, cubes in 3D, ...) and eps the related scaling factor, the fractal dimension index is computed as:

D = lim (eps -> 0) log(N) / log(1/eps)     (1)
We will use the height of the square box r as our measurement unit for images.
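Before turning to grey-scale images, formula (1) can be illustrated on a simple binary image. The following standalone sketch (the function name count_boxes is ours, not from the article's code) counts covering boxes for a diagonal line, whose fractal dimension should be close to 1:

```python
import numpy as np

def count_boxes(image: np.ndarray, box_size: int) -> int:
    """Count grid squares of side box_size covering at least one set pixel."""
    n = image.shape[0]
    count = 0
    for row in range(0, n, box_size):
        for col in range(0, n, box_size):
            if image[row: row + box_size, col: col + box_size].any():
                count += 1
    return count

# A 64x64 binary image containing a straight diagonal line
size = 64
img = np.zeros((size, size), dtype=bool)
np.fill_diagonal(img, True)

box_sizes = [2, 4, 8, 16]
counts = [count_boxes(img, s) for s in box_sizes]   # [32, 16, 8, 4]

# Slope of log(N) vs. log(1/eps) estimates the fractal dimension
eps = [s / size for s in box_sizes]
slope = np.polyfit(-np.log(eps), np.log(counts), 1)[0]
print(round(slope, 2))   # prints 1.0, as expected for a line
```

Halving the box size doubles the number of covering boxes, so the log-log slope, and hence the estimated dimension, is 1.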
Implementation
First, let's define the parameters of a box (square):

@dataclass
class BoxParameter:
    eps: float     # Scaling factor
    r: int         # Number of measurement units (boxes)

    # Denominator of the fractal dimension: log(1/eps)
    def log_inv_eps(self) -> float:
        return -np.log(self.eps)

    # Numerator of the fractal dimension: log(N)
    def log_num_r(self) -> float:
        return np.log(self.r)
The two methods log_inv_eps and log_num_r implement the denominator and numerator of formula (1), respectively.
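As a quick sanity check (a standalone sketch, not part of the article's code), a single (eps, r) pair already yields the dimension-like ratio log(r) / log(1/eps):

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class BoxParameter:
    eps: float
    r: int

    def log_inv_eps(self) -> float:
        # Denominator of formula (1): log(1/eps)
        return -np.log(self.eps)

    def log_num_r(self) -> float:
        # Numerator of formula (1): log(N)
        return np.log(self.r)

# 16 boxes at a scale of 1/4: ratio log(16)/log(4) = 2
box = BoxParameter(eps=0.25, r=16)
ratio = box.log_num_r() / box.log_inv_eps()
print(ratio)   # prints 2.0
```

In practice the dimension is not taken from a single pair but from the slope of a linear fit over many such pairs, as shown later in __compute_fractal_dim.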
The class FractalDimImage encapsulates the computation of the fractal dimension of a given image. Its two class (static) members are:
class FractalDimImage(object):
    # Default number of grey values
    num_grey_levels: int = 256
    # Convergence criterion
    max_plateau_count = 3

    def __init__(self, image_path: AnyStr) -> None:
        raw_image: np.array = self.__load_image(image_path)
        # If the image is actually an RGB (color) image, convert it
        # to a grey scale image
        self.image = FractalDimImage.rgb_to_grey(raw_image) \
            if len(raw_image.shape) == 3 and raw_image.shape[2] == 3 \
            else raw_image
We cannot assume that the image is already in grey scale: it may be defined with the 3 RGB channels. Therefore, if the third value of the shape is 3, the image is converted into a grey scale array.
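The conversion method rgb_to_grey is referenced in the constructor but not listed in the article. A plausible implementation (an assumption on our part) is a weighted average of the three channels using the standard ITU-R BT.601 luma weights:

```python
import numpy as np

def rgb_to_grey(image: np.ndarray) -> np.ndarray:
    """Convert an RGB image to grey scale (ITU-R BT.601 luma weights).

    This is a hypothetical stand-in for the article's rgb_to_grey method.
    """
    return image[..., :3] @ np.array([0.2989, 0.5870, 0.1140])

# Example: a 2x2 pure-red image collapses to a uniform grey level
rgb = np.zeros((2, 2, 3))
rgb[..., 0] = 255.0
grey = rgb_to_grey(rgb)
print(grey.shape)   # prints (2, 2)
```

Any other luminance convention (e.g., simple channel averaging) would work as well, since the box counting only requires a single intensity value per pixel.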
The following private method, __load_image, loads the image from a given path and converts it into a NumPy array.
@staticmethod
def __load_image(image_path: AnyStr) -> np.array:
    from PIL import Image
    from numpy import asarray

    this_image = Image.open(fp=image_path, mode="r")
    return asarray(this_image)
The computation of the fractal dimension is implemented by the method __call__, which returns a tuple of the fractal dimension and the trace of box parameters collected at each iteration.
The symmetrical nature of fractals allows us to iterate over only half the size of the image [1]. The number of boxes N created at each iteration i takes the grey scale into account: N = 256 // (num_pixels // i) [2]. The method populates each box with pixels/grey-scale values (method __create_boxes) [3], then computes the sum of least squares (__sum_least_squares) [4]. The last statement [5] implements formula (1).
The source code for the private methods __create_boxes and __sum_least_squares is included in the appendix for reference.
def __call__(self) -> (float, List[BoxParameter]):
    image_pixels = self.image.shape[0]
    plateau_count = 0
    prev_num_r = -1
    trace = []
    max_iters = (image_pixels // 2) + 1                                     # [1]

    for iter in range(2, max_iters):
        num_boxes = FractalDimImage.num_grey_levels // (image_pixels // iter)  # [2]
        n_boxes = max(1, num_boxes)
        num_r = 0                        # Number of squares
        eps = iter / image_pixels

        for i in range(0, image_pixels, iter):
            boxes = self.__create_boxes(i, iter, n_boxes)                   # [3]
            num_r += FractalDimImage.__sum_least_squares(boxes, n_boxes)    # [4]

        # Detect if the number of measurements r has not changed
        if num_r == prev_num_r:
            plateau_count += 1
        prev_num_r = num_r
        trace.append(BoxParameter(eps, num_r))

        # Break out of the iteration if the computation is stuck
        # at the same number of measurements
        if plateau_count > FractalDimImage.max_plateau_count:
            break

    # Compute the fractal dimension from the trace [5]
    return FractalDimImage.__compute_fractal_dim(trace), trace
The implementation of formula (1) fits a first-degree polynomial to the (log(1/eps), log(N)) points collected in the trace and returns the first-order coefficient, i.e., the slope of the log-log plot.
@staticmethod
def __compute_fractal_dim(trace: List[BoxParameter]) -> float:
    from numpy.polynomial.polynomial import polyfit

    _x = np.array([box_param.log_inv_eps() for box_param in trace])
    _y = np.array([box_param.log_num_r() for box_param in trace])
    fitted = polyfit(x=_x, y=_y, deg=1, full=False)
    return float(fitted[1])
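A quick way to verify that fitted[1] is indeed the slope: numpy.polynomial.polynomial.polyfit returns coefficients in increasing degree order, so for synthetic points satisfying N = (1/eps)^2 exactly (dimension 2), the fit should recover a slope of 2. This standalone sketch uses made-up eps values for illustration:

```python
import numpy as np
from numpy.polynomial.polynomial import polyfit

# Synthetic log-log points for an exact dimension-2 relationship N = (1/eps)^2
eps_values = [0.5, 0.25, 0.125, 0.0625]
_x = np.array([-np.log(e) for e in eps_values])               # log(1/eps)
_y = np.array([np.log((1.0 / e) ** 2) for e in eps_values])   # log N

fitted = polyfit(x=_x, y=_y, deg=1, full=False)
print(round(float(fitted[1]), 3))   # prints 2.0: the first-order coefficient
```

Note that numpy.polynomial.polynomial.polyfit orders coefficients from constant term upward, the opposite of the legacy np.polyfit; hence index 1, not 0, holds the slope.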
Evaluation
We compute the fractal dimension for a full image, then for a region that contains the key features of the image.
image_path_name = '../images/fractal_test_image.jpg'
fractal_dim_image = FractalDimImage(image_path_name)
fractal_dim, trace = fractal_dim_image()
Original image
The original RGB image is 542 x 880 pixels and is converted into a grey scale image.
Output: fractal_dim = 2.54
Image region
We select the following region of 395 x 378 pixels:
Output: fractal_dim = 2.63
The region has a fractal dimension similar to that of the original image. This outcome should not be surprising: the pixels outside the selected region consist of background without features of any significance.
The convergence pattern for calculating the fractal dimension of the region is comparable to that of the original image, reaching convergence after 6 iterations.
References
------------------
Patrick Nicolas has over 25 years of experience in software and data engineering, architecture design, and end-to-end deployment and support, with extensive knowledge in machine learning. He has been director of data engineering at Aideo Technologies since 2017 and is the author of "Scala for Machine Learning" (Packt Publishing, ISBN 978-1-78712-238-3) and the Geometric Learning in Python newsletter on LinkedIn.
Appendix
Source code for initializing the square boxes:
def __create_boxes(self, i: int, iter: int, n_boxes: int) -> List[List[np.array]]:
    # One box per band of n_boxes grey levels. Note: the boxes must be
    # independent lists ([[]] * n would alias a single list n times)
    num_bands = (FractalDimImage.num_grey_levels + n_boxes - 1) // n_boxes
    boxes = [[] for _ in range(num_bands)]
    i_lim = i + iter

    # Assign each pixel of the square sub-image to its grey-level band
    for img_row in self.image[i: i_lim]:
        for pixel in img_row[i: i_lim]:
            height = int(pixel // n_boxes)
            boxes[height].append(pixel)
    return boxes
Computation of the sum of least squares for the boxes extracted from an image:
@staticmethod
def __sum_least_squares(boxes: List[List[float]], n_boxes: int) -> float:
    # Standard deviation of the pixel values in each box (empty boxes
    # yield NaN; the boxes are ragged, so compute per box)
    stddev_box = np.array(
        [np.std(box) if len(box) > 0 else np.nan for box in boxes]
    )
    # Filter out NaN values
    stddev = stddev_box[~np.isnan(stddev_box)]
    nBox_r = 2 * (stddev // n_boxes) + 1
    return sum(nBox_r)