Multi-Objective Evolutionary Algorithms
Yeshwanth Nagaraj
Democratizing Math and Core AI // Levelling the playing field for the future
Multi-Objective Evolutionary Algorithms (MOEAs) are a class of optimization techniques used to solve problems that involve multiple conflicting objectives. These algorithms are designed to find a set of solutions that represent a trade-off between different objectives, rather than a single optimal solution.
In many real-world problems, there are multiple objectives that need to be optimized simultaneously. However, these objectives often conflict with each other, meaning that improving one objective might lead to a deterioration in another. Traditional single-objective optimization methods cannot handle such scenarios effectively, as they aim to find a single best solution.
MOEAs use principles inspired by biological evolution to search a multidimensional space of possible solutions. These algorithms maintain a population of candidate solutions (also known as individuals), and over multiple generations they evolve this population to find a diverse set of solutions that represent the trade-offs between the conflicting objectives.
Key concepts and components of MOEAs include:

- Pareto dominance: one solution dominates another if it is at least as good in every objective and strictly better in at least one.
- Pareto front: the set of non-dominated solutions, representing the best available trade-offs.
- Fitness assignment: ranking individuals by dominance relations rather than by a single scalar score.
- Diversity preservation: mechanisms such as crowding distance that keep solutions spread out along the front.
- Elitism: carrying the best non-dominated solutions forward into the next generation.
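The dominance relation at the heart of these concepts can be sketched in a few lines of Python. This is a minimal illustration (assuming, for simplicity, that every objective is to be maximized), not code from any particular library:

```python
def dominates(a, b):
    """Return True if fitness vector a Pareto-dominates b.

    Assumes all objectives are maximized: a must be at least as good as b
    in every objective and strictly better in at least one.
    """
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

# Two genuine trade-off solutions do not dominate each other:
print(dominates((2, 1), (1, 2)))  # False
print(dominates((1, 2), (2, 1)))  # False
# But a solution that is better in one objective and no worse in the other does:
print(dominates((2, 2), (1, 2)))  # True
```

Non-dominated sorting, used by NSGA-II below, repeatedly applies this relation to partition the population into ranked fronts.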
Common MOEAs include NSGA-II (Non-dominated Sorting Genetic Algorithm II), SPEA2 (Strength Pareto Evolutionary Algorithm 2), MOEA/D (Multi-Objective Evolutionary Algorithm Based on Decomposition), and many others.
MOEAs have found applications in various fields, including engineering design, finance, scheduling, and more, where decision-makers need to make trade-offs between multiple objectives.
Example:
In this example, we'll consider a 2D optimization problem with two conflicting objectives: maximizing the x-coordinate and minimizing the y-coordinate. We'll use the NSGA-II algorithm to find a set of Pareto optimal solutions.
import random
from deap import base, creator, tools, algorithms
# Define the optimization problem
creator.create("FitnessMulti", base.Fitness, weights=(1.0, -1.0))
creator.create("Individual", list, fitness=creator.FitnessMulti)
toolbox = base.Toolbox()
# Define the attributes (genes)
toolbox.register("attr_float", random.uniform, -10, 10)
toolbox.register("individual", tools.initRepeat, creator.Individual, toolbox.attr_float, n=2)
toolbox.register("population", tools.initRepeat, list, toolbox.individual)
# Define the evaluation functions (objectives)
def evaluate(individual):
    # Objective 1 (weight +1.0): maximize x; objective 2 (weight -1.0): minimize y
    return individual[0], individual[1]
toolbox.register("evaluate", evaluate)
# Define the genetic operators
toolbox.register("mate", tools.cxBlend, alpha=0.5)
toolbox.register("mutate", tools.mutGaussian, mu=0, sigma=1, indpb=0.2)
toolbox.register("select", tools.selNSGA2)
def main():
    random.seed(42)
    population = toolbox.population(n=100)
    population, _ = algorithms.eaMuPlusLambda(population, toolbox, mu=50, lambda_=100, cxpb=0.7, mutpb=0.2, ngen=50, stats=None, halloffame=None, verbose=True)
    # Keep only the first (non-dominated) front of the final population
    pareto_front = tools.sortNondominated(population, len(population), first_front_only=True)[0]
    print("Pareto Front:")
    for solution in pareto_front:
        print(solution.fitness.values)

if __name__ == "__main__":
    main()
This code sets up a simple 2D optimization problem with two conflicting objectives and uses the NSGA-II algorithm to find a set of Pareto optimal solutions. The evaluate function computes the objectives for each individual, and the algorithm evolves the population over a number of generations. The resulting Pareto front is printed at the end of the optimization process.