niapy.algorithms
¶
Module with implementations of basic and hybrid algorithms.
- class niapy.algorithms.Algorithm(population_size=50, initialization_function=<function default_numpy_init>, individual_type=None, seed=None, *args, **kwargs)[source]¶
Bases:
object
Class for implementing algorithms.
- Date:
2018
- Author:
Klemen Berkovič
- License:
MIT
- Variables
Name (List[str]) – List of names for algorithm.
rng (numpy.random.Generator) – Random generator.
population_size (int) – Population size.
initialization_function (Callable[[int, Task, numpy.random.Generator, Dict[str, Any]], Tuple[numpy.ndarray, numpy.ndarray[float]]]) – Population initialization function.
individual_type (Optional[Type[Individual]]) – Type of individuals used in population, default value is None for Numpy arrays.
Initialize algorithm and create name for an algorithm.
- Parameters
population_size (Optional[int]) – Population size.
initialization_function (Optional[Callable[[int, Task, numpy.random.Generator, Dict[str, Any]], Tuple[numpy.ndarray, numpy.ndarray[float]]]]) – Population initialization function.
individual_type (Optional[Type[Individual]]) – Individual type used in population, default is Numpy array.
seed (Optional[int]) – Starting seed for random generator.
- Name = ['Algorithm', 'AAA']¶
- __init__(population_size=50, initialization_function=<function default_numpy_init>, individual_type=None, seed=None, *args, **kwargs)[source]¶
Initialize algorithm and create name for an algorithm.
- Parameters
population_size (Optional[int]) – Population size.
initialization_function (Optional[Callable[[int, Task, numpy.random.Generator, Dict[str, Any]], Tuple[numpy.ndarray, numpy.ndarray[float]]]]) – Population initialization function.
individual_type (Optional[Type[Individual]]) – Individual type used in population, default is Numpy array.
seed (Optional[int]) – Starting seed for random generator.
- bad_run()[source]¶
Check if any exceptions were thrown while the algorithm was running.
- Returns
True if any errors were detected during the run of the algorithm, otherwise False.
- Return type
- static get_best(population, population_fitness, best_x=None, best_fitness=inf)[source]¶
Get the best individual for population.
- Parameters
population (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray) – Current population's fitness/function values, aligned with the individuals.
best_x (Optional[numpy.ndarray]) – Best individual.
best_fitness (float) – Fitness value of best individual.
- Returns
Coordinates of best solution.
Best fitness/function value.
- Return type
Tuple[numpy.ndarray, float]
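A minimal sketch of how get_best can be used to keep track of the running best solution; the population and fitness values below are made-up illustration data:

import numpy as np

from niapy.algorithms import Algorithm

# Made-up population of five 2-dimensional solutions and their fitness values.
population = np.array([[0.5, 1.0], [2.0, -1.0], [0.1, 0.2], [3.0, 3.0], [-0.5, 0.4]])
fitness = np.array([1.25, 5.0, 0.05, 18.0, 0.41])

# Without a previous best, the individual with the lowest fitness is returned.
best_x, best_fitness = Algorithm.get_best(population, fitness)

# With a previous best supplied, it is only replaced if the population improves on it.
best_x, best_fitness = Algorithm.get_best(population, fitness, best_x, best_fitness)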
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Parameter name (str): Represents a parameter name
Value of parameter (Any): Represents the value of the parameter
- Return type
Dict[str, Any]
- integers(low, high=None, size=None, skip=None)[source]¶
Get a discrete uniform (integer) random sample of the given shape in the range from low to high.
- Parameters
low (Union[int, Iterable[int]]) – Lower integer bound. If high is None, the lower bound is 0 and this value is used as the upper bound.
high (Union[int, Iterable[int]]) – One above the upper integer bound.
size (Union[None, int, Iterable[int]]) – Shape of the returned sample.
skip (Union[None, int, Iterable[int], numpy.ndarray[int]]) – Numbers to skip.
- Returns
Randomly generated integer number(s).
- Return type
- iteration_generator(task)[source]¶
Run the algorithm for a single iteration and return the best solution.
- Parameters
task (Task) – Task with bounds and objective function for optimization.
- Returns
Generator yielding new/old global optimum values.
- Return type
Generator[Tuple[numpy.ndarray, float], None, None]
- Yields
Tuple[numpy.ndarray, float] – 1. Coordinates of the best individual in the new population. 2. Fitness value of the best solution.
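A minimal sketch of stepping an algorithm one iteration at a time through iteration_generator; the Task constructor arguments, the 'sphere' problem name, and the stopping_condition() helper are assumptions based on typical NiaPy usage:

from niapy.algorithms.basic import DifferentialEvolution
from niapy.task import Task

task = Task(problem='sphere', dimension=10, max_iters=100)  # assumed Task construction
algorithm = DifferentialEvolution(population_size=40, seed=1)

# Pull one (best_x, best_fitness) pair per iteration until the task says to stop.
for best_x, best_fitness in algorithm.iteration_generator(task):
    print(best_fitness)
    if task.stopping_condition():  # assumed Task helper
        break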
- normal(loc, scale, size=None)[source]¶
Get normal random distribution of shape size with mean “loc” and standard deviation “scale”.
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core functionality of algorithm.
This function is called on every algorithm iteration.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population coordinates.
population_fitness (numpy.ndarray) – Current population fitness value.
best_x (numpy.ndarray) – Current generation's best individual's coordinates.
best_fitness (float) – Current generation's best individual's fitness value.
**params (Dict[str, Any]) – Additional arguments for algorithms.
- Returns
New population's coordinates.
New population's fitness values.
New global best position/solution.
New global best fitness/objective value.
Additional arguments of the algorithm.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
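A minimal sketch of a custom algorithm that follows the run_iteration contract described above; RandomWalkSketch is illustrative only and not part of the library, and the task.repair/task.eval calls are assumptions based on typical NiaPy usage:

import numpy as np

from niapy.algorithms import Algorithm


class RandomWalkSketch(Algorithm):
    """Illustrative algorithm: perturb each individual with a small normal step."""

    Name = ['RandomWalkSketch', 'RWS']

    def run_iteration(self, task, population, population_fitness, best_x, best_fitness, **params):
        # Perturb every individual and keep it inside the search bounds (assumed task.repair).
        population = np.asarray([task.repair(x + self.normal(0, 0.1, task.dimension), rng=self.rng)
                                 for x in population])
        population_fitness = np.asarray([task.eval(x) for x in population])  # assumed task.eval
        # Update the running best with the inherited helper.
        best_x, best_fitness = self.get_best(population, population_fitness, best_x, best_fitness)
        return population, population_fitness, best_x, best_fitness, params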
- set_parameters(population_size=50, initialization_function=<function default_numpy_init>, individual_type=None, *args, **kwargs)[source]¶
Set the parameters/arguments of the algorithm.
- Parameters
population_size (Optional[int]) – Population size.
initialization_function (Optional[Callable[[int, Task, numpy.random.Generator, Dict[str, Any]], Tuple[numpy.ndarray, numpy.ndarray[float]]]]) – Population initialization function.
individual_type (Optional[Type[Individual]]) – Individual type used in population, default is Numpy array.
- class niapy.algorithms.Individual(x=None, task=None, e=True, rng=None, **kwargs)[source]¶
Bases:
object
Class that represents one solution in population of solutions.
- Date:
2018
- Author:
Klemen Berkovič
- License:
MIT
- Variables
x (numpy.ndarray) – Coordinates of individual.
f (float) – Function/fitness value of individual.
Initialize new individual.
- Parameters
- __eq__(other)[source]¶
Compare the individuals for equality.
- Parameters
other (Union[Any, numpy.ndarray]) – Object that we want to compare this object to.
- Returns
True if equal, False otherwise.
- Return type
- __getitem__(i)[source]¶
Get the value of i-th component of the solution.
- Parameters
i (int) – Position of the solution component.
- Returns
Value of the i-th component.
- Return type
Any
- __len__()[source]¶
Get the length of the solution or the number of components.
- Returns
Number of components.
- Return type
- __setitem__(i, v)[source]¶
Set the value of i-th component of the solution to v value.
- Parameters
i (int) – Position of the solution component.
v (Any) – Value to set to i-th component.
- __str__()[source]¶
Print the individual with the solution and objective value.
- Returns
String representation of self.
- Return type
- copy()[source]¶
Return a copy of self.
Method returns a copy of this object, so it is safe for editing.
- Returns
Copy of self.
- Return type
- evaluate(task, rng=None)[source]¶
Evaluate the solution.
Evaluate the solution self.x with the help of task. The task is used for repairing the solution and then evaluating it.
- Parameters
task (Task) – Objective function object.
rng (Optional[numpy.random.Generator]) – Random generator.
See also
- generate_solution(task, rng)[source]¶
Generate new solution.
Generate a new solution for this individual and set it to self.x. This method uses rng for getting random numbers. For generating random components, rng and task are used.
- Parameters
task (Task) – Optimization task.
rng (numpy.random.Generator) – Random numbers generator object.
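A minimal sketch of creating and evaluating an Individual by hand; the Task construction and the 'sphere' problem name are assumptions based on typical NiaPy usage:

import numpy as np

from niapy.algorithms import Individual
from niapy.task import Task

task = Task(problem='sphere', dimension=5)  # assumed Task construction
rng = np.random.default_rng(42)

individual = Individual(task=task, rng=rng, e=True)  # e=True evaluates the generated solution
print(len(individual), individual.f)                 # number of components and fitness value

clone = individual.copy()       # safe to edit independently of the original
clone[0] = 0.0                  # __setitem__ on a single component
clone.evaluate(task, rng=rng)   # re-evaluate after the change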
- niapy.algorithms.default_individual_init(task, population_size, rng, individual_type=None, **_kwargs)[source]¶
Initialize population_size individuals of type individual_type.
- Parameters
task (Task) – Optimization task.
population_size (int) – Number of individuals in population.
rng (numpy.random.Generator) – Random number generator.
individual_type (Optional[Individual]) – Class of individual in population.
- Returns
Initialized individuals.
Initialized individuals function/fitness values.
- Return type
Tuple[numpy.ndarray[Individual], numpy.ndarray[float]]
- niapy.algorithms.default_numpy_init(task, population_size, rng, **_kwargs)[source]¶
Initialize starting population that is represented with numpy.ndarray with shape (population_size, task.dimension).
- Parameters
- Returns
New population with shape (population_size, task.dimension).
New population function/fitness values.
- Return type
Tuple[numpy.ndarray, numpy.ndarray[float]]
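A minimal sketch of calling default_numpy_init directly and of passing it to an algorithm as initialization_function; the Task construction is an assumption based on typical NiaPy usage:

import numpy as np

from niapy.algorithms import default_numpy_init
from niapy.algorithms.basic import BatAlgorithm
from niapy.task import Task

task = Task(problem='sphere', dimension=10)  # assumed Task construction
rng = np.random.default_rng(7)

# Direct call: uniform random population within the task bounds plus its fitness values.
population, fitness = default_numpy_init(task, population_size=30, rng=rng)

# More common: hand the initializer to an algorithm.
algorithm = BatAlgorithm(population_size=30, initialization_function=default_numpy_init)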
niapy.algorithms.basic
¶
Implementation of basic nature-inspired algorithms.
- class niapy.algorithms.basic.AgingNpDifferentialEvolution(min_lifetime=0, max_lifetime=12, delta_np=0.3, omega=0.3, age=<function proportional>, *args, **kwargs)[source]¶
Bases:
DifferentialEvolution
Implementation of Differential evolution algorithm with aging individuals.
- Algorithm:
Differential evolution algorithm with a dynamic population size defined by the quality of the population.
- Date:
2018
- Author:
Klemen Berkovič
- License:
MIT
- Variables
Name (List[str]) – list of strings representing algorithm names.
Lt_min (int) – Minimal age of individual.
Lt_max (int) – Maximal age of individual.
delta_np (float) – Proportion of how many individuals shall die.
omega (float) – Acceptance rate for individuals to die.
mu (int) – Mean of individual max and min age.
age (Callable[[int, int, float, float, float, float, float], int]) – Function for calculation of age for individual.
Initialize AgingNpDifferentialEvolution.
- Parameters
min_lifetime (Optional[int]) – Minimum life time.
max_lifetime (Optional[int]) – Maximum life time.
delta_np (Optional[float]) – Proportion of how many individuals shall die.
omega (Optional[float]) – Acceptance rate for individuals to die.
age (Optional[Callable[[int, int, float, float, float, float, float], int]]) – Function for calculation of age for individual.
- Name = ['AgingNpDifferentialEvolution', 'ANpDE']¶
- __init__(min_lifetime=0, max_lifetime=12, delta_np=0.3, omega=0.3, age=<function proportional>, *args, **kwargs)[source]¶
Initialize AgingNpDifferentialEvolution.
- Parameters
min_lifetime (Optional[int]) – Minimum life time.
max_lifetime (Optional[int]) – Maximum life time.
delta_np (Optional[float]) – Proportion of how many individuals shall die.
omega (Optional[float]) – Acceptance rate for individuals to die.
age (Optional[Callable[[int, int, float, float, float, float, float], int]]) – Function for calculation of age for individual.
- aging(task, pop)[source]¶
Apply aging to individuals.
- Parameters
task (Task) – Optimization task.
pop (numpy.ndarray[Individual]) – Current population.
- Returns
New population.
- Return type
numpy.ndarray[Individual]
- decrement_population(pop, task)[source]¶
Decrement population.
- Parameters
pop (numpy.ndarray) – Current population.
task (Task) – Optimization task.
- Returns
Decreased population.
- Return type
numpy.ndarray[Individual]
- get_parameters()[source]¶
Get parameters values of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- increment_population(task)[source]¶
Increment population.
- Parameters
task (Task) – Optimization task.
- Returns
Increased population.
- Return type
numpy.ndarray[Individual]
- static info()[source]¶
Get basic information of algorithm.
- Returns
Basic information of algorithm.
- Return type
See also
- post_selection(pop, task, xb, fxb, **kwargs)[source]¶
Post selection operator.
- Parameters
pop (numpy.ndarray) – Current population.
task (Task) – Optimization task.
xb (Individual) – Global best individual.
fxb (float) – Global best fitness.
- Returns
New population.
New global best solution
New global best solutions fitness/objective value
- Return type
Tuple[numpy.ndarray, numpy.ndarray, float]
- selection(population, new_population, best_x, best_fitness, task, **kwargs)[source]¶
Select operator for individuals with aging.
- Parameters
- Returns
New population of individuals.
New global best solution.
New global best solutions fitness/objective value.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, float]
- set_parameters(min_lifetime=0, max_lifetime=12, delta_np=0.3, omega=0.3, age=<function proportional>, **kwargs)[source]¶
Set the algorithm parameters.
- Parameters
min_lifetime (Optional[int]) – Minimum life time.
max_lifetime (Optional[int]) – Maximum life time.
delta_np (Optional[float]) – Proportion of how many individuals shall die.
omega (Optional[float]) – Acceptance rate for individuals to die.
age (Optional[Callable[[int, int, float, float, float, float, float], int]]) – Function for calculation of age for individual.
- class niapy.algorithms.basic.ArtificialBeeColonyAlgorithm(population_size=10, limit=100, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Artificial Bee Colony algorithm.
- Algorithm:
Artificial Bee Colony algorithm
- Date:
2018
- Author:
Uros Mlakar and Klemen Berkovič
- License:
MIT
- Reference paper:
Karaboga, D., and Bahriye B. “A powerful and efficient algorithm for numerical function optimization: artificial bee colony (ABC) algorithm.” Journal of global optimization 39.3 (2007): 459-471.
- Variables
Name (List[str]) – List containing strings that represent algorithm names.
limit (Union[float, numpy.ndarray[float]]) – Maximum number of cycles without improvement.
See also
Initialize ArtificialBeeColonyAlgorithm.
- Parameters
- Name = ['ArtificialBeeColonyAlgorithm', 'ABC']¶
- __init__(population_size=10, limit=100, *args, **kwargs)[source]¶
Initialize ArtificialBeeColonyAlgorithm.
- calculate_probabilities(foods)[source]¶
Calculate the probabilities.
- Parameters
foods (numpy.ndarray) – Current population.
- Returns
Probabilities.
- Return type
numpy.ndarray
- static info()[source]¶
Get algorithms information.
- Returns
Algorithm information.
- Return type
See also
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of the algorithm.
- Parameters
task (Task) – Optimization task
population (numpy.ndarray) – Current population
population_fitness (numpy.ndarray[float]) – Function/fitness values of current population
best_x (numpy.ndarray) – Current best individual
best_fitness (float) – Current best individual fitness/function value
params (Dict[str, Any]) – Additional parameters
- Returns
New population
New population fitness/function values
New global best solution
New global best fitness/objective value
- Additional arguments:
trials (numpy.ndarray): Number of cycles without improvement.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
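A minimal end-to-end sketch of running the algorithm; the Task construction, the 'sphere' problem name, and the inherited run(task) helper are assumptions based on typical NiaPy usage:

from niapy.algorithms.basic import ArtificialBeeColonyAlgorithm
from niapy.task import Task

task = Task(problem='sphere', dimension=10, max_evals=10000)  # assumed Task construction
algorithm = ArtificialBeeColonyAlgorithm(population_size=40, limit=100, seed=1234)

best_x, best_fitness = algorithm.run(task)  # assumed inherited helper that runs until the stopping condition
print(best_fitness)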
- class niapy.algorithms.basic.BacterialForagingOptimization(population_size=50, n_chemotactic=100, n_swim=4, n_reproduction=4, n_elimination=2, prob_elimination=0.25, step_size=0.1, swarming=True, d_attract=0.1, w_attract=0.2, h_repel=0.1, w_repel=10.0, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of the Bacterial foraging optimization algorithm.
- Algorithm:
Bacterial Foraging Optimization
- Date:
2021
- Author:
Žiga Stupan
- License:
MIT
- Reference paper:
Passino, “Biomimicry of bacterial foraging for distributed optimization and control,” in IEEE Control Systems Magazine, vol. 22, no. 3, pp. 52-67, June 2002, doi: 10.1109/MCS.2002.1004010.
- Variables
Name (List[str]) – list of strings representing algorithm names.
population_size (Optional[int]) – Number of individuals in population \(\in [1, \infty]\).
n_chemotactic (Optional[int]) – Number of chemotactic steps.
n_swim (Optional[int]) – Number of swim steps.
n_reproduction (Optional[int]) – Number of reproduction steps.
n_elimination (Optional[int]) – Number of elimination and dispersal steps.
prob_elimination (Optional[float]) – Probability of a bacterium being eliminated and a new one being created at a random location in the search space.
step_size (Optional[float]) – Size of a chemotactic step.
d_attract (Optional[float]) – Depth of the attractant released by the cell (a quantification of how much attractant is released).
w_attract (Optional[float]) – Width of the attractant signal (a quantification of the diffusion rate of the chemical).
h_repel (Optional[float]) – Height of the repellent effect (magnitude of its effect).
w_repel (Optional[float]) – Width of the repellent.
See also
Initialize algorithm.
- Parameters
population_size (Optional[int]) – Number of individuals in population \(\in [1, \infty]\).
n_chemotactic (Optional[int]) – Number of chemotactic steps.
n_swim (Optional[int]) – Number of swim steps.
n_reproduction (Optional[int]) – Number of reproduction steps.
n_elimination (Optional[int]) – Number of elimination and dispersal steps.
prob_elimination (Optional[float]) – Probability of a bacterium being eliminated and a new one being created at a random location in the search space.
step_size (Optional[float]) – Size of a chemotactic step.
swarming (Optional[bool]) – If True use swarming.
d_attract (Optional[float]) – Depth of the attractant released by the cell (a quantification of how much attractant is released).
w_attract (Optional[float]) – Width of the attractant signal (a quantification of the diffusion rate of the chemical).
h_repel (Optional[float]) – Height of the repellent effect (magnitude of its effect).
w_repel (Optional[float]) – Width of the repellent.
- Name = ['BacterialForagingOptimization', 'BFO', 'BFOA']¶
- __init__(population_size=50, n_chemotactic=100, n_swim=4, n_reproduction=4, n_elimination=2, prob_elimination=0.25, step_size=0.1, swarming=True, d_attract=0.1, w_attract=0.2, h_repel=0.1, w_repel=10.0, *args, **kwargs)[source]¶
Initialize algorithm.
- Parameters
population_size (Optional[int]) – Number of individuals in population \(\in [1, \infty]\).
n_chemotactic (Optional[int]) – Number of chemotactic steps.
n_swim (Optional[int]) – Number of swim steps.
n_reproduction (Optional[int]) – Number of reproduction steps.
n_elimination (Optional[int]) – Number of elimination and dispersal steps.
prob_elimination (Optional[float]) – Probability of a bacterium being eliminated and a new one being created at a random location in the search space.
step_size (Optional[float]) – Size of a chemotactic step.
swarming (Optional[bool]) – If True use swarming.
d_attract (Optional[float]) – Depth of the attractant released by the cell (a quantification of how much attractant is released).
w_attract (Optional[float]) – Width of the attractant signal (a quantification of the diffusion rate of the chemical).
h_repel (Optional[float]) – Height of the repellent effect (magnitude of its effect).
w_repel (Optional[float]) – Width of the repellent.
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get algorithm information.
- Returns
Algorithm information.
- Return type
See also
- init_population(task)[source]¶
Initialize the starting population.
- Parameters
task (Task) – Optimization task
- Returns
New population.
New population fitness/function values.
- Additional arguments:
cost (numpy.ndarray): Costs of cells, i.e. fitness + cell interaction.
health (numpy.ndarray): Cell health, i.e. the accumulation of costs over all chemotactic steps.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, Dict[str, Any]]
- interaction(cell, population)[source]¶
Compute cell to cell interaction J_cc.
- Parameters
cell (numpy.ndarray) – Cell to compute interaction for.
population (numpy.ndarray) – Population
- Returns
Cell to cell interaction J_cc
- Return type
- random_direction(dimension)[source]¶
Generate a random direction vector.
- Parameters
dimension (int) – Problem dimension
- Returns
Normalised random direction vector
- Return type
numpy.ndarray
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of Bacterial Foraging Optimization algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray) – Current population’s fitness/function values.
best_x (numpy.ndarray) – Global best individual.
best_fitness (float) – Global best individuals function/fitness value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
New population.
New populations function/fitness values.
New global best solution,
New global best solution’s fitness/objective value.
- Additional arguments:
cost (numpy.ndarray): Costs of cells, i.e. fitness + cell interaction.
health (numpy.ndarray): Cell health, i.e. the accumulation of costs over all chemotactic steps.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- set_parameters(population_size=50, n_chemotactic=100, n_swim=4, n_reproduction=4, n_elimination=2, prob_elimination=0.25, step_size=0.1, swarming=True, d_attract=0.1, w_attract=0.2, h_repel=0.1, w_repel=10.0, **kwargs)[source]¶
Set the parameters/arguments of the algorithm.
- Parameters
population_size (Optional[int]) – Number of individuals in population \(\in [1, \infty]\).
n_chemotactic (Optional[int]) – Number of chemotactic steps.
n_swim (Optional[int]) – Number of swim steps.
n_reproduction (Optional[int]) – Number of reproduction steps.
n_elimination (Optional[int]) – Number of elimination and dispersal steps.
prob_elimination (Optional[float]) – Probability of a bacterium being eliminated and a new one being created at a random location in the search space.
step_size (Optional[float]) – Size of a chemotactic step.
swarming (Optional[bool]) – If True use swarming.
d_attract (Optional[float]) – Depth of the attractant released by the cell (a quantification of how much attractant is released).
w_attract (Optional[float]) – Width of the attractant signal (a quantification of the diffusion rate of the chemical).
h_repel (Optional[float]) – Height of the repellent effect (magnitude of its effect).
w_repel (Optional[float]) – Width of the repellent.
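A minimal sketch of running the algorithm on a user-defined problem; the niapy.problems.Problem base class and its _evaluate hook are assumptions based on typical NiaPy 2.x usage:

import numpy as np

from niapy.algorithms.basic import BacterialForagingOptimization
from niapy.problems import Problem
from niapy.task import Task


class Quadratic(Problem):
    """Illustrative objective: sum of squares over [-5, 5]^dimension."""

    def __init__(self, dimension):
        super().__init__(dimension=dimension, lower=-5.0, upper=5.0)

    def _evaluate(self, x):
        return float(np.sum(x ** 2))


task = Task(problem=Quadratic(dimension=8), max_iters=50)  # assumed Task construction
algorithm = BacterialForagingOptimization(population_size=50, n_chemotactic=50, step_size=0.1)
best_x, best_fitness = algorithm.run(task)  # assumed inherited helper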
- class niapy.algorithms.basic.BareBonesFireworksAlgorithm(num_sparks=10, amplification_coefficient=1.5, reduction_coefficient=0.5, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Bare Bones Fireworks Algorithm.
- Algorithm:
Bare Bones Fireworks Algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
https://www.sciencedirect.com/science/article/pii/S1568494617306609
- Reference paper:
Junzhi Li, Ying Tan, The bare bones fireworks algorithm: A minimalist global optimizer, Applied Soft Computing, Volume 62, 2018, Pages 454-462, ISSN 1568-4946, https://doi.org/10.1016/j.asoc.2017.10.046.
- Variables
Initialize BareBonesFireworksAlgorithm.
- Parameters
- Name = ['BareBonesFireworksAlgorithm', 'BBFWA']¶
- __init__(num_sparks=10, amplification_coefficient=1.5, reduction_coefficient=0.5, *args, **kwargs)[source]¶
Initialize BareBonesFireworksAlgorithm.
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get default information of algorithm.
- Returns
Basic information.
- Return type
See also
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of Bare Bones Fireworks Algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current solution.
population_fitness (float) – Current solution fitness/function value.
best_x (numpy.ndarray) – Current best solution.
best_fitness (float) – Current best solution fitness/function value.
params (Dict[str, Any]) – Additional parameters.
- Returns
New solution.
New solution fitness/function value.
New global best solution.
New global best solutions fitness/objective value.
- Additional arguments:
amplitude (numpy.ndarray): Search range.
- Return type
Tuple[numpy.ndarray, float, numpy.ndarray, float, Dict[str, Any]]
- class niapy.algorithms.basic.BatAlgorithm(population_size=40, loudness=1.0, pulse_rate=1.0, alpha=0.97, gamma=0.1, min_frequency=0.0, max_frequency=2.0, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Bat algorithm.
- Algorithm:
Bat algorithm
- Date:
2015
- Authors:
Iztok Fister Jr., Marko Burjek and Klemen Berkovič
- License:
MIT
- Reference paper:
Yang, Xin-She. “A new metaheuristic bat-inspired algorithm.” Nature inspired cooperative strategies for optimization (NICSO 2010). Springer, Berlin, Heidelberg, 2010. 65-74.
- Variables
Name (List[str]) – List of strings representing algorithm name.
loudness (float) – Initial loudness.
pulse_rate (float) – Initial pulse rate.
alpha (float) – Parameter for controlling loudness decrease.
gamma (float) – Parameter for controlling pulse rate increase.
min_frequency (float) – Minimum frequency.
max_frequency (float) – Maximum frequency.
See also
Initialize BatAlgorithm.
- Parameters
population_size (Optional[int]) – Population size.
loudness (Optional[float]) – Initial loudness.
pulse_rate (Optional[float]) – Initial pulse rate.
alpha (Optional[float]) – Parameter for controlling loudness decrease.
gamma (Optional[float]) – Parameter for controlling pulse rate increase.
min_frequency (Optional[float]) – Minimum frequency.
max_frequency (Optional[float]) – Maximum frequency.
- Name = ['BatAlgorithm', 'BA']¶
- __init__(population_size=40, loudness=1.0, pulse_rate=1.0, alpha=0.97, gamma=0.1, min_frequency=0.0, max_frequency=2.0, *args, **kwargs)[source]¶
Initialize BatAlgorithm.
- Parameters
population_size (Optional[int]) – Population size.
loudness (Optional[float]) – Initial loudness.
pulse_rate (Optional[float]) – Initial pulse rate.
alpha (Optional[float]) – Parameter for controlling loudness decrease.
gamma (Optional[float]) – Parameter for controlling pulse rate increase.
min_frequency (Optional[float]) – Minimum frequency.
max_frequency (Optional[float]) – Maximum frequency.
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get algorithms information.
- Returns
Algorithm information.
- Return type
See also
- local_search(best, loudness, task, **kwargs)[source]¶
Improve the best solution according to Yang (2010).
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of Bat Algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population
population_fitness (numpy.ndarray[float]) – Current population fitness/function values
best_x (numpy.ndarray) – Current best individual
best_fitness (float) – Current best individual function/fitness value
params (Dict[str, Any]) – Additional algorithm arguments
- Returns
New population
New population fitness/function values
New global best solution
New global best fitness/objective value
- Additional arguments:
velocities (numpy.ndarray): Velocities.
alpha (float): Previous iterations loudness.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- set_parameters(population_size=20, loudness=1.0, pulse_rate=1.0, alpha=0.97, gamma=0.1, min_frequency=0.0, max_frequency=2.0, **kwargs)[source]¶
Set the parameters of the algorithm.
- Parameters
population_size (Optional[int]) – Population size.
loudness (Optional[float]) – Initial loudness.
pulse_rate (Optional[float]) – Initial pulse rate.
alpha (Optional[float]) – Parameter for controlling loudness decrease.
gamma (Optional[float]) – Parameter for controlling pulse rate increase.
min_frequency (Optional[float]) – Minimum frequency.
max_frequency (Optional[float]) – Maximum frequency.
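A minimal sketch of inspecting and reconfiguring an instance with get_parameters and set_parameters:

from niapy.algorithms.basic import BatAlgorithm

algorithm = BatAlgorithm(population_size=40, loudness=0.5, pulse_rate=0.5)
print(algorithm.get_parameters())  # dictionary of the current parameter values

# Reconfigure the same instance before the next run.
algorithm.set_parameters(population_size=60, loudness=0.9, pulse_rate=0.1,
                         alpha=0.95, gamma=0.05, min_frequency=0.0, max_frequency=2.0)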
- class niapy.algorithms.basic.BeesAlgorithm(population_size=40, m=5, e=4, ngh=1, nep=4, nsp=2, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Bees algorithm.
- Algorithm:
The Bees algorithm
- Date:
2019
- Authors:
Rok Potočnik
- License:
MIT
- Reference paper:
DT Pham, A Ghanbarzadeh, E Koc, S Otri, S Rahim, and M Zaidi. The bees algorithm-a novel tool for complex optimisation problems. In Proceedings of the 2nd Virtual International Conference on Intelligent Production Machines and Systems (IPROMS 2006), pages 454–459, 2006
- Variables
population_size (Optional[int]) – Number of scout bees parameter.
m (Optional[int]) – Number of sites selected out of n visited sites parameter.
e (Optional[int]) – Number of best sites out of m selected sites parameter.
nep (Optional[int]) – Number of bees recruited for best e sites parameter.
nsp (Optional[int]) – Number of bees recruited for the other selected sites parameter.
ngh (Optional[float]) – Initial size of patches parameter.
Initialize BeesAlgorithm.
- Parameters
population_size (Optional[int]) – Number of scout bees parameter.
m (Optional[int]) – Number of sites selected out of n visited sites parameter.
e (Optional[int]) – Number of best sites out of m selected sites parameter.
nep (Optional[int]) – Number of bees recruited for best e sites parameter.
nsp (Optional[int]) – Number of bees recruited for the other selected sites parameter.
ngh (Optional[float]) – Initial size of patches parameter.
- Name = ['BeesAlgorithm', 'BEA']¶
- __init__(population_size=40, m=5, e=4, ngh=1, nep=4, nsp=2, *args, **kwargs)[source]¶
Initialize BeesAlgorithm.
- Parameters
population_size (Optional[int]) – Number of scout bees parameter.
m (Optional[int]) – Number of sites selected out of n visited sites parameter.
e (Optional[int]) – Number of best sites out of m selected sites parameter.
nep (Optional[int]) – Number of bees recruited for best e sites parameter.
nsp (Optional[int]) – Number of bees recruited for the other selected sites parameter.
ngh (Optional[float]) – Initial size of patches parameter.
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm Parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get information about algorithm.
- Returns
Algorithm information
- Return type
See also
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of Bees Algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray[float]) – Current population.
population_fitness (numpy.ndarray[float]) – Current population function/fitness values.
best_x (numpy.ndarray) – Global best individual.
best_fitness (float) – Global best individual fitness/function value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
New population.
New population fitness/function values.
New global best solution.
New global best fitness/objective value.
- Additional arguments:
ngh (float): A small value used for patches.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- set_parameters(population_size=40, m=5, e=4, ngh=1, nep=4, nsp=2, **kwargs)[source]¶
Set the parameters of the algorithm.
- Parameters
population_size (Optional[int]) – Number of scout bees parameter.
m (Optional[int]) – Number of sites selected out of n visited sites parameter.
e (Optional[int]) – Number of best sites out of m selected sites parameter.
nep (Optional[int]) – Number of bees recruited for best e sites parameter.
nsp (Optional[int]) – Number of bees recruited for the other selected sites parameter.
ngh (Optional[float]) – Initial size of patches parameter.
- class niapy.algorithms.basic.CamelAlgorithm(population_size=50, burden_factor=0.25, death_rate=0.5, visibility=0.5, supply_init=10, endurance_init=10, min_temperature=-10, max_temperature=10, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Camel traveling behavior.
- Algorithm:
Camel algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
- Reference paper:
Ali, Ramzy. (2016). Novel Optimization Algorithm Inspired by Camel Traveling Behavior. Iraq J. Electrical and Electronic Engineering. 12. 167-177.
- Variables
Name (List[str]) – List of strings representing name of the algorithm.
population_size (Optional[int]) – Population size \(\in [1, \infty)\).
burden_factor (Optional[float]) – Burden factor \(\in [0, 1]\).
death_rate (Optional[float]) – Dying rate \(\in [0, 1]\).
visibility (Optional[float]) – View range of camel.
supply_init (Optional[float]) – Initial supply \(\in (0, \infty)\).
endurance_init (Optional[float]) – Initial endurance \(\in (0, \infty)\).
min_temperature (Optional[float]) – Minimum temperature, must be true \(T_{min} < T_{max}\).
max_temperature (Optional[float]) – Maximum temperature, must be true \(T_{min} < T_{max}\).
See also
Initialize CamelAlgorithm.
- Parameters
population_size (Optional[int]) – Population size \(\in [1, \infty)\).
burden_factor (Optional[float]) – Burden factor \(\in [0, 1]\).
death_rate (Optional[float]) – Dying rate \(\in [0, 1]\).
visibility (Optional[float]) – View range of camel.
supply_init (Optional[float]) – Initial supply \(\in (0, \infty)\).
endurance_init (Optional[float]) – Initial endurance \(\in (0, \infty)\).
min_temperature (Optional[float]) – Minimum temperature, must be true \(T_{min} < T_{max}\).
max_temperature (Optional[float]) – Maximum temperature, must be true \(T_{min} < T_{max}\).
- Name = ['CamelAlgorithm', 'CA']¶
- __init__(population_size=50, burden_factor=0.25, death_rate=0.5, visibility=0.5, supply_init=10, endurance_init=10, min_temperature=-10, max_temperature=10, *args, **kwargs)[source]¶
Initialize CamelAlgorithm.
- Parameters
population_size (Optional[int]) – Population size \(\in [1, \infty)\).
burden_factor (Optional[float]) – Burden factor \(\in [0, 1]\).
death_rate (Optional[float]) – Dying rate \(\in [0, 1]\).
visibility (Optional[float]) – View range of camel.
supply_init (Optional[float]) – Initial supply \(\in (0, \infty)\).
endurance_init (Optional[float]) – Initial endurance \(\in (0, \infty)\).
min_temperature (Optional[float]) – Minimum temperature, must be true \(T_{min} < T_{max}\).
max_temperature (Optional[float]) – Maximum temperature, must be true \(T_{min} < T_{max}\).
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm Parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get information about algorithm.
- Returns
Algorithm information
- Return type
See also
- init_pop(task, population_size, rng, individual_type, **_kwargs)[source]¶
Initialize starting population.
- Parameters
task (Task) – Optimization task.
population_size (int) – Number of camels in population.
rng (numpy.random.Generator) – Random number generator.
individual_type (Type[Individual]) – Individual type.
- Returns
Initialize population of camels.
Initialized populations function/fitness values.
- Return type
Tuple[numpy.ndarray[Camel], numpy.ndarray[float]]
- life_cycle(camel, task)[source]¶
Apply life cycle to Camel.
- Parameters
camel (Camel) – Camel to apply life cycle.
task (Task) – Optimization task.
- Returns
Camel with life cycle applied to it.
- Return type
Camel
- oasis(c)[source]¶
Apply oasis function to camel.
- Parameters
c (Camel) – Camel to apply oasis on.
- Returns
Camel with applied oasis on.
- Return type
Camel
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of Camel Algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray[Camel]) – Current population of Camels.
population_fitness (numpy.ndarray[float]) – Current population fitness/function values.
best_x (numpy.ndarray) – Current best Camel.
best_fitness (float) – Current best Camel fitness/function value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
New population
New population function/fitness value
New global best solution
New global best fitness/objective value
Additional arguments
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, dict]
- set_parameters(population_size=50, burden_factor=0.25, death_rate=0.5, visibility=0.5, supply_init=10, endurance_init=10, min_temperature=-10, max_temperature=10, **kwargs)[source]¶
Set the arguments of an algorithm.
- Parameters
population_size (Optional[int]) – Population size \(\in [1, \infty)\).
burden_factor (Optional[float]) – Burden factor \(\in [0, 1]\).
death_rate (Optional[float]) – Dying rate \(\in [0, 1]\).
visibility (Optional[float]) – View range of camel.
supply_init (Optional[float]) – Initial supply \(\in (0, \infty)\).
endurance_init (Optional[float]) – Initial endurance \(\in (0, \infty)\).
min_temperature (Optional[float]) – Minimum temperature, must be true \(T_{min} < T_{max}\).
max_temperature (Optional[float]) – Maximum temperature, must be true \(T_{min} < T_{max}\).
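A minimal sketch showing how the seed argument inherited from Algorithm makes runs reproducible; the Task construction and the inherited run(task) helper are assumptions based on typical NiaPy usage:

from niapy.algorithms.basic import CamelAlgorithm
from niapy.task import Task


def run_once(seed):
    task = Task(problem='sphere', dimension=10, max_iters=100)  # assumed Task construction
    algorithm = CamelAlgorithm(population_size=50, seed=seed)
    return algorithm.run(task)  # assumed inherited helper


# The same seed and task settings should give the same best fitness.
print(run_once(seed=42)[1], run_once(seed=42)[1])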
- class niapy.algorithms.basic.CatSwarmOptimization(population_size=30, mixture_ratio=0.1, c1=2.05, smp=3, spc=True, cdc=0.85, srd=0.2, max_velocity=1.9, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Cat swarm optimization algorithm.
- Algorithm:
Cat swarm optimization
- Date:
2019
- Author:
Mihael Baketarić
- License:
MIT
- Reference paper:
Chu, S. C., Tsai, P. W., & Pan, J. S. (2006). Cat swarm optimization. In Pacific Rim international conference on artificial intelligence (pp. 854-858). Springer, Berlin, Heidelberg.
Initialize CatSwarmOptimization.
- Parameters
population_size (int) – Number of individuals in population.
mixture_ratio (float) – Mixture ratio.
c1 (float) – Constant in tracing mode.
smp (int) – Seeking memory pool.
spc (bool) – Self-position considering.
cdc (float) – Decides how many dimensions will be varied.
srd (float) – Seeking range of the selected dimension.
max_velocity (float) – Maximal velocity.
See also
- Name = ['CatSwarmOptimization', 'CSO']¶
- __init__(population_size=30, mixture_ratio=0.1, c1=2.05, smp=3, spc=True, cdc=0.85, srd=0.2, max_velocity=1.9, *args, **kwargs)[source]¶
Initialize CatSwarmOptimization.
- Parameters
population_size (int) – Number of individuals in population.
mixture_ratio (float) – Mixture ratio.
c1 (float) – Constant in tracing mode.
smp (int) – Seeking memory pool.
spc (bool) – Self-position considering.
cdc (float) – Decides how many dimensions will be varied.
srd (float) – Seeking range of the selected dimension.
max_velocity (float) – Maximal velocity.
See also
- get_parameters()[source]¶
Get parameters values of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get algorithm information.
- Returns
Algorithm information.
- Return type
See also
- random_seek_trace()[source]¶
Set cats into seeking/tracing mode randomly.
- Returns
Array of ones and zeros. One means tracing mode, zero means seeking mode. Length of the array is equal to population_size.
- Return type
numpy.ndarray
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of Cat Swarm Optimization algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray) – Current population fitness/function values.
best_x (numpy.ndarray) – Current best individual.
best_fitness (float) – Current best cat fitness/function value.
**params (Dict[str, Any]) – Additional function arguments.
- Returns
New population.
New population fitness/function values.
New global best solution.
New global best solutions fitness/objective value.
- Additional arguments:
velocities (numpy.ndarray): velocities of cats.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- seeking_mode(task, cat, cat_fitness, pop, fpop, fxb)[source]¶
Seeking mode.
- Parameters
task (Task) – Optimization task.
cat (numpy.ndarray) – Individual from population.
cat_fitness (float) – Current individual’s fitness/function value.
pop (numpy.ndarray) – Current population.
fpop (numpy.ndarray) – Current population fitness/function values.
fxb (float) – Current best cat fitness/function value.
- Returns
Updated individual’s position
Updated individual’s fitness/function value
Updated global best position
Updated global best fitness/function value
- Return type
- set_parameters(population_size=30, mixture_ratio=0.1, c1=2.05, smp=3, spc=True, cdc=0.85, srd=0.2, max_velocity=1.9, **kwargs)[source]¶
Set the algorithm parameters.
- Parameters
population_size (int) – Number of individuals in population.
mixture_ratio (float) – Mixture ratio.
c1 (float) – Constant in tracing mode.
smp (int) – Seeking memory pool.
spc (bool) – Self-position considering.
cdc (float) – Decides how many dimensions will be varied.
srd (float) – Seeking range of the selected dimension.
max_velocity (float) – Maximal velocity.
See also
- tracing_mode(task, cat, velocity, xb)[source]¶
Tracing mode.
- Parameters
task (Task) – Optimization task.
cat (numpy.ndarray) – Individual from population.
velocity (numpy.ndarray) – Velocity of individual.
xb (numpy.ndarray) – Current best individual.
- Returns
Updated individual’s position
Updated individual’s fitness/function value
Updated individual’s velocity vector
- Return type
Tuple[numpy.ndarray, float, numpy.ndarray]
- class niapy.algorithms.basic.CenterParticleSwarmOptimization(*args, **kwargs)[source]¶
Bases:
ParticleSwarmAlgorithm
Implementation of Center Particle Swarm Optimization.
- Algorithm:
Center Particle Swarm Optimization
- Date:
2019
- Authors:
Klemen Berkovič
- License:
MIT
- Reference paper:
H.-C. Tsai, Predicting strengths of concrete-type specimens using hybrid multilayer perceptrons with center-Unified particle swarm optimization, Adv. Eng. Softw. 37 (2010) 1104–1112.
See also
niapy.algorithms.basic.WeightedVelocityClampingParticleSwarmAlgorithm
Initialize CPSO.
- Name = ['CenterParticleSwarmOptimization', 'CPSO']¶
- static info()[source]¶
Get basic information of algorithm.
- Returns
Basic information of algorithm.
- Return type
See also
- run_iteration(task, pop, fpop, xb, fxb, **params)[source]¶
Core function of algorithm.
- Parameters
task (Task) – Optimization task.
pop (numpy.ndarray) – Current population of particles.
fpop (numpy.ndarray) – Current particles function/fitness values.
xb (numpy.ndarray) – Current global best particle.
fxb (numpy.float) – Current global best particles function/fitness value.
- Returns
New population of particles.
New populations function/fitness values.
New global best particle.
New global best particle function/fitness value.
Additional arguments.
Additional keyword arguments.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, dict]
See also
niapy.algorithms.basic.WeightedVelocityClampingParticleSwarmAlgorithm.run_iteration()
- class niapy.algorithms.basic.ClonalSelectionAlgorithm(population_size=10, clone_factor=0.1, mutation_factor=10.0, num_rand=1, bits_per_param=16, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Clonal Selection Algorithm.
- Algorithm:
Clonal selection algorithm
- Date:
2021
- Authors:
Andraž Peršon
- License:
MIT
- Reference papers:
L. N. de Castro and F. J. Von Zuben. Learning and optimization using the clonal selection principle. IEEE Transactions on Evolutionary Computation, 6:239–251, 2002.
Brownlee, J. “Clever Algorithms: Nature-Inspired Programming Recipes” Revision 2. 2012. 280-286.
- Variables
See also
Initialize ClonalSelectionAlgorithm.
- Parameters
population_size (Optional[int]) – Population size.
clone_factor (Optional[float]) – Clone factor.
mutation_factor (Optional[float]) – Mutation factor.
num_rand (Optional[int]) – Number of random antibodies to be added to the population each generation.
bits_per_param (Optional[int]) – Number of bits per parameter of solution vector.
- Name = ['ClonalSelectionAlgorithm', 'CLONALG']¶
- __init__(population_size=10, clone_factor=0.1, mutation_factor=10.0, num_rand=1, bits_per_param=16, *args, **kwargs)[source]¶
Initialize ClonalSelectionAlgorithm.
- Parameters
population_size (Optional[int]) – Population size.
clone_factor (Optional[float]) – Clone factor.
mutation_factor (Optional[float]) – Mutation factor.
num_rand (Optional[int]) – Number of random antibodies to be added to the population each generation.
bits_per_param (Optional[int]) – Number of bits per parameter of solution vector.
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get algorithms information.
- Returns
Algorithm information.
- Return type
See also
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of Clonal Selection Algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population
population_fitness (numpy.ndarray[float]) – Current population fitness/function values
best_x (numpy.ndarray) – Current best individual
best_fitness (float) – Current best individual function/fitness value
params (Dict[str, Any]) – Additional algorithm arguments
- Returns
New population
New population fitness/function values
New global best solution
New global best fitness/objective value
- Additional arguments:
bitstring (numpy.ndarray): Binary representation of the population.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- class niapy.algorithms.basic.ComprehensiveLearningParticleSwarmOptimizer(m=10, w0=0.9, w1=0.4, c=1.49445, *args, **kwargs)[source]¶
Bases:
ParticleSwarmAlgorithm
Implementation of Comprehensive Learning Particle Swarm Optimizer.
- Algorithm:
Comprehensive Learning Particle Swarm Optimizer
- Date:
2019
- Authors:
Klemen Berkovič
- License:
MIT
- Reference paper:
J. J. Liang, A. K. Qin, P. N. Suganthan and S. Baskar, “Comprehensive learning particle swarm optimizer for global optimization of multimodal functions,” in IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281-295, June 2006. doi: 10.1109/TEVC.2005.857610
- Reference URL:
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=1637688&isnumber=34326
- Variables
Initialize CLPSO.
- Name = ['ComprehensiveLearningParticleSwarmOptimizer', 'CLPSO']¶
- generate_personal_best_cl(i, pc, personal_best, personal_best_fitness)[source]¶
Generate new personal best position for learning.
- Parameters
- Returns
Personal best for learning.
- Return type
numpy.ndarray
- static info()[source]¶
Get basic information of algorithm.
- Returns
Basic information of algorithm.
- Return type
See also
- run_iteration(task, pop, fpop, xb, fxb, **params)[source]¶
Core function of algorithm.
- Parameters
task (Task) – Optimization task.
pop (numpy.ndarray) – Current populations.
fpop (numpy.ndarray) – Current population fitness/function values.
xb (numpy.ndarray) – Current best particle.
fxb (float) – Current best particle fitness/function value.
params (dict) – Additional function keyword arguments.
- Returns
New population.
New population fitness/function values.
New global best position.
New global best positions function/fitness value.
Additional arguments.
- Additional keyword arguments:
personal_best: Particles best population.
personal_best_fitness: Particles best positions function/fitness value.
min_velocity: Minimal velocity.
max_velocity: Maximal velocity.
V: Initial velocity of particle.
flag: Refresh gap counter.
pc: Learning rate.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, list, dict]
- set_parameters(m=10, w0=0.9, w1=0.4, c=1.49445, **kwargs)[source]¶
Set Particle Swarm Algorithm main parameters.
- update_velocity_cl(v, p, pb, w, min_velocity, max_velocity, task, **_kwargs)[source]¶
Update particle velocity.
- Parameters
v (numpy.ndarray) – Current velocity of particle.
p (numpy.ndarray) – Current position of particle.
pb (numpy.ndarray) – Personal best position of particle.
w (numpy.ndarray) – Weights for velocity adjustment.
min_velocity (numpy.ndarray) – Minimal velocity allowed.
max_velocity (numpy.ndarray) – Maximal velocity allowed.
task (Task) – Optimization task.
- Returns
Updated velocity of particle.
- Return type
numpy.ndarray
- class niapy.algorithms.basic.CoralReefsOptimization(population_size=25, phi=0.4, asexual_reproduction_prob=0.5, broadcast_prob=0.5, depredation_prob=0.3, k=25, crossover_rate=0.5, mutation_rate=0.36, sexual_crossover=<function default_sexual_crossover>, brooding=<function default_brooding>, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Coral Reefs Optimization Algorithm.
- Algorithm:
Coral Reefs Optimization Algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference Paper:
S. Salcedo-Sanz, J. Del Ser, I. Landa-Torres, S. Gil-López, and J. A. Portilla-Figueras, “The Coral Reefs Optimization Algorithm: A Novel Metaheuristic for Efficiently Solving Optimization Problems,” The Scientific World Journal, vol. 2014, Article ID 739768, 15 pages, 2014.
- Reference URL:
- Variables
Name (List[str]) – List of strings representing algorithm name.
phi (float) – Range of neighborhood.
num_asexual_reproduction (int) – Number of corals used in asexual reproduction.
num_broadcast (int) – Number of corals used in brooding.
num_depredation (int) – Number of corals used in depredation.
k (int) – Number of tries for larva setting.
mutation_rate (float) – Mutation variable \(\in [0, \infty]\).
crossover_rate (float) – Crossover rate in [0, 1].
sexual_crossover (Callable[[numpy.ndarray, float, Task, numpy.random.Generator, Dict[str, Any]], Tuple[numpy.ndarray, numpy.ndarray[float]]]) – Crossover function.
brooding (Callable[[numpy.ndarray, float, Task, numpy.random.Generator, Dict[str, Any]], Tuple[numpy.ndarray, numpy.ndarray]]) – Brooding function.
See also
Initialize CoralReefsOptimization.
- Parameters
population_size (int) – Population size for population initialization.
phi (int) – Distance.
asexual_reproduction_prob (float) – Value \(\in [0, 1]\) for asexual reproduction size.
broadcast_prob (float) – Value \(\in [0, 1]\) for brooding size.
depredation_prob (float) – Value \(\in [0, 1]\) for depredation size.
k (int) – Tries for larvae setting.
sexual_crossover (Callable[[numpy.ndarray, float, Task, numpy.random.Generator, Dict[str, Any]], Tuple[numpy.ndarray, numpy.ndarray]]) – Crossover function.
crossover_rate (float) – Crossover rate \(\in [0, 1]\).
brooding (Callable[[numpy.ndarray, float, Task, numpy.random.Generator, Dict[str, Any]], Tuple[numpy.ndarray, numpy.ndarray]]) – Brooding function.
mutation_rate (float) – Mutation rate \(\in [0, 1]\).
- Name = ['CoralReefsOptimization', 'CRO']¶
- __init__(population_size=25, phi=0.4, asexual_reproduction_prob=0.5, broadcast_prob=0.5, depredation_prob=0.3, k=25, crossover_rate=0.5, mutation_rate=0.36, sexual_crossover=<function default_sexual_crossover>, brooding=<function default_brooding>, *args, **kwargs)[source]¶
Initialize CoralReefsOptimization.
- Parameters
population_size (int) – Population size for population initialization.
phi (int) – Distance.
asexual_reproduction_prob (float) – Value \(\in [0, 1]\) for asexual reproduction size.
broadcast_prob (float) – Value \(\in [0, 1]\) for brooding size.
depredation_prob (float) – Value \(\in [0, 1]\) for depredation size.
k (int) – Tries for larvae setting.
sexual_crossover (Callable[[numpy.ndarray, float, Task, numpy.random.Generator, Dict[str, Any]], Tuple[numpy.ndarray, numpy.ndarray]]) – Crossover function.
crossover_rate (float) – Crossover rate \(\in [0, 1]\).
brooding (Callable[[numpy.ndarray, float, Task, numpy.random.Generator, Dict[str, Any]], Tuple[numpy.ndarray, numpy.ndarray]]) – Brooding function.
mutation_rate (float) – Mutation rate \(\in [0, 1]\).
- asexual_reproduction(reef, reef_fitness, best_x, best_fitness, task)[source]¶
Asexual reproduction of corals.
- Parameters
- Returns
New population.
New population fitness/function values.
- Return type
Tuple[numpy.ndarray, numpy.ndarray]
See also
niapy.algorithms.basic.CoralReefsOptimization.setting()
niapy.algorithms.basic.default_brooding()
- depredation(reef, reef_fitness)[source]¶
Depredation operator for reefs.
- Parameters
reef (numpy.ndarray) – Current reefs.
reef_fitness (numpy.ndarray) – Current reefs function/fitness values.
- Returns
Best individual
Best individual fitness/function value
- Return type
Tuple[numpy.ndarray, numpy.ndarray]
- get_parameters()[source]¶
Get parameters values of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get algorithms information.
- Returns
Algorithm information.
- Return type
See also
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of Coral Reefs Optimization algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray) – Current population fitness/function value.
best_x (numpy.ndarray) – Global best solution.
best_fitness (float) – Global best solution fitness/function value.
**params – Additional arguments
- Returns
New population.
New population fitness/function values.
New global best solution
New global best solutions fitness/objective value
Additional arguments:
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
See also
niapy.algorithms.basic.CoralReefsOptimization.sexual_crossover()
niapy.algorithms.basic.CoralReefsOptimization.brooding()
- set_parameters(population_size=25, phi=0.4, asexual_reproduction_prob=0.5, broadcast_prob=0.5, depredation_prob=0.3, k=25, crossover_rate=0.5, mutation_rate=0.36, sexual_crossover=<function default_sexual_crossover>, brooding=<function default_brooding>, **kwargs)[source]¶
Set the parameters of the algorithm.
- Parameters
population_size (int) – Population size for population initialization.
phi (int) – Distance.
asexual_reproduction_prob (float) – Value \(\in [0, 1]\) for asexual reproduction size.
broadcast_prob (float) – Value \(\in [0, 1]\) for brooding size.
depredation_prob (float) – Value \(\in [0, 1]\) for depredation size.
k (int) – Tries for larvae setting.
sexual_crossover (Callable[[numpy.ndarray, float, Task, numpy.random.Generator, Dict[str, Any]], Tuple[numpy.ndarray, numpy.ndarray]]) – Crossover function.
crossover_rate (float) – Crossover rate \(\in [0, 1]\).
brooding (Callable[[numpy.ndarray, float, Task, numpy.random.Generator, Dict[str, Any]], Tuple[numpy.ndarray, numpy.ndarray]]) – Brooding function.
mutation_rate (float) – Mutation rate \(\in [0, 1]\).
- settling(reef, reef_fitness, new_reef, new_reef_fitness, best_x, best_fitness, task)[source]¶
Operator for setting reefs.
New reefs try to settle at a selected position in the search space. New reefs are successful if their fitness value is better or if there is no reef occupying the same position in the search space.
- Parameters
reef (numpy.ndarray) – Current population of reefs.
reef_fitness (numpy.ndarray) – Current populations function/fitness values.
new_reef (numpy.ndarray) – New population of reefs.
new_reef_fitness (numpy.ndarray) – New populations function/fitness values.
best_x (numpy.ndarray) – Global best solution.
best_fitness (float) – Global best solutions fitness/objective value.
task (Task) – Optimization task.
- Returns
New settled population.
New settled population fitness/function values.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float]
- class niapy.algorithms.basic.CuckooSearch(population_size=25, pa=0.25, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Cuckoo behaviour and Lévy flights.
- Algorithm:
Cuckoo Search
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference:
Yang, Xin-She, and Suash Deb. “Cuckoo search via Lévy flights.” 2009 World Congress on Nature & Biologically Inspired Computing (NaBIC 2009). IEEE, 2009.
- Variables
See also
Initialize CuckooSearch.
- Parameters
- Name = ['CuckooSearch', 'CS']¶
- static info()[source]¶
Get algorithms information.
- Returns
Algorithm information.
- Return type
See also
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of CuckooSearch algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray) – Current populations fitness/function values.
best_x (numpy.ndarray) – Global best individual.
best_fitness (float) – Global best individual function/fitness values.
**params (Dict[str, Any]) – Additional arguments.
- Returns
New population.
New population's fitness/function values.
New global best solution.
New global best solutions fitness/objective value.
Additional arguments.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- class niapy.algorithms.basic.DifferentialEvolution(population_size=50, differential_weight=1, crossover_probability=0.8, strategy=<function cross_rand1>, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Differential evolution algorithm.
- Algorithm:
Differential evolution algorithm
- Date:
2018
- Author:
Uros Mlakar and Klemen Berkovič
- License:
MIT
- Reference paper:
Storn, Rainer, and Kenneth Price. “Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces.” Journal of global optimization 11.4 (1997): 341-359.
- Variables
Name (List[str]) – List of string of names for algorithm.
differential_weight (float) – Scale factor.
crossover_probability (float) – Crossover probability.
strategy (Callable[[numpy.ndarray, int, numpy.ndarray, float, float, numpy.random.Generator, Dict[str, Any]], numpy.ndarray]) – Crossover and mutation strategy.
See also
Initialize DifferentialEvolution.
- Parameters
population_size (Optional[int]) – Population size.
differential_weight (Optional[float]) – Differential weight (scale factor).
crossover_probability (Optional[float]) – Crossover rate.
strategy (Optional[Callable[[numpy.ndarray, int, numpy.ndarray, float, float, numpy.random.Generator, list], numpy.ndarray]]) – Crossover and mutation strategy.
- Name = ['DifferentialEvolution', 'DE']¶
- __init__(population_size=50, differential_weight=1, crossover_probability=0.8, strategy=<function cross_rand1>, *args, **kwargs)[source]¶
Initialize DifferentialEvolution.
- Parameters
population_size (Optional[int]) – Population size.
differential_weight (Optional[float]) – Differential weight (scale factor).
crossover_probability (Optional[float]) – Crossover rate.
strategy (Optional[Callable[[numpy.ndarray, int, numpy.ndarray, float, float, numpy.random.Generator, list], numpy.ndarray]]) – Crossover and mutation strategy.
- evolve(pop, xb, task, **kwargs)[source]¶
Evolve population.
- Parameters
pop (numpy.ndarray) – Current population.
xb (numpy.ndarray) – Current best individual.
task (Task) – Optimization task.
- Returns
New evolved populations.
- Return type
numpy.ndarray
- get_parameters()[source]¶
Get parameters values of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get basic information of algorithm.
- Returns
Basic information of algorithm.
- Return type
See also
- post_selection(pop, task, xb, fxb, **kwargs)[source]¶
Apply additional operation after selection.
- Parameters
- Returns
New population.
New global best solution.
New global best solutions fitness/objective value.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, float]
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of Differential Evolution algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray) – Current populations fitness/function values.
best_x (numpy.ndarray) – Current best individual.
best_fitness (float) – Current best individual function/fitness value.
**params (dict) – Additional arguments.
- Returns
New population.
New population fitness/function values.
New global best solution.
New global best solutions fitness/objective value.
Additional arguments.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- selection(population, new_population, best_x, best_fitness, task, **kwargs)[source]¶
Operator for selection.
- Parameters
- Returns
New selected individuals.
New global best solution.
New global best solutions fitness/objective value.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, float]
- set_parameters(population_size=50, differential_weight=1, crossover_probability=0.8, strategy=<function cross_rand1>, **kwargs)[source]¶
Set the algorithm parameters.
- Parameters
population_size (Optional[int]) – Population size.
differential_weight (Optional[float]) – Differential weight (scale factor).
crossover_probability (Optional[float]) – Crossover rate.
strategy (Optional[Callable[[numpy.ndarray, int, numpy.ndarray, float, float, numpy.random.Generator, list], numpy.ndarray]]) – Crossover and mutation strategy.
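To make the parameter roles concrete, here is a short, hedged sketch of running DifferentialEvolution with a non-default strategy. The cross_best1 helper is one of the strategy functions referenced in the signatures in this module; its import path niapy.algorithms.basic.de, as well as Task and Sphere, are assumptions about the surrounding library and may need adjusting to your installed version.

from niapy.algorithms.basic import DifferentialEvolution
from niapy.algorithms.basic.de import cross_best1   # assumed location of the strategy helpers
from niapy.problems import Sphere                   # assumed test problem
from niapy.task import Task                         # assumed task wrapper

task = Task(problem=Sphere(dimension=20), max_iters=200)

# differential_weight is the scale factor and crossover_probability the
# crossover rate; strategy swaps out the default cross_rand1.
algorithm = DifferentialEvolution(population_size=60, differential_weight=0.5,
                                  crossover_probability=0.9, strategy=cross_best1,
                                  seed=42)
best_x, best_fitness = algorithm.run(task)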
- class niapy.algorithms.basic.DynNpDifferentialEvolution(population_size=10, p_max=50, rp=3, *args, **kwargs)[source]¶
Bases:
DifferentialEvolution
Implementation of Dynamic population size Differential evolution algorithm.
- Algorithm:
Dynamic population size Differential evolution algorithm
- Date:
2018
- Author:
Klemen Berkovič
- License:
MIT
- Variables
Initialize DynNpDifferentialEvolution.
- Parameters
- Name = ['DynNpDifferentialEvolution', 'dynNpDE']¶
- __init__(population_size=10, p_max=50, rp=3, *args, **kwargs)[source]¶
Initialize DynNpDifferentialEvolution.
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get basic information of algorithm.
- Returns
Basic information of algorithm.
- Return type
See also
- post_selection(pop, task, xb, fxb, **kwargs)[source]¶
Post selection operator.
In this algorithm, the post-selection operator reduces the population size at specific iterations/generations.
- Parameters
- Returns
Changed current population.
New global best solution.
New global best solutions fitness/objective value.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, float]
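A brief, hedged sketch of the dynamic-population variant follows; the exact meanings of p_max and rp are not spelled out in this excerpt beyond their role in post_selection(), and Task/Sphere are again assumed helpers.

from niapy.algorithms.basic import DynNpDifferentialEvolution
from niapy.problems import Sphere   # assumed test problem
from niapy.task import Task         # assumed task wrapper

task = Task(problem=Sphere(dimension=10), max_iters=500)

# Defaults from the signature above; p_max and rp steer how post_selection()
# shrinks the population over the run (assumed semantics, see lead-in).
algorithm = DynNpDifferentialEvolution(population_size=10, p_max=50, rp=3, seed=7)
best_x, best_fitness = algorithm.run(task)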
- class niapy.algorithms.basic.DynNpMultiStrategyDifferentialEvolution(population_size=40, strategies=(<function cross_rand1>, <function cross_best1>, <function cross_curr2best1>, <function cross_rand2>), *args, **kwargs)[source]¶
Bases:
MultiStrategyDifferentialEvolution
,DynNpDifferentialEvolution
Implementation of multi-strategy Differential evolution algorithm with a dynamic population size determined by the quality of the population.
- Algorithm:
Dynamic population size multi-strategy Differential evolution algorithm
- Date:
2018
- Author:
Klemen Berkovič
- License:
MIT
- Variables
Name (List[str]) – List of strings representing algorithm name.
See also
Initialize MultiStrategyDifferentialEvolution.
- Parameters
strategies (Optional[Iterable[Callable[[numpy.ndarray[Individual], int, Individual, float, float, numpy.random.Generator], numpy.ndarray[Individual]]]]) – List of mutation strategies.
- Name = ['DynNpMultiStrategyDifferentialEvolution', 'dynNpMsDE']¶
- evolve(pop, xb, task, **kwargs)[source]¶
Evolve the current population.
- Parameters
pop (numpy.ndarray) – Current population.
xb (numpy.ndarray) – Global best solution.
task (Task) – Optimization task.
- Returns
Evolved new population.
- Return type
numpy.ndarray
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get basic information of algorithm.
- Returns
Basic information of algorithm.
- Return type
See also
- class niapy.algorithms.basic.DynamicFireworksAlgorithm(amplification_coeff=1.2, reduction_coeff=0.9, *args, **kwargs)[source]¶
Bases:
DynamicFireworksAlgorithmGauss
Implementation of dynamic fireworks algorithm.
- Algorithm:
Dynamic Fireworks Algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900485&isnumber=6900223
- Reference paper:
S. Zheng, A. Janecek, J. Li and Y. Tan, “Dynamic search in fireworks algorithm,” 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, 2014, pp. 3222-3229. doi: 10.1109/CEC.2014.6900485
- Variables
Name (List[str]) – List of strings representing algorithm name.
Initialize dynFWAG.
- Parameters
See also
- Name = ['DynamicFireworksAlgorithm', 'dynFWA']¶
- static info()[source]¶
Get default information of algorithm.
- Returns
Basic information.
- Return type
See also
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of Dynamic Fireworks Algorithm.
- Parameters
- Returns
New population.
New population function/fitness values.
New global best solution.
New global best fitness.
Additional arguments.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
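As a rough usage sketch (Task and Sphere are assumed helpers, and the comment on the coefficients paraphrases the dynFWA reference paper rather than the docstrings above):

from niapy.algorithms.basic import DynamicFireworksAlgorithm
from niapy.problems import Sphere   # assumed test problem
from niapy.task import Task         # assumed task wrapper

task = Task(problem=Sphere(dimension=10), max_evals=5000)

# amplification_coeff enlarges the core firework's amplitude after an improving
# iteration, while reduction_coeff shrinks it otherwise (assumed semantics,
# see update_cf in the Gauss variant below).
algorithm = DynamicFireworksAlgorithm(amplification_coeff=1.2, reduction_coeff=0.9, seed=3)
best_x, best_fitness = algorithm.run(task)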
- class niapy.algorithms.basic.DynamicFireworksAlgorithmGauss(amplification_coeff=1.2, reduction_coeff=0.9, *args, **kwargs)[source]¶
Bases:
EnhancedFireworksAlgorithm
Implementation of dynamic fireworks algorithm.
- Algorithm:
Dynamic Fireworks Algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=6900485&isnumber=6900223
- Reference paper:
S. Zheng, A. Janecek, J. Li and Y. Tan, “Dynamic search in fireworks algorithm,” 2014 IEEE Congress on Evolutionary Computation (CEC), Beijing, 2014, pp. 3222-3229. doi: 10.1109/CEC.2014.6900485
- Variables
Initialize dynFWAG.
- Parameters
See also
- Name = ['DynamicFireworksAlgorithmGauss', 'dynFWAG']¶
- __init__(amplification_coeff=1.2, reduction_coeff=0.9, *args, **kwargs)[source]¶
Initialize dynFWAG.
- Parameters
See also
- explosion_amplitudes(population_fitness, task=None)[source]¶
Calculate explosion amplitude for other fireworks.
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get default information of algorithm.
- Returns
Basic information.
- Return type
See also
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of DynamicFireworksAlgorithmGauss algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray) – Current populations function/fitness values.
best_x (numpy.ndarray) – Global best individual.
best_fitness (float) – Global best fitness/function value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
New population.
New populations fitness/function values.
New global best solution.
New global best solutions fitness/objective value.
- Additional arguments:
amplitude_cf (numpy.ndarray): Amplitude of the core firework.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- selection(population, population_fitness, sparks, task)[source]¶
Select fireworks for the next generation.
- set_parameters(amplification_coeff=1.2, reduction_coeff=0.9, **kwargs)[source]¶
Set core arguments of DynamicFireworksAlgorithmGauss.
- Parameters
See also
- update_cf(xnb, xcb, xcb_f, xb, xb_f, amplitude_cf, task)[source]¶
Update the core firework.
- Parameters
xnb – Sparks generated by core fireworks.
xcb – Current generations best spark.
xcb_f – Current generations best fitness.
xb – Global best individual.
xb_f – Global best fitness.
amplitude_cf – Amplitude of the core firework.
task (Task) – Optimization task.
- Returns
New core firework.
New core firework’s fitness.
New core firework amplitude.
- Return type
Tuple[numpy.ndarray, float, numpy.ndarray]
- class niapy.algorithms.basic.EnhancedFireworksAlgorithm(amplitude_init=0.2, amplitude_final=0.01, *args, **kwargs)[source]¶
Bases:
FireworksAlgorithm
Implementation of enhanced fireworks algorithm.
- Algorithm:
Enhanced Fireworks Algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
- Reference paper:
S. Zheng, A. Janecek and Y. Tan, “Enhanced Fireworks Algorithm,” 2013 IEEE Congress on Evolutionary Computation, Cancun, 2013, pp. 2069-2077. doi: 10.1109/CEC.2013.6557813
- Variables
Initialize EFWA.
See also
- Name = ['EnhancedFireworksAlgorithm', 'EFWA']¶
- __init__(amplitude_init=0.2, amplitude_final=0.01, *args, **kwargs)[source]¶
Initialize EFWA.
See also
- explosion_amplitudes(population_fitness, task=None)[source]¶
Calculate explosion amplitude.
- Parameters
population_fitness (numpy.ndarray) – Population fitness values.
task (Task) – Optimization task.
- Returns
New amplitude.
- Return type
numpy.ndarray
- gaussian_spark(x, task, best_x=None)[source]¶
Create new individual.
- Parameters
x (numpy.ndarray) – Individual creating a spark.
task (Task) – Optimization task.
best_x (numpy.ndarray) – Current global best individual.
- Returns
New individual generated by gaussian noise.
- Return type
numpy.ndarray
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get default information of algorithm.
- Returns
Basic information.
- Return type
See also
- mapping(x, task)[source]¶
Fix value to bounds.
- Parameters
x (numpy.ndarray) – Individual to fix.
task (Task) – Optimization task.
- Returns
Individual in search range.
- Return type
numpy.ndarray
- selection(population, population_fitness, sparks, task)[source]¶
Generate new population.
- class niapy.algorithms.basic.EvolutionStrategy1p1(mu=1, k=10, c_a=1.1, c_r=0.5, epsilon=1e-20, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of (1 + 1) evolution strategy algorithm. Uses just one individual.
- Algorithm:
(1 + 1) Evolution Strategy Algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
- Reference paper:
Deb, Kalyanmoy. “Multi-Objective Optimization Using Evolutionary Algorithms”. John Wiley & Sons, Ltd. Kanpur, India. 2001.
- Variables
See also
Initialize EvolutionStrategy1p1.
- Parameters
- Name = ['EvolutionStrategy1p1', 'EvolutionStrategy(1+1)', 'ES(1+1)']¶
- __init__(mu=1, k=10, c_a=1.1, c_r=0.5, epsilon=1e-20, *args, **kwargs)[source]¶
Initialize EvolutionStrategy1p1.
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get algorithms information.
- Returns
Algorithm information.
- Return type
See also
- init_population(task)[source]¶
Initialize starting individual.
- Parameters
task (Task) – Optimization task.
- Returns
Initialized individual.
Initialized individual fitness/function value.
- Additional arguments:
ki (int): Number of successful rho update.
- Return type
Tuple[Individual, float, Dict[str, Any]]
- mutate(x, rho)[source]¶
Mutate individual.
- Parameters
x (numpy.ndarray) – Current individual.
rho (float) – Current standard deviation.
- Returns
Mutated individual.
- Return type
numpy.ndarray
- run_iteration(task, c, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of EvolutionStrategy(1+1) algorithm.
- Parameters
task (Task) – Optimization task.
c (Individual) – Current position.
population_fitness (float) – Current position function/fitness value.
best_x (numpy.ndarray) – Global best position.
best_fitness (float) – Global best function/fitness value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
Initialized individual.
Initialized individual fitness/function value.
New global best solution.
New global best solutions fitness/objective value.
- Additional arguments:
ki (int): Number of successful rho update.
- Return type
Tuple[Individual, float, Individual, float, Dict[str, Any]]
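A minimal sketch of the (1 + 1)-ES follows. Reading k, c_a and c_r as settings for adapting the mutation strength rho is inferred from the ki counter documented above and the classic 1/5 success rule; it is not stated explicitly in this excerpt, and Task/Sphere are assumed helpers.

from niapy.algorithms.basic import EvolutionStrategy1p1
from niapy.problems import Sphere   # assumed test problem
from niapy.task import Task         # assumed task wrapper

task = Task(problem=Sphere(dimension=5), max_iters=1000)

# Single-individual search; rho is adapted with c_a/c_r depending on how many
# of the last k mutations succeeded (assumed semantics, see lead-in).
algorithm = EvolutionStrategy1p1(mu=1, k=10, c_a=1.1, c_r=0.5, seed=11)
best_x, best_fitness = algorithm.run(task)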
- class niapy.algorithms.basic.EvolutionStrategyML(lam=45, *args, **kwargs)[source]¶
Bases:
EvolutionStrategyMpL
Implementation of (mu, lambda) evolution strategy algorithm. The algorithm is suited to dynamic environments: mu parents create lambda children, only the best mu children go to the new generation, and the mu parents are discarded.
- Algorithm:
(\(\mu, \lambda\)) Evolution Strategy Algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
- Reference paper:
- Variables
Name (List[str]) – List of strings representing algorithm names
See also
niapy.algorithms.basic.es.EvolutionStrategyMpL
Initialize EvolutionStrategyMpL.
- Parameters
lam (int) – Number of new individual generated by mutation.
- Name = ['EvolutionStrategyML', 'EvolutionStrategy(mu,lambda)', 'ES(m,l)']¶
- static info()[source]¶
Get algorithms information.
- Returns
Algorithm information.
- Return type
See also
- init_population(task)[source]¶
Initialize starting population.
- Parameters
task (Task) – Optimization task.
- Returns
Initialized population.
Initialized populations fitness/function values.
Additional arguments.
- Return type
Tuple[numpy.ndarray[Individual], numpy.ndarray[float], Dict[str, Any]]
See also
niapy.algorithms.basic.es.EvolutionStrategyMpL.init_population()
- new_pop(pop)[source]¶
Return new population.
- Parameters
pop (numpy.ndarray) – Current population.
- Returns
New population.
- Return type
numpy.ndarray
- run_iteration(task, c, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of EvolutionStrategyML algorithm.
- Parameters
task (Task) – Optimization task.
c (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray) – Current population fitness/function values.
best_x (numpy.ndarray) – Global best individual.
best_fitness (float) – Global best individuals fitness/function value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
New population.
New populations fitness/function values.
New global best solution.
New global best solutions fitness/objective value.
Additional arguments.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- class niapy.algorithms.basic.EvolutionStrategyMp1(mu=40, *args, **kwargs)[source]¶
Bases:
EvolutionStrategy1p1
Implementation of (mu + 1) evolution strategy algorithm. The algorithm creates mu mutants, but only one individual advances to the new generation.
- Algorithm:
(\(\mu + 1\)) Evolution Strategy Algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
- Reference paper:
- Variables
Name (List[str]) – List of strings representing algorithm names.
Initialize EvolutionStrategyMp1.
- Name = ['EvolutionStrategyMp1', 'EvolutionStrategy(mu+1)', 'ES(m+1)']¶
- class niapy.algorithms.basic.EvolutionStrategyMpL(lam=45, *args, **kwargs)[source]¶
Bases:
EvolutionStrategy1p1
Implementation of (mu + lambda) evolution strategy algorithm. Mutation creates lambda individuals, which compete with the mu parents for survival, so only mu individuals go to the new generation.
- Algorithm:
(\(\mu + \lambda\)) Evolution Strategy Algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
- Reference paper:
Initialize EvolutionStrategyMpL.
- Parameters
lam (int) – Number of new individual generated by mutation.
- Name = ['EvolutionStrategyMpL', 'EvolutionStrategy(mu+lambda)', 'ES(m+l)']¶
- __init__(lam=45, *args, **kwargs)[source]¶
Initialize EvolutionStrategyMpL.
- Parameters
lam (int) – Number of new individual generated by mutation.
- static change_count(c, cn)[source]¶
Update number of successful mutations for population.
- Parameters
c (numpy.ndarray[Individual]) – Current population.
cn (numpy.ndarray[Individual]) – New population.
- Returns
Number of successful mutations.
- Return type
int
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get algorithms information.
- Returns
Algorithm information.
- Return type
See also
- init_population(task)[source]¶
Initialize starting population.
- Parameters
task (Task) – Optimization task.
- Returns
Initialized population.
Initialized populations function/fitness values.
- Additional arguments:
ki (int): Number of successful mutations.
- Return type
Tuple[numpy.ndarray[Individual], numpy.ndarray[float], Dict[str, Any]]
See also
niapy.algorithms.algorithm.Algorithm.init_population()
- mutate_rand(pop, task)[source]¶
Mutate a random individual from the population.
- Parameters
pop (numpy.ndarray[Individual]) – Current population.
task (Task) – Optimization task.
- Returns
Random individual from population that was mutated.
- Return type
numpy.ndarray
- run_iteration(task, c, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of EvolutionStrategyMpL algorithm.
- Parameters
task (Task) – Optimization task.
c (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray) – Current populations fitness/function values.
best_x (numpy.ndarray) – Global best individual.
best_fitness (float) – Global best individuals fitness/function value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
New population.
New populations function/fitness values.
New global best solution.
New global best solutions fitness/objective value.
- Additional arguments:
ki (int): Number of successful mutations.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- set_parameters(lam=45, **kwargs)[source]¶
Set the arguments of an algorithm.
- Parameters
lam (int) – Number of new individual generated by mutation.
See also
niapy.algorithms.basic.es.EvolutionStrategy1p1.set_parameters()
- update_rho(pop, k)[source]¶
Update standard deviation for population.
- Parameters
pop (numpy.ndarray[Individual]) – Current population.
k (int) – Number of successful mutations.
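A hedged sketch of the (mu + lambda) variant; mapping population_size to mu is an assumption based on the Algorithm base class, and Task/Sphere are assumed helpers.

from niapy.algorithms.basic import EvolutionStrategyMpL
from niapy.problems import Sphere   # assumed test problem
from niapy.task import Task         # assumed task wrapper

task = Task(problem=Sphere(dimension=10), max_iters=300)

# lam offspring are generated by mutation each iteration and compete with the
# parents for survival; population_size is assumed to play the role of mu.
algorithm = EvolutionStrategyMpL(lam=45, population_size=20, seed=5)
best_x, best_fitness = algorithm.run(task)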
- class niapy.algorithms.basic.FireflyAlgorithm(population_size=20, alpha=1, beta0=1, gamma=0.01, theta=0.97, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Firefly algorithm.
- Algorithm:
Firefly algorithm
- Date:
2016
- Authors:
Iztok Fister Jr, Iztok Fister and Klemen Berkovič
- License:
MIT
- Reference paper:
Fister, I., Fister Jr, I., Yang, X. S., & Brest, J. (2013). A comprehensive review of firefly algorithms. Swarm and Evolutionary Computation, 13, 34-46.
- Variables
See also
Initialize FireflyAlgorithm.
- Parameters
- Name = ['FireflyAlgorithm', 'FA']¶
- __init__(population_size=20, alpha=1, beta0=1, gamma=0.01, theta=0.97, *args, **kwargs)[source]¶
Initialize FireflyAlgorithm.
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get algorithms information.
- Returns
Algorithm information.
- Return type
See also
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of Firefly Algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray) – Current population function/fitness values.
best_x (numpy.ndarray) – Global best individual.
best_fitness (float) – Global best individual fitness/function value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
New population.
New population fitness/function values.
New global best solution
New global best solutions fitness/objective value
- Additional arguments:
alpha (float): Randomness strength.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
See also
niapy.algorithms.basic.FireflyAlgorithm.move_ffa()
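A short usage sketch; only alpha is described above (as the randomness strength), so the comments on beta0, gamma and theta follow the standard firefly-algorithm formulation rather than this excerpt, and Task/Sphere are assumed helpers.

from niapy.algorithms.basic import FireflyAlgorithm
from niapy.problems import Sphere   # assumed test problem
from niapy.task import Task         # assumed task wrapper

task = Task(problem=Sphere(dimension=10), max_iters=200)

# alpha: randomness strength, beta0: attractiveness at distance zero,
# gamma: light absorption coefficient, theta: alpha decay factor
# (roles other than alpha are assumed, see lead-in).
algorithm = FireflyAlgorithm(population_size=20, alpha=1.0, beta0=1.0,
                             gamma=0.01, theta=0.97, seed=2)
best_x, best_fitness = algorithm.run(task)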
- class niapy.algorithms.basic.FireworksAlgorithm(population_size=5, num_sparks=50, a=0.04, b=0.8, max_amplitude=40, num_gaussian=5, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of fireworks algorithm.
- Algorithm:
Fireworks Algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
- Reference paper:
Tan, Ying. “Fireworks algorithm.” Heidelberg, Germany: Springer 10 (2015): 978-3
- Variables
Name (List[str]) – List of strings representing algorithm names.
Initialize FWA.
- Parameters
- Name = ['FireworksAlgorithm', 'FWA']¶
- __init__(population_size=5, num_sparks=50, a=0.04, b=0.8, max_amplitude=40, num_gaussian=5, *args, **kwargs)[source]¶
Initialize FWA.
- explosion_amplitudes(population_fitness, task=None)[source]¶
Calculate explosion amplitude.
- Parameters
population_fitness (numpy.ndarray) – Population fitness values.
task (Optional[Task]) – Optimization task (Unused in this version of the algorithm).
- Returns
Explosion amplitude of sparks.
- Return type
numpy.ndarray
- gaussian_spark(x, task, best_x=None)[source]¶
Create gaussian spark.
- Parameters
x (numpy.ndarray) – Individual creating a spark.
task (Task) – Optimization task.
best_x (numpy.ndarray) – Current best individual. Unused in this version of the algorithm.
- Returns
Spark exploded based on gaussian amplitude.
- Return type
numpy.ndarray
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get default information of algorithm.
- Returns
Basic information.
- Return type
See also
- mapping(x, task)[source]¶
Fix value to bounds.
- Parameters
x (numpy.ndarray) – Individual to fix.
task (Task) – Optimization task.
- Returns
Individual in search range.
- Return type
numpy.ndarray
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of Fireworks algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray[float]) – Current populations function/fitness values.
best_x (numpy.ndarray) – Global best individual.
best_fitness (float) – Global best individuals fitness/function value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
Initialized population.
Initialized populations function/fitness values.
New global best solution.
New global best solutions fitness/objective value.
- Additional arguments:
Ah (numpy.ndarray): Initialized amplitudes.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- selection(population, population_fitness, sparks, task)[source]¶
Generate new generation of individuals.
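For completeness, a hedged usage sketch of the base fireworks algorithm with the documented defaults (Task and Sphere are assumed helpers):

from niapy.algorithms.basic import FireworksAlgorithm
from niapy.problems import Sphere   # assumed test problem
from niapy.task import Task         # assumed task wrapper

task = Task(problem=Sphere(dimension=10), max_evals=10000)

# Defaults from the class signature above: 5 fireworks, up to 50 sparks.
algorithm = FireworksAlgorithm(population_size=5, num_sparks=50, seed=12)
best_x, best_fitness = algorithm.run(task)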
- class niapy.algorithms.basic.FishSchoolSearch(population_size=30, step_individual_init=0.1, step_individual_final=0.0001, step_volitive_init=0.01, step_volitive_final=0.001, min_w=1.0, w_scale=500.0, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Fish School Search algorithm.
- Algorithm:
Fish School Search algorithm
- Date:
2019
- Authors:
Clodomir Santana Jr, Elliackin Figueredo, Mariana Macedo, Pedro Santos. Ported to niapy with small changes by Kristian Järvenpää (2018). Ported to niapy 2.0 by Klemen Berkovič (2019).
- License:
MIT
- Reference paper:
Bastos Filho, Lima Neto, Lins, D. O. Nascimento and P. Lima, “A novel search algorithm based on fish school behavior,” in 2008 IEEE International Conference on Systems, Man and Cybernetics, Oct 2008, pp. 2646–2651.
- Variables
Name (List[str]) – List of strings representing algorithm name.
step_individual_init (float) – Length of initial individual step.
step_individual_final (float) – Length of final individual step.
step_volitive_init (float) – Length of initial volitive step.
step_volitive_final (float) – Length of final volitive step.
min_w (float) – Minimum weight of a fish.
w_scale (float) – Maximum weight of a fish.
See also
Initialize FishSchoolSearch.
- Parameters
population_size (Optional[int]) – Number of fishes in school.
step_individual_init (Optional[float]) – Length of initial individual step.
step_individual_final (Optional[float]) – Length of final individual step.
step_volitive_init (Optional[float]) – Length of initial volitive step.
step_volitive_final (Optional[float]) – Length of final volitive step.
min_w (Optional[float]) – Minimum weight of a fish.
w_scale (Optional[float]) – Maximum weight of a fish. Recommended value: max_iterations / 2
- Name = ['FSS', 'FishSchoolSearch']¶
- __init__(population_size=30, step_individual_init=0.1, step_individual_final=0.0001, step_volitive_init=0.01, step_volitive_final=0.001, min_w=1.0, w_scale=500.0, *args, **kwargs)[source]¶
Initialize FishSchoolSearch.
- Parameters
population_size (Optional[int]) – Number of fishes in school.
step_individual_init (Optional[float]) – Length of initial individual step.
step_individual_final (Optional[float]) – Length of final individual step.
step_volitive_init (Optional[float]) – Length of initial volitive step.
step_volitive_final (Optional[float]) – Length of final volitive step.
min_w (Optional[float]) – Minimum weight of a fish.
w_scale (Optional[float]) – Maximum weight of a fish. Recommended value: max_iterations / 2
- collective_instinctive_movement(school, task)[source]¶
Perform collective instinctive movement.
- Parameters
school (numpy.ndarray) – Current population.
task (Task) – Optimization task.
- Returns
New population
- Return type
numpy.ndarray
- collective_volitive_movement(school, step_volitive, school_weight, xb, fxb, task)[source]¶
Perform collective volitive movement.
- Parameters
- Returns
New population.
New global best individual.
New global best fitness.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, float]
- feeding(school)[source]¶
Feed all fishes.
- Parameters
school (numpy.ndarray) – Current school fish population.
- Returns
New school fish population.
- Return type
numpy.ndarray
- get_parameters()[source]¶
Get algorithm parameters.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- individual_movement(school, step_individual, xb, fxb, task)[source]¶
Perform individual movement for each fish.
- Parameters
- Returns
New school of fishes.
New global best position.
New global best fitness.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, float]
- static info()[source]¶
Get default information of algorithm.
- Returns
Basic information.
- Return type
See also
- init_population(task)[source]¶
Initialize the school.
- Parameters
task (Task) – Optimization task.
- Returns
Population.
Population fitness.
- Additional arguments:
step_individual (float): Current individual step.
step_volitive (float): Current volitive step.
school_weight (float): Current school weight.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, dict]
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of algorithm.
- Parameters
- Returns
New Population.
New Population fitness.
New global best individual.
New global best fitness.
- Additional parameters:
step_individual (float): Current individual step.
step_volitive (float): Current volitive step.
school_weight (float): Current school weight.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, dict]
- set_parameters(population_size=30, step_individual_init=0.1, step_individual_final=0.0001, step_volitive_init=0.01, step_volitive_final=0.001, min_w=1.0, w_scale=5000.0, **kwargs)[source]¶
Set core arguments of FishSchoolSearch algorithm.
- Parameters
population_size (Optional[int]) – Number of fishes in school.
step_individual_init (Optional[float]) – Length of initial individual step.
step_individual_final (Optional[float]) – Length of final individual step.
step_volitive_init (Optional[float]) – Length of initial volitive step.
step_volitive_final (Optional[float]) – Length of final volitive step.
min_w (Optional[float]) – Minimum weight of a fish.
w_scale (Optional[float]) – Maximum weight of a fish. Recommended value: max_iterations / 2
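Since the docstrings above recommend a w_scale of roughly max_iterations / 2, a hedged usage sketch with an iteration budget looks as follows (Task and Sphere are assumed helpers):

from niapy.algorithms.basic import FishSchoolSearch
from niapy.problems import Sphere   # assumed test problem
from niapy.task import Task         # assumed task wrapper

task = Task(problem=Sphere(dimension=10), max_iters=500)

# The individual and volitive steps are assumed to decay from their *_init to
# their *_final values over the run; w_scale follows the max_iterations / 2
# recommendation above.
algorithm = FishSchoolSearch(population_size=30, step_individual_init=0.1,
                             step_individual_final=0.0001, step_volitive_init=0.01,
                             step_volitive_final=0.001, min_w=1.0, w_scale=250.0,
                             seed=9)
best_x, best_fitness = algorithm.run(task)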
- class niapy.algorithms.basic.FlowerPollinationAlgorithm(population_size=20, p=0.8, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Flower Pollination algorithm.
- Algorithm:
Flower Pollination algorithm
- Date:
2018
- Authors:
Dusan Fister, Iztok Fister Jr. and Klemen Berkovič
- License:
MIT
- Reference paper:
Yang, Xin-She. “Flower pollination algorithm for global optimization.” International Conference on Unconventional Computing and Natural Computation. Springer, Berlin, Heidelberg, 2012.
- References URL:
Implementation is based on the following MATLAB code: https://www.mathworks.com/matlabcentral/fileexchange/45112-flower-pollination-algorithm?requestedDomain=true
- Variables
See also
Initialize FlowerPollinationAlgorithm.
- Name = ['FlowerPollinationAlgorithm', 'FPA']¶
- __init__(population_size=20, p=0.8, *args, **kwargs)[source]¶
Initialize FlowerPollinationAlgorithm.
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get default information of algorithm.
- Returns
Basic information.
- Return type
See also
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of FlowerPollinationAlgorithm algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray) – Current population fitness/function values.
best_x (numpy.ndarray) – Global best solution.
best_fitness (float) – Global best solution function/fitness value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
New population.
New populations fitness/function values.
New global best solution.
New global best solution fitness/objective value.
Additional arguments.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- class niapy.algorithms.basic.ForestOptimizationAlgorithm(population_size=10, lifetime=3, area_limit=10, local_seeding_changes=1, global_seeding_changes=1, transfer_rate=0.3, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Forest Optimization Algorithm.
- Algorithm:
Forest Optimization Algorithm
- Date:
2019
- Authors:
Luka Pečnik
- License:
MIT
- Reference paper:
Manizheh Ghaemi, Mohammad-Reza Feizi-Derakhshi, Forest Optimization Algorithm, Expert Systems with Applications, Volume 41, Issue 15, 2014, Pages 6676-6687, ISSN 0957-4174, https://doi.org/10.1016/j.eswa.2014.05.009.
- References URL:
Implementation is based on the following MATLAB code: https://github.com/cominsys/FOA
- Variables
Name (List[str]) – List of strings representing algorithm name.
lifetime (int) – Life time of trees parameter.
area_limit (int) – Area limit parameter.
local_seeding_changes (int) – Local seeding changes parameter.
global_seeding_changes (int) – Global seeding changes parameter.
transfer_rate (float) – Transfer rate parameter.
See also
Initialize ForestOptimizationAlgorithm.
- Parameters
population_size (Optional[int]) – Population size.
lifetime (Optional[int]) – Life time parameter.
area_limit (Optional[int]) – Area limit parameter.
local_seeding_changes (Optional[int]) – Local seeding changes parameter.
global_seeding_changes (Optional[int]) – Global seeding changes parameter.
transfer_rate (Optional[float]) – Transfer rate parameter.
- Name = ['ForestOptimizationAlgorithm', 'FOA']¶
- __init__(population_size=10, lifetime=3, area_limit=10, local_seeding_changes=1, global_seeding_changes=1, transfer_rate=0.3, *args, **kwargs)[source]¶
Initialize ForestOptimizationAlgorithm.
- Parameters
population_size (Optional[int]) – Population size.
lifetime (Optional[int]) – Life time parameter.
area_limit (Optional[int]) – Area limit parameter.
local_seeding_changes (Optional[int]) – Local seeding changes parameter.
global_seeding_changes (Optional[int]) – Global seeding changes parameter.
transfer_rate (Optional[float]) – Transfer rate parameter.
- get_parameters()[source]¶
Get parameters values of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- global_seeding(task, candidates, size)[source]¶
Global optimum search stage that should prevent getting stuck in a local optimum.
- static info()[source]¶
Get algorithms information.
- Returns
Algorithm information.
- Return type
See also
- local_seeding(task, trees)[source]¶
Local optimum search stage.
- Parameters
task (Task) – Optimization task.
trees (numpy.ndarray) – Zero age trees for local seeding.
- Returns
Resulting zero age trees.
- Return type
numpy.ndarray
- remove_lifetime_exceeded(trees, age)[source]¶
Remove dead trees.
- Parameters
trees (numpy.ndarray) – Population to test.
age (numpy.ndarray[int32]) – Age of trees.
- Returns
Alive trees.
New candidate population.
Age of trees.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray[int32]]
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of Forest Optimization Algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray[float]) – Current population function/fitness values.
best_x (numpy.ndarray) – Global best individual.
best_fitness (float) – Global best individual fitness/function value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
New population.
New population fitness/function values.
- Additional arguments:
age (numpy.ndarray[int32]): Age of trees.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- set_parameters(population_size=10, lifetime=3, area_limit=10, local_seeding_changes=1, global_seeding_changes=1, transfer_rate=0.3, **kwargs)[source]¶
Set the parameters of the algorithm.
- Parameters
population_size (Optional[int]) – Population size.
lifetime (Optional[int]) – Life time parameter.
area_limit (Optional[int]) – Area limit parameter.
local_seeding_changes (Optional[int]) – Local seeding changes parameter.
global_seeding_changes (Optional[int]) – Global seeding changes parameter.
transfer_rate (Optional[float]) – Transfer rate parameter.
- survival_of_the_fittest(task, trees, candidates, age)[source]¶
Evaluate and filter current population.
- Parameters
task (Task) – Optimization task.
trees (numpy.ndarray) – Population to evaluate.
candidates (numpy.ndarray) – Candidate population array to be updated.
age (numpy.ndarray[int32]) – Age of trees.
- Returns
Trees sorted by fitness value.
Updated candidate population.
Population fitness values.
Age of trees
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray[float], numpy.ndarray[int32]]
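A hedged usage sketch of FOA; the brief comments on what the parameters control paraphrase the Ghaemi and Feizi-Derakhshi reference paper, since the docstrings above only name them, and Task/Sphere are assumed helpers.

from niapy.algorithms.basic import ForestOptimizationAlgorithm
from niapy.problems import Sphere   # assumed test problem
from niapy.task import Task         # assumed task wrapper

task = Task(problem=Sphere(dimension=10), max_evals=10000)

# lifetime: maximum tree age, area_limit: maximum forest size,
# local/global_seeding_changes: variables perturbed per seeding stage,
# transfer_rate: share of candidates used for global seeding
# (roles paraphrased from the reference paper, see lead-in).
algorithm = ForestOptimizationAlgorithm(population_size=10, lifetime=3, area_limit=10,
                                        local_seeding_changes=1, global_seeding_changes=1,
                                        transfer_rate=0.3, seed=4)
best_x, best_fitness = algorithm.run(task)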
- class niapy.algorithms.basic.GeneticAlgorithm(population_size=25, tournament_size=5, mutation_rate=0.25, crossover_rate=0.25, selection=<function tournament_selection>, crossover=<function uniform_crossover>, mutation=<function uniform_mutation>, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Genetic Algorithm.
- Algorithm:
Genetic algorithm
- Date:
2018
- Author:
Klemen Berkovič
- Reference paper:
Goldberg, David (1989). Genetic Algorithms in Search, Optimization and Machine Learning. Reading, MA: Addison-Wesley Professional.
- License:
MIT
- Variables
Name (List[str]) – List of strings representing algorithm name.
tournament_size (int) – Tournament size.
mutation_rate (float) – Mutation rate.
crossover_rate (float) – Crossover rate.
selection (Callable[[numpy.ndarray[Individual], int, int, Individual, numpy.random.Generator], Individual]) – selection operator.
crossover (Callable[[numpy.ndarray[Individual], int, float, numpy.random.Generator], Individual]) – Crossover operator.
mutation (Callable[[numpy.ndarray[Individual], int, float, Task, numpy.random.Generator], Individual]) – Mutation operator.
See also
Initialize GeneticAlgorithm.
- Parameters
population_size (Optional[int]) – Population size.
tournament_size (Optional[int]) – Tournament size.
mutation_rate (Optional[float]) – Mutation rate.
crossover_rate (Optional[float]) – Crossover rate.
selection (Optional[Callable[[numpy.ndarray[Individual], int, int, Individual, numpy.random.Generator], Individual]]) – Selection operator.
crossover (Optional[Callable[[numpy.ndarray[Individual], int, float, numpy.random.Generator], Individual]]) – Crossover operator.
mutation (Optional[Callable[[numpy.ndarray[Individual], int, float, Task, numpy.random.Generator], Individual]]) – Mutation operator.
See also
- selection:
niapy.algorithms.basic.tournament_selection()
niapy.algorithms.basic.roulette_selection()
- Crossover:
niapy.algorithms.basic.uniform_crossover()
niapy.algorithms.basic.two_point_crossover()
niapy.algorithms.basic.multi_point_crossover()
niapy.algorithms.basic.crossover_uros()
- Mutations:
niapy.algorithms.basic.uniform_mutation()
niapy.algorithms.basic.creep_mutation()
niapy.algorithms.basic.mutation_uros()
- Name = ['GeneticAlgorithm', 'GA']¶
- __init__(population_size=25, tournament_size=5, mutation_rate=0.25, crossover_rate=0.25, selection=<function tournament_selection>, crossover=<function uniform_crossover>, mutation=<function uniform_mutation>, *args, **kwargs)[source]¶
Initialize GeneticAlgorithm.
- Parameters
population_size (Optional[int]) – Population size.
tournament_size (Optional[int]) – Tournament size.
mutation_rate (Optional[float]) – Mutation rate.
crossover_rate (Optional[float]) – Crossover rate.
selection (Optional[Callable[[numpy.ndarray[Individual], int, int, Individual, numpy.random.Generator], Individual]]) – Selection operator.
crossover (Optional[Callable[[numpy.ndarray[Individual], int, float, numpy.random.Generator], Individual]]) – Crossover operator.
mutation (Optional[Callable[[numpy.ndarray[Individual], int, float, Task, numpy.random.Generator], Individual]]) – Mutation operator.
See also
- selection:
niapy.algorithms.basic.tournament_selection()
niapy.algorithms.basic.roulette_selection()
- Crossover:
niapy.algorithms.basic.uniform_crossover()
niapy.algorithms.basic.two_point_crossover()
niapy.algorithms.basic.multi_point_crossover()
niapy.algorithms.basic.crossover_uros()
- Mutations:
niapy.algorithms.basic.uniform_mutation()
niapy.algorithms.basic.creep_mutation()
niapy.algorithms.basic.mutation_uros()
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get basic information of algorithm.
- Returns
Basic information of algorithm.
- Return type
See also
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of GeneticAlgorithm algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray) – Current populations fitness/function values.
best_x (numpy.ndarray) – Global best individual.
best_fitness (float) – Global best individuals function/fitness value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
New population.
New populations function/fitness values.
New global best solution
New global best solutions fitness/objective value
Additional arguments.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- set_parameters(population_size=25, tournament_size=5, mutation_rate=0.25, crossover_rate=0.25, selection=<function tournament_selection>, crossover=<function uniform_crossover>, mutation=<function uniform_mutation>, **kwargs)[source]¶
Set the parameters of the algorithm.
- Parameters
population_size (Optional[int]) – Population size.
tournament_size (Optional[int]) – Tournament size.
mutation_rate (Optional[float]) – Mutation rate.
crossover_rate (Optional[float]) – Crossover rate.
selection (Optional[Callable[[numpy.ndarray[Individual], int, int, Individual, numpy.random.Generator], Individual]]) – selection operator.
crossover (Optional[Callable[[numpy.ndarray[Individual], int, float, numpy.random.Generator], Individual]]) – Crossover operator.
mutation (Optional[Callable[[numpy.ndarray[Individual], int, float, Task, numpy.random.Generator], Individual]]) – Mutation operator.
See also
- selection:
niapy.algorithms.basic.tournament_selection()
niapy.algorithms.basic.roulette_selection()
- Crossover:
niapy.algorithms.basic.uniform_crossover()
niapy.algorithms.basic.two_point_crossover()
niapy.algorithms.basic.multi_point_crossover()
niapy.algorithms.basic.crossover_uros()
- Mutations:
niapy.algorithms.basic.uniform_mutation()
niapy.algorithms.basic.creep_mutation()
niapy.algorithms.basic.mutation_uros()
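The operator helpers listed in the “See also” blocks above can be swapped in through the constructor. The sketch below assumes they live in niapy.algorithms.basic.ga (adjust the import to your installed version); Task and Sphere are assumed helpers as well.

from niapy.algorithms.basic import GeneticAlgorithm
from niapy.algorithms.basic.ga import two_point_crossover, creep_mutation   # assumed module path
from niapy.problems import Sphere   # assumed test problem
from niapy.task import Task         # assumed task wrapper

task = Task(problem=Sphere(dimension=15), max_iters=400)

# Tournament selection stays at its default; crossover and mutation use the
# alternative operators referenced above.
algorithm = GeneticAlgorithm(population_size=50, tournament_size=5, mutation_rate=0.2,
                             crossover_rate=0.8, crossover=two_point_crossover,
                             mutation=creep_mutation, seed=6)
best_x, best_fitness = algorithm.run(task)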
- class niapy.algorithms.basic.GlowwormSwarmOptimization(population_size=25, l0=5, nt=5, rho=0.4, gamma=0.6, beta=0.08, s=0.03, distance=<function euclidean>, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of glowworm swarm optimization.
- Algorithm:
Glowworm Swarm Optimization Algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
- Reference paper:
Kaipa, Krishnanand N., and Debasish Ghose. Glowworm swarm optimization: theory, algorithms, and applications. Vol. 698. Springer, 2017.
- Variables
Name (List[str]) – List of strings representing algorithm name.
l0 (float) – Initial luciferin quantity for each glowworm.
nt (float) – Number of neighbors.
rho (float) – Luciferin decay constant.
gamma (float) – Luciferin enhancement constant.
beta (float) – Constant.
s (float) – Step size.
distance (Callable[[numpy.ndarray, numpy.ndarray], float]) – Measure distance between two individuals.
See also
niapy.algorithms.algorithm.Algorithm
Initialize GlowwormSwarmOptimization.
- Parameters
population_size (Optional[int]) – Number of glowworms in population.
l0 (Optional[float]) – Initial luciferin quantity for each glowworm.
nt (Optional[int]) – Number of neighbors.
rho (Optional[float]) – Luciferin decay constant.
gamma (Optional[float]) – Luciferin enhancement constant.
beta (Optional[float]) – Constant.
s (Optional[float]) – Step size.
distance (Optional[Callable[[numpy.ndarray, numpy.ndarray], float]]) – Measure distance between two individuals.
- Name = ['GlowwormSwarmOptimization', 'GSO']¶
- __init__(population_size=25, l0=5, nt=5, rho=0.4, gamma=0.6, beta=0.08, s=0.03, distance=<function euclidean>, *args, **kwargs)[source]¶
Initialize GlowwormSwarmOptimization.
- Parameters
population_size (Optional[int]) – Number of glowworms in population.
l0 (Optional[float]) – Initial luciferin quantity for each glowworm.
nt (Optional[int]) – Number of neighbors.
rho (Optional[float]) – Luciferin decay constant.
gamma (Optional[float]) – Luciferin enhancement constant.
beta (Optional[float]) – Constant.
s (Optional[float]) – Step size.
distance (Optional[Callable[[numpy.ndarray, numpy.ndarray], float]]) – Measure distance between two individuals.
- get_parameters()[source]¶
Get algorithms parameters values.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- init_population(task)[source]¶
Initialize population.
- Parameters
task (Task) – Optimization task.
- Returns
Initialized population of glowworms.
Initialized populations function/fitness values.
- Additional arguments:
luciferin (numpy.ndarray): Luciferin values of glowworms.
ranges (numpy.ndarray): Ranges.
sensing_range (float): Sensing range.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, Dict[str, Any]]
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of GlowwormSwarmOptimization algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray) – Current populations fitness/function values.
best_x (numpy.ndarray) – Global best individual.
best_fitness (float) – Global best individuals function/fitness value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
Initialized population of glowworms.
Initialized populations function/fitness values.
New global best solution
New global best solutions fitness/objective value.
- Additional arguments:
luciferin (numpy.ndarray): Luciferin values of glowworms.
ranges (numpy.ndarray): Ranges.
sensing_range (float): Sensing range.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- set_parameters(population_size=25, l0=5, nt=5, rho=0.4, gamma=0.6, beta=0.08, s=0.03, distance=<function euclidean>, **kwargs)[source]¶
Set the arguments of an algorithm.
- Parameters
population_size (Optional[int]) – Number of glowworms in population.
l0 (Optional[float]) – Initial luciferin quantity for each glowworm.
nt (Optional[int]) – Number of neighbors.
rho (Optional[float]) – Luciferin decay constant.
gamma (Optional[float]) – Luciferin enhancement constant.
beta (Optional[float]) – Constant.
s (Optional[float]) – Step size.
distance (Optional[Callable[[numpy.ndarray, numpy.ndarray], float]]) – Measure distance between two individuals.
- class niapy.algorithms.basic.GlowwormSwarmOptimizationV1(population_size=25, l0=5, nt=5, rho=0.4, gamma=0.6, beta=0.08, s=0.03, distance=<function euclidean>, *args, **kwargs)[source]¶
Bases:
GlowwormSwarmOptimization
Implementation of glowworm swarm optimization.
- Algorithm:
Glowworm Swarm Optimization Algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
- Reference paper:
Kaipa, Krishnanand N., and Debasish Ghose. Glowworm swarm optimization: theory, algorithms, and applications. Vol. 698. Springer, 2017.
- Variables
Name (List[str]) – List of strings representing algorithm names.
See also
niapy.algorithms.basic.GlowwormSwarmOptimization
Initialize GlowwormSwarmOptimization.
- Parameters
population_size (Optional[int]) – Number of glowworms in population.
l0 (Optional[float]) – Initial luciferin quantity for each glowworm.
nt (Optional[int]) – Number of neighbors.
rho (Optional[float]) – Luciferin decay constant.
gamma (Optional[float]) – Luciferin enhancement constant.
beta (Optional[float]) – Constant.
s (Optional[float]) – Step size.
distance (Optional[Callable[[numpy.ndarray, numpy.ndarray], float]]) – Measure distance between two individuals.
- Name = ['GlowwormSwarmOptimizationV1', 'GSOv1']¶
- class niapy.algorithms.basic.GlowwormSwarmOptimizationV2(alpha=0.2, *args, **kwargs)[source]¶
Bases:
GlowwormSwarmOptimization
Implementation of glowworm swarm optimization.
- Algorithm:
Glowworm Swarm Optimization Algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
- Reference paper:
Kaipa, Krishnanand N., and Debasish Ghose. Glowworm swarm optimization: theory, algorithms, and applications. Vol. 698. Springer, 2017.
See also
niapy.algorithms.basic.GlowwormSwarmOptimization
Initialize GlowwormSwarmOptimizationV2.
- Parameters
alpha (Optional[float]) – Alpha parameter.
- Name = ['GlowwormSwarmOptimizationV2', 'GSOv2']¶
- __init__(alpha=0.2, *args, **kwargs)[source]¶
Initialize GlowwormSwarmOptimizationV2.
- Parameters
alpha (Optional[float]) – Alpha parameter.
- class niapy.algorithms.basic.GlowwormSwarmOptimizationV3(beta1=0.2, *args, **kwargs)[source]¶
Bases:
GlowwormSwarmOptimization
Implementation of glowworm swarm optimization.
- Algorithm:
Glowworm Swarm Optimization Algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
- Reference paper:
Kaipa, Krishnanand N., and Debasish Ghose. Glowworm swarm optimization: theory, algorithms, and applications. Vol. 698. Springer, 2017.
See also
niapy.algorithms.basic.GlowwormSwarmOptimization
Initialize GlowwormSwarmOptimizationV3.
- Parameters
beta1 (Optional[float]) – Beta1 parameter.
- Name = ['GlowwormSwarmOptimizationV3', 'GSOv3']¶
- __init__(beta1=0.2, *args, **kwargs)[source]¶
Initialize GlowwormSwarmOptimizationV3.
- Parameters
beta1 (Optional[float]) – Beta1 parameter.
- class niapy.algorithms.basic.GravitationalSearchAlgorithm(population_size=40, g0=2.467, epsilon=1e-17, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Gravitational Search Algorithm.
- Algorithm:
Gravitational Search Algorithm
- Date:
2018
- Author:
Klemen Berkovič
- License:
MIT
- Reference URL:
- Reference paper:
Esmat Rashedi, Hossein Nezamabadi-pour, Saeid Saryazdi, GSA: A Gravitational Search Algorithm, Information Sciences, Volume 179, Issue 13, 2009, Pages 2232-2248, ISSN 0020-0255
- Variables
Name (List[str]) – List of strings representing algorithm name.
See also
Initialize GravitationalSearchAlgorithm.
- Parameters
See also
niapy.algorithms.algorithm.Algorithm.__init__()
- Name = ['GravitationalSearchAlgorithm', 'GSA']¶
- __init__(population_size=40, g0=2.467, epsilon=1e-17, *args, **kwargs)[source]¶
Initialize GravitationalSearchAlgorithm.
- Parameters
See also
niapy.algorithms.algorithm.Algorithm.__init__()
- get_parameters()[source]¶
Get algorithm parameters values.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
See also
niapy.algorithms.algorithm.Algorithm.get_parameters()
- init_population(task)[source]¶
Initialize starting population.
- Parameters
task (Task) – Optimization task.
- Returns
Initialized population.
Initialized populations fitness/function values.
- Additional arguments:
velocities (numpy.ndarray[float]): Velocities
- Return type
Tuple[numpy.ndarray, numpy.ndarray, Dict[str, Any]]
See also
niapy.algorithms.algorithm.Algorithm.init_population()
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of GravitationalSearchAlgorithm algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray) – Current populations fitness/function values.
best_x (numpy.ndarray) – Global best solution.
best_fitness (float) – Global best fitness/function value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
New population.
New populations fitness/function values.
New global best solution
New global best solutions fitness/objective value
- Additional arguments:
velocities (numpy.ndarray): Velocities.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- class niapy.algorithms.basic.GreyWolfOptimizer(population_size=50, initialization_function=<function default_numpy_init>, individual_type=None, seed=None, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Grey wolf optimizer.
- Algorithm:
Grey wolf optimizer
- Date:
2018
- Author:
Iztok Fister Jr. and Klemen Berkovič
- License:
MIT
- Reference paper:
Mirjalili, Seyedali, Seyed Mohammad Mirjalili, and Andrew Lewis. “Grey wolf optimizer.” Advances in engineering software 69 (2014): 46-61.
Grey Wolf Optimizer (GWO) source code version 1.0 (MATLAB) from MathWorks
- Variables
Name (List[str]) – List of strings representing algorithm names.
See also
Initialize algorithm and create name for an algorithm.
- Parameters
population_size (Optional[int]) – Population size.
initialization_function (Optional[Callable[[int, Task, numpy.random.Generator, Dict[str, Any]], Tuple[numpy.ndarray, numpy.ndarray[float]]]]) – Population initialization function.
individual_type (Optional[Type[Individual]]) – Individual type used in population, default is Numpy array.
seed (Optional[int]) – Starting seed for random generator.
- Name = ['GreyWolfOptimizer', 'GWO']¶
- static info()[source]¶
Get algorithm information.
- Returns
Algorithm information.
- Return type
See also
- init_population(task)[source]¶
Initialize population.
- Parameters
task (Task) – Optimization task.
- Returns
Initialized population.
Initialized populations fitness/function values.
- Additional arguments:
alpha (numpy.ndarray): Alpha of the pack (Best solution)
alpha_fitness (float): Best fitness.
beta (numpy.ndarray): Beta of the pack (Second best solution)
beta_fitness (float): Second best fitness.
delta (numpy.ndarray): Delta of the pack (Third best solution)
delta_fitness (float): Third best fitness.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, Dict[str, Any]]
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of GreyWolfOptimizer algorithm.
- Parameters
- Returns
New population
New population fitness/function values
- Additional arguments:
alpha (numpy.ndarray): Alpha of the pack (Best solution)
alpha_fitness (float): Best fitness.
beta (numpy.ndarray): Beta of the pack (Second best solution)
beta_fitness (float): Second best fitness.
delta (numpy.ndarray): Delta of the pack (Third best solution)
delta_fitness (float): Third best fitness.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
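GreyWolfOptimizer takes no algorithm-specific parameters beyond those of the Algorithm base class, so a usage sketch is short (Task and Sphere are assumed helpers):

from niapy.algorithms.basic import GreyWolfOptimizer
from niapy.problems import Sphere   # assumed test problem
from niapy.task import Task         # assumed task wrapper

task = Task(problem=Sphere(dimension=10), max_iters=100)

# The alpha/beta/delta wolves reported by init_population() are tracked
# internally; only the population size needs choosing.
algorithm = GreyWolfOptimizer(population_size=40, seed=8)
best_x, best_fitness = algorithm.run(task)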
- class niapy.algorithms.basic.HarmonySearch(population_size=30, r_accept=0.7, r_pa=0.35, b_range=1.42, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Harmony Search algorithm.
- Algorithm:
Harmony Search Algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
- Reference paper:
Geem, Z. W., Kim, J. H., & Loganathan, G. V. (2001). A new heuristic optimization algorithm: harmony search. Simulation, 76(2), 60-68.
- Variables
See also
Initialize HarmonySearch.
- Parameters
- Name = ['HarmonySearch', 'HS']¶
- __init__(population_size=30, r_accept=0.7, r_pa=0.35, b_range=1.42, *args, **kwargs)[source]¶
Initialize HarmonySearch.
- improvise(harmonies, task)[source]¶
Create new individual.
- Parameters
harmonies (numpy.ndarray) – Current population.
task (Task) – Optimization task.
- Returns
New individual.
- Return type
numpy.ndarray
- static info()[source]¶
Get basic information about the algorithm.
- Returns
Basic information.
- Return type
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of HarmonySearch algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray) – Current populations function/fitness values.
best_x (numpy.ndarray) – Global best individual.
best_fitness (float) – Global best fitness/function value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
New harmony/population.
New populations function/fitness values.
New global best solution
New global best solution fitness/objective value
Additional arguments.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- class niapy.algorithms.basic.HarmonySearchV1(bw_min=1, bw_max=2, *args, **kwargs)[source]¶
Bases:
HarmonySearch
Implementation of harmony search algorithm.
- Algorithm:
Harmony Search Algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
https://link.springer.com/chapter/10.1007/978-3-642-00185-7_1
- Reference paper:
Yang, Xin-She. “Harmony search as a metaheuristic algorithm.” Music-inspired harmony search algorithm. Springer, Berlin, Heidelberg, 2009. 1-14.
- Variables
Initialize HarmonySearchV1.
- Parameters
- Name = ['HarmonySearchV1', 'HSv1']¶
- class niapy.algorithms.basic.HarrisHawksOptimization(population_size=40, levy=0.01, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Harris Hawks Optimization algorithm.
- Algorithm:
Harris Hawks Optimization
- Date:
2020
- Authors:
Francisco Jose Solis-Munoz
- License:
MIT
- Reference paper:
Heidari et al. “Harris hawks optimization: Algorithm and applications”. Future Generation Computer Systems. 2019. Vol. 97. 849-872.
- Variables
See also
Initialize HarrisHawksOptimization.
- Name = ['HarrisHawksOptimization', 'HHO']¶
- __init__(population_size=40, levy=0.01, *args, **kwargs)[source]¶
Initialize HarrisHawksOptimization.
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get algorithms information.
- Returns
Algorithm information.
- Return type
See also
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of Harris Hawks Optimization.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population
population_fitness (numpy.ndarray[float]) – Current population fitness/function values
best_x (numpy.ndarray) – Current best individual
best_fitness (float) – Current best individual function/fitness value
params (Dict[str, Any]) – Additional algorithm arguments
- Returns
New population
New population fitness/function values
New global best solution
New global best fitness/objective value
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
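A minimal usage sketch of HarrisHawksOptimization; the 'rastrigin' problem name and Task arguments are assumptions about the installed NiaPy release:
from niapy.task import Task
from niapy.algorithms.basic import HarrisHawksOptimization

# Assumed setup: built-in 'rastrigin' benchmark, 20 dimensions, 500 iterations.
task = Task(problem='rastrigin', dimension=20, max_iters=500)
hho = HarrisHawksOptimization(population_size=40, levy=0.01, seed=42)
best_x, best_fitness = hho.run(task)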
- class niapy.algorithms.basic.KrillHerd(population_size=50, n_max=0.01, foraging_speed=0.02, diffusion_speed=0.002, c_t=0.93, w_neighbor=0.42, w_foraging=0.38, d_s=2.63, max_neighbors=5, crossover_rate=0.2, mutation_rate=0.05, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of krill herd algorithm.
- Algorithm:
Krill Herd Algorithm
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
http://www.sciencedirect.com/science/article/pii/S1007570412002171
- Reference paper:
Amir Hossein Gandomi, Amir Hossein Alavi, Krill herd: A new bio-inspired optimization algorithm, Communications in Nonlinear Science and Numerical Simulation, Volume 17, Issue 12, 2012, Pages 4831-4845, ISSN 1007-5704, https://doi.org/10.1016/j.cnsns.2012.05.010.
- Variables
Name (List[str]) – List of strings representing algorithm names.
population_size (Optional[int]) – Number of krill herds in population.
n_max (Optional[float]) – Maximum induced speed.
foraging_speed (Optional[float]) – Foraging speed.
diffusion_speed (Optional[float]) – Maximum diffusion speed.
c_t (Optional[float]) – Constant \(\in [0, 2]\).
w_neighbor (Optional[Union[int, float, numpy.ndarray]]) – Inertia weights of the motion induced from neighbors \(\in [0, 1]\).
w_foraging (Optional[Union[int, float, numpy.ndarray]]) – Inertia weights of the motion induced from foraging \(\in [0, 1]\).
d_s (Optional[float]) – Maximum euclidean distance for neighbors.
max_neighbors (Optional[int]) – Maximum neighbors for neighbors effect.
crossover_rate (Optional[float]) – Crossover probability.
mutation_rate (Optional[float]) – Mutation probability.
See also
Initialize KrillHerd.
- Parameters
population_size (Optional[int]) – Number of krill herds in population.
n_max (Optional[float]) – Maximum induced speed.
foraging_speed (Optional[float]) – Foraging speed.
diffusion_speed (Optional[float]) – Maximum diffusion speed.
c_t (Optional[float]) – Constant \(\in [0, 2]\).
w_neighbor (Optional[Union[int, float, numpy.ndarray]]) – Inertia weights of the motion induced from neighbors \(\in [0, 1]\).
w_foraging (Optional[Union[int, float, numpy.ndarray]]) – Inertia weights of the motion induced from foraging \(\in [0, 1]\).
d_s (Optional[float]) – Maximum euclidean distance for neighbors.
max_neighbors (Optional[int]) – Maximum neighbors for neighbors effect.
crossover_rate (Optional[float]) – Crossover probability.
mutation_rate (Optional[float]) – Mutation probability.
See also
niapy.algorithms.algorithm.Algorithm.__init__()
- Name = ['KrillHerd', 'KH']¶
- __init__(population_size=50, n_max=0.01, foraging_speed=0.02, diffusion_speed=0.002, c_t=0.93, w_neighbor=0.42, w_foraging=0.38, d_s=2.63, max_neighbors=5, crossover_rate=0.2, mutation_rate=0.05, *args, **kwargs)[source]¶
Initialize KrillHerd.
- Parameters
population_size (Optional[int]) – Number of krill herds in population.
n_max (Optional[float]) – Maximum induced speed.
foraging_speed (Optional[float]) – Foraging speed.
diffusion_speed (Optional[float]) – Maximum diffusion speed.
c_t (Optional[float]) – Constant \(\in [0, 2]\).
w_neighbor (Optional[Union[int, float, numpy.ndarray]]) – Inertia weights of the motion induced from neighbors \(\in [0, 1]\).
w_foraging (Optional[Union[int, float, numpy.ndarray]]) – Inertia weights of the motion induced from foraging \(\in [0, 1]\).
d_s (Optional[float]) – Maximum euclidean distance for neighbors.
max_neighbors (Optional[int]) – Maximum neighbors for neighbors effect.
crossover_rate (Optional[float]) – Crossover probability.
mutation_rate (Optional[float]) – Mutation probability.
See also
niapy.algorithms.algorithm.Algorithm.__init__()
- crossover(x, xo, crossover_rate)[source]¶
Crossover operator.
- Parameters
x (numpy.ndarray) – Krill/individual being applied with operator.
xo (numpy.ndarray) – Krill/individual being used in conjunction within operator.
crossover_rate (float) – Crossover probability.
- Returns
New krill/individual.
- Return type
numpy.ndarray
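The operator above combines two krill component-wise. A generic sketch of one common formulation, not necessarily the exact operator shipped in KrillHerd.crossover:
import numpy as np

def crossover_sketch(x, xo, crossover_rate, rng=None):
    # Take each component from xo with probability crossover_rate,
    # otherwise keep the component of x (generic binomial-style crossover).
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) < crossover_rate
    return np.where(mask, xo, x)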
- delta_t(task)[source]¶
Get new delta for all dimensions.
- Parameters
task (Task) – Optimization task.
- Returns
–
- Return type
numpy.ndarray
- get_parameters()[source]¶
Get parameter values for the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- get_x(x, y)[source]¶
Get x values.
- Parameters
x (numpy.ndarray) – First krill/individual.
y (numpy.ndarray) – Second krill/individual.
- Returns
–
- Return type
numpy.ndarray
- induce_foraging_motion(i, x, x_f, f, weights, population, population_fitness, best_index, worst_index, task)[source]¶
Induced foraging motion operator.
- Parameters
i (int) – Index of current krill being operated.
x (numpy.ndarray) – Position of food.
x_f (float) – Fitness/function values of food.
f –
weights (numpy.ndarray[float]) – Weights for this operator.
population (numpy.ndarray) – Current population/herd.
population_fitness (numpy.ndarray[float]) – Current herd/populations function/fitness values.
best_index (numpy.ndarray) – Index of current best krill in herd.
worst_index (numpy.ndarray) – Index of current worst krill in herd.
task (Task) – Optimization task.
- Returns
Moved krill.
- Return type
numpy.ndarray
- induce_neighbors_motion(i, n, weights, population, population_fitness, best_index, worst_index, task)[source]¶
Induced neighbours motion operator.
- Parameters
i (int) – Index of individual being applied with operator.
n –
weights (numpy.ndarray[float]) – Weights for this operator.
population (numpy.ndarray) – Current herd/population.
population_fitness (numpy.ndarray[float]) – Current populations/herd function/fitness values.
best_index (numpy.ndarray) – Current best krill in herd/population.
worst_index (numpy.ndarray) – Current worst krill in herd/population.
task (Task) – Optimization task.
- Returns
Moved krill.
- Return type
numpy.ndarray
- induce_physical_diffusion(task)[source]¶
Induced physical diffusion operator.
- Parameters
task (Task) – Optimization task.
- Return type
numpy.ndarray
- static info()[source]¶
Get basic information of algorithm.
- Returns
Basic information of algorithm.
- Return type
See also
- init_population(task)[source]¶
Initialize starting population.
- Parameters
task (Task) – Optimization task.
- Returns
Initialized population.
Initialized populations function/fitness values.
- Additional arguments:
w_neighbor (numpy.ndarray): Weights neighborhood.
w_foraging (numpy.ndarray): Weights foraging.
induced_speed (numpy.ndarray): Induced speed.
foraging_speed (numpy.ndarray): Foraging speed.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, Dict[str, Any]]
See also
niapy.algorithms.algorithm.Algorithm.init_population()
- init_weights(task)[source]¶
Initialize weights.
- Parameters
task (Task) – Optimization task.
- Returns
Weights for neighborhood.
Weights for foraging.
- Return type
Tuple[numpy.ndarray, numpy.ndarray]
- mutate(x, x_b, mutation_rate)[source]¶
Mutate operator.
- Parameters
x (numpy.ndarray) – Individual being mutated.
x_b (numpy.ndarray) – Global best individual.
mutation_rate (float) – Probability of mutations.
- Returns
Mutated krill.
- Return type
numpy.ndarray
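A hedged sketch of a mutation of this shape: with probability mutation_rate a component is replaced by the corresponding component of the global best plus a small random perturbation. The perturbation width is a hypothetical choice, and the library's exact operator may differ:
import numpy as np

def mutate_sketch(x, x_b, mutation_rate, rng=None):
    # Hypothetical perturbation width of 0.5; NOT taken from the NiaPy source.
    rng = np.random.default_rng() if rng is None else rng
    mask = rng.random(x.shape) < mutation_rate
    return np.where(mask, x_b + rng.uniform(-0.5, 0.5, size=x.shape), x)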
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of KrillHerd algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current herd/population.
population_fitness (numpy.ndarray[float]) – Current herd/populations function/fitness values.
best_x (numpy.ndarray) – Global best individual.
best_fitness (float) – Global best individuals fitness/function value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
New herd/population
New herd/populations function/fitness values.
New global best solution.
New global best solutions fitness/objective value.
- Additional arguments:
w_neighbor (numpy.ndarray): –
w_foraging (numpy.ndarray): –
induced_speed (numpy.ndarray): –
foraging_speed (numpy.ndarray): –
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- set_parameters(population_size=50, n_max=0.01, foraging_speed=0.02, diffusion_speed=0.002, c_t=0.93, w_neighbor=0.42, w_foraging=0.38, d_s=2.63, max_neighbors=5, crossover_rate=0.2, mutation_rate=0.05, **kwargs)[source]¶
Set the arguments of an algorithm.
- Parameters
population_size (Optional[int]) – Number of krill herds in population.
n_max (Optional[float]) – Maximum induced speed.
foraging_speed (Optional[float]) – Foraging speed.
diffusion_speed (Optional[float]) – Maximum diffusion speed.
c_t (Optional[float]) – Constant \(\in [0, 2]\).
w_neighbor (Optional[Union[int, float, numpy.ndarray]]) – Inertia weights of the motion induced from neighbors \(\in [0, 1]\).
w_foraging (Optional[Union[int, float, numpy.ndarray]]) – Inertia weights of the motion induced from foraging \(\in [0, 1]\).
d_s (Optional[float]) – Maximum euclidean distance for neighbors.
max_neighbors (Optional[int]) – Maximum neighbors for neighbors effect.
crossover_rate (Optional[float]) – Crossover probability.
mutation_rate (Optional[float]) – Mutation probability.
See also
niapy.algorithms.algorithm.Algorithm.set_parameters()
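A minimal usage sketch of KrillHerd with the documented defaults; the Task setup is an assumption about the installed NiaPy release:
from niapy.task import Task
from niapy.algorithms.basic import KrillHerd

# Assumed setup: built-in 'sphere' benchmark, 10 dimensions, 300 iterations.
task = Task(problem='sphere', dimension=10, max_iters=300)
kh = KrillHerd(population_size=50, n_max=0.01, foraging_speed=0.02, crossover_rate=0.2)
best_x, best_fitness = kh.run(task)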
- class niapy.algorithms.basic.LionOptimizationAlgorithm(population_size=50, nomad_ratio=0.2, num_of_prides=5, female_ratio=0.8, roaming_factor=0.2, mating_factor=0.3, mutation_factor=0.2, immigration_factor=0.4, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of lion optimization algorithm.
- Algorithm:
Lion Optimization algorithm
- Date:
2021
- Authors:
Aljoša Mesarec
- License:
MIT
- Reference URL:
- Reference paper:
Yazdani, Maziar, Jolai, Fariborz. Lion Optimization Algorithm (LOA): A nature-inspired metaheuristic algorithm. Journal of Computational Design and Engineering, Volume 3, Issue 1, Pages 24-36. 2016.
- Variables
num_of_prides – Number of prides \(\in [1, \infty)\).
female_ratio – Ratio of female lions in prides \(\in [0, 1]\).
roaming_factor – Roaming factor \(\in [0, 1]\).
mating_factor – Mating factor \(\in [0, 1]\).
mutation_factor – Mutation factor \(\in [0, 1]\).
immigration_factor – Immigration factor \(\in [0, 1]\).
See also
Initialize LionOptimizationAlgorithm.
- Parameters
num_of_prides – Number of prides \(\in [1, \infty)\).
female_ratio – Ratio of female lions in prides \(\in [0, 1]\).
roaming_factor – Roaming factor \(\in [0, 1]\).
mating_factor – Mating factor \(\in [0, 1]\).
mutation_factor – Mutation factor \(\in [0, 1]\).
immigration_factor – Immigration factor \(\in [0, 1]\).
- Name = ['LionOptimizationAlgorithm', 'LOA']¶
- __init__(population_size=50, nomad_ratio=0.2, num_of_prides=5, female_ratio=0.8, roaming_factor=0.2, mating_factor=0.3, mutation_factor=0.2, immigration_factor=0.4, *args, **kwargs)[source]¶
Initialize LionOptimizationAlgorithm.
- Parameters
num_of_prides – Number of prides \(\in [1, \infty)\).
female_ratio – Ratio of female lions in prides \(\in [0, 1]\).
roaming_factor – Roaming factor \(\in [0, 1]\).
mating_factor – Mating factor \(\in [0, 1]\).
mutation_factor – Mutation factor \(\in [0, 1]\).
immigration_factor – Immigration factor \(\in [0, 1]\).
- data_correction(population, pride_size, task)[source]¶
Update a lion's data if its position has improved since the last iteration.
- defense(population, pride_size, gender_distribution, excess_lion_gender_quantities, task)[source]¶
Male lions attack other lions in pride.
- Parameters
population (numpy.ndarray[Lion]) – Lion population.
pride_size (numpy.ndarray[int]) – Pride and nomad sizes.
gender_distribution (numpy.ndarray[int]) – Pride and nomad gender distribution.
excess_lion_gender_quantities (numpy.ndarray[int]) – Pride and nomad excess members.
task (Task) – Optimization task.
- Returns
Lion population that finished with defending.
Pride and nomad excess gender quantities.
- Return type
Tuple[numpy.ndarray[Lion], numpy.ndarray[int]]
- get_parameters()[source]¶
Get parameters of the algorithm.
- Returns
Algorithm Parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get information about algorithm.
- Returns
Algorithm information
- Return type
See also
- init_population(task)[source]¶
Initialize starting population.
- Parameters
task (Task) – Optimization task.
- Returns
Initialized population of lions.
Initialized populations function/fitness values.
- Additional arguments:
pride_size (numpy.ndarray): Pride and nomad sizes.
gender_distribution (numpy.ndarray): Pride and nomad gender distributions.
- Return type
Tuple[numpy.ndarray[Lion], numpy.ndarray[float], Dict[str, Any]]
- init_population_data(pop, d)[source]¶
Initialize data of starting population.
- Parameters
pop (numpy.ndarray[Lion]) – Starting lion population.
d (Dict[str, Any]) – Additional arguments
- Returns
Initialized population of lions.
- Additional arguments:
pride_size (numpy.ndarray): Pride and nomad sizes.
gender_distribution (numpy.ndarray): Pride and nomad gender distributions.
- Return type
Tuple[numpy.ndarray[Lion], Dict[str, Any]]
- mating(population, pride_size, gender_distribution, task)[source]¶
Female lions mate with male lions to produce offspring.
- Parameters
population (numpy.ndarray[Lion]) – Lion population.
pride_size (numpy.ndarray[int]) – Pride and nomad sizes.
gender_distribution (numpy.ndarray[int]) – Pride and nomad gender distribution.
task (Task) – Optimization task.
- Returns
Lion population that finished with mating.
Pride and nomad excess gender quantities.
- Return type
Tuple[numpy.ndarray[Lion], numpy.ndarray[int]]
- migration(population, pride_size, gender_distribution, excess_lion_gender_quantities, task)[source]¶
Female lions randomly become nomad.
- Parameters
population (numpy.ndarray[Lion]) – Lion population.
pride_size (numpy.ndarray[int]) – Pride and nomad sizes.
gender_distribution (numpy.ndarray[int]) – Pride and nomad gender distribution.
excess_lion_gender_quantities (numpy.ndarray[int]) – Pride and nomad excess members.
task (Task) – Optimization task.
- Returns
Lion population that finished with migration.
Pride and nomad excess gender quantities.
- Return type
Tuple[numpy.ndarray[Lion], numpy.ndarray[int]]
- move_to_safe_place(population, pride_size, task)[source]¶
Female pride lions move towards position with good fitness.
- population_equilibrium(population, pride_size, gender_distribution, excess_lion_gender_quantities, task)[source]¶
Remove extra nomad lions.
- Parameters
population (numpy.ndarray[Lion]) – Lion population.
pride_size (numpy.ndarray[int]) – Pride and nomad sizes.
gender_distribution (numpy.ndarray[int]) – Pride and nomad gender distribution.
excess_lion_gender_quantities (numpy.ndarray[int]) – Pride and nomad excess members.
task (Task) – Optimization task.
- Returns
Lion population with removed extra nomads.
- Return type
final_population (numpy.ndarray[Lion])
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core functionality of algorithm.
This function is called on every algorithm iteration.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population coordinates.
population_fitness (numpy.ndarray) – Current population fitness value.
best_x (numpy.ndarray) – Current generation best individuals coordinates.
best_fitness (float) – current generation best individuals fitness value.
**params (Dict[str, Any]) – Additional arguments for algorithms.
- Returns
New populations coordinates.
New populations fitness values.
New global best position/solution
New global best fitness/objective value
Additional arguments of the algorithm.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
- set_parameters(population_size=50, nomad_ratio=0.2, num_of_prides=5, female_ratio=0.8, roaming_factor=0.2, mating_factor=0.3, mutation_factor=0.2, immigration_factor=0.4, **kwargs)[source]¶
Set the arguments of an algorithm.
- Parameters
num_of_prides – Number of prides \(\in [1, \infty)\).
female_ratio – Ratio of female lions in prides \(\in [0, 1]\).
roaming_factor – Roaming factor \(\in [0, 1]\).
mating_factor – Mating factor \(\in [0, 1]\).
mutation_factor – Mutation factor \(\in [0, 1]\).
immigration_factor – Immigration factor \(\in [0, 1]\).
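A minimal usage sketch of LionOptimizationAlgorithm; the 'griewank' problem name and Task arguments are assumptions about the installed NiaPy release:
from niapy.task import Task
from niapy.algorithms.basic import LionOptimizationAlgorithm

# Assumed setup: built-in 'griewank' benchmark, 10 dimensions, 200 iterations.
task = Task(problem='griewank', dimension=10, max_iters=200)
loa = LionOptimizationAlgorithm(population_size=50, nomad_ratio=0.2, num_of_prides=5)
best_x, best_fitness = loa.run(task)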
- class niapy.algorithms.basic.MonarchButterflyOptimization(population_size=20, partition=0.4166666666666667, period=1.2, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of Monarch Butterfly Optimization.
- Algorithm:
Monarch Butterfly Optimization
- Date:
2019
- Authors:
Jan Banko
- License:
MIT
- Reference paper:
Wang, G. G., Deb, S., & Cui, Z. (2019). Monarch butterfly optimization. Neural computing and applications, 31(7), 1995-2014.
- Variables
See also
Initialize MonarchButterflyOptimization.
- Parameters
- Name = ['MonarchButterflyOptimization', 'MBO']¶
- __init__(population_size=20, partition=0.4166666666666667, period=1.2, *args, **kwargs)[source]¶
Initialize MonarchButterflyOptimization.
- adjusting_operator(t, max_t, dimension, np1, np2, butterflies, best)[source]¶
Apply the adjusting operator.
- Parameters
t (int) – Current generation.
max_t (int) – Maximum generation.
dimension (int) – Number of dimensions.
np1 (int) – Number of butterflies in Land 1.
np2 (int) – Number of butterflies in Land 2.
butterflies (numpy.ndarray) – Current butterfly population.
best (numpy.ndarray) – The best butterfly currently.
- Returns
Adjusted butterfly population.
- Return type
numpy.ndarray
- static evaluate_and_sort(task, butterflies)[source]¶
Evaluate and sort the butterfly population.
- Parameters
task (Task) – Optimization task
butterflies (numpy.ndarray) – Current butterfly population.
- Returns
Best butterfly according to the evaluation.
The best fitness value.
Butterfly population.
- Return type
Tuple[numpy.ndarray, float, numpy.ndarray]
- get_parameters()[source]¶
Get parameters values for the algorithm.
- Returns
Algorithm parameters.
- Return type
Dict[str, Any]
- static info()[source]¶
Get information of the algorithm.
- Returns
Algorithm information.
- Return type
See also
niapy.algorithms.algorithm.Algorithm.info()
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of Monarch Butterfly Optimization algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray) – Current population.
population_fitness (numpy.ndarray[float]) – Current population function/fitness values.
best_x (numpy.ndarray) – Global best individual.
best_fitness (float) – Global best individual fitness/function value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
New population.
New population fitness/function values.
New global best solution.
New global best solutions fitness/objective value.
- Additional arguments:
current_best (numpy.ndarray): Current generation’s best individual.
- Return type
Tuple[numpy.ndarray, numpy.ndarray, numpy.ndarray, float, Dict[str, Any]]
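A minimal usage sketch of MonarchButterflyOptimization with the documented defaults (partition = 5/12, period = 1.2); the Task setup is an assumption about the installed NiaPy release:
from niapy.task import Task
from niapy.algorithms.basic import MonarchButterflyOptimization

# Assumed setup: built-in 'sphere' benchmark, 10 dimensions, 400 iterations.
task = Task(problem='sphere', dimension=10, max_iters=400)
mbo = MonarchButterflyOptimization(population_size=20, partition=5.0 / 12.0, period=1.2)
best_x, best_fitness = mbo.run(task)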
- class niapy.algorithms.basic.MonkeyKingEvolutionV1(population_size=40, fluctuation_coeff=0.7, population_rate=0.3, c=3, fc=0.5, *args, **kwargs)[source]¶
Bases:
Algorithm
Implementation of monkey king evolution algorithm version 1.
- Algorithm:
Monkey King Evolution version 1
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
https://www.sciencedirect.com/science/article/pii/S0950705116000198
- Reference paper:
Zhenyu Meng, Jeng-Shyang Pan, Monkey King Evolution: A new memetic evolutionary algorithm and its application in vehicle fuel consumption optimization, Knowledge-Based Systems, Volume 97, 2016, Pages 144-157, ISSN 0950-7051, https://doi.org/10.1016/j.knosys.2016.01.009.
- Variables
Name (List[str]) – List of strings representing algorithm names.
fluctuation_coeff (float) – Scale factor for normal particles.
population_rate (float) – Percent value of how many new particles the Monkey King particle creates.
c (int) – Number of new particles generated by Monkey King particle.
fc (float) – Scale factor for Monkey King particles.
See also
Initialize MonkeyKingEvolutionV1.
- Parameters
population_size (int) – Population size.
fluctuation_coeff (float) – Scale factor for normal particle.
population_rate (float) – Percent value of how many new particles the Monkey King particle creates. Value in range [0, 1].
c (int) – Number of new particles generated by Monkey King particle.
fc (float) – Scale factor for Monkey King particles.
See also
niapy.algorithms.algorithm.Algorithm.__init__()
- Name = ['MonkeyKingEvolutionV1', 'MKEv1']¶
- __init__(population_size=40, fluctuation_coeff=0.7, population_rate=0.3, c=3, fc=0.5, *args, **kwargs)[source]¶
Initialize MonkeyKingEvolutionV1.
- Parameters
population_size (int) – Population size.
fluctuation_coeff (float) – Scale factor for normal particle.
population_rate (float) – Percent value of how many new particles the Monkey King particle creates. Value in range [0, 1].
c (int) – Number of new particles generated by Monkey King particle.
fc (float) – Scale factor for Monkey King particles.
See also
niapy.algorithms.algorithm.Algorithm.__init__()
- static info()[source]¶
Get basic information of algorithm.
- Returns
Basic information.
- Return type
See also
- move_mk(x, task)[source]¶
Move Monkey King particle.
For moving Monkey King particles the algorithm uses the formula \(\mathbf{x} + \mathit{fc} \odot \mathbf{R} \odot \mathbf{x}\), where \(\mathbf{R}\) is a two-dimensional array of shape \((c \cdot D, D)\) whose components are in the range [0, 1].
- Parameters
x (numpy.ndarray) – Monkey King particle position.
task (Task) – Optimization task.
- Returns
New particles generated by Monkey King particle.
- Return type
numpy.ndarray
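A NumPy sketch of the Monkey King move described above; the shipped method additionally repairs the generated positions against the task bounds, which is omitted here:
import numpy as np

def move_mk_sketch(x, fc, c, rng=None):
    # x: (D,) Monkey King particle position; returns (c * D, D) candidate particles.
    rng = np.random.default_rng() if rng is None else rng
    d = x.shape[0]
    r = rng.random((c * d, d))   # components in [0, 1]
    return x + fc * r * x        # broadcast of the documented formula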
- move_monkey_king_particle(p, task)[source]¶
Move Monkey King Particles.
- Parameters
p (MkeSolution) – Monkey King particle to apply this function on.
task (Task) – Optimization task.
- move_p(x, x_pb, x_b, task)[source]¶
Move normal particle in search space.
For moving normal particles the algorithm uses the formula \(\mathbf{x}_{pb} - \mathit{fluctuation\_coeff} \odot \mathbf{r} \odot (\mathbf{x}_b - \mathbf{x})\), where \(\mathbf{r}\) is a one-dimensional array with D components, each in the range [0, 1].
- Parameters
x (numpy.ndarray) – Particle position.
x_pb (numpy.ndarray) – Particle best position.
x_b (numpy.ndarray) – Best particle position.
task (Task) – Optimization task.
- Returns
Particle new position.
- Return type
numpy.ndarray
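A NumPy sketch following the formula above (sign and factors as documented); the shipped method may additionally repair the result against the task bounds:
import numpy as np

def move_p_sketch(x, x_pb, x_b, fluctuation_coeff, rng=None):
    # x, x_pb, x_b: (D,) arrays; r has D components in [0, 1].
    rng = np.random.default_rng() if rng is None else rng
    r = rng.random(x.shape)
    return x_pb - fluctuation_coeff * r * (x_b - x)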
- move_particle(p, p_b, task)[source]¶
Move particles.
- Parameters
p (MkeSolution) – Monkey particle.
p_b (numpy.ndarray) – Population best particle.
task (Task) – Optimization task.
- move_population(pop, xb, task)[source]¶
Move population.
- Parameters
pop (numpy.ndarray[MkeSolution]) – Current population.
xb (numpy.ndarray) – Current best solution.
task (Task) – Optimization task.
- Returns
New particles.
- Return type
numpy.ndarray[MkeSolution]
- run_iteration(task, population, population_fitness, best_x, best_fitness, **params)[source]¶
Core function of Monkey King Evolution v1 algorithm.
- Parameters
task (Task) – Optimization task.
population (numpy.ndarray[MkeSolution]) – Current population.
population_fitness (numpy.ndarray[float]) – Current population fitness/function values.
best_x (numpy.ndarray) – Current best solution.
best_fitness (float) – Current best solutions function/fitness value.
**params (Dict[str, Any]) – Additional arguments.
- Returns
Initialized solutions.
Fitness/function values of solution.
Additional arguments.
- Return type
Tuple[numpy.ndarray[MkeSolution], numpy.ndarray[float], Dict[str, Any]]
- set_parameters(population_size=40, fluctuation_coeff=0.7, population_rate=0.3, c=3, fc=0.5, **kwargs)[source]¶
Set Monkey King Evolution v1 algorithms static parameters.
- Parameters
population_size (int) – Population size.
fluctuation_coeff (float) – Scale factor for normal particle.
population_rate (float) – Percent value of how many new particles the Monkey King particle creates. Value in range [0, 1].
c (int) – Number of new particles generated by Monkey King particle.
fc (float) – Scale factor for Monkey King particles.
See also
niapy.algorithms.algorithm.Algorithm.set_parameters()
- class niapy.algorithms.basic.MonkeyKingEvolutionV2(population_size=40, fluctuation_coeff=0.7, population_rate=0.3, c=3, fc=0.5, *args, **kwargs)[source]¶
Bases:
MonkeyKingEvolutionV1
Implementation of monkey king evolution algorithm version 2.
- Algorithm:
Monkey King Evolution version 2
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
https://www.sciencedirect.com/science/article/pii/S0950705116000198
- Reference paper:
Zhenyu Meng, Jeng-Shyang Pan, Monkey King Evolution: A new memetic evolutionary algorithm and its application in vehicle fuel consumption optimization, Knowledge-Based Systems, Volume 97, 2016, Pages 144-157, ISSN 0950-7051, https://doi.org/10.1016/j.knosys.2016.01.009.
- Variables
Name (List[str]) – List of strings representing algorithm names.
Initialize MonkeyKingEvolutionV1.
- Parameters
population_size (int) – Population size.
fluctuation_coeff (float) – Scale factor for normal particle.
population_rate (float) – Percent value of how many new particles the Monkey King particle creates. Value in range [0, 1].
c (int) – Number of new particles generated by Monkey King particle.
fc (float) – Scale factor for Monkey King particles.
See also
niapy.algorithms.algorithm.Algorithm.__init__()
- Name = ['MonkeyKingEvolutionV2', 'MKEv2']¶
- static info()[source]¶
Get basic information of algorithm.
- Returns
Basic information.
- Return type
See also
- move_mk(x, task, dx=None)[source]¶
Move Monkey King particle.
For moving particles the algorithm uses the formula \(\mathbf{x} - \mathit{fc} \odot \mathbf{dx}\).
- Parameters
x (numpy.ndarray) – Particle to apply movement on.
task (Task) – Optimization task.
dx (numpy.ndarray) – Difference between two random particles in the population.
- Returns
Moved particles.
- Return type
numpy.ndarray
- class niapy.algorithms.basic.MonkeyKingEvolutionV3(*args, **kwargs)[source]¶
Bases:
MonkeyKingEvolutionV1
Implementation of monkey king evolution algorithm version 3.
- Algorithm:
Monkey King Evolution version 3
- Date:
2018
- Authors:
Klemen Berkovič
- License:
MIT
- Reference URL:
https://www.sciencedirect.com/science/article/pii/S0950705116000198
- Reference paper:
Zhenyu Meng, Jeng-Shyang Pan, Monkey King Evolution: A new memetic evolutionary algorithm and its application in vehicle fuel consumption optimization, Knowledge-Based Systems, Volume 97, 2016, Pages 144-157, ISSN 0950-7051, https://doi.org/10.1016/j.knosys.2016.01.009.
- Variables
Name (List[str]) – List of strings that represent algorithm names.
Initialize MonkeyKingEvolutionV3.
- Name = ['MonkeyKingEvolutionV3', 'MKEv3']¶