The experimental setup follows the requirements specified by the benchmarks used.
The number of independent runs was set to 30 for all benchmarks.
All implementations start from a specified, identical seed. This ensures the repeatability of results (until someone changes the random number generator implementation in the system library). Unfortunately, this does not ensure identical starting points for different implementations, as different programming languages can use various random number generator implementations.
Therefore, all algorithms here use the same pregenerated starting points, which reduces the influence of the starting points on the results. Of course, when an algorithm uses more initial points than another, the additional points introduce extra variability at the start.
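One way to obtain such shared starting points is sketched below (a sketch, not the generator actually used here; the function name, seed, and defaults are assumptions, while the run count and bounds follow the setup described in this document):

```python
import random

def generate_starting_points(dim, n_runs=30, lower=-100.0, upper=100.0, seed=42):
    """Generate one starting point per independent run, uniformly in the box.

    A fixed seed makes the points reproducible, so every implementation
    can load the same list instead of relying on its own RNG.
    (Hypothetical helper; names and defaults are illustrative.)
    """
    rng = random.Random(seed)  # local RNG, independent of global state
    return [[rng.uniform(lower, upper) for _ in range(dim)]
            for _ in range(n_runs)]
```

The generated list can be written to a file once and read back by each implementation, which sidesteps the cross-language RNG differences mentioned above.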
As I believe that “a difference is a difference only if it makes a difference” (Darrell Huff, How to Lie with Statistics),
the error values are rounded to 5 significant digits, and the FES (the numbers of objective function evaluations) are rounded to tenths.
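Rounding to a fixed number of significant digits can be done, for example, as follows (a sketch; `round_sig` is a hypothetical helper, not taken from the benchmark code):

```python
import math

def round_sig(x, sig=5):
    """Round x to `sig` significant digits (hypothetical helper)."""
    if x == 0:
        return 0.0
    # shift the rounding position according to the magnitude of x
    return round(x, sig - 1 - int(math.floor(math.log10(abs(x)))))
```

For example, 123456.789 becomes 123460.0 and 0.000123456 becomes 0.00012346.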
Benchmarks:
- BBOB - the Black-Box Optimization Benchmarking suite is a part of the COCO (COmparing Continuous Optimizers) platform. BBOB consists of 24 noiseless, single-objective functions available in 2, 3, 5, 10, 20, and 40 dimensions. Each function has 15 instances. BBOB assumes unconstrained search but defines a region of interest of [-5,5]^D. Here, these functions are used like the CEC functions, i.e., 30 independent runs on the first instance of each function are performed. The budget is set to 10000*D (where D is the problem dimensionality). The search is stopped when the error reaches 1e-8. The search is bound constrained to [-100,100]^D (as most of the compared algorithms were created for the CEC competitions, their setup assumes that range). The experiments were performed for the following dimensionalities: 10, 20, and 40.
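The per-dimension setup above can be summarized as follows (a sketch with a hypothetical function name; the values mirror the description in the text):

```python
def bbob_run_config(dim):
    """Experiment parameters for one BBOB function treated like a CEC function."""
    return {
        "runs": 30,                 # independent runs on the first instance
        "budget": 10_000 * dim,     # maximum number of objective function evaluations
        "target_error": 1e-8,       # stop when the error reaches this level
        "bounds": (-100.0, 100.0),  # bound constraints assumed by the CEC-style setups
    }
```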
- CEC2017 - the benchmark was used during the competition on single objective bound constrained numerical optimization organized during the IEEE congress on evolutionary computation 2017.
Its description is available here
The source code was downloaded from.
Remark: the competition organizers removed F2 from the competition.
In the version used here, the benchmark functions return "INF" for queries outside the box constraints.
- CEC2022 - the benchmark was used during the competition on single objective bound constrained numerical optimization organized during the IEEE congress on evolutionary computation 2022. Its description is available here. The source code was downloaded from.
The version used here includes the corrections of the reported issues, which changed the definitions of F5 and F9.
In the version used here, the benchmark functions return "INF" for queries outside the box constraints (one of the CEC 2022 contestants queried such points, and the default implementation answered them, giving the algorithm additional knowledge).
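The "INF outside the box" behaviour can be mimicked with a simple wrapper around any objective (a sketch; `boxed` is a hypothetical helper, not the actual benchmark code):

```python
import math

def boxed(f, lower=-100.0, upper=100.0):
    """Wrap objective f so that queries outside the box return +inf,
    mimicking the modified benchmark behaviour described above."""
    def wrapped(x):
        if any(not (lower <= xi <= upper) for xi in x):
            return math.inf  # no information leaks about points outside the box
        return f(x)
    return wrapped
```

With such a wrapper, an algorithm gains no extra knowledge from infeasible queries.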
Algorithms:
- NL-SHADE-LBC took second place in the CEC 2022 competition on single objective bound constrained numerical optimization. The implementation used here was downloaded from the competition organizers' repository.
The results of the tuned variant are discussed in the paper: Revisiting CEC 2022 ranking: A new ranking method and influence of parameter tuning.
- NL-SHADE-RSP-MID took third place in the CEC 2022 competition on single objective bound constrained numerical optimization. The implementation used here was downloaded from the competition organizers' repository.
A newer implementation (C++) is available on the author's homepage.
The results of the tuned variant are discussed in the paper: Revisiting CEC 2022 ranking: A new ranking method and influence of parameter tuning.
- S-LSHADE-DP took fourth place in the CEC 2022 competition on single objective bound constrained numerical optimization. The implementation used here was downloaded from the competition organizers' repository.
The results of the tuned variant are discussed in the paper: Revisiting CEC 2022 ranking: A new ranking method and influence of parameter tuning.
- L-SRTDE won the CEC 2024 competition on single objective bound constrained numerical optimization. The implementation used here was downloaded from the competition organizers' repository.
- aBIPOP_CMA-ES (active BIPOP-CMA-ES) as implemented in libcmaes.
The initial step size (σ) was set to 0.3(u-l), where l = -100 and u = 100 are the box bounds, i.e., σ = 0.3·200 = 60.
According to, as of July 2014, it is the leader on the noise-free version of the BBOB2009 benchmark.
- CMA-ES as implemented in libcmaes.
The initial step size (σ) was set to 0.3(u-l), where l = -100 and u = 100 are the box bounds, i.e., σ = 0.3·200 = 60.
- EA4EigSimpTowardsIDE - the algorithm is described in the paper: Analysis and simplification of the winner of the CEC 2022 optimization competition on single objective bound constrained search.
The source code.
It is the strongest simplification of EA4Eig, the winner of the CEC 2022 competition on single objective bound constrained search.
The code of EA4Eig is in Matlab (about 716 lines) and can be downloaded from the CEC 2022 competition organizers' repository. The simplification uses only 244 lines of C++ code. Logically, it uses only a part of one modified component of the original. The resulting algorithm is more similar to IDE than to EA4Eig.
- EA4EigSimpTowardsIDE_jSO - the algorithm is described in the paper: Analysis and simplification of the winner of the CEC 2022 optimization competition on single objective bound constrained search.
The source code.
It is a simplification of EA4Eig, the winner of the CEC 2022 competition on single objective bound constrained search.
The code of EA4Eig is in Matlab and can be downloaded from the CEC 2022 competition organizers' repository. The simplification (in C++) uses only two of the four components: jSO and IDE.
- L-BFGS-B as implemented in dlib.
The algorithm is restarted with different starting points until the entire budget is used up. When possible, the starting points are the same for all algorithms under comparison.
Algorithm configuration: objective_delta_stop=1e-10 (larger values stopped the algorithm too early); max_size=10; derivative_eps=1e-6.
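The restart scheme can be sketched as follows (`local_search` is a stand-in for a single dlib L-BFGS-B run and is assumed, for illustration, to return the best value found and the number of evaluations it used; the helper and its signature are hypothetical):

```python
import math

def restarted_search(local_search, starting_points, budget):
    """Restart a local search from pregenerated starting points
    until the evaluation budget is exhausted (sketch)."""
    best = math.inf
    used = 0
    for x0 in starting_points:
        if used >= budget:
            break  # budget exhausted, no further restarts
        f, evals = local_search(x0, budget - used)
        used += evals
        best = min(best, f)
    return best, used
```

Using the same pregenerated starting points for every algorithm keeps the restarts comparable across implementations.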
- EA4Eig - the algorithm won the CEC 2022 competition on single objective bound constrained search.
The code of EA4Eig was downloaded from the CEC 2022 competition organizers' repository.
The code was corrected because, in the original, one decision path does not enforce the bound constraints. The modification is described in the paper: Analysis and simplification of the winner of the CEC 2022 optimization competition on single objective bound constrained search.
- Nelder-Mead Simplex as implemented in nlopt.
The algorithm is restarted with different starting points until the entire budget is used up. When possible, the starting points are the same for all algorithms under comparison.
Algorithm configuration: stopval=1e-8; xtol_rel=-1e-8; xtol_abs=-1e-8; ftol_abs=1e-9; ftol_rel=1e-7 (in nlopt, non-positive tolerance values disable the corresponding stopping criterion, so the negative xtol values switch the x-based criteria off).
- RB-IPOP-CMA-ES is a modification of IPOP-CMA-ES. RB-IPOP-CMA-ES took 7th place in the CEC 2017 competition on single objective bound constrained search. Author's homepage.