The experiment setup follows the requirements specified by the benchmarks used.
The number of independent runs was set to 30 for all benchmarks.
All implementations start from a specified, identical seed, which makes the results repeatable (at least until someone changes the random number generator implementation in the system library). Unfortunately, this does not guarantee identical starting points across implementations, because different programming languages may use different random number generator implementations.
Therefore, all algorithms here use the same previously generated starting points, which reduces the influence of the starting points on the results; a sketch of how such a shared pool can be generated is given below. Of course, when an algorithm uses more points than another, those additional points introduce extra variability at the start.
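A minimal sketch of how such a shared pool of starting points could be generated and stored in a language-neutral file (the dimension, population size, bounds, seed, and file name below are illustrative placeholders, not the exact values used in these experiments):

```python
import numpy as np

def generate_starting_points(dim, pop_size, runs=30, low=-100.0, high=100.0,
                             seed=42, path="starting_points.csv"):
    """Draw one shared pool of starting points and store it as plain text,
    so implementations in any language can load exactly the same points."""
    rng = np.random.default_rng(seed)
    # One block of `pop_size` points per independent run, sampled uniformly
    # inside the box constraints [low, high]^dim.
    points = rng.uniform(low, high, size=(runs * pop_size, dim))
    np.savetxt(path, points, delimiter=",")
    return points

if __name__ == "__main__":
    # dim, pop_size, the seed and the file name are placeholders,
    # not the values used in the experiments reported here.
    generate_starting_points(dim=10, pop_size=100)
```

Storing the points as plain text lets every implementation, regardless of language, read exactly the same coordinates.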
Since I believe that “a difference is a difference only if it makes a difference” (Darrell Huff, How to Lie with Statistics),
the error values are rounded to 5 significant digits, and the FES (objective function evaluations) are rounded to tenths.
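For illustration, rounding to a fixed number of significant digits can be done as in the sketch below (an assumed helper, not necessarily the exact routine used to produce the reported tables):

```python
import math

def round_significant(value, digits=5):
    """Round `value` to `digits` significant digits (5 for the error values here)."""
    if value == 0 or not math.isfinite(value):
        return value
    exponent = math.floor(math.log10(abs(value)))
    factor = 10.0 ** (digits - 1 - exponent)
    return round(value * factor) / factor

# Two errors that differ only past the 5th significant digit
# are treated as equal after rounding.
assert round_significant(1.2345678e-3) == round_significant(1.2345699e-3)
```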
Benchmarks:
- CEC2017 - this benchmark was used in the competition on single objective bound constrained numerical optimization organized during the IEEE Congress on Evolutionary Computation 2017. Its description is available here. The source code was downloaded from.
Remark: the competition organizers removed F2 from the competition.
In the version used here, the benchmark functions return "INF" for queries outside the box constraints.
- CEC2022 - this benchmark was used in the competition on single objective bound constrained numerical optimization organized during the IEEE Congress on Evolutionary Computation 2022. Its description is available here. The source code was downloaded from.
The version used here corrects the reported issues, which changed the definitions of F5 and F9.
In the version used here, the benchmark functions return "INF" for queries outside the box constraints (one of the CEC 2022 contestants queried such points, and the default implementation answered them, giving that algorithm additional knowledge); a sketch of this guard follows the list.
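A minimal sketch of such an out-of-bounds guard, assuming the standard CEC box constraints of [-100, 100] per dimension (the `evaluate` callable and the bounds are placeholders; the actual benchmark source implements this check internally):

```python
import numpy as np

def guarded_evaluate(evaluate, x, low=-100.0, high=100.0):
    """Return infinity for any query outside the box constraints,
    otherwise delegate to the underlying benchmark function."""
    x = np.asarray(x, dtype=float)
    if np.any(x < low) or np.any(x > high):
        return float("inf")
    return evaluate(x)

# Example with a placeholder objective (the sphere function):
# guarded_evaluate(lambda v: float(np.dot(v, v)), [150.0, 0.0]) -> inf
```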
Algorithms:
- NL-SHADE-LBC took second place in the CEC 2022 competition on single objective bound constrained numerical optimization. The implementation used here was downloaded from the competition organizers' repository.
The results of the tuned variant are discussed in the paper: Revisiting CEC 2022 ranking: A new ranking method and influence of parameter tuning.
- NL-SHADE-RSP-MID took third place in the CEC 2022 competition on single objective bound constrained numerical optimization. The implementation used here was downloaded from the competition organizers' repository.
The results of the tuned variant are discussed in the paper: Revisiting CEC 2022 ranking: A new ranking method and influence of parameter tuning.
- S-LSHADE-DP took fourth place in the CEC 2022 competition on single objective bound constrained numerical optimization. The implementation used here was downloaded from the competition organizers' repository.
The results of the tuned variant are discussed in the paper: Revisiting CEC 2022 ranking: A new ranking method and influence of parameter tuning.