This page presents the results and rankings of selected evolutionary algorithms. The results were computed on benchmarks used in single-objective bound-constrained numerical optimization competitions.

Competitions stimulate the development of optimization algorithms. Unfortunately, the winners of one competition are usually not entered in the others. Beyond the competitions, there is a plethora of papers that use the benchmarks defined by competition organizers and show that the authors' algorithm is the best. Of course, it is not possible for all of these algorithms to be "the best".

This page is intended to help in finding interesting algorithms. The filtering makes it possible to examine how they behave under different conditions, e.g., on a subset of benchmarks or functions. There are two types of rankings: 1) the one used in CEC 2022; 2) the percentage ranking proposed in the paper Revisiting CEC 2022 ranking: A new ranking method and influence of parameter tuning. The latter is based on a weighted sum of the percentage of trials that found the global optimum, the percentage of thresholds achieved across all trials, and the percentage of the evaluation budget left.
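To make the percentage ranking concrete, here is a minimal sketch of how such a weighted-sum score could be computed. The actual weights are defined in the cited paper; equal weights are used below purely as an illustrative assumption.

```python
# Hypothetical sketch of the percentage ranking score.
# The real weights come from the paper "Revisiting CEC 2022 ranking";
# equal weights here are an illustrative assumption, not the paper's values.

def percentage_score(pct_optimum_found, pct_thresholds, pct_budget_left,
                     weights=(1.0, 1.0, 1.0)):
    """Weighted sum of the three percentages; a higher score ranks higher."""
    w_opt, w_thr, w_bud = weights
    return (w_opt * pct_optimum_found
            + w_thr * pct_thresholds
            + w_bud * pct_budget_left)

# Example using the aBIPOP_CMA-ES row of the percentage ranking table:
score = percentage_score(32.07, 42.35, 28.56)  # 32.07 + 42.35 + 28.56
```

Algorithms would then be sorted by this score in descending order to obtain the ranking.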

The list of examined algorithms will be extended with the winners of the competitions and other algorithms that I find interesting. For an algorithm to be included here, its source code must be available in one of the following languages: C++, C, Python3, or R. I have found so many errors in existing implementations (some of them are mentioned here and here) that raw results alone do not convince me. Nowadays, there are too many useless metaphor-based algorithms; research based on them is not a promising direction, so such algorithms will not be included here.

More about the benchmarks used, the experimental setup, and the algorithms can be found here.


The percentage ranking:
| Rank | Algorithm | % of optimum found | % of achieved thresholds | % of budget left |
|------|-----------|--------------------|--------------------------|------------------|
| 1 | aBIPOP_CMA-ES | 32.07 | 42.35 | 28.56 |
| 2 | jSO | 29.21 | 42.64 | 17.22 |
| 3 | DES | 28.35 | 40.81 | 20.73 |
| 4 | EA4EigSimpTowardsIDE_jSO | 28.32 | 41.26 | 19.48 |
| 5 | EA4Eig | 27.30 | 41.16 | 16.85 |
| 6 | S-LSHADE-DP | 26.03 | 39.29 | 13.20 |
| 7 | RB-IPOP-CMA-ES | 24.03 | 42.57 | 19.83 |
| 8 | JADE | 22.53 | 34.81 | 15.05 |
| 9 | CMA-ES | 21.76 | 31.39 | 19.35 |
| 10 | NL-SHADE-LBC | 21.44 | 35.92 | 16.02 |
| 11 | NL-SHADE-RSP-MID | 20.11 | 32.98 | 13.87 |
| 12 | EA4EigSimpTowardsIDE | 18.18 | 32.65 | 16.60 |
| 13 | SADE | 18.09 | 31.71 | 13.07 |
| 14 | GenSA | 16.36 | 30.33 | 13.65 |
| 15 | L-SRTDE | 14.74 | 33.57 | 20.80 |
| 16 | DE_restarts | 13.72 | 24.26 | 9.96 |
| 17 | DE | 13.13 | 23.70 | 9.51 |
| 18 | L-BFGS-B | 13.07 | 24.82 | 12.65 |
| 19 | ES(1+1) | 8.09 | 19.11 | 7.49 |
| 20 | NM | 7.55 | 13.19 | 7.28 |
| 21 | PSopt | 4.63 | 15.13 | 3.05 |
| 22 | ES(mu+lambda) | 1.60 | 10.66 | 1.39 |
| 23 | EA | 1.39 | 8.36 | 1.38 |

The CEC 2022 ranking method:
| Rank | Algorithm | Points |
|------|-----------|--------|
| 1 | aBIPOP_CMA-ES | 3189222.0 |
| 2 | EA4EigSimpTowardsIDE_jSO | 3112997.5 |
| 3 | DES | 3081355.5 |
| 4 | jSO | 3056137.0 |
| 5 | L-SRTDE | 3024579.5 |
| 6 | EA4Eig | 2971559.5 |
| 7 | RB-IPOP-CMA-ES | 2925697.0 |
| 8 | NL-SHADE-LBC | 2753598.0 |
| 9 | EA4EigSimpTowardsIDE | 2660051.0 |
| 10 | S-LSHADE-DP | 2636693.5 |
| 11 | NL-SHADE-RSP-MID | 2342328.5 |
| 12 | JADE | 2212685.5 |
| 13 | SADE | 2206989.5 |
| 14 | CMA-ES | 2205980.5 |
| 15 | GenSA | 2020485.5 |
| 16 | DE_restarts | 1500532.5 |
| 17 | DE | 1465951.5 |
| 18 | ES(1+1) | 1371636.5 |
| 19 | L-BFGS-B | 1317094.0 |
| 20 | PSopt | 1019219.5 |
| 21 | NM | 825164.0 |
| 22 | ES(mu+lambda) | 689705.0 |
| 23 | EA | 593537.0 |
23 EA 593537.0