CMOPs can be solved in various ways, but recently two classes of algorithms have shown promise in solving such problems with a high level of success: multi-objective evolutionary algorithms (MOEAs) and multi-objective Bayesian optimisation (MOBO).
MOEAs \cite{Deb_2011} work by maintaining and evolving a population of solutions across an optimisation run. For example, Genetic Algorithms (GAs) are a specific subset that utilise ‘operators’ analogous to biological processes: members of the population are selected to become parents based on a specific selection criterion, and then undergo crossover and mutation to form a child population. \cite{mitchell1998introduction} Within the field of MOEAs, various constraint handling techniques have been proposed, \cite{Fan_2019,Xu_2020,Tian_2022} as well as extensions of MOEAs to many-objective ($m>3$) problems. \cite{li2015many} MOEAs are well suited to settings where solutions can be tested in parallel: given their population-based approach, each generation’s population can be treated as a batch. MOEAs have been successfully applied to materials-specific multi-objective problems, where experimental data is used to construct a machine learning model, which is then treated as a computational optimisation problem to be solved, and the resulting solutions are evaluated physically. \cite{Zhang_2021a,Patra_2017,Menou_2016,Coello_Coello_2009,Ganguly_2007,Mahfouf_2005} MOEAs have also been applied in materials science to computational and inverse design problems. \cite{Wu_2020,Avery_2017,Berardo_2018,Pakhnova_2020,Carvalho_2020,Jennings_2019,Salley_2020}
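The GA workflow described above, selection, crossover and mutation repeated over generations, can be sketched as a minimal, purely illustrative Python snippet (single-objective for clarity; the function \texttt{evolve}, its parameter values and the toy sphere objective are assumptions for illustration, not code from any cited library):

```python
import random

def evolve(fitness, n_var=5, pop_size=20, n_gen=50, p_mut=0.1, seed=0):
    """Minimal generational GA (minimisation): tournament selection,
    one-point crossover and Gaussian mutation on real-valued genes."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_var)] for _ in range(pop_size)]
    for _ in range(n_gen):
        children = []
        while len(children) < pop_size:
            # Binary tournament selection: the fitter of two random members wins.
            p1 = min(rng.sample(pop, 2), key=fitness)
            p2 = min(rng.sample(pop, 2), key=fitness)
            # One-point crossover between the two parents.
            cut = rng.randrange(1, n_var)
            child = p1[:cut] + p2[cut:]
            # Mutation: perturb each gene with probability p_mut.
            child = [g + rng.gauss(0, 0.1) if rng.random() < p_mut else g
                     for g in child]
            children.append(child)
        pop = children  # the child population replaces the parents
    return min(pop, key=fitness)

# Toy use: minimise the sphere function sum(x_i^2).
best = evolve(lambda x: sum(g * g for g in x))
```

A multi-objective GA such as NSGA-III replaces the scalar fitness comparison with non-dominated sorting and a diversity-preserving mechanism, but the generational loop is the same.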
MOBO leverages surrogate models to cheaply approximate an expensive black-box function, and then utilises an acquisition function to quantify, probabilistically, the utility of candidate points and return the candidate where the expected gain is maximised. \cite{Shahriari_2016} The choice of surrogate model is left to the user, but in recent literature it has become synonymous with ‘kriging’, which refers specifically to the use of Gaussian Processes (GPs) as the surrogate model, taking advantage of their flexibility and robustness. \cite{rasmussen2003gaussian} The extension of MOBO to CMOPs is less mature, with relatively new implementations that cover parallel sampling, multiple objectives and constraints. \cite{Garrido_Merch_n_2019,belakaria2020max,daulton2020differentiable,suzuki2020multi} In addition, there are hybrid variants such as TSEMO \cite{Bradford_2018} or MOEA/D-EGO \cite{Qingfu_Zhang_2010} that integrate MOEAs to improve the prediction quality of the underlying surrogate models. In general, BO as an overarching optimisation strategy has already been established as attractive for use in both computational design problems \cite{Mannodi_Kanakkithodi_2016,Solomou_2018,Yuan_2018,Karasuyama_2020,Janet_2020,Hanaoka_2021} and experimental problems, \cite{MacLeod_2022,MacLeod_2020,Cao_2021,Schweidtmann_2018,Christensen_2021,Epps_2020,Erps_2021} due to its sample-efficient approach.
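The surrogate-plus-acquisition loop can be illustrated with a toy, single-objective, one-dimensional sketch: a GP with an RBF kernel as the surrogate and a lower-confidence-bound acquisition over a candidate grid. All names, kernel length scale and the toy objective are illustrative assumptions (practical MOBO uses multi-objective acquisitions such as EHVI and dedicated libraries, not hand-rolled linear algebra):

```python
import math

def rbf(a, b, ls=0.3):
    # Squared-exponential (RBF) kernel on scalars.
    return math.exp(-(a - b) ** 2 / (2 * ls ** 2))

def solve(A, b):
    # Naive Gaussian elimination with partial pivoting (fine for tiny systems).
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def gp_posterior(xs, ys, x, noise=1e-6):
    # Standard GP regression equations: mean = k*^T K^-1 y,
    # var = k(x,x) - k*^T K^-1 k*.
    K = [[rbf(a, b) + (noise if i == j else 0.0) for j, b in enumerate(xs)]
         for i, a in enumerate(xs)]
    ks = [rbf(a, x) for a in xs]
    alpha = solve(K, ys)
    mean = sum(k * w for k, w in zip(ks, alpha))
    v = solve(K, ks)
    var = max(rbf(x, x) - sum(k * w for k, w in zip(ks, v)), 1e-12)
    return mean, var

def bo_step(xs, ys, candidates, beta=2.0):
    # Lower-confidence-bound acquisition for minimisation: favour candidates
    # with low predicted mean and/or high predictive uncertainty.
    def lcb(x):
        mean, var = gp_posterior(xs, ys, x)
        return mean - beta * math.sqrt(var)
    return min(candidates, key=lcb)

f = lambda x: (x - 0.6) ** 2            # toy black-box objective
xs, ys = [0.0, 1.0], [f(0.0), f(1.0)]   # initial observations
grid = [i / 100 for i in range(101)]
for _ in range(5):                       # five surrogate-guided evaluations
    x_next = bo_step(xs, ys, grid)
    xs.append(x_next)
    ys.append(f(x_next))
```

The loop alternates fitting the surrogate to all observations and evaluating the true function only where the acquisition is most promising, which is the source of BO's sample efficiency.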
As previously discussed, the PF defines the set of optimal solutions of a CMOP. For optimisation of CMOPs, hypervolume (HV) is often used as a performance indicator. In a single dimension it reduces to the Euclidean distance between a solution and a chosen reference point; in multiple dimensions it is the volume of the objective-space region dominated by the solution set and bounded by the reference point. HV directly reflects the quality of a solution set: a set with high HV is both close to the true PF and diverse, since it dominates more of the objective space. An illustration of the HV measure for a multi-objective (two dimensions for illustration) convex minimisation problem is presented in Figure 2, where HV is computed as the area dominated by the non-dominated solutions, i.e. those solutions not dominated by any other, bounded by a reference point.
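For the two-dimensional minimisation case described above, HV can be computed exactly with a simple sweep: filter out dominated points, sort the front by the first objective, and sum the rectangular slabs between consecutive points and the reference point. A minimal sketch (the function name and inputs are illustrative):

```python
def hypervolume_2d(points, ref):
    """Area dominated by a set of 2-D minimisation points,
    bounded by the reference point `ref`."""
    # Keep only non-dominated points: no other point is <= in both objectives.
    front = [p for p in points
             if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                        for q in points)]
    # Sort by the first objective; along a front, f2 then decreases.
    front.sort()
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in front:
        # Each point contributes the rectangle it exclusively dominates.
        hv += (ref[0] - f1) * (prev_f2 - f2)
        prev_f2 = f2
    return hv

hv = hypervolume_2d([(1, 3), (2, 2), (3, 1)], ref=(4, 4))
```

In higher dimensions exact HV computation grows expensive, which is one motivation for the Monte Carlo approximations used by HV-based acquisition functions.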
Aside from being a performance metric to compare optimisation strategies, HV can also be evaluated directly to guide the convergence of various algorithms. Hanaoka et al. showed that scalarization-based MOBO may be best suited for clear exploitation and/or a preferential optimisation trajectory of objectives, whereas HV-based MOBO is better for exploration of the entire search space. \cite{Hanaoka_2022} Indeed, HV-based approaches empirically show a preference for proposing solutions towards the extrema of a PF, \cite{Auger_2012,guerreiro2020hypervolume} and can thus better showcase extrapolation. In contrast, scalarization approaches, which reduce a multi-objective problem to a single objective, whether hierarchically as in Chimera \cite{H_se_2018} or via a user-defined function, \cite{Zhang_2021} have limitations: i) it is difficult to determine how to scalarize the objectives appropriately; ii) single-objective optimisation methods cannot propose a set of solutions that balance the trade-off between objectives.
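Both limitations can be seen in the simplest scalarization, a weighted sum: each weight vector collapses the problem to one scalar and singles out one candidate, so recovering a spread of trade-off solutions requires guessing many weight vectors. A toy sketch (the candidate values and weights are invented for illustration):

```python
def weighted_sum(objectives, weights):
    """Reduce a vector of objective values to one scalar (minimisation)."""
    return sum(w * f for w, f in zip(weights, objectives))

# Hypothetical two-objective candidates (f1, f2), both to be minimised.
candidates = {"A": (0.1, 0.9), "B": (0.3, 0.4), "C": (0.9, 0.1)}

# Each choice of weights yields a single 'best' candidate, not a front.
best_even = min(candidates, key=lambda k: weighted_sum(candidates[k], (0.5, 0.5)))
best_f1   = min(candidates, key=lambda k: weighted_sum(candidates[k], (0.9, 0.1)))
```

Here `best_even` and `best_f1` differ, illustrating how the answer depends entirely on the (often arbitrary) weighting; a weighted sum is also known to miss solutions on non-convex regions of a PF, whereas HV-based methods rank whole solution sets at once.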
Within the context of multi-objective optimisation and materials science implementation, two state-of-the-art algorithms were compared in the present work: q-Noisy Expected Hypervolume Improvement (qNEHVI) \cite{daulton2021parallel} and the Unified Non-dominated Sorting Genetic Algorithm III (U-NSGA-III). \cite{Seada_2016} They are MOBO- and MOEA-based algorithms, respectively, and were chosen based on their reported performance in solving complex CMOPs (with respect to HV score), and on the fact that they are capable of highly parallel sampling, making them suitable for integration within an HTE framework. Furthermore, both algorithms are available in open-source Python libraries, making them easy to implement and enabling reproducibility of the results presented.