Introduction:
The selection of the “best” model typically involves both objective (numerical) and subjective criteria, such as interpretation of plots and consideration of biological plausibility. Multi-Objective Optimization (MOO) allows for the simultaneous optimization of multiple criteria. This approach generates a Pareto front, a set of non-dominated models in which no solution can be improved in one objective without sacrificing performance in another [1]. By leveraging MOO, a set of numerically optimal models is identified and presented to the pharmacometrician for consideration.
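To make the non-dominance criterion concrete, a minimal Python sketch is given below; the candidate models and their OFV/nParms values are hypothetical, and the snippet illustrates the concept rather than the pyDarwin implementation. It filters a set of scored candidates down to its Pareto front, with both objectives minimized.

def dominates(a, b):
    """Model a dominates model b if it is no worse in both objectives
    (lower or equal OFV and nParms) and strictly better in at least one."""
    return (a["ofv"] <= b["ofv"] and a["nparms"] <= b["nparms"]
            and (a["ofv"] < b["ofv"] or a["nparms"] < b["nparms"]))

def pareto_front(models):
    """Return the non-dominated subset of the candidate models."""
    return [m for m in models
            if not any(dominates(other, m) for other in models if other is not m)]

# Hypothetical candidates scored on the two objectives used in this work
candidates = [
    {"name": "1cmt_base",     "ofv": 9800.0, "nparms": 5},
    {"name": "2cmt_wt_on_cl", "ofv": 8600.0, "nparms": 9},
    {"name": "2cmt_full_cov", "ofv": 8590.0, "nparms": 14},  # dominated by the model below
    {"name": "3cmt_full_cov", "ofv": 8100.0, "nparms": 14},
]
print([m["name"] for m in pareto_front(candidates)])
# -> ['1cmt_base', '2cmt_wt_on_cl', '3cmt_full_cov']

Every candidate removed by this filter is matched or beaten on both objectives by at least one model remaining on the front; the surviving models are the ones presented to the analyst.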
Objectives:
Identify a set of non-dominated models by simultaneously optimizing the objective function value (OFV) and the number of estimated parameters (nParms)
Methods:
This study used a 17-DMAG dataset, comprising 66 subjects and 951 observations. The user-defined model search space included the following (a combinatorial sketch follows this list):
-Number of compartments (1, 2, or 3)
-Between-subject variability on peripheral volumes and intercompartmental clearances
-Effect of weight, age, sex, and serum creatinine on CL
-Effect of weight, age, and sex on Vc
-Effect of weight on Q and Vp
-Between-occasion variability on CL, Q, and Vc
-Residual error models (proportional, combined)
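To convey the scale of such a search space, the following Python sketch enumerates candidate structures as a Cartesian product of the listed dimensions. The option labels are illustrative (not pyDarwin's template/token syntax), and structural dependencies, e.g., peripheral-compartment terms applying only to 2- and 3-compartment models, are ignored, so the count is an upper bound.

from itertools import product

# Illustrative dimensions of the search space; labels are hypothetical
search_space = {
    "n_compartments": [1, 2, 3],
    "bsv_vp":         [False, True],   # between-subject variability on Vp
    "bsv_q":          [False, True],   # between-subject variability on Q
    "cov_cl":         ["none", "wt", "wt+age", "wt+age+sex", "wt+age+sex+scr"],
    "cov_vc":         ["none", "wt", "wt+age", "wt+age+sex"],
    "cov_q_vp":       [False, True],   # weight on Q and Vp
    "bov":            ["none", "cl", "cl+q", "cl+q+vc"],
    "residual_error": ["proportional", "combined"],
}

candidates = list(product(*search_space.values()))
print(f"{len(candidates)} structural candidates before constraints")
for combo in candidates[:2]:           # inspect the first two combinations
    print(dict(zip(search_space.keys(), combo)))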
In this case example, model selection was driven by two objectives: OFV and nParms. The search evaluated 80 models in each generation, and the machine learning (ML) search was combined with a local downhill search every 10 generations.
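The alternation of ML generations with periodic downhill refinement can be outlined as follows. This is a schematic sketch only: init_population, evaluate, ga_step, and local_downhill are hypothetical placeholders for operations the search tool performs internally, not pyDarwin API calls.

POP_SIZE = 80          # candidate models evaluated per generation, as in this search
DOWNHILL_EVERY = 10    # a local downhill search is interleaved every 10 generations

def non_dominated(scored):
    """Keep the (model, (ofv, nparms)) pairs that are not dominated on both objectives."""
    def dom(a, b):
        return all(x <= y for x, y in zip(a, b)) and a != b
    return [(m, s) for m, s in scored
            if not any(dom(s2, s) for _, s2 in scored if s2 != s)]

def run_search(n_generations, init_population, evaluate, ga_step, local_downhill):
    """Schematic bi-objective search loop: score each candidate on (OFV, nParms),
    update the running Pareto front, apply GA operators, and periodically refine
    the population with a local downhill step."""
    population = init_population(POP_SIZE)
    archive = []                                          # running Pareto front
    for gen in range(1, n_generations + 1):
        scored = [(m, evaluate(m)) for m in population]   # evaluate -> (ofv, nparms)
        archive = non_dominated(archive + scored)
        population = ga_step(population, scored)          # selection / crossover / mutation
        if gen % DOWNHILL_EVERY == 0:
            population = local_downhill(population)       # local refinement of candidates
    return archive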
Results:
As the search progressed, the Pareto front of non-dominated models shifted toward lower values for both objectives (OFV and nParms). While both OFV and parsimony improved, the expected trade-off between the two was observed: models with lower OFV generally contained more estimated parameters (Figure 1). There was a significant decline in OFV when nParms increased from 6 to 7, with all models having fewer than 7 parameters being 1-compartment models. In the search without local downhill search, 17 optimal models were identified on the Pareto front, with OFV ranging from 8034.493 to 9813.408 and nParms ranging from 5 to 22 (Figure 1). The globally best model, as determined by exhaustive search [2] using a single-objective GA, also appeared on the Pareto front. All non-dominated models passed the covariance step, while 3 failed the convergence step. As is done for other ML methods, downhill steps were alternated with the ML steps. With the downhill steps, all non-dominated models on the final Pareto front passed the covariance step, while 8 of the 9 models with more than 17 parameters failed convergence. These models, selected on objective criteria alone, are then presented as a manageable set (Figure 2) from which the pharmacometrician can select one or more “best” model(s) based on objective and subjective criteria such as biological plausibility and diagnostic graphics.
Conclusions:
The MOO method successfully identified a set of non-dominated models within the search space by evaluating both OFV and parsimony. Both search strategies (with and without local downhill search) found the globally best model, and the incorporation of the downhill search expanded the exploration of the search space, yielding more non-dominated models. MOO may provide an efficient approach that combines ML-driven model selection with subjective evaluation of model appropriateness.
References:
[1] Kochenderfer MJ, Wheeler TA. Algorithms for Optimization. The MIT Press; 2019.
[2] Li X, Sale M, Nieforth K, Craig J, Wang F, Solit D, Feng K, Hu M, Bies R, Zhao L. pyDarwin machine learning algorithms application and comparison in nonlinear