Identifiability analysis

Identifiability analysis is a group of methods in mathematical statistics that are used to determine how well the parameters of a model can be estimated given the quantity and quality of the experimental data.[1] These methods therefore explore not only the identifiability of a model, but also the relation of the model to particular experimental data or, more generally, to the data collection process.

Introduction


When a model is fitted to experimental data, the goodness of fit does not reveal how reliable the parameter estimates are, nor is it sufficient to prove that the model was chosen correctly. For example, if the experimental data are noisy or there is an insufficient number of data points, the estimated parameter values may vary drastically without significantly affecting the goodness of fit. To address these issues, identifiability analysis can be applied as an important step toward ensuring a correct choice of model and a sufficient amount of experimental data. This analysis can provide quantified evidence that the model was chosen correctly and that the acquired experimental data are adequate, or it can serve as an instrument for detecting non-identifiable and sloppy parameters, thereby helping to plan experiments and to build and improve the model at an early stage.

Structural and practical identifiability analysis


Structural identifiability analysis is a particular type of analysis in which the model structure itself is investigated for non-identifiability.[2] Recognized non-identifiabilities may be removed analytically, for example by substituting combinations of the non-identifiable parameters. A model overloaded with independent parameters may, when fitted to a finite experimental dataset, provide a good fit to the data at the price of making the fitting results insensitive to changes in parameter values, therefore leaving the parameter values undetermined. Structural methods are also referred to as a priori methods, because in this case the non-identifiability analysis can be performed prior to the calculation of any fitting score function, by comparing the number of degrees of freedom of the model with the number of independent experimental conditions that can be varied.
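Structural non-identifiability can be illustrated with a minimal, hypothetical example (the model and parameter names below are illustrative assumptions, not from the sources cited): in the model y = a·b·x, only the product a·b is identifiable, and substituting the combination c = a·b removes the non-identifiability.

```python
import numpy as np

# Hypothetical model y = a * b * x: the parameters a and b appear only as
# their product, so any pair (a, b) with the same product a*b produces
# identical output for every input x -- a structural non-identifiability.
def model(x, a, b):
    return a * b * x

x = np.linspace(0.0, 1.0, 5)
y1 = model(x, 2.0, 3.0)   # a*b = 6
y2 = model(x, 1.0, 6.0)   # a*b = 6, different individual parameters
assert np.allclose(y1, y2)  # no dataset can distinguish the two pairs

# Substituting the combination c = a*b yields an identifiable reduced model:
def reduced_model(x, c):
    return c * x

assert np.allclose(reduced_model(x, 6.0), y1)
```

Because this non-identifiability follows from the model equations alone, it can be detected before any data are collected, which is why such methods are called a priori.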

Practical identifiability analysis can be performed by exploring the fit of an existing model to experimental data. Once a fit in some measure has been obtained, parameter identifiability analysis can be performed either locally, near a given point in parameter space (usually the parameter values providing the best model fit), or globally, over the extended parameter space. A common example of practical identifiability analysis is the profile likelihood method.[3]
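A minimal sketch of the profile likelihood idea, using a hypothetical exponential-decay model y = a·exp(−b·t) (the model, parameter values, and noise level are illustrative assumptions): each value of the profiled parameter is fixed in turn while the remaining parameters are re-fitted, and the shape of the resulting profile indicates whether the parameter is practically identifiable.

```python
import numpy as np

# Simulate noisy data from the hypothetical model y = a * exp(-b * t)
# with true values a = 2, b = 1 (all values here are assumptions).
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0, 20)
y = 2.0 * np.exp(-1.0 * t) + 0.05 * rng.standard_normal(t.size)

# Profile the parameter b: for each fixed b, re-fit the nuisance
# parameter a (closed-form least squares) and record the best
# achievable sum of squared errors.
b_grid = np.linspace(0.2, 3.0, 57)
profile = []
for b in b_grid:
    e = np.exp(-b * t)
    a_opt = (y @ e) / (e @ e)          # least-squares a for this fixed b
    profile.append(np.sum((y - a_opt * e) ** 2))

# A sharp minimum near the true value b = 1 suggests b is practically
# identifiable from these data; a flat profile would suggest it is not.
b_best = b_grid[int(np.argmin(profile))]
```

In practice the profile is compared against a likelihood-based confidence threshold: a profile that never crosses the threshold in some direction indicates a practically non-identifiable parameter.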

See also

  • Curve fitting – Process of constructing a curve that has the best fit to a series of data points
  • Estimation theory – Branch of statistics to estimate models based on measured data
  • Identifiability – Statistical property which a model must satisfy to allow precise inference
  • Parameter identification problem – Parameter estimation technique in statistics, particularly econometrics
  • Regression analysis – Set of statistical processes for estimating the relationships among variables

Notes

  1. ^ Cobelli & DiStefano 1980.
  2. ^ Anstett-Collin, F.; Denis-Vidal, L.; Millérioux, G. (2020-01-01). "A priori identifiability: An overview on definitions and approaches". Annual Reviews in Control. 50: 139–149. doi:10.1016/j.arcontrol.2020.10.006. ISSN 1367-5788.
  3. ^ Wieland, Franz-Georg; Hauber, Adrian L.; Rosenblatt, Marcus; Tönsing, Christian; Timmer, Jens (2021-03-01). "On structural and practical identifiability". Current Opinion in Systems Biology. 25: 60–69. arXiv:2102.05100. doi:10.1016/j.coisb.2021.03.005. ISSN 2452-3100.
