
Quantification of Margins and Uncertainty (QMU) is a decision-support methodology for complex technical decisions. QMU focuses on the identification, characterization, and analysis of performance thresholds and their associated margins for engineering systems that are evaluated under conditions of uncertainty, particularly when portions of those results are generated using computational modeling and simulation.[1] QMU has traditionally been applied to complex systems where comprehensive experimental test data is not readily available and cannot be easily generated for either system execution or for specific subsystems of interest. Examples of systems where QMU has been applied include nuclear weapons performance, qualification, and stockpile assessment. QMU focuses on characterizing in detail the various sources of uncertainty that exist in a model, thus allowing the uncertainty in the system response output variables to be well quantified. These sources are frequently described in terms of probability distributions to account for the stochastic nature of many engineering systems. The characterization of uncertainty supports comparisons of the system design margins for key system performance metrics to the uncertainty associated with their calculation by the model. QMU supports risk-informed decision-making processes where computational simulation results provide one of several inputs to the decision-making authority. There is currently no standardized methodology across the simulation community for conducting QMU;[2] the term is applied to a variety of different modeling and simulation techniques that attempt to rigorously quantify model uncertainty in order to support comparison to design margins.

History


The fundamental concepts of QMU were developed concurrently at several national laboratories supporting nuclear weapons programs in the late 1990s, including Lawrence Livermore National Laboratory, Sandia National Laboratories, and Los Alamos National Laboratory. The original focus of the methodology was to support nuclear stockpile decision making, an area where experimental test data could no longer be generated for validation due to bans on nuclear weapons testing.[3] The methodology has since been applied in other settings where safety- or mission-critical decisions for complex projects must be made using results that depend on modeling and simulation. Recent examples include applications at NASA for interplanetary spacecraft and rover development[4] as well as characterization of material properties in terminal ballistic encounters.[5]

Overview


QMU focuses on quantification of the ratio of design margin to model output uncertainty. The process begins with the identification of the key performance thresholds for the system, which can frequently be found in the system requirements documents. These thresholds (frequently referred to as performance gates) can specify an upper bound of performance, a lower bound of performance, or both in the case where the metric must remain within the specified range. For each of these performance thresholds, the associated performance margin must be identified. The margin represents the targeted range the system is being designed to operate in. These margins account for aspects such as the design safety factor of the system as well as the confidence level of that design. QMU focuses on determining the quantified uncertainty of the simulation results as they relate to the performance threshold margins. This total uncertainty includes all forms of uncertainty related to the computational model as well as the uncertainty in the threshold and margin values. The identification and characterization of these values allows the ratios of margin-to-uncertainty (M/U) to be calculated for the system. These M/U values can serve as quantified inputs that can help authorities make risk-informed decisions regarding how to interpret and act upon results based on simulations.
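
As a simple illustration, the following sketch computes M/U for a single hypothetical performance gate with an upper bound; all of the numbers are illustrative assumptions rather than values from any cited system:

```python
# Minimal sketch of a margin-to-uncertainty (M/U) calculation for a single
# performance gate with an upper bound. All values are illustrative.

threshold = 100.0     # performance gate: the metric must stay below this value
best_estimate = 80.0  # simulated best-estimate value of the metric
uncertainty = 5.0     # U: quantified uncertainty of the simulated value

margin = threshold - best_estimate  # M: distance from best estimate to the gate
mu_ratio = margin / uncertainty     # M/U = 20.0 / 5.0 = 4.0
print(f"M = {margin}, U = {uncertainty}, M/U = {mu_ratio:.1f}")
```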

[Figure: Overview of the general QMU process.]

QMU recognizes that there are multiple types of uncertainty that propagate through a model of a complex system. The primary outputs of a QMU process are simulated results for the key performance thresholds of interest, known as the Best Estimate Plus Uncertainty (BE+U). The best estimate component of BE+U represents the information about the system being modeled that is firmly understood and behaves deterministically. The basis of this information is usually ample experimental test data regarding the process of interest, which allows the simulation to model this behavior with high confidence.

The types of uncertainty that contribute to the value of the BE+U can be broken down into several categories [6] (a sampling sketch that treats the two categories separately follows the list):

  • Aleatory uncertainty: This type of uncertainty is naturally present in the system being modeled and is sometimes known as “irreducible uncertainty” and “stochastic variability.” Examples include processes that are naturally stochastic such as wind gust parameters and manufacturing tolerances.
  • Epistemic uncertainty: This type of uncertainty is due to a lack of knowledge about the system being modeled and is also known as “reducible uncertainty.” Epistemic uncertainty can result from uncertainty about the correct underlying equations of the model, incomplete knowledge of the full set of scenarios to be encountered, and lack of experimental test data defining the key model input parameters.
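
One common way to keep these two categories separate during sampling is a nested ("double-loop") Monte Carlo scheme: an outer loop samples epistemic parameters and an inner loop samples aleatory variability. The sketch below illustrates the idea; the response function, the interval on the model gain, and the gust distribution are all invented for illustration:

```python
import random

# Double-loop Monte Carlo: the outer loop samples epistemic (reducible)
# parameters, the inner loop samples aleatory (irreducible) variability.
# The response function and all distributions are illustrative assumptions.

def response(gain, gust):
    """Hypothetical system response to a wind gust for a given model gain."""
    return gain * gust

random.seed(0)
outer_percentiles = []
for _ in range(100):                    # epistemic loop: uncertain model gain
    gain = random.uniform(0.9, 1.1)     # known only to lie in an interval
    samples = sorted(response(gain, random.gauss(10.0, 2.0))  # aleatory gusts
                     for _ in range(1000))
    outer_percentiles.append(samples[int(0.99 * len(samples))])  # 99th pct.

# The spread across the outer loop reflects epistemic uncertainty; the spread
# within each inner loop reflects aleatory variability.
print(f"99th percentile ranges from {min(outer_percentiles):.1f} "
      f"to {max(outer_percentiles):.1f} across epistemic samples")
```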

The system may also suffer from requirements uncertainty related to the specified thresholds and margins associated with the system requirements. QMU acknowledges that in some situations the system designer may have high confidence in the correct value for a specific metric, while at other times the chosen requirement value may itself be uncertain. QMU attempts to separate these uncertainty values and quantify each of them as part of the overall inputs to the process.

QMU can also factor in the human inability to identify every "unknown unknown" that can affect a system. These errors can be quantified by examining the limited experimental data that may be available from previous system tests and identifying the percentage of tests in which system thresholds were exceeded in an unexpected manner.

The underlying parameters that serve as inputs to the models are frequently modeled as samples from a probability distribution. The input parameter distributions, together with the model propagation equations, determine the distribution of the output parameter values. The distribution of a specific output value must be considered when determining what is an acceptable M/U ratio for that performance variable. If the uncertainty limit for U includes a finite upper bound due to the particular distribution of that variable, a lower M/U ratio may be acceptable. However, if U follows a distribution such as a normal or exponential distribution, which can include values well into the far tails, a larger ratio may be required in order to reduce system risk to an acceptable level.
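
The influence of the tail of the output distribution can be illustrated with a small sampling sketch. Here the same margin (an M/U of 2:1) is checked against a bounded uniform uncertainty and an unbounded normal uncertainty; both distributions are illustrative assumptions:

```python
import random

# Sketch: the same M/U ratio implies different exceedance risks depending on
# the shape of the output uncertainty distribution. Values are illustrative.

random.seed(1)
margin = 2.0    # M: distance from the best estimate to the threshold
n = 200_000

# Case 1: bounded uncertainty, uniform on [-1, 1] (U taken as 1.0, so M/U = 2).
bounded_exceed = sum(random.uniform(-1.0, 1.0) > margin for _ in range(n)) / n

# Case 2: unbounded uncertainty, normal with sigma = 1.0 (again M/U = 2).
normal_exceed = sum(random.gauss(0.0, 1.0) > margin for _ in range(n)) / n

print(f"bounded:   P(exceed margin) = {bounded_exceed:.4f}")  # exactly 0
print(f"unbounded: P(exceed margin) = {normal_exceed:.4f}")   # about 0.0228
```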

Ratios of M/U for safety-critical systems can vary from application to application. Studies have cited acceptable M/U ratios as being in the 2:1 to 10:1 range for nuclear weapons stockpile decision-making. Intuitively, the larger the value of M/U, the less of the available performance margin is being consumed by uncertainty in the simulation outputs. A ratio of 1:1 could mean that a simulation run shows the performance threshold not being exceeded when, in actuality, the entire design margin has been consumed. It is important to note that rigorous QMU does not ensure that the system itself is capable of meeting its performance margin; rather, it serves to ensure that the decision-making authority can make judgments based on accurately characterized results.

The underlying objective of QMU is to present information to decision-makers that fully characterizes the results in light of the uncertainty as understood by the model developers. This presentation of results allows decision-makers an opportunity to make informed decisions while understanding what sensitivities exist in the results due to the current understanding of uncertainty. QMU recognizes that decisions for complex systems cannot be made strictly based on the quantified M/U metrics. Subject matter expert (SME) judgment and other external factors such as stakeholder opinions and regulatory issues must also be considered by the decision-making authority before a final outcome is decided.[7]

Verification and Validation


Verification and validation (V&V) of a model is closely interrelated with QMU. Verification is broadly acknowledged as the process of determining if a model was built correctly; validation activities focus on determining if the correct model was built.[8] V&V against available experimental test data is an important aspect of accurately characterizing the overall uncertainty of the system response variables. V&V seeks to make maximum use of component- and subsystem-level experimental test data to accurately characterize model input parameters and the physics-based models associated with particular sub-elements of the system. The use of QMU in the simulation process helps to ensure that the stochastic nature of the input variables (due to both aleatory and epistemic uncertainties) as well as the underlying uncertainty in the model are properly accounted for when determining the simulation runs required to establish model credibility prior to accreditation.
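
As a rough sketch of one such comparison, the following example checks a model's best-estimate prediction against a handful of hypothetical subsystem measurements using a simple normalized discrepancy; the data and the choice of metric are assumptions for illustration, not a standard V&V procedure:

```python
import statistics

# Simple validation check: compare a model's best-estimate prediction against
# experimental measurements of the same quantity. Data are illustrative.

experimental = [9.6, 10.1, 9.9, 10.4, 9.8]  # hypothetical subsystem test data
model_prediction = 10.0                     # simulated best estimate

exp_mean = statistics.mean(experimental)
exp_std = statistics.stdev(experimental)

# Normalized discrepancy: how many experimental standard deviations separate
# the model from the test data. Small values support (not prove) validity.
z = abs(model_prediction - exp_mean) / exp_std
print(f"model vs. test discrepancy: {z:.2f} standard deviations")
```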

Advantages and Disadvantages


QMU has the potential to support improved decision-making for programs that must rely heavily on modeling and simulation. Modeling and simulation results are being used more often during the acquisition, development, design, and testing of complex engineering systems.[9] One of the major challenges of developing simulations is knowing how much fidelity should be built into each element of the model. The pursuit of higher fidelity can significantly increase the development time and total cost of the simulation effort. QMU provides a formal method for describing the required fidelity relative to the design threshold margins for key performance variables. This information can also be used to prioritize areas of future investment for the simulation. Analysis of the various M/U ratios for the key performance variables can help identify model components that are in need of fidelity upgrades in order to increase simulation effectiveness.
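
One possible way to use such an analysis, sketched below under the assumption that component uncertainty contributions are independent and combine as a root sum of squares, is to rank components by their share of the output variance; the component names and values are invented for illustration:

```python
import math

# Sketch: rank model components by their contribution to total output
# uncertainty to prioritize fidelity investment. The numbers are illustrative,
# and the root-sum-square combination assumes independent contributions.

component_uncertainty = {
    "aerodynamics": 3.0,
    "propulsion": 1.5,
    "structures": 0.5,
}
total_u = math.sqrt(sum(u**2 for u in component_uncertainty.values()))

for name, u in sorted(component_uncertainty.items(),
                      key=lambda kv: kv[1], reverse=True):
    share = u**2 / total_u**2  # fraction of the output variance
    print(f"{name:12s} contributes {share:.0%} of output variance")
```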

A variety of potential issues related to the use of QMU have also been identified. QMU can lead to longer development schedules and increased development costs relative to traditional simulation projects due to the additional rigor being applied. Proponents of QMU state that the level of uncertainty quantification required is driven by certification requirements for the intended application of the simulation. Simulations used for capability planning or system trade analyses must generally model the overall performance trends of the systems and components being analyzed. However, for safety-critical systems where experimental test data is lacking, simulation results provide a critical input to the decision-making process. Another potential risk related to the use of QMU is a false sense of confidence regarding protection from unknown risks. The use of quantified results for key simulation parameters can lead decision makers to believe all possible risks have been fully accounted for, which is particularly challenging for complex systems.

References

  1. ^ Martin Pilch, Timothy G. Trucano, and Jon C. Helton (September 2006). "Ideas Underlying Quantification of Margins and Uncertainties (QMU): A White Paper" (PDF). Sandia National Laboratories report SAND2006-5001.
  2. ^ D. Eardley; et al. (2005-03-25). "Quantification of Margins and Uncertainties" (PDF). JASON, The MITRE Corporation, report JSR-04-330.
  3. ^ David H. Sharp and Merri M. Wood-Schultz (2003). "QMU and Nuclear Weapons Certification—What's under the Hood?" (PDF). Los Alamos Science. 28: 47–53.
  4. ^ Lee Peterson (23 June 2011). "Quantification of Margins and Uncertainty (QMU): Turning Models and Test Data into Mission Confidence" (PDF). Keck Institute for Space Studies xTerraMechanics Workshop.
  5. ^ A. Kidane; et al. (2012). "Rigorous model-based uncertainty quantification with application to terminal ballistics, part I: Systems with controllable inputs and small scatter" (PDF). Journal of the Mechanics and Physics of Solids. 60: 983–1001.
  6. ^ Jon C. Helton; et al. (2009). "Conceptual and computational basis for the quantification of margins and uncertainty" (PDF). Sandia National Laboratories technical report SAND2009-3055.
  7. ^ B. J. Garrick and R. F. Christie (2002). "Probabilistic Risk Assessment Practices in the USA for Nuclear Power Plants". Safety Science. 40: 177–201.
  8. ^ W. L. Oberkampf, T. G. Trucano, and C. Hirsch (2004). "Verification, Validation, and Predictive Capability in Computational Engineering and Physics" (PDF). Applied Mechanics Reviews. 57 (5): 345–384.
  9. ^ Blue Ribbon Panel on Simulation-Based Engineering Science (2006). "Simulation-Based Engineering Science: Revolutionizing Engineering Science Through Simulation" (PDF). National Science Foundation Technical Report.

Category:Nuclear stockpile stewardship