Probabilistic inversion
Moderator: Jouni
We know what it means to invert a function at a point or point set. Probabilistic inversion (PI) denotes the operation of inverting a function at a distribution or set of distributions. Given a distribution on the range of a function, we seek a distribution over the domain such that pushing this distribution through the function returns the target distribution on the range. In dose-response uncertainty quantification, we seek a distribution over the parameters of a dose-response (DR) model which, when pushed through the model, reproduces the observational uncertainty.
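The push-forward idea can be sketched with a one-dimensional toy function (the function, distribution, and values below are illustrative, not from the article): we check a candidate domain distribution by pushing samples through the function and inspecting quantiles of the resulting range distribution.

```python
import random

random.seed(1)

def f(x):
    # Toy "model" mapping domain -> range.
    return x ** 2

# Candidate domain distribution: X ~ Uniform(0, 2). Because f is monotone
# on [0, 2], quantiles map through f, so the push-forward median is f(1) = 1.
xs = [random.uniform(0.0, 2.0) for _ in range(100_000)]
ys = sorted(f(x) for x in xs)
median = ys[len(ys) // 2]   # sample median of the push-forward, close to 1.0
```

If the push-forward quantiles match the target distribution on the range, the candidate solves the inversion problem; PI methods search for such a candidate systematically.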
Applicable tools for PI derive from the Iterative Proportional Fitting (IPF) algorithm (Kruithof 1937, Deming and Stephan 1940). A brief description is given below; for details see, e.g., Du et al. (2006), Kurowicka and Cooke (2006). In the present context, we start with a DR model and with a large sample distribution over the model's parameters. This distribution should be wide enough that its push-forward distribution covers the support of the observable uncertainty distributions. If there are N samples of parameter vectors, each vector has probability 1/N in the sample distribution. We then re-weight the samples such that if we re-sample the N parameter vectors with these weights, we recover the observational distributions. In practice, the observational distributions are specified by a number of quantiles or percentiles. In all the cases reported here, three quantiles of the observational distributions are specified, as close as possible to (5%, 50%, 95%). Technically speaking, we are thus inverting the DR model at a set of distributions, namely those satisfying the specified quantiles. Of course, specifying more quantiles would yield better fits in most cases, at the expense of larger sample distributions and longer run times.
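The set-up described above can be sketched as follows. The DR model, parameter range, and quantile values here are hypothetical stand-ins; the point is that the specified (5%, 50%, 95%) quantiles partition the observable into four interquantile cells with target probabilities (0.05, 0.45, 0.45, 0.05), which the uniform-weight starting sample does not match.

```python
import random

random.seed(2)
N = 10_000

def dose_response(theta, dose=1.0):
    # Hypothetical one-parameter linear DR model (illustrative only).
    return theta * dose

# Wide starting sample over the parameter; each vector has weight 1/N.
thetas = [random.uniform(0.0, 10.0) for _ in range(N)]
responses = [dose_response(t) for t in thetas]

# Assumed specified 5%, 50%, 95% quantiles of the observable.
q05, q50, q95 = 1.0, 3.0, 7.0
# Target probabilities of the four interquantile cells.
targets = [0.05, 0.45, 0.45, 0.05]

def cell(y):
    # Index of the interquantile cell containing observable value y.
    if y <= q05: return 0
    if y <= q50: return 1
    if y <= q95: return 2
    return 3

cells = [cell(y) for y in responses]
# Under uniform weights the cell fractions are roughly (0.10, 0.20, 0.40,
# 0.30), not the targets -- re-weighting is needed, which is IPF's job.
empirical = [cells.count(c) / N for c in range(4)]
```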
The method for finding the weights for weighted resampling is IPF. IPF starts with a distribution over cells in a contingency table and finds the maximum likelihood estimate of the distribution satisfying a number of marginal constraints. Equivalently, it finds the distribution satisfying the constraints that is minimally informative relative to the starting distribution. It does this by iteratively rescaling the joint distribution to match each marginal constraint in turn, cycling until all constraints hold.
References
Roger Cooke (2007). Uncertainty Quantification for Dose-Response Models Using Probabilistic Inversion with Isotonic Regression: Bench Test Results.