# Non-parametric Bayesian estimation

We present here the nonparametric Bayesian approach to uncertainty modeling in dose response. The Bayesian approach is distinctive in that Bayes' theorem provides the mechanism to combine data with prior information to produce updated results in the form of posterior distributions: the posterior is a combination of the prior distribution (derived from prior information) and the likelihood function (derived from the data).

In our analysis, we assume that the numbers of responses $s_i$, when $n_i$ subjects are exposed to dose level $x_i$, are independently distributed as binomial random variables, $\mathrm{Bin}(n_i, p_i)$, where $n_i$ is the number of experimental subjects at dose level $x_i$ and $p_i = P(x_i)$ is the unknown probability of response at dose level $x_i$. Thus, the likelihood at $S = s$ is

$$L(p; s) = \prod_{i=1}^{M} \binom{n_i}{s_i} \, p_i^{s_i} (1 - p_i)^{n_i - s_i}, \qquad (1.1)$$

where $p = (p_1, \ldots, p_M)$, $s = (s_1, \ldots, s_M)$, and $M$ is the number of experiments.
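As a concrete illustration, the likelihood (1.1) is a product of independent binomial probability masses and can be evaluated directly. The sketch below uses hypothetical toy values for $n_i$, $s_i$, and $p_i$, not data from the study:

```python
import math

def likelihood(p, s, n):
    """Binomial likelihood L(p; s) of eq. (1.1): a product over the M
    dose levels of binomial pmf terms C(n_i, s_i) p_i^s_i (1-p_i)^(n_i-s_i)."""
    return math.prod(
        math.comb(ni, si) * pi**si * (1.0 - pi) ** (ni - si)
        for pi, si, ni in zip(p, s, n)
    )

# Hypothetical toy data: M = 3 dose levels.
n = [10, 10, 10]      # subjects per dose level
s = [1, 4, 8]         # observed responses at each dose level
p = [0.1, 0.4, 0.8]   # candidate (non-decreasing) response probabilities
L = likelihood(p, s, n)
```

Since each factor is a probability mass, `L` always lies in $[0, 1]$, and it is maximized over unconstrained $p_i$ at the empirical rates $s_i / n_i$.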

The prior belongs to the nonparametric class of right-continuous, non-decreasing functions taking values in $[0, 1]$. We introduce the Dirichlet process (DP) as a prior distribution on the collection of all probability measures. We define it as follows: a random probability distribution $P$ is generated by the DP if for any partition $B_1, \ldots, B_k$ of the sample space, the vector of random probabilities follows a Dirichlet distribution:

$$\bigl(P(B_1), \ldots, P(B_k)\bigr) \sim \mathrm{Dir}\bigl(\alpha P_0(B_1), \ldots, \alpha P_0(B_k)\bigr).$$
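This defining property can be simulated directly: a Dirichlet vector is obtained by normalizing independent Gamma draws. A minimal sketch, assuming a hypothetical four-set partition with base-measure masses $P_0(B_j)$ (the specific values of `alpha` and `P0` below are illustrative only):

```python
import random

def dirichlet(alphas, rng=random):
    """Draw one sample from Dir(alphas) by normalizing Gamma(a_j, 1) variates."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(g)
    return [x / total for x in g]

# Hypothetical partition of the sample space into k = 4 sets.
alpha = 5.0                    # precision parameter
P0 = [0.1, 0.2, 0.3, 0.4]      # base-measure masses P0(B_j), summing to 1
probs = dirichlet([alpha * m for m in P0])   # one draw of (P(B_1), ..., P(B_4))
```

Averaged over many draws, each component of `probs` approaches the corresponding $P_0(B_j)$, which is the prior-expectation property discussed next.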

Note that in order to specify a DP prior, a precision parameter $\alpha > 0$ and a base distribution $P_0$ are required. Here, $P_0$ defines the prior expectation:

$$E[P(B)] = P_0(B);$$

the parameter $\alpha$ controls the strength of belief, or confidence, in the prior. Thus, a large (small) value of $\alpha$ means that new observations will have a small (large) effect in updating the prior. Indeed, $\alpha P_0$ is sometimes interpreted as the 'equivalent prior observations'. In our study, we assume that $P$ has an ordered Dirichlet distribution, with density at $(p_1, \ldots, p_M)$ as stated below:
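The 'equivalent prior observations' reading of $\alpha P_0$ can be seen from the Dirichlet posterior mean over a fixed partition, $E[P(B_j) \mid \text{data}] = (\alpha P_0(B_j) + c_j)/(\alpha + N)$, in which $\alpha$ plays the role of a prior sample size. A small numerical sketch with hypothetical counts (the two-cell partition and the values of $\alpha$ are illustrative only):

```python
def posterior_mean(alpha, P0, counts):
    """Posterior mean E[P(B_j) | data] under a Dir(alpha * P0) prior:
    (alpha * P0_j + c_j) / (alpha + N), where N is the total count."""
    N = sum(counts)
    return [(alpha * m + c) / (alpha + N) for m, c in zip(P0, counts)]

P0 = [0.5, 0.5]
counts = [9, 1]   # N = 10 observations, mostly falling in the first cell

weak = posterior_mean(1.0, P0, counts)     # small alpha: the data dominate
strong = posterior_mean(100.0, P0, counts) # large alpha: the prior dominates
# weak[0] = 9.5/11 ≈ 0.864, while strong[0] = 59/110 ≈ 0.536,
# which stays close to the prior expectation P0[0] = 0.5.
```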

$$f(p_1, \ldots, p_M) = \frac{\Gamma(\alpha)}{\prod_{i=1}^{M+1} \Gamma(\alpha_i)} \prod_{i=1}^{M+1} (p_i - p_{i-1})^{\alpha_i - 1}, \qquad (1.2)$$

where $\alpha_i = \alpha \bigl( P_0(x_i) - P_0(x_{i-1}) \bigr)$ for $i = 1, \ldots, M+1$, with the conventions $p_0 = 0$ and $p_{M+1} = 1$, and $0 \le p_1 \le \cdots \le p_M \le 1$.
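A draw from the ordered Dirichlet prior (1.2) can be generated constructively: sample the successive increments $p_i - p_{i-1}$ from an ordinary Dirichlet distribution with parameters $(\alpha_1, \ldots, \alpha_{M+1})$ and take cumulative sums. A minimal sketch, assuming (hypothetically) a uniform base distribution over the dose range so that all $\alpha_i$ are equal:

```python
import random

def sample_ordered_p(alphas, rng=random):
    """Draw (p_1, ..., p_M) from the ordered Dirichlet prior (1.2):
    increments (p_i - p_{i-1}) ~ Dir(alpha_1, ..., alpha_{M+1}),
    then p_i is the i-th partial sum (p_0 = 0, p_{M+1} = 1)."""
    g = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(g)
    p, acc = [], 0.0
    for x in g[:-1]:            # last increment is dropped: p_{M+1} = 1
        acc += x / total
        p.append(acc)
    return p

# Hypothetical setup: M = 4 dose levels and a uniform base distribution,
# so alpha_i = alpha / (M + 1) for each of the M + 1 increments.
alpha, M = 2.0, 4
p = sample_ordered_p([alpha / (M + 1)] * (M + 1))
# p is non-decreasing in [0, 1] by construction
```

The monotonicity constraint $p_1 \le \cdots \le p_M$ thus holds automatically, since each $p_i$ adds a non-negative increment to $p_{i-1}$.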

Unfortunately, the joint posterior distribution obtained from (1.1) and (1.2) is too complicated to allow an analytical solution, especially for obtaining the marginals. Therefore, we introduce one of the most commonly used Markov chain Monte Carlo methods, the Gibbs sampler, which can be used to generate the marginal posterior densities. We follow the investigation of Gelfand and Kuo (1991). The Gibbs sampler successively resamples from the following conditional distributions: