Category talk:Exposure-response functions


Which method is the best for dose-response estimation?

How to read discussions

Fact discussion: .
Opening statement: Of the following methods, A is the best for estimating the dose-response of frambozadrine in rats.

Closing statement: Under discussion (to be changed when a conclusion is found)

(A closing statement, when resolved, should be updated to the main page.)

Argumentation:

←--1: . Method A is the best. --Jouni 14:26, 24 October 2007 (EEST) (type: truth; paradigms: science: defence)

⇤--8: . I assume the confidence bands, e.g. in sheet nr 6, mean that you are 90% confident that the probability P(d) of response at dose d lies between the bounds. If you consider the P(d) mixture of binomials at dose d, with the number of animals from the original data, then shouldn't the actual number of responses at dose d lie within the corresponding bounds 90% of the time? I don't think that will be the case. In other words, we could look at these confidence bands as a statistical hypothesis, and it looks to me like it would be rejected on this data (see the numerical sketch after this argument list). --Roger 17:02, 24 October 2007 (EEST) (type: truth; paradigms: science: attack)

⇤--2: . Method B is the best. --Jouni 14:26, 24 October 2007 (EEST) (type: truth; paradigms: science: attack)

←--5: . B recovers the observed uncertainty the best when inversion works out. by Roger Cooke, added by --Jouni 14:26, 24 October 2007 (EEST) (type: truth; paradigms: science: defence)
⇤--6: . Probabilistic inversion is a demanding method and does not converge more often than others. by Roger Cooke, added by --Jouni 14:26, 24 October 2007 (EEST) (type: truth; paradigms: science: attack)
⇤--7: . Jouni didn't say it quite right, PI always converges, but it converges to a SOLUTION only if the problem is feasible. When the PI problem is not feasible, it converges to a 'minimally painful' answer. In this case the PI was feasible for the threshold model. --Roger 16:25, 24 October 2007 (EEST) (type: truth; paradigms: science: attack)

⇤--3: . Method C is the best. --Jouni 14:26, 24 October 2007 (EEST) (type: truth; paradigms: science: attack)

⇤--4: . Method D is the best. --Jouni 14:26, 24 October 2007 (EEST) (type: truth; paradigms: science: attack)
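
A minimal numerical sketch of the coverage check proposed in argument #8. The dose groups, response counts and the ensemble of candidate curves below are illustrative placeholders, not the actual frambozadrine data or the fitted models from the sheets; only the logic follows the argument: draw P(d) from the uncertainty cloud, draw a binomial count with the group size, and check whether the observed count falls within the 90% predictive bounds.

```python
# Illustrative coverage check in the spirit of argument #8 (all numbers are placeholders).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dose groups: dose, animals per group, observed responders.
doses     = np.array([0.0, 1.2, 15.0, 82.0])
n_animals = np.array([47, 45, 44, 47])
observed  = np.array([2, 6, 14, 25])

# Stand-in for the uncertainty analysis: an ensemble of plausible dose-response
# curves, here arbitrary Weibull-type curves P(d) = b + (1-b)*(1 - exp(-(q*d)**k)).
def curve(d, b, q, k):
    return b + (1.0 - b) * (1.0 - np.exp(-(q * d) ** k))

ens = np.array([curve(doses, b, q, k)
                for b, q, k in zip(rng.uniform(0.01, 0.1, 2000),
                                   rng.uniform(0.005, 0.03, 2000),
                                   rng.uniform(0.7, 1.5, 2000))])

# Predictive distribution of the *count* at each dose: draw P(d) from the ensemble
# (the mixture of binomials) and then a binomial count with the group size.
counts = rng.binomial(n_animals, ens)                 # shape (2000, number of doses)
lo, hi = np.percentile(counts, [5, 95], axis=0)       # 90% predictive bounds

# The check: do the observed counts fall inside the 90% predictive bounds?
inside = (observed >= lo) & (observed <= hi)
print("90% predictive bounds on counts:", list(zip(lo, hi)))
print("observed counts inside bounds:  ", inside)
```

If the observed counts fall outside such bounds clearly more often than 10% of the time, the bands fail the test in the sense Roger describes; the section below questions whether this is the right test for bounds on the curve.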

Unorganised thoughts about argument #8 and related issues by --Jouni 14:39, 26 October 2007 (EEST)

There is something wrong with Roger's reasoning, but I have difficulty understanding what it is. Think about this: we add dose groups to the frambozadrine study in such a way that we put five-animal groups in between the other groups. The realisation of such a small group is very likely to go beyond the 95% CI of the CURVE. The confidence bounds are mainly determined by the large study groups, and the bounds are NOT predicting a single study point. Why not? Because information also comes from the other doses: we know that the true curve cannot take just any shape within the confidence bounds. In other words, the probability distribution of the response at dose d + delta d is much narrower if the response at dose d is known than if it is not known.

This leads me to ask whether the correlation between responses at close doses should be measured, and whether we could use that correlation to describe our DR function. If we systematically measured this in well-established data such as drug research, we would learn a lot about the general flexibility of DR functions. Maybe we could even use this measure to derive non-parametric DR curves. (Is this actually the Taylor series, where we approach the question by looking at the curve through its derivatives?)
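
A minimal sketch of the measurement suggested in the two paragraphs above, using an illustrative ensemble of curves (arbitrary Weibull-type curves, not a fitted model): it checks that the spread of P(d + delta d) shrinks once P(d) is approximately known, and computes the correlation between responses at doses a given distance apart.

```python
# Illustrative check of conditional narrowing and dose-to-dose correlation
# across an ensemble of plausible dose-response curves (all values are placeholders).
import numpy as np

rng = np.random.default_rng(1)
dose_grid = np.linspace(0.0, 100.0, 21)

def curve(d, b, q, k):
    return b + (1.0 - b) * (1.0 - np.exp(-(q * d) ** k))

ens = np.array([curve(dose_grid, b, q, k)
                for b, q, k in zip(rng.uniform(0.01, 0.1, 5000),
                                   rng.uniform(0.005, 0.03, 5000),
                                   rng.uniform(0.7, 1.5, 5000))])

i, j = 10, 11                                  # two neighbouring doses on the grid
p_i, p_j = ens[:, i], ens[:, j]

# Spread of P(d_j) over the whole ensemble versus its spread among curves whose
# P(d_i) is close to the ensemble median, i.e. conditional on (roughly) knowing P(d_i).
near = np.abs(p_i - np.median(p_i)) < 0.02
print("marginal sd of P(d_j):   ", round(p_j.std(), 4))
print("conditional sd of P(d_j):", round(p_j[near].std(), 4))

# Correlation between responses at pairs of doses, as a function of their distance.
corr = np.corrcoef(ens.T)                      # 21 x 21 correlation matrix over the ensemble
for lag in (1, 2, 5, 10):
    vals = [corr[a, a + lag] for a in range(len(dose_grid) - lag)]
    print(f"mean correlation at dose lag {lag}: {np.mean(vals):.3f}")
```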

What does the confidence bound actually mean? My Bayesian interpretation still sounds reasonable: the probability is 95% that the true curve goes through that interval. But can you somehow test this? If you made a new study with a single dose, its confidence bounds should be larger. Could we somehow say that, because of the information from the other dose groups, the degrees of freedom have increased, and that the study we do to test the realism of the confidence bound should therefore have more animals per group (if we have only one dose group) than the original study?

This does not sound convincing, however. And it does not help us evaluate the confidence bounds with the data we have; it just tells us what data we would need in order to test the precision of the bounds.

What about this? Based on the predicted cloud of dose-responses, we should be able to predict the total probability of the observations: P(R1|d1) * P(R2|d2,R1) * P(R3|d3,R1,R2) * ..., where Ri is the response at dose di. What Roger is calculating is P(R1|d1) * P(R2|d2) * P(R3|d3) * ... In conclusion, we need the conditional probabilities of responses given the responses at the other dose levels. Can we derive a function that describes this dependency? That would actually tell us something about the extrapolation issue as well. We could talk about that interpolation/extrapolation function instead of trying to discuss the good and bad mathematical properties of Weibull and multistage... that discussion has gone on for years, and there is little hope that biology will solve the issue. Rather, it actually misses most of the issues that would be biologically relevant.
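
A minimal numerical sketch of the difference between the two products above, assuming that, given one particular curve from the predicted cloud, the dose groups are independent binomial experiments: the conditional product then equals the ensemble average of the product of binomial terms (the doses stay tied together by the shared curve), while the product of marginals averages each binomial term separately before multiplying. All curves and counts are illustrative placeholders.

```python
# Illustrative comparison: joint probability (conditional product) versus the
# product of marginal probabilities, for a cloud of candidate curves.
import numpy as np
from scipy.stats import binom

rng = np.random.default_rng(2)

doses     = np.array([0.0, 1.2, 15.0, 82.0])   # hypothetical groups
n_animals = np.array([47, 45, 44, 47])
observed  = np.array([2, 6, 14, 25])

def curve(d, b, q, k):
    return b + (1.0 - b) * (1.0 - np.exp(-(q * d) ** k))

ens = np.array([curve(doses, b, q, k)
                for b, q, k in zip(rng.uniform(0.01, 0.1, 5000),
                                   rng.uniform(0.005, 0.03, 5000),
                                   rng.uniform(0.7, 1.5, 5000))])

# Binomial probability of each observed count, for every curve in the cloud.
terms = binom.pmf(observed, n_animals, ens)        # shape (5000, number of doses)

joint_prob    = np.mean(np.prod(terms, axis=1))    # P(R1|d1)*P(R2|d2,R1)*... (dependence kept)
marginal_prod = np.prod(np.mean(terms, axis=0))    # P(R1|d1)*P(R2|d2)*...    (dependence dropped)

print("joint probability (conditional product):", joint_prob)
print("product of marginal probabilities:      ", marginal_prod)
```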

What do we know about this interpolation function? It is most likely continuous. Its derivative is unlikely to be very large or much below zero. Its second derivative is probably in a rather mild range. About the third and higher derivatives I cannot say anything. However, can the derivatives be assumed to be constant along the dose range? If the second derivative could take two values, changing at a cutpoint, that would pretty much cover all plausible curves. So, with a background, a slope, two curvature parameters and a cutpoint (five parameters), could we describe what we know about any dose-response?
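
One possible reading of this five-parameter description, as a sketch: a constant second derivative below the cutpoint and another above it makes the curve piecewise quadratic in dose, joined so that both the value and the slope are continuous at the cutpoint, and clipped to valid probabilities. All parameter values below are arbitrary.

```python
# Illustrative five-parameter curve: background, slope, two curvatures and a cutpoint.
import numpy as np

def five_param_curve(d, background, slope, curv1, curv2, cutpoint):
    d = np.asarray(d, dtype=float)
    # Piece below the cutpoint: constant second derivative curv1.
    below = background + slope * d + 0.5 * curv1 * d ** 2
    # Value and slope of the lower piece at the cutpoint, so the pieces join smoothly.
    p_cut = background + slope * cutpoint + 0.5 * curv1 * cutpoint ** 2
    s_cut = slope + curv1 * cutpoint
    # Piece above the cutpoint: constant second derivative curv2.
    above = p_cut + s_cut * (d - cutpoint) + 0.5 * curv2 * (d - cutpoint) ** 2
    return np.clip(np.where(d <= cutpoint, below, above), 0.0, 1.0)

doses = np.linspace(0.0, 100.0, 11)
print(five_param_curve(doses, background=0.04, slope=0.002,
                       curv1=1e-4, curv2=-6e-5, cutpoint=40.0).round(3))
```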

We could derive a dose-response based on any functional form and, based on that estimate, then produce simulated data (bootstrapping?) from which these five parameters would be derived. This way, we would be able to collect comparable information about any dose-response, irrespective of the original data or the dose-response function used.
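
A minimal sketch of this resampling idea, under the same illustrative assumptions as the sketches above (the "fitted" curve, dose groups and group sizes are placeholders): simulate response counts from any fitted curve and summarise each simulated data set with the five parameters of the piecewise-quadratic description, so that the summaries are comparable regardless of the functional form used in the original fit.

```python
# Illustrative bootstrap-style extraction of the five parameters from simulated data.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(3)
doses     = np.array([0.0, 5.0, 10.0, 20.0, 40.0, 60.0, 80.0, 100.0])
n_animals = np.full(len(doses), 50)

# "Any" fitted dose-response curve; an arbitrary Weibull-type curve stands in here.
fitted = lambda d: 0.05 + 0.95 * (1.0 - np.exp(-(0.01 * d) ** 1.1))

def five_param_curve(d, background, slope, curv1, curv2, cutpoint):
    below = background + slope * d + 0.5 * curv1 * d ** 2
    p_cut = background + slope * cutpoint + 0.5 * curv1 * cutpoint ** 2
    s_cut = slope + curv1 * cutpoint
    above = p_cut + s_cut * (d - cutpoint) + 0.5 * curv2 * (d - cutpoint) ** 2
    return np.clip(np.where(d <= cutpoint, below, above), 0.0, 1.0)

estimates = []
for _ in range(200):                                  # bootstrap-style replications
    k = rng.binomial(n_animals, fitted(doses))        # simulated responders per group
    try:
        est, _ = curve_fit(five_param_curve, doses, k / n_animals,
                           p0=[0.05, 0.005, 0.0, 0.0, 35.0], maxfev=5000)
        estimates.append(est)
    except RuntimeError:
        pass                                          # skip replications that do not converge
estimates = np.array(estimates)
print("replications kept:", len(estimates))
print("parameter means (background, slope, curv1, curv2, cutpoint):",
      estimates.mean(axis=0).round(4))
```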