Bayesian network
Revision as of 07:02, 3 June 2009
<section begin=glossary />
- (or a Bayesian belief network, BBN): a probabilistic graphical model that represents a set of variables and their probabilistic independencies. For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms. Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. The term "Bayesian networks" was coined by Pearl (1985) to emphasize three aspects:
- The often subjective nature of the input information.
- The reliance on Bayes's conditioning as the basis for updating information.
- The distinction between causal and evidential modes of reasoning, which underscores Thomas Bayes's posthumously published paper of 1763.[1]
- Formally, Bayesian networks are directed acyclic graphs whose nodes represent variables and whose arcs encode conditional independencies between the variables. Nodes can represent any kind of variable, be it a measured parameter, a latent variable or a hypothesis. They are not restricted to representing random variables, which reflects another "Bayesian" aspect of a Bayesian network. Efficient algorithms exist that perform inference and learning in Bayesian networks. Bayesian networks that model sequences of variables (such as speech signals or protein sequences) are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams. <section end=glossary />
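The disease–symptom example above can be sketched as a minimal two-node network, Disease → Symptom, queried by exact enumeration. This is an illustrative sketch, not from the article: the probability values are hypothetical, and the query applies Bayes's conditioning to reason evidentially (from symptom back to disease) against the causal direction of the arc.

```python
# Hypothetical parameters of a two-node Bayesian network: Disease -> Symptom.
p_disease = 0.01                  # prior P(disease)
p_symptom_given_disease = 0.9     # CPT entry P(symptom | disease)
p_symptom_given_healthy = 0.05    # CPT entry P(symptom | no disease)

# The DAG factorizes the joint distribution: P(D, S) = P(D) * P(S | D).
p_joint_sick = p_disease * p_symptom_given_disease
p_joint_healthy = (1 - p_disease) * p_symptom_given_healthy

# Evidential reasoning via Bayes's conditioning:
# P(disease | symptom) = P(disease, symptom) / P(symptom).
p_disease_given_symptom = p_joint_sick / (p_joint_sick + p_joint_healthy)
print(round(p_disease_given_symptom, 4))  # -> 0.1538
```

Even with a strongly indicative symptom, the posterior stays modest because the prior is low — the kind of update the network encodes automatically once the graph and conditional tables are specified.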
- ↑ Thomas Bayes (1763). "An Essay towards solving a Problem in the Doctrine of Chances. By the late Rev. Mr. Bayes, F.R.S., communicated by Mr. Price, in a letter to John Canton, A.M., F.R.S.". Philosophical Transactions of the Royal Society of London 53: 370–418.