pub enum BayesianNetworkAlgorithm {
Lauritzen,
LinearRelevanceDecomposition,
RecursiveRelevanceDecomposition,
EstimatedPosteriorImportanceSampling,
AdaptiveImportanceSampling,
LoopyBeliefPropagation,
LikelihoodSampling,
ProbabilisticLogicSampling,
}
Bayesian Network algorithms
All Bayesian Network algorithms produce marginal probability distributions over all network [super::Node]s.
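As a sketch of how a caller might choose among the variants (the heuristic and its thresholds below are illustrative assumptions, not guidance from this crate):

fn choose_algorithm(node_count: usize, has_evidence: bool) -> BayesianNetworkAlgorithm {
    if node_count < 1_000 {
        // Exact inference; sufficient for most applications (see below).
        BayesianNetworkAlgorithm::Lauritzen
    } else if has_evidence {
        // A sampling algorithm that handles observed evidence well (see below).
        BayesianNetworkAlgorithm::EstimatedPosteriorImportanceSampling
    } else {
        // Forward sampling is very efficient when no evidence is observed (see below).
        BayesianNetworkAlgorithm::ProbabilisticLogicSampling
    }
}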
Variants
Lauritzen
Clustering algorithm originally proposed by Lauritzen and Spiegelhalter (1988) and improved by several researchers, e.g., Jensen et al. (1990) or Dawid (1992). It is the fastest known exact algorithm for belief updating in Bayesian networks.
The clustering algorithm works in two phases:
- compilation of a directed graph into a junction tree
- probability updating in the junction tree
This algorithm should be sufficient for most applications. Only when networks become very large and complex may the clustering algorithm be too slow. In that case, we suggest choosing an approximate algorithm, such as one of the stochastic sampling algorithms.
For more information, refer to https://support.bayesfusion.com/docs/GeNIe/algorithms_clustering.html.
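The two phases can be pictured with hypothetical stand-ins (none of the types or functions below belong to this crate):

struct Dag;
struct JunctionTree;

/// Phase 1 (compilation): moralize the DAG (connect each node's parents and
/// drop edge directions), triangulate the moral graph, and arrange the
/// resulting cliques into a junction tree.
fn compile(_network: &Dag) -> JunctionTree {
    JunctionTree
}

/// Phase 2 (updating): absorb evidence into the clique potentials and run
/// collect/distribute message passes over the tree, producing exact
/// marginals for every node.
fn update_beliefs(_tree: &mut JunctionTree) {}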
LinearRelevanceDecomposition
Relevance-based decomposition is an exact algorithm, based on the clustering algorithm, that decomposes the network when it is very large.
The algorithm was described in (Lin & Druzdzel, 1997). Relevance-based decomposition extends the boundary of what is computable, while gracefully reducing to the clustering algorithm for small networks. Because decomposition carries some overhead, we suggest using this algorithm on a case-by-case basis, only when the clustering algorithm cannot handle your network.
RecursiveRelevanceDecomposition
Recursive implementation of Relevance-based decomposition.
For more information, refer to BayesianNetworkAlgorithm::LinearRelevanceDecomposition.
EstimatedPosteriorImportanceSampling
The Estimated Posterior Importance Sampling (EPIS) algorithm is described in (Yuan & Druzdzel, 2003). It is quite likely the best stochastic sampling algorithm available for discrete Bayesian networks. It produces results that are even more precise than those produced by the BayesianNetworkAlgorithm::AdaptiveImportanceSampling algorithm and, for some networks, results that are an order of magnitude more precise.
The algorithm uses loopy belief propagation to compute an estimate of the posterior probability over all nodes of the network and then uses importance sampling to refine this estimate. In addition to being more precise, it is also faster than the BayesianNetworkAlgorithm::AdaptiveImportanceSampling algorithm, as it avoids the costly learning stage of the latter.
For more information, refer to https://support.bayesfusion.com/docs/GeNIe/algorithms_epis.html.
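The refinement step can be pictured with a generic self-normalized importance-sampling estimator; in EPIS, the importance distribution is derived from the loopy-belief-propagation estimate. Everything below is an illustrative sketch, not this crate's API:

/// Samples of a binary query node drawn from an importance distribution q,
/// each carrying the weight p(x) / q(x) of the target relative to q.
fn posterior_estimate(samples: &[(bool, f64)]) -> f64 {
    // Self-normalization: divide the weighted count of `true` states by the
    // total weight, estimating P(query = true | evidence).
    let total: f64 = samples.iter().map(|s| s.1).sum();
    let hits: f64 = samples.iter().filter(|s| s.0).map(|s| s.1).sum();
    hits / total
}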
AdaptiveImportanceSampling
The Adaptive Importance Sampling (AIS) algorithm is described in (Cheng & Druzdzel, 2000). This algorithm offered a breakthrough in the field of stochastic sampling algorithms when first published in 2000. In really difficult cases, such as reasoning under very unlikely evidence in very large networks, the AIS algorithm produced two orders of magnitude smaller error in posterior probability distributions than other sampling algorithms available at that time. Improvements in speed at a given precision were even more dramatic.
The AIS algorithm is based on importance sampling. According to the theory of importance sampling, the closer the sampling distribution is to the (unknown) posterior distribution, the better the results. The AIS algorithm successively brings its sampling distribution closer to the posterior distribution by using two cleverly designed heuristic methods in its first stage, which leads to the big improvement in performance stated above.
For more information, refer to https://support.bayesfusion.com/docs/GeNIe/algorithms_ais.html.
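The importance-sampling identity that AIS builds on, E_p[f(X)] = E_q[f(X) · p(X)/q(X)], can be demonstrated on a toy problem: estimating the mean of a biased coin (p: heads with probability 0.9) while sampling from a different coin (q: probability 0.5). The small linear-congruential RNG keeps the sketch dependency-free; nothing here is part of this crate.

fn main() {
    let (p_heads, q_heads) = (0.9_f64, 0.5_f64);
    let mut state: u64 = 42;
    let mut rand = || {
        // Knuth's MMIX LCG, mapped to a float in [0, 1).
        state = state.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (state >> 11) as f64 / (1u64 << 53) as f64
    };
    let n = 100_000;
    let mut acc = 0.0;
    for _ in 0..n {
        let heads = rand() < q_heads; // sample from q
        let (p_x, q_x) = if heads { (p_heads, q_heads) } else { (1.0 - p_heads, 1.0 - q_heads) };
        let f = if heads { 1.0 } else { 0.0 };
        acc += f * p_x / q_x; // reweight toward p
    }
    println!("estimate of P(heads) under p: {:.3}", acc / n as f64); // ≈ 0.900
}

Had q been closer to p (say, 0.8 instead of 0.5), the weights would vary less and the estimate's variance would drop; AIS's adaptive first stage pursues exactly this.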
LoopyBeliefPropagation
An approximate algorithm that applies Pearl's belief propagation message passing directly to networks containing loops. Messages are passed iteratively until they converge or an iteration limit is reached; convergence is not guaranteed, but the resulting marginals are often good approximations in practice.
LikelihoodSampling
The likelihood sampling algorithm attempts to improve the efficiency of the BayesianNetworkAlgorithm::ProbabilisticLogicSampling algorithm by instantiating only non-evidence nodes. Each sample is weighted by the likelihood of the evidence given the partial sample generated. It is a simple algorithm with little overhead that generally performs well, and certainly better than Probabilistic Logic Sampling when evidence has been observed.
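A minimal sketch of likelihood weighting on a two-node network A → B with evidence B = true; the network parameters and the tiny RNG are illustrative assumptions, not part of this crate:

fn main() {
    let p_a = 0.3; // P(A = true)
    let p_b_given_a = [0.1, 0.8]; // P(B = true | A = false), P(B = true | A = true)

    let mut state: u64 = 7;
    let mut rand = || {
        state = state.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (state >> 11) as f64 / (1u64 << 53) as f64
    };

    let (mut num, mut den) = (0.0, 0.0);
    for _ in 0..100_000 {
        let a = rand() < p_a;            // instantiate only the non-evidence node
        let w = p_b_given_a[a as usize]; // weight by the likelihood of the evidence
        if a { num += w; }
        den += w;
    }
    // Exact answer: 0.3*0.8 / (0.3*0.8 + 0.7*0.1) ≈ 0.774
    println!("P(A = true | B = true) ≈ {:.3}", num / den);
}

No samples are discarded; every iteration contributes to the weighted estimate.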
ProbabilisticLogicSampling
The probabilistic logic sampling algorithm is based on forward (i.e., according to the weak ordering implied by the directed graph) generation of instantiations of nodes guided by their prior probability. If a generated instantiation of an evidence node is different from its observed value, then the entire sample is discarded. This makes the algorithm inefficient if the prior probability of evidence is low. The algorithm is very efficient in cases when no evidence has been observed or the evidence is very likely.
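For contrast, a sketch of probabilistic logic sampling on the same illustrative two-node network, showing how samples inconsistent with the evidence are discarded (again, all numbers and helpers are assumptions, not crate API):

fn main() {
    let p_a = 0.3;
    let p_b_given_a = [0.1, 0.8];

    let mut state: u64 = 7;
    let mut rand = || {
        state = state.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407);
        (state >> 11) as f64 / (1u64 << 53) as f64
    };

    let (mut kept, mut a_true) = (0u32, 0u32);
    for _ in 0..100_000 {
        let a = rand() < p_a;                     // forward-sample A from its prior
        let b = rand() < p_b_given_a[a as usize]; // then B, given A
        if !b { continue; }                       // contradicts evidence B = true: discard
        kept += 1;
        if a { a_true += 1; }
    }
    // P(B = true) = 0.31, so roughly 69% of the samples are discarded here.
    println!("kept {kept} of 100000; P(A = true | B = true) ≈ {:.3}", a_true as f64 / kept as f64);
}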
Implementations
impl BayesianNetworkAlgorithm
pub const AIS: Self = Self::AdaptiveImportanceSampling
An alias for BayesianNetworkAlgorithm::AdaptiveImportanceSampling.
pub const Henrion: Self = Self::ProbabilisticLogicSampling
An alias for BayesianNetworkAlgorithm::ProbabilisticLogicSampling.
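Since the aliases are plain associated constants, they compare equal to the variants they name (using the PartialEq implementation documented below):

assert!(BayesianNetworkAlgorithm::AIS == BayesianNetworkAlgorithm::AdaptiveImportanceSampling);
assert!(BayesianNetworkAlgorithm::Henrion == BayesianNetworkAlgorithm::ProbabilisticLogicSampling);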
Trait Implementations
impl Default for BayesianNetworkAlgorithm
fn default() -> BayesianNetworkAlgorithm
impl PartialEq for BayesianNetworkAlgorithm
fn eq(&self, other: &BayesianNetworkAlgorithm) -> bool
This method tests for self and other values to be equal, and is used by ==.
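Together, Default and PartialEq allow code like the following; which variant default() returns is not specified in this documentation, so the comparison is purely illustrative:

let algorithm = BayesianNetworkAlgorithm::default();
if algorithm == BayesianNetworkAlgorithm::Lauritzen {
    // the exact clustering algorithm was selected
}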