Enum nice_smile::network::BayesianNetworkAlgorithm

pub enum BayesianNetworkAlgorithm {
    Lauritzen,
    LinearRelevanceDecomposition,
    RecursiveRelevanceDecomposition,
    EstimatedPosteriorImportanceSampling,
    AdaptiveImportanceSampling,
    LoopyBeliefPropagation,
    LikelihoodSampling,
    ProbabilisticLogicSampling,
}

Bayesian Network algorithms

All Bayesian Network algorithms produce marginal probability distributions over all network [super::Node]s.

Variants

Lauritzen

Clustering algorithm originally proposed by Lauritzen and Spiegelhalter (1988) and improved by several researchers, e.g., Jensen et al. (1990) or Dawid (1992). It is the fastest known exact algorithm for belief updating in Bayesian networks.

The clustering algorithm works in two phases:

  1. compilation of a directed graph into a junction tree
  2. probability updating in the junction tree

This algorithm should be sufficient for most applications. Only when networks become very large and complex may the clustering algorithm not be fast enough. In that case, we suggest choosing an approximate algorithm, such as one of the stochastic sampling algorithms.

For more information, refer to https://support.bayesfusion.com/docs/GeNIe/algorithms_clustering.html.
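The junction-tree machinery itself is too involved for a short example, but the exact marginals it computes can be illustrated on a tiny network by direct enumeration. This is a sketch only: the two-node network and its CPT values are made up, and the brute-force sum below is what the clustering algorithm computes efficiently for much larger networks, not the algorithm itself.

```rust
// Exact marginal on a hypothetical two-node network A -> B by enumeration.

/// P(A = true) (illustrative value)
const P_A: f64 = 0.3;
/// P(B = true | A): index 0 = A false, index 1 = A true (illustrative)
const P_B_GIVEN_A: [f64; 2] = [0.1, 0.8];

/// Marginal P(B = true) = sum over states of A of P(A) * P(B = true | A).
fn marginal_b() -> f64 {
    (1.0 - P_A) * P_B_GIVEN_A[0] + P_A * P_B_GIVEN_A[1]
}

fn main() {
    // 0.7 * 0.1 + 0.3 * 0.8 = 0.31
    println!("P(B = true) = {:.3}", marginal_b());
}
```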

LinearRelevanceDecomposition

Relevance-based decomposition is an exact algorithm based on the clustering algorithm that performs a decomposition of the network when the network is very large.

The algorithm was described in (Lin & Druzdzel, 1997). Relevance-based decomposition extends the boundary of what is computable, while gracefully changing into the clustering algorithm for small networks. Because there is some overhead related to decomposition, we suggest that this algorithm be used on a case-by-case basis, only when the clustering algorithm cannot handle your network.

RecursiveRelevanceDecomposition

Recursive implementation of Relevance-based decomposition.

For more information, refer to BayesianNetworkAlgorithm::LinearRelevanceDecomposition.

EstimatedPosteriorImportanceSampling

The Estimated Posterior Importance Sampling (EPIS) algorithm is described in (Yuan & Druzdzel, 2003). It is quite likely the best stochastic sampling algorithm available for discrete Bayesian networks. It produces results that are even more precise than those of the BayesianNetworkAlgorithm::AdaptiveImportanceSampling algorithm, and for some networks its results are an order of magnitude more precise.

The algorithm uses loopy belief propagation to compute an estimate of the posterior probability over all nodes of the network and then uses importance sampling to refine this estimate. In addition to being more precise, it is also faster than the BayesianNetworkAlgorithm::AdaptiveImportanceSampling algorithm, as it avoids the costly learning stage of the latter.

For more information, refer to https://support.bayesfusion.com/docs/GeNIe/algorithms_epis.html.

AdaptiveImportanceSampling

The Adaptive Importance Sampling (AIS) algorithm is described in (Cheng & Druzdzel, 2000). It offered a breakthrough in the field of stochastic sampling algorithms when first published in 2000. In really difficult cases, such as reasoning under very unlikely evidence in very large networks, the AIS algorithm produced errors in posterior probability distributions two orders of magnitude smaller than those of other sampling algorithms available at the time. Improvements in speed at a given desired precision were even more dramatic.

The AIS algorithm is based on importance sampling. According to the theory of importance sampling, the closer the sampling distribution is to the (unknown) posterior distribution, the better the results. The AIS algorithm brings its sampling distribution close to the posterior distribution by using two cleverly designed heuristic methods in its first stage, which leads to the large performance improvement described above.

For more information, refer to https://support.bayesfusion.com/docs/GeNIe/algorithms_ais.html.
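The importance-sampling idea that AIS builds on can be sketched in a few lines: draw samples from a proposal distribution q and weight each one by p/q, so the weighted average remains unbiased for the target p; the closer q is to p, the lower the variance. Everything below is illustrative: the distributions are made up and the `Lcg` type is a stand-in RNG, not part of this crate.

```rust
// Tiny deterministic linear congruential generator, used here only to
// keep the sketch dependency-free; use a real RNG in practice.
struct Lcg(u64);
impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

/// Estimate P(A = true) under a target distribution p while sampling A
/// from a proposal distribution q, correcting with weights p/q.
fn importance_estimate(p_true: f64, q_true: f64, n: usize) -> f64 {
    let mut rng = Lcg(42);
    let mut weighted_sum = 0.0;
    for _ in 0..n {
        let a = rng.next_f64() < q_true; // sample from the proposal q
        let (p, q) = if a {
            (p_true, q_true)
        } else {
            (1.0 - p_true, 1.0 - q_true)
        };
        let w = p / q; // importance weight
        if a {
            weighted_sum += w; // indicator(A = true) * weight
        }
    }
    weighted_sum / n as f64
}

fn main() {
    // Target P(A = true) = 0.3, proposal 0.5: the estimate lands near 0.3.
    println!("estimate = {:.3}", importance_estimate(0.3, 0.5, 100_000));
}
```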

LoopyBeliefPropagation

Loopy belief propagation applies Pearl's message-passing (belief propagation) algorithm, which is exact on singly connected networks, iteratively to networks that contain loops. On such networks it is an approximate algorithm, but it often converges to good estimates of the posterior marginals. It is also the algorithm that BayesianNetworkAlgorithm::EstimatedPosteriorImportanceSampling uses to compute its initial estimate of the posterior.

LikelihoodSampling

The likelihood sampling algorithm attempts to improve on the efficiency of the BayesianNetworkAlgorithm::ProbabilisticLogicSampling algorithm by instantiating only non-evidence nodes. Each sample is weighted by the likelihood of the evidence given the partial sample generated. It is a simple algorithm with little overhead that generally performs well, and certainly better than probabilistic logic sampling when evidence has been observed.
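A minimal sketch of this weighting scheme, on a hypothetical two-node network A → B with evidence B = true: only A is sampled (from its prior), every sample is kept, and each is weighted by P(B = true | A). The `Lcg` RNG and all numbers are illustrative, not part of this crate.

```rust
// Dependency-free stand-in RNG for the sketch; use a real RNG in practice.
struct Lcg(u64);
impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

const P_A: f64 = 0.3; // P(A = true), illustrative
const P_B_GIVEN_A: [f64; 2] = [0.1, 0.8]; // P(B = true | A), A = false/true

/// Estimate P(A = true | B = true) by likelihood weighting.
fn likelihood_weighting(n: usize) -> f64 {
    let mut rng = Lcg(7);
    let (mut num, mut den) = (0.0, 0.0);
    for _ in 0..n {
        let a = rng.next_f64() < P_A; // instantiate only the non-evidence node A
        let w = P_B_GIVEN_A[a as usize]; // weight = likelihood of the evidence
        if a {
            num += w;
        }
        den += w; // every sample is kept, none discarded
    }
    num / den
}

fn main() {
    // Exact posterior: 0.3 * 0.8 / (0.3 * 0.8 + 0.7 * 0.1) ≈ 0.774.
    println!("P(A | B = true) ≈ {:.3}", likelihood_weighting(100_000));
}
```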

ProbabilisticLogicSampling

The probabilistic logic sampling algorithm is based on forward generation of node instantiations (i.e., in the topological order implied by the directed graph), guided by their prior probabilities. If the generated instantiation of an evidence node differs from its observed value, the entire sample is discarded. This makes the algorithm inefficient when the prior probability of the evidence is low. It is very efficient when no evidence has been observed or the evidence is very likely.
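The forward-sample-and-discard scheme, and its sensitivity to unlikely evidence, can be sketched on a hypothetical two-node network A → B with evidence B = true. The `Lcg` RNG and all numbers are illustrative, not part of this crate.

```rust
// Dependency-free stand-in RNG for the sketch; use a real RNG in practice.
struct Lcg(u64);
impl Lcg {
    fn next_f64(&mut self) -> f64 {
        self.0 = self
            .0
            .wrapping_mul(6364136223846793005)
            .wrapping_add(1442695040888963407);
        (self.0 >> 11) as f64 / (1u64 << 53) as f64
    }
}

const P_A: f64 = 0.3; // P(A = true), illustrative
const P_B_GIVEN_A: [f64; 2] = [0.1, 0.8]; // P(B = true | A), A = false/true

/// Estimate P(A = true | B = true) by logic sampling.
/// Returns (estimate, fraction of samples kept).
fn logic_sampling(n: usize) -> (f64, f64) {
    let mut rng = Lcg(11);
    let (mut kept, mut a_true) = (0u32, 0u32);
    for _ in 0..n {
        let a = rng.next_f64() < P_A; // forward: sample A from its prior
        let b = rng.next_f64() < P_B_GIVEN_A[a as usize]; // then B given A
        if b {
            // generated value of the evidence node matches the observation
            kept += 1;
            if a {
                a_true += 1;
            }
        } // otherwise the entire sample is discarded
    }
    (a_true as f64 / kept as f64, kept as f64 / n as f64)
}

fn main() {
    let (est, kept) = logic_sampling(100_000);
    // Exact posterior ≈ 0.774; only about 31% of samples survive here,
    // since P(B = true) = 0.31.
    println!("P(A | B = true) ≈ {:.3}, kept {:.0}% of samples", est, kept * 100.0);
}
```

With these numbers only about a third of the samples survive; with rarer evidence the kept fraction, and hence the effective sample size, shrinks proportionally, which is exactly the inefficiency described above.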

Implementations

impl BayesianNetworkAlgorithm

pub const EPIS: Self = Self::EstimatedPosteriorImportanceSampling

pub const AIS: Self = Self::AdaptiveImportanceSampling

pub const Henrion: Self = Self::ProbabilisticLogicSampling

pub fn is_exact(&self) -> bool

Returns true if the algorithm is an exact algorithm.

Trait Implementations

impl Default for BayesianNetworkAlgorithm

fn default() -> BayesianNetworkAlgorithm

Returns the “default value” for a type.

impl PartialEq for BayesianNetworkAlgorithm

source§

fn eq(&self, other: &BayesianNetworkAlgorithm) -> bool

This method tests for self and other values to be equal, and is used by ==.

fn ne(&self, other: &Rhs) -> bool

This method tests for !=. The default implementation is almost always sufficient, and should not be overridden without very good reason.

impl Eq for BayesianNetworkAlgorithm

impl StructuralPartialEq for BayesianNetworkAlgorithm

Auto Trait Implementations

Blanket Implementations

impl<T> Any for T
where T: 'static + ?Sized,

fn type_id(&self) -> TypeId

Gets the TypeId of self.

impl<T> Borrow<T> for T
where T: ?Sized,

fn borrow(&self) -> &T

Immutably borrows from an owned value.

impl<T> BorrowMut<T> for T
where T: ?Sized,

fn borrow_mut(&mut self) -> &mut T

Mutably borrows from an owned value.

impl<T> From<T> for T

fn from(t: T) -> T

Returns the argument unchanged.

impl<T, U> Into<U> for T
where U: From<T>,

fn into(self) -> U

Calls U::from(self).

That is, this conversion is whatever the implementation of From<T> for U chooses to do.

impl<T, U> TryFrom<U> for T
where U: Into<T>,

type Error = Infallible

The type returned in the event of a conversion error.

fn try_from(value: U) -> Result<T, <T as TryFrom<U>>::Error>

Performs the conversion.

impl<T, U> TryInto<U> for T
where U: TryFrom<T>,

type Error = <U as TryFrom<T>>::Error

The type returned in the event of a conversion error.

fn try_into(self) -> Result<U, <U as TryFrom<T>>::Error>

Performs the conversion.