use crate::traits::ToDslValue;

use smile::{
    autocxx::c_int,
    ffi::{
        DSL_ALG_BN_AISSAMPLING, DSL_ALG_BN_EPISSAMPLING, DSL_ALG_BN_HENRION, DSL_ALG_BN_LAURITZEN,
        DSL_ALG_BN_LBP, DSL_ALG_BN_LSAMPLING, DSL_ALG_BN_RELEVANCEDECOMP,
        DSL_ALG_BN_RELEVANCEDECOMP2, DSL_ALG_ID_COOPERSOLVING, DSL_ALG_ID_SHACHTER,
    },
};

/// Bayesian Network algorithms
///
/// All Bayesian Network algorithms produce marginal probability distributions over all network
/// [super::Node]s.
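///
/// # Example
///
/// A minimal sketch of selecting an algorithm for a network. The `Network::new` constructor and
/// the crate path `your_crate` are placeholders for whatever this crate actually exposes; the
/// setter is [super::Network::set_bayesian_network_algorithm].
///
/// ```ignore
/// use your_crate::{BayesianNetworkAlgorithm, Network};
///
/// let mut net = Network::new();
/// // Exact inference (Lauritzen) is the default; switch to a stochastic
/// // sampling algorithm when the network is too large for exact updating.
/// net.set_bayesian_network_algorithm(BayesianNetworkAlgorithm::EPIS);
/// ```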
#[derive(Default, PartialEq, Eq)]
pub enum BayesianNetworkAlgorithm {
    /// Clustering algorithm originally proposed by Lauritzen and Spiegelhalter (1988) and improved by several researchers, e.g., Jensen et al. (1990) or Dawid (1992).
    /// It is the fastest known exact algorithm for belief updating in Bayesian networks.
    ///
    /// The clustering algorithm works in two phases:
    /// 1. compilation of the directed graph into a junction tree
    /// 2. probability updating in the junction tree
    ///
    /// This algorithm should be sufficient for most applications; only when networks become very large and complex may it not be fast enough. In that case, we suggest choosing an approximate algorithm, such as one of the stochastic sampling algorithms.
    ///
    /// For more information, refer to <https://support.bayesfusion.com/docs/GeNIe/algorithms_clustering.html>.
    #[default]
    Lauritzen,

    /// Relevance-based decomposition is an exact algorithm based on the clustering algorithm that performs a decomposition of the network when the network is very large.
    ///
    /// The algorithm was described in (Lin & Druzdzel, 1997).
    /// Relevance-based decomposition extends the boundary of what is computable, while gracefully changing into the clustering algorithm for small networks.
    /// Because there is some overhead related to decomposition, we suggest that this algorithm be used on a case-by-case basis, only when the clustering algorithm cannot handle your network.
    LinearRelevanceDecomposition,

    /// Recursive implementation of Relevance-based decomposition.
    ///
    /// For more information, refer to [BayesianNetworkAlgorithm::LinearRelevanceDecomposition].
    RecursiveRelevanceDecomposition,

    /// The Estimated Posterior Importance Sampling (EPIS) algorithm is described in (Yuan & Druzdzel 2003).
    /// This is quite likely the best stochastic sampling algorithm for discrete Bayesian networks available.
    /// It produces results that are even more precise than those produced by the [BayesianNetworkAlgorithm::AdaptiveImportanceSampling] algorithm and, for some networks, results that are an order of magnitude more precise.
    ///
    /// The algorithm uses loopy belief propagation to compute an estimate of the posterior probability over all nodes of the network and then uses importance sampling to refine this estimate. In addition to being more precise, it is also faster than the [BayesianNetworkAlgorithm::AdaptiveImportanceSampling] algorithm, as it avoids the costly learning stage of the latter.
    ///
    /// For more information, refer to <https://support.bayesfusion.com/docs/GeNIe/algorithms_epis.html>.
    EstimatedPosteriorImportanceSampling,

    /// The Adaptive Importance Sampling (AIS) algorithm is described in (Cheng & Druzdzel 2000). This algorithm offered a breakthrough in the field of stochastic sampling algorithms when first published in 2000.
    /// In really difficult cases, such as reasoning under very unlikely evidence in very large networks, the AIS algorithm produced two orders of magnitude smaller error in posterior probability distributions than other sampling algorithms available at that time.
    /// Improvements in speed at a given desired precision were even more dramatic.
    /// The AIS algorithm is based on importance sampling.
    /// According to the theory of importance sampling, the closer the sampling distribution is to the (unknown) posterior distribution, the better the results will be.
    /// The AIS algorithm successfully approximates its sampling distribution to the posterior distribution by using two cleverly designed heuristic methods in its first stage, which leads to the big improvement in performance stated above.
    ///
    /// For more information, refer to
    /// <https://support.bayesfusion.com/docs/GeNIe/algorithms_ais.html>.
    AdaptiveImportanceSampling,

    /// Loopy belief propagation applies Pearl's polytree belief propagation algorithm to networks that contain loops, passing messages between nodes iteratively until beliefs converge.
    /// It is an approximate algorithm: convergence is not guaranteed on networks with loops, but in practice it often produces good approximations quickly.
    LoopyBeliefPropagation,

    /// The likelihood sampling algorithm makes an attempt to improve the efficiency of the [BayesianNetworkAlgorithm::ProbabilisticLogicSampling] algorithm by instantiating only non-evidence nodes.
    /// Each sample is weighted by the likelihood of evidence given the partial sample generated.
    /// It is a simple algorithm with little overhead that generally performs well and certainly better than Probabilistic Logic Sampling in cases with observed evidence.
    LikelihoodSampling,

    /// The probabilistic logic sampling algorithm is based on forward (i.e., according to the weak ordering implied by the directed graph) generation of instantiations of nodes guided by their prior probability.
    /// If a generated instantiation of an evidence node is different from its observed value, then the entire sample is discarded.
    /// This makes the algorithm inefficient if the prior probability of evidence is low.
    /// The algorithm is very efficient in cases when no evidence has been observed or the evidence is very likely.
    ProbabilisticLogicSampling,
}

impl BayesianNetworkAlgorithm {
    /// An alias for [BayesianNetworkAlgorithm::EstimatedPosteriorImportanceSampling].
    pub const EPIS: Self = Self::EstimatedPosteriorImportanceSampling;

    /// An alias for [BayesianNetworkAlgorithm::AdaptiveImportanceSampling].
    pub const AIS: Self = Self::AdaptiveImportanceSampling;

    /// An alias for [BayesianNetworkAlgorithm::ProbabilisticLogicSampling].
    #[allow(non_upper_case_globals)]
    pub const Henrion: Self = Self::ProbabilisticLogicSampling;

    /// Returns `true` if the algorithm is an exact algorithm.
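    ///
    /// # Example
    ///
    /// A minimal sketch (the crate path `your_crate` is a placeholder):
    ///
    /// ```ignore
    /// use your_crate::BayesianNetworkAlgorithm;
    ///
    /// // The clustering-based algorithms are exact...
    /// assert!(BayesianNetworkAlgorithm::Lauritzen.is_exact());
    /// // ...while the sampling algorithms are approximate.
    /// assert!(!BayesianNetworkAlgorithm::LikelihoodSampling.is_exact());
    /// ```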
    pub fn is_exact(&self) -> bool {
        matches!(
            self,
            Self::Lauritzen
                | Self::LinearRelevanceDecomposition
                | Self::RecursiveRelevanceDecomposition
        )
    }
}

impl ToDslValue for BayesianNetworkAlgorithm {
    fn to_dsl_value(&self) -> c_int {
        match *self {
            Self::Lauritzen => c_int(DSL_ALG_BN_LAURITZEN as i32),
            Self::LinearRelevanceDecomposition => c_int(DSL_ALG_BN_RELEVANCEDECOMP as i32),
            Self::RecursiveRelevanceDecomposition => c_int(DSL_ALG_BN_RELEVANCEDECOMP2 as i32),
            Self::EstimatedPosteriorImportanceSampling => c_int(DSL_ALG_BN_EPISSAMPLING as i32),
            Self::AdaptiveImportanceSampling => c_int(DSL_ALG_BN_AISSAMPLING as i32),
            Self::LoopyBeliefPropagation => c_int(DSL_ALG_BN_LBP as i32),
            Self::LikelihoodSampling => c_int(DSL_ALG_BN_LSAMPLING as i32),
            // Note: associated consts like `Self::EPIS` and `Self::Henrion` cannot be
            // used in match patterns (E0158), so the canonical variants are matched here.
            Self::ProbabilisticLogicSampling => c_int(DSL_ALG_BN_HENRION as i32),
        }
    }
}

/// Influence Diagram algorithms
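///
/// # Example
///
/// A minimal sketch (the `Network::new` constructor, the setter name
/// `set_influence_diagram_algorithm`, and the crate path `your_crate` are placeholders for
/// whatever this crate actually exposes):
///
/// ```ignore
/// use your_crate::{InfluenceDiagramAlgorithm, Network};
///
/// let mut net = Network::new();
/// // Skip the full expected-utility computation and search for the best policy only.
/// net.set_influence_diagram_algorithm(InfluenceDiagramAlgorithm::BestPolicySearch);
/// ```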
pub enum InfluenceDiagramAlgorithm {
    /// The policy evaluation algorithm solves an influence diagram by first transforming it into a [super::Network] and then finding the expected utilities of each of the decision alternatives by performing repeated inference in this network.
    /// The algorithm will result in a full set of expected utilities for all possible policies in the network.
    ///
    /// The policy evaluation algorithm uses the default [BayesianNetworkAlgorithm] or one specified by [super::Network::set_bayesian_network_algorithm].
    /// This may have an impact on both the computational performance and the accuracy of the computation.
    ///
    /// This may be a computationally intensive process for large influence diagrams.
    /// If you are not interested in the values of expected utilities, but would just like to know the optimal decision at the highest level, consider using the [InfluenceDiagramAlgorithm::BestPolicySearch] algorithm for finding the best policy.
    ///
    /// For more information, refer to <https://support.bayesfusion.com/docs/GeNIe/algorithms_cooper.html>.
    PolicyEvaluation,

    /// The best policy search algorithm solves an influence diagram by finding just the best policy in the network.
    ///
    /// The algorithm instantiates the first decision node to the optimal decision alternative but does not produce the numerical expected utility of this or any other decision option.
    ///
    /// In order for this algorithm to be run, all informational predecessors of the first decision node have to be instantiated.
    ///
    /// For more information, refer to <https://support.bayesfusion.com/docs/GeNIe/algorithms_shachterpeot.html>.
    BestPolicySearch,
}

impl ToDslValue for InfluenceDiagramAlgorithm {
    fn to_dsl_value(&self) -> c_int {
        match *self {
            Self::PolicyEvaluation => c_int(DSL_ALG_ID_COOPERSOLVING as i32),
            Self::BestPolicySearch => c_int(DSL_ALG_ID_SHACHTER as i32),
        }
    }
}