Fundamentals of Machine Learning for Predictive Data Analytics: Algorithms, Worked Examples, and Case Studies by John D. Kelleher & Brian Mac Namee & Aoife D'Arcy



Figure 6.12

Two different Bayesian networks, each defining the same full joint probability distribution.

The chain rule, however, doesn’t specify any constraints on which features in the domain we choose to condition on. We could just as easily have decomposed the probability of the joint event as follows:

Both of these decompositions are valid, but each defines a different Bayesian network for the domain. Figure 6.12(a) illustrates the Bayesian network representing the decomposition defined in Equation (6.20), and Figure 6.12(b) illustrates the Bayesian network representing the decomposition defined in Equation (6.21).
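As a generic illustration of this point (the exact orderings used in Equations (6.20) and (6.21) are the ones that correspond to the network structures in Figure 6.12), the chain rule allows, for three features A, B, and C, decompositions such as

P(A = a, B = b, C = c) = P(c | a, b) × P(b | a) × P(a)

P(A = a, B = b, C = c) = P(a | b, c) × P(b | c) × P(c)

The first ordering treats A as a root node with B and C conditioned on it downstream; the second reverses those roles. Either way, the product recovers the same full joint probability.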

We can show that both of the networks in Figure 6.12 represent the same joint probability distribution by using each of them to calculate the probability of an arbitrarily chosen joint event from the domain; both networks should return the same probability for that event. For this example, we will calculate the probability of the event ¬a, b, c. Using the Bayesian network in Figure 6.12(a), we would carry out the calculation as follows:
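The calculation simply multiplies the relevant entries of the network's conditional probability tables. As a minimal sketch, assuming hypothetical probability values rather than the actual tables shown in Figure 6.12, the following Python snippet computes P(¬a, b, c) from two different chain-rule factorizations and checks that both agree with the underlying joint distribution:

from itertools import product

# Hypothetical full joint distribution over three binary features A, B, C.
# These numbers are purely illustrative -- they are NOT the probabilities
# given in Figure 6.12.
joint = {
    (True,  True,  True):  0.060, (True,  True,  False): 0.090,
    (True,  False, True):  0.120, (True,  False, False): 0.030,
    (False, True,  True):  0.210, (False, True,  False): 0.140,
    (False, False, True):  0.175, (False, False, False): 0.175,
}

def p(**fixed):
    # Marginal probability of an assignment to any subset of {a, b, c}.
    names = ("a", "b", "c")
    total = 0.0
    for values in product([True, False], repeat=3):
        assignment = dict(zip(names, values))
        if all(assignment[name] == value for name, value in fixed.items()):
            total += joint[values]
    return total

def cond(target, given):
    # Conditional probability P(target | given); both arguments are dicts.
    return p(**target, **given) / p(**given)

# The joint event of interest: not-a, b, c.
a, b, c = False, True, True

# One chain-rule ordering: P(a, b, c) = P(c | a, b) x P(b | a) x P(a)
p1 = cond({"c": c}, {"a": a, "b": b}) * cond({"b": b}, {"a": a}) * p(a=a)

# An alternative ordering: P(a, b, c) = P(a | b, c) x P(b | c) x P(c)
p2 = cond({"a": a}, {"b": b, "c": c}) * cond({"b": b}, {"c": c}) * p(c=c)

print(p1, p2, joint[(a, b, c)])  # all three agree (approximately 0.21)

Because every conditional probability in this sketch is derived from the same joint distribution, the two factorizations are guaranteed to agree; in an actual Bayesian network the conditional probability tables are stored directly at each node, and the product is read off those tables.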


