Bayesian Network Inference Using Marginal Trees

Date
2016-06
Authors
Oliveira, Jhonatan de Souza
Publisher
Faculty of Graduate Studies and Research, University of Regina
Abstract

Bayesian networks (BNs) are formal probabilistic graphical models for reasoning under uncertainty. BNs are used in a variety of applications, including a state-of-the-art forensic software tool, a ranking system for games, and landing the Mars Exploration Rover. A problem domain is initially modeled as a directed acyclic graph (DAG), and the strengths of relationships are quantified by conditional probability tables (CPTs). One key feature of BNs is that multiplying all CPTs yields a joint probability distribution (JPD). Moreover, the DAG encodes independencies present in the JPD.

Different techniques can be employed when reasoning with BNs. Variable elimination (VE) and join tree propagation (JTP) are two alternative approaches to inference. VE is regarded as a simple algorithm and is therefore often used to introduce beginners to BN inference. VE uses the set of CPTs from a BN to answer a given query by eliminating all variables not present in the query. This elimination process can be viewed as one-way propagation in a join tree, a type of undirected graph. Although simple, VE answers each query directly against the BN, meaning that computation may be repeated in subsequent queries. JTP algorithms use a join tree as a secondary structure to perform inference. The DAG is converted into a join tree and the CPTs are systematically assigned to nodes in the tree. Inference in a join tree involves propagating inward and then outward in the join tree. After the two-way propagation, marginals for each non-evidence variable in the BN can be computed. Therefore, answering a single query with JTP yields computation that may remain unused.

In this thesis, we propose marginal tree inference (MTI) as a new approach to exact inference in discrete BNs. MTI seeks to avoid recomputation, while at the same time ensuring that no constructed probability information remains unused. MTI thereby stakes out a middle ground between VE and JTP. The two main steps of MTI are to determine the computation that can be reused and to subsequently determine what computation is missing to answer the query. The usefulness of MTI is demonstrated in multiple probabilistic reasoning sessions. Compared to VE and a variant of VE incorporating precomputation, our approach fares favourably in experimental results.
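To make the elimination process described above concrete, the following is a minimal sketch of variable elimination over discrete factors, written in Python for a toy three-variable chain A -> B -> C with binary variables. The factor representation, variable names, CPT values, and query are illustrative assumptions for this sketch, not material taken from the thesis.

```python
# Minimal variable elimination (VE) sketch over discrete factors.
# A factor is a pair (variables, table), where table maps value tuples
# (ordered as in `variables`, values 0/1) to probabilities.

from itertools import product


def multiply(f1, f2):
    """Pointwise product of two factors over binary variables."""
    vars1, t1 = f1
    vars2, t2 = f2
    out_vars = vars1 + [v for v in vars2 if v not in vars1]
    table = {}
    for assignment in product([0, 1], repeat=len(out_vars)):
        row = dict(zip(out_vars, assignment))
        key1 = tuple(row[v] for v in vars1)
        key2 = tuple(row[v] for v in vars2)
        table[assignment] = t1[key1] * t2[key2]
    return out_vars, table


def sum_out(factor, var):
    """Eliminate `var` from a factor by summing over its values."""
    vars_, table = factor
    idx = vars_.index(var)
    out_vars = vars_[:idx] + vars_[idx + 1:]
    out_table = {}
    for assignment, p in table.items():
        key = assignment[:idx] + assignment[idx + 1:]
        out_table[key] = out_table.get(key, 0.0) + p
    return out_vars, out_table


# Illustrative CPTs for the chain A -> B -> C: P(A), P(B|A), P(C|B).
p_a = (["A"], {(0,): 0.6, (1,): 0.4})
p_b_given_a = (["B", "A"], {(0, 0): 0.7, (1, 0): 0.3, (0, 1): 0.2, (1, 1): 0.8})
p_c_given_b = (["C", "B"], {(0, 0): 0.9, (1, 0): 0.1, (0, 1): 0.4, (1, 1): 0.6})

# Query P(C): eliminate A from the product of the factors mentioning A,
# then eliminate B from the remaining product (elimination order A, B).
p_b = sum_out(multiply(p_a, p_b_given_a), "A")
p_c = sum_out(multiply(p_b, p_c_given_b), "B")
print("P(C):", p_c[1])  # marginal over C, keyed by C's value
```

In this sketch each query repeats the multiplications and summations from scratch, which is precisely the recomputation that join tree propagation and the proposed marginal tree inference aim to reduce.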

Description
A Thesis Submitted to the Faculty of Graduate Studies and Research In Partial Fulfillment of the Requirements for the Degree of Master of Science in Computer Science, University of Regina. xiii, 73 p.