On the Development of Deep Convolutional Sum-Product Networks
A probabilistic graphical model (PGM) is a formal mathematical description of a problem domain. A Bayesian network (BN) is a PGM defined by a directed acyclic graph (DAG) and a set of conditional probability tables (CPTs). Two tasks arise when reasoning about a problem expressed as a BN: modeling and inference. In modeling, we express the problem domain as a DAG from which independencies among the variables involved can be read. There are two main algorithms for testing independencies in a BN DAG, namely, d-separation and m-separation.

In this thesis, we begin by introducing Darwinian networks (DNs), which offer a clearer representation of a BN. Using DNs, we derive a new method for testing independencies in a BN, called rp-separation, which is a faster alternative to d-separation. Another practical application of DNs is simple propagation, the current state-of-the-art join tree inference algorithm in BNs.

A sum-product network (SPN) is another type of PGM, defined by a DAG and a set of parameters. An SPN permits tractable inference, whereas inference in BNs is NP-hard in general. Furthermore, SPNs can be compiled from BNs or learned from data. In this thesis, we first resolve the inconsistency between the SPN scope definition and the CPT labels when compiling a BN into an SPN. Next, we empirically explore new methods for learning SPN structure and parameters. Finally, we introduce deep convolutional sum-product networks (DCSPNs), which use a convolutional neural network to build a correct SPN. DCSPNs exploit the tensor libraries commonly used for neural networks, while still guaranteeing correctness as a PGM. Experimental results show that DCSPNs are comparable to state-of-the-art methods on image completion tasks.
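To make the tractability claim concrete, the following is a minimal illustrative sketch (not the thesis's implementation; the node classes, variable names, and weights are invented for this example) of evaluating a tiny SPN bottom-up. Because an SPN's internal nodes are only weighted sums and products of their children, computing the probability of a complete assignment takes time linear in the size of the network.

```python
class Leaf:
    """Univariate distribution over a single binary variable."""
    def __init__(self, var, p_true):
        self.var = var
        self.p = {1: p_true, 0: 1.0 - p_true}

    def value(self, assignment):
        return self.p[assignment[self.var]]


class Product:
    """Product node: children must have disjoint scopes (decomposability)."""
    def __init__(self, children):
        self.children = children

    def value(self, assignment):
        result = 1.0
        for child in self.children:
            result *= child.value(assignment)
        return result


class Sum:
    """Sum node: children share the same scope (completeness); weights sum to 1."""
    def __init__(self, weighted_children):
        self.weighted_children = weighted_children  # list of (weight, node) pairs

    def value(self, assignment):
        return sum(w * child.value(assignment)
                   for w, child in self.weighted_children)


# A small SPN over two binary variables A and B: a mixture of two
# fully factorized distributions.
spn = Sum([
    (0.6, Product([Leaf("A", 0.9), Leaf("B", 0.2)])),
    (0.4, Product([Leaf("A", 0.3), Leaf("B", 0.7)])),
])

# One bottom-up pass evaluates P(A=1, B=0):
# 0.6 * (0.9 * 0.8) + 0.4 * (0.3 * 0.3) = 0.468
p = spn.value({"A": 1, "B": 0})
```

A single downward-then-upward pass over the same structure also supports marginal and MPE queries, which is what distinguishes SPN inference from the generally intractable inference in BNs.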