- Office hours: 2-3pm on Tuesdays, or by appointment
- New office: 4623 SPH-I (within Suite 4605)
- Homework 1 due 11:59pm on October 10, 2016 (Questions?)
September 27, 2016
Marginal probabilities. Compute marginals of variables (given model parameters \(\mathbf{\theta}\)): \(p(x_i\mid \mathbf{\theta})=\sum_{\mathbf{x}': x_i'=x_i}p(\mathbf{x}'\mid \mathbf{\theta}).\) (Or conditional probabilities, e.g., the posterior distribution.)
Partition function. For a Gibbs distribution represented as a normalized product of factors, \(p(\mathbf{x})=\frac{1}{Z}\prod_{C\in\mathcal{C}}\psi_{C}(\mathbf{x}_C)\), with respect to a graph \(\mathcal{G}\) with cliques \(\mathcal{C}\), compute \[Z(\mathbf{\theta})=\sum_{\mathbf{x}}\prod_{C\in\mathcal{C}}\psi_{C}(\mathbf{x}_C).\]
Maximum A Posteriori (MAP) Inference. Compute the variable configuration with the highest probability: \(\hat{\mathbf{x}}=\arg \max_{\mathbf{x}}p(\mathbf{x}\mid \mathbf{\theta}),\) where \(p\) is generic notation for a joint or conditional distribution over the unobserved variables \(\mathbf{X}\).
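To make the three queries concrete, here is a minimal brute-force sketch on a toy three-variable binary chain; the factor tables `psi12` and `psi23` are hypothetical values chosen only for illustration, not from the lecture:

```python
import itertools

import numpy as np

# Toy 3-variable binary chain: p(x) = (1/Z) * psi12(x1, x2) * psi23(x2, x3).
# The factor tables are made-up numbers for illustration.
psi12 = np.array([[4.0, 1.0],
                  [1.0, 4.0]])
psi23 = np.array([[3.0, 2.0],
                  [2.0, 3.0]])

def unnormalized(x1, x2, x3):
    """Product of clique potentials for one configuration."""
    return psi12[x1, x2] * psi23[x2, x3]

configs = list(itertools.product([0, 1], repeat=3))

# Partition function: sum the unnormalized product over every configuration.
Z = sum(unnormalized(*x) for x in configs)

# Marginal p(X1 = 0): sum over configurations consistent with x1 = 0, then normalize.
p_x1_0 = sum(unnormalized(*x) for x in configs if x[0] == 0) / Z

# MAP: the configuration with the highest (unnormalized) probability.
x_map = max(configs, key=lambda x: unnormalized(*x))

print(f"Z = {Z}, p(X1=0) = {p_x1_0:.3f}, MAP = {x_map}")
```

All three queries reduce to sweeps over the same set of configurations, which is exactly why they share the same exponential cost when done naively.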
To compute the marginal probability naively, we sum over all configurations of the remaining variables: \[p(X_1=x_1)=\frac{1}{Z}\sum_{x_2,\ldots, x_d}\prod_{i=1}^s \psi_{\alpha_i}(\mathbf{x}_{\alpha_i})\]
Computational complexity: \(\mathcal{O}(k^{d-1})\) additions of values read from the probability table (where each variable takes \(k\) values), one row for each configuration \(X_2=x_2,\ldots,X_d=x_d\).
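To see why this blows up, a quick count for binary variables (\(k=2\); the choices of \(d\) are illustrative): \[d=31:\ 2^{30}\approx 1.1\times 10^{9}\text{ terms},\qquad d=101:\ 2^{100}\approx 1.3\times 10^{30}\text{ terms}.\]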
Instead, we choose an elimination ordering \(\mathcal{I}\) over \(\{X_1,\ldots,X_d\}\) and exploit the factorization of \(p(\mathbf{x})\) (Variable Elimination); see the sketch below.
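A minimal sketch of variable elimination on a chain, assuming pairwise potentials \(\psi_i(x_i, x_{i+1})\) and the elimination order \(X_d, X_{d-1}, \ldots, X_2\); the random potentials and the sizes `d`, `k` are illustrative:

```python
import itertools

import numpy as np

rng = np.random.default_rng(0)
d, k = 10, 3  # 10 variables with 3 states each (illustrative sizes)

# Pairwise chain potentials psi_i(x_i, x_{i+1}); positive values drawn at random.
psis = [rng.uniform(0.5, 2.0, size=(k, k)) for _ in range(d - 1)]

# Eliminate X_d, X_{d-1}, ..., X_2 in turn. Each step sums out one variable and
# passes a length-k "message" to its neighbor.
message = np.ones(k)
for psi in reversed(psis):
    # m(x_i) = sum_{x_{i+1}} psi(x_i, x_{i+1}) * m(x_{i+1})
    message = psi @ message

Z = message.sum()      # partition function
p_x1 = message / Z     # marginal p(X1 = x1)
print("p(X1):", np.round(p_x1, 4))

# Sanity check against the naive O(k^{d-1}) enumeration (feasible here: 3^10 terms).
Z_naive, marg = 0.0, np.zeros(k)
for x in itertools.product(range(k), repeat=d):
    w = np.prod([psis[i][x[i], x[i + 1]] for i in range(d - 1)])
    Z_naive += w
    marg[x[0]] += w
assert np.allclose(p_x1, marg / Z_naive)
```

Each elimination step is a \(k\times k\) matrix-vector product, so the whole pass costs \(\mathcal{O}(d\,k^2)\) on a chain instead of \(\mathcal{O}(k^{d-1})\).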
Next Lecture: Belief propagation (Sum-Product Algorithm on Polytrees)
Required reading for the week: Chapter 9 of Koller and Friedman (2009)