06 Sep 2016
by Zhenke Wu

# Statistical and Computational Methods for Learning through Graphical Models

• Instructor: Zhenke Wu PhD, Assistant Professor of Biostatistics
• Email: zhenkewu@umich.edu
• Time: Tuesday and Thursday 12:30-2pm (15 weeks; September 6th to December 13th, 2016)
• Location: 4332 SPH II
• Office Hours: 4623 SPH-I (within Suite 4605); Tuesdays 2-3pm or by appointment

## Announcements

• [12/08/2016] Please fill out the [end-of-term survey] by December 21, 2016.
• [12/08/2016] [Homework 4] posted; Due to Instructor by 11:59pm on December 21, 2016.
• [11/27/2016] Deadline for extra credit problems: midnight, December 15, 2016.
• [11/08/2016] [Homework 3] posted; Due to Instructor by 11:59pm on December 15, 2016.
• [11/01/2016] [Homework 2] posted; Due to Instructor by 11:59pm on November 21, 2016.
• [10/15/2016] Please fill out the midterm survey here.
• [09/26/2016] Homework 1 due date extended to 11:59pm on October 10th. I have also redistributed the credits to the theory problems and added extra comments. Please refer to the [revised Problem Set 1].
• [09/26/2016] The instructor has moved to a new office 4623 SPH-I within Suite 4605.
• [09/19/2016] Problem Set 1 (obsolete; use the revised one) posted. Due by 11:59pm on October 3rd, 2016, by email to the instructor.
• [09/17/2016] You can now leave comments at the bottom of this page to help improve the course. If you like it, please share it with others interested in learning, programming, and applying graphical models!
• [09/08/2016] Please fill out the class survey for the first week.

## Syllabus

The PDF linked below describes the course objectives, organization, lectures, references, evaluation, and other course policies.

## Lecture Notes (required readings listed at the end of each lecture's notes)


#### Module 1 (Representations)

• Lecture 1 - Introduction [slides]
• Lecture 2 - D-separation in DAG and Probabilistic Conditional Independence [slides]
• Lecture 3 - D-separation continued (blackboard)
• Lecture 4 - Representation for Undirected Graphical Models [slides]
• Lecture 5 - DAG and UG: Connections and Differences [slides]
• Lecture 6 - Examples of DAG and UG and Conclusion of the Representation Module [slides][RMarkdown file with Shiny Demo]. Please use RStudio to run the .Rmd file to generate the Shiny R Presentation.
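The d-separation criterion from Lectures 2–3 can also be checked algorithmically. Below is a minimal Python sketch (my illustration, not part of the course materials) using the moralized-ancestral-graph characterization: X ⟂ Y | Z in a DAG iff Z separates X and Y in the moral graph of the ancestral subgraph of X ∪ Y ∪ Z. The `dag` encoding (node → list of parents) and the function name are my own choices; X, Y, Z are assumed disjoint.

```python
from collections import deque

def d_separated(dag, xs, ys, zs):
    """Check whether xs is d-separated from ys given zs in a DAG.
    dag maps each node to the list of its parents; xs, ys, zs disjoint."""
    xs, ys, zs = set(xs), set(ys), set(zs)
    # 1. Restrict to the ancestral set of xs ∪ ys ∪ zs.
    relevant, stack = set(), list(xs | ys | zs)
    while stack:
        v = stack.pop()
        if v not in relevant:
            relevant.add(v)
            stack.extend(dag.get(v, []))
    # 2. Moralize: link each node to its parents, marry co-parents,
    #    and drop edge directions.
    adj = {v: set() for v in relevant}
    for v in relevant:
        ps = [p for p in dag.get(v, []) if p in relevant]
        for p in ps:
            adj[v].add(p); adj[p].add(v)
        for i in range(len(ps)):
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j]); adj[ps[j]].add(ps[i])
    # 3. d-separated iff zs blocks every undirected path from xs to ys
    #    in the moral graph (BFS that never enters zs).
    seen, queue = set(xs), deque(xs)
    while queue:
        v = queue.popleft()
        if v in ys:
            return False
        for w in adj[v] - zs - seen:
            seen.add(w); queue.append(w)
    return True
```

On the classic v-structure A → C ← B, this correctly reports that A and B are marginally independent but become dependent once the collider C is conditioned on.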

#### Module 2 (Inference and Computation for Graphical Models)

• Lecture 7 - Exact inference: factor graphs and variable elimination [slides]
• Lecture 8 - Exact inference: Belief Propagation [slides]
• Lecture 9 - Exact Inference: Examples [slides]
• Lecture 10 - Junction Tree Algorithm [slides]
• Lecture 12 - Examples of Junction Tree Algorithm [marked slides]
• Lecture 13 - Approximate Inference by Stochastic Simulation/Sampling Methods [slides]
• Lecture 14 - Survey of Automatic Bayesian Software and Why You Should Care [slides][code]
• Lecture 15 - Variational Inference Basics [slides][whiteboard-notes]
• Lecture 16 - Variational Inference: Examples
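The variable-elimination algorithm from Lecture 7 is short enough to sketch directly. The factor representation below (a tuple of variable names plus a table keyed by assignment tuples) is my own illustrative encoding, not the course's:

```python
from itertools import product

# A factor is (variables, table): table maps assignment tuples to values.

def multiply(f, g, domains):
    """Pointwise product of two factors over the union of their variables."""
    fv, ft = f; gv, gt = g
    vs = tuple(dict.fromkeys(fv + gv))  # union, preserving order
    table = {}
    for assign in product(*(domains[v] for v in vs)):
        a = dict(zip(vs, assign))
        table[assign] = ft[tuple(a[v] for v in fv)] * gt[tuple(a[v] for v in gv)]
    return vs, table

def marginalize(f, var, domains):
    """Sum a variable out of a factor."""
    fv, ft = f
    vs = tuple(v for v in fv if v != var)
    table = {}
    for assign, val in ft.items():
        key = tuple(a for v, a in zip(fv, assign) if v != var)
        table[key] = table.get(key, 0.0) + val
    return vs, table

def eliminate(factors, order, domains):
    """Sum-product variable elimination: for each variable in the order,
    multiply the factors that mention it, then sum it out."""
    factors = list(factors)
    for var in order:
        touching = [f for f in factors if var in f[0]]
        rest = [f for f in factors if var not in f[0]]
        prod_f = touching[0]
        for f in touching[1:]:
            prod_f = multiply(prod_f, f, domains)
        factors = rest + [marginalize(prod_f, var, domains)]
    result = factors[0]
    for f in factors[1:]:
        result = multiply(result, f, domains)
    return result
```

For the two-node DAG A → B with P(A=1) = 0.4, P(B=1|A=0) = 0.1, and P(B=1|A=1) = 0.8, eliminating A yields the marginal P(B=1) = 0.6·0.1 + 0.4·0.8 = 0.38.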

#### Module 3 (Graphical Models for Causality)

• Lecture 18 - Causal Inference in Medicine and Public Health: An Introduction [slides]
• Lecture 19/20 - Causal Diagram [slides]
• Lecture 21 - Marginal Structural Models [Note on IPW]
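The marginal structural model material (Lecture 21's note on IPW) rests on the inverse-probability-weighting identity E[Y(a)] = E[1{A=a}·Y / P(A=a | L)]. Here is a toy, self-contained Python sketch (my illustration, not the course's code) with records (L, A, Y) for a binary confounder L, binary treatment A, and outcome Y; the propensity score is taken as the empirical P(A=1 | L):

```python
def propensity(data):
    """Empirical P(A=1 | L) from observed (L, A, Y) records."""
    counts = {}
    for l, a, _ in data:
        n, n1 = counts.get(l, (0, 0))
        counts[l] = (n + 1, n1 + a)
    return {l: n1 / n for l, (n, n1) in counts.items()}

def ipw_ate(data):
    """Horvitz-Thompson IPW estimate of E[Y(1)] - E[Y(0)].
    Assumes 0 < P(A=1 | L) < 1 for every observed L (positivity)."""
    e = propensity(data)
    n = len(data)
    y1 = sum(a * y / e[l] for l, a, y in data) / n
    y0 = sum((1 - a) * y / (1 - e[l]) for l, a, y in data) / n
    return y1 - y0
```

Each treated subject is upweighted by 1/e(L) and each untreated subject by 1/(1−e(L)), creating a pseudo-population in which treatment is unconfounded by L.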

#### Module 4 (Case Studies)

• Nov 29: Professor Jian Kang on Graphical Models for Neuroscience.
• Title: Identifying Functional Co-Activation Patterns in Neuroimaging Studies Via Poisson Graphical Models
• Abstract: Studying the interactions between different brain regions is essential to achieve a more complete understanding of brain function. In this talk, we focus on identifying functional co-activation patterns and undirected functional networks in neuroimaging studies. We build a functional brain network, using a sparse covariance matrix, with elements representing associations between region-level peak activations. We adopt a penalized likelihood approach to impose sparsity on the covariance matrix based on an extended multivariate Poisson model. We obtain penalized maximum likelihood estimates via the expectation-maximization (EM) algorithm and optimize an associated tuning parameter by maximizing the predictive log-likelihood. Permutation tests on the brain co-activation patterns provide region-pair and network-level inference. Simulations suggest that the proposed approach has minimal bias and achieves coverage rates for the covariance estimates close to the nominal 95% level. In a meta-analysis of 162 functional neuroimaging studies on emotion, our model identifies a functional network that consists of connected regions within the basal ganglia, limbic system, and other emotion-related brain regions. We characterize this network through statistical inference on region-pair connections as well as by graph measures.
• Dec 1 (cancelled): Junhyuk Oh on Deep Learning and Reinforcement Learning:
• Title: Improving Generalization via Deep Reinforcement Learning
• Abstract: The ability to generalize from past experience to solve previously unseen tasks or environments is a key research challenge in reinforcement learning (RL). In this talk, I will briefly introduce the basic idea of deep reinforcement learning (Deep RL) and present my recent work aimed at improving the generalization ability of RL agents through deep learning. The first work focuses on how to generalize to unseen and larger topologies in 3D worlds for navigation tasks. The second discusses how to generalize to new tasks described by natural language.
• Dec 6 and 8: Integrated Nested Laplace Approximation with Application to Spatial Statistics [slides]

• Dec 13: Network Basics, Models and Social Network and Infectious Disease Examples [slides]
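Returning to the Nov 29 case study: the sparse-covariance idea behind Professor Kang's talk can be illustrated with a much simpler estimator. The sketch below (entirely my own illustration, not the Poisson-model penalized likelihood from the talk) soft-thresholds the off-diagonal entries of the sample covariance matrix, which is a classical generic route to a sparse covariance estimate:

```python
def soft_threshold_cov(samples, lam):
    """Toy sparse covariance estimate: the sample covariance with its
    off-diagonal entries soft-thresholded at level lam. A generic
    thresholding stand-in, not the talk's penalized Poisson model."""
    n, p = len(samples), len(samples[0])
    mean = [sum(x[j] for x in samples) / n for j in range(p)]
    cov = [[sum((x[i] - mean[i]) * (x[j] - mean[j]) for x in samples) / n
            for j in range(p)] for i in range(p)]
    def soft(v):
        # Shrink toward zero by lam; entries below lam in magnitude vanish.
        return max(abs(v) - lam, 0.0) * (1.0 if v >= 0 else -1.0)
    return [[cov[i][j] if i == j else soft(cov[i][j]) for j in range(p)]
            for i in range(p)]
```

Weak empirical associations are set exactly to zero while strong ones are merely shrunk, which is the qualitative behavior a sparsity penalty on the covariance is after.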