TECH 04 — Graphical Models: Uncovering Hidden Relationships in Complex Systems
Quarter: Fall
Day(s): Thursdays
Course Format: On-campus (About Formats)
Duration: 9 weeks
Date(s): Sep 28—Nov 30
Time: 7:00—8:50 pm (PT)
Refund Deadline: Sep 30
Unit: 1
Tuition: $545
Instructor(s): Harish Kashyap
Class Recording Available: No
Status: Open
Graphical models are unique in the world of machine learning, possessing both beauty and brains. Their utility is far-reaching, including diagnosing medical conditions by representing the relationships among different symptoms, test results, and potential diagnoses; creating marketing recommendations based on relationships among users and their preferred products; and identifying key influencers on social networks by modeling the interactions among users.
But the real power of graphical models is their ability to represent knowledge by visualizing the connections among many variables, allowing for easier interpretation and understanding of the relationships within a given system. They achieve this through a unique architecture consisting of two components: a graph and a set of probability distributions. The graph represents the variables and their dependencies, while the probability distributions specify the relationships between the variables.
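As a minimal sketch of these two components (variable names and probabilities are hypothetical, not from the course materials), a small Bayesian network can be written in plain Python as a graph of parent links plus conditional probability tables:

```python
# Component 1: the graph, as a map from each variable to its parents.
# Tiny network: Rain -> WetGrass <- Sprinkler.
parents = {
    "Rain": [],
    "Sprinkler": [],
    "WetGrass": ["Rain", "Sprinkler"],
}

# Component 2: the probability distributions, as conditional
# probability tables giving P(var = True | parent assignment).
cpt = {
    "Rain": {(): 0.2},
    "Sprinkler": {(): 0.1},
    "WetGrass": {
        (True, True): 0.99,
        (True, False): 0.90,
        (False, True): 0.80,
        (False, False): 0.0,
    },
}

def joint(assignment):
    """P(full assignment) = product over variables of P(var | parents)."""
    p = 1.0
    for var, pars in parents.items():
        key = tuple(assignment[x] for x in pars)
        p_true = cpt[var][key]
        p *= p_true if assignment[var] else 1.0 - p_true
    return p

# 0.2 (Rain) * 0.9 (no Sprinkler) * 0.9 (WetGrass | Rain, no Sprinkler)
print(joint({"Rain": True, "Sprinkler": False, "WetGrass": True}))  # 0.162
```

The factorization in `joint` is exactly what the graph buys you: instead of one table over all eight joint states, the model stores only small per-variable tables indexed by each variable's parents.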
In this course, students will learn how to model probabilistic relationships between variables using graphical models and how to perform inference in these models to make predictions and decisions. The course consists of four major learning modules:
- Basic probability theory concepts, including random variables, probability distributions, and Bayes's rule
- Bayesian networks and their applications in decision-making and causal inference
- Markov random fields, commonly used to model interactions between variables in many applications, including computer vision, natural language processing, and machine learning
- Conditional random fields (CRFs), including defining CRFs, inference in CRFs, and applications
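Bayes's rule from the first module can be illustrated with a classic diagnostic calculation in the spirit of the medical example above (all numbers here are hypothetical):

```python
# Bayes's rule: P(disease | positive) =
#     P(positive | disease) * P(disease) / P(positive)
p_disease = 0.01             # prior prevalence (hypothetical)
p_pos_given_disease = 0.95   # test sensitivity (hypothetical)
p_pos_given_healthy = 0.05   # false-positive rate (hypothetical)

# Total probability of a positive result, summed over both causes.
p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))

p_disease_given_pos = p_pos_given_disease * p_disease / p_pos
print(round(p_disease_given_pos, 3))  # 0.161
```

Even with an accurate test, a rare condition yields a posterior of only about 16%, which is the kind of counterintuitive result that motivates reasoning with explicit probability models.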
This course is ideal for anyone interested in understanding and modeling complex systems, from data scientists and machine learning engineers to researchers and decision-makers in various fields. Students should have a basic working knowledge of Python and probability.
HARISH KASHYAP
Senior Machine Learning Algorithm Engineer, KLA
Harish Kashyap is a machine learning researcher with extensive experience in AI. He has worked in several AI-related organizations, such as Amazon Robotics, and research labs like MERL. He holds several patents, has authored numerous publications, and is an AI subject matter expert at MIT Horizon. Kashyap authored the AI curriculum for SUNY Buffalo. He received an MS in electrical engineering from Northeastern University.

Textbooks for this course:
(Required) John E. Freund, Introduction to Probability (Dover Books on Mathematics) (ISBN 978-0486675497)