WSP 366 — Introduction to Deep Learning (ICME Workshop)
Big data has changed the way we work, live, and play. Data science—developing and testing models and algorithms—helps us gain knowledge for ourselves and provide insights to others.
Introduction to Deep Learning is one of ten workshops included in Fundamentals of Data Science, a series of one-day workshops offered by the Stanford Institute for Computational and Mathematical Engineering (ICME). Fundamentals of Data Science provides an introduction to multiple aspects of data science for those who are new to the field and those seeking to broaden their education and skills in data science. Students can sign up for one workshop, or several throughout the week. Students who complete four workshops will qualify for the Stanford ICME Fundamentals of Data Science Summer Workshops Certificate of Completion.
These workshops are not eligible for tuition discounts through Stanford Continuing Studies, but ICME offers discounts for eligible affiliates and partners. See below for details:
Stanford staff and full-time Stanford students: You may be eligible for a tuition discount if you register directly through ICME. The tuition for Stanford staff is $100 and the tuition for full-time Stanford students is $75. For more information and to register with these discounts, visit https://sto.stanfordtickets.org/icme2020/homepage.
ICME partners and affiliates qualify for a discount. If you think you qualify and have not received a discount code separately, email firstname.lastname@example.org. You will need to register through ICME's website.
Workshop: Introduction to Deep Learning
Instructor: Sherry Wang, PhD Student, ICME, Stanford
Deep Learning is a rapidly expanding field with new applications found every day. In this workshop, we will cover the fundamentals of deep learning for the beginner. We will introduce the math behind training deep learning models: the backpropagation algorithm. Building conceptual understanding of the fundamentals of deep learning will be the focus of the first part of the workshop. We will then cover some of the popular architectures used in deep learning, such as Convolutional Neural Networks (CNNs), Recurrent Neural Networks (RNNs), LSTMs, autoencoders and GANs. There will be a hands-on computing tutorial using Jupyter notebooks to build a basic image classification model via transfer learning. By the end of the workshop, participants will have a firm understanding of the basic terminology and jargon of deep learning and will be prepared to dive into the plethora of online resources and literature available for each specific application area.
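To give a flavor of the backpropagation algorithm covered in the first part of the workshop, here is a minimal sketch in plain NumPy: a tiny two-layer network whose gradients are computed by hand via the chain rule and then checked against a finite-difference estimate. The network size, tanh activation, and squared-error loss are illustrative choices for this sketch, not the workshop's own materials.

```python
import numpy as np

# Tiny two-layer network evaluated on a single example, with the
# backward pass written out by hand to illustrate backpropagation.
rng = np.random.default_rng(0)

x = rng.normal(size=(3,))        # input vector
y = 1.0                          # scalar target
W1 = rng.normal(size=(4, 3))     # first-layer weights
W2 = rng.normal(size=(4,))       # second-layer weights

def forward(W1, W2, x):
    h = np.tanh(W1 @ x)          # hidden activations
    y_hat = W2 @ h               # scalar prediction
    return h, y_hat

def loss(W1, W2, x, y):
    _, y_hat = forward(W1, W2, x)
    return 0.5 * (y_hat - y) ** 2

# Backward pass: apply the chain rule layer by layer.
h, y_hat = forward(W1, W2, x)
d_yhat = y_hat - y               # dL/dy_hat
dW2 = d_yhat * h                 # dL/dW2
dh = d_yhat * W2                 # dL/dh
dz = dh * (1 - h ** 2)           # through tanh: dL/dz, where z = W1 @ x
dW1 = np.outer(dz, x)            # dL/dW1

# Sanity check against a numerical (finite-difference) gradient.
eps = 1e-6
num_dW1 = np.zeros_like(W1)
for i in range(W1.shape[0]):
    for j in range(W1.shape[1]):
        Wp = W1.copy(); Wp[i, j] += eps
        Wm = W1.copy(); Wm[i, j] -= eps
        num_dW1[i, j] = (loss(Wp, W2, x, y) - loss(Wm, W2, x, y)) / (2 * eps)

max_err = np.max(np.abs(dW1 - num_dW1))
print(max_err)  # analytical and numerical gradients should agree closely
```

Frameworks such as PyTorch and TensorFlow automate exactly this backward pass, which is why understanding it once by hand pays off across every architecture covered later in the workshop.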
Prerequisite: Familiarity with basic concepts from linear algebra, such as vectors and matrices, as well as calculus concepts, such as differentiation. Familiarity with the Python programming language and an ability to use Jupyter notebooks will be helpful for the hands-on sessions.
Please note: Although the enrollment limit for this workshop is set to 60 Continuing Studies students, this course is designed for the entire Stanford community, and enrolled Continuing Studies students will be joined in the classroom by Stanford graduate and undergraduate students. Students should expect a large class.