This module is 100% optional. It is intended as supplementary material if you plan to use git with your Jupyter Notebooks.
- Prepare (due Mon 4/01)
- Content below
- Canvas quizzes
- Class Participation – See on the class forum
- Homework (due Sun 4/07) [Link]
- There are no worked examples
Content
10 Deep Learning
- Neural Networks and Applications (16 min.)
- Forward Propagation (10 min.)
- Gradient Descent (14 min.)
- Back Propagation (11 min.)
- Convolutional Neural Network (15 min.)
- Introducing PyTorch (23 min.)
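As a concrete companion to the Gradient Descent video, here is a minimal sketch in plain NumPy (deliberately not PyTorch, so it runs with the base Anaconda install). The data and variable names are made up for illustration: we fit a single slope by repeatedly stepping against the gradient of the mean squared error.

```python
import numpy as np

# Minimal gradient descent on a 1-D least-squares problem:
# find w minimizing f(w) = mean((x*w - y)^2), whose gradient
# is 2 * mean(x * (x*w - y)).
rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 3.0 * x + rng.normal(scale=0.1, size=100)  # true slope is 3

w = 0.0    # initial guess
lr = 0.1   # learning rate (step size)
for _ in range(200):
    grad = 2 * np.mean(x * (x * w - y))  # derivative of the mean squared error
    w -= lr * grad                       # step opposite the gradient

print(w)  # close to the true slope, 3.0
```

PyTorch automates exactly this loop: back propagation computes `grad` for every parameter, and an optimizer applies the update step.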
Optional Supplements
PyTorch
Unlike most other libraries for this course, PyTorch is not included in the basic Anaconda installation. To use PyTorch, we suggest one of two options.
- Install PyTorch locally (for free). Follow the directions on the website: select the stable build, your operating system, Conda (for Anaconda), Python, and CPU to see the install directions for your particular setup. (CUDA supports hardware acceleration with NVIDIA graphics cards and is not necessary for this course.)
- Use PyTorch in a Jupyter notebook in the cloud (also for free). The easiest way to do this, if you have a Google account, is with a Google Colab notebook; PyTorch is already available in that cloud environment.
You can find the official PyTorch documentation here. Of particular note are the PyTorch tutorials, including PyTorch recipes, which serve as small examples of common tasks.
Book
The Deep Learning book is available free online and is authored by some of the leading experts in machine learning with deep artificial neural networks. It is detailed and in-depth, intended for those who want to learn more about deep learning theory now or in the future; you do not need to read the book for this course.
- Prepare (due Mon 03/25)
- Content below
- Canvas quizzes
- Class Participation – See on the class forum
- Homework (due Sun 03/31) [LINK]
- Worked Example [LINK]
Content
09.A – Relational Database
- Relational Database (24 min.)
09.B – SQL Python and Pandas
- SQL Querying (21 min.)
- SQL with Python and Pandas (12 min.)
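To give a feel for the SQL-from-Python workflow covered in the videos, here is a minimal sketch using Python's built-in sqlite3 module. The table and data are invented for illustration; the same connection object can also be passed to pandas (e.g. `pandas.read_sql`) to pull query results into a DataFrame.

```python
import sqlite3

# In-memory database; for a real project you would pass a file path instead.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE student (name TEXT, grade INTEGER)")
conn.executemany("INSERT INTO student VALUES (?, ?)",
                 [("Ada", 95), ("Grace", 91), ("Alan", 88)])

# Queries return rows as tuples.
rows = conn.execute(
    "SELECT name, grade FROM student WHERE grade >= 90 ORDER BY grade DESC"
).fetchall()
print(rows)  # [('Ada', 95), ('Grace', 91)]
conn.close()
```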
Optional Supplements
- SQLite Command Line Interface. On a Mac/Linux machine, you can launch it by entering “sqlite3” in a terminal. On a Windows machine, you can download the command line interface from the Precompiled Binaries for Windows on the SQLite download page.
- Python SQLite3 API Documentation
- Pandas SQL Documentation
- w3resource SQLite Tutorial
- Database Schema Visualizer
- Prepare (due Mon 03/18)
- Content below
- Canvas quizzes
- Class Participation – See on the class forum
- Homework (due Sun 03/24) [Link]
- Worked Examples [Link]
Content (Slides in Box)
08.A – Predictive Modelling and Regression
- Ordinary Linear Regression and Intro Scikit-Learn (21 min.)
- Nonlinear Regression and Scikit-Learn Preprocessing (13 min.)
- Binary Classification with Logistic Regression (22 min.)
Note: sklearn.metrics.plot_confusion_matrix, introduced on pp. 28-29 of the slides/video, is deprecated; use sklearn.metrics.ConfusionMatrixDisplay instead. To see the updated slides, switch to the “slides” panel when viewing the 08.A.III video in Panopto.
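A minimal sketch of the replacement API (assuming scikit-learn ≥ 1.0 with matplotlib installed); the labels here are made up for illustration:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; omit this line in a notebook
from sklearn.metrics import ConfusionMatrixDisplay, confusion_matrix

y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]

cm = confusion_matrix(y_true, y_pred)
print(cm)  # [[2 1]
           #  [1 2]]

# Replacement for the deprecated plot_confusion_matrix:
disp = ConfusionMatrixDisplay(confusion_matrix=cm)
disp.plot()
```

`ConfusionMatrixDisplay.from_predictions(y_true, y_pred)` is an even shorter one-call alternative.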
08.B – Machine Learning and Classification
- Naïve Bayes and Text Classification (20 min.) – The video has a typo on slide 10, see the pdf of the slides in Box for the fix.
- K-Nearest Neighbors and Training/Testing (31 min.)
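The train/test workflow from the K-Nearest Neighbors video can be sketched in a few lines of scikit-learn. This uses the built-in iris dataset purely as a stand-in; the point is the pattern of splitting, fitting, and scoring.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Hold out a test set so accuracy estimates generalization,
# not memorization of the training data.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

model = KNeighborsClassifier(n_neighbors=5)  # vote among 5 nearest neighbors
model.fit(X_train, y_train)
acc = model.score(X_test, y_test)  # fraction of correct test predictions
```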
Optional Supplements
Chapter 5 Machine Learning from the Python Data Science Handbook provides a very nice treatment of many of the topics from the above videos and more. If you are new to machine learning, we highly recommend that you read sections 5.1 “What is Machine Learning” through 5.4 “Feature Engineering” after completing the videos. After that, you can optionally read any of the In-Depth sections about specific algorithms for prediction.
In addition, the scikit-learn documentation itself provides several resources for working with the library:
- Scikit-learn Getting Started and Scikit-learn tutorials provide some short introductory materials
- Scikit-learn examples has an extensive library of example applications with code
- Scikit-learn user guide explains the classes of models and features of the library
- Scikit-learn API reference contains the full API reference
- Prepare (due Mon 2/19)
- Content below
- Canvas quizzes
- Peer Instructions – See on the class forum
- Homework (due Sun 2/25) [Link]
- Worked Example [Link]
Content (Slides in the Box Folder)
06.A – Summarizing Data
- Read Section 3.8 Aggregating and Grouping from Python Data Science Handbook.
- Read Section 3.9 Pivot Tables from Python Data Science Handbook.
06.B – Merging Data
- Read Section 3.6 Concat and Append from Python Data Science Handbook. Please note that the join_axes optional parameter mentioned in this section has been deprecated from the Pandas library; you can skip the details on this parameter.
- Read Section 3.7 Merge and Join from Python Data Science Handbook
- Table Relationships (4 min.)
- Which Join to Use (4 min.)
- Record Linkage (8 min.)
- Fuzzy Matching (21 min.)
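The merge/join material above can be sketched with a toy example (tables invented for illustration), showing how the choice of join changes which rows survive:

```python
import pandas as pd

students = pd.DataFrame({"sid": [1, 2, 3],
                         "name": ["Ada", "Grace", "Alan"]})
grades = pd.DataFrame({"sid": [1, 2, 4],
                       "grade": [95, 91, 80]})

# Inner join: keep only sids present in BOTH tables.
inner = pd.merge(students, grades, on="sid", how="inner")
print(inner["name"].tolist())  # ['Ada', 'Grace']

# Left join: keep every student; missing grades become NaN.
left = pd.merge(students, grades, on="sid", how="left")
print(len(left))  # 3
```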
Optional Supplements
- Prepare (due Mon 2/12)
- Content below
- Canvas quizzes
- Class Participation – See on the class forum
- Homework (due Sun 2/18) [Link]
- Worked Examples [Link]
Content (Slides in the Box folder)
5.A – Foundations of Probability (52 min.)
- Outcomes, Events, Probabilities (15 min.)
- Joint and Conditional Probability (11 min.)
- Marginalization and Bayes’ Theorem (15 min.)
- Random Variables and Expectations (11 min.)
5.B – Distributions of Random Variables (46 min.)
- Distributions, Means, Variance (19 min.)
- Monte Carlo Simulation (15 min.)
- Central Limit Theorem (12 min.)
- Slide 26 in the video has a typo that is fixed in the pdf version of the slides on Box. In the video, it says the probability is <= 0.95, but it should say < 0.05.
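The Monte Carlo and Central Limit Theorem ideas above can be sketched in a few lines of NumPy (the distributions and sizes here are chosen arbitrarily for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Monte Carlo: estimate P(sum of two dice == 7) by simulation
# instead of by enumerating outcomes.
rolls = rng.integers(1, 7, size=(100_000, 2)).sum(axis=1)
est = np.mean(rolls == 7)
print(est)  # close to 1/6

# Central limit theorem: means of many samples from a skewed
# (exponential) distribution cluster normally around the true mean, 1.
sample_means = rng.exponential(scale=1.0, size=(10_000, 50)).mean(axis=1)
print(sample_means.mean())  # close to 1.0
```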
Optional Supplements
Helpful YouTube videos to understand nuance with examples
- But what is the Central Limit Theorem? by 3Blue1Brown
- This is How Easy It Is to Lie With Statistics by Zach Star
- The medical test paradox, and redesigning Bayes’ rule by 3Blue1Brown
- How Long Can We Live? by MinuteEarth
- Understanding Cancer Survival Rates by vlogbrothers
Online Textbook and Documentation
You can access an excellent free online textbook on OpenIntro Statistics here, co-authored by Duke faculty. You can pay a suggested but adjustable price for a tablet-friendly pdf, but you can also just get the regular pdf for free. For this module, the following optional readings may be particularly helpful supplements:
- Chapter 3: Probability. This provides more information on many of the topics from the above videos in Foundations of Probability.
- Chapter 4: Distributions of random variables. This provides much more information about particular classic distributions than is provided in 5.B.1.
- Chapter 5.1: Point estimates and sampling variability. This provides more information on some of the topics from 5.B.2-3.
In addition, you can find documentation for the two pseudorandom number-generating / sampling libraries in python that we mentioned here:
- Python random – Base Python library
- Numpy random – Numpy random sampling library
- Prepare (due Mon 2/5)
- Content below
- Canvas quizzes
- Peer Instructions – See on the class forum
- Homework (due Sun 2/11) [LINK]
- Worked Example [LINK]
Content (Slides in the Box folder)
04.A – What is Wrangling
- Data sources, formats, and importing (26 min.)
- Common data cleaning problems (16 min.)
- Read Section 3.4 Handling Missing Data from Python Data Science Handbook
04.B – Wrangling Text
- Python string operations (16 min.)
- Introduction to regular expressions (18 min.)
- Read Section 3.10 Vectorized String Operations from Python Data Science Handbook
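The regular expression material above can be illustrated with a short sketch using Python's built-in re module (the log string and patterns are invented for the example):

```python
import re

log = "Error 404 at 10:32, error 500 at 11:07"

# Find all standalone three-digit status codes.
codes = re.findall(r"\b\d{3}\b", log)
print(codes)  # ['404', '500']

# Capture groups pull out structured pieces: (code, time) pairs.
matches = re.findall(r"[Ee]rror (\d{3}) at (\d{2}:\d{2})", log)
print(matches)  # [('404', '10:32'), ('500', '11:07')]
```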
Optional Supplements
- Pandas IO tools Documentation
- Pandas working with missing data user guide
- Python Regular Expression HOWTO
- Pandas working with text data user guide
- Why is data wrangling sometimes hard? Check out this case study [The Maddening Mess of Airport Codes!]
- Prepare (due Mon 1/29)
- Content below
- Canvas quizzes
- Class Participation – See on the class forum
- Homework (due Sun 2/4) [Link]
- Worked Examples [Link]
Content
03.A – Data Visualization and Design
- Why Visualize? (11 min.)
- Kinds of Data (7 min.)
- Basic Plot Types (12 min.)
- Dos and Don’ts (10 min.)
03.B – Visualization in Python
- Intro to Python Visualization Landscape (7 min.)
- Seaborn Introduction (17 min.)
- Seaborn Examples (17 min.)
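A minimal sketch of the Seaborn workflow from the videos, assuming seaborn and matplotlib are installed; the tiny DataFrame here is made up for illustration. Seaborn's key convenience is that you refer to DataFrame columns by name.

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen; omit this line in a notebook
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns

# A small tidy DataFrame: one row per observation.
df = pd.DataFrame({
    "species": ["A", "A", "B", "B", "B"],
    "length":  [4.1, 4.5, 5.8, 6.0, 5.5],
})

# Seaborn plots directly from DataFrame columns by name.
ax = sns.scatterplot(data=df, x="species", y="length")
ax.set_title("Length by species")
plt.savefig("lengths.png")
```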
Optional Supplements
- Prepare (due Mon 1/22)
- Content below
- Canvas quiz
- Class Participation – See on the class forum
- Homework (due Sun 1/28) [Link]
- Worked Example [Link]
Content (Slides in the Box folder)
2.A – Numpy (1 hour)
- Why Numpy (8 min.)
- Numpy Array Basics (15 min.)
- Numpy Universal Functions (20 min.)
- Numpy Axis (14 min.)
2.B – Pandas (45 min.)
- Why Pandas (7 min.)
- Pandas Series (19 min.)
- Pandas Dataframe (21 min.)
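The core ideas from 2.A and 2.B can be previewed in a short sketch (all values invented for illustration): Numpy gives vectorized arithmetic over whole arrays, and Pandas adds labels on top.

```python
import numpy as np
import pandas as pd

# Numpy: arithmetic applies to the whole array at once (no Python loop).
a = np.array([1, 2, 3, 4])
print((a * 10).tolist())  # [10, 20, 30, 40]
print(a.sum(), a.mean())  # 10 2.5

# Pandas Series: a 1-D array with an index of labels.
s = pd.Series([85, 92, 78], index=["Ada", "Grace", "Alan"])
print(s["Grace"])  # 92

# Pandas DataFrame: a table of named columns, each one a Series.
df = pd.DataFrame({"score": s, "passed": s >= 80})
print(df.loc["Alan", "passed"])  # False
```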
Optional Supplements
- Numpy Beginner’s Tutorial
- Chapter 2: Introduction to Numpy from Python Data Science Handbook
- Numpy Documentation
- 10 Minutes to Pandas Tutorial
- Pandas User Guide
- Chapter 3: Data Manipulation with Pandas from Python Data Science Handbook (just the first three subsections)
- Prepare (due Mon 3/4)
- Content below
- Canvas quizzes
- Class participation – See on the class forum
- Homework (due Sun 3/17) [Link] (the later due date accounts for spring break)
- Worked Example [Link]
Content
Note: the slides for this module have been updated. Please switch to the “slides” panel when viewing the video in Panopto. DO NOT stay on the “screen” panel, as the recorded screen shows the old slides (which contained typos and outdated information).
07.A – Confidence Intervals and Bootstrapping
- Intro Confidence Intervals (17 min.)
- Confidence Intervals in Python (17 min.)
- Misconceptions about Confidence Intervals (short read), OR the 3rd paragraph (starting with “As a technical note…”) in this link
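One common way to build a confidence interval in Python, related to the videos above, is the bootstrap: resample the data with replacement many times and take percentiles of the resampled means. This sketch uses made-up normal data purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=10, scale=2, size=200)  # stand-in for a real sample

# Bootstrap: resample with replacement, record each resample's mean,
# then take the 2.5th and 97.5th percentiles as a 95% confidence interval.
boot_means = [rng.choice(data, size=len(data), replace=True).mean()
              for _ in range(5000)]
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(lo, hi)  # an interval near the sample mean
```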
07.B – Hypothesis Testing
- Intro Hypothesis Testing and Proportions (14 min.)
- Hypothesis Testing Means and More (33 min.)
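A hypothesis test for the difference of two means, as covered in the videos, can be run with scipy.stats (assuming scipy is installed). The two groups here are simulated with deliberately different means, so the test should reject the null hypothesis that the means are equal.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
group_a = rng.normal(loc=5.0, scale=1.0, size=50)
group_b = rng.normal(loc=5.8, scale=1.0, size=50)

# Two-sample t-test: null hypothesis is that the two population means are equal.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(p_value)  # very small: reject the null at the 0.05 significance level
```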
Optional Supplements
You can access an excellent free online textbook on OpenIntro Statistics here, co-authored by Duke faculty. You can pay a suggested but adjustable price for a tablet-friendly pdf, but you can also just get the regular pdf for free. For Module 7, the following optional readings may be particularly helpful supplements:
- Chapter 5.2 Confidence intervals for a proportion. This provides introductory material on confidence intervals elaborating on 7.A.1.
- Chapter 5.3 Hypothesis testing for a proportion. This elaborates on the introduction to hypothesis testing from 7.B.1.
- Chapters 7.1, 7.3, and 7.5 cover material from 7.B.2 on using t-tests for a single mean, the difference of two means, and many pairwise means respectively.
- Chapter 6.3 discusses the chi-square test for categorical data introduced in 7.B.2.
In addition, here is the documentation for the scipy.stats library, which implements most of the functionality described in this module as well as many other useful statistical functions.