
Project: Final Report

Due: Sunday 12/12, 11:59 PM

General Directions

The final report is intended to provide a comprehensive account of your collaborative course project in data science. The report should demonstrate your ability to apply the data science skills you have learned to a real-world project in a holistic way, from posing research questions and gathering data to analysis, visualization, interpretation, and communication. The report should stand on its own so that it makes sense to someone who has not read your proposal or prototype.

The report should contain at least the parts defined below. In terms of length, it should be about 5-7 pages using standard margins (1 in.), font (11-12 pt), and line spacing (1-1.5). A typical submission is around 3-4 pages of text and 5-7 pages overall with tables and figures. You should convert your written report to a PDF and upload it to Gradescope under the assignment “Project Final Report” by the due date. Be sure to include your names and NetIDs in your final document and use the group submission feature on Gradescope. You do not need to upload your accompanying data, code, or other supplemental resources demonstrating your work to Gradescope; instead, your report should contain instructions on how to access these resources (see Part 4 below for more details).

Part 1: Introduction and Research Questions

Your final report should begin by reintroducing your topic and restating your research question(s) as in your proposal. As before, your research question(s) should be (1) substantial, (2) feasible, and (3) relevant. In contrast to the prior reports, the final report does not need to explicitly justify in text that the research questions are substantial and feasible; your results should demonstrate both of these points. You should still explicitly justify how your research questions are relevant. In other words, be sure to explain the motivation behind your research questions.

You can start with the text from your prototype, but you should update your introduction and research questions to reflect changes in or refinements of the project vision. Your introduction should be sufficient to provide context for the rest of your report.

Part 2: Summary of Results

Provide a brief (one or two paragraphs) summary of your results. This summary of results should address your research questions. For example, if one of your research questions was “Did COVID-19 result in bankruptcy in North Carolina during 2020?” then a possible (and purely hypothetical) summary of results might be “We aggregate the public records disclosures of small businesses in North Carolina from January 2019 to December 2020 and find substantial evidence that COVID-19 did result in a moderate increase in bankruptcy during 2020. This increase is not geographically uniform and is concentrated during summer and fall 2020. We also examined the impact of federal stimulus but cannot provide an evaluation of its impact from the available data.”

Part 3: Data Sources

Discuss the data you have collected and are using to answer your research questions. Be specific: name the datasets you are using, the information they contain, and where they were collected from / how they were prepared. You can begin with the text from your prototype but be sure to update it to fit the vision for your final project.

Part 4: Results and Methods

This is likely to be the longest section of your paper, spanning multiple pages. The results and methods section of your report should explain your detailed results and the methods used to obtain them. Where possible, results should be summarized using clearly labeled tables or figures and supplemented with written explanations of the significance of the results with respect to the research questions outlined previously.

Your description of your methods should be specific. For example, if you scraped multiple web databases, merged them, and created a visualization, then you should explain how each step was conducted in enough detail that an informed reader could reasonably be expected to reproduce your results with time and effort. Just saying “we cleaned the data and dealt with missing values” or “we built a predictive model” is not sufficient detail, for example.

Your report should also contain instructions on how to access your full implementation (that is, your code, data, and any other supplemental resources like additional charts or tables). The simplest way to do so is to include a link to the Box folder, GitLab repo (if you wish to keep the repo private, add Prof. Stephens-Martinez (username: ksteph) and your mentor to the repo), or whatever other platform your group is using to house your data and code.

Part 5: Limitations and Future Work

In this part, you should discuss any important limitations or caveats to your results with respect to answering your research questions. For example, if you don’t have as much data as you would like or are unable to fairly evaluate the performance of a predictive model, explain and contextualize those limitations.

Finally, provide a brief discussion of future work. This could explain how future research might address the limitations you outline, or it could pose additional follow-up research questions based on your results so far. In short, explain how an informed reader (such as a peer in the class) could improve on and extend your results.

Grading Rubric

Final reports will be evaluated on the following criterion-based rubric. Reports satisfying all criteria will receive full credit.

  1. Submits a relevant document satisfying general requirements
  2. Includes a brief introduction to the topic of interest
  3. Poses one or more concrete research questions
  4. Provides a reasonable justification that research questions are relevant
  5. Provides a brief summary of results
  6. Includes a discussion of concrete/specific data sources
  7. Provides results in the form of analysis, tables, visualization, etc.
  8. Final tables and visualizations are properly labeled and legible
  9. Results provide reasonable answers to research questions and interpretation is provided in the text. Some results may be negative or incomplete (with discussion) but should provide some concrete evidence toward answers to research questions.
  10. Results and methods demonstrate substantial effort and progress over the course of the project
  11. Methods used to obtain results are described in sufficient detail to understand and interpret results
  12. Methods used are generally appropriate and do not contain significant methodological errors
  13. Provides a link/reference to additional materials (e.g., code and data stored in Box or GitLab)
  14. Provides a reasonable discussion of any limitations to the results
  15. Provides a reasonable discussion of future work and how the results could be extended
  16. Final writeup is edited and polished. Can have one or two typos or grammatical errors, but the document is sufficiently edited as to not distract or confuse the reader.

Final Perform

Due: Monday 11/22

Box folder with the files for this Perform

Introduction

The Final Perform will have you show all that you have learned in the class so far. This Perform consists of a skeleton notebook and a raw data set. You must process, clean, and analyze the raw data to learn something interesting. We encourage you to work in pairs so you can explore the data set more thoroughly, but it is not required.

The grading scale and points allocation are different from those of prior notebooks. Moreover, the last 3 (out of 100) points for this Perform are allocated to a conclusion section and the overall cohesion of the notebook. These points focus on how well the sections connect and build toward a specific conclusion. Keep in mind that the syllabus states you only need 95% of the possible points to earn full credit. Therefore, if you do not want to demonstrate that level of mastery, you do not need to spend the extra time on this.

Working together

  1. You may work with up to one other person.
    1. We recommend that you do, but understand if you would prefer to work by yourself.
    2. If you want to find a partner, try posting on the class forum.
  2. You may share your data loading and cleaning code.
    1. This is code that converts the data files into DataFrames and converts the columns into a useful format.
    2. Just as developers in the real world help each other figure out how to get raw data into a needed format, you may help each other here.
    3. You should feel free to ask and answer such questions on the class forum.
    4. If you are not sure a question falls under this designation, ask it as a private question first.
  3. You may discuss the kind of analysis you are doing.
  4. You may NOT share your analysis code with anyone except your partner (if you have one).

Assessment Goals

The goals of this Perform are for you to demonstrate the following skills:

  1. Load and process raw data that is not necessarily in an easy-to-use format for your intended analysis.
  2. Visualize data such that a meaningful interpretation can be made.
  3. Wisely choose, explain the choice of, conduct, and interpret the results of a hypothesis test.
  4. Create a prediction model from an existing data set.
  5. Stretch goal: Use all of the above elements to create a cohesive explanation of your finding(s).

Grading Scale and Points Allocation

Each section will be graded on a four-step rubric scale as follows.

  • E (Exemplary) – Work that meets all requirements and displays full mastery of all learning goals and material.
  • S (Satisfactory) – Work that meets all requirements and displays at least partial mastery of all learning goals as well as full mastery of core learning goals.
  • N (Not yet) – Work that does not meet some requirements and/or displays developing or incomplete mastery of at least some learning goals and material.
  • U (Unassessable) – Work that is missing, does not demonstrate meaningful effort, or does not provide enough evidence to determine a level of mastery.

There are 100 points possible. The number of points earned depends on the notebook section. The rubric will be converted to points as follows:

  • E = full credit
  • S = E_full_credit – 1
  • N = E_full_credit / 2
  • U = E_full_credit / 5
  • Blank = 0

Notebook Sections and Grading Expectations

Overall Grading Considerations

The entire notebook is expected to take into account the following:

  1. The code takes advantage of the Pandas and NumPy libraries (a minimal Pandas sketch follows this list)
    1. For loops are allowed
    2. Do not use a for loop to iterate over a DataFrame’s rows unless the DataFrame is guaranteed to have < 100 rows
  2. Accounts for the fact that there is a different number of ratings for each professor in the data set
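
For reference, here is a minimal sketch of what taking advantage of Pandas looks like in practice. The DataFrame and column names are hypothetical, not taken from the actual data set:

    import pandas as pd

    # Hypothetical ratings data; the column names are illustrative only.
    df = pd.DataFrame({
        "professor": ["A", "A", "B"],
        "quality": [4.5, 3.0, 5.0],
        "difficulty": [2.0, 3.5, 4.0],
    })

    # Prefer vectorized arithmetic over looping through rows.
    df["gap"] = df["quality"] - df["difficulty"]

    # groupby/agg handles a different number of ratings per professor.
    per_prof = df.groupby("professor")["quality"].agg(["mean", "count"])
    print(per_prof)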

Section: Data Loading and Cleaning (21 points)

This section should contain all of your data loading and cleaning code, where you load and create your DataFrame(s). It does not need to contain all of the data processing code if creating a new column or table in a later section makes more sense for explanation and cohesion. A minimal loading sketch follows the list below.

  1. Loads data from all of the data files
  2. Shows at least the first 10 rows of all DataFrames created that are used later in the notebook
  3. Plus overall grading considerations
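
As a minimal loading sketch (the file name here is hypothetical; substitute the actual data files provided for this Perform):

    import pandas as pd

    # Hypothetical file name for illustration only.
    ratings = pd.read_csv("ratings.csv")

    # Show at least the first 10 rows of each DataFrame used later.
    print(ratings.head(10))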

Section: Visualization (19 points, Module 5B)

This section should contain at least one visualization showing something informative about the data. The skills you learned for this section primarily came from Module 5B. A minimal labeling sketch follows the list below.

  1. Each visualization has:
    1. X-axis and Y-axis are labeled and have appropriate values
    2. Legend is provided if needed to interpret the visualization
    3. Use of color adds and does not detract from the visualization
    4. A title or caption describing what the visualization is showing
  2. Draws at least 1 visualization from at least 1 column of data
  3. Provides a short 1-4 sentence summary of key takeaways from the visualizations.
  4. Plus overall grading considerations
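
As a minimal labeling sketch, assuming a hypothetical DataFrame with a numeric rating column:

    import pandas as pd
    import seaborn as sns
    import matplotlib.pyplot as plt

    # Hypothetical data; the column name "quality" is illustrative only.
    df = pd.DataFrame({"quality": [4.5, 3.0, 5.0, 3.8, 4.1]})

    ax = sns.histplot(data=df, x="quality")
    ax.set_xlabel("Average rating (1-5)")                       # labeled X-axis
    ax.set_ylabel("Count")                                      # labeled Y-axis
    ax.set_title("Distribution of average professor ratings")   # descriptive title
    plt.show()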

Section: Hypothesis Test (19 points, Module 3B)

This section should contain at least one hypothesis test about the data. The skills you learned for this section primarily came from Module 3B. A minimal testing sketch follows the list below.

  1. H0 and H1 hypotheses are clearly labeled and stated
  2. The kind of test used is clearly stated
  3. Has a clear interpretation of the test’s result
  4. Plus overall grading considerations
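
As a minimal testing sketch with scipy.stats; the data and the choice of Welch’s t-test are hypothetical, not a prescription for your analysis:

    import numpy as np
    from scipy import stats

    # Hypothetical ratings for two groups of professors.
    group_a = np.array([4.1, 3.8, 4.5, 4.0, 3.9])
    group_b = np.array([3.2, 3.6, 3.1, 3.8, 3.4])

    # H0: the two groups have equal mean ratings. H1: the means differ.
    # Welch's t-test does not assume equal variances.
    t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")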

Section: Prediction (19 points, Module 6)

This section should contain the creation and testing of at least one model. The skills you learned for this section primarily came from Module 6. A minimal train/test sketch follows the list below.

  1. The data and target for the model are clearly labeled
  2. Has a clear rationale for the data used in the model
  3. Properly splits and uses a train and test set
  4. Has a clear interpretation for the results of the model
  5. Plus overall grading considerations
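
As a minimal train/test sketch with scikit-learn; the feature, target, and model choice are hypothetical:

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.metrics import r2_score
    from sklearn.model_selection import train_test_split

    # Hypothetical data: predict a rating from a difficulty score.
    X = np.array([[2.0], [3.5], [4.0], [1.5], [3.0], [2.5], [4.5], [1.0]])
    y = np.array([4.5, 3.0, 2.8, 4.8, 3.5, 4.0, 2.5, 4.9])

    # Hold out a test set so the model is evaluated on unseen data.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    model = LinearRegression().fit(X_train, y_train)
    print("Test R^2:", r2_score(y_test, model.predict(X_test)))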

Section: Additional Analysis (19 points)

This section should contain one more analysis of your choosing. It can be of the same kind as any of the other analysis sections: another visualization, hypothesis test, or prediction analysis.

  1. Clearly states what the additional analysis is
  2. Provides a clear rationale for the analysis
  3. Has a clear interpretation for the results of the analysis
  4. Fulfills all of the requirements of the kind of analysis that it is
  5. Plus overall grading considerations

Section: Conclusion (and Cohesion, 3 points)

You only need this section if you are interested in earning these last points.

If you need to rearrange the sections to improve the cohesion of your notebook, you may do so.

These points can only be earned if at least two of the analysis sections earn an E and all of the other sections earn at least an S. These points focus on the overall cohesion of your sections and on whether the conclusion effectively summarizes the results across all of the sections.

  1. All five sections have a clear progression and build off of each other
  2. Each section references another as appropriate in building a cohesive explanation of the main results of the notebook
  3. The conclusion effectively summarizes the notebook (it should not just be a list of the results of each section)
  4. The conclusion provides a summary of the key takeaways from the analyses
  5. Plus overall grading considerations

Module 7: Deep Learning

  1. Prepare (soft due Tu 11/9, hard due M 11/15)
    1. Content below
    2. Sakai quizzes
  2. Group Worksheet (soft due W 11/10, hard due M 11/15)
    1. Part 1
    2. Part 2
    3. Part 3
    4. Part 4
  3. Practice (due M 11/22)
  4. Perform – There is no Perform for this module

Content

7 Deep Learning

  1. Neural Networks and Applications (16 min.)
  2. Forward Propagation (10 min.)
  3. Gradient Descent (14 min.)
  4. Back Propagation (11 min.)
  5. Convolutional Neural Network (15 min.)
  6. Introducing PyTorch (23 min.)

Optional Supplements

The deep learning book is available free online and is authored by some of the leading experts in machine learning with deep artificial neural networks. It is very detailed and in-depth and is purely for those who are interested in learning more about deep learning theory now or in the future; you do not need to read the book for this course.

Unlike most other libraries for this course, PyTorch is not included in the basic Anaconda installation. To use PyTorch, we suggest you choose one of two options.

  • Install PyTorch locally (for free). You can see the directions on the website: select the stable build, your operating system, Conda (for Anaconda), Python, and CPU to see install directions for your particular setup. (CUDA is used to support hardware acceleration with NVIDIA graphics cards and is not necessary for this course.)
  • Use PyTorch in a Jupyter notebook in the cloud (also for free). The easiest way to do this if you have a Google account is with a Google Colab notebook; PyTorch will already be available to you in this cloud environment.

You can find the official PyTorch documentation here. Of particular note are the PyTorch tutorials, including PyTorch recipes, which serve as small examples of common tasks.
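
Once installed, a quick way to confirm that tensors and autograd work is a sketch like the following (a minimal illustration, not part of the course materials):

    import torch

    # A small tensor computation with gradient tracking.
    x = torch.tensor([[1.0, 2.0], [3.0, 4.0]])
    w = torch.randn(2, 2, requires_grad=True)
    loss = (x @ w).sum()
    loss.backward()          # backpropagation fills in w.grad
    print(torch.__version__)
    print(w.grad)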

Module 6: Prediction & Supervised Machine Learning

  1. Prepare (soft due Tu 10/26, hard due M 11/1)
    1. Content below; if you are new to machine learning, some of the optional material is strongly recommended.
    2. Sakai quizzes
  2. Group Worksheet (soft due W 10/27, hard due M 11/1)
  3. Practice (due M 11/8)
  4. Perform (due M 11/22)

Content

6.A Predictive Modeling and Regression

  1. Ordinary Linear Regression and Intro Scikit-Learn (21 min.)
  2. Nonlinear Regression and Scikit-Learn Preprocessing (13 min.)
  3. Binary Classification with Logistic Regression (22 min.)

6.B Machine Learning and Classification

  1. Naïve Bayes and Text Classification (20 min.) – The video has a typo on slide 10; see the PDF of the slides in Box for the fix.
  2. K-Nearest Neighbors and Training/Testing (31 min.)

Optional Supplements

Chapter 5 Machine Learning from the Python Data Science Handbook provides a very nice treatment of many of the topics from the above videos and more. If you are new to machine learning, we highly recommend that you read sections 5.1 What is Machine Learning through 5.4 Feature Engineering after completing the videos. After that, you can optionally read any of the In-Depth sections about specific algorithms for prediction.

In addition, the scikit-learn documentation itself provides several resources for working with the library.

Module 5B: Visualization

  1. Prepare (soft due Th 10/14, hard due 10/18)
    1. Content below
    2. Sakai quizzes
  2. Group Worksheet (soft due F 10/15, hard due 10/18)
  3. Practice (due M 10/25)
  4. Perform (due M 11/8)

Content

5B.A Data Visualization and Design

  1. Why Visualize? (11 min.)
  2. Basic Plot Types (17 min.)
  3. Dos and Don’ts (10 min.)

5B.B Visualization in Python

  1. Intro to Python Visualization Landscape (7 min.)
  2. Seaborn Introduction (17 min.)
  3. Seaborn Examples (17 min.)

Optional Supplements

Module 5A: Databases & SQL

  1. Prepare (soft due Tu 10/12, hard due 10/18)
    1. Content below
    2. Sakai quizzes
  2. Group Worksheet (soft due W 10/13, hard due 10/18)
  3. Practice (due M 10/25)
  4. Perform (due M 11/8)

Content

5A.A – Relational Database (24 min.)

5A.B

  1. SQL Querying (21 min.)
  2. SQL with Python and Pandas (12 min.)

Optional Supplements

Module 4: Combining Data

There is only 1 module for learning sprint 4. The rest of your time should be spent on your project.

  1. Prepare (soft due Tu 9/28, hard due M 10/11)
    1. Content below
    2. Sakai quizzes
  2. Group Worksheet (soft due W 9/29, hard due M 10/11)
  3. Practice (due M 10/11)
  4. Perform (due M 10/25)

Content

4.A – Summarizing Data

  1. Read Section 3.8 Aggregating and Grouping from Python Data Science Handbook.
  2. Read Section 3.9 Pivot Tables from Python Data Science Handbook.

4.B – Merging Data

  1. Record Linkage (8 min.)
  2. Read Section 3.6 Concat and Append from Python Data Science Handbook. Please note that the join_axes optional parameter mentioned in this section has been deprecated from the Pandas library; you can skip over the details on this parameter.
  3. Read Section 3.7 Merge and Join from Python Data Science Handbook. A minimal sketch of concat and merge follows this list.
  4. Fuzzy Matching (21 min.)
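
A minimal sketch of these operations, using made-up tables purely for illustration:

    import pandas as pd

    ratings = pd.DataFrame({"prof_id": [1, 2, 2], "rating": [4.5, 3.8, 4.0]})
    profs = pd.DataFrame({"prof_id": [1, 2], "dept": ["CS", "Math"]})

    # Concat stacks rows of like-shaped tables.
    stacked = pd.concat([ratings, ratings], ignore_index=True)

    # Merge joins tables on a shared key column.
    merged = ratings.merge(profs, on="prof_id", how="left")
    print(merged)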

Optional Supplements

Module 3B: Statistical Inference

  1. Prepare (soft due Th 9/16, hard due M 9/27)
    1. Content below
    2. Sakai quizzes
  2. Group Worksheet (soft due F 9/17, hard due M 9/27)
  3. Practice (due M 9/27)
  4. Perform (due M 10/11)

Content

3B.A – Confidence Intervals and Bootstrapping

  1. Intro Confidence Intervals (17 min.)
  2. Confidence Intervals in Python (17 min.)

3B.B – Hypothesis Testing

  1. Intro Hypothesis Testing and Proportions (14 min.)
  2. Hypothesis Testing Means and More (33 min.)

Optional Supplements

You can access an excellent free online textbook on OpenIntro Statistics here, co-authored by Duke faculty. You can pay a suggested but adjustable price for a tablet-friendly pdf, but you can also just get the regular pdf for free. For Module 3B, the following optional readings may be particularly helpful supplements:

  • Chapter 5.2 Confidence intervals for a proportion. This provides introductory material on confidence intervals elaborating on 3B.A.1.
  • Chapter 5.3 Hypothesis testing for a proportion. This elaborates on the introduction to hypothesis testing from 3B.B.1.
  • Chapters 7.1, 7.3, and 7.5 cover material from 3B.B.2 on using t-tests for a single mean, the difference of two means, and many pairwise means respectively.
  • Chapter 6.3 discusses the chi-square test for categorical data introduced in 3B.B.2.

In addition, here is the documentation for the scipy.stats library, which implements most of the functionality described here as well as many other useful statistical functions.
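
For example, a minimal sketch of a t-based confidence interval with scipy.stats, using made-up sample data:

    import numpy as np
    from scipy import stats

    # Hypothetical sample; 95% confidence interval for the mean.
    sample = np.array([2.9, 3.4, 3.1, 3.8, 2.7, 3.3])
    low, high = stats.t.interval(0.95, df=len(sample) - 1,
                                 loc=np.mean(sample),
                                 scale=stats.sem(sample))
    print(f"95% CI: ({low:.2f}, {high:.2f})")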

Module 3A: Data Wrangling

  1. Prepare (soft due Tu 9/14, hard due M 9/27)
    1. Content below
    2. Sakai quizzes
  2. Group Worksheet (soft due W 9/15, hard due M 9/27)
  3. Practice (due M 9/27)
  4. Perform (due M 10/11)

Content

3A.A – What is Wrangling

  1. Data sources, formats, and importing (26 min.)
  2. Common data cleaning problems (16 min.)
  3. Read Section 3.4 Handling Missing Data from Python Data Science Handbook.

3A.B – Wrangling Text

  1. Python string operations (16 min.)
  2. Introduction to regular expressions (18 min.)
  3. Read Section 3.10 Vectorized String Operations from Python Data Science Handbook. A short sketch follows this list.
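
A short sketch of vectorized string operations and a regular expression, on made-up values:

    import pandas as pd

    # Hypothetical messy strings; clean them without a Python-level loop.
    names = pd.Series([" alice ", "BOB", None, "Carol"])
    print(names.str.strip().str.title())

    # A regular expression splits structured text into columns.
    codes = pd.Series(["CS101", "MATH230"])
    print(codes.str.extract(r"([A-Z]+)(\d+)"))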

Optional Supplements

Module 2B: Probability

  1. Prepare (soft due Th 9/2, hard due M 9/13)
    1. Content below
    2. Sakai quizzes
  2. Group Worksheet (soft due F 9/3, hard due M 9/13)
  3. Practice (due M 9/13)
  4. Perform (due M 9/27)

Content

2B.A – Foundations of Probability (52 min.)

  1. Outcomes, Events, Probabilities (15 min.)
  2. Joint and Conditional Probability (11 min.)
  3. Marginalization and Bayes’ Theorem (15 min.)
  4. Random Variables and Expectations (11 min.)

2B.B – Distributions of Random Variables (46 min.)

  1. Distributions, Means, Variance (19 min.)
  2. Monte Carlo Simulation (15 min.)
  3. Central Limit Theorem (12 min.)

Optional Supplements

You can access an excellent free online textbook on OpenIntro Statistics here, co-authored by Duke faculty. You can pay a suggested but adjustable price for a tablet-friendly pdf, but you can also just get the regular pdf for free. For this module, the following optional readings may be particularly helpful supplements:

  • Chapter 3: Probability. This provides more information on many of the topics from the above videos in Foundations of Probability.
  • Chapter 4: Distributions of random variables. This provides much more information about particular classic distributions than is provided in 2B.B.1.
  • Chapter 5.1: Point estimates and sampling variability. This provides more information on some of the topics from 2B.B.2-3.

In addition, you can find documentation for the two pseudorandom number generating / sampling libraries in Python that we mentioned here.
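
As a small illustration of the Monte Carlo and Central Limit Theorem ideas above, a sketch using NumPy’s random generator (the distribution and sample sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(seed=0)

    # Means of many samples from a skewed (exponential) distribution
    # look roughly normal, per the Central Limit Theorem.
    sample_means = rng.exponential(scale=1.0, size=(10_000, 30)).mean(axis=1)
    print(sample_means.mean(), sample_means.std())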