
CS 4033/5033 – Machine Learning (Fall 2020)

The class is cross-listed as CS 4033 and CS 5033, so that both undergraduate and graduate students can enroll simultaneously. No student may earn credit for both 4033 and 5033.

Table of Contents

Course Description
Basic Information
Important Coronavirus-Related Information
Homework Assignments
Projects
Machine Learning Resources
Class Log

Course Description

Topics include decision trees, relational learning, neural networks, Bayesian learning, reinforcement learning, multiple-instance learning, feature selection, learning appropriate representations, clustering, and kernel methods.


Basic Information

Syllabus

The syllabus is available here.

Time and Location

Mondays and Wednesdays, 5:30pm – 6:45pm, Dale Hall 0218.

Contact Information

Please see here.

Teaching Assistants

The teaching assistant for the class is Tashfeen (tashfeen).

Office Hours

We will be holding our office hours at the following times.

Mondays
10:30am – 12:30pm, 244 Devon Energy Hall (Dimitris)
Tuesdays
12:15pm – 1:15pm (Tashfeen)
Wednesdays
10:30am – 12:30pm, 244 Devon Energy Hall (Dimitris)
Thursdays
12:15pm – 1:15pm (Tashfeen)

Please note that while anyone is welcome during the entire 2-hour time period that I have reserved on Mondays and Wednesdays, CS 5970 will have precedence during the first hour (10:30am – 11:30am) and CS 4033/5033 will have precedence during the second hour (11:30am – 12:30pm).

Exceptions to the Regular Schedule of Office Hours

If you want to meet me outside of my office hours, please send me an email to arrange an appointment.

Thursday, September 3, 2020: I will be holding office hours between 10:00am – 12:00pm. My regular office hours that were planned for Wednesday, September 2, 2020 are canceled. Please see Canvas for the Zoom link for these makeup office hours.

Thursday, September 10, 2020: I will be holding office hours between 10:00am – 12:00pm. My regular office hours that were planned for Wednesday, September 9, 2020 are canceled. Please see Canvas for the Zoom link for these makeup office hours.

Wednesday, September 16, 2020: I will be holding my office hours between 10:30am – 12:00pm; that is, 30 minutes shorter than usual.

Exceptions to the Regular Schedule of Office Hours for the TAs

As exceptions appear along the way, they will also be announced here.


Important Coronavirus-Related Information

We have the following links.


Homework Assignments

Assignment 1: Announced on Monday, August 31. Due Wednesday, September 9.

Assignment 2: Announced Wednesday, September 16. Due Monday, September 28.

Assignment 3: Announced Wednesday, September 30. Due Wednesday, October 14 (originally Friday, October 9).

Assignment 4: Announced Wednesday, October 14. Due Monday, October 26.

Assignment 5: Announced Monday, November 2. Due Monday, November 16.

Assignment 6: Announced Monday, November 16. Due Thursday, December 3.


Projects

Information related to the projects will show up here.

Ideas for projects

Below are some ideas for your projects.

Reinforcement Learning Ideas
Supervised Learning Ideas


Machine Learning Resources

Books

The two books that we plan to use for the course are available for free in electronic format at the following links:

Another book that I like a lot and recommend to people who are starting with machine learning is the following one:

Personal Notes

Notes by your Teaching Assistant

Notes by Others

Papers


Class Log

A log for the class will be kept online here.

Class 1 (Aug 24, 2020)

Discussion on syllabus and policies.

Pretest in class.

Class 2 (Aug 26, 2020)

Assigned Reading: Elements of Statistical Learning (ESL), Chapter 1.

Assigned Reading: Sutton & Barto: Chapters 1 and 2.

Assigned today: Think about short and long projects. Think about the topic for your RL project.

Discussion on projects. Introduction to Machine Learning and Reinforcement Learning.

Class 3 (Aug 31, 2020)

Assigned today: Homework 1.

Continued our discussion on the Reinforcement Learning Problem.

Basic ingredients of RL methods: policy, value function, model. The prediction problem and the control problem.

Markov Chains and Markov Decision Processes (MDPs). Recycling Robot example.

Returns and discounting.
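For reference, in the notation of Sutton & Barto, the discounted return from time $t$ is

$$G_t = R_{t+1} + \gamma R_{t+2} + \gamma^2 R_{t+3} + \cdots = \sum_{k=0}^{\infty} \gamma^k R_{t+k+1}, \qquad 0 \le \gamma \le 1,$$

where the discount factor $\gamma$ keeps the infinite sum finite whenever $\gamma < 1$ and the rewards are bounded.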

Class 4 (Sep 2, 2020)

Continued our discussion on MDPs. Discussion on the Bellman Expectation Equations. Backup diagrams and solution of the prediction problem using linear algebra. Revisited the recycling robot example and showed how to evaluate the policy that picks each action with the same probability at each of the two energy states of the robot.

Optimal value functions and optimal policies. Theorem for existence of optimal policies on MDPs.

Bellman (Optimality) Equations and the respective backup diagrams for the v and q functions. We can no longer find a solution with linear algebra.
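To make the contrast concrete: for a fixed policy $\pi$, the Bellman Expectation Equation

$$v_\pi(s) = \sum_{a} \pi(a \mid s) \sum_{s', r} p(s', r \mid s, a)\, \big[ r + \gamma\, v_\pi(s') \big]$$

is linear in the unknowns $v_\pi(s)$, so in matrix form $v_\pi = r_\pi + \gamma P_\pi v_\pi$ and hence $v_\pi = (I - \gamma P_\pi)^{-1} r_\pi$. The Bellman Optimality Equation

$$v_*(s) = \max_{a} \sum_{s', r} p(s', r \mid s, a)\, \big[ r + \gamma\, v_*(s') \big]$$

is nonlinear because of the max, which is why linear algebra no longer suffices.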

Started our discussion on Dynamic Programming (DP).

Assigned Reading: Sutton & Barto: Chapter 3.

Sep 7, 2020

No class today. Labor day.

Class 5 (Sep 9, 2020)

Due today: Homework 1.

Proposals for reinforcement learning projects; in-class as well as remote presentations.

Class 6 (Sep 14, 2020)

Due today: Written proposal for the reinforcement learning project; whether short or long.

Dynamic Programming (DP) methods.

Assigned Reading: Sutton & Barto: Chapter 4.
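As a small illustration of these DP methods, here is a minimal sketch of iterative policy evaluation in Python; the representation of the MDP (nested lists of (probability, next state, reward) triples) is an assumption made for the example, not code from class.

    import numpy as np

    def policy_evaluation(P, policy, gamma=0.9, theta=1e-8):
        """Iterative policy evaluation (cf. Sutton & Barto, Chapter 4).

        P[s][a] is a list of (prob, next_state, reward) triples;
        policy[s][a] is the probability of taking action a in state s.
        """
        V = np.zeros(len(P))
        while True:
            delta = 0.0
            for s in range(len(P)):
                # Back up the value of s under the policy (Bellman expectation).
                v = sum(policy[s][a] * sum(p * (r + gamma * V[s2])
                                           for (p, s2, r) in P[s][a])
                        for a in range(len(P[s])))
                delta = max(delta, abs(v - V[s]))
                V[s] = v
            if delta < theta:   # stop once values change by less than theta
                return V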

Class 7 (Sep 16, 2020)

Assigned today: Homework 2.

Started our discussion on Monte-Carlo and temporal difference learning methods.

Assigned Reading: Sutton & Barto: Chapter 5 up to (and including) Section 5.4. Chapter 6 up to (and including) Section 6.5.

Class 8 (Sep 21, 2020)

Continued our discussion on Monte-Carlo and Temporal Difference learning.

Using these model-free methods for solving the prediction problem, as well as the control problem.

We stopped at Sarsa (on-policy method).

Assigned Reading: Sutton & Barto: Chapter 5 up to (and including) Section 5.4. Chapter 6 up to (and including) Section 6.5.

Class 9 (Sep 23, 2020)

Q-learning (off-policy method).

Assigned Reading: Sutton & Barto: Chapter 5 up to (and including) Section 5.4. Chapter 6 up to (and including) Section 6.5.
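For reference, the two one-step update rules we covered, in the notation of Sutton & Barto:

$$\text{Sarsa (on-policy):}\quad Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big[ R_{t+1} + \gamma\, Q(S_{t+1}, A_{t+1}) - Q(S_t, A_t) \big]$$

$$\text{Q-learning (off-policy):}\quad Q(S_t, A_t) \leftarrow Q(S_t, A_t) + \alpha \big[ R_{t+1} + \gamma \max_{a} Q(S_{t+1}, a) - Q(S_t, A_t) \big]$$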

Function approximation.

Assigned Reading: Sutton & Barto: Sections 9.1 – 9.4, 9.5.3, 9.5.4, and 9.5.5. (If you are familiar with neural networks, you can also have a look at Section 9.7.) Section 10.1.

Class 10 (Sep 28, 2020)

Due today: Homework 2.

Eligibility traces.

Assigned Reading: Sutton & Barto: Sections 7.1, 7.2, 12.1, 12.2, 12.7.
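For reference, in the tabular case the accumulating eligibility trace and the corresponding TD($\lambda$) update take the form

$$e_t(s) = \gamma \lambda\, e_{t-1}(s) + \mathbf{1}[S_t = s], \qquad V(s) \leftarrow V(s) + \alpha\, \delta_t\, e_t(s) \ \text{ for all } s,$$

where $\delta_t = R_{t+1} + \gamma V(S_{t+1}) - V(S_t)$ is the usual TD error.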

Class 11 (Sep 30, 2020)

Assigned today: Homework 3.

We spent most of our class time on a discussion of Reinforcement Learning, mainly answering questions on the material we have covered.

Started our discussion on Supervised Learning; in particular, the k-Nearest Neighbors method.

Assigned Reading: Elements of Statistical Learning: Section 2.3.2. Alternatively, Mitchell's book: Sections 8.1 and beginning of 8.2 (stopped before 8.2.1).
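As a minimal sketch of the method (illustrative code, not from class), k-NN classification with Euclidean distance, assuming the training data are NumPy arrays:

    import numpy as np
    from collections import Counter

    def knn_predict(X_train, y_train, x, k=3):
        """Classify x by majority vote among its k nearest training points."""
        dists = np.linalg.norm(X_train - x, axis=1)  # Euclidean distances to x
        nearest = np.argsort(dists)[:k]              # indices of the k closest
        return Counter(y_train[nearest]).most_common(1)[0][0]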

Class 12 (Oct 5, 2020)

Continued the discussion of k-Nearest Neighbor methods.

Introduction to learning with Decision Trees.

Assigned Reading: Tom Mitchell's book: Sections 8.2.1, 8.2.2, 8.2.3 as well as 3.1 – 3.4.
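For reference, the quantities from Mitchell's Chapter 3 that drive the choice of splits in ID3: the entropy of a sample $S$ with class proportions $p_i$, and the information gain of an attribute $A$,

$$H(S) = -\sum_{i} p_i \log_2 p_i, \qquad \mathrm{Gain}(S, A) = H(S) - \sum_{v \in \mathrm{Values}(A)} \frac{|S_v|}{|S|}\, H(S_v).$$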

Class 13 (Oct 7, 2020)

Due today: Long reinforcement learning project checkpoint.

Assigned Reading: A Few Useful Things to Know About Machine Learning, by Pedro Domingos.

Continued our discussion on decision trees.

Overfitting and general approaches to dealing with it by defining validation and cross-validation datasets.

Dealing with overfitting in decision tree learning; methods for pruning.

Assigned Reading: Tom Mitchell's book: Sections 3.5 – 3.7.1.2.

Fri, Oct 9

Originally due today: Homework 3; postponed to Wednesday, October 14 after popular demand in class (on Sep 30).

Class 14 (Oct 12, 2020)

Due today: Short reinforcement learning project write-up and source code.

Last remarks on decision trees.

Perceptrons and the perceptron learning algorithm.

Linearly separable data and non-linearly separable data. Transformations that allow the perceptron to learn non-linearly separable data, and issues with these transformations.

The pocket algorithm.

Assigned Reading: Tom Mitchell's book: Sections 3.7.2 – end of Chapter 3, 4.1 – 4.4.2.

Optional Reading: Perceptron-based learning algorithms, by Stephen I. Gallant. (The paper for the pocket algorithm.)
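A minimal sketch of the perceptron update together with the pocket modification (keep the weight vector with the fewest training mistakes seen so far); labels in {-1, +1} are an assumption of the example:

    import numpy as np

    def pocket_perceptron(X, y, epochs=100):
        """Perceptron with the pocket modification (cf. Gallant).

        X: (n, d) array of inputs; y: array of labels in {-1, +1}.
        Returns the weight vector with the fewest training mistakes seen.
        """
        X = np.hstack([np.ones((len(X), 1)), X])   # prepend a bias coordinate
        w = np.zeros(X.shape[1])
        pocket, pocket_errors = w.copy(), np.inf
        for _ in range(epochs):
            for xi, yi in zip(X, y):
                if yi * (w @ xi) <= 0:             # misclassified point
                    w = w + yi * xi                # perceptron update
            errors = np.sum(np.sign(X @ w) != y)   # mistakes of the current w
            if errors < pocket_errors:             # better than the pocket?
                pocket, pocket_errors = w.copy(), errors
        return pocket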

Class 15 (Oct 14, 2020)

Due today: In-class long supervised learning project proposal.

Due today: Homework 3.

Assigned today: Homework 4.

Proposals for long supervised learning projects.

Some clarifications on past material.

Discussion on regression and revisiting the risk definition so that we can use the squared loss as a more meaningful loss function.

Class 16 (Oct 19, 2020)

Due today: Written long supervised learning project proposal.

Continued our discussion on loss functions. Discussion on cross-entropy loss, which is needed for the homework.

Regression and ordinary least squares algorithm that minimizes the empirical risk of the squared loss.

Assigned Reading: Elements of Statistical Learning: Section 3.2.
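For reference, ordinary least squares minimizes the empirical risk under squared loss and, when $X^\top X$ is invertible, has the closed-form solution

$$\hat{\beta} = \arg\min_{\beta}\, \lVert y - X\beta \rVert^2 = (X^\top X)^{-1} X^\top y.$$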

Class 17 (Oct 21, 2020)

Regression and minimization of the empirical risk based on squared loss, this time using gradient descent.

Stochastic gradient descent, and a comparison of the full and stochastic variants of gradient descent.

Assigned Reading: Tom Mitchell's book: Sections 4.4.3 – 4.4.4.
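A minimal sketch contrasting the two variants on the squared loss for linear regression (illustrative code with an assumed learning rate, not code from class):

    import numpy as np

    def batch_gd(X, y, lr=0.01, epochs=100):
        """Full (batch) gradient descent on the mean squared error."""
        w = np.zeros(X.shape[1])
        for _ in range(epochs):
            grad = 2 / len(X) * X.T @ (X @ w - y)   # gradient over all examples
            w -= lr * grad
        return w

    def sgd(X, y, lr=0.01, epochs=100, seed=0):
        """Stochastic gradient descent: one randomly chosen example per step."""
        w = np.zeros(X.shape[1])
        rng = np.random.default_rng(seed)
        for _ in range(epochs):
            for i in rng.permutation(len(X)):
                grad = 2 * X[i] * (X[i] @ w - y[i])  # gradient of a single term
                w -= lr * grad
        return w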

Started our discussion on logistic regression.

Class 18 (Oct 26, 2020)

Due today: Homework 4.

No class due to inclement weather.

Class 19 (Oct 28, 2020)

No class due to inclement weather.

Class 20 (Nov 2, 2020)

Assigned today: Homework 5.

Logistic regression.

Assigned Reading: Elements of Statistical Learning: Section 4.4.
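For reference, logistic regression models $P(y = 1 \mid x) = \sigma(w^\top x)$ with the sigmoid $\sigma(z) = 1 / (1 + e^{-z})$, and fitting it by maximum likelihood amounts to minimizing the cross-entropy loss discussed in Class 16:

$$\ell(w) = -\sum_{i=1}^{n} \Big[ y_i \log \sigma(w^\top x_i) + (1 - y_i) \log\big(1 - \sigma(w^\top x_i)\big) \Big].$$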

Class 21 (Nov 4, 2020)

Introduction to model selection via regularization – ridge regression, the lasso, and elastic nets.

Assigned Reading: Elements of Statistical Learning: Sections 3.4 – 3.4.3, 18.4.
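For reference, the three penalized least-squares objectives, with regularization strength $\lambda \ge 0$ and mixing parameter $\alpha \in [0, 1]$ for the elastic net:

$$\hat\beta^{\mathrm{ridge}} = \arg\min_\beta\, \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_2^2, \qquad \hat\beta^{\mathrm{lasso}} = \arg\min_\beta\, \lVert y - X\beta \rVert_2^2 + \lambda \lVert \beta \rVert_1,$$

$$\hat\beta^{\mathrm{EN}} = \arg\min_\beta\, \lVert y - X\beta \rVert_2^2 + \lambda \big( \alpha \lVert \beta \rVert_1 + (1 - \alpha) \lVert \beta \rVert_2^2 \big).$$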

Introduction to model assessment – residual standard error, $R^2$, adjusted $R^2$.

Class 22 (Nov 9, 2020)

Due today: Written long reinforcement learning project.

Continued on assessment methods: residual plots, $C_p$, AIC, BIC.

Bias-Variance tradeoff and decomposition for squared loss.
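For reference, at a fixed point $x_0$ and with irreducible noise variance $\sigma^2$, the expected squared error of an estimator $\hat f$ decomposes as

$$\mathbb{E}\big[(Y - \hat f(x_0))^2\big] = \sigma^2 + \big(\mathbb{E}[\hat f(x_0)] - f(x_0)\big)^2 + \mathbb{E}\Big[\big(\hat f(x_0) - \mathbb{E}[\hat f(x_0)]\big)^2\Big],$$

that is, noise plus squared bias plus variance.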

Discussion on overfitting and underfitting in order to motivate holdout and cross-validation methods.

Assigned Reading: Elements of Statistical Learning: Sections 7.1 – 7.7.

Class 23 (Nov 11, 2020)

Due today: Long supervised learning project checkpoint.

Due today: In-class short supervised learning project proposals.

Holdout method, k-fold cross-validation, leave-one-out cross-validation (LOOCV), stratified cross-validation.

Started our discussion on rare events (imbalanced datasets). We identified that low risk is no longer the right metric, especially as datasets become more and more imbalanced.

Assigned Reading: Elements of Statistical Learning: Section 7.10.
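A minimal sketch of k-fold cross-validation (illustrative code; the fit/score interface of model is an sklearn-style assumption, not something from class):

    import numpy as np

    def k_fold_cv(model, X, y, k=5, seed=0):
        """Estimate out-of-sample performance with k-fold cross-validation."""
        rng = np.random.default_rng(seed)
        folds = np.array_split(rng.permutation(len(X)), k)
        scores = []
        for i in range(k):
            test = folds[i]                               # the held-out fold
            train = np.concatenate([f for j, f in enumerate(folds) if j != i])
            model.fit(X[train], y[train])                 # fit on k-1 folds
            scores.append(model.score(X[test], y[test]))  # score on the rest
        return np.mean(scores)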

Class 24 (Nov 16, 2020)

Due today: Homework 5.

Assigned today: Homework 6.

Due today: Written short supervised learning project proposals.

Recall, precision, F1, ROC, AUC, optimal threshold.

Assigned Reading: Recall, Precision, F1, ROC, AUC, and everything, by Ofir Shalev.
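For reference, in terms of true positives (TP), false positives (FP), and false negatives (FN):

$$\text{precision} = \frac{TP}{TP + FP}, \qquad \text{recall} = \frac{TP}{TP + FN}, \qquad F_1 = \frac{2 \cdot \text{precision} \cdot \text{recall}}{\text{precision} + \text{recall}}.$$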

Neural networks and different activation functions.

Class 25 (Nov 18, 2020)

Backpropagation on neural networks.

Assigned Reading: Neural Networks and Deep Learning, by Michael Nielsen.

Optional Reading: The following articles by your TA:

Optional Reading: Elements of Statistical Learning: Sections 11.3 – 11.5.

Optional Reading: Also, you can have a look at Tom Mitchell's book, Sections 4.5 – 4.6.
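For reference, in the notation of Nielsen's book (weighted inputs $z^l$, activations $a^l = \sigma(z^l)$, cost $C$), the backpropagation equations are

$$\delta^L = \nabla_a C \odot \sigma'(z^L), \qquad \delta^l = \big( (w^{l+1})^\top \delta^{l+1} \big) \odot \sigma'(z^l),$$

$$\frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j, \qquad \frac{\partial C}{\partial b^l_j} = \delta^l_j.$$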

Class 26 (Nov 23, 2020)

Concluding the discussion on neural networks.

Regression using decision trees.

Bagging.

Assigned Reading: Elements of Statistical Learning: Sections 8.7 and 9.2.2.
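A minimal sketch of bagging for regression (illustrative; base_learner and its fit/predict interface are assumptions of the example):

    import numpy as np

    def bagging_predict(base_learner, X, y, X_new, B=50, seed=0):
        """Average the predictions of B models fit on bootstrap resamples."""
        rng = np.random.default_rng(seed)
        preds = []
        for _ in range(B):
            idx = rng.integers(0, len(X), size=len(X))  # sample with replacement
            model = base_learner()
            model.fit(X[idx], y[idx])
            preds.append(model.predict(X_new))
        return np.mean(preds, axis=0)  # averaging reduces variance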

Nov 25, 2020

Thanksgiving; no classes.

Class 27 (Nov 30, 2020)

Random forests.

Boosting.

Introduction to Naive Bayes.

Assigned Reading: Elements of Statistical Learning: Sections 6.6.3, 10.1 – 10.3, 15.1 – 15.2.

Assigned Reading: Section 6.9 from Tom Mitchell's book.
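For reference, the naive Bayes classifier (Mitchell, Section 6.9) predicts

$$\hat{y} = \arg\max_{y}\, P(y) \prod_{i=1}^{d} P(x_i \mid y),$$

using the "naive" assumption that the features $x_1, \dots, x_d$ are conditionally independent given the class.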

Class 28 (Dec 2, 2020)

Concluded our discussion on Naive Bayes.

Assigned Reading: Section 6.9 from Tom Mitchell's book. Optionally, see also Section 6.10 from Tom Mitchell's book.

Introduction to Support Vector Machines. Optimal hyperplane and margin.

Assigned Reading: Elements of Statistical Learning: Sections 12.1 – 12.2.
Note that your book defines the margin to be twice the distance between the decision boundary and the closest training example (positive or negative, it does not matter); see Figure 12.1. However, other books define the margin to be half of this quantity (and I tend to prefer this view), essentially corresponding to the distance between the support vectors and the decision boundary; this distance is then a cushion that every instance in our dataset enjoys before it is misclassified. For example, the following books follow this view:

Just keep this in mind when you discuss the topic with someone else, because even if you both use the same term (margin), you may end up describing quantities that differ by a factor of 2. At the end of the day it is just a definition, and how one defines this quantity is a matter of personal preference.
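Concretely, for a separating hyperplane $w^\top x + b = 0$ scaled so that the closest training points satisfy $|w^\top x_i + b| = 1$, the distance from the boundary to the closest example is $1 / \lVert w \rVert$; the margin in the sense of ESL is the full width $2 / \lVert w \rVert$ of the slab, while the other convention calls $1 / \lVert w \rVert$ itself the margin.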

Thu, Dec 3

Due today: Homework 6.

Class 29 (Dec 7, 2020)

Continuation of our discussion on support vector machines.

Maximal margin classifier.

Devoted 12 minutes, halfway through the class, to submitting course evaluations.

Class 30 (Dec 9, 2020)

Support vector classifier for non-linearly separable data.

Kernels and support vector machines.
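For reference, a kernel computes inner products in a feature space without forming the feature map $\phi$ explicitly, $K(x, x') = \langle \phi(x), \phi(x') \rangle$; two common examples are the polynomial kernel $K(x, x') = (1 + x^\top x')^d$ and the RBF (Gaussian) kernel $K(x, x') = \exp\big(-\lVert x - x' \rVert^2 / (2\sigma^2)\big)$.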

Discussion on computational learning theory (the course and its content).

Ask me anything.

Thursday, December 17, 2020 (4:30pm – 6:30pm)

Due today: Supervised learning project (whether short or long) write-up and source code.

Normally this would be the date and time of the final exam. However, we will not have a final exam as the class has a semester-long project.
