
CS 4033/5033 – Machine Learning (Spring 2022)

The class is cross-listed as CS 4033 and CS 5033, so that both undergraduate and graduate students can enroll simultaneously. No student may earn credit for both 4033 and 5033.

Table of Contents

Course Description
Basic Information
Important Coronavirus-Related Information
Homework Assignments
Projects
Milestones
Machine Learning Resources
Class Log

Course Description

Topics include decision trees, relational learning, neural networks, Bayesian learning, reinforcement learning, multiple-instance learning, feature selection, learning appropriate representations, clustering, and kernel methods.

[Course Description] [Table of Contents] [Top]

Basic Information

Syllabus

The syllabus is available here.

Time and Location

Mondays, Wednesdays, and Fridays, 11:30am – 12:20pm, Devon Energy Hall 120.

Contact Information

Please see here.

Teaching Assistants

The teaching assistant for the class is Taras Basiuk.

Office Hours

We will be holding our office hours at the following times.

Mondays
2:00pm – 3:00pm, 244 Devon Energy Hall & Online (Dimitris)
Tuesdays
10:00am – 11:00am, Online (Taras)
Wednesdays
2:00pm – 3:00pm, 244 Devon Energy Hall & Online (Dimitris)
Thursdays
10:00am – 11:00am, Online (Taras)

Zoom information about office hours is available on the Syllabus page on Canvas.

Exceptions to the Regular Schedule of Office Hours

If you want to meet me outside of my office hours, please send me an email to arrange an appointment.

Exceptions to the Regular Schedule of Office Hours for the TAs

As exceptions appear along the way, they will also be announced here.

[Basic Information] [Table of Contents] [Top]

Important Coronavirus-Related Information

We have the following links.

[Important Coronavirus-Related Information] [Table of Contents] [Top]

Homework Assignments

Assignment 1: Announced on Mon, Jan 24, 2022. Due on Wed, Feb 2, 2022.

Assignment 2: Announced on Mon, Feb 7, 2022. Due on Mon, Feb 21, 2022.

Assignment 3: Announced on Mon, Feb 28, 2022. Due on Fri, Mar 11, 2022.

Assignment 4: Announced on Fri, Mar 11, 2022. Due on Wed, Mar 30, 2022.

Assignment 5: Announced on Wed, Mar 30, 2022. Due on Mon, Apr 11, 2022.

Assignment 6: Announced on Mon, Apr 11, 2022. Due on Sun, May 1, 2022.

[Homework Assignments] [Table of Contents] [Top]

Projects

Information related to the projects will show up here.

Ideas for projects

Below are some ideas for your projects.

Reinforcement Learning Ideas
Supervised Learning Ideas

[Projects] [Table of Contents] [Top]

Milestones

Week 2: Homework 1 is announced (beginning of week).

Week 3: Homework 1 is due (mid-week). In-class presentations for the reinforcement learning project (end of week).

Week 4: Homework 2 is announced (beginning of week). Project written proposal is due (beginning of week).

Week 6: Homework 2 is due (beginning of week). Project checkpoint is due (end of week).

Week 7: Homework 3 is announced (beginning of week).

Week 8: Homework 3 is due and homework 4 is announced (end of week).

Week 9 (Spring Break): Reinforcement learning project is due at the end of week.

Week 10: In-class presentations for the supervised learning project (end of week).

Week 11: Homework 4 is due and homework 5 is announced (mid-week). Project written proposal is due (beginning of week).

Week 13: Homework 5 is due and homework 6 is announced (beginning of week). Project checkpoint is due (end of week).

Week 15: Homework 6 is due (end of week).

Week 16: Supervised learning project is due (end of week).

[Milestones] [Table of Contents] [Top]

Machine Learning Resources

Books

The two books that we plan to use for the course are available for free in electronic format at the following links:

The Elements of Statistical Learning, by Trevor Hastie, Robert Tibshirani, and Jerome Friedman.
Reinforcement Learning: An Introduction, by Richard S. Sutton and Andrew G. Barto.

Another book that I like a lot and recommend to people who are starting with machine learning is the following one:

Machine Learning, by Tom Mitchell.

Personal Notes

Notes by Others

Papers

[Machine Learning Resources] [Table of Contents] [Top]

Class Log

A log for the class is kept online here.

Week 1

Class 1 (Jan 19, 2022)

About this Course.

Discussion on syllabus and policies.

Class 2 (Jan 21, 2022)

Discussion on projects. Introduction to Machine Learning.

Pretest in class.

Assigned Reading: Elements of Statistical Learning (ESL), Chapter 1.

Assigned Reading: Sutton & Barto: Chapters 1 and 2.

Assigned today: Think about short and long projects. Think about the topic for your RL project.

Week 2

Class 3 (Jan 24, 2022)

Assigned today: Homework 1.

Introduction to reinforcement learning.

Basic ingredients of RL methods: policy, value function, model.

Class 4 (Jan 26, 2022)

Discussion on the projects, deadlines, and various expectations, as well as where to find certain information on Canvas.

Continued our discussion on introduction to reinforcement learning.

Exploration vs Exploitation. The multi-armed bandit problem from the book (Chapter 2).
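
To make the bandit discussion concrete, here is a minimal sketch in Python (numpy) of the $\varepsilon$-greedy sample-average method. The 10-armed Gaussian testbed below is an illustrative setup in the spirit of Chapter 2; all constants are assumptions for the example.

import numpy as np

rng = np.random.default_rng(0)
k = 10                                   # number of arms (illustrative)
true_means = rng.normal(0.0, 1.0, k)     # hidden reward means of the testbed
Q = np.zeros(k)                          # sample-average action-value estimates
N = np.zeros(k)                          # pull counts
epsilon = 0.1

for t in range(10_000):
    if rng.random() < epsilon:
        a = int(rng.integers(k))         # explore: pick an arm uniformly at random
    else:
        a = int(np.argmax(Q))            # exploit: pick the greedy arm
    r = rng.normal(true_means[a], 1.0)   # observe a noisy reward
    N[a] += 1
    Q[a] += (r - Q[a]) / N[a]            # incremental sample-average update

print("best arm:", int(np.argmax(true_means)), "| greedy arm:", int(np.argmax(Q)))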

Class 5 (Jan 28, 2022)

The prediction problem and the control problem.

Markov Decision Processes (MDPs).

Discussion on the Bellman Expectation Equations. Backup diagrams and solution of the prediction problem using linear algebra. Revisited the recycling robot example and showed how we can evaluate the policy that picks each available action with equal probability at each of the two energy states of the robot.
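
Since the prediction problem for a fixed policy reduces to the linear system $v_\pi = r_\pi + \gamma P_\pi v_\pi$, it can be solved directly with linear algebra. A minimal sketch with numpy; the two-state transition matrix and rewards below are made up for illustration (they are not the recycling robot's actual numbers):

import numpy as np

gamma = 0.9
# P[s, s'] = probability of moving from state s to s' under the fixed policy pi
P = np.array([[0.7, 0.3],
              [0.4, 0.6]])
# r[s] = expected immediate reward when acting from state s under pi
r = np.array([1.0, -0.5])

# Bellman expectation equation in matrix form: v = r + gamma * P v,
# i.e., solve (I - gamma * P) v = r for the state-value function v_pi.
v = np.linalg.solve(np.eye(2) - gamma * P, r)
print(v)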

Week 3

Class 6 (Jan 31, 2022)

Bellman optimality equations and the control problem.

Assigned Reading: Sutton & Barto: Chapter 3.

Introduction to dynamic programming methods.

Class 7 (Feb 2, 2022)

Due today: Homework 1.

Proposals for reinforcement learning projects; in-class as well as remote presentations.

Class 8 (Feb 4, 2022)

Proposals for reinforcement learning projects; in-class as well as remote presentations.

Week 4

Class 9 (Feb 7, 2022)

Assigned today: Homework 2.

Due today: Written proposal for the reinforcement learning project; whether short or long.

Concluded our discussion on dynamic programming. Discussed value iteration and made some final remarks on dynamic programming (complexity, asynchronous backups, etc.).

Assigned Reading: Sutton & Barto: Chapter 4.

Started our discussion on model-free methods that are used for prediction.

Overview of Monte Carlo and Temporal Difference learning.

First-visit and every-visit Monte Carlo methods. Application to Blackjack.
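
Here is a minimal sketch of first-visit Monte Carlo prediction in Python. The episode format, a list of (state, reward) pairs where the reward is the one received on leaving that state, is an assumption for the example:

from collections import defaultdict

def first_visit_mc(episodes, gamma=1.0):
    """Estimate v_pi from complete episodes; each episode is a list of (state, reward)."""
    returns = defaultdict(list)
    for episode in episodes:
        G = 0.0
        for t in range(len(episode) - 1, -1, -1):       # walk backwards, accumulating G
            s, r = episode[t]
            G = gamma * G + r
            if s not in {st for st, _ in episode[:t]}:  # record G only at the first visit of s
                returns[s].append(G)
    return {s: sum(gs) / len(gs) for s, gs in returns.items()}

# One short episode: A (reward 1) -> B (reward 0) -> A (reward 2), then terminate
print(first_visit_mc([[("A", 1.0), ("B", 0.0), ("A", 2.0)]]))   # {'B': 2.0, 'A': 3.0}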

Class 10 (Feb 9, 2022)

Iterative calculation of empirical average.

Temporal difference learning. Application to the "Return Home" example.

Comparison of Monte Carlo and Temporal Difference learning.

Assigned Reading: Sutton & Barto: Sections 5.1, 5.2, 6.1, 6.2, 6.3.
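
The tabular TD(0) update is a one-liner once the bookkeeping is in place. A minimal sketch, assuming a hypothetical environment object whose reset() returns a state and whose step(action) returns (next_state, reward, done); that interface is an assumption for illustration, not something from the book:

from collections import defaultdict

def td0_prediction(env, policy, episodes=1000, alpha=0.1, gamma=1.0):
    """Tabular TD(0) estimate of v_pi; policy maps a state to an action."""
    V = defaultdict(float)
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            s_next, r, done = env.step(policy(s))
            target = r + (0.0 if done else gamma * V[s_next])
            V[s] += alpha * (target - V[s])   # move V(s) toward r + gamma * V(s')
            s = s_next
    return V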

Class 11 (Feb 11, 2022)

Concluded our discussion on comparing Monte Carlo and Temporal Difference learning.

n-Step Returns and Eligibility Traces. Forward view, backward view, and equivalence.

Assigned Reading: Sutton & Barto: Sections 7.1, 7.2, 12.1, 12.2.

Week 5

Class 12 (Feb 14, 2022)

The control problem. Using an $\varepsilon$-greedy policy to guarantee enough exploration of the state space, so that we can compute accurate estimates of the optimal value functions.

Solution with an on-policy Monte Carlo approach.

Assigned Reading: Sutton & Barto: Sections 5.3, 5.4.

Class 13 (Feb 16, 2022)

Discussion on information about the class and the second homework.

Continued our discussion on solving the control problem. This time we used the idea of TD learning which leads to Sarsa and we also discussed the extension to Sarsa($\lambda$).

Assigned Reading: Sutton & Barto: Sections 6.4, 6.5, 12.7.
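
A minimal sketch of tabular Sarsa with an $\varepsilon$-greedy behavior policy, under the same assumed reset()/step() environment interface as in the TD(0) sketch above:

import random
from collections import defaultdict

def sarsa(env, actions, episodes=5000, alpha=0.1, gamma=1.0, epsilon=0.1):
    """Tabular Sarsa: on-policy TD control with an epsilon-greedy behavior policy."""
    Q = defaultdict(float)

    def eps_greedy(s):
        if random.random() < epsilon:
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(s, a)])

    for _ in range(episodes):
        s = env.reset()
        a = eps_greedy(s)
        done = False
        while not done:
            s2, r, done = env.step(a)
            a2 = eps_greedy(s2)                                  # the action actually taken next
            target = r + (0.0 if done else gamma * Q[(s2, a2)])  # on-policy target
            Q[(s, a)] += alpha * (target - Q[(s, a)])
            s, a = s2, a2
    return Q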

Class 14 (Feb 18, 2022)

Discussion on solving the control problem using an off-policy method: Q-Learning.

Discussion on function approximation.
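
For comparison with the Sarsa sketch, here is a minimal tabular Q-learning sketch (same assumed environment interface). The only substantive difference is the target: Q-learning bootstraps from the greedy action in the next state, regardless of the action the behavior policy actually takes, which is what makes it off-policy.

import random
from collections import defaultdict

def q_learning(env, actions, episodes=5000, alpha=0.1, gamma=1.0, epsilon=0.1):
    """Tabular Q-learning: behavior is epsilon-greedy, but the target is greedy."""
    Q = defaultdict(float)
    for _ in range(episodes):
        s = env.reset()
        done = False
        while not done:
            if random.random() < epsilon:
                a = random.choice(actions)
            else:
                a = max(actions, key=lambda x: Q[(s, x)])
            s2, r, done = env.step(a)
            best_next = max(Q[(s2, x)] for x in actions)   # greedy (off-policy) target
            Q[(s, a)] += alpha * (r + (0.0 if done else gamma * best_next) - Q[(s, a)])
            s = s2
    return Q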

Week 6

Class 15 (Feb 21, 2022)

Due today: Homework 2.

Finished our discussion on function approximation.

How we can do linear function approximation and solve the prediction and the control problems using our basic methods.

Simple ways to construct features: state aggregation, coarse coding, tile coding. The book has more examples.

Discussion of some examples with function approximation. Among them, Sarsa with linear function approximation on the Mountain Car problem.

Assigned Reading: Sutton & Barto: Sections 9.1 – 9.5, 10.1, 10.2.
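
As a small illustration of linear function approximation, here is a sketch of the semi-gradient TD(0) update with state-aggregation features; the 1000-state/10-group setup is an illustrative assumption in the spirit of the random-walk example from the book:

import numpy as np

n_states, n_groups = 1000, 10
alpha, gamma = 0.1, 1.0
w = np.zeros(n_groups)            # one learned weight per group of states

def features(s):
    """State aggregation: one-hot indicator of the group that state s falls into."""
    x = np.zeros(n_groups)
    x[s * n_groups // n_states] = 1.0
    return x

def v_hat(s):
    return features(s) @ w        # linear value estimate: v_hat(s) = x(s) . w

def td0_update(s, r, s_next, done):
    """Semi-gradient TD(0): the gradient of v_hat(s) w.r.t. w is just features(s)."""
    global w
    target = r + (0.0 if done else gamma * v_hat(s_next))
    w += alpha * (target - v_hat(s)) * features(s)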

Class 16 (Feb 23, 2022)

Assigned today: Homework 3.

Introduction to supervised learning.

What is supervised learning? Regression and classification problems. The role of inductive bias.

Definitions and terminology that we will be using throughout the course for supervised learning.

Assigned Reading: ESL ...

Class 17 (Feb 25, 2022)

Due today: Reinforcement learning checkpoint. Deadline pushed back by two days.

No class due to inclement weather.

Sunday, February 27, 2022 (11:59pm)

Due today: Reinforcement learning checkpoint.

Following popular demand, I am pushing the deadline from Friday, February 25, to Sunday, February 27.

Week 7

Class 18 (Feb 28, 2022)

Revisited our discussion from last time.

We discussed slide 17, which was added to the module slides since last time, and which allows us to visualize the difference between our hypothesis (model) $h$ and the ground truth $c$.

Further discussion on the notion of the hypothesis space $\mathcal{H}$. The different algorithms that we will see in supervised learning largely define how we perform the search in this space and come up with a hypothesis $h$ that we believe approximates the ground truth $c$ well.

Introduction to nearest neighbor learning. Example with flowers based on sepal length and sepal width.

1-Nearest Neighbor classification and connection to the Voronoi Diagram (this is a topic discussed in Computational Geometry classes).

Assigned Reading: Nearest neighbors: ESL 2.3.2, 13.1, 13.2. Alternatively, Mitchell 8.1, 8.2 – 8.2.3.
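
A minimal 1-nearest-neighbor sketch in Python (numpy) with Euclidean distance; the tiny sepal-measurement dataset below is made up for illustration:

import numpy as np

def nn1_predict(X_train, y_train, x):
    """1-NN: return the label of the closest training point under Euclidean distance."""
    distances = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(distances)]

# Tiny made-up dataset: (sepal length, sepal width) -> flower label
X = np.array([[5.1, 3.5], [4.9, 3.0], [6.7, 3.1], [6.3, 2.5]])
y = np.array(["setosa", "setosa", "versicolor", "versicolor"])
print(nn1_predict(X, y, np.array([5.0, 3.4])))   # -> setosa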

Class 19 (Mar 2, 2022)

Continued our discussion on Nearest Neighbors. Different metrics. Application to regression problems. Distance-weighted nearest neighbor method.

Naive Bayes for classification. Example on PlayTennis.

The m-estimate and dealing with various corner cases.

Assigned Reading: Naive Bayes: ESL 6.6.3. Alternatively, Mitchell 6.9, 6.10.
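
Here is a minimal sketch of Naive Bayes for categorical features with m-estimate smoothing, $P(x_i = v \mid c) = (n_{c,v} + m p) / (n_c + m)$ with a uniform prior $p$ over the feature's values. The toy PlayTennis-style data and the helper names are illustrative assumptions:

from collections import Counter, defaultdict

def train_nb(examples, m=1.0):
    """examples: list of (feature_dict, label). Returns a predict function."""
    n = len(examples)
    label_counts = Counter(lbl for _, lbl in examples)
    counts = Counter()                 # counts[(feature, value, label)]
    values = defaultdict(set)          # values[feature] = set of observed values
    for feats, lbl in examples:
        for f, v in feats.items():
            counts[(f, v, lbl)] += 1
            values[f].add(v)

    def likelihood(f, v, c):
        p = 1.0 / len(values[f])       # uniform prior over the feature's values
        return (counts[(f, v, c)] + m * p) / (label_counts[c] + m)   # the m-estimate

    def predict(feats):
        def score(c):
            s = label_counts[c] / n    # class prior
            for f, v in feats.items():
                s *= likelihood(f, v, c)
            return s
        return max(label_counts, key=score)

    return predict

data = [({"outlook": "sunny", "wind": "weak"}, "no"),
        ({"outlook": "overcast", "wind": "weak"}, "yes"),
        ({"outlook": "rain", "wind": "strong"}, "no"),
        ({"outlook": "rain", "wind": "weak"}, "yes")]
print(train_nb(data)({"outlook": "sunny", "wind": "strong"}))   # -> no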

Class 20 (Mar 4, 2022)

Naive Bayes for document classification.

Discussion on creating features for document classification: bag-of-words, n-grams, TF-IDF.

Gaussian Naive Bayes (dealing with continuous-valued attributes) and other variants of Naive Bayes for classification.

Naive Bayes is not used for regression problems.

Loss functions: 0-1 loss, square loss.

The quality of our solutions: risk and empirical risk.

Assigned Reading: Please pay attention to the slides for the discussion on loss functions, risk, empirical risk, the ERM principle, etc.

Week 8

Class 21 (Mar 7, 2022)

Discussion on risk and empirical risk, and what these quantities look like when we use the 0-1 loss $\ell_{\text{0-1}}$ or the square loss $\ell_{\text{sq}}$. The Empirical Risk Minimization (ERM) principle.

Some remarks on terminology and notation used in statistics and in computer science.

Started our discussion on linear models. Perceptrons, decision boundary, discussion on the representational power of various functions. The update rule that is used for learning using perceptrons.

Assigned Reading: Perceptrons: ESL 4.5, 4.5.1, optional 4.5.2. Alternatively, Mitchell 4.1 – 4.4.2.
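
A minimal sketch of the perceptron update rule in Python (numpy); labels are in $\{-1, +1\}$ and the bias is folded in as a constant feature. The tiny dataset is made up and linearly separable:

import numpy as np

def perceptron(X, y, epochs=100):
    """Perceptron learning rule; labels y in {-1, +1}."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # append a constant feature for the bias
    w = np.zeros(Xb.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for x, t in zip(Xb, y):
            if t * (w @ x) <= 0:       # misclassified (or exactly on the boundary)
                w += t * x             # the update rule: w <- w + y * x
                mistakes += 1
        if mistakes == 0:              # a full pass with no mistakes: done
            break
    return w

# Tiny made-up linearly separable dataset
X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -1.0], [-2.0, 1.0]])
y = np.array([1, 1, -1, -1])
print(perceptron(X, y))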

Class 22 (Mar 9, 2022)

Continued our discussion on perceptrons and the perceptron learning algorithm.

Linearly separable and non-linearly separable data. Transformations that allow the perceptron to learn non-linearly separable data, and issues with these transformations. The pocket algorithm.

Linear regression. Ordinary least squares solution.

Started our discussion on solving linear regression problems using gradient descent.

Assigned Reading: Linear regression: ESL 3.1, 3.2. Alternatively, Mitchell 4.4.3 – 4.4.4.
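
To tie the two views together, here is a sketch that fits the same made-up data both with the ordinary least squares closed form and with batch gradient descent on the mean squared error (all constants are illustrative assumptions):

import numpy as np

rng = np.random.default_rng(0)
X = np.hstack([rng.normal(size=(50, 1)), np.ones((50, 1))])   # one feature + intercept column
w_true = np.array([2.0, -1.0])
y = X @ w_true + 0.1 * rng.normal(size=50)                    # noisy linear data

# Ordinary least squares: solve the normal equations X^T X w = X^T y
w_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Batch gradient descent on the mean squared error
w_gd = np.zeros(2)
lr = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ w_gd - y) / len(y)   # gradient of the mean squared error
    w_gd -= lr * grad

print(w_ols, w_gd)   # both should be close to w_true = [2, -1]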

Class 23 (Mar 11, 2022)

Due today: Homework 3. Deadline pushed back by two days.

Introduction to logistic regression. How is it different from other linear models?

Introduction of the logistic loss.

Assigned Reading: Logistic regression: ESL 4.4.
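
A minimal sketch of logistic regression trained by gradient descent on the average logistic loss; labels are in $\{0, 1\}$, and the tiny one-feature dataset is made up for illustration:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_logistic(X, y, lr=0.1, steps=5000):
    """Gradient descent on the average logistic loss; labels y in {0, 1}."""
    Xb = np.hstack([X, np.ones((len(X), 1))])   # fold the bias in as a constant feature
    w = np.zeros(Xb.shape[1])
    for _ in range(steps):
        p = sigmoid(Xb @ w)                     # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)       # gradient of the average logistic loss
    return w

# Tiny made-up one-feature dataset
X = np.array([[0.5], [1.5], [3.0], [4.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
w = fit_logistic(X, y)
print(sigmoid(np.hstack([X, np.ones((4, 1))]) @ w))   # probabilities increase with the feature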

Sunday, March 13, 2022

Due today: Homework 3.

Week 9

Mar 14, 2022

Spring break; no classes.

Mar 16, 2022

Spring break; no classes.

Mar 18, 2022

Due today: Reinforcement learning project. Deadline pushed back by two days.

Spring break; no classes.

Sunday, March 20, 2022

Due today: Reinforcement learning project.

Week 10

Class 24 (Mar 21, 2022)

Concluded our discussion on the logistic loss.

Almost finished our discussion on logistic regression.

Class 25 (Mar 23, 2022)

Discussion on termination criteria for logistic regression using gradient descent.

Proposals for supervised learning projects; in-class as well as remote presentations.

Class 26 (Mar 25, 2022)

Proposals for supervised learning projects; in-class as well as remote presentations.

Week 11

Class 27 (Mar 28, 2022)

Introduction to regularization and stability. Structural Risk Minimization (SRM) and Regularized Loss Minimization (RLM).

Regression and regularization. Ridge regression, lasso regression, and the elastic net. Why the lasso is more likely to produce a sparse model than ridge regression.

Started our discussion on assessing the goodness of fit of linear models. Discussed: Residual Standard Error (RSE), the $R^2$ statistic, the adjusted $R^2$ statistic, and residual plots.
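
For reference, ridge regression has the closed form $w = (X^\top X + \lambda I)^{-1} X^\top y$; the lasso has no such closed form (one typically uses coordinate descent or proximal methods), which is why only ridge is sketched here. A minimal sketch in numpy:

import numpy as np

def ridge(X, y, lam=1.0):
    """Ridge regression via the regularized normal equations.
    Caveat: this penalizes every coefficient; in practice one usually centers
    the data or excludes the intercept from the penalty."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)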

Class 28 (Mar 30, 2022)

Due today: Homework 4.

Assigned today: Homework 5.

Finished our discussion on assessing the goodness of fit of linear models. Discussed: the $C_p$, AIC, and BIC criteria.

Bias – variance tradeoff. Underfitting and overfitting and the general ideas behind model selection.

Holdout method using 2-way and 3-way partitioning of the given dataset.

Started discussion on cross-validation.

Assigned Reading: A Few Useful Things to Know About Machine Learning, by Pedro Domingos.
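
A minimal sketch of the index bookkeeping behind k-fold cross-validation; the model with fit/score methods in the usage comment is a hypothetical placeholder:

import numpy as np

def k_fold_indices(n, k, seed=0):
    """Split the indices 0..n-1 into k folds; yield (train_idx, val_idx) pairs."""
    idx = np.random.default_rng(seed).permutation(n)
    folds = np.array_split(idx, k)
    for i in range(k):
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, folds[i]

# Hypothetical usage with a placeholder model exposing fit/score:
# scores = []
# for train, val in k_fold_indices(len(X), k=5):
#     model.fit(X[train], y[train])
#     scores.append(model.score(X[val], y[val]))
# print(sum(scores) / len(scores))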

Class 29 (Apr 1, 2022)

For the largest part, we had a nice discussion on issues that arise in the supervised learning algorithms we have covered so far: the perceptron and the bound on its number of mistakes, stability, PAC guarantees, and distributional assumptions.

Concluded our discussion on cross-validation. Leave-One-Out-Cross-Validation (LOOCV). Stratified cross-validation.

Week 12

Class 30 (Apr 4, 2022)

Metrics Beyond Low Risk.

A nice decomposition of the instance space, and ultimately of the predictions that we make on a dataset, into false positives, false negatives, true positives, and true negatives (viewed either as regions in the instance space, or as counts of examples from the dataset).

The confusion matrix.

Complex performance measures based on: recall, precision, specificity. Balanced accuracy and F1-score.
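
The measures above all fall out of the four confusion-matrix counts. A minimal sketch in Python (guards against division by zero are omitted for brevity):

def binary_metrics(y_true, y_pred):
    """Confusion-matrix counts and the derived measures; labels are 0/1."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    recall = tp / (tp + fn)              # sensitivity, true positive rate
    precision = tp / (tp + fp)
    specificity = tn / (tn + fp)         # true negative rate
    return {"recall": recall,
            "precision": precision,
            "specificity": specificity,
            "balanced accuracy": (recall + specificity) / 2,
            "F1": 2 * precision * recall / (precision + recall)}

print(binary_metrics([1, 1, 0, 0, 1, 0], [1, 0, 0, 1, 1, 0]))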

Class 31 (Apr 6, 2022)

Receiver Operating Characteristic (ROC) curve.

Introduction to neural networks.

Class 32 (Apr 8, 2022)

Continued our discussion on neural networks.

Popular activation functions (and their derivatives).

The backpropagation algorithm.

Explanation of the phenomenon of vanishing gradients.
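
To make backpropagation concrete before next time's derivation, here is a minimal two-layer network with sigmoid activations trained on XOR with the square loss; the architecture, learning rate, and iteration count are all illustrative assumptions:

import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)     # XOR targets

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)      # hidden layer: 8 sigmoid units
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)      # output layer: 1 sigmoid unit
lr = 1.0

for _ in range(10_000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass for the square loss; note sigmoid'(z) = s * (1 - s)
    d_out = (out - y) * out * (1 - out)             # error signal at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)              # error propagated back to the hidden layer
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2).ravel())   # typically approaches [0, 1, 1, 0]; initialization matters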

Week 13

Class 33 (Apr 11, 2022)

Due today: Homework 5.

Assigned today: Homework 6.

Derivation of the backpropagation rule.

More related to neural networks.

Class 34 (Apr 13, 2022)

An illustrative example on artificial neural networks. Design decisions and results.

Class 35 (Apr 15, 2022)

Final remarks on neural networks.

Introduction to decision trees.

Sunday, April 17, 2022

Due today: Supervised Learning Project Checkpoint.

Checkpoint pushed back to Wednesday, April 20.

Week 14

Class 36 (Apr 18, 2022)

Continuation of decision trees.

Class 37 (Apr 20, 2022)

Due today: Supervised Learning Project Checkpoint.

Continuation of decision trees.

Class 38 (Apr 22, 2022)

Conclusion of decision trees.

Week 15

Class 39 (Apr 25, 2022)

Introduction to ensembles.

Bootstrap and bagged predictors.
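
A minimal sketch of the bootstrap and of a bagged regressor that averages predictions over bootstrap replicates; fit is a hypothetical placeholder for any base learner that returns a predict callable. (Random forests, next time, add per-split feature subsampling on top of this averaging.)

import numpy as np

def bootstrap_sample(X, y, rng):
    """Draw n examples with replacement: a bootstrap replicate of the dataset."""
    idx = rng.integers(0, len(X), size=len(X))
    return X[idx], y[idx]

def bagged_predict(X_train, y_train, X_test, fit, B=100, seed=0):
    """Train B base learners on bootstrap replicates and average their predictions.
    fit(X, y) is a hypothetical placeholder that returns a predict(X) callable."""
    rng = np.random.default_rng(seed)
    preds = [fit(*bootstrap_sample(X_train, y_train, rng))(X_test) for _ in range(B)]
    return np.mean(preds, axis=0)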

Class 40 (Apr 27, 2022)

Conclusion of ensembles (random forests, boosting methods).

Introduction to support vector machines.

Class 41 (Apr 29, 2022)

Introduction to Support Vector Machines. Optimal hyperplane and margin.

Assigned Reading: Elements of Statistical Learning: Sections 12.1 – 12.2.
Note that your book defines the margin to be twice the distance between the decision boundary and the closest training example (positive or negative, it does not matter); see Figure 12.1. However, other books define the margin to be half of this quantity (and I tend to prefer this view), essentially corresponding to the distance between the support vectors and the decision boundary; this distance is a cushion that exists for all the instances in our dataset before they are misclassified. Several other books, for example, follow this view.

Just keep this in mind when you discuss with someone else: even if you use the same term (margin), you may end up describing a quantity that is off by a factor of 2. At the end of the day it is just a definition, and it is more a matter of personal preference how one defines this quantity.

Sunday, May 1, 2022

Due today: Homework 6.

Week 16

Class 42 (May 2, 2022)

Guest lecture on clustering: Prof. Lakshmivarahan.

Class 43 (May 4, 2022)

Guest lecture on clustering: Prof. Lakshmivarahan.

Class 44 (May 6, 2022)

Mention kernels for support vector machines.

Devote some time in class to the student evaluations.

Advertisement of computational learning theory.

Ask me anything.

Sunday, May 8, 2022

Due today: Supervised learning project (whether short or long) write-up and source code.

Wednesday, May 11, 2022 (1:30pm – 3:30pm)

Normally this would be the date and time of the final exam. However, we will not have a final exam as the class has a semester-long project.

Due today (by 3:30pm): Supervised learning project (whether short or long) write-up and source code.

[Class Log] [Table of Contents] [Top]