ML

Classification And Regression Tree (CART)

Tree-based methods. CART (Classification And Regression Tree) grows a binary tree: at each node, the data are "split" into two "daughter" nodes, with splits chosen according to a splitting criterion. The bottom nodes are "terminal" nodes.
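
A minimal sketch of growing such a tree with scikit-learn (the Gini criterion, `max_depth=2`, and the iris data are illustrative assumptions, not from the post):

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)

# Grow a binary tree: at each node the data are split into two
# "daughter" nodes; splits are chosen by a criterion (Gini here)
tree = DecisionTreeClassifier(criterion="gini", max_depth=2).fit(X, y)

# The bottom ("terminal") nodes carry the class predictions
print(export_text(tree))
```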

2020-10-27

K Nearest Neighbors

Classification models.
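
A minimal sketch of k-NN classification with scikit-learn (the dataset and `n_neighbors=5` are assumptions for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Each test point is classified by majority vote among its k nearest
# training points (Euclidean distance by default)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
print(knn.score(X_test, y_test))  # test-set accuracy
```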

2020-07-13

Kernelized Ridge Regression

Kernel regression. Kernel identities: let
$$ \boldsymbol{\Phi}_{X}=\left[\begin{array}{c} \boldsymbol{\phi}\left(\boldsymbol{x}_{1}\right)^{T} \\ \vdots \\ \boldsymbol{\phi}\left(\boldsymbol{x}_{N}\right)^{T} \end{array}\right] \in \mathbb{R}^{N \times d}, \qquad \boldsymbol{\Phi}_{X}^{T} = \left[ \boldsymbol{\phi}\left(\boldsymbol{x}_{1}\right), \dots, \boldsymbol{\phi}\left(\boldsymbol{x}_{N}\right) \right] \in \mathbb{R}^{d \times N} $$
then the following identities hold:
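
The excerpt cuts off before listing the identities; assuming the post follows the standard kernelized ridge derivation with ridge parameter $\lambda$, the usual ones are:

$$ \boldsymbol{K} = \boldsymbol{\Phi}_{X} \boldsymbol{\Phi}_{X}^{T} \in \mathbb{R}^{N \times N}, \qquad K_{ij} = \boldsymbol{\phi}\left(\boldsymbol{x}_{i}\right)^{T} \boldsymbol{\phi}\left(\boldsymbol{x}_{j}\right) = k\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right) $$

$$ \left(\boldsymbol{\Phi}_{X}^{T} \boldsymbol{\Phi}_{X} + \lambda \boldsymbol{I}_{d}\right)^{-1} \boldsymbol{\Phi}_{X}^{T} = \boldsymbol{\Phi}_{X}^{T} \left(\boldsymbol{\Phi}_{X} \boldsymbol{\Phi}_{X}^{T} + \lambda \boldsymbol{I}_{N}\right)^{-1} $$

The second (push-through) identity lets the ridge weights be written as $\boldsymbol{w} = \boldsymbol{\Phi}_{X}^{T} \left(\boldsymbol{K} + \lambda \boldsymbol{I}_{N}\right)^{-1} \boldsymbol{y}$, so only inner products $k\left(\boldsymbol{x}_{i}, \boldsymbol{x}_{j}\right)$ are ever needed.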

2020-07-13

Polynomial Regression (Generalized linear regression models)

💡 Idea: use a linear model to fit nonlinear data: add powers of each feature as new features, then train a linear model on this extended set of features. Generalizing Linear Regression to Polynomial Regression: in Linear Regression, $f$ is modelled as linear in $\boldsymbol{x}$ and $\boldsymbol{w}$
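
A minimal sketch of this idea with scikit-learn (degree 2 and the toy quadratic data are assumptions for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

# Toy nonlinear data: y = 0.5 x^2 + x + 2 + noise
rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(100, 1))
y = 0.5 * X[:, 0] ** 2 + X[:, 0] + 2 + rng.normal(scale=0.5, size=100)

# Add powers of each feature as new features, then train a linear
# model on the extended feature set
model = make_pipeline(PolynomialFeatures(degree=2, include_bias=False),
                      LinearRegression())
model.fit(X, y)
print(model.predict([[1.5]]))  # prediction for x = 1.5
```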

2020-07-13

Linear Regression

Linear Regression Model. A linear model makes a prediction $\hat{y}_i$ by simply computing a weighted sum of the input $\boldsymbol{x}_i$, plus a constant $w_0$ called the bias term. For a single sample/instance:
$$ \hat{y}_i = f \left( \boldsymbol{x}_i \right) = w_0 + \sum_{j=1}^{D} w_{j} x_{i, j} $$
In matrix form:
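
The excerpt stops at the matrix form; written out in the usual way (a leading column of ones in $\boldsymbol{X}$ absorbing the bias $w_0$), it is:

$$ \hat{\boldsymbol{y}} = \boldsymbol{X} \boldsymbol{w}, \qquad \boldsymbol{X} = \left[\begin{array}{cccc} 1 & x_{1,1} & \cdots & x_{1,D} \\ \vdots & \vdots & & \vdots \\ 1 & x_{N,1} & \cdots & x_{N,D} \end{array}\right] \in \mathbb{R}^{N \times (D+1)}, \qquad \boldsymbol{w} = \left[ w_0, w_1, \dots, w_D \right]^{T} $$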

2020-07-06

Cross Validation

Objective function overview
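
A minimal sketch of K-fold cross validation with scikit-learn (the estimator, dataset, and 5 folds are assumptions for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# 5-fold CV: fit on 4 folds, score the objective on the held-out fold,
# rotate through all 5 folds, then average the validation scores
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print(scores.mean())
```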

2020-07-06

Bias Variance Tradeoff

TL;DR

| | Reason | Example | Effect | Model's complexity ⬆️ | Model's complexity ⬇️ |
|---|---|---|---|---|---|
| Bias | wrong assumptions | assuming a quadratic model to be linear | underfitting | ⬇️ | ⬆️ |
| Variance | excessive sensitivity to small variations | high-degree polynomial model | overfitting | ⬆️ | ⬇️ |
| Irreducible error | noisy data | | | | |

Explanation: a model's generalization error can be expressed as the sum of three very different errors:
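
For squared error, this is the standard decomposition (a worked form, with $\hat{f}$ the learned predictor, $f$ the true function, and $\sigma^2$ the noise variance):

$$ \mathbb{E}\left[ \left( y - \hat{f}(\boldsymbol{x}) \right)^2 \right] = \underbrace{\left( \mathbb{E}[\hat{f}(\boldsymbol{x})] - f(\boldsymbol{x}) \right)^2}_{\text{bias}^2} + \underbrace{\mathbb{E}\left[ \left( \hat{f}(\boldsymbol{x}) - \mathbb{E}[\hat{f}(\boldsymbol{x})] \right)^2 \right]}_{\text{variance}} + \underbrace{\sigma^2}_{\text{irreducible error}} $$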

2020-07-06