Unsupervised Learning

Principal Component Analysis (PCA)

TL;DR The usual procedure to compute a $d$-dimensional principal component analysis consists of the following steps: calculate the average $$ \bar{m}=\frac{1}{N}\sum_{i=1}^{N} m_{i} \in \mathbb{R}^{d}, $$ the centered data matrix $$ \mathbf{M}=\left(m_{1}-\bar{m}, \ldots, m_{N}-\bar{m}\right) \in \mathbb{R}^{d \times N}, $$ and the scatter matrix (unnormalized covariance matrix) $$ \mathbf{S}=\mathbf{M M}^{\mathrm{T}} \in \mathbb{R}^{d \times d} $$ of all feature vectors $m_{1}, \ldots, m_{N}$
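The steps above can be sketched in NumPy (the toy data here is hypothetical; the principal components are then the eigenvectors of $\mathbf{S}$ with the largest eigenvalues):

```python
import numpy as np

# Hypothetical toy data: N = 5 feature vectors in d = 3 dimensions (columns are samples).
rng = np.random.default_rng(0)
samples = rng.normal(size=(3, 5))

# Step 1: average of the feature vectors (a vector in R^d).
m_bar = samples.mean(axis=1, keepdims=True)

# Step 2: centered data matrix M (d x N).
M = samples - m_bar

# Step 3: scatter matrix S = M M^T (d x d).
S = M @ M.T

# Principal components: eigenvectors of S, sorted by descending eigenvalue.
eigvals, eigvecs = np.linalg.eigh(S)
order = np.argsort(eigvals)[::-1]
components = eigvecs[:, order]
```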

2020-11-07

Gaussian Mixture Model

Gaussian Distribution Univariate: the Probability Density Function (PDF) is $$ P(x \mid \theta)=\frac{1}{\sqrt{2 \pi \sigma^{2}}} \exp \left(-\frac{(x-\mu)^{2}}{2 \sigma^{2}}\right) $$ where $\mu$ is the mean and $\sigma$ the standard deviation. Multivariate: the PDF is $$ P(x \mid \theta)=\frac{1}{(2 \pi)^{\frac{D}{2}}|\Sigma|^{\frac{1}{2}}} \exp \left(-\frac{(x-\mu)^{T} \Sigma^{-1}(x-\mu)}{2}\right) $$ where $\mu$ is the mean, $\Sigma$ the covariance matrix, and $D$ the dimension of the data. Learning For a univariate Gaussian model, we can use Maximum Likelihood Estimation (MLE) to estimate the parameter $\theta$: $$ \theta= \underset{\theta}{\operatorname{argmax}}\, L(\theta) $$ Assuming data are i.i.d.
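The two PDFs and the univariate MLE can be sketched directly from the formulas (the sample data below is hypothetical; for a univariate Gaussian the MLE solution is the sample mean and the biased sample variance):

```python
import numpy as np

def gaussian_pdf(x, mu, sigma):
    """Univariate Gaussian PDF P(x | mu, sigma)."""
    return np.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

def multivariate_gaussian_pdf(x, mu, Sigma):
    """Multivariate Gaussian PDF with mean mu and covariance Sigma."""
    D = len(mu)
    diff = x - mu
    norm = (2 * np.pi) ** (D / 2) * np.sqrt(np.linalg.det(Sigma))
    return float(np.exp(-0.5 * diff @ np.linalg.solve(Sigma, diff)) / norm)

# MLE for the univariate case on hypothetical i.i.d. samples:
data = np.array([1.0, 2.0, 3.0, 4.0])
mu_hat = data.mean()
sigma_hat = data.std()  # divides by N (not N-1): this is the likelihood maximiser
```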

2020-11-07

Unsupervised Learning

Learn patterns from untagged data.

2020-09-07

Boltzmann Machine

Boltzmann Machine A stochastic recurrent neural network, introduced by Hinton and Sejnowski, that can learn internal representations. Problem: unconstrained connectivity. Representation The model can be represented by an undirected graph whose nodes are states and whose edges are dependencies between states.
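A minimal sketch of the graph representation, assuming the standard Boltzmann machine energy $E(s)=-\tfrac{1}{2}s^{\mathrm{T}}Ws-b^{\mathrm{T}}s$ (the weights and state below are hypothetical; the symmetric weight matrix encodes the undirected edges, i.e. the dependencies between states):

```python
import numpy as np

# Hypothetical 3-unit Boltzmann machine: symmetric weights, zero diagonal
# (no self-connections); W[i, j] != 0 means an undirected edge i -- j.
W = np.array([[0.0, 0.5, -0.3],
              [0.5, 0.0,  0.2],
              [-0.3, 0.2, 0.0]])
b = np.array([0.1, -0.1, 0.0])

def energy(s, W, b):
    """Energy of a binary state vector s under the standard BM energy function."""
    return float(-0.5 * s @ W @ s - b @ s)

state = np.array([1, 0, 1])
e = energy(state, W, b)
```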

2020-08-18

Hopfield Nets

Binary Hopfield Nets Basic Structure: binary units in a single layer of processing units. Each unit $i$ has an activity value or “state” $u_i$, which is binary: $-1$ or $1$, denoted $-$ and $+$ respectively. Example
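A small sketch of such binary $\pm 1$ units, assuming the usual Hebbian storage rule and sign-of-net-input update (the stored pattern and helper names are hypothetical, not from the post):

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian weight matrix for storing +-1 patterns; zero diagonal."""
    d = patterns.shape[1]
    W = patterns.T @ patterns / d
    np.fill_diagonal(W, 0.0)
    return W

def update(u, W):
    """One synchronous update: each unit takes the sign of its net input."""
    return np.where(W @ u >= 0, 1, -1)

# Hypothetical stored pattern of 4 binary units.
pattern = np.array([[1, -1, 1, -1]])
W = hebbian_weights(pattern)
noisy = np.array([1, 1, 1, -1])   # one unit flipped
recalled = update(noisy, W)       # recovers the stored pattern
```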

2020-08-18

Restricted Boltzmann Machines (RBMs)

Definition Invented by Geoffrey Hinton, a Restricted Boltzmann machine is an algorithm useful for dimensionality reduction, classification, regression, collaborative filtering, feature learning, and topic modeling. Given their relative simplicity and historical importance, restricted Boltzmann machines are the first neural network we’ll tackle.

2020-08-16

Auto Encoder

Supervised vs. Unsupervised Learning Supervised learning: given data $(X, Y)$, estimate the posterior $P(Y \mid X)$. Unsupervised learning: concerned with the (unseen) structure of the data; tries to estimate (implicitly or explicitly) the data distribution $P(X)$. Auto-Encoder structure In supervised learning, the hidden layers encapsulate the features useful for classification.
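The contrast can be sketched with a tiny linear auto-encoder that sees only $X$, no labels $Y$ (the data, dimensions, and weight names below are hypothetical; training, omitted here, would minimise the reconstruction error so that the code layer captures structure of $P(X)$):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical unlabeled data: 100 samples of X in 4 dimensions (no Y).
X = rng.normal(size=(100, 4))

# Linear auto-encoder: encode to a 2-dimensional code, decode back to 4.
W_enc = rng.normal(scale=0.1, size=(4, 2))
W_dec = rng.normal(scale=0.1, size=(2, 4))

def forward(X):
    code = X @ W_enc       # hidden layer: learned features of X
    recon = code @ W_dec   # reconstruction of the input
    return code, recon

# Training would minimise ||X - recon||^2 w.r.t. W_enc and W_dec
# (gradient step omitted in this sketch).
code, recon = forward(X)
```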

2020-08-16
