👍 Data Augmentation
Motivation
Overfitting happens when we have too few examples to train on, resulting in a model with poor generalization performance 😢. If we had infinite training data, we wouldn’t overfit because we would see every possible instance.
However, in most machine learning applications, especially in image classification tasks, obtaining new training data is not easy. Therefore we need to make do with the training set at hand. 💪
Data augmentation is a way to generate more training data from our current set. It enriches or “augments” the training data by generating new examples via random transformations of existing ones. This way we artificially boost the size of the training set, reducing overfitting. So data augmentation can also be considered a regularization technique.
Data augmentation is done dynamically during training time. The generated images need to be realistic: the transformations should preserve the label and produce examples the model can still learn from, so simply adding arbitrary noise won’t help. Common transformations are
- rotation
- shifting
- resizing
- exposure adjustment
- contrast change
- etc.
This way we can generate a lot of new samples from a single training example.
Notice that data augmentation is ONLY performed on the training data; we don’t touch the validation or test set.
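As a concrete sketch of what “dynamic, training-time only” augmentation can look like, here is a minimal example using Keras’ ImageDataGenerator. The library choice, the dummy data shapes and the parameter values are just illustrative assumptions, not requirements of the technique:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Hypothetical training set: 1000 RGB images of size 64x64 with 10 classes.
x_train = np.random.rand(1000, 64, 64, 3).astype("float32")
y_train = np.random.randint(0, 10, size=(1000,))

# Random transformations are sampled freshly for every batch, on the fly.
train_datagen = ImageDataGenerator(
    rotation_range=20,              # random rotation up to 20 degrees
    width_shift_range=0.1,          # horizontal shift up to 10% of width
    height_shift_range=0.1,         # vertical shift up to 10% of height
    zoom_range=0.2,                 # random resizing / zoom
    brightness_range=(0.8, 1.2),    # exposure adjustment
    horizontal_flip=True,           # random left-right flip
)

# Augment ONLY the training data; validation/test images are fed in unchanged.
train_iter = train_datagen.flow(x_train, y_train, batch_size=32)

# model.fit(train_iter, epochs=10, validation_data=(x_val, y_val))
```

Because the transformations are re-sampled for every batch, the network practically never sees exactly the same image twice, while the validation and test sets stay untouched.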
Popular Augmentation Techniques
Flip
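The image is mirrored horizontally (left-right) and/or vertically (top-bottom). Horizontal flips are usually safe for natural images, while vertical flips only make sense when orientation doesn’t matter. A minimal NumPy sketch (the function name and array shapes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_flip(img):
    """Randomly mirror an image array of shape (H, W, C)."""
    if rng.random() < 0.5:
        img = np.fliplr(img)   # horizontal flip (left-right mirror)
    if rng.random() < 0.5:
        img = np.flipud(img)   # vertical flip (upside down)
    return img

img = np.random.rand(64, 64, 3)    # stand-in for a real training image
augmented = random_flip(img)
```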
Rotation
Note: image dimensions may not be preserved after rotation
- If the image is a square, rotating it at right angles (90, 180 or 270 degrees) will preserve the image size.
- If the image is a rectangle, only rotating it by 180 degrees preserves the size.
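A small sketch of both cases, assuming NumPy and SciPy are available: np.rot90 for right-angle rotations, scipy.ndimage.rotate for arbitrary angles, where reshape controls whether the canvas grows or the corners get clipped:

```python
import numpy as np
from scipy import ndimage

img = np.random.rand(64, 48, 3)   # stand-in rectangular image (H=64, W=48)

# Right-angle rotation: the size is only preserved if the image is square.
rot90 = np.rot90(img, k=1)                          # shape becomes (48, 64, 3)

# Arbitrary angle: reshape=True grows the canvas so nothing is cut off,
# reshape=False keeps the original size and clips the corners instead.
rot_grow = ndimage.rotate(img, angle=30, reshape=True)
rot_keep = ndimage.rotate(img, angle=30, reshape=False)

print(img.shape, rot90.shape, rot_grow.shape, rot_keep.shape)
```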
Scale
The image can be scaled outward or inward. While scaling outward, the final image will be larger than the original image. Most image frameworks then cut out a section from the new image with a size equal to the original image.
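One possible way to implement outward scaling with SciPy, assuming float images of shape (H, W, C); the helper name and the zoom factor are just for illustration:

```python
import numpy as np
from scipy import ndimage

def scale_outward(img, factor=1.3):
    """Zoom in by `factor`, then crop the centre back to the original size."""
    h, w = img.shape[:2]
    # zoom the spatial axes only, leave the channel axis untouched
    zoomed = ndimage.zoom(img, (factor, factor, 1), order=1)
    zh, zw = zoomed.shape[:2]
    top, left = (zh - h) // 2, (zw - w) // 2
    return zoomed[top:top + h, left:left + w]

img = np.random.rand(64, 64, 3)
print(scale_outward(img).shape)   # (64, 64, 3): same size, but "zoomed in"
```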
Crop
Random cropping
- Randomly sample a section from the original image
- Resize this section to the original image size
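A rough sketch of random cropping in NumPy/SciPy; the crop fraction and the helper name are arbitrary choices for illustration:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def random_crop(img, crop_frac=0.8):
    """Sample a random section covering `crop_frac` of each side, then resize it back."""
    h, w = img.shape[:2]
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    top = rng.integers(0, h - ch + 1)
    left = rng.integers(0, w - cw + 1)
    section = img[top:top + ch, left:left + cw]
    # resize the section back to the original image size
    return ndimage.zoom(section, (h / ch, w / cw, 1), order=1)

img = np.random.rand(64, 64, 3)
print(random_crop(img).shape)   # (64, 64, 3)
```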
Translation
Translation = moving the image along the X or Y direction (or both)
This method of augmentation is very useful because most objects can be located almost anywhere in the image. This forces your convolutional neural network to look everywhere.
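One way to sketch random translation is scipy.ndimage.shift, which fills the pixels that slide in from outside the frame with a constant value; the maximum shift fraction below is an arbitrary choice:

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(0)

def random_translate(img, max_frac=0.2):
    """Shift the image by up to `max_frac` of its size along X and Y."""
    h, w = img.shape[:2]
    dy = rng.integers(-int(h * max_frac), int(h * max_frac) + 1)
    dx = rng.integers(-int(w * max_frac), int(w * max_frac) + 1)
    # pixels that slide in from outside the frame are filled with black (cval=0)
    return ndimage.shift(img, shift=(dy, dx, 0), order=0, mode="constant", cval=0.0)

img = np.random.rand(64, 64, 3)
translated = random_translate(img)
```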
Gaussian Noise
One reason for overfitting is that the neural network tries to learn high frequency features (fine-grained patterns that vary rapidly across the image) that may not be useful.
Gaussian noise, which has zero mean, essentially has data points in all frequencies, effectively distorting the high frequency features. This also means that lower frequency components (usually, your intended data) are also distorted, but your neural network can learn to look past that. Adding just the right amount of noise can enhance the learning capability.
A toned down version of this is the salt and pepper noise, which presents itself as random black and white pixels spread through the image. This is similar to the effect produced by adding Gaussian noise to an image, but may have a lower information distortion level.
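Both kinds of noise are easy to sketch in NumPy, assuming pixel values in [0, 1]; the sigma and amount values below are illustrative, and “the right amount” has to be tuned per dataset:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_gaussian_noise(img, sigma=0.05):
    """Add zero-mean Gaussian noise; sigma controls 'the right amount'."""
    noisy = img + rng.normal(0.0, sigma, size=img.shape)
    return np.clip(noisy, 0.0, 1.0)            # keep pixel values in [0, 1]

def add_salt_and_pepper(img, amount=0.02):
    """Set a random fraction of pixels to pure black (pepper) or white (salt)."""
    noisy = img.copy()
    mask = rng.random(img.shape[:2])
    noisy[mask < amount / 2] = 0.0             # pepper
    noisy[mask > 1 - amount / 2] = 1.0         # salt
    return noisy

img = np.random.rand(64, 64, 3)                # pixel values assumed in [0, 1]
gaussian = add_gaussian_noise(img)
salt_pepper = add_salt_and_pepper(img)
```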