Nonlinear dimensionality reduction
The world produces vast amounts of complex, high-dimensional data.
Discovering the hidden order in such data can be framed as a dimensionality reduction problem: passing from an unmanageable number of dimensions to an understandable one.
This problem is so common in data processing that a great deal of research and engineering has been dedicated to it over the last 30 years.
Here are some important algorithms in the history of manifold learning and nonlinear dimensionality reduction.
- Pairwise distance methods: multidimensional scaling (MDS), non-metric MDS, Sammon mapping, etc.
- Linear methods: principal component analysis (PCA), also known as the Karhunen-Loève transform (KLT); singular value decomposition (SVD); independent component analysis (ICA).
- Topographic maps: elastic net, self-organizing maps (SOM), generative topographic mapping (GTM).
- Principal curves and surfaces.
- Neural network methods: nonlinear autoencoders, kernel PCA, kernel ICA.
- Isomap.
- Locally linear embedding (LLE).
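As a point of contrast with the nonlinear methods above, the linear baseline (PCA computed through SVD) can be sketched in a few lines. This is a minimal illustration, not a reference implementation; the function name `pca_via_svd` is hypothetical.

```python
import numpy as np

def pca_via_svd(X, n_components):
    # Center the data; the right singular vectors of the centered
    # matrix are the principal axes, and the singular values encode
    # the variance captured by each component (in decreasing order).
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T  # projected coordinates
```

Because the singular values are sorted in decreasing order, the first projected coordinate always carries at least as much variance as the second.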
Locally Linear Embedding
LLE is effective and simple. It proceeds in three steps: (1) for each data point, find its k nearest neighbors; (2) compute the weights that best reconstruct each point as a linear combination of its neighbors, with the weights constrained to sum to one; (3) find low-dimensional coordinates that are best reconstructed by those same weights, which amounts to taking the bottom nonzero eigenvectors of the matrix M = (I - W)^T (I - W).
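The three steps above can be sketched directly with NumPy. This is a minimal, brute-force sketch for illustration (the function name `lle` and the regularization constant are choices made here, not part of any standard API):

```python
import numpy as np

def lle(X, n_neighbors=10, n_components=2, reg=1e-3):
    """Locally linear embedding sketch. X has shape (n_samples, n_features)."""
    n = X.shape[0]
    # Step 1: k nearest neighbors of each point (brute-force distances).
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    np.fill_diagonal(d2, np.inf)
    nbrs = np.argsort(d2, axis=1)[:, :n_neighbors]
    # Step 2: reconstruction weights. Each point is expressed as a
    # combination of its neighbors; weights are constrained to sum to 1.
    W = np.zeros((n, n))
    for i in range(n):
        Z = X[nbrs[i]] - X[i]                          # center neighbors on x_i
        C = Z @ Z.T                                    # local Gram matrix
        C += reg * np.trace(C) * np.eye(n_neighbors)   # regularize for stability
        w = np.linalg.solve(C, np.ones(n_neighbors))
        W[i, nbrs[i]] = w / w.sum()
    # Step 3: embedding. Minimize ||Y - W Y||^2 by taking the eigenvectors
    # of M = (I - W)^T (I - W) with the smallest eigenvalues, discarding
    # the constant (bottom) eigenvector.
    I = np.eye(n)
    M = (I - W).T @ (I - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:n_components + 1]
```

The trace-based regularization in step 2 keeps the local Gram matrix invertible when a point has more neighbors than input dimensions, which is the usual case on a low-dimensional manifold.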