Summary of ‘tSNE vs MDS vs PCA’

This summary of the video was created by an AI. It might contain some inaccuracies.

00:00:00 – 00:05:51

The video discusses three key dimensionality reduction techniques: Principal Component Analysis (PCA), Multidimensional Scaling (MDS), and t-distributed Stochastic Neighbor Embedding (t-SNE). These methods reduce the data to two dimensions for easier visualization. PCA projects the data onto new axes, the principal components, computed from the covariance matrix so that the most variance is retained. MDS preserves the pairwise distances between data points when embedding them in two-dimensional space, though its results can be influenced by random initialization. t-SNE, more sophisticated than MDS, prioritizes maintaining local structure, which makes it especially effective at revealing clusters. The video includes a practical demonstration on a biological dataset of bone marrow mononuclear cells, showing that t-SNE reveals more clustering than PCA and that MDS behaves more like PCA. The video concludes that t-SNE is best for identifying clusters, MDS excels at maintaining all pairwise distances, and PCA is a robust general-purpose dimensionality reduction that preserves variance.
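
The video only shows the methods visually, but a minimal sketch with scikit-learn conveys the same workflow: reduce one dataset to two dimensions with all three methods and plot the results side by side. The digits dataset and all parameters below are stand-ins chosen for illustration, not taken from the video.

```python
# Sketch: embed the same dataset into 2-D with PCA, MDS, and t-SNE.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import MDS, TSNE

X, y = load_digits(return_X_y=True)   # 64-dimensional points, used as a stand-in
X, y = X[:500], y[:500]               # keep it small; MDS scales poorly with sample count

embeddings = {
    "PCA": PCA(n_components=2).fit_transform(X),
    "MDS": MDS(n_components=2, random_state=0).fit_transform(X),
    "t-SNE": TSNE(n_components=2, random_state=0).fit_transform(X),
}

fig, axes = plt.subplots(1, 3, figsize=(12, 4))
for ax, (name, Z) in zip(axes, embeddings.items()):
    ax.scatter(Z[:, 0], Z[:, 1], c=y, s=8, cmap="tab10")
    ax.set_title(name)
plt.tight_layout()
plt.show()
```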

00:00:00

In this segment of the video, three dimensionality reduction techniques are discussed: Principal Component Analysis (PCA), Multidimensional Scaling (MDS), and t-distributed Stochastic Neighbor Embedding (t-SNE). The primary goal is to reduce the data to two dimensions for visualization. Despite their differences, these techniques yield similar results on smaller data sets.

PCA works by projecting the data onto axes defined by the principal components, which are computed as the eigenvectors of the covariance matrix.
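
To make that description concrete, here is a from-scratch sketch (NumPy only, not the code used in the video): center the data, compute the covariance matrix, take its two largest-eigenvalue eigenvectors, and project onto them.

```python
import numpy as np

def pca_2d(X):
    """Project rows of X onto the top-2 eigenvectors of the covariance matrix."""
    Xc = X - X.mean(axis=0)                  # center each feature
    cov = np.cov(Xc, rowvar=False)           # covariance matrix of the features
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigh: symmetric matrix, eigenvalues ascending
    top2 = eigvecs[:, np.argsort(eigvals)[::-1][:2]]   # two highest-variance directions
    return Xc @ top2                         # coordinates on the new 2-D axes

# Tiny example: five 3-D points reduced to 2-D
X = np.array([[1.0, 2.0, 0.1],
              [2.0, 4.1, 0.2],
              [3.0, 6.2, 0.0],
              [4.0, 7.9, 0.3],
              [5.0, 10.1, 0.1]])
print(pca_2d(X))
```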

MDS, on the other hand, tries to embed multidimensional data into a two-dimensional space while preserving the pairwise distances between data points as much as possible. It starts from a random placement and iteratively adjusts the points to minimize the mismatch between the distances in the original space and those in the embedded space. The process is demonstrated on the zoo data set, illustrating how MDS iteratively optimizes the positions of the data points. The segment also notes that MDS’s results can vary because the random initialization affects the outcome of the optimization.
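
The zoo data set is not reproduced here, but a short sketch with scikit-learn’s MDS (random data as a stand-in) shows the point about random initialization: two runs that differ only in their starting placement can settle on different layouts and slightly different stress values.

```python
import numpy as np
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 16))   # stand-in for a multidimensional data set

# Two runs that differ only in the random starting placement of the points.
run_a = MDS(n_components=2, random_state=1).fit(X)
run_b = MDS(n_components=2, random_state=2).fit(X)

# Both minimize the same "stress" (mismatch between original and embedded
# distances), but the embeddings can be rotated, reflected, or stuck in
# different local optima, so the final layouts and stress values need not match.
print("stress, run A:", run_a.stress_)
print("stress, run B:", run_b.stress_)
print("embedding A (first 3 points):\n", run_a.embedding_[:3])
print("embedding B (first 3 points):\n", run_b.embedding_[:3])
```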

00:03:00

In this part of the video, the speaker discusses t-SNE, an embedding algorithm similar to MDS but with a more complex criterion function that prioritizes preserving the distances between each point and its neighbors. The speaker applies t-SNE to a biological data set of bone marrow mononuclear cells with 1000 features per cell, each recording the activity of a gene. Compared to PCA, t-SNE produces a much more pronounced clustering structure in the visualization. The speaker demonstrates how selecting clusters in the t-SNE visualization translates to the PCA plot, showing that t-SNE reveals more clusters than PCA. MDS is also compared; it behaves more like PCA and finds less structure. The key takeaway is that t-SNE is preferable for finding clusters, MDS for maintaining all pairwise distances, and PCA for robust dimensionality reduction while retaining variance.
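
The bone marrow data itself is not available here, but a rough sketch with a synthetic 1000-feature clustered dataset (make_blobs standing in for the gene-activity matrix) shows the kind of PCA vs t-SNE comparison the video makes; on real single-cell data the difference in cluster separation is typically much more pronounced than on this easy synthetic example.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# Synthetic stand-in: 600 "cells", 1000 features, 6 hidden clusters.
X, labels = make_blobs(n_samples=600, n_features=1000, centers=6, random_state=0)

pca_xy = PCA(n_components=2).fit_transform(X)
tsne_xy = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(9, 4))
ax1.scatter(pca_xy[:, 0], pca_xy[:, 1], c=labels, s=8, cmap="tab10")
ax1.set_title("PCA (global variance)")
ax2.scatter(tsne_xy[:, 0], tsne_xy[:, 1], c=labels, s=8, cmap="tab10")
ax2.set_title("t-SNE (local neighborhoods, tighter clusters)")
plt.tight_layout()
plt.show()
```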
