SVD — Singular Value Decomposition


Today, we embark on an exciting journey into the world of Singular Value Decomposition (SVD) — a fundamental concept in linear algebra with wide-ranging applications in data science, machine learning, and beyond. Don’t worry if you’re new to it; I’ll also cover the basic definitions to help you follow along!


Basic Terminologies

  1. Rank of a matrix - The rank of a matrix is the maximum number of linearly independent rows or columns it contains. It measures the amount of “non-redundant” or “unique” information in the matrix.
  2. Orthogonal Vectors - Orthogonal vectors are vectors that are perpendicular to each other. If two vectors are orthogonal, their dot product is zero. Mathematically, two vectors a and b are orthogonal if a · b = 0, where · denotes the dot product.
  3. Diagonal Matrix - A matrix where all the non-diagonal elements are zero.


4. Eigenvalues and Eigenvectors - Eigenvalues describe the scaling factor (how much a vector is stretched or compressed), and eigenvectors are the special directions that remain unchanged when the transformation is applied.


5. Input Direction and Output Direction -

  • Input Direction refers to the direction of the vector before the transformation. It’s the vector you start with.
  • Output Direction refers to the direction of the vector after the transformation. Even if the direction hasn’t changed, it is still considered the “output” because the transformation has affected the vector (even if just by scaling).


6. Singular Value - A non-negative value that represents how much a matrix stretches or compresses a vector along a specific direction. If the singular value is —

  • Greater than 1: Indicates that the vector is stretched (expanded).
  • Between 0 and 1: Indicates that the vector is compressed (reduced).

7. Singular Vector - A vector that describes a direction in the input or output space of a matrix. The matrix stretches or compresses it, but its direction remains the same. Example: if a matrix stretches a singular vector by a factor of 2, the vector keeps its direction while its length doubles.

There are two types of singular vectors: left singular vectors and right singular vectors.

8. Left Singular Vectors - These are the eigenvectors of the matrix AAᵀ (where A is the matrix and Aᵀ is its transpose). They represent directions in the output space (associated with the columns of the matrix).

9. Right Singular Vectors - These are the eigenvectors of the matrix AᵀA. They represent directions in the input space (associated with the rows of the matrix). A short sketch after this list illustrates both.
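
To make definitions 8 and 9 concrete, here is a minimal NumPy sketch (the matrix A is an arbitrary example, not taken from the article) comparing the eigenvectors of AᵀA and AAᵀ with the singular vectors that np.linalg.svd returns:

    import numpy as np

    # Arbitrary 3x2 example matrix (hypothetical, for illustration only)
    A = np.array([[3.0, 1.0],
                  [1.0, 3.0],
                  [1.0, 1.0]])

    U, S, Vt = np.linalg.svd(A)   # columns of U: left singular vectors,
                                  # rows of Vt: right singular vectors

    # np.linalg.eigh returns eigenvalues in ascending order, so the
    # eigenvector for the largest eigenvalue is the LAST column.
    _, evecs_right = np.linalg.eigh(A.T @ A)   # eigenvectors of AᵀA
    _, evecs_left = np.linalg.eigh(A @ A.T)    # eigenvectors of AAᵀ

    # The top right singular vector matches an eigenvector of AᵀA up to sign:
    print(Vt[0], evecs_right[:, -1])
    # The top left singular vector matches an eigenvector of AAᵀ up to sign:
    print(U[:, 0], evecs_left[:, -1])

(Eigenvectors are only defined up to sign, so the printed pairs may differ by a factor of −1.)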



Theorems

  1. Eigenvalue Theorem - The Eigenvalue Theorem states that for a square matrix A, there are special vectors (called eigenvectors) that are only scaled (stretched or shrunk) when multiplied by A. The factor by which they are scaled is the corresponding eigenvalue.
  2. Singular Value Theorem - The Singular Value Theorem states that any matrix can be decomposed into three parts:

  • V (Right Singular Vectors): These represent directions in the input space (the original direction of the data).
  • Σ (Singular Values): These represent how much the matrix stretches or shrinks the data along those directions.
  • U (Left Singular Vectors): These represent directions in the output space (the transformed direction of the data after applying the matrix).

This theorem expresses any matrix A in its Singular Value Decomposition (SVD) form as: A = UΣVᵀ
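
As a quick check of this statement, the sketch below (a 2×3 matrix invented for illustration) factors A with NumPy and rebuilds it from the three parts:

    import numpy as np

    A = np.array([[2.0, 0.0, 1.0],
                  [0.0, 3.0, 1.0]])   # arbitrary 2x3 example

    U, S, Vt = np.linalg.svd(A)       # full SVD: U is 2x2, Vt is 3x3

    # S holds only the diagonal of Σ, so embed it into a 2x3 zero matrix.
    Sigma = np.zeros_like(A)
    Sigma[:len(S), :len(S)] = np.diag(S)

    print(np.allclose(A, U @ Sigma @ Vt))   # True: A = UΣVᵀ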


Definition Of SVD

Singular Value Decomposition (SVD) is a mathematical technique that decomposes a matrix A into three parts:

  • U: A matrix containing the left singular vectors (column-wise).
  • Σ (Sigma): A diagonal matrix containing the singular values (representing the strength of each component).
  • Vᵀ: A matrix containing the right singular vectors (row-wise).


We can compress an image based on its rank. For example, consider a 6×6 matrix whose rows are all multiples of one another. Its rank is 1, so instead of storing all 36 entries we only need one column vector u (6 numbers) and one row vector vᵀ (6 numbers), with A = uvᵀ. We have reduced 36 numbers to 12 numbers.
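
Here is a tiny sketch of that counting argument (the two vectors are made up for illustration): storing one 6-vector u and one 6-vector v, 12 numbers in total, reproduces all 36 entries of the rank-1 matrix uvᵀ.

    import numpy as np

    u = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])   # 6 stored numbers
    v = np.array([2.0, 1.0, 0.5, 3.0, 1.5, 1.0])   # 6 stored numbers

    A = np.outer(u, v)                  # 6x6 matrix: 36 entries
    print(np.linalg.matrix_rank(A))     # 1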

SVD (Singular Value Decomposition) decomposes a matrix into simpler components, known as rank-one matrices, each of the form σᵢuᵢvᵢᵀ. Each rank-one matrix corresponds to a specific component of the original matrix. By summing these matrices, we can reconstruct the original matrix A: A = σ₁u₁v₁ᵀ + σ₂u₂v₂ᵀ + … + σᵣuᵣvᵣᵀ.

These rank-one matrices are ordered by their importance, which is determined by their singular values. The components with the largest singular values contribute the most to the matrix’s structure. This process helps simplify the data, focusing on the most significant components to gain a clearer understanding of the matrix.
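
The sketch below (a random matrix used purely as an example) rebuilds A as the sum σ₁u₁v₁ᵀ + σ₂u₂v₂ᵀ + … and shows that keeping only the terms with the largest singular values already gives a close approximation:

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 4))     # arbitrary example matrix

    U, S, Vt = np.linalg.svd(A, full_matrices=False)

    # A as a sum of rank-one matrices σᵢ·uᵢ·vᵢᵀ
    full = sum(S[i] * np.outer(U[:, i], Vt[i]) for i in range(len(S)))
    print(np.allclose(A, full))         # True

    # Keep only the two largest components: a rank-2 approximation
    A2 = sum(S[i] * np.outer(U[:, i], Vt[i]) for i in range(2))
    print(np.linalg.norm(A - A2))       # small residual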


How To Calculate Singular Values

Step 1: Compute AᵀA by multiplying the transpose of A with A.

Step 2: Find the eigenvalues of the resulting matrix AᵀA.

Step 3: Take square roots. The singular values of A are the square roots of these eigenvalues.


σᵢ = √λᵢ

Where:

  • σᵢ is the i-th singular value,
  • λᵢ is the i-th eigenvalue of the matrix AᵀA
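
A minimal sketch of these three steps (the matrix is an arbitrary example), checked against the singular values NumPy computes directly:

    import numpy as np

    A = np.array([[4.0, 0.0],
                  [3.0, -5.0]])             # arbitrary example

    AtA = A.T @ A                           # Step 1: compute AᵀA
    eigvals = np.linalg.eigh(AtA)[0]        # Step 2: eigenvalues (ascending)
    sigmas = np.sqrt(eigvals[::-1])         # Step 3: square roots, descending

    print(sigmas)
    print(np.linalg.svd(A, compute_uv=False))   # same values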


Applications Of SVD

  1. Dimensionality Reduction: Reduce features for faster model training (e.g., Principal Component Analysis (PCA)); see the sketch after this list.
  2. Recommendation Systems: Discover hidden patterns to recommend items (e.g., collaborative filtering).
  3. Image Processing: Compress or denoise images for efficient analysis.
  4. Text Mining: Extract topics from text (e.g., Latent Semantic Analysis (LSA)).
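
As one concrete illustration of the first application, here is a small sketch (random data; the shapes and the choice k = 2 are hypothetical) that projects a data matrix onto its top-2 right singular vectors, which is essentially how PCA uses SVD:

    import numpy as np

    rng = np.random.default_rng(1)
    X = rng.standard_normal((100, 10))   # 100 samples, 10 features

    # Center the data, then take the SVD
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)

    # Project onto the top-k right singular vectors: 10 features -> 2
    k = 2
    X_reduced = Xc @ Vt[:k].T
    print(X_reduced.shape)               # (100, 2)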




Acknowledgements

A special thank you to Prof. Chandramani Singh for providing such a clear and engaging explanation of Singular Value Decomposition (SVD). Your teachings have been invaluable in shaping my understanding of this topic!


Finally

I hope you had fun reading this introduction to Singular Value Decomposition (SVD)! Trust me, there’s a lot more to uncover in the world of matrices. Buckle up, because our next blog is going to be an epic deep dive into even more fascinating concepts!

Got questions? Don’t be shy! Hit me up on LinkedIn. Coffee’s on me (virtually, of course)!
