Algebra basics

Vector

💭 A vector is a quantity that has both magnitude and direction

The general effect of a matrix A on a vector x is a combination of rotation and stretching.

Geometric interpretation of the dot product of two vectors = magnitude(first vector) × magnitude(projection of the second vector onto the first)
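A minimal numpy sketch of this equality (the vectors a and b are arbitrary illustrations, not from the text):

```python
import numpy as np

# Arbitrary example vectors (assumed for illustration)
a = np.array([3.0, 0.0])
b = np.array([2.0, 2.0])

dot = a @ b  # algebraic dot product: 3*2 + 0*2 = 6

# Geometric version: |a| * (signed length of b's projection onto a)
proj_len = (a @ b) / np.linalg.norm(a)
geometric = np.linalg.norm(a) * proj_len

assert np.isclose(dot, geometric)
```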

Matrix

A matrix is a linear transformation:

x ↦ Ax


A rotates the vector x counterclockwise about the origin by angle θ

B stretches the vector along the x-axis by a constant k

C stretches the vector along the y-axis by a constant k
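The three transformations above can be sketched with numpy (θ and k are example values chosen for illustration):

```python
import numpy as np

theta = np.pi / 2  # example angle: 90 degrees
k = 2.0            # example stretch constant

# A: counterclockwise rotation by theta
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
# B: stretch along the x-axis by k
B = np.array([[k, 0.0],
              [0.0, 1.0]])
# C: stretch along the y-axis by k
C = np.array([[1.0, 0.0],
              [0.0, k]])

x = np.array([1.0, 0.0])
print(A @ x)  # rotated onto the y-axis: ~[0, 1]
print(B @ x)  # stretched along x: [2, 0]
print(C @ x)  # unchanged: x has no y-component
```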

Change of basis

Euclidean basis

Other basis example:

Eigenvectors

For example, let's take a vector x such that:

Note that after the transformation A:

changing the magnitude <=> multiplying the vector by a scalar λ

…and let's apply A




In the case of a circle, A transforms it into an ellipse:

This is not true for all vectors x. In fact, for each matrix A, only some vectors have this property.

These vectors are called the eigenvectors of A, and the scalar λ is called an eigenvalue of A for that eigenvector.
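A quick numerical check of the defining property A v = λ v, using an arbitrary example matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])  # arbitrary example matrix

eigvals, eigvecs = np.linalg.eig(A)  # eigenvectors are the columns of eigvecs

for lam, v in zip(eigvals, eigvecs.T):
    # A only rescales its eigenvector: A v = lambda v
    assert np.allclose(A @ v, lam * v)
```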

 

Example 1: A - arbitrary matrix

Example 2: B - symmetric matrix

Example 1. Arbitrary matrix transformation

Example 2. Symmetric matrix transformation

❗ A symmetric matrix transforms a vector by stretching or shrinking it along its eigenvectors 


❗ For a symmetric matrix, the eigenvectors form a basis for the vector space.

Def. A set of vectors {v1, v2, v3, …, vn} forms a basis for a vector space V if the vectors are linearly independent and generate (span) the whole of V.

Properties of a symmetric matrix:

1. An n×n symmetric matrix has n linearly independent and mutually orthogonal eigenvectors, with n real eigenvalues corresponding to those eigenvectors

2. Eigendecomposition

A symmetric matrix is orthogonally diagonalizable: A = QΛQ^T, where the columns of Q are the orthonormal eigenvectors of A and Λ is the diagonal matrix of its eigenvalues.
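Both properties (real eigenvalues, orthonormal eigenvectors) and the orthogonal diagonalization can be verified numerically; S below is an arbitrary symmetric example:

```python
import numpy as np

S = np.array([[4.0, 1.0],
              [1.0, 3.0]])  # arbitrary symmetric example matrix

lams, Q = np.linalg.eigh(S)  # eigh is numpy's routine for symmetric matrices

assert np.all(np.isreal(lams))          # n real eigenvalues
assert np.allclose(Q.T @ Q, np.eye(2))  # orthonormal eigenvectors

# Orthogonal diagonalization: S = Q diag(lams) Q^T
assert np.allclose(Q @ np.diag(lams) @ Q.T, S)
```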

Def. rank(A) = the maximum number of linearly independent column vectors of A

Geometrical interpretation of eigendecomposition

Eigendecomposition represents the directions and magnitudes of the original matrix transformation.

When we apply A to any vector x, we orthogonally project x onto the eigenvectors and change the magnitude of each component by the corresponding eigenvalue.

A symmetric matrix stretches or shrinks the vector along its eigenvectors by the amount of the corresponding eigenvalue.

Eigendecomposition rewrites A as a sum of simpler n×n matrices: A = λ1 u1 u1^T + … + λn un un^T,

where each part λi ui ui^T has rank 1.
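The sum-of-rank-1 form can be checked directly (S is an arbitrary symmetric example):

```python
import numpy as np

S = np.array([[4.0, 1.0],
              [1.0, 3.0]])  # arbitrary symmetric example matrix

lams, Q = np.linalg.eigh(S)

# Each part lambda_i * u_i u_i^T is a rank-1 matrix ...
parts = [lam * np.outer(u, u) for lam, u in zip(lams, Q.T)]
for part in parts:
    assert np.linalg.matrix_rank(part) == 1

# ... and the parts sum back to S
assert np.allclose(sum(parts), S)
```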

Thanks to eigendecomposition, we can see the A-transformation as a set of n projections onto the eigenvectors, each scaled by its eigenvalue.

The bigger the eigenvalue, the more important the transformation along its eigenvector.

Thus, we can approximate the transformation A by its eigenvectors!

Let k < n and keep only the k terms with the largest eigenvalues: A ≈ λ1 u1 u1^T + … + λk uk uk^T
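A sketch of this truncation on an arbitrary 3×3 symmetric example; for a symmetric matrix, the Frobenius error of dropping one term equals the magnitude of the dropped eigenvalue:

```python
import numpy as np

S = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])  # arbitrary 3x3 symmetric example

lams, Q = np.linalg.eigh(S)
order = np.argsort(np.abs(lams))[::-1]  # largest |eigenvalue| first

k = 2  # keep only the k dominant eigen-directions, k < n
S_approx = sum(lams[i] * np.outer(Q[:, i], Q[:, i]) for i in order[:k])

# Frobenius error equals the magnitude of the single dropped eigenvalue
err = np.linalg.norm(S - S_approx)
assert np.isclose(err, np.min(np.abs(lams)))
```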

Singular Values 

Let's look at the matrix A^T A.

A^T A is symmetric by construction: (A^T A)^T = A^T A. Let v1, v2, …, vn be the eigenvectors of A^T A with eigenvalues λi.

=> the squared length of the vector A vi is the scalar λi: for a unit eigenvector vi, ||A vi||^2 = vi^T A^T A vi = λi vi^T vi = λi.

Intuition: the vi might be seen as analogues of the nonexistent eigenvectors of the rectangular m×n matrix A.

Def. The singular value σi of a matrix A := the square root of the eigenvalue λi of the matrix A^T A

Thus, the singular value σi is the length of the vector A vi (because λi is the squared length of A vi).
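Checking that σi = sqrt(λi of A^T A) = ||A vi|| on an arbitrary non-symmetric example:

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0]])  # arbitrary non-symmetric example

lams, V = np.linalg.eigh(A.T @ A)  # eigen-pairs of the symmetric A^T A
sigmas = np.sqrt(lams)             # singular values of A

for s, v in zip(sigmas, V.T):
    # sigma_i is the length of the vector A v_i
    assert np.isclose(np.linalg.norm(A @ v), s)

# Same values (up to ordering) as numpy's own SVD routine
assert np.allclose(np.sort(sigmas), np.sort(np.linalg.svd(A, compute_uv=False)))
```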

The maximum value of ||Ax|| subject to the constraints ||x|| = 1 and x^T vj = 0 for j = 1, …, k−1 is σk, and it is attained at x = vk (for any k > 0).

It means that by applying a non-symmetric matrix A to a vector x, we stretch x along the eigenvectors of the symmetric matrix A^T A.

This gives us the intuition that a non-symmetric matrix A might be decomposed using the eigenvectors of A^T A and the square roots of its eigenvalues.

Let's take an example of a non-symmetric matrix

u1, u2 - the eigenvectors of A - are not the directions of stretching

v1, v2 - the eigenvectors of A^T A; the A vi are the directions of stretching

Singular Value Decomposition

Let A be an m×n matrix with rank(A) = r. The singular values of A can be ordered so that σ1 ≥ σ2 ≥ … ≥ σr > 0.
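As a minimal numerical sketch of this setup (the matrix below is an arbitrary example with m = 3, n = 2):

```python
import numpy as np

A = np.array([[3.0, 0.0],
              [4.0, 5.0],
              [0.0, 1.0]])  # arbitrary m x n example, m=3, n=2

U, sigmas, Vt = np.linalg.svd(A, full_matrices=False)

# numpy returns the singular values sorted in decreasing order;
# with rank(A) = r, the first r of them are strictly positive
r = np.linalg.matrix_rank(A)
assert np.all(sigmas[:r] > 0)
assert np.all(np.diff(sigmas) <= 0)

# The decomposition reconstructs A: A = U Sigma V^T
assert np.allclose(U @ np.diag(sigmas) @ Vt, A)
```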