
$\require{cancel} \newcommand{\Ket}[1]{\left|{#1}\right\rangle} \newcommand{\Bra}[1]{\left\langle{#1}\right|} \newcommand{\Braket}[1]{\left\langle{#1}\right\rangle} \newcommand{\Rsr}[1]{\frac{1}{\sqrt{#1}}} \newcommand{\RSR}[1]{1/\sqrt{#1}} \newcommand{\Verti}{\rvert} \newcommand{\HAT}[1]{\hat{\,#1~}} \DeclareMathOperator{\Tr}{Tr}$

Similarity

First created in February 2019

Basics

Given two matrices $A$ and $B$, if there is an invertible matrix $S$ such that $\boxed{A=SBS^{-1}}$, we say that $A$ and $B$ are similar.

It follows that $AS=SB$: applying $B$ then $S$ gives the same result as applying $S$ then $A$.


Note: $AS=SB$ leads to $A=SBS^{-1}$ only when $S$ is invertible. Conversely, $AS=SB$ alone does not force $S$ to be invertible (e.g. $S=0$ satisfies it for any $A$ and $B$), so the invertibility of $S$ must be part of the definition of similarity. $A$ itself does not need to be invertible.


Basic laws:

  • If $A\sim B$, $A=SBS^{-1},$ $B=S^{-1}AS=(S^{-1})A(S^{-1})^{-1}$, then $B\sim A$ (symmetric law).

  • If $A\sim B$ and $B\sim C$, $AS_1=S_1B$, $BS_2=S_2C$, $A(S_1S_2)=S_1BS_2=S_1S_2C=(S_1S_2)C$, then $A\sim C$ (transitive law).

  • Because $A=IAI^{-1}$, $A\sim A$ (reflexive law).
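These three laws make similarity an equivalence relation. As a sanity check, here is a minimal numpy sketch of the definition and the symmetric law (the matrices are arbitrary random examples; a random $S$ is assumed invertible, which holds with probability 1):

    import numpy as np

    rng = np.random.default_rng(0)
    B = rng.standard_normal((3, 3))
    S = rng.standard_normal((3, 3))   # random S: assumed invertible (true generically)
    A = S @ B @ np.linalg.inv(S)      # A is similar to B by construction

    # Definition rearranged: AS = SB.
    assert np.allclose(A @ S, S @ B)

    # Symmetric law: B = S^{-1} A S.
    assert np.allclose(B, np.linalg.inv(S) @ A @ S)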

Shared Properties

For real numbers, $as=sb$ (with $s\neq0$) means $a=b$. We cannot say the same for matrices, as matrix multiplication is not commutative, but $AS=SB$ hints that $A$ and $B$, while not equal, share some invariants.

e.g. For similar matrices,

  • Their determinants are equal (product of eigenvalues, see Eigenvalues).

  • Their characteristic polynomials are the same.

  • Their eigenvalues are the same (with eigenvectors related by $\mathbf{v}_A=S\mathbf{v}_B$, so different eigenspaces).

  • They are both invertible or both not.

  • They are both diagonalisable or both not.

The above properties are interrelated and can be derived from one another.
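Here is a quick numpy sketch verifying the first three invariants on a random example (the matrices are illustrative, not from any particular application):

    import numpy as np

    rng = np.random.default_rng(1)
    B = rng.standard_normal((4, 4))
    S = rng.standard_normal((4, 4))
    A = S @ B @ np.linalg.inv(S)

    # Same determinant.
    assert np.isclose(np.linalg.det(A), np.linalg.det(B))

    # Same characteristic polynomial (np.poly of a matrix returns its coefficients).
    assert np.allclose(np.poly(A), np.poly(B))

    # Same multiset of eigenvalues (compare after sorting).
    assert np.allclose(np.sort_complex(np.linalg.eigvals(A)),
                       np.sort_complex(np.linalg.eigvals(B)))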

Determinant

From the definition of similarity, $A=SBS^{-1}$.

$\det(A)=\det(SBS^{-1}) =\det(S)\det(B)\det(S^{-1}) =\det(S)\det(B){1\over\det(S)} =\det(B) .$

$\boxed{\text{Similar matrices have the same determinant}.}$

Characteristic Polynomial

The characteristic polynomial of $A$:

$\det(\lambda I-A) =\det\left(\lambda SS^{-1}-SBS^{-1}\right) =\det\left(S(\lambda I-B)S^{-1}\right) =\det(S)\det(\lambda I-B)\det(S^{-1}) =\det(\lambda I-B).$

$\boxed{\text{Similar matrices have the same characteristic polynomial}.}$

Eigenvalues

As derived above, $\det(\lambda I-A)=\det(\lambda I-B)$: similar matrices have the same characteristic polynomial.

Eigenvalues are the zeros of the characteristic polynomial, i.e. $\boxed{\text{Similar matrices have the same set of eigenvalues}.}$

Eigenvectors

If $\mathbf{v}_B$ is an eigenvector of $B$ with $B\mathbf{v}_B=\lambda\mathbf{v}_B$, then $AS\mathbf{v}_B=SB\mathbf{v}_B=\lambda S\mathbf{v}_B$, i.e. $A(S\mathbf{v}_B)=\lambda(S\mathbf{v}_B).$

i.e. $\boxed{\text{If }\mathbf{v}_B\text{ is an eigenvector of }B,~S\mathbf{v}_B\text{ is an eigenvector of }A.~}$

For a diagonalisable $B$ with $\{\mathbf{v}_B\}$ as its eigenbasis (full rank alone does not guarantee an eigenbasis), $SBS^{-1}$ converts it into a new matrix $A$ with $\{S\mathbf{v}_B\}$ as its eigenbasis.
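A small numpy check of the eigenvector mapping, using a diagonal $B$ so that an eigenpair $(\lambda,\mathbf{v}_B)$ is known exactly (the numbers are arbitrary):

    import numpy as np

    rng = np.random.default_rng(2)
    B = np.diag([1.0, 2.0, 3.0])             # eigenpairs of B are known exactly
    S = rng.standard_normal((3, 3))
    A = S @ B @ np.linalg.inv(S)

    lam = 2.0
    v_B = np.array([0.0, 1.0, 0.0])          # B @ v_B = lam * v_B
    v_A = S @ v_B                            # claimed eigenvector of A
    assert np.allclose(A @ v_A, lam * v_A)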

Change of Basis

Let us consider $S$ as a transformation taking the basis vectors of $B$ to those of $A$. (The basis need not be an eigenbasis.)

Now $S^{-1}AS\Ket\psi$ means: transform $\Ket\psi$ from the basis of $B$ into the basis of $A$, apply $A$, then reverse-transform the result back to the basis of $B$. The net effect is $B\Ket\psi$.


Example: To implement $cZ$ using $cX$, you may transform the basis of $Z~~\{\Ket0,\Ket1\}$ into the basis of $X~~\{\Ket+,\Ket-\}$ by applying $H$ on the target qubit, perform the $cX$ operation, then transform back to $\{\Ket0,\Ket1\}$ by $H^{-1}=H$. In short (with $H$ understood to act on the target qubit), $cZ=H~cX~H.$
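This identity can be checked with numpy. Note that the conjugating matrix is $I\otimes H$, assuming the control is the first qubit and the target the second:

    import numpy as np

    I = np.eye(2)
    H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
    CX = np.array([[1, 0, 0, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 1, 0]])             # control on qubit 0, target on qubit 1
    CZ = np.diag([1, 1, 1, -1])

    # Change basis on the target qubit, apply cX, change back (H is its own inverse).
    IH = np.kron(I, H)
    assert np.allclose(IH @ CX @ IH, CZ)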

Diagonalisation

Let $P=\big[\mathbf{v}_1\mathbf{v}_2\ldots\mathbf{v}_n\big]$, where $\mathbf{v}_r$ are eigenvectors of $A$. Also let $D$ be a diagonal matrix with the eigenvalues of $A$ on its diagonal, in the same order as $\mathbf{v}_r$.

As $A\mathbf{v}_r=\lambda_r\mathbf{v}_r,~~ A\big[\mathbf{v}_1\mathbf{v}_2\ldots\mathbf{v}_n\big] =\big[A\mathbf{v}_1~A\mathbf{v}_2\ldots A\mathbf{v}_n\big] =\big[(\lambda_1\mathbf{v}_1)(\lambda_2\mathbf{v}_2)\ldots(\lambda_n\mathbf{v}_n)\big] =\big[\mathbf{v}_1\mathbf{v}_2\ldots\mathbf{v}_n\big]D .$ We have $AP=PD$; provided $P$ is invertible (i.e. the $n$ eigenvectors are linearly independent), $A=PDP^{-1}$, so $A$ and $D$ are similar.
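A quick numpy check of $AP=PD$; the $2\times2$ matrix below is an arbitrary example with distinct eigenvalues, so $P$ is guaranteed to be invertible:

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])               # eigenvalues 5 and 2

    w, P = np.linalg.eig(A)                  # columns of P are eigenvectors
    D = np.diag(w)

    assert np.allclose(A @ P, P @ D)                 # A P = P D
    assert np.allclose(A, P @ D @ np.linalg.inv(P))  # A = P D P^{-1}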


If a matrix $A$ is similar to a diagonal matrix $D$, then $A$ is said to be diagonalisable.

Let $D= \begin{bmatrix} d_1 & 0 & \ldots & 0 \\ 0 & d_2 & \ldots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \ldots & d_n \end{bmatrix}.$

Two observations:

  • All $d_r$ ($r=1,2,\ldots,n$) are eigenvalues of $D$, as $\det(d_r I-D)=\prod_{k=1}^n(d_r-d_k)=0$.

  • The product of the eigenvalues on the diagonal is the determinant of $D$. i.e. $\det(D)=\prod_{k=1}^nd_k.$


The diagonal matrix $D$ that $A$ is similar to has its determinant equal to the product of all its eigenvalues. As $A$ shares the same eigenvalues and the same determinant with $D$, it immediately follows that the $\boxed{\text{product of eigenvalues of a matrix is its determinant}.}$ (This derivation assumes $A$ is diagonalisable; the result holds for all square matrices, e.g. via the constant term of the characteristic polynomial.)
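A one-line numerical check on a random example (numpy computes the eigenvalues whether or not the matrix is diagonalisable):

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((5, 5))
    assert np.isclose(np.prod(np.linalg.eigvals(A)), np.linalg.det(A))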


By rearranging the order of $\mathbf{v}_r$, we will have $D'$, with the same set of eigenvalues as $D$ but in different order.

Let $P'$ be the eigenvector matrix rearranged. By the same process, we will find $AP'=P'D'$ so $A$ and $D'$ are similar.

By the transitive law, $D$ and $D'$ are similar. $D'=P'^{-1}AP'=(P'^{-1}P)~D~(P^{-1}P').$


This means all matrices that are similar to each other form an equivalence class; restricted to invertible matrices, such a class is a "conjugacy class" of the group $GL(n)$. Each matrix belongs to precisely one class.
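The reordering argument can be verified numerically; here is a sketch reusing the arbitrary $2\times2$ example from above:

    import numpy as np

    A = np.array([[4.0, 1.0],
                  [2.0, 3.0]])
    w, P = np.linalg.eig(A)

    perm = [1, 0]                            # swap the order of the eigenpairs
    P2, D2 = P[:, perm], np.diag(w[perm])

    assert np.allclose(A @ P2, P2 @ D2)      # A P' = P' D'

    # D and D' are similar: D = (P^{-1} P') D' (P^{-1} P')^{-1}.
    M = np.linalg.inv(P) @ P2
    assert np.allclose(np.diag(w), M @ D2 @ np.linalg.inv(M))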

Degeneracy


There are several related concepts for linear maps with an $n$-dimensional domain (e.g. an $n\times n$ square matrix $A$). In the following, the vector $\mathbf{v}$ lives in the $n$-dimensional domain.

  • Rank - the dimension of the space spanned by the $n$ column vectors of $A$. In other words, it is the dimension of the image of $A$ (the space of all possible $A\mathbf v$). When the rank is $n$, we call $A$ "full rank".

  • Nullity - the dimension of the space that $A$ maps to zero. In other words, it is the dimension of the "kernel" of $A$ (the space of $\mathbf v$ such that $A\mathbf v=0$). A "full rank" $A$ has only the zero vector in its kernel.

  • The sum of rank and nullity is $n$, the dimension of the domain - the Rank-Nullity Theorem.


To visualise the above, imagine a real $3\times3$ projector matrix mapping any 3-D vector onto a 2-D plane. The image is a 2-D plane, so the rank is 2. The kernel is the line through the origin perpendicular to the image plane, so the nullity is 1. $2+1=3$, which is the dimension of the domain.
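A minimal numpy illustration of this projector example:

    import numpy as np

    # Orthogonal projector onto the xy-plane: sends (x, y, z) to (x, y, 0).
    proj = np.diag([1.0, 1.0, 0.0])

    rank = np.linalg.matrix_rank(proj)       # dimension of the image
    nullity = 3 - rank                       # rank-nullity theorem
    assert (rank, nullity) == (2, 1)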


The number of non-zero entries on the diagonal of a diagonal matrix is obviously its rank.


For $A$, if there is a $B$ such that $AB=BA=I_n$, $A$ is said to be invertible (also called non-singular or non-degenerate).

The following are equivalent (each is a necessary and sufficient condition for the others):

  1. $A$ is invertible.

  2. $\det(A)\ne 0$.

  3. $A$ is full rank.

$(1)\Rightarrow(2)$: $\det(AB)=\det(A)\det(B)=\det(I)=1$, so $\det(A)\ne 0$. $(2)\Rightarrow(1)$: resort to a well-known construction of $A^{-1}$, such as the adjugate formula or the Cayley–Hamilton theorem. $(2)\Leftrightarrow(3)$: $\det(A)\ne 0$ exactly when the columns of $A$ are linearly independent, i.e. when the rank is $n$.
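A small numpy sketch showing conditions (2) and (3) failing together on a hypothetical rank-deficient matrix:

    import numpy as np

    A = np.array([[1.0, 2.0],
                  [2.0, 4.0]])               # second column = 2 x first: rank 1

    assert np.isclose(np.linalg.det(A), 0.0)         # (2) fails
    assert np.linalg.matrix_rank(A) == 1             # (3) fails: not full rank
    # (1) fails too: np.linalg.inv(A) raises LinAlgError ("Singular matrix").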

Note, however, that a full rank $A$ is not necessarily similar to a diagonal matrix: the shear $\begin{bmatrix}1&1\\0&1\end{bmatrix}$ is invertible but not diagonalisable. When $A$ is diagonalisable, full rank does mean the corresponding $D$ has $n$ non-zero eigenvalues.


A matrix being degenerate means that it is not invertible, while eigenvalues being degenerate means that two or more of them are equal.

Diagonal entries   Matrix degenerate?   Eigenvalues degenerate?
1, 2, 3            No                   No
1, 2, 0            Yes                  No
1, 2, 2            No                   Yes
2, 2, 0            Yes                  Yes

Degeneracy is related to rank and nullity, but it is not the same concept as eigenvalue degeneracy: a diagonal matrix with 1, 2, 2 on the diagonal is full rank and non-degenerate as a matrix, even though its eigenvalues are degenerate.

Not all matrices are diagonalisable. A matrix is diagonalisable exactly when it has $n$ linearly independent eigenvectors (repeated eigenvalues are fine; repeated eigenvectors are not), i.e. exactly when $P$ is invertible. $A$ itself does not need to be invertible.

Repeated eigenvalues do not by themselves make $D$ (or $A$) non-invertible; only a zero eigenvalue does. A matrix with repeated eigenvalues may still have $n$ mutually independent eigenvectors (e.g. the identity matrix), in which case $P^{-1}$ exists and $A$ is diagonalisable (i.e. similar to $D$).
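Both points can be confirmed numerically; the shear matrix below is the standard counterexample mentioned earlier:

    import numpy as np

    # diag(1, 2, 2): degenerate eigenvalues, yet full rank and (trivially) diagonalisable.
    D = np.diag([1.0, 2.0, 2.0])
    assert np.linalg.matrix_rank(D) == 3

    # The shear [[1, 1], [0, 1]] is invertible but not diagonalisable:
    # numpy returns two (numerically) parallel eigenvectors, so P is singular.
    shear = np.array([[1.0, 1.0],
                      [0.0, 1.0]])
    w, P = np.linalg.eig(shear)
    assert np.allclose(w, [1.0, 1.0])        # repeated eigenvalue
    assert np.linalg.matrix_rank(P) == 1     # no eigenbasis: P is not invertible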

 
