
$\require{cancel} \newcommand{\Ket}[1]{\left|{#1}\right\rangle} \newcommand{\Bra}[1]{\left\langle{#1}\right|} \newcommand{\Braket}[1]{\left\langle{#1}\right\rangle} \newcommand{\Rsr}[1]{\frac{1}{\sqrt{#1}}} \newcommand{\RSR}[1]{1/\sqrt{#1}} \newcommand{\Verti}{\rvert} \newcommand{\HAT}[1]{\hat{\,#1~}} \DeclareMathOperator{\Tr}{Tr}$

Trace

First created in February 2019

Trace is the sum of the diagonal elements of a square matrix. Trace is useful in the description of mixed states: $\Tr(\rho)=1$ for any density matrix $\rho$, while $\Tr(\rho^2)=1$ for a pure state and $\Tr(\rho^2)<1$ for a mixed state.
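As a quick numerical illustration (a sketch using NumPy, not part of the original notes), the snippet below builds a pure and a mixed single-qubit density matrix and prints $\Tr(\rho)$ and $\Tr(\rho^2)$:

```python
# Sketch: compare Tr(rho) and the purity Tr(rho^2) for a pure and a mixed qubit state.
import numpy as np

ket0 = np.array([1, 0], dtype=complex)
ket_plus = np.array([1, 1], dtype=complex) / np.sqrt(2)

rho_pure = np.outer(ket_plus, ket_plus.conj())            # |+><+|
rho_mixed = 0.5 * np.outer(ket0, ket0.conj()) + 0.5 * rho_pure

for name, rho in [("pure", rho_pure), ("mixed", rho_mixed)]:
    print(name, np.trace(rho).real, np.trace(rho @ rho).real)
# pure  1.0 1.0
# mixed 1.0 0.75
```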

Overview

Let $A=[a_{ij}],~\text{where }i,j=1,2,\ldots,n.~~\Tr(A)=\sum_i a_{ii}~.$

Linear mapping:

$\Tr(A+B)=\Tr(A)+\Tr(B),$

$\Tr(cA)=c\Tr(A)~,~~$where $c$ is a scalar.


$\Tr(A)=\Tr(A^T).$

$\Tr(A^TB)=\Tr(AB^T)=\sum_{i,j}a_{ij}b_{ij}.~~ \text{Proof: }\Tr(A^TB) =\Tr\left(\left[\sum_{k}a_{ki}b_{kj}\right]\right) =\sum_i\sum_{k}a_{ki}b_{ki}.~~ \text{Symmetrically, }\Tr(AB^T) =\sum_i\sum_{k}a_{ik}b_{ik} .$
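A quick NumPy check of this identity (illustrative only; the random matrices are arbitrary):

```python
# Sketch: Tr(A^T B) equals the sum of elementwise products (the Frobenius inner product).
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

lhs = np.trace(A.T @ B)
rhs = np.sum(A * B)            # sum_{i,j} a_ij * b_ij
print(np.isclose(lhs, rhs))    # True
```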


$\Tr(AB)=\Tr(BA).~~ \text{Proof: }\Tr(AB) =\Tr\left(\sum_ka_{ik}b_{kj}\right) =\sum_i\sum_k a_{ik}b_{ki} =\sum_k\sum_i b_{ki}a_{ik} =\Tr(BA) .$

$\because\Tr(ABC)=\Tr(A(BC))=\Tr((BC)A)=\Tr(BCA),\\ \therefore\text{cyclic permutations yield equal trace:} \Tr(ABC)=\Tr(BCA)=\Tr(CAB).$
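Again as an illustrative NumPy sketch, the trace survives cyclic permutations but not an arbitrary reordering:

```python
# Sketch: the trace is invariant under cyclic permutations, but not arbitrary ones.
import numpy as np

rng = np.random.default_rng(1)
A, B, C = (rng.standard_normal((3, 3)) for _ in range(3))

t_abc = np.trace(A @ B @ C)
print(np.isclose(t_abc, np.trace(B @ C @ A)))   # True  (cyclic)
print(np.isclose(t_abc, np.trace(C @ A @ B)))   # True  (cyclic)
print(np.isclose(t_abc, np.trace(A @ C @ B)))   # generally False (not cyclic)
```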


$\Tr(A\otimes B)=\Tr(A)\Tr(B).$

Proof: $A\otimes B =\left(\sum_{i,j}a_{ij}\Ket i\Bra j\right)\otimes \left(\sum_{p,q}b_{pq}\Ket p\Bra q\right) =\sum_{i,j,p,q}a_{ij}b_{pq}\Ket{ip}\Bra{jq}.\\ \Tr(A\otimes B) =\Tr\left(\sum_{i,j,p,q}a_{ij}b_{pq}\Ket{ip}\Bra{jq}\right) =\sum_{i,p}a_{ii}b_{pp} =\left(\sum_ia_{ii}\right)\cdot\left(\sum_pb_{pp}\right) =\Tr(A)\Tr(B).$
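A NumPy sketch of the Kronecker-product identity (np.kron computes $A\otimes B$):

```python
# Sketch: Tr(A ⊗ B) = Tr(A) Tr(B), with the Kronecker product from np.kron.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))

print(np.isclose(np.trace(np.kron(A, B)), np.trace(A) * np.trace(B)))  # True
```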


Similar matrices have the same trace: $\Tr(P^{-1}AP)=\Tr(P^{-1}(AP))=\Tr((AP)P^{-1})=\Tr(A).$

With reference to Similarity: all matrices diagonalisable to the same diagonal matrix $D$ (up to a permutation of the same set of eigenvalues along the diagonal) form a class of mutually similar matrices, so it is not surprising that similar matrices have the same eigenvalues.


If $A=P^{-1}DP$, with $D$ carrying the eigenvalues $\lambda_r$ of $A$ on its diagonal, then $~A^2=P^{-1}DP~P^{-1}DP=P^{-1}D^2P~$ has eigenvalues $\lambda_r^2$.
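A NumPy sketch of both facts, assuming a generically invertible random $P$:

```python
# Sketch: similar matrices share the same trace, and A^2 has eigenvalues lambda^2.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((4, 4))
P = rng.standard_normal((4, 4))                 # generically invertible

similar = np.linalg.inv(P) @ A @ P
print(np.isclose(np.trace(A), np.trace(similar)))          # True

lam = np.linalg.eigvals(A)
lam_sq = np.linalg.eigvals(A @ A)
print(np.allclose(np.sort_complex(lam**2), np.sort_complex(lam_sq)))  # True
```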

Sum of Eigenvalues

Eigenvalues are zeros of $\det(\lambda I-A).$ Let $p(\lambda)=\det(\lambda I-A) =\prod_{k=1}^n(\lambda-\lambda_k) ,$ where $\lambda_k$ is an eigenvalue of $A$.

We can expand $p(\lambda) =\sum_{m=0}^nc_m\lambda^m .$ By observation, one can tell that $c_0=\prod_{k=1}^n(-\lambda_k)=(-1)^n\prod_{k=1}^n\lambda_k.$

At the same time, $c_0=p(0)=\det(-A)=(-1)^n\det(A).$ So we again proved that $\det(A)=\prod_{k=1}^n\lambda_k.$


But the most interesting part is $c_{n-1}=-\sum_{k=1}^n\lambda_k.$

The (Leibniz) definition of the determinant of a matrix $M$ is $\displaystyle\det(M)=\sum(-1)^r\prod_i M_{ij_i}$, summed over all permutations $j_i$ of the column indices, with $r=0$ for an even permutation and $r=1$ for an odd one.

In $\lambda I-A$, the diagonal entries are $\lambda-a_{ii}$. Any permutation other than the identity fixes at most $n-2$ indices (a permutation fixing $n-1$ indices must fix all $n$), so every other term contains at most $n-2$ diagonal factors involving $\lambda$.

So $p(\lambda)=\prod_{k=1}^n(\lambda-a_{kk})+q(\lambda),~$ where $q(\lambda)$ is of degree $n-2$.

That means the $\lambda^{n-1}$ term comes from the product, and therefore $c_{n-1}=-\sum_{k=1}^na_{kk}.$

Comparing with the previous finding, $\boxed{\Tr(A)=\sum_{k=1}^na_{kk}=\sum_{k=1}^n\lambda_k~}.$


Please note that this does not mean that eigenvalues are all on the diagonal. It only means that their sum is the same as the trace.

If the non-diagonal elements change, the eigenvalues will change, but their sum remains constant as long as the diagonal does not change (or changes with its sum kept invariant).
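A small NumPy sketch of these points (a hand-picked $2\times2$ example, not from the notes): the sum of the eigenvalues tracks the trace, the product tracks the determinant, and changing an off-diagonal entry moves the eigenvalues without moving their sum.

```python
# Sketch: the trace equals the sum of the eigenvalues (and the determinant their
# product); perturbing an off-diagonal entry moves the eigenvalues but not their sum.
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
B = A.copy()
B[1, 0] = 5.0                        # change an off-diagonal entry only

for M in (A, B):
    ev = np.linalg.eigvals(M)
    print(np.round(ev, 3),
          "sum =", np.round(ev.sum().real, 3), "trace =", np.trace(M),
          "product =", np.round(np.prod(ev).real, 3), "det =", np.round(np.linalg.det(M), 3))
# A: eigenvalues 2 and 3, sum 5 = trace, product 6 = det
# B: eigenvalues ~4.791 and ~0.209, sum still 5, product 1 = det
```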

Idempotent Matrices

Idempotent matrices ($A^2=A$) are singular (not full rank and therefore not invertible), except for $I$.

Proof: If an idempotent matrix $A$ is invertible, then $A=IA=A^{-1}AA=A^{-1}A=I.$


Lemma: For $A$ (idempotent or not) with an eigenvalue $\lambda$ and eigenvector $\mathbf v,~ \lambda^2$ is an eigenvalue of $A^2$ with the same eigenvector.

Proof: $A\mathbf v=\lambda\mathbf v.~~A^2\mathbf v=A\lambda\mathbf v=\lambda A\mathbf v=\lambda^2\mathbf v.$

To generalise, $\lambda^m$ is an eigenvalue of $A^m$. Therefore, $\displaystyle\Tr(A^m)=\sum_k\lambda_k^m.$


For an idempotent matrix, $A^2=A$, so each of its eigenvalues satisfies $\lambda^2=\lambda$ and can therefore only be $0$ or $1$.

From Similarity, we can find a matrix $D$ similar to $A$, but with all its eigenvalues on the diagonal and all other elements zero.

Given that the eigenvalues of $D$ are either $0$ or $1$ as well, the rank of $D$ is the sum of its eigenvalues. As $A$ has the same rank as $D$ and shares the same set of eigenvalues, $\Tr(A)$ (the sum of its eigenvalues) is its rank.
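As an illustration (a NumPy sketch, not from the notes), an orthogonal projection $P=X(X^TX)^{-1}X^T$ is idempotent, and its trace matches its rank:

```python
# Sketch: an orthogonal projection matrix is idempotent, and its trace equals its rank.
import numpy as np

rng = np.random.default_rng(5)
X = rng.standard_normal((6, 3))                  # generically, 3 independent columns

P = X @ np.linalg.inv(X.T @ X) @ X.T             # projection onto the column space of X

print(np.allclose(P @ P, P))                              # idempotent
print(round(np.trace(P)), np.linalg.matrix_rank(P))       # 3 3
```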


Note: an eigenvalue can be zero; the multiplicity of the zero eigenvalue is the nullity.


Matrix Exponential

For $\displaystyle e^A=\sum_{m=0}^\infty{1\over m!}A^m,~ \Tr(e^A) =\sum_{m=0}^\infty{1\over m!}\Tr(A^m) =\sum_{m=0}^\infty{1\over m!}\sum_k\lambda_k^m =\sum_k\sum_{m=0}^\infty{1\over m!}\lambda_k^m =\sum_ke^{\lambda_k} .$


$\det(e^A)=e^{\Tr(A)},$ since $\det(e^A)=\prod_ke^{\lambda_k}=e^{\sum_k\lambda_k}=e^{\Tr(A)}.$
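A numerical sketch of the last two identities, assuming SciPy's expm for the matrix exponential:

```python
# Sketch: Tr(e^A) = sum_k e^{lambda_k} and det(e^A) = e^{Tr(A)}.
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4))

expA = expm(A)
lam = np.linalg.eigvals(A)

print(np.isclose(np.trace(expA), np.sum(np.exp(lam)).real))   # True
print(np.isclose(np.linalg.det(expA), np.exp(np.trace(A))))   # True
```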

 
