<< Sum of Ket-Bra | Superoperator Examples >> |
$\require{cancel} \newcommand{\Ket}[1]{\left|{#1}\right\rangle} \newcommand{\Bra}[1]{\left\langle{#1}\right|} \newcommand{\Braket}[1]{\left\langle{#1}\right\rangle} \newcommand{\Rsr}[1]{\frac{1}{\sqrt{#1}}} \newcommand{\RSR}[1]{1/\sqrt{#1}} \newcommand{\Verti}{\rvert} \newcommand{\HAT}[1]{\hat{\,#1~}} \DeclareMathOperator{\Tr}{Tr}$
First created in August 2018
This text was originally based on a citation (possibly found via Wikipedia) that has since been lost. From a later search, the double bra-ket notation appears to be related to the superspace (superoperator) representation of quantum mechanics. Some newer references have been found:
Costin Tanase, "Triple Quantum Imaging of Sodium in Inhomogeneous Fields", 23 Nov 2004, Sec. 2.2 "Superspace representation of quantum mechanics".
Yoshihiro Nambu and Kazuo Nakamura, "On the Matrix Representation of Quantum Operations", 1 Feb 2008.
$\Ket{A\rangle} =\mathrm{cvec}(A) =\begin{bmatrix}\mathrm{col}(A,1)\\\vdots\\\mathrm{col}(A,N)\end{bmatrix}.~~~ \Bra{\langle A} =\left(\mathrm{cvec}(A)\right)^\dagger =\left(\mathrm{row}(A^\dagger,1)\ldots\mathrm{row}(A^\dagger,N)\right) .$
As illustrated in this text, $\Ket{A\rangle}$ and $\Bra{\langle A}$ will be formally defined using $\mathcal{F}_r$ and $\mathcal{F}_c$. (The paper's cvec is written simply as vec in this text.)
$\displaystyle \langle\Braket{A\Verti B}\rangle\equiv\Tr\left\{{A^\dagger B}\right\} =\sum_{k=1}^N\mathrm{row}\left(A^\dagger,k\right)\cdot\mathrm{col}\left(B,k\right) =\mathrm{cvec}(A)^\dagger\cdot\mathrm{cvec}(B) .$
For a matrix $A$, the left and right superoperators are defined as $\mathcal{L}(A)[\rho]:=A\rho$ and $\mathcal{R}(A)[\rho]:=\rho A$ respectively.
The commutator $[A,\rho]=\mathcal{L}(A)[\rho]-\mathcal{R}(A)[\rho]=A\rho-\rho A.$
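The definitions above can be sketched directly in code. The following NumPy snippet (an illustrative addition, not part of the original text; the matrices are arbitrary test values) implements $\mathcal{L}(A)$ and $\mathcal{R}(A)$ as higher-order functions and checks the commutator identity:

```python
import numpy as np

# Left and right superoperators as higher-order functions:
# L(A) applies A from the left, R(A) from the right.
def L(A):
    return lambda rho: A @ rho

def R(A):
    return lambda rho: rho @ A

A   = np.array([[0, 1], [1, 0]], dtype=complex)   # arbitrary test operator
rho = np.array([[1, 2], [3, 4]], dtype=complex)   # arbitrary test matrix

# [A, rho] = L(A)[rho] - R(A)[rho] = A rho - rho A
commutator = L(A)(rho) - R(A)(rho)
assert np.allclose(commutator, A @ rho - rho @ A)
```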
Here we use the general form of $\rho=\sum_{i,j}p_{ij}\Ket i\Bra j=[p_{ij}],$ where $p_{ij}\in\mathbb{R}$, and $\Ket i$ and $\Ket j$ are standard basis vectors.
Please note that in this context $\rho$ does not represent a mixed state, as the symbol usually does. It is simply a matrix written as a sum of outer products of standard basis vectors: the ket-bra form.
As the form implies, the two superoperators can be interpreted as applying $A$ respectively from the left to the ket only ($A\otimes I$) and from the right to the bra only ($I\otimes A^T$), as we will illustrate later.
Vectorisation is a "flattening" mapping from a matrix to a vector, by joining all columns in the matrix head to tail, into a "tall" column.
$\mathrm{vec}(A)=[a_{11},a_{21},\ldots,a_{n1},a_{12},\ldots,a_{nn}]^T,~$ where $A=[a_{ij}].$
From this we can define $\mathcal{F}_c(A):=\mathrm{vec}(A),~$ and $\mathcal{F}_r(A):=\left(\mathrm{vec}(A^T)\right)^T.$
A 2x2 example: To flatten to a column, $\mathcal{F}_c\left(\begin{bmatrix}a&b\\c&d\end{bmatrix}\right) =\begin{bmatrix}a\\c\\b\\d\end{bmatrix}.~~$ To flatten to a row, $\mathcal{F}_r\left(\begin{bmatrix}a&b\\c&d\end{bmatrix}\right) =[a,b,c,d].$
The two can be inter-converted: $\mathcal{F}_c(A)=\left(\mathcal{F}_r(A^T)\right)^T$ and $\mathcal{F}_r(A)=\left(\mathcal{F}_c(A^T)\right)^T.$
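In NumPy terms (an illustrative sketch, not from the original text), $\mathcal{F}_c$ is a column-major (Fortran-order) flatten and $\mathcal{F}_r$ is a row-major (C-order) flatten; the 2x2 example and the interconversion identities can be checked directly:

```python
import numpy as np

def F_c(A):
    """Column-major flatten into a tall column vector."""
    return A.flatten(order='F').reshape(-1, 1)

def F_r(A):
    """Row-major flatten into a long row vector."""
    return A.flatten(order='C').reshape(1, -1)

A = np.array([[1, 2],
              [3, 4]])
assert np.array_equal(F_c(A).ravel(), [1, 3, 2, 4])   # [a, c, b, d]
assert np.array_equal(F_r(A).ravel(), [1, 2, 3, 4])   # [a, b, c, d]

# Interconversion: F_c(A) = (F_r(A^T))^T and F_r(A) = (F_c(A^T))^T
assert np.array_equal(F_c(A), F_r(A.T).T)
assert np.array_equal(F_r(A), F_c(A.T).T)
```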
$\Ket{A\rangle} :=\left(\mathcal{F}_r\left(A\right)\right)^T =\mathrm{vec}(A^T) =[a_{11},a_{12},\ldots,a_{1n},a_{21},\ldots,a_{nn}]^T.$
$\Bra{\langle A} :=\left(\mathcal{F}_c\left(A^\dagger\right)\right)^T =\left(\mathrm{vec}\left(A^\dagger\right)\right)^T =[\overline{a}_{11},\overline{a}_{12},\ldots,\overline{a}_{1n},\overline{a}_{21},\ldots,\overline{a}_{nn}].$
$\Braket{\langle A\Verti A\rangle} =\sum_{i,j}\Verti a_{ij}\Verti^2 .~~~ \Braket{\langle\Lambda\Verti\Lambda\rangle} =\sum_k\Verti\lambda_k\Verti^2,~~~\text{where }\Lambda\text{ is a diagonal matrix with diagonal entries }\lambda_k .$
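The identity $\langle\Braket{A\Verti B}\rangle=\Tr\{A^\dagger B\}$ can be spot-checked numerically (illustrative NumPy with random complex test matrices, an addition to the original text):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

# <<A|B>> three ways: trace form, entrywise sum, column-major vec form.
trace_form = np.trace(A.conj().T @ B)
entry_form = np.sum(A.conj() * B)
vec_form   = A.flatten(order='F').conj() @ B.flatten(order='F')

assert np.allclose(trace_form, entry_form)
assert np.allclose(trace_form, vec_form)
```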
Let $P =\Ket\psi\Bra\phi =\begin{bmatrix}\psi_1\\\psi_2\\\vdots\\\psi_n\end{bmatrix} [\phi_1,\phi_2,\ldots,\phi_n] =\begin{bmatrix} \psi_1\phi_1&\psi_1\phi_2&\ldots&\psi_1\phi_n\\ \psi_2\phi_1&\psi_2\phi_2&\ldots&\psi_2\phi_n\\ \vdots&\vdots&\ldots&\vdots\\ \psi_n\phi_1&\psi_n\phi_2&\ldots&\psi_n\phi_n\\ \end{bmatrix} .$
$\Ket{P\rangle} =\left[\psi_1\phi_1,\psi_1\phi_2,\ldots,\psi_2\phi_1,\ldots,\psi_n\phi_n\right]^T =\Ket\psi\Ket\phi =\Ket{\psi\phi} .$
$\Bra{\langle P} =\left[\overline{\psi_1\phi_1},\overline{\psi_1\phi_2},\ldots,\overline{\psi_2\phi_1},\ldots,\overline{\psi_n\phi_n}\right] =\Bra\psi\Bra\phi =\Bra{\psi\phi} .$
$\Bra{\langle P}=\Ket{P\rangle}^\dagger .$
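For the outer product $P=\Ket\psi\Bra\phi$, the claim $\Ket{P\rangle}=\Ket\psi\Ket\phi$ is a Kronecker product in matrix terms; a brief NumPy check (illustrative values, added here for concreteness):

```python
import numpy as np

psi = np.array([1, 2j])     # arbitrary (unnormalised) component vectors
phi = np.array([3, 4])

P = np.outer(psi, phi)      # P = |psi><phi|, with phi as the bra's components

ketP = P.flatten(order='C') # row-major vec, i.e. |P>>
assert np.allclose(ketP, np.kron(psi, phi))                  # |P>> = |psi>|phi>
assert np.isclose(ketP.conj() @ ketP, np.sum(np.abs(P)**2))  # <<P|P>>
```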
$\boxed{\Ket{AP\rangle}=(A\otimes I)\Ket{P\rangle}.}~~$ Generally, $\Ket{\Pi_k A_kP\rangle}=\Pi_k(A_k\otimes I)\Ket{P\rangle}.$
When expressed in vectorised form, $A$ operating on $P$ from the left has affected the ket only: $\Ket{AP\rangle}=(A\otimes I)\Ket{\psi\phi}=\Ket{A\psi}\Ket\phi.$
Proof:
$\mathrm{LHS} =\Ket{AP\rangle} =\Ket{A(\Ket\psi\Bra\phi)\rangle} =\Ket{(A\Ket\psi)\Bra\phi\rangle} =\left[ \sum_k(a_{1k}\psi_k)\phi_1, \sum_k(a_{1k}\psi_k)\phi_2, \ldots, \sum_k(a_{2k}\psi_k)\phi_1, \ldots, \sum_k(a_{nk}\psi_k)\phi_n \right]^T .$
$\mathrm{RHS} =(A\otimes I)\Ket{P\rangle} =\begin{bmatrix} a_{11}I&a_{12}I&\ldots&a_{1n}I\\ a_{21}I&a_{22}I&\ldots&a_{2n}I\\ \vdots&\vdots&\ddots&\vdots\\ a_{n1}I&a_{n2}I&\ldots&a_{nn}I\\ \end{bmatrix} \begin{bmatrix}\psi_1\Ket\phi\\\psi_2\Ket\phi\\\vdots\\\psi_n\Ket\phi\end{bmatrix} =\left[ \sum_k a_{1k}(\psi_k\phi_1), \sum_k a_{1k}(\psi_k\phi_2), \ldots, \sum_k a_{2k}(\psi_k\phi_1), \ldots, \sum_k a_{nk}(\psi_k\phi_n) \right]^T .$
$\therefore~~\mathrm{LHS=RHS.~~~Q.E.D.}$
A 2x2 example:
$\mathrm{LHS} =\Ket{AP\rangle} =\Ket{ \begin{bmatrix} a_{11}&a_{12}\\ a_{21}&a_{22} \end{bmatrix} \begin{bmatrix}\psi_1\phi_1&\psi_1\phi_2\\\psi_2\phi_1&\psi_2\phi_2\end{bmatrix} {\large\rangle}} =\Ket{ \begin{bmatrix} a_{11}\psi_1\phi_1+a_{12}\psi_2\phi_1& a_{11}\psi_1\phi_2+a_{12}\psi_2\phi_2\\ a_{21}\psi_1\phi_1+a_{22}\psi_2\phi_1& a_{21}\psi_1\phi_2+a_{22}\psi_2\phi_2\\ \end{bmatrix} {\large\rangle}} =\begin{bmatrix} a_{11}\psi_1\phi_1+a_{12}\psi_2\phi_1\\ a_{11}\psi_1\phi_2+a_{12}\psi_2\phi_2\\ a_{21}\psi_1\phi_1+a_{22}\psi_2\phi_1\\ a_{21}\psi_1\phi_2+a_{22}\psi_2\phi_2\\ \end{bmatrix} .$
$\mathrm{RHS} =(A\otimes I)\Ket{P\rangle} =\begin{bmatrix} a_{11}&0&a_{12}&0\\ 0&a_{11}&0&a_{12}\\ a_{21}&0&a_{22}&0\\ 0&a_{21}&0&a_{22}\\ \end{bmatrix} \begin{bmatrix}\psi_1\phi_1\\\psi_1\phi_2\\~\psi_2\phi_1\\~\psi_2\phi_2\end{bmatrix} =\begin{bmatrix} a_{11}\psi_1\phi_1+a_{12}\psi_2\phi_1\\ a_{11}\psi_1\phi_2+a_{12}\psi_2\phi_2\\ a_{21}\psi_1\phi_1+a_{22}\psi_2\phi_1\\ a_{21}\psi_1\phi_2+a_{22}\psi_2\phi_2\\ \end{bmatrix} =\mathrm{LHS} .$
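Beyond the 2x2 case, the boxed identity $\Ket{AP\rangle}=(A\otimes I)\Ket{P\rangle}$ can be spot-checked for random complex matrices (illustrative NumPy, added to the original text):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
P = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

vec_r = lambda M: M.flatten(order='C')   # row-major vec, i.e. |M>>

# |AP>> = (A kron I) |P>>
assert np.allclose(vec_r(A @ P), np.kron(A, np.eye(n)) @ vec_r(P))
```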
$\boxed{\Bra{\langle PA}=\Bra{\langle P}(I\otimes A^T)^\dagger.}~~$ Generally, $\Bra{\langle P~\Pi_k A_k}=\Bra{\langle P}\Pi_k(I\otimes A^*_k),~$ since $(I\otimes A^T)^\dagger=I\otimes A^*.$
When expressed in vectorised form, $A$ operating on $P$ from the right affects the bra only: $\Bra{\langle PA}=\Bra{\psi\phi}(I\otimes A^T)^\dagger=\Bra\psi\Bra{A^T\phi}.$
Proof: $\mathrm{LHS} =\Bra{\langle PA} =\Bra{\langle(\Ket\psi\Bra\phi)A} =\Bra{\langle\Ket\psi(\Bra\phi A)} =\left[ \overline{\psi_1\sum_k\phi_k a_{k1}}, \overline{\psi_1\sum_k\phi_k a_{k2}}, \ldots, \overline{\psi_2\sum_k\phi_k a_{k1}}, \ldots, \overline{\psi_n\sum_k\phi_k a_{kn}}\right] .$
$\mathrm{RHS} =\Bra{\langle P}(I\otimes A^T)^\dagger =\left[\overline{\psi_1\Bra\phi},\overline{\psi_2\Bra\phi},\ldots,\overline{\psi_n\Bra\phi}\right] \begin{bmatrix} A^*&0&\ldots&0\\ 0&A^*&\ldots&0\\ \vdots&\vdots&\ddots&\vdots\\ 0&0&\ldots&A^*\\ \end{bmatrix} =\left[ \overline{\psi_1\sum_k\phi_k a_{k1}}, \overline{\psi_1\sum_k\phi_k a_{k2}}, \ldots, \overline{\psi_2\sum_k\phi_k a_{k1}}, \ldots, \overline{\psi_n\sum_k\phi_k a_{kn}} \right] .$
$\therefore~~\mathrm{LHS=RHS.~~~Q.E.D.}$
A 2x2 example:
$\mathrm{LHS} =\Bra{\langle PA} =\Bra{{\large\langle} \begin{bmatrix} \psi_1\phi_1&\psi_1\phi_2\\\psi_2\phi_1&\psi_2\phi_2\end{bmatrix} \begin{bmatrix} a_{11}&a_{12}\\ a_{21}&a_{22} \end{bmatrix}} =\Bra{{\large\langle} \begin{bmatrix} a_{11}\psi_1\phi_1+a_{21}\psi_1\phi_2& a_{12}\psi_1\phi_1+a_{22}\psi_1\phi_2\\ a_{11}\psi_2\phi_1+a_{21}\psi_2\phi_2& a_{12}\psi_2\phi_1+a_{22}\psi_2\phi_2\\ \end{bmatrix}}\\ =[\overline{a_{11}\psi_1\phi_1+a_{21}\psi_1\phi_2}, \overline{a_{12}\psi_1\phi_1+a_{22}\psi_1\phi_2}, \overline{a_{11}\psi_2\phi_1+a_{21}\psi_2\phi_2}, \overline{a_{12}\psi_2\phi_1+a_{22}\psi_2\phi_2}] .$
$\mathrm{RHS} =\Bra{\langle P}(I\otimes A^T)^\dagger =[\overline{\psi_1\phi_1},\overline{\psi_1\phi_2},\overline{\psi_2\phi_1},\overline{\psi_2\phi_2}] \begin{bmatrix} \bar{a}_{11}&\bar{a}_{12}&0&0\\ \bar{a}_{21}&\bar{a}_{22}&0&0\\ 0&0&\bar{a}_{11}&\bar{a}_{12}\\ 0&0&\bar{a}_{21}&\bar{a}_{22}\\ \end{bmatrix}\\ =[\overline{a_{11}\psi_1\phi_1+a_{21}\psi_1\phi_2}, \overline{a_{12}\psi_1\phi_1+a_{22}\psi_1\phi_2}, \overline{a_{11}\psi_2\phi_1+a_{21}\psi_2\phi_2}, \overline{a_{12}\psi_2\phi_1+a_{22}\psi_2\phi_2}] =\mathrm{LHS} .$
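Similarly, the right-multiplication identity $\Bra{\langle PA}=\Bra{\langle P}(I\otimes A^T)^\dagger$ can be spot-checked; note $(I\otimes A^T)^\dagger=I\otimes A^*$ (illustrative NumPy, added to the original text):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 3
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
P = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

bra = lambda M: M.flatten(order='C').conj()   # <<M| = |M>>^dagger

# <<PA| = <<P| (I kron A^T)^dagger = <<P| (I kron conj(A))
lhs = bra(P @ A)
assert np.allclose(lhs, bra(P) @ np.kron(np.eye(n), A.T).conj().T)
assert np.allclose(lhs, bra(P) @ np.kron(np.eye(n), A.conj()))
```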
To recap:
$\rho=\sum_{i,j}p_{ij}\Ket i\Bra j=[p_{ij}].~~$ So, $\Ket{\rho\rangle}=\sum_{i,j}p_{ij}\Ket{ij}$ and $\Bra{\langle\rho}=\sum_{i,j}p_{ij}\Bra{ij}$.
$\Bra{\langle\rho}=\Ket{\rho\rangle}^\dagger.~~ \Bra{\langle\rho A}=\Bra{\langle\rho}(I\otimes A^T)^\dagger.~~ \Ket{A\rho\rangle}=(A\otimes I)\Ket{\rho\rangle} .$
$\mathcal{L}(A)[\rho] :=A\rho =\sum_{i,j}p_{ij}(A\Ket i)\Bra j .$
$\Ket{\mathcal{L}(A)[\rho]\rangle} =\Ket{A\rho\rangle} =(A\otimes I)\Ket{\rho\rangle} =\sum_{i,j}p_{ij}(A\otimes I)\Ket i\Ket j .$
$\mathcal{R}(A)[\rho] :=\rho A =\sum_{i,j}p_{ij}\Ket i(\Bra j A) .$
$\Bra{\langle\mathcal{R}(A)[\rho]} =\Bra{\langle\rho A} =\Bra{\langle\rho}(I\otimes A^T)^\dagger =\sum_{i,j}p_{ij}\Bra i\Bra j(I\otimes A^T)^\dagger .$
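In vectorised form the two superoperators thus become ordinary matrices, $A\otimes I$ on the ket side and $(I\otimes A^T)^\dagger$ on the bra side; a combined check of the recap (illustrative NumPy, an addition to the original text):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A   = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
rho = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

vec_r = lambda M: M.flatten(order='C')   # |M>>
bra   = lambda M: vec_r(M).conj()        # <<M|

# |L(A)[rho]>> = (A kron I)|rho>>
assert np.allclose(vec_r(A @ rho), np.kron(A, np.eye(n)) @ vec_r(rho))
# <<R(A)[rho]| = <<rho|(I kron A^T)^dagger
assert np.allclose(bra(rho @ A), bra(rho) @ np.kron(np.eye(n), A.T).conj().T)
```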
The left and right superoperators are simple enough, just a shorthand, and almost too general to be useful on their own. It is their pattern of applying from the left and from the right that inspires some useful ideas.
Let us not restrict $\rho$ to a density matrix, i.e. a sum of projectors $\rho=\sum_k P_k$ where $P_k=\Ket{\psi_k}\Bra{\psi_k}$. Instead, we will explore the $M\rho$ and $\rho N$ patterns without requiring the operators on the two sides to be the same.
For $P=\Ket\psi\Bra\psi$, since $(A\Ket\psi)^\dagger=\Bra\psi A^\dagger$, we have $APA^\dagger=(A\Ket\psi)(\Bra\psi A^\dagger)$, which is again a projector (up to normalisation). If $A$ is Hermitian, $A^\dagger=A$ and $APA^\dagger=APA$.
For a non-projector $\xi=\Ket\psi\Bra\phi$ operated on by the same Hermitian $A$, $A\xi A=(A\Ket\psi)(\Bra\phi A).$ As $\xi$ is not a projector, it has more degrees of freedom, and should be interpreted as the two-part state $\Ket{\psi\phi}$.
The vectorisation is in fact taking $\Ket{\xi\rangle}=\Ket{\psi\phi}$, so applying $A$ to the left means $(A\otimes I)\Ket{\psi\phi}$, which is $\Ket{\mathcal{L}(A)[\xi]\rangle}$.
Likewise, $\Bra{\langle\xi}=\Bra{\psi\phi}$, and $A$ on the right means $\Bra{\psi\phi}(I\otimes A^T)^\dagger$, which is $\Bra{\langle\mathcal{R}(A)[\xi]}$.
The transpose on $A$ is there to make the row-major vectorisation come out right. Note that $(A^T)^\dagger=A^*$; if $A$ is Hermitian, $A^*=A^T$, so the above is equivalent to $\Bra{\psi\phi}(I\otimes A^T)$.
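The pattern of this section can be verified end to end: with a Hermitian test matrix $A$ and $\xi=\Ket\psi\Bra\phi$, applying $A$ from the left and from the right acts on the two tensor factors separately, and the sandwich $A\xi A$ combines both (illustrative NumPy; the specific matrix and vectors are arbitrary additions):

```python
import numpy as np

A = np.array([[2, 1 - 1j],
              [1 + 1j, 3]])       # a Hermitian test matrix: A == A^dagger
psi = np.array([1, 2j])
phi = np.array([3, 1])
xi  = np.outer(psi, phi)          # xi = |psi><phi|, not a projector

vec_r = lambda M: M.flatten(order='C')   # |M>>
I = np.eye(2)

# Left application touches the first tensor factor only:
assert np.allclose(vec_r(A @ xi), np.kron(A, I) @ vec_r(xi))
# For Hermitian A, (I kron A^T)^dagger = I kron A^T, so <<xi A| = <<xi|(I kron A^T):
assert np.allclose(vec_r(xi @ A).conj(), vec_r(xi).conj() @ np.kron(I, A.T))
# The sandwich A xi A combines the two:
assert np.allclose(vec_r(A @ xi @ A), np.kron(A, A.T) @ vec_r(xi))
```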
<< Sum of Ket-Bra | Top | Superoperator Examples >> |