<< Superdense Coding | CHSH Inequality >> |
$\require{cancel} \newcommand{\Ket}[1]{\left|{#1}\right\rangle} \newcommand{\Bra}[1]{\left\langle{#1}\right|} \newcommand{\Braket}[1]{\left\langle{#1}\right\rangle} \newcommand{\Rsr}[1]{\frac{1}{\sqrt{#1}}} \newcommand{\RSR}[1]{1/\sqrt{#1}} \newcommand{\Verti}{\rvert} \newcommand{\HAT}[1]{\hat{\,#1~}} \DeclareMathOperator{\Tr}{Tr}$
First created in May 2019
A measurement on a quantum system yields a result based on a probability given by the Born Rule.
The wave function $\psi(x)\in\mathbb C$ as described by the Schrödinger equation represents the probability amplitude, whose norm squared is the probability density (or the probability in the discrete case) of finding the particle at that point.
In one-dimensional space, the probability of finding the point particle at position $x_i$ is $\displaystyle \int_{x_i-\epsilon}^{x_i+\epsilon}\big|\psi(x)\big|^2dx,$ with the normalisation condition $\displaystyle \int_{-\infty}^{+\infty}\big|\psi(x)\big|^2dx=1 .$
Therefore, the probability amplitude is $\psi(x_i)$ and the probability density is $\big|\psi(x_i)\big|^2$.
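As a quick numerical sketch of the above (the Gaussian wave function, the grid, and the interval are illustrative choices, not from the text), a discretised $\psi$ can be checked against the normalisation condition, and the probability of detection near a point computed by summing $\big|\psi(x)\big|^2\,dx$:

```python
import numpy as np

# Illustrative sketch: a normalised Gaussian wave function on a grid.
x = np.linspace(-10.0, 10.0, 100001)
dx = x[1] - x[0]
psi = np.pi ** -0.25 * np.exp(-x**2 / 2)       # |psi|^2 integrates to 1

# Normalisation condition: integral of |psi(x)|^2 over all x equals 1.
total = np.sum(np.abs(psi) ** 2) * dx

# Probability of finding the particle in [x_i - eps, x_i + eps], x_i = 0.
eps = 0.1
mask = np.abs(x) <= eps
p = np.sum(np.abs(psi[mask]) ** 2) * dx
```

The summed probability `p` is a small positive number bounded by the total normalisation of 1.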
In Dirac notation, $\Ket\psi\equiv\psi(\bullet)\in\mathbb H$, the Hilbert space. We write $\Ket\psi$ to avoid the confusion of $\psi(x)$ being a value of $\psi(\bullet)$ at $x$.
The inner product is defined by the linear transformation $\Bra\phi:\Ket\psi\mapsto \int_{-\infty}^{+\infty}\phi(x)^*\psi(x)~dx,~$ denoted as $\Braket{\phi\Verti\psi}\in\mathbb C.$
The transformations $\Bra\phi$ form the dual space of $\mathbb H.$ $\Ket\phi=\phi(\bullet)\in\mathbb H,$ and $\Braket{\psi\Verti\phi}=\Braket{\phi\Verti\psi}^*.$
Assuming both vectors $\Ket\psi$ and $\Ket\phi$ are unit vectors (normalised wave functions), we can say that $\Braket{\phi\Verti\psi}$ is a scalar representation of "how much of $\Ket\psi$ lies along $\Ket\phi$". In the language of linear algebra, it is the component of $\Ket\psi$ in the direction of $\Ket\phi$.
If $\Ket\phi=\Ket\psi$, then the two are completely aligned and therefore the inner product is unity. $\Braket{\psi\Verti\psi} =\int_{-\infty}^{+\infty}\psi(x)^*\psi(x)~dx=\int_{-\infty}^{+\infty}\big|\psi(x)\big|^2~dx=1.$ There is only one component of $\Ket\psi$ along the direction of $\Ket\psi$, which is itself.
Proof of uniqueness:
$\text{If } \Ket\phi\ne\Ket\psi\text{ and }\Braket{\phi\Verti\psi}=1, \text{ then }\Braket{\psi\Verti\phi}=\Braket{\phi\Verti\psi}^*=1,~ \Braket{\psi\Verti\phi}-\Braket{\psi\Verti\psi}=1-1=0,~ \Bra\psi\big(\Ket\phi-\Ket\psi\big)=0\\ \big(\Bra{\psi}-\Bra{\phi}\big)\big(\Ket{\psi}-\Ket{\phi}\big) =\Braket{\psi\Verti\psi}-\Braket{\psi\Verti\phi}-\Braket{\phi\Verti\psi}+\Braket{\phi\Verti\phi} =1-1-1+1=0.\\ \text{Let }\Ket{\alpha}=\Ket{\psi}-\Ket{\phi}.~~ \text{We have }\Braket{\alpha\Verti\alpha}=0,~~ \text{i.e. }\Ket{\alpha}=\Ket{\psi}-\Ket{\phi}=\mathbf{0}.~~ \Ket{\psi}=\Ket{\phi},\text{ contrary to the assumption.}$
On the other hand, if $\Braket{\phi\Verti\psi}=0$, the two vectors are orthogonal, meaning $\Ket\psi$ has no component along $\Ket\phi$: their amplitudes cancel out completely.
In other words, if both $\Ket\phi$ and $\Ket\psi$ are unit vectors, $0\le\big|\Braket{\phi\Verti\psi}\big|\le 1$ (the inner product is complex in general, so the bound applies to its magnitude).
Orthogonal vectors are not unique. i.e. It is possible that $\Ket{\phi_1}\ne\Ket{\phi_2}$ and still $\Braket{\phi_1\Verti\psi}=\Braket{\phi_2\Verti\psi}=0.$
Also, $\Braket{0\Verti\psi}=\Braket{\psi\Verti 0}=0$ for any $\Ket\psi.$ So the zero vector $\mathbf 0$ is orthogonal to all vectors.
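The inner-product behaviour above can be sketched numerically (the two displaced Gaussians and the grid are illustrative choices): the self-overlap of a normalised wave function is 1, while the overlap of two distinct normalised wave functions has magnitude strictly between 0 and 1.

```python
import numpy as np

# Illustrative sketch: inner products of discretised, normalised Gaussians.
x = np.linspace(-12.0, 12.0, 200001)
dx = x[1] - x[0]

def normalised_gaussian(x0):
    g = np.exp(-(x - x0) ** 2 / 2)
    return g / np.sqrt(np.sum(np.abs(g) ** 2) * dx)   # enforce <g|g> = 1

psi = normalised_gaussian(0.0)
phi = normalised_gaussian(1.0)

overlap = np.sum(np.conj(phi) * psi) * dx        # <phi|psi>
self_overlap = np.sum(np.conj(psi) * psi) * dx   # <psi|psi> = 1
```

For these particular Gaussians the overlap is close to $e^{-1/4}\approx 0.78$: partial, but nonzero, alignment.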
The projection of $\Ket\psi$ onto $\Ket\phi$ is the vector in the direction of $\Ket\phi$ with coefficient $\Braket{\phi\rvert\psi}$. This is a new wave function $\Ket\phi\Braket{\phi\Verti\psi}.$
Let us define $P_\phi\equiv\Ket\phi\Bra\phi$. The projection of $\Ket\psi$ onto $\Ket\phi$ is therefore $P_\phi\Ket\psi=\Ket\phi\Braket{\phi\Verti\psi}.$
$\Ket\psi$ and $\Ket\phi$ are probability amplitudes. So is $P_\phi\Ket\psi.$
The physical interpretation is that if we measure $\Ket\psi$ using an apparatus configured to measure the $\Ket\phi$ state, the probability amplitude of a successful detection is $a=\Braket{\phi\Verti\psi}.$ That means the probability density is $a^*a=\Braket{\psi\Verti\phi}\Braket{\phi\Verti\psi}=\Braket{\psi\big|~P_\phi\big|\psi}.$
Experimentally, the probability is derived from the probability density times the range the apparatus covers.
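A minimal sketch of the projector on a discrete grid (the vectors and grid are illustrative): applying $P_\phi=\Ket\phi\Bra\phi$ to $\Ket\psi$ and then taking $\Braket{\psi\Verti P_\phi\Verti\psi}$ reproduces $|a|^2$ with $a=\Braket{\phi\Verti\psi}$.

```python
import numpy as np

# Illustrative sketch: projection P_phi = |phi><phi| acting on |psi>.
x = np.linspace(-12.0, 12.0, 4001)
dx = x[1] - x[0]

def normalise(v):
    return v / np.sqrt(np.sum(np.abs(v) ** 2) * dx)

psi = normalise(np.exp(-x**2 / 2))
phi = normalise(np.exp(-(x - 1.0) ** 2 / 2))

a = np.sum(np.conj(phi) * psi) * dx            # amplitude <phi|psi>
projected = phi * a                            # P_phi |psi> = |phi><phi|psi>
prob = np.sum(np.conj(psi) * projected) * dx   # <psi|P_phi|psi>
```

The number `prob` agrees with `abs(a) ** 2`, matching $a^*a=\Braket{\psi\Verti P_\phi\Verti\psi}$ above.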
After the measurement, the system is in the state of $\Ket\phi$ with unity probability (wave function collapse). A subsequent measurement (ignoring time evolution) would yield a detection with certainty, as $\Braket{\phi\Verti\phi}\Braket{\phi\Verti\phi}=\Braket{\phi\big|~P_\phi\big|\phi}=1.$
Let us configure the apparatus to measure the particle being in the neighbourhood of $x$, and name the state $\Ket x$.
The probability density is therefore $\Braket{\psi\Verti x}\Braket{x \Verti\psi}$. As we know that this value is $\psi(x)^*\psi(x)=\big|\psi(x)\big|^2$, it makes sense to interpret $\Braket{x\Verti\psi}=\psi(x).$
The probability of a detection in $[x-\epsilon/2,x+\epsilon/2]$ is therefore $\big|\psi(x)\big|^2\epsilon.$
If $\Ket\psi$ were sharply defined at $x'$ instead of a distribution, the probability amplitude would be $\Braket{x\Verti x'}.$
The probability of detection would be $\big|\Braket{x\Verti x'}\big|^2\epsilon =\left\{\begin{matrix}1&\text{when }x=x'\\0&\text{when }x\ne x'\end{matrix}\right. .$
Let us define $\Ket x\equiv\delta_x(\bullet)=\delta(x-\bullet).$
For convenience, let us denote $\Bra x=\delta_x^*(\bullet)=\delta(\bullet-x),$ although they are all real and equal.
It follows that:
$\Braket{x\Verti\psi}=\int_{-\infty}^{+\infty}\delta(y-x)~\psi(y)~dy=\psi(x).$
$\Braket{x\Verti x'}=\int_{-\infty}^{+\infty}\delta(y-x)~\delta(x'-y)~dy=\delta(x'-x)=\delta(x-x').$
$\Braket{x\Verti x}\to\infty,$ but $\lim_{x'\to x}\delta(x-x')\epsilon=1.$
The last point shows that the basis vector $\Ket x$ is not normalisable to 1, but to $\delta(x-x')$ instead.
Yet, $\{\Ket x\}$ serves as a good basis for the continuous case.
The wave function $\psi(\bullet)$ can be expressed as a superposition of all basis vectors $\Ket x$.
$\displaystyle \psi(\bullet) =\int_{-\infty}^{+\infty}\delta_x(\bullet)~\psi(x)~dx =\int_{-\infty}^{+\infty}\Ket x\psi(x)~dx .$
It can be further elaborated as $\displaystyle \Ket\psi =\int_{-\infty}^{+\infty}\Ket x\Braket{x\Verti\psi}~dx =\left(\int_{-\infty}^{+\infty}\Ket x\Bra x~dx\right)\Ket\psi .$
Therefore, we can define $\displaystyle\HAT I\equiv\int_{-\infty}^{+\infty}\Ket x\Bra x~dx.$
Alternatively, $\displaystyle\HAT I=\int_{-\infty}^{+\infty}P_x~dx,$ where $P_x=\Ket x\Bra x$ is a projection operator onto $\Ket x$.
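The resolution of the identity can be sketched on a discrete grid (the grid size, spacing, and vector representation are illustrative choices): representing $\Ket{x_j}$ as a column vector with $1/\sqrt{dx}$ at grid point $j$ and $0$ elsewhere (so that $\Braket{x_j\Verti x_k}$ mimics the delta normalisation), the sum $\sum_j\Ket{x_j}\Bra{x_j}\,dx$ is the identity matrix.

```python
import numpy as np

# Illustrative sketch: discrete resolution of the identity.
n, dx = 50, 0.1
basis = np.eye(n) / np.sqrt(dx)     # column j approximates |x_j>
identity_hat = sum(np.outer(basis[:, j], basis[:, j]) * dx
                   for j in range(n))
```

Each term `np.outer(...) * dx` is one projector $P_{x_j}\,dx$; their sum recovers $\HAT I$.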
An observable is a measurement operator: a series of projection operators, each "attached" to a measurement result. $\displaystyle Q\equiv\sum_i q_iP_i.$
In the continuous case: $\displaystyle Q=\int_{-\infty}^{+\infty}q\Ket q\Bra q~dq,$ where $\Ket q$ are basis vectors.
When measuring $\Ket\psi,$ we have $\displaystyle \Braket{\psi\Verti Q\Verti\psi} =\Bra\psi\left(\int_{-\infty}^{+\infty}q\Ket q\Bra q~dq\right)\Ket\psi =\int_{-\infty}^{+\infty}q\Braket{\psi\Verti q}\Braket{q\Verti\psi}dq$
$\displaystyle =\int_{-\infty}^{+\infty}q~\psi(q)^*\psi(q)~dq =\int_{-\infty}^{+\infty}q~\big|\psi(q)\big|^2 dq =\Braket{Q_\psi} .$
$\Braket{\psi\Verti Q\Verti\psi}=\Braket{Q_\psi}$ is the expectation value of measurement of $Q$ on the quantum state $\Ket\psi$.
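A numerical sketch of the expectation value (the Gaussian wave packet and grid are illustrative choices): for a normalised wave packet centred at $q=2$, the integral $\int q\,\big|\psi(q)\big|^2\,dq$ evaluates to approximately 2.

```python
import numpy as np

# Illustrative sketch: expectation value of position for a Gaussian at q = 2.
q = np.linspace(-10.0, 14.0, 200001)
dq = q[1] - q[0]
psi = np.exp(-(q - 2.0) ** 2 / 2)
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dq)     # normalise

expectation = np.sum(q * np.abs(psi) ** 2) * dq   # <psi|Q|psi>
```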
Let us consider $\displaystyle Q\Ket x =\left(\int_{-\infty}^{+\infty}q\Ket q\Bra q~dq\right)\Ket x =\int_{-\infty}^{+\infty}q\Ket q\Braket{q\Verti x}~dq =\int_{-\infty}^{+\infty}q\Ket q\delta(q-x)~dq =x\Ket x .$
That means $x$ is an eigenvalue of $Q$ with eigenvector $\Ket x$.
Please note that $\Braket{x\Verti\psi}=\psi(x)$ is a probability amplitude. So is $\Braket{g\Verti f}.$
In fact, $\Braket{\psi\Verti\psi}$ is also a probability amplitude, not a probability density.
By definition, $\displaystyle \Braket{\psi\Verti\psi} =\int_{-\infty}^{+\infty}\psi(x)^*\psi(x)~dx =\int_{-\infty}^{+\infty}\big|\psi(x)\big|^2~dx =1 .$
Only when there is a projector or operator in use can we consider the expression a probability density. e.g. $\Braket{\psi\Verti x}\Braket{x\Verti\psi}=\psi(x)^*\psi(x)$ is a probability density. So is $\Braket{\psi\Verti P\Verti\psi}$.
*** WORKING ***
$\displaystyle \HAT D\equiv\frac{\partial}{\partial x} .$
$\displaystyle \Braket{f~\Big|\HAT D\Big|~g} =\int_{-\infty}^{+\infty}f^*(x)\frac{d}{dx}g(x)~dx =\cancelto{0}{\big[f^*g\big]_{-\infty}^{+\infty}} -\int_{-\infty}^{+\infty}g(x)\frac{d}{dx}f^*(x)~dx =-\left(\int_{-\infty}^{+\infty}g^*(x)\frac{d}{dx}f(x)~dx\right)^* =-\Braket{g~\Big|\HAT D\Big|~f}^* .$
$\HAT D^*=-\HAT D.~~$ i.e. $\HAT D$ is anti-Hermitian.
Let $\HAT K=-i\HAT D.~~ \HAT K^*=i\HAT D^*=-i\HAT D=\HAT K.~~$ i.e. $\HAT K$ is Hermitian.
$\displaystyle \Braket{f~\Big|\HAT K\Big|~g} =\int_{-\infty}^{+\infty}f^*(x)\frac{d}{dx}\big(-i\cdot g(x)\big)~dx =-i\cancelto{0}{\big[f^*g\big]_{-\infty}^{+\infty}} -\int_{-\infty}^{+\infty}\big(-i\cdot g(x)\big)\frac{d}{dx}f^*(x)~dx$
$\displaystyle =-\left(\int_{-\infty}^{+\infty}i\cdot g^*(x)\frac{d}{dx}f(x)~dx\right)^* =\Braket{g~\Big|\HAT K\Big|~f}^* .$
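The Hermiticity of $\HAT K=-i\HAT D$ can be sketched on a grid (the central-difference discretisation with periodic boundaries is an illustrative choice, which keeps the boundary terms zero just as the vanishing-at-infinity assumption does above): the discretised derivative matrix is antisymmetric, so multiplying by $-i$ yields a Hermitian matrix.

```python
import numpy as np

# Illustrative sketch: D = d/dx as a central-difference matrix with
# periodic boundaries; K = -i D is then Hermitian.
n, dx = 64, 0.1
D = np.diag(np.ones(n - 1), 1) - np.diag(np.ones(n - 1), -1)
D[0, -1], D[-1, 0] = -1.0, 1.0      # periodic wrap-around
D /= 2 * dx                         # f'(x_j) ~ (f_{j+1} - f_{j-1}) / (2 dx)
K = -1j * D
```

Antisymmetry of `D` (real entries) is exactly the discrete analogue of $\HAT D^*=-\HAT D$.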
$\HAT X\Ket{x_0}=x_0\Ket{x_0},\text{ or generally }\HAT X\Ket x=x\Ket x$ for any $x\in\mathbb R$.
$\displaystyle \int_{-\infty}^{+\infty}\Ket x\Bra x~dx=\HAT I .$
$\Bra x:\Ket f\mapsto f(x) .$
$\displaystyle \Ket f =\HAT I\Ket f =\int_{-\infty}^{+\infty}\Ket x\Braket{x\Verti f}~dx =\int_{-\infty}^{+\infty}f(x)\Ket x~dx .~~$ (superposition form)
$\displaystyle \Braket{g\Verti f} =\Braket{g~\Big|\HAT I\Big|~f} =\int_{-\infty}^{+\infty}\Braket{g\Verti x}\Braket{x\Verti f}~dx =\int_{-\infty}^{+\infty}g^*(x)~f(x)~dx .$
$\displaystyle f(x) =\Braket{x\Verti f} =\Braket{x~\Big|\HAT I\Big|~f} =\int_{-\infty}^{+\infty}\Braket{x\Verti y}\Braket{y\Verti f}~dy =\int_{-\infty}^{+\infty}\Braket{x\Verti y}~f(y)~dy .$
Comparing this to $f(x)=\int_{-\infty}^{+\infty}\delta(y-x)~f(y)~dy,$ we can say:
$\delta(y-x)=\Braket{y\Verti x}=\Braket{x\Verti y}^*=\delta(x-y)^*=\delta(x-y)=\Braket{x\Verti y}\in\mathbb R .$
That said, $\Braket{x\Verti x}=\delta(x-x)\to\infty,~$ but $\int_{-\infty}^{+\infty}\Braket{x\Verti y}~dy=\int_{-\infty}^{+\infty}\delta(x-y)~dy=1.~$ (This works only if one variable is moving.)
As $\delta$ is zero almost everywhere, it can be considered as $\Braket{x\Verti x}dx=\delta(x-x)~dx=1.$
Let us interpret the above in the physics context.
$\Braket{g\Verti f}$ is the probability amplitude of detecting $\Ket f$ using $\Bra g$ as a measuring apparatus (detector).
The probability density function of such an apparatus is $\lvert g(x)\rvert^2$. If $f$ is certainly at $x_0\pm\epsilon$, the probability that $\Bra g$ will register a detection is $\displaystyle\int_{x_0-\epsilon}^{x_0+\epsilon}\lvert g(x)\rvert^2~dx.$
Note: The norm squared of the Braket times $\epsilon$, i.e. $\Braket{f\Verti g}\Braket{g\Verti f}dx$, is the probability.
The above example of $g(x)$ is a distribution. In other words, the apparatus can detect a wide range of positions with some uncertainty at each, with probability density $\lvert g(x_0)\rvert^2$ at position $x_0$.
If we have a series of perfect detectors, with certain detection at the position it corresponds to ($x_j$), and zero everywhere else, the detectors can be represented as $\sum_j\Ket{x_j}\Bra{x_j}$.
If we have a series of detectors $\Ket{x_j}\Bra{x_j}$, each detecting a position of $x_j\pm\epsilon/2$, then $\HAT X=\sum_j x_j\Ket{x_j}\Bra{x_j}$ is the measurement operator, each projector attached to its measurement result $x_j$.
To measure the position, the apparatus can be viewed as a series of ideal detectors at $x_0,x_1,\ldots$, each of which registers a detection with certainty when the position of the subject $f$ is realised at that detector.
This is consistent with $\Braket{x\Verti f}=f(x).$ The norm squared of the Braket times $\epsilon$ is $\Braket{f\Verti x}\Braket{x\Verti f}dx$, which is the probability.
As $\displaystyle\int_{-\infty}^{+\infty}\Ket x\Bra x~dx=\HAT I$, $\Braket{f\Verti x}\Braket{x\Verti f}dx=\lvert f(x)\rvert^2dx$ is the probability, which means $\lvert f(x_0)\rvert^2$ is the probability density at $x_0$.
For change of basis from $x$ to $k$ for example, what is the meaning of $\Braket{x\Verti k}~$?
If with certainty we know that the subject has a measurement of $k_0$ (e.g. momentum $\hbar k_0$), the probability amplitude of measuring it at $x$ is $\Braket{x\Verti k_0},$ and the probability density is $\big|\Braket{x\Verti k_0}\big|^2.$ (We avoid using $x_0$ as it is not the corresponding measurement of $k_0$.)
Repeating the previous analysis:
$\displaystyle \int_{-\infty}^{+\infty}\Ket k\Bra k~dk=\HAT{I_k}.~~$ (and the previous $\HAT I$ with regard to $x$ is now presented as $\HAT{I_x}$.)
$\displaystyle \Ket{k_0} =\HAT{I_x}\Ket{k_0} =\int_{-\infty}^{+\infty}\Ket x\Braket{x\Verti k_0}~dx =\int_{-\infty}^{+\infty}k_0(x)\Ket x~dx .~~$ This is the "superposition" of $k_0$ projected on the basis $\{\Ket x\}$.
$\phi_k(x)\equiv\Braket{x\Verti k}=\frac{1}{\sqrt{2\pi}}~e^{ikx} .$
Although $\displaystyle\int_{-\infty}^{+\infty}\frac{1}{\sqrt{2\pi}}e^{-x^2/2}dx=1,$ the unweighted $\phi_k(x)$ is not normalisable: $\big|\phi_k(x)\big|^2=\frac{1}{2\pi}$ integrated over $x\in\mathbb R$ diverges.
That said, $\displaystyle\int_{-\pi}^{+\pi}\frac{1}{\sqrt{2\pi}}e^{-x^2/2}dx =\ldots .$
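The divergence can be sketched numerically (the wavenumber and the $L$ values are illustrative choices): since $\big|\phi_k(x)\big|^2=1/(2\pi)$ everywhere, the norm integral over $[-L,L]$ grows like $L/\pi$ instead of converging.

```python
import numpy as np

# Illustrative sketch: the plane wave phi_k(x) = exp(i k x) / sqrt(2 pi)
# has constant |phi_k|^2 = 1/(2 pi), so its norm integral grows with L.
k = 3.0
integrals = []
for L in (10.0, 100.0, 1000.0):
    x = np.linspace(-L, L, 200001)
    dx = x[1] - x[0]
    phi = np.exp(1j * k * x) / np.sqrt(2 * np.pi)
    integrals.append(np.sum(np.abs(phi) ** 2) * dx)   # ~ L / pi
```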
*** WORKING ***
We include the "physicists'" version of the Hermite Polynomials here for completeness.
$\displaystyle H_n(x) \equiv(-1)^n~e^{x^2}\frac{d^n}{dx^n}e^{-x^2} =\left(2x-\frac{d}{dx}\right)^n 1 .$
$\displaystyle (-1)^ne^{-x^2}H_n(x)=\frac{d^n}{dx^n}e^{-x^2} .$
$\displaystyle (-1)^n\frac{d}{dx}\left(e^{-x^2}H_n(x)\right)=\frac{d^{n+1}}{dx^{n+1}}e^{-x^2} .$
$\displaystyle (-1)^ne^{-x^2}\left(-2x~H_n(x)+H_n'(x)\right)=\frac{d^{n+1}}{dx^{n+1}}e^{-x^2} .$
$\displaystyle -2x~H_n(x)+H_n'(x)=(-1)^ne^{x^2}\frac{d^{n+1}}{dx^{n+1}}e^{-x^2}=-H_{n+1}(x) .$
$\displaystyle\boxed{H_{n+1}(x)=\left(2x-\frac{d}{dx}\right)H_n(x).}$
$\displaystyle H_n(x) =\left(2x-\frac{d}{dx}\right)H_{n-1}(x) =\left(2x-\frac{d}{dx}\right)^2H_{n-2}(x) =\ldots =\left(2x-\frac{d}{dx}\right)^nH_0(x),~~ H_0(x)=1 .$
$\displaystyle\boxed{H_n(x)=\left(2x-\frac{d}{dx}\right)^n 1.}$
$\displaystyle H_0(x)=1,\\ H_1(x)=\left(2x-\frac{d}{dx}\right)1=2x,\\ H_2(x)=\left(2x-\frac{d}{dx}\right)2x=4x^2-2,\\ H_3(x)=\left(2x-\frac{d}{dx}\right)(4x^2-2)=8x^3-12x,\\ H_4(x)=\left(2x-\frac{d}{dx}\right)(8x^3-12x)=16x^4-48x^2+12,\\ H_5(x)=\left(2x-\frac{d}{dx}\right)(16x^4-48x^2+12)=32x^5-160x^3+120x,\\ H_6(x)=\left(2x-\frac{d}{dx}\right)(32x^5-160x^3+120x)=64x^6-480x^4+720x^2-120,\\ \ldots$
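The boxed recursion can be checked mechanically. The sketch below (the coefficient-list representation is an illustrative choice) applies $\left(2x-\frac{d}{dx}\right)$ repeatedly, reproducing the table above:

```python
# Illustrative sketch: build H_n from H_{n+1} = (2x - d/dx) H_n, with each
# polynomial stored as coefficients [c0, c1, ...] of c0 + c1 x + c2 x^2 + ...
def apply_2x_minus_ddx(c):
    two_x_part = [0.0] + [2.0 * a for a in c]        # coefficients of 2x * H
    ddx_part = [k * a for k, a in enumerate(c)][1:]  # coefficients of dH/dx
    ddx_part += [0.0] * (len(two_x_part) - len(ddx_part))
    return [p - q for p, q in zip(two_x_part, ddx_part)]

H = [[1.0]]                          # H_0 = 1
for _ in range(6):
    H.append(apply_2x_minus_ddx(H[-1]))
# H[3] is [0, -12, 0, 8], i.e. 8x^3 - 12x, matching the table
```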
$\displaystyle \int_{-\infty}^{+\infty}H_m(x)~H_n(x)~w(x)~dx=\sqrt{\pi}~2^nn!~\delta_{mn}~,$ where the weight function $w(x)=e^{-x^2}.$ The $H_n(x)$ are mutually orthogonal.
Normalised function: $\phi_n(x)=\left(\sqrt{\pi}~2^nn!\right)^{-1/2}e^{-x^2/2}H_n(x) .$
$\displaystyle \int_{-\infty}^{+\infty}\phi_m(x)~\phi_n(x)~dx =\int_{-\infty}^{+\infty} \left(\sqrt{\pi}~2^mm!\right)^{-1/2}e^{-x^2/2}H_m(x) \cdot\left(\sqrt{\pi}~2^nn!\right)^{-1/2}e^{-x^2/2}H_n(x)~dx$
$\displaystyle =\left(\sqrt{\pi}~2^mm!\right)^{-1/2}\cdot\left(\sqrt{\pi}~2^nn!\right)^{-1/2} \int_{-\infty}^{+\infty}H_m(x)~H_n(x)~e^{-x^2}~dx =\delta_{mn} .$
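The orthogonality relation can be spot-checked numerically (the grid and the indices $m,n$ below are illustrative choices); `numpy.polynomial.hermite.hermval` evaluates series in the physicists' $H_n$:

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from math import factorial, sqrt, pi

# Illustrative sketch: check int H_m H_n e^{-x^2} dx = sqrt(pi) 2^n n! delta_mn.
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]

def H(n):
    c = np.zeros(n + 1)
    c[n] = 1.0                      # coefficient vector selecting H_n
    return hermval(x, c)

def weighted_inner(m, n):
    return np.sum(H(m) * H(n) * np.exp(-x**2)) * dx

off_diag = weighted_inner(2, 3)     # ~ 0
diag = weighted_inner(3, 3)         # ~ sqrt(pi) * 2^3 * 3!
```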
*** WORKING ***
$\displaystyle \frac{d^2}{dx^2}H_n-2x\frac{d}{dx}H_n=-2nH_n .$
Proof:
$\displaystyle H_{n+1}(x)=\left(2x-\frac{d}{dx}\right)H_n(x),~~~ \frac{d}{dx}H_n(x)=2x~H_n(x)-H_{n+1}(x),~~~ \frac{d^2}{dx^2}H_n(x)=2~H_n(x)+2x~\frac{d}{dx}H_n(x)-\frac{d}{dx}H_{n+1}(x),$
$\displaystyle \frac{d^2}{dx^2}H_n(x)-2x~\frac{d}{dx}H_n(x)=2~H_n(x)-\frac{d}{dx}H_{n+1}(x) .$
$\displaystyle \mathrm{RHS} =2~H_n(x)-\frac{d}{dx}H_{n+1}(x) =2~H_n(x)-2(n+1)~H_n(x) =-2n~H_n(x),$ using the identity $H_{n+1}'(x)=2(n+1)~H_n(x)$ (a standard consequence of the Rodrigues formula). This completes the proof.
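The differential equation can be spot-checked numerically (the sample points and the range of $n$ are illustrative choices), using numpy's Hermite-series derivative `hermder`:

```python
import numpy as np
from numpy.polynomial.hermite import hermval, hermder

# Illustrative sketch: residual of H_n'' - 2x H_n' + 2n H_n at sample points.
x = np.linspace(-3.0, 3.0, 7)
max_residual = 0.0
for n in range(1, 7):
    c = np.zeros(n + 1)
    c[n] = 1.0                      # coefficient vector selecting H_n
    res = (hermval(x, hermder(hermder(c)))
           - 2 * x * hermval(x, hermder(c))
           + 2 * n * hermval(x, c))
    max_residual = max(max_residual, float(np.max(np.abs(res))))
```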
Hermite Polynomials form an orthogonal basis of the Hilbert Space of functions satisfying $\displaystyle\int_{-\infty}^{+\infty}\lvert f(x)\rvert^2~w(x)~dx<\infty,$ with inner product $\displaystyle\int_{-\infty}^{+\infty}f(x)~\overline{g(x)}~w(x)~dx.$
Such a Hilbert space can be represented as $\{\phi_n(x)=N_nH_n(x)~e^{-x^2/2}\}.$ (Please note that $e^{-x^2/2}$ here is not the weight function.)
The "Probabilists' Hermite Polynomials" are described at https://en.wikipedia.org/wiki/Hermite_polynomials.
They are denoted $He_n(x)$ and can be converted to the "Physicists'" version by $\displaystyle He_n(x)=2^{-\frac{n}{2}}H_n\left(\frac{x}{\sqrt 2}\right).$
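This conversion can be spot-checked (the sample points and range of $n$ are illustrative choices) with numpy's two Hermite evaluators, `hermval` (physicists') and `hermeval` (probabilists'):

```python
import numpy as np
from numpy.polynomial.hermite import hermval
from numpy.polynomial.hermite_e import hermeval

# Illustrative sketch: check He_n(x) = 2^{-n/2} H_n(x / sqrt(2)).
x = np.linspace(-2.0, 2.0, 9)
all_match = True
for n in range(7):
    c = np.zeros(n + 1)
    c[n] = 1.0                      # coefficient vector selecting degree n
    lhs = hermeval(x, c)                                  # He_n(x)
    rhs = 2.0 ** (-n / 2) * hermval(x / np.sqrt(2), c)    # 2^{-n/2} H_n(x/sqrt 2)
    all_match = all_match and bool(np.allclose(lhs, rhs))
```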
$\displaystyle\boxed{ He_n(x) \equiv(-1)^n~e^{x^2/2}\frac{d^n}{dx^n}e^{-x^2/2} .}$
$\displaystyle (-1)^ne^{-x^2/2}He_n(x)=\frac{d^n}{dx^n}e^{-x^2/2} .$
$\displaystyle (-1)^n\frac{d}{dx}\left(e^{-x^2/2}He_n(x)\right)=\frac{d^{n+1}}{dx^{n+1}}e^{-x^2/2} .$
$\displaystyle (-1)^ne^{-x^2/2}\big(-x~He_n(x)+He_n'(x)\big)=\frac{d^{n+1}}{dx^{n+1}}e^{-x^2/2} .$
$\displaystyle -x~He_n(x)+He_n'(x)=(-1)^ne^{x^2/2}\frac{d^{n+1}}{dx^{n+1}}e^{-x^2/2}=-He_{n+1}(x) .$
$\displaystyle\boxed{He_{n+1}(x)=\left(x-\frac{d}{dx}\right)He_n(x).}$
$\displaystyle He_n(x) =\left(x-\frac{d}{dx}\right)He_{n-1}(x) =\left(x-\frac{d}{dx}\right)^2He_{n-2}(x) =\ldots =\left(x-\frac{d}{dx}\right)^nHe_0(x),~~ He_0(x)=1 .$
$\displaystyle\boxed{He_n(x)=\left(x-\frac{d}{dx}\right)^n 1.}$
$\displaystyle He_0(x)=1,\\ He_1(x)=\left(x-\frac{d}{dx}\right)1=x,\\ He_2(x)=\left(x-\frac{d}{dx}\right)x=x^2-1,\\ He_3(x)=\left(x-\frac{d}{dx}\right)(x^2-1)=x^3-3x,\\ He_4(x)=\left(x-\frac{d}{dx}\right)(x^3-3x)=x^4-6x^2+3,\\ He_5(x)=\left(x-\frac{d}{dx}\right)(x^4-6x^2+3)=x^5-10x^3+15x,\\ He_6(x)=\left(x-\frac{d}{dx}\right)(x^5-10x^3+15x)=x^6-15x^4+45x^2-15,\\ \ldots$
The weight function used for $He_n(x)$ is $w=e^{-x^2/2}.$