<< Probability - Discrete Probability Distributions | Bloch Sphere Orthonormality >> |
$\require{cancel} \newcommand{\Ket}[1]{\left|{#1}\right\rangle} \newcommand{\Bra}[1]{\left\langle{#1}\right|} \newcommand{\Braket}[1]{\left\langle{#1}\right\rangle} \newcommand{\Rsr}[1]{\frac{1}{\sqrt{#1}}} \newcommand{\RSR}[1]{1/\sqrt{#1}} \newcommand{\Verti}{\rvert} \newcommand{\HAT}[1]{\hat{\,#1~}} \DeclareMathOperator{\Tr}{Tr}$
First created in October 2012
A random variable $X$ is continuous iff its Cumulative Distribution Function $F(x)=F_X(x)=P(X\le x)$, $x\in\mathbb{R}$, is continuous.
Probability Density Function $f(x)$: $\left\{ \begin{array}{ll} f(x)\ge 0\text{ for all }x,\\ \int_{-\infty}^{\infty}f(x)dx=1. \end{array} \right.$
$\displaystyle F(x)=\int_{-\infty}^{x}f(t)dt .$
$\displaystyle F\text{ is non-decreasing and }\lim_{x\to\infty}F(x)=1 .$
If $F(x)$ is differentiable, $f_X(x)=f(x)=\frac{d}{dx}F(x),~~x\in\mathbb{R} .$
If $F(x)$ is not differentiable at $x=a$, the value $f(a)$ may be assigned arbitrarily (for instance, the one-sided limit $\lim_{x\to a^-}\frac{d}{dx}F(x)$, when it exists), since changing $f$ at isolated points does not affect any probability.
$\displaystyle P(a\le X\le b)=P(a<X\le b)=F(b)-F(a)=\int_a^b f(x)dx .$
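The identity $P(a\le X\le b)=F(b)-F(a)=\int_a^b f(x)dx$ is easy to verify numerically. A minimal sketch (an illustration, not part of the original notes), assuming an exponential density with rate $\lambda=2$ chosen only as an example, and a simple midpoint-rule integrator:

```python
import math

LAM = 2.0  # example rate; any positive value works

def f(t):
    # exponential density (used only as a concrete example)
    return LAM * math.exp(-LAM * t) if t >= 0 else 0.0

def F(t):
    # its closed-form CDF
    return 1.0 - math.exp(-LAM * t) if t >= 0 else 0.0

def integrate(g, a, b, n=100_000):
    # midpoint-rule approximation of the integral of g over [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

a, b = 0.5, 1.5
lhs = integrate(f, a, b)      # integral of f over [a, b]
rhs = F(b) - F(a)             # F(b) - F(a)
print(abs(lhs - rhs) < 1e-8)  # True: the two agree
```

The same check works for any density with a known CDF; only `f` and `F` change.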
$\displaystyle \mu=\mathrm{E}(X)=\int_{-\infty}^{\infty}xf(x)dx .$
$\displaystyle \sigma^2=\mathrm{Var}(X)=\int_{-\infty}^{\infty}(x-\mu)^2 f(x)dx .$
$\displaystyle Y=g(X),\quad\mathrm{E}(Y)=\mathrm{E}(g(X))=\int_{-\infty}^{\infty}g(x)f(x)dx .$
Theorem: $\mathrm{Var}(X)=\mathrm{E}(X^2)-\big(\mathrm{E}(X)\big)^2.$
Proof: Let $\mu=\mathrm{E}(X)$ and $g(X)=(X-\mu)^2$, then $\displaystyle \mathrm{Var}(X) =\int_{-\infty}^\infty(x-\mu)^2 f(x)dx =\int_{-\infty}^\infty(x^2-2x\mu+\mu^2)f(x)dx$
$\displaystyle \qquad=\int_{-\infty}^\infty x^2 f(x)dx-2\mu\int_{-\infty}^\infty xf(x)dx+\mu^2\int_{-\infty}^\infty f(x)dx =\mathrm{E}(X^2)-2\mu\cdot\mu+\mu^2 =\mathrm{E}(X^2)-\big(\mathrm{E}(X)\big)^2 .$
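The variance identity can be confirmed numerically by computing both sides from the defining integrals. A sketch (an illustration, not from the notes) using the uniform density on $[0,1]$, for which $\mathrm{E}(X)=\tfrac12$, $\mathrm{E}(X^2)=\tfrac13$, and $\mathrm{Var}(X)=\tfrac{1}{12}$:

```python
def f(x):
    # uniform density on [0, 1], used as a simple example
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def integrate(g, a, b, n=100_000):
    # midpoint-rule approximation of the integral of g over [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

mu  = integrate(lambda x: x * f(x), 0.0, 1.0)        # E(X) = 1/2
ex2 = integrate(lambda x: x * x * f(x), 0.0, 1.0)    # E(X^2) = 1/3
var_direct  = integrate(lambda x: (x - mu) ** 2 * f(x), 0.0, 1.0)
var_theorem = ex2 - mu ** 2                          # 1/3 - 1/4 = 1/12
print(abs(var_direct - var_theorem) < 1e-9)          # True
```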
Theorem: $\mathrm{E}(aX+b)=a\mathrm{E}(X)+b,~~ \mathrm{Var}(aX+b)=a^2\mathrm{Var}(X),~~ \mathrm{SD}(aX+b)=|a|\mathrm{SD}(X).$ ($a$ and $b$ are constants.)
Proof: See the same theorem in Discrete Probability Distributions.
Theorem: If $Z=\frac{X-\mu}{\sigma}$ where $\mu=\mathrm{E}(X)$ and $\sigma=\sqrt{\mathrm{Var}(X)}$, then $\mathrm{E}(Z)=0\quad\text{and}\quad\mathrm{Var}(Z)=1.$
Proof: $\mathrm{E}(Z) =\mathrm{E}\left(\frac{X-\mu}{\sigma}\right) =\frac{\mathrm{E}(X)-\mu}{\sigma} =\frac{\mu-\mu}{\sigma}=0.~ \mathrm{Var}(Z) =\mathrm{Var}\left(\frac{X-\mu}{\sigma}\right) =\frac{1}{\sigma^2}\mathrm{Var}(X) =1 .$
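The standardization theorem can also be checked empirically. The sketch below (an illustration, not from the notes; the exponential distribution and rate $\lambda=2$ are arbitrary choices, with $\mu=\sigma=1/\lambda$) standardizes simulated draws and confirms the sample mean and standard deviation land near $0$ and $1$:

```python
import random
import statistics

random.seed(0)

# Example: exponential draws with rate lam (arbitrary choice);
# for the exponential, mu = 1/lam and sigma = 1/lam.
lam = 2.0
mu, sigma = 1.0 / lam, 1.0 / lam

xs = [random.expovariate(lam) for _ in range(200_000)]
zs = [(x - mu) / sigma for x in xs]   # Z = (X - mu) / sigma

print(statistics.mean(zs))   # close to 0
print(statistics.stdev(zs))  # close to 1
```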
$\displaystyle N(\mu, \sigma^2): \phi(x)=\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\tfrac{1}{2}\left(\tfrac{x-\mu}{\sigma}\right)^2},$ where $-\infty<x<\infty.$
$\displaystyle \int_{-\infty}^\infty\phi(x)dx=1 .$
$\displaystyle P(X\le x)=F_X(x)=\int_{-\infty}^x\phi(t)dt .$
$\displaystyle Z=\frac{X-\mu}{\sigma} .$
$\displaystyle P(Z\le z)=F_Z(z)=\int_{-\infty}^z\frac{1}{\sqrt{2\pi}}e^{-\tfrac{1}{2}t^2}dt .$
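$F_Z(z)$ has no elementary closed form, but it can be written via the error function as $F_Z(z)=\tfrac12\big(1+\mathrm{erf}(z/\sqrt{2})\big)$. A small sketch (an illustration, not from the notes) comparing this formula against direct numerical integration of the standard normal density:

```python
import math

def Phi(z):
    # standard normal CDF via the error function:
    # Phi(z) = (1 + erf(z / sqrt(2))) / 2
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi(t):
    # standard normal density
    return math.exp(-0.5 * t * t) / math.sqrt(2.0 * math.pi)

def integrate(g, a, b, n=100_000):
    # midpoint-rule approximation of the integral of g over [a, b]
    h = (b - a) / n
    return sum(g(a + (i + 0.5) * h) for i in range(n)) * h

# start at -10: the probability mass below -10 is negligible
for z in (-1.0, 0.0, 1.96):
    assert abs(integrate(phi, -10.0, z) - Phi(z)) < 1e-7

print(Phi(0.0))  # 0.5 by symmetry
```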
Theorem: $\mathrm{E}(X)=\mu.$
Proof: Let $z=\frac{x-\mu}{\sigma}$. Then $\displaystyle \mathrm{E}\left(\frac{X-\mu}{\sigma}\right) =\int_{-\infty}^\infty\frac{x-\mu}{\sigma}\cdot\phi(x)dx =\int_{-\infty}^\infty\frac{x-\mu}{\sigma}\cdot\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\tfrac{1}{2}\left(\tfrac{x-\mu}{\sigma}\right)^2}dx$
$\displaystyle \qquad=\int_{x=-\infty}^{x=\infty}z\cdot\frac{1}{\sqrt{2\pi\sigma^2}}e^{-\tfrac{1}{2}z^2}\sigma dz =\frac{\sigma}{\sqrt{2\pi\sigma^2}}\left(\int_{-\infty}^0 z\cdot e^{-\tfrac{1}{2}z^2}dz+\int_0^{\infty}z\cdot e^{-\tfrac{1}{2}z^2}dz\right)$
$\displaystyle \qquad=\frac{\sigma}{\sqrt{2\pi\sigma^2}}\left(\int_\infty^0-(-z)\cdot e^{-\tfrac{1}{2}(-z)^2}dz+\int_0^{\infty}z\cdot e^{-\tfrac{1}{2}z^2}dz\right) =\frac{\sigma}{\sqrt{2\pi\sigma^2}}\left(\int_0^\infty(-z)\cdot e^{-\tfrac{1}{2}(-z)^2}dz+\int_0^{\infty}z\cdot e^{-\tfrac{1}{2}z^2}dz\right)=0.$
$\qquad\displaystyle \because\mathrm{E}\left(\frac{X-\mu}{\sigma}\right)=\frac{\mathrm{E}(X)-\mu}{\sigma}=0. \quad\therefore\mathrm{E}(X)=\mu.$
$X\sim\mathrm{Exp}(\lambda): \quad f(t)=\left\{\begin{array}{ll} \lambda e^{-\lambda t}&\text{if }t\ge 0,\\ 0&\text{if }t<0. \end{array}\right.$
Theorem: $\mathrm{E}(X)=\frac{1}{\lambda}.$
Proof: $\displaystyle \mathrm{E}(X)=\int_{-\infty}^\infty tf(t)dt =0+\int_0^\infty t\lambda e^{-\lambda t}dt =\int_0^\infty(-t)\cdot(-\lambda)e^{-\lambda t}dt$
$\displaystyle \qquad=\Big[(-t)\cdot e^{-\lambda t}\Big]_0^\infty-\int_0^\infty e^{-\lambda t}(-1)dt =\Big[(-t)\cdot e^{-\lambda t}\Big]_0^\infty-\frac{1}{\lambda}\Big[e^{-\lambda t}\Big]_0^\infty$
$\displaystyle \qquad=0-\frac{1}{\lambda}\Big[0-1\Big] =\frac{1}{\lambda}.$
Theorem: $\displaystyle \sigma^2=\mathrm{Var}(X)=\frac{1}{\lambda^2} .$
Proof: $\displaystyle \mathrm{Var}(X)=\mathrm{E}(X^2)-\mu^2 =\int_{-\infty}^\infty t^2f(t)dt-\mu^2 =0+\int_0^\infty(-t^2)\cdot(-\lambda)e^{-\lambda t}dt-\mu^2$
$\displaystyle \qquad=\Big[(-t^2)\cdot e^{-\lambda t}\Big]_0^\infty-\int_0^\infty e^{-\lambda t}(-2t)dt-\mu^2 =0+\frac{2}{\lambda}\int_0^\infty t\lambda e^{-\lambda t}dt-\mu^2 =\frac{2}{\lambda^2}-\frac{1}{\lambda^2} =\frac{1}{\lambda^2} .$
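Both exponential moments can be verified by simulation. This sketch (an illustration, not from the notes; the rate $\lambda=0.5$ is an arbitrary choice) compares the sample mean and variance against $1/\lambda$ and $1/\lambda^2$:

```python
import random
import statistics

random.seed(1)

lam = 0.5  # example rate (arbitrary); theory predicts
           # E(X) = 1/lam = 2 and Var(X) = 1/lam**2 = 4

sample = [random.expovariate(lam) for _ in range(500_000)]

mean = statistics.fmean(sample)
var  = statistics.pvariance(sample)
print(mean, var)  # close to 2 and 4
```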