Posts

Showing posts from July, 2020

EPR paradox and Bell inequality

Popular science books and articles on quantum mechanics love the topic of the EPR paradox. It is tied to the legend of Albert Einstein and his debates with Niels Bohr. As is well known in the history of physics, Einstein was never completely convinced by the standard interpretation of quantum mechanics, and the so-called EPR paradox was proposed as an attempt to prove that quantum mechanics is "incomplete". Nowadays, we are used to quantum mechanics, and the EPR paradox is no longer a paradox: it is resolved by the Bell inequality and its various experimental verifications. Thanks to the simplification made by physicists David Bohm and Yakir Aharonov, we can describe the EPR paradox in terms of a spin singlet state of two entangled electrons \begin{equation}|S=0\rangle=\frac{1}{\sqrt{2}}\left[|\uparrow\rangle_A |\downarrow\rangle_B -  |\downarrow\rangle_A |\uparrow\rangle_B\right]\,.\tag{1}\end{equation} Remarks: The spins of the two electrons are always along exactly opposite directions, with or without measurements.

Second quantization and quantum field theory

When I was an undergraduate and took a course on advanced quantum mechanics, I did not fully grasp the concept of second quantization. In my impression, second quantization was summarized by the following recipe: (1) expand the wave function in some basis: $\psi(x, t)=\sum_k a_k(t)\phi_k(x)$; (2) impose a commutation relation like $[a_k, a^{\dagger}_{k'}]=\delta(k-k')$ (the "second quantization"); (3) replace operators $\hat{\cal{O}}$ by their "expectation values" $\int dx\, \psi(x)^{\dagger} \hat{\cal{O}} \psi(x)$. I don't remember clearly whether the above summary came from my crash study for the final exam or was exactly what I was taught in class, so I asked one of my best friends, Joking, for confirmation, since we took that class together. Without any surprise, Joking told me that he never understood second quantization either. Now let me try to explain what second quantization actually is in this post. Before starting, I have to say that "second quantization" is a really misleading term. As sho
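To make the recipe concrete, here is a sketch of where it leads in the standard free-particle example (my own illustration, not necessarily how it was taught in that class). Applying step (3) to the kinetic energy and expanding in plane waves $\phi_k(x)=e^{ikx}/\sqrt{2\pi}$, the commutation relation from step (2) turns the Hamiltonian into \begin{equation}H=\int dx\, \psi^{\dagger}(x)\left(-\frac{\hbar^2}{2m}\partial_x^2\right)\psi(x)=\int dk\, \frac{\hbar^2 k^2}{2m}\, a^{\dagger}_k a_k\,,\end{equation} i.e. single-particle energies weighted by the occupation-number operators $a^{\dagger}_k a_k$.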

Interview problems on C++

1. Difference between pointer and reference: A pointer is a variable storing the memory address of the variable it points to. It is safe to think of a reference as an alias of the variable it refers to. (1) A pointer must be dereferenced with * while a reference can be used directly. (2) A pointer can be reassigned while a reference cannot be once initialized. (3) A pointer can be assigned nullptr directly while a reference cannot. (4) We can perform arithmetic operations on a pointer. (5) A pointer to a pointer provides multiple levels of indirection. 2. Inheritance: 'is a' relationship. 3. Virtual: (1) A virtual function is a member function that is declared in the base class with the keyword 'virtual' and can be overridden by derived classes. (2) A pure virtual function is a virtual function without an implementation in the base class. It is specified by '=0' in the declaration. (3) An abstract class is a class that contains at least one pure virtual function. (4) An interface is a cla

Interview problem: distance between median and mean

Problem: Provide a bound on the distance between the median and the mean. Solution: Let $X$ be a random variable and $M(X)$ its median. Besides $\mathbb{E}(X)=\text{argmin}_c\,\mathbb{E}\left(X-c\right)^2$, we have \begin{equation}M(X)=\text{argmin}_c\,\mathbb{E}\left|X-c\right|\,.\end{equation} As a result, we can prove \begin{eqnarray} \left|\mathbb{E}(X)-M(X)\right| &\leq& \mathbb{E}\left|X-M(X)\right| \\ &\leq& \mathbb{E}\left|X-\mathbb{E}(X)\right| \\ &\leq& \sqrt{\mathbb{E}\left(X-\mathbb{E}(X)\right)^2}=\sqrt{\text{Var}(X)}\,,\end{eqnarray} where the first and third inequalities follow from Jensen's inequality and the second from the fact that the median minimizes $\mathbb{E}\left|X-c\right|$.

Interview problem: Welford's online algorithm

Problem: Provide an online algorithm to compute the sample mean and sample variance of streaming data. Solution: Let $x_1, x_2, \cdots, x_n, \cdots$ be the streaming data; the sample mean $\bar{x}_n \equiv\sum_{i=1}^n x_i / n$ can be computed by \begin{equation}\bar{x}_n=\bar{x}_{n-1} +\frac{x_n - \bar{x}_{n-1}}{n}\,.\tag{1}\end{equation} For the sample variance $s^2_{n}\equiv \frac{1}{n-1}\sum_{i=1}^n\left(x_i-\bar{x}_n\right)^2$, a naive method is to compute \begin{equation} s^2_n = \frac{1}{n-1}\left(\sum_{i=1}^n x_i^2 - \frac{\left(\sum_{i=1}^n x_i\right)^2}{n}\right)\end{equation} from the running sum and the running sum of squares. However, such a method is numerically unstable since it may involve the cancellation of two nearly equal numbers. Instead, as suggested in this wiki, we use Welford's algorithm: \begin{eqnarray} M_{2,n} &=& M_{2, n-1}+\left(x_n-\bar{x}_{n-1}\right)\left(x_n-\bar{x}_n\right)\,, \\ s_n^2 &=&\frac{M_{2, n}}{n-1}\,.\tag{2}\end{eqnarray} Proof: O

Kalman filter

Conditional mean and variance of Gaussian Let $X, Y$ be joint Gaussian, i.e. \begin{eqnarray}\left[\begin{array}{c} X \\ Y\end{array}\right] \,\sim \, {\cal{N}}\left(\left[\begin{array}{c}\mu_X \\ \mu_Y\end{array}\right],\,\left[\begin{array}{cc}\Sigma_{XX} & \Sigma_{XY} \\ \Sigma_{YX} & \Sigma_{YY}\end{array}\right]\right)\,,\end{eqnarray} what are $\mathbb{E}(X|Y)$ and $\text{Var}(X|Y)$?  Trick: Find a constant matrix $A$ such that $X+AY$ is independent of $Y$. Then we have \begin{eqnarray} \mathbb{E}(X|Y) &=& \mathbb{E}(X + AY|Y) - \mathbb{E}(AY|Y) = \mathbb{E}(X + AY) - AY\nonumber\\ &=&\mu_X + A\left(\mu_Y - Y\right)\,,\nonumber\\ \text{Var}(X|Y) &=& \text{Var}(X+AY|Y) = \text{Var}(X+AY) = \text{Cov}(X+AY, X^T + Y^T A^T)\nonumber\\ &=&\Sigma_{XX} + \Sigma_{XY} A^T + A \Sigma_{YX} + A \Sigma_{YY} A^T\,. \nonumber \end{eqnarray} Finally, from $0=\text{Cov}(X+AY, Y)=\Sigma_{XY}+A\Sigma_{YY}$, we have $A=-\Sigma_{XY}\Sigma_{YY}^{-1}$. As a result

Interview problem: Implement convex hull

Graham's scan: $\text{arctan2}(y,x)$ can be replaced by \begin{equation}\text{sign}(y)\,\left(1 - \frac{x}{|x|+|y|}\right)\end{equation} since they share the same monotonicity, as proposed here. Handling collinearity is tricky compared to the monotone chain algorithm. Here is my C++ implementation, which solves Leetcode 587:

struct Point {
    int x;
    int y;
    double angle;
    int dist;
    bool operator<(const Point& other) {
        return angle < other.angle ||
               (angle == other.angle && dist < other.dist);
    }
};

class Solution {
public:
    vector<vector<int>> outerTrees(vector<vector<int>>& points) {
        const auto& min_iter = min_elemen

Interview problem: Implement unique pointer

template <typename T>
class UniquePtr {
public:
    explicit UniquePtr(T* ptr = nullptr) : ptr_(ptr) {}
    ~UniquePtr() { delete ptr_; }

    // A unique pointer is move-only: copying is disabled.
    UniquePtr(const UniquePtr& other) = delete;
    UniquePtr& operator=(const UniquePtr& other) = delete;

    UniquePtr(UniquePtr&& other) noexcept : ptr_(other.ptr_) {
        other.ptr_ = nullptr;
    }
    UniquePtr& operator=(UniquePtr&& other) noexcept {
        other.swap(*this);
        return *this;
    }

    T& operator*() const { return *ptr_; }
    T* operator->() const { return ptr_; }
    explicit operator bool() const { return ptr_ != nullptr; }

    void swap(UniquePtr& other) noexcept {
        std::swap(ptr_, other.ptr_);
    }

private:
    T* ptr_;
};

Interview problem: Implement shared pointer

template <typename T>
class SharedPtr {
public:
    // Initialize the count to 1 unconditionally so a null SharedPtr is
    // handled uniformly. (Starting from 'new long(0)' and incrementing
    // only for non-null pointers leaks the counter in the null case,
    // since the count drops to -1 and never reaches 0.)
    explicit SharedPtr(T* ptr = nullptr) : ptr_(ptr), count_(new long(1)) {}

    ~SharedPtr() {
        if (count_ && --(*count_) == 0) {
            delete count_;
            delete ptr_;  // deleting a null pointer is a no-op
        }
    }

    SharedPtr(const SharedPtr& other) : ptr_(other.ptr_), count_(other.count_) {
        ++(*count_);
    }
    SharedPtr& operator=(const SharedPtr& other) {
        SharedPtr tmp(other);
        tmp.swap(*this);
        return *this;
    }

    SharedPtr(SharedPtr&& other) noexcept
        : ptr_(other.ptr_), count_(other.count_) {
        other.ptr_ = nullptr;
        other.count_ = nullptr;
    }
    SharedPtr& operator=(SharedPtr&& other) noexcept {
        SharedPtr tmp(std::move(other));
        tmp.swap(*this);
        return *this;
    }

    void swap(SharedPtr& other) noexcept {
        std::swap(ptr_, other.ptr_);
        std::swap(count_, other.count_);
    }

private:
    T* ptr_;
    long* count_;
};