Posts

Showing posts from August, 2020

Interview problems: counter examples in probability

Problems: (1) Zero correlation does not imply independence. (2) For two marginally normal random variables, zero correlation does not imply independence. (3) Pairwise independence does not imply joint independence. (4) For three events A, B, C: A is independent of B and A is independent of C. Is A independent of the intersection of B and C? (5) Convergence in distribution does not imply convergence in probability. (6) Convergence in probability does not imply almost sure convergence. Solution: (1) Let X take the values -1, 0, +1 with equal probability and $Y=X^2$. Then $\mathbb{E}[XY]=\mathbb{E}[X^3]=0=\mathbb{E}[X]\,\mathbb{E}[Y]$, yet Y is completely determined by X. (2) Let X be standard Gaussian, let W = +1 or -1 with equal probability, independent of X, and define Y=WX. Both X and Y are standard Gaussian; they are uncorrelated but dependent. Note: only for jointly Gaussian random variables does zero correlation imply independence. (3) Throw two dice. A: the sum is 7, B: the first die shows 3, C: the second die shows 4. (4) No. Throw a die. A: get an even number, B: get 1 or 2,
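To sanity-check counterexample (2) numerically, here is a minimal C++ Monte Carlo sketch of my own (the seed, sample size, and the helper corr() are not from the post). It draws X ~ N(0,1) and a random sign W, sets Y = WX, and estimates the sample correlation of (X, Y) and of (|X|, |Y|): the first is near zero while the second is essentially 1, so X and Y are uncorrelated yet clearly dependent.

// Monte Carlo check of counterexample (2): X ~ N(0,1), Y = W*X with W = +/-1.
// X and Y are uncorrelated, but |X| and |Y| are perfectly correlated,
// so X and Y cannot be independent.
#include <cmath>
#include <iostream>
#include <random>
#include <vector>

// Sample Pearson correlation of two equally long samples.
double corr(const std::vector<double>& a, const std::vector<double>& b) {
    double ma = 0, mb = 0;
    for (size_t i = 0; i < a.size(); ++i) { ma += a[i]; mb += b[i]; }
    ma /= a.size(); mb /= b.size();
    double cov = 0, va = 0, vb = 0;
    for (size_t i = 0; i < a.size(); ++i) {
        cov += (a[i] - ma) * (b[i] - mb);
        va  += (a[i] - ma) * (a[i] - ma);
        vb  += (b[i] - mb) * (b[i] - mb);
    }
    return cov / std::sqrt(va * vb);
}

int main() {
    std::mt19937 gen(42);
    std::normal_distribution<double> gauss(0.0, 1.0);
    std::bernoulli_distribution sign(0.5);

    const int n = 1'000'000;
    std::vector<double> x(n), y(n), ax(n), ay(n);
    for (int i = 0; i < n; ++i) {
        x[i]  = gauss(gen);
        y[i]  = (sign(gen) ? 1.0 : -1.0) * x[i];  // Y = W * X
        ax[i] = std::fabs(x[i]);
        ay[i] = std::fabs(y[i]);
    }
    std::cout << "corr(X, Y)     ~ " << corr(x, y)   << "\n";  // near 0
    std::cout << "corr(|X|, |Y|) ~ " << corr(ax, ay) << "\n";  // exactly 1
    return 0;
}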

Interview problems: Bayes

Problems: Suppose a family has two children: (1) If the first child is a boy, what is the probability that the second child is a boy? (2) If one child is a boy, what is the probability that the other child is a boy? (3) If one child is a boy, you make a random guess of the boy's name and it happens that your guess is correct. Then what is the probability that the other child is a boy (assume two boys can have the same name and a girl cannot have a boy's name)? Solution: (1) 1/2. Independent events. (2) 1/3. There are three events with equal probability: (boy, boy), (boy, girl) and (girl, boy). Compared to (1), here the one boy can be either the first child or the second child. (3) Let $p$ be the probability that the random guess of the boy's name is correct. In general, $p\ll 1$, \begin{eqnarray}&&\mathbb{P}(\text{correct guess}|\text{two boys})=1-(1-p)^2=2p-p^2\approx 2p\,, \\ &&\mathbb{P}(\text{correct guess}|\text{one boy and one g
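A quick Monte Carlo check of parts (2) and (3) can make the conditioning concrete. The sketch below is my own (the post contains no code): it assumes each child is independently a boy or a girl with probability 1/2 and that each boy's name is drawn uniformly from NUM_NAMES names, so a random name guess matches a given boy with probability p = 1/NUM_NAMES. The first ratio should land near 1/3, matching (2); the second, which additionally conditions on the correct name guess, lands near 1/2 for small p, consistent with combining the likelihoods above (about 2p versus p) with the priors 1/3 and 2/3.

// Monte Carlo sketch for the two-children problems (2) and (3).
// Assumptions (mine): each child is a boy with probability 1/2, and each boy's
// name is uniform over NUM_NAMES names, so a random guess of a boy's name is
// correct for a given boy with probability p = 1/NUM_NAMES.
#include <iostream>
#include <random>

int main() {
    std::mt19937 gen(7);
    std::bernoulli_distribution is_boy(0.5);
    const int NUM_NAMES = 100;  // p = 0.01
    std::uniform_int_distribution<int> name(0, NUM_NAMES - 1);

    long long one_boy = 0, two_boys_given_one = 0;
    long long guess_ok = 0, two_boys_given_guess = 0;

    for (long long t = 0; t < 5'000'000; ++t) {
        bool b1 = is_boy(gen), b2 = is_boy(gen);
        if (!b1 && !b2) continue;              // condition for (2): at least one boy
        ++one_boy;
        if (b1 && b2) ++two_boys_given_one;

        int n1 = b1 ? name(gen) : -1;          // boys' names (-1: girl, no boy name)
        int n2 = b2 ? name(gen) : -1;
        int guess = name(gen);                 // random guess of a boy's name
        if (n1 == guess || n2 == guess) {      // condition for (3): guess is correct
            ++guess_ok;
            if (b1 && b2) ++two_boys_given_guess;
        }
    }
    std::cout << "P(other is boy | one is boy)           ~ "
              << static_cast<double>(two_boys_given_one) / one_boy << "\n";    // ~1/3
    std::cout << "P(other is boy | one is boy, guess ok) ~ "
              << static_cast<double>(two_boys_given_guess) / guess_ok << "\n"; // ~1/2
    return 0;
}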

Rank of a matrix

We start by defining the column (or row) rank of a matrix as the maximum number of linearly independent columns (or rows) in the matrix. It was not obvious to me when I learned linear algebra that the row rank and the column rank are equal. Thanks to this equality, we can simply speak of the rank of a matrix.    The proof of row rank = column rank:  Suppose the column rank of a matrix $\mathbf{A}\in \mathbb{R}^{m\times n}$ is $r$. Then we can write the linear expansion $\mathbf{A} = \mathbf{X}\mathbf{Y}$, where the $r$ columns of $\mathbf{X}\in \mathbb{R}^{m\times r}$ are linearly independent and $\mathbf{Y}\in \mathbb{R}^{r\times n}$ contains the expansion coefficients: $\mathbf{A}[:, j]=\sum_{k=1}^r \mathbf{X}[:, k]\,\mathbf{Y}[k, j]$. The factorization $\mathbf{A} = \mathbf{X}\mathbf{Y}$ can also be read as a linear expansion of rows: $\mathbf{A}[i, :] = \sum_{k=1}^r \mathbf{X}[i, k]\,\mathbf{Y}[k, :]$, i.e., every row of $\mathbf{A}$ is a linear combination of the $r$ rows of $\mathbf{Y}$. As a result, $\text{row rank}(\mathbf{A})\leq r = \text{column rank}(\mathbf{A})$. By applying the same
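To make the expansion $\mathbf{A}=\mathbf{X}\mathbf{Y}$ concrete, here is a small numerical example of my own (not from the post): a $3\times 3$ matrix with column rank $r=2$, whose third column equals $-1$ times the first plus $2$ times the second, \begin{eqnarray} \mathbf{A} = \begin{pmatrix} 1 & 2 & 3 \\ 2 & 4 & 6 \\ 1 & 1 & 1 \end{pmatrix} = \underbrace{\begin{pmatrix} 1 & 2 \\ 2 & 4 \\ 1 & 1 \end{pmatrix}}_{\mathbf{X}\in\mathbb{R}^{3\times 2}} \underbrace{\begin{pmatrix} 1 & 0 & -1 \\ 0 & 1 & 2 \end{pmatrix}}_{\mathbf{Y}\in\mathbb{R}^{2\times 3}}\,. \end{eqnarray} Read row-wise, every row of $\mathbf{A}$ is $\mathbf{X}[i,1]\,\mathbf{Y}[1,:]+\mathbf{X}[i,2]\,\mathbf{Y}[2,:]$, a combination of only $r=2$ rows of $\mathbf{Y}$, so the row rank is at most $2$, the column rank.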

Algorithms for the shortest path

Dijkstra's algorithm  Given a non-negatively weighted graph $G(V, E)$, Dijkstra's algorithm finds the shortest path from a source node $s$ to a target node $t$. As shown in Figure 1, Dijkstra's algorithm repeatedly picks the unvisited node with the shortest distance to $s$ and updates the distances (if smaller) through it to its unvisited neighbors. Because the weights are non-negative, adding an edge can never make a path shorter.  Figure 1: Pick the unvisited (yellow) node $v$ with the shortest distance to $s$. Credit: Keith's lecture slide 02 of CS 161.    The pseudocode of Dijkstra's algorithm is as follows. My implementation in C++, which solves Leetcode 743, is as follows:

class Solution {
public:
    int networkDelayTime(vector<vector<int>>& times, int N, int K) {
        vector<vector<int>> graph(N, v
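A complete priority-queue version of the same idea, sketched under my own variable names and adjacency-list layout (not necessarily the post's original code), looks like this:

// Sketch: Dijkstra with a min-priority queue for Leetcode 743.
// times[i] = {u, v, w} is a directed edge u -> v with weight w; nodes are 1..N.
// Returns the time for a signal from K to reach all nodes, or -1 if unreachable.
#include <algorithm>
#include <climits>
#include <queue>
#include <vector>
using namespace std;

class Solution {
public:
    int networkDelayTime(vector<vector<int>>& times, int N, int K) {
        // Adjacency list: graph[u] holds (neighbor, weight) pairs.
        vector<vector<pair<int,int>>> graph(N + 1);
        for (const auto& e : times) graph[e[0]].push_back({e[1], e[2]});

        vector<int> dist(N + 1, INT_MAX);
        dist[K] = 0;
        // Min-heap of (tentative distance, node), smallest distance on top.
        priority_queue<pair<int,int>, vector<pair<int,int>>, greater<>> pq;
        pq.push({0, K});

        while (!pq.empty()) {
            auto [d, u] = pq.top(); pq.pop();
            if (d > dist[u]) continue;           // stale queue entry, skip
            for (auto [v, w] : graph[u]) {
                if (dist[u] + w < dist[v]) {     // relax edge u -> v
                    dist[v] = dist[u] + w;
                    pq.push({dist[v], v});
                }
            }
        }

        int ans = 0;
        for (int v = 1; v <= N; ++v) ans = max(ans, dist[v]);
        return ans == INT_MAX ? -1 : ans;
    }
};

With a binary heap this runs in O(E log V); skipping stale queue entries plays the role of the explicit "visited" check in the figure.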