Exercises

ex-ch02-01

Easy

A BPSK receiver observes a signal $r$ and must decide between hypotheses $H_0$ (bit $0$ sent) and $H_1$ (bit $1$ sent). The channel model is $r = (-1)^{b}\,A + n$, where $A = 1$, $b \in \{0,1\}$ is the transmitted bit, and $n \sim \mathcal{N}(0,\sigma^2)$ with $\sigma^2 = 1$. The prior probabilities are $P(H_0) = 0.6$ and $P(H_1) = 0.4$.

Given an observation $r = 0.5$, use Bayes' theorem to compute the posterior probabilities $P(H_0 \mid r = 0.5)$ and $P(H_1 \mid r = 0.5)$.
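A quick numerical check of the Bayes computation (a standard-library Python sketch; the variable names are illustrative, not part of the exercise):

```python
import math

A, sigma2 = 1.0, 1.0
p0, p1 = 0.6, 0.4          # priors P(H0), P(H1)
r = 0.5

def gauss_pdf(x, mean, var):
    """Gaussian likelihood N(x; mean, var)."""
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

# Under H0 (b = 0): r = +A + n; under H1 (b = 1): r = -A + n
lik0 = gauss_pdf(r, +A, sigma2)
lik1 = gauss_pdf(r, -A, sigma2)

evidence = p0 * lik0 + p1 * lik1      # total probability P(r)
post0 = p0 * lik0 / evidence          # P(H0 | r)
post1 = p1 * lik1 / evidence          # P(H1 | r)
```

Here `post0` comes out near 0.80: both the prior and the likelihood at $r = 0.5$ favor $H_0$.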

ex-ch02-02

Easy

The inter-arrival time of packets at a network node is modeled as an exponential random variable $X$ with rate parameter $\lambda = 2$ packets/ms.

(a) Write the PDF $f_X(x)$ and CDF $F_X(x)$.

(b) Compute the mean $E[X]$ and variance $\operatorname{Var}(X)$.

(c) Find $P(X > 1\ \text{ms})$.
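The closed-form answers to (b) and (c) can be sanity-checked in a few lines (standard-library Python; a sketch, not an official solution):

```python
import math

lam = 2.0                       # rate λ, packets/ms

# Exponential distribution closed forms
mean = 1 / lam                  # E[X] = 1/λ
var = 1 / lam ** 2              # Var(X) = 1/λ²
tail = math.exp(-lam * 1.0)     # P(X > 1) = e^{-λ·1}

# Cross-check the tail against the CDF F(x) = 1 - e^{-λx}
def cdf(x):
    return 1 - math.exp(-lam * x)
```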

ex-ch02-03

Easy

Let $X \sim \mathcal{N}(0, \sigma^2)$ model the noise voltage at a receiver front-end, with $\sigma^2 = 4$. Define the instantaneous power as $Y = X^2$.

Using the law of the unconscious statistician (LOTUS), compute $E[Y]$ and $E[Y^2]$, then find $\operatorname{Var}(Y)$.
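The LOTUS results rest on the zero-mean Gaussian moments $E[X^2] = \sigma^2$ and $E[X^4] = 3\sigma^4$; a short Monte Carlo check (Python, arbitrary seed and sample size):

```python
import random, statistics

random.seed(1)
sigma = 2.0                    # so σ² = 4

# Closed forms via LOTUS: E[Y] = σ², E[Y²] = E[X⁴] = 3σ⁴
ey = sigma ** 2                # E[Y]  = 4
ey2 = 3 * sigma ** 4           # E[Y²] = 48
var_y = ey2 - ey ** 2          # Var(Y) = 2σ⁴ = 32

# Monte Carlo sanity check of E[Y]
samples = [random.gauss(0, sigma) ** 2 for _ in range(200_000)]
mc_mean = statistics.fmean(samples)
```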

ex-ch02-04

Easy

Let $(X, Y)$ be a jointly Gaussian random vector with joint PDF
$$f_{X,Y}(x,y) = \frac{1}{2\pi\sigma_X \sigma_Y \sqrt{1-\rho^2}} \exp\!\left(-\frac{1}{2(1-\rho^2)} \left[\frac{x^2}{\sigma_X^2} - \frac{2\rho\,x\,y}{\sigma_X \sigma_Y} + \frac{y^2}{\sigma_Y^2}\right]\right),$$
where $\sigma_X = 1$, $\sigma_Y = 2$, $\rho = 0.5$, and both means are zero.

Obtain the marginal PDF $f_X(x)$ by integrating over $y$.
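The expected answer is that the marginal is simply $\mathcal{N}(0, \sigma_X^2)$. That can be confirmed numerically by trapezoidal integration over $y$ (standard-library Python; the grid limits and step are my own choices):

```python
import math

sx, sy, rho = 1.0, 2.0, 0.5

def joint_pdf(x, y):
    """Zero-mean bivariate Gaussian density."""
    q = (x**2 / sx**2 - 2 * rho * x * y / (sx * sy) + y**2 / sy**2) / (1 - rho**2)
    return math.exp(-q / 2) / (2 * math.pi * sx * sy * math.sqrt(1 - rho**2))

def marginal_x(x, lo=-20.0, hi=20.0, n=4000):
    """Trapezoidal integration of the joint PDF over y."""
    h = (hi - lo) / n
    total = 0.5 * (joint_pdf(x, lo) + joint_pdf(x, hi))
    total += sum(joint_pdf(x, lo + k * h) for k in range(1, n))
    return total * h

def normal_pdf(x, s):
    """Target marginal: N(0, s²)."""
    return math.exp(-x**2 / (2 * s**2)) / (s * math.sqrt(2 * math.pi))
```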

ex-ch02-05

Easy

Packets arrive at a base station according to a Poisson process with rate $\lambda = 5$ packets per second.

(a) What is the probability that exactly $3$ packets arrive in a $1$-second interval?

(b) What is the probability that no packets arrive in a $0.5$-second interval?

(c) Find the expected number of packets in a $10$-second window and its variance.
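All three parts reduce to evaluating the Poisson PMF with mean $\lambda T$ for the relevant window; a quick check (standard-library Python sketch):

```python
import math

lam = 5.0                                  # packets per second

def poisson_pmf(k, mu):
    """P(N = k) for a Poisson(μ) count."""
    return math.exp(-mu) * mu ** k / math.factorial(k)

p_a = poisson_pmf(3, lam * 1.0)            # (a) exactly 3 in 1 s
p_b = poisson_pmf(0, lam * 0.5)            # (b) none in 0.5 s
mean_c = var_c = lam * 10.0                # (c) mean = variance = λT = 50
```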

ex-ch02-06

Medium

A multi-stage detection system processes a received signal through two successive classifiers. The first classifier $C_1$ declares "signal present" ($D_1$) with probability $P(D_1 \mid H_1) = 0.95$ when a signal is truly present, and has a false alarm rate $P(D_1 \mid H_0) = 0.10$. The second classifier $C_2$ takes the output of $C_1$ and refines it: $P(D_2 \mid D_1, H_1) = 0.98$ and $P(D_2 \mid D_1, H_0) = 0.05$.

If $C_1$ declares "no signal" ($\bar{D}_1$), the system outputs $\bar{D}_2$ regardless. The prior probability of signal presence is $P(H_1) = 0.01$.

Using the total probability theorem and Bayes' theorem, find:

(a) $P(D_2)$, the overall probability that the system declares "signal present."

(b) $P(H_1 \mid D_2)$, the posterior probability that a signal is truly present given a positive declaration.
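The chained structure means $P(D_2 \mid H_i) = P(D_2 \mid D_1, H_i)\,P(D_1 \mid H_i)$, after which (a) and (b) are one line each; a numerical check (Python sketch, names illustrative):

```python
# Priors
p_h1 = 0.01
p_h0 = 1 - p_h1

# Classifier statistics from the problem statement
p_d1_h1, p_d1_h0 = 0.95, 0.10
p_d2_d1h1, p_d2_d1h0 = 0.98, 0.05

# D2 requires D1 first, so the conditionals chain
p_d2_h1 = p_d2_d1h1 * p_d1_h1
p_d2_h0 = p_d2_d1h0 * p_d1_h0

p_d2 = p_d2_h1 * p_h1 + p_d2_h0 * p_h0     # (a) total probability
post = p_d2_h1 * p_h1 / p_d2               # (b) Bayes' theorem
```

Even with two stages, the low prior keeps the posterior well below certainty — a base-rate effect worth noting in part (b).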

ex-ch02-07

Medium

Let $X$ and $Y$ be independent, zero-mean Gaussian random variables, each with variance $\sigma^2$, representing the in-phase and quadrature components of narrowband noise. Define the envelope $R = \sqrt{X^2 + Y^2}$.

Derive the PDF of $R$ (the Rayleigh distribution) from first principles.
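A Monte Carlo spot-check of the derived result: the Rayleigh PDF implies $E[R] = \sigma\sqrt{\pi/2}$ (a sketch with $\sigma = 1$ and an arbitrary seed, not part of the derivation itself):

```python
import math, random

random.seed(7)
sigma = 1.0
n = 200_000

# Envelope samples R = sqrt(X² + Y²) with X, Y ~ N(0, σ²) independent
r = [math.hypot(random.gauss(0, sigma), random.gauss(0, sigma)) for _ in range(n)]

emp_mean = sum(r) / n
rayleigh_mean = sigma * math.sqrt(math.pi / 2)   # follows from the Rayleigh PDF
```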

ex-ch02-08

Medium

Let $X_1, X_2, \ldots, X_n$ be independent and identically distributed exponential random variables with rate $\lambda$, modeling the inter-arrival times of packets. Define the total waiting time for $n$ arrivals as $S_n = \sum_{i=1}^n X_i$.

(a) Derive the moment generating function (MGF) $M_{S_n}(t)$.

(b) Identify the distribution of $S_n$ from the MGF.

(c) Specialize to $n = 3$, $\lambda = 2$ and compute $E[S_3]$ and $\operatorname{Var}(S_3)$.
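The MGF from part (a) should be $M_{S_n}(t) = (\lambda/(\lambda - t))^n$ for $t < \lambda$. One way to check part (c) is to differentiate this MGF numerically at $t = 0$ (a Python sketch; the finite-difference step is my own choice):

```python
lam, n = 2.0, 3

def mgf(t):
    """MGF of the Erlang(n, λ) sum: (λ / (λ - t))^n, valid for t < λ."""
    return (lam / (lam - t)) ** n

# Recover moments by central finite differences of the MGF at t = 0
h = 1e-5
m1 = (mgf(h) - mgf(-h)) / (2 * h)                # E[S3]   = n/λ = 1.5
m2 = (mgf(h) - 2 * mgf(0) + mgf(-h)) / h ** 2    # E[S3²]
var = m2 - m1 ** 2                               # Var(S3) = n/λ² = 0.75
```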

ex-ch02-09

Medium

A random vector $\mathbf{X} = [X_1, X_2]^T$ has zero mean and covariance matrix
$$\mathbf{C}_{\mathbf{X}} = \begin{bmatrix} 5 & 3 \\ 3 & 5 \end{bmatrix}.$$

(a) Find the eigenvalues and eigenvectors of $\mathbf{C}_{\mathbf{X}}$.

(b) Define the Karhunen-Loève transform (KLT) $\mathbf{Y} = \mathbf{Q}^T \mathbf{X}$, where $\mathbf{Q}$ is the matrix of eigenvectors. Show that the components of $\mathbf{Y}$ are uncorrelated.

(c) Interpret the result in the context of decorrelating received signal components in a $2 \times 2$ MIMO system.
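For a symmetric matrix of the form $\begin{bmatrix} a & b \\ b & a \end{bmatrix}$ the eigenpairs are $\lambda = a \pm b$ with eigenvectors $[1, \pm 1]^T/\sqrt{2}$, so here $\lambda_1 = 8$, $\lambda_2 = 2$. A small check that $\mathbf{Q}^T \mathbf{C}_{\mathbf{X}} \mathbf{Q}$ is diagonal (plain Python, no external libraries):

```python
import math

C = [[5.0, 3.0], [3.0, 5.0]]

# Eigenvectors of [[a, b], [b, a]]: columns [1, 1]/√2 and [1, -1]/√2
s = 1 / math.sqrt(2)
Q = [[s, s], [s, -s]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(A):
    return [[A[j][i] for j in range(2)] for i in range(2)]

# Covariance of Y = Qᵀ X is Qᵀ C Q — should be diag(8, 2)
C_Y = matmul(matmul(transpose(Q), C), Q)
```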

ex-ch02-10

Medium

Let $X_1, X_2, \ldots, X_n$ be i.i.d. uniform random variables on $[0, 1]$. Define $S_n = \sum_{i=1}^n X_i$.

(a) Compute the exact mean and variance of $S_n$.

(b) Using the Central Limit Theorem, approximate $P(S_{12} > 7)$.

(c) Compare with the exact value (noting that $S_{12}$ has an Irwin-Hall distribution) and comment on the accuracy of the CLT approximation.
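Both tails can be computed directly: the CLT approximation uses $S_{12} \approx \mathcal{N}(6, 1)$, and the Irwin-Hall CDF has the standard finite-sum form used below (a Python sketch for comparing (b) and (c)):

```python
import math

n, x = 12, 7.0

# (b) CLT: S12 has mean n/2 = 6 and variance n/12 = 1
mean, std = n / 2, math.sqrt(n / 12)
z = (x - mean) / std
clt_tail = 0.5 * math.erfc(z / math.sqrt(2))       # Q(1)

# (c) Exact Irwin-Hall CDF: F(x) = (1/n!) Σ_{k=0}^{⌊x⌋} (-1)^k C(n,k) (x-k)^n
cdf = sum((-1) ** k * math.comb(n, k) * (x - k) ** n
          for k in range(int(x) + 1)) / math.factorial(n)
exact_tail = 1 - cdf
```

With $n = 12$ the sum of uniforms is already close to Gaussian, so the two tails should agree to a few parts in a thousand.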

ex-ch02-11

Medium

A wide-sense stationary (WSS) random process $X(t)$ with autocorrelation $R_X(\tau) = \sigma^2 e^{-\alpha|\tau|}$ is passed through a causal LTI filter with impulse response $h(t) = \beta\,e^{-\beta t}\,u(t)$, where $u(t)$ is the unit step function.

Let $Y(t)$ denote the output process. Set $\sigma^2 = 1$, $\alpha = 1$, and $\beta = 2$.

(a) Find the power spectral density $S_X(f)$ of the input.

(b) Compute the transfer function $H(f)$ and the output PSD $S_Y(f)$.

(c) Find the output power $E[Y^2(t)]$.
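Carrying the integral in (c) through partial fractions gives the closed form $E[Y^2] = \sigma^2\beta/(\alpha+\beta)$, which is $2/3$ for these parameters; the sketch below checks it by numerically integrating $S_Y(f) = |H(f)|^2 S_X(f)$ (the integration band and step are my own choices):

```python
import math

sigma2, alpha, beta = 1.0, 1.0, 2.0

def S_X(f):
    """Fourier transform of σ²e^{-α|τ|}: 2ασ² / (α² + (2πf)²)."""
    w = 2 * math.pi * f
    return 2 * alpha * sigma2 / (alpha**2 + w**2)

def H2(f):
    """|H(f)|² for h(t) = βe^{-βt}u(t): β² / (β² + (2πf)²)."""
    w = 2 * math.pi * f
    return beta**2 / (beta**2 + w**2)

# Output power = ∫ S_Y(f) df by trapezoid over a wide band (tails decay as 1/f⁴)
lo, hi, n = -200.0, 200.0, 200_000
step = (hi - lo) / n
power = 0.5 * (S_X(lo) * H2(lo) + S_X(hi) * H2(hi))
power += sum(S_X(lo + k * step) * H2(lo + k * step) for k in range(1, n))
power *= step

closed_form = sigma2 * beta / (alpha + beta)   # = 2/3 here
```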

ex-ch02-12

Medium

A wireless channel alternates among three states: Good ($G$), Fair ($F$), and Bad ($B$). The one-step transition probability matrix is
$$\mathbf{P} = \begin{bmatrix} 0.7 & 0.2 & 0.1 \\ 0.3 & 0.5 & 0.2 \\ 0.1 & 0.3 & 0.6 \end{bmatrix},$$
where rows and columns are ordered $(G, F, B)$.

(a) Verify that $\mathbf{P}$ is a valid stochastic matrix.

(b) Find the stationary distribution $\boldsymbol{\pi} = [\pi_G, \pi_F, \pi_B]$ by solving $\boldsymbol{\pi}\mathbf{P} = \boldsymbol{\pi}$ together with $\pi_G + \pi_F + \pi_B = 1$.

(c) Interpret the stationary distribution in terms of long-run channel availability.
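Solving the linear system by hand gives $\boldsymbol{\pi} = [14, 11, 9]/34$; power iteration offers an independent numerical check (plain Python, iteration count arbitrary):

```python
P = [[0.7, 0.2, 0.1],
     [0.3, 0.5, 0.2],
     [0.1, 0.3, 0.6]]

# (a) every row of a stochastic matrix must sum to 1 (entries are non-negative)
row_sums = [sum(row) for row in P]

# (b) power iteration: repeatedly apply π ← πP until it stops changing
pi = [1 / 3] * 3
for _ in range(200):
    pi = [sum(pi[i] * P[i][j] for i in range(3)) for j in range(3)]

# Residual of the stationarity condition πP = π
residual = max(abs(sum(pi[i] * P[i][j] for i in range(3)) - pi[j])
               for j in range(3))
```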

ex-ch02-13

Medium

Two independent Poisson processes model packet arrivals from two user classes at a base station: Class A with rate $\lambda_A = 3$ packets/s and Class B with rate $\lambda_B = 7$ packets/s.

(a) Show that the merged (superposed) arrival process $N(t) = N_A(t) + N_B(t)$ is also a Poisson process, and find its rate.

(b) Given that a packet arrives, what is the probability it belongs to Class A?

(c) Compute the probability that exactly $2$ Class-A packets and exactly $3$ Class-B packets arrive in a $1$-second interval.
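Since the two processes are independent, part (c) factors into a product of two Poisson PMFs; a numerical check of (a)-(c) (Python sketch):

```python
import math

lam_a, lam_b = 3.0, 7.0

def poisson_pmf(k, mu):
    """P(N = k) for a Poisson(μ) count."""
    return math.exp(-mu) * mu ** k / math.factorial(k)

lam_merged = lam_a + lam_b                 # (a) superposition rate λ_A + λ_B
p_class_a = lam_a / lam_merged             # (b) λ_A / (λ_A + λ_B)

# (c) independence lets the joint probability factor
p_joint = poisson_pmf(2, lam_a * 1.0) * poisson_pmf(3, lam_b * 1.0)
```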

ex-ch02-14

Hard

Consider binary hypothesis testing for detecting a signal in noise:
$$H_0: r = n, \qquad H_1: r = A + n,$$
where $A > 0$ is a known amplitude and $n \sim \mathcal{N}(0, \sigma^2)$. The prior probabilities are $P(H_0) = 1 - p$ and $P(H_1) = p$ with $p \neq 0.5$.

(a) Derive the MAP (Maximum A Posteriori) decision rule and show that the optimal threshold is
$$\gamma_{\text{MAP}} = \frac{A}{2} + \frac{\sigma^2}{A}\ln\frac{1-p}{p}.$$

(b) Show that when $p = 0.5$ the MAP rule reduces to the ML rule with threshold $A/2$.

(c) For $A = 2$, $\sigma^2 = 1$, and $p = 0.3$, compute the MAP threshold and the resulting probability of error $P_e$.
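Part (c) can be evaluated directly from the threshold formula and the Gaussian tail function (a Python sketch; `Q` is implemented via `math.erfc`):

```python
import math

A, sigma2, p = 2.0, 1.0, 0.3
sigma = math.sqrt(sigma2)

def Q(x):
    """Gaussian tail function Q(x) = P(Z > x) for Z ~ N(0, 1)."""
    return 0.5 * math.erfc(x / math.sqrt(2))

# MAP threshold from part (a)
gamma = A / 2 + (sigma2 / A) * math.log((1 - p) / p)

# Decide H1 when r > γ:
# Pe = P(H0)·P(r > γ | H0) + P(H1)·P(r < γ | H1)
pe = (1 - p) * Q(gamma / sigma) + p * Q((A - gamma) / sigma)
```

Note that $\gamma > A/2$ here: the prior favors $H_0$, so the threshold shifts toward the $H_1$ side.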

ex-ch02-15

Hard

Let $X_1, X_2, \ldots, X_n$ be i.i.d. random variables with MGF $M_X(t) = E[e^{tX}]$. Consider the tail probability $P(S_n \ge n\,a)$, where $S_n = \sum_{i=1}^n X_i$ and $a > E[X]$.

(a) Derive the Chernoff bound:
$$P(S_n \ge n\,a) \le e^{-n\,\Lambda^*(a)},$$
where $\Lambda^*(a) = \sup_{t \ge 0}\{ta - \ln M_X(t)\}$ is the Fenchel-Legendre transform (rate function) of the log-MGF.

(b) Specialize to $X_i \sim \mathcal{N}(0, \sigma^2)$ and show that the Chernoff bound on $P(\bar{X}_n \ge a)$ (where $\bar{X}_n = S_n/n$) yields
$$P(\bar{X}_n \ge a) \le e^{-na^2/(2\sigma^2)}.$$

(c) Apply to BER analysis: if $X_i$ represents the decision statistic error for the $i$-th bit with $\sigma^2 = 1$, find the bound on $P(\bar{X}_{100} \ge 0.5)$.
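In the Gaussian case the exact tail is also available ($\bar{X}_n \sim \mathcal{N}(0, \sigma^2/n)$, so $P(\bar{X}_n \ge a) = Q(a\sqrt{n}/\sigma)$), which makes the looseness of the bound visible (a Python sketch for part (c)):

```python
import math

n, a, sigma2 = 100, 0.5, 1.0

# (b)-(c) Gaussian Chernoff bound on P(X̄_n ≥ a)
bound = math.exp(-n * a ** 2 / (2 * sigma2))

# Exact tail for comparison: X̄_n ~ N(0, σ²/n), so P(X̄_n ≥ a) = Q(a√n/σ)
exact = 0.5 * math.erfc(a * math.sqrt(n / sigma2) / math.sqrt(2))
```

The bound has the right exponential order but overshoots the exact tail by roughly an order of magnitude, which is typical of Chernoff bounds.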

ex-ch02-16

Hard

Let $X(t)$ be a wide-sense stationary (WSS) process with mean $\mu_X$ and autocorrelation $R_X(\tau)$. The process is passed through a stable LTI system with impulse response $h(t)$ to produce $Y(t) = \int_{-\infty}^{\infty} h(\alpha)\,X(t - \alpha)\,d\alpha$.

Prove that $Y(t)$ is also WSS by showing:

(a) $E[Y(t)]$ is constant (independent of $t$).

(b) $R_Y(t_1, t_2)$ depends only on $\tau = t_1 - t_2$.

(c) Derive the relationship $R_Y(\tau) = h(\tau) * h(-\tau) * R_X(\tau)$, or equivalently $S_Y(f) = |H(f)|^2 S_X(f)$.
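Part (c) implies $R_Y(0) = \int\!\!\int h(a)\,h(b)\,R_X(b-a)\,da\,db$, which can be spot-checked numerically. The sketch below borrows the concrete example of ex-ch02-11 ($R_X(\tau) = e^{-|\tau|}$, $h(t) = 2e^{-2t}u(t)$ — an assumption for this check, not part of the statement), whose closed-form output power is $\sigma^2\beta/(\alpha+\beta) = 2/3$:

```python
import math

alpha, beta = 1.0, 2.0                # R_X(τ) = e^{-ατ|}, h(t) = βe^{-βt}u(t)

# 2-D midpoint rule on [0, 8]²: h(t) has decayed to ~e^{-16} by t = 8
m, hi = 800, 8.0
dt = hi / m
t = [(k + 0.5) * dt for k in range(m)]
h_vals = [beta * math.exp(-beta * tk) for tk in t]

# R_Y(0) = ∫∫ h(a) h(b) R_X(b - a) da db
ry0 = 0.0
for i in range(m):
    for j in range(m):
        ry0 += h_vals[i] * h_vals[j] * math.exp(-alpha * abs(t[j] - t[i]))
ry0 *= dt * dt
```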

ex-ch02-17

Hard

The Gilbert-Elliott channel is a two-state Markov chain with states $G$ (Good) and $B$ (Bad). The transition matrix is
$$\mathbf{P} = \begin{bmatrix} 1 - b & b \\ g & 1 - g \end{bmatrix},$$
where $b = P(G \to B)$ and $g = P(B \to G)$. Set $b = 0.1$ and $g = 0.4$.

(a) Find the stationary distribution $[\pi_G, \pi_B]$.

(b) Verify the Chapman-Kolmogorov equation by computing $\mathbf{P}^{(2)}$ directly and also as $\mathbf{P}^2$.

(c) Compute $\mathbf{P}^{(n)}$ for general $n$ using eigendecomposition and verify convergence to the stationary distribution.
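For a two-state chain the eigendecomposition yields a closed form: the eigenvalues are $1$ and $\lambda = 1 - b - g$, the stationary distribution is $[g, b]/(b+g)$, and each entry of $\mathbf{P}^{(n)}$ is the stationary value plus a $\lambda^n$ correction. A check of (a)-(c) (plain Python sketch):

```python
b, g = 0.1, 0.4
pi_g, pi_b = g / (b + g), b / (b + g)      # (a) stationary distribution [0.8, 0.2]
lam = 1 - b - g                            # second eigenvalue of P

def P_n(n):
    """Closed-form n-step matrix from the two-state eigendecomposition."""
    return [[pi_g + pi_b * lam**n, pi_b - pi_b * lam**n],
            [pi_g - pi_g * lam**n, pi_b + pi_g * lam**n]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = P_n(1)                                 # recovers [[1-b, b], [g, 1-g]]
P2_direct = matmul(P, P)                   # (b) Chapman-Kolmogorov: P^(2) = P·P
P2_closed = P_n(2)
P_large = P_n(50)                          # (c) rows converge to [π_G, π_B]
```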

ex-ch02-18

Hard

Let $\mathbf{X}$ and $\mathbf{Y}$ be jointly Gaussian random vectors with $\mathbf{X} \sim \mathcal{N}(\boldsymbol{\mu}_X, \mathbf{C}_{XX})$, $\mathbf{Y} \sim \mathcal{N}(\boldsymbol{\mu}_Y, \mathbf{C}_{YY})$, and cross-covariance $\mathbf{C}_{XY} = E[(\mathbf{X} - \boldsymbol{\mu}_X)(\mathbf{Y} - \boldsymbol{\mu}_Y)^T]$.

Consider the linear MMSE (minimum mean-square error) estimator $\hat{\mathbf{X}} = \mathbf{A}\mathbf{Y} + \mathbf{b}$ that minimizes $E[\|\mathbf{X} - \hat{\mathbf{X}}\|^2]$.

(a) Derive the optimal $\mathbf{A}$ and $\mathbf{b}$.

(b) Show that the MMSE is $\text{MMSE} = \operatorname{tr}(\mathbf{C}_{XX} - \mathbf{C}_{XY}\mathbf{C}_{YY}^{-1}\mathbf{C}_{YX})$.

(c) Specialize to scalar $X$ and $Y$ with $\mu_X = \mu_Y = 0$, $\sigma_X^2 = 4$, $\sigma_Y^2 = 1$, and correlation coefficient $\rho = 0.8$. Compute the estimator and the MMSE.
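In the scalar case the general formulas collapse to $A = \rho\sigma_X/\sigma_Y$ and $\text{MMSE} = \sigma_X^2(1-\rho^2)$; a Monte Carlo check of part (c) (Python sketch with an arbitrary seed; the correlated pair is generated via $X = \rho(\sigma_X/\sigma_Y)Y + W$):

```python
import math, random

random.seed(42)
sx2, sy2, rho = 4.0, 1.0, 0.8
sx, sy = math.sqrt(sx2), math.sqrt(sy2)

# (c) scalar LMMSE: A = C_XY / C_YY = ρσ_Xσ_Y / σ_Y² = 1.6, b = 0
A = rho * sx * sy / sy2
mmse = sx2 * (1 - rho ** 2)                # = 4 · 0.36 = 1.44

# Monte Carlo: draw correlated (X, Y) and measure the MSE of x_hat = A·y
n = 200_000
sq_err = 0.0
for _ in range(n):
    y = random.gauss(0, sy)
    x = rho * (sx / sy) * y + random.gauss(0, sx * math.sqrt(1 - rho ** 2))
    sq_err += (x - A * y) ** 2
emp_mse = sq_err / n
```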

ex-ch02-19

Challenge

Consider a MIMO channel $\mathbf{y} = \mathbf{H}\mathbf{x} + \mathbf{n}$, where $\mathbf{H} \in \mathbb{C}^{N_r \times N_t}$ is a random channel matrix, $\mathbf{x} \in \mathbb{C}^{N_t}$ is the transmitted signal with covariance $\mathbf{C}_x = E[\mathbf{x}\mathbf{x}^H]$ satisfying $\operatorname{tr}(\mathbf{C}_x) \le P$, and $\mathbf{n} \sim \mathcal{CN}(\mathbf{0}, \sigma^2\mathbf{I}_{N_r})$.

Using the probability and linear algebra tools from Chapters 1–2:

(a) Derive the mutual information $I(\mathbf{x}; \mathbf{y} \mid \mathbf{H})$ for a fixed channel realization, expressing it in terms of the eigenvalues of $\mathbf{H}\mathbf{C}_x\mathbf{H}^H$.

(b) When the transmitter has no CSI, show that the optimal input covariance is $\mathbf{C}_x = \frac{P}{N_t}\mathbf{I}_{N_t}$ (uniform power allocation), and derive the resulting capacity formula
$$C = \sum_{i=1}^{\min(N_r, N_t)} \log_2\!\left(1 + \frac{P}{N_t \sigma^2}\,\lambda_i\right),$$
where $\lambda_i$ are the eigenvalues of $\mathbf{H}\mathbf{H}^H$.

(c) For a $2 \times 2$ channel with $\mathbf{H} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}$, $P = 10$, and $\sigma^2 = 1$, compute the capacity numerically.
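Part (c) only needs the eigenvalues of the $2 \times 2$ Gram matrix, which follow from its trace and determinant (a plain-Python sketch; $\mathbf{H}$ is real here, so $\mathbf{H}^H = \mathbf{H}^T$):

```python
import math

P_tot, sigma2, Nt = 10.0, 1.0, 2
H = [[2.0, 1.0], [1.0, 2.0]]

# Gram matrix G = H Hᵀ
G = [[sum(H[i][k] * H[j][k] for k in range(2)) for j in range(2)]
     for i in range(2)]

# Eigenvalues of a symmetric 2×2 matrix via trace and determinant
tr = G[0][0] + G[1][1]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
disc = math.sqrt(tr ** 2 / 4 - det)
eigs = [tr / 2 + disc, tr / 2 - disc]      # should be [9, 1]

# Capacity with uniform power allocation P/Nt per antenna
C = sum(math.log2(1 + P_tot / (Nt * sigma2) * lam) for lam in eigs)
```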

ex-ch02-20

Challenge

Consider a stationary ergodic random process $X(t)$ that represents the indicator of bit error at time $t$ for a communication link: $X(t) = 1$ if the bit transmitted at time $t$ is received in error, and $X(t) = 0$ otherwise.

The ensemble-average BER is $\bar{p} = E[X(t)]$, and the time-averaged BER measured over an observation window $[0, T]$ is
$$\hat{p}_T = \frac{1}{T}\int_0^T X(t)\,dt.$$

(a) State the conditions under which $\hat{p}_T \to \bar{p}$ as $T \to \infty$ (in the mean-square sense). Relate this to the autocorrelation $R_X(\tau)$.

(b) For a channel whose error process has autocorrelation $R_X(\tau) = \bar{p}^2 + \bar{p}(1 - \bar{p})\,e^{-|\tau|/\tau_c}$ (with correlation time $\tau_c$), derive the variance of $\hat{p}_T$ as a function of $T$ and $\tau_c$.

(c) Determine how large $T$ must be (in terms of $\tau_c$) to ensure $\operatorname{Var}(\hat{p}_T) \le \epsilon^2\,\bar{p}^2$ for a prescribed relative accuracy $\epsilon$.

(d) Discuss when ergodicity fails and the time-averaged BER does not converge to the ensemble average. Give a concrete channel example.
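For reference, part (b) starts from $\operatorname{Var}(\hat{p}_T) = \frac{2}{T^2}\int_0^T (T-\tau)\,C_X(\tau)\,d\tau$ with $C_X(\tau) = \bar{p}(1-\bar{p})e^{-|\tau|/\tau_c}$, whose integral evaluates to $\tau_c\,[T - \tau_c(1 - e^{-T/\tau_c})]$, approaching $2\bar{p}(1-\bar{p})\tau_c/T$ for $T \gg \tau_c$. A sketch evaluating both expressions and the resulting requirement in (c) (the specific $\bar{p}$, $\tau_c$, $\epsilon$ values are illustrative assumptions; the exercise keeps them symbolic):

```python
import math

p_bar, tau_c = 1e-3, 0.01      # illustrative values, not given in the exercise

def var_exact(T):
    """Var(p̂_T) = (2/T²)·∫₀ᵀ (T-τ)·C_X(τ) dτ with C_X(τ) = p̄(1-p̄)e^{-τ/τc};
    the integral evaluates to τc·[T - τc·(1 - e^{-T/τc})]."""
    c0 = p_bar * (1 - p_bar)
    return 2 * c0 * tau_c * (T - tau_c * (1 - math.exp(-T / tau_c))) / T**2

def var_approx(T):
    """Large-T limit: 2·p̄(1-p̄)·τc/T."""
    return 2 * p_bar * (1 - p_bar) * tau_c / T

# (c) setting var_approx(T) = ε²p̄² gives T = 2(1-p̄)τc/(ε²p̄)
eps = 0.1
T_req = 2 * (1 - p_bar) * tau_c / (eps**2 * p_bar)
```

Note how rare errors (small $\bar{p}$) inflate the required window: relative accuracy on a small mean is expensive.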