Exercises

ex-ch02-01

Easy

Compute the differential entropy of $X \sim \text{Uniform}(-a, a)$ for $a > 0$.
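
A derived closed form can be sanity-checked numerically; a minimal sketch assuming NumPy/SciPy (the value of $a$ is arbitrary; entropies in nats):

```python
import numpy as np
from scipy import stats

a = 3.0
# Uniform(-a, a) has width 2a; its differential entropy is log(2a) nats.
h_scipy = stats.uniform(loc=-a, scale=2 * a).entropy()
assert np.isclose(h_scipy, np.log(2 * a))
```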

ex-ch02-02

Easy

Show that $h(X + c) = h(X)$ for any constant $c$ (translation invariance).
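
The identity holds for any distribution; a quick Gaussian spot check, assuming SciPy:

```python
import numpy as np
from scipy import stats

# Shifting the mean leaves the differential entropy unchanged.
h0 = stats.norm(loc=0, scale=1).entropy()
for c in [-3.0, 0.5, 100.0]:
    assert np.isclose(stats.norm(loc=c, scale=1).entropy(), h0)
```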

ex-ch02-03

Easy

Compute $h(X)$ for $X \sim \mathcal{N}(5, 9)$.
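
Note that the second parameter is the variance, so the standard deviation is 3; a SciPy cross-check of the closed form $\frac{1}{2}\log(2\pi e \sigma^2)$:

```python
import numpy as np
from scipy import stats

# N(5, 9): variance 9, standard deviation 3. The mean does not matter.
h_scipy = stats.norm(loc=5, scale=3).entropy()
assert np.isclose(h_scipy, 0.5 * np.log(2 * np.pi * np.e * 9))
```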

ex-ch02-04

Easy

Let $X \sim \mathcal{N}(0, 1)$ and $Y = 3X + 2$. Compute $h(Y)$ using the scaling property.
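
A check of the scaling property $h(aX + b) = h(X) + \log|a|$ against the direct computation, assuming SciPy:

```python
import numpy as np
from scipy import stats

h_X = stats.norm(0, 1).entropy()
h_Y_property = h_X + np.log(3)                      # h(3X + 2) = h(X) + log 3
h_Y_direct = stats.norm(loc=2, scale=3).entropy()   # Y ~ N(2, 9)
assert np.isclose(h_Y_property, h_Y_direct)
```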

ex-ch02-05

Medium

Prove that $I(X;Y)$ for continuous random variables is invariant under invertible transformations: if $U = g(X)$ and $V = h(Y)$ where $g, h$ are invertible, then $I(U;V) = I(X;Y)$.
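
The invariance can be observed empirically with a k-nearest-neighbor MI estimator; a sketch assuming scikit-learn is available (the correlation 0.8 and the monotone maps $e^x$, $y^{1/3}$ are arbitrary choices):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(0)
n = 20_000
x = rng.normal(size=n)
y = 0.8 * x + 0.6 * rng.normal(size=n)  # corr(X, Y) = 0.8

# Both estimates should approach -0.5*log(1 - 0.8^2) ~ 0.51 nats.
I_xy = mutual_info_regression(x.reshape(-1, 1), y, random_state=0)[0]
I_uv = mutual_info_regression(np.exp(x).reshape(-1, 1), np.cbrt(y),
                              random_state=0)[0]
print(I_xy, I_uv)  # approximately equal, up to estimator noise
```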

ex-ch02-06

Medium

Show that for the exponential distribution with parameter $\lambda$: (a) $h(X) = \log(e/\lambda)$, and (b) among all non-negative distributions with mean $1/\lambda$, the exponential uniquely maximizes $h$.
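
Part (a) can be cross-checked against SciPy (the value of $\lambda$ is arbitrary; entropies in nats):

```python
import numpy as np
from scipy import stats

lam = 2.0
# Exponential(rate lam): h(X) = 1 - log(lam) = log(e/lam) nats.
h_scipy = stats.expon(scale=1 / lam).entropy()
assert np.isclose(h_scipy, np.log(np.e / lam))
```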

ex-ch02-07

Medium

Compute the mutual information $I(X;Y)$ for the AWGN channel $Y = X + Z$, where $X \sim \mathcal{N}(0, P)$ and $Z \sim \mathcal{N}(0, N)$ are independent.
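
A derived answer can be checked against the decomposition $I(X;Y) = h(Y) - h(Z)$, using $Y \sim \mathcal{N}(0, P+N)$; a minimal SciPy sketch with arbitrary $P$, $N$:

```python
import numpy as np
from scipy import stats

P, N = 4.0, 1.0
h_Y = stats.norm(scale=np.sqrt(P + N)).entropy()  # Y ~ N(0, P + N)
h_Z = stats.norm(scale=np.sqrt(N)).entropy()      # h(Y|X) = h(Z)
assert np.isclose(h_Y - h_Z, 0.5 * np.log(1 + P / N))
```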

ex-ch02-08

Medium

Prove that for a random vector $\mathbf{X} \in \mathbb{R}^n$: $h(\mathbf{A}\mathbf{X}) = h(\mathbf{X}) + \log|\det(\mathbf{A})|$ for any invertible matrix $\mathbf{A}$.
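
The identity can be spot-checked in the Gaussian case, where both sides are closed-form; a sketch assuming NumPy/SciPy (the matrices are arbitrary test choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 3
A = rng.normal(size=(n, n))             # generic, hence invertible a.s.
K = np.eye(n) + 0.5 * np.ones((n, n))   # a valid covariance matrix

h_X = stats.multivariate_normal(mean=np.zeros(n), cov=K).entropy()
h_AX = stats.multivariate_normal(mean=np.zeros(n), cov=A @ K @ A.T).entropy()
assert np.isclose(h_AX, h_X + np.log(abs(np.linalg.det(A))))
```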

ex-ch02-09

Medium

Show that for the Gaussian vector $\mathbf{X} \sim \mathcal{N}(\mathbf{0}, \mathbf{K})$, the differential entropy can be written as $h(\mathbf{X}) = \frac{1}{2}\sum_{i=1}^n \log(2\pi e \lambda_i)$, where $\lambda_1, \ldots, \lambda_n$ are the eigenvalues of $\mathbf{K}$.
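
A numerical check of the eigenvalue form against SciPy's multivariate-normal entropy (the test matrix is arbitrary):

```python
import numpy as np
from scipy import stats

K = np.array([[2.0, 0.5, 0.0],
              [0.5, 1.0, 0.3],
              [0.0, 0.3, 1.5]])  # symmetric positive definite
lam = np.linalg.eigvalsh(K)
h_eig = 0.5 * np.sum(np.log(2 * np.pi * np.e * lam))
h_scipy = stats.multivariate_normal(mean=np.zeros(3), cov=K).entropy()
assert np.isclose(h_eig, h_scipy)
```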

ex-ch02-10

Hard

Derive the capacity of the parallel Gaussian channel $Y_k = X_k + Z_k$ for $k = 1, \ldots, K$, where the $Z_k \sim \mathcal{N}(0, N_k)$ are independent and the total power constraint is $\sum_k \mathbb{E}[X_k^2] \leq P$.
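
For experimenting with a derived solution: the optimal allocation is known to have the water-filling form $P_k = \max(\mu - N_k, 0)$, and the sketch below (which assumes that result; all names are illustrative) finds the water level $\mu$ by bisection:

```python
import numpy as np

def water_filling(noise, P, iters=100):
    """Allocate P_k = max(mu - N_k, 0) with sum P_k = P, locating the
    water level mu by bisection (the total allocation grows with mu)."""
    lo, hi = noise.min(), noise.max() + P
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - noise, 0.0).sum() > P:
            hi = mu
        else:
            lo = mu
    powers = np.maximum(mu - noise, 0.0)
    capacity = 0.5 * np.sum(np.log(1 + powers / noise))  # nats per use
    return powers, capacity

powers, C = water_filling(np.array([1.0, 2.0, 4.0]), P=3.0)
print(powers, C)  # [2, 1, 0]: the noisiest channel gets no power
```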

ex-ch02-11

Hard

Prove that for independent $X, Y$ with $Y \sim \mathcal{N}(0, \sigma^2)$:

$$h(X + Y) \geq h(Y) = \frac{1}{2}\log(2\pi e \sigma^2).$$
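
The inequality can be probed with a deliberately non-Gaussian $X$: for $X$ equiprobable on $\{\pm 1\}$, the density of $X + Y$ is an explicit two-component Gaussian mixture whose entropy can be integrated on a grid (a sketch; grid limits are arbitrary):

```python
import numpy as np
from scipy import stats

sigma = 1.0
h_Y = stats.norm(scale=sigma).entropy()

# X + Y has density 0.5*phi(t+1) + 0.5*phi(t-1); integrate -f log f.
t = np.linspace(-12, 12, 200_001)
f = 0.5 * (stats.norm.pdf(t, -1, sigma) + stats.norm.pdf(t, 1, sigma))
h_sum = -np.sum(f * np.log(f)) * (t[1] - t[0])
assert h_sum >= h_Y
```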

ex-ch02-12

Hard

The de Bruijn identity states that if $Z \sim \mathcal{N}(0, 1)$ is independent of $X$:

$$\frac{d}{dt}h(X + \sqrt{t}Z) = \frac{1}{2}J(X + \sqrt{t}Z),$$

where $J(\cdot)$ is the Fisher information. Verify this for $X = 0$ (deterministic) and for $X \sim \mathcal{N}(0, \sigma^2)$.
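
For the Gaussian case the verification reduces to a one-dimensional calculus check, since $X + \sqrt{t}Z \sim \mathcal{N}(0, \sigma^2 + t)$ and $J(\mathcal{N}(0, v)) = 1/v$; a finite-difference sketch (step sizes arbitrary):

```python
import numpy as np
from scipy import stats

sigma2, t, dt = 2.0, 0.7, 1e-6

def h(v):
    """Entropy of N(0, v) in nats."""
    return stats.norm(scale=np.sqrt(v)).entropy()

lhs = (h(sigma2 + t + dt) - h(sigma2 + t - dt)) / (2 * dt)
rhs = 0.5 / (sigma2 + t)  # (1/2) * J(N(0, sigma2 + t))
assert np.isclose(lhs, rhs, rtol=1e-4)
```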

ex-ch02-13

Hard

Prove the maximum-entropy property under a covariance constraint for complex random vectors: among all distributions on $\mathbb{C}^n$ with covariance matrix $\mathbf{K}$, the circularly symmetric complex Gaussian $\mathcal{CN}(\mathbf{0}, \mathbf{K})$ uniquely maximizes differential entropy, achieving $h(\mathbf{X}) = \log\left((\pi e)^n \det(\mathbf{K})\right)$.
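
The formula can be checked numerically through the real embedding of a circularly symmetric complex Gaussian (a sketch with an arbitrary Hermitian positive-definite test matrix):

```python
import numpy as np
from scipy import stats

K = np.array([[2.0, 0.5 + 0.3j],
              [0.5 - 0.3j, 1.5]])  # Hermitian positive definite
n = K.shape[0]

# [Re X; Im X] is a real 2n-dim Gaussian with covariance
# 0.5 * [[Re K, -Im K], [Im K, Re K]] under circular symmetry.
Sigma = 0.5 * np.block([[K.real, -K.imag],
                        [K.imag,  K.real]])
h_real = stats.multivariate_normal(mean=np.zeros(2 * n), cov=Sigma).entropy()
h_claim = np.log((np.pi * np.e) ** n * np.linalg.det(K).real)
assert np.isclose(h_real, h_claim)
```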

ex-ch02-14

Medium

Let $X \sim \mathcal{N}(0, P)$ and $Z \sim \mathcal{N}(0, N)$ be independent. Compute $h(X \mid X+Z)$ (the conditional differential entropy of the input given the noisy output).
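
A derived answer can be checked via the chain rule $h(X \mid Y) = h(X, Y) - h(Y)$ with $Y = X + Z$, since $(X, Y)$ is jointly Gaussian; a SciPy sketch with arbitrary $P$, $N$:

```python
import numpy as np
from scipy import stats

P, N = 4.0, 1.0
K_joint = np.array([[P, P], [P, P + N]])  # Cov of (X, X + Z)
h_joint = stats.multivariate_normal(mean=np.zeros(2), cov=K_joint).entropy()
h_Y = stats.norm(scale=np.sqrt(P + N)).entropy()
h_X_given_Y = h_joint - h_Y
assert np.isclose(h_X_given_Y,
                  0.5 * np.log(2 * np.pi * np.e * P * N / (P + N)))
```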

ex-ch02-15

Challenge

(Costa's EPI strengthening) Show that for independent $X$ and $Z \sim \mathcal{N}(0, N)$, the function $g(t) = e^{2h(X + \sqrt{t}Z)}$ is concave in $t \geq 0$. Deduce the EPI as a special case.
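
The concavity can be probed numerically: for $X$ equiprobable on $\{\pm 1\}$ the density of $X + \sqrt{t}Z$ is an explicit Gaussian mixture, so $g(t)$ can be tabulated and its second differences checked (a sketch; grid sizes arbitrary). For Gaussian $X$ the function $g$ is exactly linear in $t$, so the concavity is tight there.

```python
import numpy as np
from scipy import stats

N_z = 1.0  # variance of Z

def h_mixture(t):
    """Entropy of X + sqrt(t)*Z for X equiprobable on {-1, +1}:
    a two-component Gaussian mixture, integrated on a grid."""
    s = np.sqrt(t * N_z)
    x = np.linspace(-1 - 10 * s, 1 + 10 * s, 100_001)
    f = 0.5 * (stats.norm.pdf(x, -1, s) + stats.norm.pdf(x, 1, s))
    return -np.sum(f * np.log(f)) * (x[1] - x[0])

ts = np.linspace(0.5, 4.0, 15)
g = np.array([np.exp(2 * h_mixture(t)) for t in ts])
second_diff = g[:-2] - 2 * g[1:-1] + g[2:]
assert np.all(second_diff <= 1e-6)  # concave: second differences <= 0
```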