Exercises

ex-ch06-01

Easy

For a Bernoulli(0.3) source with Hamming distortion, compute $R(D)$ for $D = 0, 0.1, 0.2, 0.3$ and sketch the curve.
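A quick numerical sketch, assuming the standard binary-Hamming result $R(D) = h(p) - h(D)$ for $0 \le D \le \min(p, 1-p)$ and $R(D) = 0$ beyond:

```python
from math import log2

def h(p):
    # binary entropy in bits
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def R(D, p=0.3):
    # R(D) = h(p) - h(D) on 0 <= D <= min(p, 1-p); zero past D_max = min(p, 1-p)
    return max(h(p) - h(D), 0.0)

for D in [0, 0.1, 0.2, 0.3]:
    print(f"D = {D}: R = {R(D):.4f} bits")
```

The curve starts at $h(0.3) \approx 0.881$ bits at $D = 0$ and falls convexly to zero at $D = 0.3$.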

ex-ch06-02

Easy

For a Gaussian source $X \sim \mathcal{N}(0, 4)$ with squared-error distortion, compute the rate needed to achieve SNR = 30 dB.
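As a numerical check (assumption: "SNR" here means $\sigma^2/D$, so the required rate $\tfrac{1}{2}\log_2(\sigma^2/D)$ depends only on the SNR, not on $\sigma^2 = 4$):

```python
from math import log2

snr_db = 30.0
snr = 10 ** (snr_db / 10)   # sigma^2 / D = 1000
R = 0.5 * log2(snr)         # from D(R) = sigma^2 * 2^(-2R)
print(f"R = {R:.3f} bits/sample")
```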

ex-ch06-03

Easy

Explain why $R(D_{\max}) = 0$ for any source and distortion measure, where $D_{\max} = \min_{\hat{x}} \mathbb{E}[d(X, \hat{x})]$.
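One way to see it concretely (a sketch for Bernoulli(0.3) with Hamming distortion; the general argument replaces the constant guess with the best fixed reproduction $\hat{x}^*$):

```python
# zero-rate code: the decoder always outputs the best constant reproduction
p = 0.3                      # P(X = 1) for a Bernoulli(0.3) source
dist = {0: p, 1: 1 - p}      # E[d(X, x_hat)] for each constant guess x_hat
D_max = min(dist.values())   # = 0.3, achieved by always outputting 0
# no bits are transmitted, so rate zero already achieves distortion D_max
print(D_max)
```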

ex-ch06-04

Easy

A Gaussian source $X \sim \mathcal{N}(0, 1)$ is quantized to 8 bits/sample using an ideal entropy-coded quantizer. What is the achievable SNR according to the R-D bound? What if using a uniform quantizer without entropy coding?
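The first part is direct arithmetic from $D(R) = \sigma^2 2^{-2R}$; a one-line check (the uniform-quantizer part requires a separate high-rate analysis, not shown):

```python
from math import log10

R = 8                               # bits/sample
snr_db = 10 * log10(2 ** (2 * R))   # R-D bound: 10*log10(sigma^2/D) = 6.02*R
print(f"SNR bound = {snr_db:.2f} dB")
```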

ex-ch06-05

Medium

Prove that $R(D)$ is convex in $D$ directly from the definition (without using the test-channel mixing argument). Hint: use the operational definition β€” achievability implies a code exists, and time-sharing two codes gives a convex combination.

ex-ch06-06

Medium

Compute the rate-distortion function for a ternary source $X$ uniform on $\{0, 1, 2\}$ with Hamming distortion $d(x, \hat{x}) = \mathbf{1}\{x \neq \hat{x}\}$.
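A sanity check on the answer you should obtain, assuming the standard uniform $m$-ary Hamming result $R(D) = \log_2 m - h(D) - D\log_2(m-1)$ for $0 \le D \le (m-1)/m$:

```python
from math import log2

def h(p):
    # binary entropy in bits
    return 0.0 if p in (0.0, 1.0) else -p * log2(p) - (1 - p) * log2(1 - p)

def R_ternary(D):
    # m = 3: R(D) = log2(3) - h(D) - D * log2(2), valid for 0 <= D <= 2/3
    return log2(3) - h(D) - D

print(R_ternary(0.0))   # log2(3): lossless rate at zero distortion
print(R_ternary(2 / 3)) # 0: a blind uniform guess already errs with prob 2/3
```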

ex-ch06-07

Medium

(Reverse waterfilling.) Two independent Gaussian sources have variances $\sigma_1^2 = 4$ and $\sigma_2^2 = 1$. Compute the rate-distortion function at total distortion $D = 2$. Which component is "shut off"?
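A numerical sketch of reverse waterfilling, assuming the parametric solution $D_i = \min(\theta, \sigma_i^2)$ with the water level $\theta$ chosen so the per-component distortions sum to the total $D$:

```python
from math import log2

sigma2 = [4.0, 1.0]
D_total = 2.0

# bisect for theta such that sum_i min(theta, sigma_i^2) = D_total
lo, hi = 0.0, max(sigma2)
for _ in range(200):
    theta = (lo + hi) / 2
    if sum(min(theta, s) for s in sigma2) < D_total:
        lo = theta
    else:
        hi = theta

# components with sigma_i^2 <= theta contribute zero rate (they are "shut off")
R = sum(0.5 * log2(s / min(theta, s)) for s in sigma2)
print(f"theta = {theta:.4f}, R = {R:.4f} bits")
```

Here $\theta = 1 = \sigma_2^2$, so the second component sits exactly at the shut-off boundary and all rate goes to the first.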

ex-ch06-08

Medium

Show that for the Gaussian source, the distortion-rate function $D(R) = \sigma^2 2^{-2R}$ satisfies $10\log_{10}(\sigma^2/D) = 6.02R$ dB, confirming the "6 dB per bit" rule.
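The algebra is one line; for completeness:

$$
10\log_{10}\frac{\sigma^2}{D} = 10\log_{10} 2^{2R} = 20R\log_{10}2 \approx 6.0206\,R \ \text{dB}.
$$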

ex-ch06-09

Medium

(Blahut-Arimoto.) Implement one iteration of the Blahut-Arimoto algorithm for the binary source with $P(0) = 0.7$, Hamming distortion, at $\lambda = 2$. Start with $q(\hat{x}) = (0.5, 0.5)$.
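A minimal sketch of the two update steps (assumptions: a natural-exponential kernel $e^{-\lambda d}$ and the stated uniform initialization; conventions that use a base-2 exponent simply rescale $\lambda$):

```python
import math

p = [0.7, 0.3]         # source: P(X=0) = 0.7
q = [0.5, 0.5]         # initial reproduction marginal q(x_hat)
lam = 2.0              # slope parameter lambda
d = [[0, 1], [1, 0]]   # Hamming distortion d(x, x_hat)

# step 1: optimal test channel Q(x_hat | x) for the current q
A = [[math.exp(-lam * d[x][y]) for y in range(2)] for x in range(2)]
Q = [[q[y] * A[x][y] / sum(q[yy] * A[x][yy] for yy in range(2))
      for y in range(2)] for x in range(2)]

# step 2: re-estimate the reproduction marginal
q_new = [sum(p[x] * Q[x][y] for x in range(2)) for y in range(2)]
print([round(v, 4) for v in q_new])   # ≈ [0.6523, 0.3477]
```

Iterating the two steps to convergence traces out one point of the $R(D)$ curve per value of $\lambda$.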

ex-ch06-10

Hard

Prove the converse of the rate-distortion theorem: any sequence of $(2^{nR_n}, n)$ codes with $\mathbb{E}[d(\mathbf{X}, \hat{\mathbf{X}})] \leq D$ satisfies $\liminf R_n \geq R(D)$.

ex-ch06-11

Hard

(Wyner-Ziv for binary source.) Let $X \sim \text{Bern}(1/2)$ and $Y = X \oplus N$ where $N \sim \text{Bern}(p)$ is independent of $X$ (BSC observation). Compute $R_{X|Y}(D)$ (side information at both encoder and decoder) and compare with $R_{WZ}(D)$ for Hamming distortion.

ex-ch06-12

Hard

(Separation theorem proof sketch.) Prove that a DMS $\{X_i\}$ is transmissible over a DMC $W$ at distortion $D$ if and only if $R(D) < C(W)$, assuming one channel use per source symbol. Show that separate source and channel coding achieves this.

ex-ch06-13

Challenge

(Rate-distortion for exponential source.) Let $X \sim \text{Exp}(\lambda)$ (exponential with rate $\lambda$) and squared-error distortion $d(x, \hat{x}) = (x - \hat{x})^2$. Show that $R(D) = \frac{1}{2}\log_2\frac{1}{\lambda^2 D}$ for $D \leq 1/\lambda^2$, and compare with the Gaussian R-D function at the same variance $\sigma^2 = 1/\lambda^2$.

ex-ch06-14

Challenge

(Successive refinement necessity.) Show that a binary source with $P(0) = p \neq 1/2$ and Hamming distortion is NOT successively refinable between some pairs $(D_1, D_2)$. Specifically, find $(D_1, D_2)$ with $D_2 < D_1 < p$ such that $R_{D_1 \to D_2}^{\text{incremental}} > R(D_2) - R(D_1)$, i.e., the incremental rate exceeds the rate difference that successive refinability would require.

ex-ch06-15

Challenge

(Dithered ECSQ achieves $R(D) + 0.754$ bits.) Show that a uniform quantizer with step $\Delta$, combined with dithering (adding uniform noise $U \sim \text{Uniform}(-\Delta/2, \Delta/2)$ independent of $X$ before quantization, then subtracting $U$ after), achieves distortion $D = \Delta^2/12$ for ANY source distribution, and the rate (entropy of the quantized output) is at most $h(X) - \log \Delta + 0.754$ bits.
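The distortion claim is easy to check empirically; a Monte Carlo sketch (the source choice is arbitrary by design β€” an exponential source is used here purely as an illustration):

```python
import random

random.seed(1)
delta = 0.25
n = 200_000

sq_err = 0.0
for _ in range(n):
    x = random.expovariate(1.0)                  # any source distribution works
    u = random.uniform(-delta / 2, delta / 2)    # subtractive dither
    y = delta * round((x + u) / delta) - u       # quantize x+u, then remove dither
    sq_err += (y - x) ** 2

mse = sq_err / n
print(f"MSE = {mse:.6f}, Delta^2/12 = {delta**2 / 12:.6f}")
```

The key observation the simulation illustrates: $y - x$ equals the quantization error of $x + u$, which with subtractive uniform dither is uniform on $(-\Delta/2, \Delta/2)$ and independent of $x$.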