I heard a talk at Stanford with many of these slides.

The idea is to capture a ‘natural’ analog signal f in a record y that consists of M approximate reals, where M is much less than the classic Nyquist requirement based on the frequency components of f. The equation on slide 18 is y = Φf
where:

• y is a vector in ℝ^M whose components are the M reals of the record;
• f is the signal that we record; it is a vector in either an abstract Hilbert space or, physically, either a vector space with one basis vector per pixel or an analog signal;
• Φ is a linear transformation from signal space to record space.

Φ carries all of the compressive magic at the time of capturing the signal. The art sought here is to choose Φ to:

• be feasibly implemented at the time of capture,
• capture enough information about the signal,
• correspond to some feasible algorithm to retrieve the signal from the record with sufficient fidelity.
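As a concrete sketch of the capture step (my own illustration, not from the slides; the sizes n, M, and K and the Gaussian choice of Φ are assumptions), a random Gaussian Φ is a standard choice that satisfies the last two criteria:

```python
import numpy as np

rng = np.random.default_rng(0)
n, M = 1000, 100                             # signal length vs. record length, M << n
Phi = rng.normal(size=(M, n)) / np.sqrt(M)   # random Gaussian measurement matrix

# a K-sparse "natural" signal: only a few nonzero pixels
K = 5
f = np.zeros(n)
f[rng.choice(n, K, replace=False)] = 1.0

y = Phi @ f                                  # the record: M reals instead of n
```

The record y is computed as the signal is captured; no explicit search for the sparse support happens at this stage.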

Buzzwords to clarify:

• Uniform Uncertainty Principle — the requirement that Φ act as a near-isometry on all sparse vectors (also known as the restricted isometry property)
• K-sparse — describing a vector with at most K nonzero components
These are both carefully described here.
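The Uniform Uncertainty Principle can be probed empirically (a sketch under my own assumed sizes, not from the slides): a random Gaussian Φ approximately preserves the length of K-sparse unit vectors, even though it maps ℝ^n down to the much smaller ℝ^M:

```python
import numpy as np

rng = np.random.default_rng(0)
n, M, K = 200, 80, 5
Phi = rng.normal(size=(M, n)) / np.sqrt(M)

# sample random K-sparse unit vectors and compare |Φf| to |f| = 1
ratios = []
for _ in range(1000):
    f = np.zeros(n)
    f[rng.choice(n, K, replace=False)] = rng.normal(size=K)
    f /= np.linalg.norm(f)
    ratios.append(np.linalg.norm(Phi @ f))
print(min(ratios), max(ratios))  # typically both close to 1
```

A random sample like this is only suggestive; the actual UUP is a uniform statement over all sparse vectors, which is what makes it hard to verify and valuable to prove.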

Φf is calculated as the record is made. The more difficult algorithm is to construct from the record some suitable approximation F to f. Here we speak of this as producing, from the record, a display composed of pixels. On slide 24 we are invited to find a vector f that minimizes
#{t : f(t) ≠ 0} subject to y = Φf
In the above, t ranges over pixels in the display space and f(t) is the pixel displayed at position t in the uncompressed image. This asks for the sparsest image consistent with the record: the one with the fewest nonzero pixels. Note that in vector spaces with carelessly selected bases, vector components are as likely to be negative as positive. Negative pixels are a bit of a mystery. Perhaps the invitation is to make the display as gray as possible, not as black.

Slide 25 admits that the above problem is too hard and substitutes minimizing the ℓ1 norm of f, still subject to y = Φf. The ℓ1 norm of f is |f| = Σ_t |f(t)|. This is a hefty linear programming job. I am not familiar with that art, but in this particular case there is a technique that seems promising. We must select components of the display vector F to satisfy y = ΦF and, subsequently or concurrently, minimize |F|.
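The standard reduction of this ℓ1 problem to a linear program splits f into positive and negative parts, f = u − v with u, v ≥ 0, and minimizes Σ(u + v) subject to Φ(u − v) = y. A sketch, assuming NumPy and SciPy are available; the sizes and the Gaussian Φ are my own choices:

```python
import numpy as np
from scipy.optimize import linprog

def l1_recover(Phi, y):
    """Solve min |f|_1 subject to Phi f = y, via f = u - v with u, v >= 0."""
    M, n = Phi.shape
    c = np.ones(2 * n)                  # objective: sum of u and v components
    A_eq = np.hstack([Phi, -Phi])       # constraint: Phi (u - v) = y
    res = linprog(c, A_eq=A_eq, b_eq=y, bounds=(0, None))
    assert res.success
    return res.x[:n] - res.x[n:]

rng = np.random.default_rng(0)
n, M, K = 50, 25, 3
f0 = np.zeros(n)
f0[rng.choice(n, K, replace=False)] = rng.normal(size=K)
Phi = rng.normal(size=(M, n)) / np.sqrt(M)
y = Phi @ f0
f_hat = l1_recover(Phi, y)              # recovers the sparse f0 from 25 reals
```

When the signal is sparse enough relative to the number of measurements, the ℓ1 minimizer coincides with the ℓ0 (sparsest) solution; that coincidence is the central claim of the slides.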

As we seek algorithms we might start by knowing the answer and demanding that ‘downhill’ algorithms at least realize that they have found the answer. This has found a large fraction of the bugs in hydrodynamics equations that I have written. The fantasy is to choose some image f as a set of pixel values, compute the record y as Φf, and then take f as the initial value of a trial display d. A candidate recovery algorithm must report immediate success. We then perturb d to stress the algorithm. I think that we must find a set of vectors that span the kernel of Φ, since those are exactly the perturbations of d that leave the record y unchanged.
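The kernel-perturbation step can be sketched directly (my own illustration; the sizes are arbitrary). SciPy's null_space gives an orthonormal basis for the kernel of Φ, and adding any combination of those basis vectors to f changes the display without changing the record:

```python
import numpy as np
from scipy.linalg import null_space

rng = np.random.default_rng(1)
M, n = 4, 10
Phi = rng.normal(size=(M, n))
N = null_space(Phi)          # columns span ker(Phi); shape (n, n - M) for generic Phi

f = rng.normal(size=n)       # the "known answer" image
y = Phi @ f                  # its record

d = f + N @ rng.normal(size=N.shape[1])  # perturb f inside the kernel
# Phi @ d equals y: the record cannot distinguish d from f
```

Every vector consistent with the record is f plus some kernel element, so stressing a recovery algorithm amounts to walking d around this affine subspace and checking that the algorithm still prefers the sparse answer.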