Wednesday, July 1, 2009

Compressive Sensing

The idea behind compressive sensing is that any interesting signal is not going to look like noise. If you want to reconstruct the signal you probably don't need to sample all the data in it, just enough to capture the information it carries. Say for instance you know the signal is perfectly linear: you can sample two points and be done with it. In essence, the idea behind compressive sensing is that real signals tend to be approximately sparse in most bases. Being approximately sparse in a wavelet basis means that most of the coefficients of the wavelet transform are either zero or close to zero. The compressive sensing community says a vector is k-sparse if it has k nonzero entries, and approximately k-sparse if all but k entries are close to zero. Now the oh so nifty thing about compressive sensing is that you only need something like k*log(N/k) measurements to capture the information in the signal, where N is the number of coefficients in the signal. What you do is take your measurements at random, use those measurements as constraints in a linear program whose variables are the coefficients of the signal you want to recreate, and then minimize the sum of the absolute values of those coefficients (the L1 norm).
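
For what it's worth, the L1 step itself is a completely standard linear program: split each coefficient into a positive and a negative part and minimize their sum subject to the measurement constraints. Here is a minimal sketch, assuming SciPy's linprog is available; the measurement matrix, the test signal, and the sizes below are all made up for illustration, and the test vector is sparse in the standard basis rather than a wavelet basis just to keep the sketch short.

import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)

N, k = 128, 5              # signal length and sparsity
m = 40                     # a bit more than k*log(N/k) random measurements

# A k-sparse test signal and random Gaussian measurements of it.
x_true = np.zeros(N)
x_true[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
A = rng.standard_normal((m, N))
y = A @ x_true

# Minimize ||x||_1 subject to A x = y: write x = u - v with u, v >= 0 and
# minimize sum(u) + sum(v), the usual LP reformulation of L1 minimization.
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=y, bounds=[(0, None)] * (2 * N))
x_hat = res.x[:N] - res.x[N:]

print("max reconstruction error:", np.abs(x_hat - x_true).max())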

Coding up something like that, however, would be a bit complicated. At some point it would be great to program something capable of doing the simplex algorithm for linear programming, but for the moment that is a project I am not willing to undertake. I wanted to use compressive sensing for my wavelets project at the end of the semester, but I ended up not doing it because I couldn't code it (at least not in time) and I didn't understand it all that well. What I ended up using was an algorithm that I just made up. It was relatively easy to code (compared to a general LP solver), and that is essentially why I used it. The algorithm takes a set of direct measurements of a signal and makes a bad guess for what the signal ought to look like. It then takes the wavelet transform of this bad guess, and also the wavelet transform of a vector with 1's wherever measurements were made; this keeps track of which wavelet coefficients contain direct information about the signal and which do not. Starting at the lowest level of the wavelet transform, we look at the transform of the information vector, and wherever it has a nonzero coefficient we do a reverse wavelet transform ignoring the detail coefficients and put the value at that place in the reverse transform into that place in the signal. We continue this for every level, all the way up, until the last step is simply putting the observed measurements in place.
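
Since I never wrote this up properly, here is only a rough sketch of how I might put the idea down in code today, using plain dyadic block averages in place of a real Haar transform (reconstructing from a Haar transform with the detail coefficients dropped is the same as averaging over dyadic blocks). The function name is made up, and in particular I've read "put the value in place" as correcting each block's average to match the measurements rather than flattening the block outright; that reading is my own guess at the intent.

import numpy as np

def coarse_to_fine_fill(measured, mask, guess):
    """One pass of the fill-in procedure sketched above (Haar-flavoured).

    measured -- observed values at measured positions, zero elsewhere
    mask     -- 1.0 where a direct measurement exists, 0.0 elsewhere
    guess    -- the current "bad guess" for the full signal
    """
    N = len(guess)                       # assumed to be a power of two
    est = np.asarray(guess, dtype=float).copy()
    size = N
    while size >= 2:
        for start in range(0, N, size):
            blk = slice(start, start + size)
            hits = mask[blk].sum()
            if hits > 0:
                # The reverse transform with detail coefficients dropped is a
                # block average, so shift the block so its average agrees with
                # the average of the measurements that fall inside it.
                est[blk] += measured[blk].sum() / hits - est[blk].mean()
        size //= 2
    # Last step: put the observed measurements back in place verbatim.
    keep = mask.astype(bool)
    est[keep] = measured[keep]
    return est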

The algorithm as I just described it is pretty sucky: it is slow and inefficient, but it does give pretty good estimates of the signal. An interesting thing to note, however, is that iterating the algorithm with the same observations, using the output of one pass as the "bad guess" for the next, yields better results. I suppose I haven't really tested how much better the results are, but the iterated version is totally a compressive sensing algorithm of sorts. Intuitively it makes perfect sense that, since we throw away the detail coefficients on every iteration, we get closer and closer to a vector that is sparse in the wavelet basis. I haven't done a proper analysis, so I don't know if that is really true or not. I certainly am not willing to go so far as to say that the iteration converges onto the L1 minimization in general, but I am willing to say that it does in at least some simple cases.
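
Iterating the sketch above is just a matter of feeding each output back in as the next guess. Something like the following driver is how I would check it; the piecewise test signal, the sample count, and the number of passes are all arbitrary choices for illustration, and it reuses the coarse_to_fine_fill sketch from the previous block rather than the code I actually used for the project.

import numpy as np

rng = np.random.default_rng(1)
N, m = 256, 48
t = np.arange(N)
signal = np.where(t < N // 2, 1.0, -0.5) + 0.2 * np.sin(4 * np.pi * t / N)

mask = np.zeros(N)
mask[rng.choice(N, m, replace=False)] = 1.0
measured = signal * mask

est = np.zeros(N)                        # a deliberately bad first guess
for i in range(8):
    est = coarse_to_fine_fill(measured, mask, est)
    print(f"pass {i + 1}: rms error = {np.sqrt(np.mean((est - signal) ** 2)):.4f}")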
