Went to the algorithms seminar today for the first time this semester. Easily one of the best ones I’ve been to.

Dr. Dean presented the main ideas of a recent (as in, within the last month or so) paper that improves on the Fast Fourier Transform. The FFT has been one of the most important algorithms of the twentieth century, so an improvement on it is a big deal.

The *Faster* Fast Fourier Transform exploits the fact that signals usually have only a few peaks in the frequency domain relative to the number of samples in the time domain. The algorithm's runtime depends on that sparsity: if there are k peaks in the frequency domain and n samples, it runs in O(k log n) time, compared to the O(n log n) runtime of the FFT.
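To make the sparsity idea concrete, here's a toy illustration (this is *not* the paper's algorithm, just a naive O(n²) DFT in pure Python): a signal built from k pure tones has only k significant coefficients in its frequency domain, no matter how many samples it has.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform, O(n^2); fine for a toy example."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * t * f / n) for t in range(n))
            for f in range(n)]

n, peaks = 16, [3, 7]  # n samples, k = 2 frequency peaks (both chosen arbitrarily)
# Time-domain signal: a sum of k pure tones.
x = [sum(cmath.exp(2j * cmath.pi * p * t / n) for p in peaks) for t in range(n)]

X = dft(x)
# Only the k peak frequencies carry significant energy; everything else is ~0.
support = [f for f in range(n) if abs(X[f]) > 1e-6 * n]
print(support)  # → [3, 7]
```

With n in the millions and k in the dozens, which is common for audio and images, only touching O(k log n) of the data starts to look very attractive.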

Think about this: the FFFT runs in less time than it would take to even look at all of the input data.

There was a lot of crazy stuff going on: randomly permuting the time-domain data to get a random permutation of the frequency domain, “folding” the data on top of itself to subsample the spectrum, and constructing some horrible-looking filter function whose frequency domain is a sort of box.
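The “folding” trick can be sketched with a standard aliasing identity (a sketch of the underlying identity only, not the paper's full construction): summing the time-domain signal over n/B stacked blocks of length B gives a length-B signal whose DFT is a subsample of the original DFT.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform, O(n^2); fine for a toy example."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * t * f / n) for t in range(n))
            for f in range(n)]

n, B = 16, 4                                                  # fold 16 samples into 4
x = [cmath.exp(2j * cmath.pi * 8 * t / n) for t in range(n)]  # one tone at f = 8

# "Fold" the signal: y[i] sums every B-th sample, stacking n/B blocks on each other.
y = [sum(x[i::B]) for i in range(B)]

# Aliasing identity: DFT(y)[m] == DFT(x)[m * n / B] for m = 0..B-1.
X, Y = dft(x), dft(y)
```

So computing a DFT of just B folded samples reads off every (n/B)-th coefficient of the full spectrum; here the tone at f = 8 shows up at Y[2] since 2 · (16/4) = 8.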

The actual paper is quite long and filled with mathematical proofs; reviewing it is going to be a horrible experience for whoever takes on that burden. The result is not an exact computation of the Fourier transform, but it provides a really nice approximation that is useful in a wide variety of real-world applications. The authors also implemented it, compared it against current top-of-the-line FFT implementations, and found a significant improvement. The talk itself was really interesting and entertaining, with everyone's reactions to the various claims and their justifications.

Also it was good just to see some of those people again.