The Sparse Fourier Transform
Haitham Hassanieh
Preface
The Fourier transform is one of the most fundamental tools for computing the frequency representation of signals. It plays a central role in signal processing, communications, audio and video compression, medical imaging, genomics, astronomy, as well as many other areas. Because of its widespread use, fast algorithms for computing the Fourier transform can benefit a large number of applications. The fastest algorithm for computing the Fourier transform is the Fast Fourier Transform (FFT), which runs in near-linear time, making it an indispensable tool for many applications. However, today, the runtime of the FFT algorithm is no longer fast enough, especially for big data problems where each dataset can be a few terabytes. Hence, faster algorithms that run in sublinear time, i.e., that do not even sample all the data points, have become necessary.
This book addresses the above problem by developing the Sparse Fourier Transform algorithms and building practical systems that use these algorithms to solve key problems in six different applications.
Part I of the book focuses on the theory front. It introduces the Sparse Fourier Transform algorithms: a family of sublinear time algorithms for computing the Fourier transform faster than FFT. The Sparse Fourier Transform is based on the insight that many real-world signals are sparse, i.e., most of the frequencies have negligible contribution to the overall signal. Exploiting this sparsity, the book introduces several new algorithms that advance along two main axes.
Runtime Complexity. Nearly optimal Sparse Fourier Transform algorithms are presented that are faster than FFT and have the lowest runtime complexity known to date.
Sampling Complexity. Sparse Fourier Transform algorithms are presented that achieve optimal sampling complexity in the average case while retaining the same nearly optimal runtime complexity. These algorithms use the minimum number of input data samples and, hence, reduce acquisition cost and I/O overhead.
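The sparsity insight above can be illustrated with a small, hypothetical example (not taken from the book): a signal built from only three frequencies has a spectrum in which only three of the n Fourier coefficients are non-negligible. The frequencies and amplitudes below are arbitrary illustrative choices.

```python
import cmath

def dft(x):
    """Naive discrete Fourier transform: O(n^2) time."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / n) for t in range(n))
            for f in range(n)]

# A length-64 signal composed of only 3 frequencies (10, 25, 40):
# its spectrum is 3-sparse.
n = 64
active = {10: 1.0, 25: 0.5, 40: 2.0}
x = [sum(a * cmath.exp(2j * cmath.pi * f * t / n) for f, a in active.items())
     for t in range(n)]

spectrum = dft(x)
# Only the 3 active frequencies carry significant energy; the other
# 61 coefficients are zero up to floating-point error.
large = [f for f, c in enumerate(spectrum) if abs(c) > 1e-6]
print(large)  # [10, 25, 40]
```

A Sparse Fourier Transform algorithm exploits exactly this structure: rather than computing all 64 coefficients, it aims to locate and estimate only the 3 large ones, using a sublinear number of samples of x.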
Part II of the book focuses on the systems front. It develops software and hardware architectures for leveraging the Sparse Fourier Transform to address practical problems in applied fields. The systems customize the theoretical algorithms to capture the structure of sparsity in each application, and hence maximize the resulting gains. All systems are prototyped and evaluated in accordance with the standards of each application domain. The following list gives an overview of the systems presented in this book.
Wireless Networks. The book demonstrates how to use the Sparse Fourier Transform to build a wireless receiver that captures GHz-wide signals without sampling at the Nyquist rate. Hence, it enables wideband spectrum sensing and acquisition using cheap commodity hardware.
Mobile Systems. The Sparse Fourier Transform is used to design a GPS receiver that both shortens the time needed to obtain a location fix and decreases the power consumption by 2×.
Computer Graphics. Light fields enable new virtual reality and computational photography applications like interactive viewpoint changes, depth extraction, and refocusing. The book shows that reconstructing light field images using the Sparse Fourier Transform reduces camera sampling requirements and improves image reconstruction quality.
Medical Imaging. The book enables efficient magnetic resonance spectroscopy (MRS), a new medical imaging technique that can reveal biomarkers for diseases like autism and cancer. The book shows how to improve the image quality while reducing the time a patient spends in an MRI machine by 3× (e.g., from 2 hours to less than 40 minutes).
Biochemistry. The book demonstrates that the Sparse Fourier Transform reduces Nuclear Magnetic Resonance (NMR) experiment time by 16× (e.g. from weeks to days), enabling high-dimensional NMR needed for discovering complex protein structures.
Digital Circuits. The book develops the largest Fourier transform chip to date for sparse data: a 0.75-million-point Sparse Fourier Transform chip that consumes 40× less power than prior FFT VLSI implementations.
Acknowledgments
The work presented in this book would not have been possible without the help and support of a large group of people to whom I owe a lot of gratitude. I would also like to thank everyone who worked on the Sparse Fourier Transform project. Dina, Piotr, and Eric Price were the first people to work with me. They played an indispensable role in developing the theoretical Sparse Fourier Transform algorithms. The next person to work with me was Fadel Adib who helped me take on the hard task of kickstarting the applications. After that, Lixin Shi helped me bring to life more applications. He worked with me tirelessly for a very long time while making the work process extremely enjoyable. Finally, I would like to thank all of the remaining people who contributed to the material in this book: Elfar Adalsteinsson, Fredo Durand, Omid Abari, Ezzeldin Hamed, Abe Davis, Badih Ghazi, Ovidiu Andronesi, Vladislav Orekhov, Abhinav Agarwal, and Anantha Chandrakasan.
Haitham Hassanieh
January 2018
1
Introduction
The Fourier transform is one of the most important and widely used computational tasks. It is a foundational tool commonly used to analyze the spectral representation of signals. Its applications include audio/video processing, radar and GPS systems, wireless communications, medical imaging and spectroscopy, the processing of seismic data, and many other tasks [Bhaskaran and Konstantinides 1995, Chan and Koo 2008, Heiskala and Terry 2001, Nishimura 2010, Van Nee and Coenen 1991, Yilmaz 2008]. Hence, faster algorithms for computing the Fourier transform can benefit a wide range of applications. The fastest algorithm to compute the Fourier transform today is the Fast Fourier Transform (FFT) algorithm [Cooley and Tukey 1965]. Invented in 1965 by Cooley and Tukey, the FFT computes the Fourier transform of a signal of size n in O(n log n) time. This near-linear running time made the FFT one of the most influential algorithms in recent history [Cipra 2000]. However, the emergence of big data problems, in which the processed datasets can exceed terabytes [Schadt et al. 2010], has rendered the FFT’s runtime too slow. Furthermore, in many domains (e.g., medical imaging, computational photography), data acquisition is costly or cumbersome, and hence one may be unable to collect enough measurements to compute the FFT. These scenarios motivate the need for sublinear time algorithms that compute the Fourier transform faster than the FFT and use only a subset of the input data the FFT requires.
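To make the O(n log n) runtime concrete, here is a minimal sketch of the classic recursive radix-2 Cooley–Tukey FFT, checked against the defining O(n²) DFT sum. This is a textbook illustration, not an implementation from the book; the function name and test signal are illustrative choices.

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey FFT; len(x) must be a power of 2.
    Runs in O(n log n) time versus O(n^2) for the naive DFT."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2])   # DFT of even-indexed samples (size n/2)
    odd = fft(x[1::2])    # DFT of odd-indexed samples (size n/2)
    # Combine the two half-size DFTs using the twiddle factors e^{-2pi i k/n}.
    twiddle = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + twiddle[k] for k in range(n // 2)] +
            [even[k] - twiddle[k] for k in range(n // 2)])

# Sanity check against the defining O(n^2) sum on an 8-point signal.
x = [complex(v) for v in (1, 2, 3, 4, 5, 6, 7, 8)]
naive = [sum(x[t] * cmath.exp(-2j * cmath.pi * f * t / 8) for t in range(8))
         for f in range(8)]
assert all(abs(a - b) < 1e-9 for a, b in zip(fft(x), naive))
```

The recursion halves the problem at each level, giving log n levels of O(n) work each. A Sparse Fourier Transform algorithm goes further: when only k of the n coefficients are large, it avoids touching most of the input altogether, which the FFT cannot do.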
The key insight