The Sparse Fourier Transform. Haitham Hassanieh
      Computing ẑ takes O(B log B + B log(n/δ)) = O(B log(n/δ)) time. Location loops thus take O(B log(n/δ) + dkn/B) time and estimation loops take O(B log(n/δ) + |I|) time. Figure 3.1 illustrates the inner loop.
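As an illustration, the inner loop's bucketization can be sketched in NumPy. The function below is a hypothetical sketch, not the authors' implementation; it assumes the window support w is a multiple of B and ignores the flat window's construction details.

```python
import numpy as np

def inner_loop(x, sigma, tau, b, G, B, num_top):
    """Sketch of one SFT 1.0 inner loop: permute, window, alias, B-point FFT.

    (P_{sigma,tau,b} x)_i = x_{sigma*(i - tau)} * omega^{sigma*b*i} permutes
    the spectrum of x; only the w = len(G) windowed samples are touched.
    """
    n, w = len(x), len(G)
    assert w % B == 0, "assume the window support is a multiple of B"
    omega = np.exp(2j * np.pi / n)
    i = np.arange(w)
    y = G * x[(sigma * (i - tau)) % n] * omega ** (sigma * b * i)
    # Aliasing: z_m = sum_j y_{m + B*j}, so zhat_m equals yhat_{m*n/B},
    # the rate-(n/B) subsampling of the spectrum of y.
    z = y.reshape(-1, B).sum(axis=0)
    zhat = np.fft.fft(z)
    J = np.argsort(np.abs(zhat))[-num_top:]  # top buckets (location loop)
    return zhat, J
```

For a single tone x_t = ω^{32t} with n = 256, B = 16, and a trivial boxcar window G = 1 (so the aliasing is exact), the tone's energy lands in bucket 32/(n/B) = 2.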

      For estimation loops, we get the following guarantee.

      Lemma 3.1 Let S be the support of the largest k coefficients of x̂, and x̂_{−S} contain the rest. Then for any ε ≤ 1,

Pr_{σ,τ}[ |x̂′_i − x̂_i|² ≥ (ε/k)‖x̂_{−S}‖₂² + 3δ²‖x̂‖₁² ] ≤ O(k/(εB)).

      Proof The proof can be found in Appendix A.1. ■

      Furthermore, since the flat window satisfies |Ĝ_{−o_σ(i)}| = 1 ± δ, |ẑ_{h_σ(i)}| is already a good estimate for |x̂i|; the division is mainly useful for fixing the phase. Therefore in location loops, we get the following guarantee:

      Algorithm 3.1 SFT 1.0: Non-iterative Sparse Fourier Transform

      procedure SFT 1.0(x, k):
        Run L location inner loops, obtaining candidate sets I_1, …, I_L.
        I′ ← {i : i appears in at least L/2 of the sets I_r}.
        Run L estimation inner loops, obtaining estimates x̂′(r)_i for i ∈ I′.
        For each i ∈ I′: x̂′_i ← median_r real(x̂′(r)_i) + i · median_r imag(x̂′(r)_i).
        Output x̂′.

      Figure 3.1 Example inner loop of the algorithm on sparse input. This run has parameters n = 256, k = 4, G being the (0.11, 0.06, 2 × 10−9, 133) flat window function in Figure 2.1, and selecting the top 4 of B = 16 samples. In part (a), the algorithm begins with time domain access to P_{σ,τ,b}x given by (P_{σ,τ,b}x)_i = x_{σ(i−τ)}ω^{σbi}, which permutes the spectrum of x by permuting the samples in the time domain. In part (b), the algorithm computes the time domain signal y = G · P_{σ,τ,b}x. The spectrum of y (pictured) is large around the large coordinates of the permuted spectrum of x. The algorithm then computes ẑ, which is the rate B subsampling of ŷ, as pictured in part (c). During estimation loops, the algorithm estimates x̂i based on the value of the nearest coordinate in ẑ, namely ẑ_{h_σ(i)}. During location loops (part (d)), the algorithm chooses J, the top dk (here, 4) coordinates of ẑ, and selects the elements of [n] that are closest to those coordinates (the shaded region of the picture). It outputs the set I of preimages of those elements. In this example, the two coordinates on the left landed too close in the permutation and form a “hash collision.” As a result, the algorithm misses the second from the left coordinate in its output. Our guarantee is that each large coordinate has a low probability of being missed if we select the top O(k) samples.
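The location step in part (d), mapping the top buckets J back to candidate coordinates of x̂, can be sketched as follows. `location_candidates` is a hypothetical helper name; it uses the hash h_σ(i) = round(σiB/n) mod B implied by the figure and assumes σ is odd (hence invertible when n is a power of 2).

```python
import numpy as np

def location_candidates(J, sigma, n, B):
    """Hypothetical sketch: invert the bucketization for a set J of buckets.

    A coordinate whose permuted position is p falls in bucket
    round(p * B / n) mod B; the location loop keeps every p whose bucket is
    in J and returns the preimages i = sigma^{-1} * p mod n (the set I).
    """
    sigma_inv = pow(sigma, -1, n)      # sigma odd => invertible mod n (n a power of 2)
    p = np.arange(n)
    buckets = np.round(p * B / n).astype(int) % B
    hits = p[np.isin(buckets, J)]      # roughly |J| * n / B permuted positions
    return (sigma_inv * hits) % n
```

With n = 256 and B = 16, each bucket pulls in about n/B = 16 candidate coordinates, which is why |I| = dkn/B in the analysis below.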

      Lemma 3.2 Define E = √((ε/k)‖x̂_{−S}‖₂² + 3δ²‖x̂‖₁²) to be the error tolerated in Lemma 3.1. Then for any i ∈ [n] with |x̂i| ≥ 4E,

Pr[i ∉ I] ≤ O(k/(εB) + 1/(εd)).

      Proof The proof can be found in Appendix A.2. ■

      Our SFT 1.0 algorithm, shown in Algorithm 3.1, is parameterized by ε and δ. It runs L = O(log n) iterations of the inner loop, with parameters B = O(√(nk/(ε log(n/δ)))) and d = O(1/ε) as well as δ.

      Lemma 3.3 The algorithm runs in time O(√(nk log(n/δ)/ε) · log n).

      Proof To analyze this algorithm, note that each i ∈ I′ appears in at least L/2 of the sets I_r, so

|I′| · L/2 ≤ Σ_r |I_r| = L · dk · n/B,

      or |I′| ≤ 2dkn/B. Therefore, the running time of both the location and estimation inner loops is O(B log(n/δ) + dkn/B). Computing I′ and computing the medians both take linear time, namely O(Ldkn/B). Thus, the total running time is O(LB log(n/δ) + Ldkn/B). Plugging in B = O(√(nk/(ε log(n/δ)))) and d = O(1/ε), this running time is O(√(nk log(n/δ)/ε) · log n). We require B = Ω(k/ε), however; for k > εn/log(n/δ), this would cause the run time to be larger. But in this case, the predicted run time is Ω(n log n) already, so the standard FFT is faster and we can fall back on it. ■
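As a sanity check on the choice of B, the snippet below (illustrative only; the numbers are arbitrary and constants are dropped) confirms that B = √(nk/(ε log(n/δ))) equalizes the two per-iteration cost terms.

```python
import math

# Illustrative parameter check: the per-iteration costs B*log(n/delta) and
# d*k*n/B, with d = 1/eps, balance exactly at B = sqrt(n*k/(eps*log(n/delta))).
n, k, eps, delta = 2**20, 50, 0.5, 1e-8
lognd = math.log(n / delta)
d = 1 / eps
B = math.sqrt(n * k / (eps * lognd))
bucket_cost = B * lognd        # window + B-point FFT term
list_cost = d * k * n / B      # candidate-list term
assert math.isclose(bucket_cost, list_cost, rel_tol=1e-9)
```

Both terms come out to √(nk log(n/δ)/ε), matching the per-iteration cost in the lemma once multiplied by L = O(log n).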

      Theorem 3.1 Running the algorithm with parameters ε, δ < 1 gives x̂′ satisfying

‖x̂′ − x̂‖∞² ≤ (ε/k)‖x̂_{−S}‖₂² + δ²‖x̂‖₁²

      with probability 1 − 1/n and running time O(√(nk log(n/δ)/ε) · log n).

      Proof Define

s_i = |{ r : i ∈ I_r }|, the number of location loops whose output contains i.

      Lemma 3.2 says that in each location iteration r, for any i with |x̂i| ≥ 4E,

Pr[i ∈ I_r] ≥ 3/4.

      Thus, E[s_i] ≥ 3L/4, and each iteration is an independent trial, so by a Chernoff bound the chance that s_i < L/2 is at most 2^{−Ω(L)} < 1/n³. Therefore by a union bound, with probability at least 1 − 1/n², i ∈ I′ for all i with |x̂i| ≥ 4E.
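This amplification step can be checked numerically. The sketch below is a Monte Carlo illustration with a synthetic per-loop success probability of 3/4, not output from the real algorithm.

```python
import numpy as np

# Monte Carlo check of majority-vote amplification (synthetic probabilities):
# if a coordinate is found by each location loop independently with
# probability 3/4, it appears in fewer than L/2 of L = 30 loops only with
# exponentially small probability.
rng = np.random.default_rng(1)
L, trials = 30, 20_000
hits = (rng.random((trials, L)) < 0.75).sum(axis=1)
fail_rate = np.mean(hits < L / 2)
assert fail_rate < 0.01
```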

      Next, Lemma 3.1 says that for each estimation iteration r and index i,

Pr[ |x̂′(r)_i − x̂_i| ≥ E ] ≤ O(k/(εB)) < 1/4.

      Therefore, with probability at least 1 − 2^{−Ω(L)} ≥ 1 − 1/n³, we have |x̂′(r)_i − x̂_i| ≤ E in at least 2L/3 of the iterations.

      Since real(x̂′_i) is the median of the real(x̂′(r)_i), there must exist two r with |x̂′(r)_i − x̂_i| ≤ E but one real(x̂′(r)_i) above real(x̂′_i) and one below. Hence, one of these r has |real(x̂′_i − x̂_i)| ≤ |real(x̂′(r)_i − x̂_i)| ≤ E, and similarly for the imaginary axis. Then

|x̂′_i − x̂_i| ≤ √2 · max(|real(x̂′_i − x̂_i)|, |imag(x̂′_i − x̂_i)|) ≤ √2 E.

      By a union bound over I′, with probability at least 1 − 1/n² we have |x̂′_i − x̂_i| ≤ √2 E for all i ∈ I′.
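The median step in the proof can be illustrated numerically. The estimates below are synthetic (not output of the algorithm), chosen so that 7 of 9 lie within E of the true coefficient on both the real and imaginary axes while 2 are wild outliers.

```python
import numpy as np

# Coordinatewise median bound: with a majority of estimates within E on each
# axis, the median of the real parts and the median of the imaginary parts
# each err by at most E, so the combined estimate errs by at most sqrt(2)*E.
true_val = 3.0 + 4.0j
E = 0.1
good = [(0.3, -0.5), (-0.9, 0.2), (0.5, 0.8), (-0.1, -0.7),
        (0.8, 0.1), (0.0, 0.9), (-0.6, -0.2)]
ests = np.array([true_val + (dr + 1j * di) * E for dr, di in good]
                + [100.0 + 0.0j, -50.0j])       # two outliers
median_est = np.median(ests.real) + 1j * np.median(ests.imag)
assert abs(median_est - true_val) <= np.sqrt(2) * E
```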
