Abstract

We provide in this work an algorithm for approximating a very broad class of symmetric Toeplitz matrices to machine precision in $\mathcal{O}(n\log n)$ time, with applications to fitting time series models. In particular, for a symmetric Toeplitz matrix $\bm{\Sigma}$ with entries
\[
\bm{\Sigma}_{j,k} = h_{|j-k|} = \int_{-1/2}^{1/2} e^{2\pi i|j-k|\omega} S(\omega)\,\mathrm{d}\omega,
\]
where $S(\omega)$ is piecewise smooth, we give an approximation $\mathcal{F}\bm{\Sigma}\mathcal{F}^{H} \approx \bm{D} + \bm{U}\bm{V}^{H}$, where $\mathcal{F}$ is the DFT matrix, $\bm{D}$ is diagonal, and the matrices $\bm{U}$ and $\bm{V}$ are in $\mathbb{C}^{n\times r}$ with $r \ll n$. Studying these matrices in the context of time series, we offer a theoretical explanation of this structure and connect it to existing spectral-domain approximation frameworks. We then give a complete discussion of the numerical method for assembling the approximation and demonstrate its efficiency for improving Whittle-type likelihood approximations, including dramatic examples where a correction of rank $r = 2$ to the standard Whittle approximation increases the accuracy of the log-likelihood approximation from $3$ to $14$ digits for a matrix $\bm{\Sigma} \in \mathbb{R}^{10^{5}\times 10^{5}}$. The method and analysis of this work apply well beyond time series analysis, providing an algorithm for extremely accurate solutions to linear systems with a wide variety of symmetric Toeplitz matrices whose entries are generated by a piecewise smooth $S(\omega)$. The analysis employed here largely depends on asymptotic expansions of oscillatory integrals, and it also provides a new perspective on when existing spectral-domain approximation methods for Gaussian log-likelihoods can be particularly problematic.
Keywords: Time series, Spectral density, Asymptotic expansion, Oscillatory integral, Gaussian process
1 Introduction

A mean-zero, stationary, and Gaussian time series is a stochastic process $\{Y_t\}_{t\in\mathbb{Z}}$ such that any contiguous collection of measurements $\bm{y} = [Y_{t_0}, Y_{t_0+1}, \ldots, Y_{t_0+n-1}]$ with arbitrary $t_0\in\mathbb{Z}$ is distributed as $\bm{y}\sim\mathcal{N}(\bm{0},\bm{\Sigma})$, where $\bm{\Sigma}_{j,k} = h_{|j-k|}$ is the autocovariance matrix generated by the autocovariance sequence $\{h_k\}_{k\in\mathbb{Z}}$. A very special property of such covariance matrices for stationary time series is that they are symmetric Toeplitz matrices, meaning that they are constant along sub- and super-diagonal bands. A fundamental and important problem in time series analysis is fitting a parametric model to data, which in this setting means using some parametric family of models for the autocovariance sequence $\{h_k(\bm{\theta})\}_{k\in\mathbb{Z}}$ that specifies the distribution $\bm{y}\sim\mathcal{N}(\bm{0},\bm{\Sigma}_{\bm{\theta}})$ (like, for example, AR(I)MA process models). The canonical estimator for parameters $\bm{\theta}\in\bm{\Theta}$ from data is the maximum likelihood estimator (MLE), which is computed by minimizing the negative log-likelihood:
\[
\hat{\bm{\theta}}^{\text{MLE}} = \operatorname*{arg\,min}_{\bm{\theta}\in\bm{\Theta}} \ell(\bm{\theta}\,|\,\bm{y}) = \operatorname*{arg\,min}_{\bm{\theta}\in\bm{\Theta}} \frac{1}{2}\left(\log\left|\bm{\Sigma}_{\bm{\theta}}\right| + \bm{y}^{T}\bm{\Sigma}_{\bm{\theta}}^{-1}\bm{y}\right),
\]
where constant terms in the negative log-likelihood $\ell(\bm{\theta}\,|\,\bm{y})$ are suppressed.
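For reference, the dense $\mathcal{O}(n^3)$ evaluation of this objective is a short computation. The sketch below (function and variable names are ours, not from the paper) evaluates $\ell(\bm{\theta}\,|\,\bm{y})$ for a symmetric Toeplitz covariance via a Cholesky factorization.

```python
import numpy as np

def gaussian_nll(y, h):
    # Dense O(n^3) negative log-likelihood (constants suppressed) for
    # y ~ N(0, Sigma), where Sigma is symmetric Toeplitz with first column h.
    n = len(y)
    Sigma = h[np.abs(np.subtract.outer(np.arange(n), np.arange(n)))]
    L = np.linalg.cholesky(Sigma)          # Sigma = L L^T
    z = np.linalg.solve(L, y)              # so y^T Sigma^{-1} y = z^T z
    logdet = 2.0 * np.sum(np.log(np.diag(L)))
    return 0.5 * (logdet + z @ z)

# Sanity check: with h = e_0, Sigma is the identity and the objective is ||y||^2 / 2.
y = np.array([1.0, -2.0, 0.5])
h = np.array([1.0, 0.0, 0.0])
```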
It is an elementary fact that autocovariance sequences are positive (semi-)definite, and by Herglotz's theorem one has that
\[
\bm{\Sigma}_{j,k} = h_{|j-k|} = \int_{[-1/2,1/2)} e^{2\pi i|j-k|\omega}\,\mathrm{d}F(\omega),
\]
where $F$ is the spectral distribution function of $\{Y_t\}$ [9]. If $F$ has a density with respect to the Lebesgue measure so that $\mathrm{d}F(\omega) = S(\omega)\,\mathrm{d}\omega$, we call $S(\omega)$ the spectral density function (SDF). An elementary property of spectral densities is that they need only be non-negative, and symmetric about the origin in the case of real-valued processes $Y_t$. Considering that writing parametric families of functions that are nonnegative and symmetric is much easier than writing parametric families of sequences that are positive (semi-)definite, it comes as no surprise that modeling $\{h_k(\bm{\theta})\}_{k\in\mathbb{Z}}$ in terms of the spectral density $S_{\bm{\theta}}(\omega)$ is an attractive and liberating option for practitioners.
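When no closed form for $h_k$ is available, the integral above can be evaluated numerically; since the integrand is smooth and periodic for smooth $S$, an equispaced (trapezoidal) rule converges extremely quickly. A minimal sketch (function names are ours; the AR(1) spectral density and autocovariance used as a check are standard):

```python
import numpy as np

def acov_from_sdf(S, k, m=4096):
    # h_k = int_{-1/2}^{1/2} e^{2 pi i k w} S(w) dw via the periodic trapezoid
    # rule, which is spectrally accurate for smooth periodic integrands.
    w = -0.5 + np.arange(m) / m
    return float(np.real(np.mean(S(w) * np.exp(2j * np.pi * k * w))))

# AR(1) check: S(w) = 1 / |1 - phi e^{-2 pi i w}|^2 has h_k = phi^|k| / (1 - phi^2).
phi = 0.5
S = lambda w: 1.0 / np.abs(1.0 - phi * np.exp(-2j * np.pi * w)) ** 2
h0, h1 = acov_from_sdf(S, 0), acov_from_sdf(S, 1)
```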
Unfortunately, there is no particularly easy way to reformulate $\ell(\bm{\theta}\,|\,\bm{y})$ directly in terms of $S(\omega)$, meaning that computing the true MLE $\hat{\bm{\theta}}^{\text{MLE}}$ requires either knowing the closed-form expression for $\bm{\Sigma}_{j,k} = h_{|j-k|} = \int_{-1/2}^{1/2} e^{2\pi i\omega|j-k|} S(\omega)\,\mathrm{d}\omega$ for all lags $|j-k|$ or directly computing all the integrals numerically. In both cases, for direct likelihood evaluation, one must in general build $\bm{\Sigma}$ entry-wise and evaluate $\ell(\bm{\theta}\,|\,\bm{y})$ using dense linear algebra. As we will discuss, there are several popular likelihood approximations for $\bm{y}$ in terms of $S_{\bm{\theta}}(\omega)$ that are convenient and computationally expedient, often running in $\mathcal{O}(n\log n)$ complexity compared to the $\mathcal{O}(n^{3})$ of a direct likelihood evaluation (another fundamental challenge that motivates many spectral approximations). The most classical and simple of these approximations to $\ell(\bm{\theta}\,|\,\bm{y})$, denoted the Whittle approximation [38], unfortunately yields very biased estimators. Improvements to this method can mitigate the challenging sources of bias, but still come at the cost of additional variability compared to $\hat{\bm{\theta}}^{\text{MLE}}$. In this work, we introduce a new approximation to $\ell(\bm{\theta}\,|\,\bm{y})$ that retains the $\mathcal{O}(n\log n)$ complexity but can be made effectively exact to double precision, with $14$-$15$ correct digits, by exploiting piecewise smoothness of spectral densities and special rank structure in the matrix $\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}$, where $\mathcal{F}$ is the discrete Fourier transform (DFT) matrix. As a result of this design, we obtain effectively exact evaluations of the log-likelihood $\ell(\bm{\theta}\,|\,\bm{y})$ and its derivatives, as well as the ability to perform uncertainty quantification on estimates.
1.1 Whittle approximations and existing work

Let
\[
J_n(\omega) = \frac{1}{\sqrt{n}}\sum_{j=0}^{n-1} e^{-2\pi ij\omega} Y_j
\]
denote the discrete Fourier transform of a time series $\bm{y} = \{Y_0,\ldots,Y_{n-1}\}$ at frequency $\omega$. For the entirety of this work, we will restrict focus to $\omega$ at the Fourier frequencies $\{\omega_j = (j-n/2-1)/n\}_{j=1}^{n}$, so that one can compute $[J_n(\omega_1),\ldots,J_n(\omega_n)]$ in $\mathcal{O}(n\log n)$ time as $\mathcal{F}\bm{y}$ using an FFT, where $\mathcal{F}$ is parameterized in the unitary form with the "fftshift" for convenience as $\mathcal{F} = [n^{-1/2} e^{-2\pi ijk/n}]_{j\in(-n/2):(n/2-1),\,k\in 0:(n-1)}$. Motivated by the elementary observations that $\mathbb{E}|J_n(\omega_k)|^{2} \to_n S(\omega_k)$ and $\mathrm{Cov}(J_n(\omega_k), J_n(\omega_{k'})) \to_n 0$ (see [9] for an introduction), one can substitute the finite-sample (co)variances with these asymptotic ones to obtain the Whittle approximation
(1.1)
\[
2\ell^{W}(\bm{\theta}\,|\,\bm{y}) = \sum_{j=1}^{n} \log S_{\bm{\theta}}(\omega_j) + \frac{|J_n(\omega_j)|^{2}}{S_{\bm{\theta}}(\omega_j)},
\]
which has this simplified form since the asymptotic covariance matrix for $[J_n(\omega_1),\ldots,J_n(\omega_n)]$ is diagonal. Optimizing this approximation instead of $\ell(\bm{\theta}\,|\,\bm{y})$ gives the estimator $\hat{\bm{\theta}}^{W}$. Another common way to think about this approximation is to observe that it effectively approximates $\bm{\Sigma}_{\bm{\theta}}$ with the circulant matrix $\bm{C}$ whose first column $\bm{c}$ satisfies $(\mathcal{F}\bm{c})_j = S(\omega_j)$. This is unfortunately incorrect, as $\bm{\Sigma}$ is almost never exactly circulant, and even if it were, it would not in general be that particular circulant matrix for any finite $n$. As a result of these finite-sample applications of limiting identities, this estimator can exhibit severe bias for finite sample sizes and certain varieties of spectral densities $S(\omega)$; see [33] for a particularly thoughtful and concrete analysis of the sources of bias of this approximation in terms of boundary effects.
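In code, one evaluation of (1.1) is a single FFT and a vectorized reduction. A minimal sketch using NumPy's FFT conventions (function name ours); np.fft.fftshift reorders the output to match the frequency grid $\{\omega_j\}$ running from $-1/2$ upward:

```python
import numpy as np

def whittle_nll(y, S):
    # ell^W from (1.1): unitary, shifted DFT of the data, then a pointwise
    # comparison of the periodogram against the model spectral density S.
    n = len(y)
    J = np.fft.fftshift(np.fft.fft(y)) / np.sqrt(n)   # J_n at the Fourier frequencies
    w = np.fft.fftshift(np.fft.fftfreq(n))            # grid from -1/2 to (n/2 - 1)/n
    Sw = S(w)
    return 0.5 * float(np.sum(np.log(Sw) + np.abs(J) ** 2 / Sw))

# For white noise (S constant), Parseval's identity gives a closed-form value.
y = np.array([0.3, -1.2, 0.7, 2.1])
nll = whittle_nll(y, lambda w: np.full_like(w, 2.0))
```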
Yet another way to understand the source of bias in this approximation is to observe that even if $\bm{y}\sim\mathcal{N}(\bm{0},\bm{\Sigma}_{\bm{\theta}_0})$ with $[\bm{\Sigma}_{\bm{\theta}_0}]_{j,k} = \int_{-1/2}^{1/2} e^{2\pi i\omega|j-k|} S_{\bm{\theta}_0}(\omega)\,\mathrm{d}\omega$, the value $S_{\bm{\theta}_0}(\omega_k)$ is not the variance of $J_n(\omega_k)$ for any finite $n$. And while ignoring covariance structure between values (which the Whittle approximation also does) can in some cases come only at the cost of efficiency, using incorrect variances will almost always come at the cost of bias. The exact (co)variances of these DFT coefficients are a classical computation, but because they will be used extensively in this work we provide a terse statement of their derivation here.
Proposition 1.1. Let $\{Y_t\}_{t=0}^{n-1}$ be a stationary mean-zero time series with spectral density $S(\omega)$ and $J_n(\omega_k)$ defined as above, and define $S_n(\omega_k,\omega_{k'}) = \mathrm{Cov}(J_n(\omega_k), J_n(\omega_{k'}))$. Then

(1.2)
\[
S_n(\omega_k,\omega_{k'}) = \frac{e^{i\frac{n-1}{n}\pi(k-k')}}{n}\int_{-1/2}^{1/2} D_n^{s}(\omega_k-\omega)\,D_n^{s}(\omega_{k'}-\omega)\,S(\omega)\,\mathrm{d}\omega,
\]
where $D_n^{s}(\omega) = \frac{\sin(\pi n\omega)}{\sin(\pi\omega)} = e^{-i(n-1)\pi\omega}\sum_{j=0}^{n-1} e^{2\pi ij\omega}$ is a "shifted" Dirichlet kernel.
Proof 1.2. Using Herglotz's theorem, an elementary computation shows that
\begin{align*}
\mathrm{Cov}(J_n(\omega_k), J_n(\omega_{k'})) &= \int_{-1/2}^{1/2}\left(\sum_{j=0}^{n-1} n^{-1/2} e^{2\pi ij(\omega-\omega_k)}\right)\left(\sum_{j=0}^{n-1} n^{-1/2} e^{2\pi ij(\omega_{k'}-\omega)}\right) S(\omega)\,\mathrm{d}\omega \\
&= \frac{1}{n}\int_{-1/2}^{1/2}\left(e^{-i(n-1)\pi\omega_k}\frac{\sin(\pi n(\omega_k-\omega))}{\sin(\pi(\omega_k-\omega))}\right)\left(e^{i(n-1)\pi\omega_{k'}}\frac{\sin(\pi n(\omega_{k'}-\omega))}{\sin(\pi(\omega_{k'}-\omega))}\right) S(\omega)\,\mathrm{d}\omega.
\end{align*}
Applying the definition of $D_n^{s}(\omega)$, collecting terms, and bringing the complex exponential prefactors on the trigonometric terms outside the integral gives the result.
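Since the complex prefactor in (1.2) vanishes on the diagonal $k = k'$, the marginal variances are easy to check numerically: for a small AR(1) model (its standard closed-form autocovariance is assumed below), a quadrature of (1.2) should match the diagonal of $\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}$ computed densely. A sketch of that check:

```python
import numpy as np

n, phi = 8, 0.6
S = lambda w: 1.0 / np.abs(1.0 - phi * np.exp(-2j * np.pi * w)) ** 2

# Dense reference: diagonal of F Sigma F^H with the shifted unitary DFT matrix.
Sigma = phi ** np.abs(np.subtract.outer(np.arange(n), np.arange(n))) / (1 - phi ** 2)
rows = np.arange(-n // 2, n // 2)[:, None]
cols = np.arange(n)[None, :]
F = np.exp(-2j * np.pi * rows * cols / n) / np.sqrt(n)
exact = np.real(np.diag(F @ Sigma @ F.conj().T))

# Quadrature of (1.2) at k = k': S_n(w_k, w_k) = (1/n) int D_n^s(w_k - w)^2 S(w) dw.
m = 4096
w = -0.5 + (np.arange(m) + 0.5) / m        # midpoint grid dodges the removable singularity
D = lambda x: np.sin(np.pi * n * x) / np.sin(np.pi * x)
wk = rows.ravel() / n
quad = np.array([np.mean(D(f - w) ** 2 * S(w)) / n for f in wk])
```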
A particularly elegant proposition for dealing with the bias of 𝜽 ^ W superscript ^ 𝜽 𝑊 \hat{\bm{\theta}}^{W} over^ start_ARG bold_italic_θ end_ARG start_POSTSUPERSCRIPT italic_W end_POSTSUPERSCRIPT is given in [35 ] : instead of using S 𝜽 ( ω j ) subscript 𝑆 𝜽 subscript 𝜔 𝑗 S_{\bm{\theta}}(\omega_{j}) italic_S start_POSTSUBSCRIPT bold_italic_θ end_POSTSUBSCRIPT ( italic_ω start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) inℓ W superscript ℓ 𝑊 \ell^{W} roman_ℓ start_POSTSUPERSCRIPT italic_W end_POSTSUPERSCRIPT , use the correct finite-sample marginal variances S n ( ω j , ω j ) subscript 𝑆 𝑛 subscript 𝜔 𝑗 subscript 𝜔 𝑗 S_{n}(\omega_{j},\omega_{j}) italic_S start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ( italic_ω start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT , italic_ω start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) in their place. In doing so, one immediately obtains a new likelihoodapproximation whose gradient gives unbiased estimating equations (UEEs) forparameters 𝜽 𝜽 \bm{\theta} bold_italic_θ [35 , 22 ] . This estimator, calledthe debiased Whittle estimator, is given as
𝜽 ^ D W = arg min 𝜽 ∈ 𝚯 ℓ D W ( 𝜽 | 𝒚 ) = arg min 𝜽 ∈ 𝚯 1 2 ( ∑ j = 1 n log S n , 𝜽 ( ω j , ω j ) + | J n ( ω j ) | 2 S n , 𝜽 ( ω j , ω j ) ) , superscript ^ 𝜽 𝐷 𝑊 subscript arg min 𝜽 𝚯 superscript ℓ 𝐷 𝑊 conditional 𝜽 𝒚 subscript arg min 𝜽 𝚯 1 2 superscript subscript 𝑗 1 𝑛 subscript 𝑆 𝑛 𝜽
subscript 𝜔 𝑗 subscript 𝜔 𝑗 superscript subscript 𝐽 𝑛 subscript 𝜔 𝑗 2 subscript 𝑆 𝑛 𝜽
subscript 𝜔 𝑗 subscript 𝜔 𝑗 \hat{\bm{\theta}}^{DW}=\operatorname*{arg\,min}_{\bm{\theta}\in\bm{\Theta}}%\ell^{DW}(\bm{\theta}\,|\,\bm{y})=\operatorname*{arg\,min}_{\bm{\theta}\in\bm{%\Theta}}\frac{1}{2}\left(\sum_{j=1}^{n}\log S_{n,\bm{\theta}}(\omega_{j},%\omega_{j})+\frac{\left|J_{n}(\omega_{j})\right|^{2}}{S_{n,\bm{\theta}}(\omega%_{j},\omega_{j})}\right), over^ start_ARG bold_italic_θ end_ARG start_POSTSUPERSCRIPT italic_D italic_W end_POSTSUPERSCRIPT = start_OPERATOR roman_arg roman_min end_OPERATOR start_POSTSUBSCRIPT bold_italic_θ ∈ bold_Θ end_POSTSUBSCRIPT roman_ℓ start_POSTSUPERSCRIPT italic_D italic_W end_POSTSUPERSCRIPT ( bold_italic_θ | bold_italic_y ) = start_OPERATOR roman_arg roman_min end_OPERATOR start_POSTSUBSCRIPT bold_italic_θ ∈ bold_Θ end_POSTSUBSCRIPT divide start_ARG 1 end_ARG start_ARG 2 end_ARG ( ∑ start_POSTSUBSCRIPT italic_j = 1 end_POSTSUBSCRIPT start_POSTSUPERSCRIPT italic_n end_POSTSUPERSCRIPT roman_log italic_S start_POSTSUBSCRIPT italic_n , bold_italic_θ end_POSTSUBSCRIPT ( italic_ω start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT , italic_ω start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) + divide start_ARG | italic_J start_POSTSUBSCRIPT italic_n end_POSTSUBSCRIPT ( italic_ω start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) | start_POSTSUPERSCRIPT 2 end_POSTSUPERSCRIPT end_ARG start_ARG italic_S start_POSTSUBSCRIPT italic_n , bold_italic_θ end_POSTSUBSCRIPT ( italic_ω start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT , italic_ω start_POSTSUBSCRIPT italic_j end_POSTSUBSCRIPT ) end_ARG ) ,
where the additional subscript $\bm{\theta}$ in $S_{n,\bm{\theta}}$ is included simply to emphasize the dependence of this function on the (varying) parameters $\bm{\theta}$, and is independent of the subscript $n$, which signifies the finite-sample correction. The estimator $\hat{\bm{\theta}}^{DW}$ now only comes at the cost of additional variance from ignoring the correlation of $J_{n}(\omega_{k})$ with $J_{n}(\omega_{k'})$, and it serves as a useful point of comparison with the estimator proposed here, as it better isolates the additional variance of Whittle-type estimators compared to $\hat{\bm{\theta}}^{\text{MLE}}$.
While the standard Whittle approximation requires only a pre-computed FFT of the data, the debiased Whittle approximation requires one FFT per evaluation of $\ell^{DW}(\bm{\theta}\,|\,\bm{y})$, since one can compute $\left\{S_{n}(\omega_{j},\omega_{j})\right\}_{j=1}^{n}$ in the time domain using the classical identity
\[
S_{n}(\omega_{j},\omega_{j})=2\operatorname{Re}\left\{\sum_{k=0}^{n-1}\left(1-\frac{k}{n}\right)h_{k}\,e^{-2\pi i\omega_{j}k}\right\}-h_{0}. \tag{1.3}
\]
In [35] it is noted that this identity is useful for avoiding numerical integration. While this is true, it is also quite inconvenient, because it requires knowing the covariance function $\left\{h_{k}\right\}$ as well as the spectral density $S(\omega)$. Considering that a large part of the appeal of a Whittle-type approximation is that one may specify a parametric spectral density instead of a covariance function, this is a potentially serious limitation. In Section 2, in the process of building up to our own approximation, we discuss two performant and simple numerical integration strategies that relax this requirement and make this method more generally available to practitioners.
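As a concrete illustration, the identity (1.3) can be evaluated at every Fourier frequency $\omega_j=j/n$ simultaneously with a single FFT of the damped covariance sequence $(1-k/n)h_k$. The sketch below is in Python rather than the Julia used in our software, with an AR(1)-type covariance $h_k=\rho^k$ and the helper name `expected_periodogram` chosen purely for illustration; it checks the FFT evaluation against a direct $\mathcal{O}(n^2)$ summation of the expected periodogram.

```python
import numpy as np

def expected_periodogram(h):
    """Evaluate (1.3) at all Fourier frequencies w_j = j/n with one FFT:

    S_n(w_j, w_j) = 2 Re{ sum_{k=0}^{n-1} (1 - k/n) h_k e^{-2 pi i w_j k} } - h_0.
    """
    n = len(h)
    damped = (1.0 - np.arange(n) / n) * h
    return 2.0 * np.fft.fft(damped).real - h[0]

# AR(1)-type covariance h_k = rho^k (an illustrative choice, not from the paper).
n, rho = 64, 0.7
h = rho ** np.arange(n)

fast = expected_periodogram(h)

# Brute-force check: S_n(w_j, w_j) = sum_{|u| < n} (1 - |u|/n) h_{|u|} e^{-2 pi i w_j u}.
slow = np.empty(n)
u = np.arange(-(n - 1), n)
for j in range(n):
    slow[j] = np.sum((1 - np.abs(u) / n) * h[np.abs(u)]
                     * np.exp(-2j * np.pi * (j / n) * u)).real

assert np.allclose(fast, slow)
```

Note that this route still requires the covariances $\{h_k\}$ as inputs, which is exactly the inconvenience discussed above.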
1.2 Our method

The approximation we introduce here is a continuation of such spectrally motivated likelihood methods. Letting $\bm{D}$ denote the diagonal matrix with entries $\left\{S(\omega_{j})\right\}_{j=1}^{n}$, the fundamental observation of this work is that for the very broad class of piecewise-smooth spectral densities with a small number of rough points (including essentially all standard parametric families, such as (AR)(I)(MA), Matérn, rational quadratic, power law, and other models), we have
\[
\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}\approx\bm{D}+\bm{U}\bm{V}^{H}=\bm{D}+\bm{E}, \tag{1.4}
\]
where $\bm{U}$ and $\bm{V}$ are in $\mathbb{C}^{n\times r}$ with $r\ll n$. The standard Whittle approximation treats that low-rank update as exactly zero, and in this light our method can be viewed as a ``corrected'' Whittle approximation. The key observation that makes working with this approximation convenient is that the ``Whittle correction'' matrix $\bm{E}=\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}-\bm{D}$, along with being severely rank-deficient in many settings, can be applied to a vector in $\mathcal{O}(n\log n)$ complexity via circulant embedding of Toeplitz matrices, and so one can use randomized low-rank approximation [21] to assemble $\bm{U}$ and $\bm{V}$ efficiently and scalably with implicit (matrix-free) tools. Counter-examples, i.e. models for which $\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}$ does not have this structure, are rare in practice, as most popular and natural models for spectral densities $S(\omega)$ are smooth except at possibly a modest number of locations. The category of models that likely will not be amenable to this structure, as will be discussed at length in Section 3, are those where $S(\omega)$ is either highly oscillatory or has many rough points.
Spectral densities like $S(\omega)=\gamma\,\mathrm{sinc}(\gamma\omega)^{2}$ or $S(\omega)=\sum_{j=1}^{R}\alpha_{j}e^{-\theta|\omega-\omega_{j}|}$ for large $\gamma$ or $R$, respectively, are good examples of models for which the rank $r$ required for high accuracy is large enough that the method is not practical.
Outside of these pathological cases, despite having the same quasilinear runtime complexity, this representation can be exact to computer precision for very small $r$ (sometimes as small as $r=2$). As one might expect, this does come at the cost of a more expensive prefactor. If the standard Whittle approximation requires one pre-computed FFT of the data and subsequently runs at linear complexity, and the debiased Whittle approximation requires one additional FFT per evaluation due to the computation of the terms in (1.3), our approximation requires $\mathcal{O}(3r)$ FFTs to assemble (although a simpler implementation using $\mathcal{O}(4r)$ FFTs is used in the computations of this work; a software package and scripts for all computations done in this work are available at https://github.com/cgeoga/SpectralEstimators.jl). For a spectral density that is neither particularly well- nor poorly-behaved, a reasonable expectation for an $r$ that gives all $14$ significant digits of $\ell(\bm{\theta}\,|\,\bm{y})$ is around $r\approx 100$, and so for full precision of the likelihood this method can require several hundred FFTs. But considering how fast FFTs are, the prefactor cost for this method is low enough that it compares favorably with other high-accuracy alternatives.
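To make the assembly step concrete, here is a minimal Python sketch (the helper names `sigma_mv`, `E_mv`, and `low_rank` are our own, not from the paper's package) that applies the Whittle correction $\bm{E}=\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}-\bm{D}$ to vectors in $\mathcal{O}(n\log n)$ time via circulant embedding and a unitary FFT, and then compresses it with a basic randomized range-finder in the style of [21]. It uses $S(\omega)=e^{-10|\omega|}$, whose covariances have the closed form $h_k=20(1-(-1)^k e^{-5})/(100+4\pi^2 k^2)$, with $\bm{D}=\operatorname{diag}(S(\omega_j))$; the rank and dimensions are small purely so that a dense check is feasible.

```python
import numpy as np

def h(k):
    # Closed-form covariances of S(w) = exp(-10|w|) on [-1/2, 1/2].
    k = np.asarray(k, dtype=float)
    sgn = 1.0 - 2.0 * (k % 2)  # (-1)^k
    return 20.0 * (1.0 - sgn * np.exp(-5.0)) / (100.0 + 4.0 * np.pi**2 * k**2)

n = 256
hk = h(np.arange(n))

# O(n log n) Toeplitz matvec by embedding Sigma into a 2n x 2n circulant.
c = np.concatenate([hk, [0.0], hk[:0:-1]])  # first column of the circulant
fc = np.fft.fft(c)
def sigma_mv(x):
    xp = np.concatenate([x, np.zeros(n, dtype=complex)])
    return np.fft.ifft(fc * np.fft.fft(xp))[:n]

# Whittle correction E v = F Sigma F^H v - D v, with F the unitary DFT and
# D = diag(S(w_j)) at the Fourier frequencies w_j.
D = np.exp(-10.0 * np.abs(np.fft.fftfreq(n)))
def E_mv(v):
    FHv = np.fft.ifft(v) * np.sqrt(n)                  # F^H v
    return np.fft.fft(sigma_mv(FHv)) / np.sqrt(n) - D * v

# Basic randomized range-finder: since E is Hermitian, E ~= Q (Q^H E) = U V^H
# with U = Q and V = E Q, built entirely from matvecs with E.
rng = np.random.default_rng(0)
def low_rank(r):
    omega = rng.standard_normal((n, r)) + 1j * rng.standard_normal((n, r))
    Y = np.column_stack([E_mv(omega[:, i]) for i in range(r)])
    Q, _ = np.linalg.qr(Y)
    V = np.column_stack([E_mv(Q[:, i]) for i in range(r)])
    return Q, V

# Dense check of the fast matvec and of the compression quality.
F = np.fft.fft(np.eye(n)) / np.sqrt(n)
Sigma = h(np.abs(np.subtract.outer(np.arange(n), np.arange(n))))
E_dense = F @ Sigma @ F.conj().T - np.diag(D)

v = rng.standard_normal(n) + 0j
assert np.allclose(E_mv(v), E_dense @ v)

Q, V = low_rank(40)
err = np.linalg.norm(E_dense - Q @ V.conj().T) / np.linalg.norm(E_dense)
```

In a production implementation one would of course never form `E_dense`; it is included here only to verify the matrix-free pieces on a small problem.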
Another material advantage of this method over other spectral approximations is that it can be used to compute uncertainties for estimators. A basic fact about MLEs is that, under sufficient regularity conditions, $\bm{I}(\bm{\theta})^{-1/2}(\hat{\bm{\theta}}^{\text{MLE}}-\bm{\theta}^{\text{true}})\rightsquigarrow\mathcal{N}(\bm{0},\bm{\mathcal{I}})$, where $\bm{I}(\bm{\theta})$ is the expected Fisher information matrix, which will be introduced and discussed at length later in this work (although in some settings it may actually be preferable to use the ``observed'' information matrix $H\ell(\bm{\theta}\,|\,\bm{y})\,|_{\bm{\theta}=\hat{\bm{\theta}}^{\text{MLE}}}$ [14]). But when one obtains an estimator by optimizing some function $\tilde{\ell}(\bm{\theta})$ other than the true log-likelihood, the Hessian of $\tilde{\ell}$ does not have this interpretation.
The fundamental property an approximation $\tilde{\ell}(\bm{\theta})$ needs to satisfy in order to provide a statistically useful estimator is that it gives unbiased estimating equations (UEEs), so that $\mathbb{E}_{\bm{\theta}_{0}}[\nabla\tilde{\ell}(\bm{\theta}\,|\,\bm{y})\,|_{\bm{\theta}=\bm{\theta}_{0}}]=\bm{0}$ for any $\bm{\theta}_{0}$ [22]. But there are many such $\tilde{\ell}$ functions that satisfy this property and yet are not good pointwise approximations to $\ell$. In those cases, one certainly cannot expect the Hessian of $\tilde{\ell}$ at $\hat{\bm{\theta}}^{\text{MLE}}$ to resemble the Hessian of $\ell$ at those values. Approximations like the debiased Whittle-induced $\tilde{\ell}$ fall into this category. The method of this work, however, can be used to obtain both estimators and approximate uncertainties via its Hessian matrix, since it provides a pointwise-accurate approximation to $\ell(\bm{\theta})$ for all $\bm{\theta}$.
There is a significant body of literature on the more general problem of approximating covariance matrices for the purpose of estimation. Among the most accurate approximations in this space are hierarchical matrices, which exploit the rank structure of matrix sub-blocks in a way that provides high accuracy while retaining good runtime complexity. These methods have been applied to the Gaussian process problem in a variety of ways [8, 1, 15, 28, 26, 17], and unlike the tools proposed in this work they are not specialized to one-dimensional gridded measurements. Similarly, tools specific to Toeplitz matrices that exploit rank structure in off-diagonal blocks of $\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}$ have been proposed for fast direct linear solvers [27, 10]. The downside to these methods is that the algorithms are often quite complicated, and the more domain-specific quantities in statistical computing, such as log-determinants, derivatives with respect to model parameters, and certain matrix-matrix products, are more involved to compute and are often computed to lower accuracy [3, 32, 28, 17]. Other popular methods in this space that provide less accurate likelihood approximations in the pointwise sense but are often easier to implement and analyze include sparse approximations to $\bm{\Sigma}^{-1}$ [30, 25, 29, 34, 24], direct low-rank approximation methods [11], and matrix tapering [16]. The method presented here fits into this body of work as a specialized method for stationary time series data.
In exchange for that specialization, however, it achieves the crushing accuracy of hierarchical matrix methods in a much simpler and more tractable approximation format, and it permits the fast and accurate computation of domain-specific quantities like log-determinants and derivatives of the Gaussian log-likelihood.
1.3 Outline

In the next several sections, we introduce tools for implicitly assembling and working with matrices of the form (1.4). We start with a discussion of using quadrature and asymptotic expansions to efficiently and accurately work with $\bm{\Sigma}$ using only the spectral density $S(\omega)$. We then discuss (i) why the perturbation term in (1.4) should be low-rank, which also offers a new theoretical perspective on when Whittle approximations are particularly problematic, and (ii) the details of assembling the low-rank $\bm{U}\bm{V}^{H}$ entirely from fast matrix-vector products. We close by providing several tests and benchmarks that demonstrate the speed and accuracy of the method.
2 Oscillatory integration of spectral densities

As discussed in the previous section, creating the low-rank approximation $\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}\approx\bm{D}+\bm{U}\bm{V}^{H}$ requires working with $\bm{\Sigma}$, the Toeplitz matrix with entries $\bm{\Sigma}_{j,k}=h_{|j-k|}$, where $\left\{h_{k}\right\}$ is the autocovariance sequence of the time series. Unlike in [35], where both kernel and spectral density values are used to avoid numerical integration, we now discuss strategies for efficiently and accurately computing integrals of the form
\[
h_{k}=\int_{-1/2}^{1/2}S(\omega)e^{2\pi ik\omega}\,\mathrm{d}\omega
\]
for $k\in\{0,\dots,n-1\}$, so that one can create this factorization given only the spectral density. Let us briefly review the challenges of such a task. First, the standard method of obtaining the values $\left\{h_{k}\right\}$ with the trapezoidal rule via the FFT may require a large number of nodes for even moderate accuracy if, for example, $S(\omega)$ has rough points (as with $S(\omega)=e^{-|\omega|}$, which is not differentiable at the origin). A second common issue arises if $S(\omega)$ is not periodic at its endpoints, which limits the convergence rate of the trapezoidal rule to its baseline rate of $\mathcal{O}(m^{-2})$ for $m$ nodes for general non-smooth and/or non-periodic integrands. As a final complication, by the Shannon-Nyquist sampling theorem one requires at least $\mathcal{O}(k)$ quadrature nodes to resolve the oscillations of $e^{2\pi ik\omega}$ on $[-1/2,1/2]$, and so any direct summation method to obtain $\left\{h_{k}\right\}_{k=0}^{n-1}$ would scale at least as $\mathcal{O}(n^{2})$.
To bypass these issues, we propose using adaptive quadrature for a fixed number of lags $k$, and then transitioning to asymptotic expansions of $h_{k}$ for lags greater than some $k_{0}$. For integrands that are not too oscillatory, standard adaptive integration tools can be very efficient and capable for any piecewise-smooth integrand. In this work, the Julia language package QuadGK.jl [7, 23] was used, although many other tools would likely have worked equally well. Briefly, Gaussian quadrature is a tool for achieving high-order accuracy in the numerical integration of smooth functions. It approximates integrals as
\[
\int_{a}^{b}f(x)\,\mathrm{d}x\approx\sum_{j=1}^{M}\alpha_{j}f(x_{j}),
\]
which is exact for polynomials up to order $2M-1$, where $\left\{\alpha_{j}\right\}_{j=1}^{M}$ are weights and $\left\{x_{j}\right\}_{j=1}^{M}$ are nodes. In the case of $[a,b]=[-1,1]$, the Legendre polynomials can be used to obtain the weights and nodes and achieve an error rate of $\mathcal{O}(M^{-2m-1})$ for functions $f\in\mathcal{C}^{(m)}([a,b])$ [18, 36].
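As a sketch of this adaptive-quadrature stage (Python with SciPy here, standing in for the paper's use of QuadGK.jl), the snippet below computes $h_k$ for moderate lags by splitting the integral at the rough point of $S(\omega)=e^{-10|\omega|}$, and compares against the closed form $h_k=20(1-(-1)^k e^{-5})/(100+4\pi^2 k^2)$, which is easily obtained by hand for this density.

```python
import numpy as np
from scipy.integrate import quad

S = lambda w: np.exp(-10.0 * abs(w))

def h_quad(k):
    """h_k = int_{-1/2}^{1/2} S(w) e^{2 pi i k w} dw via adaptive Gauss-Kronrod.

    S is even, so h_k = 2 int_0^{1/2} S(w) cos(2 pi k w) dw; splitting the
    domain at the rough point w = 0 keeps the integrand smooth.
    """
    val, _ = quad(lambda w: 2.0 * S(w) * np.cos(2.0 * np.pi * k * w),
                  0.0, 0.5, epsabs=1e-13, epsrel=1e-13, limit=200)
    return val

def h_exact(k):
    # Closed form for S(w) = exp(-10|w|) on [-1/2, 1/2] and integer k.
    return 20.0 * (1.0 - (-1.0) ** k * np.exp(-5.0)) / (100.0 + 4.0 * np.pi**2 * k**2)

for k in range(30):
    assert abs(h_quad(k) - h_exact(k)) < 1e-10
```

Note how the node count (and hence cost) of the adaptive rule grows with $k$ as the integrand oscillates faster, which is exactly why the expansions of the next passage take over at large lags.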
More interesting, however, is the matter of accurately and efficiently computing $h_{k}$ when $k$ is large. Unlike traditional integration tools, asymptotic expansion-type methods become more accurate as $k$ increases. As in the introduction, the following result is not new (see [12] for a comprehensive discussion), but we state it in the specific form that is useful for this work, since it will be referred to frequently. Since the idea of the proof is also used repeatedly, we provide it here as well.
Proposition 2.1. Let $S\in\mathcal{C}^{(m)}([-1/2,1/2])$ with $\int_{-1/2}^{1/2}|S^{(m)}(\omega)|\,\mathrm{d}\omega<C<\infty$. Then
\[
\int_{-1/2}^{1/2}S(\omega)e^{2\pi ik\omega}\,\mathrm{d}\omega=-\sum_{j=0}^{m-1}(-2\pi ik)^{-(j+1)}\left(S^{(j)}(1/2)e^{\pi ik}-S^{(j)}(-1/2)e^{-\pi ik}\right)+\mathcal{O}(k^{-m-1}).
\]
Proof 2.2. Simply apply integration by parts as many times as possible:
\begin{align*}
\int_{-1/2}^{1/2}S(\omega)e^{2\pi ik\omega}\,\mathrm{d}\omega
&=\frac{S(1/2)e^{\pi ik}-S(-1/2)e^{-\pi ik}}{2\pi ik}+(-2\pi ik)^{-1}\int_{-1/2}^{1/2}S'(\omega)e^{2\pi ik\omega}\,\mathrm{d}\omega\\
&\;\;\vdots\\
&=-\sum_{j=0}^{m-1}(-2\pi ik)^{-(j+1)}\left(S^{(j)}(1/2)e^{\pi ik}-S^{(j)}(-1/2)e^{-\pi ik}\right)\\
&\quad+(-2\pi ik)^{-m}\int_{-1/2}^{1/2}S^{(m)}(\omega)e^{2\pi ik\omega}\,\mathrm{d}\omega.
\end{align*}
Remarkably, evaluating this expansion at any lag $k$ requires only a few derivatives of $S(\omega)$ at its endpoints. For this reason, with this tool, evaluating the tail of $\left\{h_{k}\right\}$ at high lags $k$ actually becomes the fastest part of the domain to handle. And while the supposition of the above theorem that $S\in\mathcal{C}^{(m)}([-1/2,1/2])$ is obviously restrictive, a simple observation extends this result to a much broader class of functions.
Corollary 2.3. Let $S\in\mathcal{C}^{(m)}([-1/2,1/2]\setminus\{\omega_{1}^{r},\dots,\omega_{L}^{r}\})$, so that $S$ is smooth except at the locations $\{\omega_{1}^{r},\dots,\omega_{L}^{r}\}$ (with the convention that the endpoints $\omega_{1}^{r}=-1/2$ and $\omega_{L}^{r}=1/2$ are included among these points), and further assume that at each $\omega_{l}^{r}$ it has at least $m$ directional derivatives $S^{(m\pm)}(\omega_{l}^{r})$ from both the left ($m-$) and the right ($m+$) for $l\in\{1,\dots,L\}$. Then
\begin{align*}
\int_{-1/2}^{1/2}S(\omega)e^{2\pi ik\omega}\,\mathrm{d}\omega
&=-\sum_{l=1}^{L-1}\sum_{j=0}^{m-1}(-2\pi ik)^{-(j+1)}\left(S^{(j-)}(\omega_{l+1}^{r})e^{2\pi ik\omega_{l+1}^{r}}-S^{(j+)}(\omega_{l}^{r})e^{2\pi ik\omega_{l}^{r}}\right)\\
&\quad+\mathcal{O}(k^{-m-1}).
\end{align*}
This corollary, which is proven by simply breaking up the domain $[-1/2,1/2]$ into segments with endpoints at each $\omega_l^r$ and applying the above argument to each segment, means that this asymptotic expansion approach can be used to accurately calculate $h_k$ for very large $k$ for any spectral density that is merely piecewise smooth. The prefactor on the error term in this setting is naturally larger than in the setting of Proposition 2.1. Nonetheless, the convergence rates are on our side. For example, if $k=3\,000$ and one uses five derivatives, then the error term in the simple case is given by $C\cdot(6\,000\pi)^{-6}\approx C\cdot 10^{-26}$ for some constant $C$. For a spectral density like $S(\omega)=10e^{-10|\omega|}$, an example that will be studied extensively in later sections because it is well-behaved except at the origin, we see that $h_{3000}\approx 5\cdot 10^{-7}$.
For that particular function, then, this bound indicates that unless $C$ is quite large, one can reasonably expect a full 14-15 digits of accuracy. For functions with few rough points, this evaluation method only requires derivatives of $S(\omega)$ at a few locations, and once those have been pre-computed each term can be evaluated in just a few nanoseconds on a modern CPU. Combined with adaptive quadrature for the low lags $k$ at which the expansion is not yet accurate, this gives $\{h_k\}_{k=0}^{n-1}$ for all lags quickly and accurately.
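As a quick sanity check on the numbers quoted above, for this particular $S(\omega)=10e^{-10|\omega|}$ the Fourier coefficient integral can be done in closed form by elementary means, and adaptive quadrature split at the rough point $\omega=0$ reproduces it. The helper names below are illustrative, not from any accompanying code:

```python
import numpy as np
from scipy.integrate import quad

# S(w) = 10 exp(-10|w|) is even, so h_k is real and equals twice the
# integral over [0, 1/2]; splitting at the rough point w = 0 keeps the
# integrand smooth on the panel that the quadrature sees.
def h_quad(k):
    val, _ = quad(lambda w: 20.0 * np.exp(-10.0 * w) * np.cos(2.0 * np.pi * k * w),
                  0.0, 0.5, limit=1000)
    return val

# Elementary closed form for this specific S (integrate exp(a w) cos(b w)).
def h_exact(k):
    return 200.0 * (1.0 - (-1) ** k * np.exp(-5.0)) / (100.0 + (2.0 * np.pi * k) ** 2)

print(abs(h_quad(50) - h_exact(50)))   # agreement to near machine precision
print(h_exact(3000))                   # about 5.6e-7, matching h_3000 above
```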
An alternative way to compute $\{h_k\}_{k=1}^{n}$ for potentially large $n$ in $\mathcal{O}(n\log n)$ time is to use nonuniform fast Fourier transforms (NUFFTs) and Gauss-Legendre quadrature to evaluate $h_k$ at all lags [13, 5]. While the traditional FFT algorithm critically depends on measurements being given on a regular grid and exploits the symmetries that such a grid creates, NUFFTs are fast algorithms that evaluate sums of complex exponentials at potentially irregular locations and target frequencies in quasilinear runtime. Per the discussion above, computing $\{h_k\}_{k=1}^{n}$ requires $M=\mathcal{O}(n)$ quadrature nodes (to resolve the highest frequencies), and the resulting sums naturally need to be evaluated at $n$ target values, so naive computation of those integrals would require $\mathcal{O}(n^2)$ work. With the NUFFT, however, the sums can be computed as a matrix-vector product
\[
h_{k}=\sum_{j=1}^{M}\alpha_{j}e^{2\pi ik\omega_{j}^{q}}S(\omega_{j}^{q}),\quad k=0,\dots,n-1,
\]
in $\mathcal{O}(n\log n)$ time, where $\omega_j^q$ denotes the $j$-th quadrature node and $\alpha_j$ its weight. This method, while much more expensive than using asymptotic expansions, has the advantage that it does not require computing any derivatives to evaluate the sequence $\{h_k\}_{k=0}^{n-1}$. Because of this, it can be more accurate for SDFs $S(\omega)$ whose higher-order derivatives potentially get very large or are for some other reason numerically unstable to compute. With that said, the accuracy of Gauss-Legendre quadrature depends heavily on how smooth the integrand is, so we still recommend splitting the domain $[-1/2,1/2)$ into panels whose endpoints include the rough points of $S(\omega)$, as with the splitting of the asymptotic expansions. If $S(\omega)$ is amenable to the split asymptotic expansion method and $n$ is large, then the asymptotic expansion approach will be significantly faster. But if $S(\omega)$ has properties that make working with higher-order derivatives infeasible, this NUFFT-based method is an attractive alternative that still enables practitioners to work exclusively with $S(\omega)$ for evaluating the log-likelihood while retaining $\mathcal{O}(n\log n)$ complexity.
For a much more detailed discussion of numerical Fourier integration of spectral densities and the quadrature-based tools introduced here for doing it quickly, we refer readers to [6].
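As an illustration of the quadrature side of this approach, the sketch below builds panel-wise Gauss-Legendre rules on $[-1/2,1/2]$ and evaluates the sums above by a naive $\mathcal{O}(nM)$ matrix product; swapping that product for a type-3 NUFFT call (e.g., from the FINUFFT library) would give the $\mathcal{O}(n\log n)$ version described in the text. The function name and panel-refinement heuristic are ours, not from the cited references:

```python
import numpy as np

def autocov_from_sdf(S, n, rough_points=(), q=16):
    """Approximate h_k = int_{-1/2}^{1/2} S(w) exp(2 pi i k w) dw, k = 0..n-1,
    by panel-wise Gauss-Legendre quadrature (a simple heuristic refinement)."""
    # Panel edges: subdivide [-1/2, 1/2] finely enough to resolve the fastest
    # oscillation exp(2 pi i (n-1) w), and make every rough point of S an edge.
    edges = np.unique(np.concatenate([np.linspace(-0.5, 0.5, max(8, n // 2)),
                                      np.asarray(rough_points, dtype=float)]))
    x, w = np.polynomial.legendre.leggauss(q)   # reference rule on [-1, 1]
    a, b = edges[:-1], edges[1:]
    om = (0.5 * (b - a)[:, None] * x + 0.5 * (a + b)[:, None]).ravel()
    al = (0.5 * (b - a)[:, None] * w).ravel()
    k = np.arange(n)
    # Naive O(n M) summation; a type-3 NUFFT computes these same sums in
    # quasilinear time when n is large.
    return np.exp(2j * np.pi * np.outer(k, om)) @ (al * S(om))

# Check against an AR(1) process, where S and h_k are both known in closed form.
phi = 0.5
S = lambda w: 1.0 / (1.0 - 2.0 * phi * np.cos(2.0 * np.pi * w) + phi ** 2)
h = autocov_from_sdf(S, 32)
exact = phi ** np.arange(32) / (1.0 - phi ** 2)
print(np.max(np.abs(h - exact)))  # expect a tiny residual; the rule is spectrally accurate here
```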
3 Low rank structure of the Whittle correction

We now turn to the question of why one should expect the Whittle correction matrix $\bm{E}=\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}-\bm{D}$, where $\bm{D}$ is diagonal with $\bm{D}_{j,j}=S(\omega_j)$, to be of low numerical rank. The primary tool for this investigation will again be asymptotic expansions, and the crucial observation is that for every $k$ and $k'$, $\bm{E}_{k,k'}$ is given by an oscillatory integral with the same high frequency of $2\pi n$. While the analysis below is not actually how we choose to compute and assemble the low-rank approximation in this work, the following derivation provides some idea of why the Whittle correction matrix $\bm{E}$ has low-rank structure, even when $S$ has non-smooth points or $\{h_k\}$ decays slowly. For the duration of this section, we will assume for convenience that $S(\omega)\in\mathcal{C}^{(m)}([-1/2,1/2])$, although the results here can again be extended using directional derivatives if $S$ is not differentiable but is smooth from the left and the right at its rough points.
We begin by adding and subtracting $S(\omega_k)$ and $S(\omega_{k'})$ in the inner integrand of (1.2), which with minor simplification steps can be expressed as
(3.1)
\begin{align*}
\frac{1}{n}\int_{-1/2}^{1/2}D_{n}^{s}(\omega_{k}-\omega)D_{n}^{s}(\omega_{k'}-\omega)S(\omega)\,\mathrm{d}\omega
&=\frac{1}{2n}\int_{-1/2}^{1/2}D_{n}^{s}(\omega_{k}-\omega)D_{n}^{s}(\omega_{k'}-\omega)\left(S(\omega)-S(\omega_{k})\right)\mathrm{d}\omega\\
&\quad+\frac{1}{2n}\int_{-1/2}^{1/2}D_{n}^{s}(\omega_{k}-\omega)D_{n}^{s}(\omega_{k'}-\omega)\left(S(\omega)-S(\omega_{k'})\right)\mathrm{d}\omega\\
&\quad+\frac{S(\omega_{k})+S(\omega_{k'})}{2n}\int_{-1/2}^{1/2}D_{n}^{s}(\omega_{k}-\omega)D_{n}^{s}(\omega_{k'}-\omega)\,\mathrm{d}\omega.
\end{align*}
While unwieldy, this representation of the integral already provides a reasonably direct explanation of several features of $\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}$. Recalling that
\[
\int_{-1/2}^{1/2}D_{n}^{s}(\omega_{k}-\omega)D_{n}^{s}(\omega_{k'}-\omega)\,\mathrm{d}\omega=\begin{cases}n&k=k'\\0&k\neq k',\end{cases}
\]
we see that the third term is precisely the diagonal contribution of $S(\omega_k)$. We will now argue that the first two terms in the above sum correspond to severely rank-deficient matrices. Since they can be analyzed in exactly the same way, we study only the first in detail here.
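The displayed orthogonality relation is easy to confirm numerically. The check below takes $D_n^s(x)=\sin(\pi nx)/\sin(\pi x)$ (the form implied by the rewriting in (3.2) below) and Fourier frequencies $\omega_k=k/n$, and integrates the product with a uniform midpoint rule, which is exact here because the integrand is a periodic trigonometric polynomial:

```python
import numpy as np

def D(x, n):
    # Modified Dirichlet kernel sin(pi n x) / sin(pi x), as in (3.2).
    return np.sin(np.pi * n * x) / np.sin(np.pi * x)

n, M = 8, 64
# Midpoint rule on [-1/2, 1/2]; the nodes avoid the removable singularities,
# and the rule is exact for this trigonometric-polynomial integrand at this M.
om = (np.arange(M) + 0.5) / M - 0.5

def overlap(k, kp):
    return np.sum(D(k / n - om, n) * D(kp / n - om, n)) / M

print(overlap(3, 3), overlap(2, 5))  # n for k = k', numerically zero otherwise
```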
To begin, we slightly rewrite the first term in (3.1 ) as
(3.2)
\[
\int_{-1/2}^{1/2}\sin(\pi n(\omega_{k}-\omega))\sin(\pi n(\omega_{k'}-\omega))\csc(\pi(\omega_{k}-\omega))\csc(\pi(\omega_{k'}-\omega))(S(\omega)-S(\omega_{k}))\,\mathrm{d}\omega.
\]
From here, we partition the domain into
(3.3)
\[
[-1/2,1/2]=\underbrace{[-1/2,\omega_{k}-\gamma]}_{\text{Type I}}\cup\underbrace{B_{\gamma}(\omega_{k})}_{\text{Type II}}\cup\underbrace{[\omega_{k}+\gamma,\omega_{k'}-\gamma]}_{\text{Type I}}\cup\underbrace{B_{\gamma}(\omega_{k'})}_{\text{Type II}}\cup\underbrace{[\omega_{k'}+\gamma,1/2]}_{\text{Type I}},
\]
where $B_{\gamma}(x)=[x-\gamma,x+\gamma]$ and $\gamma$ is some small number chosen to keep distance from the singularities of the cosecant terms. By design, then, on Type I intervals the cosecant terms are simple analytic functions. Defining
(3.4)
\[
\tilde{S}_{n,k,k'}(\omega)=\csc(\pi(\omega_{k}-\omega))\csc(\pi(\omega_{k'}-\omega))(S(\omega)-S(\omega_{k})),
\]
we see that on Type I regions this function is bounded above (assuming that $S$ itself is) and is as smooth as $S$. This motivates the following result, which will be used many times in this section.
Proposition 3.1. If $g(\omega)\in\mathcal{C}^{(m)}([a,b])$ and $g^{(m)}$ is integrable on $[a,b]$, then
\begin{align}
\int_{a}^{b}\sin(\pi n(\omega_{k}-\omega))\sin(\pi n(\omega_{k'}-\omega))\,g(\omega)\,\mathrm{d}\omega=&\tag{3.5}\\
\frac{(-1)^{k-k'}}{2}\int_{a}^{b}g(\omega)\,\mathrm{d}\omega-\frac{1}{2}\Re\left\{e^{i\pi n\frac{k+k'}{2}}\left[e^{2\pi ib}\sum_{j=0}^{m-1}\frac{g^{(j)}(b)}{(-i2\pi n)^{j+1}}-e^{2\pi ia}\sum_{j=0}^{m-1}\frac{g^{(j)}(a)}{(-i2\pi n)^{j+1}}\right]\right\}&+\mathcal{O}(n^{-m-1}),\tag{3.6}
\end{align}
where $\Re(a+ib)=a$ denotes the real part of a complex number.
Proof 3.2. Using the product-to-sum angle formula $\sin(\theta)\sin(\phi)=\tfrac{1}{2}(\cos(\theta-\phi)-\cos(\theta+\phi))$ and standard manipulations of the complex exponential, the left-hand side can be re-written as
\[
\frac{1}{2}\cos\left(\pi n\frac{k-k'}{n}\right)\int_{a}^{b}g(\omega)\,\mathrm{d}\omega+\frac{1}{2}\Re\left\{e^{i\pi n\frac{k+k'}{2}}\int_{a}^{b}e^{-i2\pi n\omega}g(\omega)\,\mathrm{d}\omega\right\}.
\]
But $\cos(\pi n(k-k')/n)=\cos(\pi(k-k'))=(-1)^{k-k'}$, which provides the simplification of the first term. For the second term, just as in the proof of Proposition 2.1, we simply integrate by parts as many times as possible on the now standard-form oscillatory integral to get
\[
\int_{a}^{b}e^{-2\pi in\omega}g(\omega)\,\mathrm{d}\omega=-\sum_{j=0}^{m-1}(-2\pi in)^{-j-1}\left\{g^{(j)}(b)e^{-2\pi inb}-g^{(j)}(a)e^{-2\pi ina}\right\}+\mathcal{O}(n^{-m-1}).
\]
An elementary rearrangement gives the result.
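The leading behavior in Proposition 3.1 is easy to probe numerically: for a smooth $g$ on a Type I interval, the oscillatory integral should match $\tfrac{(-1)^{k-k'}}{2}\int_a^b g(\omega)\,\mathrm{d}\omega$ up to a remainder that shrinks like $1/n$. The sketch below checks this with $g(\omega)=e^{\omega}$, $\omega_k=k/n$, and an arbitrary interval; all of these choices are illustrative, not from the paper:

```python
import numpy as np
from scipy.integrate import simpson

g = np.exp                       # a smooth test function
a, b = 1.0 / 7.0, 1.0 / 3.0      # an arbitrary "Type I" interval
k, kp = 2, 7

def oscillatory_integral(n, num=2 ** 17 + 1):
    # Dense Simpson grid: plenty of points per oscillation even at n = 800.
    w = np.linspace(a, b, num)
    f = np.sin(np.pi * n * (k / n - w)) * np.sin(np.pi * n * (kp / n - w)) * g(w)
    return simpson(f, x=w)

smooth_term = 0.5 * (-1) ** (k - kp) * (np.exp(b) - np.exp(a))
for n in (200, 400, 800):
    err = abs(oscillatory_integral(n) - smooth_term)
    print(n, err)                # remainder stays well below 1/(2n) here
    assert err < 0.5 / n
```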
While this proposition is presented in generality, a reasonable amount of additional simplification of the right-hand side is possible based on the value of $k+k'\pmod{4}$. In particular, we see that $e^{i\pi n(k+k')/2}=i^{k-k'\pmod{4}}$, which means that every complex exponential on the right-hand side of (3.5) simplifies nicely. If we assume that $k=k'\pmod{4}$ so that $e^{i\pi n(k+k')/2}=1$, the asymptotic expansion term can be more concretely expanded to
\begin{align}
\Re&\left\{(\cos(2\pi b)+i\sin(2\pi b))\sum_{j=0}^{m-1}\frac{g^{(j)}(b)}{(-i2\pi n)^{j+1}}-(\cos(2\pi a)+i\sin(2\pi a))\sum_{j=0}^{m-1}\frac{g^{(j)}(a)}{(-i2\pi n)^{j+1}}\right\}\nonumber\\
=&\cos(2\pi b)\sum_{j=0,\,j\text{ odd}}^{m-1}(-1)^{1+\lfloor j/2\rfloor}\frac{g^{(j)}(b)}{(2\pi n)^{j+1}}-\cos(2\pi a)\sum_{j=0,\,j\text{ odd}}^{m-1}(-1)^{1+\lfloor j/2\rfloor}\frac{g^{(j)}(a)}{(2\pi n)^{j+1}}\tag{3.7}\\
&+\sin(2\pi b)\sum_{j=0,\,j\text{ even}}^{m-1}(-1)^{2+\lfloor j/2\rfloor}\frac{g^{(j)}(b)}{(2\pi n)^{j+1}}-\sin(2\pi a)\sum_{j=0,\,j\text{ even}}^{m-1}(-1)^{2+\lfloor j/2\rfloor}\frac{g^{(j)}(a)}{(2\pi n)^{j+1}}.\nonumber
\end{align}
While we do not enumerate the analogous cases for the three other values of $e^{i\pi n(k+k^{\prime})/2}$, the only differences are in the odd/even indexing and the alternating sign inside the sums. The key takeaway is that the entire integral (3.2) can be written as a very smooth integral term ($\int_{a}^{b}g(\omega)\,\mathrm{d}\omega$), four trigonometric functions that are non-oscillatory since they have unit wavelength and $a,b\in[-1/2,1/2]$, with coefficients taken from functions that are themselves smooth (since $g\in\mathcal{C}^{(m)}([a,b])$), and a remainder term depending on high-order derivatives of $g(\omega)$ on $[a,b]$. As it pertains to the Whittle correction matrix $\bm{E}$, the three Type-I regions have endpoints $(a=-1/2,\,b=\omega_{k}-\gamma)$, $(a=\omega_{k}+\gamma,\,b=\omega_{k^{\prime}}-\gamma)$, and $(a=\omega_{k^{\prime}}+\gamma,\,b=1/2)$ respectively, and so the effective contribution of the Type-I integrals to $\bm{E}_{k,k^{\prime}}$ is given by $\int_{[-1/2,1/2]\setminus(B_{\gamma}(\omega_{k})\cup B_{\gamma}(\omega_{k^{\prime}}))}\tilde{S}_{n,k,k^{\prime}}(\omega)\,\mathrm{d}\omega$ plus a total of $12$ trigonometric functions at the three pairs of endpoints $a$ and $b$ with smoothly varying coefficients. Given that the trigonometric functions are analytic and that $g\in\mathcal{C}^{(m)}([-1/2,1/2])$, and since very smooth kernel matrices often exhibit rapid spectral decay (an observation dating back to at least [20] and the Fast Multipole Method (FMM)), we expect the contribution of the Type-I intervals to $\bm{E}$ to have exceptionally fast spectral decay and be severely rank-deficient so long as the remainder from the asymptotic expansion is small. With slightly more effort, we may repeat this analysis on the Type-II regions containing singularities.
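As a concrete sanity check on this boundary-term structure, the following sketch (our own illustration, not code from the paper) verifies the standard integration-by-parts expansion of an oscillatory integral $\int_a^b g(\omega)e^{2\pi in\omega}\,\mathrm{d}\omega$; the sign conventions below are derived for this particular form of the integral and may differ from those in Proposition 3.1.

```python
import numpy as np

def ibp_expansion(g_deriv, a, b, n, m):
    # Truncated integration-by-parts expansion of
    #   I = int_a^b g(w) exp(2*pi*i*n*w) dw
    # using only boundary data:
    #   I ~ sum_{k<m} (-1)^k (g^(k)(b) e^{2 pi i n b} - g^(k)(a) e^{2 pi i n a})
    #                 / (2 pi i n)^(k+1).
    phase = lambda w: np.exp(2j * np.pi * n * w)
    return sum((-1)**k * (g_deriv(k, b) * phase(b) - g_deriv(k, a) * phase(a))
               / (2j * np.pi * n)**(k + 1) for k in range(m))

# g(w) = exp(w): every derivative is exp(w), and the oscillatory integral
# has the closed form (e^{(1+2 pi i n)b} - e^{(1+2 pi i n)a}) / (1+2 pi i n).
g_deriv = lambda k, w: np.exp(w)
a, b, n, m = 0.1, 0.4, 50, 4
exact = (np.exp((1 + 2j*np.pi*n)*b) - np.exp((1 + 2j*np.pi*n)*a)) / (1 + 2j*np.pi*n)
approx = ibp_expansion(g_deriv, a, b, n, m)
print(abs(approx - exact))   # error decays like (2*pi*n)^(-m-1)
```

Only endpoint data of $g$ enters the expansion, which is exactly why the Type-I contribution reduces to a handful of unit-wavelength trigonometric functions with smooth coefficients.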
For the first Type-II region $[\omega_{k}-\gamma,\omega_{k}+\gamma]$, the story is only slightly more complicated. This time we introduce the condensed notation
\[
\tilde{S}_{n,k^{\prime}}(\omega)=\csc(\pi(\omega_{k^{\prime}}-\omega))\left(S(\omega)-S(\omega_{k})\right)
\]
for the bounded and non-oscillatory part of the integrand, and we study
\begin{equation}
\int_{\omega_{k}-\gamma}^{\omega_{k}+\gamma}\sin(\pi n(\omega_{k}-\omega))\sin(\pi n(\omega_{k^{\prime}}-\omega))\csc(\pi(\omega_{k}-\omega))\tilde{S}_{n,k^{\prime}}(\omega)\,\mathrm{d}\omega.
\tag{3.8}
\end{equation}
The important observation to make here is that the singularity presented by $\csc(\pi(\omega_{k}-\omega))$ is simple. Recalling the Laurent expansion $\csc(t)=t^{-1}+\tfrac{t}{6}+\tfrac{7t^{3}}{360}+\dots$, we see that one can simply “subtract off” the singularity and obtain a standard power-series-type representation $\csc(t)-t^{-1}\approx P_{l}(t)$ for $t\in[-\gamma,\gamma]$ and some low-order polynomial $P_{l}$ based on the Laurent series. This motivates the decomposition of (3.8) into
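To make the “subtract off the singularity” step concrete, here is a small check (our own illustration; the paper does not specify a particular $P_{l}$) that the first few Laurent terms of $\csc(t)-t^{-1}$ capture it to near machine precision on a window $[-\gamma,\gamma]$:

```python
import numpy as np

# csc(t) = 1/t + t/6 + 7 t^3/360 + 31 t^5/15120 + O(t^7); subtracting the
# simple pole 1/t leaves a smooth odd function that a low-order polynomial
# built from the Laurent coefficients captures essentially exactly.
def P_l(t):
    return t / 6 + 7 * t**3 / 360 + 31 * t**5 / 15120

gamma = 0.1
t = np.linspace(0.01, gamma, 500)   # stay off t = 0, removable after subtraction
err = np.max(np.abs((1.0 / np.sin(t) - 1.0 / t) - P_l(t)))
print(err)   # ~2e-11, dominated by the omitted O(t^7) Laurent term
```

By oddness of $\csc(t)-t^{-1}$, the behavior on $[-\gamma,0)$ mirrors that on $(0,\gamma]$.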
\begin{align*}
&\int_{\omega_{k}-\gamma}^{\omega_{k}+\gamma}\sin(\pi n(\omega_{k}-\omega))\sin(\pi n(\omega_{k^{\prime}}-\omega))P_{l}(\omega_{k}-\omega)\tilde{S}_{n,k^{\prime}}(\omega)\,\mathrm{d}\omega\\
&\quad+\int_{\omega_{k}-\gamma}^{\omega_{k}+\gamma}\sin(\pi n(\omega_{k}-\omega))\sin(\pi n(\omega_{k^{\prime}}-\omega))\frac{\tilde{S}_{n,k^{\prime}}(\omega)}{\omega_{k}-\omega}\,\mathrm{d}\omega.
\end{align*}
The first term above can again be expanded as a small number of unit-wavelength trigonometric functions via Proposition 3.1 with $g(\omega)=P_{l}(\omega_{k}-\omega)\tilde{S}_{n,k^{\prime}}(\omega)$. Because $\tilde{S}_{n,k^{\prime}}(\omega)$ vanishes at $\omega=\omega_{k}$ and has been assumed to be smooth, the second term is also not singular and is a standard oscillatory integral, so Proposition 3.1 applies once more with $g(\omega)=(\omega_{k}-\omega)^{-1}\tilde{S}_{n,k^{\prime}}(\omega)$. This makes the entire contribution of (3.8) expressible as a linear combination of a small number of unit-wavelength trigonometric functions with smoothly varying coefficients.
The final segment of $[-1/2,1/2]$ to study is the Type-II region $[\omega_{k^{\prime}}-\gamma,\omega_{k^{\prime}}+\gamma]$, where the singularity caused by $\csc(\pi(\omega_{k^{\prime}}-\omega))$ has not already been factored out. Introducing one final piece of condensed integrand notation,
\[
\tilde{S}_{n,k}(\omega)=\csc(\pi(\omega_{k}-\omega))\left(S(\omega)-S(\omega_{k})\right),
\]
we again decompose the contribution of that segment as
\begin{align*}
&\int_{\omega_{k^{\prime}}-\gamma}^{\omega_{k^{\prime}}+\gamma}\sin(\pi n(\omega_{k}-\omega))\sin(\pi n(\omega_{k^{\prime}}-\omega))\csc(\pi(\omega_{k^{\prime}}-\omega))\tilde{S}_{n,k}(\omega)\,\mathrm{d}\omega\\
&\quad=\int_{\omega_{k^{\prime}}-\gamma}^{\omega_{k^{\prime}}+\gamma}\sin(\pi n(\omega_{k}-\omega))\sin(\pi n(\omega_{k^{\prime}}-\omega))P_{l}(\omega_{k^{\prime}}-\omega)\tilde{S}_{n,k}(\omega)\,\mathrm{d}\omega\\
&\qquad+\int_{\omega_{k^{\prime}}-\gamma}^{\omega_{k^{\prime}}+\gamma}\sin(\pi n(\omega_{k}-\omega))\sin(\pi n(\omega_{k^{\prime}}-\omega))\frac{\tilde{S}_{n,k}(\omega)}{\omega_{k^{\prime}}-\omega}\,\mathrm{d}\omega.
\end{align*}
The first term in the divided integral can be handled by Proposition 3.1 just as above. The only new wrinkle is that the second term on the right-hand side still has a singularity. Thankfully, with one more integral splitting we can re-express this last term as
\begin{align*}
&\int_{\omega_{k^{\prime}}-\gamma}^{\omega_{k^{\prime}}+\gamma}\sin(\pi n(\omega_{k}-\omega))\sin(\pi n(\omega_{k^{\prime}}-\omega))\frac{\tilde{S}_{n,k}(\omega)}{\omega_{k^{\prime}}-\omega}\,\mathrm{d}\omega\\
&\quad=\int_{\omega_{k^{\prime}}-\gamma}^{\omega_{k^{\prime}}+\gamma}\sin(\pi n(\omega_{k}-\omega))\sin(\pi n(\omega_{k^{\prime}}-\omega))\frac{\tilde{S}_{n,k}(\omega)-\tilde{S}_{n,k}(\omega_{k^{\prime}})}{\omega_{k^{\prime}}-\omega}\,\mathrm{d}\omega\\
&\qquad+\tilde{S}_{n,k}(\omega_{k^{\prime}})\int_{\omega_{k^{\prime}}-\gamma}^{\omega_{k^{\prime}}+\gamma}\sin(\pi n(\omega_{k}-\omega))\sin(\pi n(\omega_{k^{\prime}}-\omega))\frac{1}{\omega_{k^{\prime}}-\omega}\,\mathrm{d}\omega.
\end{align*}
The first term on the right-hand side is no longer singular because $\tilde{S}_{n,k}(\omega)-\tilde{S}_{n,k}(\omega_{k^{\prime}})$ is smooth and goes to zero at $\omega=\omega_{k^{\prime}}$ faster than $(\omega_{k^{\prime}}-\omega)^{-1}$ blows up, and so one may apply Proposition 3.1. The second integral is the integral of $\sin(\pi n(\omega_{k}-\omega))$ against a bounded function, since $\sin(\pi n(\omega_{k^{\prime}}-\omega))$ is also smooth and goes to zero as $\omega\rightarrow\omega_{k^{\prime}}$, so that the ratio $\sin(\pi n(\omega_{k^{\prime}}-\omega))/(\omega_{k^{\prime}}-\omega)$ remains bounded. It can thus be handled with the even simpler asymptotic expansion analogous to the one used in Proposition 2.1, which again represents it as a small number of smooth functions.
Combining all of these segmented contributions, we conclude that $\bm{E}_{k,k^{\prime}}$ can be represented by a small number of very smooth functions. While ostensibly breaking the integral into five pieces, two of which must themselves be split into two or three additional terms to be suitable for expansion, might be expected to produce so many smooth terms that the matrix is not particularly rank-deficient, we note that all of these trigonometric functions have unit wavelength and are potentially linearly dependent with each other, and that the derivatives of the corresponding function $g$ in each application of Proposition 3.1 are by supposition also well-behaved.
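The claim that unit-wavelength trigonometric terms with smooth coefficients yield severe rank deficiency is easy to illustrate numerically. This toy example (ours, not the paper's) samples the analytic kernel $\cos(2\pi(x_{j}+x_{k}))$, which splits into exactly two products of smooth univariate functions, so the resulting matrix has numerical rank 2 regardless of its size:

```python
import numpy as np

# cos(2*pi*(x_j + x_k)) = cos(2*pi*x_j)cos(2*pi*x_k) - sin(2*pi*x_j)sin(2*pi*x_k),
# so this 500 x 500 matrix has exact rank 2; the SVD confirms it numerically.
n = 500
x = np.linspace(-0.5, 0.5, n)
K = np.cos(2 * np.pi * (x[:, None] + x[None, :]))
s = np.linalg.svd(K, compute_uv=False)
rank = int(np.sum(s > 1e-10 * s[0]))
print(rank)   # 2
```

A sum of a handful of such terms, even with additional smooth coefficient functions, raises the numerical rank only modestly, consistent with the low-rank structure of $\bm{E}$.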
This derivation also illuminates the sources of error in Whittle approximations and the degree of rank-structure in the Whittle correction matrices. For example, if $S$ has rough points where it is continuous but not smooth, then the above domain partitioning must be refined so that Corollary 1 can be applied to restore the accuracy of the asymptotic approximation. This in turn means that $\bm{E}_{k,k^{\prime}}$ will need more asymptotic expansion-sourced terms for the expansion to be accurate. For spectral densities $S(\omega)$ with many such rough points, the number of unique terms in the split expansion can certainly add up to a degree that ruins this rank structure. And if $S(\omega)$ and its derivatives are highly oscillatory, then terms like (3) may not actually yield particularly low-rank matrices despite being smooth.
4 Numerical assembly and action of the Whittle correction

With this theoretical justification for the low-rank structure of the Whittle correction matrix, we now turn to discussing how to actually compute and assemble the matrices $\bm{U}$ and $\bm{V}$ such that $\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}-\bm{D}\approx\bm{U}\bm{V}^{H}$. First, we note that the action of $\bm{\Sigma}$ on vectors can be computed in $\mathcal{O}(n\log n)$ time with the familiar circulant embedding technique [39] once the values $\{h_{k}\}_{k=0}^{n-1}$ have been computed. While we refer readers to the references for a more complete discussion of circulant embedding, we provide a brief summary here. The symmetric Toeplitz matrix $\bm{\Sigma}$ is specified entirely by its first column $[h_{0},h_{1},\dots,h_{n-1}]$. This column can be extended to one of length $2n-1$ given by $\bm{c}=[h_{0},h_{1},\dots,h_{n-1},h_{n-1},h_{n-2},\dots,h_{1}]$, and the Toeplitz matrix assembled from this length-$(2n-1)$ column is in fact circulant and diagonalized by the FFT [19]. Since the FFT can be applied in $\mathcal{O}(n\log n)$ complexity, so can this augmented circulant matrix. Thus, one can compute $\bm{\Sigma}\bm{v}$ for any vector $\bm{v}$ by extracting the first $n$ components of $\bm{C}[\bm{v},\bm{0}_{n-1}]$, where $\bm{C}$ is the circulant matrix built from $\bm{c}$ and $\bm{0}_{n-1}$ denotes $n-1$ zeros appended to $\bm{v}$.
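As a minimal sketch of the embedding just described (our own illustration, with an arbitrary example column $h$), the following performs the symmetric Toeplitz matvec with FFTs and checks it against a dense construction:

```python
import numpy as np

# O(n log n) symmetric-Toeplitz matvec via circulant embedding: extend the
# first column h to the length-(2n-1) circulant generator c, multiply in the
# Fourier domain, and keep the first n entries of the result.
def toeplitz_matvec(h, v):
    n = len(h)
    c = np.concatenate([h, h[-1:0:-1]])      # [h_0..h_{n-1}, h_{n-1}..h_1]
    pad = np.concatenate([v, np.zeros(n - 1)])
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(pad))[:n].real

rng = np.random.default_rng(0)
n = 64
h = 1.0 / (1.0 + np.arange(n))               # example symmetric Toeplitz column
v = rng.standard_normal(n)
Sigma = np.array([[h[abs(j - k)] for k in range(n)] for j in range(n)])
err = np.max(np.abs(toeplitz_matvec(h, v) - Sigma @ v))
print(err)   # agreement to near machine precision
```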
Letting $\mathcal{F}_{n}$ be the FFT of size $n$ (for the moment defined without the “fftshift”), we can thus summarize the action of $\mathcal{F}_{n}\bm{\Sigma}\mathcal{F}_{n}^{H}-\bm{D}$ on a vector $\bm{v}$ with
\begin{equation}
(\mathcal{F}_{n}\bm{\Sigma}\mathcal{F}_{n}^{H}-\bm{D})\bm{v}=\mathcal{F}_{n}\left\{\mathcal{F}_{2n-1}^{H}\left(\left\{\mathcal{F}_{2n-1}\bm{c}\right\}\circ\mathcal{F}_{2n-1}\begin{bmatrix}\mathcal{F}_{n}^{H}\bm{v}\\\bm{0}_{n-1}\end{bmatrix}\right)\right\}_{1:n}-\bm{D}\bm{v}.
\tag{4.1}
\end{equation}
Assuming that $\mathcal{F}_{2n-1}\bm{c}$ is pre-computed, the action of $\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}-\bm{D}$ on $\bm{v}$ thus requires four FFTs: two of size $n$ and two of size $2n-1$.
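As an illustration, the matrix-free application in Equation 4.1 can be sketched with NumPy's FFT routines. The unitary-DFT normalization (\texttt{norm="ortho"}) and the embedding symbol $\bm{c}=(h_0,\ldots,h_{n-1},h_{n-1},\ldots,h_1)$ are assumed conventions for this sketch; the paper's own sign and scaling conventions may differ.

```python
import numpy as np

def toeplitz_matvec(h, x):
    # Sigma @ x via a circulant embedding of size 2n - 1 with symbol
    # c = (h_0, ..., h_{n-1}, h_{n-1}, ..., h_1).  The transform fft(c)
    # is recomputed here for simplicity; pre-computing it leaves only
    # two FFTs of size 2n - 1 per application, as in the text.
    n = len(h)
    c = np.concatenate([h, h[-1:0:-1]])
    pad = np.concatenate([x, np.zeros(n - 1, dtype=complex)])
    return np.fft.ifft(np.fft.fft(c) * np.fft.fft(pad))[:n]

def residual_matvec(h, S_diag, v):
    # (F_n Sigma F_n^H - D) v with the unitary DFT F_n.
    x = np.fft.ifft(v, norm="ortho")   # F_n^H v
    y = toeplitz_matvec(h, x)          # Sigma (F_n^H v)
    return np.fft.fft(y, norm="ortho") - S_diag * v
```

Together with the pre-computed symbol transform, this realizes the four-FFT count noted above: two size-$n$ transforms and two of size $2n-1$.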
With this established, we turn to the process of approximating $\bm{C}$ given only the ability to apply it to vectors efficiently. Randomized low-rank approximation has become an important part of modern numerical linear algebra, and we refer readers to review papers like [21] and the more recent [37] for broad introductions and context. For this work, it suffices to discuss the problem of using randomized algorithms to estimate the range of a matrix. Letting $\bm{A}\in\mathbb{R}^{n\times n}$ denote an arbitrary matrix of rank $r$, [21] shows that the matrix $\bm{Q}$ with orthonormal columns such that
\[
\bm{A}\bm{\Omega}=\bm{Q}\bm{R},
\]
where $\bm{\Omega}\in\mathbb{R}^{n\times(r+p)}$ is a ``sketching''/test matrix, which in this work will be made of i.i.d. standard normal random variables (although other and potentially faster choices are available), is an approximate basis for the column space of $\bm{A}$. The matrix $\bm{Q}$ itself may be obtained by performing a standard QR factorization of the product $\bm{A}\bm{\Omega}$, and $p$ is an oversampling parameter, often picked to be some small number like $p=5$ [21]. From there, one obtains a simple low-rank representation of $\bm{A}$ as
\[
\bm{A}\approx\underbrace{\bm{Q}}_{\bm{U}}\underbrace{\bm{Q}^{H}\bm{A}}_{\bm{V}^{H}}.
\]
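A minimal sketch of this range finder follows; the function names and matvec-callback interface are illustrative choices, not the paper's.

```python
import numpy as np

def randomized_lowrank(apply_A, apply_AH, n, r, p=5, rng=None):
    # Randomized range finder: sample Y = A @ Omega with a Gaussian test
    # matrix, orthonormalize it, then form V^H = Q^H A from a second pass.
    rng = np.random.default_rng() if rng is None else rng
    Omega = rng.standard_normal((n, r + p))
    Q, _ = np.linalg.qr(apply_A(Omega))   # orthonormal basis for range(A Omega)
    VH = apply_AH(Q).conj().T             # Q^H A, computed as (A^H Q)^H
    return Q, VH                          # A ~= Q @ VH
```

For a Hermitian target such as $\bm{C}$, the adjoint callback \texttt{apply\_AH} can simply be the same as \texttt{apply\_A}, so only one fast matvec routine is needed.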
This entire process is summarized in Algorithm 1 .
1. Using a method from Section 2, obtain the autocovariance terms $\left\{h_j\right\}_{j=0}^{n-1}$.
2. Create the diagonal matrix $\bm{D}$ with $\bm{D}_{j,j}=S(\omega_j)$ at the Fourier frequencies $\left\{\omega_j\right\}_{j=1}^{n}$.
3. Draw the sketching matrix $\bm{\Omega}\in\mathbb{R}^{n\times(r+p)}$ with $\bm{\Omega}_{j,k}\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,1)$.
4. Using Equation 4.1, compute $(\mathcal{F}_n\bm{\Sigma}\mathcal{F}_n^{H}-\bm{D})\bm{\Omega}$.
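The steps above can be sketched end-to-end as follows. The AR(1)-type model here, with $h_j=\phi^{|j|}$ and $S(\omega)=(1-\phi^2)/|1-\phi e^{-2\pi i\omega}|^2$, and all numeric choices ($n$, $r$, $p$, $\phi$) are illustrative assumptions for the sketch, not values from the paper.

```python
import numpy as np

# Illustrative run of the algorithm for an assumed AR(1)-type model.
n, r, p, phi = 128, 20, 5, 0.5
rng = np.random.default_rng(0)

# Step 1: autocovariance terms h_0, ..., h_{n-1}.
h = phi ** np.arange(n)

# Step 2: diagonal of D, i.e. S at the Fourier frequencies j/n (no fftshift).
w = np.arange(n) / n
S = (1 - phi**2) / np.abs(1 - phi * np.exp(-2j * np.pi * w)) ** 2

# Step 3: Gaussian sketching matrix.
Omega = rng.standard_normal((n, r + p))

# Step 4: apply C = F_n Sigma F_n^H - D column-wise via circulant embedding.
def apply_C(V):
    c_hat = np.fft.fft(np.concatenate([h, h[-1:0:-1]]))   # pre-computable
    X = np.fft.ifft(V, axis=0, norm="ortho")              # F_n^H V
    pad = np.concatenate([X, np.zeros((n - 1, V.shape[1]))])
    Y = np.fft.ifft(c_hat[:, None] * np.fft.fft(pad, axis=0), axis=0)[:n]
    return np.fft.fft(Y, axis=0, norm="ortho") - S[:, None] * V

# Remaining steps: orthonormalize the sketch, then form V^H = Q^H C using
# the fact that C is Hermitian, so Q^H C = (C Q)^H.
Q, _ = np.linalg.qr(apply_C(Omega))
VH = apply_C(Q).conj().T
# F_n Sigma F_n^H is then approximated by D + Q @ VH.
```

Note that the Hermitian structure of $\bm{C}$ means the second pass ($\bm{Q}^{H}\bm{C}$) reuses the same fast application as the sketch itself.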