Fast Machine-Precision Spectral Likelihoods for Stationary Time Series (2024)

Christopher J. Geoga, Department of Statistics, University of Wisconsin–Madison

Abstract

We provide in this work an algorithm for approximating a very broad class of symmetric Toeplitz matrices to machine precision in $\mathcal{O}(n\log n)$ time with applications to fitting time series models. In particular, for a symmetric Toeplitz matrix $\bm{\Sigma}$ with values $\bm{\Sigma}_{j,k} = h_{|j-k|} = \int_{-1/2}^{1/2} e^{2\pi i|j-k|\omega} S(\omega)\,\mathrm{d}\omega$ where $S(\omega)$ is piecewise smooth, we give an approximation $\mathcal{F}\bm{\Sigma}\mathcal{F}^{H} \approx \bm{D} + \bm{U}\bm{V}^{H}$, where $\mathcal{F}$ is the DFT matrix, $\bm{D}$ is diagonal, and the matrices $\bm{U}$ and $\bm{V}$ are in $\mathbb{C}^{n\times r}$ with $r \ll n$. Studying these matrices in the context of time series, we offer a theoretical explanation of this structure and connect it to existing spectral-domain approximation frameworks. We then give a complete discussion of the numerical method for assembling the approximation and demonstrate its efficiency for improving Whittle-type likelihood approximations, including dramatic examples where a correction of rank $r = 2$ to the standard Whittle approximation increases the accuracy of the log-likelihood approximation from $3$ to $14$ digits for a matrix $\bm{\Sigma} \in \mathbb{R}^{10^5 \times 10^5}$. The method and analysis of this work apply well beyond time series analysis, providing an algorithm for extremely accurate solutions to linear systems with a wide variety of symmetric Toeplitz matrices whose entries are generated by a piecewise smooth $S(\omega)$. The analysis employed here largely depends on asymptotic expansions of oscillatory integrals, and also provides a new perspective on when existing spectral-domain approximation methods for Gaussian log-likelihoods can be particularly problematic.

Keywords: Time series, Spectral density, Asymptotic expansion, Oscillatory integral, Gaussian process

Acknowledgements

The author is grateful to Paul G. Beckman and the two anonymous referees for their thoughtful comments, discussion, and proofreading of the manuscript. He also thanks Lydia Zoells for her careful copyediting work in the revision stage.

1 Introduction

A mean-zero, stationary, and Gaussian time series is a stochastic process $\{Y_t\}_{t\in\mathbb{Z}}$ such that any contiguous collection of measurements $\bm{y} = [Y_{t_0}, Y_{t_0+1}, \dots, Y_{t_0+n-1}]$ with arbitrary $t_0 \in \mathbb{Z}$ is distributed as $\bm{y} \sim \mathcal{N}(\bm{0}, \bm{\Sigma})$, where $\bm{\Sigma}_{j,k} = h_{|j-k|}$ is the autocovariance matrix generated by the autocovariance sequence $\{h_k\}_{k\in\mathbb{Z}}$. A very special property of such covariance matrices for stationary time series is that they are symmetric Toeplitz matrices, meaning that they are constant along sub- and super-diagonal bands. A fundamental and important problem in time series analysis is fitting a parametric model to data, which in this setting means using some parametric family of models for the autocovariance sequence $\{h_k(\bm{\theta})\}_{k\in\mathbb{Z}}$ that specify the distribution $\bm{y} \sim \mathcal{N}(\bm{0}, \bm{\Sigma}_{\bm{\theta}})$ (like, for example, (AR)(I)(MA) process models). The canonical estimator for parameters $\bm{\theta} \in \bm{\Theta}$ from data is the maximum likelihood estimator (MLE), which is computed by minimizing the negative log-likelihood:

\[
\hat{\bm{\theta}}^{\text{MLE}} = \operatorname*{arg\,min}_{\bm{\theta}\in\bm{\Theta}} \ell(\bm{\theta}\,|\,\bm{y}) = \operatorname*{arg\,min}_{\bm{\theta}\in\bm{\Theta}} \frac{1}{2}\left(\log\left|\bm{\Sigma}_{\bm{\theta}}\right| + \bm{y}^{T}\bm{\Sigma}_{\bm{\theta}}^{-1}\bm{y}\right),
\]

where constant terms in the negative log-likelihood $\ell(\bm{\theta}\,|\,\bm{y})$ are suppressed.

It is an elementary fact that autocovariance sequences are positive (semi-)definite, and by Herglotz's theorem one has that

\[
\bm{\Sigma}_{j,k} = h_{|j-k|} = \int_{[-1/2,1/2)} e^{2\pi i|j-k|\omega}\,\mathrm{d}F(\omega),
\]

where $F$ is the spectral distribution function of $\{Y_t\}$ [9]. If $F$ has a density with respect to the Lebesgue measure so that $\mathrm{d}F(\omega) = S(\omega)\,\mathrm{d}\omega$, we call $S(\omega)$ the spectral density function (SDF). An elementary property of spectral densities is that they need only be non-negative, and symmetric about the origin in the case of real-valued processes $Y_t$. Considering that writing parametric families of functions that are nonnegative and symmetric is much easier than writing parametric families of sequences that are positive (semi-)definite, it comes as no surprise that modeling $\{h_k(\bm{\theta})\}_{k\in\mathbb{Z}}$ in terms of the spectral density $S_{\bm{\theta}}(\omega)$ is an attractive and liberating option for practitioners.

Unfortunately, there is no particularly easy way to reformulate $\ell(\bm{\theta}\,|\,\bm{y})$ directly in terms of $S(\omega)$, meaning that computing the true MLE $\hat{\bm{\theta}}^{\text{MLE}}$ requires either knowing the closed-form expression for $\bm{\Sigma}_{j,k} = h_{|j-k|} = \int_{-1/2}^{1/2} e^{2\pi i\omega|j-k|} S(\omega)\,\mathrm{d}\omega$ for all lags $|j-k|$ or directly computing all the integrals numerically, and in both cases, for direct likelihood evaluation, one must in general build $\bm{\Sigma}$ entry-wise and evaluate $\ell(\bm{\theta}\,|\,\bm{y})$ using dense linear algebra. As we will discuss, there are several popular likelihood approximations for $\bm{y}$ in terms of $S_{\bm{\theta}}(\omega)$ that are convenient and computationally expedient, often running in $\mathcal{O}(n\log n)$ complexity compared to the $\mathcal{O}(n^3)$ of a direct likelihood evaluation (another fundamental challenge that motivates many spectral approximations). The most classical and simple of these approximations to $\ell(\bm{\theta}\,|\,\bm{y})$, denoted the Whittle approximation [38], unfortunately yields very biased estimators. Improvements to this method can mitigate the challenging new sources of bias, but still come at the cost of additional variability compared to $\hat{\bm{\theta}}^{\text{MLE}}$. In this work, we introduce a new approximation to $\ell(\bm{\theta}\,|\,\bm{y})$ that retains the $\mathcal{O}(n\log n)$ complexity but can be made effectively exact to double precision with $14$-$15$ correct digits by exploiting piecewise smoothness of spectral densities and special rank structure in the matrix $\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}$, where $\mathcal{F}$ is the discrete Fourier transform (DFT) matrix. As a result of this design, we obtain effectively exact evaluations of the log-likelihood $\ell(\bm{\theta}\,|\,\bm{y})$ and its derivatives, as well as the ability to perform uncertainty quantification on estimates.
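To make the cost of this direct approach concrete, the following sketch evaluates the negative log-likelihood exactly by building $\bm{\Sigma}_{\bm{\theta}}$ entry-wise and factorizing it with a dense Cholesky decomposition. The geometric autocovariance used here is only an illustrative stand-in, and the function names are assumptions of this sketch rather than part of the accompanying software.

```julia
# A minimal sketch of exact, dense likelihood evaluation. The AR(1)-type
# autocovariance `acov` is a placeholder; any closed-form h_k works the same way.
using LinearAlgebra

acov(k, θ) = θ[1] * θ[2]^abs(k)                   # hypothetical parametric h_k, θ = (variance, decay)

function nll_direct(θ, y)
    n = length(y)
    Σ = [acov(j - k, θ) for j in 1:n, k in 1:n]   # dense symmetric Toeplitz matrix
    C = cholesky(Symmetric(Σ))                    # O(n³) factorization
    0.5 * (logdet(C) + dot(y, C \ y))             # constants suppressed as in the text
end

nll_direct((1.0, 0.5), randn(512))
```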

1.1 Whittle approximations and existing work

Let

\[
J_n(\omega) = \frac{1}{\sqrt{n}}\sum_{j=0}^{n-1} e^{-2\pi ij\omega} Y_j
\]

denote the discrete Fourier transform of a time series $\bm{y} = \{Y_0, \dots, Y_{n-1}\}$ at frequency $\omega$. For the entirety of this work, we will restrict focus to $\omega$ at Fourier frequencies $\left\{\omega_j = \tfrac{j - n/2 - 1}{n}\right\}_{j=1}^{n}$, so that one can compute $[J_n(\omega_1), \dots, J_n(\omega_n)]$ in $\mathcal{O}(n\log n)$ time as $\mathcal{F}\bm{y}$ using an FFT, where $\mathcal{F}$ is parameterized in the unitary form with the “fftshift” for convenience as $\mathcal{F} = [n^{-1/2} e^{-2\pi ijk/n}]_{j\in(-n/2):(n/2-1),\, k\in 0:(n-1)}$. Motivated by the elementary observations that $\mathbb{E}|J_n(\omega_k)|^2 \rightarrow_n S(\omega_k)$ and $\mathrm{Cov}(J_n(\omega_k), J_n(\omega_{k'})) \rightarrow_n 0$ (see [9] for an introduction), one can substitute the finite-sample (co)variances with these asymptotic ones to obtain the Whittle approximation
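As a small illustration of this parameterization, the sketch below computes $[J_n(\omega_1),\dots,J_n(\omega_n)]$ with FFTW.jl and checks one entry against the definition; the even $n$ and the variable names are the only assumptions.

```julia
# Sketch: J_n(ω_j) at the shifted Fourier frequencies via the unitary,
# fftshift-ed DFT described above.
using FFTW

n  = 1024                                    # assumed even
y  = randn(n)                                # placeholder series
ωs = ((1:n) .- (n ÷ 2 + 1)) ./ n             # ω_j = (j - n/2 - 1)/n
J  = fftshift(fft(y)) ./ sqrt(n)             # unitary normalization

j  = 17                                      # spot-check one frequency
Jj = sum(y[t + 1] * exp(-2π * im * t * ωs[j]) for t in 0:n-1) / sqrt(n)
abs(J[j] - Jj)                               # should be near machine precision
```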

\[
2\ell^{W}(\bm{\theta}\,|\,\bm{y}) = \sum_{j=1}^{n}\log S_{\bm{\theta}}(\omega_j) + \frac{\left|J_n(\omega_j)\right|^2}{S_{\bm{\theta}}(\omega_j)},
\tag{1.1}
\]

which has this simplified form since the asymptotic covariance matrix for $[J_n(\omega_1), \dots, J_n(\omega_n)]$ is diagonal. Optimizing this approximation instead of $\ell(\bm{\theta}\,|\,\bm{y})$ gives the estimator $\hat{\bm{\theta}}^{W}$. Another common way to think about this approximation is to observe that it effectively approximates $\bm{\Sigma}_{\bm{\theta}}$ with the circulant matrix $\bm{C}$ whose first column $\bm{c}$ has $(\mathcal{F}\bm{c})_j = S(\omega_j)$. This is unfortunately incorrect, as $\bm{\Sigma}$ is almost never exactly circulant, and even if it were, it would not in general be that particular circulant matrix for any finite $n$. As a result of these finite-sample applications of limiting identities, this estimator can exhibit severe bias for finite sample sizes and certain varieties of spectral densities $S(\omega)$; see [33] for a particularly thoughtful and concrete analysis of the sources of bias of this approximation in terms of boundary effects. Yet another way to understand the source of bias with this approximation is to observe that even if $\bm{y} \sim \mathcal{N}(\bm{0}, \bm{\Sigma}_{\bm{\theta}_0})$ with $[\bm{\Sigma}_{\bm{\theta}_0}]_{j,k} = \int_{-1/2}^{1/2} e^{2\pi i\omega|j-k|} S_{\bm{\theta}_0}(\omega)\,\mathrm{d}\omega$, the value $S_{\bm{\theta}_0}(\omega_k)$ is not the variance of $J_n(\omega_k)$ for any finite $n$. And while ignoring covariance structure between values (which the Whittle approximation also does) can in some cases only come at the cost of efficiency, using incorrect variances will almost always come at the cost of bias.
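For concreteness, a minimal sketch of the objective in (1.1) is given below; the AR(1)-type spectral density is purely illustrative and the function names are assumptions of this sketch rather than part of the referenced software.

```julia
# Sketch of the standard Whittle objective (1.1), given a callable S_θ(ω).
using FFTW

function whittle_nll(Sθ, y)
    n  = length(y)
    ωs = ((1:n) .- (n ÷ 2 + 1)) ./ n
    I  = abs2.(fftshift(fft(y))) ./ n        # periodogram |J_n(ω_j)|²
    0.5 * sum(log(Sθ(ω)) + Ij / Sθ(ω) for (ω, Ij) in zip(ωs, I))
end

# hypothetical AR(1)-type spectral density for demonstration
S_ar1(ω; ρ = 0.6, σ2 = 1.0) = σ2 / abs2(1 - ρ * exp(-2π * im * ω))
whittle_nll(ω -> S_ar1(ω), randn(1024))
```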
The exact (co)variances of these DFT coefficients are a classical computation, but because they will be used extensively in this work we provide a terse statement of their derivation here.

Proposition 1.1.

Let $\{Y_t\}_{t=0}^{n-1}$ be a stationary mean-zero time series with spectral density $S(\omega)$ and $J_n(\omega_k)$ defined as above, and define $S_n(\omega_k, \omega_{k'}) = \mathrm{Cov}(J_n(\omega_k), J_n(\omega_{k'}))$. Then

\[
S_n(\omega_k, \omega_{k'}) = \frac{e^{i\frac{n-1}{n}\pi(k-k')}}{n}\int_{-1/2}^{1/2} D^{s}_{n}(\omega_k - \omega)\, D^{s}_{n}(\omega_{k'} - \omega)\, S(\omega)\,\mathrm{d}\omega,
\tag{1.2}
\]

where $D^{s}_{n}(\omega) = \frac{\sin(\pi n\omega)}{\sin(\pi\omega)} = e^{-i(n-1)\pi\omega}\sum_{j=0}^{n-1} e^{2\pi ij\omega}$ is a “shifted” Dirichlet kernel.

Proof 1.2.

Using Herglotz’s theorem, an elementary computation shows that

\begin{align*}
\mathrm{Cov}(J_n(\omega_k), J_n(\omega_{k'})) &= \int_{-1/2}^{1/2}\left(\sum_{j=0}^{n-1} n^{-1/2} e^{2\pi ij(\omega - \omega_k)}\right)\left(\sum_{j=0}^{n-1} n^{-1/2} e^{2\pi ij(\omega_{k'} - \omega)}\right) S(\omega)\,\mathrm{d}\omega \\
&= \frac{1}{n}\int_{-1/2}^{1/2}\left(e^{-i(n-1)\pi\omega_k}\frac{\sin(\pi n(\omega_k - \omega))}{\sin(\pi(\omega_k - \omega))}\right)\left(e^{i(n-1)\pi\omega_{k'}}\frac{\sin(\pi n(\omega_{k'} - \omega))}{\sin(\pi(\omega_{k'} - \omega))}\right) S(\omega)\,\mathrm{d}\omega.
\end{align*}

Applying the definition of $D^{s}_{n}(\omega)$, collecting terms, and bringing the complex exponential prefactor on the trigonometric terms outside the integral gives the result.

A particularly elegant proposition for dealing with the bias of $\hat{\bm{\theta}}^{W}$ is given in [35]: instead of using $S_{\bm{\theta}}(\omega_j)$ in $\ell^{W}$, use the correct finite-sample marginal variances $S_n(\omega_j, \omega_j)$ in their place. In doing so, one immediately obtains a new likelihood approximation whose gradient gives unbiased estimating equations (UEEs) for parameters $\bm{\theta}$ [35, 22]. This estimator, called the debiased Whittle estimator, is given as

\[
\hat{\bm{\theta}}^{DW} = \operatorname*{arg\,min}_{\bm{\theta}\in\bm{\Theta}} \ell^{DW}(\bm{\theta}\,|\,\bm{y}) = \operatorname*{arg\,min}_{\bm{\theta}\in\bm{\Theta}} \frac{1}{2}\left(\sum_{j=1}^{n}\log S_{n,\bm{\theta}}(\omega_j, \omega_j) + \frac{\left|J_n(\omega_j)\right|^2}{S_{n,\bm{\theta}}(\omega_j, \omega_j)}\right),
\]

where the additional subscript $\bm{\theta}$ in $S_{n,\bm{\theta}}$ is included simply to reinforce the dependence of this function on (varying) parameters $\bm{\theta}$ and is independent of the subscript $n$, which is used to signify the finite-sample correction. The estimator $\hat{\bm{\theta}}^{DW}$ now only comes at the cost of additional variance from ignoring the correlation of $J_n(\omega_k)$ with $J_n(\omega_{k'})$, and serves as a useful point of comparison with the estimator proposed here, as it better isolates the additional variance of Whittle-type estimators compared to $\hat{\bm{\theta}}^{\text{MLE}}$.

While the standard Whittle approximation requires a pre-computed FFT of the data, the debiased Whittle approximation requires one FFT per evaluation of $\ell^{DW}(\bm{\theta}\,|\,\bm{y})$, since one can compute $\{S_n(\omega_j, \omega_j)\}_{j=1}^{n}$ in the time domain using the classical identity that

\[
S_n(\omega_j, \omega_j) = 2\,\mathrm{Re}\left\{\sum_{h=0}^{n-1}\left(1 - n^{-1}h\right) h_h\, e^{-2\pi i\omega_j h}\right\} - h_0.
\tag{1.3}
\]

In [35] it is noted that this identity is useful to avoid numerical integration. While this is true, it is also quite inconvenient because it requires knowing the covariance function $\{h_k\}$ as well as the spectral density $S(\omega)$. Considering that a large part of the point of resorting to a Whittle-type approximation is to write a parametric spectral density instead of a covariance function, this is a potentially serious inconvenience. In Section 2, in the process of building up to our own approximation, we will discuss two performant and simple numerical integration strategies to relax this requirement and also make this method more generally available to practitioners.
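When the autocovariance sequence is available, the identity (1.3) can be evaluated at every Fourier frequency with a single FFT of the triangle-weighted autocovariances, which is what makes the one-FFT-per-evaluation cost quoted above possible. The sketch below illustrates this and the resulting debiased Whittle objective; the function names are illustrative only.

```julia
# Sketch: finite-sample variances S_n(ω_j, ω_j) via (1.3), then the debiased
# Whittle objective. Assumes h[k+1] = h_k for k = 0, ..., n-1.
using FFTW

function expected_periodogram(h::AbstractVector)
    n = length(h)
    w = (1 .- (0:n-1) ./ n) .* h             # (1 - k/n) h_k
    2 .* real.(fftshift(fft(w))) .- h[1]     # (1.3) at every ω_j in one FFT
end

function debiased_whittle_nll(h, y)
    I  = abs2.(fftshift(fft(y))) ./ length(y)    # periodogram
    Sn = expected_periodogram(h)
    0.5 * sum(log.(Sn) .+ I ./ Sn)
end
```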

1.2 Our method

The approximation we introduce here is a continuation of such spectrally motivated likelihood methods. Letting $\bm{D}$ denote the diagonal matrix with values $\{S(\omega_j)\}_{j=1}^{n}$, the fundamental observation of this work is that for the very broad class of piecewise-smooth spectral densities with a small number of rough points (including basically all standard parametric families, such as (AR)(I)(MA), Matérn, rational quadratic, power law, and other models), we have that

\[
\mathcal{F}\bm{\Sigma}\mathcal{F}^{H} \approx \bm{D} + \bm{U}\bm{V}^{H} = \bm{D} + \bm{E},
\tag{1.4}
\]

where $\bm{U}$ and $\bm{V}$ are in $\mathbb{C}^{n\times r}$ with $r \ll n$. The standard Whittle approximation views that low-rank update as being exactly zero, and in this light our method can be viewed as a “corrected” Whittle approximation. The key observation that makes working with this approximation convenient is that the “Whittle correction” matrix $\bm{E} = \mathcal{F}\bm{\Sigma}\mathcal{F}^{H} - \bm{D}$, along with being severely rank-deficient in many settings, can be applied to a vector in $\mathcal{O}(n\log n)$ complexity via circulant embedding of Toeplitz matrices, and so one can utilize the methods of randomized low-rank approximation [21] to assemble $\bm{U}$ and $\bm{V}$ efficiently and scalably with implicit tools. Counter-examples of models where $\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}$ does not have this structure are rare in practice, as most popular and natural models for spectral densities $S(\omega)$ are smooth except at possibly a modest number of locations. The category of models that likely will not be amenable to this structure, as will be discussed at length in Section 3, are those where $S(\omega)$ is either highly oscillatory or has many rough points. Spectral densities like $S(\omega) = \gamma\,\mathrm{sinc}(\gamma\omega)^2$ or $S(\omega) = \sum_{j=1}^{R}\alpha_j e^{-\theta|\omega - \omega_j|}$ for large $\gamma$ or $R$, respectively, are good examples of models for which the rank $r$ required for high accuracy is sufficiently large that the method is not practical.
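The fast application of $\bm{\Sigma}$ (and hence of $\bm{E}$) rests on the classical circulant embedding of a Toeplitz matrix. A minimal sketch of that matrix-vector product is below, assuming only that the first column $h_0, \dots, h_{n-1}$ is available.

```julia
# Sketch: y = Σ x in O(n log n), where Σ is symmetric Toeplitz with first column h.
using FFTW

function toeplitz_matvec(h::AbstractVector, x::AbstractVector)
    n  = length(h)
    c  = vcat(h, 0, reverse(h[2:end]))       # first column of a 2n-circulant embedding
    λ  = fft(c)                              # eigenvalues of that circulant matrix
    xe = vcat(x, zeros(eltype(x), n))        # zero-padded input
    y  = ifft(λ .* fft(xe))[1:n]             # leading n×n block gives the Toeplitz product
    eltype(x) <: Real ? real.(y) : y
end
```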

Outside of these pathological cases, despite having the same quasilinear runtime complexity, this representation can be exact to computer precision for very small $r$ (sometimes as small as $r = 2$). As one might expect, this does come at the cost of a more expensive prefactor. If the standard Whittle approximation requires one pre-computed FFT of the data and subsequently runs at linear complexity, and the debiased Whittle approximation requires one additional FFT per evaluation due to computation of the terms in (1.3), our approximation requires $\mathcal{O}(3r)$ FFTs to assemble (although a simpler implementation using $\mathcal{O}(4r)$ FFTs is used in the computations of this work; a software package and scripts for all computations in this work are available at https://github.com/cgeoga/SpectralEstimators.jl). For a spectral density that is neither particularly well- nor poorly-behaved, a reasonable expectation for an $r$ that gives all $14$ significant digits of $\ell(\bm{\theta}\,|\,\bm{y})$ is around $r \approx 100$, and so for full precision of the likelihood this method can require several hundred FFTs. But considering how fast FFTs are, the prefactor cost for this method is sufficiently low that it compares favorably with other high-accuracy alternatives.
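To make the FFT accounting above concrete, the following sketch assembles $\bm{U}$ and $\bm{V}$ with a plain randomized range finder in the style of [21], using only matrix-vector products with $\bm{E} = \mathcal{F}\bm{\Sigma}\mathcal{F}^{H} - \bm{D}$ (via the Toeplitz product sketched earlier). This is a simplified illustration rather than the implementation used in this work's experiments, and the rank and oversampling choices are assumptions.

```julia
# Sketch: randomized assembly of the low-rank "Whittle correction" E ≈ U V'.
# Requires the toeplitz_matvec sketch above.
using FFTW, LinearAlgebra

Fapply(v)  = fftshift(fft(v)) ./ sqrt(length(v))     # unitary, fftshift-ed DFT
FHapply(v) = ifft(ifftshift(v)) .* sqrt(length(v))   # its adjoint (= inverse)

function whittle_correction(h, S, n, r; oversample = 10)
    ωs = ((1:n) .- (n ÷ 2 + 1)) ./ n
    d  = S.(ωs)                                      # diagonal of D
    Ev(v) = Fapply(toeplitz_matvec(h, FHapply(v))) .- d .* v   # one product with E
    Ω = randn(ComplexF64, n, r + oversample)
    Y = reduce(hcat, Ev(Ω[:, j]) for j in 1:size(Ω, 2))
    Q = Matrix(qr(Y).Q)                              # orthonormal basis for the range
    V = reduce(hcat, Ev(Q[:, j]) for j in 1:size(Q, 2))  # E is Hermitian, so V = E Q
    Q, V                                             # E ≈ Q * V'
end
```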

Another material advantage of this method over other spectral approximations is that it can be used to compute uncertainties for estimators. A basic fact about MLEs is that, under sufficient regularity conditions, $\bm{I}(\bm{\theta})^{-1/2}(\hat{\bm{\theta}}^{\text{MLE}} - \bm{\theta}^{\text{true}}) \rightsquigarrow \mathcal{N}(\bm{0}, \bm{\mathcal{I}})$, where $\bm{I}(\bm{\theta})$ is the expected Fisher information matrix that will be introduced and discussed at length later in this work, although in some settings it may actually be preferable to use the “observed” information matrix $H\ell(\bm{\theta}\,|\,\bm{y})\,|_{\bm{\theta} = \hat{\bm{\theta}}^{\text{MLE}}}$ [14]. But when one is obtaining an estimator by optimizing some function $\tilde{\ell}(\bm{\theta})$ other than the true log-likelihood, the Hessian of $\tilde{\ell}$ does not have this interpretation. The fundamental property an approximation $\tilde{\ell}(\bm{\theta})$ needs to satisfy to provide a statistically useful estimator is that it provides unbiased estimating equations (UEEs), so that $\mathbb{E}_{\bm{\theta}_0}[\nabla\tilde{\ell}(\bm{\theta}\,|\,\bm{y})\,|_{\bm{\theta} = \bm{\theta}_0}] = \bm{0}$ for any $\bm{\theta}_0$ [22]. But there are many such $\tilde{\ell}$ functions that satisfy this property and are not good pointwise approximations to $\ell$. And in those cases, one certainly cannot expect that the Hessian of $\tilde{\ell}$ at $\hat{\bm{\theta}}^{\text{MLE}}$ resembles the Hessian of $\ell$ at those values. Approximations like the debiased Whittle-induced $\tilde{\ell}$ fall into this category. The method of this work, however, can be used to obtain both estimators and approximate uncertainties via its Hessian matrix, since it provides a fully pointwise-accurate approximation to $\ell(\bm{\theta})$ for all $\bm{\theta}$.

There is a significant body of literature on the more general problem of approximating covariance matrices for the purpose of estimation problems. Among the most accurate varieties of approximations in this space is the use of hierarchical matrices, which employ rank structure of matrix sub-blocks in a way that provides high accuracy but retains good runtime complexity. These methods have been applied to the Gaussian process problem in a variety of ways [8, 1, 15, 28, 26, 17], and unlike the tools proposed in this work they are not specialized to one-dimensional gridded measurements. Similarly, tools specific to Toeplitz matrices that exploit rank structure in off-diagonal blocks of $\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}$ have been proposed for fast direct linear solvers [27, 10]. The downside to these methods is that the algorithms are often quite complicated, and more domain-specific quantities in statistical computing, such as log-determinants, derivatives with respect to model parameters, and certain matrix-matrix products, are more involved to compute and are often computed to lower accuracy [3, 32, 28, 17]. Other popular methods in this space that provide less accurate likelihood approximations in the pointwise sense but are often easier to implement and analyze include sparse approximations to $\bm{\Sigma}^{-1}$ [30, 25, 29, 34, 24], direct low-rank approximation methods [11], and matrix tapering [16]. The method presented here fits into this body of work as a specialized method for stationary time series data. In exchange for that specialization, however, it achieves the crushing accuracy of hierarchical matrix methods in a much simpler and more tractable approximation format, and permits the fast and accurate computation of domain-specific quantities like log-determinants and derivatives of the Gaussian log-likelihood.

1.3 Outline

In the next several sections, we will introduce tools for implicitly assembling and working with matrices of the form (1.4). We start with a discussion of using quadrature and asymptotic expansions to efficiently and accurately work with $\bm{\Sigma}$ using only the spectral density $S(\omega)$, and we then provide a discussion of (i) why the perturbation term in (1.4) should be low-rank, which also offers a new theoretical perspective on when Whittle approximations are particularly problematic, and (ii) the details of assembling the low-rank $\bm{U}\bm{V}^{H}$ entirely from fast matrix-vector products. We then close by providing several tests and benchmarks to demonstrate the speed and accuracy of the method.

2 Oscillatory integration of spectral densities

As discussed in the previous section, creating the low-rank approximation $\mathcal{F}\bm{\Sigma}\mathcal{F}^{H} \approx \bm{D} + \bm{U}\bm{V}^{H}$ requires working with $\bm{\Sigma}$, the Toeplitz matrix with values $\bm{\Sigma}_{j,k} = h_{|j-k|}$, where $\{h_k\}$ is the autocovariance sequence of the time series. Unlike in [35], where both kernel and spectral density values are used to avoid numerical integration, we will now discuss strategies for efficiently and accurately computing integrals of the form $h_k = \int_{-1/2}^{1/2} S(\omega) e^{2\pi ik\omega}\,\mathrm{d}\omega$ for $k \in 0, \dots, n-1$, so that one can create this factorization given only the spectral density. Let us briefly review the challenges of such a task. First, the standard method for obtaining the values $\{h_k\}$ using the trapezoidal rule via the FFT may require a large number of nodes for even moderate accuracy if, for example, $S(\omega)$ has rough points (like how $S(\omega) = e^{-|\omega|}$ has no derivatives at the origin). A second common issue arises if $S(\omega)$ is not periodic at its endpoints, which limits the convergence rate of the trapezoidal rule to its baseline rate of $\mathcal{O}(m^{-2})$ for $m$ nodes for general non-smooth and/or non-periodic integrands. As a final complication, by the Shannon-Nyquist sampling theorem, one requires at least $\mathcal{O}(k)$ quadrature nodes to resolve the oscillations of $e^{2\pi ik\omega}$ on $[-1/2, 1/2]$, and so any direct summation method to obtain $\{h_k\}_{k=0}^{n-1}$ would scale at least as $\mathcal{O}(n^2)$.

To bypass these issues, we propose the use of adaptive quadrature for a fixed number of lags $k$, but then a transition to the use of asymptotic expansions for $h_k$ at lags greater than some $k_0$. For integrands that are not too oscillatory, standard adaptive integration tools can be very efficient and capable for any piecewise-smooth integrand. In this work, the Julia language package QuadGK.jl [7, 23] was used, although many other tools would likely have worked equally well. Briefly, Gaussian quadrature is a tool for achieving high-order accuracy in the numerical integration of smooth functions. It gives approximations to integrals as $\int_a^b f(x)\,\mathrm{d}x \approx \sum_{j=1}^{M}\alpha_j f(x_j)$ that are exact for polynomials up to order $2M - 1$, where $\{\alpha_j\}_{j=1}^{M}$ are weights and $\{x_j\}_{j=1}^{M}$ are nodes. In the case of $[a, b] = [-1, 1]$, the Legendre polynomials can be used to obtain the weights and nodes and achieve an error rate of $\mathcal{O}(M^{-2m-1})$ for functions $f \in \mathcal{C}^{(m)}([a, b])$ [18, 36].
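A minimal sketch of this quadrature step with QuadGK.jl is below. The spectral density is a smooth placeholder, and any interior rough points of $S$ can be passed as extra breakpoints to quadgk (as the origin is here) so that the adaptive rule works on piecewise-smooth panels.

```julia
# Sketch: h_k = ∫ S(ω) exp(2πikω) dω by adaptive Gauss-Kronrod quadrature,
# suitable for moderate lags k before switching to the asymptotic expansion.
using QuadGK

S(ω) = inv(1 + (2π * ω)^2)                   # smooth placeholder density

function acov_quadgk(S, k; rtol = 1e-13)
    val, _ = quadgk(ω -> S(ω) * exp(2π * im * k * ω), -1/2, 0, 1/2; rtol = rtol)
    real(val)                                # h_k is real for a symmetric S
end

acov_quadgk(S, 25)
```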

More interesting, however, is the discussion of methods for accurately and efficiently computing $h_k$ when $k$ is large. Unlike traditional integration tools, asymptotic expansion-type methods get more accurate as $k$ increases. As in the introduction, the following result is not new (see [12] for a comprehensive discussion), but we state it in the specific form that is useful for this work since it will be referred to frequently. Since the idea of the proof is also used repeatedly, we again provide it here.

Proposition 2.1.

Let $S(\omega) \in \mathcal{C}^{(m)}([-1/2, 1/2])$ with $\int_{-1/2}^{1/2}|S^{(m)}(\omega)|\,\mathrm{d}\omega < C < \infty$. Then

\[
\int_{-1/2}^{1/2} S(\omega) e^{2\pi ik\omega}\,\mathrm{d}\omega = -\sum_{j=0}^{m-1}(-2\pi ik)^{-(j+1)}\left(S^{(j)}(1/2)e^{\pi ik} - S^{(j)}(-1/2)e^{-\pi ik}\right) + \mathcal{O}(k^{-m-1}).
\]
Proof 2.2.

Simply apply integration by parts as many times as possible:

\begin{align*}
\int_{-1/2}^{1/2} S(\omega) e^{2\pi ik\omega}\,\mathrm{d}\omega &= \frac{S(1/2)e^{\pi ik} - S(-1/2)e^{-\pi ik}}{2\pi ik} - (2\pi ik)^{-1}\int_{-1/2}^{1/2} S'(\omega) e^{2\pi ik\omega}\,\mathrm{d}\omega \\
&\;\;\vdots \\
&= -\sum_{j=0}^{m-1}(-2\pi ik)^{-(j+1)}\left(S^{(j)}(1/2)e^{\pi ik} - S^{(j)}(-1/2)e^{-\pi ik}\right) \\
&\qquad + (-2\pi ik)^{-m}\int_{-1/2}^{1/2} S^{(m)}(\omega) e^{2\pi i\omega k}\,\mathrm{d}\omega.
\end{align*}

Remarkably, evaluating this expansion at any lag $k$ requires only a few derivatives of $S(\omega)$ at the endpoints of the domain. For this reason, with this tool the tail of $\{h_k\}$ at high lags $k$ actually becomes the fastest part of the sequence to evaluate. And while the supposition of the above proposition that $S \in \mathcal{C}^{(m)}([-1/2,1/2])$ is obviously restrictive, a simple observation extends the result to a much broader class of functions.
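As a concrete illustration of how these endpoint terms combine (this is simply a specialization of the proposition, not a result quoted from elsewhere in this work): if $S$ is even and belongs to $\mathcal{C}^{(m)}([-1/2,1/2])$, then $S^{(j)}(-1/2) = (-1)^{j}S^{(j)}(1/2)$ and $e^{\pm\pi i k} = (-1)^{k}$, so the even-order endpoint contributions cancel in pairs and the expansion collapses to
\[
h_k = 2(-1)^{k}\sum_{p=1}^{\lfloor m/2\rfloor}\frac{(-1)^{p+1}\,S^{(2p-1)}(1/2)}{(2\pi k)^{2p}} + \mathcal{O}(k^{-m-1}),
\]
whose leading term $2(-1)^{k}S'(1/2)/(2\pi k)^{2}$ already explains the familiar alternating-sign, $\mathcal{O}(k^{-2})$ tail of autocovariance sequences whose spectral densities have a nonzero derivative at $\omega = 1/2$.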

Corollary 2.3.

Let $S(\omega) \in \mathcal{C}^{(m)}([-1/2,1/2]\setminus\{\omega_1^{r},\dots,\omega_L^{r}\})$, so that it is smooth except at the locations $\{\omega_1^{r},\dots,\omega_L^{r}\}$ (with the convention that $\omega_1^{r}=-1/2$ and $\omega_L^{r}=1/2$, so that the rough points and the interval endpoints together delimit the panels), and further assume that at each $\omega_l^{r}$ it has at least $m$ directional derivatives from both the left (denoted $S^{(j-)}$) and the right ($S^{(j+)}$) for $l = 1,\dots,L$. Then

\begin{align*}
\int_{-1/2}^{1/2} S(\omega)e^{2\pi i k\omega}\,\mathrm{d}\omega
&= -\sum_{l=1}^{L-1}\sum_{j=0}^{m-1}(-2\pi i k)^{-(j+1)}\left(S^{(j-)}(\omega_{l+1}^{r})e^{2\pi i k\omega_{l+1}^{r}} - S^{(j+)}(\omega_{l}^{r})e^{2\pi i k\omega_{l}^{r}}\right) \\
&\quad+\mathcal{O}(k^{-m-1}).
\end{align*}

This corollary, which is proven by simply breaking the domain $[-1/2,1/2]$ into segments whose endpoints are the $\omega_l^{r}$ and applying the above argument to each segment, means that this asymptotic expansion approach can be used to accurately calculate $h_k$ at very large $k$ for any spectral density that is merely piecewise smooth. The prefactor on the error term in this setting is naturally larger than in the setting of Proposition 2.1. Nonetheless, the convergence rates are on our side. For example, if $k = 3\,000$ and one uses five derivatives, then the error term in the simple case is $C\,(6\,000\pi)^{-6} \approx C \cdot 10^{-26}$ for some constant $C$. For a spectral density like $S(\omega) = 10e^{-10|\omega|}$, an example that will be studied extensively in later sections because it is well-behaved except at the origin, we have $h_{3000} \approx 5\cdot 10^{-7}$. For that particular function, then, this bound indicates that unless $C$ is quite large one can reasonably expect a full $14$--$15$ digits of accuracy. For functions with few rough points, this evaluation method only requires derivatives of $S(\omega)$ at a few locations, and the expansion can be evaluated in just a few nanoseconds on a modern CPU once those derivatives have been pre-computed. Combined with adaptive quadrature at the sufficiently low lags $k$ for which the expansion is not yet accurate, this gives $\{h_k\}_{k=0}^{n-1}$ at all lags quickly and accurately.
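To make this recipe concrete, below is a minimal sketch (ours, not the implementation described later in this work) of the split expansion applied to the running example $S(\omega) = 10e^{-10|\omega|}$, whose only rough points are $\{-1/2, 0, 1/2\}$ and for which the exact autocovariances $h_k = 200\bigl(1 - e^{-5}(-1)^{k}\bigr)/(100 + 4\pi^2k^2)$ are available as a check.

```python
import numpy as np

# A minimal sketch (not this paper's code) of the split endpoint expansion from
# Corollary 2.3, applied to S(w) = 10 exp(-10|w|). Each smooth panel (a, b)
# carries the one-sided derivatives S^{(j+)}(a) and S^{(j-)}(b).
def hk_split_expansion(k, panels, m=5):
    acc = 0.0 + 0.0j
    for a, b, d_a, d_b in panels:
        for j in range(m):
            coef = (-2.0 * np.pi * 1j * k) ** (-(j + 1))
            acc -= coef * (d_b[j] * np.exp(2j * np.pi * k * b)
                           - d_a[j] * np.exp(2j * np.pi * k * a))
    return acc.real  # h_k is real since S is even

m = 5
d_right = [10.0 * (-10.0) ** j for j in range(m)]   # derivatives of 10 e^{-10 w} at w = 0+
d_left  = [10.0 * (+10.0) ** j for j in range(m)]   # derivatives of 10 e^{+10 w} at w = 0-
panels = [(-0.5, 0.0, [np.exp(-5.0) * d for d in d_left],  d_left),
          ( 0.0, 0.5, d_right, [np.exp(-5.0) * d for d in d_right])]

k = 3000
h_expn  = hk_split_expansion(k, panels, m)
h_exact = 200.0 * (1.0 - np.exp(-5.0) * (-1) ** k) / (100.0 + 4.0 * np.pi ** 2 * k ** 2)
print(h_expn, h_exact)   # agree to roughly 13 significant digits at this lag
```

In a full implementation one would use the expansion only beyond some modest cutoff lag and fall back on adaptive quadrature below it, exactly as described above.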

An alternative way to compute $\{h_k\}_{k=0}^{n-1}$ for potentially large $n$ in $\mathcal{O}(n\log n)$ time is to use nonuniform fast Fourier transforms (NUFFTs) and Gauss--Legendre quadrature to evaluate $h_k$ at all lags [13, 5]. While the traditional FFT algorithm depends critically on measurements lying on a regular grid and exploits the symmetries that such a grid creates, NUFFTs are fast algorithms for evaluating sums of complex exponentials at potentially irregular locations and target frequencies in quasilinear runtime. Per the discussion above, computing $\{h_k\}_{k=0}^{n-1}$ requires $M = \mathcal{O}(n)$ quadrature nodes (to resolve the highest frequencies) and naturally must be evaluated at $n$ target lags, so naive computation of those integrals would require $\mathcal{O}(n^2)$ work. With the NUFFT, however, the sums can be computed as a matrix--vector product

\[
h_k = \sum_{j=1}^{M} \alpha_j e^{2\pi i k \omega_j^{q}}\, S(\omega_j^{q}), \qquad k = 0,\dots,n-1,
\]

in $\mathcal{O}(n\log n)$ time, where $\omega_j^{q}$ denotes the $j$-th quadrature node and $\alpha_j$ the corresponding weight. This method, while much more expensive than using asymptotic expansions, has the advantage that it does not require computing any derivatives of $S$ to evaluate the sequence $\{h_k\}_{k=0}^{n-1}$. Because of this, it can be more accurate for SDFs $S(\omega)$ whose higher-order derivatives become very large or are for some other reason numerically unstable to compute. With that said, the accuracy of Gauss--Legendre quadrature depends heavily on the smoothness of the integrand, so we still recommend splitting the domain $[-1/2,1/2)$ into panels whose endpoints include the rough points of $S(\omega)$, exactly as in the split asymptotic expansion. If $S(\omega)$ is amenable to the split asymptotic expansion method and $n$ is large, the asymptotic expansion approach will be significantly faster. But if $S(\omega)$ has properties that make working with higher-order derivatives infeasible, this NUFFT-based method is an attractive alternative that still lets practitioners work exclusively with $S(\omega)$ when evaluating the log-likelihood while retaining $\mathcal{O}(n\log n)$ complexity. For a much more detailed discussion of numerical Fourier integration of spectral densities and the quadrature-based tools introduced here for doing it quickly, we refer readers to [6].
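As a small illustration of the quadrature route (again ours, with a deliberately generous number of nodes rather than a carefully tuned one), the sums can be formed densely for modest $n$ and checked against the closed form above; for large $n$ the identical nodes, weights, and samples of $S$ would simply be handed to a type-1 NUFFT library (e.g., FINUFFT) to obtain the quasilinear runtime.

```python
import numpy as np

# Sketch of the Gauss-Legendre quadrature route for S(w) = 10 exp(-10|w|):
# split [-1/2, 1/2] at the rough point w = 0, put Gauss-Legendre nodes on each
# panel, and evaluate h_k = sum_j alpha_j S(w_j) exp(2 pi i k w_j). Here the
# sums are formed densely; a NUFFT would compute the same sums in O(n log n).
def gl_panel(a, b, q):
    x, w = np.polynomial.legendre.leggauss(q)      # nodes/weights on [-1, 1]
    return 0.5 * (b - a) * x + 0.5 * (a + b), 0.5 * (b - a) * w

n = 512
S = lambda w: 10.0 * np.exp(-10.0 * np.abs(w))
x1, a1 = gl_panel(-0.5, 0.0, n)                    # n nodes per panel is ample here
x2, a2 = gl_panel( 0.0, 0.5, n)
wq, aq = np.concatenate([x1, x2]), np.concatenate([a1, a2])

k = np.arange(n)
h = (np.exp(2j * np.pi * np.outer(k, wq)) @ (aq * S(wq))).real

h_exact = 200.0 * (1.0 - np.exp(-5.0) * (-1.0) ** k) / (100.0 + 4.0 * np.pi ** 2 * k ** 2)
print(np.max(np.abs(h - h_exact)))                 # ~1e-14 or better with this many nodes
```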

3 Low rank structure of the Whittle correction

We now turn to the question of why one should expect the Whittle correction matrix $\bm{E} = \mathcal{F}\bm{\Sigma}\mathcal{F}^{H} - \bm{D}$, where $\bm{D}$ is diagonal with $\bm{D}_{j,j} = S(\omega_j)$, to be of low numerical rank. The primary tool for this investigation will again be asymptotic expansions, and the crucial observation is that for every $k$ and $k'$, $\bm{E}_{k,k'}$ is given by an oscillatory integral with the same high frequency of $2\pi n$. While the analysis below is not how we actually choose to compute and assemble this low-rank approximation in this work, the following derivation provides some intuition for why the Whittle correction matrix $\bm{E}$ has low-rank structure, even when $S$ has non-smooth points or $\{h_k\}$ decays slowly. For the duration of this section we will assume for convenience that $S(\omega) \in \mathcal{C}^{(m)}([-1/2,1/2])$, although the results here can again be extended using directional derivatives if $S$ is not differentiable but is smooth from the left and the right at its rough points.

We begin by adding and subtracting $S(\omega_k)$ and $S(\omega_{k'})$ in the inner integrand of (1.2), which after minor simplification can be expressed as

\begin{align*}
\frac{1}{n}\int_{-1/2}^{1/2} & D_n^{s}(\omega_k-\omega)\,D_n^{s}(\omega_{k'}-\omega)\,S(\omega)\,\mathrm{d}\omega \tag{3.1}\\
&= \frac{1}{2n}\int_{-1/2}^{1/2} D_n^{s}(\omega_k-\omega)\,D_n^{s}(\omega_{k'}-\omega)\,\bigl(S(\omega)-S(\omega_k)\bigr)\,\mathrm{d}\omega \\
&\quad+ \frac{1}{2n}\int_{-1/2}^{1/2} D_n^{s}(\omega_k-\omega)\,D_n^{s}(\omega_{k'}-\omega)\,\bigl(S(\omega)-S(\omega_{k'})\bigr)\,\mathrm{d}\omega \\
&\quad+ \frac{S(\omega_k)+S(\omega_{k'})}{2n}\int_{-1/2}^{1/2} D_n^{s}(\omega_k-\omega)\,D_n^{s}(\omega_{k'}-\omega)\,\mathrm{d}\omega.
\end{align*}

While unwieldy, this representation of the integral already provides a reasonably direct explanation of several features of $\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}$. Recalling that

\[
\int_{-1/2}^{1/2} D_n^{s}(\omega_k-\omega)\,D_n^{s}(\omega_{k'}-\omega)\,\mathrm{d}\omega =
\begin{cases} n & k = k' \\ 0 & k \neq k' \end{cases},
\]

we see that the third term is precisely the diagonal contribution of $S(\omega_k)$. We will now argue that the first two terms in the above sum correspond to severely rank-deficient matrices. Since they can be analyzed in exactly the same way, we study only the first in detail here.
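Before walking through that argument, it may help to see the claim empirically. The sketch below (ours, at a modest $n$; the experiments later in this work are at much larger sizes) assembles $\bm{E}$ for the running example using its closed-form autocovariances, a unitary DFT matrix, and Fourier frequencies wrapped to $[-1/2,1/2)$ (conventions we assume here), and then inspects how quickly its singular values fall off.

```python
import numpy as np

# Sketch: assemble the Whittle correction E = F Sigma F^H - D for the running
# example S(w) = 10 exp(-10|w|) at a modest n and look at its singular values.
# Assumed conventions: unitary DFT matrix F, Fourier frequencies wrapped to
# [-1/2, 1/2) via np.fft.fftfreq, and D_jj = S(omega_j).
n = 512
S = lambda w: 10.0 * np.exp(-10.0 * np.abs(w))
lags = np.arange(n)
h = 200.0 * (1.0 - np.exp(-5.0) * (-1.0) ** lags) / (100.0 + 4.0 * np.pi ** 2 * lags ** 2)

Sigma = h[np.abs(np.subtract.outer(lags, lags))]                  # Toeplitz covariance
F = np.exp(-2j * np.pi * np.outer(lags, lags) / n) / np.sqrt(n)   # unitary DFT
E = F @ Sigma @ F.conj().T - np.diag(S(np.fft.fftfreq(n)))

sv = np.linalg.svd(E, compute_uv=False)
print(sv[:8] / sv[0])   # expect a fast drop-off: only a few directions matter
```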

To begin, we slightly rewrite the first term in (3.1) as

\[
\int_{-1/2}^{1/2} \sin(\pi n(\omega_k-\omega))\sin(\pi n(\omega_{k'}-\omega))\csc(\pi(\omega_k-\omega))\csc(\pi(\omega_{k'}-\omega))\bigl(S(\omega)-S(\omega_k)\bigr)\,\mathrm{d}\omega. \tag{3.2}
\]

From here, we partition the domain into

\[
[-1/2,1/2] = \underbrace{[-1/2,\,\omega_k-\gamma]}_{\text{Type I}} \cup \underbrace{B_\gamma(\omega_k)}_{\text{Type II}} \cup \underbrace{[\omega_k+\gamma,\,\omega_{k'}-\gamma]}_{\text{Type I}} \cup \underbrace{B_\gamma(\omega_{k'})}_{\text{Type II}} \cup \underbrace{[\omega_{k'}+\gamma,\,1/2]}_{\text{Type I}}, \tag{3.3}
\]

where $B_\gamma(x) = [x-\gamma, x+\gamma]$ and $\gamma$ is some small number chosen to keep distance from the singularities of the cosecant terms. By design, then, in Type I intervals the cosecant terms are simple analytic functions. Defining

\[
\tilde{S}_{n,k,k'}(\omega) = \csc(\pi(\omega_k-\omega))\csc(\pi(\omega_{k'}-\omega))\bigl(S(\omega)-S(\omega_k)\bigr), \tag{3.4}
\]

we see that in Type I regions this function is bounded above (assuming that $S$ itself is) and as smooth as $S$. This motivates the following result, which will be used many times in this section.

Proposition 3.1.

If $g(\omega) \in \mathcal{C}^{(m)}([a,b])$ and $g^{(m)}$ is integrable on $[a,b]$, then

\begin{align}
\int_a^b \sin(\pi n(\omega_k-\omega))\sin(\pi n(\omega_{k'}-\omega))\,g(\omega)\,\mathrm{d}\omega
&= \frac{(-1)^{k-k'}}{2}\int_a^b g(\omega)\,\mathrm{d}\omega \tag{3.5}\\
&\quad- \frac{1}{2}\Re\left\{ e^{i\pi n\frac{k+k'}{2}}\left[ e^{2\pi i b}\sum_{j=0}^{m-1}\frac{g^{(j)}(b)}{(-i2\pi n)^{j+1}} - e^{2\pi i a}\sum_{j=0}^{m-1}\frac{g^{(j)}(a)}{(-i2\pi n)^{j+1}} \right]\right\} \tag{3.6}\\
&\quad+ \mathcal{O}(n^{-m-1}), \notag
\end{align}

where $\Re(a+ib) = a$ denotes the real part of a complex number.

Proof 3.2.

Using the product-to-sum identity $\sin(\theta)\sin(\phi) = \tfrac{1}{2}\bigl(\cos(\theta-\phi) - \cos(\theta+\phi)\bigr)$ and standard manipulations of the complex exponential, the left-hand side can be rewritten as

\[
\frac{1}{2}\cos\!\left(\pi n\,\frac{k-k'}{n}\right)\int_a^b g(\omega)\,\mathrm{d}\omega + \frac{1}{2}\Re\left\{ e^{i\pi n\frac{k+k'}{2}} \int_a^b e^{-i2\pi n\omega}\, g(\omega)\,\mathrm{d}\omega \right\}.
\]

But $\cos(\pi n\,\tfrac{k-k'}{n}) = \cos(\pi(k-k')) = (-1)^{k-k'}$, which provides the simplification of the first term. For the second term, just as in the proof of Proposition 2.1, we simply integrate by parts as many times as possible on the now standard-form oscillatory integral to get

\[
\int_a^b e^{-2\pi i n\omega}\, g(\omega)\,\mathrm{d}\omega = -\sum_{j=0}^{m-1}(-2\pi i n)^{-j-1}\left\{ g^{(j)}(b)e^{-2\pi i n b} - g^{(j)}(a)e^{-2\pi i n a} \right\} + \mathcal{O}(n^{-m-1}).
\]

An elementary rearrangement gives the result.

While this proposition is presented in generality, a reasonable amount of additional simplification of the right-hand side is possible based on the value of $k+k' \pmod{4}$. In particular, we see that $e^{i\pi n(k+k')/2} = i^{\,k-k' \pmod 4}$, which means that every complex exponential on the right-hand side of (3.5) simplifies nicely. If we assume that $k \equiv k' \pmod{4}$, so that $e^{i\pi n(k+k')/2} = 1$, the asymptotic expansion term can be expanded more concretely to

\begin{align}
\Re&\left\{(\cos(2\pi b)+i\sin(2\pi b))\sum_{j=0}^{m-1}\frac{g^{(j)}(b)}{(-i2\pi n)^{j+1}} - (\cos(2\pi a)+i\sin(2\pi a))\sum_{j=0}^{m-1}\frac{g^{(j)}(a)}{(-i2\pi n)^{j+1}}\right\} \notag\\
&= \cos(2\pi b)\sum_{\substack{j=0\\ j\ \text{odd}}}^{m-1}(-1)^{1+\lfloor j/2\rfloor}\frac{g^{(j)}(b)}{(2\pi n)^{j+1}} - \cos(2\pi a)\sum_{\substack{j=0\\ j\ \text{odd}}}^{m-1}(-1)^{1+\lfloor j/2\rfloor}\frac{g^{(j)}(a)}{(2\pi n)^{j+1}} \tag{3.7}\\
&\quad+ \sin(2\pi b)\sum_{\substack{j=0\\ j\ \text{even}}}^{m-1}(-1)^{2+\lfloor j/2\rfloor}\frac{g^{(j)}(b)}{(2\pi n)^{j+1}} - \sin(2\pi a)\sum_{\substack{j=0\\ j\ \text{even}}}^{m-1}(-1)^{2+\lfloor j/2\rfloor}\frac{g^{(j)}(a)}{(2\pi n)^{j+1}}. \notag
\end{align}

And while we do not enumerate the other cases for the three remaining values of $e^{i\pi n(k+k')/2}$, the only difference is in the odd/even indexing and the alternating sign inside the sums. The key takeaway here is that the entire integral (3.2) can actually be written as a very smooth integral term ($\int_a^b g(\omega)\,\mathrm{d}\omega$), four trigonometric functions that are non-oscillatory since they have unit wavelength and $a, b \in [-1/2,1/2]$, with coefficients taken from functions that are themselves smooth (since $g \in \mathcal{C}^{(m)}([a,b])$), and a remainder term depending on high-order derivatives of $g(\omega)$ on $[a,b]$. As it pertains to the Whittle correction matrix $\bm{E}$, the three Type I regions have endpoints $(a=-1/2,\ b=\omega_k-\gamma)$, $(a=\omega_k+\gamma,\ b=\omega_{k'}-\gamma)$, and $(a=\omega_{k'}+\gamma,\ b=1/2)$ respectively, and so the effective contribution of the Type I integrals to $\bm{E}_{k,k'}$ is given by $\int_{[-1/2,1/2]\setminus(B_\gamma(\omega_k)\cup B_\gamma(\omega_{k'}))} \tilde{S}_{n,k,k'}(\omega)\,\mathrm{d}\omega$ plus a total of $12$ trigonometric functions evaluated at the three pairs of endpoints $a$ and $b$ with smoothly varying coefficients. Given that the trigonometric functions are analytic and that $g \in \mathcal{C}^{(m)}([-1/2,1/2])$, and since very smooth kernel matrices often exhibit rapid spectral decay (an observation dating back at least to [20] and the fast multipole method (FMM)), we expect the contribution of the Type I intervals to $\bm{E}$ to have exceptionally fast spectral decay and to be severely rank-deficient so long as the remainder from the asymptotic expansion is small. With slightly more effort, we may repeat this analysis on the Type II regions containing the singularities.

For the first Type II region, $[\omega_k-\gamma,\,\omega_k+\gamma]$, the story is only slightly more complicated. This time we introduce the condensed notation

\[
\tilde{S}_{n,k'}(\omega) = \csc(\pi(\omega_{k'}-\omega))\bigl(S(\omega)-S(\omega_k)\bigr)
\]

for the bounded and non-oscillatory part of the integrand, and we study

\[
\int_{\omega_k-\gamma}^{\omega_k+\gamma} \sin(\pi n(\omega_k-\omega))\sin(\pi n(\omega_{k'}-\omega))\csc(\pi(\omega_k-\omega))\,\tilde{S}_{n,k'}(\omega)\,\mathrm{d}\omega. \tag{3.8}
\]

The important observation to make here is that the singularity presented by $\csc(\pi(\omega_k-\omega))$ is simple. Recalling the Laurent expansion $\csc(t) = t^{-1} + \tfrac{t}{6} + \tfrac{7t^{3}}{360} + \dots$, we see that one can simply ``subtract off'' the singularity and obtain a standard power-series-type representation $\csc(t) - t^{-1} \approx P_l(t)$ for $t \in [-\gamma,\gamma]$ and some low-order polynomial $P_l$ based on the Laurent series. This motivates the decomposition of (3.8) into

\begin{align*}
&\int_{\omega_k-\gamma}^{\omega_k+\gamma} \sin(\pi n(\omega_k-\omega))\sin(\pi n(\omega_{k'}-\omega))\,P_l(\omega_k-\omega)\,\tilde{S}_{n,k'}(\omega)\,\mathrm{d}\omega \\
&\quad+ \int_{\omega_k-\gamma}^{\omega_k+\gamma} \sin(\pi n(\omega_k-\omega))\sin(\pi n(\omega_{k'}-\omega))\,\frac{\tilde{S}_{n,k'}(\omega)}{\omega_k-\omega}\,\mathrm{d}\omega.
\end{align*}

The first term above can again be expanded as a small number of unit-wavelength trigonometric functions via Proposition 3.1 with $g(\omega) = P_l(\omega_k-\omega)\tilde{S}_{n,k'}(\omega)$. Because $\tilde{S}_{n,k'}(\omega) = 0$ at $\omega = \omega_k$ and has been assumed to be smooth, the second term is also not singular and is a standard oscillatory integral. Applying Proposition 3.1 once more, now with $g(\omega) = (\omega_k-\omega)^{-1}\tilde{S}_{n,k'}(\omega)$, makes the entire contribution of (3.8) expressible as a linear combination of a small number of unit-wavelength trigonometric functions with smoothly varying coefficients.
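As a quick sanity check on the ``subtract off the singularity'' step (ours, with an arbitrary illustrative window width), one can verify numerically that $\csc(\pi t) - (\pi t)^{-1}$ is smooth on $[-\gamma,\gamma]$ and already well captured by the first two Laurent terms (written in the argument $\pi t$):

```python
import numpy as np

# Tiny check (illustrative choice gamma = 0.02): after subtracting the simple
# pole, csc(pi t) - 1/(pi t) is smooth on [-gamma, gamma] and is already well
# approximated by the first two terms of its Laurent series.
gamma = 0.02
t = np.linspace(-gamma, gamma, 400)               # even count avoids t = 0 exactly
f = 1.0 / np.sin(np.pi * t) - 1.0 / (np.pi * t)
p = (np.pi * t) / 6.0 + 7.0 * (np.pi * t) ** 3 / 360.0
print(np.max(np.abs(f - p)))                      # ~1e-9 for this window width
```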

The final segment of $[-1/2,1/2]$ to study is the Type II region $[\omega_{k'}-\gamma,\,\omega_{k'}+\gamma]$, where the singularity caused by $\csc(\pi(\omega_{k'}-\omega))$ has not already been factored out. Introducing one final piece of condensed integrand notation,

\[
\tilde{S}_{n,k}(\omega) = \csc(\pi(\omega_k-\omega))\bigl(S(\omega)-S(\omega_k)\bigr),
\]

we again decompose the contribution of that segment as

\begin{align*}
\int_{\omega_{k'}-\gamma}^{\omega_{k'}+\gamma} &\sin(\pi n(\omega_k-\omega))\sin(\pi n(\omega_{k'}-\omega))\csc(\pi(\omega_{k'}-\omega))\,\tilde{S}_{n,k}(\omega)\,\mathrm{d}\omega \\
&= \int_{\omega_{k'}-\gamma}^{\omega_{k'}+\gamma} \sin(\pi n(\omega_k-\omega))\sin(\pi n(\omega_{k'}-\omega))\,P_l(\omega_{k'}-\omega)\,\tilde{S}_{n,k}(\omega)\,\mathrm{d}\omega \\
&\quad+ \int_{\omega_{k'}-\gamma}^{\omega_{k'}+\gamma} \sin(\pi n(\omega_k-\omega))\sin(\pi n(\omega_{k'}-\omega))\,\frac{\tilde{S}_{n,k}(\omega)}{\omega_{k'}-\omega}\,\mathrm{d}\omega.
\end{align*}

The first term in the split integral can be handled yet again by Proposition 3.1, just as above. The only new wrinkle is that the second term on the right-hand side still has a singularity. Thankfully, with one more splitting we can re-express this last term as

\begin{align*}
\int_{\omega_{k'}-\gamma}^{\omega_{k'}+\gamma} &\sin(\pi n(\omega_k-\omega))\sin(\pi n(\omega_{k'}-\omega))\,\frac{\tilde{S}_{n,k}(\omega)}{\omega_{k'}-\omega}\,\mathrm{d}\omega \\
&= \int_{\omega_{k'}-\gamma}^{\omega_{k'}+\gamma} \sin(\pi n(\omega_k-\omega))\sin(\pi n(\omega_{k'}-\omega))\,\frac{\tilde{S}_{n,k}(\omega)-\tilde{S}_{n,k}(\omega_{k'})}{\omega_{k'}-\omega}\,\mathrm{d}\omega \\
&\quad+ \tilde{S}_{n,k}(\omega_{k'})\int_{\omega_{k'}-\gamma}^{\omega_{k'}+\gamma} \sin(\pi n(\omega_k-\omega))\sin(\pi n(\omega_{k'}-\omega))\,\frac{1}{\omega_{k'}-\omega}\,\mathrm{d}\omega.
\end{align*}

The first term on the right-hand side is no longer singular because $\tilde{S}_{n,k}(\omega)-\tilde{S}_{n,k}(\omega_{k'})$ is smooth and tends to zero at $\omega=\omega_{k'}$ faster than $(\omega_{k'}-\omega)^{-1}$ blows up, and so one may apply Proposition 3.1. The second integral is the integral of $\sin$ against a bounded function, since $\sin(\pi n(\omega_{k'}-\omega))$ is also smooth and goes to zero as $\omega\to\omega_{k'}$; it can thus be handled with the even simpler asymptotic expansion analogous to the one used in Proposition 2.1, which again represents it as a small number of smooth functions.

Combining all of these segmented contributions, we conclude by representing $\bm{E}_{k,k'}$ as a small number of very smooth functions. One might expect that breaking the integral into five pieces, two of which must themselves be split into two or three additional terms to be suitable for expansion, would produce so many smooth terms that the matrix is not particularly rank-deficient. We note, however, that all of these trigonometric functions have unit wavelength and are potentially linearly dependent with each other, and that the derivatives of the corresponding function $g$ in each application of Proposition 3.1 are by supposition also well-behaved.

This derivation also illuminates the sources of error in Whittle approximations and the degree of rank-structure in the Whittle correction matrices. For example, if $S$ has rough points where it is continuous but not smooth, then the above domain partitioning must be refined so that Corollary 1 can be applied to restore the accuracy of the asymptotic approximation. This in turn means that $\bm{E}_{k,k'}$ will need more asymptotic expansion-sourced terms for the expansion to be accurate. For spectral densities $S(\omega)$ with many such rough points, the number of unique terms in the split expansion can add up to a degree that ruins this rank structure. And if $S(\omega)$ and its derivatives are highly oscillatory, then terms like (3) may not actually yield particularly low-rank matrices despite being smooth.

4 Numerical assembly and action of the Whittle correction

With this theoretical justification for the low-rank structure of the Whittle correction matrix, we now turn to discussing how to actually compute and assemble the matrices $\bm{U}$ and $\bm{V}$ such that $\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}-\bm{D}\approx\bm{U}\bm{V}^{H}$. First, we note that the action of $\bm{\Sigma}$ on vectors can be computed in $\mathcal{O}(n\log n)$ time with the familiar circulant embedding technique [39] once the values $\{h_k\}_{k=0}^{n-1}$ have been computed. While we refer readers to the references for a more complete discussion of circulant embedding, we provide a brief summary here. The symmetric Toeplitz matrix $\bm{\Sigma}$ is specified entirely by its first column $[h_0, h_1, \ldots, h_{n-1}]$. This column can be extended to one of length $2n-1$ given by $\bm{c}=[h_0, h_1, \ldots, h_{n-1}, h_{n-1}, h_{n-2}, \ldots, h_1]$, and if one assembles a Toeplitz matrix from this length-$(2n-1)$ column, the matrix is in fact circulant and diagonalized by the FFT [19]. Since the FFT can be applied in $\mathcal{O}(n\log n)$ complexity, so can this augmented circulant matrix. Thus, one can compute $\bm{\Sigma}\bm{v}$ for any vector $\bm{v}$ by extracting the first $n$ components of $\bm{C}[\bm{v},\bm{0}_{n-1}]$, where $\bm{C}$ is the circulant matrix made from $\bm{c}$ and $\bm{0}_{n-1}$ denotes $n-1$ zeros appended to $\bm{v}$.
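
To make this concrete, below is a minimal NumPy sketch of the circulant-embedding matrix-vector product. The function name and interface are our own choices for exposition and are not taken from the paper.

    import numpy as np

    def toeplitz_matvec(h, v):
        # Compute Sigma @ v for the symmetric Toeplitz matrix Sigma whose
        # first column is h = [h_0, ..., h_{n-1}], via circulant embedding.
        # Cost is O(n log n): three FFTs of length 2n - 1.
        n = len(h)
        # Embedding column c = [h_0, ..., h_{n-1}, h_{n-1}, ..., h_1].
        c = np.concatenate([h, h[-1:0:-1]])
        # Zero-pad v to length 2n - 1 and apply the circulant matrix in the
        # Fourier domain, where it acts diagonally.
        v_pad = np.concatenate([v, np.zeros(n - 1)])
        prod = np.fft.ifft(np.fft.fft(c) * np.fft.fft(v_pad))
        # The first n entries of C [v; 0] recover Sigma @ v.
        out = prod[:n]
        return out.real if np.isrealobj(h) and np.isrealobj(v) else out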

Letting $\mathcal{F}_n$ be the FFT of size $n$ (defined for the moment without the "fftshift"), we can thus summarize the action of $\mathcal{F}_n\bm{\Sigma}\mathcal{F}_n^{H}-\bm{D}$ on a vector $\bm{v}$ with

(4.1)
\[
(\mathcal{F}_{n}\bm{\Sigma}\mathcal{F}_{n}^{H}-\bm{D})\bm{v}
= \mathcal{F}_{n}\left\{\mathcal{F}_{2n-1}^{H}\left(\left\{\mathcal{F}_{2n-1}\bm{c}\right\}\circ\mathcal{F}_{2n-1}\begin{bmatrix}\mathcal{F}_{n}^{H}\bm{v}\\ \bm{0}_{n-1}\end{bmatrix}\right)\right\}_{1:n}-\bm{D}\bm{v}.
\]

Assuming that $\mathcal{F}_{2n-1}\bm{c}$ is pre-computed, this means that the action of $\mathcal{F}\bm{\Sigma}\mathcal{F}^{H}-\bm{D}$ on $\bm{v}$ requires four FFTs: two of size $n$ and two of size $2n-1$.
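
As a brief illustration of Equation 4.1, the following sketch (reusing the toeplitz_matvec helper above) applies $\mathcal{F}_n\bm{\Sigma}\mathcal{F}_n^{H}-\bm{D}$ to a vector. We use the unitary FFT normalization so that the diagonal of $\mathcal{F}_n\bm{\Sigma}\mathcal{F}_n^{H}$ agrees with $S$ at the Fourier frequencies to leading order; the names and the assumption that S_diag is ordered to match NumPy's FFT output are ours, not the paper's.

    def whittle_correction_matvec(h, S_diag, v):
        # Apply (F_n Sigma F_n^H - D) to v, cf. Equation (4.1).
        #   h      : first column [h_0, ..., h_{n-1}] of Sigma
        #   S_diag : diagonal of D, i.e. S(omega_j) at the Fourier frequencies,
        #            ordered to match np.fft.fft output (no fftshift)
        #   v      : length-n vector
        # Cost: two FFTs of size n plus the circulant product of size 2n - 1.
        w = np.fft.ifft(v, norm="ortho")     # F_n^H v (unitary normalization)
        sigma_w = toeplitz_matvec(h, w)      # Sigma (F_n^H v), O(n log n)
        return np.fft.fft(sigma_w, norm="ortho") - S_diag * v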

With this established, we turn to the process of approximating the correction matrix $\mathcal{F}_n\bm{\Sigma}\mathcal{F}_n^{H}-\bm{D}$ from the position of only being able to apply it to vectors efficiently. The field of randomized low-rank approximation has become an important part of modern numerical linear algebra, and we refer readers to review papers like [21] and the more recent [37] for broad introductions and context. As it pertains to this work, it is sufficient to discuss the problem of using randomized algorithms to estimate the range of a matrix. Letting $\bm{A}\in\mathbb{R}^{n\times n}$ denote an arbitrary matrix of rank $r$, [21] shows that the orthogonal matrix $\bm{Q}$ such that

\[
\bm{A}\bm{\Omega}=\bm{Q}\bm{R},
\]

where $\bm{\Omega}\in\mathbb{R}^{n\times(r+p)}$ is a "sketching"/test matrix, which in this work will be made of i.i.d. standard normal random variables (although other and potentially faster choices are available), is an approximate basis for the column space of $\bm{A}$. The matrix $\bm{Q}$ itself may be obtained by performing a standard QR factorization of the product $\bm{A}\bm{\Omega}$, and the parameter $p$ is an oversampling parameter, often chosen to be a small number like $p=5$ [21]. From there, one obtains a simple low-rank representation of $\bm{A}$ as

\[
\bm{A}\approx\underbrace{\bm{Q}}_{\bm{U}}\,\underbrace{\bm{Q}^{H}\bm{A}}_{\bm{V}^{H}}.
\]
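
As a sketch of this range-finding step for an operator accessible only through matrix-vector products, one minimal implementation (with Gaussian sketching and names of our own choosing) is below. Because the correction matrix $\mathcal{F}_n\bm{\Sigma}\mathcal{F}_n^{H}-\bm{D}$ is Hermitian, the same matvec routine can be passed for both the operator and its adjoint.

    def randomized_lowrank(apply_A, apply_AH, n, r, p=5, seed=None):
        # Randomized low-rank approximation A ~= U V^H of an n x n operator
        # available only through its action on vectors.
        #   apply_A  : x -> A @ x,   apply_AH : x -> A^H @ x
        #   r        : target rank,  p : oversampling (e.g. p = 5)
        rng = np.random.default_rng(seed)
        # Gaussian sketching matrix with r + p columns.
        Omega = rng.standard_normal((n, r + p))
        # Sample the range of A and orthonormalize: A Omega = Q R.
        Y = np.column_stack([apply_A(Omega[:, j]) for j in range(r + p)])
        Q, _ = np.linalg.qr(Y)
        # A ~= Q (Q^H A) = U V^H with U = Q and V = A^H Q.
        V = np.column_stack([apply_AH(Q[:, j]) for j in range(r + p)])
        return Q, V

    # Hypothetical usage with the correction operator from Equation (4.1):
    #   matvec = lambda x: whittle_correction_matvec(h, S_diag, x)
    #   U, V = randomized_lowrank(matvec, matvec, n, r=2)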

This entire process is summarized in Algorithm 1.

1. Using a method from Section 2, obtain the autocovariance terms $\{h_j\}_{j=0}^{n-1}$.

2. Create the diagonal matrix $\bm{D}$ with $\bm{D}_{j,j}=S(\omega_j)$ at the Fourier frequencies $\{\omega_j\}_{j=1}^{n}$.

3. Draw a sketching matrix $\bm{\Omega}\in\mathbb{R}^{n\times(r+p)}$ with $\bm{\Omega}_{j,k}\overset{\text{i.i.d.}}{\sim}\mathcal{N}(0,1)$.

4. Using Equation 4.1, compute $(\mathcal{F}_{n}\bm{\Sigma}\mathcal{F}_{n}^{H}-\bm{D})\bm{\Omega}$.