Nonparametric threshold estimation of spot volatility based on high-frequency data for time-dependent diffusion models with jumps

Abstract

We construct a spot volatility kernel estimator for time-dependent diffusion models with jumps. Instead of the customary intraday return over an observation interval, the proposed estimator uses the intraday range. Since the range represents the maximum difference among all observations within an interval, all data are used, and no information is lost. By setting a reasonable threshold and discarding intervals whose squared range exceeds it, we effectively eliminate the negative effect of jumps on volatility estimation. In this paper we also prove the consistency and asymptotic normality of the estimator and show that it attains a smaller asymptotic variance than its return-based counterpart.

1 Introduction

In the analysis of financial markets, a correct description of the underlying variables is crucial. In fact, the dynamics of these underlying variables can often be described by diffusion-type models. Volatility is undoubtedly one of the most important quantities in diffusion model research. Its correct estimation and forecasting play important roles in risk management, hedging, portfolio selection, and derivative pricing.

As is well known, the macro- and microeconomic environment is not static. Therefore it is natural to assume that the spot volatility depends not only on a specific state variable but also on time. In other words, the underlying variables should follow a time-dependent diffusion process. Fan and Wang [1] used a kernel smoothing technique to study spot volatility estimation for high-dimensional time-dependent diffusion models and proved the consistency and asymptotic normality of their proposed estimator. Zu and Boswijk [2] constructed a spot volatility estimator for high-frequency data contaminated by market microstructure noise and presented a data-driven method to select the scale and bandwidth parameters. For a continuous diffusion process

$$\begin{aligned} dX_{t} = \alpha _{t}\,dt + \beta _{t} \,dW_{t}, \end{aligned}$$

Kristensen [3] constructed a kernel-weighted integrated volatility estimator and proposed the following filtered spot volatility estimator:

$$\begin{aligned} \hat{\beta }_{t}^{2} =\sum _{i = 1}^{n} {K_{h} ( {t_{i - 1} - t} ) ( {X_{t_{i} } - X_{t_{i - 1} } } )^{2} } , \end{aligned}$$
(1)

where h is the bandwidth, K is the kernel function, and \(K_{h} ( \cdot ) = {{K ( {{ \cdot / h}} )} / h}\). Under some weak conditions, he proved the following asymptotic normality of the estimator:

$$\begin{aligned} \sqrt{nh} \bigl( {\hat{\beta }_{t}^{2} - \beta _{t}^{2} } \bigr) \xrightarrow{d}N \biggl( {0,2 \beta _{t}^{4} \int _{R} {K^{2} ( z )\,dz} } \biggr). \end{aligned}$$
(2)
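
To make the construction in equation (1) concrete, the following is a minimal Python sketch of the return-based kernel estimator. The Epanechnikov kernel, the simulated path, and all names are illustrative assumptions and are not taken from the paper.

```python
import numpy as np

def epanechnikov(z):
    """Illustrative kernel with support [-1, 1]; any kernel with the usual properties could be used."""
    return 0.75 * (1.0 - z**2) * (np.abs(z) <= 1.0)

def return_based_spot_vol(t, times, X, h, kernel=epanechnikov):
    """Filtered spot volatility of equation (1):
    sum_i K_h(t_{i-1} - t) * (X_{t_i} - X_{t_{i-1}})**2  with  K_h(x) = K(x/h)/h."""
    dX = np.diff(X)                          # intraday returns X_{t_i} - X_{t_{i-1}}
    w = kernel((times[:-1] - t) / h) / h     # kernel weights K_h(t_{i-1} - t)
    return float(np.sum(w * dX**2))

# Tiny usage example on a simulated continuous path dX_t = beta dW_t with beta = 0.2
rng = np.random.default_rng(0)
n, T = 23_400, 1.0                           # e.g. 1-second sampling over one "day"
times = np.linspace(0.0, T, n + 1)
X = np.concatenate([[0.0], np.cumsum(0.2 * np.sqrt(T / n) * rng.standard_normal(n))])
print(return_based_spot_vol(0.5, times, X, h=0.05))   # should be close to 0.2**2 = 0.04
```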

With the development of electronic trading technology, it has become possible to trade ever more frequently, and financial activities are now carried out at very high frequencies. It is unconvincing to use continuous diffusion models to describe high-frequency and even ultrahigh-frequency data. Both theoretical and empirical studies show that there are jumps in financial variables, which have important impacts on financial analysis (see Lee and Mykland [4] and Aït-Sahalia and Jacod [5]). In recent years, more and more scholars have begun to use jump diffusion models to describe financial variables and have studied their impacts on financial activities (see Zhu et al. [6] and Matenda and Chikodza [7]).

We consider the following jump diffusion model:

$$\begin{aligned} X_{t} = X_{0} + \int _{0}^{t} \alpha _{u}\,du + \int _{0}^{t} { \beta _{u} \,dW_{u} } + \sum_{i = 1}^{N_{t} } {Y_{i} } ,\quad t \in ( {0,T} ], \end{aligned}$$
(3)

where \({\alpha _{t} }\), \({\beta _{t} }\) are càdlàg, \({W_{t} }\) is a standard Wiener process, \(\sum_{i = 1}^{N_{t} } {Y_{i} } \) is a compound Poisson process independent of \({W_{t} } \), \({N_{t} }\) is a Poisson process with intensity λ, and \(Y_{i} \) (\(i = 1,2, \ldots ,N_{t} \)) is the jump size at the ith jump time; the jump sizes are independent of \({N_{t} }\).

To better estimate the volatility in jump diffusion models, one natural approach is to eliminate the impact of jumps, and the threshold technique is a common and effective method for removing them (Mancini [8], Mancini and Renò [9], Wang and Zhou [10], Song and Wang [11], and Sun and Yu [12]). Intuitively, it is plausible that we can remove the effects of jumps and estimate the spot volatility in equation (3) by modifying \(\hat{\beta }_{t}^{2} \) in equation (1) as

$$\begin{aligned} \hat{\beta }_{t}^{2} =\sum _{i = 1}^{n} {K_{h} ( {t_{i - 1} - t} ) ( {X_{t_{i} } - X_{t_{i - 1} } } )^{2} \cdot } I_{ \{ { ( {X_{t_{i} } - X_{t_{i - 1} } } )^{2} \leq \phi ( \delta )} \} }, \end{aligned}$$
(4)

where \(I_{ \{ \cdot \} } \) is the indicator function, and \(\phi ( \delta ) \) is a deterministic function of the time spacing δ. However, applying this return-based method directly to high-frequency data is more or less inappropriate because of the well-known negative effects of market microstructure noise.

In view of the advantages of the range-based method, such as estimation accuracy, data integrity, and robustness to noise (see Christensen and Podolskij [13, 14], Liu et al. [15], Vortelinos [16], and Xu et al. [17]), in this paper we use the range to replace the return \(( {X_{t_{i} } - X_{t_{i - 1} } } )\) in equation (4). Meanwhile, since the range is defined as the maximum difference between values of the state variable within a given time spacing, by requiring its square to be no greater than a specific threshold we can propose a range-based threshold spot volatility estimator.

The rest of the paper is organized as follows. In Sect. 2, we give some necessary technical conditions and construct the estimator of interest. Section 3 proves the consistency and asymptotic normality of the estimator. Conclusions are presented in Sect. 4.

2 Construction of estimator

For ease of discussion, we decompose the process \({X_{t}}\) as \(X = X^{ ( C )} + X^{ ( J )} \), where \(X^{ ( C )} \) and \(X^{ ( J )} \) are the continuous and jump parts of the process, respectively. Some necessary conditions are given below.

T1:

The process \({\alpha _{t} }\) is a second-order differentiable measurable process and satisfies

$$ \bigl\vert {\alpha _{t}^{ ( i )} } \bigr\vert =O_{P} ( 1 ),\quad t \in [ {0,T} ],i = 0,1,2. $$
T2:

The process \({\beta _{t} }\) is differentiable and satisfies

$$ \sup \bigl\{ { \vert {\beta _{s} - \beta _{t} } \vert , s,t \in [ {0,T} ], \vert {s - t} \vert \leq \xi } \bigr\} = O_{P} \bigl( {\xi ^{{1 / 2}} \vert {\log \xi } \vert ^{{1 / 2}} } \bigr) $$

and

$$ \sup_{0 \leq t \leq T} \beta _{t}^{2} = O_{P} ( 1 ). $$
T3:

The kernel function K is differentiable with support \([-1,1]\) and satisfies

$$ \int _{ - 1}^{1} {K ( c )\,dc} = 1 $$

and

$$ \int _{ - 1}^{1} {K^{2} ( c )\,dc} , \int _{ - 1}^{1} {K^{3} ( c )\,dc} , \int _{ - 1}^{1} {K' ( c )\,dc} =O_{P} ( 1 ). $$

Without loss of generality, we assume that the samples are taken at equal time spacing. Given n observations in the interval \([0,T]\), the time spacing between two adjacent observations is \(\delta =T/n\). For sampling with unequal time spacing, it suffices to define \(\delta = \max_{i} ( {t_{i} - t_{i - 1} } )\) (\(i=1,2,\ldots,n\)).

We define the range of a process \({X_{t} }\) in \([ {t_{i - 1} ,t_{i} } ]\) as

$$ r_{X_{t_{i},\delta } } = \sup_{t_{i - 1} \leq \varsigma ,\tau \leq t_{i} } \{ {X_{\varsigma }- X_{\tau }} \} . $$

For a scaled Wiener process \(X_{t} = \beta W_{t} \), Parkinson [18] derived the pth moment of its range in \([ {t_{i - 1} ,t_{i} } ]\) as

$$\begin{aligned} E \bigl[ {r_{X_{t_{i},\delta } }^{p} } \bigr] = \mu _{p} \delta _{i}^{{p / 2}} \beta ^{p}\quad ( {p \geq 1} ), \end{aligned}$$
(5)

where \(\delta _{i} = t_{i} - t_{i - 1} \), and \(\mu _{p} = E [ {r_{W_{1,1} }^{p} } ]\) is the pth moment of the range of a standard Wiener process on the unit interval.
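
As a quick sanity check of equation (5), the short Monte Carlo experiment below approximates \(\mu _{2}\) and \(\mu _{4}\) on a discrete grid. The limiting values, \(4\log 2 \approx 2.773\) and \(9\zeta (3) \approx 10.82\), follow from Parkinson's formula [18] and are quoted here only as background; the number of paths, grid size, and seed are arbitrary choices.

```python
import numpy as np

def mc_range_moment(p=2, n_paths=4000, n_steps=2000, seed=1):
    """Monte Carlo estimate of mu_p = E[ r_{W_{1,1}}^p ], the p-th moment of the range
    of a standard Wiener process on [0, 1], approximated on a grid of n_steps points."""
    rng = np.random.default_rng(seed)
    dW = np.sqrt(1.0 / n_steps) * rng.standard_normal((n_paths, n_steps))
    W = np.hstack([np.zeros((n_paths, 1)), np.cumsum(dW, axis=1)])   # include W_0 = 0
    ranges = W.max(axis=1) - W.min(axis=1)                           # discrete-grid range of each path
    return float(np.mean(ranges**p))

print(mc_range_moment(p=2))   # close to 4*log(2) ≈ 2.773 (slightly below, from discretization)
print(mc_range_moment(p=4))   # close to 9*zeta(3) ≈ 10.82
```

The grid only sees the process at the sampling times, so the simulated range, and hence each moment, is biased slightly downward; this is the usual downward bias of a discretely observed range.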

Now we present the nonparametric spot volatility estimator

$$\begin{aligned} \hat{\beta }_{t}^{2} = \frac{1}{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( { \frac{{t_{i} - t}}{h}} \biggr)r_{X_{t_{i},\delta } }^{2} I_{ \{ {r_{X_{t_{i}, \delta } }^{2} \leq \phi ( \delta )} \} } }, \end{aligned}$$
(6)

where \(I_{ \{ \cdot \} } \) is the indicator function, and \(\phi ( \delta ) \) is a deterministic function of the time spacing δ and satisfies

$$\begin{aligned} \lim_{\delta \to 0} \phi ( \delta ) = 0 \end{aligned}$$
(7)

and

$$\begin{aligned} \lim_{\delta \to 0} \bigl( {{{\delta \log \delta } / {\phi ( \delta )}}} \bigr) = 0. \end{aligned}$$
(8)
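
To illustrate how estimator (6) could be computed from discretely observed data, here is a minimal Python sketch. The range over each block \([ {t_{i - 1} ,t_{i} } ]\) is approximated by the high-low of the ticks observed inside the block, \(\mu _{2}\) is set to the Parkinson constant \(4\log 2\), and the kernel, block length, and threshold exponent are illustrative assumptions rather than recommendations of the paper.

```python
import numpy as np

MU2 = 4.0 * np.log(2.0)      # E[r_{W_{1,1}}^2] = 4*log(2), the usual Parkinson constant (background value)

def epanechnikov(z):
    return 0.75 * (1.0 - z**2) * (np.abs(z) <= 1.0)

def range_threshold_spot_vol(t, times, X, h, m, a=0.49, kernel=epanechnikov):
    """Sketch of estimator (6): kernel-weighted squared block ranges, where blocks whose
    squared range exceeds phi(delta) = delta**a are discarded as containing a jump.
    `m` is the number of ticks per block [t_{i-1}, t_i]; a = 0.49 is one of the
    threshold choices mentioned in Remark 2 below."""
    n_blocks = (len(X) - 1) // m
    delta = times[m] - times[0]                   # block length (equal spacing assumed)
    phi = delta**a                                # threshold satisfying (7) and (8)
    est = 0.0
    for i in range(n_blocks):
        block = X[i * m : (i + 1) * m + 1]        # all ticks observed in [t_{i-1}, t_i]
        r2 = (block.max() - block.min()) ** 2     # squared intraday range of the block
        if r2 <= phi:                             # indicator I{ r^2 <= phi(delta) }
            est += kernel((times[(i + 1) * m] - t) / h) * r2
    return est / (h * MU2)
```

On a path with a compound Poisson component, blocks that contain a jump typically have a squared range larger than the threshold once δ is small, so they are dropped from the sum; this is the mechanism made precise in Lemma 1 below.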

Remark 1

In the estimator \(\hat{\beta }_{t}^{2} \) of equation (6), we replace the term \(( {X_{t_{i} } - X_{t_{i - 1} } } )^{2} \) in return-based estimators (see equation (1)) with the range term \({{r_{X_{t_{i},\delta } }^{2} } / {\mu _{2} }}\). As mentioned by Christensen and Podolskij [13], when the sampling frequency is not very high (say, 2- or 3-hour returns), the return-based method is simple and efficient, but as the frequency increases (1-second returns and even tick-by-tick returns), it becomes seriously affected by market microstructure noise. The main advantages of the range-based method are twofold. On one hand, it requires no sparse sampling, uses the entire data set without being affected by the noise, and thus ensures the integrity of the information. On the other hand, it has higher efficiency.

Remark 2

The deterministic function \(\phi ( \delta )\) is a threshold used to determine whether a jump has occurred. Theoretically, any function that satisfies equations (7) and (8) can be selected as the threshold function. The power function \(\delta ^{a}\) (for any \(a \in ( {0,1} )\)) is a possible choice for \(\phi ( \delta )\), since it satisfies equations (7) and (8), as verified below. In diffusion models with finite-activity jumps, Yu et al. [19] chose \(\phi ( \delta ) = \delta ^{0.49}\), whereas in diffusion models with finite- and infinite-activity jumps, Mancini [8] chose \(\phi ( \delta ) = \delta ^{0.99}\). Mancini and Renò [9] even suggested that a time-varying threshold could be selected.
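
Indeed, for \(\phi ( \delta ) = \delta ^{a}\) with \(a \in ( {0,1} )\),

$$ \lim_{\delta \to 0} \delta ^{a} = 0 \quad \text{and}\quad \lim_{\delta \to 0} \frac{{\delta \log \delta }}{{\delta ^{a} }} = \lim_{\delta \to 0} \delta ^{1 - a} \log \delta = 0, $$

so equations (7) and (8) both hold.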

Remark 3

By setting a reasonable threshold \(\phi ( \delta )\) and discarding intervals whose squared range exceeds it, the estimator \(\hat{\beta }_{t}^{2} \) effectively eliminates the influence of jumps as \(\delta \to 0\). Therefore it is a suitable estimator of spot volatility in jump diffusion models, which are more in line with the realities of financial markets.

3 Consistency and asymptotic normality

Lemma 1

(Cai et al. [20])

Suppose that \(\sum_{i = 1}^{N_{t} } {Y_{i} } \) is the jump process in equation (3) and \(\phi ( \delta ) \) satisfies equations (7) and (8). Suppose that for all \(t \in ( {0,T} ]\),

$$ P ( {Y_{N_{t} } = 0,\Delta N_{t} \ne 0} ) = 0. $$

Then

$$\begin{aligned} I_{ \{ {r_{X_{t_{i},\delta } }^{2} \le \phi ( \delta )} \} } = I_{ \{ {\Delta _{i} N = 0} \} }\quad ( { \forall i = 1,2, \ldots ,n} ). \end{aligned}$$
(9)

Remark 4

Lemma 1 is an extension of Theorem 1 in [8], with the return replaced by the range in equation (9). For small δ, if the squared range \(r_{X_{t_{i} ,\delta } }^{2}\) over the interval \([ {t_{i - 1} ,t_{i} } ]\) is not larger than the threshold \(\phi ( \delta )\), then no jump has occurred in the interval. Since the range represents the maximum difference among all observations within an interval, more high-frequency data are used, and the ability to detect jumps is better.

Remark 5

The condition \(P ( {Y_{N_{t} } = 0,\Delta N_{t} \ne 0} ) = 0\) was also used in [8]; it means that the event that a jump occurs (\(\Delta N_{t} \ne 0\)) but has size zero (\(Y_{N_{t} } = 0\)) has probability 0, that is, jump sizes are almost surely nonzero.

Theorem 2

Suppose that \({X_{t} }\) satisfies equation (3) and T1–T3 hold. If \(\delta \to 0\) so that

$$\begin{aligned} {\delta / h} \to 0 \end{aligned}$$
(10)

and \(\phi ( \delta )\) satisfies equations (7) and (8), then

$$ \hat{\beta }_{t}^{2} \stackrel{{{P }}}{\longrightarrow }\beta _{t}^{2}, $$

where the symbol “\(\stackrel{{P }}{\longrightarrow }\)” denotes the convergence in probability.

Proof

We first prove that \(\hat{\beta }_{t}^{2} \stackrel{{P }}{\longrightarrow }\frac{1 }{{h \mu _{2} }}\sum_{i = 1}^{n} {K ( {\frac{{t_{i} - t} }{h}} )\beta _{t_{i} }^{2} r_{W_{t_{i},\delta } }^{2} } \). We can see from the definition of \(\hat{\beta }_{t}^{2} \) and Lemma 1 that

$$ \hat{\beta }_{t}^{2} = \frac{1 }{{h\mu _{2} }}\sum _{i = 1}^{n} {K \biggl( { \frac{{t_{i} - t} }{h}} \biggr)r_{X_{t_{i},\delta } }^{2} I_{ \{ {\Delta _{i} N = 0} \} } } = A - B, $$

where

$$\begin{aligned} &A= \frac{1 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr)r_{X_{t_{i},\delta }^{C} }^{2} } \\ &\hphantom{A}= \frac{1 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr) \biggl( {\sup_{t_{i - 1} \le \varsigma ,\tau \le t_{i} } \biggl( { \int _{\varsigma }^{\tau }{\alpha _{s}\,ds} + \int _{\varsigma }^{\tau }{\beta _{s} \,dW_{s} } } \biggr)} \biggr)^{2} }, \\ &B=\frac{1 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr) \biggl( {\sup_{t_{i - 1} \le \varsigma ,\tau \le t_{i} } \biggl( { \int _{\varsigma }^{\tau }{\alpha _{s}\,ds} + \int _{\varsigma }^{\tau }{\beta _{s} \,dW_{s} } } \biggr)} \biggr)^{2} I_{ \{ {\Delta _{i} N \ne 0} \} } }. \end{aligned}$$

Using the triangle inequality and the elementary inequality \(( {a + b} )^{2} \leq 2a^{2} + 2b^{2} \), we obtain

$$\begin{aligned} & \biggl( {\sup_{t_{i - 1} \le \varsigma ,\tau \le t_{i} } \biggl( { \int _{\varsigma }^{\tau }{\alpha _{s}\,ds} + \int _{\varsigma }^{\tau }{\beta _{s} \,dW_{s} } } \biggr)} \biggr)^{2} \\ &\quad \le 2 \biggl( {\sup_{t_{i - 1} \le \varsigma , \tau \le t_{i} } \int _{\varsigma }^{\tau }{\alpha _{s}\,ds} } \biggr)^{2} + 2 \biggl( {\sup_{t_{i - 1} \le \varsigma ,\tau \le t_{i} } \int _{\varsigma }^{\tau }{\beta _{s} \,dW_{s} } } \biggr)^{2} \\ &\quad = D + E. \end{aligned}$$

Obviously,

$$\begin{aligned} D = 2 \biggl( {\sup_{t_{i - 1} \le \varsigma ,\tau \le t_{i} } \int _{\varsigma }^{\tau }{\alpha _{s}\,ds} } \biggr)^{2} = O_{P} \bigl( {\delta ^{2} } \bigr). \end{aligned}$$

For the term E, by the Burkholder–Davis–Gundy (BDG) inequality we obtain that

$$\begin{aligned} E = O_{P} \biggl( { \int _{t_{i - 1} }^{t_{i} } {\beta _{s}^{2} \,ds} } \biggr) = O_{P} ( \delta ). \end{aligned}$$

So

$$\begin{aligned} B = O_{P} \biggl( {\frac{{N_{T} \delta } }{{h\mu _{2} }}} \biggr) = O_{P} \biggl( {\frac{\delta }{h}} \biggr) \to 0. \end{aligned}$$

Next, we prove that

$$\begin{aligned} A\stackrel{{P }}{\longrightarrow } \frac{1 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} }^{2} r_{W_{t_{i},\delta } }^{2} }. \end{aligned}$$
(11)

We easily get

$$\begin{aligned} A - \frac{1 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} }^{2} r_{W_{t_{i},\delta } }^{2} } &= \frac{1 }{{h\mu _{2} }}\sum _{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr) \bigl( {r_{X_{t_{i},\delta }^{C} }^{2} - \beta _{t_{i} }^{2} r_{W_{t_{i},\delta } }^{2} } \bigr)} \\ &= F + G, \end{aligned}$$

where

$$\begin{aligned}& F = \frac{2 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} } r_{W_{t_{i},\delta } } ( {r_{X_{t_{i}, \delta }^{C} } - \beta _{t_{i} } r_{W_{t_{i},\delta } } } )}, \\& G = \frac{1 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr) ( {r_{X_{t_{i},\delta }^{C} } - \beta _{t_{i} } r_{W_{t_{i},\delta } } } )^{2} }. \end{aligned}$$

For the term G, we have

$$\begin{aligned} G \le{}& \frac{1 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr) \biggl( {\sup_{t_{i - 1} \le \varsigma ,\tau \le t_{i} } \biggl\vert { \int _{\varsigma }^{\tau }{ \alpha _{s}\,ds} + \int _{\varsigma }^{\tau }{ ( {\beta _{s} - \beta _{t_{i} } } )\,dW_{s} } } \biggr\vert } \biggr)^{2} } \\ \le{}& \frac{2 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr) \biggl( {\sup_{t_{i - 1} \le \varsigma ,\tau \le t_{i} } \biggl\vert { \int _{\varsigma }^{\tau }{ \alpha _{s}\,ds} } \biggr\vert } \biggr)^{2} } \\ &{}+ \frac{2 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr) \biggl( {\sup_{t_{i - 1} \le \varsigma ,\tau \le t_{i} } \biggl\vert { \int _{\varsigma }^{\tau }{ ( { \beta _{s} - \beta _{t_{i} } } )\,dW_{s} } } \biggr\vert } \biggr)^{2} } \\ ={}& G_{1} + G_{2}. \end{aligned}$$

By condition T1 we have

$$\begin{aligned} \max_{i} \sup_{t_{i - 1} \le \varsigma ,\tau \le t_{i} } \biggl\vert { \int _{\varsigma }^{\tau }{\alpha _{s}\,ds} } \biggr\vert = O_{P} ( \delta ). \end{aligned}$$

Therefore

$$\begin{aligned} G_{1} = \frac{2 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr) \biggl( {\sup_{t_{i - 1} \le \varsigma ,\tau \le t_{i} } \biggl\vert { \int _{\varsigma }^{\tau }{ \alpha _{s}\,ds} } \biggr\vert } \biggr)^{2} } = O_{P} ( \delta ). \end{aligned}$$

By the BDG inequality there exists a constant \(C ( { > 0} )\) such that

$$\begin{aligned} \biggl( {\sup_{t_{i - 1} \le \varsigma ,\tau \le t_{i} } \biggl\vert { \int _{\varsigma }^{\tau }{ ( {\beta _{s} - \beta _{t_{i} } } )\,dW_{s} } } \biggr\vert } \biggr)^{2} \le C \int _{t_{i - 1} }^{t_{i} } { ( {\beta _{s} - \beta _{t_{i} } } )^{2}\,ds}. \end{aligned}$$

Combining with condition T2, we obtain

$$\begin{aligned} G_{2} &\le \frac{{2C} }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr) \int _{t_{i - 1} }^{t_{i} } { ( {\beta _{s} - \beta _{t_{i} } } )^{2}\,ds} } \\ &= O_{P} \bigl( {\delta \vert {\log \delta } \vert } \bigr). \end{aligned}$$

Hence

$$\begin{aligned} G = o_{P} ( 1 ). \end{aligned}$$

Using the decomposition similar to G, we get

$$\begin{aligned} F \le \frac{2 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} } r_{W_{t_{i},\delta } } \sup_{t_{i - 1} \le \varsigma ,\tau \le t_{i} } \biggl\vert { \int _{\varsigma }^{\tau }{\alpha _{s}\,ds} + \int _{\varsigma }^{\tau }{ ( {\beta _{s} - \beta _{t_{i} } } )\,dW_{s} } } \biggr\vert }. \end{aligned}$$

Using Hölder’s inequality, we have

$$\begin{aligned} F\le{}& \frac{2 }{{\mu _{2} }} \Biggl( {\frac{1 }{h}\sum _{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} }^{2} r_{W_{t_{i}, \delta } }^{2} } } \Biggr)^{{1 / 2}} \\ & {}\cdot \Biggl( {\frac{1 }{h}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr) \biggl( {\sup_{t_{i - 1} \le \varsigma ,\tau \le t_{i} } \biggl\vert { \int _{\varsigma }^{\tau }{\alpha _{s}\,ds} + \int _{\varsigma }^{\tau }{ ( {\beta _{s} - \beta _{t_{i} } } )\,dW_{s} } } \biggr\vert } \biggr)^{2} } } \Biggr)^{{1 / 2}}. \end{aligned}$$

Using Hölder’s inequality again, we have

$$\begin{aligned} E [ F ]\le{}& \frac{2 }{{\mu _{2} }} \Biggl( {E \Biggl[ {\frac{1 }{h} \sum_{i = 1}^{n} {K \biggl( { \frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} }^{2} r_{W_{t_{i},\delta } }^{2} } } \Biggr]} \Biggr)^{{1 / 2}} \\ & {}\cdot \Biggl( {E \Biggl[ {\frac{1 }{h}\sum _{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr) \biggl( {\sup_{t_{i - 1} \le \varsigma ,\tau \le t_{i} } \biggl\vert { \int _{\varsigma }^{\tau }{ \alpha _{s}\,ds} + \int _{\varsigma }^{\tau }{ ( {\beta _{s} - \beta _{t_{i} } } )\,dW_{s} } } \biggr\vert } \biggr)^{2} } } \Biggr]} \Biggr)^{{1 / 2}}. \end{aligned}$$

From equation (5) we have \(E [ {r_{W_{t_{i},\delta } }^{2} } ] = \mu _{2} \delta \), and therefore

$$\begin{aligned} \frac{2 }{{\mu _{2} }} \Biggl( {E \Biggl[ {\frac{1 }{h}\sum _{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} }^{2} r_{W_{t_{i}, \delta } }^{2} } } \Biggr]} \Biggr)^{{1 / 2}} = O_{P} ( 1 ). \end{aligned}$$

By the discussion of G we have obtained

$$\begin{aligned} & \Biggl( {E \Biggl[ {\frac{1 }{h}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr) \biggl( {\sup_{t_{i - 1} \le \varsigma ,\tau \le t_{i} } \biggl\vert { \int _{\varsigma }^{\tau }{ \alpha _{s}\,ds} + \int _{\varsigma }^{\tau }{ ( {\beta _{s} - \beta _{t_{i} } } )\,dW_{s} } } \biggr\vert } \biggr)^{2} } } \Biggr]} \Biggr)^{{1 / 2}} \\ &\quad = O_{P} \bigl( {\delta ^{{1 / 2}} \vert {\log \delta } \vert ^{{1 / 2}} } \bigr). \end{aligned}$$

Hence

$$\begin{aligned} A - \frac{1 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} }^{2} r_{W_{t_{i},\delta } }^{2} } = o_{P} ( 1 ). \end{aligned}$$

So equation (11) is proved. For the rest of the proof, it suffices to prove that

$$\begin{aligned} \frac{1 }{{h\mu _{2} }}\sum _{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} }^{2} r_{W_{t_{i},\delta } }^{2} } \stackrel{{P }}{\longrightarrow }\beta _{t}^{2} . \end{aligned}$$

To this end, we first compute the expectation and, using \(E [ {r_{W_{t_{i},\delta } }^{2} } ] = \mu _{2} \delta \), decompose it as

$$\begin{aligned} &E \Biggl[ {\frac{1 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} }^{2} r_{W_{t_{i},\delta } }^{2} } } \Biggr] \\ &\quad = \frac{\delta }{h}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} }^{2} } \\ &\quad = \frac{1 }{h}\sum_{i = 1}^{n} {\beta _{t_{i} }^{2} \int _{t_{i - 1} }^{t_{i} } { \biggl( {K \biggl( { \frac{{t_{i} - t} }{h}} \biggr) - K \biggl( {\frac{{u - t} }{h}} \biggr)} \biggr)\,du} } \\ &\qquad {} + \frac{1 }{h}\sum_{i = 1}^{n} \int _{t_{i - 1} }^{t_{i} } K \biggl( {\frac{{u - t} }{h}} \biggr) \bigl( {\beta _{t_{i} }^{2} - \beta _{u}^{2} } \bigr)\,du \\ &\qquad {} + \frac{1 }{h}\sum_{i = 1}^{n} { \int _{t_{i - 1} }^{t_{i} } {K \biggl( {\frac{{u - t} }{h}} \biggr)\beta _{u}^{2}\,du} } \\ &\quad = H_{1} + H_{2} + H_{3} . \end{aligned}$$
(12)

For \(H_{1} \), using Taylor’s formula, we have

$$\begin{aligned} H_{1} & = \frac{1 }{h}\sum_{i = 1}^{n} {\beta _{t_{i} }^{2} \int _{t_{i - 1} }^{t_{i} } { \biggl( {K' \biggl( {\frac{{u - t} }{h}} \biggr) \cdot \frac{{t_{i} - u} }{h} + o \biggl( { \frac{{t_{i} - u} }{h}} \biggr)} \biggr)\,du} } \\ &= O_{P} \biggl( {\frac{\delta }{{h^{2} }} \int _{0}^{T} {K' \biggl( { \frac{{u - t} }{h}} \biggr)\,du} } \biggr) \\ &= O_{P} \biggl( {\frac{\delta }{h} \int _{ - 1}^{1} {K' ( s )\,ds} } \biggr). \end{aligned}$$

By condition T3 and equation (10) we have \(H_{1} = O_{P} ( {{\delta / h}} ) = o_{P} ( 1 )\). Using condition T2, we easily get

$$\begin{aligned} H_{2} = O_{P} \bigl( {\delta ^{{1 / 2}} \vert {\log \delta } \vert ^{{1 / 2}} } \bigr) = o_{P} ( 1 ). \end{aligned}$$

For the term \(H_{3} \), let

$$\begin{aligned} \frac{{u - t} }{h} = s. \end{aligned}$$

Then

$$\begin{aligned} H_{3} & = \int _{ - 1}^{1} {K ( s )\beta _{sh + t}^{2}\,ds} \\ & = \int _{ - 1}^{1} {K ( s ) \bigl( {\beta _{t}^{2} + O_{P} \bigl( {h^{{1 / 2}} \vert {\log h} \vert ^{{1 / 2}} } \bigr)} \bigr)\,ds} \\ &= \beta _{t}^{2} + O_{P} \bigl( {h^{{1 / 2}} \vert {\log h} \vert ^{{1 / 2}} } \bigr). \end{aligned}$$

Combining with the discussions of the terms \(H_{1}\), \(H_{2}\), and \(H_{3} \), we get

$$\begin{aligned} E \Biggl[ {\frac{1 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} }^{2} r_{W_{t_{i},\delta } }^{2} } } \Biggr]\stackrel{{P }}{\longrightarrow } \beta _{t}^{2}. \end{aligned}$$

Let

$$\begin{aligned} \rho _{i} = \frac{1 }{{h\mu _{2} }}K \biggl( {\frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} }^{2} \bigl( {r_{W_{t_{i},\delta } }^{2} - E \bigl[ {r_{W_{t_{i},\delta } }^{2} } \bigr]} \bigr). \end{aligned}$$

Then

$$\begin{aligned} E \bigl[ {\rho _{i}^{2} } \bigr] = \frac{{\delta ^{2} } }{{h^{2} }}K^{2} \biggl( {\frac{{t_{i} - t} }{h}} \biggr) \beta _{t_{i} }^{4} \cdot { \mathrm{M}}_{2}, \end{aligned}$$

where \({\mathrm{M}}_{2} = {{ ( {\mu _{4} - \mu _{2}^{2} } )} / {\mu _{2}^{2} }}\). By using a decomposition similar to equation (12) we can obtain

$$\begin{aligned} \sum_{i = 1}^{n} {E \bigl[ {\rho _{i}^{2} } \bigr]} &= \frac{{ \delta {\mathrm{M}}_{2} \beta _{t}^{4} } }{h} \int _{ - 1}^{1} {K^{2} ( s )\,ds} + O_{P} \biggl( {\frac{{\delta \vert {\log h} \vert ^{{1 / 2}} } }{{h^{{1 / 2}} }}} \biggr) \\ &= o_{P} ( 1 ). \end{aligned}$$

Therefore

$$\begin{aligned} \frac{1 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} }^{2} r_{W_{t_{i},\delta } }^{2} } \stackrel{{P }}{\longrightarrow }\beta _{t}^{2}. \end{aligned}$$

This completes the proof of Theorem 2. □

Theorem 3

Suppose that the process \({X_{t} }\) satisfies the conditions of Theorem 2. If \(\delta \to 0\) so that

$$\begin{aligned} \frac{{h^{2} \vert {\log h} \vert } }{\delta } = o_{P} ( 1 ), \end{aligned}$$
(13)

then

$$\begin{aligned} \sqrt{\frac{h }{\delta }} \bigl( {\hat{\beta }_{t}^{2} - \beta _{t}^{2} } \bigr)\stackrel{{d }}{\longrightarrow }N \biggl( {0,{\mathrm{M}}_{2} \beta _{t}^{4} \int _{ - 1}^{1} {K^{2} ( s )\,ds} } \biggr), \end{aligned}$$
(14)

where \({\mathrm{M}}_{2} = {{ ( {\mu _{4} - \mu _{2}^{2} } )} / {\mu _{2}^{2} }}\), and the symbol “\(\stackrel{d }{\longrightarrow }\)” denotes the convergence in distribution.

Proof

Decompose \(( {\hat{\beta }_{t}^{2} - \beta _{t}^{2} } )\) as

$$\begin{aligned} & \Biggl( {\hat{\beta }_{t}^{2} - \frac{1 }{{h\mu _{2} }} \sum_{i = 1}^{n} {K \biggl( { \frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} }^{2} r_{W_{t_{i},\delta } }^{2} } } \Biggr) \\ &\qquad {}+ \frac{1 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} }^{2} \bigl( {r_{W_{t_{i},\delta } }^{2} - E \bigl[ {r_{W_{t_{i},\delta } }^{2} } \bigr]} \bigr)} \\ &\qquad {}+ \Biggl( {\frac{1 }{{h\mu _{2} }}\sum_{i = 1}^{n} {K \biggl( {\frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} }^{2} E \bigl[ {r_{W_{t_{i}, \delta } }^{2} } \bigr]} - \beta _{t}^{2} } \Biggr) \\ &\quad = L_{1} + L_{2} + L_{3}. \end{aligned}$$

By the proof of Theorem 2 we know that

$$\begin{aligned} L_{1} = L_{3} = O_{P} \bigl( {h^{{1 / 2}} \vert {\log h} \vert ^{{1 / 2}} } \bigr), \end{aligned}$$

and therefore we obtain from equation (13) that

$$\begin{aligned} \sqrt{\frac{h }{\delta }} L_{1} = \sqrt{\frac{h }{\delta }} L_{3} = O_{P} \biggl( {\frac{{h \vert {\log h} \vert ^{{1 / 2}} } }{{\delta ^{{1 / 2}} }}} \biggr) = o_{P} ( 1 ). \end{aligned}$$

Now we discuss the term \(L_{2} \). Similarly, from the proof of Theorem 2 we get that

$$\begin{aligned} \sqrt{\frac{h }{\delta }} L_{2} = \sqrt{\frac{h }{\delta }} \sum_{i = 1}^{n} {\rho _{i} } \end{aligned}$$

and

$$\begin{aligned} \frac{h }{\delta }\sum_{i = 1}^{n} {E \bigl[ {\rho _{i}^{2} } \bigr]} = {\mathrm{M}}_{2} \beta _{t}^{4} \int _{ - 1}^{1} {K^{2} ( s )\,ds} + O_{P} \bigl( {h^{{1 / 2}} \vert {\log h} \vert ^{{1 / 2}} } \bigr). \end{aligned}$$

As long as \(( {{{h^{{3 / 2}} } / {\delta ^{{3 / 2}} }}} )\sum_{i = 1}^{n} {E [ {\rho _{i}^{3} } ]} \to 0 \), we can further conclude that \(\sqrt{{h / \delta }} \rho _{i} \) (\({i = 1,2, \ldots ,n} \)) satisfies Lyapunov’s condition:

$$\begin{aligned} &\frac{{h^{{3 / 2}} } }{{\delta ^{{3 / 2}} }}\sum_{i = 1}^{n} {E \bigl[ {\rho _{i}^{3} } \bigr]} \\ &\quad = \frac{{h^{{3 / 2}} } }{{\delta ^{{3 / 2}} }} \cdot \frac{1 }{{h^{3} \mu _{2}^{3} }}\sum _{i = 1}^{n} {K^{3} \biggl( { \frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} }^{6} \cdot E \bigl[ { \bigl( {r_{W_{t_{i}, \delta } }^{2} - E \bigl[ {r_{W_{t_{i},\delta } }^{2} } \bigr]} \bigr)^{3} } \bigr]} \\ &\quad = \frac{{ ( {\frac{{\mu _{6} } }{{\mu _{2}^{3} }} - \frac{{3\mu _{4} } }{{\mu _{2}^{2} }} + 2} )\delta ^{3} } }{{h^{{3 / 2}} \delta ^{{3 / 2}} }}\sum_{i = 1}^{n} {K^{3} \biggl( {\frac{{t_{i} - t} }{h}} \biggr)\beta _{t_{i} }^{6} } \\ &\quad = O_{P} \biggl( {\frac{{\delta ^{{1 / 2}} } }{{h^{{1 / 2}} }}} \biggr) \to 0. \end{aligned}$$

By Lyapunov’s central limit theorem we obtain

$$\begin{aligned} \sqrt{\frac{h }{\delta }} \sum_{i = 1}^{n} {\rho _{i} } \stackrel{{d }}{\longrightarrow }N \biggl( {0,{ \mathrm{M}}_{2} \beta _{t}^{4} \int _{ - 1}^{1} {K^{2} ( s )\,ds} } \biggr). \end{aligned}$$

Further,

$$\begin{aligned} \sqrt{\frac{h }{\delta }} \bigl( {\hat{\beta }_{t}^{2} - \beta _{t}^{2} } \bigr)\stackrel{{d }}{\longrightarrow }N \biggl( {0,\mathrm{M}_{2} \beta _{t}^{4} \int _{ - 1}^{1} {K^{2} ( s )\,ds} } \biggr). \end{aligned}$$

 □
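
Although interval estimation is not discussed in the paper, the limit law (14) suggests a pointwise plug-in confidence interval for \(\beta _{t}^{2}\). The sketch below is only an illustration under additional assumptions: \(\beta _{t}^{4}\) is replaced by the square of the estimate itself, the kernel is taken to be the Epanechnikov kernel (for which \(\int K^{2} = 0.6\)), and \(\mu _{2}\), \(\mu _{4}\) are the commonly quoted Parkinson range moments.

```python
import numpy as np

MU2 = 4.0 * np.log(2.0)          # E[r^2] of the unit-interval Wiener range (background value)
MU4 = 9.0 * 1.2020569            # E[r^4] = 9*zeta(3) (background value)
M2 = (MU4 - MU2**2) / MU2**2     # the constant M_2 in equation (14), ≈ 0.41
K2_INT = 0.6                     # integral of K^2 for the Epanechnikov kernel

def spot_vol_ci(beta2_hat, delta, h, z=1.96):
    """Plug-in interval  beta2_hat ± z * sqrt( M_2 * beta2_hat**2 * ∫K² * delta / h ),
    obtained by inverting the normal limit in equation (14)."""
    se = np.sqrt(M2 * beta2_hat**2 * K2_INT * delta / h)
    return beta2_hat - z * se, beta2_hat + z * se

print(spot_vol_ci(beta2_hat=0.04, delta=1 / 23_400, h=0.05))   # e.g. 1-second data, h = 0.05
```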

Remark 6

In spot volatility estimation for continuous diffusion models, Fan and Wang [1] selected the bandwidth \(h \sim {{\delta ^{{1 / 2}} } / {\log ( {{1 / \delta }} )}}\). In this case, equation (13) is also satisfied. Choosing \(h = O ( \delta ^{{1 / 2}} /\log ( {{1 / \delta }} ) )\), we can obtain a convergence rate close to the optimal rate \(n^{ - {1 / 4}} \), which is in keeping with the rates in Mykland and Zhang [21] and Foster and Nelson [22]. Kristensen [3] chose a variable bandwidth \(h = O ( {\delta ^{{1 / { ( {2\gamma + 1} )}}} } )\) by setting \(0 < \gamma \le 1\) (for models driven by a Wiener process, \(0 < \gamma < {1 / 2}\)) and obtained the optimal attainable convergence rate \(O_{P} ( {\delta ^{{\gamma / { ( {2\gamma + 1} )}}} } )\).
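
As a brief check, the bandwidth \(h = \delta ^{1/2} /\log ( 1/\delta )\) does satisfy equation (13): for all sufficiently small δ,

$$ \frac{{h^{2} \vert {\log h} \vert }}{\delta } = \frac{{ \vert {\log h} \vert }}{{ ( {\log ( {1/\delta } )} )^{2} }} \le \frac{{\log ( {1/\delta } )}}{{ ( {\log ( {1/\delta } )} )^{2} }} = \frac{1}{{\log ( {1/\delta } )}} \to 0, $$

where the inequality uses \(\vert \log h \vert = \frac{1}{2}\log ( 1/\delta ) + \log \log ( 1/\delta ) \le \log ( 1/\delta )\) for small δ.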

Remark 7

It is worth mentioning that the constant \({\mathrm{M}}_{2} \) in equation (14) is approximately equal to 0.4, whereas the corresponding constant in equation (2) is 2 (the same constant appears in Theorem 1 of Fan and Wang [1]).
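
To see where the value 0.4 comes from, one can use the commonly quoted Parkinson moments of the range of a standard Wiener process on \([0,1]\), \(\mu _{2} = 4\log 2 \approx 2.773\) and \(\mu _{4} = 9\zeta (3) \approx 10.819\) (quoted here as background values, not restated in the paper):

$$ {\mathrm{M}}_{2} = \frac{{\mu _{4} - \mu _{2}^{2} }}{{\mu _{2}^{2} }} \approx \frac{{10.819 - 7.688}}{{7.688}} \approx 0.41, $$

roughly one fifth of the constant 2 appearing in equation (2), which reflects the efficiency gain of the range-based estimator.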

4 Conclusions

Combining the range-based method with the threshold technique, we propose a nonparametric spot volatility estimation procedure for time-dependent diffusion models with jumps. Using the range instead of the return of the state variables, we employ all of the data and improve the estimation precision. Meanwhile, by restricting the squared range to be no greater than a specific threshold, the estimator is made robust to jumps.

References

  1. Fan, J., Wang, Y.: Spot volatility estimation for high-frequency data. Stat. Interface 1, 279–288 (2008)

  2. Zu, Y., Boswijk, H.P.: Estimating spot volatility with high-frequency financial data. J. Econom. 181, 117–135 (2014)

  3. Kristensen, D.: Nonparametric filtering of the realized spot volatility: a kernel-based approach. Econom. Theory 26, 60–93 (2010)

  4. Lee, S.S., Mykland, P.A.: Jumps in financial markets: a new nonparametric test and jump dynamics. Rev. Financ. Stud. 21, 2535–2563 (2008)

  5. Aït-Sahalia, Y., Jacod, J.: Testing for jumps in a discretely observed process. Ann. Stat. 37, 184–222 (2009)

  6. Zhu, H., Huang, Y., Zhou, J., Yang, X., Deng, C.: Optimal proportional reinsurance and investment problem with constraints on risk control in a general jump-diffusion financial market. Anziam J. 57, 352–368 (2016)

  7. Matenda, F.R., Chikodza, E.A.: Stock model with jumps for Itô–Liu financial markets. Soft Comput. 2, 1–16 (2018)

  8. Mancini, C.: Non-parametric threshold estimation for models with stochastic diffusion coefficients and jumps. Scand. J. Stat. 36, 270–296 (2009)

  9. Mancini, C., Renò, R.: Threshold estimation of Markov models with jumps and interest rate modeling. J. Econom. 160, 77–92 (2011)

  10. Wang, H., Zhou, L.: Bandwidth selection of nonparametric threshold estimator in jump-diffusion models. Comput. Math. Appl. 73, 211–219 (2016)

  11. Song, Y., Wang, H.: Central limit theorems of local polynomial threshold estimator for diffusion processes with jumps. Scand. J. Stat. 45, 644–681 (2018)

  12. Sun, H., Yu, B.: Volatility asymmetry in functional threshold GARCH model. J. Time Ser. Anal. 41, 95–109 (2019)

  13. Christensen, K., Podolskij, M.: Realized range-based estimation of integrated variance. J. Econom. 141, 323–349 (2007)

  14. Christensen, K., Podolskij, M.: Asymptotic theory of range-based multipower variation. J. Financ. Econom. 10, 417–456 (2012)

  15. Liu, J., Wei, Y., Ma, F., Wahab, M.I.M.: Forecasting the realized range-based volatility using dynamic model averaging approach. Econ. Model. 61, 12–26 (2017)

  16. Vortelinos, D.I.: Optimally sampled realized range-based volatility estimators. Res. Int. Bus. Finance 30, 34–50 (2014)

  17. Xu, W.J., Wang, J.Q., Ma, F., Lu, X.J.: Forecast the realized range-based volatility: the role of investor sentiment and regime switching. Physica A 527, 1–7 (2019)

  18. Parkinson, M.: The extreme value method for estimating the variance of the rate of return. J. Bus. 53, 61–65 (1980)

  19. Yu, C., Fang, Y., Li, Z., Zhang, B., Zhao, X.: Non-parametric estimation of high-frequency spot volatility for Brownian semimartingale with jumps. J. Time Ser. Anal. 35, 572–591 (2014)

  20. Cai, J., Chen, P., Mei, X., Ji, X.: Realized range-based threshold estimation for jump-diffusion models. Int. J. Appl. Math. 45, 293–299 (2015)

  21. Mykland, P.A., Zhang, L.: Inference for volatility-type objects and implications for hedging. Stat. Interface 1, 255–278 (2008)

  22. Foster, D.P., Nelson, D.B.: Continuous record asymptotics for rolling sample variance estimators. Econometrica 64, 139–174 (1996)

Acknowledgements

The authors greatly appreciate the anonymous referees’ comments and suggestions, which have helped us improve the paper a lot.

Availability of data and materials

Not applicable.

Funding

This work was jointly supported by Start-up Fund for Scientific Research of High-level Talents of Jinling Institute of Technology (Grant No. jit-b-202028), the National Natural Science Foundation of China (Grant No. 11271189, 61773217, 61374080), the Natural Science Foundation of Jiangsu Province (Grant No. BK20161552), the Scientific Research Fund of Hunan Provincial Education Department (Grant No. 18A013), the Construct Program of the Key Discipline in Hunan Province and Advanced Study, Training of Professional Leaders of Higher Vocational Colleges in Jiangsu (Grant No. 2017GRFX019).

Author information

Contributions

All authors read and approved the manuscript.

Corresponding author

Correspondence to Jingwei Cai.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Cai, J., Zhu, Q. & Chen, P. Nonparametric threshold estimation of spot volatility based on high-frequency data for time-dependent diffusion models with jumps. Adv Differ Equ 2020, 378 (2020). https://doi.org/10.1186/s13662-020-02832-5
