A study of dividend yield model under stochastic earning yield environment in Stock Exchange of Thailand

Abstract

A compound Ornstein–Uhlenbeck process is applied to build a model for the dividend yield of the Stock Exchange of Thailand index in an environment where the earning yield is stochastic. Parameters are estimated by the least-squares technique, and sample paths are generated with the Euler–Maruyama method. Numerical simulation is used to assess the effectiveness of the models, comparing our newly proposed model with the previous models against actual dividend yield data. The results show that our model performs best among the three models being compared.

Introduction

In option pricing, the dividend yield is among the most crucial factors in the Black–Scholes–Merton framework, one of the foundational models applied in financial analysis. Since the dividend yield is reported daily in the newspapers, many traders consider its behavior before investing in the stock market. Many financial researchers have proposed option pricing models in which the dividend yield exhibits stochastic behavior, as seen in [1, 2], and [3].

In [4], it was proposed that the stock price process follows the stochastic differential equation

$$ dS(t)=\bigl(\mu _{S}-\gamma (t)\bigr)S(t)\,dt+\sigma _{S}S(t)\,dW(t), $$
(1)

where the parameter \(\mu _{S}\) is constant and \(\sigma _{S}>0\), \(W(t)\) is a Wiener process, and the dividend yield \(\gamma (t)\) follows an Ornstein–Uhlenbeck process.

The most essential element of this model is the mean-reverting dividend yield; since mean reversion is a natural characteristic of most financial factors, investors find the model financially interesting. When dividend yields are high, the economy tends to fall into a recession, and the yields subsequently drop back to their equilibrium value. Conversely, when dividend yields are below expectation, they rise back toward the equilibrium value.

Another factor worth analyzing is the P/E ratio, which is also commonly available financial data. In 2012, Phewchean incorporated the P/E ratio into an extended option pricing model and reported improved performance [5].

One financial assumption is that the dividend yield can depend on the earning yield \(\vartheta (t)\), that is, the E/P ratio, which itself can be presumed to follow an Ornstein–Uhlenbeck process. The dividend yield may then depend on more than one stochastic factor and therefore does not itself qualify as an Ornstein–Uhlenbeck process. Hence, a system of stochastic differential equations must be considered.

Compound Ornstein–Uhlenbeck processes

In 1930, G. E. Uhlenbeck and L. S. Ornstein introduced a process that solves the following stochastic differential equation:

$$ dX(t)=\theta \bigl(\mu -X(t)\bigr)\,dt+\sigma \,dW(t), $$
(2)

where μ, \(\theta \neq 0\), and \(\sigma >0\) are constant parameters and \(W(t)\) is a Wiener process; μ, θ, and σ are the long-term mean, the speed of mean reversion, and the volatility, respectively.
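As a concrete illustration, the scalar process (2) can be discretized with a simple Euler–Maruyama step; the sketch below is in Python (the language used for the simulations later in the paper), and all parameter values are illustrative only:

```python
import numpy as np

def simulate_ou(mu, theta, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama discretization of dX = theta*(mu - X) dt + sigma dW."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for i in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt))  # Wiener increment ~ N(0, dt)
        x[i + 1] = x[i] + theta * (mu - x[i]) * dt + sigma * dW
    return x

rng = np.random.default_rng(0)
# Start above the long-term mean mu = 3.0; with theta > 0 the path reverts toward it.
path = simulate_ou(mu=3.0, theta=5.0, sigma=0.2, x0=6.0, dt=1 / 252, n_steps=5000, rng=rng)
```

With θ > 0 the simulated path fluctuates around μ once the initial transient has decayed, which is the mean-reverting behavior discussed below.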

Applying Itô's formula, one can derive the explicit solution and show that the process is mean-reverting when θ is positive [5].

To develop the process for multivariate cases, the following steps are performed. Let \(X_{1},\ldots,X_{n}\) be stochastic processes given by the following system:

$$\begin{aligned} &dX_{1}(t)=\theta _{11} \bigl(\mu _{11}-X_{1}(t) \bigr)\,dt+\cdots+\theta _{1n} \bigl(\mu _{1n}-X_{n}(t) \bigr)\,dt+ \sum_{k=1}^{m} \sigma _{1k}\,dW_{k}(t), \\ &dX_{2}(t)=\theta _{21} \bigl(\mu _{21}-X_{1}(t) \bigr)\,dt+\cdots+\theta _{2n} \bigl(\mu _{2n}-X_{n}(t) \bigr)\,dt+ \sum_{k=1}^{m} \sigma _{2k}\,dW_{k}(t), \\ &\vdots \\ &dX_{n}(t)=\theta _{n1} \bigl(\mu _{n1}-X_{1}(t) \bigr)\,dt+\cdots+\theta _{nn} \bigl(\mu _{nn}-X_{n}(t) \bigr)\,dt+ \sum_{k=1}^{m} \sigma _{nk}\,dW_{k}(t), \end{aligned}$$

where \(\sigma _{ik}\) is non-zero, and each \(W_{k}(t)\) is an independent Wiener process. An individual \(X_{i}\) is a compound Ornstein–Uhlenbeck process, and together they form a system of compound Ornstein–Uhlenbeck processes.

If the matrix \(\boldsymbol{\theta }= [\theta _{ij} ]_{n\times n}\) in the previous explanation is non-singular, then there is an n-dimensional vector \(\boldsymbol{\mu }=[\mu _{1}\ \cdots\ \mu _{n}]^{T}\) such that

$$ \begin{bmatrix} \theta _{11} & \theta _{12} &\cdots & \theta _{1n} \\ \theta _{21} & \theta _{22} &\cdots & \theta _{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \theta _{n1} &\theta _{n2} & \cdots & \theta _{nn} \end{bmatrix} \begin{bmatrix} \mu _{1} \\ \mu _{2} \\ \vdots \\ \mu _{n} \end{bmatrix} = \begin{bmatrix} \theta _{11}\mu _{11}+\theta _{12}\mu _{12}+\cdots+\theta _{1n}\mu _{1n} \\ \theta _{21}\mu _{21}+\theta _{22}\mu _{22}+\cdots+\theta _{2n}\mu _{2n} \\ \vdots \\ \theta _{n1}\mu _{n1}+\theta _{n2}\mu _{n2}+\cdots+\theta _{nn}\mu _{nn} \end{bmatrix} . $$

This μ yields

$$\begin{aligned} &dX_{1}(t)=\theta _{11} \bigl(\mu _{1}-X_{1}(t) \bigr)\,dt+\cdots+\theta _{1n} \bigl(\mu _{n}-X_{n}(t) \bigr)\,dt+ \sum_{k=1}^{m} \sigma _{1k}\,dW_{k}(t), \\ &dX_{2}(t)=\theta _{21} \bigl(\mu _{1}-X_{1}(t) \bigr)\,dt+\cdots+\theta _{2n} \bigl(\mu _{n}-X_{n}(t) \bigr)\,dt+ \sum_{k=1}^{m} \sigma _{2k}\,dW_{k}(t), \\ &\vdots \\ &dX_{n}(t)=\theta _{n1} \bigl(\mu _{1}-X_{1}(t) \bigr)\,dt+\cdots+\theta _{nn} \bigl(\mu _{n}-X_{n}(t) \bigr)\,dt+ \sum_{k=1}^{m} \sigma _{nk}\,dW_{k}(t). \end{aligned}$$

Equivalently,

$$ d\boldsymbol{X}(t)=\boldsymbol{\theta } \bigl(\boldsymbol{\mu }-\boldsymbol{X}(t) \bigr)\,dt+ \boldsymbol{\sigma } \,d \boldsymbol{W}(t), $$
(3)

where \(\boldsymbol{X}(t)=[X_{1}(t)\ \cdots\ X_{n}(t)]^{T}\), \(\boldsymbol{\sigma }=[ \sigma _{ik}]_{n\times m}\) and \(\boldsymbol{W}(t)=[W_{1}(t)\ \cdots\ W_{m}(t)]^{T}\).

This shows that if the matrix θ of the system of compound Ornstein–Uhlenbeck processes is non-singular, there is a vector process corresponding to the system. The vector process in (3) is called the vector Ornstein–Uhlenbeck process.

When \(\boldsymbol{X}(t)\) satisfies (3) with the initial condition \(\boldsymbol{X}(0)=\boldsymbol{x}_{0}\), the solution of the vector Ornstein–Uhlenbeck process is

$$ \boldsymbol{X}(t)=\bigl(\boldsymbol{I}-e^{-\boldsymbol{\theta } t}\bigr)\boldsymbol{\mu }+e^{-\boldsymbol{\theta } t} \boldsymbol{x}_{0}+ \int _{0}^{t} e^{-\boldsymbol{\theta } (t-s)}\boldsymbol{\sigma } \,d \boldsymbol{W}(s). $$
(4)

This is derived from the general Itô formula, Theorem 4.2.1 in [6], with \(f(\boldsymbol{X},t)=e^{\boldsymbol{\theta } t}\boldsymbol{X}\).

Furthermore, by applying the Itô isometry for the stochastic integral, Corollary 3.1.7 in [6], we obtain

$$ E \bigl[\boldsymbol{X}(t) \bigr]=e^{-\boldsymbol{\theta } t}\boldsymbol{x}_{0}+ \bigl(\boldsymbol{I}-e^{- \boldsymbol{\theta } t}\bigr)\boldsymbol{\mu } $$
(5)

and

$$ \operatorname{Cov} \bigl[\boldsymbol{X}(s),\boldsymbol{X}(t) \bigr]= \int _{0}^{\min (s,t)} e^{-\boldsymbol{\theta } (s-u)}\boldsymbol{\sigma }\boldsymbol{ \sigma } ^{T}e^{-\boldsymbol{\theta }^{T} (t-u)}\,du. $$
(6)

Moreover, the covariance matrix is

$$ \operatorname{Var} \bigl[\boldsymbol{X}(t) \bigr]= \int _{0}^{t} e^{-\boldsymbol{\theta } (t-s)}\boldsymbol{\sigma }\boldsymbol{ \sigma }^{T}e^{- \boldsymbol{\theta }^{T} (t-s)}\,ds. $$
(7)

Theorem 1

The vector Ornstein–Uhlenbeck process \(\boldsymbol{X}(t)\) fulfilling (3) is mean-reverting if all eigenvalues of θ are positive.

Proof

Because \(e^{-\boldsymbol{\theta } t}\) tends to the zero matrix as t tends to infinity when all eigenvalues of θ are positive, it follows from (5) that, under this condition, \(E [\boldsymbol{X}(t)]\) converges to μ.

The case of \(\operatorname{Var} [\boldsymbol{X}(t)]\) is different: unlike \(E [\boldsymbol{X}(t)]\), we cannot take t in (7) to infinity directly. We use the identity \(\operatorname{vec}(\boldsymbol{ABC})=(\boldsymbol{C}^{T} \otimes \boldsymbol{A})\operatorname{vec}(\boldsymbol{B})\), where ⊗ denotes the Kronecker product as expressed in [7] and \(\operatorname{vec}(\boldsymbol{A})\) denotes the column vector formed by stacking the columns of A on top of one another from left to right. Then

$$ \operatorname{vec}\bigl(\operatorname{Var} \bigl[\boldsymbol{X}(t)\bigr]\bigr)= \int _{0}^{t} e^{\boldsymbol{\theta }(s-t)}\otimes e ^{\boldsymbol{\theta }(s-t)}\,ds~\operatorname{vec}\bigl(\boldsymbol{\boldsymbol{\sigma }\boldsymbol{\sigma }^{T}} \bigr). $$
(8)

Here, we can apply \(e^{\boldsymbol{A}\oplus \boldsymbol{B}}=e^{\boldsymbol{A}}\otimes e^{\boldsymbol{B}}\), where ⊕ denotes the Kronecker sum. The result is

$$\begin{aligned} \operatorname{vec}\bigl(\operatorname{Var} \bigl[\boldsymbol{X}(t)\bigr]\bigr) & = \int _{0}^{t} e^{\boldsymbol{\theta }(s-t)}\otimes e^{\boldsymbol{\theta }(s-t)}\,ds~\operatorname{vec}\bigl(\boldsymbol{\boldsymbol{\sigma }\boldsymbol{\sigma }^{T}} \bigr) \\ & = \int _{0}^{t} e^{(\boldsymbol{\theta }\oplus \boldsymbol{\theta })(s-t)}\,ds~\operatorname{vec}\bigl( \boldsymbol{\boldsymbol{\sigma }\boldsymbol{\sigma }^{T}}\bigr) \\ & = (\boldsymbol{\theta }\oplus \boldsymbol{\theta })^{-1} \bigl(\boldsymbol{I}- e^{-( \boldsymbol{\theta }\oplus \boldsymbol{\theta })t} \bigr)~\operatorname{vec}\bigl( \boldsymbol{\boldsymbol{\sigma }\boldsymbol{\sigma }^{T}} \bigr). \end{aligned}$$
(9)

Finally, because every eigenvalue of \(\boldsymbol{\theta }\oplus \boldsymbol{\theta }\) is positive, the covariance matrix converges to a constant matrix Σ such that \(\operatorname{vec}(\boldsymbol{\varSigma })= (\boldsymbol{\theta }\oplus \boldsymbol{\theta })^{-1}\operatorname{vec}(\boldsymbol{\sigma }\boldsymbol{\sigma }^{T})\). □
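The limiting relation \(\operatorname{vec}(\boldsymbol{\varSigma })=(\boldsymbol{\theta }\oplus \boldsymbol{\theta })^{-1}\operatorname{vec}(\boldsymbol{\sigma }\boldsymbol{\sigma }^{T})\) can be checked numerically. In the sketch below the matrices θ and σ are illustrative (θ is chosen with positive eigenvalues); the resulting Σ is equivalently the solution of the Lyapunov equation \(\boldsymbol{\theta }\boldsymbol{\varSigma }+\boldsymbol{\varSigma }\boldsymbol{\theta }^{T}=\boldsymbol{\sigma }\boldsymbol{\sigma }^{T}\):

```python
import numpy as np

# Illustrative 2x2 system; the eigenvalues of theta, (5 ± sqrt(3))/2, are both positive.
theta = np.array([[3.0, 1.0],
                  [0.5, 2.0]])
sigma = np.array([[0.3, 0.0],
                  [0.1, 0.2]])

n = theta.shape[0]
I = np.eye(n)
kron_sum = np.kron(theta, I) + np.kron(I, theta)   # theta (+) theta, the Kronecker sum

# vec() stacks columns, so reshape with Fortran (column-major) order.
vec_S = np.linalg.solve(kron_sum, (sigma @ sigma.T).reshape(-1, order="F"))
Sigma = vec_S.reshape(n, n, order="F")             # stationary covariance matrix
```

By the vec identity used in the proof, this Σ satisfies \(\boldsymbol{\theta }\boldsymbol{\varSigma }+\boldsymbol{\varSigma }\boldsymbol{\theta }^{T}=\boldsymbol{\sigma }\boldsymbol{\sigma }^{T}\), and it is symmetric, as a covariance matrix must be.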

Corollary 1

The 2-dimensional vector Ornstein–Uhlenbeck process \(\boldsymbol{X}(t)\) fulfilling (3) is mean-reverting if one of the following conditions is met.

  1. \((\theta _{11}-\theta _{22})^{2}+4\theta _{12}\theta _{21}\geq 0\) and \(\theta _{11}\theta _{22}-\theta _{12}\theta _{21}<0\), or

  2. \((\theta _{11}-\theta _{22})^{2}+4\theta _{12}\theta _{21}<0\) and \(\theta _{11}+\theta _{22}>0\), or

  3. \(\theta _{11}\theta _{22}-\theta _{12}\theta _{21}<0\) and \(\theta _{11}+\theta _{22}>0\).

Models and parameter estimation

In this study, three models are considered to emulate dividend yield values. First, we presume the stochastic dividend yield follows the Ornstein–Uhlenbeck process (SDY model), as shown in [4]. Moreover, as laid out in [5], we assume that the stochastic dividend yield follows a compound Ornstein–Uhlenbeck process driven by the earning yield under a stochastic environment (SEY model). However, not only may the dividend yield depend on the earning yield, but the earning yield may also reciprocally depend on the dividend yield. Therefore, we introduce a new model in which both yields follow compound Ornstein–Uhlenbeck processes depending on each other (MSEY model). These models are defined by the following stochastic differential equations.

Model for SDY:

$$ d\gamma (t)=\theta _{\gamma }\bigl(\mu _{\gamma }-\gamma (t)\bigr)\,dt+\sigma _{ \gamma }\,dW(t). $$
(10)

Model for SEY:

$$ \begin{aligned}& d\gamma (t) = \theta _{\gamma \vartheta }\bigl(\mu _{\vartheta }-\vartheta (t)\bigr)\,dt +\theta _{\gamma }\bigl(\mu _{\gamma }-\gamma (t)\bigr)\,dt+\sigma _{\gamma }\,dW _{1}(t), \\ &d\vartheta (t) =\theta _{\vartheta }\bigl(\mu _{\vartheta }-\vartheta (t)\bigr)\,dt+ \sigma _{\vartheta }\,dW_{2}(t), \end{aligned} $$
(11)

where \(W_{1}(t)\) and \(W_{2}(t)\) are independent Wiener processes.

Model for MSEY:

$$ \begin{aligned}& d\gamma (t) = \theta _{\gamma \vartheta }\bigl(\mu _{\vartheta }-\vartheta (t)\bigr)\,dt +\theta _{\gamma }\bigl(\mu _{\gamma }-\gamma (t)\bigr)\,dt+\sigma _{\gamma }\,dW _{1}(t), \\ &d\vartheta (t) =\theta _{\vartheta \gamma }\bigl(\mu _{\gamma }-\gamma (t) \bigr)\,dt + \theta _{\vartheta }\bigl(\mu _{\vartheta }-\vartheta (t) \bigr)\,dt +\sigma _{\vartheta }\,dW_{2}(t), \end{aligned} $$
(12)

where \(W_{1}(t)\) and \(W_{2}(t)\) are independent Wiener processes.

By using the Euler–Maruyama approach, the Markov chains corresponding to each model are obtained.

Model for SDY:

$$ \gamma _{t+\Delta t}=\gamma _{t}+ \theta _{\gamma }(\mu _{\gamma }-\gamma _{t})\Delta t+\sigma _{\gamma }\Delta W_{t}. $$
(13)

Model for SEY:

$$ \begin{aligned}& \gamma _{t+\Delta t}= \gamma _{t}+\theta _{\gamma \vartheta }( \mu _{\vartheta }-\vartheta _{t})\Delta t +\theta _{\gamma }(\mu _{\gamma }- \gamma _{t})\Delta t+\sigma _{\gamma }\Delta W_{1t}, \\ &\vartheta _{t+\Delta t}= \vartheta _{t}+\theta _{\vartheta }( \mu _{\vartheta }-\vartheta _{t})\Delta t+\sigma _{\vartheta }\Delta W_{2t}. \end{aligned} $$
(14)

Model for MSEY:

$$ \begin{aligned}& \gamma _{t+\Delta t}= \gamma _{t}+\theta _{\gamma \vartheta }( \mu _{\vartheta }-\vartheta _{t})\Delta t +\theta _{\gamma }(\mu _{\gamma }- \gamma _{t})\Delta t+\sigma _{\gamma }\Delta W_{1t}, \\ &\vartheta _{t+\Delta t}= \vartheta _{t}+\theta _{\vartheta }( \mu _{\vartheta }-\vartheta _{t})\Delta t+\theta _{\vartheta \gamma }(\mu _{ \gamma }-\gamma _{t})\Delta t +\sigma _{\vartheta }\Delta W_{2t}. \end{aligned} $$
(15)
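The Euler–Maruyama scheme (15) for the MSEY model can be sketched in Python as follows; the parameter values below are illustrative only, not the estimates reported later in Table 1:

```python
import numpy as np

def simulate_msey(p, g0, e0, dt, n_steps, rng):
    """Euler-Maruyama scheme for the MSEY model; g = dividend yield, e = earning yield."""
    g = np.empty(n_steps + 1)
    e = np.empty(n_steps + 1)
    g[0], e[0] = g0, e0
    for i in range(n_steps):
        dW1 = rng.normal(0.0, np.sqrt(dt))  # independent Wiener increments
        dW2 = rng.normal(0.0, np.sqrt(dt))
        g[i + 1] = (g[i] + p["theta_ge"] * (p["mu_e"] - e[i]) * dt
                    + p["theta_g"] * (p["mu_g"] - g[i]) * dt + p["sigma_g"] * dW1)
        e[i + 1] = (e[i] + p["theta_e"] * (p["mu_e"] - e[i]) * dt
                    + p["theta_eg"] * (p["mu_g"] - g[i]) * dt + p["sigma_e"] * dW2)
    return g, e

rng = np.random.default_rng(1)
params = dict(theta_g=4.0, theta_ge=0.5, mu_g=3.0, sigma_g=0.3,
              theta_e=3.0, theta_eg=0.4, mu_e=7.0, sigma_e=0.5)
# Monthly grid (dt = 1/12) with 120 observations, matching the simulation setup below.
g, e = simulate_msey(params, g0=3.5, e0=6.0, dt=1 / 12, n_steps=120, rng=rng)
```

Setting `theta_ge = 0` recovers the SEY scheme (14), and setting `theta_ge = theta_eg = 0` decouples the system so that γ follows the SDY scheme (13).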

To apply the proposed model, we use the least-squares technique to estimate each parameter. The derivation for the MSEY model is shown below. First, we rewrite equation (15):

$$ \begin{aligned} &\gamma _{t+\Delta t}= a\gamma _{t}+b \vartheta _{t}+c+ \epsilon _{\gamma }, \\ &\vartheta _{t+\Delta t}= d\gamma _{t}+e\vartheta _{t}+f+ \epsilon _{\vartheta }, \end{aligned} $$
(16)

where parameters \(a=1-\theta _{\gamma }\Delta t\), \(b=- \theta _{\gamma \vartheta }\Delta t\), \(c=(\theta _{\gamma }\mu _{\gamma }+\theta _{\gamma \vartheta }\mu _{\vartheta })\Delta t\), \(d=- \theta _{\vartheta \gamma }\Delta t\), \(e=1-\theta _{\vartheta }\Delta t\), \(f=(\theta _{\vartheta \gamma }\mu _{\gamma }+\theta _{\vartheta } \mu _{\vartheta })\Delta t\), \(\epsilon _{\gamma }=\sigma _{\gamma }\sqrt{ \Delta t}N(0,1)\), and \(\epsilon _{\vartheta }=\sigma _{\vartheta }\sqrt{ \Delta t}N(0,1)\).

To find estimates of \(a,b,c,d,e\), and f when the data \(\gamma _{t_{0}}, \gamma _{t_{1}},\ldots, \gamma _{t_{N}}\) and \(\vartheta _{t_{0}}, \vartheta _{t_{1}},\ldots, \vartheta _{t_{N}}\) are given, it suffices to solve the systems defined by \(\nabla E(a,b,c)=0\) and \(\nabla E(d,e,f)=0\), where E denotes the corresponding sum of squared residuals. The results are as follows.

$$ \begin{bmatrix} \sum_{t=0}^{N-1}\gamma _{t+1}\gamma _{t} \\ \sum_{t=0}^{N-1}\gamma _{t+1}\vartheta _{t} \\ \sum_{t=0}^{N-1}\gamma _{t+1} \end{bmatrix} = \begin{bmatrix} \sum_{t=0}^{N-1}\gamma _{t}^{2} & \sum_{t=0}^{N-1}\gamma _{t}\vartheta _{t} & \sum_{t=0}^{N-1}\gamma _{t} \\ \sum_{t=0}^{N-1}\gamma _{t}\vartheta _{t} & \sum_{t=0}^{N-1}\vartheta _{t}^{2} & \sum_{t=0}^{N-1}\vartheta _{t} \\ \sum_{t=0}^{N-1}\gamma _{t} & \sum_{t=0}^{N-1}\vartheta _{t} & N \end{bmatrix} \begin{bmatrix} a \\ b \\ c \end{bmatrix} , $$

and

$$ \begin{bmatrix} \sum_{t=0}^{N-1}\vartheta _{t+1}\gamma _{t} \\ \sum_{t=0}^{N-1}\vartheta _{t+1}\vartheta _{t} \\ \sum_{t=0}^{N-1}\vartheta _{t+1} \end{bmatrix} = \begin{bmatrix} \sum_{t=0}^{N-1}\gamma _{t}^{2} & \sum_{t=0}^{N-1}\gamma _{t}\vartheta _{t} & \sum_{t=0}^{N-1}\gamma _{t} \\ \sum_{t=0}^{N-1}\gamma _{t}\vartheta _{t} & \sum_{t=0}^{N-1}\vartheta _{t}^{2} & \sum_{t=0}^{N-1}\vartheta _{t} \\ \sum_{t=0}^{N-1}\gamma _{t} & \sum_{t=0}^{N-1}\vartheta _{t} & N \end{bmatrix} \begin{bmatrix} d \\ e \\ f \end{bmatrix} . $$

We can obtain these estimators \(\hat{\theta }_{\gamma },\hat{\theta } _{\vartheta },\hat{\theta }_{\gamma \vartheta },\hat{\theta }_{\vartheta \gamma },\hat{\mu }_{\gamma }\), and \(\hat{\mu }_{\vartheta }\) by substitution.

For \(\sigma _{\gamma }\) and \(\sigma _{\vartheta }\), we first compute the fitted values

$$ \tilde{\gamma }_{t+\Delta t}=a\tilde{\gamma }_{t}+b\tilde{ \vartheta } _{t}+c $$

and

$$ \tilde{\vartheta }_{t+\Delta t}=d\tilde{\gamma }_{t}+e \tilde{ \vartheta }_{t}+f. $$

Lastly, let

$$ \hat{\sigma }_{\gamma }=\sqrt{\frac{\operatorname{Var}(\gamma _{t}-\tilde{\gamma } _{t})}{\Delta t}} \quad\text{and}\quad \hat{ \sigma }_{\vartheta }=\sqrt{\frac{\operatorname{Var}( \vartheta _{t}-\tilde{\vartheta }_{t})}{\Delta t}}. $$
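The estimation steps above amount to a linear regression of \((\gamma _{t+\Delta t},\vartheta _{t+\Delta t})\) on \((\gamma _{t},\vartheta _{t},1)\), followed by back-substitution. The sketch below uses the regression form \(\gamma _{t+\Delta t}=a\gamma _{t}+b\vartheta _{t}+c+\epsilon _{\gamma }\) with the coefficient definitions given above; the function name and dictionary keys are our own choices:

```python
import numpy as np

def fit_msey(g, e, dt):
    """Least-squares fit of the discretized MSEY model.
    g, e: observed dividend-yield and earning-yield series; dt: time step."""
    X = np.column_stack([g[:-1], e[:-1], np.ones(len(g) - 1)])  # design matrix
    coef_g, *_ = np.linalg.lstsq(X, g[1:], rcond=None)          # a, b, c
    coef_e, *_ = np.linalg.lstsq(X, e[1:], rcond=None)          # d, e, f
    a, b, c = coef_g
    d, e_, f = coef_e
    # Invert a = 1 - theta_g*dt, b = -theta_ge*dt, etc.
    th_g, th_ge = (1.0 - a) / dt, -b / dt
    th_eg, th_e = -d / dt, (1.0 - e_) / dt
    # mu solves: th_g*mu_g + th_ge*mu_e = c/dt and th_eg*mu_g + th_e*mu_e = f/dt
    mu_g, mu_e = np.linalg.solve(np.array([[th_g, th_ge], [th_eg, th_e]]),
                                 np.array([c, f]) / dt)
    # Residual standard deviations, rescaled by sqrt(dt), estimate the volatilities.
    s_g = np.std(g[1:] - X @ coef_g) / np.sqrt(dt)
    s_e = np.std(e[1:] - X @ coef_e) / np.sqrt(dt)
    return dict(theta_g=th_g, theta_ge=th_ge, theta_e=th_e, theta_eg=th_eg,
                mu_g=mu_g, mu_e=mu_e, sigma_g=s_g, sigma_e=s_e)
```

On data simulated from the MSEY scheme with known parameters, `fit_msey` recovers those parameters up to sampling error, which is a useful sanity check before applying it to real data.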

Numerical simulation

Following the procedures above, we produce numerical simulation results. We collected monthly dividend yield data from January 2009 to December 2016 for the SET50 index, comprising the fifty most active companies on the Stock Exchange of Thailand. Figure 1 shows the collected data graphically.

Figure 1

Actual Stock Exchange of Thailand dividend yield data from January 2009 to December 2016

Table 1 presents the estimated values of all parameters used in the three models. These estimates satisfy the mean-reverting conditions of Corollary 1.

Table 1 Estimations of parameters in models SDY, SEY, and MSEY

A simulation path for each of the aforementioned models is then generated from the analytic solution (4) using the parameters in Table 1. We set the initial values \(\gamma (0)\) and \(\vartheta (0)\) to their actual values in January 2009, and each path consists of 120 observations. In Figs. 2–4, we compare the real data with sample simulated paths and use the root mean square (RMS) error to assess the proposed models. The RMS error statistics are calculated over 10,000 separate simulations, and Table 2 gives the means and standard deviations of the RMS error for the corresponding models.
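The RMS error used for this assessment can be computed as follows; `simulate_model` in the commented usage is a hypothetical placeholder for any of the three simulators, not a function defined in this paper:

```python
import numpy as np

def rms_error(simulated, actual):
    """Root mean square error between a simulated path and the observed series."""
    simulated = np.asarray(simulated, dtype=float)
    actual = np.asarray(actual, dtype=float)
    return float(np.sqrt(np.mean((simulated - actual) ** 2)))

# Hypothetical usage: average the RMS error over many Monte Carlo paths.
# errors = [rms_error(simulate_model(...), actual) for _ in range(10_000)]
# print(np.mean(errors), np.std(errors))
```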

Figure 2

Sample SDY model simulated path (dashed line) compared to actual data (solid line)

Figure 3

Sample SEY model simulated path (dashed line) compared to actual data (solid line)

Figure 4

Sample MSEY model simulated path (dashed line) compared to actual data (solid line)

Table 2 Comparison of RMS error

Compared with the SDY model, the SEY and MSEY models reduce the error by approximately 8.65% and 13.91%, respectively. The observations therefore suggest that both the SEY model and especially the MSEY model perform better than the SDY model. This result supports the assumption that the dividend yield and the earning yield depend on each other.

Conclusion

We consider the compound Ornstein–Uhlenbeck process to create dividend yield models for the case of the Stock Exchange of Thailand, taking the earning yield as an additional stochastic factor. Using the least-squares technique for parameter estimation and the Euler–Maruyama method for simulation, we fit three different models: SDY, SEY, and MSEY. The simulations, implemented in Python, show that both the SEY and MSEY models reduce the RMS error of estimation; the new MSEY model is especially effective, reducing the error by about 14%. This suggests that our proposed dividend yield models extended with the earning yield are more accurate than the original model. Future studies should focus on improving the estimation technique and incorporating more real-world financial factors.

References

  1. Chance, D.M., Kumar, R., Rich, D.R.: European option pricing with discrete stochastic dividends. J. Deriv. 9, 39–45 (2002)

  2. Cox, J.C., Ross, S.A.: The valuation of options for alternative stochastic processes. J. Financ. Econ. 3, 145–166 (1976)

  3. Kruse, S., Muller, M.: Pricing American call options under the assumption of stochastic dividends: an application of the Korn–Rogers model. SSRN (2009)

  4. Lioui, A.: Black–Scholes–Merton revisited under stochastic dividend yields. J. Futures Mark. 26, 703–732 (2006)

  5. Phewchean, N.: Option pricing with GOU process under a stochastic earning yield. Doctoral Thesis, Curtin Institute of Technology (2012)

  6. Øksendal, B.: Stochastic Differential Equations: An Introduction with Applications, 5th edn. Springer, New York (2000)

  7. Graham, A.: Kronecker Products and Matrix Calculus with Applications. Ellis Horwood, Chichester (1981)


Acknowledgements

The authors would like to greatly thank the referees for their valuable suggestions and comments.

Availability of data and materials

Data and material are publicly available at https://finance.yahoo.com/ and http://www.cboe.com/.

Funding

We acknowledge the support of the Centre of Excellence in Mathematics, CHE, Thailand.

Author information

All authors contributed equally to this work. All authors read and approved the final manuscript.

Correspondence to N. Phewchean.

Ethics declarations

Competing interests

The authors declare that they have no competing interests.

Consent for publication

All authors have seen and approved the submission of this manuscript.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


MSC

  • 91G80
  • 91G60
  • 65C30
  • 82C80

Keywords

  • Dividend yield
  • Earning yield
  • Stochastic process
  • Compound Ornstein–Uhlenbeck process