Bayesian Statistics from Methods to Models and Applications, by Sylvia Frühwirth-Schnatter, Angela Bitto, Gregor Kastner & Alexandra Posekany



(7.4)

From here we can derive the smoothing density, that is, the conditional posterior density of $\theta_{0:T}$. We use the method of [18], based on [21], for drawing from this density, called the mixed Cholesky factor algorithm (MCFA) by [23]. The following derivation closely follows Appendix C of [23]. The full conditional density of $\theta_{0:T}$ can be written as

$$p(\theta_{0:T} \mid y_{1:T}, \cdot) \propto \exp\{g(\theta_{0:T})\},$$

where

$$g(\theta_{0:T}) = \log p(\theta_0) + \sum_{t=1}^{T} \log p(\theta_t \mid \theta_{t-1}) + \sum_{t=1}^{T} \log p(y_t \mid \theta_t).$$

Then $g$ has the form

$$g(\theta_{0:T}) = K - \tfrac{1}{2}\,\theta_{0:T}'\,\Omega\,\theta_{0:T} + \omega'\theta_{0:T},$$

where $K$ is some constant with respect to $\theta_{0:T}$, $\Omega$ is a square, symmetric matrix of dimension $(T+1)m \times (T+1)m$, and $\omega$ is a column vector of dimension $(T+1)m$, with $m$ denoting the dimension of $\theta_t$. This gives

$$\theta_{0:T} \mid y_{1:T}, \cdot \sim \mathcal{N}\big(\Omega^{-1}\omega,\ \Omega^{-1}\big).$$

Further, $\Omega$ is block tridiagonal, since there are no cross-product terms involving $\theta_t$ and $\theta_{t+k}$ with $|k| > 1$. Because of this, the Cholesky factor, and hence the inverse, of $\Omega$ can be computed efficiently, which leads to the Cholesky factor algorithm (CFA) [21]. Instead of computing the Cholesky factor of $\Omega$ all at once before drawing $\theta_{0:T}$, as in the CFA, the same machinery can be used to draw $\theta_T$ first and then $\theta_t \mid \theta_{(t+1):T}$ recursively in a backward-sampling scheme, resulting in the MCFA. In simulations, the MCFA has been found to be significantly cheaper than Kalman filter based methods and often cheaper than the CFA [18].
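To make the CFA step concrete, the following Python sketch shows a single draw, assuming $\Omega$ and $\omega$ are already assembled. The function name `cfa_draw` and all variable names are illustrative, not taken from the chapter or from [18, 21, 23], and $\Omega$ is stored densely for readability; an efficient implementation would exploit the banded structure, e.g. via `scipy.linalg.cholesky_banded`.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve, solve_triangular

def cfa_draw(Omega, omega, rng):
    """One draw from N(Omega^{-1} omega, Omega^{-1}).

    Illustrative sketch: Omega is dense here; the CFA's efficiency
    comes from factorizing Omega in banded/block-tridiagonal storage.
    """
    c, lower = cho_factor(Omega, lower=True)   # Omega = L L'
    mean = cho_solve((c, lower), omega)        # mean = Omega^{-1} omega
    z = rng.standard_normal(len(omega))
    # If L' x = z, then Cov(x) = L'^{-1} L^{-1} = Omega^{-1}.
    return mean + solve_triangular(c, z, lower=True, trans='T')

# Tiny smoke test with a random symmetric positive definite Omega.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 6))
Omega = A @ A.T + 6 * np.eye(6)
omega = rng.standard_normal(6)
draw = cfa_draw(Omega, omega, rng)
```

A single Cholesky factorization thus yields both the mean $\Omega^{-1}\omega$ and a draw with covariance $\Omega^{-1}$; the MCFA sketched further below replaces this one large factorization with a sequence of small block factorizations.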

In order to implement the algorithm, we first need to characterize the diagonal and off-diagonal blocks of Ω and the blocks of ω:
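The exact block expressions depend on the model in (7.4), which is not reproduced in this excerpt, so the following is only a hedged illustration. Assuming a generic Gaussian linear state space model with observation equation $y_t = Z\theta_t + \varepsilon_t$, $\varepsilon_t \sim \mathcal{N}(0, H)$, state equation $\theta_t = F\theta_{t-1} + \eta_t$, $\eta_t \sim \mathcal{N}(0, Q)$, and initial distribution $\theta_0 \sim \mathcal{N}(a_0, P_0)$ (all of these symbols are assumptions for illustration, not the chapter's), collecting the quadratic and linear terms of $g$ gives

$$\Omega_{00} = P_0^{-1} + F'Q^{-1}F, \qquad \Omega_{tt} = Q^{-1} + F'Q^{-1}F + Z'H^{-1}Z, \quad 1 \le t \le T-1,$$

$$\Omega_{TT} = Q^{-1} + Z'H^{-1}Z, \qquad \Omega_{t,t+1} = \Omega_{t+1,t}' = -F'Q^{-1}, \quad 0 \le t \le T-1,$$

$$\omega_0 = P_0^{-1}a_0, \qquad \omega_t = Z'H^{-1}y_t, \quad 1 \le t \le T.$$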



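Given such blocks, the MCFA can be sketched in Python as follows. This is again a hedged illustration: the function `mcfa_draw`, the block lists, and the explicit inverses are simplifications for readability, whereas [18] carries Cholesky factors of the $m \times m$ blocks instead of inverting them. A forward pass factorizes $\Omega$ block by block; the backward pass then draws $\theta_T$ and $\theta_t \mid \theta_{t+1}$ in turn.

```python
import numpy as np

def mcfa_draw(Omega_diag, Omega_off, omega, rng):
    """Draw theta_{0:T} from N(Omega^{-1} omega, Omega^{-1}).

    Omega_diag: list of T+1 diagonal blocks Omega[t, t] (each m x m).
    Omega_off:  list of T superdiagonal blocks Omega[t, t+1].
    omega:      list of T+1 blocks of omega (each of length m).
    """
    n = len(Omega_diag)
    Sigma = [None] * n  # conditional covariances Var(theta_t | theta_{t+1:T})
    mu = [None] * n     # intermediate conditional means
    # Forward pass: block-wise factorization of Omega.
    for t in range(n):
        S = np.array(Omega_diag[t], dtype=float)
        r = np.array(omega[t], dtype=float)
        if t > 0:
            S -= Omega_off[t - 1].T @ Sigma[t - 1] @ Omega_off[t - 1]
            r -= Omega_off[t - 1].T @ mu[t - 1]
        Sigma[t] = np.linalg.inv(S)
        mu[t] = Sigma[t] @ r
    # Backward pass: theta_T ~ N(mu_T, Sigma_T), then recurse backwards.
    theta = [None] * n
    L = np.linalg.cholesky(Sigma[n - 1])
    theta[n - 1] = mu[n - 1] + L @ rng.standard_normal(len(mu[n - 1]))
    for t in range(n - 2, -1, -1):
        mean_t = mu[t] - Sigma[t] @ (Omega_off[t] @ theta[t + 1])
        L = np.linalg.cholesky(Sigma[t])
        theta[t] = mean_t + L @ rng.standard_normal(len(mu[t]))
    return np.stack(theta)

# Demo: assemble the illustrative blocks from the display above.
rng = np.random.default_rng(0)
T, m = 5, 2
F, Z = 0.9 * np.eye(m), np.eye(m)
Qi = Hi = np.eye(m)                  # Q^{-1} and H^{-1} (identity for the demo)
P0i, a0 = np.eye(m), np.zeros(m)
y = [rng.standard_normal(m) for _ in range(T)]
Omega_diag = ([P0i + F.T @ Qi @ F]
              + [Qi + F.T @ Qi @ F + Z.T @ Hi @ Z for _ in range(T - 1)]
              + [Qi + Z.T @ Hi @ Z])
Omega_off = [-F.T @ Qi for _ in range(T)]
omega = [P0i @ a0] + [Z.T @ Hi @ yt for yt in y]
theta = mcfa_draw(Omega_diag, Omega_off, omega, rng)
```

Each step factorizes only an $m \times m$ block, so the cost grows linearly in $T$; drawing $\theta_T$ first and then conditioning backwards is exactly the backward-sampling structure of the MCFA described above.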