    The LSCP model and its properties In this section we first introduce the LSCP model (Section 2.1). Examples of the model are presented in Section 2.2 and its basic properties in Section 2.3. Finally, Section 2.4 introduces the finite-dimensional approximations necessary for inference.
    Inference A popular approach to Bayesian inference is Markov chain Monte Carlo (MCMC) methodology, for instance the Metropolis-adjusted Langevin algorithm (MALA) (Roberts and Tweedie, 1996), which was suggested for LGCP models by Møller et al. (1998). Another approach is the integrated nested Laplace approximation (INLA) (Illian et al., 2012; Rue et al., 2009; Simpson et al., 2016), which, when applicable, can have beneficial computational properties. Unfortunately, the level set construction cannot be handled by INLA. In this work we propose an MCMC algorithm for fitting the LSCP model and performing predictions. Specifically, we propose a method based on the preconditioned Crank–Nicolson Langevin (pCNL) algorithm (Cotter et al., 2013). The algorithm is designed for the case when the target distribution (the posterior) has a Gaussian prior and a non-Gaussian likelihood, as is the case for the LSCP model. An important property of the pCNL algorithm is that it is discretization invariant (Beskos et al., 2008): as the finite-dimensional approximation is made finer, the mixing of the Markov chain does not deteriorate. This matters here because we use a rather fine discretization, for which an algorithm without this property, such as MALA, should be expected to mix poorly. We now present in more detail how we implement the MCMC algorithm for the LSCP model. With each class we associate a block of parameters; for the level set field, this block also includes the nugget variance and the thresholds. By introducing an auxiliary field that records which class is active at each location, the parameters and latent fields of different classes become conditionally independent given this field. We use this to construct a Metropolis-within-Gibbs algorithm (Robert and Casella, 2004) to sample from the joint posterior, performing three steps in each iteration of the algorithm.
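    As a rough illustration (not the authors' implementation), the basic preconditioned Crank–Nicolson proposal underlying pCNL can be sketched on a toy finite-dimensional target; the prior covariance C, the potential phi, and the step size beta below are placeholder choices, not quantities from the LSCP model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy finite-dimensional setting: Gaussian prior N(0, C) with a
# non-Gaussian likelihood, mimicking the structure of the LSCP posterior.
C = np.array([[1.0, 0.5], [0.5, 1.0]])  # prior covariance (placeholder)
L = np.linalg.cholesky(C)

def phi(x):
    """Negative log-likelihood potential (toy choice, not the LSCP one)."""
    return np.sum(np.cosh(x))

def pcn_step(x, beta=0.3):
    """One preconditioned Crank-Nicolson update targeting
    pi(x) proportional to exp(-phi(x)) N(x; 0, C)."""
    xi = L @ rng.standard_normal(x.shape)          # draw from the prior N(0, C)
    prop = np.sqrt(1.0 - beta**2) * x + beta * xi  # pCN proposal
    # The proposal preserves the Gaussian prior exactly, so the
    # acceptance ratio involves only the likelihood potential phi.
    log_alpha = phi(x) - phi(prop)
    if np.log(rng.uniform()) < log_alpha:
        return prop
    return x

x = np.zeros(2)
for _ in range(1000):
    x = pcn_step(x)
```

    The Langevin variant (pCNL) additionally includes a drift term based on the gradient of the potential while keeping the same prior-preserving structure, which is what gives both schemes their discretization invariance.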
    The computational bottleneck of the algorithm is the third step, where the latent Gaussian fields are sampled. This is computationally expensive since the spatial discretization in two dimensions corresponds to a high-dimensional distribution. However, since we are using a regular spatial lattice discretization, efficient sampling from the prior Gaussian field is possible using spectral methods (Lang and Potthoff, 2011), that is, using fast Fourier transforms, at a computational cost that is log-linear in the number of lattice cells. Working in the spectral domain also allows for efficient computation of all gradients and acceptance probabilities needed, making the combination of the spectral approach and the pCNL algorithm very favorable. Using the fast Fourier transform relies on truncating the spectral basis expansion of the fields. In Appendix A we justify this additional approximation theoretically by showing that convergence of the lattice approximation still holds under certain bounds on the spectral densities.
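    A minimal sketch of FFT-based sampling of a stationary Gaussian field on a lattice is given below. The Matérn-like spectral density spec_dens, the lattice size n, and the normalization are assumptions for illustration, not the paper's exact discretization or truncation.

```python
import numpy as np

rng = np.random.default_rng(1)

n = 64  # lattice size per dimension (placeholder)

# Angular frequencies of the n x n lattice.
k1, k2 = np.meshgrid(np.fft.fftfreq(n) * 2 * np.pi,
                     np.fft.fftfreq(n) * 2 * np.pi, indexing="ij")

def spec_dens(k1, k2, kappa=5.0):
    """Hypothetical Matern-like spectral density (an assumption,
    not the paper's specific model)."""
    return 1.0 / (kappa**2 + k1**2 + k2**2) ** 2

# Draw complex white noise in the spectral domain, scale by the square
# root of the spectral density, and transform back: O(n^2 log n) work.
noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
coeffs = np.sqrt(spec_dens(k1, k2)) * noise
field = np.real(np.fft.ifft2(coeffs)) * n  # real part gives one sample
```

    Gradients of the log-posterior with respect to the spectral coefficients can be evaluated with the same transforms, which is why the spectral representation combines well with the pCNL proposal.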
    Application To further illustrate our approach, we return to the tropical rainforest data example of Section 1 and compare the effect of using LSCP models instead of simple Poisson regression or the LGCP model.
    Discussion We have considered the problem of Bayesian level set inversion for point process data. The proposed model can be seen as a generalization of the LGCP model in which the latent Gaussian field is extended to a level set mixture of Gaussian fields. We derived basic model properties and, in Appendix A, showed consistency of the posterior probability measure of finite-dimensional approximations to the continuous model. A computationally efficient MCMC method for Bayesian inference, based on the preconditioned Crank–Nicolson Langevin algorithm, was presented. A topic for further research could be to investigate other, potentially even faster, estimation methods, such as those based on INLA in combination with variational Bayes.