Similar Documents (20 results)
1.
This paper reports the fitting of a number of Bayesian logistic models with spatially structured and/or unstructured random effects to binary data with the purpose of explaining the distribution of high-intensity crime areas (HIAs) in the city of Sheffield, England. Bayesian approaches to spatial modeling are attracting considerable interest at the present time. This is because of the availability of rigorously tested software for fitting a certain class of spatial models. This paper considers issues associated with the specification, estimation, and validation, including sensitivity analysis, of spatial models using the WinBUGS software. It pays particular attention to the visualization of results. We discuss a map decomposition strategy and an approach that examines properties of the full posterior distribution. The Bayesian spatial model reported provides some interesting insights into the different factors underlying the existence of the three police-defined HIAs in Sheffield.
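As a rough illustration of the kind of model described in this abstract (not the authors' WinBUGS code), the sketch below evaluates the unnormalized log-posterior of a logistic model with an intrinsic-CAR spatially structured effect plus an i.i.d. unstructured effect; the adjacency matrix `A`, covariates `X`, precision values, and the vague prior on `beta` are all illustrative placeholders.

```python
import numpy as np

def log_posterior(beta, u, v, y, X, A, tau_u=1.0, tau_v=1.0):
    """Unnormalized log-posterior of a Bayesian logistic model with a
    spatially structured (intrinsic CAR, via adjacency A) and an
    unstructured (i.i.d. normal) random effect."""
    eta = X @ beta + u + v                             # linear predictor
    loglik = np.sum(y * eta - np.logaddexp(0.0, eta))  # Bernoulli log-likelihood
    D = np.diag(A.sum(axis=1))                         # degree matrix
    icar = -0.5 * tau_u * u @ (D - A) @ u              # pairwise-difference CAR prior
    iid = -0.5 * tau_v * np.sum(v ** 2)                # unstructured effect prior
    prior_beta = -0.5 * 0.001 * np.sum(beta ** 2)      # vague normal prior on beta
    return loglik + icar + iid + prior_beta

# Toy usage with simulated inputs (placeholders only).
rng = np.random.default_rng(0)
n, p = 30, 3
X = rng.normal(size=(n, p)); y = rng.integers(0, 2, size=n)
A = (rng.random((n, n)) < 0.1).astype(float); A = np.triu(A, 1); A = A + A.T
print(log_posterior(np.zeros(p), np.zeros(n), np.zeros(n), y, X, A))
```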

2.
This article presents a Bayesian method based on spatial filtering to estimate hedonic models for dwelling prices with geographically varying coefficients. A Bayesian Adaptive Sampling algorithm for variable selection is used, which makes it possible to select the most appropriate filters for each hedonic coefficient. This approach explores the model space more systematically and takes into account the uncertainty associated with model estimation and selection processes. The methodology is illustrated with an application to the real estate market in the Spanish city of Zaragoza and with simulated data. In addition, an exhaustive comparison study with a set of alternative strategies used in the literature is carried out. Our results show that the proposed Bayesian procedures are competitive in terms of prediction, yield more accurate estimates of the regression coefficients of the model, and resolve the multicollinearity problems associated with their estimation.
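Spatial filters of this kind are typically Moran eigenvectors extracted from a (doubly centered) spatial weight matrix. The sketch below shows only that construction step in plain numpy; the Bayesian Adaptive Sampling selection of filters for each coefficient, which is the article's contribution, is not reproduced, and `W` is a placeholder weight matrix.

```python
import numpy as np

def moran_eigenvectors(W):
    """Eigenvectors of the doubly centered, symmetrized weight matrix,
    ordered by descending eigenvalue; these serve as candidate spatial
    filters whose inclusion a variable-selection step then decides."""
    n = W.shape[0]
    Ws = 0.5 * (W + W.T)                   # symmetrize the weight matrix
    M = np.eye(n) - np.ones((n, n)) / n    # centering projector
    vals, vecs = np.linalg.eigh(M @ Ws @ M)
    order = np.argsort(vals)[::-1]
    return vals[order], vecs[:, order]
```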

3.
ABSTRACT Many databases involve ordered discrete responses in a temporal and spatial context, including, for example, land development intensity levels, vehicle ownership, and pavement conditions. An appreciation of such behaviors requires rigorous statistical methods, recognizing spatial effects and dynamic processes. This study develops a dynamic spatial-ordered probit (DSOP) model in order to capture patterns of spatial and temporal autocorrelation in ordered categorical response data. This model is estimated in a Bayesian framework using Gibbs sampling and data augmentation, in order to generate all autocorrelated latent variables. It incorporates spatial effects in an ordered probit model by allowing for interregional spatial interactions and heteroskedasticity, along with random effects across regions or any clusters of observational units. The model assumes an autoregressive, AR(1), process across latent response values, thereby recognizing time-series dynamics in panel data sets. The model code and estimation approach are tested on simulated data sets, in order to reproduce known parameter values and provide insights into estimation performance, yielding much more accurate estimates than standard, nonspatial techniques. The proposed and tested DSOP model is felt to be a significant contribution to the field of spatial econometrics, where binary applications (for discrete response data) have been seen as the cutting edge. The Bayesian framework and Gibbs sampling techniques used here permit such complexity in a world of two-dimensional autocorrelation.
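A schematic of the latent-variable structure implied by this description (our notational reading; the thresholds, regional effects, and exact heteroskedasticity treatment are assumptions, not taken from the paper):

```latex
y_{it}^{*} = \lambda\, y_{i,t-1}^{*} + \rho \sum_{j} w_{ij}\, y_{jt}^{*}
  + \mathbf{x}_{it}'\boldsymbol{\beta} + \theta_{r(i)} + \varepsilon_{it},
  \qquad \varepsilon_{it} \sim N\!\left(0,\sigma_{i}^{2}\right), \\
y_{it} = k \quad \text{if} \quad \gamma_{k-1} < y_{it}^{*} \le \gamma_{k},
```

where the latent responses follow an AR(1) process over time and a spatial autoregressive process across units, and Gibbs sampling with data augmentation draws the latent values alongside the parameters.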

4.
Conventional discrete choice models assume implicitly that the choice set is independent of the decisionmaker's preferences conditional on the explanatory variables of the models. This assumption is implausible in many choice situations where the decisionmaker selects his or her choice set. This paper estimates and tests a discrete choice model with endogenous choice sets based on Horowitz' theoretical work. To calibrate the model, a new probability simulator is introduced and a sequential estimation procedure is developed. The model and calibration methods are tested in an empirical application as well as Monte Carlo simulations. The empirical results are used to test the theory of endogenous choice sets and to examine the differences between the new model and a conventional choice model in parameter estimates and predicted choice probabilities. The empirical results strongly suggest that ignoring the endogeneity of choice sets in choice modeling can have serious consequences in applications.

5.
This article discusses how standard spatial autoregressive models and their estimation can be extended to accommodate geographically hierarchical data structures. Whereas standard spatial econometric models normally operate at a single geographical scale, many geographical data sets are hierarchical in nature—for example, information about houses nested into data about the census tracts in which those houses are found. Here we outline four model specifications by combining different formulations of the spatial weight matrix W and of ways of modeling regional effects. These are (1) groupwise W and fixed regional effects; (2) groupwise W and random regional effects; (3) proximity-based W and fixed regional effects; and (4) proximity-based W and random regional effects. We discuss each of these model specifications and their associated estimation methods, giving particular attention to the fourth. We describe this as a hierarchical spatial autoregressive model. We view it as having the most potential to extend spatial econometrics to accommodate geographically hierarchical data structures and as offering the greatest coming together of spatial econometric and multilevel modeling approaches. Subsequently, we provide Bayesian Markov Chain Monte Carlo algorithms for implementing the model. We demonstrate its application using a two-level land price data set where land parcels nest into districts in Beijing, China, finding significant spatial dependence at both the land parcel level and the district level.
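A compact rendering of the two-level hierarchical spatial autoregressive structure described above (our notation; the abstract does not give the exact formulation): parcel-level outcomes carry a parcel-level spatial lag and district effects that themselves follow a district-level spatial process,

```latex
\mathbf{y} = \rho\, W \mathbf{y} + X\boldsymbol{\beta} + Z\boldsymbol{\theta} + \boldsymbol{\varepsilon},
  \qquad \boldsymbol{\varepsilon} \sim N(\mathbf{0},\, \sigma_{\varepsilon}^{2} I_{n}), \\
\boldsymbol{\theta} = \lambda\, M \boldsymbol{\theta} + \mathbf{u},
  \qquad \mathbf{u} \sim N(\mathbf{0},\, \sigma_{u}^{2} I_{J}),
```

where W is the parcel-level weight matrix, M the district-level weight matrix, and Z the n-by-J membership matrix assigning land parcels to districts.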

6.
Baseline models have been used in the analysis of social networks as a way to understand how empirical networks differ from “random” ones. For the purposes of social network analysis, a “random” network is one chosen—at random—from a population of possible graphs derived from a given generating function. Although these principled hypothesis tests have a long history, many of their properties and extensions to multiple data structures—here, specifically two-mode data—have been overlooked. This article focuses on applications of different baseline models to two data sets: donations and voting of the 111th U.S. Congress, and organizations involved in forums on watershed policy in San Francisco, USA. Tests using each data set, but with different baseline reference distributions, illustrate the range of possible questions baseline models can address and the differences between them. The ability to apply different models and generate a constellation of results provides a deeper understanding of the structure of the system.

7.
Bayesian Model Averaging for Spatial Econometric Models
We extend the literature on Bayesian model comparison for ordinary least-squares regression models to include spatial autoregressive and spatial error models. Our focus is on comparing models that consist of different matrices of explanatory variables. A Markov Chain Monte Carlo model composition methodology labeled MC³ by Madigan and York is developed for two types of spatial econometric models that are frequently used in the literature. The methodology deals with cases where the number of possible models based on different combinations of candidate explanatory variables is large enough such that calculation of posterior probabilities for all models is difficult or infeasible. Estimates and inferences are produced by averaging over models using the posterior model probabilities as weights, a procedure known as Bayesian model averaging. We illustrate the methods using a spatial econometric model of origin–destination population migration flows between the 48 U.S. states and the District of Columbia during the 1990–2000 period.
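The article develops MC³ for spatial autoregressive and spatial error models with their own marginal likelihoods; as a simplified stand-in, the sketch below shows the model-composition random walk for a plain linear regression, using a BIC-based approximation to the log marginal likelihood. All choices here (the BIC weight, the single-variable toggle proposal, the iteration count) are illustrative assumptions rather than the paper's procedure.

```python
import numpy as np

def log_model_weight(y, X, cols):
    """-BIC/2, used here as a rough log marginal-likelihood approximation."""
    n = len(y)
    Xs = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    rss = np.sum((y - Xs @ beta) ** 2)
    return -0.5 * (n * np.log(rss / n) + Xs.shape[1] * np.log(n))

def mc3(y, X, n_iter=5000, seed=0):
    """Markov chain over models: toggle one candidate regressor in or out,
    accept with a Metropolis ratio of approximate model weights, and return
    posterior inclusion frequencies usable as BMA weights per variable."""
    rng = np.random.default_rng(seed)
    k = X.shape[1]
    current, lw_cur = set(), log_model_weight(y, X, [])
    visits = np.zeros(k)
    for _ in range(n_iter):
        j = int(rng.integers(k))
        proposal = current ^ {j}                       # flip variable j
        lw_prop = log_model_weight(y, X, sorted(proposal))
        if np.log(rng.random()) < lw_prop - lw_cur:
            current, lw_cur = proposal, lw_prop
        for v in current:
            visits[v] += 1
    return visits / n_iter                             # inclusion frequencies
```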

8.
This paper addresses the application of a Bayesian parameter estimation method to a regional seismic risk assessment of curved concrete bridges. For this purpose, numerical models of case-study bridges are simulated to generate multiparameter demand models of components, consisting of various uncertainty parameters and an intensity measure (IM). The demand models are constructed using a Bayesian parameter estimation method and combined with limit states to derive the parameterized fragility curves. These fragility curves are used to develop bridge-specific and bridge-class fragility curves. Moreover, a stepwise removal process in the Bayesian parameter estimation is performed to identify significant parameters affecting component demands.

9.
A coarse Bayesian approach to evaluate luminescence ages
This paper develops a simplified Bayesian approach to evaluate a luminescence age. We limit our purpose to the cause-effect relationship between the age and the accumulated dose. The accumulated dose is given as a function of the age and several other parameters: internal radionuclide contents, gamma dose rate, cosmic dose rate, alpha efficiency, wetness, conversion factors, wetness coefficients, fading rate and storage time. The age is the quantity we are looking for. Bayes' theorem expresses the changes in the probability distribution of age due to the luminescence study. The information before the study (prior) comprises what is previously known about the age and the archaeological model (cultural period, stratigraphic relations, type, etc.) as well as the parameters of the physical model. The accumulated dose constitutes the data describing the measurement. The various stages of the Bayesian approach were implemented using the software WinBUGS. Simulated data sets were used in various models. We present various small models representing typical examples encountered in luminescence dating.
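To make the age-versus-accumulated-dose relationship concrete, here is a minimal grid-based Bayesian update reduced to a single dose-rate parameter. The dose rate, measured dose, uncertainties, and the flat archaeological prior are invented placeholders, not values from the paper, and the paper's full physical model (wetness, fading, etc.) is omitted.

```python
import numpy as np

# Grid-based posterior for age given a measured accumulated dose.
ages = np.linspace(0.0, 20.0, 2001)                       # candidate ages, ka
prior = np.where((ages > 1.0) & (ages < 15.0), 1.0, 0.0)  # flat archaeological prior

dose_rate, dose_rate_sd = 3.0, 0.3                        # Gy/ka (placeholder)
De_obs, De_sd = 24.0, 2.0                                 # measured dose, Gy (placeholder)

# Likelihood: predicted dose = age * dose rate; propagate both uncertainties.
pred_sd = np.sqrt(De_sd**2 + (ages * dose_rate_sd)**2)
like = np.exp(-0.5 * ((De_obs - ages * dose_rate) / pred_sd) ** 2) / pred_sd

post = prior * like
post /= np.trapz(post, ages)                              # normalize over the grid
mean_age = np.trapz(ages * post, ages)
print(f"posterior mean age: {mean_age:.2f} ka")
```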

10.
Spatial econometric specifications pose unique computational challenges to Bayesian analysis, making it difficult to estimate models efficiently. In the literature, the main focus has been on extending Bayesian analysis to increasingly complex spatial models. The stochastic efficiency of commonly used Markov Chain Monte Carlo (MCMC) samplers has received less attention by comparison. Specifically, Bayesian methods to analyze effective sample size, and samplers that provide a large effective sample size, have not been thoroughly considered in the literature. Thus, we compare three MCMC techniques: the familiar Metropolis-within-Gibbs sampling, Slice-within-Gibbs sampling, and Hamiltonian Monte Carlo. The latter two methods, while common in other domains, are not as widely encountered in Bayesian spatial econometrics. We assess these methods across four different scenarios in which we estimate the spatial autoregressive parameter in a mixed regressive, spatial autoregressive specification (or, spatial lag model). We find that off-the-shelf implementations of the newer high-yield simulation techniques require significant adaptation to be viable. We further find that effective sample sizes are often significantly smaller than nominal sample sizes. In addition, we find that stopping simulation early may understate posterior credible interval widths when the effective sample size is small. More broadly, we suggest that sample information and stopping rules deserve more attention in both applied and basic Bayesian spatial econometric research.
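For readers unfamiliar with the effective-sample-size diagnostic being compared here, a minimal single-chain version is sketched below; it uses a simple truncation at the first non-positive autocorrelation, which is one common rule rather than the specific estimator used in the article.

```python
import numpy as np

def effective_sample_size(x):
    """Effective sample size of one MCMC chain: n / (1 + 2 * sum of
    positive-lag autocorrelations, truncated at the first non-positive lag)."""
    x = np.asarray(x, dtype=float)
    n = x.size
    xc = x - x.mean()
    acov = np.correlate(xc, xc, mode="full")[n - 1:] / n  # autocovariances, lag 0..n-1
    rho = acov / acov[0]
    s = 0.0
    for k in range(1, n):
        if rho[k] <= 0:                                   # stop at first non-positive lag
            break
        s += rho[k]
    return n / (1.0 + 2.0 * s)
```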

11.
Bayesian Estimation of Regional Production for CGE Modeling
Abstract Computable general equilibrium (CGE) models are often criticized for using restrictive functional forms and relying on external sources for parameter values in their calibration. CGE modelers argue that in many instances reliable econometric estimates of important model parameters are unavailable because they must be estimated using small numbers of time-series observations. To address these criticisms, this paper uses a Bayesian approach to estimate the parameters of a translog production function in a regional computable general equilibrium model. Using priors from more reliable national estimates, and parameter restrictions required by neoclassical production theory, estimation is done by Markov chain Monte Carlo simulation. A stylized regional CGE model is then used to contrast policy responses of a Cobb-Douglas specification with those from the estimated translog equation.

12.
Data sets from some VHF radars have been analysed. Gaussian distributions with random variance are proposed for the signal's quadrature components. The suggested distributions explain the data sets satisfactorily, especially as the length of the data series increases. Non-stationarity of the signals is also interpreted using the proposed model. Moreover, a χ²-goodness-of-fit test for the proposed model has been conducted and its results are persuasive. We suggest that it is better to use the proposed distribution for the quadrature components than to use the Nakagami distribution for the amplitude or the regular Gaussian distribution for the quadrature components. In addition, the sampling time should be less than 4 min to guarantee the stationarity of the data.
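A sketch of the two ingredients mentioned above, a Gaussian quadrature component whose variance is itself random, and a χ²-goodness-of-fit check, is given below with simulated data. The inverse-gamma mixing distribution (which gives a Student-t marginal), the binning scheme, and all numeric values are illustrative assumptions, not the paper's model of radar signals.

```python
import numpy as np
from scipy import stats

# Simulate a quadrature component: Gaussian with random (inverse-gamma) variance.
rng = np.random.default_rng(1)
n = 5000
var = stats.invgamma(a=4.0, scale=3.0).rvs(n, random_state=rng)
x = rng.normal(0.0, np.sqrt(var))

# Chi-square goodness of fit against the implied Student-t marginal.
df, loc, scale = stats.t.fit(x)
edges = np.quantile(x, np.linspace(0, 1, 21))     # 20 equal-count bins
obs, _ = np.histogram(x, bins=edges)
cdf = stats.t.cdf(edges, df, loc, scale)
exp = n * np.diff(cdf)
chi2, p = stats.chisquare(obs, exp * obs.sum() / exp.sum(), ddof=3)
print(chi2, p)
```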

13.
Gaussian Process Regression (GPR) is a nonparametric technique that is capable of yielding reliable out-of-sample predictions in the presence of highly nonlinear unknown relationships between dependent and explanatory variables. But in terms of identifying relevant explanatory variables, this method is far less explicit about questions of statistical significance. In contrast, more traditional spatial econometric models, such as spatial autoregressive models or spatial error models, place rather strong prior restrictions on the functional form of relationships, but allow direct inference with respect to explanatory variables. In this article, we attempt to combine the best of both techniques by augmenting GPR with a Bayesian Model Averaging (BMA) component that allows for the identification of statistically relevant explanatory variables while retaining the predictive performance of GPR. In particular, GPR-BMA yields a posterior probability interpretation of model-inclusion frequencies that provides a natural measure of the statistical relevance of each variable. Moreover, while such frequencies offer no direct information about the signs of local marginal effects, it is shown that partial derivatives based on the mean GPR predictions do provide such information. We illustrate the additional insights made possible by this approach by applying GPR-BMA to a benchmark BMA data set involving potential determinants of cross-country economic growth. It is shown that localized marginal effects based on partial derivatives of mean GPR predictions yield additional insights into comparative growth effects across countries.
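The "partial derivatives of the mean GPR prediction" idea admits a closed form for smooth kernels. Below is a minimal numpy sketch for an RBF kernel, with fixed hyperparameters passed in as placeholders; it shows only the posterior mean and its analytic gradient (the local marginal effects), not the BMA component or the article's estimation procedure.

```python
import numpy as np

def gpr_mean_and_grad(X, y, Xs, lengthscale=1.0, sigma_f=1.0, noise=1e-2):
    """GP regression with an RBF kernel: posterior mean at test points Xs
    and the analytic partial derivatives of that mean w.r.t. each input."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sigma_f**2 * np.exp(-0.5 * d2 / lengthscale**2)

    K = k(X, X) + noise * np.eye(len(X))
    alpha = np.linalg.solve(K, y)                   # K^{-1} y
    Ks = k(Xs, X)                                   # n_test x n_train
    mean = Ks @ alpha
    # d mean / d x_d = sum_i alpha_i * k(x*, x_i) * (-(x*_d - x_{i,d}) / l^2)
    diff = Xs[:, None, :] - X[None, :, :]           # n_test x n_train x d
    grad = -(Ks[:, :, None] * diff / lengthscale**2 * alpha[None, :, None]).sum(1)
    return mean, grad
```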

14.
Chronology building has long served as a major focus of archaeological interest in the Central Illinois River valley (CIRV) of west-central Illinois. Previous methods have relied primarily upon relative dating techniques (e.g., ceramic seriation) as a means of sorting out temporal relationships between sites. This study represents the first investigation into the utility of Bayesian techniques (which consider radiocarbon dates in context with archaeological information) in the CIRV. We present the results of a detailed ceramic seriation of the region, data that we use as a priori information in our Bayesian models. We then offer contiguous, overlapping, and sequential models of site occupations in the Mississippian CIRV, review the output and appropriateness of each model, and consider their implications for the pace of sociopolitical change in the region.

15.
ABSTRACT Recent developments in combining input-output and transportation planning models have made it possible to construct realistic comprehensive urban and regional activity models of land use intensity. These models form the basis for a rigorous approach to studying the interactions among urban activities. However, efficient computational solution methods for implementing such comprehensive models are still not available. In this paper, an efficient solution method for a nonlinear programming urban systems model is developed by combining Evans's partial linearization technique with Powell's hybrid method. The solution algorithm is applied to a small but realistic urban area with a detailed transportation network.

16.
ABSTRACT In this paper, we specify a linear Cliff-and-Ord-type spatial model. The model allows for spatial lags in the dependent variable, the exogenous variables, and disturbances. The innovations in the disturbance process are assumed to be heteroskedastic with an unknown form. We formulate multistep GMM/IV-type estimation procedures for the parameters of the model. We also give the limiting distributions for our suggested estimators and consistent estimators for their asymptotic variance-covariance matrices. We conduct a Monte Carlo study to show that the derived large-sample distribution provides a good approximation to the actual small-sample distribution of our estimators.
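The IV step in estimators of this type is usually a spatial two-stage least squares with spatially lagged regressors as instruments. The sketch below shows only that first step for the lag equation; the GM moments for the error-lag parameter and the heteroskedasticity-robust variance estimator developed in the paper are omitted, and the instrument set is a conventional choice rather than the authors' exact one.

```python
import numpy as np

def spatial_2sls(y, X, W):
    """Minimal spatial 2SLS for y = rho*W y + X beta + u, instrumenting the
    spatial lag W y with H = [X, WX, W^2 X]. X is assumed to exclude the
    intercept so that the instrument matrix stays full rank under a
    row-standardized W."""
    Wy = W @ y
    Z = np.column_stack([Wy, X])                 # endogenous lag plus exogenous regressors
    H = np.column_stack([X, W @ X, W @ W @ X])   # instrument set
    P = H @ np.linalg.solve(H.T @ H, H.T)        # projection onto instrument space
    Zp = P @ Z
    delta = np.linalg.solve(Zp.T @ Z, Zp.T @ y)  # [rho, beta_1, ..., beta_k]
    return delta
```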

17.
Accurately estimating Vehicle Routing Problem (VRP) route distances can inform transportation planning in a wide variety of delivery and service provision contexts. This study extends the work of previous research where multiple linear regression models were used to estimate the average distance of VRP solutions with various customer demands and capacity constraints. This research expands on that approach in two ways: first, the point patterns used in estimation have a wider range of customer clustering or dispersion values as measured by the Average Nearest Neighbor Index (ANNI), as opposed to just using a Poisson or random point process; second, the tour coefficient adjusted by this complementary spatial information is shown to yield statistically more accurate estimates. To generate a full range of ANNI values, point patterns were simulated using a Poisson process, a Matern clustering process, and a simple sequential inhibition process to obtain random, clustered, and dispersed point patterns, respectively. The coefficients of independent variables in the models were used to explain how the spatial distributions of customers influence the VRP distances. These results demonstrate that complementary spatial data can be used to improve operational results, a concept that could be applied more broadly.
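For reference, the ANNI statistic used above compares the observed mean nearest-neighbor distance to its expectation under complete spatial randomness. A small self-contained version is sketched below; the planar, edge-effect-free formula is a simplifying assumption.

```python
import numpy as np

def average_nearest_neighbor_index(points, area):
    """ANNI = observed mean nearest-neighbor distance / expected distance
    under complete spatial randomness (0.5 * sqrt(area / n)); values < 1
    indicate clustering, values > 1 indicate dispersion."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    d = np.sqrt(((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)                 # ignore self-distances
    observed = d.min(axis=1).mean()
    expected = 0.5 * np.sqrt(area / n)
    return observed / expected
```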

18.
We model the relationship between coronary heart disease and smoking prevalence and deprivation at the small area level using the Poisson log-linear model with and without random effects. Extra-Poisson variability (overdispersion) is handled through the addition of spatially structured and unstructured random effects in a Bayesian framework. In addition, four different measures of smoking prevalence are assessed because the smoking data are obtained from a survey that resulted in quite large differences in the size of the sample across the census tracts. Two of the methods use Bayes adjustments of standardized smoking ratios (local and global adjustments), and one uses a nonparametric spatial averaging technique. A preferred model is identified based on the deviance information criterion. Both smoking and deprivation are found to be statistically significant risk factors, but the effect of the smoking variable is reduced once the confounding effects of deprivation are taken into account. Maps of the spatial variability in relative risk, and the importance of the underlying covariates and random effects terms, are produced. We also identify areas with excess relative risk.
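A plausible notational rendering of the full random-effects model described above, with O_i the observed counts, E_i the expected counts, u_i the spatially structured effect and v_i the unstructured effect (the exact covariate coding is an assumption on our part):

```latex
O_i \sim \mathrm{Poisson}(\mu_i), \qquad
\log \mu_i = \log E_i + \beta_0 + \beta_1\,\mathrm{smoke}_i + \beta_2\,\mathrm{deprivation}_i + u_i + v_i,
```

with u given a spatially structured prior (e.g., conditional autoregressive over neighbouring tracts) and v_i ~ N(0, σ_v²); the relative risk mapped for area i is then μ_i / E_i.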

19.
A new estimator links migration data to a random utility "voting with your feet" model to compare the relative living standards of pairs of regions. It is argued that this estimator has a firmer theoretical basis and uses migration information more efficiently than previous methods. An algorithm converts pairwise comparisons into rankings of the U.S. states for 1970, 1980, and 1990. The rankings indicate living standards were highest in the Northwest (1970, 1980) and the south Atlantic coast (1990). A nonparametric test suggests that the system was in disequilibrium in 1980 (probably due to energy price shocks), but near equilibrium in 1970 and 1990.

20.
ABSTRACT. Johansen's (1988) multivariate test for cointegration is first applied to four models involving quarterly state data and five variables, along with a national model based on Friedman and Kuttner's (1992) model of money demand, which uses three variables. Each regional model consists of frequently used national and state series, for which theory suggests the possible cointegration of several series pairs. Beginning with all five series, however, one state model is found to be cointegrated over each of 20 successive estimation intervals. The money demand model and one state model are not cointegrated over the same intervals. In the cointegrated case, five-year experimental forecasts show that error correction mechanism (ECM) and Bayesian ECM models outperform all other approaches. More importantly, forecasting performance improves further by respecifying the ECM model based on three cointegrated series pairs rather than the five-component cointegrating vector. For the two noncointegrated systems, the first-difference model suggested by the cointegration/error-correction literature is far superior to VAR in levels over both short- and long-term horizons.
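For reference, a generic single-equation error correction specification of the kind being compared above (our notation, not the paper's exact system):

```latex
\Delta y_t = \alpha\left(y_{t-1} - \boldsymbol{\beta}'\mathbf{x}_{t-1}\right)
  + \sum_{i=1}^{p} \gamma_i\, \Delta y_{t-i}
  + \sum_{j=0}^{q} \boldsymbol{\delta}_j'\, \Delta \mathbf{x}_{t-j}
  + \varepsilon_t,
```

where the term in parentheses is the cointegrating (long-run equilibrium) relationship and α measures the speed of adjustment back toward it.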
