Similar Articles
20 similar articles found (search time: 15 ms)
1.
Bayesian Model Averaging for Spatial Econometric Models
We extend the literature on Bayesian model comparison for ordinary least-squares regression models to include spatial autoregressive and spatial error models. Our focus is on comparing models that consist of different matrices of explanatory variables. A Markov chain Monte Carlo model composition methodology, labeled MC³ by Madigan and York, is developed for two types of spatial econometric models that are frequently used in the literature. The methodology deals with cases where the number of possible models based on different combinations of candidate explanatory variables is so large that calculating posterior probabilities for all models is difficult or infeasible. Estimates and inferences are produced by averaging over models using the posterior model probabilities as weights, a procedure known as Bayesian model averaging. We illustrate the methods using a spatial econometric model of origin–destination population migration flows between the 48 U.S. states and the District of Columbia during the 1990–2000 period.
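As a rough illustration (of the generic MC³ idea for ordinary regression, not the article's spatial autoregressive or spatial error variants), the sampler can be sketched as a random walk over variable-inclusion vectors with a BIC-style approximation to each model's marginal likelihood; all data and settings below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 200, 4
X = rng.normal(size=(n, k))
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(size=n)  # true model uses x0, x2

def log_marginal(gamma):
    """BIC-style approximation to the log marginal likelihood of a model."""
    cols = [j for j in range(k) if gamma[j]]
    Z = np.column_stack([np.ones(n)] + [X[:, j] for j in cols])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    rss = np.sum((y - Z @ beta) ** 2)
    return -0.5 * n * np.log(rss / n) - 0.5 * Z.shape[1] * np.log(n)

# MC^3: Metropolis random walk over models, flipping one inclusion flag at a time
gamma = (0,) * k
cur = log_marginal(gamma)
visits = {}
for _ in range(5000):
    j = rng.integers(k)
    prop = tuple(1 - g if i == j else g for i, g in enumerate(gamma))
    lp = log_marginal(prop)
    if np.log(rng.uniform()) < lp - cur:
        gamma, cur = prop, lp
    visits[gamma] = visits.get(gamma, 0) + 1

# posterior inclusion probability of each candidate regressor
total = sum(visits.values())
pip = [sum(v for g, v in visits.items() if g[j]) / total for j in range(k)]
print([round(p, 2) for p in pip])
```

Averaging any quantity of interest over `visits` with these frequencies as weights is the model-averaging step described in the abstract.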

2.
This paper continues our work focused on developing a new socio-economic geography for Australia such that the chosen spatial aggregation of data is based on an analysis of economic behaviour. The underlying hypothesis is that the development of a geographical classification based on underlying economic behaviour will provide new insights into critical issues of regional performance, including unemployment differentials, the impact of industry, infrastructure and changes in local public expenditure on local labour markets. As a precursor to detailed work on the 2006 Census of Population and Housing data, we establish the proof of concept in this paper of the Intramax methodology using 2001 Journey-to-Work data from the Australian Bureau of Statistics (ABS) for the state of New South Wales. The functional regionalisation generated by the Intramax method is then tested using ABS labour force data. We compare 2001 ABS Census of Population and Housing data aggregated by the ABS labour force regions to the same data aggregated using our functional regions. The results demonstrate the potential value of this technique for the development of a new geography.

3.
Although the need for aggregation in input–output modelling has diminished with the increases in computing power, an alarming number of regional studies continue to use the procedure. The rationales for doing so typically are grounded in data problems at the regional level. As a result, many regional analysts use aggregated national input–output models and trade-adjust them at this aggregated level. In this paper, we point out why this approach can be inappropriate. We do so by noting that it creates a possible source of model misapplication (i.e., a direct effect could appear for a sector where one does not exist) and also by finding that a large amount of error (on the order of 100 percent) can be induced into the impact results as a result of improper aggregation. In simulations, we find that average aggregation error rises as the model is aggregated from 492 to 365 sectors and tends to peak at 81 sectors. Perversely, error then diminishes somewhat as the model size decreases further to 11 and 6 sectors. We also find that while region- and sector-specific attributes influence aggregation error in a statistically significant manner, their influence on the amount of error generally does not appear to be large.
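The mechanism can be shown with a toy, entirely hypothetical four-sector table: aggregating the transactions matrix to two macro sectors before inverting changes the measured total-output impact of the same final-demand shock. This is a sketch of the standard output-weighted aggregation, not the article's simulation design:

```python
import numpy as np

# hypothetical 4-sector technical-coefficient matrix and base-year gross output
A = np.array([[0.10, 0.20, 0.05, 0.00],
              [0.15, 0.05, 0.10, 0.20],
              [0.00, 0.10, 0.15, 0.05],
              [0.05, 0.00, 0.20, 0.10]])
x0 = np.array([100.0, 80.0, 120.0, 60.0])

# aggregation matrix: sectors {0,1} -> macro sector 0, {2,3} -> macro sector 1
S = np.array([[1, 1, 0, 0],
              [0, 0, 1, 1]], dtype=float)

Z = A * x0                         # base-year transactions matrix, Z[i,j] = a[i,j] * x[j]
A_agg = (S @ Z @ S.T) / (S @ x0)   # aggregated coefficients, output-weighted

# total output needed to satisfy a final-demand shock in sector 2 (macro sector 1)
f = np.array([0.0, 0.0, 10.0, 0.0])
impact_full = np.linalg.solve(np.eye(4) - A, f).sum()
impact_agg = np.linalg.solve(np.eye(2) - A_agg, S @ f).sum()

err = abs(impact_agg - impact_full) / impact_full
print(round(impact_full, 3), round(impact_agg, 3), round(err, 3))
```

The gap between the two impact figures is the aggregation error the abstract describes; with real tables and unlucky sector groupings it can be far larger than in this toy case.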

4.
Facility location problems often involve movement between facilities to be located and customers/demand points, with distances between the two being important. For problems with many customers, demand point aggregation may be needed to obtain a computationally tractable model. Aggregation causes error, which should be kept small. We consider a class of minimax location models for which the aggregation may be viewed as a second‐order location problem, and use error bounds as aggregation error measures. We provide easily computed approximate “square root” formulas to assist in the aggregation process. The formulas establish that the law of diminishing returns applies when doing aggregation. Our approach can also facilitate aggregation decomposition for location problems involving multiple “separate” communities.
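A minimal sketch of the error-bound idea, assuming simple grid aggregation in the unit square rather than the article's formulas: the worst-case distance from any demand point to its representative bounds the minimax-objective error, and halving that bound requires roughly quadrupling the number of aggregate points — the diminishing-returns, square-root behavior the abstract refers to:

```python
import numpy as np

rng = np.random.default_rng(1)
pts = rng.uniform(0, 1, size=(2000, 2))  # hypothetical demand points in the unit square

def aggregation_error_bound(points, g):
    """Aggregate onto a g-by-g grid of cell centers and return the worst-case
    distance from any demand point to its representative; this distance bounds
    the error introduced in a minimax location objective."""
    cells = np.floor(points * g).clip(0, g - 1)
    reps = (cells + 0.5) / g
    return np.max(np.linalg.norm(points - reps, axis=1))

bounds = {g * g: aggregation_error_bound(pts, g) for g in (2, 4, 8, 16)}
for q, b in bounds.items():
    print(q, round(b, 4))
```

Quadrupling the number of aggregate points q roughly halves the bound, i.e. the bound shrinks like a constant over the square root of q.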

5.
After over a century of archaeological research in the American Southwest, questions focusing on population aggregation and abandonment continue to preoccupy much of Pueblo archaeology. This article presents a historical overview of the present range of explanatory approaches to these two processes, with a primary focus on population aggregation in those regions occupied by historic and prehistoric Pueblo peoples. We stress the necessarily complementary nature of most of these explanations of residential abandonment and aggregation. Case studies from the northern Southwest illustrate the continuous nature of these processes across time and space. We suggest that additional explanatory potential will be gained by the use of well-defined theoretical units to frame our current approaches. We extend the use of the local community concept as a theoretical unit of organization that, along with explicit archaeological correlates, should help advance our research into population aggregation and abandonment in this and other regions of the world.

6.
When solving a location problem using aggregated units to represent demand, it is well known that the process of aggregation introduces error. Research has focussed on individual components of error, with little work on identifying and controlling total error. We provide a focussed review of some of this literature and suggest a strategy for controlling total error. Consideration of alternative criteria for evaluating aggregation schemes shows that the method selected should be compatible with the objectives of the analyses in which it is used. Experiments are described that show that two different measures of error are related in a nonlinear way to the number of aggregate demand points (q), for any value of the number of facilities (p). We focus on the parameter q/p and show that it is critical for determining the expected severity of the error. Many practical implementations of location algorithms operate within the range of q/p where the rate of change of error with respect to q/p is highest.
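As a hedged illustration of the q/p dependence (hypothetical data, and a fixed facility siting rather than an optimized one), one can aggregate demand to q grid centroids and track the relative error in a demand-weighted sum-of-distances objective as q grows relative to p:

```python
import numpy as np

rng = np.random.default_rng(2)
demand = rng.uniform(0, 1, size=(3000, 2))
p = 5
facilities = rng.uniform(0, 1, size=(p, 2))   # one fixed, hypothetical siting

def weighted_cost(points, weights, fac):
    """Demand-weighted sum of distances to the nearest facility."""
    d = np.linalg.norm(points[:, None, :] - fac[None, :, :], axis=2)
    return float(np.sum(weights * d.min(axis=1)))

true_cost = weighted_cost(demand, np.ones(len(demand)), facilities)

errs = {}
for g in (2, 4, 8, 16):                        # q = g*g aggregate demand points
    idx = (np.floor(demand[:, 0] * g).clip(0, g - 1) * g
           + np.floor(demand[:, 1] * g).clip(0, g - 1)).astype(int)
    q = g * g
    w = np.bincount(idx, minlength=q).astype(float)
    cx = np.bincount(idx, weights=demand[:, 0], minlength=q)
    cy = np.bincount(idx, weights=demand[:, 1], minlength=q)
    keep = w > 0                               # weighted centroids of non-empty cells
    reps = np.column_stack([cx[keep] / w[keep], cy[keep] / w[keep]])
    errs[q / p] = abs(weighted_cost(reps, w[keep], facilities) - true_cost) / true_cost

for ratio, e in errs.items():
    print(ratio, round(e, 4))
```

The error falls steeply at small q/p and flattens as q/p grows, consistent with the nonlinear relationship the abstract reports.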

7.
The geography of the Canadian economy has long been dominated by heartland-hinterland contrasts, with manufacturing identified as the dominant function of most heartland cities in analyses of the 1961 and 1971 census data. However, the proportion of employment in manufacturing has been declining in the heartland provinces of Ontario and Quebec over the past fifty years, and some geographers argue that the heartland-hinterland dimension of the regional economy is being overridden by city-regions that are integrated into global networks of production and trade. The heartland-hinterland trends are examined using multifactor partitioning (MFP), an advanced shift-share methodology, for the period of 2001–2006. This is the first intercensal period in which Canadian business has faced the full impact of the removal of North American tariff protection and the increased globalization of the Canadian economy. The data cover employment by eighteen industry sectors for the seventy-three economic regions defined by Statistics Canada. MFP measures the region and industry-mix effects, which are interpreted as in the traditional shift-share model (though they are derived more accurately) and, in addition, an interaction effect. The results demonstrate that the broad heartland-hinterland differences in the distribution of population and employment growth are increasing, not decreasing, and that the hinterland is in fact falling further behind the heartland in employment growth. However, the Calgary-Edmonton corridor and the Lower Mainland of British Columbia are emerging as a western heartland. The population size of cities does affect their rates of employment growth, but so too does their location: the growth of heartland cities is outpacing those in the hinterland. The Appendix provides the equations for two-variable multifactor partitioning.

8.
Spatial clusters contain biases and artifacts, whether they are defined via statistical algorithms or via expert judgment. Graph-based partitioning of spatial data and associated heuristics gained popularity due to their scalability but can define suboptimal regions due to algorithmic biases such as chaining. Despite the broad literature on deterministic regionalization methods, approaches that quantify regionalization probability are sparse. In this article, we propose a local method to quantify regionalization probabilities for regions defined via graph-based cuts and expert-defined regions. We conceptualize spatial regions as consisting of two types of spatial elements: core and swing. We define three distinct types of regionalization biases that occur in graph-based methods and showcase the use of the proposed method to capture these types of biases. Additionally, we propose an efficient solution to the probabilistic graph-based regionalization problem via performing optimal tree cuts along random spanning trees within an evidence accumulation framework. We perform statistical tests on synthetic data to assess resulting probability maps for varying distinctness of underlying regions and regionalization parameters. Lastly, we showcase the application of our method to define probabilistic ecoregions using climatic and remotely sensed vegetation indicators and apply our method to assign probabilities to the expert-defined Bailey's ecoregions.

9.
In the wake of the voting controversy of Election 2000, along with passage of a congressional measure designed to fix what many believe is an ailing voting system, the impact of voting equipment on residual voting error has become a crucial research question as the states prepare to replace existing voting equipment using matching federal funds, to adjust existing equipment, or to face yet more lawsuits. Most existing studies of the link between voting equipment and residual voting error have compared voting equipment across the states rather than within individual states, generating results that are subject to a possible aggregation bias. Using a variety of statistical techniques, data on the Election 2000 U.S. presidential and U.S. senatorial races are analyzed to determine the impact of voting equipment on intrastate voting error levels in those races. This study presents analysis of data from two states, Wyoming and Pennsylvania, and argues that the infamous punch-card voting equipment may not be a significant contributor to an increase in voter error when analyzed intrastate, contrary to existing research indicating that it is significant when analyzed across multiple states. This research underscores the importance of researchers' ideological perspectives in the application of statistical methodology to the American policy arena.

10.
This article categorizes existing maximum coverage optimization models for locating ambulances based on whether the models incorporate uncertainty about (1) ambulance availability and (2) response times. Data from Edmonton, Alberta, Canada are used to test five different models, using the approximate hypercube model to compare solution quality between models. The basic maximum covering model, which ignores these two sources of uncertainty, generates solutions that perform far worse than those generated by more sophisticated models. For a specified number of ambulances, a model that incorporates both sources of uncertainty generates a configuration that covers up to 26% more of the demand than the configuration produced by the basic model.
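The basic maximal covering model (the uncertainty-free benchmark the article finds inadequate) can be approximated with a simple greedy heuristic; the demand points, weights, candidate sites, and coverage radius below are invented for illustration, not the Edmonton data:

```python
import numpy as np

rng = np.random.default_rng(3)
demand_xy = rng.uniform(0, 10, size=(200, 2))            # demand points
weights = rng.integers(1, 20, size=200).astype(float)    # calls per point
sites_xy = rng.uniform(0, 10, size=(25, 2))              # candidate stations
radius = 2.5                                             # coverage standard

# covers[j, i] is True if candidate site j covers demand point i
d = np.linalg.norm(sites_xy[:, None, :] - demand_xy[None, :, :], axis=2)
covers = d <= radius

def greedy_mclp(covers, weights, p):
    """Greedily pick p sites maximizing newly covered demand at each step
    (the classic submodular-maximization heuristic for maximal covering)."""
    chosen, covered = [], np.zeros(covers.shape[1], dtype=bool)
    for _ in range(p):
        gains = [weights[~covered & covers[j]].sum() for j in range(len(covers))]
        j = int(np.argmax(gains))
        chosen.append(j)
        covered |= covers[j]
    return chosen, weights[covered].sum() / weights.sum()

chosen, frac = greedy_mclp(covers, weights, p=4)
print(chosen, round(frac, 3))
```

The more sophisticated models in the article effectively replace the binary `covers` relation with probabilities of ambulance availability and response-time attainment.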

11.
In this paper we examine dynamic relationships among wheat prices from five countries for the years 1981–1999. Error correction models and directed acyclic graphs are employed with observational data to sort out the dynamic causal relationships among prices from major wheat producing regions: Canada, the European Union, Argentina, Australia, and the United States. An ambiguity related to the cyclic or acyclic flow of information between Canada and Australia is uncovered. We condition our analysis on the assumption that information flow is acyclic. The empirical results show that Canada and the U.S. are leaders in the pricing of wheat in these markets. The U.S. has a significant effect on the three markets other than Canada.

12.
ABSTRACT We analyze the resilience of U.K. regions to employment shocks. Two basic notions of resilience are distinguished. With engineering resilience, there is an underlying stable growth path to which a regional economy rebounds following a shock. With ecological resilience, shocks can permanently affect the growth path of the regional economy. Our data set consists of quarterly employment series for 12 U.K. regions (NUTS I) for the period 1971–2010. Using a seemingly unrelated regression (SUR) model specification, we test for the relevance of (engineering) resilience of U.K. regional employment to the four recessionary shocks in our sample. It turns out that U.K. regions do indeed differ in their resilience, but that these differences mainly concern the initial resistance to these shocks and not so much the recovery stage. The SUR model does not allow shocks to have permanent effects and it also does not take the possibility of time-differentiated shock spillovers between the 12 regions into account. To this end, we also estimate a vector error-correction model (VECM) specification where employment shocks can have permanent effects and where interregional employment linkages are also included. We find that employment shocks typically have permanent effects when it concerns the own-region effects. Permanent effects can also be found for the impact on other regions, but the interregional effects are typically only significant for nearby regions.

13.
This article presents a new spatial modeling approach that deals with interactions between individual geographic entities. The developed model represents a generalization of the transportation problem and the classical assignment problem and is termed the hierarchical assignment problem (HAP). The HAP optimizes the spatial flow pattern between individual origin and destination locations, given that some grouping, or aggregation of individual origins and destinations is permitted to occur. The level of aggregation is user specified, and the aggregation step is endogenous to the model itself. This allows for the direct accounting of aggregation costs in pursuit of optimal problem solutions. The HAP is formulated and solved with several sample data sets using commercial optimization software. Trials illustrate how HAP solutions respond to changes in levels of aggregation, as well as reveal the diverse network designs and allocation schemes obtainable with the HAP. Connections between the HAP and the literature on the p-median problem, cluster analysis, and hub-and-spoke networks are discussed and suggestions for future research are made.
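The classical assignment problem that the HAP generalizes can be stated and solved exactly on a tiny instance by enumeration (the HAP itself adds the endogenous aggregation layer and, as the abstract notes, requires commercial MIP software); the cost matrix below is hypothetical:

```python
import itertools
import numpy as np

# hypothetical origin-destination cost matrix (5 origins, 5 destinations)
rng = np.random.default_rng(4)
cost = rng.integers(1, 100, size=(5, 5))

def assignment_bruteforce(C):
    """Exactly solve the classical one-to-one assignment problem by enumerating
    all permutations (fine only for tiny instances)."""
    n = C.shape[0]
    best_perm, best_cost = None, np.inf
    for perm in itertools.permutations(range(n)):
        c = sum(C[i, perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

perm, total = assignment_bruteforce(cost)
print(perm, total)
```

In the HAP, rows and columns of such a matrix may additionally be merged into groups before assignment, with the cost of that aggregation entering the objective directly.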

14.
In this paper, we propose a recursive approach to estimating the spatial error model. We compare the suggested methodology with standard estimation procedures and report a set of Monte Carlo experiments showing that the recursive approach substantially reduces the computational effort while keeping the loss in estimator precision within reasonable limits. The proposed technique can prove helpful when applied to real-time streams of geographical data that are becoming increasingly available in the big data era. Finally, we illustrate this methodology using a set of earthquake data.

15.
ABSTRACT. Average monthly price data from twelve hinterland markets and the Houston port price for wheat are studied in a cointegration framework using the Engle-Granger "two-step" procedure and Johansen's maximum likelihood procedure. Out-of-sample forecasts from an error correction model are compared to those from a vector autoregression fit to levels and a univariate autoregression fit to first differences. This comparison suggests that modeling these (cointegrated) data as a levels vector autoregression, rather than as an error-correction process, results in significantly higher error bias, but lower error variance, at long horizons.
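A minimal sketch of the Engle-Granger two-step procedure on synthetic cointegrated "port" and "hinterland" prices (illustrative data, not the article's series): first regress the levels to obtain the cointegrating residual, then use its lag as the error-correction term in a regression of differences:

```python
import numpy as np

rng = np.random.default_rng(5)
T = 500
# synthetic cointegrated prices: a shared random-walk trend plus stationary noise
trend = np.cumsum(rng.normal(size=T))
port = trend + rng.normal(scale=0.5, size=T)
hinterland = 2.0 + 1.0 * trend + rng.normal(scale=0.5, size=T)

# Step 1: cointegrating regression of the hinterland price on the port price
Xc = np.column_stack([np.ones(T), port])
beta, *_ = np.linalg.lstsq(Xc, hinterland, rcond=None)
resid = hinterland - Xc @ beta                    # equilibrium error

# Step 2: ECM: d(hinterland)_t on resid_{t-1} and lagged differences
dh, dp = np.diff(hinterland), np.diff(port)
Z = np.column_stack([np.ones(T - 2), resid[1:-1], dh[:-1], dp[:-1]])
gamma, *_ = np.linalg.lstsq(Z, dh[1:], rcond=None)
print(round(beta[1], 2), round(gamma[1], 2))
```

A cointegrating slope near one and a negative coefficient on the lagged residual (prices adjusting back toward equilibrium) are the signatures the two-step procedure looks for.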

16.
The aim of this article is to find optimal or nearly optimal designs for experiments to detect spatial dependence that might be present in the data. The questions to be answered are: how to optimally select predictor values so as to detect spatial structure (if it exists), and how to avoid spuriously detecting spatial dependence when no such structure is present. The starting point of this analysis involves two different linear regression models: (1) an ordinary linear regression model with i.i.d. error terms, the nonspatial case, and (2) a regression model with a spatially autocorrelated error term, a so-called simultaneous spatial autoregressive error model. The procedure can be divided into two main parts: the first is the use of an exchange algorithm to find the optimal design for the respective data collection process; for its evaluation, an artificial data set was generated and used. The second is estimation of the parameters of the regression model and calculation of Moran's I, which is used as an indicator of spatial dependence in the data set. The method is illustrated by applying it to a well-known case study in spatial analysis.
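Moran's I, the indicator used in the second stage, is straightforward to compute. Below is a sketch on a hypothetical 10x10 lattice with rook-contiguity weights, contrasting i.i.d. noise (no spatial structure) with a spatially smooth surface:

```python
import numpy as np

def morans_i(values, W):
    """Moran's I for values at n locations with spatial weight matrix W
    (n x n, zero diagonal). Values near the null expectation -1/(n-1)
    suggest no spatial autocorrelation; clearly positive values suggest
    clustering of similar values."""
    z = values - values.mean()
    n = len(values)
    return (n / W.sum()) * (z @ W @ z) / (z @ z)

# rook-contiguity weights on a 10x10 lattice
n_side = 10
n = n_side * n_side
W = np.zeros((n, n))
for r in range(n_side):
    for c in range(n_side):
        i = r * n_side + c
        if r + 1 < n_side:
            W[i, i + n_side] = W[i + n_side, i] = 1
        if c + 1 < n_side:
            W[i, i + 1] = W[i + 1, i] = 1

rng = np.random.default_rng(6)
noise = rng.normal(size=n)                      # i.i.d.: no spatial structure
xs, ys = np.meshgrid(np.arange(n_side), np.arange(n_side), indexing="ij")
gradient = (xs + ys).ravel() + rng.normal(scale=0.5, size=n)  # smooth surface

print(round(morans_i(noise, W), 3), round(morans_i(gradient, W), 3))
```

In the article's setting, the design question is which predictor values make this statistic powerful against true spatial dependence without inflating it under the i.i.d. model.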

17.
The p‐regions problem involves the aggregation or clustering of n small areas into p spatially contiguous regions while optimizing some criteria. The main objective of this article is to explore possible avenues for formulating this problem as a mixed integer‐programming (MIP) problem. The critical issue in formulating this problem is to ensure that each region is a spatially contiguous cluster of small areas. We introduce three MIP models for solving the p‐regions problem. Each model minimizes the sum of dissimilarities between all pairs of areas within each region while guaranteeing contiguity. Three strategies designed to ensure contiguity are presented: (1) an adaptation of the Miller, Tucker, and Zemlin tour‐breaking constraints developed for the traveling salesman problem; (2) the use of ordered‐area assignment variables based upon an extension of an approach by Cova and Church for the geographical site design problem; and (3) the use of flow constraints based upon an extension of work by Shirabe. We test the efficacy of each formulation as well as specify a strategy to reduce overall problem size.
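The feasibility condition that all three MIP formulations enforce — each region forming a connected piece of the contiguity graph — can be checked for a candidate region with a breadth-first search; the adjacency structure below is a made-up 2x3 grid of areas, not data from the article:

```python
from collections import deque

def is_contiguous(region, adjacency):
    """Check that the set of areas `region` induces a connected subgraph of
    the contiguity graph `adjacency` (dict: area -> list of neighbors)."""
    region = set(region)
    if not region:
        return True
    seen, queue = set(), deque([next(iter(region))])
    while queue:
        a = queue.popleft()
        if a in seen:
            continue
        seen.add(a)
        queue.extend(n for n in adjacency[a] if n in region and n not in seen)
    return seen == region

# hypothetical rook adjacency for a 2x3 grid of areas labeled 0..5:
#   0 1 2
#   3 4 5
adj = {0: [1, 3], 1: [0, 2, 4], 2: [1, 5], 3: [0, 4], 4: [1, 3, 5], 5: [2, 4]}
print(is_contiguous({0, 1, 2}, adj), is_contiguous({0, 2, 5}, adj))
```

The tour-breaking, ordered-assignment, and flow constraints are three different ways of expressing this same connectivity requirement in linear form so a MIP solver can enforce it.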

18.
Ensuring equity of access to primary health care (PHC) across Canada is a continuing challenge, especially in rural and remote regions. Despite considerable attention recently by the World Health Organization, Health Canada and other health policy bodies, there has been no nation-wide study of potential (versus realized) spatial access to PHC. This knowledge gap is partly attributable to the difficulty of conducting the analysis required to accurately measure and represent spatial access to PHC. The traditional epidemiological method uses a simple ratio of PHC physicians to the denominator population to measure geographical access. We argue, however, that this measure fails to capture relative access. For instance, a person who lives 90 minutes from the nearest PHC physician is unlikely to be as well cared for as the individual who lives more proximate and potentially has a range of choice with respect to PHC providers. In this article, we discuss spatial analytical techniques to measure potential spatial access. We consider the relative merits of kernel density estimation and a gravity model. Ultimately, a modified version of the gravity model is developed for this article and used to calculate potential spatial access to PHC physicians in the Canadian province of Nova Scotia. This model incorporates a distance decay function that better represents relative spatial access to PHC. The results of the modified gravity model demonstrate greater nuance with respect to potential access scores. While variability in access to PHC physicians across the test province of Nova Scotia is evident, the gravity model better accounts for real access by assuming that people can travel across artificial census boundaries. We argue that this is an important innovation in measuring potential spatial access to PHC physicians in Canada. It contributes more broadly to assessing the success of policy mandates to enhance the equitability of PHC provisioning in Canadian provinces.
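A minimal sketch of a gravity-type access score with a distance-decay function (the functional form, decay exponent, and all locations and supply figures below are assumptions for illustration, not the article's calibrated Nova Scotia model):

```python
import numpy as np

def gravity_access(pop_xy, doc_xy, doc_supply, beta=1.5):
    """Potential spatial access score for each population point: physician
    supply summed over practices, discounted by a power distance-decay."""
    d = np.linalg.norm(pop_xy[:, None, :] - doc_xy[None, :, :], axis=2)
    decay = 1.0 / (1.0 + d) ** beta        # bounded decay, avoids divide-by-zero
    return decay @ doc_supply

rng = np.random.default_rng(7)
pop_xy = rng.uniform(0, 100, size=(50, 2))       # population points (km grid)
doc_xy = np.array([[20.0, 20.0], [25.0, 25.0], [80.0, 85.0]])
doc_supply = np.array([3.0, 2.0, 1.0])           # FTE physicians per practice

scores = gravity_access(pop_xy, doc_xy, doc_supply)
# a resident near the physician cluster versus one in an empty corner
near = gravity_access(np.array([[22.0, 22.0]]), doc_xy, doc_supply)[0]
far = gravity_access(np.array([[95.0, 5.0]]), doc_xy, doc_supply)[0]
print(round(near, 3), round(far, 3))
```

Unlike a physician-to-population ratio computed within census boundaries, the score varies smoothly with distance, which is the "relative access" behavior the abstract argues for.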

19.
The p‐regions is a mixed integer programming (MIP) model for the exhaustive clustering of a set of n geographic areas into p spatially contiguous regions while minimizing measures of intraregional heterogeneity. This is an NP‐hard problem that demands continuing research into strategies for increasing the size of instances that can be solved using exact optimization techniques. In this article, we explore the benefits of an iterative process that begins by solving a relaxed version of the p‐regions model that removes the constraints guaranteeing the spatial contiguity of the regions. Additional constraints are then incorporated iteratively to resolve spatial discontinuities in the regions. In particular, we explore the relationship between the level of spatial autocorrelation of the aggregation variable and the benefits obtained from this iterative process. The results show that high levels of spatial autocorrelation reduce computational times because the spatial patterns tend to create spatially contiguous regions. However, we found that the greatest benefits are obtained in two situations: (1) when ; and (2) when the parameter p is close to the number of clusters in the spatial pattern of the aggregation variable.

20.
Dry times: hard lessons from the Canadian drought of 2001 and 2002
Droughts are one of the world's most significant natural hazards. They have major impacts on the economy, environment, health and society. In 2001 and 2002, many regions within Canada experienced unprecedented drought conditions, or conditions unseen for at least 100 years in some regions. This article draws upon a national assessment of this drought with particular attention to its implications for the agriculture and water sectors, although some attention is also devoted to other sectors. The study's methodology involves a comprehensive inter-disciplinary, cause–effect integrated framework as a basis to explore the characteristics of drought and the associated biological and physical impacts and socio-economic consequences. Numerous primary and secondary sources of data were used, including public and semi-public sources such as Agriculture and Agri-Food Canada, Environment Canada, Statistics Canada, Crop Insurance Corporations and provincial governments, as well as phone interviews, focus groups, print media surveys and economic modelling. Evidence indicates that the risk of drought is increasing as demands for food and water relentlessly climb and the manifestations of climate change become more apparent. The key to better dealing with drought lies in taking the steps necessary to enhance our adaptive capacity and decrease vulnerability.
