Similar Documents
Found 20 similar documents (search time: 109 ms)
1.
Facility location problems often involve movement between facilities to be located and customers/demand points, with distances between the two being important. For problems with many customers, demand point aggregation may be needed to obtain a computationally tractable model. Aggregation causes error, which should be kept small. We consider a class of minimax location models for which the aggregation may be viewed as a second-order location problem, and use error bounds as aggregation error measures. We provide easily computed approximate “square root” formulas to assist in the aggregation process. The formulas establish that the law of diminishing returns applies when doing aggregation. Our approach can also facilitate aggregation decomposition for location problems involving multiple “separate” communities.

2.
The p-median problem is a powerful tool in analyzing facility location options when the goal of the location scheme is to minimize the average distance that demand must traverse to reach its nearest facility. It may be used to determine the number of facilities to site, as well as the actual facility locations. Demand data are frequently aggregated in p-median location problems to reduce the computational complexity of the problem. Demand data aggregation, however, results in the loss of locational information. This loss may lead to suboptimal facility location configurations (optimality errors) and inaccurate measures of the resulting travel distances (cost errors). Hillsman and Rhoda (1978) have identified three error components: Source A, B, and C errors, which may result from demand data aggregation. In this article, a method to measure weighted travel distances in p-median problems which eliminates Source A and B errors is proposed. Test problem results indicate that the proposed measurement scheme yields solutions with lower optimality and cost errors than does the traditional distance measurement scheme.
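For readers unfamiliar with the model, the standard p-median formulation (textbook notation, not reproduced from the article) is:

```latex
\begin{aligned}
\min \quad & \sum_{i}\sum_{j} w_i\, d_{ij}\, y_{ij} \\
\text{s.t.} \quad & \sum_{j} y_{ij} = 1 \quad \forall i, \\
& y_{ij} \le x_j \quad \forall i,\, j, \\
& \sum_{j} x_j = p, \qquad x_j,\, y_{ij} \in \{0,1\},
\end{aligned}
```

where $w_i$ is the demand at point $i$, $d_{ij}$ the distance from demand point $i$ to candidate site $j$, $x_j = 1$ if a facility is opened at site $j$, and $y_{ij} = 1$ if demand point $i$ is assigned to site $j$. Aggregating demand points changes the $w_i$ and $d_{ij}$, which is where the Source A, B, and C errors enter.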

3.
In this paper, we extend the concepts of demand data aggregation error to location problems involving coverage. These errors, which arise from losses in locational information, may lead to suboptimal location patterns. They are potentially more significant in covering problems than in p-median problems because the distance metric is binary in covering problems. We examine the Hillsman and Rhoda (1978) Source A, B, and C errors, identify their coverage counterparts, and relate them to the cost and optimality errors that may result. Three rules are then presented which, when applied during data aggregation, will reduce these errors. The third rule will, in fact, eliminate all loss of locational information, but may also limit the amount of aggregation possible. Results of computational tests on a large-scale problem are presented to demonstrate the performance of rule 3.
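As a point of reference, the maximal covering location problem, one common covering model (again in textbook rather than the article's notation), makes the binary distance metric explicit:

```latex
\begin{aligned}
\max \quad & \sum_{i} w_i\, z_i \\
\text{s.t.} \quad & z_i \le \sum_{j \in N_i} x_j \quad \forall i, \qquad N_i = \{\, j : d_{ij} \le S \,\}, \\
& \sum_{j} x_j = p, \qquad x_j,\, z_i \in \{0,1\},
\end{aligned}
```

where $S$ is the coverage standard. Demand $i$ counts as covered ($z_i = 1$) only if some facility lies within $S$; distance enters only through the binary sets $N_i$, which is why aggregation that shifts a demand point across a coverage boundary is especially damaging.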

4.
Location-allocation solutions based on aggregate estimates of demand are subject to error because of a loss of locational information during aggregation. It is shown that any method to remove or reduce uncertainty must be solution-specific and therefore impractical, for both median and center classes of problems. The significance of the error is illustrated by simulation of solutions to a number of artificial and real problems. It is suggested that aggregation problems be specifically addressed in applications of location-allocation models, and possible methods are proposed.

5.
Multiple Facilities Location in the Plane Using the Gravity Model
Two problems are considered in this article. Both problems seek the location of p facilities. The first problem is the p-median, where the total distance traveled by customers is minimized. The second problem focuses on equalizing demand across facilities by minimizing the variance of total demand attracted to each facility. These models are unique in that the gravity rule is used for the allocation of demand among facilities rather than assuming that each customer selects the closest facility. In addition, we also consider a multiobjective approach, which combines the two objectives. We propose heuristic solution procedures for the problem in the plane. Extensive computational results are presented.
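A minimal sketch of the gravity allocation rule (an illustrative Huff-type form with exponential decay; the parameter names and the decay function are assumptions, not taken from the article):

```python
import numpy as np

def gravity_allocation(demand, dist, attract, beta=1.0):
    # Gravity (Huff-type) rule: each demand point splits its demand across the
    # facilities in proportion to attractiveness * exp(-beta * distance).
    # demand: (n,), dist: (n, p), attract: (p,)
    util = attract * np.exp(-beta * dist)            # (n, p) facility "pull" at each demand point
    shares = util / util.sum(axis=1, keepdims=True)  # allocation shares, each row sums to 1
    return shares * demand[:, None]                  # (n, p) demand captured by each facility
```

Under these assumptions, the total-distance objective is `(gravity_allocation(...) * dist).sum()` and the equalization objective is `gravity_allocation(...).sum(axis=0).var()` across facilities.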

6.
A datum is considered spatial if it contains location information. Typically, there is also attribute information, whose distribution depends on its location. Thus, error in location information can lead to error in attribute information, which is reflected ultimately in the inference drawn from the data. We propose a statistical model for incorporating location error into spatial data analysis. We investigate the effect of location error on the spatial lag, the covariance function, and optimal spatial linear prediction (that is, kriging). We show that the form of kriging after adjusting for location error is the same as that of kriging without adjusting for location error. However, location error changes entries in the matrix of explanatory variables, the matrix of covariances between the sample sites, and the vector of covariances between the sample sites and the prediction location. We investigate, through simulation, the effect that varying trend, measurement error, location error, range of spatial dependence, sample size, and prediction location have on kriging after and without adjusting for location error. When the location error is large, kriging after adjusting for location error performs markedly better than kriging without adjusting for location error, in terms of both the prediction bias and the mean squared prediction error.
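For context, the standard universal kriging predictor (a generic statement, not the authors' adjusted version) makes the three ingredients mentioned above explicit:

```latex
\hat{Z}(s_0) = \mathbf{x}(s_0)^{\top}\hat{\boldsymbol{\beta}}
  + \mathbf{c}^{\top}\boldsymbol{\Sigma}^{-1}\bigl(\mathbf{Z} - X\hat{\boldsymbol{\beta}}\bigr),
\qquad
\hat{\boldsymbol{\beta}} = \bigl(X^{\top}\boldsymbol{\Sigma}^{-1}X\bigr)^{-1}X^{\top}\boldsymbol{\Sigma}^{-1}\mathbf{Z},
```

where $X$ is the matrix of explanatory variables at the sample sites, $\boldsymbol{\Sigma}$ the covariance matrix between the sample sites, and $\mathbf{c}$ the vector of covariances between the sample sites and the prediction location $s_0$. Adjusting for location error leaves this form intact but changes the entries of $X$, $\boldsymbol{\Sigma}$, and $\mathbf{c}$.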

7.
Although the need for aggregation in input–output modelling has diminished with the increases in computing power, an alarming number of regional studies continue to use the procedure. The rationales for doing so typically are grounded in data problems at the regional level. As a result many regional analysts use aggregated national input–output models and trade-adjust them at this aggregated level. In this paper, we point out why this approach can be inappropriate. We do so by noting that it creates a possible source of model misapplication (i.e., a direct effect could appear for a sector where one does not exist) and also by finding that a large amount of error (on the order of 100 percent) can be induced into the impact results as a result of improper aggregation. In simulations, we find that average aggregation error rises as the model is aggregated from 492 to 365 sectors and tends to peak at 81 sectors. Perversely, error then diminishes somewhat as the model size decreases further to 11 and 6 sectors. We also find that while region- and sector-specific attributes influence aggregation error in a statistically significant manner, their influence on the amount of error generally does not appear to be large.
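A small sketch of how such aggregation error can be measured (generic input-output bookkeeping, not the authors' simulation design; the membership matrix `S` and the column-scaling convention are assumptions):

```python
import numpy as np

def aggregation_error(Z, x, f, S):
    # Z: (n, n) inter-industry flows, x: (n,) gross outputs, f: (n,) final-demand shock,
    # S: (m, n) 0/1 membership matrix assigning each of the n sectors to one of m macro sectors.
    A = Z / x                                                    # technical coefficients a_ij = z_ij / x_j
    detailed = np.linalg.solve(np.eye(len(x)) - A, f)            # detailed impacts (I - A)^-1 f
    Z_agg, x_agg, f_agg = S @ Z @ S.T, S @ x, S @ f              # aggregate flows, outputs, shock
    A_agg = Z_agg / x_agg
    aggregated = np.linalg.solve(np.eye(len(x_agg)) - A_agg, f_agg)
    benchmark = S @ detailed                                     # detailed impacts summed to macro sectors
    return np.abs(aggregated - benchmark) / np.abs(benchmark)    # relative aggregation error by macro sector
```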

8.
This article develops and calibrates a spatial interaction model (SIM) incorporating additional temporal characteristics of consumer demand for the U.K. grocery market. SIMs have been routinely used by the retail sector for location modeling and revenue prediction and have a good record of success, especially in the supermarket/hypermarket sector. However, greater planning controls and a more competitive trading environment in recent years have forced retailers to look to new markets. This has meant a greater focus on the convenience market, which creates new challenges for retail location models. In this article, we present a custom-built SIM for the grocery market in West Yorkshire incorporating trading and consumer data provided by a major U.K. retailer. We show that this model works well for supermarkets and hypermarkets but poorly for convenience stores. We then build a series of new demand layers taking into account the spatial distributions of demand at the time of day that consumers are likely to use grocery stores. These new demand layers include workplace populations, university student populations and secondary school children. When these demand layers are added to the models, we see a very promising increase in the accuracy of the revenue forecasts.
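A minimal production-constrained SIM of the kind described, with hypothetical parameter names (the retailer's calibrated attractiveness and decay terms are not public):

```python
import numpy as np

def sim_revenues(demand, attract, cost, beta):
    # Production-constrained SIM: S_ij proportional to demand_i * attract_j * exp(-beta * cost_ij),
    # normalized so that each demand zone's spending is fully allocated.
    # demand: (n,) spending by demand zone; attract: (m,) store attractiveness;
    # cost: (n, m) travel cost; beta: distance-decay parameter.
    w = attract * np.exp(-beta * cost)                           # (n, m) weighted accessibility terms
    flows = demand[:, None] * w / w.sum(axis=1, keepdims=True)   # S_ij, rows sum to zone demand
    return flows.sum(axis=0)                                     # predicted revenue per store

# Separate demand layers (residential, workplace, students, ...) can simply be summed:
# revenue = sum(sim_revenues(layer, attract, cost, beta) for layer in demand_layers)
```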

9.
ABSTRACT Equilibrium in spatial models invariably depends on firms' conjectures about how competitors will react to their price changes. This paper analyzes spatial price and location equilibrium when firms hold consistent (i.e. correct) conjectures. Most spatial models assume an exogenous conjecture. Consistent conjectures are one method, albeit a controversial one, for endogenizing the conjecture. We show that the consistent conjecture about a competitor's reaction to a price change in the simplest case is 1/3. When demand is elastic the consistent conjecture is a decreasing function of the radius. It is always below 1/3 and can be negative. In the third model, we show that the consistent conjecture declines as the number of dimensions and the number of competitors increase.

10.
The vector assignment p-median problem (VAPMP) is one of the first discrete location problems to account for the service of a demand by multiple facilities, and has been used to model a variety of location problems in addressing issues such as system vulnerability and reliability. Specifically, it involves the location of a fixed number of facilities when the assumption is that each demand point is served a certain fraction of the time by its closest facility, a certain fraction of the time by its second closest facility, and so on. The assignment vector represents the fraction of the time a facility of a given closeness order serves a specific demand point. Weaver and Church showed that when the fractions of assignment to closer facilities are greater than those to more distant facilities, an optimal all-node solution always exists. However, the general form of the VAPMP does not have this property. Hooker and Garfinkel provided a counterexample of this property for the nonmonotonic VAPMP, but they did not conjecture what such a finite set might be in general. The question of whether there exists a finite set of locations that contains an optimal solution has remained open. In this article, we prove that a finite optimality set for the VAPMP consisting of “equidistant points” does exist. We also show a stronger result when the underlying network is a tree graph.
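In the usual notation (a generic statement of the objective, not quoted from the article), the VAPMP locates a set $X$ of $p$ facilities to minimize

```latex
\min_{|X| = p} \; \sum_{i} w_i \sum_{k=1}^{p} \lambda_k\, d_{i,(k)}(X),
```

where $w_i$ is the demand at point $i$, $\lambda_k$ is the fraction of time demand is served by its $k$-th closest located facility (the assignment vector), and $d_{i,(k)}(X)$ is the distance from $i$ to its $k$-th closest facility in $X$. The monotonic case studied by Weaver and Church corresponds to $\lambda_1 \ge \lambda_2 \ge \dots \ge \lambda_p$.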

11.
ABSTRACT An important goal in many planning contexts is maximizing primary and secondary (or backup) coverage while locating a specified number of service facilities. In general, we are interested in providing the greatest level of coverage to demand that is continuously distributed across space. A critical issue is how to represent continuous demand in coverage analysis, reducing or eliminating error and uncertainty. This paper evaluates representation issues in primary and secondary coverage location modeling. To overcome representational limitations, enhancements for spatial coverage abstraction are introduced and incorporated in a mathematical optimization model. In addition to model improvements, this paper introduces a new and novel error assessment approach arising due to the existence of multiple objectives. Surveillance sensor siting in an urban area is utilized to assess enhanced modeling capabilities.

12.
The p-center problem is one of the most important models in location theory. Its objective is to place a fixed number of facilities so that the maximum service distance for all customers is as small as possible. This article develops a reliable p-center problem that can account for system vulnerability and facility failure. A basic assumption is that located centers can fail with a given probability and a customer will fall back to the closest nonfailing center for service. The proposed model seeks to minimize the expected value of the maximum service distance for a service system. In addition, the proposed model is general and can be used to solve other fault-tolerant center location problems such as the (p, q)-center problem using appropriate assignment vectors. I present an integer programming formulation of the model and computational experiments, and then conclude with a summary of findings and point out possible future work.
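A simple illustrative calculation of the per-customer expected service distance under independent failures (a sketch only, not the paper's full model, which minimizes the expected value of the system-wide maximum distance):

```python
def expected_service_distance(distances, q):
    # distances: distances from one customer to each located center;
    # q: independent failure probability of each center.
    # The customer uses the closest non-failing center; the k-th closest is used
    # with probability q**(k-1) * (1 - q).  The residual case in which all
    # centers fail (probability q**p) is ignored in this sketch.
    total, p_all_closer_failed = 0.0, 1.0
    for d in sorted(distances):
        total += p_all_closer_failed * (1.0 - q) * d
        p_all_closer_failed *= q
    return total
```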

13.
We extend the well-known transport users' benefits measure (TUB) for the doubly-constrained spatial interaction model derived by Williams (1976). The original formula expresses the TUB as composed of two terms associated with the origin and destination zones. First, the TUB is here associated with trips instead of zones, providing a natural interpretation as a rule-of-a-half measure of benefit under inelastic demand (the short-run case). Second, a TUB formula is derived for the long-run case, that is, when the total number of trips, trip origins, and trip destinations change. We then propose updated measures of accessibility for location behavior.
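For reference, the rule-of-a-half measure in its usual transport-economics form (standard notation, not copied from the paper) is

```latex
\Delta B \;=\; \tfrac{1}{2} \sum_{i}\sum_{j} \bigl(T_{ij}^{0} + T_{ij}^{1}\bigr)\bigl(c_{ij}^{0} - c_{ij}^{1}\bigr),
```

where $T_{ij}$ are trips from origin $i$ to destination $j$, $c_{ij}$ the corresponding generalized costs, and superscripts $0$ and $1$ denote the before and after situations. Associating the benefit with trips $(i,j)$ rather than with zones is what yields this interpretation in the short-run (inelastic) case.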

14.
The p-regions problem is a mixed integer programming (MIP) model for the exhaustive clustering of a set of n geographic areas into p spatially contiguous regions while minimizing measures of intraregional heterogeneity. This is an NP-hard problem that calls for ongoing research into strategies for increasing the size of instances that can be solved using exact optimization techniques. In this article, we explore the benefits of an iterative process that begins by solving the relaxed version of the p-regions model, which removes the constraints that guarantee the spatial contiguity of the regions. Additional constraints are then incorporated iteratively to resolve spatial discontinuities in the regions (see the sketch after this paragraph). In particular, we explore the relationship between the level of spatial autocorrelation of the aggregation variable and the benefits obtained from this iterative process. The results show that high levels of spatial autocorrelation reduce computational times because the spatial patterns tend to create spatially contiguous regions. However, we found that the greatest benefits are obtained in two situations: (1) when ; and (2) when the parameter p is close to the number of clusters in the spatial pattern of the aggregation variable.
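A sketch of the iterative scheme in pseudocode-like Python (the solver interface, `solve_relaxed_mip`, `add_contiguity_cuts`, and the model methods are hypothetical placeholders, not any particular library's API):

```python
def solve_p_regions_iteratively(areas, adjacency, p, solve_relaxed_mip, add_contiguity_cuts):
    # adjacency: dict mapping each area to the set of its neighbouring areas.
    model = solve_relaxed_mip(areas, p)               # p-regions without contiguity constraints
    while True:
        regions = model.extract_regions()             # current partition into p regions
        broken = [r for r in regions if not is_contiguous(r, adjacency)]
        if not broken:
            return regions                            # every region is spatially contiguous
        add_contiguity_cuts(model, broken)            # add constraints only where discontinuities occur
        model = model.resolve()

def is_contiguous(region, adjacency):
    # Breadth-first search over the adjacency graph restricted to the region's areas.
    region = set(region)
    seen, frontier = set(), [next(iter(region))]
    while frontier:
        a = frontier.pop()
        if a in seen:
            continue
        seen.add(a)
        frontier.extend((adjacency[a] & region) - seen)
    return seen == region
```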

15.
The spatial interaction model (SIM) is an important tool for retail location analysis and store revenue estimation, particularly within the grocery sector. However, there are few examples of SIM development within the literature that capture the complexities of consumer behavior or discuss model developments and extensions necessary to produce models which can predict store revenues to a high degree of accuracy. This article reports a new disaggregated model with more sophisticated demand terms which reflect different types of retail consumer (by income or social class), with different shopping behaviors in terms of brand choice. We also incorporate seasonal fluctuations in demand driven by tourism, a major source of non-residential demand, allowing us to calibrate revenue predictions against seasonal sales fluctuations experienced at individual stores. We demonstrate that such disaggregated models need empirical data for calibration purposes, without which model extensions are likely to remain theoretical only. Using data provided by a major grocery retailer, we demonstrate that statistically, spatially, and in terms of revenue estimation, models can be shown to produce extremely good forecasts and predictions concerning store patronage and store revenues, including much more realistic behavior regarding store selection. We also show that it is possible to add a tourist demand layer, which can make considerable forecasting improvements relative to models built only with residential demand.

16.
ABSTRACT We develop an objective methodology for forming large regions (called macroregions) from small regions (microregions). We observe that the aggregation of microregions in an interregional input-output model causes aggregation error in the model. Our optimal regions are those that cause aggregation error to be minimized. We apply our methodology to Canada, and we compare our optimal regions for Canada to those of Statistics Canada and to those obtained using the well-known heuristic of Kossov. Statistics Canada's regions as well as those produced using Kossov's technique are characterized by substantially greater aggregation error than are those produced with our methodology.

17.
Renewed research interest in the origins of pottery has illuminated an array of possible precipitating causes and environmental contexts in which pottery began to be made and used. This article is an attempt at synthesizing some of these data in hopes of stimulating further research into this intriguing topic. Following a review of theories on the origins of pottery, discussion proceeds to a survey of geographic and cultural contexts of low-fired or unfired pottery, highlighting the role(s) of pottery among contemporary hunter-gatherers and summarizing data pertaining to varied uses of pottery containers. It is argued that objects of unfired and low-fired clay were created as part of early prestige technologies of material representations beginning in the Upper Paleolithic and are part of an early software horizon. Clay began to be more widely manipulated by nonsedentary, complex hunter-gatherers in the very Late Pleistocene and early Holocene in areas of resource abundance, especially in tropical/subtropical coastal/riverine zones, as part of more general processes of resource and social intensification (such as competitive feasting or communal ritual). Knowledge of making and using pottery containers spread widely as prestige technology and as practical technology, the kind and timing of its adoption or reinvention varying from location to location depending on specific needs and circumstances.

18.
The spatial analysis literature recognizes three sources of aggregation error, termed Source A, Source B, and Source C, which affect models relying on distance measurements between populations and facilities. We consider these effects, for aggregation from census enumeration areas to census tracts, on a popular location model. We identify a further source of aggregation error, which we dub Source D error, arising from the representation of facility sites by discrete points. Source D effects are of the same magnitude as Source A and B effects combined, and much greater than Source C effects. Source D error is particularly significant because, unlike Source A and B error, it can be eliminated only by disaggregating.

19.
COMPETITIVE LOCATION UNDER UNCERTAINTY OF COSTS
ABSTRACT. In this paper, we study the centroid problem from competitive location theory for a linear market with uniform demand, assuming that the leader has imperfect information about the follower's fixed and marginal costs. It is shown that the general version of this problem can be formulated as a nonlinear programming problem and the exact solution can be obtained analytically in a special case. A simple strategy is also given for the general problem, and it is proven that this strategy has a guaranteed error bound. It is demonstrated that uncertainty of costs might lead to market failure in the centroid problem, but this disappears if the game is repeated and the firms learn from observing each other's moves. It is also shown that it is possible for the leader to obtain optimal expected profit at a low perceived risk, with only sufficient, and not necessarily perfect, information. These two observations lead to our primary conclusion from the study that although cost uncertainty is a realistic feature of most competitive location models, there are very effective ways of dealing with it.

20.
On the Logit Approach to Competitive Facility Location
The random utility model in competitive facility location is one approach for estimating the market share captured by a retail facility in a competitive environment. However, it requires extensive computational effort for finding the optimal location for a new facility because its objective function is based on a k-dimensional integral. In this paper we show that the random utility model can be approximated by a logit model. The proportion of the buying power at a demand point that is attracted to the new facility can be approximated by a logit function of the distance to it. This approximation demonstrates that using the logit function of the distance for estimating the market share is theoretically founded in the random utility model. A simplified random utility model is defined and approximated by a logit function. An iterative Weiszfeld-type algorithm is designed to find the best location for a new facility using the logit model. Computational experiments show that the logit approximation yields a good location solution to the random utility model.
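A sketch of a Weiszfeld-type fixed-point step for a logit capture objective (the capture function, the competition terms `comp`, and the derived weights are illustrative assumptions, not the authors' algorithm):

```python
import numpy as np

def weiszfeld_logit(points, buying_power, comp, b, iters=200, eps=1e-9):
    # points: (n, 2) demand-point coordinates; buying_power: (n,);
    # comp: (n,) hypothetical terms summarizing existing competition at each demand point;
    # b: distance-decay parameter of the logit capture function.
    x = np.average(points, axis=0, weights=buying_power)        # start at the weighted centroid
    for _ in range(iters):
        d = np.linalg.norm(points - x, axis=1) + eps
        capture = 1.0 / (1.0 + comp * np.exp(b * d))            # logit share won by the new facility
        lam = buying_power * b * capture * (1.0 - capture) / d  # weights from the first-order condition
        x_new = (lam[:, None] * points).sum(axis=0) / lam.sum() # Weiszfeld-type fixed-point update
        if np.linalg.norm(x_new - x) < 1e-6:
            break
        x = x_new
    return x
```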
