Similar Documents (20 results)
1.
The p-regions problem involves the aggregation or clustering of n small areas into p spatially contiguous regions while optimizing some criteria. The main objective of this article is to explore possible avenues for formulating this problem as a mixed integer-programming (MIP) problem. The critical issue in formulating this problem is to ensure that each region is a spatially contiguous cluster of small areas. We introduce three MIP models for solving the p-regions problem. Each model minimizes the sum of dissimilarities between all pairs of areas within each region while guaranteeing contiguity. Three strategies designed to ensure contiguity are presented: (1) an adaptation of the Miller, Tucker, and Zemlin tour-breaking constraints developed for the traveling salesman problem; (2) the use of ordered-area assignment variables based upon an extension of an approach by Cova and Church for the geographical site design problem; and (3) the use of flow constraints based upon an extension of work by Shirabe. We test the efficacy of each formulation and specify a strategy to reduce overall problem size.
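A minimal sketch of the assignment core of such a model, assuming the PuLP package with its bundled CBC solver; the contiguity constraints (MTZ-style order, ordered assignment, or flow) that are the article's focus are deliberately omitted, and all names and the toy dissimilarity matrix are illustrative.

```python
# Sketch only: assignment core of a p-regions-style MIP (contiguity constraints omitted).
import itertools
import pulp

def p_regions_core(d, p):
    """d: symmetric dissimilarity matrix (list of lists); p: number of regions."""
    n = len(d)
    areas, regions = range(n), range(p)
    pairs = list(itertools.combinations(areas, 2))
    prob = pulp.LpProblem("p_regions_core", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (areas, regions), cat="Binary")        # x[i][k]=1: area i in region k
    y = pulp.LpVariable.dicts("y", (areas, areas), lowBound=0, upBound=1)  # y[i][j]=1: i and j share a region
    prob += pulp.lpSum(d[i][j] * y[i][j] for i, j in pairs)               # within-region dissimilarity
    for i in areas:
        prob += pulp.lpSum(x[i][k] for k in regions) == 1                 # each area in exactly one region
    for k in regions:
        prob += pulp.lpSum(x[i][k] for i in areas) >= 1                   # no empty regions
    for i, j in pairs:
        for k in regions:
            prob += y[i][j] >= x[i][k] + x[j][k] - 1                      # same region forces y[i][j] = 1
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [[i for i in areas if x[i][k].value() > 0.5] for k in regions]

# toy example: four areas, two regions
d = [[0, 1, 8, 9], [1, 0, 9, 8], [8, 9, 0, 1], [9, 8, 1, 0]]
print(p_regions_core(d, 2))
```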

2.
The aim of this study was to determine the process-structure-property relationships between the pre- and post-CO2-injection pore network geometry and the intrinsic permeability tensor for core samples of low-permeability Lower Triassic Sherwood Sandstone, UK. Samples were characterised using SEM-EDS, XRD, MIP, XRCT and a triaxial permeability cell both before and after a three-month continuous-flow experiment using acidic CO2-rich saline fluid. The change in flow properties was compared to that predicted by pore-scale numerical modelling using an implicit finite volume solution to the Navier-Stokes equations. Mass loss and increased secondary porosity appeared to occur primarily due to dissolution of intergranular cements and K-feldspar grains, with some associated loss of clay, carbonate and mudstone clasts. This resulted in a bulk porosity increase from 18 to 25% and caused a reduction in the mean diameter of mineral grains with an increase in apparent pore wall roughness, where the fractal dimension, Df, increased from 1.68 to 1.84. All significant dissolution mass loss occurred in pores above c. 100 μm mean diameter. Relative dilation of post-treatment pore area appeared to increase in relation to initial pore area, suggesting that the rate of dissolution mass loss had a positive relationship with fluid flow velocity; that is, critical flow pathways are preferentially widened. Variation in packing density within sedimentary planes (occurring at cm scale along the z-plane) caused the intrinsic permeability tensor to vary by more than a factor of ten. The bulk permeability tensor is anisotropic, having almost equal values in the z- and y-planes but a 68% higher value in the x-plane (parallel to sedimentary bedding planes) for the pretreated sample, reducing to only 30% higher for the post-treated sample. The intrinsic permeability of the post-treatment sample increased by one order of magnitude and showed very close agreement between the modelled and experimental results.

3.
Ripley's K-function is a test for detecting geographically distributed patterns across spatial scales. It was originally developed for an infinite, continuous planar space, but in reality any geographic distribution occurs in a bounded region; hence the edge problem must be addressed when applying Ripley's K-function. Traditionally, three basic edge correction methods were designed for regular study plots because they simplify the geometric computation: the Ripley circumference, buffer zone, and toroidal methods. For an irregularly shaped study region, a geographic information system (GIS) is needed to support the geometric calculation of complex shapes. The Ripley circumference method was originally implemented by Haase and has been modified here into a Python program in a GIS environment via Monte Carlo simulation (hereafter, the Ripley-Haase and Ripley-GIS methods). The results show that, in terms of the statistical power of clustering detection for irregular boundaries, the Ripley-GIS method is the most stable, followed by the buffer zone, toroidal, and Ripley-Haase methods. After the edge effects of irregular boundaries have been eliminated, Ripley's K-function is used to estimate the degree of spatial clustering of cities in a given territory, which we demonstrate with reference to the relationship between urban spatial structure and economic growth in China.
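A hedged sketch of the circumference-fraction edge weight behind a GIS-based correction, assuming the shapely and numpy packages; the function name, the buffered-circle approximation of the circumference, and the toy unit-square example are illustrative rather than the authors' implementation.

```python
# Sketch: K-hat(h) with a circumference-fraction (isotropic) edge correction on an
# arbitrary polygon. Each pair (i, j) is weighted by the inverse of the fraction of the
# circle centred on i with radius d_ij that falls inside the study region.
import numpy as np
from shapely.geometry import Point, Polygon

def ripley_k_gis(points, region: Polygon, h):
    """points: (n, 2) array; region: study-area polygon; h: distance threshold."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    lam = n / region.area                                  # intensity estimate
    total = 0.0
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            d = float(np.hypot(*(pts[i] - pts[j])))
            if d > h or d == 0:
                continue
            circle = Point(*pts[i]).buffer(d).exterior     # approximate circle boundary
            frac_inside = circle.intersection(region).length / circle.length
            total += 1.0 / max(frac_inside, 1e-9)          # edge-correction weight
    return total / (lam * n)                               # K-hat(h)

# toy example: 50 uniform points in a unit square; CSR expectation is pi * h^2
rng = np.random.default_rng(0)
square = Polygon([(0, 0), (1, 0), (1, 1), (0, 1)])
pts = rng.random((50, 2))
print(ripley_k_gis(pts, square, 0.1), "vs CSR expectation", np.pi * 0.1**2)
```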

4.
The p-regions problem is a mixed integer programming (MIP) model for the exhaustive clustering of a set of n geographic areas into p spatially contiguous regions while minimizing measures of intraregional heterogeneity. This is an NP-hard problem that requires continued research into strategies for increasing the size of instances that can be solved with exact optimization techniques. In this article, we explore the benefits of an iterative process that begins by solving a relaxed version of the p-regions model that omits the constraints guaranteeing the spatial contiguity of the regions. Additional constraints are then incorporated iteratively to resolve spatial discontinuities in the regions. In particular, we explore the relationship between the level of spatial autocorrelation of the aggregation variable and the benefits obtained from this iterative process. The results show that high levels of spatial autocorrelation reduce computational times because the spatial patterns tend to create spatially contiguous regions. However, we found that the greatest benefits are obtained in two situations: (1) when ; and (2) when the parameter p is close to the number of clusters in the spatial pattern of the aggregation variable.
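A small sketch of the contiguity check that drives such an iterative scheme, assuming networkx; the data structures and the toy strip of areas are illustrative, and the MIP re-solve step is not shown.

```python
# Sketch: after solving the relaxed model, test each region for spatial contiguity on the
# area-adjacency graph; disconnected regions indicate where extra constraints are needed.
import networkx as nx

def discontiguous_regions(assignment, adjacency):
    """assignment: {area: region}; adjacency: {area: set of neighbouring areas}.
    Returns {region: list of connected components} for every non-contiguous region."""
    broken = {}
    for r in set(assignment.values()):
        members = [a for a, reg in assignment.items() if reg == r]
        g = nx.Graph()
        g.add_nodes_from(members)
        g.add_edges_from((a, b) for a in members for b in adjacency[a] if b in members)
        comps = list(nx.connected_components(g))
        if len(comps) > 1:                       # region r is not spatially contiguous
            broken[r] = comps
    return broken

# toy example: a 1 x 4 strip of areas 0-1-2-3; region "A" = {0, 1, 3} is broken (3 is detached)
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(discontiguous_regions({0: "A", 1: "A", 2: "B", 3: "A"}, adj))
```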

5.
The p-compact-regions problem involves the search for an aggregation of n atomic spatial units into p compact, contiguous regions. This article reports our efforts in designing a heuristic framework, MERGE (memory-based randomized greedy and edge reassignment), to solve this problem through phases of dealing, randomized greedy growth, and edge reassignment. The MERGE heuristic is able to memorize (ME of MERGE) the potential best moves toward an optimal solution at each phase of the procedure so that search efficiency can be greatly improved. A dealing phase grows seeded regions to a viable size. A randomized greedy (RG of MERGE) approach completes the regions' growth and generates a feasible set of p regions. The edge-reassigning local search (E of MERGE) fine-tunes the results toward better objectives. In addition, a normalized moment of inertia (NMI) is introduced as the method of choice for computing the compactness of each region. We discuss in detail how MERGE works and how this new compactness measure can be seamlessly integrated into different phases of the proposed regionalization procedure. The performance of MERGE is evaluated on both a small and a large p-compact-regions problem motivated by modeling the regional economy of Southern California. We expect this work to contribute to the regionalization theory and practice literature. Theoretically, we formulate a new model for the family of p-compact-regions problems; the NMI introduced in the model provides an accurate, robust, and efficient measure of compactness, which is a key objective for p-compact-regions problems. Practically, we developed the MERGE heuristic, shown to be effective and efficient in solving this nonlinear optimization problem to near optimality.
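A hedged sketch of a normalized moment of inertia compactness score for a region made of unit grid cells; the normalization A^2 / (2*pi*I), which equals 1 for a circle, follows the general idea described in the abstract but is not necessarily the authors' exact definition.

```python
# Sketch: NMI-style compactness for a region of unit cells. I is the moment of inertia of
# the cells about the region centroid; compact shapes score near 1, elongated ones near 0.
import numpy as np

def nmi(cells):
    """cells: iterable of (x, y) centres of unit-area cells belonging to one region."""
    xy = np.asarray(cells, dtype=float)
    area = len(xy)                                   # each cell contributes area 1
    centroid = xy.mean(axis=0)
    inertia = np.sum((xy - centroid) ** 2)           # point-mass term about the centroid
    inertia += area / 6.0                            # parallel-axis term: unit square about its own centre
    return area**2 / (2.0 * np.pi * inertia)

# a 4 x 4 block is far more compact than a 1 x 16 strip of the same area
block = [(i, j) for i in range(4) for j in range(4)]
strip = [(i, 0) for i in range(16)]
print(round(nmi(block), 3), round(nmi(strip), 3))
```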

6.
Accurate estimates of heavy rainfall probabilities reduce the loss of life, property damage, and infrastructure failure resulting from flooding. NOAA's Atlas-14 provides point-based precipitation exceedance probability estimates for a range of durations and recurrence intervals. While it is widely used as an engineering reference, Atlas-14 does not provide direct estimates of areal rainfall totals, which are a better predictor of the flooding that leads to infrastructure failure and a more relevant input for storm water or hydrologic modeling. This study produces heavy precipitation exceedance probability estimates based on basin-level precipitation totals. We adapted a Generalized Extreme Value distribution to estimate Intensity-Duration-Frequency curves from annual maximum totals. The method exploits a high-resolution precipitation data set and uses a bootstrapping approach to borrow strength spatially across homogeneous regions, substituting space for long time series. We compared area-based estimates of 1-, 2-, and 4-day annual maximum total probabilities against point-based estimates at rain gauges within watersheds impacted by five recent extraordinary precipitation and flooding events. We found considerable differences between point-based and area-based estimates, suggesting that caveats are needed when using point-based estimates to represent areal estimates as model inputs for storm water management and flood risk assessment.
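A hedged sketch of GEV-based return-level estimation, assuming scipy and numpy; it bootstraps over years for simplicity, whereas the study borrows strength spatially across homogeneous regions, and the synthetic annual maxima are illustrative (this is not NOAA's Atlas-14 procedure).

```python
# Sketch: fit a GEV to annual maximum basin totals and read off return levels, with a
# simple bootstrap for uncertainty.
import numpy as np
from scipy.stats import genextreme

def return_levels(annual_maxima, periods=(2, 10, 25, 100), n_boot=500, seed=0):
    x = np.asarray(annual_maxima, dtype=float)
    c, loc, scale = genextreme.fit(x)
    point = {T: genextreme.isf(1.0 / T, c, loc, scale) for T in periods}
    rng = np.random.default_rng(seed)
    boot = {T: [] for T in periods}
    for _ in range(n_boot):                              # resample years with replacement
        xb = rng.choice(x, size=len(x), replace=True)
        cb, lb, sb = genextreme.fit(xb)
        for T in periods:
            boot[T].append(genextreme.isf(1.0 / T, cb, lb, sb))
    ci = {T: np.percentile(boot[T], [5, 95]) for T in periods}
    return point, ci

# toy example: 40 synthetic "annual maximum 1-day basin totals" (mm)
maxima = genextreme.rvs(-0.1, loc=60, scale=15, size=40, random_state=1)
levels, ci = return_levels(maxima)
print({T: round(v, 1) for T, v in levels.items()})
```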

7.
Standard spatial autoregressive models rely on spatial weight structures constructed to model dependence among n regions. Ways of parsimoniously modeling the connectivity among the sample of N = n^2 origin-destination (OD) pairs that arise in a closed system of interregional flows have remained a stumbling block. We overcome this problem by proposing spatial weight structures that model dependence among the N OD pairs in a fashion consistent with standard spatial autoregressive models. This results in a family of spatial OD models, introduced here, that represent an extension of the spatial regression models described in Anselin (1988).
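One common way to build OD-level weights from a regional weight matrix is via Kronecker products, sketched below with scipy.sparse; this follows the general spatial-OD literature and is not necessarily the exact construction used in the article, and the interpretation of each product depends on how the N flows are stacked.

```python
# Sketch: origin-, destination-, and origin-destination-based N x N weights (N = n^2)
# built from an n x n regional weight matrix W via Kronecker products.
import numpy as np
from scipy.sparse import csr_matrix, identity, kron

def od_weights(W):
    """W: row-standardized n x n spatial weight matrix (sparse or dense)."""
    n = W.shape[0]
    I = identity(n, format="csr")
    W_o = kron(W, I, format="csr")   # flows sharing a neighbouring origin (given one stacking order)
    W_d = kron(I, W, format="csr")   # flows sharing a neighbouring destination
    W_w = kron(W, W, format="csr")   # joint origin-destination dependence
    return W_o, W_d, W_w

# toy example: 3 regions on a line, rook contiguity, row-standardized
W = csr_matrix(np.array([[0, 1, 0], [0.5, 0, 0.5], [0, 1, 0]]))
W_o, W_d, W_w = od_weights(W)
print(W_o.shape, W_d.shape, W_w.shape)   # each is (9, 9) for N = 3^2 OD pairs
```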

8.
This paper explores various edge correction methods for K-function analysis via Monte Carlo simulation. The correction methods discussed here are Ripley's circumference correction, a toroidal correction, and a guard area correction. First, simulation envelopes for a random point pattern are constructed for each edge correction method. The statistical power of these envelopes is then analyzed in terms of the probability of detecting clustering and regularity in simulated clustering/regularity patterns. In addition to the K-function, K(h), evaluated at individual distances h, an overall statistic k is also examined. A major finding of this paper is that the K-function adjusted by either the Ripley or the toroidal edge correction method is more powerful than the unadjusted K-function or one adjusted by the guard area method. Another is that the overall statistic k outperforms the individual K(h) across almost the entire range of potential distances h.
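A hedged sketch of a Monte Carlo simulation envelope for K(h) under a toroidal edge correction on the unit square, using numpy only; the parameter values and the clustered test pattern are illustrative.

```python
# Sketch: toroidal ("wrap-around") K-hat(h) on the unit square and a CSR simulation envelope.
import numpy as np

def k_toroidal(pts, h):
    n = len(pts)
    diff = np.abs(pts[:, None, :] - pts[None, :, :])
    diff = np.minimum(diff, 1.0 - diff)                 # wrap distances around the edges
    d = np.hypot(diff[..., 0], diff[..., 1])
    np.fill_diagonal(d, np.inf)
    return (d <= h).sum() / (n * n)                     # unit area, so lambda = n

def csr_envelope(n, h, nsim=99, seed=0):
    rng = np.random.default_rng(seed)
    sims = [k_toroidal(rng.random((n, 2)), h) for _ in range(nsim)]
    return min(sims), max(sims)

# compare a clustered pattern of 100 points against the CSR envelope at h = 0.1
rng = np.random.default_rng(42)
centres = rng.random((5, 2))
observed = (centres[rng.integers(0, 5, 100)] + 0.02 * rng.standard_normal((100, 2))) % 1.0
lo, hi = csr_envelope(100, 0.1)
print("K_obs =", round(k_toroidal(observed, 0.1), 4), "envelope:", (round(lo, 4), round(hi, 4)))
```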

9.
10.
11.
Snow cover is often measured as snow-water equivalent (SWE), the amount of water stored in a snow pack that would be available upon melting. Snow cover and SWE represent a source of local snow-melt release and are sensitive to regional and global atmospheric circulation and to changes in climate. Monitoring SWE using satellite-based passive microwave radiometry has provided nearly three decades of continuous data for North America. The availability of spatially and temporally extensive SWE data enables a better understanding of the nature of space-time trends in snow cover, of changes in these trends, and of how these trends link to underlying landscape and terrain characteristics. To address these interests, we quantify the spatial pattern of SWE by applying a local measure of spatial autocorrelation to 25 years of mean February SWE derived from passive microwave retrievals. Using a method for characterizing the temporal trends in the spatial pattern of SWE, temporal trends and variability in spatial autocorrelation are quantified. Results indicate that within the Canadian Prairies, extreme values of SWE are becoming more spatially coherent, with potential impacts on water availability and on hazards such as flooding. These results also highlight the need for Canadian ecological management units that consider winter conditions.
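A hedged sketch of a local-Moran analysis of a gridded SWE field, assuming the libpysal and esda packages; the synthetic grid stands in for the passive-microwave SWE retrievals, and the variable names are illustrative.

```python
# Sketch: local Moran's I on one gridded February SWE field; significant high-high and
# low-low cells flag spatially coherent SWE extremes.
import numpy as np
from libpysal.weights import lat2W
from esda.moran import Moran_Local

rows, cols = 20, 20
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.linspace(0, 1, cols), np.linspace(0, 1, rows))
swe = 80 * y + 10 * rng.standard_normal((rows, cols))   # synthetic SWE surface (mm)

w = lat2W(rows, cols)                 # rook contiguity on the grid (row-major ids match ravel())
w.transform = "r"                     # row-standardize
lisa = Moran_Local(swe.ravel(), w, permutations=999)

significant = lisa.p_sim < 0.05
coherent_high = significant & (lisa.q == 1)   # quadrant 1: high values surrounded by high values
coherent_low = significant & (lisa.q == 3)    # quadrant 3: low values surrounded by low values
print(f"{coherent_high.sum()} coherent-high cells, {coherent_low.sum()} coherent-low cells")
```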

12.
Linear enamel hypoplasia (LEH) is a macroscopically detectable band-like dental defect representing a localized decrease in enamel thickness caused by some form of disruption to a child's health. Such dental deformations are used in osteoarchaeological research as permanent markers of childhood physiological stress and have been extensively studied in numerous ancient human populations. However, no such data currently exist for medieval populations from Canterbury, UK. Here, LEH is examined in the context of age-at-death in human burials from the medieval St. Gregory's Priory and adjacent cemetery (11th-16th centuries), Canterbury, UK. The cemetery and Priory burials represented lower (n = 30) and higher status (n = 19) social groups, respectively. Linear enamel hypoplastic defects were counted on mandibular and maxillary anterior permanent teeth (n = 374). The age and sex of each skeleton were estimated using standard methods. Differences in LEH counts, age-at-death, and LEH formation ages were sought between the two social groups. Results indicate significantly greater frequencies of LEH in the Cemetery (mean = 17.6) compared to the Priory (mean = 7.9; t = -3.03, df = 46, p = 0.002). Adult age-at-death was also significantly lower in the Cemetery (mean = 39.8 years) compared to the Priory burials (mean = 44.1 years; t = 2.275, df = 47, p = 0.013). Hypoplasia formation ages differed significantly between the Priory (mean = 2.49 years) and Cemetery (mean = 3.22 years; t = 2.076; df = 47; p = 0.034) individuals. Results indicate that childhood stress may be related to adult mortality in this sample, and that the wellbeing of individuals from diverse social backgrounds can be successfully assessed using LEH analyses. Results are discussed in terms of the multifactorial etiology of LEH, as well as weaning-related LEH formation.

13.
R. M. Visser, Archaeometry, 2021, 63(1): 204-215
The Gleichläufigkeitskoeffizient (GLK), or percentage of parallel variation (%PV), is an often-used non-parametric similarity measure in dendrochronological research. However, when analysing big data sets, the GLK has some issues. The main problem is that it includes not only synchronous but also semi-synchronous growth changes; these are intervals in which the growth in one of the compared series does not change between two subsequent years. This influences the GLK, often only slightly, but the larger the data set, the stronger the effect. The similarity between tree-ring series can be expressed more objectively by replacing the GLK with the synchronous (SGC) and semi-synchronous growth changes (SSGC). The calculation is similar, since GLK = SGC + SSGC/2. Large values of the SSGC are indicative of possible anomalies or even errors. The SGC is much better suited than the GLK to describe similarity and should therefore be used to analyse big data sets, for clustering and/or dendroprovenance studies. It is recommended to combine the SGC with parametric measures.
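A hedged implementation of the three measures for two equally long, overlapping series, using the relationship GLK = SGC + SSGC/2 given above; the ring-width values are made up and this is not the author's code.

```python
# Sketch: growth changes are the signs of year-to-year differences; SGC counts synchronous
# (same, non-zero sign) changes, SSGC counts semi-synchronous ones (exactly one series flat).
import numpy as np

def growth_change_similarity(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    da, db = np.sign(np.diff(a)), np.sign(np.diff(b))    # -1, 0, +1 per year-to-year interval
    m = len(da)
    sgc = np.sum((da == db) & (da != 0)) / m             # synchronous growth changes
    ssgc = np.sum((da == 0) ^ (db == 0)) / m             # semi-synchronous (one series flat)
    glk = sgc + ssgc / 2.0
    return glk, sgc, ssgc

ring_a = [120, 131, 128, 128, 140, 133, 129, 135]
ring_b = [98, 104, 101, 103, 112, 112, 103, 110]
print(growth_change_similarity(ring_a, ring_b))
```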

14.
The hub location problem has been widely used in analyzing hub-and-spoke systems. The basic assumption is that a large number of demands exist to travel from origins to destinations via a set of intermediate transshipment nodes. These intermediate nodes can be lost for reasons such as natural disasters, outbreaks of disease, labor strikes, and intentional attacks. This article presents a hub interdiction median (HIM) problem, which can be used to identify the set of critical facilities in a hub-and-spoke system whose loss leads to the maximal disruption of the system's service. The new model is formulated using integer linear programming. Special constraints are constructed to account for origin-to-destination demand following the least-cost route via the remaining hubs. Based on the HIM problem, two hub protection problems are defined that aim to minimize the system cost associated with the worst-case facility loss. Computational experiment results are presented along with a discussion of possible future work.
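A hedged brute-force illustration of the interdiction idea rather than the article's ILP: enumerate which r hubs, if lost, most increase total least-cost routing via the remaining hubs; all data, the inter-hub discount factor alpha, and the function names are toy assumptions.

```python
# Sketch: worst-case loss of r hubs by enumeration; each OD pair reroutes via the cheapest
# remaining hub pair.
import itertools
import numpy as np

def routing_cost(c, demand, hubs, alpha=0.75):
    """Cheapest i -> hub k -> hub m -> j routing; alpha discounts the inter-hub leg."""
    total = 0.0
    for i, j in np.argwhere(demand > 0):
        best = min(c[i, k] + alpha * c[k, m] + c[m, j] for k in hubs for m in hubs)
        total += demand[i, j] * best
    return total

def worst_case_interdiction(c, demand, hubs, r):
    """The set of r hubs whose loss maximizes the post-loss routing cost (r < len(hubs))."""
    return max(itertools.combinations(hubs, r),
               key=lambda lost: routing_cost(c, demand, [h for h in hubs if h not in lost]))

rng = np.random.default_rng(0)
xy = rng.random((8, 2))
c = np.hypot(*(xy[:, None, :] - xy[None, :, :]).transpose(2, 0, 1))  # node-to-node distances
demand = rng.integers(0, 5, (8, 8))
np.fill_diagonal(demand, 0)
print("most critical hub(s):", worst_case_interdiction(c, demand, hubs=[1, 3, 5, 6], r=1))
```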

15.
The rank-size rule and Zipf's law for city sizes have traditionally been examined by means of OLS estimation and the t test. This paper studies the exact and approximate properties of the OLS estimator and obtains the distribution of the t statistic under the assumption of Zipf's law (i.e., a Pareto distribution). Indeed, we show that the t statistic explodes asymptotically even under the null, indicating that a mechanical application of the t test yields a serious type I error. To overcome this problem, critical regions of the t test are constructed to test Zipf's law. Using these corrected critical regions, we conclude that our results favor Zipf's law for many more countries than previous studies such as Rosen and Resnick (1980) or Soo (2005). Using the same database as Soo (2005), we demonstrate that Zipf's law is rejected for only one of 24 countries under our test, whereas it is rejected for 23 of 24 countries under the usual t test. We also propose a more efficient estimation procedure and provide empirical applications of the theory for some countries.
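A hedged simulation of the over-rejection problem, assuming numpy and scipy: city sizes are drawn from an exact Pareto (Zipf) law, log(rank) is regressed on log(size), and the conventional t statistic for H0: slope = -1 is computed; sample sizes and seeds are illustrative.

```python
# Sketch: even though Zipf's law holds by construction, the naive t test for slope = -1
# rejects far more often than its nominal 5% level.
import numpy as np
from scipy.stats import linregress

def naive_zipf_t(n, seed):
    rng = np.random.default_rng(seed)
    sizes = 1.0 / rng.random(n)                     # Pareto with tail exponent 1 (exact Zipf)
    sizes = np.sort(sizes)[::-1]
    ranks = np.arange(1, n + 1)
    fit = linregress(np.log(sizes), np.log(ranks))  # rank-size regression
    return (fit.slope + 1.0) / fit.stderr           # t statistic for H0: slope = -1

tvals = [naive_zipf_t(500, s) for s in range(200)]
reject_rate = np.mean(np.abs(tvals) > 1.96)
print(f"nominal 5% test rejects in {reject_rate:.0%} of simulations")
```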

16.
This paper introduces improved methods for statistically assessing birth seasonality and intra-annual variation in δ18O from faunal tooth enamel. The first method estimates input parameters for use with a previously developed parametric approach by C. Tornero et al. The second method uses a non-parametric clustering procedure to group individuals with similar time-series data and estimate birth seasonality. This method was successful in analysing data from a modern sample with known season of birth, as well as two heterogeneous archaeological data sets. Modelling indicates that the non-parametric approach estimates birth seasonality more successfully than the parametric method when less of the tooth row is preserved. The new approach offers a high level of statistical rigour and flexibility in dealing with the time-series data produced through intra-individual sampling in isotopic analysis.

17.
Recent studies on urban poverty in Canadian cities suggest a growing spatial concentration of poor populations within metropolitan regions. This article assesses trends in the intra-urban distribution of the poor population from 1986 to 2006 in eight of Canada's largest cities. We consider five well-known dimensions of segregation, as identified by Massey and Denton (1988), in order to examine changes in the spatial distribution of poor populations within metropolitan areas: evenness, exposure, concentration, clustering, and centralization. These indices were calculated for low-income populations at the census tract level using data from five Canadian censuses. Although each metropolitan area has distinctive characteristics, we were able to identify some general trends. The results suggest that, in 2006 compared to 1986, low-income populations lived in more spatially concentrated areas, which were at the same time socioeconomically more homogeneous and more dispersed throughout the metropolitan area. In addition, we observed that over the last twenty years areas of poverty have been located, for the most part, in neighbourhoods adjacent to downtown cores. Nevertheless, we found that poverty has mostly increased in suburban areas located outside inner-city neighbourhoods. The growing socioeconomic homogeneity and dispersion of low-income areas in metropolitan areas reveal new spatial patterns of urban poverty distribution. These findings should be cause for concern, as social isolation in the most disadvantaged neighbourhoods could affect the life chances and opportunities of the residents of those areas.
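A hedged sketch of two of the five dimensions (evenness via the dissimilarity index and exposure via the isolation index) computed from tract-level counts; the tract data are made up, and the other three indices used in the article are not shown.

```python
# Sketch: standard segregation indices for a low-income group across census tracts.
import numpy as np

def dissimilarity(poor, nonpoor):
    """D = 0.5 * sum_i | p_i/P - q_i/Q | over tracts i (evenness dimension)."""
    poor, nonpoor = np.asarray(poor, float), np.asarray(nonpoor, float)
    return 0.5 * np.abs(poor / poor.sum() - nonpoor / nonpoor.sum()).sum()

def isolation(poor, total):
    """xPx = sum_i (p_i/P) * (p_i/t_i): average tract poverty share faced by a poor person."""
    poor, total = np.asarray(poor, float), np.asarray(total, float)
    return np.sum((poor / poor.sum()) * (poor / total))

poor = [120, 300, 80, 45, 500]           # low-income persons per tract (toy values)
total = [1000, 900, 1200, 800, 1100]     # total persons per tract
nonpoor = np.array(total) - np.array(poor)
print(round(dissimilarity(poor, nonpoor), 3), round(isolation(poor, total), 3))
```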

18.
This paper studies the applicability of the Mean Shift algorithm as an aid to interpreting geophysical images produced, in this case, from magnetic prospection data. The data obtained from a magnetic survey carried out in Gilena (Seville province, Spain) by the La Rábida Archaeophysics Group are used for the research. The method's applicability is illustrated by comparison with, on the one hand, some reduction-to-pole algorithms and, on the other, the well-known k-means algorithm. Finally, the paper shows the results obtained by applying the Mean Shift algorithm as an alternative method for the unsupervised clustering of anomalies that appear in images obtained from geophysical data, in cases where a priori knowledge of the number of classes is difficult or impossible to obtain.
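A hedged sketch of the contrast the paper draws, assuming scikit-learn: Mean Shift clusters (x, y, value) features from a synthetic magnetic grid without a preset number of classes, while k-means must be given k up front; all parameters and the synthetic anomalies are illustrative.

```python
# Sketch: Mean Shift vs. k-means on a synthetic magnetometry-like grid.
import numpy as np
from sklearn.cluster import KMeans, MeanShift, estimate_bandwidth
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.arange(40), np.arange(40))
field = rng.normal(0, 1, (40, 40))                        # background noise (nT)
field[10:14, 10:14] += 8.0                                # synthetic buried-feature anomalies
field[28:33, 20:24] -= 6.0
X = StandardScaler().fit_transform(np.column_stack([xx.ravel(), yy.ravel(), field.ravel()]))

bw = estimate_bandwidth(X, quantile=0.1, random_state=0)  # data-driven bandwidth
ms = MeanShift(bandwidth=bw, bin_seeding=True).fit(X)
print("Mean Shift found", len(np.unique(ms.labels_)), "classes (no k supplied)")

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)   # k must be chosen a priori
print("k-means forced into", len(np.unique(km.labels_)), "classes")
```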

19.
20.
The p-dispersion problem is to locate p facilities on a network so that the minimum separation distance between any pair of open facilities is maximized. This problem is applicable to facilities that pose a threat to each other and to systems of retail or service franchises; in both of these applications, facilities should be as far from the closest other facility as possible. A mixed-integer program is formulated that relies on reversing the value of the 0-1 location variables in the distance constraints so that only the distances between pairs of open facilities constrain the maximization. A related problem, the maxisum dispersion problem, which aims to maximize the average separation distance between open facilities, is also formulated and solved. Computational results for both models for locating 5 and 10 facilities on a network of 25 nodes are presented, along with a multicriteria approach combining the dispersion and maxisum problems. The p-dispersion problem has a weak duality relationship with the (p-1)-center problem in that one-half the maximin distance in the p-dispersion problem is a lower bound for the minimax distance in the center problem for (p-1) facilities. Since the p-center problem is often solved via a series of set-covering problems, the p-dispersion problem may prove useful for finding a starting distance for the series of covering problems.
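A hedged sketch of the maximin formulation described above, assuming PuLP with the bundled CBC solver: the complemented location variables (1 - x_i) relax each pairwise distance constraint unless both facilities are open; the toy coordinates are illustrative.

```python
# Sketch: maximin p-dispersion MIP with big-M relaxation via complemented location variables.
import itertools
import numpy as np
import pulp

def p_dispersion(d, p):
    n = len(d)
    big_m = float(np.max(d))
    prob = pulp.LpProblem("p_dispersion", pulp.LpMaximize)
    x = pulp.LpVariable.dicts("x", range(n), cat="Binary")   # 1 if site i is opened
    D = pulp.LpVariable("D", lowBound=0)                     # minimum separation to maximize
    prob += D
    prob += pulp.lpSum(x[i] for i in range(n)) == p
    for i, j in itertools.combinations(range(n), 2):
        # binding only when x_i = x_j = 1; otherwise relaxed by the big-M terms
        prob += D <= d[i][j] + big_m * (1 - x[i]) + big_m * (1 - x[j])
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return D.value(), [i for i in range(n) if x[i].value() > 0.5]

rng = np.random.default_rng(3)
pts = rng.random((12, 2))
d = np.hypot(*(pts[:, None, :] - pts[None, :, :]).transpose(2, 0, 1))
print(p_dispersion(d, 5))
```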
