Similar Articles
20 similar articles found.
1.
The p-dispersion problem is to locate p facilities on a network so that the minimum separation distance between any pair of open facilities is maximized. This problem is applicable to facilities that pose a threat to each other and to systems of retail or service franchises. In both of these applications, facilities should be as far away from the closest other facility as possible. A mixed-integer program is formulated that relies on reversing the value of the 0–1 location variables in the distance constraints so that only the distances between pairs of open facilities constrain the maximization. A related problem, the maxisum dispersion problem, which aims to maximize the average separation distance between open facilities, is also formulated and solved. Computational results for both models for locating 5 and 10 facilities on a network of 25 nodes are presented, along with a multicriteria approach combining the dispersion and maxisum problems. The p-dispersion problem has a weak duality relationship with the (p-1)-center problem in that one-half the maximin distance in the p-dispersion problem is a lower bound for the minimax distance in the center problem for (p-1) facilities. Since the p-center problem is often solved via a series of set-covering problems, the p-dispersion problem may prove useful for finding a starting distance for the series of covering problems.
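The maximin objective described in this abstract can be checked by brute force on small instances — a hedged sketch, not the article's MIP (in the MIP, a constraint like D ≤ d_ij + M(2 − x_i − x_j) binds only when both sites i and j are open, which is the "reversed variable" device mentioned above):

```python
from itertools import combinations

def p_dispersion(dist, p):
    """Brute-force p-dispersion: choose the p sites whose minimum
    pairwise separation distance is largest."""
    n = len(dist)
    best_sites, best_val = None, -1.0
    for sites in combinations(range(n), p):
        val = min(dist[i][j] for i, j in combinations(sites, 2))
        if val > best_val:
            best_val, best_sites = val, sites
    return best_sites, best_val

# Five candidate sites on a line at coordinates 0, 1, 2, 4, 7.
coords = [0, 1, 2, 4, 7]
dist = [[abs(a - b) for b in coords] for a in coords]
sites, d = p_dispersion(dist, 3)
print(sites, d)  # → (0, 3, 4) 3  i.e. sites at coordinates 0, 4, 7
```

Enumeration is exponential in p, which is why the article resorts to a mixed-integer formulation for realistic networks.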

2.
Multiple Facilities Location in the Plane Using the Gravity Model   (Times cited: 3; self-citations: 0; citations by others: 3)
Two problems are considered in this article. Both problems seek the location of p facilities. The first problem is the p median where the total distance traveled by customers is minimized. The second problem focuses on equalizing demand across facilities by minimizing the variance of total demand attracted to each facility. These models are unique in that the gravity rule is used for the allocation of demand among facilities rather than assuming that each customer selects the closest facility. In addition, we also consider a multiobjective approach, which combines the two objectives. We propose heuristic solution procedures for the problem in the plane. Extensive computational results are presented.
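A minimal sketch of the gravity allocation rule these models rely on: each customer's demand is split across all facilities in proportion to an attraction term. The inverse-power decay with exponent `lam` is an illustrative assumption, not necessarily the article's calibration:

```python
def gravity_allocation(demand, dist, lam=2.0):
    """Allocate each customer's demand across all facilities in
    proportion to a gravity attraction 1/d**lam, instead of sending
    everything to the closest facility."""
    totals = [0.0] * len(dist[0])
    for i, w in enumerate(demand):
        attraction = [1.0 / d ** lam for d in dist[i]]
        s = sum(attraction)
        for j, a in enumerate(attraction):
            totals[j] += w * a / s
    return totals

# two customers (demands 10 and 20), two facilities
totals = gravity_allocation([10, 20], [[1, 2], [2, 1]])
print(totals)  # → [12.0, 18.0]; all 30 units of demand are accounted for
```

The second objective in the abstract (equalizing demand) would then minimize the variance of these `totals` across facilities.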

3.
When solving a location problem using aggregated units to represent demand, it is well known that the process of aggregation introduces error. Research has focussed on individual components of error, with little work on identifying and controlling total error. We provide a focussed review of some of this literature and suggest a strategy for controlling total error. Consideration of alternative criteria for evaluating aggregation schemes shows that the method selected should be compatible with the objectives of the analyses in which it is used. Experiments are described that show that two different measures of error are related in a nonlinear way to the number of aggregate demand points (q), for any value of the number of facilities (p). We focus on the parameter q/p and show that it is critical for determining the expected severity of the error. Many practical implementations of location algorithms operate within the range of q/p where the rate of change of error with respect to q/p is highest.

4.
In many applications involving the location of public facilities, the activity that is being located is added to the landscape of existing public facilities and services. Further, there also exists a political and behavioral landscape of differing jurisdictions, representations, and perceptions. This paper presents a new form of the p-median problem which addresses two types of “regional” constraints that can arise in public facilities planning. A formulation is given for this specially constrained median problem along with a solution approach. Several examples are presented with computational results which indicate that this type of constrained problem could lead to better facilities planning.

5.
Dispersion of Nodes Added to a Network   (Times cited: 2; self-citations: 2; citations by others: 0)
For location problems in which optimal locations can be at nodes or along arcs but no finite dominating set has been identified, researchers may desire a method for dispersing p additional discrete candidate sites along the m arcs of a network. This article develops and tests minimax and maximin models for solving this continuous network location problem, which we call the added-node dispersion problem (ANDP). Adding nodes to an arc subdivides it into subarcs. The minimax model minimizes the maximum subarc length, while the maximin model maximizes the minimum subarc length. Like most worst-case objectives, the minimax and maximin objectives are plagued by poorly behaved alternate optima. Therefore, a secondary MinSumMax objective is used to select the best-dispersed alternate optimum. We prove that equal spacing of added nodes along arcs is optimal for the MinSumMax objective. Using this fact we develop greedy heuristic algorithms that are simple, optimal, and efficient (O(mp)). Empirical results show how the maximum subarc, minimum subarc, and sum of longest subarcs change as the number of added nodes increases. Further empirical results show how using the ANDP to locate additional nodes can improve the solutions of another location problem. Using the p-dispersion problem as a case study, we show how much adding ANDP sites to the network vertices improves the p-dispersion objective function compared with (a) network vertices only and (b) vertices plus randomly added nodes. The ANDP can also be used by itself to disperse facilities such as stores, refueling stations, cell phone towers, or relay facilities along the arcs of a network, assuming that such facilities already exist at all nodes of the network.
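The greedy minimax construction can be sketched from the abstract's description — equal spacing within each arc, next node to the arc with the longest current subarc. This is a hedged reimplementation, not the authors' published code:

```python
import heapq

def andp_minimax(arc_lengths, p):
    """Greedy minimax ANDP: nodes added to an arc are equally spaced,
    so an arc of length L holding k nodes has max subarc L/(k+1); each
    step gives the next node to the arc with the longest subarc."""
    # max-heap of (-current longest subarc, arc index, nodes on arc)
    heap = [(-L, i, 0) for i, L in enumerate(arc_lengths)]
    heapq.heapify(heap)
    for _ in range(p):
        _, i, k = heapq.heappop(heap)
        heapq.heappush(heap, (-arc_lengths[i] / (k + 2), i, k + 1))
    counts = [0] * len(arc_lengths)
    worst = 0.0
    for neg, i, k in heap:
        counts[i] = k
        worst = max(worst, -neg)
    return counts, worst

counts, worst = andp_minimax([9.0, 3.0], 2)
print(counts, worst)  # → [2, 0] 3.0 (both added nodes go on the long arc)
```

Each of the p iterations costs O(log m) with the heap, consistent with the near-linear O(mp) bound claimed for the simple greedy.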

6.
The vector assignment p‐median problem (VAPMP) is one of the first discrete location problems to account for the service of a demand by multiple facilities, and has been used to model a variety of location problems in addressing issues such as system vulnerability and reliability. Specifically, it involves the location of a fixed number of facilities when the assumption is that each demand point is served a certain fraction of the time by its closest facility, a certain fraction of the time by its second closest facility, and so on. The assignment vector represents the fraction of the time a facility of a given closeness order serves a specific demand point. Weaver and Church showed that when the fractions of assignment to closer facilities are greater than those to more distant facilities, an optimal all‐node solution always exists. However, the general form of the VAPMP does not have this property. Hooker and Garfinkel provided a counterexample to this property for the nonmonotonic VAPMP, but did not conjecture what such a finite set might be in general. The question of whether there exists a finite set of locations that contains an optimal solution has remained open. In this article, we prove that a finite optimality set for the VAPMP consisting of “equidistant points” does exist. We also show a stronger result when the underlying network is a tree graph.

7.
In this article, we address the problem of allocating an additional cell tower (or a set of towers) to an existing cellular network, maximizing the call completion probability. Our approach is derived from the adaptive spatial sampling problem using kriging, capitalizing on spatial correlation between cell phone signal strength data points and accounting for terrain morphology. Cell phone demand is reflected by population counts in the form of weights. The objective function, which is the weighted call completion probability, is highly nonlinear and complex (nondifferentiable and discontinuous). Sequential and simultaneous discrete optimization techniques are presented, and heuristics such as simulated annealing and Nelder–Mead are suggested to solve our problem. The adaptive spatial sampling problem is defined and related to the additional facility location problem. The approach is illustrated using data on cell phone call completion probability in a rural region of Erie County in western New York, and accounts for terrain variation using a line‐of‐sight approach. Finally, the computational results of sequential and simultaneous approaches are compared. Our model is also applicable to other facility location problems that aim to minimize the uncertainty associated with a customer visiting a new facility that has been added to an existing set of facilities.

8.
A number of variations of facilities location problems have appeared in the research literature in the past decade. Among these are problems involving the location of multiple new facilities in a discrete solution space, with the new facilities located relative to a set of existing facilities having known locations. In this paper a number of discrete solution space location problems are treated. Specifically, the covering problem and the central facilities location problem are shown to be related. The covering problem involves the location of the minimum number of new facilities among a finite number of sites such that all existing facilities (customers) are covered by at least one new facility. The central facilities location problem consists of the location of a given number of new facilities among a finite number of sites such that the sum of the weighted distances between existing facilities and new facilities is minimized. Computational experience in using the same heuristic solution procedure to solve both problems is provided and compared with other existing solution procedures.
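The paper's shared heuristic is not specified in this abstract; as an illustrative stand-in for the covering side, the classical greedy rule can be sketched (the site names and `cover_sets` input are hypothetical):

```python
def greedy_cover(customers, cover_sets):
    """Greedy covering heuristic: repeatedly open the candidate site
    that covers the most still-uncovered customers."""
    uncovered = set(customers)
    opened = []
    while uncovered:
        best = max(cover_sets, key=lambda s: len(cover_sets[s] & uncovered))
        if not cover_sets[best] & uncovered:
            break  # remaining customers are covered by no candidate site
        opened.append(best)
        uncovered -= cover_sets[best]
    return opened

# candidate sites A, B, C and the customers each would cover
opened = greedy_cover({1, 2, 3, 4, 5},
                      {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}})
print(opened)  # → ['A', 'C']
```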

9.
The p‐compact‐regions problem involves the search for an aggregation of n atomic spatial units into p compact, contiguous regions. This article reports our efforts in designing a heuristic framework—MERGE (memory‐based randomized greedy and edge reassignment)—to solve this problem through phases of dealing, randomized greedy, and edge reassignment. This MERGE heuristic is able to memorize (ME of MERGE) the potential best moves toward an optimal solution at each phase of the procedure so that search efficiency can be greatly improved. A dealing phase grows seeded regions to a viable size. A randomized greedy (RG of MERGE) approach completes the regions' growth and generates a feasible set of p regions. The edge‐reassigning local search (E of MERGE) fine‐tunes the results toward better objectives. In addition, a normalized moment of inertia (NMI) is introduced as the method of choice in computing the compactness of each region. We discuss in detail how MERGE works and how this new compactness measure can be seamlessly integrated into different phases of the proposed regionalization procedure. The performance of MERGE is evaluated through the use of both a small and a large p‐compact‐regions problem motivated by modeling the regional economy of Southern California. We expect this work to contribute to the regionalization theory and practice literature. Theoretically, we formulate a new model for the family of p‐compact‐regions problems. The novel NMI introduced in the model provides an accurate, robust, and efficient measure of compactness, which is a key objective for p‐compact‐regions problems. Practically, we developed the MERGE heuristic, proven to be effective and efficient in solving this nonlinear optimization problem to near optimality.
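The NMI can be sketched for a raster region. The normalization below — area squared over 2π times the second moment about the centroid, which equals 1 for a circle — follows the standard definition of the measure and is assumed rather than quoted from the article:

```python
import math

def nmi(cells):
    """Normalized moment of inertia of a region given as the integer
    (x, y) centroids of its unit raster cells: NMI = A**2 / (2*pi*I),
    where I is the second moment of area about the region centroid.
    NMI = 1 for a circle; lower values mean less compact shapes."""
    n = len(cells)
    cx = sum(x for x, _ in cells) / n
    cy = sum(y for _, y in cells) / n
    # each unit cell contributes its centroid's squared distance plus
    # the cell's own moment about its centroid (1/6 for a unit square)
    I = sum((x - cx) ** 2 + (y - cy) ** 2 + 1 / 6 for x, y in cells)
    return n * n / (2 * math.pi * I)

print(nmi([(0, 0), (1, 0), (0, 1), (1, 1)]))  # 2x2 square → 3/pi ≈ 0.955
```

Note the measure is scale-invariant: a 1×1 and a 2×2 square score the same, while an elongated 1×4 strip scores lower.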

10.
When the geographic distribution of landscape pattern varies, global indices fail to capture the spatial nonstationarity within the dataset. Methods that measure landscape pattern at a spatially local scale are advantageous, as an index is computed at each point in the dataset. The geographic distribution of local indices is used to discover spatial trends. Local indicators for categorical data (LICD) can be used to statistically quantify local spatial patterns in binary geographic datasets. LICD, like other spatially local methods, are impacted by decisions relating to the spatial scale of the data, such as the data grain (p), and analysis parameters such as the size of the local neighbourhood (m). The goal of this article is to demonstrate how the choice of the m and p parameters impacts LICD analysis. We also briefly discuss the impact of spatial extent on the analysis, specifically on the local composition measure. An example using 2006 forest cover data for a region in British Columbia, Canada, where mountain pine beetle mitigation and salvage harvesting have occurred, is used to show the impacts of changing m and p. Selection of local window size (m = 3, 5, 7) impacts the prevalence and interpretation of significant results. Increasing data grain (p) had varying effects on significant LICD results. When implementing LICD, the choice of m and p impacts results. Exploring multiple combinations of m and p will provide insight into the selection of ideal parameters for analysis.

11.
The biggest problem for lakes that have no contact with other water sources and are recharged only by precipitation is massive eutrophication. The aim of this work was to determine the sedimentation rate in order to evaluate the progress of eutrophication for St. Ana Lake (Ciomad Mountain near Băile Tuşnad in Harghita County, Romania). The concentration of 210Pb was determined by means of high-resolution gamma spectrometry as well as derived from 210Po activity, which was measured through alpha spectrometry; the values obtained are in good agreement. For the excess 210Pb activity, values between 4.0±0.5 Bq/kg and 218±20 Bq/kg were found. As an alternative method, 137Cs dating was applied as well. Calculated mass sedimentation rates are in the range of 0.06±0.01 to 0.32±0.05 g/cm2 per year with a mean value of 0.15±0.02 g/cm2 per year. Linear sedimentation rates yielded much higher values (between 0.5±0.1 and 7.9±0.7 cm/year with a mean of 2.4±0.6 cm/year), due to the predominantly organic matter composition and the long suspension time of the sediment. This is an indication of the ongoing process of eutrophication, which will probably lead to the transformation of the lake into a peat bog.

12.
The placement of facilities according to spatial and/or geographic requirements is a popular problem within the domain of location science. Objectives that are typically considered in this class of problems include dispersion, median, center, and covering objectives, generally defined in terms of distance or service‐related criteria. With few exceptions, the existing models in the literature for these problems accommodate only one type of facility. Furthermore, the literature on these problems does not allow for the possibility of multiple placement zones within which facilities may be placed. Due to the unique placement requirements of different facility types—such as the terrain that may be considered suitable for placement and the specific placement objectives of each facility type—the suitable placement zones for different facility types, or groups of facility types, can be expected to differ. In this article, we introduce a novel mathematical treatment for multi‐type, multi‐zone facility location problems. We derive multi‐type, multi‐zone extensions to the classical integer‐linear programming formulations involving dispersion, centering, and maximal covering. The complexity of these formulations leads us to follow a heuristic solution approach, for which a novel multi‐type, multi‐zone variation of the non‐dominated sorting genetic algorithm II (NSGA‐II) is proposed and employed to solve practical examples of multi‐type, multi‐zone facility location problems.

13.
The p‐regions problem involves the aggregation or clustering of n small areas into p spatially contiguous regions while optimizing some criteria. The main objective of this article is to explore possible avenues for formulating this problem as a mixed integer‐programming (MIP) problem. The critical issue in formulating this problem is to ensure that each region is a spatially contiguous cluster of small areas. We introduce three MIP models for solving the p‐regions problem. Each model minimizes the sum of dissimilarities between all pairs of areas within each region while guaranteeing contiguity. Three strategies designed to ensure contiguity are presented: (1) an adaptation of the Miller, Tucker, and Zemlin tour‐breaking constraints developed for the traveling salesman problem; (2) the use of ordered‐area assignment variables based upon an extension of an approach by Cova and Church for the geographical site design problem; and (3) the use of flow constraints based upon an extension of work by Shirabe. We test the efficacy of each formulation as well as specify a strategy to reduce overall problem size.
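While the article enforces contiguity through MIP constraints, the property itself is easy to state procedurally. A hedged sketch, with a hypothetical adjacency-list input (rook or queen adjacency of the small areas):

```python
from collections import deque

def is_contiguous(region, adjacency):
    """True iff the areas in `region` form one connected block under
    the given adjacency graph (checked by breadth-first search)."""
    region = set(region)
    if not region:
        return False
    start = next(iter(region))
    seen, queue = {start}, deque([start])
    while queue:
        a = queue.popleft()
        for b in adjacency[a]:
            if b in region and b not in seen:
                seen.add(b)
                queue.append(b)
    return seen == region

# four areas in a row: 0 - 1 - 2 - 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(is_contiguous({0, 1, 2}, adj), is_contiguous({0, 2}, adj))  # → True False
```

In a heuristic setting this check can validate candidate regions directly; the MIP strategies in the article instead encode the same property declaratively so a solver never produces a disconnected region.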

14.
The p‐center problem is one of the most important models in location theory. Its objective is to place a fixed number of facilities so that the maximum service distance for all customers is as small as possible. This article develops a reliable p‐center problem that can account for system vulnerability and facility failure. A basic assumption is that located centers can fail with a given probability and a customer will fall back to the closest nonfailing center for service. The proposed model seeks to minimize the expected value of the maximum service distance for a service system. In addition, the proposed model is general and can be used to solve other fault‐tolerant center location problems such as the (p, q)‐center problem using appropriate assignment vectors. I present an integer programming formulation of the model and computational experiments, and then conclude with a summary of findings and point out possible future work.
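A minimal sketch of the expected objective for a fixed set of centers, assuming independent failures with probability q. Conditioning on at least one center surviving is our assumption for the all-failure scenario, not necessarily the article's treatment:

```python
from itertools import product

def expected_max_distance(dist, centers, q):
    """Expected maximum service distance when each open center fails
    independently with probability q and every customer falls back to
    its closest operating center.  Enumerates all 2^p failure states,
    skipping (conditioning away) the state where every center fails."""
    exp_val, total_prob = 0.0, 0.0
    for state in product([0, 1], repeat=len(centers)):  # 1 = operating
        up = [c for c, s in zip(centers, state) if s]
        if not up:
            continue
        prob = 1.0
        for s in state:
            prob *= (1 - q) if s else q
        worst = max(min(row[c] for c in up) for row in dist)
        exp_val += prob * worst
        total_prob += prob
    return exp_val / total_prob

# two customers, two centers co-located with them, 50% failure rate
r = expected_max_distance([[0, 5], [5, 0]], [0, 1], 0.5)
print(r)  # → 10/3: worst distance is 0 if both survive, 5 if only one does
```

The full model would wrap this evaluation inside a search over center sets, which is what the integer programming formulation accomplishes without scenario enumeration.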

15.
The solubility of quartz has been measured in a wide range of salt solutions at 800°C and 0.5 GPa, and in NaCl, CaCl2 and CsCl solutions and H2O–CO2 fluids at six additional PT conditions ranging from 400°C at 0.1 GPa to 800°C at 0.9 GPa. The experiments cover a wide range of compositions along each binary. At PT conditions where the density of pure water is low (0.43 g cm−3), addition of most salts produces an enhancement of quartz solubility at low to moderate salt concentrations (salt‐in effect), although quartz solubility falls with further decrease in XH2O. At higher fluid densities (0.7 g cm−3 and greater), the salt‐in effect is generally absent, although this depends on both the cation present and the actual PT conditions. The salt‐in effect is most readily produced by chloride salts of large monovalent cations, while CaCl2 only produced a salt‐in effect at the most extreme conditions of high T and low P investigated (800°C at 0.2 GPa). Under most crustal conditions, the addition of common salts to aqueous fluids results in a lowering of quartz solubility relative to that in pure water (salt‐out effect). Comparing quartz solubility in different fluids by calculating XH2O on the basis that all salts are fully associated under all conditions yields higher quartz solubility in solutions of monovalent salts than in solutions of divalent salts; absolute values are also influenced by cation radius. Quartz solubility measurements have been fitted to a Setchenow‐type equation, modified to take account of the separate effects of both the lowering of XH2O and the specific effects of different salts, which are treated as arising through distinct patterns of non‐ideal behaviour, rather than the explicit formation of additional silica complexes with salt components. Quartz solubility in H2O–CO2 fluids can be treated as ideal, if the solvation number of aqueous silica is taken as 3.5.
For this system the solubility (molality) of quartz in the binary fluid, S, is related to its solubility in pure water at the same PT conditions, So, by an expression not reproduced in this excerpt. Quartz solubility in binary salt systems (H2O–RCln) can be fitted to a relationship (likewise not reproduced here) in which the salt concentration mRCln is expressed as molality and the exponent b has a value of 1 except under conditions where salting‐in is observed at low salt concentrations, in which case it is <1. Under most crustal conditions, the solubility of quartz in NaCl solutions is given to a good approximation by a further expression omitted in this excerpt. We propose that quartz solubility in multicomponent fluids can be estimated from an extended expression, calculating XH2O based on the total fluid composition (including dissolved gases), and adding terms for each major salt present. Our experimental results on H2O–NaCl–CO2 fluids are satisfactorily predicted on this basis. An important implication of the results presented here is that there are circumstances where the migration of a fluid from one quartz‐bearing host into another, if it is accompanied by re‐equilibration through cation exchange, may lead to dissolution or precipitation of quartz even at constant P and T, with concomitant modification of the permeability structure of the deep crust.

16.
We model pore‐pressure diffusion caused by pressurized waste‐fluid injection at two nearby wells and then compare the buildup of pressure with the observed initiation and migration of earthquakes during the early part of the 2010–2011 Guy–Greenbrier earthquake swarm. Pore‐pressure diffusion is calculated using MODFLOW 2005, which allows the actual injection histories (volume/day) at the two wells to diffuse through a fractured and faulted 3D aquifer system representing the eastern Arkoma basin. The aquifer system is calibrated using the observed water‐level recovery following well shut‐in at three wells. We estimate that the hydraulic conductivities of the Boone Formation and Arbuckle Group are 2.2 × 10−2 and 2.03 × 10−3 m day−1, respectively, with a hydraulic conductivity of 1.92 × 10−2 m day−1 in the Hunton Group when considering 1.72 × 10−3 m day−1 in the Chattanooga Shale. Based on the simulated pressure field, injection near the relatively conductive Enders and Guy–Greenbrier faults (which hydraulically connect the Arbuckle Group with the underlying basement) permits pressure diffusion into the crystalline basement, but the effective radius of influence is limited in depth by the vertical anisotropy of the hydraulic diffusivity. Comparing spatial/temporal changes in the simulated pore‐pressure field to the observed seismicity suggests that minimum pore‐pressure changes of approximately 0.009 and 0.035 MPa are sufficient to initiate seismic activity within the basement and sedimentary sections of the Guy–Greenbrier fault, respectively. Further, the migration of a second front of seismicity appears to follow the approximately 0.012 MPa and 0.055 MPa pore‐pressure fronts within the basement and sedimentary sections, respectively.

17.
We determined the 230Th/U ages of individual calcite layers that grew on the walls of artificial water‐supply tunnels (‘water quarries’) at Troy/Ilios by using thermal ionization mass spectrometry. The oldest age of overgrowth being 4350 ± 570 years, the tunnels must have been built a short time earlier, during the archaeological period Troy I–II. The tunnels were also used during Troy VI–VII (1700–1150 BCE, a period that includes the date of the supposed ‘Trojan War’), in Homeric times (c. 720 BCE) and in the Roman period. These findings add strong support to the identification of the water quarries with a natural phenomenon and Anatolian deity known in Hittite texts of the second millennium BCE as KASKAL.KUR, a term denoting subsurface water systems. Consequently, they reinforce the view that Troy in the second millennium BCE was Anatolian in character. In this way, the findings are also consistent with the identification of (W)Ilios with Wilusa, a city attested to in Hittite historical texts.

18.
Research in the area of spatial decision support (SDS) and resource allocation has recently generated increased attention for integrating optimization techniques with GIS. In this paper we address the use of spatial optimization techniques for solving multi‐site land‐use allocation (MLUA) problems, where MLUA refers to the optimal allocation of multiple sites of different land uses to an area. We solve an MLUA problem using four different integer programs (IP), of which three are linear integer programs. The IPs are formulated for a raster‐based GIS environment and are designed to minimize development costs and to maximize compactness of the allocated land use. The preference for either minimizing costs or maximizing compactness has been made operational by including a weighting factor. The IPs are evaluated on their speed and their efficacy for handling large databases. All four IPs yielded the optimal solution within a reasonable amount of time, for an area of 8 × 8 cells. The fastest model was successfully applied to a case study involving an area of 30 × 30 cells. The case study demonstrates the practical use of linear IPs for spatial decision support issues.

19.
This article seeks to introduce a new methodological approach to estimating the population size of settlements in the Graeco-Roman Fayum/Egypt (330 BC–400 AD). The aim is to represent and analyse the relationship between settlements and the facilities they provide. We suggest turning the information commonly contained in traditional site gazetteers into presence–absence matrices of selected facilities. We then use these facility matrices to estimate the size of ancient settlements through linear regression. The equation is initially tested on medieval settlements in Norwich and East Anglia (England) and later applied to a settlement–facility matrix for the Ptolemaic period (330–30 BC). The results from the medieval data show the validity of the approach with a complete data set. The fact that the Ptolemaic data are fragmented and incomplete naturally adds a small but acceptable error to the estimates.
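The presence–absence idea can be sketched with a toy example: count the facility types present at each settlement, fit a line against known populations, and predict for sites with known facilities but unknown size. The matrix, facility types, and populations below are invented for illustration, not the article's data:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# toy presence/absence matrix: rows are settlements of known
# population, columns are facility types (e.g. temple, bath, granary)
matrix = [[1, 0, 0], [1, 1, 0], [1, 1, 1]]
pops = [200, 450, 700]
counts = [sum(row) for row in matrix]
a, b = fit_line(counts, pops)
print(a + b * 2)  # predicted population for a site with two facilities
```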

20.
As elsewhere in the world, researchers in Quebec are becoming increasingly concerned about the extensive use of pesticides in agriculture. Data for 1982–83 concerning the use of these products and the incidence of leukemia and of cancer of the brain and the lymphatic tissues have been tabulated for 34 drainage basins located in southern Quebec. The calculation of the standard morbidity ratio (SMR) allowed us to evaluate the incidence of cancers in these drainage basins where agricultural pesticides have been used at high levels for more than 15 years. For leukemia, a statistically significantly higher SMR (1.69, p ≤ 0.05) was shown to exist among men in the rural farm population in the basin of the Yamaska River. This basin was one of the areas most exposed to agricultural pesticides. The calculation of the relative risks (RR) for men at the level of municipalities within the Yamaska River basin showed a statistically significant excess (p ≤ 0.05) for leukemia in the rural farm municipalities (RR = 2.27) as compared to urban municipalities. There was also a statistically significant excess (p ≤ 0.05) for men in municipalities that draw their drinking water from wells (RR = 2.07) as compared to those where water is drawn from rivers. However, the role of the source of drinking water is difficult to isolate because most municipalities that draw their water from wells are also agricultural and rural. The overall results of this exploratory study of the basin of the Yamaska River suggest that there may be a relationship between leukemia and the extensive use of agricultural pesticides in this region of Quebec. This hypothesis could be verified in epidemiological studies at the individual level.
