Similar Articles
20 similar articles found (search time: 31 ms)
1.
Anthropologists require methods for accurately estimating stature and body mass from the human skeleton. Age-structured, generalized least squares (LS) regression formulas have been developed to predict stature from femoral length and to predict body mass in immature human remains using the width of the distal metaphysis, midshaft femoral geometry (J), and femoral head diameter. This paper tests the hypothesis that panel regression is an appropriate statistical method for regression modeling of longitudinal growth data, with longitudinal and cross-sectional effects on variance. Reference data were derived from the Denver Growth Study; panel regression was used to create one formula for estimating stature (for individuals 0.5–11.5 years old); two formulas for estimating body mass from the femur in infants and children (0.5–12.5 years old); and one formula for estimating body mass from the femoral head in older subadults (7–17.5 years old). The formulas were applied to an independent target sample of cadavers from Franklin County, Ohio, and a large sample of immature individuals from diverse global populations. Results indicate panel regression formulas accurately estimate stature and body mass in immature skeletons, without reference to an independent estimate for age at death. Thus, using panel regression formulas to estimate stature and body mass in forensic and archaeological specimens may reduce second-stage errors associated with inaccurate age estimates.
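Panel regression jointly models repeated measurements on the same individuals. As a rough, self-contained sketch of the idea (not the authors' age-structured model), the following fits a fixed-effects panel regression — one shared slope plus a per-subject intercept — to hypothetical longitudinal femur-length/stature data:

```python
import numpy as np

def fixed_effects_panel_fit(subject_ids, x, y):
    """Fit y = b*x + a_i (per-subject intercept) by least squares.

    A minimal fixed-effects panel regression: each subject in a
    longitudinal sample gets its own intercept, and one common
    slope b is estimated across all subjects.
    """
    subjects = sorted(set(subject_ids))
    n, k = len(y), len(subjects)
    X = np.zeros((n, k + 1))
    X[:, 0] = x                                   # common slope column
    for row, sid in enumerate(subject_ids):
        X[row, 1 + subjects.index(sid)] = 1.0     # subject dummy
    coef, *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
    return coef[0], dict(zip(subjects, coef[1:]))

# Hypothetical longitudinal data: femoral length (cm) vs. stature (cm)
# for three children measured repeatedly; slope shared, baseline varies.
rng = np.random.default_rng(0)
ids, xs, ys = [], [], []
for i, base in enumerate([55.0, 58.0, 61.0]):
    for femur in np.linspace(15, 35, 8):
        ids.append(i)
        xs.append(femur)
        ys.append(base + 2.4 * femur + rng.normal(0, 0.5))

slope, intercepts = fixed_effects_panel_fit(ids, np.array(xs), np.array(ys))
print(round(slope, 2))
```

The shared slope is recovered close to the simulated value of 2.4 despite each child having a different baseline.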

2.
A new statistical inversion method to estimate the errors in incoherent scatter measurements for a given set of ionospheric parameters is presented. The possibility of determining the ion composition in a multiparameter fit is considered. The method can also be applied to estimate the requirements on the measurement of autocorrelation functions, to find the minimum possible lag resolution and lag extent for a stable inversion solution of the plasma parameters, and the temperature interval for which a stable solution exists, with or without the zero-lag data. The results are illustrated in the case of a multipulse code.

3.
The Perlman–Asaro data bank contains nearly 900 data sets of Mycenaean and Minoan sherds which were sampled in different regions of Greece and Crete. The data were obtained from Neutron Activation Analysis measurements at Berkeley in the 1970s, and for each concentration value a corresponding uncertainty of measurement was also recorded. Parts of the contents of the data bank have been published before. Here, we present the first complete statistical analysis of the whole data bank, considering measurement errors as well as constant shifts of the data due to pottery making practices (“dilutions”). We establish new reference patterns for different regions of Greece and Crete and compare the results with the contents of our own group data bank in Bonn. For those parts of the data which have been published previously, a comparison between these studies and our recent investigation is presented.

4.
The archaeological dose (paleodose) P of pottery comprises two components, the equivalent dose Q and the supralinearity correction I. To improve the accuracy of paleodose measurement and to compute the measurement error of the paleodose correctly, ten ancient pottery sherd samples were tested; linear regression was used to calculate the equivalent dose Q and the supralinearity correction I, and the errors of several paleodose calculation methods were analyzed and evaluated. The study shows that linear-regression methods (including the schemes termed 归平 and 平归) determine the paleodose more accurately than the conventional method in current use, and a comparison of the errors of the different methods indicates that the 归平 scheme yields the most rigorous and reasonable error estimates. This work is significant both for improving the accuracy of paleodose measurement and for computing its measurement error correctly, making paleodose determination more sound and consistent with mathematical statistics.

5.
This paper is concerned with the statistical errors which are present when wind velocities in the atmosphere are determined by the radar method known as the spaced antenna technique. It is assumed that the (complex) data is processed by the method known as full correlation analysis (FCA). A theory is first developed to give the error in the determination of the position of the maximum of a cross-correlation function and the value of lag such that the auto-correlation falls to a value equal to that of the cross-correlation at zero lag. These are the basic quantities needed for the application of FCA. These error estimates are tested with a variety of numerically simulated data and shown to be realistic. The results are applied to real data and, using the standard techniques for the propagation of errors, they lead to estimates of the errors in the derived wind velocities. In order to test these estimates, an experiment was carried out in which two independent wind determinations were made simultaneously. The differences were used to obtain experimental estimates of the errors. It was found that the theory overestimates the error in the wind velocities by about 50%. Possible reasons for this are discussed.

6.
The desire of many geographical information science (GIS) practitioners to undertake sophisticated spatial pattern analysis has been facilitated by the increasing availability of specialised software and the appearance of pedagogic papers illustrating the application of various techniques. However, the appropriate use of these techniques also requires an understanding of the nature of hypothesis testing and statistical inference for spatial data. Since there is little information currently available to aid the GIS practitioner in this regard, we offer such guidance here. We do so by revisiting the steps involved in spatial pattern analysis. Our perspective is based on the notion of spatial stochastic models and is presented as a decision tree. The four levels of the tree (i.e., sequential decisions) are associated with the assumptions, the type of data representation and the types of questions asked by the analyst. We emphasise the scientific and educational challenges involved.

7.
Spatial Cluster Detection in Spatial Flow Data
As a typical form of geographical phenomena, spatial flow events have been widely studied in contexts like migration, daily commuting, and information exchange through telecommunication. Studying the spatial pattern of flow data serves to reveal essential information about the underlying process generating the phenomena. Most methods for detecting global clustering patterns and local clusters focus on single-location spatial events or fail to preserve the integrity of spatial flow events. In this research we introduce a new spatial statistical approach for detecting clustering (and clusters) in flow data that extends the classical local K-function, while maintaining the integrity of flow data. Through the appropriate measurement of spatial proximity relationships between entire flows, the new method upgrades the classical hot spot detection method to “hot flow” detection. Several specific aspects of the method are discussed to provide evidence of its robustness and expandability, such as the multiscale issue and relative importance control, using a real data set of vehicle theft and recovery location pairs in Charlotte, NC.
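The key design choice the abstract describes is measuring proximity between entire flows rather than between single points. A minimal sketch under assumed definitions — here the distance between two flows is the larger of the origin-origin and destination-destination distances, which is not necessarily the exact measure used in the paper:

```python
import numpy as np

def flow_distance(f1, f2):
    """Distance between two flows, each given as (ox, oy, dx, dy).

    Uses the larger of the origin-origin and destination-destination
    distances, so two flows count as 'close' only when both endpoints
    are close -- one simple way to keep a flow's integrity intact.
    """
    o = np.hypot(f1[0] - f2[0], f1[1] - f2[1])
    d = np.hypot(f1[2] - f2[2], f1[3] - f2[3])
    return max(o, d)

def local_flow_counts(flows, radius):
    """For each flow, count the other flows within `radius` --
    the numerator of a local K-style statistic for flow data."""
    n = len(flows)
    return [sum(1 for j in range(n)
                if j != i and flow_distance(flows[i], flows[j]) <= radius)
            for i in range(n)]

# Toy data: three near-identical theft->recovery flows and one outlier.
flows = [(0, 0, 10, 10), (0.5, 0, 10, 10.5), (0, 0.5, 10.5, 10),
         (50, 50, 80, 80)]
print(local_flow_counts(flows, radius=2.0))  # → [2, 2, 2, 0]
```

The three similar flows register each other as neighbors; the outlier flow, whose endpoints lie far away, has a count of zero.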

8.
Biogeographical studies are often based on a statistical analysis of data sampled in a spatial context. However, in many cases standard analyses such as regression models violate the assumption of independently and identically distributed errors. In this article, we show that the theory of wavelets provides a method to remove autocorrelation in generalized linear models (GLMs). Autocorrelation can be described by smooth wavelet coefficients at small scales. Therefore, data can be decomposed into uncorrelated and correlated parts. Using an appropriate linear transformation, we are able to extend GLMs to autocorrelated data. We illustrate our new method, called the wavelet‐revised model (WRM), by applying it to multiple regression with response variables conforming to various distributions. Results are presented for simulated data and real biogeographical data (species counts of the plant genus Utricularia [bladderworts] in grid cells throughout Germany). The results of our WRM are compared with those of GLMs and models based on generalized estimating equations. We recommend WRMs, especially as a method that allows for spatial nonstationarity. The technique developed for lattice data is applicable without any prior knowledge of the real autocorrelation structure.
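The decomposition the abstract describes can be illustrated with the simplest wavelet, the Haar transform: averages of neighboring cells capture the smooth, autocorrelated component, while differences capture the fine-scale, nearly uncorrelated part. A minimal sketch (one Haar level on a 1-D signal; the WRM itself works on lattice data and embeds such a decomposition in a GLM):

```python
import numpy as np

def haar_step(x):
    """One level of the Haar wavelet transform (len(x) must be even):
    pairwise averages give the smooth (approximation) coefficients,
    pairwise differences give the detail coefficients."""
    x = np.asarray(x, float)
    smooth = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return smooth, detail

def haar_inverse(smooth, detail):
    """Exact inverse of haar_step: recombine the two parts."""
    out = np.empty(2 * len(smooth))
    out[0::2] = (smooth + detail) / np.sqrt(2.0)
    out[1::2] = (smooth - detail) / np.sqrt(2.0)
    return out

# A slowly varying (autocorrelated) trend plus white noise: the trend
# loads onto the smooth coefficients, the noise onto the details.
rng = np.random.default_rng(2)
trend = np.sin(np.linspace(0, np.pi, 64))
signal = trend + rng.normal(0, 0.05, 64)
smooth, detail = haar_step(signal)
print(np.allclose(haar_inverse(smooth, detail), signal))  # → True
```

Because the transform is invertible, no information is lost; the correlated and near-uncorrelated parts can then be modeled separately.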

9.
The application of radiocarbon dating to archaeological samples generally requires calibration of 14C dates to calendar ages and interpretation of dating errors. In this paper, four recent methods of age calibration are assessed, particularly with regard to their quality of error treatment. Recent experimental research has suggested that commonly quoted errors on “raw” 14C dates may require enlargement to more realistic levels, which, when incorporated in the calibration schemes, produce a considerable increase in the size of the typical calibrated interval. A general decrease in the sensitivity of 14C dating using single, “normal precision” dates is implied. Thus typical calibrated age intervals range from 300 to 1300 years (approximate 95% confidence level), with little improvement resulting if “high precision” calibration systems are used to correct “normal precision” dates. Of the four methods considered here, that proposed by Neftel is found to provide the most objective, flexible, comprehensive and “easy to use” scheme. This method is particularly recommended for its treatment of errors both on the dates to be calibrated and on the calibration curve itself.

10.
The p-median problem is a powerful tool in analyzing facility location options when the goal of the location scheme is to minimize the average distance that demand must traverse to reach its nearest facility. It may be used to determine the number of facilities to site, as well as the actual facility locations. Demand data are frequently aggregated in p-median location problems to reduce the computational complexity of the problem. Demand data aggregation, however, results in the loss of locational information. This loss may lead to suboptimal facility location configurations (optimality errors) and inaccurate measures of the resulting travel distances (cost errors). Hillsman and Rhoda (1978) have identified three error components: Source A, B, and C errors, which may result from demand data aggregation. In this article, a method to measure weighted travel distances in p-median problems which eliminates Source A and B errors is proposed. Test problem results indicate that the proposed measurement scheme yields solutions with lower optimality and cost errors than does the traditional distance measurement scheme.
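The effect of demand aggregation on measured travel cost is easy to demonstrate. The sketch below, in the spirit of Hillsman and Rhoda's taxonomy (hypothetical data, not the article's test problems), places a facility at the centroid of an aggregated block: the aggregated model then reports zero travel cost even though every underlying demand point lies sqrt(2) away:

```python
import numpy as np

def weighted_cost(demand_pts, weights, facility):
    """True weighted travel distance: sum over individual demand points."""
    d = np.hypot(demand_pts[:, 0] - facility[0],
                 demand_pts[:, 1] - facility[1])
    return float(np.sum(weights * d))

# Hypothetical block of four unit-weight demand points, aggregated
# to a single centroid for the p-median model.
pts = np.array([(0, 0), (0, 2), (2, 0), (2, 2)], float)
w = np.ones(4)
centroid = pts.mean(axis=0)          # aggregate representation: (1, 1)
facility = np.array([1.0, 1.0])      # facility sited at the centroid

true_cost = weighted_cost(pts, w, facility)             # real locations
aggregated_cost = 4 * np.hypot(*(centroid - facility))  # centroid only

# Aggregation reports zero cost although each demand point is
# sqrt(2) from the facility -- the kind of error the article's
# measurement scheme is designed to eliminate.
print(true_cost, aggregated_cost)
```

Here the true cost is 4·sqrt(2) ≈ 5.66 while the aggregated measure is exactly 0, an extreme case of the cost error described in the abstract.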

11.
Spatial land‐use models over large geographic areas and at fine spatial resolutions face the challenges of spatial heterogeneity, model predictability, data quality, and of the ensuing uncertainty. We propose an improved neural network model, ART‐Probability‐Map (ART‐P‐MAP), tailored to address these issues in the context of spatial modeling of land‐use change. First, it adaptively forms its own network structure to account for spatial heterogeneity. Second, it explicitly infers posterior probabilities of land conversion that facilitate the quantification of prediction uncertainty. Extensive calibration under various test settings is conducted on the proposed model to optimize its utility in seeking useful information within a spatially heterogeneous environment. The calibration strategy involves building a bagging ensemble for training and stratified sampling with varying category proportions for experimentation. Through a temporal validation approach, we examine the models’ performance within a systematic assessment framework consisting of global metrics and cell‐level uncertainty measurement. Compared with two baselines, ART‐P‐MAP achieves consistently good and stable performance across experiments and exhibits superior capability to handle the spatial heterogeneity and uncertainty involved in the land‐use change problem. Finally, we conclude that, as a general probabilistic regression model, ART‐P‐MAP is applicable to a broad range of land‐use change modeling approaches, which deserves future research.

12.
This paper describes the application of some further statistical methods for the calibration of floating tree-ring chronologies to data pertaining to the Neolithic settlement at Auvernier. The more general statistical approach utilizes the information contained in the successive time-derivatives as well as the absolute values of the bristlecone pine calibration curve, by fitting polynomial or piecewise polynomial curves, thereby producing better estimates, in general, of the absolute dates of the floating chronologies. Presentation of results in terms of joint confidence regions enables the relevant stratigraphic evidence to be included in the analysis by simple graphical means. In addition, the paper offers a new approach, using only floating tree-ring chronologies, to the testing of Libby's principle of simultaneity.

13.
Computed tomography (CT) was first applied in the early 1970s and subsequently introduced a new perspective on anatomical imaging. In the last decade, high-resolution CT (HR-CT) has had a high impact on anthropology and paleoanthropology through its ability to define and explore subtle differences in hard tissue structures in fossil and extant humans and nonhuman primates. CT is very suitable for unique fossil material because it is destruction-free: the original material stays intact while the internal structures are digitized. The imaging yields a virtual copy of the object, which can be used to generate detailed copies of original fossil material. CT data allow multiple studies to proceed in parallel and independently on specimens which are not commonly accessible. Diverse CT systems with different performance characteristics, designed for different functions, can make it difficult for a researcher to choose the most appropriate CT system and to check the image quality of CT scans. The physical principles involved in CT imaging and the principles of signal processing and computer graphics can help in choosing the best scan settings and the most suitable CT system for a study. Quantitative and qualitative analyses can also be improved, and comparisons between different studies facilitated, when the above-mentioned principles are taken into account. In the following, I give an overview of the different CT systems and discuss both theoretical and practical matters of CT imaging using the example of trabecular bone.

14.
The bristlecone pine tree-ring calibration of radiocarbon dates, while necessitating changes of up to 700 years in Holocene chronology before 1000 b.c., offers possibilities of very accurate dating when 14C determinations from floating tree-ring chronologies are utilized. A statistical approach assuming linear regression is developed and used to position the floating tree-ring chronologies at Swiss neolithic sites, using radiocarbon dates published by Ferguson, Huber and Suess and by Suess. The statistical method gives objective estimated dates with estimates of error related, in a consistent and explicit manner, to the inherent inaccuracies of the radiocarbon dates. Most of the method may readily be tested by standard statistical procedures. For the particular cases considered the assumptions of linearity and parallelism are investigated, and the precision of the estimated dates is comparable with that claimed by Suess and his co-workers. A precise calibration is thus possible without utilizing the short-term fluctuations in the Suess calibration curve. The analysis, while avoiding some assumptions of Suess and his collaborators, offers an explicit procedure for establishing controlled teleconnections with the Ferguson dendrochronology, and supports their emphasis on the importance of radiocarbon dates from floating tree-ring sequences for the construction of precise prehistoric chronologies.

15.
Knowledge about the Inca measurement system is based on information from the colonial chronicles and modern studies of the 16th-century Quechua dictionaries. Based on those texts, we can presume that the Incas used an anthropometric system of measurement adopted from the proportions of the human body. Using cosine quantogram analysis and statistical verification, it is possible to verify the existence of the measurement system used by the Inca architects. For this purpose, a series of measurements of architectural and water-infrastructure elements was collected from 3D point clouds of the Chachabamba and Machu Picchu settlements in Machupicchu National Archaeological Park.
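Cosine quantogram analysis (due to Kendall) scores each candidate unit q by how tightly the measurements cluster around integer multiples of q, via phi(q) = sqrt(2/N) * sum cos(2*pi*x_i/q). A minimal sketch with simulated measurements — the 0.42 m unit below is an arbitrary illustration, not a claim about Inca units:

```python
import numpy as np

def cosine_quantogram(measurements, quanta):
    """Kendall's cosine quantogram phi(q) = sqrt(2/N) * sum cos(2*pi*x_i/q).

    A pronounced peak at some q indicates that the measurements cluster
    near integer multiples of that quantum (a candidate unit of length).
    """
    x = np.asarray(measurements, float)
    n = len(x)
    return np.array([np.sqrt(2.0 / n) * np.sum(np.cos(2 * np.pi * x / q))
                     for q in quanta])

# Hypothetical measurements built as multiples of a 0.42 m unit
# plus 1 cm of measurement noise.
rng = np.random.default_rng(1)
unit = 0.42
data = unit * rng.integers(2, 30, size=200) + rng.normal(0, 0.01, size=200)

quanta = np.arange(0.20, 0.80, 0.001)   # candidate quanta to scan
phi = cosine_quantogram(data, quanta)
best_q = quanta[np.argmax(phi)]
print(round(best_q, 2))
```

In practice the peak's significance is then checked against simulated quantum-free data, which is the "statistical verification" step the abstract refers to.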

16.
This article presents a new metric we label the colocation quotient (CLQ), a measurement designed to quantify (potentially asymmetrical) spatial association between categories of a population that may itself exhibit spatial autocorrelation. We begin by explaining why most metrics of categorical spatial association are inadequate for many common situations. Our focus is on where a single categorical data variable is measured at point locations that constitute a population of interest. We then develop our new metric, the CLQ, as a point‐based association metric most similar to the cross‐k‐function and join count statistic. However, it differs from the former in that it is based on distance ranks rather than on raw distances and differs from the latter in that it is asymmetric. After introducing the statistical calculation and underlying rationale, a random labeling technique is described to test for significance. The new metric is applied to economic and ecological point data to demonstrate its broad utility. The method expands upon explanatory powers present in current point‐based colocation statistics.
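A common nearest-neighbor form of the colocation quotient compares how often category-A points have a category-B nearest neighbor against the rate expected if labels were spatially random. A minimal brute-force sketch with toy coordinates (the published statistic generalizes to k nearest neighbors and uses random labeling for significance testing):

```python
import numpy as np

def colocation_quotient(points, categories, a, b):
    """Nearest-neighbor colocation quotient CLQ_{A->B}.

    CLQ_{A->B} = (C_{A->B} / N_A) / (N'_B / (N - 1)), where C_{A->B}
    counts category-A points whose nearest neighbor has category B,
    and N'_B is N_B (or N_B - 1 when A == B, since a point cannot be
    its own neighbor). Values above 1 suggest A is attracted to B;
    note the measure is asymmetric: CLQ_{A->B} != CLQ_{B->A} in general.
    """
    pts = np.asarray(points, float)
    cats = np.asarray(categories)
    n = len(pts)
    idx_a = np.where(cats == a)[0]
    c_ab = 0
    for i in idx_a:
        d = np.hypot(pts[:, 0] - pts[i, 0], pts[:, 1] - pts[i, 1])
        d[i] = np.inf                       # exclude the point itself
        if cats[np.argmin(d)] == b:
            c_ab += 1
    n_b_prime = np.sum(cats == b) - (1 if a == b else 0)
    return (c_ab / len(idx_a)) / (n_b_prime / (n - 1))

# Toy pattern: every A point sits right next to a B point.
pts = [(0, 0), (0.1, 0), (5, 5), (5.1, 5), (9, 0), (9, 1)]
cats = ["A", "B", "A", "B", "B", "B"]
print(colocation_quotient(pts, cats, "A", "B"))
```

Here both A points have a B nearest neighbor, giving (2/2)/(4/5) = 1.25, i.e. A points find B neighbors somewhat more often than chance would predict.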

17.
This paper extends Mills' method for estimating urban population density gradients to general noncircular and asymmetrical urban forms, using Gauss-Legendre quadrature embedded in a Newton-Raphson root-finding algorithm. We also examine the sensitivity of the Mills method to measurement errors in the assumptions. Several issues arising from the comparison of analytical, Mills-type estimation procedures with statistical procedures are explored, particularly in light of recent work that questions the negative exponential formulation of urban density gradients. We note in particular the influence of secondary population centers as a source of estimation bias.
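The numerical machinery named in the abstract — Gauss-Legendre quadrature inside a Newton-Raphson loop — can be sketched for the simplest, circular, negative-exponential case: recover the density gradient b of D(r) = D0*exp(-b*r) from total population, central density, and city radius. All parameter values below are hypothetical; the paper's contribution is extending this to noncircular, asymmetrical forms:

```python
import numpy as np

def modeled_population(b, d0, radius, nodes=32):
    """Population of a circular city with density d0*exp(-b*r):
    P(b) = integral_0^R d0*exp(-b*r)*2*pi*r dr,
    evaluated with Gauss-Legendre quadrature mapped to [0, radius]."""
    x, w = np.polynomial.legendre.leggauss(nodes)
    r = 0.5 * radius * (x + 1.0)          # map [-1, 1] -> [0, R]
    integrand = d0 * np.exp(-b * r) * 2 * np.pi * r
    return 0.5 * radius * np.sum(w * integrand)

def solve_gradient(pop, d0, radius, b0=0.5, tol=1e-10):
    """Newton-Raphson for the gradient b satisfying P(b) = pop.
    The derivative dP/db is itself a Gauss-Legendre integral."""
    b = b0
    x, w = np.polynomial.legendre.leggauss(32)
    r = 0.5 * radius * (x + 1.0)
    for _ in range(50):
        f = modeled_population(b, d0, radius) - pop
        df = 0.5 * radius * np.sum(w * (-r) * d0 * np.exp(-b * r)
                                   * 2 * np.pi * r)
        step = f / df
        b -= step
        if abs(step) < tol:
            break
    return b

# Hypothetical city: generate the population for a known gradient,
# then recover that gradient from population, central density, radius.
true_b = 0.25
pop = modeled_population(true_b, d0=10000.0, radius=20.0)
print(round(solve_gradient(pop, 10000.0, 20.0), 6))
```

Because P(b) is smooth and monotone decreasing in b, Newton-Raphson converges rapidly here; the sensitivity analysis in the paper concerns how errors in inputs such as pop and d0 propagate into the recovered b.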

18.
This study is a part of a larger investigation related to questions of the systemization of neutron activation analysis results. It considers the analysis of experimental data and focuses on the application of a multivariate statistical technique to the characterization and classification of ceramic artifacts in order to establish and assess their provenance. Central to the statistical methodology reported here is the determination of a specific characterization-classification function. This function provides a method of defining a group of archaeological ceramics on the basis of their elemental composition patterns. Conditional and expected probabilities calculated from the function are then used as a means of classifying unknown observations, indicating outliers, and detecting the existence of another group as yet unidentified. The utility of the function is demonstrated in a specific test case. The possibility of using the characterization-classification function to increase the portability of information and to reduce the need for centralized computer data banks is pointed out.

19.
Facility location models are examined as a framework for generating rain gauge networks designed to reduce errors in mean areal precipitation (MAP) estimation. Errors in estimating MAP may be divided into two types: (i) capture error, not observing a storm which occurs in a gauged area, and (ii) extrapolation error, using a rain gauge measurement to represent a heterogeneous area. In this paper, five rain gauge location models are developed to minimize these errors. The models include adaptations of the maximal covering location problem, the p-median model, and three models derived from multicriteria cluster analysis. The models are tested using precipitation data from an experimental watershed maintained by the U.S. Department of Agriculture in Arizona. Analysis of the results reveals, for the particular watershed, that (1) in sparse networks, location of rain gauges can play a larger role than number of rain gauges in reducing errors in MAP estimates; (2) models based on mean hydrologic data provide nearly as good networks as models based on spatially correlated data; and (3) models yielding the best networks for estimating precipitation for flood predictions are different from the models providing the best precipitation estimates for low flow forecasts.

20.
We propose a new estimator of spatial autocorrelation of areal incidence or prevalence rates in small areas, such as crime and health indicators, for correcting spatially heterogeneous sampling errors in denominator data. The approach is dubbed the heteroscedasticity‐consistent empirical Bayes (HC‐EB) method. As American Community Survey (ACS) data have been released to the public for small census geographies, small‐area estimates now form the demographic landscape of neighborhoods. Meanwhile, there is growing awareness of the diminished statistical validity of global and local Moran’s I when such small‐area estimates are used in denominator data. Using teen birth rates by census tracts in Mecklenburg County, North Carolina, we present comparisons of conventional and new HC‐EB estimates of global and local Moran’s I statistics created on ACS data, along with estimates on ground truth values from the 2010 decennial census. Results show that the new adjustment method dramatically enhances the statistical validity of global and local spatial autocorrelation statistics.
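For context, a conventional global empirical Bayes rate smoother (Marshall's method) — the kind of estimator the HC-EB approach refines, not the HC-EB estimator itself — shrinks rates computed from small populations toward the overall mean before statistics such as Moran's I are computed. A minimal sketch with hypothetical tract counts:

```python
import numpy as np

def eb_smoothed_rates(events, pops):
    """Global empirical Bayes smoothing of small-area rates
    (Marshall's method): rates from small populations are shrunk
    toward the overall mean in proportion to their sampling noise.
    """
    e, n = np.asarray(events, float), np.asarray(pops, float)
    r = e / n
    m = e.sum() / n.sum()                        # overall mean rate
    s2 = np.sum(n * (r - m) ** 2) / n.sum()      # weighted rate variance
    a = max(s2 - m / (n.sum() / len(n)), 0.0)    # between-area variance
    w = a / (a + m / n)                          # shrinkage weights
    return m + w * (r - m)

# Hypothetical teen-birth counts: the tiny tract's extreme raw rate
# (2/20 = 0.10) is pulled strongly toward the overall mean, while
# the large tracts' rates barely move.
rates = eb_smoothed_rates([2, 60, 20], [20, 1000, 900])
print(np.round(rates, 4))
```

The heteroscedasticity the abstract targets arises because small denominators make raw rates noisy; shrinkage of this kind stabilizes them before spatial autocorrelation is measured.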
