51.
Reviews   Total citations: 1 (self-citations: 0, citations by others: 1)
52.
53.
54.
ABSTRACT

The present article compares John Locke’s and John Owen’s approaches to toleration. Owen, a towering figure of the Puritan revolution and a Protestant scholastic whose work is still the object of significant appreciation in Reformed circles, was Locke’s dean during his time as a student at Oxford. Owen wrote a number of treatises on toleration, especially during the mid-1640s and again after the Restoration, in his role as a nonconforming divine. There has also been some speculation regarding the involvement of both Owen and Locke in the circle around Shaftesbury. Together with their writings against Parker and Stillingfleet, this would seem to draw Owen and Locke quite close to each other. The two authors are divided, however, in their approach to Christian doctrine: Owen represents classical confessionalism, Locke modern doctrinal minimalism. The article explores the ways in which these opposing approaches to doctrine relate to their views of toleration.
55.
This paper attempts to develop a mathematically rigorous framework for minimizing the cross-entropy function in an error-backpropagation setting. In doing so, we derive the backpropagation formulae for evaluating the partial derivatives in a computationally efficient way. Various techniques for optimizing the multiple-class cross-entropy error function to train single-hidden-layer neural network classifiers with softmax output transfer functions are investigated on a real-world multispectral pixel-by-pixel classification problem of fundamental importance in remote sensing. These techniques include epoch-based and batch versions of gradient-descent error backpropagation, PR-conjugate gradient, and BFGS quasi-Newton optimization. The method of choice depends upon the nature of the learning task and whether one wants to optimize learning for speed or for classification performance. Comparatively considered, gradient-descent error backpropagation provided the best and most stable out-of-sample performance across batch and epoch-based modes of operation. If the goal is to maximize learning speed and a sacrifice in classification accuracy is acceptable, then PR-conjugate gradient error backpropagation tends to be superior. If the training set is very large, stochastic epoch-based versions of local optimizers should be chosen, using a larger rather than a smaller epoch size to avoid unacceptable instabilities in the classification results.
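The following is a minimal, self-contained sketch, not the paper's implementation: a single-hidden-layer softmax classifier trained by batch gradient descent on the multiple-class cross-entropy error, illustrating the backpropagation derivatives the abstract refers to. All names, array shapes, and hyperparameters are illustrative assumptions.

```python
# Sketch only (hypothetical, not the authors' code): single-hidden-layer
# softmax classifier trained by batch gradient-descent backpropagation
# of the multiple-class cross-entropy error.
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)          # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy(p, y_onehot):
    # mean multiple-class cross-entropy error over the training set
    return -np.mean(np.sum(y_onehot * np.log(p + 1e-12), axis=1))

def train(X, y_onehot, n_hidden=32, lr=0.1, n_iter=500, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = y_onehot.shape[1]
    W1 = rng.normal(0.0, 0.1, (d, n_hidden)); b1 = np.zeros(n_hidden)
    W2 = rng.normal(0.0, 0.1, (n_hidden, k)); b2 = np.zeros(k)
    for it in range(n_iter):
        # forward pass
        h = np.tanh(X @ W1 + b1)                  # hidden-layer activations
        p = softmax(h @ W2 + b2)                  # class posterior estimates
        if it % 100 == 0:
            print(it, cross_entropy(p, y_onehot))
        # backward pass: for softmax + cross-entropy, dE/dz2 = (p - y) / n
        dz2 = (p - y_onehot) / n
        dW2 = h.T @ dz2; db2 = dz2.sum(axis=0)
        dh = dz2 @ W2.T
        dz1 = dh * (1.0 - h ** 2)                 # tanh derivative
        dW1 = X.T @ dz1; db1 = dz1.sum(axis=0)
        # batch gradient-descent update
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return W1, b1, W2, b2
```

The epoch-based (stochastic) variants discussed in the abstract would apply the same update to successive subsets of the training set rather than the full batch, while the PR-conjugate gradient and BFGS quasi-Newton alternatives would replace the fixed-step update with searches along more informed descent directions.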
56.
57.
58.
59.
60.