Full-text access type
Paid full text | 75 articles |
Free | 2 articles |
Publication year
2019 | 1 article |
2017 | 1 article |
2016 | 1 article |
2015 | 2 articles |
2013 | 6 articles |
2012 | 3 articles |
2011 | 6 articles |
2010 | 2 articles |
2009 | 2 articles |
2008 | 1 article |
2007 | 1 article |
2006 | 1 article |
2005 | 1 article |
2003 | 2 articles |
2002 | 1 article |
2001 | 3 articles |
2000 | 2 articles |
1999 | 5 articles |
1997 | 1 article |
1996 | 2 articles |
1994 | 2 articles |
1993 | 1 article |
1992 | 1 article |
1989 | 1 article |
1988 | 3 articles |
1987 | 3 articles |
1986 | 1 article |
1985 | 1 article |
1983 | 1 article |
1982 | 1 article |
1979 | 1 article |
1977 | 1 article |
1975 | 1 article |
1974 | 1 article |
1970 | 1 article |
1967 | 2 articles |
1966 | 1 article |
1965 | 1 article |
1964 | 1 article |
1960 | 3 articles |
1959 | 1 article |
1958 | 2 articles |
1957 | 2 articles |
Sort order: 77 results found (search time: 15 ms)
51.
52.
53.
Manfred Mayrhofer. Indo-Iranian Journal, 1960, 4(4): 318
No abstract available.
54.
Manfred Svensson. History of European Ideas, 2017, 43(4): 302-316
The present article compares John Locke's and John Owen's approaches to toleration. Owen, a towering figure of the Puritan revolution and a Protestant scholastic whose work is still the object of significant appreciation in Reformed circles, was Locke's dean during his time as a student at Oxford. Owen wrote a number of treatises on toleration, especially during the mid-1640s and again after the Restoration, in his role as a nonconforming divine. There has also been some speculation regarding the involvement of both Owen and Locke in the circle around Shaftesbury. Together with their writings against Parker and Stillingfleet, this would seem to draw Owen and Locke quite close to each other. The two authors are divided, however, in their approach to Christian doctrine: Owen represents classical confessionalism, Locke modern doctrinal minimalism. The article explores the ways in which these opposed approaches to doctrine relate to their views of toleration.
55.
This paper attempts to develop a mathematically rigorous framework for minimizing the cross-entropy function in an error-backpropagation setting. In doing so, we derive the backpropagation formulae for evaluating the partial derivatives in a computationally efficient way. Various techniques for optimizing the multiple-class cross-entropy error function to train single-hidden-layer neural network classifiers with softmax output transfer functions are investigated on a real-world multispectral pixel-by-pixel classification problem that is of fundamental importance in remote sensing. These techniques include epoch-based and batch versions of error backpropagation using gradient descent, Polak-Ribière (PR) conjugate gradient, and BFGS quasi-Newton updates. The method of choice depends upon the nature of the learning task and whether one wants to optimize learning for speed or for classification performance. It was found that, comparatively considered, gradient descent error backpropagation provided the best and most stable out-of-sample performance across batch and epoch-based modes of operation. If the goal is to maximize learning speed and a sacrifice in classification accuracy is acceptable, then PR conjugate gradient error backpropagation tends to be superior. If the training set is very large, stochastic epoch-based versions of local optimizers should be chosen, using a larger rather than a smaller epoch size to avoid unacceptable instabilities in the classification results.
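For readers unfamiliar with the softmax/cross-entropy pairing this abstract relies on, the following minimal Python sketch (illustrative only; it is not the authors' code, and all names are ours) demonstrates the standard identity that makes the output-layer partial derivatives cheap to evaluate: for softmax outputs y and a one-hot target t, the gradient of the cross-entropy error with respect to the pre-softmax activations reduces to y - t. A finite-difference check verifies the identity numerically.

import numpy as np

def softmax(a):
    # Shift activations for numerical stability before exponentiating.
    e = np.exp(a - a.max())
    return e / e.sum()

def cross_entropy(y, t):
    # Multiple-class cross-entropy error: E = -sum_k t_k * ln(y_k).
    return -np.sum(t * np.log(y))

a = np.array([2.0, -1.0, 0.5])   # pre-softmax activations of the output layer
t = np.array([1.0, 0.0, 0.0])    # one-hot class target
y = softmax(a)

# Analytic output-layer gradient: dE/da_k = y_k - t_k.
grad_analytic = y - t

# Central finite-difference check of the same derivatives.
eps = 1e-6
grad_numeric = np.array([
    (cross_entropy(softmax(a + eps * np.eye(3)[k]), t)
     - cross_entropy(softmax(a - eps * np.eye(3)[k]), t)) / (2 * eps)
    for k in range(3)
])
print(np.allclose(grad_analytic, grad_numeric, atol=1e-6))  # -> True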
56.
57.
58.
59.
60.