Tutorials on Neural Networks

23 June (Sunday), Lecture Hall 02, Main Building MSU
9.00 - 11.00 J.-C. Prevotet (Univ. P. et M. Curie, Paris) "Neural Networks: Architectures, preprocessing and hardware implementation", slides (PPT)
11.00 - 11.30 Coffee-break
11.30 - 13.30 S. Terekhoff (NeurOK, Moscow) "Neural approximations of probability density in informational modeling", slides (PS) in Russian (some slides in English on the "Data Mining" topic, from page 15).
13.30 - 15.00 Lunch
15.00 - 17.00 S. Shumsky (NeurOK, Moscow) "Bayesian regularization of learning", slides (PPT)

Tutorial descriptions (title, lecturer, date and duration, comments)
Neural Networks: Architectures, preprocessing and hardware implementation
J.-C. Prevotet, 23 June (2 hours)

The objectives of the tutorial are to present general neural network concepts; introduce some classical neural architectures (Multi-Layer Perceptrons, Radial Basis Functions, Time-Delay Neural Networks, etc.); stress the importance of preprocessing; and provide a survey of neural network hardware techniques. Applications in which neural networks have demonstrated their superiority over classical methods will be presented, together with results.
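As a minimal illustration of the first architecture mentioned above, the forward pass of a two-layer perceptron can be sketched in a few lines of numpy. This is not material from the tutorial itself, just a generic sketch; the weight shapes and the tanh activation are illustrative choices.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    """Forward pass of a two-layer perceptron (MLP) with tanh hidden units."""
    h = np.tanh(x @ W1 + b1)   # hidden-layer activations
    return h @ W2 + b2         # linear output layer

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 3))            # 4 samples, 3 input features
W1 = rng.normal(size=(3, 5)); b1 = np.zeros(5)
W2 = rng.normal(size=(5, 1)); b2 = np.zeros(1)
y = mlp_forward(x, W1, b1, W2, b2)
print(y.shape)  # (4, 1)
```

Radial Basis Function and Time-Delay networks differ only in the form of the hidden layer (Gaussian units on distances to centres, and tapped delay lines over a time series, respectively).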
An important aspect of neural networks resides in the nature of the data provided as inputs. Pre-processing concepts will be introduced, and it will be shown to what extent input data can be shaped to give neural networks more computational power.
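A standard example of such input shaping, not specific to this tutorial, is per-feature standardization, which puts all inputs on a comparable scale before training:

```python
import numpy as np

def standardize(X):
    """Zero-mean, unit-variance scaling of each input feature --
    a common preprocessing step before network training."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)
    sigma = np.where(sigma == 0, 1.0, sigma)  # guard against constant features
    return (X - mu) / sigma

# Features on very different scales confuse gradient-based training.
X = np.array([[1.0, 200.0], [2.0, 400.0], [3.0, 600.0]])
Xs = standardize(X)
print(Xs.mean(axis=0))  # ~[0, 0]
print(Xs.std(axis=0))   # [1, 1]
```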
Hardware aspects will be discussed, and general architectures used to simulate neural networks in real time will be presented. We will focus on specific applications that are highly constrained in terms of execution speed and quantity of data to be processed. An overview of the hardware solutions for this case will be given, and some commercial devices as well as academic circuits will be shown.
An overview of current technology and an introduction to programmable logic will then be given, explaining in what way it might substitute for custom devices. Finally, new technological challenges as well as the future of neuro-hardware will be discussed.
Neural approximations of probability density in informational modeling
S. Terekhoff, 23 June (2 hours)

The problem of approximating a probability density from a set of multivariate experimental data is considered from the point of view of practical informatics. Effective neural techniques for approximating the form of the density are proposed. Several data-analysis problems are formulated using the density-approximation approach, and applications of the method are discussed.
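The abstract does not spell out the neural techniques, but the underlying problem can be illustrated with a classical baseline: a Gaussian-kernel (Parzen window) estimate of the density from samples. The bandwidth h and the test data below are illustrative assumptions.

```python
import numpy as np

def parzen_density(x_query, data, h=0.3):
    """Gaussian-kernel (Parzen window) density estimate at x_query,
    built from multivariate samples -- a non-neural baseline for the
    density-approximation problem described in the abstract."""
    d = data.shape[1]
    diffs = x_query[None, :] - data                     # (n, d)
    norm = (2.0 * np.pi * h**2) ** (d / 2)              # d-dim Gaussian kernel norm
    kernels = np.exp(-np.sum(diffs**2, axis=1) / (2.0 * h**2)) / norm
    return kernels.mean()

rng = np.random.default_rng(1)
data = rng.normal(size=(500, 2))        # samples from a 2-D standard normal
p = parzen_density(np.zeros(2), data)   # estimate at the mode; true value is 1/(2*pi)
print(p)
```

A neural approximation would replace the kernel sum with a trained parametric model, trading memory of all samples for a compact fitted density form.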
Bayesian regularization of learning
S. Shumsky, 23 June (2 hours)

The Bayesian approach, based on the first principles of probability theory, is the most consistent paradigm of statistical learning. From a practical perspective, Bayesian learning offers an intrinsic regularization procedure that provides a viable alternative to the traditional cross-validation technique.
This lecture provides both the theoretical background for Bayesian learning (in particular, its relation to the Minimum Description Length and Maximum Entropy principles) and its practical applications to noisy measurements, neural network learning and clustering.
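One concrete instance of the intrinsic regularization the abstract refers to, sketched here as an assumption rather than the lecture's own derivation: a Gaussian prior on the weights of a linear model turns maximum-likelihood fitting into ridge regression, with the prior precision alpha acting as the regularization strength.

```python
import numpy as np

def map_ridge(X, y, alpha=1.0):
    """MAP estimate for linear regression under a Gaussian prior N(0, 1/alpha)
    on the weights -- equivalent to ridge regression; alpha plays the role of
    the Bayesian regularization strength (no cross-validation needed to add it)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

rng = np.random.default_rng(2)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)   # noisy measurements
w_map = map_ridge(X, y, alpha=0.1)
print(w_map)  # close to w_true
```

In a fully Bayesian treatment, alpha itself is inferred from the data (e.g. by maximizing the evidence), which is what makes the regularization "intrinsic" rather than hand-tuned.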