VIII International Workshop on
Advanced Computing and Analysis Techniques
in Physics Research
24-28 June, 2002
Moscow, Russia
Organizers: Moscow State University
and Joint Institute for Nuclear Research (Dubna)
ACAT'2002
BOOK OF ABSTRACTS
Edited by: V.A. Ilyin
http://acat02.sinp.msu.ru
e-mail: acat02@sinp.msu.ru
Phones:
(LOC, during the Workshop days) +7 (095) 939-57-06
+7 (095) 939-50-77
+7 (095) 939-03-97 (+fax)
For urgent phone calls:
from Russia (in Moscow too): 8-903-774-74-63
from outside Russia: +7 903-774-74-63


The main goal of the ACAT (formerly AIHENP) series of workshops is to foster
close collaboration between physicists and computer scientists. The swift evolution
of computer hardware and crucial developments in software methodologies in recent
years provide a solid basis for essential breakthroughs in many challenging projects
in physics. However, the achievements in computer science do not easily make their
way into physics research as state of the art: physicists should experiment with
sophisticated computational techniques in their work, while computer scientists
require feedback for further development and for tailoring of the methods to address
practical problems. Thus, direct interactions between computer experts and physicists
pave the way for new ideas and innovations both in physics research and computer
science.
Among the various hot topics in the field to be discussed at the Conference, the
following stand out:
Unprecedented amounts of data (hundreds of Terabytes to Petabytes) in on-going
and future high energy and nuclear physics experiments pose a real challenge to all
basic components of computing in physics research, such as data mining, treatment
and analysis. Together with the fact that many modern experiments involve a huge
number of researchers (hundreds to thousands) from many laboratories around the
world, this requires the creation of very large distributed computing systems: GRIDs.
Finding the signals of new physics often requires extracting tiny signals in the
data from amidst huge backgrounds. Making precision measurements also requires
extraction of signal with high efficiency. The impressive success of artificial
intelligence methods (in particular, neural networks) promises further achievements
in solving these kinds of problems.
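To make the idea concrete, here is a minimal sketch (not taken from any workshop contribution) of the kind of neural approach meant here: a small feed-forward network trained to separate a narrow toy "signal" from a broad "background". All data, network sizes and thresholds are invented for illustration.

```python
# Illustrative sketch only: a tiny feed-forward network separating a toy
# "signal" peak from a broad "background". Data and parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
x_sig = rng.normal(1.0, 0.3, 1000)        # narrow signal peak
x_bkg = rng.normal(0.0, 1.0, 1000)        # broad background
x = np.concatenate([x_sig, x_bkg])[:, None]
y = np.concatenate([np.ones(1000), np.zeros(1000)])[:, None]

W1 = rng.normal(0, 1, (1, 8)); b1 = np.zeros(8)   # one hidden layer
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(2000):
    h = np.tanh(x @ W1 + b1)                      # hidden activations
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))          # P(signal | x)
    g = (p - y) / len(y)                          # cross-entropy gradient
    gh = (g @ W2.T) * (1 - h**2)                  # backpropagate to layer 1
    W2 -= lr * h.T @ g;  b2 -= lr * g.sum(0)
    W1 -= lr * x.T @ gh; b1 -= lr * gh.sum(0)

sel = p[:, 0] > 0.5                               # cut on the network output
print("signal efficiency   :", sel[y[:, 0] == 1].mean())
print("background rejection:", 1 - sel[y[:, 0] == 0].mean())
```

A cut on the trained network output then selects a signal-enriched sample, trading efficiency against background rejection.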
Computer algebra finds a wide area of application, in particular as an effective
tool for the preparation and evaluation of problems, as well as for supporting precise
measurements on the basis of exact theoretical computations of physical quantities.
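As a flavour of what is meant (purely illustrative, using the open-source SymPy package as a stand-in rather than any system presented at the workshop), exact integrals and expansions of the kind that enter higher-order calculations can be obtained symbolically:

```python
# Illustrative only; SymPy stands in for the computer algebra systems
# discussed at the workshop.
import sympy as sp

x, eps = sp.symbols('x epsilon', positive=True)

# An exact one-dimensional integral of the kind met in loop calculations;
# analytically it equals -pi**2/12.
print(sp.integrate(sp.log(x) / (1 + x), (x, 0, 1)))

# Series expansion of Gamma(1 + eps), a standard ingredient of
# dimensionally regularized multi-loop computations.
print(sp.series(sp.gamma(1 + eps), eps, 0, 3))
```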
This year a new topic "Advanced Statistical Methods for Data Analysis" has
been added to the program at ACAT'2002 in order to more fully cover Advanced
Data Analysis Techniques.
The workshop consists of five sections:
I. Very Large-scale Computing and GRID
II. Artificial Intelligence
III. Simulations and Computations in Theoretical Physics and Phenomenology
IV. Innovative Software Algorithms and Tools
V. Advanced Statistical Methods for Data Analysis
Traditionally, researchers from high energy and nuclear physics together with experts
in computer science take part in the ACAT series of workshops. But nowadays
computing problems of quite similar nature and scale appear in many other fields,
e.g., in astrophysics, accelerator physics, space research, biology (a good example in
this area is the study of the human genome), ecology and chemistry, as well as in
industry and finance. Thus, researchers from these and other fields are welcome to
join us in discussions on modern computing techniques and ways for new developments.

The meeting has the mixed character of both a workshop and a conference. Reports
on applications of modern computing techniques in different areas of physics are
followed by discussions of the problems as well as new ideas and projects within
parallel sessions.
The participation of young researchers is of great importance as they may find
in this workshop a unique forum for exchanging ideas and developing their creativity.
The educational outreach of this workshop is strengthened by the five proposed
tutorials on the GRID, the ROOT system, signal processing, statistics and neural
networks.
Organizing Committee

WORKSHOP TOPICS
I. Very Large-scale Computing and GRID
- Innovations in Distributed Computing
- Computational and Data Intensive GRIDs
- Challenge of LHC Computing and GRID
- GRID Middleware and Standard Services
- Parallel Computing Technologies and Applications
- Data Fabric and Data Management
- Online Monitoring and Control
II. Artificial Intelligence
- Neural Networks and Other Pattern Recognition Techniques
- Evolutionary and Genetic Algorithms
- Wavelet Analysis
- New Human Interfaces, Virtual Reality
- Alternative Algorithms
III. Simulations and Computations in Theoretical Physics and Phenomenology
- Theoretical Physics and Phenomenology
- Automatic Computation Systems: from Feynman Diagrams to Events
- Multi-loop Calculations and Higher Order Corrections
- Multi-dimensional Integration and Event Generators
- Computer Algebra Techniques and Applications
IV. Innovative Algorithms and Tools
- Advanced Analysis Environments
- Large Scale Detector Simulations
- Reconstruction Algorithms
- Innovations in Software Engineering
- Graphic User Interfaces, Common Libraries
V. Advanced Statistical Methods for Data Analysis
- Signal Significance
- Separation of Signal from Background, Rare Events
- Combining Analysis and Results
- Unfolding Methods
- Confidence Limits and Intervals
- Treatment of Systematics

International Advisory Committee
Halina Abramowicz (Tel Aviv Univ.)
Karl-Heinz Becks (Wuppertal Univ.)
Chris Berger (RWTH-Aachen Univ.)
Pushpalatha Bhat (FNAL, Batavia)
Rene Brun (CERN, Geneva)
Bruce Denby (Versailles Univ.)
Jochem Fleischer (Bielefeld Univ.)
Ian Foster (ANL, Argonne)
Raoul Gatto (Geneva Univ.)
Gaston Gonnet (ETHZ, Zurich)
Viacheslav Ilyin (SINP MSU, Moscow)
Fred James (CERN, Geneva)
Toshiaki Kaneko (KEK, Tsukuba)
Andrei Kataev (INR RAS, Moscow)
Matthias Kasemann (FNAL, Batavia)
Setsuya Kawabata (KEK, Tsukuba)
Christian Kiesling (MPI, Munich)
Paul Kunz (SLAC, Stanford)
Marcel Kunze (FZK, Karlsruhe)
Leif Lonnblad (Lund Univ.)
Victor Matveev (INR RAS, Moscow)
Denis Perret-Gallix (IAC Chair, LAPP, Annecy-le-Vieux)
Peter Overmann (Wolfram Res. Inc.)
Carsten Peterson (Lund Univ.)
Ettore Remiddi (Bologna Univ.)
Les Robertson (CERN, Geneva)
Robert Rosner (Univ. of Chicago)
Robert Ryne (LBL, Berkeley)
Jose Seixas (UFRJ, Rio de Janeiro)
Yoshimitsu Shimizu (KEK, Tsukuba)
Dmitri Shirkov (JINR, Dubna)
Alexandre Smirnitsky (ITEP, Moscow)
Jos Vermaseren (NIKHEF, Amsterdam)
Monique Werlen (LAPTH, Annecy-le-Vieux)

International Organizing Committee
Victor Sadovnichii (Co-Chair, MSU, Moscow)
Vladimir Kadyshevsky (Co-Chair, JINR, Dubna)
Vladimir Belokurov (MSU, Moscow)
Pushpalatha Bhat (Fermilab, Batavia)
Bruce Denby (Versailles Univ.)
Viacheslav Ilyin (SINP MSU, Moscow)
Vladimir Korenkov (JINR, Dubna)
Denis Perret-Gallix (LAPP, Annecy-le-Vieux)
Les Robertson (CERN, Geneva)
Yoshimitsu Shimizu (KEK, Tsukuba)
Alexander Ugol'nikov (MSU, Moscow)
Vladimir Voevodin (RCC MSU, Moscow)
Local Organizing Committee
Viacheslav Ilyin (Co-Chair, SINP MSU, Moscow)
Vladimir Korenkov (Co-Chair, JINR, Dubna)
Alexander Antonov (RCC MSU, Moscow)
Pavel Baikov (SINP MSU, Moscow)
Sergei Berezhnev (SINP MSU, Moscow)
Edward Boos (SINP MSU, Moscow)
Vladimir Gerdt (JINR, Dubna)
Andrei Demichev (SINP MSU, Moscow)
Lev Dudko (SINP MSU, Moscow)
Vladimir Litvin (Caltech)
Andrei Kataev (INR RAS, Moscow)
Alexander Kryukov (SINP MSU, Moscow)
Viktor Pose (JINR, Dubna)
Alexander Smirnitsky (ITEP, Moscow)
Natalia Sotnikova (SINP MSU, Moscow)
Tatiana Strizh (JINR, Dubna)
Elena Tikhonenko (JINR, Dubna)
Vladimir Voevodin (RCC MSU, Moscow)
Sergei Zhuravlev (RCC MSU, Moscow)

GENERAL SCHEDULE
In conjunction with ACAT'2002, tutorials are organized on June 22 and 23 (the
tutorials on GRID/GLOBUS/CONDOR/EDG will continue during the Workshop
days as well), see details on the Web.
24 June (Monday)
from 8.30 onwards Registration of the participants at the Workshop
(foyer of the Cultural Center (CC) in the MSU main building)
10.00 - 11.00 Workshop opening session (CC Large Hall)
11.00 - 11.30 Coffee break (CC foyer)
11.30 - 13.30 Plenary session (CC Large Hall)
13.30 - 15.00 Lunch
15.00 - ~16.30 Parallel sessions: I, II, III, IV, V
~16.30 - ~17.00 Coffee break (CC foyer)
~17.00 - 18.30 Parallel sessions: I, II, III, IV, V
19.00 - 22.00 Welcome party
25 June (Tuesday)
9.00 - 11.00 Plenary session (CC Large Hall)
11.00 - 11.30 Coffee break (CC foyer)
11.30 - 13.30 Plenary session (CC Large Hall)
13.30 - 15.00 Lunch
15.00 - ~16.30 Parallel sessions: I, II, III, IV, V
~16.30 - ~17.00 Coffee break (CC foyer)
~17.00 - 18.30 Parallel sessions: I, II, III, IV, V
26 June (Wednesday)
9.00 - 11.00 Plenary session (CC Large Hall)
11.00 - 11.30 Coffee break (CC foyer)
11.30 - 13.30 Plenary session (CC Large Hall)
13.30 - 15.00 Lunch
15.00 - ~16.30 Parallel sessions: I, II, III, IV
~16.30 - ~17.00 Coffee break (CC foyer)
~17.00 - 18.30 Parallel sessions: I, III, IV

27 June (Thursday)
Session in Dubna (120 km from Moscow)
8.00 Departure of buses from the MSU main building and hotels
8.00 - 10.00 Bus trip to Dubna
10.00 - 10.30 Coffee break
10.30 - 13.10 Plenary session
13.10 - 13.40 Coffee break, snack
14.00 - 17.00 Boat trip along the Volga river, discussions
17.00 - 20.00 Workshop banquet in the open air
20.00 - 22.00 Bus trip to Moscow
28 June (Friday)
10.00 - 11.00 Summary talks (CC Large Hall)
11.00 - 11.30 Coffee break (CC foyer)
11.30 - 13.30 Summary talks (CC Large Hall)
13.30 - 15.00 Lunch
15.00 - 17.00 Summary talks (CC Large Hall)
17.00 - 17.30 Closing of the Workshop

CONTENTS
PLENARY REPORTS
RUNII PHYSICS AT FERMILAB AND ADVANCED DATA ANALYSIS
METHODS.
P. Bhat . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
LHCB COMPUTING AND THE GRID.
N. Brook . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
COMPUTING AT ALICE.
R. Brun . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 3
DATA ANALYSIS SOFTWARE TOOLS USED DURING VIRGO
ENGINEERING RUNS, REVIEW AND FUTURE NEEDS.
D. Buskulic . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
CMS SOFTWARE AND COMPUTING.
C. Charlot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 5
METHODS FOR ENHANCING NUMERICAL INTEGRATION.
E. Doncker . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
SWARM INTELLIGENCE FOR OPTIMIZATION PROBLEMS.
B. Denby . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
GREAT BRAIN DISCOVERIES: WHEN WHITE SPOTS WILL DISAPPEAR?
W. Dunin-Barkowski . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
IBM EXPERIENCE IN GRID.
D. Green . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9
SUMMARY OF RECENT IDEAS AND DISCUSSIONS ON STATISTICS IN
HEP.
F. James . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
ATLAS COMPUTING AND THE GRID.
R. Jones . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
THE LCG PROJECT - COMMON SOLUTIONS FOR LHC.
M. Kasemann . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
GRID COMPUTING.
C. Kesselman . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13
BELLE COMPUTING.
P. Krokovny . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
STATUS OF THE EU DATAGRID PROJECT.
P. Kunszt . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
THE CROSSGRID PROJECT.
M. Kunze . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
COMPUTING AT CDF.
M. Neubauer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
A FRONTIER IN MULTISCALE MULTILOOP INTEGRALS: THE
ALGEBRAIC-NUMERICAL METHOD.
G. Passarino . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
THE LHC COMPUTING GRID PROJECT - CREATING A GLOBAL
VIRTUAL COMPUTING CENTRE FOR PARTICLE PHYSICS.
L. Robertson . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
LIGO DATA ANALYSIS.
P. Shawhan . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
VERY LARGE-SCALE COMPUTING AND GRID
(ORAL SESSION)
PARALLEL SIMULATION SYSTEM.
V. Okol'nishnikov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
USING OF GRID PROTOTYPE INFRASTRUCTURE FOR QCD BACKGROUND
STUDY TO THE H → γγ PROCESS ON ALLIANCE RESOURCES.
V. Litvine et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
OFFLINE MASS DATA PROCESSING USING ONLINE COMPUTING
RESOURCES.
J. Hernandez . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
INTERFACING INTERACTIVE DATA ANALYSIS TOOLS WITH THE
GRID: THE PPDG CS-11 ACTIVITY.
D. Olson, J. Perl . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
EVOLUTIONARY ALGORITHMS AND PARALLEL COMPUTING.
R. Berlich, M. Kunze . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
ALIEN - ALICE ENVIRONMENT ON THE GRID.
P. Saiz et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
GENIUS: A WEB PORTAL TO THE GRID.
R. Barbera et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
THE NORDUGRID PROJECT: USING GLOBUS TOOLKIT FOR BUILDING
GRID INFRASTRUCTURE.
A. Konstantinov et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
THE SAM-GRID PROJECT: ARCHITECTURE AND PLAN.
G. Garzoglio et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
DEVELOPMENT OF AN INTERDISCIPLINARY FRAGMENT OF THE
RUSSIAN GRID SEGMENT: STATE OF THE ART.
A. Joutchkov et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
RESOURCE MANAGER FOR GRID WITH GLOBAL JOB QUEUE AND WITH
PLANNING BASED ON LOCAL SCHEDULES.
V. Kovalenko et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
SOFTWARE TOOLS FOR DYNAMIC RESOURCE MANAGEMENT (STDRM).
I. Shoshmina, D. Malashonok, S. Romanov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
FIRST EXPERIENCE OF EDG MIDDLEWARE USAGE FOR MASS
SIMULATION OF ATLAS MONTE-CARLO DATA.
A. Minaenko . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
A COMPONENT FRAMEWORK FOR DISTRIBUTED PARALLEL DATA
ANALYSIS IN HEP.
J. Moscicki . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
IMPLEMENTATION OF REMOTE JOB SUBMISSION OVER GRID WITH
IMPALA/BOSS CMS MC PRODUCTION TOOLS.
A. Kryukov et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 35
ADMINISTRATION TOOLS FOR MANAGING LARGE SCALE LINUX
CLUSTER.
A. Manabe, S. Kawabata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
AMS COMPUTING.
A. Klimentov, V. Choutko . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
PROVIDING GRID DATA SERVICES TODAY.
M. Gasthuber, P. Fuhrmann, R. Wellner . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
DATA CHALLENGES IN ATLAS COMPUTING.
A. Vaniachine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
PARTICLE IDENTIFICATION IN THE NA48 EXPERIMENT USING
NEURAL NETWORK.
L. Litov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
VERY LARGE-SCALE COMPUTING AND GRID
(POSTER SESSION)
THE SELF-ORGANIZATION OF THE CELLULAR ENVIRONMENT AND THE
REPRODUCTION OF THE NETWORK LOGICAL STRUCTURE.
M. Medvedeva, V. Koloskov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
THE SOFTWARE FOR CONTROL SYSTEM OF THE LUE-200.
A. Kayukov et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
THE FORMAL SPECIFICATION OF ALGORITHMIC MAINTENANCE OF
DISTRIBUTED COMPUTING SYSTEMS OF IMITATIVE AND
SEMINATURAL SIMULATION.
A. Kvachenko . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
APPLICATION OF INFORMATION TECHNOLOGIES IN MANAGEMENT OF
GROUND RESOURCES.
V. Lazarev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
THE ALICE HIGH LEVEL TRIGGER (FOR THE ALICE
COLLABORATION).
A. Vestbo et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 45
ONLINE PATTERN RECOGNITION FOR THE ALICE HIGH LEVEL
TRIGGER (FOR THE ALICE COLLABORATION).
C. Loizides et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46
METAMAKE TOOLS FOR PERSONAL PROJECT PREPARATION IN
HETEROGENEOUS NETWORK ENVIRONMENT.
E. Huhlaev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47
PERFORMANCE CHARACTERISTICS OF AN IDE DISKS BASED FILE
SERVER IN THE ENVIRONMENT OF A LINUX PC FARM.
E. Slabospitskaya et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48
DISTRIBUTING APPLICATIONS IN DISTRIBUTED COMPUTING
ENVIRONMENT.
N. Ratnikova, A. Sciaba, S. Wynhoff . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49
EXPERIENCE WITH OO DATABASE FOR CMS EVENTS DISTRIBUTED
BETWEEN TWO SITES.
O. Kodolova, N. Kruglov, V. Kolosov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50
DISTRIBUTED COMPUTING ENVIRONMENT FOR DATA INTENSIVE
TASKS BY USE OF METADISPATCHER.
V. Kalyaev, E. Huhlaev, N. Kruglov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51
SECURE AUTOMATED REQUEST PROCESSING SOFTWARE FOR DATAGRID
CERTIFICATION AUTHORITIES.
L. Shamardin, P. Martucci, N. Kruglov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52
CORRELATION ENGINE PROTOTYPE.
V. Pose, B. Panzer-Steindel . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 53
ARTIFICIAL INTELLIGENCE
(ORAL SESSION)
ON-LINE LOCAL MONITORING AND ADAPTIVE NAVIGATION OF
MOBILE ROBOTS ON ENVIRONMENT WITH UNKNOWN OBSTACLES.
A. Timofeev, H. He . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 54
SELECTION OF W-PAIR-PRODUCTION IN DELPHI WITH
FEED-FORWARD NEURAL NETWORKS.
U. Mueller, K.-H. Becks, H. Wahlen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 55
EFFECTIVE TRAINING ALGORITHMS FOR RBF-NEURAL NETWORKS.
G. Ososkov, A. Stadnik . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 56
MBB DISTRIBUTION OF SUBSETS OF HIGGS BOSON DECAY EVENTS
DEFINED VIA NEURAL NETWORKS.
F. Hakl, M. Hlavachek, R. Kalous . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 57
THE EVOLUTIONARY MODEL OF PHYSICS LARGE-SCALE SIMULATION
ON PARALLEL DATAFLOW ARCHITECTURE.
A. Nikitin, L. Nikitina . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 58
OPTICAL NEURAL NETWORK BASED ON THE PARAMETRICAL
FOUR-WAVE MIXING PROCESS.
L. Litinskii, B. Kryzhanovsky, A. Fonarev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59
SUPPORT VECTOR MACHINES IN ANALYSIS OF TOP QUARK
PRODUCTION.
A. Vaiciulis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60
TRAINING SET PREPROCESSING: ESTIMATED VALUE OF LIPSCHITZ
CONSTANT OVER TRAINING SET AND RELATED PROPERTIES OF
TRAINABLE NEURAL NETWORKS.
V. Tsaregorodtsev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61
NEURAL TRACKING IN ALICE.
A. Pulvirenti et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62
OPTIMIZED NEURAL NETWORK SEARCH OF HIGGS BOSON PRODUCTION
AT THE TEVATRON.
L. Dudko, E. Boos, D. Smirnov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63
APPLICATION OF WAVELET ANALYSIS FOR DATA TREATMENT OF
SMALL-ANGLE NEUTRON SCATTERING.
A. Soloviev et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 64
FOUNDATIONS, STATUS, AND PROSPECTS OF SUPPORT VECTOR
REGRESSION AS A NEW MULTIVARIATE TOOL FOR HIGH ENERGY
PHYSICS.
N. Naumann . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65
NEURAL NETWORK APPROACH TO DISCOVERING TEMPORAL
CORRELATIONS.
Y. Orlov, S. Dolenko, I. Persiantsev, Ju. Shugai . . . . . . . . . . . . . . . . . . . . . . . . . . 66
NEURAL NETWORK PRE-PROCESSING OF ULTRASONIC SCANNING
DATA.
O. Agapkin, S. Dolenko, Yu. Orlov, I. Persiantsev . . . . . . . . . . . . . . . . . . . . . . . . 67
USE OF NEURAL NETWORK BASED AUTO-ASSOCIATIVE MEMORY AS A
DATA COMPRESSOR FOR PRE-PROCESSING OPTICAL EMISSION
SPECTRA IN GAS THERMOMETRY WITH THE HELP OF NEURAL
NETWORK.
S. Dolenko et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68
APPLICATION OF NEURAL NETWORKS FOR ENERGY
RECONSTRUCTION.
L. Litov, J. Damgov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69
THUNDERSTORM CLOUD CELLULAR AUTOMATON MODEL.
D. Iudin, A.N. Grigoriev, V.Yu. Trakhtengerts . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70
MOVING NN TRIGGERS TO LEVEL-1 AT LHC RATES.
J. Prevotet, B. Denby, C. Kiesling, P. Garda, B. Granado . . . . . . . . . . . . . . . . . 71
PARAMETER ESTIMATION AND CLASS SEPARATION WITH NEURAL
NETS FOR THE XEUS PROJECT.
J. Zimmermann . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72
ARTIFICIAL INTELLIGENCE
(POSTER SESSION)
IMPLEMENTATION OF LINGUISTIC MODELS BY FOURIER-HOLOGRAPHY
TECHNIQUE.
A. Pavlov, Y. Shevchenko . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 73
TECHNIQUES OF FUNCTIONAL ANALYSIS OF FAULTS AND METHODS
OF FAULT-STABLE MOTION CONTROL FOR ELECTROMECHANICAL
SYSTEMS.
A. Timofeev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74
MULTIDISCIPLINARY APPROACH TO NEUROCOMPUTERS.
A. Voronov, G. Voronova, V. Karpets, A. Krisko . . . . . . . . . . . . . . . . . . . . . . . . . . 75
CONSTRUCTIVE METHODS FOR SUPERVISED LEARNING WITH
COMPLEXITY MINIMIZATION OF ARTIFICIAL NEURAL NETWORKS OF
ALGEBRAIC SIGMA-PI NEURONS.
Z. Shibzoukhov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 76
GENETIC LIMITS OF INTELLIGENCE.
V. Lavrov, V. Valtzev, A. Rudinsky . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 77
THE IDENTIFICATION OF DYNAMIC OBJECT PARAMETERS.
S. Ivanova, Z. Ilyichenkova . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 78
THE USAGE OF NEURAL NETWORKS FOR ROBOTS NAVIGATION.
Z. Ilyichenkova, S. Ivanova . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 79
WHETHER NEURON NETWORK CAN BE INTELLECTUAL SYSTEM?
S. Romanov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80
THE GA BASED APPROACH TO OPTIMIZATION OF PARALLEL
CALCULATIONS IN LARGE PHYSICS PROBLEMS.
A. Nikitin, L. Nikitina . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81
RIGOROUS RESULTS FOR THE HOPFIELD-TYPE NEURAL NETWORK.
L. Litinskii . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82
AN EVOLVING ALGEBRA APPROACH TO FORMAL DESCRIPTION OF
AUTOMATA NETWORK DYNAMICAL SYSTEMS
V. Severyanov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 83
VIRTUAL REALITY TECHNOLOGY: PROBLEMS OF NEUROCOMPUTING.
D. Shapiro . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 84
SIMULATIONS AND COMPUTATIONS
IN THEORETICAL PHYSICS AND PHENOMENOLOGY
(ORAL SESSION)
MULTILOOP CALCULATIONS IN HQET.
A. Grozin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85
NUMERICAL EVALUATION OF GENERAL MASSIVE 2-LOOP SELF-MASS
MASTER INTEGRALS FROM DIFFERENTIAL EQUATIONS.
M. Caffo, H. Czyz, E. Remiddi . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86
NUMERICAL SIMULATION OF COLLOIDAL INTERACTION.
P. Dyshlovenko . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87
SOME METHODS FOR THE EVALUATION OF COMPLICATED FEYNMAN
INTEGRALS.
A. Kotikov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88
PROGRAMS FOR DIRECT AND INVERSE NORMALIZATION OF A CLASS
OF POLYNOMIAL HAMILTONIANS.
S. Vinitsky, A. Gusev, V. Rostovtsev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89
COMPUTATION OF COHOMOLOGY OF LIE (SUPER)ALGEBRA:
ALGORITHMS, IMPLEMENTATION AND NEW RESULTS.
V. Kornyak . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90
REALIZATION OF DENSITY FUNCTIONAL THEORY CALCULATIONS
WITH KLI-APPROXIMATION OF OPTIMIZED EFFECTIVE POTENTIAL
IN Q96.
K. Popov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91
APPLICATION OF THE RESONANT NORMAL FORM TO HIGH ORDER
NONLINEAR ODES USING MATHEMATICA.
V. Edneral, R. Khanin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92
ADVANCED COMPUTING WITH SUBGROUP LATTICES BY THE COMPUTER
ALGEBRA PACKAGE GAP.
V. Mysovskikh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93
THE USES OF COVARIANT FORMALISM FOR ANALYTICAL
COMPUTATION OF FEYNMAN DIAGRAMS WITH MASSIVE FERMIONS.
R. Rogalyov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94
ON THE WAY TO COMPUTERIZABLE SCIENTIFIC KNOWLEDGE (BY THE
EXAMPLE OF THE OPERATOR FACTORIZATION METHOD).
A. Niukkanen . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95
WAVELET-BASED MODELING IN QUANTUM DYNAMICS: FROM
LOCALIZATION TO ENTANGLEMENT.
M. Zeitlin, A. Fedorova . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 96
COMPHEP/SUSY PACKAGE.
A. Semenov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
A FEYNMAN DIAGRAM ANALYZER DIANA - RECENT DEVELOPMENT.
M. Tentyukov, J. Fleischer . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
STATUS OF THE PYTHIA7 PROJECT.
L. Lonnblad . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99
FOAM: A GENERAL-PURPOSE CELLULAR MONTE CARLO EVENT
GENERATOR.
S. Jadach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100
POLE MASSES OF GAUGE BOSON.
M. Kalmykov, F. Jegerlehner, O. Veretin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101
QED RADIATIVE CORRECTIONS WITHIN THE CALCPHEP PROJECT.
P. Christova . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 102
PROJECT CALCPHEP, CALCULUS FOR PRECISION HIGH ENERGY
PHYSICS.
D. Bardin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 103
ANALYTICAL EVALUATION OF CERTAIN ON-SHELL TWO-LOOP
THREE-POINT DIAGRAMS.
A. Davydychev, V. Smirnov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 104
A PACKAGE OF GENERATING FEYNMAN RULES IN GRACE SYSTEM.
T. Kaneko . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105
A NEW MONTE CARLO METHOD OF THE NUMERICAL INTEGRATION.
K. Tobimatsu, T. Kaneko . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
MULTI-DIMENSIONAL INTEGRATION BASED ON STOCHASTIC
SAMPLING METHOD.
A. Shibata, S. Tanaka, S. Kawabata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 107
THE CALCULATION OF THE φ⁴ FIELD THEORY BETA-FUNCTION IN THE
FRAMEWORK OF PERTURBATION THEORY WITH CONVERGENT SERIES.
I. Yudin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
BATCH CALCULATIONS IN CALCHEP.
A. Pukhov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 109
ADAPTATION OF VEGAS FOR EVENT GENERATION.
A. Pukhov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110
MULTI-DIMENSIONAL INTEGRATION PACKAGE DICE FOR PARALLEL
PROCESSORS.
F. Yuasa, K. Tobimatsu, S. Kawabata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111
FACTORIZATION AND TRANSFORMATIONS OF LINEAR AND NONLINEAR
ORDINARY DIFFERENTIAL EQUATIONS.
L. Berkovich . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112
TOOLKIT FOR PARTONIC EVENTS DATA BASES IN THE COMPHEP
PACKAGE.
A. Cherstnev, S. Balatenyshev, V. Ilyin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 113
FACTORIZING ONE-LOOP CONTRIBUTIONS TO TWO-LOOP BHABHA
SCATTERING AND AUTOMATIZATION OF FEYNMAN DIAGRAM
CALCULATIONS.
J. Fleischer, T. Riemann, O. Tarasov, A. Werthenbach . . . . . . . . . . . . . . . . . . 114
SIMULATIONS AND COMPUTATIONS IN THEORETICAL PHYSICS AND
PHENOMENOLOGY
(POSTER SESSION)
GENERALIZED COMMUTATORS AND IDENTITIES ON VECTOR FIELDS.
A. Dzhumadil'daev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
FITS OF DIS DATA AT THE NNLO AND BEYOND AS THE CONCRETE
APPLICATION OF DEFINITE RESULTS OF MULTILOOP CALCULATIONS
IN QCD.
A. Kataev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
THE VALUE OF QCD COUPLING CONSTANT AND POWER CORRECTIONS
IN THE STRUCTURE FUNCTION F2 MEASUREMENTS.
A. Kotikov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 117
ADVANCED TECHNIQUES FOR COMPUTING DIVERGENT SERIES.
S. Skorokhodov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
FAST CALCULATIONS IN NONLINEAR COLLECTIVE MODELS OF
BEAM/PLASMA PHYSICS.
M. Zeitlin, A. Fedorova . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 119
ANALYTICAL CALCULATION OF S-MATRIX IN QUANTUM
ELECTRODYNAMICS.
V. Andreev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 120
ANALYTICAL CALCULATION OF S-MATRIX ELEMENTS OF REACTION
WITH FERMIONS
V. Andreev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 121
STUDY OF VIABLE SUSY GUTS WITH NON-UNIVERSAL GAUGINO
MEDIATION: COMPHEP AND ISAJET APPLICATION.
A. Belyaev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 122
THE PROGRAM OF ANALYTICAL CALCULATIONS OF THE EFFECTIVE
POTENTIALS IN THE THREE-BODY PROBLEM ON A LINE.
A. Gusev, D. Pavlov, S. Vinitsky . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 123
PARAMETRIC ANALYSIS OF STABILITY CONDITIONS FOR A
SATELLITE WITH A GRAVITATION STABILIZER.
A. Banshchikov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 124
A MONTE CARLO SIMULATION OF DECAYS WITHIN THE CALCPHEP
PROJECT.
G. Nanava . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125
ABOUT IMPLEMENTATION OF e+e- → f f̄ PROCESSES INTO
THE FRAMEWORK OF CALCPHEP PROJECT.
L. Kalinovskaya . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126
USING OF FORM FOR SYMBOLIC EVALUATION OF FEYNMAN DIAGRAMS
IN COMPHEP PACKAGE.
A. Kryukov, V. Bunichev, A. Vologdin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127
INNOVATIVE ALGORITHMS AND TOOLS
(ORAL SESSION)
CHIRAL INVARIANT PHASE SPACE EVENT GENERATOR.
M. Kosov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128
NEW ACHIEVEMENTS IN DEVELOPMENT OF MULTIDIMENSIONAL DATA
ACQUISITION, PROCESSING AND VISUALIZATION - DAQPROVIS.
M. Morhac et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129
EFFICIENT STORING OF MULTIDIMENSIONAL HISTOGRAMS USING
ADVANCED COMPRESSION TECHNIQUES.
V. Matousek et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 130
THE AUTOMATED NETWORKS OF MANAGEMENT OF FINANCIAL
ACTIVITY, THE CONTROL AND THE ACCOUNT OF DATABASES OF
ECONOMIC DIVISIONS JINR.
T. Tyupikova, V. Samoilov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 131
APPLICATION OF PARALLEL COMPUTING TO THE SIMULATION OF
CHAOTIC DYNAMICS.
A. Kostousov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132
INNOVATIVE CALORIMETER HADRON ENERGY RECONSTRUCTION
ALGORITHM FOR THE EXPERIMENTS AT THE LHC (FOR THE ATLAS
TILECAL COLLABORATION).
V. Vinogradov, Y. Kulchitsky . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133
GEANT4 TOOLKIT FOR SIMULATION OF HEP EXPERIMENTS (FOR THE
GEANT4 COLLABORATION).
V. Ivanchenko . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 134
COMPUTER INVESTIGATION OF THE PERCOLATION PROCESSES IN
TWO- AND THREE-DIMENSIONAL SYSTEMS WITH HETEROGENEOUS
INTERNAL STRUCTURE.
A. Konash, S. Bagnich . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 135
APPLICATION OF DIGITAL FILTERING TECHNIQUES FOR ANALYSIS
OF X-RAY SIGNALS USING INTERACTIVE DATA LANGUAGE.
A. Kiseleva et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 136
ANALYTICAL FOUNDATIONS OF LOCALIZING COMPUTING.
G. Men'shikov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137
ALGORITHMS AND METHODS FOR PARTICLE IDENTIFICATION WITH
ALICE TOF DETECTOR AT VERY HIGH PARTICLE MULTIPLICITY.
B. Zagreev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138
STATUS OF AIDA AND JAS 3 (FOR THE AIDA COLLABORATION).
V. Serbo . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139
RECENT RESULTS ON ADAPTIVE TRACK AND MULTITRACK FITTING
IN CMS.
A. Strandlie, R. Fruehwirth, T. Todorov, M. Winkler . . . . . . . . . . . . . . . . . . . . . 140
MAPPING MODERN SOFTWARE PROCESS ENGINEERING TECHNIQUES
ONTO A HEP DEVELOPMENT ENVIRONMENT.
H.-P. Wellisch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141
GEANT4 PHYSICS VALIDATION FOR LARGE SCALE HEP DETECTORS.
H.-P. Wellisch . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142
THE ROOT GEOMETRY PACKAGE.
R. Brun, A. Gheata, M. Gheata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143
A REVIEW OF FAST CIRCLE AND HELIX FITTING.
R. Fruehwirth, A. Strandlie, J. Wroldsen, W. Waltenberger . . . . . . . . . . . . . . 144
NEW DEVELOPMENTS IN VERTEX RECONSTRUCTION FOR CMS.
W. Waltenberger et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 145
EVENT EFFICIENCY FUNCTION AS AN ANALOG OF A NEURAL
CLASSIFIER.
V. Samoilenko, S. Klimenko, N. Minaev, E. Slobodyuk . . . . . . . . . . . . . . . . . . . 146
OBJECT ORIENTED SOFTWARE FOR SIMULATION AND
RECONSTRUCTION OF BIG ALIGNMENT SYSTEMS.
P. Arce . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147
SIMULATION FRAMEWORK AND XML DETECTOR DESCRIPTION
DATABASE FOR CMS EXPERIMENT.
P. Arce et al. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 148
IGNOMINY: TOOL FOR ANALYSING SOFTWARE DEPENDENCIES AND
FOR REDUCING COMPLEXITY IN LARGE SOFTWARE SYSTEMS (FOR
THE CMS COLLABORATION).
L. Tuura . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149
CMS DATA ANALYSIS: CURRENT STATUS AND FUTURE STRATEGY
(FOR THE CMS COLLABORATION).
L. Tuura . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150
STATUS OF THE ANAPHE PROJECT.
M. Sang . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151
AN ONLINE CALORIMETER TRIGGER FOR REMOVING OUTSIDERS FROM
PARTICLE BEAM CALIBRATION TESTS.
J. Seixas, D. Damazio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152
OO/C++ RECONSTRUCTION MODEL BASED ON GEANT3.
Y. Fisyak, V. Fine, P. Nevski, T. Wenaus . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 153
CROSS-PLATFORM QT-BASED IMPLEMENTATION OF LOWER LEVEL GUI
LAYER OF ROOT.
V. Fine . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154
INNOVATIVE ALGORITHMS AND TOOLS
(POSTER SESSION)
DELPHI-BASED VISUAL OBJECT-ORIENTED PROGRAMMING FOR THE
ANALYSIS OF EXPERIMENTAL DATA IN LOW ENERGY PHYSICS.
V. Zlokazov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155
GENETIC ALGORITHM FOR SUSY TRIGGER OPTIMIZATION IN CMS
DETECTOR AT LHC.
S. Abdullin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 156
COMPUTER SIMULATION OF SPECTRAL AND POLARIZATION
CHARACTERISTICS OF CHANNELING RADIATION FROM RELATIVISTIC
PARTICLES IN CRYSTALS.
Y. Pivovarov, V. Dolgikh . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 157
COMPUTER SIMULATION OF INTERACTION OF RELATIVISTIC
POSITRONIUM ATOM WITH A CRYSTAL.
Y. Kunashenko . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 158
CREATING AND ESTIMATING INTERVAL MODELS.
G. Shilo, V. Krischuk, N. Gaponenko . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 159
SOME NEW APPROACH TO QUANTUM SYSTEM EVOLUTION SIMULATION.
A. Bogdanov, A. Gevorkyan, E. Stankova . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 160
TRANSMEMBRANE RECEPTIVE DIMERS AS MOLECULAR TRIGGERS
HAVING CHEMICAL AND ELECTRICAL INPUTS.
A. Radchenko . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 161
ON INTERVAL APPROACH TO PROBLEMS OF RADIO ENGINEERING AND
TELECOMMUNICATION.
V. Zelenina, G. Men'shikov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 162
OPTIMIZATION AT CONDITIONS OF THE INTERVAL INDETERMINACY.
V. Levin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163
γ/π0 SEPARATION IN THE PHOS WITH NEURAL NETWORK.
M. Bogolyubsky, Yu. Kharlov, S. Sadovsky . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164
INTERACTIVE DATA ANALYSIS USING IGUANA WITH CMS, D0, L3
AND GEANT4 EXAMPLES.
L. Tuura, G. Alverson, I. Osborne, L. Taylor . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165
NEURAL PARTICLE DISCRIMINATION FOR TRIGGERING INTERESTING
PHYSICS CHANNELS USING CALORIMETRY DATA.
A. Anjos, J. Seixas . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166
CELLULAR AUTOMATON MODEL OF LITHOSPHERE DEGASSING.
A. Grigoriev, D. Iudin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 167
ADVANCED STATISTICAL METHODS FOR DATA ANALYSIS
(ORAL SESSION)
NEW METHOD FOR DATA PROCESSING IN POLARIZATION
MEASUREMENTS.
S. Manaenkov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168
SEARCH OF NATURAL CUTS IN SEEKING OF NEW PHYSICS PHENOMENA.
(EXAMPLE - SEARCH OF PHASE SPACE AREAS THAT BRING THE BEST
SIGNAL/BACKGROUND RATIO USEFUL FOR EXPERIMENTATION IN eγ → Wν).
D. Anipko, I. Ginzburg, A. Pak . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 169
ANALYSIS OF COINCIDENCE GAMMA-RAY SPECTRA USING ADVANCED
BACKGROUND ELIMINATION, UNFOLDING AND FITTING ALGORITHMS.
M. Morhac, V. Matousek, J. Kliman, L. Krupa, M. Jandel . . . . . . . . . . . . . . . 170
MULTIVARIATE METHODS OF DATA ANALYSIS IN COSMIC RAY
ASTROPHYSICS.
A. Vardanyan, A. Chilingarian . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171
EVALUATION OF CONFIDENCE INTERVALS FOR PARAMETERS OF
CHI-SQUARED FIT.
S. Redin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 172
QUASIOPTIMAL OBSERVABLES AND THE OPTIMAL JET ALGORITHM.
F. Tkachov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 173
STATISTICAL MULTIDIMENSIONAL SEPARATION OF THE ELECTRONS
AND HADRONS IN THE TILE IRON-SCINTILLATOR HADRONIC
CALORIMETER OF THE ATLAS AT THE LHC (FOR THE ATLAS
TILECAL COLLABORATION).
V. Vinogradov, Y. Kulchitsky . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 174
SUPERRESOLUTION CHROMATOGRAPHY.
E. Kosarev, K.O. Muranov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 175
PRINCIPAL COMPONENT ANALYSIS OF NETWORK TRAFFIC: THE
"CATERPILLAR"-SSA APPROACH.
P. Zrelov, I. Antoniou, Victor Ivanov, Valery Ivanov . . . . . . . . . . . . . . . . . . . . . 176
ON A STATISTICAL MODEL OF NETWORK TRAFFIC.
Victor Ivanov, I. Antoniou, Valery Ivanov, P. Zrelov . . . . . . . . . . . . . . . . . . . . . 177
NEURAL NETS FOR GROUND BASED GAMMA-RAY ASTRONOMY.
G. Maneva, G. Maneva, J. Procureur, P. Temnikov . . . . . . . . . . . . . . . . . . . . . . 178
A NEW APPROACH TO CLUSTER FINDING AND HIT RECONSTRUCTION
IN CATHODE PAD CHAMBERS AND ITS DEVELOPMENT FOR THE
FORWARD MUON SPECTROMETER OF ALICE.
G. Chabratova, A. Zinchenko . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179
PRINCIPAL CURVES FOR IDENTIFYING OUTSIDERS IN
EXPERIMENTAL TESTS WITH CALORIMETERS.
J. Seixas, P. Vitor, M. da Silva . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180
COMPARATIVE STUDY OF THE UNCERTAINTIES IN PARTON
DISTRIBUTION FUNCTIONS.
S. Alekhin . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181
ADVANCED STATISTICAL METHODS FOR DATA ANALYSIS
(POSTER SESSION)
SIGNAL SIGNIFICANCE IN THE PRESENCE OF SYSTEMATIC AND
STATISTICAL UNCERTAINTIES.
S. Bityukov, N. Krasnikov . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182
ESTIMATION OF INITIAL PARTICLES SPECTRUM UNCERTAINTY CONTRIBUTION
TO OVERALL STATISTICAL ERROR IN SEARCH OF ANOMALOUS INTERACTIONS
(EXAMPLE - eγ → Wν).
D. Anipko, I. Ginzburg, A. Pak . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183
TOWARDS THE TEXAS INSTRUMENTS TMS320C6701 SIGNAL
PROCESSOR USING FOR STATISTICAL PROCESSING OF IRKUTSK
INCOHERENT SCATTER RADAR EXPERIMENTAL DATA.
D. Kushnarev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184
DIAGNOSIS OF STOCHASTIC FIELDS BY MATHEMATICAL MORPHOLOGY
AND COMPUTATIONAL TOPOLOGY METHODS.
L. Karimova, N. Makarenko . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185
WHAT DO WE WANT, WHAT DO WE HAVE, WHAT WE CAN DO?
(UNFOLDING IN LHC ERA).
V. Anikeev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186
ABOUT A PROBLEM OF INTERPRETATION AND FORECASTING OF
TIME-SPATIAL VARIATIONS OF GEOPHYSICAL FIELDS BY RESULTS
OF DEEP SCIENTIFIC DRILLING.
A. Kolbasova, O. Esipko, A. Rosaev . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187
MULTIFRACTALITY IN ECOLOGICAL MONITORING.
D. Iudin, D. Gelashvily . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
LIST OF REGISTERED PARTICIPANTS . . . . . . . . . . . . . . . . . . . . . . 189

PLENARY REPORTS
Pushpa BHAT
FNAL, Batavia
pushpa@fnal.gov
RunII Physics at Fermilab and Advanced Data Analysis
Methods.
ID=501
The collider Run II now underway at the Fermilab Tevatron brings extraordi-
nary opportunities for new discoveries, precision measurements, and exploration of
parameter spaces of theoretical models. We hope to discover the Higgs boson and
find evidence for new physics beyond the Standard Model such as Supersymmetry
or Technicolor or something completely unexpected. We will pursue searches for
hints for the existence of extra dimensions and other exotic signals. These opportu-
nities, however, come with extraordinary challenges. In this talk, I will describe the
physics pursuits of the CDF and DZero experiments in Run II and discuss why the
use of multivariate and advanced statistical techniques will be crucial in achieving
the physics goals.

Nicholas BROOK
Univ. of Bristol
n.brook@bristol.ac.uk
LHCb Computing and the GRID.
ID=502
The main requirements of the LHCb software environment in the context of
GRID computing will be presented. Emphasis will be given to the preliminary
experiences gained in the development of a distributed Monte Carlo production
system.

Rene BRUN
CERN, Geneva
Rene.Brun@cern.ch
Computing at ALICE.
ID=503
The ALICE software is based on three major components, AliRoot, AliEn and
ROOT, that are in constant development, and on external packages like Geant3,
Pythia and Fluka. The AliRoot framework is written entirely in C++ and includes
classes for detailed detector simulation and reconstruction. This framework has been
extensively used to test the complete chain from data acquisition to data storage and
retrieval during several ALICE Data Challenges. The software is GRID-aware via the
AliEn system presented in another talk. The detector simulation part is based on
the concept of a Virtual Monte Carlo: the same detector geometry classes and
hits/digits are used to run with the Geant3 or Geant4 packages, and an interface
with Fluka is in preparation. When running with a very large number of classes
(thousands) it is important to minimize class dependencies. Access to the
large object collections is via the Folder mechanism available in ROOT. This
structure is not only more scalable but allows a simple user to easily browse and
understand the various data structures. The fact that the ALICE environment is
based on a small number of components has greatly facilitated the maintenance,
the development and the adoption of the system by all physicists in the collaboration.
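The Virtual Monte Carlo idea can be sketched schematically (our illustration in Python; the real AliRoot interface is C++ and its class and method names differ): user-level simulation code depends only on an abstract transport interface, so the engine behind it can be exchanged.

```python
# Schematic illustration of the Virtual Monte Carlo concept; the class and
# method names below are invented, not those of AliRoot.
from abc import ABC, abstractmethod

class VirtualMC(ABC):
    """Engine-neutral transport interface (Geant3/Geant4/Fluka would implement it)."""
    @abstractmethod
    def process_event(self, event_id: int) -> list: ...

class Geant3Engine(VirtualMC):
    def process_event(self, event_id):
        return [f"g3-hit-{event_id}"]      # placeholder hits/digits

class Geant4Engine(VirtualMC):
    def process_event(self, event_id):
        return [f"g4-hit-{event_id}"]

def simulate(engine: VirtualMC, n_events: int):
    # The same user-level loop (geometry, hits, digits) runs unchanged
    # whichever engine is plugged in.
    return [engine.process_event(i) for i in range(n_events)]

print(simulate(Geant3Engine(), 2))
print(simulate(Geant4Engine(), 2))
```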

Damir BUSKULIC
Univ. de Savoie & LAPP, Annecy-le-Vieux
buskulic@lapp.in2p3.fr
Data Analysis Software Tools Used During VIRGO
Engineering Runs, Review and Future Needs.
ID=504
In recent years, data flow and data storage needs for large gravitational wave
interferometric detectors have reached an order of magnitude similar to high energy
physics experiments. Software tools have been developed to handle and analyse those
large amounts of data, with the specificities associated with gravitational wave
searches. We will review the experience acquired during engineering runs on the
VIRGO detector with the currently used data analysis software tools, pointing out
the peculiarities inherent to our type of experiment. We will also show the
possible future needs for the VIRGO offline data analysis.

Claude CHARLOT
LLR-Ecole Polytechnique CNRS & IN2P3, Palaiseau
charlot@poly.in2p3.fr
CMS Software and Computing.
ID=505
CMS is one of the two general-purpose HEP experiments currently under con-
struction for the Large Hadron Collider at CERN. The handling of multi-petabyte
data samples in a worldwide context requires computing and software systems with
unprecedented scale and complexity. We describe how CMS is meeting the many
data analysis challenges in the LHC era. We cover in particular the status of our
object-oriented software, our system of globally distributed regional centres and our
strategies for Grid-enriched data analysis.

Elise DE DONCKER
Western Michigan Univ., Kalamazoo
elise@cs.wmich.edu
Methods for Enhancing Numerical Integration.
ID=506
As we consider common strategies for numerical integration (Monte Carlo, quasi-
Monte Carlo, adaptive), we can delineate their realm of applicability. The inherent
accuracy and error bounds for basic integration methods are given via such measures
as the degree of precision of cubature rules, the index of a family of lattice rules,
and the discrepancy of (deterministic) uniformly distributed point sets. Strategies
incorporating these basic methods are built on paradigms to reduce the error by,
e.g., increasing the number of points in the domain or decreasing the mesh size,
locally or uniformly. For these processes the order of convergence of the strategy is
determined by the asymptotic behavior of the error, and may be too slow in prac-
tice for the type of problem at hand. For certain problem classes we may be able to
improve the effectiveness of the method or strategy by such techniques as transfor-
mations, absorbing a difficult part of the integrand into a weight function, suitable
partitioning of the domain and extrapolation or convergence acceleration processes.
Situations warranting the use of these techniques (possibly in an "automated" way)
will be described and illustrated by sample applications.
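Two of the paradigms mentioned above, plain Monte Carlo sampling and absorbing a difficult factor of the integrand into a weight function, can be sketched on a toy singular integral (the example and all numbers are ours, not the speaker's):

```python
# Toy example: integrate cos(x) * x**(-1/3) on (0, 1].
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Plain Monte Carlo on uniform points: the x**(-1/3) factor inflates the variance.
x = 1.0 - rng.random(n)                    # uniform on (0, 1]
f = np.cos(x) * x**(-1.0 / 3.0)
print(f"plain MC : {f.mean():.5f} +- {f.std(ddof=1) / np.sqrt(n):.5f}")

# Absorb x**(-1/3) into the sampling density p(x) = (2/3) * x**(-1/3)
# (sampled by inversion, x = u**1.5), leaving a smooth weighted integrand.
u = 1.0 - rng.random(n)
x = u**1.5
w = 1.5 * np.cos(x)                        # f(x) / p(x)
print(f"weighted : {w.mean():.5f} +- {w.std(ddof=1) / np.sqrt(n):.5f}")
```

The second estimate has a visibly smaller statistical error for the same number of points, which is the point of the weight-function technique.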

Bruce DENBY
LISIF, Paris
denby@ieee.org
Swarm Intelligence for Optimization Problems.
ID=507
It has long been known that ensembles of social insects such as bees and ants
exhibit intelligence far beyond that of the individual members. More recently, opti-
misation algorithms which attempt to mimic this 'swarm intelligence' have begun to
appear, and have been applied with considerable success to a number of real world
problems. The talk will first cite examples of naturally occurring swarm intelligence
in bees and ants before passing to a concrete application of Ant Colony Optimi-
sation to adaptive routing in a satellite telecommunications network. Analogies to
other types of optimisation such as gradient descent and simulated annealing will
also be given. Finally, some ideas for further applications in scientific research will
be suggested.
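For concreteness, here is a toy Ant Colony Optimisation sketch in the spirit of the talk (the small tour-finding problem and all parameters are invented; real ACO routing applications are considerably more elaborate):

```python
# Toy Ant Colony Optimisation: ants build tours on a small random graph,
# and pheromone is reinforced on the edges of short tours. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n = 6
pts = rng.random((n, 2))                   # random "nodes" to visit
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(n)
tau = np.ones((n, n))                      # pheromone on each edge

best_len, best_tour = np.inf, None
for _ in range(50):                        # colony iterations
    for _ant in range(10):                 # ants per iteration
        tour, unvisited = [0], set(range(1, n))
        while unvisited:
            i = tour[-1]
            cand = list(unvisited)
            w = tau[i, cand] / dist[i, cand]    # pheromone x closeness
            j = int(rng.choice(cand, p=w / w.sum()))
            tour.append(j)
            unvisited.remove(j)
        length = sum(dist[tour[k], tour[(k + 1) % n]] for k in range(n))
        tau *= 0.99                        # pheromone evaporation
        for k in range(n):                 # deposit, stronger on short tours
            a, b = tour[k], tour[(k + 1) % n]
            tau[a, b] += 1.0 / length
            tau[b, a] += 1.0 / length
        if length < best_len:
            best_len, best_tour = length, tour
print("best tour:", best_tour, "length:", round(best_len, 3))
```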

Witali DUNIN-BARKOWSKI
Texas Tech Univ. Health Sciences Center & Information Transmission Problems
Inst. RAS
witali.duninbarkowski@ttuhsc.edu
Great Brain Discoveries: When White Spots Will
Disappear?
ID=508
Knowledge progress about a particular object (e.g., the brain) has the characteristics
of exponential growth in a limited volume. As soon as you know that a visible
part of the whole volume is filled (1/2, 1/10, 1/1000 or 1/10000 - doesn't matter),
the time for the whole volume to be filled has almost come. The time scale is in units
of the total duration of the filling process in the limited volume, if you have
started from zero level. We didn't know how much we were ignorant about the brain
even a decade ago. The whole brain was just Terra Incognita. But recent progress in
computational neuroscience shows that presently we know about 1/10 (and not less
than 1/100000) of all brain network mechanisms. That is why we can say that we are
dealing with white spots on the map of knowledge about the brain and not with
Terra Incognita any more. The time for full understanding of the brain is not far from
now (several years by cautious estimates). A couple of well understood mechanisms
of brain functioning (work of synchronous/asynchronous neuron ensembles in cortex,
cerebellar data prediction machinery, etc.) will be exposed in the talk.
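The "doesn't matter" claim can be made quantitative with a simple logistic model of growth in a limited volume (our illustration, not the speaker's): the time to reach near-saturation grows only logarithmically as the currently visible fraction shrinks.

```python
# Logistic filling x' = r*x*(1-x): time to go from a fraction f filled to
# 99% filled is t = [ln(0.99/0.01) - ln(f/(1-f))] / r, logarithmic in 1/f.
import math

r = 1.0                                   # intrinsic rate; sets the time unit
for f in (1/2, 1/10, 1/1000, 1/10000):
    t = (math.log(0.99 / 0.01) - math.log(f / (1 - f))) / r
    print(f"from {f:g} filled to 0.99 filled: t = {t:5.1f} time units")
```

Going from 1/2 known to 1/10000 known only stretches the remaining time by about a factor of three in this model, which is the sense in which the visible fraction "doesn't matter".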

Daron GREEN
IBM EMEA
Grid Technologies EMEA, IBM UK
IBM experience in GRID.
ID=510
To many industry watchers, Grid Technology represents the next wave of distributed
computing, in which companies can share IT infrastructure and IT services
within or between enterprises - some go as far as saying that it will replace the
Internet. Grid Technology provides the answer to the question facing many IT
managers: "How will my organisation ensure that its IT infrastructure is sufficiently
flexible to support a rapidly changing global market?". It tackles the challenges
faced when users need to access data/IT services from anywhere in the organisation,
with the added complexity of potential mergers/acquisitions, while at the
same time allowing for the possibility of embracing e-utility services. IBM was the
first major company to commit to supporting the Grid movement and to contribute to
the open-source development community - some see this as a visionary move, giving
IBM the potential to dominate the IT industry for decades. The presentation will arm
you with an understanding of what IBM sees as 'Grid Computing' and how it may
change the way we use IT. The discussion will provide an indication of the challenges
facing an organisation wishing to invest in grid technology and explain why IBM is
so interested in overcoming the many difficulties yet remaining to be solved.

Plenary reports
Frederick JAMES
CERN, Geneve
f.james@cern.ch
Summary of Recent Ideas and Discussions on Statistics in
HEP.
ID=511
Starting with the Confidence Limits Workshop at CERN in January 2000, a series of four meetings has brought together particle physicists to discuss and try to settle some of the major outstanding problems of statistical data analysis that continue to cause disagreement among experts. These were the first international conferences devoted exclusively to statistics in HEP, but they will not be the last.
In this talk, I will summarize the main ideas that have been treated, and in a few
cases, points that have been agreed upon.

Plenary reports
Roger JONES
Univ. of Lancaster
Roger.Jones@cern.ch
ATLAS Computing and the Grid.
ID=512
ATLAS is building a Grid infrastructure using middleware tools from both European and American Grid projects. As such, it plays an important role in ensuring coherence between projects. Various Grid applications are being built, some in collaboration with LHCb. These will be exercised and refined, along with our overall computing model, by means of a series of Data Challenges of increasing complexity.

Plenary reports
Matthias KASEMANN
FNAL, Batavia
kasemann@fnal.gov
The LCG project - Common Solutions for LHC.
ID=513
Four LHC experiments are developing software for all aspects of data analysis. Joint efforts and common projects between the experiments and the LHC Computing Grid Project are underway to minimize costs and risks. However, since the experiments differ from one another, the right balance between a single set of methods and tools and experiment-specific solutions must be found. Data Challenges of increasing size and complexity will be performed as milestones on the way to LHC start-up, to verify the solutions found and to measure the readiness for data analysis.

Plenary reports
Carl KESSELMAN
Inf. Sci. Inst., Univ. of Southern California, Marina del Rey
carl@isi.edu
Grid Computing.
ID=514

Plenary reports
Pavel KROKOVNY
BINP SB RAS, Novosibirsk
krokovny@inp.nsk.su
Belle computing.
ID=515
Belle is a high-luminosity asymmetric e+e- collider experiment designed to investigate the origins of CP violation and other physics. An important aspect of this experiment is its computing system. The details of the Belle offline reconstruction and Monte Carlo production scheme will be discussed at the conference.

Plenary reports
Peter KUNSZT
CERN, Geneve
Peter.Kunszt@cern.ch
Status of the EU DataGrid Project.
ID=516
The EU DataGrid project has as its aim to develop a large-scale research testbed for Grid computing. Three major application domains have already been running demonstrations: particle physics, Earth observation and biomedicine. The project is in the middle of its second year and has successfully passed its first independent EU review. The DataGrid testbed is up and running at several project sites and is growing in functionality with each new release. We discuss the status of the project and the evolution foreseen for the current year, especially in view of the potential impact of the Globus migration to OGSA. We also present the applications' plans for exploiting this technology in the future.

Plenary reports
Marcel KUNZE
FZK, Karlsruhe
Marcel.Kunze@hik.fzk.de
The CrossGrid Project.
ID=517
There are many large-scale problems which require new approaches to computing, such as earth observation, environmental management, biomedicine, and industrial and scientific modelling. The CrossGrid project addresses realistic problems in medicine, environmental protection, flood prediction, and physics analysis, and is oriented towards specific end-users: 1) medical doctors, who could obtain new tools to help them reach correct diagnoses and to guide them during operations, 2) industries, which could be advised on the best timing for certain critical operations involving risk of pollution, 3) flood crisis teams, which could predict the risk of a flood on the basis of historical records and current hydrological and meteorological data, 4) physicists, who could optimise the analysis of massive volumes of data distributed across countries and continents. The corresponding applications will be based on Grid technology and could be complex and difficult to use, so the CrossGrid project aims at developing several tools which will make the Grid more friendly for average users. Portals for specific applications will be designed, which should allow for easy connection to the Grid, create a customised work environment, and provide users with all the information necessary to get their job done.

Plenary reports
Mark NEUBAUER
MIT, Naperville
msn@fnal.gov
Computing at CDF.
ID=518
Run II at the Fermilab Tevatron Collider began in March 2001 and will continue
to probe the high energy frontier in particle physics until the start of the LHC at
CERN. It is expected that the CDF collaboration will store up to 10 Petabytes
of data onto tape by the end of Run II. Providing efficient access to such a large volume of data for analysis by hundreds of collaborators world-wide will require new ways of thinking about computing in particle physics research. In this talk, I discuss the computing model at CDF designed to address the physics needs of the collaboration. Particular emphasis is placed on the current development of an O(1000)-processor PC cluster accessing O(200 TB) of disk at Fermilab, serving as the Central Analysis Facility for CDF, and on the vision for incorporating this into a decentralized (GRID-like) framework.

Plenary reports
Gian Piero PASSARINO
Univ. of Turin
giampiero@to.infn.it
A Frontier in Multiscale Multiloop Integrals: the
Algebraic-Numerical Method.
ID=519
Schemes for systematically achieving accurate numerical evaluation of arbitrary
multi-loop Feynman diagrams are discussed. The role of a reliable approach to the
direct and precise numerical treatment of these integrals in producing a complete
calculation for two-loop Standard Model predictions is also reviewed.
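For orientation (a standard textbook illustration, not taken from the talk), already the one-loop scalar two-point function reduces, via the Feynman-parameter identity, to a parametric integral suited to direct numerical treatment:

\[
\frac{1}{AB}=\int_0^1\frac{dx}{\left[\,xA+(1-x)B\,\right]^2},
\qquad
B_0(p^2;m_1,m_2)=\frac{1}{\bar\varepsilon}
-\int_0^1 dx\,\ln\frac{x^2p^2-x\,(p^2+m_1^2-m_2^2)+m_1^2-i\delta}{\mu^2}.
\]

Multi-scale, multi-loop diagrams lead to higher-dimensional parametric integrals of the same general kind, which is where the algebraic-numerical machinery comes in.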

Plenary reports
Leslie ROBERTSON
CERN, Geneve
les.robertson@cern.ch
The LHC Computing Grid Project - Creating a Global
Virtual Computing Centre for Particle Physics.
ID=521
The computing needs of LHC will require enormous computational and data storage resources, far beyond the possibilities of a single computing centre. Grid technology offers a possible solution, tying together computing resources available to particle physics in the different countries taking part in LHC. A major activity of the LHC Computing Grid Project (LCG) is to develop and operate a global grid service, capable of handling multi-PetaByte data collections while providing levels of reliability, usability and efficiency comparable with those available in scientific computing centres.

Plenary reports
Peter SHAWHAN
CALTECH, Pasadena
shawhan_p@ligo.caltech.edu
LIGO Data Analysis.
ID=522
The Laser Interferometer Gravitational-Wave Observatory (LIGO) project has constructed two 'observatories' in the United States which are poised to begin collecting scientifically interesting data. Members of the LIGO Scientific Collaboration have been using data from recent 'engineering runs' to develop and refine signal detection algorithms and data analysis procedures. I will describe a few distinct LIGO data-analysis tasks which vary greatly in their computational demands, and thus will be addressed in different ways. I will also comment on some of the organization and implementation challenges which have been encountered so far.

Very Large-scale Computing and GRID oral session
Victor OKOL'NISHNIKOV
Inst. of Comp. Math. and Math. Geophysics SB RAS, Novosibirsk
okoln@kti.nsc.ru
Parallel Simulation System.
ID=132
Nowadays a very large-scale, high-performance simulation system can be built as a cluster of computers that are relatively slow and cheap. However, existing simulation solutions do not permit exploiting all the advantages of these new architectures; in other words, there is a need for new, portable, flexible, powerful, well-scaling simulation tools. A parallel simulation system was implemented for the RM600-E30 supercomputer, an SMP system running Reliant UNIX. The source language is a C++-based, process-oriented, discrete simulation language. The system provides the following capabilities: interaction of processes with one another via message passing, building of hierarchical models, and dynamic change of the model structure. The goal of developing the simulation system was to obtain a highly portable, high-performance system. This goal is achieved by using recent approaches in system design and recent portable techniques, namely threads and MPI (Message Passing Interface), which are supported by many modern operating systems. This technology makes the simulation system portable to distinct parallel and distributed architectures. Performance is achieved by dividing the run-time system into a Communication Part and a Simulation Engine. The Communication Part provides message passing between processes and also synchronizes the execution of the model in model time. The method of synchronization can be set in the Communication Part. It is intended to implement a library of various synchronization methods, both conservative and optimistic, and to provide the capability to choose the method best suited, in terms of performance, to a concrete class of models or to individual models. At present, one conservative synchronization method is implemented in the simulation system. There are various realizations of the simulation system: a quasi-parallel realization for Windows 95/98/NT and a distributed realization for the QNX operating system. The parallel simulation system was also realized on the MBC-1000M supercomputer. It is intended for large-scale simulation of large systems.
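A minimal sketch of the process-oriented, message-passing simulation style described above (our illustration with hypothetical names; it is not the system's actual interface):

    # Two model processes exchange messages through a simulation engine that
    # delivers events strictly in model-time order (a conservative scheme).
    import heapq

    class Engine:
        def __init__(self):
            self.now, self.queue, self.seq = 0.0, [], 0

        def send(self, delay, process, message):
            # Schedule delivery of `message` to `process` after `delay`.
            heapq.heappush(self.queue, (self.now + delay, self.seq, process, message))
            self.seq += 1

        def run(self, until):
            # Consume events in time order, so no process sees its past.
            while self.queue and self.queue[0][0] <= until:
                self.now, _, process, message = heapq.heappop(self.queue)
                process.receive(self.now, message)

    class PingPong:
        def __init__(self, engine, name):
            self.engine, self.name, self.peer = engine, name, None

        def receive(self, t, message):
            print(f"t={t:4.1f}  {self.name} got {message!r}")
            self.engine.send(1.0, self.peer, "ball")   # reply one unit later

    eng = Engine()
    a, b = PingPong(eng, "A"), PingPong(eng, "B")
    a.peer, b.peer = b, a
    eng.send(0.0, a, "serve")
    eng.run(until=5.0)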

Very Large-scale Computing and GRID oral session
Vladimir LITVINE
CALTECH, Pasadena
litvin@hep.caltech.edu
Use of a Grid Prototype Infrastructure for the QCD Background Study of the H → γγ Process on Alliance Resources.
Co-authors: H. Newman, S. Shevchenko, S. Koranda, B. Loftis, J. Towns,
M. Livny, P. Couvares, T. Tannenbaum, J. Frey
ID=174
The CMS experiment at the CERN Large Hadron Collider (LHC) will begin taking data in 2007. Along with the Petabytes of data expected to be collected, CMS will also generate an enormous amount of Monte Carlo simulation data. This paper describes solutions developed at Caltech to address the issue of controlled generation of large amounts of Monte Carlo simulation data using distributed Alliance resources. We will report on the results of production using Grid tools (Globus, Condor-G) to knit together resources from various institutions within the Alliance in the NPACI framework. The results of this effort have been used for the study of the H → γγ decay channel with full background simulation.

Very Large-scale Computing and GRID oral session
Jose HERNANDEZ
DESY, Zeuthen
Jose.Hernandez@desy.de
Offline Mass Data Processing Using Online Computing
Resources.
ID=218
Traditionally in high energy physics experiments, due to the quite different environments and requirements, the online and offline software and computing areas have been sharply separated, and dedicated hardware and software are normally used in the data acquisition and trigger systems. In HERA-B, except for the first level, all trigger levels are implemented as PC farms running the UNIX-like operating system Linux, thus blurring the sharp border between online and offline application software. The second and third level triggers run on a 220-CPU farm. The fourth level trigger and the online reconstruction are performed on an additional 200-CPU farm. The farms are connected through a Fast/Gigabit Ethernet switched network. In HERA-B, mass data processing (data reprocessing and Monte Carlo production) is performed on the online farms during periods without data taking (shutdowns of the accelerator or periods between luminosity fills). A system to exploit the online resources, both the vast CPU power of the farms and the online booting, control, monitoring, event distribution, logging and archiving protocols, has been set up. The system is fully integrated in the run control system. All processes are booted and controlled in the same manner as during normal data taking. Thus, the shift crew can efficiently use any moment without data taking for performing offline mass data processing. The event data reprocessing works similarly to the usual online processing scheme; only the source of the data is different. The data do not come from the detector but from a multi-threaded process retrieving the events from tape and distributing them to the online farms. The same scheme is used to run mass Monte Carlo production on the online farms; no event distribution is needed in this case. The MC events are generated on the farm nodes, the detector simulation, trigger and event reconstruction are performed, and finally the events are sent to the logger for archiving to tape. All the LHC experiments are planning to build PC farms for triggering, made up of thousands of nodes. The HERA-B approach of using the online computing resources for offline mass data processing might be quite interesting, especially in the start-up phase of LHC, where significant down times of the accelerator can be expected and reprocessing of the data taken will be necessary as the knowledge of the detectors improves, the reconstruction packages are further developed, and improved calibration and alignment constants become available.

Very Large-scale Computing and GRID oral session
Douglas OLSON
LBNL, Berkeley
dlolson@lbl.gov
Interfacing Interactive Data Analysis Tools with the Grid:
The PPDG CS-11 Activity.
Co-authors: J. Perl
ID=219
For today's physicists, who work in large geographically distributed collaborations, the data grid promises significantly greater capabilities for analysis of experimental data and production of physics results than is possible with today's "remote access" technologies. The goal of letting scientists at their home institutions interact with and analyze data as if they were physically present at the major laboratory that houses their detector and computer center has yet to be accomplished. The Particle Physics Data Grid project (www.ppdg.net) has recently embarked on an effort to "Interface and Integrate Interactive Data Analysis Tools with the grid and identify Common Components and Services". The initial activities are to collect known, and identify new, requirements for grid services and analysis tools from a range of current and future experiments (ALICE, ATLAS, BaBar, D0, CMS, JLab, STAR, others welcome), in order to determine whether existing plans for tools and services meet these requirements. Follow-on activities will foster the interaction between grid service developers, analysis tool developers, experiment analysis framework developers and end-user physicists, and will identify and carry out specific development/integration work so that interactive analysis tools utilizing grid services actually provide the capabilities that users need. This talk will summarize what we know of the requirements for analysis tools and grid services, as well as describe the identified areas where more development work is needed.

Very Large-scale Computing and GRID oral session
Ruediger BERLICH
EP1 Bochum Univ., Karlsruhe
ruediger@berlich.de
Evolutionary Algorithms and Parallel Computing.
Co-authors: M. Kunze
ID=223
The talk highlights a new, object-oriented approach to unifying the implementation of Evolutionary Strategies and Genetic Algorithms. Special emphasis lies on the transparent provision of parallel program execution using MPI and POSIX threads, putting the burden of implementing parallel execution models on the library, not on its users. An additional focus lies on GRID computing, as the execution of parallel algorithms implemented using the Message Passing Interface (MPI) can be carried out over a GRID.
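A minimal sketch of the kind of optimisation such a library unifies (ours; the objective and parameters are invented, and this is not the library's API). Since each fitness evaluation is independent, the evaluation loop parallelizes naturally over MPI workers:

    # A toy genetic algorithm maximising -(x-3)^2, optimum at x = 3:
    # selection of the fitter half, then Gaussian mutation of parents.
    import random

    def fitness(x):
        return -(x - 3.0) ** 2

    def evolve(pop_size=50, generations=100, sigma=0.3):
        population = [random.uniform(-10, 10) for _ in range(pop_size)]
        for _ in range(generations):
            population.sort(key=fitness, reverse=True)
            parents = population[: pop_size // 2]
            children = [random.choice(parents) + random.gauss(0, sigma)
                        for _ in range(pop_size - len(parents))]
            population = parents + children
        return max(population, key=fitness)

    random.seed(3)
    print(evolve())        # converges near 3.0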

Very Large-scale Computing and GRID oral session
Pablo SAIZ
CERN, Geneve
pablo.saiz@cern.ch
AliEn - ALICE Environment on the GRID.
Co-authors: L. Aphecetche, P. Buncic, R. Piskac, J.-E. Revsbech, V. Sego
ID=230
AliEn (Alice Environment) is a framework providing Grid functionality. It has been developed in the context of the ALICE experiment to satisfy the LHC requirements. AliEn is built on top of the latest Internet standards for information exchange and authentication (SOAP, SASL, PKI) and common Open Source components (such as Globus/GSI, OpenSSL, OpenLDAP, SOAPLite, MySQL, perl5). AliEn provides a virtual file catalogue that allows transparent access to distributed data-sets, and provides a top-to-bottom implementation of a lightweight Grid applicable to cases where the handling of a large number of files is required (up to 2 PB and 10^9 files/year, distributed over more than 20 locations worldwide, in the case of the ALICE experiment). At the same time, AliEn is meant to provide an insulation layer between different Grid implementations and a stable user and application interface for the community of ALICE users during the expected lifetime of the experiment. As progress is made in the definition of Grid standards and interoperability, AliEn will be progressively interfaced to the mainstream HEP Grid infrastructure (EU DataGrid, as well as the US Grid infrastructures GriPhyN and iVDGL). In addition, AliEn will be used to implement the Grid component of MammoGrid, an EU project in the domain of health informatics which aims, in light of emerging grid technology, to develop a Europe-wide database of mammograms that will be used to investigate a set of important healthcare applications as well as the potential of this Grid to support effective co-working between healthcare professionals throughout the EU.
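A minimal sketch of the idea behind a virtual file catalogue (ours; the names and the site-preference rule are hypothetical, not AliEn's): logical file names are mapped to physical replicas, and resolution is transparent to the user.

    # Map a logical file name (LFN) to physical replicas and pick one,
    # preferring a nearby site.
    catalogue = {
        "/alice/sim/run042/event.root": [
            ("cern.ch", "gsiftp://se01.cern.ch/data/a1b2.root"),
            ("gsi.de",  "gsiftp://se.gsi.de/store/a1b2.root"),
        ],
    }

    def open_lfn(lfn, near="cern.ch"):
        replicas = catalogue[lfn]
        site, pfn = min(replicas, key=lambda r: 0 if r[0] == near else 1)
        print(f"{lfn} -> {pfn} (site {site})")
        return pfn

    open_lfn("/alice/sim/run042/event.root")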

Very Large-scale Computing and GRID oral session
Roberto BARBERA
Univ. of Catania & INFN, Catania
roberto.barbera@ct.infn.it
GENIUS: a Web Portal to the GRID.
Co-authors: G. Andronico, A. Falzone, S. Maccarone, A. Rodolico
ID=250
Grid computing infrastructures are rapidly expanding within the academic (and non-academic) world as common frameworks to build complex Problem Solving Environments (PSE) in physics, biology, chemistry, engineering, Earth observation and many other fields. This is witnessed by the many grid projects which have recently been started in Europe, the US and other countries of the world. However, the present lack of standardization, coupled with the intrinsic complexity of the implementation of grid concepts, keeps a large number of generic users away from grids and poses some problems of interoperability. A big help in this respect could come from grid portals, intended as specially customized web interfaces to the grid services, which can be accessed from everywhere and by everything (desktops, laptops, PDAs, WAP phones, etc.), hiding the complexity behind them from the users. In this contribution, the present status of the grid portal GENIUS (https://genius.ct.infn.it), jointly developed by INFN and NICE srl in the context of the INFN Grid and EU DataGrid projects, will be shown. Its architectural principles will be discussed and compared with those of similar grid portals already available on the market. If technically possible, a small live demonstration of its use will also be provided.

Very Large-scale Computing and GRID oral session
Aleksandr KONSTANTINOV
Univ. of Oslo
aleks@fys.uio.no
The NorduGrid Project: Using Globus Toolkit for Building
Grid Infrastructure.
Co-authors: M. Ellert, B. Kónya, O. Smirnova, A. Wäänänen
ID=265
NorduGrid is the pioneering Grid project in Scandinavia. The purpose of the project is to create a Grid computing infrastructure in the Nordic countries, operate a functional testbed and expose the infrastructure to end-users from different scientific communities. Project participants include universities and research centers in Denmark, Sweden, Finland and Norway. The cornerstone of the infrastructure adopted at NorduGrid is the Globus toolkit, developed at Argonne National Laboratory and the University of Southern California. The Globus toolkit is widely accepted as the de facto standard for Grid computing and provides a collection of robust protocols, low-level services and libraries. It is, however, missing several important high-level services, such as a grid-level scheduler, a grid-level authorization system, grid-level accounting and quotas, job data stage-in/stage-out, grid application development toolkits, and user-friendly grid entry points. Given the need to provide a working production system, NorduGrid has developed its own solutions for the most essential parts. An early prototype implementation of the architecture adopted by NorduGrid is being tested and further developed. It consists of: an Information System based on the Globus Monitoring and Discovery Service; a User Interface integrated with a Resource Broker for submitting jobs with sufficiently complex requirements; and a Grid Manager providing an interface for complex job submission, based on the Globus Resource Allocation Manager and the GridFTP protocol developed by the Globus team. Aiming at a simple but functional system capable of handling the common computational problems encountered in the Nordic scientific communities, we chose simple but still functional solutions, implementing the necessary parts first.

Very Large-scale Computing and GRID oral session
Gabriele GARZOGLIO
FNAL, Batavia
garzogli@fnal.gov
The SAM-GRID Project: Architecture and Plan.
Co-authors: A. Baranovski, L. Lueking, R. Pordes, I. Terekhov, S. Veseli, J. Yu,
R. Walker, V. White
ID=266
SAM is a robust distributed file-based data management and access service, fully integrated with the DZero experiment at Fermilab and in evaluation at the CDF experiment. The goal of the SAM/Grid project is to fully enable distributed computing for the experiments. The architecture of the project is composed of three primary functional blocks: job handling, data handling, and monitoring and information services. The job handling and monitoring/information services are built on top of standard grid technologies (Condor-G/Globus Toolkit), which are integrated with the data handling system provided by SAM. The plan is devised to provide users with incrementally increasing levels of capability over the next two years.

Very Large-scale Computing and GRID oral session
Alexei JOUTCHKOV
Telecommunication Centre "Science and Society", Moscow
alex@chph.ras.ru
Development of an Interdisciplinary Fragment of the Russian Grid Segment: State of the Art.
Co-authors: N. Tverdokhlebov, S. Arnautov, A. Yanovskii, Y. Lyssov, A. Cherny
ID=274
A national interdisciplinary Grid segment is being developed in Moscow as a research tool for metacomputing investigations. A 1.0 Gb/s communication line has been built and serves as the backbone of the project. The development of the interdisciplinary Grid segment and the testing of the proposed solutions are carried out within the framework of the European EU DataGrid project under the general management of CERN. There are three directions of investigation that we have marked as the most promising: 1) biomedical metacomputing Grid solutions for large-scale computing tasks in biology and health care, 2) provision of Grid services through specialized portals for interested users (first of all, from scientific organizations), 3) the study of some aspects of building knowledge networks (KN). As a first step, an experimental prototype of a Grid-based version of BLAST (Basic Local Alignment Search Tool) has been developed and tested. BLAST is the most widely used tool in genomic research, and it also needs a lot of computing resources. The experience of the interdisciplinary Grid segment development made it clear that specialized interfaces are of key importance for effective usage of Grid resources. We assume that the right way is to develop user- and problem-oriented Grid services and portals. One example is our attempt to provide access to BLAST through a specialized portal of the Institute of Molecular Biology. Two ideas have been chosen with high priority in KN development: the development of an appropriate portal and of a metadata model, as steps towards integrating the information resources of a subject domain into a full digital library (DL). Such a portal, with an appropriate interface and a set of services, should be a uniform point of access to the DL. The DL is projected essentially as an open system providing mechanisms for connecting the most diverse collections using various methods. Dublin Core with qualifiers is proposed as the main solution for the metadata formats.
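A minimal sketch of why BLAST lends itself to Grid distribution (ours; run_blast() is a hypothetical stand-in for shipping one chunk to a Grid worker holding a replicated database): a multi-sequence query splits into fully independent pieces.

    from multiprocessing import Pool

    def split_fasta(text):
        # One record per '>' header.
        return [">" + rec for rec in text.split(">") if rec.strip()]

    def run_blast(record):
        # Placeholder: a real setup would run the BLAST executable on a
        # worker node against a locally replicated sequence database.
        name = record.splitlines()[0].lstrip(">")
        return name, f"hits for {name}"

    if __name__ == "__main__":
        queries = split_fasta(">seq1\nACGTACGT\n>seq2\nGGGTTTAA\n")
        with Pool(2) as pool:
            for name, result in pool.map(run_blast, queries):
                print(name, "->", result)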

Very Large-scale Computing and GRID oral session
Victor KOVALENKO
Keldysh Inst. of Applied Mathematics RAS, Moscow
kvn@keldysh.ru
Resource Manager for Grid with Global Job Queue and
with Planning Based on Local Schedules.
Co-authors: E. Kovalenko, D. Koryagin, E. Ljubimskii, A. Orlov, E. Huhlaev
ID=275
Even with the control facilities offered by Globus, the problem of automatic job distribution amongst resources in the Grid environment remains open. Over the last year appreciable advances were made in this direction: several resource brokers were developed. A common distinctive feature of all those brokers is that the resources for a job are defined at the moment it arrives at the broker. However, this scheme only works satisfactorily provided that free resources are available. We discuss an approach (the Resource Manager) whereby jobs are placed into a global queue, and each job is started at a time and place defined by scheduling. The scheduling process is carried out continuously, aiming to optimize job launch times in accordance with job priorities. Such planning is based on information drawn from the resource usage plans of the local schedulers. Several planning algorithms are discussed, as well as the architecture and implementation strategy of the Resource Manager's components: 1) the Resource Manager agent, meant to extend the local schedulers with a function that builds the resource utilization plan, 2) the protocol of the Resource Manager's interaction with local schedulers, 3) an information service for scheduling that takes into account the specifics of the planning algorithms.
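A minimal sketch of planning from a global queue against local schedules (ours; the data structures and the placement rule are invented for illustration, not the Resource Manager's actual interfaces):

    # Each site advertises when it is next free; the planner repeatedly
    # takes the highest-priority job from the global queue and gives it
    # the earliest possible start according to the local schedules.
    import heapq

    sites = {"siteA": 3.0, "siteB": 1.0}    # next free time per site
    queue = []
    for prio, job, runtime in [(0, "reco", 2.0), (1, "mc-gen", 4.0), (2, "fit", 1.0)]:
        heapq.heappush(queue, (prio, job, runtime))

    while queue:
        prio, job, runtime = heapq.heappop(queue)
        site = min(sites, key=sites.get)    # earliest available site
        start = sites[site]
        sites[site] = start + runtime       # update that site's plan
        print(f"{job:7s} -> {site} at t={start}")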

Very Large-scale Computing and GRID oral session
Irene SHOSHMINA
Inst. for High-Performance Computing and Inf. Systems, S-Petersburg
irena@csa.ru
Software Tools for Dynamic Resource Management
(STDRM).
Co-authors: D. Malashonok, S. Romanov
ID=303
The management of computing resources in an information computational space, a grid, is an interesting modern problem. The functional load of a computational grid can vary: high-performance computing, intensive data exchange, collaborative work between participants, etc. The issues in a computational grid concern both technical and theoretical problems, such as scheduling, security, fault tolerance, configuration, etc. The supercomputer centre of the Institute for High-Performance Computing and Information Systems has diverse equipment and serves different scientific and technical tasks. A large category of the centre's users is engaged in parallel tasks with an intensive exchange of data (for physics and chemistry problems). To solve similar tasks effectively in the computational grid, we are developing, together with the University of Amsterdam, the Software Tools for Dynamic Resource Management (STDRM). The basic goals of the project are the development of migration of parallel tasks and of dynamic system loading of resources in a globally distributed computational environment, of monitoring of grid load, and of a scheduler, as well as solving the problems of grid security and of combining heterogeneous systems. The development will proceed stage by stage. At the first stage, the system for migration of parallel tasks, checkpointing, and interprocess exchange in a grid (including support of the MPI and PVM standards) will be developed. The block of work connected with parallel tasks in a computational grid will be included in Globus within the framework of the CrossGrid project.

Very Large-scale Computing and GRID oral session
Andrey MINAENKO
IHEP, Protvino
minaenko@mx.ihep.su
First Experience of EDG Middleware Usage for Mass
Simulation of ATLAS Monte-Carlo Data.
ID=305
A characteristic feature of future experiments at the LHC is a huge data flow, at the level of Petabytes per year. To process and analyse these data, worldwide distributed computing facilities will be used. Regional computing centres will be unified into a whole with the help of Grid middleware. Russian HEP institutes participate in the creation of a prototype for LHC computing and in testing the middleware developed in the framework of the European DataGrid (EDG) project. The report presents the experience of using the first releases of the EDG middleware at the Russian part of the prototype for mass simulation of ATLAS Monte-Carlo data.

Very Large-scale Computing and GRID oral session
Jakub MOSCICKI
CERN, Geneve
Jakub.Moscicki@cern.ch
A Component Framework for Distributed Parallel Data
Analysis in HEP.
ID=306
The huge data volumes of modern experiments require end-user analysis tasks to be run in parallel on large clusters. An R&D project has been started in the CERN IT/API group to create a generic, component-based framework for distributed and parallel data processing, based on the Master/Worker model. Precompiled user analysis code is loaded dynamically at runtime from component libraries and called back when appropriate. Such an application-oriented framework must be flexible enough to integrate with the emerging Grid technologies as they become available. Therefore, common services such as reconstruction of the runtime environment, code distribution, load balancing and authentication are designed and implemented as pluggable modules; this way they can easily be replaced with modules implemented in newer technologies when those arrive. While the focus of end-user HEP analysis is on ntuple-like data, the Master/Worker model of parallel processing may also be used in other contexts, such as detector simulation. We describe the preliminary architecture and explain the design choices for the distributed, component-based system.
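A minimal sketch of the Master/Worker model itself (ours, using Python's multiprocessing; it is not the CERN framework): the master scatters independent work items, workers run the analysis code, and partial results are merged.

    from multiprocessing import Pool

    def analyse(chunk):
        # Stand-in for dynamically loaded user analysis code.
        return sum(x * x for x in chunk)

    if __name__ == "__main__":
        data = [list(range(i, i + 1000)) for i in range(0, 10000, 1000)]
        with Pool(processes=4) as pool:
            partial = pool.map(analyse, data)   # scatter, compute, gather
        print("merged result:", sum(partial))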

Very Large-scale Computing and GRID oral session
Alexander KRYUKOV
SINP MSU, Moscow
kryukov@theory.sinp.msu.ru
Implementation of Remote Job Submission over GRID with
IMPALA/BOSS CMS MC Production Tools.
Co-authors: A. Edunov, U. Gasparini, S. Lacaprara, M. Verlato
ID=309
Monte Carlo simulation plays a very important role in the planning of new High Energy Physics experiments and in the analysis of their data. The experiments at the CERN Large Hadron Collider (LHC) have started an extensive program of simulation of the future detectors. Because this task requires the generation of millions of events, the problem of managing MC simulation runs and computer power utilization is very important. One of the main decisions of the Compact Muon Solenoid (CMS) Collaboration at the LHC is to adopt an approach of distributed data processing. To automate CMS MC production, special tools were developed: the IMPALA (Intelligent Monte Carlo Processing And Local Activator) and BOSS (Batch Object Submission System) packages, which are currently working successfully on LAN-based farms of processors during the SPRING-2002 CMS MC production run. Implementing job submission over GRID middleware is a necessity. The major difficulty of GRIDification is the fact that IMPALA was designed to generate locally activated jobs using the local environment, as implied by the name of the package. A set of scripts was designed and developed which 1) packs user-dependent and application-dependent information, 2) generates scripts and a JDL file that perform two steps on the CE: start the production job generator (for example IMPALA) and start the generated production job, and 3) submits the job to the Resource Broker using the BOSS package. This implementation was successfully tested on the Padua INFN-GRID testbed1 and the INFN-CNAF Resource Broker. It is very important that this approach requires minimal modification of IMPALA (version 2) and no changes in BOSS (version 3).
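For illustration (ours; the job name, files and values are invented, and only the general attribute style follows the classad-like JDL consumed by the EDG Resource Broker), such a script essentially emits a job description of this form:

    def make_jdl(executable, args, in_files, out_files):
        # Build a minimal classad-style JDL string.
        sandbox_in = ", ".join(f'"{f}"' for f in [executable] + in_files)
        sandbox_out = ", ".join(f'"{f}"' for f in out_files)
        return (
            f'Executable = "{executable}";\n'
            f'Arguments = "{args}";\n'
            f'StdOutput = "std.out";\n'
            f'StdError = "std.err";\n'
            f'InputSandbox = {{{sandbox_in}}};\n'
            f'OutputSandbox = {{"std.out", "std.err", {sandbox_out}}};\n'
        )

    print(make_jdl("run_impala.sh", "run42", ["params.dat"], ["events.ntpl"]))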

Very Large-scale Computing and GRID oral session
Atsushi MANABE
KEK, Tsukuba
Atsushi.Manabe@kek.jp
Administration Tools for Managing Large Scale Linux
Cluster.
Co-authors: S. Kawabata
ID=312
In administrating large-scale PC clusters, installation, updating, configuration and monitoring are hard work for the system administrators. We use several tools for these purposes on our Linux cluster; some were developed (and are being developed) by ourselves, and some come from other people. In the presentation we introduce their functionality, usability and performance, with experience from our 100-CPU Linux PC cluster. 1) Installation/updating: 'dolly+' is a disk-image-cloning system installation tool. By logically forming a connection ring among the nodes, 'dolly+' can install the OS and utilities very quickly, without the server bottleneck from which many widely used installers suffer when applied to large-scale clusters. 2) Monitoring and configuration: for monitoring and configuring a large number of PCs, many tools are available as public domain software. For monitoring, SNMP-based kernel status monitors are common, and 'cfengine'/'pikt' are widely used as health checkers for server daemon processes. For configuration, several tools based on XML and database technology are being developed in the computer science community. 3) Command execution: even executing one small command on several hundred nodes simultaneously is not easy. The execution itself may be easy, but examining the results (success or failure) and reissuing the command after fixing a problem is tedious and troublesome. 'WANI' makes this work easy: the administrator can select nodes and execute commands from a Web browser, and the results are examined on each node by simple but effective methods, then shown to the administrator so that he can understand them at a glance.
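A minimal sketch of the third problem (ours; the host names are hypothetical and passwordless ssh is assumed): run one command on many nodes in parallel and classify the outcomes, which is the tedious part a tool like WANI automates.

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    NODES = [f"node{i:03d}" for i in range(1, 6)]

    def run(node, command):
        # Execute `command` on `node` via ssh; report success or failure.
        proc = subprocess.run(["ssh", node, command],
                              capture_output=True, text=True, timeout=30)
        return node, proc.returncode == 0, proc.stdout.strip()

    with ThreadPoolExecutor(max_workers=len(NODES)) as pool:
        for node, ok, output in pool.map(lambda n: run(n, "uptime"), NODES):
            print(f"{node}: {'OK  ' if ok else 'FAIL'} {output}")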

Very Large-scale Computing and GRID oral session
Alexei KLIMENTOV
ETH Zurich & MIT Cambridge
alexei.klimentov@cern.ch
AMS Computing.
Co-authors: V. Choutko
ID=323
AMS (Alpha Magnetic Spectrometer) is an experiment to search in space for dark matter, missing matter and antimatter on the International Space Station (ISS) Alpha. The AMS detector had a precursor flight in 1998 (STS-91, June 2-12, 1998), during which more than 100M events were collected and analyzed. The detector will have another flight (AMS-02) in the fall of 2005, for 3+ years, on the International Space Station. The data will be transmitted from the ISS to the NASA Marshall Space Flight Center (MSFC, Huntsville, Alabama) and transferred to CERN for processing and analysis. We present the OO software developed for the AMS experiment and its data handling, and, in more detail, the production framework, which uses CORBA technology (to control and monitor the production process) and an ORACLE relational database (to keep catalogues, event tags, and production and monitoring information). A production farm testbed is installed at CERN; the software framework is running on it and has been successfully tested with AMS01 data. More tests are foreseen during May-June with data transmitted from MSFC and processed at CERN.

Very Large-scale Computing and GRID oral session
Martin GASTHUBER
DESY, Hamburg
Martin.Gasthuber@desy.de
Providing GRID Data Services TODAY.
Co-authors: P. Fuhrmann, R. Wellner
ID=326
In the course of the Disc-Cache project, performed in a collaboration between DESY and Fermilab, we have started the integration of GRID data service functionalities into the Disc-Cache system. Although the functionalities are already supplied by the core Disc-Cache system, a dedicated component for translating GRID requests into Disc-Cache internal commands is required and is under development. In contrast to the ongoing GRID projects, we did a bottom-up development, looking from the perspective of the Fabric layer supplier. Looking for agreed interface definitions between the Fabric and GRID middleware layers, we found only the SRM (Storage Resource Manager - LBL, Jefferson Lab, Fermilab) and chose it initially. This talk presents the experience gained and the problems we have seen in tackling the problem from the Fabric provider's view.

Very Large-scale Computing and GRID oral session
Alexandre VANIACHINE
ANL, Argonne
vaniachine@anl.gov
Data Challenges in ATLAS Computing.
ID=524
ATLAS computing is steadily progressing towards a highly functional software suite, plus a worldwide computing model which gives all of ATLAS equal access, of equal quality, to ATLAS data. A key component in the period before the LHC is a series of Data Challenges of increasing scope and complexity. These Data Challenges will use as much as possible the Grid middleware being developed in Grid projects around the world. We are committed to 'common solutions' and look forward to the LHC Computing Grid (LCG) being the vehicle for providing these in an effective way. In the context of the CERN Review of LHC Computing, the scope and goals of the ATLAS Data Challenges were defined; they are executed at the prototype tier centers which will be built in Phase 1 of the LCG project. In close collaboration between the Grid and Data Challenge communities, ATLAS is testing large-scale testbed prototypes around the world, deploying prototype components to integrate and test Grid software in a production environment, and running Data Challenge 1 production at 26 prototype tier centers in 17 countries on four continents.

Very Large-scale Computing and GRID oral session
Leandar LITOV
JINR & Univ. of Sofia
litov@phys.uni-sofia.bg
Particle Identification in the NA48 Experiment Using a Neural Network.
ID=525
The NA48 detector, situated at the CERN SPS accelerator, is designed for precise measurement of direct CP violation in the neutral kaon system. A large programme for the investigation of rare Ks, K+/- and neutral hyperon decays, and for the measurement of the CP-violating asymmetry in charged kaon decays with unprecedented precision, is envisaged. In order to suppress the background for some of the rare kaon and neutral hyperon decays, good particle identification is required. The possibility of using feed-forward neural networks to separate electrons from hadrons is considered. To test the performance of the neural network, electrons and pions from cleanly reconstructed experimental kaon decays have been used. It is shown that the neural network can be a powerful tool for particle identification: a significant suppression of the background can be reached, allowing a precise measurement of rare decay parameters.
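A minimal sketch of the technique (ours; the toy inputs, loosely modelled on quantities like E/p and shower width, are invented, and this is not the NA48 network): a small feed-forward network trained by backpropagation to separate two classes.

    import numpy as np

    rng = np.random.default_rng(0)

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Toy data: "electrons" near E/p ~ 1 with narrow showers, "pions" lower, wider.
    electrons = rng.normal([1.0, 0.2], 0.05, size=(200, 2))
    pions = rng.normal([0.6, 0.5], 0.10, size=(200, 2))
    X = np.vstack([electrons, pions])
    y = np.hstack([np.ones(200), np.zeros(200)])

    # One hidden layer, trained by gradient descent on the cross-entropy loss.
    W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
    W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
    eta = 0.5
    for _ in range(2000):
        h = sigmoid(X @ W1 + b1)
        p = sigmoid(h @ W2 + b2).ravel()
        d2 = (p - y)[:, None] / len(y)           # output-layer error
        d1 = (d2 @ W2.T) * h * (1 - h)           # backpropagated error
        W2 -= eta * (h.T @ d2); b2 -= eta * d2.sum(0)
        W1 -= eta * (X.T @ d1); b1 -= eta * d1.sum(0)

    print("mean output, electrons:", p[:200].mean())   # close to 1
    print("mean output, pions:   ", p[200:].mean())    # close to 0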

Very Large-scale Computing and GRID poster session
Mariya MEDVEDEVA
St. Technical Univ., Kursk
mariya_medvedeva@hotmail.com
The Self-organization of the Cellular Environment and the
Reproduction of the Network Logical Structure.
Co-authors: V. Koloskov
ID=131
To support fault tolerance and continuity of network operation, the paper offers an approach based on reproduction of the logical network structure by a self-organizing cellular environment. The projection of faults onto the cellular environment causes activation of the environment and a reallocation of its activation energy. The interaction of activation waves forms the structure of the environment, which is reflected into the network as its logical structure. The work considers the technology of parallel cellular control of the activation-energy distribution in the environment and of forming a steady structure in it. Rules for reproducing the logical network structure from the results of self-organization are formulated. The presentation levels and variants of environment control, the local rules of operation of the cells, the rules of interaction between the cellular environment and the network with faults, and the features of organization of the network elements are also considered. The results of studying the self-organization of the cellular environment on a simulation model have shown the correctness and high performance of the offered approach.
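A minimal sketch of a cellular environment with a local activation rule (ours; the diffusion-like update is invented for illustration and is not the paper's rule set): a fault injects activation energy, which spreads by local interactions until a steady pattern forms.

    import numpy as np

    N = 16
    energy = np.zeros((N, N))
    energy[8, 8] = 100.0          # projection of a fault onto the environment

    for step in range(50):
        # Each cell keeps half its energy and shares the rest equally with
        # its four neighbours (periodic boundaries, so energy is conserved).
        share = energy / 8.0
        energy = energy / 2.0 + (np.roll(share, 1, 0) + np.roll(share, -1, 0)
                                 + np.roll(share, 1, 1) + np.roll(share, -1, 1))

    print("total energy:", round(energy.sum(), 6))     # stays 100.0
    print("active cells (>0.1):", int((energy > 0.1).sum()))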

Very Large-scale Computing and GRID poster session
Alexey KAYUKOV
JINR, Dubna
kayukov@nf.jinr.ru
The Software for the Control System of the LUE-200.
Co-authors: M. Korjovkina, O. Strekalovsky, D. Turkin
ID=144
This article describes the software structure for the control system of LUE-200, based on the FactorySuite 2000 SCADA system by the Wonderware corporation. The outcomes of test runs characterizing the speed of data acquisition from the sensors and the database access time are presented. The internal structure of the database is described, and the organization of the client applications, permitting physicists restricted access to the database from their workstations, is shown.

Very Large-scale Computing and GRID poster session
Andrey KVACHENKO
Air Engineering Inst., Tambov
kvachenk@tmaec.ru
The Formal Specification of Algorithmic Maintenance of
Distributed Computing Systems of Imitative and
Seminatural Simulation.
ID=170
A formal description of the algorithm graph on the basis of stable intermediate descriptions is offered. Algorithms for the automated synthesis of algorithmic maintenance for SIMD-MIMD architectures, using a unified representation of the initial graph, are developed. The problems of mapping the algorithm graph onto the structural and functional components of the distributed computing system, using different synthesis criteria, are considered. Techniques for the representation, storage, verification and usage of mathematical models on the basis of stable intermediate descriptions in problems of imitative and seminatural simulation are suggested. Estimates of the quality of the software based on the suggested techniques, and the results of experimental studies of generalized analytical models using the developed algorithmic maintenance, are presented.

Very Large-scale Computing and GRID poster session
V. LAZAREV
Penza St. Univ.
nis@diamond.stup.ac.ru
Application of Information Technologies in the Management of Land Resources.
ID=203
The role of information as a resource grows in the management process. It is practically impossible to promptly obtain trustworthy information without services specially organized for its gathering, processing and updating. While paying due attention to information supply systems, one should not forget the problem of applying geoinformation technologies. Information built on modern computer facilities and geoinformation technologies leads to a qualitative change in the character of management. The leading role among cadastres is played by the land cadastre, since it incorporates the base information for all other cadastres. Maintaining the state land cadastre is one of the main functions in the management of land resources. The necessary basis for creating a land cadastre at a qualitatively new level is the introduction of geoinformation systems (GIS) and of the automated state land cadastre systems built on them. The introduction of geoinformation systems, and of the technologies built on them, also gives the necessary basis for creating complex territorial cadastres. Thus, among other things, GIS is a natural stage on the way to paperless processing of information, opening ample new opportunities for manipulating spatially referenced data.

Very Large-scale Computing and GRID poster session
Anders VESTBO
Univ. of Bergen
vestbo@fi.uib.no
The ALICE High Level Trigger (for the ALICE
collaboration).
Co-authors: R. Bramm, H. Helstrup, J. Lien, V. Lindenstruth, C. Loizides, D.
Rohrich, B. Skaali, T. Steinbeck, R. Stock, K. Ullaland, A. Wiebalck
ID=231
The central detectors of the ALICE experiment at the LHC will produce a data size of up to 75 MByte/event at an event rate of up to 200 Hz, resulting in a data rate of up to 15 GByte/sec. This exceeds the foreseen mass storage bandwidth of 1.25 GByte/sec by one order of magnitude. Online processing of the data is necessary in order to select interesting (sub)events ("High Level Trigger"), or to compress the data efficiently by modeling techniques. Processing this data requires a massively parallel computing system (the High Level Trigger System). The system will consist of a farm of clustered SMP nodes based on off-the-shelf PCs, connected by a high-bandwidth, low-latency network. The system nodes will be interfaced to the front-end electronics via optical fibers connected to their internal PCI bus, using a custom PCI Receiver Card. These boards provide an FPGA co-processor for the data-intensive, repetitive tasks of the pattern recognition. Tests on prototypes are currently being done, using the foreseen software for both online data analysis and communication. The latest results will be shown.

Very Large-scale Computing and GRID poster session
Constantin LOIZIDES
Univ. of Frankfurt
loizides@ikf.uni-frankfurt.de
Online Pattern Recognition for the ALICE High Level
Trigger (for the ALICE collaboration).
Co-authors: R. Bramm, H. Helstrup, J. Lien, V. Lindenstruth, D. Rohrich, B.
Skaali, T. Steinbeck, R. Stock, K. Ullaland, A. Vestbo, A. Wiebalck
ID=234
The ALICE High Level Trigger (HLT) system has to perform online pattern recognition at an event rate of up to 200 Hz for Pb-Pb collisions and up to 1 kHz for p-p collisions. About 20,000 charged particles per Pb-Pb event have to be reconstructed within a time budget of 5 ms. Around 600 clustered SMP nodes provide the necessary computing power for the HLT system. In addition, roughly 300 of these nodes are equipped with an FPGA co-processor provided on a PCI card, which will be interfaced to the front-end electronics of the detector via optical fibers. Most of the local pattern recognition will be done using the FPGA co-processor while the data is being transferred to the memory of the corresponding nodes. Algorithms for conventional cluster finding and for local track finding based on a Circle Hough Transformation of the raw data are currently under development. The latest results concerning the VHDL implementation and first efficiencies on simulated data will be shown.
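A minimal sketch of a circle Hough transform (ours; for brevity the toy fixes the circle radius and scans only the centre coordinates, whereas a real tracker also scans curvature): hits vote in parameter space, and accumulator maxima give track candidates.

    import numpy as np

    R = 5.0
    rng = np.random.default_rng(1)
    phi = rng.uniform(0, 2 * np.pi, 40)
    hits = (np.c_[4 + R * np.cos(phi), 3 + R * np.sin(phi)]
            + rng.normal(0, 0.05, (40, 2)))      # noisy hits, centre (4, 3)

    grid = np.arange(-10, 10.5, 0.5)             # candidate centres (a, b)
    acc = np.zeros((len(grid), len(grid)), dtype=int)
    for x, y in hits:
        for i, a in enumerate(grid):
            for j, b in enumerate(grid):
                if abs(np.hypot(x - a, y - b) - R) < 0.25:
                    acc[i, j] += 1               # this centre gets a vote

    i, j = np.unravel_index(acc.argmax(), acc.shape)
    print("reconstructed centre:", (grid[i], grid[j]))   # close to (4, 3)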

Very Large-scale Computing and GRID poster session
Eugene HUHLAEV
Keldysh Inst. of Applied Mathematics RAS, Moscow
huh@keldysh.ru
Metamake Tools for Personal Project Preparation in
Heterogeneous Network Environment.
ID=260
We consider the preparation of personal projects in a heterogeneous network environment with automatic job distribution between target hosts. The main problems posed by project portability are file location, automatic generation of makefiles, and source code portability. The proposed technology allows locating files in a transparent network file system and writing portable Imakefiles using CPP macros that encapsulate the architecture-dependent features of compilation and linking. This technology is intended for the automatic preparation of simple personal computational projects and does not require complicated constructions. The technology is implemented by the metamake program. Like ordinary make, metamake keeps platform-dependent files (objects, executables, libraries) for repeated use. To use metamake, one must define the basic project directories (task directory, target root, object directory, etc.) and prepare two simple files: a task file that contains common project settings (selected compilers, common system libraries, etc.) and an architecture-independent Imakefile which includes the project description along with compiler and linker directives. In most cases, as long as mere generation of the user's executables and libraries is required, the Imakefile will only include the source file list and several simple CPP macros.

Very Large-scale Computing and GRID poster session
Elena SLABOSPITSKAYA
IHEP, Protvino
lspitsky@sirius.ihep.su
Performance Characteristics of an IDE-Disk-Based File Server in the Environment of a Linux PC Farm.
Co-authors: A. Minaenko, Yu. Lazin, V. Motyakov, A. Kardanev, M. Sapunov,
E. Galkin, V. Kotlyar, A. Sergeev, V. Petukhov, V. Kukhtenkov, E. Berdnikov
ID=264
The Linux PC farm used for the tests has been installed at IHEP (Protvino) in the framework of the distributed environment for future LHC computing. An important component of the farm is a 1.3 TB file server. The main requirements for such a server are robustness and large storage capacity at a relatively low cost. The server has two 933 MHz Pentium III CPUs and 16 IDE disks of 80 GB each; the disks are driven by two 3Ware 6800 IDE RAID controllers. The results of the investigation of the performance characteristics of the server as a part of the farm are presented in the report.

Very Large-scale Computing and GRID poster session
Natalia RATNIKOVA
FNAL, Batavia
natasha@fnal.gov
Distributing Applications in Distributed Computing
Environment.
Co-authors: A. Sciaba, S. Wynhoff
ID=272
Software distribution is the process of delivering software products to the users; it is an essential part of the software process. The complexity of this task increases in highly geographically dispersed collaborations, such as modern HEP experiments. This presentation will focus on the general requirements for a software distribution system, the main problems, and various solutions. New requirements specific to successful software operation in the GRID environment will be discussed. We also describe the organization of the CMS software distribution and present the automated tools developed and used for software distribution within the Collaboration.

Very Large-scale Computing and GRID poster session
Olga KODOLOVA
SINP MSU, Moscow
Olga.Kodolova@cern.ch
Experience with OO Database for CMS Events Distributed
Between Two Sites.
Co-authors: N. Kruglov, V. Kolosov
ID=294
During 2001/2002 the Russian Regional Center has participated in CMS event production using the Objectivity database. A common federation was created at two geographically remote sites in Moscow, SINP MSU and ITEP, connected by Gigabit Ethernet. Semi-automatic scripts for database management have been created. The database is used by physicists from many Russian institutes and from CERN. We also discuss the experience of using tools such as GDMP in this environment.

Very Large-scale Computing and GRID poster session
Vladimir KALYAEV
SINP MSU, Moscow
kalyaev@theory.sinp.msu.ru
Distributed Computing Environment for Data Intensive
Tasks by Use of Metadispatcher.
Co-authors: E. Huhlaev, N. Kruglov
ID=295
Data processing in a GRID environment requires access to data stored at different sites, and for users this process should be transparent. This talk describes our experience of using Metadispatcher as the core of a Data Intensive Grid. The system is an effective tool for distributing tasks between computing sites. We have developed and tested flexible file transfer services for data migration. Additional information services have also been developed; these services provide a user-friendly interface for task processing. Some usage examples are shown.

Very Large-scale Computing and GRID poster session
Lev SHAMARDIN
SINP MSU, Moscow
shamardin@lav.sinp.msu.ru
Secure Automated Request Processing Software for DataGrid Certification Authorities.
Co-authors: P. Martucci, N. Kruglov
ID=308
The security model of Grids uses asymmetric cryptography for authentication and data protection. Thus, to build a Grid infrastructure one needs to set up a Public Key Infrastructure (PKI). Typically a PKI includes a Certification Authority (CA) and several Registration Authorities (RAs). In this report we present our solution for building a CA. Our goal was to make it secure, robust and as automated as possible. In our solution the message exchange between the CA and the RAs uses signed email, which makes the system easier for the end users and the RA managers. Supported features include the issuing and revocation of certificates, information services and certificate renewal. Incoming messages are processed automatically. All operations requiring the private key of the CA are performed on a separate offline signing host and are fully controlled by an operator, making the CA attack-proof.

Very Large-scale Computing and GRID poster session
Viktor POSE
JINR, Dubna
vpose@pccmsy.jinr.ru
Correlation Engine Prototype.
Co-authors: B. Panzer-Steindel
ID=325
The CERN monitoring prototype, part of the fabric management work package (WP4, task Monitoring) of the DataGrid project, gathers monitoring data from farm nodes at CERN into a central monitoring database. Performing correlations on the data in the monitoring database should help to 1) foresee exceptions on individual nodes and on node groups, and 2) analyse the performance of the farm. The Correlation Engine Prototype was developed to make it easy to add new correlations of monitoring data, and new actions to be triggered in case of exceptions. The current prototype is written in Perl, and the results of the correlation engine can be accessed through a web interface.

Arti cial Intelligence oral session
Adil TIMOFEEV
Inst. of Informatics and Automata RAS, S-Petersburg
adil@iias.spb.su
On-line Local Monitoring and Adaptive Navigation of
Mobile Robots on Environment with Unknown Obstacles.
Co-authors: H. He
ID=129
Problems of navigation and motion control of mobile robots in environments with unknown obstacles are very important. An autonomous mobile robot is a very complex physical-technical object, consisting of a motion system (wheel chassis, engines, etc.), a sensor system (ultrasound or laser radars, TV sensors, etc.), a telecommunication system and an adaptive system of navigation and motion control with elements of artificial intelligence. To solve the navigational control task for autonomous mobile robots in unknown environments, it is necessary to use on-line local monitoring of the environment with the help of the robot's sensors, together with advanced adaptive computing for the simulation of obstacles, the planning of safe paths and the motion control of the robot. In this paper, on-line navigation control methods for mobile robots in unknown environments are discussed. A new reactive navigation control method, based on reinforcement learning, is presented. The proposed method is tested both in simulation and on a real mobile robot: a LABMATE platform equipped with sonar sensors and encoders. Experimental results show that the method is effective for mobile robot navigation with unknown obstacles. Advanced methods of navigational control of mobile robots are based on the on-line synthesis of databases and models of virtual reality. The paper discusses the peculiarities of virtual-reality models for mobile robots. Methods for scanning and simulating the physical environment, and locally optimal algorithms for planning obstacle-avoiding robot paths, are suggested. The problems of multi-agent navigation and motion control of a group (collective) of mobile robots are also considered.
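A minimal sketch of the reinforcement-learning idea (ours; the grid world, rewards and parameters are invented for illustration, and this is not the paper's method): tabular Q-learning steers an agent around an obstacle to a goal.

    import random

    N = 5
    OBSTACLE, GOAL = (2, 2), (4, 4)
    ACTIONS = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    Q = {}

    def step(s, a):
        nxt = (min(max(s[0] + a[0], 0), N - 1), min(max(s[1] + a[1], 0), N - 1))
        if nxt == OBSTACLE:
            return s, -10.0                       # bumping the obstacle hurts
        return nxt, (100.0 if nxt == GOAL else -1.0)

    def best(s):
        return max(ACTIONS, key=lambda a: Q.get((s, a), 0.0))

    random.seed(4)
    alpha, gamma, eps = 0.5, 0.9, 0.2
    for episode in range(500):
        s = (0, 0)
        while s != GOAL:
            a = random.choice(ACTIONS) if random.random() < eps else best(s)
            nxt, r = step(s, a)
            target = r + gamma * max(Q.get((nxt, b), 0.0) for b in ACTIONS)
            Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * target
            s = nxt

    s, path = (0, 0), [(0, 0)]                    # follow the greedy policy
    while s != GOAL and len(path) < 20:
        s, _ = step(s, best(s))
        path.append(s)
    print(path)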

Arti cial Intelligence oral session
Uwe MUELLER
Bergische Univ. IGH, Wuppertal
mueller@whep.uni-wuppertal.de
Selection of W-Pair-Production in DELPHI with
Feed-Forward Neural Networks.
Co-authors: K.-H. Becks, H. Wahlen
ID=155
Since 1998 feed-forward neural networks have been applied to the separation of
hadronic W decays from background processes measured by the DELPHI collaboration
at different center-of-mass energies of the Large Electron Positron collider
(LEP) at CERN. The final publication will contain analyses at all the different
center-of-mass energies measured at LEP, so the neural network had to be adapted
to give the best possible result at each energy. Detailed studies were performed
concerning the level of preselection, the choice of network parameters, and
especially the network architecture. The number of hidden nodes was optimized by
testing different pruning methods. The methods and results will be discussed.
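
The idea of optimizing the number of hidden nodes by pruning can be illustrated
by simple magnitude-based pruning of a weight matrix. This numpy sketch is a
generic example, not one of the specific pruning methods studied in the DELPHI
analysis.

# Illustrative magnitude pruning of a feed-forward net's hidden weights.
import numpy as np

rng = np.random.default_rng(0)
W_hidden = rng.normal(size=(16, 10))     # 16 inputs -> 10 hidden nodes

def prune_smallest(W, fraction=0.3):
    """Zero out the given fraction of weights with smallest magnitude."""
    k = int(W.size * fraction)
    thresh = np.partition(np.abs(W).ravel(), k)[k]
    return np.where(np.abs(W) < thresh, 0.0, W)

W_pruned = prune_smallest(W_hidden)
print("removed", np.sum(W_pruned == 0), "of", W_pruned.size, "weights")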

Artificial Intelligence oral session
Gennadii OSOSKOV
JINR, Dubna
ososkov@jinr.ru
Effective Training Algorithms for RBF Neural Networks.
Co-authors: A. Stadnik
ID=205
Problems of pattern recognition and related classification problems are of great
interest in general, and in high energy physics (HEP) in particular. Moreover,
significant progress in artificial neural network (ANN) applications has been
achieved in many respects due to HEP needs. Nevertheless, the drastic advance of
physical experiments in the last decade shows that the ANN field is still to be
studied in order to find new, more effective neural net configurations and training
disciplines that improve training speed and application efficiency. In the given
paper a new structure and training algorithm for feed-forward ANNs of the RBF
type is proposed. A comparative study of its efficiency for some classes of pattern
recognition problems is carried out. The most widely used traditional approach to
solving classification problems consists in applying a multilayer perceptron (MLP)
trained by the backpropagation method (BPM). This method, while quite effective
in many applications, suffers in some important practical cases from an unreasonably
long training time, an unjustifiably complicated MLP configuration, or both. Such
MLP-BPM imperfections stimulated the authors to look for an alternative approach.
Our idea is to use a modified RBF net with Euclidean or Mahalanobis metrics and a
special training technique. We made two modifications of the traditional RBF
configuration. The first is to add an extra neuron layer after the input layer to
accomplish the principal component method. It takes into account the correlations
of the input data in order to reduce the ANN dimensionality by two orders of
magnitude. The second modification is chosen depending on the problem to be solved.
For face recognition, for instance, the output layer is formed as a Kohonen layer
with a "winner-takes-all" discipline. The novel training algorithm guarantees the
finiteness of the training procedure by a special dynamic reassignment of the number
of neurons in the hidden layer. The algorithm for training neurons of the output
layer is designed to train each output neuron individually. Special attention is
devoted to handling famous benchmark classification problems with very entangled
classes forming double spirals or concentric circles. Results obtained for problems
of such different nature as automatic reading of handwritten letters, frontal
recognition of human faces, and physical object classification look very promising.
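
The described pipeline (a principal-component layer to reduce dimensionality, an
RBF hidden layer with Euclidean metric, and a winner-takes-all output) can be
sketched as follows. The centres, widths and output weights are placeholders
rather than the trained quantities of the paper.

# Minimal sketch: PCA layer -> Gaussian RBF layer -> winner-takes-all.
import numpy as np

def pca_project(X, n_components):
    Xc = X - X.mean(axis=0)
    # principal axes from the SVD of the centred data
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

def rbf_layer(Z, centres, width):
    d2 = ((Z[:, None, :] - centres[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * width ** 2))    # Gaussian radial basis

def winner_takes_all(phi, W_out):
    scores = phi @ W_out                      # one column per class
    return scores.argmax(axis=1)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 50))                # synthetic 50-dim inputs
Z = pca_project(X, 5)                         # reduced representation
centres = Z[rng.choice(len(Z), 8, replace=False)]
labels = winner_takes_all(rbf_layer(Z, centres, 1.0),
                          rng.normal(size=(8, 3)))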

Artificial Intelligence oral session
Frantisek HAKL
Inst. of Computer Science, Prague
hakl@cs.cas.cz
Mbb Distribution of Subsets of Higgs Boson Decay Events
Defined via Neural Networks.
Co-authors: M. Hlavachek, R. Kalous
ID=215
This paper describes an application of a neural network approach to the SM
(Standard Model) and MSSM (Minimal Supersymmetric Standard Model) Higgs search in
the associated production t tbar H with H -> b bbar. This decay channel is
considered a discovery channel for Higgs scenarios with Higgs boson masses in the
range 80 - 130 GeV. A neural network model with a special type of data flow is used
to separate the t tbar jj background from H -> b bbar events. The neural network
used combines a classical neural network approach with a linear decision tree
separation process. Parameters of these neural networks are randomly generated, and
a population of predefined size of those networks is trained to obtain the initial
generation for the following genetic algorithm optimization process. Genetic
algorithm principles are used to tune the parameters of further neural network
individuals derived from previous neural networks by the GA operations of crossover
and mutation. The goal of this GA process is optimization of the final neural
network performance. Our results show that the NN approach is applicable to the
problem of Higgs boson detection. Neural network filters can be used to emphasize
the difference between the Mbb distribution for events accepted by the filter (with
a better signal/background ratio) and the Mbb distribution for the original events
(with the original signal/background ratio), under the condition that there is no
loss of significance. This improvement of the shape of the Mbb distribution can be
used as a criterion for the existence of Higgs boson decay in the considered
discovery channel.
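
The GA stage can be illustrated schematically: a population of network weight
vectors evolved by one-point crossover and Gaussian mutation. The fitness
function below is a stand-in; in the paper it would be the separation
performance of the corresponding network.

# Schematic GA loop over neural-network weight vectors.
import numpy as np

rng = np.random.default_rng(2)
N_WEIGHTS, POP, GENS = 40, 20, 50

def fitness(w):                       # placeholder: real code would score
    return -np.sum((w - 0.5) ** 2)    # signal/background separation

pop = [rng.normal(size=N_WEIGHTS) for _ in range(POP)]
for g in range(GENS):
    pop.sort(key=fitness, reverse=True)
    parents = pop[: POP // 2]                       # selection
    children = []
    for _ in range(POP - len(parents)):
        a, b = rng.choice(len(parents), 2, replace=False)
        cut = rng.integers(1, N_WEIGHTS)            # one-point crossover
        child = np.concatenate([parents[a][:cut], parents[b][cut:]])
        child += rng.normal(scale=0.05, size=N_WEIGHTS)  # mutation
        children.append(child)
    pop = parents + children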

Artificial Intelligence oral session
Andrey NIKITIN
Moscow St. Univ.
andrey@cs.msu.su
The Evolutionary Model of Physics Large-Scale Simulation
on Parallel Dataflow Architecture.
Co-authors: L. Nikitina
ID=221
The problem of effective mapping of computational algorithms to a parallel
architecture is very important in large-scale simulation. The developed model
allows one to explore and utilize fine-grain parallelism as well as coarse-grain
parallelism. The model was tested on a nonlinear 3D magnetohydrodynamic (MHD) code.

Artificial Intelligence oral session
Leonid LITINSKII
Inst. of Optical and Neuronal Technologies RAS, Moscow
litin@hppi.troitsk.ru
Optical Neural Network Based on the Parametrical
Four-Wave Mixing Process.
Co-authors: B. Kryzhanovsky, A. Fonarev
ID=233
We develop a formalism allowing us to describe the operation of a network based
on the parametrical four-wave mixing process that is well known in nonlinear
optics. The recognition power of a network using parametric neurons operating with
Q different frequencies is considered. It is shown that the storage capacity of
such a network is higher than that of the Potts-glass neural network.

Artificial Intelligence oral session
Anthony VAICIULIS
Univ. of Rochester, Batavia
vaiciuli@fnal.gov
Support Vector Machines in Analysis of Top Quark
Production.
ID=238
The Support Vector Machine learning algorithm is a new alternative to
multivariate methods such as neural networks. Potential applications of SVMs in
high energy physics include the common classification problem of signal/background
discrimination as well as particle identification. Possible uses in Run II physics
at the CDF experiment include separation of top quark events from background
processes and identification of tau particles.
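
A minimal signal/background example of the kind mentioned can be sketched with
scikit-learn's SVC on synthetic features. The library choice is an assumption
made for the example; the talk discusses SVMs in general.

# Sketch of signal/background discrimination with an SVM (synthetic data).
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(3)
signal = rng.normal(loc=1.0, size=(200, 4))       # e.g. kinematic variables
background = rng.normal(loc=-1.0, size=(200, 4))
X = np.vstack([signal, background])
y = np.array([1] * 200 + [0] * 200)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X, y)
print("training accuracy:", clf.score(X, y))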

Artificial Intelligence oral session
Victor TSAREGORODTSEV
Inst. for Computational Modelling SB RAS, Krasnoyarsk
tsar@ksc.krasn.ru
Training Set Preprocessing: Estimated Value of Lipschitz
Constant over Training Set and Related Properties of
Trainable Neural Networks.
ID=252
The problem of optimal data coding and preprocessing for neural network training
is considered. The article shows that there exists a close relation between the
value of the Lipschitz constant over the set of training patterns and the final
properties of neural networks trained by backpropagation algorithms. For computer
simulation the well-known Statlog NASA Shuttle database was taken as a complicated
enough data set. This choice was made in order to hide the effects of random
initialization of the neural networks, etc., and to highlight the real influence of
the properties of the data distribution and of the distances between training
samples. The data set used consists of 43500 samples in the training part with 9
continuous independent variables, and poses a classification task with 7 classes
with a very nonuniform distribution of patterns over the classes. The experiments
show that, among all the linear and nonlinear schemes of normalization of the
independent variables that were used, the best results are given by the scheme that
leads to the minimal estimated value of the Lipschitz constant over the data set.
Comparison of the schemes (and the related Lipschitz values) was made using the
following averaged properties of the trained networks: 1) the number of training
epochs needed to achieve the desired accuracy - the lower the Lipschitz value, the
smaller the number of training steps required; 2) the number of insignificant
synapses that can be removed from a network without retraining it, which grows as
the Lipschitz value is reduced; 3) the standard deviation of the distribution of
synaptic values, which decreases with the reduction of the Lipschitz value and gives
an effect similar to training with regularization (training with a specific
regularization term that penalizes large synaptic values). The results show that
the data normalization scheme strongly affects the final network properties. So a
few recommendations can be formulated concerning not only the selection of an
optimal normalization scheme but also active pattern selection schemes. The
opposite direction - maximization of the Lipschitz value of a complicated function
that can be computed by a neural net - is also in agreement with existing papers.
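
The central quantity is easy to state in code: the empirical Lipschitz constant
max |y_i - y_j| / ||x_i - x_j|| over the training set, compared here for two
normalization schemes. The data below are synthetic stand-ins for the Shuttle
set.

# Empirical Lipschitz constant of the target over a training set.
import numpy as np

def lipschitz_estimate(X, y):
    """max |y_i - y_j| / ||x_i - x_j|| over all distinct pairs."""
    best = 0.0
    for i in range(len(X)):
        dx = np.linalg.norm(X - X[i], axis=1)
        dy = np.abs(y - y[i])
        mask = dx > 0                      # skip coincident points
        best = max(best, (dy[mask] / dx[mask]).max())
    return best

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 9)) * [1, 100, 5, 1, 1, 1, 50, 1, 1]
y = rng.integers(0, 7, size=300).astype(float)   # 7-class labels

minmax = (X - X.min(0)) / (X.max(0) - X.min(0))  # scheme 1
zscore = (X - X.mean(0)) / X.std(0)              # scheme 2
print(lipschitz_estimate(minmax, y), lipschitz_estimate(zscore, y))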

Artificial Intelligence oral session
Alberto PULVIRENTI
Univ. of Catania & INFN, Catania
alberto.pulvirenti@ct.infn.it
Neural Tracking in ALICE.
Co-authors: A. Badala, R. Barbera, G. Lo Re, A. Palmeri, G. Pappalardo,
F. Riggi
ID=253
Results of a "combined" neural and Kalman filter track finding in the ALICE
detector will be shown. ALICE is one of the four planned experiments at the CERN
Large Hadron Collider (LHC). Due to the unprecedented track density expected in
PbPb collisions at LHC energy (more than 80000 primary particles in the whole
phase space), track finding and reconstruction in ALICE is a daunting task. This
task is usually accomplished using the track information from both the Time
Projection Chamber (TPC) and the Inner Tracking System (ITS). In this contribution
we present an artificial neural network algorithm for high transverse momentum
tracking in the ITS stand-alone mode, i.e. when the information from the TPC is
not available. This might be the case if the ITS should be used (without the TPC)
together with other fast ALICE detectors for special purposes/studies.

Artificial Intelligence oral session
Lev DUDKO
SINP MSU, Moscow
dudko@fnal.gov
Optimized Neural Network Search of Higgs Boson
Production at the Tevatron.
Co-authors: E. Boos, D. Smirnov
ID=267
We have optimized the method of searching for a 115 GeV Higgs boson at the
Tevatron collider using neural networks. Our optimizations are based on a correct
MC model of the signal and background processes, on the method of singular
variables to constrain the kinematical set of input variables for the neural
networks, and on taking into account all of the spin effects of the final states
for the signal and background processes. Such a strategy leads to improved
efficiency of the Higgs search in comparison with the previous NN strategy and
conventional analysis.

Artificial Intelligence oral session
Alexei SOLOVIEV
JINR, Dubna
solovjev@spp.jinr.ru
Application of Wavelet Analysis for Data Treatment of
Small-Angle Neutron Scattering.
Co-authors: A. Islamov, A. Kuklin, E. Litvinenko, G. Ososkov
ID=269
Small-angle neutron scattering (SANS) is a very popular method for condensed
matter investigation. The wide range of its applications is corroborated by
numerous publications. The spectrometer YuMO is one of the powerful SANS
instruments. It runs on the fast pulsed reactor IBR-2 at the Frank Laboratory of
Neutron Physics of the JINR. The time-of-flight technique is used in YuMO to
register the corresponding spectra as functions of scattering angle and wavelength
with several ring-shaped detectors. Data registered by those detectors have quite
different statistical errors, which considerably hinders both the correct matching
and smoothing of these spectra and the choice of the form-factor model to be fitted
at the next stage of data processing. The software programs developed many years
ago to treat YuMO spectra have become obsolete and cannot meet all requirements,
especially for the new set-up, radically upgraded recently. The present work first
demonstrates an improvement of the resulting scattering spectra quality due to the
use of the spectrometer resolution during wavelet analysis of the SANS data. This
result leads to a better fit of the form-factor curve at the next step of the data
analysis. Besides, wavelet analysis permits one to extract and analyze the
background (noise) component and, moreover, to carry out the instrumental hardware
corrections. The possibilities of the wavelet approach are demonstrated on several
types of experimental data.
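
Generic wavelet smoothing of a noisy spectrum can be sketched with PyWavelets as
below. This is only the textbook thresholding step; it does not reproduce the
paper's use of the spectrometer resolution function.

# Wavelet denoising of a synthetic SANS-like spectrum (PyWavelets).
import numpy as np
import pywt

rng = np.random.default_rng(5)
q = np.linspace(0.01, 1.0, 512)
spectrum = 1.0 / (1.0 + (q * 20) ** 2) + rng.normal(scale=0.01, size=q.size)

coeffs = pywt.wavedec(spectrum, "db4", level=5)
sigma = np.median(np.abs(coeffs[-1])) / 0.6745        # noise estimate
thresh = sigma * np.sqrt(2 * np.log(spectrum.size))   # universal threshold
coeffs = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft")
                        for c in coeffs[1:]]
smoothed = pywt.waverec(coeffs, "db4")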

Artificial Intelligence oral session
Axel NAUMANN
Univ. of Nijmegen & NIKHEF, Muenster
axel@fnal.gov
Foundations, Status, and Prospects of Support Vector
Regression as a New Multivariate Tool for High Energy
Physics.
ID=281
Support Vector Regression is a very powerful multivariate tool from outside HEP,
and has been shown to outperform neural networks in many cases. We will point out
how SVR's solid mathematical background, learning theory, defines many of the
advantageous properties of SVRs. We will present applications where SVRs are
currently used, and some restrictions an analysis must meet so it can benefit from
SVMs. We will introduce libSVM by Chih-Chung Chang and Chih-Jen Lin as one of the
major current implementations, its limitations, and its planned improvements.
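
A minimal epsilon-SVR regression example, sketched here with scikit-learn's SVR
(which wraps libsvm internally); the data and parameters are illustrative, not
the talk's setup.

# Epsilon-SVR on a toy one-dimensional regression problem.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(6)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sin(X[:, 0]) + rng.normal(scale=0.1, size=200)

model = SVR(kernel="rbf", C=10.0, epsilon=0.1).fit(X, y)
print("support vectors used:", len(model.support_))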

Artificial Intelligence oral session
Yuri ORLOV
SINP MSU, Moscow
yvo@radio-msu.net
Neural Network Approach to Discovering Temporal
Correlations.
Co-authors: S. Dolenko, I. Persiantsev, Ju. Shugai
ID=292
Numerous real-world problems require discovering a causal relationship between
the behavior of a complex object and rather rare events initiated by such behavior.
The problem of forecasting geomagnetic storms by finding the phenomena on the Sun's
surface that initiated the storm is a typical example. A neural network approach
for solving this task is proposed here. The approach is based on the reasonable
assumption that an event is initiated by a phenomenon - an unknown combination of
input features existing within some time interval (the initiation duration). The
phenomenon initiating each event is sought within a time interval of a predefined
size (the search interval), considerably longer than the initiation duration. The
task is to discover the most probable phenomenon (within the search interval) that
initiated the event, and to determine the phenomenon type and the delay between the
event and its initiation. To accomplish this task, the analyzed search interval is
divided into overlapping segments, with length equal to the initiation duration. A
separate neural network is constructed for every segment, and it is trained to
forecast events according to the features within the corresponding segment. The
sequence of network predictions for the segments within a given search interval may
be treated as an estimation of event probability made by a committee of independent
experts. Once the networks are trained, they may be used to predict similar events
based on new temporal data for the input features, as the search interval is
shifted along the time axis. Event forecasting may be obtained by applying the set
of trained neural networks to search intervals along the time axis. The proposed
approach was successfully tested on a set of simple model problems. After further
development, the above algorithm may be applied to discovering temporal correlations
between phenomena on the Sun's surface and geomagnetic storms, substantially
simplifying the task of storm forecasting.
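
The structure of the approach (overlapping segments within the search interval,
one small network per segment, the committee of their outputs as an
event-probability profile) can be sketched as follows. The data generator,
network sizes and library choice are assumptions made for the example.

# Structural sketch: a committee of per-segment networks.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(7)
SEARCH, INIT, STRIDE = 64, 8, 4        # search interval, initiation, overlap
starts = range(0, SEARCH - INIT + 1, STRIDE)

def make_interval(event):
    """Toy series: positives carry a feature burst INIT samples long."""
    x = rng.normal(size=SEARCH)
    if event:
        t0 = rng.integers(0, SEARCH - INIT)
        x[t0:t0 + INIT] += 2.0
    return x

X = np.array([make_interval(i % 2 == 0) for i in range(400)])
y = np.array([i % 2 == 0 for i in range(400)], dtype=int)

committee = []
for s in starts:                       # one expert per segment
    seg = X[:, s:s + INIT]
    committee.append(MLPClassifier(hidden_layer_sizes=(8,),
                                   max_iter=300).fit(seg, y))

probe = make_interval(True)
profile = [net.predict_proba(probe[s:s + INIT].reshape(1, -1))[0, 1]
           for net, s in zip(committee, starts)]
print("peak event probability over segments:", max(profile))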

Artificial Intelligence oral session
Oleg AGAPKIN
SINP MSU, Moscow
guardian@srd.sinp.msu.ru
Neural Network Pre-Processing of Ultrasonic Scanning
Data.
Co-authors: S. Dolenko, Yu. Orlov, I. Persiantsev
ID=293
Ultrasonic scanning with coherent data treatment is a very promising method for
nondestructive inspection of welded pipeline joints (e.g. in nuclear plants, oil
pipelines, etc.). This method requires processing of high-dimensional raw data
obtained from ultrasonic probes. In practice, raw data have a substantial noise
level. Besides that, the acoustic contact between the scanning probe and the
inspected pipe is sometimes lost, which may cause gaps in the data, resulting in
performance degradation during further analysis. This paper describes a neural
network system providing noise removal and gap filling in ultrasonic data. The
suggested approach includes two steps. First, correlation analysis of the raw data
is performed using a "standard" pattern, and the points most correlated with it
constitute a "correlation image". This step removes random noise and transforms
the data to a form convenient for neural network analysis. Second, the correlation
image is analyzed using a Hopfield-style recurrent neural network, providing gap
filling and further reduction of noise. The task solved at this step resembles the
well-known problem of track finding in high-energy physics. The results obtained
on real data are presented and discussed.
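
The first step can be sketched in one dimension: correlate the raw trace with a
"standard" pattern and threshold the result into a "correlation image". The
pattern, noise level and threshold below are invented for the example.

# Correlation of a noisy trace with a standard pattern (numpy only).
import numpy as np

rng = np.random.default_rng(8)
pattern = np.hanning(15)                         # "standard" echo shape
trace = rng.normal(scale=0.3, size=500)
trace[230:245] += pattern                        # echo buried in noise

# cross-correlation of the trace with the standardized pattern
p = (pattern - pattern.mean()) / pattern.std()
corr = np.correlate(trace - trace.mean(), p, mode="same") / len(p)

corr_image = corr > 0.15                         # threshold -> "image"
print("candidate points:", np.flatnonzero(corr_image))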

Artificial Intelligence oral session
Serge DOLENKO
SINP MSU, Moscow
dolenko@srd.sinp.msu.ru
Use of Neural Network Based Auto-Associative Memory as
a Data Compressor for Pre-Processing Optical Emission
Spectra in Gas Thermometry with the Help of Neural
Network.
Co-authors: A. Filippov, A. Pal, I. Persiantsev, A. Serov
ID=300
Optical emission spectroscopy is widely used for monitoring low-temperature
plasmas in science and technology. However, determination of temperature from
optical emission spectra is an inverse problem that is often very difficult to
solve, especially when substantial noise is present. One of the means that can be
used to solve such a problem is a neural network trained on the results of modeling
spectra at different temperatures. However, the spectra are usually recorded in
several hundred channels, which is much more than the real dimensionality of the
data. Reducing the dimensionality of the input data prior to application of the
neural network can increase the accuracy and stability of temperature
determination. In this study, such pre-processing is performed with another neural
network working as an auto-associative memory with a narrow bottleneck in the
hidden layer. The compressed data from the bottleneck are used as the input for the
main network that determines the plasma temperature. The improvement in the
accuracy and stability of temperature determination in the presence of noise is
demonstrated on model spectra and on experimental spectra recorded in a
DC-discharge CVD reactor used for diamond film deposition.
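
A bottleneck auto-associator of the kind described can be sketched with linear
units and plain gradient descent. The synthetic "spectra" below live on a
4-dimensional latent space, so a 4-unit bottleneck suffices; all sizes and rates
are invented for the example.

# Auto-associative compressor: reproduce the input through a bottleneck.
import numpy as np

rng = np.random.default_rng(9)
N_CH, N_BOT = 200, 4                       # channels -> bottleneck size
T = rng.normal(size=(500, N_BOT))          # hidden low-dim parameters
X = T @ rng.normal(size=(N_BOT, N_CH))     # spectra living on 4 dims
X += rng.normal(scale=0.05, size=X.shape)  # measurement noise

W1 = rng.normal(scale=0.1, size=(N_CH, N_BOT))   # encoder
W2 = rng.normal(scale=0.1, size=(N_BOT, N_CH))   # decoder
lr = 1e-3
for step in range(2000):
    H = X @ W1                   # compressed representation (bottleneck)
    Xr = H @ W2                  # reconstruction
    E = Xr - X
    W2 -= lr * H.T @ E / len(X)          # gradient of 0.5*mean(E^2)
    W1 -= lr * X.T @ (E @ W2.T) / len(X)

codes = X @ W1                   # inputs for the main temperature network
print("reconstruction rms:", np.sqrt(((X @ W1 @ W2 - X) ** 2).mean()))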

Artificial Intelligence oral session
Leandar LITOV
JINR & Univ. of Sofia
litov@phys.uni-sofia.bg
Application of Neural Networks for Energy Reconstruction.
Co-authors: J. Damgov
ID=302
The possibility of using neural networks for reconstruction of the energy
deposited in the calorimetry system of the CMS detector is investigated. It is
shown that, using a feed-forward neural network, good linearity, a Gaussian energy
distribution, and good energy resolution can be achieved. A significant improvement
of the energy resolution and linearity is reached in comparison with other
weighting methods for energy reconstruction.

Artificial Intelligence oral session
Dmitry IUDIN
Radiophysical Research Inst., Nizhny Novgorod
iudin@nirfi.sci-nnov.ru
Thunderstorm Cloud Cellular Automaton Model.
Co-authors: A. Grigoriev, V. Trakhtengerts
ID=314
We consider the thunderstorm cloud (TC) activity on the basis of a cellular
automaton model on a three-dimensional lattice. Each site of the lattice is
associated with a time-dependent scalar that characterises the potential of the
point. In our model the potential differences between neighbouring sites grow due
to instability effects. We consider three random-growth models: the simplest one,
where random additions with a normal distribution are added to the electric
potentials at the lattice sites at each step of model time; the second, where along
with the random additions we add an external bias field (so the first case is just
a particular case of the second with zero bias); and the last and most complicated
case, where the potential relief looks like a generalised Brownian landscape. In
every case, each site, independently of its neighbours, undergoes Brownian motion
in the space of electric-potential values. The growth of the potential difference
is limited by some critical value. As soon as this critical value is reached for
any two neighbouring sites on the lattice, breakdown between the sites takes place
and the lattice bond between the sites becomes a conductor. We assume that such a
fine-scale spark discharge can initiate breakdowns of the neighbouring lattice
bonds ("infect" the neighbours) if the potential difference between the cells
exceeds some activation level, which is less than the critical one. Interaction of
neighbouring cells leads to the formation of dynamical chains of microdischarges,
which reveal percolation-like behaviour in a wide range of TC parameters. Even a
weak macroscopic electric field drastically modifies the structure and dynamical
features of the conducting percolation cluster. The important new effect in this
situation is a large-scale electric current, which flows through the conducting
cluster and redistributes the large-scale electric charge. It is clear from the
physical point of view that the large-scale electric field will determine an
electrical discharge in the TC if the potential difference over the cluster size is
comparable with the critical value. We show that the fractal dynamics of electrical
microdischarges in a thundercloud can serve as the basis for an explanation of the
main features of a lightning flash at its preliminary stage.
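
The simplest of the three growth models can be sketched compactly: Gaussian
increments of the site potentials, with breakdown (modelled here simply as
equalizing the pair) when a neighbour difference exceeds the critical value. The
"infection" mechanism and the percolation analysis of the paper are not
reproduced; all parameters are arbitrary.

# Cellular automaton sketch: Brownian potentials on a cubic lattice.
import numpy as np

rng = np.random.default_rng(10)
N, V_CRIT = 16, 4.0
phi = np.zeros((N, N, N))                  # potential at each site

def broken_bonds(phi):
    """Indices of neighbour pairs whose difference exceeds V_CRIT."""
    hits = []
    for axis in range(3):                  # pairs along x, y, z
        d = np.abs(np.diff(phi, axis=axis))
        hits.append(np.argwhere(d >= V_CRIT))
    return hits

for t in range(100):
    phi += rng.normal(size=phi.shape)      # independent Brownian motion
    for axis, sites in enumerate(broken_bonds(phi)):
        for idx in sites:                  # discharge: equalize the pair
            a = tuple(idx)
            b = tuple(idx + np.eye(3, dtype=int)[axis])
            mean = 0.5 * (phi[a] + phi[b])
            phi[a] = phi[b] = mean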

Artificial Intelligence oral session
Jean-Christophe PREVOTET
Univ. Pierre et Marie Curie, Paris
prevotet@lis.jussieu.fr
Moving NN Triggers to Level-1 at LHC Rates.
Co-authors: B. Denby, C. Kiesling, P. Garda, B. Granado
ID=327
The use of neural network hardware for level-2 triggering at hadron colliders
has been established for many years - first at CDF and, more extensively, at H1.
Although there has been some groundbreaking work on developing level-1 neural
hardware for LHC experiments, it is probably fair to say that the question is still
very much open. In addition, commercially available NN solutions with appropriate
timing constraints are nonexistent. The talk will present a new, FPGA-based method
for implementing multilayer perceptrons with hundreds of neurons in a 25 nanosecond
pipeline structure having only a 500 to 600 nanosecond latency. Possible
implementations in an LHC level-1 trigger scenario will be discussed.

Artificial Intelligence oral session
Jens ZIMMERMANN
Max-Planck Inst. for Physics, Munich
zimmerm@mppmu.mpg.de
Parameter Estimation and Class Separation with Neural
Nets for the XEUS project.
ID=328
The X-ray Evolving Universe Spectroscopy Mission (XEUS) is a potential follow-on
to ESA's Cornerstone X-Ray Spectroscopy Mission (XMM-Newton), which has been in
orbit since December 1999. The Wide Field Imager on board XEUS will be a pixel
detector from the semiconductor laboratory of the Max Planck Institutes for Physics
and for Extraterrestrial Physics. This pixel detector will have half the pixel size
of the CCD used for XMM, with over 6 times more pixels (1000x1000). Due to advances
in the mirror system, XEUS will be about 200 times more sensitive than XMM. To face
the increased data rate and improve event processing, neural networks are under
study, to be integrated into the electronics on board (online) and used for the
science analysis on the ground (offline). The following ideas with their training
results are presented: a) separation of single photon events from pileup (here the
unwanted event structures read from the pixel detector should be separated from
the useful event structures which belong to an X-ray photon); b) estimation of the
incident position of a photon (here the split effect - a photon distributing its
charge over more than one pixel - can be used to estimate the incident position of
a photon more precisely than just in pixel coordinates); c) estimation of the
charge deposited by a photon (here the split effect should be inverted: a total
charge should be calculated from the charges deposited in different neighbouring
pixels). The training data were generated by a simulation developed at the
semiconductor laboratory. For the position estimation, experimental data were also
available.

Artificial Intelligence poster session
Alexander PAVLOV
Vavilov St. Optical Inst., S-Petersburg
pavlov@soi.spb.ru
Implementation of Linguistic Models by Fourier-Holography
Technique.
Co-authors: Y. Shevchenko
ID=123
The concept of a linguistic variable was introduced by L. Zadeh to model the
human way of thinking. To implement the model in a technical device, the metric
scale of the device has to be matched with the linguistic scale intuitively used by
the operator. To solve the problem we develop an algebraic description of a 4-f
Fourier-holography setup using a triangular-norms-based approach. We demonstrate
that the setup is adequately described by fuzzy-valued logic. We consider an
implementation of the Generalized Modus Ponens rule to define the implication
operator that is adequate to the setup. We use a representation of linguistic
labels by fuzzy numbers to form the scale, and discuss the dependence of the scale
grading on the holographic recording medium operator. We present an experimental
illustration of measurements on the linguistic scale.

Artificial Intelligence poster session
Adil TIMOFEEV
Inst. of Informatics and Automata RAS, S-Petersburg
adil@iias.spb.su
Techniques of Functional Analysis of Faults and Methods of
Fault-Stable Motion Control for Electromechanical Systems.
ID=126
Functional analysis of faults of controlled electromechanical systems (mechanical
systems, mechatronic gears, robots, etc.) plays an important role in monitoring the
correctness of their functioning, in technical diagnosis of states, and in
localization of possible faults. In the general case this analysis is not reduced
to what is called functional diagnosing in real time, but also includes the formal
definition and a priori evaluation of correctness criteria for the functioning of
electromechanical systems, the classification of dynamical models of faults, their
allowed bounds (tolerances), etc. Because the correctness of the functioning of
electromechanical systems is determined by reaching the control aim in the presence
of various disturbances and faults, great importance is given to methods of
synthesis and effective algorithms of motion control, and to the analysis of their
fault stability. In the paper, absolute and relative indicators of correct (faulty)
functioning of controlled electromechanical systems with inverse (on a subspace)
dynamics are introduced for the first time, and two-sided estimates for the
different types of fault classes are given. Fault-stable algorithms of stabilizing
control of programmed motion in different classes of indeterminacy of possible
faults are synthesized. These algorithms are based on earlier proposed methods of
programmed, stabilizing, modal, robust and adaptive control of reversible dynamic
systems and physical-technical diagnosing. The use of the suggested methods
provides high-quality control and correctness of functioning of electromechanical
systems in a broad class of fault indeterminacy. The problems and methods of
multi-agent diagnosing of mechatronic agents with intelligent control and
communication channels between the agents for distributed information processing
are described.

Artificial Intelligence poster session
Aleksei VORONOV
Inst. for Automation and Control Processes FEB RAS, Vladivostok
voronov@iacp.dvo.ru
Multidisciplinary Approach to Neurocomputers.
Co-authors: G. Voronova, V. Karpets, A. Krisko
ID=138
Experimental results on forming self-organizing nanostructures for the element
basis of neural networks are presented. Examples of the application of neural
networks to classify nanostructures and to solve psychodiagnostical problems are
given. Atom-like and silicide-like nanostructures were revealed. The real structure
of social groups and their dominant characteristics were determined. The
person-evaluation functions were calculated. We believe that the checking,
improvement and development of neuropsychological hypotheses will contribute to
progress in neurocomputers.

Artificial Intelligence poster session
Zaur SHIBZOUKHOV
Inst. of Applied Mathematics and Automation KBSC RAS, Nalchik
szport@fromru.com
Constructive Methods for Supervised Learning with
Complexity Minimization of Artificial Neural Networks of
Algebraic Sigma-Pi Neurons.
ID=152
A new class of artificial feedforward neural networks (ANNs) of sigma-pi neurons
is considered. Each sigma-pi neuron (SPN) implements the composition of a
polylinear function over an algebraic ring without zero divisors and a nonlinear
scalar function. An advanced method for constructive supervised learning with
complexity minimization of such sigma-pi neurons is proposed. This method is used
in constructive supervised learning of multi-layered ANNs of SPNs. The complexity
of each SPN is bounded. A new constructive method is proposed for constructive
learning of such ANNs. It is similar to the learning methods used in the
constructive algorithms Tower and Pyramid. Algorithms based on this method build
the network structure incrementally. New layers are sequentially added during the
learning process so that the number of correct answers on the training sequence
grows as well. After adding a new layer and training, the number of correct answers
increases by a value not less than the bound on the complexity of the SPN minus a
small constant. This makes the learning process more effective and faster than the
learning process for ANNs of classical formal neurons with the Tower and Pyramid
algorithms.
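
A sigma-pi neuron itself is easy to sketch: a weighted sum of products over
input subsets passed through a scalar nonlinearity. The constructive
Tower/Pyramid-style training is not reproduced here; the monomials and weights
below are arbitrary examples.

# Minimal sigma-pi neuron: sum of weighted input products, then tanh.
import numpy as np

def sigma_pi(x, terms, weights, f=np.tanh):
    """terms: list of index tuples; each contributes w * prod(x[idx])."""
    s = sum(w * np.prod(x[list(idx)]) for idx, w in zip(terms, weights))
    return f(s)

x = np.array([0.5, -1.0, 2.0, 0.25])
terms = [(0,), (1, 2), (0, 2, 3)]          # monomials x0, x1*x2, x0*x2*x3
weights = [1.0, -0.5, 2.0]
print(sigma_pi(x, terms, weights))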

Artificial Intelligence poster session
Vasilij LAVROV
Pavlov Inst. of Physiology RAS, S-Petersburg
lavr@infran.ru
Genetic Limits of Intelligence.
Co-authors: V. Valtzev, A. Rudinsky
ID=156
What is the reason for the difference in the intellectual capacities of various
people, if the neurons of the brain do not essentially differ? I.P. Pavlov proved
that "the unconditional reflexes" (that is, genetic information) ensure training.
It means that during training the brain reads out and copies elements of genetic
memory, and then creates a new matrix of memory. Modern genetics confirms that
training activates the expression of genes. Thus, the limits of the development of
intelligence are determined not only by the quantity and quality of the genetic
information, but also by the ability to reproduce the genetic information. The
intellectual system simulating the work of a brain represents a complex of
coprocessors, each of which has a limited set of elementary programs. During
training the programs do not change, but are combined in different combinations.
Physiological and mathematical arguments in favour of the existence of
inhomogeneous neural webs providing training are presented.

Artificial Intelligence poster session
Svetlana IVANOVA
St. Inst. of Electronics and Mathematics, Moscow
svetaivanova@pisem.net
The Identification of Dynamic Object Parameters.
Co-authors: Z. Ilyichenkova
ID=191
Adaptive algorithms for optimal identification of a restoration system are
described, which allow one to correct the coordinates of a linear dynamic system
from a restricted set of observable coordinates in the presence of noise. The
algorithms are based on the neural network method. The estimations obtained
converge to the optimal ones in the sense of minimum mean square deviation from
the exact value. The problem of restoration and identification of continuous
linear dynamic systems can be reduced to the classical neural network problem.
Identification here means the determination of unknown parameters. This problem is
important for different control systems with parameters that may vary over a wide
range. Such systems occur in robotics, nuclear engineering, the chemical industry,
and elsewhere. In a dynamic object, u is an input signal and x is an output
signal. But the determination of the parameters of dynamic systems is either too
complex or impossible because measurement noise is present. Then an adequate model
of the object should be constructed.

Artificial Intelligence poster session
Zoya ILYICHENKOVA
St. Inst. of Electronics and Mathematics, Moscow
zv@stk.mmtel.ru
The Usage of Neural Networks for Robots Navigation.
Co-authors: S. Ivanova
ID=192
This article deals with the problem of using neural networks to choose the
optimal path for an autonomous wheeled transport robot, as part of control system
design for technical objects. Autonomous robots are devices for different actions;
their control systems are built into them, so autonomous robots are controlled
without a human. One of the main problems in designing autonomous objects is
designing a control system for them. Such control systems must be adaptable and
reliable, and must make decisions in real time. Neural networks (NNs) have all
these qualities, so many control systems for autonomous objects are now built
using an NN as the basic computer. Control systems for such robots were developed
using NNs. In the paper, the problem of building a control system using NNs is
discussed for one type of autonomous robot: autonomous transport wheeled robots,
which should move from one point of some surface to another point. A new method is
suggested, consisting of three parts: a technical sensor system, a navigation
system, and a control system for the robot's movement. In this work the navigation
system for the autonomous transport robot is discussed. One of the problems of
control system design is finding an optimal safe trajectory for the robot's
movement. The method makes it possible to remove restrictions on the movement and
to choose the optimal path taking into account the real dimensions of the robot.
It makes it possible to find a solution in real time.

Artificial Intelligence poster session
Sergey ROMANOV
Pavlov Inst. of Physiology RAS, S-Petersburg
spromanov@spr.usr.pu.ru
Can a Neuron Network Be an Intellectual System?
ID=196
The achievements of quantum physics are widely applied in the study of the
brain, revealing its pathological states or visualising the functioning of its
structures in various conditions of rest or intellectual activity. On the other
hand, homogeneous networks of artificial neurons are used for solving numerous
tasks of image recognition, classification, or search for solutions of formalised
tasks. The development of neuroinformatics and neurocomputers as "intellectual"
structures for processing large flows of information, together with increases in
computation speed, extensions of memory, and the perfection of hardware and
software, contributes to the creation of a virtual space, existing in computing
surroundings, in which it is possible to present and to investigate any phenomenon
of the environmental world. But in most cases such imitative modelling of physical
phenomena and biological structures does not reveal the true reasons and
mechanisms that shape their behaviour. Numerous groups of mathematicians
investigate processes of propagation of excitation in homogeneous networks with
the purpose of revealing the mechanisms of memory and training, operating with the
concepts of deterministic chaos and synchronisation. We have shown that the ring
structures of regulation of motoneuron activity at the segmental level represent
a peripheral homeostatic mechanism. We characterise the nervous system as a system
of automatic regulation, about whose "machine-like" behaviour I.P. Pavlov wrote as
early as when he applied the method of conditional reflexes to the research of
higher nervous activity. Accepting that consciousness is inherent as a property
only of living beings, which are capable of forming their own representation of
the environment, the concepts arising in us, reflected in symbolic forms, have
become the same signals for the nervous system as the signals of various physical
nature that influence our sense organs and are processed under the same laws of
work of the nervous system. The signals acting from the external environment
create a dynamic distribution of nervous activity that can be perceived somewhere
on the border between the nervous tissue and the internal environment in which the
neurons of the central structures of the brain are immersed and where the
subjective reflection is shaped. Then our consciousness chooses, similarly to
Maxwell's demon, the purposes of our behaviour according to the internal
conditions of the organism and the developed representations of the environment.
The reproduction of such a structure of the nervous system will in all cases
result in the creation of robots solving a wide circle of tasks of motor control
and information analysis according to the purposes fixed by the designer, but not
capable of changing them because of the absence of internal self-feeling.

Artificial Intelligence poster session
Andrey NIKITIN
Moscow St. Univ.
andrey@cs.msu.su
The GA Based Approach to Optimization of Parallel
Calculations in Large Physics Problems.
Co-authors: L. Nikitina
ID=220
The parallelization of computational algorithms in large physics problems on a
multiprocessor computer system containing thousands of processor elements requires
special tools. In the present work an approach based on the use of a Genetic
Algorithm (GA) is proposed. Experimental results and the influence of the GA
parameters on the convergence of the method are given. The approach may be used
for real-time parallelization.

Artificial Intelligence poster session
Leonid LITINSKII
Inst. of Optical and Neuronal Technologies RAS, Moscow
litin@hppi.troitsk.ru
Rigorous Results for the Hopfield-Type Neural Network.
ID=232
For the Hopfield-type neural network we investigate the case of P memorized
patterns that are distorted copies of the same N-dimensional standard. In other
words, we try to simulate the situation where learning always takes place by means
of repeated presentations of one and the same standard, and the presentations are
accompanied by distortions of the standard. We obtain some rigorous results on the
dependence of the set of fixed points on such external parameters as P/N, the
values of the distortions, and the dynamic threshold H.
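
The setting can be sketched numerically: store P distorted copies of one
standard with a Hebbian rule and iterate the thresholded dynamics to a fixed
point. The parameters below are small and arbitrary; the paper's results are
analytic, not simulations.

# Hopfield net storing distorted copies of one standard pattern.
import numpy as np

rng = np.random.default_rng(11)
N, P, FLIP, H = 64, 5, 0.1, 0.0
standard = rng.choice([-1, 1], size=N)
patterns = np.array([standard * np.where(rng.random(N) < FLIP, -1, 1)
                     for _ in range(P)])       # distorted copies

J = (patterns.T @ patterns) / N                # Hebbian couplings
np.fill_diagonal(J, 0.0)

s = standard * np.where(rng.random(N) < 0.3, -1, 1)   # noisy start
for sweep in range(20):
    s_new = np.where(J @ s - H >= 0, 1, -1)    # synchronous update
    if np.array_equal(s_new, s):
        break                                  # reached a fixed point
    s = s_new
print("overlap with standard:", (s @ standard) / N)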

Artificial Intelligence poster session
Vasily SEVERYANOV
JINR, Dubna
severyan@jinr.ru
An Evolving Algebra Approach to Formal Description of
Automata Network Dynamical Systems.
ID=248
Automata Network Dynamical Systems are based on Iterated Function Systems and
were originally designed for constructing fractal objects. However, they are of
interest in their own right and can be studied in the context of computer
simulation theory. Automata Networks can be considered a generalization of
Cellular Automata: the main difference lies in the fact that automata networks
have a non-regular structure of the cell neighborhood system. Evolving Algebras
have been proposed by Yuri Gurevich as models for arbitrary computational
processes. They are finite dynamic algebras representing state transitions and
describing the operational semantics of discrete dynamical systems. They may be
tailored to any desired level of abstraction. System states are represented here
as static algebras. In the talk, the evolving algebra approach to the formal
description of automata network dynamical systems is presented.

Artificial Intelligence poster session
David SHAPIRO
Inst. for Computer Technology and Informatization Problems, Moscow
vniipvti@pvti.ru
Virtual Reality Technology: Problems of Neurocomputing.
ID=317
The important methodological problems in virtual reality technology are the
formation of the user's specifics, the specifics of the intellectual virtual
communicant, and multisensory discourse. The specifics of the intellectual virtual
communicant determine the demands on the presentation of the brain's procedures in
a concrete task. The formation of these procedures is based on the high brain
functions, which are realized by means of the structural-functional specifics of
the brain. Virtual reality technology (VRT) is characterized by three types of
problems: instrumental, programme-algorithmic and neuropsycholinguistic. The
central problem is the brain-like procedure of the discovery and interpretation of
the meaning (DIM), which is presented by means of multisignificant images. The
meaning of a concrete metatext (Sc) is a mental essence which allows one to
attribute information ("elementary knowledge") to a concrete feature of the
object. The concrete meaning (Sc) is presented by means of the concrete form and
structure of the metatext [MET] and the type of the bearers. In VRT this problem
is characterised by the discovery of the meaning of the multisensory (audio and
video images and text) input pattern (question) and the preparation of an adequate
answer; this is the exchange of imagery knowledge. Such "brain-like" algorithms
are based on the specifics of the mechanisms of cerebral asymmetry, which are
realized by means of the dual (parallel, different-function) brain procedures. The
cerebral asymmetry procedures for cognition processes ("System Darwin") were
investigated by Edelman (1981). The cognitive-creative possibilities of the brain
are based on the following structural-functional features: dualism,
multiconnections, aggregations, multilayerness, growth of the dendrite trees, and
self-organization. The standard homogeneous neural networks are not a precise
reflection of all the complex neurodynamic and neuroinformation processes. A new
approach is based on reflecting these brain-like structural-functional features in
the network principles. The use of these principles allows one to make a step
towards a working cognitive model of the functioning of the brain. Such principles
determine the operation of heterogeneous neural networks (HNNs).

Simulations and Computations in Theor.Physics and Phenomenology oral session
Andrey GROZIN
BINP SB RAS, Novosibirsk
A.G.Grozin@inp.nsk.su
Multiloop Calculations in HQET.
ID=117
Recently, algorithms for the calculation of 3-loop propagator diagrams in HQET
and on-shell QCD with a heavy quark have been constructed and implemented. These
algorithms (based on integration-by-parts recurrence relations) reduce an
arbitrary diagram to a combination of a finite number of basis integrals. Here I
discuss various ways to calculate non-trivial basis integrals, either exactly or
as expansions in epsilon. Some integrals of these two classes are related to each
other by inversion, which provides a useful cross-check.

Simulations and Computations in Theor.Physics and Phenomenology oral session
Michele CAFFO
INFN, Bologna
caffo@bo.infn.it
Numerical Evaluation of General Massive 2-loop Self-Mass
Master Integrals from Differential Equations.
Co-authors: H. Czyz, E. Remiddi
ID=125
The system of 4 differential equations in the external invariant satisfied by
the 4 master integrals of the general massive 2-loop sunrise self-mass diagram is
solved by the Runge-Kutta method in the complex plane. The method offers a
reliable and robust approach to the direct and precise numerical evaluation of
Feynman graph integrals.
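
The numerical idea, integrating the system along a path in the complex plane of
the external invariant with a Runge-Kutta stepper, can be sketched on a toy
equation. The model system below is NOT the sunrise system, just a stand-in
whose singular point at the origin is avoided by the contour.

# RK4 integration of a toy linear ODE system along a complex contour.
import numpy as np

def rk4(f, y, t, h):
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

A = np.array([[0.0, 1.0], [-0.25, -1.0]])
f = lambda t, y: (A @ y) / t               # singular at t = 0

# unit-circle arc from t = +1 to t = -1 through the upper half-plane
path = np.exp(1j * np.linspace(0, np.pi, 400))
y = np.array([1.0 + 0j, 0.0 + 0j])
for a, b in zip(path[:-1], path[1:]):
    y = rk4(f, y, a, b - a)
print("y(-1) =", y)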

Simulations and Computations in Theor.Physics and Phenomenology oral session
Pavel DYSHLOVENKO
St. Technical Univ., Ulyanovsk
pavel@ulstu.ru
Numerical Simulation of Colloidal Interaction.
ID=128
An adaptive numerical method for the Poisson-Boltzmann equation is proposed. The
method is then applied to some problems of the electrostatic interaction of
colloids. Special attention is given to the effects of non-linearity and
geometrical confinement, and the prospects of the research are briefly discussed.
The non-linear Poisson-Boltzmann equation describes, under some approximations,
the electric potential and charge distribution in colloidal systems. Information
on the free energy, forces and related quantities can be obtained from the
solution of the equation. The finite-element method combined with adaptive mesh
refinement for the non-linear Poisson-Boltzmann equation is considered. The mesh
is a Delaunay triangulation at each step of the solution. The error estimation is
based on an a posteriori error estimator of Zienkiewicz-Zhu type. The standard
Galerkin approach with quadratic approximation and six-noded triangular elements
is used for the numerical solution of the Poisson-Boltzmann equation. The system
of non-linear algebraic equations arising from the discretization process is
solved by means of a quasi-Newton method with analytical evaluation of the
Jacobian. A sparse matrix technique reduces the memory requirements. The proposed
numerical method is well suited for two-dimensional and three-dimensional
axisymmetric problems with sophisticated geometry and various boundary conditions.
Several particle-particle and particle-wall problems are studied numerically: two
free identical particles, two identical particles confined in a charged
cylindrical pore, and a particle near a charged plane. Two-dimensional colloidal
crystals are also investigated. Different electrical models of the colloids are
considered and discussed. Special attention is given to the effects of geometrical
confinement and non-linearity. In particular, pure repulsion under any
circumstances was observed numerically for two identical charged colloidal
particles in a cylindrical domain, which is in agreement with the rigorous
theoretical proof and contradicts an earlier numerical result. Further development
of the method for more complex two-dimensional and three-dimensional problems, and
the computer requirements, are briefly discussed.
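
A Newton iteration with an analytic Jacobian can be shown on the simplest
possible relative of the problem: a 1D finite-difference Poisson-Boltzmann
equation u'' = sinh(u) between two charged plates with fixed boundary values.
The paper's adaptive 2D/3D finite-element method is far richer than this sketch.

# Newton iteration for 1D Poisson-Boltzmann: u'' = sinh(u) on [0, L].
import numpy as np

M, L = 201, 10.0
h = L / (M - 1)
u = np.zeros(M)
u[0] = u[-1] = 2.0                         # reduced surface potentials

for it in range(30):
    # residual F(u) = (u_{i-1} - 2 u_i + u_{i+1}) / h^2 - sinh(u_i)
    F = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2 - np.sinh(u[1:-1])
    # analytic tridiagonal Jacobian with respect to the interior values
    n = M - 2
    J = (np.diag(-2 / h**2 - np.cosh(u[1:-1]))
         + np.diag(np.ones(n - 1) / h**2, 1)
         + np.diag(np.ones(n - 1) / h**2, -1))
    du = np.linalg.solve(J, -F)
    u[1:-1] += du
    if np.abs(du).max() < 1e-10:
        break                              # Newton converged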

Simulations and Computations in Theor.Physics and Phenomenology oral session
Anatoly KOTIKOV
JINR, Dubna
kotikov@thsun1.jinr.ru
Some Methods for the Evaluation of Complicated Feynman
Integrals.
ID=130
Progress in the calculation of Feynman integrals based on the Gegenbauer
Polynomial Technique and the Differential Equation Method is discussed. The
results for a class of two-point two-loop diagrams are demonstrated, and the
evaluation of the most complicated part of the O(1/N^3) contributions to the
critical exponents of phi^4 theory is carried out. An illustration of the results
obtained with the help of the above methods is considered.

Simulations and Computations in Theor.Physics and Phenomenology oral session
Sergue VINITSKY
JINR, Dubna
vinitsky@thsun1.jinr.ru
Programs for Direct and Inverse Normalization of a Class of
Polynomial Hamiltonians.
Co-authors: A. Gusev, V. Rostovtsev
ID=137
The programs GITAN and GITARVS for direct and inverse normalization of a class
of polynomial Hamiltonians have been elaborated by means of REDUCE 3.7. A new BDC
procedure is proposed for extracting a special class of polynomial Hamiltonians
satisfying the Bertrand-Darboux integrability condition using the ordinary
Birkhoff-Gustavson normalization. The integrals of motion that are quadratic
polynomials in the momenta are examined with the help of this procedure.

Simulations and Computations in Theor.Physics and Phenomenology oral session
Vladimir KORNYAK
JINR, Dubna
kornyak@jinr.ru
Computation of Cohomology of Lie (Super)Algebra:
Algorithms, Implementation and New Results.
ID=145
The dual constructions called homology and cohomology are the main tools for the
investigation of different mathematical objects and physical models by means of
algebraic topology. In particular, the cohomologies of Lie algebras and
superalgebras describe such important features of the algebras as external
differentiations, central extensions, local deformations and other "topological"
peculiarities of their construction. In our previous works we proposed an
algorithm and its C implementation for computing cohomologies of finite-dimensional
and infinite-dimensional graded Lie (super)algebras. With the help of this program
we have obtained some new results. Nevertheless, the problem of computation of
cohomology is still far from a satisfactory solution due to the very high
dimensions of the cochain spaces appearing in the computation. Recently we
proposed a new, much more efficient algorithm (with its C implementation) for this
problem. This algorithm is based on splitting the full cochain complex into
"minimal" (in a given system of coordinates for the Lie (super)algebra under
consideration) subcomplexes. Though the earlier algorithm is a subalgorithm of the
new algorithm, the latter can be applied to a wider class of infinite-dimensional
algebras. In this talk we discuss these algorithms, new computational results and
related topics. We also present new results on such a difficult problem as the
computation of the cohomology of the Lie algebra of Hamiltonian vector fields.

Simulations and Computations in Theor.Physics and Phenomenology oral session
Konstantin POPOV
Mathematics and Mechanics Institute UB RAS, Syktyvkar
pkon.chemi@ksc.komisc.ru
Realization of Density Functional Theory Calculations with
KLI-approximation of Optimized Effective Potential in Q96.
ID=146
Density Functional Theory (DFT), after its nearly 40-year history, has become
the most powerful tool for the computational solution of the many-particle
problem. Based on ab-initio calculations, among other approaches it allows one to
reach the highest precision in investigating multi-electron systems such as atoms,
molecules, clusters and solids. The progress in the development of DFT and its
applications is connected with successful attempts at finding realizable
procedures for the calculation of the exchange and correlation interaction between
electrons without any model or empirical conceptions. Ideally, the
exchange-correlation potential must be expressed through the one-electron wave
functions of the system and the Coulomb interaction. The greatest success in this
direction was achieved by the use of the Optimized Effective Potential (OEP) and
its so-called KLI approximation. It makes it possible to calculate the exact value
of the exchange potential and to get a more realistic way of calculating the
correlation energy. Leaving aside the commercial programs that are used in
investigations of Fermi systems, we deal only with open-source software, such as
Q96, Abinit, and DeFT. All those programs (packages), being complex and good
realizations of DFT, do not contain the latest achievements in the calculation of
the inter-electron exchange-correlation potential. The present work is devoted to
the implementation of the OEP in the KLI approximation for exchange-only
interaction of electrons in the framework of Q96-0.14. The necessary algorithm was
developed. It was realized with maximal use of the native procedures of Q96 and
was assimilated into the body of the existing program. The code was written in
Fortran-90 and works under Linux. Our additions (modules) are now in the testing
period of development and may soon appear in the next version of Q96.

Simulations and Computations in Theor.Physics and Phenomenology oral session
Victor EDNERAL
SINP MSU, Moscow
edneral@theory.sinp.msu.ru
Application of the Resonant Normal Form to High Order
Nonlinear ODEs Using MATHEMATICA.
Co-authors: R. Khanin
ID=161
The paper discusses the application of the normal form method to the
construction of analytic approximations for local periodic families of solutions
of high order systems. The authors have implemented the normal form method for
dynamical systems in MATHEMATICA. As a part of this project, the authors have
developed a package "PolynomSeries" which contains tools for dealing with
multivariate power series. Such a package can be used not only in dynamical
systems theory but in many other applications where multivariate polynomial
series are encountered. The demo version of the "PolynomSeries" package has been
submitted to www.mathsource.com. The families of solutions of the fourth and
sixth order systems are obtained as truncated Fourier series in approximated
frequencies. Comparison of the numerical values obtained by tabulation of the
approximated solutions with the results of numerical integration of the system
displays good agreement. Such approximations can be useful for phase analysis of
a wide class of autonomous nonlinear systems with sufficiently smooth right-hand
sides near stationary point(s).

Simulations and Computations in Theor.Physics and Phenomenology oral session
Vitaliy MYSOVSKIKH
S-Petersburg St. Univ.
vimys@pdmi.ras.ru
Advanced Computing with Subgroup Lattices by the
Computer Algebra Package GAP.
ID=171
Symmetries of discrete objects can be described by the properties of the
respective transformation groups. The structure of their subgroup lattices
reflects important characteristics of these groups. Computation of the subgroup
lattice of a large finite group is an intrinsically difficult problem. This talk
is devoted to certain aspects of such calculations. The computer algebra package
GAP is used for this purpose. The technique of Burnside tables of marks is
discussed. It allows one to replace tedious straightforward calculations by fast
computing with integers from the table of marks. This information is available
via the respective library TOM in GAP 4.2.

Simulations and Computations in Theor.Physics and Phenomenology oral session
Roman ROGALYOV
IHEP, Protvino
rogalyov@th1.ihep.su
The Uses of Covariant Formalism for Analytical
Computation of Feynman Diagrams with Massive Fermions.
ID=172
The bilinear combination of Dirac spinors ū(p1, n1) u(p2, n2) is expressed in
terms of Lorentz vectors in an explicitly covariant form. The fact that the
obtained expression involves only one auxiliary vector makes it very convenient
for analytical computations with the REDUCE (or FORM) package in the spinor
formalism. Another advantage of the proposed formulas is that they apply to
massive fermions as well as to massless fermions. The proposed approach is
employed for the computation of one-loop Feynman diagrams, and it is demonstrated
that it considerably reduces the time of computations.

Simulations and Computations in Theor.Physics and Phenomenology oral session
Arthur NIUKKANEN
Vernadsky Inst. for Geochemistry and Anal. Chemistry RAS, Tula
NIUKKANEN@tula.net
On the Way to Computerizable Scientific Knowledge (by
the Example of the Operator Factorization Method).
ID=185
The advent of computers may produce much more serious and deep changes in
science than is apparently customary to assume. Subdividing the scientific process
into a final result and the derivation of the result, one can see that the result
is most important from the "anthropocentric" point of view. On the contrary, the
derivation rules play the primary role for "computercentric" science. Remembering
that, according to Ludwig Wittgenstein, "in mathematics process and result are
equivalent", one can see that the primary goal in the reconstruction of scientific
knowledge would be making the derivation rules equally acceptable to computer and
researcher. This implies that a useful dialogue between the computer and the user
would be possible. The recent version of the theory of special functions and
multiple (and simple) hypergeometric series can serve as an illustrative example
of this and many other advantages that can be gained by using new basic principles
as a foundation of the theory. The hypergeometric series are ubiquitous. They play
an outstanding unifying role in science because they have "a tendency to appear in
a variety of mathematical and physical circumstances" (B.A. Cipra, 1998). In his
preface to "Special Functions" (Reidel, 1984) Richard Askey wrote: "There are many
examples and no single way of looking at them that can illuminate all examples or
even all the important properties of a single example of a special function".
Askey's statement no longer holds. The operator factorization method gives us just
such a single way of looking at scores of thousands of special functions and
multiple hypergeometric series. Moreover, it allows us to "computerize" the theory
of these functions in a two-fold way. Our main goal is to discuss these ways and
their relation to the existing computer-aided approaches to accumulating,
processing and generating scientific knowledge. The example of the operator
factorization method shows one way of making the structure of knowledge easily
accessible to the computer. Generally, the state-of-the-art level of scientific
knowledge falls far short of the desired accessibility.

Simulations and Computations in Theor.Physics and Phenomenology oral session
Michael ZEITLIN
Inst. for Mechanical Engineering Problems RAS, S-Petersburg
zeitlin@math.ipme.ru
Wavelet-Based Modeling in Quantum Dynamics: From
Localization to Entanglement.
Co-authors: A. Fedorova
ID=188
We present the application of variational-wavelet analysis to
numerical-analytical calculations of Wigner functions in (nonlinear) quasiclassical
dynamical problems, as solutions of the corresponding (pseudo)differential
Wigner-von Neumann equations. (Naive) deformation quantization and multiresolution
representations are the key points. We construct the representations via
multiscale expansions in generalized coherent states or highly localized nonlinear
eigenmodes in the basis of compactly supported wavelets and wavelet packets, which
are a natural nonlinear generalization of the standard coherent, squeezed, and
thermal squeezed states corresponding to quadratic systems (linear dynamics) with
Gaussian Wigner functions. As a result, we calculate quantum corrections to
classical dynamics described by arbitrary polynomial nonlinear Hamiltonians, such
as orbital motion in storage rings or in general multipolar fields. We give the
contributions to our full quasiclassical representation from each scale of the
underlying resolution. We consider applications of the constructed localized and
pattern-like solutions to the dynamics of entangled states in quantum computers.
96

Simulations and Computations in Theor.Physics and Phenomenology oral session
Andrei SEMENOV
LAPTH, Annecy
semenov@lapp.in2p3.fr
CompHEP/SUSY Package.
ID=190
The CompHEP software package allows the evaluation of cross sections and decays of elementary particles with a high level of automation. Arbitrary tree-level processes can be calculated starting from the set of vertices prescribed by a given physical model. This talk describes the details of the Minimal Supersymmetric Standard Model (MSSM) implementation in the CompHEP package, and the notation for the particles and parameters of the MSSM in CompHEP.
97

Simulations and Computations in Theor.Physics and Phenomenology oral session
Mikhail TENTYUKOV
JINR, Dubna
tentukov@thsun1.jinr.ru
A Feynman Diagram Analyzer DIANA - Recent
Development.
Co-authors: J. Fleischer
ID=217
New developments concerning the extension of the Feynman diagram analyser DIANA are presented. We discuss new graphic facilities, the application of DIANA to processes with Majorana fermions, and different approaches to the automation of the distribution of momenta.
98

Simulations and Computations in Theor.Physics and Phenomenology oral session
Leif LONNBLAD
Lund Univ.
Leif.Lonnblad@thep.lu.se
Status of the Pythia7 Project.
ID=236
I will describe the current status of the Pythia7 project, a complete rewrite of the 'Lund family' of event generators in C++. The Pythia7 program is a general platform for implementing event generator models and is not limited to the standard Lund programs (Pythia, Jetset, Ariadne, ...). The future C++ version of the Herwig program will also be implemented in the Pythia7 framework. The underlying framework is now rather stable, and the project has entered a phase where physics models are being implemented.
99

Simulations and Computations in Theor.Physics and Phenomenology oral session
Stanislaw JADACH
Henryk Niewodniczanski Inst. of Nucl. Physics, Krakow
Stanislaw.Jadach@cern.ch
Foam: A General-Purpose Cellular Monte Carlo Event
Generator.
ID=241
A general-purpose, self-adapting Monte Carlo (MC) event generator (simulator) is described. The high efficiency of the MC, that is, a small maximum weight or variance of the MC weight, is achieved by dividing the integration domain into small cells. The cells can be n-dimensional simplices, hyperrectangles, or Cartesian products of them. The grid of cells, called the "foam", is produced in a process of binary splitting of the cells. The choice of the next cell to be divided and the position/direction of the division hyperplane is driven by an algorithm which optimizes the ratio of the maximum weight to the average weight or (optionally) the total variance. The algorithm is able to deal, in principle, with an arbitrary pattern of singularities in the distribution. Like any MC generator, it can also be used for MC integration. With a typical personal computer CPU, the program is able to perform adaptive integration/simulation at a relatively small number of dimensions (≲ 16). With the continuing progress in CPU power, this limit will inevitably get shifted to ever higher dimensions. Foam is aimed (and already tested) as a component in MC event generators for high energy physics experiments. A few simple examples of related applications are presented. Foam is written in a fully object-oriented style in the C++ language. Two other versions with slightly limited functionality are available in the Fortran77 language.
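A minimal one-dimensional sketch of the cell-splitting idea (this is an illustration under our own conventions, not the actual Foam code; the function, cell count and safety margin are all hypothetical):

    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    static std::mt19937 gen(17);
    static double rnd() { return std::uniform_real_distribution<double>(0.0, 1.0)(gen); }

    struct Cell { double a, b, avg, max; };

    // Probe a cell with n random points to estimate its average and maximum weight.
    static void probe(Cell& c, double (*f)(double), int n = 200) {
        c.avg = 0.0; c.max = 0.0;
        for (int i = 0; i < n; ++i) {
            double w = f(c.a + (c.b - c.a) * rnd());
            c.avg += w / n;
            if (w > c.max) c.max = w;
        }
    }

    int main() {
        auto f = [](double x) { return 1.0 / std::sqrt(x + 1e-3); };  // integrable peak at 0
        std::vector<Cell> foam{{0.0, 1.0, 0.0, 0.0}};
        probe(foam[0], f);
        while (foam.size() < 32) {                    // grow the foam by binary splits
            std::size_t worst = 0;                    // cell with worst max/avg ratio
            for (std::size_t i = 1; i < foam.size(); ++i)
                if (foam[i].max / foam[i].avg > foam[worst].max / foam[worst].avg)
                    worst = i;
            Cell left = foam[worst], right = foam[worst];
            left.b = right.a = 0.5 * (foam[worst].a + foam[worst].b);  // midpoint split
            probe(left, f); probe(right, f);
            foam[worst] = left; foam.push_back(right);
        }
        // Generate events: pick a cell proportional to its integral estimate,
        // then accept-reject against the (estimated) cell maximum.
        double total = 0.0;
        for (const Cell& c : foam) total += (c.b - c.a) * c.avg;
        long accepted = 0, tried = 0;
        while (accepted < 100000) {
            double u = total * rnd();
            std::size_t i = 0;
            while (i + 1 < foam.size() && u >= (foam[i].b - foam[i].a) * foam[i].avg)
                u -= (foam[i].b - foam[i].a) * foam[i].avg, ++i;
            double x = foam[i].a + (foam[i].b - foam[i].a) * rnd();
            ++tried;
            if (f(x) >= 1.5 * foam[i].max * rnd())   // 1.5: crude margin, since the
                ++accepted;                          // sampled max underestimates
        }
        std::printf("acceptance = %.2f\n", double(accepted) / tried);
    }

The real Foam additionally optimizes the division edge and position, handles weight overflows, and works in many dimensions; none of that is attempted here.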
100

Simulations and Computations in Theor.Physics and Phenomenology oral session
Mikhail KALMYKOV
JINR, Dubna
kalmykov@ifh.de
Pole Masses of Gauge Bosons.
Co-authors: F. Jegerlehner, O. Veretin
ID=249
Full two-loop EW corrections to the relationship between the $\overline{\rm MS}$ and pole masses of the vector bosons Z and W (two-loop renormalization constants in the on-shell scheme) are calculated. All calculations were performed in the linear $R_\xi$ gauge with three arbitrary gauge parameters, utilizing the method of asymptotic expansions. The results are presented in analytic form as series in the small parameters $\sin^2\theta_W$ and the mass ratio $m_Z^2/m_H^2$.
101

Simulations and Computations in Theor.Physics and Phenomenology oral session
Penka CHRISTOVA
JINR, Dubna
penchris@nusun.jinr.ru
QED Radiative Corrections within the CalcPHEP Project.
ID=255
An automatic calculation of the QED radiative corrections in the framework of the CalcPHEP computer system is discussed. The collection of computer programs written in the Form3 language is aimed at the creation of a database of analytic results to be used for the theoretical support of experiments at high energy accelerators.
102

Simulations and Computations in Theor.Physics and Phenomenology oral session
Dmitri BARDIN
JINR, Dubna
bardin@nusun.jinr.ru
Project CalcPHEP, Calculus for Precision High Energy
Physics.
ID=256
The project, aimed at the theoretical support of experiments at modern and future accelerators (TEVATRON, LHC, electron Linear Colliders (TESLA, NLC, CLIC) and muon factories), will be presented. Within this project a four-level computer system is being created, which eventually must automatically calculate, at one-loop precision, the realistic and pseudo-observables (event distributions and decay rates) for more and more complicated processes of elementary particle interactions, using the principle of knowledge storing. It was already used for a recalculation of the EW radiative corrections for Atomic Parity Violation and of the complete one-loop corrections for the process $e^+e^- \to t\bar{t}$; for the latter, agreement up to 12 digits with FeynArts and with other results existing in the literature was found. Its first phase, capable of automatically computing the decay rates of $Z(H,W) \to f\bar{f}$ in the one-loop approximation, will be demonstrated. The system is written in several computer languages: a) its symbolic part is realized in FORM3; b) the part automatically generating the FORTRAN codes, in PERL; c) the graphical user interface, in JAVA.
103

Simulations and Computations in Theor.Physics and Phenomenology oral session
Andrei DAVYDYCHEV
SINP MSU, Moscow
davyd@theory.sinp.msu.ru
Analytical Evaluation of Certain On-Shell Two-Loop
Three-Point Diagrams.
Co-authors: V. Smirnov
ID=270
An analytical approach based on the evaluation of multiple Mellin-Barnes integrals is applied to the calculation of certain dimensionally-regulated two-loop vertex-type diagrams with essential on-shell singularities. Exact results for the divergent and finite parts are presented in terms of the polylogarithms and their generalizations. Calculation of on-shell diagrams with two different masses is also discussed.
104

Simulations and Computations in Theor.Physics and Phenomenology oral session
Toshiaki KANEKO
KEK, Tsukuba
toshiaki.kaneko@kek.jp
A Package for Generating Feynman Rules in the GRACE System.
ID=279
A package has been developed for the generation of Feynman rules in a form suitable as input for the GRACE system, which is an automatic calculation system for Feynman amplitudes in accordance with given Feynman rules. With this package one can easily obtain the results of perturbative calculations based on one's own physical model, starting from a newly defined Lagrangian. In order to realize this package, a programming language and its interpreter have been developed for the symbolic processing of mathematical expressions. This programming language is equipped with several data structures and control statements specially extended from those defined in usual programming languages. The package for the generation of Feynman rules is prepared as a library. The fields and the Lagrangian of the model are specified as a program written in this language. Thus users can control the whole procedure of the generation of Feynman rules and can apply special treatments through user-defined procedures.
105

Simulations and Computations in Theor.Physics and Phenomenology oral session
Keijiro TOBIMATSU
Kogakuin Univ., Tokyo
tobimatu@cc.kogakuin.ac.jp
A New Monte Carlo Method for Numerical Integration.
Co-authors: T. Kaneko
ID=280
Usual adaptive Monte Carlo methods for multi-dimensional integration divide the integration region into sub-regions in order to pick up the singular behavior of the integrand, and accumulate sampling points to improve the accuracy of the result. We propose a new adaptive Monte Carlo method which selects sub-regions randomly, in contrast to the deterministic methods usually used. The method has more freedom than the usual ones in choosing the shape of the sub-regions, which can be adapted to the structure of the singularities of the integrand. We have estimated the performance of the method with a test implementation.
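A toy version of the random-selection step (our sketch with illustrative names; the authors' sub-region shapes and selection rule are certainly more elaborate): the sub-region to subdivide next is drawn at random, with probability proportional to its estimated variance contribution.

    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    static std::mt19937 gen(99);
    static double rnd() { return std::uniform_real_distribution<double>(0.0, 1.0)(gen); }

    struct Region { double a, b, mean, var; };

    // Estimate the mean of f over [a,b] and the variance of the region's
    // integral estimate (b-a)*mean, using n random points.
    static void probe(Region& r, double (*f)(double), int n = 500) {
        double s = 0.0, s2 = 0.0;
        for (int i = 0; i < n; ++i) {
            double w = f(r.a + (r.b - r.a) * rnd());
            s += w; s2 += w * w;
        }
        r.mean = s / n;
        r.var = (s2 / n - r.mean * r.mean) * (r.b - r.a) * (r.b - r.a) / n;
    }

    int main() {
        auto f = [](double x) { return 1.0 / (1e-2 + (x - 0.3) * (x - 0.3)); };
        std::vector<Region> regs{{0.0, 1.0, 0.0, 0.0}};
        probe(regs[0], f);
        while (regs.size() < 64) {
            double tot = 0.0;
            for (const Region& r : regs) tot += r.var;
            double u = tot * rnd();                  // random, variance-weighted pick
            std::size_t i = 0;
            while (i + 1 < regs.size() && u >= regs[i].var) u -= regs[i].var, ++i;
            Region left = regs[i], right = regs[i];
            left.b = right.a = 0.5 * (regs[i].a + regs[i].b);
            probe(left, f); probe(right, f);
            regs[i] = left; regs.push_back(right);
        }
        double I = 0.0, V = 0.0;
        for (const Region& r : regs) { I += (r.b - r.a) * r.mean; V += r.var; }
        std::printf("integral = %.4f +- %.4f\n", I, std::sqrt(V));
    }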
106

Simulations and Computations in Theor.Physics and Phenomenology oral session
Akihiro SHIBATA
KEK, Tsukuba
akihiro.shibata@kek.jp
Multi-Dimensional Integration Based on Stochastic
Sampling Method.
Co-authors: S. Tanaka, S. Kawabata
ID=283
In high energy physics, VEGAS and BASES, based on the importance sampling algorithm, have been used, since they are quite efficient and applicable to various types of high-dimensional integrands. These routines, however, have a weakness for integrands that are concentrated on one-dimensional (or higher) curved trajectories (or hypersurfaces) close to the diagonal of the integration region, since the geometry is non-separable and these algorithms degenerate into naive Monte Carlo integration. We study a new algorithm for multi-dimensional numerical integration based on the SSM (stochastic sampling method). The SSM is based on a random walk on a multi-dimensional implicit surface (the hypersurface of the graph of the function), which can generate fast and homogeneous sampling points on it. Homogeneous sampling on the integrand means that the sampling points are generated depending on the gradient of the function at each point. In this paper we report preliminary results of the implementation of the algorithm, and discuss the way to extend it to one with adaptive integration by subdivision of the space.
107

Simulations and Computations in Theor.Physics and Phenomenology oral session
Ilya YUDIN
Moscow St. Univ.
elieyudin@mail.ru
The Calculation of the $\phi^4$ Field Theory Beta-Function in the Framework of Perturbation Theory with Convergent Series.
ID=287
Perturbation theory with convergent series, a new technique of divergent-series summation, is applied to the problem of calculating the beta-function in the scalar field theory with quartic self-interaction. Various computational aspects of the method are discussed.
108

Simulations and Computations in Theor.Physics and Phenomenology oral session
Alexander PUKHOV
SINP MSU, Moscow
pukhov@theory.sinp.msu.ru
Batch Calculations in CalcHEP.
ID=288
CalcHEP was designed as a program for calculations in high energy physics in the interactive mode. This talk describes how the program can be launched in the batch mode. Special tools, such as a completely automatic treatment of singularities for Monte Carlo integration, were designed.
109

Simulations and Computations in Theor.Physics and Phenomenology oral session
Alexander PUKHOV
SINP MSU, Moscow
pukhov@theory.sinp.msu.ru
Adaptation of Vegas for Event Generation.
ID=289
Vegas is a very popular program for Monte Carlo integration. Some auxiliary routines added to Vegas allow one to use this program for the generation of an event flow.
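The step from integration to event generation can be sketched as follows (our generic illustration, not Pukhov's actual routines): sample from the adapted grid density g, compute the weight w = f/g, and unweight by hit-or-miss against an estimated maximum weight.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>
    #include <random>
    #include <vector>

    static std::mt19937 gen(5);
    static double rnd() { return std::uniform_real_distribution<double>(0.0, 1.0)(gen); }

    // One crude Vegas-style refinement pass: move the bin edges so that every
    // bin carries (approximately) the same share of the integral.
    static std::vector<double> rebin(const std::vector<double>& e, double (*f)(double)) {
        int n = (int)e.size() - 1;
        std::vector<double> cum(n + 1, 0.0);
        for (int i = 0; i < n; ++i)      // midpoint estimate of each bin's content
            cum[i + 1] = cum[i] + f(0.5 * (e[i] + e[i + 1])) * (e[i + 1] - e[i]);
        std::vector<double> out(n + 1);
        out[0] = e.front(); out[n] = e.back();
        for (int k = 1, i = 0; k < n; ++k) {   // invert the cumulative at level k/n
            double level = cum[n] * k / n;
            while (cum[i + 1] < level) ++i;
            out[k] = e[i] + (level - cum[i]) / (cum[i + 1] - cum[i]) * (e[i + 1] - e[i]);
        }
        return out;
    }

    int main() {
        auto f = [](double x) { return std::exp(-50.0 * (x - 0.5) * (x - 0.5)); };
        std::vector<double> e(51);
        for (int i = 0; i <= 50; ++i) e[i] = i / 50.0;
        for (int pass = 0; pass < 5; ++pass) e = rebin(e, f);
        int n = (int)e.size() - 1;
        double wmax = 0.0;               // estimate the maximal event weight
        for (int i = 0; i < n; ++i)
            wmax = std::max(wmax, f(0.5 * (e[i] + e[i + 1])) * n * (e[i + 1] - e[i]));
        wmax *= 1.5;                     // crude safety margin for the true maximum
        long kept = 0, tries = 0;
        while (kept < 100000) {          // unweighting: hit-or-miss on w/wmax
            int i = std::min((int)(n * rnd()), n - 1);
            double x = e[i] + (e[i + 1] - e[i]) * rnd();
            double w = f(x) * n * (e[i + 1] - e[i]);   // w = f(x)/g(x) on this grid
            ++tries;
            if (w > wmax * rnd()) ++kept;              // emit x as a unit-weight event
        }
        std::printf("unweighting efficiency = %.3f\n", double(kept) / tries);
    }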
110

Simulations and Computations in Theor.Physics and Phenomenology oral session
Fukuko YUASA
KEK, Tsukuba
fukuko.yuasa@kek.jp
Multi-Dimensional Integration Package DICE for Parallel
Processors.
Co-authors: K. Tobimatsu, S. Kawabata
ID=290
DICE is a multi-dimensional integration package developed in 1992. The basic idea of DICE is to divide the integration region into subregions recursively until the convergence condition for each of them is satisfied. It was upgraded in 1998, and the current DICE 1.1 can perform numerical integration on a vector supercomputer. Here we present a new parallel version of DICE using MPI (Message Passing Interface).
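A minimal sketch of the MPI pattern involved (not the DICE implementation itself, which subdivides recursively; this only shows the rank-partitioning and reduction that any such parallel integrator needs):

    #include <mpi.h>
    #include <cmath>
    #include <cstdio>
    #include <random>

    int main(int argc, char** argv) {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        // This rank owns the slab [rank/size, (rank+1)/size) of the unit interval.
        double a = double(rank) / size, b = double(rank + 1) / size;
        std::mt19937 gen(1234 + rank);             // independent stream per rank
        std::uniform_real_distribution<double> u(a, b);
        const long N = 1000000;
        double sum = 0.0;
        for (long i = 0; i < N; ++i) {
            double x = u(gen);
            sum += std::sqrt(1.0 - x * x);         // quarter-circle test integrand
        }
        double part = (b - a) * sum / N, total = 0.0;
        MPI_Reduce(&part, &total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
        if (rank == 0)
            std::printf("integral = %.6f (pi/4 = %.6f)\n", total, std::atan(1.0));
        MPI_Finalize();
    }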
111

Simulations and Computations in Theor.Physics and Phenomenology oral session
Lev BERKOVICH
Samara St. Univ.
berk@ssu.samara.ru
Factorization and Transformations of Linear and Nonlinear Ordinary Differential Equations.
ID=299
We consider the method of factorization of linear nonautonomous ordinary differential equations of nth order that are reduced to linear equations with constant coefficients by the most general point transformations preserving the linearity and the order of the equation. Algorithmic procedures were developed for the search for transformations, factorizations and solutions of linear second-order equations with variable coefficients (containing parameters) of general form. They were realized in the computer algebra system REDUCE, in the program SOLDE. The factorization method has also proved effective for equations of higher order. The main idea was the synthesis of the method of factorization with a method of transformations. In the present talk a class of nonlinear equations of order N, depending on two arbitrary functions and N parameters and reducible to linear equations, is also considered.
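To make the second-order case concrete (a textbook identity added here for illustration; the notation is ours, not SOLDE's):

    y'' + a_1(x)\,y' + a_0(x)\,y
      = \bigl(D - r_1(x)\bigr)\bigl(D - r_2(x)\bigr)\,y , \qquad D \equiv \frac{d}{dx},

which holds exactly when

    r_1 + r_2 = -a_1 , \qquad r_1 r_2 - r_2' = a_0 ,

i.e. when r_2 solves the Riccati equation r_2' + r_2^2 + a_1 r_2 + a_0 = 0. Once a particular r_2 is known, y_1 = \exp\bigl(\int r_2\,dx\bigr) solves the equation and the remaining solution follows by quadratures, which is why finding such factorizations algorithmically is equivalent to solving the equation.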
112

Simulations and Computations in Theor.Physics and Phenomenology oral session
Alexandre CHERSTNEV
SINP MSU, Moscow
sherstnv@theory.sinp.msu.ru
Toolkit for Partonic Events Data Bases in the CompHEP
Package.
Co-authors: S. Balatenyshev, V. Ilyin
ID=321
A new format for the file storage of partonic events, based on the Les Houches standard, is proposed in the CompHEP package. The format is built from simple linguistic constructions (tags). Data on beams, (sub)processes, structure functions and events are written in a transparent form. Any additional information (e.g., describing theoretical aspects of the ME evaluation) may be written in a universal form. The I/O routines (in C and Fortran) are proposed as standard tools for ME and SH generators. This approach should provide a flexible interface between the two stages of the full simulation of collision processes: matrix element evaluation and generation of physical events with showering and hadronization. We present a toolkit to manipulate event samples (files) stored in this standard. The toolkit includes utilities for mixing events of different partonic (sub)processes, selecting events according to given conditions, histogramming, reweighting events, etc. We discuss the CompHEP-PYTHIA (v6.2) interface, which uses the proposed standard.
113

Simulations and Computations in Theor.Physics and Phenomenology oral session
Jochem FLEISCHER
Univ. of Bielefeld
fleischer@physik.uni-bielefeld.de
Factorizing One-Loop Contributions to Two-Loop Bhabha
Scattering and Automatization of Feynman Diagram
Calculations.
Co-authors: T. Riemann, O. Tarasov, A. Werthenbach
ID=324
In higher-order calculations a number of new technical problems arise: one needs diagrams in arbitrary dimension in order to obtain their $\varepsilon$-expansion to the required order, zero Gram determinants appear unexpectedly, and renormalization produces diagrams with `dots' on the lines, i.e. higher powers of scalar propagators. These problems cannot be handled by the `standard' Passarino-Veltman approach: what is needed for higher loops is simply not available there. Moreover, for diagrams with more legs (like 5-point functions) further problems arise: the number of diagrams increases drastically and, due to the many different parameters in the Standard Model, the results become extremely lengthy. Here a proper simplification is needed. This is best achieved with Maple.
114

Simulations and Computations in Theor.Physics and Phenomenology poster session
Askar DZHUMADIL'DAEV
Inst. of Mathematics, Almaty
askar@math.kz
Generalized Commutators and Identities on Vector Fields.
ID=100
It is well known that the commutator of two vector fields is a vector field. We are interested in the following problem: is it possible to construct a k-commutator $s_k(X_1,\dots,X_k) = \sum_{\sigma \in \mathrm{Sym}_k} \mathrm{sign}\,\sigma\; X_{\sigma(1)} \cdots X_{\sigma(k)}$ on vector fields for bigger k? Using computer algebraic methods we establish that Vect(2) has a 6-commutator and that the number 6 cannot be improved: $s_7 = 0$ is an identity, while $s_5$ is not well defined on Vect(2). In general, Vect(n) has an $(n^2+2n-2)$-commutator. These are some of the results that we would like to discuss in our talk.
115

Simulations and Computations in Theor.Physics and Phenomenology poster session
Andrei KATAEV
INR RAS, Troitsk
kataev@ms2.inr.ac.ru
Fits of DIS Data at the NNLO and Beyond as a Concrete Application of Definite Results of Multiloop Calculations in QCD.
ID=104
The recently obtained results for the NNLO corrections to the anomalous dimensions and the N$^3$LO contributions to the coefficient functions of definite Mellin moments of the $xF_3$ and $F_2$ structure functions are used to perform fits of concrete experimental data. Definite typical features of the results of the analytical calculations are revealed.
116

Simulations and Computations in Theor.Physics and Phenomenology poster session
Anatoly KOTIKOV
JINR, Dubna
kotikov@thsun1.jinr.ru
The Value of QCD Coupling Constant and Power
Corrections in the Structure Function F2 Measurements.
ID=134
The deep inelastic scattering data of the BCDMS Collaboration have been reanalyzed by including proper cuts on the ranges with large systematic errors. Fits of the high-statistics deep inelastic scattering data of the BCDMS, SLAC, NMC and BFP Collaborations have been performed, taking the data separately and in a combined way, and good agreement between these analyses has been found. The values of both the QCD coupling constant up to the NLO level and of the power corrections to the structure function F2 have been extracted.
117

Simulations and Computations in Theor.Physics and Phenomenology poster session
Sergey SKOROKHODOV
Computing Center RAS, Moscow
skor@ccas.ru
Advanced Techniques for Computing Divergent Series.
ID=150
The problem of the analytic continuation of a power series is considered. Three methods for the effective solution of the problem are used. The first approach is based on constructing a conformal mapping of the domain in which we are looking for the analytic continuation onto the unit disk, with a corresponding transformation of the series. The problem of selecting an optimal mapping, providing the maximal rate of convergence of the new series, is solved. The second approach is based on a re-expansion of the series, using the governing differential equation to obtain numerically stable finite-difference relations for the coefficients of the new series. The third approach is based on diagonal Padé approximants. The methods above have been used for the highly efficient computation of generalized hypergeometric functions, zeta functions, Dirichlet-type series and other important special functions.
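As an illustration of the third approach (a generic sketch of ours, not the authors' code): Wynn's epsilon algorithm evaluates diagonal Padé approximants directly from the partial sums of a series, turning a slowly convergent (or divergent) expansion into a rapidly convergent one.

    #include <cstdio>
    #include <vector>

    // Wynn's epsilon algorithm: given partial sums s[0..n-1], the even-order
    // epsilon columns reproduce values of diagonal Pade approximants. Pass an
    // odd number of partial sums so the final entry lies in an even column.
    // (No guard against vanishing differences; fine for this toy series.)
    double wynn_epsilon(const std::vector<double>& s) {
        int n = (int)s.size();
        std::vector<double> prev(n, 0.0), cur = s, next(n);
        for (int k = 1; k < n; ++k) {
            for (int i = 0; i + k < n; ++i)
                next[i] = prev[i + 1] + 1.0 / (cur[i + 1] - cur[i]);
            prev = cur; cur = next;
        }
        return cur[0];
    }

    int main() {
        // log(2) = 1 - 1/2 + 1/3 - ... converges very slowly; epsilon accelerates it.
        std::vector<double> s;
        double sum = 0.0;
        for (int k = 1; k <= 11; ++k) { sum += (k % 2 ? 1.0 : -1.0) / k; s.push_back(sum); }
        std::printf("partial sum  = %.10f\n", s.back());
        std::printf("epsilon est. = %.10f\n", wynn_epsilon(s));
        std::printf("log(2)       = 0.6931471806\n");
    }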
118

Simulations and Computations in Theor.Physics and Phenomenology poster session
Michael ZEITLIN
Inst. for Mechanical Engineering Problems RAS, S-Petersburg
zeitlin@math.ipme.ru
Fast Calculations in Nonlinear Collective Models of
Beam/Plasma Physics.
Co-authors: A. Fedorova
ID=189
We consider applications of a numerical-analytical technique based on the methods of local nonlinear harmonic analysis to nonlinear collective models of beam/plasma physics, e.g. some forms of the Vlasov-Maxwell-Poisson equations or more general kinetic equations related to the modeling of the propagation of intense charged particle beams in high-intensity accelerators and transport systems. In our approach we use fast convergent variational-wavelet representations for the solutions, which allows us to consider polynomial and rational types of nonlinearities. The solutions are represented via a multiscale decomposition in nonlinear highly localized eigenmodes, which corresponds to the full multiresolution expansion in all underlying hidden time/space or phase space scales. In contrast with different approaches, we do not use perturbation techniques or linearization procedures. Fast scalar/parallel modeling demonstrates the appearance of highly localized coherent structures and pattern formation in spatially extended stochastic systems with complex collective behaviour.
119

Simulations and Computations in Theor.Physics and Phenomenology poster session
Victor ANDREEV
Gomel St. University
andreev@gsu.unibel.by
Analytical Calculation of the S-matrix in Quantum Electrodynamics.
ID=193
In quantum field theory the main way of obtaining analytical expressions for the amplitudes of elementary particle interaction processes is the use of the Wick theorems. From the Wick theorems it follows that any matrix element of a process is eventually expressed through products of field operators and the appropriate pairings. The aim of the present paper is to calculate the S-matrix analytically with the help of the MATHEMATICA package. In this work a program is presented which allows one to calculate the chronological T-product of quantum fields. The input parameters are the interaction Hamiltonian in coordinate space and the required order of perturbation theory. The program is rather compact and, within several seconds of real time, produces expressions for the T-product of six Hamiltonians of the electromagnetic interaction in the standard form.
120

Simulations and Computations in Theor.Physics and Phenomenology poster session
Victor ANDREEV
Gomel St. University
andreev@gsu.unibel.by
Analytical Calculation of S-matrix Elements of Reactions with Fermions.
ID=195
The aim of this paper is to present a rule-based program in the MATHEMATICA environment for calculating the matrix elements of reactions with fermions. As a test and illustration of the program we obtained the amplitudes of the interaction processes $e^-e^+ \to f\bar{f}$ and $e^-e^+ \to W^-W^+$ in terms of momentum and polarization vector components. It should be noted that obtaining the analytical expression in terms of scalar products for a given matrix element of a reaction takes 0.2 to 0.6 seconds on an ordinary computer.
121

Simulations and Computations in Theor.Physics and Phenomenology poster session
Alexander BELYAEV
SINP MSU & FSU, Tallahassee
belyaev@hep.fsu.edu
Study of Viable SUSY GUTS with Non-Universal Gaugino
Mediation: CompHEP and ISAJET Application.
ID=204
Recently, extra-dimensional SUSY GUT models have been proposed in which compactification of the extra dimension(s) leads to a breakdown of the gauge symmetry and/or supersymmetry. We examine a particular class of higher-dimensional models exhibiting supersymmetry and SU(5) or SO(10) GUT symmetry. SUSY breaking occurs on a hidden brane and is communicated to the visible brane via gaugino mediation. With intensive use of the CompHEP and ISAJET programs we examine the parameter space of models where the gaugino masses are related due to a Pati-Salam symmetry on the hidden brane. We find limited but significant regions of the model parameter space where viable spectra of SUSY matter are generated. Our results are extended to the more general case of three independent gaugino masses: here we find that large parameter space regions open up for large values of the U(1) gaugino mass M1. We also find the relic density of neutralinos for these models to be generally below the expectations from cosmological observations, thus leaving room for hidden-sector states to make up the bulk of the cold dark matter.
122

Simulations and Computations in Theor.Physics and Phenomenology poster session
Alexander GUSEV
JINR, Dubna
gusev_baatar@mail.ru
The Program of Analytical Calculations of the Effective Potentials in the Three-Body Problem on a Line.
Co-authors: D. Pavlov, S. Vinitsky
ID=210
The Hilbert fiber bundle construction induced by the adiabatic expansion of the wave function of a three-body problem is considered using the example of three identical particles on a line with pairwise zero-range potentials. The canonical transformation of the problem is explicitly constructed to reduce the coupled adiabatic equations with induced gauge field potentials to equations involving only the open channels that we are interested in. Analytical calculations of the effective long-range potentials with the help of programs implemented in MAPLE 7 and REDUCE 3.7 are presented.
123

Simulations and Computations in Theor.Physics and Phenomenology poster session
Andrey BANSHCHIKOV
Inst. of Systems Dynamics and Control Theory SB RAS, Irkutsk
bav@icc.ru
Parametric Analysis of Stability Conditions for a Satellite
with a Gravitation Stabilizer.
ID=239
Unique software elaborated on the basis of the computer algebra package MATHEMATICA has been employed in investigations of the stability of a relative equilibrium position for an uncontrolled satellite with a gravitation stabilizer on a circular orbit. Domains of various degrees of Poincaré instability have been revealed in the space of the introduced parameters. Under the assumption that the potential system is unstable (the degree of instability being even), the problem of the possibility of its gyroscopic stabilization is considered. A parametric analysis of the stability conditions has been conducted, and two-parameter sections of the system's domains of gyroscopic stabilization have been constructed with the aid of a software system intended for the graphical solution of systems of algebraic inequalities.
124

Simulations and Computations in Theor.Physics and Phenomenology poster session
Gizo NANAVA
JINR, Dubna
nanava@nusun.jinr.ru
A Monte Carlo Simulation of Decays within the CalcPHEP
Project.
ID=254
A library of Monte Carlo programs for the simulation of two-particle leptonic and quark decays of the W, Z and Higgs bosons with single-bremsstrahlung-photon emission, as a part of the CalcPHEP system, is described. The QED and EW $O(\alpha)$ radiative corrections are implemented without any approximation, keeping the masses of the decay product particles. The decay amplitudes are evaluated numerically using the Kleiss & Stirling helicity amplitude method. A comparison with PHOTOS, a universal Monte Carlo simulator for QED radiative corrections, is also discussed.
125

Simulations and Computations in Theor.Physics and Phenomenology poster session
Lidia KALINOVSKAYA
JINR, Dubna
kalinov@nusun.jinr.ru
About the Implementation of $e^+e^- \to f\bar{f}$ Processes into the Framework of the CalcPHEP Project.
ID=257
In this report it is described how an automatic calculation of the differential cross-sections of the processes $e^+e^- \to f\bar{f}$ (with an arbitrary massive final-state fermion) at the one-loop level is realized within the framework of the CalcPHEP Project computer system. The results of a numerical comparison with other calculations, done with the FeynArts system and other existing codes, will be presented.
126

Simulations and Computations in Theor.Physics and Phenomenology poster session
Alexander KRYUKOV
SINP MSU, Moscow
kryukov@theory.sinp.msu.ru
Use of FORM for the Symbolic Evaluation of Feynman Diagrams in the CompHEP Package.
Co-authors: V. Bunichev, A. Vologdin
ID=322
The CompHEP package includes a module for the symbolic evaluation of squared Feynman diagrams. This built-in module is highly optimized for fast calculations and economical memory usage. However, the price for this was the tight specialization of the module: a fixed number of symbolic structures is allowed in the symbolic expressions under evaluation. Today, when new models of particle interactions assume new and more complicated structures in the vertices, and when new advanced methods for the treatment of matrix elements (for example, the amplitude method) need to be implemented, such a specialization is too heavy a burden for the further development of the CompHEP package. One can add that nowadays, due to the incredible growth of CPU performance, the speed of the symbolic calculations is not the limiting stage in the overall simulation of collision processes. In this report we consider the incorporation of the computer algebra system FORM into the CompHEP code for the automatic evaluation of squared Feynman diagrams.
127

Innovative Algorithms and Tools oral session
Mikhail KOSOV
ITEP, Moscow
Mikhail.Kossov@cern.ch
Chiral Invariant Phase Space Event Generator.
ID=120
The Chiral Invariant Phase Space (CHIPS) model is based on the uniform distribution of quark-partons over the invariant phase space inside hadrons and hadronic compounds (Quasmons). For hadronization, the quark-exchange and quark-fusion mechanisms are used. The model is implemented in the GEANT4 simulation package. The possible topics are: 1) nucleon-antinucleon annihilation at rest, 2) nuclear pion capture at rest, 3) photo- and electronuclear reactions below the pion production threshold, 4) the spectrum of hadrons in CHIPS, 5) the approximation of photonuclear interaction cross sections, 6) the structure functions of hadrons in CHIPS, 7) the fragmentation of the nuclear Giant Resonance.
128

Innovative Algorithms and Tools oral session
Miroslav MORHAC
JINR & Inst. of Phys. Slovak Acad. Sci., Dubna
fyzimiro@flnr.jinr.ru
New Achievements in Development of Multidimensional
Data Acquisition, Processing and Visualization -
DAQPROVIS.
Co-authors: V. Matousek, J. Kliman, I. Turzo, L. Krupa, M. Jandel
ID=142
In this contribution we present a data acquisition, processing and visualization system which is being built at the Institute of Physics, Slovak Academy of Sciences, Bratislava and at FLNR JINR, Dubna. The software described integrates a comprehensive number of both conventional and newly developed algorithms. It allows one to 1) acquire multidimensional data from experiments, 2) sort events according to predefined conditions, 3) create histograms and store them efficiently using newly developed compression methods, 4) analyse multidimensional histograms using a set of sophisticated algorithms, and 5) visualize 1-, 2-, 3- and 4-dimensional histograms. Its modular structure lends itself to setting up the configuration of the employed procedures according to the specific needs of an experiment. The data acquisition part of the system allows one to acquire multiparameter events either directly from an experiment or from a list file. This means that the system can work in either on-line or off-line acquisition mode. In off-line acquisition mode the system can analyse event data even from big experiments, e.g. from GAMMASPHERE. The capability of DAQPROVIS to work simultaneously in both the client and server working modes enables us to realize remote as well as distributed nuclear data acquisition, processing and visualization configurations. The raw events can be written unchanged to a list file and/or sent to other DAQPROVIS client systems. They can be sorted according to predefined criteria (gates) and written to sorted streams. The event variables can be analysed to create 1-, 2-, 3- and 4-parameter histograms (spectra), analysed and compressed using an on-line compression procedure, and sampled using different sampling modes. Once collected, the analysed spectra can be further processed using sophisticated background elimination, deconvolution, peak searching and fitting algorithms. The system allows one to display 1-, 2-, 3- and 4-parameter spectra using a great variety of visualization techniques. The data can be visualized in live mode during the acquisition (experiment), before as well as after processing. Sophisticated methods like shaded isosurfaces and volumetric data presentation, with the possibility to smooth the data using Bezier or B-spline techniques, are also implemented.
129

Innovative Algorithms and Tools oral session
Vladislav MATOUSEK
JINR & Inst. of Phys. Slovak Acad. Sci., Dubna
matousek@flnr.jinr.ru
Efficient Storing of Multidimensional Histograms Using Advanced Compression Techniques.
Co-authors: M. Morhac, J. Kliman, I. Turzo, L. Krupa, M. Jandel
ID=143
The nuclear data taken from an experiment can be stored either in the form of a list of events or analyzed and stored as multidimensional histograms (spectra). This way of storing nuclear spectra has its disadvantage in the enormous amount of information which has to be written, primarily onto tapes, which results in a very long time needed to process each tape. In this contribution we present the efficient data storing and compression methods implemented in the multiparameter data acquisition, processing and visualization package DAQPROVIS. The compression techniques include: 1) simple binning of neighboring data channels (with a loss of resolution), 2) utilizing special properties of the data (the symmetry of multidimensional gamma-ray spectra), and 3) compression using newly developed fast adaptive orthogonal transforms. We have developed special algorithms for new adaptive orthogonal transforms (Walsh, cosine, Fourier), which allow the spectra to be compressed much more efficiently than the classical transforms. The basic principle consists in a direct modification of the coefficients of the signal flow graph of the fast algorithm. Using this algorithm we have modified the Walsh-Hadamard, cosine and Fourier transform kernels. They achieve much higher compression ratios while preserving the important features and sufficient quality of the multidimensional spectra. Our orthogonal transforms were optimized for on-line analysis and compression with the minimal number of mathematical operations needed. The efficiency of the various methods was studied using two-, three- and four-dimensional experimental gamma-ray spectra.
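The signal flow graph mentioned above is easiest to see in code. The following is a sketch of the classical (unmodified) fast Walsh-Hadamard transform with a naive thresholding step (our illustration of transform-based compression in general, not the adaptive kernels of DAQPROVIS):

    #include <cmath>
    #include <cstdio>

    // In-place fast Walsh-Hadamard transform of a length-n block (n a power of
    // two). The adaptive variants described in the abstract modify coefficients
    // of exactly this butterfly flow graph.
    void fwht(double* a, int n) {
        for (int len = 1; len < n; len <<= 1)
            for (int i = 0; i < n; i += 2 * len)
                for (int j = i; j < i + len; ++j) {
                    double u = a[j], v = a[j + len];
                    a[j] = u + v;                 // butterfly: sum and difference
                    a[j + len] = u - v;
                }
    }

    int main() {
        double spec[8] = {4, 5, 6, 7, 7, 6, 5, 4};    // a tiny toy "spectrum"
        fwht(spec, 8);
        for (int i = 0; i < 8; ++i)                   // keep only large coefficients
            if (std::fabs(spec[i]) < 5.0) spec[i] = 0;
        fwht(spec, 8);                                // WHT is self-inverse up to 1/n
        for (int i = 0; i < 8; ++i) std::printf("%.2f ", spec[i] / 8);
        std::printf("\n");                            // approximate reconstruction
    }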
130

Innovative Algorithms and Tools oral session
Tatyana TYUPIKOVA
JINR, Dubna
tanya@jinr.ru
Automated Networks for the Management of Financial Activity and for the Control and Accounting of the Databases of the Economic Divisions of JINR.
Co-authors: V. Samoilov
ID=162
Modern information technologies push the natural sciences toward further development. But this progress occurs together with the evolution of infrastructures intended to create favorable conditions for the development of science and of its financial base, and to justify and legally protect new research. Any scientific development entails accounting and legal protection. In this report we consider a new direction in software, the ideology of the organization of computer networks, and the organization and control of common databases, using the example of the actually functioning electronic document handling of the Chief Power Engineer's Department of JINR.
131

Innovative Algorithms and Tools oral session
Andrew KOSTOUSOV
Ural St. Univ., Ekaterinburg
andrew@skbkontur.ru
Application of Parallel Computing to the Simulation of
Chaotic Dynamics.
ID=165
This work is concerned with the problems arising in the modeling of chaotic dynamics. We are investigating a particular example of a cellular automaton, the "forest-fire" model, which demonstrates self-organized critical behaviour when appropriate values of its parameters are chosen. The main goal of our research is to obtain the statistics of large fires and to understand its laws. The enormous volume of the necessary computation is one of the difficulties of this problem. Parallel computers help us to partially overcome this issue, but new questions concerning the use of the parallelism itself arise. Here we discuss two lines of investigation. The first is to make the measurements more precise (i.e. to increase the "resolution" of the model). This implies the use of large fields (8192x8192 and larger) in the simulation process. Here we encounter the problem that such a field does not fit in the memory of a single processor. The second line of investigation is bound up with the acquisition of very long time series, which are necessary to accumulate an amount of data large enough to analyze the laws of rare events. The question here is how we should produce such time series. One way is to compute several time series on different processors independently and then splice them into one long series. But in this case we have to discover the conditions under which this operation is legitimate (i.e. the properties of the spliced time series are equal to those of a truly long one).
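For reference, a minimal serial version of the kind of cellular automaton studied here (our sketch of the standard Drossel-Schwabl forest-fire rules; the lattice size, probabilities and update details of the talk itself may differ):

    #include <cstdio>
    #include <random>
    #include <vector>

    enum Cell { EMPTY, TREE, FIRE };

    int main() {
        const int L = 256;                   // the talk uses fields up to 8192x8192
        const double p = 0.05, f = 1e-4;     // growth and lightning probabilities
        std::mt19937 gen(7);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        std::vector<Cell> g(L * L, EMPTY), next(L * L);
        // periodic boundary conditions via modular indexing
        auto at = [&](int x, int y) { return g[((x + L) % L) * L + ((y + L) % L)]; };

        for (int step = 0; step < 1000; ++step) {     // synchronous update
            long burning = 0;
            for (int x = 0; x < L; ++x)
                for (int y = 0; y < L; ++y) {
                    Cell c = at(x, y), n = c;
                    if (c == FIRE) n = EMPTY;         // a burning tree burns out
                    else if (c == TREE) {
                        bool nb = at(x-1,y)==FIRE || at(x+1,y)==FIRE ||
                                  at(x,y-1)==FIRE || at(x,y+1)==FIRE;
                        if (nb || u(gen) < f) n = FIRE;   // spread or lightning
                    } else if (u(gen) < p) n = TREE;      // regrowth on empty site
                    next[x * L + y] = n;
                    if (n == FIRE) ++burning;
                }
            g.swap(next);
            if (step % 100 == 0) std::printf("step %4d: %ld burning\n", step, burning);
        }
    }

In a parallel version, the field would be decomposed into slabs with halo exchange at the boundaries, which is precisely where the memory and legitimacy questions raised above appear.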
132

Innovative Algorithms and Tools oral session
Vladimir VINOGRADOV
JINR, Dubna
vinogradov@vxjinr.jinr.ru
Innovative Calorimeter Hadron Energy Reconstruction
Algorithm for the Experiments at the LHC (for the ATLAS
TILECAL collaboration).
Co-authors: Y. Kulchitsky
ID=168
The key question of calorimetry in general, and of hadronic calorimetry in particular, is the energy reconstruction. This question is especially important when a hadronic calorimeter has a complex structure, being a combined calorimeter consisting of an electromagnetic and a hadronic part. Such are the combined calorimeters of the ATLAS and CMS detectors, which are being constructed at the Large Hadron Collider at CERN. We review the known calorimeter energy reconstruction methods and suggest a new algorithm for the energy reconstruction of a combined calorimeter. It uses only the known calorimeter non-compensations (e/h) and the electron calibration constants, and does not require the determination of parameters by a minimization technique. The algorithm has been tested on experimental data, in the 10-300 GeV energy range, from the ATLAS prototype barrel combined calorimeter at the CERN SPS, consisting of a lead-liquid argon electromagnetic part and an iron-scintillator hadronic part, and the correctness of the energy reconstruction has been demonstrated. The new algorithm has also been tested on Monte Carlo data obtained with the help of the GEANT program. Thus, this is the first non-parametric method giving good results. The algorithm has been implemented in the ATLAS Tile calorimeter experimental data analysis program using the PAW and CERNLIB packages. The proposed algorithm can be used for fast energy reconstruction in the first-level trigger and for the analysis of data from modern combined calorimeters like the ATLAS and CMS detectors at the LHC and the CDF and D0 detectors at the TEVATRON.
133

Innovative Algorithms and Tools oral session
Vladimir IVANCHENKO
CERN & BINP, Geneve
Vladimir.Ivantchenko@cern.ch
Geant4 Toolkit for Simulation of HEP Experiments (for the
GEANT4 collaboration).
ID=173
The status of the Geant4 toolkit is described. Examples of Geant4 applications are discussed. Results of comparisons of Geant4 predictions with experimental data are demonstrated.
134

Innovative Algorithms and Tools oral session
Aliaksei KONASH
Inst. of Molecular and Atomic Physics, Minsk
konash@imaph.bas-net.by
Computer Investigation of the Percolation Processes in
Two- and Three-Dimensional Systems with Heterogeneous
Internal Structure.
Co-authors: S. Bagnich
ID=181
As is well known, the percolation model has been found useful for characterizing many disordered systems, such as porous media, fragmentation and fractures, gelation, random resistor-insulator systems, dispersed ionic conductors, forest fires and epidemics. Hoshen and Kopelman developed the cluster formalism for the description of energy transport in disordered media. However, when porous glass was used as the matrix, experimental results of energy transport research were found to differ from the theoretical ones. This effect was connected with the heterogeneous properties of porous glasses. To investigate the influence of the internal structure on energy transport in heterogeneous systems, a computational technique was developed. We considered a square lattice with randomly arranged square obstacles. Such important percolation characteristics of the system as the critical concentration, the percolation probability, the average finite cluster size, the values of the critical exponents, and the fractal and spectral dimensions of the percolation cluster at the critical point were calculated. The strong influence of the linear size and concentration of the obstacles on these quantities was shown in our recent work. In our investigation, systems theory was applied for the mathematical definition of the physical model, and percolation theory for the interpretation of the obtained data. In this paper the results for the preparation of the heterogeneous conditions, the growth of the percolation cluster and the calculation of the features mentioned above are discussed.
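The cluster labelling at the heart of such studies can be sketched with a union-find structure, in the spirit of the Hoshen-Kopelman formalism cited above (a generic illustration of ours; obstacles would simply be sites that are never occupied):

    #include <cstdio>
    #include <random>
    #include <vector>

    struct UF {                               // union-find with path compression
        std::vector<int> p;
        explicit UF(int n) : p(n) { for (int i = 0; i < n; ++i) p[i] = i; }
        int find(int x) { return p[x] == x ? x : p[x] = find(p[x]); }
        void unite(int a, int b) { p[find(a)] = find(b); }
    };

    int main() {
        const int L = 512;
        const double conc = 0.59;             // near the site-percolation threshold
        std::mt19937 gen(3);
        std::uniform_real_distribution<double> u(0.0, 1.0);
        std::vector<char> occ(L * L);
        for (auto& s : occ) s = (u(gen) < conc);
        UF uf(L * L);
        for (int x = 0; x < L; ++x)           // merge occupied nearest neighbours
            for (int y = 0; y < L; ++y) {
                if (!occ[x * L + y]) continue;
                if (x > 0 && occ[(x - 1) * L + y]) uf.unite(x * L + y, (x - 1) * L + y);
                if (y > 0 && occ[x * L + y - 1])   uf.unite(x * L + y, x * L + y - 1);
            }
        std::vector<long> size(L * L, 0);     // cluster-size statistics per root
        long largest = 0;
        for (int i = 0; i < L * L; ++i)
            if (occ[i] && ++size[uf.find(i)] > largest) largest = size[uf.find(i)];
        std::printf("largest cluster: %ld of %d sites\n", largest, L * L);
    }

From the resulting cluster sizes one can estimate the percolation probability, the average finite cluster size and, via box counting on the largest cluster, its fractal dimension.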
135

Innovative Algorithms and Tools oral session
Anna KISELEVA
S-Petersburg St. Univ.
A.Kisseleva@gsi.de
Application of Digital Filtering Techniques for Analysis of
X-ray Signals Using Interactive Data Language.
Co-authors: A. Bleile, P. Egelhof, O. Kisselev, D. McCammon, J. Meier
ID=186
An advanced program for the analysis of experimental data obtained using calorimetric low-temperature detectors is presented. One of the applications of such detectors is the precise determination of the Lamb shift in heavy hydrogen-like ions, which provides a sensitive test of quantum electrodynamics in very strong Coulomb fields. This experiment requires a very high energy resolution on the level of 0.1
136

Innovative Algorithms and Tools oral session
Grigori MEN'SHIKOV
S-Petersburg St. Univ.
miksha@pobox.spbu.ru
Analytical Foundations of Localizing Computing.
ID=206
Some fundamental mathematical ideas are based on localizing (or enclosing) computing. We present a review of them: the idea of an interval enclosure and its usefulness for mathematical computations; interval arithmetic as an extension of numerical arithmetic; the association of a desired process with an enclosing process; the application of the properties of interval arithmetic; element-by-element enclosure; the idea of using boxes as tools of enclosure in the spaces R^n; the idea of majorization; the intervalization of approximate formulae; the idea of the experimental study of a function and of the verification of approximate formulae; checking by means of intersections; the idea of the finite stabilization of interval sequences; the application of the Brouwer-Schauder theorem to the enclosure of a fixed point of the process-prototype by the corresponding interval analog; the finality of the enclosure of the fixed point of the process-prototype by its interval analog; the ideas of intersections and interval hulls applied to the creation of embedded and anti-embedded iterative processes; the idea of a preliminary enclosure of the solutions of numerical equations; the idea of a criterion for the absence of solutions; the idea of the multi-instrumentality of an algorithmic equation solver; subdivision of the preliminary enclosure; the idea of a preliminary (verified) enclosure of the integral curve; the idea of a correcting (interpolating) enclosure of the integral curve by means of an intervalized Taylor formula; the idea of a checking enclosure of the integral curve by means of an intervalized quadrature formula; the idea of a separate enclosure of the bounds of the localizing zone of the integral curves; the idea of an inner enclosure; and the use of majorants and asymptotics to obtain new inclusions.
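A minimal interval type makes the basic enclosing idea concrete (our sketch; a real validated-computing library would also control the floating-point rounding mode, which is omitted here):

    #include <algorithm>
    #include <cstdio>

    // Invariant: lo <= hi. Each operation returns an interval guaranteed to
    // contain every possible result of the real-number operation.
    struct Interval { double lo, hi; };

    Interval operator+(Interval a, Interval b) { return {a.lo + b.lo, a.hi + b.hi}; }
    Interval operator-(Interval a, Interval b) { return {a.lo - b.hi, a.hi - b.lo}; }
    Interval operator*(Interval a, Interval b) {
        double c[4] = {a.lo * b.lo, a.lo * b.hi, a.hi * b.lo, a.hi * b.hi};
        return {*std::min_element(c, c + 4), *std::max_element(c, c + 4)};
    }
    bool contains(Interval a, double x) { return a.lo <= x && x <= a.hi; }

    int main() {
        Interval x{0.9, 1.1};                        // a measured value with error
        Interval y = x * x - Interval{1.0, 1.0};     // enclose f(x) = x^2 - 1
        std::printf("[%g, %g]\n", y.lo, y.hi);       // [-0.19, 0.21], contains 0
        std::printf("contains 0: %d\n", contains(y, 0.0));
    }

Note that the enclosure may be wider than the exact range (the dependency problem), which is one reason the subdivision and majorization ideas listed above matter.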
137

Innovative Algorithms and Tools oral session
Boris ZAGREEV
ITEP, Moscow
Boris.Zagreev@itep.ru
Algorithms and Methods for Particle Identification with the ALICE TOF Detector at Very High Particle Multiplicity.
ID=212
Different algorithms and methods for particle identification (PID) with the ALICE time-of-flight (TOF) detector are considered. The aim is to match the particle tracks obtained by the TPC detector with TOF pads and then to identify the charged particles. That has to be done under conditions of very high multiplicity and background. Different approaches were used: simple contour cuts, neural networks, and a probability approach in which we calculate, for each track, the probability of its being each kind of particle. The efficiency and contamination of the PID are discussed.
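The probability approach can be sketched as follows (our illustration with made-up numbers, not the ALICE code): compare the measured time of flight with the expectation for each mass hypothesis and normalize the Gaussian likelihoods.

    #include <cmath>
    #include <cstdio>

    int main() {
        const double c = 29.9792458;                       // speed of light, cm/ns
        const double mass[3] = {0.1396, 0.4937, 0.9383};   // pi, K, p masses, GeV
        const char* name[3] = {"pion", "kaon", "proton"};
        double p = 0.8;                                    // track momentum, GeV
        double L = 370.0;                                  // track length to TOF, cm
        double sigma = 0.120;                              // TOF resolution, ns
        double tmeas = 12.6;                               // measured flight time, ns

        double prob[3], sum = 0.0;
        for (int i = 0; i < 3; ++i) {
            // t = (L/c) * E/p with E = sqrt(p^2 + m^2), i.e. t = (L/c)*sqrt(1+m^2/p^2)
            double texp = L / c * std::sqrt(1.0 + mass[i] * mass[i] / (p * p));
            double r = (tmeas - texp) / sigma;
            prob[i] = std::exp(-0.5 * r * r);              // Gaussian likelihood
            sum += prob[i];
        }
        for (int i = 0; i < 3; ++i)
            std::printf("%-6s probability = %.3f\n", name[i], prob[i] / sum);
    }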
138

Innovative Algorithms and Tools oral session
Victor SERBO
SLAC, Menlo Park
serbo@slac.stanford.edu
Status of AIDA and JAS 3 (for the AIDA collaboration).
ID=228
I will present the status of the AIDA project, the Java implementation of AIDA, and its integration with the Java Analysis Studio (JAS). The goal of the AIDA (Abstract Interfaces for Data Analysis) project is to define abstract C++ and Java interfaces for common physics analysis objects, such as histograms, ntuples, fitters, I/O, etc. The adoption of these interfaces should make it easier for physicists to use different tools without having to learn new interfaces or change all of their code. An additional benefit will be the interoperability of AIDA-compliant applications (for example, by making it possible for applications to exchange analysis objects via XML). Several analysis tools now support AIDA; one of these is Java Analysis Studio (JAS), an experiment-independent graphical application for the analysis of high-energy physics data. JAS 3.0 will represent a major rewrite of many of the components of JAS, to incorporate AIDA for the basic analysis components and to make the entire application more modular, making it easier for others to contribute to the project.
139

Innovative Algorithms and Tools oral session
Are STRANDLIE
CERN, Geneve
Are.Strandlie@cern.ch
Recent Results on Adaptive Track and Multitrack Fitting
in CMS.
Co-authors: R. Fruehwirth, T. Todorov, M. Winkler
ID=229
Some recently developed adaptive methods for the fitting of tracks and track bundles have been implemented in ORCA, the object-oriented reconstruction program for the CMS experiment. We review their main features and discuss their relation to other elastic tracking algorithms. We show results of the verification on artificial events as well as results from comparative studies on selected physics channels. It is shown that in some difficult channels adaptive methods are superior to the Kalman filter both in terms of resolution and in the quality of the error estimate.
140

Innovative Algorithms and Tools oral session
Hans-Peter WELLISCH
CERN, Geneve
Hans-Peter.Wellisch@cern.ch
Mapping Modern Software Process Engineering Techniques
onto a HEP Development Environment.
ID=235
One of the most challenging issues faced in HEP in recent years is the question of how to capitalise on software development and maintenance experience in a continuous manner. To capitalise means, in our context, to evaluate and apply new process technologies as they arise, and to further evolve technologies already widely in use. It also implies the definition and adoption of standards. The CMS off-line software improvement effort aims at continual software quality improvement and continual improvement in the efficiency of the working environment, with the goal of facilitating doing great new physics. To achieve this, we followed a process improvement program based on ISO-15504 and the Rational Unified Process. This experiment in software process improvement in HEP has now been progressing for a period of 3 years. Taking previous experience from ATLAS and SPIDER into account, we used a soft approach of continuous change within the limits of the current culture to create de-facto software process standards within the CMS off-line community, as the only viable route to a successful software process improvement program in HEP. We will present the CMS approach to software process improvement in this process R&D, and describe the lessons learned and the mistakes made. We will describe the architecture of the supporting tool suite, demonstrate the benefits gained, and present the current status of the software processes established in CMS off-line software.
141

Innovative Algorithms and Tools oral session
Hans-Peter WELLISCH
CERN, Geneve
Hans-Peter.Wellisch@cern.ch
Geant4 Physics Validation for Large Scale HEP Detectors.
ID=237
Optimal exploitation of hadronic final states played a key role in the successes of all recent collider experiments in HEP, and the ability to use hadronic final states will continue to be one of the decisive issues during the analysis phase of the LHC experiments. Monte Carlo techniques facilitate the use of hadronic final states and have been developed for many years. We will give a brief overview of the physics underlying hadronic shower simulation, discussing the three basic types of modelling (data-driven, parametrisation-driven, and theory-driven) using the example of Geant4. We will confront these different types of modelling with the stringent requirements posed by the LHC experiments on hadronic shower simulation, and report on the current status of the Geant4 validation effort for large HEP applications. We will address robustness, CPU, and physics performance evaluations, with a focus on hadronic showers.
142

Innovative Algorithms and Tools oral session
Rene BRUN
CERN, Geneve
Rene.Brun@cern.ch
The ROOT Geometry package.
Co-authors: A. Gheata, M. Gheata
ID=240
A new geometry package is being introduced in the ROOT Data Analysis and Visualisation system. This package, developed in collaboration with the Alice experiment, includes: 1) a modeller with an Object-Oriented API, 2) a geometry browser, 3) a geometry visualisation kit (2-D and 3-D), and 4) algorithms to answer questions such as: where am I, the distance to the next surface, the normal to a surface. A powerful cache management system has been developed and makes the new package 2 to 3 times faster than the geometry system of Geant3. The package is currently in a testing phase and is already able to support complex geometries such as Alice, Atlas, CMS, LHCb, CDF and Minos. Extensive comparisons have been made with Geant3 to validate the results.
143

Innovative Algorithms and Tools oral session
Rudolf FRUEHWIRTH
HEPHY, Vienna
fru@hephy.oeaw.ac.at
A Review of Fast Circle and Helix Fitting.
Co-authors: A. Strandlie, J. Wroldsen, W. Waltenberger
ID=245
Circle and helix fitting is of paramount importance in the data analysis of the LHC experiments. We review several approaches to exact but fast fitting, including a recent development based on the projection of the measured points onto a second-order surface in space (a sphere or a paraboloid). We also show how multiple scattering can be handled by this method, and present results of a comparison with global and recursive linearized least-squares estimators.
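The underlying idea can be shown in a toy version (our simplified algebraic circle fit, not the authors' full method with its error treatment): mapping points onto the paraboloid z = x² + y² turns a circle into a plane, so the fit becomes linear.

    #include <algorithm>
    #include <cmath>
    #include <cstdio>

    // Solve a 3x3 linear system by Gaussian elimination with partial pivoting.
    static void solve3(double A[3][3], double b[3], double x[3]) {
        for (int k = 0; k < 3; ++k) {
            int piv = k;
            for (int i = k + 1; i < 3; ++i)
                if (std::fabs(A[i][k]) > std::fabs(A[piv][k])) piv = i;
            for (int j = 0; j < 3; ++j) std::swap(A[k][j], A[piv][j]);
            std::swap(b[k], b[piv]);
            for (int i = k + 1; i < 3; ++i) {
                double m = A[i][k] / A[k][k];
                for (int j = k; j < 3; ++j) A[i][j] -= m * A[k][j];
                b[i] -= m * b[k];
            }
        }
        for (int k = 2; k >= 0; --k) {
            x[k] = b[k];
            for (int j = k + 1; j < 3; ++j) x[k] -= A[k][j] * x[j];
            x[k] /= A[k][k];
        }
    }

    int main() {
        // noisy points on a circle of radius 2 centred at (1, -1)
        double xs[8], ys[8];
        for (int i = 0; i < 8; ++i) {
            double phi = 0.7 * i;
            xs[i] = 1.0 + 2.0 * std::cos(phi) + 0.01 * std::sin(13.0 * i);
            ys[i] = -1.0 + 2.0 * std::sin(phi) + 0.01 * std::cos(7.0 * i);
        }
        // circle as a plane in (x, y, z = x^2+y^2):  z + D x + E y + F = 0
        double A[3][3] = {{0}}, b[3] = {0}, s[3];
        for (int i = 0; i < 8; ++i) {
            double z = xs[i] * xs[i] + ys[i] * ys[i];
            double row[3] = {xs[i], ys[i], 1.0};
            for (int r = 0; r < 3; ++r) {
                for (int c = 0; c < 3; ++c) A[r][c] += row[r] * row[c];
                b[r] -= row[r] * z;          // normal equations of the linear fit
            }
        }
        solve3(A, b, s);
        double cx = -s[0] / 2, cy = -s[1] / 2;
        double R = std::sqrt(cx * cx + cy * cy - s[2]);
        std::printf("centre (%.3f, %.3f), radius %.3f\n", cx, cy, R);
    }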
144

Innovative Algorithms and Tools oral session
Wolfgang WALTENBERGER
HEPHY, Vienna
walten@hephy.oeaw.ac.at
New Developments in Vertex Reconstruction for CMS.
Co-authors: R. Fruehwirth, K. Proko ev, T. Speer, P. Vanlaer
ID=246
We present adaptive and robust methods of vertex estimation and investigate their suitability for vertex finding and vertex fitting in the context of LHC physics. We show results of tests with both artificial and physical events, and discuss the effects of robust and adaptive procedures on the efficiency of primary and secondary vertex reconstruction and b-tagging.
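A one-dimensional caricature of an adaptive estimator (our sketch, with made-up numbers; not the CMS implementation): observations are iteratively down-weighted by a Fermi function of their standardized residual under a decreasing annealing temperature, so outlying tracks lose their influence on the vertex position.

    #include <cmath>
    #include <cstdio>

    int main() {
        // six compatible measurements plus two outliers (mismeasured tracks)
        double z[8] = {0.02, -0.05, 0.04, -0.01, 0.03, 2.9, 3.1, -0.02};
        double sigma = 0.05, chi2_cut = 9.0;
        double v = 0.0;                               // initial vertex guess
        for (double T = 64.0; T >= 1.0; T /= 2.0) {   // annealing schedule
            for (int it = 0; it < 10; ++it) {         // reweighted mean at fixed T
                double sw = 0.0, swz = 0.0;
                for (double zi : z) {
                    double chi2 = (zi - v) * (zi - v) / (sigma * sigma);
                    // soft assignment weight: ~1 for compatible, ~0 for outliers
                    double w = 1.0 / (1.0 + std::exp((chi2 - chi2_cut) / (2.0 * T)));
                    sw += w; swz += w * zi;
                }
                v = swz / sw;
            }
            std::printf("T = %5.1f  ->  v = %.4f\n", T, v);
        }
    }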
145

Innovative Algorithms and Tools oral session
Vladimir SAMOILENKO
IHEP, Protvino
samoylenko@mx.ihep.su
Event Efficiency Function as an Analog of a Neural Classifier.
Co-authors: S. Klimenko, N. Minaev, E. Slobodyuk
ID=259
We discuss an approach based on the event efficiency function in feature space as an analog of a neural classifier for different types of physical events. The parametrization function is chosen in the form of an orthogonal function set. The procedure of optimizing the set and determining the coefficients, based on the maximum entropy principle, is analogous to the procedure of neural net learning. The accuracy of the set expansion can then be interpreted as the generalization ability in the neural net approach. This method of parametrizing the event efficiency is applied to the separation of rare neutral meson decays in a real experiment. It essentially reduces the physical background which arises due to the finite spatial and energy resolution of the experimental setup. The results are compared with those obtained with the standard neural net method.
146

Innovative Algorithms and Tools oral session
Pedro ARCE
CERN & CIEMAT, Geneva
pedro.arce@cern.ch
Object Oriented Software for Simulation and
Reconstruction of Big Alignment Systems.
ID=271
Modern high energy physics experiments require tracking detectors to provide high precision under difficult working conditions (high magnetic field, gravity loads and temperature gradients). This is the reason why several of them have decided to implement optical alignment systems to monitor the displacement of the tracking elements in operation. To simulate and reconstruct optical alignment systems, a general-purpose software package named COCOA has been developed, using the object-oriented paradigm and software engineering techniques. Thanks to the big flexibility in its design, COCOA is able to reconstruct any optical system made of a combination of the following objects: laser, x-hair laser, incoherent source - pinhole, lens, mirror, plate splitter, cube splitter, optical square, rhomboid prism, 2D sensor, 1D sensor, distancemeter, tiltmeter, user-defined. COCOA was designed to satisfy the requirements of the CMS alignment system, which has several thousand components. Sparse matrix techniques have been investigated for solving non-linear least-squares fits with such a large number of parameters. The soundness of COCOA has already been demonstrated in the reconstruction of the data of a full simulation of a quarter plane of the CMS muon alignment system, which implied solving a system of 900 equations with 850 unknown parameters. A full simulation of the whole CMS alignment system, with over 30000 parameters, is quite advanced. The integration of COCOA into the CMS software framework is also in progress.
147

Innovative Algorithms and Tools oral session
Pedro ARCE
CERN & CIEMAT, Geneva
pedro.arce@cern.ch
Simulation Framework and XML Detector Description
Database for CMS Experiment.
Co-authors: S. Banerjee, M. Batagglia, M. Case, A. De Roeck, V. Lara, M.
Liendl, M. Schroder, H.-P. Wellisch, A. Straessner, F. Van Lingen, S. Wynho , H.
Wenzel
ID=284
Currently, CMS event simulation is based on GEANT3, while the detector description is built from different sources for simulation and reconstruction. A new simulation framework based on GEANT4 is under development. A full description of the detector is available, and the tuning of the GEANT4 performance and the checking of the ability of the physics processes to describe the detector response are ongoing. Its integration into the CMS mass production system and GRID is also currently under development. The Detector Description Database project aims at providing a common source of information for Simulation, Reconstruction, Analysis and Visualisation, while allowing for different representations as well as specific information for each application. A functional prototype, based on XML, has already been released. Examples of the integration of the DDD in the GEANT4 simulation and in the reconstruction applications are also provided.
148

Innovative Algorithms and Tools oral session
Lassi TUURA
CERN & Northeastern Univ., Geneve
lassi.tuura@cern.ch
Ignominy: Tool for Analysing Software Dependencies and
For Reducing Complexity in Large Software Systems (for
the CMS collaboration).
ID=286
LHC experiments such as CMS have large-scale software projects that are challenging to manage. We present Ignominy, a tool developed in CMS to help us deal better with complex software systems. Ignominy analyses the source code as well as binary products such as libraries and programs to deliver a comprehensive view of the package dependencies, including all the external products used by the project. We describe the analysis and the various charts, diagrams and metrics collected by the tool, including results from several large-scale HEP software projects. We also discuss the progress made in CMS to improve the software structure and the experience we have gained in the physical packaging and distribution of our code.
149

Innovative Algorithms and Tools oral session
Lassi TUURA
CERN & Northeastern Univ., Geneve
lassi.tuura@cern.ch
CMS Data Analysis: Current Status and Future Strategy
(for the CMS collaboration).
ID=297
We present the current status of CMS data analysis architecture and describe
work on future Grid-based distributed analysis prototypes. CMS has two main
software frameworks related to data analysis: COBRA, the main framework, and
IGUANA, the interactive visualisation framework. Software using these frameworks
is used today in the world-wide production and analysis of CMS data. We describe
their overall design and present examples of their current use with emphasis on inter-
active analysis. CMS is currently developing remote analysis prototypes, including
one based on Clarens, a Grid-enabled client-server tool. Use of the prototypes by
CMS physicists will guide us in forming a Grid-enriched analysis strategy. The sta-
tus of this work is presented, as is an outline of how we plan to leverage the power
of our existing frameworks in the migration of CMS software to the Grid.

Innovative Algorithms and Tools oral session
Maxwell SANG
CERN, Geneve
max.sang@cern.ch
Status of the Anaphe Project.
ID=304
Anaphe is a project in the CERN IT division to provide modular libraries for data analysis, including histogramming, ntuples, graphical plotting, fitting and minimisation. Abstract interfaces provide complete implementation decoupling between the components, and careful design minimises dependencies on the interfaces themselves, allowing great run-time flexibility. For interactive work we have built a lightweight component framework called Lizard which maps the underlying C++ functionality into Python modules. The first fully functional release was in July 2001, and the first version using only license-free foundation libraries was released in September. The most recent release (May 2002) is compliant with AIDA, a collaborative effort to standardise the user interfaces of analysis applications. This permits interoperability with components from other AIDA-compliant packages. We present an architectural overview and a status report, and discuss the work in progress, including enhancements of the AIDA interfaces, GRID-enabled distributed analysis capability and new component implementations currently under development, such as XML-based persistency and GSL-based minimisation and fitting.

Innovative Algorithms and Tools oral session
Jose SEIXAS
Fed. Univ., Rio de Janeiro
seixas@lps.ufrj.br
An Online Calorimeter Trigger for Removing Outsiders
from Particle Beam Calibration Tests.
Co-authors: D. Damazio
ID=310
The next collider experiment at CERN, the LHC, will be operational by the year 2006 and will collide bunches of protons at 14 TeV, an energy that has never been achieved. One of the LHC's detectors, ATLAS, relies substantially on the calorimeter system for both measurement and triggering purposes. The hadronic calorimeter of ATLAS is a scintillating tile calorimeter (Tilecal), which is now finishing its production and has already started to be calibrated for installation. A fraction of the Tilecal modules is being calibrated with particle beams at CERN. In spite of the beam quality, experimental beam contamination is unavoidable. For instance, muons can be found in pion samples, and pions and muons are often found in electron samples. To cope with this contamination problem, a significant enlargement of the acquired data set is required to provide enough statistics for the interesting physics, and an offline analysis is then applied to the acquired data samples to remove the outsiders. In this paper, an online neural trigger for removing outsiders from experimental data samples is developed. The online trigger reduces the data sample and makes a more efficient usage of the calibration period possible, as only interesting physics is recorded for the offline analysis. The online neural triggering system was fully integrated into the Read-Out Driver (ROD) system of the data acquisition system and tested with modules from the barrel sector. Analysis of the network output for electron, pion and muon samples and comparisons with classical offline analysis show that the online neural trigger is able to identify correctly more than 95.0% of the incoming particles.

Innovative Algorithms and Tools oral session
Yuri FISYAK
BNL, Upton
fisyak@bnl.gov
OO/C++ Reconstruction Model Based on GEANT3.
Co-authors: V. Fine, P. Nevski, T. Wenaus
ID=318
Many current HENP experiments still have their most detailed geometry in GEANT3. It is very attractive to be able to access this geometry description from modern OO/C++ reconstruction codes. The "ideal" detector model provided by GEANT3 accounts for the symmetry of the detector and is "compact" because it does not contain any duplication of the geometrical nodes. However, this "compact" model cannot accommodate real reconstruction needs, where a subset of geometrical nodes (detector elements) may also have their own unique lists of parameters, including alignment, calibration, and so on. An OO model and its implementation are being developed within the ROOT framework to match the requirements of both simulation and reconstruction. A hit object model is also included to store GEANT3 hits, together with tools supporting navigation from hits to geometry and vice versa. A prototype Muon Object Oriented Reconstruction for the ATLAS experiment has been developed based on this geometry model.

Innovative Algorithms and Tools oral session
Valeri FINE
BNL, Upton
fine@bnl.gov
Cross-Platform Qt-Based Implementation of Lower Level
GUI Layer of ROOT.
ID=319
A version of the widely used ROOT analysis framework based on the cross-platform GUI package "Qt" from Trolltech will be presented. Qt-based ROOT consists of a standard ROOT installation with the addition of two shared libraries, libQt and libQtGui. The libQt library is a Qt-based implementation of the TVirtualX ROOT abstract interface to the low-level local graphics subsystem (X11 or Win32, for example). The libQtGui library contains implementations of the abstract interfaces provided by the TGuiFactory ROOT class: TCanvasImp, TBrowserImp, TContextMenuImp, TControlBarImp, TInspectorImp. It is possible to switch the ROOT session from the standard "platform-oriented" to the new "Qt-based cross-platform" shared libraries in order to compare both approaches with no change or re-compilation of the user code. The present approach allows the ROOT developer as well as the ROOT user to work with code that has no X11/Win32 graphics subsystem dependencies, and at the same time opens unrestricted access to a rich set of ready-to-use commercial and free Qt-based GUI widgets. The Qt-based version was tested on Unix and Windows.

Innovative Algorithms and Tools poster session
Victor ZLOKAZOV
JINR, Dubna
zlokazov@nf.jinr.ru
DELPHI-based Visual Object-Oriented Programming for
the Analysis of Experimental Data in Low Energy Physics.
ID=101
The existing software for the analysis of experimental distributions in low energy physics (spectra, cross-sections of different reactions, etc.) is written in a rather obsolete programming style, based normally on the use of sequential algorithms and the Fortran language for their implementation. However, new trends in programming technology involve treating an algorithm as interacting events and using visual object-oriented programming languages for the development of the corresponding software, combining analytical and graphical methods for problem solution. The report describes the concepts and experience in the creation of methods for the sophisticated mathematical analysis of experimental data, implemented entirely with the new programming means offered by DELPHI-5. The created programs are typical mouse-controlled, user-friendly Windows applications, intended to solve the following problems: 1) shape-independent Rietveld multi-spectrum and multi-phase fitting (in particular, for RTOF spectra, for which the program is specially adjusted); 2) powder matching; 3) automatic three-dimensional Fourier synthesis; 4) multi-phase autoindexing; 5) peak shape-independent fitting and various types of filtering, etc.

Innovative Algorithms and Tools poster session
Salavat ABDULLIN
Univ. of Maryland
abdullin@mail.cern.ch
Genetic Algorithm for SUSY Trigger Optimization in CMS
Detector at LHC.
ID=118
We apply a simple genetic algorithm to optimize level-1 and level-2 triggers for generic SUSY signatures (jets + missing E_T) to be observed with the Compact Muon Solenoid (CMS) detector at the Large Hadron Collider (LHC).
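
As a rough illustration of the approach, the sketch below (Python, toy data, an assumed figure of merit; nothing here is CMS code) evolves a pair of trigger thresholds so that signal efficiency is maximised while the background rate stays under a budget:

    import numpy as np

    rng = np.random.default_rng(0)

    def fitness(cuts, sig, bkg, max_rate=1.0e-3):
        # signal efficiency of the thresholds; zero if background rate too high
        eff_s = np.all(sig > cuts, axis=1).mean()
        eff_b = np.all(bkg > cuts, axis=1).mean()
        return eff_s if eff_b < max_rate else 0.0

    def evolve(sig, bkg, pop_size=50, generations=40, sigma=5.0):
        pop = rng.uniform(0.0, 200.0, size=(pop_size, 2))   # thresholds in GeV
        for _ in range(generations):
            scores = np.array([fitness(c, sig, bkg) for c in pop])
            parents = pop[np.argsort(scores)[-pop_size // 2:]]  # keep best half
            pairs = rng.integers(len(parents), size=(pop_size, 2))
            # crossover: each threshold component taken from one of two parents
            children = np.where(rng.random((pop_size, 2)) < 0.5,
                                parents[pairs[:, 0]], parents[pairs[:, 1]])
            pop = children + rng.normal(0.0, sigma, children.shape)  # mutation
        return max(pop, key=lambda c: fitness(c, sig, bkg))

    # toy "events": columns = (leading jet E_T, missing E_T), purely illustrative
    sig = rng.normal([120.0, 80.0], 30.0, size=(10000, 2))
    bkg = rng.exponential([40.0, 20.0], size=(100000, 2))
    print(evolve(sig, bkg))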

Innovative Algorithms and Tools poster session
Yury PIVOVAROV
Polytechnical Univ., Tomsk
pivovarov@fnsm.tpu.edu.ru
Computer Simulation of Spectral and Polarization
Characteristics of Channeling Radiation From Relativistic
Particles in Crystals.
Co-authors: V. Dolgikh
ID=151
We present a modified model and results of computer simulations of the spectral and polarization characteristics of radiation from relativistic particles passing through a crystal under channeling conditions. The trajectory of the particle is calculated using the binary collision model and is substituted into the formulas of classical electrodynamics which define the components of the radiation field of given polarizations in the wave zone. Then the intensity spectrum of the radiation and its Stokes parameters, which are quadratic forms of the simulated radiation field components, are calculated. In classical electrodynamics, the whole trajectory contributes to the field component of a given frequency. This creates computational difficulties, since at a definite penetration depth the photon is already emitted. How to optimize the simulation procedure so as to treat targets of large thickness is the subject of the proposed talk.

Innovative Algorithms and Tools poster session
Yuri KUNASHENKO
Polytechnical Univ., Tomsk
kun@npi.tpu.ru
Computer Simulation of Interaction of Relativistic
Positronium Atom with a Crystal.
ID=157
When a relativistic positronium (Ps) atom penetrates through an amorphous target of small thickness L,

L ≲ γτc, (1)

where γ and c are the Ps relativistic factor and velocity and τ is the Ps "internal" time, the probability W_11 of observing the Ps atom in the ground state after the passage through the target exceeds the one calculated using an exponential decay law. It was shown that for a target with thickness satisfying condition (1) one can use the impact approximation for the calculation of W_11. In this approximation W_11 is determined by the total momenta obtained by the electron and positron of Ps during Ps collisions with atoms of the target. The interaction of relativistic particles with a crystal differs substantially from that with an amorphous target: many processes accompanying particle penetration through a crystal show an orientation dependence upon the initial angle with respect to a crystal axis or plane. Such an orientation dependence also appears for W_11 when Ps penetrates through an aligned crystal. In our report we investigate in detail the interaction of relativistic Ps with a crystal target. In order to find the distribution of electron (positron) momenta after a crystal target, we used the binary collision model for computer simulation of particle passage through a crystal. The most interesting case from the experimental point of view is when Ps enters a crystal at a small angle with respect to a crystallographic plane: the probability W_11 is then maximal and oscillates with crystal thickness. These oscillations are connected with the periodic motion of Ps inside the crystal.

Innovative Algorithms and Tools poster session
Galina SHILO
Nat. Technical Univ., Zaporizhzhia
gshilo@zstu.edu.ua
Creating and Estimating Interval Models.
Co-authors: V. Krischuk, N. Gaponenko
ID=169
Interval models have found wide application in analyzing experimental results, estimating functions and calculating tolerances. One of the advantages of interval models is the possibility of taking the nonlinearity of a function into account within linear interval models. This approach considerably simplifies optimization procedures. The linear interval models are created by means of interpolation of nonlinear functions in two directions from the point of rated values. As a result two hypersurfaces are obtained. The precision of the interval model increases when exterior interpolation is used from the points where the function reaches its maximum and minimum values. These points are defined by the results of the interior interpolation. The interval functions obtained as a result of these interpolations are combined into interval structures having floating intervals. These structures can be transformed into interval structures with floating bounds and twins, which allow functions to be estimated at the endpoints and beyond the bounds. A probable application of interval structures is the description of modifications of functions during the life cycle of objects. Such research is gaining popularity in connection with the development of CALS technology. In this case the interval models are converted into interval structures with floating bounds because of the interval coefficients of exposures.
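
For readers unfamiliar with interval models, a minimal Python sketch of the basic object involved follows: guaranteed enclosures propagated through arithmetic. The Interval class and the divider example are illustrative assumptions, not the authors' implementation; note how the naive enclosure overestimates when a variable enters an expression twice, which is the kind of nonlinearity the linear interval models above are designed to handle better.

    from dataclasses import dataclass

    @dataclass
    class Interval:
        lo: float
        hi: float
        def __add__(self, o): return Interval(self.lo + o.lo, self.hi + o.hi)
        def __mul__(self, o):
            p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
            return Interval(min(p), max(p))

    def recip(i):
        # reciprocal, defined here only for intervals of positive numbers
        assert i.lo > 0
        return Interval(1.0 / i.hi, 1.0 / i.lo)

    # tolerance analysis of a resistor divider Vout = Vin * R2 / (R1 + R2)
    # with 5 % tolerances; the enclosure is guaranteed but pessimistic because
    # R2 enters the expression twice (the dependency effect)
    R1, R2 = Interval(950.0, 1050.0), Interval(1900.0, 2100.0)
    Vin = Interval(4.95, 5.05)
    print(Vin * R2 * recip(R1 + R2))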

Innovative Algorithms and Tools poster session
Alexander BOGDANOV
Inst. for High-Performance Computing and Inf. Systems, S-Petersburg
bogdanov@hm.csa.ru
Some New Approach to Quantum System Evolution
Simulation.
Co-authors: A. Gevorkyan, E. Stankova
ID=175
Well-known problems in numerical algorithms for quantum system simulation come from the fact that the theory of such systems is formulated in terms of PDEs. We study in detail the possibilities offered by another formulation of quantum mechanics, in terms of functional integrals. The corresponding algorithms are proposed and realized for some modern supercomputer architectures. We discuss situations in which difficulties arise in the new approach. It is found that the origins and influence of the difficulties in the standard and new approaches are different, so the approaches can be complementary. From this point of view some situations where the standard approach is unsuitable are discussed; we propose a possible interpretation of the functional integral formulation and argue that such an approach can be more suitable in certain situations where the standard one leads to paradoxes. One such situation is analyzed in particular, and the contradiction between the deterministic nature of the standard PDE formulation and certain chaotic effects is illuminated.

Innovative Algorithms and Tools poster session
Arcady RADCHENKO
Inst. of Informatics and Automata RAS, S-Petersburg
radch@gw2.spiiras.nw.ru
Transmembrane Receptive Dimers as Molecular Triggers Having Chemical and Electrical Inputs.
ID=201
The biophysical processes and mechanisms of neural memory based on conformational changes of the soma-dendrite membrane are studied. The transformation of changes in endogenous activity is determined by afferent signals and their compliance with the combinatorial and geometrical specificity of the synaptic environment of conformation loci. The array of such loci and their responses to chemical and electrical stimulation is analyzed. A relationship is found between biomolecular and neurophysiological phenomena, including the conformity between the three conformation states of receptive clusters and the gating charge function that defines the closed, open and inactivated states of ion channels. The inactivation is connected with engram formation. The selective properties of conformation loci are described with an information model, which allows one to understand engram writing/reading processes and to evaluate memory recognition properties. The accuracy of the writing-reading function was studied and the parameters of memory were optimized by the criterion of accuracy.

Innovative Algorithms and Tools poster session
Vera ZELENINA
S-Petersburg St. Univ.
miksha@pobox.spbu.ru
On Interval Approach to Problems of Radio Engineering
and Telecommunication.
Co-authors: G. Men'shikov
ID=207
The interval approach is one of the mathematical tools for taking into account the incompleteness of information about the properties of phenomena and systems. It consists of choosing a zone for the incompletely known characteristics and of finding the set of system responses over all possible realizations of the characteristics lying in this zone or in some enclosure of it. In specific forms this approach has been discussed since the 1960-70s, but scientific interest in it has increased greatly during the last 10-15 years. Two examples of constructing interval methods are presented in this paper. The first one applies to the transmission of digital signals along a linear channel. The interval model takes into account their instability caused by bounded noise and the impact of the pre-history of the signals. The second one applies to a model of radio and television signals in which the set of modulating functions is defined on the basis of velocity ideas instead of frequency ones.

Innovative Algorithms and Tools poster session
Vitaly LEVIN
Technological Institute, Penza
levin@pti.ac.ru
Optimization under Conditions of Interval Indeterminacy.
ID=209
The modern theory of optimization is based on complete definiteness of the function to be optimized. In real problems, however, the function to be optimized contains some indeterminacy, which makes solving such problems more difficult. A method for solving optimization problems under conditions of interval indeterminacy is proposed. It is based on a method of interval number comparison that logically generalizes the comparison of real numbers. This method reduces the solution of the interval optimization problem to the solution of two corresponding exact problems. The decomposition of the given problem into two problems and the union of their solutions are performed according to definite rules, which follow from the interval number comparison rules.
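
A minimal sketch of the reduction described above, assuming a simple lexicographic comparison rule on (lower bound, upper bound); the toy objective functions are illustrative assumptions, not the author's rules:

    import numpy as np

    def f_lo(x):  # lower bounding function of the uncertain objective
        return (x - 1.0) ** 2
    def f_hi(x):  # upper bounding function of the uncertain objective
        return (x - 1.0) ** 2 + 0.5 + 0.1 * np.abs(x)

    xs = np.linspace(-5.0, 5.0, 10001)
    x1 = xs[np.argmin(f_lo(xs))]   # exact problem 1: minimise the lower bound
    x2 = xs[np.argmin(f_hi(xs))]   # exact problem 2: minimise the upper bound

    # union of the two solutions under a lexicographic comparison of
    # (lower bound, upper bound) -- one possible interval comparison rule
    best = min([x1, x2], key=lambda x: (f_lo(x), f_hi(x)))
    print("chosen x:", best, "objective in", (f_lo(best), f_hi(best)))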

Innovative Algorithms and Tools poster session
Mikhail BOGOLYUBSKY
IHEP, Protvino
bogolyubsky@mx.ihep.su
γ/π0 Separation in the PHOS with a Neural Network.
Co-authors: Yu. Kharlov, S. Sadovsky
ID=263
A neural network method is developed to separate direct photons from the neutral pion background. The algorithm is based on analysis of the cluster shape in the PHOS with respect to the main axes of the energy-weighted two-coordinate tensor of the cluster. The proposed method allows one to find a limited number of variables which carry enough information to train the neural network for effective γ/π0 separation. The method was applied to Monte Carlo events in the PHOS. It was found that the probability of misidentifying a neutral pion as a photon is at the level of a few percent in the pion energy range of 30-120 GeV, with a relatively high efficiency of correct identification of photons as isolated photons in the same energy range.
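
The shape variables in question can be illustrated with a short sketch: the energy-weighted second-moment tensor of a cluster and its eigenvalues (the dispersions along the main axes), which would feed the neural network. The toy cluster below is an assumption for illustration, not the PHOS geometry:

    import numpy as np

    def shape_variables(x, y, e):
        # energy-weighted centre and second-moment tensor of one cluster
        w = e / e.sum()
        dx, dy = x - np.sum(w * x), y - np.sum(w * y)
        t = np.array([[np.sum(w * dx * dx), np.sum(w * dx * dy)],
                      [np.sum(w * dx * dy), np.sum(w * dy * dy)]])
        lam = np.linalg.eigvalsh(t)      # dispersions along the main axes
        return lam[::-1]                 # (long axis, short axis)

    # toy cluster: two overlapping showers merged into one cluster, so the
    # long-axis dispersion is much larger than the short-axis one
    rng = np.random.default_rng(1)
    x = np.concatenate([rng.normal(0.0, 1.0, 500), rng.normal(3.0, 1.0, 500)])
    y = rng.normal(0.0, 1.0, 1000)
    e = rng.exponential(1.0, 1000)
    print(shape_variables(x, y, e))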

Innovative Algorithms and Tools poster session
Lassi TUURA
CERN & Northeastern Univ., Geneve
lassi.tuura@cern.ch
Interactive Data Analysis Using IGUANA With CMS, D0,
L3 and GEANT4 Examples.
Co-authors: G. Alverson, I. Osborne, L. Taylor
ID=282
IGUANA is a generic interactive visualisation framework. It provides powerful user-interface and visualisation primitives in a way that is not tied to any particular physics experiment or detector design. We describe interactive visualisation tools built for GEANT4 and GEANT3, and for the CMS, D0 and L3 experiments using this framework. We cover the features of the graphical user interfaces, 3D and 2D graphics, various textual, tabular and hierarchical data views, and integration with the application through control panels and a command line.

Innovative Algorithms and Tools poster session
Andre ANJOS
Fed. Univ., Rio de Janeiro
Andre.dos.Anjos@cern.ch
Neural Particle Discrimination for Triggering Interesting
Physics Channels Using Calorimetry Data.
Co-authors: J. Seixas
ID=311
This article introduces a triggering scheme for high input rate processors, based on neural networks and calorimeter data. The technique is applied to the electron/jet discrimination problem present at the Second Level Trigger of the ATLAS detector, which is being constructed at CERN for the LHC. The proposed solution is based on describing the energy deposited in each calorimeter segment in the region of interest in the form of concentric ring sums, so that both a high compaction rate and a high discrimination efficiency can be achieved. The neural discriminator is shown to outperform the adopted reference algorithm, both in terms of discrimination efficiency (at a 2 kHz background rate, the neural discriminator reaches an electron efficiency above 98%) and performance (it executes in half the time required by the standard processing), making it a good candidate algorithm for final implementation in the experiment.
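
A minimal sketch of the ring-sum compaction described above, assuming a simple square grid of cells; the grid size and ring granularity are illustrative, not ATLAS parameters:

    import numpy as np

    def ring_sums(cells, n_rings=8):
        iy, ix = np.unravel_index(np.argmax(cells), cells.shape)  # hottest cell
        yy, xx = np.indices(cells.shape)
        ring = np.hypot(yy - iy, xx - ix).astype(int)             # ring index
        sums = np.array([cells[ring == k].sum() for k in range(n_rings)])
        return sums / sums.sum()                                  # normalised

    rng = np.random.default_rng(2)
    roi = rng.exponential(0.1, (16, 16))    # toy region of interest, one layer
    roi[8, 8] = 50.0                        # compact electromagnetic core
    print(ring_sums(roi))                   # falls off fast for an "electron"

The ring sums are roughly rotation invariant around the shower axis, which is what makes the compaction cheap without losing the radial profile the discriminator needs.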

Innovative Algorithms and Tools poster session
Alexander GRIGORIEV
Radiophysical Research Inst., Nizhny Novgorod
iudin@nirfi.sci-nnov.ru
Cellular Automaton Model of Lithosphere Degassing.
Co-authors: D. Iudin
ID=316
We develop a cellular automaton model of the degassing process that treats the lithospheric substratum as a two-component, two-phase system consisting of a supersaturated solid solution and a gas emanating from it. There are so-called zones of transparency (clusters of the gas phase), which grow and coagulate. Such zones arise because of the local connectedness of the intraporous space. The lower part of the transparency area is characterised by a pore pressure deficiency, which is compensated by the durability and elasticity of the solid matrix. On the other hand, the upper part of the area includes zones with redundant pore pressure. The material damage in ultimate deformation zones changes the configuration of the porous space. The gas migrates into upper horizons, which in turn magnifies the pore overpressure and favours further damage of the solid matrix. We present a percolation mechanism that provides direct transformation of potential gravitational energy into the energy of damage.

Advanced Statistical Methods for Data Analysis oral session
Serguei MANAENKOV
PNPI RAS, Gatchina
sman@pcfarm.pnpi.spb.ru
New Method for Data Processing in Polarization
Measurements.
ID=121
Precise formulas are derived for the expected values ⟨ξ⟩, ⟨η⟩ and variances σ_ξ, σ_η of random variables ξ, η describing the spin asymmetry in the same reaction when the background process contribution is zero and appreciable, respectively. The variances of ξ and η are proved to be finite. It is shown that ⟨ξ⟩ is equal to the physical asymmetry. This property of ξ and the finiteness of σ_ξ allow the asymmetry to be found from experimental data without a detailed study of the detector efficiency as a function of all kinematic variables essential for the process under investigation. This is the basis of the proposed method of data treatment, which is illustrated by Monte Carlo calculations. The formula for ⟨η⟩ can be used to estimate the systematic uncertainty of the physical asymmetry due to the background contribution. The high-statistics limits for ⟨η⟩ and σ_η^2 are also considered. It is shown that ⟨η⟩ → A in this limit if the signal-to-background ratio is a positive constant.

Advanced Statistical Methods for Data Analysis oral session
Dmitriy ANIPKO
Novosibirsk St. Univ.
anipko@tornado.nsk.ru
Search for Natural Cuts in Seeking New Physics Phenomena. (Example: Search for Phase Space Areas That Give the Best Signal/Background Ratio, Useful for Experimentation in eγ → Wν.)
Co-authors: I. Ginzburg, A. Pak
ID=139
In future experiments the signal/background ratio (or statistical significance, SS) is to be used as a tool for the estimation of some parameters of New Physics. Exploiting different physics models one can calculate total cross sections, compare them to experimental data and work out the SS value. However, better results can be obtained by introducing phase space cuts. We present several approaches to the search for cuts that are natural for the process considered. The results obtained for eγ → Wν → ℓνν are presented.

Advanced Statistical Methods for Data Analysis oral session
Miroslav MORHAC
JINR & Inst. of Phys. Slovak Acad. Sci., Dubna
fyzimiro@flnr.jinr.ru
Analysis of Coincidence Gamma-Ray Spectra Using
Advanced Background Elimination, Unfolding and Fitting
Algorithms.
Co-authors: V. Matousek, J. Kliman, L. Krupa, M. Jandel
ID=141
Over the last few years much effort has been devoted to developing methods for the analysis of data from large gamma-ray detector arrays, such as GAMMASPHERE, EUROGAM and others. The information contained in high-fold data obtained from these arrays is overwhelming. Analysis of such information-rich high-fold coincidence data requires sophisticated algorithms and methods to extract the physically interesting information from the raw data. The final product of the data processing is the information on the energies and intensities of gamma transitions. Correct elimination of background and improvement of resolution represent problems common to the methods of spectral analysis. In the contribution we present sophisticated methods allowing the efficient determination of the requested part of a multidimensional spectrum, created by partial absorption of gamma rays and Compton scattering in detector materials. After the elimination of background from multidimensional spectra the resolution can be improved by employing an unfolding method. The Gold unfolding method proved to be the most efficient for the decomposition of multiplets in nuclear spectra. The analysis of peaks in spectra then consists of determining the peak positions and subsequent fitting, which yields estimates of the peak shape parameters. The positions of peaks in multidimensional spectra can be determined either by employing peak search methods or simply by finding local maxima of well-separated peaks after decomposition. The positions of the found peaks can be fed as initial estimates into a fitting procedure. The aim of the contribution is to present an algorithm applicable to fitting a large number of peaks and peak shape parameters in both one-dimensional and coincidence gamma-ray spectra. Gradient methods based on the inversion of large matrices are not applicable for two reasons: calculation of the inverse matrix is extremely time consuming, and due to the accumulation of truncation and rounding-off errors the result can become worthless. Inversion of large matrices should therefore be avoided wherever possible. The proposed inversion-free algorithms allow large blocks of data and large numbers of parameters to be fitted in reasonable time.
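
The Gold iteration mentioned above has the multiplicative form x_{k+1} = x_k (A^T y) / (A^T A x_k), which keeps the solution non-negative and needs no matrix inversion. A minimal one-dimensional sketch, with an assumed Gaussian response matrix and a toy unresolved doublet:

    import numpy as np

    def gold_unfold(A, y, n_iter=2000):
        ATy, ATA = A.T @ y, A.T @ A
        x = np.full(A.shape[1], y.mean())            # flat positive start
        for _ in range(n_iter):
            x *= ATy / np.maximum(ATA @ x, 1e-12)    # multiplicative Gold step
        return x

    n = 128
    i = np.arange(n)
    A = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 3.0) ** 2)  # response matrix
    A /= A.sum(axis=0)
    truth = np.zeros(n); truth[60] = 100.0; truth[68] = 80.0   # close doublet
    y = A @ truth
    print(sorted(np.argsort(gold_unfold(A, y))[-2:]))          # ~ [60, 68]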

Advanced Statistical Methods for Data Analysis oral session
Ararat VARDANYAN
Physics Inst., Yerevan
aro@crdlx5.yerphi.am
Multivariate Methods of Data Analysis in Cosmic Ray
Astrophysics.
Co-authors: A. Chilingarian
ID=148
Each time a new type of detector has been turned toward the Universe, it has revealed details of the cosmos that do not show up through the eyepiece of an optical telescope. Indeed, every part of the electromagnetic spectrum has surprised astronomers in one way or another. Only the simultaneous detection of all 25 orders of magnitude in energy by different kinds of detectors, from radio telescopes to giant arrays measuring ultrahigh-energy cosmic rays, will allow us to understand the physics of such exotic objects as black holes and neutron stars, and of such energetic processes as supernova explosions and gamma-ray bursts. Though each experimental device created to detect a new type of radiation is a technical breakthrough, we also need an intellectual breakthrough to understand and handle the abundant multidimensional information available from numerous sensors measuring various types of particles. One of the most important problems in physical inference from multivariate measurements is the development of reliable statistical procedures dealing with information from modern multipurpose experimental installations. Nowadays, when the multidimensionality of physical phenomena is well recognized and experimental techniques have matured enough to measure many parameters simultaneously with high precision, the necessity of adequate multivariate analysis methods is apparent. This report will present a coherent system of multivariate statistical methods for the analysis of data of stochastic nature. All stages of analysis, from preprocessing and the indication of outliers to sophisticated physical inference on the theoretical models under consideration, will be presented with numerous examples of application. The most general framework in which to formulate solutions to physical inference problems in Cosmic Ray Astrophysics experiments is a statistical one, which recognizes the probabilistic nature both of the physical processes of cosmic radiation propagation and of the form in which the data analysis results should be expressed. To make the conclusions about the investigated physical phenomena more reliable and significant we have developed a unified framework of statistical inference, based on nonparametric models, in which various nonparametric approaches and Neural Networks are implemented and compared. In this context it is necessary to mention that we consider Neural information technology not as a "black box", but as an extension of the conventional nonparametric technique of statistical inference. The Analysis and Nonparametric Inference (ANI) program package is the software realization of our concept and an appropriate tool for physical inference in High Energy Cosmic Ray Astrophysics experiments. During the last 10 years the ANI package has been updated and used intensively for comparisons of different nonparametric techniques and for the data analysis of the world's biggest experiments, such as the PAMIR emulsion chamber collaboration, the Whipple air Cherenkov telescope, and the KASCADE and ANI surface installations for detecting Extensive Air Showers.

Advanced Statistical Methods for Data Analysis oral session
Sergei REDIN
BINP SB RAS, Novosibirsk
redin@inp.nsk.su
Evaluation of Con dence Intervals for Parameters of
Chi-Squared Fit.
ID=164
While working on data analysis for the ongoing muon (g-2) experiment at Brookhaven National Laboratory, the author derived an equation for the statistical fluctuations of chi-squared fit parameters as a function of the statistical fluctuations of the number of events within individual channels of the fitted histogram. This equation and its applications to the muon (g-2) experiment were reported at the Advanced Statistical Techniques in Particle Physics workshop in Durham, England, on March 18-22, 2002, along with other statistical equations and techniques used in our experiment. In this presentation I show how one can use this formula for the statistical fluctuations of parameters to reconstruct the probability density function (PDF) of those fit parameters and hence evaluate the corresponding confidence intervals if required. In the course of the evaluations I use orthogonal polynomials which are very useful for describing small distortions of the Gaussian distribution. I also present my attempt to use the Taylor expansion coefficients of chi-square as a function of the parameters in the vicinity of the minimum for the same purpose of PDF reconstruction.
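
The author's equation itself is not reproduced in the abstract; the sketch below only illustrates the standard linearised propagation that such a formula refines, δθ = (J^T W J)^{-1} J^T W δy, on an assumed exponential toy model (not the (g-2) fit):

    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 5.0, 50)              # bin centres
    mu = 200.0 * np.exp(-t / 2.0)              # expected contents, N0=200, tau=2

    # Jacobian of N0*exp(-t/tau) w.r.t. (N0, tau) at the true parameters
    J = np.column_stack([np.exp(-t / 2.0),
                         200.0 * t / 4.0 * np.exp(-t / 2.0)])
    W = np.diag(1.0 / mu)                      # chi-squared weights
    cov = np.linalg.inv(J.T @ W @ J)           # predicted parameter covariance
    print("predicted sigmas:", np.sqrt(np.diag(cov)))

    # Monte Carlo check: map Poisson bin fluctuations to parameter fluctuations
    L = cov @ J.T @ W
    thetas = np.array([L @ (rng.poisson(mu) - mu) for _ in range(5000)])
    print("MC sigmas:      ", thetas.std(axis=0))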

Advanced Statistical Methods for Data Analysis oral session
Fyodor TKACHOV
INR RAS, Troitsk
ftkachov@ms2.inr.ac.ru
Quasioptimal Observables and the Optimal Jet Algorithm.
ID=166
The recently discovered connection between the methods of maximum likelihood and generalized moments allows one to tackle in a systematic fashion the problem of signal selection and parameter measurement in situations where maximum likelihood is inapplicable. It also offers a scientific way to compare different jet algorithms. First results of such comparisons are reported.

Advanced Statistical Methods for Data Analysis oral session
Vladimir VINOGRADOV
JINR, Dubna
kulcicki@nu.jinr.ru
Statistical Multidimensional Separation of the Electrons
and Hadrons in the Tile Iron-Scintillator Hadronic
Calorimeter of the ATLAS at the LHC (for the ATLAS
TILECAL collaboration).
Co-authors: Yu. Kulchitsky
ID=167
The ATLAS detector under construction at the Large Hadron Collider at CERN will have great physics discovery potential, in particular in the detection of a heavy Higgs boson. Calorimeters will play a crucial role in it. The key question for calorimetry is the absolute energy scale calibration, which should be known to an accuracy of 1%. The ATLAS hadronic TILECAL calorimeter will contain 5120 cells which will be read out by 10240 PMTs. The energy deposited in a single cell can vary in the wide range from 15 MeV to 1.5 TeV. For each cell the calibration constants, which define the relationship between the calorimeter signals, expressed in picoCoulombs, and the energy of the absorbed particles, must be determined. It is important to have events with known initial particles. We review the known calorimetric methods of particle separation, in particular the neural network methods, and suggest a new statistical multidimensional combined algorithm for selecting electron and hadron events, which is based on the different spatial developments of electromagnetic and hadronic showers. The algorithm has been tested on experimental data obtained at the CERN SPS in the 10-300 GeV energy range for the ATLAS Tile hadronic calorimeter, and it demonstrated the correctness of the particle selection and of the determination of the calibration constants. The algorithm has also been tested on the GEANT Monte Carlo simulation. It has been implemented in the ATLAS Tile calorimeter data analysis programme using the PAW and CERNLIB packages. The proposed algorithm can be used for data analysis from modern combined calorimeters such as the ATLAS and CMS detectors at the LHC and CDF and D0 at the TEVATRON.

Advanced Statistical Methods for Data Analysis oral session
Evgenij KOSAREV
Kapitza Inst. for Physical Problems RAS, Moscow
kosarev@kapitza.ras.ru
Superresolution Chromatography.
Co-authors: K. Muranov
ID=211
A method for improving the resolution of chromatographic analysis based on deriving the point-spread function of a chromatographic column, i.e. the chromatogram of an individual compound, is described. A system of two functions, the chromatogram of the substance analyzed and the point-spread function of the chromatographic column, in combination with noise statistics enables the application of the RECOVERY signal reconstruction software package in order to obtain a superresolution chromatogram. Superresolution means a resolution better than that determined by the width of the point-spread function. The proposed method is tested on bovine serum albumin chromatography with the use of gel filtration. The resolution obtained exceeds that reached with high-performance liquid chromatography (with the cost of the instrument system lower by a factor of 15-20).
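
The abstract does not spell out the RECOVERY algorithm; as a stand-in, the sketch below illustrates the general idea with a Richardson-Lucy-style iteration, assuming a Gaussian point-spread function and a toy two-compound chromatogram:

    import numpy as np

    def richardson_lucy(y, psf, n_iter=2000):
        x = np.full_like(y, y.mean())
        for _ in range(n_iter):
            conv = np.convolve(x, psf, mode="same")
            x *= np.convolve(y / np.maximum(conv, 1e-12), psf[::-1], mode="same")
        return x

    w = np.arange(61)
    psf = np.exp(-0.5 * ((w - 30) / 8.0) ** 2)     # measured column response
    psf /= psf.sum()
    truth = np.zeros(200); truth[95] = 1.0; truth[110] = 0.7   # two compounds
    y = np.convolve(truth, psf, mode="same")       # observed chromatogram
    print(sorted(np.argsort(richardson_lucy(y, psf))[-2:]))  # doublet resolved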

Advanced Statistical Methods for Data Analysis oral session
Peter ZRELOV
JINR, Dubna
zrelov@jinr.ru
Principal Component Analysis of Network Traffic: the "Caterpillar"-SSA Approach.
Co-authors: I. Antoniou, Victor Ivanov, Valery Ivanov
ID=242
We applied Principal Component Analysis, specifically the "Caterpillar"-SSA approach, to network traffic measurements. This approach proved to be very efficient for understanding the main features of the terms forming the network traffic. The statistical analysis of the leading components demonstrated that the first few components already form the fundamental part of the information traffic. The residual components play the role of small irregular variations which do not fit into the basic part of the network traffic and can be interpreted as stochastic noise. Based on the characteristic features of the residual components, we developed a statistical method that provides the selection and elimination of the residuals from the whole set of principal components.
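
A minimal sketch of the "Caterpillar"-SSA decomposition described above: embed the series in a trajectory (Hankel) matrix, take its SVD, and reconstruct the basic part from the few leading components, leaving the residual as noise. The window length and toy traffic series are assumptions:

    import numpy as np

    def ssa_smooth(series, window, n_keep):
        n = len(series)
        K = n - window + 1
        X = np.column_stack([series[i:i + window] for i in range(K)])  # Hankel
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        Xk = (U[:, :n_keep] * s[:n_keep]) @ Vt[:n_keep]   # leading components
        rec, cnt = np.zeros(n), np.zeros(n)
        for j in range(K):                                # diagonal averaging
            rec[j:j + window] += Xk[:, j]
            cnt[j:j + window] += 1.0
        return rec / cnt

    rng = np.random.default_rng(4)
    t = np.arange(1000)
    traffic = 10.0 + 3.0 * np.sin(2 * np.pi * t / 100) + rng.normal(0, 1, 1000)
    basic = ssa_smooth(traffic, window=120, n_keep=3)     # fundamental part
    print("residual std:", np.std(traffic - basic))       # irregular remainder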

Advanced Statistical Methods for Data Analysis oral session
Victor IVANOV
JINR, Dubna
ivanov@jinr.ru
On a Statistical Model of Network Traffic.
Co-authors: I. Antoniou, Valery Ivanov, P. Zrelov
ID=243
We applied nonlinear analysis to traffic measurements obtained at the input of a medium-size local area network. Reliable values of the time lag and embedding dimension enabled the application of a layered neural network for the identification and reconstruction of the underlying dynamical system. The trained neural network reproduced the statistical distribution of the real data, which is well fitted by the log-normal form. Principal Component Analysis of the traffic series demonstrated that the first few components already form the fundamental part of the network traffic, while the residual components play the role of small irregular variations that can be interpreted as stochastic noise. The applicability to network traffic of the scheme developed by A. Kolmogorov for the homogeneous fragmentation of grains is discussed.

Advanced Statistical Methods for Data Analysis oral session
Galina MANEVA
Inst. for Nucl. Res. and Nucl. Energy, Sofia
maneva@inrne.bas.bg
Neural Nets for Ground Based Gamma-Ray Astronomy.
Co-authors: G. Maneva, J. Procureur, P. Temnikov
ID=258
The application of artificial neural networks (ANN) to data treatment in the domain of atmospheric Cherenkov gamma-ray telescopes is considered. The main problems arising from the specifics of these experiments, such as the low signal, pairs of ON-OFF observations, the instability of the background events and their influence on the analysis results, are discussed. A method for discriminating the gamma-induced atmospheric showers from the huge hadronic background is proposed.

Advanced Statistical Methods for Data Analysis oral session
Galina CHABRATOVA
JINR, Dubna
gshabrat@sunhe.jinr.ru
A New Approach to Cluster Finding and Hit
Reconstruction in Cathode Pad Chambers and its
Development for the Forward Muon Spectrometer of
ALICE.
Co-authors: A. Zinchenko
ID=262
A new approach to cluster and hit finding in the muon chambers of the ALICE forward spectrometer has been developed. It is based on the maximum likelihood - expectation maximization (MLEM) algorithm, or Bayesian unfolding. The method improves the hit reconstruction accuracy for background-contaminated events by deconvolving the pad charge distribution according to the above-mentioned technique.
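
A minimal one-dimensional sketch of the MLEM iteration named above, applied to a toy version of the problem: unfolding measured pad charges back to hit intensities through an assumed Gaussian pad response (not the ALICE code):

    import numpy as np

    def mlem(A, y, n_iter=300):
        x = np.ones(A.shape[1])
        norm = A.sum(axis=0)                      # sensitivity per source bin
        for _ in range(n_iter):
            x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / norm
        return x

    pads = np.arange(32)
    A = np.exp(-0.5 * ((pads[:, None] - pads[None, :]) / 1.5) ** 2)  # response
    truth = np.zeros(32); truth[12] = 100.0; truth[15] = 60.0  # two close hits
    y = A @ truth                                 # measured pad charges
    print(sorted(np.argsort(mlem(A, y))[-2:]))    # ~ [12, 15]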

Advanced Statistical Methods for Data Analysis oral session
Jose SEIXAS
Fed. Univ., Rio de Janeiro
seixas@lps.ufrj.br
Principal Curves for Identifying Outsiders in Experimental
Tests with Calorimeters.
Co-authors: P. Vitor, M. da Silva
ID=307
Calorimeters play a major role in modern collider experiments. Typically, calorimeter prototypes are tested experimentally using particle beams of different energies. Despite the quality of the particle beams available nowadays, particle contamination is unavoidable and can even constitute the major part of the acquired data set. This contamination usually implies a longer testbeam period for a given physics programme, as the acquired data sets have to be enlarged in order to keep enough statistics for the interesting physics once contamination events are removed offline. In this paper, principal curves are used to identify such contamination, so that pure samples of electrons, pions and muons can be obtained. Principal curves can be considered a generalization of nonlinear principal component analysis, in which the data space is represented by a parametrized curve. The algorithm used to find the principal curves for each type of particle was based on the k-segments algorithm of vector quantization. The method is applied to data acquired in beam tests of the Tilecal, the hadronic calorimeter of the ATLAS detector, which is being developed for the LHC. A Module 0 prototype from the barrel section produces 46 readout cells, which form the input data vectors. By observing the distances from the data samples to the curves determined for each particle class, the incoming particle can be identified. A classical method based on the specific characteristics of the energy deposition profiles of the particle classes was used to validate the experimental results. It is shown that the agreement between the classical method and principal curves is better than 94%.
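
The classification step can be sketched compactly: given a fitted polyline (piecewise-linear principal curve) per particle class, assign an event to the class whose curve is nearest. The 2-D toy curves below are assumptions; the real input vectors have 46 cells and the curves come from the k-segments fit:

    import numpy as np

    def dist_to_segment(p, a, b):
        ab = b - a
        t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
        return np.linalg.norm(p - (a + t * ab))

    def dist_to_curve(p, polyline):
        return min(dist_to_segment(p, polyline[k], polyline[k + 1])
                   for k in range(len(polyline) - 1))

    curves = {                       # toy piecewise-linear principal curves
        "electron": np.array([[0.0, 0.0], [1.0, 2.0], [2.0, 4.0]]),
        "pion":     np.array([[0.0, 1.0], [2.0, 1.5], [4.0, 1.0]]),
        "muon":     np.array([[0.0, 0.2], [4.0, 0.2]]),
    }

    event = np.array([1.2, 2.1])     # one event in the toy feature space
    print(min(curves, key=lambda c: dist_to_curve(event, curves[c])))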

Advanced Statistical Methods for Data Analysis oral session
Sergey ALEKHIN
IHEP, Protvino
alekhin@sirius.ihep.su
Comparative Study of the Uncertainties in Parton
Distribution Functions.
ID=313
Comparison of the methods used to extract the uncertainties in parton distribu-
tions is given, including their statistical properties and practical issues of implemen-
tation. Advantages and disadvantages of di erent methods are illustrated using the
examples based on the analysis of real data. Available PDFs sets with associated
uncertainties are reviewed and critically compared.

Advanced Statistical Methods for Data Analysis poster session
Sergei BITYUKOV
IHEP, Protvino
Serguei.Bitioukov@cern.ch
Signal Significance in the Presence of Systematic and
Statistical Uncertainties.
Co-authors: N. Krasnikov
ID=102
The incorporation of uncertainties into calculations of signal significance in planned experiments is a topical task. Several approaches to this problem are discussed. We present a procedure for taking into account the systematic uncertainty related to inexact knowledge of the signal and background cross sections. A method for accounting for the statistical uncertainties in the determination of the mean numbers of signal and background events is proposed. The corresponding algorithms and programs are described.
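
The authors' procedure is not spelled out in the abstract; the sketch below only illustrates the generic toy Monte Carlo recipe, using the simple s/sqrt(b) figure of merit as a stand-in and assumed uncertainty values:

    import numpy as np

    rng = np.random.default_rng(5)

    def significance_with_systematics(s, b, ds_rel, db_rel, n_toys=100000):
        # smear expected counts by the relative cross-section uncertainties
        s_toy = s * rng.normal(1.0, ds_rel, n_toys)
        b_toy = np.clip(b * rng.normal(1.0, db_rel, n_toys), 1e-9, None)
        sig = s_toy / np.sqrt(b_toy)
        return sig.mean(), sig.std()

    m, spread = significance_with_systematics(s=50.0, b=100.0,
                                              ds_rel=0.10, db_rel=0.20)
    print(f"significance = {m:.2f} +- {spread:.2f}")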

Advanced Statistical Methods for Data Analysis poster session
Dmitriy ANIPKO
Novosibirsk St. Univ.
anipko@tornado.nsk.ru
Estimation of the Initial Particle Spectrum Uncertainty Contribution to the Overall Statistical Error in the Search for Anomalous Interactions (Example: eγ → Wν).
Co-authors: I. Ginzburg, A. Pak
ID=140
Among the various sources contributing to the statistical error of simulated data is the uncertainty of the initial particle spectra. We suggest an approach to account for this contribution, based on exaggerating it and subsequently interpolating. In the physical example considered the problem is of high importance, since at Photon Colliders the initial photons are products of Compton backscattering. The same approach looks useful for the estimation of observable effects at hadron colliders like the Tevatron and LHC.

Advanced Statistical Methods for Data Analysis poster session
Dmitriy KUSHNAREV
Inst. of Solar-Terrestrial Physics, Irkutsk
ds k@iszf.irk.ru
On the Use of the Texas Instruments TMS320C6701 Signal Processor for the Statistical Processing of Irkutsk Incoherent Scatter Radar Experimental Data.
ID=149
The incoherent scattering (IS) of radiowaves is one of the most informative ground-based techniques for upper atmosphere and ionosphere diagnostics. Using this technique one can determine the basic parameters of the ionospheric plasma in a range of heights from 100 up to 1000 km. Investigations with the IS technique use huge, powerful radars with highly sensitive receiving devices, allowing very weak radio signals scattered by the thermal irregularities of the ionospheric plasma to be registered. IS radars are expensive and complex instruments, and at the moment there are only 9 observatories in the world equipped with this tool. One of these observatories is the Irkutsk IS radar, which was created in the 1990s and has essentially extended the longitudinal chain of USA, European and Japanese radars. In the paper we present real-time techniques for the statistical processing of the large data volumes obtained with the Irkutsk IS radar. The construction and principles of operation of the high-performance Texas Instruments TMS320C6701 signal processor used for these tasks are described.

Advanced Statistical Methods for Data Analysis poster session
Lyailya KARIMOVA
Inst. of Mathematics, Almaty
makarenko@math.kz
Diagnosis of Stochastic Fields by Mathematical Morphology
and Computational Topology Methods.
Co-authors: N. Makarenko
ID=183
The talk focuses on the diagnosis of extended observations arising in different branches of science. In many cases, the nonlinearity and heterogeneity of natural processes produce fields with stochastic properties. The evolution of the field represents a sequence of patterns. Direct generalization of the Takens algorithm to the patterns leads to computational difficulties. To extract dynamical information characterizing the spatio-temporal chaos one can use the topological complexity of the patterns (maps). Morphological Image Analysis (MIA) allows the geometry and topology of the maps to be extracted by means of the Minkowski functionals. Then, the time series of the functionals can be used for the reconstruction of a universal model by embedding techniques. Moreover, it is possible to extract additional information concerning the dynamical scenario by investigating the change of connectedness of the map at different resolutions. The change is estimated by the index of disconnectedness, which is equivalent to the box dimension for simple sets. The application of these methods in seismology, ecology and solar physics is demonstrated.

Advanced Statistical Methods for Data Analysis poster session
Vladimir ANIKEEV
IHEP, Protvino
anikeev@mx.ihep.su
What Do We Want, What Do We Have, What Can We Do? (Unfolding in the LHC Era).
ID=208
Nonparametric estimation of the measured distribution is essential before extracting parameters if a model exists. We consider the model of measurements in the general form Ax = y. Specific to the LHC era in the data analysis aspect are errors in both the operator A and the right side y. Several algorithms to solve the problem are discussed and a review of the existing software is given.
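
One standard family of answers to the Ax = y problem with noisy A and y is regularised least squares; the sketch below shows a Tikhonov variant on a toy smearing problem. The smearing matrix and regularisation strength are illustrative assumptions, not the author's choices:

    import numpy as np

    def tikhonov_unfold(A, y, alpha=1e-2):
        n = A.shape[1]
        return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

    rng = np.random.default_rng(7)
    i = np.arange(40)
    A_true = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 2.0) ** 2)
    A_true /= A_true.sum(axis=0)
    truth = np.exp(-0.5 * ((i - 20.0) / 5.0) ** 2)

    A = A_true + rng.normal(0.0, 1e-3, A_true.shape)  # error in the operator
    y = A_true @ truth + rng.normal(0.0, 1e-2, 40)    # error in the right side
    x = tikhonov_unfold(A, y)
    print("max deviation from truth:", np.abs(x - truth).max())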

Advanced Statistical Methods for Data Analysis poster session
Alexandra KOLBASOVA
Fed. St. Unitary Enterprise 'Nedra', Yaroslavl
log@nedra.ru
About a Problem of Interpretation and Forecasting of
Time-Spatial Variations of Geophysical Fields by Results of
Deep Scientific Drilling.
Co-authors: O. Esipko, A. Rosaev
ID=224
During the drilling of deep boreholes, both scientific and industrial, and in the processing of the obtained results, a number of problems connected with the analysis of large data files arise. The application of modern mathematical methods is very important here for: forecasting rock properties below the maximal depth for the purpose of accident prevention during drilling, and for judging the expediency of continued drilling; extracting a useful signal on a background of random interference; and optimizing the methods of averaging, smoothing and the allocation of homogeneous structural units, in view of the specificity of geologic-geophysical research. Since 1992 a geo-laboratory has operated at the Vorotilovskaya deep well, where regular measurements of geophysical fields are carried out, which has allowed problems about their change in time to be posed. In a more comprehensive sense, the geophysical forecast also includes the problem of earthquake prediction and the analysis of variations of the complex of factors united by the concept of space weather (solar activity, variations of the geomagnetic field, changes in the condition of the atmosphere, etc.). Despite a certain success in solving the formulated problems, there is a need for constant improvement of the mathematical methods of data analysis.

Advanced Statistical Methods for Data Analysis poster session
Dmitry IUDIN
Radiophysical Research Inst., Nizhny Novgorod
iudin@nirfi.sci-nnov.ru
Multifractality in Ecological Monitoring.
Co-authors: D. Gelashvily
ID=315
In this paper we introduce the concept of multifractality into the problem of ecological monitoring and species diversity estimation. Ecological communities can be considered open and strongly nonequilibrium systems that experience external driving. The process of resource consumption and resource allocation in communities reflects their complex internal structure. The structure is characterized by the number of species, the population, the links between species and the extent of domination. In an experiment we deal with the relative frequencies, or specific numbers of individuals per species, that we find in a sample. We consider every species as a separate box that contains an arbitrary number of individuals and apply the box counting method for the calculation of the relative frequencies. We find that the box number, or number of species, as a function of sample population follows a power law as the population increases; consequently, the species distribution may be considered a fractal set. To estimate species diversity one ordinarily uses well-known diversity indexes, each of which simply introduces a measure in the space of relative frequencies. We offer a multifractal generalization of this routine (see the sketch below). An example from aquatic ecology is considered.
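
A minimal sketch of the box-counting step described above: treat each species as a box, draw growing subsamples from a community, and check that the number of occupied boxes (species) grows as a power of the sample size. The toy community with power-law abundances is an assumption:

    import numpy as np

    rng = np.random.default_rng(6)
    abund = 1.0 / np.arange(1, 501) ** 1.5   # 500 species, power-law abundances
    abund /= abund.sum()

    sizes = np.array([100, 300, 1000, 3000, 10000, 30000])
    counts = [len(np.unique(rng.choice(500, size=n, p=abund))) for n in sizes]

    # slope of log(species) vs log(sample size): the fractal exponent
    slope = np.polyfit(np.log(sizes), np.log(counts), 1)[0]
    print("species counts:", counts, " exponent:", round(slope, 2))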