ISSN 0020-4412, Instruments and Experimental Techniques, 2007, Vol. 50, No. 4, pp. 487-493. © Pleiades Publishing, Ltd., 2007. Original Russian Text © D.I. Orekhov, A.S. Chepurnov, A.A. Sabel'nikov, D.I. Maimistov, 2007, published in Pribory i Tekhnika Eksperimenta, 2007, No. 4, pp. 65-72.

APPLICATION OF COMPUTERS IN EXPERIMENTS

A Distributed Data Acquisition and Analysis System Based on a CAN Bus
D. I. Orekhov (a), A. S. Chepurnov (a), A. A. Sabel'nikov (b), and D. I. Maimistov (a)

(a) Skobel'tsyn Institute of Nuclear Physics, Moscow State University, Vorob'evy gory 1, str. 2, Moscow, 119899 Russia
(b) Russian Research Centre Kurchatov Institute, pl. Kurchatova 1, Moscow, 123182 Russia
Received August 14, 2006; in final form, December 20, 2006

Abstract--A distributed remote-control system for large nuclear physics setups is intended to collect, store, and analyze data arriving from detecting devices and to visualize them via the WEB. The system uses the CAN industrial data transmission network and the DeviceNet high-level protocol. The hardware part is a set of controllers that convert the signals of the detecting devices into a frequency and transmit them in digital form via the CAN network to the host computer. The software implements the DeviceNet protocol stack, which ensures the data acquisition and transmission. The user interface is based on dynamic WEB pages; server scripts generate these pages and provide graphical visualization of the data. The system is used for monitoring the dark noise of the photomultiplier tubes in the BOREXINO neutrino detector (Italy).

PACS numbers: 07.05.Dz, 07.05.Hd, 07.05.Kf, 07.05.Rm

DOI: 10.1134/S0020441207040100

INTRODUCTION

In the process of designing the system for monitoring the dark noise of photomultiplier tubes (PMTs) for the large BOREXINO neutrino detector [1], we faced a set of problems characteristic of any large nuclear physics setup: (i) data acquisition (and subsequent storage), (ii) analysis of the acquired data, (iii) control of the setup, and (iv) maintaining the setup in operating condition. The principle of independent software and hardware modules interacting via standard interfaces is at the heart of the described system.

The BOREXINO detector falls into the category of large systems owing to its size and the number of devices and subsystems used in it. Like the majority of modern physical setups, it has the following salient features: (i) a great number of various sensors are used (in our case, approximately 2500 detecting devices), whose states must be monitored in the course of experiments; (ii) in spite of careful selection of electronic components, there is a probability of their failure, which may result in data loss and distortion of experimental results; (iii) a great amount of miscellaneous raw data is acquired during an experiment and is subsequently processed to extract physical results; (iv) in the dark noise monitoring system of the BOREXINO detector, the basic investigated parameter is the dark noise frequency of 2500 PMTs, which is measured once per second; and (v) since the processes of data acquisition and analysis are separated in time, the acquired and preliminarily processed information must be stored for further analysis.

The analysis of the acquired data is intended to solve the following problems: (i) observation of sporadic effects and critical failures in separate sensors and sensor groups; (ii) observation of correlated changes in sensor groups; (iii) estimation of long-term trends (a decrease or an increase, estimation of the time constant); (iv) estimation of periodic variations using Fourier analysis; (v) spectral analysis of the signals from the sensors; and (vi) investigation of correlations of the quantity under study with other physical characteristics (temperature, vibrations, acoustic noises, electromagnetic disturbances, etc.).

The BOREXINO detector is a complex engineering structure consisting of several subsystems, the separate modules of which are distributed over a large area. In particular, two subsystems can be distinguished in the dark noise monitoring system: (i) one for the detector itself and (ii) one for the muon veto system. Therefore, the BOREXINO detector, like other similar setups, should have built-in facilities for monitoring its operation and should allow for scaling.

The dark noise monitoring system of the BOREXINO detector is auxiliary; it is used for monitoring the state of the detector and its readiness for executing its basic target function (collection of physical data). Nevertheless, the acquired data on the PMT dark noise can also serve as an independent source of physical data [2, 3].


Scientists from 19 laboratories in eight countries are working together in the BOREXINO collaboration. In order to ensure prompt access to the data for all collaboration participants, the system should provide a possibility of remote data treatment. At present, the most convenient way to do this is to use the global Internet, organizing interactive access to the acquired data and visualization of the current state of the setup via a WEB interface. Owing to its architecture, the created system can be adapted for monitoring or collecting data in other large physical setups; hence, it can be considered a platform for building monitoring systems.

In the first part of the paper, we describe the architecture of the created control system for data acquisition and analysis. In the last part, special features of the particular monitoring system for the BOREXINO detector are considered in detail.

ARCHITECTURE OF THE HARDWARE PART OF THE SYSTEM

In industrial systems, where data must be acquired from many analog sensors, the most suitable interfaces are the 4- to 20-mA current loop and the frequency interface, in which the analog parameter is either generated directly as, or converted into, a frequency signal. Such physical parameters as temperature, voltage, and pressure are easily converted into a frequency, which makes the signals convenient to transmit for further processing. When the signal is transmitted in frequency form, it is technically simpler and cheaper to ensure the galvanic isolation required for multichannel data acquisition in large distributed systems. Thus, the universal input converter in the proposed architecture is a module with several frequency-measuring inputs. The modules are unified by an industrial communication network, which makes it possible to construct a hierarchical distributed system and to bring the measuring modules closer to the sensors with frequency outputs.

Of the great number of existing industrial networks, the Controller Area Network (CAN) bus was selected to connect the modules of our system. Owing to its special features (bitwise arbitration, differential signal transmission, and a highly reliable algorithm of error handling and fault confinement), the CAN industrial network is well suited for creating distributed data acquisition, monitoring, and control systems. The CAN bus is characterized by a high data transmission speed (up to 1 Mbit/s) and high noise immunity. The flexibility of CAN stems from the simplicity of connecting CAN modules to the bus and disconnecting them from it; the total number of modules is not limited by the low-level protocol.
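The paper does not describe the low-level programming interface to the CAN adapters; the system itself relies on a DeviceNet protocol stack. Purely as an illustration of frame-level CAN reception, the following sketch assumes a Linux SocketCAN raw socket on a hypothetical interface can0 and prints the identifier and payload length of each received frame; it is not the code of the described system.

    /* Illustrative sketch only: assumes Linux SocketCAN (raw CAN frames),
     * not the DeviceNet stack actually used in the described system. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <net/if.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <linux/can.h>
    #include <linux/can/raw.h>

    int main(void)
    {
        int s = socket(PF_CAN, SOCK_RAW, CAN_RAW);   /* raw CAN socket */
        struct ifreq ifr;
        struct sockaddr_can addr;
        struct can_frame frame;

        strcpy(ifr.ifr_name, "can0");                /* hypothetical interface name */
        ioctl(s, SIOCGIFINDEX, &ifr);

        memset(&addr, 0, sizeof(addr));
        addr.can_family  = AF_CAN;
        addr.can_ifindex = ifr.ifr_ifindex;
        bind(s, (struct sockaddr *)&addr, sizeof(addr));

        /* Receive frames from the slave modules and print id and length. */
        while (read(s, &frame, sizeof(frame)) == sizeof(frame))
            printf("id=0x%03x dlc=%d\n", frame.can_id, frame.can_dlc);

        close(s);
        return 0;
    }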

The current CAN 2.0B standard [4] describes only the two lower layers of the ISO/OSI reference network model [5]: (i) the physical layer and (ii) the data link layer. To build a data transmission system, some high-level (application-layer) protocol must be used. It is convenient to use the DeviceNet or CANopen protocol for this purpose, since both have open specifications and are backed by international standards. The DeviceNet protocol is used in the developed dark noise monitoring system of the BOREXINO detector [6].

The architectural special feature of the system is the possibility of dividing the data acquisition system into an arbitrary number of independent subsystems. In each subsystem, up to 256 frequency channels can be measured by several independent controllers (frequency meters), and communication between them is supported by the DeviceNet protocol. The controllers of each subsystem can have different technical characteristics (the number of channels and the measured frequency band), depending on the specific physical problem; however, they have a common architecture and the same presentation in the system. At present, 8-, 16-, and 64-channel frequency meters are used. It is assumed that all sensors located in one region of the monitored setup are connected to one module. Each frequency meter has a CAN controller and is connected to the CAN network as a slave device; the master device is the control computer of the data acquisition subsystem. Each subsystem of controllers is an independent CAN network. Data from the control computer arrive at the server, where they are stored and issued in response to inquiries from client stations.

While developing the system, we assumed that relatively slowly varying data would be measured, i.e., that the typical variation time of the monitored parameter would be 1 s or more. In this case, the measured frequency can vary from 10 Hz to 0.5 MHz. The frequency is measured with an accuracy of 1 Hz in the middle of the operating range; at the borders of this range, the error is 10%. Standard converter modules connected via the CAN bus can be added to the system in order to measure physical parameters other than frequency; the system is thus expanded by adding modules that use standard high-level protocols of the CAN bus.

The created system realizes all levels of the classical three-level data acquisition and control system [7]: (i) the object access level, (ii) the control station level, and (iii) the service level supporting the graphical interface. It is this architecture that suits large distributed systems best. In addition, it allows one to scale and adapt the data acquisition system to large physical setups of different types. The independent groups of controllers make it possible to group sensors logically according to their location or function. Therefore, the system can be considered a platform for developing similar systems.
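The firmware of the frequency meters is not described in the paper. As a minimal sketch of the measurement principle stated above (a resolution of about 1 Hz when pulses are counted over a 1-s gate), one could count input pulses over a fixed gate time; count_pulses() below is a hypothetical hardware counter readout.

    /* Sketch of gate-time frequency measurement: the frequency is the number
     * of input pulses counted over a fixed gate.  With a 1-s gate the
     * resolution is 1 Hz, matching the accuracy quoted for the middle of the
     * operating range.  count_pulses() is a hypothetical hardware readout. */
    #include <stdint.h>

    extern uint32_t count_pulses(double gate_seconds);   /* hypothetical HW access */

    double measure_frequency_hz(double gate_seconds)
    {
        uint32_t pulses = count_pulses(gate_seconds);
        return (double)pulses / gate_seconds;             /* f = N / T_gate */
    }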


[Fig. 1 shows the four levels of the system: the hardware level (controller subsystems no. 1 and no. 2, each consisting of slave frequency meters); the data acquisition level (for each controller subsystem, a computer with the data acquisition subsystem and a nonphysical data source); the data processing level (a computer with the Dataserver acting as a physical data source and a nonphysical data consumer, plus storage in files or databases); and the data presentation level (WEB servers with the visualization and analysis subsystems implemented as CGI scripts, the equipment status and detector configuration database, the operator console, and online reports).]
Fig. 1. Block diagram of the software of the monitoring system.

ARCHITECTURE OF THE SOFTWARE PART OF THE SYSTEM

Let us consider the general block diagram of the system software for two subsystems of controllers (Fig. 1). The software of the monitoring, data acquisition, and analysis system is based on a multilevel scheme.
Four levels can be distinguished in it: (i) the hardware level, (ii) the data acquisition level, (iii) the data processing level, and (iv) the data presentation level. In addition, six functional subsystems can be distinguished in the software: (i) the data acquisition subsystem, (ii) the physical data simulation subsystem, (iii) the data processing subsystem, (iv) the data storage subsystem, (v) the status monitoring (visualization) subsystem, and (vi) the off-line analysis subsystem.

[Fig. 2 shows a data acquisition computer: data from the DeviceNet slaves are received by the DeviceNet master and written into shared memory, which can alternatively be filled by the simulation subsystem; a monitor provides console output, and the TCP/IP daemon acts as the data source delivering data to consumers.]
Fig. 2. Structure of the data acquisition subsystem.

Figures 2 and 3 show the structures of the data acquisition and data processing subsystems, respectively. The control programs for each subsystem of frequency meters can be located either in one computer or in different computers. Two client-server links are used to transmit data between the levels of the system. One is located between the data acquisition level, where the server is the TCP/IP daemon module, and the data processing level, where the client is the Dataserver program module. The other link is located between the data processing level, where the server is the Dataserver module, and the data presentation level, where the client is the visualization subsystem. Data are exchanged between the client and the server via the standard socket mechanism on the inquiry-answer principle. As can be seen from Fig. 1, the data acquisition, processing, and presentation levels are physically separated and located in different computers; this is due to the nonuniformity of the tasks executed by the system and the separation of the data acquisition and analysis processes in time, and it also balances the load on the components of the system.
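The wire format of the inquiries is not given in the paper. A minimal sketch of the inquiry-answer exchange over a socket, with a hypothetical host address, port, and plain-text request, could look as follows; it illustrates only the exchange pattern between the Dataserver and the TCP/IP daemon, not their actual code.

    /* Minimal inquiry-answer client sketch (hypothetical host, port, and
     * request format); illustrates the socket exchange pattern only. */
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void)
    {
        int s = socket(AF_INET, SOCK_STREAM, 0);
        struct sockaddr_in srv;
        char reply[4096];
        ssize_t n;

        memset(&srv, 0, sizeof(srv));
        srv.sin_family = AF_INET;
        srv.sin_port   = htons(5000);                      /* hypothetical daemon port */
        inet_pton(AF_INET, "192.168.0.10", &srv.sin_addr); /* hypothetical host */

        if (connect(s, (struct sockaddr *)&srv, sizeof(srv)) < 0)
            return 1;

        /* Send an inquiry and read the answer (inquiry-answer principle). */
        const char *inquiry = "GET ALL\n";                 /* hypothetical request */
        write(s, inquiry, strlen(inquiry));
        n = read(s, reply, sizeof(reply) - 1);
        if (n > 0) {
            reply[n] = '\0';
            printf("%s", reply);
        }

        close(s);
        return 0;
    }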

Let us consider the scheme of operation and interaction of the various modules of the system. In the data acquisition mode, information on the monitored parameters arrives from many different sensors, which are grouped by means of several measuring modules and unified by the CAN network; hence, the data acquisition subsystem is distributed. These data are written into the shared memory space by the data acquisition module (DeviceNet master). If the setup is not in the working mode, i.e., the distributed controller system is off, the data can arrive into the shared memory space from the simulation module. The simulation module is intended for debugging the system: it allows one to simulate infrequent and emergency states of the setup and is thus used for checking the adequacy of the monitoring system's behavior. When the simulation module starts, it checks whether the basic data acquisition system is operating and writes simulated data into the shared memory space only if it is off. The interface part of the simulation subsystem allows one to specify online the behavior of each simulated channel via a convenient graphical interface.

Data are collected from the shared memory space by the data transmission module (TCP/IP daemon). In response to inquiries, this module delivers data to the data processing module (Dataserver). The data transmission module can issue data acquired both from each individual frequency-meter subsystem and from all operating subsystems simultaneously. Data are requested on a timer, and the interrogation period (T1 = 0.5 s) is equal to one-half of the typical data arrival period (T2 = 1 s).

The data processing module packs data into the database (DB) and temporarily stores the frequency data. The temporary storage is arranged as a ring buffer, permitting the realization of the running time-window mode in the visualization module. The buffer size can be adjusted to the prehistory length requested by the visualization module. The DB packing subsystem performs primary processing of the data: it references them to absolute time, compresses them when possible, and packs them into the DB in structured form.
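The internal layout of the Dataserver ring buffer is not specified in the paper. A minimal sketch of a fixed-size ring buffer holding the most recent frequency samples of one channel (its prehistory) could look as follows; the capacity and all names are chosen here purely for illustration.

    /* Minimal ring-buffer sketch for the per-channel frequency prehistory.
     * Capacity and field names are illustrative; the real Dataserver sizes
     * the buffer according to the prehistory length requested by the
     * visualization subsystem. */
    #include <stddef.h>

    #define PREHISTORY_LEN 3600          /* e.g., one hour of 1-Hz samples */

    struct prehistory {
        double samples[PREHISTORY_LEN];
        size_t head;                     /* index of the next slot to overwrite */
        size_t count;                    /* number of valid samples stored */
    };

    /* Append a new sample, overwriting the oldest one when the buffer is full. */
    void prehistory_push(struct prehistory *p, double freq_hz)
    {
        p->samples[p->head] = freq_hz;
        p->head = (p->head + 1) % PREHISTORY_LEN;
        if (p->count < PREHISTORY_LEN)
            p->count++;
    }

    /* Mean over the stored prehistory (as used, e.g., in the running-window mode). */
    double prehistory_mean(const struct prehistory *p)
    {
        double sum = 0.0;
        for (size_t i = 0; i < p->count; i++)
            sum += p->samples[i];
        return p->count ? sum / (double)p->count : 0.0;
    }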


[Fig. 3 shows the data processing computer: the Dataserver receives data from the data sources; its basic part contains the inner data stack (ring buffer), the packing and storage block, and warning generation; a client part receives nonphysical data, and a server part delivers physical data to clients; data are stored in files or databases.]

Fig. 3. Structure of the data processing subsystem.

Data received in a nonphysical format are converted into a physical form (frequency, temperature, or voltage) before storage and transmission to WEB clients. The data are saved periodically during operation of the program; if necessary, data saving can be disabled in the test mode. The data acquisition and simulation subsystems and the Dataserver module are written in ANSI C and run under the Linux operating system.

The data visualization module displays the frequencies measured in the frequency-meter channels in an online mode. This subsystem displays the states of all the channels and devices; in doing so, it requests the equipment status from the DB and verifies the correspondence of each frequency-meter channel to a real physical device. The data analysis module generates queries to the database management system (DBMS) in order to retrieve information on a monitored parameter over a specified time interval. The data visualization and analysis modules are implemented as CGI scripts in the Perl and PHP languages.
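The analysis scripts themselves are written in Perl and PHP. Purely as an illustration of the kind of DBMS query involved, the following C/libpq sketch retrieves the frequency history of one channel over a time interval; the database, table, and column names (monitor, dark_noise, channel, tstamp, freq_hz) are hypothetical.

    /* Illustrative query sketch using libpq; the table and column names are
     * hypothetical, and the real analysis modules are Perl/PHP CGI scripts. */
    #include <stdio.h>
    #include <libpq-fe.h>

    int main(void)
    {
        PGconn *conn = PQconnectdb("dbname=monitor");   /* hypothetical DB name */
        if (PQstatus(conn) != CONNECTION_OK) {
            fprintf(stderr, "%s", PQerrorMessage(conn));
            return 1;
        }

        /* Frequency history of channel 42 over a specified time interval. */
        PGresult *res = PQexec(conn,
            "SELECT tstamp, freq_hz FROM dark_noise "
            "WHERE channel = 42 AND tstamp BETWEEN '2006-08-01' AND '2006-08-02' "
            "ORDER BY tstamp");

        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            for (int i = 0; i < PQntuples(res); i++)
                printf("%s %s\n", PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));

        PQclear(res);
        PQfinish(conn);
        return 0;
    }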

Data Storage Subsystem

The design number of frequency channels in one frequency-meter subsystem is 256. Each data unit occupies 10 bytes during storage (4 bytes for the frequency value, 4 bytes for the time mark, 1 byte for the channel number, and 1 byte for the data format). This record length is necessary because data may arrive irregularly; the selected storage method ensures complete reconstruction of the time pattern of events in the observed device. Thus, when the data are collected at a rate of 1 inquiry/s, the data flow is ~2.5 Kbyte/s, i.e., ~211 Mbyte per working day, ~76 Gbyte per year, and ~0.75 Tbyte over 10 years of continuous operation of the detector. It should be noted that, depending on the rate of change of the studied physical parameters, the minimal interval for data storage can be increased by a factor of N, thus decreasing the total data volume by the same factor. This estimate shows that the data from one subsystem of controllers can be stored on a single DB server with a storage array consisting of several modern hard disks.

The use of a relational DBMS for storing the data allows one to simplify data retrieval for the analysis using the standardized ANSI SQL query language. In addition, the flexibility of SQL and its rich capabilities for data sorting, grouping, and retrieval cover all the needs anticipated in designing the data analysis subsystem.
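A possible C representation of the 10-byte record described above is given below; the field order and the exact types are an assumption, and only the field sizes are taken from the text.

    /* One stored data unit: 10 bytes, as described in the text.
     * Field order and exact types are an assumption; the sizes are from the paper. */
    #include <stdint.h>

    #pragma pack(push, 1)                 /* avoid padding so the record is 10 bytes */
    struct stored_sample {
        uint32_t freq_hz;                 /* 4 bytes: frequency value */
        uint32_t timestamp;               /* 4 bytes: time mark */
        uint8_t  channel;                 /* 1 byte: channel number */
        uint8_t  format;                  /* 1 byte: data format */
    };
    #pragma pack(pop)

    /* 256 channels * 10 bytes once per second is ~2.5 Kbyte/s, i.e.,
     * ~211 Mbyte per day and ~76 Gbyte per year, as estimated above. */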

Fig. 4. Example of the channel status monitoring screen (channel) for one frequency meter (rack) subsystem (inner detector).

PostgreSQL was selected as the particular DBMS [8]. The basic factors in selecting the DBMS were its reliability, support of data integrity, cross-platform implementation, availability of access interfaces for the main programming languages, widespread use, implementation of data access control, completeness of support of the ANSI SQL standards, data processing speed, and the fact that it is free of charge.

Status Monitoring Subsystem

The data visualization subsystem is used online for technological adjustments, for monitoring of the setup status by the operator, and for analyzing emergency and unclear events in the operation of the setup. For correct data interpretation, the subsystem has access to the database of equipment status and device configuration in order to detect states in which an individual sensor is disconnected or absent. The data visualization is carried out via the WEB interface.

The basic screen form in the visualization subsystem is the screen for monitoring the statuses of the channels of a frequency-meter subsystem; Fig. 4 gives its approximate appearance. This form grants access to each individual channel, its prehistory, and the calculated parameters of its statistical distribution (the mean over the period and the variance), based on the specified prehistory length (the latter, like other parameters of the subsystem, is set on a special setup screen).

When the prehistory is displayed, it is possible to average the studied parameter over an adjustable time interval. There is also a space for displaying data from the equipment status DB relevant to a particular channel (for example, its location in the general diagram of the setup). In addition, a running-window mode is implemented in the system: this window displays the prehistory of the average frequency of a frequency-meter subsystem (the value averaged over all channels of all devices over some averaging time). This mode allows one to trace significant frequency variations in sensor groups. The prehistory length and the averaging time are also adjustable.

Owing to the multilevel architecture of the software and the dynamic generation of WEB pages as the data presentation mechanism, the system can simultaneously service a great number of requests for retrieval and visualization of the collected data via the Internet.

USE OF THE SYSTEM FOR DATA MONITORING, ACQUISITION, AND ANALYSIS IN THE BOREXINO DETECTOR

BOREXINO is a low-energy neutrino detector (0.86 MeV) located in the underground laboratory at Gran Sasso, Italy (LNGS) [9]. At present, its construction has been virtually completed. The basic purpose of the experiment is the direct online measurement of the "beryllium" solar neutrino flux. The detector consists of two parts, conventionally called the inner and outer detectors (the outer detector implements the muon veto system).

The special feature of low-energy neutrino detection is the small number of studied events. For example, the expected rate of neutrino counts in BOREXINO is 0.1-0.5 event/day per ton of the detector substance. Therefore, it is very important that a low background level be maintained in the detector. For "beryllium" neutrinos, the required sensitivity can be ensured only by a scintillation detector with PMTs operating in the single-photoelectron counting mode, owing to the small energies of the particles produced in it.

Dark noise is a characteristic feature of any PMT: it is the current (the sum of pulses in the single-photoelectron mode) induced at the PMT output when no photons hit the photocathode. Under certain conditions, the PMT dark noise is capable of distorting the physical picture and can even make efficient operation of the detector impossible. Therefore, to maintain the serviceability of the detector, it is important that the rate of PMT dark pulses be monitored.

The structure of the system for monitoring the PMT dark noise is determined by the structure of the BOREXINO detector itself and is therefore divided into two independent parts, conventionally called the outer and inner subsystems. The inner subsystem monitors the total dark current of each group of 12 PMTs for all 2300 PMTs of the inner detector; this grouping is dictated by the design of the detector electronics and is sufficient for tracing the general dark noise status of the whole inner detector. The inner subsystem contains 14 frequency-meter modules with 16 channels each; only 14 of the 16 channels are in operation simultaneously, the other two being reserved in case of failures. Thus, in all, 196 independent data flows arrive from this subsystem.

The outer subsystem traces the dark noise level of each of the 256 PMTs of the muon veto detector. The outer controller system consists of four frequency meters with 64 channels each.

There are 256 channels in total, each corresponding to the dark noise frequency of one PMT. The data arrive at a rate of 1 Hz, and the monitored frequency value can lie within 10-500 kHz.

The designed hardware and software were put into test operation in the autumn of 2004 as part of the data acquisition system of the BOREXINO detector.

CONCLUSIONS

The software and hardware components that we developed form a complete platform for building systems for monitoring, data acquisition, and analysis. Systems based on it are capable of ensuring prompt monitoring of a large nuclear physics setup, including timely response to equipment malfunctions. The structure of the platform allows one to easily scale, update, and expand the functional capabilities of the systems created on its basis. The experience of using the dark noise monitoring system of the BOREXINO detector allows one to consider the platform as a basis for building distributed online monitoring systems for large physical setups.

REFERENCES
1. Orekhov, D.I., Chepurnov, A.S., Sabel'nikov, A.A., et al., Preprint of Institute of Nuclear Physics, Moscow State University, Moscow, 2006, no. 2006 10/809.
2. Sabel'nikov, A.A. and Chepurnov, A.S., Preprint of Kurchatov Institute of Atomic Energy, Moscow, 2003, no. 6305/15.
3. Chepurnov, A.S., Nedeoglo, F.N., Etenko, A.V., and Sabelnikov, A.A., Probl. At. Sci. Technol., Ser.: Nucl. Phys. Invest., 2004, vol. 43, no. 2, p. 75.
4. CAN Specification 2.0B, Robert Bosch GmbH, Postfach 30 02 40, D-70442, Stuttgart, 1991; http://www.semiconductors.bosch.de/pdf/can2spec.pdf.
5. ISO/IEC 7498-1:1994, Information Technology--Open Systems Interconnection--Basic Reference Model: The Basic Model, International Organization for Standardization, 1994.
6. Chepurnov, A.S., Komissarov, D.V., Nedeoglo, F.N., and Nikolaev, A.S., Abstracts of Papers, Proc. ICALEPCS'99, Trieste, Italy, 1999, p. 388.
7. Gribov, I.V., Shvedunov, I.V., and Yailiyan, V.R., Prib. Tekh. Eksp., 2003, no. 5, p. 26 [Instrum. Exp. Tech. (Engl. Transl.), 2003, no. 5, p. 602].
8. PostgreSQL Documentation, PostgreSQL Global Development Group, 2006; http://www.postgresql.org/docs/.
9. Borexino Collab., Astropart. Phys., 2002, vol. 16, p. 205.
