Status of mTCA Slow Control development at CMS
Petr Volkov (petr.volkov@cern.ch)
D. V. Skobeltsyn Institute of Nuclear Physics, M. V. Lomonosov Moscow State University, Leninskie Gory 1, 119991 Moscow, Russia

The CMS experiment at the LHC at CERN has decided to upgrade its readout hardware on a large scale from VME to MicroTCA. In this talk the major parts of the MicroTCA Slow Control are introduced. As a typical use case in the HEP environment, the application of the SCADA system WinCC OA to the Detector Control System (DCS) of several sub-detectors is discussed. The new software developed by the collaboration is introduced, with a focus on the data transfer between the low and high software levels.

1 Introduction

Scheduled upgrades of the LHC experiments at CERN are foreseen over more than 10 years and are tied to the LHC long shutdowns: 2013/14, 2018 and 2023. At the moment the bulk of the detector electronics of the LHC experiments is based on the VME standard, with its by now outdated technology and doubts about long-term availability. Several experiments are planning to use TCA systems (that is, MicroTCA and ATCA) for upgrades of their back-end electronics: CMS is planning to work with the MicroTCA standard, LHCb and ATLAS with the ATCA standard. TCA systems offer a number of advantages: one can choose between several form factors and different cooling and power supply modules, exploit the redundancy of power modules and cooling devices, and use the built-in infrastructure monitoring features. MicroTCA is a modular, open standard for building high-performance fabric computer systems in a small form factor [1] [2] [3] [4]. At its core are standard Advanced Mezzanine Cards (AMCs), which provide processing and I/O functions; hundreds of different AMCs are commercially available. MicroTCA systems are both physically smaller and less expensive than comparable systems, while still offering a rich internal architecture. MicroTCA was originally intended for smaller telecom systems but has moved into many non-telecom areas, becoming popular in mobile, military, telemetry, data acquisition and avionics applications. The core specification, MTCA.0, defines the basic system, including the backplane, cards, cooling, power and management; a variety of AMC module sizes is supported. Because of its modularity and flexibility, the MicroTCA standards are well suited for a wide range of applications, including industrial control and automation, test and measurement, mobile and avionics, and traffic control and transportation. MicroTCA developments are already on-going at CERN and collaborating institutes, and the accelerator sector is also investigating MicroTCA. CMS, as one of the LHC experiments, is upgrading its back-end electronics [6], for the reasons discussed below.



Figure 1: MicroTCA crate

There are two main reasons for the upgrade. First, CMS needs better performance for Run 2, which the current system is not able to provide:
- to be ready for 4 times more data;
- to be ready for a much higher luminosity / occupancy;
- to be ready for filter / trigger processes needing much more data.
Second, the maintainability has to be improved:
- to avoid legacy support for 15 years or more of the current design, which is based on pre-2000 technology;
- to reduce complexity by reducing the number of cables used for crate-internal data transfer and the number of different boards;
- to get rid of many mezzanine cards.
According to the CMS TDR for the Phase 1 upgrade [5], some VME crates are being replaced by mTCA ones. The task was (and is) to develop new Slow Control software which has to provide:
- bringing devices into any desired operational state;
- signaling any abnormal behavior to the operator;
- allowing manual or automatic actions to be taken;
- monitoring and archiving of operational parameters such as voltages, currents and temperatures.

2 Description of the system and steps of development

The CMS Detector Control System handles the configuration, monitoring and operation of all experimental equipment involved in the various activities of the experiment. The JCOP control framework (based on the industrial SCADA system WinCC OA [8]), which allows the integration of the various devices into a coherent hierarchical system, is being developed in a common way for the four LHC experiments. The same architecture and tools are used to control and monitor all the different types of devices, from front-end electronics boards to temperature sensors, providing CMS with homogeneous control and a coherent interface to all parts of the experiment. In figure 2 the top CMS FSM panel and the HCAL FSM panel are shown.



Figure 2: CMS FSM panel and HCAL FSM panel

To realize these requirements the central CMS DCS group sets mandatory rules [9] which have to be followed by all CMS Slow Control developers. The first step of the development was to decide which software architecture (operating system, packages and so on) had to be used. For the control and monitoring tasks the standard package of all LHC experiments, WinCC OA under Windows 7 (64 bit), was taken by default. The Wisconsin System Manager (WSM) under Linux was taken as the low-level interface, following the proposal of the central DCS group. The databases were taken as the default WinCC OA components: the Condition DB and the Configuration DB, both under Oracle. Two basic questions had to be decided:
- how to connect the WSM to WinCC OA?
- how to describe in WinCC OA a variable, hot-swap mTCA configuration?
Three possible solutions for connecting the Wisconsin System Manager to WinCC OA were discussed; the only difference between them was which kind of interface would be better to use. The comparison of the possible interfaces is shown in figure 3; according to it, the DIM server was chosen as the basic interface.

Figure 3: The comparison of possible interfaces
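As an illustration of this choice, the following minimal sketch (not the actual WSM code) shows how a low-level process could publish a single mTCA sensor value as a DIM service using the public DIM C++ API; the service name, the server name and the update period are assumptions made for the example only.

// Minimal DIM publishing sketch; assumes the DIM library is available and
// DIM_DNS_NODE points to a running DIM name server.
#include <dis.hxx>     // DIM server-side C++ API
#include <unistd.h>    // sleep()

int main()
{
    float temperature = 0.0f;

    // Declare a DIM service holding one float value; a DIM client such as
    // the WinCC OA DIM manager can subscribe to this name.
    DimService tempService("MTCA/CRATE1/PM1/TEMPERATURE", temperature);

    // Register this server with the DIM name server.
    DimServer::start("MTCA_SLOW_CONTROL_SKETCH");

    while (true) {
        // In the real system the value would be read out from the crate via
        // the MCH; here a number is invented purely for illustration.
        temperature += 0.1f;
        tempService.updateService(temperature);
        sleep(10);     // of the order of the 10 s per crate quoted in section 4
    }
    return 0;
}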


3 Roadmap of the data and logical view of MicroTCA branching

3.1 Roadmap of the data
How the information goes from the hardware to the software endpoint is shown in figure 4. Data from the sensors of many different kinds of devices, such as power modules, cooling units and others, go to the MCH module and are collected by the DIM server via the low-level interface, the Wisconsin System Manager. The DIM server keeps all the information in special memory blocks and sends it periodically, or on request, to the high-level monitoring program, WinCC OA.

Figure 4: Roadmap of the data
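In the production system the subscription on the WinCC OA side is handled by the framework's DIM manager; the minimal C++ client below only illustrates the same publish/subscribe mechanism by which the high-level side receives the values kept by the DIM server. The service name is the hypothetical one used in the publishing sketch above.

// Minimal DIM subscription sketch; the callback fires whenever the server
// publishes a new value for the service.
#include <dic.hxx>     // DIM client-side C++ API
#include <cstdio>
#include <unistd.h>    // pause()

class SensorInfo : public DimInfo {
public:
    explicit SensorInfo(const char *name)
        : DimInfo(name, -1.0f) {}          // -1.0 is delivered if the server is down
private:
    void infoHandler() override {
        // getFloat() returns the most recently published value.
        printf("%s = %f\n", getName(), getFloat());
    }
};

int main()
{
    SensorInfo temperature("MTCA/CRATE1/PM1/TEMPERATURE");
    while (true) pause();                  // updates arrive asynchronously
    return 0;
}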

3.2 Logical view of MicroTCA branching

A logical view of the full mTCA Slow Control project is presented in figure 5. Modules, units, sensors and other devices send their states one level up, where a common state of the crate is produced. Several crates are joined into branches, and each branch takes part in producing the common state of the mTCA electronics. At present there are three basic branches: HCALTR, the hadronic calorimeter trigger/readout; ngFEC, the next generation Front End Crates; and TCDS, the trigger control and distribution system. Further branches will be added in the future.


Figure 5: Logical view of the mTCA Slow Control project
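How the state of a crate or branch could be summarized from the states reported one level below can be pictured with a simple "worst state wins" rule. The state names and the rule in the sketch below are assumptions made for illustration; the actual state types and propagation logic are defined inside the JCOP FSM framework.

// Hypothetical state summary for one node of the FSM tree.
#include <algorithm>
#include <vector>

enum class State { OK = 0, WARNING = 1, ERROR = 2, DEAD = 3 };   // assumed ordering

State summarize(const std::vector<State>& children)
{
    // A crate or branch takes the worst state reported by its children.
    State worst = State::OK;
    for (State s : children)
        worst = std::max(worst, s);
    return worst;
}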

4 Current status and future steps

The current status of the project is the following: the first production version of the mTCA DCS is installed and operates normally on the CMS DCS production system. In the current configuration the FSM tree consists of three branches: the HCALTR branch (up to 3 crates), the ngFEC branch (up to 2 crates) and the TCDS branch (up to 10 crates). The HCALTR branch (3 crates) is already connected to the mTCA hardware via the System Manager. The Condition DB is connected, data archiving is implemented, and the history of every sensor is recorded in the Condition DB, so the trend of each sensor over the whole operating time can be inspected. The DIM server, under Linux and Windows, is deployed in its regular place and runs all the time; the measured transfer rate of the sensor data is about 10 s per crate. A beta version of the software is ready and working. In order to move the system into full production operation for the 2015-2016 runs one needs:
- to define the final configuration of branches, crates and AMCs;
- to connect the TCDS, ngFEC and possibly other subsystems;
- to realize links from the subdetector FSM panels to the mTCA FSM tree for those who need the mTCA visual information;
- to bring the control of the system to the Central DCS shifter;
- to implement alert signals and messages for three levels of severity.
The next step will be to prepare the system for the greatly increased amount of mTCA equipment (up to 50 crates) in future LHC runs.
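As a sketch of the planned alert handling, the function below maps a sensor reading onto one of three severity levels; only the number of levels follows the text, while the level names and numerical thresholds are invented for the example.

// Hypothetical mapping of a temperature reading to an alert severity.
enum class Severity { NONE, WARNING, ERROR, FATAL };

Severity classifyTemperature(double celsius)
{
    if (celsius >= 70.0) return Severity::FATAL;     // assumed limits, for
    if (celsius >= 60.0) return Severity::ERROR;     // illustration only
    if (celsius >= 50.0) return Severity::WARNING;
    return Severity::NONE;
}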


References
[1] R. S. Larsen (SLAC), PICMG xTCA Standards Extensions for Physics: New Developments and Future Plans, SLAC-PUB-14182, Aug 2010, 7 pp.
[2] PICMG, MicroTCA Specification, Technical Report MTCA.0/R1.0 (2006).
[3] PICMG, AdvancedMC Mezzanine Module Standard, Technical Report AMC.0/R2.0 (2006).
[4] PICMG, AdvancedTCA Base Specification, Technical Report ATCA.0/R3.0 (2008).
[5] CMS Collaboration (J. Mans (ed.) et al.), CMS Technical Design Report for the Phase 1 Upgrade of the Hadron Calorimeter, CERN-LHCC-2012-015, CMS-TDR-010, Sep 26, 2012, 185 pp.
[6] CMS Collaboration (A. Tapper (ed.) et al.), CMS Technical Design Report for the Level-1 Trigger Upgrade, CERN-LHCC-2013-011, CMS-TDR-12, Jun 18, 2013, 195 pp.
[7] M. Killenberg, L. Petrosyan, C. Schmidt, S. Marsching, A. Piotrowski, Drivers and Software for MTCA.4, Jul 2014, 3 pp., Conference C14-06-16, p. THPRO104, Proceedings.
[8] Siemens Automation, SCADA system SIMATIC WinCC Open Architecture, Technical Report (2014).
[9] CMS Collaboration, CMS DCS Integration Guidelines, Technical Report.
