Author_Institution :
Frankfurt Institute for Advanced Studies, Goethe University Frankfurt, Frankfurt am Main, Germany
Abstract :
The High Level Trigger (HLT) of the ALICE detector system, one of the four large experiments at the Large Hadron Collider (LHC) at CERN, is a dedicated real-time system for online event reconstruction and selection. Its main task is to reduce the large volume of raw data of up to 25 GB/s read out from the detector systems by an order of magnitude to fit within the available data acquisition bandwidth. A dedicated computing cluster of 225 processing nodes, connected by an InfiniBand high-speed network, is in operation to provide the necessary computing resources for this task. The available computing power is supplemented by FPGAs for the first steps of the processing, as well as 64 GPUs which are used at later stages of the event reconstruction. During the 2011 LHC heavy-ion run, the HLT was for the first time actively used to reduce the data volume. For this, the raw data of the Time Projection Chamber, the largest data source in ALICE, were replaced by the results of the online FPGA-based cluster finder. A further reduction of the data volume by roughly a factor of 4 was achieved by optimizing the data format for a subsequent standard Huffman compression. For this, entropy-reducing data transformations have been implemented. In this contribution, we will present the experience gained during the 2011 run, on both the technical and operational levels of the system, as well as from a physics performance point of view. Building on the success of the 2011 run, we will also discuss possibilities for even more advanced uses of online reconstruction results in the future.
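The abstract states that the TPC data format was optimized for a subsequent standard Huffman compression by applying entropy-reducing transformations. The following self-contained C++ sketch is purely illustrative and is not the ALICE HLT code: it uses synthetic data and a simple delta encoding as one example of an entropy-reducing transformation, and only estimates the Huffman-encoded size rather than producing an actual bit stream. It shows why such a transformation helps: differencing a slowly varying sequence concentrates the symbol distribution on a few small values, so a Huffman code needs far fewer bits than on the raw values.

#include <cstdio>
#include <cstddef>
#include <functional>
#include <map>
#include <queue>
#include <vector>

// Total number of bits a Huffman code would need for the given symbol
// frequencies. Summing the weights of all tree merges equals the sum of
// frequency * code length over all symbols.
static std::size_t huffmanEncodedBits(const std::map<int, std::size_t>& freq) {
    std::priority_queue<std::size_t, std::vector<std::size_t>,
                        std::greater<std::size_t>> heap;
    for (const auto& kv : freq) heap.push(kv.second);
    if (heap.size() < 2) return heap.empty() ? 0 : heap.top();  // 1 bit/symbol
    std::size_t totalBits = 0;
    while (heap.size() > 1) {
        std::size_t a = heap.top(); heap.pop();
        std::size_t b = heap.top(); heap.pop();
        totalBits += a + b;  // each merge adds one bit to every symbol below it
        heap.push(a + b);
    }
    return totalBits;
}

static std::map<int, std::size_t> countFrequencies(const std::vector<int>& v) {
    std::map<int, std::size_t> f;
    for (int x : v) ++f[x];
    return f;
}

int main() {
    // Synthetic stand-in for slowly varying cluster coordinates: consecutive
    // values differ only by small, highly repetitive amounts.
    std::vector<int> raw;
    for (int i = 0; i < 10000; ++i) raw.push_back(1000 + 3 * i + (i % 4));

    // Entropy-reducing transformation: store differences instead of absolute
    // values, shrinking the alphabet from thousands of symbols to a handful.
    std::vector<int> delta(raw.size());
    delta[0] = raw[0];
    for (std::size_t i = 1; i < raw.size(); ++i) delta[i] = raw[i] - raw[i - 1];

    std::printf("Huffman on raw values:   %zu bits\n",
                huffmanEncodedBits(countFrequencies(raw)));
    std::printf("Huffman on delta values: %zu bits\n",
                huffmanEncodedBits(countFrequencies(delta)));
    return 0;
}

In this toy example the delta-encoded stream compresses to a small fraction of the raw stream's Huffman size; the actual HLT transformations operate on the cluster-finder output format rather than on such a synthetic sequence.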
Keywords :
data acquisition; data reduction; field programmable gate arrays; high energy physics instrumentation computing; particle calorimetry; real-time systems; transition radiation detectors; workstation clusters; AD 2011; ALICE High Level Trigger; ALICE detector system; GPU; HLT; InfiniBand high-speed network; LHC heavy-ion run; Large Hadron Collider; Time Projection Chamber; data rate 25 GByte/s; computing cluster; computing resources; data acquisition bandwidth; data format; data volume reduction; dedicated real-time system; entropy-reducing data transformations; online FPGA-based cluster finder; online event reconstruction; online reconstruction; physics performance; standard Huffman compression; Collaboration; Data acquisition; Detectors; Field programmable gate arrays; Large Hadron Collider; Monitoring; Optimization