Title :
Prompt reconstruction of ATLAS data in 2010 and 2011
Author_Institution :
Int. Center for Elementary Particle Phys., Univ. of Tokyo, Tokyo, Japan
Abstract :
Apart from a short period of operation in 2008, the LHC began regular operation in November 2009 and has since provided billions of collision events, which were recorded by the ATLAS experiment and promptly reconstructed at an on-site computing farm. The prompt reconstruction chain proceeds in two steps and is designed to deliver high-quality data for physics publications with as little delay as possible. The first reconstruction step is used for data quality assessment and for determining calibration constants and the beam spot position, so that this information can be used in the second reconstruction step to optimize reconstruction performance. After the technical stop of the LHC at the end of 2010, the prompt reconstruction chain had to cope with greatly increased luminosity and pileup conditions. To allow the computing resources to handle this increased dataflow without developing a backlog, significant improvements have recently been made in the ATLAS reconstruction software to reduce the CPU time and the file sizes of the produced datasets.
Keywords :
calibration; data handling; high energy physics instrumentation computing; position sensitive particle detectors; ATLAS data; ATLAS reconstruction software; CPU time; LHC; beam spot position; calibration constant; collision event; computing resource; data quality assessment; dataflow; luminosity; on-site computing farm; pileup condition; prompt reconstruction chain; Collaboration; Mesons
Conference_Title :
Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), 2011 IEEE
Conference_Location :
Valencia
Print_ISBN :
978-1-4673-0118-3
DOI :
10.1109/NSSMIC.2011.6154584