Author :
Antos, J. ; Babik, M. ; Benjamin, D. ; Cabrera, S. ; Chan, A.W. ; Chen, Y.C. ; Coca, M. ; Cooper, B. ; Genser, K. ; Hatakeyama, K. ; Hou, S. ; Hsieh, T.L. ; Jayatilaka, B. ; Kraan, A.C. ; Lysak, R. ; Mandrichenko, I.V. ; Robson, A. ; Siket, M. ; Stelzer,
Abstract :
The data production for the CDF experiment is conducted on a large Linux PC farm designed to meet the needs of data collection at a maximum rate of 40 MByte/sec. We present two data production models that exploit advances in computing and communication technology. The first production farm is a centralized system that has achieved a stable data processing rate of approximately 2 TByte per day. The recently upgraded farm has been migrated to the SAM (Sequential Access to data via Metadata) data handling system. The software and hardware of the CDF production farms have been successful in providing large computing and data throughput capacity to the experiment.
Keywords :
data acquisition; data handling; data models; distributed processing; high energy physics instrumentation computing; metadata; 40 MByte/s; Fermilab collider detector; Linux PC farm; data collection; data processing; data production models; sequential data access; communications technology; detectors; hardware; Linux; physics; production systems; software systems; throughput; computer system; PACS: 07.05-t