In the hunt for new physics and to understand known mechanisms in greater detail, the Large Hadron Collider (LHC) at CERN is being upgraded to the High-Luminosity LHC (HL-LHC), which will produce data at an unprecedented rate, corresponding to roughly 10 times more collisions than at present. The higher collision rate implies recording data at a rate of at least 640 Tbps, demanding a paradigm shift in computing. The project is designed to integrate data-driven innovation with particle physics experiments at the LHC.
To collect and process these huge datasets, the ATLAS detector is being upgraded. This project addresses the computing challenge by porting the large ATLAS codebase to alternative computing architectures such as GPUs (Graphics Processing Units), which can execute hundreds of thousands of threads concurrently. We collaborate with an NVIDIA solutions architect to parallelize these complex algorithms effectively.
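To illustrate the data-parallel model that makes GPUs attractive for this workload, the sketch below applies a calibration factor to a large array of detector hits, one thread per hit. This is a minimal, hypothetical example, not ATLAS code; the hit array and calibration constant are invented for illustration.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical kernel: scale each raw hit energy by a calibration constant.
// Each GPU thread handles exactly one hit, so a million hits are
// processed concurrently rather than in a sequential CPU loop.
__global__ void calibrate(const float* raw, float* out, float scale, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) out[i] = raw[i] * scale;
}

int main() {
    const int n = 1 << 20;  // ~1M hits
    float *raw, *out;
    cudaMallocManaged(&raw, n * sizeof(float));  // unified memory, visible to CPU and GPU
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i) raw[i] = 1.0f;

    int threads = 256;
    int blocks = (n + threads - 1) / threads;    // enough blocks to cover all hits
    calibrate<<<blocks, threads>>>(raw, out, 2.5f, n);
    cudaDeviceSynchronize();                     // wait for the GPU to finish

    printf("out[0] = %f\n", out[0]);
    cudaFree(raw);
    cudaFree(out);
    return 0;
}
```

Real reconstruction algorithms are far less uniform than this element-wise loop, which is why restructuring them for massive parallelism is a substantial research effort in its own right.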
Our approach is distinct from others because, in addition to handling vast data volumes, we also address a broader than usual set of physics goals for the HL-LHC, such as Long-Lived Particles (LLPs). LLPs are well motivated as potential new physics (including dark matter candidates); their long lifetimes arise from suppressed interactions. Searching for these rare events requires faster decision making about whether to record and process a particular event or to discard it. When making such decisions on nanosecond timescales, we do not want to lose any event that might indicate new physics phenomena. Using parallelized GPU programs and machine-learning algorithms, we aim to provide sensitivity to LLPs alongside the usual promptly decaying particles.
This TRAIN@Ed project has received funding from the DDI programme and from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No. 801215.