Computing Grid

Even when whittled down by the trigger system, CMS still produces a huge amount of data that must be analysed: more than five petabytes per year when running at peak performance. To meet this challenge, the LHC employs a novel distributed computing and data storage infrastructure called the Worldwide LHC Computing Grid (WLCG). In “The Grid”, tens of thousands of standard PCs around the world collaborate to provide far more processing capacity than any single supercomputer could, giving thousands of scientists worldwide access to the data.
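
To put that figure in context, the short Python sketch below converts five petabytes per year into an average sustained data rate. It assumes SI prefixes (1 PB = 10^15 bytes); the five-petabyte figure is the one quoted above.

```python
# Back-of-envelope conversion of the quoted data volume into an
# average sustained throughput. Assumes SI units and a 365-day year;
# the 5 PB/year figure comes from the text above.
PETABYTE = 1e15
SECONDS_PER_YEAR = 365 * 24 * 3600

rate_bytes_per_second = 5 * PETABYTE / SECONDS_PER_YEAR
print(f"~{rate_bytes_per_second / 1e6:.0f} MB/s averaged over the year")
# prints: ~159 MB/s averaged over the year
```

Sustained transfers of this order, around the clock, are what the tiered distribution described below is designed to handle.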

The “Tier 0” centre at CERN performs the first full reconstruction of the collision events, and analysts begin to look for patterns; but the data still has a long way to go. Once CERN has made a primary backup, the data is sent to large “Tier 1” computer centres in seven locations around the world: France, Germany, Italy, Spain, Taiwan, the UK and the US. Here the events are reconstructed again, using refined calibration constants derived from the experiment to improve the calculations.
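
The two reconstruction passes can be pictured with a toy calculation. The sketch below is purely illustrative and is not CMS software: the function, the raw readings and the calibration numbers are all hypothetical. It simply shows why re-running the same reconstruction with refined calibration constants changes the computed quantities.

```python
# Toy model of prompt reconstruction (Tier 0) versus re-reconstruction
# (Tier 1). All names and numbers here are hypothetical illustrations.

def reconstruct_energies(adc_counts, gain, pedestal):
    """Convert raw detector ADC counts into calibrated energies (GeV)."""
    return [(count - pedestal) * gain for count in adc_counts]

raw_event = [412, 388, 455]  # raw readings for one toy event

# Tier 0: prompt reconstruction with preliminary calibration constants
prompt = reconstruct_energies(raw_event, gain=0.050, pedestal=100)

# Tier 1: the same raw data reconstructed again with refined constants
refined = reconstruct_energies(raw_event, gain=0.052, pedestal=98)

print(prompt)   # roughly [15.6, 14.4, 17.8] GeV
print(refined)  # roughly [16.3, 15.1, 18.6] GeV
```

The raw data never changes; only the constants applied to it improve, which is why keeping the primary copy safe at CERN matters.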

The Tier 1 centres begin to interpret the particle events and collate the results so that patterns can emerge. Meanwhile, each sends the most complex events on to “Tier 2” facilities, around 40 in total, for more specific analysis tasks. In this way information branches out from each tier across the world so that, at a local level, physicists and students, whether in Rio de Janeiro or Oxford, can study CMS data from their own computers, updated regularly by the LHC Computing Grid.
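
The tier structure just described can be summarised in a small data model. The sketch below is a hypothetical illustration of the fan-out, not WLCG software: the site list and the count of roughly 40 Tier 2 facilities come from the text, while the dataset name and the Tier 2 labels are invented.

```python
# Hypothetical model of the WLCG tier fan-out described in the text.
TIER0 = "CERN"  # first reconstruction and primary backup
TIER1_SITES = ["France", "Germany", "Italy", "Spain", "Taiwan", "UK", "US"]
N_TIER2 = 40    # "around 40" Tier 2 facilities in total

def fan_out(dataset):
    """Return which sites hold (a share of) the given dataset."""
    tier2_labels = [f"T2_{i:02d}" for i in range(N_TIER2)]  # invented labels
    return {
        "dataset": dataset,
        "tier0": TIER0,
        "tier1": list(TIER1_SITES),  # full copies, re-reconstructed locally
        "tier2": tier2_labels,       # subsets for specific analysis tasks
    }

placement = fan_out("toy_cms_dataset")
print(len(placement["tier1"]), "Tier 1 sites,",
      len(placement["tier2"]), "Tier 2 sites")
```

Because every tier holds (part of) the data, no single site is a bottleneck, and a local analysis job needs only the subset relevant to it.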

For a more detailed account of CMS Computing, see:
CMS: The Computing Project Technical Design Report