Monte Carlo simulations for 2009

16 June 2009

Computing time spent on simulation over the last year, including 350 million events for MC08 production.



Following the long and successful campaign of Monte Carlo (MC) data simulation that began in 2008, the 2009 production is imminent. This is ATLAS’s chance to check the performance of the detector and to work out how much data will be needed to, for example, find the Higgs or Supersymmetry once real collisions are being recorded.

“To re-do the entire MC08 production all at once would take about four months,” says Zach Marshall, who works on core simulation, “but it was actually done in stops and starts.” It began last June, but when the LHC shut down in September, extra computing resources, previously ear-marked for data processing, became available, enabling a long list of samples to be worked through.


MC08 jobs running all over the world, in the past year.



A total of 350 million events were produced, all at 10 TeV centre-of-mass energy – 2008’s planned collision energy. MC09, though, will include a mix of energies, among them just 900 GeV – the injection energy of the LHC. The plan for start-up is to have roughly three shifts – 24 hours – of running at this energy, so it is important to simulate it first, to ensure that ATLAS is able to understand that first data.

“It’s not just to see if we can though,” says Zach. “It turns out we could actually get some good physics out of those first 24 hours too.”

The aim is to run as many jobs as possible during an MC production, so timing is critical. The software must have an accurate idea of how each element of the detector looks in space, and be stable and bug-free. Once a particular software release sufficiently satisfies these criteria, the MC run can begin.

The data are all run through the exact same version of the Geant4 detector simulation software and the exact same version of the reconstruction on the Grid, meaning that all the events produced should be comparable. MC09 simulations will be run with ATLAS software release 15.1.0.4 and an updated version of the Geant4 ‘toolkit’, while the trigger and reconstruction will be run with an early version of 15.3.0, which is due out on June 24th.

“We want the reconstruction and trigger that we use for the Monte Carlo to be identical to that we'll use with first data,” says Zach. “That way we get the fairest assessment of the abilities of ATLAS, and we can decide whether we need any final touches before collisions.”

As for the changes made for MC09: “We’ve improved a lot of the geometry, which is going to be important,” Zach reports. There are two main databases of information in the MC system – one detailing size and coarse positioning information for each element, and another detailing the fine real-life variations in those positions. A third database details the inactive “dead material” – pieces of steel, support structures and so on – which, until now, had not been described in the software.
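The way those databases fit together can be pictured with a minimal sketch: nominal positions, survey-measured corrections layered on top, and a separate list of passive material. All names, values, and the schema here are invented for illustration – this is not the actual ATLAS geometry database.

```python
# Hypothetical sketch of the three geometry databases described above:
# nominal size/position, fine as-built corrections, and passive "dead
# material". All identifiers and numbers are illustrative only.

NOMINAL = {            # coarse position per detector element (mm)
    "pixel_layer_0": {"r": 50.5, "z": 0.0},
    "sct_layer_1":   {"r": 299.0, "z": 0.0},
}

AS_BUILT_DELTAS = {    # fine, measured deviations from nominal
    "pixel_layer_0": {"r": 0.12, "z": -0.30},
}

DEAD_MATERIAL = [      # passive material: not read out, but it scatters
    {"name": "support_rail", "r": 1080.0, "z": 2500.0, "mass_kg": 450.0},
]

def as_built_position(element):
    """Nominal position plus any measured correction for this element."""
    pos = dict(NOMINAL[element])
    for axis, delta in AS_BUILT_DELTAS.get(element, {}).items():
        pos[axis] += delta
    return pos

print(as_built_position("pixel_layer_0"))
```

The point of keeping the layers separate is that a re-survey of the detector only updates the corrections table, while newly catalogued dead material slots in without touching the active-element description at all.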

“There were literally hundreds of tons of material that just weren’t in the Monte Carlo,” says Zach. Since the bulk of this is towards the outer edges of the detector, the Inner Detector and Calorimeters are hardly affected by the changes being introduced; MC sample-running for these parts will begin on Tuesday June 16th, while the rest should begin after June 23rd.

“We need to make sure the rest of that dead material is in before we get to producing anything that the muon system will care about,” cautions Zach. “You don’t want to under- or over-estimate your ability to do physics because you’ve blown the detector descriptions.”

It has been over a year in the making, but, says Zach: “Now we’re really happy with the way it looks, and we’re confident enough to use it for MC09.”

In the same vein, accounting for the ‘cavern background’ is also on the agenda. This refers to all the radiation and particles “just floating around” in the cavern when the LHC is running, and has the potential to leave significant energy deposits in the muon system.

“You want to make sure you get right the number of fake muons that you end up collecting because of [cavern background],” Zach explains. But this is “notoriously almost impossible to model” because of the multitude of external factors contributing to its calculation.

Until now, a ‘safety factor’ was employed – multiplying the best guess at cavern background a few times, “just to be conservative”. But, as soon as beam starts circulating, some collision-free data will be collected. “Those ‘zero-bias’ events can then be overlaid on top of our Monte Carlo events to guarantee the right amount of cavern background,” explains Zach.
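The overlay idea itself is simple to sketch: energy deposits from a collision-free data event are added, cell by cell, to a simulated event, so the Monte Carlo inherits the real background level instead of a multiplied best guess. The cell names and energies below are invented for illustration.

```python
# Toy sketch of the zero-bias overlay described above. Real background
# recorded in empty bunch crossings is summed, per detector cell, onto a
# simulated event. Cell IDs and energy values are invented.

def overlay(mc_event, zero_bias_event):
    """Sum energy deposits per detector cell from the two events."""
    combined = dict(mc_event)
    for cell, energy in zero_bias_event.items():
        combined[cell] = combined.get(cell, 0.0) + energy
    return combined

mc       = {"muon_cell_a": 1.20, "muon_cell_b": 0.05}   # simulated hits (GeV)
zerobias = {"muon_cell_b": 0.30, "muon_cell_c": 0.08}   # real background hits

print(overlay(mc, zerobias))
# muon_cell_b now carries both the simulated hit and the real background
```

Because the background comes from data rather than a model, no safety factor is needed: each overlaid event carries exactly the cavern conditions the detector actually saw.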

As well as having a clearer picture of the entire contents of the cavern, MC09 will also reflect the latest understanding of what events ATLAS ought to keep. A new trigger menu will include recent thinking on jet algorithms, along with other updates in event-selection. “It’s not too major,” smiles Zach of these little tweaks and adjustments, “but they’re all things that will make us better equipped to do good physics studies.”


Ceri Perkins

ATLAS e-News