Noise, new algorithms and nail-biting: HLT pushes forward

13 July 2009

Figure: Level-2 reconstruction time for the e/gamma slice. All data retrieval/unpacking, hot-cell treatment and the clustering itself are performed in only about 3.9 ms, well within the L2 time budget of 40 ms. In the crack regions, data from both the barrel and end-cap detectors must be fetched, slightly increasing the processing time.

The High Level Trigger (HLT) has been included in ATLAS Milestone combined runs since M3, back in June 2007. Each time, algorithms are added or adjusted to refine the HLT software’s functioning. The latest combined cosmic run was no exception, with particular attention paid to noise and how to tackle it.

“During cosmic runs, the energy is really nothing most of the time,” says HLT Calorimeter Convenor Denis Damazio. “But then you get an electronics channel that is a bit noisy, and it makes a mess.”

To stop noisy cells from causing phantom triggers, the algorithms that the HLT runs are tweaked to mask the problem cells, “so the algorithm requests data from everything in that region, but doesn’t see that cell,” says Denis. “This is what we call ‘data preparation’.”
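
The real data-preparation code is C++ running inside the ATLAS Athena framework; purely as an illustration of the masking idea, a toy Python sketch with entirely invented names might look something like this:

    # Toy sketch of cell masking during data preparation.
    # All names here are hypothetical; the real HLT code is C++ in Athena.

    MASKED_CELLS = {("barrel", 1042), ("endcap", 77)}  # known-noisy channels

    def prepare_region(all_cells, region):
        """Return the cells of a region of interest, hiding masked ones.

        Every cell in the region is still requested, but a masked cell
        never reaches the clustering step."""
        prepared = []
        for cell in all_cells:
            if cell["region"] != region:
                continue
            if (cell["detector"], cell["id"]) in MASKED_CELLS:
                continue  # noisy cell: fetched, but invisible to the algorithm
            prepared.append(cell)
        return prepared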

At the energies involved in physics data-taking, this noise ought to be much less of a problem; but should a read-out element take a turn for the worse during a run, the same cell-masking mechanisms that have been exercised in cosmic runs can be applied.

At the other end of the scale, there was also progress in tackling medium- and low-level noise. Picking this out requires a finer-grained sieve than has previously been available. For example, a more sensitive variation of the treatment of noisy cells by the L2 e/gamma algorithms is under investigation.

These algorithms run in two stages, with the first stage hauling out and rejecting the noisy data, which leaves the second stage of the algorithm much more sensitive. One advantage of this so-called ‘L2 sliding window’ algorithm (which is also used in the Event Filter and offline) is that in a J/psi → ee decay, the opening angle between the two electrons can be so small that the old algorithm would have reconstructed only a single blurred cluster, whereas the sliding window can resolve the two electrons separately.
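
As a rough sketch of the two-stage idea, again with made-up names and a one-dimensional toy calorimeter rather than anything resembling the production algorithm:

    import numpy as np

    def find_clusters(energies, noise_sigma, window=3, threshold=4.0):
        """Toy two-stage clustering: reject noise, then slide a window.

        Stage 1 zeroes cells below threshold * noise_sigma; stage 2
        keeps a cell as a cluster seed if it is the local maximum
        within +/- `window` cells."""
        cleaned = np.where(energies > threshold * noise_sigma, energies, 0.0)
        seeds = []
        for i, e in enumerate(cleaned):
            if e == 0.0:
                continue
            lo, hi = max(0, i - window), min(len(cleaned), i + window + 1)
            if e == cleaned[lo:hi].max():
                seeds.append(i)
        return seeds

    # Two nearby deposits, like the electrons from a J/psi -> ee decay,
    # come out as two distinct seeds rather than one blurred cluster:
    print(find_clusters(np.array([0.0, 9.0, 0.0, 0.0, 0.0, 8.0, 0.0]), 1.0))
    # -> [1, 5]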

Although this and other new algorithms had been built into the ATLAS software since release 14, this was the first time they had been tested in a cosmic run. In the absence of electrons and other physics particles, the HLT team were looking to see whether the algorithms could run, process the data as they were meant to, and replicate the results of older algorithms with cosmic muons. Things looked promising.

Another first for the HLT was running the neural network algorithms currently in development, one of many new options under investigation. “It’s a completely novel, non-linear approach, which relies on more complex processing based on statistical features of the events,” explains Denis. “In the end, the point is that you let the machine take a decision that you would want to take at some point yourself.” The advantage is that a machine can sometimes see things that a human cannot, because it uses a non-linear approach and much more complex processing. The idea is to make the framework available to the community, to test and validate the various algorithms, and to adopt the ‘best’ one in the long run.
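
For flavour only, here is a minimal sketch of what such a non-linear decision could look like, with invented weights (a real network would be trained on simulated events) and toy input features:

    import numpy as np

    # Made-up weights for a tiny feed-forward network:
    # 3 event features -> 2 hidden units -> 1 accept/reject score.
    W1 = np.array([[1.2, -0.7], [0.4, 0.9], [-1.1, 0.3]])
    b1 = np.array([0.1, -0.2])
    W2 = np.array([0.8, -1.3])
    b2 = 0.05

    def trigger_decision(features, cut=0.5):
        """Non-linear accept/reject decision from statistical event features."""
        hidden = np.tanh(features @ W1 + b1)
        score = 1.0 / (1.0 + np.exp(-(hidden @ W2 + b2)))  # sigmoid
        return score > cut

    # Example: three made-up shower-shape variables for one event.
    print(trigger_decision(np.array([0.9, -0.3, 1.5])))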

Monitoring – both online and offline – was also in the spotlight during the recent run. “We did a lot of work in online monitoring for the different physics slices,” reports Denis, “as well as the part which tells the HLT if the hardware has failed.”

Offline monitoring at the Tier-0 stage cross-checks certain quantities, such as energies in the calorimeter, between the online and offline streams. In theory, these quantities ought to come out the same. Reproducing the online values in the reduced-pressure offline environment adds a level of confidence in the trigger, whose online processing is primarily concerned with speed.
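
Conceptually, such a cross-check is just a per-event comparison that flags any disagreement beyond a tolerance; a hypothetical sketch, with invented names:

    def cross_check(online, offline, rel_tol=1e-3):
        """Compare online vs offline values of a quantity, event by event.

        Both arguments map an event id to a value (e.g. a calorimeter
        energy); ideally the returned mismatch list is empty."""
        mismatches = []
        for event_id, e_on in online.items():
            e_off = offline[event_id]
            scale = max(abs(e_on), abs(e_off), 1e-12)
            if abs(e_on - e_off) > rel_tol * scale:
                mismatches.append((event_id, e_on, e_off))
        return mismatches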

Even though the trigger software is tested thoroughly offline to show that the code is sturdy, the moment of truth doesn’t come until it runs online at Point 1 during a cosmic run. “Even then, after so many steps, sometimes you go back and find problems. It’s a completely different environment,” cautions Denis.

“This is why the weeks with software running – real system, real problems – are very important for us. Because that’s when we see whether our software is good enough to run when we get beam.”

Ceri Perkins

ATLAS e-News