Need for speed

13 April 2010

A snapshot of the LHC Optical Private Network's status



The flow of data through the tier system of the LHC Computing Grid is familiar enough, but how does it achieve the 110 gigabit-per-second data rates exported from CERN over the LHC Optical Private Network (LHCOPN), which is dedicated to LHC traffic? As with almost every other aspect of the LHC projects, the answer is collaboration.

“It’s sort of like the nervous system,” says David Foster, deputy head of the CERN IT department and creator of the LHCOPN. “It’s the core that connects CERN to all of the Tier-1 centres.” But this is a strangely inverted beast, with the information-processing brains out at the Tier-2 and -3 extremities while the sensory information from the detectors comes through a central location: Tier-0.

Naturally, the superhighways run from Tier-0 to the Tier-1 centres, each connection able to deliver 10 gigabits per second, or a DVD every three to four seconds. But CERN didn’t lay these wires. LHC computing takes advantage of existing network infrastructures, including the GÉANT network, to get data to the countries that host Tier-1 centres within Europe. From there, the national research networks take the data the rest of the way to the sites.
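As a rough sanity check of those figures (assuming a 4.7 GB single-layer DVD and one 10 Gbps link to each of the eleven Tier-1 centres, values not taken from the article itself), the arithmetic works out as in this short Python sketch:

```python
# Back-of-the-envelope check of the quoted figures.
# Assumptions (not from the article): 4.7 GB single-layer DVD, 11 Tier-1 links.
link_rate_gbps = 10      # per-link capacity, CERN to one Tier-1 centre
dvd_size_gb = 4.7        # gigabytes on a single-layer DVD
num_tier1_links = 11     # one link per Tier-1 centre

seconds_per_dvd = dvd_size_gb * 8 / link_rate_gbps   # gigabits / (gigabits per second)
aggregate_gbps = link_rate_gbps * num_tier1_links

print(f"One DVD every {seconds_per_dvd:.1f} s on a 10 Gbps link")   # ~3.8 s
print(f"Aggregate export capacity: {aggregate_gbps} Gbps")          # 110 Gbps
```

The per-link figure matches the "DVD every three to four seconds" quoted above, and the eleven links together give the roughly 110 Gbps of export capacity mentioned at the start.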

But that only covers seven of the eleven Tier-1 sites. Two of the remaining sites require the data to cross the Atlantic to the US. “This is a network within a network,” explains David Foster. The USLHCNet project, jointly run by Caltech and CERN, uses a mixture of commercial circuits and GÉANT links between CERN and the two major US centres, Fermilab and Brookhaven. This approach provides 20 Gbps of connectivity each to Fermilab and Brookhaven, as well as alternative routes in case any one of the links is disrupted. “It can take a while before a ship can be sent to repair a submarine cable in mid-Atlantic and the LHC data will not wait,” he says.
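The value of those alternative routes is easy to illustrate. The sketch below uses made-up circuit names and capacities (not USLHCNet's actual topology) to check whether the surviving links could still carry a required export rate if any single transatlantic circuit were cut:

```python
# Hypothetical illustration of the redundancy argument: with one circuit down,
# can the remaining ones still carry the required rate?
# Circuit names and capacities are invented for this example.
links_gbps = {"circuit_A": 10, "circuit_B": 10, "circuit_C": 10, "circuit_D": 10}
required_gbps = 20  # e.g. the sustained rate towards one US Tier-1

for failed in links_gbps:
    remaining = sum(rate for name, rate in links_gbps.items() if name != failed)
    status = "OK" if remaining >= required_gbps else "DEGRADED"
    print(f"{failed} down: {remaining} Gbps left -> {status}")
```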

The remaining two sites are in Canada and in Taipei, Taiwan. Academia Sinica in Taipei uses commercial circuits that cross the USA to reach CERN directly, while Canada relies on a mixture of transatlantic circuits and connectivity provided by the Canadian research network, CANARIE.

The Grid has been up and running for years now, generating Monte Carlo data and distributing and analysing it along with cosmic-ray data. With the LHC beginning its first real run and the detectors taking data, Tier-0 is currently sending out information at a rate of about 15 gigabits per second to the Tier-1 centres around the world. The LHCOPN carries this traffic, along with data redistributed between the Tier-1s.

For the moment, they have plenty of room to grow as the luminosity ramps up, but even now, those working on the LHCOPN can’t afford to sit back. “There’s a number of significant networking challenges coming up, and we are trying to keep one step ahead of the experiments,” says David Foster.

They expect a considerable increase in data coming from the experiments once the LHC reaches its design energy and luminosity. For this reason, the LHCOPN group is already looking at how to upgrade the system for data rates an order of magnitude larger than can currently be achieved: each connection from CERN to a Tier-1 site would have a capacity of 100 gigabits per second. Trials at 40 and 100 Gbps are already starting, and these speeds are expected to become feasible in a production network within the next three to five years.
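Repeating the earlier back-of-the-envelope calculation at the planned 100 Gbps per link (again assuming a 4.7 GB DVD and eleven Tier-1 links, figures not from the article) gives a feel for the jump:

```python
# Same rough estimate as before, scaled to the planned 100 Gbps links.
# Assumptions: 4.7 GB single-layer DVD, 11 Tier-1 links.
link_rate_gbps = 100
dvd_size_gb = 4.7
num_tier1_links = 11

print(f"One DVD every {dvd_size_gb * 8 / link_rate_gbps:.2f} s per link")          # ~0.38 s
print(f"Aggregate capacity: {link_rate_gbps * num_tier1_links / 1000:.1f} Tbps")   # ~1.1 Tbps
```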

The biggest challenge for the LHCOPN will be upgrades to the transatlantic circuits, where capacity is limited and the costs are likely to rise. “We expect to see new investments coming from the commercial operators as the world appetite for network bandwidth continues to grow,” says David Foster. “Networking is now fundamental to the world economies and is creating new opportunities, including rethinking the ways we collaborate and do physics. I don’t see this trend slowing anytime soon, and that is exciting for us all.”


Katie McAlpine

ATLAS e-News