ATLAS e-News
TDAQ network upgrade
31 March 2008
Data and control network switches in one of what will eventually be some 80 racks of HLT processors and a network of over 200 switches
January was a quiet month for data taking while various cooling systems underwent yearly maintenance, and the TDAQ network team took advantage of this to introduce a major upgrade to the data taking network.
This is the system that transports data from the ROS buffers in USA15 over fibre-optic cables up to the surface building SDX, next to the lift entrance, and then distributes it to the trigger and event farm computers housed there. There will be over 2000 processors in the final high-level trigger (HLT) farms, and the TDAQ network that connects them all has to handle a data flow that, if recorded, would fill two DVDs per second. When the requirements for this level of data transport first became clear, it wasn't obvious how they could be met, and several competing scenarios were proposed.
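To put that figure in perspective, the following back-of-envelope sketch (assuming the standard 4.7 GB capacity of a single-layer DVD, which is not stated in the article itself) converts "two DVDs per second" into a bit rate:

    # Back-of-envelope estimate: "two DVDs per second" expressed as a bit rate.
    # The 4.7 GB single-layer DVD capacity is an assumption used for scale only;
    # the real traffic mix of partial and full events is of course more complex.
    DVD_CAPACITY_GB = 4.7      # gigabytes per single-layer DVD
    dvds_per_second = 2

    gigabytes_per_second = dvds_per_second * DVD_CAPACITY_GB
    gigabits_per_second = gigabytes_per_second * 8

    print(f"{gigabytes_per_second:.1f} GB/s ~ {gigabits_per_second:.0f} Gbit/s")
    # -> 9.4 GB/s ~ 75 Gbit/s, i.e. several fully loaded 10 Gbit Ethernet links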
Many people have contributed to the research program that followed, but many of the key contributions have come through a long relationship with the LAPI laboratory run by Professor Buzuloiu at the "Politehnica" University of Bucharest. In 2000 he sent us the first of what was to become a succession of students who studied technologies and architectures, and helped find solutions to the networking problems.
Eight students, numerous papers and six PhDs later, a plan for the full system was ready. This proposal was accepted and the first stage of installation was completed in 2006. One- and ten-gigabit switched Ethernet is the chosen technology, and four independent networks are deployed. Two of them carry data: a very high-rate data collection network that feeds the Level 2 processors with partial event data at 100 kHz, and an event building network that delivers the full selected events to the Event Farm processors. Separate from these is a Control Network used for process communications, histogram and database updates, and shared file-system services. Finally, there is a monitoring network that constantly fetches statistics on traffic flows, errors and network utilisation from the switches deployed across the system.
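As an illustration of the kind of query such monitoring involves (a minimal sketch only: the switch address, community string, interface index and the use of the pysnmp library are assumptions, not the actual TDAQ monitoring tools), the standard IF-MIB traffic and error counters can be read from a switch over SNMP like this:

    # Minimal sketch: poll standard IF-MIB counters from one switch port via SNMP.
    # Host name, community string and interface index are illustrative only.
    from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                              ContextData, ObjectType, ObjectIdentity, getCmd)

    def read_port_counters(host, if_index, community='public'):
        """Return (in_octets, out_octets, in_errors) for one switch port."""
        error, status, _, var_binds = next(getCmd(
            SnmpEngine(),
            CommunityData(community, mpModel=1),      # SNMPv2c
            UdpTransportTarget((host, 161)),
            ContextData(),
            ObjectType(ObjectIdentity('IF-MIB', 'ifInOctets', if_index)),
            ObjectType(ObjectIdentity('IF-MIB', 'ifOutOctets', if_index)),
            ObjectType(ObjectIdentity('IF-MIB', 'ifInErrors', if_index))))
        if error or status:
            raise RuntimeError(error or status.prettyPrint())
        return tuple(int(value) for _, value in var_binds)

    print(read_port_counters('switch.example.org', 1))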
There are 26 kilometres of cable in the network, making up over 6000 individual cables, which of course means 12,000 labels, all placed by hand. The picture above shows the data and control network switches in one of what will eventually be some 80 racks of HLT processors and a network of over 200 switches.
At the core of the network we employ chassis-based switches; shown here are the Back End and Control network core switches. The first of these has been running throughout 2007, supporting the development and testing of the TDAQ system. All the requirements for throughput and reliability were met, and this January it was time to install the remaining switches and cabling.
This allows for redundant operation, so that no individual network component failure will stop data taking. As with most of the research associated with ATLAS, practical experience is the natural complement to theory, and the physical implementation of the network was carried out by the four Romanian members of the current networking team, seen here: Stefan, Lucian, Matei and Silvia.
All the extra cables were first placed in position in both USA15 and SDX, and then the redundant core switches were hoisted into SDX behind the lift shaft and installed.
Once the core switching blades are all installed, the final connections can be made.
With the fully redundant installation now in place, we can commission and test it so that it is ready for data taking. Attention then shifts to monitoring all the traffic flows, both to ensure that operations run smoothly and to provide clear feedback to the data taking teams, who will be trying to balance the system for maximum efficiency.
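As a hedged illustration of how such feedback could be derived, two successive readings of a port's octet counter are enough to estimate its average utilisation over the sampling interval (counter wrap-around and the real sampling machinery are ignored here, and the 10 Gbit/s link speed is an assumption):

    # Sketch: turn two successive octet-counter readings into a link utilisation.
    # Assumes a 10 Gbit/s link and ignores 32-bit counter wrap-around.
    def link_utilisation(octets_t0, octets_t1, interval_s, link_speed_bps=10e9):
        bits_transferred = (octets_t1 - octets_t0) * 8
        return bits_transferred / (interval_s * link_speed_bps)

    # Example: 45 GB through a 10 Gbit/s port in 60 s -> 60% average utilisation.
    print(f"{link_utilisation(0, 45e9, 60):.0%}")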