Next Generation Networks and Big Science
As I briefly mentioned in a previous post, I’ve spent most of the last week in meetings about next generation research and education networks, and networking for two large science experiments — the Large Hadron Collider and the upcoming Square Kilometre Array.
The Next Generation Networks meeting concentrated on two topics: the use of overlays, and sharing the existing infrastructure.
Overlays are used to create segregated networks on top of the research and education networks that the NRENs offer, but why do we build them? Is it because of bandwidth? Latency? Security? Or purely segregation for its own sake? And if the latter, why are we segregating? Is it to allow projects to use RFC1918 address space, or for some perceived benefit that may not be as real as our users think?
When it comes to sharing the underlying infrastructure, the NRENs have pushed hard to encourage fibre providers to offer “spectrum as a service”, both terrestrially and intercontinentally. This reserves a given proportion of the 4.4THz of available C band optical spectrum, which can then be used in a variety of ways depending on the provider and the demand. For example, I might contract for a quarter of a fibre pair — 1.1THz. I could light that with my own transponders or transceivers, at whatever rate I choose, as an “alien wavelength” into the provider’s ROADM (transmission equipment). That could be 6Tbps of capacity with established technology. More common at the moment, especially with submarine systems, is that the spectrum is “managed”: I have that amount of spectrum reserved for my own use, but I pay the provider for the appropriate transponders, which they install and manage in their own SLTE (submarine line termination equipment).
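As a back-of-the-envelope illustration of that 6Tbps claim, here is a minimal sketch. The channel width and per-carrier rate are my own assumed figures for established coherent transponders, not numbers from any particular provider:

```python
# Rough capacity estimate for a quarter of a fibre pair's C band.
# Assumed figures: 400G coherent carriers in 75GHz channel slots.

C_BAND_HZ = 4.4e12           # usable C band spectrum quoted above
SHARE = 0.25                 # a quarter of the fibre pair
CHANNEL_WIDTH_HZ = 75e9      # assumed slot width per carrier
CHANNEL_RATE_BPS = 400e9     # assumed line rate per carrier

spectrum_hz = C_BAND_HZ * SHARE                    # 1.1 THz
channels = int(spectrum_hz // CHANNEL_WIDTH_HZ)    # whole carriers that fit
capacity_bps = channels * CHANNEL_RATE_BPS

print(f"Spectrum share: {spectrum_hz / 1e12:.1f} THz")
print(f"Carriers that fit: {channels}")
print(f"Estimated capacity: {capacity_bps / 1e12:.1f} Tbps")
```

Fourteen 400G carriers in 1.1THz comes to about 5.6Tbps, which is where the roughly-6Tbps figure lands under those assumptions.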
This is becoming increasingly affordable and realistic. GEANT, Internet2 (the US National Research & Education Network) and ESnet (the US Energy Sciences Network, focused on high energy physics) have all contracted, or are about to contract, for transatlantic spectrum to provide large amounts of capacity for research and education.
Of course, when you have several providers all trying to supply capacity for the same community, you then get into the technical discussions of how that capacity is backed up between them.
I kicked off both of the half days with some curmudgeonly recollections and thoughts, but it was interesting to see some of the other presentations come back to those points.
The Square Kilometre Array is a huge new project that is headquartered at Jodrell Bank in the UK, but is building massive infrastructure in the form of two new arrays of radio telescopes in South Africa and Australia.
When I say “massive” infrastructure, SKA-Low in Australia will have 131,072 Christmas-tree-like antennas spread between 512 stations, and about 72km between the most distant stations. SKA-Mid in South Africa will have 197 more traditional radio dishes with about 152km between the most distant dishes.
The amount of data coming off the telescopes is of the order of petabits per second (exabits per second off the raw detectors), but most of the processing is handled locally ('locally' being a relative term) in Cape Town and Perth, so that the data products can be delivered to radio astronomers around the world within the capacity of existing technology.
The Large Hadron Collider is another kettle of fish. Next year is the last active year of Run 3, after which comes the third “long shutdown”, LS3, lasting three years, during which the collider will receive a substantial upgrade resulting in the “High Luminosity LHC” (HL-LHC). At the moment there is in excess of 2.66Tbps of optical paths from the LHC in Geneva to national Tier 1 datacentres (including directly to the Rutherford Appleton Laboratory in the UK), in addition to IP capacity to R&E networks.
Most of the data that isn't carried on the private optical network is carried in a global VRF spanning many of the world's R&E networks, including the UK's NREN, Janet. This is delivered to institutions that advertise the IP addresses of their physics storage and compute clusters into the VRF.
Between now and when HL-LHC comes online in 2029 there will be a series of data challenges. This year's was at 25% of the expected HL-LHC rate; another in 2026/27 will run at 50%. These don't just test the network but also, perhaps more importantly, the storage arrays and file transfer systems used to get the data out of CERN and to particle physicists around the world. For HL-LHC, the 16 Tier 1 datacentres around the world are expected to have 1Tbps of private optical networking back to CERN, plus another 1Tbps of connection to the VRF.
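To put those targets in perspective, here is a quick bit of arithmetic on the figures above; the per-site rates, the 16 Tier 1s, and the data-challenge milestones are the numbers quoted in this post, and the aggregation itself is just a sketch:

```python
# Aggregate Tier 1 connectivity target for HL-LHC, from the per-site figures above.
TIER1_SITES = 16
PRIVATE_OPTICAL_TBPS = 1.0   # per Tier 1, dedicated optical back to CERN
VRF_TBPS = 1.0               # per Tier 1, into the global VRF

aggregate_tbps = TIER1_SITES * (PRIVATE_OPTICAL_TBPS + VRF_TBPS)
print(f"Aggregate Tier 1 connectivity target: {aggregate_tbps:.0f} Tbps")

# The data challenges ramp towards the full HL-LHC rate in stages:
for label, fraction in [("this year's challenge", 0.25),
                        ("2026/27 challenge", 0.50),
                        ("HL-LHC from 2029", 1.00)]:
    print(f"{label}: {fraction:.0%} of the expected HL-LHC data rate")
```

That is around 32Tbps of planned Tier 1 connectivity for a single experiment.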
Why can’t all this discussion happen over video-conferences? A lot of it already does, but there is no way to share the same amount of information, or to have the free-flowing discussion, whether in the room, over coffee, or over an evening meal, that being face-to-face enables. So it might be a large investment of time and energy, but not only have I come back with far more information than a handful of webinars would have provided, I have hopefully helped shape some of the plans too.