NFD29: Broadcom Discussed Latest Chip Capabilities
OK, I confess: I’m not a chip (ASIC) kind of guy. Yes, chips, their performance, and their power consumption are all highly important in computing, networking, and security; I get that. It’s just that the details and performance numbers aren’t likely to stick in my brain, which seems to work more at the router, switch, and firewall level of things. And some (other) chip vendor presentations have been known to cause me to suffer “MEGO” (My Eyes Glaze Over).
So I have to applaud Broadcom at #NFD29 for fairly solidly capturing and holding my attention! Even if you’re like me, a bit chip-presentation-averse, the recorded presentations are well worth your time.
This blog summarizes some of the items that caught my attention. It also goes into a little bit of DDC (Distributed Disaggregated Chassis), which was new to me and somewhat different, even unexpected!
My intent in doing this is to give you a little flavor of the presentations, so you can decide whether or not to go watch the session recordings. I’d suggest you do; it was all good stuff!
What is DDC?
DDC stands for Distributed Disaggregated Chassis. I’ll attempt to describe what I think I heard.
DDC is switches plus line cards supporting a spine-leaf-ish or multi-tiered switch topology at large scale.
With a single tier, that looks like the following:
And with multiple tiers:
The neat thing is that the switches running DDC code act like one large distributed switch. This is not just a concept: it has apparently been deployed in some part of AT&T’s network, with 2-3 years in the field. It scales even larger by adding another layer of NCF (Network Cloud Fabric) switches.
The idea seems to be to spread flows across all the links, using all available bandwidth with no polarization, while keeping the fabric congestion-free with minimum latency.
Jericho chips are used for scheduling and routing traffic onto links.
In DDC, traffic is divided into cells and “sprayed” across available links. I understood this as going sort of leaf to spine and back out again. We’re told this is neither ATM nor TDM, but scheduling-driven: a switch holds data until the next switch (an NCF switch) is able to forward it.
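To make the cell-spraying idea concrete, here is a minimal illustrative sketch (my own toy model, not Broadcom’s implementation — the cell size, link count, and function names are all hypothetical): a packet is sliced into fixed-size cells, the cells are distributed round-robin across all fabric links, and the egress side reassembles them in order.

```python
from itertools import cycle

CELL_SIZE = 64  # bytes per cell -- hypothetical value for illustration


def spray(packet: bytes, num_links: int) -> list:
    """Slice a packet into cells, assigning each cell to a link round-robin.

    Returns a list of (link_index, cell_bytes) tuples in sequence order.
    """
    link_iter = cycle(range(num_links))
    cells = []
    for offset in range(0, len(packet), CELL_SIZE):
        cell = packet[offset:offset + CELL_SIZE]
        cells.append((next(link_iter), cell))
    return cells


def reassemble(cells: list) -> bytes:
    """Egress side: cells carry sequence order, so reassembly is a join."""
    return b"".join(cell for _, cell in cells)


packet = bytes(range(256)) * 2  # 512-byte example packet
cells = spray(packet, num_links=4)

# The packet survives the round trip intact...
assert reassemble(cells) == packet
# ...and every link carried an equal share of the 8 cells:
per_link = [sum(1 for link, _ in cells if link == i) for i in range(4)]
print(per_link)  # [2, 2, 2, 2]
```

The point of the toy model is the load-balancing property: because every packet is diced into small cells spread over *all* links, no single flow can polarize onto one link, which is the behavior the presentation contrasted with conventional per-flow ECMP hashing.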
I found myself thinking “sounds like hyperscaler megapods without VXLAN.”
One noteworthy use case for this is apparently speeding up job completion in AI/ML clusters with GPUs, minimizing latency and allowing closely synchronized completion of parallel tasks.
If this interests you, please go watch the recorded presentation, for correct and more detailed information!
Notes from Broadcom’s Presentations
Here are a couple of the other things I learned / flagged in my notes:
Tomahawk line:
Trident 4c intro:
Links
Network Field Day 29: https://techfieldday.com/event/nfd29/
Tech Field Day’s Broadcom page: https://techfieldday.com/companies/broadcom/
#NFD29 Broadcom videos etc. page: https://techfieldday.com/appearance/broadcom-presents-at-networking-field-day-29/
Conclusion
I consider it useful to broaden my knowledge concerning things that are not my main interest, and at least get some idea of what’s hot or new and what’s considered important, while not bogging down in details I’m not going to retain.
Broadcom has been doing good presentations periodically at Network Field Day events. They represent a good way to learn a bit about the chips and capabilities being built into switches and networking gear.
So with all that in mind, go watch the videos! Especially if you love details about chips and what processing can be offloaded to them!
Comments
Comments are welcome, whether in agreement or constructive disagreement with the above. I enjoy hearing from readers and carrying on deeper discussion via comments. Thanks in advance!
Hashtags: #NetCraftsmen #CCIE1773 #Broadcom #ASICs #Datacenter #Switching
Twitter: @pjwelcher
LinkedIn: Peter Welcher