Asterfusion Open Packet Broker based on SONiC
What is Open Packet Broker?
Open Packet Broker (OPB) utilizes the SONiC network operating system (NOS) to modernize network traffic monitoring through a software-driven approach. This method is more flexible, scalable, and cost-effective compared to traditional hardware-based packet brokers.
The Era of SONiC-Based Network Packet Broker 2.0: Redefining Network Visibility
The Asterfusion Packet Broker 2.0 is an advanced, containerized network packet broker application built on the robust Asterfusion SONiC network operating system. Designed for enhanced efficiency, it features an intuitive Web UI that simplifies the management and maintenance of traffic capture configuration rules, eliminating complexity and putting control back into the hands of network operators.
The SONiC-based campus switch CX-M fully leverages SONiC’s open, containerized architecture, seamlessly embedding packet broker functionality into a single container. This groundbreaking integration unlocks a range of advanced capabilities, including traffic mirroring, raw data tagging, filtering, aggregation, preprocessing, load balancing, and output replication.
By integrating traditional L2/L3 switching with packet broker functionality into a cost-effective platform, these SONiC-based cloud switches significantly reduce operational costs while enhancing network visibility, flexibility, and performance. Asterfusion’s innovative approach delivers a smarter, more streamlined solution—building future-ready campus networks with unparalleled efficiency and control.
Software Architecture of SONiC-Based Packet Broker
Traffic collection on SONiC is achieved by extending its capabilities with the APP-PB extension. Specifically, a packet broker Docker container is deployed on SONiC, adding a new NPB (Network Packet Broker) container that enhances traffic collection capabilities on top of existing switch services.
Once deployed, the switch can run traditional L2/L3 services alongside packet broker services (replicating traffic via SPAN or RSPAN), or it can be dedicated solely to the packet broker service, freeing resources to expand ACL rule capacity and other NPB capabilities (forwarding traffic based on policies) and thereby improving the accuracy and efficiency of traffic management and monitoring.
Topology Comparison: Traditional Traffic Collection vs. Asterfusion SONiC-Based NPB 2.0
In the past, building a dedicated NPB monitoring network meant adding a whole lot of extra equipment. But with Asterfusion NPB2.0, all NPB functions are now fully integrated into the switching and production network. This not only cuts down on hardware but also simplifies the entire network architecture, turning what used to be two separate networks into one unified system—making everything more streamlined, efficient, and less complex.
CX-M NPB2.0 Traffic Collection Implementation Principles
Traffic Mirroring
Based on SPAN technology, traffic at the collection point is mirrored. The collection point is typically chosen at the interconnection ports between the business Spine and Leaf, where both upstream and downstream traffic is collected and mirrored. For example, the bidirectional traffic A and B from the X1 port of the business Spine is mirrored to the X2 port.
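The mirroring step can be sketched as a simple software model of the data path (a minimal sketch: port names X1/X2 follow the text's example, and the session table is illustrative, not real SONiC configuration):

```python
# Each SPAN mirror session copies all traffic seen on a source port to a
# monitor port, in addition to normal forwarding.
MIRROR_SESSIONS = {"X1": "X2"}  # source port -> mirror destination (hypothetical)

def handle_packet(ingress_port, packet, normal_egress):
    """Return the list of (egress_port, packet) copies the switch emits."""
    out = [(normal_egress, packet)]           # normal L2/L3 forwarding
    mirror_dst = MIRROR_SESSIONS.get(ingress_port)
    if mirror_dst is not None:
        out.append((mirror_dst, packet))      # mirrored copy, unmodified
    return out
```

Traffic entering X1 is forwarded normally and also copied to X2; traffic on unmonitored ports is forwarded once.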
Original Information Tagging
Through the RSPAN protocol, VLAN tagging is applied to the mirrored traffic at the collection point. VLAN tags are used to identify business types and collection point locations. Different VLAN tags can be assigned to each port. For example, the traffic mirrored from the X1 port of the business Spine is tagged with VLAN 100, while the traffic from the X2 port is tagged with VLAN 101, and both are sent together to the X3 port.
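The tagging operation is standard 802.1Q insertion: a 4-byte tag (TPID 0x8100 plus the TCI carrying the VLAN ID) is placed after the 12 bytes of destination and source MAC. A minimal sketch, assuming an untagged Ethernet frame as input:

```python
import struct

def rspan_tag(frame: bytes, vlan_id: int, pcp: int = 0) -> bytes:
    """Insert an 802.1Q tag after the destination/source MACs (12 bytes)."""
    tci = (pcp << 13) | (vlan_id & 0x0FFF)     # priority + 12-bit VLAN ID
    tag = struct.pack("!HH", 0x8100, tci)      # TPID 0x8100 + TCI
    return frame[:12] + tag + frame[12:]

# Per-port VLAN assignment from the text's example:
PORT_VLAN = {"X1": 100, "X2": 101}
```

A frame mirrored from X1 would be tagged with `rspan_tag(frame, PORT_VLAN["X1"])` before being sent toward X3; the tool can later recover the collection point from the VLAN ID.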
Filtering
Based on ACL rules, the traffic mirrored from the Spine switch is pre-filtered to improve forwarding efficiency. ACL rules support matching conditions using MAC, VLAN, and the IP five-tuple. ACL rules use the TCAM resources of the switch chip, with different models supporting up to 3,000 rules and allowing for rule expansion based on specific business scenarios. For example, when mirroring traffic from the X1 port, the ACL filters out traffic A1 and forwards it to the X2 port.
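The five-tuple matching logic can be sketched in Python (an illustrative model only; on the switch this lookup happens in TCAM, and the field names here are assumptions):

```python
def acl_match(rule: dict, pkt: dict) -> bool:
    """A rule matches when every specified field equals the packet's field;
    a field set to None acts as a wildcard."""
    return all(v is None or pkt.get(k) == v for k, v in rule.items())

def filter_mirrored(rules, packets):
    """Keep only the packets that hit at least one permit rule."""
    return [p for p in packets if any(acl_match(r, p) for r in rules)]

# Example rule: select flow A1 by source IP and source port.
RULES = [{"src_ip": "192.168.1.1", "src_port": 80,
          "dst_ip": None, "dst_port": None, "proto": None}]
```

Packets that match no rule are dropped before forwarding, so only the traffic of interest consumes downstream bandwidth.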
Traffic Aggregation
On the Leaf port connected to the tool, input ports (such as X1, X2) are selected based on forwarding policies, and a set of ACL rules is specified to form a business group. The required business traffic is then forwarded to the interface connected to the tool (such as X3). Through traffic aggregation, traffic from multiple ports (X1, X2), such as business A with source IP 192.168.1.1 and source port 80, is aggregated at the X3 port and sent to the backend tool.
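The business-group policy above (input ports + ACL rule set → tool-facing output port) can be modeled as follows; the group structure and field names are illustrative assumptions, not the product's actual schema:

```python
def _match(rule: dict, pkt: dict) -> bool:
    # A rule field of None acts as a wildcard.
    return all(v is None or pkt.get(k) == v for k, v in rule.items())

# One business group, mirroring the text's example: business A,
# arriving on X1 or X2, is aggregated out of X3 toward the tool.
BUSINESS_GROUPS = [
    {"in_ports": {"X1", "X2"},
     "rules": [{"src_ip": "192.168.1.1", "src_port": 80}],
     "out_port": "X3"},
]

def aggregate(ingress_port: str, pkt: dict):
    """Return the tool-facing output port for this packet, or None."""
    for group in BUSINESS_GROUPS:
        if ingress_port in group["in_ports"] and any(
                _match(r, pkt) for r in group["rules"]):
            return group["out_port"]
    return None
```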
Traffic Preprocessing
On the Leaf port connected to the tool, traffic can be preprocessed alongside ACL filtering, for example VXLAN/GRE tunnel stripping, packet truncation, and VLAN stripping. For example, when traffic is input from X1 and output to X3, the preprocessing operation strips the GRE/VXLAN tunnel, retains the inner MAC address (falling back to the outer MAC if no inner MAC exists), and truncates the packet to 128 bytes.
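The arithmetic behind VXLAN stripping is straightforward: the outer encapsulation is outer Ethernet (14 B) + IPv4 (20 B, no options) + UDP (8 B) + VXLAN (8 B) = 50 bytes, after which the inner Ethernet frame begins. A minimal sketch, assuming IPv4 without options and a fixed-offset strip (real hardware parses the headers rather than assuming offsets):

```python
# Outer Eth + IPv4 (no options) + UDP + VXLAN header lengths, in bytes.
OUTER_VXLAN_LEN = 14 + 20 + 8 + 8   # = 50

def preprocess(frame: bytes, truncate_to: int = 128) -> bytes:
    """Strip the assumed VXLAN outer headers, keep the inner Ethernet
    frame (with its inner MACs), then truncate to `truncate_to` bytes."""
    inner = frame[OUTER_VXLAN_LEN:]
    return inner[:truncate_to]
```

Truncating to 128 bytes preserves the headers most tools need while cutting payload volume toward the backend.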
Traffic Load Balancing
When a single port’s bandwidth isn’t sufficient to handle the traffic aggregation needs, you can create a LAG (Link Aggregation Group) to distribute the traffic load across multiple ports, enabling dynamic load balancing. The LAG can use hash seeds and keys like IP, port, or MAC address for traffic distribution.
As shown in the diagram, during protocol mirroring on the business Spine, traffic from X1, X2, X3, and X4 can be forwarded to a single LAG for load balancing, with LAG members X5 and X6. The connected Leaf devices can also distribute traffic based on forwarding policies, forwarding traffic from X2 and X3 to another LAG, with members X4 and X5. This ensures the traffic is efficiently handled and ultimately directed to the corresponding tool clusters of the same type.
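The member-selection logic is a hash over the configured keys, modulo the member count, so every packet of one flow always takes the same link. A minimal sketch (CRC32 stands in for the ASIC's hash function, and the seed/key choice is an assumption):

```python
import zlib

def lag_member(five_tuple: tuple, members: list, seed: int = 0) -> str:
    """Pick a LAG member port by hashing the flow's five-tuple.

    Hashing per flow (rather than per packet) keeps all packets of a
    flow on one link, preserving packet order for the tool."""
    key = "|".join(str(f) for f in five_tuple).encode()
    return members[(zlib.crc32(key) ^ seed) % len(members)]
```

For example, with members `["X5", "X6"]`, every packet of the flow `("192.168.1.1", "10.0.0.2", 80, 5555, "tcp")` lands on the same member port.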
Traffic Replication
When different tools need to receive the same business traffic, traffic replication can be performed based on forwarding policies. As shown in the diagram, the connected Leaf device receives traffic A and B from the X2 port after initial filtering. Based on the forwarding policy, the traffic is abstracted as Business A and then replicated to output ports X4 and X5. This way, both tools receive the same traffic, meeting the needs of different tools.
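Replication is the simplest of the forwarding-policy actions: one classified business flow fans out to several output ports. A minimal sketch (the policy table and business label are illustrative assumptions):

```python
# Forwarding policy from the text's example: Business A is replicated
# to both tool-facing ports X4 and X5.
REPLICATION_POLICY = {"business-A": ["X4", "X5"]}

def replicate(business: str, packet: bytes):
    """Emit one copy of the packet per configured output port."""
    return [(port, packet) for port in REPLICATION_POLICY.get(business, [])]
```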
Why Asterfusion SONiC-Based NPB 2.0 Is a Game-Changer
Buckle up, because Asterfusion’s SONiC-based NPB 2.0 is here to shake up the networking world! This isn’t just an upgrade—it’s a revolution that redefines how we monitor and manage network traffic. Here’s why it’s stealing the spotlight:
Cost-Effective Innovation: Say Goodbye to Wallet-Busting Hardware
Traditional Network Packet Brokers (NPBs) are budget killers—think tens of thousands of dollars for clunky hardware. NPB 2.0 flips the script by ditching the pricey gear and running full-blown NPB capabilities right on your switches. The result? Massive savings without sacrificing power. With software license fees making up just 10% of the total system cost, your Total Cost of Ownership (TCO) takes a nosedive. This isn’t just smart—it’s a financial game-changer that’s rewriting the rules of the NPB industry!
Simplified Operations: Less Hassle, More Control
Let’s face it—traditional NPBs are a nightmare to manage. Proprietary systems, endless vendor calls, and a steep learning curve? No thanks. NPB 2.0 changes the game with SONiC containers and ntopng, built on open standards. If you know basic Linux, you’re already ahead of the curve—no PhD in Vendor-ology required. This open architecture slashes complexity, cuts operational costs, and hands you the keys to seamless integration and ultimate flexibility. Say hello to simplicity and goodbye to headaches!
On-Demand, Flexible, and Multi-Functional: One Device, Endless Possibilities
Why settle for a one-trick pony when you can have a networking superstar? NPB 2.0 is a multitasking marvel—run it as a Layer 2/3 switch, a traffic-monitoring packet broker, or both at the same time, all on a single device. Need to pivot as your business grows? Just flip the NPB container on or off—no extra hardware, no fuss. It’s like having a Swiss Army knife for your network, adapting to your needs and stretching your investment further than ever before.
Merging Networks for Maximum Efficiency: Work Smarter, Not Harder
In the old world, campus switches sit around underutilized while separate monitoring networks drain your budget. NPB 2.0 says, “Why waste resources?” By embedding monitoring right into the switch, it maxes out processing power and delivers real-time traffic insights—all without extra gear. Forget juggling two networks (production and monitoring)—NPB 2.0 fuses them into one lean, mean, cost-cutting machine. Efficiency? Check. Savings? Double check.
Filling the Gap in Small-Port High-Performance NPBs: Big Power, Small Package
The 1G/10G small-port NPB market has been crying out for a hero—enter the CX-M NPB 2.0. With top-tier performance and a full suite of features, it’s the perfect fit for this underserved niche. High efficiency, low cost, and zero compromises? It’s the small-port solution you didn’t know you needed—until now.
Real-World Use Cases: Where NPB 2.0 Shines
Asterfusion SONiC-Based Packet Broker: Features That Wow
Take a peek at the CX-M campus Layer 2/3 switches—our lineup is rolling out NPB 2.0 support step by step, bringing this game-changing tech to every corner of your network. From traffic mirroring to load balancing, it’s packed with the tools you need to dominate the networking game.
Asterfusion’s SONiC-based NPB 2.0 isn’t just a tool—it’s a movement.