Cisco Application Centric Infrastructure 6.x Features
Victor Mahdal
Manager / Team Lead / Network Cloud DC DevOps Engineer / Solution Architect
Cisco ACI: version 5.2, ACI 6.0 and beyond
Previous ACI releases
The initial release of ACI was primarily focused on launching the product. Most of the basic functionality has been present since release 1.0, the version that introduced the tenant model, the policy model, and the overall way of working with ACI.
ACI 2.x brought many improvements to the basic networking and introduced Multi-Pod. It was also the train in which ACI became noticeably more stable.
The biggest addition during the 3.x versions was Multi-Site, which enables customers to connect separate fabrics together using a new tool called the Multi-Site Orchestrator. The 3.x train also implemented several networking features that had been missing until then.
The primary focus of ACI 4.x was ACI Anywhere. This train introduced cloud extensions and virtual fabrics, as well as many Day-2 tools such as the Network Insights suite. ACI 4.2 is the most recent long-lived release and is currently the recommended version to install in a new fabric.
ACI 5.x is the newest train of ACI. It might be a bit too early to say what the focus of these releases is; let's look at that in the next paragraph.
You can find more about the ACI - SD-A integration in the following presentation from Cisco Live: BRKOPS-2110
Public predictions for ACI 6
ACI 6 is HERE.
Consider, for example, the Geneve support which might be added. In the same spirit, we might expect some truly novel technologies to be implemented. For example, the Open/R routing protocol has been released by Facebook. For now, no Cisco product supports it (as far as I know). Why not start by supporting it in ACI?
Prediction 6: ACI will start supporting Open/R
As for integrations, why stop at the SD-Access and SD-WAN solutions? Let's take the integration with the UCS platforms further. We can already integrate with UCSM, but to be honest, aren't Fabric Interconnects just extra Nexus switches? They are an extra step that is not really necessary anymore; Nexus 9k switches aren't that expensive, and their port density and latency are sufficient. I'm expecting the Fabric Interconnects to disappear, at least for the UCS platform, and all UCS systems to connect directly to ACI leaf (or extended leaf) switches.
Prediction 7: ACI will take over the role of Fabric Interconnects (directly or via extended leaf switches)
Many companies want to move their workloads into the cloud. ACI already supports extending policy into the cloud, but it does not support extending L2 into the cloud (as VMware does with VMware on AWS/Azure). This is potentially a large market, so I'm expecting Cisco to figure out some way to make this work. I'm not sure how they would do it from a technical standpoint, but something that tunnels VXLAN to the end host would be possible (I think).
Prediction 8: ACI will support L2 extension to the public cloud
==================================================================================
Reality of Cisco ACI 6.x features
Cisco Nexus 9000 switch secure erase
Cisco Nexus 9000 switches utilize persistent storage to maintain system software images, switch configuration, software logs, and operational history. Each of these areas can contain user-specific information such as details on network architecture and design, and potential target vectors for would-be attackers. The secure erase feature enables you to comprehensively erase this information, which you can do when you return a switch with return merchandise authorization (RMA), upgrade or replace a switch, or decommission a system that has reached its end-of-life.
This feature erases user data in the following storage devices:
Support for BFD on secondary IPv4/IPv6 subnets
Bidirectional Forwarding Detection (BFD) is now supported for static routes that are reachable using secondary IPv4/IPv6 subnets that are configured on routed interfaces. This feature was originally introduced in the 5.2(4) release and is now available in the 6.0 releases.
Support for PTP G.8275.1 on remote leaf switch peer links and on vPCs
You can now use the PTP Telecom profile (G.8275.1) on virtual port channels (vPCs) and on remote leaf switch peer links.
Support for SyncE on vPCs and on remote leaf switch peer links
You can now use SyncE on vPCs and on remote leaf switch peer links.
Transport Layer Security version 1.3 support
Transport Layer Security (TLS) version 1.3 is now supported. This feature was originally introduced in the 5.2(5) release and is now available in the 6.0 releases.
Weight-based symmetric policy-based redirect (PBR)
Prior to Cisco APIC release 6.0(1), there was no option to specify a weight for each PBR destination. The capacity of the PBR destinations (service nodes) was not considered, and the weight for each destination was the same: the default value of 1. With four destinations, for example, each destination would receive roughly the same amount of traffic, about 25%, because the weight used for load balancing is identical.
Beginning with Cisco APIC release 6.0(1), weight-based symmetric PBR is supported, which handles traffic more efficiently. In weight-based symmetric PBR, an administrator can set a weight for each PBR destination based on the capacity of the service node, and traffic is load-balanced based on the set weights. One service node can be part of multiple policies and can have different weights in different policies.
Consider four PBR destinations with different capacities. Instead of the same amount of traffic being sent to every destination, the PBR configuration for each destination is weight-based. You can assign a weight from 1 to 10; if no weight is assigned, the default value is 1. The assigned weight determines the share of traffic sent to the destination. An example of the weight-based distribution of traffic is shown below.
Destination      Weight    Traffic %
Destination 1    4         40
Destination 2    3         30
Destination 3    2         20
Destination 4    1         10
To maintain symmetric PBR on a two-arm node configured with weights, ensure that you configure the same weight for the external and internal legs.
To maintain symmetric PBR for service insertion, where each service node has two interfaces (the consumer and provider connectors), ensure that you configure the same weight for both the consumer and the provider connector.
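To make the arithmetic behind the table above explicit, here is a minimal Python sketch that computes the expected traffic share for each destination from its configured weight (weight divided by the sum of all weights in the policy). The destination names and weights are just the illustrative values from the table, not an APIC API.

# Illustrative only: computes the expected traffic share per PBR destination
# from its configured weight (weight / sum of all weights in the policy).
# The destinations and weights mirror the example table above.

weights = {
    "Destination 1": 4,
    "Destination 2": 3,
    "Destination 3": 2,
    "Destination 4": 1,
}

total = sum(weights.values())

for name, weight in weights.items():
    share = weight / total * 100
    print(f"{name}: weight={weight}, traffic ~{share:.0f}%")

# Destination 1: weight=4, traffic ~40%
# Destination 2: weight=3, traffic ~30%
# Destination 3: weight=2, traffic ~20%
# Destination 4: weight=1, traffic ~10%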
Limitations of weight-based PBR
For PBR destinations in a bridge domain, the maximum weight per PBR policy is 128. For PBR destinations in an L3Out, the maximum weight per PBR policy is 64.
System faults are raised under the following conditions:
Support for a user group map rule for SAML and OAuth 2
Authentication by an external server for SAML and OAuth 2 is based on user group map rule information, in addition to the standard CiscoAVpair-based authentication.
BGP autonomous system (AS) enhancements
Cisco APIC now supports the Remove Private AS option to remove private autonomous system numbers from the AS_path in an eBGP route, and supports the AS-Path match clause while creating a BGP per-peer route-map.
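As a quick conceptual illustration of what the Remove Private AS option does (this is not APIC configuration, only the well-known private ASN ranges from RFC 6996 applied to a sample AS_path in Python):

# Conceptual illustration of "Remove Private AS": strip private ASNs
# (RFC 6996 ranges 64512-65534 and 4200000000-4294967294) from an AS_path.
# This is not an APIC API call, only a sketch of the behaviour.

def is_private_asn(asn: int) -> bool:
    return 64512 <= asn <= 65534 or 4200000000 <= asn <= 4294967294

def remove_private_as(as_path: list[int]) -> list[int]:
    return [asn for asn in as_path if not is_private_asn(asn)]

# Sample eBGP AS_path containing private ASNs (65010, 64900) between public ones.
print(remove_private_as([65010, 3356, 64900, 8075]))   # -> [3356, 8075]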
Extended filter entries
In a SPAN session, you can now configure extended filter entries for filter groups by using the APIC GUI, the NX-OS-style CLI, or the REST API.
Along with the usual SPAN filter parameters such as source/destination IP prefix, first/last source port, first/last destination port, and IP protocol, you can now specify an extended filter entry that consists of the following:
You can set the values for either the Source/Destination range or the DSCP/Dot1P range. If you set both the Source/Destination and DSCP/Dot1P ranges, faults are displayed.
DSCP or Dot1p is not supported for the egress direction. If you select Both as the direction, DSCP or Dot1p is applied for the ingress direction only, not for the egress direction.
TCP flags can be configured only if you have selected Unspecified or TCP as the IP Protocol.
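As a rough illustration of these constraints, the following Python helper encodes the rules from the paragraphs above as simple checks. The field names are invented for the example and are not actual APIC object attributes.

# Hypothetical helper that encodes the extended-filter rules described above:
#  - Source/Destination range and DSCP/Dot1P range are mutually exclusive.
#  - DSCP/Dot1P is ingress-only (not applied for egress).
#  - TCP flags require the IP protocol to be TCP or Unspecified.
# Field names are illustrative, not actual APIC object attributes.

def validate_extended_filter(src_dst_range=None, dscp_dot1p_range=None,
                             direction="ingress", ip_protocol="Unspecified",
                             tcp_flags=None):
    errors = []
    if src_dst_range and dscp_dot1p_range:
        errors.append("Set either the Source/Destination range or the "
                      "DSCP/Dot1P range, not both (a fault is raised).")
    if dscp_dot1p_range and direction == "egress":
        errors.append("DSCP/Dot1P is not supported for the egress direction.")
    if tcp_flags and ip_protocol not in ("TCP", "Unspecified"):
        errors.append("TCP flags require IP Protocol TCP or Unspecified.")
    return errors

# Example: a DSCP range combined with the egress direction -> one violation reported.
print(validate_extended_filter(dscp_dot1p_range=(10, 20), direction="egress"))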
Support for remote pools with a subnet mask of up to /28
Starting with the 6.0(1) release, remote leaf switches support remote pools with a subnet mask of up to /28. In prior releases, remote leaf switches supported remote pools with a subnet mask of up to /24. You can remove a remote pool only after you have decommissioned and removed from the fabric all the nodes that are using that pool.
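For a sense of scale, a /28 pool contains 16 addresses versus 256 in a /24. A simple check with Python's standard ipaddress module (the prefixes below are arbitrary illustrative values, not recommended remote-pool addressing):

# Compares the size of an example /24 pool with an example /28 pool
# using Python's standard ipaddress module.
import ipaddress

for prefix in ("172.16.10.0/24", "172.16.10.0/28"):
    net = ipaddress.ip_network(prefix)
    print(f"{prefix}: {net.num_addresses} addresses")

# 172.16.10.0/24: 256 addresses
# 172.16.10.0/28: 16 addresses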
New hardware features
The new Cisco Nexus 9336C-FX2-E has been added as a leaf/spine option for your fabric. It has 36 x 40/100 Gbps QSFP ports supporting 1/10/25/40/50/100 Gbps port speeds or 16/32 Gbps FC ports, with a total of 7.2 Tbps of bandwidth and over 2.4 bpps (billion packets per second). Breakout is supported on all ports.
Resolved issues
A breakout parent port shows in the drop-down list for the SPAN source even after the port is broken out.
For a health record query using the last page and a time range, the GUI displays some health records with a creation time that is beyond the time range (such as 24h).
After migrating a VM between two hosts using VMware vMotion, the EPG does not get deployed on the target leaf node. When affected, the fvIfConn managed object corresponding to the missing EPG can be seen on the APIC, but it is missing from the target leaf node when queried.
When there are more than 40 objects in the tree and you double-click an object in the BGP Peer table, the tree does not expand because it does not have pagination. The APIC tries to load all objects in one query, which drastically slows the GUI.
When HBR is enabled on a source EPG's bridge domain and the subnet is configured with the private scope (advertise externally = FALSE), if there is a shared service EPG contract with an L3Out, the L3Out will not publish the subnet or the corresponding /32 host routes because of this private scope.
In this scenario, if an explicit ESG leakRoute is also configured for the same subnet across those VRF instances, the leakRoute is faulted because the route is already shared through an EPG contract and is installed in the hardware along with a pcTag. Because the leakRoute is faulted, it should not be processed, and any flags under it should not be considered.
However, if this explicit leakRoute has a public scope, the /32 host routes are still published externally out of the L3Out, which should not happen because the leakRoute itself is faulted and the bridge domain subnet scope is private.
When a VRF-level subnet <fvRtSummSubnet> and an instP-level subnet <l3extSubnet> with a summary policy are configured for an overlapping subnet, the routes get summarized by the configuration that was added first, but the fault on the configuration that was added last is not shown in the Cisco APIC GUI.
When a VRF-level subnet <fvRtSummSubnet> exists with a summary policy, and an instP-level subnet <l3extSubnet> with the same subnet as the VRF-level subnet is associated with a summary policy, no fault is seen on the Cisco APIC. The summarization is done according to the VRF-level subnet <fvRtSummSubnet>.
VMM domain attachments of floating SVIs that are configured for dual stack with the same encapsulation and the same VMM domain attachments are not cleaned up after downgrading from 6.0(1) to an earlier release.
Importing the routing table of a remote site carries the wrong autonomous system number (ASN).
For more details, go to https://Cisco.com/go/aci.