Time to have some fun in the lab with Inter-AS Option AB. Let’s get our geek on!
Inter-AS Option AB is the option where the
- data traffic uses the VRF interfaces (or sub-interfaces) and the
- control plane (BGP VPNv4) uses the global interfaces (or sub-interfaces).
Inter-AS Option AB has been around for a while. What are its benefits? Let’s take a snippet out of a Cisco configuration guide to help us with that.
“Benefits of MPLS VPN—Inter-AS Option AB
The MPLS VPN—Inter-AS Option AB feature provides the following benefits for service providers:
- Network configuration can be simplified because only one BGP session is configured for each VRF on the ASBR.
- One BGP session reduces CPU utilization.
- Networks can be scaled because a single MP-BGP session, which is enabled globally on the router, reduces the number of sessions required by multiple VPNs, while continuing to keep VPNs isolated and secured from each other.
- IP QoS functions between ASBR peers are maintained for customer SLAs.
- Dataplane traffic is isolated on a per-VRF basis for security purposes”
Ready to get your geek on? Let’s rock and roll and have some fun! Let’s go through the configs and how to set this up, and then follow the control plane and the data plane. We will use colors to make it clear which VRF we are in.
RED = vrf PC
GREEN = vrf HR
BLUE = global (default routing table)
FUN IN THE LAB
- Create the VRFs (HR and PC) in Cowbird and Whiteduck
- Configure Whiteduck’s Trunk to the Traffic Generator
- Configure Cowbird’s Trunk to the Traffic Generator
- Configure Whiteduck’s Trunk to Cowbird
- Configure Cowbird’s Trunk to Whiteduck
- Configure BGP VPNv4 between Whiteduck and Cowbird
- Look at Control Plane and Data (Forwarding) Plane
1) Create the VRFs (HR and PC) in Cowbird and Whiteduck
For those familiar with BGP VPNv4 and VRFs, the RD, RTs, and address-family IPv4 being defined are “standard”. The thing that is different with Inter-AS Option AB is the “inter-as-hybrid next-hop” command.
As you can see from the configs above and the diagram,
- Under VRF definition HR, Cowbird has 101.101.101.1 as its “inter-as-hybrid next-hop”. That is Whiteduck’s interface that is also defined as being in VRF HR, as we can see from the diagram above the VRF definitions for Cowbird.
- Under VRF definition PC, Cowbird has 102.102.102.1 as its “inter-as-hybrid next-hop”. That is Whiteduck’s interface that is also defined as being in VRF PC.
Given this, it should not be a surprise if I tell you that Whiteduck’s VRF definitions for HR and PC are similar.
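Since the original screenshots aren’t reproduced here, here is a rough sketch of what Cowbird’s VRF definitions look like. The 101.101.101.1 and 102.102.102.1 next-hops are straight from the lab; the RD and RT values are placeholders I made up, so use whatever fits your design.

vrf definition HR
 ! placeholder RD/RT values
 rd 65000:11
 address-family ipv4
  route-target export 65000:11
  route-target import 65000:11
  ! Whiteduck's VRF HR interface on the inter-AS link
  inter-as-hybrid next-hop 101.101.101.1
 exit-address-family
!
vrf definition PC
 rd 65000:12
 address-family ipv4
  route-target export 65000:12
  route-target import 65000:12
  ! Whiteduck's VRF PC interface on the inter-AS link
  inter-as-hybrid next-hop 102.102.102.1
 exit-address-family

Whiteduck’s definitions are the mirror image, with its “inter-as-hybrid next-hop” values presumably pointing back at Cowbird’s side of the per-VRF links.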
2) Config Whiteduck’s Trunk to the Traffic Generator
Whiteduck’s Gig0/0/2 is connected to a Spirent TestCenter. Using sub-interfaces, we trunk this port so that vlan 11 is in VRF HR and vlan 12 is in VRF PC.
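In case it helps, a minimal sub-interface sketch of that trunk is below. The port (Gig0/0/2) and the vlan-to-VRF mapping are from the lab; the IP addresses are just placeholders since the original screenshots aren’t shown here.

interface GigabitEthernet0/0/2.11
 encapsulation dot1Q 11
 vrf forwarding HR
 ! placeholder addressing for the HR test subnet
 ip address 9.21.11.1 255.255.255.0
!
interface GigabitEthernet0/0/2.12
 encapsulation dot1Q 12
 vrf forwarding PC
 ! placeholder addressing for the PC test subnet
 ip address 9.21.12.1 255.255.255.0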
3) Configure Cowbird’s Trunk to the Traffic Generator
Cowbird’s Gig0/0/11 is connected to a Spirent TestCenter. We trunk this port so that vlan 11 is in VRF HR and vlan 12 is in VRF PC (a rough config sketch follows the EVC background below).
What you see above is called an Ethernet Virtual Circuit (EVC). This is actually not a new thing and has been around for a long time now. My first experience with these was years ago when the 7600 ES+ line cards came out.
“Ethernet virtual circuits (EVCs) define a Layer 2 bridging architecture that supports Ethernet services. An EVC is defined by the Metro-Ethernet Forum (MEF) as an association between two or more user network interfaces that identifies a point-to-point or multipoint-to-multipoint path within the service provider network. An EVC is a conceptual service pipe within the service provider network. A bridge domain is a local broadcast domain that exists separately from VLANs.”
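Here is roughly what that EVC/BDI setup on Cowbird looks like. Gig0/0/11, vlan 11 in HR, vlan 12 in PC, and the 9.22.11.0 / 9.22.12.0 LAN subnets are from the lab; the bridge-domain/BDI numbers and the host addresses are my own placeholders.

interface GigabitEthernet0/0/11
 service instance 11 ethernet
  encapsulation dot1q 11
  rewrite ingress tag pop 1 symmetric
  bridge-domain 11
 !
 service instance 12 ethernet
  encapsulation dot1q 12
  rewrite ingress tag pop 1 symmetric
  bridge-domain 12
!
interface BDI11
 vrf forwarding HR
 ip address 9.22.11.1 255.255.255.0
!
interface BDI12
 vrf forwarding PC
 ip address 9.22.12.1 255.255.255.0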
4) Config Whiteduck’s Trunk to Cowbird
On Whiteduck we are going to make the physical interface from Whiteduck to Cowbird a trunk, again by using sub-interfaces: vlan 10 in the global (default VRF) routing table, vlan 101 in VRF HR, and vlan 102 in VRF PC. (A sketch of these sub-interfaces follows below.)
Looks all pretty normal.
Just like the sub-interfaces on Whiteduck towards the Spirent Traffic Generator. Well… okay… except for that “mpls bgp forwarding” command. I didn’t actually type that in.
How did it show up? It actually got configured and showed up automatically in step 6 below – when I was in Whiteduck and configured Cowbird’s 10.32.10.2 IP address as an “inter-as-hybrid” BGP VPNv4 peer.
So it protected me from doing an incomplete configuration that wouldn’t work. Yes… Cowbird did the same thing for me, as we shall see in its configuration below.
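A sketch of those Whiteduck sub-interfaces is below. The vlan numbers, the VRF mapping, and the 101.101.101.1 / 102.102.102.1 addresses are from the lab; the physical port name and the global-side address (10.32.10.1, to pair with Cowbird’s 10.32.10.2) are assumptions on my part.

interface GigabitEthernet0/0/1.10
 encapsulation dot1Q 10
 ! assumed global-side address, paired with Cowbird's 10.32.10.2
 ip address 10.32.10.1 255.255.255.0
 ! added automatically when the inter-as-hybrid VPNv4 peer is configured
 mpls bgp forwarding
!
interface GigabitEthernet0/0/1.101
 encapsulation dot1Q 101
 vrf forwarding HR
 ip address 101.101.101.1 255.255.255.0
!
interface GigabitEthernet0/0/1.102
 encapsulation dot1Q 102
 vrf forwarding PC
 ip address 102.102.102.1 255.255.255.0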
5) Config Cowbird’s Trunk to Whiteduck
As we can see, EVCs are used again on Cowbird’s interface to make its physical connection with Whiteduck into a trunk port. We can also see the “mpls bgp forwarding” command that got put on the layer 3 portion of the global (default VRF) interface. Again, I had not actually typed this in. It was put in when I configured Cowbird’s BGP VPNv4 peering with Whiteduck as an “inter-as-hybrid” peer.
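And roughly the mirror image on Cowbird, EVC/BDI style. The 10.32.10.2 global address is from the lab; the physical port, the bridge-domain/BDI numbers, and the VRF-side addresses (101.101.101.2 / 102.102.102.2) are assumptions.

interface GigabitEthernet0/0/10
 service instance 10 ethernet
  encapsulation dot1q 10
  rewrite ingress tag pop 1 symmetric
  bridge-domain 10
 !
 service instance 101 ethernet
  encapsulation dot1q 101
  rewrite ingress tag pop 1 symmetric
  bridge-domain 101
 !
 service instance 102 ethernet
  encapsulation dot1q 102
  rewrite ingress tag pop 1 symmetric
  bridge-domain 102
!
interface BDI10
 ! global (default VRF) side, facing Whiteduck
 ip address 10.32.10.2 255.255.255.0
 ! added automatically when the inter-as-hybrid VPNv4 peer is configured
 mpls bgp forwarding
!
interface BDI101
 vrf forwarding HR
 ip address 101.101.101.2 255.255.255.0
!
interface BDI102
 vrf forwarding PC
 ip address 102.102.102.2 255.255.255.0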
6) Configure BGP VPNv4 between Whiteduck and Cowbird
So what exactly does the BGP look like?
Looking at the two BGP configs below, we see:
- No BGP peering between the two routers over the vrf HR
- No BGP peering between the two routers over the vrf PC
- “Typical” BGP VPNv4 commands (activate & send-community extended)
- The “inter-as-hybrid” keyword on the neighbor statement. This is the new thing. This is what “triggers” the install of the command “mpls bgp forwarding” on the global (default VRF) layer 3 interface between these two routers.
Cowbird’s BGP is pretty similar to Whiteduck’s.
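Here is a minimal sketch of Whiteduck’s side of that peering. The 10.32.10.2 neighbor address and the inter-as-hybrid keyword are from the lab; the AS numbers (65001 for Whiteduck, 65002 for Cowbird) are placeholders since the post doesn’t state them. Cowbird’s config is the mirror image, pointing at Whiteduck’s global address.

router bgp 65001
 ! eBGP VPNv4 peering to Cowbird over the global sub-interface
 neighbor 10.32.10.2 remote-as 65002
 !
 address-family vpnv4
  neighbor 10.32.10.2 activate
  neighbor 10.32.10.2 send-community extended
  ! the Option AB knob; this is what also triggers "mpls bgp forwarding"
  ! on the global interface toward this peer
  neighbor 10.32.10.2 inter-as-hybrid
 exit-address-family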
7) Look at Control Plane and Data (Forwarding) Plane
Since you might be more familiar with sub-interfaces than with EVCs and BDIs, we will go to Whiteduck to look at what all this looks like.
Let’s focus specifically on the control plane and the forwarding plane on Whiteduck as they relate to getting to Cowbird’s LAN interfaces for the VRFs.
- VRF HR: 9.22.11.0
- VRF PC: 9.22.12.0
CONTROL PLANE
First let’s look at the control-plane on Whiteduck.
Looking at VRF HR, next hop for 9.22.11.0 is 10.32.10.2 which is in the global (default VRF).
Looking at VRF PC, we see the same. The next hop for 9.22.12.0 is also 10.32.10.2.
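If you want to poke at this yourself, these are the sort of lookups that show it (the hostname prompt and the annotation lines are mine, and the output is trimmed):

Whiteduck# show ip route vrf HR 9.22.11.0
! BGP route, next hop 10.32.10.2, which lives in the global (default VRF)
Whiteduck# show ip route vrf PC 9.22.12.0
! BGP route, same next hop of 10.32.10.2 in the global (default VRF)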
At first this kinda surprised me. Why? Remember when we first started looking at inter-AS option AB I said —
- data traffic uses the VRF interfaces (or sub-interfaces) and the
- control plane (BGP VPNv4) uses the global interfaces (or sub-interfaces).
But the next hop for both subnet 9.22.11.0 and subnet 9.22.12.0, according to the control plane, is in the global (default VRF), learned through BGP VPNv4. With regular BGP VPNv4 this would be the case and we would use labels and forward the data traffic over the interface associated with the control plane next hop, which for us is 10.32.10.2 in the global (default VRF).
So when I saw this originally I was like… “wait… that ain’t right!”.
Let’s dig deeper shall we? 🙂 Let’s look at the labels for the control plane.
Again… IF this were “traditional” BGP VPNv4 and NOT Inter-AS Option AB, we would read the above and still assume that if we want to get to 9.22.11.0 or 9.22.12.0 we need to (according to the control plane) slap a label on the packet and send it over the global interface. What we see above is that the control plane instructions (the BGP labels) Cowbird is sending Whiteduck are:
- label 25 with next-hop 10.32.10.2 to get to 9.22.11.0
- label 23 with next-hop 10.32.10.2 to get to 9.22.12.0
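For reference, those labels come from a lookup like this (the label values and next hop are from the lab; the annotation lines are mine):

Whiteduck# show bgp vpnv4 unicast all labels
! look for 9.22.11.0 -> next hop 10.32.10.2, out label 25 (VRF HR)
! and for 9.22.12.0 -> next hop 10.32.10.2, out label 23 (VRF PC)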
But this is Inter-AS Option AB.
And the data traffic will use the VRF interfaces (or sub-interfaces). Let’s stop looking at the control plane. 🙂 Let’s look at the forwarding plane.
DATA PLANE
VOILA! Success!
Looks EXACTLY like we wanted! All traffic is going over the actual VRF interfaces (or sub-interfaces in our case) just as inter-AS option AB is supposed to! Woot woot!
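The data plane check is basically a CEF lookup per VRF. The commands are the useful part here; the sub-interface names match the placeholder naming I used in the sketches above, so adjust for your own lab:

Whiteduck# show ip cef vrf HR 9.22.11.0 detail
! output interface is the VRF HR sub-interface (e.g. Gi0/0/1.101),
! not the global Gi0/0/1.10 that the BGP next hop lives on
Whiteduck# show ip cef vrf PC 9.22.12.0 detail
! output interface is the VRF PC sub-interface (e.g. Gi0/0/1.102)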
Hope you had “fun in the lab” with me. 🙂
*Additional Note: This blog originally posted in 2015. It was updated and rewritten in March of 2019.
Hi Fish,”mpls bgp forwarding” command is triggered by your vpnv4 eBGP connection over a not mpls enabled link.
MPLS IP is not enabled on the inter-provider interfaces, but the ASBRs won’t drop the received MPLS-encapsulated packets (ethertype 0x8847) on the global inter-provider link, thanks to the command “mpls bgp forwarding”. This feature is needed for CsC, Option B, Option C, and Option AB Shared Interface Forwarding.
I just think it is kinda interesting because it is not triggered if I use iBGP VPNv4. The packet itself should look the same regardless of whether the label is sent from an iBGP VPNv4 peer or an eBGP VPNv4 peer. I’ve used this option with CsC before. And… admittedly… this is over traffic running over DMVPN… over MPLS. So the true internet carrier will never see the label. Just the DMVPN crypto fun. The DMVPN hub and spoke are both also the BGP VPNv4 peers. Just kinda found it more of a surprise. 🙂 Love fun in the lab.
There are some crazy designs with DMVPN :)) I saw the following solution by a customer last year: MPLS over encrypted DMVPN over the MPLS backbone of the SP with CsC.
It is not a problem, as long as you don’t have to find the cause of a QoS marking issue 🙂
“mpls bgp forwarding”:
It is only triggered if we use eBGP VPNv4, or labeled eBGP unicast (SAFI 4), for label exchange over a non-MPLS-enabled interface.
If I use an eBGP VPNv4 connection, the system assumes that the two ASes (connected by this VPNv4 peering) have their own separate IGP/LDP setup and that the network admins don’t want to run IGP and LDP on the inter-AS link.
–> Without “mpls ip”, MPLS won’t be enabled/allowed on the inter-AS link –> so the command “mpls bgp forwarding” is needed.
Hi Fish,
You explained this technology in a simple manner, and I appreciate it. I have worked on / done INE labs on the other options, CsC and U-MPLS. Now, EVC is not my strong suit 🙁 Do you have any good material on that particular topic?
Thanks Again