Saturday, February 9, 2013

Multicast VPN with static-rp




Issue

In this example we will examine Multicast VPN over an MPLS backbone, using static RP assignments for multicast information delivery.
Our CEs in this example are R1 and R5.
Our PEs are R2 and R4, and R3 is the P router.

We will run OSPF as the IGP inside the MPLS backbone.
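
A quick sketch of the lab topology (subnets and roles taken from the configurations below):

R1(CE) --192.1.12.0/24-- R2(PE) --192.1.23.0/24-- R3(P) --192.1.34.0/24-- R4(PE) --192.1.45.0/24-- R5(CE)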

Configuration

R1

R1#sh run int f1/0
Building configuration...

Current configuration : 114 bytes
!
interface FastEthernet1/0
 ip address 192.1.12.1 255.255.255.0
 ip pim sparse-mode
 speed 100
 duplex full

R1#sh run | sec ip route
ip route 0.0.0.0 0.0.0.0 192.1.12.2

R2

R2#sh run int f1/0
interface FastEthernet1/0
 vrf forwarding VPN_A
 ip address 192.1.12.2 255.255.255.0
 ip pim sparse-mode
 speed 100
 duplex full

R2#sh run int f1/1
interface FastEthernet1/1
 ip address 192.1.23.2 255.255.255.0
 ip pim sparse-mode
 speed 100
 duplex full
 mpls ip

R2#sh run int lo0
interface Loopback0
 ip address 2.2.2.2 255.255.255.255
 ip pim sparse-mode

R2#sh run | sec vrf def
vrf definition VPN_A
 rd 100:1
 route-target export 100:1
 route-target import 100:1
 !
 address-family ipv4
  mdt default 239.1.1.1
 exit-address-family

R2#sh run | sec router ospf
router ospf 1
 router-id 2.2.2.2
 network 2.2.2.2 0.0.0.0 area 0
 network 192.1.23.2 0.0.0.0 area 0

R2#sh run | sec router bgp
router bgp 100
 bgp log-neighbor-changes
 no bgp default ipv4-unicast
 neighbor 4.4.4.4 remote-as 100
 neighbor 4.4.4.4 update-source Loopback0
 !
 address-family ipv4
 exit-address-family
 !
 address-family vpnv4
  neighbor 4.4.4.4 activate
  neighbor 4.4.4.4 send-community both
 exit-address-family
 !
 address-family ipv4 vrf VPN_A
  network 192.1.12.0
 exit-address-family

R3

R3#sh run int f1/0
interface FastEthernet1/0
 ip address 192.1.23.3 255.255.255.0
 ip pim sparse-mode
 speed 100
 duplex full
 mpls ip

R3#sh run int f1/1
interface FastEthernet1/1
 ip address 192.1.34.3 255.255.255.0
 ip pim sparse-mode
 speed 100
 duplex full
 mpls ip

R3#sh run int lo0
interface Loopback0
 ip address 3.3.3.3 255.255.255.255
 ip pim sparse-mode

R3#sh run | sec router ospf
router ospf 1
 router-id 3.3.3.3
 network 3.3.3.3 0.0.0.0 area 0
 network 192.1.23.3 0.0.0.0 area 0
 network 192.1.34.3 0.0.0.0 area 0

R4

R4#sh run int f1/0
interface FastEthernet1/0
 ip address 192.1.34.4 255.255.255.0
 ip pim sparse-mode
 speed 100
 duplex full
 mpls ip

R4#sh run int f1/1
interface FastEthernet1/1
 vrf forwarding VPN_A
 ip address 192.1.45.4 255.255.255.0
 ip pim sparse-mode
 speed 100
 duplex full

R4#sh run int lo0
interface Loopback0
 ip address 4.4.4.4 255.255.255.255
 ip pim sparse-mode

R4#sh run | sec vrf def
vrf definition VPN_A
 rd 100:1
 route-target export 100:1
 route-target import 100:1
 !
 address-family ipv4
  mdt default 239.1.1.1
 exit-address-family

R4#sh run | sec router ospf
router ospf 1
 router-id 4.4.4.4
 network 4.4.4.4 0.0.0.0 area 0
 network 192.1.34.4 0.0.0.0 area 0

R4#sh run | sec router bgp
router bgp 100
 bgp log-neighbor-changes
 no bgp default ipv4-unicast
 neighbor 2.2.2.2 remote-as 100
 neighbor 2.2.2.2 update-source Loopback0
 !
 address-family ipv4
 exit-address-family
 !
 address-family vpnv4
  neighbor 2.2.2.2 activate
  neighbor 2.2.2.2 send-community both
 exit-address-family
 !
 address-family ipv4 vrf VPN_A
  network 192.1.45.0
 exit-address-family

R5

R5#sh run int f1/0
interface FastEthernet1/0
 ip address 192.1.45.5 255.255.255.0
 ip pim sparse-mode
 speed 100
 duplex full

R5#sh run | sec ip route
ip route 0.0.0.0 0.0.0.0 192.1.45.4

Multicast Configuration

We first have to enable multicast routing globally on our devices. On the PE routers, multicast routing must also be enabled for the VRF, since that is where the customer multicast traffic lives.

R2#sh run | inc multica
ip multicast-routing
ip multicast-routing vrf VPN_A
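
R4 carries the equivalent commands (a sketch, assuming the same VRF name VPN_A):

R4(config)#ip multicast-routing
R4(config)#ip multicast-routing vrf VPN_A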

Next, we will enable PIM sparse mode on the transit interfaces (sample configuration shown on R1).

R1#conf t
Enter configuration commands, one per line.  End with CNTL/Z.
R1(config)#int f1/0
R1(config-if)#ip pim sparse-mode
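
On the PEs, the same command also goes on the core-facing interfaces and on Loopback0, since that loopback will source the MDT tunnel (a sketch of what is already present in R2's configuration above):

R2(config)#interface FastEthernet1/1
R2(config-if)#ip pim sparse-mode
R2(config-if)#interface Loopback0
R2(config-if)#ip pim sparse-mode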

Next, we will define the RP address for each multicast domain. In the provider core, our P router (R3, 3.3.3.3) will take this role; inside the customer VRF, PE1 (R2) will act as the RP, using its VRF interface address 192.1.12.2.

R1#sh run | inc rp-add
ip pim rp-address 192.1.12.2

R2#sh run | inc rp-add
ip pim rp-address 3.3.3.3
ip pim vrf VPN_A rp-address 192.1.12.2

R3#sh run | inc rp-add
ip pim rp-address 3.3.3.3

R4#sh run | inc rp-add
ip pim rp-address 3.3.3.3
ip pim vrf VPN_A rp-address 192.1.12.2

R5#sh run | inc rp-add
ip pim rp-address 192.1.12.2

Let us verify the RP mappings:

R1#sh ip pim rp mapping
PIM Group-to-RP Mappings

Group(s): 224.0.0.0/4, Static
    RP: 192.1.12.2 (?)

R2#sh ip pim rp mapping
PIM Group-to-RP Mappings

Group(s): 224.0.0.0/4, Static
    RP: 3.3.3.3 (?)

R3#sh ip pim rp mapping
PIM Group-to-RP Mappings

Group(s): 224.0.0.0/4, Static
    RP: 3.3.3.3 (?)

R4#sh ip pim rp mapping
PIM Group-to-RP Mappings

Group(s): 224.0.0.0/4, Static
    RP: 3.3.3.3 (?)

R5#sh ip pim rp mapping
PIM Group-to-RP Mappings

Group(s): 224.0.0.0/4, Static
    RP: 192.1.12.2 (?)

R2#sh ip pim vrf VPN_A rp mapping  
PIM Group-to-RP Mappings

Group(s): 224.0.0.0/4, Static
    RP: 192.1.12.2 (?)

R4#sh ip pim vrf VPN_A rp mapping    
PIM Group-to-RP Mappings

Group(s): 224.0.0.0/4, Static
    RP: 192.1.12.2 (?)

Not forgetting the MDT group!

R2#sh run | sec vrf def
address-family ipv4
  mdt default 239.1.1.1
 exit-address-family

R4#sh run | sec vrf def
address-family ipv4
  mdt default 239.1.1.1
 exit-address-family
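
These lines are entered under the VRF definition on each PE (a sketch of how it was applied on R2; R4 is identical):

R2(config)#vrf definition VPN_A
R2(config-vrf)#address-family ipv4
R2(config-vrf-af)#mdt default 239.1.1.1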

To make sure that multicast inside our core is working properly, R4's Loopback0 interface will join the group 224.5.5.5.

R4#sh run int lo0 | inc igm
 ip igmp join-group 224.5.5.5
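
The join itself would have been configured directly on the loopback (a sketch):

R4(config)#interface Loopback0
R4(config-if)#ip igmp join-group 224.5.5.5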

Will R2 be able to ping it?
R2#ping 224.5.5.5 repeat 1
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 224.5.5.5, timeout is 2 seconds:

Reply to request 0 from 4.4.4.4, 96 ms
Reply to request 0 from 4.4.4.4, 100 ms

It is working fine. Now let us configure R5's interface F1/0 to join the group 239.9.9.9.

R5#sh run int f1/0 | inc igmp
 ip igmp join-group 239.9.9.9

R1#ping 239.9.9.9 repeat 1
Type escape sequence to abort.
Sending 1, 100-byte ICMP Echos to 239.9.9.9, timeout is 2 seconds:

Reply to request 0 from 192.1.45.5, 148 ms

Our Multicast VPN is working as required. Let us take a closer look at what is happening behind the scenes.

R1#sh ip int bri | inc Tun
Tunnel0                192.1.12.1      YES unset  up                    up  

R2#sh ip mroute vrf VPN_A 239.9.9.9
IP Multicast Routing Table
Flags: D - Dense, S - Sparse, B - Bidir Group, s - SSM Group, C - Connected,
       L - Local, P - Pruned, R - RP-bit set, F - Register flag,
       T - SPT-bit set, J - Join SPT, M - MSDP created entry, E - Extranet,
       X - Proxy Join Timer Running, A - Candidate for MSDP Advertisement,
       U - URD, I - Received Source Specific Host Report,
       Z - Multicast Tunnel, z - MDT-data group sender,
       Y - Joined MDT-data group, y - Sending to MDT-data group,
       G - Received BGP C-Mroute, g - Sent BGP C-Mroute,
       Q - Received BGP S-A Route, q - Sent BGP S-A Route,
       V - RD & Vector, v - Vector
Outgoing interface flags: H - Hardware switched, A - Assert winner
 Timers: Uptime/Expires
 Interface state: Interface, Next-Hop or VCD, State/Mode

(*, 239.9.9.9), 21:57:03/stopped, RP 192.1.12.2, flags: S
  Incoming interface: Null, RPF nbr 0.0.0.0
  Outgoing interface list:
    Tunnel2, Forward/Sparse, 21:51:36/00:02:41

(192.1.12.1, 239.9.9.9), 00:00:03/00:02:56, flags: T
  Incoming interface: FastEthernet1/0, RPF nbr 192.1.12.1
  Outgoing interface list:
    Tunnel2, Forward/Sparse, 00:00:03/00:03:26


R2#show derived-config interface tunnel 2
Building configuration...

Derived configuration : 133 bytes
!
interface Tunnel2
 ip unnumbered Loopback0
 no ip redirects
 ip mtu 1500
 tunnel source Loopback0
 tunnel mode gre multipoint
end

As can be seen, a multipoint GRE tunnel was automatically created on the PE, sourced from Loopback0, with the default MDT group configured under the VRF definition (239.1.1.1) as its destination. The customer's PIM signaling and multicast traffic are carried inside this tunnel across the MPLS core.
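
As an additional check (a suggestion, output not shown here), the two PEs should also see each other as PIM neighbors across this tunnel inside the VRF:

R2#show ip pim vrf VPN_A neighbor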


