Monthly Archives: February 2014

4.1.d Implement and troubleshoot DMVPN [single hub]

this one covers a lot of blueprint ground.

2.3b, 4.1c, 4.1d

mlppp

continuing with the earlier mlppp topology, i added dmvpn.

interface Multilink1
ip address 100.1.10.100 255.255.255.0
ip nbar protocol-discovery
ppp authentication chap
ppp multilink
ppp multilink group 1

hub#sh run int tun 0 | b inter
interface Tunnel0
ip address 10.1.1.1 255.255.255.0
no ip redirects
ip mtu 1400
ip hold-time eigrp 1 60
no ip next-hop-self eigrp 1
ip nhrp map multicast dynamic
ip nhrp network-id 1
no ip split-horizon eigrp 1
tunnel source 100.1.10.100
tunnel mode gre multipoint
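the hub's eigrp process isn't captured above; something like the following is assumed (the network statements are my guess, based on the routes the spokes learn later in this post):

router eigrp 1
network 10.1.1.0 0.0.0.255
network 1.1.1.0 0.0.0.255
network 100.1.10.0 0.0.0.255

the tunnel-specific eigrp tweaks (no split-horizon, no next-hop-self, hold-time) live on tunnel0 itself, as shown above.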

hub#sh ip eigrp neigh
IP-EIGRP neighbors for process 1
H   Address                 Interface       Hold Uptime   SRTT   RTO  Q  Seq
                                            (sec)         (ms)       Cnt Num
1   10.1.1.3                Tu0               11 00:54:30  143  5000  0  2
0   10.1.1.2                Tu0               59 00:54:34  195  5000  0  4

hub#sh ip route eigrp
2.0.0.0/24 is subnetted, 1 subnets
D       2.2.2.0 [90/297372416] via 10.1.1.2, 00:55:01, Tunnel0
100.0.0.0/8 is variably subnetted, 6 subnets, 2 masks
D       100.1.1.10/32 [90/297756416] via 10.1.1.2, 00:55:01, Tunnel0
D       100.1.1.20/32 [90/297756416] via 10.1.1.3, 00:54:55, Tunnel0
3.0.0.0/24 is subnetted, 1 subnets
D       3.3.3.0 [90/297372416] via 10.1.1.3, 00:54:55, Tunnel0

the two spoke tunnel configs:

spoke1#sh run int tun 0 | b inter
interface Tunnel0
ip address 10.1.1.2 255.255.255.0
no ip redirects
ip mtu 1416
ip hold-time eigrp 1 60
no ip next-hop-self eigrp 1
ip nhrp map 10.1.1.1 100.1.10.100
ip nhrp map multicast 100.1.10.100
ip nhrp network-id 1
ip nhrp nhs 10.1.1.1
no ip split-horizon eigrp 1
tunnel source 100.1.20.2
tunnel mode gre multipoint

spoke2#sh run int tun 0 | b inter
interface Tunnel0
ip address 10.1.1.3 255.255.255.0
no ip redirects
ip mtu 1400
ip nhrp map multicast dynamic
ip nhrp map 10.1.1.1 100.1.10.100
ip nhrp map multicast 100.1.10.100
ip nhrp network-id 1
ip nhrp nhs 10.1.1.1
tunnel source 100.1.30.2
tunnel mode gre multipoint

spoke2#trace 10.1.1.2

Type escape sequence to abort.
Tracing the route to 10.1.1.2

1 10.1.1.2 76 msec *  48 msec
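the single hop above means spoke2 reached spoke1 directly over the mgre tunnel, not through the hub. the nhrp resolution behind it can be checked with (output not captured here):

spoke2#sh ip nhrp
spoke2#sh dmvpn

look for a dynamic nhrp entry mapping 10.1.1.2 to spoke1's nbma address 100.1.20.2.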

spoke1#sh ip route | b Gate
Gateway of last resort is 100.1.20.1 to network 0.0.0.0

1.0.0.0/24 is subnetted, 1 subnets
D       1.1.1.0 [90/297372416] via 10.1.1.1, 01:05:56, Tunnel0
2.0.0.0/24 is subnetted, 1 subnets
C       2.2.2.0 is directly connected, Loopback0
100.0.0.0/8 is variably subnetted, 6 subnets, 2 masks
S       100.1.10.100/32 [1/0] via 100.1.20.1
D       100.1.1.1/32 [90/299804416] via 10.1.1.1, 01:05:56, Tunnel0
C       100.1.1.10/32 is directly connected, Serial1/0
D       100.1.10.0/24 [90/299804416] via 10.1.1.1, 01:05:56, Tunnel0
D       100.1.1.20/32 [90/310556416] via 10.1.1.3, 01:05:51, Tunnel0
C       100.1.20.0/24 is directly connected, Serial1/0
3.0.0.0/24 is subnetted, 1 subnets
D       3.3.3.0 [90/310172416] via 10.1.1.3, 01:05:51, Tunnel0
10.0.0.0/24 is subnetted, 1 subnets
C       10.1.1.0 is directly connected, Tunnel0
S*   0.0.0.0/0 [1/0] via 100.1.20.1
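note the static /32 to the hub's nbma address 100.1.10.100. without it, spoke1 could learn that address through the tunnel itself (100.1.10.0/24 shows up as a D route above) and end up recursing the tunnel source through the tunnel. presumably it was configured as:

ip route 100.1.10.100 255.255.255.255 100.1.20.1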

2.3.b Implement and troubleshoot PPP

multilink point-to-point protocol

mlppp

create the multilink interface:

interface Multilink1
ip address 100.1.1.100 255.255.255.0
ppp multilink
ppp multilink group 1

add the interfaces to the group:

interface Serial1/1
no ip address
encapsulation ppp
ppp multilink group 1

interface Serial1/2
no ip address
encapsulation ppp
ppp multilink group 1

verify:

hub#sh ppp multilink active

Multilink1, bundle name is inet
Endpoint discriminator is inet
Bundle up for 00:03:45, total bandwidth 3088, load 1/255
Receive buffer limit 24000 bytes, frag timeout 1000 ms
0/0 fragments/bytes in reassembly list
0 lost fragments, 7 reordered
0/0 discarded fragments/bytes, 0 lost received
0xF received sequence, 0xE sent sequence
Member links: 2 active, 1 inactive (max not set, min not set)
    Se1/1, since 00:03:45
    Se1/2, since 00:03:45
    Se1/0 (inactive)
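se1/0 shows as an inactive member; most likely it is configured for the bundle but down or not negotiating multilink. a sketch of what would bring it in (assuming the link itself is usable):

interface Serial1/0
no ip address
encapsulation ppp
ppp multilink
ppp multilink group 1
no shutdown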

place nbar on the multilink:

interface Multilink1
ip address 100.1.1.100 255.255.255.0
ip nbar protocol-discovery
ppp multilink
ppp multilink group 1

spoke2#ping 100.1.1.100 rep 100 siz 1500

hub#sh ip nbar proto proto icmp

 Multilink1
                            Input                    Output
   Protocol                 Packet Count             Packet Count
                            Byte Count               Byte Count
                            5min Bit Rate (bps)      5min Bit Rate (bps)
                            5min Max Bit Rate (bps)  5min Max Bit Rate (bps)
   ------------------------ ------------------------ ------------------------
   icmp                     100                      0
                            150400                   0
                            3000                     0
                            6000                     0

configure chap:

inet#sh run | i user
username hub password 0 ccie

hub#sh run | i user
username inet password 0 ccie

hub#sh run int multi 1 | i auth
ppp authentication chap
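as configured here, chap is one-way: the hub challenges inet. for mutual authentication inet would also need ppp authentication chap on its side. and if the sent hostname needs to differ from the router's name, it can be overridden per interface; a sketch, not from this lab:

interface Multilink1
ppp authentication chap
ppp chap hostname hub
ppp chap password 0 ccie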

verify:

(screenshot: chap)

organization

in recent weeks i have been reorganizing this site to match the new v5 blueprint. the items categorized ccie v5 are all relevant to the blueprint. more specifically, items tagged 1.1a, 2.2a, etc. are an exact match to blueprint sections, ie:

click the 1.1a tag under "between the sheets":

between the sheets

this will yield posts specifically aimed at satisfying the requirements for 1.1a for the ccie v5 written blueprint below:

Network theory
1.1.a Describe basic software architecture differences between IOS and IOS XE
1.1.a (i) Control plane and Forwarding plane
1.1.a (ii) Impact to troubleshooting and performances
1.1.a (iii) Excluding specific platform’s architecture

(screenshot: example_tag_1.1a)

simple enough…

3.2b PIM dense

3.2.b Implement and troubleshoot IPv4 protocol independent multicast
3.2.b (i) PIM dense mode, sparse mode, sparse-dense mode

dense mode assumes every node in the network wants the multicast traffic, so it floods traffic everywhere; nodes not wanting the traffic must then prune (flood and prune). dense mode uses source distribution trees (S,G).

Router(config)#ip multicast-routing
Router(config)#int f0/0
Router(config-if)#ip pim dense-mode
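a quick end-to-end test: have a router join the group with igmp and ping it from the source side. the join interface and the source router's name are my assumptions; the group matches the mroute output later in this post:

r2(config)#int f0/0
r2(config-if)#ip igmp join-group 224.9.9.9

r4#ping 224.9.9.9 repeat 5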

dense mode mechanics:

1. a source floods multicast traffic through the network

2. if more than one router is forwarding over a broadcast medium, assert messages determine which becomes the PIM forwarder. the router with the best route to the source wins: lowest administrative distance first, then lowest metric, with highest ip address as the tiebreaker

3. routers may not currently have receivers for the flooded group. these routers send a prune message to their upstream router, requesting that their branch of the distribution tree be pruned. however, if another router on the same broadcast medium has attached receivers, it will cancel the prune by sending a join (prune override).

4. if a receiver appears behind a previously pruned router, that router rejoins the tree by sending a graft message.

r2(config-if)#do sh ip mroute | b \(
(*, 224.9.9.9), 00:03:54/stopped, RP 0.0.0.0, flags: DC
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet1/0, Forward/Dense, 00:03:54/00:00:00
FastEthernet0/0, Forward/Dense, 00:03:54/00:00:00

(192.168.34.4, 224.9.9.9), 00:03:00/00:00:07, flags: T
Incoming interface: FastEthernet1/0, RPF nbr 192.168.23.3
Outgoing interface list:
FastEthernet0/0, Forward/Dense, 00:03:00/00:00:00

(*, 224.0.1.40), 00:06:29/00:02:01, RP 0.0.0.0, flags: DCL
Incoming interface: Null, RPF nbr 0.0.0.0
Outgoing interface list:
FastEthernet1/0, Forward/Dense, 00:05:27/00:00:00
FastEthernet0/0, Forward/Dense, 00:06:29/00:00:00

r3(config-if)#do sh ip pim neigh
PIM Neighbor Table
Mode: B - Bidir Capable, DR - Designated Router, N - Default DR Priority,
      S - State Refresh Capable
Neighbor          Interface                Uptime/Expires    Ver   DR
Address                                                            Prio/Mode
192.168.34.4      FastEthernet0/0          00:09:17/00:01:20 v2    1 / DR S
192.168.23.2      FastEthernet1/0          00:09:57/00:01:39 v2    1 / S