Part3 – OPENVSWITCH – Campus Model with Layer2 Access built with Open-Source Applications
In part one we showed how to create the Openvswitch extension and submit it to the Microcore repository. We also presented the post-install steps for Openvswitch adapted to the specific needs of Core Linux.
http://brezular.com/2011/09/03/part1-openvswich-creating-and-submitting-openvswitch-extension-to-microcore-upstream/
In part two we ran several tests of features such as VLANs, 802.1Q trunks and VLAN interfaces, which are widely used in typical multilayer switches.
http://brezular.com/2011/06/25/part2-openvswich-vlans-trunks-l3-vlan-interface-intervlan-routing-configuration-and-testing/
In part three we are going to build a Campus Model using Core Linux with the Openvswitch extension installed. As we need routing between the Distribution and Core layers to follow campus design recommendations, the Quagga routing suite is installed to run routing protocols between the layers.
The Keepalived extension gives us default gateway redundancy with the VRRP protocol in case the default gateway fails. Other extensions such as iproute2 and tcpdump are not strictly necessary but are useful, so they are included in our Qemu image.
Here is my own linux-microcore-3.0.3 Qemu image with Openvswitch 1.2.2, Quagga 0.99.20 and Keepalived 1.2.2 installed as extensions.
http://brezular.com/2013/09/17/linux-core-qemu-and-virtualbox-appliances-download/
I had to recompile the kernel because the default Microcore kernel is not compiled with the multipath option. Read more about the issue here.
The image also contains the tcpdump and iproute2 commands. I recommend using my Qemu image; otherwise additional post-install configuration steps for Quagga and Openvswitch are required. At the very least you should put the following commands into /opt/bootlocal.sh.
#Load modules to kernel
modprobe openvswitch_mod
modprobe ipv6
#Start ovsdb-server
ovsdb-server --remote=punix:/usr/local/var/run/openvswitch/db.sock --remote=db:Open_vSwitch,manager_options --private-key=db:SSL,private_key --certificate=db:SSL,certificate --bootstrap-ca-cert=db:SSL,ca_cert --pidfile --detach
#Initialize database
ovs-vsctl --no-wait init
#Start vswitchd daemon
ovs-vswitchd --pidfile --detach
#Enable forwarding between interfaces
sysctl -w net.ipv4.ip_forward=1
sysctl -w net.ipv6.conf.all.forwarding=1
#Start Quagga routing daemons
/usr/local/sbin/zebra -u root -d -f /usr/local/etc/quagga/zebra.conf
/usr/local/sbin/ripd -u root -d -f /usr/local/etc/quagga/ripd.conf
/usr/local/sbin/ripngd -u root -d -f /usr/local/etc/quagga/ripngd.conf
/usr/local/sbin/ospfd -u root -d -f /usr/local/etc/quagga/ospfd.conf
/usr/local/sbin/ospf6d -u root -d -f /usr/local/etc/quagga/ospf6d.conf
/usr/local/sbin/bgpd -u root -d -f /usr/local/etc/quagga/bgpd.conf
/usr/local/sbin/isisd -u root -d -f /usr/local/etc/quagga/isisd.conf
Note: The 8021q module does not have to be loaded to get a trunk working with Openvswitch. It only has to be loaded when you need to create sub-interfaces with the vconfig command. On the other hand, choosing the right emulated network card in the GNS3 Qemu settings is crucial to get a trunk working. The network card i82557b is known to work.
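For completeness, a minimal sketch of the vconfig case (not needed for anything in this article; eth0 and VLAN 10 are just example values):
tc@box:~$ sudo modprobe 8021q
tc@box:~$ sudo vconfig add eth0 10
tc@box:~$ sudo ifconfig eth0.10 up
The vconfig command creates the tagged sub-interface eth0.10 on top of eth0.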
Note: The iproute2 extension is needed to show multiple equal-cost routes present in the Linux kernel. The topic is described in more detail here.
http://brezular.com/2011/11/14/part2-testing-equal-cost-routes-in-linux-microcore-4-0-2/
Part1 – Core Layer
The Core layer consists of two multilayer Core switches – Core1 and Core2. They are connected together with a point-to-point Layer3 link. The full-mesh topology between the Core and Distribution layers provides a path even when links fail. For example, if both interfaces eth4 and eth1 fail on switch Distrib1, there is still a path to the Core layer via interface eth0.
The Quagga routing daemon runs on all Core and Distribution switches and provides routing using the OSPF routing protocol.
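Before configuring anything, it is worth a quick check that the Quagga daemons started from /opt/bootlocal.sh are actually running, for example:
tc@box:~$ ps | grep -E 'zebra|ospfd'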
Use vtysh, the Quagga shell, to configure the Core switches as follows.
1. Command for accessing Quagga CLI
tc@box:~$ sudo /usr/local/bin/vtysh
Hello, this is Quagga (version 0.99.20).
Copyright 1996-2005 Kunihiro Ishiguro, et al.
2. Configure hostname, interfaces and OSPF routing protocol for Core1
box# configure terminal
box(config)# hostname Core1
Core1(config)# interface eth0
Core1(config-if)# description Link to Core2
Core1(config-if)# ip address 10.10.10.5/30
Core1(config-if)# no shutdown
Core1(config-if)# exit
Core1(config)# interface eth2
Core1(config-if)# description Link to Distrib2
Core1(config-if)# ip address 10.10.10.18/30
Core1(config-if)# no shutdown
Core1(config-if)# exit
Core1(config)# interface eth1
Core1(config-if)# description Link to Distrib1
Core1(config-if)# ip address 10.10.10.10/30
Core1(config-if)# no shutdown
Core1(config-if)# exit
Core1(config)# router ospf
Core1(config-router)# network 10.10.10.4/30 area 0
Core1(config-router)# network 10.10.10.16/30 area 0
Core1(config-router)# network 10.10.10.8/30 area 0
Core1(config-router)# do write
Now exit from vtysh. As Quagga did not save the hostname "Core1" to /usr/local/etc/quagga/zebra.conf, configure it from the Core Linux CLI.
tc@box:~$ echo "hostname Core1" >> /opt/bootlocal.sh
tc@box:~$ sudo hostname Core1
Force Core Linux to save Quagga's configuration files in /usr/local/etc/quagga/ together with the other files listed in /opt/.filetool.lst.
tc@Core1:~$ /usr/bin/filetool.sh -b
Save GNS3 project. Go to File-> Save or use Ctrl + s.
3. Configure hostname, interfaces and OSPF routing protocol for Core2
Start Core2 and configure the router according to the topology; a configuration sketch follows.
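Here is a minimal vtysh sketch for Core2. The addresses and interface roles are reconstructed from the outputs shown later in this article (eth0 towards Core1, eth1 towards Distrib2, eth2 towards Distrib1), so verify them against your own topology diagram before applying.
tc@box:~$ sudo /usr/local/bin/vtysh
box# configure terminal
box(config)# hostname Core2
Core2(config)# interface eth0
Core2(config-if)# description Link to Core1
Core2(config-if)# ip address 10.10.10.6/30
Core2(config-if)# no shutdown
Core2(config-if)# exit
Core2(config)# interface eth1
Core2(config-if)# description Link to Distrib2
Core2(config-if)# ip address 10.10.10.14/30
Core2(config-if)# no shutdown
Core2(config-if)# exit
Core2(config)# interface eth2
Core2(config-if)# description Link to Distrib1
Core2(config-if)# ip address 10.10.10.22/30
Core2(config-if)# no shutdown
Core2(config-if)# exit
Core2(config)# router ospf
Core2(config-router)# network 10.10.10.4/30 area 0
Core2(config-router)# network 10.10.10.12/30 area 0
Core2(config-router)# network 10.10.10.20/30 area 0
Core2(config-router)# do write
Then set the Linux hostname and back up the configuration with filetool.sh, exactly as was done for Core1.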
Check if the Core switches can see each other as OSPF neighbours.
Core2# show ip ospf neighbor
Neighbor ID Pri State Dead Time Address Interface RXmtL RqstL DBsmL
10.10.10.18 1 Full/DR 35.785s 10.10.10.5 eth0:10.10.10.6 0 0 0
Check if routes are properly propagated in the routing table.
Core2# show ip route ospf
Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,
I - ISIS, B - BGP, > - selected route, * - FIB route
O 10.10.10.4/30 [110/10] is directly connected, eth0, 00:08:20
O>* 10.10.10.8/30 [110/20] via 10.10.10.5, eth0, 00:08:10
O 10.10.10.12/30 [110/10] is directly connected, eth1, 00:00:27
O>* 10.10.10.16/30 [110/20] via 10.10.10.5, eth0, 00:08:10
O 10.10.10.20/30 [110/10] is directly connected, eth2, 00:08:20
Part2 – Distribution Layer
The Distribution layer consists of two multilayer Distribution switches – Distrib1 and Distrib2. The main job of the Distribution switches is routing between the different VLAN subnets, which are terminated here. Any traffic filtering rules should also be configured on the Distribution switches.
The uplink interfaces connecting the Distribution switches to the Core switches are Layer3 interfaces and take part in OSPF. The downlink interfaces connecting the Distribution switches to the Access switches are Layer2 interfaces; they are trunks capable of carrying traffic belonging to multiple VLANs.
The next configuration steps are shown for the Distrib1 switch only. Afterwards, make a similar configuration on the Distrib2 switch.
1. Configure Layer3 interfaces and OSPF routing protocol on Distrib1 switch
tc@box:~$ echo "hostname Distrib1" >> /opt/bootlocal.sh
tc@box:~$ /usr/bin/filetool.sh -b
tc@box:~$ sudo hostname Distrib1
tc@Distrib1:~$ sudo /usr/local/bin/vtysh
Distrib1# conf t
Distrib1(config)# hostname Distrib1
Distrib1(config)# interface eth1
Distrib1(config-if)# description Link to Core1
Distrib1(config-if)# ip address 10.10.10.9/30
Distrib1(config-if)# no shutdown
Distrib1(config-if)# exit
Distrib1(config)# int eth2
Distrib1(config-if)# description Link to Core2
Distrib1(config-if)# ip address 10.10.10.21/30
Distrib1(config-if)# no shutdown
Distrib1(config-if)# exit
Distrib1(config)# int eth0
Distrib1(config-if)# description Link to Distrib2
Distrib1(config-if)# ip address 10.10.10.1/30
Distrib1(config-if)# no shutdown
Distrib1(config-if)# exit
Distrib1(config)# router ospf
Distrib1(config-router)# network 10.10.10.0/30 area 0
Distrib1(config-router)# network 10.10.10.20/30 area 0
Distrib1(config-router)# network 10.10.10.8/30 area 0
Distrib1(config-router)# exit
Distrib1(config)# do write
Exit from vtysh and save the content of the /usr/local/etc/quagga/ directory.
tc@box:~$ /usr/bin/filetool.sh -b
Ctrl + s
2. Configure Layer3 interfaces and OSPF routing protocol on Distrib2 switch
Configure the Distrib2 switch according to the topology; a sketch is shown below. Then check whether Distrib2 sees all three OSPF neighbours.
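A minimal vtysh sketch for Distrib2, with addressing reconstructed from the routing outputs below (eth0 towards Distrib1, eth1 towards Core2, eth2 towards Core1); treat it as a guideline and verify it against your topology:
tc@box:~$ echo "hostname Distrib2" >> /opt/bootlocal.sh
tc@box:~$ sudo hostname Distrib2
tc@Distrib2:~$ sudo /usr/local/bin/vtysh
Distrib2# conf t
Distrib2(config)# interface eth0
Distrib2(config-if)# description Link to Distrib1
Distrib2(config-if)# ip address 10.10.10.2/30
Distrib2(config-if)# exit
Distrib2(config)# interface eth1
Distrib2(config-if)# description Link to Core2
Distrib2(config-if)# ip address 10.10.10.13/30
Distrib2(config-if)# exit
Distrib2(config)# interface eth2
Distrib2(config-if)# description Link to Core1
Distrib2(config-if)# ip address 10.10.10.17/30
Distrib2(config-if)# exit
Distrib2(config)# router ospf
Distrib2(config-router)# network 10.10.10.0/30 area 0
Distrib2(config-router)# network 10.10.10.12/30 area 0
Distrib2(config-router)# network 10.10.10.16/30 area 0
Distrib2(config-router)# do write
Exit from vtysh and run /usr/bin/filetool.sh -b, as before.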
Distrib2# show ip ospf neighbor
Neighbor ID Pri State Dead Time Address Interface RXmtL RqstL DBsmL
10.10.10.21 1 Full/DR 38.389s 10.10.10.1 eth0:10.10.10.2 0 0 0
10.10.10.22 1 Full/DR 38.927s 10.10.10.14 eth1:10.10.10.13 0 0 0
10.10.10.18 1 Full/DR 33.153s 10.10.10.18 eth2:10.10.10.17 0 0 0
Check if routes are properly propagated into the Quagga routing table.
Distrib2# show ip route ospf
Codes: K - kernel route, C - connected, S - static, R - RIP, O - OSPF,
I - ISIS, B - BGP, > - selected route, * - FIB route
O 10.10.10.0/30 [110/10] is directly connected, eth0, 00:01:47
O>* 10.10.10.4/30 [110/20] via 10.10.10.14, eth1, 00:01:17
* via 10.10.10.18, eth2, 00:01:17
O>* 10.10.10.8/30 [110/20] via 10.10.10.1, eth0, 00:01:17
* via 10.10.10.18, eth2, 00:01:17
O 10.10.10.12/30 [110/10] is directly connected, eth1, 00:01:37
O 10.10.10.16/30 [110/10] is directly connected, eth2, 00:01:24
O>* 10.10.10.20/30 [110/20] via 10.10.10.1, eth0, 00:01:31
* via 10.10.10.14, eth1, 00:01:31
Exit from the Quagga vtysh shell and check the routing table of Core Linux.
tc@Distrib2:~$ ip route show
10.10.10.0/30 dev eth0 proto kernel scope link src 10.10.10.2
10.10.10.4/30 proto zebra metric 20
nexthop via 10.10.10.14 dev eth1 weight 1
nexthop via 10.10.10.18 dev eth2 weight 1
10.10.10.8/30 proto zebra metric 20
nexthop via 10.10.10.1 dev eth0 weight 1
nexthop via 10.10.10.18 dev eth2 weight 1
10.10.10.12/30 dev eth1 proto kernel scope link src 10.10.10.13
10.10.10.16/30 dev eth2 proto kernel scope link src 10.10.10.17
10.10.10.20/30 proto zebra metric 20
nexthop via 10.10.10.1 dev eth0 weight 1
nexthop via 10.10.10.14 dev eth1 weight 1
127.0.0.1 dev lo scope link
There are two equal-cost paths for each of the networks 10.10.10.4/30, 10.10.10.8/30 and 10.10.10.20/30 present in the kernel routing table of Distrib2.
3. Openvswitch Configuration – configure Layer2 trunk ports and Layer3 VLAN interfaces on Distrib1 switch
Openvswitch does not have a separate CLI for its configuration, so all the configuration is done from the Core Linux CLI.
a) Configure eth3 to become a trunk port with VLAN 10 and VLAN 20 traffic allowed on trunk
tc@Distrib1:~$ sudo ovs-vsctl add-br br0
tc@Distrib1:~$ sudo ovs-vsctl add-port br0 eth3 trunks=10,20
b) Configure eth4 to become a trunk port with VLAN 30 and VLAN 40 traffic allowed on the trunk
tc@Distrib1:~$ sudo ovs-vsctl add-port br0 eth4 trunks=30,40
c) Create VLAN interfaces
tc@Distrib1:~$ sudo ovs-vsctl add-port br0 vlan10 tag=10 -- set interface vlan10 type=internal
tc@Distrib1:~$ sudo ovs-vsctl add-port br0 vlan20 tag=20 -- set interface vlan20 type=internal
tc@Distrib1:~$ sudo ovs-vsctl add-port br0 vlan30 tag=30 -- set interface vlan30 type=internal
tc@Distrib1:~$ sudo ovs-vsctl add-port br0 vlan40 tag=40 -- set interface vlan40 type=internal
d) Check Openvswitch configuration
tc@Distrib1:~$ sudo ovs-vsctl show
a66779ff-0224-40ef-89f1-0deb21b939db
    Bridge "br0"
        Port "eth3"
            trunks: [10, 20]
            Interface "eth3"
        Port "eth4"
            trunks: [30, 40]
            Interface "eth4"
        Port "vlan20"
            tag: 20
            Interface "vlan20"
                type: internal
        Port "vlan10"
            tag: 10
            Interface "vlan10"
                type: internal
        Port "br0"
            Interface "br0"
                type: internal
        Port "vlan30"
            tag: 30
            Interface "vlan30"
                type: internal
        Port "vlan40"
            tag: 40
            Interface "vlan40"
                type: internal
e) Configure IP addresses of VLAN interfaces and OSPF routing protocol on Distrib1 switch
We have two options here: use either the Linux ifconfig command or the Quagga vtysh shell. I chose vtysh, so there is no need to put the commands into /opt/bootlocal.sh.
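For reference, the ifconfig variant would look roughly like this; these lines would also have to go into /opt/bootlocal.sh to survive a reboot, and the OSPF network statements would still be configured in vtysh:
tc@Distrib1:~$ sudo ifconfig vlan10 192.168.10.2 netmask 255.255.255.0 up
tc@Distrib1:~$ sudo ifconfig vlan20 192.168.20.2 netmask 255.255.255.0 up
tc@Distrib1:~$ sudo ifconfig vlan30 192.168.30.2 netmask 255.255.255.0 up
tc@Distrib1:~$ sudo ifconfig vlan40 192.168.40.2 netmask 255.255.255.0 up
The vtysh session used in this article follows.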
tc@Distrib1:~$ sudo /usr/local/bin/vtysh
Distrib1# conf t
Distrib1(config)# interface vlan10
Distrib1(config-if)# ip address 192.168.10.2/24
Distrib1(config-if)# no shutdown
Distrib1(config-if)# interface vlan20
Distrib1(config-if)# ip address 192.168.20.2/24
Distrib1(config-if)# no shutdown
Distrib1(config-if)# interface vlan30
Distrib1(config-if)# ip address 192.168.30.2/24
Distrib1(config-if)# no shutdown
Distrib1(config-if)# interface vlan40
Distrib1(config-if)# ip address 192.168.40.2/24
Distrib1(config-if)# no shutdown
Distrib1(config-if)# router ospf
Distrib1(config-router)# network 192.168.10.0/24 area 0
Distrib1(config-router)# network 192.168.20.0/24 area 0
Distrib1(config-router)# network 192.168.30.0/24 area 0
Distrib1(config-router)# network 192.168.40.0/24 area 0
Distrib1(config)# do write
Exit from vtysh shell and save configuration.
tc@Distrib1:~$ /usr/bin/filetool.sh -b
Ctrl + s.
Do a similar configuration on the Distrib2 switch; a sketch follows.
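A sketch for Distrib2, assuming its downlinks mirror Distrib1 (eth3 carrying VLANs 10 and 20 towards Access1, eth4 carrying VLANs 30 and 40 towards Access2) and that Distrib2 takes the .3 address in each VLAN subnet (192.168.10.3 is confirmed later in the ARP outputs; the other addresses follow the same pattern):
tc@Distrib2:~$ sudo ovs-vsctl add-br br0
tc@Distrib2:~$ sudo ovs-vsctl add-port br0 eth3 trunks=10,20
tc@Distrib2:~$ sudo ovs-vsctl add-port br0 eth4 trunks=30,40
tc@Distrib2:~$ sudo ovs-vsctl add-port br0 vlan10 tag=10 -- set interface vlan10 type=internal
tc@Distrib2:~$ sudo ovs-vsctl add-port br0 vlan20 tag=20 -- set interface vlan20 type=internal
tc@Distrib2:~$ sudo ovs-vsctl add-port br0 vlan30 tag=30 -- set interface vlan30 type=internal
tc@Distrib2:~$ sudo ovs-vsctl add-port br0 vlan40 tag=40 -- set interface vlan40 type=internal
tc@Distrib2:~$ sudo /usr/local/bin/vtysh
Distrib2# conf t
Distrib2(config)# interface vlan10
Distrib2(config-if)# ip address 192.168.10.3/24
Distrib2(config-if)# interface vlan20
Distrib2(config-if)# ip address 192.168.20.3/24
Distrib2(config-if)# interface vlan30
Distrib2(config-if)# ip address 192.168.30.3/24
Distrib2(config-if)# interface vlan40
Distrib2(config-if)# ip address 192.168.40.3/24
Distrib2(config-if)# router ospf
Distrib2(config-router)# network 192.168.10.0/24 area 0
Distrib2(config-router)# network 192.168.20.0/24 area 0
Distrib2(config-router)# network 192.168.30.0/24 area 0
Distrib2(config-router)# network 192.168.40.0/24 area 0
Distrib2(config-router)# do write
Exit from vtysh, run /usr/bin/filetool.sh -b and save the GNS3 project.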
Part3 – Access Layer
The Access layer consists of two Layer2 switches – Access1 and Access2. VLANs are created here and switchports are assigned to VLANs. As each VLAN is restricted to its local Access switch, we call them local VLANs. The primary task of the Access layer is to provide switchports for end users and forward their traffic. Since Layer2 Access switches do not route between different VLANs, traffic is sent to the Distribution layer via 802.1Q trunks when routing is required. The design guides recommend a campus model with local VLANs when about 20 percent of user traffic stays inside the campus and 80 percent is forwarded outside the campus.
1. Configure hostname, access ports for VLAN 10 and 20 and 8021q trunk ports on Access1 switch
tc@box:~$ echo "hostname Access1" >> /opt/bootlocal.sh
tc@box:~$ sudo hostname Access1
tc@Access1:~$ sudo ovs-vsctl add-br br0
tc@Access1:~$ sudo ovs-vsctl add-port br0 eth0 tag=10
tc@Access1:~$ sudo ovs-vsctl add-port br0 eth1 tag=20
tc@Access1:~$ sudo ovs-vsctl add-port br0 eth3 trunks=10,20
tc@Access1:~$ sudo ovs-vsctl add-port br0 eth4 trunks=10,20
2. Check Openvswitch configuration
tc@Access1:~$ sudo ovs-vsctl show
a66779ff-0224-40ef-89f1-0deb21b939db
    Bridge "br0"
        Port "br0"
            Interface "br0"
                type: internal
        Port "eth1"
            tag: 20
            Interface "eth1"
        Port "eth4"
            trunks: [10, 20]
            Interface "eth4"
        Port "eth0"
            tag: 10
            Interface "eth0"
        Port "eth3"
            trunks: [10, 20]
            Interface "eth3"
Save configuration.
tc@Access1:~$ /usr/bin/filetool.sh -b
Ctrl + s.
3. Configure access ports for VLAN 30 and 40 and 8021q trunk ports on Access2 switch
Configure the Access2 switch according to the topology; a sketch follows.
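A sketch for Access2, assuming it mirrors Access1's port layout (eth0 and eth1 as access ports, eth3 and eth4 as uplink trunks towards the Distribution switches):
tc@box:~$ echo "hostname Access2" >> /opt/bootlocal.sh
tc@box:~$ sudo hostname Access2
tc@Access2:~$ sudo ovs-vsctl add-br br0
tc@Access2:~$ sudo ovs-vsctl add-port br0 eth0 tag=30
tc@Access2:~$ sudo ovs-vsctl add-port br0 eth1 tag=40
tc@Access2:~$ sudo ovs-vsctl add-port br0 eth3 trunks=30,40
tc@Access2:~$ sudo ovs-vsctl add-port br0 eth4 trunks=30,40
tc@Access2:~$ /usr/bin/filetool.sh -b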
4. Configure hostname and IP settings for all hosts, PC1 is an example
tc@box:~$ echo "hostname PC1" >> /opt/bootlocal.sh
tc@box:~$ sudo hostname PC1
tc@PC1:~$ echo "ifconfig eth0 192.168.10.10 netmask 255.255.255.0 up" >> /opt/bootlocal.sh
tc@PC1:~$ sudo ifconfig eth0 192.168.10.10 netmask 255.255.255.0 up
tc@PC1:~$ echo "route add default gw 192.168.10.1" >> /opt/bootlocal.sh
tc@PC1:~$ sudo route add default gw 192.168.10.1
tc@PC1:~$ /usr/bin/filetool.sh -b
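The other hosts follow the same pattern. For example PC2 in VLAN 20, assuming the addresses used later in the testing section:
tc@box:~$ echo "hostname PC2" >> /opt/bootlocal.sh
tc@box:~$ sudo hostname PC2
tc@PC2:~$ echo "ifconfig eth0 192.168.20.10 netmask 255.255.255.0 up" >> /opt/bootlocal.sh
tc@PC2:~$ sudo ifconfig eth0 192.168.20.10 netmask 255.255.255.0 up
tc@PC2:~$ echo "route add default gw 192.168.20.1" >> /opt/bootlocal.sh
tc@PC2:~$ sudo route add default gw 192.168.20.1
tc@PC2:~$ /usr/bin/filetool.sh -b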
5. Test connectivity inside the same VLAN, PC1 is an example
PC1 should be able to ping the IP address of the vlan10 interface on both Distribution switches.
tc@PC1:~$ ping 192.168.10.2
PING 192.168.10.2 (192.168.10.2): 56 data bytes
64 bytes from 192.168.10.2: seq=0 ttl=64 time=1.949 ms
64 bytes from 192.168.10.2: seq=1 ttl=64 time=2.342 ms
64 bytes from 192.168.10.2: seq=2 ttl=64 time=3.016 ms
^C
— 192.168.10.2 ping statistics —
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 1.949/2.435/3.016 ms
tc@PC1:~$ ping 192.168.10.3
PING 192.168.10.3 (192.168.10.3): 56 data bytes
64 bytes from 192.168.10.3: seq=0 ttl=64 time=2.297 ms
64 bytes from 192.168.10.3: seq=1 ttl=64 time=2.470 ms
64 bytes from 192.168.10.3: seq=2 ttl=64 time=2.815 ms
^C
— 192.168.10.3 ping statistics —
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 2.297/2.527/2.815 ms
Part4 – Virtual Router Redundancy Protocol /VRRP/ Configuration
Things look good now. We can ping the VLAN interfaces on the Distribution switches from a host residing in the same VLAN. But we still cannot ping any IP address in a different subnet than the host, because neither Distribution switch has been configured to act as a default gateway yet.
You might have noticed that all the hosts were configured with the default gateway IP address 192.168.x.1. This is a virtual IP address created by the Keepalived extension.
Keepalived provides default gateway redundancy using the VRRP protocol. If one of the Distribution switches fails, the other Distribution switch takes over and continues to forward packets. The switch with the higher priority is called the Master and forwards packets; there is always exactly one Master switch per VRRP group. For example, in VRRP group 10 it is Distrib1 with priority 150. The switch with the lower priority is called the Backup and, as said, it forwards packets only if the Master switch fails. The Backup switch in VRRP group 10 is Distrib2 with priority 100. The higher the priority, the more likely the switch is to become the Master.
Obviously, the switches must communicate to let each other know about their existence. By default, the Master switch sends an advertisement every second to all members of the VRRP group on multicast IP address 224.0.0.18. After three missed advertisements plus the skew time, the Backup switch knows the Master is down, transitions from Backup to Master state and takes over packet forwarding.
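To put numbers on it, VRRP (RFC 3768) defines Skew_Time = (256 - Priority) / 256 seconds and Master_Down_Interval = 3 x Advertisement_Interval + Skew_Time. With the default 1-second advertisement interval and a Backup priority of 100, as used below, the Backup switch declares the Master dead after roughly 3 + (256 - 100)/256, which is about 3.6 seconds.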
Note:
A virtual IP address is tied to a virtual MAC address – 00-00-5E-00-01-XX. The last byte of the address (XX) is the Virtual Router Identifier (VRID), which is different for each VRRP instance in the network. This address is used by only one physical router at a time, and that router replies with this MAC address when an ARP request is sent for the virtual router's IP address. If the Master fails, the new Master broadcasts a gratuitous ARP containing the virtual router MAC address for the associated IP address. If I understand it correctly, nothing changes in the host configuration, but the Access switches update their CAM tables – frames with destination MAC address 00-00-5E-00-01-XX are then sent via the new path to the new Master router.
Keepalived works slightly differently – it uses the real MAC address of the interface instead of the virtual MAC address. Check the ARP cache of the hosts in the VRRP testing section; it seems that the current Keepalived/VRRP implementation does not support virtual MAC addresses.
1. VRRP configuration on switch Distrib1
tc@Distrib1:~$ sudo su
root@Distrib1:~# mkdir /usr/local/etc/keepalived/
root@Distrib1:~# echo "/usr/local/etc/keepalived/" >> /opt/.filetool.lst
root@Distrib1:~# vi /usr/local/etc/keepalived/keepalived.conf
vrrp_instance 10 {                           # VRRP instance declaration
    state MASTER                             # Start-up default state
    interface vlan10                         # Binding interface
    virtual_router_id 10                     # VRRP VRID
    priority 150                             # VRRP PRI
    authentication {
        auth_type PASS                       # Simple Passwd or IPSEC AH
        auth_pass Campus123                  # Password string
    }
    virtual_ipaddress {                      # VRRP IP address block
        192.168.10.1/24 brd 192.168.10.255 dev vlan10
    }
}
vrrp_instance 20 {
    state BACKUP
    interface vlan20
    virtual_router_id 20
    priority 100
    authentication {
        auth_type PASS
        auth_pass Campus123
    }
    virtual_ipaddress {
        192.168.20.1/24 brd 192.168.20.255 dev vlan20
    }
}
vrrp_instance 30 {
    state BACKUP
    interface vlan30
    virtual_router_id 30
    priority 100
    authentication {
        auth_type PASS
        auth_pass Campus123
    }
    virtual_ipaddress {
        192.168.30.1/24 brd 192.168.30.255 dev vlan30
    }
}
vrrp_instance 40 {
    state MASTER
    interface vlan40
    virtual_router_id 40
    priority 150
    authentication {
        auth_type PASS
        auth_pass Campus123
    }
    virtual_ipaddress {
        192.168.40.1/24 brd 192.168.40.255 dev vlan40
    }
}
Start the Keepalived daemon. As Distrib2 is not configured for VRRP yet, all the VRRP instances should transition to the MASTER state.
root@Distrib1:~# /usr/local/sbin/keepalived -P -f /usr/local/etc/keepalived/keepalived.conf
Make the Keepalived daemon start automatically after Core Linux boots.
root@Distrib1:~# echo "/usr/local/sbin/keepalived -P -f /usr/local/etc/keepalived/keepalived.conf" >> /opt/bootlocal.sh
Save Keepalived configuration file.
root@Distrib1:~# /usr/bin/filetool.sh -b
Note: After each change in the keepalived.conf file, you have to restart the keepalived daemon for the changes to take effect.
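A minimal restart sequence (the PID below is just a placeholder; take the parent keepalived PID from the ps output, as shown in the testing section):
root@Distrib1:~# ps -ef | grep keepalived
root@Distrib1:~# kill <parent-keepalived-PID>
root@Distrib1:~# /usr/local/sbin/keepalived -P -f /usr/local/etc/keepalived/keepalived.conf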
2. VRRP configuration on switch Distrib2
tc@Distrib2:~$ sudo su
root@Distrib2:~# mkdir /usr/local/etc/keepalived/
root@Distrib2:~# echo "/usr/local/etc/keepalived/" >> /opt/.filetool.lst
root@Distrib2:~# vi /usr/local/etc/keepalived/keepalived.conf
vrrp_instance 10 {                           # VRRP instance declaration
    state BACKUP                             # Start-up default state
    interface vlan10                         # Binding interface
    virtual_router_id 10                     # VRRP VRID
    priority 100                             # VRRP PRI
    authentication {
        auth_type PASS                       # Simple Passwd or IPSEC AH
        auth_pass Campus123                  # Password string
    }
    virtual_ipaddress {                      # VRRP IP address block
        192.168.10.1/24 brd 192.168.10.255 dev vlan10
    }
}
vrrp_instance 20 {
    state MASTER
    interface vlan20
    virtual_router_id 20
    priority 150
    authentication {
        auth_type PASS
        auth_pass Campus123
    }
    virtual_ipaddress {
        192.168.20.1/24 brd 192.168.20.255 dev vlan20
    }
}
vrrp_instance 30 {
    state MASTER
    interface vlan30
    virtual_router_id 30
    priority 150
    authentication {
        auth_type PASS
        auth_pass Campus123
    }
    virtual_ipaddress {
        192.168.30.1/24 brd 192.168.30.255 dev vlan30
    }
}
vrrp_instance 40 {
    state BACKUP
    interface vlan40
    virtual_router_id 40
    priority 100
    authentication {
        auth_type PASS
        auth_pass Campus123
    }
    virtual_ipaddress {
        192.168.40.1/24 brd 192.168.40.255 dev vlan40
    }
}
Start the keepalived daemon.
root@Distrib2:~# /usr/local/sbin/keepalived -P -f /usr/local/etc/keepalived/keepalived.conf
Distrib1 should become the BACKUP router in VRRP groups 20 and 30 and stay in the MASTER state in VRRP groups 10 and 40. Similarly, Distrib2 is the BACKUP router in VRRP groups 10 and 40 and the MASTER in VRRP groups 20 and 30.
Make the keepalived daemon start automatically after Microcore boots.
root@Distrib2:~# echo "/usr/local/sbin/keepalived -P -f /usr/local/etc/keepalived/keepalived.conf" >> /opt/bootlocal.sh
Save keepalived configuration file.
root@Distrib2:~# /usr/bin/filetool.sh -b
Part5 – Testing
Connectivity should now be possible between any two nodes in our campus network, and we are going to test it.
1. Test routing between VLANs
Issue a ping from host PC1 in VLAN10 to hosts PC2, PC3 and PC4 and check if inter-VLAN routing is working.
tc@PC1:~$ ping 192.168.20.10
PING 192.168.20.10 (192.168.20.10): 56 data bytes
64 bytes from 192.168.20.10: seq=0 ttl=63 time=14.443 ms
64 bytes from 192.168.20.10: seq=1 ttl=63 time=4.257 ms
64 bytes from 192.168.20.10: seq=2 ttl=63 time=4.962 ms
^C
— 192.168.20.10 ping statistics —
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 4.257/7.887/14.443 ms
tc@PC1:~$ ping 192.168.30.10
PING 192.168.30.10 (192.168.30.10): 56 data bytes
64 bytes from 192.168.30.10: seq=0 ttl=63 time=9.720 ms
64 bytes from 192.168.30.10: seq=1 ttl=63 time=4.125 ms
64 bytes from 192.168.30.10: seq=2 ttl=63 time=4.920 ms
^C
— 192.168.30.10 ping statistics —
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 4.125/6.255/9.720 ms
tc@PC1:~$ ping 192.168.40.10
PING 192.168.40.10 (192.168.40.10): 56 data bytes
64 bytes from 192.168.40.10: seq=0 ttl=63 time=8.404 ms
64 bytes from 192.168.40.10: seq=1 ttl=63 time=7.604 ms
64 bytes from 192.168.40.10: seq=2 ttl=63 time=4.798 ms
^C
— 192.168.40.10 ping statistics —
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 4.798/6.935/8.404 ms
2. Check the routing table on a Core switch
Check if redundant routes are present in the kernel routing table of Core1. Non-redundant routes such as 10.10.10.4/30, 10.10.10.8/30, 10.10.10.16/30 and 127.0.0.1 are directly connected networks. Two equal-cost paths should exist to every other network learned via the OSPF routing protocol.
tc@Core1:~$ ip route show
10.10.10.0/30 proto zebra metric 20
nexthop via 10.10.10.9 dev eth1 weight 1
nexthop via 10.10.10.17 dev eth2 weight 1
10.10.10.4/30 dev eth0 proto kernel scope link src 10.10.10.5
10.10.10.8/30 dev eth1 proto kernel scope link src 10.10.10.10
10.10.10.12/30 proto zebra metric 20
nexthop via 10.10.10.6 dev eth0 weight 1
nexthop via 10.10.10.17 dev eth2 weight 1
10.10.10.16/30 dev eth2 proto kernel scope link src 10.10.10.18
10.10.10.20/30 proto zebra metric 20
nexthop via 10.10.10.6 dev eth0 weight 1
nexthop via 10.10.10.9 dev eth1 weight 1
127.0.0.1 dev lo scope link
192.168.10.0/24 proto zebra metric 20
nexthop via 10.10.10.9 dev eth1 weight 1
nexthop via 10.10.10.17 dev eth2 weight 1
192.168.20.0/24 proto zebra metric 20
nexthop via 10.10.10.9 dev eth1 weight 1
nexthop via 10.10.10.17 dev eth2 weight 1
192.168.30.0/24 proto zebra metric 20
nexthop via 10.10.10.9 dev eth1 weight 1
nexthop via 10.10.10.17 dev eth2 weight 1
192.168.40.0/24 proto zebra metric 20
nexthop via 10.10.10.9 dev eth1 weight 1
nexthop via 10.10.10.17 dev eth2 weight 1
3. Check connectivity between Access and Core layer
Issue a ping from host PC1 in VLAN10 to the eth0 interface of Core2.
tc@PC1:~$ ping 10.10.10.6
PING 10.10.10.6 (10.10.10.6): 56 data bytes
64 bytes from 10.10.10.6: seq=0 ttl=63 time=7.052 ms
64 bytes from 10.10.10.6: seq=1 ttl=63 time=4.383 ms
64 bytes from 10.10.10.6: seq=2 ttl=63 time=5.246 ms
^C
— 10.10.10.6 ping statistics —
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 4.383/5.560/7.052 ms
4. VRRP testing
Issue a ping from PC1 to the IP addresses of the vlan10 interfaces on both Distribution switches – 192.168.10.2 and 192.168.10.3. Stop the ping sequence and ping the VRRP virtual IP address 192.168.10.1.
Now check ARP cache of PC1.
tc@PC1:~$ arp
? (192.168.10.2) at 00:23:20:bc:57:67 [ether] on eth0
? (192.168.10.1) at 00:23:20:bc:57:67 [ether] on eth0
? (192.168.10.3) at 00:23:20:8c:1d:cf [ether] on eth0
The virtual IP address 192.168.10.1 resolves to the MAC address of the vlan10 interface on the Distrib1 switch – 00:23:20:bc:57:67. Packets destined outside VLAN10 will leave subnet 192.168.10.0/24 via the vlan10 interface on Distrib1. This is what we expected, because Distrib1 is the MASTER router for VRRP group 10.
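You can confirm this on Distrib1 itself: while it is the MASTER for VRRP group 10, Keepalived adds the virtual address to the vlan10 interface, so a quick check with the iproute2 extension should list it (example check only, output not shown):
root@Distrib1:~# ip addr show vlan10 | grep 192.168.10.1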
Now start pinging the IP address 10.10.10.6 of the eth0 interface of Core2 from PC1. We are going to kill the keepalived process on the Distrib1 switch to check whether Distrib2 transitions to the MASTER state for VRRP groups 10 and 40.
List the keepalived processes on the Distrib1 switch.
root@Distrib1:~# ps -ef | grep keepalived
2549 root /usr/local/sbin/keepalived -P -f /usr/local/etc/keepalived/keepalived.conf
2550 root /usr/local/sbin/keepalived -P -f /usr/local/etc/keepalived/keepalived.conf
4937 root grep keepalived
Now, kill Keepalived process.
root@Distrib1:~# kill 2549
Keepalived: Terminating on signal
Keepalived: Stopping Keepalived v1.2.2 (06/28,2011)
Keepalived_vrrp: Terminating VRRP child process on signal
Check the console output of the Distrib2 switch. The informational messages tell us about the transition of the Distrib2 switch to the MASTER state for VRRP groups 10 and 40.
Keepalived_vrrp: VRRP_Instance(10) Transition to MASTER STATE
Keepalived_vrrp: VRRP_Instance(40) Transition to MASTER STATE
Keepalived_vrrp: VRRP_Instance(10) Entering MASTER STATE
Keepalived_vrrp: VRRP_Instance(40) Entering MASTER STATE
Check the ARP cache of PC1 again. It is no big surprise that the virtual IP address 192.168.10.1 now resolves to the MAC address 00:23:20:8c:1d:cf of the vlan10 interface of the Distrib2 switch.
tc@PC1:~$ arp
? (192.168.10.2) at 00:23:20:bc:57:67 [ether] on eth0
? (192.168.10.1) at 00:23:20:8c:1d:cf [ether] on eth0
? (192.168.10.3) at 00:23:20:8c:1d:cf [ether] on eth0
It seems that the series of ping requests from PC1 to 10.10.10.6 was not broken by the transition.
— 10.10.10.6 ping statistics —
206 packets transmitted, 206 packets received, 0% packet loss
round-trip min/avg/max = 1.664/4.343/15.437 ms
Start the keepalived daemon on the Distrib1 switch again. Distrib1 immediately goes back to the MASTER state for VRRP groups 10 and 40 and to BACKUP for VRRP groups 20 and 30.
root@Distrib1:~# /usr/local/sbin/keepalived -P -f /usr/local/etc/keepalived/keepalived.conf
Keepalived: Starting VRRP child process, pid=5134
Keepalived_vrrp: Registering Kernel netlink reflector
Keepalived_vrrp: Registering Kernel netlink command channel
Keepalived_vrrp: Registering gratuitous ARP shared channel
Keepalived_vrrp: Opening file ‘/usr/local/etc/keepalived/keepalived.conf’.
Keepalived_vrrp: Configuration is using : 45592 Bytes
Keepalived_vrrp: Using LinkWatch kernel netlink reflector…
Keepalived_vrrp: VRRP_Instance(20) Entering BACKUP STATE
Keepalived_vrrp: VRRP_Instance(30) Entering BACKUP STATE
Keepalived_vrrp: VRRP_Instance(10) Transition to MASTER STATE
Keepalived_vrrp: VRRP_Instance(40) Transition to MASTER STATE
Keepalived_vrrp: VRRP_Instance(10) Entering MASTER STATE
Keepalived_vrrp: VRRP_Instance(40) Entering MASTER STATE
Distrib2 received a VRRP advertisement with a priority higher than its own and transitioned to the BACKUP state for VRRP instances 10 and 40.
Keepalived_vrrp: VRRP_Instance(10) Received higher prio advert
Keepalived_vrrp: VRRP_Instance(10) Entering BACKUP STATE
Keepalived_vrrp: VRRP_Instance(40) Received higher prio advert
Keepalived_vrrp: VRRP_Instance(40) Entering BACKUP STATE
Also in this case, the ping running on PC1 was not interrupted by the transition back to the MASTER state on Distrib1 for VRRP group 10.
END.
http://en.wikipedia.org/wiki/Virtual_Router_Redundancy_Protocol
http://www.estoile.com/links/vrrp.htm
http://osdir.com/ml/linux.keepalived.devel/2005-01/msg00000.html