
[Reposted] OpenFlow Lab

GETTING STARTED OPENFLOW OPENVSWITCH TUTORIAL LAB : SETUP

For a more up-to-date tutorial (anything more than six months old is outdated in the world of SDN), please see:
OpenDaylight OpenStack Integration with DevStack on Fedora 20

I wrote a Python OpenFlow installation app to automate an OpenFlow KVM and Open vSwitch setup found at:
OpenFlow, OpenvSwitch and KVM SDN Lab Installation App →


GETTING STARTED OPENFLOW OPENVSWITCH TUTORIAL LAB – SETUP:

Getting Started OpenFlow OpenvSwitch Tutorial Lab: This is an OpenFlow tutorial using Open vSwitch and the Floodlight controller, but any other controller or switch can be used. I have had some requests for certain scenarios, so I put this together and added a few more flexible components. Getting to know packages like KVM and Open vSwitch is going to matter a lot in future ecosystem orchestration.

The video doesn’t have any sound; I am tight on time, sorry. I think it should be pretty straightforward, and the video may help if you get stuck. There are probably a couple of typos here and there that I will try to catch over the weekend. We are lacking good lab material on these topics right now, so maybe this will save a few folks some time.


PREREQUISITES

KVM requires an x86 machine with either Intel VT-x or AMD-V support. Anything fairly new will have that support in the processor. There are a few older hardware builds that support hardware-assisted virtualization once it is enabled in the BIOS; pretty much Googling your machine for hardware virtualization will let you know. QEMU can be run on non-VT hardware, but the machine will probably get brutalized by even a few guest VMs. When you are setting up the vSwitch, either have an out-of-band connection or be at the console physically. Be careful when you are adding multiple interfaces to bridges, since you can spin up a bridging loop pretty quickly unless you have STP on. I recommend a test/dev network or mom’s basement network. If not, BPDUguard is your friend :) This is done on a fresh install of 64-bit Ubuntu 12.04 (Precise).
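If you would rather check from the shell than from the vendor docs, the standard /proc/cpuinfo grep below (nothing lab-specific) tells you whether the CPU advertises Intel VT-x (vmx) or AMD-V (svm):

Shell

#Returns a count greater than 0 if the CPU advertises hardware virtualization
$ egrep -c '(vmx|svm)' /proc/cpuinfo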

Quick screencast: I highly recommend using the small Linux test image named linux-0.2.img.bz2 from the QEMU project if you are on a laptop or a nested hypervisor.



Shell

$nano /etc/network/interfaces

Add the configuration for your physical interface to the file and save it.
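As a point of reference, a minimal static configuration for eth0 might look like the snippet below; the addresses are just placeholders matching this lab’s 192.168.1.0/24 network, so substitute your own:

Shell

# /etc/network/interfaces (example only, adjust to your network)
auto eth0
iface eth0 inet static
    address 192.168.1.208
    netmask 255.255.255.0
    gateway 192.168.1.1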

Restart networking. If the configuration is wrong, this will cut you off.

$/etc/init.d/networking restart

$route -n will display your default route if you are having connectivity issues.

$apt-get update

$apt-get dist-upgrade

INSTALL OPENVSWITCH

Shell

$ apt-get install openvswitch-datapath-source bridge-utils

$ module-assistant auto-install openvswitch-datapath

$ apt-get install openvswitch-common

Verify install

$ovs-vsctl show

ovs_version: "1.4.0+build0"

Processes should look something like this

$ps -ea | grep ovs

26464 ? 00:00:00 ovsdb-server

26465 ? 00:00:00 ovsdb-server

26473 ? 00:00:00 ovs-vswitchd

26474 ? 00:00:00 ovs-vswitchd

26637 ? 00:00:00 ovs-controller

$ /etc/init.d/openvswitch-switch restart


Add your bridge; think of it as a subnet if you aren’t familiar with the term.

Add a physical interface to your virtual bridge for connectivity off the box. If you don’t script this part you will probably clip your connection as you zero out eth0 and move its address to br-int. You can pop the commands into a text file (a sketch follows the command listing below) and make it executable with
chmod +x script.sh

Shell

$ ovs-vsctl add-br br-int

$ ovs-vsctl add-port br-int eth0

$ ifconfig eth0 0

#Zero out your eth0 interface and slap its address on the bridge interface

#(warning: this will clip your connection unless you script it)

$ifconfig br-int 192.168.1.208 netmask 255.255.255.0

#Change your default route

$route del default gw 192.168.1.1 eth0

$route add default gw 192.168.1.1 br-int
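Here is one way the scripted version might look; it is a minimal sketch that simply strings the commands above together so the address moves from eth0 to br-int in one shot. The interface names and addresses are the ones used in this lab, so change them to match yours.

Shell

#!/bin/sh
# move-ip-to-bridge.sh - run from the console or out of band; this drops eth0's IP
ovs-vsctl add-br br-int
ovs-vsctl add-port br-int eth0
ifconfig eth0 0
ifconfig br-int 192.168.1.208 netmask 255.255.255.0
# the old default route may already be gone once eth0 is zeroed, hence the redirect
route del default gw 192.168.1.1 eth0 2>/dev/null
route add default gw 192.168.1.1 br-int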


INSTALL FLOODLIGHT OPENFLOW CONTROLLER AND ATTACH OPENVSWITCH

Install the dependencies (apt-get on Ubuntu, yum on Red Hat):

Shell

 

apt-get install build-essential default-jdk ant python-dev eclipse git

Clone the Github project and build the jar and start the controller:

Shell

 

$git clone git://github.com/floodlight/floodlight.git

cd into the floodlight directory created.

$cd floodlight

Run ant to build a jar. It will be in the ~/floodlight/target directory.

$ant

Run the controller:

$java -jar target/floodlight.jar

By default it binds to port 6633 on all interfaces, e.g. 0.0.0.0/0.0.0.0:6633.
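A quick way to confirm the controller is actually listening (plain netstat, nothing Floodlight-specific) is:

Shell

#6633 is the OpenFlow listener, 8080 is the REST/WebUI port
$ netstat -lnt | grep -E '6633|8080'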


ATTACH OPENVSWITCH TO THE CONTROLLER

Shell

 

$ovs-vsctl set-controller br-int tcp:192.168.1.208:6633


In the Floodlight console you will see something like this:

Shell

 

[New I/O server worker #1-1] INFO n.f.core.internal.Controller - Switch handshake successful: OFSwitchImpl [/192.168.1.208:49519 DPID[00:00:ba:66:35:e8:38:48]]


The output of OVS ‘ovs-vsctl show’ looks something like this:


Shell

 

# ovs-vsctl show
70a40219-8725-46a8-b808-af75c642cac8
    Bridge "br-int"
        Controller "tcp:192.168.1.208:6633"
            is_connected: true
        Port "eth0"
            Interface "eth0"
        Port "br-int"
            Interface "br-int"
                type: internal
    ovs_version: "1.4.0+build0"


INSTALL KVM AND INTEGRATE INTO OVS

Shell

 

$apt-get install kvm uml-utilities

These two scripts bring the KVM tap interfaces up and add them to your bridge. If you copy and paste from below, make sure the single quote (') does not get converted to a smart quote; it should be yellow in nano. In "switch=br-int", br-int is the name of your bridge in OVS.
$nano /etc/ovs-ifup  (open and paste what is below)

Shell

#!/bin/sh

switch='br-int'

/sbin/ifconfig $1 0.0.0.0 up

ovs-vsctl add-port ${switch} $1

$nano /etc/ovs-ifdown (open and paste what is below)

Shell

#!/bin/sh

switch='br-int'

/sbin/ifconfig $1 0.0.0.0 down

ovs-vsctl del-port ${switch} $1

Make both files executable:
$ chmod +x /etc/ovs-ifup /etc/ovs-ifdown


BOOT THE GUEST VIRTUAL MACHINES

  • Host1

Shell

 

kvm -m 512 -net nic,macaddr=00:00:00:00:cc:10 -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown -cdrom ubuntu-12.04-desktop-amd64.iso

  • Host2

Shell

 

kvm -m 512 -net nic,macaddr=00:11:22:CC:CC:10 -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown -cdrom ubuntu-12.04-desktop-amd64.iso

  • Host3

Shell

 

kvm -m 512 -net nic,macaddr=22:22:22:00:cc:10 -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown -cdrom ubuntu-12.04-desktop-amd64.iso

Each one of those will begin loading from the ISO. I just click “Try Ubuntu” when they boot and run them live, since all we really need are nodes that can test connectivity as we push static flows. If it were a more permanent test lab it would make sense to install them to disk.
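If you are on a laptop or a nested hypervisor, the small linux-0.2.img test image mentioned earlier is a lighter alternative to the full Ubuntu ISO. Assuming you have downloaded and bunzip2’d it, something along these lines should work (the memory size and MAC are just examples):

Shell

$ kvm -m 128 -net nic,macaddr=00:00:00:00:cc:10 -net tap,script=/etc/ovs-ifup,downscript=/etc/ovs-ifdown linux-0.2.img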

While those are spinning up let’s install curl.

Shell

 

$apt-get install curl


Figure 1. OVS Taps


Once they are up, assign IP addresses to them: click in the top left of the Ubuntu desktop, type ‘terminal’ (without the quotes), and then statically assign the IPs with ifconfig.

Shell

 

sudo ifconfig eth0 192.168.1.x netmask 255.255.255.0
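If you also want the guests to reach the gateway for the later labs, adding a default route and a quick ping check inside each guest is enough. The gateway address here is assumed to be 192.168.1.1, as in the rest of this lab:

Shell

$ sudo route add default gw 192.168.1.1

$ ping -c 3 192.168.1.1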

 

OPENFLOW STARTER TUTORIAL LAB #1

For a more up-to-date tutorial (anything more than six months old is outdated in the world of SDN), please see:
OpenDaylight OpenStack Integration with DevStack on Fedora 20

  1. Lab 1: Add a static destination MAC entry for each node. Match: DstMac, Action: DstPortX.
  2. Lab 2: Add a static flow matching on source MAC address with an associated output-port action, e.g. Match: SrcMac, Action: DstPortY.
  3. Lab 3: Add a bad static flow for one of the hosts and watch ICMP replies from the gateway come back on the wrong port through tcpdump. Match: DstMac, Action: PortZ.


Figure 1. The topology for the lab simulates in software the same capabilities you can get in hardware, thanks to Open vSwitch.
This setup lets you add and remove as many matches as you like in the API calls and tinker with them to get a feel for it once you nail down the basics. Then you can write the next “killer app”, get rich and make it rain, but first let’s figure out what is going on here.

RESTFUL/JSON API

The API is documented very well (that is huge and differentiating, IMO) at:

Shell

 

http://www.openflowhub.org/display/floodlightcontroller/Proposed+New+API

RESTful APIs are very important in my opinion if there is to be any kind of transition: they keep things human readable, at the least for troubleshooting, and allow easy programmatic field parsing for those of us who are only willing to muck our way through interpreted languages. I am a huge fan of what they have done here with their API and I expect the industry to follow it.

FORWARDING TABLE IN OPENVSWITCH

Based on ‘ovs-appctl fdb/show br-int’, build a cheat sheet so you can see which port each host VM is on inside Open vSwitch. If you do not see your entry, it has likely timed out (around 300 seconds); refresh it by simply pinging the host VM from the vSwitch. These tables are the same as the CAM tables doing key/value exact matches for L2 MAC address lookups (and the LPM, Longest Prefix Match, tables) in today’s network systems, only in software.

Shell

 

$ovs-appctl fdb/show br-int

 port  VLAN  MAC                Age
    1     0  00:23:69:62:26:09   58
    6     0  00:11:22:cc:cc:10    7
    5     0  00:00:00:00:cc:10    4
    0     0  5c:26:0a:5a:c8:b2    3
    7     0  22:22:22:00:cc:10    3

The MAC table for this lab is shown above. Yours will likely be different based on the port assignment by the vSwitch. The MAC addresses are specified on the KVM boot command line, but anything can be used as long as they are unique.

The DPID (datapath ID) is required to send the API calls, so you need to find the one for your vSwitch. There are lots of ways to find it: through the Floodlight console or APIs, or from the ‘ovs-ofctl show <bridge name>’ output listed below. It is basically a few bytes prepended to your NIC’s MAC address.

Shell

 

$ ovs-ofctl show br-int

OFPT_FEATURES_REPLY (xid=0x1): ver:0x1, dpid:00005c260a5ac8b2 (that is your DPID)

In the curl commands, replace the DPID with your own; it is the "switch" field, e.g. "switch": "00:00:5c:26:0a:5a:c8:b2" (that longer-than-usual, MAC-looking ID).
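If you would rather grab the DPID from the controller side, Floodlight also exposes a REST call that lists the connected switches; the path below is from the Floodlight REST documentation of this era, so double check it against your build:

Shell

#Lists every switch the controller knows about, DPIDs included
$ curl http://192.168.1.208:8080/wm/core/controller/switches/json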

“ovs-dpctl dump-flows br-int” will display the datapath flows being instantiated in Open vSwitch; it is handy for debugging and troubleshooting.

Figure 2. MAC-to-port mapping (forwarding table) for the labs, built from the “ovs-appctl fdb/show br-int” output.
Throughout the lab I have my VM hosts pinging the gateway so I can watch what happens as I instantiate static flows into the Open vSwitch (OVS) flow table.

OPENFLOW WEBUI GUI

For starters, it might be easier for some to watch the web page throughout the lab. This is a nice Django front end put together by Wes Felter and some of his guys at IBM. There are some bugs, which I’m sure the Floodlight guys would welcome anyone cleaning up. If you leave the page open it continues to refresh as it polls the controller until it consumes the planet, so just close and reopen it every now and then.

The WebUI loads by default with the jar binary:

Shell

 

java -jar floodlight.jar

Shell

 

http://<yourIP>:8080/ui/index.html


Figure 3. WebUI starts automatically and binds to port 8080

It might be more comfortable for some to use the WebUI/GUI. It is a nice, clean web front end at that!

All three labs are in this screencast.

LAB 1 STATIC MAC ENTRIES FOR OUR 3 HOSTS

Figure 3. Three hosts with static mac entries for each port.

STATIC FLOW PUSH INTO THE OPENFLOW PIPELINE

Before we run we crawl; before we dynamically forward we statically forward! It seems natural, since most of the time we start with static entries when teaching the mechanics of routing with network IGPs. Here we are defining static datapaths. We match (or don’t match) a rule and have an associated action that, in OpenFlow v1.1 and up, eventually kicks off a fairly complex set of flow tables in a pipeline.

The closest equivalent for a datapath in a traditional instruction set on today’s switches would be something like “mac-address-table static 0000.0000.cc10 vlan 100 interface GigabitEthernet0/1”. We are not setting a VLAN ID here, but it would be as easy as adding “dataLayerVirtualLan”:x to the flow push. That is obviously not scalable, but I think it is important to understand how datapaths get pushed to the OpenFlow-enabled switch. Normally, even in the SDN world, those MAC addresses are learned through flooding to all ports (FFFF.FFFF.FFFF) on the broadcast domain. The controller then learns of the address and starts a timer to age it out if no more traffic is received, so as not to exhaust its tables, but keeps it cached as long as the host keeps talking by restarting the timer each time a frame is received from that source MAC.

Push static flows mapping each destination MAC address in the switch to an assigned port. We have a match and an action explicitly defined. All we are doing is adding static MAC address entries instead of letting them be learned dynamically through flooding. Note that each name is unique. If copying and pasting, make sure to strip formatting.

As you add the flows, keep in mind each curl you do will overwrite any previous entry in the table with the same name. Notice each flow pushed has a unique name. It’s almost like ACLs, but not quite.

  • Install curl

Shell

 

$apt-get install curl

With OVS and the OpenFlow controller running, run each of these from your command line.
Remember to replace the DPID "switch": "00:00:5c:26:0a:5a:c8:b2" and the IP address 192.168.1.208 with your own lab values. Each curl command is one line.

INSTANTIATE THE OPENFLOW FORWARDING RULES
  • Host 1

Shell

 

$ curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"static-flow1", "cookie":"0", "priority":"32768", "dst-mac":"00:00:00:00:cc:10","active":"true", "actions":"output=5"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json

  • Host 2

Shell

 

$ curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"static-flow2", "cookie":"0", "priority":"32768", "dst-mac":"00:11:22:cc:cc:10","active":"true", "actions":"output=6"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json

  • Host 3

Shell

 

$ curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"static-flow3", "cookie":"0", "priority":"32768", "dst-mac":"22:22:22:00:cc:10","active":"true", "actions":"output=7"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json

LIST THE FLOWS

Now, through the API, we can pull all static flows that have been pushed with this call. Notice all of the tuples (header fields, e.g. SrcMac, DstIP, etc.) being listed. Look for the “match” and “action” you pushed.

Shell

 

$ curl http://192.168.1.208:8080/wm/staticflowentrypusher/list/00:00:5c:26:0a:5a:c8:b2/json
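The raw JSON comes back on one line; piping it through Python’s built-in pretty printer makes the match and action fields much easier to read:

Shell

$ curl http://192.168.1.208:8080/wm/staticflowentrypusher/list/00:00:5c:26:0a:5a:c8:b2/json | python -m json.tool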

CLEAR OR DELETE THE STATIC FLOWS

To clear all of the static flows, the API call looks like this. The API also has a documented delete function for removing individual flows:

Shell

 

$ curl http://192.168.1.208:8080/wm/staticflowentrypusher/clear/00:00:5c:26:0a:5a:c8:b2/json


OPENFLOW STARTER TUTORIAL LAB #2

For a more up-to-date tutorial (anything more than six months old is outdated in the world of SDN), please see:
OpenDaylight OpenStack Integration with DevStack on Fedora 20

OpenFlow Starter Tutorial Lab #2: This lab restricts two hosts to only talk to each other using source-based forwarding via the static flow pusher RESTful API. You can add any field you want to make the forwarding decisions on. Remember to give the flows unique names, or else you will overwrite previously instantiated flows. Earlier posts in the series cover the setup; links to those are at the bottom of this post.


Figure 1. OpenFlow Starter Tutorial Lab #2 Topology

Based on source MAC address we can lock two ports into only talking to each other. This is used for security reasons today in sensitive areas, and it allows for very granular port-to-port mapping. We are adding two flows: just as a host needs a flow set up to talk to another host, it also needs a return flow for the reply traffic.

Delete old static Flows from Lab 1.

Shell

 

curl http://192.168.1.208:8080/wm/staticflowentrypusher/clear/00:00:5c:26:0a:5a:c8:b2/json

PUSH THE TWO STATIC OPENFLOW RESTFUL API CALLS TO CREATE YOUR FLOWMOD

Shell

#To ping from port 1 to 6

$ curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"static-flow1", "cookie":"0", "priority":"32768", "src-mac":"00:11:22:cc:cc:10","active":"true", "actions":"output=6"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json

#To ping from port 6 to 1

$ curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"static-flow2", "cookie":"0", "priority":"32768", "src-mac":"22:22:22:00:cc:10","active":"true", "actions":"output=1"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json

Ping the hosts from those two ports. They should only be able to ping each other, not your gateway or anything else, since the closest match is the static flow we pushed.

Once I add these, my gateway no longer pings, because the only place those two source MAC addresses explicitly match is each other’s ports. So while they can talk to each other, they cannot talk to anything else.

While this is clearly not manageable at scale, it should get your wheels turning on the possibilities this opens when you start thinking about how powerful this granularity can become in the security world if done programmatically from policy.

OPENFLOW STARTER TUTORIAL LAB #3

For a more up-to-date tutorial (anything more than six months old is outdated in the world of SDN), please see:
OpenDaylight OpenStack Integration with DevStack on Fedora 20

OpenFlow Starter Tutorial Lab #3 : Move individual flows

Pre-requisites install and the beginning of the lab can be found here.


Figure 1. OpenFlow starter tutorial Lab #3 topology. Add an entry to the wrong port and watch it break.

Let’s clear all of our flows and get everything pinging the gateway again.

Shell

 

$ curl http://192.168.1.208:8080/wm/staticflowentrypusher/clear/00:00:5c:26:0a:5a:c8:b2/json

Add our three earlier entries from Lab 1:

Shell

 

$ curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"static-flow1", "cookie":"0", "priority":"32768", "dst-mac":"00:00:00:00:cc:10","active":"true", "actions":"output=5"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json

Shell

 

$ curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"static-flow2", "cookie":"0", "priority":"32768", "dst-mac":"00:11:22:cc:cc:10","active":"true", "actions":"output=6"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json

Shell

 

$ curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"static-flow3", "cookie":"0", "priority":"32768", "dst-mac":"22:22:22:00:cc:10","active":"true", "actions":"output=7"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json

TCPDUMP ANALYSIS

Start tcpdump on the host you will send Host 3’s traffic to. In my case I am starting tcpdump on Host 1, where I am going to send Host 3’s traffic.

Shell

 

$ sudo tcpdump -i eth0 host <IP of host 3>

The filter “host <ip>” says to only capture traffic to or from that host. We should never see another host’s unicast traffic under proper conditions on a packet-switched network.
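If you want to cut the capture down to just the ICMP traffic, the standard tcpdump filters can be combined like this (the -n just skips DNS lookups):

Shell

$ sudo tcpdump -ni eth0 icmp and host <IP of host 3>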

INSTANTIATE BAD FLOWS

Now let’s push a MAC to the wrong port and watch it break. This will overwrite ‘static-flow3’ and break Host 3.

Shell

 

$ curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"wrong-port", "cookie":"0", "priority":"32768", "dst-mac":"22:22:22:00:cc:10","active":"true", "actions":"output=5"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json


Figure 2. tcpdump output when I push Host 3’s forwarding datapath to Host 1’s port.

As soon as you add the “wrong-port” static flow, you begin getting ICMP replies from the gateway on Host 1 until the entry times out. This has many more security-type implications. Why not have your action forward to two ports instead of just one? The second port could be an IDS monitoring traffic; instead of trying to process the firehose of a typical port mirror, you can get as granular as you want and watch only the particular traffic matched by the tuples in the header fields (src_mac, dst_ip, VID, etc.). Now you can use a fraction of the hardware and only process what is important to your use case. Load balancing is another obvious one, as is policy routing that may actually be scalable if managed programmatically by northbound APIs. A sketch of the two-port idea follows below.
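As a sketch of that idea, the static flow pusher accepts a comma-separated list of actions, so a flow that both forwards Host 3’s traffic normally and copies it to a monitor port might look like the call below. Port 2 as the IDS port is purely an assumption for illustration, and you should verify the multi-action syntax against your Floodlight version:

Shell

$ curl -d '{"switch": "00:00:5c:26:0a:5a:c8:b2", "name":"mirror-host3", "cookie":"0", "priority":"32768", "dst-mac":"22:22:22:00:cc:10","active":"true", "actions":"output=7,output=2"}' http://192.168.1.208:8080/wm/staticflowentrypusher/json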


Figure 3. The API is the end game IMO

CONCLUSION

That’s it; hopefully this demystifies a bit of OpenFlow for you. I still have lots to learn, as it is a never-ending cycle, but going through a couple of labs seems to help nail some of this down and show that complexity, or more accurately abstraction, will bring more simplicity to operators (some day, so very very very far away, theoretically). This lab setup can scale out to a much wider range of scenarios than just the couple of little ones here. I would love to hear what others are doing.

From an end-user perspective, it is the same idea we have had in best-match prefix lookups all along, but we are adding more ways to match and more fields to match upon. The API is what is going to be very important in my opinion, and it will open up the value over the next couple of years as the northbound apps begin to surface. Sorry there is no commentary on the videos; I am swamped, but I think it is fairly straightforward. I only added the video in case someone gets stuck. Feel free to contact me for assistance or jump on irc.freenode.net in #openflow.

MISCELLANEOUS API CALLS

Find all flows

Shell

 

$ curl http://192.168.1.208:8080/wm/core/switch/00:00:5c:26:0a:5a:c8:b2/flow/json

List all static flows

Shell

 

$curl http://192.168.1.208:8080/wm/staticflowentrypusher/list/00:00:5c:26:0a:5a:c8:b2/json

Clear all flows

Shell

 

$curl http://192.168.1.208:8080/wm/staticflowentrypusher/clear/00:00:5c:26:0a:5a:c8:b2/json


ADDITIONAL OPENFLOW AND SDN LINKS AND RESOURCES

http://openvswitch.org/ Martin Casado’s group has put an amazing vSwitch out there. I doubt there will be many vSwitches that are not munging his work in some form or fashion over the next few years.

http://floodlight.openflowhub.org/ Thanks to Nick Bastin for answering my question on the #openflow channel. He is a great asset to the community.

http://www.noxrepo.org/ Another nice OpenFlow controller is POX, a Python-based, platform-agnostic project that Murphy McCauley is doing a great job with. As soon as I dig into the API I am going to do a similar tutorial with it. I need the API docs, so if anyone has them, hook me up.

I am typically always /nick networkstatic on irc.freenode.net in #openvswitch #openflow #openstack and #packetpushers if anyone has any questions.