Has OpenFlow failed? – Challenges and implementations

In truth, very few vendors have successfully implemented the full capabilities of OpenFlow. OpenFlow gives programmers an enormous amount of flexibility, and it’s hard for hardware to keep up with that much power. Only a few vendors, such as NoviFlow, Corsa and Barefoot, are able to deliver programmable ASICs that can.

The reason comes from the nature of match tables, which are implemented in memory. In a match table we match on a field, say a MAC address, and take an action, say forwarding the packet to port 1. The complexity appears when we want to match on multiple fields. Say we have a MAC table with N addresses and an IP table with M addresses. Kept as separate tables, the total flow-table size (memory) is N + M. If we instead want to execute the match in a single table, the table size grows to N * M, one entry for every combination. Now imagine matching on even more fields at the same time.
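
Just to put numbers on it (the table sizes here are made up purely to show the scaling):

# Separate match tables grow additively; collapsing them into one
# cross-product table grows multiplicatively. Numbers are arbitrary.
macs, ips = 4000, 16000

separate = macs + ips   # two tables: 20,000 entries
combined = macs * ips   # one cross-product table: 64,000,000 entries

print(separate, combined)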

OpenFlow’s multi-table pipeline, introduced in version 1.1 and widely deployed with 1.3, addresses the scalability problem of flow tables. But now the challenge is: how do we provide a standard API via OpenFlow when different vendors have different table layouts?

The answer is we don’t. Rather, we adapt our OpenFlow code to each vendor in order to achieve our forwarding objective. Say we want to do L3 forwarding, which means matching on IP, rewriting the L2 addresses and forwarding to port N: one vendor might have put the rewrite action in the IP table, while another might have grouped all the actions in a group entry later in the pipeline.
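
To make that concrete, here is a rough sketch of what such an L3 forwarding rule could look like as a single OpenFlow 1.3 flow-mod written with the RYU parser. The table id, addresses and output port are made up for illustration, and a real switch pipeline may split the rewrite and output steps across tables or groups differently, which is exactly the vendor dependence described above.

from ryu.ofproto import ofproto_v1_3 as ofp
from ryu.ofproto import ofproto_v1_3_parser as parser

# Sketch only: one way to encode "match on IP, rewrite the L2 addresses,
# forward to port N" as a single flow-mod. A different pipeline might expect
# the rewrite in a dedicated table or in a group entry instead.
def l3_forward_flow(datapath, ip_dst, new_eth_src, new_eth_dst, out_port):
    match = parser.OFPMatch(eth_type=0x0800, ipv4_dst=ip_dst)   # match on IP
    actions = [
        parser.OFPActionSetField(eth_src=new_eth_src),          # rewrite L2 source
        parser.OFPActionSetField(eth_dst=new_eth_dst),          # rewrite L2 destination
        parser.OFPActionOutput(out_port),                       # forward to port N
    ]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    return parser.OFPFlowMod(datapath=datapath, table_id=0, priority=100,
                             match=match, instructions=inst)

# Inside a RYU app you would send it with:
# datapath.send_msg(l3_forward_flow(datapath, '10.0.0.1',
#                                   '00:00:00:00:00:01', '00:00:00:00:00:02', 2))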

OpenFlow became popular on the promise of bringing innovation to the networking industry, much as the x86 instruction set did for computers. In truth, interoperability between vendors via OpenFlow has been rare, precisely because vendors implement OpenFlow differently. We’ve seen vertical software stacks deliver SDN capabilities, but we haven’t seen interoperable solutions yet.

Last time I checked, ONOS, a great SDN controller, provided an abstraction over OpenFlow via the FlowObjective primitive: basically, an objective is defined and the OpenFlow drivers map that objective onto the hardware implementation. What that gives you is the ability to have one controller managing equipment from multiple vendors. Vendors still need to write driver code, but application developers only have to write their software once; again the power of abstraction shows itself. There may be others out there, but I’m aware of a couple of OpenFlow fabric solutions, such as Big Switch’s fabric and the Trellis fabric used in the CORD project, that have been deployed successfully and are stable.

OpenFlow is not the answer to all your networking problems. The perfect abstraction for networking would be, but it does not exist. Still, OpenFlow definitely succeeded in bringing innovation to the networking industry. A few vendors like Big Switch have built incredible solutions, and the Open Networking Foundation has merged with ON.Lab, which may bring more energy towards standardization of the protocol. Support from vendors has slowed down as they started generalizing the definition of SDN; I will write more about that.

Posted in SDN, Thoughts & theories

Visualizing sFlow data with ntopng and nProbe on Ubuntu 16.04

Open Source tools can be useful if you need to put something together easily.

I was able to use nProbe to collect sFlow from the switches and visualize the traffic in ntopng in real time. Here is how to install everything on Ubuntu 16.04.

wget http://apt.ntop.org/16.04/all/apt-ntop.deb
dpkg -i apt-ntop.deb

apt-get clean all
apt-get update
apt-get install pfring nprobe ntopng ntopng-data n2disk cento

nProbe acts as the sFlow collector and consumes the data generated by the switches; it then exports the flows to ntopng (over ZMQ in this setup).

To start nProbe, run:

sudo nprobe --collector-port 6343 --zmq "tcp://127.0.0.1:5556" -i none -n none

To start ntopng, make sure its configuration (on Ubuntu, typically /etc/ntopng/ntopng.conf) contains:

--interface=tcp://127.0.0.1:5556
--http-port=4000

Then restart the service:

sudo service ntopng restart

Then access http://127.0.0.1:4000, log in with admin/admin, and you should see something like this:

[Screenshot: ntopng web interface]

Posted in Lab Projects, Tutorials, Ubuntu

Network Automation vs Software Defined Networking – Ansible vs OpenFlow

At Verizon, we are moving towards automating network configuration and provisioning. To me the goals for this move can be summarized as:

  • Maintenance cost reduction
  • More agile deployment processes

Coming from an OpenFlow SDN background, where changes to the network can be immediate, and looking at the real world, where changes require human approval and human intervention and take 1–2 weeks to deploy, it’s really hard to tolerate how readily that delay is accepted with legacy systems.

I’m very interested in identifying where automation of legacy systems offers a real benefit over OpenFlow networks, and vice versa. My experience tells me the biggest paradigm shift comes from the users. If the network operator is used to the OpenFlow paradigm and has software development skills, pretty much anything can be done. On the other hand, when the network operator comes from a classical Cisco network engineering background, even the incremental changes advocated by network automation gurus can be challenging.

So far, my only experience with network automation is Ansible. A big point in Ansible’s favor is its learning curve: it’s very easy to try. Right now I’m puzzling over how to test Ansible code; refactoring a variable amounts to a project-wide find and replace, and it’s not yet intuitive to me how Ansible code can be continuously tested and deployed. Quoting Uncle Ben, “with great power comes great responsibility”: Ansible does give you the opportunity to mess things up really well.

That’s where my bias towards OpenFlow comes in: successful OF projects, like ONOS, have been tested for a couple of years now and are quite mature for open source projects. As mentioned in my last article, to me it all comes down to the skill set companies want to cultivate. It’s easy to leverage network engineering expertise plus some Python scripting to work on network automation, but I bet you won’t get great code quality out of that.

Another option is to bring in strong software development skills to make sure you do get that code quality, but then what I would advocate is taking that great software developer and putting them to work on a real SDN system, with real software challenges, where the opportunity for gain is enormous.

OpenFlow has an inherent disadvantage: it requires extra hardware support. Successful OF deployments have been performed on new gear or have used hybrid deployment strategies, which can be complex. So if you want to improve existing deployments, OpenFlow won’t be your pick.

I’m still skeptical about the value of network automation beyond its being an incremental adoption of new technology; in other words, it’s easy to sell.

Posted in Perspectives, Thoughts & theories

What’s going on?

In January, I started working at Verizon as a DevOps engineer focused on network engineering. I’ve been working with SDN for about two years, most recently at the Open Networking Lab, a pioneering SDN research lab, in collaboration with AT&T.

In this article, instead of describing a technology as I usually do in this blog, I’ll try to summarize my thoughts on where this industry is going.

Every day it’s clearer to me that innovation at service providers is driven by two factors: pressure to reduce acquisition and operational costs, and increasing pressure to deliver new services fast, which, by the way, is all about generating new sources of revenue.

Most service providers are trying to leverage open hardware from OCP and open source technology to achieve those goals. The “open” alternatives are quite cost-effective compared to current legacy solutions; at the same time, they offer the opportunity to be at the edge of technology development, that is to say, open technologies speed up innovation cycles significantly. The disaggregation of network devices has played a tremendous role in enabling innovation as well.

There are challenges in achieving those goals. Acquisition cost is definitely the most compelling point of open technologies; delivery of the open source solutions, on the other hand, is where the risk lies. If you are used to open source, you know that bugs are just part of life. There’s a nine-in-ten chance that at least one of your critical features won’t be supported natively by the available open source solutions.

To cope with that, I believe service providers should invest in acquiring diverse talent or in training their own staff.

The truth is change is inevitable: you either hop on the boat and deliver reduced costs or new services, or you will be left behind. We’ve started to see evidence of this happening with the big vendors, and I believe the pattern will repeat with providers.

In upcoming posts, I’ll comment on what is going on with vendors and follow up with my thoughts on the costs, risks and benefits of this push for innovation.

Posted in Thoughts & theories

Troubleshooting Shortest Path and Topology Discovery on RYU

This post is a follow-up to Shortest Path forwarding with OpenFlow on RYU.

I originally wrote this code to show how to use SDN to achieve one of the most basic things you can do in a network: shortest path forwarding. In this post I’m answering common questions about getting the code to work.

Quickstart:

Assuming you have all the dependencies, you should be able to run a mininet topology using:

sudo mn --topo=tree,4 --controller remote

After starting mininet, start RYU using the following command:

bin/ryu-manager --observe-links ryu/app/sp.py

On my computer this is sufficient to discover the topology.

Now, let’s move on to the questions:

1 – Why do I see an empty or incomplete list of links?

Honestly, I’m not super familiar with the RYU topology app, so I don’t know. What works for me is restarting RYU and Mininet in different orders: stop both applications and try starting RYU first; if that doesn’t work, do the opposite. Repeat until it works.

2 – Does it still work with a loop in the topology?

As far as my tests go it does work with a loop in the topology.

3 – Does it still work with a Spanning Tree?

To test it, I start mininet, set up spanning tree using ovs-vsctl (for example, ovs-vsctl set bridge s1 stp_enable=true for each switch), and then start RYU. After RYU learns the topology, it successfully lets the pings go through.

I had to restart RYU a couple of times until it learned the topology.

4 – Why do I see so many packet-ins?

I did not bother to handle flood storms when I coded this, so if your topology has a loop and spanning tree isn’t set, ARP and other types of flooded packets may be broadcast forever in your network.
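
If you want to tame that yourself, one simple mitigation (not part of sp.py, just a sketch of the idea) is to remember which frames you have recently flooded per switch and drop repeats at the controller:

import time

seen = {}          # (dpid, eth_src, eth_dst) -> time we last flooded this frame
HOLD_DOWN = 2.0    # seconds to ignore repeats of the same frame

def should_flood(dpid, eth_src, eth_dst):
    key = (dpid, eth_src, eth_dst)
    now = time.time()
    if now - seen.get(key, 0.0) < HOLD_DOWN:
        return False   # already flooded this frame recently, drop the copy
    seen[key] = now
    return True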

5 – Can I use another algorithm or set custom weights?

Yes. To set custom weights, you just have to figure out how to add that information to the network graph. I’ll try to give a fuller example soon, but here is a minimal sketch of the idea.
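
This assumes the topology is kept in a networkx graph, as many RYU examples do; the switch ids, ports and weights below are made up:

import networkx as nx

# Build a small directed topology. In the RYU app the edges would come from
# the topology discovery events instead of being hard-coded like this.
net = nx.DiGraph()
net.add_edge(1, 2, port=3, weight=10)   # an expensive direct link
net.add_edge(1, 3, port=4, weight=1)
net.add_edge(3, 2, port=2, weight=1)    # a cheaper two-hop detour

# Dijkstra over the custom weights instead of plain hop count.
path = nx.shortest_path(net, source=1, target=2, weight='weight')
print(path)   # [1, 3, 2]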

Posted in Lab Projects, RYU, SDN, Tutorials

VMware ESXi Home Lab

I recently bought a 6th-generation Intel NUC to build my own VMware ESXi lab. This is my first home lab and the first PC I’ve ever built, so I’m excited.

I’m building this for two reasons. First, my laptop has a small SSD, which prevents me from keeping a bunch of VMs. Second, I’m attempting to get a CCNP certification and I’d like to set up a virtual lab for that.

Bill of materials

I decided to go with the i5 simply because the i7 design wouldn’t let me add a hard drive, while the one I got has room and a connector for a SATA disk.

I also had to buy a keyboard to complete the ESXi installation. I bought the NUC with a 256 GB SSD included on eBay for $390. The total price was $616, which makes me pretty happy for an i5 machine with plenty of storage and a fast SSD when needed.

Assembly

Assembly was straightforward and I used this video as reference.

Installation

Installation is simple and consists of 4 steps:

  • Downloading the ESXi ISO
  • Creating a bootable ESXi USB drive from the image using Rufus
  • Installing and configuring ESXi
  • Installing GNS3 from an OVA

I will come back and add a link to download the ESXi ISO; basically, VMware can provide it to you.

Rufus is also very straightforward and can be downloaded here.

Configuring ESXi can be tricky, but don’t get lost in the details: simply enable SSH, set a static IP address, and you should be fine. Next you can download the vSphere client from the ESXi machine, or you can use the web browser.

I couldn’t create a VM from the OVA using the web client, so I recommend using the vSphere client.

If you need a step-by-step guide, I recommend checking this YouTube guide on how to install ESXi 6.0.

In my next post, I’ll share my experiences with GNS3.

PS: I wish I had installed ESXi on an SD card, just because I think it’s cool. I also wish you could deploy a VM from an OVA already stored in the ESXi datastore, because that would make it much faster.

I’m also a little pissed that VIRL requires a $200 license. I haven’t tested it yet, but I have the feeling that for learning purposes INE would be much more cost-effective, and I doubt VIRL will provide a seamless experience.

Thanks for reading. 🙂

Posted in Lab Projects, Tutorials

Back to blogging

For context, I just concluded my internship at ON.Lab, one of the pioneering research labs in SDN. Now I’m looking to get a little closer to industry, and I’m pursuing a CCNP certification.

My study plan is simple. I will build a home lab using GNS3 and VIRL to practice the contents of the exams, and go through the certification guide trying the configurations in the virtual lab. I aim to earn the CCNP certification in a month, since I believe I already have the necessary skills. That gives me one exam every 10 days…

My first step was to build an ESXi lab on an Intel NUC. I’ll post the details in a separate blog post.

Posted in Perspectives