
Category: SDN

MPLS Traffic Engineering – Review

I wanted to review the basics of MPLS and Traffic Engineering (TE), so I went to my favorite networking blog, searched for RSVP, and found a handful of excellent articles.

Although the articles were incredible and clearly explained the technologies, they also clearly demonstrated how complex ‘legacy’ MPLS technologies are. UPDATE: I recently found out about Packet Design and got very excited by the material they put out there. Their white paper on MPLS-TE is one of the best pieces I’ve seen on the subject! I urge you to check it out.

This article is divided into four sections: first, I go over the reasons for MPLS forwarding; second, some of the motivations behind Traffic Engineering technologies; then I briefly explain Segment Routing; and I conclude with a tutorial on how ONOS can achieve TE using an SR SDN application on top of OpenFlow.

Why MPLS at all?

To reduce network state.
Today, the full Internet routing table includes more than 600,000 routes. Routing that by itself is already complicated; now, if you took different paths for different Classes of Service (CoS), you could easily reach 2M routes. With MPLS, you can aggregate several network prefixes into labels, reducing that state drastically. The articles I mentioned at the beginning go through some of those numbers. A Segment Routing (SR) architecture can reduce the state even further, to the order of the number of network devices. P.S.: SR can also be implemented with IPv6 encapsulation.
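To make the state reduction concrete, here is a toy Python sketch; the prefixes, egress node names, and label values are all made up for illustration:

# Toy illustration: prefixes that share an egress node collapse into one label,
# so core routers keep per-label state instead of per-prefix state.
fib = {
    "203.0.113.0/24":  "PE2",   # prefix -> egress (exit) node
    "198.51.100.0/24": "PE2",
    "192.0.2.0/24":    "PE3",
}
label_for_egress = {"PE2": 16002, "PE3": 16003}   # one label per egress node
core_labels = {label_for_egress[egress] for egress in fib.values()}
print(f"{len(fib)} prefixes -> {len(core_labels)} labels in the core")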

Why Traffic Engineering?

To save money!! $$$$

Diptanshu Singh explains this subject wonderfully, so I urge you to check his article if you need a more detailed explanation.

For instance, say the Comcast network in your neighborhood has 1 Gbps of VoIP and 4 Gbps of data traffic demand. It’s overprovisioned so that links stay below 50% utilization, so its 10G links suffice at the moment. Now suppose traffic increases 20% next year: sustaining this strategy would require an immediate upgrade of the infrastructure.

A Diffserv strategy would change the resource allocation rates: one could instead allocate a 2x overprovision rate for VoIP and a 1.2x overprovision rate for data. After next year’s 20% growth, that works out to roughly 2.4 Gbps for voice (1G * 120% * 2) plus about 5.8 Gbps for data, around 8.2 Gbps in total. The year after, you would need roughly 2.9 + 6.9 Gbps, still below 10G. With this approach, Comcast can delay its backbone upgrade for two years and still adhere to the SLAs required for sensitive traffic.
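Here is a rough back-of-the-envelope sketch of the two strategies in Python; the demand figures and growth rate come from the example above, and the helper function names are mine:

# Capacity needed under the two strategies, assuming 20% yearly traffic growth.
def flat_capacity(voice, data, headroom=2.0):
    """One class of service: everything gets 2x headroom (50% max utilization)."""
    return (voice + data) * headroom

def diffserv_capacity(voice, data, voice_headroom=2.0, data_headroom=1.2):
    """Per-class headroom: 2x for voice, 1.2x for best-effort data."""
    return voice * voice_headroom + data * data_headroom

voice, data = 1.0, 4.0                      # Gbps of demand today
for year in range(3):
    growth = 1.2 ** year                    # 20% growth per year
    v, d = voice * growth, data * growth
    print(f"year {year}: flat {flat_capacity(v, d):.1f} G, "
          f"diffserv {diffserv_capacity(v, d):.1f} G")
# With per-class headroom the 10G links last roughly two more years
# (6.8 -> 8.2 -> 9.8 G); with flat 2x headroom they need an upgrade after
# the first year (10.0 -> 12.0 -> 14.4 G).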

With the first rule, your expansion rate is dictated by overall traffic growth, because you must keep network utilization low across the board. In the second case, your expansion rate is driven by critical traffic growth and by the networking equipment life cycle (at your convenience). Critical traffic here is several times smaller than best-effort traffic (4x in this example), so your expansion rate drops by roughly that factor if you don’t need the same headroom for best-effort traffic.

Now you have the opportunity to cut your expansion budget several-fold and invest that money in engineering power. I’m sure that’s what Google saw 10 years ago when it started heavily investing in its networking technology. Vendors will often say ‘you don’t need QoS or Traffic Engineering; the problem can be solved with more bandwidth.’ That’s a convenient message if you sell bandwidth.

Why Segment Routing?

I wanted to compare legacy technologies (RSVP, LDP) with SR, but I realized that it’s pointless. To me, the only reason you would use the legacy protocols is backward compatibility with existing equipment. Don’t get me wrong, RSVP will get the job done. You may also not be able to afford to replace it with SR, or maybe your RSVP infrastructure works perfectly and you already have proper processes in place.

That said, SR is just simpler and better. To learn more about RSVP, check it out for yourself: http://packetpushers.net/rsvp-te-protocol-deep-dive/. If you know nothing about SR, check http://www.segment-routing.net/.

In summary, SR is a network architecture that allows the network to keep no per-flow state. Rather than being forwarded based only on the IP destination address, packets are forwarded based on a segment identifier. The network maintains shortest-path forwarding state to each segment, plus backup paths to implement fast reroute. Fast reroute by itself is worth money: SR TI-LFA allows for sub-50ms failure recovery.

Additionally, the architecture allows you to enforce loose source routing. For example, say your IGP (OSPF) gives you a 40 ms path; to steer your VoIP traffic through node 104 instead, you would just change the routing at the edge of the network to include that node’s segment before the final destination.
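To make that concrete, here is a small sketch of what the label stacks would look like on the wire, using Scapy; the node SIDs (16104, 16102) and the addresses are made-up values standing in for nodes 104 and 102:

# Illustrative only: what "adding a segment at the edge" looks like on the wire.
from scapy.all import Ether, IP, UDP
from scapy.contrib.mpls import MPLS

voip = IP(dst="198.51.100.10") / UDP(dport=5060)

# Default behavior: a single segment, the egress node (102); the packet simply
# follows the IGP shortest path to that node.
shortest = Ether() / MPLS(label=16102, s=1) / voip

# Loose source routing: prepend the segment for node 104. The packet follows
# the shortest path to 104 first, then the shortest path from 104 to 102.
steered = Ether() / MPLS(label=16104, s=0) / MPLS(label=16102, s=1) / voip
steered.show()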


Tutorial

I already wrote a tutorial on this 2 years ago. I’m just going to highlight the main points.


In this configuration, you have a cluster of 3 ONOS SDN controllers controlling a leaf-spine fabric. The entry nodes do a route lookup and encapsulate the packets with the MPLS label corresponding to the exit node. The packet is then forwarded along the shortest path based on that MPLS label; so far, that’s essentially basic IP forwarding. The cool thing here is the ability to programmatically set up forwarding tunnels.

Let’s say you want all Netflix traffic to go through spine s105, so that Web and Voice traffic gets three spines’ worth of bandwidth to itself (and thus lower delays). You could establish a tunnel in the following way:

A tunnel is defined as an ordered list of labels describing the path taken by a flow. The following command instantiates a tunnel called FASTPATH through routers 101, 105, and 102, in that order.

onos> srtunnel-add FASTPATH 101,105,102

Then, a policy can be applied to a subset of traffic, for example (in pseudocode): policy1 = tcp_port=80 >> fwd(FASTPATH)

onos> srpolicy-add p1 1000 10.1.1.1/24 80 10.0.2.2/24 80 TCP TUNNEL_FLOW FASTPATH

These tunnels can be used to enforce TE policies, guarantee SLAs, and improve network utilization.

Conclusion

A Segment Routing network combined with a centralized controller for path computation can enable advanced real-time traffic engineering capabilities. In this way, Segment Routing is a perfect match for SDN.

The SDN applications have already been developed and made available in open-source projects like ONOS. The Segment Routing app mentioned here has evolved into Trellis, the networking fabric that supports the CORD project. I urge you to check out their work.

Please reach out to me if you have any questions regarding how one could move forward and implement this.

 


Can P4 save Software-Defined Networking?

P4 is now gaining momentum thanks to the engagement of big players such as Google and AT&T. It has the potential to cause a significant change in the industry and deliver on the SDN value proposition. I’d like to discuss that.

In summary, P4 pursues three main goals:

  • Reconfigurability
  • Protocol independence
  • Target independence

OpenFlow had its shortcomings: somehow, the diversity of implementation strategies evolved into incompatibility. P4’s target independence proposes to solve this issue by using a compiler to translate P4 code into switch-specific code, taking the target’s capabilities into account.


In order to understand how disruptive this is, let’s look at the current state of affairs: commodity silicon vendors such as Broadcom and Mellanox already have APIs to control their switches, and the existence of those APIs has itself disrupted the industry, enabling Cumulus, SnapRoute, and even Arista. Now, would you prefer that your silicon vendors established a common interface, or would you rather rewrite software every time you want to test a new switch vendor? The answer is obvious: the first option benefits users and new vendors, the second benefits established vendors.

So, that’s the big pay-off opportunity: enabling competition and, with it, innovation. The challenge here is to give vendors the incentives to write the P4 compilers.

New industry players or adventurous operators, on the other hand, could write software on top of P4 and achieve multi-vendor integration at the cost of writing a compiler for each target they use. That can be game-changing. The big questions are: “How eager are developers to write P4 software?”, “How much does it cost to hire somebody to do it?”, and “Who will write the Cisco- or Broadcom-specific P4 compiler code?”

There are endless opportunities. In a parallel universe, AT&T forces Cisco to ship a P4 compiler for its devices; Cisco writes a bad compiler, claims P4 is bad technology, and sells you ACI instead. In a different universe, Barefoot writes a Broadcom compiler and makes sure it works, but then it “wastes” some resources promoting a competitor. A little more realistically, SnapRoute or Cumulus could write a P4 compiler for Broadcom Tomahawk and thus be able to enable their software on a plethora of existing devices. Even more realistically, Barefoot writes its own compiler for Tofino and keeps selling P4 to a limited niche market.

Now, if Barefoot took on the responsibility of writing P4 compilers for Broadcom and Mellanox, that would translate into huge value for NOS vendors and operators, since they would be able to switch vendors seamlessly. But it would only marginally increase adoption of Tofino, so the question remains: who would pay for this?

Now how much does it cost to adopt P4?

Before I answer this question, let me call back to a point from my post on network disaggregation. I ended it by asking: “Does OpenFlow effectively lock you in?”. The same question now applies to P4.

The question itself is misleading. I’ve heard vendors say, “OpenFlow locks you in, you might as well just buy our SDN.” There’s just so much wrong with this. OpenFlow isn’t perfect, but it does allow you to adopt software processes and deliver features much faster than your vendor will.

Any choice is a potential barrier and locks you in a little bit, but what everybody really means by lock-in is hardware lock-in. When you buy a generic x86 computer, you are free to install Ubuntu, Debian, Windows, or whatever you’d like. When you buy a PlayStation, you can’t just install the Xbox software on it; that’s vendor lock-in. The cost of doing so is prohibitive, and you would be better off just buying another appliance.

You could, at almost no cost, run an OpenFlow lab or field trial on Broadcom-based network devices and fall back to Cumulus if it doesn’t fulfill your needs. Unsurprisingly, vendors will claim lab trials aren’t needed because of their product quality, but experience tells us there will always be a missing feature.

Now to P4. From the adventurous perspective, P4 is great: you just have to write more software to get things done. For everybody else it has a significant cost: you have to hire premium developers, or Barefoot itself, to do it. That cost won’t be insignificant when Broadcom plus Big Switch might already give you the tools to improve your current process.

OpenFlow vs P4

OpenFlow turns 10 years old next year, and a significant amount of resources has been put into testing it. It has been (properly) commercially supported by Big Switch for 3+ years, if I’m not mistaken. I’d say with confidence that you could get an OpenFlow solution production-ready in a year. Realistically, could you get P4 ready to be deployed in production in a year?

Misconceptions:

  • Will P4 replace OpenFlow? Maybe. P4 offers a different value proposition, and OpenFlow agents may be written on top of P4. Great P4 implementations may eventually make OF obsolete.
  • Will P4 replace the Broadcom SDK? Same answer: a much better API may be written in P4 on top of theirs.
  • Will P4 replace OpenNSL?  Why not?
  • Will P4 replace NetFlow/sFlow? No. sFlow is a protocol for exporting data from switches; it says little about how you should implement things in the data plane.
  • Will P4 replace Riverbed? No way.
  • Will P4 replace OpenConfig? Nope, they are actually quite complementary.

Thanks for reading the long post. I welcome any thoughts or questions.


TCP BBR Congestion Control on Mininet

In this post, I demonstrate some benefits of using BBR congestion control and illustrate how easy it is to adopt it by using Mininet as an example. I’m excited to share this post with you guys because it’s been a while since I’ve made a tutorial and I love breakthrough innovations like this.

This post is divided into three sections: background on BBR, a quick-start tutorial, and technical challenges.

Background on BBR

TCP BBR has significantly increased throughput and reduced latency on Google’s internal backbone networks. From a great resource on the subject:

TCP BBR is rate-based rather than window-based; that is, at any one time, TCP BBR sends at a given calculated rate, instead of sending new data in response to each received ACK. In particular, TCP BBR does not directly link the sending of new data to the receipt of ACKs, and so, strictly speaking, is not actually a sliding-windows implementation. Therefore, we cannot properly talk about winsize or cwnd. Instead, we talk about the number of packets in flight, which is the rate times RTTactual, with the understanding that this number may vary with conditions.

Basically, BBR estimates the available bandwidth by keeping track of goodput: if an increase in the sending rate does not increase the observed goodput, it assumes it has reached the available bandwidth. It is reasonably effective at this, and as a result it keeps queueing in the network to a minimum.
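As a quick illustration of the quantity the excerpt above calls “packets in flight” (the sending rate times the measured RTT), with made-up numbers:

# Bandwidth-delay product: how much data BBR keeps in flight at its target rate.
bottleneck_bw = 100e6 / 8      # 100 Mbit/s bottleneck, in bytes per second
rtt = 0.050                    # 50 ms measured round-trip time
mss = 1448                     # payload bytes per TCP segment

bdp_bytes = bottleneck_bw * rtt
print(f"{bdp_bytes / mss:.0f} segments in flight")   # ~432 segments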

TCP throughput is inversely proportional to RTT, and most TCP implementations add extra queueing delay; as a consequence, TCP by itself can never reach 100% utilization. BBR changes that, and that’s why it’s such an impressive accomplishment.

Quick start

Open source is great because it allows innovation to be deployed much faster: BBR is already implemented in the Linux kernel, and using Mininet you can test it right away.

I’m a long-time fan of Stanford’s Reproducing Network Research blog, and I leveraged most of the Mininet code for this experiment from there.

Now let’s get to it!! This tutorial assumes you have Vagrant and Git. If you don’t, don’t panic, just follow this link. To start, you will need to set up the VM. I took care of all the dependencies for you; if you want to inspect what I’m doing, take a look at the mininet role in the ansible folder.

git clone https://github.com/castroflavio/bbr-replication/
cd bbr-replication
git checkout vagrant
vagrant up

This should take about 10 minutes to complete. After it’s done, proceed:

vagrant ssh
cd mininet
sudo ./figure5.sh all

After around 30 seconds the experiment should be done and you can exit the VM:

exit
open figure5_mininet/figure5_mininet.png

This should open the following figure:

The figure compares latency under TCP BBR and TCP CUBIC (lower is better). As you can see, BBR reduces latency from ~150 ms to ~50 ms (66%) in the average case and from ~400 ms to ~50 ms (87%) in the worst case. This is crazy!

Technical challenges

The first technical challenge is finding a Linux kernel that implements BBR; it turns out it ships with 4.9, so look out for that. The second challenge was setting up the pacing mechanism BBR relies on; it was mentioned on the CS244 website, but I did not understand it at first.

BBR requires a mechanism to control the sending rate, and it leverages the tc (traffic control) module from Linux. I knew about tc, but I didn’t know it was such a powerful tool. After some research on Linux queueing mechanisms, I found that BBR requires the fq (fair queueing) queueing discipline, because that’s what it uses to pace the sender. It turns out Mininet did not support fq at the time, and I had to change a couple of lines of code to add support for it.
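For reference, here is a minimal sketch (not the exact code from the repository) of the two knobs that mattered: attaching the fq qdisc so BBR can pace packets, and switching the host to the bbr congestion control. It assumes a kernel of 4.9 or later with the tcp_bbr module available:

# Minimal sketch: enable fq pacing and BBR on a Mininet host (kernel >= 4.9).
from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo

net = Mininet(topo=SingleSwitchTopo(2))
net.start()

h1 = net.get('h1')
# fq provides the per-flow pacing that BBR relies on for rate control.
h1.cmd('tc qdisc replace dev h1-eth0 root fq')
# Switch this host's TCP congestion control to BBR.
h1.cmd('sysctl -w net.ipv4.tcp_congestion_control=bbr')
print(h1.cmd('sysctl net.ipv4.tcp_congestion_control'))

net.stop()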

Conclusion

TCP has been around for decades, and for decades people have been trying to improve it. TCP’s congestion control mechanism literally saved the Internet in its early days; now I’m going to be bold and say that BBR, by providing nearly queueless congestion control, is saving latency-sensitive applications. It really is a big deal. I highly encourage you to try it out; at the very least, check out the article “Increase your Linux server Internet speed with TCP BBR congestion control”.



ESPRESSO – More insights into Google’s SDN

Google recently released a paper detailing how it has designed and deployed ESPRESSO: the SDN at the edge of its network. The paper was published at SIGCOMM’17. In the past, Google’s SDN papers have been very insightful and inspiring, so I had big expectations for this one as well.

In this blog post, I’ll summarize and highlight the main points of the paper. I’ll follow up with some conjectures on what we can take away from it in terms of industry trends and the state of the art in SDN technologies.

For reference, Google has released several papers detailing its networking technologies, these are the most important ones:

  • B4 detailed Google’s SDN WAN – a must-read. It explains how they drastically increased network utilization by means of a global traffic engineering controller.
  • Jupiter Rising details hardware aspects of data center networks.
  • BwE explains Google’s bandwidth enforcer that plays a huge role in traffic engineering.

B4 connects Google’s data centers; B2 is the public-facing network, which connects to ISPs in order to serve end users. Espresso, an SDN infrastructure deployed at the edge of B2, enabled higher network utilization (+13%) and faster roll-out of networking services.


Requirements and Design Principles

The basic networking services provided at the edge are:

  1. Peering – Learning routes by means of BGP
  2. Routing – Forwarding packets. Based on BGP or TE policies
  3. Security – Blocking or allowing packets based on security policies

To design the system, the following requirements were taken into account:

  1. Efficiency – capacity needs to be better utilized and grow cheaply
  2. Interoperability – Espresso needs to connect to diverse environments
  3. Reliability – must be available 99.999% of the time
  4. Incremental Deployment – green-field deployment only is not compelling enough
  5. High Feature Velocity

Historically, we have relied on big routers from Juniper or Cisco to meet these requirements. Those routers usually store the full Internet routing table, as well as giant TCAM tables for all the ACL rules needed to protect THE WHOLE INTERNET, and they are quite expensive. More importantly, a real software-defined network allows you to deliver innovation at the speed of software development rather than at the speed of hardware vendors.

Basically, 5 design principles are applied in order to fulfill those requirements:

  1. Software Programmability – OpenFlow-like Peering fabric
  2. Testability – Loosely coupled components allow software practices to be applied.
  3. Manageability – Large-scale operations must be safe, automated and incremental
  4. Hierarchical control plane – Global and local controllers with different functions allow the system to scale
  5. Fail-static – Data plane maintains the last known good state to prevent failures in case of control plane unavailability

Peering Fabric

The Peering Fabric provides the following functions:

  • Tunnels BGP peering traffic to BGP Speakers
  • Tunnels End user requests to TCP reverse proxy hosts
  • Provides IP and MPLS based packet forwarding in the fabric
  • Implements a part of the ACL rules


All the magic happens in the hosts. First, the BGP speakers learn the neighbors’ routes and propagate them to the local controller (LC), which then propagates them to the global controller (GC). The GC builds its intent for optimal forwarding of the full Internet routing table and propagates the resulting routes back to the LCs, which install them in all the TCP reverse proxy hosts. The same flow applies to security policies.

The BGP speakers are in fact a Virtual Network Function (VNF), that is, a network function implemented on x86 CPUs; the routing is a VNF as well, and so are the ACLs. Also, notice that the peering fabric itself is not complicated at all: the most-used ACL rules (about 5%) live there, but the full Internet routing table does not. The hosts make the routing decision and encapsulate the packets, labeling them with the egress switch and egress port of the fabric.

Configuration and Management

The paper mentions that as the LC propagates configuration changes down, it canaries those changes to a subset of nodes and verifies correct behavior before proceeding to wide-scale deployment. The following features are implemented:

  • Big Red Button – the ability to roll back features of the system, tested nightly.
  • Network Telemetry – monitors peering link failures and route withdrawals.
  • Dataplane Probing – end-to-end probes monitor the ACLs (it’s unclear whether OpenFlow is used for this).

Please refer to the original paper for details. I hope this post is useful for you, and I apologize for any miscommunication; at the end of the day, I’m writing this post for myself more than anything.

Feature and rollout velocity

Google has achieved great results in terms of feature and rollout velocity. Because Espresso is software-defined, they can leverage their testing and development infrastructure. Over three years, Google has updated Espresso’s control plane more than 50x more frequently than traditional routers, which would have been impossible without that test infrastructure.

The L2 private connectivity solution for cloud customers was developed and deployed in a few months, without new hardware and without waiting for vendors to deliver new features; again, something unimaginable with legacy network systems. In fact, they state that the same work on the traditional routing platform is still ongoing and has already taken 6x longer.


Traffic Engineering

To date, Espresso carries at least 22% of Google’s outgoing traffic. The GC’s global view allows them to shift traffic from one peering point to another, and the ability to make that choice through Espresso allows them to serve 13% more customers during peaks.

Google caps loss-sensitive traffic to guard against errors in bandwidth estimation. Nonetheless, the GC can push link utilization to almost 100% by filling the rest with lower-QoS, loss-tolerant traffic.

Conclusion

From the paper: “Espresso decouples complex routing and packet processing functions from the routing hardware. A hierarchical control-plane design and close attention to fault containment for loosely-coupled components underlie a system that is highly responsive, reliable and supports global/centralized traffic optimization. After more than a year of incremental rollout, Espresso supports six times the feature velocity, 75% cost-reduction, many novel features and the exponential capacity growth relative to traditional architectures.”


Network Disaggregation – The holy grail?

TL;DR: Yes.

The networking industry has seen more innovation in the last decade than in the three decades before it. The popularization of the SDN concept and the release of OpenFlow 1.0 pretty much ignited a flame present in every operator’s mind: the fear of vendor lock-in.

It was common for operators to rely solely on a single vendor every time a new feature was needed. Let’s say Joe has decided your network now needs to be monitored using a specific monitoring protocol, xFlow for illustration. Because you only use vendor A gear, you would have to request that your vendor add the feature to its software stack. Your sales engineer would then have to convince his developers that this is a critical feature, and then the feature would have to go through the full QA hardening pipeline to make sure it doesn’t break any of the 400 protocols present in your network OS. That process easily took years. It still takes a few years for the unfortunate souls that choose to be locked into a specific vendor.

OpenFlow became popular as a promise to bring innovation to the industry and to solve the multi-vendor integration problem by providing a standard interface for programming the network. As I mentioned in my last post, while it has brought innovation to the industry, for lack of a strong standardization process it failed to achieve vendor interoperability, and the demand for an escape route from vendor lock-in remained.

In 2011, a few smart minds in the industry (Facebook, Arista, Rackspace) started the Open Compute Project as an initiative to open up hardware design, with the understanding that so much of the innovation was already happening in the software layer. The idea quickly expanded to networking gear, and a trend of disaggregation between the NOS (network operating system) and the hardware started. Hardware vendors such as Broadcom and Mellanox started working on their own abstractions for the hardware programming interface, and that abstraction layer enabled a lot of good innovation; that’s where the open networking concept started.

With a common interface to the hardware established, several NOS vendors have appeared and, in fact, disaggregated the network. This naturally allows for faster development, since it decouples software development cycles from hardware development cycles; NOS vendors focus on software instead of hardware specifics, and the resulting diversity of vendors increases the speed of innovation.

Let me give you a couple of examples. Say you convinced your manager to buy open networking gear based on Broadcom chips (for example) and went with a “traditional” vendor, say Dell. Three years later, Broadcom comes up with a next-generation chip. You could (1) keep using Dell and upgrade the gear with no need to change any management systems. Alternatively, (2) if Dell’s features didn’t keep up with your expectations, you could replace it with Arista, or even Cumulus Linux, to experiment with completely new paradigms and finally deploy xFlow. In another scenario, say Mellanox’s next-generation hardware performs much better; you could again keep using Dell’s OS and smoothly upgrade your hardware at an optimal cost.

Traditionally, vendor lock-in makes you pay for decades for a suboptimal decision. Network disaggregation makes your decisions lighter, allowing you to quickly rethink your strategy and pivot cheaply if necessary.

Choice is extremely powerful. In college, I remember being amazed by the power of MIMO communications: embracing path diversity and the ability to “choose” the best path increases the capacity of a channel almost linearly. Network disaggregation gives you the same power, the power of choice.

Now, let me approach a few misconceptions I’ve seen around:

  • Is network disaggregation SDN?  No.
  • Can SDN be achieved through network disaggregation? Yes, ultimately network disaggregation accelerates innovation.
  • Does OpenFlow effectively lock you in to a vendor?

That’s a good one, and I’m going to answer it in a future post.

Don’t hesitate to reach out to me with any questions.

 


Has OpenFlow failed? – Challenges and implementations

In truth, very few vendors have successfully implemented the full capabilities of OpenFlow. OpenFlow gives programmers a great deal of flexibility, and it’s hard for hardware to cope with that much power. A few vendors, such as NoviFlow, Corsa, and Barefoot, are able to deliver programmable ASICs of that kind.

The reason comes from the nature of match tables, which are implemented in memory. In a match table, we match on a field, say a MAC address, and take an action, say forwarding the packet to port 1. The complexity comes when we want to match on multiple fields. Say we have a MAC table with N addresses and an IP table with M prefixes: the total size of the flow tables (memory) is M + N. If we instead want to execute the match in a single table, the size rises to M * N entries. Now imagine matching on even more fields at the same time.
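A toy way to see the blow-up (the table sizes are arbitrary):

# Matching MACs and IPs in two separate tables costs M + N entries;
# collapsing them into a single table costs every (mac, ip) pair: M * N.
N, M = 100, 1000                       # e.g. 100 MAC entries, 1000 IP prefixes
two_tables = N + M                     # 1,100 entries
one_table = N * M                      # 100,000 entries
print(two_tables, one_table)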

The multi-table aspect of OpenFlow (introduced in version 1.1 and widely deployed with 1.3) addresses the scalability problem of flow tables. But now the challenge is: how do you provide a standard API via OpenFlow when different vendors have different table pipelines?

The answer is: we don’t. Rather, we adapt our OpenFlow usage to each vendor in order to achieve our forwarding objective. Say we want to do L3 forwarding – match on IP, rewrite the L2 addresses, and forward to port N – one vendor might put the rewrite action in the IP table, while another might group all actions into a group action applied later in the pipeline.
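To make the forwarding objective itself concrete, here is a sketch of what that L3 flow looks like when expressed directly in OpenFlow 1.3 with Ryu; the table id, MAC addresses, prefix, and output port are made up, and which table the rewrite may live in is exactly what varies between vendor pipelines:

# Sketch: an OpenFlow 1.3 flow that matches on an IP prefix, rewrites the L2
# addresses, decrements the TTL, and outputs to a port (classic L3 forwarding).
from ryu.ofproto import ofproto_v1_3 as ofp
from ryu.ofproto import ofproto_v1_3_parser as parser

def l3_forwarding_flow(datapath, out_port=3):
    match = parser.OFPMatch(eth_type=0x0800,
                            ipv4_dst=('10.0.2.0', '255.255.255.0'))
    actions = [
        parser.OFPActionSetField(eth_src='00:00:00:00:01:01'),
        parser.OFPActionSetField(eth_dst='00:00:00:00:02:02'),
        parser.OFPActionDecNwTtl(),
        parser.OFPActionOutput(out_port),
    ]
    inst = [parser.OFPInstructionActions(ofp.OFPIT_APPLY_ACTIONS, actions)]
    # table_id=30 is arbitrary here; a real pipeline dictates where this can go.
    return parser.OFPFlowMod(datapath=datapath, table_id=30, priority=100,
                             match=match, instructions=inst)

# Inside a Ryu app you would send it with: datapath.send_msg(l3_forwarding_flow(datapath))

One vendor might accept this flow as written; on another, the set-field actions have to move to a different table or into a group, which is precisely the per-vendor adaptation described above.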

OpenFlow became popular on the promise of bringing innovation to the industry, analogously to how the x86 instruction set brought innovation to computers. In truth, interoperability between vendors via OpenFlow has been rare, precisely because vendors implement OpenFlow differently. We’ve seen vertical software stacks deliver SDN capabilities, but we haven’t seen interoperable solutions yet.

Last time I checked, ONOS, a great SDN controller, provided an abstraction on top of OpenFlow via the FlowObjective primitive: an objective is defined, and the OpenFlow drivers then map that objective to the hardware implementation. What that gives you is a single controller that can control multiple vendors’ devices. Vendors still need to write driver code, but application developers only have to write their software once; again, the power of abstraction shows itself. There may be others out there, but I’m aware of a couple of OpenFlow fabric solutions, such as Big Switch’s and Trellis (used in the CORD project), that have been deployed successfully and are stable.

OpenFlow is not the answer to all your networking problems; the perfect abstraction for networking would be, but it does not exist. OpenFlow definitely succeeded in bringing innovation to the networking industry. A few vendors like Big Switch have built incredible solutions, and the Open Networking Foundation has merged with ON.Lab, which may bring more energy towards standardization of the protocol. Support from vendors has slowed down as they started generalizing the definition of SDN; I will write more about that.


Troubleshooting Shortest Path and Topology Discovery on RYU

This post is a follow-up to Shortest Path forwarding with Openflow on RYU.

I originally wrote this code to show how to use SDN to achieve one of the most basic things you can do in a network: shortest path forwarding. In this post, I answer common questions about getting the code to work.

Quickstart:

Assuming you have all the dependencies, you should be able to run a Mininet topology using:

sudo mn --topo=tree,4 --controller remote

After starting Mininet, start RYU with the following command:

bin/ryu-manager --observe-links ryu/app/sp.py

On my computer, this is sufficient to discover the topology.

Now, let’s move on to the questions:

1 – Why do I see an empty or incomplete list of links?

Honestly, I’m not super familiar with the RYU topology app, so I don’t know. What works for me is restarting RYU and Mininet in different orders: stop both applications and try starting RYU first; if that doesn’t work, do the opposite. Repeat until it works.

2 – Does it still work with a loop in the topology?

As far as my tests go it does work with a loop in the topology.

3 – Does it still work with a Spanning Tree?

To test it, I start Mininet, set up spanning tree using ovs-vsctl, and then start RYU. After RYU learns the topology, it successfully lets the pings go through.

I had to restart RYU a couple of times until it learned the topology.

4- Why do I see so many packet-ins?

I did not bother handling flood storms when I coded this, so if your topology has a loop and spanning tree isn’t set up, ARP and other flooded packets may be broadcast forever in your network.

5 – Can I use another algorithm or set custom weights?

Yes. To set custom weights, you just have to add that information to the network graph and make the shortest-path computation use it. A quick sketch of the idea follows.
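Here is a minimal sketch, assuming the topology is kept in a networkx graph; the node ids, ports, and weights are arbitrary:

# Attach a 'weight' attribute to each link, then ask for the weighted path.
import networkx as nx

net = nx.DiGraph()
net.add_edge(1, 2, port=1, weight=10)   # weight could be cost, delay, etc.
net.add_edge(2, 3, port=2, weight=10)
net.add_edge(1, 3, port=3, weight=30)   # direct but "expensive" link

path = nx.shortest_path(net, 1, 3, weight='weight')
print(path)   # [1, 2, 3] -- total cost 20 beats the direct cost of 30

From there, the app just needs to install flows along whatever path the weighted computation returns.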


On the path to deployment of SDN technologies

At ON.Lab, we are moving fast toward real deployments of SDN technologies.

ONOS aims to be a reliable platform for programming networks. In order to unleash the full potential of SDN, developers should be able to write network programs regardless of the hardware used. This means the operating system should provide an abstraction that is just right: one that lets developers take full advantage of the existing hardware while remaining flexible enough to write software once and have it run on anything.

That’s not an easy task; to achieve it, several subsystems and layers of abstraction are constantly being developed in ONOS. Today, I will cover the FlowObjective service.

The FlowObjective service provides an interface between OpenFlow devices and ONOS. The need for it arose with OpenFlow 1.3, when vendors were allowed to diversify their multi-table forwarding pipelines in order to be more efficient. The diversification of pipelines is great for performance, but not so great for developers, who would otherwise have to either pick one specific vendor to write software for or rewrite their software for each hardware device.

The FlowObjective service abstracts that complexity by means of OpenFlow drivers. Using the FlowObjective forwarding primitives, you only have to write application code once, and each driver only has to be implemented once as well. Still, someone has to be the first to write the drivers.

The BGP router app and the Segment Routing app currently use the FlowObjective service. As a result, the existing OpenFlow drivers were built to support those applications and may not yet support some other applications.

We believe that the development of more applications will enrich the current OpenFlow drivers, and the results achieved with those drivers will in turn add value to new applications. Wouldn’t it be great to write an app that just works on a well-known set of hardware?

Well, that’s what we are working toward!


Easiest way to develop on ONOS

I just started interning at ON.Lab, where we are developing the ONOS controller, among other things.

The learning curve for ONOS is a bit of a challenge compared to its Python competitors, but it has several features that a carrier-grade controller needs, which makes the effort worthwhile. To me, the two most important ones are:

  • The Flow-objective abstraction
  • High-availability mechanisms

I’ll talk more about them in other posts.

Today I’ll show you the easiest way to set up your development environment.

cd ~   
git clone https://gerrit.onosproject.org/onos   
. ~/onos/tools/dev/bash_profile   
onos-setup-ubuntu-devenv   
cd onos
mci   

This will take a while to finish… While you are waiting, check out this other post with a series of very instructive videos about ONOS.

After it’s done, run:

ok clean

That’s it, the controller is running.

For detailed information, check the ONOS from Scratch tutorial.

I hope this was helpful


Introduction to ONOS

I just put together a few screencasts about ONOS.

IMO, they are a great way to get introduced to ONOS. If you like what you see, go ahead and look further.

This blog post shows how to set up the Ubuntu development environment.

More information can be found in the wiki:
https://wiki.onosproject.org/

I hope this is helpful!
