

Is vendor lock-in really a big deal?

I recently came across a Datanauts podcast episode, “Choosing Your Next Infrastructure” (if you like podcasts, I HIGHLY recommend Packet Pushers; I’m a fan because of their diverse and unbiased content). The episode raises several great considerations for choosing new infrastructure and does an excellent job of describing the pros and cons of different strategies, but a few points regarding vendor lock-in got me scratching my head. The article “Vendor lock-in: the good, the bad and the ugly” does a great job of explaining the overall concept of vendor lock-in.

Additionally, I see it in the following way: some vendors provide hardware and software as integrated solutions, potentially spanning storage, networking, and compute. Traditional vendors have been doing this for decades, and that is one part of vendor lock-in: you rely on your vendor to deliver new features, and if they do not deliver, the migration costs are usually prohibitive, which is a good enough reason to just pay the same vendor a premium.


During the podcast, the following question was asked: “If you commit to a hyper-converged platform, you are committing to a vendor and are thus, in fact, locked in. Is that a big deal?”

The response was: “What’s important is understanding that lock-in is going to happen… and it’s important to choose a vendor that is going to be a good partner for your business… So if you have a very good relationship with a vendor who provides an all-at-once solution, that may be strategic for you, and if you would rather keep the hardware open and have a vendor you trust to give a good software solution, that’s your best path.”

Learning curves and migration costs will always exist. Successful organizations, managers, and architects minimize those costs while meeting critical requirements. That answer caught my attention because this is not the first time I’ve heard comparisons between hardware lock-in and software lock-in that downplay the cost of hardware lock-in. I’ve heard stronger opinions from hardware vendors before (of course): “hardware locks you in, software locks you in, therefore you might as well lock yourself to the hardware.” That statement is easy to make when you are selling hardware; it’s much harder to justify when you are buying it.

I’m not completely opposed to lock-in as a way to meet critical requirements, but that decision must be made very carefully and rationally. More often than not, the future cost of the decision is much higher than the initial cost of the whole project. Requirements are uncertain, and they become more dynamic every day.

For example, say that at design time you thought your critical requirement was performance, so you acquired the best-performing solution in the industry. A year later, the solution becomes popular in your organization (because it is so good!), and multi-tenancy is suddenly much more important. You are locked in: your manager now demands multi-tenancy, and your sales engineer gladly offers you an add-on contract for whatever price (s)he wishes. The requirement is fulfilled, all parties involved go to dinner at a fancy steakhouse, and everybody is happy!

If your organization is mature enough to have projects start and end with the exact same requirements, then by all means pick vendor lock-in. But if your organization operates in a dynamic environment, external or internal, then you should always maximize choice and minimize barriers to change in order to meet ever-changing requirements.


I’m a firm believer that competition and choice ultimately drive innovation, so in order to consistently deliver innovative solutions one must be open to competition. I’d argue that computers are only what they are now because of choice, and personal computers are a nice example: one can choose between AMD and Intel processors, and between OSX, Windows, and Linux. At the end of the day, lots of people will buy a solid computer integrated by Microsoft or Apple, but in the long run the most innovative, and often most cost-effective, solutions are the build-your-own type.

More than that, at the end of the day, a well-put-together gaming setup is much more exciting than a boring MacBook, just as Facebook’s or Google’s chassis switches are more exciting than an expensive Juniper router.

 


Network Disaggregation – The holy grail?

TL;DR: Yes.

The networking industry has seen more innovation in the last decade than in the previous 30 years. The popularization of the SDN concept and the release of OpenFlow 1.0 pretty much ignited a flame present in every operator’s mind: the fear of vendor lock-in.

It was common for operators to rely solely on a single vendor every time a new feature was needed. Say Joe decides your network now needs to be monitored using a specific monitoring protocol, xFlow for illustration. Because you only use vendor A gear, you would have to request that your vendor add the feature to its software stack. Your sales engineer would then have to convince the developers that this is a critical feature, and the feature would have to go through the full QA hardening pipeline to make sure it doesn’t break any of the 400 protocols present in your network’s OS. That process easily took years. It still takes a few years for the unfortunate souls who choose to be locked into a specific vendor.

OpenFlow became popular as a promise to bring innovation to the industry and to solve the multi-vendor integration problem by providing a standard interface for programming the network. As I mentioned in my last post, while it has brought innovation to the industry, for lack of a strong standardization process it failed to achieve vendor integration, and the demand for an escape route from vendor lock-in remained.

In 2011, a few smart minds in the industry (Facebook, Arista, Rackspace) started the Open Compute Project as an initiative to open up hardware design, with the insight that so much of the innovation in computing already lives in the software layer. The idea quickly expanded to networking gear, and a trend of disaggregation between the NOS (network operating system) and the hardware began. Silicon vendors such as Broadcom and Mellanox started working on their own hardware programming abstractions, and that abstraction layer enabled a lot of good innovation; that’s where the Open Networking concept started.

Having established a common interface to interact with the hardware, several NOS vendors have emerged and in fact disaggregated the network. This naturally allows for faster development cycles, since it decouples software development cycles from hardware development cycles: NOS vendors focus on software instead of hardware specifics, and the resulting diversity of vendors increases the speed of innovation.
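To make the idea concrete, here is a minimal sketch of what such a hardware abstraction layer might look like. The `ForwardingChip` interface and the vendor classes below are invented for illustration (real-world analogues would be the vendor SDKs or OCP’s Switch Abstraction Interface); the point is that the NOS is written once, against the interface:

```python
from abc import ABC, abstractmethod

class ForwardingChip(ABC):
    """Hypothetical hardware abstraction: any ASIC exposing this
    interface can be driven by any NOS written against it."""

    @abstractmethod
    def add_l3_route(self, prefix: str, next_hop: str, port: int) -> None:
        """Program a longest-prefix-match route into the ASIC."""

class BroadcomChip(ForwardingChip):
    def add_l3_route(self, prefix, next_hop, port):
        # A real driver would call the Broadcom SDK here.
        print(f"[broadcom] {prefix} -> {next_hop} via port {port}")

class MellanoxChip(ForwardingChip):
    def add_l3_route(self, prefix, next_hop, port):
        # A real driver would call the Mellanox SDK here.
        print(f"[mellanox] {prefix} -> {next_hop} via port {port}")

def nos_install_routes(chip: ForwardingChip) -> None:
    # The NOS never touches vendor specifics; swapping hardware
    # means swapping the chip object, nothing else.
    chip.add_l3_route("10.0.0.0/24", "192.168.1.1", port=1)

nos_install_routes(BroadcomChip())
nos_install_routes(MellanoxChip())
```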

Let me give you a couple of examples. Say you convinced your manager to buy open networking gear based on Broadcom chips (for example) and you went with a “traditional” vendor, say Dell. Three years later, Broadcom comes up with a next-generation chip. You could (1) keep using Dell and upgrade the gear with no need to change any management systems. Alternatively, (2) if Dell’s features didn’t keep up with your expectations, you could replace its OS with Arista’s, or even Cumulus Linux, in order to experiment with completely new paradigms and finally deploy xFlow. In another scenario, say Mellanox’s next-generation hardware performs much better; then you could keep using Dell’s OS and smoothly upgrade your hardware at an optimal cost.

Traditionally, vendor lock-in makes you pay for decades for a non-optimal decision; network disaggregation makes your decisions lighter, allowing you to quickly rethink your strategy and cheaply pivot if necessary.

Choice is extremely powerful. In college, I remember being amazed by the power of MIMO communications: embracing path diversity and the ability to “choose” the best path increases the capacity of a channel almost linearly with the number of antennas. Network disaggregation gives you the same power, the power of choice.
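(For the curious, the standard MIMO capacity result, stated loosely: with N_t transmit antennas, N_r receive antennas, channel matrix H, and signal-to-noise ratio rho,

$$
C \;=\; \log_2 \det\!\left( I_{N_r} + \frac{\rho}{N_t}\, H H^{*} \right)
\;\approx\; \min(N_t, N_r)\, \log_2 \rho \quad \text{bits/s/Hz at high SNR,}
$$

so capacity grows roughly linearly in the number of antenna “choices” rather than only logarithmically in power.)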

Now, let me address a few misconceptions I’ve seen around:

  • Is network disaggregation SDN? No.
  • Can SDN be achieved through network disaggregation? Yes; ultimately, network disaggregation accelerates innovation.
  • Does OpenFlow effectively lock you into a vendor?

That’s a good one, and I’m going to answer it in a future post.

Don’t hesitate to reach out to me with any questions.

 


Has OpenFlow failed? – Challenges and implementations

In truth, very few vendors have successfully implemented the full capabilities of OpenFlow. OpenFlow gives programmers an enormous amount of flexibility, and it’s hard to make hardware cope with that much power. A few vendors, such as NoviFlow, Corsa, and Barefoot, are able to deliver programmable ASICs up to the task.

The reason comes from the nature of match tables, which are implemented in memory. In a match table, we match on a field, say a MAC address, and take an action, say forward the packet to port 1. The complexity comes when we want to match on multiple fields. Say we have a MAC table with N entries and an IP table with M entries. Kept as separate tables, the total flow-table size (memory) is M + N. If we instead want to execute the match in a single table, the table size grows to M × N, the cross-product of the two. Now imagine matching on many fields at the same time.
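A quick back-of-the-envelope computation shows why the cross-product blows up. The table sizes below are made-up round numbers, purely for illustration:

```python
# Separate match tables grow additively; a single combined table must
# hold every (MAC, IP) combination, so it grows multiplicatively.
n_mac = 4_096    # hypothetical number of MAC table entries (N)
m_ip = 16_384    # hypothetical number of IP table entries (M)

separate = n_mac + m_ip   # N + M
combined = n_mac * m_ip   # N * M

print(f"two tables:   {separate:>12,} entries")   # 20,480
print(f"single table: {combined:>12,} entries")   # 67,108,864
```

Multiplying in a third match field makes the combined table another few orders of magnitude larger, which is exactly the memory wall a multi-table pipeline avoids.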

Multiple flow tables entered the OpenFlow spec in version 1.1 and became widely implemented with version 1.3; they address the scalability problem of flow tables. But now the challenge is: how do we provide a standard API via OpenFlow when different vendors have different table layouts?

The answer is: we don’t. Rather, we adapt our OpenFlow pipeline to each vendor in order to achieve our forwarding objective. Say we want to do L3 forwarding, which means match on IP, then rewrite the L2 addresses and forward out port N. One vendor might have put the rewrite action in the IP table, while another vendor might have grouped all the actions in a group action later in the pipeline.
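Here is a toy illustration of that divergence. The two layouts below are invented and don’t correspond to any real ASIC; they only show how the same L3-forwarding intent lands in differently shaped pipelines:

```python
# One intent: match dst IP 10.0.0.1, rewrite the dst MAC, output port 3.

# Vendor A: the rewrite and output actions sit directly in the IP table.
vendor_a = [
    {"table": "ip",
     "match": {"ip_dst": "10.0.0.1"},
     "actions": [("set_eth_dst", "bb:bb:bb:bb:bb:bb"), ("output", 3)]},
]

# Vendor B: the IP table only points to a group; the group carries
# the rewrite and output actions.
vendor_b = [
    {"table": "ip",
     "match": {"ip_dst": "10.0.0.1"},
     "actions": [("group", 7)]},
    {"group": 7,
     "buckets": [[("set_eth_dst", "bb:bb:bb:bb:bb:bb"), ("output", 3)]]},
]
```

Both switches speak OpenFlow, yet a controller hard-coded to one shape cannot program the other.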

OpenFlow became popular on the promise of bringing innovation to the industry, analogously to how the x86 instruction set brought innovation to computers. In truth, interoperability between vendors via OpenFlow has been rare, precisely because vendors have different implementations of OpenFlow. We’ve seen vertical stacks of software deliver SDN capabilities, but we haven’t seen interoperable solutions yet.

Last time I checked, ONOS, a great SDN controller, provided an abstraction over OpenFlow via the FlowObjective primitive: an objective is defined, and the OpenFlow drivers then map that objective onto each hardware implementation. What that buys you is the ability to have one controller controlling multiple vendors. Vendors still need to write driver code, but application developers only have to write their software once. Again, the power of abstraction shows itself. There may be others out there, but I’m aware of a couple of OpenFlow fabric solutions, such as BigSwitch’s and Trellis (used in the CORD project), that have been deployed successfully and stably.
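Conceptually, the driver layer is just a per-vendor compilation step. The sketch below is pseudo-ONOS in Python; the real FlowObjective API is Java and considerably richer, so treat every name here as illustrative only:

```python
# The application states its objective once; per-vendor drivers compile
# it into that vendor's table layout (cf. the two shapes above).
def vendor_a_driver(obj):
    return [{"table": "ip", "match": obj["match"], "actions": obj["actions"]}]

def vendor_b_driver(obj):
    return [{"table": "ip", "match": obj["match"], "actions": [("group", 7)]},
            {"group": 7, "buckets": [obj["actions"]]}]

DRIVERS = {"vendor_a": vendor_a_driver, "vendor_b": vendor_b_driver}

def apply_objective(vendor, obj):
    # The application never sees which shape the hardware wants.
    return DRIVERS[vendor](obj)

objective = {"match": {"ip_dst": "10.0.0.1"},
             "actions": [("set_eth_dst", "bb:bb:bb:bb:bb:bb"), ("output", 3)]}
for vendor in DRIVERS:
    print(vendor, apply_objective(vendor, objective))
```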

OpenFlow is not the answer to all your networking problems. The perfect abstraction for networking would be the answer, but it does not exist. OpenFlow definitely succeeded in bringing innovation to the networking industry. A few vendors, like BigSwitch, have built incredible solutions, and the Open Networking Foundation has merged with ON.LAB, which may bring more energy toward standardization of the protocol. Support from vendors has slowed as they started generalizing the definition of SDN; I will write more about that.
