In the last post, I talked about how Ansible could be used for various forms of network automation. In the comments, Michael asked if Ansible could also be used for network test automation and verification. Since I’m just starting to explore Ansible, I figured, why not try it out? The short answer: it’s possible. Let’s take a look at an example that proves it out.
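As a rough sketch of what verification with Ansible might look like (the host group, command, and expected neighbor name below are all placeholders I’ve made up for illustration, not taken from the actual example), a playbook can run a show command and then assert on its output:

```yaml
---
# Hypothetical playbook: verify an expected neighbor adjacency exists.
# "switches" and "core-sw-01" are illustrative names, not real inventory.
- hosts: switches
  gather_facts: no
  tasks:
    - name: collect the neighbor table from the device
      raw: show cdp neighbors
      register: neighbors

    - name: verify the expected neighbor is present
      assert:
        that:
          - "'core-sw-01' in neighbors.stdout"
```

The `raw` and `assert` modules are standard Ansible; the key idea is that `register` captures device output so a later task can test it, turning a playbook into a repeatable network verification run.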
[This article is the outcome of some great conversations and exchanges I’ve had recently with Jeremy Schulman (@nwkautomaniac) around automation and DevOps in the world of networking. Thank you to Jeremy for those late tweaks before getting this posted! Thanks to Kirk Byers (@kirkbyers) as well; he was gracious enough to clarify a few things and indirectly assisted with this post.]
There have been numerous articles written that describe the what and the why of DevOps. Reading through a few of these, you find references to CAMS: you’ll read how “DevOps is about CAMS.” CAMS stands for Culture, Automation, Measurement, and Sharing. Imagine working in an environment where automation is embraced. We know most networks are not leveraging much, if any, automation. While we usually talk about engineers (of all types) not embracing automation, is the harsh reality that most organizations are far from having the right culture to embrace automation?
You can’t listen to an interview, a podcast, or an industry panel, or read a Q&A about the future of networking, without skill sets coming up. The biggest question of them all: what skills should network engineers focus on so they don’t become irrelevant? If you really want to know what skills make sense, why ask when you can do an easy search to see what skills companies are looking for these days in a variety of roles? Combine SDN with DevOps in your search criteria and the results may surprise you. They sure surprised me.
It’s been two weeks since I attended my 3rd consecutive Open Networking Summit (ONS) and I’m glad to say I finally found some time to get some notes and thoughts on paper about the conference. Here are some thoughts on SDN at Google and Microsoft, how their approaches compare and contrast with industry incumbents’ solutions, and how programmable NFV can be a game changer in the enterprise. I also include thoughts on how Embrane and Big Switch play into this.
Over the past few weeks, I’ve written about the idea behind a common programmable abstraction layer. Previous articles are here and here. It’s worth stating that something like a CPAL can be used with or without SDN controllers and with or without cloud management platforms. As can be seen from the previous write-ups and the video/demo below, today its primary focus is data extraction and data visibility. It can use device APIs or controller APIs. It’s about accessing the data you need quicker. It’s that simple. No more jumping from device to device and having to manage text and Excel files.
GitHub repo for CPAL
If there is a controller in the environment, you can still view data about particular physical and virtual switches by creating the right modules. The same can be said if a CMP/CMS were deployed. While a CPAL could easily make changes to the network, it’s about taking small steps that can have a larger impact on how we use new APIs on network devices and controllers. And if we don’t strive for a common framework now, we will end up with many more APIs than there are CLIs. What good is that?
Two of the three companies promoting white box switching, now more commonly known as bare metal switching, are Cumulus and Big Switch Networks. There has been coverage of each of these companies, but the question always arises: “does Cumulus support OpenFlow?” I had the chance to talk to JR Rivers, Cumulus CEO, at the last Open Networking User Group (ONUG) during a Tech Field Day video and heard the answer from him then, but hadn’t seen anything documented publicly.
In the previous post, I talked about a common programmable abstraction layer (CPAL). To better understand the thought process behind having a common PAL, it makes sense to review some of the work Jeremy Schulman has been doing. Jeremy often refers to the Python interactive shell as the new CLI for networking. When you watch him give a demo using the Python shell as a CLI, it is second nature and looks exactly like a network CLI. It makes perfect sense.
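To illustrate the idea (this wrapper class and its methods are hypothetical, not Jeremy’s actual library), a thin Python class can make an interactive session read much like a network CLI:

```python
class Device:
    """Hypothetical wrapper that makes the Python shell feel like a CLI.

    A real implementation would speak NETCONF or a vendor API under the
    hood; here the facts are stubbed so the sketch is self-contained.
    """

    def __init__(self, host):
        self.host = host
        # Stubbed data standing in for a live API call to the device.
        self._facts = {"hostname": host, "model": "switch-model", "os": "network-os"}

    def show(self, item):
        # A real library would issue an RPC here and normalize the reply.
        return self._facts.get(item, "unknown")


# In the interactive shell, this reads much like a CLI session:
dev = Device("leaf-01")
print(dev.show("hostname"))  # leaf-01
```

The ergonomics are the point: `dev.show("hostname")` is close enough to `show hostname` that a network engineer can be productive in the Python shell on day one, while everything returned is structured data rather than screen-scraped text.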
In late January, there were some big names on stage at the latest Open Compute Summit. I’d like to focus on one keynote panel called “Opening Up Network Hardware.” The panelists for this session were Martin Casado (VMware), Matthew Liste (Goldman Sachs), Dave Maltz (Microsoft), and JR Rivers (Cumulus), and the panel was led by Najam Ahmad (Facebook). If you haven’t watched the session already, it’s definitely worth it. You can check it out here.
In a recent post, I wrote about some Python work I was testing on the Nexus 3000. The end conclusion was that open Linux platforms will offer more flexibility for the consumer of the technology, ultimately the customer. In this post, we’ll take a look at an example that integrates Python with the native Linux operating system.
If you haven’t heard, there is a new switch vendor in town – Pluribus Networks. That’s right. In the new world where hardware is being dominated by software, there is an upstart that is trying to sell ASICs (along with their value-added software, of course). This actually isn’t too common these days. Since Software Defined Networking (SDN) became the latest craze, the only startups going after major incumbents have been Plexxi and Pica8. Before them, Arista.
Note: I am not including software-only companies that can run on bare metal switches, such as Cumulus Networks.
This post shares some thoughts on some recent testing I’ve done with a Cisco Nexus 3000 and its built-in Python interpreter. It also touches upon why open and programmable could benefit the community with some concrete examples.
The application that I have started to build is all about managing devices programmatically, more efficiently and more easily, without using the CLI. You will see that the Python APIs (methods, functions, etc.) are still fairly limited on the 3K, so I did have to use the “cli” function to send commands from Python to the native Cisco NX-OS CLI. Having access to the underlying Linux could have made it possible to modify the needed files directly instead.
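As a rough sketch of what sending a command from the on-box interpreter looks like (the exact module and function names for the CLI helper vary by NX-OS release, so this example falls back to a stub when run off-box, and the hostname shown is made up):

```python
try:
    # On the switch itself; the import path varies by NX-OS release.
    from cisco import cli
except ImportError:
    # Off-box fallback so this sketch runs anywhere: simulate the CLI call
    # with canned output (hostname is illustrative).
    def cli(command):
        canned = {"show hostname": "n3k-lab-01"}
        return canned.get(command, "")


def get_hostname():
    # Wrap the raw CLI call so the rest of the application never has to
    # deal with CLI command strings directly.
    return cli("show hostname").strip()


print(get_hostname())
```

Wrapping each `cli()` call in a small function like this keeps the CLI dependency in one place, so if a native Python API for that data appears in a later release, only the wrapper changes.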
Software Defined Networking (SDN) is the new way of networking. It’s plain and simple. And one of these days we’ll just go back to calling it networking because at its root, the network will still be forwarding the data needed for businesses to operate and thrive. In this post, we’ll look at several new products and companies that have emerged over the last few years within the SDN Ecosystem and see why SDN is already the new norm in networking.
There is more talk these days about mice and elephant flows. One option to give these elephants special treatment is to deploy a separate physical network to handle the top talkers and elephant flows. How can OpenFlow help in a design like this to increase the overall performance of the network?
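As a back-of-the-napkin sketch (the threshold and flow identifiers are illustrative, not from any production design), an OpenFlow application could poll per-flow byte counters and steer anything above a threshold onto the separate high-capacity path:

```python
ELEPHANT_BYTES = 10 * 1024 * 1024  # illustrative threshold: 10 MB


def classify_flows(flow_stats, threshold=ELEPHANT_BYTES):
    """Split flows into mice and elephants based on byte counters.

    flow_stats: dict mapping a flow identifier to its byte count, as
    might be reported by an OpenFlow flow-stats reply.
    Returns (mice, elephants) as lists of flow identifiers.
    """
    mice, elephants = [], []
    for flow_id, byte_count in flow_stats.items():
        (elephants if byte_count >= threshold else mice).append(flow_id)
    return mice, elephants


# A controller would then push higher-priority flow entries for the
# elephants with an output action toward the dedicated elephant fabric,
# leaving the mice on the default path.
```

Because the classification lives in the controller rather than in fixed hardware policy, the threshold and the steering action can be tuned, or replaced entirely, without touching the switches.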
It's been nearly a week since the Insieme launch and I've yet to write a post about it, but wanted to share the following excerpt that was originally posted in a recent Network World article where Martin Casado comments on Cisco's ACI vs. VMware's NSX.
"NSX supports Citrix XenServer and Red Hat KVM as well as VMware ESX, he says. Support for Microsoft Hyper V is coming. And if the point Cisco's trying to make is that software overlays require a hypervisor, well, NSX can also run on bare metal servers without one, Casado claims. It can create tunnels from a Linux endpoint, he says."
Tunnels to bare metal servers. Interesting to say the least.
For the original article:
SDSec – have you heard that one before? This is actually what startup vArmour is preaching – Software Defined Security. I had the opportunity to talk with one of their guys at ONUG to learn a little bit more about them. Here is what I found out.
Yesterday was an interesting day in that I attended a full day ONUG academy session that was all about writing SDN applications. Big thanks to Matt Davy and Chuck Black for leading the session. While we weren’t hacking on code, there was a lot of discussion around APIs, network programmability, and the approach to take when building SDN applications [that leverage northbound APIs of a controller]. I’ve made it pretty public that I’ve been working with onePK to build my own controller (using the term controller very loosely here) that communicates directly with network devices, as opposed to natively integrating with an existing controller such as OpenDaylight or Floodlight and leveraging its northbound APIs.
In a 3-tier software defined network (SDN) that has control and data plane separation leveraging a protocol such as OpenFlow, there are generally data plane devices, controllers, and applications/control programs. Pretty straightforward.
If a packet enters the network switch (a data plane device) and doesn’t have a match in the flow table, it’s punted to the controller, which decides how to handle that packet and the subsequent packets in that flow. This is classic reactive forwarding. Due to latency and possible scaling issues, it’s recommended to deploy proactive flow forwarding whenever possible.
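The difference between the two modes can be sketched with a toy flow table (the match strings and action names here are illustrative, not real OpenFlow structures):

```python
class FlowTable:
    """Toy model of a switch flow table to contrast the two modes."""

    def __init__(self):
        self.entries = {}            # match -> action
        self.punts_to_controller = 0

    def install(self, match, action):
        # Proactive: the controller pre-installs entries before traffic
        # arrives, so packets never take the punt path.
        self.entries[match] = action

    def handle_packet(self, match):
        # Reactive: a table miss punts the packet to the controller,
        # which decides and installs an entry for subsequent packets.
        if match in self.entries:
            return self.entries[match]
        self.punts_to_controller += 1
        action = "forward:port1"     # the controller's decision (illustrative)
        self.entries[match] = action
        return action


table = FlowTable()
table.install("10.0.0.0/24", "forward:port2")  # proactive entry
table.handle_packet("10.0.0.0/24")             # hit: no punt
table.handle_packet("192.168.1.0/24")          # miss: punted to controller
table.handle_packet("192.168.1.0/24")          # hit now: no second punt
print(table.punts_to_controller)               # 1
```

The punt counter makes the latency argument concrete: reactive forwarding pays a controller round trip on the first packet of every new flow, while proactive entries never touch the controller in the data path.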
I recently participated in two podcasts where the focus was all about Software Defined Networking and the changing network landscape.
If you are interested in listening, check them out:
On a side note, Brian and Theo rock. If you haven't been listening to Providing Cloudy Service or The Cloudcast, I suggest you start!
Training and use cases are still emerging in the world of Software Defined Networking (SDN). Luckily, there is an event, local (for me) in New York City, that has two full days dedicated to SDN (some call it open networking nowadays since it’s never been more cool to be open) on October 29 & 30. The event is ONUG Fall 2013. On day one there will be solid hands-on training on building your own SDN applications, understanding white box networking, and getting started with OpenFlow deployments. Day two is structured more like a traditional conference.
I was driving home tonight and saw a tweet from Ethan Banks (@ecbanks) that stated, “After all these years of IPSEC (a standard, after all), bringing up a tunnel between disparate vendors is one of the hardest tasks I do." When I see these kinds of statements and have these thoughts myself, I think: is there a clear problem, do others have the same problem, is this a problem looking for a solution, and can there be a better way? In this particular case, it’s definitely a problem, but can there be a better way? Can we view this as an example where the network and security industry has been okay with mediocrity? Maybe.
A few weeks ago, I wrote about where I was in the world of programming. As I said then, I am still focused on building a onePK application. This onePK application, now dubbed Network Control Manager, is a central interface to the network. It can be used to gather real-time data as well as make changes to the network in a more centralized, automated, and real-time fashion. Following the SDN model, this application can be seen as an SDN controller if you wish to call it that. The southbound API used is Cisco’s onePK and the northbound API is self-defined as “je-nb-API” :). The application/controller exposes northbound RESTful interfaces to be consumed by 3rd party applications and control programs, the first of which is a CLI application that interacts with the network via Network Control Manager.
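To make the shape of a northbound interface concrete (the resource names, device data, and payloads below are invented for illustration and are not the actual je-nb-API), the handlers can be modeled as plain functions returning JSON-serializable dicts, which any web framework could then expose as REST resources:

```python
import json

# Stand-in for state the controller gathered from the network via the
# southbound API (hostnames and versions are made up).
INVENTORY = {
    "devices": [
        {"hostname": "core-01", "os_version": "6.2"},
        {"hostname": "edge-01", "os_version": "6.1"},
    ]
}


def get_devices():
    """Handler for GET /devices: list known device hostnames."""
    return {"devices": [d["hostname"] for d in INVENTORY["devices"]]}


def get_device(hostname):
    """Handler for GET /devices/<hostname>: one device's details."""
    for device in INVENTORY["devices"]:
        if device["hostname"] == hostname:
            return device
    return {"error": "not found"}


# A 3rd party application (such as the CLI application mentioned above)
# would consume these as JSON over HTTP:
print(json.dumps(get_devices()))
```

Keeping the handlers as pure functions over controller state is a useful separation: the REST layer stays thin, and the same functions can back the CLI application directly without going over HTTP at all.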
In the new world of networking, you can program your network. You can make it do whatever you want. Even your business applications can program your network. Have you heard this before? If so, you aren’t alone. Well, before you let business applications program the network, how about starting somewhere a little less frightening? Here is a good use case for network programmability that I thought about during the ThousandEyes presentation while at Network Field Day 6. It combines ThousandEyes Private Agents and Cisco’s onePK.
In Part 1, I talked about how OpenFlow could commoditize hardware in the network visibility fabric market. In this post, I’ll focus on intelligent network load balancing.
Long overdue, but here are some slides from the Open Networking Summit that happened back in April 2013. These were presented by an architect on the Azure team. Fully relevant given some of the discussion happening around the SDDC today.
Absolutely, but I’m not going to say what you think. I’m going to shift from talking about the traditional network or the network virtualization solutions that have been getting all of the attention lately. There are still companies out there building new products that leverage black box, vertically integrated hardware and software. The two markets that could lose out to commodity hardware are network visibility fabrics and intelligent network load balancing. In this post, I’ll focus on visibility fabrics and save the latter for my next post.