The first Tech Field Day (@TechFieldDay) event I ever participated in was Wireless Field Day 2 in January 2012. I happened to be on Twitter and saw some people I followed talking about it, so I clicked a link, and there I was watching a live feed of Meraki while I sat on my couch having dinner. I even asked a question on Twitter directed at the speaker. If my memory serves me correctly, Tom Hollingsworth (@NetworkingNerd) was nice enough to relay the message from Twitter to the Meraki presenter as I waited and listened for an answer. My voice was heard from across the United States without my even being in the room. This was pretty sweet.
It Starts with Affinities
Plexxi is a start-up in the network industry rethinking networking from the ground up. A Plexxi network is different. It is cabled differently. It is thought about differently. It is integrated with other systems differently. With Plexxi, it all starts with the conversations occurring on the network. These conversations, or relationships, occur between different systems on the network, and they are what Plexxi calls Affinities.
I’ve written in the past about how the virtual switch is an SDN war zone. It still seems to be early days for Software Defined Networking (SDN) no matter how much time goes by, and I realized there isn’t a whole lot of documentation out there on Open vSwitch, especially for the new guy or gal on the block, compared to the vendor offerings from Cisco and VMware. Over the coming weeks, I hope to write more about Open vSwitch, Linux networking, and OpenStack Networking (Neutron, formerly Quantum). On that note, this post is meant to be an easy-to-read (if longer than expected) introduction to Open vSwitch (OVS).
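To give a feel for how approachable OVS is, here is a minimal sketch of creating a bridge and attaching an interface by driving the standard ovs-vsctl utility from Python. The bridge and interface names (br0, eth1) are placeholders of my choosing; this assumes a Linux host with OVS installed and root privileges.

```python
# A minimal sketch: create an OVS bridge and attach a NIC via the ovs-vsctl
# CLI. br0/eth1 are placeholder names; run as root on a host with OVS.
import subprocess

def ovs_vsctl(*args):
    """Run an ovs-vsctl command and return its stdout."""
    cmd = ["ovs-vsctl"] + list(args)
    return subprocess.run(cmd, check=True, capture_output=True, text=True).stdout

ovs_vsctl("add-br", "br0")            # create a new OVS bridge
ovs_vsctl("add-port", "br0", "eth1")  # attach a physical NIC to the bridge
print(ovs_vsctl("show"))              # dump the current OVS configuration
```

The same ovs-vsctl commands can of course be run directly from a shell; wrapping them in a function just makes them easy to script against.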
What is going on with Big Switch? It is really hard to tell on the surface, but there is a general sentiment out there (particularly in Twitter land) that they have a bumpy road ahead of them, especially after the OpenDaylight Project (ODP) launched. Okay, but what start-up has it easy? The only ones who know how Big Switch is really doing are the users of their technology and Big Switch themselves.
Two weeks ago I had the pleasure of taking a two-day OpenStack training course from Mirantis in New York City. It was well worth it because, up until that point, I had never been hands-on with OpenStack and, more relevant for me, had never done a deep dive on the underlying architecture. Plus, it’s always good to get time out of the office and dig into a new technology.
While many are still figuring out private cloud and public cloud, along with the networking impact of each, Cisco went into a bit more detail today on its Hybrid Cloud Networking strategy, which leverages the Cisco Nexus 1000V InterCloud product initially announced a few months ago. I had the opportunity to attend a session on this just a few hours ago, and here are some of the highlights along with some general thoughts.
After a few days immersed in Cisco land down in Orlando, what’s trending?
While there are definitely many trends, many sessions, and many perspectives, I can only speak to what I saw. Here are three (of many) things that received a good amount of focus in the breakouts I attended. I would also say all three are highly strategic for Cisco in the Data Center and Cloud markets.
I wrote this a few days ago but didn’t have time to post it; it’s still relevant given all the discussion around SDN and network virtualization.
Getting into a long thread on Twitter is entertaining. You have to keep your thoughts short, and sometimes it’s hard to list every descriptive phrase known to man to articulate what you mean. But that also makes it fun! One example is the thread last Saturday that I jumped into a little late.
In my previous post, I closed by asking, “if you require certain hardware configurations and ASICs for your virtual network solutions, have you truly deployed network virtualization?” I didn’t touch upon where hardware does and does not make sense, though. I will expand on that here.
Brad Hedlund recently wrote an overview of Network Virtualization. I’d recommend it to anyone exploring network virtualization technologies over the coming months. In particular, I want to focus on the comments on that post from both Brad and David Klebanov. Those comments sparked a flurry of thoughts that I’ll attempt to get out in this post.
As a reminder, this is pure opinion and speculation on my part, just as it is theirs. Mine, however, I’ll say is a bit more neutral, since I don’t work for a manufacturer :).
Who will be the first to promote it? Will it come via hardware or simply as an application of network virtualization? Because it will happen.
While some wholeheartedly believe in not connecting sites with ANY type of Layer 2, and I am actually a bigger believer in that now than I used to be, customers still ask for and “require” this occasionally, namely for workload mobility. Nothing I hear or read actively promotes using an overlay such as VXLAN between data centers. The objections are usually (1) BUM traffic control, (2) ARP localization, (3) the traffic trombone (since there is only one active default gateway), and (4) STP isolation. If you want to know all of the typical responses, look at the benefits of OTV. But again, in a world that will soon be eaten by software, why can’t a viable solution be developed for L2 DCI with overlays?
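To ground the idea, here is a minimal sketch, assuming Open vSwitch on each side, of what the overlay piece of an L2 DCI could look like: a VXLAN tunnel port pointed at the remote data center’s tunnel endpoint. The bridge name and remote IP are placeholders, and this only shows the encapsulation mechanism; the BUM, ARP, and gateway issues listed above still need real answers.

```python
# A hedged sketch of the overlay piece of an L2 DCI with Open vSwitch:
# add a VXLAN tunnel port on br0 pointing at the far-side data center.
# br0 and the remote VTEP address are placeholders, not a real design.
import subprocess

REMOTE_VTEP = "203.0.113.10"  # placeholder IP of the remote tunnel endpoint

subprocess.run(
    ["ovs-vsctl", "add-port", "br0", "vxlan0",
     "--", "set", "interface", "vxlan0",
     "type=vxlan", "options:remote_ip=" + REMOTE_VTEP],
    check=True,
)
```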
Reflecting back and writing about my first Interop while I wait to board a sweet red-eye home, then head straight into the city for a full-day SDN session with Cisco, is livin’ the dream, I say.
It was a short trip, but action-packed with keynote sessions, breakout sessions, and private sessions set up for some of us bloggers. I also somehow ended up in two Tech Field Day sessions as well. A big thanks to Ivy Worldwide and HP for bringing us out. It was definitely interesting being at Interop as a blogger because we (about six of us) had great access to HP product management, technical marketing, and executive team members. The group I was in also had the opportunity to sit down for a Q&A with Bethany Mayer, SVP & GM of Networking at HP. Technology aside, they were a great group of people to talk with. The ones I actually got to talk to for more than two minutes (about SDN, of course) listened and asked plenty of questions, as I did of them. I sincerely felt they wanted to solicit feedback on their solutions to further improve them. On that note, they did have some big announcements this week.
There have already been a few great write-ups on getting OpenDaylight up and running, and I referenced a few of them during my journey (see the links at the bottom). This post also covers getting the controller installed, but I wanted to share some of the issues I ran into during the install process. It wasn’t 100% clean and smooth, though since I’m no expert in Linux, they were probably user errors. I hope this helps others who go down this path and run into similar issues. I also run through some Linux basics to aid others like me who have primarily been users of Windows and the Cisco CLI.
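As a small taste of those Linux basics, here is a sketch of the kind of pre-flight check worth running before the install: the controller is a Java application, so verifying that Java is present is step one. The required Java version varies by release, so treat this as illustrative.

```python
# Illustrative pre-flight check before installing the controller: confirm a
# Java runtime exists on the host. Exact version requirements vary by release.
import shutil
import subprocess

java = shutil.which("java")
if java is None:
    print("Java not found; install a JDK before running the controller.")
else:
    # 'java -version' prints to stderr, a common gotcha for Linux newcomers.
    out = subprocess.run(["java", "-version"], capture_output=True, text=True)
    print("Found", java)
    print(out.stderr.strip())
```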
Last year at ONS, Google announced they had built their own switches, OpenFlow controller, and traffic engineering algorithms, and were using OpenFlow on their Wide Area Network links. This year, Vint Cerf, Google’s Chief Internet Evangelist, announced they are also using OpenFlow in their data centers, not just between them. So, what can’t Google do on their own, and where could they use some help from the vendors out there? That question was asked of Amin Vahdat, Distinguished Engineer at Google, in a panel discussion at this year’s Open Networking Summit.
After attending ONS last week, I will say there is some doubt about whether the OpenDaylight Project (ODP) team can execute (not just doubt about the project in general), but at the same time there is an increased amount of optimism from the SDN community. I first posted about ODP here when it launched, and I can say I’m one of the optimists at this point. Borrowing Omar Sultan’s LinkedIn headline, I’ll cautiously call myself a skeptical optimist. You always need a bit of healthy paranoia/skepticism, don’t you?
Goldman Sachs, the only enterprise that sits on the board of the Open Networking Foundation (ONF), had a key speaking slot at the 2013 Open Networking Summit in the “Software Defined Networking (SDN) for Enterprises” session. Steve Schwartz, global head of Telecommunications and Market Data Services at GS, gave the presentation. Highlights from the session include:
Bruce Davie, former Cisco Distinguished Engineer and now Principal Engineer in the Networking & Security Division of VMware via Nicira, did a pretty good job of confusing the audience this week at the Open Networking Summit (ONS) during his presentation. While most other presenters talked about network virtualization as an application of Software Defined Networking (SDN), Davie stated repeatedly that they are different and that network virtualization is possible without SDN. This is true, and unlike most vendors, he was actually trying not to SDN-wash. Shouldn’t that be a good thing?
Today marks the end of the first day at ONS 2013. Attendees had a choice between two tutorial sessions: one focused on engineering and one on market opportunities. I chose the engineering session, mainly because I’ve done a lot of research around SDN and wanted some good quality time in front of the keyboard.
The session consisted of hands-on labs and lectures.
I recently had a good exchange with Brian Gracely after a comment I made on Twitter asking where the industry is heading as more open source offerings are announced. His response to my question can be found here. Brian poses great questions to keep in mind as technologies and the related value chains continue to evolve; think everything from product acquisition and testing to production deployment and day-2 support. The value chain in IT could well shift over the next few years, so it’s definitely worth the read. I wasn’t expecting a response, so thank you, Brian. Very much appreciated.
What sort of insight should the physical network fabric offer network operators when it comes to deploying network virtualization? It is a great question, and the answer is really going to vary based on who answers it. Martin Casado and co. recently voiced their perspective here. As always, Martin’s blogs are a great read, and I encourage you to follow him at NetworkHeresy if you aren’t already, although there haven’t been many posts since the Nicira acquisition. It looks like he is making it a community-based blog going forward, so let’s hope to see more soon.
We know that virtualization, server and network alike, offers a means of abstracting the underlying physical hardware. Once the hardware is abstracted, though, how much visibility should there be into the virtual networks and virtual servers?
Have you heard of OpenFlow? Have you heard of vPath? Over the past few months, I’ve been thinking about how they relate to each other when it comes to, yup, you guessed it: Software Defined Networking (SDN).
OpenFlow is one of the most widely talked about protocols in the world of SDN. It is simply an *open* protocol that enables the separation of the control and data planes of a network device. Most commonly, it is used between a controller and a physical/virtual switch to remotely program the device’s forwarding tables.
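To make that controller-to-switch relationship concrete, here is a minimal sketch of an OpenFlow application written with the open-source Ryu controller framework (one of several options, my choice here purely for illustration). It turns any connected switch into a dumb hub by flooding every packet the switch punts to the controller.

```python
# A minimal sketch of an OpenFlow controller app using the open-source Ryu
# framework (OpenFlow 1.0). It turns a connected switch into a hub: every
# packet punted to the controller is flooded out all ports. Not a real
# design, just the control/data plane split in its simplest form.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_0

class Hub(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_0.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def packet_in(self, ev):
        msg = ev.msg
        dp = msg.datapath
        ofp, parser = dp.ofproto, dp.ofproto_parser
        # Tell the switch to flood the packet out every port.
        actions = [parser.OFPActionOutput(ofp.OFPP_FLOOD)]
        data = msg.data if msg.buffer_id == ofp.OFP_NO_BUFFER else None
        dp.send_msg(parser.OFPPacketOut(
            datapath=dp, buffer_id=msg.buffer_id,
            in_port=msg.in_port, actions=actions, data=data))
```

Run it with ryu-manager and point a switch at the controller, e.g., ovs-vsctl set-controller br0 tcp:127.0.0.1:6633 for an OVS bridge.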
vPath, on the other hand, isn’t as popular (yet?) and is rarely discussed in SDN conversations, so what is it?
With only one week until the Open Networking Summit (ONS) 2013, the announcements have started. The first is not a vendor announcement but an industry announcement: the coming-out party for the industry-wide open source project OpenDaylight. There have been rumors about OpenDaylight for a few weeks now, so it is good to finally see what it is all about.
The idea behind OpenDaylight is simple: to move the industry forward toward next-generation (software-defined) networks. That sounds a bit like the ONF, but maybe their play is still to focus on standard APIs. I’m not sure, but look out for an announcement from the ONF as well.
Cisco wants to empower the network engineer, just like Embrane does, to deploy virtual network services. But it’s not easy, given the servers, the virtualization, the virtual networking, and the flat-out fear of giving up big specialized boxes. Cisco has the Nexus 1110, which can run multiple Cisco virtual services such as the VSM, VSG, etc. However, there are limitations on quantities and on which particular services can run on the 1110. Cisco cannot create a pool of 1110s and deploy virtual resources dynamically. There is a GUI manager, but not a hypervisor-like manager. The Nexus 1110 is a physical server running a Cisco-modified hypervisor, which doesn’t seem to be off the shelf.
Post Update 6/26/2013: Think about deploying multiple virtual firewalls, load balancers, and other virtual services in a given environment. How do you know where to put a particular virtual FW (on which physical host)? How do you know if it should be moved? How do you instantly deploy another FW VM based on a certain trigger? You may be thinking of vCenter as a comparison, but what I was referring to above was a hypervisor-like manager built specifically for network resources (services/VMs). It may be similar to an existing hypervisor in reality, but this one could be dedicated to the network team, because we all know the Compute and Network teams will remain independent for the foreseeable future.
I wanted to do a post on different tools used to automate physical and virtual networks. It was going to cover BMC BladeLogic Network Automation (BBNA), Cisco Network Services Manager (NSM), and vCloud Director; OpenStack may have found its way in there too. Note: Cisco NSM is the product of the LineSider acquisition.
The post was going to compare what each product calls its network construct. For example, NSM defines network containers, while vCloud defines External, Organization, and vApp networks. Other tools refer to networks as domains and PODs. Deciphering what the next tool will call a basic Layer 2 segment will likely take even more time. Imagine trying to remember all of this!
A few weeks ago I created a presentation whose goal was to summarize “the what” and “the why” of SDN. After talking about the why (exaggerated by saying networks suck), I talked about “the what,” which I broke down into four quadrants: Programmability, Controller-Based Networking, Network Functions Virtualization (NFV), and Overlays. The bottom half, NFV + Overlays, was really meant to capture the complete view of network virtualization. One can then accomplish network virtualization by using technology from the top two quadrants, i.e., leveraging a controller (that hopefully creates abstractions) with programmatic interfaces (north- and southbound) to automate provisioning of L2-L7 network resources. Technology from each quadrant can be deployed individually or all together.
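As a sketch of how the top two quadrants combine, here is what provisioning a logical network through a controller’s northbound API might look like. The controller address, the /networks endpoint, and the payload fields are all hypothetical; real northbound APIs differ, but the pattern of POSTing desired state and letting the controller do the L2-L7 plumbing is the point.

```python
# A hypothetical sketch of "programmability + controller": describe the
# desired network state and POST it to a controller's northbound REST API.
# The URL, endpoint, and payload fields are invented for illustration.
import requests

CONTROLLER = "http://controller.example.com:8080"  # hypothetical address

payload = {
    "name": "web-tier",          # logical network to provision
    "segment_type": "vxlan",     # overlay encapsulation (Overlays quadrant)
    "services": ["firewall"],    # virtual appliance to attach (NFV quadrant)
}

# Hand the desired state to the controller and let it do the provisioning.
resp = requests.post(CONTROLLER + "/networks", json=payload, timeout=10)
resp.raise_for_status()
print("Provisioned:", resp.json())
```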