VMware’s NSX officially launched just two weeks ago. Since the launch, the media has focused on the VMware and Cisco relationship and where it may end up, and that includes me: I recently wrote a take, published by TechTarget, on the impact NSX will have on the Cisco/VMware relationship. But if we take a step back and look at the industry as a whole, it’s about more than just VMware and Cisco. If we use stereotypes (good or bad) in the networking space, Cisco falls into the traditional physical network, or incumbent, category and VMware falls into the emerging network virtualization category.
Harry Quakenboss made an interesting comment on a previous post of mine a few days ago, noting that Big Switch has been pretty quiet in terms of outbound marketing. He is absolutely right, and as I replied to him, I remember thinking a few times over the past 18 months that when startups launched or got acquired, they usually went through a quiet period in social media and outbound marketing. That makes perfect sense --- so what is going on with Big Switch?
Many focus on the lack of visibility in network virtualization environments, but the statements around visibility are usually too broad; it’s time for more concrete conversations in this area. After a quick search, I found that Riverbed’s Cascade network performance management (NPM) solution already supports VXLAN, and I’m sure what they offer will only get better. That means it can tell you which applications are being used within the overlay tunnels. A demo of the solution is below.
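For a tool to report on applications inside the tunnel, it has to look past the VXLAN encapsulation to the original frame. As a rough illustration (this is my own toy sketch, not how Cascade works internally), here is how the VNI and inner Ethernet header can be pulled out of a VXLAN payload:

```python
import struct

VXLAN_PORT = 4789  # IANA-assigned UDP port for VXLAN

def parse_vxlan(udp_payload: bytes):
    """Extract the VNI and inner Ethernet header from a VXLAN payload.

    Per RFC 7348, the VXLAN header is 8 bytes: flags (1 byte, with the
    'I' bit set when a valid VNI is present), 3 reserved bytes, the
    24-bit VNI, and 1 more reserved byte. The original Ethernet frame
    follows immediately after.
    """
    flags = udp_payload[0]
    if not flags & 0x08:  # 'I' bit must be set for a valid VNI
        raise ValueError("VNI flag not set")
    vni = int.from_bytes(udp_payload[4:7], "big")
    inner = udp_payload[8:]  # the encapsulated (inner) Ethernet frame
    dst_mac = inner[0:6].hex(":")
    src_mac = inner[6:12].hex(":")
    ethertype = struct.unpack("!H", inner[12:14])[0]
    return vni, src_mac, dst_mac, ethertype
```

From there, a monitoring tool can classify the inner traffic per tenant (VNI) just as it would classify native traffic.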
Picking a programming language is something I’ve wanted to tackle for a while, but choosing one is damn hard, especially as a network guy who hasn’t programmed in a while. I had two years of Advanced Placement (AP) programming in high school, but that was it, and it was a long time ago. With network agility, automation, APIs, and SDN on the horizon, how should you pick a language and what do you even want to program? It will be different for everyone, but I’ll let you know the path I’m taking and how I ended up where I am today.
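To make the question concrete, here is the kind of small, hypothetical first task I have in mind (the data and output format below are made up for illustration): rendering device configuration from structured data instead of hand-typing it.

```python
# Hypothetical example: generate Cisco-style VLAN configuration
# from a simple list of dictionaries rather than editing by hand.

vlans = [
    {"id": 10, "name": "web"},
    {"id": 20, "name": "app"},
    {"id": 30, "name": "db"},
]

def render_vlan_config(vlans):
    """Return VLAN configuration lines, one stanza per entry."""
    lines = []
    for vlan in vlans:
        lines.append(f"vlan {vlan['id']}")
        lines.append(f" name {vlan['name']}")
    return "\n".join(lines)

print(render_vlan_config(vlans))
```

Even a toy like this shows why scripting appeals to network folks: change the data, rerun the script, and the config is consistent every time.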
In Plexxi & Affinities Part 1, I gave a very high level overview of Affinities and how algorithms are used to ultimately figure out which network path certain traffic should use in a Plexxi network. In this post, I want to explore and speculate where else in the network Affinities make sense. Don’t forget, this is fully speculative and just my opinion.
The first Tech Field Day (@TechFieldDay) event I ever participated in was Wireless Field Day 2 in January 2012. I happened to be on Twitter and saw some people I followed talking about it, so I clicked a link, and there I was, watching a LIVE feed on Meraki while I sat on my couch having dinner. I even asked a question on Twitter directed at the speaker. If my memory serves me correctly, Tom Hollingsworth (@NetworkingNerd) was nice enough to relay the message from Twitter to the Meraki presenter as I waited and listened for an answer. My voice was heard from across the United States without my even being in the room. This was pretty sweet.
It Starts with Affinities
Plexxi is a startup in the network industry re-thinking networking from the ground up. A Plexxi network is different. It is cabled differently. It is thought about differently. It is integrated with other systems differently. With Plexxi, it all starts with the conversations that are occurring on the network. These conversations, or relationships, occur between different systems on the network, and these relationships are what Plexxi calls Affinities.
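Plexxi’s actual fitting algorithms are proprietary, but the basic idea of mapping an affinity onto a network path can be sketched with a toy weighted graph (the switch names and link costs below are entirely made up):

```python
import heapq

# Toy topology: switches as nodes, links weighted by an illustrative
# "cost" (think hops or latency). Names and costs are invented.
topology = {
    "sw1": {"sw2": 1, "sw3": 4},
    "sw2": {"sw1": 1, "sw3": 1, "sw4": 5},
    "sw3": {"sw1": 4, "sw2": 1, "sw4": 1},
    "sw4": {"sw2": 5, "sw3": 1},
}

def best_path(graph, src, dst):
    """Dijkstra's shortest path: a stand-in for whatever algorithm
    actually maps an affinity onto a path in a real Plexxi network."""
    pq = [(0, src, [src])]
    seen = set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbor, weight in graph[node].items():
            if neighbor not in seen:
                heapq.heappush(pq, (cost + weight, neighbor, path + [neighbor]))
    return None

# An affinity between a workload attached to sw1 and storage on sw4:
print(best_path(topology, "sw1", "sw4"))  # (3, ['sw1', 'sw2', 'sw3', 'sw4'])
```

The point is simply that once relationships between endpoints are known, path selection becomes an optimization problem over the topology rather than a static cabling decision.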
I’ve written in the past about how the virtual switch is an SDN war zone. It still seems to be the early days for Software Defined Networking (SDN) no matter how much time goes by, and I’ve realized there isn’t a whole lot of documentation out there for Open vSwitch, especially for the new guy or gal on the block, compared to the vendor offerings from Cisco and VMware. Over the coming weeks, I hope to write more about Open vSwitch, Linux networking, and OpenStack Networking (Neutron, formerly Quantum). On that note, this post is meant to be an easy-to-read, longer-than-expected introduction to Open vSwitch (OVS).
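At its heart, OVS forwards traffic using a programmable match-action flow table. The real datapath matches on far more fields and adds priorities, wildcards, and flow caching, but the core idea can be shown with a deliberately tiny model (everything below is illustrative, not actual OVS code):

```python
# A toy match-action table illustrating the idea behind an OVS flow table.
# Real OVS supports dozens of match fields, priorities, and megaflow
# caching; this model keeps only the first-match-wins essence.

flow_table = [
    # (match dict, action) -- evaluated in order, first match wins
    ({"dl_dst": "ff:ff:ff:ff:ff:ff"}, "flood"),
    ({"in_port": 1, "dl_dst": "00:00:00:00:00:02"}, "output:2"),
    ({"in_port": 2, "dl_dst": "00:00:00:00:00:01"}, "output:1"),
]

def lookup(packet):
    """Return the action of the first flow whose fields all match."""
    for match, action in flow_table:
        if all(packet.get(field) == value for field, value in match.items()):
            return action
    return "drop"  # table miss (real OVS can punt a miss to a controller)

print(lookup({"in_port": 1, "dl_dst": "00:00:00:00:00:02"}))  # output:2
```

Whether flows are programmed by an OpenFlow controller or learned locally, the forwarding decision always comes down to a lookup like this.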
What is going on with Big Switch? It is really hard to tell from the surface, but there is a general sentiment out there (particularly in Twitter land) that they have a bumpy road ahead of them, especially after the OpenDaylight Project (ODP) launched. Okay, but what startup has it easy? The only ones who know how Big Switch is really doing are the users of their technology and Big Switch themselves.
Two weeks ago I had the pleasure of taking a two-day OpenStack training by Mirantis in New York City. It was well worth it because, up until this point, I had never been hands-on with OpenStack and, more relevant for me, had never had a deep dive into the underlying architecture. Plus, it’s always good to get time out of the office and do a deep dive into a new technology.
While many are still figuring out private cloud and public cloud along with the networking impact of each, Cisco went into a bit more detail today on its Hybrid Cloud Networking strategy leveraging the Cisco Nexus 1000V InterCloud product that was initially announced a few months ago. I had the opportunity to attend a session on this just a few hours ago and here are some of the highlights along with some general thoughts.
After a few days immersed in Cisco land down in Orlando, what’s trending?
While there are definitely many trends, many sessions, and many perspectives, I can only speak to what I am seeing. Here are three (of many) things that I’ve seen a good amount of focus on in the breakouts I’ve attended. I would also say all three are highly strategic for Cisco in the Data Center and Cloud markets.
I wrote this a few days ago but didn’t have time to post it. It’s still relevant, though, given all the discussion around SDN and Network Virtualization.
Getting into a long thread on Twitter is entertaining. You have to keep your thoughts short and concise, and sometimes it’s hard to use every descriptive phrase known to man to articulate what you mean. But that also makes it fun! One example is the thread from last Saturday that I jumped into a little late.
In my previous post, I closed with asking, “if you require certain hardware configurations and ASICs for your virtual network solutions, have you truly deployed network virtualization?” I didn’t touch upon where hardware does and does not make sense though. I will expand on that here.
Brad Hedlund recently wrote an overview of Network Virtualization. I’d recommend it to anyone exploring network virtualization technologies over the coming months. In particular, I want to focus on the comments in the blog coming from both Brad and David Klebanov. The comments sparked a flurry of thoughts that I’ll attempt to get out in this post.
As a reminder, this is pure opinion and speculation on my part, just as it is on theirs. I’ll say mine is a bit more neutral, though, since I don’t work for a manufacturer :).
Who will be the first to promote it? Will it come via hardware or simply as an application of network virtualization? Because it will happen.
While some wholeheartedly believe in not connecting sites with ANY type of Layer 2 --- and I am a bigger believer in that now than I used to be --- customers still occasionally ask for and “require” it, namely for workload mobility. Nothing I hear or read actively promotes using an overlay such as VXLAN between data centers. The objections usually center on: 1) BUM traffic control, 2) ARP localization, 3) the traffic trombone (since there is only one active default gateway), and 4) STP isolation. If you want to know all of the typical responses, look at the benefits of OTV. But again, in a world that will soon be eaten by software, why can’t a viable solution be developed for L2 DCI with overlays?
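Take ARP localization as an example: the goal is to answer ARP requests at the local edge from a learned IP-to-MAC table instead of flooding them across the interconnect. Here is a toy sketch of the idea (hypothetical code; real implementations, such as OTV’s ARP caching, live in the control plane of the edge devices):

```python
# Toy ARP suppression: answer ARP requests locally from a learned
# table so they never need to be flooded across the DCI link.
# Entirely illustrative -- addresses and behavior are made up.

class ArpSuppressor:
    def __init__(self):
        self.table = {}  # IP -> MAC, learned from traffic or a controller

    def learn(self, ip, mac):
        """Record an IP/MAC binding seen at the local site edge."""
        self.table[ip] = mac

    def handle_arp_request(self, target_ip):
        """Answer locally when the binding is known; otherwise the
        request would fall back to flooding (what we want to avoid)."""
        mac = self.table.get(target_ip)
        if mac:
            return ("reply", mac)   # suppressed: answered at the edge
        return ("flood", None)      # unknown: flood across the DCI

arp = ArpSuppressor()
arp.learn("10.1.1.20", "00:00:00:00:00:14")
print(arp.handle_arp_request("10.1.1.20"))  # ('reply', '00:00:00:00:00:14')
```

If overlay solutions implemented this kind of localization (plus sane BUM handling and gateway placement), several of the standard objections to L2 DCI would start to fall away.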
Reflecting back on my first Interop as I wait to board a sweet red-eye home, only to head straight into the city for a full-day SDN session with Cisco --- that’s livin’ the dream, I say.
It was a short trip, but action-packed with keynote sessions, breakout sessions, and private sessions set up for some of us bloggers. I also somehow ended up in two Tech Field Day sessions as well. A big thanks to Ivy Worldwide and HP for bringing us out here. It was definitely interesting being at Interop as a blogger because we (about six of us) had great access to HP product management, technical marketing, and executive team members. The group I was in also had the opportunity to sit down for a Q&A with Bethany Mayer, SVP & GM of Networking at HP. Technology aside, they were a great group of people to talk with. The ones I actually got to talk to for more than two minutes (of course, about SDN) listened and asked plenty of questions, as I did back to them. I sincerely felt they wanted to solicit feedback on their solutions to further improve them. On that note, they did have some big announcements this week.
There have already been a few great write-ups on how to get OpenDaylight up and running, and I referenced a few of them during my journey --- see the links at the bottom. This post also covers getting the controller installed, but I wanted to share some of the issues I ran into during the install process. It wasn’t 100% clean and smooth, though since I’m no expert in Linux, they were probably user errors. I hope this helps others who go down this path and run into similar issues. I also run through some Linux basics to aid others like myself who have been primarily users of Windows and the Cisco CLI.
Last year at ONS, Google announced they had built their own switches, OpenFlow controller, and traffic engineering algorithms, and were using OpenFlow on their Wide Area Network links. This year, Vint Cerf, Google’s Chief Internet Evangelist, announced they are also using OpenFlow in their data centers, not just between them. So, what can’t Google do on their own, and where could they use some help from the vendors out there? This was a question put to Amin Vahdat, Distinguished Engineer at Google, during a panel discussion at this year’s Open Networking Summit.
After attending ONS last week, I will say there is some doubt about whether the OpenDaylight Project (ODP) team can execute (not just doubt about the project in general), but at the same time there is increased optimism from the SDN community. I first posted about ODP here when it launched, and I can say I’m one of the optimists at this point. Borrowing Omar Sultan’s LinkedIn headline, I’ll cautiously call myself a skeptical optimist. You always need a bit of healthy paranoia/skepticism, don’t you?
Goldman Sachs, the only Enterprise that sits on the Board of the Open Networking Foundation (ONF), had a key speaking slot at the 2013 Open Networking Summit in the “Software Defined Networking (SDN) for Enterprises” session. Steve Schwartz, global head of Telecommunications and Market Data Services at GS, gave the presentation. Highlights from this session include:
Bruce Davie, former Cisco Distinguished Engineer and now Principal Engineer in the Networking & Security Division of VMware via Nicira, did a pretty good job at confusing the audience this week at the Open Networking Summit (ONS) during his presentation. While most other presenters talked about Network Virtualization as an application of Software Defined Networking (SDN), Davie wanted to state repeatedly they are different and that network virtualization is possible without SDN. This is true, and unlike most vendors, he was actually trying not to SDN-wash. Shouldn’t that be a good thing?
Today marks the end of the first day at ONS 2013. You had a choice to attend one of two tutorial sessions: one for engineers and one for market opportunities. I chose to attend the engineering session mainly because I’ve done a lot of research around SDN and wanted some good quality time in front of the keyboard.
The session was a mix of hands-on labs and lectures.
I recently had a good exchange with Brian Gracely after a comment I made on Twitter asking where the industry is heading with more open source offerings being announced. His response to my question can be found here. Brian poses great questions to keep in mind as technologies and the related value chains continue to evolve: think everything from product acquisition and testing to production deployment and day-2 support. The value chain in IT could well shift over the next few years, so it’s definitely worth the read. I wasn’t expecting a full response, so thank you, Brian; it’s very much appreciated. I’d encourage all to have a read.
What sort of insight should the physical network fabric offer network operators when it comes to deploying network virtualization? It’s a great question, and the answer will really vary based on who answers it. Martin Casado and co. recently voiced their perspective here. As always, Martin’s blogs are a great read, and I encourage you to follow him at NetworkHeresy if you aren’t already, although there haven’t been many posts since the Nicira acquisition. It looks like he is making it a community-based blog going forward, so let’s hope to see more soon.
We know that virtualization, both server and network, offers a means of abstracting the underlying physical hardware. Once the hardware is abstracted, though, how much visibility should there be into the virtual networks or virtual servers?