There is a trend toward high-density 10GbE connectivity in the data center, which is increasing the need for 40GbE interfaces for uplink connectivity. Because 40GbE requires a new type of optic, called a QSFP+, I've had many questions from customers (and myself) about the connectivity and cabling options. Oddly enough, it took talking to at least five Cisco Engineers spanning San Jose to NYC to compile this data, so if you'd like to correct or add anything here, please feel free to comment below.
The specific questions and research I was doing were related to the Cisco Nexus 3000 series switches, namely the 3064 and 3016. I state that because there is a chance the QSFP+ could operate differently in different switch types, and that's per the optics TME (Technical Marketing Engineer). For those new to the Nexus 3000 series, the 3064 has 48 front-facing SFP+ ports (1G/10G) and 4 x 40GbE QSFP+ ports that can be used as uplinks. The 3016 has 16 x 40GbE QSFP+ ports.
While it is common to state that each QSFP+ supports native 40GbE or 4 individual 10GbE interfaces, that's not entirely true if you want to mix and match ports. For example, on a 3016, configuring 14 interfaces for native 40GbE and the other 2 physical interfaces as 8 individual 10GbE interfaces is not supported (per the documentation that I have). On the Nexus 3016, there are only three valid modes of operation: (1) all ports operate at 10GbE, (2) all ports operate at 40GbE, or (3) half the ports (8) operate at 40GbE and the other half break out into 32 x 10GbE. This is one point to verify when you have "flexibility" with hardware and optics. Taken from a late 2011 Cisco slide deck: "At FCS the chassis can be configured in the following modes (with a reload): 16x40 GE ports, 32x10 GE ports and 8x40GE ports, 64x10GE ports." However, I actually haven't been able to find that on Cisco.com, so if anyone knows whether it's changed, please let me know. Since Cisco stated "at FCS," I imagine and hope this caveat will go away in the near future.
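As that slide notes, the mode change is chassis-wide and requires a reload. Here's a rough sketch of what it looks like, assuming the hardware profile portmode command from the Nexus 3000 documentation; the exact portmode keyword (I'm guessing 8x40g+32x10g for the mixed mode) is an assumption you should verify against your NX-OS release:

! Assumed keyword syntax; verify against your NX-OS release and model
switch# configure terminal
switch(config)# hardware profile portmode 8x40g+32x10g
switch(config)# end
! The new port mode only takes effect after saving and reloading
switch# copy running-config startup-config
switch# reload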
If you simply want to cable up two Nexus 3000s with 40GbE, the options are multi-mode fiber or Twinax copper. I only cover fiber throughout this post, as that is where most of the questions are at this time. If you're looking into Twinax, the information out there is pretty solid, but if you still have questions, feel free to contact me.
So, now that you're using fiber, how do you connect these switches into the network? You first need to insert the QSFP+ optic, much as you would insert an optic for standard 1G or 10G connectivity. For the Nexus 3000, only multi-mode fiber is available, so the Cisco part number needed is QSFP-40G-SR4. This is the equivalent of the GLC-SX-MM or SFP-10G-SR, for 1G and 10G, respectively. The connector type is no longer LC, but is now an MPO (multi-fiber push-on) connector.
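Once the optic is seated, it's worth sanity-checking that the switch recognizes it. A quick example, assuming the first QSFP+ port on a 3064 enumerates as Ethernet1/49 (port numbering may differ on your chassis):

! Assumes the four QSFP+ ports on a 3064 show up as Ethernet1/49-52
switch# show interface ethernet1/49 transceiver

If the optic is seated and supported, the transceiver type in the output should read QSFP-40G-SR4.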
It is interesting to note that these cables actually have 12 fiber strands internal to them (only 8 of which are lit: 4 transmit and 4 receive) to achieve 40GbE. Distance limitations are 100m using OM3 and 150m using OM4 fiber. Because these cables use MPO connectors, have 12 strands, and are ribbon cables for native 40GbE, they will not be able to leverage any of your existing fiber optic cable plant. Be prepared to home-run these cables where needed throughout the data center.
However, you may not always need native 40GbE between two switches. Instead, you may opt to configure multiple 10GbE interfaces. In this case, the QSFP-40G-SR4 is still needed, but the cable selection is different from the native 40GbE cable described above, and the ability to use the current cable plant becomes a possibility. The cable required here has an MPO connector on one end that connects into the QSFP+ port and then "breaks out" into 4 individual fiber links on the other end. These breakout cables terminate with male LC connectors.
The male LC termination is great because it allows customers to leverage their current cable infrastructure, assuming existing patch panels have LC interfaces and OM3/OM4 fiber is in use throughout the data center. These breakout cables are also nice if you want to attach a northbound switch that only supports 10GbE interfaces. You can easily direct-connect, or jump through a panel in the data center, to connect the Nexus 3000 via multiple 10GbE interfaces to a Nexus 7000 (or any other switch with 10GbE-only interfaces), for example.
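To illustrate, here's a minimal sketch of bringing up the four 10GbE members on one breakout port. This assumes the switch is already in a 10GbE port mode and that the breakout interfaces enumerate as Ethernet1/49/1 through Ethernet1/49/4; both the port number and the naming convention are assumptions to verify on your platform:

! Hypothetical port numbering; adjust to your chassis and port mode
switch# configure terminal
switch(config)# interface ethernet1/49/1-4
switch(config-if-range)# description 4x10GbE breakout to Nexus 7000
switch(config-if-range)# switchport
switch(config-if-range)# switchport mode trunk
switch(config-if-range)# no shutdown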
Here is the caveat in my opinion.
What if you want to interconnect two Nexus 3000s via a 20GbE port-channel using MMF through the QSFP+ interfaces? You would end up with each end having male LC connectors. Last time I checked, a male needs a female to make a connection. Remember, this isn't a problem if these switches are on different sides of the data center and you are going through a panel. It is only an issue if you're in a small data center or colo and need to home-run these connections. I suppose you could use fiber couplers that have two female sides to interconnect the breakout cables, but that just seems messy and has more moving parts than desired. But hey, there really doesn't seem to be any other way around this. If I've missed something, feel free to comment below.
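For completeness, the port-channel side of that design is ordinary NX-OS configuration regardless of how the physical cabling gets solved. A minimal sketch, assuming two of the breakout 10GbE links (hypothetical interfaces Ethernet1/49/1-2) are bundled into a 20GbE LACP port channel on each switch:

! feature lacp is required before using channel-group mode active
switch(config)# feature lacp
switch(config)# interface ethernet1/49/1-2
switch(config-if-range)# channel-group 20 mode active
switch(config-if-range)# no shutdown
switch(config-if-range)# interface port-channel 20
switch(config-if)# switchport
switch(config-if)# switchport mode trunk

The same configuration would be applied on the peer switch, with the interface names adjusted to its own breakout numbering.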
For a design I was working on recently that resembled what I described above, it wasn't worth dealing with the 'mess' of cabling, couplers, and QSFP+ optics on a 3016. Since 40GbE wasn't technically required, we chose to use 3064s in the core with all native 10GbE rather than 3016s with QSFP+ optics. Just some food for thought.