Now that Cisco has been chosen as the vendor for our network, we need to identify the layers of our network fabric, and the devices to be used in those roles.
We'll have a three-tier network topology when done. Some devices will act as top-of-rack leaf switches. More capable devices will serve as combined core and edge routers. Finally, we'll need an aggregation tier to connect the two and to isolate storage traffic from the core. We'll detail the network configuration and the traffic-shaping considerations that went into it in a later post. For now, let's just identify the devices we're using and why they suit our needs.
Leaf Switches
For our top-of-rack leaf switches, we'll be using the Cisco Nexus 3048. This device has good port density (48x 1GbE RJ-45 ports and 4x 10GbE SFP+ ports) and supports all of the functions that matter to us, like Open Management Infrastructure (OMI) and port channels. Given our node count, we'll need 32 of them in total, so their relative affordability is another asset. A simple, solid choice for our purposes.
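
As a quick sanity check on that choice, here's a back-of-the-envelope sketch in Python of the oversubscription a fully loaded 3048 would see. The port counts come from the specs above; the worst-case assumption that every port is saturated at once is ours.

```python
# Back-of-the-envelope oversubscription check for a single Nexus 3048
# leaf switch, using the port counts quoted above. Assumes (worst case)
# that all 48 access ports and all 4 uplinks are saturated at once.

ACCESS_PORTS = 48        # 1GbE RJ-45 ports facing the rack's servers
ACCESS_SPEED_GBPS = 1
UPLINK_PORTS = 4         # 10GbE SFP+ ports facing the spine layer
UPLINK_SPEED_GBPS = 10

downstream = ACCESS_PORTS * ACCESS_SPEED_GBPS    # 48 Gbps
upstream = UPLINK_PORTS * UPLINK_SPEED_GBPS      # 40 Gbps

print(f"Downstream capacity: {downstream} Gbps")
print(f"Upstream capacity:   {upstream} Gbps")
print(f"Oversubscription:    {downstream / upstream:.1f}:1")   # 1.2:1
```

At 1.2:1, the rack-level oversubscription is mild, which helps explain why a modest 1GbE access switch is enough here.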
Spine Switches
The spine aggregation switches will be Cisco Nexus 3064-X devices; we'll need eight of them. These are much more capable 10GbE switches: each has 48 SFP+ 10GbE ports as well as four QSFP+ 40GbE ports. Extremely low latency and line-rate switching, combined with Layer 3 routing, let us build a very high-speed spine/aggregation layer for our racks. That matters in our overall architecture because we're using a converged fabric design, where our Ethernet fabric has to carry all of our combined I/O. Keeping storage traffic off the core is what keeps performance acceptable for everyone, and we need a high-speed intermediate layer to pull that off.
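
To make the fan-out concrete, here's a rough sketch of the leaf-to-spine wiring math. The assumption that every leaf's four 10GbE uplinks terminate on the spine layer is ours for illustration, not a statement of the actual cabling plan.

```python
# Rough leaf/spine fan-out math for the fabric described above.
# The even spread of leaf uplinks across spines is an illustrative
# assumption, not the documented cabling plan.

LEAVES = 32
UPLINKS_PER_LEAF = 4       # 10GbE SFP+ uplinks per Nexus 3048
SPINES = 8
SPINE_10G_PORTS = 48       # SFP+ ports per Nexus 3064-X

total_uplinks = LEAVES * UPLINKS_PER_LEAF        # 128 leaf uplinks
links_per_spine = total_uplinks // SPINES        # 16 leaf-facing ports
spare_ports = SPINE_10G_PORTS - links_per_spine  # 32 ports left over

print(f"Total leaf uplinks:      {total_uplinks}")
print(f"Leaf-facing ports/spine: {links_per_spine}")
print(f"Spare 10GbE ports/spine: {spare_ports} (plus 4x 40GbE QSFP+)")
```

Under those assumptions, each spine still has two-thirds of its 10GbE ports free, leaving plenty of room for storage and edge connectivity.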
Edge Routers
The edge of our network will be served by two Cisco Nexus 6001 devices. From the outside these look almost identical to the 3064-X, but the network capabilities of the 6001 are much greater: larger lookup and routing tables and more sophisticated controls make it better suited to sit at the center of a network hosting 300+ physical nodes and thousands of virtual machines operating on NVGRE virtual networks.
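
As a refresher on what those virtual networks involve, here's a minimal sketch of the NVGRE encapsulation itself (Python; header layout per RFC 7637; the VSID value is made up for the example). NVGRE reuses GRE, carrying a 24-bit Virtual Subnet ID in the GRE key so that thousands of tenant networks can share one physical fabric:

```python
import struct

# Minimal NVGRE header builder (layout per RFC 7637) -- a sketch for
# illustration only. NVGRE reuses GRE: the Key Present bit is set, the
# protocol type is Transparent Ethernet Bridging, and the 32-bit key
# carries a 24-bit Virtual Subnet ID (VSID) plus an 8-bit FlowID.

GRE_FLAGS_KEY_PRESENT = 0x2000   # K bit set, GRE version 0
PROTO_TEB = 0x6558               # Transparent Ethernet Bridging

def nvgre_header(vsid: int, flow_id: int = 0) -> bytes:
    """Pack the 8-byte GRE header NVGRE prepends to a tenant Ethernet
    frame (the outer Ethernet/IP headers are omitted for brevity)."""
    if not 0 <= vsid < 2 ** 24:
        raise ValueError("VSID must fit in 24 bits")
    key = (vsid << 8) | (flow_id & 0xFF)
    return struct.pack("!HHI", GRE_FLAGS_KEY_PRESENT, PROTO_TEB, key)

# Example with an arbitrary, made-up tenant subnet ID.
print(nvgre_header(vsid=5001).hex())   # -> 2000655800138900
```

The physical fabric only ever sees the outer headers; tenant addressing stays inside the tunnel. That's what lets thousands of VM networks share one routing domain, and it's the scale at which the 6001's deeper tables earn their keep.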
That covers the net-new equipment purchases for this project. Combined with our existing assets, we have everything we need to design and deploy our private cloud. Starting on Wednesday, we'll describe how we integrated these components and what our deployment is going to look like.