Contoso Labs Series - Table of Contents
With our JBODs identified, we needed to decide what to connect them to. We seriously considered using some of our compute nodes as file server heads, but quickly ran into insurmountable problems.
Precious I/O
We designed our Scale-Out File Server clusters to serve 72 compute nodes. While we believe those compute nodes can get away with just 4Gbit of connectivity each, the file servers obviously need to serve up far more than that to keep 72 concurrent nodes happy. That meant multiple 10GbE ports were required, as well as enough SAS ports to keep data moving to and from all of the JBODs.
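For a rough sense of that arithmetic, here's a minimal back-of-the-envelope sketch. The 72 nodes and 4Gbit-per-node figures come from our design above; the oversubscription ratio is just an assumption for illustration, not a measured number.

```python
import math

# Figures from the design: 72 compute nodes at ~4 Gbit/s of connectivity each.
compute_nodes = 72
per_node_gbps = 4
aggregate_demand_gbps = compute_nodes * per_node_gbps   # 288 Gbit/s worst case

# Real workloads never drive every node at line rate at once, so assume some
# oversubscription at the file server tier. The 8:1 ratio is a hypothetical value.
assumed_oversubscription = 8
target_gbps = aggregate_demand_gbps / assumed_oversubscription   # 36 Gbit/s

ports_10gbe = math.ceil(target_gbps / 10)   # 10GbE ports needed to hit the target

print(f"Worst-case aggregate demand: {aggregate_demand_gbps} Gbit/s")
print(f"Target at {assumed_oversubscription}:1 oversubscription: {target_gbps:.0f} Gbit/s")
print(f"10GbE ports needed: {ports_10gbe}")
```

Even with generous oversubscription, the math lands well beyond what a couple of onboard 1GbE ports could ever provide, which is exactly the problem the next section runs into.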
Old Servers Can't Keep Up
The compute nodes presented a few problems, none of which we could easily address.
- Their onboard LAN consists of two 1GbE ports.
- They only have two PCIe slots.
- One slot is semi-permanently occupied by the RAID controller for the onboard storage.
Put all that together, and we were left with a single PCIe slot to provide both 10GbE and external SAS connectivity. That meant we had to bite the bullet and get new storage head nodes.
Modern Servers to the Rescue
The solution was simply to buy some reasonably priced modern servers to act as the file server heads. We ended up going with the updated version of our compute nodes: HP DL360p Gen8s. Going with the 'p' (Performance) series let us spec systems that came with two 10GbE ports onboard and left both PCIe slots free. That meant we could choose to deploy either additional 10GbE ports or dual SAS controllers. These systems also gave us newer 8-core processors and a relatively cheap 128GB of RAM, which is useful for features like CSV Cache and Data Deduplication that we want to take advantage of on our file servers in some scenarios.
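To give a sense of why that much RAM is attractive, here's a minimal sketch of how a 128GB head node's memory might be budgeted. The split below is purely a hypothetical illustration, not our actual configuration or any official guidance.

```python
# Hypothetical memory budget for a 128 GB file server head.
# Every figure here is an illustrative assumption.

total_ram_gb = 128

budget_gb = {
    "OS and cluster overhead": 16,   # assumed reserve for Windows and failover clustering
    "CSV block cache": 64,           # assumed read cache for hot data served to compute nodes
    "Data dedup jobs": 32,           # assumed working set for deduplication optimization jobs
}
budget_gb["Headroom"] = total_ram_gb - sum(budget_gb.values())

for purpose, gb in budget_gb.items():
    print(f"{purpose:26s} {gb:3d} GB")
```

The point isn't the exact numbers; it's that with 128GB on hand, we can carve out large caches and still leave comfortable headroom for the OS and cluster services.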
In the next post, we'll cover the last part of piecing together our storage solution: the SAS configuration.