VMware Storage Part 5: Storage Networking Overview

Storage networking is an interesting discussion. The choice usually depends heavily on your storage vendor or your personal preferences, but it is also an often misunderstood topic. When I was new to storage, we used to joke about iSCSI; real men used Fibre Channel. File storage like NFS was someone else’s problem; I was only interested in block Fibre Channel storage. In a VMware environment, though, it becomes increasingly clear that we need to examine all types of storage networking to make sure we have the right fit. I touched on this topic in previous posts on SAN and NAS, but I feel it is worth discussing in greater detail.

To start, VMware fully supports NFS, the Network File System, and it is a perfectly logical way to handle storage in this environment. It is simple, it works much like a traditional file system, it is easy to grow, and it is generally thin provisioned by default. There is no VMware File System (VMFS) to manage; the NFS server provides the file system. In cases where datastore sizes need to change often, and where redundancy is less of a concern, this can be a good fit. The main draw at this point is simplicity of provisioning and management. NFS can run over 1GbE or 10GbE, and that choice really has to be decided based on network bandwidth needs. It can be done over a converged network, but it should always be on a separate subnet from the rest of the network traffic.
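
As an illustration of how simple provisioning is, here is a minimal sketch of mounting an NFS export as a datastore from the ESXi command line. The server address, export path, and datastore name below are made-up placeholders; substitute your own.

    # Mount an NFS (v3) export as a datastore on this host
    esxcli storage nfs add --host=192.168.50.10 --share=/exports/vmware-ds01 --volume-name=nfs-ds01

    # Confirm the datastore is mounted
    esxcli storage nfs list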

The major downside of NFS is that it is sometimes treated by VMware as a second-class citizen. Most of the time, when VMware releases new storage features, they arrive on block storage first. NFS is usually not far behind, but it does take time. Another downside is the lack of multipathing. There are ways to do multipathing with NFS in a VMware environment, but it is more complex and not always standard; a rough sketch follows below. Finally, NFS is heavily reliant on the NFS server. While there are some good systems out there, there is a reason the majority of the largest deployments have opted for block storage with VMware: it is more trusted, and it leaves more options open when it comes to storage vendors.
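
As one example of what NFS multipathing can look like, vSphere 6.0 and later support NFS 4.1 with session trunking, which lets a datastore be mounted against more than one server address. This is only a sketch and assumes your array actually supports NFS 4.1; the addresses, export path, and datastore name are placeholders.

    # Mount an NFS 4.1 datastore against two server addresses for path redundancy
    esxcli storage nfs41 add --hosts=192.168.50.10,192.168.51.10 --share=/exports/vmware-ds01 --volume-name=nfs41-ds01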

iSCSI is another IP-based protocol, supported at both 1GbE and 10GbE. It is a great protocol for small to mid-sized environments. It is simple to configure and runs on the traditional IP network, using traditional IP switches with a few minor modifications. iSCSI is attractive because it generally keeps costs down. It can also run over a converged network, but it should be isolated by VLANs at a minimum, and depending on your storage vendor it may take advantage of multiple paths for redundancy or performance. Modern storage arrays are moving away from 1GbE in favor of 10GbE for iSCSI, which is something to consider if you are looking at this as an option.
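
To give a feel for the configuration involved, here is a hedged sketch of bringing up the software iSCSI initiator on an ESXi host. The adapter name (vmhba33), VMkernel interface (vmk2), and target address are placeholders, and port binding in particular depends on how your vSwitches and your storage vendor expect the paths to be laid out.

    # Enable the software iSCSI adapter
    esxcli iscsi software set --enabled=true

    # Bind a VMkernel port to the software iSCSI adapter for multipathing
    esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2

    # Point the adapter at the array's discovery (send target) address
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.60.20:3260

    # Rescan so newly presented LUNs show up
    esxcli storage core adapter rescan --adapter=vmhba33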

The major downside of iSCSI is that it is easy to deploy improperly, and it is seldom designed correctly. Being an IP-based protocol, it is not purpose built for storage, so there is inherent latency. Jumbo frames and flow control, though widely debated, can often have a tremendous impact on storage in a VMware environment. With 1GbE networking in larger environments, or 10GbE in a converged network, speed and complexity can become factors. In a future post I will discuss storage network design in more detail, but this should give a high-level idea of some of the challenges.
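
If you do decide to run jumbo frames, every hop has to agree on the MTU: the VMkernel port, the virtual switch, the physical switch ports, and the array interfaces. Here is a quick sketch of the host side, with placeholder names, plus a test that proves the path end to end.

    # Set a 9000-byte MTU on the vSwitch and on the storage VMkernel interface
    esxcli network vswitch standard set --vswitch-name=vSwitch1 --mtu=9000
    esxcli network ip interface set --interface-name=vmk2 --mtu=9000

    # Verify jumbo frames end to end: 8972-byte payload plus headers, with the don't-fragment bit set
    vmkping -d -s 8972 192.168.60.20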

Fibre Channel networking is probably the most common protocol, in the enterprise at least. Its major advantage is that it is dedicated: a Fibre Channel network is designed to do nothing but carry SCSI commands between the storage and the host servers. This becomes particularly important in a virtualized environment (again, more on this in a future post on storage network design), because the latency is very low and the switches are not handling any other traffic. In a virtualized environment we also want to consider the number of servers per host. If a traditional physical server has two Fibre Channel HBA ports to itself, and we then consolidate 20 servers onto a host with the same two ports, twenty servers are now sharing the bandwidth that previously served one. In that context, the low latency and lack of resource contention on the storage network matter much more.
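
For completeness, here is a quick sketch of checking what the host sees on the Fibre Channel side. Nothing here is vendor specific; these commands simply list the FC adapters with their WWNs and show each device's paths and multipathing policy.

    # List the Fibre Channel adapters on the host (WWNs, port state, speed)
    esxcli storage san fc list

    # Show devices with their paths and the active multipathing policy
    esxcli storage nmp device list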

The downside of Fibre Channel is cost and complexity. It requires dedicated, specialized switches that cannot be used for anything other than storage traffic. They are costly to deploy and manage compared to a typical IP network, and they are complex. They are often outside the traditional network team’s expertise, so management is usually left to the storage team.

Finally, there are of course Fibre Channel over Ethernet (FCoE) and Serial Attached SCSI (SAS), but these are less common in most environments and have not yet gained wide adoption.

At the end of the day, these all get us to the same place, and they all have their merits. There are pros and cons to each, and I will talk more about actual network design soon. It is good to know about each protocol and where it may or may not be a good fit in a particular environment.
