iSCSI vs CIFS
ITS has begun to explore several methods that bring data center speeds closer to the edge. Our goal is to give the advantages of “big data” technologies to media end users who could really use the extra horsepower. After some careful analysis of our lab usage we determined that we just might squeak by with gigabit to the desktop and 10 gig to the storage. I decided to experiment with iSCSI. Its benefits are widely known to storage pros, but it’s a technology that hasn’t made its way to the desktop yet, save for a few power users.
What is iSCSI?
There are a lot of very high tech ways to describe this, but simply put, iSCSI lets you treat a NAS array as a local drive. The connection is “block level” as opposed to “file level,” so it is faster, more efficient, and less taxing on your machine. It does all of this over standard CAT 6 cables and everyday gigabit switches.
To benchmark iSCSI we built a FreeNAS server to act as our storage array. FreeNAS, if you’ve never used it, is an amazing FreeBSD distro that essentially turns any system into a powerful storage appliance. We set up the iSCSI service in FreeNAS to target an internal SSD rated at 4800Mb/s, well above the threshold of the gigabit sandbox network we built to deliver data to our MacBook Pro. We used a commodity Netgear GS108E gigabit network switch and an EdgeRouter to provide DNS and DHCP. On the client side, we used a new MacBook Pro outfitted with an SSD also rated for far more write speed than the gig network could deliver. iSCSI requires special client software on Macs (called an initiator in iSCSI speak), so we used GlobalSAN’s iSCSI initiator, which is essentially a client side driver. As a control, we also enabled a CIFS share to the same disk and share point on the FreeNAS server. The server and client were the only two systems on the test network. We tested iSCSI vs CIFS using the Blackmagic Disk Speed Test.
In short, iSCSI is all it claims to be. It should be noted before I go any further that the results of the Blackmagic Speed Test are in bytes, not bits as networks are typically measured, so don’t misread the graphics. A byte is eight times larger than a bit, so our results are converted accordingly.
For reference, a standard GigE connection is theoretically 1000Mb/s (or 125MB/s). That number is the spec; in the real world it’s unachievable. But how close can we get?
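Since mixing up bits and bytes is the classic way to misread these benchmarks, here is a minimal sketch of the conversion (the function name is my own, for illustration):

```python
# Network specs are quoted in megabits per second (Mb/s), while
# Blackmagic Disk Speed Test reports megabytes per second (MB/s).
# One byte is eight bits, so divide by 8 to convert.

def mbps_to_MBps(megabits_per_second: float) -> float:
    """Convert a rate in megabits/s to megabytes/s."""
    return megabits_per_second / 8

# Theoretical line rate of gigabit Ethernet:
print(mbps_to_MBps(1000))  # 125.0 MB/s, the unreachable ceiling
```

The same one-line conversion recovers every MB/s figure quoted alongside a Mb/s figure in this article.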
Before I explain the numbers in more detail, here are some figures for the video heads to keep in the back of your mind. A DCT compressed piece of video (ProRes or DNxHD), like what you might use as optimized media for Avid, Final Cut or iMovie, requires quite a bit of speed to play back and edit. For 1080 resolutions at 8-bit and 4:2:2, you will need up to 136Mb/s for 23.98/24p, 168Mb/s for 29.97/30p, and 336Mb/s for 59.94/60p. These are rates for run-of-the-mill average files. ProRes LT, for example, uses less bandwidth at lower quality, while ProRes HQ is higher quality but uses more bandwidth. For the Avid people, DNxHD is comparable to ProRes in terms of quality and bandwidth; ProRes was just easier to test in our environment.
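Multi-stream requirements scale linearly, so a quick sketch using the per-stream figures above (the dictionary and function names are mine, for illustration):

```python
# Approximate bandwidth (Mb/s) per stream of 8-bit 4:2:2 1080-line
# ProRes/DNxHD, using the figures quoted above.
PRORES_1080_MBPS = {
    "23.98/24p": 136,
    "29.97/30p": 168,
    "59.94/60p": 336,
}

def required_mbps(frame_rate: str, streams: int = 1) -> int:
    """Total bandwidth needed for `streams` simultaneous streams."""
    return PRORES_1080_MBPS[frame_rate] * streams

print(required_mbps("59.94/60p", 2))  # 672 Mb/s for two 1080/60p streams
```

That 672Mb/s figure for two streams of 1080/60p is the threshold the benchmark numbers below are measured against.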
The CIFS connection achieved a pretty good speed, writing at 584Mb/s (or 73MB/s) and reading at 664Mb/s (or 83MB/s). These are maximum numbers. That’s not quite enough to work with two streams of 1080/60p, limiting you to trim edits on one clip or constant rendering. You could work with three streams of 1080/30p (just missing four streams), letting you do more complex work though eliminating anything but rudimentary compositing in real time. Advanced formats like 4K, at rates over 500Mb/s for 24p and 664Mb/s for 30p, are mathematically possible but realistically a no fly zone.
How did the iSCSI connection fare? A whopping 760Mb/s write speed (or 95MB/s in bytes) with an amazing 880Mb/s (110MB/s) read speed. That’s roughly a 30% improvement over CIFS with no hardware change whatsoever; only the protocol is different. That breaks the threshold on 1080/60p, allowing for two streams, and opens the door for compositing work by allowing up to five streams of 1080/30p simultaneously before bandwidth limitations force you to render.
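The stream counts above are just integer division of measured read throughput by per-stream bitrate. A small sketch of that arithmetic, using the measured maximums from this test (names are mine, for illustration):

```python
# Per-stream bitrates (Mb/s) for 8-bit 4:2:2 1080 ProRes, from above.
BITRATE_MBPS = {"24p": 136, "30p": 168, "60p": 336}

def max_streams(read_mbps: int, frame_rate: str) -> int:
    """How many simultaneous streams a given read speed can feed."""
    return read_mbps // BITRATE_MBPS[frame_rate]

cifs_read, iscsi_read = 664, 880  # measured maximum read speeds, Mb/s

for rate in ("30p", "60p"):
    print(f"1080/{rate}: CIFS supports {max_streams(cifs_read, rate)} "
          f"streams, iSCSI supports {max_streams(iscsi_read, rate)}")
```

This reproduces the article's conclusions: CIFS tops out at three streams of 1080/30p and one of 1080/60p, while iSCSI manages five and two respectively.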
More technically, iSCSI is a block level technology. This means the file system belongs to your computer, whereas with CIFS, AFP or NFS the file system belongs to the server, and the server becomes a man in the middle that adds inefficiency. In addition to faster overall delivery of media in bulk, latency is significantly lower with iSCSI. Nothing is worse than moving the playhead and waiting a second before the video appears or starts playing. While this test didn’t measure latency, we did attempt to edit using both technologies. You could edit with both CIFS and iSCSI, but the experience was better with iSCSI, and the editing software was much more responsive since it was being fed more data with less latency. Informally, the lag was about a second with CIFS while it was almost imperceptible with iSCSI. Regarding integration, since iSCSI is seen as a local drive, it also opens the possibility of using advanced SAN technologies for sharing, something simply not possible with NAS technologies like CIFS.