When people started building VDI environments, they approached the storage design the same way they always had with server environments. VDI vendor lab tests gave some indication of the required IOPS, and all seemed well.
How wrong they were!
As it turned out, the storage requirements of VDI are a world away from server infrastructure requirements. Everything is different, as the first VDI projects soon discovered.
Herco and Marcel first provide a thorough analysis of VDI workload characteristics in this whitepaper. What does a VDI storage profile look like? What is the impact of boot storms? What tools are available to help? The authors also discuss something called IO amplification; think of the use of linked clones or differencing disks, profile management and virus scanners. The whitepaper also covers the role of server hardware and networking from a storage perspective: PCIe bus speeds, the use of FC, iSCSI, NFS or SMB, and queue depths are all in the paper.
After setting this framework, Herco and Marcel discuss how IOPS are handled at the storage level: RAID, the use of flash storage and, of course, caching.
Once we understand the RAID write penalty, it becomes clear that the read/write ratio is an important factor. With 100% reads there is nothing to take into account, but as soon as we start writing, say 80 out of every 100 IOs as in a typical VDI environment, things start to look different. It's now easy to see why, in such high-write workloads, choosing the most economical RAID configuration is important.
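To see how hard the write penalty bites, here is a minimal sketch of the common rule-of-thumb calculation (my own illustration, not taken from the whitepaper; the helper name and the per-RAID-level penalties of 2/4/6 for RAID 10/5/6 are the usual textbook assumptions):

```python
# Hypothetical helper: rule-of-thumb backend IOPS from frontend IOPS.
# Each frontend write costs extra backend IOs depending on the RAID level.
RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def backend_iops(frontend_iops, write_ratio, raid_level):
    reads = frontend_iops * (1 - write_ratio)
    writes = frontend_iops * write_ratio
    # Reads pass through 1:1; writes are multiplied by the RAID penalty.
    return reads + writes * RAID_WRITE_PENALTY[raid_level]

# A VDI-like workload: 100 frontend IOPS, 80% writes.
print(backend_iops(100, 0.8, "RAID10"))  # 20 reads + 160 write IOs = 180
print(backend_iops(100, 0.8, "RAID5"))   # 20 reads + 320 write IOs = 340
```

With the same 100 frontend IOPS, the disks behind a RAID 5 set must deliver almost twice as many IOs as behind RAID 10, which is exactly why the RAID choice dominates VDI storage sizing.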
The whitepaper focuses on NetApp technology but also includes general storage guidelines. If you're into VDI, this whitepaper is certainly worth a read. It's available for download here. There's also a whitepaper about the use of flash storage in the enterprise, called "Spinning out of Control", available for download on the same page.
Also check out Ruben Spruijt's article about the whitepaper at brianmadden.com.