After a first article about the QNAP TS-670 and vSphere 5.5, it’s now time for some additional testing. The TS-670 is equipped with an SSD caching feature which allows you to configure two SSD disks as cache drives. This should increase the performance of the QNAP, so let’s run a little test!
Configuring QNAP SSD Cache
First things first: the QNAP SSD caching feature is a read-only cache and will accelerate read IOPS in particular, although write performance might benefit from the caching feature as well (more on that later in this article). Configuration of the SSD cache is quite straightforward. Open the QNAP Storage Manager, choose SSD cache (after adding one or two SSD drives to slots 5 and 6 of the TS-670) and create a new SSD cache:
The QNAP SSD cache offers two algorithms, LRU and FIFO:
- LRU (default): higher hit rate, but requires more CPU resources. When the cache is full, LRU discards the least recently used items first. Because the system has to track accesses to know which data was least recently used, this policy costs more CPU but delivers a higher hit rate.
- FIFO: requires fewer CPU resources, but has a lower hit rate. When the cache is full, FIFO simply discards the oldest data in the cache. This lowers the hit rate but keeps the CPU overhead small.
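The difference between the two policies can be illustrated with a small sketch. This is not QNAP’s actual implementation, just a toy cache showing why LRU keeps “hot” blocks around while FIFO eventually evicts them:

```python
from collections import OrderedDict

class Cache:
    """Fixed-size read cache with a pluggable eviction policy (LRU or FIFO)."""

    def __init__(self, capacity, policy="LRU"):
        self.capacity = capacity
        self.policy = policy
        self.store = OrderedDict()  # front = eviction candidate
        self.hits = self.misses = 0

    def read(self, block, fetch):
        if block in self.store:
            self.hits += 1
            if self.policy == "LRU":
                # Track recency: move the hit block to the "newest" end.
                self.store.move_to_end(block)
            return self.store[block]
        self.misses += 1
        if len(self.store) >= self.capacity:
            # LRU evicts the least recently used block;
            # FIFO evicts the oldest inserted block (no recency tracking).
            self.store.popitem(last=False)
        self.store[block] = fetch(block)
        return self.store[block]

# Skewed workload: block 1 is "hot", the others are touched once.
pattern = [1, 2, 1, 3, 1, 4, 1, 5]
lru, fifo = Cache(2, "LRU"), Cache(2, "FIFO")
for b in pattern:
    lru.read(b, lambda blk: blk)
    fifo.read(b, lambda blk: blk)
print(lru.hits, fifo.hits)  # 3 2 — LRU keeps the hot block cached
```

The extra bookkeeping is the `move_to_end` call on every hit, which is exactly the “requires more CPU resources” trade-off QNAP describes.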
In this test I am using LRU. After the cache is configured you can choose which volumes and/or iSCSI LUNs will leverage it. I ran tests with the SSD cache enabled and disabled for a specific iSCSI LUN connected to vSphere 5.5.
Cache Testing
For the test I am using IOmeter, a very popular tool for disk IO testing. It is available for free here. IOmeter allows you to fully customize the IO test you want to run. I chose to run some predefined tests which are discussed in, and can be downloaded from, this article at Maish Saidel-Keesing’s Technodrone website. The test includes four smaller tests:
- A Max Throughput Read test with 100% read, 32 KB transfer size, 0% random IO.
- A Reallife Server Load test with a 65/35 read/write ratio and a 60/40 random/sequential IO ratio with an 8 KB transfer size.
- A Max Throughput Read/Write test with 50/50 read/write ratio, 32 KB transfer size, 0% random IO.
- A Random 8k test with a 70/30 read/write ratio and 100% random IO with an 8 KB transfer size.
Each test runs for 5 minutes and I ran the full test twice. Note that the number of outstanding IOs is 64 for these tests.
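The number of outstanding IOs matters because, by Little’s law, the queue depth caps the achievable IOPS at a given response time. A back-of-the-envelope sketch (the numbers below are illustrative, not taken from the test results):

```python
def max_iops(outstanding_io, latency_s):
    """Little's law: concurrency = arrival rate * service time,
    so the IOPS ceiling is (outstanding IOs) / (per-IO latency)."""
    return outstanding_io / latency_s

# With 64 outstanding IOs and a 10 ms average response time,
# the workload can sustain at most ~6,400 IOPS.
print(max_iops(64, 0.010))  # 6400.0

# At a 250 ms response time the ceiling collapses to ~256 IOPS.
print(max_iops(64, 0.250))  # 256.0
```

This is why the high-latency random tests below bottom out at a few hundred IOPS even though 64 IOs are kept in flight.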
You can find a lot of results in this article at the VMware website, where different users report their results from running this test on enterprise storage arrays. Although the purpose of this test is NOT to compare QNAP performance with NetApp, EqualLogic or the like, it will give you some insight into the performance of different arrays. Actual performance of a storage solution depends on many factors, including cache sizes, RAID configurations, SAN/LAN configurations, etc. The TS-670 used for this test is only equipped with three ordinary, consumer-grade SATA disks in a RAID 5 configuration (plus the two SSD disks for the cache). Adding more disks to this array would certainly improve performance. The only purpose of this test is to see the impact of the SSD cache feature on the TS-670.
In the first test I ran IOmeter without the SSD cache enabled.
| Test | Read IOPS | Write IOPS | Read MB/s | Write MB/s | Avg. read response time | Avg. write response time |
|---|---|---|---|---|---|---|
| Max Throughput Read | 4,400 | 0 | 137 | 0 | 10.1 ms | 0 ms |
| Reallife Server Load | 158 | 85 | 1.9 | 0.7 | 225.1 ms | 284.8 ms |
| Max Throughput Read/Write | 2,690 | 2,691 | 84.1 | 84.1 | 7.2 ms | 7.6 ms |
| Random 8k | 148 | 64 | 1.1 | 0.5 | 255.3 ms | 328.0 ms |
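As a sanity check, throughput should simply be IOPS times transfer size. For the sequential read test this lines up nicely with the measured figures (the helper below is just arithmetic, not part of IOmeter):

```python
def mib_per_s(iops, transfer_kib):
    """MiB/s implied by an IOPS figure at a given transfer size."""
    return iops * transfer_kib / 1024

# Max Throughput Read: 4,400 IOPS at 32 KB per IO.
print(mib_per_s(4400, 32))  # 137.5 — matching the ~137 MB/s measured
```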
You can see that the QNAP performs well with sequential IO. The tests that include random IO show a significant drop in performance. Let’s enable the SSD caching feature and see if this improves performance:
| Test | Read IOPS | Write IOPS | Read MB/s | Write MB/s | Avg. read response time | Avg. write response time |
|---|---|---|---|---|---|---|
| Max Throughput Read | 4,360 | 0 | 136.2 | 0 | 9.1 ms | 0 ms |
| Reallife Server Load | 795 | 516 | 6.2 | 4.0 | 63.0 ms | 79.0 ms |
| Max Throughput Read/Write | 2,717 | 2,722 | 84.9 | 85.0 | 6.4 ms | 6.8 ms |
| Random 8k | 987 | 692 | 5.4 | 2.3 | 43.4 ms | 56.8 ms |
The results show a very big performance increase, especially for the Reallife Server Load and Random 8k tests, which include random IO. There is also a big improvement in write performance in these tests, probably because the read IOs are off-loaded to the SSD cache, leaving the existing RAID disk set with more time to handle the write IOPS.
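That write-side effect can be captured in a toy model: the spindles have a fixed IOPS budget, and every read served from the SSD cache frees budget for writes. The numbers below are hypothetical, chosen only to illustrate the mechanism:

```python
def write_iops_available(spindle_iops, read_iops, hit_rate):
    """Spindle IOPS left for writes once cached reads are served by the SSDs.

    spindle_iops: total IOPS budget of the RAID disk set (hypothetical)
    read_iops:    read IOPS offered by the workload
    hit_rate:     fraction of reads answered from the SSD cache
    """
    reads_hitting_spindles = read_iops * (1 - hit_rate)
    return spindle_iops - reads_hitting_spindles

# Hypothetical RAID 5 set delivering 250 IOPS, with 160 read IOPS offered:
print(write_iops_available(250, 160, 0.0))  # 90.0 — no cache
print(write_iops_available(250, 160, 0.8))  # 218.0 — 80% cache hit rate
```

Even though the cache never touches a write, offloading most reads more than doubles the spindle capacity left for writes in this sketch.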
Note that I had much better results in the second run of the test (these are the results included in this article), probably because it takes some time for the cache to be populated. You can check the cache hit rate on the QNAP, as displayed in the figure on the left.
The results after enabling the SSD cache certainly surprised me: a significant performance improvement, and I am only using standard Samsung EVO 820 SSDs (consumer-grade SSDs for less than 100 euros for 128 GB).
If you have the time to do some testing yourself, I look forward to seeing your results.
8 Comments
Mvd
Hi,
will ssd-cache only be available in specific slots in the qnap?
How to determine which slots for qnap 869 pro?
Thx
viktorious
Hi, I’ve just learned that: “SSD caching function is not available for TS-x69 series.”
Source: http://www.qnap.com/en/index.php?lang=en&sn=845&c=2699&sc=&n=20045
Mvd
Thx for the answer.
And what a shame for my 869 pro 🙁
Pingback: Might consider this for the new homelab | Virtual-J
FSE
Hey, cool test, thanks for putting this together!
Do you know, or have some experience with, the SSD cache block size? (Especially with a VMware iSCSI datastore.)
Will it be faster if you synchronize the SSD block size with the block size of the VMware datastore?
Thx
viktorious
Good question, but I don’t think I can help you with an answer. Anybody else?
Sirozha
Hi, why does the test with SSD cache enabled show a six-fold increase in IOPS of RealLife Server Load writes over the same test without SSD cache enabled? QNAP claims that their implementation of SSD caching only affects reads. Before I replace my TS-459L with a model that supports SSD caching, I’d like to make sure that the writes benefit from SSD caching.
Thank you!
viktorious
Because read IOPS are served from the cache, the spinning disks can offer more write IOPS, so you can (and likely will) experience an improvement in writes.