Testing QNAP TS-670 SSD Cache Feature with vSphere 5.5

After a first article about the QNAP TS-670 and vSphere 5.5, it’s now time for some additional testing. The TS-670 is equipped with an SSD caching feature that allows you to configure two SSD disks as cache drives. This should increase the performance of the QNAP, so let’s run a little test!

Configuring QNAP SSD Cache

First things first: the QNAP SSD caching feature is a read-only cache and will accelerate read IOPS in particular, although write performance might benefit from the caching feature as well (more on that later in this article). Configuration of the SSD cache is quite straightforward. Open the QNAP storage manager, choose SSD cache (after adding one or two SSD drives to slots 5 and 6 of the TS-670) and create a new SSD cache:


The QNAP SSD cache offers two algorithms, LRU and FIFO (see the sketch after this list):

  • LRU (default): Higher hit rate, but requires more CPU resources. When the cache is full, LRU discards the least recently used items first. Because the system has to track when each cached item was last accessed, this policy uses more CPU but delivers a higher hit rate.
  • FIFO: Requires fewer CPU resources, but has a lower hit rate. When the cache is full, FIFO discards the oldest data in the cache, regardless of how often it is accessed. This lowers the hit rate but keeps CPU usage down.
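
To make the difference concrete, here is a minimal sketch of both eviction policies in Python (illustrative only, not QNAP’s actual implementation); the cache size and block numbers are made-up values.

```python
from collections import OrderedDict, deque

CACHE_SIZE = 4  # made-up capacity (number of cached blocks), for illustration only

# LRU: on a hit the block is marked "most recently used"; when the cache is
# full, the least recently used block is evicted.
lru = OrderedDict()

def lru_access(block):
    if block in lru:
        lru.move_to_end(block)      # mark as most recently used
        return "hit"
    if len(lru) >= CACHE_SIZE:
        lru.popitem(last=False)     # evict the least recently used block
    lru[block] = True
    return "miss"

# FIFO: usage is not tracked; when the cache is full, the oldest inserted
# block is evicted, no matter how often it is read.
fifo = deque()

def fifo_access(block):
    if block in fifo:
        return "hit"
    if len(fifo) >= CACHE_SIZE:
        fifo.popleft()              # evict the oldest block
    fifo.append(block)
    return "miss"

# Block 1 is re-read often: LRU keeps it cached, FIFO eventually evicts it.
for block in [1, 2, 3, 1, 4, 1, 5, 1, 6, 1]:
    print(block, lru_access(block), fifo_access(block))
```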

In this test I am using LRU. After the cache is configured, you can choose which volumes and/or iSCSI LUNs will leverage the cache. I ran the tests with the SSD cache enabled and disabled for a specific iSCSI LUN connected to vSphere 5.5.

Cache Testing

For the test I am using IOmeter, a very popular tool for disk IO testing. It is available for free here. IOmeter allows you to fully customize the IO test you want to run. I chose to run some predefined tests, which are discussed and can be downloaded from this article at Maish Saidel-Keesing’s Technodrone website. The test includes four smaller tests:

  1. A Max Throughput Read test with 100% read, 32 KB transfer size, 0% random IO.
  2. A Reallife Server Load test with a 65/35 read/write ratio and a 60/40 random/sequential IO ratio with an 8 KB transfer size.
  3. Max Throughput Read/Write test with 50/50 read/write ratio, 32 KB transfer size, 0% random IO.
  4. A Random 8k test with a 70/30 read/write ratio and 100% random IO with an 8 KB transfer size.

Each test runs for 5 minutes, and I ran the full test suite twice. Note that the number of outstanding IOs is 64 for this test.
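
When interpreting the tables below, keep in mind that IOPS and MB/s are two views of the same measurement, linked by the transfer size. A quick sanity check in Python (purely illustrative arithmetic, using the sequential read numbers from the first results table):

```python
# Throughput roughly equals IOPS multiplied by the transfer size.
def mb_per_s(iops, transfer_size_kb):
    return iops * transfer_size_kb / 1024.0   # KB/s -> MB/s

print(mb_per_s(4400, 32))   # ~137.5 MB/s (Max Throughput Read)
print(mb_per_s(2690, 32))   # ~84.1 MB/s (Max Throughput Read/Write, read side)
```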

You can find a lot of results in this article at the VMware website, where different users report their results from running this test on enterprise storage arrays. Although the purpose of this test is NOT to compare QNAP performance with NetApp, EqualLogic or any other vendor, it will give you some insight into the performance of different arrays. The actual performance of a storage solution depends on a lot of different factors, including cache sizes, RAID configurations, SAN/LAN configurations, etc. The TS-670 used for this test is only equipped with three ordinary, consumer-grade SATA disks in a RAID 5 configuration (and two SSD disks for the cache). Adding more disks to this array would certainly improve performance. The only purpose of this test is to see the impact of the SSD cache feature on the TS-670.

In the first test I ran IOmeter without the SSD cache enabled.

| Test | Read IOPS | Write IOPS | Read MB/s | Write MB/s | Avg. read response time | Avg. write response time |
|------|-----------|------------|-----------|------------|-------------------------|--------------------------|
| Max Throughput Read | 4400 | 0 | 137 | 0 | 10.1 ms | 0 ms |
| Reallife Server Load | 158 | 85 | 1.9 | 0.7 | 225.1 ms | 284.8 ms |
| Max Throughput Read/Write | 2690 | 2691 | 84.1 | 84.1 | 7.2 ms | 7.6 ms |
| Random 8k | 148 | 64 | 1.1 | 0.5 | 255.3 ms | 328.0 ms |

You can see that the QNAP performs well with sequential IOs. The tests that include random IOs show a significant drop in performance. Let’s enable the SSD caching feature and see if this improves performance:

| Test | Read IOPS | Write IOPS | Read MB/s | Write MB/s | Avg. read response time | Avg. write response time |
|------|-----------|------------|-----------|------------|-------------------------|--------------------------|
| Max Throughput Read | 4360 | 0 | 136.2 | 0 | 9.1 ms | 0 ms |
| Reallife Server Load | 795 | 516 | 6.2 | 4.0 | 63.0 ms | 79.0 ms |
| Max Throughput Read/Write | 2717 | 2722 | 84.9 | 85.0 | 6.4 ms | 6.8 ms |
| Random 8k | 987 | 692 | 5.4 | 2.3 | 43.4 ms | 56.8 ms |

The results show a very big increase in performance, especially for the Reallife Server Load and Random 8k tests, which include random IOs. There is also a big improvement in write performance in these tests, probably because the read IOPS are offloaded to the SSD cache and the existing RAID disk set has more time to deal with the write IOPS.
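
To put that improvement into numbers, here is a quick comparison of the read IOPS from the two result tables above (simple illustrative arithmetic):

```python
# Read IOPS per test, taken from the two result tables above.
no_cache  = {"Reallife Server Load": 158, "Random 8k": 148, "Max Throughput Read": 4400}
ssd_cache = {"Reallife Server Load": 795, "Random 8k": 987, "Max Throughput Read": 4360}

for test, baseline in no_cache.items():
    print(f"{test}: {ssd_cache[test] / baseline:.1f}x read IOPS with SSD cache")
# Reallife Server Load: 5.0x, Random 8k: 6.7x, Max Throughput Read: 1.0x
```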


Note that I got much better results with the second run of the test (those are the results included in this article), probably because it takes some time before the cache is populated. You can check the cache hit rate on the QNAP, as displayed in the figure on the left.

The results after enabling the SSD cache certainly surprised me: a significant performance improvement, and I am only using standard Samsung EVO 820 SSDs (consumer-grade SSDs for less than 100 euros for 128 GB).

If you have the time to do some testing yourself, I am looking forward to seeing your results.

