Note that the TS-670 is officially not supported on vSphere 5.5. Some problems have been reported by other users and I also faced some issues, so be careful when using vSphere 5.5 in combination with QNAP. vSphere 5.0 and 5.1 are officially supported, so if you want to play it safe I would advise using vSphere 5.0/5.1. The configuration steps for 5.0/5.1 are similar to the steps described in this article.
It’s time for some SAN testing again! This time I had the chance to connect the QNAP TS-670 to my lab. The TS-670 is a 6 bay NAS solution from QNAP targeted at SMB users, and includes some cool (enterprise level) features:
- 4x 1 Gbit connectivity, with optional 10 Gbit connectivity (I don’t have that available in my lab);
- More than 400 MB/s read and write speed;
- You can expand the device with the optional REXP-1600U-RP RAID expansion enclosure, allowing a raw capacity of 152 TB;
- Backup, disaster recovery and data security management options;
- SSD caching (sounds cool);
- iSCSI and NFS;
- Officially supported by VMware…yes the device is on the HCL. Note that the device is supported on ESXi 5.0 and 5.1: ESXi 5.5 is not on the list right now, but this will hopefully change soon;
- VAAI is supported for firmware 3.8 and newer; the VAAI primitives Block Zero, Full Copy and HW Assisted Locking are supported (you can verify this from ESXi, see the command below this list).
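Once a QNAP LUN has been presented to an ESXi host (described later in this article), you can check which VAAI primitives ESXi actually reports for it. A quick verification sketch, where <NAA ID> is the device identifier of your QNAP volume:
esxcli storage core device vaai status get -d <NAA ID>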
The TS-670 isn’t rack-mounted and includes one power supply. The 4x 1 Gbit Ethernet connections are divided over two different dual port NICs (one on-board, one expansion NIC). The storage device is equipped with a dual core 2.6 GHz processor and 2 GB of RAM. You can configure the device with both 2.5″ and 3.5″ disks. Full specs are available on the QNAP website.
Before connecting the device and putting some virtual machines on it, first verify that you’re running the latest firmware. My TS-670 was running firmware 4.0.2 while 4.0.5 is available, so upgrading is a must. Unfortunately the included GUI based upgrade feature didn’t work, so I had to fall back on the command line upgrade option. Although I am not too familiar with QNAP, this manual worked fine for me. Upgrading the firmware took less than 15 minutes.
Configure networking
The TS-670 offers some compelling networking options. The 4 available NICs can be configured as individual adapters, but there are also some teaming options available including active/standby and 802.3ad link aggregation.
For iSCSI, separate network connections are just fine, because VMware’s MPIO driver will manage availability. So I simply configured 4 different IP addresses, one for each of the available NICs:
Configure a RAID set
Before you can create an actual iSCSI volume, you first have to setup a RAID set. QNAP offers various redundant options such as RAID-1, RAID-10, RAID-5 and RAID-6. Non-redundant options like JBOD or RAID-0 are also available.
QNAP works with a layered model consisting of a storage pool, a volume and iSCSI LUN:
- A storage pool aggregates physical drives into a big storage space and offers RAID protection. A storage pool can consist of multiple RAID groups and can also include one or multiple spare disks. A storage pool hosts one or more volumes. A storage pool can also be the container for an iSCSI LUN.
- A volume is a logical boundary within a storage pool. A volume can use all the space that is available in the storage pool, but you can also create multiple volumes on one storage pool. A volume can contain shared folders or one or more iSCSI LUNs. You can set SSD cache settings on a per volume basis.
- An iSCSI LUN can be configured as block-based or file-based LUN. A block-based LUN is directly placed on the storage pool, whereas a file-based LUN is placed on a volume. You can configure SSD cache settings on a block-based LUN, a file-based LUN will use the volume SSD cache settings.
In my case I chose to put the LUN directly on the storage pool, without using a volume. LUN management is part of the QNAP Storage Manager. Choose iSCSI Storage and then the Create option. You probably want to use the ‘iSCSI target with a mapped LUN’ option if you’re configuring your first iSCSI volume, or just choose ‘iSCSI LUN’ if you only want to create the iSCSI volume.
Creating a LUN will give you the following options:
The LUN type (block based versus file based) determines whether the LUN is placed on a storage pool or on a volume. The LUN allocation option determines whether all required space is allocated instantly or thin provisioning is used. In the latter case the LUN only consumes the storage space actually used; this gives more flexibility, but you have to monitor LUN usage because you could over provision the QNAP. I will be talking about the sector size and SSD cache options in another article.
Note that the storage service was suspended for a short period of time after creating a LUN, which has a (serious) impact on already connected LUNs. This is an issue I faced with the 4.0.2 firmware; it didn’t occur with the 4.0.5 firmware.
After creating the LUN it’s time to map the ESXi hosts to the LUN. This is completed through the Advanced ACL option. Verify you’re allowing read/write access for the LUNs.
Connecting ESXi
The next step is to connect ESXi to the QNAP. You can just use a default iSCSI configuration. In my case I have two iSCSI initiators on each ESXi server, connecting to the QNAP iSCSI target which is available through 4 network addresses. This results in 8 different iSCSI paths:
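If you’re using the software iSCSI initiator with port binding (my hosts have two dedicated storage NICs), the same configuration can also be done from the command line. The sketch below uses assumed names: vmhba33 for the software iSCSI adapter, vmk1/vmk2 for the storage vmkernel ports and 192.168.1.21 for one of the QNAP portal addresses; substitute your own values:
esxcli iscsi software set --enabled=true (enable the software iSCSI initiator)
esxcli iscsi adapter list (find the name of the software iSCSI adapter)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1 (bind the first storage vmkernel port)
esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2 (bind the second storage vmkernel port)
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.21 (add a QNAP portal address as send target)
esxcli storage core adapter rescan --adapter=vmhba33 (rescan to discover the LUNs and all paths)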
Notice that the default path selection policy is MRU. This means only one NIC (on the QNAP side) will be used for data transfers. Although I cannot find official statements on this, setting the policy to Round Robin (in which case all paths are used) gives better results. You have to change the Path Selection Policy for each volume on each ESXi host through the GUI, or you can change the default behavior of VMW_SATP_ALUA by executing these commands (on each ESXi host):
esxcli storage nmp satp list (to see the current setting)
esxcli storage nmp satp set -s VMW_SATP_ALUA -P VMW_PSP_RR
esxcli storage nmp satp list (to see the new setting)
Important: This changes the behavior for all storage devices using VMW_SATP_ALUA, so verify that all your devices support Round Robin.
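If you prefer not to change the default for the whole SATP, you can also set Round Robin per device from the command line instead of through the GUI. A minimal sketch, where <NAA ID> is the device identifier of the QNAP volume (how to find it is shown below):
esxcli storage nmp device set -d <NAA ID> -P VMW_PSP_RR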
By changing the iSCSI default Round Robin IOPS value you will get better performance in some cases, see this article by Cormac Hogan.
To change this setting, first determine the vendor and model name of the storage device:
~ # esxcfg-scsidevs -l | egrep -i 'display name|vendor'
   Display Name: QNAP iSCSI Disk (naa.6e843b681f58978dc434d4f75db678d2)
   Vendor: QNAP   Model: iSCSI Storage   Revis: 4.0
   Display Name: QNAP iSCSI Disk (naa.6e843b6b2e45080d25f0d4b36da53dda)
   Vendor: QNAP   Model: iSCSI Storage   Revis: 4.0
Now run the following command (from Cormac’s blog) to set the IOPS=1 value on all Round Robin QNAP iSCSI volumes. The vendor name (-V) and model name (-M) are taken from the preceding command.
esxcli storage nmp satp rule add -s "VMW_SATP_ALUA" -V "QNAP" -M "iSCSI Storage" -P "VMW_PSP_RR" -O "iops=1"
This setting will be applied after a reboot, or you can follow the procedure outlined in this article.
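Alternatively, to apply the IOPS=1 value immediately to a device that is already using Round Robin (without waiting for a reboot), you can set it per device. A sketch, again with <NAA ID> being the identifier of the QNAP volume:
esxcli storage nmp psp roundrobin deviceconfig set -d <NAA ID> --type=iops --iops=1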
To check the IOPS=1 setting use:
esxcli storage nmp psp roundrobin deviceconfig get -d <NAA ID>
The NAA ID is the ID of the volume, e.g. naa.6e843b681f58978dc434d4f75db678d2.
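For example, for the first QNAP volume listed above:
esxcli storage nmp psp roundrobin deviceconfig get -d naa.6e843b681f58978dc434d4f75db678d2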
Now you’re all set to start using iSCSI on ESXi 5.5. In a next article I will test QNAP’s VAAI integration and SSD caching options. You can subscribe to this blog by leaving your mail address in the box on the right of this webpage.
5 Comments
Mvd
Hi,
great blog.
But what is the read/write performance of these disks when doing a performance check?
On my QNAP, NFS performance is better: 86/110 Mbps.
Thx
Michael Scherr
Great Article! *i Like*
When will you blog about the SSD cache part?
viktorious
Hi, you can find it here: http://www.viktorious.nl/2013/12/10/testing-qnap-ts-670-ssd-cache-feature-vsphere-5-5/
Mark
Hello, why do you use two iSCSI initiators on the ESXi servers?
Is it because the servers have two dedicated NICs for the storage?
viktorious
Hi; because of availability and bandwidth considerations. You get 2x 1 Gbit bandwidth and operation continues when 1 NIC fails.