I recently got my hands on a Synology DS412+ 4-bay SAN/NAS. The DS412+ is a solution targeted at small and medium-sized businesses and features 4 bays, 1 GB of memory and a dual-core 2.13 GHz Intel Atom processor. The maximum raw capacity is 16 TB (4 x 4 TB) and the device is equipped with 2 x 1 Gbit network connections which can be configured as an LACP team.
The cool thing is that the Synology DS412+ is on VMware's Hardware Compatibility List and also supports the vStorage APIs for Array Integration (VAAI). The DS412+ supports Block Zero, Full Copy, Hardware Assisted Locking and Thin Provisioning. Time for some testing!
Basic setup of the DS412+
The basic setup of the DS412+ is pretty straightforward. The device supports RAID 0, 1, 5, 6, 10, JBOD and SHR. SHR stands for Synology Hybrid RAID and is a nice option that balances disk usage and availability, specifically when using physical disks of different sizes. When configuring (e.g.) RAID 5 it's very important to use disks of identical size, otherwise you will lose disk space. When using disks of different sizes, SHR divides each disk into smaller chunks, creating additional redundant storage and allowing the capacity of each disk to be utilized to its maximum. The full story about SHR is on the Synology wiki.
Configuring iSCSI and MPIO
Configuring iSCSI in a vSphere environment means you're using VMFS as the filesystem to store your virtual machines. As you all know, VMFS is a multi-access filesystem that allows multiple iSCSI initiators (ESXi servers) to connect concurrently to the same iSCSI target. The DS412+ has no problems with this, although you have to make a small configuration change on the box:
Go to the storage manager, iSCSI Target, select your iSCSI target and open the advanced settings. Now select the option "Allow multiple sessions from one or more iSCSI initiators". This option is disabled by default, which results in the iSCSI target accepting only the first initiator. Enabling the option allows multiple hosts to access the same iSCSI target/volume and also lets you configure Multipath I/O (MPIO).
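For reference, the vSphere side of the iSCSI configuration can also be done from the ESXi command line. A minimal sketch, assuming vmhba33 is the software iSCSI adapter and 192.168.1.10 is one of the Synology iSCSI IP addresses (both are just examples from my lab, adjust them to your environment):

# vmhba33 and 192.168.1.10 are examples, replace them with your own adapter name and Synology IP
~ # esxcli iscsi software set --enabled=true
~ # esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.10
~ # esxcli storage core adapter rescan --adapter=vmhba33

After the rescan the LUNs behind the Synology target should show up as devices on the host.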
Talking about MPIO…yes, the DS412+ supports MPIO. There's a good article on the Synology website which explains how to configure multipathing for iSCSI. From a vSphere perspective you have two options when configuring MPIO on the vSphere side:
- Configure 2 (or more) NICs in the same IP subnet
- Configure 2 (or more) NICs in different IP subnets
Important: although you can configure both options, the Synology DS412+ performs best when using option 2, at least that is what my lab tests show. I first tried to configure option 1…vSphere showed 4 active paths to an iSCSI LUN in this case:
The paths are from each VMkernel IP address connecting to the 2 Synology IP addresses, which all live in the same subnet. This results in 4 paths.
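A side note on option 1: with everything in one subnet the VMkernel ports have to be bound to the software iSCSI adapter, otherwise you won't get multiple paths at all. A minimal sketch, assuming vmhba33 is the software iSCSI adapter and vmk1/vmk2 are the iSCSI VMkernel ports (names from my lab, adjust as needed):

# vmhba33, vmk1 and vmk2 are examples from my lab
~ # esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk1
~ # esxcli iscsi networkportal add --adapter=vmhba33 --nic=vmk2
~ # esxcli iscsi networkportal list --adapter=vmhba33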
When configuring option 2, I only got two active paths:
Each path represents a unique connection: from the VMkernel IP in the first subnet to the Synology IP in the first subnet, and from the VMkernel IP in the second subnet to the Synology IP in the second subnet. Although you might think: "Ah, 4 paths is better than 2 paths, so let's configure the first option", something odd was displayed in the DSM performance monitor:
In the old situation (scenario 1) only the first NIC (on the Synology) was used. The second scenario resulted in storage traffic on both interfaces, which results in higher total bandwidth in the end.
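You can verify the paths per LUN from the ESXi command line as well; the naa identifier below is one of the Synology LUNs in my lab, so replace it with your own device identifier:

# replace the naa identifier with the identifier of your own LUN
~ # esxcli storage nmp path list --device=naa.60014051dd470b9d2ee1d393bd8df2d4

The output lists every path with its state, so you can quickly check how many paths there are and whether they are active.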
DS412+ LUN Masking
By default the Synology allows access from all iSCSI initiators. However, you can configure LUN masking per iSCSI target. The option is available in the storage manager, iSCSI Target: choose your iSCSI target and then choose the masking option:
Set the default privileges to "No Access" and add the ESXi IQNs to the list. You can choose read-only or read/write access.
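If you don't know the IQN of an ESXi host by heart, you can look it up on the host itself; vmhba33 is again the (example) name of the software iSCSI adapter:

# the Name field in the output contains the host's IQN
~ # esxcli iscsi adapter get --adapter=vmhba33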
Setting the correct Path Selection Policy
After configuring the DS412+ and vSphere's iSCSI HBA and networking, your LUNs will show up on your ESXi host. The VMW_SATP_ALUA Storage Array Type Plugin (SATP) is selected for the DS412+, which by default uses the Most Recently Used (MRU) Path Selection Policy (PSP):
This is not an optimal setting for a multipath iSCSI environment, so you should definitely change this to Round Robin. This has to be changed for every separate LUN available on each host in your environment (an example of the per-LUN command is shown below)…yes, that's a lot of work. An alternative is to change the default PSP for VMW_SATP_ALUA, which can be carried out through the ESXi command line.
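For reference, setting Round Robin on a single LUN looks roughly like this; the naa identifier is one of the LUNs from my lab, so replace it with your own:

# replace the naa identifier with your own device identifier
~ # esxcli storage nmp device set --device=naa.60014051dd470b9d2ee1d393bd8df2d4 --psp=VMW_PSP_RR
~ # esxcli storage nmp device list --device=naa.60014051dd470b9d2ee1d393bd8df2d4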
The following command shows that Most Recently Used (VMW_PSP_MRU) is the default PSP for VMW_SATP_ALUA:
~ # esxcli storage nmp satp list
Name                 Default PSP    Description
-------------------  -------------  -------------------------------------------------------
VMW_SATP_ALUA        VMW_PSP_MRU    Supports non-specific arrays that use the ALUA protocol
VMW_SATP_MSA         VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_DEFAULT_AP  VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_SVC         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_EQL         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_INV         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_EVA         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_ALUA_CX     VMW_PSP_RR     Placeholder (plugin not loaded)
VMW_SATP_SYMM        VMW_PSP_RR     Placeholder (plugin not loaded)
VMW_SATP_CX          VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_LSI         VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_DEFAULT_AA  VMW_PSP_FIXED  Supports non-specific active/active arrays
VMW_SATP_LOCAL       VMW_PSP_FIXED  Supports direct attached devices
You can change the default PSP by issuing the following esxcli command:
~ # esxcli storage nmp satp set --default-psp=VMW_PSP_RR --satp=VMW_SATP_ALUA
Default PSP for VMW_SATP_ALUA is now VMW_PSP_RR
This results in a new default PSP for VMW_SATP_ALUA:
~ # esxcli storage nmp satp list
Name                 Default PSP    Description
-------------------  -------------  -------------------------------------------------------
VMW_SATP_ALUA        VMW_PSP_RR     Supports non-specific arrays that use the ALUA protocol
VMW_SATP_MSA         VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_DEFAULT_AP  VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_SVC         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_EQL         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_INV         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_EVA         VMW_PSP_FIXED  Placeholder (plugin not loaded)
VMW_SATP_ALUA_CX     VMW_PSP_RR     Placeholder (plugin not loaded)
VMW_SATP_SYMM        VMW_PSP_RR     Placeholder (plugin not loaded)
VMW_SATP_CX          VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_LSI         VMW_PSP_MRU    Placeholder (plugin not loaded)
VMW_SATP_DEFAULT_AA  VMW_PSP_FIXED  Supports non-specific active/active arrays
VMW_SATP_LOCAL       VMW_PSP_FIXED  Supports direct attached devices
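Note that this new default only applies to LUNs claimed after the change; LUNs that were already presented keep their current PSP until you change them per device or reboot the host. As an optional extra tweak you can lower the number of I/Os after which Round Robin switches to the next path, which can help spread iSCSI traffic more evenly over both links; the naa identifier and the value of 1 are examples, so test what works in your own environment:

# example only: switch to the next path after every single I/O for this device
~ # esxcli storage nmp psp roundrobin deviceconfig set --device=naa.60014051dd470b9d2ee1d393bd8df2d4 --type=iops --iops=1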
About DS412+ and VAAI
As stated before, the DS412+ supports the VAAI primitives. We can test whether VAAI is actually recognized and working by issuing the following commands; each command checks one of the primitives, and an Int Value of 1 means the primitive is enabled:
~ # esxcli system settings advanced list -o /DataMover/HardwareAcceleratedMove
   Path: /DataMover/HardwareAcceleratedMove
   Type: integer
   Int Value: 1
   Default Int Value: 1
   Min Value: 0
   Max Value: 1
   String Value:
   Default String Value:
   Valid Characters:
   Description: Enable hardware accelerated VMFS data movement (requires compliant hardware)
~ # esxcli system settings advanced list -o /DataMover/HardwareAcceleratedInit
   Path: /DataMover/HardwareAcceleratedInit
   Type: integer
   Int Value: 1
   Default Int Value: 1
   Min Value: 0
   Max Value: 1
   String Value:
   Default String Value:
   Valid Characters:
   Description: Enable hardware accelerated VMFS data initialization (requires compliant hardware)
~ # esxcli system settings advanced list -o /VMFS3/HardwareAcceleratedLocking
   Path: /VMFS3/HardwareAcceleratedLocking
   Type: integer
   Int Value: 1
   Default Int Value: 1
   Min Value: 0
   Max Value: 1
   String Value:
   Default String Value:
   Valid Characters:
   Description: Enable hardware accelerated VMFS locking (requires compliant hardware)
Another way to check each volume for VAAI compatibility is:
~ # esxcli storage core device vaai status get
naa.60014051dd470b9d2ee1d393bd8df2d4
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported
mpx.vmhba32:C0:T0:L0
   VAAI Plugin Name:
   ATS Status: unsupported
   Clone Status: unsupported
   Zero Status: unsupported
   Delete Status: unsupported
naa.6001405b0cfd378dfbc9d3d88db16fda
   VAAI Plugin Name:
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported
Looks good, don’t you think? An actual clone of a virtual machine will be off-loaded to the DS412+, increasing copy speed and reducing the network load for such a task. Read more about VAAI in this VMware KB article.
To conclude
I hope this article gave you some insights into the DS412+ running as an iSCSI solution in a vSphere 5.1 Update 1 environment. In a next article I will test NFS and do some performance testing. Subscribe to this website by leaving your email address in the textbox on the right to stay tuned for new articles!
11 Comments
pitchdown
Do you also have this blog available for QNAP ?
Thx
viktorious
Uhm no…sorry.
Marco Broeken
I know my QNAP has VAAI support on iSCSI volumes (I own the TS-459 Pro II)
You need to install firmware 3.8.2 Build0301 or up.
Officially VMware VAAI support is listed for the TS-x79, x70U, and x69 series, but it works on lower models too.
Todd Mace
Nice post and run down of the Synology advanced features!
viktorious
Thx!
Dhobah Yare
Thanks Viktor for the post. Which gigabit L3 switch are you using in the lab? I'm currently refreshing my home lab and I'm looking for a cheap 8-port gigabit switch with LACP support to test the DS412+ LACP.
viktorious
Hi, I am not using LACP on the DS412+ because of my iSCSI configuration; you won't need LACP for that. I have a Cisco SG200-08, which supports LACP and up to 16 VLANs (costs < 100 dollars). I had some troubles with this switch and removed it from the lab though, and I'm now using an unmanaged basic consumer Gbit switch.
The Cisco SG200-08 is a layer 2 switch, by the way….
INI
Very interesting post!
Could you share your network configuration (diagram) as well?
Jason
Note that Synology uses a 64 KB chunk size and it's not configurable yet (I've submitted the request to Synology and they are looking at it), while the minimum VMware block size is 1 MB.
If you use RAID 5 on the 4-bay unit you will have three (3) data disks and this will cause you performance issues.
E.g. you'll never get a full stripe write, and on top of that there are the various partition alignment issues with the virtual machines.
I like the Synology boxes a lot, we have many VMware deployments using the ten (10) bay units.
Just a tip: if you cannot divide 1024 KB by the number of data disks in your RAID set and get an integer, then don't do it.
4x disks in RAID 5 = 3x data, 1x parity; 1024 KB / 3 = 341.33 KB > Bad
5x disks in RAID 5 = 4x data, 1x parity; 1024 KB / 4 = 256 KB > Good
Mirrors don’t have a parity disk and therefore don’t have this problem.
Daniel
Excellent post!
Wasim
Recently I got a DS1515+. For now I have a 2 x 2 TB (RAID 0) block-level LUN and a 2 x 1 TB (RAID 0) file-based setup.
First of all, I had to use a file-based LUN to get VAAI support, and when I move VMs from the VAAI-supported datastore to the unsupported datastore the transfer is very slow and the CPU is showing a high I/O wait (50-89%).
If I set up both LUNs to support VAAI there is still I/O wait.
I do have multipath configured properly.
Because of the I/O wait the data transfer is slow.
Not able to figure out where exactly is the problem.