
  1. 1


    Do you also have this blog available for QNAP?


    1. 1.1
    2. 1.2

      Marco Broeken

      I know my QNAP has VAAI support on iSCSI volumes (I own the TS-459 Pro II).

      You need to install firmware 3.8.2 Build 0301 or up.
      VMware VAAI support is listed for the TS-x79, x70U, and x69 series, but it works on lower models too.

  2. 2

    Todd Mace

    Nice post and run down of the Synology advanced features!


  3. 3

    Dhobah Yare

    Thanks Viktor for the post. Which Gigabit L3 switch are you using in the lab? I’m currently refreshing my home lab and I’m looking for a cheap 8-port gigabit switch with LACP support to test LACP on the DS412+.

    1. 3.1


      Hi, I am not using LACP on the DS412+ because of my iSCSI configuration; you won’t need LACP for that. I had a Cisco SG200-08, which supports LACP and up to 16 VLANs (costs <100 dollars), but I had some trouble with it and removed it from the lab. I am now using a basic unmanaged consumer Gbit switch.

      The Cisco SG200-08 is a layer 2 switch…

  4. 4


    Very interesting post!
    Could you share your network configuration (diagram) as well?

  5. 5


    Note that Synology uses a 64 KB chunk size and it’s not configurable yet (I’ve submitted the request to Synology, and they are looking at it), while the minimum VMware block size is 1 MB.

    If you use RAID5 on a four-bay unit you will have three (3) data disks, and this will cause you performance issues.

    E.g. you’ll never get a full stripe write, and on top of that there are various partition alignment issues with the virtual machines.

    I like the Synology boxes a lot; we have many VMware deployments using the ten (10) bay units.

    Just a tip: if you cannot divide 1024 KB by the number of data disks in your RAID set and get an integer, then don’t do it.

    4 disks in RAID5 = 3 data, 1 parity; 1024 KB / 3 = 341.33 KB > Bad
    5 disks in RAID5 = 4 data, 1 parity; 1024 KB / 4 = 256 KB > Good

    Mirrors don’t have a parity disk and therefore don’t have this problem.
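    The divisibility tip above can be sketched as a quick check. This is just an illustration of the arithmetic in the comment, not a Synology or VMware tool; the function name and the 1024 KB constant (VMware's minimum block size, per the comment) are assumptions for the sketch.

    ```python
    # VMware's minimum block size per the comment above: 1 MB = 1024 KB.
    VMFS_BLOCK_KB = 1024

    def full_stripe_friendly(total_disks: int, parity_disks: int = 1) -> bool:
        """True if 1024 KB splits evenly across the data disks in the RAID set,
        i.e. a full-stripe write is possible (illustrative helper)."""
        data_disks = total_disks - parity_disks
        return VMFS_BLOCK_KB % data_disks == 0

    # Reproduce the two RAID5 examples from the comment:
    for n in (4, 5):
        data = n - 1  # RAID5: one parity disk
        per_disk = VMFS_BLOCK_KB / data
        verdict = "Good" if full_stripe_friendly(n) else "Bad"
        print(f"{n} disks in RAID5: {data} data, 1024/{data} = {per_disk:.2f} KB -> {verdict}")
    ```

    Mirrors pass trivially: with no parity disk, every disk holds a full copy, so the divisibility question never arises.
    
    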

  6. 6


    Excellent post!

  7. 7


    Recently I got a DS1515+. For now I have a 2 x 2 TB (RAID0) block LUN and a 2 x 1 TB (RAID0) file-based setup.
    At first I had to use a file-based LUN to get VAAI support, and when I move VMs from a VAAI-supported datastore to an unsupported datastore, the transfer is very slow and the CPU shows high I/O wait (50-89%).
    Even if I set up both LUNs to support VAAI, there is still I/O wait.
    I do have multipath configured properly.
    Because of the I/O wait, the data transfer is slow.
    I am not able to figure out where exactly the problem is.

