Synology DS1512+ versus SSD: Performance Benchmark on VMware / SCVMM

We finally got rid of the HP MSA1000 we used for our lab environment. Although we only host around 40 servers on the SCVMM cluster, the 35 MB/sec the MSA delivered was a bit of a pain.

So we found that superfancy Synology product, and its spec sheet claimed roughly 200 MB/sec read/write performance, so we thought: why not give it a try! If it doesn’t work out, we can still have our students return it to the shop.

We are currently hosting about 20 virtual machines on the Synology DS1512+ (iSCSI with jumbo frames), and it runs pretty well!

The claimed 200 MB/sec is pretty close to real-world performance. I ran a small benchmark using ATTO on one of the virtual machines, and here are the results:

Hyper-V guest, iSCSI-attached to the Synology DS1512+: up to 170 MB/sec write & 182 MB/sec read

For comparison, an ESX guest residing on a local OCZ Agility 3 SSD (240 GB, 2.5″, SATA3, MLC chips, rated 525 MB/s): up to 270 MB/sec write & 400 MB/sec read

ESX guest residing on a local Hitachi 2000 GB SATA3 drive (7200 rpm, 64 MB cache): up to 90 MB/sec write & 96 MB/sec read

20 responses to Synology DS1512+ versus SSD: Performance Benchmark on VMware / SCVMM

It would be nice to see more details about your iSCSI-attached Synology DS1512+ setup. We currently max out at 120 MB/sec on a RAID 5, 1 GbE, 7200 rpm setup and have tried jumbo frames and LAG, but we haven’t been able to break out of the 120 MB/sec limit.

Hey John, that is weird. Are you running VMs on it? If so, how many?
I did the benchmark within a VHD (Microsoft Virtual HD) on a system that was setting up 2 machines and hosting about 10 machines at the time I captured it.
I’m not in the office right now, but I can send you details in 1.5 weeks if you wish.

Now put some SSDs in the Synology and run the tests again!
We’ve been using the Synology DiskStation & RackStation units for our datacenter, and for the price they seem to perform rather well. I would love to see some of the network offloading functions added, but we’re willing to sacrifice that for the more-than-$10k cost difference compared to a NetApp, EMC, or EqualLogic appliance. It’s a great way to configure a “poor man’s” cloud that costs in the tens of thousands, instead of hundreds of thousands.

I have spent many days tuning my Synology DS1812+ based on your figures, and there is basically nothing I can do to reach the figures you have. I max out at 120 MB/sec. I use an ESX host, which of course differs from your config. I also use jumbo frames and iSCSI. I have tested link aggregation, but on advice changed that to VMware’s multipathing; that didn’t change much, really.

What else, except for jumbo frames and RAID 5, can you tell us? I also have mine using jumbo frames. The most interesting part of this is that your figures are above what a 1 GbE connection can handle, and the DS1512+ and DS1812+ are not supposed to be able to handle more than that in one session.
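[Editor’s note: a quick back-of-the-envelope check supports that 120 MB/sec ceiling. The sketch below estimates the TCP payload throughput of a single 1 GbE session; the 38-byte Ethernet framing and 40-byte IP+TCP header overheads are standard values, while iSCSI PDU overhead is deliberately ignored to keep it simple.]

```python
# Rough payload ceiling for one TCP/iSCSI session on a 1 GbE link.
# Per-frame overhead assumed: 38 bytes on the wire (preamble + Ethernet
# header + FCS + inter-frame gap) plus 40 bytes of IP + TCP headers
# carried inside the MTU. iSCSI PDU overhead is ignored for simplicity.

LINK_BYTES_PER_SEC = 125_000_000  # 1 Gbit/s line rate

def tcp_payload_mb_per_sec(mtu: int) -> float:
    payload = mtu - 40        # subtract IP + TCP headers
    wire_frame = mtu + 38     # add Ethernet framing overhead
    return LINK_BYTES_PER_SEC * payload / wire_frame / 1_000_000

print(round(tcp_payload_mb_per_sec(1500), 1))  # standard frames: ~118.7
print(round(tcp_payload_mb_per_sec(9000), 1))  # jumbo frames:    ~123.9
```

So roughly 120 MB/sec really is the wall for one session on one 1 GbE link, jumbo frames or not; sustained figures above that would have to come from caching or from multiple sessions/links.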

Hello Zulan

Thanks for your comment! I think it entirely depends on how many VMs are lying on the NAS. The test above was done while only very few VMs were running. We now have about 40 machines on the Synology, and I did another benchmark for you: it has come down to 30 MB/sec write and 38 MB/sec read. The settings of the Synology are out of the box; jumbo frames are enabled everywhere. The other tests above were performed using an internal SSD / HDD (not iSCSI-attached); I just wanted to show an example in that post. Hope that helps? Greetings, Tom

Thanks for your reply. I thought as much, and therefore I haven’t put any VMs or anything else on the SAN. It’s one LUN mapped to one VM (not as the system drive) and benchmarked from that VM.

I also see that your SSD performance is extremely good. I have an SSD connected straight to the host itself. It’s an Intel 520, which is supposed to be quite fast. I’m not sure what kind of SSD yours is, but mine should be able to push about 500 MB/sec, yet it still maxes out at under 250 MB/sec for both read and write, much lower than your result. In all fairness, it’s used as the system drive for 2 VMs, where one is totally idle while testing. Maybe it is ESX after all, or maybe it’s my host not being quick enough? I have an Intel i7 @ 2.8 GHz with 24 GB of memory. A total of 3 VMs running on it.

Hello Zulan, I use these 223.50 GB SSDs (Storage I/O on ESX disabled and no RAID).

The host has a 6-core i7 with 64 GB of RAM (I use a gaming board from Gigabyte that has more RAM slots), and I am running about 20 machines on it.

Is there any final result on how to get the max out of the 1512+ using VMware and iSCSI?
Is any how-to available for maximal performance?


Same here. Can’t break out of the 120 MB/sec either. I would like to know what kind of switch he is using, and whether LACP is set to active/active or active/passive, etc.

Hello Laapsaap
Good point, I don’t know if I mentioned this, but I do use LACP :) Thanks for pointing that out.
However, those tests were made on local disks inside the VMs, so network bandwidth would not really matter.

Greetings, Tom
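[Editor’s note: for the LACP discussion above, it may help to recall that link aggregation balances flows, not packets — a typical switch hashes each source/destination pair onto one member link, so a single iSCSI session never spans multiple 1 GbE links. A toy sketch of flow-based hashing (the CRC hash here is purely illustrative, not any real switch’s algorithm):]

```python
import zlib

def pick_member_link(src_ip: str, dst_ip: str, n_links: int) -> int:
    """Map a flow to one aggregate member link via a toy hash."""
    return zlib.crc32(f"{src_ip}->{dst_ip}".encode()) % n_links

# The same initiator/target pair always hashes to the same link, so
# adding links to the LAG never widens a single iSCSI session.
link = pick_member_link("10.0.0.5", "10.0.0.20", 4)
assert link == pick_member_link("10.0.0.5", "10.0.0.20", 4)
```

This is why VMware’s multipathing, which opens multiple iSCSI sessions, can in principle use more than one link where LACP alone cannot.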

Thanks for the update! Nice performance boost there. We used OCZs at first but experienced rather high failure rates. We’ve since switched to Crucial & Samsung 840s. I prefer Intel drives for clients, but the price has kept me from using them in our lab.

Have you had to create new data stores to take advantage of DSM 4.2 VAAI? Or were you able to add it on the fly?

Also, I was curious whether you’re planning on testing storage HA now that Synology has added it to many of their VMware-ready line.

Hello David

Thanks a lot for your comment! I think VAAI is enabled by default as far as I remember; I simply created new LUNs. (Our lab is down at the moment, so I can’t really verify, and I have a short memory ;)).
We’re not planning high availability either, since our primary storage is on a far more expensive NetApp FAS and the Synology was only bought for test purposes and non-critical virtual machines.

Greetings, Tom
