You may have the crazy idea that you can move your high performing database server, running on your all-flash SAN, to the cloud and expect it to perform the same for a tenth of the price. Boy, are you in for an awkward moment trying to explain why the migrated server isn’t performing the way you planned or is costing you more.
Premium disks do buy you some improvement over standard HDDs and SSDs, but the disk alone is generally nowhere close to the performance you get from a dedicated SAN.
With premium disks, the size tiers also work as performance tiers. Even though they are all cut from the same back-end storage, smaller disks get lower IOPS and throughput caps than bigger ones, and the bigger disks also respond with lower latency. You might not need the space of a larger disk, but you may well need its higher max I/O per second and lower latency.
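To make that concrete, here is a minimal Python sketch of picking the smallest tier that satisfies space, IOPS, and throughput requirements at once. The tier names and caps below are illustrative placeholders in the spirit of the published limits, not guaranteed current Azure figures; check the documentation before relying on them.

```python
# Illustrative premium disk tiers: name -> (size GiB, max IO/s, max MB/s).
# These numbers are examples only; look up the current Azure caps.
TIERS = {
    "P10": (128, 500, 100),
    "P20": (512, 2300, 150),
    "P30": (1024, 5000, 200),
}

def smallest_tier(needed_gib, needed_iops, needed_mbps):
    """Return the smallest (cheapest) tier meeting all three requirements."""
    for name, (size, iops, mbps) in sorted(TIERS.items(), key=lambda t: t[1][0]):
        if size >= needed_gib and iops >= needed_iops and mbps >= needed_mbps:
            return name
    return None  # no single disk is big/fast enough

# You only need 100 GiB of space, but 2,000 IO/s pushes you up a tier.
print(smallest_tier(100, 2000, 100))  # P20
```

Note how the performance requirement, not the space requirement, ends up choosing the tier; that is exactly the "pay for size to get speed" effect described above.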
Windows Server now has the ability to create storage pools that mimic a typical RAID configuration. You could just group the disks together as one volume and you would be off to the races… almost. To get any tangible performance gain, though, you will need to forgo the safety of parity.
In the world of on-premises physical RAID, parity is what saves your bacon; turning it off is what a crazy person would do. In the cloud, the disks you are pooling are abstracted away from the physical realm. There is already a RAID behind your RAID. As far as Microsoft is concerned, losing an Azure disk should not happen. I say should not, because anyone with the means could still detach a disk from a running Virtual Machine and cause you to lose your data.
With your expensive pool of premium disks, you will still feel like someone is holding you back. That is because they are: there is a maximum storage throughput on the Virtual Machine itself, separate from the disk limits. Increasing the size of your Virtual Machine raises that cap, letting you actually use the combined speed of all those disks.
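In other words, the effective throughput is the lesser of the pooled disks' combined limit and the VM-level cap. A quick sketch with made-up numbers:

```python
# Hypothetical figures for illustration; substitute real Azure limits.
disk_mbps = [150, 150, 150]    # three disks in a simple (striped) pool
pool_limit = sum(disk_mbps)    # 450 MB/s available from the disks

vm_cap_mbps = 192              # a smaller VM size's storage throughput cap

effective = min(pool_limit, vm_cap_mbps)
print(effective)  # 192 -> the VM, not the disks, is the bottleneck
```

When the `min()` lands on the VM cap, adding more or bigger disks buys you nothing; you have to move up a VM size first.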
You would think the storage optimized L series Virtual Machines would get you faster disks for your workload, but you thought wrong. They are optimized to provide improved performance for the temporary storage volume only. In fact, that series sometimes has lower max disk performance than comparable DSv3 series Virtual Machines.
The temporary storage disk is much faster than managed disks and should be used whenever possible. The big drawback is that any data stored on it is lost when the VM is shut down, reset, or crashes (that still happens in the cloud). So it should only hold cache data that can be recreated on startup or on demand.
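That "rebuildable cache" pattern can be sketched in a few lines of Python. The path is a placeholder (the temporary disk is commonly `D:\` on Azure Windows VMs, but verify for your series), and `rebuild` stands in for whatever regenerates the data from its source of truth:

```python
import os

def get_cached(cache_dir, name, rebuild):
    """Return cached bytes, rebuilding them if the temp disk was wiped."""
    path = os.path.join(cache_dir, name)
    if not os.path.exists(path):           # gone after shutdown/reset/crash
        os.makedirs(cache_dir, exist_ok=True)
        with open(path, "wb") as f:
            f.write(rebuild())             # recreate from the source of truth
    with open(path, "rb") as f:
        return f.read()

# Usage sketch: point cache_dir at the temporary drive, e.g.
# get_cached(r"D:\appcache", "report.bin", expensive_computation)
```

The key property is that nothing on the temporary disk is authoritative: losing the whole directory only costs you a rebuild, never data.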
To achieve the best performance for your performance-dependent workload, you need to determine how much disk performance you require and find the most efficient balance of disk tier, pool configuration, and Virtual Machine size. Lucky for you, I worked through a lot of that and have a colorful table.
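That balancing act is really a small search problem: for each combination of VM size, disk tier, and disk count, compute the effective throughput and keep the cheapest combination that meets the target. The sketch below uses entirely made-up names and prices; plug in current Azure figures to use it for real.

```python
# Toy catalogs -- all names, caps, and prices are fabricated examples.
vms = {"D4s": (300, 190), "D8s": (600, 380)}    # name: (max MB/s, $/month)
disks = {"P20": (150, 73), "P30": (200, 135)}   # tier: (max MB/s, $/disk)

def cheapest(target_mbps, max_disks=4):
    """Find the lowest-cost (VM, tier, count) meeting the throughput target."""
    best = None
    for vm, (vm_cap, vm_cost) in vms.items():
        for tier, (d_cap, d_cost) in disks.items():
            for n in range(1, max_disks + 1):
                # Effective throughput is capped by the VM itself.
                if min(vm_cap, n * d_cap) >= target_mbps:
                    cost = vm_cost + n * d_cost
                    if best is None or cost < best[0]:
                        best = (cost, vm, tier, n)
    return best

print(cheapest(400))  # (599, 'D8s', 'P20', 3)
```

Note the non-obvious outcome: three cheaper P20 disks beat two P30s here, and the smaller VM is disqualified entirely because its own cap sits below the target. That is the kind of trade-off the table below was built to expose.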
Below is a table of results I meticulously collected, covering a wide range of resource options. As these results were collected almost a year ago, things may have changed, but they should give you a good idea of what to try. The prices are not accurate and should only be used for comparing sizes.
Operating System: Windows Server 2016
Configuration: Simple Storage Pool of 3 disks with 3 columns and an NTFS volume.
Testing Tool: Microsoft DiskSpd
|Series|vCPUs|Memory|Cost/mth|VM Max IO/s|VM Max MB/s|Disk Tier|Disk Size|Cost/disk|Disk Max MiB/s|Disk Max IO/s|Measured MiB/s|Measured IO/s|Avg Latency|Std Deviation|
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
James started out as a web developer who dabbled in hardware and open-source software development. He then switched to IT infrastructure and spent many years working with virtualized servers, networking, storage, and domain management.
His wide range of talents and experience were not being used properly in traditional IT. So he made the switch to cloud computing, as it was a perfect fit.
For the last two years he has been dedicated to automating and providing secure cloud solutions for our clients.