Virtual Machine Cloud Backup Software You Can’t Live Without

Using virtual machine cloud backup software to back up your VMs to the cloud makes sense and involves just a few steps. Sending VM backups to the cloud, however, requires a bit of planning; without it, the process may take too long or end up costing too much.
First of all, you need to decide which cloud platform to use. A special case is BackupChain, which supports setting up a DIY cloud. Having your own secure cloud server cuts costs dramatically. Large providers, such as Amazon or Azure, charge for each access transaction as well as a monthly storage fee. If you do the math, especially for larger VMs, and especially if you already have the infrastructure, it makes perfect sense to look into a DIY cloud.
If you want to use one of the cloud giants’ storage offerings, look into bandwidth costs, monthly storage costs, and access costs, and come up with a rough cost estimate. Because the number of file/block accesses to your account is pretty much unpredictable, the monthly cost will be, too. Predictability of cost is another good reason to go for a DIY cloud when you can.
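As a rough sketch of such an estimate, the snippet below simply adds up the three cost components mentioned above. All of the rates and the usage figures are made-up placeholders, not actual provider prices; substitute the numbers from your provider’s price list.

```python
# Rough monthly cost estimate for cloud VM backup storage.
# All rates below are hypothetical placeholders; substitute your
# provider's actual prices for storage, requests, and egress.

STORAGE_PER_GB_MONTH = 0.023   # $/GB stored per month (assumed)
PUT_PER_10K_REQUESTS = 0.05    # $ per 10,000 upload/write requests (assumed)
EGRESS_PER_GB        = 0.09    # $/GB downloaded, e.g. for test restores (assumed)

def monthly_cost(stored_gb, write_requests, restored_gb):
    """Return an estimated monthly bill for storage, writes, and restores."""
    storage  = stored_gb * STORAGE_PER_GB_MONTH
    requests = write_requests / 10_000 * PUT_PER_10K_REQUESTS
    egress   = restored_gb * EGRESS_PER_GB
    return storage + requests + egress

# Example: 500 GB of backups, ~200,000 block uploads, one 100 GB test restore.
print(f"Estimated monthly cost: ${monthly_cost(500, 200_000, 100):.2f}")
```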
Next, you need to check the size of your VMs and the upload bandwidth you have available. You could use speedtest.net or a similar service to get an idea. Divide the Mbps figure by 9 (rather than 8) to allow for some overhead and obtain MBytes/sec. Assume the first full backup compresses to roughly 60% of the VM size, and use the MBytes/sec figure to estimate how long the full upload will take. Ideally, the full backup upload should finish within a few days. An increment will be about 5% or less of the virtual machine size. That increment should be small enough to upload within 24 hours; otherwise, you will need to schedule your cloud backup to run only a few times a week instead of daily.
Example: If your VM is 100 GB and your upload bandwidth is 10 Mbps, the upload rate is about 1.1 MByte/sec, or roughly 3.9 GByte per hour. The compressed full backup = 60% of 100 GB = 60 GB and will take about 60/3.9 ≈ 15 hours to complete. The increment is estimated at 5 GB and will take roughly an hour and a half to upload. That is a reasonable scenario and a good candidate for a daily backup. If you added more VMs, the full upload and the daily incremental upload would obviously take much longer, and at some point the bandwidth wouldn’t suffice.
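Here is the same calculation as a minimal sketch, so you can plug in your own VM size and measured bandwidth. The 60% compression ratio and 5% increment size are the rough assumptions used above, not measured values.

```python
# Estimate full and incremental backup upload times.
# The compression ratio and increment percentage are the rough
# assumptions from the article; replace them with your own figures.

def upload_hours(size_gb, upload_mbps, overhead_divisor=9):
    """Hours needed to upload size_gb at the given line speed.

    Dividing Mbps by 9 instead of 8 leaves some headroom for
    protocol overhead, as suggested above.
    """
    mbytes_per_sec = upload_mbps / overhead_divisor
    gbytes_per_hour = mbytes_per_sec * 3600 / 1000
    return size_gb / gbytes_per_hour

vm_size_gb  = 100      # total VM size
upload_mbps = 10       # measured upload bandwidth
compression = 0.60     # assumed: full backup shrinks to 60% of VM size
increment   = 0.05     # assumed: daily increment is ~5% of VM size

full_h = upload_hours(vm_size_gb * compression, upload_mbps)
incr_h = upload_hours(vm_size_gb * increment, upload_mbps)
print(f"Full backup upload:       {full_h:.1f} hours")
print(f"Daily incremental upload: {incr_h:.1f} hours")
```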

The key to reducing the size of incremental backups to a minimum is deduplication. BackupChain shrinks virtual disk files during incremental backup by using deduplication, and it offers various block size settings to fine-tune the process. As long as the changes inside the VM are minimal, your incremental delta files will also be very small and will hence upload quickly.
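To illustrate the general idea (this is a conceptual sketch, not BackupChain’s actual implementation), block-level deduplication splits a virtual disk into fixed-size blocks, hashes each block, and only keeps the blocks whose hashes have not been seen before:

```python
# Conceptual block-level deduplication: only blocks not already stored
# need to go into the incremental upload. Simplified illustration only.
import hashlib

BLOCK_SIZE = 4 * 1024 * 1024  # 4 MB blocks; real tools let you tune this

def changed_blocks(disk_path, known_hashes):
    """Yield (offset, data) for blocks whose hash is not in known_hashes."""
    with open(disk_path, "rb") as f:
        offset = 0
        while True:
            block = f.read(BLOCK_SIZE)
            if not block:
                break
            digest = hashlib.sha256(block).hexdigest()
            if digest not in known_hashes:
                known_hashes.add(digest)
                yield offset, block
            offset += len(block)

# Usage sketch: only the blocks returned here would be uploaded.
# known = set()
# for offset, data in changed_blocks("vm-disk.vhdx", known):  # hypothetical file
#     upload(offset, data)                                    # hypothetical routine
```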

What about restoring? That’s where the cloud giants will likely rip you off, because downstream data is usually expensive; the moment data leaves the data center is when they start rubbing their hands. Because it makes sense to test-restore VMs every once in a while, this is another cost burden to keep in mind. Again, the DIY cloud saves you here as well, but the infrastructure needs to be good, i.e. upload and download bandwidth at both sites you own must be adequate.
Stability of the upload and download links is another important factor, as is latency. High latency between the cloud servers and your office will slow the stream down, even though you are paying your ISP big money for a high-bandwidth connection. The best thing to do is to run many tests in a row and use the logs to gather timing and size statistics. At the end of the day, the actual backup sizes matter, not the estimated ones; hence, it makes sense to use those numbers in future cost and upload or restore time calculations.
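As an example of turning those test runs into usable numbers, the sketch below takes a few recorded (size, duration) pairs (placeholder values, not real measurements; pull the real figures from your backup logs) and derives the observed throughput to use in future estimates:

```python
# Derive actual throughput statistics from a series of test backup runs.
# The (size_gb, duration_hours) pairs below are placeholder values;
# take the real numbers from your backup logs.
from statistics import mean

test_runs = [
    (58.2, 15.5),   # full backup: GB transferred, hours taken
    (4.7, 1.4),     # incremental runs
    (5.3, 1.6),
    (4.1, 1.2),
]

throughputs = [size / hours for size, hours in test_runs]  # GB per hour
avg_gb_per_hour = mean(throughputs)

print(f"Average observed throughput: {avg_gb_per_hour:.2f} GB/hour")
# Use the observed figure, not the theoretical line speed, to predict
# how long future uploads or restores will take.
print(f"Estimated time for a 60 GB restore: {60 / avg_gb_per_hour:.1f} hours")
```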
