I have been using Crashplan for at least half a decade and wrote several posts about it. Over these years, though, Code 42, the company behind Crashplan, made a bunch of changes that rendered it unbearable to use.

The first issue is related to the upload speed. Historically, Crashplan was not known for being the fastest backup system out there. The usual issues were related to de-duplication being turned on. In my case, I ended up getting upload speeds of ~100 kB/s. I am not sure if there are technical reasons for this, but the speed was pretty constant regardless of which corner of the world I was backing up my data from. Coincidentally, this is about the 10 GB per day that Crashplan quotes on their webpage.
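As a quick back-of-the-envelope check (my own arithmetic, not a figure from Crashplan's documentation), that rate and the quoted daily volume are consistent:

```python
# What does a sustained upload of 100 kB/s amount to over a full day?
rate_kb_per_s = 100                  # observed upload speed
seconds_per_day = 24 * 60 * 60       # 86,400 seconds
gb_per_day = rate_kb_per_s * seconds_per_day / 1_000_000
print(f"{gb_per_day:.1f} GB/day")    # ~8.6 GB/day, in the ballpark of the quoted 10 GB/day
```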
The fix for this problem was to disable the de-duplication. Early versions of Crashplan had a GUI option to select the level of de-duplication. This option was later removed, but the setting was still changeable via the configuration file. In the last major update, Crashplan removed the config files altogether, and one had to resort to some "creative solutions" to change the settings.

Lately, it seemed that the speed issue got better (at least for my rather slow connection). I don't know if this was because I uploaded less data recently or whether it is a permanent change. Either way, this only shows that you are at Crashplan's mercy.

The second issue was related to the never-ending block synchronization. I understand the importance of verifying the data against the backup, but if you have a large amount of data this process can take days. With the latest update, the block synchronization was triggered at least once a week, which meant that you could not back up your data for days, and if for whatever reason one of my external hard drives got disconnected, the whole process started all over again.
Another issue that became more prominent over time is resource consumption. You will find plenty of people out there complaining that Crashplan is bad because it is not "native" but written in Java and therefore has bad performance. If you do not know what that means, don't worry: it's complete nonsense and you can ignore these statements. Java is plenty fast and more than suitable for this kind of job. The real problems seem to be with the design and the fact that Crashplan will hold a lot of data in memory. Crashplan's official recommendation is to allocate 1 GB of memory for every TB of data you have. This is clearly excessive and not necessary if you look at other backup systems.
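To make that recommendation concrete (the archive size below is a hypothetical example, not a number from this post):

```python
# Crashplan's guidance: roughly 1 GB of client memory per TB of backed-up data.
gb_per_tb = 1                        # official recommendation
data_tb = 10                         # hypothetical archive size
heap_gb = data_tb * gb_per_tb
print(f"{heap_gb} GB of RAM just to run the backup client")
```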
For all these reasons, I decided to investigate an alternative backup strategy.

## Requirements

Backup solutions are very subjective. In this section, I lay out my requirements for an ideal backup solution.

- Fast Backups: The backup should be fast and not unnecessarily limited (at some point you will run into limits with everything, but 100 kB/s does not seem reasonable to me).
- One Solution: I like to have an online backup as well as local backups (for redundancy and speed I want backups on local storage). This means the software needs to allow multiple backup storage locations.
- Restoration via Client: If I want to restore data, I want to restore my files via the client back to the original location and not have to rely on a web download or the delivery of a hard drive.

After investigating several different backup software products, I ended up choosing Arq. I won't go into much detail about the other options I evaluated, but I will explain in the next section why you should not use Backblaze, as it would be the obvious choice for most people. In the rest of this article, I will outline my reasoning for choosing Arq.