In The Words Of A Reggae Group That I Like A Lot, Mystic Revealers... "Got To Be A Better Way" For Your Backup Storage

For those of you who have no personal or professional interest in I.T. backup storage, I suggest you just scroll to the end, where I've posted a link to the song, "Got To Be a Better Way". I do need to point out that the lyrics and the song are about a much more important topic than backup storage. The song is a poignant cry for social justice. It's very moving, and has such a cool sound. Please enjoy it!

Back to the article: for those who do have an interest in backup storage, please read on...

What is the fastest way for you to back up your important data? Straight to disk. Right? So if the main concern was your backup window being as short as possible (or just never being exceeded), you would always back up straight to disk. Right?

Is that what you're doing? Is that what most people are doing? What I can say with certainty is that many of you do this, albeit with a variation. A very important variation. If you backed up all your data, all the time, keeping the number of copies required, all on disk, what problems, if any, might that pose? You got it! You'd burn through disk like it's going out of style. Not a very cost-effective method, right?

So, the variation is that some form of deduplication has been introduced into most of your backup environments to reduce the amount of disk you need. Some of you are using backup software applications with deduplication algorithms built in. You're generally getting anywhere from a 2:1 to an 8:1 deduplication ratio. This is going to save you from burning through as much disk as you would without software deduplication. However, this very compute-intensive process is going to dramatically tax the processing ability of your media server. As you know, this results in slower ingest rates, which means longer backup windows. So for some of you, based upon the size of your environment (under 30 TB) and retention policies (few copies), that may be all you need.
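
To put rough numbers on the disk side of that trade-off, here's a minimal back-of-the-envelope sketch in Python. The data size, copy count, and deduplication ratios are illustrative assumptions on my part, not measurements from any particular product.

```python
# Back-of-the-envelope disk math for backup retention, with and without
# software deduplication. All figures below are illustrative assumptions.

def disk_needed_tb(primary_data_tb, retained_copies, dedup_ratio):
    """Approximate backup disk consumed by a retention policy.

    A dedup_ratio of 1.0 means straight disk (no deduplication);
    4.0 models a typical 4:1 software deduplication result.
    """
    raw_tb = primary_data_tb * retained_copies
    return raw_tb / dedup_ratio

primary_tb = 30   # primary data to protect (assumed)
copies = 8        # retained full copies (assumed)

print(f"straight to disk : {disk_needed_tb(primary_tb, copies, 1.0):6.1f} TB")
print(f"4:1 sw dedup     : {disk_needed_tb(primary_tb, copies, 4.0):6.1f} TB")
```

The savings are real; the catch is that the CPU cycles earning them come out of the same media server that is trying to hit your backup window.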

However, for those of you with larger environments (30+ TB) and/or longer retention policies (5+ copies), this approach still only gets you partway to where you need to be. You have saved some disk consumption, but is this as good as it gets? This brings us to the next stop along the path of backup storage evolution.

A number of companies have made scale-up backup target appliances with inline deduplication available. These appliances have top-of-the-line processing resources and much more aggressive deduplication algorithms (20:1) than those found in the software-only approach. Problem solved, right?

Well, these appliances do help you reduce the amount of disk needed by 2.5 to 20 times. As data grows, you can simply add shelves of disk to meet your capacity needs. Wow!

There are some challenges with this approach, though, and many of you are familiar with them firsthand. Again, deduplication is a very compute-intensive operation, and now you're performing it inline, during the backup window. Couple that with the fact that this approach is at its fastest on day one and slows down as the data grows, and you have backup windows that grow over time as capacity grows. In fact, they will eventually grow to the point where they become unacceptable. At that point, a forklift upgrade is required: the front-end controller can no longer keep up with the processing demands and maintain the backup window, and a costly upgrade becomes necessary.
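
As a stylized illustration of that squeeze, here's a simple model with a fixed inline-dedup ingest rate and steady data growth. The growth rate, ingest rate, and window length are assumptions chosen only to show the shape of the curve, and the model is actually generous, since it doesn't account for ingest slowing as the dedup store grows.

```python
# Stylized model of a scale-up inline-dedup appliance: the front-end
# controller's ingest rate stays fixed while the data it protects grows.
# All numbers are assumptions for illustration only.

ingest_tb_per_hr = 5.0     # sustained inline-dedup ingest (assumed)
backup_window_hr = 10.0    # allowable nightly window (assumed)
data_tb = 30.0             # data under protection in year 0 (assumed)
annual_growth = 0.30       # 30% data growth per year (assumed)

for year in range(6):
    hours_needed = data_tb / ingest_tb_per_hr
    status = "OK" if hours_needed <= backup_window_hr else "WINDOW EXCEEDED"
    print(f"year {year}: {data_tb:6.1f} TB -> {hours_needed:5.1f} h  {status}")
    data_tb *= 1 + annual_growth
```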

This is where the "Got To Be a Better Way" piece comes in.

We agreed that backing up straight to disk is the fastest method. We understand that for large environments with some retention needs, straight disk becomes cost prohibitive. We understand that more deduplication is needed, but we don't like the idea of backup windows growing out of control, only to have to write another large capex check to rein them back in.

It's a Catch-22, a hamster wheel that you can never hop off... or is it?

Imagine that, regardless of the amount of data you need to back up and store (TBs to PBs), and even needing to keep many copies over time, you could do it ALL straight to disk. This, of course, achieves the fastest ingest and shortest backup windows possible.

Now imagine that you could always keep the latest backup copy on that disk, in its native form, and that you could restore or boot VMs from it, in seconds to minutes. 

Stay with me here. Further imagine that you could also deduplicate your data, at a 20:1 ratio, adaptively. In other words, not inline, where deduplication competes for the same cycles that are trying to run your backup jobs.

Now you've just imagined...what ExaGrid provides you.

ExaGrid's unique landing zone and scale-out deduplication approach is 3 times faster for backups, 10 times faster for restores and VM boots, and lower in cost than any other backup storage. You can also back up SQL dumps straight to our appliances, and we support Oracle RMAN channels as well.
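
To make the "adaptive, not inline" distinction concrete, here's a toy comparison in the same spirit as the sketches above. It is not a description of ExaGrid's internals, and the rates are assumptions; the only point is that when deduplication runs off the critical path, the backup window is bounded by raw disk ingest rather than by dedup throughput.

```python
# Toy comparison: inline deduplication vs. landing-zone-style post-process
# ("adaptive") deduplication. All rates are illustrative assumptions; this
# is a scheduling sketch, not a model of any vendor's implementation.

backup_set_tb = 40.0          # size of tonight's backup (assumed)
raw_disk_ingest_tb_hr = 10.0  # straight-to-disk landing rate (assumed)
inline_dedup_tb_hr = 4.0      # ingest rate when deduplicating inline (assumed)

inline_window_hr = backup_set_tb / inline_dedup_tb_hr
landing_zone_window_hr = backup_set_tb / raw_disk_ingest_tb_hr

print(f"inline dedup window : {inline_window_hr:.1f} h")
print(f"landing zone window : {landing_zone_window_hr:.1f} h "
      "(deduplication runs afterwards, off the backup's critical path)")
```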

There really is a better way, and ExaGrid's second-generation, purpose-built backup storage appliance is it.

If you would like to learn more, and either have a conversation about our highly differentiated architecture, or take a deep dive look, please let me know.

If you've read this far, thank you!


Whether you've read to this point, or scrolled down to listen to this cool tune, please enjoy.

