|
cheese-cube posted:Is what you're suggesting free? I'm really just looking to set up a small home lab environment and want to avoid purchasing anything (hence why I'm going with Hyper-V).
|
# ? May 21, 2013 17:02 |
|
|
# ? Apr 26, 2024 21:14 |
|
sanchez posted:Is there an answer to "I need to keep x TB of stuff for 7 years" that isn't tape? Tapes are comforting really, they can't get corrupted by bad firmware or hit by a power surge or become EoL'd by a vendor. The drives can but you can always find drives.

Tape is sitting in a room slowly deteriorating merely by its very existence, and every use of a tape lowers its effective lifespan. I used to cringe every time I had to do a restore from tape going back a few years because it was a crap shoot whether it was going to work properly or not. Replacing tape technologies is also non-trivial: you must roll all of your backups forward to the new technology or keep old drives/robots around to perform restores from old backups. If you've ever had to migrate hundreds of tapes from LTO to T10K or likewise you will hatehatehate tape.

Meanwhile, disk-based backup on an array built for that purpose is constantly protecting your data. You have RAID, scrubs, lost-write protection, and a host of other data protection features guarding your backup data with the same diligence as your production data, which makes it a lot safer than having it sit on a shelf in a room. The other advantages of disk-based backup depend on the vendor, but everything from easy migration from old to new technology to making your backup data immediately available to users where it sits, via clones, are things I've done and been grateful to be able to do.

Tape has its place in long-retention environments where cost constraints make it impossible to keep the full set of backups on disk. But you can most definitely meet a 7-year requirement with disk as well, if you've got the scratch, and I know of companies that do it, including some that do it for compliance reasons because it can make discovery quicker and easier.
|
# ? May 21, 2013 18:34 |
|
EoRaptor posted:I'm not complaining that there are tapes, just that for deleted folders and other simple file recoveries, it's usually much, much faster to use a storage device's volume snapshot abilities to go back in time X hours/days and grab it.

I'm really not sure why they aren't using snapshots on the NetApp controller but I'm guessing it's due to capacity limitations. The storage environment at the moment is a bit of a hodgepodge. There are two SANs, an HP EVA8100 and an EVA8400, which are presented to a NetApp V3410 which in turn provides block and file (CIFS) storage. To complicate things further they have two NetApp FAS3240 NASes, one providing NFS shares to VMware hosts and the other providing CIFS shares for backup staging. I think they're planning on replacing the entire setup with a NetApp FAS6200-series or something.
|
# ? May 22, 2013 17:08 |
|
cheese-cube posted:I'm really not sure why they aren't using snapshots on the NetApp controller but I'm guessing it's due to capacity limitations.

Snapshots on user file shares are generally pretty thin. We retain 3 months' worth of snapshots and we see around 5% of the total space consumed being held by snapshot blocks. You can almost always afford to enable them on low-change data like that. If the concern is running out of space due to snapshot usage you can set up snapshot autodelete to prune them when the volume gets near full. As mentioned, they are so quick to restore from that it's really a waste not to have at least a few available for quick restores of recently lost data. The fact that users can initiate their own restores through the Previous Versions functionality in Windows or the ~snapshot directory makes it even better.
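The autodelete policy described above is roughly this (a toy sketch of the idea, not NetApp's implementation; the thresholds and snapshot sizes are made-up numbers):

```python
from collections import deque

def autodelete(snapshots, used, capacity, trigger=0.90, target=0.85):
    """Drop oldest snapshots once used/capacity crosses `trigger`,
    until usage falls back below `target`.

    snapshots: iterable of (name, bytes_freed_if_deleted), oldest first.
    Returns the list of snapshot names that would be deleted."""
    deleted = []
    if used / capacity < trigger:
        return deleted                     # nothing to do yet
    snaps = deque(snapshots)
    while snaps and used / capacity >= target:
        name, held = snaps.popleft()       # oldest snapshot goes first
        used -= held                       # its unique blocks come free
        deleted.append(name)
    return deleted

# 95% full volume: pruning the oldest snapshot frees enough space
snaps = [("weekly.0", 120), ("daily.1", 40), ("daily.0", 20)]
print(autodelete(snaps, used=950, capacity=1000))  # -> ['weekly.0']
```

The point being that only the oldest snapshots get sacrificed, so your most recent restore points survive as long as possible.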
|
# ? May 22, 2013 18:53 |
|
NippleFloss posted:Snapshots on user file shares are generally pretty thin. We retain 3 months' worth of snapshots and we see around 5% of the total space consumed being held by snapshot blocks. You can almost always afford to enable them on low-change data like that. If the concern is running out of space due to snapshot usage you can set up snapshot autodelete to prune them when the volume gets near full. As mentioned, they are so quick to restore from that it's really a waste not to have at least a few available for quick restores of recently lost data. The fact that users can initiate their own restores through the Previous Versions functionality in Windows or the ~snapshot directory makes it even better.

Yeah, having them set up would be a godsend, especially if users can use Previous Versions. The idiot who configured the NetApp didn't use any standard naming convention for the volumes and shares, so when a user gives me a UNC path to a file/folder to be restored I have to spend ages hunting around to find the actual volume that the share maps to.
|
# ? May 23, 2013 02:08 |
|
Long shot, but do you have dedupe on? If not you might be able to find the space you need for snapshots in dedupe savings alone.
|
# ? May 23, 2013 02:15 |
|
parid posted:Long shot, but do you have dedupe on? If not you might be able to find the space you need for snapshots in dedupe savings alone. I'm pretty sure that dedupe is turned on but I'd have to check with the storage team. Is there a general best-practice for snapshot capacity requirements based on volume size?
|
# ? May 23, 2013 03:06 |
|
It's really more about rate of change than the size of the volume. Each snapshot takes up space for every 4k block that has changed since the last snapshot. Generally for workgroup/home-drive type volumes it's the total change since the oldest snapshot. If you have files that keep changing between snapshots (like a database) it will be more.
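As a back-of-the-envelope model of that (my own sketch, not a NetApp sizing tool; the change rate and retention figures are made-up examples):

```python
BLOCK = 4 * 1024  # WAFL-style 4k block, in bytes

def snapshot_space(changed_blocks_per_day, retention_days, rechange=1.0):
    """Rough bytes pinned by snapshots: every changed 4k block since the
    oldest retained snapshot stays allocated. `rechange` > 1.0 models
    files whose blocks keep changing between snapshots (databases)."""
    return changed_blocks_per_day * retention_days * BLOCK * rechange

# Example: 1 TiB home-drive volume, 0.1% daily change, 90-day retention
vol = 1024**4
daily_changed_blocks = vol * 0.001 / BLOCK
held = snapshot_space(daily_changed_blocks, 90)
print(f"snapshots hold ~{held / vol:.0%} of the volume")  # -> ~9%
```

Plug in your own deltas; a low-change home-drive volume lands in the single digits, which is why the 5% figure quoted earlier is believable.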
|
# ? May 23, 2013 03:46 |
|
cheese-cube posted:I'm pretty sure that dedupe is turned on but I'd have to check with the storage team. Is there a general best-practice for snapshot capacity requirements based on volume size? It really depends on your snapshot schedule and volume deltas. 20% is a good starting point generally.
|
# ? May 23, 2013 23:50 |
|
I am wowed by the knowledge in here. We don't have quite the budget some of you folks have, but I'm hoping someone has some recent experience with datacore.com. The search turned up a few hits but mostly just mentions of the product in passing. Anyone have real-world experience they can relay? We're looking at them vs. the EMC VNXe line at about $25k.
|
# ? May 25, 2013 01:01 |
Are there any NetApp devices I could deploy in a home lab? Like last-gen or something? We use everything from the 6000 series to some 2240s in my smaller networks. I need to learn more about this even though I do not want to be a storage engineer.
|
|
# ? May 25, 2013 01:39 |
|
You don't want a full NetApp at home. Like all SANs, they're power-hungry and they'll run up your electric (and cooling) bill. What you want is ONTAP Edge, which is a fully-functioning version of ONTAP in a VM format. Runs on ESX/ESXi. You should be able to get a free (or trial) copy easy enough, spin up 2-3 of them, and you can test replication/etc. The regular tools (System Manager) work for managing them.
|
# ? May 25, 2013 01:41 |
|
The OnTap Simulator is still free and is the basis for the paid and supported OnTap Edge product. You can get 7-Mode or Clustered OnTap simulators that run under VMware. The only real downside is that there is no simulated HA for takeover/giveback, and there are some restrictions on the number of disks you can create. If you were to take an official training course through a training partner, the odds are about 100% that the remote lab environment you log in to is running the OnTap simulator and not physical hardware.
|
# ? May 25, 2013 03:33 |
|
Stugazi posted:I am wowed by the knowledge in here.

Datacore does storage virtualization, which really only makes sense to me if you have a large number of isolated SAN environments that you want to consolidate under a single piece of management. To me it's an idea with really specific applications: it only makes a lot of sense if you have a relatively large and heterogeneous storage environment, if you have a large homogeneous environment that is fragmented and your vendor's management tools are very poor, or if you're a cloud operator who has a hard-on for virtualizing and abstracting away everything under the sun. You've still got to buy the underlying storage to get any benefit out of it, so it's an added cost up front in the hopes of saving on management down the road.

Regarding your comparison with the VNXe, they really don't compare. You can't do anything with SANsymphony without some storage pieces underneath. What is your goal in purchasing storage? What sort of applications are you running and what do you hope to get out of it? Give us some information and people here can probably give you a lot of advice on who to talk to.
|
# ? May 25, 2013 20:05 |
|
Stugazi posted:I am wowed by the knowledge in here. Is there a specific reason for looking at Datacore? If you're looking at storage as software there are a ton of options. If you're looking at doing a storage cluster with synchronous replication the two main products alongside Datacore are probably Starwind and HP StoreVirtual.
|
# ? May 25, 2013 20:53 |
|
Bitch Stewie posted:Is there a specific reason for looking at Datacore?

We need a storage option for the SMB space (50-500 employees) that doesn't break the bank. We'd be the reseller, so a vendor's willingness to partner with us is important. It can be difficult to get someone at NetApp to call us back. That aside, Datacore as a VSA is interesting. I understand it needs its own storage, but that storage can be a commodity server. Once installed, Datacore has failover, huge cache, and async replication at price points that kill everyone else. Datacore is also very responsive to us. I am trying to gather info from anyone who has experience with the product or who can recommend an alternative SMB-sized vendor. For reference I've worked on EMC, NetApp, DotHill and Hitachi SANs, but it's been a long time and I'm no longer on the frontlines.
|
# ? May 26, 2013 17:49 |
|
Stugazi posted:We need a storage option for the SMB space (50-500 employees) that doesn't break the bank. We'd be the reseller so a vendor's willingness to partner with us is important. It can be difficult to get someone at Netapp to call us back.
|
# ? May 26, 2013 18:31 |
|
Has anyone bought Nexenta before, along with their certified hardware? Want to share thoughts on how their support works? I am extremely concerned about Nexenta vs. the hardware vendor for support problems. It was really worrisome how the Nexenta sales rep we are working with already poo poo on the hardware vendor they recommended to us, claiming it wasn't their fault when a series of benchmarks came up incredibly poor.
|
# ? May 31, 2013 01:33 |
|
This seemed like the best place to ask; apologies if there is a better subforum. We are looking at backup options. VM-level software like Veeam is great except that some clients still have physical servers. I know. WTF. We've had good experience with StorageCraft; they have an MSP edition as well. However, for bigger clients I think that type of software starts to become unwieldy. What are the entry-level enterprise options we should consider? I need to narrow down my choices for research on backing up 10-50TB with reasonable budgets (two different clients). We're looking at EMC DataDomain to match a VNXe, but everything I've heard is that DataDomain is really loving expensive. Disk is strongly preferred. Offsite replication is a nice option; deduplication is not necessary but also nice to have. Side note: how does anyone get that amount of data offsite without tapes? Any guidance much appreciated!
|
# ? Jun 3, 2013 18:37 |
|
Stugazi posted:This seemed like the best place to ask, apologies if there is a better subforum.

Stugazi posted:We've had good experience with StorageCraft. They have an MSP edition as well. However, for bigger clients I think that type of software starts to become unwieldy.

I have no idea what StorageCraft is and no idea what you're actually trying to do. Can you give some real requirements?

Stugazi posted:Side note: how does anyone get that amount of data offsite without tapes?

Bottom line: you need to plan and think holistically, instead of jamming something on top of a WAN solution that's clearly not equipped for the solution you're trying to implement.
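To put rough numbers on the offsite question (my own sketch; the link speeds and the 70% usable-utilisation figure are made-up assumptions, and it ignores dedupe/compression, which change the picture a lot):

```python
def transfer_days(data_tb, link_mbps, utilisation=0.7):
    """Days needed to push `data_tb` (decimal TB) over a `link_mbps` WAN
    link that you can realistically drive at `utilisation`."""
    bits = data_tb * 1e12 * 8
    seconds = bits / (link_mbps * 1e6 * utilisation)
    return seconds / 86400

for tb in (10, 50):
    for mbps in (100, 1000):
        print(f"{tb} TB over {mbps} Mb/s: {transfer_days(tb, mbps):.1f} days")
```

A full 50 TB seed over a 100 Mb/s line is on the order of two months, which is why people seed the first copy by sneakernet and only replicate changed/deduped blocks over the wire afterwards.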
|
# ? Jun 3, 2013 18:51 |
|
I'm working in a lab that requires a (network-based) backup solution for about 5TB, scalable to upwards of 20TB(?). Ideally, it would also be easy to manage, since we don't really have dedicated IT support for the backups. Anyone have any suggestions?
|
# ? Jun 3, 2013 22:23 |
|
Rated PG-34 posted:I'm working in a lab that requires a (network-based) backup solution for about 5TB, scalable to upwards of 20TB(?). Ideally, it would also be easy to manage, since we don't really have dedicated IT support for the backups. Anyone have any suggestions?

What's the existing infra like?
|
# ? Jun 3, 2013 23:02 |
|
Rated PG-34 posted:I'm working in a lab that requires a (network-based) backup solution for about 5TB, scalable to upwards of 20TB(?). Ideally, it would also be easy to manage, since we don't really have dedicated IT support for the backups. Anyone have any suggestions?

What's your budget? My go-to would be NetBackup appliances in this situation, but it's highly budget-dependent and tends to be on the high side unless you've got a decent in with Symantec. I've seen some good NetVault installs on entry-level type hardware for this sort of infrastructure. Avoid Backup Exec; if anyone points you in that direction, ignore their advice. It's not good for these kinds of data volumes.
|
# ? Jun 3, 2013 23:41 |
|
Misogynist posted:Can you give some real requirements?

Our needs are in between the SMB world and the enterprise gear that seems to dominate this thread: the 10-50TB range. I am behind on my technology, so yes, I did learn that DataDomain is just an appliance. It's a BYO software solution. Hurts to admit I am no longer the alpha geek I used to be. Every client would like a DR plan, and it's our job to deliver the best results on limited budgets. Multiply that by different environments and a Swiss Army knife solution looks appealing. Hence my appeal to the hive mind. I'll get back to researching but am still open to suggestions to guide our hand.
|
# ? Jun 4, 2013 04:01 |
|
Stugazi posted:Our needs are in between the SMB world and the Enterprise gear that seems to dominate this thread. 10 - 50TB range.
|
# ? Jun 4, 2013 04:06 |
|
Stugazi posted:Our needs are in between the SMB world and the Enterprise gear that seems to dominate this thread. 10 - 50TB range.

There is a lot more than size that goes into storage: 15 x 4TB 7200RPM SATA drives in RAID 0 will give you 60TB usable, but no redundancy or speed for a virtual environment or high workloads. RAID levels, IOPS, and storage access (iSCSI/FC/NFS) are among the many things that go into making a deployable solution. In layman's terms you're basically asking us "I want a car that does stuff!" Sure, a lot of cars will work, but what car fits your needs?

quote:I am behind on my technology so yes, I did learn that DataDomain is just an appliance. It's a BYO Software solution. Hurts to admit I am no longer the alpha geek I used to be.

DataDomains are great but they carry a price tag; IIRC the DD160 starts at around 10k pre-support/install. I'm not familiar with what you mean by BYO. Are you looking for a software-based backup which is hardware-agnostic? If so, are your customers virtual? Things like vSphere Data Protection, Veeam, PHD Virtual, and Backup Exec can be very handy in the SMB space. No one knows everything about everything (if you do, please PM me), but we know where to look for what we don't, and that is a really important skill to have.

quote:Every client would like a DR plan. It's our job to deliver the best results on limited budgets. Multiply that by different environments and a swiss army knife solution looks appealing. Hence my appeal to the hive mind.

Every customer NEEDS a DR plan. For some companies it is the owner picking up a WD external that backed up the Exchange server the night prior; for others it is a full colo site with standby servers and a 15-minute RTO. If you are trying to make a "glove fits all" scaled solution you are A) going to lose a poo poo load of opportunities, B) going to run yourself silly trying to prove it works, and C) going to get nowhere fast. Sure, things like VCE and such make it work, but they also have many engineers, researchers, and highly trained professionals working on it constantly. You need to find out what the client's needs are for their environment (I really have yet to find two companies that are the same) while being, yourself, knowledgeable in the products and solutions you are selling, to find the most cost-effective and successful solution. Dilbert As FUCK fucked around with this message at 04:42 on Jun 4, 2013 |
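The capacity side of the car analogy above is easy to put numbers on (a rough sketch using the 15 x 4TB example; it ignores formatting overhead, right-sizing, and hot spares):

```python
def usable_tb(drives, drive_tb, level):
    """Raw usable capacity for a few common RAID layouts."""
    if level == "raid0":
        return drives * drive_tb          # stripe only: no redundancy
    if level == "raid5":
        return (drives - 1) * drive_tb    # one drive's worth of parity
    if level == "raid6":
        return (drives - 2) * drive_tb    # two drives' worth of parity
    if level == "raid10":
        return (drives // 2) * drive_tb   # mirrored pairs
    raise ValueError(f"unknown level: {level}")

for lvl in ("raid0", "raid5", "raid6", "raid10"):
    print(f"{lvl}: {usable_tb(15, 4, lvl)} TB usable")
```

Same shelf of disks, anywhere from 28 to 60 TB usable depending on the protection level, before IOPS even enter the conversation.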
# ? Jun 4, 2013 04:31 |
|
EMC Avamar may fit your needs. From the sales pitch I just heard you pay for your storage and they can use unlimited agents. No idea on pricing though. They were claiming reallll good compression as well.
|
# ? Jun 4, 2013 04:45 |
|
Moey posted:EMC Avamar may fit your needs. From the sales pitch I just heard you pay for your storage and they can use unlimited agents.

If you have VDP you've already got a pretty good taste of the compression they can do. It was funny how many people asked spokespeople at a conference I was at, "Can you just come out and say VDP is basically the Avamar appliance?" Well poo poo, now they are saying it pulls from it directly. http://www.vmware.com/company/news/releases/vmw-vmworld-emc-082712.html Dilbert As FUCK fucked around with this message at 04:50 on Jun 4, 2013 |
# ? Jun 4, 2013 04:48 |
|
Avamar is pretty decent for relatively static data as it does have some nice global deduplication and compression features. However I'm not sure how well it does in high change rate environments or when backing up applications because I've predominantly seen it used for thin PC backup over WAN. I will also note that if you want to back things up across WAN the process for seeding Avamar via sneakernet is fairly painful. Anyway, as a few people said, without knowing what is being backed up, and how often, and to where, it's pretty hard to recommend anything.
|
# ? Jun 4, 2013 05:42 |
|
evil_bunnY posted:What's the existing infra like?

Existing infrastructure is a Linux cluster. One of the problems is that we have space constraints at the server location, so we wouldn't be able to keep the backups attached to the servers.

Zephirus posted:What's your budget? My go-to would be netbackup appliances in this situation, but it's highly budget dependent and tends to be on the high side unless you've got a decent in with Symantec. I've seen some good Netvault installs on entry level type hardware for this sort of infrastructure. Avoid backupexec - if anyone points you in that direction, avoid their advice. It's not good for these kind of data volumes.

Ballpark budget is not very high: 2-4k.
|
# ? Jun 4, 2013 16:41 |
|
I guess you could go to Best Buy and buy as many 4TB drives as you can? That's a pretty paltry budget for 5-20TB.
|
# ? Jun 4, 2013 16:59 |
|
Rated PG-34 posted:Ballpark budget is not very high: 2-4k. $2k for 20TB of storage. Good luck with that.
|
# ? Jun 4, 2013 16:59 |
|
Rated PG-34 posted:Ballpark budget is not very high: 2-4k.

This is me trying not to laugh.
|
# ? Jun 4, 2013 17:03 |
|
Rated PG-34 posted:Existing infrastructure is a linux cluster. One of the problems is that we have space constraints at the server location so we wouldn't be able to keep the backups attached to the servers.

That's a completely unrealistic budget unless you buy some sort of Synology/QNAP rack unit and fill it with consumer SATA drives. And that's not something anybody in here would likely recommend.
|
# ? Jun 4, 2013 17:04 |
|
Rated PG-34 posted:Existing infrastructure is a linux cluster. One of the problems is that we have space constraints at the server location so we wouldn't be able to keep the backups attached to the servers.

How valuable is your data? Should it be floating on $4k? If it is just for backups you might want to look into the Synology DS1812+; it's 8-bay, but the device is $1k without drives. Dilbert As FUCK fucked around with this message at 17:15 on Jun 4, 2013 |
# ? Jun 4, 2013 17:12 |
|
Crackbone posted:That's completely unrealistic budget unless you buy some sort of Synology/QNap rack unit and fill it with consumer SATA drives. And that's not something anybody in here would likely recommend. Ugh. Do you know my old environment? My boss thought he was a loving IT god by taking a QNAP 1679 and filling it with SSDs. Initial performance was good, but once that thing blows up and takes down everything I will be dying laughing. Also laughing all the way to the bank as I am doing hourly contract work for them still.
|
# ? Jun 4, 2013 18:01 |
|
evil_bunnY posted:This is me trying not to laugh. Initially, we'll only be storing something like 5 TB, so I'm hoping there's a scalable solution, but I agree that it's a paltry sum. Edit: okay, the idea is that this is an interim solution while they sort out funding for something longterm. Rated PG-34 fucked around with this message at 18:14 on Jun 4, 2013 |
# ? Jun 4, 2013 18:08 |
|
Paging Scott Allan Miller to this thread. He'll know just the thing!
|
# ? Jun 4, 2013 18:13 |
|
Maybe we could buy a bunch of USB sticks and glue them together.
|
# ? Jun 4, 2013 18:15 |
|
|
Rated PG-34 posted:Maybe we could buy a bunch of USB sticks and glue them together. You joke, but... 6x USB Flash Drive Raid quote:Based on the feel of the system, we have chosen to use this RAID as part of a production system (we will keep Bacula running just to be safe). Day 1 of the flash-raid-as-root starts today.
|
# ? Jun 4, 2013 18:25 |