parid
Mar 18, 2004
Any recommendations on storage solutions to look at for cheap mass storage? We're a NetApp shop, but even a 2240 loaded out with 3TB drives is beyond the price point for a lot of the groups I work with (higher ed, lol). I'm looking for something that can do NFS/CIFS, won't drive me insane to operate, and is essentially as cheap as possible.

Has anyone bitten the bullet on clustered ONTAP yet? I just put our first cluster into production last weekend and I wouldn't describe it as "smooth sailing".

parid
Mar 18, 2004
The biggest driver for me was the lack of development for 7-mode. When we buy controllers, we're making a 5-year investment. It was a choice between staying with 7-mode and risking never getting SMB3/NFS improvements. I'm betting it will be worth the growing pains now for the payoff later once it's stable. The big challenge in all of this is that all we purchased was a controller uplift, no new disks. We were given loaner trays as "swing gear" to allow the migration to happen.

I agree with the rest of you. Sales is pushing it too hard and too early. It's an awkward place for NetApp to be in: abandoning further development on their stable, proven platform and betting on a system that isn't ready yet. I'm getting tired of hearing "such and such feature should be available in 8.2!". 8.2 looks to be the release that would normally be considered production ready. My advice to anyone thinking about clustered ONTAP: wait for 8.2.

~70 volumes and 60TB of SATA with Flash Cache cards on a pair of 3250s. This particular system is in a jack-of-all-trades role: currently just CIFS and NFS, no block-level storage, mostly hosting things for campus, some lower-level VM hosting, home drives. Of all our controllers, it's the most likely to benefit from the big promises of clustered ONTAP (unified namespace, shifting data around, etc.).

There are a lot of really neat things. The new CLI is awesome. The unified namespace is great. The concept of export-policies is very handy too.

There is a replication tool for migration. Currently, you have to engage PS and get them to bring it. We did, and it was worth it. It's volume based: it queries the source filer, does a sanity check, checks many existing CIFS/NFS settings, and sets up a SnapMirror relationship with a cluster node. When you go to finish the migration, it breaks off the mirror, sets up the shares/export-policies, and away you go. Of the 70 volumes I moved with it, I only had problems with 3, and it only took 15 minutes to clean those up by hand. This tool is likely the basis of the fancier, user-accessible tool to come (which, haha, I hear will be released around 8.2). We managed to move everything with about 3.5 hours of downtime.

parid
Mar 18, 2004
Something well under $1,000/TB usable. I hate to not buy HA in a consolidated storage system, but I think it's on the table to negotiate away if it's cheap enough. This is the kind of storage that will be pitched to people who run external HDs full of research data and who think a student, an external HD, and sneakernet are a data protection strategy. Sure, it will never be that cheap, but we have been unsuccessful trying to get them into enterprise storage in one bite. This would also be a place to keep archive data. It would probably need to grow into the 200TB range without being a management nightmare (like fourteen 15TB systems would be).

Some good ideas already, I'll check them out. How is Oracle to work with for storage? I have had nothing but horrible experiences with their other business units.

parid
Mar 18, 2004
The "standard" documentation is available and is mostly complete but the "community" documentation like forums, KBs, and other experiential knowledge shared from other users is pretty thin. All the bugs I have hit, and KBs to work around them are hidden from the public as well. NippleFloss, you seem pretty close to this, do you have any recommendations on where to get some of the community information? I have found googling incredibly difficult as cDOT/c-mode/clustered ontap/cluster mode has so many names and there is so much out there on 7-mode already.

Also, a fun thing you can do in cDOT is mount a volume from a completely different HA pair inside the file structure of another volume. That Archive directory in your home folder? It actually points to a different filer with cheaper disk. I'm not sure how I would use it yet, but it's neat :).

parid
Mar 18, 2004
Long shot, but do you have dedupe on? If not, you might be able to find the space you need for snapshots in dedupe savings alone.

parid
Mar 18, 2004
It's really more about rate of change than volume size. Each snapshot consumes space for every 4K block that has changed since the previous snapshot. Generally, for workgroup/home-drive type volumes, it's the total change since the oldest snapshot. If you have files that keep changing between snapshots (like a database), it will be more.
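As a back-of-the-envelope model (numbers here are made up, and real usage depends on overwrite patterns, since rewriting the same block over and over only pins one copy per retained snapshot):

```python
def snapshot_overhead_gb(volume_gb, daily_change_pct, retention_days):
    """Worst case: every changed 4K block is unique, so each day's churn
    stays pinned in snapshots until that snapshot ages out."""
    daily_change_gb = volume_gb * daily_change_pct / 100.0
    return daily_change_gb * retention_days

# e.g. a 2TB home-drive volume, 1% daily change, 30 daily snapshots
print(snapshot_overhead_gb(2048, 1.0, 30))  # ~614 GB pinned in snapshots
```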

parid
Mar 18, 2004

GrandMaster posted:

If you value your sanity, you will give Networker a wide berth. Awful piece of software.

Did EMC pay you for such a glowing review? This x10. A couple years ago I worked a Networker issue whose root cause was case-handling bugs in a script related to their Exchange backup module. It took them three months to fix it once they found it, and it delayed the launch of a massive new service. Not an awesome experience.

parid
Mar 18, 2004
Dell is pretty much giving away MDs now; might be worth checking into.

parid
Mar 18, 2004
I think they believe they are able to make the best choice for themselves on anything. The information they have is how much it costs to go to Costco and buy an external 2TB drive; anything bigger should be proportionally more expensive, right? Since they have a superior ability to think critically, why listen to anyone else? Don't understand the decision? Who cares, take your ball and go home.

It's not like working together for a common good has ever helped anyone.

parid
Mar 18, 2004
Yup, that's higher ed. They are specifically structured so that each school/department has total freedom. The system is set up to punish people for doing the right thing. Most of the IT money is divided up among the schools, which each get to make their own decisions with it. Without charging itself, there would be no way to fund large central projects.

Your central rates are very reasonable. Have you sat down with your contact there and explained your budget issue? They might be able to cut you a break to "help you do the right thing"?

parid
Mar 18, 2004

Misogynist posted:

Try explaining this to the 25 different research groups that are already using Dropbox for Business while IT has absolutely no clue.

That is my job 8-5. It's not easy. There are still people willing to look past that and try to work together. That's where I put most of my time.

parid
Mar 18, 2004

Maneki Neko posted:

Well sounds like our support engineer who has been handling our case might need a punch in the dick then, the way he was talking we should be digging up the body of Robert Stack because this was some unsolved mysteries level poo poo.

I'll look more at that in the morning.

Make sure you at least get to a proper level 2 engineer before you give up on the bug. Most of the time your first "escalation" is to a level 1 specialist. If it's got a real PR number and is acknowledged as a real bug, it will eventually get a fix, and they should be able to tell you when.

I have had some craaaazy hard issues with NetApp and they have been able to solve all of them with enough time and pushing. If your current tech thinks this is beyond them, they are supposed to escalate. Sometimes they need to be reminded of that, as it's not good for their "stats".

parid fucked around with this message at 05:52 on Jul 2, 2013

parid
Mar 18, 2004

Linux Nazi posted:

Should be interesting. The idea is a VMWare stretched / metro cluster. We are 99% virtualized, and we already have layer 2 spanning courtesy of OTV. With vplex taking care of the storage side, we can essentially put one datacenter's ESXi hosts into maintenance mode and go to lunch while we wait for things to gracefully vmotion to the other side of town.

Right now we are all RecoverPoint and SRM, it works pretty well, but failovers are a huge event.

Have you run a metrocluster before? VMware's development of metrocluster-related functions is a bit lacking.

The biggest issue we have had with ours is capacity planning. Traditional VMware cluster capacity planning tools don't have a way to simulate a site failure. All of our sizing is done as a "what if" worst-case scenario: What if we lost Site A? Which VMs would be left? How much would they need? How big does Site B need to be? Since it isn't perfectly balanced, the answer for each site is different.

Right now, I essentially manage this with physical:virtual CPU ratios in a spreadsheet. It's messy, I'm the only one who can understand it, and there is a lot of manual information manipulation. I only update it a couple times a year because of the hassle of working with it. We don't even have that large an environment, only ~700 VMs; scaling this any larger would quickly make my manual processes impossible. I have been working with VMware for a whole year on this and they essentially have no answer. Our VAR is lost as well. I have spoken to every capacity management vendor that has come along, and none of them has a way to deal with this.
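The core of the spreadsheet boils down to something like this (a minimal sketch; the VM names and sizes are made up, and it only counts configured resources, which is part of the problem):

```python
# Per-failure-scenario burden: what the surviving site must absorb on
# top of its own normal load. The inventory below is illustrative only.
vms = [
    # (name, preferred_site, vcpus, mem_gb)
    ("web01",  "A", 4, 16),
    ("db01",   "A", 8, 64),
    ("app01",  "B", 4, 32),
    ("file01", "B", 2,  8),
]

def failover_burden(vms, failed_site):
    """Sum the configured resources of VMs pinned to the failed site,
    i.e. what would restart on the surviving side."""
    moved = [v for v in vms if v[1] == failed_site]
    return sum(v[2] for v in moved), sum(v[3] for v in moved)

for failed in ("A", "B"):
    vcpu, mem = failover_burden(vms, failed)
    print(f"Lose site {failed}: surviving site must absorb {vcpu} vCPU, {mem} GB")
```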

Do you know how you're going to deal with capacity planning in your MetroCluster?

parid
Mar 18, 2004

Misogynist posted:

You're doing this manually? We get all this information through PowerCLI in a few dozen lines.

The bulk of the configured data I get from PowerCLI as well. The two biggest problems with that are:

1. How do you get the host affinity rules and attribute them to which hosts are in which site? That's the part that's stopped me from automating more of the process (pulling the rules themselves is doable; see the sketch below, but mapping host groups to physical sites is still by hand).

2. This means you are doing all your capacity planning based on configured resources, not actual resource use. It's better than nothing, but supporting 20 pegged 1-vCPU machines takes a different amount of host resources than 20 idle 1-vCPU machines.
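For item 1, the vSphere API does expose the DRS groups and VM-to-host rules; here's a rough pyVmomi sketch (the vCenter address and credentials are placeholders, and knowing which host group corresponds to which physical site is still knowledge you have to supply yourself):

```python
# Rough sketch: dump DRS VM-to-host-group affinity rules via pyVmomi.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ctx)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.ClusterComputeResource], True)

for cluster in view.view:
    cfg = cluster.configurationEx
    groups = {g.name: g for g in (cfg.group or [])}
    for rule in (cfg.rule or []):
        # VM-to-host affinity rules carry the group names we care about
        if isinstance(rule, vim.cluster.VmHostRuleInfo):
            vm_group = groups.get(rule.vmGroupName)
            host_group = groups.get(rule.affineHostGroupName)
            vms = [v.name for v in vm_group.vm] if vm_group else []
            hosts = [h.name for h in host_group.host] if host_group else []
            print(f"{rule.name}: {vms} -> {hosts}")

Disconnect(si)
```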

parid
Mar 18, 2004
That's an interesting idea. Don't think about the VMs, just look a level up at the hosts and monitor their data more closely.

I think the challenge in my environment is that VMs are configured individually for site affinity rules, and they may or may not move between sites. It's not perfectly balanced either, so the scenario changes depending on which side fails. It's challenging to know what resources would be necessary to support what would move.

This is a problem most people probably solve by throwing money at it, and it wouldn't even be that much money. Right now, though, no one is getting capital improvement funds unless they can justify an emergency. I'm spending most of my time trying to find ways to do more with less and trying to predict when we will be "out" of headroom in the cluster.

parid
Mar 18, 2004

Maneki Neko posted:

That may have been a chunk of it. Another has been that it seems like NetApp support and our VAR just weren't familiar with how to do things in cluster mode, so it felt like we were paying to have them learn, and our deployment took a lot longer than we had planned.

Otherwise we've just been hitting bugs of various severity, including at least one that has caused us to stop serving data.

Clustered ONTAP support, for me, has been noticeably better in the last two months. I'm seeing signs that they are actually fixing that problem.

We have had similar issues with bugs. The latest, some dedupe issue, is blocking our 8.2 upgrade.

parid
Mar 18, 2004
Even if they did let you quote 6th-year support, you wouldn't want to pay for it. They normally structure the pricing to make it cheaper to upgrade. I recently did a head uplift on a FAS3070 NetApp; one additional year of support was 70% of the cost of a pair of new FAS3250s with PAM cards. It was a no-brainer.

parid
Mar 18, 2004

Jadus posted:

On a head upgrade like that, do you normally just keep the disks and shelves running regardless of warranty, since they're in a redundant state anyways?

The shelf support is tied to the controllers (on these systems at least). I'm sure something is done with the pricing on the back end, but I never get to see it. It hasn't been exorbitant. I run disks until they literally won't send me replacement drives anymore. A spindle is a spindle! Even after production use, they end up on a test or temporary system. That system had space but no IOPS left; the new controller had 4x more NVRAM, and the PAM card was just gravy on top. Ironically, this system now has more capacity and lower latencies (thanks to the cache) than our fabric metrocluster with all FC drives.

I have a bunch of old EOL'd 300a drives (300-gig ATA) in DS14 trays. They still work great, even on modern NetApp releases. I wouldn't want to promise anyone service off of them, though.

parid fucked around with this message at 05:03 on Sep 26, 2013

parid
Mar 18, 2004
I'm trying to do the full planning exercise for a couple of new NetApps to support a medium-size (~7,000 mailbox) Exchange environment. My main goal for this process is to make sure all the proposed volumes fit into a reasonable aggregate configuration, so I'm going through the work to get at the usable space in the aggregates.

Not that it probably matters, but it's two pairs (one per site) of FAS3220s with a 512GB Flash Cache card per controller, running clustered ONTAP 8.2 in a switchless configuration. Each pair is its own cluster.

So far I'm accounting for the following factors:

* Taking 3 drives out (per controller) for a dedicated aggregate for the various vserver root volumes
* RAID-DP parity drives
* At least 2 spares per disk type per controller
* Converting printed disk size to base 2 (and checksum right-sizing)
* Aggr Snapshot Reserve
* WAFL Overhead at 10%
* Keeping at least 15% free on the aggregates

Is there anything else that I might be missing?
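For reference, the back-of-the-envelope math I'm doing per controller looks roughly like this (the right-sized capacity below is a made-up placeholder; pull the real number from NetApp's documentation for your disk type):

```python
def usable_tb(total_disks, raid_group_size=16, spares=2, root_aggr_disks=3,
              right_sized_tb=2.45,    # placeholder: a "3TB" disk after right-sizing
              wafl_overhead=0.10,     # WAFL reserve
              aggr_snap_reserve=0.0,  # see discussion below re: 8.1+ defaults
              aggr_free=0.15):        # keep >= 15% free in the aggregate
    data_disks = total_disks - spares - root_aggr_disks
    raid_groups = -(-data_disks // raid_group_size)  # ceil division
    data_disks -= 2 * raid_groups                    # RAID-DP: 2 parity per group
    raw_tb = data_disks * right_sized_tb
    return raw_tb * (1 - wafl_overhead) * (1 - aggr_snap_reserve) * (1 - aggr_free)

print(round(usable_tb(48), 1))  # e.g. one controller with 48 disks -> ~69 TB
```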

parid
Mar 18, 2004

NippleFloss posted:

You can find the right-sized capacities of the different (newer) disk types here: https://library.netapp.com/ecmdocs/ECMP1196821/html/GUID-5D3F3F2D-A153-49D3-9858-04C30B3C7E74.html

Aggregate snapshot reserve defaults to 0 starting in 8.1 for non-mirrored aggregates, and in general you should have aggregate snapshots and snapshot reserve disabled unless you are using MC or syncmirror.

Otherwise, everything in there is correct.

Thanks!

parid
Mar 18, 2004

Agrikk posted:

Don't you also need to keep at least 10% free space per LUN (if you are using LUNs)?


This exact list was the first reason why I was so mad at NetApp. We got a FAS2050 with 20x300GB drives and after all of the above best practices were applied we ended up with something less than 40% usable space of the theoretical 6TB.

You want some free-space overhead in the volume, but you don't need that much. I have a bunch of Oracle LUNs on dedicated volumes running at upper-90s percent usage.

NetApp's inability to accurately set usable-space expectations with their customers during the sales process seems to be pretty common. I can't even get my sales team to do it with specific requests. I think the challenge is that aggregate setup and sizing are implementation details that can have a large effect on what you get.

parid
Mar 18, 2004

Docjowles posted:

My company is 100% NetApp over NFS. We have literally billions of files in the kb-to-mb range stored there and replicated with SnapMirror, not aware of any issues. Just... don't try to do a directory listing unless you have a few weeks to kill ;) . Not sure about dedup. Disclaimer: I am not the storage admin, though I'm hoping to learn more about our NetApp stuff over the next year.

Another NetApp site with 100-million-plus-file volumes checking in. It's old-school maildirs for 20k+ mailboxes, running with snapshots/SyncMirror/quotas/dedupe without issue. I will say that NDMP-based backups are painfully slow; I'm pushing hard to move data protection for these volumes to SnapMirror instead.

parid
Mar 18, 2004
I'm not impressed by the new version of DFM, oh sorry, I mean OnCommand Unified Manager 6. It's virtual-appliance-only now, and so far I fear it's the bad kind of virtual appliance: no patches for third-party products (like the OS), built-in backdoors (why isn't the customer allowed to know or change the local root account?), and unnecessary open ports (no one needs to talk to MySQL on that box in my environment).

I ran into similar problems when we demoed Balance.

I'm barely scratching the surface and I'm worried I see a lot of problems coming my way. Has anyone successfully deployed this in a remotely secure way?

parid
Mar 18, 2004
Anyone have recommendations for backup storage target systems? Commvault's software dedupe and compression tech continues its march to mediocrity. I'm getting real tired of them moving the goalposts on the dedupe database's system requirements (it's now: just put it on FusionIO). Their storage efficiency has been poor: including compression, we're seeing worse than 4:1, 420TB stored in 116TB of disk. Throughput has also been abysmal. If you add in the FusionIO cards, Commvault's very high support costs (we have capacity-based licensing), and the cost of the FAS2240s we currently use, this environment has become very expensive and performs poorly by just about every measure.
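For concreteness, the math (the 8:1 and 10:1 figures below are hypothetical vendor-claim numbers, not measurements):

```python
front_end_tb, back_end_tb = 420, 116
print(f"effective ratio: {front_end_tb / back_end_tb:.1f}:1")  # ~3.6:1

# Back-end disk needed at ratios a target appliance might claim
for ratio in (4, 8, 10):
    print(f"{ratio}:1 -> {front_end_tb / ratio:.0f} TB of back-end disk")
```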

I had a positive experience with DataDomain in the past (2+ years ago): decent throughput and excellent compression ratios (>10:1). I hear that EMC is messing with their backup products and the future is murky for the DataDomain product line; they are trying to integrate all these disparate products they purchased and drive people into their complete data protection stack. Considering we're a NetApp/Commvault shop right now, that would lead to many complications for us.

Anyone know what's going to happen with DataDomain? Are there other similar products (inline dedupe storage) out there worth considering?

parid
Mar 18, 2004

TKovacs2 posted:

Really happy with ExaGrid at the moment personally. Not cheap though.

I'm not sure how we could be spending more at the moment, so it's probably in range. How are your compression ratios in the real world? These guys love to promise the world, and it almost never lives up to it.

parid
Mar 18, 2004

Mr Shiny Pants posted:

Thanks, I was just wondering if it works as advertised. I've seen a lot of solutions over the years that over promise and under deliver. :)

Especially if it makes regular administration harder, or some other gotchas that they won't tell you about until you start using it.

We have essentially the same environment you are looking at. Living with it is only slightly different than any other 7-mode NetApp. SyncMirror (aggregate mirroring, which is what makes it a metrocluster) is solid, easy to work with, and functions as advertised. Since it was installed 4 years ago, our fabric metrocluster has never had a service outage (even for maintenance). In that time, we have done controller upgrades, a complete back-end fabric upgrade, a major ONTAP upgrade (from 7 to 8), and hit numerous bugs and the like.

There are only two major downsides. The first is the price: no hidden costs, but you are buying twice the disk and paying for the fabric. The second is whatever they are doing with clustered ONTAP. NetApp has announced end of life on 7-mode and stopped adding new features some time ago; all the effort is going into cDOT, and there is no timeline for cDOT support for metroclusters.

parid fucked around with this message at 17:53 on Feb 1, 2014

parid
Mar 18, 2004

Bitch Stewie posted:

If you're on capacity licensing, Commvault license off front-end not back AFAIK so you'll be paying the same for Commvault regardless of the back-end won't you?

What I'm getting at there, is that presumably any kind of inline dedupe backup target is going to be an order of magnitude more than throwing in additional shelves full of "dumb" bulk disk and letting Commvault do the dedupe?

There are different levels of capacity-based licensing. We would have "Standard" licenses if we didn't do Commvault dedupe, but since we do, we have "Enterprise". If I remember correctly, it was in the range of twice as expensive. Don't forget to add in the cost of all the flash cards for the DDBs.

There's also a significant difference in how well the dedupe engines compress. If Commvault only gets us 4:1 and one of the storage appliances gets 8:1, we only need half as much disk.

parid
Mar 18, 2004

NippleFloss posted:

Request an NDA to get more exact information, but there is definitely a timeline for MetroCluster on CDOT and it's not TOO far off.

It still leaves potential new customers in a hard spot. They have to buy a new storage system whose OS has already stopped development, its replacement's timeline is so far out it's under NDA, and the only migration path currently available is a complete forklift...

NetApp is still great to work with, but this is a hard position they have put their (metrocluster) customers in.

parid
Mar 18, 2004

Mr Shiny Pants posted:

We are not a big shop compared to American standards I guess, we are buying a whole new infrastructure that is going to last us at least 4 - 5 years.

Can you explain the CDOT compared to 7 mode some more? I don't want us to choose a solution and needing to do a forklift upgrade, needing to buy new hardware later on, or having a dead-end solution.

We also looked at 3Par and they are also pretty nice, especially the licensing. Need replication? You buy it for the array and if you want to do sync or async it is totally up to you.

Much better than the IBM per TB licensing.

They are completely separate OSes, but there's no denying that cDOT is a direct descendant of 7-mode. It's more of an evolution with some large changes.

cDOT adds another layer of abstraction between the storage and the presentation of that storage. It essentially compartmentalizes the NFS/CIFS/iSCSI/FC servers into their own instances, and it lets you run multiple of these instances (called vservers). That lets you do things like delegate roles, set up test instances, join multiple AD domains, etc.

They also added the ability to join multiple controller pairs into a larger cluster. Any presentation node can make any storage in the cluster available. Adding this layer of abstraction and flexibility means you can do all sorts of neat virtualization tricks; for example, you can move a live volume between disk sets (aggregates) on completely separate controller pairs. Think VMware vMotion for NetApp volumes.

cDOT also adds all the modern features: SMB3, pNFS, etc. There's a great new command-line interface.

There's hope for smoother upgrade paths in the future, but right now it's a total forklift. Do you have any downtime at all, Christmas/Thanksgiving or the like? If so, you can probably work out some kind of transition with your sales team; they have the ability to loan "swing gear" out to help with these kinds of transitions. We did that with our first cDOT system. It was more complicated than a normal upgrade and we had a couple hours of downtime during the cutover, but we got there in the end and the impact on our customers wasn't serious.

Previously mentioned NDAs might be able to help you understand what's coming in the future. I personally don't make purchasing decisions on the promises of any vendor; the situation may or may not get better, so you should make sure the current constraints are something you can live with. You aren't going to want to be on 7-mode in 2-3 years, and that's a problem you would have to address within the lifespan of the system.

My environment is about 50/50 7-mode and cDOT. Other than our fabric metrocluster, I'm making plans to get everything over to cDOT in the next 1.5 years. The future is murky for our metrocluster. We don't have needs pressing us to upgrade... yet. My hope is that NetApp has an answer for us before we do.

parid
Mar 18, 2004

Mr Shiny Pants posted:

Thanks man, that is exactly the information I am looking for. So if I understand correctly, Metro Cluster is a 7 Mode feature but 7 Mode is not actively developed anymore. Cdot is the future and has all the SMB3 and Pnfs goodness. They can't tell when the MetroCluster featureset will be added to Cdot.

drat, we really like the Metro Cluster because it is an awesome fit for our situation. We like the idea of one logical system divided over two physical locations, what can Cdot do right now? And is it comparable?

A cDOT cluster can't span sites. Two sites, two clusters.

That was the very first question I asked when I got my first cDOT pitch: how physically close do the nodes in a cDOT cluster have to be? Unfortunately, right now they have to be in the same data center. In cDOT, you still have the traditional controller pairs; you just now have the ability to link them. This is done over a pair of dedicated 10-gig Ethernet connections to a pair of Cisco switches, and the links to those switches have to be the short-range SX modules. I asked about the longer-range stuff: apparently the latencies involved in the longer-distance fiber modules are an issue. I bet if you plugged it all in, it would work. It's not supported, though.

I suspect this is what they are planning to replace metroclusters with but that's just a guess.

I'd bring your concern to your sales team to discuss. They might have ways of mitigating the problem that still make this a viable solution; you might just have to get creative.

parid
Mar 18, 2004

Moey posted:

So Dilbert advised to look into DataDomain. Spoke with EMC and they pretty much advised to give them raw backups and let them compress.

Does anyone actually do this with VM backup software like Veeam/PHD Virtual?

I think I'd rather have some raw storage and let my software do it for cheaper.

The rub is who does the dedupe better. Datadomain's is pretty good.

parid
Mar 18, 2004
Looks like I might be getting into the HPC business soon. I have mostly been a NetApp admin so far. Any recommendations on what technologies/architectures to start learning about? I hear a lot about paralyzed file systems.

parid
Mar 18, 2004

The_Groove posted:

I work in HPC, I don't know if you'll still be in storage, but I'd get a little familiar with the main HPC/storage things I guess. Parallel filesystems, infiniband, SAS/fiber channel, cluster management things (xcat, etc.), MPI, job schedulers, your favorite monitoring/alerts framework, etc. HPC is generally IBM, Cray, SGI, and Dell's world, excluding some smaller integrators. So if you have an inkling of what type of system you have or will have you can start research on some of their specific offerings.

Our storage is 76 Netapp E5400's (IBM dcs3700), so that part may be familiar!

A "paralyzed" filesystem is an issue we see a lot, usually caused by some user job triggering the OOM-killer on a node (or hundreds of nodes). It's really the filesystem being "delayed for recovery" while GPFS tries to figure out what happened to the node and what to do with the open files and all these tokens that got orphaned. It's not a very fast process and can result in things like a specific file, directory, or entire filesystem being "hung" until the recovery finishes.

I don't have a lot of details. I wouldn't be taking responsibility for a whole cluster, but I might be asked to help them with storage, and I'd like to be useful if they ask. Existing systems are getting long in the tooth and haven't been meeting their needs well. It's all NFS right now; I wouldn't be surprised if it's time to step up to something more specific to their use.

I have gotten the E-Series pitches before, so I'm familiar with the architecture. NetApp is a strong partner for our traditional IT needs, so I'm sure we will be talking to them at some point. I just don't want to assume they'll be the best fit here because of our success with a different need.

What kind of interconnects do you see between processing nodes and storage? Sounds like a lot of block-level stuff. Is that just due to performance drivers?

parid
Mar 18, 2004
That's a great head start. I'll get to googling. Thanks!

parid
Mar 18, 2004
Real long shot here: has anyone set up IntelliSnap with Commvault 10 against a clustered ONTAP NetApp? Commvault's documentation has a lot of "check this box" material but nothing about the required configuration on the array, how it connects, with what protocols (pretty sure it's HTTP), or from where. This is all listed as supported, but it appears much of the config is hidden or hard-coded and not documented anywhere. Ultimately we want to snap Exchange 2013 DB iSCSI LUNs, but we're just doing plain file systems now to see if we can get that working.

parid
Mar 18, 2004
I think I'm going to be down this commodity-plus-abstraction-layer road soon. Anyone here implement one of these systems recently? What was the experience? What should people going down this road pay attention to? What were your favorite pieces (hardware platforms, software layers, designs, etc.)?

parid
Mar 18, 2004
I have had similar experiences with NetApp really bending over backwards to fix mistakes; something we haven't seen from, say, Commvault.

I wonder if this is a byproduct of the heavy competition (and spending) in the storage space.

parid
Mar 18, 2004

skipdogg posted:

Hopefully someone can help me out here... storage is not my strong suit.

I have an NDMP backup of a folder from our old NetApp filer, taken in Jan 2012. We moved to EMC shortly after this backup was taken. The data is 5 years old (it hasn't changed since 2009), but it now needs to be restored for legal reasons.

It seems NDMP to different devices doesn't work. (NetApp NDMP backup restored to EMC is a no go). The backup software is BackupExec 12.5.

What's the best way to get this data restored? Maybe fire up a virtual netapp appliance? It's only 20GB of data or so.


No idea if this would work...

NetApp has these VM simulators. Maybe get a demo of one, do your restore, then copy the data off?

parid
Mar 18, 2004

Amandyke posted:

I need to update a bunch of user quotas on cDOT. Aside from going through them one by one, does anyone have any ideas on how to more easily automate the quota increases? Roughly 200 users out of 12,000. Quota reports in the GUI are extremely cumbersome, as you cannot seem to sort by percentage of quota used.

I'm pretty sure there is a command to pull in a file over HTTP to update quotas. I have a 7-mode system with a couple hundred custom user quotas that I'm going to be migrating to c-mode in a year or two, and I've noticed this is going to be a problem. If you figure it out, I'd love to hear how.
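If it helps, my current plan is to just script the rule creation: read the users and limits out of a file and emit cDOT commands to paste into the cluster shell. A rough sketch (the vserver/policy/volume names are placeholders, and you should double-check the command syntax against your ONTAP version):

```python
import csv

VSERVER, POLICY, VOLUME = "vs1", "default", "home"  # placeholder names

# quotas.csv holds lines like: jdoe,25GB
with open("quotas.csv") as f:
    for row in csv.reader(f):
        if not row:
            continue  # skip blank lines
        user, disk_limit = row
        print(f"volume quota policy rule create -vserver {VSERVER} "
              f"-policy-name {POLICY} -volume {VOLUME} -type user "
              f"-target {user} -disk-limit {disk_limit}")
```

You'd still need to resize (or turn off/on) quotas on the volume afterwards for the new rules to take effect, as I understand it.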

parid
Mar 18, 2004
Or do both: put CrashPlan on immediately so you have something covering you while you set up a NAS solution and fight to change the corporate culture.
