Maneki Neko
Oct 27, 2000

Catch 22 posted:

How does a SAN fail? I mean, I can see SpoonDaddy's way of failing, but has anyone had a SAN truly fail? And I don't mean you ignored it after it had multiple hardware failures. I mean, everything is working and BAM, SAN dead.

We once hit a bug in Data ONTAP while resizing a LUN that knocked one head offline; the other head then took over, continued the same operation, hit the same bug, and died too. It took about an hour for everything to come back up and be happy after replaying the logs, but I would classify that as a failure. :)

Maneki Neko
Oct 27, 2000

Catch 22 posted:

I would say so. Wow, was this a firmware bug?

Just a bug in Data ONTAP, the OS the filers run. Granted, that's the ONLY failure we ever had on those boxes (outside the occasional drive, but NetApp is fairly aggressive about failing drives), but it was a bit disconcerting at the time.

Maneki Neko
Oct 27, 2000

rage-saq posted:

EMC actually came out onsite and did some performance monitoring to determine IOPS usage patterns before you gave them any money?

I've not actually heard of them doing this, just coming up with random guesstimates based off a little input from the customer. They were so horribly wrong (short by about 40% in some cases) that I ended up fixing orders at the last minute before they were placed (or sometimes after, by buying more disks).

Christ, I was asking general questions about their product line and they wanted to come out and do a complete site performance audit. It was a pain in the rear end to get them to talk to me WITHOUT them doing this first.

Maneki Neko
Oct 27, 2000

H110Hawk posted:

I would chalk it up to power, but this is at three different datacenter locations, with three different power feeds, one of which is many miles away. We've also had countless webservers and stuff just arbitrarily falling over. Has anyone else been having a "when it rains it pours" week with really random errors?

At this point I'm blaming bogons and the LHC.

Don't forget sunspots!

Maneki Neko
Oct 27, 2000

Mierdaan posted:

With the FAS2020, do we need to pay for the CIFS and NFS software licenses if we only intend to use it as an iSCSI target initially? Understandably we'd lose the NAS functionality, but we can add it back in later.

We have a PC Mall sales engineer claiming we need at least CIFS to allow Windows-based hosts to use the FAS as an iSCSI target, which just seems odd to me.

edit: nevermind, I asked him the question again via email and he recanted. Huzzah!

I think you might need the CIFS license if you plan to also use SnapDrive on the Windows hosts, but I could be wrong.

Maneki Neko
Oct 27, 2000

Cultural Imperial posted:

Q. Do I need a CIFS license on the storage system to run SnapDrive?
A. No. SnapDrive no longer requires a CIFS share for the host to access the storage system volumes.

Shows how long it's been since I used SnapDrive. :)

Maneki Neko
Oct 27, 2000

Wicaeed posted:

I figure it probably wouldn't be worth my time/money to buy a license from Netapp, am I right?

Yeah, you're probably pretty well hosed, unless you can find some nice field engineer to take pity on you.

Maneki Neko
Oct 27, 2000

Vanilla posted:

I've never been a fan of the replication of snapshots in the NetApp sense because, in the example of Exchange, ESEutil is not run at the time each snapshot is taken.

Doesn't SnapManager for Exchange do this? I haven't used it, but I was under the impression it could handle this.

Maneki Neko
Oct 27, 2000

Wicaeed posted:

Can anyone with a NetApp NOW subscription tell me if ONTAP 6.5.1R1 is installable on a FAS720? I'm reading the documentation that came with my ONTAP release CD, and it says there should be a folder called Alpha for the F700 series filers under CD-Drive:\ONTAP\6_5_1R1; however, my CD only has folders X86 and MIPS. Do I have a "special" CD or am I missing a folder?

Sounds like you are missing some stuff. The last release I see for a 720 is 6.5.7.

Maneki Neko
Oct 27, 2000

InferiorWang posted:

I hate salespeople. I want to get pricing on some LeftHand gear, but I don't want to listen to any of their spiel, or get follow-up calls only to have the person get pissy when I remind them we're a public school and everything comes down to dollars, not necessarily doing things the proper way. I don't want to talk to a reseller either. All I want to know is how much it costs.

Are there any resources that might have this information without my having to talk to someone, or does it pretty much come down to putting up with salespeople?

I'll send you the numbers I've seen, but you don't seem to accept emails/PMs. :)

Maneki Neko
Oct 27, 2000

InferiorWang posted:

Thanks fellas. What I'd really like to do is have an iSCSI SAN, 2-4 TB. I'd like to host a modest number of VMware guests on it, hosting primarily file shares, home directories, and GroupWise email for a staff numbering 250-300. Then I'd like to do it all over again with a completely redundant SAN at one of our other schools in some sort of failover configuration. Being able to do snapshots and have some sort of monitoring dashboard is something I'd like as well.

My problem is I can't even get anyone to listen to me about doing this and getting away from bare-metal machines for our critical data without having some semblance of a dollar figure attached to it. I have to approach this rear end backwards from how most normal people would approach it.

You might be able to get pricing guesstimates on the LeftHand stuff (since email doesn't seem to be working on the forums at the moment) by googling part numbers from here: http://www.gosignal.com/datasheets/lefthandsan.pdf

2-4 TB isn't much storage-wise; my ballpark guess for the SATA starter edition with installation would be around $30k, but you should hopefully be able to negotiate that down.

Do you have a CDW rep? If not, call and get one, then ask for pricing on those part numbers. Those guys are frothing at the mouth for sales right now.

Maneki Neko
Oct 27, 2000

I don't have any management concerns with NetApp (especially at the environment size you're talking about). If you're brand spankin' new to NetApp, get some install/configure services tacked on there; all of the field engineers I've ever dealt with have been fantastic, and they should be able to get you up to speed.

My biggest NetApp complaint (and you mentioned it) is that everything is à la carte, and once you've purchased, the discounts usually aren't as good (unless you manage to catch them at the end of a quarter/fiscal year). In my experience, the more you can bundle into that initial purchase in terms of protocols, etc., the better off you'll be.

I love their stuff, just hate having to pay for it. :(

Maneki Neko fucked around with this message at 15:13 on Apr 17, 2009

Maneki Neko
Oct 27, 2000

EscapeHere posted:

Are there any knowledgeable goons here who would care to comment on an HDS AMS 2500 vs an IBM DS5300? We're looking at around 100 TB with a mix of FC/SAS and SATA drives, but will need to expand that to 200+ TB over the next couple of years. HDS are claiming their new internal SAS architecture is all the rage; IBM are basically saying HDS are full of poo poo and their stuff is way cooler. The IBM kit is about 10-15% more expensive but supposedly faster, so they tell me. On the other hand, we've been using HDS for many years and had no major problems; it's all worked very well and their support has been excellent. While we have plenty of IBM servers, we have never used or bought anything from their storage range.

For this project the "fast" drives (FC or SAS) would be used for VMware, Exchange, AD, etc., while the SATA disks would be used for archiving of medical records. Can anyone give any advice or reasons why it might be worth (or not) spending the extra $s on the IBM?

I've never been terribly happy with the IBM storage I've used in the past (we have an older DS4000 series that I'm in the process of retiring), although I haven't used the DS5300. Performance was OK, and the hardware itself was fairly reliable (with the exception of cache batteries dying every 5 or 6 months, which requires you to pull the controller and disassemble it with a screwdriver), but support and management of the hardware itself was a huge pain in the rear end.

By and large, though, that's the same general complaint I have about all IBM equipment. Stupid poo poo like 17 different serial numbers being on a piece of equipment and IBM having no idea which one they actually want, FRU numbers that are meaningless 3 weeks after you buy something, 12 different individual firmware updates that need to be manually tracked and aren't updated regularly vs. regularly updated packages, etc. I have no idea how IBM stays in business; dealing with them is terrible.

Maneki Neko fucked around with this message at 20:46 on Jun 19, 2009

Maneki Neko
Oct 27, 2000

Got KarmA? posted:

I'm not sure any kind of storage-level replication technology will net you anything better than crash-equivalent data.

NetApp should have this covered.

Maneki Neko
Oct 27, 2000

Oh hay IBM people. I've got a DS4300 that I need to wipe the drives on so we can surplus it. Does IBM have a nice fancy utility to do that in bulk or anything? Didn't see anything in the storage manager client, but we are using a version that's roughly 400 years old.

Maneki Neko
Oct 27, 2000

Misogynist posted:

This is a long shot, but what the hell, lemme run it by you guys.

I've got a pair of Brocade 300 switches (rebranded as IBM SAN24B-4), and I'm trying to connect up each switch's Ethernet management port to a separate Cisco Nexus fabric. Each one is run with a 5020, and the switches link up to a 2148T FEX. Problem is, whenever I do this, there is no link. I can hook up the 300 to a laptop and get a link. I can hook up the 300s to each other and get a link on both. I can hook it up to some old-rear end Cisco 100-megabit 16-port switch and get a link. I can hook up other devices, like an IBM SAN and a Raritan KVM, and get links. But for some reason, the goddamn things just will not show a link when I hook them up to the gigabit ports on the 2148T.

Any ideas? The only thing I can think of is that the Nexus has issues with stuff below 1 gigabit, but if that's the case, that's some of the most braindead poo poo I've ever heard.

From what I recall from discussions with our resellers when we were picking up our 5000s + fabric extenders, the Nexus 2000 is all 1 gig, no 100 meg.

EDIT: Apparently the 2248s do 100/1000.

Maneki Neko fucked around with this message at 21:19 on May 20, 2010

Maneki Neko
Oct 27, 2000

Not quite sure if this is the domain of this thread or maybe that other centralized storage thread, but was curious what (if anything) people are doing on the cheap.

Got a friend who works at a place that has a fine NetApp setup, but through some shenanigans with a different storage vendor, they now have a giant pile of SATA drives, which he was looking to just throw in a giant case and use as a dumping ground for things (likely over NFS), with the expectation of eventually spooling it off to tape.

The hardware side seems pretty straightforward, but the software side gets fairly interesting. You can just do Linux with something like XFS/DRBD or get a bit more exotic, but things look a little hazy from there.

OpenSolaris/ZFS looks to be circling the bowl since the Oracle buyout. Nexenta, building off of that, seems interesting, but who knows what will end up happening there once Oracle finally cuts off OpenSolaris.

What other options out there are actually worth considering?

Maneki Neko fucked around with this message at 04:08 on Jul 16, 2010

Maneki Neko
Oct 27, 2000

TobyObi posted:

Oracle aren't going to drop OpenSolaris and ZFS.

A large part of the purchase was the 7000 series, which are these software components behind pretty clicky buttons.

Whether OpenSolaris remains actually "open" is another question though.

As for btrfs, with the main driver of this being Oracle for Oracle Unbreakable Linux, they may drop a lot of the development push behind it and work on rolling it into ZFS.

Sure, I didn't mean ZFS was going anywhere, just that the OpenSolaris community is probably going to implode at some point unless Oracle actually decides to do something to support it. I'm sure Oracle will keep Solaris around and ZFS kicking; they want to sell more poo poo.

Maneki Neko
Oct 27, 2000

Cultural Imperial posted:

Is anyone out there looking at 10GbE?

We moved to 10GbE to replace our aging FC infrastructure; any more specific questions?

Maneki Neko
Oct 27, 2000

oblomov posted:

Well, you would have multiple 10GbE links per server, so you should still have MPIO. Here is the thing: look at switch/datacenter/cabling costs, and 10GbE starts making sense. Our 2U VMware servers each used to have 8 cables (including the Dell DRAC) and now we have 3. It's similar with our NFS/iSCSI storage. You would be surprised how much cabling, patch panels and all that stuff costs, and how much pain in the rear it is to run, say, 100 cables from a blade enclosure.

We are going all 10GbE for new VMware and storage infrastructure, and the cost analysis makes sense.

Yeah, the ability to carve up those 10GbE pipes and deal with less cabling made it worthwhile for us (as well as being a nice chance to bail on FC).

Maneki Neko
Oct 27, 2000

FISHMANPET posted:

Open-ended question, but how do you guys back up large Unix volumes? Right now we're limited to 25 GB partitions because that's the biggest partition AMANDA can handle within our artificial constraints (level 0 every 14 days, LTO2 tape). Faculty has started rumbling, and our solution is to ??? We've managed to do 50 GB partitions with LTO4 (though I imagine those could be bigger), but I think even that's too small, especially when a few professors have used their grants to buy terabytes of contiguous storage at once, only to have it split up into tiny useless morsels.

We've started using BakBone with 500 GB partitions, but I really hate BakBone and would much rather use AMANDA because it deals with all the stupid bullshit way better than BakBone does.

What don't you like about BakBone? I ran that for a while, and although it wasn't obvious how to actually do anything the first time you tried it, it did perform well and was reliable.

All backup software is terrible; you just need to find one that does what you want and that you can manage to bend to your will.
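(For scale: LTO-2 holds roughly 200 GB native and LTO-4 roughly 800 GB, so a 25 GB partition is only about an eighth of an LTO-2 tape and 50 GB about a sixteenth of an LTO-4. If I'm reading your constraint right, the real ceiling is presumably that a full level 0 run has to fit on one tape, not the raw cartridge capacity.)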

Maneki Neko
Oct 27, 2000

Jadus posted:

I'm curious what the general thought is when warranty expires on a SAN.

Let's say my company completely invests in a P2V conversion, purchasing a SAN with a 5-year warranty. After those 5 years are up, the SAN itself may be running perfectly, but the risk of having something like a controller or power supply fail and then waiting multiple business days for a replacement is pretty high.

I can't imagine it's typical to replace a multi-TB SAN every 5 years, and warranty extensions only last so long.

I suppose after that 5 years it may be a good idea to purchase a new SAN and mirror the existing, for a fully redundant infrastructure, and then only make replacements on major failures.

What is normally done in this situation?

After 3 years, support contracts usually start going up price-wise. After 5 years they generally jump up again (assuming the hardware hasn't actually just been desupported), because no one wants to have to keep spare parts around. EDIT (drat YOU MISOGYNIST, that's what I get for leaving this window open for an hour, apparently).

So yeah, upgrading/replacing your SAN every 3-5 years is not unreasonable. If you don't have to forklift upgrade and can just replace a head and maybe add a few new disk shelves, even better.

Maneki Neko
Oct 27, 2000

oblomov posted:

Wow, I would love to see where you are getting that pricing. A shelf of 24 x 600GB 10K SAS drives retails at $80K plus, and 2TB SATA at about the same. Then you throw in 3-year support and it's almost another 20% on top. Now, nobody is paying retail, but still, you are getting a very, very good deal if you are paying $30K for that.

It's even more ridiculous pricing for a full 3140HA with all licensing and 10TB SAS plus 10TB SATA. Now, NetApp is going to be discounting since new hardware is coming out soon, but still...

You are talking EqualLogic pricing here, and while I like EqualLogic, let's face it, nobody would be buying it if NetApp was priced the same.

The last shelf we bought was a 1TB SATA FC shelf (that was before the 2TB shelves came out, and the lead times on SAS shelves were terrible then), and it was probably around $20-25k out the door, but that's only about 8TB.

I'm really curious what the 3200s look like, or if they're just an incremental bump.

Our NetApp folks hinted at some SSD stuff, as we don't want to swing the bux for PAM cards, but I guess we'll see.

Maneki Neko fucked around with this message at 05:10 on Oct 28, 2010

Maneki Neko
Oct 27, 2000

Intrepid00 posted:

:xd:

Not sure what that's about, as even Oracle themselves run their poo poo on NFS.

Maneki Neko
Oct 27, 2000

Anyone using the commercial version of NexentaStor?

We've been looking around at some options for a side storage project, and this looks like a fairly decent hands-off option vs. rolling our own software stack for a cheapo giant pile of storage.

Mainly curious if people are happy with the support they're getting, etc. It also sounds like a potentially less iffy future now (assuming that OpenIndiana doesn't crumble).

Maneki Neko fucked around with this message at 18:52 on Jan 3, 2011

Maneki Neko
Oct 27, 2000

conntrack posted:

I guess you did the "company down and nobody working" calculation and it didn't bite? I feel your pain.

ROI: You still have all your email.

Maybe the car analogy can be:

It's like cloning your family every morning so when they die in a fiery car crash on the way to work/school you aren't left a bitter empty shell of a man.

Maneki Neko
Oct 27, 2000

quackquackquack posted:

It's "my first SAN" time at work!

We have 4 shiny ESXi hosts and a vSphere Enterprise Plus license. I should note I play much more with the Windows side, rarely the Linux side.

We're currently running roughly 60% Linux VMs, with a couple of them actually requiring decent resources. All of the Windows VMs are pretty light - print, DC, tiny SCCM, light file, etc. In the past, the Linux servers have mounted NFS shares, and the Windows file servers have pointed at iSCSI.

For storage options, we're somewhat locked into IBM nSeries (rebadged NetApp), and trying to figure out what makes the most sense in terms of which protocols to license. My thought was to license only NFS and store the VMDKs on NFS, saving money by not paying for iSCSI (I have to double-check with the vendor that this is true).

Is there any reason to expose NFS or iSCSI directly to VMs, as opposed to making NFS/iSCSI datastores in ESXi?

I'm not sure why I would want to use CIFS. Would it be to get rid of our Windows file servers?

iSCSI is generally free on NetApp filers; it's NFS that costs $$$ (although I'm not sure what kind of terrible stuff IBM might pull licensing-wise on these). There are some nice things about NetApp + NFS + ESX, but iSCSI works fine there too if you're in a pinch and your company doesn't want to get bent over on the NFS license.

There are some circumstances where you might want to have a VM mount an NFS volume or an iSCSI LUN directly vs. just creating a virtual disk on a datastore, but that really depends on application needs, etc.

As you mentioned, CIFS is generally just used on the filers as a replacement for an existing Windows file server.
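If you do go NFS for the datastores, mounting a filer export from the ESX console is about one line; something like this should do it (the filer name, export path, and datastore label here are made-up examples):

esxcfg-nas -a -o filer1 -s /vol/vmware_ds1 vmware_ds1
esxcfg-nas -l

The second command just lists the NAS datastores the host knows about, so you can sanity-check the mount.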

Maneki Neko
Oct 27, 2000

Looks like Microsoft released a free iSCSI software target for Windows Server 2008 R2.

http://blogs.technet.com/b/virtualization/archive/2011/04/04/free-microsoft-iscsi-target.aspx

Fully supported; having another option is always handy.
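For anyone who wants to kick the tires, the initiator side can be driven from the Windows command line too. From memory, the quick-connect flow is something like this (the portal IP and target IQN are placeholders):

iscsicli QAddTargetPortal 192.168.1.50
iscsicli ListTargets
iscsicli QLoginTarget iqn.1991-05.com.microsoft:server1-target1

After the login, the LUN shows up in Disk Management like any local disk, ready to be brought online and formatted.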

Maneki Neko
Oct 27, 2000

Dreadite posted:

This is good advice. Something I noticed was that this particular vendor quoted $11k for the actual hardware and $3,600 for what appears to be "racking and stacking" the server in our NOC. Needless to say, that's outrageous, but this is my first time buying a piece of hardware in this way. Is that to be expected with all vendors, or can I find someone who will just send me my hardware?

Edit: I'm actually waiting on quotes from another couple of vendors for some EMC equipment and an HP LeftHand setup; I'll probably report back with those prices too so I can get a feel for whether the prices are fair.

"Services" are pretty standard, and usually cover installation, initial setup and some sort of training/knowledge transfer, best practices, etc.

Some of that you can pick up along the way, but it's often helpful.

Maneki Neko
Oct 27, 2000

three posted:

We do scheduled Snapshots + SAN Replication.

We do this + NDMP dumps to tape, although the tape dump is primarily for contractual reasons.

Maneki Neko
Oct 27, 2000

Shaocaholica posted:

Curious how different implementations of the 'cp' command work when copying from one network location to another. Do the bits actually have to go through the machine running the command? Are there any common or budget implementations where they do not?

Yes.

If the devices support NDMP, you can run something like ndmpcopy to have them do a system-to-system transfer.
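On NetApp, for example, it's run from the filer console; from memory the syntax is roughly this (the auth flags, filer names, and paths are illustrative):

ndmpcopy -sa root:password -da root:password srcfiler:/vol/vol1 dstfiler:/vol/vol1

The data then moves directly between the two filers instead of being hairpinned through whatever box you typed the command on.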

Maneki Neko
Oct 27, 2000

Internet Explorer posted:

Yes, sorry. Having a bit of a brain fart. I could have sworn there was a way to do it with a VMDK. I think VMFS is pretty resilient. I have used snapshots for test labs a lot and never run into any problems, even with Exchange or SQL.

There are products that interact with VSS on the guest to get things into a nice happy state for snapshotting. To your point, though, crash-consistent snapshots are generally "good enough".
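If you want to see roughly what those products are doing under the hood, Windows ships diskshadow, which can take a VSS snapshot by hand; a minimal interactive session looks something like this (the drive letter is just an example):

diskshadow
DISKSHADOW> set context persistent
DISKSHADOW> add volume C:
DISKSHADOW> create

The VSS writers for Exchange/SQL quiesce their databases as part of that create, which is the piece a plain storage-level snapshot skips.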

Maneki Neko
Oct 27, 2000

FISHMANPET posted:

So from my understanding, Microsoft's DFS can be used to abstract the actual storage servers away. If I want to replace a storage server, I can throw the new one in, set DFS to replicate to it, and then remove the old server when I want.

So first, is that a correct assumption? And second, does anything like that exist for NFS? It looks like NFS v4.1 does this, is that right?

EDIT: Sorry, thought I was in a different thread.

Yeah, DFS does abstract the back end of how you have your servers and whatnot laid out from the users who are connecting to your SMB shares. You can add/change servers and do whatever without the clients having any idea what's going on. I'm also curious what's going to come out with SMB 2.2 in Windows 8.
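As a concrete picture (server and share names invented), a domain-based namespace might look like:

\\corp.example.com\files\home -> \\FS01\home$ (primary target)
                              -> \\FS02\home$ (DFS-R replica)

Clients only ever map \\corp.example.com\files\home, so you can retarget that folder to a new server and yank the old one without anyone noticing.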

For NFS, are you talking pNFS?

Maneki Neko fucked around with this message at 05:47 on Feb 29, 2012

Maneki Neko
Oct 27, 2000

Less Fat Luke posted:

So since we're talking about 10GbE, does anyone have switch suggestions? My current setup is using a Cisco 4900M. It's totally solid, 8+8+8 fiber ports (with X2 modules). However, it's very expensive, and I was wondering what other people have been using.

We just went to Nexus 5000s when we took the 10GbE plunge, but I'm sure they're not terribly cheap either. We're also using Twinax for almost everything, as it doesn't have to go far.

Maneki Neko
Oct 27, 2000

Xenomorph posted:

I guess I don't fully understand iSCSI.

I've looked at a few NAS servers that advertise "Built-in iSCSI Target Service". It then mentions that it runs Linux and uses EXT4 for its file system.

How does that work if a Windows system is the iSCSI initiator? I thought I could just connect the device and then a drive would show up to Windows that I could then format as NTFS.

That's exactly how it works. The NAS box is probably just making a big fat file and then presenting that storage to the initiator as a block device.
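Conceptually, the target is just translating SCSI block addresses into byte offsets in that file. A toy Python sketch of the idea (not a real iSCSI implementation; the file name and block size are invented):

import os

BLOCK = 512  # pretend sector size

# the "big fat file" sitting on the NAS box's ext4 filesystem
fd = os.open("/srv/lun0.img", os.O_RDWR)

def read_blocks(lba, count):
    # initiator asks for LBA n; target reads from byte offset n * 512
    os.lseek(fd, lba * BLOCK, os.SEEK_SET)
    return os.read(fd, count * BLOCK)

def write_blocks(lba, data):
    os.lseek(fd, lba * BLOCK, os.SEEK_SET)
    os.write(fd, data)

Windows formats and uses those blocks as NTFS without ever knowing (or caring) that there's ext4 underneath.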

Maneki Neko
Oct 27, 2000

Syano posted:

I am still trying to wrap my head around what Microsoft's strategy is with Server 2012 and SMB 3.0 and scale out file servers for applications. Everything starts out looking awesome. Storing SQL and Exchange databases on your file servers is a pretty neat option. Having your hyper-v stores on an SMB share is pretty awesome too. So you move from that to thinking high availability at the file server level and you start reading about scale-out file servers. At this point things start looking fantastic. Unified storage for my MS shop on an active-active file server cluster. Then it hits you... you still have to have shared storage for all this to work.

So is Microsoft's strategy for all of this for me to build this out with the file server cluster acting as the filer and all the back-end storage still being done via 3rd-party iSCSI or Fibre Channel kit? Or heck, by shared SAS shelves? And if that is the strategy, why would I not just cut out the middleman and connect my Hyper-V, SQL and Exchange application services directly to the iSCSI targets? I think I am missing something here.

The idea is that your shared storage will also support SMB 3.0. It's no different than running Oracle or VMware over NFS to a filer today.
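On the Server 2012 side, standing up a continuously available share for Hyper-V should be about one line of PowerShell; a sketch (the path and account names are placeholders):

New-SmbShare -Name VMs -Path C:\ClusterStorage\Volume1\VMs -FullAccess CORP\HyperVHosts -ContinuouslyAvailable $true

The -ContinuouslyAvailable bit is the SMB 3.0 transparent-failover piece that makes it sane to keep VM and database files on the share.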

Syano posted:

Maybe this is just a natural progression of things... i.e., Microsoft puts out Server 2012 with SMB 3.0 support, and the idea is that 3rd parties à la NetApp/EMC/etc. pick up and implement SMB 3.0 support soon, and that's their idea of the end-to-end solution.

This is a given; NetApp, EMC, the Samba team, etc. have already committed to supporting at least some of the SMB 3.0 feature set.

Maneki Neko fucked around with this message at 21:03 on Nov 8, 2012

Maneki Neko
Oct 27, 2000

parid posted:

Has anyone bitten the bullet on clustered ONTAP yet? I just put our first cluster into production last weekend, and I wouldn't describe it as "smooth sailing".

We're in the process now; the reason the guy in charge of the project came up with for doing it instead of just going 7-Mode seems to be "because". I don't have super warm fuzzies; so far I'm not really seeing any actual benefits, since we don't have more than 2 heads at any of our sites at the moment.

Maneki Neko
Oct 27, 2000

NippleFloss posted:

- If you're holding out for an in-place upgrade option, then you may have a long wait. PM me about this if you have questions.

Ha, after the traditional -> FlexVol migration, I just generally assume that there's never going to be a reasonable migration path and a do-over is the way to go.

Maneki Neko
Oct 27, 2000

Erwin posted:

Yeah, it's awful.

Worse than NOW?

Maneki Neko
Oct 27, 2000

LOL, our 3200 series filers apparently have a known issue that causes them to flip on the OMG ERROR light randomly. The only solution is to reboot the head until it happens again.

THANKS NETAPP!
