complex
Sep 16, 2003

Sounds like bug 536445? I've heard of similar NVRAM battery issues.

Even though that bug does not show an available fix, there is one: flash the battery firmware. See https://kb.netapp.com/support/index?page=content&id=2016592&actp=LIST_RECENT&viewlocale=en_US&searchid=1327443096712

KS
Jun 10, 2003
Outrageous Lumpwad
I'm looking to add some secondary storage for D2D backups -- about 25TB, and I don't want to take more than 2-3U and ~$30k. Not opposed to roll-your-own for this, but only if it saves a bunch of money. Performance needs to not suck. It needs to export as either NFS or iSCSI.

I know I can get a 380 G8 with 12x 3TB drives in it, but what would I run on it? Nexenta adds $10k to the bill, and that's a hard pill to swallow. I don't know enough about the collection of OpenSolaris forks to know if they're at a point where they're usable for something like this with ZFS, or if I should just go with something I know better.

Also looking at the Nexsan E18, and if anyone has other suggestions I'd love to hear them.

evil_bunnY
Apr 2, 2003

Whitebox FreeBSD?

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

KS posted:

I'm looking to add some secondary storage for D2D backups -- about 25TB, and I don't want to take more than 2-3U and ~$30k. Not opposed to roll-your-own for this, but only if it saves a bunch of money. Performance needs to not suck. It needs to export as either NFS or iSCSI.

I know I can get a 380 G8 with 12x 3TB drives in it, but what would I run on it? Nexenta adds $10k to the bill, and that's a hard pill to swallow. I don't know enough about the collection of OpenSolaris forks to know if they're at a point where they're usable for something like this with ZFS, or if I should just go with something I know better.

Also looking at the Nexsan E18, and if anyone has other suggestions I'd love to hear them.

We've been pretty happy with OmniOS, and the support is much cheaper than Nexenta. Just make sure the hardware is on the Illumos (née OpenSolaris) HCL; the Dell R720XD's H310 is not, as of this writing.

Jadus
Sep 11, 2003

KS posted:

I'm looking to add some secondary storage for D2D backups -- about 25TB, and I don't want to take more than 2-3U and ~$30k. Not opposed to roll-your-own for this, but only if it saves a bunch of money. Performance needs to not suck. It needs to export as either NFS or iSCSI.

I'm looking at doing this same thing, and am leaning towards something like what this guy did, using a SuperMicro SC847 36-drive chassis and FreeNAS.

Configured half full with 18 x 3TB drives, it's about $8,000 from CDW and would give over 40TB of usable space.

It's definitely a 'roll your own' solution, and I'm not sure how fast it would be, but for that price the capacity can't be beat.
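
For anyone sanity-checking the capacity math, here's a rough back-of-the-envelope sketch in Python. The two-vdev RAID-Z2 layout is just an assumption for illustration -- not necessarily what that build actually used:

code:
# Rough usable-capacity estimate for an 18 x 3TB build.
# Assumed layout (hypothetical): two 9-disk RAID-Z2 vdevs.

DRIVES = 18
DRIVE_TB = 3          # marketing terabytes (10^12 bytes)
VDEVS = 2             # assumption: two 9-drive RAID-Z2 vdevs
PARITY_PER_VDEV = 2   # RAID-Z2 uses two drives' worth of parity per vdev

data_drives = DRIVES - VDEVS * PARITY_PER_VDEV   # 14 data drives
raw_tb = data_drives * DRIVE_TB                  # ~42 TB before overhead
usable_tib = raw_tb * 1e12 / 2**40               # convert TB -> TiB

print(f"data drives: {data_drives}")
print(f"usable (pre-overhead): ~{raw_tb} TB (~{usable_tib:.1f} TiB)")
# ~42 TB / ~38 TiB -- roughly the "over 40TB" figure, minus filesystem
# overhead and whatever you reserve for slop.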

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Jadus posted:

I'm looking at doing this same thing, and am leaning towards something like what this guy did, using a SuperMicro SC847 36-drive chassis and FreeNAS.

Configured half full with 18 x 3TB drives, it's about $8,000 from CDW and would give over 40TB of usable space.

It's definitely a 'roll your own' solution, and I'm not sure how fast it would be, but for that price the capacity can't be beat.

We've been happy with that chassis w/ OpenSolaris for a few years for light/medium workloads.

Nukelear v.2
Jun 25, 2004
My optional title text

Nukelear v.2 posted:

Anyone have ideas on why an EQL 6110XS might go non-responsive when doing 8k read IO? Seems to occur across all volumes on the unit. I'm doing some benchmarking with SQLIO and all my tests are good except for 8k random read. At 16k random read it was just under 16.5k IOPS, so I thought maybe the switches were flooding; I dialed SQLIO back to a single thread and it still dies.

We had a similar situation occur when I tried to format the volume: it became completely non-responsive to the initiator. EQL said Windows was trying to format with an 8k cluster size and that caused the volume to go non-responsive (no idea why). I recreated the volume and formatted with 64k explicitly (given the size of the disk, Windows auto-detect should have been using 64k in the first place).

2 x Powerconnect 8024F switches
2 x Broadcom 10G NICS w/ EQL MPIO in Win2008R2

Just as a follow-up on this: the host I was testing from did not have its NICs set for jumbo frames. Changing the MTU to 9000 resolved the issue. I'm not exactly sure why this would have happened, though; it should have just been performance degradation as far as I understand things. It's now happily pushing close to 33,000 IO/s at 255MB/s on my 8k random workload.

Edit: Personally, I think the Broadcom drivers are wrong when they label the default MTU as 1500 while really sending 9216, since EQL says they can only support 9000 even on 10G. That makes more sense to me as the cause.
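
If anyone else runs into this, a quick way to confirm jumbo frames actually work end-to-end from a Windows host is a don't-fragment ping with an 8972-byte payload (9000-byte MTU minus 28 bytes of IP/ICMP header). Here's a minimal Python wrapper around that; the group IP is a placeholder and the flags are the Windows ping flags:

code:
# Sanity-check jumbo frames end-to-end with a don't-fragment ping.
# The SAN address below is hypothetical -- substitute your EQL group IP.
import subprocess

SAN_IP = "10.0.0.50"   # placeholder EqualLogic group/portal address
PAYLOAD = 8972         # 9000-byte MTU minus 20B IP + 8B ICMP headers

result = subprocess.run(
    ["ping", "-f", "-l", str(PAYLOAD), "-n", "4", SAN_IP],  # Windows flags
    capture_output=True, text=True,
)
print(result.stdout)
if "needs to be fragmented" in result.stdout:
    print("Jumbo frames are NOT passing end-to-end on this path.")
elif result.returncode == 0:
    print("Path appears to carry 9000-byte frames fine.")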

Nukelear v.2 fucked around with this message at 21:20 on Jul 6, 2012

spoon daddy
Aug 11, 2004
Who's your daddy?
College Slice

evil_bunnY posted:

gently caress you system manager


(can't resize any column either, and multiple config screens are blank and unresponsive). e: trying it on FF14, it doesn't even pop the config fields, which I guess is better than locking up the whole interface, but now all storage screen fields are empty.

This is after the webUI on the filer itself refused to ever work.

Heh, this is why I only use the CLI to manage filers. System Manager has limited usefulness compared to CLI.

Powdered Toast Man
Jan 25, 2005

TOAST-A-RIFIC!!!
The fun just never ends, guys.

Starboard Storage finally pulled a permanent patch out of their collective Russian asses (did you know the company is run by Russians now? I have no problem with Russians, just thought it was interesting), and it made things worse, not better.

:what: I am Jack's complete lack of surprise.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Powdered Toast Man posted:

The fun just never ends, guys.

Starboard Storage finally pulled a permanent patch out of their collective Russian asses (did you know the company is run by Russians now? I have no problem with Russians, just thought it was interesting), and it made things worse, not better.

:what: I am Jack's complete lack of surprise.
Storage based on the SuperMicro SC840 series is garbage? :monopop:

nuckingfuts
Apr 21, 2003
Does anyone know what happens to a failed NetApp drive after it is returned? Is it sanitized / destroyed? We just had a drive fail and my boss asked; I haven't found an answer yet and thought someone here might know.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

nuckingfuts posted:

Does anyone know what happens to a failed NetApp drive after it is returned? Is it sanitized / destroyed? We just had a drive fail and my boss asked; I haven't found an answer yet and thought someone here might know.

I know that they are sanitized, since otherwise they'd retain some of the ownership info they had previously (which happens all the time when I buy 3rd-party NetApp drives). I heard somewhere (not officially) that the good ones are repaired and reused as spares/replacements and the bad ones are canned.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

They are either overwritten or destroyed depending on whether they can be reconditioned. See KB article 3012103.

ghostinmyshell
Sep 17, 2004



I am very particular about biscuits, I'll have you know.

nuckingfuts posted:

Does anyone know what happens to a failed NetApp drive after it is returned? Is it sanitized / destroyed? We just had a drive fail and my boss asked; I haven't found an answer yet and thought someone here might know.

I know they sanitize it at least, but most of the time the paranoid bunch like banks just get new drives and keep the old ones.

Nomex
Jul 17, 2002

Flame retarded.
Does anyone know if there's an HDS simulator available? I'm going for a Netapp/HDS interview and I don't really have much experience with HDS outside of the old HP XP arrays.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
So someone at work is looking at two 8-bay Drobos (this, to be specific), and I'm pretty sure that's an awful idea from what I've heard about Drobos. Would a pair of 8-bay Synology boxes be a better choice? I guess the plan is to mirror them, which Drobo has the capability to do. Beyond that I don't really know what the plan is. It looks like these are going to be backup space (so the second would be a backup of a backup?) for experiment data.

Internet Explorer
Jun 1, 2005





Please God, don't get Drobos. Synology, QNAP, or even Buffalo would be infinitely better. I really love our 10 bay Synology NAS.

Powdered Toast Man
Jan 25, 2005

TOAST-A-RIFIC!!!
I can confirm that Synology's products are excellent and so is their support. We are rolling them out for on-site software repository purposes at 130+ sites. It's only a single-drive model but it did some very specific things that no other device we found would do (primarily with FTP access for Wyse Device Manager, which we use to patch/image Wyse thin clients).

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Can anyone link to some specific criticisms of Drobo, or tell me what I should be looking for? Normally "I read it on the Internet" would be good enough but in this case, because of ~~PoLiTiCs~~ I really have to beat them over the head with why this would be bad.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Nomex posted:

Does anyone know if there's an HDS simulator available? I'm going for a Netapp/HDS interview and I don't really have much experience with HDS outside of the old HP XP arrays.
HDS is very traditional SAN storage. RAID groups divided into LDEVs, which can be exported directly or joined into LUSEs. Tag a host group with a WWN and add LDEVs to it for masking.

Feature-wise they don't really do anything out of the norm. ShadowImage is local LDEV mirroring, Universal Replicator is remote LDEV mirroring. There's really not much to say about it, honestly.
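
If it helps to picture the hierarchy, here's a toy sketch of it in Python. Purely illustrative -- the IDs and WWN are made up and this is not any HDS API:

code:
# Toy model of the provisioning hierarchy: RAID group -> LDEVs -> host group.
from dataclasses import dataclass, field

@dataclass
class LDEV:                       # logical device carved from a RAID group
    ldev_id: str
    size_gb: int

@dataclass
class RaidGroup:
    name: str
    ldevs: list = field(default_factory=list)

@dataclass
class HostGroup:                  # masking: WWNs plus the LDEVs presented to them
    name: str
    wwns: list = field(default_factory=list)
    presented: list = field(default_factory=list)

rg = RaidGroup("RG1-1", [LDEV("00:10", 500), LDEV("00:11", 500)])
hg = HostGroup("esx-cluster-a", wwns=["50:06:0e:80:aa:bb:cc:dd"])
hg.presented.extend(rg.ldevs)     # adding LDEVs to the host group = masking
print([l.ldev_id for l in hg.presented])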

Nukelear v.2
Jun 25, 2004
My optional title text

FISHMANPET posted:

Can anyone link to some specific criticisms of Drobo, or tell me what I should be looking for? Normally "I read it on the Internet" would be good enough but in this case, because of ~~PoLiTiCs~~ I really have to beat them over the head with why this would be bad.

My standard pre-purchase practice is to google "<x> sucks" and review the results.
In this case there's a lot of material there. The drobosucks blogspot is pretty decent.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Building hundreds of SnapMirror relationships so that I can migrate my data to a new NetApp sucks. What sucks worse is our offsite NetApp is a 2050, so after we cut over, it will be a race to upgrade our 3140 to ONTAP 8, reverse the SnapMirrors, and drive it to our DR site.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

adorai posted:

Building hundreds of SnapMirror relationships so that I can migrate my data to a new NetApp sucks. What sucks worse is our offsite NetApp is a 2050, so after we cut over, it will be a race to upgrade our 3140 to ONTAP 8, reverse the SnapMirrors, and drive it to our DR site.
Protection manager could be used to do this somewhat trivially.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

NippleFloss posted:

Protection manager could be used to do this somewhat trivially.
Yeah, but then I would have to use (and configure) Protection Manager. Either way it sucks.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

FISHMANPET posted:

Can anyone link to some specific criticisms of Drobo, or tell me what I should be looking for? Normally "I read it on the Internet" would be good enough but in this case, because of ~~PoLiTiCs~~ I really have to beat them over the head with why this would be bad.

NO NO NO DO NOT GET DROBO

Seriously, if you want to talk, answer my PM or email me at Corvttefish3r@gmail.com. I can help you out and offer a bunch of support for cheap

madsushi
Apr 19, 2009

Baller.
#essereFerrari

NippleFloss posted:

Protection manager could be used to do this somewhat trivially.

Protection Manager could be great... if it wasn't such a piece of poo poo.

Let me count the ways:

1) What the gently caress is up with requiring 130% space on your destination volumes? Sometimes I want my 100GB volume to SnapVault to another 100GB volume, and I really don't enjoy the idea of requiring the destination volume to be 130GB. I end up making all of my SnapVault relationships manually and then importing them to get around this... but that's the OPPOSITE of what I want to be doing.

2) Speaking of, what's up with all of the arbitrary volume requirements? The language is different between my source and destination volumes, which doesn't matter at all for LUNs, but I guess that's a good enough reason to not let me set up a SnapVault relationship!

3) There needs to be a really simple SnapVault option in Protection Manager where PM goes and gets the last snapshot taken on the source and then copies it over to the destination. Requiring me to reconfigure every single SnapDrive and SnapManager instance is a huge task, whereas PM could EASILY be smart enough to grab the latest snapshot name to sync over.


I spoke with one of the OnCommand/PM project managers at Insight, and he was explaining how you could take 10 NetApps and put them into a big destination pool and let PM manage everything -- it would make all of the volumes 16TB and thin-provision everything. That sounds great... if you had 10 NetApps. If you're just trying to sync 1-2 NetApps to 1-2 other NetApps, PM simply doesn't give you the options or the flexibility (30% OVERHEAD REQUIRED) that I want. I am working on replacing the whole goddamn thing with a series of PowerShell scripts and calling it a day.
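
For what it's worth, the script route doesn't have to be fancy. Here's a minimal sketch of the idea, driving the 7-Mode CLI over SSH from Python. The filer names, paths, and login are placeholders, and you'd want to double-check the snapvault syntax against your ONTAP release before trusting it:

code:
# Minimal sketch: create SnapVault relationships without Protection Manager
# by issuing 7-Mode CLI commands on the destination filer over ssh.
# Hostnames, credentials, and qtree paths are all placeholders.
import subprocess

SRC_FILER = "filer-prod"   # hypothetical source
DST_FILER = "filer-dr"     # hypothetical vault destination
QTREES = [
    ("/vol/sql_data/q1",  "/vol/sv_sql_data/q1"),
    ("/vol/exch_logs/q1", "/vol/sv_exch_logs/q1"),
]

def dst_cli(cmd: str) -> None:
    """Run an ONTAP CLI command on the destination filer via ssh."""
    subprocess.run(["ssh", f"admin@{DST_FILER}", cmd], check=True)

for src_path, dst_path in QTREES:
    # snapvault start is issued from the destination side in 7-Mode
    dst_cli(f"snapvault start -S {SRC_FILER}:{src_path} {dst_path}")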

Mierdaan
Sep 14, 2004

Pillbug

Corvettefisher posted:

NO NO NO DO NOT GET DROBO

Seriously, if you want to talk, answer my PM or email me at Corvttefish3r@gmail.com. I can help you out and offer a bunch of support for cheap

If you've got negative things to say about a storage vendor, say it in here so that other people may learn from your pain.

evil_bunnY
Apr 2, 2003

Mierdaan posted:

If you've got negative things to say about a storage vendor, say it in here so that other people may learn from your pain.
But but what about a bunch of support for cheap?

PS: Drobos are bad enough when it's just one nerd's anime on there; putting VMs on the things will be like trying to swim in cowshit: slow, unsafe, and very, very unpleasant.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

madsushi posted:

Lots of words about Protection Manager

It's definitely not a perfect product and the earlier iterations were basically unusable, but it has improved to the point where it is functional and possibly even useful if you spend some time getting familiar with it. Regarding your specific issues:

1) There isn't a one-to-one ratio of source-to-destination size for SnapVaults, so it doesn't really make sense to size them at one-to-one. A vault destination will have a different number of snapshot copies than the source (generally more) and if dedupe is in use the data is initially re-inflated before being deduped on the destination. The extra size is accounting for that overhead. That said, it's not a strict 130%; the calculation is a bit more detailed than that, and differs depending on whether you're using 3.7 or 3.8 and up. There are some hidden options that can be changed to tune the calculation to provide more or less additional space. If you're interested I can provide them. Enabling Dynamic Secondary Sizing is probably the best way to go, provided you're on 3.8 or above. (There's a rough sizing illustration at the end of this post.)

2) SnapVault gets very unhappy when there are volume language mismatches between a source and destination. This isn't a Protection Manager issue, it's a WAFL issue, or, more generally, an issue with there not being a direct mapping of some characters from one language to another. If the destination volume doesn't support umlauts because of its language setting and there are files on the source that have umlauts, then it's going to fail.

3) The integration with SnapDrive and SnapManager is required because the vaults get cataloged in PM as being part of a SnapManager backup set. That allows you to do things like perform a restore from an archive transparently, or perform your validation on the secondary site. You can't do that if you don't have that catalog information because you are performing your vaulting separately from your SM backups. Of course, for some people that would be just fine, and so the limitation sucks, but that's the rationale behind it.
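
Since the sizing question keeps coming up, here's a rough illustration of why the destination wants headroom. This is NOT Protection Manager's actual formula -- the change rate and dedupe numbers are invented for the example:

code:
# Back-of-the-envelope vault sizing -- illustration only, not PM's real math.
def secondary_size_gb(source_used_gb, extra_snapshots=10,
                      change_rate=0.02, dedupe_savings=0.30):
    """Estimate how big a SnapVault destination volume needs to be.

    Assumptions (all made up for illustration):
      - data arrives re-inflated, then dedupes back down later
      - each extra retained snapshot costs ~change_rate of the source
    """
    reinflated = source_used_gb / (1.0 - dedupe_savings)  # undo dedupe on arrival
    snapshot_overhead = source_used_gb * change_rate * extra_snapshots
    return reinflated + snapshot_overhead

print(f"{secondary_size_gb(100):.0f} GB needed for a 100 GB source")
# ~163 GB with these assumptions -- same ballpark as the 1.3x floor madsushi
# is running into, which is the point of the extra headroom.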

madsushi
Apr 19, 2009

Baller.
#essereFerrari

NippleFloss posted:

It's definitely not a perfect product and the earlier iterations were basically unusable, but it has improved to the point where it is functional and possibly even useful if you spend some time getting familiar with it. Regarding your specific issues:

1) There isn't a one-to-one ratio of source-to-destination size for SnapVaults, so it doesn't really make sense to size them at one-to-one. A vault destination will have a different number of snapshot copies than the source (generally more) and if dedupe is in use the data is initially re-inflated before being deduped on the destination. The extra size is accounting for that overhead. That said, it's not a strict 130%; the calculation is a bit more detailed than that, and differs depending on whether you're using 3.7 or 3.8 and up. There are some hidden options that can be changed to tune the calculation to provide more or less additional space. If you're interested I can provide them. Enabling Dynamic Secondary Sizing is probably the best way to go, provided you're on 3.8 or above.

2) SnapVault gets very unhappy when there are volume language mismatches between a source and destination. This isn't a Protection Manager issue, it's a WAFL issue, or, more generally, an issue with there not being a direct mapping of some characters from one language to another. If the destination volume doesn't support umlauts because of its language setting and there are files on the source that have umlauts, then it's going to fail.

3) The integration with SnapDrive and SnapManager is required because the vaults get cataloged in PM as being part of a SnapManager backup set. That allows you to do things like perform a restore from an archive transparently, or perform your validation on the secondary site. You can't do that if you don't have that catalog information because you are performing your vaulting separately from your SM backups. Of course, for some people that would be just fine, and so the limitation sucks, but that's the rationale behind it.

1) I have never been able to make a volume smaller than 1.3x and still have Protection Manager accept it as a candidate for SnapVault. I opened a TAC case to see about reducing that down but never got anywhere. Sometimes my vault will be almost a mirror, sometimes I want the vault to store fewer snapshots than the source, etc. My destination filer has about 1.2x the space of my production filers, so making every volume start at 1.3x really doesn't work well. If you know of a way to get the minimum size under 1.3x, I am all ears and that would help quite a bit.

2) I get the volume language mismatch issue, but I'm unhappy that there's 1) no override, 2) no button to "fix" it in PM, and 3) a 15-30 minute wait before Protection Manager sees that I fixed the volume language manually.

3) Gotcha, restore from archive is actually a good point I did not think about.

I still use PM at several client sites simply because it's better than my batch files, but it feels like it's a lot of work/learning for small clients (1-2 NetApps) and there are so many little "gotchas" that make it difficult for me to teach others.

Here's a day in the life:

1) (SD install) Enable SnapDrive integration with Protection Manager.
2) (SM config wizard) Enable SnapManager integration with Protection Manager.
3) (NetApp Management Console) Attach some destination volumes to the newly-created dataset. It wants to make them 1.3x? OK, we'll make them manually.
4) (OnCommand System Manager) Make the volume, turn off snapshots, turn on manual dedupe, make qtree.
5) Wait 15 minutes for Protection Manager to rescan
6) (NetApp Management Console) Try to attach the new qtrees to the dataset, but the volume language is wrong.
7) (ONTAP CLI) Change the volume language
8) Wait 15 minutes for Protection Manager to rescan
9) (NetApp Management Console) Attach the new qtrees (finally), assign a policy, initialize the SnapVault
10) (SM backup wizard) Configure the backup jobs to archive

All of that is due to Protection Manager, not including the steps needed to set up SD/SM and MPIO and the application database migration in the first place. Right now I can make a volume in 10 minutes and hand it off to a non-storage admin who can use SnapDrive and SnapManager to get their application set up quickly. With Protection Manager, a smart storage admin needs to spend an hour in so many different consoles just to set up replication. This is in contrast to SnapMirror which is "mirror to this volume using OnCommand System Manager -- done".
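
To be fair, steps 4 and 7 at least lend themselves to scripting instead of console-hopping. A minimal sketch below (7-Mode CLI over ssh); the filer, aggregate, volume names, and sizes are placeholders, so verify the commands against your own ONTAP version:

code:
# Sketch of steps 4 and 7 from the list above as a script instead of GUI clicks.
# Filer, aggregate, volume, and size are placeholders.
import subprocess

FILER = "filer-dr"                 # hypothetical destination filer
VOL, AGGR, SIZE = "sv_sql_data", "aggr1", "130g"

def cli(cmd: str) -> None:
    subprocess.run(["ssh", f"admin@{FILER}", cmd], check=True)

cli(f"vol create {VOL} {AGGR} {SIZE}")    # make the volume
cli(f"vol options {VOL} nosnap on")       # turn off scheduled snapshots
cli(f"sis on /vol/{VOL}")                 # enable dedupe (run manually later)
cli(f"qtree create /vol/{VOL}/q1")        # make the qtree
cli(f"vol lang {VOL} en_US")              # step 7: match the source language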

Trojan
Sep 16, 2004

Best Custom Title I ever.
So today I turned this: [image]

Into this: [image]

6 times. Across 3 sites.

There's something to be said for how far IBM's midrange storage has come over the years. The step from FAStT to V7000 was extremely positive. The fact that now I just plug the expansion drawers in and it detects them, and configuration for the drives is a two-click process? Insane. A monkey could deploy these things.

r u ready to WALK
Sep 29, 2001

Nomex posted:

Does anyone know if there's an HDS simulator available? I'm going for a Netapp/HDS interview and I don't really have much experience with HDS outside of the old HP XP arrays.

I know they have lab environments for training courses that often sit unused; if they're eager to sell you something, maybe you could ask them for access to one of those to have a quick look at the different admin tools.

Our shop has the whole range of HDS products (AMS, HUS, USPV, VSP, HNAS) and I can tell you that they are generally fast, reliable and easy to understand, but their GUI management tools are unbelievably slow and unsexy. At least HDS admits they need to improve them, and things are starting to look a lot better than they did a few years ago.

And yeah, there's no amazing new technology in them; they do things the tried and trusted way and generally choose the simplest implementations, but in a SAN the fewer things that can go horribly wrong, the better.

Pile Of Garbage
May 28, 2007



Trojan posted:

There's something to be said for how far IBM's midrange storage has come over the years. The step from FAStT to V7000 was extremely positive. The fact that now I just plug the expansion drawers in and it detects them, and configuration for the drives is a two-click process? Insane. A monkey could deploy these things.

I've worked with V7000s of varying configurations for a while now and I agree they are brilliant devices (if you can afford them).

On the subject of IBM midrange storage systems, has anyone had a look at their new DCS3700 and DCS9900 high-density systems yet? 60 HDDs in 4U is pretty drat crazy. Also you can get the DCS9900 with either 8 x 8Gb FC or 4 x InfiniBand DDR host interfaces, which is insane.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Trojan posted:

There's something to be said for how far IBM's midrange storage has come over the years. The step from FAStT to V7000 was extremely positive. The fact that now I just plug the expansion drawers in and it detects them, and configuration for the drives is a two-click process? Insane. A monkey could deploy these things.
I agree with what someone else said earlier in this thread about the lack of contra-rotating cabling options being a tremendous problem :mad:

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

cheese-cube posted:

I've worked with V7000s of varying configurations for a while now and I agree they are brilliant devices (if you can afford them).

On the subject of IBM midrange storage systems, has anyone had a look at their new DCS3700 and DCS9900 high-density systems yet? 60 HDDs in 4U is pretty drat crazy. Also you can get the DCS9900 with either 8 x 8Gb FC or 4 x InfiniBand DDR host interfaces, which is insane.

The 9900 is a rebadged DataDirect Networks 9900, which is two generations behind DDN's current offering. You see them in HPC and broadcast; they do some interesting things to keep those pipes full that make them less well suited for general SAN workloads.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
They're useful as giant scratch storage, but my personal recommendation is to never store anything important on DDN.

The_Groove
Mar 15, 2003

Supersonic compressible convection in the sun

cheese-cube posted:

On the subject of IBM midrange storage systems, has anyone had a look at their new DCS3700 and DCS9900 high-density systems yet? 60 HDDs in 4U is pretty drat crazy. Also you can get the DCS9900 with either 8 x 8Gb FC or 4 x InfiniBand DDR host interfaces, which is insane.
I manage an older DDN 9900 w/ InfiniBand and GPFS in an HPC environment and it's pretty solid. By which I mean that we've had GBIC failures, SATA chip failures, I/O module failures, and an entire disk enclosure failure. Sometimes two at once, resulting in 60 arrays with 1 failed drive and another 60 with 2. RAID6 has saved my rear end so many times. But it's always rebuilt things fine and DDN has some great technicians. Not sure if it's just our unit or 9900s in general. Average I/O rates are around 1GB/sec for reads and writes, with peaks up to 5-6GB/sec.

Getting 76 DCS3700s up and running pretty soon; don't know too much about them just yet.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

The_Groove posted:

Sometimes two at once, resulting in 60 arrays with 1 failed drive and another 60 with 2.
How many of those are you running?

The_Groove
Mar 15, 2003

Supersonic compressible convection in the sun
One pair of controllers, but with 2 drive enclosures (daisy-chained) for each channel. So 1200 drives total.

the spyder
Feb 18, 2011
I need to build a new HA SAN for our ESXi backend. Since we are a whitebox shop (Supermicro) and a huge ZFS user, I was planning on building an OpenIndiana Supermicro ZFS box. I made the terrible mistake of quoting some parts through my CDW rep and mentioning the above. He is now trying to shove a NetApp rep down my throat, claiming their new $8k entry-level box can do everything I want. For some reason, I highly doubt it.

the spyder fucked around with this message at 22:29 on Jul 24, 2012
