bigmandan
Sep 11, 2001

lol internet
College Slice
So we had a cache battery controller "failure" on one of our Dell SC4020s. It's less than a year old. Apparently there is a known issue with the firmware on the cache controller. Reseating the battery did not work, so the suggestion from Copilot is to reseat the controller. Even though this is a redundant system, I'm still wary of reseating the controller during normal hours, so I get to do some maintenance tonight. I'm just hoping this is not indicative of a larger issue.


sanchez
Feb 26, 2003
What does Dell say? If it's a known issue they should have something, I wouldn't pull a controller without their recommendation.

bigmandan
Sep 11, 2001

lol internet
College Slice

sanchez posted:

What does Dell say? If it's a known issue they should have something, I wouldn't pull a controller without their recommendation.

They said to pull the controller.

kiwid
Sep 30, 2013

How come Nimble isn't in the OP? What are goons' opinions on it?

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

kiwid posted:

How come Nimble isn't in the OP? What are goons' opinions on it?

OP was written a few years ago and I haven't had time to be a good curator. We like Nimble very much (speaking for the company I work for, not goons in general) because it's easy to use and it generally performs well at a pretty solid price point.

edit:
My god I wrote that poo poo in 2008. Is there interest in a refresh?

Docjowles
Apr 9, 2009

Based on past experience, no one actually reads megathread OPs. kiwid is apparently the rare exception.

Erwin
Feb 17, 2006

kiwid posted:

How come Nimble isn't in the OP? What are goons' opinions on it?

I've never seen anything negative about Nimble here, or anywhere for that matter. I love mine. I was promised 32,000 IOPS and can pull 40,000. I rarely think about it, which is a compliment.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

The biggest knocks against Nimble are that it's block-only and not particularly feature-rich. It's simple and performs fairly well, though. Tegile is winning a lot of deals over Nimble in my area lately and has multi-protocol support, which can be nice. I like Tintri a lot as well, though it's a bit more expensive than the other two.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Nimble is just stupid fast per spindle.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Never had any problems with my Nimble arrays. One of them is now pegging the CPU in the controller and latency is still sub-3ms (a lotta growth). Upgrading the controllers in early 2016.

beepsandboops
Jan 28, 2014

1000101 posted:

edit:
My god I wrote that poo poo in 2008. Is there interest in a refresh?
As a complete beginner with storage, I'd be up for an OP refresh

theducks
Feb 13, 2007
Duckman

OldPueblo posted:

You can still choose 7-mode if you like, just bear in mind there is no 8.3+ 7-mode, 8.2.x is the last 7-mode release and it'll continue to get bug fixes, etc., for years. You can also change your mind after it's delivered really, just get with your licensing reps and get licenses switched. At some point there will probably be a platform that isn't supported for 7-mode, I'm just guessing. The default is cDOT now though, if that was your question.

Since 8.2, licenses work for either 7-mode or cDOT - so if you receive it with cDOT and want to change your mind to 7-mode, you just unset a BIOS variable, netboot the filer to 8.2.3P2 or whatever you want, then reinitialize. But you shouldn't: cDOT works great for essentially everything that 7-mode did. If you have 7-mode, just set the BIOS variable, reinitialize, and reconfigure. You'll need a cluster license key, but whoever you bought the NetApp from can help you with that, for free.
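For reference, the "BIOS variable" here is a LOADER boot argument. A sketch of the switch, per NetApp's transition docs (verify against your ONTAP release notes before touching it; this is not a full procedure):

```shell
# At the LOADER prompt on each controller:
setenv bootarg.init.boot_clustered true     # boot clustered ONTAP (cDOT)

# ...or, to go back to 7-mode:
unsetenv bootarg.init.boot_clustered

# Then netboot/reinitialize as described in the post above.
```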

Wicaeed
Feb 8, 2005

kiwid posted:

How come Nimble isn't in the OP? What are goons' opinions on it?

Have two Nimble arrays that have been rock solid since we bought them. They require very little maintenance as well, which is a huge + in my book.

Quite happy with them, however dammit I wish they did NFS as well :(

bigmandan
Sep 11, 2001

lol internet
College Slice
So the Compellent array that had a bad controller I posted about has now also had an SSD drive fail. I'm kinda surprised to see failures like this in something that's not even been running for a year. At least a Dell tech should be here in about 3 hours with the parts.

Rhymenoserous
May 23, 2008

1000101 posted:

OP was written a few years ago and I haven't had time to be a good curator. We like Nimble very much (speaking for the company I work for, not goons in general) because it's easy to use and it generally performs well at a pretty solid price point.

edit:
My god I wrote that poo poo in 2008. Is there interest in a refresh?

I remember first writing about nimble in this thread like 2-3 years ago and goons going "Not so sure 'bout those dudes" and now I feel all vindicated.

Kaddish
Feb 7, 2002
Anybody have experience with SolidFire flash arrays? How well do the scale-out and volume QoS features work?

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.
Has anyone run storage performance benchmarks on a Cisco UCS box?

I'm doing a side gig for a customer, setting up a database cluster on a UCS box (UCS C240 M3 SFF, dual 8-core/2.7 GHz, 128 GB RAM, 16x300GB) running VMware for them. They had a question on how to set up the storage and now I'm curious about RAID-10 vs. RAID-6 (or whatever Cisco calls it on a UCS box).

Has anyone ever compared the two?

I'm also considering RAID-50 and RAID-60 to maximize usable disk space while retaining some level of performance. Where's the sweet spot?

Agrikk fucked around with this message at 14:27 on Jul 17, 2015
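For the 16 x 300 GB box in question, a quick back-of-the-envelope on usable capacity (raw figures only; this ignores hot spares, controller overhead, and formatted capacity):

```shell
#!/bin/sh
# Raw usable capacity per RAID layout for 16 x 300 GB disks.
disks=16
size_gb=300
raid10_gb=$(( disks / 2 * size_gb ))    # half the spindles hold mirrors
raid6_gb=$(( (disks - 2) * size_gb ))   # two disks' worth of parity
raid60_gb=$(( (disks - 4) * size_gb ))  # two parity disks per 8-disk span
echo "RAID-10: ${raid10_gb} GB  RAID-6: ${raid6_gb} GB  RAID-60: ${raid60_gb} GB"
```

RAID-10 gives up the most space but avoids the parity read-modify-write penalty, which matters for a write-heavy database cluster.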

Internet Explorer
Jun 1, 2005





You will almost certainly want to use RAID 10 unless they are super read heavy. The sweet spot depends on your read/write ratio, but if they only have that one box and all of the VMs are going to be on it, RAID 10 is the right choice.

Rhymenoserous
May 23, 2008
Raid 10.

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!
RAID 10 or passthrough for VMware's VSAN.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
we do raid 6 for our VDI stuff, but honestly raid10 would have been a better choice.

bigmandan
Sep 11, 2001

lol internet
College Slice
I have learned that keeping on top of storage reclamation is probably a good idea.

Over the weekend we came pretty close to being completely full on our tier 3 storage. After cleaning up some old data I thought I had cleaned up mostly everything, but noticed usage on our Compellent arrays didn't change (after replay and Data Progression)... File deletion does not zero blocks out. This is something I already knew, but it didn't really click until I saw the space discrepancy. I ended up having to use a combination of `esxcli storage vmfs unmap` and dd within our Linux guests (thick disks) to free up the blocks on the array.

Here is the dd script i used:

code:
#!/bin/bash

# Write ~1 TB of zeroed files, then delete them; the zeroed blocks
# are what the array (or a later UNMAP) can reclaim.
for i in {1..1000}; do
	dd if=/dev/zero bs=1M count=1024 of=/home/reclaim/zero.$i.bin
done

sync    # make sure the zeroes actually reach the array
rm /home/reclaim/*.bin
I'm wondering if there is a better way of managing storage reclamation than this.
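One hazard with that approach: on a smaller filesystem the zero-fill runs the guest to 100% full until the `rm`. A sketch of a variant that stops while headroom remains (the paths, the threshold, and the `zero_fill` name are all illustrative):

```shell
#!/bin/bash
# Zero-fill free space in a directory, but stop while headroom remains
# so the guest filesystem never hits completely full mid-reclaim.
zero_fill() {
    local dir=$1 min_free_mb=$2 i=0 free_kb
    mkdir -p "$dir" || return 1
    while :; do
        free_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
        [ "$(( free_kb / 1024 ))" -le "$min_free_mb" ] && break
        i=$((i + 1))
        # 1 GiB of zeroes per file; stop on write error (disk full, etc.)
        dd if=/dev/zero bs=1M count=1024 of="$dir/zero.$i.bin" 2>/dev/null || break
    done
    sync                       # flush the zeroes down to the array
    rm -f "$dir"/zero.*.bin    # free the space again so UNMAP can reclaim it
}
```

Usage would be something like `zero_fill /home/reclaim 5120` to keep ~5 GB free, followed by the host-side `esxcli storage vmfs unmap -l <datastore_label>` (fill in your own datastore).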

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

bigmandan posted:

I have learned that keeping on top of storage reclamation is probably a good idea.

Over the weekend we came pretty close to being completely full on our tier 3 storage. After cleaning up some old data I thought I had cleaned up mostly everything, but noticed usage on our Compellent arrays didn't change (after replay and Data Progression)... File deletion does not zero blocks out. This is something I already knew, but it didn't really click until I saw the space discrepancy. I ended up having to use a combination of `esxcli storage vmfs unmap` and dd within our Linux guests (thick disks) to free up the blocks on the array.

Here is the dd script i used:

code:

#!/bin/bash

# Write ~1 TB of zeroed files, then delete them; the zeroed blocks
# are what the array (or a later UNMAP) can reclaim.
for i in {1..1000}; do
	dd if=/dev/zero bs=1M count=1024 of=/home/reclaim/zero.$i.bin
done

sync    # make sure the zeroes actually reach the array
rm /home/reclaim/*.bin

I'm wondering if there is a better way of managing storage reclamation than this.

Nope, which is why thin provisioning on NFS is the tits. Some vendors have tools that simplify the guest zeroing portion, but the basic steps are the same.

Internet Explorer
Jun 1, 2005





[Edit: gently caress me... thought this was the NAS thread. Sorry.]

Internet Explorer fucked around with this message at 01:28 on Jul 22, 2015

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

If you're in the market for an all-flash array there are some pretty good deals to be had out there right now. Pure is running a promo where you can get an FA-405 with 2.75TB raw for about $50k. NetApp is offering an 8020 AFF with 4.8TB raw for about $25k. Both arrays feature dedupe and compression, so data reduction rates are pretty good. Flash storage is definitely getting cheaper and cheaper.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Also, Cisco has killed off Invicta (formerly Whiptail) less than two years after acquiring it, having done basically nothing at all with it.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

NippleFloss posted:

Also, Cisco has killed off Invicta (formerly Whiptail) less than two years after acquiring it, having done basically nothing at all with it.
Cisco acqui-hires, they never keep a product line intact.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Vulture Culture posted:

Cisco acqui-hires, they never keep a product line intact.

All indications are that they intended to sell the product directly (they did this, there are Invicta customers out there and they tried to get us to quote it) as well as integrate it with UCS. They've given up on both and it's basically a dead product with no obvious landing spot for the people or IP they acquired.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Vulture Culture posted:

Cisco acqui-hires, they never keep a product line intact.

Think Meraki will stick around? They are probably making some good money off the licensing.

Richard Noggin
Jun 6, 2005
Redneck By Default
I don't see Meraki going anywhere. It's already rebranded as Cisco Meraki, and it fills a nice gap in their product line, allowing them to compete with the Aerohives of the world.

kiwid
Sep 30, 2013

Figured this is the best thread to ask this in but does anyone here backup to Amazon S3 or Amazon Glacier (not sure what the difference is currently)?

If so, do you like it? Are there any caveats I should know about, or is it as simple as just backing up/archiving your data and hoping you don't have to touch it ever (we'd still be doing on-site backups)?

Also, what the gently caress is a "request"? If I'm backing up one server, is that one request or is a request done for each file or what?

edit: Also, assuming poo poo hit the fan and we had to resort to disaster recovery by getting our data off Amazon Glacier. How do you actually do that, assuming you had 20TB on there? Would you be downloading that all over the WAN or would they send you a hard drive or something?

kiwid fucked around with this message at 22:01 on Jul 30, 2015

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

kiwid posted:

Figured this is the best thread to ask this in but does anyone here backup to Amazon S3 or Amazon Glacier (not sure what the difference is currently)?

If so, do you like it? Are there any caveats I should know about, or is it as simple as just backing up/archiving your data and hoping you don't have to touch it ever (we'd still be doing on-site backups)?

Also, what the gently caress is a "request"? If I'm backing up one server, is that one request or is a request done for each file or what?

edit: Also, assuming poo poo hit the fan and we had to resort to disaster recovery by getting our data off Amazon Glacier. How do you actually do that, assuming you had 20TB on there? Would you be downloading that all over the WAN or would they send you a hard drive or something?

You probably don't want to use Glacier for backup. There is a minimum four-hour wait before your request will even begin to be serviced. It's meant for archival data that you will access very infrequently and with a very generous RTO. It's also priced much higher per GET request, so restores can get expensive.

Backing up to S3 is pretty common though and a lot of backup vendors have configurations to allow that fairly trivially.

AWS stores objects, not files or blocks, so a request is just a request to store or retrieve an object. An object is just a blob of data identified by some metadata. How many requests are required to store or retrieve an object is going to be determined by how your backup software handles writing to the object store.
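To make that concrete, here's a hypothetical AWS CLI session (bucket and paths invented for illustration). A single large backup file uploaded to S3 gets split into multipart chunks by the CLI, and each chunk is its own billed PUT request:

```shell
# One logical backup file, many billed requests: the CLI uploads large
# files in multipart chunks (one PUT per chunk, plus initiate/complete).
aws s3 cp /backups/server01-full.tar.gz s3://example-backup-bucket/server01/

# Listing and restoring generate LIST/GET requests, billed separately.
aws s3 ls s3://example-backup-bucket/server01/
aws s3 cp s3://example-backup-bucket/server01/server01-full.tar.gz /restore/
```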

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

kiwid posted:

Figured this is the best thread to ask this in but does anyone here backup to Amazon S3 or Amazon Glacier (not sure what the difference is currently)?

If so, do you like it? Are there any caveats I should know about, or is it as simple as just backing up/archiving your data and hoping you don't have to touch it ever (we'd still be doing on-site backups)?

Also, what the gently caress is a "request"? If I'm backing up one server, is that one request or is a request done for each file or what?

edit: Also, assuming poo poo hit the fan and we had to resort to disaster recovery by getting our data off Amazon Glacier. How do you actually do that, assuming you had 20TB on there? Would you be downloading that all over the WAN or would they send you a hard drive or something?

Echoing what NippleFloss has already said:

Glacier is for long term archiving of data. Think processed log files, legal documents with 7-year retention times, and anything else you refer to once and then need to keep for a long time without accessing it.

S3 is the more typical approach for backups as a bucket can typically be mounted as a storage device in most cloud-aware software these days and integrates seamlessly with your existing backup infrastructure. If cost is a thing, you can always look at S3's reduced redundancy option which basically reduces availability from 11 nines to four (99.99%) I think.

Have a look at the cost calculator for a better sense of cost differences between the various storage options:

http://calculator.s3.amazonaws.com/index.html


Also, for larger backup or recovery jobs, have a look at AWS Import/Export. It is pretty much what you said: you dump your data to a hard drive and ship it to them, or you ship them a hard drive and they dump your data back on it and ship it back. But the process can take several days and isn't designed to be a disaster recovery option for critical data.

Levitate
Sep 30, 2005

randy newman voice

YOU'VE GOT A LAFRENIÈRE IN ME
Not entirely sure if this is the best thread to ask in but it does involve a function of storage...

We use NetApp filers and almost every user has a laptop. We end up with a bunch of orphaned lock files on our network storage because people tend to not actually close files before they disconnect their laptops from the network, or disconnect from the VPN if they're at home. In turn, this leads to the orphaned lock files being reused when someone else comes along and opens the file, so the wrong user is reported as having the file open for editing. Basically, user A disconnects improperly and leaves a lock file on the network for a random Word file. User B comes along and opens that same file and the lock file is re-used (apparently). Now if another user tries to open the same file, they're told that user A has the file open, when actually user B has it open. The lock file also doesn't always seem to clear correctly after that, even if user B closes the file properly (we sometimes find months-old lock files that aren't cleared).

The question being: short of beating it into the heads of users to stop disconnecting from the network without closing their files, is there any way to automatically clear these orphaned lock files rather than handling them on a case-by-case basis? Some function of NetApp that I haven't discovered, or anything else like that? Or in general, has anyone had similar problems and found a better solution?

Thanks
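Not aware of an automatic cleanup, but on clustered ONTAP stale CIFS locks can at least be listed and broken from the cluster shell instead of case by case on the Windows side. A sketch (SVM, volume, and path below are placeholders, and the exact flags vary by ONTAP release - check `man vserver locks break` on your filer; some releases also want the owning LIF or a file ID):

```shell
# List locks on the suspect file (names are placeholders)
vserver locks show -vserver svm1 -volume vol_users -path /vol_users/report.docx

# Break the stale lock
vserver locks break -vserver svm1 -volume vol_users -path /vol_users/report.docx
```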

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Agrikk posted:

If cost is a thing, you can always look at S3's reduced redundancy option which basically reduces availability from 11 nines to four (99.99%) I think.

One minor point: both the reduced redundancy option and the standard option have the same availability target (99.99%). The difference is that the standard option has 11 nines of durability while the reduced has only four. Durability measures the chance that an object will be lost within a year, versus availability, which measures the chance that it will be unavailable for some portion of the year.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Agrikk posted:

Echoing what NippleFloss has already said:

Glacier is for long term archiving of data. Think processed log files, legal documents with 7-year retention times, and anything else you refer to once and then need to keep for a long time without accessing it.

S3 is the more typical approach for backups as a bucket can typically be mounted as a storage device in most cloud-aware software these days and integrates seamlessly with your existing backup infrastructure. If cost is a thing, you can always look at S3's reduced redundancy option which basically reduces availability from 11 nines to four (99.99%) I think.

Have a look at the cost calculator for a better sense of cost differences between the various storage options:

http://calculator.s3.amazonaws.com/index.html


Also, for larger backup or recovery jobs, have a look at AWS Import/Export. It is pretty much what you said: you dump your data to a hard drive and ship it to them, or you ship them a hard drive and they dump your data back on it and ship it back. But the process can take several days and isn't designed to be a disaster recovery option for critical data.
Keep in mind that you can use lifecycle (ILM) policies to move data between S3 and Glacier transparently. So if you have a bucket that you use for storing your full database backups, you can tell it to keep that data in S3 for 30 days and then offload it to Glacier until you delete it.
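A minimal sketch of such a rule via the AWS CLI (bucket name and prefix invented; check the current S3 lifecycle schema, which has changed over the years):

```shell
# Move objects under backups/ to Glacier after 30 days, delete after a year.
aws s3api put-bucket-lifecycle-configuration \
  --bucket example-backup-bucket \
  --lifecycle-configuration '{
    "Rules": [{
      "ID": "archive-db-backups",
      "Filter": {"Prefix": "backups/"},
      "Status": "Enabled",
      "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
      "Expiration": {"Days": 365}
    }]
  }'
```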

Cidrick
Jun 10, 2001

Praise the siamese
My team is playing with the idea of going tapeless when we refresh our NetBackup environment. However, we'd like to do it without throwing hundreds of thousands of dollars at a particular storage vendor if we can help it (like Data Domain or something similar).

Does anyone have experience setting up a high-density, cheap, non-performant storage array for the purposes of backups attached to a NetBackup media server? Preferably something with dedupe and compression? We've also thought of just rolling some dense HP servers with a bunch of 6TB SATA drives and running something like OpenDedup on top, but I'm not sure if that would be more trouble than it'd be worth to maintain all that, and to have to worry about setting up our own alerting and scheduling drive replacements and whatnot.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


So, thin provisioning (especially with virtualization).

You have thin provisioning at the VMware level.
You have thin provisioning at the storage level, sometimes even twice at the storage level (a NetApp thin-provisioned LUN inside of a thin-provisioned volume).

The only reason to thin provision is to oversubscribe resources. The question is WHERE do you oversubscribe. What's best practice?

If you have dedupe/compression, does it make sense to even thin provision VMs at the VMware level anymore? At that point is it better to thick provision them and size the drives accordingly, so you don't have to worry about oversubscribing each VMware volume, and instead focus all your attention on the storage level?

Internet Explorer
Jun 1, 2005





I'm phone posting so I can't respond to all of your questions, but make sure you know how to reclaim space at the guest level, the host level, and the storage level. Read up on SCSI UNMAP and make sure your environment supports it. Otherwise you can paint yourself into a corner. I generally avoid thin provisioning at the storage level unless there's a specific need or benefit. The hypervisor level is a bit more flexible and does have some storage-motion related benefits.


bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


If you don't thin provision at the storage level, though, how is dedupe providing you any benefits at all?

If I thick provision a 10TB volume on NetApp and put 2 thick-provisioned 5TB LUNs on it, then put 50 identical, 100% full 100GB VMs on each LUN, I'll be wasting a ton of space.

I will have reserved 10TB of storage on the SAN, both of my 5TB VMware volumes would be full, each guest would also be full, but I would only be consuming around 100GB (give or take) of real storage.
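Spelling out the arithmetic in that example (numbers straight from the post):

```shell
#!/bin/sh
# 2 thick-provisioned 5 TB LUNs, 50 identical 100 GB VMs on each,
# fully deduplicated on the array.
luns=2
vms_per_lun=50
vm_gb=100
reserved_gb=$(( luns * vms_per_lun * vm_gb ))  # what the SAN has set aside
consumed_gb=$vm_gb                             # one deduped copy survives
echo "reserved: ${reserved_gb} GB, actually consumed: ~${consumed_gb} GB"
```

That's 10,000 GB reserved against roughly 100 GB of real data, which is the gap the post is describing.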
