Timdogg
Oct 4, 2003

The internet? Is that thing still around?

MrMoo posted:

For data that isn't gigantic, Amazon S3 is actually surprisingly reasonable and is my current choice for a TB per day. Glacier turns out not to be effective with such small datasets.

Thanks, that is good to know. What size datasets are you talking? Do you use the AWS Storage Gateway, or any other appliance to assist?


mayodreams
Jul 4, 2003


Hello darkness,
my old friend
Hey NetApp Goons:

Is there a way to delete large folders/datasets from the CLI? I have hundreds of gigs to delete to free up some space and Windows Explorer isn't doing a very good job.

mayodreams fucked around with this message at 17:37 on Feb 13, 2015

MrMoo
Sep 14, 2000

Timdogg posted:

Thanks, that is good to know. What size datasets are you talking? Do you use the AWS Storage Gateway, or any other appliance to assist?

It is only 1-2TB per day currently, North American equity data. The real bonus is that MemSQL, the database I am using, has an S3 loader that can really take advantage of Amazon's scaling and slurp in millions of rows per second. I think it scales a lot bigger because Facebook & friends use it. Every month or so I need to pull in 6+ months of data and batch process it, but it is all really easy and scalable since it is only a day at a time.

toplitzin
Jun 13, 2003


mayodreams posted:

Hey NetApp Goons:

Is there a way to delete large folders/datasets from the CLI? I have hundreds of gigs to delete to free up some space and Windows Explorer isn't doing a very good job.

It depends on how you have the data shared/segregated.

If each data set/share is its own qtree or volume you could just nuke the vol/qtree and recreate it anew and empty, i.e. shares like \\filer\share files to go bye 1\*.*
\\filer\all this needs to go\*.*

However if it's /vol/allmydata/nested in subfolders/, seen as \\filer\share\some data here (don't delete me)\some here\, then nope.

Edit: short version. No we don't do file level CLI access.
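
If nuking the whole volume is an option though, it's only a handful of commands from the 7-mode CLI. Something like the below (volume, aggregate, and share names made up, and vol destroy is irreversible, so triple-check the name before you hit enter):

code:
filer> cifs shares -delete doomed_share               # drop the CIFS share first
filer> vol offline doomed_vol                         # volume must be offline to destroy it
filer> vol destroy doomed_vol                         # everything in it is gone
filer> vol create doomed_vol aggr1 500g               # recreate it empty, same size
filer> cifs shares -add doomed_share /vol/doomed_vol  # re-share it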

toplitzin fucked around with this message at 10:20 on Feb 16, 2015

mayodreams
Jul 4, 2003


Hello darkness,
my old friend

toplitzin posted:

It depends on how you have the data shared/segregated.

If each data set/share is its own qtree or volume you could just nuke the vol/qtree and recreate it anew and empty, i.e. shares like \\filer\share files to go bye 1\*.*
\\filer\all this needs to go\*.*

However if it's /vol/allmydata/nested in subfolders/, seen as \\filer\share\some data here (don't delete me)\some here\, then nope.

Edit: short version. No we don't do file level CLI access.

Thanks. We use Nexenta and I guess I took CLI access to the filer for granted.

Mr Shiny Pants
Nov 12, 2012

mayodreams posted:

Thanks. We use Nexenta and I guess I took CLI access to the filer for granted.

Hah, yesterday I needed some logs from a NetApp. That was a fun half hour.

It has \\filername\c$ though, for access to the log files.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

There are actually rm and ls commands in the NetApp CLI, but they are very limited compared to the POSIX standards (no recursive delete) and are only available in advanced mode.

You can also read and edit files from the CLI with rdfile and wrfile. That said, there are good reasons to keep storage management separate from data management. Deleting files is the province of the data owner, and shouldn't necessarily be easily accomplished by the storage admin.
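
For reference, it looks roughly like this in 7-mode (paths made up); note the prompt changes to filer*> once you're in advanced privilege:

code:
filer> priv set advanced
filer*> ls /vol/myvol/stuff                  # list a directory from the filer itself
filer*> rm /vol/myvol/stuff/huge_dump.bak    # one file at a time, no recursion
filer*> rdfile /vol/myvol/stuff/notes.txt    # dump a file's contents to the console
filer*> priv set admin                       # drop back to admin privilege when done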

the spyder
Feb 18, 2011

Timdogg posted:

We have a 5-node 72NL cluster (320TB usable) about to hit EOL this fall and I am working on options to replace it with a much higher-capacity system. We have really enjoyed the simplicity of Isilon and it has worked great, but we are now being tasked with storing bigger datasets, most around 20TB to 50TB, with one edge case at 400TB.

Unfortunately, it is becoming more and more apparent that I am not going to be able to afford a 1.5PB+ Isilon cluster, and I was interested in hearing if you all had some suggestions. This data is usually large files, not commonly accessed; we used the near-line archival storage from Isilon and speed was *never* an issue. So there are no IO requirements, but the data does need to be accessible, so Amazon Glacier really isn't useful here. We use CIFS and NFS on the Isilon.

I have a good group of sysadmins, but none are dedicated storage admins, so I am not interested in rolling my own cloud with OpenStack or Ceph. We have some Dell 3200i units with 3 1200i shelves attached for 96TB raw; it has worked okay, but I wasn't a huge fan of the LUN management... but the more I read this thread, the more I am thinking this may be one of my best options for capacity + reliability + price. We also just bought vSAN for our VM storage, ... we could scale that out, but I hadn't really thought of it for this particular use case.

Anyways, would love to hear any suggestions if you have them.

TLDR: Need easy to manage, reliable PB+ storage, oh yeah inexpensive would be great.

I was given the task of storing 1.2PB of imagery on a shoestring budget and ended up rolling my own dual-head Solaris/ZFS-based system, serving out CIFS/NFS. The purchase price for the hardware was around $200k and it took under a week to assemble, image, and set up. We deal with thousands of 1GB files per dataset and so far our users have been happy (~5 months). I need to play with the ARC/L2ARC settings a bit, but management has been incredibly simple. There's a learning curve if you're not a UNIX guy, but it's fairly well documented. I use napp-it for simplicity, but have been doing more things from the command line on Solaris 11.2.
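
Conceptually the layout is nothing exotic, roughly the below per head (disk and dataset names invented, and the real vdev widths are from memory):

code:
# pool of raidz2 vdevs across the JBODs, mirrored SSD log, SSD read cache
zpool create tank \
  raidz2 c0t0d0 c0t1d0 c0t2d0 c0t3d0 c0t4d0 c0t5d0 c0t6d0 c0t7d0 \
  raidz2 c0t8d0 c0t9d0 c0t10d0 c0t11d0 c0t12d0 c0t13d0 c0t14d0 c0t15d0 \
  log mirror c1t0d0 c1t1d0 \
  cache c1t2d0
zfs create tank/imagery
zfs set share.nfs=on tank/imagery    # sharenfs=on on the open-source ZFS forks
zfs set share.smb=on tank/imagery    # CIFS via the in-kernel SMB server
zfs set compression=on tank/imagery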

Timdogg
Oct 4, 2003

The internet? Is that thing still around?

the spyder posted:

I was given the task of storing 1.2PB of imagery on a shoestring budget and ended up rolling my own dual-head Solaris/ZFS-based system, serving out CIFS/NFS. The purchase price for the hardware was around $200k and it took under a week to assemble, image, and set up. We deal with thousands of 1GB files per dataset and so far our users have been happy (~5 months). I need to play with the ARC/L2ARC settings a bit, but management has been incredibly simple. There's a learning curve if you're not a UNIX guy, but it's fairly well documented. I use napp-it for simplicity, but have been doing more things from the command line on Solaris 11.2.

Awesome, sounds like a similar setup to what I was thinking of... what kind of hardware did you buy? Supermicro? Did the Solaris licenses cost a pretty penny? Or did you already have some?

Amandyke
Nov 27, 2004

A wha?

Timdogg posted:

Awesome, sounds like a similar setup to what I was thinking of... what kind of hardware did you buy? Supermicro? Did the Solaris licenses cost a pretty penny? Or did you already have some?

Have you looked at the new HD400 Isilon nodes? I don't have the pricing specifics but we're currently looking into buying them to replace one of our 108NL clusters of around 30 nodes. It's as cheap and deep as Isilon gets.

mayodreams
Jul 4, 2003


Hello darkness,
my old friend

NippleFloss posted:

That said, there are good reasons to keep storage management separate from data management. Deleting files is the province of the data owner, and shouldn't necessarily be easily accomplished by the storage admin.

I agree with you, but my company has 15 years of really bad habits, and a complete aversion to a CMS for media files, which turns into multiple 1GB+ revisions with minor changes that can NEVER be deleted. That fills up your storage in a hurry. Oh, and the 300GB of backups a Mac user made of her entire machine every 3 months without telling anyone.

My current crusade is rooting out the idiocy at our company, and it is a monumental task to be sure.

Mr Shiny Pants
Nov 12, 2012
NetApp question: We have a MetroCluster running with one node on all SATA shelves and one node on SAS shelves. We migrated our Exchange environment (the DBs) to the SATA node, and since we made the NetApp the primary member in the DAG the filer seems to stall.

The filer serves RDMs to vSphere hosts via iSCSI.

All IO seems to drop to almost zero, disconnecting users and making our lives miserable. iSCSI latency rises to 60 ms from the normal 1-4 ms.

Anyone ever seen something like this before?

Load is a constant 200 MB/sec on a filer that has about 30 SATA disks in its aggregate and 512GB of Flash Cache.

I've checked the EMS logs but can't find anything in them that would explain the behaviour we are seeing.

Timdogg
Oct 4, 2003

The internet? Is that thing still around?

Amandyke posted:

Have you looked at the new HD400 Isilon nodes? I don't have the pricing specifics but we're currently looking into buying them to replace one of our 108NL clusters of around 30 nodes. It's as cheap and deep as Isilon gets.

I was looking at their website today and happened to see it on a datasheet, the first time I had ever seen mention of it. Certainly looks appealing: 59 6TB disks for 354TB per node, 1.06PB out of the gate with a cluster of them... sounds awesome. I would love to know pricing; our EMC reps kinda suck and really only respond with pricing if we are ready to buy, and we need to wait until July to make the purchase.

Isilon certainly advertises it a little differently; I wonder if it performs similarly to the NL400 nodes? Glad you mentioned this, if you have any more info, do let me know.

OldPueblo
May 2, 2007

Likes to argue. Wins arguments with ignorant people. Not usually against educated people, just ignorant posters. Bing it.

Mr Shiny Pants posted:

NetApp question: We have a MetroCluster running with one node on all SATA shelves and one node on SAS shelves. We migrated our Exchange environment (the DBs) to the SATA node, and since we made the NetApp the primary member in the DAG the filer seems to stall.

The filer serves RDMs to vSphere hosts via iSCSI.

All IO seems to drop to almost zero, disconnecting users and making our lives miserable. iSCSI latency rises to 60 ms from the normal 1-4 ms.

Anyone ever seen something like this before?

Load is a constant 200 MB/sec on a filer that has about 30 SATA disks in its aggregate and 512GB of Flash Cache.

I've checked the EMS logs but can't find anything in them that would explain the behaviour we are seeing.

It could really be a bunch of things. MetroCluster has a small built-in performance hit due to the distance factor: even though you'd think it's only writing to the local side, it's really writing to both plexes simultaneously, etc. I guess I'd start with where it was working better before, on the SAS? How fast are your ISL links, and do you have two or four? Do the switch ports show excessive errors via porterrshow? It could be a network bottleneck; I've seen a MetroCluster overwhelm a network after a head upgrade, resulting in tons of discarded packets on an older network infrastructure. I'd say maybe fire off some ASUPs and see what support has to say, or check My AutoSupport and System Manager for any recommendations.
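
The fabric checks are quick if you want to rule that out first; on the back-end Brocade switches it's something like (output columns from memory):

code:
switch:admin> islshow        # how many ISLs are up and at what speed
switch:admin> porterrshow    # per-port counters; climbing crc/enc errors on the ISL ports is a bad sign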

Mr Shiny Pants
Nov 12, 2012

OldPueblo posted:

It could really be a bunch of things. MetroCluster has a small built-in performance hit due to the distance factor: even though you'd think it's only writing to the local side, it's really writing to both plexes simultaneously, etc. I guess I'd start with where it was working better before, on the SAS? How fast are your ISL links, and do you have two or four? Do the switch ports show excessive errors via porterrshow? It could be a network bottleneck; I've seen a MetroCluster overwhelm a network after a head upgrade, resulting in tons of discarded packets on an older network infrastructure. I'd say maybe fire off some ASUPs and see what support has to say, or check My AutoSupport and System Manager for any recommendations.

The primary environment is now running on our 8-year-old DS4800 and the NetApp is functioning as a DAG member receiving the log changes. That works fine; it is when we switch to the NetApp as our primary that the problem seems to occur.

The switch ports are showing no errors; everything is connected through Nexus switches with 10Gbit uplinks. No idea what an ISL is or why I need two or four, please explain. Inter-shelf link? We have optical SAS cabling between the cluster nodes.

We already have a support case open; I was just wondering if any of you might have seen this before, as there are a few people visiting this thread who have NetApp experience.

Mr Shiny Pants fucked around with this message at 09:49 on Feb 19, 2015

grobbendonk
Apr 22, 2008

Captain Foo posted:

Anyone familiar with EMC Data Domains as backup solutions?

We've used Data Domain with NetBackup since about 2011/2012 and we went totally tapeless around the end of 2012, and we've never looked back. The advantages we've found are that the backup and restore success rate improved significantly, our replication to DR is much faster, we saved a lot of space in our data centres, and we no longer have to worry about tape storage and getting random tapes back from offsite when we want to restore a particular data set. The main disadvantages are cost and getting the sizing right; we're actually doing our third head swap, to a DD990, next month.

It works for us because even though we're protecting around 2PB of data, our retention period for the vast majority of backups is only 2-4 weeks. The dedupe and compression vary enormously depending on what is being backed up, and Oracle databases can be particularly inefficient. We're currently trialling Avamar directly to Data Domain so we can dump NetBackup, but I'm not sure how well that's going at the moment.

Things to look out for or consider: the number of simultaneous streams the DD will support, whether to separate different data types onto different mtrees, getting your Oracle guys to change their backups to compression off and one file per backup set (although this is changing in the latest firmware), and whether you can deploy DD Boost to make your clients do the work.
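
On the Oracle side, the ask boils down to uncompressed backupsets with one datafile per piece so the Data Domain can actually dedupe across runs. Roughly this in the RMAN script (syntax from memory, check with your DBAs):

code:
# uncompressed backupsets (no AS COMPRESSED BACKUPSET), one datafile per backup piece
BACKUP AS BACKUPSET DATABASE FILESPERSET 1;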

Captain Foo
May 11, 2004

we vibin'
we slidin'
we breathin'
we dyin'

grobbendonk posted:

We've used Data Domain with NetBackup since about 2011/2012 and we went totally tapeless around the end of 2012, and we've never looked back. The advantages we've found are that the backup and restore success rate improved significantly, our replication to DR is much faster, we saved a lot of space in our data centres, and we no longer have to worry about tape storage and getting random tapes back from offsite when we want to restore a particular data set. The main disadvantages are cost and getting the sizing right; we're actually doing our third head swap, to a DD990, next month.

It works for us because even though we're protecting around 2PB of data, our retention period for the vast majority of backups is only 2-4 weeks. The dedupe and compression vary enormously depending on what is being backed up, and Oracle databases can be particularly inefficient. We're currently trialling Avamar directly to Data Domain so we can dump NetBackup, but I'm not sure how well that's going at the moment.

Things to look out for or consider: the number of simultaneous streams the DD will support, whether to separate different data types onto different mtrees, getting your Oracle guys to change their backups to compression off and one file per backup set (although this is changing in the latest firmware), and whether you can deploy DD Boost to make your clients do the work.

Hmm, my environment is much smaller than yours; I'm only looking to hold a few TB of data.

Seventh Arrow
Jan 26, 2005

I work in the NAS department for a bank and we've been a NetApp shop ever since we've had NAS (we have about 30 filers), except now we're going to be switching to Hitachi's HNAS platform. It's kind of interesting; I never even knew Hitachi had a NAS solution. We've done some of the training so far and it's kind of cool to see a different architecture. So far, though, it seems like Hitachi EVS = NetApp vFilers and Hitachi virtual volumes = NetApp qtrees. This is important, because qtrees are our bread and butter. Some of our old qtrees have grown into disorganized monstrosities, so this might be a good chance to migrate some stuff into a more orderly setup.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Mr Shiny Pants posted:

NetApp question: We have a MetroCluster running with one node on all SATA shelves and one node on SAS shelves. We migrated our Exchange environment (the DBs) to the SATA node, and since we made the NetApp the primary member in the DAG the filer seems to stall.

The filer serves RDMs to vSphere hosts via iSCSI.

All IO seems to drop to almost zero, disconnecting users and making our lives miserable. iSCSI latency rises to 60 ms from the normal 1-4 ms.

Anyone ever seen something like this before?

Load is a constant 200 MB/sec on a filer that has about 30 SATA disks in its aggregate and 512GB of Flash Cache.

I've checked the EMS logs but can't find anything in them that would explain the behaviour we are seeing.

Grab a "sysstat -x 1" output during the issue and check for a B or b in the "CP Type" column and high disk utilization. What ONTAP version are you running? Also, are you seeing high read latency or write latency, and are you seeing it on the log volumes, the DB volumes, or both?

parid
Mar 18, 2004

Mr Shiny Pants posted:

We already have a support case open; I was just wondering if any of you might have seen this before, as there are a few people visiting this thread who have NetApp experience.

What does sysstat look like? Is it keeping up with consistency points? CPU usage? What does your RAID domain CPU usage (sysstat -M) look like while it's in the problem state? How old is the install? What version of ONTAP?

Do you have FMC-DC running and collecting logs? If not, you may want to start, in case your support case goes that way and you need days of samples.

parid fucked around with this message at 04:57 on Feb 20, 2015

Mr Shiny Pants
Nov 12, 2012

parid posted:

What does sysstat look like? Is it keeping up with consistency points? CPU usage? What does your RAID domain CPU usage (sysstat -M) look like while it's in the problem state? How old is the install? What version of ONTAP?

Do you have FMC-DC running and collecting logs? If not, you may want to start, in case your support case goes that way and you need days of samples.


NippleFloss posted:

Grab a "sysstat -x 1" output during the issue and check for a B or b in the "CP Type" column and high disk utilization. What ONTAP version are you running? Also, are you seeing high read latency or write latency, and are you seeing it on the log volumes, the DB volumes, or both?

I am not in the office today; NetApp also asked for a sysstat output. So we'll plan a day when we can coordinate this with our users. Will let you know. Thanks for the input, guys.

the spyder
Feb 18, 2011

Timdogg posted:

Awesome, sounds like a similar setup to what I was thinking of... what kind of hardware did you buy? Supermicro? Did the Solaris licenses cost a pretty penny? Or did you already have some?

It's all off-the-shelf Supermicro X9 hardware: 45-bay JBODs, LSI 9300-8e HBAs, Intel SSDs for OS/log/cache, and Hitachi/WD SATA or SAS disks. I have over 7PB deployed in ~300TB systems and, outside of our support guys failing to address failed drives, we rarely have issues. Solaris is $1k per CPU socket, per year. We add a $140 napp-it commercial license and call it good. This is essentially bottom-tier storage; the only thing lower IMO would be a Backblaze-style pod. I know at least a few people roll their eyes whenever I mention Solaris/ZFS, but I've had a good experience with it thus far.

Zephirus
May 18, 2004

BRRRR......CHK

Seventh Arrow posted:

I work in the NAS department for a bank and we've been a NetApp shop ever since we've had NAS (we have about 30 filers), except now we're going to be switching to Hitachi's HNAS platform. It's kind of interesting; I never even knew Hitachi had a NAS solution. We've done some of the training so far and it's kind of cool to see a different architecture. So far, though, it seems like Hitachi EVS = NetApp vFilers and Hitachi virtual volumes = NetApp qtrees. This is important, because qtrees are our bread and butter. Some of our old qtrees have grown into disorganized monstrosities, so this might be a good chance to migrate some stuff into a more orderly setup.

That's a pretty good analogy. The HNAS product is ex-BlueArc (Titan/Mercury depending on the version). If they're set up properly we've found them to be ridiculously fast.

If you have a large number of small files you will definitely want to look into fast metadata storage.

There are a number of horrible, horrible bugs in 11.x, so you really want to make sure you're on 12.x.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Zephirus posted:

That's a pretty good analogy. The HNAS product is ex-BlueArc (Titan/Mercury depending on the version). If they're set up properly we've found them to be ridiculously fast.

If you have a large number of small files you will definitely want to look into fast metadata storage.

There are a number of horrible, horrible bugs in 11.x, so you really want to make sure you're on 12.x.
Adding to this:

One thing to be really careful of with HNAS if you're running it in a very high-performance environment like an HPC cluster (we had around 120 gigabits of aggregate throughput across 3 heads): unless something has changed since the last time I looked at them, the cluster networking connection between the frontend nodes is limited to a single 10-gigabit link. This is important, because if you talk to the node in an EVS that doesn't "own" a filesystem (i.e. you're trying to round-robin your cluster between NFS head nodes), all file traffic for that filesystem will be served (read: proxied) from the other frontend node over this link. So while you can port-channel multiple 10-gig connections across multiple frontends, if you're trying to run in an active/active configuration, you can actually end up severely bottlenecking the inter-node link to the point where clustering traffic gets lost, network buffers fill up, and the entire EVS falls over. In cluster environments where you have sufficient orchestration control over the mount points, always talk directly to the node that owns the FS underneath the export unless you have a terrific reason not to.
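
In practice that meant pinning mounts to the owning EVS instead of load-balancing across heads, i.e. fstab entries along these lines (hostnames and filesystem names invented for illustration):

code:
# each filesystem is mounted from the EVS that owns it, never round-robined across heads
evs-scratch.example.com:/fs_scratch   /mnt/scratch   nfs   rw,hard,tcp,vers=3   0 0
evs-archive.example.com:/fs_archive   /mnt/archive   nfs   rw,hard,tcp,vers=3   0 0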

That said, the support is pretty good and the product is fairly robust and reliable. We ran a few petabytes with relatively few issues for how much traffic we pushed over it.

Vulture Culture fucked around with this message at 22:23 on Feb 20, 2015

Zephirus
May 18, 2004

BRRRR......CHK

Misogynist posted:

Adding to this:

One thing to be really careful of with HNAS if you're running it in a very high-performance environment like an HPC cluster (we had around 120 gigabits of aggregate throughput across 3 heads): unless something has changed since the last time I looked at them, the cluster networking connection between the frontend nodes is limited to a single 10-gigabit link. This is important, because if you talk to the node in an EVS that doesn't "own" a filesystem (i.e. you're trying to round-robin your cluster between NFS head nodes), all file traffic for that filesystem will be served (read: proxied) from the other frontend node over this link. So while you can port-channel multiple 10-gig connections across multiple frontends, if you're trying to run in an active/active configuration, you can actually end up severely bottlenecking the inter-node link to the point where clustering traffic gets lost, network buffers fill up, and the entire EVS falls over. In cluster environments where you have sufficient orchestration control over the mount points, always talk directly to the node that owns the FS underneath the export unless you have a terrific reason not to.

That said, the support is pretty good and the product is fairly robust and reliable. We ran a few petabytes with relatively few issues for how much traffic we pushed over it.

Yeah. They try to get around some of this with the read-caching options, but it's best suited to traditional file workloads and, I assume, not HPC or media (our usage).

Managing bossock fiber workload becomes a ballache if the storage can't keep up too.

Seventh Arrow
Jan 26, 2005

Actually, believe it or not, we're not running a clustered setup. Our NetApp environment is on Data ONTAP 8.x in 7-Mode. We might have to update our architecture to get with the times, though.

That's good to hear about Hitachi's support, though. In our neck of the woods, NetApp farms out its hardware replacements to IBM, and they've dropped the ball a number of times. It's especially irritating in a large bank, where you have to reopen tickets that have an approval process.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Zephirus posted:

Managing bossock fiber workload becomes a ballache if the storage can't keep up too.
That's true. The network stack is poor in general and backs up if the storage backs up, which means eventually all your traffic on unrelated filesystems also stops performing.

For cases where you don't have 100+ locally-connected 10-gig clients stressing the storage, though, it screams.

Aquila
Jan 24, 2003

Seventh Arrow posted:

I work in the NAS department for a bank and we've been a NetApp shop ever since we've had NAS (we have about 30 filers), except now we're going to be switching to Hitachi's HNAS platform. It's kind of interesting; I never even knew Hitachi had a NAS solution. We've done some of the training so far and it's kind of cool to see a different architecture. So far, though, it seems like Hitachi EVS = NetApp vFilers and Hitachi virtual volumes = NetApp qtrees. This is important, because qtrees are our bread and butter. Some of our old qtrees have grown into disorganized monstrosities, so this might be a good chance to migrate some stuff into a more orderly setup.

I'm so sorry. I only used a Hitachi FC SAN once; they threw in a free HNAS head but we declined to use it or even take it out of the box. The best practices our Hitachi consultant described were ridiculous at our scale. Administration of HDS filers is so hilariously bad compared to a NetApp that you're probably going to want to kill yourself soon.

Richard Noggin
Jun 6, 2005
Redneck By Default
What are people using for storage switches in very small deployments (2-3 hosts, 5-15 VMs, iSCSI SAN, vSphere)? We've been using Catalyst 3750-Xs stacked, but only because they've been solid.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Richard Noggin posted:

What are people using for storage switches in very small deployments (2-3 hosts, 5-15 VMs, iSCSI SAN, vSphere)? We've been using Catalyst 3750-Xs stacked, but only because they've been solid.

Are you only on 1GbE? At a small remote site with a poo poo internet connection I am running a pair of Juniper EX3300s in a virtual chassis, and it's been humming along fine.

Richard Noggin
Jun 6, 2005
Redneck By Default
Yup, 1GbE. We do have the option of going host-->controller and bypassing switching altogether with the VNXe3200s we usually deploy, but I'm not sure if that's a good idea.

Gwaihir
Dec 8, 2009
Hair Elf

Richard Noggin posted:

What are people using for storage switches in very small deployments (2-3 hosts, 5-15 VMs, iSCSI SAN, vSphere)? We've been using Catalyst 3750-Xs stacked, but only because they've been solid.

That's what I'm using in a pretty much identical environment. Can't think of the last time I had switch issues.

Rhymenoserous
May 23, 2008

Richard Noggin posted:

What are people using for storage switches in very small deployments (2-3 hosts, 5-15 VMs, iSCSI SAN, vSphere)? We've been using Catalyst 3750-Xs stacked, but only because they've been solid.

Looks like the 3750-Xs have (or can have) SFP+ uplink modules that support 10G Ethernet. You could get 10GbE NICs for the servers and Bob's your uncle. That's what I did for the longest time before I could justify shelling out for a dedicated 10G switch just for server traffic.
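
If you go that route, the usual iSCSI housekeeping on the 3750-X side is just jumbo frames plus flow control on the storage-facing ports. Rough sketch, interface numbers and VLAN made up, and the global MTU change needs a reload:

code:
! global on the 3750-X, takes effect after a reload
system mtu jumbo 9000
!
! storage-facing port: dedicated iSCSI VLAN, flow control, portfast
interface TenGigabitEthernet1/1/1
 switchport mode access
 switchport access vlan 100
 flowcontrol receive desired
 spanning-tree portfast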

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

4900 series switches will handle storage traffic better than 3750s owing to a much larger shared port buffer space. Bursty traffic or mismatched egress/ingress rates (both common for network storage) can overload the relatively small buffers on the 3750s and lead to packet drops.

Seventh Arrow
Jan 26, 2005

Aquila posted:

I'm so sorry. I only used a Hitachi FC SAN once; they threw in a free HNAS head but we declined to use it or even take it out of the box. The best practices our Hitachi consultant described were ridiculous at our scale. Administration of HDS filers is so hilariously bad compared to a NetApp that you're probably going to want to kill yourself soon.

Bad in what way? Is the hardware itself bad, or is it just the management tools?

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Seventh Arrow posted:

Bad in what way? Is the hardware itself bad, or is it just the management tools?

Hitachi hardware is rock solid and stable, if not particularly cutting edge. Hitachi software is a collapsing building built on top of a landfill built on top of an Indian burial ground.

Zephirus
May 18, 2004

BRRRR......CHK

NippleFloss posted:

Hitachi hardware is rock solid and stable, if not particularly cutting edge. Hitachi software is a collapsing building built on top of a landfill built on top of an Indian burial ground.

This is pretty much correct. The newer Command Suite, Command Director, and Tuning Manager versions are generally approaching usability in terms of day-to-day use, but actual setup and maintenance is still labyrinthine and unintuitive (especially Tuning Manager). They do look pretty in the new darker colour scheme though.

Historically the whole thing is a bit of a mess, the worst of which is things like the old AMS2000/HUS web console and the USP-V Java console (which is so slow and awful to use that most people prefer the command line tools, which are also hilariously difficult to comprehend, especially HORCM). Command Suite was developed because the USP management was so awful they decided to just go around it with a web-based tool.

You had to have shelled out at least 7 figures to see the console below. Not pictured: the 30-second wait caused by clicking anything.

[screenshot of the USP-V Java management console]

The NAS web console is relatively unchanged from the BlueArc days, other than colour scheme changes. If you are using the NAS console you are probably doing it wrong, though, as the SSC/Bali command line is very good and actually quite intuitive.

Cidrick
Jun 10, 2001

Praise the siamese

No... no! Not again! I won't go back!

Richard Noggin
Jun 6, 2005
Redneck By Default

NippleFloss posted:

4900 series switches will handle storage traffic better than 3750s owing to a much larger shared port buffer space. Bursty traffic or mismatched egress/ingress rates (both common for network storage) can overload the relatively small buffers on the 3750s and lead to packet drops.

Yeah, that I know, but at 4x the cost.


YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Zephirus posted:

This is pretty much correct. The newer Command Suite, Command Director, and Tuning Manager versions are generally approaching usability in terms of day-to-day use, but actual setup and maintenance is still labyrinthine and unintuitive (especially Tuning Manager). They do look pretty in the new darker colour scheme though.

Historically the whole thing is a bit of a mess, the worst of which is things like the old AMS2000/HUS web console and the USP-V Java console (which is so slow and awful to use that most people prefer the command line tools, which are also hilariously difficult to comprehend, especially HORCM). Command Suite was developed because the USP management was so awful they decided to just go around it with a web-based tool.

You had to have shelled out at least 7 figures to see the console below. Not pictured: the 30-second wait caused by clicking anything.

[screenshot of the USP-V Java management console]

The NAS web console is relatively unchanged from the BlueArc days, other than colour scheme changes. If you are using the NAS console you are probably doing it wrong, though, as the SSC/Bali command line is very good and actually quite intuitive.

Storage Navigator is absolutely terrible, so I can only assume that HORCM was designed to make it seem intuitive and useful by comparison. My favorite Storage Navigator feature was the fact that only a single user could be logged on at a time, and if they closed the window without logging out properly you had to wait like 15 minutes before anyone, including that user, could log back in and actually provision storage. Hope it's not an emergency!

HORCM, on the other hand, will happily allow you to accidentally reverse a replication relationship just by transposing a couple of CU/LDEV pairs in a text file that is nothing but an unreadable list of CU/LDEV pairs and wipe out your production data without any warning whatsoever.
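
For anyone who hasn't had the pleasure, the relevant chunk of a horcm.conf looks roughly like this (group names, serial number, and LDEV IDs invented). Swap a couple of those CU:LDEV values between the primary and secondary instance files and your replication quietly runs the wrong way:

code:
HORCM_LDEV
# dev_group   dev_name   Serial#   CU:LDEV(LDEV#)   MU#
ORA_PROD      ora_data1  64015     01:20            0
ORA_PROD      ora_data2  64015     01:21            0
ORA_PROD      ora_logs1  64015     01:22            0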
