YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Richard Noggin posted:

Yeah, that I know, but at 4x the cost.

A refurbished 4948 is a few hundred bucks more than a refurb 3750X-24T.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

NippleFloss posted:

A refurbished 4948 is a few hundred bucks more than a refurb 3750X-24T.
4948 doesn't stack, though, which can complicate topologies somewhat once you go beyond the port count of a single switch. Costs go up a bit once you throw the uplink SFP+ modules in the mix.

Azhais
Feb 5, 2007
Switchblade Switcharoo

Cidrick posted:

No... no! Not again! I won't go back!

This is the only thing that evokes that reaction from me

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Misogynist posted:

4948 doesn't stack, though, which can complicate topologies somewhat once you go beyond the port count of a single switch. Costs go up a bit once you throw the uplink SFP+ modules in the mix.

I don't think I'd ever use stacked 3750s for a storage network though.

re: SFP+ costs,

'no errdisable detect cause gbic-invalid'

and

'service unsupported-transceiver'

will be your best friends on a shoestring budget. Cisco criminally overcharges for optics.
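
For anyone doing this for the first time, it's just two lines in global config, roughly like so (a minimal sketch; exact behavior varies a bit by platform and IOS version, so try it on a lab port first):

  conf t
   ! allow non-Cisco SFP/SFP+ modules to come up instead of being rejected
   service unsupported-transceiver
   ! don't err-disable the port when a third-party GBIC/SFP is detected
   no errdisable detect cause gbic-invalid
  end
  wr mem

After that, cheap third-party optics and DACs will generally link up fine, though TAC can get grumpy about them if you ever open a case.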

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Misogynist posted:

4948 doesn't stack, though, which can complicate topologies somewhat once you go beyond the port count of a single switch. Costs go up a bit once you throw the uplink SFP+ modules in the mix.

Yea, but the original question was about storage switches for small environments, so the port count requirements are going to be low, the topology is going to be very simple, and you can generally manage failover in ways that don't require cross-switch port-channels. If you just need a simple gigabit switch that can handle a reasonable amount of storage traffic without dropping frames, they work fine.

Richard Noggin
Jun 6, 2005
Redneck By Default

NippleFloss posted:

A refurbished 4948 is a few hundred bucks more than a refurb 3750X-24T.

Interesting. I'll have to check it out. List is like 2x I think.

sanchez
Feb 26, 2003

1000101 posted:

I don't think I'd ever use stacked 3750s for a storage network though.

re: SFP+ costs,

'no errdisable detect cause gbic-invalid'

and

'service unsupported-transceiver'

will be your best friends on a shoestring budget. Cisco criminally overcharges for optics.

DAC SFP+ cables are <$100 for genuine Cisco, no need for optics to join a pair of switches that are right next to each other.

devmd01
Mar 7, 2006

Elektronik
Supersonik
Obligatory gently caress IBM post: we have a DS3512 that we need to get to the latest firmware for a VMware upgrade to 5.5, and they lock you out if you don't have a service contract. Guess what these people didn't renew six months ago!

Also, IBM thinks that our unit's serial is in Singapore. :suicide:

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
That is the way a lot of manufacturers are going - HP started doing something similar where you can't get their "official" firmware/driver poo poo without having an active support contract of some sort.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
I don't think I can even name any enterprise hardware vendors that don't do that. Raritan was the last one I worked with, and they started requiring service contracts for firmware updates after I refused to re-up SNS on our fleet of KVMs, pointing out that we'd have to lose 20% of our KVMs every year for the warranty to be worth it.

Thanks Ants
May 21, 2004

#essereFerrari


Dell are the only ones I can name that don't need service contracts for firmware upgrades.

To add some content, I have a requirement for some cheap NAS but with decent support and dual controllers, and I'm looking at the Fujitsu stuff - specifically the DX100 S3. To give you an idea of where this is being positioned it's going up against Synology crap because I'm bored with a buggy OS and non-existent support.

Has anyone had any horrific experiences with these boxes, or know anything about them at all?

Thanks Ants fucked around with this message at 22:31 on Feb 26, 2015

Kaddish
Feb 7, 2002
We're going through a switch upgrade. We currently have old IBM 2005-B5Ks (rebranded Brocade) and we're going to upgrade to Directors. Our hardware rep has scheduled a call with Brocade and Hitachi - evidently Hitachi also sells rebranded Brocades and we can save some money. Anyone use Hitachi Directors? How is support?

Zephirus
May 18, 2004

BRRRR......CHK

Kaddish posted:

We're going through a switch upgrade. We currently have old IBM 2005-B5Ks (rebranded Brocade) and we're going to upgrade to Directors. Our hardware rep has scheduled a call with Brocade and Hitachi - evidently Hitachi also sells rebranded Brocades and we can save some money. Anyone use Hitachi Directors? How is support?

They're not even re-branded afaik. They're just Brocades - at least that's the case for everything Brocade I've bought through HDS. Support is OK, but with HDS YMMV depending on your area. APAC is not great, UK/EMEA is OK, and in US/CAN I've had good and bad experiences, but not enough to judge.

You get access to everything Brocade software-wise through the HDS portal - there is a partner link to brocade.com, so you don't lose out there.

Kaddish
Feb 7, 2002

Zephirus posted:

They're not even re-branded afaik. They're just Brocades - at least that's the case for everything Brocade I've bought through HDS. Support is OK, but with HDS YMMV depending on your area. APAC is not great, UK/EMEA is OK, and in US/CAN I've had good and bad experiences, but not enough to judge.

You get access to everything Brocade software-wise through the HDS portal - there is a partner link to brocade.com, so you don't lose out there.

Awesome, I didn't realize they weren't even re-branded. I popped over to the Brocade website and that does seem to be the case. So support will still be my primary concern. Thanks for the info.

mattisacomputer
Jul 13, 2007

Philadelphia Sports: Classy and Sophisticated.

FWIW, we use newer IBM 2498s and they've been solid. Still re-branded Brocade obviously but might be worth checking out if your org prefers IBM stuff for some unfortunate reason.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
I know I am the rear end in a top hat who has only worked in medium-sized environments, but why is FC still popular?

10GbE iSCSI block storage has never let me down or been a bottleneck. I understand I am not running an environment for super day traders, but am I missing something outside of the "legacy" factor?

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!
FC is an easy protocol to scale on the cheap and some people like the idea of keeping the storage network separate anyway.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

1000101 posted:

FC is an easy protocol to scale on the cheap and some people like the idea of keeping the storage network separate anyway.

How is iSCSI not just as easy? Converged networking isn't a new concept.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Guaranteed end-to-end ordered delivery, no spanning tree, no broadcast storms, quicker convergence during topology changes due to link failures, less protocol overhead... there are lots of technical reasons why you might prefer FC.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Moey posted:

How is iSCSI not just as easy? Converged networking isn't a new concept.

An oversubscribed Ethernet switch will drop a bunch of frames, forcing retransmits and absolutely tanking performance. An oversubscribed FC switch will rate-limit traffic to each client, so while traffic flows slower, it flows consistently. The lack of broadcasts for device discovery (every device logs in to the fabric when connected and registers with the name service so that other devices know where to find it) means that you don't have to worry about wide, flat topologies being overwhelmed with overhead from broadcasts and multicasts. When you add a new switch you simply plug it in and it learns all of the zone information, and you could move a device to it with basically no configuration and it would continue to work, because zoning is based on WWNs, which move with the device. No forgetting to create VLANs on the switch, no going through and setting up individual ports for tagging or trunking, no problems with VTP, no making sure you add a new VLAN to each ISL trunk.

It's not that iSCSI is bad, it's just that FC is meant to carry storage traffic specifically, so it avoids some issues that Ethernet/IP can suffer from when used as a storage medium.
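
To make the zoning point concrete, a bare-bones Brocade-style WWN zone looks something like this (sketch only - the aliases, zone/config names, and WWNs here are made up):

  alicreate "esx01_hba0", "21:00:00:24:ff:11:22:33"
  alicreate "array_0a", "50:0a:09:81:00:aa:bb:cc"
  zonecreate "z_esx01_array", "esx01_hba0; array_0a"
  cfgcreate "prod_cfg", "z_esx01_array"
  cfgsave
  cfgenable "prod_cfg"

Move that host to any port on any switch in the fabric and the zone still applies, because it's keyed to the WWN rather than to a physical port.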

YOLOsubmarine fucked around with this message at 07:11 on Feb 28, 2015

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
what nipplefloss is saying is really only going to matter in larger environments. Even in those larger environments, many of the weaknesses you would find in IP storage can be designed away; it just takes so much more thought and planning that you might as well buy some FC gear at that point.

Syano
Jul 13, 2005
I came from medium business and exclusively used iSCSI for my entire career until about a year ago, when I got a job on a storage team for an organization that uses enterprise stuff like VMAX, and exclusively FC. My perspective on the differences: if you don't factor in cost, FC is better at what it does in every conceivable way. Everything Nipplefloss said plus more. It's just so dang easy and inherently fast that, again, if you don't factor in cost, why would I ever go back to IP storage? Multipathing? Yep, it just works, no special configuration. Failed link convergence? Yeah, 99.999% of applications will never even notice, it's so seamless. Attaching new storage? Make sure your zones are built correctly and it just works. No CHAP or iSCSI initiator logins or other mess like that. And we could go on. The things that IP and FC can do are almost identical, but the way FC does those things is orders of magnitude better in every way. Again, all of this is not considering $$$.

Rhymenoserous
May 23, 2008
Meh, I prefer iSCSI at this point, with 10GbE being commodity priced. The real reason though is that a lot of the mid-sized storage arrays only support iSCSI. Unless you are doing HFT, the performance difference just doesn't matter all that much if you design things correctly.

Pile Of Garbage
May 28, 2007



NippleFloss posted:

An oversubscribed Ethernet switch will drop a bunch of frames, forcing retransmits and absolutely tanking performance. An oversubscribed FC switch will rate-limit traffic to each client, so while traffic flows slower, it flows consistently. The lack of broadcasts for device discovery (every device logs in to the fabric when connected and registers with the name service so that other devices know where to find it) means that you don't have to worry about wide, flat topologies being overwhelmed with overhead from broadcasts and multicasts. When you add a new switch you simply plug it in and it learns all of the zone information, and you could move a device to it with basically no configuration and it would continue to work, because zoning is based on WWNs, which move with the device. No forgetting to create VLANs on the switch, no going through and setting up individual ports for tagging or trunking, no problems with VTP, no making sure you add a new VLAN to each ISL trunk.

It's not that iSCSI is bad, it's just that FC is meant to carry storage traffic specifically, so it avoids some issues that Ethernet/IP can suffer from when used as a storage medium.

Just want to quote NippleFloss because holy poo poo, so many people gloss over FC based on throughput numbers yet they don't actually understand how resilient FC is as a protocol. You could have an FC fabric with >200 unique domain IDs and still have sub-millisecond RSCN propagation times.

FC as a protocol is really amazing and I wish more people had the chance to work with it (Disclaimer: I'm a massive IBM/Brocade shill, but lol if you have to do zoning with QLogic product or whatevs).

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

cheese-cube posted:

Just want to quote NippleFloss because holy poo poo, so many people gloss over FC based on throughput numbers yet they don't actually understand how resilient FC is as a protocol. You could have an FC fabric with >200 unique domain IDs and still have sub-millisecond RSCN propagation times.

FC as a protocol is really amazing and I wish more people had the chance to work with it (Disclaimer: I'm a massive IBM/Brocade shill, but lol if you have to do zoning with QLogic product or whatevs).

FC is great; if you've got the budget for diamonds why go with a polished turd.

Mr Shiny Pants
Nov 12, 2012

PCjr sidecar posted:

FC is great; if you've got the budget for diamonds why go with a polished turd.

If you have diamonds why not go whole hog and get Infiniband?

It is probably the best interconnect, too bad it gets glossed over.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

PCjr sidecar posted:

FC is great; if you've got the budget for diamonds why go with a polished turd.
sometimes children get their hands chopped off because of the diamond trade. :colbert:

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

Mr Shiny Pants posted:

If you have diamonds why not go whole hog and get Infiniband?

It is probably the best interconnect, too bad it gets glossed over.

Mostly because not a lot of storage manufacturers use it as a host interconnect.

in a well actually
Jan 26, 2011

dude, you gotta end it on the rhyme

Mr Shiny Pants posted:

If you have diamonds why not go whole hog and get Infiniband?

It is probably the best interconnect, too bad it gets glossed over.

I like InfiniBand; it's so weird - the per-port cost is lower than 40GbE, but the enterprise fabric management is nowhere near what you get with FC.

adorai posted:

sometimes children get their hands chopped off because of the diamond trade. :colbert:

A small price to pay to escape Bob Metcalfe's abomination.

TeMpLaR
Jan 13, 2001

"Not A Crook"
Anyone try out OnTap 8.3 yet? Thoughts?

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Speaking of OnTap, we just finished our NetApp 2554 install today: 20x 4TB SATA, 4x 400GB SSD, 48x 900GB SAS. There's definitely going to be a bit of a learning curve to this, as it's not quite as point-and-shoot as the EqualLogic or Pure I've used so far.

We haven't configured the Flash Pool yet on the recommendation of the tech we were working with from our vendor. OnTap 8.3 allows for partitioning of the flash pool, so we would rather wait until we upgrade to it and let both the SAS and SATA aggregates use the flash pool than choose one now.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

TeMpLaR posted:

Anyone try out OnTap 8.3 yet? Thoughts?

I've done a couple of upgrades and a few fresh installs of 8.3 by now and I think it's mostly good and stable, but the network changes make things needlessly complex for small environments and make upgrading to 8.3 a pain in the rear end. Before you can upgrade to 8.3 you have to satisfy a bunch of requirements around LIF home ports and auto-revert settings, every SVM MUST have a management LIF that can connect out to services like DNS, AD, etc., even if it doesn't need one, failover groups may need to be modified to meet port requirements... the first time I attempted to upgrade to 8.3 it took an hour to make all of the changes required to satisfy the upgrade check and allow the upgrade to proceed.
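
For anyone staring down the same upgrade check, the cleanup mostly boils down to a handful of cluster shell commands, something like this (just a sketch - the SVM, node, port names, and addresses are made up):

  ::> network interface modify -vserver svm1 -lif * -auto-revert true
  ::> network interface revert -vserver svm1 -lif *
  ::> network interface create -vserver svm1 -lif svm1_mgmt -role data -data-protocol none -home-node cluster1-01 -home-port e0c -address 10.0.0.50 -netmask 255.255.255.0

The first two make sure LIFs are sitting on their home ports and will revert on their own, and the last one is the sort of do-nothing management LIF you end up creating for SVMs that otherwise wouldn't need one.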

Once you're on 8.3 you'll find that everything in System Manager expects you to create subnets with associated IP pools for allocating IPs to LIFs; things like creating an SVM can't actually be completed through the wizard unless you've got a subnet that is associated with a broadcast domain that is in the ipspace that you want to run that SVM in. Even if you just want one LIF! And you can't actually create ipspaces within System Manager at all, nor can you modify failover groups, though you can modify broadcast domains, which create failover groups by default. The management through System Manager is half-baked. Ditto for advanced disk partitioning, which works pretty well, but I still don't see any way to reassign partitioned drives in System Manager. ADP is also not configurable at all; it just decides if it will or won't do it based on the system type at node initialization, and gives no indication of what it's going to do.
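
For reference, the CLI side of the new networking model is where you end up for anything System Manager won't do (ipspaces, failover group tweaks). Roughly, and with made-up names:

  ::> network ipspace create -ipspace tenant1
  ::> network port broadcast-domain create -ipspace tenant1 -broadcast-domain bd-data -mtu 9000 -ports cluster1-01:e0d,cluster1-02:e0d
  ::> network subnet create -ipspace tenant1 -subnet-name sn-data -broadcast-domain bd-data -subnet 192.168.50.0/24 -ip-ranges 192.168.50.10-192.168.50.40
  ::> network interface create -vserver svm1 -lif svm1_nfs1 -role data -data-protocol nfs -home-node cluster1-01 -home-port e0d -subnet-name sn-data

Once the subnet exists, LIF creation (and the SVM wizard) can pull addresses from it instead of you typing IPs by hand, which is really the whole point of the model.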

For service providers there are some really nice features but man, the networking stuff has been such a pain in the rear end to explain to customers and they really need to improve the tools around managing it to make things simpler for customers who aren't doing multi-tenancy.

On the other hand, the ability to create storage pools out of flash and divide the capacity up between aggregates/nodes is really great, and despite being sort of unintuitive in execution, disk partitioning works well to get you some spindles (and a modest amount of capacity) back from the root aggregate. Data Motion for LUNs (instant LUN move) is neat and the same engine will be leveraged for VVOLs to allow instant VM moves across nodes in the cluster. There are also some performance improvements in there that I haven't had a chance to test yet, but I had an all-flash 8060 running 8.3 in the lab that kept pace with a Pure box we've got, so it seems like the read path optimizations for flash helped some.

Let me know if you've got any specific questions.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Any tips or tricks on Volume/LUN provisioning with NetApp when the primary consumer of storage is going to be VMWare via iSCSI?

With the previous SANs that we've had (EqualLogic and Pure), volume-to-LUN mapping was 1:1. Obviously this is different on NetApp, where you can have multiple LUNs per volume and multiple volumes per SVM. I understand that things like dedup are siloed within a particular volume, and if the LUNs in the volume grow beyond the max volume size, the volume will go offline.

At the same time, if you don't overprovision somewhere you really aren't taking advantage of the extra space that dedup gives you.

It seems like there are two ways to go about doing this. Thick provision the volume and overprovision LUNs on that volume based on the dedup ratio you are usually seeing. So, if you thick provision a 10TB volume and are getting a >50% dedup rate, you are pretty much safe to provision 20TB worth of LUNs on that volume (minus any room you would need for snapshots). If the dedup rate suddenly gets worse and you fill up your LUNs, you run the risk of the volume going offline.

Alternatively, you could thin provision both the volume and the LUNs and size the volumes for the total size of all the LUNs in a volume (plus overhead for snapshots). That way, even if you maxed out all the LUNs, the volume wouldn't go offline. You would then size the volume based on available disk space in the aggregate with your average dedup ratio taken into account. So, if you wanted 20TB worth of LUNs, you would make the volume ~20TB thin provisioned, and assuming a 50% dedup rate you just need to make sure you had 10TB of space in the aggregate to be safe. That seems like the more dangerous route, though, since if the LUNs + snapshots managed to grow beyond what you assumed for dedup, you could fill the whole aggregate, which could cause multiple volumes to go offline.

Kaddish
Feb 7, 2002
So, I've had about a week to play with a Pure FA-420. I'm using it for VMware. It took me a bit to wrap my head around the relationship between thin provisioning to the hosts and the de-dupe/compression on the backend. Let me just say this thing is pure magic awesome. VMware thinks it has 24TB in 6 datastores. Of that 24TB, it says it's actively using about 10TB. On the backend it's actually using 3TB on disk. Two thumbs up from me!

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Yeah, right now I have 2.9tb of SQL databases consuming 649gb on disk on an FA420.

Vanilla
Feb 24, 2002

Hay guys what's going on in th
That reminds me - for those of you who were following my job race between Nimble and Pure I did eventually join Pure and am loving the product and the company so far.

Assisted on my first install a few weeks ago. Took longer to rack than set up.....I was stood there saying 'is that it?'

I'm surprised at how many Pure employees are actually ex-customers.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Yeah, it seems like Pure is expanding their team pretty rapidly. Just months after we got our array, they added personnel local to our area. It is some nice peace of mind knowing they have a tech who lives within spitting distance of our datacenter.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

bull3964 posted:

Netapp Volumes/LUNs

If you're planning on leveraging snapshots then you'll want to group things with similar snapshot/retention schedules into the same volumes. For space management, what works best really depends on how you provision space, how you experience growth, and how proactive you can be about capacity planning/purchasing. I generally like to thin provision everything and just manage the space I'm using. You can use volume auto-grow to make sure that volumes don't fill up overnight and take LUNs offline before you can grow the volume manually. Likewise, snapshot auto-delete can be used to manage free space within volumes. Aggregates filling up is more of a concern, but OnCommand Unified Manager can be configured to send alerts when you breach certain thresholds for capacity or growth rate so you can get a new purchase in the pipeline in time. If you've got things nailed down enough you can basically do a JIT purchasing model.

The safest choice is to simply thick provision everything, but even then snapshot growth can cause a volume to fill up and cause problems for LUNs inside the volume. If you set the space-allocation option on the LUNs (not set by default) then VMware will handle a volume full condition more gracefully and the LUNs will be left online, which makes it safer to thin provision than it was in the past.
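
If it helps, the specific knobs I'm talking about are roughly these (sketch with made-up names; check the defaults on your version before trusting any of it):

  ::> volume autosize -vserver svm1 -volume vmvol1 -mode grow -maximum-size 15TB
  ::> volume snapshot autodelete modify -vserver svm1 -volume vmvol1 -enabled true -trigger volume -delete-order oldest_first
  ::> lun modify -vserver svm1 -path /vol/vmvol1/lun1 -space-allocation enabled

I believe you have to offline the LUN briefly to flip space-allocation, and the host side needs to support the SCSI thin-provisioning bits (UNMAP and the space threshold warnings) for it to buy you anything.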

Have you considered doing NFS instead of iSCSI for your VMware storage?

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


NippleFloss posted:



Have you considered doing NFS instead of iSCSI for your VMware storage?

It's certainly a consideration and something I was actually researching today. This is the first NFS capable storage device we've had so far, so it hasn't been an option before now. I understand that NFS handles free space reclamation a bit more gracefully and we would have some more granular options for snapshot recovery if we went the NFS route.

We will also have a few select very high-IOPS VMs on Pure, which is iSCSI only, but that shouldn't be much of an issue.


While we're on the subject of NFS (and CIFS for that matter), what is generally the best way to handle offline backup of native CIFS and NFS (non-VMware) on something like NetApp? Snapshots are great for recovery, assuming the filer isn't compromised in some way. However, we only have one NetApp install so far and aren't likely to get another for some time. Not only that, many of the most recent hacks have been about data destruction as much as they've been about theft for profit. So, backup of the data to wholly different storage, and potentially to tape, still seems like the only option to completely cover your rear end from an absolute-loss perspective.

I've been thinking of a few different solutions such as copying data to a VM and then backing up the VM itself or copying the data to other storage and shoving it off to tape from there.

I'm planning on using our soon-to-be-vacant 80TB EqualLogic SATA array as a local data store for the backups while I come up with a new strategy for offsite. We're currently using Backup Exec, which I don't particularly like and am not really married to. So far, my options range from going with a purely VM-based backup solution like Veeam and doing a few tricks to get it to back up some of the physical machine data we have, to going with another solution that covers both physical and virtual. Sadly, nothing cloud is an option as our client contracts forbid it.


This has all been a massive reorg and expansion. We started with a Dell 3-2-1 solution with 3 R620s, 2 1Gb PowerConnect switches, and 1 EqualLogic array (which we later added a 2nd unit onto).

Since then, we have upgraded our production switching to Nexus 5672UP, added a Pure FA-420 for very high-IOPS processing, and are now in the process of replacing our EqualLogic storage with NetApp for our main VMware workhorse, due to better availability and needing things like dedup. On top of all that, we're expanding from 3 to 6 VMware hosts, redoing the NICs on all the old hosts with 10GbE, and will be updating VMware from 5.1 to 6 across our whole environment (so I'm also trying to figure out how VVols figure into all of this).

When all is said and done, we'll have a much faster, less complex, and easier-to-scale infrastructure. But we're also at a crossroads with a lot of decision points for setup and backup/recovery that I'm researching.

bull3964 fucked around with this message at 20:19 on Mar 6, 2015

Kaddish
Feb 7, 2002
I'm not a big NetApp guy, but isn't Backup Exec NDMP-capable?
