three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

szlevi posted:

I never did it but IIRC my values are ~30 secs and my hosts all tolerate failovers just fine...

Do other arrays require this, and specifically state this requirement?

madsushi
Apr 19, 2009

Baller.
#essereFerrari

three posted:

Do other arrays require this, and specifically state this requirement?

NetApp's various host utilities (SnapDrive, VSC) will set these timeout values for you automatically.
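
If you're curious what those utilities actually touch on a Windows host, the usual knob is the disk-class TimeOutValue in the registry. Quick read-only sketch in Python (the exact value each vendor kit writes differs, so don't take any particular number from me as gospel):

code:
import winreg  # Windows only

# Disk-class timeout, in seconds, that host-utility kits typically raise
KEY_PATH = r"SYSTEM\CurrentControlSet\Services\Disk"

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
    value, _ = winreg.QueryValueEx(key, "TimeOutValue")
    print(f"Disk TimeOutValue = {value} seconds")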

Pile Of Garbage
May 28, 2007



This talk of configuring iSCSI hosts to accommodate node failover reminds me of one of the reasons why I really prefer FC over iSCSI. Assuming the fabric is configured correctly, hosts will fail over between nodes as soon as they receive an RSCN. Propagation of RSCNs is pretty much instantaneous in a well-configured fabric, which makes everything extremely tolerant to failures.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

cheese-cube posted:

This talk of configuring iSCSI hosts to accommodate node failover reminds me of one of the reasons why I really prefer FC over iSCSI. Assuming the fabric is configured correctly, hosts will fail over between nodes as soon as they receive an RSCN. Propagation of RSCNs is pretty much instantaneous in a well-configured fabric, which makes everything extremely tolerant to failures.
http://en.wikipedia.org/wiki/Internet_Storage_Name_Service#State_Change_Notification

Of course, very few people out there actually use iSNS, but the functionality is there for iSCSI initiators.

evil_bunnY
Apr 2, 2003

szlevi posted:

I cannot fathom what they can teach you that you cannot learn yourself in a few days, for free...
I had a box for testing and it literally is that simple.

Pile Of Garbage
May 28, 2007



Misogynist posted:

http://en.wikipedia.org/wiki/Internet_Storage_Name_Service#State_Change_Notification

Of course, very few people out there actually use iSNS, but the functionality is there for iSCSI initiators.

What is the main reason that people choose iSCSI over FC? The company that I previously worked for always deployed FC SANs, so the bulk of my experience is with FC, which I came to prefer over iSCSI. The majority of the talk in this thread seems to be around iSCSI devices, so I'm just wondering what the deciding factor is to deploy iSCSI over FC.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

cheese-cube posted:

What is the main reason that people choose iSCSI over FC? The company that I previously worked for always deployed FC SANs, so the bulk of my experience is with FC, which I came to prefer over iSCSI. The majority of the talk in this thread seems to be around iSCSI devices, so I'm just wondering what the deciding factor is to deploy iSCSI over FC.
cost and simplicity. Why spend the extra for an FC switch and HBA when iSCSI works just fine?

Pile Of Garbage
May 28, 2007



adorai posted:

cost and simplicity. Why spend the extra for an FC switch and HBA when iSCSI works just fine?

See that's what I thought the main reason would be: the ability to leverage existing switching equipment. However, what about environments which require storage bandwidth greater than 1Gb but do not already have 10Gb switching equipment? Is that the point where deploying FC becomes cost-effective?

On that subject, have 16Gb FC HBAs and switches hit the market yet or are vendors still finalising their designs?

Pile Of Garbage fucked around with this message at 12:55 on Aug 17, 2012

evil_bunnY
Apr 2, 2003

When your IT crew's never touched FC it makes a lot of sense to not get into it.

Rhymenoserous
May 23, 2008

cheese-cube posted:

See that's what I thought the main reason would be: the ability to leverage existing switching equipment. However, what about environments which require storage bandwidth greater than 1Gb but do not already have 10Gb switching equipment? Is that the point where deploying FC becomes cost-effective?

On that subject, have 16Gb FC HBAs and switches hit the market yet or are vendors still finalising their designs?

Not even then sometimes, because of port aggregation. I got lucky (or unlucky depending on your ~views) in that my office uses nothing but Dell PowerConnect switches, which all have the option to buy a fairly inexpensive module that you can plug 10G HBAs into.

So I slapped together a nice 10G iSCSI backbone fairly quickly. Works like a champ too.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

cheese-cube posted:

See that's what I thought the main reason would be: the ability to leverage existing switching equipment. However, what about environments which require storage bandwidth greater than 1Gb but do not already have 10Gb switching equipment? Is that the point where deploying FC becomes cost-effective?

On that subject, have 16Gb FC HBAs and switches hit the market yet or are vendors still finalising their designs?
round robin lets you have more than one link actively transferring.
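
toy sketch of the idea, nothing vendor specific: each outstanding I/O just gets handed to the next path in the list, so both 1Gb links carry traffic at once.

code:
from itertools import cycle

# made-up path names; the real policy lives in the MPIO driver, not here
paths = cycle(["eth2 -> controller A", "eth3 -> controller B"])

for io in range(4):  # pretend these are four outstanding I/Os
    print(f"I/O {io} goes out via {next(paths)}")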

evil_bunnY
Apr 2, 2003

Rhymenoserous posted:

fairly inexpensive module that you can plug 10G HBAs into.
You mean a 10GbE port?

bort
Mar 13, 2003

Dell posted:

Finally, please be advised that the best practices for the use of RAID 5 and RAID 50 on Dell EqualLogic arrays have changed. The changes to the RAID policy best practice recommendations are being made to offer enhanced protection for your data.
  • RAID 5 is no longer recommended for any business critical information on any drive type
  • RAID 50 is no longer recommended for business critical information on Class 2 7200 RPM drives of 1TB and higher capacity.
We don't have any RAID 5, but we're having to plan some conversion from RAID 50 to RAID 6. Luckily, that's online and we have the headroom.
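
Back-of-the-envelope on why the big 7200 RPM drives are the ones getting singled out: a RAID 5 rebuild has to read every surviving member end to end, and at spec-sheet error rates that gets uncomfortably close to a coin flip. Drive count and URE rate below are generic assumptions, not Dell's figures.

code:
import math

URE_PER_BIT = 1e-14   # typical nearline/SATA spec-sheet rate (assumption)
DRIVE_TB = 1.0        # 1TB members, matching the advisory's cutoff
SURVIVORS = 7         # members that must be read in full to rebuild (assumption)

bits_read = SURVIVORS * DRIVE_TB * 1e12 * 8
p_hit = 1 - math.exp(-URE_PER_BIT * bits_read)  # Poisson approximation
print(f"~{p_hit:.0%} chance of an unrecoverable read error during the rebuild")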

BnT
Mar 10, 2006

What are considered "class 2" drives?

Pile Of Garbage
May 28, 2007



evil_bunnY posted:

When your IT crew's never touched FC it makes a lot of sense to not get into it.

Sorry I'm really not sure what your point is here as the same can be said for iSCSI.

From a configuration perspective I've found FC much easier to configure. I've mainly worked with IBM SAN24B-4 FC switches and SAN06B-R MPRs which are basically re-branded Brocade 300 and 7800 series devices respectively and they are extremely easy to use (Great GUI, very logical CLI and Brocade provides great documentation). Once you understand the basic concepts of configuring a stable fabric you can easily scale that knowledge out. It only starts to get complicated when you start utilising more advanced features like FC-FC routing, fabric merging or FCIP.

From my experience with iSCSI there are way more things that need to be considered in even simple deployments (i.e. VLAN tagging for iSCSI traffic segregation, link aggregation, MPIO drivers, jumbo frame support, etc.).
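
Of that list, jumbo frames are the one I've most often seen half-configured, i.e. enabled on the array and the hosts but not end-to-end on the switches. Quick Linux-side sanity check (interface name is made up):

code:
from pathlib import Path

iface = "eth1"  # whichever NIC faces the iSCSI VLAN on this host
mtu = int(Path(f"/sys/class/net/{iface}/mtu").read_text())
print(f"{iface} MTU = {mtu} ({'jumbo frames' if mtu >= 9000 else 'standard frames'})")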

Of course as I said a few posts ago my experience with iSCSI is tiny when compared to my FC experience so feel free to shoot me down.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Even if you have no experience with iSCSI, you probably have experience with IP, so already you know something about iSCSI and nothing about FC.

Unless you just came out of a pod from another planet or something. There's also the fact that IP speeds are growing faster than FC is.

And FC is really an all or nothing proposition. You get an FC infrastructure or you don't. iSCSI you can connect to your existing network. Those "way more things" you mention about iSCSI are things people running IP networks already understand.

I'm really not seeing why this is so difficult to grasp. It sounds like you're viewing it from the point of view of some enormous enterprise that can easily afford to build out an entire new infrastructure, whereas most of the reasons for iSCSI come from the other end of the spectrum: small units dipping their feet in the waters of IP storage.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

cheese-cube posted:

Sorry I'm really not sure what your point is here as the same can be said for iSCSI.

From a configuration perspective I've found FC much easier to configure. I've mainly worked with IBM SAN24B-4 FC switches and SAN06B-R MPRs which are basically re-branded Brocade 300 and 7800 series devices respectively and they are extremely easy to use (Great GUI, very logical CLI and Brocade provides great documentation). Once you understand the basic concepts of configuring a stable fabric you can easily scale that knowledge out. It only starts to get complicated when you start utilising more advanced features like FC-FC routing, fabric merging or FCIP.

From my experience with iSCSI there are way more things that need to be considered in even simple deployments (i.e. VLAN tagging for iSCSI traffic segregation, link aggregation, MPIO drivers, jumbo frame support, etc.).

Of course as I said a few posts ago my experience with iSCSI is tiny when compared to my FC experience so feel free to shoot me down.

iSCSI is much easier for most general purpose IT people to grasp. Fabrics and zoning aren't too tricky but there is a learning curve there. Additionally when you get into FC you're also getting into the business of ensuring that you've got solid HBA firmware, that you understand the vendor specific MPIO suite you're using, that you understand the OS specific tools provided to manage those HBAs. And it's still much less likely that you have anyone on staff who knows enough about FC at the protocol layer to troubleshoot difficult issues, while it's quite easy to find IP expertise.

FC simply doesn't make sense for most IT shops from a management or performance perspective. When properly configured it's great because it just works seamlessly, but getting it to the "properly configured" point is a non-trivial task for most shops, on top of the added hardware cost.

Nebulis01
Dec 30, 2003
Technical Support Ninny

BnT posted:

What are considered "class 2" drives?

Nearline SAS I believe, stuff that's supposed to be used for bulk storage of infrequently accessed data.

KS
Jun 10, 2003
Outrageous Lumpwad
We moved from FC to 10Gb iSCSI to support a converged network/storage fabric. When we bought UCS, support for FCoE in the Nexus 5k series was basically nonexistent -- you could present storage to ports locally on the 5k, but you could not trunk into a 6140. Updates have made it better now, but that ship has sailed.

It is also considerably cheaper. Switchport cost is relatively equal, but on the HBA side it's not really close. For my DC that isn't UCS, I have 19 ESX hosts. The first 7 we bought with dual 8Gb FC HBAs for $1700 each including cables. The next 12 used 10Gb CNAs for $900, which eliminated four 1Gb network ports per host as well. The HBA cost savings paid for one of the 10Gb switches.
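
Rough math on that, using the numbers above and ignoring the freed-up 1Gb ports:

code:
cna_hosts = 12
fc_hba_cost, cna_cost = 1700, 900  # dual 8Gb FC HBAs vs 10Gb CNAs, per above

savings = (fc_hba_cost - cna_cost) * cna_hosts
print(f"${savings:,} saved on adapters across the {cna_hosts} CNA hosts")  # $9,600
# ...and that's before counting the four 1Gb ports per host the CNAs replaced.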

Last, you can present storage direct to VMs, which lets you do all sorts of tricks with snapshotting. VMware's NPIV support sucks. The MS iSCSI initiator does not.

KS fucked around with this message at 20:21 on Aug 17, 2012

Ratzap
Jun 9, 2012

Let no pie go wasted
Soiled Meat

bort posted:

We don't have any 5, but we're having to plan some conversion from RAID 50 to RAID 6. Luckily, that's online and we have the headroom.

My current clients just finished installing a new Compellent and the first thing they did was force it to create RAID 5 volumes :( And these were for the Oracle database servers...

I'm trying to persuade them to choke down the extra cost and use RAID 10, but I dunno, they seem pretty entrenched.

KS
Jun 10, 2003
Outrageous Lumpwad
Compellent arrays do all writes to RAID 10 and rewrite to RAID 5 in the background. They should be using RAID 10/RAID 5 for 15k disks and RAID 10-DM/RAID 6 for the bigger 7.2k disks per Compellent's best practices. You can't even specify just RAID 10 without turning on advanced mode, I believe.

PM me if you'd like the doc, but they're right.

KS fucked around with this message at 20:51 on Aug 17, 2012

bort
Mar 13, 2003

Nebulis01 posted:

Nearline SAS I believe, stuff that's supposed to be used for bulk storage of infrequently accessed data.
Or SATA if you have the 6500 platform.

And as for the RAID 5 on Compellent, it's very different since they assign RAID levels to blocks within a LUN, and those blocks may migrate to RAID 10 or various combinations of RAID 5/6 depending on usage. RAID 5 is less attractive when you're configuring an actual disk/volume.
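
Toy sketch of that behaviour (page table and thresholds entirely made up, nothing like the real Data Progression code): fresh writes land on RAID 10 pages, and the background job restripes pages down to parity RAID once they go cold.

code:
# page id -> (current RAID level, days since last write)
pages = {1: ("RAID 10", 0), 2: ("RAID 10", 14), 3: ("RAID 5", 45)}

def progress(page_table, cold_after=7):
    """Restripe pages that haven't been written to recently down to parity RAID."""
    return {pid: ("RAID 5" if age >= cold_after else level, age)
            for pid, (level, age) in page_table.items()}

print(progress(pages))  # page 2 gets demoted to RAID 5; page 1 stays on RAID 10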

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

NippleFloss posted:

iSCSI is much easier for most general purpose IT people to grasp. Fabrics and zoning aren't too tricky but there is a learning curve there. Additionally when you get into FC you're also getting into the business of ensuring that you've got solid HBA firmware, that you understand the vendor specific MPIO suite you're using, that you understand the OS specific tools provided to manage those HBAs. And it's still much less likely that you have anyone on staff who knows enough about FC at the protocol layer to troubleshoot difficult issues, while it's quite easy to find IP expertise.

FC simply doesn't make sense for most IT shops from a management or performance perspective. When properly configured it's great because it just works seamlessly, but getting it to the "properly configured" point is a non-trivial task for most shops, on top of the added hardware cost.

Yes, plus iSCSI is a lot cheaper, even in 10GbE flavor: show me a 24-port line-rate 16Gb FC switch for $5-6k...

...did I mention that for IB you can get a 36-port FDR switch for ~$8k?

Ratzap
Jun 9, 2012

Let no pie go wasted
Soiled Meat

KS posted:

Compellent arrays do all writes to RAID 10 and rewrite to RAID 5 in the background. They should be using RAID 10/RAID 5 for 15k disks and RAID 10-DM/RAID 6 for the bigger 7.2k disks per Compellent's best practices. You can't even specify just RAID 10 without turning on advanced mode, I believe.

PM me if you'd like the doc, but they're right.

This is odd then, as they told me last week that they had to go in and specifically force it to use RAID 5. I'll have a longer talk with them next week as this has piqued my curiosity.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

madsushi posted:

NetApp's various host utilities (SnapDrive, VSC) will set these timeout values for you automatically.

Yeah, it's possible that EQL's HIT sets it at install...

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

KS posted:

Compellent arrays do all writes to RAID 10 and rewrite to RAID 5 in the background.

Last December I was told it's RAID10 and RAID6...

ghostinmyshell
Sep 17, 2004



I am very particular about biscuits, I'll have you know.
How is NFS with Windows these days? Going to be Server 2008 R2 writing to some random Unix-based NAS. I should mention about 20,000 directories with 30,000 files, with NTFS permissions.

My first thought was to put a gun in my mouth when asked to look into this, which means someone has convinced my CTO this is viable.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

ghostinmyshell posted:

How is NFS with Windows these days? Going to be Server 2008 R2 writing to some random Unix-based NAS. I should mention about 20,000 directories with 30,000 files, with NTFS permissions.

My first thought was to put a gun in my mouth when asked to look into this, which means someone has convinced my CTO this is viable.
Step 1) install VMware on server hardware
Step 2) configure NFS datastore with linux guest
Step 3) share NFS storage out as CIFS (or even iSCSI)
Step 4) create Windows 2008 R2 guest and map storage

This will be better than using NFS on Windows.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

adorai posted:

Step 1) install VMware on server hardware
Step 2) configure NFS datastore with linux guest
Step 3) share NFS storage out as CIFS (or even iSCSI)
Step 4) create Windows 2008 R2 guest and map storage

This will be better than using NFS on Windows.
And that's the NFS support in Windows if you already have SFU configured in Active Directory and all your SFU schema attributes populated (UID, GID, etc.).

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Misogynist posted:

And that's the NFS support in Windows if you already have SFU configured in Active Directory and all your SFU schema attributes populated (UID, GID, etc.).
If you're using 2008 R2 domain controllers then you have the extended schema by default, so it's not quite as bad. But yeah, no one ever populates those, so it hardly matters.
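
If you ever need to check whether anyone actually populated them, they're just the uidNumber/gidNumber attributes on the user object. Rough ldap3 sketch (server, credentials and account name are all placeholders):

code:
from ldap3 import ALL, Connection, Server

server = Server("dc01.example.com", get_info=ALL)
conn = Connection(server, user="EXAMPLE\\svc_ldap", password="...", auto_bind=True)

conn.search("dc=example,dc=com",
            "(&(objectClass=user)(sAMAccountName=jdoe))",
            attributes=["uidNumber", "gidNumber"])

for entry in conn.entries:
    print(entry.uidNumber, entry.gidNumber)  # blank if nobody ever filled them in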

Syano
Jul 13, 2005
I've got a question for those of you who have messed around with HP Lefthand gear. I have to upgrade the switches I have this kit plugged in to. What will happen if I just start unplugging from the old switches and then replug into the new? Am I going to end up with some sort of split brain cluster? Or will it reconverge on its own?

Nomex
Jul 17, 2002

Flame retarded.

Syano posted:

I've got a question for those of you who have messed around with HP Lefthand gear. I have to upgrade the switches I have this kit plugged in to. What will happen if I just start unplugging from the old switches and then replug into the new? Am I going to end up with some sort of split brain cluster? Or will it reconverge on its own?

Why even risk it? Just take 20 minutes and label all the cables before you start ripping. Check the switch configuration too. It doesn't take very long.

Syano
Jul 13, 2005

Nomex posted:

Why even risk it? Just take 20 minutes and label all the cables before you start ripping. Check the switch configuration too. It doesn't take very long.

That's the easy part. What I am wondering is if I am going to need to shut down the array or if it can survive the cluster being down for about 5 minutes.

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


Are the old and new switches connected either directly or through another switch? If so, just move one over completely, wait until the warnings clear and then move the other one.

If you're switching subnets then you'll have a bit more work but it's still doable so long as you have routing enabled between those subnets.

Syano
Jul 13, 2005
This will be the same subnet, but what we are doing is upgrading to 10Gb. These are top-of-rack switches, so I am actually doing a one-to-one swap. Just about everything in these racks has storage on this array, so I know I am going to need to shut down the servers. The more I think about it, I guess I am going to have to power down the array as well to avoid any potential problems. I was just trying to avoid that. I am not sure why it just struck me as scary.

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


You don't need to power it down. Just move the network interfaces over one node at a time. So long as all your volumes have Network RAID 1 or better enabled, you won't even notice a thing. Just make sure that your cluster has quorum before you move anything.

Also, make sure your failover manager isn't stored on the LeftHand units.

Syano
Jul 13, 2005

Number19 posted:


Also, make sure your failover manager isn't stored on the LeftHand units.

Zinger! FOM has been running on the array itself since we put it in. Now's as good a time as any to switch her up!

Number19
May 14, 2003

HOCKEY OWNS
FUCK YEAH


Syano posted:

Zinger! FOM has been running on the array itself since we put it in. Now's as good a time as any to switch her up!

Yeah, don't do this. The FOM needs to do some quick disk access to handle failover, and since losing a node briefly halts all disk access until the failover occurs, you end up taking the whole array offline.

Just put it on an OpenFiler or Nexenta box or whatever you can make with cheap old parts, so it can be on shared storage and vMotion around easily. It doesn't need to be backed up or anything. If you lose it you can just make another one, usually quicker than restoring it.

Pvt. Public
Sep 9, 2004

I am become Death, the Destroyer of Worlds.
I was recently told to build a wish list of changes for my office and one of them (in a long list) is to upgrade the vCenter cluster we run. Right now it runs two nodes (both active) and stores VMs on iSCSI LUNs held on a QNAP 459U. The first thing I am going to do is get rid of the lovely Trendnet 8-port GigE switch the cluster is using in favor of something else (potentially two switches in HA).

The other thing I want to do is HA the storage. Problem being, QNAP doesn't support iSCSI replication to another QNAP unit without taking the LUN offline. DRBD isn't supported on the units yet, nor do there appear to be plans to put in any kind of HA features in the immediate future.

Is there a cheap(ish) rackmount NAS device that can replicate iSCSI LUNs (in real time as changes are made, or daily at a scheduled time) between two identical units? Ideally, they'd have two (or more) GigE ports. The QNAP unit cost me ~$2000 for 4TB in RAID 5 with a hot spare, for an example of the price point I'm looking at. I'm going to post this in the VM thread as well. Thanks.

KS
Jun 10, 2003
Outrageous Lumpwad
Local replication is not often used for this kind of scenario. The common method used to solve the problem is to buy enterprise-grade storage that is highly available internally -- SAS drives with two paths, dual controllers with redundant power supplies, dual switches, etc. You will not get that in your budget, but that is what you should probably aim for if you want to improve reliability.

Replication involves a manual failover process and generally some data loss until you get up into the arrays that can do synchronous replication, which is what you'd want for the local replication situation you're proposing. It is not really the solution I'd recommend for a single cluster.
