YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

evil_bunnY posted:

So how do people go about fixing that on 1GBE storage networks?

It's only a concern if you've got a single ESX server that needs to communicate with a single datastore at a transfer rate above 1Gb/s, which is pretty rare. You can split VMDKs for a single VM between multiple datastores, each mounted from a different IP, and then use a volume manager to create a single logical device from all of those disks. That will spread the load evenly for that logical device over the available 1Gb links.

But really, if you have performance requirements that demand throughput from a single ESX host to a single datastore beyond the max throughput of a 1Gb link, then you should probably be investing in a higher performance storage infrastructure like 10GbE or FC.
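
If it helps to picture the volume manager approach, here's roughly what it looks like inside a Linux guest, assuming four VMDKs that show up as /dev/sdb through /dev/sde, each sitting on a datastore mounted from a different IP (device names, stripe size, and the vg/lv names are just placeholders):

code:
import subprocess

# Hypothetical guest-side devices; each VMDK lives on a different
# datastore, so each one rides a different 1Gb link back to the array.
vmdk_devices = ["/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]

def run(cmd):
    # Print and run an LVM command; needs root and the lvm2 tools installed.
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Turn each virtual disk into an LVM physical volume.
for dev in vmdk_devices:
    run(["pvcreate", dev])

# Pool them into one volume group.
run(["vgcreate", "vg_data"] + vmdk_devices)

# Stripe a logical volume across all four PVs so I/O for this one
# device gets spread evenly over the underlying datastores.
run(["lvcreate", "--stripes", str(len(vmdk_devices)),
     "--stripesize", "64", "--extents", "100%FREE",
     "--name", "lv_striped", "vg_data"])

Any guest volume manager works the same way; the only thing that matters is that each stripe member sits on a datastore mounted from a different IP.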

Less Fat Luke
May 23, 2003

Exciting Lemon
So since we're talking about 10GE does anyone have switch suggestions? My current setup is using a Cisco 4900M. It's totally solid, 8+8+8 fiber ports (with X2 modules). However it's very expensive and I was wondering what other people have been using.

Intrepid00
Nov 10, 2003

I'm tired of the PM’s asking if I actually poisoned kittens, instead look at these boobies.

NippleFloss posted:

Correct. NFS has no protocol level load balancing. It can only piggyback on network layer load balancing, at least in current versions of the protocol.

It is more than that.

http://lass.cs.umass.edu/papers/pdf/FAST04.pdf

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Intrepid00 posted:

It is more than that.

http://lass.cs.umass.edu/papers/pdf/FAST04.pdf

I have no idea why you posted this and what it has to do with discussions of load balancing mechanisms.

It's utterly meaningless in this context since we aren't talking about a Linux server running NFS; we are talking about a dedicated NAS appliance that is tuned specifically for NFS responsiveness. I wouldn't recommend NFS over FC or iSCSI as a blanket statement, but on NetApp it makes perfect sense and there are plenty of benchmarks that place real-world performance in VMware within a few percentage points of one another on all 3 protocols.

Intrepid00
Nov 10, 2003

I'm tired of the PM’s asking if I actually poisoned kittens, instead look at these boobies.

NippleFloss posted:

I have no idea why you posted this and what it has to do with discussions of load balancing mechanisms.

It's utterly meaningless in this context since we aren't talking about a Linux server running NFS; we are talking about a dedicated NAS appliance that is tuned specifically for NFS responsiveness. I wouldn't recommend NFS over FC or iSCSI as a blanket statement, but on NetApp it makes perfect sense and there are plenty of benchmarks that place real-world performance in VMware within a few percentage points of one another on all 3 protocols.

Caching performance. Your load balance will benefit greatly from iSCSI's type of caching and update methods overall.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Less Fat Luke posted:

So since we're talking about 10GE does anyone have switch suggestions? My current setup is using a Cisco 4900M. It's totally solid, 8+8+8 fiber ports (with X2 modules). However it's very expensive and I was wondering what other people have been using.
we looked at a lot of options and are about to pull the trigger on a pair of Cisco 5500s.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Intrepid00 posted:

Caching performance. Your load balance will benefit greatly from iSCSI's type of caching and update methods overall.

This is true, but no one was disputing that iSCSI round robin would provide a more even balance than network level link aggregation. The question is whether a perfectly even balance is necessary. In some cases, where you're very bandwidth constrained, then it is. But on most properly designed IP storage networks that isn't the case.
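
If the distinction isn't clear, here's a toy model of the two behaviors (the hash below is made up for illustration, not the actual Cisco src-dst-ip algorithm): with IP-hash link aggregation a given host/datastore IP pair always lands on the same uplink, while iSCSI round robin rotates individual commands across every logged-in path.

code:
import ipaddress
from itertools import cycle

links = ["uplink0", "uplink1"]

def ip_hash_link(src, dst):
    # Toy stand-in for an IP-hash etherchannel policy: XOR the low
    # octets and mod by the link count. One src/dst pair -> one link.
    s = int(ipaddress.ip_address(src)) & 0xFF
    d = int(ipaddress.ip_address(dst)) & 0xFF
    return links[(s ^ d) % len(links)]

# One ESX host talking to one datastore IP: every frame hashes to the
# same uplink no matter how much traffic you push.
for _ in range(3):
    print("ip-hash:", ip_hash_link("10.0.0.11", "10.0.0.50"))

# iSCSI round robin: the initiator rotates commands across all paths,
# so a single host/LUN conversation touches every link.
paths = cycle(["path-A via uplink0", "path-B via uplink1"])
for i in range(4):
    print("round robin command", i, "->", next(paths))

So round robin gives you the prettier-looking balance, but unless that single conversation actually needs more than one link's worth of bandwidth it doesn't buy you much.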

Muslim Wookie
Jul 6, 2005
Goddamn Intrepid, I couldn't help but picture you as that internal IT person who smugly pulls out some datasheet or stats during a meeting as to why I *have* to be wrong and s/he's right.

They're never right.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

adorai posted:

so anyway, the point is that iSCSI can utilize MPIO round robin via multiple initiators and sessions, NFS can't, right?
VMware does not support NFS 4.1. However:

http://tools.ietf.org/html/rfc5661#section-12

Vulture Culture fucked around with this message at 04:49 on Mar 18, 2012

Maneki Neko
Oct 27, 2000

Less Fat Luke posted:

So since we're talking about 10GE does anyone have switch suggestions? My current setup is using a Cisco 4900M. It's totally solid, 8+8+8 fiber ports (with X2 modules). However it's very expensive and I was wondering what other people have been using.

We just went to Nexus 5000s when we took the 10GbE plunge, but I'm sure they're not terribly cheap either. We're also using Twinax for almost everything, as it doesn't have to go far.

namaste friends
Sep 18, 2004

by Smythe
Question for the folks who like using RDMs. What sort of performance problems have you experienced? Are you still using RDMs or not?

dietcokefiend
Apr 28, 2004
HEY ILL HAV 2 TXT U L8TR I JUST DROVE IN 2 A DAYCARE AND SCRATCHED MY RAZR
Working on setting up a new networking test bench around a Mellanox switch and would like to find a way to tie in some 1GE RJ45 devices. I have seen some of the adapters that break out QSFP to 4 SFP+... what would the best way be to bring in those types of devices without adding too much overhead/latency into the mix? I have seen SFP+ port to female 1G RJ45 adapters but not QSFP to RJ45 :v:

The reason behind this is putting all devices on the same switch in our office, set up around a Mellanox SX1036, and I want to tie in 10G, 40G, and even legacy 1G for testing on equal footing. I just don't want to add weird latency issues into the mix from active adapters.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Cultural Imperial posted:

Question for the folks who like using RDMs. What sort of performance problems have you experienced? Are you still using RDMs or not?
This question is making me incredibly confused. What are you even getting at?

namaste friends
Sep 18, 2004

by Smythe

Misogynist posted:

This question is making me incredibly confused. What are you even getting at?

A couple posts back someone mentioned seeing performance problems with RDMs. My question is pretty much the same as yours except I'm not trying to be a dick about it.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Cultural Imperial posted:

A couple posts back someone mentioned seeing performance problems with RDMs. My question is pretty much the same as yours except I'm not trying to be a dick about it.
Can you quote a post? I went back several pages and still don't have any idea what you're referencing, and without that context your question doesn't make any sense at all. There is no significant performance difference between RDMs and VMDKs, and what little performance difference there is leans in favor of RDM.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
The downside to using an rdm is no vmotion.

Otherwise performance should more or less be the same.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

adorai posted:

The downside to using an rdm is no vmotion.
Totally wrong. vMotion works fine with RDMs of both flavors. Storage vMotion doesn't work with physical mode RDMs. Obviously, VADP stuff used to perform VM backups doesn't work with physical RDMs either.

Vulture Culture fucked around with this message at 18:15 on Mar 19, 2012

namaste friends
Sep 18, 2004

by Smythe

madsushi posted:

I disagree with this.

Inflating your dedupe ratio by stacking only OS drives into one volume is bad for your overall dedupe amount. You get the BEST dedupe results (total number of GBs saved) by stacking as MUCH data into a single volume as possible. The ideal design would be a single, huge volume with all of your data in it with dedupe on.

Also, re: slowing down by slipping VMFS in the middle, this is wrong, because there is no VMFS on an NFS share. You're better off using iSCSI with SnapDrive to your NetApp LUNs, rather than doing RDM.

For misogynist, from page 67.

namaste friends fucked around with this message at 20:19 on Mar 19, 2012
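
To put some made-up numbers on madsushi's dedupe point: dedupe on a NetApp only collapses duplicate blocks that live in the same volume, so splitting data across volumes forfeits whatever duplication crosses the split. Block fingerprints and counts below are invented purely for illustration.

code:
# Toy example: dedupe only collapses duplicate blocks within one volume.
# "Fingerprints" are made up; counts are arbitrary.
shared   = ["shared-%d" % i for i in range(60)]   # blocks common to both datasets
os_only  = ["os-%d" % i for i in range(40)]
app_only = ["app-%d" % i for i in range(40)]

os_vol  = shared + os_only    # 100 blocks in an "OS drives" volume
app_vol = shared + app_only   # 100 blocks in an "app data" volume

def blocks_saved(volumes):
    total  = sum(len(v) for v in volumes)
    stored = sum(len(set(v)) for v in volumes)  # unique blocks kept per volume
    return total - stored

print("separate volumes:", blocks_saved([os_vol, app_vol]))   # 0 blocks saved
print("one big volume:  ", blocks_saved([os_vol + app_vol]))  # 60 blocks saved

Same data either way, but only the single big volume ever sees the cross-dataset duplicates.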

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Cultural Imperial posted:

For misogynist, from page 67.
I still don't see what you're talking about when you say that someone implied that RDMs cause performance problems.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
I need to attach a pile of disks (goal is around 20TB) to a Solaris server so ZFS can manage them. Where can I get either a server that will hold ~12 drives + a system drive with a dumb pass through controller, or a drive enclosure that I can pass through the disks into an 1068E controller or something equivalent?

Bluecobra
Sep 11, 2001

The Future's So Bright I Gotta Wear Shades

FISHMANPET posted:

I need to attach a pile of disks (goal is around 20TB) to a Solaris server so ZFS can manage them. Where can I get either a server that will hold ~12 drives + a system drive with a dumb pass through controller, or a drive enclosure that I can pass through the disks into an 1068E controller or something equivalent?
If money is no object, you can buy a Sun Fire X4270, which can use either 12 3.5" disks or 24 2.5" disks in 2U. There is an internal USB port that you can stick a flash drive in and boot Solaris off of. One thing you should take into account is licensing costs. You get the right to use Solaris with Sun hardware without having to buy software maintenance. If you plan to use Solaris 10/11 and go the non-Sun hardware route, you have to pay Uncle Larry $1K per year per CPU socket (unless you are using the server for development purposes).

Nomex
Jul 17, 2002

Flame retarded.

FISHMANPET posted:

I need to attach a pile of disks (goal is around 20TB) to a Solaris server so ZFS can manage them. Where can I get either a server that will hold ~12 drives + a system drive with a dumb pass through controller, or a drive enclosure that I can pass through the disks into an 1068E controller or something equivalent?

Why not just install ZFS on the storage server itself and pass the storage to your other Solaris server through iSCSI? Your options are pretty much limitless then.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Nomex posted:

Why not just install ZFS on the storage server itself and pass the storage to your other Solaris server through iSCSI? Your options are pretty much limitless then.

The Solaris iSCSI stack sucks and you lose a lot of the flexibility of ZFS if you have to present zvols to hosts.

Since ZFS is meant to run on commodity hardware you can make do with any decent jbod enclosure.
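
For the 20TB goal with a 12-bay jbod, the rough capacity math looks something like this (2TB disks assumed, layouts are just examples, and filesystem overhead is ignored):

code:
# Rough usable-capacity math for a 12-bay JBOD full of 2TB disks.
# Disk size and layouts are examples, not recommendations.
DISK_TB = 2

def usable_tb(vdevs):
    # Each vdev is (disks, redundancy_disks): raidz1=1, raidz2=2, mirror pair=1.
    return sum((disks - redundancy) * DISK_TB for disks, redundancy in vdevs)

layouts = {
    "1 x 12-disk raidz2": [(12, 2)],
    "2 x 6-disk raidz2":  [(6, 2), (6, 2)],
    "6 x 2-disk mirrors": [(2, 1)] * 6,
}

for name, vdevs in layouts.items():
    print(f"{name:20s} -> ~{usable_tb(vdevs)} TB usable")
# 1 x 12-disk raidz2  -> ~20 TB (hits the target, long resilvers)
# 2 x 6-disk raidz2   -> ~16 TB (more vdevs, better IOPS and resilver times)
# 6 x 2-disk mirrors  -> ~12 TB (best IOPS, least capacity)

So a single 12-disk raidz2 just barely reaches 20TB with 2TB drives; anything with more redundancy means bigger disks or more bays.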

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Nomex posted:

Why not just install ZFS on the storage server itself and pass the storage to your other Solaris server through iSCSI? Your options are pretty much limitless then.

There's only one server in this scenario. It's going to be running AMANDA, so we'll use it as a holding disk.

I guess Norco sells a 12 bay enclosure for $400 which I can stuff full of disks. Any reasons to be concerned with the reliability of such a product? Though for such a low price we could just get two and use one as a cold spare.

evil_bunnY
Apr 2, 2003

FISHMANPET posted:

I need to attach a pile of disks (goal is around 20TB) to a Solaris server so ZFS can manage them.
Dell MD1200s (MD1220 if you want 24x2.5") are SAS and pretty well built. I've used them before direct-attached to B2D servers and never had issues.

evil_bunnY fucked around with this message at 09:10 on Mar 20, 2012

luminalflux
May 27, 2005



All the fun new stuff in ZFS won't be available in Solaris ever; you'll have to use something based off Illumos, since that's where all the development is these days.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

evil_bunnY posted:

Dell MD1200s (MD1220 if you want 24x2.5") are SAS and pretty well built. I've used them before direct-attached to B2D servers and never had issues.

The problem with that is that it presents itself to the server as one disk, and I want to present each disk individually to the server and let ZFS manage it all.


luminalflux posted:

All the fun new stuff in ZFS won't be available in Solaris ever; you'll have to use something based off Illumos, since that's where all the development is these days.

Yeah, I think the odds of this being true are like zero, since Illumos hasn't done anything of note in nearly two years.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

NippleFloss posted:

The Solaris iSCSI stack sucks and you lose a lot of the flexibility of ZFS if you have to present zvols to hosts.

Since ZFS is meant to run on commodity hardware you can make do with any decent jbod enclosure.
Are you talking about the Solaris 10 iSCSI stack? COMSTAR is amazing.

The rest of your post holds true, of course.

Vulture Culture fucked around with this message at 14:25 on Mar 20, 2012

evil_bunnY
Apr 2, 2003

FISHMANPET posted:

The problem with that is that it presents itself to the server as one disk, and I want to present each disk individually to the server and let ZFS manage it all.
Huh? Pretty sure you can just configure it however the hell you want from the controller?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

evil_bunnY posted:

Huh? Pretty sure you can just configure it however the hell you want from the controller?

I'll have to ask our Dell guy about it tomorrow when he comes in to show off Dell's newest Intel servers. Though even if it's true, we may be priced out of it by default. Apparently the budget for this is $10k, and that array filled with 2 TB disks is already $10k, and the $10k budget is supposed to include the server.

luminalflux
May 27, 2005



FISHMANPET posted:

Yeah, I think the odds of this being true are like zero, since Illumos hasn't done anything of note in nearly two years.

OpenIndiana seems active, and Bryan Cantrill's talk at LISA showed signs of a lot of fun ZFS stuff that won't make it into Solaris.

Then again, I haven't done Solaris since the first releases of Sol10.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

luminalflux posted:

OpenIndiana seems active, and Bryan Cantrill's talk at LISA showed signs of a lot of fun ZFS stuff that won't make it into Solaris.

Then again, I haven't done Solaris since the first releases of Sol10.

I'm pretty pessimistic when it comes to the future of an open source Solaris. OpenSolaris always struck me as being primarily developed by Sun engineers, and when they stopped contributing it seemed to mostly die. I'd like to be proven wrong, but I don't think there's enough money or interest outside of Oracle to make it. On the other hand, Joyent is apparently pretty big and is throwing a lot of money at it, so who knows.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

FISHMANPET posted:

I'll have to ask our Dell guy about it tomorrow when he comes in to show off Dell's newest Intel servers. Though even if it's true, we may be priced out of it by default. Apparently the budget for this is $10k, and that array filled with 2 TB disks is already $10k, and the $10k budget is supposed to include the server.
At $10K, you're going to be priced out of the market if you're looking for a server with a separate disk enclosure -- most major vendors don't come cheap, and they build in things like array multipathing that don't fit your budget. Your best bet is to look for a server with plenty of disk drives built in. The SGI 1116 is probably the best-priced one from a major vendor, but there are lots of whitebox vendors that all sell similar things.

FISHMANPET posted:

I'm pretty pessimistic when it comes to the future of an open source Solaris. OpenSolaris always struck me as being primarily developed by Sun engineers, and when they stopped contributing it seemed to mostly die. I'd like to be proven wrong, but I don't think there's enough money or interest outside of Oracle to make it. On the other hand, Joyent is apparently pretty big and is throwing a lot of money at it, so who knows.
In the worst case, FreeBSD is still a torch bearer for open-source ZFS. Their implementation is fairly mature and just missing niceties like the Solaris kernel CIFS server.

Vulture Culture fucked around with this message at 15:45 on Mar 20, 2012

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Misogynist posted:

Are you talking about the Solaris 10 iSCSI stack? COMSTAR is amazing.

The rest of your post holds true, of course.

Yes, Solaris 10. OpenSolaris has had a much more mature storage feature set for a while, unfortunately my customer can't use it.

Notable features of Solaris 10 iSCSI include the server hanging indefinitely on boot if the discovery target is down or inaccessible, and a patch that changed the order in which services start, causing ZFS to attempt to bring pools online before the iSCSI initiator service has started and resulting in all pools coming up broken.

My experience with iSCSI on Solaris 10 has been pretty infuriating.

evil_bunnY
Apr 2, 2003

There's really no one competent left at Sun, is there?

Rhymenoserous
May 23, 2008
Yeah there is, but odds are he's either pushing retirement age or considering hanging himself.

Aniki
Mar 21, 2001

Wouldn't fit...

Misogynist posted:

At $10K, you're going to be priced out of the market if you're looking for a server with a separate disk enclosure -- most major vendors don't come cheap, and they build in things like array multipathing that don't fit your budget. Your best bet is to look for a server with plenty of disk drives built in. The SGI 1116 is probably the best-priced one from a major vendor, but there are lots of whitebox vendors that all sell similar things.

That's good to know, I have a meeting with NetApp tomorrow and while I like the sound of their equipment, I fear that even their lowest model, the 2040, may be too pricey for our project. I'll look into that SGI unit and see how it compares.

What do you guys do for archiving data that doesn't need to be high performance? Do you just add slower speed HDs as a second array inside your NAS, or do you pick up a cheap unit like a Buffalo to handle archival? In my case, we are required to store calls for 1-3 years; they are likely only going to be accessed sparingly after the first day or so from when they were logged, but we need them to settle disputes and as part of the agreement with our merchant accounts and payment processors.

evil_bunnY
Apr 2, 2003

Aniki posted:

What do you guys do for archiving data that doesn't need to be high performance? Do you just add slower speed HDs as a second array inside your NAS, or do you pick up a cheap unit like a Buffalo to handle archival? In my case, we are required to store calls for 1-3 years; they are likely only going to be accessed sparingly after the first day or so from when they were logged, but we need them to settle disputes and as part of the agreement with our merchant accounts and payment processors.
The advantage of traditional SAN setups (controllers and discrete shelves) is that you can add just capacity if you don't need any extra IO. Just plug in more shelves.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Aniki posted:

That's good to know, I have a meeting with NetApp tomorrow and while I like the sound of their equipment, I fear that even their lowest model, the 2040, may be too pricey for our project. I'll look into that SGI unit and see how it compares.

What do you guys do for archiving data that doesn't need to be high performance? Do you just add slower speed HDs as a second array inside your NAS, or do you pick up a cheap unit like a Buffalo to handle archival? In my case, we are required to store calls for 1-3 years; they are likely only going to be accessed sparingly after the first day or so from when they were logged, but we need them to settle disputes and as part of the agreement with our merchant accounts and payment processors.

Depends on your budget, really. Data Domain provides some excellent solutions for backup, archive, and replication, but they tend to be pretty pricey. However, a Buffalo TeraStation isn't going to provide much in the way of resiliency. It also depends on your primary storage vendor and how they handle things like storage tiering. Some work very well supporting multiple disk types on the same frame, and some do not.

NetApp does a pretty good job as near-line storage, but you can hit some funky performance issues mixing slow and fast disk on the same controller in certain situations, and if you don't intend to use a lot of the features and just want very cheap disk then there are probably some better players on the $/GB scale.

YOLOsubmarine fucked around with this message at 23:48 on Mar 20, 2012

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Aniki posted:

That's good to know, I have a meeting with NetApp tomorrow and while I like the sound of their equipment, I fear that even their lowest model, the 2040, may be too pricey for our project. I'll look into that SGI unit and see how it compares.
Take a look at Nimble.
