|
evil_bunnY posted:So how do people go about fixing that on 1GBE storage networks? It's only a concern if you've got a single ESX server that needs to push more than 1Gb/s to a single datastore, which is pretty rare. You can split the VMDKs for a single VM across multiple datastores, each mounted from a different IP, and then use a volume manager to create a single logical device from all of those disks. That spreads the load for that logical device evenly over the available 1GbE links. But really, if your performance requirements demand that throughput from a single ESX host to a single datastore exceed what a 1GbE link can carry, then you should probably be investing in a higher-performance storage infrastructure like 10GbE or FC.
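To make the striping idea concrete, here's a toy sketch in Python (the datastore names, the 1 MiB stripe width, and the I/O pattern are all made up for illustration) of how a striped logical volume maps consecutive chunks round-robin across its backing disks, which is what ends up spreading a single VM's traffic across every mount:
code:
from collections import Counter

# Hypothetical stripe layout: four VMDKs, each on a datastore mounted from a
# different IP, glued into one logical device by a volume manager.
CHUNK = 1024 * 1024  # made-up 1 MiB stripe width
BACKING_DISKS = ["vmdk_on_ds1", "vmdk_on_ds2", "vmdk_on_ds3", "vmdk_on_ds4"]

def place(logical_offset):
    """Map a logical byte offset to (backing disk, offset on that disk)."""
    chunk_index = logical_offset // CHUNK
    disk = BACKING_DISKS[chunk_index % len(BACKING_DISKS)]  # round-robin placement
    disk_offset = (chunk_index // len(BACKING_DISKS)) * CHUNK + logical_offset % CHUNK
    return disk, disk_offset

# A 16 MiB sequential read touches every backing disk (and thus every 1GbE mount) equally.
reads = Counter(place(offset)[0] for offset in range(0, 16 * CHUNK, CHUNK))
print(reads)  # each of the four disks serves 4 of the 16 chunks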
|
# ? Mar 17, 2012 22:57 |
|
So since we're talking about 10GE does anyone have switch suggestions? My current setup is using a Cisco 4900M. It's totally solid, 8+8+8 fiber ports (with X2 modules). However it's very expensive and I was wondering what other people have been using.
|
# ? Mar 18, 2012 00:37 |
|
NippleFloss posted:Correct. NFS has no protocol level load balancing. It can only piggyback on network layer load balancing, at least in current versions of the protocol. It is more than that. http://lass.cs.umass.edu/papers/pdf/FAST04.pdf
|
# ? Mar 18, 2012 00:46 |
|
Intrepid00 posted:It is more than that. I have no idea why you posted this and what it has to do with discussions of load balancing mechanisms. It's utterly meaningless in this context since we aren't talking about a Linux server running NFS; we're talking about a dedicated NAS appliance that is tuned specifically for NFS responsiveness. I wouldn't recommend NFS over FC or iSCSI as a blanket statement, but on NetApp it makes perfect sense, and there are plenty of benchmarks that place real-world VMware performance on all three protocols within a few percentage points of one another.
|
# ? Mar 18, 2012 01:23 |
|
NippleFloss posted:I have no idea why you posted this and what it has to do with discussions of load balancing mechanisms. Caching performance. Your load balancing will benefit greatly from the way iSCSI handles caching and updates compared to NFS.
|
# ? Mar 18, 2012 01:30 |
|
Less Fat Luke posted:So since we're talking about 10GE does anyone have switch suggestions? My current setup is using a Cisco 4900M. It's totally solid, 8+8+8 fiber ports (with X2 modules). However it's very expensive and I was wondering what other people have been using.
|
# ? Mar 18, 2012 01:53 |
|
Intrepid00 posted:Caching performance. Your load balancing will benefit greatly from the way iSCSI handles caching and updates compared to NFS. This is true, but no one was disputing that iSCSI round robin would provide a more even balance than network-level link aggregation. The question is whether a perfectly even balance is necessary. In some cases, where you're very bandwidth-constrained, it is. But on most properly designed IP storage networks that isn't the case.
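A rough sketch of the difference being argued about, with made-up IPs and Python's hash() standing in for a switch's src/dst-IP hash: per-flow link aggregation pins a single host/datastore conversation to one link, while per-I/O round robin splits it evenly.
code:
from collections import Counter
from itertools import cycle

LINKS = ["uplink_a", "uplink_b"]

def lacp_pick(src_ip, dst_ip):
    # a given host/datastore IP pair always hashes onto the same physical link
    return LINKS[hash((src_ip, dst_ip)) % len(LINKS)]

ios = [("10.0.0.11", "10.0.0.50")] * 1000  # one ESX host talking to one datastore IP

print("IP-hash aggregation:", Counter(lacp_pick(s, d) for s, d in ios))
# every I/O lands on a single link; the other link sits idle

round_robin = cycle(LINKS)  # iSCSI MPIO round robin alternates paths per I/O
print("MPIO round robin:   ", Counter(next(round_robin) for _ in ios))
# roughly 500 I/Os per link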
|
# ? Mar 18, 2012 03:39 |
|
Goddamn, Intrepid, I couldn't help but picture you as that internal IT person who smugly pulls out some datasheet or stats during a meeting to show why I *have* to be wrong and s/he's right. They're never right.
|
# ? Mar 18, 2012 04:14 |
|
adorai posted:so anyway, the point is that iSCSI can utilize MPIO round robin via multiple initiators and sessions, NFS can't, right? http://tools.ietf.org/html/rfc5661#section-12 Vulture Culture fucked around with this message at 04:49 on Mar 18, 2012 |
# ? Mar 18, 2012 04:44 |
|
Less Fat Luke posted:So since we're talking about 10GE does anyone have switch suggestions? My current setup is using a Cisco 4900M. It's totally solid, 8+8+8 fiber ports (with X2 modules). However it's very expensive and I was wondering what other people have been using. We just went to Nexus 5000s when we took the 10GbE plunge, but I'm sure they're not terribly cheap either. We're also using Twinax for almost everything, as it doesn't have to go far.
|
# ? Mar 18, 2012 06:23 |
|
Question for the folks who like using RDMs. What sort of performance problems have you experienced? Are you still using RDMs or not?
|
# ? Mar 18, 2012 07:55 |
|
Working on setting up a new networking test bench around a Mellanox switch and would like to find a way to tie in some 1GE RJ45 devices. I have seen some of the adapters that break out QSFP to 4x SFP+... what would be the best way to bring in those types of devices without adding too much overhead/latency into the mix? I have seen SFP+ to female 1G RJ45 adapters, but not QSFP to RJ45. The reason behind this is that I'm putting all devices on the same switch in our office, set up around a Mellanox SX1036, and I want to tie in 10G, 40G, and even legacy 1G for testing on equal grounds. I just don't want to add weird latency issues into the mix from active adapters.
|
# ? Mar 18, 2012 17:13 |
|
Cultural Imperial posted:Question for the folks who like using RDMs. What sort of performance problems have you experienced? Are you still using RDMs or not? This question is making me incredibly confused. What are you even getting at?
|
# ? Mar 18, 2012 19:16 |
|
Misogynist posted:This question is making me incredibly confused. What are you even getting at? A couple posts back someone mentioned seeing performance problems with RDMs. My question is pretty much the same as yours except I'm not trying to be a dick about it.
|
# ? Mar 19, 2012 05:12 |
|
Cultural Imperial posted:A couple posts back someone mentioned seeing performance problems with RDMs. My question is pretty much the same as yours except I'm not trying to be a dick about it.
|
# ? Mar 19, 2012 14:20 |
|
The downside to using an rdm is no vmotion. Otherwise performance should more or less be the same.
|
# ? Mar 19, 2012 15:38 |
|
adorai posted:The downside to using an rdm is no vmotion. Vulture Culture fucked around with this message at 18:15 on Mar 19, 2012 |
# ? Mar 19, 2012 18:06 |
|
madsushi posted:I disagree with this. For misogynist, from page 67. namaste friends fucked around with this message at 20:19 on Mar 19, 2012 |
# ? Mar 19, 2012 20:14 |
|
Cultural Imperial posted:For misogynist, from page 67.
|
# ? Mar 19, 2012 21:53 |
|
I need to attach a pile of disks (goal is around 20TB) to a Solaris server so ZFS can manage them. Where can I get either a server that will hold ~12 drives + a system drive with a dumb pass through controller, or a drive enclosure that I can pass through the disks into an 1068E controller or something equivalent?
|
# ? Mar 19, 2012 23:09 |
|
FISHMANPET posted:I need to attach a pile of disks (goal is around 20TB) to a Solaris server so ZFS can manage them. Where can I get either a server that will hold ~12 drives + a system drive with a dumb pass through controller, or a drive enclosure that I can pass through the disks into an 1068E controller or something equivalent?
|
# ? Mar 20, 2012 02:57 |
|
FISHMANPET posted:I need to attach a pile of disks (goal is around 20TB) to a Solaris server so ZFS can manage them. Where can I get either a server that will hold ~12 drives + a system drive with a dumb pass through controller, or a drive enclosure that I can pass through the disks into an 1068E controller or something equivalent? Why not just install ZFS on the storage server itself and pass the storage to your other Solaris server through iSCSI? Your options are pretty much limitless then.
|
# ? Mar 20, 2012 04:42 |
|
Nomex posted:Why not just install ZFS on the storage server itself and pass the storage to your other Solaris server through iSCSI? Your options are pretty much limitless then. The Solaris iSCSI stack sucks and you lose a lot of the flexibility of ZFS if you have to present zvols to hosts. Since ZFS is meant to run on commodity hardware you can make do with any decent JBOD enclosure.
|
# ? Mar 20, 2012 04:50 |
|
Nomex posted:Why not just install ZFS on the storage server itself and pass the storage to your other Solaris server through iSCSI? Your options are pretty much limitless then. There's only one server in this scenario. It's going to be running AMANDA, so we'll use it as a holding disk. I guess Norco sells a 12-bay enclosure for $400, which I can stuff full of disks. Any reason to be concerned about the reliability of such a product? Though for such a low price we could just get two and use one as a cold spare.
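Back-of-envelope math (my assumptions: 2 TB drives and a single RAID-Z2 vdev with no hot spare, neither of which is decided yet) puts a full 12-bay shelf right around the 20TB goal:
code:
# Usable space for a 12-bay shelf of 2 TB drives, ignoring ZFS metadata/slop overhead.
drives = 12
drive_tb = 2.0   # marketing terabytes (10**12 bytes)
parity = 2       # RAID-Z2 loses two drives' worth of space to parity

raw_tb = drives * drive_tb
usable_tb = (drives - parity) * drive_tb
usable_tib = usable_tb * 1e12 / 2**40  # what the OS will actually report

print(f"raw: {raw_tb:.0f} TB, usable: {usable_tb:.0f} TB (~{usable_tib:.1f} TiB)")
# raw: 24 TB, usable: 20 TB (~18.2 TiB)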
|
# ? Mar 20, 2012 06:05 |
|
FISHMANPET posted:I need to attach a pile of disks (goal is around 20TB) to a Solaris server so ZFS can manage them. Dell MD1200 (1220 if you want 24x2,5") are SAS and pretty well built. I've used them before direct-attached to B2D servers and never had issues. evil_bunnY fucked around with this message at 09:10 on Mar 20, 2012
# ? Mar 20, 2012 08:52 |
|
All the fun new stuff in ZFS won't be available in Solaris ever; you'll have to use something based off Illumos, since that's where all the development is these days.
|
# ? Mar 20, 2012 13:34 |
|
evil_bunnY posted:Dell MD1200 (1220 if you want 24x2,5") are SAS and pretty well built. I've used them before direct-attached to B2D servers and never had issues. The problem with that is that it presents itself to the server as one disk, and I want to present each disk individually to the server and let ZFS manage it all. luminalflux posted:All the fun new stuff in ZFS won't be available in Solaris ever; you'll have to use something based off Illumos, since that's where all the development is these days. Yeah, I think the odds of this being true are like zero, since Illumos hasn't done anything of note in nearly two years.
|
# ? Mar 20, 2012 13:57 |
|
NippleFloss posted:The Solaris iSCSI stack sucks Are you talking about the Solaris 10 iSCSI stack? COMSTAR is amazing. The rest of your post holds true, of course. Vulture Culture fucked around with this message at 14:25 on Mar 20, 2012 |
# ? Mar 20, 2012 14:23 |
|
FISHMANPET posted:The problem with that is that it presents itself to the server as one disk, and I want to present each disk individually to the server and let ZFS manage it all. Huh? Pretty sure you can just configure it however the hell you want from the controller?
|
# ? Mar 20, 2012 14:25 |
|
evil_bunnY posted:Huh? Pretty sure you can just configure it however the hell you want from the controller? I'll have to ask our Dell guy about it tomorrow when he comes in to show off Dell's newest Intel servers. Though even if it's true, we may be priced out of it by default. Apparently the budget for this is $10k, and that array filled with 2 TB disks is already $10k, and the $10k budget is supposed to include the server.
|
# ? Mar 20, 2012 15:25 |
|
FISHMANPET posted:Yeah, I think the odds of this being true are like zero, since Illumos hasn't done anything of note in nearly two years. OpenIndiana seems active, and Bryan Cantrill's talk at LISA showed signs of a lot of fun ZFS stuff that won't make it into Solaris. Then again, I haven't done Solaris since the first releases of Sol10.
|
# ? Mar 20, 2012 15:30 |
|
luminalflux posted:OpenIndiana seems active, and Bryan Cantrill's talk at LISA showed signs of a lot of fun ZFS stuff that won't make it into Solaris. I'm pretty pessimistic when it comes to the future of an open source Solaris. OpenSolaris always struck me as being primarily developed by Sun engineers, and when they stopped contributing it seemed to mostly die. I'd like to be proven wrong, but I don't think there's enough money or interest outside of Oracle to make it. On the other hand, Joyent is apparently pretty big and is throwing a lot of money at it, so who knows.
|
# ? Mar 20, 2012 15:41 |
|
FISHMANPET posted:I'll have to ask our Dell guy about it tomorrow when he comes in to show off Dell's newest Intel servers. Though even if it's true, we may be priced out of it by default. Apparently the budget for this is $10k, and that array filled with 2 TB disks is already $10k, and the $10k budget is supposed to include the server. FISHMANPET posted:I'm pretty pessimistic when it comes to the future of an open source Solaris. OpenSolaris always struck me as being primarily developed by Sun engineers, and when they stopped contributing it seemed to mostly die. I'd like to be proven wrong, but I don't think there's enough money or interest outside of Oracle to make it. On the other hand, Joyent is apparently pretty big and is throwing a lot of money at it, so who knows. At $10K, you're going to be priced out of the market if you're looking for a server with a separate disk enclosure -- most major vendors don't get cheap and build in things like array multipathing that don't fit your budget. Your best bet is to look for a server with plenty of disk drives built in. The SGI 1116 is probably the best-priced one from a major vendor, but there's lots of whitebox vendors that all sell similar things. Vulture Culture fucked around with this message at 15:45 on Mar 20, 2012 |
# ? Mar 20, 2012 15:43 |
|
Misogynist posted:Are you talking about the Solaris 10 iSCSI stack? COMSTAR is amazing. Yes, Solaris 10. OpenSolaris has had a much more mature storage feature set for a while; unfortunately, my customer can't use it. Notable features of Solaris 10 iSCSI include the server hanging indefinitely on boot if the discovery target is down or inaccessible, and a patch that changed the service start order so that ZFS attempts to bring pools online before the iSCSI initiator service has started, resulting in all pools coming up broken. My experience with iSCSI on Solaris 10 has been pretty infuriating.
|
# ? Mar 20, 2012 16:59 |
|
There's really no one competent left at Sun, is there?
|
# ? Mar 20, 2012 18:04 |
|
Yeah there is, but odds are he's either pushing retirement age or considering hanging himself.
|
# ? Mar 20, 2012 18:50 |
|
Misogynist posted:At $10K, you're going to be priced out of the market if you're looking for a server with a separate disk enclosure -- most major vendors don't get cheap and build in things like array multipathing that don't fit your budget. Your best bet is to look for a server with plenty of disk drives built in. The SGI 1116 is probably the best-priced one from a major vendor, but there's lots of whitebox vendors that all sell similar things. That's good to know, I have a meeting with NetApp tomorrow and while I like the sound of their equipment, I fear that even their lowest model, the 2040, may be too pricey for our project. I'll look into that SGI unit and see how it compares. What do you guys do for archiving data that doesn't need to be high performance? Do you just add slower speed HDs as a second array inside your NAS or do you pick up a cheap unit like a Buffalo to handle archival? In my case, we are required to store calls for 1-3 years, they are likely only going to be accessed sparingly after the first day or so they were logged, but we need them to settle disputes and as part of the agreement with our merchant accounts and payment processors.
|
# ? Mar 20, 2012 19:27 |
|
Aniki posted:What do you guys do for archiving data that doesn't need to be high performance? Do you just add slower speed HDs as a second array inside your NAS or do you pick up a cheap unit like a Buffalo to handle archival? In my case, we are required to store calls for 1-3 years, they are likely only going to be accessed sparingly after the first day or so they were logged, but we need them to settle disputes and as part of the agreement with our merchant accounts and payment processors.
|
# ? Mar 20, 2012 19:59 |
|
Aniki posted:That's good to know, I have a meeting with NetApp tomorrow and while I like the sound of their equipment, I fear that even their lowest model, the 2040, may be too pricey for our project. I'll look into that SGI unit and see how it compares. Depends on your budget, really. Data Domain provides some excellent solutions for backup, archive, and replication, but they tend to be pretty pricey. However, a Buffalo TeraStation isn't going to provide much in the way of resiliency. It also depends on your primary storage vendor and how they handle things like storage tiering. Some work very well supporting multiple disk types on the same frame, and some do not. NetApp does a pretty good job as near-line storage, but you can hit some funky performance issues mixing slow and fast disk on the same controller in certain situations, and if you don't intend to use a lot of the features and just want very cheap disk then there are probably some better players on the $/GB scale. YOLOsubmarine fucked around with this message at 23:48 on Mar 20, 2012 |
# ? Mar 20, 2012 23:43 |
|
Aniki posted:That's good to know, I have a meeting with NetApp tomorrow and while I like the sound of their equipment, I fear that even their lowest model, the 2040, may be too pricey for our project. I'll look into that SGI unit and see how it compares.
|
# ? Mar 21, 2012 02:38 |