|
three posted:You might be right. I thought they were BSD-based like the EqualLogic. Got it, thank you (I was told it's going to be Q2). Also confirmed: they are on track to release the 10-gig flavor of the new PS6100 series (the 2.5" line of EQL chassis) in March.
|
# ? Jan 17, 2012 18:40 |
|
KS posted:It is definitely a bigger deal, but it requires series 40 or better controllers at the moment, hence my recommendation. 512K is the smallest page size -- the default is 2MB. Ah, OK. quote:6.0 also adds full VAAI support which is nice. I'm sure it is, but I'm an MS cluster shop.
|
# ? Jan 17, 2012 18:43 |
|
Ok, I have a question about iSCSI offload. I have a couple of R710s with four onboard Broadcom NICs that can do iSCSI offload, plus a quad-port Intel gigabit NIC. Broadcom's documentation claims big CPU performance gains under heavy I/O loads when using their hardware iSCSI initiator. Intel's documentation says they see no discernible difference in CPU usage between their NICs and a software initiator, and that their NICs with the software initiator are actually preferable since that doesn't break the standard OS storage model (whatever that means). Who is correct? Have any of you done any real-world testing with these scenarios?
|
# ? Jan 20, 2012 18:08 |
|
This isn't real-world, but with the number of cores in processors these days, I'd bet it's a rare initiator that's so badly CPU-bound it needs to offload iSCSI processing. I also trust my kernel more than Broadcom's kernel modules/driver stack. To further muddy the waters for you, Dell told me they've seen performance degrade using iSCSI offload with EqualLogic infrastructure. This goon upthread is a [sarcasm]huge fan[/sarcasm] of Broadcom's offload engine.
|
# ? Jan 20, 2012 21:52 |
|
Aye, the Broadcom iSOE is total junk; even EQL support agreed with me. As for TOE et al., I never noticed any difference on any of my servers.
|
# ? Jan 20, 2012 22:17 |
|
I've only deployed iSCSI offloading in two supercomputer-level installations, and there it was carefully planned for, with the correct hardware tested and purchased. Anywhere else it's a complete waste of administrative time and complexity; forget it.
|
# ? Jan 21, 2012 04:31 |
|
Consider, too, that if Intel were able to squeeze a teeny bit of performance improvement out of it, they would sell the hell out of it and tell you it made coffee, too. Saying "it's just as good" doesn't make anyone wanna buy anything.
|
# ? Jan 21, 2012 06:35 |
|
It's pretty simple to run tests with and without iSCSI offloading enabled; if you're concerned, do that and go with whatever gives you the best results. In my experience offload isn't a good thing, especially with Broadcom cards.
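A minimal A/B harness for exactly that kind of test would be something like this. It's a sketch, not a tuned workload: the device paths are placeholders for the same LUN reached over each path, and the fio job parameters are illustrative:

```shell
#!/bin/sh
# Run one fio workload twice: once against the LUN via the software
# initiator, once via the iSCSI offload HBA path, sampling CPU usage
# with mpstat in the background during each run.
run_case () {
    label=$1; dev=$2
    mpstat -P ALL 5 > "mpstat_${label}.log" &
    mpstat_pid=$!
    fio --name="$label" --filename="$dev" --direct=1 \
        --rw=randrw --bs=8k --iodepth=32 --numjobs=4 \
        --runtime=300 --time_based --group_reporting \
        > "fio_${label}.log"
    kill "$mpstat_pid"
}

run_case swinit  /dev/sdc   # LUN via software initiator (placeholder path)
run_case offload /dev/sdd   # same LUN via the iSOE HBA (placeholder path)
# Then compare IOPS/latency in fio_*.log and %sys/%irq in mpstat_*.log
```

If the offload path doesn't show clearly lower %sys at equal or better IOPS, it isn't buying you anything.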
|
# ? Jan 21, 2012 09:08 |
|
Echoing everyone else. Not worth the trouble it will cause.
|
# ? Jan 21, 2012 12:22 |
|
Does ZFS on BSD handle replication as gracefully as on Solaris? I'm looking for a migration path from our old OpenSolaris boxes to something more supportable.
|
# ? Jan 21, 2012 17:23 |
|
marketingman posted:I've only deployed iSCSI offloading in two supercomputer-level installations, and there it was carefully planned for, with the correct hardware tested and purchased. Anywhere else it's a complete waste of administrative time and complexity; forget it.

Just out of curiosity, why did you use iSCSI at the supercomputer level rather than FC or FCoE?

Nomex fucked around with this message at 17:32 on Jan 21, 2012 |
# ? Jan 21, 2012 17:27 |
|
Nomex posted:Just out of curiosity, why did you use iSCSI at the supercomputer level rather than FC or FCoE?

Given the supercomputer bit, probably iSCSI over InfiniBand.
|
# ? Jan 21, 2012 17:41 |
|
|
# ? Jan 22, 2012 01:52 |
|
Misogynist posted:Given the supercomputer bit, probably iSCSI over InfiniBand.

Hole in one. Custom NetApp configuration using InfiniBand... Edit: For more clarity, there was already a significant InfiniBand infrastructure and culture in place, so engineering at NetApp got involved for a custom build.

Muslim Wookie fucked around with this message at 03:02 on Jan 23, 2012 |
# ? Jan 23, 2012 03:00 |
|
Has anyone heard of Avere Systems? Looks like some NAS accelerator...? Apparently fairly recent (~3 years old), founded by a bunch of former NetApp, Spinnaker and Compellent head honchos: http://www.averesystems.com/AboutUs_ManagementTeam.aspx Heard about them around 2010 (Siggraph?) but never got around to reading up on them... szlevi fucked around with this message at 10:11 on Jan 23, 2012 |
# ? Jan 23, 2012 10:00 |
|
Vanilla posted:So speaking quite honestly an NX4 box is basically as low as it gets and is very old. Support is likely the same.

So far, I've had nothing but good experiences. Latency has been kept in good shape, and Data Progression has saved me a bunch of both space and time, since I don't have to do anything to move things. I've been told that the upgrade to a 64-bit OS will be coming later this year, and shouldn't cause any more headaches than a normal controller software upgrade. The series 40 controllers already come loaded with 6GB of RAM, so if RAM is used as cache, it should be possible to see an immediate increase. Since they're just commodity Supermicro Xeon servers, adding cache beyond that should be as simple as adding memory. I know that there is a cache card currently, and if they're not using memory as of now, then it would take a hardware upgrade to use more than 4GB of cache.

It seems to me that the "They're 32BIT!!! Only 4GB of CACHE!!!!" line has become a rallying cry against Compellent mostly since NTAP finally got 64-bit code running last year. That said, I haven't really seen any issues with having "only" 4GB of cache. Your workload may vary, of course.

Our biggest reason for going with the Compellent was that we have a very small staff (4) that is responsible for a very large number of systems (everything in the data center). We don't have the luxury of having a network admin team, a storage admin team, a Windows team, a Unix team, a VMware team, etc. We are all of those teams. Therefore, the Compellent having a bunch of features that are done for me, in the background, without me having to mess with anything, was a big selling point. I still CAN mess with stuff if I need to, but I can let the majority of it happen automatically.
|
# ? Jan 23, 2012 18:22 |
|
Intraveinous posted:It seems to me that the "They're 32BIT!!! Only 4GB of CACHE!!!!" has become a rallying cry against Compellent mostly since NTAP finally got 64 bit code running last year. That said, I haven't really seen any issues with having "only" 4GB of cache. Your workload may vary of course.

Not really germane to your argument, but NTAP has supported more than 4GB of RAM for quite some time. The 64-bit upgrade you're talking about applied to aggregates, not to the actual code base. It allowed larger pools of disk, but it didn't involve any additional overhaul of ONTAP to support larger memory pools, as that wasn't required.

Cache size is one of those questions customers ask because it's a nice, easy measuring-stick number for comparing two arrays, but it's really only a valid question in the context of a specific workload and the overall design of the controller. If your cache is appropriately matched to the disk on the back end and the application on the front end, you'll be fine. That said, I have seen workloads that will absolutely thrash the 32GB per controller in the FAS6080 nodes, so there are certainly instances where more cache than 4GB would be requisite, at least on a NetApp controller. I'm not conversant enough with Compellent and how they use their cache to say, but I'd guess 4GB isn't enough for high-end workloads on a single node of their gear either.
|
# ? Jan 23, 2012 19:12 |
|
It's kind of a moot point now that they're shipping the new 64-bit v6.0, but isn't this "limited" RAM what the controllers are using as cache? Because if it is, then having a much larger one *could* make a difference in certain scenarios.
szlevi fucked around with this message at 18:32 on Jan 24, 2012 |
# ? Jan 23, 2012 20:22 |
|
Trying to decide between a VNXe 3100 and a FAS 2040. The unit will be used solely as the storage attached to two hosts running VMware Essentials Plus. Each is configured with 12x 600GB 15K SAS disks. I'm currently getting a better price from NetApp by about $2K. Both seem to meet and exceed our basic requirements (1100 IOPS, 12MB/sec throughput, 600GB capacity). Our read:write ratio is about 2.6:1. At this point, I don't believe I'm going to use compression, dedupe, or thin provisioning, because the disk savings aren't worth the performance hit. However, these features are important in case we do need them. I'm not sure how much I'm going to use snaps vs. a software package like Veeam. Replication and expansion are not concerns at this point; I don't foresee that kind of growth. Specific questions:
|
# ? Jan 23, 2012 20:56 |
|
With all the bad press I've seen flying around recently concerning EMC, I would default to the NetApp based on that alone. Pricing being cheaper is just icing on the cake.
|
# ? Jan 23, 2012 21:08 |
|
Just spinning up my new FAS2040; it's amazing:
|
# ? Jan 23, 2012 21:22 |
|
FlyingZygote posted:Trying to decide between a VNXe 3100 and a FAS 2040. The unit will be used solely as the storage attached to two hosts running VMWare Essentials Plus. Each is configured with 12x 600GB 15K sas disks. I'm currently getting a better price from NetApp by about $2K. FlyingZygote posted:At this point, I don't believe I'm going to use compression, dedupe, or thin provision because the disk savings is not worth the performance hit. FlyingZygote posted:Specific questions:
evil_bunnY fucked around with this message at 22:16 on Jan 23, 2012 |
# ? Jan 23, 2012 22:12 |
|
Bluecobra posted:I disagree. My definition of a whitebox is to use your own case or barebones kit from a place like Newegg to make a desktop or server. If Compellent chose to use a standard Dell/HP/IBM server instead, it would be miles ahead in build quality. Have you ever had the pleasure of racking Compellent gear? They have to be some of the worst rail kits ever, thanks to Supermicro. The disk enclosure rail kits don't even come assembled. It took me about 30 minutes to get just one controller rack mounted. Compare that with every other major vendor, racking shouldn't take more than a couple of minutes due to railkits that snap into the square holes.

The Supermicro controllers were actually the easier parts to rack. The disk enclosures are OEM'd by Xyratex, who make enclosures for a wide array of vendors out there. Info here (yeah, it's a few years old and things have likely changed): http://www.theregister.co.uk/2009/03/02/xyratex_sff_arrays/. It was definitely an annoyance at first, but once I'd done one, the others weren't that hard. I don't base the worth of something on how easy it is to rack.

EDIT: I was a bit behind in the thread and didn't notice the SC 6.0 release had already been talked about. Intraveinous fucked around with this message at 23:48 on Jan 23, 2012 |
# ? Jan 23, 2012 22:29 |
|
Is Nexenta somehow optimized for SAN duty in some way that Solaris isn't? I guess what I'm asking is: other than the GUI and prepackaged tools, is there any advantage to running Nexenta over stock Solaris for an iSCSI target?
some kinda jackal fucked around with this message at 22:51 on Jan 23, 2012 |
# ? Jan 23, 2012 22:48 |
|
NippleFloss posted:Not really germane to your argument, but NTAP has supported more than 4GB of ram for quite some time. The 64-bit upgrade that you're talking about was to Aggregates, not to the actual code base. It allowed larger pools of disk but did not provide any additional overhaul of OnTAP to support larger memory pools, as it was not required.

Yeah, you're correct: I wasn't saying that ONTAP 8 being 64-bit was what allowed them to address more memory or more cache, just that since NTAP started using a 64-bit OS, the poo poo-slinging against Compellent for using a 32-bit OS has increased severalfold. Absolutely true on matching cache to back-end disk to workload, or you'll be sorry. I doubt most people who have workloads that would thrash 32GB of cache on a FAS6xxx would be looking at Compellent to begin with. For my environment, I was easily able to hit my storage IOPS and latency requirements with 3-4 shelves of 2.5" 15Krpm disks, and my size requirements with 1-3 shelves of 7.2K 3.5" 2TB disks.

We looked very hard at 3PAR, and couldn't justify the 4x price premium over the Compellent. There were nice features that I liked better on each. We also had NTAP come in, and while their kit was quite nice, and I liked the PAM cards and the price, it really came down to their resellers not listening to what I told them I wanted and needed on multiple occasions. They were trying to sell me a V3xxx array to stick in front of my existing EVA4400. While that's an interesting concept, they kept balking when I told them that I wanted this array to be able to stand on its own and handle the full load by itself. Being able to snap and dedupe and whatever else on EVA4400 LUNs would have been a nice value-add, but I couldn't get them to give me a straight-up quote for a FAS/Vxxxx and enough disk to handle the whole load by itself.

On the upgrade to Storage Center 6.0, I don't know if the memory in the controller is used as a cache or not, but I suspect it's not. I know that there's a separate flash-backed cache card installed in the controller as well.
|
# ? Jan 24, 2012 00:03 |
|
Martytoof posted:Is Nexenta somehow optimized for SAN duty in some way that Solaris isn't? I guess what I'm asking is, other than the GUI and prepackaged tools is there any advantage to running Nexenta over stock Solaris for an iSCSI target. Vulture Culture fucked around with this message at 00:45 on Jan 24, 2012 |
# ? Jan 24, 2012 00:43 |
|
You won't pay a performance penalty for thin provisioning on the FAS2040, as it's a function of the way ONTAP does block accounting and falls out for free (much like snapshots). Dedupe on the FAS is not inline, so the performance hit is only taken when the scheduled job runs, and it can be scheduled for off-peak times. There is a small performance hit on new writes to deduped volumes, as each new block needs to be added to the fingerprint DB, but it's very small (around 7% at the upper end in testing). Compression is definitely a no-no on VMware datastores. If you go with the FAS, I would definitely recommend NFS. Having your VMDKs as WAFL files provides a number of benefits, and NFS is just easier to work with. Full disclosure: I work for NetApp. But I was a customer for a long time before that, so I'm familiar with the trials and troubles of the day-to-day SAN admin.
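For what it's worth, setting up that off-peak schedule on a 7-Mode filer is only a couple of commands. A sketch using the 7-Mode `sis` CLI; `/vol/vmstore` is a placeholder volume name:

```shell
# ONTAP 7-Mode sketch: enable dedupe on a volume and push the
# post-process scan to 2am daily, keeping it out of peak hours.
sis on /vol/vmstore
sis config -s sun-sat@2 /vol/vmstore   # run at 02:00 every day
sis start -s /vol/vmstore              # one-off scan of existing data
sis status /vol/vmstore                # check progress / last run
```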
|
# ? Jan 24, 2012 02:09 |
|
FlyingZygote posted:At this point, I don't believe I'm going to use compression, dedupe, or thin provision because the disk savings is not worth the performance hit.

First, go NetApp; you will be able to get more help here. Secondly, skip compression, but USE DEDUPE. You actually get better performance out of your NetApps with dedupe turned on, since the cache is dedupe-aware, so you can fit more blocks into your cache (in addition to the space savings). 1) NFS, definitely. Even VMware is recommending NFS now. All of the VAAI tricks are just attempts to get iSCSI to where NFS already is. Here is the KEY REASON for NFS on a NetApp: when you get free space via dedupe, you can then use that free space for more VMs. If you go with a LUN-based iSCSI setup, all of your dedupe savings are wasted, since the hosts aren't aware of the free space; they can only see the LUN. Hosts connected via NFS see the whole volume, so they can take advantage of your deduped space savings. 2) NetApp; the software is the best. System Manager (now OnCommand System Manager) is great. 3) NetApp, because it's easy and they have the best software tools. The Virtual Storage Console (VSC) is a plugin for vCenter that hooks your NetApp in. Here's what it lets you do:
|
# ? Jan 24, 2012 06:06 |
|
FlyingZygote posted:Trying to decide between a VNXe 3100 and a FAS 2040. The unit will be used solely as the storage attached to two hosts running VMWare Essentials Plus. Each is configured with 12x 600GB 15K sas disks. I'm currently getting a better price from NetApp by about $2K.

- As said above, for VMware I'd go NFS over iSCSI every time. More flexible.
- Decide for yourself, I'm biased!
- http://www.youtube.com/watch?v=4VfO_hAgCPQ - install
- http://www.youtube.com/watch?v=S1HD-KglfYs - management GUI

Can't find 2040 vids. Make sure each vendor has given you a usable capacity not taking into account features such as compression and dedup seeing as you're not going to use the features immediately. As for the rap EMC has taken: which vendor hasn't? Plenty of good and bad threads on all vendors here, hardforum, arstechnica, etc.
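If it helps tip you toward NFS: mounting a filer export as a datastore on an ESXi 5.x host is a one-liner. The filer address, export path and datastore name below are made up:

```shell
# ESXi 5.x sketch: mount an NFS export from the filer as a datastore.
esxcli storage nfs add --host=filer01 --share=/vol/vmstore --volume-name=vmstore_nfs
esxcli storage nfs list    # confirm the mount came up
```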
|
# ? Jan 24, 2012 07:47 |
|
FlyingZygote posted:Trying to decide between a VNXe 3100 and a FAS 2040. The unit will be used solely as the storage attached to two hosts running VMWare Essentials Plus. Each is configured with 12x 600GB 15K sas disks. I'm currently getting a better price from NetApp by about $2K.

So let me stop you right there, because in your scenario there is no performance hit from dedupe or thin provisioning on a NetApp filer. With a NetApp you'd also be silly to use Veeam, because you'd be using at least twice the amount of space required to perform backups, though without knowing your rate of data change I can't give an exact figure. Further, there is absolutely no performance hit to having snapshots, restoring snapshots, or mounting snapshots as volumes and using them like real live data. You'd absolutely go NFS, and the NetApp would be pretty easy to set up and never ever look at again, but I can't compare that to the VNXe, so take that as anecdotal...

Edit: Also you'd be able to move CIFS fileshares to the filer and leverage dedupe and performance instead of relying on a Windows file server, if that will be a concern. You could even share the NFS datastore via CIFS at the same time if you wanted (some people do to make it easier to dump ISOs etc in there).

Muslim Wookie fucked around with this message at 12:07 on Jan 24, 2012 |
# ? Jan 24, 2012 09:58 |
|
Vanilla posted:Can't find 2040 vids. Make sure each vendor has given you a usable capacity not taking into account features such as compression and dedup seeing as you're not going to use the features immediately.

1) This is kinda disingenuous: you want the raw capacity to matter somewhat, but do take into account what you'll pay to get the raid level you'll actually use (if you say RAID5 I'm going to lol) and do take into account what dedup/thin provision will get you.
2) Who complains about Netapp? Compellent?

marketingman posted:Edit: Also you'd be able to move CIFS fileshares to the filer and leverage dedupe and performance instead of relying on a Windows file server, if that will be a concern. You could even share the NFS datastore via CIFS at the same time if you wanted (some people do to make it easier to dump ISOs etc in there).

If you can run Windows VM's and aren't bothered by the MS licenses, there are a few good reasons to not do this. It's not like you can't back them by dedup'ed datastore.

evil_bunnY fucked around with this message at 13:24 on Jan 24, 2012 |
# ? Jan 24, 2012 13:09 |
|
I will just say: do not make a decision without the vendor bringing in the box for you to play with for two weeks or so. If you don't have a storage admin, ease of use is very important. I cannot speak to NetApp, as I have not had a chance to use their SANs. When I looked at them ~3 years ago, the UI was just as bad as everyone else's but Equallogic's.

evil_bunnY posted:If you can run Windows VM's and aren't bothered by the MS licenses, there are a few good reasons to not do this. It's not like you can't back them by dedup'ed datastore.

Care to go into a bit more detail? I am very happy with the results of putting our CIFS shares onto our VNX.
|
# ? Jan 24, 2012 13:39 |
|
Yeah, interested in your reasoning as well. I have some ideas: for example, if DFSR is required then it's not an option. Or do you think having the same volume (i.e. the same files) served by NFS and CIFS at the same time is a problem? Sorry, it's late at night and I'm not thinking too well.
|
# ? Jan 24, 2012 14:21 |
|
Internet Explorer posted:Care to go into a bit more detail? I am very happy with the results of putting our CIF shares onto our VNX.

DFSR is one, virus scanning, not dedicating SAN ports to a particular app (or paying for data movers in EMC's case?), and local logs for troubleshooting. There are clearly cases where you want to do this, but those are things I've had to deal with before. Also rights management issues with cmdlets (or the underlying classes), but this may be fixed now.

Internet Explorer posted:I will just say do not make a decision without the vendor bringing in the box for you to play with for 2 weeks or so.

marketingman posted:Or do you think having the same volume (ie files) served by NFS and CIFS at the same time is a problem?

evil_bunnY fucked around with this message at 14:33 on Jan 24, 2012 |
# ? Jan 24, 2012 14:29 |
|
evil_bunnY posted:1) This is kinda disingenuous: you want the raw capacity to matter somewhat, but do take into account what you'll pay to get the raid level you'll actually use (if you say RAID5 I'm going to lol) and do take into account what dedup/thin provision will get you. Why would you 'lol' at RAID5?
|
# ? Jan 24, 2012 14:38 |
|
three posted:Why would you 'lol' at RAID5?

Rebuild failures. It's happened twice to me before (once on a Dell MD which was bad enough, the other time on a semi-old EMC unit), and it's mathematically very likely to happen during any storage system's lifetime.

three posted:So I'm guessing you really hate RAID 50?

evil_bunnY fucked around with this message at 16:10 on Jan 24, 2012 |
# ? Jan 24, 2012 14:43 |
|
evil_bunnY posted:Rebuild failures. It's happened twice to me before (once on a Dell MD which was bad enough, the other time on a semi-old EMC unit), and it's mathematically very likely to happen during any storage system's lifetime. So I'm guessing you really hate RAID 50?
|
# ? Jan 24, 2012 15:23 |
|
evil_bunnY posted:DFSR is one, virus scanning, not dedicating SAN ports to a particular app (or paying for data movers in EMC's case?), and local logs for troubleshooting.

DFSR is something we both agree on. Virus scanning is available via McAfee, but you can also just tell a server to scan the share if that's not an option, so I don't see that as an issue in most instances, though it could be in some small and/or political offices. What do you mean by dedicating SAN ports to a particular app, and how does that relate to CIFS? Local logs for troubleshooting what? Access errors? The same logs exist on the NetApp, but I can see an applications team being annoyed at not being able to access them easily... not really a show stopper IMO. +1 on the RAID5 lol, that should stay well away from enterprise storage arrays.
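The "mathematically very likely" bit upthread is easy to put numbers on. A back-of-envelope sketch, assuming every surviving sector must be read once during a rebuild and that the vendor's quoted unrecoverable-read-error (URE) rate applies uniformly (the drive sizes and rates below are illustrative, not from any specific quote):

```python
import math

def rebuild_ure_probability(num_disks, disk_bytes, ure_per_bit=1e-15):
    """P(at least one URE) while reading the num_disks - 1 survivors
    of a degraded RAID 5 set end to end during a rebuild."""
    bits_read = (num_disks - 1) * disk_bytes * 8
    # Complement of "every bit read cleanly"; log1p/expm1 keep the
    # tiny per-bit probability from washing out in floating point.
    return -math.expm1(bits_read * math.log1p(-ure_per_bit))

# 12 x 600GB 15K SAS at a typical enterprise 1-in-1e15-bit URE rate:
p_sas = rebuild_ure_probability(12, 600e9, ure_per_bit=1e-15)
# Same spindle count in 2TB 7.2K SATA at the common 1-in-1e14 rate:
p_sata = rebuild_ure_probability(12, 2e12, ure_per_bit=1e-14)
print(f"600GB SAS shelf: {p_sas:.1%}")   # roughly 5%
print(f"2TB SATA shelf:  {p_sata:.1%}")  # over 80%
```

Which is why big SATA spindles behind single parity are the combination that gets the lol: RAID 6 rides out the URE because the second parity can still reconstruct the failed read.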
|
# ? Jan 24, 2012 15:31 |
|
Martytoof posted:Is Nexenta somehow optimized for SAN duty in some way that Solaris isn't? I guess what I'm asking is, other than the GUI and prepackaged tools is there any advantage to running Nexenta over stock Solaris for an iSCSI target.

You know, in principle I love Nexenta. I don't know why, but I'm just wary about using it in production. I'd be interested to hear some reasons why I'm being irrational. (I specifically mean Nexenta, not the whole ZFS/dedupe/Oracle 7000 family type of unified storage.)
|
# ? Jan 24, 2012 15:40 |
|
Re: CIFS on or off the SAN: one big reason is that your NetApp or VNX isn't going to give you the advanced share/file reporting stats that Windows will give you if you run your storage through a Windows server. I like the idea of making a big LUN, deduping it, and then presenting that LUN to Windows and letting it serve out the data. Granted, most customers choose to just toss CIFS on the NetApp and forget about it, but the share reporting features of Windows are one thing to consider.
|
# ? Jan 24, 2012 16:00 |