szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

three posted:

You might be right. I thought they were BSD-based like the Equallogic.

I think the NAS head is due later this year. If you have a Dell rep, they may be able to pinpoint it to a specific Quarter.

Got it, thank you (I was told it's going to be Q2).
Also confirmed: they are on track to release the 10-gig flavor of the new PS6100 series (the 2.5" line of EQL chassis) in March.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

KS posted:

It is definitely a bigger deal, but it requires series 40 or better controllers at the moment, hence my recommendation. 512K is the smallest page size -- the default is 2MB.

Ah, OK.

quote:

6.0 also adds full VAAI support which is nice.

I'm sure it is but I'm an MS cluster shop. :)

Syano
Jul 13, 2005
Ok, I have a question about iSCSI offload. I have a couple of R710s with four onboard Broadcom NICs that can do iSCSI offload, plus a quad-port Intel gigabit NIC. Broadcom's documentation says they see big CPU performance gains under high I/O loads when using the hardware iSCSI initiator. Intel's documentation says they see no discernible difference in CPU usage between their NICs and a software initiator, and that their NICs with the software initiator are the better choice since that doesn't break the standard OS model (whatever that means).

Who is correct? Have any of you done any testing at all in the real world with these scenarios?

bort
Mar 13, 2003

This isn't real-world, but I'd bet it's a rare initiator that is so badly CPU-bound that it needs to offload iSCSI processing, given the number of cores in processors these days. I also trust my kernel more than Broadcom's kernel modules/driver stack.

Dell told me they've seen performance degrade using iSCSI offload with EqualLogic infrastructure, to further muddy the waters for you.

This goon upthread is a [sarcasm]huge fan[/sarcasm] of Broadcom's offload engine.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]
Aye, Broadcom iSOE is total junk; even EQL support agreed with me.

As for TOE et al., I never noticed any difference on any of my servers.

Muslim Wookie
Jul 6, 2005
I've only deployed iSCSI offloading within two supercomputer-level installations, and there it was carefully planned for, with the correct hardware tested and purchased. Anywhere else it's a complete waste of administrative time and complexity; forget it.

bort
Mar 13, 2003

Consider, too, that if Intel were able to squeeze a teeny bit of performance improvement out of it, they would sell the hell out of it and tell you it made coffee, too. Saying "it's just as good" doesn't make anyone wanna buy anything.

Hok
Apr 3, 2003

Cog in the Machine
It's pretty simple to run tests with and without iSCSI offloading enabled; if you're concerned, do that and go with whatever gives you the best results.

In my experience offload isn't a good thing. Especially with Broadcom cards.
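
To make Hok's suggestion concrete, here is a minimal A/B harness in Python. It assumes fio is installed and that /dev/sdX is a scratch iSCSI LUN you can safely hammer (both are assumptions, adjust to your setup); you toggle the initiator (software vs. iSOE) out of band between runs and compare CPU cost against IOPS.

    # Hedged sketch: run one fio pass per initiator configuration and
    # report IOPS alongside CPU burned. /dev/sdX is a placeholder.
    import json
    import subprocess

    def run_fio(label, device="/dev/sdX"):
        # 60s of 4k random reads at queue depth 32, JSON output for parsing
        out = subprocess.run(
            ["fio", "--name=" + label, "--filename=" + device,
             "--rw=randread", "--bs=4k", "--iodepth=32", "--direct=1",
             "--ioengine=libaio", "--runtime=60", "--time_based",
             "--output-format=json"],
            capture_output=True, text=True, check=True)
        job = json.loads(out.stdout)["jobs"][0]
        iops = job["read"]["iops"]
        cpu = job["usr_cpu"] + job["sys_cpu"]   # percent CPU used by the job
        print(f"{label}: {iops:.0f} IOPS, {cpu:.1f}% CPU")

    # Run once with the software initiator, switch to iSOE, run again:
    run_fio("software_initiator")

If the CPU figure barely moves between configurations, Intel's claim holds for your workload and the offload buys you nothing but driver risk.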

Internet Explorer
Jun 1, 2005

Echoing everyone else. Not worth the trouble it will cause.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Does ZFS on BSD handle replication as gracefully as on Solaris? I'm looking for a migration path from our old OpenSolaris boxes to something more supportable.

Nomex
Jul 17, 2002

Flame retarded.

marketingman posted:

I've only deployed iSCSI offloading within two supercomputer-level installations, and there it was carefully planned for, with the correct hardware tested and purchased. Anywhere else it's a complete waste of administrative time and complexity; forget it.

Just out of curiosity, why did you use iSCSI at the supercomputer level, rather than FC or FCoE?

Nomex fucked around with this message at 17:32 on Jan 21, 2012

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Nomex posted:

Just out of curiosity, why did you use iSCSI at the supercomputer level, rather than FC or FCoE?
Given the supercomputer bit, probably iSCSI over InfiniBand.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]
:allears:

Muslim Wookie
Jul 6, 2005

Misogynist posted:

Given the supercomputer bit, probably iSCSI over InfiniBand.

Hole in one. Custom NetApp configuration using InfiniBand...

Edit: For more clarity, there was already a significant InfiniBand infrastructure and culture in place, so engineering at NetApp got involved for a custom build.

Muslim Wookie fucked around with this message at 03:02 on Jan 23, 2012

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]
Has anyone heard of Avere Systems? Looks like some NAS accelerator...?
Apparently fairly recent (~3 years old), founded by a bunch of former NetApp, Spinnaker and Compellent head honchos: http://www.averesystems.com/AboutUs_ManagementTeam.aspx
Heard about it around 2010 (Siggraph?) but never got around to reading up on them...

szlevi fucked around with this message at 10:11 on Jan 23, 2012

Intraveinous
Oct 2, 2001

Legion of Rainy-Day Buddhists

Vanilla posted:

So speaking quite honestly an NX4 box is basically as low as it gets and is very old. Support is likely the same.

VNX support is much better, VMAX / Symmetrix support is a country mile beyond both of those. The VNX is a world away from the NX4 - the NX4 is from what, 2006?

Compellent is more in the VNX range, so you're talking a different EMC ball game. Dell are doing great with Compellent, but there is one teensy weensy problem:

Compellent is still a 32-bit OS, which means the maximum cache is 4GB. This is a 'welcome to 2003' roadblock for Compellent, and likely less cache than you'd use today. That's the kind of cookie that isn't going to get solved without a painful software upgrade and the usual 'buy a ton of memory' offering (assuming you can upgrade a 32-bit array to a 64-bit one - likely you are buying the last lemons off the truck).

Other arrays, such as the VNX and NetApp arrays, can offer you far more cache on the controller, and also through the use of SSD drives or PAM cards. These make a world of difference.

We've got a couple of Compellent arrays that we installed last year. So far, I haven't seen any issues with the 4GB of cache. We're using them behind a 10 node ESX environment, as well as an Oracle RAC database cluster on IBM AIX.

So far, I've had nothing but good experiences. Latency has been kept in good shape, and the data progression has saved me a bunch of both space and time, since I don't have to do anything to move things.
I've been told that the upgrade to a 64-bit OS will be coming later this year, and shouldn't cause any more headaches than a normal controller software upgrade. The series 40 controllers already came loaded with 6GB of RAM, so if RAM is used as cache, it should be possible to see an immediate increase. Since they're just commodity Supermicro Xeon servers, adding cache beyond that should be as simple as adding memory.

I know that there is a cache card currently, and if they're not using main memory as cache right now, then it would take a hardware upgrade to use more than 4GB of cache.

It seems to me that the "They're 32BIT!!! Only 4GB of CACHE!!!!" line has become a rallying cry against Compellent mostly since NTAP finally got 64-bit code running last year. That said, I haven't really seen any issues with having "only" 4GB of cache. Your workload may vary, of course.

Our biggest reason for going with the Compellent was that we have a very small staff (4) that is responsible for a very large number of systems (everything in the data center). We don't have the luxury of having a network admin team, a storage admin team, a windows team, a unix team, a VMware team, etc etc etc. We are all of those teams. Therefore, the Compellent having a bunch of features that are done for me, in the background, without me having to mess with anything, was a big selling point. I still CAN mess with stuff if I need to, but I can let the majority of it happen automatically.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Intraveinous posted:

It seems to me that the "They're 32BIT!!! Only 4GB of CACHE!!!!" has become a rallying cry against Compellent mostly since NTAP finally got 64 bit code running last year. That said, I haven't really seen any issues with having "only" 4GB of cache. Your workload may vary of course.

Not really germane to your argument, but NTAP has supported more than 4GB of RAM for quite some time. The 64-bit upgrade that you're talking about was to aggregates, not to the actual code base. It allowed larger pools of disk, but did not provide any additional overhaul of OnTAP to support larger memory pools, as that was not required.

Cache size is one of those questions that customers ask because it's a nice easy measuring stick number that you can use to compare two arrays, but it's really only a valid question in the context of a specific workload and the overall design of the controller. If your cache is appropriately matched to your disk on the back end, and the application on the front end, then you'll be fine.

That said, I have seen workloads that will absolutely thrash the 32GB per controller in the FAS6080 nodes, so there are certainly instances where more cache than 4GB would be requisite, at least on a NetApp controller. I'm not conversant enough with Compellent and how they use their cache to say, but I'd guess 4GB isn't enough for high-end workloads on a single node of their gear either.
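
NippleFloss's point that cache only matters relative to the working set can be put in toy numbers. The model below assumes uniform random access (real workloads skew hot, so it is pessimistic), and every input is an invented example; it's an illustration, not a sizing tool.

    # Crude model: hit ratio ~ cache / working set under uniform access.
    # All inputs are made-up examples, not measurements.
    def backend_iops(front_iops, read_frac, working_set_gb, cache_gb):
        hit = min(1.0, cache_gb / working_set_gb)     # naive hit ratio
        read_misses = front_iops * read_frac * (1 - hit)
        writes = front_iops * (1 - read_frac)         # writes land on disk eventually
        return read_misses + writes

    # 10,000 IOPS, 70% reads, 500GB of hot data: compare 4GB vs 32GB cache
    for cache_gb in (4, 32):
        print(cache_gb, "GB cache ->",
              round(backend_iops(10000, 0.7, 500, cache_gb)), "backend IOPS")

With a 500GB working set, going from 4GB to 32GB of cache only shaves the backend load from roughly 9,900 to 9,600 IOPS: when the hot data dwarfs the cache, neither array wins the cache argument.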

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]
It's kind of a moot point now that they're shipping the new 64-bit v6.0, but isn't this "limited" RAM what the controllers are using as cache? Because if it is, then having a much larger one *could* make a difference in certain scenarios.

szlevi fucked around with this message at 18:32 on Jan 24, 2012

FlyingZygote
Oct 18, 2004
Trying to decide between a VNXe 3100 and a FAS 2040. The unit will be used solely as the storage attached to two hosts running VMware Essentials Plus. Each is configured with 12x 600GB 15K SAS disks. I'm currently getting a better price from NetApp by about $2K.

Both seem to meet and exceed our basic requirements (1100 IOPS, 12MB/sec throughput, 600GB capacity). Our read:write ratio is about 2.6:1.

At this point, I don't believe I'm going to use compression, dedupe, or thin provisioning, because the disk savings are not worth the performance hit. However, these features are important in case we do need them. I'm not sure how much I'm going to use snaps vs. a software package like Veeam. Replication and expansion are not concerns at this point. I don't foresee that kind of growth.

Specific questions:
  • NFS or iSCSI?
  • Which is easier to install and manage?
  • Which unit would you choose, and why?
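
As a sanity check on those requirements, a rule-of-thumb spindle calculation suggests 12x 15K disks cover 1100 IOPS at a 2.6:1 read:write mix on most layouts. The per-disk figure and RAID write penalties below are generic textbook values, not vendor specs, and controller cache is ignored entirely.

    # Back-of-envelope host IOPS from 12 spindles at a 2.6:1 R:W mix.
    # 180 IOPS per 15K SAS disk and the penalty table are assumptions.
    DISK_IOPS = 180
    PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6/DP": 6}

    read_frac = 2.6 / 3.6          # 2.6:1 reads:writes ~= 72% reads

    for level, p in PENALTY.items():
        raw = 12 * DISK_IOPS                           # aggregate spindle IOPS
        host = raw / (read_frac + (1 - read_frac) * p) # writes cost p backend ops
        print(f"{level}: ~{host:.0f} host IOPS")

That works out to roughly 1,700 host IOPS on RAID10, 1,200 on RAID5 and 900 on RAID6-style layouts before any cache helps. Note that NetApp's WAFL largely dodges the parity penalty by writing full stripes, so the RAID-DP row understates a FAS.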

Syano
Jul 13, 2005
With all the bad press I have seen flying around recently concerning EMC, I would default to the NetApp on that basis alone. The cheaper pricing is just icing on the cake.

j3rkstore
Jan 28, 2009

L'esprit d'escalier
Just spinning up my new FAS2040; it's amazing:

:3:

evil_bunnY
Apr 2, 2003

FlyingZygote posted:

Trying to decide between a VNXe 3100 and a FAS 2040. The unit will be used solely as the storage attached to two hosts running VMware Essentials Plus. Each is configured with 12x 600GB 15K SAS disks. I'm currently getting a better price from NetApp by about $2K.
I'm gonna lol if you get the VNXe after reading this thread.

FlyingZygote posted:

At this point, I don't believe I'm going to use compression, dedupe, or thin provisioning, because the disk savings are not worth the performance hit.
Get a Dell MD then? Only half joking.

FlyingZygote posted:

Specific questions:
  • NFS or iSCSI?
  • Which is easier to install and manage?
  • Which unit would you choose, and why?
  • NFS or iSCSI? depends on array
  • Which is easier to install and manage? Haven't tried the VNXe, but the FAS is cake.
  • Which unit would you choose, and why? FAS because dedupe, WAFL, ONTAP is nice and that particular FAS is good for the money.

evil_bunnY fucked around with this message at 22:16 on Jan 23, 2012

Intraveinous
Oct 2, 2001

Legion of Rainy-Day Buddhists

Bluecobra posted:

I disagree. My definition of a whitebox is using your own case or a barebones kit from a place like Newegg to make a desktop or server. If Compellent chose to use a standard Dell/HP/IBM server instead, it would be miles ahead in build quality. Have you ever had the pleasure of racking Compellent gear? They have some of the worst rail kits ever, thanks to Supermicro. The disk enclosure rail kits don't even come assembled. It took me about 30 minutes to get just one controller rack-mounted. Compare that with every other major vendor, where racking shouldn't take more than a couple of minutes thanks to rail kits that snap into the square holes.

The Supermicro controllers were actually the easier part to rack. The disk enclosures are OEM'd by Xyratex, who make enclosures for a wide array of vendors. Info here (yeah, it's a few years old and things have likely changed): http://www.theregister.co.uk/2009/03/02/xyratex_sff_arrays/

It was definitely an annoyance at first, but once I'd done one, the others weren't that hard. I don't base the worth of something on how easy it is to rack.

EDIT: I was a bit behind in the thread and didn't notice that the SC 6.0 release had already been discussed.

Intraveinous fucked around with this message at 23:48 on Jan 23, 2012

some kinda jackal
Feb 25, 2003

Is Nexenta somehow optimized for SAN duty in some way that Solaris isn't? I guess what I'm asking is: other than the GUI and prepackaged tools, is there any advantage to running Nexenta over stock Solaris for an iSCSI target?

some kinda jackal fucked around with this message at 22:51 on Jan 23, 2012

Intraveinous
Oct 2, 2001

Legion of Rainy-Day Buddhists

NippleFloss posted:

Not really germane to your argument, but NTAP has supported more than 4GB of RAM for quite some time. The 64-bit upgrade that you're talking about was to aggregates, not to the actual code base. It allowed larger pools of disk, but did not provide any additional overhaul of OnTAP to support larger memory pools, as that was not required.

Cache size is one of those questions that customers ask because it's a nice easy measuring stick number that you can use to compare two arrays, but it's really only a valid question in the context of a specific workload and the overall design of the controller. If your cache is appropriately matched to your disk on the back end, and the application on the front end, then you'll be fine.

That said, I have seen workloads that will absolutely thrash the 32GB per controller in the FAS6080 nodes, so there are certainly instances where more cache than 4GB would be requisite, at least on a NetApp controller. I'm not conversant enough with Compellent and how they use their cache to say, but I'd guess 4GB isn't enough for high-end workloads on a single node of their gear either.

Yeah, you're correct: I wasn't saying that Ontap 8 being 64-bit was what allowed them to address more memory or more cache, just that since NTAP started using a 64-bit OS, the poo poo-slinging against Compellent for using a 32-bit OS has increased severalfold. Absolutely true on matching cache to back-end disk to workload, or you'll be sorry. I doubt most people who have workloads that would thrash 32GB of cache on a FAS6xxx would be looking at Compellent to begin with. For my environment, I was easily able to hit my storage IOPS and latency requirements with 3-4 shelves of 2.5" 15Krpm disks, and my size requirements with 1-3 shelves of 7.2K 3.5" 2TB disks.
We looked very hard at 3PAR, and couldn't justify the 4x price premium over the Compellent. There were nice features that I liked better on each. We also had NTAP come in, and while their kit was quite nice, and I liked the PAM cards and the price, it really came down to their resellers not listening, on multiple occasions, to what I told them I wanted and needed. They were trying to sell me a V3xxx array to stick in front of my existing EVA4400. While that's an interesting concept, they kept balking when I told them that I wanted this array to be able to stand on its own and handle the full load by itself. Being able to snap and dedupe and whatever else on EVA4400 LUNs would have been a nice value-add, but I couldn't get them to give me a straight-up quote for a FAS/Vxxxx and enough disk to handle the whole load by itself.

On the upgrade to Storage Center 6.0: I don't know if the memory in the controller is used as cache or not, but I suspect it's not. I know that there's a separate flash-backed cache card installed in the controller as well.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Martytoof posted:

Is Nexenta somehow optimized for SAN duty in some way that Solaris isn't? I guess what I'm asking is: other than the GUI and prepackaged tools, is there any advantage to running Nexenta over stock Solaris for an iSCSI target?
The sanity cost of dealing with Oracle's support, as well as whatever the licensing differential is. On the other hand, Oracle's is actual Solaris, while Nexenta is Illumos with a GNU userland.

Vulture Culture fucked around with this message at 00:45 on Jan 24, 2012

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

You won't pay a performance penalty for thin provisioning on the FAS2040, as it's a function of the way OnTAP does block accounting and falls out for free (much like snapshots).

Dedupe on the FAS is not inline, so the performance hit is only taken when the scheduled job runs, and that can be scheduled for off-peak times. There is a small performance hit on new writes to deduped volumes, as each new block needs to be added to the fingerprint DB, but it's very small (around 7% at the upper end in testing).

Compression is definitely a no-no on VMware datastores.

If you go with the FAS, I would definitely recommend NFS. Having your vmdks as WAFL files provides a number of benefits, and NFS is just easier to work with.

Full disclosure: I work for NetApp. But I was a customer for a long time before that, so I'm familiar with the trials and troubles of the day-to-day SAN admin.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

FlyingZygote posted:

At this point, I don't believe I'm going to use compression, dedupe, or thin provisioning, because the disk savings are not worth the performance hit.


Specific questions:
  • NFS or iSCSI?
  • Which is easier to install and manage?
  • Which unit would you choose, and why?

First, go NetApp, you will be able to get more help here. :)

Secondly, skip compression, but USE DEDUPE. You actually get better performance out of your NetApps with dedupe turned on, since the cache is dedupe-aware, so you can fit more blocks into your cache (on top of the space savings).

1) NFS, definitely. Even VMware is recommending NFS now. All of the VAAI tricks are just attempts to get iSCSI to where NFS already is. Here is the KEY REASON for NFS on a NetApp: when you get free space via dedupe, you can then use that free space for more VMs. If you go with a LUN-based iSCSI setup, all of your dedupe savings are wasted, since the hosts aren't aware of the free space - they can only see the LUN. Hosts connected with NFS see the whole volume, so they can take advantage of your deduped space savings (a toy calculation after this post illustrates the difference).

2) NetApp - the software is the best. System Manager (now OnCommand System Manager) is great.

3) NetApp, because it's easy, and they have the best software tools. The Virtual Storage Console (VSC) is a plugin for vCenter that hooks your NetApp in. Here's what it lets you do:

  • Backups / Recovery (via snapshots, very slick)
  • Set host networking/storage best practices (it audits your hosts, and will recommend AND allow you to push a button to set all of the settings to best practices (things like timeouts and packet size, etc))
  • Allow you to provision storage from the VMware console without going anywhere else (including best practices like auto-enabling dedupe, turning off automatic snapshots, setting NFS permissions, and adding the NFS datastore to ALL your hosts at once)
  • Monitor space usage
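
As promised above, toy numbers for the dedupe point: the same savings that show up as usable free space on an NFS datastore stay invisible inside a LUN, because the host only sees the LUN's logical geometry. The volume size, written data and dedupe rate below are invented for illustration.

    # Hypothetical 1TB volume, 800GB written, 35% dedupe savings.
    volume_gb, written_gb, dedupe_rate = 1000, 800, 0.35

    physical_gb = written_gb * (1 - dedupe_rate)   # blocks actually stored: 520
    nfs_free = volume_gb - physical_gb             # host sees the real free space
    lun_free = volume_gb - written_gb              # host still sees logical usage

    print(f"NFS datastore free: {nfs_free:.0f} GB")  # 480 GB
    print(f"LUN datastore free: {lun_free:.0f} GB")  # 200 GB

Same array, same dedupe, but over NFS the host can place another 280GB of VMs in space that a LUN-based setup would strand.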

Vanilla
Feb 24, 2002

Hay guys what's going on in th

FlyingZygote posted:

Trying to decide between a VNXe 3100 and a FAS 2040. The unit will be used solely as the storage attached to two hosts running VMware Essentials Plus. Each is configured with 12x 600GB 15K SAS disks. I'm currently getting a better price from NetApp by about $2K.

Both seem to meet and exceed our basic requirements (1100 IOPS, 12MB/sec throughput, 600GB capacity). Our read:write ratio is about 2.6:1.

At this point, I don't believe I'm going to use compression, dedupe, or thin provisioning, because the disk savings are not worth the performance hit. However, these features are important in case we do need them. I'm not sure how much I'm going to use snaps vs. a software package like Veeam. Replication and expansion are not concerns at this point. I don't foresee that kind of growth.

Specific questions:
  • NFS or iSCSI?
  • Which is easier to install and manage?
  • Which unit would you choose, and why?

- As said above, for VMware I'd go NFS over iSCSI every time. More flexible.
- Decide for yourself, I'm biased!

http://www.youtube.com/watch?v=4VfO_hAgCPQ - install
http://www.youtube.com/watch?v=S1HD-KglfYs - management GUI

Can't find 2040 vids. Make sure each vendor has given you a usable capacity figure that doesn't take into account features such as compression and dedupe, seeing as you're not going to use those features immediately.

As for the rap EMC has taken - which vendor hasn't? Plenty of good and bad threads on all vendors here, hardforum, arstechnica, etc.

Muslim Wookie
Jul 6, 2005

FlyingZygote posted:

Trying to decide between a VNXe 3100 and a FAS 2040. The unit will be used solely as the storage attached to two hosts running VMware Essentials Plus. Each is configured with 12x 600GB 15K SAS disks. I'm currently getting a better price from NetApp by about $2K.

Both seem to meet and exceed our basic requirements (1100 IOPS, 12MB/sec throughput, 600GB capacity). Our read:write ratio is about 2.6:1.

At this point, I don't believe I'm going to use compression, dedupe, or thin provisioning, because the disk savings are not worth the performance hit. However, these features are important in case we do need them. I'm not sure how much I'm going to use snaps vs. a software package like Veeam. Replication and expansion are not concerns at this point. I don't foresee that kind of growth.

Specific questions:
  • NFS or iSCSI?
  • Which is easier to install and manage?
  • Which unit would you choose, and why?

So let me stop you right there, because in your scenario there is no performance hit from dedupe or thin provisioning on a NetApp filer. With a NetApp you'd also be silly to use Veeam, because you'd be using at least twice the amount of space required to perform backups, though without knowing your rate of data change I can't give an exact figure. Further, there is absolutely no performance hit to having snapshots, restoring snapshots, or mounting snapshots as volumes and using them like real live data.

You'd absolutely go NFS, and the NetApp would be pretty easy to set up and never look at again, but I can't compare that to the VNXe, so take that as anecdotal...

Edit: Also, you'd be able to move CIFS file shares to the filer and leverage dedupe and performance instead of relying on a Windows file server, if that's a concern. You could even share the NFS datastore via CIFS at the same time if you wanted (some people do, to make it easier to dump ISOs etc. in there).

Muslim Wookie fucked around with this message at 12:07 on Jan 24, 2012

evil_bunnY
Apr 2, 2003

Vanilla posted:

Can't find 2040 vids. Make sure each vendor has given you a usable capacity figure that doesn't take into account features such as compression and dedupe, seeing as you're not going to use those features immediately.

As for the rap EMC has taken - which vendor hasn't? Plenty of good and bad threads on all vendors here, hardforum, arstechnica, etc.
1) This is kinda disingenuous: you want the raw capacity to matter somewhat, but do take into account what you'll pay to get the RAID level you'll actually use (if you say RAID5 I'm going to lol), and do take into account what dedupe/thin provisioning will get you (rough numbers sketched below).
2) Who complains about Netapp? Compellent?
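
Rough numbers for point 1: the spread between RAID levels on the 12x 600GB config in question is large enough to swing the price comparison by itself. The spare count and group layout below are assumptions, and real arrays reserve further space for snapshots and metadata.

    # Usable capacity from 12x 600GB with one hot spare (assumed).
    DISKS, SIZE_GB, SPARES = 12, 600, 1
    avail = DISKS - SPARES                            # 11 data-bearing disks

    print("RAID10 :", (avail // 2) * SIZE_GB, "GB")   # mirrored pairs: 3000
    print("RAID5  :", (avail - 1) * SIZE_GB, "GB")    # single parity:  6000
    print("RAID-DP:", (avail - 2) * SIZE_GB, "GB")    # double parity:  5400

A quote sized assuming RAID5 shows twice the usable space of a RAID10 layout from the same spindles, which is exactly why vendors love quoting it; compare quotes at the RAID level you will actually run.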

marketingman posted:

Edit: Also, you'd be able to move CIFS file shares to the filer and leverage dedupe and performance instead of relying on a Windows file server, if that's a concern. You could even share the NFS datastore via CIFS at the same time if you wanted (some people do, to make it easier to dump ISOs etc. in there).
If you can run Windows VMs and aren't bothered by the MS licenses, there are a few good reasons not to do this. It's not like you can't back them with a deduped datastore.

evil_bunnY fucked around with this message at 13:24 on Jan 24, 2012

Internet Explorer
Jun 1, 2005

I will just say: do not make a decision without the vendor bringing in the box for you to play with for two weeks or so. If you do not have a storage admin, ease of use is very important. I cannot speak to NetApp, as I have not had a chance to use their SANs. When I looked at them ~3 years ago, the UI was just as bad as everyone else's but Equallogic's.


evil_bunnY posted:

If you can run Windows VMs and aren't bothered by the MS licenses, there are a few good reasons not to do this. It's not like you can't back them with a deduped datastore.

Care to go into a bit more detail? I am very happy with the results of putting our CIFS shares onto our VNX.

Muslim Wookie
Jul 6, 2005
Yeah, I'm interested in your reasoning as well. I have some ideas; for example, if DFSR is required then it's not an option.

Or do you think having the same volume (ie files) served by NFS and CIFS at the same time is a problem?

Sorry, it's late at night and I'm not thinking too well.

evil_bunnY
Apr 2, 2003

Internet Explorer posted:

Care to go into a bit more detail? I am very happy with the results of putting our CIFS shares onto our VNX.
DFSR is one, virus scanning, not dedicating SAN ports to a particular app (or paying for data movers in EMC's case?), and local logs for troubleshooting.
There are clearly cases where you want to do this, but those are things I've had to deal with before. Also rights management issues with cmdlets (or the underlying classes), but this may be fixed now.

Internet Explorer posted:

I will just say: do not make a decision without the vendor bringing in the box for you to play with for two weeks or so.
This is solid advice, and not just for storage. The sales dudes can talk until they're blue in the face, but trialling stuff for even a couple of days will, more often than not, expose things you never knew you cared about.

marketingman posted:

Or do you think having the same volume (ie files) served by NFS and CIFS at the same time is a problem?
No, for stuff like ISOs it's actually awesome. Being able to download media straight to where your hypervisor can see it, from any Windows machine, is great.

evil_bunnY fucked around with this message at 14:33 on Jan 24, 2012

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

evil_bunnY posted:

1) This is kinda disingenuous: you want the raw capacity to matter somewhat, but do take into account what you'll pay to get the RAID level you'll actually use (if you say RAID5 I'm going to lol), and do take into account what dedupe/thin provisioning will get you.
2) Who complains about Netapp? Compellent?

If you can run Windows VMs and aren't bothered by the MS licenses, there are a few good reasons not to do this. It's not like you can't back them with a deduped datastore.

Why would you 'lol' at RAID5?

evil_bunnY
Apr 2, 2003

three posted:

Why would you 'lol' at RAID5?
Rebuild failures. It's happened to me twice before (once on a Dell MD, which was bad enough; the other time on a semi-old EMC unit), and it's mathematically very likely to happen during any storage system's lifetime.
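
The "mathematically very likely" bit is easy to show: rebuilding a RAID5 set means reading every surviving disk end to end, and at the classic 1-in-10^14-bits URE spec for nearline drives the odds of tripping over an unreadable sector get ugly fast. Enterprise 15K disks are usually rated far better, so this sketch is the pessimistic nearline case, with the disk count and sizes chosen for illustration.

    # Chance of an unrecoverable read error during a RAID5 rebuild,
    # assuming the common 1e-14 per-bit URE spec (an assumption).
    def rebuild_failure_prob(surviving_disks, disk_tb, ure_per_bit=1e-14):
        bits_read = surviving_disks * disk_tb * 1e12 * 8  # full read of the set
        return 1 - (1 - ure_per_bit) ** bits_read

    # 7+1 RAID5 of 2TB nearline disks: read 7 full disks to rebuild one
    print(f"{rebuild_failure_prob(7, 2.0):.0%}")   # ~67%

A roughly two-in-three chance of hitting a URE per disk loss is why RAID6/RAID-DP became the default for big SATA sets.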

three posted:

So I'm guessing you really hate RAID 50?
Not any more than I'd "hate" a RAID5 set with the same number of disks.

evil_bunnY fucked around with this message at 16:10 on Jan 24, 2012

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

evil_bunnY posted:

Rebuild failures. It's happened to me twice before (once on a Dell MD, which was bad enough; the other time on a semi-old EMC unit), and it's mathematically very likely to happen during any storage system's lifetime.

So I'm guessing you really hate RAID 50?

Muslim Wookie
Jul 6, 2005

evil_bunnY posted:

DFSR is one, virus scanning, not dedicating SAN ports to a particular app (or paying for data movers in EMC's case?), and local logs for troubleshooting.
There are clearly cases where you want to do this, but those are things I've had to deal with before. Also rights management issues with cmdlets (or the underlying classes), but this may be fixed now.

DFSR is something we both agree on. Virus scanning is available via McAfee, but you can also just have a server scan the share if that's not an option, so I don't see it as an issue in most instances, though it could be in some small and/or political offices.

What do you mean dedicating SAN ports to a particular app, and how does that relate to CIFS?

Local logs for troubleshooting what? Access errors? The same logs exist on the NetApp, but I can see an applications team being annoyed at not being able to access them easily... not really a showstopper IMO.

+1 on the RAID5 lol, that should stay well away from enterprise storage arrays.

Bitch Stewie
Dec 17, 2011

Martytoof posted:

Is Nexenta somehow optimized for SAN duty in some way that Solaris isn't? I guess what I'm asking is: other than the GUI and prepackaged tools, is there any advantage to running Nexenta over stock Solaris for an iSCSI target?

You know, in principle I love Nexenta. I don't know why, but I'm just wary about using it in production.

I'd be interested to hear some reasons why I'm being irrational.

(I specifically mean Nexenta, not the whole ZFS/dedupe/Oracle 7000 family type of unified storage).

madsushi
Apr 19, 2009

Baller.
#essereFerrari
Re: CIFS on or off the SAN - one big reason is that your NetApp or VNX isn't going to give you the advanced share/file reporting stats that Windows will if you run your storage through a Windows server. I like the idea of making a big LUN, deduping it, and then presenting that LUN to Windows and letting it serve out the data. Granted, most customers choose to just toss CIFS on the NetApp and forget about it, but the share reporting features of Windows are one thing to consider.
