|
Vanilla posted:I recall you mentioning isilon at some point in the past. Is it the cost that rules them out or a technical reason? Isilon has never been cheap, but now that it's carrying the EMC logo it would be even more expensive, not to mention the exodus of old hands from Isilon, the rash of issues in the past year or so I've heard of (from customers and ex-employees), etc.
|
# ? Nov 28, 2012 21:55 |
|
|
NippleFloss posted:I'd be absolutely shocked if they can get their stated 12GB/s out of only 24 SSD drives. Assuming the data is protected at all you're losing at least a couple of those to parity drives, which means each SSD is doing more than 500 MB/s. You might get that out of a consumer-quality SSD, but enterprise-class SSDs only hit around 300 to 350 MB/s. There just aren't enough disks to get you to those throughput numbers with reliable hardware. It's possible that they are using consumer-grade drives, but that would worry me, especially with a new vendor with no real track record. They make a lot of claims that border on magic, like 10 years of 1.2PB writes a week without any performance loss, which just doesn't fit with the characteristics of SSD drives as they exist right now, especially not consumer-grade drives. Could be true, but will they be around in 10 years to verify? I'd say they are probably using some sort of 8+2 raid sets, plus one or two hot spares... that gives you 2x10 drives, so yes, 500MB/s - that's not unheard of, and again, they claim that by having their own custom-designed (e.g. I know they have a small amount of memory built into every drive), custom-manufactured SSDs they can gain enough extra bandwidth to get there (10Gb/s) with raid etc. vs commercially available drives... who knows, could be true. quote:I know an E5460 box from NetApp can do about 3GB/s (this may be higher, theoretically; this is just the number I've seen when sizing for Lustre with a certain block size and stream count) in a 4U enclosure that includes redundant controllers and 60 7.2k 2/3TB NL-SAS drives. That'll get you around 80TB, give or take, with raid-6 and 2TB drives. I've got no idea on price though, since, as I said, I don't support these at all. Could be cheap or very expensive. It's probably less than the $8/GB raw that Nimbus gear lists at, but whether you need the extra capacity is another matter. 
I have to be able to feed 7-8 10gig-enabled people, that's ~8GB/s, and I have no overhead left... with disks it'd be ridiculously more expensive, not to mention power, maintenance, etc. We are also a Windows shop, although I'm open to a proper (redundancy/HA, transparent Windows security support, etc.) non-Windows solution... Capacity does not matter; around 2-3TB is fine, 5-6TB would be downright future-proof. It's only for this purpose and I'd do my backend scripting/linking kungfu to hide it in our DFS hierarchy. quote:Anyway, my point wasn't that all flash arrays are bad, I'm just trying to understand who is using them and why. If your requirements are for a very high throughput, low capacity solution in a small footprint then it might be the right move for you. SSD throughput is only about double spinning drive throughput, at best, but that might be enough difference to get to your magic number. I share your skepticism, hence my request for a demo unit... quote:My guess is that Isilon would be way too expensive and require too much gear to get to the 5 GB/s number he mentioned. I'd guess he'd be looking at 10 nodes, at minimum, to get to that number. Exactly.
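A quick sanity check on the drive math being argued here, assuming the guessed two 8+2 RAID sets plus spares (that layout is speculation from the post above, not a published spec):

```python
# Per-drive throughput a 24-SSD box would need to hit the claimed 12 GB/s.
# The two 8+2 RAID sets are an assumption, not vendor-confirmed.
claimed_mb_s = 12_000          # 12 GB/s expressed in MB/s
total_drives = 24
data_drives = 2 * 8            # two 8+2 sets -> 16 data-bearing drives

per_physical = claimed_mb_s / total_drives  # spread across every spindle
per_data = claimed_mb_s / data_drives       # what each data drive must sustain

print(per_physical)  # 500.0 MB/s
print(per_data)      # 750.0 MB/s -- well above the ~300-350 MB/s cited for enterprise SSDs
```

Either way you slice it, the per-drive number lands at or above what either poster thinks a reliable SSD of the era can sustain, which is the crux of the skepticism.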
|
# ? Nov 28, 2012 22:28 |
|
szlevi posted:...stuff... Cool, thanks for giving me the background. I'm genuinely curious to see how this stuff is getting positioned. Obviously long term I think spinning platters go away and the vast majority of storage is flash based, but it seems like the big storage players are taking a wait-and-see approach to all flash arrays. I'm just wondering if it's truly a niche market, and will be for a while, or if they're going to get caught off guard with a massive technology shift.
|
# ? Nov 28, 2012 22:39 |
|
szlevi posted:Just upgraded my NAS boxes (Dell NX3000s) to Server 2012, I'll test SMB3.0 with direct FIO shares again - I'm sure it's got better but I doubt it's got that much better... Post back here when you do this please, I'm interested to see what they did with SMB3.0.
|
# ? Nov 28, 2012 23:13 |
|
What's the budget, szlevi? Is an all-SSD Equallogic too pricey?
|
# ? Nov 29, 2012 03:36 |
|
szlevi posted:Yeah, that 'moving blocks'/tiering approach never worked, never will for this type of thing, I can tell you that already. Sorry, I shouldn't have said move in relation to blocks. Flash pool is a caching system. It doesn't do any tiering. Reads and overwrites are cached, but the flash pool is consistent with the disks. Just out of curiosity, why are you using CIFS? Why not mount a LUN instead? You can slap a dual 8 gig FC HBA in and pull way way higher throughput than using CIFS. How many clients are running at a time and what kind of budget do you have for this?
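For rough context on the FC suggestion: 8Gb FC's 8b/10b encoding leaves roughly 800 MB/s usable per port, so a dual-port HBA gets you around 1.6 GB/s aggregate. This is rule-of-thumb arithmetic, not a spec sheet:

```python
# Ballpark usable bandwidth of a dual-port 8Gb FC HBA.
# 8b/10b encoding means ~80% of line rate is payload: ~800 MB/s per 8Gb port.
ports = 2
usable_mb_per_port = 8_000 / 10   # 8 Gb/s line rate -> ~800 MB/s usable

aggregate_mb_s = ports * usable_mb_per_port
print(aggregate_mb_s)  # 1600.0 MB/s
```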
|
# ? Nov 29, 2012 06:33 |
|
szlevi posted:Isilon has never been cheap, but now that it's carrying the EMC logo it would be even more expensive, not to mention the exodus of old hands from Isilon, the rash of issues in the past year or so I've heard of (from customers and ex-employees), etc. Rash of issues would be news to me. I know of several customers throwing serious capital at huge Isilon clusters. In the 8 figure range.
|
# ? Nov 30, 2012 05:02 |
|
Amandyke posted:Rash of issues would be news to me. I know of several customers throwing serious capital at huge isilon clusters. In the 8 figure range.
|
# ? Nov 30, 2012 19:32 |
|
Is anyone here hosting Windows 8 roaming profiles on a Samba-based share? Are there any known issues that keep this from working?
|
# ? Dec 3, 2012 15:18 |
|
Where the hell does NetApp keep their MIBs? I tried downloading them from the website and it was about as confusing, if not more so, than trying to download a driver from Dell. With Cisco I was able to just download one MIB for every device they have and never have to worry about it ever again.
|
# ? Dec 3, 2012 18:23 |
|
szlevi, have you looked at Nimble? We use them in our mailstore environment, and it consistently outperforms our Compellent. Additionally, the pricing is amazing - no "extras", it's all included. At the rate we're going, we'll have 10 in production by summer, and we're hungry for more.
|
# ? Dec 3, 2012 20:21 |
|
Another Nimble buddy! Hey! EDIT: They have expansion cabinets now, and the pricing looks good.
|
# ? Dec 3, 2012 20:25 |
|
Rhymenoserous posted:Another Nimble buddy! Hey! Hey! Yes, they do! We're getting some for our main units. We're drooling at this stuff. We have separate head units depending on their purpose, project, etc. You know how that goes... oogs fucked around with this message at 20:48 on Dec 3, 2012 |
# ? Dec 3, 2012 20:39 |
|
HP's LeftHand boxes have version 10 released. Finally, a patch to the VSS provider I've been waiting for.
|
# ? Dec 3, 2012 23:29 |
|
Can someone ballpark what 15-20TB of Nimble storage would cost me, per HA cluster? My only requirements are 10GbE, NFS or iSCSI, deduplication, and mirroring.
|
# ? Dec 4, 2012 00:26 |
|
psydude posted:Where the hell does NetApp keep their MIBs? I tried downloading them from the website and it was about as confusing, if not more so, than trying to download a driver from Dell. With Cisco I was able to just download one MIB for every device they have and never have to worry about it ever again. MIBs are stored on the appliance in the /etc/mib directory. Just connect via CIFS or NFS and pull them off of the device. oogs posted:szlevi, have you looked at Nimble? We use them in our mailstore environment, and it consistently outperforms our Compellent. Additionally, the pricing is amazing - no "extras", it's all included. At the rate we're going, we'll have 10 in production by summer, and we're hungry for more. Nimble won't do the throughput he's looking for at the price/density he wants. Nimble's caching approach is great for random I/O where the working set is a small portion of the total data set (like e-mail and some OLTP). It's not very good for high throughput applications since the density per controller is pretty low, your SSD cache layer does basically nothing, and you're limited to the aggregated throughput of your SATA drives. YOLOsubmarine fucked around with this message at 01:21 on Dec 4, 2012 |
# ? Dec 4, 2012 01:13 |
|
NippleFloss posted:Nimble won't do the throughput he's looking for at the price/density he wants. Nimble's caching approach is great for random I/O where the working set is a small portion of the total data set (like e-mail and some OLTP). It's not very good for high throughput applications since the density per controller is pretty low and the your SSD cache layer does basically nothing and you're limited to the aggregated throughput of your SATA drives. At the end of the day, Nimble is really just a shelf of SATA disk. It's got your normal NVRAM for write-caching and SSD for read-caching, but your consistent writes are limited to your SATA disks. If your reads fit well into a read-cache situation (Nimble, NetApp's FlashCache or Flash Pool, etc) then the SSDs will help your reads, but otherwise it's still just a shelf of SATA disk. It's the same reason I'm always wary about Compellent: when the rubber meets the road, do you really want all of your production data on a small number of slow SATA disks? SAS disks are going to give you 6x the IOPS/GB of SATA.
|
# ? Dec 4, 2012 01:25 |
|
madsushi posted:At the end of the day, Nimble is really just a shelf of SATA disk. It's got your normal NVRAM for write-caching and SSD for read-caching, but your consistent writes are limited to your SATA disks. If your reads fit well into a read-cache situation (Nimble, NetApp's FlashCache or Flash Pool, etc) then the SSDs will help your reads, but otherwise it's still just a shelf of SATA disk. But you can get Compellent with SAS
|
# ? Dec 4, 2012 01:48 |
|
And a large number of them. Maybe they're only nearline?
|
# ? Dec 4, 2012 01:51 |
|
FISHMANPET posted:But you can get Compellent with SAS Many of the Compellent installs I see are with 6 SAS (15k) and 12 SAS 7.2k drives. My overall point was that even though there's a fast "tier" there, if you are doing anything substantial you are still limited by the 7.2k drives. That's why tiering is a dangerous game to play: because once you are outside of its capabilities, you are limited by the slower drives.
|
# ? Dec 4, 2012 02:11 |
|
Writes always go to the top tier, though. Data that's migrated to lower tiers only does so specifically because it's idle. You can also peg data to a particular tier of disk if you are paranoid. I don't think your fear is completely unfounded but I think practically it may be rather unlikely. If by doing something substantial, you mean filling the array and not growing it so that it can tier properly, then yes, that is indeed a dangerous game.
|
# ? Dec 4, 2012 02:15 |
|
We've been looking at nimble boxes too, but I was surprised at how expensive they were considering it's full of lovely SATA. We are looking at ~100TB of storage, and it came in more expensive than Compellent, VNX5500 & FAS3250 boxes with similar capacity - the other boxes take up more space but I've got much more confidence around the performance since they all have truckloads of 15K SAS & SSD caching. I'm concerned about how some of the workloads would perform on a Nimble like some of our OLTP/OLAP etc apps. I'm sure VMware/VDI would run pretty quick though.
|
# ? Dec 4, 2012 02:20 |
|
madsushi posted:Many of the Compellent installs I see are with 6 SAS (15k) and 12 SAS 7.2k drives. My overall point was that even though there's a fast "tier" there, if you are doing anything substantial you are still limited by the 7.2k drives. That's why tiering is a dangerous game to play: because once you are outside of its capabilities, you are limited by the slower drives. The tiers on the compellent are interesting - it's fun to watch the raid stripe on the "slow" side rebuild, wreaking havoc on other VMs that share those disks through other (random) stripes. NippleFloss - yup, that's exactly why we got ours. We like the Nimble because we deal with a small set of hot data that is heavy on random I/O, and a large set of data (old emails) that is usually inert. Our compellent can keep up with the random I/O if there isn't anything big going on in the background, but as soon as there's a rebuild, a few vmotions, or some automatic optimization, we see the effects spread farther and wider than we'd like.
|
# ? Dec 4, 2012 02:25 |
|
bort posted:Writes always go to the top tier, though. Data that's migrated to lower tiers only does so specifically because it's idle. You can also peg data to a particular tier of disk if you are paranoid. I don't think your fear is completely unfounded but I think practically it may be rather unlikely. If by doing something substantial, you mean filling the array and not growing it so that it can tier properly, then yes, that is indeed a dangerous game. The top tier in Compellent is basically a write-through cache, which I'm not a big fan of. You're acknowledging that you're going to do triple the work for any write, in the form of a write, then later a read and a re-write to slower disk. This is based on the assumption that you've got relatively idle time to perform that I/O, which is often, but not always, true. Likewise, I think that tiering via physically moving data is generally kludgy. The page sizes are generally much larger than actual transaction sizes. Compellent's page size is relatively small compared to the competition (512k, I think?) but you're unlikely to see a transaction larger than 64k, and for truly random work, which benefits most from the SSD tier, your transaction sizes are likely smaller. So you're wasting a lot of space and I/O moving data you don't actually need. It's also slower responding than read cache since it requires moving blocks rather than just reading them into memory. The eviction process to move data down a tier can be particularly problematic since at the same time your users are accessing data on the SATA tier, and pushing that up to the SSD layer, the SSD layer is trying to evict data to make room for the incoming data, and it's writing that out to the SATA layer. So the automated tiering creates conflict between a user workload and a process that is meant to improve the speed of the user workload. I guess I just have a hard time seeing what it's really good for. 
High throughput workloads will be bound by the SATA layer since you'll over-run your SSD capacity pretty quickly with those (this is a general problem with write cache). For random workloads I'd worry about the performance variance depending on where my data was coming from. I'd rather have consistent 10ms responses than some 1ms responses and some 50ms responses, which is what you get if you're not sure what tier you're getting data from at any given time. If I'm going to pin a workload to a specific tier then I'm not really tiering at that point anyway. Some of these objections are based on corner cases, and I know there are many happy users of these automated data tiering systems. I just think it's an inelegant solution since it creates excess spinning disk I/O, and the most expensive operation a disk array ever performs is I/O to spinning disk.
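To put a number on the page-size mismatch: promoting hot 8K blocks in 512K pages can drag along ~64x more data than is actually hot in the worst case. The page size is the one discussed above; the hot-block count and I/O size are made-up illustrative values:

```python
# Worst-case data movement when tiering promotes in large fixed pages.
page_kb = 512        # Compellent-style tiering page, as discussed above
io_kb = 8            # hypothetical random transaction size
hot_blocks = 1000    # hypothetical count of hot 8 KB blocks

# Worst case: every hot block lives on a different page, so each one
# drags a whole 512 KB page up to the SSD tier.
moved_kb = hot_blocks * page_kb
needed_kb = hot_blocks * io_kb
print(moved_kb // needed_kb)  # 64 -- 64x more data moved than is actually hot
```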
|
# ? Dec 4, 2012 02:53 |
|
Really good points. I'm not sure if this is consistent across the board, but at least on our Compellent, tiering only happens every 24 hours and runs as a batch job during the slow part of the day. The data progression shouldn't contend with production workloads, but there is the chance that an occasionally run job will have data on a higher latency tier when it shouldn't. The point is really to save money and not have idle data on expensive disk. It's been performing well for the way we use our data, but you've given me something to think about.
|
# ? Dec 4, 2012 03:07 |
|
bort posted:Really good points. I'm not sure if this is consistent across the board, but at least on our Compellent, tiering only happens every 24 hours and runs as a batch job during the slow part of the day. The data progression shouldn't contend with production workloads, but there is the chance that an occasionally run job will have data on a higher latency tier when it shouldn't. The point is really to save money and not have idle data on expensive disk. It's been performing well for the way we use our data, but you've given me something to think about. If it's performing well for you, I wouldn't rock the boat for a theoretical benefit that you likely won't notice.
|
# ? Dec 4, 2012 03:19 |
|
GrandMaster posted:We've been looking at nimble boxes too, but I was surprised at how expensive they were considering it's full of lovely SATA. We are looking at ~100TB of storage, and it came in more expensive than Compellent, VNX5500 & FAS3250 boxes with similar capacity - the other boxes take up more space but I've got much more confidence around the performance since they all have truckloads of 15K SAS & SSD caching. At the 100TB mark I'd be looking at a bigger vendor too. To me Nimble's products make sense at the medium business level, i.e. I need 20TB of storage for VMs, etc.
|
# ? Dec 4, 2012 16:32 |
|
Anyone have any 3PAR dealings? Their new 7x00 entry level arrays look like they're worth investigating.
|
# ? Dec 5, 2012 17:31 |
|
Anyone had problems pulling performance statistics off of their NetApps in WhatsUp Gold? I copied the MIBs to the MIB store and have all of the right SNMP credentials, but the only thing I'm getting is interface statistics. I have nothing about memory, CPU, or disk utilization.
|
# ? Dec 5, 2012 18:24 |
|
Apple SMB, ladies and gents:
|
# ? Dec 5, 2012 19:10 |
|
psydude posted:Anyone had problems pulling performance statistics off of their NetApps in WhatsUp Gold? I copied the MIBs to the MIB store and have all of the right SNMP credentials, but the only thing I'm getting is interface statistics. I have nothing about memory, CPU, or disk utilization. I didn't have any issues getting SolarWinds to query NetApp devices using the custom MIBs, though it's been a while since I set them up. Since OnCommand Performance Advisor is free now my customer uses that. Have you tried just running an snmpwalk at the base of the tree and seeing what it returns?
|
# ? Dec 5, 2012 19:28 |
|
I can vouch for OnCommand. I honestly don't know how I used NetApp hardware without it now.
|
# ? Dec 5, 2012 19:51 |
|
oogs posted:Hey! Wooo more Nimble users! I've got a pair of CS460s coming in and I feel like I'm more excited to have them in house than I should be.
|
# ? Dec 5, 2012 20:47 |
|
adorai posted:Can someone ballpark what 15-20TB of Nimble storage would cost me, per HA cluster? My only requirements are 10GbE, NFS or iSCSI, deduplication, and mirroring. Any specific IOPS requirements? Also, Nimble doesn't do de-dupe, just compression. Not sure if that matters to your use case (or how much it matters at all), but it bears mentioning. Our CS460s were pushing 6 figures each with 2.4TB of flash and four years of 4-hour support. If you don't need 60K+ stated IOPS you could get a CS240 for probably well under half what we paid. Feel free to PM me if you have any more questions or if you want me to reach out to my sales rep for sound figures.
|
# ? Dec 5, 2012 22:45 |
|
Beelzebubba9 posted:Any specific IOPS requirements? Also, Nimble doesn't do de-dupe, just compression. Not sure if that matters to your use case (or how much it matters at all), but it bears mentioning. Talking about IOPS without discussing what *kind* of IOPS is completely meaningless. What block size, read or write, how random is the workload? Saying "how many IOPS do you need, this system can do 60k" doesn't really say anything. I could do 1 million IOPS on just about any system if I can fit everything in cache. Or if my block size is 1 byte. I don't mean to single you out, this is just a pet peeve of mine with storage talk in general. Talking about IOPS without context is about as useful as asking someone how fast they can run and them saying "72".
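The point about context is just arithmetic: throughput = IOPS x block size, so the same "60K IOPS" number implies wildly different loads depending on the I/O size (the block sizes below are illustrative):

```python
# What 60K IOPS means in MB/s at a few common block sizes.
iops = 60_000
throughput = {kb: iops * kb / 1024 for kb in (4, 8, 64)}  # MB/s per block size

for kb, mb_s in sorted(throughput.items()):
    print(f"{kb}K blocks -> {mb_s:.0f} MB/s")  # 4K -> 234, 8K -> 469, 64K -> 3750
```

A 16x change in block size turns the same IOPS figure into a 16x change in required bandwidth, which is why a bare IOPS number tells you almost nothing about whether an array fits a workload.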
|
# ? Dec 5, 2012 23:00 |
|
NippleFloss posted:I don't mean to single you out, this is just a pet peeve of mine with storage talk in general. Talking about IOPS without context is about as useful as asking someone how fast they can run and them saying "72". You're totally right, I was just crudely trying to get at Adorai's workload, since it seems to me there are better values for the money than Nimble's SANs if you aren't going to leverage the strengths of their design, one of those being random write performance. I'll phrase my posts better in the future to avoid that kind of unclear language.
|
# ? Dec 6, 2012 00:05 |
|
Beelzebubba9 posted:Any specific IOPS requirements? The other alternative is to relocate an existing netapp to the location and buy more storage for it. Which brings me to a nice question for nipplefloss: is it worth considering (or even possible at this point) to get a PAM for a 2050?
|
# ? Dec 6, 2012 05:36 |
|
adorai posted:Which brings me to a nice question for nipplefloss: is it worth considering (or even possible at this point) to get a PAM for a 2050? No flashcache or flashpool on a 2050, unfortunately. The 2050 just does not have nearly enough memory to support them. You could see if your rep can track down a CPOC system as those generally see some pretty substantial discounts. A 2240 would be worlds better than a 2050.
|
# ? Dec 6, 2012 07:14 |
|
I have a 2240 and wuve it.
|
# ? Dec 6, 2012 07:21 |
|
|
adorai posted:Which brings me to a nice question for nipplefloss: is it worth considering (or even possible at this point) to get a PAM for a 2050? The 2050 is a dead box, unfortunately - no ONTAP updates anymore. In general, the 2xxx series doesn't have any PCI-E expansion slots, so the 2020/2040/2050/2220/2240 can't support FlashCache (PAM). I don't think it has to do with the memory; it has to do with the fact that they don't have the right slot. In addition, there was an issue with some of the older 3xxx series that prevented FlashCache from working after you upgraded to 8+. A 2240 isn't going to get you FlashCache, but it is going to be way faster than a 2050 is (in addition to all of the nifty 8+ features). 50% more RAM, faster CPUs, etc. I actually have a 2240 on my bench right now with a 10Gb card and it's very fast.
|
# ? Dec 6, 2012 08:28 |