|
complex posted:I assume you have installed the MPIO multipathing support for iSCSI thinger. http://technet.microsoft.com/en-us/library/cc725907.aspx

I have to say that Server 2008 iSCSI and MPIO performance is just plain better, even compared to the 2.07 initiator on Server 2003. Plus, Microsoft is finally supporting dynamic disks. Funny story: DPM 2007 requires dynamic disks (well, unless you really, really want to deal with creating all the partitions manually). Good going, Microsoft!
|
Also, on VMware and NetApp. RSM is only supported on iSCSI, I believe. Fiber support should be out shortly, but NFS is not in the picture just yet.
|
oblomov posted:Also, on VMware and NetApp. RSM is only supported on iSCSI, I believe. Fiber support should be out shortly, but NFS is not in the picture just yet.

Oh, and on the subject of NetApp and iSCSI: I have been running a 12K-user Exchange environment on a clustered NetApp 3020 with 10 shelves of 15K RPM disks, and it's running like a champ. Two iSCSI connections from each Exchange cluster node (2 Exchange clusters, 4 and 3 nodes) go to a pair of Cisco 3750 switches. Each NetApp head has 6 iSCSI ports set up as two aggregation VIFs trunked under a failover VIF. I built this setup 2.5 years ago and haven't had any major issues.
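
For anyone sizing something similar, the back-of-envelope math looks roughly like this. A minimal sketch: the per-mailbox IOPS and per-spindle figures below are assumptions I picked for illustration, not measurements from this environment.

```python
# Back-of-envelope Exchange sizing check. Every constant here is an
# assumption, not a measurement from the environment described above.
mailboxes = 12_000
iops_per_mailbox = 0.5        # assumed "average user" profile
shelves = 10
disks_per_shelf = 14          # DS14-style shelf
iops_per_15k_disk = 180       # conservative figure for one 15K spindle

required_iops = mailboxes * iops_per_mailbox
raw_spindle_iops = shelves * disks_per_shelf * iops_per_15k_disk

print(f"Required host IOPS : {required_iops:,.0f}")
print(f"Raw spindle IOPS   : {raw_spindle_iops:,.0f}")
print(f"Headroom           : {raw_spindle_iops / required_iops:.1f}x (before RAID/parity overhead)")
```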
|
oblomov posted:Oh, and on the subject of NetApp and iSCSI: I have been running a 12K-user Exchange environment on a clustered NetApp 3020 with 10 shelves of 15K RPM disks, and it's running like a champ. Two iSCSI connections from each Exchange cluster node (2 Exchange clusters, 4 and 3 nodes) go to a pair of Cisco 3750 switches. Each NetApp head has 6 iSCSI ports set up as two aggregation VIFs trunked under a failover VIF. I built this setup 2.5 years ago and haven't had any major issues.

Although I must say I have been disappointed with NetApp lately. The ability to add shelves and additional fiber loops to clustered heads is quite lacking; you can't do much live. Also, their sales engineers and salespeople over-engineer environments quite a bit. You have to call them out on that and negotiate them down.

Btw, my company does have some MD3000i arrays in remote offices and lab environments, and they are great little iSCSI SANs. You can also attach 2 more shelves of MD1000 to each MD3000i array. I am running 3 nodes (4x4 cores each) with about 100 VMs off one of these setups in a lab with no issues. Performance is obviously not the greatest, but it's a lab environment, so it's good enough.
|
Anyone have experience with LeftHand Networks, specifically their rebranded HP (or Dell 2950) appliances? I went to a couple of demos and their premise seems pretty slick. Kind of like 3Par but cheaper. I especially liked the ability to add/remove nodes almost on the fly.
|
So, nobody has any experience with LeftHand? Googling didn't turn up much either. I guess I'll ask them for some references and will put the product in the lab for some stress testing.
|
Maneki Neko posted:Shows how long it's been since I used SnapDrive.

While the share was necessary before SnapDrive 6 came out, you did not need a CIFS license. You could just go through the CLI and create the necessary share, as well as put permissions on it. You just did not have the option in the GUI.
|
Cultural Imperial posted:Q. Do I need a CIFS license on the storage system to run SnapDrive?

Another little-known fact: you do NOT need FCP to run NDMP over fiber.
|
Backing up or replicating (locally) large amounts of data: how do you guys do it? My new project will require me to back up/replicate/copy/whatever about 100TB of data to tertiary storage. I will already be replicating to a remote DR system, but I also want a backup or replication job to local storage. I ruled out NetBackup with VTL or tapes since that is really unmanageable with this much storage, and now I am trying to figure out what else is out there. So far, the best option seems to be SAN-vendor-based replication of the data to nearby, cheaper SAN storage. With NetApp, for example, I could take the primary 3170 SAN cluster and then SnapMirror or SnapVault it to a nearline SAN (basically a 3140 or something). It would be similar with, say, EqualLogic from Dell or EMC. Other than this sort of thing, which requires a bunch of overhead for snapshots, is there any sort of block-level streaming backup software that could be used (a la MS DPM 2007)? I haven't kept up with EMC recently, but their Celerra stuff looks interesting. Is anyone here familiar with it?
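
To put rough numbers on why I'm leaning toward array-level replication rather than a nightly backup job, here's the kind of sketch I've been doing. The daily change rate is a guess; swap in your own.

```python
# How long a daily incremental of a ~100TB data set takes to move at a few
# link speeds. The 2% daily change rate is purely an assumption.
total_tb = 100
daily_change_rate = 0.02
changed_gb = total_tb * 1024 * daily_change_rate

for label, mb_per_s in [("1 GbE", 110), ("4 x 1 GbE", 440), ("10 GbE", 900)]:
    hours = changed_gb * 1024 / mb_per_s / 3600
    print(f"{label:>9}: ~{hours:.1f} h to move {changed_gb:,.0f} GB of changed data")
```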
|
Vanilla posted:People usually use array based remote replication tools to replicate their data to another array at another site.

Yes, I plan to do that. I am going to have a set of storage with a bunch of data at one site and will replicate it (through SAN replication technology: SnapMirror, SRDF, whatever) to my hot DR site. In addition to that I need a third copy of the data locally for a higher level of protection (and with at least a somewhat delayed write, just in case).

quote:Backup is then often over SAN to VTL or straight to tape.

I need to make a backup or a replica of the data to localized storage. This includes a database (SQL, 3-4TB), a few TB (say 15) of index logs (non-SQL, full-text-search kind), and 60-70TB of flat files. Tapes won't work; there is too much to back up. I was thinking of doing snapshot replicas but wondered if there was a better way than the NetApp sort of SnapMirror/SnapVault (or the EMC/EqualLogic/whatever equivalent).

quote:What do you want to know?

How are the new Clariions compared to, say, NetApp or EqualLogic? I did not like the CX3 series much since it seemed limited in both management and features compared to the competition, but it seems that the CX4 has caught up to NetApp at least and passed it on some fronts (from a SAN perspective, not NAS).
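
On the "tapes won't work" point above, the arithmetic is pretty stark. A sketch, assuming LTO-4 class drives at native speed with no compression or multiplexing:

```python
# Full-backup window for ~80TB of DB + index + flat files going straight to
# tape. LTO-4 native rate assumed, no compression or multiplexing.
data_tb = 80
lto4_mb_per_s = 120

total_mb = data_tb * 1024 * 1024
for drives in (1, 4, 8):
    hours = total_mb / (lto4_mb_per_s * drives) / 3600
    print(f"{drives} drive(s): ~{hours:,.0f} hours for one full pass")
```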
|
Vanilla posted:Ok, any decent array can do this and will have the ability to take crash consistent copies of things like Oracle and Exchange.

Yeap, looking at various things now. I am quite familiar with NetApp (I run a few of their clusters right now) and much less familiar with EMC. Anyhow, appreciate the response, man.
|
Nomex posted:I may be a little late with this. You should look into a data de-duplicating solution for the backup and tertiary storage. Check out Data Domain. They can be optioned to mount as SMB, NFS, FC or iSCSI. I've had one that I've been playing with for a little while now. My 300GB test data set deduplicated down to 101 GB on the first pass. Speed is pretty good too: 3GB/min over a single gigabit link. As it just shows up as disk space, it's supported by pretty much every backup product you can think of too.

Data Domain is quite good; we are using it for some of this on other projects. Here, the tricky part is that I think I am just going to replicate snapshots over and not bother with backup software, since that would take forever. I don't think I can get, say, NetApp talking to Data Domain for snapshots. Unless there is some cool continuous-backup product out there that I am not aware of, tertiary storage of a different brand won't work here (I guess there are some storage virtualization products out there, but they are pricey). Also, NetApp does have dedupe, which is pretty good, but it kind of sucks for iSCSI due to the funky way NetApp does LUNs on top of WAFL.
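
Just to sanity-check Nomex's numbers (nothing vendor-specific here, it's plain arithmetic):

```python
# Sanity-check of the dedupe and throughput figures quoted above.
before_gb, after_gb = 300, 101
rate_gb_per_min = 3
gige_mb_per_s = 117          # rough practical ceiling of one 1GbE link, assumed

ratio = before_gb / after_gb
mb_per_s = rate_gb_per_min * 1024 / 60
print(f"First-pass dedupe ratio : {ratio:.1f}:1")
print(f"Ingest rate             : {mb_per_s:.0f} MB/s "
      f"(~{mb_per_s / gige_mb_per_s:.0%} of a single GigE link)")
```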
|
1000101 posted:The app owners will be pretty livid in many cases when you give them a crash consistent state when you restore a snap. That was my only jibe. He's going to expect it to be consistent should he need to restore. You can't depend 100% on transaction logs or an 'fsck' to bring things back to sanity.

Don't forget that EMC is also basically 1.5-2x the cost of comparable NetApp (talking Clariions here, forget about DMX). That said, to me, NetApp is having the same issue now. It's all filer-based, so in mid-level engagements, EqualLogic, LeftHand, Compellent, or say 3Par are offering a better, more future-proof design. I've got EqualLogic and LeftHand in the lab now, doing a vendor "play-off", and they are both much more manageable than NetApp can be. We have Operations Manager, DFM and FilerView (with some CLI love going) on the NetApp side to basically match the built-in tools from the two newcomers. The only reason I don't have Compellent in there as well is that they are small and my management would rather have support from Dell or HP. Also, LeftHand's functionality for iSCSI with snapshot reserve on write is pretty nifty (so is replicating those thin-provisioned snaps). That said, NetApp is still very versatile and allows us to have multiple services going to the same box. I just wish they got their act together on the software front, especially integrating their "cloud OS" with ONTAP.
|
Vanilla posted:Source? That's a bit of a wild claim. Is this based on one example?

That's based on the last 5-6 times we have purchased storage, anything from the NetApp FAS2000 series to the FAS6000 series and the comparable EMC hardware. I have yet to see a time when EMC was cost-effective. I could see the value in the really high-end stuff, to which NetApp would have to respond with their clustered OS instead of ONTAP.
|
1000101 posted:I can't see the report but I'd be curious to see the methodology.

I am with 1000101 here. That's pretty much what we saw. Once you throw in RAID-DP, thin provisioning and dedupe, EMC can't come close on utilization. Plus the older CX3 Clariions underperformed the newer NetApp boxes. I'm not sure about the new Clariions, but they are still more expensive than NetApp (and that was before some big discounts from NetApp). Now, if you are talking real high end, then yeah, IMO, a DMX will outperform a 6080. Although I would love to see a GX-based 6080 clustered system do its thing.
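
On the utilization point, here's the kind of rough comparison I mean. A sketch only: the disk counts, spare counts and group sizes are assumptions, and it ignores right-sizing and reserves entirely.

```python
# Usable capacity after parity and hot spares for a few common layouts.
# Ignores right-sizing, WAFL/aggregate reserve and snap reserve. Sketch only.
def usable_fraction(total_disks, spares, group_size, parity_per_group):
    groups = (total_disks - spares) // group_size
    data_disks = groups * (group_size - parity_per_group)
    return data_disks / total_disks

total, spares = 84, 4        # six 14-disk shelves, four spares (assumed)
print(f"RAID-DP, 16-disk groups : {usable_fraction(total, spares, 16, 2):.0%}")
print(f"RAID-5,  4+1 groups     : {usable_fraction(total, spares, 5, 1):.0%}")
print(f"RAID-10 (mirrors)       : {usable_fraction(total, spares, 2, 1):.0%}")
```

And that's before dedupe and thin provisioning, which only widen the gap in practice.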
|
1000101 posted:The awesome thing is; I can actually do that with one of these crazy virtualized storage backends as well. Regardless of whether I'm doing it EMC style or like the rest of the modern world, when performance sucks the solution is always to buy more disks. With everyone else's products though, I may not have to quite as soon.

Ok, I am a bit confused here. Unless you do a dedicated aggregate, you are sharing disk I/O across multiple raid groups on NetApp, at least as far as I know. Is there a way to force a volume/LUN to sit on a particular raid group, separate from everything else in an aggregate? If so, that would be awesome.

quote:Also keep in mind that I'm thinking about a market that spans from 50 employees on up to and including a good chunk of the 5,000 employee orgs. Anything larger than that, and people are more than happy to staff a bunch of excel experts to manage their storage. These guys are buying DMX for performance and lots of it.

Welp, my company is either 10x or 20+x the max size you quoted above (depending on US or worldwide view) and we have yet to need a DMX. Hell, we don't really have too many 6080 clusters throughout the US either. Now, we have sold DMX to our customers (we are also an IT shop from a certain standpoint, but that's not our core business) when they are hell-bent on EMC, but other than that, unless one has very specific apps, NetApp performance can more than cover what 99% of companies need, even ones much larger than 5K employees.

quote:None yet, most of my customers who would need that level of IO already have 1000+ spindles that they bought prior to the SSD shelves offered by EMC. It will be a while before they validate the shelves and put them in production. I like the look of new Sun boxes that front-end SATA with SSDs.

Now, that makes sense.

quote:I'd love to see this sort of thing more often though. It would be a hell of a thing to leverage with automated tiered storage.

Other than Compellent, does any vendor have automatic tiering built in? Sun kind of does it with the SSD/SATA hybrid storage on their newest stuff, but nobody else as far as I know. Now, NetApp, EMC and others have other solutions that can do tiering, but it's not the same. I would love to be able to buy a shelf of SSDs, add a bunch of fiber/SAS or SATA shelves, and shove it all into a single aggregate that would take care of tiering/caching/etc. dynamically by moving blocks to faster or slower storage as required.

One thing I wish NetApp would get in gear on is thin-provisioning management. It's terrible, pretty much non-existent. That's one thing that newcomers like EqualLogic and LeftHand do much better. Hell, even with DFM or Operations Manager, you can't get a good view of a system and figure out what's thin provisioned and what's not, or how much space remains at the volume or LUN level, etc. Meh.

quote:edit: Don't think I hate all things EMC, I'm a services guy and the product is great. I'm mostly passing on complaints from paying customers and their feeling on the matter.

I have a hard time disagreeing in many cases. I like DMX and Centeras, and I think the CX3s were a waste of money. The situation might have changed with the CX4, but VMware jacked up the price on that.
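
Since I'm griping about thin-provisioning visibility: the report I actually want is trivial to describe. A sketch; the volume names and numbers are made up, and I'm not pretending this is how DFM exposes the data, you'd feed it from whatever your tool can export.

```python
# The thin-provisioning report I want: provisioned vs. physically used,
# per volume and against the aggregate. The volume list is fabricated.
volumes = [
    # (name, provisioned_gb, used_gb, thin?)
    ("vol_exch_db",  4096, 2900, True),
    ("vol_sql_logs", 1024,  310, True),
    ("vol_vmware01", 8192, 5100, False),
]
aggregate_gb = 12_000

provisioned = sum(v[1] for v in volumes)
used = sum(v[2] for v in volumes)
print(f"Provisioned: {provisioned:,} GB ({provisioned / aggregate_gb:.0%} of aggregate)")
print(f"Used       : {used:,} GB ({used / aggregate_gb:.0%} of aggregate)")
for name, prov, use, thin in volumes:
    kind = "thin" if thin else "thick"
    print(f"  {name:<13} {kind:>5}  {use:>5,} / {prov:,} GB ({use / prov:.0%} full)")
```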
|
1000101 posted:You have it, as far as I know you'd basically create say a 20 drive aggregate, throw one volume on it with one LUN and be done with it.

Oh, never mind then, I see where you are going. I have had to do single-purpose aggregates (still multi-LUN) for our Exchange and SQL implementations: one aggregate for DBs, one aggregate for logs (different filers), SnapVault to a separate filer. There were enough users and DBs in multiple clusters to warrant the separation.
|
I've been looking at LeftHand and EqualLogic both, and have both set up in my lab. The LeftHand software appears more flexible; EqualLogic seems more "rack-dense". EqualLogic is a bit less expensive since Dell can discount quite a bit. Both appear to work just fine. LeftHand in particular has FlexClone-type technology, thin replication, well-done thin provisioning, multi-target replicas and a couple of other things. The downside is that for redundancy you are looking at a somewhat large number of boxes. On the other hand, you are not wasting space on spare drives.

EqualLogic offers better reliability in a single SAN, and perhaps higher throughput (3Gb per shelf vs 2Gb for LeftHand). I think I/O is a bit higher per EqualLogic controller as well; however, considering you would normally have 2 LeftHand shelves for each EqualLogic controller, it's a wash. Power/cooling I am not sure about: LeftHand has more boxes but each is more efficient. EqualLogic does offer the 5500 series box with 48 drives in it, which is great if you just need a bunch of space (think Sun with its x4500 series). With EqualLogic, if you lose the box, you lose all volumes striped across it; hence dual controllers, dual everything pretty much. You would basically have to fry the power supply (so it short-circuits the system) or something. The LeftHand approach is much more flexible: they have multiple network RAID levels, and if you lose one box, you are still good to go. One thing I am not sure about with LeftHand is how the SAN will handle itself if there is a power/network failure. With EqualLogic, we had power pulled a couple of times in the lab while running VMware with Exchange 2007 CCR clusters inside, and everything came back just peachy. With LeftHand we are going to test this next week.

Also, I think EqualLogic will provide you better support than LeftHand, or at least I've had better support and responsiveness with Dell than HP (both premium support agreements: whatever the new Gold contract is called for Dell, and the HP equivalent).

Edit: One thing I forgot about EqualLogic which kind of sucks is that they can't do multi-hop or multi-target replicas. This sucks and means you can't replicate between, say, two DR sites and then do a separate snapshot/replica to tertiary "backup" storage. That said, the boxes appear very solid. I don't buy the BS about reliability on these. We've been running them in the lab for 60 days now and have a fairly decent VMware lab deployment on top of them. Seems fairly fast and very solid (and that's with constant switch drops and power interruptions due to it being a lab environment).

Intrepid00 posted:Anyone have experience with Lefthand boxes? We are looking to buy our first SAN for our office. We currently have like 12 servers we are going to consolidate (yeah, I've heard it too much) down to two. We are without a doubt going to use Hyper-V from MS. We use it now for development platforms and find it fast and reliable (as long as you don't put Windows XP on it,
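
On the "more boxes but no wasted spares" trade-off, here is roughly how I've been comparing usable space between the two. Treat it as a sketch: drive counts, RAID layouts and the Network RAID-2 overhead are just my lab assumptions.

```python
# Usable space: two LeftHand nodes running Network RAID-2 (mirroring across
# nodes, RAID-5 inside each) vs. one EqualLogic member on RAID-50 with spares.
# Drive counts and RAID choices are assumptions for my lab gear.
drive_gb = 300

lefthand_node_usable = (12 - 1) * drive_gb       # 12 drives, RAID-5, no spare
lefthand_usable = 2 * lefthand_node_usable / 2   # network mirror halves the pair

equallogic_data_disks = 16 - 2 - 2               # 2 hot spares, 2 parity disks
equallogic_usable = equallogic_data_disks * drive_gb

print(f"2 x LeftHand nodes (24 drives) : ~{lefthand_usable:,.0f} GB usable")
print(f"1 x EqualLogic (16 drives)     : ~{equallogic_usable:,.0f} GB usable")
```

It comes out close to a wash for me, which is why the decision keeps coming down to software and support rather than raw capacity.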
|
Chucklehead posted:Wow you guys this is an awesome thread.

NetApp is solid. The reason I am looking at other stuff is that a few newer solutions are more flexible from both a management and a downtime perspective. Keep in mind, NetApp is a dual-controller cluster design (either in separate boxes or the same chassis) with a traditional fiber backbone running to the shelves (you can have multiple loops). Thus, you are facing downtime if you have to, say, remove a shelf in the middle of a loop, or for a few other reasons. There is nothing wrong with this design, but between a few frustrating sessions with NetApp support (usually they are very good) and downtime for maintenance, I started looking around. That said, I am not entirely sure there are better solutions. NetApp is good, and especially if you need to do not just SAN but also CIFS and NFS, it's really good. Just be prepared for some CLI love: FilerView sucks, and DFM/Operations Manager are pricey and still not completely there compared to EqualLogic/LeftHand. IMO, HP is the devil :P.
|
Intrepid00 posted:Equalogic box we found had 4x the maintenance costs of Lefthand. We also found Equalogic to cost a lot more, but you may be larger and Dell isn't dicking you on the price.

Well, for us EqualLogic pricing is very good (quite large company). EqualLogic is actually cheaper than LeftHand for the solution I was looking at. Service cost was very reasonable (we always buy 24x7 premium support), comparable with LeftHand (after discounts) or NetApp (after heavy discounts).

quote:It would appear so as well to me that this is true. The one thing I did notice is that the Equalogic box will not do replication on the fly with two clusters always being exactly the same. The Equalogic box will only do scheduled.

Not sure what you mean. Do you mean you can't just kick off replication but need to schedule it? Haven't tried it; thanks for the tip, will test it in the lab.

quote:The Lefthand boxes tend to have fewer drives and will not max out the 2Gb limit of the Equalogic box. I don't even think the PS5000 will get above 2Gb unless you are throwing 15K RPM SAS at it.

Yeap, I doubt either will max out 2Gb even with 15K RPM SAS (that's what we specced).

quote:Lefthand is owned by HP, but they are run as a separate company. We would be calling Lefthand for support, not HP. Same goes for Equalogic. Also, Dell support can be pretty hit or miss even with Gold depending on the time of day you call.

Well, this is the setup for now, but you can bet that HP is working on integration. Our usual Dell calls are very good since we have account managers we can call and bitch at if things don't go the right way, and then things get escalated. If you don't mind answering, how big is your LeftHand setup? I am kind of leaning LeftHand over EqualLogic at this moment (but NetApp may win out). I'm trying to get some large-customer references out of LeftHand (basically our management wants to see a large rollout, understandably), and I am not sure if LeftHand has many decent-sized customers (beyond one large gov customer doing archive storage on it).
|
Intrepid00 posted:I refuse to believe a company that sells ink at $40 bucks a pop is the devil

In both of these cases, the situation will not last long, IMO. It's just not an efficient way of doing things, especially for HP (it's not like LeftHand is high-end storage for HP).
|
Intrepid00 posted:On the other hand we are just around a 50-user software company. We fit in more with the Small Business.

Ahh, yeah, at that size it's much harder to drive Dell down on price. We literally spent a couple million this year with them (actually, maybe even a bit more than a couple), so we have a huge negotiating position with Dell. We just need to mention the magic words "Hewlett Packard" and the price goes down.

quote:I believe you have to schedule it (5 min being the minimum), and you can't have the replication actually be a mirror even if you want to. The Lefthand box, meanwhile, has varying levels of network RAID and replication.

Makes sense, and yeah, I found LeftHand to be much more flexible software-wise. I am just basically not sure if I can trust it on a large scale. That's my fear. Hence the search for references; hopefully LeftHand can come through on that, since it would give us another good choice.

quote:Only that monster 45 drive one that Equalogic has might, but that's more for storage capacity than the performance you are looking for.

Yeap, that's an interesting box EqualLogic has there. Kind of like the Sun 48-drive server. I would think that 3Gb is quite enough there since it's really bulk storage; they don't have enough controller performance to really drive this until next year's upgrades (whenever those come).

quote:Unfortunately I can only tell you that you will most definitely have a much larger SAN network. You may also want to look into it, but I think Equalogic has a controller cap while the Lefthand boxes have almost none.

I appreciate the discussion; it's hard to find a lot of info on the web comparing EqualLogic and LeftHand. Stability with either is what I am worried about at larger scales. Scalability-wise both will be similar, because with LeftHand you are looking at double the boxes (at Network RAID-2), and according to LeftHand, around 20-25 nodes you are basically tapping out the expandability, which by the time you are done is comparable to a group of 12 for EqualLogic (which is supposed to get expanded next year). Now, you can argue that the cluster side for EqualLogic is only limited to 8 boxes inside each group, but you can still copy stuff between clusters. Dunno, each has pluses and minuses. I have the feeling we'll go NetApp for the moment though and then do a much longer eval next year. That will also let us check out, say, 3Par.
|
Well, I would be real careful connecting all that stuff to a 2050. IMO, this is either a multiple-NetApp-cluster or a 3140 (and maybe higher) type of setup. Here is the thing to think about: you do NOT need FC connectivity from your servers to the filers. I am running SQL Server, Oracle, Exchange, file services, VMware (a couple dozen hosts at one site here), and a whole bunch of other crap. The number of users ranges from a couple thousand (for file services) to about 14K for Exchange, to a 30K-user SharePoint instance running off SQL, to multiple heavy production DBs. Guess what: it's all iSCSI, or in the case of Oracle and one of the VMware clusters, NFS. Fiber is overrated for the vast majority of environments, and for an org your size and what you are looking to do, fiber is just overkill.

If you go with a single-SAN scenario (and even with multiples), you just have to make sure to lay out network ports (or fiber if you so desire) appropriately, get enough shelves in different aggregates (i.e. you don't want to put GroupWise on the same disks as, say, your financial DBs) and generally use common sense. Replication, btw, works, but it's not a panacea. SnapMirror has a fairly sizable impact on CPU (do it at night). You have to make sure to get the appropriate software to quiesce your apps; I have no clue, for example, if NetApp offers GroupWise support. For the VMware servers, do you mean each one will have 8x1Gb connections for iSCSI, or is this 8x1Gb total among the eight? You can use NFS, btw, with NetApp and VMware; there are some pro/con scenarios for either iSCSI or NFS.

Chucklehead posted:Edit: As usual I wasn't quite specific enough to start out with. Here is what I have envisioned so far:
|
rage-saq posted:Sounds like someone is wishing they got a 6 Hour Call To Repair warranty instead

Rage, can you clarify on this? Is this through HP or LeftHand support? Personally, I think I am going to go with NetApp after all. EqualLogic's lack of dual-target replication may just be a deal breaker, and I do not want to deal with the sorts of support issues that were mentioned with the LeftHand/HP combo.

Brent78, the PS5000XV seems to be pretty good as long as you understand what you are and are not getting. I don't see a big deal in having 2 LeftHand boxes doing exactly the same thing as 2 EqualLogic boxes. LeftHand wins a bit on redundancy (how often do SANs burst into flames, after all) and software flexibility, and EqualLogic wins a bit on performance, rack space, power consumption and, mainly, on support. If I were sure of LeftHand support I would go with them, but I am not, and I haven't gotten references from them on a large customer install base.

Chucklehead, as far as different filers go, keep in mind the SAN you are getting, number of ports, number of shelves, etc. Also, there is a good chance that you will be taking the whole thing down for maintenance at some point. Realistically, as long as you have enough ports/bandwidth and SAN controller I/O throughput, disk I/O will be what matters for multiple types of workload profiles. Say you have GroupWise, SQL Server and a VMware cluster to put on a SAN. You could, say, get a single 3170 SAN and enough shelves to form multiple aggregates to serve all of this, or you could get two different 3140 SANs instead, one for GroupWise and SQL and another for VMware. You won't have to have as many aggregates per NetApp active-active cluster, and you will have more flexibility during upgrades/maintenance and actually more connectivity options. Price-wise, I am not sure how this would come out; dual 3140s will probably be more expensive though.

Also, IMO, 10 ports per ESX host is kind of overkill. That's a lot of cost right there. What do you really need? You need 1 for control crap (vCenter, etc.), 1 for VMotion, say 2-3 for iSCSI and a couple for the front-end. Shouldn't need more than 6-8 per host, IMO. This is 20GHz you said, so dual quad 2.5? I doubt you will saturate more than 2Gb from, say, iSCSI or NFS. You can go FC, but that's just overkill on cost, IMO, for your situation.
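
To make the port-count point concrete, here's the split I had in mind; the exact allocation is just my suggestion, adjust it to your environment.

```python
# One possible NIC layout per ESX host. A suggestion, not a requirement.
ports = {
    "service console / management": 1,
    "VMotion": 1,
    "iSCSI (or NFS) storage": 3,
    "VM front-end traffic": 2,
}
for role, count in ports.items():
    print(f"{count} x {role}")
print(f"Total: {sum(ports.values())} x 1GbE ports per host (vs. the 10 proposed)")
```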
|
rage-saq posted:Well in a few weeks their support will be through HP.

Hmm... I will hit up my rep on this. The other route is frankly unacceptable. One of the big reasons for even looking at something other than NetApp is less downtime and quicker potential fixes. If it takes 2 days to repair a SAN, that's simply unacceptable. The fact that their support is going to be through HP is what concerns me most, actually; IMO, HP support is simply not up to par considering their pricing.
|
rage-saq posted:HP support options like 4 hour 24/7 etc is just as good as anyone elses these days on Standard equipment, if you really need to have insane support you buy 6 hour CTR. LeftHand should be getting shelved under Standard equipment so the same Proliant level warranty options will apply.

Yeap, we run various NetApps, from older FAS270s to the newer 6080 beasts, so I know all about the Enterprise support.
|
Chucklehead posted:My experience is that the sales guys are going to lie right to your face and hope you are ignorant and/or lazy and don't call them on their bullshit.

Also, what are you looking to do with the SAN, how proficient are you with SAN technology in general, and how much time will you have to manage said SAN? For example, NetApp can do NFS, CIFS, and Fibre Channel in addition to iSCSI, so if any of that looks attractive, LeftHand simply can't do it. Price-wise, the difference is not going to be large either way. LeftHand has the better interface and more flexible design, while NetApp has the proven architecture/support/stability. You can lift and replace NetApp controllers to move your data to bigger boxes; with LeftHand you can just add more units (up to say 20 or so). I've got both EqualLogic and LeftHand in the lab, and both are pretty good. My company has lots and lots of NetApp storage, so I deal with it on a daily basis (provisioning, monitoring, configuring, etc.). Hell, that Sun hybrid storage looks pretty nice too. A friend of mine got Compellent and that's also pretty solid (small company though, so who knows what will happen down the road).
|
Intrepid00 posted:I'm not too thrilled with running NFS or any file service right off my storage. I'd like to minimize its attack surface.

There is no attack surface with NFS if the VLANs are non-routable. It's much the same as iSCSI, really, and some apps behave better on it (Oracle, VMware with NetApp). For fiber, I think the transmission protocol is better, but then you can (or will be able to) do Fibre Channel over Ethernet, I guess. Also, there are some heavy-data scientific/engineering apps where the heavier iSCSI protocol won't perform as well, IMO. That said, yeah, if you don't have a fiber investment now, it's most likely an unnecessary option.
|
rage-saq posted:They are called Thatchers.

Funnily enough, I just had Sun come in and do a dog-and-pony show on the new 7000 series storage. The hardware seems interesting, but the software seems very, very raw. I think we are going to pass on the Sun option.

On LeftHand, I dunno, I would say I disagree, at least to the extent that I have been able to test in the lab. I've got a little 3-node cluster in the lab and so far it behaves as advertised. We put some VMware, Exchange, and SQL volumes on it, turned nodes off to test Network RAID-2/3, dropped power on the whole cluster, hit it with Iometer, Jetstress, etc., and so far it has been behaving fairly well. Their software is also simply awesome. I do a lot with NetApp SANs, and while they are pretty stable and "Enterprise" grade, we still get little glitches, etc. I've seen EMC drop the ball too. I don't think anyone is immune, including HP.

Hell, everything I've seen from HP concerning pre-sales and post-sales support sucks. My company recently transitioned to HP-based desktops/laptops. Our reps are not responsive, support is kind of bad, etc. We have top-tier support and we are a largish company with 50K people in the US alone and over 100K worldwide. This is a big and fairly important contract, I'd think, even for a company as large as HP. At this point, the main reason my management does not want to consider LeftHand seriously is that they are now HP. Our desktop/laptop issues may not translate to SANs or servers, but it still tells you something about the company in general and their approach to the Enterprise.
|
1000101 posted:You won't; but at the same time if your head goes you lose access to all of your data.

I don't think the 2020 can do clustering; you have to pony up for the 2050 for that. At least that's what I recall from when we last got a few of each for remote offices. The 2050 is basically almost 2x the size and has space for 2 internal controllers and 20 drives.
|
1000101 posted:It supports clustering. We had to upgrade one of our client's 2020's to support it but it does work. The only fault with the 2020 is that its basically a dead end platform. The 2050 is generally a much better fit for people and has a lot more expandability.

What do you do then, just get another 2020 and cluster the two? The 2050 has the capability to do clustering within a single chassis.
|
Catch 22 posted:What?!? Please give your definition on "Small SAN"?

$100K is smallish; I think anything up to say $150K is on the small side. To give you an example, we just paid about half a mil for a 6080 NetApp SAN with a whole bunch of FAS shelves totaling a few hundred TB, and that's really a mid-size SAN, not high-end, IMO, although it is heading toward the high end.

On the LeftHand, I am still testing it in the lab and it's pretty good from everything I am seeing. Just don't expect huge I/O: figure 1600-1800 IOPS per G2 node (SAS), so the max you can get is maybe 40K IOPS out of a cluster of 20-25 boxes.
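
The cluster ceiling is just multiplication, but for completeness (the per-node figures are lab observations, not a formal benchmark):

```python
# Rough aggregate ceiling for a LeftHand cluster of G2 SAS nodes.
# The per-node range is what I've seen in the lab, not a benchmark.
iops_per_node = (1600, 1800)
for nodes in (10, 20, 25):
    low, high = (nodes * i for i in iops_per_node)
    print(f"{nodes:>2} nodes: ~{low:,} - {high:,} IOPS")
```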
|
brent78 posted:Just wanted to post that I got a shelf of EqualLogic 5000VX set up in the lab and I'm very impressed with its performance. It's configured with 16 x 300GB 15k disks, active/backup controller and all 6 gigE connected to a pair of 3750's. Using jumbo frames and receive flow control as well. I'm achieving 200 MB/s writes with ease and it barely sweats with mixed random reads/writes. This shelf as configured was 40k, not the cheapest thing out there but on par with 15k SAS. The equivalent NetApp or EMC solution would have been double considering all their ridiculous licensing costs. Ohh you want iSCSI, caa-ching.

EqualLogic is not bad at all performance-wise. Management is straightforward, support is good and the hardware is pretty neat. However, I must say that I like LeftHand more, mainly for the flexibility of the software. Also, to be fair to NetApp (less so with EMC), you will see pricing converge as you "fill up" on nodes. With NetApp and EMC (and Hitachi, HP EVA, etc.) you pay a lot more up front, but once you start scaling up, pricing is going to be much closer to (if still more than) EqualLogic and LeftHand. So once you compare, say, a NetApp 3160 with a whole bunch of shelves against a similarly large EqualLogic deployment, prices are much closer than you'd think at the start. There are other advantages to EqualLogic (LeftHand too) compared to traditional SANs though.
|
1000101 posted:You can in fact use a NetApp as a gateway in front of whoever.

How well do those V-Series gateways work? Haven't tried them yet, and we were thinking of trying to front some EMC and Hitachi storage with one.
|
Catch 22 posted:You would get an app consistent snap first (using SnapView to manage and set this up), then RecoverPoint replicates the LUN at the block level (clones). Flat files would not need the snap first.

I just saw a presentation by EMC last week on replicating Oracle RAC and it was pretty damn impressive. Now, it looks like managing the various caches may be a bit of a hassle, and it's the usual EMC CX4 stuff (I still like NetApp better), but for replicating copious amounts of data, this is some very good stuff. The cool part is they track the changes and only replicate the changed blocks at a certain interval (with async), say every 20-30 min, so unlike SnapMirror, the data reduction after compression and folded writes (I think that's what the EMC engineer called the process) appears to be about 3:1 for Oracle. Not too shabby. That said, it's not cheap and requires a whole bunch of fiber connections.
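
A quick sketch of why that 3:1 folding matters for the replication link; the database size is from my earlier post, and the change rate and interval are assumptions, not measurements.

```python
# WAN bandwidth for async replication of a 4TB database, with and without
# the ~3:1 folding/compression quoted above. Change rate is an assumption.
db_tb = 4
daily_change_rate = 0.10          # assume 10% of the DB churns per day
interval_min = 30
fold_ratio = 3

changed_gb = db_tb * 1024 * daily_change_rate * interval_min / (24 * 60)
raw_mbit_s = changed_gb * 1024 * 8 / (interval_min * 60)
print(f"Changed per {interval_min}-min cycle : {changed_gb:.1f} GB")
print(f"Link needed               : ~{raw_mbit_s:.0f} Mbit/s raw, "
      f"~{raw_mbit_s / fold_ratio:.0f} Mbit/s after {fold_ratio}:1 folding")
```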
|
TobyObi posted:What are your issues with the Sun kit?

Does Sun storage have any VSS snapshot software for Windows now? Back in December they did not, and neither did they have any VMware-compatible snapshotting or anything else. I really liked their hardware, but their software was just way too raw for my liking. I went with EqualLogic instead for that particular project, and we have gotten a bunch of NetApp boxes since then for a few other projects. However, if Sun's software stack has improved (or you don't need it and NFS/iSCSI on Linux/Solaris is good enough), I'd definitely take a look. I did hear bad things about support, and heck, even their pre-sales tech support was meh.
|
Yeap, it probably took us 5-6 months each of the last couple of times we were selecting a storage solution. There were usually 3-4 vendors in the running and only 1 was selected each time, so there were plenty of losers, but everyone understood that this is what it takes in the Enterprise.
|
Speaking of storage, anyone have experience with fairly large systems, as in 600-800TB, with most of that being short-term archive type storage? If so, what do you use? NetApp wasn't really a great solution for this due to volume size limitations, which I guess one could mask with a software solution on top, but that's clunky. They just came out with 8.0, but I have zero experience with that revision. What about EMC, say the Clariion CX4-960? Anyone used that? Symmetrix would do this, but that's just stupidly expensive. Most of my experience is NetApp, with EqualLogic thrown in for good measure (over the last year or so).
|
Vanilla posted:All the time, depends exactly what you need it for - just to stream to and delete shortly after? More details? 2TB is not out yet as far as I know.

It's mostly just to stream data to (sequential writes mostly) and then archive for a few weeks. I was basically thinking Clariion or the NetApp 6000 series, the only problem being that 15/16TB is the max volume size (not sure on the Clariion, could be wrong there). However, today I found out that the size needs to double, i.e. to 1.5PB or so. My guess is that management will look at the cost and abandon the project, but hey, I've got to come up with something, and it's sure to be an interesting design exercise. I'll also look at Isilon; never heard of them before.
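
To show why the 16TB ceiling hurts at this scale, here's a quick count of how many volumes you'd have to carve and then glue back together above the array. A sketch; the exact limit varies by platform and ONTAP version.

```python
import math

# How many max-size volumes a short-term archive this size turns into if the
# array caps volumes/aggregates at 16TB.
volume_limit_tb = 16
for target_tb in (600, 800, 1500):
    volumes = math.ceil(target_tb / volume_limit_tb)
    print(f"{target_tb:>5} TB -> at least {volumes} volumes/LUNs to carve and manage")
```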
|
Cultural Imperial posted:Ontap 8 running in "classic" mode overcomes the 16 TB aggregate limit.

Yeap, talked to my NetApp reps, and it looks like ONTAP 8 will do this. Also talked to some EMC guys, and Vanilla is correct as well: the Clariion will handle that LUN size. There is no way in hell that I would be doing LVM stripes and such.
|