|
With an Oracle ZFSSA with two heads, two shelves can get you 30TB usable easily, have great performance, fit in 6U, and cost well under $100k. It will support CIFS, NFS, rsync, ZFS replication, FTP, SFTP, and pretty much anything else that is unix-y. It comes with enough CPU that you can do on-the-fly compression and actually improve your performance as you reduce IO.

HA pair of controllers with 10GbE, 48 disks:
- 2x SSD for ZIL
- 46x 2.5" 1TB 7200 RPM disks for storage:
  - 2x as spares
  - 4x vdevs of 9 data and 2 parity disks gives you ~32TB usable

It will be flash accelerated and lightning fast while reasonably inexpensive ($30k for the controllers, $20k-ish for each shelf with disks, $10k for the write cache). You can get an identical unit and replicate offsite with the built-in replication features. The biggest problem is you will like it so much you will start using it for more than you originally thought (this happened to us and we filled it up pretty quickly).
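A quick sanity check on the capacity math above (a sketch; 1TB is treated as exactly 1TB, and ZFS metadata/reservation overhead is hand-waved):

```python
# Usable-capacity estimate for the layout above:
# 4 RAIDZ2 vdevs of 9 data + 2 parity disks each, on 1TB drives.
DRIVE_TB = 1
VDEVS = 4
DATA_PER_VDEV = 9
PARITY_PER_VDEV = 2

raw_tb = VDEVS * (DATA_PER_VDEV + PARITY_PER_VDEV) * DRIVE_TB
usable_tb = VDEVS * DATA_PER_VDEV * DRIVE_TB

print(raw_tb)     # 44 (plus 2 spares and 2 ZIL SSDs = 48 slots)
print(usable_tb)  # 36 before overhead, which lines up with ~32TB in practice
```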
|
# ? Dec 17, 2013 00:59 |
|
adorai posted:With an oracle zfssa with two heads, two shelves can get you 30TB usable easily, have great performance, fit in 6u, and cost well under $100k. It will support CIFS, NFS, RSYNC, ZFS replication, FTP, SFTP, and pretty much anything else that is unix-y. It comes with enough CPU that you can do on the fly compression and actually improve your performance as you reduce IO.

why are there no L2ARC SSDs; only SSDs for ZIL?
|
# ? Dec 17, 2013 01:35 |
|
adorai posted:With an oracle zfssa with two heads, two shelves can get you 30TB usable easily, have great performance, fit in 6u, and cost well under $100k. It will support CIFS, NFS, RSYNC, ZFS replication, FTP, SFTP, and pretty much anything else that is unix-y. It comes with enough CPU that you can do on the fly compression and actually improve your performance as you reduce IO.

That's way overkill for this application; our SAN is already running an insanely expensive SSD tier. I'm more interested in a backup appliance that accepts data and holds it safely, while letting me access it relatively quickly (copy on and off, not running anything off it). I don't plan on running db's or vm's on it. That being said, we'd probably end up rolling this exact thing ourselves (we've done it before) if we do it that way. Either way we probably want something more like 3.5" nearline enterprise SATA drives, which can usually go 12-16 in a 3U chassis. I'm ballparking that we can roll our own for about $15k each with 4TB enterprisy drives, so I'm probably going to limit a commercial solution to about twice that ($60k total).
|
# ? Dec 17, 2013 02:36 |
|
feld posted:why are there no L2ARC SSDs; only SSDs for ZIL?

Aquila posted:That's way overkill for this application, our san is already running an insanely expensive ssd tier. I'm more interested into a backup appliance that accepts data and holds it safely, while letting me access it relatively quickly (copy on and off, not running anything off it). I don't plan on running db's or vm's on it. That being said we'd probably end up rolling this exact thing ourselves (we've done it before) if we do it that way. Either way we probably want something more like 3.5" near line enterprise sata drives which can usually go 12-16 in a 3u chassis. I'm ballparking that we can roll our own for about 15k each with 4tb enterprisy drives so I'm probably going to limit a commercial solution to about twice that (60k total).

drop the HA and I bet you can do it for under $60k. You'd have to talk to a sales guy to verify.
|
# ? Dec 17, 2013 03:43 |
|
adorai posted:drop the HA and I bet you can do it for under $60k. You'd have to talk to a sales guy to verify. Hmmm ok, though the guy who would be managing this would probably have an aneurysm if we buy oracle storage gear, he was part of a team that made the original thumpers work.
|
# ? Dec 17, 2013 19:21 |
|
ZombieReagan posted:EMC has some new mid-tier DataDomain units that are really reasonably priced compared to what the older units used to run if you want to back up over NFS. Really great deduplication/compression ratio on those things as well, but make sure to get quotes for the software maintenance per year to make sure you're OK with the price if you decide to check it out. I remember our DD890 really surprising us the 2nd year we had it, but we've got closer to 90TB of actual disk space on it. In our environment we're able to pack over 1PB on that easily with room to spare, but that's going to vary depending on how much unique data you have and your change rate between backups.

And if you encrypt anything.
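The ratio implied by those numbers works out like this (round figures, just for scale):

```python
# Implied dedupe/compression ratio on the DD890 described above:
# ~1PB of logical backup data landing on ~90TB of physical disk.
logical_tb = 1000.0   # ~1PB protected
physical_tb = 90.0    # actual disk on the unit

ratio = logical_tb / physical_tb
print(f"{ratio:.1f}:1")  # 11.1:1
```

Encrypted (or already-compressed) data won't dedupe, which is why the change rate and data mix matter so much.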
|
# ? Dec 25, 2013 19:18 |
|
General iSCSI question: should Interrupt Moderation be disabled on network adapters that handle iSCSI traffic? I'm leaning towards yes, however I just wanted to see if there's a general consensus or if it is a deployment-specific consideration. (The bulk of my experience is with FCP, so I've got a few gaps when it comes to iSCSI best practice.)
|
# ? Dec 26, 2013 18:59 |
|
What's the traffic load look like? Is this to the host or a guest?

Aquila posted:I'm looking for something for nearline local backups for our systems, mostly db backups. I'm thinking 3-6u, one or two boxes, 20-40tb usable, bonded Gbe or 10Gbe connected, nfs and or rsync, ftp, etc transfer. While we have alot of in house expertise rolling just this kind of solution ourselves I'm hoping for something very turnkey and reliable, while not being horrendously expensive, moderately expensive is potentially ok. We already have a hitachi fc san for db's and vm's, but it's file options appear to be so bad we're not even considering them (and they gave us a free file module).

Data Domains are the poo poo for what you are doing. The DD160 may squeeze into your needs, but the 620s could be nice for storage. You also may want to look into the Avamar virtual appliances; they are really nice. And the VDP and VDP Advanced (does replication) are in all terms Avamar appliances, so use them if you got 'em. Dilbert As FUCK fucked around with this message at 01:46 on Dec 27, 2013 |
# ? Dec 27, 2013 01:42 |
|
cheese-cube posted:General iSCSI question: should Interrupt Moderation be disabled on network adapters that handle iSCSI traffic? I'm leaning towards yes however I just wanted to see if there's a general consensus or if it is a deployment-specific consideration (The bulk of my experience is with FCP so I've got a few gaps when it comes to iSCSI best-practice). Yes, disable it.
|
# ? Dec 27, 2013 20:33 |
|
Dilbert As gently caress posted:What's the traffic load look like? Is this to the host or a guest?

I haven't really bothered to look at traffic loads, honestly. The reason I ask is because I've inherited a rather hilariously bad setup: an HP blade server running Windows Server 2008 R2 and operating as the primary Domain Controller for the environment. The blade has a 75GB RAID1 which holds the system drive, plus two iSCSI LUNs each formatted as NTFS. The first LUN contains the AD database (!) and the second contains the page file (!!!). We are looking to decommission it ASAP; however, in the meantime I'm just trying to make it somewhat less-poo poo.

NippleFloss posted:Yes, disable it.

Thanks, that's what I thought.
|
# ? Dec 28, 2013 10:05 |
|
What do you guys see being used most in the SMB space for backup storage? Are people storing their backups on the same SAN as their production data? Or do you see folks adding secondary storage or maybe even local storage for backups?
|
# ? Jan 13, 2014 14:41 |
|
So we've been using NetApp filers for about 5 years now, and recently we've been experiencing some ridiculous (in my opinion) drive failures. We used to run a 2020 with SAS disks and I don't think we had one failure in 3 years of production. A year ago we refreshed it to a 2240-4 with 24x 2TB SATA. We've had 7 disk failures in one year. This seems insanely high to me. We also just added a 2220 as a SnapMirror backup controller, and it just lost 2 SATA disks within 24 hours and the thing hasn't even been turned on for a week yet. Anyone else experiencing failure rates like this on SATA disks in large aggregates?
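For a rough sense of how bad that is, here is the annualized failure rate implied by those numbers (assuming all 24 drives were in service the whole year):

```python
# Back-of-the-envelope annualized failure rate (AFR) for the 2240-4 shelf.
failures = 7
drives = 24
years_in_service = 1.0

afr = failures / (drives * years_in_service)
print(f"{afr:.1%}")  # 29.2% -- vendor AFR figures for SATA are usually low single digits
```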
|
# ? Jan 13, 2014 14:47 |
|
Syano posted:What do you guys see being used most in the SMB space for backup storage? Are people storing their backups on the same SAN as their production data? Or do you see folks adding secondary storage or maybe even local storage for backups?

If I had my way I'd be backing up locally to a large multi-TB QNAP or something and then syncing to our building down the road via MIMO PTP wireless links. All can be done relatively cheaply. But hey, I live in the real world where we don't back up to anything.
|
# ? Jan 13, 2014 14:49 |
|
It depends if you go by what Microsoft thinks a small business is, or whether you're using the real world definition. If it's the real world then you probably won't see a SAN in a small business, just a NAS backing up to another NAS in a different location / different room. Or no backups. Going up the scale you get the dedicated backup appliances, I think someone here bought a Unitrends appliance they were quite happy with. Then in the Microsoft definition of SMB stuff like DataDomain.
|
# ? Jan 13, 2014 15:02 |
|
Syano posted:What do you guys see being used most in the SMB space for backup storage? Are people storing their backups on the same SAN as their production data? Or do you see folks adding secondary storage or maybe even local storage for backups? We're just in the process of setting up an Exagrid unit to be the D in our B2D2T setup. I can't get them to spring for replication offsite....
|
# ? Jan 13, 2014 15:14 |
|
Almost every small business I have dealt with (50 users or less) runs with local storage backing up to a NAS.
|
# ? Jan 13, 2014 15:15 |
|
I was thinking an environment with more like 500 users and about 30 servers with all the usual culprits like Exchange and SQL. It's a partner who is asking me for some advice and, to be quite honest, I've never really thought it out.
|
# ? Jan 13, 2014 15:46 |
whaam posted:So we've been using Netapp filers for about 5 years now. And recently we are experiencing some ridiculous (in my opinion) drive failures.

Do you know what brand/model drives NetApp gave you? There was a batch of 1-2TB drives in the past few years with a high failure rate that a lot of the major storage companies are dealing with. I know a customer's box had that issue, and their vendor said "shhh, keep this hush hush."
|
|
# ? Jan 13, 2014 15:50 |
|
I got an email from netapp but we haven't had a failed drive in forever. I'll dig it out tomorrow.
|
# ? Jan 13, 2014 21:03 |
|
Syano posted:I was thinking an environment with more like 500 users and about 30 servers with all the usual culprits like exchange and sql. Its a partner who is asking me for some advice and to be quite honest Ive never really thought it out.

I've seen a lot of SMB designs and argued heavily with engineers about their designs. People to this day STILL insist a dual-controller NAS/SAN is backed up enough because "one controller is a backup!" and then use BE2012/Veeam for backups. I've only dealt with a few SMB customers who actually have replication and failover to an offsite setup. Usually they just use the SAN/NAS that was replaced, if they can get the warranty upped and firmware upgraded without issues. From previous jobs I can only think of a few places where I saw an active-standby NAS, and most of those had been a design I made or some DataCore setup. Dilbert As FUCK fucked around with this message at 21:12 on Jan 13, 2014 |
# ? Jan 13, 2014 21:09 |
|
Dilbert As gently caress posted:I've seen a lot of SMB designs, and agrued heavily with engineers about their designs. People to this day STILL insist a dual controller NAS/SAN is backed up enough because one controller is a backup! and then use BE2012/Veeam for backups.

Realistically, in an SMB with no need for a DR site or budget for a second head, surviving the failure of a storage path to what is probably your only SAN head is backed up enough.
|
# ? Jan 13, 2014 21:56 |
|
HORRIBLE VNX2 bug: ETA 175619 https://support.emc.com/docu50194_V...e=en_US

SPA and SPB panic within minutes of each other, and their associated LUNs and DMs go offline. This problem occurs every 90-99 days in the following systems: VNX5200, VNX5400, VNX5600, VNX5800, VNX7600. In a VNX8000 it occurs every 80 days.

I got a call from my EMC rep and immediately filed an SR to get RCM to upgrade us. This is totally fresh off the presses; they don't have a KB article for it. It's only in the release notes for the new version. Thank god I heard today, our new VNX is just about at the 90 day mark.

Edit: Oh, AND apparently there's a memory leak fix for SP panics caused by RecoverPoint or SAN Copy. El_Matarife fucked around with this message at 22:54 on Jan 13, 2014 |
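If you want a quick check on how close a given array is to the window, SP uptime is the number to watch (the dates below are made up for illustration):

```python
from datetime import date

# ETA 175619: SPs panic at 90-99 days of uptime on the VNX5200-7600
# models, and at ~80 days on a VNX8000. Hypothetical SP boot date.
last_sp_boot = date(2013, 10, 20)
today = date(2014, 1, 13)

uptime_days = (today - last_sp_boot).days
print(uptime_days)  # 85
if uptime_days >= 80:
    print("inside the danger window -- patch, or stagger the SP reboots")
```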
# ? Jan 13, 2014 22:45 |
|
El_Matarife posted:HORRIBLE VNX2 bug ETA 175619 https://support.emc.com/docu50194_V...e=en_US Have I waited the appropriate amount of time between my "gently caress EMC" posts? Because if so, gently caress EMC. Syano posted:I was thinking an environment with more like 500 users and about 30 servers with all the usual culprits like exchange and sql. Its a partner who is asking me for some advice and to be quite honest Ive never really thought it out. Ah, that is more "medium business" to me. In that environment I would personally insist on at least a pair of Equallogic SANs with offsite replication and backing up to a NAS.
|
# ? Jan 14, 2014 00:40 |
|
Syano posted:I was thinking an environment with more like 500 users and about 30 servers with all the usual culprits like exchange and sql. Its a partner who is asking me for some advice and to be quite honest Ive never really thought it out.

Haha, we have 600 users and 200 VMs. With that we run a NetApp HA pair and an Oracle ZFS storage appliance HA pair at each of our datacenters, which replicate to each other. We also have some non-HA storage for poo poo that doesn't matter.
|
# ? Jan 14, 2014 00:49 |
|
El_Matarife posted:HORRIBLE VNX2 bug ETA 175619 https://support.emc.com/docu50194_V...e=en_US Ahh awesome, heard about this last week from our EMC TAM but the patch wasn't out yet. Their temporary fix was to reboot one of the SPs to stagger the uptime so you don't lose both at the same time.
|
# ? Jan 14, 2014 00:53 |
|
El_Matarife posted:HORRIBLE VNX2 bug ETA 175619 https://support.emc.com/docu50194_V...e=en_US Christ, they had a very similar bug with the VNX5300 (and others) that surprised us one morning. Good to see that they've learned from their mistakes.
|
# ? Jan 14, 2014 01:33 |
GrandMaster posted:Ahh awesome, heard about this last week from our EMC TAM but the patch wasn't out yet. Their temporary fix was to reboot one of the SPs to stagger the uptime so you don't lose both at the same time.

We had to stagger ours out, and there are a lot of annoying dial-home bugs they said would be fixed in Q1.
|
|
# ? Jan 14, 2014 14:48 |
|
whaam posted:So we've been using Netapp filers for about 5 years now. And recently we are experiencing some ridiculous (in my opinion) drive failures.

There are some drive types that have had higher failure rates. Seagate "moose" drives have had a lot of problems, and there are also some 500GB FC disks with extremely high failure rates. I don't know of any 2TB drives off the top of my head that have had major issues.

ONTAP is pretty aggressive about pre-failing drives if it thinks they may be going bad, so you'll sometimes have false positives where a drive is failed for safety but isn't actually bad. Certain drive types have extra error handling built into ONTAP because they are known to be problematic, so they will be failed even more aggressively. Things like cable, IOXM, port, or SFP problems can also cause SCSI errors that will make ONTAP think there are drive issues and fail them. Or it could just be that you're on the very high end of the failure rates.

Seeing failures on brand new controllers is pretty common because the disk zero process touches every block on every drive, so if any brand new drives had manufacturing defects it becomes obvious very quickly. I've certainly seen the zeroing process kill a drive or two before. If you're really concerned about it, open a support case and they should be able to tell you whether it's normal behavior or not.
|
# ? Jan 14, 2014 19:56 |
|
Any info around that shows the performance difference from 7-mode to cluster mode on the same hardware. Something like what the perf gain is from going from 7 to C on a FAS 2240-2 or 3220? Specifically for dedupe with compression and NFS iops. Given that your vserver has more mem and cpu at its disposal you would think that there would be a goodly gain for CPU intensive operations. Or has CDOT yet to fully make use of these?
|
# ? Jan 15, 2014 03:16 |
|
What is the general opinion of Nimble with you guys? We are considering them for a project and like what we see so far but just are wondering about real world experiences also.
|
# ? Jan 15, 2014 20:07 |
|
demonachizer posted:What is the general opinion of Nimble with you guys? We are considering them for a project and like what we see so far but just are wondering about real world experiences also. Been using a few of them for a while now (CS-240) and have no complaints except. They are dead simple to configure and just seem to work. Let me know if you have any specific questions.
|
# ? Jan 15, 2014 20:11 |
|
Moey posted:Been using a few of them for a while now (CS-240) and have no complaints except. They are dead simple to configure and just seem to work. Let me know if you have any specific questions. Except?
|
# ? Jan 15, 2014 20:40 |
|
Nimbles are pretty good. The price/performance is tough to argue with. My only complaint is the lack of software like NetApp's SnapManager and the fact that they're iSCSI-only so you can't store files on there. Otherwise solid.
|
# ? Jan 15, 2014 21:24 |
|
demonachizer posted:What is the general opinion of Nimble with you guys? We are considering them for a project and like what we see so far but just are wondering about real world experiences also. We had a pair of CS460s backing our production VMware/Database environment at my last job, and I think they're a great solution within their intended market. They're very fast and we had absolutely no issues with our units while I was there. Feel free to PM me if you'd like more details.
|
# ? Jan 15, 2014 21:51 |
|
demonachizer posted:Except? I somehow must have lost train of thought there, no complaints here currently. Been running them for 18 months or so.
|
# ? Jan 15, 2014 21:58 |
|
JockstrapManthrust posted:Any info around that shows the performance difference from 7-mode to cluster mode on the same hardware. Something like what the perf gain is from going from 7 to C on a FAS 2240-2 or 3220? Specifically for dedupe with compression and NFS iops. Given that your vserver has more mem and cpu at its disposal you would think that there would be a goodly gain for CPU intensive operations. Or has CDOT yet to fully make use of these? I don't think we've see anything that I would chalk up to cluster mode directly, but we've also had tons of problems (hopefully fixed in the 7.2 version we just upgraded to), so performance hasn't been a huge focus for us lately.
|
# ? Jan 16, 2014 00:00 |
|
demonachizer posted:What is the general opinion of Nimble with you guys? We are considering them for a project and like what we see so far but just are wondering about real world experiences also.
|
# ? Jan 16, 2014 01:41 |
|
Yes, but then you are giving money to Oracle.
|
# ? Jan 16, 2014 03:51 |
|
I'm not impressed by the new version of DFM, oh sorry, I mean OnCommand Unified Manager 6. It's virtual-appliance-only now. So far, I fear it's the bad kind of virtual appliance: lack of patches for 3rd-party products (like the OS), built-in backdoors (why isn't the customer allowed to know or change the local root account?), unnecessary open ports (no one needs to talk to mysql on that box in my environment). I ran into similar problems when we demoed Balance. I'm barely scratching the surface and I'm worried I see a lot of problems coming my way. Has anyone successfully deployed this in a remotely secure way?
|
# ? Jan 16, 2014 05:11 |
|
Internet Explorer posted:Yes, but then you are giving money to Oracle. Not to mention having to deal with Oracle. Nimble will probably be very responsive if you have issues. Oracle's bastard hardware division... probably a crap shoot.
|
# ? Jan 16, 2014 17:54 |