|
I still want to find the picture that goon posted where he had 2 USB sticks labeled "SERVER DRIVES"
|
# ? Jun 4, 2013 18:34 |
|
Rated PG-34 posted:Existing infrastructure is a linux cluster. One of the problems is that we have space constraints at the server location so we wouldn't be able to keep the backups attached to the servers.
|
# ? Jun 4, 2013 18:49 |
|
Misogynist posted:You already gave the size, and I'm no closer to understanding what you're trying to do. Are you looking for a big dumb brick of storage to roll your own solution onto? Are you looking for a backup solution? If so, are you looking for something with multi-tenant features baked in? ~7TB. ~4TB is standard file shares. Couple of SQL databases. Email is on Google Apps. Line of business apps are moving cloud too. Big push to consolidate datacenter on a single Virtual platform. Current state is a mix of physical and VMs. Looking at (3) Dells with VMware Essentials Plus and a VNXE. Talked to EMC this morning and we're going to look at a DD160 and client can decide to keep BackupExec or EMC will give us a Networker license for really cheap. Avamar is out of budget. We'll have a second DD estimate for offsite replication. DR is phase 2, we've been focused on the VMware consolidation and storage piece. Client is currently taking tapes home so plenty of room for improvement given a budget. Appreciate the feedback. This has been out of my wheelhouse a bit but it's all coming back together.
|
# ? Jun 4, 2013 19:29 |
|
Corvettefisher posted:I still want to find the picture that goon posted where he had 2 USB sticks labeled "SERVER DRIVES" Bam.
|
# ? Jun 4, 2013 19:33 |
|
Stugazi posted:EMC will give us a Networker license for really cheap. If you value your sanity, you will give Networker a wide berth. Awful piece of software.
|
# ? Jun 5, 2013 03:03 |
|
GrandMaster posted:If you value your sanity, you will give Networker a wide berth. Awful piece of software. Did EMC pay you for such a glowing review? This x10. A couple years ago I worked a Networker issue. Root cause was case-handling issues in a script related to their Exchange backup module. Took three months for them to fix it once they found it. Delayed launch of a massive new service. Not an awesome experience.
|
# ? Jun 5, 2013 05:11 |
|
Moey posted:Ugh. Do you know my old environment? My boss thought he was a loving IT god by taking a QNAP 1679 and filling it with SSDs. Initial performance was good, but once that thing blows up and takes down everything I will be dying laughing. Also laughing all the way to the bank as I am doing hourly contract work for them still. I work in a tiny environment where budgeting is the owner deciding what something is worth. I briefly flirted with that kind of idea but told the boss I'd rather keep what we had than put our entire production environment on a "prosumer" setup. I managed to keep the storage costs at about $18k, and that was with cutting corners. I probably could have gotten it a bit lower but certainly not down to $5k.
|
# ? Jun 5, 2013 14:40 |
|
skipdogg posted:Paging Scott Allan Miller to this thread. He'll know just the thing! Oh please, no... not another pseudo-scientific/semi-pretentious explanation of why everybody should build and run OpenFiler on el cheapo white boxes and how they are every bit as enterprise-grade as brand-name units...
|
# ? Jun 5, 2013 16:29 |
|
Mierdaan posted:Bam. Ahahaha, awesome, saved.
|
# ? Jun 5, 2013 16:29 |
|
Rated PG-34 posted:Ballpark budget is not very high: 2-4k. If it helps, I got an email about "aggressive pricing" on DataDomain through June (read: EMC trying to reach quarterly sales projections), but you'd still be paying 5-10x that for what you need (depending on how good DD's dedupe is).
|
# ? Jun 5, 2013 18:51 |
|
Mierdaan posted:Bam. Whoever did this was serious? It's getting hard to tell in this thread. If they were serious, please leave the room if this will affect you
|
# ? Jun 5, 2013 22:42 |
|
paperchaseguy posted:Whoever did this was serious? It's getting hard to tell in this thread. If they were serious, please leave the room if this will affect you I think it is from the early parts of the "poo poo that pisses you off" thread. I think this is the dude who had the infrastructure running off WiFi-G and running pirated XP.
|
# ? Jun 5, 2013 22:57 |
|
Elitists. This guy is just ahead of the curve with storage commoditization.
|
# ? Jun 5, 2013 23:16 |
|
A Synology DS would do the job, or you could go buy a DL180 or something similar. I love those Synology boxes but the lack of SLA-based support puts me off using them for anything important.
|
# ? Jun 6, 2013 17:28 |
|
Does anyone have experience with Hitachi HUS SANs? I'm considering one for heavy DB use.
|
# ? Jun 7, 2013 03:18 |
|
Aquila posted:Does anyone have experience with Hitachi HUS SANs? I'm considering one for heavy DB use. I haven't used the new HUS stuff but I worked with the previous generation USP-V, and the TagmaStore before that, and pretty much all of it had the same strengths and weaknesses. It was absolutely rock solid (no unplanned downtime EVER over 5 years) and performance was more than adequate. The downsides: cost per GB was very high, the management software was uniformly awful, and it wasn't terribly flexible compared to newer multi-protocol filesystem-based storage appliances. But if you just want it to burn through database operations and never go down then the high-end Hitachi stuff is pretty great in my experience.
|
# ? Jun 7, 2013 03:52 |
|
I found a copy of "Designing Storage Area Networks" by Tom Clark at the office which I flicked through and I found the following page which is pretty hilarious: I like to think that it's based on a true story
|
# ? Jun 7, 2013 14:26 |
|
My eyes, it burns!
|
# ? Jun 7, 2013 14:36 |
|
I have a pair of Equallogic PS4000 SANs that I need to configure for replication for a DR scenario. Is there a way to have them mirror each other over the network as changes are made instead of shipping volume snapshots? Essentially, I am hoping to be able to bring one SAN down and then bring the other up and continue working as before as quickly as possible.
|
# ? Jun 7, 2013 17:46 |
|
demonachizer posted:Is there a way to have them mirror each other over the network as changes are made instead of shipping volume snapshots? Essentially, I am hoping to be able to bring one SAN down and then bring the other up and continue working as before as quickly as possible. Is a 15 minute window not good enough? What is the connection to your DR site? Or are these in the same rack? Dilbert As FUCK fucked around with this message at 18:16 on Jun 7, 2013 |
# ? Jun 7, 2013 18:08 |
|
What's the distance between the sites? If you do synchronous replication it may impact the performance of your applications if the distance is large. And if the distance is more than 180km (300km for some IBM replication), it isn't practical or possible. The latency just gets too high for your host application.
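To put rough numbers on that, here is a back-of-the-envelope sketch (assuming light in fiber travels at roughly 200,000 km/s, about two-thirds of c, and that a synchronous write needs at least one round trip to the remote array before it can be acknowledged; real arrays add protocol round trips and processing overhead on top of this):

```python
# Minimum added write latency from distance alone for synchronous replication.
# Assumption: signal speed in fiber ~200,000 km/s, i.e. 200 km per millisecond.
FIBER_KM_PER_MS = 200.0

def sync_write_penalty_ms(distance_km, round_trips=1):
    """Distance-only latency added to each acknowledged write (no switch
    or array overhead included)."""
    one_way_ms = distance_km / FIBER_KM_PER_MS
    return 2 * one_way_ms * round_trips

for km in (0.1, 10, 180, 300):
    print(f"{km:>6} km: at least +{sync_write_penalty_ms(km):.2f} ms per write")
```

Across the street the penalty is effectively zero, while at 180-300 km you are adding milliseconds to every single write before any real-world overhead, which is why synchronous replication stops being practical at those distances.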
|
# ? Jun 7, 2013 18:18 |
|
Corvettefisher posted:Is a 15 minute window not good enough? DR site is across the street. It looks like firmware 6.x is the way to go as it enables synchronous replication. I was running off of 5.x before and didn't realize they had added it in. Shipping them as replication sets periodically was looking sucky. I have a direct dedicated 1gig link between the two sites and I can toss them on the same subnet etc. so it looks like I am good to go with sync rep. I just now need to figure out the best way to reconfigure my second SAN without a serial cable! I already tossed it in its own group because of the previous limits on replication from 5.x.
|
# ? Jun 7, 2013 19:30 |
|
Wow not too shabby, I would love to hear how it goes.
|
# ? Jun 7, 2013 19:41 |
|
Corvettefisher posted:Wow not too shabby, I would love to hear how it goes. I wish I could tell the whole story of this thing. I am two years into a VMware project in which I wasn't involved in choosing the SAN hardware and that has changed its scope 5 (!) times. Two years meaning that the servers and poo poo were delivered then and the environment is still not live... It is live next week. I was given the project with no knowledge of VMware or SANs and just told to figure it out. I am glad in a way since I have been able to play with a shitload of nice hardware while learning it. Once I was more in charge I got some poo poo sorted in a better fashion like the network side of things. I still don't have a redundant SAN switch due to cost (...) but we have spares all over the place so downtime won't be bad. Demonachizer fucked around with this message at 20:18 on Jun 7, 2013 |
# ? Jun 7, 2013 20:16 |
|
demonachizer posted:DR site is across the street.
|
# ? Jun 7, 2013 20:28 |
|
It satisfies the business needs of the project. If we have a location based disaster taking out both server rooms business is stopped anyway. But we also have daily shipments to servers 3 miles away and offsite backups to Iron Mountain for business continuity purposes. We are comfortable with a lot of downtime on paper really since these aren't our clinical systems and just for some administrative poo poo and stuff like print servers etc. The purpose of the second room is just in case we have a flood in the primary room or something. The environment is not as important as the data and the data has a lot of redundancy built in. The file server that will be hosted will also be mirroring off to a physical box for backup to tape and for additional redundancy. EDIT: Funny story. I was just told that we need to start thinking about the environment that is replacing this one since we are EOLing it at 3 years. Demonachizer fucked around with this message at 20:57 on Jun 7, 2013 |
# ? Jun 7, 2013 20:40 |
|
demonachizer posted:It satisfies the business needs of the project. If we have a location based disaster taking out both server rooms business is stopped anyway. But we also have daily shipments to servers 3 miles away and offsite backups to Iron Mountain for business continuity purposes. We are comfortable with a lot of downtime on paper really since these aren't our clinical systems and just for some administrative poo poo and stuff like print servers etc. You wouldn't happen to be doing this project for a company located in Subiaco, Western Australia that starts with an M?
|
# ? Jun 7, 2013 21:38 |
|
cheese-cube posted:You wouldn't happen to be doing this project for a company located in Subiaco, Western Australia that starts with an M? Nope. Other side of the planet.
|
# ? Jun 7, 2013 21:43 |
|
Sssh don't tell him what DR IS FOR!
|
# ? Jun 7, 2013 22:00 |
|
Still, having a separate building is better than most companies manage. Granted, across the street isn't amazing, but if it is acceptable to the business in terms of risk, where is the problem? It would be nice to have a site >10km away, but not all businesses can do or afford that. It's better than many companies out there, but not perfect.
|
# ? Jun 7, 2013 22:17 |
|
demonachizer posted:Nope. Other side of the planet. My bad. On-topic question: what has your experience with Iron Mountain been like? I've had to start dealing with them very frequently in my new job and while they are extremely reliable their TapeGuard website is poo poo, especially when you are dealing with a collection of 5,000+ tapes and are trying to manually recall a large amount of them. Also the fact that they don't give you any details regarding scheduled or unscheduled deliveries until 2 hours before delivery is a bit crap.
|
# ? Jun 7, 2013 22:27 |
|
evil_bunnY posted:Sssh don't tell him what DR IS FOR! Yes, I know what DR is for/means. I could have (should have) used a different term I guess. The business need dictates that we have data continuity and that is all. This is provided for through offsite warehousing of data with Iron Mountain and shipments to servers cross town. The second site is to provide minimal downtime in case of hardware failures, network issues, localized issues (flooding, fire etc.). Our mission critical poo poo exists in three different places. The two aforementioned server rooms and then another server room 3 miles away. Again the data exists in many more places. If there is a disaster event that causes a huge geographic outage, there is no business to be done as this is a school and all business is carried on in a specific building. cheese-cube posted:My bad. On-topic question: what has your experience with Iron Mountain been like? I've had to start dealing with them very frequently in my new job and while they are extremely reliable their TapeGuard website is poo poo, especially when you are dealing with a collection of 5,000+ tapes and are trying to manually recall a large amount of them. Also the fact that they don't give you any details regarding scheduled or unscheduled deliveries until 2 hours before delivery is a bit crap. Honestly, we haven't had much interaction with them in an emergency situation and our tape situation is about 14 out per week on a two week rotation (56 tapes total maybe?) so we wouldn't see any issues with management etc. Our tape guy arrives at about the same time every pickup. Demonachizer fucked around with this message at 23:18 on Jun 7, 2013 |
# ? Jun 7, 2013 23:15 |
|
What sort of data are you replicating? What do you think synchronous replication gains you over scheduled replication jobs, and is it worth the degraded performance at the primary site? Will you even be recoverable at the secondary if you are replicating data constantly when it may be in a filesystem or application inconsistent state?
|
# ? Jun 8, 2013 03:38 |
|
NippleFloss posted:What sort of data are you replicating? What do you think synchronous replication gains you over scheduled replication jobs, and is it worth the degraded performance at the primary site? Will you even be recoverable at the secondary if you are replicating data constantly when it may be in a filesystem or application inconsistent state? VMs and fileserver data. I am not sure that there will be a huge performance degradation as there is a direct, dedicated gigabit fiber link to the SAN between the two rooms and a limited number of hops (I think two). It is proving to be fine currently when both are in the same room. I will be moving them next week and if things still seem fine then we will use that model. If things aren't fine, we can switch to scheduled rep jobs. Are scheduled replication sets guaranteed not to be in a crash-consistent state? We are not hosting a major MSSQL instance or Exchange on this environment. I am under the impression that this use is fine for synchronous replication. As far as gains, less data loss in a failure scenario. The file server data will also be mirrored on the guest level using Double-Take to another physical machine so I think that level is covered. The operating state of the VMs in a failover could be questionable but the only item that has a DB that is going in is the vCenter instance. I haven't seen anywhere that this is a large concern though. Am I missing something? Demonachizer fucked around with this message at 05:17 on Jun 8, 2013 |
# ? Jun 8, 2013 05:05 |
|
Just throwing this out there, but: what storage vendors do you all use, and why?
|
# ? Jun 8, 2013 05:50 |
|
Corvettefisher posted:Just throwing this out there but, I'll have a better answer for "why" in a few weeks, but my soon to be employer is into NetApp to the tune of 20 petabytes. Thankfully they have a full-time storage admin so I'll get to learn to deal with storage on that scale without just drinking from the firehose on day 1.
|
# ? Jun 8, 2013 06:03 |
|
demonachizer posted:VMs and fileserver data. I am not sure that there will be a huge performance degradation as there is a direct/dedicated to the SAN gigabit fiber link between the two rooms and a limited number of hops (I think two). It is proving to be fine currently when both are in the same room. I will be moving them next week and if things still seem fine then we will use that model. If things aren't fine, we can switch to scheduled rep jobs. Are scheduled replication sets guaranteed to not be in a crash consistent state? We are not hosting a major MSSQL instance or Exchange on this environment. I am under the impression that this use is fine for synchronous replication. That's mostly pretty reasonable. Crash-consistent backups for VMs are almost always sufficient for recoverability, and that will often be all you get with storage-array snapshot based VM backups anyway because VMware quiesced snapshots suck. Fileserver data is similarly stable enough that you won't have problems, especially with CIFS using temporary hidden files when modifications are being made to open files. The one thing I would recommend is making sure you take semi-frequent backups of your vCenter SQL database since crash-consistent DB backups should not be considered reliable. As long as you have a recent backup that is also replicated you should be covered.
|
# ? Jun 8, 2013 06:21 |
|
Corvettefisher posted:Just throwing this out there but, Nimble, because it's a ton of power in a 3U box. If space is a concern, hard to beat the performance/capacity per U of Nimble. NetApp, because it can do everything (SAN/NAS, encryption, mirroring, archiving) with solid software integration for everything (SQL, Exchange, VMware, Hyper-V) and one of the best UIs I've worked with (System Manager). HP, because a lot of clients already had it in place and it was a cheap way to RAID a lot of disk together.
|
# ? Jun 8, 2013 07:24 |
|
Netapp because it's ZFS+ except not owned by oracle. Also we got it p cheap. And it made backups easier. And VMware on nfs is nice.
|
# ? Jun 8, 2013 08:42 |
Corvettefisher posted:Still, having a separate building is better than most companies manage. Granted, across the street isn't amazing, but if it is acceptable to the business in terms of risk, where is the problem? It would be nice to have a site >10km away, but not all businesses can do or afford that.
|
# ? Jun 8, 2013 08:45 |