Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
I still want to find the picture that goon posted where he had 2 USB sticks labeled "SERVER DRIVES"

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Rated PG-34 posted:

Existing infrastructure is a Linux cluster. One of the problems is that we have space constraints at the server location, so we wouldn't be able to keep the backups attached to the servers.


Ballpark budget is not very high: 2-4k.
Roxio Easy CD Creator might fit your budget :3:

Stugazi
Mar 1, 2004

Who me, Bitter?

Misogynist posted:

You already gave the size, and I'm no closer to understanding what you're trying to do. Are you looking for a big dumb brick of storage to roll your own solution onto? Are you looking for a backup solution? If so, are you looking for something with multi-tenant features baked in?

~7TB total. ~4TB is standard file shares, plus a couple of SQL databases. Email is on Google Apps, and line-of-business apps are moving to the cloud too. There's a big push to consolidate the datacenter onto a single virtual platform; the current state is a mix of physical boxes and VMs.

Looking at (3) Dells with VMware Essentials Plus and a VNXe. Talked to EMC this morning, and we're going to look at a DD160; the client can decide to keep Backup Exec, or EMC will give us a Networker license for really cheap. Avamar is out of budget. We'll have a second DD quoted for offsite replication. DR is phase 2; we've been focused on the VMware consolidation and storage piece. The client is currently taking tapes home, so there's plenty of room for improvement given a budget.

Appreciate the feedback. This has been a bit outside my wheelhouse, but it's all coming back together. :)

Mierdaan
Sep 14, 2004

Pillbug

Corvettefisher posted:

I still want to find the picture that goon posted where he had 2 USB sticks labeled "SERVER DRIVES"

Bam.

GrandMaster
Aug 15, 2004
laidback

Stugazi posted:

EMC will give us a Networker license for really cheap.

If you value your sanity, you will give Networker a wide berth. Awful piece of software.

parid
Mar 18, 2004

GrandMaster posted:

If you value your sanity, you will give Networker a wide berth. Awful piece of software.

Did EMC pay you for such a glowing review? This x10. A couple of years ago I worked a Networker issue where the root cause was case-handling bugs in a script in their Exchange backup module. It took them three months to fix once they found it, and it delayed the launch of a massive new service. Not an awesome experience.

Crackbone
May 23, 2003

Vlaada is my co-pilot.

Moey posted:

Ugh. Do you know my old environment? My boss thought he was a loving IT god by taking a QNAP 1679 and filling it with SSDs. Initial performance was good, but when that thing blows up and takes everything down, I will be dying laughing. Also laughing all the way to the bank, since I'm still doing hourly contract work for them.

I work in a tiny environment where budgeting is the owner deciding what something is worth. I briefly flirted with that kind of idea, but told the boss I'd rather keep what we had than put our entire production environment on a "prosumer" setup. I managed to keep the storage costs at about $18k, and that was with cutting corners. I probably could have gotten it a bit lower, but certainly not down to $5k.

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

skipdogg posted:

Paging Scott Allan Miller to this thread. He'll know just the thing!

Oh please, no... not another pseudo-scientific, semi-pretentious explanation of why everybody should build and run OpenFiler on el cheapo white boxes and how they're every bit as enterprise-grade as brand-name units...

szlevi
Sep 10, 2010

[[ POKE 65535,0 ]]

Ahahaha, awesome, saved. :)

Erwin
Feb 17, 2006

Rated PG-34 posted:

Ballpark budget is not very high: 2-4k.

If it helps, I got an email about "aggressive pricing" on Data Domain through June (read: EMC trying to hit quarterly sales projections), but you'd still be paying 5-10x your budget for what you need (depending on how good DD's dedupe is).
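
The dedupe ratio is doing a lot of work in that price, since it sets the effective cost per logical TB. Illustrative arithmetic only; the price and capacity below are placeholders, not a Data Domain quote:

```python
# Effective cost per logical TB on a dedupe appliance (placeholder numbers).
def cost_per_logical_tb(price: float, usable_tb: float, dedupe_ratio: float) -> float:
    """Logical capacity = usable capacity x dedupe ratio."""
    return price / (usable_tb * dedupe_ratio)

price, usable = 20_000, 4.0  # hypothetical: $20k box with 4 TB usable
for ratio in (5, 10, 20):
    print(f"{ratio:>2}:1 dedupe -> ${cost_per_logical_tb(price, usable, ratio):,.0f} per logical TB")
```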

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

Was whoever did this serious? It's getting hard to tell in this thread. If they were serious, please leave the room if this will affect you.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

paperchaseguy posted:

Whoever did this was serious? It's getting hard to tell in this thread. If they were serious, please leave the room if this will affect you

I think it's from the early parts of the "poo poo that pisses you off" thread. I think this is the dude who had his infrastructure running off Wi-Fi G and pirated XP.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
Elitists.

This guy is just ahead of the curve with storage commoditization.

Bitch Stewie
Dec 17, 2011
A Synology DS would do the job, or you could go buy a DL180 or something similar. I love those Synology boxes, but the lack of SLA-based support puts me off using them for anything important.

Aquila
Jan 24, 2003

Does anyone have experience with Hitachi HUS SANs? I'm considering one for heavy DB use.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

Aquila posted:

Does anyone have experience with Hitachi HUS SANs? I'm considering one for heavy DB use.

I haven't used the new HUS stuff, but I worked with the previous generation USP-V, and the TagmaStore before that, and pretty much all of it had the same strengths and weaknesses. It was absolutely rock solid (no unplanned downtime EVER over 5 years) and performance was more than adequate. The downsides were that the cost per GB was very high, the management software was uniformly awful, and it wasn't terribly flexible compared to newer multi-protocol, filesystem-based storage appliances. But if you just want it to burn through database operations and never go down, then the high-end Hitachi stuff is pretty great in my experience.

Pile Of Garbage
May 28, 2007



I found a copy of "Designing Storage Area Networks" by Tom Clark at the office, which I flicked through, and found the following page, which is pretty hilarious:

[image: scanned page from the book]

I like to think that it's based on a true story :allears:

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
My eyes, it burns!

Demonachizer
Aug 7, 2004
I have a pair of EqualLogic PS4000 SANs that I need to configure for replication in a DR scenario. Is there a way to have them mirror each other over the network as changes are made, instead of shipping volume snapshots? Essentially, I'm hoping to be able to bring one SAN down, bring the other up, and continue working as before as quickly as possible.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

demonachizer posted:

Is there a way to have them mirror each other over the network as changes are made, instead of shipping volume snapshots? Essentially, I'm hoping to be able to bring one SAN down, bring the other up, and continue working as before as quickly as possible.

Is a 15-minute window not good enough?

What is the connection to your DR site? Or are these in the same rack, etc.?

Dilbert As FUCK fucked around with this message at 18:16 on Jun 7, 2013

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO
What's the distance between the sites? If you do synchronous replication, it may impact the performance of your applications if the distance is large. And if the distance is more than 180 km (300 km for some IBM replication), it isn't practical or even possible: the latency just gets too high for your host application.
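
For a rough feel of the numbers (back-of-envelope only; light in fiber covers roughly 200 km per millisecond, and real links add switch and serialization overhead on top):

```python
# Back-of-envelope: added latency per write under synchronous replication.
# Light in fiber propagates at roughly 2/3 c, i.e. about 200 km per millisecond.
KM_PER_MS_IN_FIBER = 200.0

def sync_write_penalty_ms(distance_km: float) -> float:
    """Each write must round-trip to the remote array before it is acknowledged."""
    return 2 * distance_km / KM_PER_MS_IN_FIBER

for km in (0.1, 10, 180, 300):
    print(f"{km:>6g} km -> +{sync_write_penalty_ms(km):.3f} ms per acknowledged write")
```

A few extra milliseconds on every acknowledged write is brutal for a latency-sensitive database, which is why the practical cutoff lands around that 180-300 km mark.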

Demonachizer
Aug 7, 2004

Corvettefisher posted:

Is a 15-minute window not good enough?

What is the connection to your DR site? Or are these in the same rack, etc.?

The DR site is across the street. It looks like firmware 6.x is the way to go, since it enables synchronous replication; I was running 5.x before and didn't realize they had added it. Shipping volume snapshots as replication sets periodically was looking sucky.

I have a direct, dedicated 1-gig link between the two sites, and I can put them on the same subnet, etc., so it looks like I'm good to go with sync rep. Now I just need to figure out the best way to reconfigure my second SAN without a serial cable! I had already put it in its own group because of the replication limits in 5.x.
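
At across-the-street distances the propagation delay is noise; the real sanity check on a 1-gig link is whether peak write throughput fits in the pipe. A minimal sketch, with a placeholder write rate (substitute your own measurement):

```python
# Sanity check: can a 1 Gbps inter-site link absorb the synchronous write load?
# The peak write rate is a made-up placeholder -- measure your own environment.
LINK_MBPS = 1000 / 8           # 1 Gbps = 125 MB/s raw
USABLE_MBPS = LINK_MBPS * 0.8  # assume ~20% lost to protocol overhead

peak_write_mbps = 40.0         # hypothetical peak write rate, MB/s

print(f"usable link: {USABLE_MBPS:.0f} MB/s, peak writes: {peak_write_mbps:.0f} MB/s")
print("fits" if peak_write_mbps < USABLE_MBPS else "link will throttle writes")
```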

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Wow, not too shabby. I would love to hear how it goes.

Demonachizer
Aug 7, 2004

Corvettefisher posted:

Wow, not too shabby. I would love to hear how it goes.

I wish I could tell the whole story of this thing. I'm two years into a VMware project in which I wasn't involved in choosing the SAN hardware and which has changed scope 5 (!) times. Two years meaning the servers and poo poo were delivered back then and the environment is still not live... it goes live next week. I was handed the project with no knowledge of VMware or SANs and just told to figure it out. In a way I'm glad, since I've been able to play with a shitload of nice hardware while learning it.

Once I was more in charge I got some poo poo sorted out better, like the network side of things. I still don't have a redundant SAN switch due to cost (...), but we have spares all over the place, so downtime won't be bad.

Demonachizer fucked around with this message at 20:18 on Jun 7, 2013

paperchaseguy
Feb 21, 2002

THEY'RE GONNA SAY NO

demonachizer posted:

The DR site is across the street.

:eyepop:

Demonachizer
Aug 7, 2004

It satisfies the business needs of the project. If we have a location-based disaster that takes out both server rooms, business is stopped anyway. But we also have daily shipments to servers 3 miles away and offsite backups to Iron Mountain for business continuity purposes. On paper we're comfortable with a lot of downtime, really, since these aren't our clinical systems, just administrative poo poo and stuff like print servers.

The purpose of the second room is just in case we have a flood in the primary room or something. The environment is not as important as the data, and the data has a lot of redundancy built in. The file server that will be hosted there will also mirror off to a physical box for backup to tape and for additional redundancy.

EDIT:

Funny story: I was just told that we need to start thinking about the environment that will replace this one, since we're EOLing it at 3 years.

Demonachizer fucked around with this message at 20:57 on Jun 7, 2013

Pile Of Garbage
May 28, 2007



demonachizer posted:

It satisfies the business needs of the project. If we have a location-based disaster that takes out both server rooms, business is stopped anyway. But we also have daily shipments to servers 3 miles away and offsite backups to Iron Mountain for business continuity purposes. On paper we're comfortable with a lot of downtime, really, since these aren't our clinical systems, just administrative poo poo and stuff like print servers.

The purpose of the second room is just in case we have a flood in the primary room or something. The environment is not as important as the data, and the data has a lot of redundancy built in. The file server that will be hosted there will also mirror off to a physical box for backup to tape and for additional redundancy.

EDIT:

Funny story: I was just told that we need to start thinking about the environment that will replace this one, since we're EOLing it at 3 years.

You wouldn't happen to be doing this project for a company located in Subiaco, Western Australia that starts with an M?

Demonachizer
Aug 7, 2004

cheese-cube posted:

You wouldn't happen to be doing this project for a company located in Subiaco, Western Australia that starts with an M?

Nope. Other side of the planet.

evil_bunnY
Apr 2, 2003

Sssh don't tell him what DR IS FOR!

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Still, having a separate building is better than most companies manage. Granted, across the street isn't amazing, but if it's acceptable to the business in terms of risk, where's the problem? It would be nice to have a >10 km site, but not all businesses can do or afford that.

It's better than many companies out there, but not perfect.

Pile Of Garbage
May 28, 2007



demonachizer posted:

Nope. Other side of the planet.

My bad. On-topic question: what has your experience with Iron Mountain been like? I've had to start dealing with them very frequently in my new job, and while they are extremely reliable, their TapeGuard website is poo poo, especially when you're dealing with a collection of 5,000+ tapes and trying to manually recall a large number of them. Also, the fact that they don't give you any details regarding scheduled or unscheduled deliveries until 2 hours before delivery is a bit crap.

Demonachizer
Aug 7, 2004

evil_bunnY posted:

Sssh don't tell him what DR IS FOR!

Yes, I know what DR is for/means. I could have (should have) used a different term, I guess. The business need dictates that we have data continuity, and that's all. It's provided for through offsite warehousing of data with Iron Mountain and shipments to servers across town. The second site is there to provide minimal downtime in case of hardware failures, network issues, and localized problems (flooding, fire, etc.).

Our mission-critical poo poo exists in three different places: the two aforementioned server rooms, plus another server room 3 miles away. Again, the data exists in many more places.

If there is a disaster event that causes a huge geographic outage, there is no business to be done anyway, as this is a school and all business is carried on in one specific building.

cheese-cube posted:

My bad. On-topic question: what has your experience with Iron Mountain been like? I've had to start dealing with them very frequently in my new job, and while they are extremely reliable, their TapeGuard website is poo poo, especially when you're dealing with a collection of 5,000+ tapes and trying to manually recall a large number of them. Also, the fact that they don't give you any details regarding scheduled or unscheduled deliveries until 2 hours before delivery is a bit crap.

Honestly, we haven't had much interaction with them in an emergency situation, and our tape volume is about 14 out per week on a two-week rotation (56 tapes total, maybe?), so we wouldn't see any management issues, etc. Our tape guy arrives at about the same time every pickup.

Demonachizer fucked around with this message at 23:18 on Jun 7, 2013

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

What sort of data are you replicating? What do you think synchronous replication gains you over scheduled replication jobs, and is it worth the degraded performance at the primary site? Will you even be recoverable at the secondary site if you're replicating data constantly, when it may be in a filesystem- or application-inconsistent state?

Demonachizer
Aug 7, 2004

NippleFloss posted:

What sort of data are you replicating? What do you think synchronous replication gains you over scheduled replication jobs, and is it worth the degraded performance at the primary site? Will you even be recoverable at the secondary site if you're replicating data constantly, when it may be in a filesystem- or application-inconsistent state?

VMs and fileserver data. I'm not sure there will be a huge performance degradation, since there's a direct gigabit fiber link, dedicated to the SAN, between the two rooms, with a limited number of hops (I think two). It's proving to be fine currently while both are in the same room; I'll be moving them next week, and if things still seem fine we'll stick with that model. If they aren't, we can switch to scheduled rep jobs. Are scheduled replication sets guaranteed not to be in a crash-consistent state? We're not hosting a major MSSQL instance or Exchange in this environment, and I'm under the impression that this use case is fine for synchronous replication.

As far as gains: less data loss in a failure scenario.

The file server data will also be mirrored at the guest level to another physical machine using Double-Take, so I think that level is covered. The operating state of the VMs in a failover could be questionable, but the only thing going in that has a DB is the vCenter instance. I haven't seen anywhere that this is a large concern, though. Am I missing something?

Demonachizer fucked around with this message at 05:17 on Jun 8, 2013

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Just throwing this out there but,

So what storage vendors do you all use and why?

Docjowles
Apr 9, 2009

Corvettefisher posted:

Just throwing this out there but,

So what storage vendors do you all use and why?

I'll have a better answer for "why" in a few weeks, but my soon-to-be employer is into NetApp to the tune of 20 petabytes. Thankfully they have a full-time storage admin, so I'll get to learn to deal with storage at that scale without just drinking from the firehose on day 1.

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

demonachizer posted:

VMs and fileserver data. I'm not sure there will be a huge performance degradation, since there's a direct gigabit fiber link, dedicated to the SAN, between the two rooms, with a limited number of hops (I think two). It's proving to be fine currently while both are in the same room; I'll be moving them next week, and if things still seem fine we'll stick with that model. If they aren't, we can switch to scheduled rep jobs. Are scheduled replication sets guaranteed not to be in a crash-consistent state? We're not hosting a major MSSQL instance or Exchange in this environment, and I'm under the impression that this use case is fine for synchronous replication.

As far as gains: less data loss in a failure scenario.

The file server data will also be mirrored at the guest level to another physical machine using Double-Take, so I think that level is covered. The operating state of the VMs in a failover could be questionable, but the only thing going in that has a DB is the vCenter instance. I haven't seen anywhere that this is a large concern, though. Am I missing something?

That's mostly pretty reasonable. Crash-consistent backups for VMs are almost always sufficient for recoverability, and that's often all you get with storage-array snapshot-based VM backups anyway, because VMware quiesced snapshots suck. Fileserver data is similarly stable enough that you won't have problems, especially with CIFS using temporary hidden files while modifications are made to open files. The one thing I would recommend is making sure you take semi-frequent backups of your vCenter SQL database, since crash-consistent DB backups should not be considered reliable. As long as you have a recent backup that is also replicated, you should be covered.
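
A minimal sketch of that last recommendation, assuming the vCenter database lives on SQL Server and sqlcmd is on the path; the database name (VIM_VCDB is a common default) and backup path are assumptions, so check them against your install:

```python
# Scheduled full backup of the vCenter SQL database (sketch, not a turnkey tool).
# Assumes SQL Server with Windows authentication; run from Task Scheduler.
import subprocess
from datetime import datetime

DB = "VIM_VCDB"       # assumption: common default vCenter DB name
DEST = r"D:\Backups"  # hypothetical backup destination

stamp = datetime.now().strftime("%Y%m%d_%H%M")
tsql = f"BACKUP DATABASE [{DB}] TO DISK = N'{DEST}\\{DB}_{stamp}.bak' WITH INIT"

# -S server, -E Windows auth, -Q run query and exit (standard sqlcmd flags)
subprocess.run(["sqlcmd", "-S", "localhost", "-E", "-Q", tsql], check=True)
```

Replicate the .bak along with everything else and there's a consistent restore point even if the VM-level replica comes up dirty.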

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Corvettefisher posted:

Just throwing this out there but,

So what storage vendors do you all use and why?

Nimble, because it's a ton of power in a 3U box. If space is a concern, it's hard to beat Nimble's performance and capacity per U.

NetApp, because it can do everything (SAN/NAS, encryption, mirroring, archiving) with solid software integration for everything (SQL, Exchange, VMware, Hyper-V) and one of the best UIs I've worked with (System Manager).

HP, because a lot of clients already had it in place and it was a cheap way to RAID a lot of disk together.

evil_bunnY
Apr 2, 2003

NetApp, because it's ZFS+ except not owned by Oracle. Also we got it p cheap. And it made backups easier. And VMware on NFS is nice.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Corvettefisher posted:

Still, having a separate building is better than most companies manage. Granted, across the street isn't amazing, but if it's acceptable to the business in terms of risk, where's the problem? It would be nice to have a >10 km site, but not all businesses can do or afford that.

It's better than many companies out there, but not perfect.
As long as you're able to vault long-term data offsite, this really isn't a bad setup for companies that have almost all their employees in the same location as their DC anyway. It gives you some redundancy against the most common physical threats to a datacenter -- flood, fire, electrical -- and the close proximity makes it very easy to do things like stretching SANs and Ethernet networks over dark fiber to build very simple DR without the cost of something like SRM.
