YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

FISHMANPET posted:

And when I say continuous, I really mean continuous. So as a file is written to a local directory, that same data is streamed to a networked location.

I don't think you're going to find anything that lets you do truly synchronous replication between a local file on a standard filesystem like ext3 or NTFS and a remote location running some network sharing protocol. There are just too many technical issues there. You'd also probably have terrible performance due to write latency.

Something like GlusterFS or AFS is probably the closest you're going to get. The only things I know of that do anything like what you're describing, with local and remote copies doing fully continuous replication, are products from hardware vendors like EMC or HDS, where the tight coupling with hardware ensures that bad things don't happen when writes back up a little.
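
For a rough idea of what the GlusterFS route looks like, here's a minimal sketch of a two-node replicated volume; the hostnames and brick paths are placeholders, not anything from your setup:

code:
  # on node1, after installing glusterfs-server on both boxes
  gluster peer probe node2
  gluster volume create repvol replica 2 node1:/bricks/repvol node2:/bricks/repvol
  gluster volume start repvol
  # clients mount it like any other network filesystem
  mount -t glusterfs node1:/repvol /mnt/repvol

Writes go to both bricks synchronously, which is also why the latency to the remote copy matters so much.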

Moey
Oct 22, 2010

I LIKE TO MOVE IT

M@ posted:

Not to be overly spammy, but if you can use refurb equipment, my company deals in all 3 of those product lines. I'd be happy to quote out some gear for you if you're interested.

We have been dabbling in more used/refurb equipment. It hasn't bitten us in the rear end...yet.

All of our storage stuff is still new, but switches and servers have been refurbs lately.

CF, let me know what you end up with for new storage (also what space/speed specs drove your decision).

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Moey posted:

We have been dabbling in more used/refurb equipment. It hasn't bitten us in the rear end...yet.

All of our storage stuff is still new, but switches and servers have been refurbs lately.

CF, let me know what you end up with for new storage (also what space/speed specs drove your decision).

I am evaluating some VNX 5300 and VNXe 3150/3300 units, as well as some Cybernetics and NetApp gear.

If you want to talk about some storage stuff, email me; I might be able to do a join.me demo and show you some stuff on the NetApp I work with. I don't mind posting the poo poo here, but to fully answer your questions PM/email me.


E: I would cheap out on anything but storage; for virtualization it is the heart and soul of your infrastructure. I would love to take a look at the refurbs, but honestly I don't feel comfortable implementing them. However, I might know someone who is.

Dilbert As FUCK fucked around with this message at 01:09 on Oct 13, 2012

optikalus
Apr 17, 2008

NippleFloss posted:

I don't think you're going to find anything that lets you do truly synchronous replication between a local file on a standard filesystem like ext3 or NTFS and a remote location running some network sharing protocol. There are just too many technical issues there. You'd also probably have terrible performance due to write latency.

drbd does it pretty well, and can do async writes.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

optikalus posted:

drbd does it pretty well, and can do async writes.

This. If it is caching to flash, you should be able to do DRBD without it mucking anything up.

If you want to see it in action, you can easily set it up on Openfiler 2.99.
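
For anyone who wants to poke at it, this is roughly what a single async-replicated resource looks like in drbd.conf; protocol A is the async mode, and the hostnames, devices, and addresses below are just placeholders:

code:
  resource r0 {
    protocol A;                  # asynchronous replication
    device    /dev/drbd0;
    disk      /dev/sdb1;         # local backing device
    meta-disk internal;
    on node1 {
      address 192.168.10.1:7788;
    }
    on node2 {
      address 192.168.10.2:7788;
    }
  }

Bring it up with drbdadm create-md r0 and drbdadm up r0 on both boxes, then promote one side with drbdadm primary r0 (the very first sync needs the force/overwrite-peer option).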

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Corvettefisher posted:

I am evaluating some VNX 5300 and VNXe 3150/3300 units, as well as some Cybernetics and NetApp gear.

If you want to talk about some storage stuff, email me; I might be able to do a join.me demo and show you some stuff on the NetApp I work with. I don't mind posting the poo poo here, but to fully answer your questions PM/email me.


E: I would cheap out on anything but storage; for virtualization it is the heart and soul of your infrastructure. I would love to take a look at the refurbs, but honestly I don't feel comfortable implementing them. However, I might know someone who is.

I'll try to remember to PM you later on it. Since I only work internally and our firm isn't giant, I don't get to deal with much outside of the stuff my old boss randomly picked (a few Dell MD3220i, QNAP NAS units). We have expanded on that "standard" and stuck with iSCSI. I currently don't see the need to introduce any NFS storage since iSCSI has been meeting my needs, but seeing the management of some other block storage devices would be cool.

Is anyone here running DRBD in production? I have always found it interesting (my old boss used to always talk about using "doubletake") but have not heard much talk of it.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Moey posted:

I'll try to remember to PM you later on it. Since I only work internally and our firm isn't giant, I don't get to deal with much outside of the stuff my old boss randomly picked (a few Dell MD3220i, QNAP NAS units). We have expanded on that "standard" and stuck with iSCSI. I currently don't see the need to introduce any NFS storage since iSCSI has been meeting my needs, but seeing the management of some other block storage devices would be cool.

Is anyone here running DRBD in production? I have always found it interesting (my old boss used to always talk about using "doubletake") but have not heard much talk of it.

Sounds good, you can PM me and I'll give you my cell to do whatever with.

I use DRBD/rsync on some of my gov, bank, or medical deploys where data has to be in two separate boxes. If you copy it from flash it doesn't hit the production SAN/NAS, which is great.
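
For the rsync half, it doesn't have to be fancy; a cron entry along these lines is often enough (the path, hostname, and schedule here are made up for the example):

code:
  # /etc/crontab: push /data to the second box every 15 minutes
  # -a preserves ownership/permissions, --delete keeps the replica from drifting
  */15 * * * * root rsync -aH --delete /data/ backupbox:/data/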

YOLOsubmarine
Oct 19, 2004

When asked which Pokemon he evolved into, Kamara pauses.

"Motherfucking, what's that big dragon shit? That orange motherfucker. Charizard."

optikalus posted:

drbd does it pretty well, and can do async writes.

DRBD isn't going to allow access on the secondary node as well, at least not without failing over, which was one thing he mentioned wanting. It's also not going to run on Windows at all, unless I missed there being a Windows client.

BelDin
Jan 29, 2001

NippleFloss posted:

DRBD isn't going to allow access on the secondary node as well, at least not without failing over, which was one thing he mentioned wanting. It's also not going to run on Windows at all, unless I missed there being a Windows client.

I'm pretty sure you can, but you have to use a cluster filesystem (like GFS or OCFS2) to make it work. My fuzzy recollection is that I built a Red Hat web/database cluster back around 2008 using DRBD and GFS, and we could publish to the secondary node, which would then replicate to the primary. The filesystem was active/active, even though the services were active/passive.
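
If it helps jog anyone's memory, the dual-primary part is just a couple of extra stanzas in the DRBD resource plus a cluster filesystem on top so both nodes can mount it at once. A rough sketch, with the cluster name, journal count, and device as placeholders:

code:
  resource r0 {
    net {
      allow-two-primaries;       # both nodes may be Primary at the same time
    }
    startup {
      become-primary-on both;
    }
    # device/disk/on-host sections as usual
  }

  # format the DRBD device with GFS2 using the DLM for locking;
  # the name before the colon must match your cluster name
  mkfs.gfs2 -p lock_dlm -t mycluster:data -j 2 /dev/drbd0

The services can stay active/passive on top of that; the filesystem itself is what's active/active.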

optikalus
Apr 17, 2008

BelDin posted:

I'm pretty sure you can, but you have to use a cluster filesystem (like GFS or OCFS2) to make it work. My fuzzy recollection is that I built a Red Hat web/database cluster back around 2008 using DRBD and GFS, and we could publish to the secondary node, which would then replicate to the primary. The filesystem was active/active, even though the services were active/passive.

Yep, though I think RHEL uses LVM2 now instead of DRBD for their own GFS2 active/active stuff.

Everything except for MySQL worked great in my testing (LAMP). MySQL did not play nice when another server had a lock set.

evol262
Nov 30, 2010
#!/usr/bin/perl
Yes, but no.

RHEL always used cman for managing the DLM for GFS2. clvmd isn't required, but it's often used. You can certainly get a "cluster" up and running (just to start the storage/DLM stuff, really) only running GFS2.

You can also use the native OCFS2 cluster.conf (or the RHCS XML cluster.conf) to start that DLM.

Either works with DRBD active/active. No need for clvmd.
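
For reference, a bare-minimum two-node RHCS config that does nothing but bring up cman and the DLM looks something like this (cluster and node names are placeholders, and fencing is omitted here even though you really want it in production):

code:
  <?xml version="1.0"?>
  <cluster name="storage" config_version="1">
    <!-- fencing omitted for brevity -->
    <cman two_node="1" expected_votes="1"/>
    <clusternodes>
      <clusternode name="node1" nodeid="1"/>
      <clusternode name="node2" nodeid="2"/>
    </clusternodes>
  </cluster>

Start cman on both nodes and GFS2 mounts pick up the DLM from there.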

kzersatz
Oct 13, 2012

How's it the kiss of death, if I have no lips?
College Slice
NetApp PS guy reporting in.

I'd love to see a cross comparison from an EMC customer who happens to be a NetApp user as well; I know there is a LONG love/hate relationship between the two...

Also, to the dudes above me doing a POC of NetApp: ask questions, ask LOTS of questions... heck, PM me some if you want help. It's an amazing product (I'm a smidge of a fanboy...), but it isn't your traditional SAN at all...

M@
Jul 10, 2004

Moey posted:

We have been dabbling in more used/refurb equipment. It hasn't bitten us in the rear end...yet.

All of our storage stuff is still new, but switches and servers have been refurbs lately.


Refurb gear is pretty reliable. I offer 1 year NBD replacement on everything I sell and I replace maybe 3-4 drives a month.

If you guys ever want to kick around new vs used on anything, just shoot me a PM. Happy to assist fellow goons.

no pubes yet sorry
Sep 11, 2003

I haven't read through this huge rear end thread yet, but is there a thread for discussing straight-up backup solutions for medium-sized businesses? We're heading away from tapes for the first time, but consumer-level stuff probably isn't going to cut it and an enterprise-level SAN is way overkill.

Syano
Jul 13, 2005
This is as good a place as any. Discuss away and I am sure people will chime in.

Rhymenoserous
May 23, 2008
I've heard great things about Unitrends, but honestly any full-fledged backup solution is going to be pricey. I'd look back into the SAN market, honestly. It's getting very competitive and prices are plummeting compared to, say, five years ago.

Vanilla
Feb 24, 2002

Hay guys what's going on in th

platypusmalone posted:

I haven't read through this huge rear end thread yet, but is there a thread for discussing straight-up backup solutions for medium-sized businesses? We're heading away from tapes for the first time, but consumer-level stuff probably isn't going to cut it and an enterprise-level SAN is way overkill.

So by moving away from tape you're possibly heading towards disk-based deduplication appliances?

These are not SAN devices but can certainly be considered 'Enterprise'. They're used by a poo poo ton of companies these days, from huge 20+PB organisations to small-to-medium businesses.

http://www.theregister.co.uk/2012/06/26/emc_backup_appliance_king/

Data Domain is the current king. They start with little DD160 arrays which are cheap (I recall $10-20k??).

Amandyke
Nov 27, 2004

A wha?

Vanilla posted:

Data Domain is the current king. They start with little DD160 arrays which are cheap (I recall $10-20k??).

I've got a lot of customers that love their Data Domain appliances. If my opinion counts for anything, I may be slightly prejudiced.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

We're moving away from tape next year. Not completely, but we'll probably be going all Data Domains, then replicating back to HQ and the big Data Domain there, and spinning off monthlies to tape from there. Right now we have multiple sites all using tape and all using Iron Mountain. Media and service savings should make this at least break even, not counting other savings.

Rhymenoserous
May 23, 2008
Whoa, when did EMC buy Data Domain?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Rhymenoserous posted:

Whoa, when did EMC buy Data Domain?

Over 3 years ago.

evil_bunnY
Apr 2, 2003

Say whatever you want about their products; their vertical acquisitions make a ton of sense.

evil_bunnY fucked around with this message at 01:47 on Oct 17, 2012

Mierdaan
Sep 14, 2004

Pillbug
Any recommendations on 3rd party service providers for out-of-warranty NetApp gear? Already checked out Service Express.

qutius
Apr 2, 2003
NO PARTIES

Mierdaan posted:

Any recommendations on 3rd party service providers for out-of-warranty NetApp gear? Already checked out Service Express.

I've had good luck with Canvass Systems for a variety of vendor hardware.

Nomex
Jul 17, 2002

Flame retarded.
http://www.netapp.com/us/company/news/news-rel-20120821-746791.html

I hope I still have time to get a small demo unit squeezed into next year's budget.
I wonder if you can stack a bunch of small Fusion-io cards together to get that 2TB. I could use a few million extra IOPS.

18 Character Limit
Apr 6, 2007

Screw you, Abed;
I can fix this!
Nap Ghost

Nomex posted:

http://www.netapp.com/us/company/news/news-rel-20120821-746791.html

I hope I still have time to get a small demo unit squeezed into next year's budget.
I wonder if you can stack a bunch of small Fusion-io cards together to get that 2TB. I could use a few million extra IOPS.

I've seen five Duos together in one server chassis before.

Mierdaan
Sep 14, 2004

Pillbug
Anybody experimented with Ceph at all? Was just reading this post and hadn't even heard much about it yet.

Nomex
Jul 17, 2002

Flame retarded.

18 Character Limit posted:

I've seen five Duos together in one server chassis before.

I know the server will take it, but will the software support it? Hey, any NetApp engineers in here?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Might be more a networking question but,

Is anyone here actively using FC for anything other than high-end transaction servers? I've had a few customers with existing 4Gb/s FC networks for their SAN looking to upgrade, and I usually just go with 10Gb to run iSCSI or FCoE. But I get those "Is this guy really suggesting that?" moments from some people on iSCSI/FCoE. It might just be dinosaurs in the IT management sector who haven't looked at what is going on around them and think FC @ 8Gb/s is the poo poo. Normally I'll sell the admins pretty fast when I show them the performance numbers, and sell the managers when I show them the cost, performance, and manageability of iSCSI/FCoE.

Just gets annoying having to repeat myself over and over; didn't know if anyone had viewpoints that could shed some light on it.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

We're using 8Gb FC but that's because it's what the existing SAN guy knows and is comfortable with.

Amandyke
Nov 27, 2004

A wha?
The reason you stick with FC is that the infrastructure (cables and such) is usually pretty future-proof. Lay the cable once and you can keep swapping out SFPs/switches for newer, faster speeds without having to re-wire your datacenter.

That, and current Brocade switches support 16Gbps FC.

evol262
Nov 30, 2010
#!/usr/bin/perl

Corvettefisher posted:

Might be more a networking question but,

Is anyone here actively using FC for anything other than high-end transaction servers? I've had a few customers with existing 4Gb/s FC networks for their SAN looking to upgrade, and I usually just go with 10Gb to run iSCSI or FCoE. But I get those "Is this guy really suggesting that?" moments from some people on iSCSI/FCoE. It might just be dinosaurs in the IT management sector who haven't looked at what is going on around them and think FC @ 8Gb/s is the poo poo. Normally I'll sell the admins pretty fast when I show them the performance numbers, and sell the managers when I show them the cost, performance, and manageability of iSCSI/FCoE.

Just gets annoying having to repeat myself over and over; didn't know if anyone had viewpoints that could shed some light on it.

10Gb iSCSI is fine, but I really don't see the advantage to FCoE other than reduced cabling. HBAs are still expensive, it still has some problems hopping nodes (which fabrics over not-Ethernet solved a long time ago), it still needs switch support, etc. If they already have a fiber infrastructure, why not put in new Brocades and HBAs and stick with 8Gb, or bump to 16Gb? It's what they know. It's not that much slower than 10Gb FCoE, etc.

iSCSI is a much easier sell. I still have no idea why you'd be pushing FCoE, though. Do you work on commission?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

evol262 posted:

why not put in new Brocades and HBAs and stick with 8Gb? It's what they know. It's not that much slower than 10Gb FCoE, etc.

I can only push Cisco gear to customers. 90% of the time I do iSCSI for simplicity's sake and whatnot; however, I do throw out the "we can do FCoE if you require it" option.

evol262
Nov 30, 2010
#!/usr/bin/perl
I guess you answered the "why do I push iSCSI and FCoE" question. Last I checked, Cisco doesn't even have a converged switch that does 8Gb FC (much less 16Gb), so you can't possibly try to convince customers to do a gradual switchover from traditional FC to FCoE.

To put this in a different context, you're walking into an AIX/Solaris shop saying "I'm selling Linux. Linux is the future; you should buy it. You have to give up your existing infrastructure for nothing, though. It won't work with the stuff you have at all."

I wonder why you get the "is this guy really suggesting that?" moments.

Edit: looks like you can in fact get a 6-port 8Gb FC module for the Nexuses. Our vendor puts it at approximately the same price as a 24-port Brocade. :allears:

evol262 fucked around with this message at 18:33 on Oct 18, 2012

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

evol262 posted:

I guess you answered the "why do I push iSCSI and FCoE" question. Last I checked, Cisco doesn't even have a converged switch that does 8Gb FC (much less 16Gb), so you can't possibly try to convince customers to do a gradual switchover from traditional FC to FCoE.

To put this in a different context, you're walking into an AIX/Solaris shop saying "I'm selling Linux. Linux is the future; you should buy it. You have to give up your existing infrastructure for nothing, though. It won't work with the stuff you have at all."

I wonder why you get the "is this guy really suggesting that?" moments.

Edit: looks like you can in fact get a 6-port 8Gb FC module for the Nexuses. Our vendor puts it at approximately the same price as a 24-port Brocade. :allears:

Yeah, the Nexuses are there and workable, but they cost a small fortune. I don't have anything against other vendors, but company policy and all that dictates what I can/cannot sell or push to clients.

That's a good analogy

Nomex
Jul 17, 2002

Flame retarded.
I use 4/8Gb FC for a ton of stuff. It's more due to the fact that someone up high hates iSCSI, and we just got Nexus switches, so FCoE is going to be new. Forget about the future-proof cable stuff, because you can run 10GigE through fiber as well. Honestly, if I could I would convert our entire environment to FCoE and eliminate our Brocade infrastructure. With Fibre Channel pass-through on the Nexus stuff there's really no speed penalty with FCoE on 10GigE vs FC on 8Gb Fibre Channel.

To the post above me: Nexus gear may be expensive, but so are fabric switches. I think our last cost for Brocade licensing was about $1300/port.

Nomex fucked around with this message at 21:15 on Oct 18, 2012

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

Edit: looks like you can in fact get a 6 port 8GB FC module for the Nexuses. Our vendor puts it at approximately the same price as a 24 port Brocade. :allears:
Doesn't the Brocade only come with 8 ports activated?

We use FC all over the place because it's easier and simpler than iSCSI, with less bullshit, especially once large-scale MPIO gets involved. The cost concern is pretty minimal, but I can see FC going away for many of our less-important applications once our top-of-rack switching infrastructure moves to 10-gig a few years down the road.

Vulture Culture fucked around with this message at 21:17 on Oct 18, 2012

evol262
Nov 30, 2010
#!/usr/bin/perl

Misogynist posted:

Doesn't the Brocade only come with 8 ports activated?

We use FC all over the place because it's easier and simpler than iSCSI, with less bullshit, especially once large-scale MPIO gets involved. The cost concern is pretty minimal, but I can see FC going away for many of our less-important applications once our top-of-rack switching infrastructure moves to 10-gig a few years down the road.

No, I meant with 24 ports activated (the "pay to activate hardware you already have" model IBM pioneered is idiotic and we don't do it), but we probably have more buying power than most.

iSCSI MPIO isn't horrible as long as your network guys are good, but I greatly enjoy the fact that I don't need to involve them in our storage network on fiber.
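
For anyone curious what that actually involves on the Linux side, iSCSI MPIO is roughly this (the portal addresses are placeholders, and dm-multipath collapses the sessions into one device):

code:
  # log in to the same target over two different subnets/NICs
  iscsiadm -m discovery -t sendtargets -p 10.0.1.10
  iscsiadm -m discovery -t sendtargets -p 10.0.2.10
  iscsiadm -m node --login
  # verify both paths show up under a single multipath device
  multipath -ll

The fiddly part is the network config underneath it, which is exactly why it's nice not to need the network team for FC.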

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

iSCSI MPIO isn't horrible as long as your network guys are good, but I greatly enjoy the fact that I don't need to involve them in our storage network on fiber.
This is honestly a big part of why we run FC.

Nomex
Jul 17, 2002

Flame retarded.

User 39204 posted:

This is honestly a big part of why we run FC.

You're missing the big picture. You offload the traffic to the network guys so when they break your storage network, you get overtime. It's a win-win!
