Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Does anyone know how to make a RAID 10 in FreeNAS? I can't seem to find it anywhere; all the guides point me to RAID 5, 1, or 0, but not 1+0. Am I missing something, or does FreeNAS not do this? Yes, this is software RAID.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

H110Hawk posted:

Maybe it's like some really terrible LSI firmware versions which are out there. Do you have to make a bunch of raid 1's, then start over and make a raid 0 out of all of your mirrors?

I tried making two RAID 1 arrays, then going to RAID 0 and adding both arrays, but to no avail.
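(For anyone searching later: from the FreeNAS shell you can get the 1+0 layout by building a ZFS pool as a stripe of mirrors. A minimal sketch, assuming the four data disks show up as da0-da3; the pool and device names are placeholders:)

    zpool create tank mirror da0 da1 mirror da2 da3   # two mirrored pairs, striped together = RAID 1+0
    zpool status tank                                  # confirm the stripe-of-mirrors layout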

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
I love this Dell MD3200 PowerVault. Currently I am going over a few things about disaster recovery and looking to automate the servers' shutdown when running on APC battery power. Basically, the way to turn it off is... you don't.

thanks dell

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
I gotta say, FreeNAS 8.2 is really impressive.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
My FreeNAS box just died today... Only 150GB of data; good thing it was only a test machine.

Dilbert As FUCK fucked around with this message at 18:16 on Oct 20, 2011

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Studying for my VCP, and soon the VCAP. Got any good suggestions for a complex SAN environment to build and try out?

I have Storage DRS going, but I want something that will take me a while to chew on. Mostly doing iSCSI; not sure if I can do FCoE in FreeNAS.

Dilbert As FUCK fucked around with this message at 22:05 on Dec 12, 2011

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
I asked this on the last page, but I don't think anyone saw it since it was the last reply. Anyone know a challenging SAN environment to set up? Studying for my VCP and want to have storage down pat, even if Storage DRS makes it easy.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Internet Explorer posted:

I'm a little confused by what you're asking. You mean like an open SAN system that you can use on whitebox hardware, or are you going to buy some NetApp/EMC gear to test with?

Sorry, allow me to explain. I plan to go for the VCAP-DCA some day so I can work as a virtualization admin or engineer, so I want to know as much as I can about everything.

Running a basic SAN environment at home (FreeNAS/Openfiler doing iSCSI) with Storage DRS and Storage vMotion fully working. Just wondering what areas would be good to know heading into the job market. I do a good deal of HA, DRS, FT, and vMotion, and am still working on getting DPM fully working. Storage-wise: FC, FCoE, iSCSI, NFS, NAS replication, and iSCSI multipathing.

I have a good bit of EMC experience and almost got the cert; I guess I can still take it after taking the EMC class. Just looking for something more.

Dilbert As FUCK fucked around with this message at 19:22 on Dec 15, 2011

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Internet Explorer posted:

I am not exposed to a wide variety of tech in my job, but it sounds like you have most of it covered. You said NAS replication, what about Block level replication? Snapshots, Dedupe, etc? To me the ability to deal with performance issues and the ability to plan out IOPS, etc., for new projects is a big one.

Haven't done block-level replication; might be fun to do. Snapshots I have done; dedupe I will look into. My previous job had me consolidate the whole infrastructure (4 Windows hosts, 2 Linux hosts, 1 BSD host) onto shared storage, so I'm good there, as well as NIC teaming and failover.

Thanks!

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Moey posted:

Corvettefisher, what kind of hardware are you using for your home test lab? I am setting up something similar for myself for both studying and personal use with some old hardware I have laying around. I'm still on the fence about either building a dedicated storage box, or doing something virtual.

I can get the full hardware list at home. Most of my work up until now has been done in Workstation, all thin provisioned on SSDs:
3-4 FreeNAS VMs
2 Windows 2008 R2 installs (one doing routing and remote access to give the VMs internet on the virtual network, the other hosting vCenter)
3 ESXi 5 VMs
4 XP installs
2 RH/Fedora installs
4 Win7 installs
1 BSD install
1 2003 server

It helps that I have an X6 ($150), 32GB of RAM (which is only $200 now), and a 256GB SSD ($300) with 25k IOPS; getting another soon.
I have some other hardware at home too, but running all that on my PC puts me at about 80% CPU, 75% RAM, and 200 out of 248GB used on the SSD.

Dilbert As FUCK fucked around with this message at 20:26 on Dec 15, 2011

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

ehzorg posted:

Yep.

So I have a feeling that any solution I choose is going to be "OMFG" so much better than what they've got... do you think there's a significant reason to push for a real enterprise level solution over say, a big 'ol NAS with enough storage? I can't see more than 10 simultaneous users accessing this thing at once at any point.

My biggest fear is recommending a solution which ends up underperforming - second only to recommending a solution which is so loving expensive it gets me laughed out of a job.

As you recommended I'll see what vendors I can get a hold of here in die Schweiz. It would be nice to hear someone else second my plea for basic IT necessities around here.

If I were you, this is how I would do it: order something like an NX3100 and get 10-20TB on that. 7.2k drives will give you a decent amount of I/O (1k-1.25k IOPS). See whether you are maxing out I/O, and order more to fit space and I/O requests. Provision LUNs and arrays per server and client as needed. You are going to need more than one storage device anyway, so you might as well start off with a modular storage array.

At my previous employer I worked with a 20TB PowerVault NAS in RAID 0 hosting all my VM servers: SharePoint, Lync, two 2008 R2 domain controllers, NFS, DFS, and backups, and it served 150-200 clients with ease.

Don't run it in RAID 0, though; RAID 5/6 should still give you ample performance over daisy-chaining externals.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Oh, Openfiler has dedupe and DRBD?

So long, FreeNAS.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Internet Explorer posted:

I wouldn't sell Equallogic short.

This. For clients who want a VM setup, I usually just go with an EqualLogic setup, as it is really easy to install and maintain, and provides decent value.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Get that EMC storage book I posted in the other thread; it will really help you make a clearer decision.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
I haven't toyed around too much with things other than DRBD, and I started looking at rsync. From my light googling it looks like it is just file-level instead of block-level. It also looks a good deal easier to set up. The only question I have is: how does this play against a DRBD active/passive setup for an ESXi HA environment?
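(For context, the file-level sync I mean is just something like the line below run on a schedule; the paths and hostname are made up. DRBD, by contrast, mirrors the underlying block device continuously:)

    rsync -avz --delete /mnt/storage/ backup-nas:/mnt/replica/   # copies changed files only, not blocks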

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Has anyone had any trouble setting up active/active storage arrays with CentOS? Any issues or problems anyone has run into? Openfiler only does active/passive, and FreeNAS 8 only does rsync, I believe.

Oddhair posted:

My boss just walked up and said we want to spend ~$20K-30K on a pair of servers and possibly some shared storage...by Thursday. I haven't run perfmon on the current servers in question (and was actually spinning up some logged performance stats earlier today) so I'm not sure where to start. I've been pushing to virtualize for a while, since our SQL server is a single socket, and our voicemail server is a dual PIII, so off the cuff we're a good candidate for server consolidation. Is there a way to get decent quotes with almost no information up front?

Dell will let you build/customize your own NAS devices and get a quote instantly. The NX300 and NX3100 are good if you work in a medium-size company, and some R210s are cheap, good first-time-virtualizing type servers.

VVVV - to answer that: if you just need something to go off of, servers + storage + an Essentials Plus kit should run you right up near $25-26k, and if you don't already have some, get some gigabit switches and make a network just for storage.

Dilbert As FUCK fucked around with this message at 22:14 on Feb 28, 2012

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
A lot of our clients are wanting cloud solutions for things now, so I am trying to piece together a "cloud" storage solution. I was thinking:

1) Lease out a NAS device to clients
2) NAS device holds local backups for the client for faster redeploys

Here is where I am a bit torn:

A) Rsync/Bacula to our "private cloud" during the night, compress further with 7z => to our cloud server
Pros:
More redundancy
Getting data to the client if the NAS fails and backups are needed is faster; our clients could get a full backup by us driving over to them with an external drive holding backups from the past two weeks
Cons:
More steps

B) NAS device runs Bacula => on-site compress => upload to cloud
Pros:
We don't clog any of our WAN; the backup depends on the client's connection
We don't have to use any of our storage
Cons:
Slowish

Kinda torn between these two. I prefer what A offers the customer, but B is simpler.

Plan A would be 55c/GB + $50/mo + ~$150 for NAS setup/deploy

Plan B would be $10/mo per PC (no charge per GB) + $50/mo + ~$150 NAS setup/deploy
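(To make plan A concrete, the nightly job on the client NAS would look roughly like the sketch below; the schedule, paths, and hostname are all made up:)

    # hypothetical crontab entry: compress the latest backup set with 7z, then ship it to our cloud server
    30 1 * * * 7za a -t7z /backups/staging/client-$(date +\%F).7z /backups/latest/ && rsync -avz /backups/staging/ backups.ourcloud.example:/clients/clientA/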

Thoughts? Which would you all choose given the option?

Dilbert As FUCK fucked around with this message at 20:07 on Mar 16, 2012

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Misogynist posted:

This is not something you want to roll and support yourself. The poo poo you will land yourself in when your backups fail will end your company. There's dozens of cloud backup vendors out there who do rebranding partnerships all the time. Talk to them instead.

Normally I would say yeah to that, but another company is offering something similar to our clients and we were asked to make a counter-offer; not much you can do. I figured out the easiest way to do it, and I feel like an idiot for not seeing it sooner:

local NAS (local backups) => our servers (offsite) => CrashPlan PRO (cloud)
1. Local keeps 2 weeks
2. Ours keeps 1-2 months
3. CrashPlan PRO keeps all backups for $client

Dilbert As FUCK fucked around with this message at 19:52 on Mar 16, 2012

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Anyone have some advice for setting up an active/active NAS? Openfiler, to my knowledge, only does active/passive. I know CentOS supports GFS, so active/active would be doable, but any suggestions on other distros or what to look into for this would be appreciated.

I tried asking a few people in my storage/VMware class, and almost everyone answers with "oh, we just pay the EMC/NetApp people to set it up, we don't really touch that too much."

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

NippleFloss posted:

You probably get that answer because it's not a common thing to do because the benefits of active/active for NAS aren't that great. Any single export or share is only going to be accessible from a single physical host, just as it would be in an active/passive config. Yes, you can provide a better balance between all exports if you have active active, but if your intent is to run both hosts at above 50% utilization then when you do have a failure event you're going to be hosed.

That said, CentOS would be my choice if I was intent on doing something like that.

Yeah, I know, but assuming your SLAs are set properly, active/active can be a good move even through a host failure (other than users noticing it is slower than usual). Running with active/passive right now and just wanting to explore more into storage.

I guess I thought most SAs would know how to set up active/passive or active/active storage from the ground up. But it seems most people just pay NetApp/EMC/Dell/Oracle to do it, which struck me as odd.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

NippleFloss posted:

http://sources.redhat.com/cluster/doc/nfscookbook.pdf

That *should* cover it. You might also have better luck asking in the Linux questions thread.

I really didn't run across active/active NAS too much as an SA since usually if the NAS environment had high enough performance demands to need active/active clustering the customer was usually willing to pay a vendor to do it.

Yeah, my RHCSA book actually has a section on this, I just saw. They used Fedora, but I am sure it will work on CentOS, and if I have to use Fedora, oh well, they are very similar.
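(Rough sketch of the GFS2 side on CentOS 6, assuming a working cman/clvmd cluster is already up; the cluster name, volume group, and mount point are placeholders:)

    yum install gfs2-utils lvm2-cluster
    mkfs.gfs2 -p lock_dlm -t mycluster:export0 -j 2 /dev/vg_san/lv_export0   # one journal per node
    mount -t gfs2 /dev/vg_san/lv_export0 /export   # mount on both nodes, then export over NFS from each head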

optikalus posted:

:words:

Thanks! I have been playing around with corosync on my Openfiler setups. I still don't know why crm_mon --one-shot -V returns a "connection failed", which is very helpful in troubleshooting :rolleyes:. I have gotten A/P to work a couple of times, but now I seem to be striking out and can't figure out why.

I am following this guide almost step by step, although I am using a 60GB RAID array instead of a 4GB disk/partition.
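(In my experience "connection failed" from crm_mon usually just means it can't reach the pacemaker daemons, i.e. they aren't running or pacemaker isn't enabled as a service in corosync.conf. The quick checks I'd run are roughly these; exact service names may differ on Openfiler:)

    service corosync status
    corosync-cfgtool -s                              # ring status; should report no faults
    grep -iE 'pacemaker|error' /var/log/messages | tail -20
    crm_mon -1                                       # same as --one-shot once the daemons are up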

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Alright, my current workplace is using Openfiler virtual NAS devices for shared storage. This is working thus far, but I would really rather get a SAN going; I just need to bump heads with some of you all. I posted in the VMware thread, but I have gotten some updated numbers and feel this thread is a touch more appropriate.

Here is the equipment I want to buy. All are NX3100 PowerVaults: OS drives 146GB 15k SAS in RAID 1, dual PSUs, dual 10G Ethernet, 2x 1Gb backend connectors, 512MB write cache on all, quad cores + 6GB RAM on each.
NAS A - 10x 600GB 15k drives, RAID 5 + 1 hot spare
High-end, disk-I/O-intensive storage: SQL, web-facing servers, priority VDI, small NFS share
NAS B - 10x 1TB 7.2k SAS drives, RAID 5
Backend servers, domain controllers, non-priority VDI, other VMs, other NFS servers
NAS C - 10x 2TB 7.2k SAS drives, RAID 5 + 1 hot spare per array
Backup server
10TB will be provisioned for GFS/DRBD in an active/active setup with NAS B
10TB deduped, storing backups of critical information; after X days files are sent to CrashPlan

These will run CentOS 6.2, primarily iSCSI with jumbo frames set at 9000. I might, however, run the datastores over FCoE, since it seems the X520-T2 supports it, cutting out latency and TCP overhead. B and C will do active/active storage, then iSCSI to the ESXi boxes; A will use C as a backup device.
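(For reference, the CentOS 6 config I have in mind is roughly the below; the interface name, IQN, volume names, and subnet are placeholders:)

    # /etc/sysconfig/network-scripts/ifcfg-eth2 -- storage NIC with jumbo frames
    DEVICE=eth2
    BOOTPROTO=none
    IPADDR=10.10.10.11
    NETMASK=255.255.255.0
    MTU=9000
    ONBOOT=yes

    # /etc/tgt/targets.conf (scsi-target-utils) -- one iSCSI target per LUN group
    <target iqn.2012-05.lab.example:nas-b.vmfs01>
        backing-store /dev/vg_nasb/lv_vmfs01
        initiator-address 10.10.10.0/24
    </target>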

All this will hook up to 5 ESXi servers running VDI and other workloads, plus 3 servers running ESXi 4.1, with 50+ VMs, mostly Windows clients and servers. I know in about 6 months we want to expand our VMware work with clients (hosting solutions, demonstrations for clients, and other things), so I find it better to get the ball rolling now.

I would love to go with EMC or NetApp, but I can get this up and running for around $30k. I'm open to other ways to go about this; I might just do two boxes of 600GB 15k drives (~12TB) with active/active, then back up to a NAS C with 20TB, but I like the idea of tiering my storage.

Dilbert As FUCK fucked around with this message at 17:57 on May 23, 2012

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Cpt.Wacky posted:

I don't have any comments on your design but I'm in a similar position of needing a real SAN. I'm curious what led you to the Powervaults over something like Equallogic. Is it just price?

Worked with them before at other places; they are decent for the price, and I never had gripes with them compared to HP's NAS offerings. The fact that I can dig down and customize them via the web interface is nice too.

Nukelear v.2 posted:

Why use NAS at all, why not just get some MD3xxxi's and just present iSCSI and use VMFS to your hosts? Or buy some EQL if you get a bigger budget.

I could go that way, but I have some 10GbE connections and 10Gb FCoE adapters on the hosts, and I would like to put those to use. Seeing how MDs start at 18 grand for one device, it doesn't look cost effective either.

skipdogg posted:

CF, I can't give exact numbers but our pricing on a VNXe box from EMC with close to your data capacity isn't far off that 30K mark.

I would also recommend exploring Raid 6 instead of 5. I've heard (probably from this thread) that rebuilding a Raid 5 array with those large disks can take days, and the odds of coming across a bad sector that fucks the rebuild is increased.

Yeah, I am going to look at getting a larger write cache, though, if I go RAID 6.

If you have any documentation on some SAN setups for around $30k, let me know; most places will sell me a single box maxed out for 28 grand, or shove two mid-tier boxes at me with 2-4 gig network interfaces.

Dilbert As FUCK fucked around with this message at 18:35 on May 23, 2012

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Nukelear v.2 posted:

An unpopulated MD3620i with dual controllers is ~9k and is 10GigE, vs 6k for an empty NX. MD also isn't drive locked so feel free to put whatever you want in it.

Though again, I'd get enough scratch together to do Equallogic at least.

What happens when the raid controller or the motherboard fails in NAS A? Or you need to patch CentOS/WS?

The H700 is a PCIe card, so: pop in a new card, watch it rebuild, and run off the backups on NAS C for the time being, until A is back up.

For patching, just Storage vMotion stuff off, run the patches, reboot, Storage vMotion stuff back on, and move to the next host.

I know it isn't the most super-awesome SAN ever, but it is still head and shoulders better than running everything on DAS with snapshots as the only backups.

Dilbert As FUCK fucked around with this message at 23:54 on May 23, 2012

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Nukelear v.2 posted:

So going to backup really sounds like a good plan; you want that call at 3am that your entire Web/SQL infrastructure is down? Can your cheap/slow tier actually handle running your production traffic at anything resembling your previous speeds? How well does VMware do when its storage backend goes to hell? Your own first post in the VM thread is "don't cheap out on storage."


For ~15k you can get a 2Gig cache DUAL CONTROLLER MD3620i with 2x MD1220 shelves, that's capacity for 66 2.5 inch drives. Or mix in a MD1200 and get some 2TB 3.5 inch drives. Splurge and buy the drives from Dell directly, prices on everything but SSD's aren't bad. Also, the whole thing can be covered by 4 hour on-site support.

I would say you are right if I had an existing SAN/NAS to go off of, but here is my situation.

Right now our SQL/web/VDI infrastructure is hosted on a server with DAS: 8x 1TB 7.2k SATA in RAID 5 with a 256MB write cache. WE HAVE NO CENTRAL STORAGE. It isn't too slow for our needs, but it is definitely slower than it should be; people are starting to drop the line "why is everything so slow?". VDI is all on DAS scattered randomly across hosts in the cluster, same with the other servers, and HA/FT/vMotion/sDRS/DRS are nonexistent even with our Enterprise Plus licensing. Recovering from a host failure right now would be a hell of a lot more difficult than dealing with some SQL servers on a backup NAS.

No, I am not trying to "cheap out"; I don't have a huge budget to work with here. I am trying to get the infrastructure what it needs to make it somewhat better and more redundant than it is right now. I would love to splurge and go get some NetApp/EMC equipment, but I don't have the funds for it.

The two MD1200s would be great to get, until I realize I have to address 8 hosts... so I need 16 SAS cables, then 2x 8-port switches... and then 8 additional SAS cards...
I could go SAS => 2 hosts (which I would need some SAS cards for) and then run VM iSCSI servers, but that is adding a good number of extra layers to go through just to get my data where it needs to be. That would more than likely push me over $30k.

Dilbert As FUCK fucked around with this message at 23:57 on May 23, 2012

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Internet Explorer posted:

Put Dell / Equallogic / Compellent up against EMC and NetApp, and up against Nimble. The price will drop a lot.

Dell = EqualLogic, just FYI
http://venturebeat.com/2007/11/05/dell-buys-equallogic-for-14b-biggest-cash-purchase-of-private-tech-company/

Serfer posted:

Speaking of EMC being terrible, I wasted three hours of my time working with yet another know-nothing EMC tech.

I knew things were going to go bad when he tried to cd and ls /dev/hda7. Then he mounted an iso loopback and after doing df, said the upgrade couldn't proceed because one of the drives was 100% full (care to guess which one?).

One company I am working with has their eyes set on the VNXe 3100. I am trying to push NetApp, but they saw a video from an EMC sales rep and instantly knew there were no other options!

Surprisingly, this company actually wants to spend money on their IT infrastructure, unlike an IT firm that doesn't want to spend money.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Internet Explorer posted:

Like the others said, I know that. If you are looking at the VNXe line you absolutely need to look at Equallogic.

No, I got you; I didn't realize you knew that.

But yeah, this customer is like "EMC VNX 3100s!"

I'll probably get them a nice EqualLogic setup. They are multi-site, but with 20Mb fiber, 5ms lines between them; they want full replication and dedupe.


Anyone have the how-to-become-a-Dell-partner guide? Would appreciate it.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

NippleFloss posted:

Not sure where the VNX to Equallogic transition is coming from. Going from a unified block/file scale up SAN to an iSCSI only scale out SAN doesn't make much sense to me. The VNX line is a direct response to NetApp and NetApp would be the obvious competitor.

Anyway, what is it that they like about the VNX gear?

Anyway, Equallogic doesn't do on-box dedupe so if that's a requirement they would need to look elsewhere. If they're content with iSCSI only then Nimble might be a good fit. They don't do dedupe but they do in-line compression and apparently see pretty good results.

Of course, I'd recommend NetApp because I think their offering is genuinely much better than the VNX gear, but I'm hardly impartial.

If it is like what most sales reps do, $IT_manager went to lunch with $EMC_rep, they discussed what EMC has on the cheap end, and the VNX 3100 came up.

The main requirement is replication between sites; dedupe is a nice bonus they would like, but they aren't pushing for it beyond a nice-to-have.

Where I go will depend on what my budget is; seeing how they listed the VNX 3100, my guess is they want to go on the cheaper end.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

evil_bunnY posted:

I don't think VNX does block dedup?

Correct, it's file-level dedupe:
http://www.emc.com/storage/vnx/vnx-series.htm#!compare

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

evil_bunnY posted:

And that's only if you get the NAS heads.

Oops, I posted the higher-end models.
http://www.emc.com/storage/vnx/vnxe-series.htm#!compare
VNXe is actually the model name, but yeah, pretty sure I can get better with EqualLogic or NetApp if I throw them up against EMC's offerings.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

skipdogg posted:

Trying to decide right now between the VNXe 3300 and a EqualLogic 4100 series setup. 1 Shelf of fast, 1 of slow for each.

Assuming pricing is pretty close to each other, any reason I should run away from either solution? The NAS functionality of the VNXe currently has us leaning that way.

FYI, the VNXe 3150s come out soon and are very promising. I know Nimble is great, but all their stuff is basically Supermicro with a custom-tailored OS. NetApp is a great vendor to play off against EMC.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

FISHMANPET posted:

Can anyone link to some specific criticisms of Drobo, or tell me what I should be looking for? Normally "I read it on the Internet" would be good enough but in this case, because of ~~PoLiTiCs~~ I really have to beat them over the head with why this would be bad.

NO NO NO, DO NOT GET DROBO.

Seriously, if you want to talk, answer my PM or email me at Corvttefish3r@gmail.com. I can help you out and offer a bunch of support for cheap.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

paperchaseguy posted:

i think you're wanted here

That is almost as bad as when I first started this gig and came to find out the shared storage was actually Openfiler VMs. I mentioned that it wasn't really shared storage and was a terrible idea; he basically looked at me like I'd insulted his grandmother.

Bonus: the Openfiler VMs were thin provisioned.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Hooray, just got some funding for some more storage! Not too terribly much, $20k, but still happy nonetheless. Now time to pit NetApp, Dell, and EMC against each other.

Dilbert As FUCK fucked around with this message at 16:24 on Oct 11, 2012

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Moey posted:

We have been dabbling in more used/refurb equipment. It hasn't bitten us in the rear end... yet.

All of our storage stuff is still new, but switches and servers have been refurbs lately.

CF, let me know what you end up with for new storage (also what space/speed specs drove your decision).

I am evaluating some VNX 5300 and VNXe 3150/3300 units, as well as some Cybernetics and NetApp.

If you want to talk about some storage stuff, email me; I might be able to do a join.me demo and show you some stuff on the NetApp I work with. I don't mind posting the poo poo here, but to fully answer your questions, PM/email me.


E: I would cheap out on anything but storage for virtualization; it is the heart and soul of your infrastructure. I would love to take a look at the refurbs, but honestly I don't feel comfortable implementing them; however, I might know someone who is.

Dilbert As FUCK fucked around with this message at 01:09 on Oct 13, 2012

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

optikalus posted:

drbd does it pretty well, and can do async writes.

This. If it is caching to flash, you should be able to do DRBD without it mucking anything up.

If you want to see it in action, you can easily set it up on Openfiler 2.99.
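(A minimal sketch of what the resource definition looks like; the hostnames, disks, and IPs are placeholders, and protocol A is the async mode mentioned above:)

    # /etc/drbd.d/r0.res
    resource r0 {
        protocol A;                  # asynchronous replication
        on nas-a {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.1:7789;
            meta-disk internal;
        }
        on nas-b {
            device    /dev/drbd0;
            disk      /dev/sdb1;
            address   10.0.0.2:7789;
            meta-disk internal;
        }
    }
    # then run "drbdadm create-md r0" and "drbdadm up r0" on both nodes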

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Moey posted:

I'll try to remember to PM you later on it. Since I only work internally and our firm isn't giant, I don't get to deal with much outside of the stuff my old boss randomly picked (a few Dell MD3220i, QNAP NAS units). We have expanded on that "standard" as well as iSCSI. I currently don't see the need to introduce any NFS storage since iSCSI has been meeting my needs, but seeing the management of some other block storage devices would be cool.

Is anyone here running DRBD in production? I have always found it interesting (my old boss used to always talk about using "doubletake") but have not heard much talk of it.

Sounds good; you can PM me and I'll give you my cell to do whatever with.

I use DRBD/rsync on some of my gov, bank, or medical deploys where data has to be in two separate boxes. If you copy it from flash it doesn't hit the production SAN/NAS, which is great.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Might be more of a networking question, but:

Is anyone here actively using FC for anything other than high-end transaction servers? I've had a few customers with existing 4Gb/s FC networks for their SAN looking to upgrade, and I usually just go with 10Gb to run iSCSI or FCoE. But I get those "is this guy really suggesting that?" moments from some people on iSCSI/FCoE. It might just be dinosaurs in the IT management sector who haven't looked at what is going on around them and think FC at 8Gb/s is the poo poo. Normally I'll sell the admins pretty fast when I show them the performance numbers, and sell the managers when I show them the cost, performance, and manageability of iSCSI/FCoE.

It just gets annoying having to repeat myself over and over; didn't know if anyone had some viewpoints they could shed light on.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

evol262 posted:

why not put in new Brocades and HBAs and stick with 8GB? It's what they know. It's not that much slower than 10Gb FCoE, etc.

I can only push Cisco gear to customers. 90% of the time I do iSCSI for simplicity's sake and whatnot; however, I do throw out "we can do FCoE if you require it."

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

evol262 posted:

I guess you answered the "why do I push iSCSI and FCoE" question. Last I checked, Cisco doesn't even have a convergent switch that does 8GB (much less 16GB), so you can't possibly try to convince customers to do a gradual switchover from traditional FC to FCoE.

To put this in a different context, you're walking into a AIX/Solaris shop saying "I'm selling Linux. Linux is the future; you should buy it. You have to give up your existing infrastructure for nothing, though. It won't work with the stuff you have at all." :a

I wonder why you get the "is this guy really suggesting that?" moments.

Edit: looks like you can in fact get a 6 port 8GB FC module for the Nexuses. Our vendor puts it at approximately the same price as a 24 port Brocade. :allears:

Yeah, the Nexuses are there and workable, but they cost a small fortune. I don't have anything against other vendors, but company policy and all that dictates what I can/cannot sell or push to clients.

That's a good analogy.
