|
Does anyone know how to make a RAID 10 in FreeNAS? I can't seem to find it anywhere; all the guides point me to RAID 5 or 1 or 0, but not 1+0. Am I missing something, or does FreeNAS just not do this? Yes, this is software RAID
|
# ¿ Nov 16, 2010 05:25 |
|
H110Hawk posted:Maybe it's like some really terrible LSI firmware versions which are out there. Do you have to make a bunch of raid 1's, then start over and make a raid 0 out of all of your mirrors? I thought I would make two RAID 1 arrays, then go to RAID 0 and add both arrays, but no luck
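For what it's worth, if the box is on a FreeNAS version with ZFS, there is no "RAID 10" option in the UI because ZFS builds it implicitly: a pool made of multiple mirror vdevs is striped across them, which is exactly 1+0. A sketch from the shell, with hypothetical device names (da0-da3), since this depends on your hardware:

```shell
# ZFS has no explicit RAID 10 mode; a pool of two mirror vdevs
# is automatically striped across them, which is RAID 1+0.
# Device names are hypothetical; substitute your actual disks.
zpool create tank mirror da0 da1 mirror da2 da3
zpool status tank
```

The same idea applies if you later grow the pool: `zpool add tank mirror da4 da5` extends the stripe with another mirror.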
|
# ¿ Nov 16, 2010 22:41 |
|
I love this Dell PowerVault MD3200. Currently I am going over a few things about disaster recovery and looking to automate server shutdown when the APC UPS is on battery. Basically, the way to turn the array off is: you don't... thanks, Dell
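For the server-shutdown half, one common route (an assumption on my part; Dell's own management tools may offer something different) is to let apcupsd watch the UPS and trigger a clean OS shutdown once the battery drains past a threshold. The cable/type values and thresholds below are illustrative:

```shell
# /etc/apcupsd/apcupsd.conf (illustrative fragment; tune to your UPS)
UPSCABLE usb
UPSTYPE usb
BATTERYLEVEL 20   # shut down when remaining battery charge drops below 20%
MINUTES 10        # or when estimated runtime drops below 10 minutes
TIMEOUT 0         # 0 = rely on BATTERYLEVEL/MINUTES, not a fixed timer
```

Whichever of BATTERYLEVEL, MINUTES, or TIMEOUT trips first starts the shutdown; the storage array itself just loses its hosts gracefully instead of being powered off.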
|
# ¿ Jun 6, 2011 19:42 |
|
I gotta say freenas 8.2 is really impressive
|
# ¿ Sep 19, 2011 01:28 |
|
My FreeNAS box just died today... Only 150 GB of data; good thing it was only a test machine
Dilbert As FUCK fucked around with this message at 18:16 on Oct 20, 2011 |
# ¿ Oct 20, 2011 18:14 |
|
Studying for my VCP, and soon the VCAP. Got any good suggestions for a complex SAN environment to build and try out? I have Storage DRS going, but I want something that will take me a while to chew on. Mostly doing iSCSI; not sure if I can do FCoE in FreeNAS. Dilbert As FUCK fucked around with this message at 22:05 on Dec 12, 2011 |
# ¿ Dec 12, 2011 21:51 |
|
I asked this on the last page, but I don't think anyone saw it since it was the last reply. Anyone know a challenging SAN environment to set up? Studying for my VCP and want to have storage down pat, even if Storage DRS makes it easy
|
# ¿ Dec 15, 2011 18:48 |
|
Internet Explorer posted:I'm a little confused by what you're asking. You mean like an open SAN system that you can use on whitebox hardware, or are you going to buy some NetApp/EMC gear to test with? Sorry, allow me to explain. I plan to go for a VCAP-DCA some day so I can work as a virtualization admin or engineer, so I want to know as much as I can about everything. I'm running a basic SAN environment at home (FreeNAS/Openfiler doing iSCSI) with Storage DRS and Storage vMotion fully working; just wondering what areas may be good to know going into the job market. I do a good deal of HA, DRS, FT, and vMotion, and I'm still working on getting DPM fully working. On the storage side: FC, FCoE, iSCSI, NFS, NAS replication, and iSCSI multipathing. I have a good bit of EMC experience; I almost got the cert, and I guess I can still take it after taking the EMC class. Just looking for something more. Dilbert As FUCK fucked around with this message at 19:22 on Dec 15, 2011 |
# ¿ Dec 15, 2011 19:12 |
|
Internet Explorer posted:I am not exposed to a wide variety of tech in my job, but it sounds like you have most of it covered. You said NAS replication, what about Block level replication? Snapshots, Dedupe, etc? To me the ability to deal with performance issues and the ability to plan out IOPS, etc., for new projects is a big one. Haven't done block-level replication; might be fun to do. Snapshots I have done; dedupe I will look into. My previous job had me consolidate the whole infrastructure (4 Windows hosts, 2 Linux hosts, 1 BSD host) onto shared storage, so I'm good there, as well as NIC teaming and failover. Thanks!
|
# ¿ Dec 15, 2011 19:45 |
|
Moey posted:Corvettefisher, what kind of hardware are you using for your home test lab? I am setting up something similar for myself for both studying and personal use with some old hardware I have laying around. I'm still on the fence about either building a dedicated storage box, or doing something virtual. I can get the full hardware list at home. Most of my work up until now has been done in Workstation, all thin provisioned on SSDs: 3-4 FreeNAS VMs; 2 Windows 2008 R2 installs, one doing remote routing to give the VMs on the virtual network internet access, the other hosting vCenter; and 3 ESXi 5 VMs running 4 XP installs, 2 RH/Fedora installs, 4 Win7 installs, 1 BSD install, and 1 2003 server. It helps that I have an X6 ($150), 32GB of RAM (which is only $200 now), and a 256GB SSD ($300) with 25k IOPS, with another coming soon. I have some other hardware at home too, but running all that on my PC puts me at about 80% CPU, 75% RAM, and 200 of 248GB used on the SSD. Dilbert As FUCK fucked around with this message at 20:26 on Dec 15, 2011 |
# ¿ Dec 15, 2011 20:21 |
|
ehzorg posted:Yep. If I were you, this is how I would do it: order something like an NX3100 and get 10-20TB on that. 7.2k drives will give you a decent amount of I/O (1k-1.25k). Rate performance on that and see if you are maxing out I/O, then order more to fit space and I/O requirements, provisioning LUNs and arrays per server and client as needed. You are going to need more than one storage device anyway, so you might as well start off with a modular storage array. At my previous employer I ran a 20TB PowerVault NAS in RAID 0 for all my VM servers; it hosted SharePoint, Lync, 2 R2 domain servers, NFS, DFS, and backups, and was able to serve 150-200 clients with ease. Don't run it in RAID 0 though; RAID 5/6 should still give you ample performance over daisy-chaining externals
|
# ¿ Dec 15, 2011 21:12 |
|
Oh, Openfiler has dedupe and DRBD? So long, FreeNAS
|
# ¿ Feb 14, 2012 21:26 |
|
Internet Explorer posted:I wouldn't sell Equallogic short. This. For clients who want a VM setup I usually just go with an EqualLogic setup, as it is really easy to install and maintain, and provides decent value.
|
# ¿ Feb 14, 2012 22:08 |
|
Get that EMC storage book I posted in the other thread; it will really help you make a clearer decision.
|
# ¿ Feb 14, 2012 22:43 |
|
I haven't toyed around too much with things other than DRBD, and I've started looking at rsync. From my light googling it looks like it is file-level instead of block-level. It also looks a good deal easier to set up; the only question I have is how it compares against a DRBD active/passive setup for an ESXi HA environment?
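The file-level vs. block-level distinction is the whole story: rsync walks a directory tree and copies changed files, while DRBD mirrors every write below the filesystem. A minimal sketch of what rsync actually does, using throwaway paths under /tmp (the filenames are made up for illustration):

```shell
# File-level sync: rsync compares and copies whole files between trees.
# Nothing below the filesystem is touched, unlike DRBD's block mirroring.
mkdir -p /tmp/rsync_demo/src /tmp/rsync_demo/dst
echo "vm-config-v1" > /tmp/rsync_demo/src/guest.cfg

# -a = archive mode (recursive, preserves permissions/times/links)
rsync -a /tmp/rsync_demo/src/ /tmp/rsync_demo/dst/

cat /tmp/rsync_demo/dst/guest.cfg
```

The practical consequence for ESXi HA: rsync gives you periodic point-in-time copies (fine for backup tiers), while DRBD gives you a continuously consistent block replica a passive head can take over from.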
|
# ¿ Feb 24, 2012 15:47 |
|
Has anyone had any trouble setting up active/active storage arrays with CentOS? Any issues or problems anyone has run into? Openfiler only does active/passive, and FreeNAS 8 only does rsync, I believe. Oddhair posted:My boss just walked up and said we want to spend ~$20K-30K on a pair of servers and possibly some shared storage...by Thursday. I haven't run perfmon on the current servers in question (and was actually spinning up some logged performance stats earlier today) so I'm not sure where to start. I've been pushing to virtualize for a while, since our SQL server is a single socket, and our voicemail server is a dual PIII, so off the cuff we're a good candidate for server consolidation. Is there a way to get decent quotes with almost no information up front? Dell will let you build/customize your own NAS devices and get a quote instantly. The NX300 and NX3100 are good if you work in a medium-size company, and some R210s are cheap, good first-time-virtualizing servers. VVVV- answer that, but if you just need something to go off of: servers + storage + an Essentials Plus kit should run you right up near $25-26k, and if you don't already have some, get some gigabit switches and make a network just for storage. Dilbert As FUCK fucked around with this message at 22:14 on Feb 28, 2012 |
# ¿ Feb 28, 2012 21:45 |
|
A lot of our clients are wanting cloud solutions for things now, so I am trying to piece together a "cloud" storage solution. I was thinking: 1) lease out a NAS device to clients; 2) the NAS device holds local backups for the client for faster redeploys. Here is where I am a bit torn. A) rsync/Bacula to our "private cloud" during the night, compress further (7z) => our cloud server. Pros: more redundancy; getting data to the client if the NAS fails and backups are needed is faster; our clients could get a full backup by us driving over with an external drive holding the past two weeks of backups. Cons: more steps. B) NAS device runs Bacula => on-site compress => upload to cloud. Pros: we don't clog our WAN (backup depends on the client's connection); we don't have to use any of our storage. Cons: slowish. Kinda torn between these two. I prefer A for what it offers the customer, but B is simpler. Plan A would be 55c/GB + $50/mo + ~$150 for NAS setup/deploy; Plan B would be $10/mo per PC (no per-GB charge) + $50/mo + ~$150 NAS setup/deploy. Thoughts? Which would you all choose given the option? Dilbert As FUCK fucked around with this message at 20:07 on Mar 16, 2012 |
# ¿ Mar 16, 2012 18:58 |
|
Misogynist posted:This is not something you want to roll and support yourself. The poo poo you will land yourself in when your backups fail will end your company. There's dozens of cloud backup vendors out there who do rebranding partnerships all the time. Talk to them instead. Normally I would say yeah to that, but another company is offering something similar to our clients and we were asked to make a counter-offer; not much you can do. I figured out the easiest way to do it, and I feel like an idiot for not seeing it sooner: local NAS (local backups) => our servers (offsite) => CrashPlan PRO (cloud). 1. Local keeps 2 weeks. 2. Ours keeps 1-2 months. 3. CrashPlan PRO keeps all backups of $client. Dilbert As FUCK fucked around with this message at 19:52 on Mar 16, 2012 |
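The "local keeps 2 weeks" tier is just a retention sweep on the NAS. A sketch with made-up paths and filenames (GNU find/touch assumed; the `touch -d` lines only exist to simulate backups of different ages):

```shell
# Tier-1 retention: expire local backup archives older than 14 days.
# Paths and filenames are hypothetical.
BACKUP_DIR=/tmp/client_backups
mkdir -p "$BACKUP_DIR"

touch -d "20 days ago" "$BACKUP_DIR/old.tar.gz"   # simulate a stale backup
touch "$BACKUP_DIR/fresh.tar.gz"                  # simulate last night's backup

# -mtime +14 matches files last modified more than 14 full days ago
find "$BACKUP_DIR" -name '*.tar.gz' -mtime +14 -delete

ls "$BACKUP_DIR"
```

The offsite and cloud tiers would run the same pattern with `+60` (or whatever the 1-2 month window is) before handing the remainder to CrashPlan.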
# ¿ Mar 16, 2012 19:42 |
|
Anyone have some advice for setting up an active/active NAS? Openfiler, to my knowledge, only does active/passive. I know CentOS supports GFS, so an active/active setup would be doable, but any suggestions on other distros or things to look into for this would be appreciated. I tried asking a few people in my storage/VMware class and almost everyone answers with "Oh, we just pay the EMC/NetApp people to set it up, we don't really touch that too much."
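For the replication half of a DIY active/active pair, the usual building block is a DRBD resource in dual-primary mode, with GFS/GFS2 layered on top to handle concurrent access. A sketch of the resource definition; the hostnames, disks, and addresses are all hypothetical, and a real deployment also needs fencing and a cluster stack (cman/pacemaker) configured:

```shell
# /etc/drbd.d/r0.res (DRBD 8.x, dual-primary sketch; names are hypothetical)
resource r0 {
    net {
        allow-two-primaries;   # required for active/active; needs a
                               # cluster filesystem (GFS2/OCFS2) on top
    }
    on nas-a {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7789;
        meta-disk internal;
    }
    on nas-b {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}
```

Without `allow-two-primaries` DRBD refuses to promote both nodes, which is why Openfiler-style setups top out at active/passive.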
|
# ¿ Mar 21, 2012 18:54 |
|
NippleFloss posted:You probably get that answer because it's not a common thing to do because the benefits of active/active for NAS aren't that great. Any single export or share is only going to be accessible from a single physical host, just as it would be in an active/passive config. Yes, you can provide a better balance between all exports if you have active active, but if your intent is to run both hosts at above 50% utilization then when you do have a failure event you're going to be hosed. Yeah, I know, but assuming your SLAs are set properly, active/active can be a good move even in a host failure (other than users noticing it is slower than usual). I'm running active/passive right now and just wanting to explore storage further. I guess I thought most SAs would know how to set up active/passive or active/active storage from the ground up, but it seems most people just pay NetApp/EMC/Dell/Oracle to do it, which struck me as odd
|
# ¿ Mar 21, 2012 19:41 |
|
NippleFloss posted:http://sources.redhat.com/cluster/doc/nfscookbook.pdf Yeah, my RHCSA book actually has a section on this, I just saw. They used Fedora, but I am sure it will work on CentOS, and if I have to use Fedora, oh well, they are very similar. optikalus posted:Thanks! I have been playing around with corosync on my Openfiler setups; I still don't know why crm_mon --one-shot -V returns a "connection failed", which is very helpful in troubleshooting. I have gotten A/P to work a couple of times, but now I seem to be striking out and can't figure out why. I am following this guide almost step by step, although I am using a 60GB RAID array instead of a 4GB disk/partition.
|
# ¿ Mar 21, 2012 20:26 |
|
Alright, my current work is using Openfiler virtual NAS devices for shared storage. This is working thus far, but I would really rather get a SAN going; I just need to bump heads with some of you all. I posted in the VMware thread but have gotten some updated numbers and feel this thread is a touch more appropriate. Here is the equipment I want to buy. All are NX3100 PowerVaults: OS drives RAID 1 15k SAS 146GB, dual PSUs, dual 10G Ethernet, 2x 1Gb backend connectors, 512MB write cache on all, quad cores + 6GB RAM on each. NAS A - 10x 600GB 15k drives, RAID 5 + 1 HS. High-end, disk-I/O-intensive storage: SQL, web-facing servers, priority VDI, small NFS share. NAS B - 10x 1TB 7.2k SAS drives, RAID 5. Backend servers, domain controllers, non-priority VDI, other VMs, other NFS servers. NAS C - 10x 2TB 7.2k SAS drives, RAID 5 + 1 HS per array. Backup server: 10TB will be provisioned for GFS/DRBD in an active/active with NAS B, and 10TB deduped to store backups of critical information, with files sent to CrashPlan after X days. These will run CentOS 6.2, primarily iSCSI with jumbo frames set at 9000. I might, however, run the datastores over FCoE, since it seems the X520-T2 supports it, cutting out latency and TCP overhead. B and C will do active/active storage, then iSCSI to the ESXi boxes; A will use C as a backup device. All this will hook up to 5 ESXi servers running VDI and other workloads, plus 3 servers running ESXi 4.1: 50+ VMs, mostly Windows clients and servers. I know in about 6 months we want to expand our VMware work with clients (hosting solutions, demonstrations for clients, and other things), so I find it better to get the ball rolling. I would love to go with EMC or NetApp, but I can get this up and running for around $30k. Open to other ways to go about this; I might just do 2x 600GB 15k (~12TB) with active/active, then back up to a NAS C with 20TB, but I like the idea of tiering my storage. Dilbert As FUCK fucked around with this message at 17:57 on May 23, 2012 |
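On the CentOS side, the jumbo-frames piece is just an MTU setting on the storage NICs; the interface name and addressing below are made up, and the same 9000 MTU has to be set end to end on the switch ports and the ESXi vmkernel interfaces or you get fragmentation instead of a win:

```shell
# /etc/sysconfig/network-scripts/ifcfg-eth1 (CentOS 6; values illustrative)
# Dedicated storage-network interface with jumbo frames for iSCSI.
DEVICE=eth1
BOOTPROTO=static
IPADDR=10.10.10.5
NETMASK=255.255.255.0
MTU=9000
ONBOOT=yes
```

A quick sanity check after bringing it up is `ping -M do -s 8972 10.10.10.x` from a host on the same storage VLAN; if that fails, something in the path is still at 1500.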
# ¿ May 23, 2012 17:54 |
|
Cpt.Wacky posted:I don't have any comments on your design but I'm in a similar position of needing a real SAN. I'm curious what led you to the Powervaults over something like Equallogic. Is it just price? Worked with them before at other places; they are decent for the price, and I never had gripes with them compared to HP's NAS offerings. The fact that I can dig down and customize them via the web interface is nice too. Nukelear v.2 posted:Why use NAS at all, why not just get some MD3xxxi's and just present iSCSI and use VMFS to your hosts? Or buy some EQL if you get a bigger budget. I could go that way, but I have some 10GbE connections and 10Gb FCoE adapters on the hosts, and I would like to put those to use. Seeing how MDs start at $18k for one device, it doesn't look cost-effective either. skipdogg posted:CF, I can't give exact numbers but our pricing on a VNXe box from EMC with close to your data capacity isn't far off that 30K mark. Yeah, though I am going to look at getting a larger write cache if I go RAID 6. If you have any documentation on some SAN setups for around $30k, let me know; most places will sell me a single box maxed out for 28 grand, or shove 2 mid-tier boxes at me with 2-4 gigabit net interfaces. Dilbert As FUCK fucked around with this message at 18:35 on May 23, 2012 |
# ¿ May 23, 2012 18:27 |
|
Nukelear v.2 posted:An unpopulated MD3620i with dual controllers is ~9k and is 10GigE, vs 6k for an empty NX. MD also isn't drive locked so feel free to put whatever you want in it. The H700 is a PCIe card, so: pop in a new card, watch it rebuild, and run off the backups that rely on NAS C for the time being until A is back up. Patching: just svMotion stuff off, run the patches, reboot, svMotion stuff back on, and move to the next host. I know it isn't the super-awesome best SAN ever, but it is still head over heels better than running everything on DAS with backups being snapshots. Dilbert As FUCK fucked around with this message at 23:54 on May 23, 2012 |
# ¿ May 23, 2012 19:58 |
|
Nukelear v.2 posted:So going to backup really sounds like a good plan, you want that call at 3am that you entire Web/SQL infrastructure is down? Can your cheap/slow tier actually handle running your production traffic at anything resembling your previous speeds? How well does vmware do when it's storage backend goes to hell? Your own first post in the VM thread is don't cheap out on storage I would say you are right if I had an existing SAN/NAS to go off of, but here is my situation. Right now our SQL/web/VDI infrastructure is hosted on a server with DAS: 8x 1TB 7.2k SATA, RAID 5, 256MB write cache. WE HAVE NO CENTRAL STORAGE. It isn't too slow for our needs, but it's definitely slower than it should be; people are starting to drop the line "why is everything so slow?" VDI is all on DAS placed randomly on hosts through the cluster, same with other servers; HA/FT/vMotion/sDRS/DRS is nonexistent even with our Enterprise Plus licensing. A host failure right now would be a hell of a lot harder to recover from than dealing with some SQL servers on a backup NAS. No, I am not trying to "cheap out"; I don't have a huge budget to work with here. I am trying to give the infrastructure what it needs to make it somewhat better and more redundant than it is right now. I would love to splurge if I had the funds and go get some NetApp/EMC equipment, but I don't. The 2 MD1200s would be great to get, until I realize I have to address 8 hosts... so I need 16 SAS cables, then 2x 8-port switches, and then 8 additional SAS cards. I could go SAS => 2 hosts (which I would need some SAS cards for) and then run VM iSCSI servers, but that adds a good number of extra layers just to get my data where it needs to be, and would more than likely push me over $30k. Dilbert As FUCK fucked around with this message at 23:57 on May 23, 2012 |
# ¿ May 23, 2012 23:45 |
|
Internet Explorer posted:Put Dell / Equallogic / Compellent up again EMC and NetApp up against Nimble. The price will drop a lot. Dell = EqualLogic, just FYI: http://venturebeat.com/2007/11/05/dell-buys-equallogic-for-14b-biggest-cash-purchase-of-private-tech-company/ Serfer posted:Speaking of EMC being terrible, I wasted three hours of my time working with yet another know-nothing EMC tech. One company I am working with has their eyes set on the VNXe 3100. I am trying to push NetApp, but they saw a video from an EMC sales rep and instantly knew there were no other options! Surprisingly, this company actually wants to spend money on their IT infrastructure, unlike the typical IT firm not wanting to spend money.
|
# ¿ May 26, 2012 20:12 |
|
Internet Explorer posted:Like the others said, I I know that. If you are looking at the VNXe line you absolutely need to look at Equallogic. No, I got you; I didn't realize you knew that. But yeah, this customer is all "EMC VNXe 3100s!" I will probably get them a nice EqualLogic setup; they are multisite, with 20Mb fiber 5ms lines between the sites, and they want full replication and dedupe. Anyone have the how-to-become-a-Dell-partner guide? Would appreciate it
|
# ¿ May 27, 2012 04:27 |
|
NippleFloss posted:Not sure where the VNX to Equallogic transition is coming from. Going from a unified block/file scale up SAN to an iSCSI only scale out SAN doesn't make much sense to me. The VNX line is a direct response to NetApp and NetApp would be the obvious competitor. If it is like what most sales reps do, $IT_manager went to lunch with $EMC_rep and discussed what they have on the cheap end, and the VNXe 3100 came up. The main requirement is replication between sites; dedupe is a bonus they would like but aren't pushing for beyond a nice-to-have. Where I go will depend on the budget; seeing how they listed the VNXe 3100, my guess is they want to go on the cheaper end.
|
# ¿ May 27, 2012 21:15 |
|
evil_bunnY posted:I don't think VNX does block dedup? Correct, file-level dedupe only: http://www.emc.com/storage/vnx/vnx-series.htm#!compare
|
# ¿ May 27, 2012 21:23 |
|
evil_bunnY posted:And that's only if you get the NAS heads. Oops, I posted the higher-end models; here's the right link: http://www.emc.com/storage/vnx/vnxe-series.htm#!compare VNXe is actually the model name. But yeah, pretty sure I can do better with EqualLogic or NetApp if I throw them up against EMC's offerings
|
# ¿ May 27, 2012 21:30 |
|
skipdogg posted:Trying to decide right now between the VNXe 3300 and a EqualLogic 4100 series setup. 1 Shelf of fast, 1 of slow for each. FYI, the VNXe 3150s come out soon and are very promising. I know Nimble is great, but all their stuff is basically Supermicro with a custom-tailored OS. NetApp is a great vendor to play off against EMC
|
# ¿ Jun 14, 2012 03:08 |
|
FISHMANPET posted:Can anyone link to some specific criticisms of Drobo, or tell me what I should be looking for? Normally "I read it on the Internet" would be good enough but in this case, because of ~~PoLiTiCs~~ I really have to beat them over the head with why this would be bad. NO NO NO, DO NOT GET A DROBO. Seriously, if you want to talk, answer my PM or email me at Corvttefish3r@gmail.com; I can help you out and offer a bunch of support for cheap
|
# ¿ Jul 21, 2012 03:36 |
|
paperchaseguy posted:i think you're wanted here That is almost as bad as when I first started this gig and found out the shared storage was actually Openfiler VMs. I mentioned that wasn't really shared storage and was a terrible idea; he basically looked at me like I had insulted his grandmother. Bonus: the Openfiler VMs were thin provisioned
|
# ¿ Oct 3, 2012 20:40 |
|
Hooray, just got some funding for more storage! Not too terribly much, $20k, but still happy nonetheless. Now time to play NetApp, Dell, and EMC off against each other
Dilbert As FUCK fucked around with this message at 16:24 on Oct 11, 2012 |
# ¿ Oct 11, 2012 16:17 |
|
Moey posted:We have been dabbling in more used/refurb equiptment. It hasn't bitten us in the rear end...yet. I am evaling some VNX 5300s and VNXe 3150/3300s, as well as some Cybernetics and NetApp. If you want to talk about some storage stuff, email me; I might be able to do a join.me demo and show you some stuff on the NetApp I work with. I don't mind posting the poo poo here, but to fully answer your questions, PM/email me. E: I would cheap out on anything but storage; for virtualization, it is the heart and soul of your infrastructure. I would love to take a look at the refurbs, but honestly I don't feel comfortable implementing them; however, I might know someone who is. Dilbert As FUCK fucked around with this message at 01:09 on Oct 13, 2012 |
# ¿ Oct 13, 2012 01:04 |
|
optikalus posted:drbd does it pretty well, and can do async writes. This; if it is caching to flash, you should be able to do DRBD without it mucking anything up. If you want to see it in action, you can easily set it up on Openfiler 2.99
|
# ¿ Oct 13, 2012 01:25 |
|
Moey posted:I'll try to remember to PM you later on it. Since I only work internally and our firm isn't giant, I don't get to deal with much outside of the stuff my old boss randomly picked (a few Dell MD3220i, QNAP NAS units). We have expanded on that "standard" as well as iSCSI. I currently don't see the need to introduce any NFS storage since iSCSI has been meeting my needs, but seeing the management of some other block storage devices would be cool. Sounds good; you can PM me and I'll give you my cell to do whatever with. I use DRBD/rsync on some of my government, bank, or medical deploys where data has to be in two separate boxes. If you copy from flash, it doesn't hit the production SAN/NAS, which is great.
|
# ¿ Oct 13, 2012 02:19 |
|
Might be more of a networking question, but is anyone here actively using FC for anything other than high-end transaction servers? I've had a few customers with existing 4Gb/s FC networks for their SAN looking to upgrade, and I usually just go with 10Gb to run iSCSI or FCoE. But I get those "is this guy really suggesting that?" moments from some people on iSCSI/FCoE. It might just be dinosaurs in the IT management sector who haven't looked at what is going on around them and still think FC @ 8Gb/s is the poo poo. Normally I'll sell the admins pretty fast when I show them the performance numbers, and sell the managers when I show them the cost, performance, and manageability of iSCSI/FCoE. It just gets annoying having to repeat myself over and over; didn't know if anyone had some viewpoints they could shed light on.
|
# ¿ Oct 18, 2012 16:33 |
|
evol262 posted:why not put in new Brocades and HBAs and stick with 8GB? It's what they know. It's not that much slower than 10Gb FCoE, etc. I can only push Cisco gear to customers. 90% of the time I do iSCSI for simplicity's sake and whatnot; however, I do throw out a "we can do FCoE if you require it"
|
# ¿ Oct 18, 2012 17:40 |
|
evol262 posted:I guess you answered the "why do I push iSCSI and FCoE" question. Last I checked, Cisco doesn't even have a convergent switch that does 8GB (much less 16GB), so you can't possibly try to convince customers to do a gradual switchover from traditional FC to FCoE. Yeah, the Nexuses are there and workable; however, they cost a small fortune. I don't have anything against other vendors, but company policy and all that dictates what I can and cannot sell or push to clients. That's a good analogy
|
# ¿ Oct 18, 2012 18:35 |