|
three posted:If you don't have your own Synology NAS with WD Red drives, then you are literally living like poo poo.
|
# ? Jun 25, 2013 14:49 |
|
three posted:If you don't have your own Synology NAS with WD Red drives, then you are literally living like poo poo. Yeah, well, I'm not spending ~$499 plus drives plus an additional switch when I can do a virtual ZFS store that fits the needs of a lab.
|
# ? Jun 25, 2013 14:50 |
|
Corvettefisher posted:Yeah, well, I'm not spending ~$499 plus drives plus an additional switch when I can do a virtual ZFS store that fits the needs of a lab. Your time should be worth more than that.
|
# ? Jun 25, 2013 15:38 |
|
Still though, you should probably use drives with a limited error recovery mode (TLER for Western Digital drives), or they are going to be a PITA to use under any sort of RAID-like system, as the first time one of them goes into heroic recovery mode it will likely hose the volume.
|
# ? Jun 25, 2013 15:44 |
|
three posted:Your time should be worth more than that. I'm fairly happy with the performance I get thus far; FreeNAS 8.3 and 9.x support hardware acceleration.
|
# ? Jun 25, 2013 15:59 |
|
Dicktrama, here are my labs:

Home:
- Xeon E3-1220
- Supermicro board
- 32GB Kingston Value RAM
- Quad-port Intel 1Gb PCIe NIC
- Samsung 830 240GB SSD
- 3TB Hitachi

NAS for the above server:
- HP MicroServer N40L
- 8GB RAM
- 4x 1.5TB Seagate 7200rpm drives
- 4x 256GB Crucial M4s

I am sadly hosting only a single W7 machine ATM. I just wiped my entire 2k8r2 lab to load 2012 on.

Work: Some fancy dual 6-core Xeon, Napp-it all-in-one Solaris ZFS passthrough serving up 3TB of SSDs and 32TB of spindle drives.

Why are you worried about someone stealing a computer from your office? Check out either servethehome.com or the virtualization section on hardforums for quite a few home lab builds.
|
# ? Jun 25, 2013 16:08 |
|
bull3964 posted:Still though, you should probably use drives with a limited error recovery mode (TLER for Western Digital drives), or they are going to be a PITA to use under any sort of RAID-like system, as the first time one of them goes into heroic recovery mode it will likely hose the volume. So this. Oh so this. A thousand times this. I had a RAID-5 array of six WD Blues and discovered the pitfalls of unnoticed bad blocks. What essentially happened is that I had a drive fail and then the array failed to rebuild after swapping out the drive. The vendor-verified solution was to tear down the array, rebuild it, and restore data from backup (making sure to test each individual drive for health before rebuilding, of course). Don't be me. Buy TLER/CCTL drives for your hardware RAID solution. Note that TLER/CCTL does not add benefit in devices that use software RAID.
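The unnoticed-bad-blocks failure mode has well-known back-of-the-envelope math behind it. A minimal sketch, assuming the commonly quoted 1-per-10^14-bits unrecoverable-read-error (URE) spec for consumer drives and 1TB disks (the post doesn't say what capacity the WD Blues were):

```python
# Rough odds of hitting at least one unrecoverable read error (URE) while
# rebuilding a degraded RAID-5 array, i.e. while reading every surviving
# disk end to end. Assumes the 1-per-1e14-bits URE rate commonly quoted on
# consumer drive spec sheets; this is a worst case, not a measurement.

def rebuild_failure_prob(disk_tb: float, surviving_disks: int,
                         ure_per_bit: float = 1e-14) -> float:
    """Probability of at least one URE during a full-array read."""
    bits_read = disk_tb * 1e12 * 8 * surviving_disks
    # P(at least one URE) = 1 - P(every bit reads cleanly)
    return 1 - (1 - ure_per_bit) ** bits_read

# Six drives, one failed -> five disks must be read cleanly to rebuild.
p = rebuild_failure_prob(1.0, 5)
print(f"{p:.0%}")
```

Even at spec, reading five surviving disks end to end has a real chance of tripping a URE somewhere; without TLER, the drive's minutes-long heroic recovery attempt is what gets it kicked out of the array mid-rebuild.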
|
# ? Jun 25, 2013 23:17 |
|
Agrikk posted:I had a RAID-5 array of six WD Blues and discovered the pitfalls of unnoticed bad blocks. What essentially happened is that I had a drive fail and then the array failed to rebuild after swapping out the drive. evil_bunnY fucked around with this message at 23:22 on Jun 25, 2013 |
# ? Jun 25, 2013 23:20 |
|
ZFS supremacy. I still bought Reds for my home server, but I love ZFS. VM-related: I had a dream that I was in the position to design and implement a pretty large VMware deployment at work, then woke up, realized it was a dream, and got sad. What does this say about me?
|
# ? Jun 25, 2013 23:24 |
|
FISHMANPET posted:ZFS supremacy. That, like the rest of us, you enjoy playing with expensive toys and getting to build things instead of maintaining things.
|
# ? Jun 25, 2013 23:26 |
|
Yeah, I think my dream job would be consulting or MSP where I get to come in and build nice stuff from scratch then leave other people to run it, occasionally coming in to check up on it and stuff.
|
# ? Jun 25, 2013 23:28 |
|
FISHMANPET posted:Yeah, I think my dream job would be consulting or MSP where I get to come in and build nice stuff from scratch then leave other people to run it, occasionally coming in to check up on it and stuff. This is where I want to be. Working for a VAR doing implementation and then handing the keys over to some dummy.
|
# ? Jun 25, 2013 23:40 |
|
Agrikk posted:So this. Oh so this. A thousand times this. Not even doing HW RAID, just plopping zeroed VMDKs on it and passing it to the FreeNAS VM. Dilbert As FUCK fucked around with this message at 00:09 on Jun 26, 2013 |
# ? Jun 26, 2013 00:06 |
|
FISHMANPET posted:Yeah, I think my dream job would be consulting or MSP where I get to come in and build nice stuff from scratch then leave other people to run it, occasionally coming in to check up on it and stuff. Yeah, you want to work for a VAR or to be an implementation engineer or something. You should know by now MSP = shoestring budget and busted rear end equipment.
|
# ? Jun 26, 2013 00:36 |
|
XenServer and XenCenter are both being open-sourced and are completely free, as of today: http://blog.xen.org/index.php/2013/06/25/xenserver-org-and-the-xen-project/ Looks like Citrix's strategy to use XenServer as a loss-leader for XenDesktop is definitive now. Here's a great FAQ on the changes: http://xenserver.org/discuss-virtualization/q-and-a/categories/listings/xenserver-org-launch.html
|
# ? Jun 26, 2013 00:42 |
|
We finally got a license for vCenter, and I'm starting the process of setting it up and I have a few questions: Does vCenter have to be installed on a Windows Server OS, or can I use a 7 Pro or even XP Pro box? Is there any reason not to just use the virtual appliance on one of your hosts?
|
# ? Jun 26, 2013 15:07 |
|
Frozen-Solid posted:We finally got a license for vCenter, and I'm starting the process of setting it up and I have a few questions: It must be installed on a Windows Server box, or you can use the Linux appliance. The appliance has a few restrictions: 5 hosts / ~50 VMs supported (may be higher with 5.1 U1), Oracle is the only supported external database, and there are compatibility gaps or no support for a few other VMware products. Dilbert As FUCK fucked around with this message at 15:20 on Jun 26, 2013 |
# ? Jun 26, 2013 15:17 |
|
My lab for the past 2.5 years has been: 3x HP DL380 G7 servers, each with 2x Xeon X5660 (6 cores per; 2.8GHz) and 64GB RAM, and for storage, 2x Dell EqualLogic PS4000X with 9.6TB raw each. This is because nobody here knows VMware; it was decided that we needed it and that I would have to figure it out. Then other more important projects came up, and here we are. EDIT: I might have mentioned it before, but primary use for the environment? A file server. I poo poo you not. I was just told the other day that I should try to limit the number of guests that go on it to like 4 or 5...
|
# ? Jun 26, 2013 20:24 |
|
demonachizer posted:EDIT: I might have mentioned it before, but primary use for the environment? A file server. I poo poo you not. I was just told the other day that I should try to limit the number of guests that go on it to like 4 or 5... Your company has a lot of cash and poor planning, then?
|
# ? Jun 27, 2013 13:54 |
|
jre posted:
Normally things are not like this. I think it is because everyone is really old-school IT, and virtualization is something they know we need to be moving towards, but nobody knows anything about it. Decision makers don't even really understand the technology at a basic level. My boss goes to talks and poo poo with vendors about technology, then comes into the office with his hair on fire about what the next project will be, and this was one of them.

I mean, it makes sense that we need to virtualize. Our main mission-critical application, as an example, is accessed via Citrix. It can only run on 32-bit Windows currently, so our Citrix farm consists of 45 servers with 4GB of RAM each... This should have been addressed before, but the environment is in place and has been for 9 years or so. We will probably virtualize all of it in 2-3 years, and then things might start moving correctly.

The biggest problem with the whole thing for me is that the scope of this first project keeps changing. It started as "hey, we need two SANs in two locations with two VM clusters for redundancy, and let's use SRM." Then it changed to a SAN in each location but the hosts only in one location. Now it looks like the two-cluster idea is back again. This time I said no. I said that we need to plan that as a separate project and that this one needs to be put to bed, because we need to have an end date.

Everything that we do that is non-virtualized is done well, with plenty of redundancy etc., but for some reason virtualization has caused everyone to kind of lose their heads. I try to keep the perspective that there are probably very few places out there where I could just be handed a shitload of hardware and be told to put it together at my own pace with very little pressure on timelines. The logical side of me, though, can't deal with it very well.
|
# ? Jun 27, 2013 15:14 |
|
demonachizer posted:Everything that we do that is non-virtualized is done well, with plenty of redundancy etc., but for some reason virtualization has caused everyone to kind of lose their heads. I try to keep the perspective that there are probably very few places out there where I could just be handed a shitload of hardware and be told to put it together at my own pace with very little pressure on timelines. The logical side of me, though, can't deal with it very well. Why not take the ICM course? Obviously your company can afford it.
|
# ? Jun 27, 2013 16:05 |
|
Erwin posted:Why not take the ICM course? Obviously your company can afford it. They won't pay for it. They gave me the time instead of the money... I will be taking it myself and getting my VCP and jumping ship as soon as I am done.
|
# ? Jun 27, 2013 16:23 |
|
B-b-b-but, they have all that hardware! Just sitting there! $1,200 or whatever the course is and they could put it to good use!
|
# ? Jun 27, 2013 16:27 |
|
I've successfully deployed vCenter and vSphere Data Protection, and everything is up and running smoothly. The first backups to VDP should run tonight after work. I'm excited! The one thing I can't quite figure out is how to handle off-site backups after a backup has been made to VDP. Am I right in my understanding that VDP just stores everything in the VMDKs that come in the OVA package? I moved the VMDKs so that VDP is stored on our secondary backup storage, but I also want to be able to make off-site backups of the VDP itself, which would get us off-site backups of our VMware setup as well. I should be able to just make a snapshot of the VDP appliance and copy the files off manually for offsite, like I have been with each individual VM in the past, right? Or is there an easier way to copy backups off site that I haven't found yet?
|
# ? Jun 27, 2013 17:01 |
|
Frozen-Solid posted:I've successfully deployed vCenter and vSphere Data Protection, and everything is up and running smoothly. The first backups to VDP should run tonight after work. I'm excited! You can place the VDP appliance on an NFS share, then replicate the NFS shares. Sadly there isn't a built-in way to replicate VDP, but I think that was a "by choice" decision by VMware, judging by how they responded to the question at PEX. Dilbert As FUCK fucked around with this message at 17:13 on Jun 27, 2013 |
# ? Jun 27, 2013 17:11 |
|
Frozen-Solid posted:I've successfully deployed vCenter and vSphere Data Protection... You throw that word out of your vocabulary right now.
|
# ? Jun 27, 2013 23:53 |
|
I'll preface this by saying I kind of lucked into my job and I feel like I probably don't know a lot of things a person in my position should know. I've been looking into doing an infrastructure refresh for months now, and after the CFO balked at the initial $130k price tag of replacing servers, switches, SANs and backup appliances, I'm hoping to just implement better backups initially, then later this year or next, move forward with the rest.

Right now I'm looking into implementing a couple of Data Domain DD620s to replace a few lovely BDRs that are backing up our (VMware) servers at the file system level. I'm getting pressure from EMC saying the price is going to go up $10k if I don't buy in the next two days. I want to implement site-to-site replication of some sort, but I'm trying to figure out the most logical and practical approach to backing up two offices. Since I want to replace our SAN in Seattle and potentially implement one in Portland, what is the practical difference between replicating backups and SAN-to-SAN replication? Am I going about it all wrong with the Data Domains?

Assuming I go with the DD620s, does anyone have a strong opinion on which backup software to use? Veeam, vRanger? (Zerto? - looks more like SAN replication?) Jesus Christ, I need to organize my thoughts better.
|
# ? Jun 28, 2013 00:18 |
|
goobernoodles posted:I'll preface this by saying I kind of lucked into my job and I feel like I probably don't know a lot of things a person in my position should know. Please give us an idea of your current infrastructure so we can more effectively advise you.
|
# ? Jun 28, 2013 00:58 |
|
Can't advise on your environment since we don't have a good picture of it, but tell EMC you're going to go talk to Exagrid to see what they have and they'll shut up. They talk tough, but usually fold when it comes down to getting a check or not. They dicked us around on maintenance on our DD units, so we said screw it, cancel it, and they came back.
|
# ? Jun 28, 2013 01:13 |
|
adorai posted:personally I think buying a pair of dd devices when your real goal seems to be new sans with replication is a bad idea. Just get a pair of new sans now if you can afford to.

Seattle:
-3x IBM System x hosts running 13 VMs.
-IBM DS3300 SAN w/ expansion, maxed out at 9TB.
-lovely Netgear switches.
-Some random BDR device from a company named Datto.
Currently 7TB of the 10TB of storage is being utilized.

Portland:
-1 ancient IBM System x server running on local storage.
-lovely Netgear switches.
-Only 4 VMs.
-Two random BDRs backing up about 2-3TB.

Servers are a mix of Windows Server 2003 and 2008 R2. Offices are connected with an MPLS with about 12Mbps max bandwidth in between.

Primary goals:
1) Backup at the VMDK level. Eliminate the monthly $2300 charge for BDR maintenance and offsite backups. Use the two offices as primary backup locations, then replicate to another offsite location.
2) Replace the aging DS3300 and increase storage capacity.
3) If I can get approval for the added cost, I want to implement two servers in Portland and a SAN, and implement HA/DRS. In the event of an emergency in Seattle, I'd like to be able to run everything from Portland.
4) Increase performance of software applications (the hogs are SQL-based).
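A quick back-of-the-envelope sketch of what that 12Mbps MPLS link means for offsite replication. The ~2.5TB figure is a made-up midpoint of the 2-3TB above, and the 80% link efficiency is an assumption; real throughput will be lower once other traffic shares the pipe:

```python
# Rough estimate of initial replication ("seeding") time over a WAN link.
# Assumes the 12 Mbps MPLS link described above and ~2.5 TB of backup data;
# the 80% efficiency factor is a guess at protocol overhead.

def seed_days(data_tb: float, link_mbps: float, efficiency: float = 0.8) -> float:
    """Days needed to push data_tb terabytes over a link_mbps link."""
    data_bits = data_tb * 1e12 * 8               # decimal TB -> bits
    usable_bps = link_mbps * 1e6 * efficiency    # usable bits per second
    return data_bits / usable_bps / 86400        # seconds -> days

print(f"initial seed:  {seed_days(2.5, 12):.1f} days")
print(f"~50GB nightly: {seed_days(0.05, 12):.2f} days")
```

At roughly three and a half weeks for the initial seed, you'd probably want to seed the first copy locally and ship it, then let the link carry only the nightly deltas.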
|
# ? Jun 28, 2013 02:35 |
|
1) Get 2x HA pairs of Oracle Sun 7320 storage.
2) Get 4x (2x for each location) Cisco 3750-X gigabit switches.
3) License everything you need.

For #1, I went all out with ours and got roughly 15TB (usable) on each with tons of cache for $100k for both pairs. You could get less storage and less cache and probably end up somewhere around $80k.

For #2, I think you can probably do all four switches for well under $10k. They stack and allow you to do cross-switch EtherChannel links, giving you good enough speed and redundancy for your organization's needs.

For #3, I can't exactly comment. I think you could do the entire project for under $100k. You would be using only gigabit Ethernet rather than 10-gig Ethernet or FC, but you can replicate between the two.
|
# ? Jun 28, 2013 04:10 |
|
It's really hard to say what you "need" without looking at the VMs, their needs, and resource requirements. Do you REALLY need SAN-level replication, or can you get by with something such as PHDvirtual/Veeam VM replication to a DR site?

Primary goals:
1) Backup at the VMDK level. Eliminate the monthly $2300 charge for BDR maintenance and offsite backups. Use the two offices as primary backup locations, then replicate to another offsite location.
Look into Veeam or PHDvirtual, maybe even the Avamar virtual appliance, before you head right to a DD; a DD160 may be more reasonable for your environment if you want to go that way.

2) Replace the aging DS3300 and increase storage capacity.
Storage capacity is easy to obtain. What about IOPS requests, latency, reads vs. writes, hot data vs. mostly stagnant data, and future growth over 3-5 years?

3) If I can get approval for the added cost, I want to implement two servers in Portland and a SAN, and implement HA/DRS. In the event of an emergency in Seattle, I'd like to be able to run everything from Portland.
For your DR site, what is your RTO? What would be your ideal scenario for migrating to your DR site? Do you need automated or manual failover?

E: also, if you feel uncomfortable, reach out to your VAR; sit down with them and explain your budget and your goals, and more than likely they will be able to give you a solution that works well. Us goons can recommend a bunch of things, but without looking closely at the environment it's difficult to tell what you really need. I mean, we can throw potshots out, but I wouldn't take it for "EXACTLY WHAT YOU NEED". Dilbert As FUCK fucked around with this message at 04:50 on Jun 28, 2013 |
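To make the "capacity is easy, IOPS are the question" point concrete, here's a minimal sizing sketch. The RAID write penalties are the standard textbook values; the per-VM IOPS, write mix, and per-spindle figure are illustrative assumptions — pull real numbers from perfmon/esxtop before buying anything:

```python
# Back-of-the-envelope IOPS sizing. Back-end (disk-facing) IOPS exceed
# front-end (guest-facing) IOPS because each logical write costs extra
# physical I/Os depending on the RAID level.

RAID_WRITE_PENALTY = {0: 1, 1: 2, 5: 4, 6: 6, 10: 2}

def backend_iops(front_iops: int, write_pct: float, raid_level: int) -> float:
    """Disk-facing IOPS needed to serve front_iops of guest I/O."""
    reads = front_iops * (1 - write_pct)
    writes = front_iops * write_pct * RAID_WRITE_PENALTY[raid_level]
    return reads + writes

# e.g. 13 VMs averaging ~75 IOPS each, 30% writes, on RAID-5:
need = backend_iops(13 * 75, 0.30, 5)
spindles = need / 140    # ~140 IOPS per 10k SAS spindle, rule of thumb
print(round(need), "backend IOPS, roughly", round(spindles), "spindles")
```

The takeaway: the same 975 guest IOPS needs nearly twice the disk on RAID-5 as on RAID-10 once the write penalty is counted, which is why "how many TB" is the wrong first question.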
# ? Jun 28, 2013 04:20 |
|
adorai posted:1) Get 2x HA pairs of Oracle Sun 7320 storage. How are these working out for you? Is the performance good? Have you tested HA on them? I am curious because I am a huge ZFS fan, and some real-world experience with Oracle/Sun gear would be nice.
|
# ? Jun 28, 2013 10:32 |
|
Mr Shiny Pants posted:How are these working out for you? Is the performance good? Have you tested HA on them? I am curious because I am a huge ZFS fan, and some real-world experience with Oracle/Sun gear would be nice. They work great for our needs, which are storage for a little over 200 VDI sessions and 50 Citrix servers. I suspect we can double the number of VDI sessions we host without seeing a performance hit. We did play around with HA early on, and the takeover was quite fast. VMs hung for about 2 seconds but then picked right back up again. Honestly, for the price, Oracle should be murdering everyone else based on what I've seen.
|
# ? Jun 28, 2013 12:23 |
|
That's great to hear. Does Oracle have a roadmap for these systems? I don't want to buy/advise something with no upgrade path... Is this your primary storage? If not, why not?
|
# ? Jun 28, 2013 17:22 |
|
The problem with Oracle ZFS is that 80% of their core engineers left after they closed Solaris.
|
# ? Jun 28, 2013 18:20 |
|
Corvettefisher posted:It's really hard to say what you "need" without looking at the VMs, their needs, and resource requirements.
|
# ? Jun 28, 2013 18:49 |
|
Look into things like SRM, Veeam, or PHDvirtual for that. SRM has many nice features which can make a failover take about as long as an HA event (~5 minutes), and it can be automated. Also, what are your plans for backing up guest-level objects such as files and settings inside the VM? Dilbert As FUCK fucked around with this message at 20:10 on Jun 28, 2013 |
# ? Jun 28, 2013 20:06 |
|
evil_bunnY posted:The problem with Oracle ZFS is that 80% of their core engineers left after they closed Solaris. Not to defend Oracle or something, but isn't this the problem almost anywhere? EMC has lost a couple of their directors, who started another company and built the XIV. NetApp is laying off 800 people, and other tech companies are doing the same.
|
# ? Jun 28, 2013 20:22 |
|
Mr Shiny Pants posted:Not to defend Oracle or something, but isn't this the problem almost anywhere? EMC has lost a couple of their directors, who started another company and built the XIV. NetApp is laying off 800 people, and other tech companies are doing the same. This typically doesn't factor into business decisions, but it's important, I think. Especially so now, since open-source ZFS uses feature flags instead of hard/meaningless version numbers. Vulture Culture fucked around with this message at 20:46 on Jun 28, 2013 |
# ? Jun 28, 2013 20:37 |