evil_bunnY
Apr 2, 2003

three posted:

If you don't have your own Synology NAS with WD Red drives, then you are literally living like poo poo.
Cheap drives with SSD cache in a N40L 8(

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

three posted:

If you don't have your own Synology NAS with WD Red drives, then you are literally living like poo poo.

Yeah, well, I am not spending ~$499 + drives + an additional switch when I can do a virtual ZFS store that fits the needs of a lab.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Corvettefisher posted:

Yeah, well, I am not spending ~$499 + drives + an additional switch when I can do a virtual ZFS store that fits the needs of a lab.

Your time should be worth more than that.

bull3964
Nov 18, 2000

DO YOU HEAR THAT? THAT'S THE SOUND OF ME PATTING MYSELF ON THE BACK.


Still though, you should probably use drives with a limited error recovery mode (TLER for Western Digital drives) or they are going to be a PITA to use under any sort of RAID-like system, as the first time one of them goes into heroic recovery mode it will likely hose the volume.
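If you want to check whether a drive actually honors a capped recovery timeout, smartctl can query and set SCT Error Recovery Control. A minimal sketch (Python calling smartmontools; the device path is just an example, and not every consumer drive accepts the setting):

code:
import subprocess

DEVICE = "/dev/sda"  # example device path; adjust for your system

def show_erc(device):
    # Query the drive's current SCT Error Recovery Control (TLER/CCTL) settings.
    out = subprocess.run(["smartctl", "-l", "scterc", device],
                         capture_output=True, text=True, check=True)
    return out.stdout

def cap_erc(device, deciseconds=70):
    # Cap read/write error recovery at `deciseconds` tenths of a second (70 = 7s),
    # so a failing sector doesn't stall the whole array for minutes.
    subprocess.run(["smartctl", "-l", "scterc,%d,%d" % (deciseconds, deciseconds), device],
                   check=True)

if __name__ == "__main__":
    print(show_erc(DEVICE))
    # cap_erc(DEVICE)  # uncomment to apply; many drives reset this on power cycle

On many drives the value resets on a power cycle, so if the drive takes it at all it usually has to be reapplied at boot.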

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

three posted:

Your time should be worth more than that.

I'm fairly happy with the performance I get thus far; FreeNAS 8.3 and 9.x support hardware acceleration.

the spyder
Feb 18, 2011
Dicktrama, here are my labs:

Home:
Xeon E3-1220
Supermicro board
32GB Kingston value RAM
Quad-port Intel 1Gb PCIe NIC
Samsung 830 240GB SSD
3TB Hitachi

NAS for the above server:
HP MicroServer N40L
8GB RAM
4x 1.5TB Seagate 7200RPM drives
4x 256GB Crucial M4s

I am sadly hosting only a single W7 machine ATM. I just wiped my entire 2k8r2 lab to load 2012 on.

Work:
Some fancy dual 6-core Xeon, Napp-it all-in-one Solaris ZFS passthrough serving up 3TB of SSDs and 32TB of spindle drives.

Why are you worried about someone stealing a computer from your office? Check out either servethehome.com or the virtualization section on hardforums for quite a few home lab builds.

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

bull3964 posted:

Still though, you should probably use drives with a limited error recovery mode (TLER for Western Digital drives) or they are going to be a PITA to use under any sort of RAID-like system, as the first time one of them goes into heroic recovery mode it will likely hose the volume.

So this. Oh so this. A thousand times this.

I had a RAID-5 array of six WD Blues and discovered the pitfalls of unnoticed bad blocks. What essentially happened is that I had a drive fail and then the array failed to rebuild after swapping out the drive.

The vendor-verified solution was to tear down the array, rebuild it and restore data from backup (making sure to test each individual drive for health before rebuilding it, of course).

Don't be me. Buy TLER/CCTL drives for your hardware RAID solution.

Note that TLER/CCTL does not add benefit in devices that use software RAID.

evil_bunnY
Apr 2, 2003

Agrikk posted:

I had a RAID-5 array of six WD Blues and discovered the pitfalls of unnoticed bad blocks. What essentially happened is that I had a drive fail and then the array failed to rebuild after swapping out the drive.
And this is why ZFS owns. The first time it yells at you like "hey nerd I fixed your sucky drive you're welcome", ah that feeling.

evil_bunnY fucked around with this message at 23:22 on Jun 25, 2013
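If you'd rather not wait for ZFS to yell at you, here's a rough sketch (assuming a pool named tank, which is just an example) of pulling the per-device READ/WRITE/CKSUM counters out of zpool status so you notice the silent fixes:

code:
import subprocess

POOL = "tank"  # example pool name; substitute your own

def pool_error_counters(pool):
    # Parse `zpool status` and return {device: (read, write, cksum)} counters.
    out = subprocess.run(["zpool", "status", pool],
                         capture_output=True, text=True, check=True).stdout
    counters = {}
    for line in out.splitlines():
        fields = line.split()
        # Device rows end in three numeric columns: READ WRITE CKSUM.
        if len(fields) >= 5 and all(f.isdigit() for f in fields[-3:]):
            counters[fields[0]] = tuple(int(f) for f in fields[-3:])
    return counters

if __name__ == "__main__":
    for dev, (rd, wr, ck) in pool_error_counters(POOL).items():
        if rd or wr or ck:
            print("%s: read=%d write=%d cksum=%d" % (dev, rd, wr, ck))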

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
ZFS supremacy.

I still bought Reds for my home server, but I love ZFS.

VM related, I had a dream that I was in the position to design and implement a pretty large VMware deployment at work, then woke up and realized it was a dream and got sad. What does this say about me?

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

FISHMANPET posted:

ZFS supremacy.

I still bought Reds for my home server, but I love ZFS.

VM related, I had a dream that I was in the position to design and implement a pretty large VMware deployment at work, then woke up and realized it was a dream and got sad. What does this say about me?

That, like the rest of us, you enjoy playing with expensive toys and getting to build things instead of maintaining things.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
Yeah, I think my dream job would be consulting or MSP where I get to come in and build nice stuff from scratch then leave other people to run it, occasionally coming in to check up on it and stuff.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

FISHMANPET posted:

Yeah, I think my dream job would be consulting or MSP where I get to come in and build nice stuff from scratch then leave other people to run it, occasionally coming in to check up on it and stuff.

This is where I want to be. Working for a VAR doing implementation and then handing the keys over to some dummy.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Agrikk posted:

So this. Oh so this. A thousand times this.

I had a RAID-5 array of six WD Blues and discovered the pitfalls of unnoticed bad blocks. What essentially happened is that I had a drive fail and then the array failed to rebuild after swapping out the drive.

The vendor-verified solution was to tear down the array, rebuild it and restore data from backup (making sure to test each individual drive for health before rebuilding it, of course).

Don't be me. Buy TLER/CCTL drives for your hardware RAID solution.

Note that TLER/CCTL does not add benefit in devices that use software RAID.

Not even doing HW RAID, just plopping zeroed VMDKs on it and passing them to the FreeNAS VM.

Dilbert As FUCK fucked around with this message at 00:09 on Jun 26, 2013

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

FISHMANPET posted:

Yeah, I think my dream job would be consulting or MSP where I get to come in and build nice stuff from scratch then leave other people to run it, occasionally coming in to check up on it and stuff.

Yeah, you want to work for a VAR or to be an implementation engineer or something. You should know by now MSP = shoestring budget and busted rear end equipment.

madsushi
Apr 19, 2009

Baller.
#essereFerrari
XenServer and XenCenter are both being open-sourced and are completely free, as of today:

http://blog.xen.org/index.php/2013/06/25/xenserver-org-and-the-xen-project/

Looks like Citrix's strategy of using XenServer as a loss leader for XenDesktop is now definitive.

Here's a great FAQ on the changes:

http://xenserver.org/discuss-virtualization/q-and-a/categories/listings/xenserver-org-launch.html

Frozen Peach
Aug 25, 2004

garbage man from a garbage can
We finally got a license for vCenter, and I'm starting the process of setting it up and I have a few questions:

Does vCenter have to be installed on a Windows Server OS, or can I use a 7 Pro or even XP Pro box?

Is there any reason not to just use the virtual appliance on one of your hosts?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Frozen-Solid posted:

We finally got a license for vCenter, and I'm starting the process of setting it up and I have a few questions:

Does vCenter have to be installed on a Windows Server OS, or can I use a 7 Pro or even XP Pro box?

Is there any reason not to just use the virtual appliance on one of your hosts?

It must be installed on a Windows Server box, or you can use the Linux appliance.

The vApp has a few restrictions: it supports 5 hosts / ~50 VMs (may be higher with 5.1 U1), the only supported external database is Oracle, and there are compatibility gaps or outright lack of support with a few other VMware products.

Dilbert As FUCK fucked around with this message at 15:20 on Jun 26, 2013

Demonachizer
Aug 7, 2004
My lab for the past 2.5 years has been:

3x HP DL380 G7 servers
each with
2x Xeon 5660 (6 cores each; 2.8GHz)
64GB RAM

and for storage

2x Dell EqualLogic PS4000X with 9.6TB raw each


This is because nobody here knows VMware and it was decided that we needed it and I would have to figure it out. Then other more important projects came up and here we are. :( :(


EDIT: I might have mentioned it before, but the primary use for the environment? A file server. I poo poo you not. I was just told the other day that I should try to limit the number of guests that go on it to like 4 or 5...

jre
Sep 2, 2011

To the cloud?



demonachizer posted:

EDIT: I might have mentioned it before, but the primary use for the environment? A file server. I poo poo you not. I was just told the other day that I should try to limit the number of guests that go on it to like 4 or 5...

:what:

Your company has a lot of cash and poor planning, then?

Demonachizer
Aug 7, 2004

jre posted:

:what:

Your company has a lot of cash and poor planning, then?

Normally things are not like this. I think it is because everyone is really old-school IT and virtualization is something that they know we need to be moving towards, but nobody knows anything about it. Like, decision makers don't even really understand the technology at a basic level. My boss goes to talks and poo poo with vendors about technology, then comes into the office with his hair on fire about what the next project will be sometimes, and this was one of them. I mean, it makes sense that we need to virtualize. Our main mission-critical application, as an example, is accessed via Citrix. It can only run on 32-bit Windows currently, so our Citrix farm consists of 45 servers with 4GB of RAM each... This should have been addressed prior, but the environment is in place and has been for 9 years or so. We will probably virtualize all of it in 2-3 years and then things might start moving correctly.

The biggest problem with the whole thing for me is that the scope of this first project keeps changing. Like, it started as "hey, we need two SANs in two locations with two VM clusters for redundancy, and let's use SRM." Then it changed to a SAN in each location but the hosts only in one location. Now it looks like the two-cluster idea is back again. This time I said no. I said that we need to plan that as a separate project and that this one needs to be put to bed because we need to have an end date.

Everything that we do that is non-virtualized is done well, with plenty of redundancy etc., but for some reason virtualization has caused everyone to kind of lose their heads. I try to keep the perspective that there are probably very few places out there where I could just be handed a shitload of hardware and be told to put it together at my own pace with very, very little pressure on timelines. The logical side of me, though, can't deal with it very well.

Erwin
Feb 17, 2006

demonachizer posted:

Everything that we do that is non-virtualized is done well, with plenty of redundancy etc., but for some reason virtualization has caused everyone to kind of lose their heads. I try to keep the perspective that there are probably very few places out there where I could just be handed a shitload of hardware and be told to put it together at my own pace with very, very little pressure on timelines. The logical side of me, though, can't deal with it very well.
Why not take the ICM course? Obviously your company can afford it.

Demonachizer
Aug 7, 2004

Erwin posted:

Why not take the ICM course? Obviously your company can afford it.

They won't pay for it. They gave me the time instead of the money...


I will be taking it myself and getting my VCP and jumping ship as soon as I am done.

Erwin
Feb 17, 2006

B-b-b-but, they have all that hardware! Just sitting there! $1,200 or whatever the course is and they could put it to good use! :psyduck:

Frozen Peach
Aug 25, 2004

garbage man from a garbage can
I've successfully deployed vCenter and vSphere Data Protection, and everything is up and running smoothly. The first backups to VDP should run tonight after work. I'm excited!

The one thing I can't quite figure out is how to handle off-site backups after a backup has been made to VDP. Am I right in my understanding that VDP just stores everything in the VMDKs that come in the OVA package? I moved the VMDKs so that VDP is stored on our secondary backup storage, but I also want to make sure I make off-site backups of the VDP, which will get us off-site backups of our VMware setup as well.

I should be able to just make a snapshot of the VDP appliance and copy the files off manually to do off-site backups, like I have been with each individual VM in the past, right? Or is there an easier way to copy backups off-site that I haven't found yet?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Frozen-Solid posted:

I've successfully deployed vCenter and vSphere Data Protection, and everything is up and running smoothly. The first backups to VDP should run tonight after work. I'm excited!

The one thing I can't quite figure out is how to handle off-site backups after a backup has been made to VDP. Am I right in my understanding that VDP just stores everything in the VMDKs that come in the OVA package? I moved the VMDKs so that VDP is stored on our secondary backup storage, but I also want to make sure I make off-site backups of the VDP, which will get us off-site backups of our VMware setup as well.



You can place the VDP appliance on an NFS share, then replicate the NFS shares. Sadly there isn't a built-in way to replicate VDP, but I think that was a "by choice" decision by VMware, judging by how they responded to the question at PEX.

Dilbert As FUCK fucked around with this message at 17:13 on Jun 27, 2013

Agrikk
Oct 17, 2003

Take care with that! We have not fully ascertained its function, and the ticking is accelerating.

Frozen-Solid posted:

I've successfully deployed vCenter and vSphere Data Protection...

...and copy the files off manually to do offsite...

You throw that word out of your vocabulary right now.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.
I'll preface this by saying I kind of lucked into my job and I feel like I probably don't know a lot of things a person in my position should know.

I've been looking into doing an infrastructure refresh for months now, and after the CFO balked at the initial $130k price tag of replacing servers, switches, SANs and backup appliances, I'm hoping to just implement better backups initially, then later this year or next, move forward with the rest. Right now I'm looking into implementing a couple of Data Domain DD620s to replace a few lovely BDRs that are backing up our (VMware) servers at the file system level. Getting pressure from EMC saying the price is going to go up $10k if I don't buy in the next two days. :jerkbag: I want to implement site-to-site replication of some sort; however, I'm trying to figure out the most logical and practical approach to backing up two offices.

However, since I want to replace our SAN in Seattle and potentially implement one in Portland, what is the practical difference between replicating backups and SAN to SAN replication? Am I going about it all wrong with the Data Domains?

With the assumption I go with the DD620's, does anyone have a strong opinion on which backup software to use? Veeam, vRanger? (Zerto? - looks more like SAN replication?)

Jesus Christ I need to organize my thoughts better.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

goobernoodles posted:

I'll preface this by saying I kind of lucked into my job and I feel like I probably don't know a lot of things a person in my position should know.

I've been looking into doing an infrastructure refresh for months now, and after the CFO balked at the initial $130k price tag of replacing servers, switches, SANs and backup appliances, I'm hoping to just implement better backups initially, then later this year or next, move forward with the rest. Right now I'm looking into implementing a couple of Data Domain DD620s to replace a few lovely BDRs that are backing up our (VMware) servers at the file system level. Getting pressure from EMC saying the price is going to go up $10k if I don't buy in the next two days. :jerkbag: I want to implement site-to-site replication of some sort; however, I'm trying to figure out the most logical and practical approach to backing up two offices.

However, since I want to replace our SAN in Seattle and potentially implement one in Portland, what is the practical difference between replicating backups and SAN to SAN replication? Am I going about it all wrong with the Data Domains?

With the assumption I go with the DD620's, does anyone have a strong opinion on which backup software to use? Veeam, vRanger? (Zerto? - looks more like SAN replication?)

Jesus Christ I need to organize my thoughts better.
Personally, I think buying a pair of DD devices when your real goal seems to be new SANs with replication is a bad idea. Just get a pair of new SANs now if you can afford to.

Please give us an idea of your current infrastructure so we can more effectively advise you.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Can't advise on your environment since we don't have a good picture of it, but tell EMC you're going to go talk to Exagrid to see what they have and they'll shut up. They talk tough, but usually fold when it comes down to getting a check or not. They dicked us around on maintenance on our DD units, so we said screw it, cancel it, and they came back.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

adorai posted:

Personally, I think buying a pair of DD devices when your real goal seems to be new SANs with replication is a bad idea. Just get a pair of new SANs now if you can afford to.

Please give us an idea of your current infrastructure so we can more effectively advise you.
Currently:

Seattle
- 3x IBM xSeries hosts running 13 VMs.
- IBM DS3300 SAN w/ expansion, maxed out at 9TB.
- lovely Netgear switches.
- Some random BDR device from a company named Datto. Currently 7TB of the 10TB of storage is being utilized.

Portland
- 1 ancient IBM xSeries server running on local storage.
- lovely Netgear switches
- Only 4 VMs
- Two random BDRs backing up about 2-3TB.

Servers are a mix of Windows Server 2003 and 2008 R2. Offices are connected with an MPLS link with about 12Mbps max bandwidth in between.

Primary goals:

1) Back up at the VMDK level. Eliminate the monthly $2,300 charge for BDR maintenance and off-site backups. Use the two offices as primary backup locations, then replicate to another off-site location.
2) Replace the aging DS3300 and increase storage capacity.
3) If I can get approval for the added cost, I want to implement two servers in Portland and a SAN and implement HA/DRS. In the event of an emergency in Seattle, I'd like to be able to run everything from Portland.
4) Increase performance of software applications (the hogs are SQL-based)

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
1) Get 2x HA pairs of Oracle Sun 7320 storage.
2) Get 4x (2x for each location) Cisco 3750x gigabit switches
3) License everything you need

For #1, I went all out with ours and got roughly 15TB (usable) on each with tons of cache for $100k for both pairs. You could get less storage and less cache and probably end up somewhere around $80k
For #2, I think you can probably do all four switches for well under $10k. They stack and allow you to do cross-switch EtherChannel links, giving you good enough speed and redundancy for your organization's needs.
For #3, I can't exactly comment.

I think you could do the entire project for under $100k. You would be using only gigabit Ethernet rather than 10gig Ethernet or FC, but you can replicate between the two.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
It's really hard to say what you "need" without looking at the VMs, their needs, and their resource requirements.

Do you REALLY need SAN-level replication, or can you get by with something such as PHDvirtual/Veeam VM replication to a DR site?

Primary goals:

1) Back up at the VMDK level. Eliminate the monthly $2,300 charge for BDR maintenance and off-site backups. Use the two offices as primary backup locations, then replicate to another off-site location.

Look into Veeam or PHDvirtual, maybe even the Avamar Virtual Appliance, before you head right to a DD; a 160 may be more reasonable for your environment if you want to go that way.

2) Replace the aging DS3300 and increase storage capacity.
Storage capacity is easy to obtain, but what about IOPS requirements, latency, reads vs. writes, hot data vs. mostly stagnant data, and future growth over 3-5 years? (Rough sizing sketch at the end of this post.)

3) If I can get approval for the added cost, I want to implement two servers in Portland and a SAN and implement HA/DRS. In the event of an emergency in Seattle, I'd like to be able to run everything from Portland.

For your DR site what is your RTO? What would be your ideal scenario of migrating to your DR site? Do you need automated or manual failover?

E: also, if you feel uncomfortable, reach out to your VAR; sit down with them and explain your budget and your goals, and more than likely they will be able to give you a solution that works well. Us goons can recommend a bunch of things, but without looking closely at the environment it's difficult to tell what you really need.

I mean we can throw potshots out but I wouldn't take it for "EXACTLY WHAT YOU NEED".

Dilbert As FUCK fucked around with this message at 04:50 on Jun 28, 2013
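Since goon advice is all potshots anyway, here is the kind of back-of-the-envelope math I mean for point 2. This is a rough sketch only; the RAID write penalties and per-spindle IOPS figures are common rules of thumb, and the workload numbers are made up for illustration:

code:
# Back-of-the-envelope spindle sizing: front-end IOPS -> back-end IOPS -> disk count.
# Rule-of-thumb numbers only; measure your real workload before buying anything.

import math

RAID_WRITE_PENALTY = {"raid10": 2, "raid5": 4, "raid6": 6}
IOPS_PER_SPINDLE = {"7.2k": 80, "10k": 130, "15k": 180}

def spindles_needed(total_iops, read_pct, raid, disk_type):
    reads = total_iops * read_pct
    writes = total_iops * (1 - read_pct)
    backend = reads + writes * RAID_WRITE_PENALTY[raid]   # writes are amplified by parity
    return backend, math.ceil(backend / IOPS_PER_SPINDLE[disk_type])

if __name__ == "__main__":
    # Illustrative workload: 2000 front-end IOPS, 70% reads, RAID 5 on 10k disks.
    backend, disks = spindles_needed(2000, 0.70, "raid5", "10k")
    print("~%.0f back-end IOPS -> at least %d x 10k spindles" % (backend, disks))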

Mr Shiny Pants
Nov 12, 2012

adorai posted:

1) Get 2x HA pairs of Oracle Sun 7320 storage.
2) Get 4x (2x for each location) Cisco 3750x gigabit switches
3) License everything you need

For #1, I went all out with ours and got roughly 15TB (usable) on each with tons of cache for $100k for both pairs. You could get less storage and less cache and probably end up somewhere around $80k
For #2, I think you can probably do all four switches for well under $10k. They stack and allow you to do cross-switch EtherChannel links, giving you good enough speed and redundancy for your organization's needs.
For #3, I can't exactly comment.

I think you could do the entire project for under $100k. You would be using only gigabit Ethernet rather than 10gig Ethernet or FC, but you can replicate between the two.

How are these working out for you? Is the performance good? Have you tested HA on them? I am curious because I am a huge ZFS fan and some real-world experience with Oracle/Sun gear would be nice.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Mr Shiny Pants posted:

How are these working out for you? Is the performance good? Have you tested HA on them? I am curious because I am a huge ZFS fan and some real-world experience with Oracle/Sun gear would be nice.

They work great for our needs, which are storage for a little over 200 VDI sessions and 50 Citrix servers. I suspect that we can double the number of VDI sessions we host without seeing a performance hit. We did play around with HA early on and the takeover was quite fast. VMs hung for about 2 seconds but then picked right back up again.

Honestly, for the price, Oracle should be murdering everyone else based on what I've seen.

Mr Shiny Pants
Nov 12, 2012
That's great to hear. Does Oracle have a roadmap for the systems? I don't want to buy/advise something with no upgrade path...

Is this your primary storage? If not why not?

evil_bunnY
Apr 2, 2003

The problem with Oracle ZFS is that 80% of their core engineers left after they closed Solaris.

goobernoodles
May 28, 2011

Wayne Leonard Kirby.

Orioles Magician.

Corvettefisher posted:

It's really hard to say what you "need" without looking at the VMs, their needs, and their resource requirements.

Do you REALLY need SAN-level replication, or can you get by with something such as PHDvirtual/Veeam VM replication to a DR site?

For your DR site what is your RTO? What would be your ideal scenario of migrating to your DR site? Do you need automated or manual failover?

E: also, if you feel uncomfortable, reach out to your VAR; sit down with them and explain your budget and your goals, and more than likely they will be able to give you a solution that works well. Us goons can recommend a bunch of things, but without looking closely at the environment it's difficult to tell what you really need.

I mean we can throw potshots out but I wouldn't take it for "EXACTLY WHAT YOU NEED".
There's no specific reason why we would need SAN-level replication. I just need to get our VMs backed up and replicated with an RTO under an hour. Manual failover is fine, though automated would be a nice touch.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Look into things like SRM, Veeam, or PHDvirtual for that. SRM has many nice features which can make a failover take about the time of an HA event (~5 minutes), and it can be automated.

Also what are your plans for backing up guest level objects such as files and settings inside the VM?

Dilbert As FUCK fucked around with this message at 20:10 on Jun 28, 2013

Mr Shiny Pants
Nov 12, 2012

evil_bunnY posted:

The problem with Oracle ZFS is that 80% of their core engineers left after they closed Solaris.

Not to defend Oracle or anything, but isn't this the problem almost everywhere? EMC has lost a couple of their directors, who started another company and built the XIV. NetApp is laying off 800 people and other tech companies are doing the same.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Mr Shiny Pants posted:

Not to defend Oracle or anything, but isn't this the problem almost everywhere? EMC has lost a couple of their directors, who started another company and built the XIV. NetApp is laying off 800 people and other tech companies are doing the same.
The bigger problem is that the divergent ZFS codebases mean that many more independent vendors are using Illumos ZFS than Oracle ZFS. Oracle now has vendor lock-in on their formerly open-source filesystem, while the competition doesn't.

This typically doesn't factor into business decisions, but it's important, I think. Especially so now, since open-source ZFS uses feature flags instead of hard/meaningless version numbers.

Vulture Culture fucked around with this message at 20:46 on Jun 28, 2013
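For anyone curious what that looks like in practice, open-source ZFS exposes the flags as feature@ pool properties. A small sketch (the pool name is just an example) that lists them and their state:

code:
import subprocess

POOL = "tank"  # example pool name

def feature_flags(pool):
    # List feature@ properties (open-source ZFS feature flags) and their state
    # (disabled / enabled / active) instead of a single opaque pool version number.
    out = subprocess.run(["zpool", "get", "-H", "-o", "property,value", "all", pool],
                         capture_output=True, text=True, check=True).stdout
    flags = {}
    for line in out.splitlines():
        prop, _, value = line.partition("\t")
        if prop.startswith("feature@"):
            flags[prop] = value
    return flags

if __name__ == "__main__":
    for name, state in sorted(feature_flags(POOL).items()):
        print("%s = %s" % (name, state))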
