Docjowles
Apr 9, 2009

Misogynist posted:

DRBD is incredibly fragile and breaks all the time and you're going to want to kill yourself. Other than this, have a great time!

Seconding this. We use it in our dev environment and it makes me want to :black101:


evol262
Nov 30, 2010
#!/usr/bin/perl

Goon Matchmaker posted:

Don't tell me this :(

Edit: Just found lsyncd. This suits our needs way better than DRBD and should be more reliable.

This is not the same as DRBD if you want to use a clustered filesystem.

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online

evol262 posted:

This is not the same as DRBD if you want to use a clustered filesystem.


We're not aiming for a clustered filesystem. Those have too many restrictions when run under ESX, like not being able to back them up with Veeam. What we're aiming for is an NFS failover cluster where the data is replicated between two hosts, and if one host blows up the other takes over. We originally tried OCFS2, but since we couldn't back it up we had to go with DRBD. Lsyncd is almost exactly what we want: just replicating files between two hosts.
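A minimal lsyncd setup for that kind of one-way push is just a short Lua config (lsyncd 2.1+ syntax; the hostnames and paths here are invented, and `default.rsyncssh` assumes SSH keys are already set up between the two boxes):

```lua
-- /etc/lsyncd/lsyncd.conf.lua -- sketch, adjust paths/hosts to taste
settings {
    logfile    = "/var/log/lsyncd.log",
    statusFile = "/var/log/lsyncd-status.log",
}

sync {
    default.rsyncssh,          -- rsync over ssh to the standby node
    source    = "/export/nfs", -- what the active NFS head serves
    host      = "standby-host",
    targetdir = "/export/nfs",
    delay     = 1,             -- batch events for at most 1 second
    rsync = {
        archive  = true,
        compress = false,      -- usually not worth it on a LAN
    },
}
```

Worth remembering it's asynchronous file-level replication, so unlike DRBD you can lose whatever was written in the last delay window if the active host dies.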

Less Fat Luke
May 23, 2003

Exciting Lemon
Yeah DRBD is kind of a cock. It's not included officially in RedHat or CentOS as far as I know, so get ready to forget to compile the kernel module every time you update things if you use those distros. That said, it hasn't destroyed our data (yet), but performance definitely sucks rear end even over a 10GbE link. We have it set to replicate synchronously since our DRBD store is for VMs, and it's been fairly reliable, just not fast.

We have two NFS servers and replicate via DRBD - there's a keepalived process that fails over between them if one goes down, and amazingly that's only hosed up once in the three years or so since I configured it. And to be fair, that fuckup was NetBackup holding the DRBD partitions open during a planned failover event.

It is very nice running 70+ VMs off a gigantic pair of fileservers and being able to fail back and forth between them without anyone really noticing, other than IO freezing for 10 seconds while ARP responses are sent and caches are flushed. Well, I mean, it's very nice being able to do that for free. These days I'm lazy and, as I said, would just buy a Dell MD3220 or an iSCSI NAS or something instead.
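For anyone curious, the keepalived half of a setup like that is pretty small. A sketch (the interface, VIP, and the promote/demote scripts are all invented; those scripts would wrap `drbdadm primary`/`drbdadm secondary` plus mount and exportfs):

```
# /etc/keepalived/keepalived.conf -- sketch for the NFS VIP
vrrp_script chk_nfs {
    script "/usr/bin/pgrep -x nfsd"   # node is healthy while nfsd is running
    interval 2
    fall 2
}

vrrp_instance NFS_VIP {
    state BACKUP              # both nodes start BACKUP; priority elects the master
    interface eth0
    virtual_router_id 51
    priority 100              # set e.g. 90 on the peer
    advert_int 1
    virtual_ipaddress {
        10.0.0.50/24          # the address the NFS clients mount
    }
    track_script {
        chk_nfs
    }
    # hypothetical helper scripts: promote/demote DRBD, (un)mount, exportfs
    notify_master "/usr/local/sbin/drbd-promote.sh"
    notify_backup "/usr/local/sbin/drbd-demote.sh"
}
```

The gratuitous ARP keepalived sends on failover is what causes the brief IO freeze mentioned above while clients and switches update their caches.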

Less Fat Luke fucked around with this message at 03:13 on Feb 15, 2013

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Less Fat Luke posted:

Yeah DRBD is kind of a cock. It's not included officially in RedHat or CentOS as far as I know, so get ready to forget to compile the kernel module every time you update things if you use those distros.
I'm pretty sure it's been in CentOS Plus since 2007 or so, actually. poo poo outta luck for RHEL of course, but anyone stuck with RHEL is probably used to that by now.

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online

Misogynist posted:

I'm pretty sure it's been in CentOS Plus since 2007 or so, actually. poo poo outta luck for RHEL of course, but anyone stuck with RHEL is probably used to that by now.

I just use elrepo and epel to cover anything RHEL is missing. I'm waiting on Redhat support to tell me to gently caress off but so far they haven't.

evol262
Nov 30, 2010
#!/usr/bin/perl

Goon Matchmaker posted:

I just use elrepo and epel to cover anything RHEL is missing. I'm waiting on Redhat support to tell me to gently caress off but so far they haven't.

I've never used elrepo, but EPEL explicitly avoids replacing system packages on RHEL. It's "Extra Packages", not "Replacement Packages", unlike CentOS Plus. Redhat support is ok with that.

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online

evol262 posted:

I've never used elrepo, but EPEL explicitly avoids replacing system packages on RHEL. It's "Extra Packages", not "Replacement Packages", unlike CentOS Plus. Redhat support is ok with that.

Elrepo does the same unless you enable one of the optional repositories, and even then it only replaces a few minor things. I usually use it to get a mainline kernel, since those seem to perform better under ESX than the default RHEL kernel.
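For reference, pulling a mainline kernel from ELRepo goes roughly like this on an EL6-era box (the release RPM version and URLs are from memory, so check elrepo.org before pasting):

```shell
# Import the signing key and install the ELRepo release package
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm

# kernel-ml (mainline) lives in the elrepo-kernel repo, which is disabled by default
yum --enablerepo=elrepo-kernel install kernel-ml
```

Since the repo is disabled by default, plain `yum update` won't drag in anything unexpected, which is presumably why Red Hat support hasn't complained yet.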

Aniki
Mar 21, 2001

Wouldn't fit...

Aniki posted:

I think my plan is to try the USB to Parallel adapter first and if that doesn't work, then I'll consider trying VMWare (currently using Hyper-V) or ordering the USB2LAN hub. The hardware key is for Call Center Worx v. 2.1, which was released in 2001 and can only run on Windows NT based operating systems, and for some reason they stopped purchasing updates after that. I know that the USB keys they released later on were finicky and I'm not sure if the parallel key we have is supposed to be any better.

Ok, I ended up using VMware instead. Their support for parallel ports is much better and I had no issues getting the hardware security key working. I should have done that in the first place, but at least it is working now.

Less Fat Luke
May 23, 2003

Exciting Lemon

Misogynist posted:

I'm pretty sure it's been in CentOS Plus since 2007 or so, actually. poo poo outta luck for RHEL of course, but anyone stuck with RHEL is probably used to that by now.
Nice, I'll check that. I've never even heard of it.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

Our cluster is having major issues; I've been on the phone with VMware support most of today, and another guy is taking a crack at it right now. Two hosts just show up as disconnected in vSphere. The VMs are still running and accessible, but VMware support is telling us to manually shut down all VMs and hard reboot the hosts. It's a last resort option right now; there are three SQL servers on there among other servers.

One host isn't even responsive on the console... bleh.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

skipdogg posted:

Our cluster is having major issues; I've been on the phone with VMware support most of today, and another guy is taking a crack at it right now. Two hosts just show up as disconnected in vSphere. The VMs are still running and accessible, but VMware support is telling us to manually shut down all VMs and hard reboot the hosts. It's a last resort option right now; there are three SQL servers on there among other servers.

One host isn't even responsive on the console... bleh.
My guess is they lost some storage. I've been through exactly this, it sucked. The worst part is that in our case the HA agent tried to restart the guests, but since they were already running it failed, and we had duplicate VMs all over the place.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

adorai posted:

My guess is they lost some storage. I've been through exactly this, it sucked. The worst part is that in our case the HA agent tried to restart the guests, but since they were already running it failed, and we had duplicate VMs all over the place.

If it were storage, wouldn't some of the VMs stop responding due to disk timeouts? It could be iSCSI loss, since it's sporadic and a duplicated/misassigned IP can cause that, but it sounds a bit like something else.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
Storage issues can cause hostd to hang with no effect on the VMs necessarily; I've seen it when datastores are removed improperly. vSphere 5.1 introduced improvements here with a new timeout setting.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

three posted:

Storage issues can cause hostd to hang with no effect on the VMs necessarily; I've seen it when datastores are removed improperly. vSphere 5.1 introduced improvements here with a new timeout setting.

Can't say I have ever seen that, but it sounds plausible. It wouldn't be the strangest thing I have heard of this week....

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Corvettefisher posted:

Can't say I have ever seen that, but it sounds plausible. It wouldn't be the strangest thing I have heard of this week....
We improperly removed some iSCSI datastores that had no VMs on them, and it caused exactly this behaviour.

Kachunkachunk
Jun 6, 2011
It often comes down to storage issues, or improper presentation changes, yes. ESXi from 5.0 onward is supposed to be tolerant of All Paths Down conditions, especially the host agents. There are situations where it really doesn't handle it perfectly, but it's quite a few steps ahead of 4.x and earlier.
It's also a lot easier to remove devices gracefully on 5.0 and later. The process on 4.x can be a bit involved, requiring you to set specific claim rules for each device you want removed before rescanning and unpresenting.
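For the curious, the graceful path on 5.x boils down to a couple of esxcli calls (the naa ID below is a placeholder):

```shell
# List devices and identify the one to be removed
esxcli storage core device list

# Detach it so the host stops issuing IO to it -- this is the 5.0+ "graceful" part
esxcli storage core device set -d naa.60a98000572d54724a34655733506751 --state=off

# After unpresenting the LUN on the array side, rescan all adapters
esxcli storage core adapter rescan --all
```

Detaching first is what avoids the hostd hangs described above when a datastore just disappears out from under the host.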

HA is a bit more intelligent if you use Fault Domain Manager (FDM), which is vCenter Server 5.0 and later's HA agent. It also uses datastores to fully ascertain whether a host is down (traditionally HA just used network ping responses between nodes).

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Well 1 week till PEX! Anyone else going?

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

I don't know how, but basically some old LUNs from our previous SAN showed back up somehow, causing an All Paths Down issue.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
http://www.vmware.com/products/view/overview.html

So did view 5.2 just get released?

Erwin
Feb 17, 2006

Woah, I've been waiting on that poo poo.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Erwin posted:

Woah, I've been waiting on that poo poo.

Yeah, eager to test it out. It's just so sudden; I thought it was getting announced at PEX.

Huh, wonder where the supported GPU list is, or am I missing it? I'm fairly certain it's the Quadro lineup only.

Dilbert As FUCK fucked around with this message at 19:59 on Feb 20, 2013

DevNull
Apr 4, 2007

And sometimes is seen a strange spot in the sky
A human being that was given to fly

Corvettefisher posted:

Yeah, eager to test it out. It's just so sudden; I thought it was getting announced at PEX.

Huh, wonder where the supported GPU list is, or am I missing it? I'm fairly certain it's the Quadro lineup only.

Correct. It requires the GF100GL chip, which is found in the Quadro 4000, 5000, and 6000. It looks like they released a Plex 7000 as well with that chip. The Kepler-based Quadros should work as well. Maybe the supported cards are listed on the Nvidia site? I don't know how the driver VIB is distributed.
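Whatever the distribution channel ends up being, installing a driver VIB on a host generally looks like this (the path and filename are hypothetical, and driver VIBs usually want maintenance mode plus a reboot):

```shell
# Install the VIB from a local path on the host
esxcli software vib install -v /tmp/NVIDIA-VMware-x86_64.vib

# Confirm it registered
esxcli software vib list | grep -i nvidia
```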

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Unless I'm missing something, the download pages still link to 5.1.

The changes basically make you buy premier licensing. Wanova/Mirage still doesn't work with View, but the HTML5 client looks neat, and it supports Lync now, which is cool if you use that technology.

I find it intriguing that the HTML5 client doesn't use PCoIP. First nail in PCoIP's coffin? Why continue paying Teradici for PCoIP if you can build another protocol? (Reminder: they never bought Teradici, when many thought they would.)

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Yeah, the download link takes me to 5.1, so maybe the 5.2 GA release will be during PEX after all. Either way, looking forward to a week in Vegas.

http://www.vmware.com/company/news/releases/vmw-euc-portfolio-02-20-13.html

Dilbert As FUCK fucked around with this message at 21:39 on Feb 20, 2013

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
Semi-related to virtualization, but did anyone follow the VCE launch presentation this morning? They claim a billion dollar run rate, but have only sold 1,000 units. Am I way off on my math ($1mil/Vblock) or does that not make any sense?

Pantology
Jan 16, 2006

Dinosaur Gum

three posted:

Semi-related to virtualization, but did anyone follow the VCE launch presentation this morning? They claim a billion dollar run rate, but have only sold 1,000 units. Am I way off on my math ($1mil/Vblock) or does that not make any sense?

Looking through some past BoMs, I don't know if you can get a Vblock 300 for less than $1m, or a 700 for less than $2m.

Pantology fucked around with this message at 18:37 on Feb 21, 2013

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!

three posted:

Semi-related to virtualization, but did anyone follow the VCE launch presentation this morning? They claim a billion dollar run rate, but have only sold 1,000 units. Am I way off on my math ($1mil/Vblock) or does that not make any sense?

Run rate is a projection anyway but it's very possible to hit that target. Lots of companies that buy 1 vBlock end up buying more and I believe they are going to be releasing/have released some lower cost options to pick up more volume.

1000101 fucked around with this message at 23:07 on Feb 21, 2013

Pantology
Jan 16, 2006

Dinosaur Gum
Yeah, the Vblock 100 and 200 were officially announced today. Vblock 100 is C-series UCS and VNXe, supporting NFS and iSCSI. Vblock 200 is C-series UCS and VNX 5300. Available March and mid-year, respectively.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
So I feel that the majority of VMware skills/best practices carry over to a XenServer environment, but I'd like to grab a book to keep me well-rounded with XenServer as well. Does anyone have any recommendations?

I am staring at this one on Amazon.

http://www.amazon.com/Citrix-XenSer...rds=xenserver+6

Well I just ordered a copy. I guess I'll find out.

Moey fucked around with this message at 17:08 on Feb 22, 2013

1000101
May 14, 2003

BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY BIRTHDAY FRUITCAKE!
It's actually a pretty common cause of making hostd fall over and die.

Kachunkachunk
Jun 6, 2011
What? What is?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Are there any considerable stability differences between Hyper-V and VMware Workstation? VMware seems to support VirtIO and would get my FreeBSD VMs running faster; however, on a hunch I'd expect Hyper-V to be more stable, seeing how the host is a Windows 8 box.

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online
VMware Workstation's hypervisor has been around for a very long time. Hyper-V is the new kid on the block. Take that for what you will.

Goon Matchmaker fucked around with this message at 18:15 on Feb 24, 2013

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
They both work very, very differently from one another and you'll probably get much better overall performance with Hyper-V. Network performance may suffer relative to VMware Workstation if FreeBSD doesn't support Microsoft's VMBus paravirtual network adapter, though.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
Well, PEX was a great first day, lots of learning about new things!

Thanks 1000101 for the dinner and chat! It was really awesome to meet another vmware goon.

Mierdaan
Sep 14, 2004

Pillbug
Are View 5.1 linked clone replica disks really restricted to NFS datastores? We're on Compellent storage here; do we really need to look at a zNAS frontend or something if we want to go that route?

BelDin
Jan 29, 2001

Mierdaan posted:

Are View 5.1 linked clone replica disks really restricted to NFS datastores? We're on Compellent storage here; do we really need to look at a zNAS frontend or something if we want to go that route?

Not that I know of, but there is a VMFS limitation where you can't have more than 8 hosts connected to a non-NFS datastore used for your replica image. That limitation comes from View Composer, not vSphere itself. Evidently they hard-coded it in Composer.

Mierdaan
Sep 14, 2004

Pillbug
Yeah, I just ran across the 8-host limit in the installation documents; the architecture planning document made it sound like it was NFS-only regardless of cluster size. That or I misread it last night.


CanOfMDAmp
Nov 15, 2006

Now remember kids, no running, no diving, and no salt on my margaritas.
Has anyone worked with any outside vendors for cloud IaaS? I'm looking into purchasing a managed vCloud setup, but I'm not entirely sure of where to start or who to look for in this sector.

In terms of requirements, we've got 100-200 users, with around 30-40 various images and configurations, and 10 separate teams utilizing this environment. Is there a ballpark range for the cost we'd get quoted by vendors? We're willing to build the system ourselves, which could be a bargaining point, since we'd be paying solely for convenience.
