|
Misogynist posted:DRBD is incredibly fragile and breaks all the time and you're going to want to kill yourself. Other than this, have a great time! Seconding this. We use it in our dev environment and it makes me want to do exactly that.
|
# ? Feb 14, 2013 21:53 |
|
Goon Matchmaker posted:Don't tell me this This is not the same as DRBD if you want to use a clustered filesystem.
|
# ? Feb 14, 2013 22:05 |
|
evol262 posted:This is not the same as DRBD if you want to use a clustered filesystem. We're not aiming for a clustered filesystem. Those things have too many restrictions when run under ESX, like not being able to back them up with Veeam. Instead what we're aiming for is an NFS failover cluster where the data is replicated between two hosts, and if one host blows up the other one takes over. We originally tried OCFS2, but since we couldn't back it up we had to go with DRBD. Lsyncd is almost exactly what we want: just replicating files between two hosts.
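For reference, a minimal lsyncd sketch for that kind of two-host file replication might look something like this (lsyncd 2.x syntax; the source path and standby hostname are made up, and you'd still need keepalived/heartbeat or similar on top to actually fail the NFS IP over):

```bash
# Minimal lsyncd setup: watch a local tree with inotify and push changes
# to the standby node over rsync+ssh. Paths and hostname are placeholders.
cat > /etc/lsyncd.conf <<'EOF'
settings {
    logfile    = "/var/log/lsyncd.log",
    statusFile = "/var/log/lsyncd.status",
}

sync {
    default.rsyncssh,           -- rsync over ssh for each batch of changes
    source    = "/export/nfs",  -- local directory exported via NFS
    host      = "nfs-standby",  -- the passive node
    targetdir = "/export/nfs",
    delay     = 5,              -- batch inotify events for 5 seconds
}
EOF

lsyncd /etc/lsyncd.conf
```

Worth keeping in mind that lsyncd is asynchronous (inotify plus rsync), so unlike DRBD running synchronously you can lose whatever changed in the last few seconds when you fail over.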
|
# ? Feb 14, 2013 22:20 |
|
Yeah DRBD is kind of a cock. It's not included officially in RedHat or CentOS as far as I know, so get ready to forget to compile the kernel module every time you update things if you use those distros. That being said it hasn't destroyed our data (yet), but performance definitely sucks rear end even over a 10GE link. We have it set to replicate synchronously since our DRBD store is for VMs, and it's been fairly reliable, just not fast. We have two NFS servers and replicate via DRBD - there's a keepalived process that fails over between them if one goes down, and amazingly that's only hosed up once in the 3 years since I configured it, I think. And to be fair the fuckup was NetBackup holding the DRBD partitions open during a planned failover event. It is very nice, though, running 70+ VMs off a gigantic pair of fileservers and being able to fail back and forth between them without anyone really noticing, other than IO freezing for 10 seconds while ARP responses are sent and caches flushed. Well, I mean it's very nice being able to do that for free. Now I'm lazy and, as I said, would just buy a Dell MD3220 or an iSCSI NAS or something instead. Less Fat Luke fucked around with this message at 03:13 on Feb 15, 2013 |
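For anyone curious what that kind of pairing looks like, here's a rough sketch of a two-node synchronous DRBD resource plus a keepalived instance holding the floating NFS IP. All hostnames, IPs, device paths, and script names are placeholders, and the promote/demote scripts (drbdadm primary/secondary, mount, exportfs, and so on) are left out:

```bash
# Two-node DRBD resource, protocol C (synchronous). DRBD 8.x syntax;
# hostnames, IPs, and disks are illustrative.
cat > /etc/drbd.d/nfs.res <<'EOF'
resource nfs {
    protocol C;   # writes are acknowledged only once both nodes have them
    on nfs01 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.1:7789;
        meta-disk internal;
    }
    on nfs02 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.0.2:7789;
        meta-disk internal;
    }
}
EOF

# keepalived carries the floating NFS IP and calls promote/demote scripts
# (hypothetical paths) when mastership changes.
cat > /etc/keepalived/keepalived.conf <<'EOF'
vrrp_instance NFS_VIP {
    state BACKUP
    interface eth0
    virtual_router_id 51
    priority 100
    virtual_ipaddress {
        10.0.0.100/24
    }
    notify_master /usr/local/sbin/nfs-promote.sh
    notify_backup /usr/local/sbin/nfs-demote.sh
}
EOF
```

Protocol C is what makes it synchronous - writes aren't acknowledged until both nodes have them on stable storage, which is also a big part of why it isn't fast even over 10GE.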
# ? Feb 15, 2013 03:10 |
|
Less Fat Luke posted:Yeah DRBD is kind of a cock. It's not included officially in RedHat or CentOS as far as I know, so get ready to forget to compile the kernel module every time you update things if you use those distros. I'm pretty sure it's been in CentOS Plus since 2007 or so, actually. poo poo outta luck for RHEL of course, but anyone stuck with RHEL is probably used to that by now.
|
# ? Feb 15, 2013 15:20 |
|
Misogynist posted:I'm pretty sure it's been in CentOS Plus since 2007 or so, actually. poo poo outta luck for RHEL of course, but anyone stuck with RHEL is probably used to that by now. I just use elrepo and epel to cover anything RHEL is missing. I'm waiting on Redhat support to tell me to gently caress off but so far they haven't.
|
# ? Feb 15, 2013 16:02 |
|
Goon Matchmaker posted:I just use elrepo and epel to cover anything RHEL is missing. I'm waiting on Redhat support to tell me to gently caress off but so far they haven't. I've never used elrepo, but EPEL explicitly avoids replacing system packages on RHEL. It's "Extra Packages", not "Replacement Packages", unlike CentOS Plus. Redhat support is ok with that.
|
# ? Feb 15, 2013 16:31 |
|
evol262 posted:I've never used elrepo, but EPEL explicitly avoids replacing system packages on RHEL. It's "Extra Packages", not "Replacement Packages", unlike CentOS Plus. Redhat support is ok with that. ELRepo does the same unless you enable one of the optional repositories, and even then it only updates a few minor things. I usually use it to get a mainline kernel, since they seem to perform better in ESX than the default RHEL kernel.
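For anyone following along, pulling a mainline kernel from ELRepo's optional kernel repo looks roughly like this on RHEL/CentOS 6 (the release RPM version in the URL is only an example; grab the current one from elrepo.org):

```bash
# Import ELRepo's signing key and install the repo definition
# (the release RPM version here is illustrative).
rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
rpm -Uvh http://www.elrepo.org/elrepo-release-6-5.el6.elrepo.noarch.rpm

# The kernel repo is disabled by default, so the stock RHEL kernel is left
# alone unless you explicitly opt in:
yum --enablerepo=elrepo-kernel install kernel-ml
```

Since elrepo-kernel stays off unless you ask for it, the base packages remain untouched, which is presumably why support hasn't complained yet.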
|
# ? Feb 15, 2013 17:57 |
|
Aniki posted:I think my plan is to try the USB to Parallel adapter first and if that doesn't work, then I'll consider trying VMware (currently using Hyper-V) or ordering the USB2LAN hub. The hardware key is for Call Center Worx v. 2.1, which was released in 2001 and can only run on Windows NT-based operating systems, and for some reason they stopped purchasing updates after that. I know that the USB keys they released later on were finicky and I'm not sure if the parallel key we have is supposed to be any better. OK, I ended up using VMware instead. Their support for parallel ports is much better and I had no issues getting the hardware security key working. I should have done that in the first place, but at least it is working now.
|
# ? Feb 15, 2013 18:56 |
|
Misogynist posted:I'm pretty sure it's been in CentOS Plus since 2007 or so, actually. poo poo outta luck for RHEL of course, but anyone stuck with RHEL is probably used to that by now.
|
# ? Feb 15, 2013 22:07 |
|
Our cluster is having major issues, been on the phone with VMware support most of today, another guy is taking a crack at it right now. 2 hosts just show up disconnected from vSphere. The VMs are still running and accessible, but VMware support is telling us to manually shut down all VMs and hard reboot the hosts. It's a last-resort option right now, there are 3 SQL servers on there among other servers. One host isn't even responsive on the console... bleh.
|
# ? Feb 16, 2013 02:02 |
|
skipdogg posted:Our cluster is having major issues, been on the phone with VMware support most of today, another guy is taking a crack at it right now. 2 hosts just show up disconnected from vSphere. The VMs are still running and accessible, but VMware support is telling us to manually shut down all VMs and hard reboot the hosts. It's a last-resort option right now, there are 3 SQL servers on there among other servers. My guess is they lost some storage. I've been through exactly this, it sucked. The worst part is that in our case the HA agent tried to restart the guests, but since they were already running it failed, and we had duplicate VMs all over the place.
|
# ? Feb 16, 2013 04:18 |
|
adorai posted:My guess is they lost some storage. I've been through exactly this, it sucked. The worst part is that in our case the HA agent tried to restart the guests, but since they were already running it failed, and we had duplicate VMs all over the place. If it were storage, wouldn't some of the VMs stop responding due to disk timeouts? It could be iSCSI loss since it is sporadic, and a duplicated/misassigned IP can cause this, but it sounds a bit like something else.
|
# ? Feb 16, 2013 04:22 |
|
Storage issues can cause hostd to hang with no effect on the VMs necessarily; I've seen it when datastores are removed improperly. vSphere 5.1 introduced improvements here with a new timeout setting.
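In that disconnected-but-still-running situation, a much less drastic first step than hard rebooting is usually bouncing the management agents from the ESXi shell or DCUI. A sketch for ESXi 5.x, assuming you can get a shell on the host:

```bash
# Restart just the host agent and the vCenter agent; running VMs are not
# touched, only the management plane bounces.
/etc/init.d/hostd restart
/etc/init.d/vpxa restart

# or restart all management services in one go
services.sh restart
```

If the agents come back up cleanly the hosts usually reconnect to vCenter on their own; if hostd is stuck waiting on dead storage it may not help, though.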
|
# ? Feb 16, 2013 04:28 |
|
three posted:Storage issues can cause hostd to hang with no effect on the VMs necessarily; I've seen it when datastores are removed improperly. vSphere 5.1 introduced improvements here with a new timeout setting. Can't say I have ever seen that, but it sounds plausible. It wouldn't be the strangest thing I have heard of this week....
|
# ? Feb 16, 2013 04:31 |
|
Corvettefisher posted:Can't say I have ever seen that, but it sounds plausible. It wouldn't be the strangest thing I have heard of this week....
|
# ? Feb 16, 2013 04:50 |
|
It often comes down to storage issues, or improper presentation changes, yes. ESXi from 5.0 onward is supposed to be tolerant of All Paths Down conditions, especially the host agents. There are situations where it doesn't handle it perfectly, but it's quite a few steps ahead of 4.x and earlier. It's also a lot easier to remove devices gracefully on 5.0 and later; the process on 4.x can be fairly involved, requiring you to set specific claim rules for each device you want removed before rescanning and unpresenting. HA is a bit more intelligent if you use Fault Domain Manager (FDM), the HA agent in vCenter Server 5.0 and later. It also uses datastores to fully ascertain whether a host is down (traditionally HA just used network ping responses between nodes).
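As a sketch of what the graceful path looks like on 5.x (the datastore label and naa ID below are placeholders):

```bash
# Unmount the datastore, then detach the underlying device, BEFORE the
# LUN is unpresented on the array - that's what keeps you out of APD.
esxcli storage filesystem unmount --volume-label=old_datastore
esxcli storage core device set --state=off --device=naa.60012345

# once the array side has unpresented the LUN:
esxcli storage core device detached remove --device=naa.60012345
esxcli storage core adapter rescan --all
```

Doing it in that order means the host never suddenly loses paths to a device it still thinks it owns.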
|
# ? Feb 16, 2013 07:57 |
|
Well 1 week till PEX! Anyone else going?
|
# ? Feb 16, 2013 15:38 |
|
I don't know how, but basically some old LUNs from our previous SAN showed back up somewhere/somehow, causing an All Paths Down issue.
|
# ? Feb 18, 2013 16:25 |
|
http://www.vmware.com/products/view/overview.html So did View 5.2 just get released?
|
# ? Feb 20, 2013 19:45 |
|
Woah, I've been waiting on that poo poo.
|
# ? Feb 20, 2013 19:50 |
|
Erwin posted:Woah, I've been waiting on that poo poo. Yeah, eager to test it out. Just so sudden, I thought it was getting announced at PEX. Huh, wonder where the supported GPU list is, or am I missing it? I am fairly certain it is the Quadro lineup only. Dilbert As FUCK fucked around with this message at 19:59 on Feb 20, 2013 |
# ? Feb 20, 2013 19:54 |
|
Corvettefisher posted:Yeah, eager to test it out. Just so sudden, I thought it was getting announced at PEX. Correct. It requires the GF100GL chip, which is found in the Quadro 4000, 5000, and 6000. It looks like they released a Quadro Plex 7000 as well with that chip. The Kepler-based Quadros should work as well. Maybe the supported cards are listed on the Nvidia site? I don't know how the driver VIB is distributed.
|
# ? Feb 20, 2013 20:16 |
|
Corvettefisher posted:http://www.vmware.com/products/view/overview.html Unless I'm missing something, the download pages still link to 5.1. The changes basically make you buy Premier licensing; Wanova/Mirage still doesn't work with View, but the HTML5 client looks neat and it supports Lync now, which is cool if you use that technology. I find it intriguing that the HTML5 client doesn't use PCoIP. First nail in PCoIP's coffin? Why continue paying Teradici for PCoIP if you can build another protocol (reminder: they never bought Teradici, when many thought they would)?
|
# ? Feb 20, 2013 20:22 |
|
Yeah, the download link takes me to 5.1, so maybe the 5.2 GA release will be during PEX after all. Either way, looking forward to a week in Vegas. http://www.vmware.com/company/news/releases/vmw-euc-portfolio-02-20-13.html Dilbert As FUCK fucked around with this message at 21:39 on Feb 20, 2013 |
# ? Feb 20, 2013 21:22 |
|
Semi-related to virtualization, but did anyone follow the VCE launch presentation this morning? They claim a billion dollar run rate, but have only sold 1,000 units. Am I way off on my math ($1mil/Vblock) or does that not make any sense?
|
# ? Feb 21, 2013 17:51 |
|
three posted:Semi-related to virtualization, but did anyone follow the VCE launch presentation this morning? They claim a billion dollar run rate, but have only sold 1,000 units. Am I way off on my math ($1mil/Vblock) or does that not make any sense? Looking through some past BoMs, I don't know if you can get a Vblock 300 for less than $1m, or a 700 for less than $2m. Pantology fucked around with this message at 18:37 on Feb 21, 2013 |
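Rough back-of-the-envelope with those (admittedly guessed) prices: 1,000 units at $1-2m apiece is roughly $1-2 billion cumulative since launch, while a $1 billion annualized run rate implies booking on the order of 500-1,000 Vblocks a year going forward. That's aggressive next to ~1,000 sold in total to date, but a run rate is a forward-looking projection rather than a claim about past revenue, so the math isn't necessarily off.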
# ? Feb 21, 2013 18:31 |
|
three posted:Semi-related to virtualization, but did anyone follow the VCE launch presentation this morning? They claim a billion dollar run rate, but have only sold 1,000 units. Am I way off on my math ($1mil/Vblock) or does that not make any sense? Run rate is a projection anyway but it's very possible to hit that target. Lots of companies that buy 1 vBlock end up buying more and I believe they are going to be releasing/have released some lower cost options to pick up more volume. 1000101 fucked around with this message at 23:07 on Feb 21, 2013 |
# ? Feb 21, 2013 19:24 |
|
Yeah, the Vblock 100 and 200 were officially announced today. Vblock 100 is C-series UCS and VNXe, supporting NFS and iSCSI. Vblock 200 is C-series UCS and VNX 5300. Available March and mid-year, respectively.
|
# ? Feb 21, 2013 22:22 |
|
So I feel that the majority of VMware skills/best practices can be carried over to a XenServer environment, but I would like to grab a book to keep me well rounded with XenServer as well. Does anyone have any recommendations? I am staring at this one on Amazon. http://www.amazon.com/Citrix-XenSer...rds=xenserver+6 Well, I just ordered a copy. I guess I'll find out. Moey fucked around with this message at 17:08 on Feb 22, 2013 |
# ? Feb 22, 2013 00:22 |
|
It's actually a pretty common cause of making hostd fall over and die.
|
# ? Feb 23, 2013 08:48 |
|
What? What is?
|
# ? Feb 23, 2013 17:26 |
|
Are there any considerable stability differences between Hyper-V and VMware Workstation? VMware seems to support VirtIO and would get my FreeBSD VMs running faster, however on a hunch, I'd expect Hyper-V to be more stable, seeing how the host is a Windows 8 box.
|
# ? Feb 24, 2013 16:27 |
|
VMware Workstation's hypervisor has been around for a very long time. Hyper-V is the new kid on the block. Take that for what you will.
Goon Matchmaker fucked around with this message at 18:15 on Feb 24, 2013 |
# ? Feb 24, 2013 18:12 |
|
They both work very, very differently from one another and you'll probably get much better overall performance with Hyper-V. Network performance may suffer relative to VMware Workstation if FreeBSD doesn't support Microsoft's VMBus-based paravirtual network adapter, though.
|
# ? Feb 24, 2013 21:15 |
|
Well, PEX was a great first day, lots of learning on new things! Thanks 1000101 for the dinner and chat! It was really awesome to meet another VMware goon.
|
# ? Feb 26, 2013 01:39 |
|
Are View 5.1 linked clone replica disks really restricted to NFS datastores? We're on Compellent storage here, do we really need to look at a zNAS frontend or something if we want to go that route?
|
# ? Feb 28, 2013 17:59 |
|
Mierdaan posted:Are View 5.1 linked clone replica disks really restricted to NFS datastores? We're on Compellent storage here, do we really need to look at a zNAS frontend or something if we want to go that route? Not that I know of, but there is a limit where you can't have more than 8 hosts connected to a VMFS (non-NFS) datastore that is used for your replica image. That limitation comes from View Composer, not vSphere itself. Evidently they hard-coded it in Composer.
|
# ? Feb 28, 2013 18:15 |
|
Yeah, I just ran across the 8-host limit in the installation documents; the architecture planning document made it sound like it was NFS-only regardless of cluster size. That or I misread it last night.
|
# ? Feb 28, 2013 19:36 |
|
Has anyone worked with any outside vendors for cloud IaaS? I'm looking into purchasing a managed vCloud setup, but I'm not entirely sure of where to start or who to look for in this sector. In terms of requirements we've got 100-200 users, with around 30-40 various images and configurations, as well as 10 separate teams utilizing this environment. Is there a ballpark range in terms of cost that we would get from vendors? We're willing to build the system ourselves, which could be a bargaining point, since we'd be paying solely for convenience.
|
# ? Feb 28, 2013 22:22 |