|
Regarding the Veeam chat last page, I just saw an ad that Veeam v6 supports Hyper-V. I know Hyper-V isn't too popular here, but does anyone have experience with this side of Veeam? Right now I'm using Symantec BackupExec with the Hyper-V agent direct to tape, and I'm looking at potential improvements to that situation.
|
# ¿ Mar 14, 2012 22:31 |
|
Syano posted: "I am backing up a 3 node Hyper-V cluster with Veeam, currently running 18 VMs. So far it just works. Backup window is about 3 hours nightly. Pretty happy with it so far. You can't do instant recoveries with it but oh well, no worry. It runs a LOAD better than Backup Exec."

How big are your VMs, to get that 3-hour window? I've got a slightly smaller cluster than you've described, so I'm thinking this might be a good solution. The problem is I've got one VM with about 1.9TB spread between 2 VHDs that is our primary DFS folder target. Being able to use dedupe and reverse incrementals sounds very attractive.
|
# ¿ Mar 14, 2012 23:06 |
|
Methylethylaldehyde posted: "One actual question: What do you guys do for an offsite disaster recovery site? We're looking at setting one up for business continuity in the case of fire/earthquake/volcanic implosion, and aside from setting up a DFS replication group and using DPM to do disk-to-disk system state backups of all our VMs, I can't think of what else we'd need to do."

I'm actually really looking forward to Windows Server 2012 and Hyper-V v3 for this, because I plan to use Hyper-V Replica for off-site disaster recovery, in addition to file-level backups of the VM contents. Here's a good write-up on it: http://www.aidanfinn.com/?p=12147
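For reference, once 2012 ships, turning Replica on should only be a couple of cmdlets. A minimal sketch (the server names, port, and storage path here are made up for illustration):

```powershell
# On the DR-site replica host: accept incoming replication over Kerberos/HTTP.
Set-VMReplicationServer -ReplicationEnabled $true `
    -AllowedAuthenticationType Kerberos `
    -ReplicationAllowedFromAnyServer $true `
    -DefaultStorageLocation "D:\Replica"

# On the primary host: enable replication for a VM and seed the initial copy.
Enable-VMReplication -VMName "FileServer01" `
    -ReplicaServerName "dr-hyperv.example.local" `
    -ReplicaServerPort 80 `
    -AuthenticationType Kerberos
Start-VMInitialReplication -VMName "FileServer01"
```

After the initial replication finishes, changes ship on Replica's built-in interval, and you can do planned or test failovers from the DR side.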
|
# ¿ May 25, 2012 15:59 |
|
MC Fruit Stripe posted: "Getting into my first Hyper-V real-world experience, and the guy who was showing me around a bit today was showing me where to set CPUs and was saying he likes to use 4 cores. While I know best practice on VMware is 'use 1 unless you know why not to use 1', I didn't want to say anything and be wrong. But for my own reference going forward, with Hyper-V am I still looking to use as few cores as possible?"

Hyper-V doesn't suffer the same penalty that VMware does with multiple vCPUs. This is discussed here, but I'm having trouble finding a Microsoft-sourced reference at the moment.
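Either way, it's trivial to change later, so starting small costs you nothing. A quick sketch (the VM name is made up, and the VM has to be powered off to change the count):

```powershell
# Start with fewer vCPUs; bump the count only if the guest is actually CPU-bound.
Set-VMProcessor -VMName "App01" -Count 2

# Confirm the setting took.
Get-VMProcessor -VMName "App01" | Select-Object VMName, Count
```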
|
# ¿ Jul 25, 2012 14:23 |
|
COCKMOUTH.GIF posted: "Is there an updated RSAT from Microsoft for manipulating the new features from Windows 7? Has anyone actually had a chance to play with the new Hyper-V Server and its features?"

Unfortunately you can't manage Hyper-V 2012 from the Windows 7 RSAT or Hyper-V Manager, and you also can't manage Hyper-V 2008 R2 from a Windows 8 or Server 2012 Hyper-V Manager. See this post and comments: http://blogs.msdn.com/b/virtual_pc_guy/archive/2012/05/29/installing-the-remote-management-tools-for-windows-8-hyper-v.aspx I've got an install of 2012 about to go live, using VHDXs larger than 2TB, and I'll be looking at upgrading my cluster in the very near future for the better CSV features and the Replica feature.
|
# ¿ Sep 5, 2012 02:48 |
|
I prefer to use Remote Desktop Connection Manager: http://www.microsoft.com/en-us/download/details.aspx?id=21101 Inherited permissions and server-group organization, along with support for RD Gateway, make it super easy.
|
# ¿ Nov 1, 2012 15:08 |
|
I went to upgrade my Hyper-V cluster to Server 2012 this morning. I have two nodes consisting of Dell R410 servers. And I forgot to check compatibility for the PERC S300 RAID controller, which, it turns out, doesn't have a valid driver for Server 2012 and never will. Arrrrggg. Now I've got to wait until I can get an H200 or H700 (which I should have purchased in the server in the first place).
|
# ¿ Dec 27, 2012 18:30 |
|
adorai posted: "If I was the only person who voted on my team, I would go with the cheapest current-gen, dual-socket, highest-RAM-density, single-proc 8-core server with dual power supplies that I could get. I think when we were looking last, it was around $2000 or so for HPs that fit the bill, plus $600 for a 10GbE NIC and another thousand for 96GB of RAM. We could add a second proc and double the RAM later if we wanted, and VMware provides redundancy, provided you can tolerate short bouts of downtime while HA kicks in during hardware failure (which, let's be honest, even the cheapest HP server is probably not going to suffer)."

I did the same thing for our Hyper-V cluster in 2010 with Dell R410s. They've since had a second processor added and been upgraded from 32GB to 128GB RAM. Clustered virtualization has made hardware upgrades SO nice.
|
# ¿ Oct 1, 2013 04:50 |
|
geera posted: "I'm going to be replacing our existing 5-year-old VMware 2-host cluster with a 3-host cluster next year, and the contractor that I'm working with on this is gently encouraging me to consider moving to Hyper-V. It seems very attractive cost-wise (especially considering we will also have to put in a new DR system to complement the new cluster), but I have zero experience with it and have a 2009-era opinion of it. He's told me that it has improved dramatically over the last couple years and is now pretty close to being at feature parity with VMware."

I can't really speak to a comparison with VMware, having never used it; however, I've been running a 2-node Hyper-V cluster since 2008 R2 and am upgrading to 2012 R2 and adding a 3rd node this weekend. I've been extremely pleased with Hyper-V: you absolutely can live migrate, and as of 2012 (and R2) you can also live storage migrate, use VHDXs larger than 2TB, live-expand VHDX files, and, most importantly in my mind, make use of Hyper-V Replica.
|
# ¿ Dec 13, 2013 15:41 |
|
evol262 posted: "It's basically the same as Windows Failover Clustering. You set up a Failover Cluster with the usual Microsoft mechanisms, flag a machine HA, and it 'just works'."

Yup, just like this. You can set preferred hosts for each VM (or multiple preferred hosts) and failback schedules per VM for when your downed host returns. It's still a "power-off" event for the VM when a failover occurs, just like VMware. Hyper-V doesn't have anything like Fault Tolerance to allow for zero downtime, but you can now do shared guest VHDX so that your VMs can be a cluster within the Hyper-V cluster, spread across two hosts. If one host goes down, so does one member of your application cluster, but the remaining member purrs along just fine on the remaining Hyper-V host.
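The preferred-host and failback settings live in the normal Failover Clustering PowerShell module too, if you'd rather script it than click through Failover Cluster Manager. A sketch with made-up node and VM names:

```powershell
Import-Module FailoverClusters

# Prefer HV01, then HV02, as owners for this VM's cluster group.
Set-ClusterOwnerNode -Group "SQL-VM" -Owners HV01, HV02

# Allow automatic failback, but only during an overnight window (1 AM to 5 AM).
$group = Get-ClusterGroup -Name "SQL-VM"
$group.AutoFailbackType    = 1   # 0 = prevent failback, 1 = allow
$group.FailbackWindowStart = 1   # hour of day, 0-23
$group.FailbackWindowEnd   = 5
```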
|
# ¿ Dec 13, 2013 18:42 |
|
CaptainGimpy posted: "Anyone have any experience with running CAD machines in VDI using the NVIDIA GRID technology? We have a specific use case that makes this look like our only option."

I'm just about to go live with my Citrix XenApp project, using:
- XenDesktop 7.1 App Edition
- VMs inside Citrix XenServer
- Dell R720 with 128GB RAM and an NVIDIA GRID K2

So far in testing, results have been great, although we're not doing heavy 3D design work (thus using XenApp instead of XenDesktop). Not enough users on the system yet to fully define how much load the server will handle, but I expect 30 at a minimum for our environment.
|
# ¿ Mar 11, 2014 19:42 |
|
AtomD posted: "We're running an environment with about 40 virtual servers spread across 5 sites. Save for about 4 or 5, all our VMs are running Windows Server. At our smaller sites we're using Hyper-V, but our main office (4 hosts, 27 VMs) uses ESX."

Sorry to disappoint, Hyper-V is awesome. I've got a 2012 R2 two-node cluster at my head office, and it has been rock solid since I deployed it on 2008 R2 in 2010. We're not even running System Center VMM yet, and it's still extremely easy to manage.
|
# ¿ Sep 16, 2014 18:00 |
|
cheese-cube posted:
AtomD, is there anything specific you wanted to know about Hyper-v?
|
# ¿ Sep 23, 2014 23:48 |
|
Fancy_Lad posted: "There are a few things you couldn't do at the time in Hyper-V, like adding storage to a SCSI drive with the VM running, but nothing that can't be worked around with a little effort or some well-designed PowerShell scripts."

For general knowledge, live expansion of a VHDX is a feature of Server 2012 R2 now, and it worked great on my 4.5TB VHDX file in production.
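For anyone searching this thread later, the online expansion is a two-step job (the disk has to be on a virtual SCSI controller, not IDE); the path, drive letter, and size here are made up:

```powershell
# On the host: grow the VHDX while the VM is running (2012 R2, SCSI-attached only).
Resize-VHD -Path "C:\ClusterStorage\Volume1\Data.vhdx" -SizeBytes 5TB

# Inside the guest: extend the partition into the newly available space.
$max = (Get-PartitionSupportedSize -DriveLetter E).SizeMax
Resize-Partition -DriveLetter E -Size $max
```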
|
# ¿ Sep 26, 2014 22:42 |
|
stevewm posted: "We have a handful of servers we are looking at virtualizing."

RAM is cheap, so you should be looking at 64GB at minimum, if not triple digits. One of the primary results you may see of virtualizing is incremental server sprawl, and having the extra RAM gives you lots of flexibility. The other thing to consider is Windows licensing: if your physical host is already Server Standard, I believe 2012 gives you 2 Windows VMs with that. You can add additional Standard licenses, but if you're eventually going over 6-8 Windows VMs then you should consider Datacenter licensing, as it grants unlimited VMs.
|
# ¿ Oct 15, 2014 18:13 |
|
BangersInMyKnickers posted: "RapidRecovery does a global dedupe on the entire repository (which you can keep tacking up to like 2k extents onto as you add storage), so you don't have the limitation of Veeam only deduping against the data in the job, which was a big complaint for me. By default you capture only incrementals on your clients on an hourly basis, so you aren't scheduling backup windows overnight, and the amount of data transferred on those hourly jobs is very small, so you don't see much impact on that end."

I've inherited a Rapid Recovery environment in a rough state that doesn't give the impression of being capable of scaling to the sizes you mentioned. I'm wondering if you were protecting in-guest with the agent, or VMs at the hypervisor level?
|
# ¿ Feb 9, 2018 05:35 |