mpeg4v3
Apr 8, 2004
that lurker in the corner
I've got a general virtualization question that I haven't really been able to find a clear answer to. I'm trying to spec out parts for my first ESXi server, but I'm having trouble figuring out how much CPU power I should be going for. This server is going to run the following VMs:
  • FreeBSD to manage a ZFS storage pool
  • CentOS (or Ubuntu, or something else, whatever) web development server & dedicated XBMC updater.
  • Ubuntu video transcoding server, to handle AirVideo, Emit, and Subsonic
  • Server 2008 R2 for Active Directory
I already understand that RAM is of paramount importance for VMs, and I'm planning on at least 32GB no matter what. The issue I have is determining how much power I need for the video transcoding VM. AirVideo (iOS) and Emit (Android) are video transcoding and streaming servers that convert video to a format their respective mobile clients can play. I'm looking to put together a server that can stream at least two, preferably three, movies in 1080p. That means, for each movie, decoding from 1080p in real time and then re-encoding it to the proper format at 1080p (probably 8-10 Mbit/s) in real time.
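As a sanity check on those numbers (the three streams and the 10 Mbit/s upper bound come straight from the paragraph above), the output bandwidth itself is trivial; it's the real-time decode and encode that costs:

```shell
# Worst case from above: three simultaneous 1080p streams at 10 Mbit/s each.
streams=3
mbit_per_stream=10
total_mbit=$((streams * mbit_per_stream))
echo "${total_mbit} Mbit/s aggregate output"  # well within gigabit Ethernet
```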

I plan to compile multithread-aware builds of ffmpeg and possibly mplayer (and may even buy a dedicated passed-through GPU to handle decoding rather than relying on the CPU), but even with those, encoding 1080p in real time for at least two, preferably three, different movies is a huge task.
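For a rough idea of the kind of command that transcoding VM would be running constantly, here's a sketch (filenames, bitrates, and the MPEG-TS container are placeholders, and it assumes an ffmpeg build with libx264 and an AAC encoder; AirVideo and Emit generate their own invocations internally):

```shell
# Hypothetical real-time 1080p transcode; input.mkv/output.ts are placeholders.
# -threads 0 lets x264 spread work across every vCPU the VM is given;
# a fast preset trades some quality for the speed needed to keep up in real time.
ffmpeg -i input.mkv \
    -c:v libx264 -preset veryfast \
    -b:v 8M -maxrate 10M -bufsize 16M \
    -threads 0 \
    -c:a aac -b:a 160k \
    -f mpegts output.ts
```

Each simultaneous stream is one of these running flat out, which is why the vCPU count for this one VM dominates the whole build.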

The thing is, I just don't know how well video encoding works in VMs. I don't have a huge budget to work with (~$1500-1750), and I have to devote quite a bit to the "file server requirements" portion of the parts list, so I can't go all out with some 12 core Xeon dual processor monstrosity. I'm okay with non-ECC RAM as this is not mission critical stuff, and am okay with consumer level parts. I've been looking at the following options:
  • A consumer-level X79 motherboard with an Intel i7-3930K 6-core 3.2 GHz CPU
  • The same as above, just with a temporary i7-3820 4-core 3.6 GHz CPU, as a holdover until Intel releases their 8-core i7.
  • A dual-processor Socket C32 motherboard with two 6-core AMD Opteron 4180 2.6 GHz CPUs
  • Wait for a dual-processor Socket 2011 motherboard and buy two 4-core Xeon E5s
Traditional "as many cores as possible!" logic makes me want to side with the dual 6-core Opteron setup, but my mind keeps coming back to the fact that the processors are potentially going to need high clock speeds to handle decoding and re-encoding in real time. Or maybe I'm just overthinking this, and the 6-core 3930K, or even the 4-core 3820, would be enough for what I want to do. I just can't seem to find anyone else who runs video transcoding in VMs to ask.


mpeg4v3
Apr 8, 2004
that lurker in the corner

1000101 posted:

Generally something like video transcoding or rendering or anything of that sort will use 100% of the CPU resources you throw at it.

That said, if you went with the 6 core i7 you could give 4 vCPUs to the Ubuntu VM and run it 24x7 and still have enough CPU power to run your remaining 3 VMs (which I'd generally recommend you do with 1 vCPU each.)

What kind of load are you expecting on the remaining VMs? What kind of disks and how many?

What are you looking at to provide this functionality? Be wary of VMDirectPath and understand that not every device works perfectly with pass-through. It was generally intended to let you hook things like network cards, HBAs, and disk controllers directly to virtual machines.

I do see here: http://vm-help.com/esx40i/esx40_vmdirectpath_whitebox_HCL.php that someone has managed to get a couple of Radeon boards to work with 4.X, though. I'd presume it'll still work in 5.X.

Also, make sure you shop the HCL or research hardware before you buy. To boot ESXi you need a supported disk controller and a supported NIC or it will PSOD. If your NIC isn't supported it'll PSOD with an LVM error and you'll spend all your time beating your head against the wall troubleshooting your disk controller.

For some reason I thought GPU passthrough worked well now, but I have no idea where I got that impression. I guess that means I'll have to do all decoding in software, which is fine, but I'm really not sure a single 4-core VM would be able to decode and re-encode two or three 1080p movies at once. If I went with the 6-core, I was actually planning to assign the transcoding VM six vCPUs and give one vCPU to each of the remaining VMs, my reasoning being that the other VMs aren't really going to be that CPU-intensive.

The ZFS file server VM is going to be running a 12-disk 3-vdev pool, and it doesn't see much action. The most that happens on it is just streaming of media to an HTPC and a few computers here and there. Nothing super intensive, so I'm not going to devote that many resources to it in the first place (except for RAM, of course, since ZFS loves RAM).
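For reference, one possible shape for that pool. The post doesn't say which vdev type is in use, so three 4-disk raidz1 vdevs is purely an assumption, and the da0-da11 device names are FreeBSD placeholders:

```shell
# Hypothetical 12-disk, 3-vdev layout: three 4-disk raidz1 vdevs.
# Writes stripe across the three vdevs; each vdev tolerates one disk failure.
zpool create tank \
    raidz1 da0 da1 da2  da3  \
    raidz1 da4 da5 da6  da7  \
    raidz1 da8 da9 da10 da11
```

Light streaming workloads like this lean on ARC far more than on vdev IOPS, which matches the "give it RAM, not CPU" plan above.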

I'm probably going to host the VMs off two leftover 750GB drives in a RAID 1, if possible. I know multiple VMs absolutely kill platter drives with random access, but the VMs won't be doing much disk activity anyway. I might spring for 2x 120GB SSDs instead and just try to keep the OS sizes down, but I'd rather not spend the money.

BnT posted:

I'd make sure that whatever you're getting supports VT-d or AMD's thingy, especially if you're planning on running a guest with ZFS direct access. For this budget you could easily make a pretty beefy system centered around a Sandy Bridge Xeon E3 and the Intel C20x series chipsets, but that would require ECC (which I would recommend anyway), also this would only get you four cores.

Another thing to be aware of is that the free vSphere 5 entitlement only supports a single socket CPU (unlimited cores) and 32GB of RAM.

Thanks for the info. I knew the free version only supported up to 32GB of RAM, but didn't realize it only supported a single socket, too. That does change my plans; I was starting to think it would be best to wait for a dual-socket 2011 board and start off with one quad-core for now. I've made sure to factor VT-d and IOMMU support into the boards and CPUs I'm looking at.

I guess I'll just need to spend some time running tests of my own with a VM on my 2500K-powered desktop this weekend to see what video transcoding in a VM is like with four cores.
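One way to run that test, sketched with a placeholder filename (assumes ffmpeg with libx264 inside the test VM): transcode a sample file to the null muxer and see how far ahead of real time the encode runs. Sustained 2-3x real time on four vCPUs would suggest two to three simultaneous streams are feasible.

```shell
# Placeholder benchmark: sample.mkv stands in for a real 1080p source.
# -benchmark prints CPU/elapsed time; writing to the null muxer measures
# pure decode+encode speed with no disk or network output in the way.
ffmpeg -benchmark -i sample.mkv \
    -c:v libx264 -preset veryfast -b:v 8M \
    -threads 0 \
    -f null - 2>&1 | tail -n 2
```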

mpeg4v3 fucked around with this message at 02:27 on Feb 25, 2012
