I've got a general virtualization question that I haven't really been able to find a clear answer to. I'm trying to spec out parts for my first ESXi server, but I'm having trouble figuring out how much CPU power I should be going for. This server is going to be running the following VMs:
I plan to compile multithread-aware builds of ffmpeg and possibly mplayer (and may even buy a dedicated GPU to pass through for decoding rather than relying on the CPU), but even with those, encoding 1080p in real time for at least two, preferably three, different movies at once is a huge task.
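For a sense of the workload, this is the kind of job I mean; the filenames and encoder settings below are placeholders, not a tested recipe, and the point of the multithread-aware build is that libx264 can then use every core you give it:

```shell
# Hypothetical real-time 1080p transcode. input.mkv, the bitrate, and
# the preset are placeholder assumptions. -threads 0 tells libx264 to
# spawn one encoding thread per available core.
ffmpeg -i input.mkv \
    -c:v libx264 -preset fast -b:v 8M -threads 0 \
    -c:a copy \
    output.mkv
```

Two or three of these running concurrently is the load the transcoding VM would have to sustain.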
The thing is, I just don't know how well video encoding works in VMs. I don't have a huge budget (~$1500-1750), and I have to devote quite a bit of it to the "file server requirements" portion of the parts list, so I can't go all out on some dual-processor 12-core Xeon monstrosity. I'm okay with non-ECC RAM since this isn't mission-critical stuff, and I'm fine with consumer-level parts. I've been looking at the following options:
|# ¿ Feb 23, 2012 07:04|
|# ¿ May 23, 2013 07:58|
Generally something like video transcoding or rendering or anything of that sort will use 100% of the CPU resources you throw at it.
For some reason I thought GPU passthrough worked well now, but I have no idea where I got that impression from. I guess that means I'll have to do all the decoding in software, which is fine, but I'm really not sure whether a single 4-core VM could decode and re-encode two or three 1080p movies at once. If I went with the 6-core, I was actually planning to assign the transcoding VM six vCPUs and one vCPU to each of the remaining VMs, my reasoning being that the other VMs aren't really going to be that CPU-intensive.
The ZFS file server VM is going to be running a 12-disk 3-vdev pool, and it doesn't see much action. The most that happens on it is just streaming of media to an HTPC and a few computers here and there. Nothing super intensive, so I'm not going to devote that many resources to it in the first place (except for RAM, of course, since ZFS loves RAM).
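For reference, a pool like that might be laid out as below; the device names and the raidz level are assumptions on my part, since the post doesn't say whether the three vdevs are raidz1, raidz2, or mirrors:

```shell
# Hypothetical layout: 12 disks as three 4-disk raidz1 vdevs.
# Device names (da0..da11) are placeholders for whatever the
# passed-through HBA exposes inside the guest.
zpool create tank \
    raidz1 da0 da1 da2  da3 \
    raidz1 da4 da5 da6  da7 \
    raidz1 da8 da9 da10 da11

# Verify the vdev layout.
zpool status tank
```

Splitting the disks into three vdevs like this also spreads writes across three stripes, which helps the light streaming workload described above.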
I'm probably going to host the VMs off two leftover 750GB drives in RAID1, if possible. I know multiple VMs absolutely kill platter drives with random access, but these VMs won't be doing much disk activity anyway. I might spring for 2x 120GB SSDs instead and just try to keep the OS installs small, but I'd rather not have to spend the money.
I'd make sure that whatever you're getting supports VT-d or AMD's equivalent (AMD-Vi/IOMMU), especially if you're planning on running a guest with direct disk access for ZFS. For this budget you could easily build a pretty beefy system around a Sandy Bridge Xeon E3 and an Intel C20x-series chipset, but that would require ECC RAM (which I would recommend anyway), and it would only get you four cores.
Thanks for the info. I knew the free version only supported up to 32GB of RAM, but I didn't realize it was limited to a single socket, too. That does change my plans: I was starting to think it would be best to wait for a dual-socket 2011 board and start off with one quad-core for now. I've made sure to factor VT-d and IOMMU support into the boards and CPUs I'm looking at.
I guess I'll just need to spend some time this weekend running tests of my own in a VM on my 2500K-powered desktop to see how video transcoding in a VM performs with four cores.
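A rough test plan might look like the following; the sample filename and encoder settings are placeholders, and the fps target is just arithmetic, not a measured number:

```shell
# Time a transcode of a sample clip with the 4 vCPUs the test VM will
# have. -f null - discards the output so only encode speed is measured;
# sample.mkv is a placeholder for a real 1080p source.
time ffmpeg -i sample.mkv -c:v libx264 -preset fast -threads 4 -f null -

# Rule of thumb: real-time means encode fps >= source fps, so three
# simultaneous 24fps streams need roughly 3 * 24 fps of aggregate
# x264 throughput at the chosen settings.
awk 'BEGIN { printf "%d\n", 3 * 24 }'   # prints 72
```

Comparing the fps ffmpeg reports against that target should show whether one 4-core (or 6-core) VM can keep up with two or three concurrent streams.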
mpeg4v3 fucked around with this message at Feb 25, 2012 around 01:27
|# ¿ Feb 25, 2012 01:24|