|
Another goon is offering a HP ProLiant DL585 G5 4X AMD Quad Core Opteron 8356 2.3GHz w/ 72GB RAM (no drives included) for $600. Should I get this, based on the situation I've been explaining on the last page? Or would I be better off paying some extra coin for a very low-end 1U Supermicro?
|
# ? Jun 21, 2012 04:50 |
|
Mierdaan posted:It does, but the licensing terms basically require that the hypervisor instance of Server 2008 is used only to run the virtual instance; you can't set up that bare-metal instance of Server 2008 to do anything other than run the Hyper-V role. What you lose is the ability to start/restart/set up guests directly on the host and most other settings; you need Hyper-V Manager installed on another machine. But at the same time I figure there would be fewer things that need updates when you do not have the regular Windows GUI. Am I completely wrong? I'll actually expand that question: are there fewer updates, and as a result fewer restarts due to installing those updates, on Server Core machines?
|
# ? Jun 21, 2012 07:54 |
|
peepsalot posted:Another goon is offering a HP ProLiant DL585 G5 4X AMD Quad Core Opteron 8356 2.3GHz w/ 72GB RAM (no drives included) for $600 Those chips are from 2007-2008. You'd probably be better off with a single present-day i5. It might be worth it (since there's a huge amount of RAM there) for some applications, but not yours.
|
# ? Jun 21, 2012 13:34 |
|
underlig posted:Wouldn't it be better in that case to run hyper-v core server? (since hyper-v server core is "free" there is no difference in what you pay). Installing Core over a full GUI means very little if you're running a small number of VMs on the machine and have the resources available. The majority of your updates are going to be applied to the virtual machines, not the physical host, unless MS pushes updates out for Hyper-V itself. For a first-time Hyper-V user it's probably easier to do a full install.
|
# ? Jun 21, 2012 13:42 |
|
Bob Morales posted:Those chips are from 2007-2008. You'd probably be better off with a single present-day i5. I don't know, it's twice as many as you've marked in that image there; he said it was 4x quads, not 2x quads, and with that much CPU grunt and 72GB RAM, I'd argue that's a steal for $600 if you need a virtualization toy. I didn't follow what his use was, though. Edit: eh, followed back, it's for consolidating some servers that are running on single-core P4s. I don't see how that deal is all bad; of course I'd suggest a server with a warranty and so on, but I guess that's not an option here. Double Edit for a more relevant graph: Also, I'd take that graph as the worst case: you're going to have better multithreading performance than it indicates with that many CPUs, and each core would be better than the lousy old P4s. vv Yeah, I take your point; guess I just find it an attractive buy for the sheer lack of cost. But more than anything, that box will chow down on power. HalloKitty fucked around with this message at 13:58 on Jun 21, 2012 |
# ? Jun 21, 2012 13:45 |
|
HalloKitty posted:I don't know, it's twice as many as you've marked in that image there, he said it was 4x Quads not 2x Quads, and with that much CPU grunt and 72GB RAM, I'd argue that's a steal for $600 if you need a virtualization toy. I was just getting at: for running like five lightly-loaded VMs, it'd be overkill to have 16 vCPUs and 72GB RAM. You'd have vCPUs with double the speed (but far fewer of them) with an i5/i7, effectively making it much faster. But for certain things a 16-core 72GB box for $600 is a good price, even without drives. $300 for an i5 + MB, $80 for 16GB RAM... Score per CPU: 11,026/16 cores = 689 per core; 6,604/4 cores = 1,651 per core. Well over twice as fast per core. Plus, the amount of disk I/O you'd need to keep 16 cores busy... I guess if you can fit it all into RAM (and you have 72GB of it) it would be alright, but if you had it loaded with VMs it'd be fighting over disk contention. Bob Morales fucked around with this message at 13:56 on Jun 21, 2012 |
# ? Jun 21, 2012 13:52 |
|
HalloKitty posted:Edit: eh, followed back, it's for consolidating some servers that are running on single core P4s.. I don't see how that deal is all bad; of course I'd suggest a server with a warranty and so on, but I guess that's not an option here.
|
# ? Jun 21, 2012 17:31 |
|
Misogynist posted:The deal's not bad, but consolidating an entire infrastructure onto a single point of failure with no warranty seems like a job-loss event to me. We have clients who would see that as out of the box thinking to save costs and be delighted.
|
# ? Jun 21, 2012 21:58 |
|
sanchez posted:We have clients who would see that as out of the box thinking to save costs and be delighted.
|
# ? Jun 21, 2012 23:02 |
|
i would rather put together 3x newegg boxes for virtualization at that price.
|
# ? Jun 22, 2012 00:41 |
|
We're going to be changing some of our servers over to ESXi vSphere servers; we purchased two Dell 6248s for the iSCSI and LAN traffic. To make the vMotion/LAN traffic redundant across both switches (assuming the hosts are hooked into both switches), do I just have to add the NICs to the same vSwitch?
|
# ? Jun 22, 2012 01:23 |
|
adorai posted:i would rather put together 3x newegg boxes for virtualization at that price. How would you build 3 boxes for $600?
|
# ? Jun 22, 2012 01:25 |
|
peepsalot posted:How would you build 3 boxes for $600? Bunch of these? http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=2527224&CatId=31 Biostar N68S+ DiabloTek Barebones Kit - Biostar N68S+ Board, AMD Phenom X3 8550, CPU Cooler, Patriot 2GB DDR2 RAM, Seagate 500GB HDD, Lite-On 24x DVDRW, DiabloTek Mid Tower Case, 400W Power Supply
|
# ? Jun 22, 2012 02:46 |
|
^ As long as you don't actually use the in-case PSU. Edit: woah, wait a minute, that's AM2 and the original Phenom, from 2008. I personally wouldn't recommend it. adorai posted:i would rather put together 3x newegg boxes for virtualization at that price. It still has 72GB RAM. I'd take it even if I wasn't going to use it in production. That's a hell of a testing machine for the price, but that may not be needed or be relevant. HalloKitty fucked around with this message at 08:54 on Jun 22, 2012 |
# ? Jun 22, 2012 08:49 |
|
Bob Morales posted:Bunch of these? 2GB of memory? A single SATA disk? Compared to 72GB, 16 cores, and (up to) 16 SAS drives (or 2.5" SATA, I guess)? Building three "virtualization boxes" from Newegg is a losing proposition on all ends. That DL585 would be able to run circles around your three, whilst running three instances of ESXi inside itself to play with vCenter and exporting iSCSI back to itself.
|
# ? Jun 22, 2012 16:37 |
|
Anyone know if they added recurring scheduled P2V functionality back in vSphere 5? They took it out in 4.1, so I've been using vConverter, and it's a little flaky.
|
# ? Jun 22, 2012 20:49 |
|
Kaddish posted:Anyone know if they added recurring scheduled p2vs functionality in vsphere 5? They took it out in 4.1 so I've been using vConverter and it's a little flaky. I didn't know the vsphere client ever had the ability to schedule P2V conversions, but I don't see it in 5. Are you sure you weren't using a different version of vmware converter?
|
# ? Jun 22, 2012 21:00 |
|
nuckingfuts posted:I didn't know the vsphere client ever had the ability to schedule P2V conversions, but I don't see it in 5. Are you sure you weren't using a different version of vmware converter? I used Converter Enterprise, which is a vSphere plugin. The recurring functionality was in 3.5 and 4.0. I read somewhere that they were adding this functionality back because so many customers complained about them removing it. You should at least have the ability to schedule a P2V; you're just not able to have it repeat, correct? If you can't schedule via vSphere, you may have the standalone version.
|
# ? Jun 22, 2012 21:41 |
|
Gotcha, I've only used converter standalone. I wasn't even aware there was a vcenter plugin / enterprise version.
|
# ? Jun 22, 2012 22:59 |
|
nuckingfuts posted:Gotcha, I've only used converter standalone. I wasn't even aware there was a vcenter plugin / enterprise version. I'd buy a play box for $600 with 72GB of RAM, even if I didn't use it in production. If I really wanted to save money, I'd use that as a main box, then have a cheap NFS or iSCSI + backup solution providing shared storage, not bother with DRS/HA, and just have a script that registers the VMs on a second cheap host and powers them on in the event of failure. You don't get DRS, but if you don't need 24/7 systems, then why pay for it, unless you don't like working the odd evening. Downtime is inevitable, and it's relative to how bad it is for the business and how much you pay to avoid extending it. Your scenarios end up being: 1. Primary host is hosed; run script and be back up in 15 minutes. 2. Shared storage is hosed; you'd be hosed with fancy-rear end clusters anyway. Back up in 2 days with fixed or new storage. 3. Both are hosed; boy are you properly hosed now, better use a backup and order next-day delivery from NewEgg. Back up in 2 days.
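For anyone curious what that failover script would amount to, here's a minimal sketch. The VM names and datastore paths are made-up examples, and it just prints the vim-cmd pairs you'd run over SSH on the spare host rather than executing anything (the power-on needs the VM id that registervm returns, so that part is left as a placeholder):

```python
# Sketch of the "cold failover" script described above: register each VM's
# .vmx from shared storage on the backup host, then power it on.
# The datastore path and VM names are hypothetical; on a real ESXi host
# these vim-cmd invocations would be run over SSH against the backup box.

VMS = {
    "dc01": "/vmfs/volumes/shared-nfs/dc01/dc01.vmx",
    "mail01": "/vmfs/volumes/shared-nfs/mail01/mail01.vmx",
}

def failover_commands(vms):
    """Build the vim-cmd sequence to bring the VMs up on the spare host."""
    cmds = []
    for name, vmx in sorted(vms.items()):
        # Registering returns a VM id; the power-on uses that id, so it's
        # shown here as a placeholder rather than a literal value.
        cmds.append("vim-cmd solo/registervm %s" % vmx)
        cmds.append("vim-cmd vmsvc/power.on <id-of-%s>" % name)
    return cmds

if __name__ == "__main__":
    for c in failover_commands(VMS):
        print(c)
```

Dry-running it like this also gives you something to sanity-check before the day you actually need it.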
|
# ? Jun 22, 2012 23:12 |
|
I'm in the pre-pre-pre stages of analyzing our current setup for virtualization. I have Cacti running; is anyone aware of a good way to use it to start graphing and trending IOPS on my servers? It seems doable on my *nix machines with iostat/cron/snmpwalk, but the Windows side isn't returning much. We're a drat small environment (13-14 servers total), but with our internal web software and Exchange 2010, I'm trying to make sure I don't under-spec, and I also want to make sure we leave ourselves enough room to expand in the future.
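For the *nix side, the iostat route can be as simple as a tiny script Cacti polls as a data-input source. This is just a sketch: the sample text stands in for actually running `iostat -dx`, and the column positions assume sysstat's extended device report, so check them against your distro's output:

```python
# Sketch of feeding disk IOPS into Cacti from a *nix box: parse the r/s and
# w/s columns of `iostat -dx` and print one value per device, which a Cacti
# data-input script (or an snmpd exec hook) can then graph. SAMPLE stands in
# for subprocess.check_output(["iostat", "-dx", "1", "2"]).

SAMPLE = """\
Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util
sda 0.10 2.30 12.50 37.50 400.0 900.0 26.0 0.15 3.0 1.2 6.0
sdb 0.00 0.00 80.00 20.00 640.0 160.0 8.0 0.50 5.0 2.0 20.0
"""

def iops_per_device(report):
    """Return {device: r/s + w/s} from an iostat -dx device report."""
    iops = {}
    for line in report.splitlines():
        fields = line.split()
        if not fields or fields[0] == "Device:":
            continue
        # In this layout, r/s is column 3 and w/s is column 4.
        iops[fields[0]] = float(fields[3]) + float(fields[4])
    return iops

if __name__ == "__main__":
    for dev, value in sorted(iops_per_device(SAMPLE).items()):
        print("%s:%.1f" % (dev, value))
```

For the Windows boxes, Performance Monitor counters (as mentioned below the post) are the more natural source than SNMP.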
|
# ? Jun 25, 2012 14:43 |
|
LmaoTheKid posted:I'm in the pre-pre-pre stages of analyzing our current setup for virtualization. Windows has Performance Monitor built-in. You can run the whole show from a single machine and record to disk for later analysis. http://blogs.technet.com/b/cotw/archive/2009/03/18/analyzing-storage-performance.aspx
|
# ? Jun 25, 2012 15:41 |
|
This has been baffling me for days. We use PHD Virtual to do backups. I had an issue last week and another a month ago where during the Snapshot stage of backups, the host machine fails to create them and times out after a few hours. During this, and after the operation times out, the host becomes dog slow and all VMs on it become nearly unusable. Even the host itself crawls to a stop -- it will take 3-5 seconds to log in via SSH, or even run a command. I have no idea why... there is no visible problem as far as we have been able to tell. All performance tracking looks normal, 'esxtop' via SSH shows nothing out of the ordinary, and neither does a regular 'top'. It's making me insane. The only thing that fixes it is a total host reboot. Anyone seen anything like this before? Host is ESX4.1 (Build 260247), for reference. It's making me borderline angry since I can't figure it out and I feel dumb as bricks.
|
# ? Jun 25, 2012 19:38 |
|
This definitely sounds like a backend storage issue, since most non-VM storage operations in ESXi are synchronous for some awful reason. Check your ESXi host logs on the impacted host, especially vmkernel.log, and see if you can correlate anything useful from your log events. Check your dangling snapshots, as well -- I've seen issues where Veeam created a chain of snapshots on a VM about 30 deep and dragged the host's performance into the ground until we cleaned them up.
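The snapshot-chain check can be scripted rather than eyeballed. A rough sketch, with made-up filenames: count the numbered delta vmdks (`-000001.vmdk` style) per VM in a datastore folder listing and flag anything deep — in practice you'd feed it `os.listdir()` of each VM folder on the host:

```python
# Sketch of the "check for dangling snapshots" advice: given a datastore
# directory listing, count snapshot delta files per VM and flag deep chains.
# Filenames below are hypothetical examples.

import re

# Snapshot deltas look like "<vmname>-000001.vmdk" (six-digit suffix).
DELTA = re.compile(r"^(?P<vm>.+)-(?P<num>\d{6})\.vmdk$")

def snapshot_depth(files):
    """Return {vm_name: number of snapshot delta vmdks} for a listing."""
    depth = {}
    for f in files:
        m = DELTA.match(f)
        if m:
            depth[m.group("vm")] = depth.get(m.group("vm"), 0) + 1
    return depth

if __name__ == "__main__":
    listing = [
        "web01.vmdk", "web01-flat.vmdk",
        "web01-000001.vmdk", "web01-000002.vmdk", "web01-000003.vmdk",
    ]
    for vm, n in snapshot_depth(listing).items():
        if n >= 3:
            print("%s has a %d-deep snapshot chain" % (vm, n))
```

A chain in the filesystem that the snapshot manager doesn't show is exactly the "dangling" case worth investigating.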
|
# ? Jun 25, 2012 20:16 |
|
No snapshots were visible or in the datastore folders of the VMs that were being snapshotted. First thing I checked, I thought maybe snapshots went out of control... or maybe it was even still trying to create them. It's also local storage, for reference. It seems that the logs were cleared up on reboot, so I'll need to wait again to get a look. I didn't find anything obvious last time, though.
|
# ? Jun 25, 2012 20:23 |
|
You can try running RVTools against your host and see if it points out anything glaring.
|
# ? Jun 25, 2012 22:23 |
|
If you're rooting around in esxtop and suspecting disk related performance issues, check 'd' mode and add the latency/cmd columns ("f" for column editing, and "j" for latency/cmd). Follow a similar process in modes U and V. Press "h" at any time for help. Also remember the sampling period is 5 seconds.
|
# ? Jun 25, 2012 23:59 |
|
Kachunkachunk posted:esxtop Just came across this the other day and found it incredibly handy to have near my desk. An esxtop cheat sheet. http://www.vmworld.net/wp-content/uploads/2012/05/Esxtop_Troubleshooting_eng.pdf
|
# ? Jun 26, 2012 00:45 |
|
Yellow Bricks has some good guides on interpreting and using esxtop metrics as well - be sure to print off a copy of that too: http://www.yellow-bricks.com/esxtop/. Also... FFFFFFFFFFFUUUU- Okay, so the Linux kernel bug detailed here will likely affect anyone running ESX 4.0 U1 and earlier later this week. ESXi users, or at least anyone running patch-6 for ESX 4.0 and later, should be fine, however. In case some of you do not already know, ESX "Classic" runs with a Console OS, which is basically a repurposed derivative of RHEL 3 or 4... along with all of the applicable bugs and quirks that accompany such a release (resolved with patches). If Red Hat bug 479765 triggers on an ESX box's Console OS, it will probably result in a Lost Heartbeat purple screen eventually (the vmkernel/hypervisor continues to run but notices the Console OS has stopped responding, then panics). Patch up by Saturday if you're running such an old release.
|
# ? Jun 26, 2012 22:10 |
|
Kachunkachunk posted:Also... FFFFFFFFFFFUUUU- Wha, patch-6? What does that mean? I'm working on patching my 4.0 ESX hosts (currently RTM or Update 1) and it looks like Update 4 is the latest, with 4 updates released in addition after Update 4. What does "patch" specifically mean in this context, and what does the 6 mean?
|
# ? Jun 26, 2012 22:20 |
|
Specifically known as "P06," which is only pissing me off, since I also have trouble figuring out exactly what release level that is. Rest assured, it's probably still something that predates Update-2, even. You'll surpass that with a regular update rollout. Update-x is a roll-up package, like a Service Pack. It may also include other specific fixes and stuff. P0x would be a specific minor rollup that looks like an individual patch. Installing Update-4 will net you all the benefits and fixes from prior updates/releases cumulatively. Edit: I think this is the relevant kernel update for ESX 4 that one would need: http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1013127, otherwise known as ESX400-201005001. Also see: http://scientificlinuxforum.org/index.php?s=2c11935c7c49be320ff30d9a09376a6e&showtopic=1695&st=0&#entry11777 and/or http://blog.toracat.org/2012/06/leap-seconds-who-cares/ for some information on what kernel revision needs to be met or exceeded. Edit2: I am confident that the above patch resolves the issue. You don't have to get to 4.1 right now. Kachunkachunk fucked around with this message at 22:45 on Jun 26, 2012 |
# ? Jun 26, 2012 22:38 |
|
FISHMANPET posted:Wha, patch-6? What does that mean? I'm working on patching my 4.0 ESX hosts (currently RTM or Update 1) and it looks like Update 4 is the latest, with 4 updates relased in addition after update 4. What does "patch" specifically mean in this context, and what does the 6 mean? I would assume the 6th patch level for ESX 4, so get to 4.1 quote:VMware vSphere 4.0 (May 20, 2009)
|
# ? Jun 26, 2012 22:42 |
|
Kachunkachunk posted:Specifically known as "P06," which is only pissing me off, since I also have trouble figuring out exactly what release level that is. Ugh, thanks VMware, can't wait to get a vCenter Server and not have to worry about all this. Moey posted:I would assume the 6th patch level for ESX 4, so get to 4.1 Not sure on what the status is of current maintenance; do we need to be current to go to 4.1?
|
# ? Jun 26, 2012 22:45 |
|
FISHMANPET posted:Ugh, thanks VMWare, can't wait to get a VCenter Server and not have to worry about all this. To my knowledge, if you are licensed for 4, you are good for all of 4 (4.1 included as well).
|
# ? Jun 26, 2012 22:49 |
|
No vCenter? Okay, so you'll probably be working on this in a maintenance window or something. Get Update-4 for 4.0 and interactively install it on each box using the command-line. Won't take you very long for each host, but it will require rebooting and downtime (especially since you don't have vMotion without vCenter?). If you do go to 4.1 and later, you generally do not need to update your existing install before it upgrades. Otherwise the worst case is to install over it and retain your local VMFS partition (in case you have VMs there).
|
# ? Jun 26, 2012 22:49 |
|
Kachunkachunk posted:No vCenter? Okay, so you'll probably be working on this in a maintenance window or something. I found this page: https://my.vmware.com/web/vmware/details/esx41u2/dHdlYnRoKmRidGRkKg== And it lists an update to install before going from 4.0 to 4.1. So I'd install that and then Update 2. E: And this confirms that the same key is good: http://kb.vmware.com/selfservice/microsites/search.do?cmd=displayKC&externalId=1024256 I've got a downtime on one of the servers tonight. I was going to patch to 4.0u4, but I guess I'm going to 4.1u2! FISHMANPET fucked around with this message at 22:53 on Jun 26, 2012 |
# ? Jun 26, 2012 22:50 |
|
Has anyone used Veeam for their monitoring tools? I know people are hit and mostly miss on their backup products but wondering about their other offerings.
|
# ? Jun 27, 2012 15:52 |
|
Does anyone know if there is any CPU performance difference between VM version 7 and 8? We are in the process of migrating into a new ESXi 5 cluster, and it looks like any guests with v7 hardware are all running in older EVC modes (Westmere/Nehalem etc.), but v8 machines are on Sandy Bridge. The early machines I've moved over are all compute-heavy compile/build nodes for our developers, and I'm wondering if the additional Sandy Bridge CPU features will give any speed benefit. I'm hesitant to upgrade everything to v8 right now, as I'm still proving/testing the cluster and I don't want to remove any rollback capability to the old ESXi 4.1 cluster. Unfortunately I haven't been able to find an easy way to benchmark it. I'm looking for some sort of bootable ISO that has benchmarking tools, but I've only found bootable stress tests that max out the CPU without giving me any numbers I can compare. GrandMaster fucked around with this message at 04:45 on Jun 28, 2012 |
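Short of a bootable benchmark ISO, one crude option is to run the same fixed CPU-bound workload inside a v7 guest and a v8 guest and compare wall-clock numbers. This is only a rough single-core figure — it won't exercise Sandy Bridge instruction-set features specifically, so treat it as a sanity check rather than a real benchmark:

```python
# Crude in-guest CPU comparison: time a fixed integer/float workload and
# report seconds taken (lower is faster). Run the identical script in the
# v7 and v8 VMs and compare the numbers.

import time

def cpu_score(iterations=2000000):
    """Seconds to run a fixed arithmetic loop; lower is faster."""
    start = time.perf_counter()
    acc = 0.0
    for i in range(1, iterations):
        acc += (i % 7) * 0.5
    return time.perf_counter() - start

if __name__ == "__main__":
    # Take the best of a few runs to reduce scheduler/EVC-migration noise.
    best = min(cpu_score() for _ in range(3))
    print("best of 3: %.3f s" % best)
```

Pinning the VM to one EVC mode at a time while you test would keep the comparison honest.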
# ? Jun 28, 2012 04:43 |
|
GrandMaster posted:We are in the process of migrating into a new ESXi5 cluster and it looks like any guests with v7 hardware are all running in older EVC modes (Westmere/Nehalem etc), but v8 machines are on Sandy Bridge.
|
# ? Jun 28, 2012 04:56 |
|
There should not be a performance difference between an identically specced v7 and v8 VM. What you gain is the ability to take advantage of new features, such as USB 3, HW accelerated Aero, and scaling VMs beyond 8-core and 256GB RAM. You just need to shut down and power on to take advantage of the new CPU mode -- you do not need to upgrade.
|
# ? Jun 28, 2012 04:58 |