Mausi
Apr 11, 2006

I run 4500 VMs on NFS because dedupe, also dedupe and cheaper because of dedupe.
Some of our development databases (SQL, Sybase and Oracle) have also been moved over for the same reasons. That being said, we're looking at potentially cheaper storage solutions like DAS for certain kinds of desktop deployment.

NFS will never replace DMX/VMAX for us on the server side, but nothing else is coming close on price/performance/manageability (according to the beancounters) to the pile of 6240s we're amassing. And it's getting towards being a loving pile, which is a different problem.

We run it on 10GbE on Cisco Nexus though, so this may make a difference over your average small implementation, but it's a lot easier for us to manage right now than the legacy FC environment.

madsushi
Apr 19, 2009

Baller.
#essereFerrari
When people talk about NFS and dedupe vs iSCSI and dedupe, the key thing is what VMWare sees.

If you have a volume and a LUN via iSCSI and you get great dedupe, that extra space can only be utilized by the SAN for things like snapshots. The space isn't actually made available to the LUN and the underlying OS (VMWare).

If you have an NFS volume and you get great dedupe, VMWare sees that free space, so it can utilize that space to oversubscribe your storage. Now you're actually using all that free space!

Plus NFS was the only way to exceed 2 TB datastores prior to the recent ESXi 5.

Mierdaan
Sep 14, 2004

Pillbug

madsushi posted:

When people talk about NFS and dedupe vs iSCSI and dedupe, the key thing is what VMWare sees.

If you have a volume and a LUN via iSCSI and you get great dedupe, that extra space can only be utilized by the SAN for things like snapshots. The space isn't actually made available to the LUN and the underlying OS (VMWare).

If you have an NFS volume and you get great dedupe, VMWare sees that free space, so it can utilize that space to oversubscribe your storage. Now you're actually using all that free space!

Plus NFS was the only way to exceed 2 TB datastores prior to the recent ESXi 5.

This seems like a myopic and virtualization-admin-centric way of viewing things. If the SAN is deduping and aware of that ratio, that allows you to create more LUNs and more datastores; there's no reason to look at this from a one-LUN standpoint, is there?

madsushi
Apr 19, 2009

Baller.
#essereFerrari

Mierdaan posted:

This seems like a myopic and virtualization-admin-centric way of viewing things. If the SAN is deduping and aware of that ratio, that allows you to create more LUNs and more datastores; there's no reason to look at this from a one-LUN standpoint, is there?

It's definitely the virtualization-admin way of thinking, see thread title.

So: you just deduped your 2TB volume down to 1TB. With NFS, the story ends here, as VMWare sees 1TB of free space. Yay!

So: you just deduped your 2TB volume with a 2TB LUN in it down to 1TB. You have 1TB of free space in the volume, but the LUN still presents its full 2TB. With iSCSI, you're now left with one of two options.

Either:
*Shrink the volume
*Make a new volume with the reclaimed space
*Make a new LUN in the new volume
*Move VMs to that new LUN (which sucks w/o sMotion)
*Enjoy your lower dedupe ratio since everything isn't in the same volume, plus the increased management since now you have 2 volumes and 2 LUNs to manage

Or:
*Make a new, smaller LUN in that volume
*Move your VMs to that new LUN (which sucks w/o sMotion)
*Dedupe again
*Repeat several times until your volume is actually "full"
*Enjoy your several miscellaneous-sized LUNs that will cause you all sorts of fun


For VMWare, there are very clear advantages to just using a huge NFS datastore to host all of your VMs: better dedupe ratios, easier management, fewer constraints.

Mierdaan
Sep 14, 2004

Pillbug
For sure - my perspective is always the small shop admin view on things, where I'm doing both storage and virtualization. I'm being moved from NetApp to Compellent storage now, so I've had to deal with the actuality of deduping on the file level with NFS and having ESXi be aware of that ratio, or deduping at the block level and leveraging the ratio in your volume layout. There's no huge difference (for me) but if you're in a position where your storage is hands-off and you can just request gently caress-off huge NFS exports, then I definitely see the appeal.

Mierdaan fucked around with this message at 06:37 on Jul 10, 2012

KS
Jun 10, 2003
Outrageous Lumpwad

FISHMANPET posted:

So is there a reason to choose NFS datastores over something block based? It can be as good as block as far as I can tell, but it seems like it just started as an afterthought and snowballed from there, and when starting from scratch there's no reason to choose NFS if you have iSCSI or FC.

This seems really backwards -- up until VAAI leveled the playing field NFS was flat-out superior at scaling out, and the five biggest ESX deployments I saw while doing 3.5 and 4.0-era consulting all used NFS storage.

Performance is probably equal on 10g fabrics, but NFS just wins from a management perspective in my book. It's historically easier to run NFS over a converged infrastructure. In some cases you save using 10 gig NICs instead of CNAs, and you don't have to worry about the 3 driver installs it takes to get a QLE8242 working in ESXi 5.

Even now I'd put Netapp running NFS up against anything I can think of for ease of management. Before 5.0 I was juggling ~45 2TB datastores. Not fun.

KS fucked around with this message at 16:22 on Jul 10, 2012

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
It's still a little bit iffy for critical workloads in my book as long as it doesn't support proper MPIO. Yeah, link bonding whatever, but that only gets you so far -- you can't have two independent fabrics.

KS
Jun 10, 2003
Outrageous Lumpwad
In the UCS world I now live in, whether I choose FC, 10g iSCSI, or NFS, it gets plugged into an interface on each of a pair of Nexus switches in vPC mode. Since we no longer have completely independent fabrics anyway, maybe it's less of a difference.

Doing your redundancy at layer 2 also means not having to manage MPIO on the physical servers, which is probably another point in favor of NFS for manageability.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
So onto another question, to hard drive or not to hard drive?

I guess the root of my question is: what happens to an ESXi host running off of an SD card, but with swap space on a local hard drive, if the hard drive dies (and therefore the swap space disappears)?

And then any thoughts on installing on flash media vs installing on a hard drive?

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

FISHMANPET posted:

So onto another question, to hard drive or not to hard drive?

I guess the root of my question is: what happens to an ESXi host running off of an SD card, but with swap space on a local hard drive, if the hard drive dies (and therefore the swap space disappears)?

And then any thoughts on installing on flash media vs installing on a hard drive?

Hard drive, unless you are doing 20+ hosts, where roughly $200 a drive across 20 hosts (about $4,000) starts to make a difference. My teacher loves Auto Deploy + host profiles for central manageability and fast redeploys; hell, he doesn't even buy warranties for his servers anymore, since he buys cheap servers and can just replace them later or spend that budget elsewhere. I like that idea a lot, but I still feel more comfortable with an HDD in there: that way if my vCenter shits itself, or TFTP is down, and a host or two power cycle, I still have HA going for me. Not to mention that once an Auto Deployed host loses power, its logs are gone unless you set up a syslog server. On top of that, what servers don't come with at least 1 hard drive already (excluding blades)?

As for installing to USB: yeah, if the vendor gives you the option of ordering a diskless server, I would go that way.
How to:
http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2004784

It is also good to keep host swapping in mind -- it should never happen, but if it did for some reason, an HDD would be best.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
So here's a new diagram, not testing the configuration maximums:


We're looking at Dell servers, and they all have the ability to run ESXi off of SD cards. In fact, it looks like if we get VMware directly through Dell, it's required (at least it is when you configure the server, who knows what our Rep can do).

It makes sense to have one hard drive for swap and other scratch stuff. I know that I can't join an HA cluster without a configured swap space. So I'm wondering what happens to a running server if the local hard drive dies and it loses its swap space. Is it worth it to spend the extra for a second hard drive and possibly a RAID card? And at that point, is it worth it to spend the extra for the SD card (which can come out to more than the price of a hard drive)?

KS
Jun 10, 2003
Outrageous Lumpwad
You can buy 4gig SD cards at staples for $7. Dell's customized ESX is downloadable as well, so preinstall only saves you a few minutes, and those SD cards are ridiculously expensive.

Not sure what swap space you're talking about. If you're talking about swap to host cache, you're better off with a single SSD. You can forego this if you have headroom on the hosts. It's a fairly new feature and only kicks in for a situation that you want to avoid anyways.

If you're talking about virtual machine swap files, and you store them separately from the VM for some reason, you want that to be redundant and fast. However, you really probably don't want to be doing this either unless you're replicating the VMs and want to avoid replicating the swap file. Even then, you're better off putting swap on shared storage for vmotion purposes.
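(For reference, a rough PowerCLI sketch of the swapfile placement knobs being discussed here. The cluster, host, and datastore names are invented, and the parameter names are from memory -- check Get-Help Set-Cluster / Set-VMHost before trusting it.)

code:
# Assumes an existing Connect-VIServer session.
# Default, and usually best for vMotion: keep each VM's swapfile with the VM.
Get-Cluster "ProdCluster" | Set-Cluster -VMSwapfilePolicy WithVM -Confirm:$false

# Alternative: let each host pick a datastore (e.g. local SSD) for swapfiles.
Get-Cluster "ProdCluster" | Set-Cluster -VMSwapfilePolicy InHostDatastore -Confirm:$false
Get-VMHost "esx01.example.com" |
    Set-VMHost -VMSwapfileDatastore (Get-Datastore "esx01-local-ssd")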



edit: That diagram represents 10g links? Is your storage iscsi-based? Putting storage, networking, and vmotion over the same wire requires some careful planning. Be sure to do your homework and get the right network adapters for your environment.

KS fucked around with this message at 20:09 on Jul 10, 2012

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

KS posted:

You can buy 4gig SD cards at staples for $7. Dell's customized ESX is downloadable as well, so preinstall only saves you a few minutes, and those SD cards are ridiculously expensive.

Not sure what swap space you're talking about. If you're talking about swap to host cache, you're better off with a single SSD. You can forego this if you have headroom on the hosts. It's a fairly new feature and only kicks in for a situation that you want to avoid anyways.

If you're talking about virtual machine swap files, and you store them separately from the VM for some reason, you want that to be redundant and fast. However, you really probably don't want to be doing this either unless you're replicating the VMs and want to avoid replicating the swap file. Even then, you're better off putting swap on shared storage for vmotion purposes.

I'm talking about Hypervisor swap. Normally I wouldn't care because if the hypervisor is doing swapping then holy gently caress hold onto your butts, but I've read that you can't join a server to an HA cluster unless it has local hypervisor swap, so I'm not sure what the loss of that swap would do to a machine.

KS
Jun 10, 2003
Outrageous Lumpwad

FISHMANPET posted:

I've read that you can't join a server to an HA cluster unless it has local hypervisor swap

I think that's really outdated -- you had to turn it on back in 2.5. Nowadays even on an SD card it sets up a 1GB swap space on the card and can join fine, but god help you if it ever gets into a swapping situation. Have never run into that error from 4.0 till now.

e: vvv It'll probably never even be a problem.

KS fucked around with this message at 20:24 on Jul 10, 2012

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

KS posted:

I think that's really outdated -- you had to turn it on back in 2.5. Nowadays even on an SD card it sets up a 1GB swap space on the card and can join fine, but god help you if it ever gets into a swapping situation. Have never run into that error from 4.0 till now.

This is the KB; it says it still applies to 5.0. Though if it only wants 1 GB, I could easily dump that on the SAN super cheaply and not have to worry about disks at all.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Hypervisor swap is so completely uncommon since they implemented memory compression in 4.1 that seriously, gently caress yourself and your career if you let your environment get so oversubscribed that you're swapping.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

FISHMANPET posted:

So here's a new diagram, not testing the configuration maximums:


We're looking at Dell servers, and they all have the ability to run ESXi off of SD cards. In fact, it looks like if we get VMware directly through Dell, it's required (at least it is when you configure the server, who knows what our Rep can do).

It makes sense to have one hard drive for swap and other scratch stuff. I know that I can't join an HA cluster without a configured swap space. So I'm wondering what happens to a running server if the local hard drive dies and it loses its swap space. Is it worth it to spend the extra for a second hard drive and possibly a RAID card? And at that point, is it worth it to spend the extra for the SD card (which can come out to more than the price of a hard drive)?

Before we go and make more recommendations, which license are you getting?

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Misogynist posted:

Hypervisor swap is so completely uncommon since they implemented memory compression in 4.1 that seriously, gently caress yourself and your career if you let your environment get so oversubscribed that you're swapping.

I'm well aware that it's a situation that I should strive to avoid, but ESXi requires it for HA (which we'll be using), and I can't find anything that tells what happens when that swap space disappears in an HA cluster.

Corvettefisher posted:

Before we go and make more recommendations, which license are you getting?

We'll be getting Enterprise licenses. There's a possibility that we might get some Standard for a separate cluster, but that's for instruction, not infrastructure production.

I've looked over Enterprise Plus and I don't think our environment will be big enough to justify the added cost of stuff like Storage DRS and VDS.

Mausi
Apr 11, 2006

Misogynist posted:

Hypervisor swap is so completely uncommon since they implemented memory compression in 4.1 that seriously, gently caress yourself and your career if you let your environment get so oversubscribed that you're swapping.
but but but...I inherited it that way; I put an alert on ballooning over 20% physical memory and swap over 5% physical memory and holy poo poo did I have to clear my inbox.
The guy who handed it over to me was all "But memory allocation is at 175%; it's efficient!" :eng99:

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

FISHMANPET posted:

I'm well aware that it's a situation that I should strive to avoid, but ESXi requires it for HA (which we'll be using), and I can't find anything that tells what happens when that swap space disappears in an HA cluster.


We'll be getting Enterprise licenses. There's a possibility that we might get some Standard for a separate cluster, but that's for instruction, not infrastructure production.

I've looked over Enterprise Plus and I don't think our environment will be big enough to justify the added cost of stuff like Storage DRS and VDS.

Don't forget Profile-Driven Storage, Auto Deploy, SIOC/NIOC, and Host Profiles. Some really good features, and 5.1 is just around the corner, promising some nice new stuff.

But to your previous question: if you are going with hard drives, get the smallest size and use the integrated adapter, RAID 1 if you really want to sleep better at night. A single drive wouldn't be terrible, although it is a SPoF (so is a USB stick); you could simply vMotion things off, rip and replace the drive, and reinstall.

I would weigh the cost of disk servers against diskless, then take the difference and see if you can put that sum toward Enterprise Plus, and use Auto Deploy and host profiles to get rip-and-replace hosts.

VVV forgot he had a bunch of 10Gb

Dilbert As FUCK fucked around with this message at 01:14 on Jul 11, 2012

KS
Jun 10, 2003
Outrageous Lumpwad
SIOC and NIOC become really important when running a converged network. You need to be able to throttle that vMotion traffic somehow or bad things happen. If you're considering plain-jane 10g NICs with this design and using iSCSI, you're setting yourself up for some pain.
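(For what it's worth, SIOC itself is just a per-datastore setting. A hedged PowerCLI one-liner -- datastore name invented, parameter names from memory, and it needs Enterprise Plus licensing:)

code:
# Assumes a connected PowerCLI session; 30 ms is just an example threshold.
Get-Datastore "NFS-Datastore-01" |
    Set-Datastore -StorageIOControlEnabled $true -CongestionThresholdMillisecond 30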

You should strongly consider Enterprise Plus. The 4-host Enterprise Plus + vCenter bundle is like $25k plus $6k/year for support, as a ballpark figure.

I will be ripping the SD card out of a diskless running host in an HA cluster tomorrow to see what happens, so stay tuned.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug
http://professionalvmware.com/2012/07/vbrownbag-vcap5-dca-objective-1-2-1-3-herseyc/

BrownBag tonight on VCAP-DCA 5 objectives 1.2 and 1.3, if anyone is interested.


@Fish: yeah, go for Enterprise Plus. If you haven't bought the licenses, shoot me an email at corvettefish3r (at) gmail.com; my firm sells licenses if you need to compare anything.

Dilbert As FUCK fucked around with this message at 21:06 on Jul 11, 2012

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Mausi posted:

I know I'm a little late here, but PowerCLI has the Get-Stat command; if you do a little googling there are some great posts by LucD on using it.
You can also use Get-View on VirtualMachine and gather the VM summary, which will always have QuickStats and is very fast to return. There are also pretty simple techniques to gather the realtime stats as well.

I wrote a bespoke script with it to gather VM cpu/mem stats from 17 vCenters and dig out what needed rightsizing, so it'll probably provide you what you need.
Get-Stat seems reaaaaally slow. Do you know if other API approaches are any quicker? I'm about to just start querying the vCenter database, which seems like it's an order of magnitude quicker.

Mausi
Apr 11, 2006

Misogynist posted:

Get-Stat seems reaaaaally slow. Do you know if other API approaches are any quicker? I'm about to just start querying the vCenter database, which seems like it's an order of magnitude quicker.

Yeah, it's drat slow, but you're already pulling from the VC tables, just indirected through PowerCLI. You also have to consider that you end up with an average of averages, which is dangerous if you use it for the wrong kind of measurement.
When I use Get-Stat, I do precisely one call which gets everything I want, and then process the rest in hash tables from there on in. If you're making more than one call to it, you're asking for pain.

I'm not aware of faster methods beyond pulling the data into another tool which correlates as it goes and gives you a report whenever you push the button.
VCOps is great, Netuitive does ok, haven't tried much else but I hear things like Graphite can do cool poo poo if you have the time to build something specific.
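(A minimal sketch of that single-call pattern, assuming a connected PowerCLI session -- one Get-Stat call for everything, then hash tables. The stat names, 7-day window, and interval are just examples, not a finished rightsizing report.)

code:
# One call to Get-Stat for every VM, instead of one call per VM.
$vms   = Get-VM
$stats = Get-Stat -Entity $vms -Stat "cpu.usage.average","mem.usage.average" `
                  -Start (Get-Date).AddDays(-7) -IntervalMins 30

# Bucket the samples per VM in a hash table.
$byVm = @{}
foreach ($s in $stats) {
    if (-not $byVm.ContainsKey($s.Entity.Name)) { $byVm[$s.Entity.Name] = @() }
    $byVm[$s.Entity.Name] += $s
}

# Crude rollup -- remember these are averages of averages.
foreach ($name in $byVm.Keys) {
    $cpu = ($byVm[$name] | Where-Object { $_.MetricId -eq "cpu.usage.average" } |
            Measure-Object -Property Value -Average).Average
    "{0}: {1:N1}% avg CPU" -f $name, $cpu
}

# For point-in-time numbers, the QuickStats route mentioned above is much faster:
Get-View -ViewType VirtualMachine -Property Name,Summary.QuickStats |
    Select-Object Name,
        @{N="CpuMHz"; E={$_.Summary.QuickStats.OverallCpuUsage}},
        @{N="MemMB";  E={$_.Summary.QuickStats.GuestMemoryUsage}}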

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Mausi posted:

Yeah, it's drat slow, but you're already pulling from the VC tables, just indirected through PowerCLI. You also have to consider that you end up with an average of averages, which is dangerous if you use it for the wrong kind of measurement.
When I use Get-Stat, I do precisely one call which gets everything I want, and then process the rest in hash tables from there on in. If you're making more than one call to it, you're asking for pain.

I'm not aware of faster methods beyond pulling the data into another tool which correlates as it goes and gives you a report whenever you push the button.
VCOps is great, Netuitive does ok, haven't tried much else but I hear things like Graphite can do cool poo poo if you have the time to build something specific.
That's exactly what we're looking to do -- Graphite with one of several dashboarding frontends. (R.I. Pienaar's gdash tool looks promising.) Unfortunately, actually getting a shitload of data out of vCenter and into Graphite is proving to be its own challenge.

I'm probably going to stick with the Perl API. It seems to be the least-bullshit way of getting raw access to the low-level perf query mechanism.
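(If it helps, the shipping side into Graphite is simple no matter how you collect: carbon's plaintext protocol is just "metric.path value unix-timestamp" over TCP, port 2003 by default. A hedged PowerCLI-flavored sketch -- the Graphite host and metric paths are invented:)

code:
# Assumption: a carbon-cache listener at graphite.example.com:2003.
function Send-GraphiteMetric ([string]$Path, $Value,
                              [string]$GraphiteHost = "graphite.example.com",
                              [int]$Port = 2003) {
    $epoch  = [int]([DateTime]::UtcNow - [DateTime]'1970-01-01').TotalSeconds
    $client = New-Object System.Net.Sockets.TcpClient($GraphiteHost, $Port)
    $writer = New-Object System.IO.StreamWriter($client.GetStream())
    $writer.WriteLine("$Path $Value $epoch")   # plaintext protocol: path value timestamp
    $writer.Flush(); $writer.Close(); $client.Close()
}

# Ship QuickStats, one metric per VM (real code should sanitize VM names first).
Get-View -ViewType VirtualMachine -Property Name,Summary.QuickStats | ForEach-Object {
    Send-GraphiteMetric "vmware.$($_.Name).cpu_mhz" $_.Summary.QuickStats.OverallCpuUsage
}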

Vulture Culture fucked around with this message at 02:29 on Jul 13, 2012

Christobevii3
Jul 3, 2006
Did the guy with the 6 core amd vmware esxi server ever post his build/specs/cost? I am probably going to do something similar soon.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Christobevii3 posted:

Did the guy with the 6 core amd vmware esxi server ever post his build/specs/cost? I am probably going to do something similar soon.

It's the OP. He runs a desktop and sets up all his labs through Workstation.

I'll make a post about my home lab setup tomorrow when I'm not on my phone. It's a little mini-ITX Intel build.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Christobevii3 posted:

Did the guy with the 6 core amd vmware esxi server ever post his build/specs/cost? I am probably going to do something similar soon.



That's basically what I have. I can do about 20 VMs before my system starts grinding; you'll still need an extra HDD or two, and a power supply of course, but that should give you the basic gist of it.

I am looking into some whitebox builds for study on my VCAP-DCA

My teacher posted a link to this http://kendrickcoleman.com/index.php?/Tech-Blog/vmware-vsphere-home-lab-qthe-green-machinesq.html on his site but I think it can be made cheaper


Also SSD prices are falling rapidly :w00t:

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
In my opinion, you are often better off building two boxes: one lower-powered one just for storage, and one for VMware. You can buy a Zacate motherboard, case, and power supply for pretty drat cheap, and then you don't have to worry about IOMMU or a VMware-supported storage controller.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

adorai posted:

In my opinion, you are often better off building two boxes: one lower-powered one just for storage, and one for VMware. You can buy a Zacate motherboard, case, and power supply for pretty drat cheap, and then you don't have to worry about IOMMU or a VMware-supported storage controller.

Yeah, 2 physical boxes are great, as the physical network/disk limitations give you a nice gauge of performance when doing things, but Workstation + ESXi VMs is much simpler and cheaper.

Kachunkachunk
Jun 6, 2011
I passed through my LSI controller to a VM and use that as a storage server, personally.

Note that this is a home lab thing. If you need to learn anything that looks like a larger setup, a decent modern desktop can have upwards of 16GB to 32GB of memory and at least 8 independent threads, and you can virtualize some ESXi hosts plus vCenter in a hosted solution like Fusion, Workstation, etc.

Plus I tend to have this fallback thing where I turn old gaming desktops into servers, so every desktop gets maxed out with RAM in case it has to turn into another ESXi node, before RAM prices climb due to rarity.

With that said, however, I still want to consolidate everything into one larger box at some point and donate/sell the old hardware.

Aaearon
Jul 9, 2001
I went ahead and pulled the trigger on the box I posted back many pages ago:

SUPERMICRO MBD-X9SCM-F-O $199
Intel Xeon E3-1230 V2 $233
Antec NEO ECO 400C 400W $49.99
Mushkin Enhanced Prospector 4GB USB $6
LIAN LI PC-V351B $109
Super Talent DDR3-1333 8GB ECC x2 $132

So, all in for around $730. Right now I am using my QNAP as a datastore (and fileserver). In the future I will transfer the fileserver role to a VM running FreeNAS or something, using an LSI controller passed through. Next paycheck I will also get another 16GB of RAM, just because.

Pretty happy with it especially for the price and the features that I have.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Edit: Never mind, Powershell was being a pain in the rear end

Vulture Culture fucked around with this message at 05:07 on Jul 16, 2012

Christobevii3
Jul 3, 2006
If I was going to keep running in VirtualBox, is there really a difference between the AMD FX-8150 and the Intel 2600K? I need to upgrade my desktop, and I have 16GB of DDR3 and would rather not build another machine right now.

I have a 120GB SSD for the OS, 480GB for VMs, a 300GB Raptor, 4x1TB on a 3ware 9650, and a 400GB HD. Will this suffice for 2 Server 2008 DCs and 4 Windows 7 clients for labbing?

evil_bunnY
Apr 2, 2003

Just the big SSD would suffice for that, if it's any good.

Christobevii3
Jul 3, 2006
SandForce 2 Mushkin. Yeah, I need to upgrade my CPU/mobo anyway since my motherboard is going out. The 400GB drive is choking at the moment.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Here's what I have running at work:



The parts were all bought about 2 years ago (not by me), and I put them all together when I started (about 6 months after that).

Server 1 - A bunch of different versions of IE, running on Windows 7/XP. We have two instances each of XP+IE6, XP+IE7, XP+IE8, W7+IE8, and W7+IE9. There's also a clone of our company website running Joomla on Linux, our bug tracker (Redmine), and m0n0wall. Lastly we have Windows 2003 Web Server Edition running, which basically has our development MySQL database (all our other servers are on Linux, so this makes no sense).

Server 2 - Mirror of our production MySQL database, and then a 'test' database where we dump copies of the live DB. Then we have a quasi-mirror of our production webserver, basically so we can run tests with real data. These all run Linux.

The servers were originally 12GB/8GB, but the database cloning runs so much faster with 24GB so now they are 24GB/12GB.

They're in 1U cases. Loud as all hell. We have the room; I would have used desktop cases if I did it again. Also, we can only fit 2 HDs in those things. The other thing I'm not real sure about is the LGA1366 CPUs and X58 boards. On one hand, that was the only way to get the fastest CPUs we could afford, but the boards are another $150 more than a regular board. Plus, they had 6 RAM slots instead of just 4. But there's no CPU upgrade path.

Also, I'd have gone with multiple 128GB or 256GB SSDs instead of the 1TB drives, whose performance basically dies if you get two VMs hitting the same disk for any amount of time. It's not SO bad in our use case, but we really pack the Windows VMs on there. Then again, a 256GB SSD at the time cost something like $800, and those 1TB drives were $60 (before the floods).

Originally they were ASUS motherboards, but those had Realtek NICs that didn't work with ESX. They also didn't have integrated video, so the only expansion slot I could use (1Us, remember) was taken by a graphics card. The Supermicro boards are great because they have dual Intel NICs.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Christobevii3 posted:

If I was going to keep running in VirtualBox, is there really a difference between the AMD FX-8150 and the Intel 2600K? I need to upgrade my desktop, and I have 16GB of DDR3 and would rather not build another machine right now.

I have a 120GB SSD for the OS, 480GB for VMs, a 300GB Raptor, 4x1TB on a 3ware 9650, and a 400GB HD. Will this suffice for 2 Server 2008 DCs and 4 Windows 7 clients for labbing?

The FX only has 4 FPUs, so half of those 8 cores aren't being fully utilized; I would go with an X6 or a 2600K. I can run that easily using under 10GB, assuming you aren't giving the DCs/clients 4GB of RAM each. Run them at 85% memory usage; if they have to swap to an SSD, oh well, it isn't that bad for a lab environment.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug


Plus 1 100PT NIC per server: http://www.ebay.com/itm/IBM-PRO-100...#ht_2543wt_1163

So this is going to be my VCAP-DCA lab and development/test lab

The NAS will be my desktop running VMs; I could push my luck and go with some Drobo box(es)... The HP MicroServer might do wonders, unless it is drive locked: http://www.newegg.com/Product/Produ...#scrollFullInfo

Suggestions?


Ahh poo poo I thought I hit edit

Dilbert As FUCK fucked around with this message at 23:06 on Jul 16, 2012

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

Corvettefisher posted:

So this is going to be my VCAP-DCA lab and development/test lab

Those 1U's are supah dupah loud but you probably already know that.
