Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

vty posted:

Not many, about 25.

At what numbers did you have issues? Speed?
The biggest issue we had with Veeam was that, as many other people will tell you, CBT inexplicably fails to work in a huge number of deployments, leading to multi-day backup windows. Sometimes it works fine and then randomly stops. The net result is that your incrementals take just as long as your fulls. Ick.

Veeam's actually a pretty neat product with a reasonable price level if you're able to get the support you need. (I've been keeping in touch with Michelle Randolph, one of their support managers, after she got word from support renewals that we went with a competitor's product. She's a total sweetheart and really does want to make things better. If you have issues with their support, try to get in touch with her directly.)

The real problem we had following our support issues was that trying to get any kind of real concurrency out of Veeam was an absolute maintenance nightmare. Veeam's architecture requires you to set up full-blown Windows Server VM instances in order to run the backup proxies, but the proxies can't dynamically take individual tasks off the queue -- they can only take complete backup jobs and cannot distribute VMs from those backup jobs. We weren't about to redesign our entire folder configuration in vCenter to accommodate Veeam's rear end-backwards way of parallelizing backups (we have ACLs tied in and everything), nor were we about to hardcode VM lists into our jobs and risk missing backups of new VMs. So, we switched to a competitor's product. PHD Virtual has a few similar limitations, but at least has a multithreaded architecture, so it requires 25% as much loving around. With 125 or so VMs being backed up, we haven't had to add a second virtual appliance yet.

It's not a bad product by any means, but it definitely has issues scaling up beyond mid-sized environments. Then there were the support issues I've posted about previously, which I forgive them for. (I'm still receiving prank calls from the "wazzaaaaaaaaaahhhh" guy to my office phone, and I'm pretty confident it's the Veeam tech who no longer works there as a result of my support case.)

PHD Virtual has its own issues -- single-file restores are really annoying, for instance -- but in terms of "is this poo poo going to not eat my backups and not waste hours of my backup engineer's time every week," PHD Virtual came out to be a clear winner.

Kachunkachunk posted:

Now, for this handful of select ESX/ESXi boxes, you probably want to turn off AD authentication and rely on local, too.
All our ESXi hosts are using local authentication. I have literally no conception of why Active Directory is a useful thing to have for ESXi when you should be doing 99.9% of your environment's management through vCenter. But part of that is my cynicism about how every single VMware feature will, through some freak incident, turn around and bite you in the rear end when you least expect it someday.

Vulture Culture fucked around with this message at 05:07 on Jun 15, 2012

Kachunkachunk
Jun 6, 2011
I think domain authentication of ESXi boxes is purely for tracking and ease of credentials management, really. If you fire a user, you don't have to change all the server passwords, for example.

Not saying I like AD authentication of ESXi, though.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Kachunkachunk posted:

I think domain authentication of ESXi boxes is purely for tracking and ease of credentials management, really. If you fire a user, you don't have to change all the server passwords, for example.

Not saying I like AD authentication of ESXi, though.

Can users be managed through vCenter Server (or whatever the gently caress it is)?

Kachunkachunk
Jun 6, 2011
I don't think so. It's pretty much all up to whatever the VC box is installed on and/or the domain the system was attached to. Then for individual ESXi boxes and their logins, it's either local or AD/LDAP.

Edit: To be more specific, VC doesn't appear to have a way of managing local users on an ESXi box. The vSphere Client, directly connected to the host, can. I suppose this was just never part of VC's design (or intent).

Erwin
Feb 17, 2006

Misogynist posted:

The biggest issue we had with Veeam was that, as many other people will tell you, CBT inexplicably fails to work in a huge number of deployments, leading to multi-day backup windows. Sometimes it works fine and then randomly stops. The net result is that your incrementals take just as long as your fulls. Ick.


They fixed the CBT issue a few patches ago, and I haven't had trouble with that since. Veeam has its issues, and I figure once we get to 75 VMs or so, I'll be looking elsewhere, but for my size, it's perfect, and it has some really cool features that I don't even need. The instant restore is pretty brilliant.

Nukelear v.2
Jun 25, 2004
My optional title text

Digital_Jesus posted:

Hyper-V just installs a barebones version of Server 2k8 R2 to run the hypervisor; it doesn't require AD at all. If you've got multiple Hyper-V hosts, though, this becomes a problem, since multi-host management is handled by System Center. I only tested Hyper-V with one physical host before deciding to go with VMware, so I didn't have to worry about managing multiple Hyper-V hosts.

That being said, I'd still keep at least one DC on each physical host and not put all your eggs in one basket with two virtual DCs running on the same Hyper-V host.

Yeah, I didn't mean Hyper-V itself requires it, just that you need it if you want to run in cluster mode, which any rational person should.

To sate my own curiosity I dug up the MS docs on setting up Hyper-V clustering, and here's what they say:

• Domain role: All servers in the cluster must be in the same Active Directory domain. As a best practice, all clustered servers should have the same domain role (either member server or domain controller). The recommended role is member server.
• Domain controller: We recommend that your clustered servers be member servers. If they are, you need an additional server that acts as the domain controller in the domain that contains your failover cluster.

somecallmetim
Mar 30, 2004

Anyone here heard of Dot Hill at all? I had a pitch from them and it doesn't sound too bad.
We don't have the biggest budget, so it's a nice fit.

Mausi
Apr 11, 2006

three posted:

I can't think of anything that VMware says not to virtualize.
Physical dependency cards, low latency systems, flaky RDMs, anything that licenses by MAC address, non-stop systems requiring more than 1 CPU. Of course most people don't have these issues.

vty posted:

Veeam rocks in my testing, by the way.
Veeam doesn't scale easily beyond a few hundred VMs, and has had quite a few 'How did that bug make it through to release?' moments which make me not trust it. PHD Virtual has rarely failed me, and scales up to four figures' worth of VMs on good hardware in my experience.

DJ Commie posted:

How useful is a HP Proliant DL585 G5 for VMs?
IIRC the G5s had the early versions of hardware virtualisation support for memory and CPU, so it should be pretty decent. It's pretty aged now, but you should be able to run quite a bit on it. I've still got Excel calc farms running on BL685 G1s, so there's no excuse.
Don't run it at home though, it'll gently caress your power bill.

Digital_Jesus posted:

I didn't see a clearly defined answer here, but maybe I missed it. My understanding of the VMware Essentials Plus package was a 192GB vRAM limit across 3 hosts and 6 processors, and you may only allocate a maximum of 32GB of vRAM *per instance*. Yes?

The new licensing is best thought of in 3 sections:
1st is the number of physical CPU sockets you're licensed for; you buy this number of licenses.
2nd, take the number of licenses you've bought and multiply it by the vRAM entitlement; this is the amount of allocated memory your powered-on VMs may have across the entire environment.
3rd, look at the type of license you've bought; that tells you the maximum size of a single VM.

So Essentials Plus was (last I checked) 6 CPU licenses for the 1st; for the 2nd, 6 x 32 = 192GB of powered-on vRAM added to your environment pool; and for the 3rd, a 96GB max VM size, which you'll never hit.
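If you want to sanity-check your own edition's numbers, the arithmetic is trivial - here's a quick Python sketch (the 6-socket/32GB figures are the Essentials Plus ones above; swap in your own edition's entitlement):

    # Back-of-the-envelope vSphere 5 vRAM pool calculator.
    # entitlement_gb is per CPU license and varies by edition.
    def vram_pool_gb(cpu_licenses: int, entitlement_gb: int) -> int:
        """Total vRAM (GB) your powered-on VMs may allocate, environment-wide."""
        return cpu_licenses * entitlement_gb

    print(vram_pool_gb(6, 32))  # 192 -> the 192GB Essentials Plus pool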


My new PODs land next week: 39x HP DL380 G8, octo-core, starting with 192GB, backing onto my new NetApp 6240s and VMAX. Time for a loving upgrade.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Mausi posted:

Physical dependency cards, low latency systems, flaky RDMs, anything that licenses by MAC address, non-stop systems requiring more than 1 CPU. Of course most people don't have these issues.

You can change the MAC in the OS to get around MAC address licensing, can't you? Of course if you're starting out virtual then just be sure that VM always has the same MAC.

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Mausi posted:

Physical dependency cards, low latency systems, flaky RDMs, anything that licenses by MAC address, non-stop systems requiring more than 1 CPU. Of course most people don't have these issues.

Any source for this? Not to say it's not true, but I haven't seen these listed publicly before.

Mausi
Apr 11, 2006

three posted:

Any source to this? Not to say it's not true, but I haven't seen these listed publicly before.
If you're talking about official documentation, then you're certainly right - they'll say you can virtualise anything. If you're talking about what VMware PSO get up to, then those are the items I can remember from the list of regular issues.


FISHMANPET posted:

You can change the MAC in the OS to get around MAC address licensing, can't you? Of course if you're starting out virtual then just be sure that VM always has the same MAC.
You can change it in the .vmx file as well. However, it becomes a problem when you inadvertently change the MAC and haven't saved the old one anywhere, which VMware newbies regularly do. This was/is often a problem for people new to virtualisation, especially when P2Ving from an old environment.
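For reference, the .vmx entries in question look something like this (a sketch from memory; the NIC index and MAC here are made-up examples, and I believe the checkMACAddress override is what lets you use addresses outside VMware's reserved 00:50:56 static range):

    ethernet0.addressType = "static"
    ethernet0.address = "00:50:56:00:00:42"
    ethernet0.checkMACAddress = "false"

Write down the old value before you touch any of this.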

sanchez
Feb 26, 2003

Mausi posted:

Veeam doesn't scale easily beyond a few hundred VMs, and has had quite a few 'How did that bug make it through to release?' moments which make me not trust it. PHD Virtual has rarely failed me, and scales up to four figures' worth of VMs on good hardware in my experience.

We used it in places with only 10-20 VMs (perhaps 2TB total) and it was still quirky. It'd run fine for weeks and then poo poo the bed, requiring new full backups for no apparent reason. Support always sounded hungover.

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

Mausi posted:

You can change it in the .vmx file as well. However, it becomes a problem when you inadvertently change the MAC and haven't saved the old one anywhere, which VMware newbies regularly do. This was/is often a problem for people new to virtualisation, especially when P2Ving from an old environment.

I remember you can only set certain ranges in the GUI; does editing the .vmx let you use any MAC?

Moey
Oct 22, 2010

I LIKE TO MOVE IT

sanchez posted:

We used it in places with only 10-20 VMs (perhaps 2TB total) and it was still quirky. It'd run fine for weeks and then poo poo the bed, requiring new full backups for no apparent reason. Support always sounded hungover.

I am currently using it with about 50 VMs. Most of them are small, but we have a few 1.5 TB VMs. We hit a few snags where, after doing reverse incrementals for a few weeks, one of the large ones would fail CBT and then do a full, which hosed me because a full backup on a disk that large takes like 12 hours.

After a lot of dealing with their support and fuckery on my own, apparently it's best practice to do "synthetic fulls" at least once a month.

Tips from my failures:

- Do not make one big job to grab all your VMs
- Give each VM its own job and daisy-chain them to each other
- Stagger your full backups so they do not all hit at once

I still seem to get random issues where hot add will fail and it will flip to network mode. Since my proxy is a VM and is set up for 10Gb iSCSI and network, it makes no difference. If it were a different case, it would be miserable. A reboot of the Veeam proxy normally fixes it for the week, but random small things always seem to pop up.

Veeam support is tolerable at best. They seem more concerned with closing the ticket as fast as possible than with finding a proper solution.

For content, we are running v6.0 Patch 3. v6.1 just came out a few days ago; I'm probably going to wait a month on that update.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

three posted:

A virtualization company saying not to virtualize something is ironic.

I can't think of anything that VMware says not to virtualize.

Domain controllers are not recommended to be virtualized. I can't say I have had problems with 2008 and later, but 2k3 has problems.

Dilbert As FUCK
Sep 8, 2007

by Cowcaster
Pillbug

Mausi posted:

Physical dependency cards, low latency systems, flaky RDMs, anything that licenses by MAC address, non-stop systems requiring more than 1 CPU. Of course most people don't have these issues.


1. What cards? A bunch of v.56 manufacturers offer software counterparts to their modems for VM environments.
2. What protocol are you using? What is your SAN/NAS setup?
3. No problem with my current MSCS setup; what problems are you having?
4. You can enable forged transmits.
5. Yeah, it sucks that FT is limited to 1 vCPU, but any system that is clusterable should run on VMware and have HA support at the very least.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Corvettefisher posted:

Domain controllers are not recommended to be virtualized. I can't say I have had problems with 2008 and later, but 2k3 has problems.
Domain controllers are absolutely fine to virtualize; Microsoft just has a very specific list of things you should not do with a DC (reverting to a snapshot being a very important one). These are typically related to USN incrementing - roll a DC back to a snapshot and it re-issues USNs its replication partners think they've already seen, so changes quietly stop replicating - and they're fairly obvious once you stop and think about how AD replication actually works. Things were vastly different in the pre-VT days, where you were really likely to end up in clock skew hell if you weren't really, really careful about the hardware you virtualized on, because virtual handling of RTC interrupts was so bad.

Most of these issues are going away with Server 8 anyway. MS is even supporting snapshot rollback.

Vulture Culture fucked around with this message at 13:40 on Jun 17, 2012

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole

Corvettefisher posted:

Domain controllers are not recommended to be virtualized. I can't say I have had problems with 2008 and later, but 2k3 has problems.

VMware does not say domain controllers should not be virtualized. Looks like Misogynist already cleared that up, though.

Mausi
Apr 11, 2006

Corvettefisher posted:

1. What cards? A bunch of v.56 manufacturers offer software counterparts to their modems for VM environments.
2. What protocol are you using? What is your SAN/NAS setup?
3. No problem with my current MSCS setup; what problems are you having?
4. You can enable forged transmits.
5. Yeah, it sucks that FT is limited to 1 vCPU, but any system that is clusterable should run on VMware and have HA support at the very least.
1. New cards, perhaps even recently manufactured ones - but there are plenty of devices that run in servers that can't be retired in the enterprise space. A great example is the 386 running the legacy voice mail system in my office; the only manufacturer of its parts that still exists is Intel. Of course, if the card were software-emulatable, then it wouldn't be a physical dependency, would it?
2. When I talk about low latency, I'm referencing production high-speed trading systems for an investment bank. The guys cut code on my regular VMware stack but only test and deploy on physical, to keep the stack as short as possible.
3. There are flaws in the 4.0 release where a dead RDM will cause a VM to not boot while the hypervisor continually polls the paths waiting for it to come back, without an error anywhere outside of the messages log. The 4.0 release was a rewrite of the FC stack and it was a little buggy, and I can't upgrade that cluster just yet because of other dependencies.
4. Yeah, there are lots of ways around it these days; it's an old-school example.
5. What, like Oracle RAC? That's only very recently supported, and I'm struggling to think of many examples where I'd want non-stop operation and only 1 CPU would be OK - maybe an IP load balancer.

They might be edge cases, but that's kind of the point - I don't trust a salesman who tells me that their platform does everything, because they're going to sell me up the river.

Corvettefisher posted:

Domain controllers are not recommended to be virtualized. I can't say I have had problems with 2008 and later, but 2k3 has problems.
Misogynist is bang on - Win2k3 DCs have been on the white list for a few years now, much like the MAC issues are, as you correctly pointed out.

Mausi fucked around with this message at 19:34 on Jun 17, 2012

luminalflux
May 27, 2005



What magic rubber chicken do I need to wave to get vCenter to realize that yes, I do have redundant paths to my storage? I have 2 LeftHand boxes, each with 2 NICs connected to 2 different switches. Each ESXi host has 2 interfaces in the iSCSI vSwitch, each going to a different switch. The iSCSI vSwitch was created according to HP's best practices doc: 2 vmknics, with one interface active and the other unused (shifted so vmk1 has nic2 active, nic3 unused & vmk2 has nic3 active, nic2 unused).

Yet when I view the storage views tab, it says "Partial/No Redundancy". :wtc:

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer
Is it ESXi 4.x or 5.0? The process is different, but at least similar.

http://virtualgeek.typepad.com/virtual_geek/2009/01/a-multivendor-post-to-help-our-mutual-iscsi-customers-using-vmware.html
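If it's 4.x, the step people usually miss (it's covered in that post) is binding both iSCSI vmknics to the software initiator from the CLI; the vSwitch active/unused setup alone isn't enough. Roughly, assuming your software iSCSI adapter is vmhba33 (check what yours actually is first):

    esxcli swiscsi nic add -n vmk1 -d vmhba33
    esxcli swiscsi nic add -n vmk2 -d vmhba33
    esxcli swiscsi nic list -d vmhba33

Then rescan the adapter and you should see one path per vmknic.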

luminalflux
May 27, 2005



It's 4.1. Dunno if this should be in the storage thread, but when I inspect the datastore paths, I have 2 IQNs to the same IP address, which feels wrong.

KS
Jun 10, 2003
Outrageous Lumpwad

Corvettefisher posted:

Domain controllers are not recommended to be virtualized. I can't say I have had problems with 2008 and later, but 2k3 has problems.

I think this one gets so much traction years on because DCs aren't recommended to be P2Ved. Build a fresh virtual machine and dcpromo it rather than P2Ving an existing one.


Misogynist posted:

Veeam vs. PHD

We were in the midst of a Veeam trial, and based on your feedback here we gave PHD a try. You're right, it's absolutely better -- and it worked out to be about 60% of the cost, which was a nice bonus. We ended up buying enough to cover our entire production clusters. Their sales guy should send you a gift basket.

We are backing up 110 VMs covering about 9 TB and the incrementals take <1 hour with a single 1-proc VBA. The initial full took about a day once we scaled the VBA up to 6 CPU/24GB.

KS fucked around with this message at 15:03 on Jun 18, 2012

three
Aug 9, 2007

i fantasize about ndamukong suh licking my doodoo hole
The only weirdness I've seen demoing PHD so far is that retention policies are per appliance instead of per job, and recovering Linux guest files requires additional external programs. It's pretty cool, though.

Anything interesting you guys noticed like that?

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

Looking for some advice on virtualizing a few servers at a small business I work for.

Currently we have the following physical servers:
Linux box running SVN + Jenkins build server
Windows box running as a Jenkins build node/slave
Linux box running a test server for webstore software
Windows box that is the IT guy's workstation
Netgear ReadyNAS with 2x1TB drives, in RAID 1 I think
PBX box with 4 lines (I don't think we plan on trying to virtualize this)
We will also be adding a SQL server which will hold a few (<10) GB of data

All these are for internal use, not hosting any public websites or other services. They are pretty much all running P4 CPUs.

We have been considering virtualizing with the vSphere Hypervisor (ESXi). I really don't know anything about server hardware, but I'm trying to make a good decision on what hardware to run the host on.

A coworker has one of these that he is asking $250 for:
Dell PowerEdge 1800 tower
1x Xeon 3GHz CPU (slot available for a 2nd CPU)
4GB DDR2 RAM (room for up to 12GB)
2x 300GB 10K SCSI Maxtor HDDs (I think)
2x power supplies

As far as I can tell this is an ~8-year-old computer, barely faster than the P4s most of the other crap is running on, and that $250 is not really any kind of deal on it. I guess redundant power supplies sound nice, but that's about the only thing I see that suggests it's better than a cheap modern desktop. I'm thinking we could find something much more capable for maybe just a few more bucks?

I want to put at least the build server stuff on SSD with some redundancy. The PowerEdge has SATA ports on the mobo, but I don't think they are hardware RAID, and I've been told that software RAID is not possible with ESXi. So it would need a PCIe card to accomplish it, I guess.

The official ESXi hardware compatibility list only goes down to the PowerEdge 1850; 1800s aren't listed, though I saw on the community list that someone has done it.

Can anyone help with this plan?

peepsalot fucked around with this message at 05:46 on Jun 19, 2012

Wibla
Feb 16, 2011

Buying that Xeon box would be going down a dead-end street; I really recommend going for something newer.

The mere fact that it uses SCSI drives should be a huge alarm sign. Have you even considered how much of a pain in the rear end it is to find replacements nowadays? Ugh.

I'd consider a Supermicro rackmount with an Intel Xeon E3 or similar, 16-32GB ECC RAM, and a few hard drives on an HBA hooked up to a virtual storage appliance, and take it from there. Cost? Probably a bit more than $250...

Or you can probably find a used server with dual PSUs etc. for not much more than $250 that is a lot newer than that PE1800 :v:

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

Wibla posted:

Buying that Xeon box would be going down a dead-end street; I really recommend going for something newer.

The mere fact that it uses SCSI drives should be a huge alarm sign. Have you even considered how much of a pain in the rear end it is to find replacements nowadays? Ugh.

I'd consider a Supermicro rackmount with an Intel Xeon E3 or similar, 16-32GB ECC RAM, and a few hard drives on an HBA hooked up to a virtual storage appliance, and take it from there. Cost? Probably a bit more than $250...

Or you can probably find a used server with dual PSUs etc. for not much more than $250 that is a lot newer than that PE1800 :v:
Yeah, I was already leaning away from getting that server unless someone here gave me a good reason to go for it. It's just that it was the only thing I had looked at so far, because the guy brought it in and it was there.

I looked into the Supermicro servers (the 1017C and 5017C series that use the Xeon E3), and they look pretty reasonably priced. But they don't show up as compatible on the VMware site. Supermicro says the servers support RAID, but is that software only?

I looked briefly at the virtual storage appliance, and it seems overkill for what we're doing. The overview video talks about redundancy across a bunch of servers, when I'm only planning on a single one. Wouldn't it make more sense to find a low-cost server with compatible hardware RAID, or an added PCIe RAID card, and put a couple of drives in there?

I'm going to talk to the boss tomorrow about what kind of budget he's willing to give for this. Money is a bit tight; I expect to be able to do ~$1k, but not much more than that.

Wibla
Feb 16, 2011

peepsalot posted:

I looked briefly at the virtual storage appliance, and it seems overkill for what we're doing. The overview video talks about redundancy across a bunch of servers, when I'm only planning on a single one. Wouldn't it make more sense to find a low-cost server with compatible hardware RAID, or an added PCIe RAID card, and put a couple of drives in there?

Both those options would work well.

I'd try to find a server with a hardware RAID controller that is supported by VMware, so you can get management info etc. in vSphere.

3ware 9750s are supported, and not very spendy ($340) in the 4-port variant. There are other options as well; that's just the first one that comes to mind for me, as we've used it with some success. I wouldn't bank on spending less than $300 on a dedicated RAID controller with RAID5/6 support though, and if you want more ports, it gets expensive fast.

The Supermicro RAID stuff seems to be BIOS/fakeraid; they mention RAID5 only being supported in Windows, etc. A low-cost option if you only need RAID10 is to get an IBM M1015 card off eBay; they shouldn't run you more than $60-80 and are supported in ESXi afaik.

skipdogg
Nov 29, 2004
Resident SRT-4 Expert

That server isn't worth the electricity it takes to run, much less 250 bucks. You can find newer models of servers on eBay for that price. If you want a cheap server that will run ESXi just fine, try to find an off-lease HP DL380 Gen5. Solid little boxes, and there should be plenty of off-lease models being listed. I just checked eBay and saw serviceable units from $199 to $999 depending on how many drives and how much RAM you want.

As with anything IT, what's your budget?

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

skipdogg posted:

That server isn't worth the electricity it takes to run, much less 250 bucks. You can find newer models of servers on eBay for that price. If you want a cheap server that will run ESXi just fine, try to find an off-lease HP DL380 Gen5. Solid little boxes, and there should be plenty of off-lease models being listed. I just checked eBay and saw serviceable units from $199 to $999 depending on how many drives and how much RAM you want.

As with anything IT, what's your budget?

The boss is saying he would be OK with $1500-2000. Looking at those Supermicros last night, I think I came up to about $900 after CPU and 16GB RAM. But that doesn't cover storage at all.

I looked at those IBM M1015 RAID cards and I only see two ports. Don't you need at least 4 for RAID 10? I don't really understand how SAS/SATA works. Can you hook up regular SATA drives to it? Can one port connect multiple drives somehow? Or would I need to use multiple cards?

GMontag
Dec 20, 2011

peepsalot posted:

The boss is saying he would be OK with $1500-2000. Looking at those Supermicros last night, I think I came up to about $900 after CPU and 16GB RAM. But that doesn't cover storage at all.

I looked at those IBM M1015 RAID cards and I only see two ports. Don't you need at least 4 for RAID 10? I don't really understand how SAS/SATA works. Can you hook up regular SATA drives to it? Can one port connect multiple drives somehow? Or would I need to use multiple cards?

One SFF-8087 port = 4 SAS or SATA ports. You have to buy a breakout cable if your case doesn't have a backplane that accepts 8087 connectors.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

peepsalot posted:

The boss is saying he would be OK with $1500-2000. Looking at those Supermicros last night, I think I came up to about $900 after CPU and 16GB RAM. But that doesn't cover storage at all.

I looked at those IBM M1015 RAID cards and I only see two ports. Don't you need at least 4 for RAID 10? I don't really understand how SAS/SATA works. Can you hook up regular SATA drives to it? Can one port connect multiple drives somehow? Or would I need to use multiple cards?

PS, if you need hardware RAID, make sure you get the M1015 with the "key".

I use one in my home server flashed to IT mode (ZFS :spergin:) and it's solid as gently caress. As mentioned above, you can get a SAS to 4xSATA cable.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

peepsalot posted:

Looking for some advice on virtualizing a few servers at a small business I work for.

gently caress that old server, seriously.

How much HD space do you need?

Maybe go for a used T410 (or, if you have to put it in a rack, an R710) and fill it with big, cheap drives. Dual Xeons and 32GB would probably be overkill, but it's cheap.

If your drive space requirements are fairly low, you could get a pair of 256GB SSDs to run the VMs on and some 2TB hard drives for storing other poo poo. Just throw that in a Supermicro case and get some Intel NICs (or a motherboard that has them already).

vty
Nov 8, 2007

oh dott, oh dott!
Do any of you guys have a useful contact on the Insight VMware team, or at Ingram Micro? I need to deal with one of these "aggregators" (VMware lingo) to get my VSPP partnership going. I've given Insight's vmwareteam@ about a week now to get back to me with information on their pricing/offering, and I've just gotten bounced around a few times.

Considering they were the first VSPP provider ever, I'm a bit confused, but I've had problems with Insight sales people before.

vty fucked around with this message at 20:51 on Jun 19, 2012

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

Thanks for the comments so far everyone, I really appreciate the help.

LmaoTheKid posted:

PS, if you need hardware RAID, make sure you get the M1015 with the "key".

I use one in my home server flashed to IT mode (ZFS :spergin:) and it's solid as gently caress. As mentioned above, you can get a SAS to 4xSATA cable.

From what I read, it looks like the key is needed for RAID 5 or 50, but not if I'm going to do RAID 10?

ebay posted:

You have to specify in your notes how you want to get the controller:
- cross-flashed with the latest available LSI "IT" firmware (for LSI 9210-8i) - IMHO best for JBODs and software RAIDs (ZFS, Unraid, Flexraid,...)
- cross-flashed with the latest available LSI "IR" firmware (for LSI 9210-8i) - IMHO best for SSD drives in RAID 0 or 1
- cross-flashed with the latest available LSI 9240-8i firmware - you know what you are doing.
I don't know what I'm doing :ohdear:. Would I have to get the 9240 firmware for RAID 10? Goddammit, this poo poo is confusing.

I also looked briefly at the 3ware 9750, but I don't see 3ware even listed as a brand on the VMware compatibility site.

Bob Morales posted:

How much HD space do you need?
We probably won't use more than 1TB total.

...
So I've been slowly typing this reply over the past few hours while looking at different options on the web, asking quick questions on IRC, and deliberating over crap that I barely understand. But someone just brought up a good point: keeping my data safe is what backups are for, and RAID redundancy is more for uptime. Since I don't really consider uptime super crucial in this case, I think maybe I'm going about this all wrong.
I'm considering changing the plan to just a plain single 256-512GB SSD, plus maybe a single 1-2TB mechanical HDD.

And then some external (offsite?) backup situation.
Does this sound reasonable?

So if I end up not doing any RAID setup at all, do I still need something like an M1015, since VMware is maybe not compatible with cheapo Supermicro boards (or does that only matter with RAID)?

peepsalot fucked around with this message at 22:56 on Jun 19, 2012

Wibla
Feb 16, 2011

peepsalot posted:


So if I end up not doing any RAID setup at all, do I still need something like an M1015, since VMware is maybe not compatible with cheapo Supermicro boards (or does that only matter with RAID)?

The onboard controller is a standard Intel job; it'll work just fine for your needs.

3ware has been bought by LSI; the 9750 is basically a rebranded LSI SAS2108-based controller with 3ware management stuff (3DM2, etc.).

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

peepsalot posted:

I'm considering changing the plan to just a plain single 256-512GB SSD, plus maybe a single 1-2TB mechanical HDD.

That's along the lines of what I was suggesting. If you got a cheap server you could have a RAID 10/5/6 of, say, 6 or 8 drives, and you could rebuild if you lost one but stay running.

But that's still a server + 8 drives. We have some machines with 16+ VMs on them, and they're just spread across 2 regular HDs. Works fine, but it can stink when 3-4 of them get busy with disk activity. With an SSD and another HD or two for big storage, you'd probably be fine. You just have to back up the VMs.

You could even slap it all in an i7 desktop. Heck, an i5 or Phenom X6 would even be alright. If things get slow, you can just add another drive and move a busy VM to it. We're stuck because they bought 1U servers before I came around and they only hold 2 drives each.

Crossbar
Jun 16, 2002
Chronic Lurker
I've read that Windows Server 2008 R2 Standard comes with a 'free' licence for a virtual install of Server 2008? Is this true? If it is, what licence key do I use?

Mierdaan
Sep 14, 2004

Pillbug

Crossbar posted:

I've read that Windows Server 2008 R2 Standard comes with a 'free' licence for a virtual install of Server 2008? Is this true? If it is, what licence key do I use?

It does, but the licensing terms basically require that the hypervisor instance of Server 2008 is used only to run the virtual instance; you can't set up that bare-metal instance of Server 2008 to do anything other than run the Hyper-V role.

You can just use the same license key, afaik.

edit: source

Microsoft posted:

Windows Server 2008 R2 Standard

Each software license allows you to run one instance of the server software in an OSE on one server. If the instance you run is in a virtual OSE, you can also run an instance in the physical OSE solely to run hardware virtualization software, provide hardware virtualization services, or run software to manage and service OSEs on the licensed server. We refer to this in shorthand as 1+1.

Mierdaan fucked around with this message at 17:53 on Jun 20, 2012

Crossbar
Jun 16, 2002
Chronic Lurker
Thanks for the clarification!
