Wibla
Feb 16, 2011

This has instructions on getting the 9650 to work with 5.0 U1. It takes a little work, but it's worth it.

I wouldn't recommend running RAID10 or RAID5 with hotspares; the 9650 supports RAID6, and with write cache enabled (and a BBU or UPS, preferably) it performs pretty well for being an "old" RAID controller.

I would try to get more than 6GB of RAM though, if at all possible.
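
Enabling the write cache on the 9650 is just a couple of tw_cli commands, by the way. Rough sketch from memory - the controller/unit IDs (c0/u0) are placeholders, so check what your box actually reports first:

  tw_cli /c0 show               # list units, drives and BBU status
  tw_cli /c0/u0 set cache=on    # enable the write cache on unit 0
  tw_cli /c0/bbu show all       # confirm the BBU is present and healthy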

Wibla
Feb 16, 2011

You could lease a server with ESXi?

Wibla
Feb 16, 2011

Buying that Xeon box would be going down a dead-end street; I really recommend going for something newer.

The mere fact that it uses SCSI drives should be a huge alarm sign. Have you even considered how much of a pain in the rear end it is to find replacements nowadays? Ugh.

I'd consider a Supermicro rackmount with an Intel Xeon E3 or similar, 16-32GB of ECC RAM, and a few hard drives on an HBA hooked up to a virtual storage appliance, and take it from there. Cost? Probably a bit more than $250...

Or you can probably find a used server with dual PSUs etc. for not much more than $250 that is a lot newer than that PE1800 :v:

Wibla
Feb 16, 2011

peepsalot posted:

I looked briefly at the virtual storage appliance, and it seems overkill to me for what we're doing. The overview video is talking about redundancy across a bunch of servers, when I'm only planning on a single one. Wouldn't it make more sense to find a low-cost server with a compatible hardware RAID controller, or an added PCIe RAID card, and put a couple of drives in there?

Both those options would work well.

I'd try to find a server with a hardware RAID controller that is supported by VMware, so you can get management info etc. in vSphere.

3ware 9750s are supported, and not very spendy ($340) in the 4-port variant. There are other options as well; that's just the first one that comes to mind for me, as we've used it with some success. I wouldn't bank on spending less than $300 on a dedicated RAID controller with RAID5/6 support though, and if you want more ports, it gets expensive fast.

The Supermicro RAID stuff seems to be BIOS/fakeraid; they mention RAID5 only being supported in Windows, etc. A low-cost option if you only need RAID10 is to get an IBM M1015 card off eBay; they shouldn't run you more than $60-80 and are supported in ESXi, afaik.
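
Getting the management/health info to actually show up in vSphere usually means installing the vendor's CIM provider VIB on the host. Rough sketch - the bundle file name here is just an example, grab whatever LSI/3ware ships for your controller:

  esxcli software vib install -d /vmfs/volumes/datastore1/vmware-esx-provider-lsiprovider.zip
  # reboot the host afterwards; the controller/array health should then show up
  # in the host's hardware status/health view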

Wibla
Feb 16, 2011

peepsalot posted:


So if I end up not even doing any RAID setup, do I still need something like an M1015, since VMware might not be compatible with cheapo Supermicro boards (or does that only matter with RAID)?

The onboard controller is a standard Intel job; it'll work just fine for your needs.

3ware has been bought by LSI; the 9750 is basically a rebranded LSISAS2108-based controller with 3ware management stuff (3dm2 etc.).

Wibla
Feb 16, 2011

Virtualization isn't always the answer.

For this specific workload, going bare-metal might just be the right call.

To go even further afield, have you looked into AMD's offerings? They have some socket G34 CPUs with 16 cores. Though they are not as fast per core (and per MHz, arguably) as Intel, you can build a dual-socket, 32-core server without breaking the bank.

That depends on whether your application benefits from a larger number of slower cores, though...

Wibla
Feb 16, 2011

Corvettefisher posted:

They aren't even a week old which is the sad part

That's normal, actually. The bathtub curve for drive failures means a good fraction will die during the first few weeks.

Wibla
Feb 16, 2011

Regarding VSAN SAS controllers: the LSI 9211-8i is on the VSAN HCL. You can find it on eBay for decent prices listed as the IBM M1015, and then crossflash it to the 9211-8i IT firmware if you want to play with VSAN in a home lab.
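
The crossflash itself is only a handful of commands from a DOS or UEFI shell boot stick. Roughly like this, from memory - the firmware file names (2118it.bin, mptsas2.rom) and exact steps are assumptions on my part, so follow a current guide for your specific card revision:

  megarec -writesbr 0 sbrempty.bin            # wipe the IBM SBR
  megarec -cleanflash 0                       # erase the existing flash, then reboot the boot stick
  sas2flsh -o -f 2118it.bin -b mptsas2.rom    # flash the 9211-8i IT firmware + boot ROM
  sas2flsh -o -sasadd 500605bXXXXXXXXX        # re-enter the SAS address from the sticker on the card

You can skip the -b mptsas2.rom bit if you don't need to boot from the controller; POST gets a little quicker without it.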

Wibla
Feb 16, 2011

HPL posted:

Bumped the ESXi box up from 8GB to 16GB. Possibly the most painful RAM upgrade I've ever done. It's an older Dell Optiplex 780, and yes, it HAS to have DDR3 1333 DIMMs. No, it can't be the faster stuff that's widely available or it won't even POST.

Anyone tried VMWare Photon Linux yet? If so, how's that working out for you?

Yikes, what a pain in the rear end.

Wibla
Feb 16, 2011

HPL posted:

Two returns to the local computer store later, I gave up on trying current RAM and ended up eBaying slower RAM which thankfully worked. Originally, I was like: "Sweet, 4 DIMM slots, I'll fill 'er with 4 8GB DIMMs". Nope. Wouldn't POST. Dug around some, found out the old motherboard chipset wouldn't support 8GB DIMMs, but could possibly support 4GB DIMMs. Okay, back to the store, eat the restocking fee and I go home and try the 4GB-1666 DIMMs. Nope, still wouldn't POST. Returned that RAM, ate another restocking fee and then ordered some actual 1333 from a Chinese eBay seller. Just arrived yesterday and thank god it works. Gigantic pain in the kiester.

Should have just bought a TS140 in the first place.

And to think that I have bitched about getting the RAM slots right on a G7... I won't complain about that again, that's for sure, because it's a walk in the park compared to this poo poo, heh.

Wibla
Feb 16, 2011

evil_bunnY posted:

Samey. My boss loves to decom physical machines and then suddenly realize they ran more than he thought. I've never had that issue (also because I actually write docs, but you know...).

These are the best (worst) surprises, especially on a Friday afternoon.

Wibla
Feb 16, 2011

Please stop :negative:

Wibla
Feb 16, 2011

Short answer: get an Intel NIC; the Realtek NICs are not great.

Long answer: you can find howtos online for embedding Realtek NIC drivers into the install image, but really - just get an Intel NIC. Dual-port cards are nice.
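
If you really do want to go the custom-image route, the usual approach is VMware's Image Builder cmdlets in PowerCLI. A rough sketch - the depot/bundle file names, the profile names and the net55-r8168 package are examples, not something I've checked against whatever the current build is:

  Add-EsxSoftwareDepot .\ESXi-6.7.0-XXXXXXX-depot.zip
  Add-EsxSoftwareDepot .\net55-r8168-offline-bundle.zip
  New-EsxImageProfile -CloneProfile "ESXi-6.7.0-XXXXXXX-standard" -Name "ESXi-6.7-realtek" -Vendor "homelab"
  # community driver VIBs usually need the acceptance level dropped:
  Set-EsxImageProfile -ImageProfile "ESXi-6.7-realtek" -AcceptanceLevel CommunitySupported
  Add-EsxSoftwarePackage -ImageProfile "ESXi-6.7-realtek" -SoftwarePackage net55-r8168
  Export-EsxImageProfile -ImageProfile "ESXi-6.7-realtek" -ExportToIso -FilePath .\ESXi-6.7-realtek.iso

But again: a used Intel card saves you from redoing this dance every time you patch.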

Wibla
Feb 16, 2011

RDM supports SMART huh? I need to look into that. I'm moving shortly and would like to reduce the number of machines I have running, if possible.

My storage server is way overpowered, so I could probably virtualize all the things on it with 12 cores and 144GB of RAM, if RDM works reliably or I can reliably pass the LSI 9211 through to a VM. I could connect some SSDs to the onboard SATA controller too.
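
IIRC a physical-mode RDM is just a mapping file you create with vmkfstools, something like this from the ESXi shell (device ID, datastore and VM folder names are placeholders for whatever your host reports):

  ls /vmfs/devices/disks/                        # find the disk's device identifier
  vmkfstools -z /vmfs/devices/disks/t10.ATA_____YOURDISKID /vmfs/volumes/datastore1/storagevm/disk1-rdm.vmdk
  # -z = physical compatibility (SCSI commands like SMART pass through), -r = virtual compatibility

Then you attach the resulting vmdk to the VM as an existing disk. Passing the whole LSI 9211 through via DirectPath is probably the cleaner option if the IOMMU groups cooperate, though.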

Wibla
Feb 16, 2011

Do people not use 2FA?

Wibla
Feb 16, 2011

You have servers with less than 64GB of RAM? :psyduck:

Wibla
Feb 16, 2011

bobfather posted:

Now that 64GB of RAM costs more than all the other parts of the server put together, it's a tough sell.

Have you looked at used HP G7 or G8 servers? For home lab use they're just fine, but they are noisier than a NUC. G7s are really cheap now, and G8s are following suit.

Wibla
Feb 16, 2011

Docjowles posted:

It's all about the tradeoffs you want to make. I have nooooo interest in super loud and power hungry rackmount servers inside my home. I'd rather pay more up front to do it with NUCs or a tricked out desktop case or whatever. Or these days, some compute instances on the cloud provider of my choice that I power off when I'm not messing with it.

My G7 with 2x X5675, 120GB of RAM and 2 SSDs + 6 10k SAS drives uses around 155-160W on average, and it's not noisy as such - but it varies fan speeds continually due to how the sensors are set up, so it's not a box I'd want in a living space. I get where you're coming from though.

Wibla
Feb 16, 2011

Methanar posted:

What could you possibly need that at home for

Well, for what I actually use it for, I'd probably be fine with an i5 with 32GB ram, but it was given to me for free, so I might as well use it? :v:

Wibla
Feb 16, 2011

Vulture Culture posted:

My setup has a 4x4" footprint, is nearly silent, and consumes under 40 watts at full load, so congratulations on living and dying alone I guess

:cawg:

Wibla
Feb 16, 2011

evol262 posted:

Your cards are dead. Lots of SD cards (and SSDs) go to read-only once new blocks can't safely be written.

All the information you're talking about is only living in memory. It's probably desperately trying to log this fact and failing

Seems like it wouldn't be very hard to move to M.2 SSDs for this kind of poo poo.

Wibla
Feb 16, 2011

TheFace posted:

drat what type of super secret poo poo are you all working on?!

He could tell you, but then he would have to kill you.

:ninja:

Wibla
Feb 16, 2011

Sigh, I need a compact ESXi host with room for 4-6 3.5" drives and 16-ish GB of RAM, but the MicroServer Gen8 is out of production and the Gen10 is apparently garbage?

Wibla
Feb 16, 2011

CommieGIR posted:

Find a SuperMicro system, you can get a 1U with a dual Xeon or AMD Opteron for fairly cheap.

I run a Quad AMD Opteron system for my Virtual Lab running Xenserver and PfSense for virtual switch routing/vlans.

Yeah, I already have a colocated DL360 G7 with dual X5675 CPUs and 120GB of RAM running most of the stuff that requires heavy lifting, and I'm keeping that system as-is.
I'm looking for something smaller that can do double duty as VM host + firewall + storage appliance in my apartment, preferably while being quiet :v:


That Xeon config is expensive as all hell, but the chassis is interesting; one of these would probably do the job: https://www.supermicro.com/products/system/midtower/5029/SYS-5029S-TN2.cfm

Wibla
Feb 16, 2011

Methanar posted:

Why do you want a personal hardware lab in your living room

Maybe it's the only room he has? :v:

Wibla
Feb 16, 2011

Most Oracle products should probably be flagged as threatening processes.

Wibla
Feb 16, 2011

Mr. Crow posted:

In what world is systemd worse than anything Windows :psyduck:

This is a rabbit hole that will only end in tears and probations :haw:

(gently caress systemd, btw)

Wibla
Feb 16, 2011

wolrah posted:

The only position I take on systemd is that I'm glad to have some kind of modern init system as the de facto standard. I was fine with Upstart, and the little I've messed with launchd it seems reasonable as well.

You can be anti-systemd all you want, many of the complaints about it are at least based in reality, but anyone who suggests going back to a series of init scripts all executed sequentially and managing their own state however they feel like it needs to be laughed out of the room.

I was soured on it by some early implementations that were dogshit, and I am not at all happy with how systemd seems to be spreading like a virus into more aspects of the base OS. Logging is one example that comes to mind (because of corner cases that lead to weird failures and no logs whatsoever), and networking, as has been mentioned.

That ihatesystemd.com link that CommieGIR posted mentions some of the core things.

Then there's the minor detail that Poettering is an asinine piece of poo poo whose code shouldn't be let near production systems ever, but that's just my personal opinion on a person, so don't take that as gospel :v:

Not that SysV init was perfect, or even good - because it wasn't. But the behaviour was predictable, and it worked well enough.

Wibla
Feb 16, 2011

BlankSystemDaemon posted:

If it's objectively better, how can it get into a state when the system is unbootable without the configuration being touched by the user?

I've also been bitten by that poo poo in the early days; I went back to Debian (which hadn't adopted systemd as the default yet)...

BlankSystemDaemon posted:

There are a lot of bugs that're closed on the official bug tracker (and even more that were never filed as such, but have just been blogged about, because the people who blog about them know Lennart's attitude) without any resolution given, despite plenty of root-causing having been done by the bug reporter.

As a personal anecdote, the last time I was running a Linux distribution on my entire network, it managed to stop booting properly, and when I checked things with zfs-diff(8), there were no changes related to systemd or its configuration - nor had anything else changed, only a couple of files of user data.
That was pretty much the last straw for me.

Bolded bits very relevant.

Eletriarnation posted:

I appreciate the insight here. I guess some of it just opens up more questions though - Red Hat, being responsible for countless actual paid-support production systems, surely had good reasons for choosing systemd over... whatever they were using before, right? If launchd and SMF are better and older, why did systemd choose a different path for what it's doing and why did everyone (in Linux-land, at least) go along with that? I don't expect you to educate me on the history of all this, but it feels like there's more going on here than the Linux userbase not knowing a good init system from a bad one.

Lennart Poettering has been employed by Red Hat since 2008.

You do the math.

Wibla
Feb 16, 2011

Less Fat Luke posted:

This is a weird question that I'm having trouble getting an answer to on Google; I've used ESXi and KVM for PCI-E passthrough on my system that has IOMMU support and what not, but I'm trying to find out if PCI non-E cards are also capable of being passed through to a guest OS. I have a Windows 10 installation using an old 8-port coaxial cable input card (PCI, the OG kind) and would love to virtualize it.

You could get a PCIe-to-PCI adapter card and do it that way? :allears:

Wibla
Feb 16, 2011

Less Fat Luke posted:

LOL you're no help. I guess I'll make a bootable ESXi drive and just boot that host, and see if the PCI card shows up as available to passthrough.

Wow. You're welcome :v:
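
For what it's worth, once that test host is booted you can at least check from the shell whether ESXi even sees the card before messing with passthrough - rough sketch, assuming SSH/ESXi Shell is enabled:

  esxcli hardware pci list | less   # every PCI(e) device the host can see, with vendor/device info
  # grep for the card's vendor or chip name; whether passthrough is actually offered
  # depends on the IOMMU grouping and the PCIe-to-PCI bridge the card sits behind

Then check the host client under Manage -> Hardware -> PCI Devices to see if it's even toggleable for passthrough.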

Wibla
Feb 16, 2011

I just use a barebones pfsense for stuff like that.

Wibla
Feb 16, 2011

And all the logs look fine otherwise?

How old is the server? Is the BBU on the RAID card OK, if equipped?

Also, unrelated: I switched from ESXi to Proxmox for my small home setup; absolutely no regrets so far :sun:

Wibla
Feb 16, 2011

I installed Proxmox because I got pissed at the VMware bullshit, and I like it a lot more than I ever did ESXi post-5.5.

(I say that as one of the people who got bitten by the auto-boot bullshit in 6.5, and who has also had to deal with just loving getting the ISOs without jumping through a thousand hoops as a homelab user.)

Wibla
Feb 16, 2011

SlowBloke posted:

ESXi stopped being hobbyist-friendly at 5.5; everything since has been a race to gently caress up non-standard setups. With William Lam using NUCs as test beds, those are your best bet if you want to homelab; making your own server is currently an exercise in frustration, IMHO.

Or just install Proxmox.

Wibla
Feb 16, 2011

There's also Minisforum; they have some neat small-form-factor machines, like this: https://store.minisforum.com/products/minisforum-hm80?variant=39960884117665

Wibla
Feb 16, 2011

My DL360p Gen8 with dual E5-2650 v0 CPUs, 256GB of RAM, 1TB of NVMe, 2x240GB SSDs and 2x3000GB 10k RPM drives uses (on average) about 170W. The G7 it replaced used well over 250W with the same workload, but it had 2x240GB SSDs and 6x300GB SAS drives, along with some pretty thirsty X5675 CPUs.

Wibla
Feb 16, 2011

I'll just stick to proxmox for now :v:

Wibla
Feb 16, 2011

Proxmox is great; I have no qualms recommending it over rolling your own KVM stuff.

Wibla
Feb 16, 2011

SlowBloke posted:

It seems like Broadcom might be planning to purchase VMware for ~$60B. Given their previous software purchases, expect them to make it far worse than today support-wise (which isn't stellar as it is) and kill any community outreach.

Oh, great.
