Doh004
Apr 22, 2007

Mmmmm Donuts...
Found a PowerEdge T440 for sale that I think might be worth grabbing to centralize my old NAS solutions into one, but I was wondering if I need to swap out the "PERC H730P RAID Controller" for a different HBA? Reddit says the H730 doesn't work well with ZFS, and I'll primarily be running TrueNAS SCALE on the box. If so, what would be a good recommendation for running a bunch of SATA drives off of it? Appreciate any help :)


Hays Watkins
Aug 14, 2024

Doh004 posted:

Found a PowerEdge T440 for sale that I think might be worth grabbing to centralize my old NAS solutions into one, but I was wondering if I need to swap out the "PERC H730P RAID Controller" for a different HBA? Reddit says the H730 doesn't work well with ZFS, and I'll primarily be running TrueNAS SCALE on the box. If so, what would be a good recommendation for running a bunch of SATA drives off of it? Appreciate any help :)

You'll want to swap it out for an HBA (host bus adapter). If you keep the RAID controller, it will act as an intermediary between ZFS and the drives, which will cause you issues during setup and major issues if it ever dies. You can get an LSI 9300 card, which is a host bus adapter, for a reasonable price on eBay. Before you buy that PowerEdge, though, check that the PERC is a standard PCIe card and not a mezzanine card. Mezzanine card replacements may be much more expensive, or may not exist at all.
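
Once it's up and running TrueNAS, a quick sanity check that ZFS is really talking to raw disks (device names below are just examples) is whether the OS sees the drives' actual models and serials, and whether SMART works without any RAID passthrough flags:

code:
# Raw disks behind a proper HBA show their real model and serial numbers,
# not a single "PERC"-branded virtual disk.
lsblk -o NAME,SIZE,MODEL,SERIAL

# SMART should be readable directly. If you have to resort to
# "-d megaraid,N" to see anything, a RAID layer is still sitting
# between ZFS and the disk.
smartctl -i /dev/sda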

some kinda jackal
Feb 25, 2003

 
 
Can the H730P be flashed into IT/HBA mode?

Hays Watkins
Aug 14, 2024
Looks like it might support it natively: https://www.dell.com/community/en/conversations/rack-servers/operating-an-r630-as-a-jbod-using-an-h730-mini-raid/647f9863f4ccf8a8deb81089

Doh004
Apr 22, 2007

Mmmmm Donuts...
That would be great! And I think this is the device as I copy/pasta'd the name from the support tag specs: https://www.serversupply.com/CONTRO...gOe8V_eTlRLWNxs.

Sounds like I might be okay to take it as it is?

some kinda jackal
Feb 25, 2003

 
 
Yeah, but even if you can't use the PERC as expected, you should be able to slap the aforementioned LSI HBA in a PCIe slot or something. If you feel you're getting the base hardware at a price that works for you then I'd certainly say yes.

movax
Aug 30, 2008

movax posted:



Pretty happy with this little guy -- just need to order two more fans (I can probably only get away with one) so I can cool the NIC properly. On the X11 (and I think X9 and X10), the BMC fan control zones are "CPU" (FAN1/2/3/4) and "Peripheral" (FANA). I will probably put it in "optimal", which will scale the CPU fan zone with load and run the Peripheral zone fixed at ~30% speed. That's an X710-T4L NIC -- some quick Googling did not show that it would report temperatures via lm-sensors, which is a bummer -- the on-board NIC reports correctly, so I wonder what's up there.

I did re-paste the CPU, figuring it was ancient... forgot to check thermals before doing it. CPU maxes at ~60 C under full load (Linux stress tool), 45 W Xeon with the BMC managing fans... if I override to full, that only gets me 3-4 C improvement with insane noise, so I'm not going to bother. Idles around 27 C.

The shorter rack-depth version would have been better (I would have been fine with 2 M.2 slots internally, but X11 boards are just one generation too old to really have that and I didn't want to buy newer) since I don't need the bays, but I think it will fit in my rack just fine.

So I have brainworms again and only have one real complaint about this box (and it's barely a complaint)... the on-board NICs are only 1 Gb and I can't add more to the system since it only has one PCIe slot. I think the IOMMU groups are such that I cannot split off one of the four ports on the X710-T4L NIC, which would be my ideal solution since I only need 3 for PCIe pass-through to my OPNsense VM. All the Proxmox VMs are stuck on 1Gb, which actually isn't a problem since it's Home Assistant and poo poo like that... but backups are slow. Does it matter if it takes longer overnight? No, but like I said, brainworms.

Started looking at AM5 boards with IPMI and most of them still only seem to have 1 Gb on-board NICs -- there was one mobo I found with on-board 10Gb + IPMI, the AM5D4ID-2T/BCM. Gigabyte makes one too, but it's been ages since I've touched their mobos. Ideally I want something I can throw ECC into and drop an undervolted Ryzen 3/5 into, bonus if it has no chipset / only a Knoll activator.

Or -- I guess I try to find another 1U box that does ACS/IOMMU in such a way that I can separate one of the four ports on the device and give it to Proxmox, which would widen my shopping options considerably on the used side, and try to find something cheap.

tl;dr -- I want something 1U that can host a PCIe card (4-port 10GBASE-T NIC), with IPMI and either an on-board 10G NIC or proper virtualization to support splitting up aforementioned NIC and I want it as low power as humanly possible (to the point that the NIC will be more than the system).

e: Please correct me if I'm wrong, but I believe the output below indicates that all of the NIC's ports share the same IOMMU group, while the two onboard NICs, of course, don't. I swear Intel did some weird poo poo with the PEG port / PCIe root complex in the Skylake era, because even ASPM and other things behave differently on CPU lanes vs. PCH lanes. I would have hoped Supermicro did this stuff "right", but this one might be on Intel because the x16 slot is definitely tied to the PEG port. Wonder if there is a BIOS hack / setting I can gently caress around with to fix this.

code:
root@myrkr:~# find /sys/kernel/iommu_groups/ -type l
...
/sys/kernel/iommu_groups/13/devices/0000:05:00.1
...
/sys/kernel/iommu_groups/12/devices/0000:05:00.0
/sys/kernel/iommu_groups/2/devices/0000:02:00.2
/sys/kernel/iommu_groups/2/devices/0000:02:00.0
/sys/kernel/iommu_groups/2/devices/0000:02:00.3
/sys/kernel/iommu_groups/2/devices/0000:02:00.1
...
root@myrkr:~# lspci -nn | grep Ethernet
02:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GBASE-T [8086:15ff] (rev 02)
02:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GBASE-T [8086:15ff] (rev 02)
02:00.2 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GBASE-T [8086:15ff] (rev 02)
02:00.3 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GBASE-T [8086:15ff] (rev 02)
05:00.0 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
05:00.1 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
e2: Better command

code:
IOMMU group 2 00:01.0 PCI bridge [0604]: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 0a)
IOMMU group 2 00:01.1 PCI bridge [0604]: Intel Corporation Xeon E3-1200 v5/E3-1500 v5/6th Gen Core Processor PCIe Controller (x8) [8086:1905] (rev 0a)

root@myrkr:~# for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done | grep Ethernet
IOMMU group 12 05:00.0 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
IOMMU group 13 05:00.1 Ethernet controller [0200]: Intel Corporation I350 Gigabit Network Connection [8086:1521] (rev 01)
IOMMU group 2 02:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GBASE-T [8086:15ff] (rev 02)
IOMMU group 2 02:00.1 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GBASE-T [8086:15ff] (rev 02)
IOMMU group 2 02:00.2 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GBASE-T [8086:15ff] (rev 02)
IOMMU group 2 02:00.3 Ethernet controller [0200]: Intel Corporation Ethernet Controller X710 for 10GBASE-T [8086:15ff] (rev 02)
That's gotta be it, the PEG port is just... sigh. Any idea if AMD RCs are any better in this regard?
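
One thing I may poke at before throwing hardware at it is the ACS override patch that the Proxmox kernel already carries -- it forcibly splits devices behind that root port into their own IOMMU groups, at the cost of pretending they're isolated when the silicon makes no such promise. Rough sketch, assuming a stock Proxmox install booting via GRUB:

code:
# /etc/default/grub -- add the override alongside the usual IOMMU flags
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt pcie_acs_override=downstream,multifunction"

# apply, reboot, then re-run the group one-liner above to see whether
# the X710 functions landed in separate groups
update-grub
reboot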

movax fucked around with this message at 05:05 on Jul 8, 2025

Shalhavet
Dec 10, 2010

This post is terrible
Doctor Rope
Can you not set up a virtual switch and route VM traffic back out through opnsense using that last 10g port?

FAT32 SHAMER
Aug 16, 2012



Shalhavet posted:

Can you not set up a virtual switch and route VM traffic back out through opnsense using that last 10g port?

That's what I'm doing rn and it works gr8, other than the Proxmox host not being able to use that same interface for DNS. I added the four 1s (1.1.1.1) and DNS worked again after it failed against the gateway.

movax
Aug 30, 2008

I might just be a dumb dumb then -- so set up the 4th as an OPT1/OPT2 interface in OPNsense, give a virtual NIC to OPNsense as well, bridge VMs + Proxmox to it? It will still end up on a different subnet, no?

FAT32 SHAMER posted:

That's what I'm doing rn and it works gr8, other than the Proxmox host not being able to use that same interface for DNS. I added the four 1s (1.1.1.1) and DNS worked again after it failed against the gateway.

My DNS is NextDNS via forwarder on OPNsense, so I think I'd avoid this?

Shalhavet
Dec 10, 2010

This post is terrible
Doctor Rope
If everything is virtualized in proxmox with the nic passed through to an opnsense VM, then yeah you'll need to add a virtual nic connected to the virtual switch that all the other VMs are using. You should be able to set a second IP pool and then just use firewall rules to block/allow as needed. OPNsense is a router, so everything will use the 10g interfaces for LAN, and then you have a spare 10g port to do whatever. You could use a LAN bridge but that might murder your CPU? not sure

Shalhavet fucked around with this message at 05:48 on Jul 8, 2025

movax
Aug 30, 2008

Shalhavet posted:

If everything is virtualized in proxmox with the nic passed through to an opnsense VM, then yeah you'll need to add a virtual nic connected to the virtual switch that all the other VMs are using. You should be able to set a second IP pool and then just use firewall rules to block/allow as needed. OPNsense is a router, so everything will use the 10g interfaces for LAN, and then you have a spare 10g port to do whatever.

Well, this is the $0 solution to my problem / woes then. It feels like the power consumption of a modern Zen 4/Zen 5 part, even after undervolting and such (if BIOS even allows it on a server board) won't come down enough compared to a 1585L (BGA Xeon, 45W TDP) to be worth it, right? That CPU will keep up with SW switching / routing 10 Gbps without issue I'm pretty sure.

e: Wait, I re-read your message. That's the difference, I think I set up a LAN Bridge and failed at it in the past.

Thinking out loud then, I should:

1. Give an extra virtual NIC to OPNsense VM.
2. Create another vmswitch/bridge on Proxmox (vmbr1)
3. Connect VMs / Proxmox to vmbr1.
4. Set up OPNsense to assign 172.16.69.1/24 to that network. Configure the firewall as needed to enable it + make sure each VM has the right subnet mask (I suspect weird poo poo will happen w/ Home Assistant and Homebridge wrt mDNS).

No need to connect that 4th port physically, everything will just use the LAN port on that NIC as desired? I was thinking I'd have to hook up the 4th one, but I see I don't have to now.
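
Roughly what I'm picturing for steps 1-3, with the VM ID and addresses made up:

code:
# /etc/network/interfaces on the Proxmox host -- an internal-only bridge
# with no physical port; the host itself gets an address on the new subnet
auto vmbr1
iface vmbr1 inet static
        address 172.16.69.2/24
        bridge-ports none
        bridge-stp off
        bridge-fd 0

# reload networking, then hand the OPNsense VM (ID 100 here, made up)
# a second virtual NIC on that bridge; it becomes OPT1 inside OPNsense
ifreload -a
qm set 100 --net1 virtio,bridge=vmbr1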

movax fucked around with this message at 05:48 on Jul 8, 2025

Shalhavet
Dec 10, 2010

This post is terrible
Doctor Rope
Adding the os-mdns-repeater plugin on OPNsense is probably the only change you'll need to make there.

I'd create the bridge first but otherwise that looks right. Probably worth keeping the 1g port for management/console access to proxmox.

Doh004
Apr 22, 2007

Mmmmm Donuts...

some kinda jackal posted:

Yeah, but even if you can't use the PERC as expected, you should be able to slap the aforementioned LSI HBA in a PCIe slot or something. If you feel you're getting the base hardware at a price that works for you then I'd certainly say yes.

Cool, thank you for the help (and everyone else too).

I'll see if I can get the price down. It's not the most kitted-out machine (only one Silver Xeon, 32GB of RAM and a bunch of 480GB SSDs) but it could be a pretty extensible solution for me moving forward.

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.
I'm planning to upgrade my home network, and wanted opinions on good ways to set things up with a small 3 node proxmox cluster in mind.

Right now I have 3 proxmox nodes all with their own single 1GbE NIC connection.
My upgrade will have each proxmox node connected to their own switch with 2x 2.5GbE, and the switches themselves connected to each other with 10Gb SFP+.

Right now my switches and Proxmox nodes are on the default VLAN 1, it's configured as VLAN aware, and any VMs I set up use that network connection with whatever VLAN tag I want them to be on.

I think that to be able to use the second NIC for VMs and leave the first NIC solely for Proxmox, I need to move the Proxmox nodes from the default network to their own dedicated VLAN, and have the second NIC be the default VLAN with VLAN aware ticked. Or should the second NIC be separate too, still VLAN aware, and I avoid using the default VLAN for anything but the router/APs/switches?
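
For concreteness, the first option would look roughly like this per node (interface names, VLAN IDs and addresses are all made up): Proxmox management on its own VLAN over the first NIC, and the second NIC as a VLAN-aware bridge that each VM tags as needed.

code:
# /etc/network/interfaces (sketch) -- NIC 1 carries management on VLAN 10 only
auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 10

auto vmbr0.10
iface vmbr0.10 inet static
        address 192.168.10.11/24
        gateway 192.168.10.1

# NIC 2: VLAN-aware bridge for guests; each VM's vNIC carries its own tag
auto vmbr1
iface vmbr1 inet manual
        bridge-ports eno2
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094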

Bjork Bjowlob
Feb 23, 2006
yes that's very hot and i'll deal with it in the morning


Is there a strong reason to have each node connected to a dedicated switch which is then connected to a core switch? Are the nodes in different spots in your place?

some kinda jackal
Feb 25, 2003

 
 
I DIY-swapped the batteries in my APC's APCRBC117 pack and ended up saving like $400 off the OEM replacement.

The swap itself was incredibly easy, but the jumper wires are so tightly crammed in there that every time I tried to pack the batteries all in, one of the leads would pop off some random battery terminal.

Finally got everything buttoned up, measured the connector to a correct 129VDC and popped it in the APC. Battery fault.

Disconnected the battery connector and measured it again. 0VDC.

Lugged the 50lb pack out of the rack again, over to the bench and unscrewed the cover to see one of the jumpers had popped off again. Tightened the connector, buttoned everything back up, measured 129VDC.

Popped the battery back into the APC. Battery Fault. 0VDC at the connector.

Lugged it back out, opened it, saw a different lead had popped off. Finally did what I should have done in the first place and took pliers to ALL the connectors to tighten them so they actually grip the battery leads. I should have been more worried about leads popping off randomly, even with their plastic covers. You never know how something will go wrong and an arc starts a fire.


Some other odds and ends in my homelab of late..

Bought an HPE DL380 G8 to replace my R620. Just going to take all the reusable parts out of the Dell and throw them in the HP to use as a TrueNAS/SAN server. There's nothing technically wrong with the R620 and I'll probably keep it on ice, but I love the idea of a nice unified HPE setup to go with my G9 DL380 VMware compute server.

I found that HPE DL160/320 G5 rails (HP #451459-002) are a possibly cheap alternative to a basic shelf-style server rail for my AlphaServer DS10Ls. I know you can find generic L-style "shelf" rails on Amazon, but I managed to find a seller on eBay that was liquidating a bunch for cheap. I bet there are rails from the other big server manufacturers for their low-end servers that do the job too; this is just the first one I managed to find.

some kinda jackal fucked around with this message at 19:33 on Jul 8, 2025

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.

Bjork Bjowlob posted:

Is there a strong reason to have each node connected to a dedicated switch which is then connected to a core switch? Are the nodes in different spots in your place?

One node is in a different spot, so I was already looking at two switches as a starting point.

I wasn't going to have a core switch, just each of the three switches connected to the other two, with priorities set so Switch A < Switch B < Switch C. The only real benefit this gets me is that I can reboot the switches one at a time and always have a two-node quorum online and connected. It's not a big deal if having the three switches in a loop like that will cause problems despite RSTP/priorities; I can instead just simplify down to two switches.

Wibla
Feb 16, 2011

some kinda jackal posted:

gen8 and G5 stuff

G5 is e-waste at this point, do not buy. Do not take them for free. They are old, will fail, and use a ton of power for what little work they do.

Gen8 is almost e-waste, only worth considering if they have v2 CPUs and lots of ram. For basically free :v:

Gen9 and up is generally still fine, first gen scalable is a big step up from E5 v3/v4 though.

eschaton
Mar 7, 2007

the knowledge knower. a wisdom imparter. irritatingly self-assertive. odorous.
HPE also had a split between Gen9 and Gen10 in how some of the monitoring/management etc. tooling for Linux works -- they've stopped providing updates for the Gen9-and-earlier tools now, so you either need to run an older distribution on Gen9 or earlier hardware to get integration with the system management, or you need to do without it.

some kinda jackal
Feb 25, 2003

 
 
nonono the only G5 thing I mentioned was the rails. Because they fit my AlphaServer DS10L perfectly, and the rails for that stopped being available in the 2000s :)

The G8 I got for almost nothing, only because I love the way the HP front bezels look when covered. If the G8 really sucks as a truenas server I'll probably replace it with a G9 in a month or so and send this back to the waste bin.

The G8 will get TrueNAS and the G9 will run vsphere7 until the hardware physically fails.

Apparently the new TrueNAS VM thing is really nice so I may end up consolidating everything on the G9 anyway. I don't necessarily need it on a separate machine anyway, except it chugs RAM for ARC and it feels cleaner to have it off my compute node. Plus I need it to act as a FC target for my AlphaServers and I guess I could do that by passing an FC HBA through to an ESOS VM or something but :effort:

some kinda jackal fucked around with this message at 02:13 on Jul 9, 2025

FAT32 SHAMER
Aug 16, 2012



Tonight the UPS successfully handled a power outage, tho now I realise I need to set both servers to boot on power restoration :D

It was also nice to have PeaNUT on my dashboard to see how it handled everything. I have the Synology working as the server and everything else as the client.
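
For anyone else wiring this up: the client side is just NUT's upsmon pointed at the Synology, which (as far as I know) exposes its built-in NUT server with the fixed monuser/secret account. The IP below is made up.

code:
# /etc/nut/nut.conf on each client box
MODE=netclient

# /etc/nut/upsmon.conf -- Synology's NUT server presents the UPS as "ups"
# and (by default) uses the fixed monuser/secret credentials
MONITOR ups@192.168.1.20 1 monuser secret slave

# quick test from the client
upsc ups@192.168.1.20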

some kinda jackal
Feb 25, 2003

 
 
Hmm, so TrueNAS Community doesn't support FC targets without an Enterprise license. Wish I'd done a little more research before I spent a hundred bucks on 16Gb equipment to connect my storage-server-to-be and ESXi host, among the other things I can chuck a PCIe HBA into.

I do have an actual use case for FC in my network, to serve boot LUNs to a handful of old enterprise hardware, but that's all at 1Gb and 2Gb and is almost certainly never going to negotiate with a 16Gb SAN switch, so the plan was always to add a slower HBA with a tiny SANbox 1400 switch for all the old poo poo that needs a legacy SAN speed.

But at this point I'm going to declare 16Gb FC bankruptcy and just set up a 10Gb iSCSI LUN for my ESXi host. All my storage is 6Gb SAS/SATA anyway, so I'm probably not in any danger of saturating anything.

The actual FC use case can still be done with an older HBA passed through to a small ESOS VM running on TrueNAS I guess, I just wish I didn't have to hack the functionality in.

Looks like you can work around the Enterprise license requirement by loading the HBA modules and modifying the SCST configs by hand, but they're overwritten by the TrueNAS middleware any time you edit the iSCSI configs, and somehow that solution feels a lot more fragile than just running an ESOS VM.
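
For reference, the hand-rolled version looks roughly like the below -- module names, WWN and zvol path are assumptions on my part, and like I said, the middleware will happily clobber it:

code:
# load SCST plus the QLogic target-mode and vdisk modules
# (check what the TrueNAS kernel actually ships)
modprobe scst
modprobe scst_vdisk
modprobe qla2x00tgt

# /etc/scst.conf -- export a zvol as LUN 0 on the HBA port's WWN (made up)
HANDLER vdisk_blockio {
        DEVICE esxi_lun0 {
                filename /dev/zvol/tank/esxi-lun0
        }
}

TARGET_DRIVER qla2x00t {
        TARGET 21:00:00:24:ff:12:34:56 {
                enabled 1
                LUN 0 esxi_lun0
        }
}

# load the config
scstadmin -config /etc/scst.conf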

Anyway, live and learn. I was excited to try FC because it feels exotic and I'm a sucker for awful enterprise solutions, but I'm tired of thinking about it so it's time to move on.

All I'll say is that I'm glad I didn't go with my first impulse to buy an actual FC disk shelf. I mean, an aging Gen8 ProLiant isn't amazing, but at least I can pivot to Ethernet for way cheaper than it would cost me to replace those OEM proprietary FC controllers with iSCSI ones. Here I can just swap NICs and I'm off to the races :cool:

Wibla
Feb 16, 2011

I'm increasingly of the opinion that TrueNAS is a trap of sorts :v:

SlowBloke
Aug 14, 2017
Doesn't StarWind VSAN include FC? Maybe try that; they still have a free edition even after being purchased by DataCore.

Aware
Nov 18, 2003
Is MiniKVM still the best/cheapest option? Seems neat, but I'd like to pay less as I want two for testing new kit remotely as it comes in.

Subjunctive
Sep 12, 2006

ask me about nix or tailscale
NanoKVM is cheapest I think? probably depends on where you live. in the US I think the gl.iNet one (comet?) is well-regarded as being aggressively priced

Aware
Nov 18, 2003
Hadn't heard of the comet, ordered a couple! Thanks.

some kinda jackal
Feb 25, 2003

 
 
This G8 experiment is really and truly awful compared to the G9, but I won't say nobody warned me.

I mean it WORKS, but everything about this feels so bad compared to the G9. The G9 is whirring away quietly with non-HP drives and cards and isn't breaking a sweat, while I'm over here micromanaging fan speeds with hacked iLO on the G8 to keep my sanity intact. The RAID controller is just hot garbage, and I do mean HOT garbage: it's running at something like 80C doing absolutely nothing. I threw a spare PCI slot cooler above it to see if I can get it to chill the gently caress out. It doesn't have a swappable LOM, so getting 10Gb SFP+ means throwing a PCIe card into its already limited slots. About all it has going for it is that I could max it out with the RAM from my old R620 and that it cost me almost nothing.

Again, you guys all warned me and I thought "yeah I know, but how bad can it be?" and I guess it's one of those things you have to try for yourself to appreciate what a step up the G9 feels like from the G8.

I'm done fiddling with it. I like the idea of a ZFS box with 192GB ram for a nice thicc ARC but the second I have any issues with this G8 it's getting replaced with a G9 and I'll just live with whatever stock RAM it comes with. In fact I still need to replace the RAID controller with something I can flash to IT mode. I can probably find something for a few bucks on eBay but if it's going to cost me like a quarter of what I'd pay for a G9 with an IT-able HBA then I'm going to do some serious thinking.

If one of my goals wasn't to replace my Synology then I'd be finished now. I'd just throw all the SSDs I want to use for compute directly into the existing ESXi G9 in a RAID5 or something and forget the whole endeavour. As it is, I want to replace the Synology with something that can do 10Gb, which would cost me a thou Canadian if I wanted to go Synology again, so even if I need to grab another HP system I'm still under what I would have spent -- in money, just not frustration.

Volguus
Mar 3, 2009
Huh, I just looked on eBay at the HPE G9 servers available, but their specs looked very similar to my Dell R720's (2x E5-2650, 320GB DDR3 RAM) and their prices were just... absurd. Comparable with a Dell R740. Are the HPE servers just better than comparable Dells, so people are willing to pay the price?

drk
Jan 16, 2005

Volguus posted:

Huh, I just looked on eBay at the HPE G9 servers available, but their specs looked very similar to my Dell R720's (2x E5-2650, 320GB DDR3 RAM) and their prices were just... absurd. Comparable with a Dell R740. Are the HPE servers just better than comparable Dells, so people are willing to pay the price?

I havent read the last few pages but what is the plausible reason for buying a system with a 13 year old server CPU

I buy used computers frequently and you couldnt pay me to take a 2012 system for home use

Volguus
Mar 3, 2009

drk posted:

I havent read the last few pages but what is the plausible reason for buying a system with a 13 year old server CPU

I buy used computers frequently and you couldnt pay me to take a 2012 system for home use

Cheap. And good enough for your needs. When I bought my R720 it cost me about $400 CAD, and that was quite a few years back. Bought some more DDR3 RAM for it and it's sprinting. Recently I found out that it doesn't seem to like NVMe drives, so I'm looking for an upgrade (an R740 ideally, though it's at least $1300 CAD).
I run VMs on mine, mostly just a git server, build servers for my poo poo that needs to be built, and :filez: VMs. And I'd like to be able to have a backup in case poo poo hits the fan with my current one. But I really like the VMs and the compute power.

GrandMaster
Aug 15, 2004
laidback

drk posted:

I havent read the last few pages but what is the plausible reason for buying a system with a 13 year old server CPU

Core count and memory capacity

some kinda jackal
Feb 25, 2003

 
 

drk
Jan 16, 2005

wasnt intending to poo poo post

i pay far too much per kwh so I keep my home CPU budget to like 10-15W total

some kinda jackal
Feb 25, 2003

 
 
I'm not offended in the least :)

I have a handful of USFF machines that draw considerably less power, but honestly I'm tired of micromanaging small resource machines. Once I started collecting old retro enterprise equipment the power budget just became a secondary consideration so I figured I might as well just get something that I'll probably pay some bucks to run but that I won't really outgrow for the next five or so years, and can play some supporting role to my other hobbies to boot.

I mean I'm building a PDP-11 so I lost any credibility in homelab sanity a long time ago :q:

Basically I want something where I can stand up whatever VMs I want without ever needing to think about RAM, CPU, or disk, within reason. I have more RAM than I think I'll ever really need with only 1/4 of the slots full. 2x 14 core HT CPUs which aren't going to set any speed records but I'm used to 2013 era i7's anyway and none of my workloads are particularly compute intensive, I just like the ability to run lots of little things. If I needed number crunchers that might be a different story. No LLM/AI stuff right now either.

I killed a bunch of AWS assets and repatriated them to a physical server so I'm probably saving money even when you consider the electricity TBH. I haven't done the math though.

some kinda jackal fucked around with this message at 04:56 on Jul 12, 2025

drk
Jan 16, 2005

yeah its crazy how much you can run on an N100 or similar these days, or a $5-10/month cloud server

building a pdp-11 at home is very cool though. not something you can just spin up online for a few bucks

some kinda jackal
Feb 25, 2003

 
 
Technically if you just want the software it's something you can probably spin up for pennies with SimH, but where's the fun in that :twisted:

eschaton
Mar 7, 2007

the knowledge knower. a wisdom imparter. irritatingly self-assertive. odorous.

some kinda jackal posted:

In fact I still need to replace the RAID controller with something I can flash to IT mode.

you can set the built-in controller to HBA mode pretty easily, boot from the HP Gen8.1 utilities ISO and run the RAID setup utility, remove all volumes, and then just tell the controller in its config to use HBA mode instead of RAID mode

that’s how I set up the DL360p Gen8 that I have running Proxmox, so I can use ZFS; on the other hand, I’m using the HP RAID on the DL360p Gen8 that I have running NetBSD and it’s worked just fine for that too
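
and if you'd rather check or flip it from a running Linux install, ssacli can show the same thing -- the slot number below is a guess, and I believe hbamode is only exposed on reasonably recent Smart Array firmware, so double-check on yours:

code:
# look for "HBA Mode Enabled: True" in the controller detail
ssacli ctrl all show detail

# after removing all logical drives, flip the controller into HBA mode
# (assumption: supported on your controller/firmware) and reboot
ssacli ctrl slot=0 modify hbamode=on forced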


Aware
Nov 18, 2003
I got lucky and ended up with a cheap R740XD because it was rebadged as an NVR server under some white-label company and the guy selling it didn't know what it was worth. Keep an eye out for non-Dell servers that are clearly Good Dells underneath. It was anaemically specced, but a second CPU and a bunch of RAM from AliExpress/eBay weren't difficult or expensive to get.
