CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Outside of the NAS thread, which is largely people seeing how many drives they can shuck, some of us enjoy enterprise hardware in the home for learning, practice, or because we're desperate geeks.

I run an M1000e Bladecenter plus an R720 NAS running FreeNAS, all wired up with 10Gb fiber, hosting VMs under XCP-ng (the open-source XenServer fork).



It's mostly used for practice and security lab stuff for work/certs, as well as hosting the occasional Minecraft server for my kids.
While the goal of this thread isn't just to showcase people with more money than sense, tiny homelabs are valid too! Show off your homelab and talk about what you do with it!


DeepBlue
Jul 7, 2004

SHMEH!!!
Holy crap, that is a lot for a homelab! Have you considered hosting other people's infrastructure on that?

The only thing I have is a homebrew Power9 server. There is a lot of info on the below page:

https://forums.raptorcs.com/index.php/topic,99.0.html

Hope to see other people's crazy creations and rack builds!

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

DeepBlue posted:

Holy crap, that is a lot for a homelab! Have you considered hosting other people's infrastructure on that?

The only thing I have is a homebrew Power9 server. There is a lot of info on the below page:

https://forums.raptorcs.com/index.php/topic,99.0.html

Hope to see other people's crazy creations and rack builds!

I do, actually! I host a bunch of labs for some Ethical Hacking courses, a couple multiplayer servers for friends, and some other small things!
It is overkill, but the M1000e really consolidates things like the networking stack and 10/40Gb fiber gear that would normally cost thousands.

That is an AWESOME Power9. I have a couple of PowerPC / Cell blades in the IBM BladeCenter E below the M1000e, but I don't power it on often anymore because:
- Noise
- Power draw is crazy, even compared to the M1000e.

Blade-wise in the M1000e:
2 x M915 - 4 x 16-core AMD Opterons (64 cores total) / 256GB ECC DDR3 / 900GB RAID-1 10k OS disks
8 x M620 - 2 x 8-core Intel Xeons (16 cores / 32 threads w/ HT) / 128GB ECC DDR3 / 900GB RAID-1 10k OS disks
1 x M630 - 2 x 12-core Intel Xeons (24 cores / 48 threads w/ HT) / 144GB ECC DDR4 / 900GB RAID-1 10k OS disks

The R720 is:
2 x 64GB SATADOM Disks in Mirror
16 x 900GB SAS 10ks (~10TB)
9 x 1.2TB SAS 7.2ks (~9 TB)
12 x 240GB SATA SSDs (1.2TB) - Mostly for VM OS disks.

Is there a possibility of watercooling your Power9?

CommieGIR fucked around with this message at 21:34 on Oct 23, 2020

DeepBlue
Jul 7, 2004

SHMEH!!!

CommieGIR posted:

I do, actually! I host a bunch of labs for some Ethical Hacking courses, a couple multiplayer servers for friends, and some other small things!
It is overkill, but the M1000e really consolidates things like the networking stack and 10/40Gb fiber gear that would normally cost thousands.

That is pretty nifty! If you don't mind divulging, what is your ISP speed into that rack? I think stuff like this needs more attention as cloud computing abstracts all this complexity away; it would be a shame for homelabs if it became a lost art.

quote:

That is an AWESOME Power9. I have a couple of PowerPC / Cell blades in the IBM BladeCenter E below the M1000e, but I don't power it on often anymore because:
- Noise
- Power draw is crazy, even compared to the M1000e.

The Talos Blackbird is the best motherboard I have ever purchased. The IPMI on it is top-notch, and it blows my mind that I have Linux environments from boot, to OS, to starting up VMs and containers. I mucked up the IPL process on the board during a firmware update, and unbricking it was as easy as attaching a serial port to a board header to load up the correct firmware. The IRC community is also pretty good at helping out with issues.
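
For anyone wondering what "attach a serial port to a board header" actually looks like in practice, it's roughly this (a generic pyserial sketch; the device path and baud rate are my assumptions, not Raptor's documented recovery procedure, so check their wiki/IRC for the real steps):

code:
# Watch the board's serial console while reflashing. Assumes a USB-serial
# adapter showing up as /dev/ttyUSB0 at 115200 baud -- both are placeholders.
import serial  # pip install pyserial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=1) as console:
    while True:
        line = console.readline()
        if line:
            print(line.decode(errors="replace"), end="")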

quote:

Is there a possibility of watercooling your Power9?

I have not gone into that, because the Power9 uses a non-standard method to lock the heatsink onto the board. I would need a CAD drawing and a CNC shop to make one, but I am sure it would sell to the 3 people who bought this for their homelab.

DeepBlue fucked around with this message at 21:47 on Oct 23, 2020

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

DeepBlue posted:

That is pretty nifty! If you don't mind divulging, what is your ISP speed into that rack? I think stuff like this needs more attention as cloud computing abstracts all this complexity away; it would be a shame for homelabs if it became a lost art.

So, I have AT&T Gigabit Fiber, and I have a Ubiquiti EdgeRouter X in place of the AT&T gateway thanks to the EAP_Proxy script. I get a decent 850-900 Mbps at the router, and about the same at the firewall for the M1000e.

DeepBlue
Jul 7, 2004

SHMEH!!!

CommieGIR posted:

So, I have AT&T Gigabit Fiber, and I have a Ubiquiti EdgeRouter X in place of the AT&T gateway thanks to the EAP_Proxy script. I get a decent 850-900 Mbps at the router, and about the same at the firewall for the M1000e.

Can you share that script? I have a buddy who has Gig service from AT&T and would appreciate that swap out from the Arris NVG unit.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

DeepBlue posted:

Can you share that script? I have a buddy who has Gig service from AT&T and would appreciate that swap out from the Arris NVG unit.

As far as I know, you have to use an EdgeRouter or some other Linux-based router:

https://community.ui.com/stories/Bypassing-ATandT-Fiber-Gateway-with-Edgerouter-Lite-newbie-version/e494f292-a2d0-4d1b-be7d-858f340b14b4

But there you go! Caveat: put it on a UPS, because if you lose power you have to plug the gateway back into the EdgeRouter to capture a new EAP handshake.
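
If you're wondering what the script is actually doing, it's conceptually just relaying 802.1X (EAPOL) frames between the port facing the ONT and the port the AT&T gateway is plugged into, so the gateway keeps answering the authentication while the EdgeRouter handles everything else. Here's a rough sketch of the idea, and only the idea; this is not the eap_proxy script itself, the interface names are placeholders, and the guide above is what you actually want to follow:

code:
# Conceptual EAPOL relay using scapy (requires root). Illustration only.
import threading
from scapy.all import sniff, sendp

ONT_IF = "eth0"      # port facing the ONT / fiber jack (placeholder)
GATEWAY_IF = "eth2"  # port the AT&T gateway is plugged into (placeholder)

def relay(src_iface, dst_iface):
    """Copy every EAPOL frame (ethertype 0x888e) seen on src out dst."""
    sniff(
        iface=src_iface,
        filter="ether proto 0x888e",
        prn=lambda pkt: sendp(pkt, iface=dst_iface, verbose=False),
        store=False,
    )

# Relay in both directions so the 802.1X exchange can complete.
threading.Thread(target=relay, args=(ONT_IF, GATEWAY_IF), daemon=True).start()
relay(GATEWAY_IF, ONT_IF)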

DeepBlue
Jul 7, 2004

SHMEH!!!

CommieGIR posted:

But there you go! Caveat: put it on a UPS, because if you lose power you have to plug the gateway back into the EdgeRouter to capture a new EAP handshake.

Good call on that one. I ran across the Google Fiber guide for replacing the modem/router with an ER-Pro, and it does not have that problem; I was just going to assume the same caveat applied!

DeepBlue fucked around with this message at 22:57 on Oct 23, 2020

Actuarial Fables
Jul 29, 2014

Taco Defender
The joys of a short-depth rack


Can it fit cheap used enterprise servers? No
Can it fit server cases with hot-swappable 3.5" drive bays? No
Can you attach rails to the cases that do fit? No
Does it fit in the back of a station wagon and also a small apartment? YES

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Actuarial Fables posted:

The joys of a short-depth rack


Can it fit cheap used enterprise servers? No
Can it fit server cases with hot-swappable 3.5" drive bays? No
Can you attach rails to the cases that do fit? No
Does it fit in the back of a station wagon and also a small apartment? YES

It's a clean little rack! And yeah, my IBM 42U rack is a pain if I ever have to move it out of the room; I have to unload it and tilt it all the way down to get it out the door.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

I don't fit squarely in the homelab camp anymore -- I used to, when I had a literal stack of SPARC machines, including multiple rackmounts -- but SPARC is dead and I can't deal with the noise of datacenter gear anymore. But since I developed a crippling addiction to making numbers go up for science, I once again have too many computers in my house and higher-than-necessary power/cooling bills.

My current setup is a home-built tiny rack, which still holds 4 nodes but they have all been upgraded to Ryzen 3900Xs. Then, in the bedroom, there are two Ryzen 2700 machines built from a mix of spare and new parts.

The nodes all run Arch Linux and are managed through a blend of Ansible playbooks and bespoke tooling named Homefarm. I'm working on the last touches of adding multi-arch support to Homefarm, and to actually test that out I needed some ARM machines, so I also have four Raspberry Pi 4Bs lashed together inside a wind tunnel made of foamcore board.

Everything runs 24/7/365 doing work for various bio/medical and astrophysics research teams. I'm in a tiny 1BR apartment, so to keep things livable the 3900Xs are PPT-limited to 75W, which results in them running at about 3500MHz all-core.
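
(If anyone wants to sanity-check what a power limit does to their all-core clocks, the kernel's cpufreq sysfs files are enough. This is just a generic Linux check, not part of Homefarm:)

code:
# Report logical CPU count plus average/minimum clocks from cpufreq sysfs.
# Values in scaling_cur_freq are in kHz; needs a loaded cpufreq driver.
from pathlib import Path
from statistics import mean

freqs_khz = [
    int(p.read_text())
    for p in Path("/sys/devices/system/cpu").glob("cpu[0-9]*/cpufreq/scaling_cur_freq")
]
print(f"{len(freqs_khz)} logical CPUs, "
      f"avg {mean(freqs_khz) / 1000:.0f} MHz, "
      f"min {min(freqs_khz) / 1000:.0f} MHz")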

Once the 5900Xs drop, my plan is to rebuild the 2700-based nodes, and one of the 3900X nodes. Then I'll have an even split of current and previous generation CPUs, and the long-range plan is to keep upgrading in that fashion.

Finally, I'm also working on a new rack that will hold all 6 nodes -- and have better cooling thanks to having solid sides and three 140mm fans at the back of each tray.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

mdxi posted:

I don't fit squarely in the homelab camp anymore -- I used to, when I had a literal stack of SPARC machines, including multiple rackmounts -- but SPARC is dead and I can't deal with the noise of datacenter gear anymore. But since I developed a crippling addiction to making numbers go up for science, I once again have too many computers in my house and higher-than-necessary power/cooling bills.

My current setup is a home-built tiny rack, which still holds 4 nodes but they have all been upgraded to Ryzen 3900Xs. Then, in the bedroom, there are two Ryzen 2700 machines built from a mix of spare and new parts.

The nodes all run Arch Linux and are managed through a blend of Ansible playbooks and bespoke tooling named Homefarm. I'm working on the last touches of adding multi-arch support to Homefarm, and to actually test that out I needed some ARM machines, so I also have four Raspberry Pi 4Bs lashed together inside a wind tunnel made of foamcore board.

Everything runs 24/7/365 doing work for various bio/medical and astrophysics research teams. I'm in a tiny 1BR apartment, so to keep things livable the 3900Xs are PPT-limited to 75W, which results in them running at about 3500MHz all-core.

Once the 5900Xs drop, my plan is to rebuild the 2700-based nodes, and one of the 3900X nodes. Then I'll have an even split of current and previous generation CPUs, and the long-range plan is to keep upgrading in that fashion.

Finally, I'm also working on a new rack that will hold all 6 nodes -- and have better cooling thanks to having solid sides and three 140mm fans at the back of each tray.

That's a really cool build! I love your tiny RasPi fan box!

Actuarial Fables
Jul 29, 2014

Taco Defender
The DIY rack is pretty neat. I had been looking around for something to secure the external HDD I have; I might have to look into making something myself.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
I may be picking up an HP C7000 Bladecenter here shortly; the blades seem even cheaper than the M1000e's.

Lorem ipsum
Sep 25, 2007
IF I REPORT SOMETHING, BAN ME.
I have 3 4U servers + battery backup + NAS in my basement, but I'm too cheap to buy a server rack, so I dropped $20 on two wooden doors at Home Depot and just have my stuff on top of those.

Famethrowa
Oct 5, 2012

I'm looking to build something cheap as a babby's first DIY. Any recommendations? Looking for a simple NAS server + router + switch to learn how to manage a small network on.

I've browsed the subreddit and read a bit, but curious if anyone has personal experience with this.

cage-free egghead
Mar 8, 2004
Last year I got a good deal on an HP DL380 G7, equipped it with a bunch of RAM and HDDs, and threw Unraid onto it, but I just didn't utilize the hardware as much as I could. I was hosting my Bitwarden and Nextcloud instances but soon took them down after seeing the server logs of everything snooping on them. Now the server has been down for about a month and I'm not sure what I want to do with it, especially with the lockdowns. I may just turn it into file storage, but I feel like the server is VERY overkill for that sort of application, even if I did do Plex too. I did get the lower-power dual Xeons and the server only pulls like 80W at idle, but still.

Any ideas on the networking side? I was using SWAG for cert management and Cloudflare for my DNS stuff, but I'm not sure what else I could do to harden it just in case.

Actuarial Fables
Jul 29, 2014

Taco Defender

Famethrowa posted:

I'm looking to build something cheap as a babby's first DIY. Any recommendations? Looking for a simple NAS server + router + switch to learn how to manage a small network on.

I've browsed the subreddit and read a bit, but curious if anyone has personal experience with this.

If you have any old desktop computers, they're usually a great place to start for servers. The requirements for a simple storage server are pretty small, so even an 8-year-old machine will do great.

For a router, the EdgeRouter X is cheap and does everything you'd want a lab router to do, with the added benefit of being able to install .deb packages to play around with. It can also perform switching functions, so you don't need to buy a switch if you've only got a few ports that need to be connected right now.

e. One thing to consider is where you're going to put everything. If it's in a place with people around, like a home office, then you'll probably want to rule out cheap-but-loud used enterprise equipment.

Actuarial Fables fucked around with this message at 02:52 on Nov 21, 2020

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Picked up an HP C7000 to play with; waiting on the PSU, rails, and other parts to arrive.



ihafarm
Aug 12, 2004
Lol. Are you getting the 120/240-compatible power supplies? Got bit once when I only had 240V parts but no 240V service! What blades and interconnects? I haven't run one in 4-5 years now, but didn't HP start requiring an active service contract for access to firmware/software updates?

At least you’ve got all the fans.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

ihafarm posted:

Lol. Are you getting the 120/240-compatible power supplies? Got bit once when I only had 240V parts but no 240V service! What blades and interconnects? I haven't run one in 4-5 years now, but didn't HP start requiring an active service contract for access to firmware/software updates?

At least you’ve got all the fans.

Getting the 240 compatibles, since it'll be running off my current 40 Amp 240v service breaker for the lab.

And yeah, HP closed off access for firmware, but that can be gotten around.

CommieGIR fucked around with this message at 00:21 on Nov 22, 2020

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
C7000 is powered up!


H2SO4
Sep 11, 2001

put your money in a log cabin


Buglord
In case anybody is starting out on the homelab journey and is thinking about going the Nutanix CE route, let me offer a couple of words of wisdom.

DON'T loving DO IT.

I originally went down the Nutanix CE route because I found a mislabeled Supermicro X9-based 4-node 2U server on eBay and got it for mad cheap. It was all JBOD storage though, no RAID, so I figured it'd be a fun testbed for hyperconverged stuff like vSAN and Nutanix CE.

Nutanix CE is an absolute house of cards: you have zero access to any knowledge base or documentation unless you've also got a Nutanix account with active support through work/etc. The simplest of functions, such as replacing a failed disk drive or adding a new one, is completely undocumented and abstracted away from the interface, leaving you to trawl through god knows how many blogs and random forum posts with broken image links and missing formatting to figure out what esoteric CLI fuckery you need to do to make it happen.

Also, upgrades are compulsory. If they release a new version of the hypervisor/platform, the next time you log in to your cluster you are forced to apply it right away. No access to your cluster/VMs/etc. until the process completes, no option to defer. But that's not such a bad thing; their whole gimmick is one-click painless upgrades, right?

Wrong. Every single time I've attempted to upgrade my cluster, it's resulted in needing some form of recovery. The best-case scenario was a node getting hung at upgrade and evicting itself from the metadata ring; after a manual reboot it comes back up and you can manually enable the metadata store again. This latest version caused a total cluster-wide data loss event: when the first node updated, it immediately started evicting every single disk with no warning and no loving reason. That's something its own configuration should have prevented, considering it knows how many drive/node failures it can tolerate. If it had just evicted the node itself and halted the process it would have been fine, but it's not smart enough to do that.

There is, at best, a total lack of QA on any CE release. For instance, the previous version's ISO installer just plain didn't work, and they were too lazy to fix it, so they just removed the ISO. This led to people using the USB image to deploy their clusters, but the USB image doesn't do any repartitioning of the install drives, which means that slowly but surely nodes would run out of space on the system drive, making things get real fucky. If you figured out what was going on you could do some fdisk magic, write a new partition table, and extend the existing partitions, but that's only if you whispered the right incantations into Google and found your issue on page 32 of the results.
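
(The "system drive quietly fills up" failure mode is at least easy to catch before it gets fucky: something as dumb as this, dropped into cron on each node, flags it early. A generic sketch, nothing Nutanix-specific, and the 15% threshold is arbitrary:)

code:
# Warn when the root filesystem drops below a free-space threshold.
import shutil

THRESHOLD = 0.15  # warn below 15% free (arbitrary)

usage = shutil.disk_usage("/")
free_frac = usage.free / usage.total
if free_frac < THRESHOLD:
    print(f"WARNING: only {free_frac:.1%} free on / "
          f"({usage.free / 2**30:.1f} GiB left)")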

With the latest crash and burn I finally just said "gently caress it", bought four 1U boxes with boring old RAID controllers, transferred the guts over, and swapped back to good old ESX, and I wish I'd cut bait sooner. Just doing that effectively gave me back 128GB of memory that was previously wasted on the Nutanix controller VMs.

I know I'm salty, but I'd argue I have good reason to be, considering all the headaches they caused in my lab, which by extension gave me a glimpse into how flimsy their house of cards really is. I'm sure their retail product is great. I'm sure their support is great. But based on this experience over four generations of CE, I'd actively block them from being considered for any projects at work.

H2SO4 fucked around with this message at 03:34 on Nov 26, 2020

Actuarial Fables
Jul 29, 2014

Taco Defender

H2SO4 posted:

Also, upgrades are compulsory. If they release a new version of the hypervisor/platform, the next time you log in to your cluster you are forced to apply it right away. No access to your cluster/VMs/etc. until the process completes, no option to defer.

Yikes.

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!

If any of you nerds are in Michigan I have a Dell PowerEdge R710 and R300 that I'm selling cheap.

Mr. Crow
May 22, 2008

Snap City mayor for life
Thoughts on throwing a rack in the garage? Seems like the best/only place to put something that large and loud in our house.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Mr. Crow posted:

Thoughts on throwing a rack in the garage? Seems like the best/only place to put something that large and loud in our house.

Mine lives in a workroom attached to my garage; the only thing I had to add was a vent fan to pull the hot-side air out of the house.

Network module came for the C7000, bonus 10Gb QSFP included!

CommieGIR fucked around with this message at 22:33 on Nov 30, 2020

ihafarm
Aug 12, 2004
How many watts is it pulling? Are you populating all the ps slots?

*consuming

ihafarm fucked around with this message at 01:49 on Dec 1, 2020

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

ihafarm posted:

How many watts is it pulling? Are you populating all the ps slots?

*consuming

It's sitting at 300 watts right now, and no, we'll only have 2 PSUs in there.

Doccers
Aug 15, 2000


Patron Saint of Chickencheese
Oh hey, I can play this game now.



Like most others, it's mainly for learning/loving about, but I do have it set up for some useful poo poo (OpenVPN, NAS, Plex, game servers, LTO-4 tape backups of everything every week, etc. - as well as a full Windows AD environment with RADIUS, Exchange, MECM, etc.).

Doccers fucked around with this message at 02:25 on Dec 1, 2020

Doccers
Aug 15, 2000


Patron Saint of Chickencheese

CommieGIR posted:

Getting the 240 compatibles, since it'll be running off my current 40 Amp 240v service breaker for the lab.

And yeah, HP closed off access for firmware, but that can be gotten around.



... is that an IIcx up there...

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Doccers posted:

... is that an IIcx up there...

Yup, and an HP-UX pizza box.

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug
Finished getting the C7000 set up and got the Virtual Connect configured. Had an issue where the 10Gb XFP would link but passed no packets/frames.

However, the 1Gb Ethernet on the same VC interface worked fine.

Flail Snail
Jul 30, 2019

Collector of the Obscure
I had grand intentions of building up a decent sized homelab. Repairs and a recent house purchase have kind of put a damper on it, though.

At the moment, my "lab" consists of a lightly upgraded DL380 G6 running Proxmox (pretty old but it's fine for my use case - web playground, family Nextcloud, and some occasional game hosting) and an unused Catalyst 2960G that my workplace was selling, both sitting on a half-built Lack Rack. Now that I've got a house and things are going to be stabilizing over the next year or two, I should work on getting a rack and some more hardware.

Ziploc
Sep 19, 2006
MX-5
A lot of my poo poo is ewaste from work and from a friend who used to work in IT for a bank. I think my rack is like 60% ewaste. The DL360P G8 was free ewaste. And the two G9s were also ewaste, but good enough that I had to throw some money at that friend. I did some upgrading through eBay. My work building has a lot of indie tenants that come and go so the ewaste bin sometimes has some real fun stuff in it.

I'm a SORTA sysadmin for a small research firm, with a wild mishmash of old and new hardware and paid and open-source software. So the more experience I get with random hardware/software crap, the better.

I wired the rack myself using a lacing technique that allows cables to be bundled in ways that don't cause snags or tangles. This lets me yank any machine out while it's powered on without having to go behind the rack. It's also fun when people come over and ogle the rack: I can just yoink a machine out and show them the guts of an enterprise machine, or one of the normal ITX racked cases.

Right now the two DL360G9s are running Proxmox and run cool and quiet even with the unmeshed doors closed. The DL360G8 is running Windows Server because the BIOS fan controller poo poo ramps like crazy under any other OS. I've had them idling down there for a week with the doors closed without issue or fan ramp. But they're only there for fiddling.

There's a ThinkServer RS140 and a Dell KVM hiding in there somewhere too.

The rest of the rack is Ubiquiti for networking, a Synology for old storage, an ewasted Acer H340 running XPenology for random media I don't care about, and some 4U boxes for random stuff: two dedicated to FreeNAS/FreeNAS replication, one for playing around with Unraid, and a Proxmox server at the bottom running Home Assistant.

The guts of the 4U boxes are all over the place: two Supermicro boards with 14 SATA ports for FreeNAS, and two ewaste i7 9XX machines that I play around on.

I color coded my in rack RJ45 as follows:
Blue cable blue jacket - Server NIC
Blue cable white jacket - IPMI
Blue cable yellow jacket - Power related (UPS network, PDU network)
Yellow cable yellow jacket - POE

I'm a hardware packrat and kept having servers fall into my lap, so I bought a flat-pack 42U rack that arrived in pieces small enough that I could bring them down to my basement and assemble the rack myself. That seemed way less painful than trying to get a much, much cheaper rack from the classifieds.

Next steps are to wire the house with ethernet to rooms, and POE for outdoor security. And move from FreeNAS to TrueNAS. And continue having fun with Home Assistant.





HP wins for the best blinken lights. Though I'm sad my SATA SSDs don't blinken. :(
https://i.imgur.com/c9aWCmz.mp4

Ziploc fucked around with this message at 19:49 on Dec 19, 2020

the spyder
Feb 18, 2011
Commie, I'm recycling a dozen of those chassis. Need anything?
If you're in the PDX area and need lab gear, PM me.

the spyder fucked around with this message at 01:23 on Dec 20, 2020

the spyder
Feb 18, 2011

H2SO4 posted:

In case anybody is starting out on the homelab journey and is thinking about going the Nutanix CE route, let me offer a couple of words of wisdom.

DON'T loving DO IT.

I originally went down the Nutanix CE route because I found a mislabeled Supermicro X9-based 4-node 2U server on eBay and got it for mad cheap. It was all JBOD storage though, no RAID, so I figured it'd be a fun testbed for hyperconverged stuff like vSAN and Nutanix CE.

Nutanix CE is an absolute house of cards: you have zero access to any knowledge base or documentation unless you've also got a Nutanix account with active support through work/etc. The simplest of functions, such as replacing a failed disk drive or adding a new one, is completely undocumented and abstracted away from the interface, leaving you to trawl through god knows how many blogs and random forum posts with broken image links and missing formatting to figure out what esoteric CLI fuckery you need to do to make it happen.

Also, upgrades are compulsory. If they release a new version of the hypervisor/platform, the next time you log in to your cluster you are forced to apply it right away. No access to your cluster/VMs/etc. until the process completes, no option to defer. But that's not such a bad thing; their whole gimmick is one-click painless upgrades, right?

Wrong. Every single time I've attempted to upgrade my cluster, it's resulted in needing some form of recovery. The best-case scenario was a node getting hung at upgrade and evicting itself from the metadata ring; after a manual reboot it comes back up and you can manually enable the metadata store again. This latest version caused a total cluster-wide data loss event: when the first node updated, it immediately started evicting every single disk with no warning and no loving reason. That's something its own configuration should have prevented, considering it knows how many drive/node failures it can tolerate. If it had just evicted the node itself and halted the process it would have been fine, but it's not smart enough to do that.

There is, at best, a total lack of QA on any CE release. For instance, the previous version's ISO installer just plain didn't work, and they were too lazy to fix it, so they just removed the ISO. This led to people using the USB image to deploy their clusters, but the USB image doesn't do any repartitioning of the install drives, which means that slowly but surely nodes would run out of space on the system drive, making things get real fucky. If you figured out what was going on you could do some fdisk magic, write a new partition table, and extend the existing partitions, but that's only if you whispered the right incantations into Google and found your issue on page 32 of the results.

With the latest crash and burn I finally just said "gently caress it", bought four 1U boxes with boring old RAID controllers, transferred the guts over, and swapped back to good old ESX, and I wish I'd cut bait sooner. Just doing that effectively gave me back 128GB of memory that was previously wasted on the Nutanix controller VMs.

I know I'm salty, but I'd argue I have good reason to be, considering all the headaches they caused in my lab, which by extension gave me a glimpse into how flimsy their house of cards really is. I'm sure their retail product is great. I'm sure their support is great. But based on this experience over four generations of CE, I'd actively block them from being considered for any projects at work.

Was this on the previous 4.x CE code? Given the details, I'm going to have to say yes. My CE lab experience has always been "Oh god, why did that break?" The new 5.x code is solid, and I've had it running for 3 months or so. I've had nothing but an excellent experience running it in prod. After reviewing all the major competitors (except VMware, politics), it's the best out there right now IMO. The quality of support is second to none.

the spyder fucked around with this message at 01:39 on Dec 20, 2020

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

the spyder posted:

Commie, I'm recycling a dozen of those chassis. Need anything?
If you're in the PDX area and need lab gear, PM me.

PM'ed! I just finished setting up a BL465C G8 with a GPU and XenServer.

https://twitter.com/CommieGIR/status/1340460913057533954?s=19

H2SO4
Sep 11, 2001

put your money in a log cabin


Buglord

the spyder posted:

Was this on the previous 4.x CE code? Given the details, I'm going to have to say yes. My CE lab experience has always been "Oh god, why did that break?" The new 5.x code is solid, and I've had it running for 3 months or so. I've had nothing but an excellent experience running it in prod. After reviewing all the major competitors (except VMware, politics), it's the best out there right now IMO. The quality of support is second to none.

CE uses a different versioning system than their retail stuff, but this was 5.x code for sure. I'm sure their retail support is great, but my experiences have absolutely not reflected a stable, well-thought-out offering.


Mr Shiny Pants
Nov 12, 2012
I just have ONE TS440.

drat, power is too expensive over here to run this stuff 24x7.
