|
Gentle Autist posted:my kid is loving nuts for jurassic park and you’ve just given me a great in for ruining his life by teaching him linux no. irix with fsn. go all out
|
# ? Feb 26, 2021 03:22 |
|
|
Rufus Ping posted:If you update fedora through dnf it only needs to be rebooted for the kernel but if you update through gnome package manager (which you would imagine would be the preferred way these days) it downloads the packages and then reboots into an automatic dnf-like package installer then reboots again (lol). Stick to running dnf manually, OP all yall no-reboot-updaters, what do you do to ensure that the updated packages are the ones actually in memory on your zomguptime machine? i guess don't bother answering that because it wasn't a good faith question. i already know you're wrong
|
# ? Feb 26, 2021 03:26 |
|
You can look at the name of the packages that have been updated and make that call. No rush to reboot if it's just updated some random application you aren't even using
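For anyone who actually wants to make that call rather than guess: one sketch of the check (not the only way, and Linux-specific) is to look for processes still mapping files that were deleted on disk. A package update unlinks the old `.so` and installs a new one, so anything still running against the stale copy shows up with a `(deleted)` mapping in `/proc`:

```shell
# Sketch: list PIDs that still map a file which has since been
# deleted/replaced on disk (e.g. a library updated by dnf).
# Those processes are running the OLD code until restarted.
for maps in /proc/[0-9]*/maps; do
    pid=${maps#/proc/}
    pid=${pid%/maps}
    if grep -q '(deleted)' "$maps" 2>/dev/null; then
        printf '%s\t%s\n' "$pid" "$(cat "/proc/$pid/comm" 2>/dev/null)"
    fi
done
```

Tools like `needs-restarting` (from dnf/yum-utils on Fedora) and `needrestart` (Debian/Ubuntu) automate essentially this same check.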
|
# ? Feb 26, 2021 03:30 |
|
DoomTrainPhD posted:Firmware update mode isn't the preferred update method anymore. Thanks to cheap eMMCs it's all about A/B updates. my company is selling cpus from 2003, we’ll maybe get to that in another decade
|
# ? Feb 26, 2021 03:32 |
|
Rufus Ping posted:You can look at the name of the packages that have been updated and make that call. No rush to reboot if it's just updated some random application you aren't even using loving auditing my update list and deciding if libshitlord is currently loaded, and whether I care if it’s out of date. what was the severity of that CVE anyway. if this is the price of high uptime then I pay it gladly
|
# ? Feb 26, 2021 04:17 |
Rufus Ping posted:If you update fedora through dnf it only needs to be rebooted for the kernel but if you update through gnome package manager (which you would imagine would be the preferred way these days) it downloads the packages and then reboots into an automatic dnf-like package installer then reboots again (lol). Stick to running dnf manually, OP Thanks, I knew I wasn't crazy when I remembered it rebooting and doing a whole update process.
|
|
# ? Feb 26, 2021 04:20 |
|
Nomnom Cookie posted:loving auditing my update list and deciding if libshitlord is currently loaded, and whether I care if it’s out of date. what was the severity of that CVE anyway. if this is the price of high uptime then I pay it gladly If the answer isn't immediately obvious assume you'll have to reboot. Not difficult
|
# ? Feb 26, 2021 04:25 |
|
Remember when you'd install stuff on Windows 95 and at the end of the installer it told you to reboot even when there was absolutely no need to? That's you in 2021
|
# ? Feb 26, 2021 04:27 |
|
Rufus Ping posted:Remember when you'd install stuff on Windows 95 and at the end of the installer it told you to reboot even when there was absolutely no need to. That's you in 2021 yes I’ve dealt with enough weird fuckery after an update that it’s not worthwhile anymore. update and reboot is far more reliable
|
# ? Feb 26, 2021 04:44 |
|
You can use lsof to see which version of a library is being used. You can restart the service to make sure that new version is loaded. Also stuff like Ubuntu's livepatch will patch kernels without rebooting. So if you have critical servers that absolutely cannot be rebooted, then there are ways to keep that uptime and stay secure. Of course, livepatch isn't free, so you'd have to pay for stuff like that.
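To make the lsof approach concrete: this is only a sketch, and lsof's column layout varies a bit between versions, but on Linux lsof marks deleted memory-mapped files with `DEL` in the FD column, so filtering on that finds services holding a stale shared object. A targeted restart of just those services then loads the updated copy:

```shell
# Sketch: find processes still holding deleted (i.e. since-updated)
# shared objects. lsof prints "DEL" in the FD column for deleted
# memory-mapped files; $NF is the file path.
sudo lsof -nP 2>/dev/null | awk '$4 == "DEL" && $NF ~ /\.so/ {print $1, $2}' | sort -u

# Then restart only the affected service, e.g. (hypothetical unit name):
# sudo systemctl restart httpd
```

The awk field positions assume lsof's default Linux output (COMMAND PID USER FD TYPE ...); adjust if your build adds columns.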
|
# ? Feb 26, 2021 05:21 |
|
I really enjoy this thread. my biggest hope for Linux is for it to simply not boot/install on unsupported hardware. the hardware requirements page will say this graphics card or WiFi card has issues, but rather than refusing to boot when said graphics card or whatever is present, it's like shrug, good luck, go google for some random forum where somebody complains and the workaround is to disable some major feature that should just work. more than anything else, strict hardware requirements would be a sign that Linux is ready to be something interesting. Linux distros would save so much time and be so much more useful if they refused to let you install on a system with ANY unsupported hardware.
|
# ? Feb 26, 2021 05:24 |
|
i think the tarball my old homedir was in got hosed up. drat
|
# ? Feb 26, 2021 06:36 |
|
Never stop posting
|
# ? Feb 26, 2021 06:57 |
|
sb hermit posted:You can use lsof to see which version of a library is being used. I am super glad I wasn’t so terrible in a past life as to be punished with a NEVER REBOOT EVER server. like, everywhere I’ve worked we’ve had such things as failover at the very least. “just don’t ever do anything that might impact availability and you get HA for free” has never been something for me to have to deal with
|
# ? Feb 26, 2021 07:06 |
|
surely if you have a server that cant be rebooted you could have some sort of mirror server that picks up the slack while the other's rebooting or whatever. yes i know thats not what a mirror does but you get what im getting at
|
# ? Feb 26, 2021 07:10 |
|
hbag posted:surely if you have a server that cant be rebooted you could have some sort of mirror server that picks up the slack while the other's rebooting or whatever that would be called active/passive failover and it’s a very well established thing. any system that needs to be actually fault tolerant is going to do that or do something else that provides that capability. what are you going to do in case of hardware failure, otherwise? I mean aside from poo poo your pants
|
# ? Feb 26, 2021 07:21 |
|
once you start from “the system must tolerate the failure of a single node” it’s not hard to conclude that your ideal is to handle node failure gracefully and then every sort of maintenance action turns into “fail the node, because that’s easy and safe, then bring it back in the new state”. in particular, it makes things like livepatch completely pointless
|
# ? Feb 26, 2021 07:24 |
|
There's always redhat's hotpatching which has zero implementation
|
# ? Feb 26, 2021 07:31 |
|
hbag posted:i think the tarball my old homedir was in got hosed up Was your home not a separate mount? Lol
|
# ? Feb 26, 2021 07:58 |
|
Nomnom Cookie posted:that would be called active/passive failover and it’s a very well established thing. any system that needs to be actually fault tolerant is going to do that or do something else that provides that capability. what are you going to do in case of hardware failure, otherwise? I mean aside from poo poo your pants right, so... these guys making arguments about "but what if you have a server that absolutely cannot power off for a second" dont really have an argument in the first place?
|
# ? Feb 26, 2021 08:47 |
|
not really. it's always good to have a solution with redundancy and durability, but sometimes that doesn't work. Maybe you have esoteric hardware or special requirements. Likely it's people that can't just spin up more redundant resources in aws or azure, or can't rewrite custom software, or some other restriction. Suffice it to say that people are paying for Red Hat's kpatch and Ubuntu's livepatch, and kpatch has been around for a long time. The demand exists, because someone is paying for it.
|
# ? Feb 26, 2021 09:01 |
|
Don't forget budget. Redundancy costs money. Sometimes you just don't have the money. Sometimes redundancy is just not economical. Sometimes you could architect redundancy but it complicates operations. "Yes, we could use glusterfs everywhere, but are we confident we can recover glusterfs if it dies and the gluster guru is on vacation/got hit by a bus?" is a real concern. Often you can take the hit of going down unscheduled once a decade but interrupting service when you need to patch right now isn't a good option.
|
# ? Feb 26, 2021 09:50 |
|
yeah, that’s what I meant. getting stuck pouring effort into hacky poo poo like patching a running kernel because management isn’t willing to fund the capex for real fault tolerance. or even worse, babysitting some abomination that simply can’t be redundant. I must have good karma, because that’s never come up for me
|
# ? Feb 26, 2021 10:07 |
|
Never work at a university then. Pro: some seriously cool poo poo, some of our stuff is currently in a container on a boat in the arctic circle. A real container, not some cgrouped chroot. Con: people tell you to re-architect your "app" in node.js on aws. We also have ssh bastion hosts that let you connect to multi-million euro equipment via rsh…
|
# ? Feb 26, 2021 10:37 |
|
forget zero downtime. zero uptime is where it’s at
|
# ? Feb 26, 2021 10:58 |
|
Soricidus posted:zero uptime is where it’s at There's medication for that.
|
# ? Feb 26, 2021 10:59 |
|
Nomnom Cookie posted:once you start from “the system must tolerate the failure of a single node” it’s not hard to conclude that your ideal is to handle node failure gracefully and then every sort of maintenance action turns into “fail the node, because that’s easy and safe, then bring it back in the new state”. in particular, it makes things like livepatch completely pointless i mean a server can be long-lived and disposable, they're not mutually exclusive. livepatch is still useful in this situation. bringing down nodes is very disruptive to clients depending on the software; i wouldn't do it for something as trivial as a kernel update
|
# ? Feb 26, 2021 12:33 |
|
we're talking about two entirely different things here. if you're doing some hobbyist thing and obsessively don't want to reboot, fine, read update logs and restart random things, that may be enough. if it is a thing in a system where it could be a "single node", it is already a bit hosed up if you're manually running commands on the thing, and the likely greatest threat to the system's stability is that you're there doing poo poo by hand.
|
# ? Feb 26, 2021 13:59 |
|
hbag posted:right, so... these guys making arguments about "but what if you have a server that absolutely cannot power off for a second" dont really have an argument in the first place? so one thing you may discover about linux is that the odds of any two people anywhere agreeing about what is a valid use case or best practice or implementation choice are essentially nil. it is an infinite fractal of doctrinal wars about every single thing and the only thing you can be sure of is that no matter what you're trying to do, or why, it is wrong.
|
# ? Feb 26, 2021 15:04 |
|
infernal machines posted:
|
# ? Feb 26, 2021 15:50 |
|
infernal machines posted:so one thing you may discover about linux is that the odds of any two people anywhere agreeing about what is a valid use case or best practice or implementation choice are essentially nil. linux: no matter what you're trying to do, or why, it is wrong.
|
# ? Feb 26, 2021 15:57 |
|
infernal machines posted:
too long for thread title?
|
# ? Feb 26, 2021 15:57 |
|
How can you do right when all the tools are wrong?
|
# ? Feb 26, 2021 16:28 |
Last Chance posted:too long for thread title? Could be shortened to this: "The only thing you can be sure of is that no matter what you're trying to do, or why, it is wrong."
|
|
# ? Feb 26, 2021 16:45 |
|
decided to take a look at gentoo and relive my teenage years. it sure compiles a lot faster vs. my PIII-500
|
# ? Feb 26, 2021 16:48 |
|
all proprietary software is malware. open sores poo poo is all doctrinal wars, except for occasional things like systemd that forcefully settle such crucial questions of the faith as "what should we call /etc/hostname". choose your poison
|
# ? Feb 26, 2021 16:58 |
|
Sapozhnik posted:all proprietary software is malware all software is bad, yes
|
# ? Feb 26, 2021 17:05 |
|
left my laptop running, downloading a disk image from an ftp server while i went to get my vaccine. came back and it had somehow shat the bed in my absence: the font was gibberish and the wifi card had hung itself. didnt even finish downloading the loving disk
|
# ? Feb 26, 2021 17:07 |
|
hbag posted:left my laptop running, downloading a disk image from an ftp server while i went to get my vaccine sounds like your poo poo may be hosed
|
# ? Feb 26, 2021 17:19 |
|
|
we already know linux is bad hbag. you do not need to report back
|
# ? Feb 26, 2021 17:19 |