|
Please start testing zero day fixes at least a week in advance.
|
# ? Oct 13, 2017 02:16 |
|
Saukkis posted:Even more annoying are the security updates. A new critical kernel bug is published, Ubuntu releases the update in a couple of hours and Red Hat is right behind in about a week. Is there a channel to get the untested RPMs? I'd rather take my chances with an un-QA'd kernel update than shut down a general-use shell server for several days.

What environment are you running in that an unpatched local kernel exploit requires you to completely shut down your servers until a fix is available? I don't mean that in a mocking way, just genuinely curious. Is this like a shared hosting business where you offer shell access to random users who can't be trusted to not be assholes? Or worse...students?

Dr. Arbitrary posted:Please start testing zero day fixes at least a week in advance.

Symantec, now with -7 Day Protection
|
# ? Oct 13, 2017 03:11 |
|
when the exploits are -7 days, every week-later patch is a 0-day patch!
|
# ? Oct 13, 2017 03:57 |
|
Docjowles posted:What environment are you running in that an unpatched local kernel exploit requires you to completely shut down your servers until a fix is available? I don't mean that in a mocking way, just genuinely curious. Is this like a shared hosting business where you offer shell access to random users who can't be trusted to not be assholes? Or worse...students?

Students. And the worst of all, professors. Hundreds of people with access to those servers.
|
# ? Oct 13, 2017 17:37 |
|
jre posted:This is a reason never to use Ubuntu, not a reason to avoid redhat

Have there been any notable instances where Canonical's rapid release schedule caused more problems than it addressed?
|
# ? Oct 13, 2017 19:25 |
|
nem posted:Have there been any notable instances where Canonical's rapid release schedule caused more problems than it addressed?

There have been a ton of instances where the complete lack of QA and the "let's just use the Debian patch verbatim" approach have caused major breakage. This allows for a "rapid release cycle", but when there's a 0-day kernel CVE and they release same day, you have to ask "was there any regression testing on this at all?" And the answer to that question is no. It's a matter of time until they break a major component instead of minor ones.
|
# ? Oct 13, 2017 21:22 |
|
Ubuntu's totally rad on a desktop, where a blind update and reboot breaking things doesn't hurt too bad, but in the server world, where you've got thousands of machines at risk of not doing their job, you need more reliability. Unless you really, really enjoy updating the dozens of tickets that roll in. And have bosses who don't mind the company not doing any work.
|
# ? Oct 13, 2017 21:30 |
|
I have no idea how distributions are put together, but as a software developer it seems crazy to me that they can't whip the patch in and run a many-hours long test suite and then go "ok tis good, roll it out".
|
# ? Oct 13, 2017 23:05 |
|
Thermopyle posted:I have no idea how distributions are put together, but as a software developer it seems crazy to me that they can't whip the patch in and run a many-hours long test suite and then go "ok tis good, roll it out".

For a stable distro it needs to be tested against anything else in the system that might be affected. That's quite a few things for a kernel patch.
|
# ? Oct 13, 2017 23:28 |
|
Thermopyle posted:I have no idea how distributions are put together, but as a software developer it seems crazy to me that they can't whip the patch in and run a many-hours long test suite and then go "ok tis good, roll it out".

As a software developer it seems crazy to me that you think you could test a kernel patch on a significant set of hardware combinations in hours. Also, if the patch is a security update for a subtle bug, doing a rapid bodge has a decent chance of introducing a worse problem.

nem posted:Have there been any notable instances where Canonical's rapid release schedule caused more problems than it addressed?

If you are happy installing obviously untested changes to your production server environment then go hog wild; I feel sorry for anyone using your service though.
|
# ? Oct 13, 2017 23:49 |
|
Try 'rpm --erase python' on a redhat system sometime and look at the list of dependencies it spews out. Good loving luck testing all that in a couple hours.
|
# ? Oct 13, 2017 23:50 |
|
jre posted:If you are happy installing obviously untested changes to your production server environment then go hog wild, I feel sorry for anyone using your service though.

CentOS crew. It's useful to assess new players in the hosting panel market now; both RunCloud and ServerPilot rely on Ubuntu LTS. This feedback strengthens the case for selling a solution built on CentOS/RHEL.
|
# ? Oct 14, 2017 01:08 |
|
jre posted:As a software developer it seems crazy to me that you think you could test a kernel patch on a significant set of hardware combinations in hours.

But I don't think that? It sounded to me like people were saying there was no testing being done.
|
# ? Oct 14, 2017 02:45 |
|
Thermopyle posted:But I don't think that?

That would be the case on Ubuntu, hence the very short release period. No testing, or so little that it may as well be none.
|
# ? Oct 14, 2017 03:49 |
|
nem posted:CentOS crew

Nothing wrong with LTS. But a kernel build on RHEL runs a full regression against almost 1000 different pieces of hardware: take every OEM, multiplied by local+FC+iSCSI storage, multiplied by every model the OEMs ship. Once all of those pass, then manual QA happens. Then the release. But "Debian posted a patch so we crossed our fingers and shipped it -- good luck, customers" isn't the same level of safety. Different strokes. But this is why EL distros take 4-5 days after a zero day, not 4-5 hours.
|
# ? Oct 14, 2017 06:45 |
|
evol262 posted:But a kernel build on RHEL runs a full regression against almost 1000 different pieces of hardware.

If I understand correctly, wouldn't this be less of a concern on guest machines with virtio devices, since that serves as an abstraction layer dependent upon the host kernel? Testing a kernel against virtio devices would be the same irrespective of hardware, provided the host kernel has no regressions.

Edit \/\/\/: thanks for clearing that up!

nem fucked around with this message at 05:44 on Oct 15, 2017 |
# ? Oct 14, 2017 17:46 |
|
No, since virtio still uses the entire storage subsystem, and the network subsystem, and the scheduler, etc. Plus all the variations in possible libvirt CPU models. And testing host-passthrough. But the point of regression testing is to check whether the patch failed anywhere, from scheduler deadlocks to a failure in an ancient cciss controller. If any test fails, the entire suite fails, and any exceptions are noted for documentation or additional patches.

E: I work on RHEV and libvirt. There are a huge number of kernel bugs which are only reproducible on specific hardware, which is why most vendors also have a lab where you can go request a DL380 G6 or whatever. Even a 0-day in USB mass storage can't be tested in VMs, since some flash drives present as multipath. Testing on real hardware is necessary. We do include a large suite of virtualized sanity tests, but passing that is not sufficient for a regression pass. It's only part. Canonical doesn't test at all, including on virt. They rely on upstream to have tested, and assume a successful kernel build for whatever arch is good enough.

evol262 fucked around with this message at 20:28 on Oct 14, 2017 |
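An editor's sketch of the "any failure fails the suite" policy described above; the hardware and test names are hypothetical, not Red Hat's actual harness:

```python
# Editor's sketch (hypothetical names, not Red Hat's real tooling): one
# failing (hardware, test) combination fails the whole regression run, and
# the failures become the explicit exceptions to document or patch.

def regression_verdict(results):
    """results maps (hardware, test_name) -> bool (True = passed)."""
    failures = [combo for combo, passed in results.items() if not passed]
    # A single failure anywhere fails the entire run.
    return {"passed": not failures, "exceptions": failures}

results = {
    ("DL380 G6", "scheduler_deadlock"): True,
    ("DL380 G6", "cciss_probe"): False,  # hypothetical regression on old hardware
    ("s390x", "scheduler_deadlock"): True,
}
verdict = regression_verdict(results)
print(verdict["passed"])      # False: one failing combination fails the run
print(verdict["exceptions"])  # [('DL380 G6', 'cciss_probe')]
```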
# ? Oct 14, 2017 20:25 |
|
So you people are of the opinion that it's better to shut down a service for several days than to use a non-QA'd kernel that would most likely work just fine without any issues? Unless you have better information about how many attempts Red Hat usually requires before they manage to produce a kernel that doesn't fail catastrophically on common hardware.

And I'm not planning to stuff it in hundreds of servers, I just want it for the couple that have untrusted users. Of course it shouldn't be available through standard channels; just give me a maze of half a dozen links with as many warnings and disclaimers as you want, as long as there are beta RPM downloads at the end. Really, how much worse can it be than the fully-QA'd kernels which fail to boot every now and then anyway?

And on the iptables issue, how long can it take to QA a patch that teaches iptables-restore how to use -w? In the meantime, whenever we start a RHEL 7.4 server there is about a 1 in 3 chance that it won't have a firewall.

Saukkis fucked around with this message at 19:47 on Oct 16, 2017 |
# ? Oct 16, 2017 19:41 |
|
Saukkis posted:So you people are of the opinion, that it's better to shut down a service for several days than using a non-QA'd kernel that would most likely work just fine without any issues. Unless you have better information about how many attempts Red Hat usually requires before they manage to produce a kernel that doesn't fail catastrophically on a common hardware?
|
# ? Oct 16, 2017 20:39 |
|
Saukkis posted:So you people are of the opinion, that it's better to shut down a service for several days than using a non-QA'd kernel that would most likely work just fine without any issues. Unless you have better information about how many attempts Red Hat usually requires before they manage to produce a kernel that doesn't fail catastrophically on a common hardware?
|
# ? Oct 16, 2017 20:50 |
|
Saukkis posted:So you people are of the opinion, that it's better to shut down a service for several days than using a non-QA'd kernel that would most likely work just fine without any issues. Unless you have better information about how many attempts Red Hat usually requires before they manage to produce a kernel that doesn't fail catastrophically on a common hardware?

If you feel like you need to shut down a service, you've done something wrong. Banking kept operating after multiple public CVEs. Even if it took some banks a couple of days to apply patches for Heartbleed (despite it being a coordinated release with responsible disclosure, meaning packages for RHEL were available within minutes of every other distro), they didn't "shut down"; they waited until they could schedule a maintenance window.

And it's not about taking multiple attempts to "produce a kernel that doesn't fail catastrophically". The RHEL kernel goes through a full regression test on PPC, s390, x86_64, and sometimes aarch64. A regression test here means "every kernel runs through a couple thousand tests of various subsystems on every possible piece of hardware", and some of those tests take a while. Kernels rarely fail, because they also go through the testing harness as part of development. But it still takes a while to say "ensure that bug#12345 didn't regress".

Note that for critical CVEs, we get them out as fast as possible. This means war rooms, maintainers working during PTO, etc.

Saukkis posted:And I'm not planning to stuff it in hundreds of servers, I just want it for the couple that have untrusted users. Of course it shouldn't be available through standard channels, just give me a maze of half a dozen links and with as many warnings and disclaimers as you want, as long as there are beta RPM downloads at the end. Really, how much worse it can be than the fully-QA'd kernels which fail to boot every now and then anyway.

If you want an unsupported kernel, which those RPMs surely would be, feel free to use ELRepo or apply a patch to the last released SRPM yourself. The patches are public. The SRPMs are public. This isn't hard to do if it's really urgent for you; so urgent that you feel the need to rail about it on an internet comedy forum.

Saukkis posted:And on the iptables issue, how long can it take to QA a patch that teaches iptables-restore how to use -w? In the mean time whenever we start a RHEL 7.4 server there is about 1 in 3 chance that it won't have a firewall.

It's far more likely that whatever bug you're talking about isn't a Z-stream, which means it will stay "ON_QA" until whenever RHEL 7.5 comes out. Want it to be a Z-stream? Talk to your TAM and express that this is important. The fact that it's presumably targeted to a Y-stream, with or without a TAM, presumably means that it's very hard to reproduce (much more than 33%), has few users (like the rancid bug from the last page, not iptables/firewalld), or has an easy workaround. Have you opened a support case? Open a support case. Nobody can help you here.

I'm not saying that there aren't problems or bugs with RHEL. I'm saying that there's no way they can be fixed unless they're reported, and that if there is a bug you want to know the schedule for, you can talk to CEE or your TAM. I'd also suggest, given the size of the RHEL install base and the fact that these bugs are presumably not urgent enough for a Z-stream or asynchronous release, that there's a problem with your environment or your kickstart somewhere if there's a 1/3 chance a system won't have a firewall up.
|
# ? Oct 16, 2017 21:04 |
|
evol262 posted:I work in Red Hat engineering. I am not omniscient. I have no idea what bug you're talking about, and I have no information about it.
|
# ? Oct 16, 2017 22:21 |
|
That bug is VERIFIED and a Z-stream, which means it'll be out with the next RHEL z-stream batch update. These are on a regular cadence, so it's not a mystery to guess when it's going to be (soon).

For the 7.3 bug, you can tell from the doc text being updated that it'll be available very soon. Yes, there are hotfixes available for both of these.
|
# ? Oct 16, 2017 23:06 |
|
evol262 posted:That bug is VERIFIED and a z-stream. Which means it'll be out with the next RHEL z-stream. Which means batch updates.
|
# ? Oct 16, 2017 23:39 |
|
I'm thinking of setting up a new system next year, but I'm a bit skeptical about dual boot Linux - Windows 10. Is there (still) a risk that Windows decides to erase the Linux disk because it thinks it's faulty, or was that FUD?
|
# ? Oct 20, 2017 15:46 |
|
mike12345 posted:I'm thinking of setting up a new system next year, but I'm a bit skeptical about dual boot Linux - Windows 10. Is there (still) a risk that Windows decides to erase the Linux disk because it thinks it's faulty, or was that FUD?

I've been dual booting for years and that hasn't ever happened to me. If I'm being cautious I will disconnect the other OS's disk during install, but that is mostly so I don't fat-finger disk selection. IIRC, Windows can sometimes overwrite the MBR if you are dual booting from different partitions on the same disk, but that might just be older versions of Windows. It is pretty easy to fix and won't hurt your data if it does happen.
|
# ? Oct 20, 2017 15:53 |
|
mike12345 posted:I'm thinking of setting up a new system next year, but I'm a bit skeptical about dual boot Linux - Windows 10. Is there (still) a risk that Windows decides to erase the Linux disk because it thinks it's faulty, or was that FUD?

Well, that's a new one; never heard of it. I've always had a Windows partition on my home computer and that goes back to... 1998 or so. Unless I had way too many beers and selected the wrong thing to install an OS on (be it Windows or Linux or whatever), none of them ever gave me any trouble. Sure, in the old days a Windows reinstallation would overwrite the MBR, but that wasn't a problem to fix: just reinstall LILO or GRUB.
|
# ? Oct 20, 2017 15:59 |
|
mike12345 posted:I'm thinking of setting up a new system next year, but I'm a bit skeptical about dual boot Linux - Windows 10. Is there (still) a risk that Windows decides to erase the Linux disk because it thinks it's faulty, or was that FUD?

I always feel the need to point out that you may be happier not dual booting and instead running one of the operating systems in a virtual machine. It works really well.
|
# ? Oct 20, 2017 16:04 |
|
It's even possible to run Linux as the host and pass the video card through to a gaming Windows VM, if gaming is the reason you want to dual boot.
|
# ? Oct 20, 2017 16:10 |
|
Windows Subsystem for Linux just got released, so you don’t even need a VM. You would have to start any daemons from Task Scheduler and supply your own display server, but none of that is hard.
|
# ? Oct 20, 2017 16:39 |
|
I tinkered around with WSL last night. I didn't get as much time as I wanted because it took longer than expected to get the Fall update installed, but after that was done it took maybe 10 minutes to get an X server running and I was off tweaking my terminator config. If it means I never again need to be tempted to buy another MacBook just to get a comfortable work environment on a laptop, I'm sold. Even better if I can do it without Cygwin.
|
# ? Oct 20, 2017 16:46 |
|
Yeah, I'm running a Linux VM on Windows right now, but it's more about switching to Linux as a host. I just need Windows for the occasional game, and don't mind buying a second SSD. I guess that PCI passthrough is an option, too, but rebooting every now and then is no biggie. I was just worried that Windows 10 is more aggressive when it comes to non-NTFS disks.
|
# ? Oct 20, 2017 17:48 |
|
Does anyone know if there's a good way to set up udisks or something to change the default mount options of all NTFS disks? By default NTFS-3G allows the creation of files with names technically legal under NTFS (such as question marks) that are illegal under Windows (and produce hilarious "this file does not exist" messages when interacted with). This has tripped me up a few times recently. You can apparently use the windows_names mount flag to change the behavior, but I would need a way to do it automatically for it to be useful.
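For fixed disks at least, the windows_names flag can go in /etc/fstab rather than being passed by hand; a minimal sketch, with the device and mount point as placeholders (removable media auto-mounted by udisks would still need their own configuration):

```
# /etc/fstab -- hypothetical entry; device and mount point are placeholders
/dev/sdb1  /mnt/windows  ntfs-3g  defaults,windows_names  0  0
```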
gourdcaptain fucked around with this message at 18:21 on Oct 20, 2017 |
# ? Oct 20, 2017 18:18 |
|
If I install Fedora without a separate /home will I be kicking myself when it's time to upgrade to the next version?
|
# ? Oct 20, 2017 22:09 |
|
thebigcow posted:If I install Fedora without a separate /home will I be kicking myself when it's time to upgrade to the next version?
|
# ? Oct 20, 2017 22:15 |
|
thebigcow posted:If I install Fedora without a separate /home will I be kicking myself when it's time to upgrade to the next version?

No, but it can help if you want, for whatever reason, to re-install or install another distro (just choose the same /home). That's the only thing. Of course, you can back up your /home prior to installing/reinstalling things, but that's
|
# ? Oct 20, 2017 23:10 |
|
I'm not sure this is the right place to ask this, but I'm working with Ansible and trying to learn a little about facts, and I'm wondering why this:
code:
- debug: msg="{{ ansible_default_ipv4.alias }}"
does work, eg:
code:
|
# ? Oct 20, 2017 23:35 |
|
my bitter bi rival posted:I'm not sure this is the right place to ask this but I'm working with ansible and trying to learn a little about facts, and I'm wondering why this:

If you take the filter out and list all facts, is it there?
|
# ? Oct 20, 2017 23:53 |
|
Yep. I've noticed that running that type of filter, looking for a "nested" (not sure if that's the right word) fact within a fact, doesn't seem to work as an ad-hoc command for anything I've tried. eg this works fine:
code:
post hole digger fucked around with this message at 00:11 on Oct 21, 2017 |
# ? Oct 21, 2017 00:03 |
|
my bitter bi rival posted:Yep. I've noticed that running that type of filter, looking for a "nested" (not sure if thats the right word) fact within a fact doesn't seem to work as an ad-hoc command for anything I've tried.

The filter option filters only the first-level subkey below ansible_facts. When you run the playbook version, all of the facts are gathered first, and then your debug task filters the dictionary.
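An editor's sketch of that distinction in Python (the fact values are hypothetical): the setup module's filter is a shell-style pattern matched against whole top-level fact names, not a dotted path into nested dictionaries.

```python
# Editor's sketch (hypothetical fact values): a dotted path like
# "ansible_default_ipv4.alias" never matches a whole top-level fact name,
# which is why the ad-hoc filter comes back empty.
import fnmatch

facts = {
    "ansible_default_ipv4": {"alias": "eth0", "address": "10.0.0.5"},
    "ansible_hostname": "web1",
}

def setup_filter(facts, pattern):
    """Keep only top-level facts whose name matches the pattern."""
    return {k: v for k, v in facts.items() if fnmatch.fnmatch(k, pattern)}

# A dotted path matches no top-level name:
print(setup_filter(facts, "ansible_default_ipv4.alias"))  # {}

# The top-level name matches and returns the whole nested dict,
# which a playbook's debug task can then index into:
print(setup_filter(facts, "ansible_default_ipv4")["ansible_default_ipv4"]["alias"])  # eth0
```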
|
# ? Oct 21, 2017 00:11 |