|
RFC2324 posted:ok, if this is the case it makes perfect sense and I was wrong. Actually, I re-read it and have to revise my previous post: they meant being able to set things up so DNS would respond to anything.domain.com with a certain IP.
|
# ? Aug 2, 2020 00:46 |
|
|
astral posted:Actually, I re-read it and have to revise my previous post: they meant being able to set things up so DNS would respond to anything.domain.com with a certain IP. Yeah, if you buy a domain you can do that; the registrar has no say. The dns provider might not let you, but changing dns providers should be easy and fairly transparent if they don't provide the service you require for your money.
|
# ? Aug 2, 2020 01:47 |
|
astral posted:Actually, I re-read it and have to revise my previous post: they meant being able to set things up so DNS would respond to anything.domain.com with a certain IP. Correct. The web UI of the registrar I personally use lets me use asterisks in DNS records, but the one my company uses doesn't, so in order to deploy a new app we need to have someone define records for app.company.com. It takes like a minute so I haven't pushed to change registrar over it, but it's still totally unnecessary.
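For anyone following along, a wildcard record is just an ordinary record with * as the leftmost label. A hypothetical BIND-style zone fragment (the domain and IP here are placeholders, not anyone's real config):

```
; hypothetical fragment of the example.com zone file
app   IN  A  203.0.113.10   ; explicit record: app.example.com
*     IN  A  203.0.113.10   ; wildcard: matches any name not otherwise defined
```

Whether you can actually enter that * is purely a limitation of the provider's web UI; DNS itself has no problem with it (RFC 4592 spells out the wildcard semantics).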
|
# ? Aug 2, 2020 12:21 |
|
NihilCredo posted:Correct. The web UI of the registrar I personally use lets me use asterisks in DNS records, but the one my company uses doesn't, so in order to deploy a new app we need to have someone define records for app.company.com. It takes like a minute so I haven't pushed to change registrar over it, but it's still totally unnecessary. Use a different dns service. Or delegate a subdomain zone to a dns server you operate and do your app stuff on the new subdomain. (don't do this just get a better dns host)
|
# ? Aug 2, 2020 15:48 |
|
NihilCredo posted:Correct. The web UI of the registrar I personally use lets me use asterisks in DNS records, but the one my company uses doesn't, so in order to deploy a new app we need to have someone define records for app.company.com. It takes like a minute so I haven't pushed to change registrar over it, but it's still totally unnecessary. You seem to not understand the difference between the dns registrar and the dns provider? You can change providers easily, and should do so if they won't let you create subdomains for yourself.
|
# ? Aug 2, 2020 16:23 |
|
You don't need to tell me to get a better provider. I already have a good one for myself, the "no wildcards allowed" issue is only with the one my company uses (and it's nowhere close to worth making a fuss over, or even finding a dedicated provider separate from the domain registrar + web host), and in fact the whole reason we are having this discussion in the first place is because I mentioned to another goon that they should check if their provider supports wildcards - which I thought was a given until I discovered one that didn't. Let's move on.
|
# ? Aug 2, 2020 22:04 |
|
What's the go-to method/course/etc. to go from Linux zero to hero in 4-6 months? I know that's a short time frame for a new OS and it's a constant learning process, but I'd like to learn as much as I can. I have Ubuntu and CentOS ISOs downloaded. I've also set Ubuntu up on a separate partition and dabbled in it last night. Not sure if I should skip Ubuntu and go right to CentOS? I also downloaded VMware so I could run it through a VM. What courses are recommended to get started? Thanks.
|
# ? Aug 3, 2020 19:26 |
|
For desktop use? Need a use case, since so much of the distro choice depends on that. Assuming desktop: I would recommend Fedora; they get a lot of the new features while maintaining a rock solid platform. CentOS is great for servers, but dunno if it would be a good desktop experience since it lags behind in updates. Ubuntu is also a good intro distro, but Canonical has been doing some stupid poo poo more frequently than normal lately. PopOS is a good Ubuntu derivative if you want to game.
|
# ? Aug 3, 2020 20:46 |
|
Zotix posted:What's the go-to method/ course/ etc to go from Linux zero to hero in 4-6 months? I know that's a short time frame for a new OS and it's a constant learning process but I'd like to learn as much as I can. I have Ubuntu and CentOS iso's downloaded. I've also set Ubuntu up on a separate partition and dabbles in it last night. Not sure if I should skip Ubuntu and go right to CentOS? I also downloaded VM Ware so I could run it through a VM. What courses are recommended to get started? Thanks. Set up a NAS with ZFS and sonarr/radarr/torrents/plex with no GUI. Ssh only. Then turn it into an Ansible routine.
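A first pass at the "Ansible routine" part might look like this sketch. The host group, package names, and the assumption that the Plex package repo is already configured are all mine, not a tested config:

```yaml
# hypothetical playbook: baseline for a headless NAS/media box
- hosts: nas
  become: true
  tasks:
    - name: Install ZFS tooling (Ubuntu/Debian package name assumed)
      ansible.builtin.apt:
        name: zfsutils-linux
        state: present

    - name: Ensure the Plex service is enabled and running
      ansible.builtin.service:
        name: plexmediaserver
        state: started
        enabled: true
```

The point of the exercise is that once this exists, rebuilding the box is one `ansible-playbook` run instead of an afternoon of ssh.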
|
# ? Aug 3, 2020 23:24 |
|
Matt Zerella posted:Set up a NAS with ZFS and sonarr/radarr/torrents/plex with no GUI. Ssh only. Then turn it into an Ansible routine. Times have changed! When I got my first linux job in 1998 all I could do was compile the kernel and get a ppp connection configured.
|
# ? Aug 4, 2020 01:10 |
|
xzzy posted:Times have changed! When I got my first linux job in 1998 all I could do was compile the kernel and get a ppp connection configured. I do not miss those days. I remember getting a red hat CD with a magazine and only had man pages to keep me warm.
|
# ? Aug 4, 2020 01:31 |
|
Matt Zerella posted:I do not miss those days. I remember getting a red hat CD with a magazine and only had man pages to keep me warm. there was usually a README
|
# ? Aug 4, 2020 01:40 |
|
By the time I was on the scene TLDP was a thing that existed and it was good enough, but I had to actually download it to a local copy so I could reference it while I was loving around with stuff.
|
# ? Aug 4, 2020 03:02 |
|
Mr. Crow posted:For desktop use? Need a use case since so much of the distro choice depends on that. Assuming desktop : For a similar but different distro, openSUSE is sort of the Euro/German equivalent of Fedora/CentOS (it can fill both roles, depending on the install choices you make), just like SUSE proper is to German enterprise what Red Hat is in the US. There's both a regular stable version called Leap, and a rolling release version called Tumbleweed, which gets the latest versions of everything. I use Leap, because I'm old and I don't mind not using the absolute latest version of everything.
|
# ? Aug 4, 2020 07:13 |
|
KozmoNaut posted:For a similar but different distro, openSUSE is sort of the Euro/German equivalent of Fedora/CentOS (it can fill both roles, depending on the install choices you make), just like SUSE proper is to German enterprise what Red Hat is in the US. My old desktop has Leap and my new desktop has Tumbleweed, and they have both been great. My old desktop has had like 4 or 5 in-place upgrades to new point releases of Leap and it was never a problem. The only problem I've had on Tumbleweed is the wifi getting flaky, and like I think I've said before, I'm pretty sure that is a Realtek problem more than an openSUSE one.
|
# ? Aug 4, 2020 07:28 |
|
Yup, I've been running it on my laptop (TP X220i) for a while, everything just *works*. That influenced my decision to also use it on my NAS/HTPC. My desktop is still running KDE Neon, but I'm getting fed up with the Ubuntu underpinnings, so that's getting openSUSE as well, when I get around to it.
|
# ? Aug 4, 2020 08:05 |
|
I'm seeing that Fedora might make Btrfs the default over ext4 in their next release. I've been thinking about reinstalling my desktop anyway so I might give it a try, but is there any reason to choose Btrfs over ext4 if you're not running a server?
|
# ? Aug 4, 2020 10:32 |
|
Kassad posted:I'm seeing that Fedora might make Btrfs the default over ext4 in their next release. I've been thinking about reinstalling my desktop anyway so I might give it a try, but is there any reason to choose Btrfs over ext4 if you're not running a server?

Btrfs (and ZFS, which shares a number of key features) takes a different approach than a more traditional filesystem. The biggest difference is that it is a copy-on-write filesystem, which means that any changes are made on copies of the original data and only actually committed once they complete successfully. Think of it as the next step up in data integrity over a journaled filesystem like ext4. The drawback is that this does trade some outright performance for reliability, but there are ways to mitigate that for most use cases, for instance using compression.

Another major feature is that the handling of multiple devices is in the filesystem itself, rather than in dedicated RAID hardware or software (mdadm/LVM in Linux). For instance, in my NAS I started my storage pool on a single Btrfs-formatted 2TB disk moved over from my desktop PC. Then I added two additional disks (2TB+3TB) to the storage pool using "btrfs device add /dev/sdX /mnt/Storage", creating one big 7TB contiguous volume. When that was done, I used "btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/Storage" to convert the storage pool to a 3.5TB RAID1* for data redundancy and balance the data across the devices.

* (Btrfs RAID1 is not the same as hardware RAID1. It takes a storage pool with an arbitrary number of disks and makes sure there are always two copies of any data, distributed over all the devices. That means you can lose one drive without losing data.)

Another very important feature is the ability to create snapshots. A snapshot is a copy of a specific state of the filesystem, and as long as nothing changes, it doesn't take up any additional space; it's merely a reference. Then as you change things, it keeps a log of the changes, so you can always roll back to the state of the FS when you took the snapshot. This is extremely handy for backups and "system restore"-type functionality in case of botched upgrades; openSUSE does this automatically by default. You can also send and receive snapshots between networked systems with Btrfs filesystems, for handy remote backups.

There are also subvolumes, which in openSUSE take the place of the traditional UNIX partitions for /usr, /home, /var and so on. They are still part of the base volume and expand as needed, they can have quotas applied to them, plus they're excluded by default from root volume snapshots and can be snapshotted individually. That means you can snapshot your system files and roll them back completely separately from your config files in /etc or your home folder.

Coming back to performance and compression, modern SSDs and CPUs are so fast that a good compression algorithm can improve performance, especially if your SSD is bottlenecked by the interface it's connected to. A fast compression method can save ~30% of storage space in most cases, while improving performance because it reduces the effective amount of data that needs to be read from the drive.

For ordinary everyday desktop usage, the main differences will be improved data integrity and snapshots, especially if your distro uses them like openSUSE does to provide rollbacks in case of failed upgrades or config changes.
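The snapshot and send/receive features mentioned above come down to a handful of commands. A rough sketch, assuming a mounted Btrfs root and root privileges (the paths and hostname are examples, not anyone's real layout):

```shell
# read-only snapshot of the root subvolume before an upgrade
btrfs subvolume snapshot -r / /.snapshots/pre-upgrade

# list subvolumes and snapshots to confirm it exists
btrfs subvolume list /

# replicate the snapshot to another Btrfs machine for backup
btrfs send /.snapshots/pre-upgrade | ssh backup-host btrfs receive /mnt/backups
```

Because the snapshot is copy-on-write, the first command is essentially instant and free; space is only consumed as the live filesystem diverges from it.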
|
# ? Aug 4, 2020 11:53 |
|
Btrfs: I came for the improved data integrity
|
# ? Aug 4, 2020 15:26 |
|
Haters vacate the premises
|
# ? Aug 4, 2020 15:44 |
KozmoNaut posted:Haters vacate the premises
|
|
# ? Aug 4, 2020 17:54 |
|
People got all hyped and started using it in production way too early. It's the eternal curse of always wanting the newest and shiniest stuff as quickly as possible. Their unwitting data sacrifices helped iron out a lot of bugs, though. The same thing happened with KDE4. Yeah it was kinda technically 4.0, the big new rewrite on Qt4 and all shiny and stuff, but the KDE team tried really hard to remind people that it was to be considered an early release and to please wait until it was considered stable before using it. Obviously people didn't listen and got all huffy when it turned out to not be stable yet, surprise. It's also why I've switched back to a normal release cycle distro now instead of a rolling release cycle one. I can't be arsed to be a tester in my free time anymore. KozmoNaut fucked around with this message at 18:08 on Aug 4, 2020 |
# ? Aug 4, 2020 18:05 |
|
KozmoNaut posted:People got all hyped and started using it in production way too early. It's the eternal curse of always wanting the newest and shiniest stuff as quickly as possible. Their unwitting data sacrifices helped iron out a lot of bugs, though. I love rolling release in concept but gently caress tracking down why something is misbehaving for the 3rd time today
|
# ? Aug 4, 2020 18:20 |
KozmoNaut posted:People got all hyped and started using it in production way too early. It's the eternal curse of always wanting the newest and shiniest stuff as quickly as possible. Their unwitting data sacrifices helped iron out a lot of bugs, though. BTRFS developers, meanwhile, removed the alpha notice seemingly without having tested it, as the first reports of corruption of actual honest-to-goodness production data came in weeks, if not days, afterwards. Meanwhile, the available documentation suggests that everything is still not all sunshine and daisies. Ironically, it looks as if they've successfully managed to scrub the evidence of the removal of the alpha notice from the web. BlankSystemDaemon fucked around with this message at 18:44 on Aug 4, 2020 |
|
# ? Aug 4, 2020 18:33 |
|
KozmoNaut posted:Btrfs (and ZFS, which share a number of key features) is a different approach than a more traditional filesystem. The biggest difference is that it is a Copy-on-Write filesystem, which means that any changes are done on copies of the original data and only actually committed they are successful. Think of it as the next step up in data integrity over a journaled file system like ext4. The drawback is that this does trade some outright performance for reliability, but there are ways to mitigate that for most use cases, for instance using compression. Thanks for the explanation. It does potentially seem like overkill for a desktop from what you say (keeping in my mind that I back my poo poo up regularly).
|
# ? Aug 4, 2020 18:55 |
|
Ignoring the data integrity issue, snapshots are helpful for any kernel or dist upgrades. I just use LVM though, because I'm lazy.
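For reference, the LVM version of a pre-upgrade snapshot is roughly this sketch (the VG/LV names and snapshot size are made up; run as root):

```shell
# snapshot the root LV before a kernel/dist upgrade
lvcreate --size 5G --snapshot --name root_pre_upgrade /dev/vg0/root

# roll back: merge the snapshot into its origin (the merge takes effect
# the next time the origin LV is activated, e.g. after a reboot)
lvconvert --merge /dev/vg0/root_pre_upgrade

# or, if the upgrade went fine, just discard the snapshot
lvremove /dev/vg0/root_pre_upgrade
```

The main gotcha versus Btrfs snapshots is that an LVM snapshot has a fixed size and silently becomes invalid if the copy-on-write area fills up, so size it generously.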
|
# ? Aug 4, 2020 19:27 |
|
D. Ebdrup posted:Matt Ahrens and Jeff Bonwick were using ZFS after just a year of development for their home folders, and Sun started using it internally a year after that. It wasn't announced until 2 years after that and didn't ship until 2 years after that. RAID5/6 simply shouldn't have been implemented in the first place. There is absolutely no need for them. Regarding the alpha notice, anyone who rolls out a brand new filesystem to production machines before testing it internally first deserves every single data loss they experience. Refusing to do the proper research, preparation and testing before rolling out any new hardware or software is simply bad practice. Never play fast and loose with production systems. E: I don't really care about the turbulent history and flamewars, I care about that it works and that SUSE trusts it enough to ship it as the default filesystem. It's the same story as with systemd, it got a rocky start and there are cadres of Linux geeks with seemingly too much free time who are still moaning about flamewars that happened years and years ago and issues that have been fixed for ages. The sheer amount of ignorance and misinformation about the current state of Btrfs is impressive and saddening. E2: Here's a fair article about Btrfs, from the perspective of Facebook, who uses Btrfs extensively: https://lwn.net/Articles/824855/ And it's not like ZFS is without its share of serious bugs: https://github.com/openzfs/zfs/issues/7401 KozmoNaut fucked around with this message at 20:19 on Aug 4, 2020 |
# ? Aug 4, 2020 19:54 |
KozmoNaut posted:RAID5/6 simply shouldn't have been implemented in the first place. There is absolutely no need for them. You seem to have missed my point, which was that the notice was quietly removed instead of the developers coming out and saying "we've done our best to test it, but we'd awfully like it if we could get much more broad-scale testing, on non-critical systems", or something to that effect. Had that happened, I wouldn't have nearly the same problems with it that I do - I don't even expect them to do what Sun did and run it for over half a decade in production before releasing it, just that a little more forethought goes into things which affect production systems, because the world is absolutely devops mad and everything has to go at the fastest rate possible. It's open source - if you know that the information on what amounts to an official source of documentation is outdated and likely to be a source of misinformation, send a patch or at least file a bug report. Bringing up a completely separate issue is just a misdirection - but the lwn article you link also mentions BTRFS issues associated with striping with distributed parity. At least they don't dismiss it because they don't use it. What is now the OpenZFS repo isn't the codebase that started at Sun in 2001, it's a reimplementation based on code that originates at LLNL - and that particular commit has neither made it into the Illumos codebase, nor is it in FreeBSD, macOS or NetBSD. Case in point, I seem to recall that ZFS was used to recover the data, which was never really lost, as it was actually written to disk.
|
|
# ? Aug 4, 2020 22:10 |
|
KozmoNaut posted:E2: Here's a fair article about Btrfs, from the perspective of Facebook, who uses Btrfs extensively: https://lwn.net/Articles/824855/ Facebook's usage is probably different than many users' because they have the tooling to take a node/VM/container out of service at the drop of a hat. Even the articles I've read about their distributed storage usage of btrfs indicates that they're doing redundant CDN type instances for photos. I'm not confident smaller enterprises could match their engineering capacity to make something that resilient with btrfs.
|
# ? Aug 5, 2020 02:42 |
|
OpenSUSE has been making Btrfs default for new installations for years, but I steadfastly keep going with ext4 instead. As much as snapshots and such appeal to me, I prefer going with a known quantity and worrying about back-ups manually.
|
# ? Aug 5, 2020 03:05 |
|
On a zfs/bsd note, I finally have a system set up correctly to play with bectl and boot environments. It seems ... easier than expected, which is always nice. That said, I haven't actually done anything with it yet - but since this is a spare machine under my desk at home and it's a holiday, I guess it's time to get a bit reckless in /usr/src.
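For the curious, the whole bectl workflow really is only a few commands. A sketch (the boot environment name is arbitrary; run as root on a ZFS-on-root FreeBSD system):

```shell
# clone the current boot environment before getting reckless in /usr/src
bectl create pre-src-update

# see what exists and which environment is active
bectl list

# if the new world is broken, reactivate the known-good environment and reboot
bectl activate pre-src-update
```

Since boot environments are ZFS clones, creating one is instant and costs almost nothing until the active system diverges from it.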
|
# ? Aug 5, 2020 04:13 |
CaptainSarcastic posted:OpenSUSE has been making Btrfs default for new installations for years, but I steadfastly keep going with ext4 instead. As much as snapshots and such appeal to me, I prefer going with a known quantity and worrying about back-ups manually. EDIT: By sheer coincidence, the new bootonce feature just landed in FreeBSD. Computer viking posted:On a zfs/bsd note, I finally have a system set up correctly to play with bectl and boot environments. It seems ... easier than expected, which is always nice. BlankSystemDaemon fucked around with this message at 20:33 on Aug 5, 2020 |
|
# ? Aug 5, 2020 13:49 |
|
I did that on another machine not super long ago, but was mildly dreading trying to remember the magic keywords to rediscover it - thanks.
|
# ? Aug 6, 2020 03:37 |
|
I wanted to play around today with the kernel's keyring service capabilities, and to store encrypted keys in it. Much to my dismay, neither Fedora 32 nor Debian 10.5 ships with encrypted-keys.ko built. I suppose I could build my own kernel, but that seems really strange. And the internet is very silent about it. Yup, they simply do not have the file. Is there a reason for not including it? Am I just searching wrong for it?
|
# ? Aug 18, 2020 22:48 |
|
Volguus posted:I wanted today to play around with kernel's keyring service capabilities. And to store encrypted keys in it. To much of my dismay, neither FEdora 32 nor Debian 10.5 do not ship with the encrypted-keys.ko built. I suppose I could build my own kernel, but that seems really strange. And the internet is very silent about it. Yup, they simply do not have the file. Are you sure it's not just not being built as a loadable module?
|
# ? Aug 18, 2020 23:06 |
|
mystes posted:Are you sure it's not just not being built as a loadable module? In the config it says that CONFIG_ENCRYPTED_KEYS is not set, and "keyctl add encrypted ...." fails with "add_key: No such device", which is a sign that the module was not loaded. That's on Debian 10.5, though; I haven't checked Fedora (it could be built in there, I guess). So yeah, in Debian it is simply not built at all, neither into the kernel nor as a module.
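A quick way to check any given box, sketched out (config file locations vary by distro, and /proc/config.gz only exists if the kernel was built with CONFIG_IKCONFIG_PROC):

```shell
# check whether the running kernel supports encrypted keys
if [ -r "/boot/config-$(uname -r)" ]; then
    grep ENCRYPTED_KEYS "/boot/config-$(uname -r)"
elif [ -r /proc/config.gz ]; then
    zgrep ENCRYPTED_KEYS /proc/config.gz
fi
# =y            -> built in, nothing to modprobe
# =m            -> run "modprobe encrypted-keys" first
# "is not set"  -> the kernel can't do it without a rebuild
```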
|
# ? Aug 18, 2020 23:20 |
|
Volguus posted:In the config it says that CONFIG_ENCRYPTED_KEYS is not set, and "keyctl add encrypted ...." fails with "add_key: No such device" which is a sign that the module was not loaded. That's in Debian 10.5 now though, haven't checked Fedora (it could be built in I guess in there). So yeah, in Debian it is simply not built at all, neither in the kernel nor as a module.
|
# ? Aug 18, 2020 23:43 |
|
$ grep PRETTY_NAME /etc/os-release
PRETTY_NAME="Fedora 32 (Workstation Edition)"
$ grep CONFIG_ENCRYPTED_KEYS /boot/config-*
/boot/config-5.6.14-300.fc32.x86_64:CONFIG_ENCRYPTED_KEYS=y
/boot/config-5.6.18-300.fc32.x86_64:CONFIG_ENCRYPTED_KEYS=y
/boot/config-5.7.8-200.fc32.x86_64:CONFIG_ENCRYPTED_KEYS=y
|
# ? Aug 19, 2020 08:05 |
|
other people posted:$ grep PRETTY_NAME /etc/os-release Yes, I saw that Fedora has it built-in, which helps me somewhat for what I wanted to do. I was looking for the module initially and it wasn't there, as a Red Hat guide was saying to modprobe it. Debian is still a bummer.
|
# ? Aug 19, 2020 13:44 |
|
|
Volguus posted:Yes, I saw that Fedora has it built-in. Which helps me somewhat for what I wanted to do. Was looking for the module initially and wasn't there, as a RedHat guide was saying to modprobe it. Debian is still a bummer. CONFIG_ENCRYPTED_KEYS is baked into the kernel in both RHEL7 and RHEL8 so I hope that wasn't an official guide you were reading
|
# ? Aug 19, 2020 14:32 |