Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Smrtz posted:

The current book is too expensive for me. How much is left of it before a new book is started?
Which one? You've got different people simultaneously going through the Dekker human factors book, the RHCSA book, and the PowerShell book.


Smrtz
May 26, 2015

Vulture Culture posted:

Which one? You've got different people simultaneously going through the Dekker human factors book, the RHCSA book, and the PowerShell book.

They're all a little pricey for me. I'll just jump in for the next book.

Any idea when that'll start?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Everyone kind of just proposes their own readings, but here are some design thinking books that I like and that dovetail nicely into Dekker's book from a human factors standpoint:

Don't Make Me Think (Revisited), by Steve Krug (~$20): the classic web usability book, updated for modern paradigms and mobile applications.
The Design of Everyday Things, by Don Norman (~$9): a primer on how the things we use every day function as simply as possible. This one's full of lots of great little reminders of how much work goes into making something as rudimentary as a door function well.

We're around halfway through the Dekker book.

Vulture Culture fucked around with this message at 20:50 on Jun 10, 2015

Smrtz
May 26, 2015

Vulture Culture posted:

Everyone kind of just proposes their own readings, but here are some design thinking books that I like and that dovetail nicely into Dekker's book from a human factors standpoint:

Don't Make Me Think (Revisited), by Steve Krug (~$20): the classic web usability book, updated for modern paradigms and mobile applications.
The Design of Everyday Things, by Don Norman (~$9): a primer on how the things we use every day function as simply as possible. This one's full of lots of great little reminders of how much work goes into making something as rudimentary as a door function well.

We're around halfway through the Dekker book.

Hmm, The Design of Everyday Things does seem pretty cool, and it's cheap....

I mentioned it before and someone said they'd be interested, but Bruce Schneier's Applied Cryptography is really interesting and only about $35 new. I've actually already got it, but every time I try to read it I only get a few chapters in before something happens and I put it down, and then never bother to pick it back up...

Roargasm
Oct 21, 2010

Hate to sound sleazy
But tease me
I don't want it if it's that easy
I've been slacking massively on RHCSA. Not gonna get to it this week either since once this lovely plane gets fixed I'm on vacation :toot:

Also Powershell 5 is loving awesome and if you were going to buy a PS book I would wait until someone writes one for this revision. Month of Lunches is very engaging though.

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?


Roargasm posted:

I've been slacking massively on RHCSA. Not gonna get to it this week either since once this lovely plane gets fixed I'm on vacation :toot:

Same here, it's warm outside and I don't want to do more work after work.

Then again, because I'm semi-competent at bash I'm the Linux Expert at work :suicide:

Edit, I'll try to make a post this weekend.

Gucci Loafers fucked around with this message at 04:15 on Jun 11, 2015

MC Fruit Stripe
Nov 26, 2002

around and around we go
Powershell FIVE? There's barely relevant books for 4, sheesh. (edited from "literature" to "books", online help is unimpeachable)

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Got burnt out and haven't done these in like a month. Let's finish this book before my kid is born and it's gone forever!

Chapter 12: Build a Timeline
This is a long and dense chapter. It is a significant departure from previous chapters because it includes a lot of "what to do" instead of "what not to do."

When reconstructing a situation, it's helpful to create a timeline. In addition to helping demonstrate cause-and-effect (refer back to the limitations of the sequence-of-events model), it's helpful to be able to reference exactly what information was or was not likely known by people in the situation at any given time. Additionally, if not for time constraints, phenomena like taskload, workload management, stress, fatigue, distractions, and problem escalation would be effectively irrelevant. There are many different ways of constructing a timeline, and many different degrees of resolution that can be used. In order to surface issues with human performance, you should strive to produce a timeline of the highest possible resolution. Dekker defines the resolutions as follows:
  • What was said or done (low resolution timeline)
  • What was said or done when (medium resolution timeline)
  • What was said or done when and how (high resolution timeline)

Unless you routinely collect very in-depth operational data, like voice recordings of people involved in a procedure, the data you have available may not be sufficient to reconstruct a timeline at high resolution. However, you may have access to logs, instrumentation, and other machine data that can help you interpolate an understanding of the situation. The ending point of an incident is usually fairly obvious, but the beginning of a timeline is somewhat arbitrary. (As Dekker said in previous chapters, a "root cause" is simply the place where you stop looking.)
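If machine data is all you have, a low-resolution timeline can start as nothing fancier than a chronological merge of whatever timestamped sources exist. The filenames and log formats below are invented for illustration, not anything Dekker prescribes:

```shell
# Three fake "log sources" with ISO-8601 timestamps; ISO-8601 sorts
# lexically, which is also chronologically, so plain sort merges them.
printf '2015-06-10T20:01Z app restarted\n' > app.log
printf '2015-06-10T20:00Z alice: deploying now\n' > chat.log
printf '2015-06-10T20:03Z PAGE: api latency high\n' > pager.log

sort app.log chat.log pager.log   # one merged, time-ordered timeline
```

From there you annotate: what did each person know at each line, and what else were they doing?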

A high-resolution timeline should involve the following pieces of information:
  • How and when people engaged in conversation switch roles as speaker and listener
  • How and when silence occurs in the conversation
  • What overlap occurs when people talk at the same time
  • Vocal mannerisms that may be relevant to people's emotions or thought processes in the situation (including "tokens" like "oh," "um," "ah")

These high-resolution timelines help understand what was going on in the process at the time an error may have occurred, as well as understanding what other tasks people might have been involved in or distracted by. The goal is to couple behavior to the situation and, as we've been doing, understand why what someone did made sense to them at the time. We want to understand how inputs to that process changed over time, as well as how and why those inputs were relayed (especially in the case of mechanical status indicators). This gives us the tools we need to determine which of these were potential contributors to the problem, and which were not.

We should focus our attention on the following areas:
  • Moments where people or processes contributed critically to the outcome
  • Places where people or processes did or could have done something to influence the direction of events

In particular, here are some common kinds of events that are worth noting:
  • Decisions
  • Shifts in behavior (especially where people realize the situation is different from what they previously believed)
  • Actions to influence the process, which provide insight into how people understood the situation
  • Changes in the process, especially changes in state of an automated process managed or monitored by humans

The goal is to link people's decisions to the processes that govern them. As we strive to determine what people were trying to accomplish, we ask the following:
  • What is normal at this point in the process or operation?
  • What was happening? How was the system set or configured? How do we know? How did the people in the situation know?
  • What were other people doing? How were duties divided?

waloo
Mar 15, 2002
Your Oedipus complex will prove your undoing.
Vulture Culture, which edition have you been working through of that book? Your chapters and the ones in the version I just read do not seem to match up so well.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
I'm using the Kindle edition, which is apparently the '06 edition and not the latest version of the book. :shrug:

evol262
Nov 30, 2010
#!/usr/bin/perl
Just an update -- I'm a bum who hasn't been keeping up on the RH cert books at all, but will continue the analysis this week

evol262
Nov 30, 2010
#!/usr/bin/perl
Chapter 5: So-so virt that you don't really need to know for the RHCSA (on most exams)
  • I kind of take issue with the muddled way he talks about KVM/libvirt/qemu, so let's be really clear. qemu emulates a system and provides devices, the best of which tie into the virtio systems in the kernel. KVM provides an accelerated driver for qemu to use, which itself uses hypercalls and ring -1 hardware virt stuff (rings are beyond the scope of this). libvirt provides convenience functions for automatically migrating virtual machines, providing storage pools, setting up guest networks, etc. He goes back and forth all over the chapter, and often uses the wrong term, so keep this in mind.
  • Hot-adding CPUs is very guest dependent.
  • libvirt handles live migration, storage thin provisioning, etc. This is "virtualization", but it's libvirt, and libvirt can also do it for lxc and xen and anything else libvirt supports (including esxi/vsphere, though that's always half broken)
  • Storage pools can also be set up against ceph and gluster, and finding out what support is actually there isn't hard. "rpm -qa | grep libvirt" shows a bunch of "libvirt-daemon-driver-*" packages; libvirt-daemon-driver-storage is clearly the storage bits. "rpm -q --filesbypkg libvirt-daemon-driver-storage" shows you one (1!) library (.so files are broadly equivalent to .dll files on Windows). Run "ldd /usr/lib64/libvirt/connection-driver/libvirt_driver_storage.so" and you'll see stuff like "librados" and "librbd", both of which are used for ceph. In general, Googling a library file will tell you where it comes from. libvirt's storage driver being linked against these is a good way to tell that it can use them, even if the GUI virt-manager tools he loves so much don't have wizards for them
  • libvirt is not the default hypervisor and virtual machine management software. KVM is the hypervisor. libvirt is glue that can manage your VMs, but it's not necessary, and libvirt doesn't know anything about VMs started by qemu-kvm directly by you (even though it invokes the same things). Download an ISO, create a backing store with qemu-img, and run the command in the next bullet to watch virt-manager/libvirt/virsh completely fail to manage it
  • /usr/libexec/qemu-kvm -m 1024 -smp 1 -drive file=~/test.img -name unmanaged -cdrom ~/Downloads/centos.iso -boot d -net user -net nic -serial stdio
  • As always, just ignore the GUI stuff.
  • Almost nobody is going to memorize the syntax of some XML files. All the "net-dumpxml" (and anything that has dumpxml) commands are your best friends, since they give you a good template to base it from. If there's no default networks/etc on the test system (and there may not be), remember that you have repositories set up, that many of the default configs for any package are somewhere in /usr/share, etc. The certs are not a braindump, and you have enough time to go read docs for some stuff if you need to.
  • Using "find . | cpio" is just unnecessary for a lot of stuff, and cpio does badly with sparse files unless you specify (among other things). Just "cp -Rp ..." in 99% of cases.
  • You don't need a bridge to have bi-directional communication between guest VMs or host/guest. You need it for the guests to get "regular" addresses on a non-NATed network and be accessible from outside the host, like almost every other virt solution that gives you the option of "NAT", "host-only", and "bridged"
  • Copying /root/anaconda-ks.cfg means that you get very verbose options to everything and unnecessarily numbered stuff. It's fine, but bear in mind that you can cut some of these down if you want. Read the kickstart docs if you want to go through this and see "what's necessary and what's not"
  • The sections in the kickstart can be rearranged if you want. The "commands" section (at the top with no %header) and %packages must be in order. After that, you can do whatever you want with %pre and %post, and using %pre to generate partitioning per hardware config is common in the real world (for virt machines with only one disk, do A, for DL380s, do B, etc)
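The per-hardware %pre pattern in the last bullet looks roughly like this. This is a hedged sketch: the DMI path, sizes, and disk names are assumptions, and the kickstart docs are the real authority:

```shell
# %pre runs before partitioning, so it can generate partitioning on the fly.
# The commands section would contain: %include /tmp/part-include
%pre
if grep -q DL380 /sys/class/dmi/id/product_name; then
    # physical DL380: split across two disks
    echo "part /boot --fstype=xfs --size=500 --ondisk=sda" >  /tmp/part-include
    echo "part /     --fstype=xfs --size=1 --grow --ondisk=sdb" >> /tmp/part-include
else
    # single-disk VM: keep it simple
    echo "part /boot --fstype=xfs --size=500" >  /tmp/part-include
    echo "part /     --fstype=xfs --size=1 --grow" >> /tmp/part-include
fi
%end
```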
Chapter 6: The author doesn't understand how any of this works
To start, if you really want to get a good idea of how UEFI works, you should read this, then continue on, because there's way too much wrong in this book for me to go into detail about it all
  • The kernel only runs, schedules, and manages processes and service daemons in the sense that there's a scheduler in the kernel, and the kernel ultimately signals processes and reaps dead ones, but it's not actively "managing" many of them in any real sense. systemd and users and administrators and httpd and a bunch of other things are managing and running them (or executing them, and the kernel actually "runs" them, but the book never makes this distinction clear)
  • systemd does replace init and upstart. But upstart already replaced init in RHEL6. Please use "systemctl reboot" and "systemctl halt", not plain "reboot". Yes, really.
  • RHEL7 logs all system activities by default to the system journal, provided by journald, unless that process handles its own logging or specifies a different output file in its systemd unit file. Any "logging to appropriate files" is handled by rsyslog, as always, and that is not guaranteed to be present on all systems (I'm the maintainer of the RHEL guest images for openstack, and we trim a lot of stuff from there, and I know it also happens in docker images and EC2 images, and...)
  • BIOS/UEFI doesn't "install appropriate drivers for the video". UEFI uses GOP, and BIOS uses VBE, in general.
  • Read the link above. UEFI systems are not scanning for a 512 byte boot sector. The author either doesn't know this or has opted to not talk about UEFI at all
  • GRUB2 does support BIOS/MBR and BIOS/GPT and UEFI (which is only GPT). But it needs to be redone if you change this, because EFI booting relies on EFI executable residing in the EFI partition
  • On that note, GRUB2 isn't looking for /boot/efi and running any configuration based on /boot/efi/EFI/redhat/grub.efi. The EFI system partition (of which there can be more than one, but generally is only one) gets mounted at /boot/efi by Linux, but grub-efi (and EFI) is just looking for a FAT32 partition with \EFI, and /boot/efi/EFI/redhat/grub.efi is an EFI executable which is actually grub. grub-legacy (and grub2 on BIOS) loads a very small stage1 which looks for grub stage2 on the "root (hd..." system specified in the grub config file. The grub configuration file on EFI is /boot/efi/EFI/redhat/grub.cfg. /boot/efi needs to be mounted on kernel updates to update the grub config (a utility aptly called "grubby" updates grub as part of the kernel RPM scripts). grub doesn't actually depend on /boot/efi or any Linux pathnames to boot.
  • The initial ramdisk (initrd) is not mounted read only. It's mounted read-write. The root filesystem is mounted read-only at /sysroot during the initrd boot process (dracut). You can play around with this by appending "rd.shell rd.break=pre-pivot" to the boot options, which will drop you into a shell in the initrd before it switches. Many of the binaries are statically compiled (though they don't all need to be, which is a change from earlier init systems), and you can expect to find some "basic but unnecessary for booting" utilities like "which" missing. Still, it's not magic and you can play with it.
  • Any modules needed for actually mounting the root filesystem must be present in the initrd, and can be added to dracut. It's limited and it isn't magic. "dracut --list-modules" shows a list of available modules, and writing your own isn't that hard. Ping me if you have questions about it, which is unlikely here
  • Whew, that was just the first two pages!
  • You do not need to touch /etc/default/grub. grubby's pretty smart. In general, if you want to add options to the kernel, you can just edit /boot/grub2/grub.cfg or /boot/efi/EFI/redhat/grub.cfg and add them. grubby will pick those up and copy them over to the next kernel you install. You can use /etc/default/grub if you want to, but you don't need to
  • You can add scripts beyond 40_custom and 41_custom. They run in numeric order, so you can add "99_yourscript" if you want to be reasonably sure it runs last. You probably won't need to touch this.
  • grub-set-default can actually use strings, and this is the default. The "root" directives in grub2 also specify /boot, and it looks for /boot/grub2/grubenv for environment variables, including "saved_default". If you set a friendlier title (like "oldkernel" or "dban" or whatever), you can set-default that just as easily without guessing, which is nice if you want to automatically fall back to some kernel without guessing/parsing how many entries there are.
  • The kernel does not provide libraries. The kernel does not provide libraries. The kernel does not provide libraries
  • The kernel provides an interface for modules (both through headers to compile them and a binary interface for prebuilt modules to be loaded, though this changes, and guaranteeing a stable ABI but backporting changes is one of the things RHEL offers you over some other distros). ELF and a.out execution are also beyond the scope of this post, but I'm happy to go into it if someone really wants to know.
  • Just to clarify, drivers are modules (sometimes, sometimes they're built in and they aren't loadable modules), but modules can also be crypto support, etc.
  • Each architecture has its own kernel, just the same (basic) config, and not all options for one architecture exist for another one in the kernel config
  • Not all modules need kernel-devel. Some are happy with kernel-headers. You need kernel-devel if you want to rebuild the kernel yourself.
  • kernel-headers specify the same thing as all C header files -- what functions are present and what their signatures are
  • kernel-debug is primarily used for systemtap
  • The kernel does not need to be rebuilt when new functionality is added or removed. That's exactly what modules are for. Exactly. Changing a "critical system component" and "adding hardware" do not require a kernel rebuild. At all. Removing functionality to reduce memory usage is very 2001 and not recommended at all unless you know what you're doing and exactly what you need/don't need. It's not fun to find out that you removed the driver for the NICs in your new system or the HBAs on your storage servers when you deploy a new image with your crappy kernel.
  • -123 is the "custom kernel version from Red Hat". Meaning that it's the 123rd version of kernel 3.10.0 that we've made, and they keep getting incremented as we backport hardware enablement and features, etc. There are still patches applied. A lot of them. You can broadly expect the minor number to never change during a RHEL release cycle
  • You don't need to add modules when you allocate a new LUN. Any hardware that's "undetected" will show up in lspci (or other utilities) if it's present at all, and building a module adds drivers for it. Like Windows, basically. You don't need to build a new module or rebuild the kernel when you add a video card or a NIC or whatever unless that driver isn't part of the kernel (which almost everything is).
  • systemd units also have a "masked" state which basically says "this is disabled and pretend it doesn't even exist if something else depends on it and tries to start it"
  • Using pkg-config to display variables is incredibly stupid (by the author), especially with no leadin. It's used for defining stuff in rpm specfiles and makefiles and elsewhere, and you can go look at "pkg-config --list-all", followed by "pkg-config $module --print-variables", which could lead you to the author's examples, but he certainly doesn't tell you how you'd magically know this invocation.
  • System logging
  • Just to clarify (yes again), rsyslog has its own core engine that it calls a kernel. Adding "modules" to the "kernel" here means rsyslog. Not the kernel.
  • "grep -v ^# | grep -v ^$" removes commented lines (starting with a #) and blank lines (with no whitespace either) from the output
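If you want to see that last pipeline do its thing without hunting for a config file, printf can stand in for one:

```shell
# A fake three-line config: a comment, a blank line, a real directive.
# grep -v '^#' drops the comment, grep -v '^$' drops the empty line.
printf '# a comment\n\nPermitRootLogin no\n' | grep -v '^#' | grep -v '^$'
# prints: PermitRootLogin no
```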

evol262 fucked around with this message at 17:33 on Oct 19, 2015

Sheep
Jul 24, 2003
Thanks for posting these critiques of the book, evol262. This stuff is super helpful.

evol262
Nov 30, 2010
#!/usr/bin/perl
I'm reasonably sure that you could pass the cert by following along with the book. My big complaint so far has been that it's a terrible way to learn how Linux actually works, if you want to use the book for that. I understand that it's not a book on internals, but I wish he'd just skip elaborating on how things work when he doesn't have any idea and gets it terribly wrong. Continuing...

Chapter 8: Less poo poo because all of this is 25 years old
  • The ability to have users with the same UID comes up later in the chapter, but I just want to say off the bat that UID 0 is not reserved for root. root will always be uid0, and uid0 is treated specially in many places, but you can create another account with uid0 which will be root, and this happens at some shops
  • Forget finger exists. It's not even part of a base install, and it's been a security risk for years (or the finger daemon was -- the finger utility isn't, but you should just avoid the entire thing)
  • The hashed password starts with $6 because that's SHA-512. You can see the supported hash types with "man 3 crypt". ${id}${salt}${hash} is the format (not that the salt really matters to you, but it's useful if you ever want to generate a hashed password string yourself)
  • newgrp is sometimes useful, and it's barely mentioned here. You can "chmod 2XXX" some directory to set default group ownership to the group owner of the directory, but what if you're a non-administrative user who wants to edit some new files and have them owned by "staff" or "admins" or whatever? newgrp to the rescue
  • /etc/skel just has basic configs for bash, emacs, etc. You can easily change these so new users get whatever defaults. Or add them to /etc/bashrc.
  • All these options to useradd... the most common invocation is "useradd -m -G somegroup user" (where "somegroup" is often "wheel" on your test systems)
  • cd /etc/; grep user2 passwd... is terrible for two reasons. First, it leaves you in /etc because it doesn't end with "cd -", which takes you back to the last directory you were in. Second, shells can just expand this: "grep user2 /etc/{passwd,shadow,group,gshadow}" is nicer in every way and teaches you a potentially useful bash trick
  • Please ask Suspicious Dish about why you shouldn't rely on ~/.gnome2/. But RHEL7 doesn't have ~/.gnome2. It ships with gnome3 by default, which puts a lot of stuff in ~/.config/gnome-session.
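Two of the bullets above are easy to poke at from any shell. The hash string here is made up for illustration, not a real shadow entry:

```shell
# Brace expansion (bash): one pattern becomes four paths before grep ever runs
echo grep user2 /etc/{passwd,shadow,group,gshadow}

# The ${id}${salt}${hash} shadow format, pulled apart with awk on the '$' separator
hash='$6$Wz1x$abc123'
echo "$hash" | awk -F'$' '{print "id=" $2 " salt=" $3}'
# prints: id=6 salt=Wz1x
```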

Chapter 9: Every time you mklabel msdos, somebody dies
  • The second sentence says a partition can span multiple disks. It can't. And it's never been able to. If you set some kind of RAID under it, this is technically true, but that's not meaningful in any sense
  • The "dump space" to store memory and kernel dumps is actually swap
  • GPT vs MBR mess again. It isn't MBR on BIOS-based systems. BIOS-based systems also support GPT. UEFI requires GPT, but you should never be using MBR/msdos partition layouts on a new install in 2015.
  • Basic parts of the early boot process are wrong again. The author seems to think that BIOSes/EFI are a lot smarter than they are. They both do initialize disks. BIOS just looks at the first disk (you know, the "select boot disk" BIOS option we've all seen a million times) and blindly tries to load a boot sector. EFI looks at nvram variables to figure out where to go. This includes basic files on the EFI system partition. It knows filenames. And they're not in /boot. They're on a FAT32 partition (the EFI system partition) with Windows-style path separators, like "\EFI\centos\shim.efi" :gonk: (shim.efi is a chainloader which passes secureboot then loads grub).
  • GPT and MBR are not designed for different firmware types. UEFI wasn't a twinkle in God's eye when GPT came into being (ok, EFI was probably a twinkle in the eye of somebody at Intel for Itanium). Bootloaders need to be GPT-aware. The BIOS does not. GPT solves an entirely different class of problems than UEFI, like logical block addressing, more partitions (without limitations on "primary" partitions), using UUIDs so those partitions are identifiable across operating systems, support for drives over 2TB, storing backup partition tables, and...
  • There aren't a lot of good reasons to "qemu-img create -f raw". Just use "-f qcow2"
  • You should probably actually "virsh domblklist" before deciding --target, right?
  • Another case of "output doesn't match the command". Would "|--vg00-swap" show up in "lsblk | grep vd"? No.
  • Every time he says "mklabel msdos", you should mentally substitute "mklabel gpt". The VMs you'll use for the cert (if you take it) have msdos partition tables, probably to make it more annoying for you, but don't let this become a habit. "mklabel gpt". Learn it and love it.
  • There are good reasons to use gdisk. Like "I have Stockholm syndrome and I really miss fdisk's arcane syntax" and "I don't care if my partitioning tool checks whether partitions are aligned" and "I'm a developer at Intel who needs the ability to use arbitrary GPT types". None of those are you. Everything parted can do for creating partitions on msdos, it can do on gpt. And it's harder to shoot yourself in the foot with parted. Move to gdisk later if you want, but start with parted for gpt
  • All LVM commands, including "pvs" and "vgs" and "lvdisplay" are actually "lvm pvs". Go ahead. Type it. Remember it, because none of the convenience symlinks are there if you find yourself in dracut or another limited environment, but "lvm $verb" still works.
  • The default naming convention should not be "lvol0", and nobody's used that in a while. That's what it defaults to if you don't specify "-n somebettername". Older versions of RHEL used VolGroup00/LogVol00 and such. New versions use "-n $mountpoint". Don't name your logical volumes lvol0. Name them "home" and "opt" and "swap"
  • He missed "lvchange", which you should read about. Primarily, "lvchange -an" and "lvchange -ay" can deactivate/activate logical volumes and remove them from (or add them back to) /dev/mapper, which matters sometimes

kujeger
Feb 19, 2004

OH YES HA HA
I've finished chapter 4, and having these posts to know in advance what to look out for is super awesome, evol262. Huge thanks for this!
Some of this I would know was wrong, some I would be very uncertain about, and some I would have taken the author's word for.

evol262
Nov 30, 2010
#!/usr/bin/perl
Chapter 10: The best chapter in the book so far

Really, it is.
  • I just want to point out that "making a filesystem accessible or inaccessible to other users to hide/reveal information" is something you can do on the same filesystem with bind mounts in fstab (which work, believe it or not), and it's a lovely attempt at "security"
  • It's weird to do it in fstab, but the line below works on the same filesystem if you ever need it. (Just don't "systemctl mask tmp.mount" if you do: whoever decided to put tmpfs on /tmp pulled a genius move and named the unit tmp.mount, and masking tmp.mount is the recommended way to disable that, but it does bad things with a "/tmp /tmp ... bind" line in fstab.)
  • /tmp /tmp none bind,nosuid,noexec 0 0
  • The ext4 driver technically supports ext2, for now, and is not the same as the legacy ext2 driver. It's deprecated, but it'll probably survive into the next iteration of RHEL
  • XFS is used because RHEL has a 10 year support cycle, and xfs has much larger possible filesystem sizes than ext4 at the time rhel7 was forked.
  • Yet again, EFI requires GPT. It's not optional or a choice.
  • autofs can be smb or cdroms or anything, really. It's not a filesystem type, it's not network, and it's not necessarily NFS. It's on-demand mounting.
  • Most filesystems these days use extents. ext2/3 are the exception in not doing so. ext4 is not the exception in doing it.
  • VFAT -> FAT32 here. This is because it's "mkfs.vfat" and "mount -t vfat", but it's still stupid, and 99% of the time it's going to be fat32, even though vfat and fat32 aren't technically synonymous.
  • The best use of filesystem label is to mount by label
  • lsof is a useful tool to use alongside fuser
  • noatime is usually used to speed up filesystem access and reduce wear on media with limited writes.
  • noexec (and nosuid) are commonly used for web data mounts and other bits
  • systemd is (sometimes) smart enough to pick out filesystems (like nfs) that require networking and add them to remote-fs.target, making _netdev optional. If you take the exam, you may as well use it, but expect _netdev to disappear at some point
  • /proc/mounts and /etc/mtab are not the same thing, and systems with a readonly-root commonly have an outdated /etc/mtab. Rely on /proc/mounts, not /etc/mtab.
  • mke2fs -t ext3 -- why? why??? why not "mkfs.ext4"?
  • Speaking of /proc/mounts, it's worth noting that you can pretty much copy lines directly out of there into fstab if you mount something and wanna add it quickly/sloppily
  • His example shows '2599c(root)' for fuser. Great. Why not "ps 2599"? Or "echo $$" to check whether it's your own shell? Some indication of how to actually find out what's using it if it's not you would be nice.
  • Sometimes NFS is broken when you try to mount, and it gives you no errors at all. There will be an obscure one in dmesg or the end of the journal. "rpm -q --filesbypkg nfs-utils | grep service", and you'll see a few. rpcbind.service (not part of nfs-utils), nfs-idmapd, and sometimes rpc-gssd can help (or speed up mounts). Your test system shouldn't be broken this way. A real system someday may be. NFSv3 wants rpcbind. NFSv4 wants rpcbind and nfs-idmapd, preferably with rpc-gssd also.
  • autofs is horrible. But it'll be on there, so you should learn it
  • If you're using a wildcard in auto.master or anything, it needs to be the last line, because they're greedy and they match everything
  • * -nfs4,rw &:/home/& is confusing
  • It's in auto.home, which means:
  • cd /home/evol262 will try to match evol262:/home/evol262. The & is a backreference for the matched key (whatever gets stuck in *). This is an indirect map.
  • man 5 autofs is your best friend on the exam
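For reference, a minimal indirect-map setup along the lines of the bullets above might look like this. The server name is hypothetical; the book's example reuses & as the hostname instead, which is part of why it's confusing:

```shell
# /etc/auto.master: anything under /home is handled by the indirect map below
/home   /etc/auto.home

# /etc/auto.home: '*' matches the requested directory name and '&' repeats
# the match, so "cd /home/evol262" mounts fileserver:/export/home/evol262
*   -fstype=nfs4,rw   fileserver:/export/home/&
```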

Chapter 11: Don't hire this guy for network
  • port 25 is smtp, not "postfix".
  • iptables can also apply updates at any time without disruption, so that isn't actually an advantage firewalld has over it. firewalld's advantages are twofold: first, it's less arcane than iptables; second, nftables is eventually replacing iptables, and firewalld will mask the transition for users.
  • JUMP is also a target for iptables, where the matched rule jumps to another chain, follows that to the bottom, then returns. firewalld uses this heavily
  • The first rule does not accept all inbound connection requests. There's no --state NEW. It accepts all new related packets (passive ftp, etc) and packets from established connections which are accepted elsewhere. Which is just ssh, in this ruleset
  • The forwarding example is simply wrong. ip forwarding isn't enabled on RHEL by default anyway. "echo 1 > /proc/sys/net/ipv4/ip_forward" if you want it. But "iptables -A FORWARD -d 192.168.0.0/24 -j ACCEPT" does not forward all inbound traffic to that network. You can do that with iptables, but you'd need to rewrite packets with postrouting to NAT it, etc. The rule given will look at all packets on all interfaces and check whether the destination header is in 192.168.0.0/24, and if it is, resend it out all interfaces. That's great if you want a router. It's terrible if you want to NAT anything.
  • Same issue on the next page. "iptables -I INPUT ! -d 192.168.3.3/24 -p icmp -j DROP" has multiple problems.
  • He says reject. It doesn't. It drops. This seems pedantic, but they're different things. REJECT immediately responds with "no, I'm not doing this" (probably ICMP Host Prohibited in this case, connection refused for TCP). DROP literally just blackholes it, and you'll see a timeout. Actually, almost all of his "append a rule to reject" has the same problem. They all DROP.
  • Secondly, you can see in the coalesced rules below that 192.168.3.3/24 is nonsense, and iptables has happily said "drop all ICMP traffic which isn't directed to 192.168.3.0/24", which is what his rule actually says. For his rule to work, he'd need:
  • iptables -I INPUT -d 192.168.3.0/24 -p icmp -j DROP
  • iptables -I INPUT -d 192.168.3.3 -p icmp -j ACCEPT
  • Reverse them and make it -A INPUT if you want and it does the same thing
  • This takes us to our second point: iptables is parsed top to bottom, and it craps out at the first matching rule. It doesn't matter what's below it. If rule #1 is "DROP all anywhere anywhere", nothing else matters. The rules need to be in the right order, and -j ACCEPT must accept packets before a rule which drops them
  • His description of forwarding is potentially misleading again. Not technically incorrect, but misleading. It will forward all that traffic. As in route that traffic. It won't rewrite it at all.
  • A couple of pages later in firewalld, we start seeing things like "firewall-cmd --add-port=443/tcp", and he reloads firewalld in the next command. This rule isn't permanent. Guess what happens when we reload firewalld? It's gone.
  • It's shown in "firewall-cmd --list-ports" in the book. But it wouldn't be there: 443/tcp would be missing after reloading firewalld.
  • It also wouldn't be in iptables -L -n.
  • Make rules permanent if you want them to stay
  • Since it's gone, "firewall-cmd --remove-port=443/tcp" is totally spurious.
  • SELinux contexts cannot be used interchangeably with labels. A label is a kind of context (for a file). A context is not a kind of label. Processes don't have labels. They have contexts. Right?
  • chcon -vu user_u -t public_content_t /root/file1 is stupid. It's almost certainly object_r, but you can "ls -lZ /root/file1" to find out, then "chcon user_u:object_r:public_content_t /root/file1"
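To make the firewalld point above concrete, a sketch of the permanent-rule workflow (needs root and a running firewalld; the port is just an example):

```
firewall-cmd --add-port=443/tcp               # runtime only: gone after a reload
firewall-cmd --permanent --add-port=443/tcp   # written to the permanent config
firewall-cmd --reload                         # load the permanent config...
firewall-cmd --list-ports                     # ...and 443/tcp is still there
```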

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
O'Reilly is running a sale for SysAdmin Appreciation Day -- 50% off a big selection of sysadmin books and video training using promo code DEAL.

List of titles: http://shop.oreilly.com/category/br...y_20150731_deal

yung lambic
Dec 16, 2011

Do you guys provide book recommendations?

I work in marketing for an enterprise SaaS video hosting product. Large sports clients use the product to manage all their online video and put in subscriptions, etc.

I have a high level overview of how the products and services work, and how to market these to the end user.

But I really want to get into the nitty gritty and learn how they're built up.

Things I'm interested in learning about are CDNs, cloud computing, streaming delivery, and so on... Are there any books I can get dug into? No matter how dense...

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Convexed posted:

Do you guys provide book recommendations?

I work in marketing for an enterprise SaaS video hosting product. Large sports clients use the product to manage all their online video and put in subscriptions, etc.

I have a high level overview of how the products and services work, and how to market these to the end user.

But I really want to get into the nitty gritty and learn how they're built up.

Things I'm interested in learning about are CDNs, cloud computing, streaming delivery, and so on... Are there any books I can get dug into? No matter how dense...
If you want to get into the real cutting-edge end of streaming, MLBAM (Major League Baseball's Internet multimedia division, which does a lot of what you're describing, now also for third-party clients) has a lot of talks online from the last few years describing their infrastructure. You should be able to pull up the ones from the last few Velocity conferences without too much effort.

Inspector_666
Oct 7, 2003

benny with the good hair
I applied for a job at MLBAM that I was probably woefully under qualified for, but to this day it's the only job I am sad about not getting.

kujeger
Feb 19, 2004

OH YES HA HA

evol262 posted:

[...]To start, if you really want to get a good idea of how UEFI works, you should read this, then continue on, because there's way too much wrong in this book for me to go into detail about it all[...]

All your posts here have been extremely good, but I wanted to specifically point out this link in particular; reading the whole thing made me appreciate and understand a lot more about UEFI booting.

I read through the whole thing when you first posted it, and already I've had good practical use of it several times!

evol262
Nov 30, 2010
#!/usr/bin/perl
Incoming :effort: posts to finish out the RHCE section, because I'm sick of seeing the three zillion flagged pages with errors sitting around, and I want to throw this book in the trash.

Also, my wife is doing her PhD and has lots of study time, so I suddenly have lotsa time to hypothetically :justpost:

Chapter 12: Networking
  • I just want to start off by saying that NTP isn't a way for time-sensitive functions on a system to function accurately. High precision timers (and even low precision timers, and counting ticks outside of VMs) are precise enough. NTP is a way for time-sensitive functions between systems (like Kerberos -- especially kerberos) to function
  • OpenLDAP is a shitshow. It is, in fact, an open source implementation of LDAP. But the default schema is a total mess, and you have to apply 30000 LDIFs to get anything like the old netscape-ds-derived schema (or anything AD-compatible). And it lost a major use case when samba4 shipped its own LDAP server to better match AD. Not that OpenLDAP is dead. It's just... nobody really cares about it.
  • Using ifup/ifdown has advantages. Primarily, that they read all the junk in /etc/sysconfig/network-scripts, so it's an ok way to test your scripts. But for just networking? You can (and arguably should) use "ip link set dev $if up/down"
  • You don't need "GATEWAY0" or "BROADCAST0". BROADCAST (no 0) is fine for 99% of cases.
  • The example ifcfg-eth1 on page 391 is very obviously culled from a NetworkManager-managed file. You do not need most of that. Barebones, you need DEVICE, ONBOOT, BOOTPROTO, TYPE, and IPADDR. That's it.
  • If ping fails, you don't need to see if the driver is installed. Is this windows? If it's in your list of devices, the driver is installed (because it's a kernel module). If it's a wireless card or some 10ge cards misbehaving, you can check the journal for missing ${module}-firmware which couldn't be loaded. Instead of checking all that other stuff (firewall rules, hosts file, etc), try checking ethtool (or mii-tool), which will provide lots of information about what the system thinks is happening (offered rates, negotiated rate, whether there's a link, whether it's full duplex, the module in use, the firmware version in use, etc). Obviously the other stuff is better for VMs, and situations where ethtool can lie to you (like the interface being attached to a bridge), but it's nicer to check ethtool than it is to truck it to the datacenter and check whether the card is seated or the cable is seated.
  • sssd is really important. openldap isn't necessarily important.
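The barebones ifcfg file from the note above, written out (device name and address are examples; PREFIX is my addition, since you almost always want a netmask too):

```
# /etc/sysconfig/network-scripts/ifcfg-eth1 -- minimal static config
DEVICE=eth1
ONBOOT=yes
BOOTPROTO=none
TYPE=Ethernet
IPADDR=192.168.1.10
PREFIX=24
```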
Chapter 13: TCP wrapper/sshd_config
I'm not even covering this, because it's ancient, and the author (amazingly) didn't make any mistakes.
------------------------------------------------------------------------------
RHCE section: here be dragons
Shell Scripts, my god
  • Each line isn't actually evaluated like it's typed at the command prompt. Semantics differ (slightly) between if/elif/fi on the command line and in a script. What works on the command line will work in a script. It doesn't go both ways
  • I wouldn't attempt to run any of the scripts here in any other shells as a general rule. If you want something portable, you should be writing in a better language. Learn enough shell to pass the exam and read basic scripts. When you start looking up how to split things with ${var#...}, it's time to move on.
  • The shebang (#!) is actually a magic number which is read as part of exec() or execl(). It specifies the interpreter for the following script, which is re-execed. This can be perl, lua, python, brainfuck, or anything else you want. When you try to execute an executable-flagged file, this is what happens. The shell doesn't do this, and "bash some_script.sh" just ignores it like it's a comment.
  • Technically, "echo" is a command, but it also exists as a shell builtin, if you care.
  • The sixth line does not underline it. It prints a separator made of "==". This seems pedantic, but the shell can actually underline.
  • On that note, you can get a newline with "\n", which is an escape, and telling echo to interpret it with -e. Take a look at the bottom of the VT100 control characters here. Then "echo -e '\e[4munderlined text\e[0m'" (you can also use \033 or a hex code), which basically says "change the terminal to underline (code 4), then back to normal (code 0)". Note that blink doesn't work on most modern terminals, because everybody hates it.
  • Note that hostnamectl guesses whether it's virtualized (and under what platform) by looking at DMI, then falling back to device trees and CPU types. You can poke at virt-who and systemd-detect-virt if you're curious about this.
  • If you're curious about variables, please look up "bash variable scoping", which will tell you all about the differences between local and global and what works where. Note that less terrible programming languages have more involved scoping
  • It's not mentioned in the book, but it's really important that variable definitions don't have spaces. "FOO=bar", not "FOO = bar"
  • You can source a script with ".", which basically says "execute this script in the same process space as this script". It runs like any other shell script, but "local" variables will pop into the current script. This is useful sometimes when you don't want another process, or you want to split your scripts up. Just note that sourcing actually runs the script, so you may still want to check whether it was sourced or executed.
  • The author totally missed $@, which is incredibly useful.
  • You can also "echo -n" to not spit out a newline, which is nice if you want to have a "Input y/n: " and not have it, you know, spit out a newline
  • "\c" blocks all the output after that character (of which there isn't any, except for \n). Why not -n?
  • The exit code depends on what you return from main() (in C), or whatever happens when your script runs. If you have a shell script that just runs "exit 2" and you run it, $? will be 2. The convention is for successful commands to return 0 and failures to return something nonzero.
  • Test conditions totally missed "=" and "!=". Because the shell is fun, this is a good way to compare strings. Or other stuff, if you quote it (FOO=1 && [ "$FOO" = "1" ]). Don't use the shell for complex stuff
  • You can find the list of test conditions with "man test"
  • && and || are much more common logical AND/OR operators, and you should get used to those if you're ever going to write anything else
  • As noted above, the syntax differs between scripts and the shell. But you can "if condition; then something; fi" on the shell as a oneliner, which is where shell scripting really shines (as oneliners)
  • Don't use tabs in your scripts. Use spaces. Tabs are only for makefiles and other things which need tabs, and you're a bad person if you need them. If you mix tabs and spaces, I'll find you.
  • = vs == similarity/confusion. Did I mention you shouldn't do complex shell scripting?
  • The condition of a while loop is almost never an arithmetic expression containing "let". That's what for loops are for.
  • In case statements, ";;" terminates that case and lets you go to the next one.
  • "echo foo | while read bar" reads the output of echo (or cat) line by line into bar. This is useful when you say "ps -ef | grep [f]oo | awk ... | while read ...; do" and other cases
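Several of the points above ("$@", exit codes, [ ] test conditions) fit into one small sketch; the function name is made up for illustration:

```shell
#!/usr/bin/env bash
# Toy sketch of "$@", exit codes, and [ ] test conditions.

print_args() {
    # "$@" expands to each argument as its own word, spaces preserved
    for arg in "$@"; do
        printf 'arg: %s\n' "$arg"
    done
}

out=$(print_args "one two" three)   # "one two" stays a single argument
echo "$out"

false                               # a command that fails...
status=$?                           # ...and $? captures its exit code
echo "status=$status"

# = compares strings, -ne compares integers; see man test
if [ "$status" -ne 0 ] && [ -n "$out" ]; then
    echo "nonzero exit captured"
fi
```

The oneliner form works the same way interactively: if false; then echo no; else echo yes; fi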

I'm just gonna power through this at about one post a day.

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin
I don't post because I'm a loser and haven't picked up another book after reading the EMCISM book, but I still read posts so I appreciate your effort.

evol262
Nov 30, 2010
#!/usr/bin/perl
Chapter 15: Teaming/Bonding/Routing/IPv6
  • Bonding doesn't always provide higher performance. Sometimes it's higher performance. Sometimes it's redundancy. Sometimes it's both. It varies a lot depending on mode, and you should check this.
  • Routing selects next hop, not paths
  • A RHEL system can be a full-blown L3 router, participating in OSPF, BGP, RIP, or your routing protocol of choice.
  • LACP is probably used more often than flat bonding in 2015, and the author doesn't even mention it as an option
  • Some of the modes (particularly xor) only provide load balancing assuming all the guests have different MACs (which are hashed), and they do nothing for a single high-bandwidth client.
  • Use "ip link", not "ip addr | grep..." if you're trying to find new interfaces
  • You don't need to modprobe bonding yourself.
  • There is absolutely no need for uuidgen unless you really feel like you need it. NetworkManager can handle this for you on EL7 (which cares about UUIDs). The old "network.service" doesn't give a poo poo about UUIDs.
  • BONDING_MASTER is unnecessary
  • You don't need IPV6INIT=no unless you don't need IPv6. Do you?
  • "reboot" is a symlink to systemctl (which checks argv[0], and reboots if it was called as 'reboot'). Why not 'systemctl reboot' yourself?
  • Teaming is honestly better than bonding in almost every way, except the horrible oneline JSON TEAM_CONFIG stuff
  • IPv6 is expected to fulfill the requirements for IP... forever. I mean, we always think this, but the address space for ipv6 is incomprehensibly huge, even if we give everyone a /64
  • "ip addr | grep inet6" is showing link-local autoconfig addresses here, not configured interfaces
  • The author completely forgets BGP (and some others which mostly matter to network guys, but maybe I'm just the only sysadmin who's never needed EIGRP)
  • If the source and destination systems are on the same broadcast network, and can get a direct response from "arp who has", they'll send it directly there. Otherwise routers.
  • The default gateway doesn't "attempt to search" for anything. It forwards to another router which has more routes, which forwards to its default, until you eventually hit someone who knows how to get from here to there by exchanging routes with other routers via the aforementioned routing protocols. Nothing is smart here.
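A minimal LACP bond along the lines of this chapter, without the uuidgen/BONDING_MASTER noise (interface names, address, and options are illustrative):

```
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.1.10
PREFIX=24
BONDING_OPTS="mode=802.3ad miimon=100"   # 802.3ad = LACP

# /etc/sysconfig/network-scripts/ifcfg-eth0 -- one per slave
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=none
MASTER=bond0
SLAVE=yes
```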
Chapter 16: NTP
  • NTP matters for kerberos. A lot.
  • The default firewall on RHEL allows outgoing traffic basically anywhere. Including NTP. You don't need to touch it.
  • How close any NTP source is doesn't matter. The first thing NTP does when you start it is sync the time, then check again in small increments which gradually get larger in a sliding window, depending on how accurate your system clock is. If it's super far off (virtual machines on ticked kernels which skip cycles, for example), NTP gives up and makes you manually intervene. Otherwise, it's a protocol designed to figure this poo poo out without your intervention.
  • Pretty much the only way you'll connect to a stratum 0 device is by acquiring an atomic clock (which is doable, but expensive), or getting a cheap USB GPS dongle and using that as a stratum 1 source
  • The author completely forgets chrony, which handles time syncing on unreliable systems. Like laptops.
  • The drift file keeps track of... drift. Like, how far off your system clock drifts from what the NTP server says, so it can correct a bit automatically if it loses touch with the server. It's like knowing that you're on a high latency VOIP call.
  • iburst helps sync up by immediately sending a burst of packets to establish the window and drift and lock on quickly. It's designed to get a fast initial lock without hammering the server afterwards.
  • You can still configure an NTP server with an upstream reference source. If you want to understand NTP, you should probably read about stratums and how they work. It's not complex, but it helps demystify ntpq -p. Mostly, you're looking for * next to servers, which basically means "the last couple of times I checked in, the jitter was about the same, so I'm reasonably sure that I can use this server reliably, at stratum+1"
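Tying the NTP notes together, a minimal client config sketch (the pool hostnames are examples):

```
# /etc/ntp.conf (excerpt)
server 0.pool.ntp.org iburst    # iburst: quick initial lock-on
server 1.pool.ntp.org iburst
driftfile /var/lib/ntp/drift    # remembers measured clock drift across restarts
```

After a few polls, "ntpq -p" should show a * next to the peer it settled on.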

Japanese Dating Sim
Nov 12, 2003

hehe
Lipstick Apathy
I never contributed to this, so it's kind of an rear end in a top hat thing for me to say, but I'm sad that this thread died. I was actually trying to think of IT books to put on my Amazon wishlist for Christmas, so I'm gonna dig through this and see what you guys read (and thought about reading).

Dr. Arbitrary
Mar 15, 2006

Bleak Gremlin
It's not dead as long as one person is still keeping it alive.

Japanese Dating Sim
Nov 12, 2003

hehe
Lipstick Apathy
True enough! And I did read Phoenix Project because of this thread. Nothing else though, I've mostly been focusing on my CCNA studies. Once that's done I'd like to pick something up that doesn't necessarily lead to a specific cert. I'll mention anything I grab when I get it.

I am actually about to start Time Management for System Administrators, which should take me like 3 hours.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
I was thinking about posting another chapter summary from the Dekker book, but holy poo poo, having infants is exhausting.

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?


I'm up for Michael Jang's RHCSA/RHCE 7 but it's delayed until February.

Japanese Dating Sim
Nov 12, 2003

hehe
Lipstick Apathy
So my aforementioned wish list currently has the following books on it. I have never read any of them and am interested in the material covered by all of them.
Aside from the fact that I know that Linux+ isn't worth a whole lot to employers (I just like the idea of following a semi-structured study plan, and my work will pay for any certs so I figure why not), anyone got opinions on any of the above, or recommendations along the same lines? I'm hoping to finish up my CCNA in a couple of months and then want to give myself some practice with Linux and scripting in general. The 70-410 book is in the "maybe" category (as I don't know if/when I'll want to pursue that). I've heard very good things about Network Warrior in terms of it being a good book for post-CCNA types.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Network Warrior is fantastic and I found it way more informative than any of the actual CCNA study stuff I read.

Zorak of Michigan
Jun 10, 2006

I'm a UNIX veteran so I know a fair bit about the theory of networking but nada about Cisco. Would I get anything out of Network Warrior, or would I lack the grounding?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Zorak of Michigan posted:

I'm a UNIX veteran so I know a fair bit about the theory of networking but nada about Cisco. Would I get anything out of Network Warrior, or would I lack the grounding?
Network Warrior is actually where I got most of my Cisco knowledge from. It's very pragmatic, and much less dryly written than certification materials. I'm pretty sure I read it on the toilet on a vacation in the Adirondacks.

Japanese Dating Sim
Nov 12, 2003

hehe
Lipstick Apathy

Vulture Culture posted:

Network Warrior is actually where I got most of my Cisco knowledge from. It's very pragmatic, and much less dryly written than certification materials. I'm pretty sure I read it on the toilet on a vacation in the Adirondacks.

Everything I've read about it seems to indicate that the author assumes you have CCNA-level knowledge of networking/IOS and writes from that perspective, so people who haven't already worked with it might be a little lost. Is that a bit of an exaggeration?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Japanese Dating Sim posted:

Everything I've read about it seems to indicate that the author assumes you have CCNA-level knowledge of networking/IOS and writes from that perspective, so people who haven't already worked with it might be a little lost. Is that a bit of an exaggeration?
If you're brand-new you're probably going to be lost, but if you understand networking basics like how L2/L3 addressing and broadcasts and ARP and VLANs work, it's not a big gap to cross. This was the first edition, though; I haven't read the second.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
A very interesting book called Site Reliability Engineering: How Google Runs Production Systems just came out and it sounds like a good one to read and discuss:

http://shop.oreilly.com/product/0636920041528.do

Gucci Loafers
May 20, 2006

Ask yourself, do you really want to talk to pair of really nice gaudy shoes?


Vulture Culture posted:

A very interesting book called Site Reliability Engineering: How Google Runs Production Systems just came out and it sounds like a good one to read and discuss:

http://shop.oreilly.com/product/0636920041528.do

There's a small preview here.

I found this interesting...

quote:

Therefore, Google places a 50% cap on the aggregate “ops” work for all SREs—
tickets, on-call, manual tasks, etc. This cap ensures that the SRE team has enough
time in their schedule to make the service stable and operable. This cap is an upper
bound; over time, left to their own devices, the SRE team should end up with very
little operational load and almost entirely engage in development tasks, because the
service basically runs and repairs itself: we want systems that are automatic, not just
automated. In practice, scale and new features keep SREs on their toes.

SREs are Google Software Engineers with some kind of Operations background. While this is great, isn't it also true that there are still traditional System Administrators at Google, but they're merely contractors/vendors?

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Chapter 1: Introduction

This chapter is a brief introduction to the practice of Site Reliability Engineering, and what it means to be a Site Reliability Engineer. At Google, an SRE is someone who has "85-99%" (why not 100?) of the required skills for a software engineering position, plus additional technical background and skills that are useful for the SRE practice. The result is a culture where SREs are doing work that has historically been done by operations teams, but they are able to a) fix problems instead of merely working around them, b) automate solutions at scale in ways that are out of reach of traditional operations silos, c) move and adapt at the speed of product engineering and d) easily cross-train the developers on operability. To facilitate this, Google ensures that the time spent on "ops" work for SREs does not exceed 50% of their work hours. There are mechanisms in place to add stability back when the SREs are overburdened, such as error budgets enforced against internal SLOs (Service Level Objectives). SRE is compared and contrasted with DevOps; they share many common features, but site reliability engineering is a specific implementation (my analogy: like how Scrum or XP or Kanban are specific implementations of Agile).

Site Reliability Engineering focuses on availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning: systemic problems rather than specific operational issues, in other words. No more than two operational issues should be looked at per engineer, per shift. Beyond this threshold, there is insufficient time or focus to handle the event, restore service, and conduct a (blame-free) postmortem.

Culturally, Google SREs approach SLOs in the opposite direction of most operations teams. Engineers understand the goal is never 100% availability, and they should not be afraid of causing downtime, as long as they don't cause too much downtime (the SLO is determined by the owners of the product). The goal is to ensure maximum feature velocity without making users have a bad time.

The Google monitoring strategy is diametrically opposed to the Nagios-style approach of "check value against threshold, and alert if it's outside of some static operating parameter." Instead, their software stack interprets the monitoring conditions and notifies humans of what action needs to be taken, when one needs to be taken. These notifications are broken down into alerts, tickets, and logging based on the urgency of the response. When emergencies do occur, engineers are encouraged to follow and create "playbooks" instead of approaching the problem in an ad-hoc way. This has improved their MTTR by 3x.

Change management at Google is very lean, compared to what people might think of when they hear the term. The following are requirements of the production change process:

  • Progressive rollouts
  • Fast and accurate problem detection
  • Safe rollback

Humans are removed from this loop as much as possible to minimize error during routine changes.

Capacity planning works the way you'd expect. You have a demand forecast which incorporates organic growth. You factor in inorganic demand sources, like big events (planned or unforeseen). You load-test the systems to estimate their real-world capacity. Likewise, the way efficiency and performance are handled shouldn't be surprising to anyone who's worked in web operations: you aim to support a certain number of concurrent users or requests within a certain latency objective. If you cannot meet this, you get to work. If you cannot meet this cheaply, you get to work.

deimos
Nov 30, 2006

Forget it man this bat is whack, it's got poobrain!
Uhh, why is the SRE book $12 cheaper to pre order on paperback than the kindle edition?


Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

deimos posted:

Uhh, why is the SRE book $12 cheaper to pre order on paperback than the kindle edition?
Amazon routinely prices print pre-orders cheaper than generally-available eBooks, and we should expect that price to rise once it's in stock. If you're going to order the eBook, order it from O'Reilly directly -- it's the same price, but DRM-free and available in multiple formats for one price. I've been running through the PDF while I wait for build jobs to finish.
