jre
Sep 2, 2011

To the cloud ?



Thermopyle posted:

My car's wheels fall off if I drive it at 56 miles per hour. Sure, there's no good argument for it to be that way, and there is no benefit to anyone, and it's incredibly dangerous, but it's not stupid, because it was a design decision to make it do this.

That's what I'm hearing from the pro-Arch camp so far here.

Should have read the manual cover to cover first n00b

Suspicious Dish
Sep 24, 2011

2020 is the year of linux on the desktop, bro
Fun Shoe
my pet theory is that the developers put in a bunch of traps like this so they can blame the user when arch inevitably fucks up their computer. but that would require them to be malicious instead of just incompetent...

CaptainSarcastic
Jul 6, 2013



Suspicious Dish posted:

my pet theory is that the developers put in a bunch of traps like this so they can blame the user when arch inevitably fucks up their computer. but that would require them to be malicious instead of just incompetent...

In my experience flagrant incompetence and deliberate evil look very much the same in their practical output.

some kinda jackal
Feb 25, 2003

 
 
Is there a USB bootable Linux that still has Adaptec 2940 SCSI support? I need to image an old 80 meg SCSI drive before it dies and I really don't want to build a system from scratch to do it.

Puppy Linux doesn't seem to have Adaptec drivers any longer, not sure if there's another recommendation out there.

ToxicFrog
Apr 26, 2008


Martytoof posted:

Is there a USB bootable Linux that still has Adaptec 2940 SCSI support? I need to image an old 80 meg SCSI drive before it dies and I really don't want to build a system from scratch to do it.

Puppy Linux doesn't seem to have Adaptec drivers any longer, not sure if there's another recommendation out there.

You mean the AHA-2940 (based on the aic7860 chip)? You want the aic7xxx driver. It looks like TinyCore includes it in the 'scsi-$KVER-tinycore' package. You can either boot from USB and then install it with the package manager, or pre-download the tcz and then install it offline after booting.

If you're ok with a ~4GB download rather than a ~15MB one, openSUSE includes this driver by default, I believe.
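
Something like this ought to do it once you're booted (a rough sketch; the package name follows the pattern above with your kernel version in place of $KVER, the device names are guesses, and the dd options assume GNU dd):

code:
# fetch and install the SCSI driver package, then load the module
tce-load -wi scsi-$KVER-tinycore
sudo modprobe aic7xxx

# check that the controller and the old drive showed up
dmesg | grep -i aic7
lsblk

# image the old drive (here /dev/sda) onto a mounted USB stick
sudo dd if=/dev/sda of=/mnt/usb/scsi-disk.img bs=64k conv=noerror,sync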

some kinda jackal
Feb 25, 2003

 
 
Yes, the aic78xx chipset. Perfect, thanks. Size doesn't matter much; I just didn't want to go through the trouble of installing Linux to disk and then mucking with the drivers when I could just boot off USB and dd from SCSI to another USB stick. Hell, it's an 80 MB drive, so I could just dd to a tmpfs and scp it off after the fact.

Thanks again!

evol262
Nov 30, 2010
#!/usr/bin/perl
Or build your own kernel and dump it on, which is pretty easy to do. Maybe useful if you have other old/esoteric hardware on there.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum
code:
(384/384): zip-3.0-1.el6_7.1.x86_64.rpm                                                                                                         | 259 kB     00:00
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------
Total                                                                                                                                  4.1 MB/s | 403 MB     01:37
:feelsgood:

peepsalot
Apr 24, 2007

        PEEP THIS...
           BITCH!

I have a PDF open in the Chrome browser, and I can select text but not copy that text to the clipboard. Is this a Chrome problem with PDFs in general, a Chrome problem on Linux, or a Chrome problem just for my specific distro (Linux Mint 17.3, which is basically Ubuntu 14.04)?

I feel like the crappy Linux PDF viewer I used to use before Chrome integrated one (is it Eye of GNOME? idk) had the same infuriating problem of not being able to copy text to the clipboard.

e: Oh, the PDF viewer is Evince, and it looks like it is capable of copying text now. So, never mind, I guess that solves it. Maybe I'll write up a bug report for Chrome or something, though I'm sure there's already some years-old bug for it that Google is ignoring.

peepsalot fucked around with this message at 03:05 on Oct 12, 2016

Mao Zedong Thot
Oct 16, 2008


peepsalot posted:

I have a PDF open in the Chrome browser, and I can select text but not copy that text to the clipboard. Is this a Chrome problem with PDFs in general, a Chrome problem on Linux, or a Chrome problem just for my specific distro (Linux Mint 17.3, which is basically Ubuntu 14.04)?

I feel like the crappy Linux PDF viewer I used to use before Chrome integrated one (is it Eye of GNOME? idk) had the same infuriating problem of not being able to copy text to the clipboard.

e: Oh, the PDF viewer is Evince, and it looks like it is capable of copying text now. So, never mind, I guess that solves it. Maybe I'll write up a bug report for Chrome or something, though I'm sure there's already some years-old bug for it that Google is ignoring.

No clue, but have you tried using the X clipboard (i.e. select = automatically copied, middle click = paste)?
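
If xclip is installed you can also check what actually landed in each selection; a quick sketch:

code:
# PRIMARY is what highlighting fills and middle-click pastes
xclip -o -selection primary

# CLIPBOARD is what Ctrl+C / Ctrl+V uses
xclip -o -selection clipboard

# shove the highlighted text into the clipboard if an app only reads the latter
xclip -o -selection primary | xclip -i -selection clipboard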

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Isn't it just a PDF thing? Nothing to do with Chrome, Linux, Debian-based distros, x86 hardware, computing or Universal Law

^I'm zooming out there: I think it's just an option to have a PDF in some kind of 'protected' mode.

gourdcaptain
Nov 16, 2012

The definition of frustrating is when you finally get your new workstation (for both work and personal use) up and online after multiple parts being DOA, have to set it up over wireless (integrated into the motherboard, not that I ever plan to use it once it's set up) because the only table you have that's large enough for the assembly is too far from the router, and then find out that the integrated wireless is Broadcom (which, frustratingly, none of the online literature I could find for the mobo listed ahead of time beyond "Asus", and their tech support didn't know), and none of the open-source drivers support it. (In my defense, it's basically the only microATX X99 chipset motherboard I could find that had everything else I wanted. Against my defense, during assembly I found out why nobody makes microATX X99 boards.)

Yeah, as soon as I hook it up to ethernet, those broadcom-wl drivers are getting uninstalled and I'm getting a USB WiFi stick with better driver support in case I ever need wireless again. (I once bought a new netbook almost entirely to get away from the proprietary broadcom-wl drivers. So many kernel oopses, and that's if I'm lucky. Today I even had to reset the interface after each boot to get it to work, and it kept chugging down to 10% of max speed at random for no explainable reason.)

CaptainSarcastic
Jul 6, 2013



gourdcaptain posted:

The definition of frustrating is when you finally get your new workstation (for both work and personal use) up and online after multiple parts being DOA, have to set it up over wireless (integrated into the motherboard, not that I ever plan to use it once it's set up) because the only table you have that's large enough for the assembly is too far from the router, and then find out that the integrated wireless is Broadcom (which, frustratingly, none of the online literature I could find for the mobo listed ahead of time beyond "Asus", and their tech support didn't know), and none of the open-source drivers support it. (In my defense, it's basically the only microATX X99 chipset motherboard I could find that had everything else I wanted. Against my defense, during assembly I found out why nobody makes microATX X99 boards.)

Yeah, as soon as I hook it up to ethernet, those broadcom-wl drivers are getting uninstalled and I'm getting a USB WiFi stick with better driver support in case I ever need wireless again. (I once bought a new netbook almost entirely to get away from the proprietary broadcom-wl drivers. So many kernel oopses, and that's if I'm lucky. Today I even had to reset the interface after each boot to get it to work, and it kept chugging down to 10% of max speed at random for no explainable reason.)

Getting a second router and setting it up as a bridge gets around hassling with wireless, too. I got a cheap little Buffalo router so that I can just run ethernet from my main desktop to it, and it connects to my wireless router across the room. As far as my desktop is concerned it is on a wired connection, since the actual wireless connection is from the base router to the Buffalo router. If you have a machine that doesn't need to move around, this is a pretty simple workaround.

18 Character Limit
Apr 6, 2007

Screw you, Abed;
I can fix this!
Nap Ghost

CaptainSarcastic posted:

I got a cheap little Buffalo router

What happens when the network hits 95% saturation?

CaptainSarcastic
Jul 6, 2013



18 Character Limit posted:

What happens when the network hits 95% saturation?

Meh, the most demanding thing on my home network is one device streaming Netflix, so it generally isn't much of an issue.

mwdan
Feb 7, 2004

Webbed Blobs

18 Character Limit posted:

What happens when the network hits 95% saturation?

That doesn't concern you.

gourdcaptain
Nov 16, 2012

CaptainSarcastic posted:

Getting a second router and setting it up as a bridge gets around hassling with wireless, too. I got a cheap little Buffalo router so that I can just run ethernet from my main desktop to it, and it connects to my wireless router across the room. As far as my desktop is concerned it is on a wired connection, since the actual wireless connection is from the base router to the Buffalo router. If you have a machine that doesn't need to move around, this is a pretty simple workaround.

As soon as I get this thing to its final location tomorrow, it's getting plugged straight into a router by ethernet and left there for at least a year. The wireless is basically a non-issue beyond setting it up and testing it out at a friend's house, the middle of their living room being the largest work surface available. (I could hook it up myself, but I'm bad at figuring out cable management and such and I have bad eye coordination, so I got a friend to help out.)

EDIT: Also had to deal with boot issues with it failing to mount partitions (hilariously enough, the RAID array worked fine; the Intel 600p NVMe SSD just kept acting weirdly). Or mounting the wrong partitions. Or failing to find the partitions at all, given that one of the crashes somehow corrupted the UEFI system partition. None of this was helped by a UEFI setup where getting to the UEFI menu to boot a USB stick required mashing F2 at just the right moment (not too soon, not too fast or slow, not too long) or it would dump me into the broken Linux install that couldn't find its partitions. Yeah, bad day.

gourdcaptain fucked around with this message at 06:32 on Oct 15, 2016

Buttcoin purse
Apr 24, 2014

I keep hearing about how great ZFS is because it has checksums of some sort, but I'd prefer to stick with a mainstream filesystem, preferably one supported by Red Hat, so I think I'll stick with XFS because I'm a coward. Are there any better options than going into every directory and running sha256sum * > sha256sums.txt, then using sha256sum --check sha256sums.txt later? For example, a tool that can do that recursively for me, and maybe tell me if there are any files that don't have checksums stored, or automagically work out which ones need to be recalculated?
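
For what it's worth, the manual approach can at least be made recursive with find; a rough sketch that keeps one checksum file at the top of the tree:

code:
# generate checksums for every file under the current directory
find . -type f ! -name SHA256SUMS -exec sha256sum {} + > SHA256SUMS

# later: verify, printing only mismatches and unreadable files
sha256sum --check --quiet SHA256SUMS

# note: files added after SHA256SUMS was generated simply won't be checked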

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!
Checksumming your files and then regularly scanning for changes won't bring your files back if there is corruption somewhere.

I don't know what you can do without using ZFS: I'm doing some experimenting with it (on an Ubuntu file server).

Until I pull the trigger and move my backup to ZFS, I just have a spare ext4 drive plugged in and regularly rsync to that. Maybe that's an option for you? It isn't 'proper' data redundancy, but it has worked for me so far. You can run --checksum with rsync, so that it'll checksum the source and destination. rsync --checksum --dry-run <source> <dest> would also give you a list of differences before you commit to any overwrites.
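
Spelled out, with placeholder paths:

code:
# dry run: itemize files whose checksums differ between source and backup
rsync -a --checksum --dry-run --itemize-changes /data/ /mnt/backup/data/

# run it for real once the list looks sane
rsync -a --checksum /data/ /mnt/backup/data/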

Buttcoin purse
Apr 24, 2014

When I saw that somebody called apropos man had replied to my post, I was worried that there was a lmatfy.com!

I do have semi-regular backups (it only took a few decades), but I was thinking that checksums would help me work out which copy was the correct one if I have two accessible copies but one is bad. I suppose this optimistically assumes I don't just have a total disk failure where I can't read anything at all like I usually do. Also, in lots of cases, applications can tell you when the file is corrupt, so that's a good way to work out which file is the correct one, but this isn't always the case, or isn't always that easy to check.

ToxicFrog
Apr 26, 2008


First of all, yes there is a program that does this already, called hashdeep. It does both the "compute checksums for everything in a directory tree" part and the "compare results against known good checksums" part.

The biggest flaw in this plan that I see is: how do you tell the difference between a file that fails to checksum because of silent corruption, and a file that fails to checksum because you modified or replaced it and forgot to regenerate the checksum file? ZFS handles this by automatically checksumming the data when it's written, and then verifying the checksum when it's read, but not being part of the filesystem, hashdeep can't do this (barring stupid inotify tricks, but I don't think you can scale that to the whole filesystem -- and it still doesn't catch cases where you write good data and immediately get bad data back on the next read).

It also seems kind of weird to call ZFS "not mainstream" when it's been available on Solaris and BSD (and battle tested in datacenters) for a decade; it's only linux support that's relatively new. It's even officially supported by Ubuntu. :v: (It's not supported by Red Hat, but the ZFSonLinux project does release RHEL 6 and 7 packages.) It's quite easy these days to set up a zpool on linux.

That said, one thing that definitely is not yet fully baked on most distros is putting / on ZFS; I've done this, but I don't think I'd recommend the experience.
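
For a plain data pool (not /), setup is roughly this on Ubuntu; the package, pool, and device names here are just examples:

code:
# install ZFS (Ubuntu; other distros package it differently)
sudo apt install zfsutils-linux

# create a mirrored pool from two spare disks and a dataset on it
sudo zpool create tank mirror /dev/sdb /dev/sdc
sudo zfs create tank/backups

# periodically read and verify everything, repairing from the mirror where possible
sudo zpool scrub tank
sudo zpool status tank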

Buttcoin purse
Apr 24, 2014

ToxicFrog posted:

First of all, yes there is a program that does this already, called hashdeep. It does both the "compute checksums for everything in a directory tree" part and the "compare results against known good checksums" part.

I saw that, but it sounded to me more like the kind of software you use to work out "C:\WINDOWS\SYSTEM\BLAH.DLL is BLAH.DLL from Windows XP SP3" with a big set of known checksums you get from the Internet, rather than comparing each file against the checksum for that particular file. Does it also do the latter?

quote:

The biggest flaw in this plan that I see is: how do you tell the difference between a file that fails to checksum because of silent corruption, and a file that fails to checksum because you modified or replaced it and forgot to regenerate the checksum file?

I was only planning to use this for files that I don't expect to change, so yeah not exactly 100% coverage of my filesystem :v:

quote:

It also seems kind of weird to call ZFS "not mainstream" when it's been available on Solaris and BSD (and battle tested in datacenters) for a decade; it's only linux support that's relatively new. It's even officially supported by Ubuntu. :v: (It's not supported by Red Hat, but the ZFSonLinux project does release RHEL 6 and 7 packages.)

Oh yeah, sorry, I meant ZFS on Linux isn't mainstream, or at least I didn't think so, but I guess if Ubuntu supports it, then it's getting there?

ToxicFrog
Apr 26, 2008


Buttcoin purse posted:

I saw that, but it sounded to me more like the kind of software you use to work out "C:\WINDOWS\SYSTEM\BLAH.DLL is BLAH.DLL from Windows XP SP3" with a big set of known checksums you get from the Internet, rather than comparing each file against the checksum for that particular file. Does it also do the latter?
Yes.
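
Roughly the two-step workflow, if that helps (double-check the flags against your version's manpage):

code:
# step 1: record checksums for everything under /data
hashdeep -r /data > /root/known.hashes

# step 2: audit later; -a audit mode, -k known-hashes file, -vv names the files that changed
hashdeep -r -a -vv -k /root/known.hashes /data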

quote:

I was only planning to use this for files that I don't expect to change, so yeah not exactly 100% coverage of my filesystem :v:

Oh, that's a different sack of ferrets. (And you might also want to look into something like Tripwire.)

quote:

Oh yeah, sorry, I meant ZFS on Linux isn't mainstream, or at least I didn't think so, but I guess if Ubuntu supports it, then it's getting there?

At this point the main thing keeping it from being officially packaged and supported on most distros is legal concerns around the ZFS license; it's considerably more stable than, say, btrfs. Debian, Ubuntu, and Gentoo all have it in the repos, the ZoL team makes RHEL packages available, and there are unofficial builds for other distros like SUSE. If Ubuntu manages to continue deploying it without getting sued -- or, better yet, if they get sued but win -- I would expect to see more official packages start appearing.

feedmegin
Jul 30, 2008

apropos man posted:

Isn't it just a PDF thing? Nothing to do with Chrome, Linux, Debian-based distros, x86 hardware, computing or Universal Law

^I'm zooming out there: I think it's just an option to have a PDF in some kind of 'protected' mode.

That is a thing yes. You can get around it by getting the source to a PDF viewer, removing the flag check, and compiling it yourself. Open source!

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

feedmegin posted:

That is a thing yes. You can get around it by getting the source to a PDF viewer, removing the flag check, and compiling it yourself. Open source!

I'm not sure that's actually necessary. PDF protection is no protection at all. It's just a flag on the document that says you aren't allowed to print or copy-paste it, and it relies on the PDF viewer to obey those flags, which probably only Adobe Reader does. I think most other PDF viewers, commercial and open source, just ignore those flags.

evol262
Nov 30, 2010
#!/usr/bin/perl

Buttcoin purse posted:

I keep hearing about how great ZFS is because it has checksums of some sort, but I'd prefer to stick with a mainstream filesystem, preferably one supported by Red Hat, so I think I'll stick with XFS because I'm a coward. Are there any better options than going into every directory and running sha256sum * > sha256sums.txt, then using sha256sum --check sha256sums.txt later? For example, a tool that can do that recursively for me, and maybe tell me if there are any files that don't have checksums stored, or automagically work out which ones need to be recalculated?

Note that XFS is only the default because we were worried disk sizes would hit the limits of ext4 by the end of the EL7 support cycle.

Most other filesystems are equally good. Nothing wrong with ZFS.

gourdcaptain posted:

EDIT: Also had to deal with boot issues with it failing to mount partitions (hilariously enough, the RAID array worked fine; the Intel 600p NVMe SSD just kept acting weirdly). Or mounting the wrong partitions. Or failing to find the partitions at all, given that one of the crashes somehow corrupted the UEFI system partition. None of this was helped by a UEFI setup where getting to the UEFI menu to boot a USB stick required mashing F2 at just the right moment (not too soon, not too fast or slow, not too long) or it would dump me into the broken Linux install that couldn't find its partitions. Yeah, bad day.

EFI is not magic at all. Reformat the EFI system partition to fat32 and copy the USB EFI files to it.

It sounds like your efivars may also want a reset. And like you have two EFI system partitions somehow. Why? And why not fix the kargs on the "broken" one so it works?
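
The reformat-and-copy part looks roughly like this (device names are placeholders, and the loader path in the last step varies by distro):

code:
# recreate the EFI system partition's filesystem, assuming it's /dev/nvme0n1p1
sudo mkfs.vfat -F 32 /dev/nvme0n1p1
sudo mount /dev/nvme0n1p1 /boot/efi

# copy the EFI files over from the working USB stick, as suggested above
sudo cp -r /mnt/usb/EFI /boot/efi/

# recreate the firmware boot entry if it got lost
sudo efibootmgr -c -d /dev/nvme0n1 -p 1 -L "Linux" -l '\EFI\BOOT\BOOTX64.EFI'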

gourdcaptain
Nov 16, 2012

evol262 posted:



EFI is not magic at all. Reformat the EFI system partition to fat32 and copy the USB EFI files to it.

It sounds like your efivars may also want a reset. And like you have two EFI system partitions somehow. Why? And why not fix the kargs on the "broken" one so it works?

I did reformat the FAT32 UEFI partition and do that, basically. It took a while to diagnose because it kept spitting out random misleading error messages until I tried running fsck on the UEFI partition and got an insane number of errors it couldn't fix. I eventually sorted it out. And I only had one UEFI partition. It's not magic, just a problem that was hard to diagnose because the error messages didn't indicate the problem or stay consistent from attempt to attempt.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

evol262 posted:

Note that XFS is only the default because we were worried disk sizes would hit the limits of ext4 by the end of the EL7 support cycle.
Learned that lesson from EL5/ext3 :haw:

xzzy
Mar 5, 2009

We switched to xfs because we'd used it for years on our IRIX systems and trusted it. Then once disks got over 1 TB we continued to use it so we wouldn't have to drum our fingers for an hour while mkfs.ext does its thing.

We don't have hard data, but my group is getting the "feeling" that xfs filesystems flip out and go read-only more often than ext. It's on our list to gather data for.

hifi
Jul 25, 2012

zfs will fix your bad files though instead of just giving you a list of them. You should probably use par files instead of checksums if you want to not use zfs.
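
With par2 that looks roughly like this; the 10% redundancy figure is just an example:

code:
# create parity data covering the files, with 10% redundancy
par2 create -r10 archive.par2 *.iso

# later: check integrity, and repair from the parity blocks if something rotted
par2 verify archive.par2
par2 repair archive.par2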

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

hifi posted:

zfs will fix your bad files though instead of just giving you a list of them
Big caveat: if you are running in a redundant zpool disk configuration

ToxicFrog
Apr 26, 2008


Vulture Culture posted:

Big caveat: if you are running in a redundant zpool disk configuration

Even if the zpool itself isn't redundant, you can set copies=2 (or some other value >1) on the filesystem and it'll store each data block that many times -- if there are multiple underlying disks it'll try to spread them across disks, if it can't it'll try to store them far apart on a single disk so that localized corruption is unlikely to hit all copies. On read, it'll only ask for one copy of each block, but on IO error or checksum failure it'll try to recover using the other copies.

This does not of course protect you from the entire disk failing outright.
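
The knobs for that, roughly (dataset name made up; note that copies= only applies to data written after it's set):

code:
# keep two copies of every data block written to this dataset from now on
zfs set copies=2 tank/important

# scrub to read and verify everything, repairing from the extra copy on checksum errors
zpool scrub tank
zpool status -v tank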

Buttcoin purse
Apr 24, 2014

ToxicFrog posted:

Yes.


Oh, that's a different sack of ferrets. (And you might also want to look into something like Tripwire.)

Thanks, and I can't believe I didn't think of Tripwire.

ExcessBLarg!
Sep 1, 2001

xzzy posted:

Then once disks got over 1 TB we continued to use it so we wouldn't have to drum our fingers for an hour while mkfs.ext does its thing.
One of the weaknesses of ext[234] is that most of the filesystem metadata--specifically the inode tables--is statically allocated at mkfs time. The default behavior of mke2fs is to allocate at a bytes-per-inode ratio that equals the block size of the filesystem (pretty much always 4 kB now), so that you're guaranteed to run out of block space before inodes. But unless you fill your disk with small files (4 kB or smaller) this is inefficient. As filesystems get larger, an increasing portion gets reserved for inode tables and mkfs takes forever to run. These days I typically use a 1 MB ratio, which reserves much less space for the inode tables, and mkfs is fast. So long as your average file is larger than 1 MB you're unlikely to run out of inodes.
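
Concretely, that's the -i (bytes-per-inode) option to mke2fs; a sketch:

code:
# default: one inode per 4 kB of space (huge inode tables and a slow mkfs on big disks)
mkfs.ext4 /dev/sdb1

# one inode per 1 MB instead; fine as long as your average file stays above ~1 MB
mkfs.ext4 -i 1048576 /dev/sdb1

# see how many inodes actually got allocated
tune2fs -l /dev/sdb1 | grep -i inode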

Aside from that, ext4 is still a pretty good filesystem when compared to the 90s journaling filesystems like jfs, xfs, etc. The code is very well vetted, and its use on embedded devices (e.g., Android phones), where power loss is pretty regular, has made it rock solid. The feature set of zfs is obviously better but the licensing "issue" delayed its adoption on Linux, and while btrfs is better on paper, it honestly feels like a dumping ground for filesystem research projects with limited real long-term ownership.

xzzy
Mar 5, 2009

For a few years we did take care of some large-array ext3 systems created with the large file system option, and physicists pretty much instantly filled up the inode tables by storing code on those disks.

:downs:

So yeah it's been xfs for a while and I dig it.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

ToxicFrog posted:

Even if the zpool itself isn't redundant, you can set copies=2 (or some other value >1) on the filesystem and it'll store each data block that many times -- if there are multiple underlying disks it'll try to spread them across disks, if it can't it'll try to store them far apart on a single disk so that localized corruption is unlikely to hit all copies. On read, it'll only ask for one copy of each block, but on IO error or checksum failure it'll try to recover using the other copies.

This does not of course protect you from the entire disk failing outright.
Owned. You're right, of course.

HPL
Aug 28, 2002

Worst case scenario.
I just set up a Pi-Hole ad blocker on my Pi 3 and holy cow, it's like rolling back the clock on my Chromebook. It's all nice and quick again. Highly recommended if you have a spare Pi sitting around or you're looking for a project to use one.

Odette
Mar 19, 2011

Ran into a weird issue upgrading chromium-53.0.2785.143 to chromium-54.0.2840.59: the cursor reverted to the system default. I use KDE Plasma 5.8.x on Arch Linux.

For anyone running into the same problem, the fix: Settings -> Application Style -> GNOME Application Style (GTK) -> Cursor Theme -> set desired cursor theme.

Renegret
May 26, 2007

THANK YOU FOR CALLING HELP DOG, INC.

YOUR POSITION IN THE QUEUE IS *pbbbbbbbbbbbbbbbbt*


Cat Army Sworn Enemy
hi linux thread

I don't actually have a question, I just wanted to say that I'm learning the CLI and I just did a rm -rf / in a VM just to get it out of my system.

It feels like a rite of passage, watching this VM destroy itself from the inside.

anthonypants
May 6, 2007

by Nyc_Tattoo
Dinosaur Gum

Renegret posted:

hi linux thread

I don't actually have a question, I just wanted to say that I'm learning the CLI and I just did a rm -rf / in a VM just to get it out of my system.

It feels like a rite of passage, watching this VM destroy itself from the inside.
A good distro would have stopped you and made you type in --no-preserve-root before it broke anything. I think SELinux might also prevent you from killing your system like that, but I'm not sure.
