Crunchy Black
Oct 24, 2017

by Athanatos
I'm on 11.3U5 FreeNAS, anyone else jump ship to the TrueNAS train yet?

(Paging CommieGIR)


phosdex
Dec 16, 2005

I'm still on 11.3U5 as well. Barring a significant bug suddenly popping up, I don't plan on upgrading for about another year just due to some personal life plans when I'll look at overhauling all my homelab stuff.

EC
Jul 10, 2001

The Legend

H110Hawk posted:

If you're getting into the territory where you have more users transcoding than you have space and network, you shouldn't run Plex on the NAS. Ironically, the RAM is unlikely to have much impact on Plex workloads unless your Plex RAM usage itself (for the database or whatever) balloons up. Using it as block cache on large media files isn't going to save you unless you have small files (~1GB/hr) and very high hot-spotting (lots of users watching the same few things).

At most three people in the house and one outside using it, but more often it's two people streaming at the same time, and that's it.

BlankSystemDaemon
Mar 13, 2009



Crunchy Black posted:

I'm on 11.3U5 FreeNAS, anyone else jump ship to the TrueNAS train yet?

(Paging CommieGIR)
Jump to FreeBSD instead 😈

BlankSystemDaemon
Mar 13, 2009



I swear, I didn't know this was going to happen, but following the conversation VostokProgram and I had on paging, Mark Johnson - probably one of the smartest people in the FreeBSD project - wrote an article on how FreeBSD handles swap.

Chilled Milk
Jun 22, 2003

No one here is alone,
satellites in every home

Crunchy Black posted:

I'm on 11.3U5 FreeNAS, anyone else jump ship to the TrueNAS train yet?

(Paging CommieGIR)

The in-place upgrade crapped out on me, but doing the upgrade from the install media worked. It's been running fine, and aside from the branding and minor web UI improvements it's about the same.

Actuarial Fables
Jul 29, 2014

Taco Defender

Crunchy Black posted:

I'm on 11.3U5 FreeNAS, anyone else jump ship to the TrueNAS train yet?

(Paging CommieGIR)

I did a few months ago. So far I have only run into two (non-critical) issues, one of which (reporting timing out) has been fixed and the other (S.M.A.R.T values not showing up in the WebUI) will be fixed in the next stable update.

Best new feature for me is TOTP 2FA.

Enos Cabell
Nov 3, 2004


Enos Cabell posted:

I passed on buying some sale priced 12tb externals because I'm dumb and want to hit a full year of uptime before I do another upgrade. Last time it was down was when I was simulating power failures with my new UPS. Still very happy with how easy Unraid makes everything.



Should never have posted this, tremendous self own incoming:

Installed the beta branch of Unraid since the old nvidia plugin was deprecated and rolled into the OS. Apparently my flash boot drive died at some point, so reboot failed. No problem, grabbed a new flash drive, restored backup, and transferred my license to the new USB drive. Figured all was well, slammed the start array button, and only then noticed that I restored the wrong backup. A backup made before I added two new drives, one of which replaced the old parity drive. Stopped the array immediately, but damage was done. Whatever was stored on that 8tb former parity drive is gone now =(

Fortunately it's all crap I can replace, but I will be kicking myself over that for a while.

Sir Bobert Fishbone
Jan 16, 2006

Beebort

Actuarial Fables posted:

the other (S.M.A.R.T values not showing up in the WebUI) will be fixed in the next stable update.


This has been driving me nuts for the last 2 days and I never even stopped to think that it might be a bug.

Actuarial Fables
Jul 29, 2014

Taco Defender

Sir Bobert Fishbone posted:

This has been driving me nuts for the last 2 days and I never even stopped to think that it might be a bug.

https://jira.ixsystems.com/browse/NAS-107395

You can still view the values through the CLI using smartctl, and email alerts still trigger on failed self-tests.

IOwnCalculus
Apr 2, 2003





Enos Cabell posted:

Should never have posted this, tremendous self own incoming:

Installed the beta branch of Unraid since the old nvidia plugin was deprecated and rolled into the OS. Apparently my flash boot drive died at some point, so reboot failed. No problem, grabbed a new flash drive, restored backup, and transferred my license to the new USB drive. Figured all was well, slammed the start array button, and only then noticed that I restored the wrong backup. A backup made before I added two new drives, one of which replaced the old parity drive. Stopped the array immediately, but damage was done. Whatever was stored on that 8tb former parity drive is gone now =(

Fortunately it's all crap I can replace, but I will be kicking myself over that for a while.

This seems like a pretty big design flaw in Unraid, to be fair.

BlankSystemDaemon
Mar 13, 2009



If it's a parity drive, surely there shouldn't be anything that can't be rebuilt from the data striped across the rest of the array?
That's the point of RAID3/4 - that there's a single drive (or a mirror of two drives, in the case of UnRAID?) with the parity on it - as opposed to distributed parity with RAID5/6/7?

Enos Cabell
Nov 3, 2004


Well I'm still in the process of rebuilding parity and adding the old drives back. I suspect the data from the 8tb is gone but I guess we'll see.

Rooted Vegetable
Jun 1, 2002

Enos Cabell posted:

Should never have posted this, tremendous self own incoming

I'd post precise recreation steps on the Unraid forums just so the potential for human error is known. I'm a bit surprised there wasn't at least a check that the 8tb drive was expected to be parity but no longer is, keeping the array stopped until you could review it.

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

BlankSystemDaemon posted:

If it's a parity drive, surely there shouldn't be anything that can't be rebuilt from the data striped across the rest of the array?
That's the point of RAID3/4 - that there's a single drive (or a mirror of two drives, in the case of UnRAID?) with the parity on it - as opposed to distributed parity with RAID5/6/7?

Unraid is not RAID3/RAID4/RAID-anything, so stop trying to talk like the two are at all similar.

H110Hawk
Dec 28, 2006

Enos Cabell posted:

Should never have posted this, tremendous self own incoming:

Installed the beta branch of Unraid since the old nvidia plugin was deprecated and rolled into the OS. Apparently my flash boot drive died at some point, so reboot failed. No problem, grabbed a new flash drive, restored backup, and transferred my license to the new USB drive. Figured all was well, slammed the start array button, and only then noticed that I restored the wrong backup. A backup made before I added two new drives, one of which replaced the old parity drive. Stopped the array immediately, but damage was done. Whatever was stored on that 8tb former parity drive is gone now =(

Fortunately it's all crap I can replace, but I will be kicking myself over that for a while.

If this resulted in data loss that's a critical bug. Unraid should be storing a copy of the current disk metadata setup on every disk. Restoring the boot drive from backup should not destroy your array.

Enos Cabell
Nov 3, 2004


Hmm, well I don't know for certain yet that any data is lost, but I'll lay out the steps I've taken so far. If anything is lost it's likely due to a mistake on my part.

- Restored backup from when server had 6 8tb drives, 1 parity and 5 data. In the past year I added 2 12tb drives, one of which became the new parity and the old 8tb parity drive was cleaned and added to the array as a data drive.

- Backup reordered the drive assignments, put the old 8tb parity drive back in the parity slot and left the two 12tb drives unmounted. I hosed up by not noticing this before starting the array.

- Started array; this triggered a parity check/rebuild. I noticed my mistake and hit stop array, but it had already nuked the file system and started rebuilding as a parity drive by then.

- Tried to mount the 2 12tb drives and the 8tb former parity under unassigned devices. 1 12tb mounted and had all its data; the other two drives were not mountable (which I expected for parity drives).

- Moved the parity 12tb to the parity slot; it would not let me add the other 12tb or 8tb as data drives at the same time, so those remained in unassigned for now. Started array, which began a parity check/rebuild. The rebuild took just over 24 hours and reported it finished with 0 errors. None of the data that was on the 12tb or 8tb drives is present after the rebuild.

- Moved the 8tb drive to the array; it warned that it would have to be formatted/cleared. Took about 12 hours, and it is now added to the array.

- 12tb data drive said it also needs to be cleared, but since I can still get to those files I am copying them over to the array first. Still on this step which is probably going to take about 24 hours to finish copying.

After everything is done copying I plan to add the 12tb drive to array and then run another parity check. Right now I suspect the data from that 8tb drive is lost, but possibly this could restore it?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Enos Cabell posted:

- Backup reordered the drive assignments, put the old 8tb parity drive back in the parity slot and left the two 12tb drives unmounted. I hosed up by not noticing this before starting the array.

- Started array, this triggered a parity check/rebuild. I noticed my mistake and hit stop array but it had already nuked the file system and started rebuilding as a parity drive by then

This right here is where I'd say there's an enormous flaw with the system design if it doesn't throw up warnings along the lines of "you are trying to restore a drive configuration that does not match the last known configuration for this array" and/or other notifications that you're attempting to restore a different number of drives than there should be. Said array configuration metadata should be stored on each drive so that you're not relying solely on the OS's information, expressly to protect against cases like this where you are rolling back the OS.

Like, you should have had to click through at least one or two "Doing this may result in data loss!" warnings, IMO.

Crunchy Black
Oct 24, 2017

by Athanatos
Hm, well I might just power up my cold backup shelf, do an rsync, and then do a hard reinstall once the next release hits, then. I've got 64 more GB sitting on my desk I need to install into the old girl anyway. I'll never have to touch my L2ARC again. :smuggo:

H110Hawk
Dec 28, 2006

Enos Cabell posted:

- Restored backup from when server had 6 8tb drives, 1 parity and 5 data. In the past year I added 2 12tb drives, one of which became the new parity and the old 8tb parity drive was cleaned and added to the array as a data drive.

- Backup reordered the drive assignments, put the old 8tb parity drive back in the parity slot and left the two 12tb drives unmounted. I hosed up by not noticing this before starting the array.

Basically the backup should not be the authority on the drive configuration as it knew it. It should be reading a copy off each disk and comparing a monotonic/vector-based version counter to the UUIDs of the disks. If they disagree, it should make you make a decision. The backup of Unraid itself should be the metadata of last resort. If it cannot reconcile itself, it should print out the label information so that you can compare it to the disks (serial+model are exposed to the OS and on the sticker). If you lose data, this should be a P1 bug to Unraid. Losing a lovely flash boot medium is probably the most common form of failure for Unraid.
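
A minimal sketch of the reconciliation scheme described above, with every name invented for illustration (this is not Unraid's actual design): each disk carries an array-config label plus a monotonic generation counter, and the boot-drive backup only wins when no disks survive to vote.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DiskLabel:
    uuid: str        # per-disk identity
    generation: int  # bumped on every config change
    config: tuple    # ((uuid, role), ...) for the whole array

def reconcile(disk_labels, backup_label):
    """Return (config, needs_operator). needs_operator=True means the
    software should stop and print label info (serial+model) so a human
    can compare it to the physical disks before the array starts."""
    if not disk_labels:
        # Nothing but the boot-drive backup survives: metadata of last resort.
        return backup_label.config, True
    newest = max(disk_labels, key=lambda l: l.generation)
    peers = [l for l in disk_labels if l.generation == newest.generation]
    if all(l.config == newest.config for l in peers):
        # The disks agree among themselves; a stale boot-drive backup
        # must not win, but the mismatch is worth flagging to the operator.
        return newest.config, backup_label.generation < newest.generation
    return None, True  # disks disagree: refuse to auto-start the array
```

Under this sketch, restoring an old boot backup surfaces as `needs_operator=True` instead of silently re-assigning a data drive back into the parity slot.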

BlankSystemDaemon
Mar 13, 2009



EVIL Gibson posted:

Unraid is not RAID3/RAID4/RAID-anything, so stop trying to talk like the two are at all similar.
What, exactly, does unraid do, then?
Some hand-wavey thing about file-level parity, with a bunch of caveats?

Crunchy Black posted:

Hm, well I might just power up my cold backup shelf, do a rsync and then do a hard reinstall once the next release hits, then. I've got 64 more GB sitting on my desk I need to install into the old girl anyway. I'll never have to touch my L2ARC again. :smuggo:

RAIDz1, with disks how big, and with what URE rate?

Because the calculations of drive size*array width/URE rate still apply, and RAIDz1 is no exception even if UREs won't kill the entire array like it will a traditional RAID array.
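That rule of thumb can be put to numbers. A back-of-envelope sketch (all figures illustrative, not anyone's actual array): the chance of hitting at least one URE while reading every surviving drive end to end during a rebuild.

```python
import math

def p_ure_during_rebuild(drive_tb, surviving_drives, ure_per_bit=1e-14):
    """Probability of at least one URE while reading every surviving
    drive in full, treating bit errors as independent at the spec'd
    rate (vendors quote e.g. '<1 error per 10^14 bits read')."""
    bits_read = drive_tb * 1e12 * 8 * surviving_drives
    # 1 - (1 - p)^n, via log1p/expm1 to stay numerically sane
    return -math.expm1(bits_read * math.log1p(-ure_per_bit))

# Five 8TB drives, one failed: the rebuild reads the four survivors in full.
p = p_ure_during_rebuild(8, 4)             # roughly 0.92 at the 1e-14 spec
p_pro = p_ure_during_rebuild(8, 4, 1e-15)  # roughly 0.23 at the 1e-15 spec
```

The order-of-magnitude gap between spec sheets is exactly what swings this number from "near-certain" to "unlikely", which is why the published rate matters so much for wide single-parity arrays.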

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

BlankSystemDaemon posted:

RAIDz1, with disks how big, and with what URE rate?

Because the calculations of drive size*array width/URE rate still apply, and RAIDz1 is no exception even if UREs won't kill the entire array like it will a traditional RAID array.

Just remember that published URE guarantees are generally highly pessimistic. WD Red Pros post a <1 in 10^15 bits error rate, and you're not gonna convince me that the WD Red non-Pros are so much different that their posted <1 in 10^14 rate means they're actually 10x worse, or we'd be hearing about UREs from >8TB non-Pros on a regular basis. Honestly, though I've yet to see anyone crack 'em open to find out, I wouldn't be surprised if the Pro was exactly the same internally as the non-Pro, just with a 7200RPM motor.

Most of us here are storing media as the predominant data type. Losing a chunk of a file to a URE is annoying, but a corrupted frame or two in a movie ain't the biggest of deals, and it's often a perfectly reasonable risk to run in exchange for not losing another drive to parity.

Crunchy Black
Oct 24, 2017

by Athanatos
I was mostly joking about not touching the L2ARC, hence the smiley. This is definitely a home/non-prod situation. I don't even have any VMs using it as storage at the moment; it spends 99.97% of its time idle. To reiterate, I have a cold MD1000 I back up to about every quarter.

To DrDork's point, it's mostly streaming media and my .GIF storage folder (which I'm pretty sure just gets kept in memory anyway lol) that gets accessed with any regularity.

Disks are 8TB Reds about a year old and of course this being a full-Xeon platform, RAM is buffered ECC.
e: and for full disclosure, it has a 128GB l2arc NVMe SSD

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

BlankSystemDaemon posted:

What, exactly, does unraid do, then?
Some hand-wavey thing about file-level parity, with a bunch of caveats?


Yup.

Its ability to recover files comes from copying each file to at least one other drive.

RAID calculates parity to allow rebuilding the data directly from the other surviving data, without having to search for and copy whole files.

There are parity disks in unraid, but the scheme is modified to allow the use of different-sized disks and is much slower. It is not RAID at all; it uses the JBOD (just a bunch of disks) methodology where you can just throw disks in and it will work.

BurgerQuest
Mar 17, 2009

by Jeffrey of YOSPOS
I mean, it's in the name.

For most of us using it, it presents a good trade-off between functionality, ease of use, reliability, and the fact that we had a bunch of random old hardware to start with.

I don't use it for backing up data; it serves plex and media, and if it died tomorrow it'd suck, but I don't have the inclination to further back up my media.

H2SO4
Sep 11, 2001

put your money in a log cabin


Buglord

GreenBuckanneer posted:

I hear it's good for that, and also as an email server. The latter piqued my interest, as it would be nice to self-own an email server and email address, if that's possible.


You generally don't want to deal with the headaches of hosting your own mail server, especially one on a residential IP block, unless you have good reason. A fair number of ISPs block inbound SMTP by default for residential customer IP space to try and stem the tide of spam.

HalloKitty
Sep 30, 2005

Adjust the bass and let the Alpine blast

EVIL Gibson posted:

Yup.

Its ability to recover files comes from copying each file to at least one other drive

Hm, but that's not the case; it doesn't do simple duplication. You have one or two dedicated parity drives, and any drive can fail and be rebuilt, just like a normal RAID array. The parity is above the file system layer, as opposed to below it like normal RAID, and the parity isn't striped; it's on specific drives (as mentioned before).

It's not crazy slow, especially with reconstruct write enabled. But a high-end solution it is not, and it's not sold like that either. It's sold as an alternative to buying an off-the-shelf black-box NAS, whilst allowing a ton of flexibility that works well for a home user.

HalloKitty fucked around with this message at 23:28 on Jan 17, 2021

BlankSystemDaemon
Mar 13, 2009



HalloKitty posted:

Hm, but that's not the case; it doesn't do simple duplication. You have one or two dedicated parity drives, and any drive can fail and be rebuilt, just like a normal RAID array. The parity is above the file system layer, as opposed to below it like normal RAID, and the parity isn't striped; it's on specific drives (as mentioned before).

It's not crazy slow, especially with reconstruct write enabled. But a high-end solution it is not, and it's not sold like that either. It's sold as an alternative to buying an off-the-shelf black-box NAS, whilst allowing a ton of flexibility that works well for a home user.
If it's stored on one drive, that means it's not distributed, rather than not striped.
If it's stored on two, that likely means it's mirrored - i.e. not an XOR or Galois-field matrix calculation that can recover both P and Q.

Raymond T. Racing
Jun 11, 2019

BlankSystemDaemon posted:

If it's stored on one drive, that means it's not distributed, rather than not striped.
If it's stored on two, that likely means it's mirrored - i.e. not an XOR or Galois-field matrix calculation that can recover both P and Q.

Double parity lets you recover two failures without loss of data in Unraid.

The saving grace of it IMO is that failures higher than your parity only lose data on those drives, rather than "say goodbye to all your data"

Yaoi Gagarin
Feb 20, 2014

BlankSystemDaemon posted:

I swear, I didn't know this was going to happen, but following the conversation VostokProgram and I had on paging, Mark Johnson - probably one of the smartest people in the FreeBSD project - wrote an article on how FreeBSD handles swap.

Good article! Makes me wonder if TrueNAS enables swap.

BlankSystemDaemon
Mar 13, 2009



multiple people posted:

:words: extolling the amazing abilities of unraid
I really wish I had some source code to read, because I keep getting different mutually-exclusive descriptions, and some which are not technically possible the way people describe them.
Granted, my understanding isn't perfect, but to the best of my knowledge you cannot have P+Q recovery without distributed parity records, while also getting the best advantages of a SPAN array without any of the downsides, and at the same time having mirrored parity drives.
Computers aren't magic, and mathematics don't work like that.

Reed-Solomon encoding, which is the basis for error correction on everything from ECC memory, CD/DVD media, and hardware RAID arrays to CPU caches, and which is used in a lot of other places including WiMax, digital video broadcast, DSL, and ZFS, is what provides P+Q parity (and P+Q+R, although that's a modification - you can't go beyond P+Q+R without pessimizing performance so much that it's not practical, even for relatively slow I/O like spinning rust), and simply mirroring the parity drive when you have "file-level RAID4 that's also a SPAN array" doesn't work.

If the unraid creators had come up with something that can achieve the same recovery for much less computing power, they could've patented it and made so much money that they wouldn't need to charge for unraid, but could release it under any license they wanted.
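
For the curious, the P+Q scheme being referenced can be sketched in a few lines. This is a toy GF(2^8) illustration of the standard RAID-6-style math (plain XOR parity P plus a weighted Reed-Solomon-style syndrome Q), not anyone's shipping code; it shows why two erasures are recoverable without any mirroring.

```python
POLY = 0x11d  # reducing polynomial commonly used for RAID-6 GF(256)

def gf_mul(a, b):
    """Carry-less multiply in GF(256), reduced mod POLY."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= POLY
        b >>= 1
    return r

def gf_pow(a, n):
    r = 1
    for _ in range(n):
        r = gf_mul(r, a)
    return r

def gf_inv(a):
    return gf_pow(a, 254)  # a^254 = a^-1, since a^255 = 1 in GF(256)

def pq(stripe):
    """P is plain XOR; Q weights disk i by g^i with generator g=2."""
    p = q = 0
    for i, d in enumerate(stripe):
        p ^= d
        q ^= gf_mul(gf_pow(2, i), d)
    return p, q

def recover_two(stripe, x, y, p, q):
    """Rebuild erased disks x and y from the survivors plus P and Q."""
    a, b = p, q
    for i, d in enumerate(stripe):
        if i not in (x, y):
            a ^= d                       # a becomes d_x ^ d_y
            b ^= gf_mul(gf_pow(2, i), d)  # b becomes g^x*d_x ^ g^y*d_y
    # Solve the 2x2 linear system over GF(256):
    gx, gy = gf_pow(2, x), gf_pow(2, y)
    dx = gf_mul(b ^ gf_mul(gy, a), gf_inv(gx ^ gy))
    return dx, a ^ dx
```

The key point is that Q is a second, independent equation per stripe, which is what mirrored parity can never give you.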

VostokProgram posted:

Good article! Makes me wonder if TrueNAS enables swap.
Nope, swap on ZFS is not used very often in production systems - not just for FreeBSD, but for Illumos and Linux too.
The reason is complex, but it has to do with how ARC (and ZIL) are not strictly speaking part of the VM structures of the OS, in any of the implementations.
And even if it were, because of the way ZFS works, you would need to lower your VM watermark levels (i.e. the thresholds at which memory pressure triggers the VM's paging facility) to the point that you won't be making as efficient use of your memory for caching, without having a unified buffer cache.
Now, FreeBSD is probably the only OS to have a unified buffer cache, while also implementing ZFS - so Jeff Roberson might be looking into how to add ARC (and ZIL) to the unified buffer cache in FreeBSD. Which is cool, because it's a feature Illumos is unlikely to get for technical reasons (there's no unified buffer cache, and Solaris engineers never added one), and Linux even more so, for both technical and political reasons (aside from the fact that Linux has no unified buffer cache, we all know Linus' feelings on ZFS).

As an aside, it's kind of ironic that FreeBSD has a unified buffer cache, given that it dates back to the mid-90s and was made by David Greenman and John Dyson, who were at AT&T at the time - because AT&T was the company that later attempted to sue BSDi and the Regents of the University of California, and had to settle out of court because it turned out that AT&T had taken code from BSD, stripped it of copyright, and included it in UNIX.

BlankSystemDaemon fucked around with this message at 04:46 on Jan 18, 2021

H110Hawk
Dec 28, 2006

BlankSystemDaemon posted:

I really wish I had some source code to read, because I keep getting different mutually-exclusive descriptions, and some which are not technically possible the way people describe them.
Granted, my understanding isn't perfect, but to the best of my knowledge you cannot have P+Q recovery without distributed parity records, while also getting the best advantages of a SPAN array without any of the downsides, and at the same time having mirrored parity drives.
Computers aren't magic, and mathematics don't work like that

This was bothering me too, knowing unraid works at the file level yet has parity that can rebuild arbitrary drives. How do they do the chunking to know what the parity value is if there's no block boundary like raid4/5/6? Google got me a description but not code. The answer is apparently that they work parity at the device level? It says they do it bit by bit, and that's why they have to do zeroing.

https://wiki.unraid.net/Parity
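
The zeroing requirement follows directly from XOR having zero as its identity: an all-zero disk contributes nothing to the parity, so a pre-cleared disk can join the array without touching the existing parity. A tiny illustration (byte lists standing in for disks):

```python
from functools import reduce

def xor_parity(disks):
    # Byte-wise XOR down each column of the array.
    return [reduce(lambda a, b: a ^ b, col) for col in zip(*disks)]

d1 = [0x0F, 0xAA, 0x55]
d2 = [0xF0, 0x0C, 0x33]
before = xor_parity([d1, d2])
after = xor_parity([d1, d2, [0x00, 0x00, 0x00]])  # newly added, zeroed disk
# before == after: the parity is still valid with the zeroed disk in place
```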

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.
I guess you don't actually need to stripe the data over several drives just to use parity. You could have drives with completely unrelated data, one with Linux ISOs, second with movies, third with mp3s, and then a fourth drive with parity calculated from the rest of the drives.

Chilled Milk
Jun 22, 2003

No one here is alone,
satellites in every home
What's everyone using to run Docker on Free|TrueNAS nowadays? I had a decaying RancherOS VM from when that was supported but I need to migrate due to.. user error. I just want something I can feed my docker-compose yaml and have it just self update. I use Fedora desktop, so I was looking at Fedora Server or CoreOS, which seems ideal if I can figure out their 18 configuration formats. But if there's something simpler I'm all ears

BlankSystemDaemon
Mar 13, 2009



H110Hawk posted:

This was bothering me too, knowing unraid works at the file level yet has parity that can rebuild arbitrary drives. How do they do the chunking to know what the parity value is if it's not a block boundary like raid4/5/6. Google got me a description but not code. The answer is apparently that they work parity at the device level? It says they do it bit by bit and that's why they have to do zeroing.

https://wiki.unraid.net/Parity
So if it's bit-level parity, doesn't that mean that there's an implication that if you have, say, 4 drives full of 10TB each, your parity drive should be 40TB?

But at least this confirms they're not doing P+Q parity, whether distributed or not, because XOR is only capable of correcting one error - so that must mean the parity drive is simply mirrored in cases of "double"-parity.

There's still a lot of hand-waving in that article, like the mention of "Parity disk being valid (emphasis theirs) means that there is a parity disk present, and sometime in the past a parity sync completed without error", because that seems to indicate that there's an actual checksum attached to each record, yet there's no mention of that, which means it can't know that it's valid.

Saukkis posted:

I guess you don't actually need to stripe the data over several drives just to use parity. You could have drives with completely unrelated data, one with Linux ISOs, second with movies, third with mp3s, and then a fourth drive with parity calculated from the rest of the drives.
That's what I was getting at, I suppose - your parity drive will have to contain ALL of the data of all the other drives, for it to be bit-identical.

Yaoi Gagarin
Feb 20, 2014

BlankSystemDaemon posted:

So if it's bit-level parity, doesn't that mean that there's an implication that if you have, say, 4 drives full of 10TB each, your parity drive should be 40TB?


I don't think so? It only needs to be as large as the largest drive: XOR bit i from every drive, write to bit i of the parity drive.
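
That's exactly how single XOR parity over mixed-size drives works out: shorter drives contribute implicit zeros past their end, so the parity only has to match the largest drive. A quick sketch (byte lists standing in for drives; all names invented):

```python
from functools import reduce

def xor_parity(drives):
    size = max(len(d) for d in drives)
    # Short drives contribute implicit zeros past their end.
    padded = [list(d) + [0] * (size - len(d)) for d in drives]
    return [reduce(lambda a, b: a ^ b, col) for col in zip(*padded)]

def rebuild_lost(survivors, parity):
    # XOR the parity with every survivor to recover the missing drive.
    out = list(parity)
    for d in survivors:
        for i, b in enumerate(d):
            out[i] ^= b
    return out

drives = [[1, 2, 3, 4], [5, 6], [7]]
par = xor_parity(drives)               # len(par) == 4, the largest drive
rebuilt = rebuild_lost(drives[1:], par)
# rebuilt == [1, 2, 3, 4]: the lost largest drive comes back whole
```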

THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.
SpaceInvader One, who does a lot of very good Unraid tutorials, has a very basic guide to how parity works on Unraid here.
https://www.youtube.com/watch?v=HybwCOVDg9k

Raymond T. Racing
Jun 11, 2019

BlankSystemDaemon posted:

So if it's bit-level parity, doesn't that mean that there's an implication that if you have, say, 4 drives of full of 10TB each, that your parity drive should be 40TB?

But at least this confirms they're not doing P+Q parity, whether distributed or not, because XOR is only capable of correcting one error - so that must mean the parity drive is simply mirrored in cases of "double"-parity.

There's still a lot of hand-waving in that article, like the mention of "Parity disk being valid (emphasis theirs) means that there is a parity disk present, and sometime in the past a parity sync completed without error", because that seems to indicate that there's an actual checksum attached to each record, yet there's no mention of that, which means it can't know that it's valid.

That's what I was getting at, I suppose - your parity drive will have to contain ALL of the data of all the other drives, for it to be bit-identical.
The parity wiki article explicitly says at the bottom that dual parity is not just mirrored parity.

madsushi
Apr 19, 2009

Baller.
#essereFerrari

The Milkman posted:

What's everyone using to run Docker on Free|TrueNAS nowadays? I had a decaying RancherOS VM from when that was supported but I need to migrate due to.. user error. I just want something I can feed my docker-compose yaml and have it just self update. I use Fedora desktop, so I was looking at Fedora Server or CoreOS, which seems ideal if I can figure out their 18 configuration formats. But if there's something simpler I'm all ears

I just use an Ubuntu VM for that. I know Ubuntu enough that I can manage the "host" layer if I have to.


Hed
Mar 31, 2004

Fun Shoe
After a few years of working fine, I got these from my FreeNAS security output yesterday.
Reading around, it sounds like it could be a bad cable... should I just find the serial for ada2, open it up, and replace the SATA connector? Or go ahead and prepare for the worst?

The drives are 2 years old, onboard Intel SATA controller, 6 drives total in RAIDZ2 config.

code:
freenas.hed.lan kernel log messages:
> (ada2:ahcich3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 00 e0 c6 4c 40 7f 02 00 01 00 00
> (ada2:ahcich3:0:0:0): CAM status: Uncorrectable parity/CRC error
> (ada2:ahcich3:0:0:0): Retrying command
> (ada2:ahcich3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 00 e0 c7 4c 40 7f 02 00 01 00 00
> (ada2:ahcich3:0:0:0): CAM status: Uncorrectable parity/CRC error
> (ada2:ahcich3:0:0:0): Retrying command
> (ada2:ahcich3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 00 e0 c8 4c 40 7f 02 00 01 00 00
> (ada2:ahcich3:0:0:0): CAM status: Uncorrectable parity/CRC error
> (ada2:ahcich3:0:0:0): Retrying command
> (ada2:ahcich3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 00 e0 c9 4c 40 7f 02 00 01 00 00
> (ada2:ahcich3:0:0:0): CAM status: Uncorrectable parity/CRC error
> (ada2:ahcich3:0:0:0): Retrying command
> (ada2:ahcich3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 00 e0 ca 4c 40 7f 02 00 01 00 00
> (ada2:ahcich3:0:0:0): CAM status: Uncorrectable parity/CRC error
> (ada2:ahcich3:0:0:0): Retrying command
> (ada2:ahcich3:0:0:0): READ_FPDMA_QUEUED. ACB: 60 e8 e0 cb 4c 40 7f 02 00 00 00 00
> (ada2:ahcich3:0:0:0): CAM status: Uncorrectable parity/CRC error
> (ada2:ahcich3:0:0:0): Retrying command
