THF13
Sep 26, 2007

Keep an adversary in the dark about what you're capable of, and he has to assume the worst.

Oysters Autobio posted:

Was considering an HBA card, and if this sort of setup makes it easy then I probably will get one.

These are really new hardware expansions for me. What do I need to do with my build to look for compatibility and/or performance when picking an HBA card? I'm seeing a decently priced ASR-71605 ADAPTEC 6GB/S SAS SATA PCI-E RAID card but have zero clue how to spec these against my mobo and build/form factor.

I was told to get LSI cards and avoid Adaptec ones, which was advice I followed, but I don't really know the reasons behind it.

You'll want one that's capable of being flashed to IT mode, but you'll probably find the seller advertising that it's been pre-flashed, so you won't need to do it yourself.

Different cards will use different numbers of PCIe lanes. I think you can use an 8x card in a 4x slot, but I'm unsure whether that only limits total bandwidth (the combined max speed of all drives, based on the number of lanes and the PCIe version), or whether it will cause errors or weirdness if you approach that limit.

You connect hard drives to the HBA with breakout cables that let you hook up 4 drives to each port on the HBA. The connector type differs depending on the HBA, so make sure you have the right one.

The naming convention of LSI HBAs is pretty simple. An LSI 92## card should be a PCIe gen 2 version, and a 93## will be gen 3. Gen 2 cards are probably still fine for spinning drives and are extremely cheap, but gen 3 ones have come down in price recently.
After the 9### model number, there will be another number and a letter. The number is how many drives it supports, and the letter indicates whether the connections are internal or external.
So a 9300-8i is a PCIe gen 3 card with 2 internal connectors that you can connect 8 drives to.
Don't assume PCIe lane requirements from the number of drive connections; look up the specific model's details.

You can use more drives than an HBA card supports with a SAS expander card. These are PCIe cards that just need power and a connection to the HBA, and they act as the hard drive equivalent of a network switch.

EDIT:
As a final note, some HBAs were designed for servers with the expectation that there would be airflow over them, and they can overheat without it, so a common recommendation is to zip tie a tiny fan onto the heatsink. Some don't need this, but I don't know of anywhere you could check.
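
(A quick way to sanity-check the PCIe lane question once a card is installed, on a Linux box at least, is to compare the LnkCap and LnkSta lines from lspci. This is just a sketch; the 01:00.0 address is made up and will differ on your system.)

code:
# Find the HBA's PCI address first
lspci | grep -i sas
# Then compare what the card supports vs. what the slot actually negotiated
sudo lspci -vv -s 01:00.0 | grep -E 'LnkCap|LnkSta'
# LnkCap: Speed 8GT/s, Width x8   <- what the card can do
# LnkSta: Speed 8GT/s, Width x4   <- what it actually got in a 4x slot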

Oysters Autobio
Mar 13, 2017
Hmm ok, I'll take a look. The one appealing thing about the Adaptec HBA I was looking at was that it wouldn't require any BIOS flashing, which I'm not keen on because I'm looking to assemble everything over the next week except for the HDDs I'm waiting on, so it'd be nice if I didn't have to redo anything when I get the HBA, unless I'm misunderstanding something about them.

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.

Oysters Autobio posted:

Hmm ok, I'll take a look. The one appealing thing about the Adaptec HBA I was looking at was that it wouldn't require any BIOS flashing, which I'm not keen on because I'm looking to assemble everything over the next week except for the HDDs I'm waiting on, so it'd be nice if I didn't have to redo anything when I get the HBA, unless I'm misunderstanding something about them.

You can also buy a Broadcom HBA that is designed to be an HBA and not a hardware RAID controller, like a 9300 or 9400, and just use that. No flashing required.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
Back to a project that went on the backburner. Migration to new TrueNAS server.

I have enough new/unused disks to create a 6 disk RAIDZ2 vdev that will hold existing data.

After that migration of data, I'll end up with some disks from the old box that I'll be reusing, so I'll add another 6-disk RAIDZ2 vdev to that new pool.

After that, I would like to rebalance the pool. Anyone done any "in-place rebalancing"?

https://github.com/markusressel/zfs-inplace-rebalancing

BlankSystemDaemon
Mar 13, 2009



That script just does what zfs send/receive does, but worse because it re-computes all checksums, and takes significantly longer.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

BlankSystemDaemon posted:

That script just does what zfs send/receive does, but worse because it re-computes all checksums, and takes significantly longer.

How would send/receive help me in this current situation though? Without bringing in some temporary storage as a dumping ground for a two-transfer migration.

I don't see any benefit? Unless I am missing some feature of ZFS send/receive (which I admittedly do not use).

Edit:

Oooo, are you suggesting zfs send/receive within the same pool?

Same concept as the script, so every file does a copy then delete (of original)?

I was not aware send/receive could operate within a single pool (if that is what you were hinting at).

Double edit:

Like this? https://forum.proxmox.com/threads/zfs-send-recv-inside-same-pool.119983/post-521105

Moey fucked around with this message at 07:16 on Mar 25, 2024

Oysters Autobio
Mar 13, 2017

THF13 posted:

I was told to get LSI cards and avoid Adaptec ones, which was advice I followed, but I don't really know the reasons behind it.

You'll want one that's capable of being flashed to IT mode, but you'll probably find the seller advertising that it's been pre-flashed, so you won't need to do it yourself.

Different cards will use different numbers of PCIe lanes. I think you can use an 8x card in a 4x slot, but I'm unsure whether that only limits total bandwidth (the combined max speed of all drives, based on the number of lanes and the PCIe version), or whether it will cause errors or weirdness if you approach that limit.

You connect hard drives to the HBA with breakout cables that let you hook up 4 drives to each port on the HBA. The connector type differs depending on the HBA, so make sure you have the right one.

The naming convention of LSI HBAs is pretty simple. An LSI 92## card should be a PCIe gen 2 version, and a 93## will be gen 3. Gen 2 cards are probably still fine for spinning drives and are extremely cheap, but gen 3 ones have come down in price recently.
After the 9### model number, there will be another number and a letter. The number is how many drives it supports, and the letter indicates whether the connections are internal or external.
So a 9300-8i is a PCIe gen 3 card with 2 internal connectors that you can connect 8 drives to.
Don't assume PCIe lane requirements from the number of drive connections; look up the specific model's details.

You can use more drives than an HBA card supports with a SAS expander card. These are PCIe cards that just need power and a connection to the HBA, and they act as the hard drive equivalent of a network switch.

EDIT:
As a final note, some HBAs were designed for servers with the expectation that there would be airflow over them, and they can overheat without it, so a common recommendation is to zip tie a tiny fan onto the heatsink. Some don't need this, but I don't know of anywhere you could check.

I decided to go with the Adaptec, mainly because of reviews on them like here. Snagged a used one off eBay for around $35 USD.

Seems like the only downside I could see with these Adaptecs is very much needing to attach a 40mm fan to prevent overheating. But looking at other options, it's a really decent price point for something that supports up to 16 HDDs, is on PCIe 3.0, and doesn't require re-flashing when switching between IR/IT modes. Someone in that thread also showed an easy way to use two unused threaded holes on the heatsink to mount a small 40mm fan, so I like that over any kind of zip ties.

Oysters Autobio fucked around with this message at 04:19 on Mar 25, 2024

BlankSystemDaemon
Mar 13, 2009



Moey posted:

How would send/receive help me in this current situation though? Without bringing in some temporary storage as a dumping ground for a two-transfer migration.

I don't see any benefit? Unless I am missing some feature of ZFS send/receive (which I admittedly do not use).

Edit:

Oooo, are you suggesting zfs send/receive within the same pool?

Same concept as the script, so every file does a copy then delete (of original)?

I was not aware send/receive could operate within a single pool (if that is what you were hinting at).

Double edit:

Like this? https://forum.proxmox.com/threads/zfs-send-recv-inside-same-pool.119983/post-521105
Yep, got it in one; zfs send -R tank/olddataset@snapshot | mbuffer | zfs receive tank/newdataset, then once it's done you delete the old one and rename the new one.

If you’re smart about it, you enable zpool checkpoint until you’re satisfied that everything made it over - that way, you can revert administrative changes like dataset removal.
Just don’t forget to turn it off again.

BlankSystemDaemon fucked around with this message at 11:37 on Mar 25, 2024

Moey
Oct 22, 2010

I LIKE TO MOVE IT

BlankSystemDaemon posted:

Yep, got it in one; zfs send -R tank/olddataset@snapshot | mbuffer | zfs receive tank/newdataset, then once it's done you delete the old one and rename the new one.

If you’re smart about it, you enable zpool checkpoint until you’re satisfied that everything made it over - that way, you can revert administrative changes like dataset removal.
Just don’t forget to turn it off again.

Neato. Gracias.

I'll do some testing before migrating data and letting it rip on the actual "final" disk layout.

BlankSystemDaemon
Mar 13, 2009



Moey posted:

Neato. Gracias.

I'll do some testing before migrating data and letting it rip on the actual "final" disk layout.
One trick I was taught early was to truncate a small handful of files, give them GEOM gate devices so they're exposed via devfs the same way memory devices are, and create a testing pool to try commands on.

I still periodically do it if it's been a while since I've done some administrative task and want to make sure I'm doing it right.
This, of course, goes hand in hand with using the -n flag at least once before running it without, on any administrative command.
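
A rough FreeBSD-flavored sketch of that trick (file paths, sizes, and unit numbers are all made up):

code:
truncate -s 1g /tmp/d0.img /tmp/d1.img /tmp/d2.img /tmp/d3.img
ggatel create -u 0 /tmp/d0.img    # exposes /dev/ggate0
ggatel create -u 1 /tmp/d1.img
ggatel create -u 2 /tmp/d2.img
ggatel create -u 3 /tmp/d3.img
zpool create -n scratch raidz2 ggate0 ggate1 ggate2 ggate3   # -n = dry run, just prints the layout
zpool create scratch raidz2 ggate0 ggate1 ggate2 ggate3
# ...try whatever administrative command you're unsure about on the throwaway pool...
zpool destroy scratch
ggatel destroy -u 0    # and likewise for units 1-3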

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

One trick I was taught early was to truncate a small handful of files, give them GEOM gate devices so they're exposed via devfs the same way memory devices are, and create a testing pool to try commands on.

I still periodically do it if it's been a while since I've done some administrative task and want to make sure I'm doing it right.
This, of course, goes hand in hand with using the -n flag at least once before running it without, on any administrative command.

Something about playing with a large stack of real disks (before putting them to their final use) feels good, though. I can't explain why.

BlankSystemDaemon
Mar 13, 2009



Computer viking posted:

Something about playing with a large stack of real disks (before putting them to their final use) feels good, though. I can't explain why.
But truncate can do arbitrary-sized files???

Computer viking
May 30, 2011
Now with less breakage.

BlankSystemDaemon posted:

But truncate can do arbitrary-sized files???

Sure, but they don't make fun disk access noises. :colbert:

BlankSystemDaemon
Mar 13, 2009



Computer viking posted:

Sure, but they don't make fun disk access noises. :colbert:
If you can hear the disk access noises, you should be wearing hearing protection against the fan noise from all the fans in the rack :c00lbert:

BlankSystemDaemon fucked around with this message at 15:20 on Mar 26, 2024

IOwnCalculus
Apr 2, 2003





BlankSystemDaemon posted:

If you can hear the disk access noises, you should be wearing hearing protection against the fan noise from all the fans in the rack :c00lbert:

Will someone pick up the phone?

Shumagorath
Jun 6, 2001

IOwnCalculus posted:

Will someone pick up the phone?
JBOD
HELLO

Talorat
Sep 18, 2007

Hahaha! Aw come on, I can't tell you everything right away! That would make for a boring story, don't you think?
Is there any way to merge two discrete zfs pools so that to the filesystem they appear as a single mount point? I’d rather not go to the trouble of moving specific files and folders to this new pool. Alternatively, any way to hardlink across filesystem boundaries?

Yaoi Gagarin
Feb 20, 2014

Can't hardlink but you could symlink, or use a bind mount.

But if you want to "merge" the pools - why even make a second pool, you could put those drives in as a new vdev in the original pool?
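
Either approach is basically a one-liner (mountpoints below are made up; the bind mount assumes Linux):

code:
# Option 1 - symlink: /mnt/pool1/media points at a directory on pool2
ln -s /mnt/pool2/media /mnt/pool1/media

# Option 2 - bind mount: looks like a real directory to every application
mkdir -p /mnt/pool1/media
mount --bind /mnt/pool2/media /mnt/pool1/media
# to persist it across reboots, add to /etc/fstab:
# /mnt/pool2/media  /mnt/pool1/media  none  bind  0  0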

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Yaoi Gagarin posted:

why even make a second pool, you could put those drives in as a new vdev in the original pool?

Because Wibla will call you dumb.

Wibla
Feb 16, 2011

I thought that was BSD's job :smith:

Multiple vdevs in one pool will lock you into a certain drive/pool layout though, be aware of that.

Talorat
Sep 18, 2007

Hahaha! Aw come on, I can't tell you everything right away! That would make for a boring story, don't you think?

Yaoi Gagarin posted:

Can't hardlink but you could symlink, or use a bind mount.

But if you want to "merge" the pools - why even make a second pool, you could put those drives in as a new vdev in the original pool?

Tell me more about that second option. What’s a vdev? Will this allow the single pool to have a single mount point?

BlankSystemDaemon
Mar 13, 2009



Wibla posted:

I thought that was BSD's job :smith:

Multiple vdevs in one pool will lock you into a certain drive/pool layout though, be aware of that.
Look buddy, I just manual-page-at-people here.

Talorat posted:

Tell me more about that second option. What’s a vdev? Will this allow the single pool to have a single mount point?
A ZFS pool consists of vdevs, each of which is its own RAID configuration, and data is spanned across multiple vdevs.
If you add a vdev to an existing pool, you expand the pool, and data will be distributed across the span such that the vdevs should end up being approximately equally full.

See zfsconcepts(7).
EDIT: Looking at it, I think this article from Klara explains it best.
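
In practice, expanding a pool with another vdev is one command (pool and device names made up; the -n dry run shows the resulting layout first):

code:
zpool add -n tank raidz2 da8 da9 da10 da11 da12 da13
zpool add tank raidz2 da8 da9 da10 da11 da12 da13
zpool list -v tank    # per-vdev size and how full each one is

The pool keeps its single mount point; the new vdev just adds capacity underneath it.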

BlankSystemDaemon fucked around with this message at 08:25 on Mar 28, 2024

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Talorat posted:

Tell me more about that second option. What’s a vdev? Will this allow the single pool to have a single mount point?

vdev = a single drive or a set of drives (single disk, mirror, RAIDZ, RAIDZ2, ...)

Pool (zpool) = a collection of one or more vdevs

If you create a pool with multiple vdevs and one vdev suffers a catastrophic failure (more disk failures than your level of parity), your pool's data is gone.


e:fb

Yaoi Gagarin
Feb 20, 2014

Would you mind pasting the output of `zpool status -v` here?

Computer viking
May 30, 2011
Now with less breakage.

As for adding another vdev to a pool: It's nice to avoid adding new vdevs to almost full pools, for performance reasons: The pool will prioritize the new vdev until they're about equally full, which slows down writes compared to spreading them equally over the entire pool. Reading back that data in the future will also be slower (since it's spread over fewer disks), but depending on your access pattern that may not be a problem. For bulk storage that's probably not a problem, doubly so if it's just connected over Gbit network.

IOwnCalculus
Apr 2, 2003





Computer viking posted:

As for adding another vdev to a pool: It's nice to avoid adding new vdevs to almost full pools, for performance reasons: The pool will prioritize the new vdev until they're about equally full, which slows down writes compared to spreading them equally over the entire pool. Reading back that data in the future will also be slower (since it's spread over fewer disks), but depending on your access pattern that may not be a problem. For bulk storage that's probably not a problem, doubly so if it's just connected over Gbit network.

This is absolutely a thing and while it's not problematic for people with hoards of Linux ISOs, actual production data is a whole different ballgame. I've seen the results from adding a single 2-drive mirror vdev to a nearly-full production pool that was already made up of ~20 vdevs; it was not pretty.

Pablo Bluth
Sep 7, 2007

I've made a huge mistake.
Does it give the performance of the single mirror vdev, or does it end up being even worse?

IOwnCalculus
Apr 2, 2003





If I remember right, it was the performance of the single vdev - amplified heavily by the fact that it was a pair of spinning disks trying to simultaneously handle the bulk of incoming writes and also the vast majority of the reads because the newest data was the most popular.

Canine Blues Arooo
Jan 7, 2008

when you think about it...i'm the first girl you ever spent the night with

Grimey Drawer
I've been out of the loop on all things 'NAS' and am wondering what the current recommendation is for a low-power, barebones kit. LTT kind of re-ignited my desire to actually set this up again, pointing out that this exists (https://wiki.friendlyelec.com/wiki/index.php/CM3588_NAS_Kit). This is pretty compelling and it's actually available from some skeezy 3rd party here, but I'm not sure what the other options are. I always imagined this as a box, but if I can avoid a big box and can instead stick a set of M.2 drives onto a small board that's sipping ~30W of power, that would be highly preferable.

My requirements are:

• Small-ish at least
• Low Power
• Has 4+ M.2 slots
• No proprietary nonsense.

Canine Blues Arooo fucked around with this message at 01:17 on Mar 31, 2024

insta
Jan 28, 2009
TopTon on AliExpress has a few N100-based boards that'll do what you want, and they run on a 12v barrel plug, drawing like 16w total.

madsushi
Apr 19, 2009

Baller.
#essereFerrari
I am building a new NAS (TrueNAS Core) and am trying to figure out the best approach for balancing drives.

I have qty 38 of 12 TB drives, and qty 34 of 18 TB drives.

How would you allocate these across vdevs?

I was thinking something like:

RAIDZ3
Qty 3 of 12-disk vdevs of 12 TB drives (36 drives)
Plus qty 2 spares

Qty 3 of 11-disk vdevs of 18 TB drives (33 drives)
Plus qty 1 spare

And just put that all into a big pool. RAIDZ3 for 11-12 drives seems fine, not burning too many drives for parity, and keeping the vdevs' spindle counts the same (although space will be ~25% different).
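
Back-of-the-envelope usable capacity for that layout, before ZFS overhead and slop space (just shell arithmetic on the numbers above):

code:
echo $(( 3 * (12 - 3) * 12 ))   # 12 TB vdevs: 3 vdevs x 9 data disks x 12 TB = 324 TB
echo $(( 3 * (11 - 3) * 18 ))   # 18 TB vdevs: 3 vdevs x 8 data disks x 18 TB = 432 TB
# per vdev that's 108 TB vs 144 TB, i.e. the 12 TB vdevs are ~25% smaller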

Talorat
Sep 18, 2007

Hahaha! Aw come on, I can't tell you everything right away! That would make for a boring story, don't you think?

Moey posted:

vdev = a single drive or a set of drives (single disk, mirror, RAIDZ, RAIDZ2, ...)

Pool (zpool) = a collection of one or more vdevs

If you create a pool with multiple vdevs and one vdev suffers a catastrophic failure (more disk failures than your level of parity), your pool's data is gone.


e:fb

Got it! Thanks. So I guess the main disadvantage would be that since the new vdev is going to be on an external DAS connected through a cable, if the DAS, HBA card, or cable failed, my entire pool would be down until I was able to fix it.

IOwnCalculus
Apr 2, 2003





madsushi posted:

I am building a new NAS (TrueNAS Core) and am trying to figure out the best approach for balancing drives.

I have qty 38 of 12 TB drives, and qty 34 of 18 TB drives.

How would you allocate these across vdevs?

I was thinking something like:

RAIDZ3
Qty 3 of 12-disk vdevs of 12 TB drives (36 drives)
Plus qty 2 spares

Qty 3 of 11-disk vdevs of 18 TB drives (33 drives)
Plus qty 1 spare

And just put that all into a big pool. RAIDZ3 for 11-12 drives seems fine, not burning too many drives for parity, and keeping the vdevs' spindle counts the same (although space will be ~25% different).

My inner OCD would want them all to be 11-disk vdevs, but I can't imagine it really matters when you're talking about that many spindles and spindle sizes that are that different between vdevs.


Talorat posted:

Got it! Thanks. So I guess the main disadvantage would be that since the new vdev is going to be on an external DAS connected through a cable, if the DAS, HBA card, or cable failed, my entire pool would be down until I was able to fix it.

Yes, the pool requires all vdevs to be at least healthy enough to read in order to mount. Though (knock on wood) DAS/HBA/SAS cable failures have been extremely rare for me compared to drive failures.

Henrik Zetterberg
Dec 7, 2007

Why is my Synology constantly flipping over to battery power? It's on a 3-week-old APC 850VA and it hadn't happened until a week or two ago. No issues with the electrical supply anywhere in the house, and my utility essentially never goes down unless there's a storm or some idiot runs his car into a substation.



edit: it did it again while I was typing this post. There's a clicking noise when it switches over.

Henrik Zetterberg fucked around with this message at 23:09 on Apr 5, 2024

Eletriarnation
Apr 6, 2005

People don't appreciate the substance of things...
objects in space.


Oven Wrangler
Sounds like a problem with the UPS; I can't think of how anything about the load would trigger that. I would contact APC support and ask them about it; since the unit is so new, they may just replace it proactively. I have a 1500W APC unit and I hear it spontaneously click twice a couple seconds apart once in a while, which I assume is some kind of self-test, but it doesn't generate alarms and is nowhere near that often - maybe weekly?

Eletriarnation fucked around with this message at 23:15 on Apr 5, 2024

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

Henrik Zetterberg posted:

Why is my Synology constantly flipping over to battery power? It's on a 3-week-old APC 850VA and it hadn't happened until a week or two ago. No issues with the electrical supply anywhere in the house, and my utility essentially never goes down unless there's a storm or some idiot runs his car into a substation.



edit: it did it again while I was typing this post. There's a clicking noise when it switches over.

Could be low voltage/high voltage on your lines if there's AVR on the system. Mine will click over to battery power to keep it closer to 115V if it goes down to about 113 or up to about 119. It happens more to me in the summer when the AC kicks on but also if there's storms or just weird power in the area.

shame on an IGA
Apr 8, 2005

The clicking when it switches over is normal; that's just mechanical relays, like a big robot finger mashing a pushbutton - that's how they work. As for why it's happening every ~4 minutes, it could be a highly sensitive voltage threshold, or you have some kind of big motor load starting up and causing a voltage sag. I would suggest pulling up a live log on your phone or laptop or some mobile device, then standing next to your HVAC unit, then your refrigerator, and then any other chest freezers etc., and seeing if any of those cutting on/off correlates with the UPS switchovers.

e: yeah, if it's like 5-10 seconds at a time, some compressor motor somewhere in your house almost certainly needs a new starter capacitor

shame on an IGA fucked around with this message at 01:08 on Apr 6, 2024

KS
Jun 10, 2003
Outrageous Lumpwad
The UPS logs will have detail on why it's switching.

There are also usually knobs to tweak in the UPS settings -- like if it's undervoltage you can widen the tolerance.

Henrik Zetterberg
Dec 7, 2007

I did a reboot on my Synology and it hasn't happened in 30 mins. :iiam:
Checking the PowerChute logs (if they exist) would have been my next step.

edit: nevermind it started doing that poo poo again

KS posted:

There are also usually knobs to tweak in the UPS settings -- like if it's undervoltage you can widen the tolerance.

Ahh this is good to know. Thanks!

Henrik Zetterberg fucked around with this message at 04:07 on Apr 6, 2024

Tiny Timbs
Sep 6, 2008

FYI for using an Intel ARC card in Plex: I spent ages trying to figure out why HW transcode wasn't working in Unraid and it turns out HDR Tone Mapping is broken. Turning that off allowed Plex to use the GPU.
