Farmer Crack-Ass
Jan 2, 2001

this is me posting irl

Farmer Crack-rear end posted:

Running into a weird issue, perhaps due to a weird config.

Weird update: spun up a new Win2012 VM, made the iSCSI connection... and it's working just fine. So wtf is Windows doing?

BlankSystemDaemon
Mar 13, 2009

Farmer Crack-rear end posted:

wtf is Windows doing?
That's pretty much always the question when it comes to IT, isn't it?

dox
Mar 4, 2006
Went ahead with an expansion of my existing Synology array, replacing a 2TB drive with a 6TB Red, making it 5x 4TB + 3x 6TB... but the expansion/repair is taking an extremely long time. I started it over 3 days ago and it's still at 26% on the parity consistency check.

mdstat shows:

quote:

md3 : active raid5 sdd6[8] sdh6[0] sdc6[6] sdf6[5] sdb6[4] sde6[7] sda6[2] sdg6[1]
11720968704 blocks super 1.2 level 5, 64k chunk, algorithm 2 [8/8] [UUUUUUUU]
[=====>...............] reshape = 26.5% (519001920/1953494784) finish=9176.6min speed=2604K/sec

Is this all normal, or should I be concerned? It's been a while since I expanded, but I don't recall it taking this long. Not sure if this is a symptom of a bad/old Synology, bad drive(s), just having a massive array, or something worse. At this rate, the expansion will finish something like 9-10 days after I started it...
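
For what it's worth, that finish estimate is just the remaining position blocks divided by the current speed, so my 9-10 day figure is at least self-consistent:

code:

# (total - done) position blocks at 2604K/sec, converted to minutes
echo $(( (1953494784 - 519001920) / 2604 / 60 ))   # ~9181 min, about 6.4 more days on top of the 3 already elapsed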

tonic
Jan 4, 2003

I’m currently expanding my Synology SHR volume from 5x10TB to 9x10TB. The parity check is taking FOREVER; it's at 34% after 4.5 days. I've never expanded a volume before, but does it really take two weeks just for the parity check? RAID scrubbing on the 5x10 has always been much quicker.

mdstat:

quote:

md3 : active raid5 sdd5[8] sdc5[7] sdb5[6] sda5[5] sdea5[0] sdee5[4] sded5[3] sdec5[2] sdeb5[1]
39046395904 blocks super 1.2 level 5, 64k chunk, algorithm 2 [9/9] [UUUUUUUUU]
[======>..............] reshape = 34.5% (3373564864/9761598976) finish=13982.1min speed=7614K/sec

tonic fucked around with this message at 16:13 on Apr 16, 2018

Dicty Bojangles
Apr 14, 2001

Converting my Syno 1817+ from RAID 5 (4x8TB) to RAID 6 while adding 2x8TB drives took a few hours short of 7 days, so yeah, it takes a while.

dox
Mar 4, 2006
Turns out the scheduler was running extended SMART tests on all disks during the rebuild :mad: With those out of the way, the estimate is down to ~36 hours.

Droo
Jun 25, 2003

For slow Synology parity checks you can "echo 50000 > /proc/sys/dev/raid/speed_limit_min" to make it try and devote more resources to the rebuild - by default I think it slows way down if any other activity is going on.

If your CPU is really slow though there's nothing you can do about it.
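
The full tuning pass looks something like this; the /proc paths are standard Linux mdraid, though I haven't verified them on every DSM version:

code:

# see the current per-device throttle floor and ceiling (KB/s)
cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max

# raise the floor so the rebuild keeps moving even under other I/O
echo 50000 > /proc/sys/dev/raid/speed_limit_min

# optionally raise the ceiling too, if that turns out to be the cap
echo 200000 > /proc/sys/dev/raid/speed_limit_max

# watch progress
cat /proc/mdstat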

Droo fucked around with this message at 22:07 on Apr 16, 2018

tonic
Jan 4, 2003

Droo posted:

For slow Synology parity checks you can "echo 50000 > /proc/sys/dev/raid/speed_limit_min" to make it try and devote more resources to the rebuild - by default I think it slows way down if any other activity is going on.

If your CPU is really slow though there's nothing you can do about it.

Thanks! This sped it up by 10-15%!

IOwnCalculus
Apr 2, 2003

Finishing up the last rebuild on mine tonight. The 8TB HGST drives from goharddrive have been perfect since I had to RMA that first pair.

4x8 + 4x5 as two RAIDZ vdevs, 39TB usable space. That'll probably last me until the server isn't useful anymore, and I've probably got a CPU upgrade lined up for the hell of it.

redeyes
Sep 14, 2002

by Fluffdaddy
Sorry, I'm lazy, but how much were those 8TB HGST units from goharddrive again?

IOwnCalculus
Apr 2, 2003

redeyes posted:

Sorry, I'm lazy, but how much were those 8TB HGST units from goharddrive again?

$679.96 shipped ($169.99 each) via Newegg, since they were on a small sale at the time. No tax or shipping for me.

redeyes
Sep 14, 2002

by Fluffdaddy
Gotcha, that's a deal for sure.

tonic
Jan 4, 2003

7 days into a Synology parity check/rebuild and suddenly it's been going at ~4x the speed for the last few hours. Is this normal?

quote:

md3 : active raid5 sdd5[8] sdc5[7] sdb5[6] sda5[5] sdea5[0] sdee5[4] sded5[3] sdec5[2] sdeb5[1]
39046395904 blocks super 1.2 level 5, 64k chunk, algorithm 2 [9/9] [UUUUUUUUU]
[==========>..........] reshape = 54.8% (5357195136/9761598976) finish=2142.6min speed=34259K/sec

redeyes
Sep 14, 2002

by Fluffdaddy
Offhand I'd say you have a hosed up drive in that array somewhere.

IOwnCalculus
Apr 2, 2003

Is the array, perhaps, about 54% full?

Twinty Zuleps
May 10, 2008

by R. Guyovich
Lipstick Apathy
My dog pulled my hard drive enclosure off the shelf and broke one of my drives. There wouldn't have been anything worth paying for data recovery on there, but I may never remember just how much random crap was destroyed in one accident. I'm ready to take the plunge for a NAS that I can keep in the safe corner of the boring room where he won't ever get excited near it.

It looks like Synology is the standard for standalone NAS boxes? If I get a 4-bay, can I put two independent drives in there now, and get 2 more to set up a RAID 5 or some such later on? I've heard not to buy all the drives for a RAID at once, but that was from an old grognard. Is that still good advice, or will I be safe with 4 WD Reds in one order?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Wulfolme posted:

It looks like Synology is the standard for standalone NAS boxes? If I get a 4-bay, can I put two independent drives in there now, and get 2 more to set up a RAID 5 or some such later on? I've heard not to buy all the drives for a RAID at once, but that was from an old grognard. Is that still good advice, or will I be safe with 4 WD Reds in one order?

Synology or QNAP, depending on what you're looking to do with them and what sort of deals you can get, yeah. 4-bay ones aren't cheap ($500+ without disks), especially if you intend to use one to host Plex and stream media to things other than your computer. If you're the tinkering type, it can be cheaper to roll a white-box build that'll be faster and more expandable (and physically larger, for good or for ill). If you aren't inclined to tinker, though, yeah, Synology or QNAP.

Synology's SHR (a RAID-like setup) explicitly supports expanding onto new drives gracefully. Pretty sure modern Synology and QNAP devices can also gracefully expand RAID 1 into RAID 5, but I'm not so sure about older ones.

Some people who are super sensitive about the potential for data loss will, in fact, buy their drives from different stores or at different times to ensure they get drives from different batches. The idea being that if there is some manufacturing defect in a specific batch, it won't take down all their drives at roughly the same time. In reality, this particular vulnerability is quite rare, but if it makes you feel better, it certainly won't hurt anything.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

D. Ebdrup posted:

So: your working set (i.e. the data you're actively using over a given period of time) doesn't fit into your system's memory; you have already maxed out how much memory the system allows for (so as to not lose too much ARC size to the 70-300 bytes that have to be allocated per block of L2ARC stored, which in turn depends on recordsize for individually written records, since that is variable up to the maximum set during pool creation); and you are prepared to accept the write penalty associated with using an L2ARC?
If you can answer yes to all these questions, then for the low low price of $125 you can buy it, provided it doesn't have too few DWPD.
(300TB TBW with an MTBF of 2 million hours and a warranty of 5 years doesn't seem like that much to me.)

The trouble with giving advice for anything ZFS is that you need to couch it in so many clauses: by the time you get to optimizations for a pool, you're already dealing with specific workloads where ZFS's defaults (and its position on defaults) don't apply, so advice can't be applied in general. That's why I tried turning it into a TV shopping commercial.

To bring this up again, my system config is going to be 32 GB of RAM on 8x8 TB RAIDZ2 or 2x4x8TB RAIDZ1. It would be dirt cheap for me to throw a 256 GB NVMe drive on there for cache/etc - like 1/5th the cost of going for an additional 32 GB of RAM to max my board out. Or are there settings I should tweak to force more aggressive caching/etc? I'm assuming ZFS has reasonable defaults...

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
Do you have some read-intensive workload you're going to need to deal with? I'm running 32GB with 8x8TB, and it's 100% fine as long as I'm not in the middle of a scrub, and even then it's usable, just not spectacular.

BlankSystemDaemon
Mar 13, 2009

Paul MaudDib posted:

To bring this up again, my system config is going to be 32 GB of RAM on 8x8 TB RAIDZ2 or 2x4x8TB RAIDZ1. It would be dirt cheap for me to throw a 256 GB NVMe drive on there for cache/etc - like 1/5th the cost of going for an additional 32 GB of RAM to max my board out. Or are there settings I should tweak to force more aggressive caching/etc? I'm assuming ZFS has reasonable defaults...
You have to remember that all the sectors of the hard drive have to be mapped into memory for the NVMe drive to function as an L2ARC. So assuming you have on the close order of 500 million sectors on your drive (not unlikely, if it isn't a 4K-sector drive, though it's hard to actually find information on that), you'll need more memory for ARC than you currently have in the box (about 34GB) to fully utilize your L2ARC, since each L2ARC header takes up around 70 bytes so far as I remember.

Internet Explorer
Jun 1, 2005

Wulfolme posted:

My dog pulled my hard drive enclosure off the shelf and broke one of my drives. There wouldn't have been anything worth paying for data recovery on there, but I may never remember just how much random crap was destroyed in one accident. I'm ready to take the plunge for a NAS that I can keep in the safe corner of the boring room where he won't ever get excited near it.

It looks like Synology is the standard for standalone NAS boxes? If I get a 4-bay, can I put two independent drives in there now, and get 2 more to set up a RAID 5 or some such later on? I've heard not to buy all the drives for a RAID at once, but that was from an old grognard. Is that still good advice, or will I be safe with 4 WD Reds in one order?

Just remember that RAID is not backup. You should have a backup of your NAS just like you should have had a backup of that hard drive.

Odette
Mar 19, 2011

Internet Explorer posted:

Just remember that RAID is not backup. You should have a backup of your NAS just like you should have had a backup of that hard drive.

What options are there? I only really know of tarsnap.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

D. Ebdrup posted:

You have to remember that all the sectors of the hard drive have to be mapped into memory for the NVMe drive to function as an L2ARC. So assuming you have on the close order of 500 million sectors on your drive (not unlikely, if it isn't a 4K-sector drive, though it's hard to actually find information on that), you'll need more memory for ARC than you currently have in the box (about 34GB) to fully utilize your L2ARC, since each L2ARC header takes up around 70 bytes so far as I remember.

Thanks. Man, ZFS really does love its RAM, doesn't it.

G-Prime posted:

Do you have some read-intensive workload you're going to need to deal with? I'm running 32GB with 8x8TB, and it's 100% fine as long as I'm not in the middle of a scrub, and even then it's usable, just not spectacular.

Not really, just future-proofing. I really hate to tear apart SFF PCs once they're together; they are incredibly fiddly.

I'm still thinking of putting an ADATA SX6000 512 GB in there, though, just for scratch space or Postgres indexes or something. Only an x2 drive, but I only have an x2 slot anyway...

(out of curiosity, how does that affect NVMe performance? Obviously it'll cut peak bandwidth in half, but I assume IOPS are more a function of latency than of connection speed/lane count, right?)

edit: also note that you can configure how aggressive ZFS is during a resilver or scrub... by default it's not very aggressive to avoid totally trashing performance, but you can turn it up to speed up the resilver/scrub if you aren't going to be using it at the same time
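
On ZFS-on-Linux 0.7.x the knobs live under /sys/module/zfs/parameters - these names are from memory, and the scan code (and its tunables) was reworked in 0.8, so double-check against your version:

code:

# give scrub/resilver bigger time slices per txg
echo 5000 > /sys/module/zfs/parameters/zfs_scan_min_time_ms
echo 5000 > /sys/module/zfs/parameters/zfs_resilver_min_time_ms

# don't back off just because there's other pool activity
echo 0 > /sys/module/zfs/parameters/zfs_scrub_delay
echo 0 > /sys/module/zfs/parameters/zfs_resilver_delay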

Paul MaudDib fucked around with this message at 10:18 on Apr 22, 2018

G-Prime
Apr 30, 2003

Baby, when it's love,
if it's not rough it isn't fun.
That's the other thing I should point out. I DID tune mine to be super aggressive on resilvers and scrubs, intentionally. Brought my scrub time down to about 12 hours, which means that it's generally done when nobody's using the array actively anyway.

Nevets
Sep 11, 2002

Be they sad or be they well,
I'll make their lives a hell
Finally got my FreeNAS box working (just not configured yet). Had to replace the iDRAC card on the motherboard because the firmware was unrecoverable (apparently a common thing to happen), but that part was fairly cheap; I just wish I hadn't spent hours and hours trying to fix it first. Tried 4 different computers before I got one that could flash new firmware onto my SAS controller card; it doesn't help that it wouldn't work in any Dell system, and that's the only thing we have at work. Then I had to reformat all my HDDs because the sector size was 520 bytes instead of 512, and that took a day & a night, but I now have a fairly good box for cheap:

$140 - Used R710 server with 2x X5560 2.8GHz CPUs, 16GB RAM, dual PSUs
$30 - New Dell H310 SAS controller card
$32 - 2x New Mini SAS Cables (SFF-8087 right angle to SFF-8087)
$300 - 6x Used HGST Ultrastar 3TB SAS drives
$36 - 6x Used HDD Trays
$9 - Used iDRAC6 Express

Total: $547
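
For anyone else who gets 520-byte-sector SAS drives: sg_format from the sg3_utils package is what did the reformat for me. Per drive it's something like the below (device name is just an example, it destroys all data, and it takes hours):

code:

# reformat a SAS drive from 520-byte to 512-byte sectors (wipes the drive)
sg_format --format --size=512 /dev/sg3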

DoctorTristan
Mar 11, 2006

I would look up into your lifeless eyes and wave, like this. Can you and your associates arrange that for me, Mr. Morden?
Welp one of my WD Reds just failed after only a few weeks of use. Time to get RMA-ing...

Odette
Mar 19, 2011

Nevets posted:

Finally got my FreeNAS box working (just not configured yet). Had to replace the iDRAC card on the motherboard because the firmware was unrecoverable (apparently a common thing to happen), but that part was fairly cheap; I just wish I hadn't spent hours and hours trying to fix it first. Tried 4 different computers before I got one that could flash new firmware onto my SAS controller card; it doesn't help that it wouldn't work in any Dell system, and that's the only thing we have at work. Then I had to reformat all my HDDs because the sector size was 520 bytes instead of 512, and that took a day & a night, but I now have a fairly good box for cheap:

$140 - Used R710 server with 2x X5560 2.8GHz CPUs, 16GB RAM, dual PSUs
$30 - New Dell H310 SAS controller card
$32 - 2x New Mini SAS Cables (SFF-8087 right angle to SFF-8087)
$300 - 6x Used HGST Ultrastar 3TB SAS drives
$36 - 6x Used HDD Trays
$9 - Used iDRAC6 Express

Total: $547

These SAS cables are a huge loving pain in the rear end. I did some research into replacing a Dell H700i with an H200i; apparently the cables from the H700i would be fine. Problem is, I'm flashing the H200i to LSI IT mode, so I can't have it in the R710 storage slot, which means one cable is long enough and the other isn't... :v:

suddenlyissoon
Feb 17, 2002

Don't be sad that I am gone.
I've currently maxed out my case & motherboard with 6 SATA drives running Xpenology. I've found a compatible PCI-E SATA card that will give me space for 4 more drives, but how would I house them? My current case, a Node 304, has two expansion slots, and I'd prefer to just run the four SATA cables out of one of them to an enclosure of some sort. Would that work, and if so, what sort of enclosure would be recommended? If not... anyone know of a good 10-bay case?

Nevets
Sep 11, 2002

Be they sad or be they well,
I'll make their lives a hell

Odette posted:

These SAS cables are a huge loving pain in the rear end. I did some research into replacing a Dell H700i with an H200i; apparently the cables from the H700i would be fine. Problem is, I'm flashing the H200i to LSI IT mode, so I can't have it in the R710 storage slot, which means one cable is long enough and the other isn't... :v:

Yeah, I bought the cables I did specifically because one of the Amazon reviews said they used them for the exact configuration I wanted.

AgentCow007
May 20, 2004
TITLE TEXT

DoctorTristan posted:

Welp one of my WD Reds just failed after only a few weeks of use. Time to get RMA-ing...

I had a bunch of failures right away with my first FreeNAS build, and it turned out to be cable chatter. I had cheaper SAS-to-SATA adapter cables, and the problem was solved by switching to Supermicro ones; for the drives connected to the motherboard I used SATA cables from Monoprice that were cheap but transparent, so I could see they were actually shielded. I'd be very surprised if WD Reds were dying instantly.

AgentCow007 fucked around with this message at 21:57 on Apr 24, 2018

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

AgentCow007 posted:

I'd be very surprised if WD Reds were dying instantly.

Even good drives like Reds have a non-zero failure rate, and HDDs traditionally follow a "bathtub"-style failure curve, so you'd expect to see a few die in short order every now and then, with the ones that survive past infant mortality going on to live good, full lives of storing bad porn and lovely DVD rips or whatever.

But you're right that bad cables absolutely are A Thing, and should be checked before assuming any more expensive bit is hosed up.

DoctorTristan
Mar 11, 2006

I would look up into your lifeless eyes and wave, like this. Can you and your associates arrange that for me, Mr. Morden?

AgentCow007 posted:

I had a bunch of failures right away with my first FreeNAS build, and it turned out to be cable chatter. I had cheaper SAS-to-SATA adapter cables, and the problem was solved by switching to Supermicro ones; for the drives connected to the motherboard I used SATA cables from Monoprice that were cheap but transparent, so I could see they were actually shielded. I'd be very surprised if WD Reds were dying instantly.

This was in a Synology box (DS918+). I certainly hope they’re using good quality connectors in that product line...

100% Dundee
Oct 11, 2004
Best Buy WD 8TB Easystores are back on sale at $150+tax for anyone looking to shuck some drives. Just went and grabbed a few more to fill up my DS1817+!

Now I just need to figure out how to add the disks to the pool and wait forever while it does its data magic.

Moey
Oct 22, 2010

I LIKE TO MOVE IT
I really don't need more this second, but damnit I want more.

MagusDraco
Nov 11, 2011

even speedwagon was trolled

DrDork posted:

Even good drives like Reds have a non-zero failure rate,

Yeah. I had to RMA two of the four 8TB Reds I bought a couple years ago, after one was DOA and the other ended up with a bad sector after testing. Oh well, the Newegg RMA was painless.

Varashi
Sep 1, 2006
THE MAN is limiting my BANDWIDTH :argh: [belgian goons]

Paul MaudDib posted:

Thanks. Man, ZFS really does love its RAM, doesn't it.

Eh, this is almost correct, but not quite. ZFS L2ARC needs a RAM entry per ZFS RECORD, not per disk sector. Most likely your recordsize is still the default 128K.

You'll be fine on RAM.
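
Back-of-envelope, assuming the default 128K recordsize and ~70 bytes per L2ARC header:

code:

# ARC overhead for a 256GB L2ARC full of 128K records
L2ARC=$((256 * 1024 * 1024 * 1024))   # 256 GiB in bytes
RECORD=$((128 * 1024))                # 128 KiB recordsize
echo $(( L2ARC / RECORD * 70 / 1024 / 1024 ))   # ~140 MiB of ARC headers

About 140 MiB, not tens of gigabytes - nothing to worry about at 32GB of RAM.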

Sheep
Jul 24, 2003
God bless Best Buy. Second time I've bought a set of Reds and they dropped the price a week later; a 2-minute online chat and boom, the price match refund is done.

wolfbiker
Nov 6, 2009
I've got a home server with 10 drives that has been running Windows 7 for many years, serving media to devices in the home. I've been thinking about ditching Windows and using something else. Ideally, I don't want to wipe out my existing drives, and I don't care about RAID or redundancy. I'd just want JBOD (individual drives, not pooled together), and to use the drives with the new OS without formatting them. Is there anything that can do this, or should I just reinstall Windows? I would need to be able to run SABnzbd, Sonarr, and CouchPotato. Thanks for any suggestions.

derk
Sep 24, 2004

wolfbiker posted:

I've got a home server with 10 drives that has been running Windows 7 for many years, serving media to devices in the home. I've been thinking about ditching Windows and using something else. Ideally, I don't want to wipe out my existing drives, and I don't care about RAID or redundancy. I'd just want JBOD (individual drives, not pooled together), and to use the drives with the new OS without formatting them. Is there anything that can do this, or should I just reinstall Windows? I would need to be able to run SABnzbd, Sonarr, and CouchPotato. Thanks for any suggestions.

What are the specs of your server?

redeyes
Sep 14, 2002

by Fluffdaddy
No, NTFS-formatted drives are strictly Windows territory. Either stick with Windows or get ready to start migrating to another file system.
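
That said, if the goal is migrating the data off, Linux can at least read NTFS through ntfs-3g (FUSE) in the meantime; a sketch, with hypothetical device and mount names:

code:

# mount an existing NTFS data drive read-only via ntfs-3g
mkdir -p /mnt/old
mount -t ntfs-3g -o ro /dev/sdb1 /mnt/old

# copy onto a native filesystem, then reformat the old drive afterwards
rsync -avh /mnt/old/ /srv/media/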
