Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Moey posted:

Why is that? I thought everything related to SMR was handled by the drive's controller/firmware?

It is, but the firmware would still prefer to know the difference between "please write this data (all zeros)" and "this block no longer matters (you can overwrite it on the next pass)".

SMR drives have block-rewrite characteristics similar to the way SSDs handle erases: they can't rewrite a single block in place, only a whole track, just as SSDs have to erase and reprogram an entire block at a time. Or at least that's the model I'm going with, having heard that SMR drives work well with TRIM.
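On Linux you can check whether a drive advertises discard/TRIM support, and pass trims for freed filesystem blocks down to it, with something like this (device and mountpoint are placeholders):

pre:
lsblk -D /dev/sdX             # non-zero DISC-GRAN/DISC-MAX means the drive accepts discards
sudo fstrim -v /mnt/archive   # tells the drive which freed blocks no longer matter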


BlankSystemDaemon
Mar 13, 2009



Moey posted:

Why is that? I thought everything related to SMR was handled by the drive's controller/firmware?
Tim Feldman of Seagate did a presentation on SMR technology and its relationship with ZFS (pdf) that I think most people should read. It goes into the three types of SMR that exist and what works/needs fixing, as well as some other rather interesting things.

Star War Sex Parrot
Oct 2, 2003

Moey posted:

Why is that? I thought everything related to SMR was handled by the drive's controller/firmware?
There are drive-managed, host-aware, and host-managed SMR schemes. Consumers will likely only interact with the first. Also, I don't know if the last one is still a thing; it targeted very few customers.
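If you're curious which flavor a given drive presents, recent util-linux can report the zone model the kernel sees (a sketch; /dev/sdX is a placeholder, and note that drive-managed SMR hides itself and reports "none"):

pre:
lsblk -o NAME,ZONED /dev/sdX   # prints none, host-aware, or host-managed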

Star War Sex Parrot fucked around with this message at 12:03 on Aug 16, 2018

Hughlander
May 11, 2005

D. Ebdrup posted:

I'm not sure I'm the best person to advise you when it comes to Linux - on FreeBSD I'd just throw DTrace at the problem until I got some meaningful numbers explaining what's wrong, but I got rid of my last Linux machine recently (the hts/tvheadend plugin for Kodi finally hit FreeBSD ports, after being absent since Kodi's name change and big version bump broke ABI compatibility).
The best recommendation I have is to familiarize yourself with Linux's new tracing framework, eBPF, which Brendan Gregg of Solaris/DTrace fame has put a lot of work into making useful on behalf of Netflix, since their front-end servers run Linux and they had performance issues on them.

In general though, when it comes to benchmarks, I like to keep in mind the BUGS section of the diskinfo man-page in FreeBSD.

Are you sure what you're hitting isn't the performance loss associated with wide stripe widths of RAIDzN?

Also, is there a reason you created the pool layout like this, or is it just that it grew that way over time?

I created the pool layout like that basically from the power-of-two-data-disks-plus-parity belief in the second link. Originally the 4TB pool was 4 + Z2, and I bought the 8s to be 8 + Z2, but then I changed my mind and redid the pools. (The original system is from 2014; I added the 8TB drives in January.)

Ok, I did find some interesting things:

8TB drives, with 8 in the external array and 2 internal:
dd if=/dev/zero of=speetest bs=1M count=10000 conv=fdatasync
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 18.906 s, 555 MB/s
(A backup with a sustained read of ~100Mbit/s was running while I did this.)

4TB drives, all internal:
dd if=/dev/zero of=speetest bs=1M count=10000 conv=fdatasync
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 78.5028 s, 134 MB/s
:captainpop:
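If you want to rule out dd itself, a fio run along these lines should give comparable sequential-write numbers without the single-threaded zero-fill pattern (filename and size are illustrative):

pre:
fio --name=seqwrite --filename=/path/to/pool/fiotest \
    --rw=write --bs=1M --size=10g --end_fsync=1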

iostat
pre:
Linux 4.15.17-2-pve (tso)       08/16/2018      _x86_64_        (8 CPU)

avg-cpu:  %user   %nice %system %iowait  %steal   %idle
           3.83    0.13    1.76    4.70    0.00   89.58

Device:            tps    kB_read/s    kB_wrtn/s    kB_read    kB_wrtn
sda              79.52      1910.64       242.72 12455506354 1582272316
sdh              79.50      1909.18       241.69 12445986450 1575556148
sdd              55.89       719.94       661.35 4693331612 4311375924
sdb              55.60       720.35       661.35 4696008204 4311383704
sdc              55.61       690.28       647.98 4499966944 4224173544
sde              54.46       693.81       648.10 4522962432 4225009008
sdg              55.42       690.90       648.01 4503982892 4224372704
sdf              55.46       721.06       661.35 4700600484 4311347472
sdi              21.54        27.79       302.22  181189027 1970191064
sdj              20.51        27.54       302.22  179533367 1970191064
sdk              67.08      1909.21       242.72 12446186494 1582275684
sdl              66.82      1907.71       241.72 12436424914 1575791488
sdm              66.83      1907.72       241.72 12436492590 1575786240
sdn              67.11      1908.93       242.71 12444371782 1582260820
sdo              67.04      1909.01       242.72 12444902186 1582268112
sdp              66.92      1907.79       241.72 12436961138 1575780244
sdq              67.12      1909.10       242.71 12445447754 1582257472
sdr              66.69      1907.60       241.72 12435723274 1575776352
zd0               1.66         1.98         4.82   12906872   31410620
zd16              0.00         0.00         0.00         20          4
Those two drives with the low reads (sdi/sdj) are the system SSDs.

zfs get all for the two datastores:

pre:
NAME                           PROPERTY              VALUE                           SOURCE
Main-Volume/subvol-103-disk-1  type                  filesystem                      -
Main-Volume/subvol-103-disk-1  creation              Sun Jan 28  8:51 2018           -
Main-Volume/subvol-103-disk-1  used                  11.9G                           -
Main-Volume/subvol-103-disk-1  available             30.3G                           -
Main-Volume/subvol-103-disk-1  referenced            1.69G                           -
Main-Volume/subvol-103-disk-1  compressratio         1.00x                           -
Main-Volume/subvol-103-disk-1  mounted               yes                             -
Main-Volume/subvol-103-disk-1  quota                 none                            default
Main-Volume/subvol-103-disk-1  reservation           none                            default
Main-Volume/subvol-103-disk-1  recordsize            128K                            default
Main-Volume/subvol-103-disk-1  mountpoint            /Main-Volume/subvol-103-disk-1  default
Main-Volume/subvol-103-disk-1  sharenfs              rw=@192.168.86.0/24             inherited from Main-Volume
Main-Volume/subvol-103-disk-1  checksum              on                              default
Main-Volume/subvol-103-disk-1  compression           off                             default
Main-Volume/subvol-103-disk-1  atime                 on                              default
Main-Volume/subvol-103-disk-1  devices               on                              default
Main-Volume/subvol-103-disk-1  exec                  on                              default
Main-Volume/subvol-103-disk-1  setuid                on                              default
Main-Volume/subvol-103-disk-1  readonly              off                             default
Main-Volume/subvol-103-disk-1  zoned                 off                             default
Main-Volume/subvol-103-disk-1  snapdir               hidden                          default
Main-Volume/subvol-103-disk-1  aclinherit            restricted                      default
Main-Volume/subvol-103-disk-1  createtxg             131698                          -
Main-Volume/subvol-103-disk-1  canmount              on                              default
Main-Volume/subvol-103-disk-1  xattr                 sa                              local
Main-Volume/subvol-103-disk-1  copies                1                               default
Main-Volume/subvol-103-disk-1  version               5                               -
Main-Volume/subvol-103-disk-1  utf8only              off                             -
Main-Volume/subvol-103-disk-1  normalization         none                            -
Main-Volume/subvol-103-disk-1  casesensitivity       sensitive                       -
Main-Volume/subvol-103-disk-1  vscan                 off                             default
Main-Volume/subvol-103-disk-1  nbmand                off                             default
Main-Volume/subvol-103-disk-1  sharesmb              on                              inherited from Main-Volume
Main-Volume/subvol-103-disk-1  refquota              32G                             local
Main-Volume/subvol-103-disk-1  refreservation        none                            default
Main-Volume/subvol-103-disk-1  guid                  15933573363077316406            -
Main-Volume/subvol-103-disk-1  primarycache          all                             default
Main-Volume/subvol-103-disk-1  secondarycache        all                             default
Main-Volume/subvol-103-disk-1  usedbysnapshots       10.3G                           -
Main-Volume/subvol-103-disk-1  usedbydataset         1.69G                           -
Main-Volume/subvol-103-disk-1  usedbychildren        0B                              -
Main-Volume/subvol-103-disk-1  usedbyrefreservation  0B                              -
Main-Volume/subvol-103-disk-1  logbias               latency                         default
Main-Volume/subvol-103-disk-1  dedup                 off                             default
Main-Volume/subvol-103-disk-1  mlslabel              none                            default
Main-Volume/subvol-103-disk-1  sync                  standard                        default
Main-Volume/subvol-103-disk-1  dnodesize             legacy                          default
Main-Volume/subvol-103-disk-1  refcompressratio      1.00x                           -
Main-Volume/subvol-103-disk-1  written               780K                            -
Main-Volume/subvol-103-disk-1  logicalused           11.5G                           -
Main-Volume/subvol-103-disk-1  logicalreferenced     1.48G                           -
Main-Volume/subvol-103-disk-1  volmode               default                         default
Main-Volume/subvol-103-disk-1  filesystem_limit      none                            default
Main-Volume/subvol-103-disk-1  snapshot_limit        none                            default
Main-Volume/subvol-103-disk-1  filesystem_count      none                            default
Main-Volume/subvol-103-disk-1  snapshot_count        none                            default
Main-Volume/subvol-103-disk-1  snapdev               hidden                          default
Main-Volume/subvol-103-disk-1  acltype               posixacl                        local
Main-Volume/subvol-103-disk-1  context               none                            default
Main-Volume/subvol-103-disk-1  fscontext             none                            default
Main-Volume/subvol-103-disk-1  defcontext            none                            default
Main-Volume/subvol-103-disk-1  rootcontext           none                            default
Main-Volume/subvol-103-disk-1  relatime              off                             default
Main-Volume/subvol-103-disk-1  redundant_metadata    all                             default
Main-Volume/subvol-103-disk-1  overlay               off                             default


NAME             PROPERTY                VALUE                   SOURCE
datastore/Media  type                    filesystem              -
datastore/Media  creation                Sun Jan 21  8:38 2018   -
datastore/Media  used                    20.3T                   -
datastore/Media  available               41.5T                   -
datastore/Media  referenced              18.0T                   -
datastore/Media  compressratio           1.00x                   -
datastore/Media  mounted                 yes                     -
datastore/Media  quota                   none                    default
datastore/Media  reservation             none                    default
datastore/Media  recordsize              128K                    default
datastore/Media  mountpoint              /datastore/Media        default
datastore/Media  sharenfs                rw=@192.168.86.0/24     inherited from datastore
datastore/Media  checksum                on                      default
datastore/Media  compression             off                     received
datastore/Media  atime                   off                     received
datastore/Media  devices                 on                      default
datastore/Media  exec                    on                      default
datastore/Media  setuid                  on                      default
datastore/Media  readonly                off                     default
datastore/Media  zoned                   off                     default
datastore/Media  snapdir                 hidden                  default
datastore/Media  aclinherit              passthrough             inherited from datastore
datastore/Media  createtxg               243                     -
datastore/Media  canmount                on                      default
datastore/Media  xattr                   on                      default
datastore/Media  copies                  1                       default
datastore/Media  version                 5                       -
datastore/Media  utf8only                off                     -
datastore/Media  normalization           none                    -
datastore/Media  casesensitivity         sensitive               -
datastore/Media  vscan                   off                     default
datastore/Media  nbmand                  off                     default
datastore/Media  sharesmb                on                      inherited from datastore
datastore/Media  refquota                none                    default
datastore/Media  refreservation          none                    default
datastore/Media  guid                    9149317449859012954     -
datastore/Media  primarycache            all                     default
datastore/Media  secondarycache          all                     default
datastore/Media  usedbysnapshots         2.28T                   -
datastore/Media  usedbydataset           18.0T                   -
datastore/Media  usedbychildren          0B                      -
datastore/Media  usedbyrefreservation    0B                      -
datastore/Media  logbias                 latency                 default
datastore/Media  dedup                   off                     received
datastore/Media  mlslabel                none                    default
datastore/Media  sync                    standard                default
datastore/Media  dnodesize               legacy                  default
datastore/Media  refcompressratio        1.00x                   -
datastore/Media  written                 37.4M                   -
datastore/Media  logicalused             20.3T                   -
datastore/Media  logicalreferenced       18.0T                   -
datastore/Media  volmode                 default                 default
datastore/Media  filesystem_limit        none                    default
datastore/Media  snapshot_limit          none                    default
datastore/Media  filesystem_count        none                    default
datastore/Media  snapshot_count          none                    default
datastore/Media  snapdev                 hidden                  default
datastore/Media  acltype                 off                     default
datastore/Media  context                 none                    default
datastore/Media  fscontext               none                    default
datastore/Media  defcontext              none                    default
datastore/Media  rootcontext             none                    default
datastore/Media  relatime                off                     default
datastore/Media  redundant_metadata      all                     default
datastore/Media  overlay                 off                     default
datastore/Media  org.freebsd.ioc:active  yes                     inherited from datastore
8TBs: WDC WD80EMAZ-00WJTA0

4TBs: WDC WD40EFRX-68WT0N0

Not sure why the purely internal drives are 1/5th the speed of the mixed internal/external set. What the hell am I missing, and what do I try or look at next?

BlankSystemDaemon
Mar 13, 2009



Hughlander posted:

I created the pool layout like that basically from the power-of-two-data-disks-plus-parity belief in the second link. Originally the 4TB pool was 4 + Z2, and I bought the 8s to be 8 + Z2, but then I changed my mind and redid the pools. (The original system is from 2014; I added the 8TB drives in January.)
[snip]
Not sure why the purely internal drives are 1/5th the speed of the mixed internal/external set. What the hell am I missing, and what do I try or look at next?
Ah, I thought you had two vdevs in the same pool, but you have two separate pools?

What controller does the motherboard use? I've heard of Marvell controllers that either start blocking data when overloaded or, more problematically, actually drop connections - so depending on how Linux handles devices and how your pool is set up, you can try pulling out the four internal disks and hooking them up over USB3 (at least on FreeBSD and Solaris, ZFS doesn't care how the disks are accessed) or an HBA if you've got a spare one.
Otherwise, I'm really drawing a blank here.
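One cheap way to narrow it down is to watch per-disk behavior while the slow pool is being written to - if one disk, or one controller's worth of disks, shows outsized service times, that's your culprit (a sketch):

pre:
zpool iostat -v 1   # per-vdev and per-disk throughput, refreshed every second
iostat -x 1         # per-device await/%util; one slow member drags down the whole raidz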

Hughlander
May 11, 2005

D. Ebdrup posted:

Ah, I thought you had two vdevs in the same pool, but you have two separate pools?

What controller does the motherboard use? I've heard of Marvell controllers that either start blocking data when overloaded or, more problematically, actually drop connections - so depending on how Linux handles devices and how your pool is set up, you can try pulling out the four internal disks and hooking them up over USB3 (at least on FreeBSD and Solaris, ZFS doesn't care how the disks are accessed) or an HBA if you've got a spare one.
Otherwise, I'm really drawing a blank here.

I have 2 zpools, each RAIDZ1: one is 10x 8TB Reds, the other is 6x 4TB Reds. The 4TBs are directly connected to a Supermicro MBD-X10SL7-F-O, which has an LSI 2308 in IT mode. The 8TBs are attached via an LSI LSI00188 (9200-8e) low-profile PCI Express SATA/SAS controller card (also flashed to IT mode) and a SANS DIGITAL TR8X6G JBOD enclosure.

I should see if I have any PCIe slots open; I could try another LSI in IT mode and see if that changes anything.
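Before swapping hardware, it might be worth confirming what firmware both HBAs are actually running; LSI's sas2flash utility can list that (assuming it's installed):

pre:
sas2flash -listall   # lists each SAS2 controller with its firmware version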

Shrimp or Shrimps
Feb 14, 2012


Storage goons, how long can a drive run in one orientation before it shouldn't be changed to another?

I have a 10TB WD that's about a year old with only light usage (it's a media drive for watching stuff) that I want to mount in a new orientation in a new case, but I don't want to run into any issues or shorten its lifespan.

Is there a threshold, either in hours powered on or in data written, in a single orientation?

Thanks in advance.

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib

Shrimp or Shrimps posted:

Storage goons, how long can a drive run in one orientation before it shouldn't be changed to another?

I have a 10TB WD that's about a year old with only light usage (it's a media drive for watching stuff) that I want to mount in a new orientation in a new case, but I don't want to run into any issues or shorten its lifespan.

Is there a threshold, either in hours powered on or in data written, in a single orientation?

Thanks in advance.

:wtf:
You mean physical orientation? Just mount it. It's not a liquid, it isn't going to care.

Shrimp or Shrimps
Feb 14, 2012


sharkytm posted:

:wtf:
You mean physical orientation? Just mount it. It's not a liquid, it isn't going to care.

Yes, I mean swapping from horizontal label-up to vertical label-side.

I thought I might have read that if a drive's been used for a long time in one orientation, moving it to a different one (horizontal to vertical or vice versa) can end up shortening its lifespan, as the bearings get "used to" one orientation, or whatever.

I do know that with my 6+ year old 2TB spinner, when I swapped it from horizontal to vertical it started making much more noise than it used to. I figured that was just the drive going tits up, and I replaced it with my current 10TB spinner, which I've only been using for a year.

I should say that the 2TB spinner stopped making the extra noise once I mounted it back in the horizontal position.

MonkeyFit
May 13, 2009
I have a good 10TB of data that's not likely to grow at a fast rate. I plan on doing ZFS RAID using Ubuntu Server. My goal is to use this as a Plex/media/backup/storage server. What RAID scheme is recommended? (One possible layout is sketched below the specs.)

System specs:
  • Ryzen 3 2200G
  • 8GB DDR4-2400 RAM
  • Gigabyte GA-AX370-Gaming (rev. 1.x) motherboard (it has 8 SATA ports)
  • Intel 520 series 240GB OS drive
  • 4x 8TB WD EMAZ drives (white label, 5400rpm, 256MB cache)
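For concreteness, here's the sort of command this would come down to if raidz2 is the answer - raidz2 survives any two drive failures, at the cost of two drives' worth of capacity. The pool name and by-id device paths are placeholders:

pre:
# 4x 8TB in raidz2: any two drives can fail; roughly 16TB of raw usable space
sudo zpool create -o ashift=12 -O compression=lz4 tank raidz2 \
    /dev/disk/by-id/ata-WDC_WD80EMAZ-AAAA /dev/disk/by-id/ata-WDC_WD80EMAZ-BBBB \
    /dev/disk/by-id/ata-WDC_WD80EMAZ-CCCC /dev/disk/by-id/ata-WDC_WD80EMAZ-DDDD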

H110Hawk
Dec 28, 2006

Shrimp or Shrimps posted:

Yes, I mean swapping from horizontal label-up to vertical label-side.

I should say that the 2TB spinner stopped making the extra noise once I mounted it back in the horizontal position.

Yeah, that's psychosomatic. Think about every laptop hard drive ever.

Disks literally use a buffer of air from the spinning platter to keep the head from crashing. Orientation hardly matters.

Rexxed
May 1, 2010

Dis is amazing!
I gotta try dis!

Rotate your drives monthly for even bearing wear!

Don't.

priznat
Jul 7, 2009

Let's get drunk and kiss each other all night.
I keep my drives constantly rotating on all three axes for even wear.

Shrimp or Shrimps
Feb 14, 2012


H110Hawk posted:

Yeah, that's psychosomatic. Think about every laptop hard drive ever.

Disks literally use a buffer of air from the spinning platter to keep the head from crashing. Orientation hardly matters.

Excellent, then I shall not worry and get the new PC case!

Though I would suspect laptop hard drives don't really get used while vertical, and the x360-style laptops all use SSDs.

Edit: FWIW, the reason I asked is because I read this from WD Support:

quote:

New E-Mail from the WD Support:

I would like to inform you that I have checked the link, and Peter is right as well. Mounting a 3.5" hard drive horizontally, vertically, or sideways doesn't significantly affect the hard drive's life. WD drives will function normally whether they are mounted sideways or upside down (any X, Y, Z orientation).

The above statement is right, but only for the first installation of the drive. Western Digital does not recommend changing the orientation after long usage.

https://community.wd.com/t/change-mounting-orientation-after-long-usage/214990/4

Dated 2017. I know this was a "thing" back in the 2000s.

Shrimp or Shrimps fucked around with this message at 04:35 on Aug 17, 2018

H110Hawk
Dec 28, 2006

Shrimp or Shrimps posted:

Excellent, then I shall not worry and get the new PC case!

Though I would suspect laptop hard drives don't really get used while vertical, and the x360-style laptops all use SSDs.

Laptop hard drives get shaken around in all directions all the time. It's the worst thing for them. SSDs are basically brand new in comparison. I don't know what an x360 laptop is, but you're overthinking it.

BlankSystemDaemon
Mar 13, 2009



H110Hawk posted:

Disks literally use a buffer of air from the spinning platter to keep the head from crashing. Orientation hardly matters.
Fun fact: the distance between the platter and the read head is actually something like 700nm - or possibly smaller, since it's been a few years since I heard that number - so there aren't a whole lot of air molecules functioning as a buffer.
The only thing you really shouldn't do with disks is turn them perpendicular to the axis of rotation while the platters are spinning, because of the gyroscopic precession this causes. As long as they stay within the limits of the vibrational forces they're rated for, there's very little that can go wrong.

Mr. Crow
May 22, 2008

Snap City mayor for life

PUBLIC TOILET posted:

I'm hoping someone can shine some light on this. I posted about it on reddit (unfortunately.) I want to run bare-metal Hyper-V Server with a Server (GUI) VM. I have 2x 4TB drives for data storage that I want passed to the VM. The VM will maintain the shared folders, file downloads, etc. I don't know enough about Storage Spaces and Hyper-V disk management to determine the best approach. From all of the research I've done so far, it sounds like this may be the best approach, but it leaves a bunch of questions and ifs I have about it:

Hyper-V Server 2016 -> create VHDX files equal to the size of the 4TB disks, and save the VHDX files to those disks.
Create the Storage Pool on Hyper-V Server 2016 using the VHDX files.
Offline the resulting Storage Pool drive, and configure the Server (GUI) VM to use the Storage Pool.
Online the Storage Pool in the VM as one drive (so let's say drive X:\).

Does this make sense? My experience so far has been that straight pass-through of the physical disks then doing all of the pool creation, etc. on the VM is not only unsupported, but it also broke entirely when I tried it.

https://www.reddit.com/r/homelab/comments/97f35y/hyperv_windows_server_storage_spaces_best_practice/

Pass-through would probably be the recommended approach; I'm running Hyper-V Server Core and pass everything through to a NAS VM. I have an SSD that the host/guests run off of, which Hyper-V is in charge of, and the rest are passed through.

I'm not entirely sure what you're asking, though, and I didn't click on the Reddit post. Are you trying to run the guest on a pass-through disk or something? Because that wouldn't work.

H110Hawk
Dec 28, 2006

D. Ebdrup posted:

Fun fact: the distance between the platter and the read head is actually something like 700nm - or possibly smaller, since it's been a few years since I heard that number - so there aren't a whole lot of air molecules functioning as a buffer.
The only thing you really shouldn't do with disks is turn them perpendicular to the axis of rotation while the platters are spinning, because of the gyroscopic precession this causes. As long as they stay within the limits of the vibrational forces they're rated for, there's very little that can go wrong.

Yeah don't move the disk while operational. Laptop drives aggressively park their heads, which is one reason they can feel really slow to use if you aren't perfectly still with the laptop.

It was fun to pull dead disks out of a netapp by just ripping it out and twisting it around. You could feel the centripetal (I don't care if I'm wrong you know what I mean) force of the platter still going at somewhere close to 10k rpm.

Not that I would do that ever.

io_burn
Jul 9, 2001

Vrooooooooom!
How does the cadence of Synology's release cycle work? I'm thinking of pulling the trigger on a DS918+, primarily as a Plex/sabnzbd/associated-apps server. I guess I'm used to Apple products, where it's pretty well known when and how they release things and what time of year it's a good idea to wait before buying something (i.e. right now is not a great time to buy an iPhone). Is Synology similar?

BlankSystemDaemon
Mar 13, 2009



H110Hawk posted:

Yeah don't move the disk while operational. Laptop drives aggressively park their heads, which is one reason they can feel really slow to use if you aren't perfectly still with the laptop.

It was fun to pull dead disks out of a netapp by just ripping it out and twisting it around. You could feel the centripetal (I don't care if I'm wrong you know what I mean) force of the platter still going at somewhere close to 10k rpm.

Not that I would do that ever.
A lot of laptops also have a way of measuring sudden g-forces, which can force-park the head of the disk in case your laptop gets yanked off the table by someone pulling on the charger cable.

Unfortunately, that also means there's a potential for corrupt writes if you're not using ZFS - how's that for getting back on-topic?

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo
So I encountered the dumbest thing.

Synology 5-bay device. A standalone file server built for convenience. Boot it up, hit the button to start setup.

"Failed to format the drive (35)"

The solution is to... remove the drives and make sure to format them and remove their partitions on a separate PC.

An all-in-one file server that... can't format its own drives? I've seen bash scripts do this better...
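(If anyone else hits error 35: the "separate PC" step seems to amount to clearing stale partition signatures, which any Linux live environment can do in one destructive command - sdX is a placeholder:)

pre:
sudo wipefs -a /dev/sdX   # wipes partition-table and filesystem signatures from the drive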

BobHoward
Feb 13, 2012

The only thing white people deserve is a bullet to their empty skull
re: orientation and wear, it might have been true once but I think it isn't now. HDD spindles once used ball bearings, which do wear out, and it's easy to imagine how ball bearings and their races might wear into a preferred orientation. However, it's been at least 10 years since anyone shipped a ball bearing HDD spindle. The whole industry switched over to fluid dynamic bearings (FDBs).

The thing about FDBs is that there is no metal to metal contact, and, consequently, essentially zero metal wear. The bearing is a very thin film of oil that fully separates the metal surfaces. They probably won't take a set no matter what orientation you run them in.

Take all the above with a grain of salt since I'm not a mechE

BlankSystemDaemon
Mar 13, 2009



There are more moving parts in spinning-rust disks than just the platters themselves; the whole contraption for moving the head is only rated for a certain number of load/unload cycles, and when people insist on using power saving while disks are idle, you sometimes find that those load/unload cycles run out a lot sooner than you bargained for.
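You can check how quickly a disk is chewing through that budget with smartmontools (the exact attribute name varies a little by vendor):

pre:
smartctl -A /dev/sda | grep -i load_cycle   # SMART attribute 193, Load_Cycle_Count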

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
A sure way to kill a drive is to operate it heavily and then drop the sucker on the floor in the middle of it. That's pretty much how I killed an old external drive of mine while I was backing up files to it. The sounds it made were... not pleasant. Even though gyroscopes and such exist to suspend the drive’s mechanical functions quickly if it’s detected to be in free-fall, the rough impact alone can cause some serious damage.

Star War Sex Parrot
Oct 2, 2003

necrobobsledder posted:

Even though gyroscopes and such exist to suspend the drive’s mechanical functions quickly if it’s detected to be in free-fall, the rough impact alone can cause some serious damage.
The vast majority of drives on the market are not equipped with internal free-fall sensors.

Moey
Oct 22, 2010

I LIKE TO MOVE IT

Star War Sex Parrot posted:

The vast majority of drives on the market are not equipped with internal free-fall sensors.

This. That was a fad that quickly died out with SSDs.

ProjektorBoy
Jun 18, 2002

I FUCK LINEN IN MY SPARE TIME!
Grimey Drawer
It's obnoxious to upgrade HP laptops from spinners to SSDs, because you still have to install that stupid 3D DriveGuard driver to fully reconcile Device Manager.

BoyBlunder
Sep 17, 2008
I'm currently running an old Synology DS214 from a few years back, and I'm looking to upgrade.

I'm running 2x 4TB WD Reds from 2017, and I'm running out of space using Synology's SHR RAID thing. Would upgrading to the DS918+ make sense if I'm looking to expand the storage and maybe have the option of streaming Plex through it? I currently have a standalone system that does nothing except run a CrashPlan VM and serve Plex to my clients, so I'm hoping the 918+ would be able to do this too.

Internet Explorer
Jun 1, 2005





Maybe others can chime in, but I'm really not a fan of running Plex off a purpose-built NAS like Synology. Don't get me wrong, I love their NASes and have two of them. They just aren't powerful enough to do any serious transcoding and if you start having clients that can't Direct Stream or Direct Play things can go south very quickly. They can do hardware transcoding, but again, not in an ideal fashion. I'd rather have my storage and compute separate.

I personally use a Synology NAS for storage and a Shield for my main client and server. If I needed more power than that, I'd likely build a little server out of a NUC or something small with a ThreadRipper in it.

Internet Explorer fucked around with this message at 15:10 on Aug 21, 2018

EVIL Gibson
Mar 23, 2001

Internet of Things is just someone else's computer that people can't help attaching cameras and door locks to!
:vapes:
Switchblade Switcharoo

Internet Explorer posted:

Maybe others can chime in, but I'm really not a fan of running Plex off a purpose-built NAS like Synology. Don't get me wrong, I love their NASes and have two of them. They just aren't powerful enough to do any serious transcoding and if you start having clients that can't Direct Stream or Direct Play things can go south very quickly. They can do hardware transcoding, but again, not in an ideal fashion. I'd rather have my storage and compute separate.

I personally use a Synology NAS for storage and a Shield for my main client and server. If I needed more power than that, I'd likely build a little server out of a NUC or something small with a ThreadRipper in it.

For the longest time I wondered why you would want to turn on the Plex option that makes a streamable version of every single piece of media instead of transcoding on the fly. I'm ashamed it took until I got a Synology last weekend to realize that those devices kind of suck at any heavy processing not related to NAS duties.

I got this NAS to store my static files that aren't video/audio media, as a semi-expensive way to clear up space on my main Ubuntu ZFS file server. The i5 Xeon in that box has power, but I put in a cheap NVIDIA GT 730 GPU to turn on Plex's GPU transcoding, and it loads so much faster on my phone now. I read posts on Plex's forums about how people's CPUs are pegged whenever Plex transcodes anything, and it's like... well, the CPU in these NASes is much weaker than what's in your phone now. What did you expect?

Thanks Ants
May 21, 2004

#essereFerrari


Synology underpowers their boxes across the range, especially considering the claims they make. Even the business-oriented boxes come with a quad-core Atom and 4GB of RAM, and they choke as soon as you actually do any of the stuff their OS is advertised as being well suited for.

Hadlock
Nov 9, 2004

Is a ~$100 Reolink plus a $50 Surveillance Station license the easiest way to add a camera to my Synology DS418?

Flipperwaldt
Nov 11, 2011

Won't somebody think of the starving hamsters in China?



Hadlock posted:

Is a ~$100 Reolink plus a $50 Surveillance Station license the easiest way to add a camera to my Synology DS418?
A license to use Surveillance Station with two cameras is included with every Synology NAS.

Hadlock
Nov 9, 2004

Heck yeah. We're going on a trip, and the last time I left the cats to their own devices I rigged up a crude webcam that failed halfway through the trip. This seems like a more sensible solution.

Looking on Amazon, there are about eleventy billion security cameras available; trying to find the sweet spot of reputable, compatible, and reasonably priced is overwhelming.

H110Hawk
Dec 28, 2006

Hadlock posted:

Heck yeah. We're going on a trip, and the last time I left the cats to their own devices I rigged up a crude webcam that failed halfway through the trip. This seems like a more sensible solution.

Looking on Amazon, there are about eleventy billion security cameras available; trying to find the sweet spot of reputable, compatible, and reasonably priced is overwhelming.

Skip reputable, and segment it from the rest of your LAN and the internet at large. I've got a Foscam PTZ camera; it doesn't have internet access at the router level.

https://smile.amazon.com/gp/product/B00I9M4HBO/ref=oh_aui_search_detailpage?ie=UTF8&psc=1

It phones home to China if allowed on the internet.
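If the router speaks iptables, the isolation rule can be as simple as this (the camera IP and LAN subnet are placeholders):

pre:
# drop any camera traffic that isn't headed for the local subnet
iptables -I FORWARD -s 192.168.1.50 ! -d 192.168.1.0/24 -j DROP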

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer
How worried should I be about FreeNAS screaming critical errors about some random Python files on the boot USB stick? We've already had one stick die on us completely, but that gave us a good dry run at restoring the config backup onto a fresh install, and that seemed to work fine. But now this second stick almost immediately started throwing errors on two seemingly random files. Based on the file paths I can't tell if they're even being used, and as far as we can tell everything else about the NAS is working fine.



The data HDDs are coming back clean, so I'm not super worried, but I don't like red lights flashing at me in the UI :mad:

Unrelated question: is it normal for your NAS to just hang out consuming all its RAM? The first time I moved a significant amount of data onto the NAS, it pegged its 8GB of RAM for the duration of the transfer, so I picked up another 8GB off eBay and stuck it in - but then it just used 16GB instead of 8. The Reports tab shows about 14.5GB "Wired" all the time, and I'm not sure what that means or whether it's something to be concerned about. Everything else in the system (CPU, disk activity, etc.) almost never breaks double digits of % usage.

H110Hawk
Dec 28, 2006

Takes No Damage posted:

How worried should I be about FreeNAS screaming critical errors about some random Python files on the boot USB stick? We've already had one stick die on us completely, but that gave us a good dry run at restoring the config backup onto a fresh install, and that seemed to work fine. But now this second stick almost immediately started throwing errors on two seemingly random files. Based on the file paths I can't tell if they're even being used, and as far as we can tell everything else about the NAS is working fine.



The data HDDs are coming back clean, so I'm not super worried, but I don't like red lights flashing at me in the UI :mad:

Unrelated question: is it normal for your NAS to just hang out consuming all its RAM? The first time I moved a significant amount of data onto the NAS, it pegged its 8GB of RAM for the duration of the transfer, so I picked up another 8GB off eBay and stuck it in - but then it just used 16GB instead of 8. The Reports tab shows about 14.5GB "Wired" all the time, and I'm not sure what that means or whether it's something to be concerned about. Everything else in the system (CPU, disk activity, etc.) almost never breaks double digits of % usage.

Sounds like a lovely USB port.

If you mean the output of 'free', then look at the cached number - that's effectively free memory too. But basically yes, that's how Linux/FreeBSD work.
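On FreeNAS specifically (FreeBSD underneath), the big "Wired" figure is mostly the ZFS ARC, which you can confirm from a shell (a sketch):

pre:
sysctl kstat.zfs.misc.arcstats.size   # current ARC size, in bytes
# FreeBSD's top(1) also shows an ARC line in its memory summary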

Sheep
Jul 24, 2003
https://www.linuxatemyram.com/

Unused memory is wasted memory.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Sheep posted:

https://www.linuxatemyram.com/

Unused memory is wasted memory.

This. Any decent file server is gonna use every last byte of RAM to cache files so it doesn't have to touch the disks unless it absolutely has to.


Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer

H110Hawk posted:

Sounds like a lovely USB port.

We're still trying to use the internal USB slot, since the mobo only has 2 more on the rear panel. But I can't think of what else we'd need plugged in back there besides the keyboard, so maybe for the next FreeNAS update I'll flash a fresh stick and try one of the back ports.

Sheep posted:

https://www.linuxatemyram.com/

Unused memory is wasted memory.


Except it was my rear end, thanks.
