Atomizer
Jun 24, 2007



What are your opinions on enterprise HDDs, thread?

For a little background, I have an older gaming desktop that gets limited use (a couple days a week) and the ST3000DM001 that was in it finally died (actuator/head failure from what I was able to diagnose) after about 5.5 years and ~5.5k hours. I replaced it with an HGST He8 6 TB that I bought as a used/refurb because the price was right (~$105, which was and still is good for that capacity) and I was interested in finally getting a He-filled drive (for no good reason.) I restored from my backup of the original drive and a second ST3000DM001 (both just held games) to simplify my setup; now I just have the one 6 TB internal drive for games, and I backed that up to a new 6 TB USB HDD. (Like I said, these are just holding game installs and are replaceable/re-downloadable, but like someone posted above, the time to re-download and reinstall everything is of value. It took several days to transfer everything over, get all the games updated, defrag, then back up everything to the new external drive.)

That being said, I like the He8. It's fast (IIRC sequential R/W was >200 MB/s) and the noise/heat don't bother me (it's pretty clicky but I wear a headset, and I think it's supposed to be cooler than an air-filled drive but that's irrelevant in my single-drive setup.) Other than that, it's just another HDD and anything would work for my purpose here, but I do also like that it's a higher-reliability enterprise-class drive, and being a used/refurb'd drive that was taken out of service while still functional, it's at the bottom of that bathtub curve (SMART status is good and it came with ~18k power-on hours but, no joke, <20 power-on cycles.) I just need a drive to work reliably, and cost-effectively (because gaming isn't a necessity and I don't have a ton of time for it anyway,) and the cheapest drives are generally either used/refurb'd or external ones.

With all that in mind, I've been keeping an eye out for similar deals on 6/8 TB drives. With discounts I can periodically find new 8 TB USB drives in the $120 range (occasionally WD units with Red or white-label drives inside, but more commonly Seagate SMR drives, which are nevertheless useful for backups/archives,) as well as server-pulled enterprise drives like the aforementioned HGST He-series or WD Re/Gold, around $100-110 for 6 TB with the right deals. Getting back to the initial question, do you guys have any opinions about these drives in particular, or about similar alternatives?

While the situation with the gaming desktop is resolved at the moment, it'd be nice to have a spare for the active drive, and I'm also going to populate a 4-bay NAS box mainly for use as a Plex server, where it will see fairly low-to-medium use. Cheap mainstream consumer HDDs (e.g. WD Green, Blue) would be less appropriate for these purposes, and the used enterprise drives I've been looking at seem to offer the best combination of capacity/performance/reliability/price.


Atomizer
Jun 24, 2007



Part of the issue with shucking is that you're not 100% certain what drive is inside (even more of an issue when ordering online, because even though you may get a good price with a deal, you can't look at the info on the box to guess what's inside before opening it,) and those 8 TB Easystores could contain any of 3-4 different models last time I checked. Then you may run into that obnoxious 3.3 V issue on top of that. Plus, I'm perfectly fine with using external drives as-is, so that's why I'm curious about the most reliable and cost-effective internal drives.

H110Hawk posted:

You're way overthinking your optimization of a 2-day chore, most of which is unsupervised, twice a decade. Putting your OS on an SSD and your Steam downloads on a regular HDD will go much further.

Further, enterprise drives fail after 5 years too. Stick with regular new disks.

I'm not asking about "optimizing a chore," I asked about some specific HDDs. There are certainly differences in WD Green, Blue, Red, Purple, etc., and some are NOT appropriate for certain scenarios (e.g. many drives packed into a high-vibration setting.) I'm just wondering about the enterprise drives as I have the least experience with them. If you have no knowledge then feel free to ignore the question!

Atomizer
Jun 24, 2007



caberham posted:

Hey everyone, so I’m a home user with a NAS. My primary focus is my photo directory but I also store my documents and some movies there.

Besides backblaze off site backup, what would be best practices to protect my photos and backup?

1. A mini 1 Bay NAS just for photos and documents? Double points if it’s mirrors into another physical location.

2. USB attachment backup for photos nightly?

I just dread dealing with a Backblaze restore, which takes forever

The more important/irreplaceable a given file is, the better the backup solution needs to be, in terms of number of copies and geographic isolation. In your case, you'd probably be fine with two steps:
- an external drive (or as many as are necessary) for a 2nd copy on-site.
- a 3rd party provider like Backblaze for a 3rd copy, just in case.
The external drives contain your primary backups, and if something catastrophic like a fire occurs then that's when you would rely on the Backblaze restoration.

Atomizer
Jun 24, 2007



spincube posted:

My home videos, movies and teevee shows currently sit on my big, power-thirsty PC, which is directly plugged into my TV. I'm currently using Emby as a sort of fancy wrapper for watching the videos, rather than for its media server capabilities. However, this is something of a waste.

So, I'd like to invest in a NAS to serve up my videos to my TV, by throwing them straight to a Chromecast over my network. I'd like something that doesn't use a lot of power, first and foremost - naturally I'm not going to be watching all day and all night, so something that sits on standby and spins up as needed is fine.

My second point probably contradicts the first: some of these videos, home videos in particular, are ancient DVD rips (remember when camcorders briefly recorded straight to DVD?), so the codecs, containers etc. are all over the place. So, there's probably going to be some transcoding needed, which I understand needs horsepower and, thus, power usage.

I don't have any issue with spending a few weekends with Handbrake - it's probably overdue at this point anyway - so I'd prioritise power usage over horsepower, but naturally if I can have my cake and eat it that'd be great. Right now I'm looking at a bog-standard Synology DS218play, as it's apparently built for media streaming/transcoding and there's an Emby package available for the Synology platform - but if the thread has any advice I'm all ears.

I use Plex instead of Emby, but for exactly what you want to do. You can set up a PMS on literally any spare PC or laptop; I have mine currently on a SFF HP EliteDesk. However, you can also get a dedicated NAS box to do the same thing; I just got a WD MyCloud Pro PR4100 (the 2-bay version is the PR2100) and it uses hardware transcoding (Intel QuickSync) to handle those needs for PMS (there's a dedicated branch for this NAS.) It works beautifully (exactly like a PC-based PMS implementation, although I'm still only just starting to play around with it.) It uses a 90 W adapter (you can actually plug in a 2nd one for redundancy) although I'm sure it uses far less power than that most of the time, and the PR2100 IIRC has a 48 W supply.

I transcoded all of my DVD rips to HEVC (although AVC would also suffice) using Handbrake, which chops the file size down to about half, and I strongly recommend doing this as the MPEG-2 video on DVDs is very inefficient compared to modern codecs.

necrobobsledder posted:

I’m generally against transcoding compressed media to other compressed media and would instead recommend keeping copies of software that you know plays it and to occasionally check for software incompatibility down the road (notorious on Macs, somewhat less of an issue on Windows). Like the old codec packs from CCCP, old VLC versions, etc. are a bigger problem for preserving stuff than the actual media.

Although I guess if you’re a weirdo and don’t care too much about video quality you can upload it to YouTube for free and make it private.

Your point about lossy transcoding is well taken, however as long as you're not going back and forth between codecs for no reason it's a good idea to do a one-time transcode to a more universal format, as above.

Using YouTube for non-commercial stuff (e.g. DVD rips, torrented TV shows, etc.) is a good idea, because you're accomplishing the same thing (streaming video over the Internet from somewhere, be it your own home or not) and then it becomes YouTube's problem to manage the storage and backup.

caberham posted:

I’m a little confused by the virus time delay thingy, sorry.


Sorry I have trouble understanding this post

The idea is, if you have a backup set up to keep a copy of drive A on drive B, it will reflect any deletions or hostile encryption (e.g. from ransomware) on drive A, so you can lose everything on both drives at once. This is why you don't want your backup to immediately delete or change files on drive B: the delay gives you time to rectify any errors on drive A. (This is also why the Recycle Bin is useful, instead of all deletions being immediately permanent.)
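To make that concrete, here's a rough sketch of a one-way sync that quarantines deletions instead of mirroring them. This is my own illustration, not any particular backup tool's behavior, and the helper name is hypothetical; the quarantine folder must live outside the destination tree.

```python
import shutil
from pathlib import Path
from datetime import date

def sync_with_quarantine(src: Path, dst: Path, quarantine: Path) -> None:
    """Copy files from src to dst; instead of deleting files that have
    vanished from src, move them into a dated quarantine folder so an
    accidental deletion (or ransomware damage) on src can be undone.
    `quarantine` must NOT be inside `dst`."""
    src_files = {p.relative_to(src) for p in src.rglob("*") if p.is_file()}
    dst_files = {p.relative_to(dst) for p in dst.rglob("*") if p.is_file()}

    # copy everything currently present on the source drive
    for rel in src_files:
        target = dst / rel
        target.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src / rel, target)

    # quarantine (don't delete) anything that disappeared from the source
    for rel in dst_files - src_files:
        qtarget = quarantine / date.today().isoformat() / rel
        qtarget.parent.mkdir(parents=True, exist_ok=True)
        shutil.move(str(dst / rel), str(qtarget))
```

You'd then prune old quarantine folders manually once you're sure nothing in them was deleted by mistake.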

dexefiend posted:



I love double CPU computers.

Now that's loving gorgeous! :swoon:

Atomizer
Jun 24, 2007



Yeah, like I've said, the more important something is, the more copies you should have; but if it's not particularly vital or sensitive, then having someone else host it is the way to go.

Atomizer
Jun 24, 2007



space marine todd posted:

Suggestions for a 2-bay NAS? I already have a 6-bay QNAP, but I am moving and need something smaller in size/bulkiness for my new place. I plan on doing RAID 1 with 10TB drives.

I'll throw out the WD PR2100 because it can also be a PMS server. If you didn't need that functionality then that basically leaves you with any other NAS!

Atomizer
Jun 24, 2007



various cheeses posted:

I need a new secondary drive to hold music/photos/etc. Any downsides to using a WD Gold drive vs a Black drive in a USB 3.0 external enclosure? Reliability is #1 priority, noise #2, and all else is of no concern. 1-2TB is all I need.

Forum posts everywhere else on the internet are worthless and can't come to a consensus.

I asked about enterprise drives a few days ago and got more or less ignored. Using either of those drives in an enclosure would be fine, but the Gold would be more reliable (rated for heavier-duty use, vibration sensor, etc.) yet possibly louder (not that Black is as quiet as Green or Blue.)

Since reliability is your top priority, that would tilt you towards the Gold (or the slightly older Re, or any competitors' enterprise drives.) You still have to back up your poo poo though!!! You could perhaps go with a Green and then get an archive drive for the backup? Or, you could just use an SSD or two for the active drive(s) and whatever HDD for the backup, because that would satisfy both your criteria (no moving parts so as reliable as any storage can be, and totally silent, plus individual drives are well within your capacity range and price is not a concern.)

Atomizer
Jun 24, 2007



D. Ebdrup posted:

I swear I meant to answer your post, but got distracted and chemo-brain took care of the rest.

Anywho, the only reason why enterprise drives exist is that the companies which buy them are willing to pay that much to have some sort of extra warranty, and they operate on such a large scale that getting a bunch of drives RMA'd isn't going to mean they'll be running at decreased levels of the availability provided by whatever RAID they use.

Thanks for the input. I agree that there's definitely a +cost for +warranty on enterprise/business hardware in general, but there also are certainly hardware differences in HDD lines including things that determine how they perform (e.g. RPM, aggressive head docking in the WD Green, etc.) as well as how durable they are (which is why dedicated NAS, media, etc., drives exist.) I guess at the time I was looking for input into this class in general, but I've done research since then and at this point I'm mostly curious about how the brands stack up. Like I said I'm happy with the HGST He drive, but the WD Re/Gold also looks promising, and then there's Seagate and Toshiba beyond that.

codo27 posted:

What's the scoop on WD's entries into the SSD field? Looking at the 970 for my main drive in my next build but considering WD for the slave, given price.

They're fine, maybe a tick below the Crucial MX500, which itself is maybe a small tick below the Samsung Evos. I wouldn't at all hesitate to put the OS for a new build on a WD Blue, and Samsung is by no means the supreme, must-have brand that it once was. There's a dedicated SSD thread here, btw. For the OS, you're basically just looking for a good drive with DRAM, at a reasonable price. ~$80 is reasonable for a sale on one of the aforementioned ~512 GB-class drives, for example (and maybe $150 or less for 1 TB.) Really, I think sufficient capacity is the biggest criterion beyond that (for future proofing, OS maintenance, and drive self-maintenance purposes.)

Wooper posted:

Is there any tiered storage solution available for normal users?

I got into looking at Enmotus FuzeDrive because AMD has bundled the standard edition software as StoreMI with their newer chipsets. It looks pretty good, except the limitations of only being able to use it on 2 physical drives is a bit disappointing.

Try PrimoCache (and I mean there's literally a free trial, so check it out.) It will let you pretty much cache anything with anything else. You can cache an HDD (or a group of multiple HDDs) with an SSD (or any decent USB flash drive) and then cache that with spare RAM, have multiple cache groups simultaneously, and you can tweak caches to be R, W, or both, etc.
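For a feel of what an SSD-over-HDD cache is doing under the hood, here's a toy model: an LRU read cache in a few lines of Python. The class name and policy are my own sketch, not PrimoCache's actual algorithm.

```python
from collections import OrderedDict

class TieredRead:
    """Toy model of the SSD-over-HDD read-caching idea: reads are served
    from a small fast tier when possible; misses hit the slow tier and
    promote the block, evicting the least-recently-used entry."""

    def __init__(self, fast_capacity, slow_store):
        self.fast = OrderedDict()   # simulated SSD/RAM tier
        self.capacity = fast_capacity
        self.slow = slow_store      # simulated HDD tier
        self.hits = 0
        self.misses = 0

    def read(self, block):
        if block in self.fast:
            self.hits += 1
            self.fast.move_to_end(block)   # mark most-recently-used
            return self.fast[block]
        self.misses += 1
        data = self.slow[block]
        self.fast[block] = data            # promote into the fast tier
        if len(self.fast) > self.capacity:
            self.fast.popitem(last=False)  # evict the LRU block
        return data
```

The real products layer write-back modes, RAM tiers, and persistence on top, but the hit/miss/promote/evict loop is the core of it.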

Atomizer
Jun 24, 2007



D. Ebdrup posted:

They promise to attempt data recovery, they don't promise to succeed. I've never paid for a service like that, but I would be very surprised if they throw forensic levels of data recovery at it.

Doesn't idle3-tools/wdidle (or camcontrol raw 0x44 0x00 0x57 0x44 0x00 0xA0 0x80*) still work?
*: Don't actually try this, as you may brick your drive even if it IS a WD Green.

Likely, it all just comes down to binning in the end; if you're lucky the binned SKU you're paying for is actually way more capable than what you're buying, but has just missed one out of a couple hundred checks that disqualifies it for a much higher-priced SKU.

Changing the idling behavior on the Greens seemed like more trouble than it's worth. The thing is, even with their default behavior the Green drives should be quite perfect for the roles that you'd typically use an HDD for nowadays: not for OS usage (which is what would cause the drives to wear out faster due to that aggressive power saving) but for archival/bulk storage (write sparingly, read occasionally.) They're still not what I'd cram in a multi-bay NAS, but for the average user, with an SSD and maybe one or two HDDs attached to their PC, a WD Green should be quite sufficient. (I still wouldn't blame anyone, though, for wanting something rated for heavier use if they feel that's appropriate.)

On this topic, I just shucked a Green (WD30EZRS IIRC) from a single-drive, sealed, consumer NAS. I bought it about 7.5 years ago for $200 (:eyepop:) and it's been connected ever since, but has only seen light use because it's a little too slow (~60 MB/s over Gigabit at best, and most of my PC hardware is over slower network links) and behaved a little wonky at times. I still used it to hold stuff like utilities and benchmarking software, drivers, etc., so it was actually useful for all my PCs on the network, but just not practical to take advantage of the whole 3 TB; I can just host the files on any of the PCs already connected to the network 24/7, like my Plex server. So I put the drive in a USB enclosure, ran CDI, and learned that it has about 2 dozen power cycles over all these years, but over 64 THOUSAND hours :popeye: of power-on time (which translates to about 7.3 years at 24/7/365)...with in-range SMART readings.... :stare: I guess that's not that out of line considering it was just sitting there idle, docked and spun-down most of the time, but nevertheless that just goes to show you that you can't really predict a drive's lifespan based on service hours alone.
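The hours-to-years arithmetic, for anyone checking my math:

```python
def power_on_years(hours: float) -> float:
    """Convert a SMART power-on-hours reading to years of 24/7 operation."""
    return hours / (24 * 365.25)

# ~64,000 hours works out to roughly 7.3 years of continuous operation
```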

Atomizer
Jun 24, 2007



TransatlanticFoe posted:

My current setup is a WD MyCloud 4TB (the one with an enclosure, no removable drives) and a Raspberry Pi as a Plex server, which works surprisingly fine for local direct play but terrible for everything else. I’m looking to upgrade both of these to a single NAS I can run Plex off of for <$400 before drives, if possible. Anyone have any suggestions? Right now I’m looking at the Synology DS218+, from what I’ve seen it can handle 1080p transcoding.

WD PR2100/PR4100 can do that; check eBay and you can find both of them under your price limit.

Atomizer
Jun 24, 2007



You just defined "VPN," yes.

Atomizer
Jun 24, 2007



forbidden dialectics posted:

Had to split up the testing across 2 different computers, and the one with only 2 drives attached finished much quicker (the write/read speeds decrease with more drives attached, probably saturating whatever link the USB controller is connected to). The first batch was good, so let's shuck!

What software are you using / what's your testing process?

Atomizer
Jun 24, 2007



Hadlock posted:

I have a bunch of data locked up on a windows storage pool/storage spaces, originally created with windows server 2012 r2, on drives 2,3,4,5

If I replace the boot drive (drive 1) with a new drive, and then install Win 10 on there, will I be able to access the disk/storage pool/storage spaces? I am thinking the answer is yes. Trying to temporarily access the array, to transfer the data to a Synology NAS unit.

I'm pretty sure it will work. I think Windows sees the Storage Space data on the drives themselves, rather than having it tied to a single Windows installation. I had tried it once (with 3 x 250 GB HDDs that had little other purpose) and then disassembled the array without formatting the drives, and when I later installed one of them in a system, Windows started looking for the other drives.

Atomizer
Jun 24, 2007



RAID is for redundancy (except for RAID 0, which is of course for performance at the expense of everything else) and ease of rebuilding your array - it's very convenient to have, say, a 4-bay RAID 5 where you can replace a bad drive and just let the array rebuild itself. You still need backups of everything important though, of course.

I have an unrelated question about SMART data, after reading some [refurb'd] HDD reviews online: how is SMART data reset (in this case, to 0 hours and 0 power-on cycles,) and how did this user find "traces of old self-tests" indicating the actual hours/cycles? I mean, I know that all this is possible, but I'm wondering what software is needed.
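For what it's worth, one place the old history can survive is the drive's self-test log (`smartctl -l selftest`), which records the power-on hour at which each test ran; if those values are far above the current Power_On_Hours attribute, the counter was reset. A rough sketch of checking for that (the sample text is illustrative, modeled on smartctl's log format, and the helper name is mine):

```python
import re

# Illustrative snippet modeled on `smartctl -l selftest` output
SAMPLE_SELFTEST_LOG = """\
Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Extended offline    Completed without error       00%     18213         -
# 2  Short offline       Completed without error       00%     17992         -
"""

def max_logged_hours(selftest_log: str) -> int:
    """Highest LifeTime(hours) value recorded in the self-test log; if
    this is far above the drive's Power_On_Hours attribute, the hours
    counter was almost certainly reset at some point."""
    hours = [int(m.group(1)) for m in re.finditer(r"%\s+(\d+)\s", selftest_log)]
    return max(hours) if hours else 0
```

A drive reporting a handful of power-on hours but with self-tests logged at hour 18,000+ tells you everything you need to know about the "refurb."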

Atomizer
Jun 24, 2007



Is there anything special about video/surveillance drives, specifically the ones in this line, beyond their construction/specs/intended use? I know that this class of drive is rated for 24/7 operation and recording from multiple sources continuously, but how are they for general storage purposes (i.e. less demanding, but different, operation than they were originally designed for?)

Atomizer
Jun 24, 2007



SamDabbers posted:

They're basically the same as NAS drives, but support an additional command set that surveillance appliances can use to make simultaneous recording or playback of multiple streams more reliable. You can definitely use them as regular NAS drives without sacrificing anything, since the special features are opt-in.

Edit: Page 60 of this document. Section 4.23 Streaming feature set

Excellent info, thank you very much!

Atomizer
Jun 24, 2007



Killer_B posted:

Would the WD purple drive series work in a similar fashion?

Looks like it, as they appear to be the WD equivalent to those Seagate "pipeline" drives I linked.

Atomizer
Jun 24, 2007



That, uh, completely defeats the purpose of RAID1, but ok!

Atomizer
Jun 24, 2007



H110Hawk posted:

RAID is not backup. If you haven't touched enough computers to see a raid controller poo poo the bed and write garbage to all of your mirrored replicas then please just trust in the mantra of raid is not backup.

Yeah, I know; I wrote basically that about RAID on the last page. I've emphasized the importance of backups as something totally unrelated to the concept of RAID.

The point I was making was that RAID 1 in particular is supposed to leave you with one good drive when the other goes bad (in a 2-drive setup, of course,) so you can rebuild the mirror from the good drive. Obviously there's a worst-case scenario where the whole array is destroyed, but it's probably not as common an occurrence as you're making it out to be. And yes, back up everything anyway, but if you have a RAID 1, a drive dies, and you replace it but manually restore from backup, then that defeats half the point of the array in the first place (the other half being availability.)

Devian666 posted:

I use RAID 1 myself and it is good if a drive fails, as it's easy to build a new mirror and keep everything running in the meantime. Read performance from two disks is also very good, and my storage is primarily about reads.

This sounds like a normal experience; I think Hawk is just being super-pessimistic, which is understandable when you're talking about data integrity.

Atomizer
Jun 24, 2007



Rusty posted:

Will a 200 watt power supply be fine for two 4 TB drives? Do drives alone generate enough heat that a really small case would be an issue, one that isn't designed to hold two full-sized drives? I have this i3 with 16 GB of RAM that I don't do anything with, so I was thinking of adding two large drives and FreeNAS (or unRAID).

This is what it looks like:

https://imgur.com/a/E4Dw5DC

Yes, and probably not, respectively. The last time I was working with an external 3.5" HDD connected to my Kill-A-Watt it read about 10 W, IIRC. Everything else is fine with that PSU. I'd be more concerned about figuring out how you're gonna fit two drives in there than the amount of power/heat.
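Back-of-the-envelope version, with every draw being a ballpark assumption rather than a measured figure:

```python
def psu_headroom(psu_watts: float, loads: dict) -> float:
    """Remaining PSU capacity after subtracting estimated component draw."""
    return psu_watts - sum(loads.values())

# rough guesses, not measurements
build = {
    "i3 CPU + motherboard": 60.0,
    "16 GB RAM": 6.0,        # ~3 W per DIMM
    '3.5" HDD #1': 10.0,     # roughly what a Kill-A-Watt shows
    '3.5" HDD #2': 10.0,
}
```

Even with generous fudge factors, a 200 W supply leaves over 100 W of headroom for that build.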

H110Hawk posted:

It's the RAM you want to be worried about.

:rolleyes:

Atomizer
Jun 24, 2007



Rusty posted:

Thank you, yes, my thoughts as well. I have two full-sized drives I can test first before I buy two large drives. Seems like it will fit; it has two drives in it now at the bottom, but obviously not full-sized, and I think I can mount one in the DVD enclosure.

Ah if you have a spare full-size 5.25" bay you can use something like this (and I have that exact one in a Shuttle XPC) to get a proper 3.5" mounting point (plus a couple of 2.5"!) They also make a 4x2.5" to 5.25" version if you have enough SATA ports to just use laptop-size HDDs and SSDs.

Atomizer
Jun 24, 2007



H110Hawk posted:

Look at me, I am wrong on the internet. I was remembering back to anecdotal evidence from 5? years ago, where upgrading RAM in a rack of servers caused them to blow breakers. We had added amps to the rack by doubling the DIMM count. Guess it was a red herring, or voltages/types have dropped dramatically in wattage. Could have also been that they were able to work harder, so their CPUs drew more power?

Now I know.

I thought you were actually being facetious. I didn't know off the top of my head how much power DIMMs drew (I looked it up and found the ~3 W value which sounds about right,) but I knew it was obviously the lowest amount of all the components in that system. It's basically CPU>mobo>HDDs>RAM, with an SSD somewhere at the end there and I'm not even going to bother looking that up.

D. Ebdrup posted:

Considering what people on the internet, outside of SA, are getting up to.. I'm not sure this is a bad idea, if I can replace those two phrases with the number 42 and Dune quotes.

Lol loving nerd. :jerkbag:

Atomizer
Jun 24, 2007



necrobobsledder posted:

I know those exist, the hard part is finding a case that’s got a Mini ITX motherboard and 2x 5.25” bays (to support 16x SSDs for sake of maintenance / buffer for a normal 8-drive array) and nothing else. Most of the cases with such bays nowadays are larger cases.

Right, that's because the trend is towards smaller systems overall and the 5.25" bay is obsolete. When is the last time you installed a full-size optical drive? Why would anyone manufacture SFF systems and include a 5.25" bay for no reason (let alone more than one of them?)

Atomizer
Jun 24, 2007



necrobobsledder posted:

Not disagreeing with the market drivers or anything, I'm just saying that because of the trend, there's now even less likelihood of finding a chassis that can let you run a 16x 2.5" drive setup with minimum volume taken. The great majority of mini ITX systems are for industrial / digital signage purposes (which the NUC has not managed to take out it seems) or boutique desktops like from the SFF / Mini ITX thread getting created from crowdfunding. The few 8x2.5" 5.25" SATA expanders I did see had issues with drives above 7.5 mm in height when most 2TB+ 2.5" magnetic drives are 9.5 mm. The trend away from home NASes is being partly driven by cloud storage being so competitive comparatively, but there's always that sliver of a market that's between 10 TB and 50 TB that would balk at even $.003/GB/mo. I'm mostly saying that in another couple years we may start seeing people building 40TB+ arrays with QLC SSDs. Storage prices in clouds have taken a break while consumption is going up steadily and we can't be sure that the Internet Overlords will be able to lower pricing to make it all competitive.

I just think your 16x 2.5" idea is such an extraordinarily niche use-case. Anyone who needs plenty of storage is going to be better served in price & performance with 3.5" HDDs. You're looking at 2 TB max for 7 or 9.5 mm height 2.5" HDDs, anything above that is 15 mm with the fitting problems that entails, and you can get, what, 14, 16 TB in a single 3.5" drive? The price, power, heat, performance, everything just doesn't work out with 2.5" drives. It's another thing to want all solid-state storage, but that's way off the chart in the price dimension, even if QLC was cheap & plentiful.

You can certainly still get a bunch of 5.25" bays in a tower case, which should be fine for anyone wanting tens of TB of storage, but like I said those bays are literally obsolete nowadays and it doesn't even make sense to want that many drives in a SFF setup. What mini ITX setup supports 16 SATA (or SAS I guess) drives? Obviously you'd need an AIC or two, and then you may or may not need to worry about the heat generated (and possibly the power consumed) by 16 HDDs crammed as tightly as possible. Oh and also you'd need to use NAS/enterprise drives that are made to deal with the high-vibration environment of having up to 16(!!!) drives operating all at once. I don't think that type of drive even exists.

I get what you're going for, it just doesn't make any sense in practice.

Atomizer
Jun 24, 2007



Yeah, any tower case will work, but he wants 5.25" bays in a modern SFF system, which is pretty nonsensical. There are entire systems (e.g. NUCs) that more or less take up the same amount of space as an old, full-size optical drive. It's ludicrous to add huge obsolete bays to a state-of-the-art SFF case when you can just get a tower and stick it in a closet.

Atomizer
Jun 24, 2007



Xerophyte posted:

FWIW, I just shucked two of these and both were white label drives (WD80EMAZ). I'll see if I can get them to work in my old DS215j, sounds like worst case scenario is I have to tape over the 3.3v pins for it to play nice.

E: Works swimmingly without any messing with the pins. The 215j is still not exactly an awesome NAS, but it can sit next to the router and store stuff which is about all I need.

You only have to do the tape mod if the power supply has a 3.3 V line in the first place. This is generally only an issue when installing the drive into a PC; external enclosures or NAS boxes probably won't provide 3.3 V.

Atomizer
Jun 24, 2007



D. Ebdrup posted:

I remember CAS Latency being important many years ago, but that was back when going from 2 to 4 meant an effective doubling. I can't imagine it means much now?

I don't think the RAM timings mean much in terms of performance outside of some very specific, non-general-consumer workloads. Might make a difference in an APU (where single- vs. dual-channel does matter, for comparison.)

Atomizer
Jun 24, 2007



LorneReams posted:

I've been using my computer like a swiss army knife, as in it's handling file storage, streaming, production, and gaming. I've realized after a HD crash as well as a high electric bill, that I should probably segregate these tasks as opposed to just leaving my PC on 100% of the time. I got a laptop to handle productivity poo poo and financial, boring poo poo, and I'm going to rebuild the gaming PC to be more focused on that task, but then that leaves me wanting a file storage and streaming box. I primarily will use this for backups and streaming (using Plex) for my household. Looking online there's such a dazzling array of choices that I'm unsure where to start. I don't know what I need in terms of firepower to be able to stream, so it's hard to get a handle on the requirements. Any help will be appreciated.

A gaming desktop will likely use more power than necessary for those other non-gaming tasks, so it's probably a smart idea to get a lower-powered system for the other stuff. You could get a NAS that's capable of transcoding (e.g. WD PR4100, and there are some from other vendors like Synology that can run PMS and hardware transcode.) Otherwise it depends on what you need to transcode; the recommendation from Plex is 2k Passmarks to software transcode a FHD stream, so check out the benchmarked values of various CPUs that you might be considering. A modern i3 though is more than enough for a couple simultaneous streams. Currently I have my PMS/file server running on an older HP EliteDesk micro-SFF PC and it's perfect; that line is the main competitor to the Lenovo ThinkCentre Tiny SFF PCs, which are also an option. An old laptop may also suffice.
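If it helps, that rule of thumb turns CPU shopping into simple division (a hypothetical helper using the ~2,000-Passmark-per-1080p-stream guideline mentioned above; hardware transcoding changes the picture entirely):

```python
def max_sw_transcodes(cpu_passmark: int, per_stream: int = 2000) -> int:
    """Estimated simultaneous 1080p software transcodes, per Plex's
    ~2000-Passmark-per-stream rule of thumb."""
    return cpu_passmark // per_stream
```

So a CPU benching around 8,000 Passmarks would be good for roughly 4 simultaneous 1080p software transcodes, while anything under 2,000 can't reliably handle even one.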

MagusDraco posted:

I toss disks with 1 bad sector since usually that means there's way more coming.

Latest one I just removed from my stablebit drivepool stuff actually registered a second bad sector during the day or so it took to remove. Maybe I'll be able to RMA the disk this time though it isn't more than two years old.

Already bought a replacement though. Is it okay to just let a replacement just sit on a shelf/in a desk somewhere in a box?

You could, for the hell of it, use a questionable HDD like that for a non-vital purpose, like holding games (that are backed up elsewhere, if necessary and convenient.) That's what I do: if the drive finally dies while I'm playing Overwatch or whatever, then I just grab one of the hundred other drives I have available and install the game on that. :shrug:

It's not a bad idea to have a spare drive on hand, but you should probably run some tests on it for awhile to make sure it's not one that'll fail early.

uhhhhahhhhohahhh posted:

Yeah I just put the new disk in, running the extended SMART test before I start rebuilding, but the Synology says it's going to take 255 minutes (!!!)

When you do drive tests, the time required depends on the drive's capacity and its transfer rate. If you have a 2 TB drive that averages 150 MB/s r/w, that means a full read test would take approximately...4 hours. When I get a new drive I run extensive r/w tests on it, taking hours or days, to make sure the drive is sound out of the box.
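If you want to sanity-check estimates like that, the arithmetic is a one-liner (decimal units assumed, i.e. 1 TB = 10^12 bytes and 1 MB = 10^6 bytes, matching how drive vendors label capacity and speed):

```python
def scan_hours(capacity_tb: float, rate_mb_s: float) -> float:
    """Time for one full sequential pass over a drive, in hours.
    Decimal units: 1 TB = 1e12 bytes, 1 MB = 1e6 bytes."""
    seconds = (capacity_tb * 1e12) / (rate_mb_s * 1e6)
    return seconds / 3600

# 2 TB drive averaging 150 MB/s -> roughly 3.7 hours per read pass
print(round(scan_hours(2, 150), 1))  # 3.7
```

An 8 TB drive at the same rate is pushing 15 hours per pass, which is why multi-pass burn-in tests run into days.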

Atomizer
Jun 24, 2007



MagusDraco posted:

How would you go about doing that in Windows? Don't really have a spare computer to throw them into that can run badblocks off a Linux boot disc for a week (or really fit the extra 2 drives into the NAS in the first place, since it maxes out at 4 drives.) If "run it in badblocks for a week anyway" is still the best option I'll probably wait to RMA the two bad disks til the summer, after I build a new computer if Zen2 ends up being real nice.

edit: the NAS at least has stablebit scanner doing its version of disc scrubbing ever month so that's let me catch bad drives before any data was lost since it found the bad sectors on two of the WD Reds that had gotten bad blocks run on them approximately 2 years later

I've found that Hard Disk Sentinel has a good selection of diagnostics for use in Windows. Get a $20 enclosure, which really comes in handy for drive maintenance/analysis or just giving yourself a functional external drive.

Atomizer
Jun 24, 2007



Again, guys, the time it takes to read and/or write to a drive is entirely dependent on capacity and drive speed. Zero-fill takes exactly as much time as it would take you to fill the drive with actual content.

Atomizer
Jun 24, 2007



Sheep posted:

36 hours for 8TB of data is 493 Mb/s which is frankly really loving fast for a non-solid state disk.

Uh, did you really mean 493 Mb/s or MB/s? Because as written, none of it really makes sense in context.

MagusDraco posted:

I mean 36 hours is a really lovely guesstimate on my part. We're at 60 % as of now and I started it around 22-24 hours ago. I mean this seems about right actually since 140 MB/s would mean 11.53 Terabytes per day unless my math is wrong.




edit: No idea if it would work on a networked drive but the format command is done via the command prompt window and if you close that window the format command stops so pretty sure the computer would have to remain on the entire time if you did it via the command prompt on a remote computer. I just did it on the command prompt of the computer in question through a remote desktop connection. I don't need to keep the remote desktop session open if that's what you mean.

WRT to drive speed it's just some 8TB 7200rpm Seagate Ironwolf because that was what was on sale and I didn't want to mess with the tape mod for a WD White label and can't do the molex to SATA power stuff because the Dell T20 has a custom proprietary PSU with no Molex and a nonstandard mobo connector. 'course it was also $200 for it with the Xeon so hell that was worth it in late 2015/early 2016.

This makes sense, because typical HDD transfer rates are in the 100-200 MB/s range. That's why I split the difference and based my comments on a 150 MB/s average.
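For what it's worth, the 8 TB-in-36-hours figure pencils out to the low 60s in MB/s, which is the same number as ~494 in Mb/s; that bits-vs-bytes factor of 8 is exactly where this kind of confusion comes from. A quick sketch (decimal units assumed):

```python
def avg_rates(total_tb: float, hours: float):
    """Average transfer rate for moving total_tb terabytes in the
    given time, returned as (MB/s, Mb/s). Decimal units throughout."""
    mb_per_s = (total_tb * 1e12) / (hours * 3600) / 1e6
    mbit_per_s = mb_per_s * 8  # 8 bits per byte
    return mb_per_s, mbit_per_s

mb, mbit = avg_rates(8, 36)
print(round(mb, 1), round(mbit))  # ~61.7 MB/s, which is ~494 Mb/s
```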

Atomizer
Jun 24, 2007



Realistically what you guys are all getting at is, "if it doesn't die a premature death (i.e. within a few months if not the burn-in period) then an HDD will last several years until its natural death from normal use."

Atomizer
Jun 24, 2007



Sniep posted:

I'd agree to that, with the stipulation that "several years" is "some duration of time among the batch the drives were all from"

But if one of a group starts going, this is time to upgrade them all and relegate the survivors to non-critical tasks if not entirely dismiss.

Agreed. When I say "several years," though, it's just an estimate based on how long I suspect guys like us use our drives. In 24/7 service with heavy use I'd expect <5 years of life, while a backup drive that spends most of its time offline could last (meaning, "still be usable if not desirable") a decade plus; I'm guessing a drive like the latter would be usable until something like the seals or lubricant dries up.

I have a 3 TB WD Green with IIRC ~65k hours of "on" time (which equates to almost 7.5 years :eyepop:) but it's still healthy because it was self-powered-down for nearly all that time.

I bought 3x 1 TB WD Greens IIRC >10 years ago (for like $240 each :popeye:), and they're all still functional; 2 are boxed up, and one is in my spare desktop with some games on it, which is a perfect non-critical task for it.

I also have some very old (e.g. 14+ years) SATA and PATA drives that still work like new, with no reported errors. They're not practical anymore, but they're otherwise usable. Basically what we're getting back to is that, premature deaths aside, HDDs will last until you've worn them out but this totally depends on how hard you use them.

Atomizer
Jun 24, 2007



Sheep posted:

It's called a bathtub curve.


I know exactly what it is, my point was that for all the posting about different brands of HDDs the end result is going to be that.

Looten Plunder posted:

I can't justify the price of the DS918+ so I think I'm going to go with the DS418PLAY which I think meets all my needs. I think I'll start with 3x 6TB drives in RAID1 and add another down the line. The information around Plex decoding seems to be conflicting but I think it can handle everything I want it to (only need it to do 720p x265 decoding or 1080p x264 decoding).

In case it doesn't though, is the critical part for decoding the CPU? If so, is upgrading the CPU on the NAS exactly the same as upgrading the CPU on any board or is there an extra layer of complexity involved?

First of all, make sure your specific NAS model is capable of hardware transcoding; some similar models share the same CPU but (different firmware, I'd guess) not all of them can transcode. Next, you can check the CPU (e.g. the N3700) to see what version of Quick Sync it has, which will tell you what it can decode and encode, separately. This is mostly going to matter for HEVC, which newer CPUs support but CPUs from more than a few generations back may not (AVC is widely supported now.) Note that both decoding (i.e., your source) and encoding performance are important, and they're not linked (e.g. IIRC decoding support for a given codec at a given feature level generally arrives a generation or so earlier than encoding support for the same thing.)

So for transcoding, the critical part is usually the CPU.* In software transcoding, which is what Plex originally used, the CPU simply brute-forces the process. Hardware transcoding still involves the CPU (in this scenario,) but it's done in a specialized core/ASIC, and in Intel's case, this is their Quick Sync engine. (AMD's is VCE, and nVidia has NVENC.) Note that Quick Sync is literally designed for speed over quality and the results may be visibly more artifacted than a more intensive software transcode.

*So Plex has added hardware transcoding to more and more devices; first it was the WD My Cloud PR2100/4100 using Quick Sync, and around the same time the nVidia Shield with NVENC, and now you can enable it on any desktop PMS installation and it can also use NVENC and I believe VCE, so in these specific cases the "critical part" could actually be the GPU.

Atomizer
Jun 24, 2007



necrobobsledder posted:

The issue with hardware decoders is that they tend to take algorithmic shortcuts that are optimized for speed over quality and you can get some noticeable quality loss on certain videos. The primary advantages of technologies like QuickSync and Nvenc are to let lower power systems decode videos in real-time primarily for purposes like video conferencing where fidelity is not important compared to saving power. You don’t need to be a cinephile to see some of the artifacts in my experience, but if you’re just viewing some random crap with your buddies in college it’s totally fine.

Well yeah, I literally already wrote the "speed over quality" part. For what it's worth, another proposed use for Quick Sync was in live streaming gameplay, where output quality isn't really that important, although the well-known streamers do tend to use 2 separate PCs for this purpose, one for the actual gaming and a second for the encoding, which I think is overkill.

Atomizer
Jun 24, 2007



Volguus posted:

What is a decent multi-drive (5+) enclosure that one could connect to an already existing computer? Under $200 or so if possible. Is USB the only connectivity option or are there more speedy and enterprisey options, while keeping cost down?

Something like this might suffice.

Volguus posted:

My main beef (and why i call it slow) with my current NAS is that since I have gigabit internet i have started to download big files. Everything is over 15GB. All is cool and dandy, repairing and unpacking is done on the downloading machine, but then it comes time to copy it over to the NAS. And that takes longer than downloading it in the first place and that's just ridiculous. Mounted over samba I get 10MB/s or so. Plus, if it copies something and I play a movie from the NAS at that time sometimes I do get choppy playback, especially at very high qualities (40GB +) .

Is your existing NAS only Fast Ethernet? :stare:

Atomizer
Jun 24, 2007



Volguus posted:

It is poo poo: Netgear ReadyNAS 104. Which is why I was wondering if i could not get a different NAS, but maybe attach HDDs to one of the computers that I already have. The idea being that it could and should be cheaper that way.

It sounds like the issue is with the NAS itself, because the rest of the network seems fine, yet you're still getting real-world performance far beneath what you should. Try swapping cables and ports as a quick troubleshooting measure. The HDDs could also be contributing: if you're only getting those 10-25 MB/s speeds when transferring lots of tiny files, or files that are heavily fragmented, that could be the culprit, especially since you mentioned lots of downloading, unpacking, and reassembling of files.
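For reference, here's roughly what the wire itself allows; the 0.9 efficiency factor is just a loose allowance for TCP/SMB overhead on my part, not a measured value. The point is that ~10 MB/s is suspiciously close to the ceiling of a Fast Ethernet (100 Mbps) link, an order of magnitude under gigabit:

```python
def max_mb_s(link_mbit: float, efficiency: float = 0.9) -> float:
    """Rough usable file-transfer throughput for an Ethernet link,
    in MB/s. efficiency=0.9 is an assumed overhead allowance."""
    return link_mbit / 8 * efficiency

print(round(max_mb_s(1000)))  # ~112 MB/s on gigabit
print(round(max_mb_s(100)))   # ~11 MB/s on Fast Ethernet
```

So if any hop in the path (NAS port, cable, switch) negotiated down to 100 Mbps, you'd see almost exactly the numbers you're reporting.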

The literal cheapest NAS upgrade you could make would be to repurpose as much of the equipment you already have as possible, which may be as simple as getting a full-tower case and moving the guts of one of your existing PCs into it so you have room to add as many HDDs as necessary. If the fragmentation issue above does turn out to be the problem, though, you'll experience the same thing.

Laserface posted:

I have a readynas RN204 with 2x WD 3tb reds in RAID1.

Lately, stuff I download to stream via plex to my Apple TV has been having artifacting/scrambled frames. Audio seems fine.

Playing the file from the nas on other devices or using other players like vlc has the same problem.

I redownloaded the same file and kept it locally on my machine and there was no issues.

I think I know the answer is yes, but one of my drives is failing, yeah? One of them is noisy on seek and sometimes the NAS is slow to respond while it grinds away for like 10 minutes. The disk health on the NAS shows both as fine. Overall volume health is in a warning state due to 5% free space, however this was not an issue prior.


The only thing I have changed is recently deleting a bunch of mp3s out of my library. Could this have caused some fragmentation and the video files being stored in the now free gaps cause these issues? Copying the files to my pc and playing them has the same problem.

I actually just had a similar thing pointed out to me earlier tonight. Content streamed via Chromecast to the living room TV has this persistent artifacting/pixellation across the middle and bottom of the entire frame, but the same file streamed at the same time to a different device looks fine. It sounds like your situation is a bit different, though, and may very well be file corruption due to failing HDDs. You probably can't see much drive-health detail while they're in the NAS, so consider temporarily moving them to an external USB enclosure to troubleshoot. As always, I recommend HD Sentinel, which can tell you a lot about drive health as well as perform diagnostics.

Atomizer fucked around with this message at 09:19 on Jan 20, 2019

Atomizer
Jun 24, 2007



priznat posted:

Hm, I would like the extra power of a Xeon D but those APUs are nice. Since converting my 2500K over to unraid 24/7 duties I have noticed a bit of a jump in my electric bill and would be nice to get something lower powered.

I underclocked it but haven’t kill-a-watt’d it yet (need to find mine) to see how much it draws. Could be something else drawing power too but it’s the one major change from last year I can think of (usage is 25% more than last year).

Are you comparing your power usage by month year-to-year? Because if you're looking at data for the past few months compared to last year, you may just be using more power for heating. I dunno where you live, but here it's been cold as gently caress and is projected to stay below freezing for over a week, so I have electric heaters that are probably working overtime.

Your CPU is a 95 W part but presumably isn't working at 100% capacity 24/7, and even then your entire system is probably drawing <150 W (just a guess, based on measurements of my own PCs), which would probably amount to less than 25% of your previous usage from before that system was on 24/7.
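When you do find your kill-a-watt, the napkin math is simple; the 150 W figure below is my guess above, not a measurement:

```python
def monthly_kwh(avg_watts: float, hours_per_day: float = 24) -> float:
    """Energy used over a 30-day month, in kWh."""
    return avg_watts * hours_per_day * 30 / 1000

# A ~150 W box left on 24/7 (assumed whole-system draw):
print(monthly_kwh(150))  # 108.0 kWh/month
```

Multiply that by your per-kWh rate and compare against the jump on your bill; if the numbers don't line up, the server isn't the whole story.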

IOwnCalculus posted:

I can't be the only one who has just slapped a SSD wherever it will fit / won't fall into a fan, right?

Same. If they need to be fastened then sure: strips, ties, whatever, but if I can just leave SSDs loose in there I've done it. Often I've crammed them into free 3.5" or 5.25" bays (although you can actually get cheap ~$10 bay adapters that fit 2x 2.5" drives in a 3.5" bay, or 4x 2.5" in a 5.25" bay.) My SFF desktop has a bracket that mounts a 2.5" drive side-by-side with a 3.5" one, and because of that there's extra height where another 2.5" drive can fit on top of the existing one, just without any fasteners; an SSD is perfect for that spot because it doesn't need to be mounted rigidly. I could theoretically fasten a 2.5" HDD to the bracket (as it should be) with an SSD sandwiched on top, although I happen to have 2x SSDs in there instead.

Atomizer fucked around with this message at 22:57 on Jan 24, 2019

Atomizer
Jun 24, 2007



ILikeVoltron posted:

Can't you just run a smart test?

Running the long or short SMART diagnostics will tell you if the drive encounters disk surface issues but won't tell you if there's something wrong with the data on your drive, i.e. if the extremely low free space is indeed the cause of the performance issue.


Atomizer
Jun 24, 2007




FSP seems to be well-known for those kinds of server-type PSUs. I have one in my SFF desktop, and it's going strong after ~6 years. I also have an old Shuttle XPC as a backup PC with the same type of PSU. No complaints, although they can be fairly loud under load (I'd assume the other one is the same way - there's not much you can do about it because they have to use such small fans that run at high RPM.)
