Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
I used XP Pro x64 Edition for a couple years, and if drivers were available for your hardware it was outright a better OS than XP.

mayodreams
Jul 4, 2003


Hello darkness,
my old friend
During my Vista years, my boot volume was a WD Raptor RAID0. That was WAY fast, but only like 72GB, and the whole RAID0 thing. I never lost any data, but I was drat sure to have my actual personal data on a 2nd drive.

MagusDraco
Nov 11, 2011

even speedwagon was trolled
I didn't run Windows Update until the late XP SP2 / Windows 7 era. So that was a thing. Also, I forgot to put in the motherboard standoffs when I built one of my computers. It took several months to figure out why it would randomly crash.

SynMoo
Dec 4, 2006

mayodreams posted:

During my Vista years, my boot volume was a WD Raptor RAID0. That was WAY fast, but only like 72GB, and the whole RAID0 thing. I never lost any data, but I was drat sure to have my actual personal data on a 2nd drive.

I ran two 36GB Raptors in RAID 0 on my Athlon 64 3000+. That thing was a screamer for the time. The sounds the drives made were awesome. Still have them.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

havenwaters posted:

I didn't run Windows Update until the late XP SP2 / Windows 7 era. So that was a thing. Also, I forgot to put in the motherboard standoffs when I built one of my computers. It took several months to figure out why it would randomly crash.

Oof!

I had two SSDs in RAID0. I think they were 30GB Vertex 1's. I remember it being fast, but I also spent £200+ on a controller card just so I could have 'hardware RAID'. I'm still not sure if it was genuinely running as hardware RAID.

I own two 250GB Samsung 850 Evos at the moment: one each in the laptop and desktop. I removed the laptop one a few days ago to run a different drive in it.

So I'm sitting there thinking to myself "you've got two Samsung 850's here... Why not have them both in the desktop running RAID0?".

I was quite tempted before realising that I'd only be doing it for the sake of it: even if it was noticeably faster (I have my doubts), I wouldn't actually need the extra speed for anything in particular.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

NihilCredo posted:

We all did stupid nerd poo poo in the aughts. I used Windows Server 2003 as my gaming machine for a few years because somebody, somewhere said that it has less bloat and was therefore faster than XP.
Client and server are the same goddamn operating system; they were before and still are. The latter just comes with additional optional components.

apropos man posted:

That makes sense. The only time I've noted the number of platters before purchase was on an old Seagate Momentus XT: I bought the single-platter 250GB version because I obsessively figured that a single platter would load Windows from the outside edge and therefore be faster as an OS drive. How obsessively embarrassing!
There is actually something to managing data on platter drives. By partitioning the drive, you can force the drive to "short-stroke" (lol), because data will be constrained to a certain region on the drive, regardless of fragmentation. Of course, if you're randomly accessing data over all partitions, the advantages fly out of the window, but it'd still work when just booting or whatever.
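As a rough sketch of what that looks like in practice (assuming a fresh secondary drive and Windows' diskpart; the disk number and size here are made up, adjust to your setup), you just allocate the first slice of the disk and leave the rest empty:

code:
rem diskpart script: allocate only the first ~64GB (the fast outer tracks), leave the rest unallocated
select disk 1
clean
create partition primary size=65536
format fs=ntfs quick
assign
Run it with "diskpart /s shortstroke.txt" (or type the same lines interactively). Everything past that first partition stays unallocated, so the heads never have to travel out there.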

Combat Pretzel fucked around with this message at 23:05 on Oct 21, 2016

Ika
Dec 30, 2004
Pure insanity

I think my work PC still has a Raptor in it. I should probably stop using it one of these days; it's been in use for 7+ years. It hasn't been the OS drive for 4 years or so though, ever since I got an SSD.



Completely unrelated: if I want to replace a drive in my Synology NAS, can I just dd its data to a new one and somehow expand the partitions in the admin panel? I have a 2-drive box running without any type of RAID configured.

apropos man
Sep 5, 2016

You get a hundred and forty one thousand years and you're out in eight!

Combat Pretzel posted:

There is actually something to managing data on platter drives. By partitioning the drive, you can force the drive to "short-stroke" (lol), because data will be constrained to a certain region on the drive, regardless of fragmentation. Of course, if you're randomly accessing data over all partitions, the advantages fly out of the window, but it'd still work when just booting or whatever.

I remember the term "short stroke", so that's definitely what I was trying to achieve. I can't remember if I partitioned it to keep Windows constrained to a certain area on the edge of the platter.

Probably not.

PerrineClostermann
Dec 15, 2012

by FactsAreUseless

Combat Pretzel posted:

There is actually something to managing data on platter drives. By partitioning the drive, you can force the drive to "short-stroke" (lol), because data will be constrained to a certain region on the drive, regardless of fragmentation. Of course, if you're randomly accessing data over all partitions, the advantages fly out of the window, but it'd still work when just booting or whatever.

This is also why games on optical discs often had padding files. I personally dealt with them when trying to reduce the size of my PSP ISOs after ripping.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
OpenSolaris actually put a lot of effort into improving the boot times of their live CDs back then, analyzing the whole boot cycle and rearranging the physical data layout on the disc to minimize seeks.

Walked
Apr 14, 2003

Walked posted:

Can anyone offer a suggestion to nail down a bottleneck?

I have a TS140 with an Adaptec 6405e RAID card and 4x 3TB 7200 RPM drives in RAID10.

I just moved this from my desktop to the TS140 and went from ~300mbps r/w to about 120.

The two systems are connected via iSCSI over 10GbE and it maxes out the network until the TS140's RAM cache is full, then dies down to 120ish.

I've benchmarked this on the desktop and TS140 and it's pretty consistently different in iometer.

I'm thinking it has to be a bottleneck somewhere on the TS140 but it's a relatively modern PCIe port.

What am I missing?

Config:
TS140, quad-core Xeon
20GB RAM
Intel X540-T2 10GbE
Adaptec 6405e
4x 3TB 7200 RPM drives, RAID 10
Server 2016
StarWind Virtual SAN

The performance is the same whether local, over the network, or via iSCSI.

I just don't know what would bottleneck it..

So I need some help figuring this out.

I moved my 10GbE NIC to a 710 with an H700 controller and 2x 850 Evo in RAID 0 to completely eliminate the HDDs as a possible bottleneck. I also directly connected it to my workstation to take cabling and the switch out of the picture.

Still capping at 2Gbit/sec transfer. gently caress.

I've verified the source disk (850 Pro 1TB) is capable of much more than 2Gbit/sec. So it seems it has to be something with the PCIe or some other weird quirk in Windows 10.

PCIe is running in Gen3 mode at 8x. So it shouldn't be a bus limitation.

Any other ideas?

mayodreams
Jul 4, 2003


Hello darkness,
my old friend
What are you using for your 10g switch?

I would check that your firmware and drivers line up. Are Server 2016 and Windows 10 actually supported for the 10g HBA?

From the HDD perspective, 4 disks, even in RAID 10, isn't a ton of spindle speed.

Walked
Apr 14, 2003

mayodreams posted:

What are you using for your 10g switch?

I would check that your firmware and drivers line up. Are Server 2016 and Windows 10 actually supported for the 10g HBA?

From the HDD perspective, 4 disks, even in RAID 10, isn't a ton of spindle speed.

Like I said: to eliminate HDD as the bottleneck I'm going from SSD (850 Pro 1TB) --> SSD RAID 0 (2x 850 Pro on hardware RAID0, with battery and 512MB cache, write-back enabled/forced); I should very easily be doing more than 2Gbit/sec; maybe not maxing out 10GbE, but notably better than what I'm seeing.
I've eliminated the switch from the equation by directly connecting the hosts.

Windows 10 is supported; I'm using mainstream Intel X540-T2 adapters.


Edit: on a whim I blew away VMware and installed Server 2016. Getting speeds as expected now.

Something is amiss in the ESXi default drivers it seems.

Walked fucked around with this message at 15:42 on Oct 22, 2016

mayodreams
Jul 4, 2003


Hello darkness,
my old friend

Walked posted:

Like I said: to eliminate HDD as the bottleneck I'm going from SSD (850 Pro 1TB) --> SSD RAID 0 (2x 850 Pro on hardware RAID0, with battery and 512MB cache, write-back enabled/forced); I should very easily be doing more than 2Gbit/sec; maybe not maxing out 10GbE, but notably better than what I'm seeing.
I've eliminated the switch from the equation by directly connecting the hosts.

Windows 10 is supported; I'm using mainstream Intel X540-T2 adapters.


Edit: on a whim I blew away VMware and installed Server 2016. Getting speeds as expected now.

Something is amiss in the ESXi default drivers it seems.

You didn't mention VMware, so that complicates things a lot. The HBA you are using is supported in ESXi 6.0, but it looks like you need to download the driver from VMware: VMware Download

You should also check whether your server is on the compatibility list, and whether it needs additional drivers.

The vanilla ESXi image will 'work' until it doesn't.
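For what it's worth, once you've grabbed the offline bundle, installing it on the host is usually just something like this (sketch only; the bundle filename is a placeholder for whatever the VMware download actually gives you):

code:
# copy the offline bundle zip to a datastore, then on the ESXi host (SSH enabled):
esxcli software vib install -d /vmfs/volumes/datastore1/ixgbe-offline_bundle.zip
# reboot the host afterwards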

mayodreams
Jul 4, 2003


Hello darkness,
my old friend
Meant to edit but hit reply.

Was your Windows VM using E1000 or VMXnet3?

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
What's the CPU usage during the transfer? You can lose a lot of throughput if you're doing things like TCP checksums on the CPU (yes, I know even on the CPU it should be faster than 2 Gbps).

Check your TCP frame / segment sizes as well. If you're doing primarily sequential transfers, check that your MTU is at least 4k.

Your OS could also be mucking up buffering and severely limiting bandwidth. On Linux, for example, you need to tune settings in /etc/sysctl.conf to increase kernel buffer sizes for TCP sockets, because the defaults aren't great for anything beyond a gigabit of throughput. Windows probably doesn't need this tuning, but it's worth mentioning.

Furthermore, what are you using to test bandwidth? You should be using tools that test primarily NIC-to-NIC, with as little involvement from other components (like your disks) as possible. iperf is fine for this, but you can also just do pings, estimate bandwidth from ICMP packet size and latency, and compare the bandwidth-delay product to the buffer sizes you set.
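Something like this is the usual Linux starting point (example numbers only, not tuned recommendations; size them to your own bandwidth-delay product, and the address below is a placeholder):

code:
# /etc/sysctl.conf -- raise the kernel's TCP buffer ceilings, then apply with `sysctl -p`
net.core.rmem_max = 67108864
net.core.wmem_max = 67108864
net.ipv4.tcp_rmem = 4096 87380 67108864
net.ipv4.tcp_wmem = 4096 65536 67108864

# NIC-to-NIC test, no disks involved:
#   receiver: iperf -s
#   sender:   iperf -c 192.168.1.10 -P 4 -t 30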

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
So, is this Intel X540 a worthwhile investment for a direct high-speed link to the NAS? I assume T1 and T2 indicate the number of ports on the card, because that's how it looks on eBay?

mayodreams
Jul 4, 2003


Hello darkness,
my old friend
I don't feel 10g is worth the cost yet unless you want a baller home lab with separate storage and hypervisor. But you can also accomplish that with LACP on a managed switch. Unless you have a very high-performing storage system, you are really not going to push past the limits of gigabit in the home.

Even at the entry enterprise level, 10g isn't that necessary unless you have a lot of load on the storage. A lot of storage fabric is 4g/8g connections. Our production ESXi hosts are 2 x 10g twinax for networking and 2 x 8g FC for storage. They host anywhere from 20-50 VMs and I'd have to look, but I doubt they really push the storage that much. The dual connections are really for redundancy rather than aggregate bandwidth.

The benefit of having everything using VMXnet3 on a single ESXi host is that everything is 10g internally. Of course, you can't put your storage vm on that storage, but stuff like FreeNAS is supposed to boot from a USB/SD card anyway on bare metal.

Greatest Living Man
Jul 22, 2005

ask President Obama
How much can I switch things around in FreeNAS? I've used 5% of a 4x3TB striped pool and I'd like to switch it to RAID5. What's the sanest way to do this?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

Greatest Living Man posted:

How much can I switch things around in FreeNAS? I've used 5% of a 4x3TB striped pool and I'd like to switch it to RAID5. What's the sanest way to do this?

Copy it elsewhere and then destroy the pool and remake it as whatever you want. One of the biggest limitations of FreeNAS and everything else built on ZFS is that it really doesn't appreciate reforming storage units on the fly.
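If you want to see what that amounts to at the command line (sketch only; pool and disk names are made up, and FreeNAS would rather you do all of this through its GUI):

code:
# after copying everything off somewhere safe:
zpool destroy tank
zpool create tank raidz1 ada0 ada1 ada2 ada3   # raidz1 is ZFS's RAID5-ish layout
# then recreate your datasets/shares and copy the data back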

IOwnCalculus
Apr 2, 2003





Assuming you did it as ZFS because FreeNAS really pushes ZFS... You have to blow it away and start over.

sharkytm
Oct 9, 2003

Ba

By

Sharkytm doot doo do doot do doo


Fallen Rib
Especially with 5% usage, burn it down and rebuild.

Melp
Feb 26, 2004

You know the drill.
I finally pulled the trigger on the gigantic NAS I've been planning for like 6 or 7 years now. It's running FreeNAS with 2 8-drive RAID-Z2 arrays, one with 8TB drives and one with 4TB drives, for a total of ~60TB of usable space. I already owned the 4TB drives, so I built the system with the 8TB drives, migrated the data off the old 4TB drives onto the NAS, then added the 4TB drives to the NAS to expand the primary volume. All data is shared via SMB/CIFS.

The system also hosts 2 Debian bhyve VMs running rtorrent/rutorrent and sonarr. I tried qbittorrent but the web UI is awful. I also tried hosting stuff in jails, but it wasn't very stable and FreeNAS is moving away from jails anyway. I'll be setting up a third bhyve VM (probably FreeBSD) to host nginx for my personal website. I have scripts set up to send SMART reports, etc., to my email address on a weekly basis, and scrubs/SMART checks are scheduled every 2 weeks. I got a Perl script from the FreeNAS forums that controls my fan speed based on HDD and CPU temps. I have CrashPlan running on my desktop to keep the most important data from the server backed up. I might try to move CrashPlan to the server in the future (there's a FreeNAS plugin, but it's outdated and doesn't work any more), but it seems like it will be a huge pain. I also hate CrashPlan, so I might just drop it and go with ACD + rsync instead.
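(If anyone's curious, the weekly email piece boils down to a cron job along these lines; device names and the address are placeholders, and it assumes outbound mail already works on the box:)

code:
# weekly SMART summary, Sundays at 23:00
0 23 * * 0 (smartctl -a /dev/da0; smartctl -a /dev/da1) | mail -s "NAS SMART report" you@example.com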

Main volume -- http://i.imgur.com/JzYh5u1.png

Pics of the server itself -- http://imgur.com/a/hHsAL

Server noise while under moderate load -- https://www.youtube.com/watch?v=n1P188d9zsk

Here's the parts list:
code:
Part		Make/Model			Qty	$ Per	$ Total	From
=================================================================================
Chassis		SuperMicro SC846		1	$200 	$200 	eBay
Motherboard	SuperMicro X10SRL-F		1	$272 	$272 	Amazon
CPU		Intel Xeon E5-1630v3		1	$373 	$373 	SuperBiiz
RAM		Samsung M393A2G40DB0-CPB (16GB)	4	$80 	$318 	Amazon
SAS HBA		IBM M1015			2	$146 	$292 	eBay
PSU		SuperMicro PWS-920P-SQ		2	$118 	$236 	eBay
Backplane	SuperMicro BPN-SAS-846A		1	$250 	$250 	eBay
Boot Device	Intel 540s 120GB SSD		2	$53 	$105 	Amazon
CPU Cooler	Noctua NH-U9DXi4		1	$56 	$56 	Amazon
120mm Fan	Noctua NF-F12 iPPC 3000 PWM	3	$24 	$72 	Amazon
80mm Fan	Noctua NF-R8 PWM		2	$10 	$20 	Amazon
UPS		APC SUA1500RM2U Smart-UPS	1	$300 	$300 	eBay
SSD Cage	SuperMicro MCP-220-84603-0N	1	$25 	$25 	eBay
SAS Cable	SFF-8087 to SFF-8087		4	$11 	$44 	Amazon
HDD Screws	SuperMicro HDD Screws (100 ct)	1	$8 	$8 	Amazon
Lack Rack	Lack Coffee Table		1	$25	$25	IKEA
Tax		Tax				1	$199 	$199 	-
=================================================================================
Data Drive	WD 8TB Red (8 + 1 Spare)	9	$319 	$2,871 	Amazon
Data Drive	WD 4TB Red (Already Owned)	8	$0 	$0	-
Data Drive	WD 4TB Red (Spare)		1	$157 	$157 	Amazon
=================================================================================
						Total (No HDD):	$2,795
						Total (HDDs):	$3,028
						Grand Total:	$5,823
Here's a rundown of each of the individual part selections:
  • Chassis: SuperMicro SC846 -- I picked this up used on eBay ($200 shipped, retails for like $1000, amazing deal), but it comes with an old backplane that doesn't support larger capacity volumes, so I had to buy a different backplane. The PSUs that came with it are really loud, so I picked up some quieter ones. The stock fans are also loud, so I replaced those too. I've got 8 open drive bays to allow for future expansion. I blocked off the airflow to the empty bays with index cards and masking tape on the inside of the HDD sleds.

  • Motherboard: SuperMicro X10SRL-F -- This is SuperMicro's basic LGA 2011 server board. LGA 1151 would have worked but the SC846 chassis doesn't take micro ATX boards and the full ATX versions of SuperMicro's LGA 1151 boards are like $500. LGA 2011 will also allow me to add more RAM (if I ever need it).

  • CPU: Intel Xeon E5-1630v3 -- With 4 hyper-threaded cores at 3.7GHz, this has the highest single core clock speed in this family of Xeons, which is really nice for SMB/CIFS. I had to get it on SuperBiiz because it's typically only sold to systems integrators.

  • RAM: Samsung M393A2G40DB0-CPB (4x16GB) -- This is the SuperMicro recommended RAM for the X10SRL-F board, ECC for ZFS. 64GB is probably overkill, but 32GB is the next step down and that would have been cutting it close. Whatever, overkill is kind of a theme here.

  • SAS HBA: IBM M1015 -- These are flashed to IT mode so the OS has direct access to the drives for SMART data, etc. Each card handles 8 drives and I've got room for another card if/when I need to populate the last 8 bays in the chassis.

  • Data Drives: WD Red (4TB & 8TB) -- Like I said above, I was already running the 4TB drives on my desktop, so I moved them in after I migrated the data off them. I bought a 4TB and 8TB spare to have on hand in case of a failure. People like WD Red for NAS, but HGST would have been a good option too.

  • PSU: PWS-920P-SQ -- These are 920W redundant PSUs for the SC846 chassis and much quieter than the stock 900W ones. I got them new/open box from eBay for $120 each, which is a fantastic deal. I guess the "-SQ" stands for "super quiet"? Whatever, they're really quiet.

  • Backplane: SuperMicro BPN-SAS-846A -- The backplane that came with the server has a built-in SAS expander but isn't SAS2 capable so the maximum capacity of the array is limited. This backplane is basically a SAS breakout cable baked into a PCB. SuperMicro also has an expander-based backplane with SAS2 compatibility, but it's hard to find (BPN-SAS2-846EL1). If you did use one of these backplanes, you would only need to use one port on a single M1015 card and the backplane would expand that connection out for all 24 bays. Note that this setup would cause a slight bottleneck with most platter drives (you would get 24Gb/s on the SAS link, so up to 1Gb/s or 125MB/s per drive).

  • Boot Device: Intel 540s 120GB SSD -- This is the cheapest SATA Intel SSD I could find. People typically use USB drives for their boot device, but for a build like this, the FreeNAS gurus recommended SSDs instead for increased reliability. The controllers on most USB drives are pretty unreliable.

  • CPU Cooler: Noctua NH-U9DXi4 -- I was nervous about the fit with my motherboard and chassis, but this ended up working out pretty well. While it does provide enough clearance for DIMMs installed in the RAM slots closest to the CPU socket (at least with these Samsung DIMMs), it's so close that I'll probably have to remove the cooler to actually perform the installation in those slots. You can sort of see what I mean here (same case exists on the other side); notice the RAM slot just under the edge of the cooler: http://i.imgur.com/KE493iV.jpg

  • HDD Fans: Noctua NF-F12 iPPC 3000 PWM -- The SC846 comes with 3 80mm HDD fans which are absurdly loud. Fortunately, the fan wall is removable and 3 120mm fans fit perfectly in its place. I zip-tied the 120mm fans together and used zip-tie mounts to secure them to the chassis. I started with Noctua NF-F12 1500 RPM fans, but some of the drives were getting a bit hot under heavy load, so I switched to their 3000 RPM model. I also discovered that air was flowing from the CPU side of the fan wall back over the top of the fans rather than coming through the HDD trays, so I cut a ~3/4" strip of wood to block the space between the top of the fans and the chassis lid. With the wood strip in place, HDD temps dropped like 10 C. Pics of the fan wall install process (still showing 1500 RPM fans): http://imgur.com/a/SCaWu

  • Rear Fans: Noctua NF-R8 PWM -- As above, the stock fans are super loud. These Noctua 80mm fans fit perfectly in their place.

  • UPS: APC SUA1500RM2U Smart-UPS -- I got this from eBay: a used chassis with a new battery. The total load capacity is 980W, and with the server and all my network gear on it, it sits around 25-30% load. It's working really well, and FreeNAS comes with drivers for it, so I can monitor all sorts of stats.

  • Misc: I got a SuperMicro cage for the SSD boot drives that mounts to the inside wall of the chassis (http://i.imgur.com/CGxeFs7.jpg). The chassis did not come with HDD screws, so I got a baggie from Amazon for a few dollars. I picked up the SAS cables from Monoprice via Amazon. I'm using a Lack Coffee Table from IKEA with some reinforcement on the lower shelf to serve as a rack for the server and UPS (LackRack Enterprise Edition™). The LackRack is only temporary, but for $25 it's done remarkably well.
All in all, I'm really happy with everything. I've had the machine running for about 2 weeks now and everything is humming along pretty well. I'm gonna work on nginx and maybe openvpn in the next few days and I'll probably continue to tinker with minor stuff over the next few months. Let me know if anyone has questions, comments, etc. I learned a ton during this whole process and I'm happy to help if anyone else is looking to do something similar.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

A thought: with the new power delivery standards from USB Type C, should we now be able to have external 3.5" HDDs that don't need a separate power brick?

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
That is pretty badass Melp. Thanks for the writeup and pics!

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Lack rack strikes again. I'm surprised the coffee table can support the weight of all that; I'd expect to see several L brackets in use.

Also, that's pretty pricey for what amounts to 16 extra FreeBSD-supported SATA ports. The M1015 is way overpriced on eBay now because of all the people putting them into their random NAS builds, and you can even find M1115 controllers for cheaper than the M1015.

Melp
Feb 26, 2004

You know the drill.

necrobobsledder posted:

Lack rack strikes again. I'm surprised the coffee table can support the weight of all that; I'd expect to see several L brackets in use.

Also, that's pretty pricey for what amounts to 16 extra FreeBSD-supported SATA ports. The M1015 is way overpriced on eBay now because of all the people putting them into their random NAS builds, and you can even find M1115 controllers for cheaper than the M1015.
I've got L braces on each of the legs under the lower shelf and a short piece of 2x4 holding up the underside of the shelf in the middle.

I noted the price on the M1015s wrong; they were $75 shipped each for new/open box, so not too bad.

Samuel L. ACKSYN
Feb 29, 2008


NihilCredo posted:

A thought: with the new power delivery standards from USB Type C, should we now be able to have external 3.5" HDDs that don't need a separate power brick?


Yes, it's possible, although there aren't many right now.

http://www.seagate.com/consumer/backup/innov8/

Shaocaholica
Oct 29, 2002

Fig. 5E
I've got one of those early gen Seagate 8TB Archive drives. I have the luxury of running some lengthy tests on it before I put it into 'production' (home media player). Should I be running Seatools or something else to validate the drive? I see the latest Seatools DOS is v2.23 on seagate.com which is like 2010/2011 vintage. Should I use that or some 3rd party tool?

edit: Seagate tech support says v2.23 supports my drive even though my drive is SMR and about 5 years newer than the v2.23 build of Seatools.

Shaocaholica fucked around with this message at 18:07 on Oct 31, 2016

Krailor
Nov 2, 2001
I'm only pretending to care
Taco Defender

Shaocaholica posted:

I've got one of those early gen Seagate 8TB Archive drives. I have the luxury of running some lengthy tests on it before I put it into 'production' (home media player). Should I be running Seatools or something else to validate the drive? I see the latest Seatools DOS is v2.23 on seagate.com which is like 2010/2011 vintage. Should I use that or some 3rd party tool?

edit: Seagate tech support says v2.23 supports my drive even though my drive is SMR and about 5 years newer than the v2.23 build of Seatools.

While there are a bunch of programs that can stress test a drive, most of them also include a bunch of benchmarking tools that aren't really needed just for making sure a drive is good.

The only things you really need to do are to read and write every sector of the drive; this should catch most drives that are going to fail early, and Seatools works just fine for this.

The two things you'll want to run are a Full Erase (writes 0s to every sector) and a Long Generic (reads every sector). If the drive makes it through both without any errors, I'd call it good to go.
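If you'd rather boot a Linux live USB than deal with Seatools, badblocks does the same write-and-read-every-sector job in one shot (sketch only; destructive, and it assumes the drive shows up as /dev/sdX):

code:
# write-mode test: writes patterns across every sector and reads them back; WIPES the drive
badblocks -wsv -b 4096 /dev/sdX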

Shaocaholica
Oct 29, 2002

Fig. 5E

Krailor posted:

While there are a bunch of programs that can stress test a drive, most of them also include a bunch of benchmarking tools that aren't really needed just for making sure a drive is good.

The only things you really need to do are to read and write every sector of the drive; this should catch most drives that are going to fail early, and Seatools works just fine for this.

The two things you'll want to run are a Full Erase (writes 0s to every sector) and a Long Generic (reads every sector). If the drive makes it through both without any errors, I'd call it good to go.

Yep, I tried the full erase pass in Seatools but it crashes around 15-20% in on my 8TB drive. I might just have a bad copy of it. Oh well, I already put it into 'production' after a full surface scan in some GNU app whose name I forget.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Doing either a full write or a full erase on a shingled drive should be a very bad idea, no? Wouldn't it instantly put it into its super slow rewrite mode?

I didn't run any extensive tests on mine for that reason, but then again I don't have anything that can't really be replaced on it (or on any other drive, for that matter).

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

NihilCredo posted:

Doing either a full write or a full erase on a shingled drive should be a very bad idea, no? Wouldn't it instantly put it into its super slow rewrite mode?

This shouldn't impact anything as long as you format the drive before (re)using it; the shingled rewrite issue only comes into play if it has actual data to rearrange. It still uses the same GPT as a normal drive and therefore should be aware that it's "empty" and thus can write over everything.

Shaocaholica
Oct 29, 2002

Fig. 5E

NihilCredo posted:

Doing either a full write or a full erase on a shingled drive should be a very bad idea, no? Wouldn't it instantly put it into its super slow rewrite mode?

I didn't run any extensive tests on mine for that reason, but then again I don't have anything that can't really be replaced on it (or on any other drive, for that matter).

Wouldn't this have come up in designing the drive and its FW? Not having the ability to zero the drive and maintain performance would be kind of a show stopper, no?

Is it not standard practice to zero drives before putting them into production for enterprises? Catch the bad ones early.

The Slack Lagoon
Jun 17, 2008



I'm helping my friend put together a computer for video editing. He had suggested doing an external Thunderbolt 3 RAID for storage, which I feel like would be overpriced for the performance.

I figured we could maybe do an internal RAID with 3x 7200 RPM drives, but at the end of the day I feel like a high-capacity Samsung PRO drive might be faster and easier to set up to run the editing off of.

I figured a 250-500GB EVO drive for OS/programs, a 1TB PRO drive for working off of, and 1-2 more drives for bulk storage. How much of a difference does 7200 vs 5400 RPM make in transfer speeds for storage?

Is doing an internal RAID worth it?

I still need to find out what the max amount of raw footage he would work with at a time would be, but I feel like if it's under 1TB then an SSD for working off of would be the best bet, with a few more bulk storage drives.

PerrineClostermann
Dec 15, 2012

by FactsAreUseless
Wouldn't an SSD be better for this sort of thing than RAIDed rust?

Shaocaholica
Oct 29, 2002

Fig. 5E
Having fast external storage is going to be useful if he's going to be moving data around between sites/clients. Plus he can run off with it in a hurry if his house burns down or something. I'd only seriously commit to it if he's doing it for money or is really, really serious about it.

SSDs inside, of course.

RoadCrewWorker
Nov 19, 2007

camels aren't so great
Hello experts, I scanned the OP (though unsurprisingly it was last edited in 2012 and most links are dead or outdated) and the last page and did a bit of my own research, but I'm still uncertain, so I thought I'd just ask whether I'm even looking for the right thing.

Basically I had an old PC serving files on my home LAN (via HTTP server, network drive share, etc.) from a mixture of existing 1, 2, 3 or 4TB drives that I'm looking to replace with something more economical that can be set up and administered remotely (the PC, not the drives). The data is mostly noncritical stuff like compressed drive backup images or a history of database backup dumps that are infrequently written and even more rarely read, so transfer speed is not a factor at all. The few non-redundant parts that matter are backed up off-site anyway, so internal redundancy or a lost hard drive barely matters. I was looking at entry-level 4-bay NAS stuff from QNAP and Synology, but those apparently all (re)format any existing drives even for non-RAID setups, and I'd rather avoid the required dump/restore of all existing data for absolutely no benefit.

If I just want to hook up a variety of existing, non-uniform disks (more would be better, but 4 is fine) to my network, are dedicated multi-bay enclosures even the right place to look, or is a custom-built fanless PC the only way to go? Is there some obvious alternative ready-made solution that I just haven't stumbled on yet?

RoadCrewWorker fucked around with this message at 14:47 on Nov 6, 2016

Toshimo
Aug 23, 2012

He's outta line...

But he's right!
I got an e-mail ad from Newegg for the QNAP TS-431+ @ $265, which seemed pretty good, but it's only got 1 review, which is 1 egg. Anyone familiar with the product line have any input?

http://www.newegg.com/Product/Product.aspx?Item=N82E16822107243&ignorebbr=1

Twerk from Home
Jan 17, 2009

This avatar brought to you by the 'save our dead gay forums' foundation.
Has anybody set up Ceph storage in their home lab as a proof of concept? It looks pretty appealing, but the minimum scale where it starts to make sense is far larger than home NAS scale, more like 300TB+ clusters.
