emocrat
Feb 28, 2007
Sidewalk Technology
Thanks for the explanations, very helpful. Network performance is a good point to bring up; I can see that being a problem. I don't care too much about keeping something like this bleeding edge once it's working, so Xpenology might work.

One question: let's say I put together some hardware and run Xpenology or FreeNAS or whatever you guys recommend. Say down the road I need to add some space, what's the process like? Do I need to find a temporary home for the data, wipe the install out, add drives, and copy it back over? Or are there facilities that let me add in a couple of new drives and then have it rewrite the data across the pool? I guess the heart of the question is: should you really stretch to get the most you possibly can up front if your storage needs are going to grow, or is it reasonable to add on later?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

emocrat posted:

One question: let's say I put together some hardware and run Xpenology or FreeNAS or whatever you guys recommend. Say down the road I need to add some space, what's the process like? Do I need to find a temporary home for the data, wipe the install out, add drives, and copy it back over? Or are there facilities that let me add in a couple of new drives and then have it rewrite the data across the pool? I guess the heart of the question is: should you really stretch to get the most you possibly can up front if your storage needs are going to grow, or is it reasonable to add on later?
It depends entirely on what sort of file system you opted to go with. Xpenology with SHR lets you just add individual disks later, as far as I know, though I'm not sure if it'll let you upgrade from SHR to SHR-2 seamlessly, and I'd certainly want at least two-drive redundancy once I got above 6 total drives. ZFS on FreeNAS, on the other hand, won't let you just add an individual disk gracefully: you can replace all the disks with larger ones and grow the pool that way, or you have to add several disks at once as a whole new vdev. This is probably the biggest downside to ZFS, since it makes it prohibitive to go from, say, 6x4TB drives to 8x4TB drives when you just need a little more space but don't quite want to invest in going all the way up to 10 or 12x4TB. On the other hand, the data integrity/safety of ZFS is loving phenomenal.
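If it helps, the two ZFS growth paths look roughly like this from the command line (pool and disk names are placeholders, and this is a sketch rather than a recipe):
code:
# Path 1: swap each disk in the vdev for a bigger one, letting it resilver in
# between; the pool only grows once the last disk is done (and autoexpand is on).
zpool set autoexpand=on tank
zpool replace tank da0 da6      # repeat for every disk in the vdev
# Path 2: add a whole new vdev alongside the old one (here another 6-disk raidz2).
zpool add tank raidz2 da6 da7 da8 da9 da10 da11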

Furism
Feb 21, 2006

Live long and headbang
What's the difference between NAS4Free and FreeNAS? Can I install my own FreeBSD packages on either of them? I'd like to move my MariaDB server and lftp client to the custom NAS I'm building so I can get rid of both my Synology and my CentOS box at the same time.

Also, does anybody know if Syncthing is any good on either of these systems?

BlankSystemDaemon
Mar 13, 2009



Farmer Crack-rear end posted:

As for fitting more hard drives, just get a bigger case, no problem. :v:
A quick bit of head-math tells me that by adding in 11 LSI SAS 9201 controllers it's possible, barring money and power requirements, to get something like 76 raidz3 12-disk vdevs with around 5.45 EB of disk space in a 48U rack. That's pretty impressive.

Furism posted:

What's the difference between NAS4Free and FreeNAS? Can I install my own FreeBSD packages on either of them? I'd like to move my MariaDB server and lftp client to the custom NAS I'm building so I can get rid of both my Synology and my CentOS box at the same time.

Also, does anybody know if Syncthing is any good on either of these systems?
NAS4Free is a fork of an older version of FreeNAS with a different UI, but it's still being maintained and I believe the two still more or less have feature parity - so it comes down to which one you like the flavor of more. As for FreeBSD packages (either from pkg or ports), those can be installed in a jail (in fact, tftp - both as a client and a server daemon with UI configuration - is built into at least FreeNAS; syncthing is available as a package or port).
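For anyone who hasn't done it before, it's roughly this once you've created a jail in the FreeNAS web UI (jail name is a placeholder, and I'm going from memory on the rc knob):
code:
jexec myjail sh                # get a shell inside the jail from the FreeNAS host
pkg install syncthing          # or build it from ports under /usr/ports/net/syncthing
sysrc syncthing_enable=YES     # enable the service inside the jail
service syncthing start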

BlankSystemDaemon fucked around with this message at 07:56 on May 5, 2016

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
For philosophical / design differences between FreeNAS and NAS4Free: FreeNAS tries to be more of an all-in-one device that handles more than just file sharing, with things like UI support for jails and such. NAS4Free is oriented around just being a good NAS, with less hand-holding for advanced features. I don't remember NAS4Free adding encrypted backup areas to new RAIDZ vdevs the way FreeNAS does, but that's one example of FreeNAS making decisions without users knowing about them, which could potentially be a pain if you know what you're doing.

D. Ebdrup posted:

A quick bit of head-math tells me that by adding in 11 LSI SAS 9201 controllers it's possible, barring money and power requirements, to get something like 76 raidz3 12-disk vdevs with around 5.45 EB of disk space in a 48U rack. That's pretty impressive.
Forget money and power, I'm not sure how you could cool those hard drives effectively when they're stacked on top of each other in a 48U rack. The airflow in that case looks terrible unless the intent is to get the drives to conduct heat through each other as heat sinks or something like that.

Also, given that the smaller of the two cases is like $8000, I don't even want to know what the price of the max-density storage system is.

eightysixed
Sep 23, 2004

I always tell the truth. Even when I lie.

DrDork posted:

Xpenology with SHR lets you just add individual disks later, as far as I know, though I'm not sure if it'll let you upgrade from SHR to SHR-2 seamlessly, and I'd certainly want at least two-drive redundancy once I got above 6 total drives.
Unfortunately, unless something has changed, you cannot migrate from SHR-1 to SHR-2 without moving the data off the box, rebuilding, and then moving back (which I actually plan to do soon, and it's going to be a pain in the balls :( )

BlankSystemDaemon
Mar 13, 2009



necrobobsledder posted:

For philosophical / design differences between FreeNAS and NAS4Free: FreeNAS tries to be more of an all-in-one device that handles more than just file sharing, with things like UI support for jails and such. NAS4Free is oriented around just being a good NAS, with less hand-holding for advanced features. I don't remember NAS4Free adding encrypted backup areas to new RAIDZ vdevs the way FreeNAS does, but that's one example of FreeNAS making decisions without users knowing about them, which could potentially be a pain if you know what you're doing.
Forget money and power, I'm not sure how you could cool those hard drives effectively when they're stacked on top of each other in a 48U rack. The airflow in that case looks terrible unless the intent is to get the drives to conduct heat through each other as heat sinks or something like that.

Also, given that the smaller of the two cases is like $8000, I don't even want to know what the price of the max-density storage system is.
The disk chassis have some pretty high-CFM fans in them, iirc - they sound basically like a jet-engine most of the time.

In the meantime, I have found a better solution for the actual system, which gives a bit more disk space.
Additionally, I made a mistake in my calculations earlier: I found out that the SuperChassis can be daisy-chained to a certain extent (meaning you can use 6 SAS 9201-16e controllers instead of 11), changed the SuperStorage server, found out that 10TB disks are available, and decided it would probably be smart to plan on at least three hot-spares per vdev in each chassis. Here's another attempt:
code:
(((((((90/3)-9)*(21-3))*11)+(((15-3)-3)*11)+(36/3-6))/24)*10)-14%
With the overhead costs of parity, compression, sector size, padding and allocation (details of which can be found here) taken into account, that gives 1.528 EB in a 48U rack, across 24 21-disk raidz3 vdevs and 11 18-disk raidz3 vdevs with 3 hot-spares for each vdev. That leaves 6 disk bays empty in the SuperServer itself, which can be used for cache and log disks - say 2x 1TB MLC cache SSDs and 4x SLC log SSDs in raidz3 - as I see no other way of making efficient use of them, since 4 disks in raidz2 with two hot-spares doesn't make much of a difference. Does anyone know of a fool with too much money and a datacenter that'll let me pull 921A@230V from the wall?

This, of course, doesn't take into account the mind-numbing terror that is sliding out up to 89 spinning and operating disks to replace disk(s).

BlankSystemDaemon fucked around with this message at 18:44 on May 5, 2016

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Oh, with the 32-disk system that's not the worst thing to try to cool. I was looking at a top-loading case similar to what Backblaze custom-built as my nightmare scenario for chassis cooling (aside from being in a hot, dusty environment like an attic in Atlanta).

The number of controllers necessary depends primarily on how many ports are on the HBA and how much datapath redundancy you want in your DAS config. You can find monstrosities like these cards to reduce the number of cards necessary and to make daisy-chaining (and, more importantly, data path redundancy) more efficient in terms of port usage and PCI slots. Honestly, I think you may get a better response from the people in the SAN megathread than here.

ZFS metadata is compressed by default, so calculating the overhead is pretty iffy. I'd give a range of 1 - 5% of total available space as overhead (which is really good for most block sizes I've seen on other file systems). Effective disk space also depends a lot on the record size of the dataset and the block size used to build the vdev out. For example, I wrote out maybe 10k 200KB jpg keyframes from a video that were supposed to be about 250 MB on disk, and the effective disk usage in ZFS, according to what zdb showed me, was actually 4 GB.
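If you want to see where the space actually went on your own pool, something like this shows the logical-vs-allocated picture (pool and dataset names are made up):
code:
zfs get used,logicalused,compressratio,recordsize tank/frames
zdb -C tank | grep ashift      # sector-size shift the vdevs were created with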

Toast Museum
Dec 3, 2005

30% Iron Chef

Shaocaholica posted:

Overstock.com would be fine for warranty purposes for Synology right?

http://www.overstock.com/Electronic...OBA&searchidx=0

Haven't done much homework yet but any reason not to get the DS1515+ for home use? My initial thoughts are that it has lots of bays for expansion, internal power supply.

Heads up, I received my order today, and what they sent me was a DS1515, not a DS1515+. I'm waiting to hear back about exchanging it for the thing I actually ordered.

BlankSystemDaemon
Mar 13, 2009



necrobobsledder posted:

Oh, with the 32-disk system that's not the worst thing to try to cool. I was looking at a top-loading case similar to what Backblaze custom-built as my nightmare scenario for chassis cooling (aside from being in a hot, dusty environment like an attic in Atlanta).

The number of controllers necessary depends primarily on how many ports are on the HBA and how much datapath redundancy you want in your DAS config. You can find monstrosities like these cards to reduce the number of cards necessary and to make daisy-chaining (and, more importantly, data path redundancy) more efficient in terms of port usage and PCI slots. Honestly, I think you may get a better response from the people in the SAN megathread than here.

ZFS metadata is compressed by default, so calculating the overhead is pretty iffy. I'd give a range of 1 - 5% of total available space as overhead (which is really good for most block sizes I've seen on other file systems). Effective disk space also depends a lot on the record size of the dataset and the block size used to build the vdev out. For example, I wrote out maybe 10k 200KB jpg keyframes from a video that were supposed to be about 250 MB on disk, and the effective disk usage in ZFS, according to what zdb showed me, was actually 4 GB.
I've since edited my post a bit, but I think I've accounted for everything including overhead and making use of as many disks as possible.
That said, you won't be placing a rack like this at home anyhow, since you're looking at drawing 221040W from the wall to power the whole thing at max load (granted, you hopefully won't reach max load since you can power on each disk chassis individually before powering on the server itself, but still).

The MegaRAID 9280-24i4e appears to be a bad choice, since it's mostly made for internal drives and is a RAID rather than a JBOD card. The controllers I'd decided on are 5x SAS 9202-16e (daisy-chaining 2 disk chassis per controller, with 1 carrying just one disk chassis) and 1x SAS 9305-16i for the internal disks, since the motherboard itself apparently doesn't support JBOD (and cannot be flashed to IT mode, from what I can read).

As to ZFS metadata, padding for sectors, sector sizes and compression - that's why I linked the chart, which has a complete overview of what's most efficient for a given number of disks with a chosen sector size.

The 10k 200KB images taking up 4GB does sound a bit extreme though - are you sure you don't have 4K sectors enabled on that vdev?

My whole reason for posting this is just a bit of fun, since I'm going through some stuff which gives me a lot of spare time.
(You can read about it in my post history if you want, but I don't wanna derail the thread with it.)

BlankSystemDaemon fucked around with this message at 19:45 on May 5, 2016

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

necrobobsledder posted:

Oh, with the 32-disk system that's not the worst thing to try to cool. I was looking at a top-loading case similar to what Backblaze custom-built as my nightmare scenario for chassis cooling (aside from being in a hot, dusty environment like an attic in Atlanta).

The HP SL4500 is another example, with up to 60 hard drives in one case. But I guess hard drives aren't that hard to keep cool enough, considering that traditionally desktop cases had hardly any airflow over the hard drives, like in that Inwin Q500 I linked a few days ago. One of the reasons I used a separate hard drive stand was to get some airflow over them. Later I modified the case and put a 120mm fan in front of the hard drive bracket. Front intake fans that blow over the hard drives seem to be a relatively recent invention.

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness
Wasn't one of the notable things that came out of all that BackBlaze data that hard drive temperatures (up to a point, anyhow) didn't really matter anywhere near as much as people had always assumed? And that, conversely, humidity was a good bit more important than had previously been assumed?

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer
My job had some extra hardware that was up for grabs and I ended up taking it home, so I'm now the proud owner of, I think, a Storagetek 2540 Array.



I found this data sheet that seems to line up; it looks just like the picture above and is stocked with 12x300GB drives that match its list of compatible HDs (Seagate ST3300655SS).

Now I could just pull all the drives out, but it would be kind of cool to get it running and then do something dumb like install my entire Steam library at once, but I'm not sure what I need to do to make it actually usable. The data sheet lists power requirements of AC 515v / DC 17 A, so I'm assuming I can't just take it home and plug it into a couple of wall sockets.

It also has some kind of fiber optic connection (FC HBA?), and the data sheet claims compatibility with all HBAs supported in SAN 4.4.12. Is that just a PCI card I can buy and drop into a Windows 7 host, or would I need an additional network device to interface with a standard PC?

My end goal would be to have it come up as a single giant network drive, and I don't mind buying the fiber card and a few cables to make it work just as a science experiment, but if I need like $1000 in additional network/power gear (or the noise/heat would be unreasonable for an apartment) then I'll just yank all the drives and use them individually.

Thanks Ants
May 21, 2004

#essereFerrari


Should be fine to run off the wall - the 515 figure is the wattage, not the voltage.

https://docs.oracle.com/cd/E19508-01/820-0015-14/820-0015-14.pdf
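Back of the envelope, just to show the draw is nothing a normal outlet can't handle (mains voltage depends on where you live):
code:
echo '515 / 230' | bc -l       # ~2.2A on 230V mains
echo '515 / 120' | bc -l       # ~4.3A on 120V mains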

It's a 4Gbit Fibre Channel SAN so you'd need the relevant HBA etc.

Thanks Ants fucked around with this message at 21:41 on May 5, 2016

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer
Cool, thanks. I wasn't exactly sure what I had because I still don't see anything that actually says 2540 on it, but all the other descriptions of ports/hard drives line up so that has to be it. There's something perversely satisfying about misusing professional network gear for personal reasons that in no way justify the power and (market) expense...

Thanks Ants
May 21, 2004

#essereFerrari


If you snap a photo of the rear of it then it should be easy to figure out if you have a controller or just a disk shelf.

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Takes No Damage posted:

My job had some extra hardware that was up for grabs and I ended up taking it home, so I'm now the proud owner of, I think, a Storagetek 2540 Array.



I found this data sheet that seems to line up; it looks just like the picture above and is stocked with 12x300GB drives that match its list of compatible HDs (Seagate ST3300655SS).

Now I could just pull all the drives out, but it would be kind of cool to get it running and then do something dumb like install my entire Steam library at once, but I'm not sure what I need to do to make it actually usable. The data sheet lists power requirements of AC 515v / DC 17 A, so I'm assuming I can't just take it home and plug it into a couple of wall sockets.

It also has some kind of fiber optic connection (FC HBA?), and the data sheet claims compatibility with all HBAs supported in SAN 4.4.12. Is that just a PCI card I can buy and drop into a Windows 7 host, or would I need an additional network device to interface with a standard PC?

My end goal would be to have it come up as a single giant network drive, and I don't mind buying the fiber card and a few cables to make it work just as a science experiment, but if I need like $1000 in additional network/power gear (or the noise/heat would be unreasonable for an apartment) then I'll just yank all the drives and use them individually.

You make it sound like some fantastically huge storage device you'll never fill. It's basically a hugely power-hungry, loud 3TB drive.

Thanks Ants
May 21, 2004

#essereFerrari


Oh lol I didn't see the disk sizes.

It might be worth a bit as scrap, or you could try selling it if that fits in with the ethics of taking it free from work.

Internet Explorer
Jun 1, 2005





You couldn't pay me to take that.

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer
Yeah, there's not that much storage in it right now; I'm sure that's the other reason nobody else in the office wanted it besides 'gently caress off, it's heavy.' I think I could replace them all with up to 2TB drives for 24TB total, but of course that would be expensive. Anyway, here's the back of it:


Right now I'm trying to figure out what kind of PCI card I'd need; I'm seeing prices for 4Gb Fibre Channel cards ranging from 50bux to a couple thousand.

Thanks Ants
May 21, 2004

#essereFerrari


That's just a disk shelf and those are SAS (SFF-8088) ports.

Personally I'd toss it out.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
If the onboard controller / multiplexer on that shelf can support drives beyond 2TB in size, it's worth keeping around, provided the power usage can be brought down. Otherwise, it's garbage. I suspect it won't meet those criteria.

Thanks Ants
May 21, 2004

#essereFerrari


There's a high chance it can only use SAS disks (maybe dual ported SAS as well) so it's going to cost you a loving fortune to make useful anyway.

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer
So what I'm hearing is that I should pull the drives and 'recycle' the chassis? I'm OK with that, it would have been fun to get something like this working at home but it's not worth any significant expense to me.

IOwnCalculus
Apr 2, 2003





You might not even be able to recycle the chassis in any usable manner, unless you literally mean throw it in the scrap bin.

And it will be LOUD as gently caress. The only "quiet" rackmount gear is stuff that crazy homegamers like in this thread have modded with slower, quieter fans. When this poo poo is supposed to be sitting off in a secure datacenter where people should have limited time around it, noise is the least possible concern.

salted hash browns
Mar 26, 2007
ykrop

salted hash browns posted:

Any thoughts on the Synology DS-216+ vs the Synology DS-216j?

Biggest difference between the two seems to be that the DS-216+ uses a much more powerful Intel Celeron. I don't really plan on using this for much more than plain RAID-1 network-attached storage, so I feel like I could probably just use the DS-216j? Not sure if I'll regret not getting the beefier model down the road if the latter ends up being way slower.

Or maybe there is a QNAP equivalent that could be looked into?

DrDork
Dec 29, 2003
commanding officer of the Army of Dorkness

IOwnCalculus posted:

When this poo poo is supposed to be sitting off in a secure datacenter where people should have limited time around it, noise is the least possible concern.
I love datacenters where the server rooms have "hearing protection required" labels on the doors.

Thanks Ants
May 21, 2004

#essereFerrari


The noise threshold at which exposure of more than a few minutes causes hearing damage is surprisingly low. Pretty much anywhere with more than a couple of racks full of 1U servers, blades, whatever, would be loud enough to cause hearing damage if you made a habit of working in there. Considering how cheap the 3M disposable plugs are, and how easy it would be to get your employer to supply them - it's not worth the risk for them not to - everyone should have access to ear protection when working in those environments.

BlankSystemDaemon
Mar 13, 2009



salted hash browns posted:

Questions about which simple NAS to choose
QNAP and Synology are both well-liked for plain NAS devices that fit neatly into market-segmented categories divided up by price tag - and they are, to my knowledge, feature-comparable - so I suggest you go look up benchmarks for the devices you're considering (as that'll give you real-world performance expectations) and then choose whichever you think will fit your needs/wallet. And yes, if you plan on installing all the extra features (both official and unofficial), you're gonna want a beefier version.

Shaocaholica posted:

Anyone here have issues with Kodi not coming back from sleep? Either not waking, waking the display or coming back with no sound, etc. Basically a broken state after sleep.

I'll check the Kodi thread too. Just covering my bases.
Just noticed something related to this, and since you didn't get any response I wanted to let you know that there's at least one bug related to freezing after sleep that's been fixed in the latest version of libva-intel-driver, but it's seemingly limited to Intel Baytrail-based CPUs - so make sure you're up to date if you have one of those.

BlankSystemDaemon fucked around with this message at 15:16 on May 6, 2016

Thanks Ants
May 21, 2004

#essereFerrari


Heads up that the latest DSM release fixes the SMB permissions problem where you couldn't set them from Windows (you'd get an RPC failure and something about the machine not being on the domain). I hope the Venn diagram of "people running AD" and "people running Synology" has quite a small crossover, but I know they aren't uncommon to use as backup repositories.

Froist
Jun 6, 2004

I have a question that may get me chased out of this thread with pitchforks on principle. Here goes...

Current setup: A few years ago I set up a NAS in an HP N40L with 4x2TB data drives using ZFS, plus a smaller drive for the OS (Ubuntu). I've been using it with one of the drives for parity (so 6TB usable in one zpool), running Sickbeard etc., and it's been ticking over with zero hassle since the start. A bit more recently I got on board with the "RAID != backup" train, and bought 3x2TB external drives to occasionally rsync the data to in categories and (ideally, though I'm lax on it in practice) keep off-site.

Issue: Predictably, I'm beginning to creep close to my current storage limits. One of my externals (the one whose data category grows the fastest) is currently 96% full, and the whole zpool itself is 82% full.

Controversial plan: As there's no way for me to easily grow this pool without buying a full array of new disks, and I already have an external "occasional yet good enough" backup of the data, I'm thinking of throwing caution to the wind: switch to a non-redundant storage method so I "gain" another 2TB of internal space, and buy one more external drive to cover the shortfall in external backup space. Nothing I have is particularly irreplaceable (except around 400GB of raw GoPro footage, which I may throw into Glacier, but more likely I'd be better off losing it as I'll never look at it again anyway).

Desired features:
  • I'd like to have something where I could pool together data spanning multiple drives into single network shares, without linking the actual storage in a way that means one dead disk takes out all the data. I'm fine with some extra effort shuffling the data around when required to achieve this, but the key factor is that the total of some data I would like to keep "merged" will soon grow too big to be stored on a single disk.
  • SSH/MySQL server/Sickbeard etc
  • Ability to run cron jobs/custom scripts from the shell, mainly to run backup scripts to my externals (ExFAT/NTFS formatted)
  • To support the above, some form of scripting language (python etc)
  • Time Machine support would be a nice bonus but far from a requirement
  • Some sort of VPN server so that I don't have to expose different services to the web would be great, but I've been living without it this long

Is this the kind of thing I could achieve with Xpenology? Am I right in thinking Xpenology is Debian under the hood, so would allow extra tinkering/functionality beyond what is provided as stock/with plugins? I don't mind a chunk of time and effort setting this up in the short term.

Froist fucked around with this message at 16:26 on May 6, 2016

Takes No Damage
Nov 20, 2004

The most merciful thing in the world, I think, is the inability of the human mind to correlate all its contents. We live on a placid island of ignorance in the midst of black seas of infinity, and it was not meant that we should voyage far.


Grimey Drawer

IOwnCalculus posted:

You might not even be able to recycle the chassis in any usable manner, unless you literally mean throw it in the scrap bin.

And it will be LOUD as gently caress. The only "quiet" rackmount gear is stuff that crazy homegamers like in this thread have modded with slower, quieter fans. When this poo poo is supposed to be sitting off in a secure datacenter where people should have limited time around it, noise is the least possible concern.

Yeah, I may make a cursory post on Craigslist or something, but more likely it's just getting trashed. So now that I'll have 12 speedy drives lying around, I think I'll load a couple of them into my PC and set up a RAID 0 between them. Even 'just' 600GB is enough to load 10 or 12 big Steam games, and if it ever fails I can just toss another pair of drives in there and redownload the files.

What's the preferred method/software for setting up RAID 0 in Windows 7?

Thanks Ants
May 21, 2004

#essereFerrari


Just buy an SSD.

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
12 drives in a raid 0?

smax
Nov 9, 2009

Don Lapre posted:

12 drives in a raid 0?

12 drives of questionable age and use history in a RAID 0. What could possibly go wrong?

Skandranon
Sep 6, 2008
fucking stupid, dont listen to me

Takes No Damage posted:

Yeah, I may make a cursory post on Craigslist or something, but more likely it's just getting trashed. So now that I'll have 12 speedy drives lying around, I think I'll load a couple of them into my PC and set up a RAID 0 between them. Even 'just' 600GB is enough to load 10 or 12 big Steam games, and if it ever fails I can just toss another pair of drives in there and redownload the files.

What's the preferred method/software for setting up RAID 0 in Windows 7?

Just go into Disk Management and set up a striped volume. But an SSD will work much better.
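If you'd rather do it from a command prompt, the diskpart equivalent is roughly this (disk numbers are just examples, and it wipes both disks, so check the output of "list disk" first):
code:
diskpart
list disk
select disk 1
clean
convert dynamic
select disk 2
clean
convert dynamic
create volume stripe disk=1,2
format fs=ntfs quick
assign letter=S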

Don Lapre
Mar 28, 2001

If you're having problems you're either holding the phone wrong or you have tiny girl hands.
People use RAID 0 even less now than they did back when they never used it. SSDs are just that good.

Krailor
Nov 2, 2001
I'm only pretending to care
Taco Defender

Froist posted:

Desired features:
  • I'd like to have something where I could pool together data spanning multiple drives into single network shares, without linking the actual storage in a way that means one dead disk takes out all the data. I'm fine with some extra effort shuffling the data around when required to achieve this, but the key factor is that the total of some data I would like to keep "merged" will soon grow too big to be stored on a single disk.
  • SSH/MySQL server/Sickbeard etc
  • Ability to run cron jobs/custom scripts from the shell, mainly to run backup scripts to my externals (ExFAT/NTFS formatted)
  • To support the above, some form of scripting language (python etc)
  • Time Machine support would be a nice bonus but far from a requirement
  • Some sort of VPN server so that I don't have to expose different services to the web would be great, but I've been living without it this long

Is this the kind of thing I could achieve with Xpenology? Am I right in thinking Xpenology is Debian under the hood, so would allow extra tinkering/functionality beyond what is provided as stock/with plugins? I don't mind a chunk of time and effort setting this up in the short term.

I'm using a combo of DrivePool and SnapRAID to do this in Windows.

I use DrivePool, with no redundancy, to just combine all of my storage disks into one big drive. I then have an 8TB internal and an 8TB external that are dedicated parity drives for SnapRAID. This setup gives me two-drive redundancy, and when I need more storage I don't have to worry about matching drive sizes or leaving unused space on a drive; I just buy whatever size drive I want (up to 8TB), add it to the pool, and I get that much more space.

I know that SnapRAID supports Linux, but I'm not sure what would replace DrivePool if you wanted to use Linux instead of Windows.
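For anyone curious, the SnapRAID side of that is basically one config file plus a couple of scheduled commands - roughly something like this (drive letters and pool folder names are made up):
code:
# snapraid.conf
parity   D:\snapraid.parity
2-parity E:\snapraid.2-parity
content  C:\snapraid\snapraid.content
content  F:\snapraid.content
content  G:\snapraid.content
data d1  F:\PoolPart.example1\
data d2  G:\PoolPart.example2\
exclude *.tmp

# run on a schedule afterwards:
# snapraid sync     - update parity after files change
# snapraid scrub    - periodically verify data against parity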

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

smax posted:

12 drives of questionable age and use history in a RAID 0. What could possibly go wrong?
Not posting the results on SA, of course.

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

Anything going for less than the IBM M1015 nowadays that's worth getting?


Also, if I've got a ZFS pool of 4TB drives on a SAS1068E controller (which only supports up to 2TB drives) and then move that pool to a controller that supports 4TB drives... what do I need to do to make use of that newly available space? Will ZFS do it automatically?
