|
Thanks for the explanations, very helpful. Network performance is a good point to bring up, I can see that being a problem. I don't care too much about keeping something like this bleeding edge once it's working, so Xpenology might work. One question: let's say I put together some hardware and run Xpenology or FreeNAS or whatever you guys recommend. Say down the road I need to add some space, what's the process like? Do I need to find a temporary home for the data, wipe the install out, add drives and copy it back over? Or are there facilities that allow me to add in a couple of new drives and then have it rewrite the data across the pool? I guess the heart of the question is: should you really stretch to get the most you possibly can up front if your storage needs are going to grow, or is it reasonable to add on later?
|
# ? May 5, 2016 01:46 |
|
|
emocrat posted:One question: let's say I put together some hardware and run Xpenology or FreeNAS or whatever you guys recommend. Say down the road I need to add some space, what's the process like? Do I need to find a temporary home for the data, wipe the install out, add drives and copy it back over? Or are there facilities that allow me to add in a couple of new drives and then have it rewrite the data across the pool? I guess the heart of the question is: should you really stretch to get the most you possibly can up front if your storage needs are going to grow, or is it reasonable to add on later?
|
# ? May 5, 2016 02:22 |
|
What's the difference between Nas4Free and FreeNAS? Can I install my own FreeBSD packages on either of them? I'd like to move my MariaDB server and lftp client to the custom NAS I'm building so I can get rid of both my Synology and CentOS box at the same time. Also does anybody know if Syncthing is any good on either of these systems?
|
# ? May 5, 2016 06:09 |
Farmer Crack-rear end posted:As for fitting more hard drives, just get a bigger case, no problem.

Furism posted:What's the difference between Nas4Free and FreeNAS? Can I install my own FreeBSD packages on either of them? I'd like to move my MariaDB server and lftp client to the custom NAS I'm building so I can get rid of both my Synology and CentOS box at the same time.

BlankSystemDaemon fucked around with this message at 07:56 on May 5, 2016 |
|
# ? May 5, 2016 06:15 |
|
For philosophical/design differences between FreeNAS and NAS4Free: FreeNAS tries to be more of an all-in-one device that handles more than just file sharing, with UI support for jails and such. NAS4Free is oriented around just being a NAS, with less hand-holding for advanced features. I don't remember NAS4Free adding encrypted backup areas to new RAIDZ vdevs the way FreeNAS does, but that's one example of FreeNAS making more decisions without users knowing, which could potentially be a pain if you know what you're doing.

D. Ebdrup posted:A quick bit of head-math tells me that by adding in 11 LSI SAS 9201s it's possible, barring money and power requirements, to get something like 76 raidz3 12-disk vdevs with around 5.45EB of disk space in a 48U rack.

That's pretty impressive. Also, given that the smaller of the two cases is like $8000, I don't even want to know what the price of the max-density storage system is.
|
# ? May 5, 2016 15:01 |
|
DrDork posted:Xpenology with SHR lets you just add on individual disks later, as far as I know, though I'm not sure if it'll let you upgrade from SHR to SHR-2 seamlessly, and I'd certainly want at least 2 drive redundancy once I got above 6 total drives.
|
# ? May 5, 2016 15:29 |
necrobobsledder posted:For philosophical / design differences between FreeNAS and NAS4Free, FreeNAS tries to be more of an all-in-one device that handles more than just file sharing abilities such as UI support for jails and such. NAS4Free is oriented around being better at just being a NAS and less hand holding of advanced features. I don't remember NAS4Free adding encrypted backup areas to new RAIDZ vdevs like FreeNAS, but that's one example of making more decisions without users knowing that would potentially be a pain if you know what you're doing.

In the meantime, I have found a better solution for the actual system which gives a bit more disk space. Additionally, I made a mistake in my calculations earlier. I found out that the SuperChassis can be daisy-chained to a certain extent (meaning you can use 6 SAS 9201-16e cards instead of 11), changed the SuperStorage server, found out that 10TB disks are available, and figured it would probably be smart to plan on at least three hot-spares per vdev in each chassis. Here's another attempt: code:
This, of course, doesn't take into account the mind-numbing terror that is sliding out up to 89 spinning and operating disks to replace disk(s). BlankSystemDaemon fucked around with this message at 18:44 on May 5, 2016 |
|
# ? May 5, 2016 15:50 |
|
Oh, with the 32 disk system that's not the worst thing to try to cool. I was looking at the top-loading case similar to what Backblaze custom-built as my nightmare scenario of chassis cooling (aside from being in a hot, dusty environment like an attic in Atlanta). The number of controllers necessary depends primarily upon how many ports are on the HBA and how much datapath redundancy you want in your DAS config. You can find monstrosities like these cards to reduce the number of cards necessary and to make daisy chaining (and more importantly, data path redundancy) more efficient in port usage and PCI slots. I think you may get a better response from people that are in the SAN megathread than here honestly. ZFS metadata is compressed by default so calculating the overhead is pretty iffy. I'd give a range of 1 - 5% total available space overhead (which is really good for most block sizes I've seen on other file systems). Effective disk space matters a lot based upon record size of the vdev and the block size used to build the vdev out, too. For example, I wrote out maybe 10k 200KB jpg keyframes from a video that were supposed to be about 250 MB of disk and the effective disk usage of it in ZFS according to what zdb showed me was actually 4 GB.
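If you want a rough number rather than "pretty iffy", the raidz allocation rule itself is easy to sketch. This is a back-of-envelope estimate mirroring how OpenZFS allocates a block on a raidz vdev (data sectors, plus per-stripe parity, padded to a multiple of nparity+1 sectors); it ignores compression and metadata, so treat the result as an estimate only:

```shell
ashift=12                      # 2^12 = 4KiB sectors
sector=$((1 << ashift))
ndisks=12                      # disks in the raidz vdev
nparity=3                      # raidz3
bytes=$((128 * 1024))          # one 128KiB record

# Data sectors needed for the record (integer ceiling division)
d=$(( (bytes + sector - 1) / sector ))
# Parity sectors: nparity per stripe of (ndisks - nparity) data sectors
parity=$(( (d + (ndisks - nparity) - 1) / (ndisks - nparity) * nparity ))
total=$(( d + parity ))
# Pad the allocation up to a multiple of (nparity + 1) sectors
total=$(( total + ( (nparity + 1) - total % (nparity + 1) ) % (nparity + 1) ))

echo "record uses $(( total * sector )) bytes on disk ($total sectors)"
```

For this example (128KiB records, 12-disk raidz3, 4KiB sectors) the rule gives 44 sectors for 32 sectors of data, i.e. 37.5% overhead; shrink `bytes` to a single sector and the overhead balloons, which is one reason small-block workloads look so bad on wide raidz vdevs.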
|
# ? May 5, 2016 17:14 |
|
Shaocaholica posted:Overstock.com would be fine for warranty purposes for Synology right?

Heads up, I received my order today, and what they sent me was a DS1515, not a DS1515+. I'm waiting to hear back about exchanging it for the thing I actually ordered.
|
# ? May 5, 2016 18:20 |
necrobobsledder posted:Oh, with the 32 disk system that's not the worst thing to try to cool. I was looking at the top-loading case similar to what Backblaze custom-built as my nightmare scenario of chassis cooling (aside from being in a hot, dusty environment like an attic in Atlanta).

That said, you won't be placing a rack like this at home anyhow, since you're looking at drawing 221040W from the wall to power the whole thing at max load (granted, you hopefully won't reach max load since you can power on each disk chassis individually before powering on the server itself, but still). The MegaRAID 9280-24i4e appears to be a bad choice since it's mostly made for internal drives and is a RAID rather than a JBOD card - the controllers I'd decided upon are 5x SAS 9202-16e (daisy-chaining 2 disk chassis per controller, with 1 carrying just one disk chassis) and 1x SAS 9305-16i for the internal disks, since the motherboard itself apparently doesn't support JBOD (and cannot be flashed to IT mode, from what I can read). As to ZFS metadata, padding for sectors, sector sizes and compression - that's why I linked the chart, which has a complete overview of what's most efficient for a given number of disks with a chosen sector size. The 10k 200KB images taking up 4GB does sound a bit extreme though - are you sure you don't have 4K sectors enabled on that vdev? My whole reason for posting this is just a bit of fun, since I'm going through stuff which gives me a lot of spare time. (You can read about it in my post history if you want, but I don't wanna derail the thread with it.) BlankSystemDaemon fucked around with this message at 19:45 on May 5, 2016 |
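Checking the "4K sectors" question is a one-liner. This is a sketch that needs a live pool, so it can't be pasted blindly; `tank` is a hypothetical pool name and the device names are placeholders:

```shell
# ashift is fixed per vdev at creation time: ashift=9 means 512-byte
# sectors, ashift=12 means 4KiB sectors. zdb prints it per vdev.
zdb -C tank | grep ashift

# On OpenZFS you can force 4KiB sectors when creating a pool:
# zpool create -o ashift=12 tank raidz2 da0 da1 da2 da3 da4 da5
```

If the vdev came up with ashift=12 but the files are tiny, each file still burns whole 4KiB sectors plus raidz parity and padding, which is one way 10k small files can balloon well past their nominal size.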
|
# ? May 5, 2016 18:50 |
|
necrobobsledder posted:Oh, with the 32 disk system that's not the worst thing to try to cool. I was looking at the top-loading case similar to what Backblaze custom-built as my nightmare scenario of chassis cooling (aside from being in a hot, dusty environment like an attic in Atlanta).

The HP SL4500 is another example, with up to 60 hard drives in one case. But I guess hard drives aren't that hard to keep cool enough, considering that traditionally desktop cases had hardly any airflow over the hard drives, like in that Inwin Q500 I linked a few days ago. One of the reasons I used a separate hard drive stand was to get some airflow over them. Later I modified the case and put a 120mm fan in front of the hard drive bracket. Front intake fans that blow over the hard drives seem to be a relatively recent invention.
|
# ? May 5, 2016 20:26 |
|
Wasn't one of the notable things that came out of all that BackBlaze data that hard drive temperatures (up to a point, anyhow) didn't really matter anywhere near as much as people had always assumed? And that, conversely, humidity was a good bit more important than had previously been assumed?
|
# ? May 5, 2016 20:51 |
|
My job had some extra hardware that was up for grabs and I ended up taking it home, so I'm now the proud owner of, I think, a Storagetek 2540 Array. I found this data sheet that seems to line up; it looks just like the picture above and is stocked with 12x300GB drives that match a list of compatible HDs (Seagate ST3300655SS). Now I could just pull all the drives out, but it would be kind of cool to get it running and then do something dumb like install my entire Steam library at once, but I'm not sure what I need to do to make it actually usable. The data sheet lists power requirements of AC 515v / DC 17 A, so I'm assuming I can't just take it home and plug it into a couple of wall sockets. It also has some kind of fiber optic connection (FC HBA?) that the data sheet claims compatibility with all HBAs supported in SAN 4.4.12. Is that just a PCI card I can buy and drop into a Windows 7 host, or would I need an additional network device to interface with a standard PC? My end goal would be to have it come up as a single giant network drive, and I don't mind buying the fiber card and a few cables to make it work just as a science experiment, but if I need like $1000 in additional network/power gear (or the noise/heat would be unreasonable for an apartment) then I'll just yank all the drives and use them individually.
|
# ? May 5, 2016 21:14 |
|
Should be fine to run off the wall, the 515 figure is the wattage. https://docs.oracle.com/cd/E19508-01/820-0015-14/820-0015-14.pdf It's a 4Gbit Fibre Channel SAN so you'd need the relevant HBA etc. Thanks Ants fucked around with this message at 21:41 on May 5, 2016 |
# ? May 5, 2016 21:34 |
|
Cool, thanks. I wasn't exactly sure what I had because I still don't see anything that actually says 2540 on it, but all the other descriptions of ports/hard drives line up so that has to be it. There's something perversely satisfying about misusing professional network gear for personal reasons that in no way justify the power and (market) expense...
|
# ? May 5, 2016 22:20 |
|
If you snap a photo of the rear of it then it should be easy to figure out if you have a controller or just a disk shelf.
|
# ? May 5, 2016 22:29 |
|
Takes No Damage posted:My job had some extra hardware that was up for grabs and I ended up taking it home, so I'm now the proud owner of, I think, a Storagetek 2540 Array.

You make it sound like some fantastically huge storage device you'll never fill. It's basically a hugely power-hungry, loud 3TB drive.
|
# ? May 5, 2016 22:29 |
|
Oh lol I didn't see the disk sizes. It might be worth a bit as scrap, or you could try selling it if that fits in with the ethics of taking it free from work.
|
# ? May 5, 2016 22:31 |
|
You couldn't pay me to take that.
|
# ? May 5, 2016 22:45 |
|
Yeah, there's not that much storage in it right now; I'm sure that's the other reason nobody else in the office wanted it, besides 'gently caress off it's heavy.' I think I could replace them all with up to 2TB drives for 24TB total, but of course that would be expensive. Anyway, here's the back of it: Right now I'm trying to figure out what kind of PCI card I'd need; I'm seeing prices on 4Gb Fibre cards ranging from 50bux to a couple thousand.
|
# ? May 5, 2016 23:15 |
|
That's just a disk shelf and those are SAS (SFF-8088) ports. Personally I'd toss it out.
|
# ? May 5, 2016 23:17 |
|
If the onboard controller / multiplexer on that shelf can support drives beyond 2TB in size, it's worth keeping around if the power usage can be brought down. Otherwise, it's garbage. I suspect it won't meet that criterion.
|
# ? May 5, 2016 23:40 |
|
There's a high chance it can only use SAS disks (maybe dual ported SAS as well) so it's going to cost you a loving fortune to make useful anyway.
|
# ? May 5, 2016 23:45 |
|
So what I'm hearing is that I should pull the drives and 'recycle' the chassis? I'm OK with that, it would have been fun to get something like this working at home but it's not worth any significant expense to me.
|
# ? May 6, 2016 00:13 |
|
You might not even be able to recycle the chassis in any usable manner, unless you literally mean throw it in the scrap bin. And it will be LOUD as gently caress. The only "quiet" rackmount gear is stuff that crazy homegamers like in this thread have modded with slower, quieter fans. When this poo poo is supposed to be sitting off in a secure datacenter where people should have limited time around it, noise is the least possible concern.
|
# ? May 6, 2016 00:33 |
|
salted hash browns posted:Any thoughts on the Synology DS-216+ vs the Synology DS-216j? Or maybe there is a QNAP equivalent that could be looked into?
|
# ? May 6, 2016 01:27 |
|
IOwnCalculus posted:When this poo poo is supposed to be sitting off in a secure datacenter where people should have limited time around it, noise is the least possible concern.
|
# ? May 6, 2016 01:44 |
|
The noise threshold at which more than a few minutes of exposure causes hearing damage is surprisingly low. Pretty much anywhere with more than a couple of racks full of 1U servers, blades, whatever would be loud enough to cause hearing damage if you made a habit of working in there. Considering how cheap the 3M disposable plugs are, and how easy it would be to get your employer to supply them since it's not worth the risk, everyone should have access to ear protection when working in those environments.
|
# ? May 6, 2016 01:49 |
salted hash browns posted:Questions about which simple NAS to choose

Shaocaholica posted:Anyone here have issues with Kodi not coming back from sleep? Either not waking, waking the display or coming back with no sound, etc. Basically a broken state after sleep.

BlankSystemDaemon fucked around with this message at 15:16 on May 6, 2016 |
|
# ? May 6, 2016 15:14 |
|
Heads up that the latest DSM release fixes the SMB permissions problem where you couldn't set them from Windows (you'd get an RPC failure and something about the machine not being on the domain). I hope the Venn diagram of "people running AD" and "people running Synology" has quite a small overlap, but I know they aren't uncommon to use as backup repositories.
|
# ? May 6, 2016 15:19 |
|
I have a question that may get me chased out of this thread with pitchforks on principle. Here goes...

Current setup: A few years ago I set up a NAS in an HP N40L with 4x2TB data drives using ZFS, plus a smaller drive for the OS (Ubuntu). I've been using it with one of the drives for parity (so 6TB usable in one zpool), running Sickbeard etc, and it's been ticking over with zero hassle since the start. A bit more recently I got on board with the "raid != backup" train and bought 3x2TB external drives to occasionally rsync the data to in categories and (ideally, though I'm lax on it in practice) keep off-site.

Issue: Predictably, I'm beginning to creep close to my current storage limits. One of my externals (and the one for which the data category grows the fastest) is currently 96% full, and the whole zpool itself is 82% full.

Controversial plan: As there's no way for me to easily grow this pool without buying a full array of new disks, and I already have an external "occasional yet good enough" backup of the data, I'm thinking of throwing caution to the wind: switch to a non-redundant storage method so I "gain" another 2TB of internal space, and buy one more external drive to cover the shortfall in external backup space. Nothing I have is particularly irreplaceable (except around 400GB of raw GoPro footage, which I may throw into Glacier, but more likely I'd be better off losing as I'll never look at it again anyway).

Desired features:
Is this the kind of thing I could achieve with Xpenology? Am I right in thinking Xpenology is Debian under the hood, so would allow extra tinkering/functionality beyond what is provided as stock/with plugins? I don't mind a chunk of time and effort setting this up in the short term. Froist fucked around with this message at 16:26 on May 6, 2016 |
# ? May 6, 2016 16:23 |
|
IOwnCalculus posted:You might not even be able to recycle the chassis in any usable manner, unless you literally mean throw it in the scrap bin.

Yeah, I may make a cursory post on Craigslist or something, but more likely it's just getting trashed. So now that I'll have 12 speedy drives laying around, I think I'll load a couple of them into my PC and set up a RAID 0 between them. Even 'just' 600GB is enough to load 10 or 12 big Steam games, and if it ever fails I can just toss another pair of drives in there and redownload the files. What's the preferred method/software for setting up RAID 0 in Windows 7?
|
# ? May 6, 2016 19:45 |
|
Just buy an SSD.
|
# ? May 6, 2016 19:57 |
|
12 drives in a raid 0?
|
# ? May 6, 2016 20:23 |
|
Don Lapre posted:12 drives in a raid 0?

12 drives of questionable age and use history in a RAID 0. What could possibly go wrong?
|
# ? May 6, 2016 20:41 |
|
Takes No Damage posted:Yeah I may make a cursory post on Craigslist or something but more likely it's just getting trashed. So now that I'll have 12 speedy drives laying around, I think I'll load a couple of them into my PC and set up a RAID 0 between them. Even 'just' 600GB is enough to load 10 or 12 big Steam games, and if it ever fails I can just toss another pair of drives in there and redownload the files.

Just go into Disk Management and set up a striped volume. But an SSD will work much better.
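If you'd rather script it than click through Disk Management, the same thing can be done with diskpart. A sketch only: the disk numbers 1 and 2 are assumptions (check `list disk` first), and this wipes both disks:

```
REM Save as stripe.txt and run: diskpart /s stripe.txt
list disk
select disk 1
clean
convert dynamic
select disk 2
clean
convert dynamic
create volume stripe disk=1,2
format fs=ntfs quick label=SteamScratch
assign letter=S
```

Striped (dynamic) volumes are software RAID 0, so the array dies with either disk, which for a redownloadable Steam library is an acceptable trade.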
|
# ? May 6, 2016 20:49 |
|
People use raid 0 even less now than they did when they never used it. SSD's are just that good.
|
# ? May 6, 2016 20:58 |
|
Froist posted:Desired features:

I'm using a combo of DrivePool and SnapRAID to do this in Windows. I use DrivePool, with no redundancy, to just combine all of my storage disks into one big drive. I then have an 8TB internal and an 8TB external that are dedicated parity drives for SnapRAID. This setup gives me two-drive redundancy, and when I need more storage I don't have to worry about matching drive sizes or leaving unused space on a drive; I just buy whatever size drive I want (up to 8TB), add it to the pool, and I get that much more space. I know that SnapRAID supports Linux, but I'm not sure what would replace DrivePool if you wanted to use Linux instead of Windows.
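The SnapRAID side of a setup like that is just a config file. A hypothetical fragment showing the shape of it; all paths and disk names are placeholders, and the rule that parity drives must be at least as large as the biggest data drive is why the 8TB pair caps data disk size:

```
# snapraid.conf (sketch): two dedicated parity drives gives
# two-drive redundancy, mirroring the setup described above.
parity /mnt/parity1/snapraid.parity
2-parity /mnt/parity2/snapraid.parity

# Keep multiple copies of the content (metadata) file on different disks.
content /var/snapraid/content
content /mnt/disk1/snapraid.content

# Data disks can be any mix of sizes up to the parity drive size.
data d1 /mnt/disk1/
data d2 /mnt/disk2/
```

Unlike realtime RAID, parity is only as fresh as the last `snapraid sync`, which fits the "bulk media, occasionally updated" use case but not live data.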
|
# ? May 6, 2016 20:59 |
|
smax posted:12 drives of questionable age and use history in a RAID 0. What could possibly go wrong?
|
# ? May 6, 2016 21:11 |
|
|
|
Anything going for less than the IBM M1015 nowadays that's worth getting? Also, if I've got a zfs pool of 4TB drives on a SAS1068E controller (which only supports up to 2TB drives) and then move that pool to a controller that supports 4TB drives...what do I need to do to make use of that newly-available space? Will ZFS do it automatically?
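On the second question: ZFS won't grab the newly visible space unless you ask it to. A sketch needing a live pool, with `tank` and `da0` as hypothetical names:

```shell
# autoexpand is off by default; with it on, vdevs grow automatically
# once every device in the vdev exposes the larger size.
zpool set autoexpand=on tank

# Without the property, expand each device by hand after the move:
# zpool online -e tank da0

# The EXPANDSZ column shows space the pool hasn't claimed yet.
zpool list tank
```

Either way the pool only grows once all disks in the vdev are visible at full size on the new controller.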
|
# ? May 6, 2016 21:23 |