|
Drat, I expected it to be way more than $160. So tempting.
|
# ? Feb 3, 2014 18:42 |
|
I just did a full rebuild of my Proliant on FreeBSD 10 and actually made use of jails correctly this time. Everything feels so clean and compartmentalized.
|
# ? Feb 3, 2014 22:53 |
|
Can anyone comment on the RAM requirements with XPEnology? I currently have a DS411j as my main storage, which I believe only has 256-512 MB of RAM to begin with. I am planning to use an Opteron 165 (dual core, 1.8 GHz from ~2007) with 2 GB of RAM as my second copy of storage. Before I get everything set up, I just want to make sure it's not in my best interest to buy a cheap i3 and use some extra DDR3 RAM to set up FreeNAS instead. XPEnology would simplify my replication process, but I'm sure FreeNAS would ultimately be the better performer. For reference, my Synology has 4x3TB Reds and my second copy would have 4x2TB Greens. Not ideal, but something is better than nothing, I guess.
|
# ? Feb 3, 2014 23:53 |
|
luigionlsd posted:Can anyone comment on the RAM requirements with XPEnology? I currently have a DS411j as my main storage, which I believe only has 256-512 MB of RAM to begin with. I am planning to use an Opteron 165 (dual core, 1.8 GHz from ~2007) with 2 GB of RAM as my second copy of storage. Before I get everything set up, I just want to make sure it's not in my best interest to buy a cheap i3 and use some extra DDR3 RAM to set up FreeNAS. Just like a regular Synology unit: it should run on 256 or 512 MB of RAM, but throw in as much as you've got, as it won't hurt. You can probably even migrate your DS411j drives, though make a backup before you try. Hook them up in the exact same order they are in your actual Synology. So drives 1, 2, 3, 4: install them in the same order in the XPEnology box. Synology Assistant will probably offer the option to migrate the data.
|
# ? Feb 4, 2014 00:16 |
|
Don Lapre posted:Just like a regular Synology unit: it should run on 256 or 512 MB of RAM, but throw in as much as you've got, as it won't hurt. Thanks for the tip, but I actually did that with my 4x2TB drives (previously used in the DS411j, replicated to the 3TBs one by one) and it prompted for a full erase, since the Synology DSM software did not match XPEnology. Trying to come up with a way to move my Reds to the XPEnology box without losing everything, and with minimal effort.
|
# ? Feb 4, 2014 00:41 |
|
NickPancakes posted:I just did a full rebuild of my Proliant on FreeBSD 10 and actually made use of jails correctly this time. Everything feels so clean and compartmentalized. Are you using a jail management utility, or did you build them by hand?
|
# ? Feb 4, 2014 02:13 |
|
Back to the DS380: is there an ECC Mini-ITX motherboard with 9+ SATA and SAS ports that fits? Or an 8+ port SAS card that's either under 6" long or 2.35" tall? Apparently the E3C224D4I-14S won't fit, and I'd rather not get one of those Atom boards.
|
# ? Feb 4, 2014 18:13 |
|
frunksock posted:Back to the DS380: is there an ECC Mini-ITX motherboard with 9+ SATA and SAS ports that fits? Or an 8+ port SAS card that's either under 6" long or 2.35" tall? Apparently the E3C224D4I-14S won't fit, and I'd rather not get one of those Atom boards. SuperMicro also makes a Haswell Xeon mITX board with a built-in LSI 2308, the X10SL7-F. No dual GbE NICs though, which is disappointing.
|
# ? Feb 4, 2014 23:02 |
|
Anyone who is familiar with XPEnology, quick question: I'm going to roll an XPEnology unit this week (the rest of the hardware is going to be delivered tomorrow), with a bunch of WD Greens to be used as independent drives: no JBOD or anything, just completely separate drives. I was told by Don Lapre that XPEnology is going to want to format each drive on original install, which is all good and fine. However, I have a 3TB Green that is completely full. So my idea is to add a couple of 2TB Greens and move the data over, but of course, after the fact I'd like to then insert said 3TB Green into the unit once the data is moved. Is adding additional drives to XPEnology as simple/plug-and-play as 'Add Drive -> It wants to format -> You format -> You now have X amount of additional space' when using independent drives, and not have to worry about it erasing everything on the rest of the drives?
|
# ? Feb 5, 2014 00:39 |
|
eightysixed posted:and not have to worry about it erasing everything on the rest of the drives?
|
# ? Feb 5, 2014 01:34 |
|
eightysixed posted:Anyone who is familiar with Xpenology, quick question: Put the two 2TB drives in. Set it up as an SHR array. Copy the data from your 3TB over to the new array. After that, when you put the 3TB in the system, it will let you add it to the array and format it. http://www.synology.com/en-global/support/tutorials/559
|
# ? Feb 5, 2014 01:55 |
|
GokieKS posted:SuperMicro also makes a Haswell Xeon mITX board with a built-in LSI 2308, the X10SL7-F. No dual GbE NICs though, which is disappointing. http://www.supermicro.com/products/motherboard/atom/x10/a1sai-2550f.cfm
|
# ? Feb 5, 2014 01:56 |
|
necrobobsledder posted:That board is micro ATX. I have never seen an Intel board with 4 DIMM sockets that is mini ITX. I saw there are some Avoton boards with CPUs that have slots for 4 SO-DIMMs, though. Supermicro has a board that sorta meets all these requirements... but you will be paying out the nose to the point where you'd rather just build a micro ATX system or drop back to mini ITX with an add-on SAS controller. Oh, oops, I completely goofed. Yeah, a standard mITX board just really doesn't have enough space to include a built-in HBA. And ASRock also makes an Avoton mITX board with 4 DIMM slots and a ton of extra SATA ports from Marvell SATA controllers (which Silverstone actually specifically recommends for the DS380), and at $370 I guess it's not terrible for what you get, but... it's still Avoton. frunksock: if I'm reading the DS380 component size restrictions right, you should be able to use a low-profile HBA with it. The IBM M1115 that I have here is about 57mm / 2.25 in tall, and I think most of the other common LSI 2008 rebrands should be around the same size. GokieKS fucked around with this message at 02:30 on Feb 5, 2014 |
# ? Feb 5, 2014 02:25 |
|
necrobobsledder posted:That board is micro ATX. I have never seen an Intel board with 4 DIMM sockets that is mini ITX. I saw there are some Avoton boards with CPUs that have slots for 4 SO-DIMMs, though. Supermicro has a board that sorta meets all these requirements... but you will be paying out the nose to the point where you'd rather just build a micro ATX system or drop back to mini ITX with an add-on SAS controller. GokieKS posted:Oh, oops, I completely goofed. Yeah, a standard mITX board just really doesn't have enough space to include a built-in HBA. And ASRock also makes an Avoton mITX board with 4 DIMM slots and a ton of extra SATA ports from Marvell SATA controllers (which Silverstone actually specifically recommends for the DS380), and at $370 I guess it's not terrible for what you get, but... it's still Avoton. Thanks guys. Yeah, I don't want an Atom. I have no problem with an add-on SAS card. Like I said, it just needs to be 8-port, and under 6" long or 2.35" tall if it's going to work in the DS380. So far none of the ones I've been randomly clicking on on Newegg meet those criteria; they're all really close, but a little too big. It doesn't help that I don't know the important differences between all the various LSI chipsets. Probably I'll just have to wait until the DS380 is in more hands and I can read some reports on what works.
|
# ? Feb 5, 2014 07:24 |
|
frunksock posted:Thanks guys. Yeah, I don't want an Atom. I have no problem with an add-on SAS card. Like I said, it just needs to be 8-port, and under 6" long or 2.35" tall if it's going to work in the DS380. So far none of the ones I've been randomly clicking on on Newegg meet those criteria; they're all really close, but a little too big. It doesn't help that I don't know the important differences between all the various LSI chipsets. Probably I'll just have to wait until the DS380 is in more hands and I can read some reports on what works. Someone earlier posted the ASRock mini ITX board that will fit a Xeon and has 4 DIMM slots and a SAS controller. It is the ultimate NAS board, but it costs 300 euros and is not available over here in Europe.
|
# ? Feb 5, 2014 08:14 |
|
Mr Shiny Pants posted:Someone earlier posted the ASRock mini ITX board that will fit a Xeon and has 4 DIMM slots and a SAS controller. It is the ultimate NAS board, but it costs 300 euros and is not available over here in Europe. Do you mean this one that I posted? It doesn't fit the DS380, at least not with the Silverstone SFX PSU: https://www.facebook.com/ASRockRack/posts/336285553178293
|
# ? Feb 5, 2014 08:18 |
|
frunksock posted:Do you mean this one that I posted? It doesn't fit the DS380, at least not with the Silverstone SFX PSU: You sure? That sucks...
|
# ? Feb 5, 2014 08:28 |
|
Mr Shiny Pants posted:You sure? That sucks... Horse's mouth.
|
# ? Feb 5, 2014 08:29 |
|
frunksock posted:Horse's mouth. You are right; just checked the Facebook page. That's a shame.
|
# ? Feb 5, 2014 08:33 |
|
frunksock posted:Thanks guys. Yeah, I don't want an Atom. I have no problem with an add-on SAS card. Like I said, it just needs to be 8 port, and under 6" long or 2.35" tall if it's going to work in the DS380. So far none of the ones I've been randomly clicking on on newegg meet those criteria, they're all really close but a little too big. It doesn't help that I don't know the important differences between all the various LSI chipsets. Probably I'll just have to wait until the DS380 is in more hands and I can read some reports on what works. Will this not fit? http://www.newegg.com/Product/Produ...0140205162347:s
|
# ? Feb 5, 2014 17:24 |
|
I've never heard anyone say anything good about HighPoint cards besides pricing, especially in reference to getting them to work on limited-HCL OSes like Solaris and FreeBSD. I suspect they'll work alright for someone running something like Windows Server 2012, but a lot of people going a little off the NAS deep end don't consider Windows for their NAS baseline OS (although I think it'd perhaps be better than ESXi in many cases given Hyper-V's flexibility and ease of use). frunksock posted:Thanks guys. Yeah, I don't want an Atom. I have no problem with an add-on SAS card. Like I said, it just needs to be 8-port, and under 6" long or 2.35" tall if it's going to work in the DS380. So far none of the ones I've been randomly clicking on on Newegg meet those criteria; they're all really close, but a little too big. It doesn't help that I don't know the important differences between all the various LSI chipsets. Probably I'll just have to wait until the DS380 is in more hands and I can read some reports on what works. Depending upon what you want out of your SAS controller and for how long, you probably only need to make sure that the controller supports drives larger than 2TB (which should last until we hit something like 12TB disks, which should be another 8 years before non-businesses can buy them reasonably). I'm a little suspicious that everyone breaking the 2TB limit is doing awful hacks internally to do so and is only opening up a few more TB of addressable space beyond 2TB.
|
# ? Feb 5, 2014 17:45 |
|
Don Lapre posted:Will this not fit? It'll fit, but it's only four ports. necrobobsledder posted:I don't think you understand that Bay Trail and Avoton (current-gen) Atoms are much beefier than before and are now basically yesteryear's Celerons, and perform somewhere in line with Core 2 Duos from about '09, which is powerful enough to transcode 1080p H.264 High Profile movies in realtime. They should handle most NAS + media transcoding + download overload duties for most households just fine. For a business I'd be running a small SAN or keeping crap on Amazon EBS volumes or something until I could afford an onsite SAN.
|
# ? Feb 5, 2014 18:22 |
|
frunksock posted:It'll fit, but it's only four ports. Sorry, picked the wrong one: http://www.newegg.com/Product/Product.aspx?Item=N82E16816115100
|
# ? Feb 5, 2014 19:53 |
|
Don Lapre posted:Put the two 2TB drives in. Set it up as an SHR array. Copy the data from your 3TB over to the new array. After that, when you put the 3TB in the system, it will let you add it to the array and format it. Okay, that's what I thought; I just wanted to make sure. Ugh, I was planning on using all independent drives, but after reading around, you can only use a Basic independent disk for one single drive, which kind of sucks. I have three 1TB Greens, two 2TB Greens, and a 3TB Green. I was just going to throw them all in there independently, but now it seems like I can't. I was trying to stay away from any type of RAID setup due to the consensus that RAID'ing WD Greens is a bad idea. I was even going to stay away from JBOD, because I don't want one drive to go bad and then lose 10TB worth of crap because one old 1TB Green died. That would be terrible. I planned on using WDIDLE3 on the drives, but other than that, what would you guys recommend for so many Green drives?
|
# ? Feb 5, 2014 21:24 |
|
Reds are way better, but if you are using a non-system-intensive RAID setup (I use unRAID, for example) I think it should be acceptable. SHR I think would be fine, since it's not really at the same level as ZFS or hardware RAID. Other posters, feel free to tell me if I'm wrong though; I'm just going off my anecdotal evidence, since unRAID and, I am guessing, SHR don't give a poo poo about TLER.
|
# ? Feb 5, 2014 21:37 |
|
eightysixed posted:Okay, that's what I thought, I just wanted to make sure. Greens are fine if you already have them and disable head parking. Just don't buy them in the future. Also, I don't understand what you are talking about when you say you don't want to use JBOD. How is JBOD any different than just throwing the drives in independently? Don Lapre fucked around with this message at 21:42 on Feb 5, 2014 |
# ? Feb 5, 2014 21:40 |
|
Don Lapre posted:Greens are fine if you already have them and disable head parking. Just don't buy them in the future. I agree. Just use WDIDLE3 and then you are fine doing SHR, which is software RAID 5. I'd suggest doing a pretty deep health check on all of them first, though. I also don't understand why you think you would lose 10TB of crap instead of 1TB of crap when a 1TB drive dies, if you aren't using any sort of redundancy.
|
# ? Feb 5, 2014 22:30 |
|
Don Lapre posted:I don't understand what you are talking about when you say you don't want to use JBOD. How is JBOD any different than just throwing the drives in independently? Sub Rosa posted:I also don't understand why you think you would lose 10TB of crap instead of 1TB of crap when a 1TB drive dies, if you aren't using any sort of redundancy. I guess I was misunderstanding something. I thought JBOD striped them into one logical drive with zero redundancy, so: (3x 1TB) + (2x 2TB) + 3TB = 10TB total, spanned together in a non-redundant array, whereby a single 1TB disk failure would degrade the whole array, causing complete data loss to the full 10TB, as they're spanned into one logical array but with zero redundancy/fault tolerance. Whereas if they were kept independent, and each drive only held its own capacity in its own volume, if one 1TB drive died I would only lose what's on that one single drive, as opposed to it killing the entire array. So, 6 logical volumes (one per disk), as opposed to 1 with data spread through the array. At least that's how I thought it worked. eightysixed fucked around with this message at 22:37 on Feb 5, 2014 |
# ? Feb 5, 2014 22:33 |
|
JBOD: Just a Bunch of Disks. No different than having a bunch of extra SATA ports on your motherboard.
|
# ? Feb 5, 2014 22:38 |
|
Well, the page eightysixed linked says this:quote:Note: JBOD does not support any redundancy. Any lost of any disk will result in volume destruction
|
# ? Feb 5, 2014 22:39 |
|
You are thinking about spanning. Someone correct me if I'm wrong, but no striping takes place in spanning, so the data on the other disks would be fine, or at least easily recoverable.
|
# ? Feb 5, 2014 22:41 |
|
Sub Rosa posted:You are thinking about spanning. Someone correct me if I'm wrong, but no striping takes place in spanning, so the data on the other disks would be fine, or at least easily recoverable. You didn't click the Synology link that I posted. It is spanning: quote:JBOD does not support any redundancy. Any lost of any disk will result in volume destruction And that is why I wanted to use 6 independent volumes (one per disk), as opposed to spanning them into one. If I lose a 1TB drive, I want to lose only that terabyte that was on the drive. I can't lose 10TB of data because one 1TB Green kicked the bucket. But I guess it's a moot point now, since apparently SHR is the way to go. Either way, JBOD is a bad idea.
|
# ? Feb 5, 2014 22:46 |
|
eightysixed posted:If I lose 1TB, I want to lose only that terabyte that was on the drive. I can't lose 10TB of data because one 1TB Green kicked the bucket.
|
# ? Feb 5, 2014 22:53 |
|
Then why does literally everything say one disk failure results in complete data loss over the entire array? Think of it this way, hypothetically: I have three 2TB drives in JBOD, containing a large 5TB file. If one drive fails, my file is gone for good. This should work in the very same sense with a lot of smaller files, because they will be spanned over the entire array, not confined to their one physical volume, no?
eightysixed fucked around with this message at 23:00 on Feb 5, 2014 |
# ? Feb 5, 2014 22:57 |
|
eightysixed posted:Then why does literally everything say one disk failure results in complete data loss over the entire array? Think of it this way, hypothetically: I have three 2TB drives in JBOD, containing a large 5TB file. If one drive fails, my file is gone for good. This should work in the very same sense with a lot of smaller files, because they will be spanned over the entire array, not confined to their one physical volume, no? Again, please someone correct me if I'm wrong, but when you are spanning, your file size is generally limited to the size that will fit on the largest disk. You wouldn't be able to store a 5TB file in the case you describe. Likewise, the small files would also each be on one particular disk. Striping is RAID 0; you aren't striping when you are only spanning.
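To make the spanning-vs-striping distinction being debated here concrete, here is a toy Python sketch. It is my own simplification for illustration only (real volume managers allocate blocks, not whole files, so a spanned file can in fact straddle a disk boundary, as the next post points out): with spanning, a dead disk takes out only the data that happened to land on it, while with RAID 0 striping every file has a piece on every disk.

```python
# Toy model of spanning vs. striping; whole-file placement is a
# simplification, since real filesystems allocate blocks, not files.

def span_place(files, disk_sizes):
    """Linear/span layout: each file lands whole on the first disk
    with room. Returns {disk_index: [file names]}."""
    free = list(disk_sizes)
    layout = {i: [] for i in range(len(disk_sizes))}
    for name, size in files:
        for i, space in enumerate(free):
            if size <= space:
                layout[i].append(name)
                free[i] -= size
                break
    return layout

def stripe_place(files, disk_sizes):
    """RAID 0-style striping: every file is chopped across all disks,
    so every disk holds a piece of every file."""
    return {i: [name for name, _ in files] for i in range(len(disk_sizes))}

files = [("movies", 800), ("photos", 600), ("backups", 900)]  # sizes in GB
disks = [1000, 1000, 1000]

span = span_place(files, disks)
stripe = stripe_place(files, disks)

# Suppose disk 1 dies: with spanning, only the files that landed on it
# are lost; with striping, a piece of every file is lost.
lost_span = set(span[1])      # -> {"photos"}
lost_stripe = set(stripe[1])  # -> {"movies", "photos", "backups"}
```

Under this model, losing one disk from a span costs you one disk's worth of files, while losing one disk from a stripe destroys the whole volume, which is the distinction the thread is circling around.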
|
# ? Feb 5, 2014 23:02 |
|
blocks are not files and files are not blocks.
|
# ? Feb 5, 2014 23:04 |
|
Maybe it says you will lose all the data on the non-failed disks because one of the disks you lose might be the one that has all the information about your 'array'? I'd assume you can still connect the disk to another machine and pull the data off it, if it's not just fragments of files.
|
# ? Feb 5, 2014 23:06 |
|
My only practical experience with spanning was with Windows Home Server, and from what I'm reading the rules that were true with how it did it may not be true for other ways spanning can be done.
|
# ? Feb 5, 2014 23:14 |
|
If you use SHR, a drive can fail and you lose nothing. If you use SHR-2, then two drives can fail. Just do that.
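For what it's worth, the usable space SHR would give eightysixed's mix of drives can be estimated with a layered-RAID model. This is a back-of-the-envelope sketch based on Synology's public description of SHR, not their actual code: slice all drives at the height of the smallest drive with capacity left, protect each slice like RAID 5 (or a mirror when only two drives remain), and repeat.

```python
def shr1_usable(drive_sizes_tb):
    """Estimate usable space of an SHR (single-redundancy) volume by
    layering RAID slices over mixed-size drives. Back-of-the-envelope
    model for illustration, not Synology's actual implementation."""
    remaining = list(drive_sizes_tb)
    usable = 0
    while True:
        alive = [r for r in remaining if r > 0]
        if len(alive) < 2:
            break  # a lone leftover chunk can't be made redundant, so it goes unused
        layer = min(alive)                   # slice height = smallest drive with space left
        usable += layer * (len(alive) - 1)   # one slice's worth of capacity goes to parity
        remaining = [max(r - layer, 0) for r in remaining]
    return usable

# eightysixed's drives: three 1TB, two 2TB, one 3TB (10TB raw)
print(shr1_usable([1, 1, 1, 2, 2, 3]))  # -> 7 (the last 1TB of the 3TB drive sits idle)
```

By this estimate he'd get about 7TB usable out of 10TB raw while surviving any single drive failure, versus 10TB fully exposed with spanning or independent volumes.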
|
# ? Feb 5, 2014 23:17 |
|
If all the drives appear as a single giant disk to the OS, then the loss of one drive is going to lose all the data on that drive. Whether or not you can recover the rest of the data is up to the filesystem and restore tools you use. You might lose it all. If they show up as independent drives, then you only lose what's on the one drive. As far as all of the server RAID controllers I've used go, JBOD and spanning are the same thing. If you want all of the drives passed through to the OS, you have to create one RAID 0 array per drive.
|
# ? Feb 5, 2014 23:23 |