necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Ethereal posted:

So, running FreeNAS 24/7 is definitely going to up my power bill considerably unless I go get a more power efficient processor/power supply. I was really hoping it had some intelligent states so it could run in really low power until a device requests a service...Oh well.
Actually, you could probably use an older Celeron (make it 64-bit!) and underclock it for pretty reasonable power consumption. I suspect it'd be more power efficient than even an Atom CPU, and since I'm not about to get FreeNAS working on an ARM CPU, it should also be the most software-compatible option. Most of the power draw of a file server comes from the drives, but with ZFS you need a bit more CPU and memory horsepower. I would get a CF card, an appropriate SATA/IDE adapter, and grab lower-RPM or "energy efficient" drives.

spencer for hire posted:

I was looking at an Antec Mini P180 to use as a case but I'm looking for any suggestions for a smaller one. I only see myself needing 4 drives, swapping them with bigger ones once they are filled. Finally, I have no clue when building your own how much noise/power usage to expect.
Someone posted their experience with a Chenbro case that would meet your needs.


necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

pipingfiend posted:

Thinking of building my own raidz box. Quick question would i be able to run things like torrentflux or sabnzbd on solaris?
Branded Zones / BrandZ in OpenSolaris will help you here if you can't get them to work immediately on Solaris.
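For reference, spinning up an lx branded zone goes roughly along these lines - this is from memory, so the template name and install source may differ between releases:

zonecfg -z torrents
zonecfg:torrents> create -t SUNWlx                 # lx (Linux) branded zone template
zonecfg:torrents> set zonepath=/zones/torrents
zonecfg:torrents> commit
zonecfg:torrents> exit
zoneadm -z torrents install -d /tmp/centos_fs_image.tar.gz   # install from a Linux filesystem tarball
zoneadm -z torrents boot
zlogin torrents                                    # then set up torrentflux / sabnzbd inside the zone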

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

WickedMetalHead posted:

OpenSolaris Requires a Keyboard attached? At all times?
I know that on most Sun UltraSPARC workstations, unplugging the keyboard will turn the system off or reboot it. I thought that was a hardware thing, but perhaps it's handled in software by the OS.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
You can generally figure out what sort of libraries the vendor used by running ldd on a binary library or executable. However, you can statically link the libraries into your stuff with some Makefile / gcc hackery. This, however, can massively bloat the size of the binary generated, so I wouldn't advise this for most embedded devices like a NAS OS.
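For example (paths here are made up, just to show the idea):

ldd /raid/bin/some_vendor_daemon          # lists the shared libraries it was linked against
gcc -static -o mytool mytool.o -lpthread  # static linking pulls everything into one (much bigger) binary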

arm-elf-gcc targets the generic ARM instruction set, but you will likely need to specify the actual CPU generation. In your case, you should be able to get away with a CPU from the ARM7 family, so check that a flag like -mcpu=arm7tdmi shows up during compilation. You should not need to downgrade your version of gcc. In fact, I know I've been able to cross compile for an ARM7 target starting from gcc 3.2, because I was the one on my team who ported the build scripts to use newer gcc versions for certain optimizations we needed.
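So in the build output you'd expect to see lines roughly like these (file names are placeholders):

arm-elf-gcc -mcpu=arm7tdmi -O2 -c foo.c -o foo.o
arm-elf-gcc -mcpu=arm7tdmi -o firmware.elf foo.o bar.o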

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Triikan posted:

Any good state-side suppliers of VIA-processor ITX boards? Newegg only has a few jetways.
http://www.logicsupply.com/ is worth a shot, but they're pretty overpriced from what I can see. Their selection, on the other hand, is pretty decent.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The problem I had with a lot of those supposed "microATX" towers is that they're actually about the same size as a lot of regular ATX towers.

The 8-bay VIA drive tower would be nice if I could use my own motherboard, but I don't think we'll find out which OEM VIA used unless someone actually buys one, looks up the manufacturer labeled on the sucker, and calls up some dude in China / Taiwan.

I'm planning on picking up a cheapass, smallish case and waiting for a more ideal file server enclosure. If nothing comes up, I'll just get a SATA port multiplier in the 5.25" external bays as I fill up the bays and rotate out drives.

adorai posted:

what's wrong with http://www.antec.com/Believe_it/product.php?id=Mw== ?
The problem with that case for me (I suspect it's the same for Animaniac) is that the 3 external bays are not immediately next to each other, so there's no way to put in a 5x port-multiplier enclosure. There are a lot of internal bays available in that case, but on top of probably being overkill for a file server, its dimensions are fairly large for a "microATX" case, like I mentioned above. The case I'm going to pick up is a full 6" shorter than that one with a similar number of drive bays.

Cooling shouldn't be as big a concern for home users buying those low-power drives, because they draw a lot less power and as a result put out less heat. If you've got 15k RPM SAS drives like in a business environment, then cooling is a much bigger concern and you should be looking at a serious case with serious cooling... or just buy the dumbass 4U file server like you're supposed to.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
One option would be to buy the 1.5TB drives now, replace the existing 1TB drives with them later, and then expand the zpool. You could then create another zpool out of the freed-up 1TB drives. I prefer multiple zpools over a single, giant zpool for home servers.
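The swap itself would go something like this - device names are placeholders, and the autoexpand property only exists on newer builds (older pools pick up the new size after an export/import):

zpool replace tank c1t2d0 c1t5d0    # swap one 1TB disk for a 1.5TB disk
zpool status tank                   # wait for the resilver to finish, then repeat for each disk
zpool set autoexpand=on tank        # let the pool grow once every disk in the vdev is 1.5TB
zpool create tank2 raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0   # reuse the freed-up 1TB drives in a second pool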

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
You're looking at either zpool import or zpool replace depending upon how screwed your hardware / configuration is. I hate to be that guy, but the zpool man page is more helpful than I could be.
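Off the top of my head, it's along these lines (pool and device names made up):

zpool import               # scan attached disks for pools that can be imported
zpool import -f tank       # force-import a pool that wasn't cleanly exported (e.g. the old box died)
zpool replace tank c1t3d0  # resilver onto a new disk in place of a dead one (add the new device name if it's in a different slot)
zpool status -v tank       # watch the resilver and see what, if anything, got corrupted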

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

DLCinferno posted:

They're basically the same drive except for one software difference that cripples the Black drives in a RAID setup. Unfortunately, WD charges a shitload for the privilege of that one feature, which is totally bogus in my opinion. Here's why you don't want to use them in an array:
Psst, you can use a software utility from WD to enable TLER on even the Green drives. The primary reason to buy the Black series is the warranty and the far superior hardware (extra drive head motors, apparently), not the firmware features. I think of the extra warranty as insurance: for maybe $15 more I get nearly double the coverage, which makes it great as an external backup drive for me.

You'll want the RAID edition drives if you're willing to pay WD to run that utility on the drives for you. Otherwise, I'd stick with Black for performance and warranty. I personally use Green drives at home because I just sell my primary storage drives before the warranties are up and it's worked alright for me.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
They would be using three 500GB platters instead of just two like the WD10EADS, so that could potentially mean a higher mechanical failure rate and certainly a bit more power draw and heat output, but it shouldn't be any different from previous 3-platter drives. The WD10EACS, which has three 333GB platters, does seem to be racking up reallocated sectors (the first step toward dead, to me) among the drives I have.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

KennyG posted:

The green power is variable speed drive designed for home use in regular desktops. The RE4-GP is designed for Large scale data storage (raid) and has much better durability, fault tolerance, and response in raid arrays. Put a bunch of Green desktop drives in a raid array and the array will fail in days if not sooner.
While the characteristics of the RAID drives are basically true, saying that Green drives in an array = fail is not exactly accurate, considering there are an awful lot of people out there doing it for home use now (it's implied in this thread that we're building a NAS for home use, right?). I'm quite aware of the performance and physical characteristics of Green drives and have 4 of them in a RAID5 with TLER on, and the individual drives are running several degrees cooler in a tiny NAS than my 1TB Black drive is in a big, beefy case. I'm more worried about the maintenance guys here screwing up and flooding my place than about an array this size dying. It's been running fine for about 2 weeks straight now, streaming movies and music and copying multi-gigabyte files. It's not like I'm using these in a SAN or something.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Just enable TLER on the drives if you're using them in a RAID 1/10/01/5/6 setup; what you end up with is basically a slower set of drives with less-than-bulletproof per-disk reliability and a shorter warranty, versus the drives that cost almost twice as much.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Ranma posted:

Right now, I have 2 1 gig hard drives, a 500 gig, and a 750 gig formatted as spanned dynamic disks. For heat reasons, I am replacing them all with 3 1 gig WD Green drives. I am slightly worried because I have read reports of the green drives not working in RAID (and not working with streaming HD movies). Will I be able to use these drives?
Whoever said you can't stream HD movies with these must be doing something wrong, because I get at least 30MB/s off my 4x1TB WD Green array, which is plenty for 1080p streaming.

WD Green drives in RAID was addressed earlier in the thread. You'll need to run WDTLER.EXE on them to enable TLER (Google for it), but it is not necessary to do this to get a functional RAID - just safer. TLER only becomes an issue once a drive is getting flaky to begin with (there's a quick way to check a drive's current setting below the list). Compared to Green, the WD RAID / Black drives get you:
- better performance
- higher power consumption
- better mechanical hardware
- 2 more years of warranty
- Black still requires the WDTLER utility in a RAID1+ system
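The quick check I mentioned: a reasonably recent smartmontools build can read (and on drives that allow it, set) the error recovery timeout without a DOS boot disk. Values are in tenths of a second, so 70 = 7 seconds:

smartctl -l scterc /dev/sda        # show the current read/write error recovery timers
smartctl -l scterc,70,70 /dev/sda  # try to set 7-second timeouts (drives that lock TLER out will just refuse)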

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Yeah, this is why I don't trust anything other than decent RAID5 hardware and proven software RAID (md, ZFS, etc.). There are a lot of complexities in storage systems, and nobody's going to solve them with a product aimed at end users first and at technical capability a distant second.

Scratch2k posted:

The moral of the story is that despite WHS's claims to the contrary, it is not a foolproof backup/data security solution, but then, for the average home user, what is?
The thing to note here is that the average user is actually not going to be using WHS in the first place. If someone has the know-how to look at WHS and other solutions, they're only a couple steps from looking at serious solutions like RAID5 and ZFS. This is where there's a lot of room available for someone to write a solid commercial NAS distribution for Solaris or FreeBSD that implements software RAID5 or ZFS. Until then, I think everyone will have to wait for btrfs on Linux.

The average user will almost certainly lose a lot of data even while using all these funky solutions they're paying good money for.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Fitret posted:

I thought that a lot of people were moving towards Atom because they still had great speed while vastly reducing the amount of power consumed - very important for always-on servers. Is there something I'm missing here?
The problem with Atom is the chipsets it's been paired with so far. It's either nVidia's Ion or a really old, power-hungry chipset from Intel's basement bargain bin. The limited expandability of motherboards so far makes Atom boards terrible for an array that will take more than maybe 4 drives, and it's even worse if you're planning on trying to add a TV card on top of that (no go - 1 PCI slot for every Atom mobo I've seen). I'm planning on going with a Supermicro motherboard and multiple SATA cards for my setup later because frankly, I don't trust those new 2TB drives that much and plan on sticking with lots of 1TB drives.

I still think that for a home file server, a low-watt Intel or suitable AMD CPU with a low-power chipset is better than an Atom board - the power consumption is maybe a difference of 10w for a lot more computing power. If you want to move up into higher throughput network I/O, you're going to need a bit more than what an Atom can do while juggling some other important tasks.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
While QNAP's NASes are really nice, I don't think they're really worth the price premium. I wound up looking at their stuff before I settled on a Thecus N4100Pro (albeit out of an emergency need when a machine died and I needed a storage server FAST).

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Well, if you're running ESXi, you have to be running it on bare metal frankly, so FreeBSD would have to be a guest OS with direct I/O access to all the drives in the RAID.

I'd normally go with a hardware setup that runs OpenSolaris and then run Solaris Zones and/or VirtualBox VMs under it for anything else I need. The catch is that virtualization doesn't automagically get you support for hardware that your host OS doesn't support (with some exceptions - my USB SmartCard for work works under 32-bit Windows XP, so I run a VM just to access my VPN).

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

network.guy posted:

If I get a NAS like a QNAP TS-239 Pro can I access it from multiple PCs as though it's just another folder on the PC? If so, is it possible to set it up so that one can read and write to the NAS, but not delete (or only as an admin)?
Yeah, just set up SMB or NFS shares with ACLs / user accounts.
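On a generic Linux/Samba box the plumbing underneath is just user accounts plus filesystem permissions - the QNAP web UI wraps the same ideas, so treat this as a rough sketch with made-up names:

useradd -M -s /sbin/nologin mediauser   # account that can only hit the share, no shell login
smbpasswd -a mediauser                  # give it an SMB password
chmod 1770 /share/media                 # sticky bit: group members can add files, but only a file's owner or the admin can delete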

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

riichiee posted:

Then, when the file server is accessed, the machine wakes up, serves the files, stays on for another 30 minutes (or whatever) then goes back to sleep.

Is this possible?
Sounds like you're looking for Wake-on-LAN. You'll need a NIC and BIOS that support it.
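Once the NIC and BIOS are set up, it's roughly this on the Linux side (interface and MAC are made up):

ethtool -s eth0 wol g            # on the server: enable magic-packet wake on the NIC
wakeonlan 00:11:22:33:44:55      # from another box on the LAN: send the magic packet to wake it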

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Putting multiple slices from the same disk into a vdev wouldn't be so terrible if you're using an SSD. However, I really doubt anyone is bothering to run ZFS on, and migrate lots of data to, an array of SSDs at present.

japtor posted:

I was going to suggest something homemade as well, albeit not entirely so like this one
I'm going to suggest getting something that'll fit your needs basically forever, like this beefy 4U case that can hold a large array of drives.

Although if someone is making a homemade drive rack setup, I suppose $300 to physically protect and secure their data is out of their budget, and just getting a bigass n-bay tower case on the cheap is more appropriate. However, if you're going to buy a new case and a few 5-in-3 enclosures, the cost will be very similar to the Norco case. Then again, few people are willing to put a 4U rack in their bedrooms...

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Methylethylaldehyde posted:

The one thing people never mentioned on the reviews of this case is how god damned LOUD it is. The thing has 4 80mm delta screamers in it. I can hear it through a closed door, and can't hear other people talking in the next room if I'm near it.
Umm, every other review on Newegg says the fans are loud as gently caress. But really, if it's that loud I'll have to reconsider it for placement in the garage in a 12U enclosed rack. I'm not sure what a good set of fans to replace the Deltas would be, considering airflow is a big, big concern with so many drives. Maybe Panaflos could work? The 4220's primary advantage over the 4020 is really the SFF-8087 backplane, so that should theoretically help with airflow anyway. I kinda want to go with a 4U over a crammed 2U partly because I want to keep using the PSU I've already got (redundant PSUs seem like overkill for a home network).

Methylethylaldehyde posted:

On the other hand, once I figured out how the gently caress opensolaris worked, I was able to set up xVM,
Umm, last I remember, on OpenSolaris you really want VirtualBox and shouldn't be using xVM anymore, since development on it has stopped in favor of VirtualBox. VirtualBox is pretty close to VMware Workstation in features and reliability, which is to say pretty solid.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

H110Hawk posted:

It's a server chassis made for a datacenter. "Quiet" isn't on their mind. Delta fans are awesome fans, but I wouldn't put this in my house. Garage, sure.
The thing is, it's a really low-quality case that I wouldn't put in a datacenter unless I were doing some crazy low-cost storage project, and there's a bedroom next to my garage, so I wouldn't want something so loud you can easily hear it even outside the garage. I just wish there were a better option for 12+ drives in a box for home users besides some rackmounted monstrosity like this, but given how much data it potentially holds, I'll have to treat it like a professional datacenter setup.

supster posted:

I've read that it's possible to do with virtualbox - I'm not sure about vmware but I wouldn't be surprised if it was possible as well. Are there any other issues that come to mind? Running an entire server that's just running unraid seems so silly.
Actually, I've run direct / raw disk access on VirtualBox before, but I wouldn't do it in a professional environment; I believe it's only just come out of experimental status. VMware has supported "raw disk" access in just about every version of their product line since about 2007. It's kinda inaccurate to call it raw, though, because you honestly can't get pure, direct access to a hard drive unless you are the OS (various protection modes and all), but enough can be done with wrappers to present a transparent view of the drive to a virtual machine that suffices for most tasks.
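The VirtualBox side of it is a VMDK wrapper pointed at the physical disk, something like this (device path and VM/controller names are made up, and the host user needs access to the raw device):

VBoxManage internalcommands createrawvmdk -filename ~/disk1.vmdk -rawdisk /dev/sdb
VBoxManage storageattach "unRAID" --storagectl "SATA" --port 0 --device 0 --type hdd --medium ~/disk1.vmdk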

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

H110Hawk posted:

Out of curiosity, what specifically is low quality about that case? It appears to be metal, metal, metal, some plastic, some Delta fans, and a power button? Plus, if your walls are so hollow why not insulate them? It will add value to your home and you will sleep better at night.
The thickness of the metal (merely "adequate") along with the flimsiness of the drive trays was mentioned in a lot of reviews I've read, along with the backplanes having the molex connectors hot-glued on there poorly. I know I shouldn't expect much for a $340 4u case, but I want to feel like my data is physically sound at least. The poster I quoted said that he could hear the fans in the next room over, and given how sounds bounce in a garage, it'd be awful. Also, I rent my place and it's a guest bedroom next to the garage. The master bedroom is two levels above the garage and is pretty great minus the small shower / toilet room.

quote:

And if you don't want a datacenter-esque case, why look at rackmounts at all? I'm sure you could build a really bitchin full tower with 12+ drives in it.
Partly because a tower isn't as portable as a rack with all my equipment in it. I'm considering one of these for both computer and recording equipment. I suspect I'll set up a home recording area down the road alongside the small datacenter, and a couple of racks are ideal for my desired architecture (wallplates & thin clients embedded in every room). I like to keep my tech separate from my actual living space, really.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
That's a lot of Linux isos... but where's the space for the CPU / motherboard in that mess of hard drives? That'd be the one and only rack case you'd ever need for hard drives at home.

H110Hawk posted:

If you want your data to be "physically sound" then it's time to get spendy on your RAID card (LSI only, really) and motherboard. If you want cheap or quiet you're looking at non-rack mount, and not dense.
I'm going to use ZFS on Solaris, so there's no need for anything more than a reliable controller - no hardware RAID required. I'd be more concerned about the rack ears or rails snapping off under the weight of 60 lbs of drives because they're so flimsy. And like I said above, I'm probably not going to need more than 12 drives, so I highly doubt I'm looking at SAS expanders with daisy-chained drives and other scale-out architectures.

I was just hoping for the quality of trays I see on the HP Proliant servers I've racked up before.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
You don't need a powerful CPU for most media server use cases. Unless you're doing transcoding, like with PS3 Media Server, you're best off getting a moderately powerful 64-bit CPU with a low TDP for an OpenSolaris file server. ZFS benefits from fast CPUs, sure, but it's probably not worth it for even most small-office uses of a file server. I'd say no more than an E5200 (or the Xeon equivalent) would ever be necessary for a ZFS file server unless it's running at quite high loads.

My E5200 setup only uses about 50w idle and can churn through video if I need it to, while I doubt most Xeons can get total system wattage that low. Power consumption may matter more or less depending on your region or household.

devilmouse posted:

Unbuffered or buffered RAM?
Buffered / registered RAM is not as necessary if you're using ZFS specifically for data integrity, but it can help smooth out the irregularities most people see with a weird batch of consumer non-ECC, non-registered RAM. For servers, ECC + registered is normally not even a question because of how important each server is. The only reason I'm going to use it later is that if I'm going to trust 24TB of data to a server, it had better be stinkin' reliable as gently caress.

FISHMANPET posted:

Also, I would ditch your lame CPU for an AMD Phenom II X2 or X3 or X4. You can get chips that are cheaper and have more cores when you go AMD.
Have you seen the server boards available for AMD CPUs? Terrible selection of NICs, and most assume you're going to be running multiple physical CPUs. If you go with consumer-level motherboards, you'll almost certainly need to buy an extra NIC or two. So for the sake of a convenient motherboard layout that doesn't eat two extra PCI slots, I'd go with a cheaper Intel CPU on a server-class motherboard. This is really personal preference as far as I'm concerned, not about cost (maybe a $100 difference, which I would hope doesn't matter if you're building such a large array in the first place).

Also, part of why you should get a server motherboard with multiple PCIe x8/x16 slots is that most consumer-class motherboards split the lanes into a single x8 on the backend (because even SLI setups don't max out 8 lanes) and incur a multiplexing penalty (the cost of the extra lanes exceeds that of the multiplexing logic). This means that if you use two PCIe SAS / SATA cards with 8+ drives each, you don't get the maximum possible bandwidth. It doesn't matter if you only use one card, but it can be a problem with two or more cards running at high load.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

JerseyMonkey posted:

If I remove a 1TB drive and replace it with a 1.5TB drive, allow the RAID1 to rebuild, then pop in the second 1.5TB drive, will the RAID1 expand to 1.5TB or do I need to back up the 1TB RAID1 setup, and recreate the RAID1 setup with the (2) 1.5TB drives?
In most RAID systems, you will have to run a RAID expansion / grow command that rebuilds the array at the new size. I don't think the DNS-323 does this automatically, so you'll have to find the feature in the web utility. Either that or issue the right mdadm commands (doesn't the DNS-323 use md RAID?).
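If you end up doing it by hand over SSH instead, the md + ext2/3 version is roughly (md device and partition names assumed):

mdadm --manage /dev/md0 --add /dev/sdb1   # add the second 1.5TB disk into the mirror and let it rebuild
mdadm --grow /dev/md0 --size=max          # once both members are 1.5TB, grow the mirror to the new size
resize2fs /dev/md0                        # then grow the filesystem sitting on top of it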

jeeves posted:

I've read that WD Green drives are not good for raids, since they spin down their heads after a few seconds of idleness. Are most 'green' drives like this? Like the Samsung line as well?
This is head-parking behavior that's fairly common among these drives, but so long as the drive has something accessing it, has good RAID or filesystem-level caching, or is actually unused most of the time, it won't matter much.
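You can keep an eye on how often a drive is actually parking with smartctl; if Load_Cycle_Count is climbing by thousands a week, the parking is excessive:

smartctl -A /dev/sda | grep -i load_cycle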

The cost of replacing drives periodically due to excessive head parking (if it happens really frequently like some people report on hardware RAID setups, it'll definitely matter, as opposed to maybe 5x the normal rate) may outweigh the power and drive cost savings of going Green. For a large array (10+ drives) at home it might make more sense, but for smaller 3-4 drive arrays, I'd say the power savings are minimal - 9 watts or so. Also consider that Green drives only carry a 3-year warranty (my butt's been saved plenty by 5-year warranties). You might as well swap out an incandescent lightbulb for an easier way to save power; my 3-level townhouse uses about 70w with all the lights on at full blast, measured at the meter, and cost me about $50 in bulbs.

I'd encourage green drive use for software RAID arrays like unRAID, mdadm, and ZFS. If you're going serious, you'll need a serious budget to go with it, and Green drives won't suffice anymore. It's up to you to decide how serious you are and to spec accordingly.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
My array growth strategy is to add zpools built from separate batches - 5x1TB, 4x1.5TB, and later 6x2TB - hooked onto a couple of JBOD cards with 8 ports each. When a drive goes down in a raidz1 array, I'll either buy a new drive to replace it or migrate the data on the existing array to another zpool and sell the old, obsolete drives. I don't expect to need more than 10TB of storage at a time and only need about 6TB at present, so this should work fine.
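Each batch just becomes its own raidz1 pool, roughly like this (Solaris-style device names are placeholders):

zpool create tank1 raidz1 c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0   # the 5x1TB batch
zpool create tank2 raidz1 c3t0d0 c3t1d0 c3t2d0 c3t3d0          # the 4x1.5TB batch later on
zpool status                                                   # keep an eye on both pools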

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Cyberdud posted:

Am i better off just telling them that for this kind of money we won't be getting a reliable enterprise solution?
I've been in your sort of position for most of my career now and here's what I've done thus far:

1. Write down a detailed analysis of the pros & cons of the technical options given the budget, along with the ideal solution and one step down from ideal. Show that the budget does not allow you to meet critical business requirements and that they're basically throwing money away, on top of taking a great risk, by not budgeting for the right solution
2. Bite the bullet and do a hosed up, crazy implementation on a shoestring budget that gets you incredible amounts of praise
3. Look for a new job because the organization will likely not be in business for much longer, and it's probably stressful working there anyway
4. Not give a poo poo about the job beyond the bare minimum and looking good enough for your next job
5. Quit for a better job and pray that they don't (de)evolve into your previous job, ironically enough, through business success

Combat Pretzel posted:

I'm interested, because I'm looking at a parachute option, since I'm growing weary of Oracle's bullshit related to OpenSolaris.
You know what the hilarious thing is? Even BTRFS is partly an Oracle investment in a sense, since the main developer works for Oracle. The real difference is that BTRFS is more compatible with the GPL than ZFS ever will be, and Linux seems to be what's in line with Oracle's long-term product strategy, so I would expect OpenSolaris to be more or less dead by 2013 anyway and ZFS by 2016, since enterprise customers take a while to be weaned off anything. But I don't actually care so long as I have a solid storage backend.

My guess is BTRFS will be production-ready by the end of 2011. I've been reading occasional forum posts by BTRFS early adopters, and they've basically had to be married to the developers to get it working so far. The disk format is still not stable, so I wouldn't use it for anything long-term either. It took a few years for Linux to be usable for anything beyond hobby computing, and perhaps if we're lucky something workable will be out before 2010.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I don't think it'd be very cost-effective to buy that many drives unless you expect to need that much storage in the next 12 months or so. I'm only going to have 9 disks in two separate zpools soon, and even then I'm going to phase them out with larger disks as the drives die off. To me, it's the best compromise between the WHS one-drive-at-a-time method and the ZFS "replace the entire array's drives to expand" approach.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Samsung drives are the ones that don't carry those settings over across reboots, while Seagates do write them through to EEPROM, I believe. Western Digital is the biggest downer, though, basically disabling most of these options outright on their Green drives from what I can tell. Too many people are building home megaraids (and oftentimes failing), so WD is trying to upsell them to Black drives. I'm willing to deal with the shortcomings of Green drives for now, until I actually need good IOPS and bandwidth.

For low-power RAID setups on most OSes where cost is a strong consideration, I'd recommend Samsung drives, since the other manufacturers seem to have firmware oddities (reliability is about par across all of them, except for old Maxtor drives and maybe some LaCie refurbished crap). If cost isn't a primary factor, then you're looking at Black or RAID Edition drives from Western Digital regardless of whether you're using hardware or software RAID.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Combat Pretzel posted:

Hmmm. The only way I see this working is if the intro is at the beginning of the show, as frame and pixel accurate copies across the episodes. Any minor variance in pixels or the episode starting at a different frame will generate different bitstreams and influence bitrate allocation.
Yeah - with DVD rips, for example, the intros would have to be byte-identical to get de-duped. De-dupe isn't perfect because the data itself isn't quite perfect either. I highly doubt it'd de-dupe much of anything if you have analog or even HDTV broadcast rips. Stuff like this is why MKV files have ways to reference other files and join them together - the primary use case is to save space on TV episode (read: my animes) intros and outros.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
The only way out of paying that much is basically to build your own NAS. For home users, the cost curve bottoms out somewhere around the Drobo; anywhere above that it's probably cheaper (even counting power) to just build your own. Mini-ITX motherboards and Atom processors have helped make low-power DIY NAS boxes a reality. Heck, even my E5200 with an nForce chipset and 5 drives only draws about 50w at idle, and I never have to worry about whether it'll support X software feature. Of course, building a NAS box is a bit more specific and complex in some respects than a simple DIY desktop, so it's certainly possible that one of those $1k+ boxes meant for small businesses is more cost-effective for you. It's just that I can build a great OpenSolaris box with 6TB of drives for about $800 when a NAS from an OK manufacturer costs $1k without any drives.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Whatever gains you could have made in power efficiency by choosing a lower-power chipset and a smaller-footprint motherboard were lost with that 6-core CPU. There's maybe a 4-5 watt spread among boards with the same chipset, and contrary to intuition, my experience with Intel-based and nVidia-based chipsets is that the nVidia-based one actually beat the Intel by about 1w despite having a much better GPU. That's crazy considering how much extra power a GeForce 9400M should take over a GMA 4500, which has utter crap performance, even at idle.

You're better off putting your attention into an appropriately sized PSU (if you only draw 80w normally, even the best 1000w 80 Plus unit will waste roughly 15w more than a 400w PSU with a slightly lower efficiency rating) than into the chipset, or into a better RAID controller that supports staggered spin-up to help avoid power spikes.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Be careful about which DAAP server implementation they're using. My Thecus N4100Pro uses the obsolete mt-daapd, and with the minuscule amount of RAM on there, it dies at only about 4,000 songs. Newer NASes should be using Firefly Media Server, I think, and I dunno how well it does compared to mt-daapd. The BitTorrent client on my Thecus is limited to 4 torrents as well (it uses rtorrent on the backend), which is lame. There's always some limitation to the features on these SOHO NASes I've found, which is why I'm planning on getting rid of mine ASAP and going with a homebrew box.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
I think what he really means is hackable, because you're now going into unsupported territory. Some people want to hack up a pre-built box because it's cheaper, sorta like how people approach flashing a router with 3rd-party firmware, or what people used to do before these NASes with the Linksys NSLU2 or whatever.

If you want to hack up these boxes for your own software featureset, you're probably best off with the WHS boxes like the HP or Acer models because everything else uses low-power non-x86 chips or an AMD Geode or something else that's cheap and bare minimum in the SOHO market, which could require a recompile of binaries you try to bring onboard. There's also the issue that many can only run headless and can be pretty easily bricked unless you can get access to the onboard flash without the mobo or manage to attach a keyboard to it.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Scuttle_SE posted:

Can I mess with TLER on newer WD drives? I seem to recall WD locking that down or something...
WDXXEADS drives have it disabled after those dates (you must get EACS or older EADS drives, most of which have been taken out of supply lines by now). Regardless, if you're going to be using consumer drives in a regular RAID array, you MUST NOT USE WDXXEARS DRIVES. They have WDTLER and WDIDLE disabled and you will get problems with no possible fix other than buying new drives. Seagate and Samsung are worth looking into. Given Seagate's wonky reliability (bursts of bad drives seem to happen, going by the reviews I'm reading, although in the aggregate they're probably not much worse than the others), I'd go with Samsung drives right now for cheap, reliable home mass storage.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

PopeOnARope posted:

Why is this so loving confusing. loving Western Digital. How are Hitachi in terms of reliability and warranty (e.g. do they offer advanced replacement)
Hitachis are on par with Samsung, although they seem to be the worst (by like 1w) in terms of power consumption and heat. To their credit, they're the ones that came out with THE first 1TB drives, so it's not like they have no R&D muscle. It's just that being a technological innovator doesn't necessarily mean you make better products.

I decided to stop giving a crap about the drives and to just use ZFS. I'm still migrating the drives I put into my Thecus NAS, which I bought as a literal "omg, I need it now" emergency (my company laptop broke and I needed to use my fileserver as my workstation).

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Almost all SOHO NASes actually use md RAID on Linux, which is software RAID. The RAID problems with TLER and such matter only for hardware RAID platforms.

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Just a note, but Newegg has 2TB Samsung drives for $110 each. I was waiting for them to hit about this price point before expanding out. I should be set for a while with this next order, wheee.

The Black drives get you a better warranty and better electronics than the regular ol' variety of consumer drives, but sometimes the extra warranty hardly matters if you keep upgrading drives every few years anyway. Three years ago, 1TB drives cost a bit more than what 2TB drives cost now. If the pace keeps a decent stride, 4TB for $100 is likely in 2013, while SSDs will have come down significantly in price too.

KennyG posted:

It's completely situational. RAID-Z leaves you with fewer OS options...
And this is part of why I'm putting VMs on my OpenSolaris box. Lets me run whatever OSes I need on the machine with local filesystem access speeds. It's not like I'm going to play games on the fileserver, right? SMB sucks balls (and iSCSI over ethernet scares me with my grade of equipment), so I'd rather put as many services as possible local to the file server anyway.

On top of the other benefits mentioned, ZFS offers de-dupe options, which can be a huge cost (and even performance) saver. Its performance gains come mostly from the ARC (Adaptive Replacement Cache), which means you need a bit more RAM to get good performance than you would with straight hardware RAID.
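On a new enough pool version, de-dupe is just a property you flip per dataset - dataset names here are examples:

zpool upgrade -v              # list the pool versions this build supports (dedup shows up in the list if it's available)
zfs set dedup=on tank/media   # enable it per-dataset; the dedup table lives in the ARC, hence the extra RAM
zpool list tank               # the DEDUP column shows the ratio you're actually getting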

RAIDZ also reduces the critical need for ECC RAM on your fileserver, further reducing hardware costs. With 10TB+ of data, even a 1.0E-15 error rate in memory will get you some bad writes once in a while on a regular hardware RAID.


necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Be careful about the NICs you use and the drivers you have. The OpenSolaris drivers for lots of common consumer NICs are pretty bad. There are alternative NIC drivers out there (the Realtek ones come to mind) that will help with stability. OpenSolaris is about where Linux was in 2003 in terms of support for consumer devices IMO, so it's Intel NICs and server-grade hardware or expect problems.

FISHMANPET posted:

Also, wondering what you mean by "SMB sucks balls"
The protocol is terribly inefficient and incurs an incredible amount of overhead and latency per request, not to mention its anemic ACL support makes it difficult to map permissions cleanly onto POSIX ACLs. It also makes mounting SMB filesystems for network apps horrible (have you tried an iTunes library on an SMB share? Slow as hell, even with gigabit ethernet and jumbo frames over direct connections), while it's normally not a big deal with NFS. I'm pretty impatient about my network filesystems, even at home. It pisses me off that I've moved my 1.5TB iTunes library onto the network and now it takes forever to even edit metadata, because iTunes does so many back-and-forth reads and writes (try adding a large video file to iTunes from an SMB mount. Try it from local. Weep).

Bridged NATs are typically better for VMs on workstations, mostly because you don't have to fire up a DHCP client on the virtual ethernet devices. Also, host-to-guest filesystem sharing on VirtualBox fires up a lightweight NetBIOS daemon along with some DNS-level override for the VBOXSVR share and other bits to work with Windows, last I saw.
