|
Ethereal posted:So, running FreeNAS 24/7 is definitely going to up my power bill considerably unless I go get a more power efficient processor/power supply. I was really hoping it had some intelligent states so it could run in really low power until a device requests a service...Oh well. spencer for hire posted:I was looking at an Antec Mini P180 to use as a case but I'm looking for any suggestions for a smaller one. I only see myself needing 4 drives, swapping them with bigger ones once they are filled. Finally, I have no clue when building your own how much noise/power usage to expect.
|
# ¿ Nov 3, 2008 05:58 |
|
pipingfiend posted:Thinking of building my own raidz box. Quick question: would I be able to run things like TorrentFlux or SABnzbd on Solaris?
|
# ¿ Nov 19, 2008 20:16 |
|
WickedMetalHead posted:OpenSolaris Requires a Keyboard attached? At all times?
|
# ¿ Nov 28, 2008 04:50 |
|
You can generally figure out what sort of libraries the vendor used by running ldd on a binary library or executable. Alternatively, you can statically link the libraries into your binary with some Makefile / gcc hackery. That can massively bloat the size of the resulting binary, though, so I wouldn't advise it for most embedded devices like a NAS. arm-elf-gcc targets the generic ARM instruction set, but you will likely need to specify the actual CPU generation. In your case, you should be able to get away with targeting the ARM7 family, so check that a flag like -mcpu=arm7tdmi shows up during compilation. You should not need to downgrade your version of gcc. In fact, I know I've been able to cross-compile for an ARM7 target from gcc 3.2... because I was the one on my team who ported the build scripts to newer gcc versions for certain optimizations we needed.
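A quick sketch of the inspection and build steps above. The binary names and paths are hypothetical, and your cross-toolchain may be named differently:

```shell
# See which shared libraries a vendor binary pulls in
ldd ./vendor_daemon

# Cross-compile for an ARM7-family core, checking the CPU flag is in effect
arm-elf-gcc -mcpu=arm7tdmi -O2 -o myapp myapp.c

# Static linking sidesteps shared-library mismatches, at a big size cost
arm-elf-gcc -mcpu=arm7tdmi -static -O2 -o myapp-static myapp.c
```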
|
# ¿ Jan 9, 2009 00:00 |
|
Triikan posted:Any good state-side suppliers of VIA-processor ITX boards? Newegg only has a few jetways.
|
# ¿ Feb 1, 2009 19:16 |
|
The problem I had with a lot of those supposed "microATX" towers is that they're actually about the same size as a lot of regular ATX towers. The 8-bay VIA drive tower would be nice if I could use my own motherboard, but I don't think we'd find out which OEM VIA used unless someone actually bought one, looked up the manufacturer labeled on the sucker, and called up some dude in China / Taiwan. I'm planning on picking up a cheapass, smallish case and waiting for a more ideal file server enclosure. If nothing comes up, I'll just put a SATA port multiplier in the 5.25" external bays as I fill up the internal bays and rotate out drives. adorai posted:what's wrong with http://www.antec.com/Believe_it/product.php?id=Mw== ? Cooling shouldn't be as important for home users buying those low-power drives, because they draw a lot less power and as a result put out less heat. If you've got 15k rpm SAS drives like in a business environment, then cooling is a much bigger concern and you should be looking at a serious case with serious cooling... or just buy the dumbass 4U file server like you're supposed to.
|
# ¿ Mar 15, 2009 17:09 |
|
One option you have would be to just buy the 1.5TB drives now, replace the existing 1TB drives with 1.5TB ones later, and then expand the zpool. At that point you could create another zpool out of the freed-up 1TB drives. I prefer multiple zpools over a single, giant zpool for home servers.
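The replace-and-expand path looks roughly like this. Pool and device names here are made up, and on newer ZFS builds you may also need the autoexpand property before the extra space appears:

```shell
# Swap each 1TB disk for a 1.5TB one, resilvering one at a time
zpool replace tank c1t2d0 c2t0d0
zpool status tank            # wait for the resilver before the next swap

# Newer builds want this set before the vdev grows on its own
zpool set autoexpand=on tank

# The freed 1TB drives can then become their own pool
zpool create tank2 raidz c1t2d0 c1t3d0 c1t4d0
```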
|
# ¿ Apr 14, 2009 01:20 |
|
You're looking at either zpool import or zpool replace depending upon how screwed your hardware / configuration is. I hate to be that guy, but the zpool man page is more helpful than I could be.
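For the curious, the two commands in question look like this. Pool and device names are hypothetical, and the man page covers the scarier flags:

```shell
# Re-attach a pool whose OS install died but whose disks are fine
zpool import            # lists importable pools found on attached disks
zpool import tank       # import one by name
zpool import -f tank    # force it if the pool wasn't exported cleanly

# Swap a dead disk for a fresh one and resilver
zpool replace tank c1t3d0 c1t5d0
```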
|
# ¿ May 27, 2009 22:21 |
|
DLCinferno posted:They're basically the same drive except for one software difference that cripples the Black drives in a RAID setup. Unfortunately, WD charges a shitload for the privilege of that one feature, which is totally bogus in my opinion. You'll want the RAID edition drives if you're willing to pay WD to run that utility on the drives for you. Otherwise, I'd stick with Black for performance and warranty. I personally use Green drives at home because I just sell my primary storage drives before the warranties are up, and it's worked alright for me.
|
# ¿ Jun 14, 2009 17:54 |
|
I know they'd be using 3 500GB platters instead of just 2 like the WD10EADS, so that could potentially mean higher mechanical failure rates, and it certainly means a bit more power draw and heat output, but it shouldn't be any different from previous 3-platter drives. The WD10EACS, which has 3 333GB platters, does seem to be developing some reallocated sectors (the first step toward dead, to me) among the drives I have.
|
# ¿ Jun 16, 2009 18:44 |
|
KennyG posted:The green power is variable speed drive designed for home use in regular desktops. The RE4-GP is designed for Large scale data storage (raid) and has much better durability, fault tolerance, and response in raid arrays. Put a bunch of Green desktop drives in a raid array and the array will fail in days if not sooner.
|
# ¿ Jun 25, 2009 22:09 |
|
Just try to enable TLER on the drives if you're using them in a RAID 1/10/01/5/6 setup. What you end up with is basically a slower set of drives, with less-than-bulletproof reliability per disk and a shorter warranty, compared to the drives that cost almost twice as much.
|
# ¿ Jun 26, 2009 01:43 |
|
Ranma posted:Right now, I have 2 1 gig hard drives, a 500 gig, and a 750 gig formatted as spanned dynamic disks. For heat reasons, I am replacing them all with 3 1 gig WD Green drives. I am slightly worried because I have read reports of the green drives not working in RAID (and not working with streaming HD movies). Will I be able to use these drives? WD Green drives in RAID were addressed earlier in the thread. You'll need to run WDTLER.EXE on them to enable TLER (Google for it), but it is not necessary to do this to get a functional RAID - just safer. TLER is an issue when a drive is getting flakey to begin with. What the WD RAID / Black drives get you over Green:
- better performance
- higher power consumption
- better mechanical hardware
- 2 years more warranty
- Black still requires the WDTLER utility in a RAID1+ system
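On smartmontools versions newer than the WDTLER.EXE era, the same knob is usually exposed through smartctl's SCT error-recovery commands - a sketch, assuming a drive that still honors them (WD later locked this down on some Greens):

```shell
# Read the current error-recovery timeouts (TLER/ERC), in tenths of a second
smartctl -l scterc /dev/sda

# Set read and write recovery limits to 7.0 seconds, the RAID-friendly value
smartctl -l scterc,70,70 /dev/sda
```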
|
# ¿ Jul 17, 2009 16:04 |
|
Yeah, this is why I don't trust anything other than decent RAID5 hardware and proven software RAID (md, zfs, etc.). There are a lot of complexities in storage systems, and nobody's going to solve them with some product aimed at end users first, with technical capability a distant second. Scratch2k posted:The moral of the story is that despite WHS's claims to the contrary, it is not a foolproof backup/data security solution, but then, for the average home user, what is? The average home user will almost certainly lose a lot of data even while using all these funky solutions they're paying good money for.
|
# ¿ Aug 24, 2009 04:11 |
|
Fitret posted:I thought that a lot of people were moving towards Atom because they still had great speed while vastly reducing the amount of power consumed - very important for always-on servers. Is there something I'm missing here? I still think that for a home file server, a low-watt Intel or suitable AMD CPU with a low-power chipset is better than an Atom board - the power consumption is maybe a difference of 10w for a lot more computing power. If you want to move up into higher throughput network I/O, you're going to need a bit more than what an Atom can do while juggling some other important tasks.
|
# ¿ Aug 25, 2009 00:14 |
|
While QNAP's NASes are really nice, I don't think it's really worth the price premium. Wound up looking at their stuff before I settled on a Thecus N4100Pro (albeit out of an emergency need when a machine died and I needed a storage server FAST).
|
# ¿ Sep 11, 2009 03:20 |
|
Well, if you're running ESXi, it has to be running on bare metal, so FreeBSD would have to be a guest OS with direct I/O access to all the drives in the RAID. I'd normally go with a hardware setup that runs OpenSolaris, and run Solaris Zones and/or VirtualBox VMs under it to get anything else I need. The problem here is that virtualization doesn't automagically get you support for hardware your host OS doesn't support (with some exceptions - my USB SmartCard for work works under Windows XP 32-bit, so I run an XP VM to access my VPN).
|
# ¿ Sep 18, 2009 17:54 |
|
network.guy posted:If I get a NAS like a QNAP TS-239 Pro can I access it from multiple PCs as though it's just another folder on the PC? If so, is it possible to set it up so that one can read and write to the NAS, but not delete (or only as an admin)?
|
# ¿ Oct 8, 2009 17:41 |
|
riichiee posted:Then, when the file server is accessed, the machine wakes up, serves the files, stays on for another 30 minutes (or whatever) then goes back to sleep.
|
# ¿ Oct 16, 2009 15:04 |
|
Putting multiple slices from the same disk into a vdev wouldn't be so terrible if you're using an SSD. However, I really doubt anyone is bothering to run ZFS on an array of SSDs and migrate lots of data onto it at present. japtor posted:I was going to suggest something homemade as well, albeit not entirely so like this one Although if someone is making homemade drive rack setups, I suppose $300 is out of their budget to physically protect and secure their data, and just getting a bigass n-bay tower case on the cheap is more appropriate. However, if you're going to buy a new case and a few 5-in-3 enclosures, the cost will come out very close to the Norco case. Then again, few people are willing to put a 4U rack in their bedrooms...
|
# ¿ Dec 4, 2009 17:27 |
|
Methylethylaldehyde posted:The one thing people never mentioned on the reviews of this case is how god damned LOUD it is. The thing has 4 80mm delta screamers in it. I can hear it through a closed door, and can't hear other people talking in the next room if I'm near it. Methylethylaldehyde posted:On the other hand, once I figured out how the gently caress opensolaris worked, I was able to set up xVM,
|
# ¿ Mar 5, 2010 02:00 |
|
H110Hawk posted:It's a server chassis made for a datacenter. "Quiet" isn't on their mind. Delta fans are awesome fans, but I wouldn't put this in my house. Garage, sure. supster posted:I've read that it's possible to do with virtualbox - I'm not sure about vmware but I wouldn't be surprised if it was possible as well. Are there any other issues that come to mind? Running an entire server that's just running unraid seems so silly.
|
# ¿ Mar 5, 2010 17:28 |
|
H110Hawk posted:Out of curiosity, what specifically is low quality about that case? It appears to be metal, metal, metal, some plastic, some Delta fans, and a power button? Plus, if your walls are so hollow why not insulate them? It will add value to your home and you will sleep better at night. quote:And if you don't want a datacenter-esque case, why look at rackmounts at all? I'm sure you could build a really bitchin full tower with 12+ drives in it.
|
# ¿ Mar 5, 2010 20:17 |
|
That's a lot of Linux isos... but where's the space for the CPU / motherboard in that mess of hard drives? That'd be the one and only rack case you'd ever need for hard drives at home. H110Hawk posted:If you want your data to be "physically sound" then it's time to get spendy on your RAID card (LSI only, really) and motherboard. If you want cheap or quiet you're looking at non-rack mount, and not dense. I was just hoping for the quality of trays I see on the HP ProLiant servers I've racked up before.
|
# ¿ Mar 6, 2010 01:45 |
|
You don't need a powerful CPU for most media server use cases. Unless you're doing transcoding with something like PS3 Media Server, you're best off getting a moderately powerful 64-bit CPU with low TDP for an OpenSolaris file server. ZFS benefits from fast CPUs, sure, but it's probably not worth it for even most small office uses of a file server. I'd say no more than an E5200 (or the Xeon equivalent) would ever be necessary for a ZFS file server unless it's running at quite high loads. My E5200 setup only uses about 50w idle and can churn through video if I need it to, while I doubt most of the Xeons can get total system wattage that low. Power consumption may matter more or less in your region or household. devilmouse posted:Unbuffered or buffered RAM? FISHMANPET posted:Also, I would ditch your lame CPU for an AMD Phenom II X2 or X3 or X4. You can get chips that are cheaper and have more cores when you go AMD. Also, part of why you should get a server motherboard with multiple PCIe x8/x16 slots is that most consumer-class motherboards share a single x8's worth of lanes on the backend (because even SLI setups don't max out 8 lanes) and incur a multiplexing penalty. This means that if you use two PCIe SAS / SATA cards with 8+ drives each, you don't get the maximum possible bandwidth. That doesn't matter if you only use one card, but it can be a problem if you use two or more cards running at high load.
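A back-of-the-envelope check on the shared-lane concern, assuming ~250 MB/s per PCIe 1.x lane before overhead and ~100 MB/s sustained per drive of that era (both rough figures):

```shell
# Spare bandwidth (MB/s) on a PCIe 1.x link shared by a number of drives
link_headroom() {  # usage: link_headroom LANES DRIVES
    echo $(( $1 * 250 - $2 * 100 ))
}

link_headroom 8 8    # one 8-drive card on its own x8 link: prints 1200
link_headroom 8 16   # two 8-drive cards muxed onto one x8: prints 400
```

So two loaded cards behind one x8 still fit on paper, but the headroom shrinks fast once drives get quicker or you add a third card.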
|
# ¿ Mar 29, 2010 16:17 |
|
JerseyMonkey posted:If I remove a 1TB drive and replace it with a 1.5TB drive, allow the RAID1 to rebuild, then pop in the second 1.5TB drive, will the RAID1 expand to 1.5TB or do I need to back up the 1TB RAID1 setup, and recreate the RAID1 setup with the (2) 1.5TB drives? jeeves posted:I've read that WD Green drives are not good for raids, since they spin down their heads after a few seconds of idleness. Are most 'green' drives like this? Like the Samsung line as well? The cost of replacing drives that die early from excessive head parking (if it happens as frequently as some people report on hardware RAID setups, it will definitely matter) may outweigh the power and drive cost savings from going Green. For a large array (10+ drives) at home it might make more sense, but for smaller 3-4 drive arrays the power savings are minimal - 9 watts or so. Also consider that Green drives only carry a 3-year warranty (my butt's been saved plenty by 5-year warranties). You might as well swap out an incandescent lightbulb instead for an easier way to save power. My 3-level townhouse uses about 70w for all the lights on at full blast, measured at the meter - cost me about $50 in bulbs. I'd encourage Green drive use for software RAID setups like unRAID, mdadm, and ZFS. If you're going serious, you'll need a serious budget to go with it, and Green drives won't suffice anymore. It's up to you to decide how serious you are and to spec accordingly.
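The lightbulb comparison is easy to put numbers on. A quick sketch, assuming a $0.12/kWh rate (plug in your own utility's number):

```shell
# Rough annual cost of an always-on load at $0.12/kWh
annual_cost() {  # usage: annual_cost WATTS
    awk -v w="$1" 'BEGIN { printf "%.2f\n", w * 24 * 365 / 1000 * 0.12 }'
}

annual_cost 9    # ~9 W saved by Green drives in a small array: ~$9/yr
annual_cost 47   # swapping one 60 W incandescent for a 13 W CFL: ~$49/yr
```

One bulb swap beats the whole Green-drive premium on a 3-4 drive array.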
|
# ¿ Apr 12, 2010 19:17 |
|
My array growth strategy is adding zpools constructed of separate 5x1TB, 4x1.5TB, and later 6x2TB hooked onto a couple JBOD cards with 8 ports per card. When a drive goes down in a raidz1 array, I'll buy a new drive to replace it or migrate the data on the existing array to another zpool and sell the old, obsolete drives. I don't expect to need more than 10TB of storage at a time and only need about 6TB at present, so this should work fine.
|
# ¿ May 1, 2010 20:12 |
|
Cyberdud posted:Am i better off just telling them that for this kind of money we won't be getting a reliable enterprise solution? Your options:
1. Write down a detailed analysis of the pros & cons of the technical options at the given budget, along with the ideal solution and the one a step down from ideal. Show that the budget does not allow you to meet critical business requirements, and that they're basically throwing money away on top of taking a great risk by not budgeting for the right solution.
2. Bite the bullet and do a hosed up, crazy implementation on a shoestring budget that gets you incredible amounts of praise.
3. Look for a new job, because the organization will likely not be in business for much longer, and it's probably stressful working there anyway.
4. Not give a poo poo about the job beyond the bare minimum and looking good enough for your next job.
5. Quit for a better job and pray that they don't (de)evolve into your previous job, ironically enough, through business success.
Combat Pretzel posted:I'm interested, because I'm looking at a parachute option, since I'm growing weary of Oracle's bullshit related to OpenSolaris. BTRFS will probably be production ready by the end of 2011 is my guess. I've been reading occasional forum posts by BTRFS early adopters, and they've basically had to be married to the developers to get it working so far. The disk format is still not stable, so I wouldn't use it for anything longterm either. It took a few years for Linux to be usable for anything beyond hobby computing, and perhaps if we're lucky something workable will be out sooner this time.
|
# ¿ May 12, 2010 20:07 |
|
I don't think it'd be very cost-effective to buy that many drives unless you expect to need that much storage in the next 12 months or so. I'm only going to have 9 disks in two separate zpools soon, and even then I'm going to phase them out with larger disks as the drives die off. To me it's the best compromise between the WHS one-drive-at-a-time method and the ZFS "replace the entire array's drives to expand" approach.
|
# ¿ May 13, 2010 23:14 |
|
Samsung drives are the ones that don't carry those settings over across reboots, while Seagates do write them through to EEPROM, I believe. Western Digital is the biggest downer, though, basically disabling most of these options outright on their Green drives from what I can tell. Too many people were building home megaraids (and oftentimes failing), and WD would rather upsell them on Black drives. I'm willing to deal with the shortcomings of Green drives for now, until I actually need good IOPS and bandwidth. For low-power RAID setups on most OSes with a strong cost consideration, I'd recommend Samsung drives, since the other manufacturers seem to have firmware oddities (reliability is about par across all of them, unless we're talking old Maxtor drives and maybe some LaCie refurbished crap). If cost isn't a primary factor, then you're looking at Black or RAID edition drives from Western Digital regardless of whether you're using hardware or software RAID.
|
# ¿ May 14, 2010 19:22 |
|
Combat Pretzel posted:Hmmm. The only way I see this working is if the intro is at the beginning of the show, as frame and pixel accurate copies across the episodes. Any minor variance in pixels or the episode starting at a different frame will generate different bitstreams and influence bitrate allocation.
|
# ¿ May 19, 2010 18:08 |
|
The only way out of paying that much is to basically build your own NAS. For home users, there's a cost curve that dips to its low point somewhere around the Drobo and above, and around there it's probably cheaper (even considering power) to just build your own. mini ITX motherboards and Atom processors have helped make low-power DIY NAS boxes a reality. Heck, even my E5200 with an nForce chipset with 5 drives only draws like 50w at idle, and I never have to worry about whether it'll support X software feature. Of course, building a NAS box is a bit more specific and complex in some respects than a simple DIY desktop system, so it's certainly possible it's more cost-effective to buy one of those $1k+ boxes meant for small businesses. It's just I can build a great OpenSolaris box with 6TB of drives for about $800 when a NAS from an ok manufacturer costs $1k without any drives.
|
# ¿ May 25, 2010 16:53 |
|
Whatever gains you could have made in power efficiency by choosing a lower-power chipset and smaller-footprint motherboard were lost with that 6-core CPU. There's maybe a 4-5 watt difference among boards on the same chipset, and contrary to intuition, my experience with Intel-based and nVidia-based chipsets is that the nVidia board actually beat the Intel by about 1w despite carrying a much better GPU. This is crazy considering how much extra power a GF9400M should take over a GMA 4500 chip, which has utter crap performance, even at idle. You're better off spending on an appropriately sized PSU (if you only draw 80w normally, even the best 80Plus PSU rated for 1000w will waste about 15w more than a 400w PSU of slightly lower rated efficiency), or on a RAID controller that supports staggered spin-up to avoid power spikes, than on chipset choice.
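The math behind the PSU-sizing point, with illustrative efficiency figures (80Plus units fall off badly below ~20% load; check your PSU's actual curve):

```shell
# Wall draw = DC load / efficiency at that load point
wall_draw() {  # usage: wall_draw LOAD_WATTS EFFICIENCY
    awk -v l="$1" -v e="$2" 'BEGIN { printf "%.1f\n", l / e }'
}

wall_draw 80 0.82   # 400 W unit at ~20% load: ~97.6 W from the wall
wall_draw 80 0.70   # 1000 W unit at ~8% load: ~114.3 W from the wall
```

That gap is the ~15w being thrown away for headroom you never use.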
|
# ¿ May 26, 2010 16:19 |
|
Be careful about the software implementation of DAAP that they're using. My Thecus N4100Pro uses the obsolete mt-daapd, and with the miniscule bit of RAM they have on there, it dies at only about 4,000 songs. Newer NASes should be using Firefly Media Server, I think, and I dunno how well it does compared to mt-daapd. The BitTorrent client on my Thecus is limited to 4 torrents as well (it uses rtorrent on the backend), which is lame. There's always some limitation to the features on these SOHO NASes, I've found, which is why I'm planning on getting rid of mine ASAP and going with a homebrew box.
|
# ¿ May 31, 2010 22:50 |
|
I think what he really means is hackable, because you're then going into unsupported territory. Some people want to hack up a box that's already been made because it's cheaper, sorta like how people approach flashing a router with 3rd-party firmware, or what people used to do before these NASes with the Linksys NSLU2 or whatever. If you want to hack up these boxes for your own software featureset, you're probably best off with the WHS boxes like the HP or Acer models, because everything else uses low-power non-x86 chips, or an AMD Geode, or something else cheap and bare-minimum for the SOHO market, which could require recompiling any binaries you try to bring onboard. There's also the issue that many can only run headless and can be pretty easily bricked, unless you can get access to the onboard flash without the mobo or manage to attach a keyboard.
|
# ¿ Jun 10, 2010 01:31 |
|
Scuttle_SE posted:Can I mess with TLER on newer WD drives? I seem to recall WD locking that down or something...
|
# ¿ Jun 14, 2010 00:24 |
|
PopeOnARope posted:Why is this so loving confusing. loving Western Digital. How are Hitachi in terms of reliability and warranty (e.g. do they offer advanced replacement)? I decided to stop giving a crap about the drives and to just use ZFS. I'm still migrating over the drives I put into the Thecus NAS that I bought in a literal omg-I-need-it-now emergency (my company laptop broke and I needed to use my fileserver as my workstation).
|
# ¿ Jun 14, 2010 00:44 |
|
Almost all SOHO NASes actually use md RAID on Linux, which is software RAID. The RAID problems with TLER and such matter only for hardware RAID platforms.
|
# ¿ Jun 15, 2010 05:43 |
|
Just a note, but Newegg has 2TB Samsung drives for $110 each. I was waiting for them to hit about this pricepoint before expanding out. I should be set for a while with this next order, wheee. The Black drives get you a better warranty and electronics over the regular ol' variety of consumer drives, but sometimes the extra warranty hardly matters if you keep upgrading drives every few years anyway. 3 years ago, 1TB drives cost a bit more than what 2TB drives cost now. If the pace keeps a decent stride, 4TB for $100 is likely in 2013, while SSDs will have gone down significantly in price too. KennyG posted:It's completely situational. RAID-Z leaves you with fewer OS options... On top of the other benefits mentioned, ZFS offers de-dupe options, which can be a huge cost (and even performance) saver. Its performance gains come mostly via the ARC (Adaptive Replacement Cache), which implies you need a bit more RAM to get good performance than you would with straight hardware RAID. RAIDZ also reduces the critical need for ECC RAM on your fileserver, further reducing hardware costs. With 10TB+ of data, even a 1.0E-15 error rate in memory will get you some bad writes once in a while on a regular hardware RAID.
|
# ¿ Jun 15, 2010 19:33 |
|
Be careful about the NICs you use and the drivers you have. OpenSolaris drivers for lots of common consumer NICs are pretty bad. There are alternative NIC drivers out there (ones for Realtek chips come to mind) that will help with stability. OpenSolaris is about where Linux was in 2003 in terms of support for consumer devices, IMO, so it's Intel NICs and server-grade hardware or expect problems. FISHMANPET posted:Also, wondering what you mean by "SMB sucks balls" Bridged adapters typically beat NAT for VMs on workstations, mostly because you don't have to fire up a DHCP client on the virtual ethernet devices. Also, host-to-guest filesystem sharing on VirtualBox fires up a lightweight NetBIOS daemon, along with some DNS-level override for the VBOXSVR share and other stuff, to work with Windows, last I saw.
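If you're scripting the VM setup, the bridged-vs-NAT choice is one VBoxManage flag. The VM name and host interface here are hypothetical (e1000g0 being a typical Solaris Intel NIC name):

```shell
# NAT (the default): guest gets a private address from VirtualBox's own DHCP
VBoxManage modifyvm "opensolaris-vm" --nic1 nat

# Bridged: guest sits on the LAN like a real machine, no extra NAT layer
VBoxManage modifyvm "opensolaris-vm" --nic1 bridged --bridgeadapter1 e1000g0
```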
|
# ¿ Jun 15, 2010 23:15 |