|
moep posted:That is really the only thing I dislike about OpenSolaris — up-to-date packages are sparse and it requires hours of patching and compiling to get basic stuff working that would require nothing more than one line in the command shell of a linux distribution. The sad thing is how very much better it is now compared to a few years ago. I won't get into sunfreeware and blastwave and all that, but as bad as it is now, it's come a long way pretty quickly. I got torrentflux-b4rt with transmission up and working on OpenSolaris over the past few days -- it was a ridiculous hackfest to get it working, and it reminded me just how far it has yet to go. I'd love to submit my work as a spec file or something upstream to the tfb4rt people, but I ended up making so many sloppy hacks that it's not really submittable.
|
# ? Sep 9, 2009 07:34 |
|
|
I've decided to do a HTPC overhaul and replace the single big tower that was doing double duty as a frontend and storage with a low profile, quiet front end, and a RAID server for the backend (preferably power conscious). I've got the frontend taken care of, but I'm stuck in analysis paralysis over what sort of RAID solution to go with. As much as I love tinkering, since the main duty of this is streaming media to an HTPC, I'd love for it to just work without having to mess with it, so my initial thought was a pre-built solution. However, the problem is that I'm close to filling 4x1.5 TB disks now (with no redundancy/RAID overhead) so at the very least I'd need something with 4 bays. On top of that, the idea of replacing and growing 1 disk at a time is pretty darn appealing, and one of my biggest concerns is growth, so that'd limit me to a Drobo or one of the ReadyNAS products (I believe they're the only ones that support different-sized disks). Reviews seem 50/50 on the ReadyNAS, and I'm not crazy about Drobo's USB-only connectivity, or paying an extra 200 bucks for the DroboShare (which is also necessary to run the software hacks people have come up with), and I also suspect they've got another product coming out soon...(seems like they're due). However, after reading through this thread, I'm starting to really appreciate the flexibility of running a full-blown server that does its own RAID. UNRAID has caught my eye since it supports different sized disks, does mdadm support this as well? Am I out of luck on finding a pre-built (preferably BYOD) solution? Should I just suck it up and look into UNRAID or possibly mdadm? EDIT: Also read a bit about openfiler: http://openfiler.com/ Anyone tried it? dickthatcomeslead fucked around with this message at 08:19 on Sep 9, 2009 |
# ? Sep 9, 2009 07:40 |
|
dickthatcomeslead posted:However, after reading through this thread, I'm starting to really appreciate the flexibility of running a full-blown server that does its own RAID. UNRAID has caught my eye since it supports different sized disks, does mdadm support this as well? If you're going with mdadm, your best bet to use it this way is probably in conjunction with lvm. mdadm raid5 and raid1 arrays fall back to the smallest drive size. If you throw a larger drive at it, it just won't use the extra space and it's wasted. mdadm has the ability to grow a raid (google mdadm grow), but you are still limited to all drives in the raid having the same size. However, since you can (and most usually do) create mdadm raids on partitions, you can create one large raid5 across all drives using a partition of the same size on each. Use lvm on top of it even if you don't have more than the one raid5; it will make life easier later for expansion. Later, when you have more than one drive with wasted space, you can create either a raid5 or raid1 across the wasted space and add it to your lvm. After a lot of pvresize and resize2fs (or resize_reiserfs or whatever the resizing tool for your filesystem of choice is) you now have one large disk drive available where everything underneath it is redundant. Although you're wasting space with extra parity drives or mirrored drives, it's not as much waste as you might have otherwise had. Edit: you may fare better with drobo or unraid. I have never used either before, so I don't know. They are both out of my wallet's range (my home storage system is made of drives and hardware that work was otherwise throwing out because they are too small or too slow). HorusTheAvenger fucked around with this message at 13:15 on Sep 9, 2009 |
# ? Sep 9, 2009 13:13 |
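The mdadm-plus-lvm layering described in the post above can be sketched roughly as follows. This is a hypothetical sketch only: device names like /dev/sdb1 are placeholders for your own partitions, every command here is destructive to existing data, and all of them need root.

```shell
# One raid5 across equally-sized partitions on each of four drives
mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1

# Put LVM on top so the storage can be extended later
pvcreate /dev/md0
vgcreate storage /dev/md0
lvcreate -l 100%FREE -n media storage
mkfs.ext3 /dev/storage/media

# Later: pair the leftover space on two larger drives as raid1
# and fold it into the same volume group
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd2 /dev/sde2
pvcreate /dev/md1
vgextend storage /dev/md1
lvextend -l +100%FREE /dev/storage/media
resize2fs /dev/storage/media
```

The point of the lvm layer is the last three commands: growing the logical volume and filesystem online, without rebuilding the original raid5.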
|
Has anyone here used OpenSolaris and Nexenta? I'd imagine OpenSolaris is harder to use (given a linux background), but that would be balanced out by better package support - it doesn't sound like OpenSolaris has any more packages than Nexenta, though :S Nexenta v1 had me patching rtorrent manually every update, but v2 has rtorrent in its software repo and it built without a hitch.
|
# ? Sep 9, 2009 16:20 |
|
DLCinferno posted:I haven't been able to get ahold of my friend, but I should have been more clear...I didn't install the patches from the guide, although I did use the newest versions of the packages. Thanks for the power-man info, I'll have to set that up once I'm back home. I got some terribly old version of rTorrent running on my Solaris box (Nevada b114 I think) off some packages I found googling rTorrent + Solaris. Manually walked all its dependencies (thanks sunfreeware.org), did some naughty ln'ing of libs, and it runs!
|
# ? Sep 9, 2009 18:37 |
|
Hi everyone, I'm looking into building a fairly cheap NAS computer that will run OpenSolaris with about 4-5 TB drives in raid-z (either the equivalent of raid-5 or raid-6). I have an old atx case sitting around, along with 2x1GB of DDR2 ram and a 350W PSU that came with the case for my current desktop. In addition, I have a 200GB WD SATA drive and a 1TB WD Caviar Black in my desktop (which I'm thinking doesn't need all of that storage once I get the server up and running). I've read that Solaris plays nicer with Intel processors, so I was thinking of going with the cheapest Celeron I can find, along with a motherboard with 6+ SATA ports (which should be compatible with Solaris). I read a recommendation for this Gigabyte motherboard earlier in the thread, but supposedly not all of the SATA ports work under Solaris. I've been looking at some of the cheaper Super Micro server boards, but I'm not sure if I would get any benefit from using one. Finally, I mentioned before that I already have a 1TB WD Caviar Black, but I'm considering just using the slower 1TB WD Caviar Green drives instead to take a load off of my power supply. Thoughts? Any help is appreciated.
|
# ? Sep 9, 2009 21:09 |
|
I wish there was a good source for up to date out-of-the-box NAS review roundups. QNAP vs ReadyNAS vs Drobo, that kind of thing. I'm not feeling up to rolling my own, but I'm really torn between what's available.
|
# ? Sep 10, 2009 05:32 |
|
Twiin posted:I wish there was a good source for up to date out-of-the-box NAS review roundups. QNAP vs ReadyNAS vs Drobo, that kind of thing. I'm not feeling up to rolling my own, but I'm really torn between what's available. http://smallnetbuilder.com/
|
# ? Sep 10, 2009 07:12 |
|
EnergizerFellow posted:for the most part. Finally they updated their utterly retarded NAS performance tests. They used to list several products with faster than wire speed performance
|
# ? Sep 10, 2009 07:27 |
|
Anybody get their hands on an Iomega StorCenter ix4-200d NAS Server yet? Looking like the perfect ESX testing box for me to pick up, assuming the performance isn't total rear end. I'm also tempted to set up an Acer Aspire Easystore H340 as a little test box. Any ideas how a single-core Atom does on software RAID and gig-e? MrMoo posted:Finally they updated their utterly retarded NAS performance tests. They used to list several products with faster than wire speed performance EnergizerFellow fucked around with this message at 16:38 on Sep 10, 2009 |
# ? Sep 10, 2009 16:36 |
|
EnergizerFellow posted:Anybody get their hands on an Iomega StorCenter ix4-200d NAS Server yet? Looking like the perfect ESX testing box for me to pick up, assuming the performance isn't total rear end. I've been looking at the same box for here, but the performance details available online seem a little slim to non-existent.
|
# ? Sep 10, 2009 17:36 |
|
bob arctor posted:I've been looking at the same box for here, but the performance details available online seem a little slim to non-existent. I saw a forum post saying the Iomega might be locked into specific drive models. This is a typical EMC trick, unfortunately (EMC owns Iomega these days). I'd be all over the Iomega if it supported generic SAS drives.
|
# ? Sep 10, 2009 22:02 |
|
EnergizerFellow posted:For the Acer or the Iomega? I'm more interested in the Iomega, as the Acer seems to be a WHS box, not an NFS / iSCSI box. Granted, there is a large amount of overlap in their audiences. There's no suggestion that the Iomega supports SAS at all; the question in my mind is how it ranks among similar products by QNAP and others. Apparently the previous generation (ix2) was very slow compared to the theoretical max for the hardware and the competitors. It would be nice if someone was running more professional benchmarks of these with Windows Server and VMware, which would allow people to get an idea of how they perform in comparison to direct-attached SATA etc. Clearly they're going to be pretty low in the scheme of things, but it would be nice to see what they would be passable for.
|
# ? Sep 11, 2009 00:12 |
|
Looks like QNAP released a new line of NASes today aimed at SOHO/home users. I've been looking at picking up a TS-439 (or maybe a TS-639...) for a while, but the relatively high cost was the prohibiting factor (isn't it always). However, it looks like they've simply taken the TS-439 and replaced the Atom CPU with a Marvell CPU to get the prices down. If the performance hit isn't too bad, I think I'm going to bite. Going to wait for some reviews first to make sure they didn't absolutely gently caress the performance or nerf/cripple any other features, but I doubt that's the case.
|
# ? Sep 11, 2009 02:52 |
|
While QNAP's NASes are really nice, I don't think they're really worth the price premium. Wound up looking at their stuff before I settled on a Thecus N4100Pro (albeit out of an emergency need when a machine died and I needed a storage server FAST).
|
# ? Sep 11, 2009 03:20 |
|
I'm torn between getting a cheaper qnap 2-bay 219 (and paying a premium for large hard drives) or paying the extra $250 for a 4-bay 419, so I can buy cheaper hard drives.
|
# ? Sep 11, 2009 03:21 |
|
Twiin posted:I'm torn between getting a cheaper qnap 2-bay 219 (and paying a premium for large hard drives) or paying the extra $250 for a 4-bay 419, so I can buy cheaper hard drives. bob arctor posted:Clearly they're going to be pretty low in the scheme of things but it would be nice to see what they would be passable for.
|
# ? Sep 11, 2009 06:21 |
|
I've got about $2,000-$2,500 budgeted to build a NAS here shortly (within the next 3 months most likely) - am planning on FreeBSD with Raid-Z using this hardware. Any obvious problems so far? Also not in a huge hurry, as maybe the 2TB drives will hit a more affordable $/gig ratio and I will have this thing for a while...
|
# ? Sep 11, 2009 20:01 |
|
I have a computer running a hardware-based Raid-1 configuration with two 150 GB drives. I have two new 500 GB drives that I would like to use instead. Here is my plan of action for migrating the array, please let me know if this should work or if there is a better solution:
1) Remove single 150 GB drive from array
2) Replace with single 500 GB
3) Rebuild array
4) Remove other 150 GB drive
5) Boot onto liveCD and use GParted to extend the partition to take up the full disk (500 GB)
6) Attach second 500 GB drive
7) Rebuild array
|
# ? Sep 11, 2009 20:04 |
|
Modern Pragmatist posted:I have a computer running a hardware-based Raid-1 configuration with two 150 GB drives. I have two new 500 GB drives that I would like to use instead. What RAID controller are you using? I've created/rebuilt arrays with the tool inside of Windows, and created arrays in the BIOS of the card, but I wonder if you can resize a logical drive using gparted like that.
|
# ? Sep 11, 2009 21:01 |
|
Modern Pragmatist posted:I have a computer running a hardware-based Raid-1 configuration with two 150 GB drives. I have two new 500 GB drives that I would like to use instead. If it's a media/backup server, I'd probably just 1: Remove 150GB drives and build new array with 500GB disks 2: Add one of the 150GB drives to the system via SATA or USB and copy all the files to the new array.
|
# ? Sep 12, 2009 00:03 |
|
roadhead posted:I've got about $2,000-$2,500 budgeted to build a NAS here shortly (within the next 3 months most likely) - am planning on FreeBSD with Raid-Z using this hardware. Any obvious problems so far? You are spending far too much on the UPS. Just grab some cheapo thing that can run your machine for 5 minutes and then shut it down via RS-232/USB. This will give you more holdup for half the price: http://www.newegg.com/Product/Product.aspx?Item=N82E16842102048 The APC brand isn't doing anything there except costing you money; they all use the same batteries. I found that by opening the UPS page and it was the first thing listed. I would target $100 for the box alone, or $125-150 for everything in your closet. You seem to have a need for speed with the CPU, but what realistic values are you looking to push from this machine? Is it just for NAS duty, or do you hope to offload HTPC functions like video transcoding? If not, I would go to dual core for half the price and still maintain a lot of flexibility. This chipset likely won't get you very far unless it's always single stream, and even then: Onboard LAN LAN Chipset Realtek 8111DL If you want to go fast, or have multiple clients accessing it, slap a cheap Intel NIC in there: http://www.newegg.com/Product/Product.aspx?Item=N82E16833106033 Cheaper RAM: http://www.newegg.com/Product/Product.aspx?Item=N82E16820148160 DVD-ROM? Are you going to be backing up your DVDs? This may seem nit-picky, but for a dedicated NAS box you can now throw in 2 more hard disks for roughly the same amount of money.
|
# ? Sep 12, 2009 02:36 |
|
Hi everyone. I hope this is an appropriate place to ask my question... If not please let me know. I am trying to come up with a good backup solution for the three computers in my house (two Macbook laptops, one Windows Server 2003). This is strictly for disaster recovery type stuff so I don't need to keep backups for more than a week. After thinking about it for a while, I think the best solution would be:
- A NAS with 2 TB storage, with raid1
- splitting the NAS into three partitions
- having each computer back up to its dedicated partition (using superduper/cobian)
What do you think of my plan? Am I going about this the right way? I was thinking of just putting more hard drives into the server, but I am most likely going to replace this server in the next twelve months. My other concern is that I would have to have the two partitions for the macbooks as HFS+ partitions. Any hardware recommendations for this?
|
# ? Sep 12, 2009 17:46 |
|
Bob Morales posted:What RAID controller are you using? I've created/rebuilt arrays with the tool inside of Windows, and created arrays in the BIOS of the card, but I wonder if you can resize a logical drive using gparted like that. Adaptec AAR-1420SA. gparted didn't work out because it claims there are bad sectors within the NTFS partition and is unable to resize it. The system is a full windows install so I can't just copy the files over as you would for a media or backup system.
|
# ? Sep 12, 2009 19:08 |
|
Modern Pragmatist posted:The system is a full windows install so I can't just copy the files over as you would for a media or backup system. If you boot into the vista or windows 7 installer, you can resize the filesystem there.
|
# ? Sep 12, 2009 19:32 |
|
adorai posted:If you boot into the vista or windows 7 installer, you can resize the filesystem there. Any chance XP allows this?
|
# ? Sep 12, 2009 23:03 |
|
Modern Pragmatist posted:Any chance XP allows this?
|
# ? Sep 12, 2009 23:32 |
|
Being new to RAID arrays and all, I have a question about this new box I'm setting up. Right now, after much trial and error, I've gotten Ubuntu Server AMD x64 up and running on a single 1.5TB hard disk. Initially I had planned to create a software RAID5 array and set it up during the Ubuntu Server install, but for some reason the installer would flip out once I initialized my other hard drives. At one point I even had the array set up across all 4 disks just the way I wanted, but as it turns out GRUB will not boot from disks in a RAID5 array, so that triumph was shot to poo poo. What I can't seem to wrap my head around though is how one can configure a RAID array across a bunch of disks including the operating system, swap space, and boot loader and do it without partitioning for the /boot partition, /swap space, and root partition since you can't have other partitions on the disks you're using in the array. The current plan is to keep one of my 1.5TB disks as a partitioned disk containing /boot, /swap, and / (using the extra space for recording and video editing scratch space or something), and then set up a RAID5 array with the 3 remaining 1.5TB disks. This will limit me as far as storage capacity is concerned (I want to have 3 disks worth of storage space and 1 disk for mirroring as opposed to 2 disks worth of storage and 1 for mirroring) but I guess what I'm really trying to ask is how can I set up my disks so that all 4 of them are in the same RAID5 array and contain the necessary partitions/software to boot and initialize Linux?
|
# ? Sep 13, 2009 21:28 |
|
tehschulman posted:since you can't have other partitions on the disks you're using in the array. Sure you can. I would create one 10GB partition on all the disks, build a raid1 array across your disks for that and install to it, then build a raid5 out of the remaining 1.49TB of each disk.
|
# ? Sep 13, 2009 23:44 |
|
adorai posted:Sure you can. I would create one 10GB partition on all the disks, build a raid1 array across your disks for that and install to it, then build a raid5 out of the remaining 1.49TB of each disk. Should I partition for /boot and /swap across all 4 disks too? I can probably replace /swap with a pagefile on the system disk, but there's got to be something else I need to account for with boot, right? EDIT: I reinstalled Ubuntu server per the above with a 40GB partition across all 4 disks in a RAID1 (set as / directory and flagged as bootable) and with a 1.5TB partition across all 4 disks in a RAID5 (set as /home). I'm getting a familiar error message that I've received every time I've tried booting a RAID array under Ubuntu Server: code:
ALSO: I'm trying to map the 4x 40GB partitions that contain the / directory to a RAID1 array, but it seems that I can only map 2 of the drives in a RAID1 array at a time? What's my ideal partition set up look like in this situation? Thanks. Dotcom Jillionaire fucked around with this message at 06:37 on Sep 14, 2009 |
# ? Sep 14, 2009 02:37 |
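The raid1-root plus raid5-home layout adorai suggested can be sketched like this. A hypothetical sketch only: device names are placeholders, and a note on the "can only map 2 drives" confusion above, since mdadm raid1 happily takes all four members (an n-way mirror). The 0.90 metadata flag reflects the common advice that GRUB of this era can only boot a raid1 whose members look like plain partitions.

```shell
# Small raid1 across all four disks for / -- bootable, because each
# member individually looks like an ordinary filesystem to GRUB
mdadm --create /dev/md0 --level=1 --raid-devices=4 \
    --metadata=0.90 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# raid5 across the large remaining partitions for /home
mdadm --create /dev/md1 --level=5 --raid-devices=4 \
    /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2

mkfs.ext3 /dev/md0   # install the OS here, mounted as /
mkfs.ext3 /dev/md1   # mounted as /home
```

Swap can go either in a small file on /, or as another small raid1 so a disk failure doesn't take down running processes.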
|
This is a fairly obvious post and I shouldn't be making it, but I just wanted to point out to everyone that airflow is important with little NAS boxes. I hid my DNS-321 with two Seagate 1TB drives in a cabinet to obtain the necessary WAF for our apartment...I checked it yesterday, and the system config page was stating 175F for system temperature. Not to mention that it was uncomfortably hot to the touch - I had it crammed in the cabinet along with a battery backup and a switch. I ran cable, extended it behind a chair in the living room with another gigabit switch and now it's a much more acceptable 120F.
|
# ? Sep 14, 2009 02:45 |
|
H110Hawk posted:You are spending far too much on the UPS. Just grab some cheapo thing that can run your machine for 5 minutes and then shut it down via RS-232/USB. This will give you more holdup for half the price: Ok, cheaper UPS. H110Hawk posted:You seem to have a need for speed with the CPU, but what realistic values are you looking to push from this machine? Is it just for NAS duty, or do you hope to offload HTPC functions like video transcoding? Yes it has to be able to transcode 1080P x264 and deliver it to my PS3 over the gigabit network. That will probably be the most taxing thing the box does. I would also like to use it for Asterisk, and other fun random projects that I dream up. Also I am a programmer by trade so you never know what this box might end up doing. H110Hawk posted:IF not, I would go to dual core for half the price and still maintain a lot of flexibility. This chipset likely won't get you very far unless it's always single stream, and even then: The server will be ripping/encoding DVDs to back them up and make getting at them easier.
|
# ? Sep 14, 2009 14:35 |
|
roadhead posted:Ok, cheaper UPS. Glad to help quote:Yes it has to be able to transcode 1080P x264 and deliver it to my PS3 over the gigabit network. That will probably be the most taxing thing the box does. I would also like to use it for Asterisk, and other fun random projects that I dream up. Also I am a programmer by trade so you never know what this box might end up doing. Cool, as long as you have your reasons! Have fun! Please add IPv6 to rtorrent if you're looking for a project. http://libtorrent.rakshasa.no/ticket/1111
|
# ? Sep 14, 2009 16:26 |
|
roadhead posted:Ok, cheaper UPS. I plan on building something similar, using the Norco 4220 SAS box. I'm planning on using VMware ESXi and a shitload of loopback voodoo to use ZFS as a large storage pool accessed via Samba/iSCSI/NFS by the various computers in the house, with VMs run as appropriate for whatever programs people want to use. We'll see how well ESXi plays with the hardware I'm getting, and how much of a prick all the setup will be.
|
# ? Sep 18, 2009 13:31 |
|
Methylethylaldehyde posted:I plan on building something similar, using the Norco 4220 SAS box. I'm planning on using VMware ESXi and a shitload of loopback voodoo to use ZFS as a large storage pool accessed via Samba/iSCSI/NFS by the various computers in the house, with VMs run as appropriate for whatever programs people want to use. We'll see how well ESXi plays with the hardware I'm getting, and how much of a prick all the setup will be. So FreeBSD or Solaris for hosting the RAID-Z?
|
# ? Sep 18, 2009 14:48 |
|
ZFS in FreeBSD 8 just went non-experimental: http://svn.freebsd.org/viewvc/base?view=revision&revision=197221 I've been running ZFS in FreeBSD 7.2 and it's been working great, but I don't push it very hard.
|
# ? Sep 18, 2009 15:09 |
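For anyone curious what the raidz setup being discussed actually involves, a minimal single-parity pool under FreeBSD takes only a couple of commands (disk names are placeholders for whatever da/ad devices you actually have, and zpool create will eat whatever is on them):

```shell
# Create a raidz (single-parity, roughly raid5-equivalent) pool
# from four whole disks
zpool create tank raidz da0 da1 da2 da3

# Carve out a filesystem within the pool and check pool health
zfs create tank/media
zpool status tank
```

No partitioning, no separate volume manager, no newfs; that's most of the appeal over the mdadm+lvm stack discussed earlier in the thread.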
|
complex posted:ZFS in FreeBSD 8 just went non-experimental Is FreeBSD also your host OS or is the guest of a Hypervisor like ESXi or something?
|
# ? Sep 18, 2009 17:33 |
|
Well, if you're running ESXi, you have to be running it on bare metal frankly, so FreeBSD would have to be a guest OS with direct I/O access to all the drives in the RAID. I'd normally go with a hardware setup to run OpenSolaris and run Solaris Zones and/or VirtualBox VMs under it to get anything else I need. The problem here is that virtualization doesn't automagically get you support for hardware that wouldn't be found on your host OS (with some exceptions - my USB SmartCard for work works under Windows XP 32-bit and I run a VM to access my VPN as a result).
|
# ? Sep 18, 2009 17:54 |
|
necrobobsledder posted:Well, if you're running ESXi, you have to be running it on bare metal frankly, so FreeBSD would have to be a guest OS with direct I/O access to all the drives in the RAID. Two different people responded, I was asking complex if *he* personally ran FreeBSD as his MAIN OS or if he was using a hypervisor. I've thought about the pros and cons of each, but I am curious what other people with actual running machines are doing. I'll probably just run FreeBSD directly unless someone can make a good argument for doing otherwise. roadhead fucked around with this message at 17:35 on Sep 19, 2009 |
# ? Sep 18, 2009 20:36 |
|
|
I run fileservers at home and at work on FreeBSD as the host OS. At home I use ZFS.
|
# ? Sep 18, 2009 22:54 |