Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

I run WHS, and I'm getting ready to move from a total of 10TB (6 drives) to 20TB (adding 5 drives) of storage. Unfortunately, I need to add more SATA ports.

Any recommendations for a PCI-Express card that offers an individual-drive (JBOD) mode? I don't need RAID, but from what I can tell the quality performance cards are all RAID cards anyway.

I guess I could go with PCI as well.

I've currently got 4 drives on a cheap SATA PCI card, and performance is somewhat meh; the drives on it seem to max out around 30-35MB/s...


Thermopyle
Jul 1, 2003


Phatty2x4 posted:

Take a look at this card:

http://www.amazon.com/Supermicro-Add-Card-AOC-SASLP-MV8-controller/dp/B002KGLDXU/ref=sr_1_5?ie=UTF8&s=electronics&qid=1277948232&sr=8-5

I believe it is supported by WHS. All you need to do is pick up a bracket and break outs.

Do you really need a bracket? It seems like it'd work without one. If you do need one, where can I find such a bracket? Searches for "bracket" and "pci bracket" turn up plenty of results, but nothing that seems useful...

Thermopyle
Jul 1, 2003


ATLbeer posted:

Thread needs more pictures



14 drives with sizes ranging from 250GB to 2TB for a total of ~17TB in a WHS.

Thermopyle
Jul 1, 2003


PopeOnARope posted:

How is WHS in terms of usability? I might need to VM it and see how I feel about its use.


I have 17 hard drives in my WHS.

You just plug an HD in, boot, and click "Add"...that's basically it. It's all run from the WHS console on the client, but if you want you can remote desktop into the server and get a regular Windows Server 2003 desktop, so you can do...Windows Server 2003-type stuff with it.

WHS is made with ease-of-use as priority #1.

Here's the SH/SC thread for it.

Thermopyle
Jul 1, 2003


Methylethylaldehyde posted:

I just can't get over the corruption bugs that sat there for like 4 months before someone fixed them. This is why I'm using ZFS, because gently caress having my poo poo look like it's there, but be silently hosed up.

That makes it sound like MS was just twiddling their thumbs for four months and then decided to fix some minor bug. That isn't true at all.

Nowadays you can pretty much view WHS as a bunch of regular NTFS-formatted drives plus a process that spreads files among all the drives and keeps copies of files on more than one drive for folders with duplication turned on. You can take any one of the drives out, put it in another NTFS-capable system, and read all the files on it.

Thermopyle
Jul 1, 2003


HorusTheAvenger posted:

I hope you're not implying it's good for small backups.

I don't use it since WHS does its own backup thing, but I generally hear good things about Win7's backup program.

Thermopyle
Jul 1, 2003


I'm thinking about moving on from WHS.

My main requirements are:
* Handle many (16 right now) disks of varying capacities (500GB to 2TB) with a total of over 17TB of raw disk space.
* One of my favorite features of WHS is not having to worry about multiple partitions...it's just one big pool of storage.
* Some sort of parity to protect from at least 1 (the more the better) drive failure.
* The main purpose of this storage is to store and stream HD video in the home. Streaming isn't too big of a bandwidth hog with hard drives on a gigabit network, but I do copy multi-gigabyte files to/from the array quite often so the closer it comes to saturating gigabit, the better.

Is this raid/lvm/mdadm linux thing still a cool thing to do? Is this guide from the OP still accurate/up-to-date/the best?

I was thinking that a linux thing would be best for me since I do lots of python development, and run several server apps written in python on my server...

The main reservation I have right now is that, while I won't have any problems figuring out how to set this up, I'm not terribly interested in futzing with it every day, and that's one thing WHS has provided me...I set it up and never have to think about it.

Also, I will be running this on a fairly powerful machine (P55 mobo/Core 2 Quad/4GB RAM)...does this have any implications for which distro I should use? I'm most familiar with Ubuntu.

Thermopyle
Jul 1, 2003


DLCinferno posted:

You should be fine with almost all your assumptions except potentially the actual RAIDing of your drives. Be aware that unlike WHS, which distributes data across any combination of drive sizes, mdadm will use the smallest drive in the array as the size for each of the devices that array is built from. This means that if you have one 500GB drive and fifteen 2TB drives, you'll waste 1.5TB on each of the fifteen. The way to use all your disk space is to create separate arrays for each combination of drive sizes, but in order to support at least RAID 5 on all your data you'll need at least 3 drives of each size.

Assuming this isn't too much of a burden, you can proceed with the rest of your plan, and if you use lvm you'll be able to combine all your mdadm arrays into a single big pool. Your computer should be plenty powerful enough to handle this and will probably get fairly close to saturating your gig network during reads, especially with that many spindles. Ubuntu is a fine choice for an OS. For reference, one of my servers is running Ubuntu Server edition with a 4 disk mdadm array of 7200rpm 1TB Seagates and I can get about 80-85MB/s transfer, with maybe a 20% cpu hit on the Core2Duo 2.16GHz (I think that's the cpu if I remember right).

Once you set it up and configure samba or whatever you're going to use to access the data, you can pretty much ignore it and it will just work. Make sure to set up mdadm notifications though; if you lose a disk you'll want an email or something right away so you can replace it.
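The notification setup mentioned above boils down to a couple of lines (a sketch assuming Ubuntu's mdadm packaging; the address is obviously a placeholder):

```shell
# /etc/mdadm/mdadm.conf -- where the monitor sends its alerts:
MAILADDR you@example.com

# Ubuntu starts the monitor daemon for you; started by hand it would be:
mdadm --monitor --scan --daemonise

# Send a fake "TestMessage" alert for each array to check the mail path:
mdadm --monitor --scan --oneshot --test
```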

Thanks. This is helpful. I've got enough 2TB and 1.5TB drives, but I'm going to have a problem with only having two 1TB, two 750GB, one 500GB, and one 400GB.

Hrmph.

Thermopyle
Jul 1, 2003


DLCinferno posted:

In that case, you do actually have enough drives to safely do what Saukkis suggested. For example, if you didn't mind losing 150GB, you could create 250GB partitions on each of the drives and build four RAID5 arrays from those partitions. Each array would have only one partition per drive, so you could lose an entire disk without losing any data. A little more complex to set up, but it would work.



Oh yeah, that would work. Thanks!

Now, I just have to work out some sort of plan for moving 12 TB of data from WHS to Ubuntu.

My first thought is to use an older P4 PC as a temporary server, install Ubuntu, move as many hard drives as possible from WHS into it...up to my free space, copy data over the network to the Ubuntu server to fill those up, remove more from WHS, rinse, repeat.

The problem with that plan is that Ubuntu actually needs to end up on my current WHS machine. Are arrays created on one machine easily transferable to another machine with different hardware?

Thermopyle
Jul 1, 2003


So what are you guys using to backup your Windows client PCs to your server/NAS?

I asked this in the Windows Megathread, but thought maybe I'd get more response here, since it doesn't seem to be getting any traction over there.

Windows 7's built-in backup tool is fantastic, with image-based incremental backups. The only problem is that only Pro and Ultimate support backing up to a network location.

Thermopyle
Jul 1, 2003


I've got 3 different SATA/RAID controller cards that I just use to add SATA ports.

I'd like to consolidate these. What's a good solution for adding 8 SATA ports, preferably via PCIe?

Thermopyle
Jul 1, 2003


IOwnCalculus posted:

Cheapest option is a Supermicro LSI-based card that comes up pretty regularly, though the only issue is the bracket for it is technically for Supermicro's UIO form factor. The connector is still PCIe so it will fit in a PCIe slot but you need to remove the bracket first.

So, this and two of these?

What's the performance like on these?

Thermopyle
Jul 1, 2003


jeeves posted:

That is really disheartening to hear. Right now I have a 2 year old ASUS EEE-Box as my home server, which streams out (via an ethernet cord, not wireless) to my girlfriend's mac laptop, which we have been using as our HTPC/media player for the tv. I keep most of my media archive on two terabyte drives inside my own desktop (separate from the ASUS), but since I like to turn it off a lot to save power, my girlfriend can't easily access the media on it-- plus due to crappy design on my desktop's case, I can't actually install any new hard drives while also having one of those comically large video cards that have become the norm.

I was hoping to migrate all of my archive drives out to a HTPC and use it for both expandable storage and media playing, but if WHS can't do that well then that is a huge letdown. I'd rather not buy another tiny PC just for playback; I'm keeping my ASUS Box as my remote desktop/downloads/low power-24/7 server, and it is not powerful enough to play HD stuff, so I was hoping to have a bit of a beefier machine for the archives/playing that I could turn off/hibernate to save on power when not in use.

Yeah, WHS just isn't built for this sort of scenario. You're going to be much better off with a dedicated server. If you're insistent on not doing that, I would probably just use a regular Win7 install instead of WHS for my HTPC and just share the drives you hook up to it.

I've been serving media off of a WHS to a little Atom-powered nettop hooked up to my HDTV for a long while and couldn't be happier.

Thermopyle
Jul 1, 2003


Any "gotchas" to using these drives in an mdadm raid5 array?

I know there have been some problems with WD Green drives and some sort of NAS solution that's popular in here, but I've lost track of the actual issues.

Thermopyle
Jul 1, 2003


Thermopyle posted:

So, this and two of these?

What's the performance like on these?

I think this got lost on the end of last page...

Just making sure this is the LSI card that everyone talks about (and that those are the right cables).

Thermopyle
Jul 1, 2003


adorai posted:

This is close to what I was referring to, though it has 3 ports instead of 2.

http://cgi.ebay.com/LSI-SAS3041E-HP-4-Port-PCI-E-SAS-SATA-RAID-Controller-/140430819374?pt=LH_DefaultDomain_0&hash=item20b254402e
Nice.

I think I'll get one of these and some fanout cables.

edit: Fanout cables are frickin' expensive, and it seems like there are different types...

edit2: Wait, HP's specs on that card say it has 4 internal SATA ports. The only fan-out cables I can find are Mini-SAS to SATA. I'm ignorant.

edit3: Ok, I don't think this card is what I was talking about. As far as I can tell, you have to have a card with Mini-SAS internal connectors to use fanout cables to multiple SATA drives. This is unfortunate, because I already ordered the card you linked to. :/

Thermopyle fucked around with this message at 22:51 on Nov 20, 2010

Thermopyle
Jul 1, 2003


double post

Thermopyle
Jul 1, 2003


This is what I want, isn't it?

Thermopyle
Jul 1, 2003


Star War Sex Parrot posted:

I've lost track of what you're trying to accomplish, but basically that's an 8-port SAS/SATA card utilizing two miniSAS connectors, so you could use two fan-out cables to get 8 SATA drives attached.

That's what I'm trying to accomplish, thanks.

Thermopyle
Jul 1, 2003


wolrah posted:


Is there anything other than WHS which works reliably and can offer both a pool that single arbitrary size drives can be added to as well as the knowledge that a disk failure will only kill the data on the failed disk, not the entire pool?

I'm experimenting with doing this with mdadm + LVM right now.

It's not as dead simple as adding drives to WHSv1, but it provides RAID5 protection.

It works like this:

Partition your hard drives, and raid5 one partition from each drive together and then LVM them all into a single volume.

For example, if you have a bunch of hard drives with capacities all divisible by 1TB, you make 1TB your partition size. 1TB hard drives get 1 partition, 2TB hard drives get 2 partitions, etc.

Take 1 partition from each drive and make a RAID 5 array out of them. For example, /dev/sda1, /dev/sdb1, /dev/sdc1 all make /dev/md0.

After you're done you have several RAID devices like: /dev/md0, /dev/md1, /dev/md2.

Then you can take all of those and pool them together with LVM.

You can then add a single new hard drive and partition it into 1TB partitions. Then use mdadm to grow each of your existing RAID devices.
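In command form, the whole scheme looks roughly like this (a sketch with made-up device and volume names, assuming three 2TB drives each split into two 1TB partitions; don't paste it blindly):

```shell
# One RAID5 per "layer" of partitions across sdb, sdc, sdd:
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sdb1 /dev/sdc1 /dev/sdd1
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdb2 /dev/sdc2 /dev/sdd2

# Pool the arrays into one big volume with LVM:
pvcreate /dev/md0 /dev/md1
vgcreate pool /dev/md0 /dev/md1
lvcreate -l 100%FREE -n media pool
mkfs.ext4 /dev/pool/media

# Later, a new 2TB drive (sde) gets the same 1TB partition layout and each
# existing array grows by one member:
mdadm --add /dev/md0 /dev/sde1
mdadm --grow /dev/md0 --raid-devices=4
mdadm --add /dev/md1 /dev/sde2
mdadm --grow /dev/md1 --raid-devices=4

# Once the reshapes finish, tell LVM and the filesystem about the new space:
pvresize /dev/md0
pvresize /dev/md1
lvextend -l +100%FREE /dev/pool/media
resize2fs /dev/pool/media
```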

I asked pretty much the same question as you a couple pages back. See there and the following posts for a bit of discussion.

Thermopyle
Jul 1, 2003


As mentioned earlier I'm experimenting with different configurations of mdadm RAID5 arrays and LVM.

I haven't kept up with linux filesystems at all in years. I'm trying to pick a filesystem and have no idea what's good for my purposes. I'm tempted to just stick with ext3 since that was what I used last time I was fooling around with a linux file server, but I don't know if that's still a good choice. ext4 is out now and it's one whole number bigger!

This will be used for 75% streaming of multigigabyte video files and 25% dicking around.

My raw storage space will be starting out with hard drives totaling 18TB in size.

So what filesystem would you pick?

Thermopyle
Jul 1, 2003


DLCinferno posted:

Honestly, you probably won't notice the difference between the two with normal usage. The wiki page gives a pretty clear comparison though.

Personally, I use ext4, but since I didn't want to revert to ZFS at this time I'm really just holding out for BTRFS to go stable.

Thanks.

The Linux RAID wiki gives some advice about setting the stride and stripe-width of the file system.

If I'm doing this correctly, when using LVM you don't actually create an ext4 (or whatever) filesystem on your RAID arrays before you make a volume group out of them. You create the ext4 filesystem on the logical volume after you create it.

Does the stride and stripe-width advice given on the RAID wiki still apply in that case?
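If it does still apply, I think the arithmetic for one of my 4-device/128k-chunk arrays works out like this (just a sketch; "myvg"/"mylv" are made-up names, and the mkfs would run against the logical volume):

```shell
# stride = chunk size / filesystem block size; stripe-width = stride * data disks.
chunk_kb=128
block_kb=4     # mkfs.ext4's default 4k block size
devices=4      # members in the RAID5

stride=$((chunk_kb / block_kb))            # 128 / 4 = 32
stripe_width=$((stride * (devices - 1)))   # RAID5 spends one device on parity: 32 * 3 = 96

echo "mkfs.ext4 -E stride=${stride},stripe-width=${stripe_width} /dev/myvg/mylv"
```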

FWIW, I'm writing an article to pull together all of the stuff I'm learning about this. The point of writing it is mostly to help me remember it all, but also I think it will help others in the future.

edit: While dicking around with mdadm I noticed this:

quote:

~$ sudo mdadm --detail --scan
ARRAY /dev/md0 level=raid5 num-devices=4 metadata=01.02 name=JEHOSHOPHAT:0 UUID=ab793916:3c4fff4d:e66ae03c:d13c7ecf
ARRAY /dev/md1 level=raid5 num-devices=4 metadata=01.02 name=JEHOSHOPHAT:1 UUID=b1238698:deb79d4e:15944bd6:43056aba
ARRAY /dev/md2 level=raid5 num-devices=4 metadata=01.02 name=JEHOSHOPHAT:2 UUID=731b279d:8ee153d0:87d46c5b:83d1349a

Where does this "metadata=01.02" come from? I obviously use "1.2":

quote:

:~$ cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md2 : active raid5 sdc10[4] sdb10[2] sdb9[1] sdb8[0]
2963712 blocks super 1.2 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

md1 : active raid5 sdc9[4] sdc6[1] sdb7[0] sdc7[3]
3180672 blocks super 1.2 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

md0 : active raid5 sdc8[4] sdb6[1] sdb5[0] sdc5[3]
3180288 blocks super 1.2 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

unused devices: <none>

Thermopyle fucked around with this message at 23:24 on Nov 25, 2010

Thermopyle
Jul 1, 2003


Scuttle_SE posted:

so if bad comes to worse, you can easily pull your files from the pool-disks.

That seems like one of the great features of greyhole. You're not dependent upon it at all when it comes to data recovery.

Thermopyle
Jul 1, 2003


Factory Factory posted:

Check the SMART bad sector counts to verify. If you've got sector reallocations, you have a drive that is likely to die and is literally on its way out.

Ignorant person here. I thought it was to be expected that some drives would have bad sectors and that it was only a problem if you started seeing a lot (hundreds? thousands? bajillions?) of them.

Thermopyle
Jul 1, 2003


GhostSeven posted:

I have a question that is best suited to the expertise in this thread! I hope you can give me some advice!

I currently have a mdadm Raid 5 setup with 5x500GB discs running on Ubuntu I have two spare SATA connections on a controller card and I am wondering what my best move is to essentially upgrade my Raid. I am looking to purchase some 2TB drives in place of the 500's.

What are my options here? Ideally I want to keep Raid 5, is there any way for me to swap the drives out for larger ones and keep the raid running?

Sorry I will admit I have a limited knowledge of the mechanics of what I can and can't do with software Raids

Thanks, Sorry if my question is unclear or has been worded oddly!

You can create a "degraded" RAID5 array. This means that you could put two 2TB drives on the two ports you have available and then copy data off of one of your current drives onto that array, remove that 500GB drive, put in another 2TB drive, and add it to the degraded RAID5 array.

Something like this: "mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda /dev/sdb missing" (with --raid-devices=3 you list exactly three members: the two new drives plus the "missing" placeholder).

That creates the array in the same state it would be in if the third member had failed. This means the array is offering you no protection until you add the third 2TB drive.

Disclaimer: I haven't done this, do some more research before you try it. I do think it would work, though.
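Spelled out with invented device names (same disclaimer: I haven't run this), the shuffle would look something like:

```shell
# Two new 2TB drives (sdx, sdy) on the spare ports; build a 3-member RAID5
# with a deliberate hole in it:
mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sdx /dev/sdy missing
mkfs.ext4 /dev/md1
mount /dev/md1 /mnt/new

# ...copy a drive's worth of data over, pull the freed 500GB drive,
# install the third 2TB drive (sdz), then:
mdadm --add /dev/md1 /dev/sdz   # rebuild starts; no redundancy until it finishes
```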


(Or you could spend 30 bucks on a cheap PCI-SATA adapter and add more ports)

Thermopyle
Jul 1, 2003


what is this posted:

Slower, and less reliable? Where do I sign up?

Do you have any evidence that it is less reliable other than the obvious "less time in production than regular raid"?

Thermopyle
Jul 1, 2003


what is this posted:

Not synology's, no. It's too new for me to know about the reliability (hint: that means don't trust it). Their other stuff has been very solid in my experience, and they have some of the better software for consumer NAS devices.

There are countless drobo horror stories out there however. And I do mean countless. I've experienced drobo disasters in person.

So...the answer is "no, unless you're talking about Drobo".

Thermopyle
Jul 1, 2003


what is this posted:

Drobo's been the main company selling devices that do this. Unless you count Windows Home Server, which had a bunch of issues early on with data corruption, and now dropped the feature from the upcoming release because of issues with data corruption and horrible slowdowns in heavy/enterprise usage (admittedly it was fine in small consumer setups).

Hard drives are extremely cheap and in two years you can buy a new rack of drives. You may even want a new NAS. You can expand a RAID set with existing drive sizes without issue and without using the unevenly sized drives faux-raid feature.

The only reason to want different sized drives is because you have a bunch of junky old hard drives lying around, maybe a 250GB drive here, a 500GB drive there, a 1.5TB drive there, and hey just throw out the old small drives and buy a few 1TB, 1.5TB, or 2TB drives and be done with it. Hard drives are really, really cheap.

Giving up speed and reliability just because you have a four year old 250GB drive sitting in a USB enclosure that you think you can save some money on to store your precious animes is a hilarious joke. Buy a drive 10 times the size for $100.

So the answer is still "no, unless you're talking about Drobo".

Thermopyle
Jul 1, 2003


Thanks to advice given earlier in the thread by DLCInferno and others, I've now moved all my data from WHS to an Ubuntu machine with mdadm+LVM.

I've got a mix of drive sizes in multiple arrays...

code:
md5 : active raid5 sdk2[0] sdo1[4] sdm2[3] sdl2[1]
      2927197440 blocks super 1.2 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

md4 : active raid5 sdk1[0] sdn1[4] sdm1[3] sdl1[1]
      2927197440 blocks super 1.2 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

md2 : active raid5 sdb3[0] sdc3[1] sdd3[3]
      2867772928 blocks super 1.2 level 5, 128k chunk, algorithm 2 [3/3] [UUU]

md3 : active raid5 sdh1[1] sdj1[3] sdi1[4] sde1[0]
      5860535424 blocks super 1.2 level 5, 128k chunk, algorithm 2 [4/4] [UUUU]

md6 : active raid5 sda1[0] sdf1[1] sdg1[3]
      488391680 blocks super 1.2 level 5, 128k chunk, algorithm 2 [3/3] [UUU]

md0 : active raid0 sdd1[2] sdb1[0] sdc1[1]
      5855040 blocks 64k chunks

md1 : active raid5 sdb2[0] sdc2[1] sdd2[2]
      58593152 blocks level 5, 64k chunk, algorithm 2 [3/3] [UUU]
I copy to/from the box over a gigabit network at 100-120MB/s (WHS on the same hardware did 60-70 MB/s) and I've got a nice linux machine for dicking around with. My total usable storage is somewhere around 15TB now...

It took frickin' forever copying data off each NTFS drive onto the existing arrays and then expanding the arrays with the freed drive (I probably ended up with 150+ hours of copying/RAID growing), but it's done!

Thanks for the advice, guys.

Thermopyle
Jul 1, 2003


Are there any web interfaces for seeing the status of my RAID arrays and LVM volumes?

Thermopyle
Jul 1, 2003


frogbs posted:

that will literally double my electric bill.

Are you sure about that? I can think of circumstances where this would be the case, but they're definitely outliers.

According to my Kill-A-Watt my Q6600 server with 15 hard drives (not quite that many when I did the measurements I guess...maybe only 10 at that time) uses like 5 to 8 USD a month worth of electricity.

Like I said, you could be right (low energy usage, high energy rates, etc), but just make sure you know what you're talking about.

Thermopyle
Jul 1, 2003


frogbs posted:

I suppose I could break out the Kill-A-Watt and ballpark it, but doubling won't be far off. My energy usage is extremely minimal as is, so adding a Pentium 4 on all day with a 400w power supply is going to impact that significantly. I'll do a little math and come up with a more concrete figure....

FWIW, the size of your power supply doesn't tell you anything about how much power the system actually uses. The 400W is a maximum rating, not a constant draw; the system pulls only what its components demand.

Thermopyle
Jul 1, 2003


G-Prime posted:

Can you go into more detail on this? Read or write or both? If it's just slow writes, that's no big deal in my book. As long as I can get ~10mbit or more reads via USB, I'll be happy.

Off the top of my head, USB 1.1's max theoretical transfer speed is 12Mbit/s and USB 2.0's is 480Mbit/s.

Thermopyle
Jul 1, 2003


what is this posted:

There's also CPU usage associated with USB reads/writes.



That's what I get for not following the thread. I just assume everyone has an awesome quad core server like me! :smug:

Thermopyle
Jul 1, 2003


Wizzle posted:

They sold their hard drive line to Hitachi several years back and Hitachi maintains the same quality.

http://en.wikipedia.org/wiki/Deskstar#Deskstar_failures

I laughed. (I actually have a couple Hitachi Deskstars)

Thermopyle
Jul 1, 2003


surrender posted:

Is it feasible to set up a NAS on a Ubuntu desktop? My family has a spare computer that I recently set up as a web surfing/office machine for visitors, and I'd like to keep this functionality while using it as a server for documents (mostly photos, maybe some MP3s) and to store Windows backups, so I don't need anything powerful. Would I be fine with setting up some SAMBA shares and mapping them as drives on each Windows 7 client?

Yes, it's feasible.
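A minimal smb.conf share would look something like this (share name, path, and group are placeholders):

```ini
[documents]
   path = /srv/documents
   browseable = yes
   read only = no
   valid users = @family
```

Add each person with `sudo smbpasswd -a <username>`, then map `\\server\documents` as a drive from the Windows 7 clients.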

Thermopyle
Jul 1, 2003


Scratch2k posted:

Thanks for that, I never could find a definitive answer in the doco. I have since reinstalled WHS on my file server but am still considering unRAID because all I really need is a big file system with shares defined and unRAID does that (as does WHS) but after spending several hours looking for drivers and mucking around getting WHS configured I can see the attraction in using unRAID.

If you don't care about RAID, it's pretty easy to set up LVM on most linux distributions. Hell, it's pretty easy to set up LVM on top of RAID.

Thermopyle
Jul 1, 2003


Sizzlechest posted:

No HTPC will support Samsung's "continue" function. A HTPC will probably work fine for the 95% of the other functions and I'll probably end up going that route when it's all said and done. I don't think any other solution exists.

That being said, the Synology looks great, but there are a lot of potential issues with Samsung and WD drives. I'm going to give them a call and get the low-down (if possible) on Monday. Maybe the spindown/spinup issues have been resolved in the latest software.

With an HTPC, you don't need Samsung's continue function. That's handled by whatever media software you use...like XBMC.

Thermopyle
Jul 1, 2003


Sizzlechest posted:

You're suggesting spending even more money for another device to accomplish the same thing as I already have to solve a problem with a more expensive and complex alternative. I think you broke some kind of record for the gooniest advice ever.

How dare he give goony advice to a ... wait for it ... goon!??!

Also, your current device doesn't do what you want, and you already said you want an HTPC. What's the problem?


Thermopyle
Jul 1, 2003


nbv4 posted:

I need help NAS thread! I'm one of those people who hoards files and crap like that, but I have never really had a good storage solution set up to manage it all. Up until now I've always just thrown my stuff into a folder on a drive until it gets full, then I buy a new drive, and then repeat.

I recently bought one of those external dual hard drive enclosures, along with two 2tb drives. My plan was to put the two drives into that puppy, RAID-1 them together, and hook it up to my wireless router. I think (but correct me if I'm wrong) that my best bet is to go with an NAS solution rather than through USB since I use a lot of different operating systems. My main desktop is Ubuntu/Windows 7, plus I have a macbook pro I use at work, and I also have an android phone. All of which I want to be able to use my storage with. I assume that if I interface my storage through a network protocol, I won't have to worry about crap like RAID drivers not existing for X operating system, correct?

The problem is that the enclosure I bought really loving sucks. It won't do RAID unless you want your storage Windows only. And the networking feature is completely undocumented, and most likely only works if you install some crazy windows only proprietary driver.

What should I do? Ideally, I'd like to buy a small enclosure like what I just bought. Then I'll install something like Openfiler on it so that I can connect up with windows and crap easily. Do they make small enclosures that are capable of this? It seems to me all these enclosures out there on the market are kind of rinkydink, if you know what I mean... Would I be better off just buying a fullsize case/motherboard/proc to serve all this media? I don't really want to do that because I want something small and hassle free. I also move a lot, so the smaller the better.

Enclosures don't really do RAID. You want a NAS device, like the Synology units everyone is always talking about in here.

Personally, I use a regular PC as my file server, since in addition to managing my hard drives, it can do lots of other poo poo.
