|
I run WHS, and I'm getting ready to move from a total of 10TB (6 drives) to 20TB (adding 5 drives) of storage. Unfortunately, I need to add more SATA ports. Any recommendations for a PCI-Express solution that provides an Individual mode? Don't need RAID, but from what I can tell quality performance cards are all RAID anyway. I guess I could go with PCI as well. I've currently got 4 drives on a cheap SATA PCI card and I find performance to be somewhat meh as the drives on it seem to max out around 30-35MB/s...
|
# ¿ Jun 23, 2010 01:22 |
|
Phatty2x4 posted:Take a look at this card: Do you really need a bracket? It seems like it'd work without one. If you do need one, where can I find such a bracket? Searches for "bracket" and "pci bracket" don't yield much that seems useful...
|
# ¿ Jul 1, 2010 04:14 |
|
ATLbeer posted:Thread needs more pictures 14 drives with sizes ranging from 250GB to 2TB for a total of ~17TB in a WHS.
|
# ¿ Jul 13, 2010 23:42 |
|
PopeOnARope posted:How is WHS in terms of usability? I might need to VM it and see how I feel about its use. I have 17 hard drives in my WHS. You just plug a hard drive in, boot, and click "Add"...that's basically it. It's all run from the WHS console on the client, but if you want to you can remote desktop into the server and you get a regular Windows Server 2003 desktop so you can do...Windows Server 2003-type stuff with it. WHS is made with ease-of-use as priority #1. Here's the SH/SC thread for it.
|
# ¿ Aug 16, 2010 02:37 |
|
Methylethylaldehyde posted:I just can't get over the corruption bugs that sat there for like 4 months before someone fixed them. This is why I'm using ZFS, because gently caress having my poo poo look like it's there, but be silently hosed up. That makes it sound like MS was just twiddling their thumbs for four months and then decided to fix some minor bug. That isn't true at all. Nowadays you can pretty much view WHS like a bunch of regular NTFS-formatted drives with a process running that spreads files amongst all the drives and keeps copies of files on more than one drive for folders that have duplication turned on. You can take any one of the drives out, put it in another NTFS-capable system, and read all the files on it.
|
# ¿ Aug 16, 2010 22:42 |
|
HorusTheAvenger posted:I hope you're not implying it's good for small backups. I don't use it since WHS does its own backup thing, but I generally hear good things about Win7's backup program.
|
# ¿ Oct 29, 2010 02:20 |
|
I'm thinking about moving on from WHS. My main requirements are:

* Handle many (16 right now) disks of varying capacities (500GB to 2TB) with a total of over 17TB of raw disk space.
* One of my favorite features of WHS is not having to worry about multiple partitions...it's just one big pool of storage.
* Some sort of parity to protect from at least 1 (the more the better) drive failure.
* The main purpose of this storage is to store and stream HD video in the home. Streaming isn't too big of a bandwidth hog with hard drives on a gigabit network, but I do copy multi-gigabyte files to/from the array quite often, so the closer it comes to saturating gigabit, the better.

Is this raid/lvm/mdadm linux thing still a cool thing to do? Is this guide from the OP still accurate/up-to-date/the best? I was thinking that a linux thing would be best for me since I do lots of python development and run several server apps written in python on my server... The main reservation I have right now is that, while I won't have any problems figuring out how to set this up, I'm not terribly interested in futzing with it every day, and that's one thing WHS has provided me...I set it up and never have to think about it. Also, I will be running this on a fairly powerful machine (P55 mobo/Core 2 Quad/4GB RAM)...does this have any implications for which distro I should use? I'm most familiar with Ubuntu.
|
# ¿ Nov 7, 2010 04:40 |
|
DLCinferno posted:You should be fine with almost all your assumptions except potentially the actual RAIDing of your drives. Be aware that unlike WHS, which distributes data across any combination of drive sizes, mdadm will require you to choose the smallest size drive within the array as the size to use for each of the devices that array is built from. This means if you have one 500GB drive and 15 2TB drives, you'll waste 1.5TB on each of the 15. The way to use all your disk space is to create separate arrays for each combination of drive sizes, but in order to support at least RAID 5 on all your data you'll need at least 3 drives of each size. Thanks. This is helpful. I've got enough 2TB and 1.5TB drives, but I'm going to have a problem with only having 2x1TB, 2x750GB, 1x500GB, and 1x400GB. Hrmph.
|
# ¿ Nov 8, 2010 03:05 |
|
DLCinferno posted:In that case, you do actually have enough drives to safely do what Saukkis suggested. For example, if you didn't mind losing 150GB, you could create 250GB partitions on each of the drives and build four RAID5 arrays from those partitions. Each array would have only one partition per drive, so you could lose an entire disk without losing any data. A little more complex to set up, but it would work. Oh yeah, that would work. Thanks! Now, I just have to work out some sort of plan for moving 12 TB of data from WHS to Ubuntu. My first thought is to use an older P4 PC as a temporary server, install Ubuntu, move as many hard drives as possible from WHS into it...up to my free space, copy data over the network to the Ubuntu server to fill those up, remove more from WHS, rinse, repeat. The problem with that plan is that Ubuntu actually needs to end up on my current WHS machine. Are the arrays I create on one machine easily transferable to another machine with different hardware?
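(Sketching out that slice layout for my drives so I don't forget. Device names are hypothetical: say the 1TB drives are sda/sdb, the 750GB drives sdc/sdd, the 500GB sde, and the 400GB sdf. One wrinkle: a slice that only exists on two disks has to be a RAID1 mirror, since RAID5 needs at least three members.)
code:
mdadm --create /dev/md0 --level=5 --raid-devices=6 /dev/sd[a-f]1  # 250GB slice present on all six disks
mdadm --create /dev/md1 --level=5 --raid-devices=5 /dev/sd[a-e]2  # disks of 500GB and up
mdadm --create /dev/md2 --level=5 --raid-devices=4 /dev/sd[a-d]3  # disks of 750GB and up
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sd[ab]4   # only the two 1TB disks reach this slice, so mirror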
|
# ¿ Nov 8, 2010 03:31 |
|
So what are you guys using to back up your Windows client PCs to your server/NAS? I asked this in the Windows Megathread, but thought maybe I'd get more response here, since it doesn't seem to be getting any traction over there. Windows 7's built-in backup tool is fantastic, with image-based incremental backups. The only problem is that only Pro or Ultimate supports backing up to a network location.
|
# ¿ Nov 9, 2010 22:10 |
|
I've got 3 different SATA/RAID controller cards that I just use to add SATA ports. I'd like to consolidate these. What's a good solution for adding 8 SATA ports, preferably via PCIe?
|
# ¿ Nov 14, 2010 20:18 |
|
IOwnCalculus posted:Cheapest option is a Supermicro LSI-based card that comes up pretty regularly, though the only issue is the bracket for it is technically for Supermicro's UIO form factor. The connector is still PCIe so it will fit in a PCIe slot but you need to remove the bracket first. So, this and two of these? What's the performance like on these?
|
# ¿ Nov 14, 2010 21:04 |
|
jeeves posted:That is really disheartening to hear. Right now I have a 2 year old ASUS EEE-Box as my home server, which streams out (via an ethernet cord, not wireless) to my girlfriend's mac laptop which we have been using as our HTPC/media player to the tv. I keep most of my media archive on two terabyte drives inside my own desktop (separate from the ASUS) but since I like to turn it off a lot to save power, my girlfriend can't easily access the media on it-- plus due to crappy design on my desktop's case, I can't actually install any new hard drives while also having one of those comically large nice video cards that have become the norm. Yeah, WHS just isn't built for this sort of scenario. You're going to be much better off with a dedicated server. If you're insistent on not doing that, I would probably just use a regular Win7 install instead of WHS for my HTPC and just share the drives you hook up to it. I've been serving media off of a WHS to a little Atom-powered nettop hooked up to my HDTV for a long while and couldn't be happier.
|
# ¿ Nov 18, 2010 23:20 |
|
Any "gotchas" to using these drives in an mdadm raid5 array? I know there has been some problems with WD Green drives and some sort of NAS solution that's popular in here, but I've lost track of the actual issues.
|
# ¿ Nov 20, 2010 01:18 |
|
I think this got lost on the end of last page... Just making sure this is the LSI card that everyone talks about (and that those are the right cables).
|
# ¿ Nov 20, 2010 17:45 |
|
adorai posted:This is close to what I was referring to, though it has 3 ports instead of 2. I think I'll get one of these and some fanout cables. edit: Fanout cables are frickin' expensive, and it seems like there are different types... edit2: Wait, HP's specs on that card say it has 4 internal SATA ports. The only fan-out cables I can find are Mini-SAS to SATA. I'm ignorant. edit3: Ok, I don't think this card is what I was talking about. As far as I can tell, you have to have a card with Mini-SAS internal connectors to use fanout cables to multiple SATA drives. This is unfortunate, because I already ordered the card you linked to. :/ Thermopyle fucked around with this message at 22:51 on Nov 20, 2010 |
# ¿ Nov 20, 2010 22:12 |
|
double post
|
# ¿ Nov 20, 2010 22:51 |
|
This is what I want, isn't it?
|
# ¿ Nov 20, 2010 23:15 |
|
Star War Sex Parrot posted:I've lost track of what you're trying to accomplish, but basically that's an 8-port SAS/SATA card utilizing two miniSAS connectors, so you could use two fan-out cables to get 8 SATA drives attached. That's what I'm trying to accomplish, thanks.
|
# ¿ Nov 20, 2010 23:21 |
|
wolrah posted:
I'm experimenting with doing this with mdadm + LVM right now. It's not as dead simple as adding drives to WHSv1, but it provides RAID5 protection. It works like this: Partition your hard drives, RAID5 one partition from each drive together, and then LVM the arrays into a single volume. For example, if you have a bunch of hard drives with capacities all divisible by 1TB, you make 1TB your partition size. 1TB hard drives have 1 partition, 2TB hard drives have 2 partitions, etc. Take 1 partition from each drive and make a RAID 5 array out of them. For example, /dev/sda1, /dev/sdb1, /dev/sdc1 all make /dev/md0. After you're done you have several RAID devices like: /dev/md0, /dev/md1, /dev/md2. Then you can take all of those and pool them together with LVM. You can then add a single new hard drive and partition it into 1TB partitions. Then use mdadm to grow each of your existing RAID devices. I asked pretty much the same question as you a couple pages back. See there and the following posts for a bit of discussion.
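For reference, a minimal sketch of the whole pipeline (device names and sizes are made up; adjust for your own drives):
code:
# say: two 2TB disks (sda, sdb) and one 1TB disk (sdc), each carved into 1TB partitions
mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2  # only two disks have a second slice, so mirror
# pool the arrays with LVM and put one filesystem on top
pvcreate /dev/md0 /dev/md1
vgcreate pool /dev/md0 /dev/md1
lvcreate -l 100%FREE -n media pool
mkfs.ext4 /dev/pool/media
# growing later: add a new disk's partition, reshape, then expand the LVM side and the filesystem
mdadm /dev/md0 --add /dev/sdd1
mdadm --grow /dev/md0 --raid-devices=4   # older mdadm versions may want a --backup-file= for the reshape
pvresize /dev/md0
lvextend -l +100%FREE /dev/pool/media
resize2fs /dev/pool/media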
|
# ¿ Nov 23, 2010 22:59 |
|
As mentioned earlier I'm experimenting with different configurations of mdadm RAID5 arrays and LVM. I haven't kept up with linux filesystems at all in years. I'm trying to pick a filesystem and have no idea what's good for my purposes. I'm tempted to just stick with ext3 since that was what I used last time I was fooling around with a linux file server, but I don't know if that's still a good choice. ext4 is out now and it's one whole number bigger! This will be used for 75% streaming of multigigabyte video files and 25% dicking around. My raw storage space will be starting out with hard drives totaling 18TB in size. So what filesystem would you pick?
|
# ¿ Nov 24, 2010 08:13 |
|
DLCinferno posted:Honestly, you probably won't notice the difference between the two with normal usage. The wiki page gives a pretty clear comparison though. Thanks. The Linux RAID wiki gives some advice about setting the stride and stripe-width of the file system. If I'm doing this correctly, when using LVM you don't actually create an ext4 (or whatever) filesystem on your RAID arrays before you make a volume group out of them. You actually create the ext4 filesystem on the logical volume after you create it. Does the stride and stripe-width advice given on the RAID wiki still apply in that case? FWIW, I'm writing an article to pull together all of the stuff I'm learning about this. The point of writing it is mostly to help me remember it all, but also I think it will help others in the future. edit: While dicking around with mdadm I noticed this: quote:~$ sudo mdadm --detail --scan Where does this "metadata=01.02" come from? I obviously use "1.2": quote:~$ cat /proc/mdstat Thermopyle fucked around with this message at 23:24 on Nov 25, 2010 |
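In case anyone else goes down this road, the arithmetic behind those two mkfs options is at least simple. A sketch, assuming a 4-drive RAID5 with a 512KB chunk size (check yours with mdadm --detail) and 4KB ext4 blocks; the LV name is the hypothetical one from my earlier post:
code:
# stride = chunk size / block size = 512KB / 4KB = 128
# stripe-width = stride * data disks = 128 * (4 - 1) = 384
mkfs.ext4 -E stride=128,stripe-width=384 /dev/pool/media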
# ¿ Nov 24, 2010 20:45 |
|
Scuttle_SE posted:so if bad comes to worse, you can easily pull your files from the pool-disks. That seems like one of the great features of Greyhole. You're not dependent upon it at all when it comes to data recovery.
|
# ¿ Dec 13, 2010 22:15 |
|
Factory Factory posted:Check the SMART bad sector counts to verify. If you've got sector reallocations, you have a drive that is likely to die and is literally on its way out. Ignorant person here. I thought it was to be expected that some drives would have bad sectors and that it was only a problem if you started seeing a lot (hundreds? thousands? bajillions?) of them.
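(For reference, here's a quick way I've seen to pull those counts, assuming smartmontools is installed and the drive is /dev/sda:)
code:
sudo smartctl -A /dev/sda | grep -iE 'reallocated|pending'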
|
# ¿ Dec 14, 2010 04:36 |
|
GhostSeven posted:I have a question that is best suited to the expertise in this thread! I hope you can give me some advice! You can create a "degraded" RAID5 array. This means that you could put two 2TB drives on the two ports you have available and then copy data off of one of your current drives onto that array, remove that 500GB drive, put in another 2TB drive, and add it to the degraded RAID5 array. Something like this: "mdadm --create --level=5 --raid-devices=3 --force /dev/md0 /dev/sda /dev/sdb missing" That creates the array in the same state as it would be if the third member had failed. This means the array is offering you no protection until you add the third 2TB drive. Disclaimer: I haven't done this, do some more research before you try it. I do think it would work, though. (Or you could spend 30 bucks on a cheap PCI-SATA adapter and add more ports)
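If it behaves like a normal degraded array, finishing it once the third 2TB drive is in should just be (again, hypothetical device names):
code:
mdadm /dev/md0 --add /dev/sdc   # kicks off a rebuild onto the new disk
cat /proc/mdstat                # watch the resync progress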
|
# ¿ Dec 16, 2010 18:03 |
|
what is this posted:Slower, and less reliable? Where do I sign up? Do you have any evidence that it is less reliable other than the obvious "less time in production than regular raid"?
|
# ¿ Dec 20, 2010 23:58 |
|
what is this posted:Not synology's, no. It's too new for me to know about the reliability (hint: that means don't trust it). Their other stuff has been very solid in my experience, and they have some of the better software for consumer NAS devices. So...the answer is "no, unless you're talking about Drobo".
|
# ¿ Dec 21, 2010 03:04 |
|
what is this posted:Drobo's been the main company selling devices that do this. Unless you count Windows Home Server, which had a bunch of issues early on with data corruption, and now dropped the feature from the upcoming release because of issues with data corruption and horrible slowdowns in heavy/enterprise usage (admittedly it was fine in small consumer setups). So the answer is still "no, unless you're talking about Drobo".
|
# ¿ Dec 21, 2010 06:46 |
|
Thanks to advice given earlier in the thread by DLCInferno and others, I've now moved all my data from WHS to an Ubuntu machine with mdadm+LVM. I've got a mix of drive sizes in multiple arrays... code:
It took frickin' forever copying data off each NTFS drive to the existing arrays and then growing the arrays with the freed-up drive (I probably ended up with 150+ hours of copying/RAID growing), but it's done! Thanks for the advice, guys.
|
# ¿ Dec 28, 2010 04:42 |
|
Are there any web interfaces for seeing the status of my RAID arrays and LVM volumes?
|
# ¿ Dec 29, 2010 23:24 |
|
frogbs posted:that will literally double my electric bill. Are you sure about that? I can think of circumstances where this would be the case, but they're definitely outliers. According to my Kill-A-Watt my Q6600 server with 15 hard drives (not quite that many when I did the measurements, I guess...maybe only 10 at that time) uses like $5 to $8 a month worth of electricity. Like I said, you could be right (low energy usage, high energy rates, etc), but just make sure you know what you're talking about.
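Rough math, assuming a ~100W average draw and $0.12/kWh: 0.1 kW x 24 hours x 30 days = 72 kWh a month, which is about $8.64. That's right around where my Kill-A-Watt numbers land.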
|
# ¿ Jan 5, 2011 22:47 |
|
frogbs posted:I suppose I could break out the Kill-A-Watt and ballpark it, but doubling wont be far off. My energy usage is extremely minimal as is, so adding a Pentium 4 on all day with a 400w power supply is going to impact that significantly. I'll do a little math and come up with a more concrete figure.... FWIW, the size of your power supply doesn't have anything to do with how much power a system uses. Just because it's 400W doesn't mean it's using 400W of power. It uses only what the system is demanding.
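(If you want numbers: a 400W PSU supplying, say, a 60W load draws that 60W plus conversion losses...at a typical ~80% efficiency that's 60 / 0.8 = 75W at the wall, nowhere near 400W.)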
|
# ¿ Jan 7, 2011 03:39 |
|
G-Prime posted:Can you go into more detail on this? Read or write or both? If it's just slow writes, that's no big deal in my book. As long as I can get ~10mbit or more reads via USB, I'll be happy. Off the top of my head, USB 1.1's max theoretical transfer speed is 12Mbit/s and USB 2.0's is 480Mbit/s.
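(In MB/s terms: 12Mbit/s works out to about 1.5MB/s and 480Mbit/s to about 60MB/s theoretical, with real-world USB2 throughput usually landing closer to 30MB/s.)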
|
# ¿ Feb 3, 2011 21:15 |
|
what is this posted:There's also CPU usage associated with USB reads/writes. That's what I get for not following the thread. I just assume everyone has an awesome quad core server like me!
|
# ¿ Feb 3, 2011 21:53 |
|
Wizzle posted:They sold their hard drive line to Hitachi several years back and Hitachi maintains the same quality. http://en.wikipedia.org/wiki/Deskstar#Deskstar_failures I laughed. (I actually have a couple Hitachi Deskstars)
|
# ¿ Feb 13, 2011 18:41 |
|
surrender posted:Is it feasible to set up a NAS on a Ubuntu desktop? My family has a spare computer that I recently set up as a web surfing/office machine for visitors, and I'd like to keep this functionality while using it as a server for documents (mostly photos, maybe some MP3s) and to store Windows backups, so I don't need anything powerful. Would I be fine with setting up some SAMBA shares and mapping them as drives on each Windows 7 client? Yes, it's feasible.
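If it helps, the Samba side is only a few lines. A sketch for Ubuntu (the share name and path here are made up):
code:
sudo apt-get install samba
# append a share definition to smb.conf (hypothetical path and share name)
sudo tee -a /etc/samba/smb.conf <<'EOF'
[family]
    path = /srv/family
    read only = no
    guest ok = no
EOF
sudo smbpasswd -a yourusername   # give your Windows login a Samba password
sudo service smbd restart        # the service is "smbd" on recent Ubuntu; older releases call it "samba"
Then map \\servername\family as a drive from each Windows 7 client.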
|
# ¿ Feb 22, 2011 00:36 |
|
Scratch2k posted:Thanks for that, I never could find a definitive answer in the doco. I have since reinstalled WHS on my file server but am still considering unRAID because all I really need is a big file system with shares defined and unRAID does that (as does WHS) but after spending several hours looking for drivers and mucking around getting WHS configured I can see the attraction in using unRAID. If you don't care about RAID, it's pretty easy to set up LVM on most linux distributions. Hell, it's pretty easy to set up LVM on top of RAID.
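For example, pooling a few bare drives into one big filesystem with no RAID at all is just (hypothetical device names):
code:
pvcreate /dev/sdb /dev/sdc /dev/sdd
vgcreate bigpool /dev/sdb /dev/sdc /dev/sdd
lvcreate -l 100%FREE -n shared bigpool
mkfs.ext4 /dev/bigpool/shared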
|
# ¿ Feb 22, 2011 18:15 |
|
Sizzlechest posted:No HTPC will support Samsung's "continue" function. An HTPC will probably work fine for 95% of the other functions and I'll probably end up going that route when it's all said and done. I don't think any other solution exists. With an HTPC, you don't need Samsung's continue function. That's handled by whatever media software you use...like XBMC.
|
# ¿ Mar 6, 2011 20:01 |
|
Sizzlechest posted:You're suggesting spending even more money for another device to accomplish the same thing as I already have to solve a problem with a more expensive and complex alternative. I think you broke some kind of record for the gooniest advice ever. How dare he give goony advice to a ... wait for it ... goon!??! Also, your current device doesn't do what you want, and you already said you want an HTPC. What's the problem?
|
# ¿ Mar 6, 2011 22:22 |
|
nbv4 posted:I need help NAS thread! I'm one of those people who hoards files and crap like that, but have never really had a good storage solution set up to manage it all. Up until now I've always just thrown my stuff into a folder on a drive until it gets full, then I buy a new drive, and then repeat. Enclosures don't really do RAID. You want a NAS device, like the Synology units everyone is always talking about in here. Personally, I use a regular PC as my file server, since in addition to managing my hard drives, it can do lots of other poo poo.
|
# ¿ Mar 11, 2011 23:54 |