DLCinferno
Feb 22, 2003

Happy

Evilkiksass posted:

I have been working on designing a case using an Atom motherboard and two 5-drive hard drive RAID cages as the base for it. Right now I am prototyping it in Google SketchUp, and I was interested in getting your opinions:


[SketchUp mockup image, 1100x857]

Looks like some poor little drives will get mighty toasty stacked together like that. :)

DLCinferno
Feb 22, 2003

Happy
Just thought I'd let people know that the Promise SmartStor NS4300N is on sale at Fry's for $300; it's almost $400 everywhere else. Not a bad deal for a 4-drive SATA 3.0Gb/s NAS with tons of options. My friend has two and highly recommends it.

http://shop2.frys.com/product/5351488

DLCinferno
Feb 22, 2003

Happy

Alowishus posted:

Yeah I have a NS4300N (got it last time it was on sale at Fry's), and it is pretty decent. Promise has been pretty good with OS upgrades, and they have plugin modules to turn it into a DLNA, iTunes and Bittorrent server. The CPU in it is a little weak so you're not going to be able to saturate gigabit with it, but it'll do standard file serving and video streaming duty just fine.

My only complaint about mine is that when streaming a movie from it to my PS3, it occasionally drops out and aborts the movie playback, kicking me back to the PS3 menu. The device then disappears, but shows back up about 10 seconds later and I can start playing again. This could be specific to my unit or even my PS3, so YMMV.
I posted the deal, but I actually just ordered one myself tonight. How is the bittorrent client? I'm using µTorrent on my main box right now, but all I really need is the ability to throttle occasionally and to keep lots of torrents open at a time (~150 would be nice, but 75 would be fine).

I'm hoping the streaming issue doesn't affect normal Windows video playback?

DLCinferno
Feb 22, 2003

Happy
I think I finally found a place that has the Chenbro ES34069 case in stock. It is sold out EVERYWHERE.

Looking forward to building a system on it, looks like a really neat box.

DLCinferno fucked around with this message at 18:26 on Sep 27, 2008

DLCinferno
Feb 22, 2003

Happy
That looks like a sweet case, but goddamn is that site sketchy as hell. Still, for $95 it's definitely worth the risk.

I'm still waiting on a few parts for the Chenbro, but once they arrive I'll post a trip report. Word to the wise: it ended up being considerably more cost and hassle once I had to get a PCI Express x1 riser and a SATA card to match (my motherboard only has 4 SATA connections, and I wanted a decent system drive - which will be a 160GB 2.5"). Here's hoping it all fits without problems. :(

That said, the case is loving sexy as hell, even though it's more than I'd ever pay for a regular case.

DLCinferno
Feb 22, 2003

Happy
I'm running a RAID 5 through mdadm with four disks...I created an ext3 partition on each of the drives before building an array from the partitions. Should/could I have created the array from the raw disks, without partitioning them first, and then partitioned the entire array afterward?

That didn't make sense to me at the time, but I want to make sure I'm choosing the best option (and I don't even know if that is possible).
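
In other words, something like the second approach here instead of the first (just a sketch - device names are examples, not my actual disks):
code:
# what I actually did: partition each drive, then build the array from the partitions
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# what I'm asking about: build the array from the raw disks,
# then put the filesystem straight on the md device
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mkfs.ext3 /dev/md0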

DLCinferno
Feb 22, 2003

Happy
Well, so now I'm confused. After creating the ext3 partitions on each of the drives, and building an array across the partitions, I formatted the array (at /dev/md0) with mkfs as ext3, although I didn't create a partition on it first (fdisk sees no partitions at /dev/md0). Everything seems to be working just fine - I've been copying stuff to it over samba all day, but am I going to run into problems in the future? What if my system goes down and I need to assemble the array on a different box? Can I do it without problems?

I'm at about 99% utilization on my existing drives, so if I have to copy everything back in order to rebuild the array I need to do it soon or I'm going to lose data due to lack of disk space (or spend a troublesome amount of time ferrying the data onto different removable drives).

DLCinferno
Feb 22, 2003

Happy

Halibut Barn posted:

It sounds like you created the array properly, but like Stonefish said, the individual partitions should be marked as the "Linux RAID Autodetect" type, not ext3. That's easy enough to fix though, and I don't think it's really that critical. They just wouldn't be counted by mdadm when it's autodetecting arrays.
So I should be able to change the type in fdisk without losing any data, right? It's not a big deal that mdadm can't autodetect the array, but it would be nice.
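
If I'm understanding right, that's just this for each member disk (sda as an example - fdisk is interactive, so these are the keystrokes):
code:
fdisk /dev/sda
#  t   - change a partition's type
#  1   - the partition number
#  fd  - "Linux raid autodetect"
#  w   - write the table and exit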

necrobobsledder posted:

Someone posted their experience with a Chenbro case that would meet your needs.
That's actually the case I'm running this setup on right now. I was going to post a trip report after I finished getting everything running as I wanted. It's a goddamn sexy case, although I've had a few problems with heat since it's kinda cramped.

Here's the specs:

4x 1TB Seagate Barracuda ES.2
Intel DG45FC Mini-ITX motherboard w/4 SATA ports
2GB DDR2-800 RAM
2.0GHz Celeron
160GB 2.5" WD Scorpio SATA
PCIe x1 right-angle riser
Rosewill RC-213 PCIe x1 SATA controller

The case comes with a power supply that has an external brick, so it doesn't crowd the case; so far it has seemed very effective. I chose to add the SATA controller for the system drive, but I had a lot of problems getting an OS working with it. The Windows install kept crashing; openSUSE would install just fine but then sat there on boot and never got into the OS (it did get in once, for some reason, when the CD was still in the drive). I think those problems were due to the controller card expecting drivers to be loaded, but since I didn't have a USB floppy I couldn't test that.

Eventually I decided to try Ubuntu 8.10 and it installed perfectly and booted without a hitch. So far I've been very happy. The heat issues were because I tried running the system without a case fan in the motherboard space (the motherboard and components are physically separated from the 4 drive bays, which have their own two fans). Turns out this motherboard runs kinda hot. The CPU was fine, but the board was hitting 96°C on the graphics/memory hub controller...I dropped a spare fan I had kicking around onto it and it fell to 65°C, which is fine for me, but I do still need to get some airflow moving.

I'll probably buy two fans: one for the hub (I'll just screw it onto the heatsink on the motherboard) and one for exhaust in the motherboard space. There's not much room once everything is in there, and it probably won't hold anything bigger than a 40mm fan, but it should be fine in the end. I'm also going to have to get a new splitter for the 3V power cable...the one spare they give you has a SATA connector, but a 4-pin floppy connector. There is a 3-pin fan connector on the motherboard (apart from the CPU fan, of course).

The SATA controller for the system drive BARELY fit, and it definitely needed the right-angle riser to do so, which was kinda hard to find. Even then it isn't very secure. There's also a PCI riser that Chenbro makes specifically for the case (I have one if you would like it), but obviously you'll need a motherboard with a slot for it; the Intel one I bought only has one expansion slot. The expansion card kinda blocks the minimal rear airflow...I wish I had Dremeled it a bit to open up the flow, although there are side vents in the case right on top of the CPU, which I'm sure helps.

There is a slimline DVD drive slot, a memory card reader slot (you've gotta buy the Chenbro-sanctioned one, I think), and a little 2.5" drive holder that fits very nicely under the DVD slot and holds the drive securely. The biggest problem is getting enough ports on your motherboard to hook everything up, although the memory card reader runs off USB.

Overall thoughts - with the case all buttoned up, it looks awesome. The 4 hot-swappable drive bays are sexy looking, and the whole thing is very solid and well-built. No lovely plastic holding your drives; they are satisfyingly secure. The fact that the drives have plenty of cooling and are completely segregated from the rest of the system's heat is very nice. The only downside is finding the right mix of components to support everything you want in such a small space (mini-ITX), which can affect cooling as well.

I'll post some pictures tomorrow evening, and feel free to ask me any questions. Definitely the most I've ever spent on a case (~$200, it was drat hard to find one too), but well worth it for something that is going to be on 24x7 acting as a DHCP/DNS/file/torrent server.

I chose to use software RAID even though the Intel motherboard supports RAID (not sure if it is true hardware or fake raid) because I wanted to learn and control the system more.

DLCinferno
Feb 22, 2003

Happy
Fibre? Hope you have a good reason for it, though, because it will cost a pretty penny.

DLCinferno
Feb 22, 2003

Happy

Lobbyist posted:

So I just purchased 5 1TB SATA drives. Am I doomed to failure if I run them in Linux software RAID5? ZFS still only runs on Solaris? What are my options?
Why would you think you'd be doomed to failure? I'm running Linux software RAID (mdadm) right now for my file server and I love it. Today's processors are more than capable of handling software RAID without getting taxed, even with heavy file copies and other activity.

I decided against hardware raid because I didn't want to be tied to a specific raid controller and frankly I couldn't be happier about that decision. Someone spilled a beer in my office on the filing desk next to the server, which ran down into the top of the case and fried the motherboard. Black melted plastic and everything.

I was worried about the drives, but I pulled them out, plugged them into my main computer and typed a single mdadm --assemble command and immediately I could mount and use the entire array. Otherwise I would have been waiting for some new hardware to arrive and hoping the specific controller was still manufactured. Not to mention how easy it is to grow the array when I need to.
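
The whole recovery was basically just this (device names here are examples, not what they actually were on my box):
code:
# with the drives plugged into the new machine:
mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1
mount /dev/md0 /mnt/array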

Lots of people I talked to were ambivalent about software RAID, but no one could really say why beyond the fact that they didn't trust it. I've had only good experiences and would happily recommend it. When I rebuild my file server I'll probably look at ZFS, just because it seems cool and accounts for the RAID write hole.

DLCinferno
Feb 22, 2003

Happy
This is disappointing:

Snow Leopard kisses ZFS bye-bye

Although I don't use anything Mac, I'd like to see wider adoption of ZFS.


On a side note, does anyone have suggestions for a case that can house at least 8 3.5" internal drives with good cooling for the drives and a decent layout?

DLCinferno
Feb 22, 2003

Happy

Interlude posted:

What's the going opinion of using the Western Digital "Raid Edition" drives as opposed to the Black versions? Worth the $55 price increase for a RAID-5 array?
They're basically the same drive except for one firmware difference that cripples the Black drives in a RAID setup. Unfortunately, WD charges a shitload for the privilege of that one feature, which is totally bogus in my opinion. Here's why you don't want to use the desktop drives in an array:

quote:

Question
What is the difference between Desktop edition and RAID (Enterprise) edition hard drives?

Answer
Western Digital manufactures desktop edition hard drives and RAID Edition hard drives. Each type of hard drive is designed to work specifically in either a desktop computer environment or a demanding enterprise environment.

If you install and use a desktop edition hard drive connected to a RAID controller, the drive may not work correctly unless jointly qualified by an enterprise OEM. This is caused by the normal error recovery procedure that a desktop edition hard drive uses.

When an error is found on a desktop edition hard drive, the drive will enter into a deep recovery cycle to attempt to repair the error, recover the data from the problematic area, and then reallocate a dedicated area to replace the problematic area. This process can take up to 2 minutes depending on the severity of the issue. Most RAID controllers allow a very short amount of time for a hard drive to recover from an error. If a hard drive takes too long to complete this process, the drive will be dropped from the RAID array. Most RAID controllers allow from 7 to 15 seconds for error recovery before dropping a hard drive from an array. Western Digital does not recommend installing desktop edition hard drives in an enterprise environment (on a RAID controller).

Western Digital RAID edition hard drives have a feature called TLER (Time Limited Error Recovery) which stops the hard drive from entering into a deep recovery cycle. The hard drive will only spend 7 seconds to attempt to recover. This means that the hard drive will not be dropped from a RAID array. Though TLER is designed for RAID environments, it is fully compatible and will not be detrimental when used in non-RAID environments.
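
(Side note: if you're curious what a particular drive is set to, newer versions of smartmontools can query - and sometimes set - the SCT error recovery timer. Treat this as a sketch; plenty of desktop drives simply refuse the set command or forget it after a power cycle.)
code:
# show the current error recovery setting, if the drive exposes it
smartctl -l scterc /dev/sda

# attempt to set read/write recovery to 7 seconds (values are in tenths of a second)
smartctl -l scterc,70,70 /dev/sda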

DLCinferno
Feb 22, 2003

Happy

necrobobsledder posted:

Psst, you can use a software utility from WD to enable TLER on even the Green drives. The primary reason to buy the Black series of drives is for the warranty and the far superior hardware (extra drive head motors apparently), not for the firmware features. To me, I think of extra warranty as insurance, and for maybe $15 more, I can get nearly double the insurance, so it's great as an external backup drive for me.

You'll want the RAID edition drives if you're willing to pay WD to run that utility on the drives for you. Otherwise, I'd stick with Black for performance and warranty. I personally use Green drives at home because I just sell my primary storage drives before the warranties are up and it's worked alright for me.
I did not know that and I wish I had when I was buying my last set of drives. :(

DLCinferno
Feb 22, 2003

Happy

cypherks posted:

How many disks do you have in your R5 setup? I'm confused as to why you'd run backups against an R5 array...
I sense a storm gathering...


(p.s. - also, he's talking about backing up to his server)

DLCinferno
Feb 22, 2003

Happy
Has anyone tried or is familiar with GlusterFS? Sounds like an interesting excursion.

quote:

GlusterFS is a clustered file-system capable of scaling to several peta-bytes. It aggregates various storage bricks over Infiniband RDMA or TCP/IP interconnect into one large parallel network file system. Storage bricks can be made of any commodity hardware such as x86-64 server with SATA-II RAID and Infiniband or GigE interconnect.

DLCinferno
Feb 22, 2003

Happy

KennyG posted:

I rewrote the thing in .NET and modeled raid arrays from 2-64 disks 1000 times each. I just optimized the code and am working on a 1,000,000 rep sim to try and improve the reliability of some of the averages.

If you want to know about how long you can expect your raid array to last - check this out.


Wow, that is great. Very nice job. Looking forward to the 1M-rep run.

DLCinferno
Feb 22, 2003

Happy
So I built an OpenSolaris server and I'm absolutely loving ZFS so far. I was running mdadm on Ubuntu before (and have had great success with OS software RAID), and it's been an interesting venture moving to ZFS. Want to share a file system with a Windows machine? Single command: zfs set sharesmb=on pool/media. Done! No Samba configuration; it's all automatic, and the share is mounted and ready after each reboot without any config editing. Couldn't be easier.
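
The whole setup was more or less just this (pool and disk names are examples, not my exact layout):
code:
# create a raidz pool across four disks, make a filesystem, share it over CIFS
zpool create tank raidz c8t1d0 c8t2d0 c8t3d0 c8t4d0
zfs create tank/media
zfs set sharesmb=on tank/media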

I'd be happy to go into more details on the setup if anyone is interested (including some challenges getting rtorrent working on Solaris), but what I'm really interested in is some feedback on disk spindown.

Basically, what I'd like to know is what people think about MTBF and its relationship to disk spin up/down. Right now I've got the 4 drives in my array set to spin down after 4 hours of inactivity, which reduces power consumption from 82 watts to 54 watts. However, I've also read that each spinup event is roughly equivalent to 10 hours of activity on the disk. The electricity savings are negligible (about $22 a year if all disks were always spun down), so I guess I'm asking: is it harder on the disks to leave them spinning 24x7, or to have them spin up at least once a day and run for at least 4 hours (I usually turn on music/video when I get home from work)?
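
(That $22 figure is just the 28 watt difference over a year, priced at about $0.09/kWh, which is roughly what I pay - adjust for your own rate:)
code:
echo "0.028 * 24 * 365 * 0.09" | bc    # ~22 dollars per year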

I use the server as a torrent box so it is definitely 24x7, but the torrents are served from the system drive so the array isn't used unless I'm hitting the music/video shares. Should I set the spindown time to something much longer like 24 hours? What I'd like is to maximize the disk lifetimes. I've got a massive Chieftec case with 3 92mm side fans pointed directly onto the drives so the temps are pretty good no matter how hard they are working.

Thoughts?

DLCinferno
Feb 22, 2003

Happy

moep posted:

+1

I too ended up using these packages based on 0.8.2 but they are over a year old (0.8.5 is stable).

Yeah, I used those first, but I wanted the newer version. I found a great link that documented the process; it was also for 0.8.2, but I cribbed some of it for 0.8.5, especially the use of gmake:

http://www.jcmartin.org/compiling-rtorrent-on-opensolaris/

I also found this bug ticket filed for libtorrent:

http://libtorrent.rakshasa.no/ticket/1003

it had a link to two patches, one for libtorrent:

http://xnet.fi/software/memory_chunk.patch

...and the other for rtorrent:

http://xnet.fi/software/scgi.patch

The patch made it so that I could compile the newest version of libtorrent, but rtorrent still wouldn't compile. I think I may have screwed up some system setting, not really sure, but at that point I called one of my friends who has a lot more experience and he was finally able to get it to compile. I'm not sure exactly what he did, but I'll see if he still remembers.

There's also some discussion about the problem on the Solaris mailing list here:

http://mail-index4.netbsd.org/pkgsrc-bugs/2009/05/02/msg031938.html

...it looks like they fixed it for 0.8.2 so maybe once the update makes it into release then rtorrent will be able to compile normally.


movax posted:

How do you have your drives set to spin down? I'm running OpenSolaris Nevada (the new/beta branch thing), and I haven't figured that out yet...

I added the following lines to /etc/power.conf and then ran /usr/sbin/pmconfig to reload the new config.

I'm sure it's obvious, but this sets the four drives to spin down after 120 minutes of inactivity and keeps the system itself always on.
code:
device-thresholds        /pci@0,0/pci1458,b005@1f,2/disk@1,0    120m
device-thresholds        /pci@0,0/pci1458,b005@1f,2/disk@2,0    120m
device-thresholds        /pci@0,0/pci1458,b005@1f,2/disk@3,0    120m
device-thresholds        /pci@0,0/pci1458,b005@1f,2/disk@4,0    120m
system-threshold        always-on
The one downside is that ZFS spins the drives up sequentially when they're coming back online, and it takes about 9 seconds each, so when you hit your array you have to wait for a while. I did run across a script that spins them up concurrently, but I can't find it right now.

DLCinferno
Feb 22, 2003

Happy

moep posted:

Thanks for that. I managed to get libtorrent-0.12.5 compiled but like you I’m stuck on rtorrent-0.8.5. I’ll just wait for the packages to get updated someday, until then I’ll have to settle with 0.8.2.

(I think the reason for the compiler errors is that the required patches listed in the guide (rtorrent-01-solaris.diff, rtorrent-02-event-ports.diff, rtorrent-03-curl-event.diff, rtorrent-04-sunpro.diff and rtorrent-05-sunpro-crash.diff) are for 0.8.2 only and incompatible with 0.8.5.)

That is really the only thing I dislike about OpenSolaris — up–to–date packages are sparse and it requires hours of patching and compiling to get basic stuff working that would require nothing more than one line in the command shell of a linux distribution. :v:

I haven't been able to get hold of my friend, but I should have been clearer...I didn't install the patches from the guide, although I did use the newest versions of the packages.

DLCinferno
Feb 22, 2003

Happy

adorai posted:

Nope, ZFS is not GPL and will never be integrated into the kernel. It can run in userland, but will not perform well.

Not necessarily true:

http://www.sun.com/emrkt/campaign_docs/expertexchange/knowledge/solaris_zfs_gen.html#10

Wouldn't expect it soon though.



On another note - I'm curious if anyone has experience running MogileFS?

http://www.danga.com/mogilefs/

DLCinferno
Feb 22, 2003

Happy
edit: I'll respond tomorrow with more details

DLCinferno
Feb 22, 2003

Happy

Debrain posted:

Anyone have any experience updating the firmware to the Seagate Barracuda 7200.11 drives to SD1A and running it in raid? I have one of the 500gig hard drives laying around and figured I could learn how to set up a raid. Seems like a lot of people are complaining about all the problems with their hard drives not showing up in bios.

I have four of the 1TB drives that had the firmware issue. I never had any problems with them, but I decided not to risk it and upgraded all the drives. Everything went smoothly; just make sure you read up on it and get the right firmware version for your drive. It's a pretty simple process in the end.

DLCinferno
Feb 22, 2003

Happy
How cool does it keep the drives? And is the fan pretty loud?

DLCinferno
Feb 22, 2003

Happy
I've got that case (it's roughly the size of an end table) and with 90-degree cables the door fits just fine.

DLCinferno
Feb 22, 2003

Happy

soj89 posted:

Cost is a big issue.

This one little sentence will completely define the end result. Literally, the scale could be from $800 spent stuffing a whitebox with some TB drives to $25k for a lower-end enterprise solution.

What is your budget?

DLCinferno
Feb 22, 2003

Happy
I'm also shopping for an OpenSolaris replacement (Nexenta might be it), but fyi:

http://github.com/behlendorf/zfs/wiki

DLCinferno
Feb 22, 2003

Happy

FISHMANPET posted:

I wish my OpenSolaris server had dual NICs so I could run my VM off of one and the actual OS on another. I've had terrible luck with both Xen and VirtualBox and my NICs where after a week or two of uptime the connection will stop for a few seconds up to a few minutes. Makes streaming things and working on the machine pretty unbearable.

Why not try ESXi?

DLCinferno
Feb 22, 2003

Happy

Thermopyle posted:

I'm thinking about moving on from WHS.

My main requirements are:
* Handle many (16 right now) disks of varying capacities (500GB to 2TB) with a total of over 17TB of raw disk space.
* One of my favorite features of WHS is not having to worry about multiple partitions...it's just one big pool of storage.
* Some sort of parity to protect from at least 1 (the more the better) drive failure.
* The main purpose of this storage is to store and stream HD video in the home. Streaming isn't too big of a bandwidth hog with hard drives on a gigabit network, but I do copy multi-gigabyte files to/from the array quite often so the closer it comes to saturating gigabit, the better.

Is this raid/lvm/mdadm linux thing still a cool thing to do? Is this guide from the OP still accurate/up-to-date/the best?

I was thinking that a linux thing would be best for me since I do lots of python development, and run several server apps written in python on my server...

The main reservation I have right now is that, while I won't have any problems figuring out how to set this up, I'm not terribly interested in futzing with it everyday, and that's one thing WHS as provided me...I set it up and never have to think about it.

Also, I will be running this on a fairly powerful machine (P55 mobo/Core 2 Quad/4GB RAM)...does this have any implications for which distro I should use? I'm most familiar with Ubuntu.

You should be fine with almost all of your assumptions, except potentially the actual RAIDing of your drives. Be aware that unlike WHS, which distributes data across any combination of drive sizes, mdadm will use the smallest drive in an array as the size for every device that array is built from. That means if you have one 500GB drive and fifteen 2TB drives, you'll waste 1.5TB on each of the fifteen. The way to use all your disk space is to create separate arrays for each combination of drive sizes, but in order to support at least RAID 5 on all your data you'll need at least 3 drives of each size.

Assuming this isn't too much of a burden, you can proceed with the rest of your plan, and if you use LVM you'll be able to combine all your mdadm arrays into a single big pool. Your computer is plenty powerful enough to handle this and will probably get fairly close to saturating your gigabit network during reads, especially with that many spindles. Ubuntu is a fine choice for an OS. For reference, one of my servers runs Ubuntu Server with a 4-disk mdadm array of 7200rpm 1TB Seagates, and I get about 80-85MB/s transfer with maybe a 20% CPU hit on the Core 2 Duo 2.16GHz (I think that's the CPU, if I remember right).

Once you set it up and configure Samba or whatever you're going to use to access the data, you can pretty much ignore it and it will just work. Make sure to set up mdadm notifications, though; if you lose a disk you'll want an email or something right away so you can replace it.
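
If it helps, here's a rough sketch of the arrays-into-one-LVM-pool part plus the notification bit (array, volume group, and email names are placeholders):
code:
# glue two mdadm arrays into a single big volume
pvcreate /dev/md0 /dev/md1
vgcreate storage /dev/md0 /dev/md1
lvcreate -l 100%FREE -n media storage
mkfs.ext4 /dev/storage/media

# email alerts: add a MAILADDR line to mdadm.conf and make sure
# the mdadm monitor daemon is running
echo "MAILADDR you@example.com" >> /etc/mdadm/mdadm.conf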

DLCinferno
Feb 22, 2003

Happy

Saukkis posted:

Another way to accomplish the same is to split all the drives into suitably sized partitions, create arrays from the partitions and then combine them with LVM. I'm using an extreme version of this scenario with all my drives split to 10+ partitions. I did it for flexibility when changing and adding drives before RAID expansion was a practical option in Linux.

True, but I didn't recommend that because you need to be pretty careful about how you choose RAID levels for the partition arrays and which partitions go into the same array; otherwise a single drive failing could wipe out an entire array.

In a simple example, assume two 500GB drives and one 1TB drive. Partition the 1TB drive in half and create a RAID5 array across the four resulting devices. Unfortunately, if that 1TB drive goes down, it effectively kills two devices in the array and renders it useless.

I'd be curious to see what your partition/array map looks like - it must have taken a while to set up properly if you have over ten partitions on some disks?

DLCinferno
Feb 22, 2003

Happy

Thermopyle posted:

Thanks. This is helpful. I've got enough 2TB and 1.5TB drives, but I'm going to have a problem with only having 2x 1TB, 2x 750GB, 1x 500GB, and 1x 400GB.

Hrmph.

In that case, you actually do have enough drives to safely do what Saukkis suggested. For example, if you didn't mind losing 150GB, you could create 250GB partitions on each of those drives and build four RAID5 arrays from the partitions. Each array would have only one partition per physical drive, so you could lose an entire disk without losing any data. A little more complex to set up, but it would work.
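
Roughly like this (device names are placeholders - the key is that no array ever contains two partitions from the same physical disk):
code:
# one 250GB partition set per array, one partition per physical drive in each set
mdadm --create /dev/md1 --level=5 --raid-devices=6 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1 /dev/sdf1
mdadm --create /dev/md2 --level=5 --raid-devices=5 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2 /dev/sde2
# ...and so on for the remaining partition sets, then pool the md devices with LVM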

DLCinferno
Feb 22, 2003

Happy

Thermopyle posted:

Are the arrays I create on one machine easily transferable to another machine with different hardware?

Sure are. Literally: unplug the drives from one machine, plug them into the new one, and run one mdadm --assemble command per array. As long as the computer can see the same physical drives/partitions, it doesn't matter what hardware it's running.

That's one of the main reasons I like ZFS/mdadm at home - no need to buy pricey hardware controllers, but you get most of the same benefits.

DLCinferno
Feb 22, 2003

Happy
For anyone looking at or owning any of the 4KB-sector drives, here's a pretty good article on how to compensate for potential performance issues, as well as some discussion about what to expect in the future from drive manufacturers (i.e. more of the same):

http://www.ibm.com/developerworks/linux/library/l-4kb-sector-disks/index.html?ca=dgr-lnxw074KB-Disksdth-LX
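
(The short version for Linux, as far as I can tell, is just to make sure partitions start on a multiple of 4KiB; a quick sketch with /dev/sdb as the example drive:)
code:
# see what the drive reports (4K drives usually show 512 logical / 4096 physical)
cat /sys/block/sdb/queue/logical_block_size
cat /sys/block/sdb/queue/physical_block_size

# start the partition at 1MiB, which is aligned for any sane sector size
parted -a optimal /dev/sdb mklabel gpt
parted -a optimal /dev/sdb mkpart primary 1MiB 100%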

DLCinferno
Feb 22, 2003

Happy

Factory Factory posted:

Regarding VMware vSphere (ESXi), Christ on a cracker; for a piece of software I understand conceptually and have used before, it looks completely impenetrable as far as setting it up goes. Am I right that if I want RAID on a VMware box, it has to be hardware RAID or nothing? It won't put something ZFS-like together and I couldn't do that on a guest unless I did direct access to drives that weren't already storing the hypervisor itself?

Would anyone poo poo in my bed if I just ran Ubuntu server and ran my webserver through VirtualBox? The machine would be overpowered anyway.

VirtualBox on Ubuntu Server will work just fine, although you won't get a nice UI to manage your VMs (at least, I don't know of a remote management UI for headless VBox). Depending on your hardware, ESXi might actually be pretty easy to set up and use. You can provide guests direct access to physical drives, which is what you'd want to do if you were running OpenIndiana or something else with ZFS support. Here are some links to help you do so:

http://www.vm-help.com/forum/viewtopic.php?f=14&t=1025
http://www.vm-help.com/esx40i/SATA_RDMs.php
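
(From what I remember of those guides, it boils down to creating a raw device mapping with vmkfstools on the ESXi console and attaching the resulting .vmdk to the guest; rough sketch with made-up device and datastore names:)
code:
# find the identifier of the physical disk you want to hand to the guest
ls /vmfs/devices/disks/

# create a physical-compatibility RDM pointer file on an existing datastore,
# then add that .vmdk to the VM like a normal disk
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID /vmfs/volumes/datastore1/rdm/disk1-rdm.vmdk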

Before attempting ESXi you should check your hardware to make sure it's compatible. Besides the mostly enterprise-focused officially supported list, the whitebox HCL below is a great resource for home builders. Be sure to read the details - for example, my motherboard is supported but the on-board NIC is not, so I get crazy errors when trying to install ESXi unless I also drop a supported NIC into an empty slot:

http://www.vm-help.com/esx40i/esx40_whitebox_HCL.php

For what it's worth, I'm currently running Ubuntu Server with VirtualBox and it's just fine. I really like VMware and I'm used to it from work, and although comparisons are difficult across different hardware, ESXi seems slightly faster - but then again people report VBox being faster on some OSes, so who knows.
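
(On the management point: headless VBox can at least be driven entirely from VBoxManage on the server; for example, with a VM named "webserver" as a placeholder:)
code:
# start a VM without a display, check on it, and shut it down cleanly
VBoxManage startvm "webserver" --type headless
VBoxManage list runningvms
VBoxManage controlvm "webserver" acpipowerbutton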

Is your webserver Windows or Linux? I'd rather run Windows in VMWare (for no good reason besides the fact that I know it works well already).

DLCinferno
Feb 22, 2003

Happy

adorai posted:

If you are going to run linux, you may as well use KVM. If you are going to run virtualbox, why wouldn't you run openindiana?

As for esxi, a cheapo lsilogic raid sas card can be had on ebay for under $50. Run two drives in raid 1, 6 more + onboard sata can be setup to pass straight to an openindiana guest via RDM, and you can have a party.

I don't quite understand your post.

KVM vs. ESXi is a reasonable comparison, but KVM + Linux vs. ...what, exactly? Why is running VirtualBox a given for OpenIndiana? There are lots of reasons NOT to run OI; for example, it is built on the remnants of OpenSolaris but doesn't have the same major organizational support behind it. I ran OpenSolaris for a year and a half, and I love ZFS, but I'm cautious about OI (and that's not the only article along those lines).

Also, why would you buy a RAID card just to run two drives in RAID 1 and pass the rest through? If his motherboard supports it, why not give the guest OS direct access to the drives he wants? Running the host on RAID 1 isn't that critical; you should always be able to rebuild it elsewhere. A major reason good software RAID is so great is that it isn't tied to any particular hardware. Are you advocating a cheap LSI Logic card for expandability, which might not matter at all if his motherboard already has everything he needs? Or are you saying the host should always be RAID 1'd?

DLCinferno
Feb 22, 2003

Happy

Methylethylaldehyde posted:

OI is basically the last publicly available Sun/Oracle ON build, plus some bugfixes to that code. It's also binary compatible with all the regular solaris crap.

Yeah, I realize that. I'm still running an instance of OI for all the data that was on ZFS when I was running OpenSolaris. My point is that I wouldn't necessarily recommend OI to someone just starting a server when its future is anything but concrete.

I know some people will disagree with my approach, but I'm kind of just treading water on mdadm until btrfs is ready for prime time or a really viable ZFS alternative shows up. I expect to deprecate my OI install in favor of zfs-fuse at some point, although maybe not for my regular datastore.

Meanwhile, Factory has a bunch of options for what to run, but my recommendation is still to research OI carefully and figure out what he needs from it first...which virtualization base to run is almost a separate question.

DLCinferno
Feb 22, 2003

Happy
.

double post

DLCinferno fucked around with this message at 06:02 on Nov 13, 2010

DLCinferno
Feb 22, 2003

Happy

Saukkis posted:

It really doesn't require much cleverness, simply remember to build the arrays from separate sdX devices.

Cool, makes sense.

Saukkis posted:

I think if you use partitions set to the RAID autodetect type you don't even need the assemble command. During boot-up the Linux kernel will see a bunch of partitions that seem to belong to an array and figure out which of them go together.

Didn't work for me when my server failed and I had to move the drives to a new machine, but in either case, it's easy to move drives.

DLCinferno
Feb 22, 2003

Happy

Factory Factory posted:

After using the suggestions ITT to look around more, I've decided to roll my own NAS but not reuse hardware (just because it's such power-hungry stuff). Instead I'm building a Mini-ITX server based off a Chenbro server case with hot-swap bays. Of course, the only Mini-ITX board I could find that had both USB 3.0 and at least 4 SATA ports also takes a nice big 73W TDP Core i3 (I will likely underclock it a bit), but hey, could be worse. I can run a GUI so I can click things and not care about using precious CPU cycles. Chenbro also does a version of the case with a 120W power supply if you want to do an Atom build.

Software-wise, I'm going to see if I can do the following: Ubuntu server installed on an mdadm RAID 0+1 partition set, with the rest of the space on the drives set as RAIDZ + hot spare via zfs-fuse. While a userspace filesystem driver is crappy for "real" server functions (many of which ZFS is crappy for, too, compared to ext4), it should work just fine for general fileserving. If it doesn't, or if I can't set it up for some reason, I'll just do RAID5 + hot spare via mdadm. If I'm *really* missing something, I'll just do one big array and partition with LVM. I nearly went with OpenIndiana or NexentaStor, but familiarity and zfs-fuse won the day for me.

I'm not tracking on your proposed installation plan. You are installing the OS onto an mdadm array? What is going to run mdadm?

FWIW, I've got that exact same case for my backup server, and while I went with the Intel DG45FC board (no USB 3.0), I tried a couple of different things to get a 5th drive into the case, which also has a dedicated spot for a 2.5" drive. A tiny single-drive SATA controller with a right-angle PCI Express riser worked for a while, but was WAY too tight in the case - there's almost no room and it got pretty warm. I thought about wrapping an eSATA cable back into the case from the external port on the motherboard, but that seemed sloppy. I ended up with a USB CF card reader plugged into one of the internal USB ports. It works great, and 8GB is more than enough for Ubuntu Server. If your motherboard supports booting from USB, I'd recommend that route.

DLCinferno
Feb 22, 2003

Happy

Thermopyle posted:

As mentioned earlier I'm experimenting with different configurations of mdadm RAID5 arrays and LVM.

I haven't kept up with linux filesystems at all in years. I'm trying to pick a filesystem and have no idea what's good for my purposes. I'm tempted to just stick with ext3 since that was what I used last time I was fooling around with a linux file server, but I don't know if that's still a good choice. ext4 is out now and it's one whole number bigger!

This will be used for 75% streaming of multigigabyte video files and 25% dicking around.

My raw storage space will be starting out with hard drives totaling 18TB in size.

So what filesystem would you pick?

Honestly, you probably won't notice the difference between the two in normal usage. The wiki page gives a pretty clear comparison, though.

Personally, I use ext4; since I don't want to go back to ZFS at this time, I'm really just holding out for btrfs to go stable.
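
(If you do go ext4 on top of an mdadm array, one thing worth doing at mkfs time is telling it the RAID geometry - I believe the extended options are stride and stripe-width. A sketch, assuming a 4-disk RAID5 with a 512KiB chunk and 4KiB blocks:)
code:
# stride = chunk / block size = 512KiB / 4KiB = 128
# stripe-width = stride * number of data disks (3 of the 4) = 384
mkfs.ext4 -E stride=128,stripe-width=384 /dev/md0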

DLCinferno
Feb 22, 2003

Happy

Thermopyle posted:

Thanks to advice given earlier in the thread by DLCInferno and others, I've now moved all my data from WHS to an Ubuntu machine with mdadm+LVM.

I copy to/from the box over a gigabit network at 100-120MB/s (WHS on the same hardware did 60-70 MB/s) and I've got a nice linux machine for dicking around with. My total usable storage is somewhere around 15TB now...

It took frickin forever copying data off the NTFS drives to existing arrays and then expanding the arrays with that drive (I probably ended up with 150+ hours of copy/RAID growing), but it's done!

Thanks for the advice, guys.

Really good to hear it was successful. Cheers.
