Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

So what are you guys using to back up your Windows client PCs to your server/NAS?

I asked this in the Windows Megathread, but thought maybe I'd get more response here, since it doesn't seem to be getting any traction over there.

Windows 7's built-in backup tool is fantastic, with image-based incremental backups. The only problem is that only the Pro and Ultimate editions support backing up to a network location.

Telex
Feb 11, 2003

Methylethylaldehyde posted:

Out of the box, OpenIndiana is stupid simple to set up.

zpool create tank c0t0d0s0       # create the pool (one whole disk here)
zfs create tank/cifs             # dataset to share
zfs set sharesmb=on tank/cifs    # turn on the CIFS share
passwd god                       # set a password for the account you'll log in with

log in via windows xp/7 with fishmanpet/god and go nuts. The rest of the issues are file/folder permissions and ACLs.

so what kind of file/folder permissions issues are we talking about here? Is it going to be problematic to have a client machine write to those folders via cifs?

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Telex posted:

so what kind of file/folder permissions issues are we talking about here? Is it going to be problematic to have a client machine write to those folders via cifs?

It all depends on what kind of permissions you want to use. chmod -R 777 folder works fine if you don't mind sharing all your crap with everyone. They're basically the standard issues you run into with multi-user and restricted permission sets in any filesystem.
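
e.g. a minimal sketch if you'd rather lock it down instead ('alice' and the path are made-up placeholders):
code:
# give one user full control, the group read/traverse, everyone else nothing
chown -R alice /tank/cifs/private
chmod -R 750 /tank/cifs/private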

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Telex posted:

so what kind of file/folder permissions issues are we talking about here? Is it going to be problematic to have a client machine write to those folders via cifs?
Like I posted on the previous page:

/usr/bin/chmod A=everyone@:rwxpdDaARWcCos:fd:allow zfsnamehere

Then fix the permissions in windows by right clicking the folder.
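
You can double-check what you ended up with afterwards; on Solaris-ish systems ls -V prints the full ACL, one entry per line (a sketch):
code:
/usr/bin/ls -dV zfsnamehere
# should show something along the lines of:
#     everyone@:rwxpdDaARWcCos:fd-----:allow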

Telex
Feb 11, 2003

Oh, I get the impression that trying to use a real Unix-type OS is going to melt my brain.

I have a vague understanding of things, but spitting out that chmod string is waaaaaaay past my level.

Is there any place to go (I'm fine with books) to learn all the necessary stuff with Solaris so I'm not a total idiot?

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Telex posted:

Oh, I get the impression that trying to use a real Unix-type OS is going to melt my brain.

I have a vague understanding of things, but spitting out that chmod string is waaaaaaay past my level.

Is there any place to go (I'm fine with books) to learn all the necessary stuff with Solaris so I'm not a total idiot?

Google and blogs are your best bet. That chmod string basically says anyone can do anything, go nuts. That conveniently lets you set the ACLs in windows.
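
If it helps demystify it, those are just the standard NFSv4 ACL compact letters; decoded, it reads roughly:
code:
# A=everyone@:rwxpdDaARWcCos:fd:allow, piece by piece:
#   A=           replace the ACL with this single entry
#   everyone@    applies to all users
#   rwxp         read_data, write_data, execute, append_data
#   dD           delete, delete_child
#   aARW         read/write attributes, read/write extended attributes
#   cCos         read_acl, write_acl, write_owner, synchronize
#   fd           inherit to new files (f) and directories (d)
#   allow        grant these rather than deny them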

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams

adorai posted:

Like I posted on the previous page:

/usr/bin/chmod A=everyone@:rwxpdDaARWcCos:fd:allow zfsnamehere

Then fix the permissions in windows by right clicking the folder.

Ooof, ACLs, yet another thing I need to get around to learning.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Telex posted:

I have a vague understanding of things, but spitting out that chmod string is waaaaaaay past my level.
If you think I memorized that, you are crazy. I have a file named "solaris notes.txt" that I open and copy and paste from.

movax
Aug 30, 2008

FISHMANPET posted:

Ooof, ACLs, yet another thing I need to get around to learning.

I spent last winter break playing with them, but it was worth it... this is what I set up on mine, and it seems to work OK, except I belatedly realized that I can't execute files off the server anymore. That's easy enough to fix with a script that just finds *.exe files and flips +x on them (one-liner after the code block).
code:
# Movies, Audio, Documents, Software
chmod -R A0=owner@:rwxpdDaARWcCos:d:allow *
chmod -R A1=owner@:rwpdDaARWcCos:f:allow *
chmod -R A2=group@:rxaRcs:d:allow *
chmod -R A3=group@:raRcs:f:allow *
chmod -R A4=everyone@:rxaRcs:d:allow *
chmod -R A5=everyone@:raRcs:f:allow *

# public
chmod A0=owner@:rwxpdDaARWcCos:d:allow .
chmod A1=owner@:rwpdDaARWcCos:f:allow .
chmod A2=group@:rwxpdDaRcs:d:allow .
chmod A3=group@:rwapDRcs:f:allow .
chmod A4=everyone@:rwxpaRcs:d:allow .
chmod A5=everyone@:rwapRcs:f:allow .

# movax (private: group and everyone denied)
chmod -R A0=owner@:rwxpdDaARWcCos:d:allow *
chmod -R A1=owner@:rwpdDaARWcCos:f:allow *
chmod -R A2=group@:rxaRcs:d:deny *
chmod -R A3=group@:raRcs:f:deny *
chmod -R A4=everyone@:rxaRcs:d:deny *
chmod -R A5=everyone@:raRcs:f:deny *
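
The *.exe fix-up I mentioned is about one line (a sketch; adjust the path to wherever the shares actually live):
code:
# put the execute bit back on Windows executables after the blanket ACLs
find /tank -type f -name '*.exe' -exec chmod +x {} +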
Also, I e-mailed Seagate about whether the Barracuda LPs were 4k or not. I just got a reply linking to this, and marketing bulletins here. Not a clear "no, this drive is not 4k", but I'm leaning toward it not being an "Advanced Format" drive, because it doesn't have Seagate's "SmartAlign" trademark all over it.

Pudgygiant
Apr 8, 2004

Garnet and black? More like gold and blue or whatever the fuck colors these are
I'm working overseas as a contractor on a tiny base where power is kind of an issue, and space will be an issue when I leave. Price isn't so much a factor but I don't want or need anything enterprise level at this point. What would my best option be for a small, power efficient 4TB storage solution with GbE and preferably eSATA that would be easy to expand in the future? Is the Drobo FS the way to go?

Goon Matchmaker
Oct 23, 2003

I play too much EVE-Online
I'm fond of the Acer easyStore H340. It has 4 hot-swappable drive bays, so you can stick up to 8TB of storage into it. It's around $300-400.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
So Goons, I'd like some advice. I'm planning to head off to law school, and I'm looking to migrate from an "all drives in one box" setup to a NAS that gives me backup and availability. I don't really know what the roll-my-own options are in depth, though. I have set up a FreeNAS server and used Ubuntu and RHEL in a desktop environment, so I'm comfortable enough but I just don't know the field.

I've been burned seriously by hardware failures before, and RAID 1 saved my rear end no less than 7 times in a year when I got a series of RMA replacements that had the same problem as the drive I sent back.

Basic idea: have a desktop (Win7), a laptop (Win7 or MacOS), and the NAS. Desktop has Data A, laptop has Data B, NAS stores A, B, shared stuff like an iTunes library, and images of both the laptop and desktop. Backup happens transparently via rsync. I also plan to backup the NAS (data but not images) to a separate external drive once every week, which could be used in a pinch if the NAS crapped out.
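
For the rsync piece I'm picturing something like this nightly job (a sketch; paths and the NAS name are placeholders, with rsync on the Windows side via cwRsync/DeltaCopy or similar):
code:
# one-way mirror of the desktop's data to the NAS
rsync -av --delete /cygdrive/c/DataA/ nas:/tank/backup/desktop/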

I've been bouncing between a prebuilt device from QNAP or Thecus and rolling my own server. If I decide to roll my own, I'd have an nVidia 680i board with a C2D E8400 to reuse, complete with the nVidia hardware RAID (though an E8400 could run a software RAID without breaking a sweat). I would want to use a virtualized OS so it could easily be snapshotted and, down the line, migrated.

Either way, I'd be getting all new identical 2TB drives for either RAID5 + hot spare or RAID 6, four drive minimum. I estimate that the raw amount of stuff I have to work with, including system images, would be around 750 GB.

Here are my priorities:
- Solid backup - my files are a record of my life since ~1999.
- Security - keep out prying eyes even if they're hopping on my WiFi with my permission
- Availability - I want to be able to get at everything, especially the class-critical stuff, even if two machines die (desktop, laptop, NAS, NAS backup drive)
- Set it and forget it - I can't be spending hours a week doing IT jockeying when I'm in law school. The "set it" part can be plenty complex, it just needs to have minimal maintenance.
- Fast enough that I don't puke myself with rage. My last FreeNAS server was built out of an Atom nettop and topped out at 1 MB/s over GbE for no obvious reason.
- Budget of around $1000 USD

Nice, but not critical:
- Shadow copy/snapshotting
- Ability to run services/servers beyond basic SMB shares, like a media server or virtualizing up a webserver
- I/O that can saturate GbE
- Ability to use server as a desktop in case other computers poop out at once
- Besides full automatic backup, specific additional backup of critical course files to a flash drive so if the power fails I can just pop that out and head to the library
e: - Oh, yeah, error notification by e-mail rather than checking the front panel lights/web interface
e: - Access from outside the LAN via SSH tunneling or somesuch

Don't care about:
- Torrenting
- Expandability - don't give a crap about adding/replacing drives to expand to dozens of TB of storage, I'll just build new and migrate after law school

Thoughts on plan, anything major I'm missing or overlooking? Setup and software suggestions? Benefits of getting a QNAP or Thecus appliance over roll-my-own besides ease of use that I'm not taking into account? Thanks, Goons.

Factory Factory fucked around with this message at 20:36 on Nov 12, 2010

FISHMANPET
Mar 3, 2007

Sweet 'N Sour
Can't
Melt
Steel Beams
My gut reaction would be to say ESXi with an OpenIndiana guest and ZFS file system.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
What's the best way to install that? Right on the RAID array? On another small RAID 1 pair or a tiny SSD?

e: Snaps, it seems the nVidia RAID is software enough that ESXi can't use it.

de: I just peeked at the WHS megathread. Within my needs, virtualizing that seems like a strong possibility, and familiarity is always nice. Wouldn't lose any disk space unless I did five or more drives, either. It would need a lot of third party software to get all the functionality I want, however, and I have no idea if that's available.

Factory Factory fucked around with this message at 22:32 on Nov 12, 2010

Glimm
Jul 27, 2005

Time is only gonna pass you by

I guess I should have stayed away from the Western Digital Green drives. Bought 3 of them for my NAS (ds410), 2 of them failed within a month. RMA'd them, and now the third original disk is failing (says it is degraded). What drives should I replace these with? I'm pretty sure I'm just going to return them all now.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
I have a pair of Samsung 1TB drives that are solid as a rock as long as you keep their temperatures sane, but the brand seems hit or miss judging by reviews. I've also had good luck with WD Blues, and mixed results with RE3s. Greens definitely poo poo the bed for me, too.

Glimm
Jul 27, 2005

Time is only gonna pass you by

Cool, I'm going to head to Microcenter in a bit to replace them. Hoping it isn't a pain in the rear end to rebuild the array so as not to lose data. I should be able to just swap the disks one at a time and rebuild that way I assume?

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
According to Newegg, yes.

Regarding VMware vSphere (ESXi), Christ on a cracker; for a piece of software I understand conceptually and have used before, it looks completely impenetrable as far as setting it up goes. Am I right that if I want RAID on a VMware box, it has to be hardware RAID or nothing? It won't put something ZFS-like together itself, and I couldn't do that on a guest unless I gave it direct access to drives that weren't already storing the hypervisor?

Would anyone poo poo in my bed if I just ran Ubuntu server and ran my webserver through VirtualBox? The machine would be overpowered anyway.

DLCinferno
Feb 22, 2003

Happy

Factory Factory posted:

Regarding VMware vSphere (ESXi), Christ on a cracker; for a piece of software I understand conceptually and have used before, it looks completely impenetrable as far as setting it up goes. Am I right that if I want RAID on a VMware box, it has to be hardware RAID or nothing? It won't put something ZFS-like together itself, and I couldn't do that on a guest unless I gave it direct access to drives that weren't already storing the hypervisor?

Would anyone poo poo in my bed if I just ran Ubuntu server and ran my webserver through VirtualBox? The machine would be overpowered anyway.

VirtualBox on Ubuntu Server will work just fine, although you won't get a nice UI to manage your VMs (at least, I don't know of a remote management UI for headless VBox). Depending on your hardware, ESXi might actually be pretty easy to set up and use. You can provide guests direct access to physical drives, which is what you'd want to do if you were running OpenIndiana or something else with ZFS support. Here are some links to help you do so:

http://www.vm-help.com/forum/viewtopic.php?f=14&t=1025
http://www.vm-help.com/esx40i/SATA_RDMs.php

Before attempting ESXi you should investigate your hardware to ensure it is compatible. Besides the mostly enterprise-focused officially supported list, this is a great resource for home builders. Be sure to read the details - for example, my motherboard is supported but the on-board NIC is not, so I get crazy errors when trying to install ESXi unless I also drop a supported NIC into an empty slot:

http://www.vm-help.com/esx40i/esx40_whitebox_HCL.php

For what it's worth, I'm currently running Ubuntu Server with VirtualBox and it's just fine. I really like VMWare and I'm used to it from work, and although comparisons are really difficult on different types of hardware, ESXi seems slightly faster - but then again people report VBox faster on some OSes so who knows.
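
Headless VBox is at least scriptable even without a GUI; a quick VBoxManage sketch (the VM name is a placeholder):
code:
# start a VM with no display attached, list what's running, shut down cleanly
VBoxManage startvm "webserver" --type headless
VBoxManage list runningvms
VBoxManage controlvm "webserver" acpipowerbutton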

Is your webserver Windows or Linux? I'd rather run Windows in VMWare (for no good reason besides the fact that I know it works well already).

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Factory Factory posted:

Would anyone poo poo in my bed if I just ran Ubuntu server and ran my webserver through VirtualBox? The machine would be overpowered anyway.
If you are going to run linux, you may as well use KVM. If you are going to run virtualbox, why wouldn't you run openindiana?

As for esxi, a cheapo LSI Logic RAID SAS card can be had on eBay for under $50. Run two drives in RAID 1; 6 more plus onboard SATA can be set up to pass straight through to an openindiana guest via RDM, and you can have a party.
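
The RDM part is one vmkfstools call per disk from the ESXi console (a sketch; the disk ID and datastore paths are placeholders):
code:
# physical-mode raw device mapping: the guest sees the actual disk
vmkfstools -z /vmfs/devices/disks/<disk-id> /vmfs/volumes/datastore1/oi/disk1-rdm.vmdk
# then attach the resulting .vmdk to the openindiana guest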

DLCinferno
Feb 22, 2003

Happy

adorai posted:

If you are going to run linux, you may as well use KVM. If you are going to run virtualbox, why wouldn't you run openindiana?

As for esxi, a cheapo LSI Logic RAID SAS card can be had on eBay for under $50. Run two drives in RAID 1; 6 more plus onboard SATA can be set up to pass straight through to an openindiana guest via RDM, and you can have a party.

I don't quite understand your post.

KVM vs ESXi is reasonable, but not KVM + linux vs ??? . Why is running VirtualBox a given for OpenIndiana? There are lots of reasons NOT to run OI, for example, it is built on the remnants of OpenSolaris but doesn't really have the same major organizational support. I ran OpenSolaris for a year and a half, and I love ZFS, but I'm cautious of OI (not the only article along those lines).

Also, why would you buy a RAID card to run two drives in RAID 1 and have the rest passed through? If his motherboard supports it, why not provide direct access for the drives he wants to the guest OS? Running the host in a RAID 1 isn't so critical; you should always be able to rebuild elsewhere. A major reason good software RAID is so great is that it's not tied to any particular hardware. Are you advocating a cheap LSI Logic card for expandability, which might not matter at all if his motherboard has everything he needs right now? Or are you thinking the host should always be RAID 1'ed?

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

DLCinferno posted:

I don't quite understand your post.

KVM vs ESXi is reasonable, but not KVM + linux vs ??? . Why is running VirtualBox a given for OpenIndiana? There are lots of reasons NOT to run OI, for example, it is built on the remnants of OpenSolaris but doesn't really have the same major organizational support. I ran OpenSolaris for a year and a half, and I love ZFS, but I'm cautious of OI (not the only article along those lines).


OI is basically the last publicly available Sun/Oracle ON build, plus some bugfixes to that code. It's also binary compatible with all the regular solaris crap.

DLCinferno
Feb 22, 2003

Happy

Methylethylaldehyde posted:

OI is basically the last publicly available Sun/Oracle ON build, plus some bugfixes to that code. It's also binary compatible with all the regular solaris crap.

Yeah, I realize that. I'm still running an instance of OI for all my data that was on ZFS when I was running OpenSolaris. My point is that I wouldn't necessarily recommend OI for someone just starting a server when its future is anything but concrete.

I know some people will disagree with my approach, but I'm kinda just treading water in mdadm until btrfs is ready for prime-time or a really viable ZFS alternative is exposed. I expect to deprecate my OI install for FUSE at some point, although maybe not for my regular datastore.

Meanwhile, Factory has a bunch of options on what to run, but my recommendation is still to strongly research OI and what he needs from it first...but which virtualization base to run is almost a separate question.


Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.
My head hurts... feel free to talk to me as if I am an idiot, a small child, or a small idiot.

So, correct me if I'm wrong. Say I want to have a fileserver which is a portable VM. With the goal "Every bit of storage has redundancy," I have a few options for virtualization:

1) Use hardware RAID to run storage drives as one big array, install hypervisor and VMs on that array. Can use any hypervisor, like vSphere, as long as it has drivers for the RAID controller.

2) Use software RAID as one big array, using a ZFS-capable OS as the hypervisor. Limits me to OpenSolaris and variations as the hypervisor. e: or mdadm in Linux?

3) Use a small hardware RAID 1 array, flash drive or SSD to store the hypervisor and guest installations. Use a guest which supports mdadm or ZFS to create a storage pool out of the storage hard drives. Use a second guest connected via iSCSI or somesuch to the ZFS host as the portable fileserver.

Option 2 would be the cheapest; 1 would be the fastest and most straightforward; 3 is flexible but complex and has the most that can go wrong.

And "my webserver" is hypothetical, I just mean any service I would want to run in the future.

Farmer Crack-Ass
Jan 2, 2001

this is me posting irl

Thermopyle posted:

So what are you guys using to back up your Windows client PCs to your server/NAS?

I asked this in the Windows Megathread, but thought maybe I'd get more response here, since it doesn't seem to be getting any traction over there.

Windows 7's built-in backup tool is fantastic, with image-based incremental backups. The only problem is that only the Pro and Ultimate editions support backing up to a network location.

I managed to win a copy of Vista Ultimate a couple of years ago so I've gotten spoiled with being able to use Windows Backup with my server. I'll probably wind up getting Windows 7 Pro when I get around to upgrading so I can keep on using that.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.

DLCinferno posted:

True, but I didn't recommend that because you need to be very clever about how you're choosing your RAID levels on the partition arrays and which ones are going into the same array, otherwise a single drive failing could end up wiping out the entire array.

In a simple example, assume two 500GB drives and one 1TB drive. Partition the TB in half and create a RAID5 array across the four partitions. Unfortunately, if that TB drive goes down, it will effectively kill two devices in the array and render it useless.

I'd be curious to see what your partition/array map looks like - it must have taken a while to set up properly if you have over ten partitions on some disks?


It really doesn't require much cleverness; just remember to build the arrays from separate sdX devices.

Here's an example partition table from cfdisk. /dev/sdb, /dev/sdc, and /dev/sdd are similar, just with slightly different numbers of partitions. /dev/sda1 currently houses the operating system; in the future I'll RAID 1 it with another partition.

code:
   Name     Flags   Part Type   FS Type                 [Label]   Size (MB)
 ---------------------------------------------------------------------------
   sda1     Boot    Primary     Linux raid autodetect              15002.92
   sda2             Primary     Linux raid autodetect              81923.79
   sda3             Primary     Linux raid autodetect              81923.79
   sda5             Logical     Linux raid autodetect              81923.79
   sda6             Logical     Linux raid autodetect              81923.79
   sda7             Logical     Linux raid autodetect              81923.79
   sda8             Logical     Linux raid autodetect              81923.79
   sda9             Logical     Linux raid autodetect              81923.79
   sda10            Logical     Linux raid autodetect              81923.79
   sda11            Logical     Linux raid autodetect              81923.79
   sda12            Logical     Linux raid autodetect              81923.79
   sda13            Logical     Linux raid autodetect              81923.79
   sda14            Logical     Linux raid autodetect              81923.79
Here's a snippet from /proc/mdstat. /dev/md10-md18 are 245GB RAID5 arrays with 4 devices; /dev/md19-md21 are 163GB RAID1 arrays with 2 devices, made from the leftover partitions on the two 1TB drives.

code:
md21 : active raid1 sdc14[0] sda14[1]
      80003584 blocks [2/2] [UU]

md10 : active raid5 sdd2[0] sdc2[1] sdb2[2] sda2[3]
      240010752 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]

md11 : active raid5 sdd3[0] sdc3[1] sdb3[2] sda3[3]
      240010752 blocks level 5, 64k chunk, algorithm 2 [4/4] [UUUU]
Then I've created two LVM volume groups from those arrays, sized 915GB and 1.34TB, but I'm considering combining them into one VG. The original reason for the two VGs was that I didn't have enough drives for my needs: vg_safehouse was made of RAID1 or RAID5 devices for storing the more valuable files, and vg_warehouse was a single large drive for the rest. Now that everything is RAIDed, there's no need for warehouse anymore.
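
For reference, building one of those arrays and pooling it goes roughly like this (a sketch using my device names):
code:
# 4-partition RAID5, one partition per physical drive
mdadm --create /dev/md10 --level=5 --raid-devices=4 /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
# hand it to LVM; later arrays get added with vgextend
pvcreate /dev/md10
vgcreate vg_safehouse /dev/md10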


DLCinferno posted:

Sure are. Literally, unplug from one machine, plug into the new one, and run one mdadm --assemble command per array. As long as the computer can see the same physical drives/partitions, it doesn't matter what hardware it's running.

That's one of the main reasons I like ZFS/mdadm at home - no need to buy pricey hardware controllers, but you get most of the same benefits.
I think if you use partitions set to the RAID autodetect type you don't even need the assemble command. During boot-up the Linux kernel will see a bunch of partitions that seem to belong to an array, and it figures out which of them go together.

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

DLCinferno posted:

Yeah, I realize that. I'm still running an instance of OI for all my data that was on ZFS when I was running OpenSolaris. My point is that I wouldn't necessarily recommend OI for someone just starting a server when its future is anything but concrete.

Based on the IllumOS work and the OpenIndiana project, plus Oracle saying they're gonna release source for Solaris 11 Express, I can see it having a pretty decent upgrade path.

Plus the actual ZFS datastore is compatible with the BSD implementations and whatnot.

Factory Factory posted:

3) Use a small hardware RAID 1 array, flash drive or SSD to store the hypervisor and guest installations. Use a guest which supports mdadm or ZFS to create a storage pool out of the storage hard drives. Use a second guest connected via iSCSI or somesuch to the ZFS host as the portable fileserver.

It all depends on what you want to do with your system. It also depends on how beefy your system is. I have an OpenIndiana box that uses a C2D and 8 GB of RAM, running a huge glut of older HDDs as a series of big ZFS pools, with a pair of mirrored HDDs as boot. Anything I need to do that can't be trivially done in Solaris has a VM that does it for me. I have a Server 2008 instance that does user authentication and some Windows XP instances that run :files: poo poo plus some Windows utilities.

The other approach is to run an ESXi server, pass all the disks through to a Solaris VM as raw drives, use the raw drives to create a ZFS share, and have all the other VMs access it via NFS or CIFS. I prefer the first way because it's slightly less of a oval office when something breaks.

Methylethylaldehyde fucked around with this message at 22:07 on Nov 13, 2010

DLCinferno
Feb 22, 2003

Happy

Saukkis posted:

It really doesn't require much cleverness; just remember to build the arrays from separate sdX devices.

Cool, makes sense.

Saukkis posted:

I think if you use partitions set to the RAID autodetect type you don't even need the assemble command. During boot-up the Linux kernel will see a bunch of partitions that seem to belong to an array, and it figures out which of them go together.

Didn't work for me when my server failed and I had to move the drives to a new machine, but in either case, it's easy to move drives.

Factory Factory
Mar 19, 2010

This is what
Arcane Velocity was like.

Methylethylaldehyde posted:

The other approach is to run an ESXi server, pass all the disks through to a Solaris VM as raw drives, use the raw drives to create a ZFS share, and have all the other VMs access it via NFS or CIFS. I prefer the first way because it's slightly less of a oval office when something breaks.

Stupid question, how is ESXi booted? Is it loaded off an image and kept in RAM disk like with FreeNAS, or am I missing an assumed disk here? I wouldn't think ESXi would take kindly to having its install partition pulled into a ZFS pool by one of its guests.

This is the part where someone can talk to me like I'm an idiot.

PitViper
May 25, 2003

Welcome and thank you for shopping at Wal-Mart!
I love you!
I've been thinking about trying an OpenSolaris install with ZFS to consolidate all my media into one pool, rather than have it spread out over a bunch of 640GB-1.5TB drives. I'm much more familiar with Linux than Solaris, though, so is there any issue with running mdadm/LVM under a Linux distro? I'd like to build an initial RAID5 array and be able to expand the existing mount location by adding new arrays as my storage needs grow.

I'd most likely be starting with an array of 4x1.5TB drives in RAID5, and adding another 6-8 months down the line when the need arises. As far as I can tell this is possible, but what shortfalls am I missing? Absolute performance isn't that important, as this will mostly be streaming video/audio over a 100Mbit home LAN to a few HTPCs and 3-4 desktops.
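
For concreteness, the expansion I have in mind would go something like this (a sketch; device and VG/LV names are made up, assuming ext3 on the LV):
code:
# new 4-drive RAID5 becomes another physical volume in the same group
mdadm --create /dev/md1 --level=5 --raid-devices=4 /dev/sd[efgh]1
vgextend vg_media /dev/md1
lvextend -l +100%FREE /dev/vg_media/media
resize2fs /dev/vg_media/media   # grow the filesystem to match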

Methylethylaldehyde
Oct 23, 2004

BAKA BAKA

Factory Factory posted:

Stupid question, how is ESXi booted? Is it loaded off an image and kept in RAM disk like with FreeNAS, or am I missing an assumed disk here? I wouldn't think ESXi would take kindly to having its install partition pulled into a ZFS pool by one of its guests.

This is the part where someone can talk to me like I'm an idiot.

ESXi boots off like a 10MB image or something real small like that. You can actually just use a flash drive as the boot device. The fancy ESXi hosts just use a RAID1'd pair of CF drives half the time.

The VM image can also be run off a flash drive if you want, but a regular old HDD will work better.

cage-free egghead
Mar 8, 2004
Any chance anyone is looking for a D-Link DNS-323 with hard drives? Looking to get rid of mine. Couldn't really get the hang of all the PuTTY/SSH/Linux stuff.

adorai
Nov 2, 2002

10/27/04 Never forget
Grimey Drawer

Methylethylaldehyde posted:

The VM image can also be run off a flash drive if you want, but a regular old HDD will work better.
Please tell me how to do this; I have previously tried and failed to create a datastore on a flash drive.

Telex
Feb 11, 2003

I'm trying out FreeNAS and I'm having trouble getting CIFS to work like I want it to.

I can get to my shares just fine when I use \\192.168.1.2 but \\hostname doesn't get me anywhere.

I can ping the hostname from a command prompt with no problems, and I can browse the FreeNAS interface at http://hostname, so it's not a "network" problem as far as I can tell.

And I can't figure out where in the interface I can make that hostname associate with CIFS/SMB. All I get is the "Windows cannot access \\hostname" error, even though I can see the drat thing in my network browser on my Win7 machine. It KNOWS it's there but it won't work for some reason...

eta: now I REALLY don't get it. I had the name set as mediacenter before; I changed it to whatever and blam, \\whatever works. I change it back to mediacenter and it doesn't work again. I don't understand what's going on here, but I don't care; I'll just rename all my links on my machines instead of trying to replicate my previous Windows-based system.
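
For the record, you can poke at the NetBIOS side directly (a sketch; nmblookup ships with Samba, so it should be on the FreeNAS box):
code:
# who answers for each name on the wire?
nmblookup mediacenter
nmblookup whatever
# what names has the box itself registered?
nmblookup -A 192.168.1.2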

Telex fucked around with this message at 20:15 on Nov 14, 2010

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

I've got 3 different SATA/RAID controller cards that I just use to add SATA ports.

I'd like to consolidate these. What's a good solution for adding 8 SATA ports, preferably via PCIe?

IOwnCalculus
Apr 2, 2003

Thermopyle posted:

I've got 3 different SATA/RAID controller cards that I just use to add SATA ports.

I'd like to consolidate these. What's a good solution for adding 8 SATA ports, preferably via PCIe?

Cheapest option is a Supermicro LSI-based card that comes up pretty regularly, though the only issue is the bracket for it is technically for Supermicro's UIO form factor. The connector is still PCIe so it will fit in a PCIe slot but you need to remove the bracket first.

Drevoak
Jan 30, 2007

Lblitzer posted:

Any chance anyone is looking for a D-Link DNS-323 with hard drives? Looking to get rid of mine. Couldn't really get the hang of all the PuTTY/SSH/Linux stuff.

Could you post some more details? Hard drive model/sizes, how long they've been used, etc.

tboneDX
Jan 27, 2009
I have a quick question:

I'm using OpenSolaris snv_118 and have a raidz1 pool of four WD 1TB Caviar Greens. I recently had a power failure, and I'm pretty sure it messed something up, as I/O performance hasn't been all that great lately. I'm currently running a scrub on tank, and it's 12h in, 4.51% done, and at 246h26m to go. Next to one of the drives it says '141K repaired'.

Should I assume that everything will be fixed once this scrub is done? Also, should I upgrade to a later version of OS? What about some of the other operating systems supporting ZFS (FreeBSD/Nexenta/others)?

I have had very few problems with this setup, but this latest incident has me a bit worried...
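
I've been watching the scrub with zpool status, which is where those progress and repair numbers come from:
code:
# progress, ETA, and per-device repair counts for the pool
zpool status -v tank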

Thermopyle
Jul 1, 2003

...the stupid are cocksure while the intelligent are full of doubt. —Bertrand Russell

IOwnCalculus posted:

Cheapest option is a Supermicro LSI-based card that comes up pretty regularly, though the only issue is the bracket for it is technically for Supermicro's UIO form factor. The connector is still PCIe so it will fit in a PCIe slot but you need to remove the bracket first.

So, this and two of these?

What's the performance like on these?
