|
So what are you guys using to back up your Windows client PCs to your server/NAS? I asked this in the Windows Megathread, but thought maybe I'd get more response here, since it doesn't seem to be getting any traction over there. Windows 7's built-in backup tool is fantastic, with image-based incremental backups. The only problem is that only Pro or Ultimate supports backing up to a network location.
|
# ? Nov 9, 2010 22:10 |
|
Methylethylaldehyde posted:Out of the box openindiana is retarded simple to set up. so what kind of file/folder permissions issues are we talking about here? Is it going to be problematic to have a client machine write to those folders via cifs?
|
# ? Nov 10, 2010 00:00 |
|
Telex posted:so what kind of file/folder permissions issues are we talking about here? Is it going to be problematic to have a client machine write to those folders via cifs? It all depends on what kind of permissions you want to use. chmod -R 777 folder works fine if you don't mind sharing all your crap with everyone. It's basically the standard issues you run into with multi-user and restricted permission sets in any folder system.
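A minimal sketch of that "share everything with everyone" approach, with demo_share standing in for whatever folder you export over CIFS (the path is hypothetical):

```shell
# demo_share is a stand-in for the folder you export over CIFS.
mkdir -p demo_share/music

# Recursively open the tree to every user - fine for a trusted home LAN,
# hopeless if you actually want per-user restrictions.
chmod -R 777 demo_share

# Sanity check: should print 777.
stat -c '%a' demo_share
```

Every local user (and thus every CIFS client mapped to any of them) gets full read/write/execute; it trades all safety for zero permission headaches.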
|
# ? Nov 10, 2010 01:01 |
|
Telex posted:so what kind of file/folder permissions issues are we talking about here? Is it going to be problematic to have a client machine write to those folders via cifs? /usr/bin/chmod A=everyone@:rwxpdDaARWcCos:fd:allow zfsnamehere Then fix the permissions in windows by right clicking the folder.
|
# ? Nov 10, 2010 01:09 |
|
Oh I get the impression that trying to use a real unix type OS is going to blow my brains. I have a vague understanding of things but spitting out that chmod string is waaaaaaay past my understanding of things. Is there any place to go (I'm fine with books) to learn all the necessary stuff with Solaris so I'm not a total retard when it comes to things?
|
# ? Nov 10, 2010 01:49 |
|
Telex posted:Oh I get the impression that trying to use a real unix type OS is going to blow my brains. Google and blogs are your best bet. That chmod string basically says anyone can do anything, go nuts. That conveniently lets you set the ACLs in windows.
|
# ? Nov 10, 2010 02:30 |
|
adorai posted:Like I posted on the previous page: Ooof, ACLs, yet another thing I need to get around to learning.
|
# ? Nov 10, 2010 03:19 |
|
Telex posted:I have a vague understanding of things but spitting out that chmod string is waaaaaaay past my understanding of things.
|
# ? Nov 10, 2010 03:31 |
|
FISHMANPET posted:Ooof, ACLs, yet another thing I need to get around to learning. I spent last winter break playing with them, but it was worth it...this is what I set up on mine, seems to work ok, except I belatedly realized that I can't execute files off the server anymore...easy enough to fix with a script that just finds *.exe and flips +x on them. code:
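The script itself didn't survive the quote, but a one-liner along these lines would do the job (demo_share is a hypothetical stand-in for the real share path):

```shell
# Flip the execute bit on every .exe under the share so clients can
# launch them directly; run it after new files land on the server.
find demo_share -type f -name '*.exe' -exec chmod +x {} +
```

Dropped into cron, that keeps newly copied Windows executables runnable without touching the execute bit on anything else.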
|
# ? Nov 10, 2010 03:32 |
|
I'm working overseas as a contractor on a tiny base where power is kind of an issue, and space will be an issue when I leave. Price isn't so much a factor but I don't want or need anything enterprise level at this point. What would my best option be for a small, power efficient 4TB storage solution with GbE and preferably eSATA that would be easy to expand in the future? Is the Drobo FS the way to go?
|
# ? Nov 12, 2010 00:32 |
|
I'm fond of the Acer Easystore H340. It has 4 hot-swappable drive bays so you can stick up to 8 TB of storage into it. It's around $300-400.
|
# ? Nov 12, 2010 18:03 |
|
So Goons, I'd like some advice. I'm planning to head off to law school, and I'm looking to migrate from an "all drives in one box" setup to a NAS that gives me backup and availability. I don't really know what the roll-my-own options are in depth, though. I have set up a FreeNAS server and used Ubuntu and RHEL in a desktop environment, so I'm comfortable enough but I just don't know the field. I've been burned seriously by hardware failures before, and RAID 1 saved my rear end no less than 7 times in a year when I got a series of RMA replacements that had the same problem as the drive I sent back.

Basic idea: have a desktop (Win7), a laptop (Win7 or MacOS), and the NAS. Desktop has Data A, laptop has Data B, NAS stores A, B, shared stuff like an iTunes library, and images of both the laptop and desktop. Backup happens transparently via rsync. I also plan to back up the NAS (data but not images) to a separate external drive once every week, which could be used in a pinch if the NAS crapped out.

I've been bouncing between a prebuilt device from QNAP or Thecus and rolling my own server. If I decide to roll my own, I'd have an nVidia 680i board with a C2D E8400 to reuse, complete with the nVidia hardware RAID (though an E8400 could run a software RAID without breaking a sweat). If I rolled my own, I would want to use a virtualized OS so it could easily be snapshotted and, down the line, migrated. Either way, I'd be getting all new identical 2TB drives for either RAID5 + hot spare or RAID 6, four drive minimum. I estimate that the raw amount of stuff I have to work with, including system images, would be around 750 GB.

Here are my priorities:
- Solid backup - my files are a record of my life since ~1999.
- Security - keep out prying eyes even if they're hopping on my WiFi with my permission
- Availability - I want to be able to get at everything, especially the class-critical stuff, even if two machines die (desktop, laptop, NAS, NAS backup drive)
- Set it and forget it - I can't be spending hours a week doing IT jockeying when I'm in law school. The "set it" part can be plenty complex, it just needs to have minimal maintenance.
- Fast enough that I don't puke myself with rage. My last FreeNAS server was built out of an Atom nettop and topped out at 1 MB/s over GbE for no obvious reason.
- Budget of around $1000 USD

Nice, but not critical:
- Shadow copy/snapshotting
- Ability to run services/servers beyond basic SMB shares, like a media server or virtualizing up a webserver
- I/O that can saturate GbE
- Ability to use server as a desktop in case other computers poop out at once
- Besides full automatic backup, specific additional backup of critical course files to a flash drive so if the power fails I can just pop that out and head to the library
- e: Oh, yeah, error notification by e-mail rather than checking the front panel lights/web interface
- e: Access from outside the LAN via SSH tunneling or somesuch

Don't care about:
- Torrenting
- Expandability - don't give a crap about adding/replacing drives to expand to dozens of TB of storage, I'll just build new and migrate after law school

Thoughts on plan, anything major I'm missing or overlooking? Setup and software suggestions? Benefits of getting a QNAP or Thecus appliance over roll-my-own besides ease of use that I'm not taking into account? Thanks, Goons.

Factory Factory fucked around with this message at 20:36 on Nov 12, 2010 |
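The "transparent backup via rsync" half of a plan like this is roughly one line per machine. A sketch, assuming the NAS share is mounted locally and with data_a/ and nas_backup/ as hypothetical stand-ins for the real source folder and mounted share (the Windows 7 clients would need an rsync port such as cwRsync):

```shell
# Mirror the desktop's data set onto the NAS.
# -a preserves permissions and timestamps; --delete makes the destination
# an exact mirror, so files removed locally vanish from the backup too.
rsync -a --delete data_a/ nas_backup/data_a/
```

Scheduled from cron or Task Scheduler on each client, that covers the automatic part; the weekly NAS-to-external copy is the same command pointed at the external drive (ideally without --delete, so a botched week doesn't propagate deletions).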
# ? Nov 12, 2010 20:33 |
|
My gut reaction would be to say ESXi with an OpenIndiana guest and ZFS file system.
|
# ? Nov 12, 2010 20:40 |
|
What's the best way to install that? Right on the RAID array? On another small RAID 1 pair or a tiny SSD? e: Snaps, it seems the nVidia RAID is software enough that ESXi can't use it. de: I just peeked at the WHS megathread. Within my needs, virtualizing that seems like a strong possibility, and familiarity is always nice. Wouldn't lose any disk space unless I did five or more drives, either. It would need a lot of third party software to get all the functionality I want, however, and I have no idea if that's available. Factory Factory fucked around with this message at 22:32 on Nov 12, 2010 |
# ? Nov 12, 2010 20:47 |
|
I guess I should have stayed away from the Western Digital Green drives. Bought 3 of them for my NAS (ds410), 2 of them failed within a month. RMA'd them, and now the third original disk is failing (says it is degraded). What drives should I replace these with? I'm pretty sure I'm just going to return them all now.
|
# ? Nov 13, 2010 01:00 |
|
I have a pair of Samsung 1TB drives that are just solid as a rock, as long as you keep their temperatures sane, but the brand seems hit or miss judging by reviews. I've also had good luck with WD's Blues and mixed results with RE3s. Greens definitely poo poo the bed for me, too.
|
# ? Nov 13, 2010 01:29 |
|
Cool, I'm going to head to Microcenter in a bit to replace them. Hoping it isn't a pain in the rear end to rebuild the array so as not to lose data. I should be able to just swap the disks one at a time and rebuild that way I assume?
|
# ? Nov 13, 2010 01:42 |
|
According to Newegg, yes. Regarding VMware vSphere (ESXi), Christ on a cracker; for a piece of software I understand conceptually and have used before, it looks completely impenetrable as far as setting it up goes. Am I right that if I want RAID on a VMware box, it has to be hardware RAID or nothing? It won't put something ZFS-like together and I couldn't do that on a guest unless I did direct access to drives that weren't already storing the hypervisor itself? Would anyone poo poo in my bed if I just ran Ubuntu server and ran my webserver through VirtualBox? The machine would be overpowered anyway.
|
# ? Nov 13, 2010 03:00 |
|
Factory Factory posted:Regarding VMware vSphere (ESXi), Christ on a cracker; for a piece of software I understand conceptually and have used before, it looks completely impenetrable as far as setting it up goes. Am I right that if I want RAID on a VMware box, it has to be hardware RAID or nothing? It won't put something ZFS-like together and I couldn't do that on a guest unless I did direct access to drives that weren't already storing the hypervisor itself? VirtualBox on Ubuntu server will work just fine, although you won't get a nice UI to manage your VMs (at least, I don't know of a remote management UI for headless VBox). Depending on your hardware, ESXi might actually be pretty easy to use/set up. You can actually provide guests direct access to physical drives, which is what you'd want to do if you were running OpenIndiana or something with ZFS support. Here are some links to help you do so: http://www.vm-help.com/forum/viewtopic.php?f=14&t=1025 http://www.vm-help.com/esx40i/SATA_RDMs.php Before attempting ESXi you should investigate your hardware to ensure it is compatible. Besides the mostly enterprise-focused officially supported list, this is a great resource for home builders. Be sure to read the details - for example, my motherboard is supported but the on-board NIC is not, so I get crazy errors when trying to install ESXi unless I also drop a supported NIC into an empty slot: http://www.vm-help.com/esx40i/esx40_whitebox_HCL.php For what it's worth, I'm currently running Ubuntu Server with VirtualBox and it's just fine. I really like VMware and I'm used to it from work, and although comparisons are really difficult on different types of hardware, ESXi seems slightly faster - but then again people report VBox faster on some OSes so who knows. Is your webserver Windows or Linux? I'd rather run Windows in VMware (for no good reason besides the fact that I know it works well already).
|
# ? Nov 13, 2010 03:24 |
|
Factory Factory posted:Would anyone poo poo in my bed if I just ran Ubuntu server and ran my webserver through VirtualBox? The machine would be overpowered anyway. As for esxi, a cheapo lsilogic raid sas card can be had on ebay for under $50. Run two drives in raid 1, 6 more + onboard sata can be setup to pass straight to an openindiana guest via RDM, and you can have a party.
|
# ? Nov 13, 2010 03:58 |
|
adorai posted:If you are going to run linux, you may as well use KVM. If you are going to run virtualbox, why wouldn't you run openindiana? I don't quite understand your post. KVM vs ESXi is reasonable, but not KVM + linux vs ??? . Why is running VirtualBox a given for OpenIndiana? There are lots of reasons NOT to run OI, for example, it is built on the remnants of OpenSolaris but doesn't really have the same major organizational support. I ran OpenSolaris for a year and a half, and I love ZFS, but I'm cautious of OI (not the only article along those lines). Also, why would you buy a RAID card to run two drives in raid 1 and have the rest passed through? If your motherboard supports it, why not provide direct access for the drives you want to the guest OS? Running the host in a raid 1 isn't so critical, you should always be able to rebuild elsewhere. A major reason that good software raid is so great is that it's not tied to any type of hardware. Are you advocating a cheap LSI Logic card for expandability, which might not matter at all if his motherboard has everything he needs right now? Or are you thinking the host should always be raid 1'ed?
|
# ? Nov 13, 2010 04:32 |
|
DLCinferno posted:I don't quite understand your post. OI is basically the last publicly available Sun/Oracle ON build, plus some bugfixes to that code. It's also binary compatible with all the regular solaris crap.
|
# ? Nov 13, 2010 04:44 |
|
Methylethylaldehyde posted:OI is basically the last publicly available Sun/Oracle ON build, plus some bugfixes to that code. It's also binary compatible with all the regular solaris crap. Yeah, I realize that. I'm still running an instance of OI for all my data that was on ZFS when I was running OpenSolaris. My point is that I wouldn't necessarily recommend OI for someone just starting a server when its future is anything but concrete. I know some people will disagree with my approach, but I'm kinda just treading water in mdadm until btrfs is ready for prime-time or a really viable ZFS alternative is exposed. I expect to deprecate my OI install for FUSE at some point, although maybe not for my regular datastore. Meanwhile, Factory has a bunch of options on what to run, but my recommendation is still to strongly research OI and what he needs from it first...but which virtualization base to run is almost a separate question.
|
# ? Nov 13, 2010 05:55 |
|
. double post DLCinferno fucked around with this message at 06:02 on Nov 13, 2010 |
# ? Nov 13, 2010 05:56 |
|
My head hurts... feel free to talk to me as if I am an idiot, a small child, or a small idiot. So, correct me if I'm wrong. Say I want to have a fileserver which is a portable VM. With the goal "Every bit of storage has redundancy," I have a few options for virtualization:

1) Use hardware RAID to run storage drives as one big array, install hypervisor and VMs on that array. Can use any hypervisor, like vSphere, as long as it has drivers for the RAID controller.
2) Use software RAID as one big array, using a ZFS-capable OS as the hypervisor. Limits me to OpenSolaris and variations as the hypervisor. e: or mdadm in Linux?
3) Use a small hardware RAID 1 array, flash drive or SSD to store the hypervisor and guest installations. Use a guest which supports mdadm or ZFS to create a storage pool out of the storage hard drives. Use a second guest connected via iSCSI or somesuch to the ZFS host as the portable fileserver.

Option 2 would be the cheapest, 1 would be the fastest and most straightforward, 3 is flexible but complex and has the most that can go wrong. And "my webserver" is hypothetical, I just mean any service I would want to run in the future.
|
# ? Nov 13, 2010 13:04 |
|
Thermopyle posted:So what are you guys using to backup your Windows client PCs to your server/NAS? I managed to win a copy of Vista Ultimate a couple of years ago so I've gotten spoiled with being able to use Windows Backup with my server. I'll probably wind up getting Windows 7 Pro when I get around to upgrading so I can keep on using that.
|
# ? Nov 13, 2010 18:33 |
|
DLCinferno posted:True, but I didn't recommend that because you need to be very clever about how you're choosing your RAID levels on the partition arrays and which ones are going into the same array, otherwise a single drive failing could end up wiping out the entire array. Here's an example partition table from cfdisk. /dev/sdb/sdc/sdd are similar, just with slightly different amount of partitions. /dev/sda1 is housing the operating system currently. In the future I'll RAID1 it with another partition. code:
code:
DLCinferno posted:Sure are. Literally, unplug from one machine, plug into the new one, and run one mdadm --assemble command per array. As long as the computer can see the same physical drives/partitions, it doesn't matter what hardware it's running.
|
# ? Nov 13, 2010 19:04 |
|
DLCinferno posted:Yeah, I realize that. I'm still running an instance of OI for all my data that was on ZFS when I was running OpenSolaris. My point is that I wouldn't necessarily recommend OI for someone just starting a server when its future is anything but concrete. Based on the IllumOS work and the OpenIndiana project, plus Oracle saying they're gonna release source for Solaris 11 Express, I can see it having a pretty decent upgrade path. Plus the actual ZFS datastore is compatible with the BSD implementations and whatnot. Factory Factory posted:3) Use a small hardware RAID 1 array, flash drive or SSD to store the hypervisor and guest installations. Use a guest which supports mdadm or ZFS to create a storage pool out of the storage hard drives. Use a second guest connected via iSCSI or somesuch to the ZFS host as the portable fileserver. It all depends on what you want to do with your system. It also depends on how beefy your system is. I have an OpenIndiana box that uses a C2D and 8 GB of ram, running a huge glut of older HDDs at a series of big ZFS pools, with a pair of mirrored HDDs as boot. Anything I need to do that can't be trivially done in solaris has a VM that does it for me. I have a Server 2008 instance that does user authentication and some windows XP instances that run :files: poo poo plus some windows utilities. The other approach is to run an ESXi server, pass all the disks through to a Solaris VM as raw drives, use the raw drives to create a ZFS share, and have all the other VMs access it via NFS or CIFS. I prefer the first way because it's slightly less of a oval office when something breaks. Methylethylaldehyde fucked around with this message at 22:07 on Nov 13, 2010 |
# ? Nov 13, 2010 21:58 |
|
Saukkis posted:It really doesn't require much cleverness, simply remember to build the arrays from separate sdX devices. Cool, makes sense. Saukkis posted:I think if you use partitions se to RAID autodetect type you don't even need the assemble command. During boot up Linux kernel will see a bunch of partitions that seem to belong to an array and then it figures out which of them go together. Didn't work for me when my server failed and I had to move the drives to a new machine, but in either case, it's easy to move drives.
|
# ? Nov 13, 2010 22:31 |
|
Methylethylaldehyde posted:The other approach is to run an ESXi server, pass all the disks through to a Solaris VM as raw drives, use the raw drives to create a ZFS share, and have all the other VMs access it via NFS or CIFS. I prefer the first way because it's slightly less of a oval office when something breaks. Stupid question, how is ESXi booted? Is it loaded off an image and kept in RAM disk like with FreeNAS, or am I missing an assumed disk here? I wouldn't think ESXi would take kindly to having its install partition pulled into a ZFS pool by one of its guests. This is the part where someone can talk to me like I'm an idiot.
|
# ? Nov 13, 2010 23:30 |
|
I've been thinking about trying out an OpenSolaris install and trying ZFS to consolidate storage of all my media into one pool of data, rather than spread out over a bunch of 640GB-1.5TB drives. I'm much more familiar with Linux than Solaris though, so is there any issue with running mdadm/LVM under a Linux distro? I'd like to build an initial RAID5 array, and be able to expand the existing mount location by adding new arrays as my storage needs expand. I'd most likely be starting with an array of 4x1.5TB drives in RAID5, and adding another 6-8 months down the line when the need arises. As far as I can tell, this is possible, but what shortfalls am I missing? Absolute performance isn't that important, as this will be mostly streaming video/audio over a 100 Mbit home LAN to a few HTPCs and 3-4 desktops.
|
# ? Nov 14, 2010 00:20 |
|
Factory Factory posted:Stupid question, how is ESXi booted? Is it loaded off an image and kept in RAM disk like with FreeNAS, or am I missing an assumed disk here? I wouldn't think ESXi would take kindly to having its install partition pulled into a ZFS pool by one of its guests. ESXi boots off like a 10 MB image or something real small like that. You can actually just use a flash drive as the boot device. The fancy ESXi hosts just use a RAID1'd pair of CF drives half the time. The VM image can also be run off a flash drive if you want, but a regular old HDD will work better.
|
# ? Nov 14, 2010 07:16 |
Any chance anyone is looking for a DNS323 with hard drives? Looking to get rid of mine. Couldn't really get the hang of all the Putty/SSH/Linux stuff.
|
|
# ? Nov 14, 2010 11:13 |
|
Methylethylaldehyde posted:The VM image can also be run off a flash drive if you want, but a regular old HDD will work better.
|
# ? Nov 14, 2010 16:21 |
|
I'm trying out FreeNAS and I'm having trouble getting CIFS to work like I want it to work.. I can get to my shares just fine when I use \\192.168.1.2 but \\hostname doesn't get me anywhere. I can ping the hostname in a command line prompt and have no problems, I can browse the FreeNAS interface with http://hostname so it's not a "network" problem as far as I can tell.. and I can't figure out where in the interface I can make that hostname associate with CIFS/SMB. All I get is the Windows cannot access \\hostname error even though I can see the drat thing in my network browser on my Win7 machine. It KNOWS it's there but it won't work for some reason... eta: now I REALLY don't get it, I had the name set as mediacenter before, I changed it to whatever and blam \\whatever works. I change back to mediacenter and it doesn't work again. I don't understand what's going on here but I don't care I'll just rename all my links on my machines instead of trying to replicate my previous windows based system. Telex fucked around with this message at 20:15 on Nov 14, 2010 |
# ? Nov 14, 2010 20:08 |
|
I've got 3 different SATA/RAID controller cards that I just use to add SATA ports. I'd like to consolidate these. What's a good solution for adding 8 SATA ports, preferably via PCIe?
|
# ? Nov 14, 2010 20:18 |
|
Thermopyle posted:I've got 3 different SATA/RAID controller cards that I just use to add SATA ports. Cheapest option is a Supermicro LSI-based card that comes up pretty regularly, though the only issue is the bracket for it is technically for Supermicro's UIO form factor. The connector is still PCIe so it will fit in a PCIe slot but you need to remove the bracket first.
|
# ? Nov 14, 2010 20:28 |
|
Lblitzer posted:Any chance anyone is looking for a DNS323 with hard drives? Looking to get rid of mine. Couldn't really get the hang of all the Putty/SSH/Linux stuff. Could you post some more details? Hard drive model/sizes, how long they've been used, etc.
|
# ? Nov 14, 2010 20:40 |
|
I have a quick question: I'm using OpenSolaris snv_118 and have a raidz1 pool of four WD 1TB Caviar Greens. I recently had a power failure, and I'm pretty sure it messed something up, as I/O performance hasn't been all that great lately. I'm currently running a scrub on tank, and it's 12h in, 4.51% done, and at 246h26m to go. Next to one of the drives it says '141K repaired'. Should I assume that everything will be fixed once this scrub is done? Also, should I upgrade to a later version of OS? What about some of the other operating systems supporting ZFS (FreeBSD/Nexenta/others)? I have had very few problems with this setup, but this latest incident has me a bit worried...
|
# ? Nov 14, 2010 20:49 |
|
IOwnCalculus posted:Cheapest option is a Supermicro LSI-based card that comes up pretty regularly, though the only issue is the bracket for it is technically for Supermicro's UIO form factor. The connector is still PCIe so it will fit in a PCIe slot but you need to remove the bracket first. So, this and two of these? What's the performance like on these?
|
# ? Nov 14, 2010 21:04 |