|
Ziploc posted:Is the built-in Windows 7 backup utility good for large backups? I hope you're not implying it's good for small backups.
|
# ? Oct 29, 2010 00:39 |
|
|
HorusTheAvenger posted:I hope you're not implying it's good for small backups. I'm open to free alternatives.
|
# ? Oct 29, 2010 00:43 |
|
Ziploc posted:I'm open to free alternatives. Currently I'm using Areca Backup. It's fairly rudimentary by today's fully automated standards, which is what I was after. I don't want something that chooses files for me. I like explicit lines between what's backed up and what's not backed up. It won't do fancy things like whole-system restores. On my personal machines I feel that's what install disks are for. http://www.areca-backup.org/ A friend of mine uses Cobian Backup and likes it a lot. I tried it out and didn't like it. I don't remember why. http://www.educ.umu.se/~cobian/cobianbackup.htm
|
# ? Oct 29, 2010 01:27 |
|
HorusTheAvenger posted:I hope you're not implying it's good for small backups. I don't use it since WHS does its own backup thing, but I generally hear good things about Win7's backup program.
|
# ? Oct 29, 2010 02:20 |
|
Thermopyle posted:I don't use it since WHS does its own backup thing, but I generally hear good things about Win7's backup program. Perhaps it's just me. When I tried it out, it took hours to do a backup, and I'm using a small SSD. I also didn't feel I had the right amount of granularity to select what gets backed up and what doesn't.
|
# ? Oct 29, 2010 03:01 |
|
HorusTheAvenger posted:A friend of mine uses Cobian Backup and likes it a lot. I tried it out and didn't like it. I don't remember why. Cobian looks great. Exactly what I'm after: a set-it-and-forget-it backup every Wednesday night at 10:30pm, when I play Xbox with friends. I've set it up to do a full every 4 weeks and an incremental every week. Moving along very quickly now through eSATA. Thanks!
|
# ? Oct 29, 2010 04:30 |
|
FISHMANPET posted:I made the decision last night at 3AM when I couldn't sleep to do this in the future. I just moved so everything is complete chaos (though the first thing the lady had me set up was the server so we could watch some teev ) but I'm glad to hear it went well. Trip report: Holy hell did that suck. On first reboot, most of my services didn't have a state. If I ran svcs -a I just got a bunch of '-' where online, disabled, etc should have been. Reboot, and all those services became disabled, so my boot is hosed because I don't know what's supposed to be started and not started. My current plan is to go back to my b_134 BE and copy the list of running services and go from there.
|
# ? Oct 30, 2010 03:34 |
|
ufarn posted:
If you accidentally deleted all the files on your computer, would you want your 'backup' drive to proceed to mirror this deletion?
|
# ? Oct 30, 2010 03:38 |
|
FISHMANPET posted:Trip report: And I figured out my own problem. I run an LDAP server for authentication, but the machine isn't getting auth info from the LDAP server, except it sort of is... It's strange, I guess I'm back limping along, but it means I need to manually start my Virtual Machine at boot (which I did before anyway...)
|
# ? Oct 30, 2010 06:24 |
|
Triikan posted:If you accidentally deleted all the files on your computer, would you want your 'backup' drive to proceed to mirror this deletion?
|
# ? Oct 30, 2010 14:25 |
|
ufarn posted:I just can't see how it works out in the end, as the drive will eventually fill itself up with prior files and folders that no longer exist. Another approach could be a system where you set how much space the backups are allowed to use and then the backup software figures out a way to maintain the maximum number of versions as far back as possible without going over the limit. Of course I wouldn't be surprised if free backup software that comes with an external drive lacks such features. But I would be hard pressed to call software that only does synchronization backup software.
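That quota idea boils down to a prune-oldest loop. A throwaway sketch of it in shell — the directory layout, weekly naming, and the 100MB cap are all made up for illustration, not from any real backup product:

```shell
# Sketch of "keep as many versions as fit under a size cap":
# delete the oldest backup set until the total fits the quota.
BACKUP_DIR=./backups
QUOTA_MB=100

# Fake five weekly backup sets of ~30MB each, oldest first
mkdir -p "$BACKUP_DIR"
i=1
for d in week1 week2 week3 week4 week5; do
  mkdir -p "$BACKUP_DIR/$d"
  dd if=/dev/zero of="$BACKUP_DIR/$d/data.img" bs=1M count=30 2>/dev/null
  touch -t "2010100${i}0000" "$BACKUP_DIR/$d"  # stagger mtimes so age ordering is unambiguous
  i=$((i + 1))
done

total_mb() { du -sm "$BACKUP_DIR" | cut -f1; }

# Prune oldest-first: ls -t lists newest first, so tail -1 is the oldest set
while [ "$(total_mb)" -gt "$QUOTA_MB" ]; do
  oldest=$(ls -t "$BACKUP_DIR" | tail -1)
  [ -n "$oldest" ] || break
  rm -rf "$BACKUP_DIR/$oldest"
done

ls "$BACKUP_DIR"   # week3 week4 week5 survive under the 100MB cap
```

Real backup tools do this with their own retention metadata rather than directory mtimes, but the shape of the policy is the same.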
|
# ? Oct 30, 2010 23:40 |
|
What's the benefit in OpenIndiana over something like FreeNAS?
|
# ? Oct 31, 2010 15:55 |
|
crm posted:What's the benefit in OpenIndiana over something like FreeNAS? The ability to run VBox and virtualize all your programs on one box.
|
# ? Oct 31, 2010 16:14 |
|
How bad are Tomato/DD-WRT routers with USB drives set up as network shares for transfer speeds?
|
# ? Oct 31, 2010 18:35 |
|
Methylethylaldehyde posted:The ability to run VBox and virtualize all your programs on one box. And ZFS.
|
# ? Oct 31, 2010 20:02 |
|
Alright, so I've got a question concerning the wide variety of storage software technology out there today. Right now I'm running WHS on a server with 6x 1tb, 4x 1.5tb, and 2x 640gb drives. It's mainly housing my media collection (movies, tv, music); all personal documents and poo poo I really care about are stored on personal laptops and backed up on and off-site. I have 1:1 duplication enabled on pretty much every share, to ensure a disk failure doesn't force me to spend hours getting everything back on there. The thing is, I'm running low on space. I've got just over 12tb formatted capacity, and I'm down to about 220gb free (which is more like 110gb when you consider the duplication). I'm also completely out of room in my case. Ideally, I'd like to start purchasing 2tb or 3tb drives to replace the 640gb and 1tb drives, but right now I don't have money to spend on that sort of thing. So, I was looking into maybe switching my storage to some sort of solution that's like RAID5/6, but doesn't have the "all drives must be the same size" requirement. Something where I could set two 1.5tb drives as parity drives, have the rest be used for data, and get the maximum amount of my space available to me. I've seen stuff like unRAID and flexRAID, but I was wondering if anyone here has anything in everyday use that they would recommend.
|
# ? Nov 1, 2010 16:41 |
|
mpeg4v3 posted:I have 1:1 duplication enabled on pretty much every share, to ensure a disk failure doesn't force me to spend hours getting everything back on there.
|
# ? Nov 1, 2010 16:48 |
|
I'd say it depends on your budget. Personally, I'd set up a new 4-6 drive 2tb raid5, which (with 6 drives) would give you 10tb of usable space. Copy your data there, then create raid5 sets with your 1tb drives and a third with the 1.5tb drives. That'd give you: a 5tb array (6x1tb), a 4.5tb array (4x1.5tb), and a 10tb array (6x2tb). All those sizes are usable, minus formatting, etc. I would then separate the first and second array into another box and mirror the two arrays (you could jbod the two arrays, or just copy half to each), but if the data isn't that important, you'd have close to 20tb of usable space. EDIT: Are you currently running raid 1 or are you just copying everything over? You currently have 6.5tb of usable space. With just the drives you have you could increase that to 9.5tb, not including your 640gb. You'd have to have somewhere to move your current data if you don't get any new drives, so hopefully you have a friend with some extra space. If you get rid of the duplication and juggle the files over to the 4x1.5tb and 640gb drives, you'd have enough space to empty the 1tb drives, create a raid5 array on those, move everything on the 1.5tb drives there, then create another raid5 and empty the 640gb drives. If anything failed during this process, you would lose data. Buying a new array would be safer if you care about your data. Triikan fucked around with this message at 21:57 on Nov 1, 2010 |
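For what it's worth, those array sizes check out against the usual RAID5 formula (usable space = drive count minus one, times drive size). A quick shell check, working in GB to keep the arithmetic integer-only:

```shell
# RAID5 usable capacity: (n - 1) drives' worth of space
raid5_gb() { echo $(( ($1 - 1) * $2 )); }   # $1 = drive count, $2 = GB per drive

raid5_gb 6 1000   # 6x1tb   -> 5000 GB (the 5tb array)
raid5_gb 4 1500   # 4x1.5tb -> 4500 GB (the 4.5tb array)
raid5_gb 6 2000   # 6x2tb   -> 10000 GB (the 10tb array)
```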
# ? Nov 1, 2010 21:41 |
|
I think my previous post got lost in the thread somewhere, so I'll ask again here real quick: Current: 8x1.5TB 7200rpm Seagate in RAID-Z2 Wish: Purchase another 8x2TB drives, make RAID-Z2 vdev, add to existing pool. Interface (aka bottleneck): Single Intel GigE NIC 1: What drives should I be looking at? Green or non-Green? 5400 or 7200? Don't want to have to deal with TLER enabling/disabling fuckery. 2: I have 8GB of RAM, so my first priority would rather be a dedicated ZIL device to speed up writes rather than a large SSD for L2ARC. Confirm/deny?
|
# ? Nov 1, 2010 22:37 |
|
movax posted:I think my previous post got lost in the thread somewhere, so I'll ask again here real quick: Avoid green/4k drives like the plague. I had a ton of issues with the WD Advanced Format drives because ZFS is way more clever than the sector emulation stuff it uses. Grab a 60-80 Gig Intel SSD, partition it 2gb ZIL, 58gb cache, works great because you don't often have writes that need to be both low latency and high throughput. I had an OCZ Solid II in my box and once you cached folder/file metadata to it, it made browsing SMB shares way more responsive. You can, if you have a decent switch, trunk two or more GigE connections together to get some additional bandwidth. Never ever shut your machine down. The cache currently is non-persistent, and it can take you upwards of a week to fully populate the cache on a decent sized SSD.
|
# ? Nov 1, 2010 23:25 |
|
Anyone familiar with Norco cases? Looking for a decent NAS case with hotswap bays, and the name keeps coming up in my searches. Looking for something that can hold 8+ drives and is relatively quiet.
|
# ? Nov 2, 2010 02:50 |
|
wang souffle posted:Anyone familiar with Norco cases? Looking for a decent NAS case with hotswap bays, and the name keeps coming up in my searches. Norco stock fans are never quiet. They're budget server cases, so they typically put through a decent amount of air, but are pretty loud.
|
# ? Nov 2, 2010 03:04 |
|
Triikan posted:Norco stock fans are never quiet. They're budget server cases, so they typically put through a decent amount of air, but are pretty loud. Any other recommendations?
|
# ? Nov 2, 2010 03:07 |
|
wang souffle posted:Anyone familiar with Norco cases? Looking for a decent NAS case with hotswap bays, and the name keeps coming up in my searches. The Norco 4220 and 4200 are your best bet. 20 bays, and you can change the 4x80mm fans to 3x 120 and it's pretty quiet. Not home theater level quiet, but enough that I can sleep with it on.
|
# ? Nov 2, 2010 03:13 |
|
Methylethylaldehyde posted:The Norco 4220 and 4200 are your best bet. 20 bays, and you can change the 4x80mm fans to 3x 120 and it's pretty quiet. By default the front fans are 3 loud 80mm fans. There's a guy (don't have the link handy) that makes a bracket that lets you replace those with 120mm fans, which make things all sorts of better, not just because 120mm fans will be quieter. The placement of the 80mm fans really sucks.
|
# ? Nov 2, 2010 03:17 |
|
Methylethylaldehyde posted:Avoid green/4k drives like the plague. I had a ton of issues with the WD Advanced Format drives because ZFS is way more clever than the sector emulation stuff it uses.
|
# ? Nov 2, 2010 03:45 |
|
Methylethylaldehyde posted:Avoid green/4k drives like the plague. I had a ton of issues with the WD Advanced Format drives because ZFS is way more clever than the sector emulation stuff it uses. Avoid green drives, got it. Intel drive...SLC or MLC? Judging from your suggested capacity, MLC? I thought I read somewhere that they murdered an MLC OCZ drive as ZIL in only about a month of service. I guess I should just wait for some neat 7200rpm drives to come on sale then (my current drives are 7200rpm; will dropping in 5400rpm drives absolutely slaughter my performance?) @Norco questions: I replaced all the 80mm in my 4020 w/ Yate Loons. I know you can replace the middle fans with 3x 120mm fans as well; there was a thread on AVSForums about that.
|
# ? Nov 2, 2010 03:50 |
|
FISHMANPET posted:By default the front fans are 3 loud 80mm fans. There's a guy (don't have the link handy) that makes a bracket that lets you replace those with 120mm fans, which make things all sorts of better, not just because 120mm fans will be quieter. The placement of the 80mm fans really sucks. What's the main difference between the 4220 and 4020?
|
# ? Nov 2, 2010 03:59 |
|
necrobobsledder posted:I'm running 4x 2tb wd20ears drives on an opensolaris raidz install and never hit any problem with things like really slow I/O associated with misaligned partitions. My drives came with no jumpers. I get about 70 Mbps locally sustained with 2 gb of RAM on an athlon x2. Partition alignment is something that's tricky with these new drives, but it's possible for them to work fine with ZFS. My recommendation is to set up your array, put some data on it, run iostat, and check for bad i/o performance. Reformat after every jumper change. The issue turned out to be a combination of ZFS, the SAS cards I was using, the drives I was using, and Deduplication. It would commit a lot of small metadata changes to the disk, all of which were 512 bytes, and it would cause the drive to start queuing the reads, and reporting busy 100% of the time. This caused the whole system to go unresponsive until the queue was finished. When it wasn't thrashing, I could get ~130-200MB/sec sustained disk-to-/dev/null throughput with an rsync job; it's just that once it started thrashing, the only solution sometimes was to hard reboot. Also, mine was raidZ2, which caused more metadata fuckery compared to regular raidz. Also, unless you're serving a DB that's like 90% duplicate information, never ever ever implement deduplication. movax posted:Avoid green drives, got it. Intel drive...SLC or MLC? Judging from your suggested capacity, MLC? I thought I read somewhere that they murdered an MLC OCZ drive as ZIL in only about a month of service. I guess I should just wait for some neat 7200rpm drives to come on sale then (my current drives are 7200rpm; will dropping in 5400rpm drives absolutely slaughter my performance?) It all depends on the configuration of your vdevs. If you have say 8 drives, and make two 4 drive raidZ sets, you get twice the IOPS of a single 8 drive raidZ2. Both have the same useful capacity, while one has a better chance of not dying during a rebuild. 
The difference between an 8 drive vdev made of 5400 rpm drives and a 2x4 disk vdev set made from 7200 rpm drives is maybe 5% total throughput for media streaming applications, but the faster drives will have about 2.2x the total IOPS. This is all bullshit if you have a warm SSD cache device though, because it'll serve 90% of your frequently accessed poo poo lightning fast, and stream TV shows through fine. I would just get any of the current generation SSDs and call it a day. They have enough internal cache to batch up writes from the host to the ZIL files, and they're gonna saturate the SATA bus while still giving you 10k+ IOPS. Some of the new SandForce-based OCZ drives are really awesome. I suppose if you had a large DB being serviced by a 2nd gen OCZ SSD, you could burn them out that fast. If the wear leveling algorithm isn't robust, it can and sometimes will write itself into a corner and hammer at certain memory cells. If you de-rate the flash memory to realistic instead of 'round numbers', and you have a small cache with a crappy write balancer, you could kill an SSD within that month timeframe. @norco: I ziptied 3 120mm fans together and then ziptied them to the rackmount holes in the side of the case, works great and still racks just fine. wang souffle posted:Official link goes to this site: http://www.ipcdirect.net/servlet/Detail?no=258 The 4220 uses SAS connectors, which make wiring a hell of a lot easier. They even make SAS->SATA reverse breakout cables, so you can use the SATA ports on your motherboard without ever having to buy a SAS card. Plus SAS cards are almost always way more reliable and robust compared to a regular 8 port SATA card. They are more pricey though. Methylethylaldehyde fucked around with this message at 04:29 on Nov 2, 2010 |
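The capacity-versus-IOPS tradeoff described there can be sanity-checked with napkin math. The per-disk IOPS figure below is an assumed round number for a 7200rpm drive, not a benchmark:

```shell
DRIVES=8; SIZE_TB=2; IOPS_PER_DISK=80   # assumption: ~80 random IOPS per 7200rpm disk

# Single 8-drive raidz2: usable space of (8-2) drives, random IOPS of roughly one vdev
raidz2_cap=$(( (DRIVES - 2) * SIZE_TB ))
raidz2_iops=$IOPS_PER_DISK

# Two 4-drive raidz vdevs: usable space of 2*(4-1) drives, IOPS scale with vdev count
raidz_cap=$(( 2 * (4 - 1) * SIZE_TB ))
raidz_iops=$(( 2 * IOPS_PER_DISK ))

echo "1x raidz2: ${raidz2_cap}TB usable, ~${raidz2_iops} IOPS"
echo "2x raidz:  ${raidz_cap}TB usable, ~${raidz_iops} IOPS"
```

Same 12TB usable either way; the two-vdev layout roughly doubles random IOPS, at the cost of raidz (single-parity) rebuild risk per vdev instead of raidz2's double parity.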
# ? Nov 2, 2010 04:11 |
|
There's also the 4224 (which supports 24 drives). Norco also offers a 120mm fan bracket for $1 + shipping for the 4224, which helps cut down the noise significantly (and if you replace the 80mm rear fans with quieter versions, it makes things quieter still).
|
# ? Nov 2, 2010 09:49 |
|
Methylethylaldehyde posted:It all depends on the configuration of your vdevs. If you have say 8 drives, and make two 4 drive raidZ sets, you get twice the IOPS of a single 8 drive raidZ2. Both have the same useful capacity, while one has a better chance of not dying during a rebuild. The difference between an 8 drive vdev made of 5400 rpm drives and a 2x4 disk vdev set made from 7200 rpm drives is maybe 5% total throughput for media streaming applications, but the faster drives will have about 2.2x the total IOPS. This is all bullshit if you have a warm SSD cache device though, because it'll serve 90% of your frequently accessed poo poo lightening fast, and stream TV shows through fine. Hm, so doing 2 4x2TB RAID-Zs would give me better IOPS and a better chance of surviving a rebuild, I like it. I might just go for 5400rpm drives then (they still make 5400rpm drives without head-parking/green/etc bullshit right?) and the SSD as a cache. Pick up a 60-80GB-sized Sandforce drive and partition as recommended (2GB ZIL/58GB L2ARC? Bigger ZIL won't help?). I don't have a DB or anything crazy, just files, so I guess I don't have to worry that much. Partition using gparted then assign the resultant devices using zpool? And of course, I guess once I make the new vdevs...how can I "shift" my data from the old drives to the new drives so I can destroy the old vdev and remake it into 2 4x1.5TB vdevs (or replace 'em w/ 2TB drives)?
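For the "assign with zpool" part of the question, the rough shape would be something like the following. This is an untested sketch: "tank" and the cXtYdZ device names are placeholders, and the slice numbers depend entirely on how the SSD was partitioned.

```shell
# Hand the SSD slices to the existing pool as dedicated log and cache devices
zpool add tank log c8t1d0s0      # small slice (~2GB) as ZIL
zpool add tank cache c8t1d0s1    # the rest as L2ARC

# Append a new 4-drive raidz vdev to the same pool
zpool add tank raidz c8t2d0 c8t3d0 c8t4d0 c8t5d0

zpool status tank                # confirm the new vdevs show up
```

One caveat: on the ZFS versions of this era, a data vdev can't be removed from a pool once added, so emptying the old 8x1.5TB vdev means copying the data out and rebuilding the pool, not just detaching it.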
|
# ? Nov 2, 2010 14:34 |
|
Methylethylaldehyde posted:The 4220 uses SAS connectors, which make wiring a hell of a lot easier. They even make SAS->SATA reverse breakout cables, so you can use the SATA ports on your motherboard without ever having to buy a SAS card. Plus SAS cards are almost always way more reliable and robost compared to a regular 8 port SATA card. They are more pricey though. I'm not even sure what 8 port SATA card would work with OpenSolaris. I went straight to SAS for my 8port needs, never looked back.
|
# ? Nov 2, 2010 14:40 |
|
FISHMANPET posted:I'm not even sure what 8 port SATA card would work with OpenSolaris. I went straight to SAS for my 8port needs, never looked back. Supermicro USAS-L8i, LSI 1068E-based. Not technically 8-SATA ports, but you can buy the appropriate breakout to go from 1xSFF-blah to 4xSATA.
|
# ? Nov 2, 2010 14:58 |
|
Methylethylaldehyde posted:The 4220 uses SAS connectors, which make wiring a hell of a lot easier. They even make SAS->SATA reverse breakout cables, so you can use the SATA ports on your motherboard without ever having to buy a SAS card. Plus SAS cards are almost always way more reliable and robost compared to a regular 8 port SATA card. They are more pricey though.
|
# ? Nov 2, 2010 17:17 |
|
movax posted:Supermicro USAS-L8i, LSI 1068E-based. Not technically 8-SATA ports, but you can buy the appropriate breakout to go from 1xSFF-blah to 4xSATA. Yeah, that's what I got. Not sure why you would call it a SATA card in the first place... wang souffle, search for the 1068E card. The Supermicro is kind of weird and backwards, but it's the cheapest, at a little over $100. There's an LSI card using that chipset, called the SAS3081E-R, for around $250, and Intel also makes one that's about $150. If it were me today I'd get the Intel card, but when I got my machine I went with the SuperMicro because Intel didn't make theirs yet. Links: Intel: http://www.newegg.com/Product/Product.aspx?Item=N82E16816117157 LSI: http://www.newegg.com/Product/Product.aspx?Item=N82E16816118100 According to reviews, you might have to flash the firmware on the Intel chip, or it might already come with the right firmware for ZFS.
|
# ? Nov 2, 2010 17:44 |
|
You can pick up an LSI card that will do eight drives with fanout cables for around $50 on ebay.
|
# ? Nov 2, 2010 17:51 |
|
Ok, so ordered a Vertex 2 60GB SSD for ZIL/L2ARC; how should I partition this? 10GB ZIL/50GB L2ARC? Also, shopping around for 2TB drives...might wait for a sale on certain models, but I quickly perused newegg's selection. Ignoring Caviar Green, Barracuda LP and any other green stuff (is WD's Green "AV" drive not suggested?), I pretty much found: Hitachi DeskStar 2TB 7200rpm Samsung SpinPoint F4 2TB 5400rpm(sold out ) Seagate Barracuda 2TB 5900rpm I won't be ordering from newegg probably, because well, gently caress their HDD packing, but if that SSD functions as expected ZIL/L2ARC wise (based on what methylethylaldehyde has suggested), then the Samsung drives look good. Might get a full 16 of them and sell my old disks depending on answer to question in previous post (shifting around data in zpool). e: I guess the F4 is out because ZFS will not function properly with 4K sector drives? Or is it just the WD drives, because they emulate 512b sectors, and letting the F4 appear as a 4K drive, ZFS won't care? Or should I just buy 7K2000s and be done with it? movax fucked around with this message at 21:21 on Nov 2, 2010 |
# ? Nov 2, 2010 19:33 |
|
Those samsung drives are green drives as well. The EG in F4EG stands for their Ecogreen line.
|
# ? Nov 2, 2010 22:39 |
|
movax posted:Hm, so doing 2 4x2TB RAID-Zs would give me better IOPS and a better chance of surviving a rebuild, I like it. I might just go for 5400rpm drives then (they still make 5400rpm drives without head-parking/green/etc bullshit right?) and the SSD as a cache. The ZIL only needs to hold like 5 seconds worth of writes before they're purged to spinning disks. 5 seconds of ~240MB/sec is ~1.2GB, so a 2GB slice is plenty. The rest you can use as regular cache, which is awesome. And yeah, I had the Intel rebrand of the LSI-1068E card, flashed them with the IT firmware and they work great. Easiest way I found to shift the data is to go to best buy, buy two or three 2TB drives, move your data to them, break the vdev, remake it, copy the data back, and zero the drive+return them. Methylethylaldehyde fucked around with this message at 06:59 on Nov 3, 2010 |
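The ZIL sizing above is just burst-length arithmetic; the 240MB/sec figure assumes something like two trunked GigE links (~120MB/sec each), which is an assumption about this setup, not a measurement:

```shell
BURST_SECONDS=5        # how long the ZIL has to absorb synchronous writes before flushing
THROUGHPUT_MB=240      # assumed peak ingest, e.g. 2x trunked GigE

zil_mb=$(( BURST_SECONDS * THROUGHPUT_MB ))
echo "ZIL high-water mark: ~${zil_mb} MB, so a 2GB slice has headroom"
```

Scale THROUGHPUT_MB to whatever the real bottleneck is; on a single GigE link the ZIL requirement is half this.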
# ? Nov 3, 2010 06:39 |
|
|
I'm sure this has been asked dozens of times, but what's generally considered the best OS/distro to go with for ZFS? I'm particularly drawn to freeNAS, as it seems pretty easy to set up and runs so small, but I've seen a few reports that performance suffers on it. I was hoping not to have to devote a large amount of time to setting up and configuring an entire OS, but if need be (say, if OpenIndiana or whatever is significantly better), I guess I can.
|
# ? Nov 3, 2010 08:11 |