|
Transmission's officially packaged by the Sun folks. AFAIK, it can also run headless using transmission-daemon and transmission-cli.
|
# ¿ Feb 2, 2009 12:41 |
|
FMA doesn't know how to deal with USB crap going haywire. One way or another, USB devices in combination with ZFS will be getting some treatment at some point soon, since there coincidentally is a rather large discussion going on about related things on the ZFS mailing list. It also seems that we're finally getting an uberblock selector in the upcoming weeks (meaning if your system hosed up royally on the last transaction group, breaking the pool, you can select the newest still-valid uberblock and start from there).
|
# ¿ Feb 15, 2009 13:20 |
|
movax posted:I believe the ZFS guys recommend no more than 8 (maybe 10) drives to an array
|
# ¿ Apr 30, 2009 09:54 |
|
In the end, it depends a little on the workload. If you don't need maximum performance, you may just ignore that recommendation, unless you go crazy and want to create a 64+2 array. With RAID5, it's just LBA remapping, so you can read from a single disk if the request doesn't cross stripe boundaries. In RAIDZ, a stripe is a whole row, so any read touches all disks, but small writes don't require reading all the other disks to regenerate parity. The zio pipeline is pretty good at IO reordering, however; if ZFS needs to read multiple stripes, it doesn't issue all IOs in order to complete the stripe reads sequentially.
Combat Pretzel fucked around with this message at 18:32 on May 1, 2009 |
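To make the read fan-out concrete, here's a toy model (my own sketch, not ZFS code; the disk count and chunk size are arbitrary assumptions):

```python
# Toy model: disks touched by a small read in RAID5 vs RAIDZ.
# Assumptions (mine, for illustration): 6 disks, RAID5 with a
# 128 KiB chunk per disk, RAIDZ spreading every block across
# all data disks in the vdev.

DISKS = 6
CHUNK = 128 * 1024  # RAID5 chunk size per disk

def raid5_disks_touched(offset, length):
    """LBA remapping: only the chunks the request overlaps."""
    first = offset // CHUNK
    last = (offset + length - 1) // CHUNK
    # distinct data disks the chunk range maps onto
    return min(DISKS - 1, last - first + 1)

def raidz_disks_touched(offset, length):
    """A RAIDZ stripe is a whole row: reads fan out to all data disks."""
    return DISKS - 1

# A 16 KiB read that sits inside one RAID5 chunk:
print(raid5_disks_touched(0, 16 * 1024))   # 1 disk
print(raidz_disks_touched(0, 16 * 1024))   # 5 disks
```

That's the whole trade-off in miniature: the RAID5 small read leaves five disks free for other IO, while the RAIDZ one occupies them all.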
# ¿ May 1, 2009 18:30 |
|
BotchedLobotomy posted:The other benefit (which maybe ZFS has too, I dont know) is that nothing is striped across a drive. So if you lost the parity drive and while rebuilding a second drive failed, you are only out on the data that was on that lost drive. the other X amount of drives in your array are not damaged/lost.
|
# ¿ Jun 9, 2009 10:43 |
|
angelfoodcakez posted:I don't get this. If it normally takes up to two minutes, how is it non-detrimental in a non-raid situation if they kill the recovery at 7 seconds? If it's fine, then why not put it on all drives?
|
# ¿ Jun 14, 2009 13:45 |
|
necrobobsledder posted:Psst, you can use a software utility from WD to enable TLER on even the Green drives.
|
# ¿ Jun 15, 2009 14:03 |
|
necrobobsledder posted:- Black still requires WDTLER utility in a RAID1+ system
Also, what's the idle time for a WD Green drive before it parks its heads? I have a Seagate Momentus in my laptop, which has a 5-second idle timer, which handily conflicts like poo poo with ZFS' 5-second transaction groups. --edit: Nvm, Google says 8 seconds, with the option of installing different firmware or requesting a tool from WD to set it to five minutes. Combat Pretzel fucked around with this message at 21:40 on Jul 26, 2009 |
# ¿ Jul 26, 2009 21:31 |
|
KennyG posted:After about 3 straight weeks of time on a Core2 Quad 9550 I was able to run 37,764,768,000,000 simulations on 'virtual drives' in various RAID configurations to try and put some concrete probability numbers on RAID configurations using standard available hard drives. It wasn't the most efficient method, but I feel it's statistically valid and pretty accurate given the real world anomalies in play.
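A ballpark version of that kind of result doesn't need three weeks of CPU. Here's a much cruder Monte Carlo sketch of one scenario (every parameter is my own assumption for illustration, not KennyG's): the chance a RAID5 array loses data because a second drive dies during a rebuild.

```python
# Crude Monte Carlo: probability that a degraded 8-drive RAID5
# loses data because another drive fails before the rebuild
# finishes. The per-drive failure probability is invented.
import random

DRIVES = 8
P_FAIL_DURING_REBUILD = 0.01   # per-drive, assumed, not measured
TRIALS = 100_000

random.seed(42)
data_loss = 0
for _ in range(TRIALS):
    # one drive has already failed; the array survives only if
    # none of the remaining drives dies during the rebuild
    survivors = DRIVES - 1
    if any(random.random() < P_FAIL_DURING_REBUILD for _ in range(survivors)):
        data_loss += 1

print(data_loss / TRIALS)  # should land near 1 - 0.99**7, i.e. ~0.068
```

With real numbers you'd derive the per-drive probability from the rebuild duration and the drive's MTBF/URE specs; this only shows the shape of the computation.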
|
# ¿ Aug 31, 2009 00:21 |
|
adorai posted:Expansion is coming
Some enabling functionality is coming at some point, once they deem it stable. Deduplication and vdev removal are up first, which coincidentally depend on it. Then encryption. Then maybe block rebalancing. And then, maybe a long way down the road, RAID-Z expansion.
|
# ¿ Oct 6, 2009 14:25 |
|
You guys with the WD Green drives, do you have constant IO pounding the array, or if not, did you adjust the idle timer that parks the heads?
|
# ¿ Oct 18, 2009 23:12 |
|
roadhead posted:I don't "constantly" pound the array though. Its either sustained writes for a particular period, or very light reads over the network (either DLNA or SMB)
Although it seems you can apply the WDIDLE3 and WDTLER tools to the WD Green series.
|
# ¿ Oct 25, 2009 17:26 |
|
roadhead posted:I downloaded these, but does anyone know if there is any way to run these under FreeBSD?
|
# ¿ Oct 25, 2009 22:20 |
|
I got the WD Green 1.5TB disks anyway, and guess what, I suppose one of the two is broken, because it seeks like a loving madman while there's exactly zero IO. --edit: Both drives do this, as soon as the AHCI BIOS initializes them. Is there some sort of long self-test that's initiated on the disks during the first runs? Combat Pretzel fucked around with this message at 22:13 on Oct 28, 2009 |
# ¿ Oct 28, 2009 21:35 |
|
So yeah, I let them have a go at it and kept these WD Greens spinning for a while, letting them do their idle grinding. They have stopped doing it for now, tho I have yet to reboot. Strange bunch of disks. How's the reliability for you guys, especially for the WD15EADS three-platter version? I'm still curious what it was all about, anyway.
|
# ¿ Oct 29, 2009 23:17 |
|
Are you putting UFS or ZFS on the stick? The latter would let the ARC deal with the speed issues.
|
# ¿ Nov 1, 2009 02:11 |
|
Anyone here running (Open)Solaris in a virtual machine and using it as a ZFS fileserver? If so, how's the performance and long-term stability of the VM? I've had my run with OpenSolaris up to today and am switching back to Windows, but I'm still looking for a way to keep using ZFS for my data. I'm not so fond of a dedicated storage box, since my main machine keeps running 24/7, too.
|
# ¿ Nov 21, 2009 12:12 |
|
How are things with BTRFS these days? I keep looking for information, but most of it, especially on "official" sites (like Oracle's), is outdated. It all seems to relate to kernel 2.6.31, while 2.6.34 is already out. I'm interested because I'm looking at a parachute option, since I'm growing weary of Oracle's bullshit related to OpenSolaris.
|
# ¿ May 12, 2010 18:22 |
|
three posted:Why not use FreeBSD with ZFS?
FISHMANPET posted:Nthing this poo poo. I'm stuck at B134 with all sorts of stupid problems (Network doesn't come up on boot, have to manually restart physical:nwam a couple of times, I can't set my XVM vms to start on boot) and I'd love to hit a stable release. But I can't go back to 2009.06 because my data pool is the most recent ZFS version.
Build 134 itself is stable for me. Looking for an exit strategy, tho.
necrobobsledder posted:Linux is what is in line with Oracle's long-term product strategy it seems, so I would expect OpenSolaris to be more or less dead by 2013 anyway and ZFS by 2016 since enterprise customers take a while to be weaned off anything.
But I don't actually care so long as I have a solid storage backend. Then again, Larry Ellison is a human being.
necrobobsledder posted:BTRFS will probably be production ready by the end of 2011 is my guess. HAMMER looks interesting, but DragonflyBSD
At least if FreeBSD would adopt it...
|
# ¿ May 12, 2010 21:59 |
|
If you're using WD Green drives, use WDIDLE3.EXE to disable the loving Intellipark. And WDTLER.EXE to enable TLER. There is talk that the tools don't work with the newest drives, but they did for my WD15EADS, so YMMV.
|
# ¿ May 13, 2010 11:03 |
|
md10md posted:Yeah, I still need to do this. I have 2x750GB WD GPs and they just thrash the load_cycle. One drive has 1.3 million cycles. I really don't get the point of early head parking. It just fucks up your drive mechanics and offers minimal savings.
|
# ¿ May 13, 2010 23:36 |
|
PopeOnARope posted:Can I do something like this in windows? --edit: Whoops, nevermind that.
Well, setting up such a script requires disabling any write caching whatsoever. I think the client versions of Windows have it enabled by default, and I don't know what the time-out on the cache is. You can disable it tho, but it involves some performance hit. Such a script wouldn't work for me, for instance, since ZFS groups up all writes for up to 30 seconds here when there's no considerable activity. --edit2: Wait, you should be able to stop the drive from doing this via APM. I got my laptop drive under control by running hdparm on Linux (which was the installed OS). It should work with WD Green drives, too. This is a Windows equivalent, try this: http://sites.google.com/site/quiethdd/
PopeOnARope posted:\/ Christ. Do I buy seagate or hitachi when it's array time, anyway?
As soon as mine start doing poo poo, they're immediately flying outta the case. Should have gone Black series to begin with, but the Green ones were terribly cheap at 1.5TB, while the Black ones capped at 1TB back then. Combat Pretzel fucked around with this message at 11:22 on May 14, 2010 |
# ¿ May 14, 2010 11:15 |
|
hdparm has to be run on each boot. Running it from a Linux live CD and then rebooting into your actual OS resets the setting to its default. If you're using Linux, AFAIK you can set hdparm to run at boot time; at least Ubuntu had an init script for it. For Windows, use QuietHDD, which I linked earlier. There was something for OSX, which I've tried on my hackpro laptop. On any other OS, like Solaris or BSD, you seem to be out of luck.
|
# ¿ May 14, 2010 16:21 |
|
FISHMANPET posted:Solaris has pretty much said gently caress you to SMART.
I've got a 4+1 RaidZ with 1.5TB Samsungs. But no way to see how they're doing...
|
# ¿ May 14, 2010 19:52 |
|
Methylethylaldehyde posted:Most TV series will show a 5-15% savings, because of the intro and credits sharing almost entirely the same data.
|
# ¿ May 17, 2010 20:11 |
|
Methylethylaldehyde posted:Yeah, that's the savings I saw from a half dozen TV shows I have on my media box now. You might end up with some special snowflake x264 encodes that are somehow different for each and every block, but I'm guessing you'll still see some savings.
Any intro past the exact beginning of the file will not see any savings, because even if the output bitstream is bit-exact for the intro of each episode, it is completely unlikely to line up on block boundaries, since any preceding content is of arbitrary size/length. For blocks to be deduped, they need to be exactly the same. I guess I have to try this myself to believe it.
AbstractBadger posted:When a disk is added to a ZFS pool, is it expected to be empty?
Spare disks are also initialized, since ZFS needs to be able to recognize them as belonging to the pool. Combat Pretzel fucked around with this message at 17:58 on May 19, 2010 |
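You can simulate the alignment problem without ZFS. A rough sketch (a 128 KiB recordsize is assumed, and SHA-256 stands in for ZFS' per-block checksums): identical data prefixed by headers of different lengths shares no block checksums at all.

```python
# Sketch: identical data only dedups when it lines up on block
# boundaries. 128 KiB recordsize assumed; the hash sets stand in
# for entries in the dedup table.
import hashlib
import os

RECORDSIZE = 128 * 1024

def block_hashes(data):
    """Checksum each fixed-size block, as dedup would see them."""
    return {
        hashlib.sha256(data[i:i + RECORDSIZE]).hexdigest()
        for i in range(0, len(data), RECORDSIZE)
    }

intro = os.urandom(4 * RECORDSIZE)        # the shared "intro"
ep1 = intro + os.urandom(2 * RECORDSIZE)  # intro at offset 0
ep2 = os.urandom(1000) + intro            # intro shifted by 1000 bytes

aligned = block_hashes(ep1) & block_hashes(intro)
shifted = block_hashes(ep2) & block_hashes(intro)
print(len(aligned))  # 4 -- aligned intro blocks dedup
print(len(shifted))  # 0 -- a 1000-byte shift kills every match
```

So the quoted savings would require the shared intro data to start at the same offset within the recordsize grid in every file, which arbitrary-length preceding content makes vanishingly unlikely.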
# ¿ May 19, 2010 17:55 |
|
"Used" is the logical amount of data, "Refer" is the actual physical usage. Used should be higher than "Refer". But ZFS also rolls "Used" amounts up into the filesystem hierarchy. See my home folder for example:code:
|
# ¿ May 21, 2010 22:26 |
|
Don't get the WD-RE, get the WD Black. Same drive mechanics and I think also electronics. Ability to change TLER is the unknown here.
|
# ¿ Jun 15, 2010 12:39 |
|
What the gently caress is it with recent SATA drives? What the hell are they doing while idle?! A few months ago, I got two WD Greens. As soon as I attached them, they were grinding the heads for a long while whenever idle. At some point it stopped and now just happens occasionally. I shrugged it off at that point. Now one of them is causing tons of issues, so I switched it out today for a Seagate Barracuda 7200.11 (I read about issues with THAT one after the order went through, but I needed a 1.5TB drive matching the sector count). But instead of booting back into Linux and resyncing the mirror, I said "gently caress it, I'm going to play some games first!" And guess what, the drive is seeking like poo poo at random points in time, untouched, uninitialized. And you don't even get an answer from the drive manufacturers. Back with the WDs, I asked their support for information about it. I got the random WD diagnostic tool spiel.
|
# ¿ Jun 25, 2010 21:26 |
|
Methylethylaldehyde posted:Yeah. You start with a single zpool composed of a raidz vdev. You can add additional vdevs to the pool, preferably of the same type (mixing replication levels makes zpool complain and has to be forced).
|
# ¿ Jul 4, 2010 02:17 |
|
dietcokefiend posted:Save 40 bucks and get the standard Caviar Green 2TB drives perhaps?
|
# ¿ Jul 10, 2010 12:32 |
|
Use WDIDLE3.EXE to disable their loving idle head parking, too. Or else it's going to gently caress up your drives' load-cycle lifetime even under moderate use.
|
# ¿ Jul 26, 2010 13:20 |
|
After 8 seconds of inactivity, the heads are parked off-platter onto a ramp. On the next IO, they're unparked again, until the idle time-out is reached again. If you don't stress your drive enough, you'll rack up a lot of these (un)load cycles, and the drives are only rated for 50,000 or so. WDIDLE3.EXE lets you bump the time-out up to 30 minutes.
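Some back-of-the-envelope math on how fast that burns through the rating (the 8-second timer and 50,000-cycle figure are from above; the access pattern is my own assumption):

```python
# How quickly an 8-second park timer eats a 50,000-cycle rating,
# assuming one burst of IO per minute (an invented light-NAS
# workload), i.e. one park/unpark cycle per minute, 24/7.
RATED_CYCLES = 50_000
CYCLES_PER_HOUR = 60          # one idle gap longer than 8 s every minute
HOURS_PER_DAY = 24            # drive spinning around the clock

days_to_rating = RATED_CYCLES / (CYCLES_PER_HOUR * HOURS_PER_DAY)
print(round(days_to_rating, 1))  # 34.7 -- days to hit the rating
```

Roughly five weeks to exhaust the rated count under that pattern, which is why the million-plus load_cycle counts reported earlier in the thread are entirely plausible.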
|
# ¿ Jul 26, 2010 13:33 |
|
sund posted:Check the Synology forums and FAQs. I think they support some "green" drives.
|
# ¿ Jul 26, 2010 17:55 |
|
Just FYI, since this is at least mildly related: in the next few days, some OpenSolaris community folks and ex-Sun employees are going to announce a real community distro called Illumos. It appears that this is an effort thanks to Oracle being gay about its own OpenSolaris distro (since the next stable release is half a year late now and there aren't any developer builds anymore). --edit: I guess the fact that the Nexenta folks are project leaders, according to their site, might make it a little more related. Combat Pretzel fucked around with this message at 13:49 on Jul 31, 2010 |
# ¿ Jul 31, 2010 13:44 |
|
You have to change your repository manually to update to developer builds from a stable one. If you managed to update to a dev build, you didn't do it by accident.
|
# ¿ Jul 31, 2010 18:17 |
|
Zhentar posted:Does anyone know of any ZFS file recovery tools?
zpool import -f poolname on the command line.
|
# ¿ Aug 2, 2010 12:06 |
|
Crackbone posted:Yeah, I know it's a minefield but setting up a NAS with enterprise drives is just not in my price range, not when it means I'm looking at $600 minimum for 2TB of mirrored storage.
|
# ¿ Aug 3, 2010 16:06 |
|
IOwnCalculus posted:My Samsung HD154s haven't done this - smartctl output:
|
# ¿ Aug 4, 2010 17:01 |
|
IOwnCalculus posted:Huh; must not be tracked by the Samsung drives then, I don't have that at all. Here's smartctl -A for one of them:
ior posted:how worried should I be?
My WD Greens are rated for 50,000 load cycles. That doesn't mean a drive will automatically fail once it gets past that value, but the chances of failure increase. I've read about WD Greens with over 500K cycles. But I think there'll be warranty issues. Combat Pretzel fucked around with this message at 17:11 on Aug 8, 2010 |
# ¿ Aug 8, 2010 17:08 |