Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Transmission's officially packaged by the Sun folks. AFAIK, it can also run headless using transmission-daemon and transmission-cli.
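A minimal headless sketch, assuming the stock Transmission command-line tools (the paths and credentials below are placeholders, check --help on your build):

code:
# Start the daemon headless; config and download dirs are made up:
transmission-daemon --config-dir /var/lib/transmission -w /tank/downloads

# Then drive it over RPC with transmission-remote:
transmission-remote -n 'user:pass' -a something.torrent
transmission-remote -n 'user:pass' -l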

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
FMA doesn't know how to deal with USB crap going haywire.

One way or another, USB devices in combination with ZFS will be getting some treatment soon, since coincidentally there's a rather large discussion about related things going on on the ZFS mailing list. It also seems that we're finally getting an uberblock selector in the upcoming weeks (meaning if your system hosed up royally on the last transaction group and broke the pool, you can select the newest still-valid uberblock and start from there).

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

movax posted:

I believe the ZFS guys recommend no more than 8 (maybe 10) drives to an array
The maximum stripe size is 128KB. The more disks there are, the thinner each stripe gets stretched across them, possibly ending up with lots of small reads across the array.
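To put rough numbers on it (back-of-the-envelope math, assuming the default 128KB recordsize):

code:
# 128KB record on a  4+1 RAID-Z:  128KB /  4 data disks = 32KB per disk
# 128KB record on an 8+1 RAID-Z:  128KB /  8 data disks = 16KB per disk
# 128KB record on a 16+2 RAID-Z2: 128KB / 16 data disks =  8KB per disk
# Every record read touches all the disks, in ever smaller chunks.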

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
In the end, it depends a little on the workload. If you don't need maximum performance, you may just ignore that recommendation, unless you go crazy and want to create a 64+2 array.

With RAID5, it's just LBA remapping; you can read from a single disk if the request doesn't cross stripe boundaries. In RAID-Z, a stripe is a whole row, so any read touches all disks, but small writes don't require reading all the other disks to regenerate parity. The zio pipeline is pretty good at IO reordering, however: if ZFS needs to read multiple stripes, it doesn't have to issue the IOs in order and complete the stripe reads sequentially.

Combat Pretzel fucked around with this message at 18:32 on May 1, 2009

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

BotchedLobotomy posted:

The other benefit (which maybe ZFS has too, I dont know) is that nothing is striped across a drive. So if you lost the parity drive and while rebuilding a second drive failed, you are only out on the data that was on that lost drive. the other X amount of drives in your array are not damaged/lost.
ZFS stripes poo poo as it pleases, influenced by vdev bandwidth load.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

angelfoodcakez posted:

I don't get this. If it normally takes up to two minutes, how is it non-detrimental in a non-raid situation if they kill the recovery at 7 seconds? If it's fine, then why not put it on all drives?
Probably because if it can't be recovered within the 7 seconds, the chances it'll work within the next two minutes are pretty slim.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

necrobobsledder posted:

Psst, you can use a software utility from WD to enable TLER on even the Green drives.
Is that a set-and-forget thing, or do you need to run the tool on every power-up or reboot?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

necrobobsledder posted:

- Black still requires WDTLER utility in a RAID1+ system
Is WDTLER a run-once utility? As in permanently changing a flag, or does it need to be run on every boot?

Also, what's the idle time for a WD Green drive before it parks its heads? I have a Seagate Momentus in my laptop, which has a 5-second idle timer that handily conflicts like poo poo with ZFS' 5-second transaction groups.

--edit: Nvm, Google says 8 seconds, with the option of installing a different firmware and requesting a tool from WD to set it to up to five minutes.

Combat Pretzel fucked around with this message at 21:40 on Jul 26, 2009

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

KennyG posted:

After about 3 straight weeks of time on a Core2 Quad 9550 I was able to run 37,764,768,000,000 simulations on 'virtual drives' in various RAID configurations to try and put some concrete probability numbers on RAID configurations using standard available hard drives. It wasn't the most efficient method, but I feel it's statistically valid and pretty accurate given the real world anomalies in play.
Sun introduced triple-parity RAID-Z in one of their latest Nevada builds. Care to simulate that? :D

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

adorai posted:

Expansion is coming
No it isn't.

Some enabling functionality is coming at some point, once they deem it stable. Deduplication and vdev removal are up first, which coincidentally depend on it. Then encryption. Then maybe block rebalancing. And then maybe a long way down the road, RAID-Z expansion.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
You guys with the WD Green drives, do you have constant IO pounding the array, or if not, did you adjust the idle timer that parks the heads?

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

roadhead posted:

I don't "constantly" pound the array though. Its either sustained writes for a particular period, or very light reads over the network (either DLNA or SMB)
The reason I ask is because, from what I've read, the drive has an eight-second idle timer. Meaning after 8 seconds of inactivity, it'll park the heads. Given ZFS' aggressive prefetcher (hell, watching a movie, it tends to prefetch in 50-100MB chunks) and the up-to-30-second write transaction grouping, the drives may just park and unpark all goddamn day.

Altho it seems you can apply the WDIDLE3 and WDTLER tools to the WD Green series.
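Alternatively, the timing could be attacked from the Solaris side. A hedged sketch of the /etc/system tunable (the name is from the ZFS tuning guides of that era, verify it against your build first):

code:
* /etc/system: shorten the txg flush interval from 30 to 5 seconds,
* so the disks see writes often enough to never hit the 8s park timer
set zfs:zfs_txg_timeout = 5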

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

roadhead posted:

I downloaded these, but does anyone know if there is anyway to run these under FreebSD?
They're DOS apps, you need a boot disk/CD/stick with FreeDOS or something.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
I got the WD Green 1.5TB disks anyway, and guess what, I suppose one of the two is broken, because it seeks like a loving madman while there's exactly zero IO.

--edit: Both drives do this. :(

It starts as soon as the AHCI BIOS initializes them. Is there some sort of long self-test that's initiated on the disks during their first runs?

Combat Pretzel fucked around with this message at 22:13 on Oct 28, 2009

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
So yeah, I let these WD Greens have a go at it and kept them spinning for a while, letting them do their idle grinding. They've stopped doing it for now; I have yet to reboot, tho. Strange bunch of disks. How's the reliability been for you guys, especially for the WD15EADS three-platter version?

I'm still curious what it was all about, anyway.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Are you putting UFS or ZFS on the stick? The latter would let the ARC deal with the speed issues.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Anyone here running (Open)Solaris in a virtual machine and using it as a ZFS fileserver? If so, how's the performance and long-term stability of the VM?

I've had my run with OpenSolaris up to today and am switching back to Windows, but am still looking for a way to keep using ZFS for my data. I'm not so fond of a dedicated storage box, since my main machine keeps running 24/7, too.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
How are things with BTRFS these days? I keep looking for information, but most of it, especially that on "official" sites (like Oracle's), is outdated. It all seems to relate to kernel 2.6.31, while 2.6.34 is already out.

I'm interested because I'm looking for a parachute option, since I'm growing weary of Oracle's bullshit related to OpenSolaris.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

three posted:

Why not use FreeBSD with ZFS?
They're kinda behind in support. I think it does version 13, while my pools are at 22. Also, I'm waiting for 64-bit NVidia driver support to stabilize before I consider that an option.

FISHMANPET posted:

Nthing this poo poo. I'm stuck at B134 with all sorts of stupid problems (Network doesn't come up on boot, have to manually restart physical:nwam a couple of times, I can't set my XVM vms to start on boot) and I'd love to hit a stable release. But I can't go back to 2009.06 because my data pool is the most recent ZFS version.
There is a respin of build 134 out in internal testing; if it passes, 2010.H1 will be based on it and released, and build 138 or something newer will hit /dev. One of the issues I have with Oracle is this secrecy bullshit. Another one, while not directly affecting OpenSolaris yet, is putting up a fence around Solaris and mentioning poo poo like Solaris Next-only premium features. Not sure how that'd pan out long-term.

Build 134 itself is stable for me. Looking for an exit strategy, tho.

necrobobsledder posted:

Linux is what is inline with Oracle's long-term product strategy it seems, so I would expect OpenSolaris to be more or less dead by 2013 anyway and ZFS by 2016 since enterprise customers take a while to be weaned off anything. But I don't actually care so long as I have a solid storage backend.
If Oracle wasn't talking bullshit, OpenSolaris is going to stay. If anything, ZFS is going to be GPLized and hacked into Linux by Oracle. From what's being said, OpenSolaris, or rather Solaris Next, is going to stay the base for big-iron databases.

Then again, Larry Ellison is a human being.

necrobobsledder posted:

BTRFS will probably be production ready by the end of 2011 is my guess.
I'm only considering BTRFS due to feature parity. Practically, I'm not particularly impressed by it; seeing its development pace and history, it's the usual throw-stuff-into-a-pot-and-make-it-work philosophy, just a rough overall design that they're trying to hammer into shape. It's just that there isn't anything else like it in the Linux world yet (don't mention md).

HAMMER looks interesting, but DragonflyBSD :psyduck: At least if FreeBSD would adopt it...

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
If you're using WD Green drives, use WDIDLE3.EXE to disable the loving Intellipark. And WDTLER.EXE to enable TLER.

There is talk that the tools don't work with the newest drives, but they did for my WD15EADS, so YMMV.
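For reference, a sketch of how they're usually run from a FreeDOS boot stick (switches as commonly documented for those utilities; WD shipped several versions, so check the bundled readme or /? first):

code:
REM Report the current Intellipark idle timer:
WDIDLE3 /R
REM Set the timer to 300 seconds instead of the stock 8:
WDIDLE3 /S300
REM Or disable Intellipark outright:
WDIDLE3 /D
REM WDTLER scans the attached WD drives; some versions toggle TLER
REM interactively, others take switches, so check your copy:
WDTLER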

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

md10md posted:

Yeah, I still need to do this. I have 2x750GB WD GPs and they just thrash the load_cycle. One drive has 1.3 million cycles.
Holy poo poo. They're only rated for like 300,000 cycles.

I really don't get the point of early head parking. It just fucks up your drive mechanics and offers minimal savings.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

PopeOnARope posted:

Can I do something like this in windows?
IIRC, these tools only work in real DOS. I had to set up a FreeDOS USB boot stick for that.

--edit: Whoops, nevermind that.

Well, setting up such a script requires disabling any write caching whatsoever. I think the client versions of Windows have it enabled by default, and I don't know what the time-out on the cache is. You can disable it, tho, but that involves some performance hit.

Such a script would for instance not work for me, since ZFS groups all writes up for up to 30 seconds here when there's no considerable activity.

--edit2: Wait, you should be able to stop the drive from doing this via APM. I got my laptop drive under control by running hdparm on Linux (the OS that was installed on it). It should work with WD Green drives, too. This is a Windows equivalent; try it:

http://sites.google.com/site/quiethdd/

PopeOnARope posted:

\/ Christ. Do I buy seagate or hitachi when it's array time, anyway?
No, just don't get Green drives. Spend a few more dollars and get the Black ones. They don't do that poo poo by default. They're faster in the end, too, since Green drives spin at only 5400rpm.

As soon as mine start doing poo poo, they're immediately flying outta the case. Should have gone Black series to begin with, but the Green ones were terribly cheap at 1.5TB, while the Black ones capped out at 1TB back then.

Combat Pretzel fucked around with this message at 11:22 on May 14, 2010

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
hdparm has to be run on each boot. Running it from a Linux live CD and then rebooting into your actual OS resets the setting to its default.

If you're using Linux, AFAIK you can run hdparm at boot time; at least Ubuntu had an init script for it. For Windows, use the QuietHDD tool I linked earlier. There was something for OSX, which I've tried on my hackpro laptop. On any other OS, like Solaris or BSD, you seem to be out of luck.
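A sketch of both variants on Linux (the device node is a placeholder; the stanza follows Debian/Ubuntu's stock /etc/hdparm.conf syntax):

code:
# One-shot from a shell, lost on reboot. 254 is the least aggressive
# APM level short of disabling APM entirely (255):
hdparm -B 254 /dev/sdb

# /etc/hdparm.conf on Debian/Ubuntu, reapplied by the init script:
/dev/sdb {
    apm = 254
}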

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

FISHMANPET posted:

Solaris has pretty much said gently caress you to SMART. I've got a 4+1 RaidZ with 1.5TB Samsungs. But no way to see how they're doing...
I would start to worry when checksum and I/O errors start appearing in zpool status.
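That is, watch the per-device counters (pool name below is a placeholder):

code:
# Nonzero READ/WRITE/CKSUM counters per device are the warning sign:
zpool status -v tank

# A periodic scrub reads and checksums every block, so latent errors
# surface before a resilver actually needs the data:
zpool scrub tank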

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Methylethylaldehyde posted:

Most TV series will show a 5-15% savings, because of the intro and credits sharing almost entirely the same data.
Is that a verifiable fact? Because I'm pretty sure that if the episodes have been encoded in two-pass VBR, the bit allocations for said sections will be different.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Methylethylaldehyde posted:

Yeah, that's the savings I saw from a half dozen TV shows I have on my media box now. You might end up with some special snowflake x264 encodes that are somehow different for each and ever block, but I'm guessing you'll still see some savings.
Hmmm. The only way I see this working is if the intro is at the very beginning of the show, as a frame- and pixel-accurate copy across the episodes. Any minor variance in pixels, or the episode starting at a different frame, will generate different bitstreams and influence the bitrate allocation.

Any intro past the exact beginning of the file won't see any savings, because even if the output bitstream is bit-exact for the intro of each episode, it's highly unlikely to line up on block boundaries, since any preceding content is of arbitrary length. For blocks to be deduped, they need to be exactly the same.

I guess I have to try this myself to believe it.
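For what it's worth, newer builds let you dry-run it instead of committing: zdb can simulate dedup over an existing pool and print the would-be ratio (a sketch, pool name assumed):

code:
# Walks the pool, builds a throwaway dedup table in memory and
# prints a histogram plus the overall simulated dedup ratio:
zdb -S tank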

AbstractBadger posted:

When a disk is added to a ZFS pool, is it expected to be empty?
ZFS kills the MBR and/or GPT if you choose to add it as a whole disk, or kills the partition/slice if you choose one instead.

Spare disks are also initialized, since ZFS needs to be able to recognize them as belonging to the pool.
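For illustration, two typical ways a disk enters a pool (Solaris-style device names, purely placeholders):

code:
# Attached to a mirror: ZFS relabels it, any old MBR/GPT is gone
zpool attach tank c1t0d0 c1t4d0

# Added as a spare: also gets ZFS labels written, so the pool can
# recognize it later
zpool add tank spare c1t5d0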

Combat Pretzel fucked around with this message at 17:58 on May 19, 2010

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
"Used" is the logical amount of data, "Refer" is the actual physical usage. Used should be higher than "Refer". But ZFS also rolls "Used" amounts up into the filesystem hierarchy. See my home folder for example:

code:
NAME                      USED  AVAIL  REFER  MOUNTPOINT
rpool/export             10.3G   433G    23K  /export
rpool/export/home        10.3G   433G    23K  /export/home
rpool/export/home/servo  10.3G   433G  10.3G  /export/home/servo
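If you want that rolled-up "Used" number itemized, newer builds can break it down per dataset (flag per the current zfs manpage, AFAIK):

code:
# Splits USED into snapshot, descendant, refreservation and
# dataset-local portions:
zfs list -o space rpool/export/home/servo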

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Don't get the WD RE, get the WD Black. Same drive mechanics and, I think, also the same electronics. Whether you can still change TLER is the unknown here.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
What the gently caress is it with recent SATA drives? What the hell are they doing while idle?!

A few months ago, I got two WD Greens. As soon as I attached them, they were grinding their heads for a long while during idle. At some point it stopped, and now it only happens occasionally. I shrugged it off at that point.

Now one of them is causing tons of issues, so I switched it out today for a Seagate Barracuda 7200.11 (I read about issues with THAT one after the order went through, but I needed a 1.5TB drive matching the sector count). But instead of booting back into Linux and resyncing the mirror, I said "gently caress it, I'm going to play some games first!" And guess what, the drive is seeking like poo poo at random points in time, untouched, uninitialized.

And you don't even get an answer from the drive manufacturers. Back with the WDs, I asked their support for information about it and got the generic WD diagnostic tool spiel.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Methylethylaldehyde posted:

Yeah. You start with a single zpool composed of a raidz vdev. You can add additional vdevs to the pool, provided each vdev is of the same type.
Actually, you can mix vdev types; it just requires the -f (force) flag. That's to prevent you from doing stupid poo poo, like accidentally adding a disk as a single-disk vdev instead of a spare. ZFS still doesn't support vdev evacuation.
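E.g., bolting a plain mirror onto a raidz pool. Without the flag it refuses with a mismatched-replication-level complaint (device names are placeholders, error text paraphrased):

code:
# Refused with roughly "mismatched replication level: pool uses
# raidz and new vdev is mirror":
zpool add tank mirror c2t0d0 c2t1d0

# Forced through anyway:
zpool add -f tank mirror c2t0d0 c2t1d0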

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

dietcokefiend posted:

Save 40 bucks and get the standard Caviar Green 2TB drives perhaps? ;)
The Green series are pieces of poo poo.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Use WDIDLE3.EXE to disable their loving idle head parking, too. Or else it's going to gently caress up your drive's lifespan even on mediocre use.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
After 8 seconds of inactivity, the heads are parked off-platter onto a ramp. On the next IO, they're unparked again, until the idle time-out is hit once more. If you don't stress your drive enough, you'll rack up a lot of these (un)load cycles. The drives are only rated for 50,000 or so.

WDIDLE3.EXE allows you to bump the time-out up to 30 minutes.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

sund posted:

Check the Synology forums and FAQs. I think they support some "green" drives.
If that thing can handle the APM settings on the drive, you won't need WDIDLE3.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
Just FYI, since this is at least mildly related. In the next few days, some OpenSolaris community folks and ex-Sun employees are going to announce a real community distro called Illumos. It appears this effort is thanks to Oracle stonewalling its own OpenSolaris distro (the next stable release is half a year late now, and there aren't any developer builds anymore).

--edit: I guess the fact that the Nexenta folks are project leads, according to their site, makes it a little more related.

Combat Pretzel fucked around with this message at 13:49 on Jul 31, 2010

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!
You have to change your repository manually to update to developer builds from a stable one. If you managed to update to a dev build, you didn't do it by accident.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Zhentar posted:

Does anyone know of any ZFS file recovery tools?
Try this on the command line:

code:
zpool import -f poolname
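If the pool itself is damaged rather than merely marked as in use, newer builds also have a rewind option that steps back to an older uberblock, i.e. the selector mentioned a while back (flags per the current zpool manpage):

code:
# Roll back to the most recent importable txg, discarding the broken tail:
zpool import -F poolname

# Dry run: only report whether the rewind would succeed:
zpool import -Fn poolname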

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

Crackbone posted:

Yeah, I know it's a minefield but setting up a NAS with enterprise drives is just not in my price range, not when it means I'm looking at $600 minimum for 2TB of mirrored storage.
Anything that claims to be green or power-saving will do things like idle parking. Get some WD Blacks or the equivalent from other manufacturers.

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

IOwnCalculus posted:

My Samsung HD154s haven't done this - smartctl output:
The parking is counted in 'Load Cycle Count'.
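A quick way to check, assuming smartmontools and a drive that actually reports attribute 193:

code:
# Load_Cycle_Count (attribute 193) is the parking counter:
smartctl -A /dev/sda | grep -i load_cycle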

Combat Pretzel
Jun 23, 2004

No, seriously... what kurds?!

IOwnCalculus posted:

Huh; must not be tracked by the Samsung drives then, I don't have that at all. Here's smartctl -A for one of them:
Yeah, I guess it isn't tracked. Practically any green drive does some sort of idle parking. From what I've read, Samsungs do it, too; not sure which models, tho. With over 8500 hours of power-on time, there should be potential for quite a bunch of park cycles.

ior posted:

how worried should I be?
Not sure. I guess it wouldn't hurt to check the technical specs for how many cycles the drives are rated.

My WD Greens are rated for 50,000 load cycles. That doesn't mean a drive will automatically fail once it gets past that value, but the chances of a failure increase. I've read about WD Greens with over 500K cycles. But I think there'll be warranty issues.

Combat Pretzel fucked around with this message at 17:11 on Aug 8, 2010
