Munkeymon
Aug 14, 2003

Motherfucker's got an
armor-piercing crowbar! Rigoddamndicu𝜆ous.





You should know that if you plan on running ZFS and using Samba, you might have problems. My server, running Solaris, had stability issues for months, and then I stopped listening to music stored on the network. All of a sudden my server, which had a maximum uptime of about two weeks (I know, I know, but I don't want to poo poo where I eat, so I didn't mess with it), stayed up for 3 months.

Ars thread with more information:
http://episteme.arstechnica.com/eve...67005626831/p/1

More recently, I installed some RAM that's either bad or that the motherboard hates, and Solaris crashed and corrupted a file* that prevents it from booting at all, so I figured I'd try BSD and see if it worked. Unfortunately, it's refusing to recognise the zpool (which I admit I was not able to properly export). Back to Solaris now; hope the newest version of the Developer Edition is more stable. Also, I hope it will import the array, because if not I'm really gonna

That said, I had a lovely SATA cable that was taking one of the disks in my array offline all the time (until I replaced it; I'm not totally helpless, damnit), and ZFS handled it pretty well, although sometimes the machine would freeze when the drive went away. Never lost any data, though.

*Disclaimer: Solaris was installed to a disk off the array and used a non-redundant filesystem. For some reason, I never thought to dump an image of it until today

Munkeymon fucked around with this message at 03:23 on Mar 21, 2008


Munkeymon
Aug 14, 2003

Toiletbrush posted:

FreeBSD only supports pool version 2. That and apparently GEOM's interfering. If it ain't either of these, you can supply the -f flag to zpool import. Exporting the pool is just a management semantic.

That doesn't make sense based on what I found on their wiki (http://wiki.freebsd.org/ZFSQuickStartGuide): they support features from at least version 8, though I can see that the features don't exactly stack, and so some could be skipped for lower-hanging fruit. Besides, the -f flag doesn't do anything when the system swears there are no pools, or if it simply can't start ZFS in the first place.
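For anyone who ends up in the same spot, this is roughly the standard dance for moving a pool between machines (a sketch; 'tank' is a hypothetical pool name, and -f only matters when the pool was never cleanly exported):

```shell
# On the old system, before pulling the disks (if it still boots):
zpool export tank

# On the new system: list the pools visible on attached disks
zpool import

# Import by name; -f forces it if the pool was never exported
zpool import -f tank

# Sanity-check the result
zpool status tank
```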

I thought I read somewhere that export wrote some extra metadata, but I could easily be wrong since all my research is a year old at this point.

quote:

As far as Solaris not booting, if GRUB and failsafe mode still work, the boot archive is hosed. Ain't a biggie, since you can recreate (i.e. update) it in failsafe mode.

I'd rather have the newer system going, and the only things I care about are the pool and the Azureus install, which is only valuable because it required a retarded amount of effort to get working.

quote:

Might consider looking into the most recent Nevada builds. They now come with a CIFS server written by Sun based on the actual Microsoft documentation. If you have another month, you should wait for the next Developer Edition based on snv_87, which apparently comes with an updated version of the Caiman installer that supports installing to ZFS and setting up ZFS boot (single disk or mirror pool only). I'd figure that boot files on ZFS are more resilient to random crashes loving up your boot environment.

I'd much rather get an AMD64 build running, but that apparently means conjuring nforce drivers out of thin air, which I'm not up for. Maybe I will just get a minimal-effort system running and ride it out until the next version if it's that close :\ I miss the warm, fuzzy feeling of having nightly automated backups.

Munkeymon fucked around with this message at 09:40 on Mar 22, 2008

Munkeymon
Aug 14, 2003

Toiletbrush posted:

What I see is up to version 5, which is gzip. The features above that are not really that important (yet), but I figure that zpool throws a fit if the version's higher. Actually, I don't even know how it'd behave on a higher pool version than what's supported. Maybe it just fails silently.

Oh sorry, I read here: http://wiki.freebsd.org/ZFS that they have delegated admin going, which is a version 8 feature. The pool it failed to notice was at version 3, so I don't think it was a version problem.
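If anyone wants to check this on their own setup, zpool can report both a pool's on-disk version and what the local implementation supports (a sketch; 'tank' is a hypothetical pool name):

```shell
# What version is the pool itself at?
zpool get version tank

# Which versions (and the features each adds) does this build understand?
zpool upgrade -v
```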

quote:

I don't get what you mean. You've already set up FreeBSD? Fixing the boot archive is one single line. Actually, the more recent Nevada builds should notice it themselves when booting to failsafe and ask you if it should be updated.

bootadm update-archive -R /a

(Since in failsafe mode, it mounts your root fs to /a)

Yeah, I installed FreeBSD on a spare drive that I swapped in, and ZFS didn't work. I got an error message about it being unable to initialize the ZFS system, and I couldn't find anything helpful on Google, so I installed the latest Solaris Express Developer over it.

I did try updating my old install (~1 year old now), but the installer said there wasn't enough space on the drive. I don't see why because that drive has 62 GB free on slice 7, though I may be misunderstanding the update procedure.

quote:

Uhm. I still run snv_76, half a year old, and it's pretty stable on my NForce4 mainboard. And it boots to 64bit mode. They've shipped drivers for the Nvidia SATA controller since I think snv_72.

servo@bigmclargehuge:~ > modinfo | grep nv_sata
38 fffffffff7842000 5b88 189 1 nv_sata (Nvidia ck804/mcp55 HBA v1.1)

All I know is that when I did install the AMD64 build last year, no disk plugged into a SATA controller would show up in /dev/dsk, and the only mention I could find of a similar situation (nforce 5 SATA controller not working) was 'solved' by giving up on the normal build and using the developer express package instead. I guess that's changed since then, so I'll probably have to give it another try. Did you have to do any special configuration for that, or did everything work right from the get-go?

On a side note, I can't believe you use > in your prompt. I'd constantly be checking to be sure I wasn't redirecting output into an executable file

Munkeymon
Aug 14, 2003

Toiletbrush posted:

Must be that GEOM thing. I think ZFS in FreeBSD can't handle full disk vdevs properly, since Solaris partitions them as GPT disk when you slap ZFS all across it and GEOM kind of trips over itself with these. At least I think that was a limitation some time ago, that you had to set up a full disk BSD slice first and make it use that.

That sounds plausible, but it's kind of crappy when disk management is tripping around in the first place.

quote:

The Developer Editions are actually pretty stable. The development process of Solaris Nevada is pretty cool (and I guess similar to FreeBSD's). Everything has to be tested, then approved, then tested again before it can go into the main tree that'll become the Community and Developer editions. As said, I'm running a Nevada build and it's mighty stable.

Yeah, I read about that, but I just don't see the stability in the graphical environment. For example, I enabled a service the other night through the service management GUI application and as soon as I checked the box, the machine stopped responding and I had to reset it after 5 minutes of it not responding to the Num Lock key. Maybe I should have spent more for 'real' server hardware, but that and the hard locking caused by using WinAmp over SMB look more like software problems to me.

quote:

Slice 7 is /export with the standard layout, and / with all the rest is slice 0. The loving dumb thing with the current Solaris installer is that it sizes said slice more or less close to what it needs to install the system. If you try a regular update to the same slice, you'll be out of luck.

I had assumed it would just move or copy everything on slice 0 to someplace on 7 and then merge new files in, but I guess the installer isn't that smart.

quote:

If you still want to run Solaris and get rid of these silly hassles, wait a month for the snv_87 Community Edition, following the next upcoming Developer Edition. The new graphical installer (which you access under the Developer Edition boot option) will support ZFS root and boot. That way, you won't have to deal with mis-sized slices on upgrades anymore. Snap Upgrade will also be integrated, which is like Live Upgrade but for ZFS and taking advantage of it.

(ZFS boot currently works only on single-disk or single-mirror pools, so you need a separate pool on your system disk.)

I'm also waiting for that build. If you intend to use GUI stuff on your server (locally or via XDMCP), snv_88 will have Gnome 2.22 integrated; you don't want that, because it appears the new GVFS stuff makes it crash-happy.

I don't really want Gnome at all because I prefer KDE. I do use the GUI, though, for the torrent client.

Also, putting the system root in the pool isn't really something I care to do. I see the advantage to making it redundant, but I'm not confident in the updatability of that kind of setup. The built-in update manager never worked on my old install, for example, and it was easy to just swap out the hard disk and gently caress around with BSD without having to worry about stepping on other bootloaders.

Will the community edition ever come out in 64-bit, do you think? You seem way more knowledgeable about the Solaris community than I am. Oh, and what about the blurb on the download page that says it's 'unsupported'? Is that Sun speak for 'you're pretty much on your own'?

quote:

Nope. Pre snv_72, the SATA drives acted like regular IDE drives towards the system. I thought that was normal behaviour from the Nvidia SATA chipset. With snv_72, the Nvidia SATA driver was released and the disks act like SCSI disks now (which they're apparently supposed to).

Well on my first try with a 64-bit build the system didn't seem to be able to see them at all, IDE or no.

My guess would be that the system doesn't expect PATA drives to be hot-swappable like SATA is and someone had to hack in something ugly to make that workable.

Munkeymon
Aug 14, 2003

Toiletbrush posted:

What I was saying is that, if you were to use a Solaris build with the new revision of the installer, you should create a separate pool on the separate disk you ran your system on. It won't be redundant, but you get the advantages of pooled storage (making the fixed-slices crap go away) and Snap Upgrade, which'll employ snapshot-and-clone magic to update your system during regular operation (and make it available on reboot). The pool would be separate from your data pool.

I understand now. I actually named the pool 'pool' because I was being extra clever that day and so I was probably tripping on terminology.

quote:

On boot, it's decided whether it loads the 32bit or 64bit kernel. The userland is mainly 32bit, but ships 64bit versions of most libraries. Components like Xorg are available in both versions, which version's loaded is decided with isaexec magic.

I get that, but I'd much rather just decide for it

quote:

That's strange. I figured that with the old ATA chips, the situation is similar to SATA's AHCI, that there's a generic way to operate them.

Yeah, I get 'that's strange' a lot. I really need to spend some time on their forums and figure out if the poo poo I'm dealing with is hardware related or if I'm just having super weird software issues.

Munkeymon
Aug 14, 2003

Is https://www.newegg.com/Product/Prod...=9SIA5AD5YN7412 a decent deal or is Gold one of those lines that's actually trash somehow? I'd probably put it in my desktop mostly for a place to put local nightly backup images and other big files I don't want included in my nightlies.

Munkeymon
Aug 14, 2003

Paul MaudDib posted:

It's an Archive drive, which are shingled (SMR) drives; they tend to have pretty poor performance on random writes.

Huh, that's a new one on me. I guess I'll just get one of those enclosures and shuck it, since it's not that much more money and it's more room anyway. Thanks

IOwnCalculus posted:

If you're going to do just one drive, why not get that external red for $180 and just leave it in the enclosure?

I don't want the nightly to depend on an external device.

Munkeymon fucked around with this message at 20:39 on Nov 7, 2017

Munkeymon
Aug 14, 2003

Paul MaudDib posted:

And now 8 TB Easystores are down to $130.

Jesus christ BestBuy, my bank account can't handle this...

https://www.bestbuy.com/site/wd-eas...black/6110900.p

quote:

This item is currently unavailable for online purchase. The item was not added to your cart.

Dangit. Oh well, I'll just stop by after work, I guess.

Munkeymon
Aug 14, 2003

Yeah, that must be what happens when they sell out but the backend hasn't quite synced up with the front end to communicate it, because now it actually says sold out. Oh well.

Microcenter wouldn't match that would they?

Munkeymon
Aug 14, 2003

Paul MaudDib posted:

Looks like Best Buy got to the package, UPS says delivery was cancelled today. Nothing on the early BF deal but I'm not overly hopeful, I'm sure they were slammed with orders.

edit: Actually there are still some other "Black Friday" deals still up... so maybe it'll make it. Dunno, don't count your chickens until UPS accepts the package.

I mean, I guess if you bought enough of them and some USB adapters you could make a big-ish pool in ZFS

Munkeymon
Aug 14, 2003

IOwnCalculus posted:

Or shuck them

I was making a dumb joke because he linked to SD cards in the packrat thread

Munkeymon
Aug 14, 2003

Is https://www.newegg.com/Product/Prod...N82E16822235160 shuckable? I keep coming up with stuff about shucking a very different-looking enclosure when I try to find instructions and NewEgg has it at $170 through their email deals today

Munkeymon
Aug 14, 2003

I'm still not used to Athlon being the low-end low-power line from AMD :\

Anyway, I should probably rebuild my NAS and get it off the ten-year-old dual-core Athlon that's been merrily chugging along forever even as drives died out from under it. I'm thinking I should wait and see what happens with Raven Ridge, since I won't have time to gently caress with it period until January, but I'm open to suggestions. Main wrinkle is that I have a raid-z with seven disks (and a hot spare - I'm not totally insane), and plugging them all into the board rather than loving with controller cards would be really nice.

Munkeymon
Aug 14, 2003

Paul MaudDib posted:

The question was rsync vs dircopy/xcopy type tools, if you are using zfs send you probably know about the merits of rsync.

While we're on the topic though - I used zfs send to move a backup to another ZFS pool. Let's say we now have mainpool/myfs@snap1 and otherpool/myfs@snap1. I have made changes on mainpool/myfs, and I now have mainpool/myfs@snap2. How do I send the second snapshot over while preserving whatever context ZFS needs to know that snap1 and snap2 share blocks? Would that be zfs send -I mainpool/myfs@snap1 mainpool/myfs@snap2? And what would go in the rest of the recv command there? zfs recv otherpool?

Sending the second snapshot the same way you did the first should only transmit the delta (as long as the receiving FS has the first, obs). https://github.com/jimsalterjrs/sanoid/ (syncoid, partway down the readme) might help make the process easier than dealing directly with zfs send - I honestly wish I had found that/it had existed the last time I dealt with snapshot replication across servers.
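Using the hypothetical names from the quoted post, the incremental send looks roughly like this (a sketch; -i sends just the delta between the two snapshots, while -I would also replicate any snapshots in between):

```shell
# Initial full send (the step already done in the quoted scenario)
zfs send mainpool/myfs@snap1 | zfs recv otherpool/myfs

# Incremental: only the blocks changed between snap1 and snap2 move
zfs send -i mainpool/myfs@snap1 mainpool/myfs@snap2 | zfs recv otherpool/myfs
```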

Munkeymon
Aug 14, 2003

Is there a way/setting to make a normal SATA port on a regular consumer mobo support hot swapping or do you have to have some kind of hardware in between?

Can't believe I've never thought to look into this before but hey first time for everything right?

Munkeymon
Aug 14, 2003

I still use a ten-year-old https://www.newegg.com/Product/Prod...N82E16811119093 but I only bought one extra hard drive stack insert (it comes with one, so I can get 8 in there safely), so unless I can find another on eBay or whatever, I have to gently caress with bay adapters to put more drives in. Other than the USB2 on the breakout box, it's basically the perfect NAS case, though.

Munkeymon
Aug 14, 2003

SamDabbers posted:

This has been my strategy as well. I've been running single-vdev raidz2 pools with 6-8 drives for years, and am considering switching to multiple mirrored vdevs with the 8TB Easystores I recently picked up for the ease of expansion and fast resilvers without having to drop $1000 every handful of years. The increased IOPS would be nice, but not necessary.

Anybody else gone this route? Has the space/convenience/performance tradeoff been worth it to you?

Do you have a link to a howto for this? My RAIDZ setups (both of them) have been very simplistic, and I've only expanded one of them once by buying new disks, but I'm having trouble picturing this mirrored-vdev structure you guys are on about.
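For comparison, the mirrored-vdev layout people are describing is just a stripe across two-disk mirrors, grown a pair at a time. A sketch with hypothetical Solaris-style device names:

```shell
# Create a pool striped across two 2-way mirrors (four disks total)
zpool create tank mirror c1t0d0 c1t1d0 mirror c1t2d0 c1t3d0

# Expand later by adding another mirror pair - no rebuild of existing data
zpool add tank mirror c1t4d0 c1t5d0

# Resilvering a failed disk only copies from its mirror partner,
# which is why rebuilds are much faster than a raidz resilver
zpool status tank
```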

Munkeymon
Aug 14, 2003

Thermopyle posted:

It's just automatic backup, but if people would just know to turn it on it's easy and a no brainer.

It really only protects against hardware failure, though, so it's not going to help with even minimally competent cryptolockers, which I assume are the most common source of data loss now that most new machines ship with an SSD.

You can't even point it at a network share unless you dig down into the Windows 7 version, which I just tried to do on my work laptop and it told me it couldn't use the network path for the usual cryptic non-reasons.

Harik posted:

I'd like to add that the space used is almost entirely about the retention length, not density of snapshots. I haven't used ZFS, but most snapshot systems have very little overhead, it's almost entirely taken up by files you've deleted since then.

Do you really belong in this thread if you delete stuff? I mean, come on

Munkeymon
Aug 14, 2003

Thermopyle posted:

I'm backing up to a network share with file history right now on Windows 10.



For the average consumer I feel like the most common source of data loss is accidentally deleting stuff, but maybe I'm wrong.

Yeah, I tried to add a network path through that and it fails with a uselessly generic error code, but it is a work machine+network so IDK what the deal is with the server on the other end. There's also a lot of digging to find that option. Main point is, anything Windows can write to without demanding a password from the user is probably toast if you get ransomwared, so it's probably only good for hardware failure*.

That's what the Recycle Bin is for?

*unless you're backing up to a NAS with regular snapshots it can roll back to but this scenario is way outside of 'average user' territory

e: The way I did it, because I was hand-rolling my own snapshot management 11 years ago and am too lazy to update it, was to give the NAS (Solaris/OpenIndiana) access to a share of my backup snapshots on my desktop so it can actively pull them and the desktop doesn't need write access. The script stores them in part of the file tree that's shared read-only to all clients. Never been tested with a cryptolocker, but that's what I came up with when I heard about them.
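A rough sketch of that pull-style arrangement, with hypothetical share and dataset names (mount syntax varies by OS; the point is that the desktop never gets write access to the backup tree):

```shell
# Runs on the NAS from cron. The desktop exports its backup images;
# the NAS mounts them read-only and pulls them into its own dataset.
mount -o ro //desktop/backups /mnt/desktop-backups
rsync -a /mnt/desktop-backups/ /pool/backups/desktop/
umount /mnt/desktop-backups

# Snapshot after each pull; clients only ever see this tree read-only
zfs snapshot pool/backups/desktop@$(date +%Y%m%d)
```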

Munkeymon fucked around with this message at 17:50 on Jan 22, 2018

Munkeymon
Aug 14, 2003

necrobobsledder posted:

Unless snapshots are visible via SMB, CIFS, or NFS I don't see how it'd be possible for a client to delete them. Hell, given a snapshot is immutable I'd presume the worst thing to happen from a client could be deleting snapshots even if tons of access was granted, but I've never seen a UI or CLI command from a client that would let me see, create, or delete all these serverside snapshots. I have different logins to my servers as admin than I do as clients so even if I got infected I'd just lose local data and for the past maybe 24 hours. Hell, I'd be more concerned with cloud filesystems like via Google Drive or iCloud than my local files because I dunno how to recover cloud data effectively from providers.

The laziness is that dealing with a daily image and the associated file additions+deletions 'correctly', without wasting a bunch of space, would involve the snapshot configuration mirroring the image creation cadence, so I'd have to gently caress with the snapshot scripts if I hosed with the backup config, and well

This system has outlasted a couple of desktops and I haven't had to mess with snapshot scripts, so I'm OK with it, but this is part of why I'm eyeing this year to do a clean rebuild of the controlling system with modern tooling and maybe kick the legacy Solaris stuff to the curb. Especially now that I don't mind paying for some software that'll save me a weekend (lol, it's never just one weekend) of hot hot manpage action.

Munkeymon
Aug 14, 2003

Anime Schoolgirl posted:

you didn't diassemble them to get at the toy platters smh

And magnets! You can get some crazy powerful magnets out of old server drives

Munkeymon
Aug 14, 2003

Paul MaudDib posted:

Gigabyte has a C3958-based NAS board out as well.

Oddly, the C3958 is actually slower than the C3955 because it's clocked 20% lower. Weird, not sure what the differentiator between them is then. More NIC channels unlocked or something?

I've never messed with SFF-8087 stuff before - can I just buy the cheapest SFF-8087 port to SATA hydra adapter off Amazon and expect it to work and perform well or is there some controller in there I'd have to worry about being flaky garbage?

Munkeymon
Aug 14, 2003

Methylethylaldehyde posted:

Do keep in mind, there are breakout and reverse breakout versions of the SAS->SATA cable hydra, one of which works great, the other....doesn't.

What's a reverse breakout? Plugging an SFF-8087 storage device into four host SATA ports?

Munkeymon
Aug 14, 2003

I could have sworn enabling jumbo frames improved my throughput to my NAS years ago. Am I misremembering, or is that a real thing?

Munkeymon
Aug 14, 2003

DrDork posted:

Very real thing. They just often don't play well with sending things outside your network, as many ISPs are not at all happy about a MTU over 1500.

Oh, yeah, to be clear, both the NAS and the desktop had two ethernet jacks, so they had their own private line to each other that ran 4k frames or whatever the max was they both supported. I'm not enough of a network wizard to have internal on jumbo and external on regular outside of the idiot babby sandbox of physical cable plugs from A to B

Still a possibility for people to consider!
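For anyone wanting to try the same point-to-point jumbo setup today, this is roughly what it looks like on a Linux box (a sketch; the interface name, MTU, and peer address are assumptions, and both ends have to agree on the frame size):

```shell
# Bump the dedicated NAS-facing interface to jumbo frames
ip link set dev eth1 mtu 4000

# Verify the path actually carries them: payload = MTU minus
# 28 bytes of IP+ICMP headers, with fragmentation forbidden (-M do)
ping -M do -s 3972 192.168.10.2
```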

Munkeymon
Aug 14, 2003

Does anyone know offhand how to figure out a drive serial number given the output of zpool status? I've misplaced the piece of paper I had that mapped motherboard port numbers to drives for a homemade NAS that has a failing drive and I'd rather not play guess and check to figure out which one to swap out.

Munkeymon
Aug 14, 2003

Dunno why they call that Not Safe for Wallet because it sounds extremely reasonable to me

Munkeymon
Aug 14, 2003

necrobobsledder posted:

In situations like this, where I have a hotswap bay setup, I look for the drive that's failing and run a dd if=foo of=/dev/null on it and look at the lights for the drive. Otherwise, I hope via lspci -vvv output or from dmesg that something is there. Otherwise, it's off to the olde Google manual.

It's a NAS in that it's old computer parts with big hard drives attached - nothing so fancy as hot-swap bays :\

IOwnCalculus posted:

smartctl -i /dev/whatever if you just need the serial, the above is good to find the drive itself if you have hot swap bays.

That sounds familiar (this has happened before), which is why I took the time to note the mapping. Too bad I didn't tape it to the inside of the case or something smart like that. Of course I also didn't bother to write down how I figured it out.
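For next time, the mapping can be regenerated by walking the devices zpool status reports and asking smartctl for each one's serial. A Linux-flavored sketch ('tank' and the sdX-style names are assumptions; on Solaris the device names and paths differ):

```shell
# Print the serial number for every disk zpool status lists, so the
# failing device can be matched to a physical drive label
for dev in $(zpool status tank | awk '$1 ~ /^sd/ {print $1}'); do
  printf '%s: ' "$dev"
  smartctl -i "/dev/$dev" | awk -F': *' '/Serial Number/ {print $2}'
done
```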

Methylethylaldehyde posted:

Because it's a rabbit hole that ends with you buying tape storage off ebay to set up redundant data locations for your 73TB of perfectly curated 50s television rips.

I mean, I already have an onsite backup NAS (with a failing drive!) so I'm probably beyond help already

Munkeymon
Aug 14, 2003

dexefiend posted:

Yes. I love that I have a ridiculous beast workstation that didn't have a ridiculous budget.

I just recently built bought an Unraid media server. For it, I used a $450 Dell R710 I found using labgopher.

I would have built another E5 Xeon, but my budget was only $450 and I wanted a rackmount case. I couldn't make it work building it myself.

The horsepower/dollar ratio in the V1/V2 E5 Xeons is crazy.

Dang, labgopher is exactly what I didn't know I needed for comparison shopping. Too bad I don't know gently caress all about server parts other than that they cost too much new

adorai posted:

I have been buying these.

https://www.ebay.com/itm/Western-Di...5.c100012.m1985

A variety of sellers, they occasionally drop to $150ish.

Quoting this so I can find it later. Thanks!

Munkeymon
Aug 14, 2003

D. Ebdrup posted:

Well, that was sort of true on a very old version of ZFS (the one you could manually add to FreeBSD 6.3 with a bit of backporting from 7-CURRENT if you were brave enough a decade ago in 2008*)

I just used Solaris 10 and then OpenIndiana because I started my first build in '07; I'm lazy and just wanted to copy settings files to the new one and have them (mostly) just work. Solaris' role-based permission system was nice.

Seriously thinking about switching to BSD with the next one, though.

Munkeymon
Aug 14, 2003

Combat Pretzel posted:

OpenSolaris finally dead then? Oh my.

OpenIndiana got an update as recently as August, so it's not dead dead, but I can't imagine there are too many other sad-sack partisans clinging to it.

Munkeymon
Aug 14, 2003

Are people using btrfs because of licensing issues with CDDL? I can't imagine any other reason to care about it when ZFS already exists and has been basically bulletproof from day zero, but I'm also not super plugged into these things, so I could easily be missing some key information.

Munkeymon
Aug 14, 2003

Yeah, you'll make back the extra ~1k you spend to get the equivalent capacity in a mere century or so.

Munkeymon
Aug 14, 2003

That's not gonna help much if you're running a torrent client on the thing

Munkeymon
Aug 14, 2003

apropos man posted:

How bout this idea for a seedbox: you run the entire contents of your torrent data in RAM and it only wakes the disk when you've finished leeching, dumps the data and then sleeps the disk again.

Super efficient for disk energy consumption and only requires enough RAM for whatever you're seeding/leeching at any one time.

So that's about 64GB RAM on an average seedbox, then

Spinning up a drive takes more power than keeping it spinning for, well, some amount of time that'll depend on the drive's construction. IIRC, spin-up/down cycles also cause more wear than steady spinning, again, for some time depending on the drive. What I'm getting at is min-maxing your microwatts of savings in your home is probably not worth the time and energy you'd spend thinking about it.
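That said, for anyone who wants to experiment with it anyway, drive spin-down timers are exposed through hdparm on Linux (a sketch; the device name is an assumption, and in this range -S counts in 5-second units):

```shell
# Spin the disk down after 20 minutes idle (240 * 5 s = 1200 s)
hdparm -S 240 /dev/sdb

# Check the drive's current power state without waking it up
hdparm -C /dev/sdb
```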

Now, if you run a datacenter where SANs energy usage can be measured in city block equivalent units, then what the hell are you doing reading this thread for advice

Munkeymon
Aug 14, 2003

Friends of mine had an SSD die on them over the weekend. It was only 128GB, I think; they pretty much filled it up right away and it lasted about 9 months. Not 100% sure if it wore out, but it came up blank after a bluescreen, and when I tried to run recovery tools on it, the log was a solid wall of read errors.

Munkeymon
Aug 14, 2003

DrDork posted:

I seem to remember Linus Tech Tips being taken down a few years ago because they were doing some idiotic RAID0 setup

Doing dumb poo poo and making videos about it is his whole shtick, though?

Munkeymon
Aug 14, 2003

The Diddler posted:

I do this. I have it set up to rclone my monthly VM backups, some videos that I may want to rewatch in the future, and I have an encrypted repository for sensitive documents. It took a while to get set up properly, but it's nice to have offsite backups of important stuff.

You still have to get some friends to go in on it with you to get the unlimited storage (or eat the whole $60/mo cost) right?

Munkeymon
Aug 14, 2003

No performance benefits over what? Btrfs, where the stability page has a bunch of things labeled "mostly OK"? OK, maintainer.


Munkeymon
Aug 14, 2003

D. Ebdrup posted:

He's talking about performance benefits compared to ext4, a filesystem based on FFS in BSD, that can't detect or correct silent data corruption, isn't atomic with respect to its writes, and has been dismissed by Ted Ts'o.
Linus is a smart guy, there's no doubt about that - but filesystems and randomness are two areas he's demonstrated many times by his words and actions that he doesn't have a loving clue about.

lmao what a turd - clearly we should all still be on FAT32 because with such low overhead it must be faster
