|
You should know that if you plan on running ZFS and using Samba, you might have problems. My server, running Solaris, had stability issues for months and then I stopped listening to music stored on the network. All of a sudden my server, which had a maximum uptime of about two weeks (I know, but I don't want to poo poo where I eat so I didn't mess with it) stayed up for 3 months. Ars thread with more information: http://episteme.arstechnica.com/eve/forums/a/tpc/f/24609792/m/567005626831/p/1 More recently, I installed some RAM that's either bad or that the motherboard hates, and Solaris crashed and corrupted a file* that prevents it from booting at all, so I figured I'd try BSD and see if it worked. Unfortunately, it's refusing to recognise the zpool (which I admit I was not able to properly export). Back to Solaris now; hope the newest version of the developer edition is more stable. Also I hope it will import the array because if not I'm really gonna That said, I had a lovely SATA cable that was taking one of the disks in my array offline all the time (until I replaced it; I'm not totally helpless damnit) and ZFS handled it pretty well, although sometimes the machine would freeze when the drive went away. Never lost any data, though. *Disclaimer: Solaris was installed to a disk off the array and used a non-redundant filesystem. For some reason, I never thought to dump an image of it until today. Munkeymon fucked around with this message at 04:23 on Mar 21, 2008 |
# ¿ Mar 21, 2008 04:19 |
|
|
|
Toiletbrush posted:FreeBSD only supports pool version 2. That and apparently GEOM's interfering. If it ain't either of these, you can supply the -f flag to zpool import. Exporting the pool is just a management semantic. That doesn't make sense based on what I found on their wiki: http://wiki.freebsd.org/ZFSQuickStartGuide they support features from at least version 8, though I can see that the features don't exactly stack, and so could be skipped for lower-hanging fruit. Besides, the -f flag doesn't do anything when the system swears there are no pools or if it simply can't start ZFS in the first place. I thought I read somewhere that export wrote some extra metadata, but I could easily be wrong since all my research is a year old at this point. quote:As far as Solaris not booting, if GRUB and failsafe mode still work, the boot archive is hosed. Ain't a biggie, since you can recreate (i.e. update) it in failsafe mode. I'd rather have the newer system going, and the only things I care about are the pool and the Azureus install, which is only valuable because it required a retarded amount of effort to get working. quote:Might consider looking into the most recent Nevada builds. It comes now with a CIFS server written by Sun based on the actual Microsoft documentation. If you've another month time, you should wait for the next Developer Edition based on snv_87, which apparently comes with an updated version of the Caiman installer that supports installing to ZFS and setting up ZFS boot (single disk or mirror pool only). I'd figure that boot files on ZFS is more resilient to random crashes loving up your boot environment. I'd much rather get an AMD64 build running, but that apparently means conjuring nforce drivers out of thin air, which I'm not up for. Maybe I will just get a minimal-effort system running and ride it out until the next version if it's that close :\ I miss the warm, fuzzy feeling of having nightly automated backups. 
Munkeymon fucked around with this message at 10:40 on Mar 22, 2008 |
# ¿ Mar 22, 2008 10:30 |
|
Toiletbrush posted:What I see is up to version 5, which is gzip. The features above are not really that important (yet), but I figure that zpool throws a fit if the version's higher. Actually, I don't even know how it'd behave on a higher pool version than what's supported. Silence might just be it, perhaps. Oh sorry, I read here: http://wiki.freebsd.org/ZFS that they have delegated admin going, which is a version 8 feature. The pool it failed to notice was at version 3, so I don't think it was a version problem. quote:I don't get what you mean. You've already set up FreeBSD? Fixing the boot archive is one single line. Actually, the more recent Nevada builds should notice it themselves when booting to failsafe and ask you if it should be updated. Yeah, I installed FreeBSD on a spare drive that I swapped in and ZFS didn't work. I got an error message about it being unable to initialize the ZFS system and I couldn't find anything helpful on Google, so I installed the latest Solaris Express Developer over it. I did try updating my old install (~1 year old now), but the installer said there wasn't enough space on the drive. I don't see why, because that drive has 62 GB free on slice 7, though I may be misunderstanding the update procedure. quote:Uhm. I still run snv_76, half a year old, and is pretty stable on my NForce4 mainboard. And it boots to 64bit mode. They ship drivers for the Nvidia SATA controller since I think snv_72. All I know is that when I did install the AMD64 build last year, no disk plugged into an SATA controller would show up in /dev/dsk, and the only mention I could find of a similar situation (nforce 5 SATA controller not working) was 'solved' by giving up on the normal build and using the developer express package instead. I guess that's changed since then, so I'll probably have to give it another try. Did you have to do any special configuration for that or did everything work right from the get-go? 
On a side note, I can't believe you use > in your prompt. I'd constantly be checking to be sure I wasn't redirecting output into an executable file
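For the record, what I was trying on the FreeBSD side before giving up looked roughly like this (my pool really is named 'pool'; -f just overrides the "this pool was never exported" safety check, assuming the on-disk version is supported at all):

```shell
# Sketch of the import attempts; no amount of -f helps if ZFS itself
# won't even initialize like it did for me.
zpool import          # scan attached disks for importable pools
zpool import -f pool  # force-import a pool that was never cleanly exported
zpool status pool     # sanity-check the vdevs if it ever works
```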
|
# ¿ Mar 22, 2008 22:26 |
|
Toiletbrush posted:Must be that GEOM thing. I think ZFS in FreeBSD can't handle full disk vdevs properly, since Solaris partitions them as GPT disk when you slap ZFS all across it and GEOM kind of trips over itself with these. At least I think that was a limitation some time ago, that you had to set up a full disk BSD slice first and make it use that. That sounds plausible, but it's kind of crappy when disk management is tripping around in the first place. quote:The Developer Editions are actually pretty stable. The development process of Solaris Nevada is pretty cool (and I guess similar to FreeBSD). Everything has to be tested and then approved and then tested before it can go into the main tree that'll become the Community and Developer editions. As said, I'm running a Nevada build and it's mighty stable. Yeah, I read about that, but I just don't see the stability in the graphical environment. For example, I enabled a service the other night through the service management GUI application and as soon as I checked the box, the machine stopped responding and I had to reset it after 5 minutes of it not responding to the Num Lock key. Maybe I should have spent more for 'real' server hardware, but that and the hard locking caused by using WinAmp over SMB look more like software problems to me. quote:Slice 7 is /export with the standard layout and / with all the rest is slice 0. The loving dumb thing with the current Solaris installer is that it sized said slice more or less close to what it needs to install the system. If you try a regular update to the same slice, you'll be out of luck. I had assumed it would just move or copy everything on slice 0 to someplace on 7 and then merge new files in, but I guess the installer isn't that smart. quote:If you still want to run Solaris and get rid of these silly hassles, wait a month for the snv_87 Community Edition, following the next upcoming Developer Edition. 
The new graphical installer (which you access under the Developer Edition boot option) will support ZFS root and boot. Like this, you don't have to deal with mis-sized slices anymore on upgrades. Snap Upgrade will also be integrated, which is like Live Upgrade but for ZFS and taking advantage of it. I don't really want Gnome at all because I prefer KDE. I do use the GUI, though, for the torrent client. Also, putting the system root in the pool isn't really something I care to do. I see the advantage to making it redundant, but I'm not confident in the updatability of that kind of setup. The built-in update manager never worked on my old install, for example, and it was easy to just swap out the hard disk and gently caress around with BSD without having to worry about stepping on other bootloaders. Will the community edition ever come out in 64-bit, do you think? You seem way more knowledgeable about the Solaris community than I am. Oh, and what about the blurb on the download page that says it's 'unsupported'? Is that Sun speak for 'you're pretty much on your own'? quote:Nope. Pre snv_72, the SATA drives acted like regular IDE drives towards the system. I thought that was normal behaviour from the Nvidia SATA chipset. With snv_72, the Nvidia SATA driver was released and the disks act like SCSI disks now (which they're apparently supposed to). Well, on my first try with a 64-bit build the system didn't seem to be able to see them at all, IDE or no. My guess would be that the system doesn't expect PATA drives to be hot-swappable like SATA is and someone had to hack in something ugly to make that workable.
|
# ¿ Mar 23, 2008 03:01 |
|
Toiletbrush posted:What I was saying is that, if you were to use a Solaris build with the new revision of the new installer, that you should create a separate pool on the separate disk you ran your system on. It'll not be redundant, but you get the advantages of pooled storage making the fixed slices crap go away, and Snap Upgrade, that'll employ snapshots and clones magic to update your system during regular operation (and make it available on reboot). The pool would be separate from your data pool. I understand now. I actually named the pool 'pool' because I was being extra clever that day and so I was probably tripping on terminology. quote:On boot, it's decided whether it loads the 32bit or 64bit kernel. The userland is mainly 32bit, but ships 64bit versions of most libraries. Components like Xorg are available in both versions, which version's loaded is decided with isaexec magic. I get that, but I'd much rather just decide for it quote:That's strange. I figured that with the old ATA chips, the situation is similar to SATA's AHCI, that there's a generic way to operate them. Yeah, I get 'that's strange' a lot. I really need to spend some time on their forums and figure out if the poo poo I'm dealing with is hardware related or if I'm just having super weird software issues.
|
# ¿ Mar 23, 2008 10:42 |
|
Is https://www.newegg.com/Product/Product.aspx?Item=9SIA5AD5YN7412 a decent deal or is Gold one of those lines that's actually trash somehow? I'd probably put it in my desktop mostly for a place to put local nightly backup images and other big files I don't want included in my nightlies.
|
# ¿ Nov 7, 2017 21:04 |
|
Paul MaudDib posted:It's an Archive drive which are shingle drives, they tend to have pretty poor performance on random writes. Huh, that's a new one on me. I guess I'll just get one of those enclosures and shuck it since it's not that much more money and it's more room, anyway. Thanks IOwnCalculus posted:If you're going to do just one drive, why not get that external red for $180 and just leave it in the enclosure? I don't want the nightly to depend on an external device. Munkeymon fucked around with this message at 21:39 on Nov 7, 2017 |
# ¿ Nov 7, 2017 21:34 |
|
Paul MaudDib posted:And now 8 TB Easystores are down to $130. quote:This item is currently unavailable for online purchase. The item was not added to your cart. Dangit. Oh well, I'll just stop by after work, I guess.
|
# ¿ Nov 8, 2017 15:14 |
|
Yeah, that must be what happens when they sell out but the backend hasn't quite synced up with the front to communicate that, because now it actually says sold out. Oh well. Microcenter wouldn't match that, would they?
|
# ¿ Nov 8, 2017 16:12 |
|
Paul MaudDib posted:Looks like Best Buy got to the package, UPS says delivery was cancelled today. Nothing on the early BF deal but I'm not overly hopeful, I'm sure they were slammed with orders. I mean, I guess if you bought enough of them and some USB adapters you could make a big-ish pool in ZFS
|
# ¿ Nov 8, 2017 17:11 |
|
IOwnCalculus posted:Or shuck them I was making a dumb joke because he linked to SD cards in the packrat thread
|
# ¿ Nov 8, 2017 17:45 |
|
Is https://www.newegg.com/Product/Product.aspx?Item=N82E16822235160 shuckable? I keep coming up with stuff about shucking a very different-looking enclosure when I try to find instructions, and NewEgg has it at $170 through their email deals today
|
# ¿ Nov 9, 2017 18:32 |
|
I'm still not used to Athlon being the low-end low-power line from AMD :\ Anyway, I should probably rebuild my NAS and get it off the ten year old dual core Athlon that's been merrily chugging along forever even as drives died out from under it. I'm thinking I should wait and see what happens with Raven Ridge since I won't have time to gently caress with it period until January, but I'm open to suggestions. Main wrinkle is that I have a raid-z with seven disks (and a hot spare - I'm not totally insane) and plugging them all into the board rather than loving with controller cards would be really nice.
|
# ¿ Nov 17, 2017 18:22 |
|
Paul MaudDib posted:The question was rsync vs dircopy/xcopy type tools, if you are using zfs send you probably know about the merits of rsync. Sending the second snapshot the same way you did the first should only transmit the delta (as long as the receiving FS has the first, obs). https://github.com/jimsalterjrs/sanoid/ (syncoid, partway down the readme) might help make the process easier than dealing directly with zfs send - honestly wish I had found that/it had existed last time I dealt with snapshot replication across servers.
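To sketch what I mean about the delta (dataset and host names here are made up): the incremental send only ships the blocks that changed between the two snapshots, instead of the whole dataset again.

```shell
# Initial full replication to the backup box.
zfs snapshot tank/data@snap1
zfs send tank/data@snap1 | ssh backuphost zfs receive backup/data

# Later: take a second snapshot and send only the difference. The
# receiving side must already have @snap1 for this to work.
zfs snapshot tank/data@snap2
zfs send -i tank/data@snap1 tank/data@snap2 | ssh backuphost zfs receive backup/data
```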
|
# ¿ Nov 30, 2017 15:55 |
|
Is there a way/setting to make a normal SATA port on a regular consumer mobo support hot swapping, or do you have to have some kind of hardware in between? Can't believe I've never thought to look into this before, but hey, first time for everything, right?
|
# ¿ Nov 30, 2017 20:13 |
|
I still use a ten year old https://www.newegg.com/Product/Product.aspx?Item=N82E16811119093 but I only bought one extra hard drive stack insert (comes with one, so I can get 8 in there safely), so unless I can find another on ebay or whatever, I have to gently caress with bay adapters to put more drives in. Other than the USB2 on the breakout box it's basically the perfect NAS case, though.
|
# ¿ Dec 1, 2017 15:10 |
|
SamDabbers posted:This has been my strategy as well. I've been running single-vdev raidz2 pools with 6-8 drives for years, and am considering switching to multiple mirrored vdevs with the 8TB Easystores I recently picked up for the ease of expansion and fast resilvers without having to drop $1000 every handful of years. The increased IOPS would be nice, but not necessary. Do you have a link to a howto for this? My RAIDZ setups (all two) have been very simplistic and I've only expanded one of them once by buying new disks, but i'm having trouble picturing this mirrored vdev structure you guys are on about.
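Best I can picture from the man page, it'd be something like this (device names made up, so don't copy-paste):

```shell
# Pool striped across two 2-disk mirrors instead of one big raidz vdev.
zpool create tank mirror da0 da1 mirror da2 da3

# Expansion is just bolting on another pair; nothing existing gets
# rewritten, and a resilver only ever has to read from one mirror.
zpool add tank mirror da4 da5

zpool status tank   # shows each mirror as its own top-level vdev
```

But I'd still like a proper howto if you have one.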
|
# ¿ Dec 19, 2017 18:40 |
|
Thermopyle posted:It's just automatic backup, but if people would just know to turn it on it's easy and a no brainer. It really only protects against hardware failure, though, so it's not going to help with minimally competent cryptolockers, which I assume are the most common source of data loss now that most new machines ship with an SSD. You can't even point it at a network share unless you dig down into the Windows 7 version, which I just tried to do on my work laptop and it told me it couldn't use the network path for the usual cryptic non-reasons. Harik posted:I'd like to add that the space used is almost entirely about the retention length, not density of snapshots. I haven't used ZFS, but most snapshot systems have very little overhead, it's almost entirely taken up by files you've deleted since then. Do you really belong in this thread if you delete stuff? I mean, come on
|
# ¿ Jan 22, 2018 16:33 |
|
Thermopyle posted:I'm backing up to a network share with file history right now on Windows 10. Yeah, I tried to add a network path through that and it fails with a uselessly generic error code, but it is a work machine+network so IDK what the deal is with the server on the other end. There's also a lot of digging to find that option. Main point is, anything Windows can write to without demanding a password from the user is probably toast if you get ransomwared, so it's probably only good for hardware failure*. That's what the recycling bin is for? *unless you're backing up to a NAS with regular snapshots it can roll back to but this scenario is way outside of 'average user' territory e: the way I did it because I was hand-rolling my own snapshot management 11 years ago and am too lazy to update it was to give the NAS (solaris/openindiana) access to a share of my backup snapshots on my desktop so it can actively pull them and the desktop doesn't need write access. The script stores them in part of the file tree that's shared read-only to all clients. Never been tested with a cryptolocker, but that's what I came up with when I heard about them. Munkeymon fucked around with this message at 18:50 on Jan 22, 2018 |
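The pull arrangement from the edit above boils down to something like this (host and path names are made up; the real scripts are ancient Solaris shell I'm not going to subject anyone to):

```shell
# Runs on the NAS, which is the only side that ever writes. The desktop
# just exposes a read-only staging share, so ransomware on a client
# can't touch the stored copies.
rsync -a desktop:/backups/staging/ /tank/backups/desktop/
zfs snapshot tank/backups@nightly-$(date +%Y-%m-%d)
```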
# ¿ Jan 22, 2018 18:42 |
|
necrobobsledder posted:Unless snapshots are visible via SMB, CIFS, or NFS I don’t see how it’d be possible for a client to delete them. Hell, given a snapshot is immutable I’d presume the worst thing to happen from a client could be deleting snapshots even if tons of access was granted, but I’ve never seen a UI or CLI command from a client that would let me see, create, or delete all these serverside snapshots. I have different logins to my servers as admin than I do as clients so even if I got infected I’d just lose local data and for the past maybe 24 hours. Hell, I’d be more concerned with cloud filesystems like via Google Drive or iCloud than my local files because I dunno how to recover cloud data effectively from providers. The laziness is that to deal with a daily image and associated file additions+deletions 'correctly' without wasting a bunch of space would involve the snapshot configuration mirroring the image creation cadence, so I'd have to gently caress with snapshot scripts if I hosed with the backup config, and... well. This system has outlasted a couple of desktops and I haven't had to mess with snapshot scripts, so I'm OK with it, but this is part of why I'm eyeing this year to do a clean rebuild of the controlling system with modern tooling and maybe kick the legacy Solaris stuff to the curb. Especially now that I don't mind paying for some software that'll save me a weekend (lol it's never just one weekend) of hot hot manpage action.
|
# ¿ Jan 22, 2018 23:22 |
|
Anime Schoolgirl posted:you didn't diassemble them to get at the toy platters smh And magnets! You can get some crazy powerful magnets out of old server drives
|
# ¿ Jan 25, 2018 14:47 |
|
Paul MaudDib posted:Gigabyte has a C3958-based NAS board out as well. I've never messed with SFF-8087 stuff before - can I just buy the cheapest SFF-8087 port to SATA hydra adapter off Amazon and expect it to work and perform well or is there some controller in there I'd have to worry about being flaky garbage?
|
# ¿ Apr 4, 2018 15:55 |
|
Methylethylaldehyde posted:Do keep in mind, there are breakout and reverse breakout versions of the SAS->SATA cable hydra, one of which works great, the other....doesn't. What's a reverse breakout? Plugging an SFF-8087 storage device into four host SATA ports?
|
# ¿ Apr 5, 2018 13:53 |
|
I could have sworn enabling jumbo packets improved my throughput to my NAS years ago. Am I misremembering or is that a real thing?
|
# ¿ Jun 26, 2018 18:36 |
|
DrDork posted:Very real thing. They just often don't play well with sending things outside your network, as many ISPs are not at all happy about a MTU over 1500. Oh, yeah, to be clear, both the NAS and the desktop had two ethernet jacks, so they had their own private line to each other that was 4k frames or whatever the max was they both supported. I'm not enough of a network wizard to have internal on jumbo and external on regular outside of the idiot babby sandbox of physical cable plugs from A to B. Still a possibility for people to consider!
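For anyone wanting to try the direct-link version: on Linux it's only a couple of lines per machine (the interface name and the 9000 MTU are assumptions; both NICs have to agree, and anything in between has to support it too).

```shell
# Second NIC, direct cable to the NAS, on its own private subnet.
ip link set dev eth1 mtu 9000
ip addr add 10.0.0.1/24 dev eth1

# Verify the big frames actually pass without fragmenting:
# 8972 payload + 28 bytes of IP/ICMP header = 9000.
ping -M do -s 8972 10.0.0.2
```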
|
# ¿ Jun 26, 2018 19:27 |
|
Does anyone know offhand how to figure out a drive serial number given the output of zpool status? I've misplaced the piece of paper I had that mapped motherboard port numbers to drives for a homemade NAS that has a failing drive and I'd rather not play guess and check to figure out which one to swap out.
|
# ¿ Sep 25, 2018 14:29 |
|
dexefiend posted:https://www.serverbuilds.net/anniversary Dunno why they call that Not Safe for Wallet because it sounds extremely reasonable to me
|
# ¿ Sep 25, 2018 14:58 |
|
necrobobsledder posted:In situations like this and have a hotswap bay setup, I look for the drive that’s failing, and run a dd if=foo of=/dev/null on it and look at the lights for the drive. Otherwise, I hope via lspci -vvv output or from dmesg that something is there. Otherwise, it’s off to the olde Google manual. It's a NAS in that it's old computer parts with big hard drives attached - nothing so fancy as hot-swap bays :\ IOwnCalculus posted:smartctl -i /dev/whatever if you just need the serial, the above is good to find the drive itself if you have hot swap bays. That sounds familiar (this has happened before), which is why I took the time to note the mapping. Too bad I didn't tape it to the inside of the case or something smart like that. Of course I also didn't bother to write down how I figured it out. Methylethylaldehyde posted:Because it's a rabbit hole that ends with you buying tape storage off ebay to set up redundant data locations for your 73TB of perfectly curated 50s television rips. I mean, I already have an onsite backup NAS (with a failing drive!) so I'm probably beyond help already
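For my own future reference, the list I should have taped inside the case comes from something like this (device glob is made up; adjust for whatever your OS calls the disks, and smartctl is part of smartmontools):

```shell
# Print the serial for every disk so the device names in zpool status
# can be matched to the stickers on the physical drives.
for d in /dev/sd?; do
  printf '%s: ' "$d"
  smartctl -i "$d" | awk -F': *' '/Serial Number/{print $2}'
done
```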
|
# ¿ Sep 25, 2018 17:01 |
|
dexefiend posted:Yes. I love that I have a ridiculous beast workstation that didn't have a ridiculous budget. Dang labgopher is exactly what I didn't know I needed for comparison shopping. Too bad I don't know gently caress all about server parts other than that they cost too much new adorai posted:I have been buying these. Quoting this so I can find it later. Thanks!
|
# ¿ Sep 26, 2018 14:45 |
|
D. Ebdrup posted:Well, that was sort of true on a very old version of ZFS (the one you could manually add to FreeBSD 6.3 with a bit of backporting from 7-CURRENT if you were brave enough a decade ago in 2008*) I just used Solaris 10 and then Open Indiana because I started my first build in '07 and I'm lazy; I just wanted to copy settings files to the other one and have them (mostly) just work. Solaris' role based permission system was nice. Seriously thinking about switching to BSD with the next one, though.
|
# ¿ Oct 2, 2018 17:52 |
|
Combat Pretzel posted:OpenSolaris finally dead then? Oh my. OpenIndiana got an update as recently as August, so it's not dead dead, but I can't imagine there are too many other sad-sack partisans
|
# ¿ Oct 3, 2018 05:19 |
|
Are people using btrfs because of licensing issues with CDDL? I can't imagine any other reason to care about it when ZFS already exists and has been basically bulletproof from day zero, but I'm also not super plugged into these things, so I could easily be missing some key information.
|
# ¿ Nov 5, 2018 16:37 |
|
Yeah, you'll make back the extra ~1k you spend to get the equivalent capacity in a mere century or so.
|
# ¿ Nov 16, 2018 17:20 |
|
That's not gonna help much if you're running a torrent client on the thing
|
# ¿ Nov 16, 2018 18:18 |
|
apropos man posted:How bout this idea for a seedbox: you run the entire contents of your torrent data in RAM and it only wakes the disk when you've finished leeching, dumps the data and then sleeps the disk again. Spinning up a drive takes more power than keeping it spinning for, well, some amount of time that'll depend on the drive's construction. IIRC, spin-up/down cycles also cause more wear than steady spinning, again, for some time depending on the drive. What I'm getting at is min-maxing your microwatts of savings in your home is probably not worth the time and energy you'd spend thinking about it. Now, if you run a datacenter where SAN energy usage can be measured in city block equivalent units, then what the hell are you doing reading this thread for advice?
|
# ¿ Nov 19, 2018 15:22 |
|
Friends of mine had an SSD die on them over the weekend. It was only 128GB, I think; they pretty much filled it up right away and it lasted about 9 months. Not 100% sure if it wore out, but it came up blank after a bluescreen and when I tried to run recovery tools on it, the log was a solid wall of read errors.
|
# ¿ Oct 30, 2019 21:13 |
|
DrDork posted:I seem to remember Linus Tech Tips being taken down a few years ago because they were doing some idiotic RAID0 setup Doing dumb poo poo and making videos about it is his whole shtick, though?
|
# ¿ Jan 2, 2020 22:49 |
|
The Diddler posted:I do this. I have it set up to rclone my monthly VM backups, some videos that I may want to rewatch in the future, and I have an encrypted repository for sensitive documents. It took a while to get set up properly, but it's nice to have offsite backups of important stuff. You still have to get some friends to go in on it with you to get the unlimited storage (or eat the whole $60/mo cost) right?
|
# ¿ Jan 9, 2020 17:36 |
|
No performance benefits over what? Btrfs, where the stability page has a bunch of things labeled "mostly OK"? OK, maintainer.
|
# ¿ Jan 10, 2020 17:59 |
|
|
|
D. Ebdrup posted:He's talking about performance benefits compared to ext4, a filesystem based on FFS in BSD, that can't detect or correct silent data corruption, isn't atomic with respect to its writes, and has been dismissed by Ted Ts'o. lmao what a turd - clearly we should all still be on FAT32 because with such low overhead it must be faster
|
# ¿ Jan 10, 2020 21:14 |