|
What theme or skin are you using? The Plush theme should have the ability to purge both download and history lists. Want to give that a shot?
|
# ¿ Jun 13, 2011 18:40 |
|
Murodese posted:Ugh, windows permissions issues ahoy. In your case, Everyone is not being added to newly written files. Have you applied it at the upper-most folder for the shares (e.g. "X:\Media\") and chosen to replace all child permissions? Further files written there should inherit the permissions from that folder/object. If not, it could be a system policy specifically preventing this with Everyone. I hope a more familiar Windows guy can address your question if I'm wrong, but I'd say go with making more users. The drawback is you'd have more junk in your login screen. Edit: You could also place Everyone in the Users group (they get Read access to your new files by default, for sure), but I'm sure that's a serious security faux pas.
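If it helps, the same "replace child permissions" dance can be done from an elevated command prompt with icacls; the path here is just a placeholder:

```bat
:: Grant Everyone modify rights on the share root and push the ACL
:: down to every existing child file/folder. "X:\Media" is a placeholder.
icacls "X:\Media" /grant Everyone:(OI)(CI)M /T
```

The (OI)(CI) flags make new files and subfolders inherit the grant going forward, which is the behavior that seems to be missing here.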
|
# ¿ Jun 14, 2011 21:17 |
|
If you are seeing gains, chances are the program wasn't very efficient in its disk usage, or you have a funky I/O scheduler in the OS. Or you have a lot of other contending I/O/processes. So really it would be a waste of money using an SSD for a downloads scratch directory unless you are downloading and writing data at more than 70MB/s, which could potentially outdo average 7200RPM drives, I guess! There's quite a bit of caching in the client to help reduce drive thrashing and what-have-you, too. Crazy Swedish Internet would be the best use-case for an SSD, then. But then I guess you could just build a RAID or get faster mechanical drives, for cheaper.
|
# ¿ Jun 24, 2011 18:53 |
|
I use Newshosting, personally. You could look at a premium provider that your programs promote as well, but I haven't really tried. I just don't know if the 900+ days of retention is realistic. I usually don't have success with stuff that's 700 or so days old; it just will not repair successfully and I'd be missing one to five chunks. Screw that.
|
# ¿ Jun 24, 2011 19:39 |
|
I am using NFS on top of Btrfs, sharing across a few Linux VMs, with no additional ACL configuration being done from SABnzbd either. Running Ubuntu 11.04 across the two. The only gotcha for me was the underlying permissions on the Btrfs filesystem. In your case, ZFS. Are you using Solaris, OpenFiler, FreeNAS, or what?
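For reference, the NFS side of my setup is nothing fancy; a sketch of the export line, with the path and subnet made up for this example:

```
# /etc/exports on the storage box -- path and subnet are placeholders
/mnt/btrfs/media  192.168.0.0/24(rw,async,no_subtree_check)
```

After editing, `exportfs -ra` reloads it. From there it's just the plain Unix mode bits on the exported directory, which was my Btrfs gotcha.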
|
# ¿ Jun 28, 2011 15:06 |
|
haywire posted:Thing is, if one day they had 300 day retention and the next day they had 900 day retention, wouldn't it take 600 days for the 900 day retention to actually fully be in effect, seeing as they wouldn't have any files older than 300 days? I've just been observing some misses and problems despite that, and I'm honestly unsure if it's my provider or maybe momentary issues with my network/connection, or something.
|
# ¿ Jun 29, 2011 20:59 |
|
Okay, cool. So from what I understand, SABnzbd is placing files on an NFS share (that is, Solaris runs your storage and SABnzbd is run from another box, correct?). Before you check your mountpoints and stuff, look at ~/.sabnzbd/sabnzbd.ini and see if there's a permissions option/directive set. Then check /etc/default/sabnzbdplus and see if there's something in there. It should pretty much just have a port number and address... but it overrides what's in the configuration file in my experience. Anyway here's what I have, maybe it'll help. Changed usernames, folder names, etc. SABnzbd box: code:
StorageBox = Where files are moved from categorization, but it's also the actual mountpoint. So nothing special on the SABnzbd box. Looking at my storage box (192.168.0.200 in this example): code:
There's also some Samba crap going on in StorageBox, but that isn't being used between the Linux boxes. Kachunkachunk fucked around with this message at 14:41 on Jun 30, 2011 |
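In generic terms, that client-mount-plus-export pair looks like the below; every path and address is a stand-in for illustration, not my real config:

```
# SABnzbd box /etc/fstab -- server IP, export path, and mountpoint are placeholders
192.168.0.200:/tank/storage  /mnt/StorageBox  nfs  rw,hard,intr  0  0

# Storage box /etc/exports -- export the pool to the LAN
/tank/storage  192.168.0.0/24(rw,no_subtree_check)
```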
# ¿ Jun 30, 2011 14:39 |
|
I've noted my own issues on Newshosting, but I still haven't done the research into whether it was my problem or not. I think it's probably a good idea to start watching which groups you specifically have had problems with and see if it's some underdog group that's not uh... synced properly or something. Disclaimer: <---- knows poo poo about how a newsgroup hosting provider works. Either way, look for patterns!
|
# ¿ Jul 1, 2011 01:06 |
|
Use Process Explorer to find out what process has a handle on Thumbs.db. You're going to find it's Explorer each time. You can kill the specific file handle to remove it, but generally it's going to happen time and time again. I think if you disable thumbnails and previews in your Folder Options (apply it to all folders, though), it'll stop.
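You can also flip the thumbnail cache off via the registry instead of clicking through Folder Options; I believe this is the value, but double-check it on your Windows version before trusting me:

```bat
:: Disable Explorer's thumbnail cache (Thumbs.db) for the current user.
:: Value name is from memory -- verify before relying on it.
reg add "HKCU\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced" /v DisableThumbnailCache /t REG_DWORD /d 1 /f
```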
|
# ¿ Jul 1, 2011 19:15 |
|
Aafter posted:I just discovered Sickbeard and holy motherfucking hell. That thing is awesome. I had an entire series renamed in a completely wrong order because of the strange DVD ordering compared to the air dates. Took a while to figure out and fix. Anyway, as far as your BSODs are concerned, you will need to look into the specific kind of BSOD occurring and go from there. E.g. figure out if it's a STOP 0x0000007C, as one example (but believe me, it won't be a 0x7C).
|
# ¿ Jul 21, 2011 15:32 |
|
Dude, that is amazing!
|
# ¿ Jul 21, 2011 18:15 |
|
Yeah, sounds like it's disk contention or thrashing. Omitting the cache/incoming folder from virus scanners would also help a bit, but you do want the completed files being scanned later on. You could have the scanner watch the completed folder (I am hoping it doesn't touch anything until all the files are already written, but I have a feeling it will), so you probably want to put a scan in as a post-processing script action for your binary categories (skipping multimedia).
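If you go the script route, the sketch below is roughly what I mean: SABnzbd hands a post-processing script the completed folder as its first argument, and you point that at your scanner. clamscan is just a stand-in for whatever scanner you actually use:

```shell
#!/bin/sh
# Hypothetical SABnzbd post-processing hook. SABnzbd passes the final
# download folder as the first argument; clamscan stands in for your scanner.
scan_completed() {
    dir="$1"
    # Bail out quietly if the scanner isn't installed.
    if ! command -v clamscan >/dev/null 2>&1; then
        echo "clamscan not found, skipping scan of $dir"
        return 0
    fi
    # Recursive scan; a non-zero exit marks the job as failed in SABnzbd.
    clamscan --recursive --infected "$dir"
}

if [ -n "$1" ]; then
    scan_completed "$1"
fi
```

Drop it in your scripts folder and assign it to the relevant categories in SABnzbd's config, skipping the multimedia ones.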
|
# ¿ Jul 27, 2011 13:25 |
|
Also interested in knowing this. Is it pretty I/O or resource intensive as well? Just want to right-size the appliance I will inevitably put this on.
|
# ¿ Dec 14, 2012 16:57 |
|
Ah, nice - thanks for taking the time to do that. Now I have some idea of where to go with the setup and will probably give it a go in a matter of days.
|
# ¿ Dec 14, 2012 21:21 |
|
Probably wouldn't help, anyway. Didn't someone upload garbage that looked like a movie (in an archive) and it was still taken down shortly after? There isn't any intelligence going into it, really. I really see it all going towards codenamed or inconspicuously-named releases and using private indexers to host or share NZBs. It's a bit unfortunate because this would pretty much fragment all the communities to the point of it being like the BBS days, heh. I'm in for donating, by the way. I didn't PM anyone, though.
|
# ¿ Jan 18, 2013 19:12 |
|
Does anyone have Headphones working with nzb.su? Judging from the fact it works fine with the other clients/agents, they're probably banning the user agent for headphones again. Also see: https://github.com/rembo10/headphones/issues/955
|
# ¿ Feb 27, 2013 21:59 |
|
It doesn't quite help your concern much, but I tend to move towards virtualizing weird application suites in a single VM that can be backed up or include some form of change management (snapshots). Try making a VM and having it do all your SAB/Newznab/CouchPotato/Sickbeard/Headphones stuff. It can still store and manage/rename everything on your NAS. It'd pretty much be starting over in the VM this time, but this also means you don't touch and break anything during your "migration" process and can stop any time. Edit: I read your post a bit more. Maybe you can look at getting something small, like a Raspberry Pi, to do the work? Or some other small, cheap PC that can stay online all the time. I personally just leave my PC on all the time. The power consumption isn't really much at all every month, and heat is a non-issue because I run an Audio, Video, and USB cable from the tower to the desk. Complete silence! Kachunkachunk fucked around with this message at 14:28 on Mar 7, 2013 |
# ¿ Mar 7, 2013 14:24 |
|
Same! I've been running one for myself, but would gladly shut it down in favor of relying on the GOONZB project instead. Also indeed, once Amazon/PayPal/Whatever payments are set up, some can be earned back.
|
# ¿ Mar 21, 2013 23:40 |
|
I just came across a release that had all of the rar files misnamed, according to QuickPar (the whole snatch was said to be bad and failed to extract, according to SABnzbd). QuickPar went through the whole thing, all files being misnamed, then it said no repairs were needed and it fixed the filenames. Sure enough, the release was intact! Really weird poo poo going on these days.
|
# ¿ Mar 25, 2013 23:33 |
|
NZBsRus used to be great. After their site got screwed by some bug that granted people VIP access too easily, they purged everyone's VIP access as "a security measure," including all those with a very obvious history with the site, instead of doing something more intelligent, or with integrity, like only hitting recent registrations. Then they made most older VIP folks re-purchase their VIP membership, which wasn't anywhere near as long as their previous one. So gently caress them all, really. NZB.su is probably your next best bet.
|
# ¿ Mar 26, 2013 06:10 |
|
It's pretty easy, but if the provider took stuff down in the first place, it won't help a lot. I'd say once you get yourself a pretty good indexer, you'd up the API hit rate and hope to grab releases sooner rather than later.
|
# ¿ Mar 27, 2013 19:12 |
|
The indexer tells you what releases are out there, in whatever groupings of articles they happen to be made up of. Then your client goes and seeks out those numerous articles, identified in an index file (.nzb), via your usenet provider. If your provider has removed, lost, or otherwise simply doesn't have whatever was indexed, then you run into an "incomplete" release. Some folks using other providers might have more luck than you, as their provider simply still had the articles, or hasn't acted upon a DMCA takedown request yet.
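For the curious, that .nzb file is just XML listing those article IDs; a stripped-down example (everything in it is fabricated) looks like:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<nzb xmlns="http://www.newzbin.com/DTD/2003/nzb">
  <file poster="poster@example.com" date="1364480000" subject="example.rar (1/2)">
    <groups>
      <group>alt.binaries.example</group>
    </groups>
    <segments>
      <!-- Each segment is one usenet article, fetched by Message-ID -->
      <segment bytes="500000" number="1">fake-id-1@example.com</segment>
      <segment bytes="500000" number="2">fake-id-2@example.com</segment>
    </segments>
  </file>
</nzb>
```

If any of those Message-IDs are gone from your provider, that's your incomplete.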
|
# ¿ Mar 28, 2013 13:55 |
|
Just a note: RAR already has recovery record capabilities, but I am curious whether PAR2 might actually be better. I think QuickPar should have options to drop in and create PAR recovery records; I have just never done it. Edit: After some playing with QuickPar, it appears it can do file splitting for you as well. But in my test I just dragged a largish 700MB file into QuickPar and hit the Create button. Since it wasn't going on usenet, you don't really have to bother with a lot of the options. I'm not really sure if it's made to efficiently deal with hundreds of thousands of files, let alone small ones. So you might find yourself RARing a bunch of crap up, slicing it into volume sets, and protecting the bunch with QuickPar (no need to split further, so leave that unchecked). It'll generate your PARs after, but it kind of takes a while. WinRAR has the Recovery Record checkbox that adds some recovery, but QuickPar has more options here. Default is 10% redundancy in its settings. Kachunkachunk fucked around with this message at 19:57 on Apr 12, 2013 |
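If you'd rather script it than click through QuickPar, the par2cmdline tool does the same job from a terminal; filenames here are made up:

```
# Create PAR2 recovery data at 10% redundancy for a RAR volume set
par2 create -r10 archive.par2 archive.part*.rar

# Later: verify and repair using the recovery data
par2 repair archive.par2
```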
# ¿ Apr 12, 2013 19:47 |
|
And yeah I agree. I'd really just copy stuff over to the other disks or volume, then back over, once your RAID has been rebuilt.
|
# ¿ Apr 12, 2013 19:58 |
|
Could you rule out your extensions by starting Firefox up in private browsing or safe mode?
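From a terminal that's just one flag, if memory serves:

```
# Start Firefox with all extensions and themes disabled
firefox -safe-mode
```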
|
# ¿ May 18, 2013 17:49 |
|
porksmash posted:It's back to invite-only it says. "Donate at least $5* via PayPal or Bitcoin for an invite via e-mail as well as lifetime VIP access."
|
# ¿ Nov 25, 2015 00:43 |
|
I reverse proxy a bunch of crap as well, but never really needed to bother proxying to Plex. What's your reasoning for avoiding the port forwarding/NAT, out of curiosity? Have you also had much experience doing this with Apache? I'm thinking about moving to Nginx for proxying, if it's easier. I never really had a high degree of success reverse proxying stuff via Apache, or it takes a ton of rewriting/effort to do it right.
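For what it's worth, the Nginx side of proxying something like Plex ends up pretty short; a minimal sketch, with the server name as a placeholder and Plex's usual 32400 port assumed:

```nginx
# Minimal reverse proxy to a local Plex instance -- names are placeholders
server {
    listen 80;
    server_name plex.example.com;

    location / {
        proxy_pass http://127.0.0.1:32400;
        # Preserve the original host and client address for the backend
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

Compare that to the RewriteRule/ProxyPass juggling Apache usually wants and you can see why I'm tempted to switch.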
|
# ¿ May 12, 2016 09:56 |