cruft
Oct 25, 2007

Internet Explorer posted:

Yeah just to be clear you absolutely do not need to do this.

NihilCredo posted:

I went through a similar journey and tried many of the same things. I'm also running on low-power hardware, a Pi4 with 4GB and a USB HDD (though my desktop can also run services if needed via Nomad), and Nextcloud was getting too clunky.

rclone ... PiGallery2 ... basic auth with Caddy ... KaraDAV

Okay then, low-power crew, check in please. Your list is pretty similar to mine, maybe we can save somebody some headaches by identifying best practice.

Here's what I wound up with. Things I created for this, I've linked.

  • Name service: CoreDNS
  • HTTPS front-end for everything: Caddy2
  • OAuth2/OIDC gatekeeper (you can only see stuff if you've authenticated): OAuth2-Proxy (rough Caddyfile sketch below the list)
  • Container management: podman + runit (systemd would be a solid choice too)
  • Source code / Authentication: Forgejo + Big Builder
  • Chat: Ergo IRCd + KiwiIRC + cruft plugins not ready for publication
  • Photo album: PiGallery2
  • Temporary file sharing: Picoshare
  • Long-term storage: dufs
  • Front-end to all this: homelab portal
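
If you're wondering how the Caddy2 + OAuth2-Proxy pair gates everything, here's a rough Caddyfile sketch of the pattern (illustrative, not an exact config): it assumes oauth2-proxy is reachable on the container network as oauth2-proxy:4180 (its default port), that PiGallery2 listens on port 80, and the domain is a placeholder.

code:
# Caddy terminates HTTPS; oauth2-proxy decides who gets in.
photos.example.com {
    # oauth2-proxy's own endpoints (sign-in, callback) are proxied straight through.
    handle /oauth2/* {
        reverse_proxy oauth2-proxy:4180
    }

    # Everything else is auth-checked first: the request only reaches the app
    # when oauth2-proxy answers the /oauth2/auth subrequest with a 2xx status.
    handle {
        forward_auth oauth2-proxy:4180 {
            uri /oauth2/auth
            copy_headers X-Auth-Request-User X-Auth-Request-Email
        }
        reverse_proxy pigallery2:80
    }
}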

None of these require anything like Redis, Memcached, MariaDB, or PostgreSQL. PiGallery2 is currently generating thumbnails and forgejo is doing whatever forgejo does (it's always doing something), but the machine is still 62% idle.

It probably bears mentioning that in addition to being low on resource use, these applications are also low on maintenance. Having set up GitLab for business users in the past, I'm going to seriously consider Gitea/Forgejo if the occasion arises again. I'm a big fan of this "80% of the features with 20% of the effort" design philosophy.

I wish I could get PicoShare to use OIDC like everything else. Now that I'm thinking of it, my IRC plugin requires a server component for anonymous temporary image hosting. I should look into something that combines these two.

cruft fucked around with this message at 15:23 on Feb 26, 2024

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
After a lot of trial and error I got Traefik, OAuth, and CrowdSec running on Unraid.

Honestly the worst part was entering all the labels, and I'm too lazy to move to Docker compose.

E: no more Authelia, MariaDB, Redis, or swag. A+

Matt Zerella fucked around with this message at 19:01 on Feb 27, 2024

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down
Oh what the poo poo. After having my Ubuntu NUC running portainer and Plex very well for the past few weeks I decided it was time to set up Watchtower for auto-updating and then figure out a backup solution. I had a tickling concern about how Watchtower would handle the restarting of the Plex container as it always fails and I have to toggle network off and back on to Host (and clear the /init) to get it to re-run most of the time.

The fucker deleted the entire container.

So I re-pulled Plex and must not have set it up correctly (or the same as before), because now I had to re-add all my libraries and wait forever for all of the files to be added.

I had downloaded the Portainer backup.zip from the settings and when I looked to see how to revert back to that backup, it turns out that is just a backup for Portainer and not settings for my containers.

Why is this so obtuse? Or better question, why am I so obtuse?

cruft
Oct 25, 2007

TraderStav posted:

Oh what the poo poo. After having my Ubuntu NUC running portainer and Plex very well for the past few weeks I decided it was time to set up Watchtower for auto-updating and then figure out a backup solution. I had a tickling concern about how Watchtower would handle the restarting of the Plex container as it always fails and I have to toggle network off and back on to Host (and clear the /init) to get it to re-run most of the time.

The fucker deleted the entire container.

So I re-pulled Plex and must not have set it up correctly (or the same as before), because now I had to re-add all my libraries and wait forever for all of the files to be added.

I had downloaded the Portainer backup.zip from the settings and when I looked to see how to revert back to that backup, it turns out that is just a backup for Portainer and not settings for my containers.

Why is this so obtuse? Or better question, why am I so obtuse?

I'm guessing portainer used "docker volumes", and something told it "remove this container" so it gleefully removed the associated volume, too. And now you're sad.

I never understood why the Docker people were so horned up about volumes. They always seemed fragile to me, like, they could fail in exactly this way. And you can't (easily) get to the contents outside of the container.

Before you're committed, you might want to consider switching to a "bind mount", which is essentially telling it to use a path on the filesystem that you specify, instead of a mystical docker volume that lives somewhere you're not quite sure of.

IDK how portainer makes Docker options show up, or if it even does, but on the commandline that'd just be using "-v /host/path/to/directory:/container/path" instead of "-v magicvolume:/container/path".

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down

cruft posted:

I'm guessing portainer used "docker volumes", and something told it "remove this container" so it gleefully removed the associated volume, too. And now you're sad.

I never understood why the Docker people were so horned up about volumes. They always seemed fragile to me, like, they could fail in exactly this way. And you can't (easily) get to the contents outside of the container.

Before you're committed, you might want to consider switching to a "bind mount", which is essentially telling it to use a path on the filesystem that you specify, instead of a mystical docker volume that lives somewhere you're not quite sure of.

IDK how portainer makes Docker options show up, or if it even does, but on the commandline that'd just be using "-v /host/path/to/directory:/container/path" instead of "-v magicvolume:/container/path".

Yes, the tutorial that I followed to set up the Plex container (yes, it was easy. Yes, I'm new to this and still following tutorials) had me set up the plex config in a volume container and then bind mounts for all my media. I can just bind /config to something outside the /var/lib/docker folder (which for some reason I cannot cd into but can sudo ls and such...)?

I may go that route, and I can solve my 'backup' problem too by setting up a folder on the UnRaid array (a different machine) to use; other than configuring those shares and setting up Docker again, that should make the NUC MOSTLY disposable...

Thanks!

Resdfru
Jun 4, 2004

I'm a freak on a leash.
Docker volumes sounds like a good assumption. I only ever do bind mounts if my containers need storage

My suggestion, traderstav, is to pick one of your containers and manage it manually. Use docker compose and set it up and if you don't know what one of the options does Google it. I think a better base understanding of what portainer is doing behind the scenes will be a good thing in general. That's just my opinion, I don't know if it's a good one

TraderStav
May 19, 2006

It feels like I was standing my entire life and I just sat down

Resdfru posted:

Docker volumes sounds like a good assumption. I only ever do bind mounts if my containers need storage

My suggestion, traderstav, is to pick one of your containers and manage it manually. Use docker compose and set it up and if you don't know what one of the options does Google it. I think a better base understanding of what portainer is doing behind the scenes will be a good thing in general. That's just my opinion, I don't know if it's a good one

To be sure I understand what you mean: skip portainer altogether and just figure it out on the command line from the ground up? Manage it that way?

That's sounding more like an attractive solution, as Portainer has not been the friendly solution that I had hoped for. I was looking for something that was just like UnRaid's container management, outside UnRaid. This serves as an excellent learning opportunity.

cruft
Oct 25, 2007

TraderStav posted:

I can just bind /config to something outside the /var/lib/docker folder (which for some reason I cannot cd into but can sudo ls and such...)?

You can!

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

TraderStav posted:

To be sure I understand what you mean: skip portainer altogether and just figure it out on the command line from the ground up? Manage it that way?

That's sounding more like an attractive solution, as Portainer has not been the friendly solution that I had hoped for. I was looking for something that was just like UnRaid's container management, outside UnRaid. This serves as an excellent learning opportunity.

Docker compose is a YAML-based definition of what docker containers to spin up, and how. Then "docker-compose" consumes the file and spins up all your containers as defined.
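
For a concrete sketch, a compose file for a Plex-style container using a bind mount might look like the following; the image name, host paths, and environment variables are placeholders, so check the docs for whatever image you actually run.

code:
# docker-compose.yml (sketch)
services:
  plex:
    image: lscr.io/linuxserver/plex:latest   # placeholder image
    network_mode: host                       # Plex generally behaves best on the host network
    environment:
      - TZ=America/New_York
    volumes:
      - /opt/plex/config:/config             # bind mount: host path on the left, container path on the right
      - /mnt/media:/media:ro                 # media mounted read-only
    restart: unless-stopped
From the directory holding that file, "docker compose up -d" (or "docker-compose up -d" with the older standalone binary) brings it up, and the config now lives at a host path you can back up directly.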

cruft
Oct 25, 2007

Matt Zerella posted:

Docker compose is a YAML-based definition of what docker containers to spin up, and how. Then "docker-compose" consumes the file and spins up all your containers as defined.

Or "docker stack deploy -f docker-compose.yaml mystack" if you've previously run "docker swarm init" because you're a fancypants and want something like Kubernetes but without a billion yaml files.

Newer versions of docker also allow you to run "docker compose"; you no longer need the separate "docker-compose" binary add-on.

TransatlanticFoe
Mar 1, 2003

Hell Gem

TraderStav posted:

To be sure I understand what you mean: skip portainer altogether and just figure it out on the command line from the ground up? Manage it that way?

That's sounding more like an attractive solution as Portainer has not been the friendly solution that I had hoped. I was looking for something that was just like UnRaid's container management, outside UnRaid. This serves as an excellent learning opportunity.

If you want some web interface to manage your Docker apps, I just migrated over from Portainer to Dockge, which lets you still use compose files but gives you an easy way to update/restart/open a shell on your containers.

Warbird
May 23, 2012

America's Favorite Dumbass

Beat me to it. I can say with near certainty that anyone reading this does not, in fact, need to be using Portainer over Dockge, and frankly shouldn't be. Dockge doesn't hold your compose files hostage or prevent you from using your system/container stacks in ways it may not be happy with. Unless you absolutely have to have stuff like user management and so on, Dockge is plenty and does no more than it needs to. My only gripe with it is that you can't easily isolate the output/logs of a given container if you're colocating it with some others in a stack/compose file, but it's a minor quibble.

A Bag of Milk
Jul 3, 2007

I don't see any American dream; I see an American nightmare.
Apologies for the long post incoming, but if anyone has a few minutes I could use some expertise.

I'm having a hell of a time trying to shake this 403 error I'm getting when trying to access my self-hosted apps using my domain name, and I'm not even sure which part of my setup could be the culprit. My goal is to route my nextcloud container through nginx proxy manager using my domain name, so I can easily access nextcloud outside my LAN. I am running Fedora Server in a VM on TrueNAS Scale. I set up the network bridge between the VM and the TrueNAS storage and connected them with an NFS share. So far, I have my containers up and running for npm, cloudflare DDNS, and nextcloud all using podman compose, and I attached my domain's nameservers to my cloudflare account. I can access nextcloud ok using my.internal.ip.address:8080 but trying to access it using nextcloud.mydomain.com always returns a 403 error. I understand this could be due to a wide variety of things, so I'm trying to rule stuff out.

I have checked the logs for all containers, and none contain error messages. The cloudflare DDNS container logs show that it is successfully updating my public IP address to my domain. Then in my cloudflare DNS records I have a CNAME of my subdomain pointing to my domain name as well as an A record of my domain pointing to my public IP address. To try to rule out DDNS as the problem I also tried disabling DDNS and making an A record called nextcloud with content set as my public IP address.

When I ping nextcloud.mydomain.com from my server it is successful, and the IP address listed is my public IP address. I used netstat to verify my server is listening on ports 80 and 443. In my router software, I forwarded them to my VM IP address. I also turned firewalld and SELinux on and off to no effect. In netstat I found that ports 80 and 443 were being listened to by something called 'rootlessport.' I know this is related to podman, and wasn't sure if this needed extra configuration. So I respun up all of the containers in docker instead. The result was the same.

I can add nextcloud.mydomain.com as a proxy host in nginx proxy manager, and successfully add the Let's Encrypt SSL cert. I don't see anything going wrong with the process inside npm until I try to access my domain. To try to rule out any problems with nextcloud, I tried to set up npm.mydomain.com and forward port 81 to access the nginx proxy manager webui and got the same 403 error.

Originally I thought the problem could be related to permissions. For npm, my understanding is that there are three places where you can set the user. There is the nginx.conf file inside the /etc directory within the container, there's the docker compose yaml, and there is the config.json file that sits adjacent to the docker compose file. I'm not exactly sure how permissions work, but I did set all three of these to the same user. And added the fedora vm user to the docker group. I originally had some problems here, but now the containers don't have any issues creating their necessary directories and files on the fedora side of the volume mapping.

Since I get the same 403 nginx error when I try to access my domain even when I don't have any proxy host defined in nginx proxy manager at all, I thought there might be some kind of problem with the way I am defining my proxy host. I knew there was some other way to configure this where you can simply set the container name as the forward hostname instead of using the IP address. So I defined the same network for each service in each podman compose yaml, and set the alias for each service. Then I tried using myalias.mynetwork as the forward hostname when defining my proxy hosts in npm. This also didn't change anything.

Thinking the problem might be related to my domain somehow, I tried to redo the whole process using DuckDNS as my DDNS service rather than using Cloudflare and my domain. I set up my DuckDNS account, spun up my DuckDNS container, and set the DuckDNS proxy host in npm. Everything appeared to be in working order with no errors, and DuckDNS was pointing at my public IP address, but when I would go to mydomain.duckdns.org in a browser it would time out rather than display the nginx congratulations page.

One last thing, I have Fedora running on my desktop as well. So I installed all the bits, spun up the npm container on my desktop, and made the proxy host which also resulted in 403. It makes me think my router is the problem, but forwarding ports is pretty straightforward, and I'm not sure what else could be the problem on that level. I did host an Astroneer server just a few months ago which required forwarding ports and that all went fine, so I don't have any cgnat or complex hardware configurations that would prevent ports from opening.

As for other options, tailscale would be a viable alternative, except I need my files accessible from the web on computers that aren't able to have a tailscale client installed on them. Cloudflare tunnels also seem to have a restrictive TOS that would be incompatible with services such as nextcloud, and even though enforcement seems spotty, I don't want to rely on poor TOS enforcement for my services to work. TrueCharts was giving me the same problems when I had tried it earlier, but I wasn't sure if that was due to something completely different since I found it a bit opaque. It's not the ideal solution anyway since the app library is so limited compared to linux and the updates that cause breaking changes are a pain in the rear end. Perhaps I would be better served doing this a completely different way, but before I try to learn caddy/opnsense/nixos I'd like a better understanding of what exactly my problem is.

I'm trying to learn this as I go, and boy do I feel like I learned a lot. Still didn't get anywhere though. I think there might be a gap in my knowledge somewhere, and now I'm completely out of ideas. If anyone could tell me what to do next, or if any of the above sounds wrong, I'd really appreciate it.

cruft
Oct 25, 2007


Let me see if I got all that right:

You have a public-facing IP address running a hell of a lot of crap* between the Internet and an nginx server. When you run curl from your own network, you get a 200. When you curl from outside, you get a 403.

Assuming I got that right, what you need to do is start removing pieces from your giant pile of abstraction until you've identified precisely what the difference is between getting a 200 from outside, and getting a 403 from outside. Then ask us about that.

* The crap appears to be, at a minimum, TrueNAS, some kind of virtual machine, Fedora Server, a (VM player?) network bridge, and rootless podman.

e: I don't want this to sound like I want you to stop posting. Hell, I rubberduck tech support forums all the time, it can help me figure out the problem on my own. But you have like 26 variables here in the part I actually spent time reading and trying to understand. You need to slim that down before any Internet randos are going to stand a chance of substantively helping you.
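
e2: If it helps, here's a sketch of one way to start bisecting; the LAN IP, public IP, and hostname below are placeholders. Make the same request at three depths of the stack and see where the answer changes from 200 to 403.

code:
# 1. Straight at the VM from inside the LAN, with the real name used for SNI/Host:
curl -vk --resolve nextcloud.mydomain.com:443:192.168.1.50 https://nextcloud.mydomain.com/

# 2. Still from the LAN, but forced through the public IP / port forward
#    (only meaningful if your router supports hairpin NAT):
curl -vk --resolve nextcloud.mydomain.com:443:203.0.113.7 https://nextcloud.mydomain.com/

# 3. From genuinely outside (phone on cellular, a cheap VPS, etc.):
curl -v https://nextcloud.mydomain.com/
Whichever step flips from 200 to 403 tells you which layer to stare at, and the Server header plus the body of the 403 usually give away whether it came from nginx proxy manager, Cloudflare, or something else entirely.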

cruft fucked around with this message at 23:01 on Feb 28, 2024

A Bag of Milk
Jul 3, 2007

I don't see any American dream; I see an American nightmare.
That's fair. I have only posted after I thought I had isolated every variable I could think of, only to find myself back where I started with no more leads. Instead, I'll try using a fresh bootable usb drive to create the simplest setup that I can, and see where that goes.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
Have you tried turning on nginx debug logging? It may give you some more clues about the 403.

As I was reading through this, I was wondering to myself: why not just use a VPN? It makes this whole thing so trivial. Then I got to this part:

quote:

As for other options, tailscale would be a viable alternative, except I need my files accessible from the web on computers that aren't able to have a tailscale client installed on them.

This makes it seem like you probably shouldn't even be accessing your private file share from these machines anyways. Is this some sort of employer owned machine you are trying to access Nextcloud from? If so, I would definitely avoid doing that.

cruft
Oct 25, 2007

fletcher posted:

This makes it seem like you probably shouldn't even be accessing your private file share from these machines anyways. Is this some sort of employer owned machine you are trying to access Nextcloud from? If so, I would definitely avoid doing that.

:hmmyes:

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
There's a few twitter feeds I follow for ski resort conditions, really wish I could view these in my RSS reader rather than on X. Seems like there have been a few self hosted twitter to RSS bridges over time, it's unclear if any of them are still working though. Any working solutions that folks know of?

spincube
Jan 31, 2006

I spent :10bux: so I could say that I finally figured out what this god damned cube is doing. Get well Lowtax.
Grimey Drawer
The only solution I've found is to find a Nitter instance that supports RSS (e.g. instance.tld/username/rss), and leave your reader to ping it every hour or whatever until the instance wakes up from being rate-limited. It's no good for up-to-the-minute stuff, of course, but for skimming through a feed at the end of the day it's functional enough.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

spincube posted:

The only solution I've found is to find a Nitter instance that supports RSS (e.g. instance.tld/username/rss), and leave your reader to ping it every hour or whatever until the instance wakes up from being rate-limited. It's no good for up-to-the-minute stuff, of course, but for skimming through a feed at the end of the day it's functional enough.

Sounds like self-hosting Nitter should be a viable option then?

spincube
Jan 31, 2006

I spent :10bux: so I could say that I finally figured out what this god damned cube is doing. Get well Lowtax.
Grimey Drawer
Could be. I don't have the stomach for it myself, for the same reason I don't self-host my email - feels like trying to access Twitter on my own terms, without an account, is only going to get more inconvenient as time goes on: and I'll deal with the infrequent 'service' if it means offloading that onto someone else.

Nitrousoxide
May 30, 2011

do not buy a oneplus phone



Nitter's kinda busted now with api changes twitter made. I don't think you can get the guest accounts they used as a workaround for cutting off free api access now.

I used to sub to a few hundred twitter accounts through nitter rss feeds, but they are all broken now. Maybe a locally hosted one would last some time, but I believe the nitter dev has called it a dead project and isn't going to continue to try and work around Musk's bullshit.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb
drat! Thanks for the info. I don't mind using my own account, I just like having the clear chronological order of an RSS feed and the ease of seeing read vs. unread posts. Having it consolidated with the rest of my RSS feeds would be nice as well.

Kibner
Oct 21, 2008

#1 Pelican Fan

fletcher posted:

drat! Thanks for the info. I don't mind using my own account, I just like having the clear chronological order of an RSS feed and the ease of seeing read vs. unread posts. Having it consolidated with the rest of my RSS feeds would be nice as well.

You probably still could, but you would then have to pay Twitter for API access and they are charging truly exorbitant amounts.

Malloc Voidstar
May 7, 2007

Fuck the cowboys. Unf. Fuck em hard.

Nitrousoxide posted:

Nitter's kinda busted now with api changes twitter made. I don't think you can get the guest accounts they used as a workaround for cutting off free api access now.

I used to sub to a few hundred twitter accounts through nitter rss feeds, but they are all broken now. Maybe a locally hosted one would last some time, but I believe the nitter dev has called it a dead project and isn't going to continue to try and work around Musk's bullshit.

Nitter can still work with real accounts, apparently including suspended accounts, but you need an account registered before (iirc) 2016 for it to have high enough limits. Also Twitter might eventually totally ban accounts involved with this, since you're not making them money. And since this doesn't really scale for large instances, if Twitter makes any internal changes Nitter might break for a long time or die for good.

Kibner posted:

You probably still could, but you would then have to pay Twitter for API access and they are charging truly exorbitant amounts.

You don't need to pay money; Nitter pretends to be Android and uses the internal Twitter APIs, not the ones for external developers.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

You might have better luck with a Mastodon-Twitter crossposter, then you can use Mastodon's built-in RSS feed.

Potato Salad
Oct 23, 2014

nobody cares


What usenet indexers and providers are y'all using these days? For indexers, I'm on nzbgeek and nzbfinder; for providers, Frugal and Eweka. Weirdly, I can't seem to find should-be-popular things -- childhood treasures.

Do I need to add other indexers/providers to the mix?

Potato Salad fucked around with this message at 02:56 on Mar 4, 2024

Resdfru
Jun 4, 2004

I'm a freak on a leash.
You're looking for this thread
https://forums.somethingawful.com/showthread.php?threadid=3409898&pagenumber=355&perpage=40

Time_pants
Jun 25, 2012

Now sauntering to the ring, please welcome the lackadaisical style of the man who is always doing something...

I'm reposting this from a thread I posted earlier on the recommendation that I might have better luck here:

I have recently started working on a project to try and archive about two decades of digital family photos--about 10 TB all up. I have a computer set aside for the task and a couple of 20 TB hard drives in a RAID setup. I'm at a bit of a loss as to how best to proceed from here, though, because the goal is to make all of the photos accessible not only from any computer within our home network (easy and fine) but also by smartphone (I am completely stumped). I tried Plex, but it seems really slow and better suited to streaming video than storing tens or hundreds of thousands of photos in an easy-to-use app, since the people I want to do this for aren't terribly tech savvy.

Am I barking up the wrong tree, or is this achievable?

flappin fish
Jul 4, 2005

Time_pants posted:

I'm reposting this from a thread I posted earlier on the recommendation that I might have better luck here:

I have recently started working on a project to try and archive about two decades of digital family photos--about 10 TB all up. I have a computer set aside for the task and a couple of 20 TB hard drives in a RAID setup. I'm at a bit of a loss as to how best to proceed from here, though, because the goal is to make all of the photos accessible not only from any computer within our home network (easy and fine) but also by smartphone (I am completely stumped). I tried Plex, but it seems really slow and better suited to streaming video than storing tens or hundreds of thousands of photos in an easy-to-use app, since the people I want to do this for aren't terribly tech savvy.

Am I barking up the wrong tree, or is this achievable?

Personally, I use and like Immich, but they're not kidding when they say it's under active development - I've had to go in and tinker with the docker-compose.yml file a few times to keep it working after updates.

Other people in this thread have talked about using PhotoPrism and NextCloud, which should have apps also. There are a couple of other options listed here. I can't say how well they'll scale to 10TB of photos. In particular, Immich and PhotoPrism use a lot of machine learning for facial recognition and search, so importing that many images will take a while. There might be other, easier, options if you already have your photos nicely organized and just need to let people browse through folders.

Depending on your setup, there's also the extra step of making it accessible outside the LAN. My solution was to install Tailscale, limit things to immediate family, and not worry about it too much, but that may not be feasible for you.

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.

flappin fish posted:

Personally, I use and like Immich, but they're not kidding when they say it's under active development - I've had to go in and tinker with the docker-compose.yml file a few times to keep it working after updates.

Other people in this thread have talked about using PhotoPrism and NextCloud, which should have apps also. There are a couple of other options listed here. I can't say how well they'll scale to 10TB of photos. In particular, Immich and PhotoPrism use a lot of machine learning for facial recognition and search, so importing that many images will take a while. There might be other, easier, options if you already have your photos nicely organized and just need to let people browse through folders.

Depending on your setup, there's also the extra step of making it accessible outside the LAN. My solution was to install Tailscale, limit things to immediate family, and not worry about it too much, but that may not be feasible for you.

It might be worth mentioning that SpaceinvaderOne just released a tutorial for Immich. I haven't watched it since it's not something I use but his stuff is usually really helpful for getting stuff up and running.

Motronic
Nov 6, 2009

flappin fish posted:

Other people in this thread have talked about using PhotoPrism and NextCloud

I was one of the people unhappy with NextCloud because the built-in photo browser is trash and there is no meaningful mobile client. Someone, I believe here, told me the fix: Memories (https://github.com/pulsejet/memories). This completely fixed my browsing issues. It's not as good on mobile as I'd like (it does not display local images) but it's adequate. I'm using the Recognize NextCloud plugin for facial and object recognition. It too can be described as "adequate".

In the end, it's good enough that I'm no longer actively looking for a different solution.

cruft
Oct 25, 2007

I like pigallery2. Easy to set up, doesn't take days to index things, and when the next hot thing comes out, you can easily(?) switch to it because pigallery2 treats the images as read-only: it doesn't try to rearrange everything on disk.

e: Since like the 90s I've kept photos sorted by month they were taken. So it's like Photos/1999/04/whatever.jpg. I got photosync to use this schema easily enough.

Here's another thing I wrote, maybe someone will find it helpful. This shell script (stored as Photos/new/process.sh) sorts a directory full of jpegs (Photos/new) into my schema (Photos/$yyyy/$mm). It also transcodes the mjpeg files my camera creates into mp4 files that browsers can play, and sorts them too.

e: Bad script removed, hat tip mawarannahr. Read on ITT if you want to see the old one that you shouldn't use, and a less-buggy one I created in response.

cruft fucked around with this message at 17:05 on Mar 28, 2024

mawarannahr
May 21, 2019

cruft posted:

I like pigallery2. Easy to set up, doesn't take days to index things, and when the next hot thing comes out, you can easily(?) switch to it because pigallery2 treats the images as read-only: it doesn't try to rearrange everything on disk.

e: Since like the 90s I've kept photos sorted by month they were taken. So it's like Photos/1999/04/whatever.jpg. I got photosync to use this schema easily enough.

Here's another thing I wrote, maybe someone will find it helpful. This shell script (stored as Photos/new/process.sh) sorts a directory full of jpegs (Photos/new) into my schema (Photos/$yyyy/$mm). It also transcodes the mjpeg files my camera creates into mp4 files that browsers can play, and sorts them too.


code:
#! /bin/sh

cd $(basename $0)

ls *.mov *.MOV | while read fn; do
  ffmpeg -i "$fn" -map_metadata 0 "${fn%.*}.mp4"
done

ls *.jpg *.JPG *.jpeg *.JPEG *.mp4 | while read fn; do
  exiftool '-Directory<CreateDate' -d "../%Y/%m" .
done

Inadvisable to parse ls output, might want to go with find and use -iname to match, or even just glob

cruft
Oct 25, 2007

mawarannahr posted:

Inadvisable to parse ls output, might want to go with find and use -iname to match, or even just glob

Aw, crap. I forgot `ls` sucks. But with globs, `*.unmatched-file` expands to the literal `*.unmatched-file`, requiring a second check to see if the file we're about to process actually exists... unless I modify my stuff to only work in `bash`. `find` is, I guess, the way to do it, I just have to remember to provide `-maxdepth 1`.

Shell scripts blow.

Updated script:

code:
#! /bin/sh

cd "$(dirname "$0")" || exit 1   # work from the directory this script lives in
for fn in *.mov *.MOV; do
  test -f "$fn" || continue
  ffmpeg -i "$fn" -map_metadata 0 "${fn%.*}.mp4"
done

for fn in *.jpg *.JPG *.jpeg *.JPEG *.mp4; do
  test -f "$fn" || continue
  exiftool '-Directory<CreateDate' -d "../%Y/%m" "$fn"
done
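
And if you'd rather take mawarannahr's find suggestion all the way, a sketch of the same thing with find (same layout assumed: the script lives in Photos/new, with exiftool and ffmpeg installed) would be roughly:

code:
#! /bin/sh
# find-based variant: -maxdepth 1 stays in this directory, -iname matches
# extensions case-insensitively, and unmatched patterns simply produce no output.

cd "$(dirname "$0")" || exit 1

find . -maxdepth 1 -iname '*.mov' | while read -r fn; do
  ffmpeg -i "$fn" -map_metadata 0 "${fn%.*}.mp4"
done

find . -maxdepth 1 \( -iname '*.jpg' -o -iname '*.jpeg' -o -iname '*.mp4' \) |
while read -r fn; do
  exiftool '-Directory<CreateDate' -d "../%Y/%m" "$fn"
done
(Filenames containing newlines would still trip this up, but for camera output that's a safe assumption.)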

cruft fucked around with this message at 17:04 on Mar 28, 2024

Keito
Jul 21, 2005

WHAT DO I CHOOSE ?
I've been running Immich since late July last year and been satisfied with it. My wife and I sync our phone cameras there.

Started out with Nextcloud for photo sync, but with both the syncing being unreliable/annoying (on my wife's phone the Nextcloud app would pop up 1000+ notifications in rapid succession asking her to compare diffs for apparent out-of-sync files every time it ran its background sync task), and its photo viewing experience being abysmal, I was happy to move away from it. Keeping up with Immich changes has been a little annoying too, but release notes are quite clear about what changes you need to implement, so it's not hard at all.

I'm basically just using Nextcloud for syncing my SSH config/keys at this point, so I'm considering nuking it and setting up something that actually just works for simple file syncing, like Syncthing. Couldn't use Nextcloud for software development either as it can't deal with lots of tiny file changes generated by version control software, so I set up Gitea instead for that.
