|
Cenodoxus posted:For system and application monitoring, I use Telegraf feeding to InfluxDB with dashboards and such in Grafana. Seems interesting. Any good writeups for that stuff or is it largely self explanatory?
|
# ? Feb 18, 2024 04:32 |
|
Warbird posted:Seems interesting. Any good writeups for that stuff or is it largely self explanatory? For the most part, these are services I've expanded into over the years: fiddled with them and found opportunities to integrate them as I went. Graylog can be a bit of a bear because it's built on top of MongoDB and Elasticsearch, so while the setup process is well documented, you can end up going down a rabbit hole. The official docs are still pretty good, though. You might have better luck starting with InfluxDB and Telegraf, since the setup is more streamlined. The official docs are very straightforward, but you can also find a lot of blogs and guides elsewhere.
|
# ? Feb 18, 2024 16:16 |
|
NetData is very easy to get running, FWIW. I'd even call it "batteries included."
|
# ? Feb 19, 2024 00:25 |
|
Looks like UnRAID might be switching to a yearly-updates subscription model, and that would probably be the end of it for me. It's been a good run, but I guess it's finally time to move over to full-fat Linux since I'm comfortable with it. Is ZFS OK on Rocky Linux? I prefer RedHat-based distros, but I'd like to avoid bleeding-edge Fedora. If I went Debian I'd probably stick to vanilla Debian, as I'm not a big fan of Ubuntu; I'm pretty sure ZFS is fine there. My plan would be to do most of my configuration with Ansible (so it's repeatable in case of a disaster) and use Portainer for docker management.
|
# ? Feb 19, 2024 16:40 |
|
Matt Zerella posted:Looks like UnRAID might be switching to yearly updates subscription model and that would probably be the end of it for me. It's been a good run but I guess it's finally time to move over to full fat Linux since I'm comfortable with it. I'm guessing TrueNAS Scale is not an option? It runs on Debian, iirc.
|
# ? Feb 19, 2024 17:15 |
|
Kibner posted:I'm guessing TrueNAS Scale is not an option? It runs on Debian, iirc. No, I hate how they use K8s for running containers. And I don't want to run a VM for dockers.
|
# ? Feb 19, 2024 17:47 |
|
Another option would be OpenMediaVault. It's Debian as well, and used to include Portainer as the recommended way to use docker, but they've gotten rid of that in favor of their own webui for docker compose.
|
# ? Feb 19, 2024 18:24 |
I use OpenMediaVault inside a Proxmox VM. Mostly for legacy reasons though. I started out using OMV on bare metal and heavily utilized the webui. I migrated it to a VM to make backing up/restoring way easier and barely use the webui now. I have a script that updates all my containers and OMV via ssh and I manage the containers via a git repo with compose files. As a starting point though, OMV is a pretty easy distro to use. It's an easy recommend for newbies.
|
|
# ? Feb 19, 2024 18:40 |
I still can’t or like haven’t gotten around to wrapping my head around git for this stuff. I keep my compose files backed up and they just live in the root of their stack’s directory.
|
|
# ? Feb 19, 2024 19:00 |
|
I have a hacky method. I have a git repo for my compose files and I had a self-hosted runner container. When I pushed/merged to main, it would kick off GitHub Actions, which just ran compose up. I keep all my compose files in one directory, so I use a command to run compose up on all the files. My runner token expired and I never bothered to generate a new one, though. It was useful when I was making a lot of changes, but I rarely touch anything anymore, so when I do, I push to git, then ssh in, pull, and run compose up on the container I edited. I have a variety of tasks on my Kanboard that I wanna do to do things better, but finding time is always the issue.
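The "run compose up on all the files" part can be sketched as a small shell loop. This isn't the poster's actual command; the directory layout and the dry-run flag are my own invention:

```shell
# Bring up every compose file in one directory (a sketch, not an exact recipe).
# DRY_RUN=1 only prints what would run, so you can eyeball it first.
compose_up_all() {
    dir="$1"
    for f in "$dir"/*.yml "$dir"/*.yaml; do
        [ -e "$f" ] || continue            # skip the literal glob when nothing matches
        if [ "${DRY_RUN:-0}" = "1" ]; then
            echo docker compose -f "$f" up -d
        else
            docker compose -f "$f" up -d
        fi
    done
}

# Example: preview what would be started from ~/compose
DRY_RUN=1 compose_up_all "$HOME/compose"
```

Dropping the `DRY_RUN=1` makes it actually run `docker compose up -d` per file.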
|
# ? Feb 19, 2024 19:09 |
|
I'm still a few steps behind you all. I don't even know where to find my docker compose files on either Unraid or Portainer; it's all abstracted away in the GUI. I have my appdata and docker files all backed up, so I can get there, but I'm not sure if the items I'm finding are the docker compose files or the XML that Unraid works with. I feel like I'm super close to having the full loop on basic Docker stuff closed once I get my head around this issue, and then I can do advanced poo poo like have all my docker files located elsewhere and, at the press of a button, have all my dockers fire right back up on a new instance. That goal always confuses me, though, since these containers all have stuff sitting in appdata that they need... so if I don't have that backed up or waiting for the container, how is it so turnkey? Then I get right back to the start and my understanding falls apart.
|
# ? Feb 19, 2024 19:36 |
|
I don't use Unraid or Portainer, so I could be way off here, but: 1. If what you're looking at is a compose file, it looks similar to this. Compose is a standardized YAML format; if it's not in this general format, docker won't do what it should. But Portainer or Unraid could be wrapping this in something else, I suppose. https://docs.linuxserver.io/general/docker-compose/#v1x-compatibility code:
2. Your container's compose file will say which volumes it uses to store data. If those locations are backed up, then whatever the container is doing is backed up, for the most part. If there is no volume attached, then the container is likely ephemeral. Now, maybe Unraid or Portainer changes things and stores that stuff in some sort of appdata? I guess they could be keeping all your volumes in the same place.
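To make both points concrete, here's a minimal, made-up compose file in that general shape; the service name, image, ports, and paths are all illustrative:

```shell
# Write an illustrative docker-compose.yml (service/image/paths are made up).
cat > /tmp/example-docker-compose.yml <<'EOF'
services:
  myapp:                      # hypothetical service name
    image: nginx:alpine       # any image goes here
    ports:
      - "8080:80"
    volumes:
      - ./appdata:/config     # host path : container path -- back up ./appdata
    restart: unless-stopped
EOF

# The volumes: section is the part that tells you what to back up.
grep -A1 'volumes:' /tmp/example-docker-compose.yml
```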
|
# ? Feb 19, 2024 19:51 |
|
Disaster avoided. Looks like current UnRAID licenses are unaffected by the new keys.
|
# ? Feb 19, 2024 20:05 |
TraderStav posted:I'm still a few steps behind you all. I don't even know where to find my docker compose files on either Unraid or Portainer. It's all abstracted away in the GUI. I have my appdata and docker files all backed up so can get there, but not sure if the items I'm finding are the docker compose files, or the XML that Unraid works with. You can grab the compose files out of Portainer by going to Settings and then backing up Portainer. They should all just be in a zip file you can unzip. It won't be organized with any descriptions for the various compose files, so you'll need to go into each one and figure out which service it's for, but assuming you don't have a ton, it shouldn't take too long.
|
|
# ? Feb 19, 2024 20:44 |
|
TraderStav posted:I'm still a few steps behind you all. I don't even know where to find my docker compose files on either Unraid or Portainer. It's all abstracted away in the GUI. When I was using stacks, all of my compose files were under the compose/ directory in the mapped volume. They’re each under separate numbered folders by id, you can see which is which by mousing over in Portainer and checking the URL. Once I figured it out I had them symlinked in a separate folder with easier names.
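That symlink trick might look like this; the stack IDs and friendly names below are invented for the example:

```shell
# Portainer stores each stack's compose file under a numbered folder (compose/<id>/).
# Map the opaque IDs to readable names with symlinks; IDs and names are made up.
STACKS="/tmp/portainer-demo/compose"
BYNAME="/tmp/portainer-demo/by-name"

mkdir -p "$STACKS/3" "$STACKS/7" "$BYNAME"
touch "$STACKS/3/docker-compose.yml" "$STACKS/7/docker-compose.yml"

ln -sfn "$STACKS/3" "$BYNAME/jellyfin"      # stack id 3 -> "jellyfin" (hypothetical)
ln -sfn "$STACKS/7" "$BYNAME/uptime-kuma"   # stack id 7 -> "uptime-kuma"

ls -l "$BYNAME"
```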
|
# ? Feb 19, 2024 21:33 |
|
So there’s a lot of talk about UniFi here, and my house came with a pair of APs pre-installed with no documentation. The circle-y ones with the LED ring in the center; they route back to a cable cabinet/some sort of PoE adapters plugged into the wall. I used to be a network guy but have literally no time to dig into how to access these things once they’re plugged in (I’ve left them off the network until I know how to admin them). Any tips from the thread, or a better thread to ask in, etc. regarding these? They’d be fed by gigabit fiber, which right now I have plugged into one of those edgelord gaming routers I picked up during some VR work… which is fine. Any helpful resources on where to start would be great, as I have a feeling there’s a “smart home” way to deal with this poo poo through an app or whatever. Cool thread, thanks for creating it

Edit: I’m also open to ‘let me google that for you’ burns since honestly I’m being lazy about it right now.

Edit edit: I’m guessing this is the one, same adapter etc. so at least I have a manual for now: https://dl.ubnt.com/guides/UniFi/UniFi_AP_QSG.pdf

Idiot edit: Mine are currently lit blue, which doesn’t seem to be a status for the one in the manual. Anyway, it looks exactly the same but definitely consumer-level.

Dumbass edit: It’s a UAP-AC-LITE and blue means the network is up somehow, so I guess I really am going to stupidly dive into this late night. Fml

hellotoothpaste fucked around with this message at 09:01 on Feb 21, 2024 |
# ? Feb 21, 2024 08:47 |
|
I got no love for Ubiquiti poo poo but I think you'll probably need to pull them down and hit the reset button. Not sure you can adopt them to a UniFi controller (yeah you'll need that too) if they're already set up elsewhere/aren't factory default.
|
# ? Feb 21, 2024 10:15 |
|
You can run APs without a controller, but I'm not sure which features, if any, would be unavailable. You will probably have to reset them, but it's possible they are using default creds: https://lazyadmin.nl/home-network/setup-unifi-ap-without-controller/ As this is the self-hosting thread: you can easily self-host the UniFi controller in docker.
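A starting point for that, printed rather than executed so nothing happens until you've checked it. The image name and port list here are assumptions from memory, so verify against the image's own docs (linuxserver's build also wants an external MongoDB):

```shell
# Sketch of a self-hosted UniFi controller via docker; verify image/ports yourself.
IMAGE="lscr.io/linuxserver/unifi-network-application"

# 8443 = web UI, 8080 = device inform, 3478/udp = STUN (the standard UniFi ports)
CMD="docker run -d --name unifi -p 8443:8443 -p 8080:8080 -p 3478:3478/udp -v unifi-config:/config $IMAGE"

# Print it instead of running it -- this is a dry-run sketch
echo "$CMD"
```

Once the controller is up, adoption of factory-reset APs happens from its web UI.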
|
# ? Feb 21, 2024 13:54 |
|
You don't need to keep the controller running; once it's set up you can turn it off and the wifi will continue to work, fully functional. Just turn the controller software back on every few months for updates, or when you need to make changes. You can install it on docker, or just on your normal computer. Having it run somewhere 24/7 means you can get logs and statistics and things, check what devices are connected, etc. And I think you will need to go and reset them with a paper clip or pin in order to be able to adopt them.
|
# ? Feb 21, 2024 14:09 |
|
Provided they are not EOL (end of life), you can configure them with the UniFi app and not set up a controller. You get about 90% of the functionality/configuration options with the app.
|
# ? Feb 21, 2024 14:50 |
|
THF13 posted:You don't need to keep the controller running, once it's setup you can turn it off and the wifi will continue to work fully functional. "Fully" meaning full if you don't use a guest network or expect advanced/assisted handoff between APs.
|
# ? Feb 21, 2024 15:22 |
|
mawarannahr posted:NetData is very easy to get running, FWIW. I'd even call it "batteries included." Netdata works great for home servers. The docker compose file works out of the box to get all system stats. I've not tried to get it running on my Fedora system with Podman as a child of my Ubuntu server's parent instance yet.
|
# ? Feb 22, 2024 01:20 |
|
Netdata is good. I ran into a weird issue with Netdata in docker-compose, but my use case was probably a little odd to start with. If you follow the official instructions, they have you set the container's hostname equal to the FQDN of your host. I also had some other containers for various services that were hosted under the host's FQDN via Traefik routes. One of those containers was Uptime Kuma, trying to probe those services underneath the host's FQDN. So when I ran Netdata, Docker's internal DNS fuckery meant that all those DNS queries from Kuma for my host's FQDN were getting resolved to the Netdata container. It broke my monitoring for a little while until I figured it out. It wasn't a huge deal - I just commented out the hostname line, but that also meant that my Netdata instance saw the hostname as whatever the container ID happened to be, rather than the name of the host. Cenodoxus fucked around with this message at 01:37 on Feb 22, 2024 |
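For anyone hitting the same thing, the knob in question is just the `hostname:` key on the compose service. A stripped-down sketch of the workaround; the FQDN is a placeholder and the other keys are from memory of the official compose example, so check Netdata's own docs:

```shell
# Write a stripped-down netdata compose sketch with the hostname workaround.
cat > /tmp/netdata-compose-sketch.yml <<'EOF'
services:
  netdata:
    image: netdata/netdata
    # hostname: host.example.com   <- official docs set this to the host's FQDN;
    # leaving it unset avoids the internal-DNS collision described in the post,
    # at the cost of netdata reporting the container ID as its hostname
    cap_add:
      - SYS_PTRACE
    security_opt:
      - apparmor:unconfined
EOF
cat /tmp/netdata-compose-sketch.yml
```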
# ? Feb 22, 2024 01:33 |
|
I liked netdata a whole lot too. It was less resource-intensive than Grafana/Prometheus, and even had some information on how to take that load down further. At the end of the day, though, it was still taking something close to 8% of the entire CPU on the RPi4, which is what pushed me to write my own thing displaying CPU use only when asked, and going back to the way I did system admin on the Sun 4a labs in the 90s. But for the casual homelab, I feel like netdata is probably the smart place to start. It has excellent documentation on everything it's collecting and why it's important, and is much simpler to set up than Grafana/Prometheus/exporters, which, frankly, are a pretty daunting task. I am running a Raspberry Pi. I realize this is unusual. I am not asking for assistance.
|
# ? Feb 22, 2024 15:08 |
Netdata is pretty neat, but loving hell I wish it was more granular - then again, that'd also increase the probe effect that cruft was just talking about above. I want someone to do what Sun FishWorks did, which you can see demonstrated here: https://www.youtube.com/watch?v=tDacjrSCeq4 A good setup of Grafana and Prometheus can kinda get you close to that, but the probe effect is much higher than it would be just using dtrace.
|
|
# ? Feb 22, 2024 16:29 |
|
BlankSystemDaemon posted:Netdata is pretty neat, but loving hell I wish it was more granular - then again, that'd also increase the probe effect that cruft was just talking about above. Yeah, I was actually trying to do something to log resource usage of a few specific processes (and ideally their threads) and I couldn't find a way to do it by PID. I ended up polling ps in a while-true loop and parsing the output... surely there has got to be a better way in 2024?
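For what it's worth, the ps-polling version doesn't have to be too gross: `-o` with a trailing `=` per field suppresses the header line, which makes parsing trivial. The fields and interval here are one arbitrary choice, and exact columns vary a bit between procps and BSD ps:

```shell
# One CPU%/RSS sample for a PID; trailing '=' in -o suppresses the header row.
sample_pid() {
    ps -p "$1" -o pid=,pcpu=,rss=
}

# Poll in a loop, e.g.:  while sample_pid 1234 >> samples.log; do sleep 2; done
sample_pid $$    # demo: sample this shell itself
```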
|
# ? Feb 22, 2024 16:34 |
mawarannahr posted:Yeah, I was actually trying to do something to log resource usage of a few specific processes (and ideally their threads) and I couldn't find a way to do it by PID. ended up polling ps while true and parsing the output... surely there has got to be a better way in 2024? I still maintain that the only solution is dtrace, or something with a similar design. And a packet filter with extensions doesn't count.
|
|
# ? Feb 22, 2024 16:56 |
|
Is Traefik my only option for using Google OAuth easily to protect and unify the logins for my apps? I'm currently using SWAG with Authelia, but I really hate that I have to run a database and Redis just for SSO. This is all internal protection and learning. Anyway, it's pretty easy to turn Authelia on for SWAG, but it looks like that's your only option aside from LDAP or basic auth?
|
# ? Feb 24, 2024 17:08 |
|
Matt Zerella posted:Is Traefik my only option for using Google OAUTH easily to protect and unify the logins for my apps? I just set up OAuth2-Proxy using my Gitea server as an IdP, but Google is the default. Works great, no database required; it's essentially the architecture I came up with for my (crappier) SimpleAuth, in that it encrypts the persistent data for itself and tells the browser to remember it. There are things you can't do this way, like revoke issued tokens, but for a homelab it's a solid option.

I'm using Caddy, which isn't well-documented for OAuth2-Proxy. Traefik is. Give it a whirl, I think you're going to enjoy it.

e: oh, I looked up SWAG. That's the thing that overlays nginx to make it as simple as Traefik and Caddy for TLS. OAuth2-Proxy provides nginx instructions, too, assuming SWAG exposes nginx config files...

If you're already thinking about using Traefik, I'd like to suggest looking at Caddy, also. I used Traefik for a while, and the TLS stuff was dreamy, but the configuration was just bonkers confusing to me: I was spending so much time debugging my Traefik config. Caddy gave me the auto-TLS and moved back to "edit a configuration file," which hurt my brain less.

cruft fucked around with this message at 23:44 on Feb 25, 2024 |
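For the Caddy side, a sketch of the shape this takes: Caddy's `forward_auth` directive and oauth2-proxy's `/oauth2/auth` check endpoint are real, but the hostnames and ports below are placeholders:

```shell
# Write a sketch Caddyfile: every request is checked against oauth2-proxy first.
cat > /tmp/Caddyfile.sketch <<'EOF'
app.example.com {
    # Ask oauth2-proxy whether this request carries a valid session
    forward_auth oauth2-proxy:4180 {
        uri /oauth2/auth
        copy_headers X-Auth-Request-User X-Auth-Request-Email
    }
    # Only reached if oauth2-proxy said OK
    reverse_proxy myapp:8080
}
EOF
cat /tmp/Caddyfile.sketch
```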
# ? Feb 25, 2024 21:39 |
|
I had a long effortpost about various FLOSS file hosting solutions and why they're all awful. It was too long. I'll put it on my blog or something. But the short version is: I spent all weekend trying out various things and it's just about as bad as it was 20 years ago when I threw my hands in the air and told everybody to just use Google.

NextCloud is probably the best candidate, but I refuse to run it for multiple reasons, not the least of which is that I can't trust PHP with my tax records.

OwnCloud Infinite Scale is easy, low-resource, and super quick. It doesn't want an external database. I got it going in about 10 minutes. But it uses the filesystem in a screwy way that nothing else can work with. They're clearly targeting Enterprise customers. Which makes sense, really: Enterprise customers don't typically need PhotoPrism to see their files, and they also are willing to pay money for software. They might even appreciate that OCIS can interact natively with Ceph. Not CephFS: just plain old Rados Ceph. I know I would have found that appealing at my previous job. That is not appealing at home. You won't be getting my $0/month and feature requests for family photo albums, OwnCloud GmbH.

At the end of the weekend, I'm back to dufs, which provides WebDAV and a usable (but not awesome) user interface in the browser. I can front-end it with Caddy/OAuth2-Proxy for SSO. Maybe I'll create an SFTP user for the Android devices to run PhotoSync (WebDAV is now protected by MFA OIDC, and PhotoSync can't cope with that).

Other things I looked at but decided were worse than a WebDAV server with built-in JavaScript client: SeaFile, sftpgo, CozyCloud, FileStash, MyDrive, webdav-drive.

cruft fucked around with this message at 00:00 on Feb 26, 2024 |
# ? Feb 25, 2024 23:38 |
|
cruft posted:NextCloud is probably the best candidate This is the saddest true statement I've read in a while.
|
# ? Feb 26, 2024 00:10 |
I loved that effort post! I had intended to read it later
|
|
# ? Feb 26, 2024 00:15 |
|
tuyop posted:I loved that effort post! I had intended to read it later Okay, I put it in the hard mode thread. https://forums.somethingawful.com/showthread.php?threadid=4051802&pagenumber=1#post538003958
|
# ? Feb 26, 2024 00:49 |
|
I think it sucks that you feel you need to quarantine those posts to that thread when they’d be perfectly fine here.
|
# ? Feb 26, 2024 00:52 |
Matt Zerella posted:I think it sucks that you feel you need to quarantine those posts to that thread when they’d be perfectly fine here.
|
|
# ? Feb 26, 2024 00:55 |
cruft posted:NextCloud is probably the best candidate, but I refuse to run it for multiple reasons, not the least of which is that I can't trust PHP with my tax records. What's the attack vector you are worried about?
|
|
# ? Feb 26, 2024 02:27 |
|
Matt Zerella posted:I think it sucks that you feel you need to quarantine those posts to that thread when they’d be perfectly fine here. I was going to take it down anyway, it was too much. Glad you enjoyed the in-depth crap! fletcher posted:What's the attack vector you are worried about? I do security research, and let's just say PHP (the interpreter) has not impressed me. For instance, with NextCloud, in order to prevent Apache from serving up your configuration file, complete with database password etc, you have to provide a path exclusion in .htaccess. That's not a NextCloud architecture issue, per se: it's a design PHP encourages. To answer your specific question: I worry about script kiddies exploiting one of the abundant opportunities for security vulnerabilities to creep into the code due to architectural decisions made years ago by PHP developers. A lot of this would be mitigated by disallowing all access until authenticated, but hiding the whole application behind a gateway would limit functionality. cruft fucked around with this message at 03:15 on Feb 26, 2024 |
# ? Feb 26, 2024 03:11 |
|
I use nextcloud and it's fine (yes, could be better), but I VPN in to access everything at home, so I'm less concerned security-wise.
|
# ? Feb 26, 2024 03:25 |
|
cruft posted:Okay, I put it in the hard mode thread. https://forums.somethingawful.com/showthread.php?threadid=4051802&pagenumber=1#post538003958 Matt Zerella posted:I think it sucks that you feel you need to quarantine those posts to that thread when they’d be perfectly fine here. Yeah just to be clear you absolutely do not need to do this. It might make sense to put a little disclaimer/FYI at the top of your "hard mode" posts, but that's more just avoiding unnecessary misunderstanding than anything.
|
# ? Feb 26, 2024 05:51 |
|
|
I went through a similar journey and tried many of the same things. I'm also running on low-power hardware, a Pi4 with 4GB and a USB HDD (though my desktop can also run services if needed, via Nomad), and Nextcloud was getting too clunky.

I eventually ended up with rclone running "serve sftp" in a `restrict,command` SSH key, so it's not even running when a client isn't connected, and my clients don't have unrestricted shell access. On the desktop, I browse files with KDE Dolphin and sync with rclone. On Android, I do both with RoundSync (which is literally rclone in app form).

I use PiGallery2 for photo and video albums; it's indeed as blisteringly fast as it claims. Since it's just a webapp/PWA, I added basic auth with Caddy so I don't need to trust the authors' security. It lacks any ML-based autotagging capabilities like Immich etc., but my desktop has an LLM-capable GPU and it's a task I would like to eventually automate separately that way.

edit: I didn't stick with it for reasons I now can't remember, but you may want to take a look at KaraDAV. It's fast and puts real effort into being compatible with Nextcloud client apps. Main downside is it's a small project by a small company, so you may want to be careful with security.

NihilCredo fucked around with this message at 10:26 on Feb 26, 2024 |
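The restrict,command authorized_keys line looks roughly like this (rclone documents `serve sftp --stdio` for exactly this use); the key material, comment, and served path below are all fake:

```shell
# Sketch of binding an SSH key to a single rclone command (fake key and path).
# A client connecting with this key gets rclone's SFTP server and nothing else:
# "restrict" disables forwarding/pty, "command=" overrides whatever they ask to run.
cat > /tmp/authorized_keys.example <<'EOF'
restrict,command="rclone serve sftp --stdio /srv/files" ssh-ed25519 AAAAC3Nza...fakekey sync-client
EOF
cat /tmp/authorized_keys.example
```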
# ? Feb 26, 2024 10:16 |