BlankSystemDaemon
Mar 13, 2009



Zapf Dingbat posted:

So I got the Cloudflare proxy set up, and I was running into trouble with the certificate. Before Cloudflare, I had:

Internet -> Home Router -> Nginx reverse proxy -> Nextcloud

The Let's Encrypt cert sat on Nginx. Now Cloudflare has the cert, and that works for external access. But when I come home, I understandably get certificate errors. What can I do for LAN access? Can I have two certs?

Since you already have a Let's Encrypt cert, why not hide your services behind NGINX reverse proxying with HTTP basic auth?

You don't need a VPN or mesh client, and you can add sites to your bookmarks with the authentication information included, i.e. https://user:password@example.org
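A minimal sketch of what that vhost might look like (the hostname, cert paths, and upstream address are placeholders; note that basic auth in front of Nextcloud's own login can confuse its sync clients):

```nginx
# Hypothetical NGINX vhost: TLS + HTTP basic auth in front of an internal service.
server {
    listen 443 ssl;
    server_name cloud.example.org;

    ssl_certificate     /etc/letsencrypt/live/example.org/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/example.org/privkey.pem;

    auth_basic           "Private";
    auth_basic_user_file /etc/nginx/.htpasswd;  # create with: htpasswd -c /etc/nginx/.htpasswd user

    location / {
        proxy_pass       http://192.168.1.50:8080;  # assumed LAN address of the service
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```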


Zapf Dingbat
Jan 9, 2001


I guess I'm paranoid about my residential IP being exposed.

fletcher
Jun 27, 2003

ken park is my favorite movie

Cybernetic Crumb

CopperHound posted:

I think the thing you might miss is just how much google integrates sharing of everything with one sign on.

It is possible to use a single sign on front end with nextcloud, but photoprism is not multiuser and only supports link sharing.

The SSO stuff is hard for me to comprehend, but I did get allauth working for nextcloud.

e: I take that back it was "Authentik"

It is really handy being able to share certain Google Docs, Sheets, and Photos with friends easily, since pretty much everybody uses gmail. That experience seems very very hard to replicate with a secure self-hosted option. I think you are right about that probably being the thing that I will miss most.

I still want to play around with PhotoPrism to see how it goes. I still can't decide where to run it though. The two options I'm considering:
1. On my colocated server, which runs websites & game servers that are publicly accessible to the web. Though I take every reasonable measure I can to secure it, there's still a (very unlikely) chance of getting pwned by some 0day.
2. On my NAS in a VM (install & maintenance in a jail seems too cumbersome). I think I prefer having the NAS just be a NAS, though, and not doing other things.

All my photos would exist in a minimum of three places no matter what (desktop computer, NAS, and Backblaze w/ append-only keys). My phone would Syncthing the camera roll to my NAS over VPN.

I've been considering moving the colo server back home since I now have a 3 Gbps fiber connection that also came with a separate 1 Gbps connection (both are symmetrical). The connections use slightly different IP addresses, but I'm still wary of exposing my home IP address to the internet, which seems impossible to avoid when game servers are involved, unlike the Cloudflare discussion earlier for HTTPS stuff.

So maybe I keep the colo server running public stuff, and get a separate server for at home running all my "personal" apps like subsonic, photoprism, etc which would use the 1 Gbps connection. The colo server is a loud little 1U anyways, so maybe I could go with a quieter 4U build for the home rack. I think I've talked myself into that writing this post...but one of my justifications for splurging on this Gigabit Pro connection at home was that I'd save the $80/mo I was spending on the colo :(

drat you Google for making your ecosystem so sticky!

fletcher fucked around with this message at 08:13 on Jun 7, 2022

Aware
Nov 18, 2003
Would an effective auth mechanism for that be 2FA codes via email? Eg. You share a link to a friend using their email as a key, when they open it it emails them a one time code to access. Maybe a little annoying for subsequent visits but perhaps better than them having to setup an account or you doing oauth fuckery.

Is this even a thing?
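It's easy enough to build yourself, at least the token half. A rough sketch (the secret, code length, and ten-minute window are arbitrary choices, not from any particular product):

```python
import hashlib
import hmac
import time

SECRET = b"server-side-secret"  # assumption: random key kept on the server


def issue_code(email: str, link_id: str, window: int = 600) -> str:
    """Derive a short one-time code bound to an email, a shared link, and a time window."""
    bucket = int(time.time()) // window
    msg = f"{email}|{link_id}|{bucket}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:8]


def verify_code(email: str, link_id: str, code: str, window: int = 600) -> bool:
    """Accept the current window and the previous one, so codes don't expire mid-email."""
    bucket = int(time.time()) // window
    for b in (bucket, bucket - 1):
        msg = f"{email}|{link_id}|{b}".encode()
        expected = hmac.new(SECRET, msg, hashlib.sha256).hexdigest()[:8]
        if hmac.compare_digest(expected, code):
            return True
    return False
```

Nothing gets stored server-side per visitor; the code only verifies for the email address it was issued to.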

BlankSystemDaemon
Mar 13, 2009



Zapf Dingbat posted:

I guess I'm paranoid about my residential IP being exposed.

It's part of /0, and it's been trivial to port-scan the entire /0 (i.e. the whole IPv4 address space) since 2012.

If your network isn't properly firewalled, it's already too late - and if it is, having your IP be associated with you in particular isn't going to hurt unless you're the target of spear phishing or other attacks of that nature where a determined attacker will use any means to get at you in particular.

Tamba
Apr 5, 2010

It's more likely for your IP to be found by scans that simply sweep the entire address space than for it to be discovered via some DNS entry.

Pantsmaster Bill
May 7, 2007

Are there any self-hosted options for a daily journal/log? I’m currently using Day One but would ideally like to move to something I control myself. Bonus points for an API I can use for integrating other social media stuff (or baked in support).

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Pantsmaster Bill posted:

Are there any self-hosted options for a daily journal/log? I’m currently using Day One but would ideally like to move to something I control myself. Bonus points for an API I can use for integrating other social media stuff (or baked in support).

Quite a few, but for note-taking apps you really need to try them out and see which one fits you best. Some people write a lot, some people collect a lot of media, others need very quick tagging, etc. Some people use a phone, others a tablet, others a desktop.

Some options: Obsidian, DailyNotes, StandardNotes, Trilium

You could also run a Pleroma (Mastodon-compatible) instance and keep it private (either the whole instance, or just all 'posts' by default); this gives you the option to publish things later when you want.

Tamba
Apr 5, 2010

Pantsmaster Bill posted:

Are there any self-hosted options for a daily journal/log? I’m currently using Day One but would ideally like to move to something I control myself. Bonus points for an API I can use for integrating other social media stuff (or baked in support).

https://github.com/zadam/trilium
Trilium Notes, maybe? Or do you want some kind of blog that's accessible by other people?

Corb3t
Jun 7, 2003

I'm obsessed with Obsidian, but it's definitely something you have to develop your own system for. I never bothered learning Markdown before and now I get frustrated whenever I try to edit text on a website or app and it doesn't support it. I've been slowly compiling all the important notes and resources related to various nerd-related topics - work productivity, server management (Unraid), marijuana cultivation, homebrewing, and a personal blog where I keep lots of random pieces of information.

I can't believe a PMS app is what it's going to take to finally put some of my HTML/CSS education into practice.

If you're on iOS, you could always join the Day One TestFlight and get free IAPs - no annual sub needed! https://testflight.apple.com/join/NXLBigzY

Corb3t fucked around with this message at 18:46 on Jun 8, 2022

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer
Two questions: does anyone run a gopher server, and more seriously, how hard is it to set up an IRC server?

HamAdams
Jun 29, 2018

yospos

Smashing Link posted:

Two questions: does anyone run a gopher server, and more seriously, how hard is it to set up an IRC server?

running a small irc server with services and all is pretty trivial using ergochat

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

HamAdams posted:

running a small irc server with services and all is pretty trivial using ergochat

Thanks, that looks pretty easy to set up.

Comatoast
Aug 1, 2003

by Fluffdaddy
I use Joplin for note taking, to do lists, and random journal entries. It replaced Evernote for me. Not perfect, but totally adequate.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

Smashing Link posted:

Two questions: does anyone run a gopher server, and more seriously, how hard is it to set up an IRC server?

Gopher is very dead, and modern httpds are better and more capable than it was in every way -- except the ways which would now be considered monstrously insecure (Gopher clients were also telnet clients, and you could make links that let people log in to remote servers by clicking)

corgski
Feb 6, 2007

Silly goose, you're here forever.

If you want to run a gopher server it's a simple protocol and there's a lot of little pet projects implementing it with varying levels of competency.

Just go to wikipedia's gopher page and pick a server from there that looks interesting. You'll probably have to build it from source and write the init script for it yourself but if you're doing retrotech stuff you should already have some experience doing that.

This one looks interesting: http://gophernicus.org/

BlankSystemDaemon
Mar 13, 2009



Some people have made something called Gemini, which is supposed to be a modern-day replacement for Gopher.

It fixes the main issue with gopher, in that it mandates some form of TLS. I'd like to see it also be ratified as an actual standard that one can build services against, instead of being constantly moving and changing.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

mdxi posted:

Gopher is very dead, and modern httpds are better and more capable than it was in every way -- except the ways which would now be considered monstrously insecure (Gopher clients were also telnet clients, and you could make links that let people log in to remote servers by clicking)

I was wondering if Gopher is so dead that it's become more secure via security through obscurity. I just find it interesting as an alternative to HTTP.

BlankSystemDaemon
Mar 13, 2009



There's no such thing as security through obscurity on the modern internet.

Darwin_
Oct 4, 2021

BlankSystemDaemon posted:

There's no such thing as security through obscurity on the modern internet.

Honeypots work

e: now thinking about it, they may not be security through obscurity.

Darwin_ fucked around with this message at 21:47 on Jun 10, 2022

CommieGIR
Aug 22, 2006

The blue glow is a feature, not a bug


Pillbug

Darwin_ posted:

Honeypots work

e: now thinking about it, they may not be security through obscurity.

Honeypots are most certainly not security through obscurity. In fact, they're now pivotal for identifying attackers inside a network, acting as canaries.

corgski
Feb 6, 2007

Silly goose, you're here forever.

Smashing Link posted:

I was wondering if Gopher is so dead that it's become more secure via security through obscurity. I just find it interesting as an alternative to HTTP.

As a retrotech thing it's a cute toy with some utility for older devices that can't do encrypted communication. It's not a hazard on the modern internet like an open SMTP relay would be.

Individual servers can be secure in the sense that they can be written to modern standards with no known vulnerabilities in the server daemon they're running, but that's not security through obscurity, it's just good code.

And of course it's unencrypted communication so don't ever treat it like a secure communication channel even if the server software is secure.

Smashing Link
Jul 8, 2003

I'll keep chucking bombs at you til you fall off that ledge!
Grimey Drawer

corgski posted:

As a retrotech thing it's a cute toy with some utility for older devices that can't do encrypted communication. It's not a hazard on the modern internet like an open SMTP relay would be.

Individual servers can be secure in the sense that they can be written to modern standards with no known vulnerabilities in the server daemon they're running, but that's not security through obscurity, it's just good code.

And of course it's unencrypted communication so don't ever treat it like a secure communication channel even if the server software is secure.

That is interesting. So it's not inherently an insecure protocol.

corgski
Feb 6, 2007

Silly goose, you're here forever.

Generally when people talk about "secure" or "insecure" protocols they mean the presence of encryption and authentication. Gopher has none. Don't transmit sensitive data over gopher. Read fediverse posts on pleroma instances through gopher, run your blog as text files hosted on a gopher server, write a proxy that serves up weather forecasts from the national weather server over gopher. There are use cases for insecure protocols as long as you know that they are insecure.
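It really is that simple a protocol, too: a client opens a TCP connection (port 70 by default), sends a selector string terminated by CRLF, and reads until the server closes the connection. A minimal sketch of the client side:

```python
import socket


def gopher_fetch(host: str, selector: str = "", port: int = 70, timeout: float = 5.0) -> bytes:
    """Minimal Gopher request: connect, send selector + CRLF, read until EOF."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        s.sendall(selector.encode("ascii") + b"\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:  # server closing the connection ends the response
                break
            chunks.append(data)
    return b"".join(chunks)
```

An empty selector fetches the root menu; there's no encryption, no headers, no authentication, which is the whole point of the discussion above.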

BlankSystemDaemon
Mar 13, 2009



corgski posted:

Generally when people talk about "secure" or "insecure" protocols they mean the presence of encryption and authentication. Gopher has none. Don't transmit sensitive data over gopher. Read fediverse posts on pleroma instances through gopher, run your blog as text files hosted on a gopher server, write a proxy that serves up weather forecasts from the national weather server over gopher. There are use cases for insecure protocols as long as you know that they are insecure.

On the one hand, you can absolutely do non-sensitive stuff in plaintext - but on the other hand, you can get wildcard TLS certificates for free now.
The only requirements are that you automate it (which you should be doing anyway) and that you monitor the automation (but you've got monitoring already, right?), so it's not the big lift it used to be.

corgski
Feb 6, 2007

Silly goose, you're here forever.

Absolutely, letsencrypt makes it stupid easy to add TLS to everything that can possibly support it, so the only real reason not to is if you're supporting pre-TLS devices for fun or profit. (And hopefully not for profit in tyool 2022.)

Although speaking of pre-TLS devices, I'm trying to use an older version of OpenSSL (as in one that supports and was specifically built with SSL 3 support) to create a root CA for my home network that Netscape 3 or 4 will actually be able to use and there is an absolute dearth of documentation regarding what pre-Mozilla Foundation Netscape needs to see to not declare the certificate corrupt or invalid.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

I have a project that I plan to start in the next 4-6 months, and a prerequisite for it was standing up an in-house, S3-compatible, distributed filesystem cluster. Which isn't as overkill as it might initially sound, because I did this professionally for five years.

Long story short, I got 3 tiny refurb machines, gave them more memory, added a 2TB NVMe SSD to each, and have now stood up a tiny Ceph cluster on them. There's a few pics and screenshots over here.

NB: the cluster reports raw capacity, because as an administrator that's what you're concerned with (the effective capacity depends on the replication rules in effect, which might not be as straightforward as you'd expect at a glance). I'm using the standard three copies of every object (the original plus two replicas), so my effective capacity is... 1.8TB, the same as the usable capacity of an individual SSD. I do have expansion plans which will move beyond this slightly silly situation of one storage device per machine, but right now they're a bit indeterminate and this is plenty to get started with.
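The capacity arithmetic is simple enough to sketch (sizes in TB; the gap between nominal 2.0 and usable 1.8 is just drive formatting, not replication):

```python
def effective_capacity(device_sizes_tb, copies_per_object=3):
    """Effective capacity of a fully replicated cluster: raw space
    divided by the total number of copies kept of each object."""
    raw = sum(device_sizes_tb)
    return raw / copies_per_object


# Three nodes, one 2TB SSD each, three copies of every object:
print(effective_capacity([2.0, 2.0, 2.0]))  # 2.0 -- i.e. one drive's worth
```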

I'm managing the machines with the same ansible + custom tooling stack that I use to manage my compute nodes. For now, I'm managing ceph with the standard ceph tooling, which is significantly improved since 2018 when I stepped away from doing this as my day job.

I love this stuff, so I could talk at great length about all the details, but I'll just stop here. Happy to answer questions here or in the project.log thread.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

mdxi posted:

I love this stuff, so I could talk at great length about all the details, but I'll just stop here. Happy to answer questions here or in the project.log thread.

Yes! Did you evaluate MinIO vs. Ceph, and if so, what made you choose Ceph? You mention Ansible, so I'm guessing you used that to set up Ceph and weren't interested in installing a container system - was there anything else?

Also, are you aware of Garage? I would be hesitant to use it in a professional setting, but for (crazy nerd) home storage it looked attractive.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

NihilCredo posted:

Did you evaluate MinIO vs. Ceph, and if so, what made you choose Ceph?

I chose ceph because I got paid to manage a ceph cluster for 5 years and have a lot of experience with it. I know exactly how durable it really is (incredibly) from mishaps we had with those systems, and I know how far it can scale from talking to the people at CERN who use it to back the LHC. Not that I'm going to be approaching that kind of scale or that level of performance -- my use case is "preservation/cold storage" -- but I know that it's possible.

I hadn't thought about minio in a long time, so I just went and refreshed myself on it a bit. The pitch there still seems to be "what if RAID, but across multiple machines?" I still don't find this compelling. I'm already acclimated to achieving durability through full replication and don't balk at the hardware requirements that entails, but this is obviously a personal choice rather than a hard technical consideration.

I do find it curious that I didn't find anything in their docs (though of course I could have just missed it) about tuning their replication to be rack and/or row aware. This isn't important if you just have some data on some machines, and you're also replicating out to some cloud for extra durability. But if I'm professionally managing a cluster, running on real hardware and storing real business/customer data, then I want to be able to take networking and power domains into account for durability and availability planning.

I also found myself dinging them for talking about being more performant than a traditional filesystem, when they require underlying storage to be formatted with XFS, ZFS, ext, or (comedy option) btrfs. This used to be true of Ceph as well, but since Bluestore rolled out in 2018 storage devices have been directly managed by the OSD (object storage daemon) with no intermediary traditional filesystem. I also also also noted that their docs said that they were free of the overhead of a metadata server -- I can't speak for all distributed filesystems, but the MDS only comes into play with ceph if you turn on cephfs, the optional POSIX-compliant "looks like NFS" layer. There were 2 or 3 more things like that. Meh.

I'm very much aware that my extensive experience with Ceph means that I have a deep bias here, so I'm not trying to convince anyone else that they should be using it. My "trying to tell other people the right way to do things" days are far behind me anyhow. Unless you're paying me for that, of course.

quote:

You mention Ansible, so I'm guessing you used that to set up Ceph and weren't interested in installing a container system

No, I have a homebrew setup, largely leveraging ansible, but also with some python and bash scripting, which I use to manage my BOINC compute nodes, including the bare-metal provisioning. I boot Arch linux from USB, fetch a script from my control node with curl, and then about 5 minutes later I have a new machine stood up and ready to go. Tweaking this to configure machines for use as storage nodes rather than compute nodes was a matter of switching the ansible playbook that runs after the scripted install is done, to install docker and a few other dependencies.

For now, that's as far as ansible goes with the storage nodes. I stood up the first node with the `cephadm` tool (which didn't exist when I was doing this professionally, and is much nicer than doing all the initial configuration by hand), and it uses docker by default, so services are containerized. Also new these days is that after you've got it stood up on one node, ceph does its own orchestration of distributing daemonsets (again, via docker) based on settings, cluster size, and tags that you give machines. So for now, ansible is managing the storage machines, and ceph is managing itself.
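The storage-node tweak really is just a playbook swap; a hypothetical sketch of the extra tasks (task names and package list invented for illustration, not my actual playbook):

```yaml
# Sketch: turn a freshly provisioned compute node into a storage node
- hosts: storage_nodes
  become: true
  tasks:
    - name: Install container runtime and ceph prerequisites
      package:
        name: [docker, lvm2]
        state: present

    - name: Enable and start docker
      service:
        name: docker
        state: started
        enabled: true
```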

quote:

Also, are you aware of Garage? I would be hesitant to use it in a professional settings, but for (crazy nerd) home storage it looked attractive.

I was not. I agree with you, and I would argue strenuously against putting it into production. Their docs make MinIO's look like a peer-reviewed research paper.

Edit: typo

mdxi fucked around with this message at 19:12 on Jun 20, 2022

BlankSystemDaemon
Mar 13, 2009



When you say ceph, I'm assuming you mean object/key+value storage on top of ceph? It's loving fantastic for that, basically the only game in town if you're not paying a company to handle it in the butt.

As far as I know, cephfs (the actual POSIX filesystem shim that's on top of the object storage) is still an utter shitshow though - in that it might eat your data at any point, and it's even harder to recover than ZFS.

mdxi
Mar 13, 2006

to JERK OFF is to be close to GOD... only with SPURTING

BlankSystemDaemon posted:

When you say ceph, I'm assuming you mean object/key+value storage on top of ceph?

Yep; as I said, S3-compatible object storage. There's also Swift compatibility (which works fine), but it's a pain because you can't have one user that's allowed to talk both S3 and Swift. Also nearly no one gives a gently caress about it anymore because S3 won that war quite some time ago.

I'm starting to feel bad about talking more about the moving parts without explaining anything for a general audience, so here's a Ceph crash course for anyone who is finding this interesting, but perhaps bewildering:
  • Ceph stores everything as objects-on-storage.
  • This core storage architecture is called RADOS (Reliable Autonomic Distributed Object Store) and is implemented in the librados library.
  • Daemons called OSDs (object storage daemons) directly manage individual devices, one OSD per device.
  • Then there are monitor and manager daemons handling cluster-level concerns (like replication, and the OSDs)
  • To get stuff in or out you add one or more access layers to the cluster, which are provided by additional daemons
    • For S3 and/or Swift object API, you add an RGW (RADOS Gateway), which is exactly what it sounds like: an HTTP interface that behaves like S3, and talks to librados on the backend
    • For block storage, you add RBD (RADOS Block Device) support. This layer is also well-tested and (is? used to be?) popular in OpenStack
    • For something that acts like an NFS-mounted POSIX "filesystem", you use CephFS. As BlankSystemDaemon noted, this has long been on the wonky side. When I was doing this for a living, it was actively warned against being used in production. I don't know what its current state is
  • Any instance of any of those layers will create a storage pool within the cluster and manage it. Don't think of pools as set-aside storage space (they're not); think of them as configuration and management domains (they are)
  • This lets you have multiple API layers, and in fact multiple copies of any given API layer running on the same cluster
  • Pools contain placement groups (PGs), which are extents of data. They're convenient, abstract, logical cubby-holes that objects get grouped together in. Unless something is going wrong you don't need to think much about PGs
  • ... except to know that PGs are the actual replication domain, not individual objects

And so a request lifecycle looks kinda like:

  • Request hits the API
  • The API figures out what is desired (let's say fetching an object)
  • API says to librados: "Hey, I have user X with credentials Y requesting a copy of the object with id Z"
  • librados says "Hmmm, that API is managing pool B... and the object is in PG j... whose replicaset is stored on OSDs k, m, and n... OSD m is the primary, and it is both physically up and logically in the cluster. Hey OSD m, give me object Z from PG j"
  • OSD m fetches and the data flows back out as you would expect

So that's the basics. If you want to get into the really fun poo poo, you can look at CRUSH maps (manual OSD/PG allocation tuning), and the consensus algorithm which drives everything, and cluster federation. There's some deep stuff going on under the hood.

But these days it's pretty easy to stand up, and I have difficulty imagining that any personal setup would need anything other than the defaults (which are very reasonable), so long as you haven't done something bizarre like use devices of wildly varying capacities.
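To make the placement idea concrete, here's a toy model of the object -> PG -> OSD mapping. This is deliberately simplified; real Ceph uses the CRUSH algorithm over cluster maps, not a modulo over a static OSD list:

```python
import hashlib


def pg_for_object(object_id: str, pg_num: int) -> int:
    """Hash an object id into a placement group: stable and roughly uniform."""
    digest = hashlib.sha256(object_id.encode()).hexdigest()
    return int(digest, 16) % pg_num


def osds_for_pg(pg: int, osd_ids: list, replicas: int = 3) -> list:
    """Toy stand-in for CRUSH: pick `replicas` distinct OSDs deterministically."""
    n = len(osd_ids)
    return [osd_ids[(pg + i) % n] for i in range(min(replicas, n))]


pg = pg_for_object("bucket/photo.jpg", 128)
primary, *secondaries = osds_for_pg(pg, osd_ids=[0, 1, 2, 3, 4])
# Every client computes the same mapping, so reads go straight to the primary OSD
# with no central lookup table -- that's the property that makes it all scale.
```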

mdxi fucked around with this message at 08:12 on Jun 21, 2022

BlankSystemDaemon
Mar 13, 2009



Interestingly enough, a lot of copy-on-write transactional filesystem+LVM combinations (ZFS on multiple OSes, BTRFS on Linux, APFS on macOS, and ReFS on Windows) use the same kind of object-on-storage model.

You can probably imagine that it makes sense to expose these objects directly to a cluster storage solution - and that's exactly what LLNL (and other entities doing big storage on clustering) do, with Lustre+ZFS.
I wish OpenZFS did it, and that there was a kernel-space OSD for Ceph :sigh:

BlankSystemDaemon fucked around with this message at 14:45 on Jun 21, 2022

Gubbinal Girl
Apr 11, 2022


mdxi posted:

...

But these days it's pretty easy to stand up, and I have difficulty imagining that any personal setup would need anything other than the defaults (which are very reasonable), so long as you haven't done something bizarre like use devices of wildly varying capacities.

To piggyback on this, if you already have a Kubernetes cluster going you can use Rook to deploy Ceph. I found it reasonably straightforward to get working with no prior experience, especially considering the janky set up I was using.

Scruff McGruff
Feb 13, 2007

Jesus, kid, you're almost a detective. All you need now is a gun, a gut, and three ex-wives.
My extended family sent out an email asking about a good low/no-cost solution for creating a sort of family photo repository - basically a place where family can share photos with each other. After watching them all fight about the standard cloud hosts, I'm at the point where I'm willing to just host it myself on my home server. I was planning on just using Nextcloud, but I wanted to know if there were other options I should consider.

Requirements:
-Easy user management/account creation/permissions
-UI/UX that is non-tech/old people friendly

Nice-to-haves
-Ability to upload multiple photos all at once
-Desktop/Phone client
-Ability to comment on photos

Warbird
May 23, 2012

America's Favorite Dumbass

Synology’s Photo app can field most of that, but it assumes you’re in their ecosystem already. Maybe Photoprism?

Mr Shiny Pants
Nov 12, 2012

mdxi posted:

Yep; as I said, S3-compatible object storage. There's also Swift compatibility (which works fine), but it's a pain because you can't have one user that's allowed to talk both S3 and Swift. Also nearly no one gives a gently caress about it anymore because S3 won that war quite some time ago.

...

But these days it's pretty easy to stand up, and I have difficulty imagining that any personal setup would need anything other than the defaults (which are very reasonable), so long as you haven't done something bizarre like use devices of wildly varying capacities.

Thanks. I'm always fascinated by Ceph; it seems really well thought out.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Does anyone know of a ready-made Linux image (or BSD, I guess) for a minimal bastion server? Basically just WireGuard + firewall + redirect all incoming requests to $real_server (except for the SSH administration port). Nothing else, minimal attack surface.

My current ISP offers free static IPs, so I've just been rawdogging my home connection for a while (with a DMZ, of course), but I'm moving abroad in a couple of months and I probably won't have that anymore. Hopefully I can find an ISP that provides IPv6, but if not, getting a $4/mo Hetzner VPS as a bastion server looks like the next best option. Managed bastion services also exist, but they appear to be oriented toward enterprises, at least in terms of prices (e.g. Azure Bastion is $136/mo plus egress bandwidth).

(To be clear, I don't want to use Tailscale or other VPN services to connect - I want to keep my server publicly accessible. I'm sufficiently confident of the security of my Vaultwarden and Nextcloud installs, and some other services are intended to be shared with other people.)
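If nothing ready-made turns up, hand-rolling it looks like roughly one WireGuard config plus a few firewall rules on the VPS. A sketch, with all keys and addresses as placeholders:

```ini
# /etc/wireguard/wg0.conf on the VPS (requires net.ipv4.ip_forward=1)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>
# Forward public 443 across the tunnel to the home server
PostUp   = iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2
PostUp   = iptables -t nat -A POSTROUTING -o wg0 -j MASQUERADE
PostDown = iptables -t nat -D PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.0.0.2
PostDown = iptables -t nat -D POSTROUTING -o wg0 -j MASQUERADE

[Peer]
# Home server
PublicKey = <home-server-public-key>
AllowedIPs = 10.0.0.2/32
```

(Masquerading like this hides the real client IPs from the home server; keeping them would mean policy routing on the home end instead.)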

Mr Crucial
Oct 28, 2005
What's new pussycat?

NihilCredo posted:

(To be clear, I don't want to use Tailscale or other VPN services to connect - I want to keep my server publicly accessible. I'm sufficiently confident of the security of my Vaultwarden and Nextcloud installs, and some other services are intended to be shared with other people.)

Does it have to be a VPS bastion host? I use Cloudflare tunnels to expose some of my services including Vaultwarden to the internet. In addition to the CDN stuff you don’t need to have a fixed IP (everything is established from an agent in your environment via outbound HTTPS connections) plus you get the benefit of Cloudflare’s WAF and intrusion prevention technologies as extra protection. No need to open any ports at all on your end of the connection, and no need for port forwarding. It’s also free.

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Mr Crucial posted:

Does it have to be a VPS bastion host? I use Cloudflare tunnels to expose some of my services including Vaultwarden to the internet. In addition to the CDN stuff you don’t need to have a fixed IP (everything is established from an agent in your environment via outbound HTTPS connections) plus you get the benefit of Cloudflare’s WAF and intrusion prevention technologies as extra protection. No need to open any ports at all on your end of the connection, and no need for port forwarding. It’s also free.

AFAIK Cloudflare tunnels always perform TLS termination, which I don't want. It might be irrational paranoia, but I don't want someone else's machine decrypting my traffic. I might as well run a cloud server in that case.


SEKCobra
Feb 28, 2011

Hi
:saddowns: Don't look at my site :saddowns:

NihilCredo posted:

AFAIK Cloudflare tunnels always perform TLS termination, which I don't want. It might be irrational paranoia, but I don't want someone else's machine decrypting my traffic. I might as well run a cloud server in that case.

I believe you can absolutely run TLS inside the tunnels; normal web protection does terminate at their firewall.
