DerekSmartymans
Feb 14, 2005

The
Copacetic
Ascetic


RFC2324 posted:

Like the post above me said, it's fine for bash. It's superficially identical to a minimal install of the distro you're installing; the only differences are internal stuff. Some syscalls don't work, for instance, which breaks some tools (mtr doesn't work on the openSUSE implementation, for example)

Thanks, folks. I'm not completely turned on enough to declare Linux4Lyfe yet (I had a dual-boot Ubuntu 12.xx a long time ago), but these minimal installs seem perfect. I didn't really have the bandwidth to constantly download new packages or programs' patches, and got burned one time downloading Ubuntu Kylin overnight: it ate like 3/4 of my metered connection before I realized it was in Chinese. I haven't used it since then 🤪. I have heard that Linux is fine for gaming these days, and most of my software has Linux versions, so I thought to give it a fresh look! Thanks again!


Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!



Furism posted:

Not sure if that's the most appropriate thread for this but here goes.

I'm setting up a new VPS and I want it to run most services in containers (probably Docker). I'm going to be running .NET Core web applications, probably a WordPress too, that sort of thing. I usually put all of this behind an nginx reverse proxy. But I don't trust the security of containers - sometimes the dependencies are not updated with the latest security patches. So I want to run an IPS - especially because of WordPress. Are there any nginx IPS modules? Or am I doomed to use Apache? I find nginx easier to configure and just lighter.

I'm a network engineer, so usually the IPSes I deal with are part of some UTM or NGFW, but this is for personal use so I don't have that option.

Does Docker have problems with applications escaping containers? I'm asking because I don't know.

xzzy
Mar 5, 2009



I think most of the exploits revolve around running containers with too many privileges, but yes, it can happen.

And even if they can't escape, if they exploit the service inside the container and turn it into part of a botnet or something, it doesn't really matter: you still got owned (and any persistent storage mounted in the container is too, so your server is effectively garbage now).
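To make the "too many privileges" point concrete, here's a hardening sketch; the image name and uid are placeholders, and whether your service tolerates these restrictions is another matter:

```shell
# Run a container with most privilege-escalation routes closed off:
# read-only root filesystem, no Linux capabilities, no setuid
# escalation, and a non-root user inside. "myimage" is a placeholder.
docker run -d \
    --read-only --tmpfs /tmp \
    --cap-drop=ALL \
    --security-opt no-new-privileges \
    --user 1000:1000 \
    myimage
```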

DerekSmartymans
Feb 14, 2005

The
Copacetic
Ascetic


RFC2324 posted:

this is how I ended up getting paid as a computer toucher, so be careful

Actions have consequences 🤩!

Mr. Crow
May 22, 2008

Snap City mayor for life


Furism posted:

Not sure if that's the most appropriate thread for this but here goes.

I'm setting up a new VPS and I want it to run most services in containers (probably Docker). I'm going to be running .NET Core web applications, probably a WordPress too, that sort of thing. I usually put all of this behind an nginx reverse proxy. But I don't trust the security of containers - sometimes the dependencies are not updated with the latest security patches. So I want to run an IPS - especially because of WordPress. Are there any nginx IPS modules? Or am I doomed to use Apache? I find nginx easier to configure and just lighter.

I'm a network engineer, so usually the IPSes I deal with are part of some UTM or NGFW, but this is for personal use so I don't have that option.

Containers are not a magical tool; if anything, they're a magical tool for shooting yourself in the foot. They're great when used properly, but if you don't know anything about them I wouldn't throw them onto a public VPS unless you just don't give a poo poo about anything on it. You're probably better off just running everything off the host and using SELinux.

For starters, containers are notorious for having out-of-date software. Your host package manager makes it easy to keep up to date; hiding everything in a container means you now have to have a plan for updating your containers' packages as well.

Absolutely play around with them and absolutely figure out how to harden them, but don't make your first foray into them something open to the internet that you care about.
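One way to keep on top of that, sketched with plain docker commands (you still have to recreate the containers from the refreshed images afterwards):

```shell
# Re-pull every locally tagged image so rebuilt containers pick up
# patched base layers; dangling <none> images are skipped.
docker images --format '{{.Repository}}:{{.Tag}}' \
    | grep -v '<none>' \
    | xargs -r -L1 docker pull
```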

BlankSystemDaemon
Mar 13, 2009

System Access Node Not Found



Bob Morales posted:

Does Docker have problems with applications escaping containers? I'm asking because I don't know.
Apparently my post was eaten, so I have to try again?

Docker is for orchestration, not isolation. The creators readily point this out, but few people seem to have picked up on it.

There are things which I've heard are supposed to add isolation, but I've yet to see any of them in production at scale, and conversely, whenever docker (or kubernetes) is run in production at scale, it's typically with KVM, Xen, ESXi or bhyve providing the isolation.

Furism
Feb 21, 2006

Live long and headbang


Mr. Crow posted:

Containers are not a magical tool; if anything, they're a magical tool for shooting yourself in the foot. They're great when used properly, but if you don't know anything about them I wouldn't throw them onto a public VPS unless you just don't give a poo poo about anything on it. You're probably better off just running everything off the host and using SELinux.

For starters, containers are notorious for having out-of-date software. Your host package manager makes it easy to keep up to date; hiding everything in a container means you now have to have a plan for updating your containers' packages as well.

Absolutely play around with them and absolutely figure out how to harden them, but don't make your first foray into them something open to the internet that you care about.

Oh I've played with Docker in the past, and I'm aware of the security concerns around containers - that's why I want an IPS in the first place. I also intend to use only official images because those are scanned for vulnerabilities on a regular basis.

BlankSystemDaemon
Mar 13, 2009

System Access Node Not Found



What's an IPS going to do to help isolate containers?

Bob Morales
Aug 18, 2006


Just wear the fucking mask, Bob

I don't care how many people I probably infected with COVID-19 while refusing to wear a mask, my comfort is far more important than the health and safety of everyone around me!



BlankSystemDaemon posted:

What's an IPS going to do to help isolate containers?

Assuming it will detect and block malicious traffic?

Nitrousoxide
May 30, 2011

do not buy a oneplus phone





BlankSystemDaemon posted:

Apparently my post was eaten, so I have to try again?

Docker is for orchestration, not isolation. The creators readily point this out, but few people seem to have picked up on it.

There are things which I've heard are supposed to add isolation, but I've yet to see any of them in production at scale, and conversely, whenever docker (or kubernetes) is run in production at scale, it's typically with KVM, Xen, ESXi or bhyve providing the isolation.

What about Podman? As I understand it, it supports rootless containers, which should prevent applications from escaping their containers, I would think. And even if it didn't, it would limit the potential damage to just that user's permissions.

Computer viking
May 30, 2011
Now with less breakage.

That's potentially still a fair bit of damage, though; most botnets just want your CPU time and bandwidth, not your files.

BlankSystemDaemon
Mar 13, 2009

System Access Node Not Found



Bob Morales posted:

Assuming it will detect and block malicious traffic?
That's a tall assumption if I've ever seen one.

Nitrousoxide posted:

What about Podman? As I understand it that supports rootless containers which should prevent them from escaping their containers I would think. And even if it didn't it would limit the potential damage to just that user's permissions.
I'm not sure what "rootless" means in this context; the root user isn't different from any other.
As far as I know there are no instances where uid=0 is hardcoded as the only way to accomplish something (e.g. being in the wheel group on FreeBSD gives you access to su, by which you can substitute to a user with whatever privilege you want, assuming you have their password).

EDIT: It sounds like rootless, in this context, means starting as root and dropping privileges, which is a standard feature of daemon(8), and ought to be a core functionality of any well-designed daemon.

I'm talking about isolation like that offered by, for example, FreeBSD jails, which are designed explicitly to confine root, since it's the kernel that's enforcing the isolation.

BlankSystemDaemon fucked around with this message at 15:17 on Mar 25, 2021

Furism
Feb 21, 2006

Live long and headbang


BlankSystemDaemon posted:

What's an IPS going to do to help isolate containers?

An IPS is an Intrusion Prevention System that you put inline in front of network services (usually HTTP-based) that will scan incoming traffic for known vulnerabilities and block them so they are not forwarded to your vulnerable server. They are very common in production because sometimes you can't patch your server as quickly as you'd like, so you need to rely on a network device to protect you. It has nothing to do with container isolation.

Anyway, I think I'll just use Suricata as a stand-alone IPS; there doesn't seem to be an nginx module for this.
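For reference, Suricata's inline (IPS) mode on Linux hangs off NFQUEUE rather than any web server module. A sketch, assuming iptables and the default suricata.yaml path; the queue number and chain are assumptions to adapt to your firewall, and rule tuning isn't shown:

```shell
# Divert inbound web traffic to NFQUEUE 0, then run Suricata in
# inline mode against that queue so it can drop matching packets.
iptables -I INPUT -p tcp --dport 80 -j NFQUEUE --queue-num 0
iptables -I INPUT -p tcp --dport 443 -j NFQUEUE --queue-num 0
suricata -c /etc/suricata/suricata.yaml -q 0
```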

xzzy
Mar 5, 2009



Rootless containers in podman are containers run by an unprivileged user with zero root privileges. These containers get their own namespace and podman does some magic to map any root owned files/processes in the container to an unprivileged effective uid. This is a big change from docker where you had to add users to the docker group, allowing them to control the docker daemon (which runs as root).

The main justification for this is to allow people to run containers without su/sudo and to prevent the host system from getting owned if the services inside the container are compromised. This does impose some limits on what you can do with the containers, though; networking in particular (as you can't do things like set up network interfaces without root privileges).
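You can see that uid remapping directly on a host with rootless podman set up (the exact output depends on your /etc/subuid and /etc/subgid ranges):

```shell
# Show the user-namespace uid map a rootless podman session uses:
# container uid 0 maps to your own uid, higher uids to a subuid range.
podman unshare cat /proc/self/uid_map
# "root" inside the container is still an unprivileged uid on the host:
podman run --rm alpine id
```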

BlankSystemDaemon
Mar 13, 2009

System Access Node Not Found



Furism posted:

An IPS is an Intrusion Prevention System that you put inline in front of network services (usually HTTP-based) that will scan incoming traffic for known vulnerabilities and block them so they are not forwarded to your vulnerable server. They are very common in production because sometimes you can't patch your server as quickly as you'd like, so you need to rely on a network device to protect you. It has nothing to do with container isolation.

Anyway, I think I'll just use Suricata as a stand-alone IPS; there doesn't seem to be an nginx module for this.
I didn't ask what an IPS is, I asked how it can isolate containers.
In the almost quarter century I've worked as a network admin, it's possible I might've used an IPS a fair few times.

xzzy posted:

Rootless containers in podman are containers run by an unprivileged user with zero root privileges. These containers get their own namespace and podman does some magic to map any root owned files/processes in the container to an unprivileged effective uid. This is a big change from docker where you had to add users to the docker group, allowing them to control the docker daemon (which runs as root).

The main justification for this is to allow people to run containers without su/sudo and to prevent the host system from getting owned if the services inside the container are compromised. This does impose some limits on what you can do with the containers, though; networking in particular (as you can't do things like set up network interfaces without root privileges).
Gotcha, it's relying on cgroups and namespaces for isolation, which is what Docker uses too; that's unfortunate, I thought there'd be something better.
So the major difference is that there's no controlling daemon running as root, and everything is configured to drop privileges.

Mr. Crow
May 22, 2008

Snap City mayor for life


BlankSystemDaemon posted:

Gotcha, it's relying on cgroups and namespaces for isolation, which is what Docker uses too; that's unfortunate, I thought there'd be something better.
So the major difference is that there's no controlling daemon running as root, and everything is configured to drop privileges.

I'm fuzzy on the details of exactly how it works, since it's still very much alpha/beta quality (the rootless side, anyway); but afaik it's not actually "dropping privileges", it's entirely within the scope of the user's process and namespaces. It's still using cgroups and namespaces because, well, that's what the Linux kernel has; but the crux of it comes from https://github.com/rootless-containers/slirp4netns, which piggybacks off the kernel to handle user networking. Last time I tried to use it (a year ago or so) it was still pretty limited, and by necessity it will never have a lot of networking-related features you might expect, but it's still a pretty cool idea if it fits within the scope of what you need it to do.


FreeBSD question: does FreshPorts or a similar site post the build status of ports? I've set up poudriere to build my ports nightly, and llvm has surprisingly been failing for a week or so; I want to save myself the troubleshooting time if it's just failing upstream.

xzzy
Mar 5, 2009



It helps that podman sees itself as primarily a container development tool to help users create their images. Then they run a command that exports it as a kubernetes config and they ship it off to the production side.

You can run it as a service as a feature-equal replacement for docker, but I've hit some bugs trying to do that that I haven't sorted out yet (every once in a while I get "filehandles are in use" errors when a container exits and I'm trying to restart it).

Nitrousoxide
May 30, 2011

do not buy a oneplus phone





Mr. Crow posted:

I'm fuzzy on the details of exactly how it works, since it's still very much alpha/beta quality (the rootless side, anyway); but afaik it's not actually "dropping privileges", it's entirely within the scope of the user's process and namespaces. It's still using cgroups and namespaces because, well, that's what the Linux kernel has; but the crux of it comes from https://github.com/rootless-containers/slirp4netns, which piggybacks off the kernel to handle user networking. Last time I tried to use it (a year ago or so) it was still pretty limited, and by necessity it will never have a lot of networking-related features you might expect, but it's still a pretty cool idea if it fits within the scope of what you need it to do.


FreeBSD question: does FreshPorts or a similar site post the build status of ports? I've set up poudriere to build my ports nightly, and llvm has surprisingly been failing for a week or so; I want to save myself the troubleshooting time if it's just failing upstream.

The podman GUI that Red Hat is working on, Cockpit, has built-in virtual networking controls that let you do a lot of what you would otherwise need Docker for, using the kernel's own network management.

It's also supposed to provide basic hypervisor control, using the kernel's built-in virtual machine support. It's a cool project.

I assume they built it in such a way that it's more secure than the Docker implementation. But understanding how Cockpit works is beyond my capabilities.

other people
Jun 27, 2004
Associate Christ

cockpit is a fancy web frontend for managing a server. it also happens to have a plugin for podman.

https://cockpit-project.org

Furism
Feb 21, 2006

Live long and headbang


other people posted:

cockpit is a fancy web frontend for managing a server. it also happens to have a plugin for podman.

https://cockpit-project.org

Can you add vhosts, automatically get Let's Encrypt certificates and stuff? Can I tell it "I want that Docker image behind this reverse proxy here, by the way configure this as the FQDN and use port xyz for the forwarding to the backend", that sort of thing?

other people
Jun 27, 2004
Associate Christ

Furism posted:

Can you add vhosts, automatically get Let's Encrypt certificates and stuff? Can I tell it "I want that Docker image behind this reverse proxy here, by the way configure this as the FQDN and use port xyz for the forwarding to the backend", that sort of thing?



You can log into a terminal from it so yeah I guess you can do all that one way or another. I don't know if it has a docker plugin; it may be a podman only thing.

Furism
Feb 21, 2006

Live long and headbang


I just installed it and, yeah, it's ridiculously easy. I installed it on my CentOS server at home, added my private key (the GUI actually read it from the /home/user/.ssh directory), provided the password, and in literally one click I can now remotely connect to my VPS. But it seems to be aimed mostly at monitoring, not so much at configuring services (you still need to use the terminal, as you pointed out). And I must say the GUI is amazing, top-notch quality, very professional; I love it!

Nitrousoxide
May 30, 2011

do not buy a oneplus phone





Furism posted:

I just installed it and, yeah, it's ridiculously easy. I installed it on my CentOS server at home, added my private key (the GUI actually read it from the /home/user/.ssh directory), provided the password, and in literally one click I can now remotely connect to my VPS. But it seems to be aimed mostly at monitoring, not so much at configuring services (you still need to use the terminal, as you pointed out). And I must say the GUI is amazing, top-notch quality, very professional; I love it!

Can't you do virtual networking in the "networking" tab and then point your containers at the vnetwork you've created, which, say, NGINX is on and managing inside a podman container?

namlosh
Feb 11, 2014

I blew up


Furism posted:

I just installed it and, yeah, it's ridiculously easy. I installed it on my CentOS server at home, added my private key (the GUI actually read it from the /home/user/.ssh directory), provided the password, and in literally one click I can now remotely connect to my VPS. But it seems to be aimed mostly at monitoring, not so much at configuring services (you still need to use the terminal, as you pointed out). And I must say the GUI is amazing, top-notch quality, very professional; I love it!

I'm not too much of a Linux guy, but I'm getting there. Agreed that Cockpit is really good, and so is podman. You can install a podman-docker package that provides a CLI just like docker's, so you can run, ps, inspect, etc.

Using the UI, you can grab images from a registry and start a container with most of the options you get in docker: set volumes, environment variables, expose ports, etc. It works great, but I still use the CLI often because it's just easier. Haven't tried more advanced networking like macvlan or other drivers yet, but it's running deepstack and deepstack-ui images, and they're talking to each other and via the host's network just fine. CentOS 8 is the bare-metal OS, but the podman stuff is running inside a VM I imported running RHEL 8.3. So two instances of Cockpit lol

It's pretty slick so far

Matt Zerella
Oct 7, 2002


Furism posted:

Can you add vhosts, automatically get Let's Encrypt certificates and stuff? Can I tell it "I want that Docker image behind this reverse proxy here, by the way configure this as the FQDN and use port xyz for the forwarding to the backend", that sort of thing?

Have you looked into Traefik at all? It's what I use, and I can reverse proxy based on labels. It even handles subdomain routing for me, and LE renewal/reloading.

I just pointed a wildcard domain at my IP and Traefik/labels handle the rest.
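For the curious, that label-driven setup looks roughly like this; the router name, domain, and the "le" certresolver name are assumptions that have to match your Traefik static configuration:

```shell
# A service published through Traefik v2 purely via labels; Traefik
# watches the Docker socket and builds the route + TLS cert from them.
docker run -d --name whoami \
    --label 'traefik.enable=true' \
    --label 'traefik.http.routers.whoami.rule=Host(`whoami.example.com`)' \
    --label 'traefik.http.routers.whoami.tls.certresolver=le' \
    traefik/whoami
```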

The Milkman
Jun 22, 2003

No one here is alone,
satellites in every home


Podman v3 also supports the bits needed for docker-compose, and it's been working pretty well for me. There's also podman-compose, which works alright for creating/starting containers but falls down pretty quickly for anything else; no reason to mess with it anymore.

Opinions on Traefik / nginx proxy manager / SWAG? I just need to handle a couple of domains/subdomains for the handful of stuff I expose. I'd been using a caddy container because that was the easiest at the time a few years ago, and I meant to replace it shortly thereafter but never did, whoops

xzzy
Mar 5, 2009



I tried getting traefik to work for some web services in our OKD cluster and was unable to get it to do anything useful. It's very possible I am a giant idiot though.

I got a static config file with nginx working in an hour.

Furism
Feb 21, 2006

Live long and headbang


Well, I went down the Dokku rabbit hole, and it's an amazingly good project. WordPress is giving me grief because when you turn HTTPS on, the dynamic pages are served over it but the static content (like the CSS and stuff) is served over HTTP, so the browser blocks those because it considers them "cross-origin." I'm not mad at Firefox or at Dokku, more at the WordPress developer who hard-coded this bullshit. I have some .NET Core websites somewhere; I'll add a Dockerfile to them and see how it goes.

Dokku is pretty drat sweet: it uses nginx as a front-end and you don't really have to configure anything. Just add an "app" (a container) and it'll take care of the vhost, Let's Encrypt certs and all that! I used to do this the old way (manually or through a few bash scripts), but having it streamlined through Dokku just saves me time. When the web app is properly coded, that is.

Cardiac
Aug 28, 2012



So I managed to gently caress up the GUI on a CentOS 8 machine in an interesting way. I don't know exactly what I did, but for some reason I cannot boot into GNOME, KDE or XFCE despite graphical.target being set.
The machine has two GPUs, one for the display and one for CUDA. I have removed and reinstalled the NVIDIA drivers and the various desktops, and whatever I do, I still boot into the CLI.
If I run startx from the CLI, the screen goes black and then crashes back to the CLI. Except if I do it as root.
Interestingly, the VNC servers work great with GNOME and have no issues.

For various reasons I cannot just do a fresh install of CentOS, since there is some critical software on the machine. Or can I?
Is there an easy way to start from default CentOS GUI settings without breaking everything?

Volguus
Mar 3, 2009


There's nothing helpful in /var/log/Xorg.0.log? Something must record why it fails to start, or even fall back to a simple twm.

rufius
Feb 27, 2011

Clear alcohols are for rich women on diets.


Cardiac posted:

So I managed to gently caress up the GUI on a CentOS 8 machine in an interesting way. I don't know exactly what I did, but for some reason I cannot boot into GNOME, KDE or XFCE despite graphical.target being set.
The machine has two GPUs, one for the display and one for CUDA. I have removed and reinstalled the NVIDIA drivers and the various desktops, and whatever I do, I still boot into the CLI.
If I run startx from the CLI, the screen goes black and then crashes back to the CLI. Except if I do it as root.
Interestingly, the VNC servers work great with GNOME and have no issues.

For various reasons I cannot just do a fresh install of CentOS, since there is some critical software on the machine. Or can I?
Is there an easy way to start from default CentOS GUI settings without breaking everything?

A stupid question: have you tried with a different user? Just in case you goofed something local to your user.

After that, it's probably either rolling back driver versions or double-checking that the Xorg conf file is sane.

Antigravitas
Dec 8, 2019

Outside Context Problem


Cardiac posted:

So I managed to gently caress up the GUI on a CentOS 8 machine in an interesting way. I don't know exactly what I did, but for some reason I cannot boot into GNOME, KDE or XFCE despite graphical.target being set.
The machine has two GPUs, one for the display and one for CUDA. I have removed and reinstalled the NVIDIA drivers and the various desktops, and whatever I do, I still boot into the CLI.
If I run startx from the CLI, the screen goes black and then crashes back to the CLI. Except if I do it as root.
Interestingly, the VNC servers work great with GNOME and have no issues.

For various reasons I cannot just do a fresh install of CentOS, since there is some critical software on the machine. Or can I?
Is there an easy way to start from default CentOS GUI settings without breaking everything?

You should check if you can get the logs of the login manager (gdm, probably). And perhaps remove the Nvidia garbage card and see if that helps.

Comedy option: .Xauthority not owned by the user supposed to run X.
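Gathering those leads on the affected box might look like this (log paths assume a stock CentOS 8 / systemd setup):

```shell
# Login manager errors from the current boot:
journalctl -b -u gdm --no-pager | tail -n 50
# X server errors ("EE" lines) from the last startx attempt:
grep '(EE)' /var/log/Xorg.0.log
# Comedy-option check: this file must be owned by the user running X:
ls -l ~/.Xauthority
```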

Gajarga
Nov 5, 2006


xzzy posted:

I tried getting traefik to work for some web services in our OKD cluster and was unable to get it to do anything useful. It's very possible I am a giant idiot though.

I got a static config file with nginx working in an hour.

traefik has (had?) a documentation problem: a ton of poorly labeled v1 docs littering everything up, plus a lot of breaking config changes in v2, so even if you were reading good-looking material you might have been led astray of a valid config

Nobody Interesting
Mar 29, 2013

One way, dead end... Street signs are such fitting metaphors for the human condition.




I think I suck at git and I'm having a hard time wrapping my head around this. Googling for it is hard because everything assumes you're using Git{Hub,Lab}.

So I have a repo whose origin is on GitLab. We'll say it's git@gitlab.com:nobody/reallygreatproject.git. This is cloned to my server.

Thing is, I want to clone to my desktop and push directly to the server for the sake of swift deployment, essentially turning the GitLab origin into a mirror, so I am running git clone nobody@server:/srv/reallygreatproject

When I then commit and push I get:
code:
Enumerating objects: 3, done.
Counting objects: 100% (3/3), done.
Delta compression using up to 8 threads
Compressing objects: 100% (2/2), done.
Writing objects: 100% (2/2), 232 bytes | 232.00 KiB/s, done.
Total 2 (delta 1), reused 0 (delta 0), pack-reused 0
remote: error: refusing to update checked out branch: refs/heads/master
remote: error: By default, updating the current branch in a non-bare repository
remote: is denied, because it will make the index and work tree inconsistent
remote: with what you pushed, and will require 'git reset --hard' to match
remote: the work tree to HEAD.
remote: 
remote: You can set the 'receive.denyCurrentBranch' configuration variable
remote: to 'ignore' or 'warn' in the remote repository to allow pushing into
remote: its current branch; however, this is not recommended unless you
remote: arranged to update its work tree to match what you pushed in some
remote: other way.
remote: 
remote: To squelch this message and still keep the default behaviour, set
remote: 'receive.denyCurrentBranch' configuration variable to 'refuse'.
To server:/srv/reallygreatproject
 ! [remote rejected] master -> master (branch is currently checked out)
error: failed to push some refs to 'banff:/srv/reallygreatproject'
I really don't understand what I'm seeing. Google/StackOverflow gives me a few git config lines to run to turn the remote into a bare repo, but this leaves me with an empty working directory on my server. Setting receive.denyCurrentBranch to ignore as it suggests allows me to push, but the new objects don't appear to be received on the server.

So basically, how the hell do I clone a clone and then push into it? What am I not understanding?

e: So the remote is receiving the new objects but they're showing as deleted/staged???

Nobody Interesting fucked around with this message at 06:13 on Mar 28, 2021

Mr. Crow
May 22, 2008

Snap City mayor for life


It says right there in the message: you can't push to a non-bare repo. Typically what's hosted on GitHub etc. is the bare repo (just the .git folder). It's assumed that's what you're pushing to normally, as the complexity and room for error increase if you're trying to update somebody else's working tree. You can disable this behavior and get what you're trying to do via the config options in the message output.

To be honest, though, you're just giving yourself a hard time with git; just rsync the folder if that's the workflow you want. git isn't doing anything for you in this situation.

Edit: to be clear, the error is because you're trying to push to another cloned (non-bare) repo.

xtal
Jan 9, 2011



I would keep using git instead of rsync; it sounds like the problem is simply that you created a repo on the server that isn't bare. Move or delete the repo on the server, make a new one with git init --bare, and push to that. By convention, the directory name should end with .git.
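A minimal sketch of that workflow under throwaway paths (the -c identity flags are only there so the sketch runs anywhere):

```shell
# Bare repo (just the .git contents, no working tree), a clone, a push.
tmp=$(mktemp -d)
git init --bare "$tmp/reallygreatproject.git"
git clone "$tmp/reallygreatproject.git" "$tmp/desktop-clone"
cd "$tmp/desktop-clone"
echo hello > index.md
git add index.md
git -c user.email=you@example.com -c user.name=you commit -m "first commit"
git push origin HEAD   # succeeds: a bare repo has no checked-out work tree
```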

Mr. Crow
May 22, 2008

Snap City mayor for life


xtal posted:

I would keep using git instead of rsync; it sounds like the problem is simply that you created a repo on the server that isn't bare. Move or delete the repo on the server, make a new one with git init --bare, and push to that. By convention, the directory name should end with .git.

I assumed he's using the server repo for *something* and needs the working tree, hence the theatrics. If you're just trying to have redundant mirrors then ya, do this.

Nobody Interesting
Mar 29, 2013

One way, dead end... Street signs are such fitting metaphors for the human condition.




It's a super-duper business-critical (not really) Hugo website, so the point of git was to have a post-receive hook that rebuilds the site.

THAT makes more sense though. I don't know why I was having such trouble comprehending what was right in front of me. So I guess the repo on the server needs to be re-initialised as bare and we go from there, probably. I'll see what happens, but at least I grasp the concept a bit better now. Thanks for holding my hand, I think I needed it.
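A sketch of such a hook, assuming the server repo gets re-initialised bare and assuming these paths and hugo flags fit your layout (all of those are assumptions to adjust):

```shell
#!/bin/sh
# hooks/post-receive inside the bare repo (make it executable).
# On each push: check master out into a plain source directory,
# then rebuild the site from it.
SRC=/srv/reallygreatproject-src
git --work-tree="$SRC" checkout -f master
hugo --source "$SRC" --destination /srv/www/reallygreatproject
```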

The Milkman
Jun 22, 2003

No one here is alone,
satellites in every home


If you need to keep what's on the remote, you could just clone it to a new folder, copy what you have locally into it, and commit. Or do that and just delete what's in the fresh clone first; that's fewer steps than reinitializing or whatever


Nobody Interesting
Mar 29, 2013

One way, dead end... Street signs are such fitting metaphors for the human condition.




It turns out GitLab's webhooks feature is a better solution for what I want, but example scripts are sparse and I'm poo poo with Python. Eventually I'll figure it out, just not today!

There are plenty of example PHP webhooks, but enabling PHP in this container would defeat the point of using a static site generator.
