NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

fankey posted:

I'm trying to run graphite and grafana on the same VM. If I start the graphite container before the grafana container everything works fine. If I start the grafana container first the graphite container doesn't operate correctly.

Let's assume the worst-case scenario - that the grafana image, by implementation, must have a running graphite instance it can connect to at startup, otherwise it will fail.

The canonical way to solve this in docker is to put both services in the same docker-compose.yml file and add the depends_on: graphite property to grafana:

https://docs.docker.com/compose/compose-file/#depends_on

Docker will now know to wait until graphite is running before it starts grafana.
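
A minimal sketch of what that compose file could look like (image names and ports are just examples, not necessarily what you're running):

version: "3"
services:
  graphite:
    image: graphiteapp/graphite-statsd
    ports:
      - "8080:80"
  grafana:
    image: grafana/grafana
    ports:
      - "3000:3000"
    depends_on:
      - graphite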

fankey
Aug 31, 2001

Thanks for the comments. I'll use docker-compose to force the startup order and see if that helps. Here's the output of docker inspect for the 2 modes

Working grafana
Working graphite

If I then stop and start graphite the graphite container is 'broken'. I can still talk to grafana without any issues.

Nonworking graphite
Still working grafana

Fake Edit: I stopped everything and restarted just graphite to get things working again... and it didn't work, at least not right away. I wasn't able to contact the graphite HTTP server, so I shelled into the container to poke around a bit. Inside, I was able to wget from localhost, so I exited and tried wget against localhost on the host, and that worked. Then, trying it externally once again, it started working. So the problem doesn't appear to be startup-order related but rather an issue with the graphite container itself. I waited at least a couple of minutes before shelling into the container, so I don't think the problem was that I didn't wait long enough for the server to come up.

After that I stopped graphite and restarted it for one more test and it started working immediately. :iiam:

I'll look into Prometheus since that seems to be a little more modern. I was using Graphite because I'm using collectd on a bunch of embedded systems and wasn't aware there were other options for databases. There is a collectd plugin for writing to Prometheus, so that might work as well.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)
I need some Ansible help, this is breaking my brain.

If I have a list like this:

list:
  - id: ['item1', 'item2']
    value1: x
    value2: y
  - id: ['item3', 'item4']
    value1: z
    value2: a

How do I use this to flatten the id so value1 and value2 apply to both items in id?

The loop keyword kind of breaks my brain a bit, so I usually just use with_*, which I know is the old way of doing things, but it makes sense to me.

Right now I'm just doing:

- name: debug
  debug:
    msg: "ID is {{ item.id }} and value1 is {{ item.value1 }}"
  with_items: "{{ list }}"

But the output just jams the list into item.id instead of separating it?


Ignore this, I was reading the output wrong.
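
(Leaving a note anyway in case someone lands here with the same question: one way to expand each entry of id against its parent's values is the subelements filter; a minimal sketch, untested against the data above:)

- name: debug
  debug:
    msg: "ID is {{ item.1 }} and value1 is {{ item.0.value1 }}"
  # 'list' is the variable from the example above; subelements yields
  # (parent dict, single id entry) pairs
  loop: "{{ list | subelements('id') }}"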

Matt Zerella fucked around with this message at 20:05 on Jun 11, 2020

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
Do any of you goons have recommendations on example projects I can try to feed into my build pipeline as a demo? I've got Clair/Sonar/Nexus and some other nonsense set up, and it would be cool to test out various quality gates and whatnot.

I found WebGoat, which is kind of what I'm looking for; more projects along those lines would be great.

https://owasp.org/www-project-webgoat/

freeasinbeer
Mar 26, 2015

by Fluffdaddy

xzzy posted:

Are projects like MAAS and Tinkerbell useful in more traditional provisioning environments?

Where I'm at we're typically 5+ years behind the cutting edge, so we're not very cloud-aware, though the momentum is building. Now that everyone is working from home, provisioning new hardware has gotten really difficult because our workflow is fairly antique: get some new hardware, someone spends a day configuring IPMI and customizing a ks.cfg, then let pxeboot do its thing. This hasn't evolved much in almost 20 years, even though we've iterated through stuff like Cobbler and Foreman.

So if there are some better provisioning tools out there that support traditional servers (that is, a server is provisioned and performs a single task until the hardware fails), that is pretty interesting to me. There has been talk of moving user analysis jobs into OpenShift, which I'm not sure I'm keen on, but it does indicate a more dynamic solution is on the horizon.

They are designed explicitly for on-prem folks. Most of the demos use packet.net for laziness reasons, but it's meant to be bare-metal cloud provisioning.

Theoretically you can get it all set up so that you boot a host, it hits iPXE, and it loads a placeholder OS, which then configures the device for you, like setting up IPMI and installing the base OS. From there you can do things like have it check into Ansible Tower for further config, or just leave it on the shelf until someone asks for machines and runs their own provisioning tools.
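
The boot end of that flow is tiny; the iPXE script involved is typically just a few lines (the URL here is a placeholder, not a real endpoint):

#!ipxe
dhcp
chain http://boot.example.com/placeholder-os.ipxe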

xzzy
Mar 5, 2009

That sounds like Fun(tm) to me. But we're so far away from that kind of mindset it'll probably be 10 years before I get to try it. Everyone here is still in the my server is my server forever type of thinking.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

xzzy posted:

That sounds like Fun(tm) to me. But we're so far away from that kind of mindset it'll probably be 10 years before I get to try it. Everyone here is still in the my server is my server forever type of thinking.

Most people don't want to be messing with this poo poo, it's just that muscle memory is hard to build and they have no idea what good looks like. If they were to research those whole ecosystems themselves, it would take years to get from the theory of cattle to a working implementation of deployments they don't have to manage. Most cloud engineers getting into this stuff have no idea how to tell the developer story or demonstrate fast iteration, it always starts with some kind of pre-made deployable image. Get a demo that starts with the workstation and pushes outward through a zero-touch build and release pipeline.

Massive cultural shifts can be done, but you have to show off the benefit and sell something that actually exists instead of just telling people.

Matt Zerella
Oct 7, 2002

Norris'es are back baby. It's good again. Awoouu (fox Howl)

Vulture Culture posted:

Most people don't want to be messing with this poo poo, it's just that muscle memory is hard to build and they have no idea what good looks like. If they were to research those whole ecosystems themselves, it would take years to get from the theory of cattle to a working implementation of deployments they don't have to manage. Most cloud engineers getting into this stuff have no idea how to tell the developer story or demonstrate fast iteration, it always starts with some kind of pre-made deployable image. Get a demo that starts with the workstation and pushes outward through a zero-touch build and release pipeline.

Massive cultural shifts can be done, but you have to show off the benefit and sell something that actually exists instead of just telling people.

Yeah, you don't even really need Tower for this, if you want to do a demo.

Take it as an opportunity to write nice idempotent Ansible config on a new server. Don't do anything by hand. As you need to do config, do it in Ansible and push the changes that way. Then spin up a VM and show them how easy it is to get the VM to the exact state your regular server is in.
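
A minimal sketch of the kind of idempotent config meant here (the host group and package are just examples):

- hosts: demo
  become: true
  tasks:
    - name: Ensure nginx is installed
      ansible.builtin.package:
        name: nginx
        state: present
    - name: Ensure nginx is enabled and running
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true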

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost

Vulture Culture posted:

Massive cultural shifts can be done, but you have to show off the benefit and sell something that actually exists instead of just telling people.

It really depends on whether the business can actually benefit. For most places that care about software development velocity in earnest, the benefits are fairly clear, it's not that hard to demonstrate value, and all that's really argued about is the prioritization of resources. It's been a huge struggle for me, someone literally hired to push companies with thousands of engineers toward modern operations with immutable infrastructure, because, quite simply put, the business doesn't actually benefit that much from being able to push out software faster or more reliably when software is maybe 20% of the product lifecycle. For example, most companies that produce IoT stuff are built primarily around shipping and distributing huge volumes of light bulbs and other small consumer hardware, their workflows are pretty darn linear, and the feedback loop is pretty delayed. They've never once had to push a firmware update for these devices in the field unless it's a huge business customer with a huge support contract. So when I protest against using Jenkins to push firmware updates directly out to everyone all at once without a thought-out deployment strategy that deals with the reality that software != physical supply chains, it becomes a business development exercise, not an engineering one.

Historically, the worst places for changing deployment workflows are the body shops that bill per hour, unless it's specified directly in a contract. They really do tend to culturally select for closed-minded folks who like to follow things by the book as much as possible and avoid risks.

If it's somewhere like an MSP, that's among the worst cases for getting management to buy into automation, given that the business model is literally the definition of taking on toil, per the SRE Book, without being able to solve the problems that cause it at their root.

Rescue Toaster
Mar 13, 2003
I'm curious about setting up toolchains in docker (like for cross compilers in some cases, or arm compilers running in qemu user mode, etc...).

Do people tend to have the most luck with:
1) Making the code build part of the docker image build
2) Making the toolchain container totally ephemeral and bind mounting the current folder to run the build inside the container then stop
3) Building up a big 'dev' container that persists, and either bind mounting (then you have to deal with uid issues on multi-user servers) or checking out into a docker volume (then you have to deal with pushing git permissions into the container)

1 might be valid for release builds of docker apps but seems annoying during development, and doesn't really work for non-docker apps that you just want the toolchain for
2 seems the most straightforward, but has issues like vscode can't see the compiler and headers (for C++) or installed libs (for python) since the container isn't running
3 is annoying because people end up running their container for a long time or making changes inside the running container and forgetting to push Dockerfile updates for new dependencies

Interested in any thoughts or success stories people have. I think any of these models can work on the CI side, but making something convenient for developers to use day to day seems more challenging.
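
For concreteness, option 2 above usually boils down to something like this (the toolchain image name is hypothetical):

# Ephemeral toolchain container: bind-mount the source, run the build as the
# calling user to avoid ownership problems, and the container goes away after
docker run --rm \
    --user "$(id -u):$(id -g)" \
    -v "$PWD":/src -w /src \
    cross-toolchain:latest \
    make -j"$(nproc)"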

Rescue Toaster fucked around with this message at 19:15 on Jun 29, 2020

Hadlock
Nov 9, 2004

What language specifically do you want to cross-compile for?

Golang and Rust have wildly different build processes for cross-compiling. The last time I had to cross-compile containers, I did it on a Raspberry Pi for the two containers that had to run on ARM and only got updated very rarely. By building natively you avoid having to learn all the magic cross-compiling incantations.
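
To make that difference concrete (a general-knowledge sketch, not from the original post):

# Go: cross-compiling is just environment variables
GOOS=linux GOARCH=arm64 go build -o app-arm64 .

# Rust: add the target first; you'll typically also need a cross linker configured
rustup target add aarch64-unknown-linux-gnu
cargo build --target aarch64-unknown-linux-gnu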

AWS and, I think, Google now too, offer ARM-based hosts that you can leverage for this task, rather than getting approval/PO for physical hardware, etc.

TL;DR: avoid cross-compiling unless you absolutely have to.

If you have decision-making powers, maybe choose an interpreted language like Python and avoid the whole problem altogether.

Rescue Toaster
Mar 13, 2003
The main cross-compiled language would be C/C++, though Rust is an option too. Using a native ARM image with QEMU eliminates the need to set up a cross-compilation toolchain; you just use a native ARM toolchain image. The question isn't really how to set up the toolchain image so much as how to make it easy for developers to actually develop with it.

VS Code remote SSH and remote container support is really nice, but you need a persistent running container. Using an ephemeral toolchain container running remotely (in AWS or a local swarm or whatever) seems at odds with decent IDE integration and source indexing.
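
For what it's worth, the "native ARM image with QEMU" approach usually amounts to registering the qemu binfmt handlers once and then running ARM images directly; a rough sketch (image tags are just examples):

# Register qemu user-mode emulation for foreign architectures (one time per host)
docker run --privileged --rm tonistiigi/binfmt --install arm64

# Then an arm64 toolchain image runs "natively" on an x86 host
docker run --rm --platform linux/arm64 -v "$PWD":/src -w /src arm64v8/gcc:12 make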

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Rescue Toaster posted:

I'm curious about setting up toolchains in docker (like for cross compilers in some cases, or arm compilers running in qemu user mode, etc...).

Do people tend to have the most luck with:
1) Making the code build part of the docker image build

I thought that was the problem that multi-stage containers solve.

1. Base image: your compiler toolchain
2. Build image derives from base image. Copy code, build code.
3. Runtime image derives from pared-down runtime environment. Copy binary output from #2 to this.
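
A minimal sketch of that layout (toolchain image name and paths are hypothetical):

# Stage 1: build environment with the full compiler toolchain
FROM my-cpp-toolchain:latest AS build
WORKDIR /src
COPY . .
RUN make -j"$(nproc)" && mkdir -p /out && cp build/myapp /out/

# Stage 2: pared-down runtime image; only the built binary is copied in
FROM debian:bookworm-slim
COPY --from=build /out/myapp /usr/local/bin/myapp
ENTRYPOINT ["/usr/local/bin/myapp"]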

necrobobsledder
Mar 21, 2005
Lay down your soul to the gods rock 'n roll
Nap Ghost
Multi-stage Dockerfiles don't replace the need for several different containers, like a separate DB container for running integration tests. What they do solve is needing several different Dockerfiles to ultimately produce a single artifact set. One's choice of CI or orchestration system should fit one's needs beyond single streams of containers.

An all-in-one build container (with many different packages) is like a monolithic program versus a bunch of microservices: it's fine to start with a big, single container, but it should probably get split up later on if different teams, or different stages of a pipeline, need to be managed in a more fine-grained way. It's also pretty tough to keep some tools fully contained in a single container, like when you're using Terraform with several different external utilities, but that's where, if you feel like it, you can use a multi-stage build to copy those artifacts over.

What kind of sucks is when you're using ephemeral CI workers and you keep losing your local container caches, and it takes a long time to rebuild them all. On the other hand, it does keep down some level of container rot by occasionally rebuilding everything from scratch (no pun intended).
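
One common mitigation for the cold-cache problem (a sketch from general practice, not necessarily anyone's setup here; registry and image names are made up):

# Seed the builder from the last pushed image so an ephemeral CI worker
# doesn't start from zero, then push the result back for the next run
docker pull registry.example.com/team/app:latest || true
docker build \
    --build-arg BUILDKIT_INLINE_CACHE=1 \
    --cache-from registry.example.com/team/app:latest \
    -t registry.example.com/team/app:latest .
docker push registry.example.com/team/app:latest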

Rescue Toaster
Mar 13, 2003
Yeah, I can make a build image and there's a few ways to use it. My question is more big-picture:

a) Developers need (or very strongly desire, and currently have) code parsing/indexing. If your toolchain (and thus library headers and such) only exists in an ephemeral build container, there's no way I can think of to provide that.
b) If you have some kind of long-lived instance of the toolchain container per developer to handle (a), then you have all these containers hanging around, possibly not getting updated. They're mutable, so someone might change their local one (add a library, apt-get something, etc.) and forget to push the change, so they won't know until they kick off a CI build and it fails. The image sizes can become a problem too.
c) Tradeoffs to consider between bind mounting source from outside the container, storing source inside a volume, or doing the code build inside the image build so the code only exists in the build context (which is sort of like bind mounting, I guess).

I see tons of examples online for doing builds and using multi-stage builds, but very little on 'here's a way to actually set up a development environment'. Just 'here's how to build hello world, or an existing source package that you're not modifying, inside a container'.

Rescue Toaster fucked around with this message at 15:11 on Jun 30, 2020

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Rescue Toaster posted:

Yeah, I can make a build image and there's a few ways to use it. My question is more big-picture:

a) Developers need (or very strongly desire, and currently have) code parsing/indexing. If your toolchain (and thus library headers and such) only exists in an ephemeral build container, there's no way I can think of to provide that.
b) If you have some kind of long-lived instance of the toolchain container per developer to handle (a), then you have all these containers hanging around, possibly not getting updated. They're mutable, so someone might change their local one (add a library, apt-get something, etc.) and forget to push the change, so they won't know until they kick off a CI build and it fails. The image sizes can become a problem too.
c) Tradeoffs to consider between bind mounting source from outside the container, storing source inside a volume, or doing the code build inside the image build so the code only exists in the build context (which is sort of like bind mounting, I guess).

I see tons of examples online for doing builds and using multi-stage builds, but very little on 'here's a way to actually set up a development environment'. Just 'here's how to build hello world, or an existing source package that you're not modifying, inside a container'.

I guess I'm not understanding your end goal very well. Are you saying you want developers to, as part of their general day-to-day development process, have builds run in a container? As opposed to the more usual scenario, which is that containers are built as part of a CI process with the ultimate goal of running those containers in some sort of container orchestration platform?

That seems like a bit of an odd desire versus a more standard process of building containers and testing them during CI. Maybe it's a common scenario that I'm just not familiar with, but it strikes me as weird.

Rescue Toaster
Mar 13, 2003

New Yorp New Yorp posted:

I guess I'm not understanding your end goal very well. Are you saying you want developers to, as part of their general day-to-day development process, have builds run in a container? As opposed to the more usual scenario, which is that containers are built as part of a CI process with the ultimate goal of running those containers in some sort of container orchestration platform?

That seems like a bit of an odd desire versus a more standard process of building containers and testing them during CI. Maybe it's a common scenario that I'm just not familiar with, but it strikes me as weird.

Yes this is the issue. I also will do the normal CI process of building containers & applications. But because the toolchains are weird, modified, stripped down, or sometimes cross-compilation, keeping the toolchains in their own containers would be incredibly useful. But I guess this is just super uncommon, as I don't see much information online on handling it.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Rescue Toaster posted:

Yes this is the issue. I also will do the normal CI process of building containers & applications. But because the toolchains are weird, modified, stripped down, or sometimes cross-compilation, keeping the toolchains in their own containers would be incredibly useful. But I guess this is just super uncommon, as I don't see much information online on handling it.

Whatever your CI process is doing, let the developers do it locally. Use a private container registry to store base images with the different toolchains.

Don't let developers push new images on their own; instead, have the Dockerfiles for your base images go through a standard PR/CI process.

12 rats tied together
Sep 7, 2006

This might be specific to interpreted or duck typed languages only but I can't say I've ever encountered a situation where development needs full build system access locally.

Usually it's just like, install rbenv or pyenv or whatever, use the language package manager to install this package lock file that you got when you cloned the repository.

Make a feature branch. Make changes. docker-compose up -d to run the stack locally. You can run unit tests manually with this shell command. Use this command to run your code in the staging environment which has integration tests and, when possible, traffic replays. Traffic we can't replay is mocked with VCR (the only place I've worked that was doing this was a Ruby shop) and is handled in your unit tests.

I feel like this handles just about every possible development need and it doesn't involve needing to install an IDE plugin that lets you ssh to a kubernetes pod or whatever.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

12 rats tied together posted:

This might be specific to interpreted or duck typed languages only but I can't say I've ever encountered a situation where development needs full build system access locally.

Usually it's just like, install rbenv or pyenv or whatever, use the language package manager to install this package lock file that you got when you cloned the repository.

Make a feature branch. Make changes. docker-compose up -d to run the stack locally. You can run unit tests manually with this shell command. Use this command to run your code in the staging environment which has integration tests and, when possible, traffic replays. Traffic we can't replay is mocked with VCR (the only place I've worked that was doing this was a Ruby shop) and is handled in your unit tests.

I feel like this handles just about every possible development need and it doesn't involve needing to install an IDE plugin that lets you ssh to a kubernetes pod or whatever.

I think this is a scenario where developers are using one OS for development, but the application compiles and runs on many different platforms, each with their own quirks and compilation/runtime toolchains.

Rescue Toaster
Mar 13, 2003
Yeah, things like cross-compilation have always been a nightmare for C/C++, often hard-tied to a specific IDE or the like. Docker's support for multi-arch images would seemingly solve the problem, but it only really works from the perspective of the CI system. For individual developers trying to use these toolchains day to day, it's still a bit of a mess, and it looks like it might rely on developers individually setting up long-lived build toolchain containers and manually starting/stopping/updating them.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Rescue Toaster posted:

Yeah, things like cross-compilation have always been a nightmare for C/C++, often hard-tied to a specific IDE or the like. Docker's support for multi-arch images would seemingly solve the problem, but it only really works from the perspective of the CI system. For individual developers trying to use these toolchains day to day, it's still a bit of a mess, and it looks like it might rely on developers individually setting up long-lived build toolchain containers and manually starting/stopping/updating them.

I think my new disconnect with this question is why you're talking about "long-lived build toolchain" containers. What are you expecting that people will be doing with these containers that isn't accomplished by providing a local process that's equivalent to your CI process?

Rescue Toaster
Mar 13, 2003

New Yorp New Yorp posted:

I think my new disconnect with this question is why you're talking about "long-lived build toolchain" containers. What are you expecting that people will be doing with these containers that isn't accomplished by providing a local process that's equivalent to your CI process?

Simply source editing with indexing, code-completion. Those sorts of features only work when you've got one environment with the toolchain & source code all sitting there.

edit: Eclipse, CLion, VSCode, Atom all support this sort of stuff but it needs to be either in the host environment or a container available via ssh (or docker cli) that's running all the time.

Rescue Toaster fucked around with this message at 18:02 on Jun 30, 2020

gariig
Dec 31, 2004
Beaten into submission by my fiance
Pillbug

Rescue Toaster posted:

Simply source editing with indexing, code-completion. Those sorts of features only work when you've got one environment with the toolchain & source code all sitting there.

edit: Eclipse, CLion, VSCode, Atom all support this sort of stuff but it needs to be either in the host environment or a container available via ssh (or docker cli) that's running all the time.

Isn't that the remote container feature of VS Code?

Rescue Toaster
Mar 13, 2003
Yep! But since the container has to stay running for VS Code to connect, and has to bind mount (or check out into a volume) all your source, it's sort of at odds with the normal Docker CI model of multi-stage builds, importing source via the build context, and ephemeral build containers that may not even have a specific image associated with them. Trying to reconcile these two seemingly conflicting models of building is what I'm banging my head against.
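
For reference, the long-lived container VS Code expects is usually declared in a devcontainer.json along these lines (a sketch; the image name is hypothetical, not a drop-in config):

// .devcontainer/devcontainer.json
{
  "name": "cpp-toolchain",
  "image": "registry.example.com/toolchains/cpp:latest",
  "workspaceMount": "source=${localWorkspaceFolder},target=/src,type=bind",
  "workspaceFolder": "/src"
}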

Hadlock
Nov 9, 2004

Can you share any more details about WHY they need to run the ARM version of the container? I'm guessing you're optimizing for ARM specific microcode or whatever for some IoT device? Like, how much time do they spend on ARM specific optimization, that they need to locally test changes on ARM containers?

I would be more inclined to give each developer a meaty ARM based cloud compute node, and then share an NFS mount (or whatever) between the developer workstation and the ARM node, and let them test that way, on docker, or whatever via ssh. Or maybe if they're on workstations, get an ARM coprocessor daughterboard type thing?

Ideally you would go to the VP of engineering/your boss/whatever and with a rousing speech convince them to move their development practices to the cloud but obviously we still live in reality. If you could get them to locally test on x86, and then run integration tests via CI/Jenkins/Whatever on an ARM node that would probably vastly simplify ops long term for the company. Clearly there's a reason why you're not doing that, so see question 1 at the top of my post.

Lastly, I can see you doing some insane spaghetti ops where you're running qemu on a vagrant instance and wiring it into the local docker network bus and everything magically works so long as you never touch it, and having job security until the heat death of the universe because it's absolutely going to break if you breathe on it and be constantly computer janitoring developer's laptops etc etc ad nauseam

Rescue Toaster
Mar 13, 2003
The ARM stuff might be a red herring. Picture just using a bleeding edge clang x86 toolchain built from source or something, that you want to keep consistent and updated across desktops, build machines, etc... you have a toolchain that exists 'only' in a container and all the same concerns apply in the same way.

cosmin
Aug 29, 2008
As this seems to have evolved into the containers/k8s/cloud-native thread, I have a question that might seem stupid, but I haven't been able to find an easy answer to it and I don't know if it's worth banging my head on.

I'd like to try some things out in a Jupyter notebook that could use some GPU acceleration for the training part.
However, I'm a noob at Jupyter/PyTorch/setting up the ML (NLP) environment and just getting started.

In order to minimize costs, could I run a container on a standard compute VM where I set up the environment, play around with the code, etc., then move the container to a GPU VM for the training part and then back again to regular compute for testing?

Or is it too much hassle setting up the right NVIDIA/CUDA drivers for the container, or setting up the Jupyter/Python environment to be CUDA-enabled, such that it kind of "stickies" the container to GPU-enabled hosts?

If that's the case, why are people even considering running ML workloads in containers?

Basically, I'd like to run my training containers on cheap hosts while I'm setting things up/playing around/testing and move them to GPU hosts for training. Is that feasible?

tortilla_chip
Jun 13, 2007

k-partite
Is Google CoLab not an option?

cosmin
Aug 29, 2008
I feel like a dunce.

It is, I just never read that you can actually train for free on it; I thought it was just a Jupyter scratchpad...

I had some credits on another cloud and my mind automatically went to that instead of considering CoLab.

Saukkis
May 16, 2003

Unless I'm on the inside curve pointing straight at oncoming traffic the high beams stay on and I laugh at your puny protest flashes.
I am Most Important Man. Most Important Man in the World.
What is the best way to keep Docker containers up to date, especially in a case where the container's Dockerfile has used apt-get or yum install to add extra packages? Recently I've come across two approaches. One uses the containrrr/watchtower container to monitor when new versions of an image are published. The other is a script that periodically pulls the image and compares the image ID to the running container's image ID. But I'm not confident that either of these methods will work in a situation where an update is released for one of those extra packages and upstream hasn't pushed a new image. Would it be necessary to rebuild the image to get those packages updated?
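
For reference, the second approach is roughly this kind of script (a sketch; image and container names are placeholders):

# Pull the tag, compare the pulled image ID with what the container is running,
# and recreate the container only if they differ
docker pull example/app:latest
latest=$(docker image inspect --format '{{.Id}}' example/app:latest)
running=$(docker container inspect --format '{{.Image}}' app)
if [ "$latest" != "$running" ]; then
    docker rm -f app
    docker run -d --name app example/app:latest
fi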

NihilCredo
Jun 6, 2011

iram omni possibili modo preme:
plus una illa te diffamabit, quam multæ virtutes commendabunt

Saukkis posted:

What is the best way to keep Docker containers up to date, especially in a case where the container's Dockerfile has used apt-get or yum install to add extra packages? Recently I've come across two approaches. One uses the containrrr/watchtower container to monitor when new versions of an image are published. The other is a script that periodically pulls the image and compares the image ID to the running container's image ID. But I'm not confident that either of these methods will work in a situation where an update is released for one of those extra packages and upstream hasn't pushed a new image. Would it be necessary to rebuild the image to get those packages updated?

Yes, you would have to rebuild the image. But rebuilding an image with updated dependencies is most definitely not something you want to happen automatically, except in a dedicated 'canary' environment for testing.

If that is your case, you should have a separate Dockerfile that pulls the latest version of everything, and a cronjob that rebuilds it every X hours. (It's not practical to monitor every single dependency system - base image, apt-get, npm, pypi, maven, whatever - and trigger a rebuild every time some lovely left-pad package gets updated.)

Your real Dockerfile should lock the versions of every dependency it pulls (or COPY a version-controlled lockfile) so it's actually a reproducible, version-controlled artifact instead of a #yolo. It should only be updated after a proper testing process.

In either case, Watchtower can take care of keeping both the canary and the production environment updated whenever the image for the :canary or :stable tag changes.
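
To sketch the canary/stable split (base image, package, and file layout are all just examples):

# canary/Dockerfile - floats on purpose; a cron job rebuilds it every X hours
FROM ubuntu:latest
RUN apt-get update && apt-get install -y --no-install-recommends nginx \
    && rm -rf /var/lib/apt/lists/*

# stable/Dockerfile - pinned base image (package versions can be pinned the same
# way with pkg=version); only bumped after testing
FROM ubuntu:20.04
RUN apt-get update && apt-get install -y --no-install-recommends nginx \
    && rm -rf /var/lib/apt/lists/*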

NihilCredo fucked around with this message at 08:41 on Aug 5, 2020

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Saukkis posted:

What is the best way to keep Docker containers up to date, especially in a case where the container's Dockerfile has used apt-get or yum install to add extra packages? Recently I've come across two approaches. One uses the containrrr/watchtower container to monitor when new versions of an image are published. The other is a script that periodically pulls the image and compares the image ID to the running container's image ID. But I'm not confident that either of these methods will work in a situation where an update is released for one of those extra packages and upstream hasn't pushed a new image. Would it be necessary to rebuild the image to get those packages updated?

NihilCredo nailed the technicals, so here's a piece of color.

The second you're making modifications, you're no longer using upstream's image. Take off the FOSS Linux admin hat for a minute, pretend you're a Windows person, and consider the risk assessment the same way you would consider any other piece of off-the-shelf software. When there's a security update in OpenSSL, do you go replacing DLLs in some vendor's god-awful embedded PHP distribution, or do you measure the risk and wait for them to push an update before you go messing with their binary contracts? How do you make that risk determination?

A big part of moving to Docker and container orchestration is moving away from the stance of "everything must be evergreen, all the time", detaching from the pieces that you really don't need to care about, and using good auditing tools to tell you when something is important. Be diligent about upgrading beyond vulnerable versions, but do it on a supportable cadence, rather than being reactive and then having to react further to the fallout from your own reactivity. And when possible, let the container communities do the QA for you—if you wanted to be all in on this business yourself, you wouldn't be downloading strangers' builds off the Internet.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
I need to build a small MVP of a hardware server rack, specifically IPMI and at least three physical NICs. Any recommendations on cheap/low cost ways to accomplish this that aren't cloud based?

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Gyshall posted:

I need to build a small MVP of a hardware server rack, specifically IPMI and at least three physical NICs. Any recommendations on cheap/low cost ways to accomplish this that aren't cloud based?

I think you're in the wrong thread.

Methanar
Sep 26, 2013

by the sex ghost

Gyshall posted:

I need to build a small MVP of a hardware server rack, specifically IPMI and at least three physical NICs. Any recommendations on cheap/low cost ways to accomplish this that aren't cloud based?

What are you actually trying to accomplish?

Like pxe bootstrapping and how to set up IPMI?

xzzy
Mar 5, 2009

Find a reputable vendor that does Supermicro, specify the requirements, buy a rack of hardware. They're reasonably cheap, work well, and a BMC plus two interfaces are standard. Adding a third NIC is easy.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.

Methanar posted:

What are you actually trying to accomplish?

Like pxe bootstrapping and how to set up IPMI?

Yeah... repeatable bare metal. We are looking at Digital Rebar but don't want to shell out for an actual Dell server or similar (yet).

Methanar
Sep 26, 2013

by the sex ghost
https://github.blog/2015-12-01-githubs-metal-cloud/

This is a good read on how GitHub does things.

From what I've read about Digital Rebar, it's great. As for the cheapest way to do it, you're probably just going to need to buy a real server if you want IPMI. Not sure how you could get around that.

Gyshall
Feb 24, 2009

Had a couple of drinks.
Saw a couple of things.
Gotcha. Thanks.
