|
CRIP EATIN BREAD posted: "this week im going to take the plunge and experiment with CDS https://ovh.github.io/cds"

experimenting with CDS broke the world economy in 08 so i wouldn't suggest it
|
# ? Jan 14, 2020 22:03 |
|
oh? perfect
|
# ? Jan 14, 2020 22:06 |
|
Is circleci the best option for containers? From here on out I'm not releasing anything that isn't containerized so
|
# ? Jan 14, 2020 22:36 |
|
There's an article on Phoronix about someone redoing Make again. I'm not linking to the article, but the project's own notes sit at the extreme of not hosting on anything normal: http://git.annexia.org/?p=libguestfs-talks.git;a=blob;f=2020-goals/notes.txt;h=b03d9e5ea1062869c642a1850a09340ea1575116;hb=HEAD
|
# ? Jan 14, 2020 23:02 |
|
Fiedler posted: "the correct answer is azure devops."

azure devops uses yaml, which is the worst of all choices.
|
# ? Jan 14, 2020 23:07 |
|
msbuild should go in the dumpster and Microsoft should write a maven.net
|
# ? Jan 14, 2020 23:09 |
|
docker in your CI path is bad, don't run your builds in a docker. actually, probably just don't use docker in general, for anything.

remember: a docker is somebody who engages in docking
|
# ? Jan 15, 2020 00:59 |
|
Poopernickel posted: "docker in your CI path is bad, don't run your builds in a docker"

why?
|
# ? Jan 15, 2020 01:32 |
|
docker is adequate if you're not doing database stuff and aren't waving it anywhere near end users. k8s is extremely Google
|
# ? Jan 15, 2020 01:38 |
|
docker is "works on my machine" taken to its logical conclusion
|
# ? Jan 15, 2020 02:18 |
|
Shaggar posted: "msbuild should go in the dumpster and Microsoft should write a maven.net"

sdk-style projects are equivalent
|
# ? Jan 15, 2020 02:19 |
|
Bloody posted:docker is "works on my machine" taken to its logical conclusion
|
# ? Jan 15, 2020 02:21 |
|
Bloody posted:docker is "works on my machine" taken to its logical conclusion
|
# ? Jan 15, 2020 02:50 |
|
you can do some really good layering and caching with docker, and it makes a huge difference if you have someone on your team who understands that poo poo vs. if you're just wiring garbage together like I would.
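For anyone who hasn't seen it done well, the trick is ordering layers from least- to most-frequently changing, so the expensive steps stay cached. A minimal sketch (the node image and npm commands here are just an illustration, not anything from this thread):

```dockerfile
# Example of cache-friendly layer ordering (hypothetical Node.js app).
FROM node:18

WORKDIR /app

# Dependency manifests change rarely; copying them alone first means the
# expensive `npm ci` layer below stays cached until they actually change.
COPY package.json package-lock.json ./
RUN npm ci

# Source changes only invalidate the layers from here down.
COPY . .
RUN npm run build
```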
|
# ? Jan 15, 2020 03:09 |
|
MononcQc posted: "you can do some really good layering and caching with docker, and it makes a huge difference if you have someone on your team who understands that poo poo vs. if you're just wiring garbage together like I would."

lol, more like you can do that and completely ignore reproducibility of builds, and pretend that managing all the artifacts that go into an image doesn't matter. so good luck deploying to production when one of the zillion services you depend on at build-time suddenly becomes a deploy-time dep.

sorry, i'm not yelling at you here, i'm yelling into the clouds
|
# ? Jan 15, 2020 03:25 |
|
abigserve posted: "why?"

it's pretty easy to have a dockerfile and think that your builds are reproducible, except:

1. you're downloading a linux image from a server that probably won't exist in a couple of years
2. you're using apt-get in your dockerfile, which basically guarantees you can never reproduce the configuration
3. good chance you're also using an even less reproducible package manager too, like pip or npm
4. if your build isn't actually tied to a particular linux, then congratulations, it is now
5. I guess you can archive the image manually, but in that case why gently caress around with docker?

also let's none of us forget that docker is a for-profit company owned by sharky VCs - probably shouldn't assume dockerhub is profitable or will exist in 5 years, and probably shouldn't assume anything on dockerhub will be kept private - anything that hits their servers will eventually be sold to an adtech company as ~metadata~

people will forget about docker, then they'll make the program into some kind of lovely freemium thing to try and squeeze blood from a turnip, then it'll become irrelevant and you'll still be stuck janitoring your dockers

maybe docker's better used as a production environment, idk - but I'm guessing probably not

Poopernickel fucked around with this message at 05:56 on Jan 15, 2020
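For what it's worth, point 2 can be narrowed (not fixed) by pinning exact apt package versions - the pins make the configuration explicit, but the pinned versions still vanish from public mirrors eventually. The tag and version strings below are placeholders, not real ones:

```dockerfile
# Date-stamped base tag plus an exact apt version pin: more explicit,
# still not reproducible once the mirror drops that version.
FROM debian:bullseye-20240101

RUN apt-get update \
    && apt-get install -y --no-install-recommends \
        curl=7.74.0-1.3 \
    && rm -rf /var/lib/apt/lists/*
```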
# ? Jan 15, 2020 05:43 |
|
Poopernickel posted: "it's pretty easy to have a dockerfile and think that your builds are reproducible, except:"

hard to argue with this, because almost every Dockerfile fetches stuff from a repo or package manager. you're supposed to use nexus or something to ensure that all that stuff remains available and consistent. if you do, all these problems go away (except for the one where dockerhub vanishes one day).

but almost nobody does this, because using docker is all about taking shortcuts and doing the bare minimum to get poo poo working
|
# ? Jan 15, 2020 06:06 |
|
but it's cool to use a docker image to provide an environment for CI, because THAT actually does make your builds more repeatable.

the problem shows up when you go to update that image and the Dockerfile refers to an apt repo that doesn't exist anymore. then you're back to janitoring linux again
|
# ? Jan 15, 2020 06:20 |
|
using docker images from a hash instead of a tag to build your poo poo is so much nicer than any other mechanism
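Concretely, pinning by hash means putting the content digest in the FROM line instead of a tag - the digest below is an all-zeros placeholder, not a real one:

```dockerfile
# A digest names exact image contents; a tag like :3.11 or :latest can be
# repointed at new contents at any time, a digest cannot.
FROM python:3.11@sha256:0000000000000000000000000000000000000000000000000000000000000000
```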
|
# ? Jan 15, 2020 06:29 |
|
Helianthus Annuus posted: "but its cool to use a docker image to provide an environment for CI, because THAT actually does make your builds more repeatable"

true on all counts.

at work, we used to have a Linux CI build that ran inside a Docker image - over time, everything in the image got increasingly obsolete and out-of-sync, to the point where its build results were useless. Plus the virtual root environment meant that one of our devs decided to hard-code a bunch of things to dump into /opt at build-time, and nobody noticed since "the build still works". After all, who would ever dream of doing something other than building inside the docker image??

we couldn't add anything new to the image because it referred to external repo sources that don't exist any more - so we just had this black-box docker image that was getting more and more broken, and effectively couldn't be reproduced or modified. I burned it down and moved all dependencies to things that could be installed on the build agent with apt-get, and it's been working flawlessly (and has survived several OS upgrades).

so I guess that's a story on how to use docker to ruin a future maintainer's sanity?

Poopernickel fucked around with this message at 07:09 on Jan 15, 2020
# ? Jan 15, 2020 06:55 |
|
That's exactly the story of most servers in the 2000s
|
# ? Jan 15, 2020 16:59 |
|
good point. with docker, you pay all that bullshit twice - once in the docker image, and once for the machine that runs the docker image
|
# ? Jan 15, 2020 17:08 |
|
CRIP EATIN BREAD posted: "using docker images from a hash instead of a tag to build your poo poo is so much nicer than any other mechanism"

true, and true with pretty much any dependency really. not that many people do it. feels like every day I hear about a new outage due to floating deps
|
# ? Jan 15, 2020 17:09 |
|
I can understand why generic public hub images don't do that, but if you are doing any stuff internally you'd better have your own set of images. Just because the docker hub sucks sometimes doesn't mean the whole stack is bad; the technology and the tool work just fine if you are willing to put a few hours aside to set up your base build images, and to start every Dockerfile from one of those images.
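A plausible sketch of what such an opening line looks like - the registry hostname and digest here are placeholders, not anything from the post:

```dockerfile
# Build from an internally-mirrored, digest-pinned base image instead of
# whatever dockerhub happens to serve today.
FROM registry.internal.example/base/debian:bullseye@sha256:0000000000000000000000000000000000000000000000000000000000000000
```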
|
# ? Jan 15, 2020 18:33 |
|
totally, and ideally you're injecting actual values for that hash some other way.

fwiw, i completely agree with the earlier take that vc-backed docker-the-company is not to be trusted, and personally i'd rather use any other container runtime. at least the image format is standardized.

and yeah, never don't be mirroring deps - including images - internally. i'm liking artifactory as a one-stop-shop for that kinda thing.
|
# ? Jan 15, 2020 20:16 |
|
artifactory rules but their pricing is outrageous. having to buy the most expensive license @ ~$15k/year just to get S3 storage support is ridiculous.
|
# ? Jan 15, 2020 21:56 |
|
CRIP EATIN BREAD posted: "artifactory rules but their pricing is outrageous."

at my old job, the NPM guys wanted to charge us 90k per year to run our own NPM mirror on-site. we got artifactory instead lol
|
# ? Jan 16, 2020 00:21 |
|
Helianthus Annuus posted: "at my old job, the NPM guys wanted to charge us 90k per year to run our own NPM mirror on-site"

lmao, that explains why we just migrated from npm on-site to artifactory then
|
# ? Jan 16, 2020 00:24 |
|
Shaggar posted: "msbuild should go in the dumpster and Microsoft should write a maven.net"

MSBuild exists because Microsoft felt it couldn’t use NAnt directly - at the time they saw open source as “infectious”. these days they’d have just used NAnt or ported Maven
|
# ? Jan 16, 2020 01:36 |
|
msbuild and nant are equivalent so it would be pointless to switch. they need declarative builds
|
# ? Jan 16, 2020 01:38 |
|
Helianthus Annuus posted: "i've used a lot of jenkins instances, and no two are completely alike. there are lots of plugins to make it do exactly what you want. but they almost always become abandoned, and then you are stuck when you have to upgrade."

the most important rule for working with jenkins is to only use a plugin for something if it's impossible to do without one, and even then make doubly sure that you actually need to do it.
|
# ? Jan 20, 2020 05:03 |