Edit: one of the problems with GoCD is that the name is hard to search for. Turns out there's been some discussion of it in this thread already.

Does anybody have any strong opinions on Thoughtworks' Go? I was initially planning to move from Hudson to Jenkins (we went with Hudson back when it wasn't overshadowed by Jenkins), but I set up a test GoCD server and agents and I'm liking it so far. The thing I really dig is the pipeline visualizations, as we have dozens of separate repos in several different languages that all interact with each other, and in Hudson we do have some chaining set up, but not like it should be.

The concern I have with GoCD is that the community doesn't seem huge (one of the factors I take into account when choosing products, because of our team size, is whether the product's community is large enough - in other words, if I have a problem, is there someone out there that has already solved it?). I'm also not sure if it's grown much since they open-sourced it, or if it'll just wither and I'll be migrating again in a year.

Erwin fucked around with this message at 00:48 on Nov 17, 2015
# Nov 16, 2015 19:15


# Apr 29, 2024 15:46

wins32767 posted:Ok, so let me change the conversation a bit here. What's the kind of skills that a small but very rapidly growing company should look for in their first hire in a role that needs to handle some operations work? There isn't enough work for a full time operations position, nor will there be for at least a year, but I want to lay a good foundation.

What skills are missing in your current staff that are preventing your company from automating its deployment pipeline? Find someone with those skills. You're looking for someone to help the rest of you get on the same page so everyone can participate, not to do it for you.

Vulture Culture posted:I swear to God I'm going to lose my poo poo and harpoon the next person who refers to "devops role." The very idea of a DevOps role is completely antithetical to DevOps.

Right, the term is Thought Leader.

# Jan 14, 2016 16:16

Anybody know when Policyfiles are going to be considered not experimental/ready for use in ChefDK? I'm trying to improve my cookbook workflow but I don't want to get too invested in Berkshelf if it's going away soon.

# Feb 9, 2016 23:49

As in adding dependencies to metadata and pinning versions to environments? That's what I'm doing now and I'm fine with continuing to do that, but Berkshelf being in the ChefDK seemed to indicate that that's the way it's meant to be done. But in general, the 'way to do things' in Chef changes every 3 months anyway.
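For reference, the metadata-plus-environment approach looks roughly like this (cookbook names and version numbers are made up):

```ruby
# metadata.rb -- declare dependencies with loose constraints
name    'my_app'
version '1.4.2'
depends 'nginx', '~> 9.0'

# environments/production.rb (separate file) -- pin exact versions per environment:
#   name 'production'
#   cookbook 'my_app', '= 1.4.2'
#   cookbook 'nginx',  '= 9.0.3'
```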

# Feb 10, 2016 00:32

Vulture Culture posted:Is this still your take with the Blue Ocean stuff, or is this take solely restricted to old-style pipeline management? I'm looking for a decent CI setup for Chef and other infrastructure code.

Jenkins works fine for cookbooks. Chef works around a lot of the annoyances of Jenkins because it handles its own upstream dependency 'artifact' juggling. You only need one Jenkinsfile for all of your cookbooks, especially if you use a sort of feature flag to turn different steps on or off (for instance, linting with rubocop or cookstyle, but not both). You can put a config file of some sort in each cookbook repo to specify which steps to enable.

If you're defining everything as code (Jenkinsfiles, etc.) then Blue Ocean is mostly about looking nice, I think. But if you want to configure each job through the GUI, then you'll get more out of it. I don't do that, so I don't have much to say about Blue Ocean.

If 'other infrastructure code' is Terraform, check out kitchen-terraform: https://github.com/newcontext-oss/kitchen-terraform

Using a cookbook to install and configure Jenkins is a whole other level of frustration. The key things are that new versions of Jenkins often break Chef's official Jenkins cookbook, and they don't care about fixing it. Also, installing plugins with dependencies takes forever (literally hours to days) because the cookbook doesn't handle dependency resolution. You're better off gathering a list of all the plugins you want plus their dependencies and having your wrapper cookbook install each one without dependencies. There's an easy way to get that list with a groovy script from a running Jenkins instance (you'd spin up a temporary Jenkins master, hand-pick your plugins and install them, then get that list and plop it into an attribute in your cookbook).

But really, running the Jenkins master in a docker container is far less annoying, just because of the way the official image handles plugin installation.
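A rough sketch of that shared-Jenkinsfile idea (the file name `.ci.yml`, its keys, and the step commands are all invented; `readYaml` comes from the Pipeline Utility Steps plugin):

```groovy
// Shared Jenkinsfile sketch -- every cookbook repo carries a small .ci.yml
node {
    checkout scm
    def flags = readYaml(file: '.ci.yml')   // e.g. { cookstyle: true, kitchen: false }

    stage('Lint') {
        if (flags.cookstyle) { sh 'chef exec cookstyle .' }
    }
    stage('Unit') {
        if (flags.chefspec) { sh 'chef exec rspec' }
    }
    stage('Integration') {
        if (flags.kitchen) { sh 'kitchen test' }
    }
}
```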

# Jul 28, 2017 16:08

Bhodi posted:I completely disagree with this sentiment because unless you're using something that spits out the fully formed xml like job builder, the groovy pipeline scripts are by far the best way to couple the jobs with the code they manage within your change control.

edit: ultrabay2000 posted:Can anyone offer any pros/cons of GoCD compared to Jenkins currently? We're evaluating both of these.

GoCD seems to have a nicer UI out of the box but Jenkins is a lot more widely used. It seems GoCD was particularly strong with value streams but Jenkins has made progress on that.

edit2: Also GoCD expects all of your application servers to be managed by GoCD. It's not necessary, but I think that's their philosophy.

Erwin fucked around with this message at 16:43 on Jul 28, 2017
# Jul 28, 2017 16:31

Plorkyeran posted:There's two major components to Blue Ocean: the use of the in-repo Jenkinsfile to configure things, and the pretty new UI.

Jenkinsfile/groovy pipeline definition is part of the Pipeline plugin, not Blue Ocean. You can use them without installing Blue Ocean.

I do agree that not being able to run pipeline code locally sucks, and most new pipelines have a few failed runs at the beginning while you iterate and push like a chump. If I need to do something complicated, I usually try to put as much of the logic as I can in a Rakefile or equivalent, which I can test locally, then just call rake tasks from the pipeline.
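The pattern is to keep anything with real logic in plain Ruby so it can run outside Jenkins; the pipeline then just does `sh 'rake package'` or similar. A minimal sketch (file, method, and task names are made up):

```ruby
# build_helpers.rb -- logic the pipeline shells out to, but which you can run
# and test locally instead of push-and-pray iterating on Jenkins.

def artifact_name(app, version)
  "#{app}-#{version}.tar.gz"
end

# When executed directly (e.g. from a rake task or the pipeline),
# print the artifact name we'd build for the current VERSION.
if __FILE__ == $PROGRAM_NAME
  puts artifact_name('myapp', ENV.fetch('VERSION', 'dev'))
end
```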

# Jul 28, 2017 18:24

fletcher posted:I'm trying to create a pipeline job for the first time in jenkins but I'm not sure where to either tell it about my git repo where the Jenkinsfile is, or put in the Jenkinsfile directly. That dropdown is supposed to have "Pipeline script" and "Pipeline script from SCM", the latter being what you're looking for. I'm guessing you're missing a plugin or two?

# Nov 2, 2017 16:39

fletcher posted:Appreciate your reply on the go Use a data source in Terraform to search AMIs by tag and find the newest AMI with whatever tag (which Packer can set). This way Jenkins doesn’t have to pass around information. You can also tell Terraform to ignore AMI changes so it doesn’t redeploy every run, then you can do a targeted destroy and create if that’s easier.
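In Terraform that looks roughly like this (modern HCL syntax shown; the tag name/value and instance type are examples, and Packer would set the tag at bake time):

```hcl
# Find the newest AMI carrying the tag Packer sets
data "aws_ami" "app" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "tag:role"
    values = ["app-server"]
  }
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.app.id
  instance_type = "t3.small"

  lifecycle {
    # Don't replace the instance every time a newer AMI shows up;
    # do a targeted destroy/apply when you actually want to roll it.
    ignore_changes = [ami]
  }
}
```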

# Nov 11, 2017 14:17

Space Whale posted:What's the idiomatic way to get this out the door? I'm brand new to Jenkins and I've never green-fielded anything like this before. I'm sponging documentation but I hate not having anything to show.

# Jan 17, 2018 00:41

poemdexter posted:I would love for Jenkins to support the full Groovy language and not sandbox poo poo in weird ways.

I'm currently developing a shared pipeline library for some Jenkins stuff and I loving hate it. You end up with @NonCPS all over the place because of its weird sandboxing poo poo, they want you to use declarative syntax because it's new and better but it doesn't support anything mildly complicated, and the only way to test changes that can't be unit tested is to commit and push it and run a job on Jenkins because god forbid they support loading the library from anything besides source control, so you end up with a hundred commit messages like "gently caress it let's try putting this here." Seriously just support rake libraries or something.

# Mar 9, 2018 17:14

Extremely Penetrated posted:We're 100% on-prem, no butt stuff.

Extremely Penetrated posted:I'd like to make them a sample pipeline to use as a reference, and then make them responsible for their own crap. I'm not sure yet if I should do a build environment or have them build on their workstations and upload to the repository. Does this approach make sense?

Extremely Penetrated posted:I don't have a clear idea of our dev's typical workflow...what should I be asking them or looking for?

# Jun 7, 2018 17:01

Gyshall posted:Lint or die trying I'm always conflicted when starting at a new client whether to turn off format on save so I can just commit a drat change or die on that hill. When possible I add a terraform fmt to their pipeline and fail PRs if anything needs to be formatted.
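The pipeline gate can be as small as this (a sketch; wrapped in a function just so the failure message is explicit):

```shell
# Fail the build if any file would be rewritten by terraform fmt
check_fmt() {
  if ! terraform fmt -check -recursive; then
    echo "formatting required: run 'terraform fmt -recursive' and re-push" >&2
    return 1
  fi
}
```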

# Dec 12, 2018 02:33

New Yorp New Yorp posted:I just wanted to be sure I'm not missing some critical awesome Terraform feature that is worth throwing out several months of work and starting from scratch.

# Dec 19, 2018 16:56

New Yorp New Yorp posted:Okay, that's fair. My other complaints still stand. Unfortunately, I just don't have time to learn Go well enough to fix the broken things I've encountered.

Terraform taking an hour to create a resource is probably the Azure provider's fault. If you're sure you're not doing something wrong in your configuration, then go look at the provider's repo for issues related to whatever you're seeing. The Azure provider is what defines how Terraform interacts with the Azure API to kick off the resource creation and to know when it's finished. If it was working correctly, it would take the same amount of time as your ARM template. There's nothing about Terraform that would make it take longer for Azure to do things. Solve that issue and your other points are moot.

Terraform sucks in a lot of ways, but not in any of the ways you think it does. It's the best tool for what it does, and it's one of those things that you grow to hate because it's indispensable.

# Jan 9, 2019 19:12

New Yorp New Yorp posted:Is this just because the Azure provider sucks? I can accept that.

I've never had to deal with the Azure API, but it sounds like it gives a 404 if provided a non-existent resource ID, instead of a more informative message about no resource existing with that ID. Therefore the Azure provider needs to guess at the meaning of a 404 and whether it indicates a missing resource or an actual problem. This is just conjecture, but there are quite a few issues around various 404 errors on the provider's github repo. It sounds like they have to provide 404 interpretation logic for each resource type.

Also, this issue might be related to what you're seeing with resource groups: https://github.com/terraform-providers/terraform-provider-azurerm/issues/2629

It seems Azure identifies things by names? Yikes. The creator of that issue is creating a resource group with a name built from some variables. Then he creates another resource to add to that group, only he provides the resource_group_name as a string built from the same variables instead of actually referencing the created resource group resource. So maybe the resources in your resource group weren't assigned to it by referencing the resource group resource's name attribute, but just the same string value?

It really just sounds like Azure is an all-around shitshow, so hate Azure. Is Azure the only thing you're targeting with Terraform? If so, why use Terraform?
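In Terraform terms, the difference looks like this (resource and variable names are invented):

```hcl
resource "azurerm_resource_group" "main" {
  name     = "rg-${var.app}-${var.env}"
  location = var.location
}

resource "azurerm_virtual_network" "vnet" {
  name          = "vnet-${var.app}"
  location      = azurerm_resource_group.main.location
  address_space = ["10.0.0.0/16"]

  # Fragile: rebuilds the same string, so Terraform sees no dependency edge
  # resource_group_name = "rg-${var.app}-${var.env}"

  # Better: reference the resource, which also orders create/destroy correctly
  resource_group_name = azurerm_resource_group.main.name
}
```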

# Jan 11, 2019 16:43

Cancelbot posted:It'll be the "refreshing state" part of the plan.

I think Terraform just has a list of "this R53 record should exist here" in its state file, which then fires off a metric poo poo-ton of AWS API calls to verify that is indeed the case. It'll then do a diff based on what is consistent with the AWS state & the new computed state, rather than be smarter by looking at the HCL that changed prior to doing the refresh. Regardless of whether the configuration actually changed, it would need to refresh every resource anyway, since it needs to detect any drift.

# Feb 27, 2019 02:21

Do you have control over the connection and purposefully bring it down to limit costs, or is it a random as-available thing?

As much as I wouldn't recommend Chef to anyone these days, this sounds like something Chef is more suited for than Ansible, if you wanted to use an existing CM tool. Since Chef runs as an agent in a 'pull' model, each client would hit the Chef server whenever they had connectivity to grab any new configuration. Looks like the Docker cookbook can manage containers as well, but I think I'd separate system configuration and container orchestration and update a compose file like VC said.

If you have the budget for it, I know first-hand that DC/OS runs on cruise ships with sporadic satellite connections. I have strong opinions on DC/OS and simple container orchestration wouldn't be a use case I'd pick it for, but I know it's successfully solving the problem you are trying to solve.

# Mar 12, 2019 14:53

Mr Shiny Pants posted:How dumb would it be to run Jenkins agents remotely and have them do the hard work? In a slave mode they do almost exactly what I want.

Gross. Besides, Jenkins agents are meant to run pipelines that start and finish, not indefinite services. Or do you mean have a Jenkins agent at each site that orchestrates other servers?

# Mar 12, 2019 17:00

LochNessMonster posted:Whats the current standard for spinning up a PoC bare metal k8s cluster to show off some standard capabilities (nothing fancy). Kops, Kubeadm or Kubespray? Or should I just run k3s? To me kubeadm is simple enough that the opinionation of the other tools isn't worth the hassle.
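The kubeadm happy path is short enough to sketch (flags depend on your version and CNI choice; the token and hash come from the control plane):

```shell
# Bring up a control plane, then join workers (sketch)
init_control_plane() {
  # The CIDR shown matches flannel's default; other CNIs differ
  kubeadm init --pod-network-cidr=10.244.0.0/16
}

join_worker() {
  # Usage: join_worker <control-plane-host> <token> <ca-cert-hash>
  # Print the full command on the control plane with:
  #   kubeadm token create --print-join-command
  kubeadm join "$1:6443" --token "$2" --discovery-token-ca-cert-hash "$3"
}
```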

# Dec 1, 2020 16:27

I always thought Kubespray sounded gross. Like “ah jeez grab a mop, I got Kubernetes everywhere. Ew, it’s dripping off of the ceiling!”

# Dec 1, 2020 22:50

12 rats tied together posted:Also unless something has radically changed since 0.11

# Feb 3, 2021 18:39

Most of the Terraform problems that 12 rats is pointing out are only problems when you're writing the code, and have established solutions. Yes iteration in Terraform is awfully slow compared to application code, but you can test the results of convoluted logic and once it works, it works. You can even do (slow) TDD with tools like kitchen-terraform or Terratest. Terraform is often the best tool for infrastructure automation largely because of its wide use - you'd rarely be the first to run into a given issue, especially with the main heavily-used providers.

# Feb 4, 2021 15:35

Yes but the tutorial on containerizing their framework they found uses Nomad, soooooooo….

# Feb 20, 2022 18:15

LochNessMonster posted:Never tried Nomad but if you ever find yourself in a position where you need to choose between DC/OS and K8s just go for the latter and never look back.

Seconded. Is the official install process still a shell script with several hundred megabytes of binary data baked into it and no automation? Because it used to be, and it was stupid.

# Feb 20, 2022 18:59

Make sure you destroy everything and recreate it regularly to ensure that still works. Rolling forward with your Terraform and never starting over won't guarantee it'll work from scratch next time you want to reuse the module.
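One way to keep yourself honest is a scheduled job that rebuilds from nothing (a sketch; this assumes a disposable test workspace, never your real state):

```shell
# Periodically prove the module still converges from scratch
exercise_module() {
  terraform init -input=false || return 1
  # Tear the disposable environment all the way down...
  terraform destroy -auto-approve || return 1
  # ...and prove it still comes up from nothing
  terraform apply -auto-approve
}
```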

# Aug 4, 2022 02:12

some kinda jackal posted:If I need to schedule a pod on a node that is expected to have a specific external device mounted, in my mind this is a job labelling the node with hasdevice=true and podspec nodeSelector hasdevice: true? Yeah, this is the way to do it. It's not highly-available but that's not worth worrying about for your home stuff. It's how I make sure Home Assistant runs on the node that has the ZWave stick attached.
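Concretely, that's one kubectl command plus two lines of podspec (pod name and image are examples; note the selector value has to be the quoted string "true", since label values are strings, not booleans):

```yaml
# One-off, on the cluster: kubectl label node <node-name> hasdevice=true
apiVersion: v1
kind: Pod
metadata:
  name: home-assistant          # example name
spec:
  nodeSelector:
    hasdevice: "true"           # quoted: label values are strings
  containers:
    - name: home-assistant
      image: ghcr.io/home-assistant/home-assistant:stable
```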

# Aug 26, 2022 16:31

some kinda jackal posted:After a whirlwind few months of applying random helm charts and resources from yaml lying around my various laptops onto my cluster and then promptly forgetting what is installed or where the original sources are, I’m kind of ready to see if gitops solves this for me a little. It sounds like you should just start with git as a single source of truth and not worry about deploying automatically. You can still use kubectl on whichever machine you’re sitting at - just be sure to pull first and commit and push after. Later you can add a pipeline to do it for you.
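The manual loop is tiny (a sketch; the manifests/ path and commit message are examples):

```shell
# Pull first, apply, then record what you actually ran
sync_manifests() {
  git pull --rebase || return 1
  kubectl apply -f manifests/ || return 1
  git add manifests/
  # Commit is a no-op if nothing changed locally
  git commit -m "sync manifests" || true
  git push
}
```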

# Aug 27, 2022 00:33

Lucid Nonsense posted:The server you install our software on needs a license. Sending devices don't affect licensing, but licensing is based on the volume ingested. Dear Datadog…

# Jan 27, 2023 22:37

Tell me about it. We had meta-alerts set up specifically to avoid the dozens of billing traps they design into their features.

# Jan 27, 2023 23:02

LochNessMonster posted:I’d choose ansible over chef/puppet all day every day.

Absolutely, ruby sucks. Salt sucks too. Ansible is the least annoying of the four. However, for this:

Junkiebev posted:Business looking for something which does stuff like “disable smbv1 client connectivity on all endpoints”, and I’m looking for something like “Change State and track diffs at scale and the only requisite should be network connectivity and a Linux kernel.”

...I'd do immutable infrastructure and not worry about a config management tool, unless I really needed one in the build pipeline.

# Dec 13, 2023 13:51