Erwin
Feb 17, 2006

Edit: one of the problems with GoCD is that the name is hard to search for. Turns out there's been some discussion of it in this thread already.

Does anybody have any strong opinions on Thoughtworks' Go? I was initially planning to move from Hudson to Jenkins (we went with Hudson back when it wasn't overshadowed by Jenkins), but I set up a test GoCD server and agents and I'm liking it so far. The thing I really dig is the pipeline visualizations, as we have dozens of separate repos in several different languages that all interact with each other, and in Hudson we do have some chaining set up, but not like it should be.

The concern I have with GoCD is that the community doesn't seem huge (one of the factors I take into account when choosing products, because of our team size, is whether the product's community is large enough - in other words, if I have a problem, is there someone out there that has already solved it?). I'm also not sure if it's grown much since they open-sourced it, or if it'll just wither and I'll be migrating again in a year.

Erwin fucked around with this message at 00:48 on Nov 17, 2015

Erwin
Feb 17, 2006

wins32767 posted:

Ok, so let me change the conversation a bit here. What kind of skills should a small but very rapidly growing company look for in its first hire for a role that needs to handle some operations work? There isn't enough work for a full-time operations position, nor will there be for at least a year, but I want to lay a good foundation.

What skills are missing in your current staff that are preventing your company from automating its deployment pipeline? Find someone with those skills. You're looking for someone to help the rest of you get on the same page so everyone can participate, not to do it for you.

Vulture Culture posted:

I swear to God I'm going to lose my poo poo and harpoon the next person who refers to "devops role." The very idea of a DevOps role is completely antithetical to DevOps.

Right, the term is Thought Leader.

Erwin
Feb 17, 2006

Anybody know when Policyfiles are going to be considered not experimental/ready for use in ChefDK? I'm trying to improve my cookbook workflow but I don't want to get too invested in Berkshelf if it's going away soon.

Erwin
Feb 17, 2006

As in adding dependencies to metadata and pinning versions to environments? That's what I'm doing now and I'm fine with continuing to do that, but Berkshelf being in the ChefDK seemed to indicate that that's the way it's meant to be done. But in general, the 'way to do things' in Chef changes every 3 months anyway.
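
For anyone following along, "metadata and environments" just means something like this (cookbook names and versions made up):

code:
# metadata.rb -- declare the dependency
depends 'nginx', '~> 10.0'

# environments/production.rb -- pin exact cookbook versions for that environment
name 'production'
cookbook 'my_app', '= 2.3.1'
cookbook 'nginx',  '= 10.0.4'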

Erwin
Feb 17, 2006

Vulture Culture posted:

Is this still your take with the Blue Ocean stuff, or is this take solely restricted to old-style pipeline management? I'm looking for a decent CI setup for Chef and other infrastructure code.

Jenkins works fine for cookbooks. Chef works around a lot of the annoyances of Jenkins because it handles its own upstream dependency 'artifact' juggling. You only need one Jenkinsfile for all of your cookbooks, especially if you set up a sort of feature-flag system to turn different steps on or off (for instance, linting with rubocop or cookstyle, but not both). You can put a config file of some sort in each cookbook repo to specify which steps to enable.
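
Something like this sketch is what I mean - the ci.json name is arbitrary and readJSON comes from the Pipeline Utility Steps plugin, so adjust for what you actually have installed:

code:
// Shared Jenkinsfile sketch: each cookbook repo carries a tiny ci.json like
// {"cookstyle": true, "kitchen": false} that switches optional stages on or off.
pipeline {
  agent any
  stages {
    stage('Config') {
      steps {
        script {
          // default to an empty map if the repo has no config file
          ciConfig = fileExists('ci.json') ? readJSON(file: 'ci.json') : [:]
        }
      }
    }
    stage('Lint') {
      when { expression { ciConfig.get('cookstyle') != false } }  // on unless explicitly disabled
      steps { sh 'chef exec cookstyle .' }
    }
    stage('Test') {
      when { expression { ciConfig.get('kitchen') != false } }
      steps { sh 'kitchen test' }
    }
  }
}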

If you're defining everything as code (Jenkinsfiles, etc) then Blue Ocean is mostly about looking nice I think. But, if you want to configure each job through the GUI, then you'll get more out of it. I don't do that so I don't have much to say about Blue Ocean.

If 'other infrastructure code' is Terraform, check out kitchen-terraform: https://github.com/newcontext-oss/kitchen-terraform

Using a cookbook to install and configure Jenkins is a whole other level of frustration. The key things are that new versions of Jenkins often break Chef's official Jenkins cookbook, and they don't care about fixing it. Also, installing plugins with dependencies takes forever (like literally hours to days) because the cookbook doesn't handle dependency resolution. You're better off gathering a list of all the plugins you want plus their dependencies and having your wrapper cookbook install each one without dependencies. There's an easy way to get that list with a Groovy script from a running Jenkins instance (you'd spin up a temporary Jenkins master, hand-pick your plugins and install them, then get that list and plop it into an attribute in your cookbook). But really, running the Jenkins master in a Docker container is far less annoying, just because of the way the official image handles plugin installation.
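
The script console snippet for dumping the plugin list is basically a one-liner (run it from Manage Jenkins > Script Console on the throwaway master):

code:
// prints shortname:version for every installed plugin, ready to paste into a cookbook attribute
Jenkins.instance.pluginManager.plugins.sort { it.shortName }.each {
  println "${it.shortName}:${it.version}"
}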

Erwin
Feb 17, 2006

Bhodi posted:

I completely disagree with this sentiment because unless you're using something that spits out the fully formed xml like job builder, the groovy pipeline scripts are by far the best way to couple the jobs with the code they manage within your change control.

I think that's what I was saying, though? What I meant is that if you're using the GUI to configure bespoke jobs, you'll get more out of Blue Ocean than just a pretty interface. You'll get more out of Jenkins as a whole with Jenkinsfiles in your repos instead of doing anything manually. I think we're on the same page. However, the extent of my Blue Ocean experience is opening a job in Blue Ocean to see a better view of the pipeline flow; I've done nothing else in Blue Ocean specifically.

edit:

ultrabay2000 posted:

Can anyone offer any pros/cons of GoCD compared to Jenkins currently? We're evaluating both of these. GoCD seems to have a nicer UI out of the box but Jenkins is a lot more widely used. It seems GoCD was particularly strong with value streams but Jenkins has made progress on that.

I'm leaning towards Jenkins because it's free and more widely used but we use GoCD heavily already.

I did a proof-of-concept buildout with GoCD at my old job. I put everything we did in Hudson into GoCD (Java, Perl, Python, MATLAB, SQL, and Node.js), it all worked, and GoCD is way better looking. We then went to Jenkins not because GoCD didn't work, but because it was a small team and Jenkins is just so much more Googleable. GoCD's forums were (are) in Google Groups and it was very hard to find any answers. With Jenkins, it's almost guaranteed that whatever you're trying to do has been done and discussed online by countless other people.

edit2: Also GoCD expects all of your application servers to be managed by GoCD. It's not necessary, but I think that's their philosophy.

Erwin fucked around with this message at 16:43 on Jul 28, 2017

Erwin
Feb 17, 2006

Plorkyeran posted:

There's two major components to Blue Ocean: the use of the in-repo Jenkinsfile to configure things, and the pretty new UI.

Jenkinsfiles are an improvement, but don't really solve any of the actual problems I have with Jenkins. You can't run a Jenkinsfile locally, so the fact that you can put logic in them is just as much of a trap as it always has been. Jobs which were working yesterday will still break tomorrow because someone updated a plugin that you aren't even using.

The new UI is a broken pile of garbage. It looks better than the old UI, but is incredibly slow and buggy.

Jenkinsfile/groovy pipeline definition is part of the Pipeline plugin, not Blue Ocean. You can use them without installing Blue Ocean. I do agree that not being able to run pipeline code locally sucks, and most new pipelines have a few failed runs at the beginning while you iterate and push like a chump. If I need to do something complicated, I usually try to put as much of the logic as I can in a Rakefile or equivalent, which I can test locally, then just call rake tasks from the pipeline.
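
The Rakefile side of that is nothing fancy - a sketch with made-up task names:

code:
# Rakefile -- the logic you can iterate on locally instead of push-and-pray on Jenkins
task :lint do
  sh 'cookstyle .'
end

task :test => :lint do
  sh 'kitchen test'
end

# the Jenkinsfile stage then just does: sh 'bundle exec rake test'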

Erwin
Feb 17, 2006

fletcher posted:

I'm trying to create a pipeline job for the first time in jenkins but I'm not sure where to either tell it about my git repo where the Jenkinsfile is, or put in the Jenkinsfile directly.

I see this Definition picklist in the Jenkins UI but it is empty. What am I doing wrong here?



That dropdown is supposed to have "Pipeline script" and "Pipeline script from SCM", the latter being what you're looking for. I'm guessing you're missing a plugin or two?

Erwin
Feb 17, 2006

fletcher posted:

Appreciate your reply on the go :D

I think the catch here is that I want these to be separate jobs.

So one job that runs every commit: test kitchen, chef deploy, packer builds

A separate job that runs at specific times of the day: Apply the most recent AMI to the test environment

I could just write a little python script to figure out what the newest AMI is but I figured there was a way to do this by passing around jenkins parameters

Use a data source in Terraform to search AMIs by tag and pick the newest one that matches (Packer can set the tag when it builds the image). This way Jenkins doesn't have to pass around information.

You can also tell Terraform to ignore AMI changes so it doesn’t redeploy every run, then you can do a targeted destroy and create if that’s easier.
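
Roughly like this, assuming the AWS provider and 0.12+ syntax (tag name and value made up):

code:
# find the newest AMI that Packer tagged, instead of passing IDs through Jenkins
data "aws_ami" "app" {
  most_recent = true
  owners      = ["self"]

  filter {
    name   = "tag:role"        # example tag that Packer sets on the image
    values = ["app-server"]
  }
}

resource "aws_instance" "app" {
  ami           = data.aws_ami.app.id
  instance_type = "t3.medium"

  lifecycle {
    # don't replace the instance on every run just because a newer AMI showed up;
    # do a targeted destroy/apply when you actually want to roll it
    ignore_changes = [ami]
  }
}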

Erwin
Feb 17, 2006

Space Whale posted:

What's the idiomatic way to get this out the door? I'm brand new to Jenkins and I've never green-fielded anything like this before. I'm sponging documentation but I hate not having anything to show.

The best way to do anything in Jenkins is to keep as much logic as you can outside of Jenkins. Write your XML parser jawn in whatever language you want. Add a Jenkinsfile to that repo with one stage that just does sh 'myscript.rb' or whatever. Then add a Pipeline job in Jenkins, give it a schedule, and point it at your repo. Bada bing bada boom, you have a working process that can be easily productionized elsewhere.
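
i.e. the whole Jenkinsfile can be about this long (the cron spec is just an example; you can also set the schedule on the job itself):

code:
// minimal Jenkinsfile sketch: Jenkins only schedules and runs the script, nothing else
pipeline {
  agent any
  triggers { cron('H 2 * * *') }  // example nightly schedule
  stages {
    stage('Run') {
      steps { sh './myscript.rb' }  // the script living in the same repo
    }
  }
}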

Erwin
Feb 17, 2006

poemdexter posted:

I would love for Jenkins to support the full Groovy language and not sandbox poo poo in weird ways.

I'm currently developing a shared pipeline library for some Jenkins stuff and I loving hate it. You end up with @NonCPS all over the place because of its weird sandboxing poo poo; they want you to use declarative syntax because it's new and better, but it doesn't support anything mildly complicated; and the only way to test changes that can't be unit tested is to commit, push, and run a job on Jenkins, because god forbid they support loading the library from anything besides source control, so you end up with a hundred commit messages like "gently caress it let's try putting this here."

Seriously just support rake libraries or something.
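
For anyone who hasn't had the pleasure, the @NonCPS dance looks something like this (a made-up var in a shared library):

code:
// vars/sortedHosts.groovy in the shared library
// Iterator-heavy Groovy (sort/collect with closures) tends to blow up under the CPS
// transform, so it gets wrapped in @NonCPS and called from the pipeline steps.
@NonCPS
def call(List hosts) {
  return hosts.sort { it.toLowerCase() }
}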

Erwin
Feb 17, 2006

Extremely Penetrated posted:

We're 100% on-prem, no butt stuff.
heh

Extremely Penetrated posted:

I'd like to make them a sample pipeline to use as a reference, and then make them responsible for their own crap. I'm not sure yet if I should do a build environment or have them build on their workstations and upload to the repository. Does this approach make sense?

Container images that make it to production should be built within your CI/CD pipeline, not on developer workstations. The developers may need to build images locally for development work, but the workflow should be: local development -> push the code change only -> some sort of automated testing and whatever merge process -> deployment.

Extremely Penetrated posted:

I don't have a clear idea of our dev's typical workflow...what should I be asking them or looking for?

Just sit down with them and watch them deploy a change. Start by automating the minimum amount necessary to keep them from having to manually copy files around. That could just mean the developer checks in code and then goes and clicks a button to copy the files instead of doing it themselves. That's not great, but it's a step in the right direction, and it's easier to get buy-in with that than to burn everything down and start over. Work in little steps towards a proper deployment pipeline. Every change you make should make the developer's life easier in some way. Find the low-hanging fruit first and you'll get buy-in for the more involved stuff down the road.

Erwin
Feb 17, 2006

Gyshall posted:

Lint or die trying

Terraform fmt or die trying

I'm always conflicted when starting at a new client whether to turn off format-on-save so I can just commit a drat change, or to die on that hill. When possible I add a terraform fmt check to their pipeline and fail PRs if anything needs to be reformatted.
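
The check itself is one command, and it fails the build if anything isn't formatted:

code:
# exits non-zero if any file needs reformatting; -diff shows what would change
terraform fmt -check -recursive -diff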

Erwin
Feb 17, 2006

New Yorp New Yorp posted:

I just wanted to be sure I'm not missing some critical awesome Terraform feature that is worth throwing out several months of work and starting from scratch.

Perhaps whatever you're deploying to Azure has Terraform providers? E.g. if you're creating a Kubernetes cluster and a GitLab server and *checks docs* ...Iunno, RabbitMQ or something, you can configure them end-to-end with one Terraform run. I don't think that's worth redoing all of the existing work, though. I'd default to starting with Terraform for any new project instead of the cloud provider's DSL, but I don't think rewriting code that already exists would be my number one priority.

Erwin
Feb 17, 2006

New Yorp New Yorp posted:

Okay, that's fair. My other complaints still stand. Unfortunately, I just don't have time to learn Go well enough to fix the broken things I've encountered.

Terraform taking an hour to create a resource is probably the Azure provider's fault. If you're sure you're not doing something wrong in your configuration, then go look at the provider's repo for issues related to whatever you're seeing. The Azure provider is what defines how Terraform interacts with the Azure API to kick off the resource creation and to know when it's finished. If it was working correctly, it would take the same amount of time as your ARM template. There's nothing about Terraform that would make it take longer for Azure to do things. Solve that issue and your other points are moot.

Terraform sucks in a lot of ways, but not in any of the ways you think it does. It's the best tool for what it does, and it's one of those things that you grow to hate because it's indispensable.

Erwin
Feb 17, 2006

New Yorp New Yorp posted:

Is this just because the Azure provider sucks? I can accept that.

Yes. And more importantly, the Azure API.

I've never had to deal with the Azure API, but it sounds like it returns a 404 when you ask for a non-existent resource ID, rather than a more informative error saying no resource exists with that ID. So the Azure provider has to guess what a 404 means: is the resource simply missing, or is something actually wrong? This is just conjecture, but there are quite a few issues around various 404 errors on the provider's GitHub repo, and it sounds like they have to provide 404-interpretation logic for each resource type.

Also, this issue might be related to what you're seeing with resource groups: https://github.com/terraform-providers/terraform-provider-azurerm/issues/2629 It seems Azure identifies things by name? Yikes. The creator of that issue builds a resource group name from some variables, then creates another resource in that group, but provides the resource_group_name as a string built from the same variables instead of actually referencing the created resource group resource. So maybe the resources in your resource group weren't assigned to it by referencing the Terraform resource's name attribute, but by rebuilding the same string value?
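
In Terraform terms, the difference is roughly this (names made up):

code:
resource "azurerm_resource_group" "main" {
  name     = "${var.project}-${var.environment}-rg"
  location = var.location
}

resource "azurerm_virtual_network" "main" {
  name          = "${var.project}-vnet"
  location      = azurerm_resource_group.main.location
  address_space = ["10.0.0.0/16"]

  # reference the resource group resource instead of rebuilding the same
  # "${var.project}-${var.environment}-rg" string here -- the reference gives
  # Terraform an explicit dependency and a single source of truth for the name
  resource_group_name = azurerm_resource_group.main.name
}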

It really just sounds like Azure is an all-around shitshow, so direct the hate at Azure rather than Terraform. Is Azure the only thing you're targeting with Terraform? If so, why use Terraform?

Erwin
Feb 17, 2006

Cancelbot posted:

It'll be the "refreshing state" part of the plan. I think Terraform just has a list of "this R53 record should exist here" in its state file, which then fires off a metric poo poo-ton of AWS API calls to verify that is indeed the case. It'll then do a diff based on what is consistent with the AWS state & the new computed state, rather than be smarter by looking at the HCL that changed prior to doing the refresh.

Regardless of whether the configuration actually changed, it would need to refresh every resource anyway, since it has to detect (and then correct) any drift.

Erwin
Feb 17, 2006

Do you have control over the connection and purposefully bring it down to limit costs, or is it a random as-available thing? As much as I wouldn't recommend Chef to anyone these days, this sounds like something Chef is more suited for than Ansible, if you wanted to use an existing CM tool. Since Chef runs as an agent in a 'pull' model, each client would hit the Chef server whenever they had connectivity to grab any new configuration. Looks like the Docker cookbook can manage containers as well, but I think I'd separate system configuration and container orchestration and update a compose file like VC said.

If you have the budget for it, I know first-hand that DC/OS runs on cruise ships with sporadic satellite connections. I have strong opinions on DC/OS and simple container orchestration wouldn't be a use case I'd pick it for, but I know it's successfully solving the problem you are trying to solve.

Erwin
Feb 17, 2006

Mr Shiny Pants posted:

How dumb would it be to run Jenkins agents remotely and have them do the hard work? In slave mode they do almost exactly what I want.

Gross. Besides, Jenkins agents are meant to run pipelines that start and finish, not indefinite services. Or do you mean have a Jenkins agent at each site that orchestrates other servers?

Erwin
Feb 17, 2006

LochNessMonster posted:

What's the current standard for spinning up a PoC bare-metal k8s cluster to show off some standard capabilities (nothing fancy)? Kops, Kubeadm, or Kubespray? Or should I just run k3s?

To me kubeadm is simple enough that the opinionation of the other tools isn't worth the hassle.

Erwin
Feb 17, 2006

I always thought Kubespray sounded gross. Like “ah jeez grab a mop, I got Kubernetes everywhere. Ew, it’s dripping off of the ceiling!”

Erwin
Feb 17, 2006

12 rats tied together posted:

Also unless something has radically changed since 0.11

The entire syntax has radically changed, yeah. Terraform is a pain in the rear end, but it's often the best tool for the job. You're the biggest detractor of it in this thread, and that's totally fine because it deserves heavy criticism, but you often make authoritative statements about it that are incorrect. For instance, 0-to-n resource counts work fine, and I'm not sure what you're getting at there.

Erwin
Feb 17, 2006

Most of the Terraform problems that 12 rats is pointing out are only problems while you're writing the code, and they have established solutions. Yes, iteration in Terraform is awfully slow compared to application code, but you can test the results of convoluted logic, and once it works, it works. You can even do (slow) TDD with tools like kitchen-terraform or Terratest. Terraform is often the best tool for infrastructure automation largely because of its wide use - you'd rarely be the first to run into a given issue, especially with the main heavily-used providers.

Erwin
Feb 17, 2006

Yes, but the tutorial they found on containerizing their framework uses Nomad, soooooooo….

Erwin
Feb 17, 2006

LochNessMonster posted:

Never tried Nomad, but if you ever find yourself in a position where you need to choose between DC/OS and K8s, just go for the latter and never look back.

Seconded. Is the official install process still a shell script with several hundred megabytes of binary data baked into it and no automation? Because it used to be and it was stupid.

Erwin
Feb 17, 2006

Make sure you destroy everything and recreate it regularly to ensure that still works. Rolling forward with your Terraform and never starting over won't guarantee it'll work from scratch the next time you want to reuse the module.

Erwin
Feb 17, 2006

some kinda jackal posted:

If I need to schedule a pod on a node that is expected to have a specific external device mounted, in my mind this is a job for labelling the node with hasdevice=true and a podspec nodeSelector hasdevice: true?

Yeah, this is the way to do it. It's not highly-available but that's not worth worrying about for your home stuff. It's how I make sure Home Assistant runs on the node that has the ZWave stick attached.
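
i.e. something like this, with made-up names:

code:
# first: kubectl label node worker-2 hasdevice=true
apiVersion: v1
kind: Pod
metadata:
  name: home-assistant
spec:
  nodeSelector:
    hasdevice: "true"      # quoted so YAML treats it as a string, not a boolean
  containers:
    - name: home-assistant
      image: ghcr.io/home-assistant/home-assistant:stable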

Erwin
Feb 17, 2006

some kinda jackal posted:

After a whirlwind few months of applying random helm charts and resources from yaml lying around my various laptops onto my cluster and then promptly forgetting what is installed or where the original sources are, I’m kind of ready to see if gitops solves this for me a little.

I don’t think I’m looking to buy into a philosophy or anything, at this point I think I just need something like Argo as a single-source-of-truth across all my little lab clusters given how frequently I blow everything up.

It sounds like you should just start with git as a single source of truth and not worry about deploying automatically. You can still use kubectl on whichever machine you’re sitting at - just be sure to pull first and commit and push after. Later you can add a pipeline to do it for you.

Erwin
Feb 17, 2006

Lucid Nonsense posted:

The server you install our software on needs a license. Sending devices don't affect licensing, but licensing is based on the volume ingested.

Dear Datadog…

Erwin
Feb 17, 2006

Tell me about it. We had meta-alerts set up specifically to avoid the dozens of billing traps they design into their features.

Adbot
ADBOT LOVES YOU

Erwin
Feb 17, 2006

LochNessMonster posted:

I’d choose ansible over chef/puppet all day every day.

Absolutely, Ruby sucks. Salt sucks too. Ansible is the least annoying of the four; however, for this:

Junkiebev posted:

Business looking for something which does stuff like “disable smbv1 client connectivity on all endpoints”, and I’m looking for something like “Change State and track diffs at scale and the only requisite should be network connectivity and a Linux kernel.”

So, Green-Field solution for Fleet Management of (mostly ephemeral) Linux nodes w/ GitOps-driven Config As Code in multiple cloud providers (:words:) with a pathway towards something enterprisesque for support.

...I'd do immutable infrastructure and not worry about a config management tool, unless I really need one in the build pipeline.
