the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture

tell me how dumb this plan is:

1. encrypt sensitive information in my ansible playbooks locally with ansible vault
2. push to git
3. provisioning pulls the git repo and decrypts using ansible-vault and a password stored on the provisioning server
4. aws codedeploy runs ansible locally to do final provisioning on a mostly preprovisioned ami running in an asg

i don't need really good security, i just want to keep passwords out of git but i don't want to run zookeeper/consul/whatever to do so if i can avoid it. i also want to avoid putting the passwords in my asg amis if possible, hence decrypting on provisioning
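
concretely, steps 1 and 3 are just a couple of commands; a rough sketch (filenames and the password-file path are placeholders):

    # locally: encrypt the secrets before committing
    ansible-vault encrypt group_vars/all/secrets.yml   # prompts for the vault password
    git add group_vars/all/secrets.yml && git commit -m 'encrypt secrets'

    # on the provisioning server: vault password lives in a root-only file
    git pull
    ansible-playbook site.yml --vault-password-file /etc/ansible/.vault_pass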

the talent deficit fucked around with this message at 20:37 on Jul 12, 2015

Bhodi
Dec 9, 2007

Oh, it's just a cat.
Pillbug
Let me know how it works; I considered doing that but settled on a standard secrets directory.

Hughlander
May 11, 2005

the talent deficit posted:

tell me how dumb this plan is:

1. encrypt sensitive information in my ansible playbooks locally with ansible vault
2. push to git
3. provisioning pulls the git repo and decrypts using ansible-vault and a password stored on the provisioning server
4. aws codedeploy runs ansible locally to do final provisioning on a mostly preprovisioned ami running in an asg

i don't need really good security, i just want to keep passwords out of git but i don't want to run zookeeper/consul/whatever to do so if i can avoid it. i also want to avoid putting the passwords in my asg amis if possible, hence decrypting on provisioning

Have you also considered AWS KMS? http://aws.amazon.com/kms/
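
The CLI flow is roughly this (key alias and filenames are made up; the instance needs an IAM role allowing kms:Decrypt, and direct KMS encryption tops out at 4 KB of plaintext):

    # encrypt locally, commit only the ciphertext blob
    aws kms encrypt --key-id alias/my-app-secrets \
        --plaintext fileb://secrets.yml \
        --query CiphertextBlob --output text | base64 -d > secrets.yml.enc

    # decrypt on the instance at provision time
    aws kms decrypt --ciphertext-blob fileb://secrets.yml.enc \
        --query Plaintext --output text | base64 -d > secrets.yml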

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.
Alternatively, https://vaultproject.io/

magnetic
Jun 21, 2005

kiteless, master, teach me.
I have been tasked with implementing our Atlassian suite (Bitbucket, JIRA, Bamboo) and I have very little experience with any of them. Bitbucket and JIRA are straightforward and make sense. Bamboo is where I am falling down: ours is "cloud" hosted, and my biggest hurdle is the post-build task of moving the website files to our network (datacenter) for deployment. Is that even possible? Am I going to need to host Bamboo on my own network in order to save the project files to our servers?

Any help/ideas are most welcome. Thanks!

the talent deficit
Dec 20, 2003

self-deprecation is a very british trait, and problems can arise when the british attempt to do so with a foreign culture

yes, and also vault, but kms makes some things impossible that are dumb but culturally ingrained at my company, and vault is another service i'd have to manage. my main concern for now is removing user accounts and passwords from plaintext git

foundtomorrow
Feb 10, 2007

magnetic posted:

I have been tasked with implementing our Atlassian suite (Bitbucket, JIRA, Bamboo) and I have very little experience with any of them. Bitbucket and JIRA are straightforward and make sense. Bamboo is where I am falling down: ours is "cloud" hosted, and my biggest hurdle is the post-build task of moving the website files to our network (datacenter) for deployment. Is that even possible? Am I going to need to host Bamboo on my own network in order to save the project files to our servers?

Any help/ideas are most welcome. Thanks!

Look up "bamboo agent install". It will depend a lot on your particular application and environments that you're deploying to.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE

Ithaqua posted:

The other big challenge is to get people to start using feature flags and short-lived dev branches so you can ship your code even if a feature is half-completed. The killer is usually database stuff -- it's hard to get into the mindset of never (rarely) introducing breaking database schema changes.

There are also dedicated schema management tools which attempt to address this. I haven't used one, but one that caught my eye was Flyway. The gist is that you dump some baseline content set or schema as "REV 001" and then store all schema/content changes (it's important that it be all changes) as additional revisions. By walking through the set of revisions you end up with whatever state the program expects. This can be done as part of a Maven build script or whatever.
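
On disk that ends up looking something like this, as I understand it (file names illustrative; Flyway's convention is V<version>__<description>.sql):

    # migrations live in the repo, one file per change, never edited after the fact
    ls sql/
    #   V1__baseline_schema.sql
    #   V2__add_customer_table.sql
    #   V3__add_email_to_customer.sql

    # walk any database up to the latest revision (connection settings are placeholders)
    flyway -url=jdbc:postgresql://localhost/myapp -user=myapp migrate

    # show which revisions a given database has applied
    flyway -url=jdbc:postgresql://localhost/myapp -user=myapp info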

I guess I use a ghetto version of that. What I have is a set of scripts that produce backups, which I check into git. You can then run the scripts and instantiate a database. For this to work well, the ordering of the rows needs to be stable (Postgres doesn't guarantee any ordering of exported data), and you want one SQL file per table; I hacked up a version of pg_dump_splitsort for that. It also doesn't work well when adding or removing columns, since that means every row changes - typically I make one commit for the schema change and then a second commit for any content changes so that the changes stand out. You can obviously use branches, etc. if you want to have multiple incompatible versions under development concurrently. Flyway would be nicer there because you see the exact change that's occurring rather than a gigantic block of changed rows. We do more content changes than schema changes, so it works OK in practice.
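
The stable-ordering bit doesn't strictly need a patched pg_dump, by the way; something like this works too (table list and sort key are placeholders, and it assumes every table has an id column):

    # one file per table, rows in deterministic order so git diffs stay small
    for t in customers orders line_items; do
      psql -d myapp -c "COPY (SELECT * FROM $t ORDER BY id) TO STDOUT" > "dump/$t.tsv"
    done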

Lord Of Texas posted:

"Breaking" database schema changes can be part of a toggles/feature flags approach too, the key to making that easy is having an SOA architecture where you don't have 50 different apps reading and writing from the same database tables.

If you instead have your tables behind a service that manages them, you can work around those changes within the bounded context and ensure you're not impacting anything used in production (e.g. if someone added a not-null column that's not used yet, you can have your service insert default values to that column for the time being)

As sort of a halfway approach, I broke our database into "content" and "logging" halves. We have a few really legacy versions still kicking around and there's no money to rewrite and improve them, so to let us move forward, all the instances share the logging DB for configuration, user data, and logging, but each one can have its own version of our content set. It's not perfect but it has at least unjammed our forward progress.

The really stupid thing is that the most legacy version of all is a JSON implementation which a phone app talks to, so we could make the web front-end be a wrapper around the JSON service and roll our improvements back into the phone app...

Paul MaudDib fucked around with this message at 00:39 on Jul 15, 2015

sink
Sep 10, 2005

gerby gerb gerb in my mouf

Paul MaudDib posted:

There are also dedicated schema management tools which attempt to address this. I haven't used one, but one that caught my eye was Flyway. The gist is that you dump some baseline content set or schema as "REV 001" and then store all schema/content changes (it's important that it be all changes) as additional revisions. By walking through the set of revisions you end up with whatever state the program expects. This can be done as part of a Maven build script or whatever.

Flyway is great. I used it when it was a bit more minimal and we had to write a handful of bash scripts around it, but the effort was small. Now it looks like there is all kinds of build tool support.

It's probably obvious but worth mentioning explicitly: even with such a schema management tool, you're going to need to make sure your database schema stays backwards compatible with at least one previous version of your application, so that old and new code can run side by side during a deploy.
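
In practice that means splitting a breaking change into compatible steps. A sketch for renaming a column (table and column names invented):

    # step 1, ships with app vN: add the new column; vN writes both, reads the old one
    psql -d myapp -c "ALTER TABLE users ADD COLUMN full_name text"

    # step 2, ships with app vN+1 (reads/writes full_name only): backfill, then drop
    psql -d myapp -c "UPDATE users SET full_name = name WHERE full_name IS NULL"
    psql -d myapp -c "ALTER TABLE users DROP COLUMN name"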

Mandator
Aug 28, 2007

I just found this thread, so I need to read through it before I can make any meaningful contributions.

However, I recently set up Microsoft Release Management for a TFS/Git source control setup at a fairly large company, if anyone has any questions about that. I think it's a pretty neat setup and I can't poke any holes in it. I'd love for you guys to poke holes in it, though.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Mandator posted:

However, I recently set up Microsoft Release Management for a TFS/Git source control setup at a fairly large company, if anyone has any questions about that. I think it's a pretty neat setup and I can't poke any holes in it. I'd love for you guys to poke holes in it, though.

I've been working with the Release Management stuff a ton for the past 18 months. If you're using the agent-based model, stop right now and start considering how you can transition it to PowerShell or DSC scripts ("vNext" releases). The agent/fine-grained workflow model is being totally abandoned in TFS 2015 Update 1 in favor of a new release system that's closely modeled after the new build system (in fact, it's the exact same task system -- a build action can be a release action and vice versa). The idea is that you'll have your deployment scripts be in DSC/PowerShell/Chef/Puppet/Octopus/whatever and use the release tooling in TFS to orchestrate and manage releases, but not deployment. The release tooling will not help you deploy your software at all, there will be no built-in tasks for "set up a website" or anything like that. If you want to set up a website, write a DSC script, source control it, and invoke it as a release task.

The ALM Rangers are kicking off a project to create migration guidance and tooling next week, but it's going to be a shitshow for the existing users. I'm donating a bunch of code to the project because I foresaw this problem a while back and wrote a bunch of proof of concept code for doing migrations knowing we'd need it someday.

New Yorp New Yorp fucked around with this message at 01:02 on Jul 16, 2015

Mandator
Aug 28, 2007

Ithaqua posted:

I've been working with the Release Management stuff a ton for the past 18 months. If you're using the agent-based model, stop right now and start considering how you can transition it to PowerShell or DSC scripts ("vNext" releases). The agent/fine-grained workflow model is being totally abandoned in TFS 2015 Update 1 in favor of a new release system that's closely modeled after the new build system (in fact, it's the exact same task system -- a build action can be a release action and vice versa). The idea is that you'll have your deployment scripts be in DSC/PowerShell/Chef/Puppet/Octopus/whatever and use the release tooling in TFS to orchestrate and manage releases, but not deployment. The release tooling will not help you deploy your software at all, there will be no built-in tasks for "set up a website" or anything like that. If you want to set up a website, write a DSC script, source control it, and invoke it as a release task.

The ALM Rangers are kicking off a project to create migration guidance and tooling next week, but it's going to be a shitshow for the existing users. I'm donating a bunch of code to the project because I foresaw this problem a while back and wrote a bunch of proof of concept code for doing migrations knowing we'd need it someday.

I could have sworn that agent-based releases were not being phased out in 2015 and there was going to be a 2015 RM client that still supported agent-based releases. I even read this somewhere I trusted when I was doing my research on DSC/agents. Gosh loving dang it. Why are they removing a feature that works perfectly fine?

However, we only have around five of our enterprise projects CI'd at the moment using agent-based releases, so the switch should be relatively trivial. I've already extended the default functionality with PowerShell scripts, so I'm not too worried about going back to writing my own scripts for deployment.

Still, drat, thanks for the heads up man.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

Mandator posted:

I could have sworn that agent-based releases were not being phased out in 2015 and there was going to be a 2015 RM client that still supported agent-based releases. I even read this somewhere I trusted when I was doing my research on DSC/agents. Gosh loving dang it. Why are they removing a feature that works perfectly fine?

However, we only have around five of our enterprise projects CI'd at the moment using agent-based releases, so the switch should be relatively trivial. I've already extended the default functionality with PowerShell scripts, so I'm not too worried about going back to writing my own scripts for deployment.

Still, drat, thanks for the heads up man.

They're not being phased out, they're just going into maintenance mode. Think Silverlight or LINQ to SQL. They still exist, they work, you can use them, but they're not getting updates and there are newer technologies that you're supposed to use instead.

There's going to be a 2015 client/server/agent, with the minor aforementioned improvements. The new release system is entering private preview right now in VSO (I have access right now but haven't had much time to play with it and can't really comment on it beyond what's already public information). Once it drops for real (this fall/winter, last I heard was TFS 2015 Update 1 timeframe for on-prem, earlier for VSO), I would expect the client/server to work for another two or three years before they officially deprecate it for TFS2018 or whatever. That's my guess, that's nothing official from Microsoft or anything.

The reason is Microsoft's new direction in terms of cross-platform/cross-technology. They bought the existing software from another company to get something out there immediately and make their intent to enter that area known throughout the industry. They then transferred the original team they acquired to other projects and started working on their own release implementation that hewed more closely to their vision. You'll note that the "vNext" DSC stuff entered the picture pretty rapidly -- that was the direction they wanted to go in all along. The acquired technology was built on and for Windows and .NET, 100%. The client uses Windows Workflow and Windows executables and PowerShell scripts, which really doesn't translate to another platform, especially not with the shift toward everything being web-based.

The granular component/tool system worked okay for simple scenarios, but it didn't scale well to very complex applications, and some aspects were broken at such a fundamental level as to render them useless: rollbacks are implemented in a backwards, brain-dead way, the security model is awful, and a lot of the built-in tools are not idempotent and fail in weird ways. I did some implementations at Fortune 500 companies and really big insurance/financial institutions, and the problems become very pronounced at that scale.

You can still achieve everything available in the agent task model with PowerShell/DSC scripts, it just requires more up-front effort. The ALM Rangers DSC resource kit helps fill some gaps, although not all of them. If the DSC ecosystem becomes more robust and discoverable, life will get better. I really didn't like working with Chef, but I will admit that Chef has an awesome community where cookbooks for every conceivable common scenario are already available. DSC needs to get to the same point.

Paul MaudDib
May 3, 2006

TEAM NVIDIA:
FORUM POLICE
So previously I've been running config as a "sidecar" folder - in git/myproject/ we have two directories of interest, "maven-dir" and "config-dir". Config-dir contains many different files, most of which fall into the category of either credentials (e.g. SQL server target and user/pass, or external service credentials) or content (e.g. some PDFs that we link to). Previously, each "build configuration" would have its own config subfolder - e.g. we have "config-dir/develop", "config-dir/master", "config-dir/testing", etc. Deployment involved manually copying config files to the target, which isn't a trivial process since we can't directly access production. Each branch would have only its own appropriate config files - e.g. "master" would only have "config-dir/master".

That's not my ideal system, it's just codifying the state of things before I joined the project. So for a revamp, how can I make this better? So far I've already added Maven build-profile-based control of config-dir lookup. So when you're running branch "master" there's a property ${myproject.configdir} that specifies "/config-dir/myproject-master". You still have to copy it manually for now.

In terms of config - there are some small differences between dev/testing and production environments (eg different database servers, different external service credentials). Right now I think it's probably manageable with Maven build properties since there's only like a half dozen different fields, tops. We have property files in the resources folder which control this, and it's already set up for property injection, for now I could just do that. Let's say there were more than a half dozen different config fields that varied - what would be the management strategy there? Do all the potential variations of the credential files ride along, and I just specify which one of them I want for a particular build? Or do I want a credential management type service?

In terms of content - I think all differences occur across version numbers. We do have some versions that are super legacy and can't be touched. The strategy I've implemented for emergency bug control is that we can run multiple instances of the app at different versions, and we use Apache to map clients back to specific older versions if necessary. In other words, we can use mod_proxy to remap a URL like "/myproject/client3/" to "/myproject-v2.1.0/client3/" to hold that client back to a specific version. This isn't ideal, but the previous strategy of "a different instance for every client" wasn't going to be sustainable. So we do need to maintain the capability for multiple independent instances, and they need to not clobber each other on deployment.

Right now, to be honest we could just build all the content right into the WAR file. It's only like 10MB of files tops, so our app goes from 15MB to 25MB. No big deal. Let's say it was significantly higher - how should we handle deploying multiple instances of the content if it amounted to 1gb for each instance? Maybe we have version-numbered directories or something, so for "content/pdf1.pdf" you go to "/config_dir/myproject-v2.1.0/content/pdf1.pdf"? And then you "git archive" or rsync it across from a management server?
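
That last idea in script form might look something like this (hostname and paths hypothetical):

    # pull the content set matching this build's version from the management box
    VERSION=v2.1.0   # injected by the build, e.g. from the Maven project version
    rsync -az --delete \
        deploy@mgmt.example.com:/srv/content/myproject-$VERSION/ \
        /var/www/config_dir/myproject-$VERSION/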

I feel like I'm getting my head around the rest of the CI/CD concepts but the whole configuration management thing is a mystery to me.

Paul MaudDib fucked around with this message at 02:48 on Jul 17, 2015

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug
Microsoft's Release Management 2015 came out today, and I know some folks in this thread are using the 2013 edition.

I just upgraded my company's internal sandbox instance and it is totally hosed and nonfunctional. I think they're so focused on their total rewrite that they didn't put a lot of energy into testing this one, and it shows big-time. Obviously I have a sample size of 1 and your upgrade might work flawlessly, but I wanted to put the warning out. Of course, if you upgrade it and don't do a database backup first, you're dumb. So do that.
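
For instance (server and database names vary per install, so treat these as placeholders):

    # back up the RM database before running the upgrade installer
    sqlcmd -S myrmserver -Q "BACKUP DATABASE [ReleaseManagement] TO DISK='D:\backups\rm-pre-upgrade.bak'"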

Dietrich
Sep 11, 2001

Ok, so I'm trying to set up a TeamCity build agent for tackling Go. What am I getting myself into here? Go wants a GOPATH "workspace" environment variable, but the build agent is going to be checking out everything into a workdir which won't have the requisite folder structure. Should I be scripting a move of all the source code into the workspace (after clearing it), triggering the build and the tests, and then pulling the resulting bin back to become an artifact?

wwb
Aug 17, 2004

Zero seat time with Go, but I think you can set environment variables and such in TeamCity. I would probably look at having a GOPATH set up right and symlinking in the source folder.

If there is a good resource on the structure, drop a link and I'll see if I can give some better advice.

Skier
Apr 24, 2003

Fuck yeah.
Fan of Britches

Dietrich posted:

Ok, so I'm trying to set up a TeamCity build agent for tackling Go. What am I getting myself into here? Go wants a GOPATH "workspace" environment variable, but the build agent is going to be checking out everything into a workdir which won't have the requisite folder structure. Should I be scripting a move of all the source code into the workspace (after clearing it), triggering the build and the tests, and then pulling the resulting bin back to become an artifact?

As wwb says, you'll likely have to set the environment variables for Go in a step before compiling. I've used a shell script that sets the Go workspace to the directory the TeamCity agent is running in, using the variables provided by TeamCity, then runs go get and go build.
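
Something along these lines as a command-line build step (the import path and artifact directory are placeholders):

    #!/bin/sh -e
    # runs from the TeamCity checkout directory
    export GOPATH="${TMPDIR:-/tmp}/gopath"
    PKG="github.com/myorg/myapp"        # hypothetical import path

    # fake the workspace layout go expects by symlinking the checkout in
    mkdir -p "$GOPATH/src/$(dirname $PKG)" artifacts
    ln -sfn "$PWD" "$GOPATH/src/$PKG"

    cd "$GOPATH/src/$PKG"
    go get -d ./...                     # skip this once dependencies are vendored
    go test ./...
    go build -o "$OLDPWD/artifacts/myapp"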

Things get easier when you vendor your dependencies: not running go get means the build doesn't grab the latest of whatever your deps are. We used godep for this (https://github.com/tools/godep), but there's some experimental versioning tooling built into the latest versions of Go. Also gb (http://getgb.io/) is an option, but I've no experience with that. Once things are vendored I think you can just run godep go build, possibly without needing the environment variable setting the Go workspace.

Then you can save the build artifacts easily.

It's been a year since I last worked with TC and Go so details may be a bit off.

wwb
Aug 17, 2004

I just read http://hadihariri.com/2015/09/30/setting-up-go-on-intellij/ which gave me about 1000% more understanding of go setups than I had when I woke up this AM.

Anyhow, a really straightforward way to do this would be to set TC to check out the project to a specified path on the agent that is within the GOPATH you set up on the agent. You will probably still need to set the GOPATH environment variable -- or set it up for the user the build agent runs as -- but that should be the most straightforward way to do this, though it won't scale especially well.

Mr. Crow
May 22, 2008

Snap City mayor for life
Are there any good books that go over good patterns and practices re: devops?

I saw this book earlier, anything else noteworthy?

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
Gene Kim is a big DevOps proponent, and he wrote this book. Watch a couple of his talks (he's got loads online). He nails the philosophy, at least.

Sedro
Dec 31, 2008
Has anyone used TeamCity + Vagrant? I basically want TC to 'vagrant up' then run its build agent inside the VM. I could use a different build agent for each build but that immediately puts me into their enterprise pricing tier.

Vulture Culture
Jul 14, 2003

I was never enjoying it. I only eat it for the nutrients.

Sedro posted:

Has anyone used TeamCity + Vagrant? I basically want TC to 'vagrant up' then run its build agent inside the VM. I could use a different build agent for each build but that immediately puts me into their enterprise pricing tier.

You won't be running the build agent in the VM, but you can just invoke vagrant up/vagrant ssh as build steps. Text in, text out. Just be aware that certain classes of failures might cause your VM instances to not get destroyed correctly. You'll also be limited to one build at a time on the host, regardless of how many VMs you can create.
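
i.e. a build step shaped something like this (the destroy-on-exit trap is what saves you from leaked VMs; the make target is a placeholder):

    #!/bin/sh -e
    # tear the VM down even when the build fails
    trap 'vagrant destroy -f' EXIT

    vagrant up
    vagrant ssh -c 'cd /vagrant && make test'   # /vagrant is the default synced folder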

If you need dynamic agent support, you might consider having it spin up EC2 instances; that's directly integrated into the product.

Vulture Culture fucked around with this message at 18:15 on Oct 21, 2015

wwb
Aug 17, 2004

^^^ this.

Build agents are really coordinators; with ssh you can execute on any box you want. Most of our Linux deployment punches through Windows build agents left over from the pure-bred .NET shop days.

Now, where Vagrant and devops meet is provisioning -- you can usually use the same provisioning scripts for both environments with a bit of configuration voodoo.

neurotech
Apr 22, 2004

Deep in my dreams and I still hear her callin'
If you're alone, I'll come home.

I work at a school and am a one-man show with regards to writing various small applications and managing the deployment workflow of said applications.

I have been spending the last few weeks learning Docker and trial-and-erroring my way through that learning process. I really like the approach Docker takes, and I have reached the point where I need to have several containers running, with most of them exposing ports and some of them needing to link to each other.

For reference, I have tried to start mapping out my environment:

[environment diagram]
Can anyone recommend some approaches to managing all of this? Should I just write it all as shell scripts, or maybe use docker-compose?

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender
If it all fits on one machine, just use shell scripts, i.e. a hard-coded startup order, hard-coded ports, etc. You don't need to overcomplicate things with service discovery or multi-host yet.

Data-only containers can be a bit of a pain, and they always made me a bit nervous: if you nuke the last reference to the data, it's reclaimed by Docker and removed. I'd avoid them entirely and just make a regular directory and bind-mount it inside the containers that need to access it.
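
e.g. a start script along these lines (image names, ports, and the data directory are placeholders):

    #!/bin/sh -e
    # hard-coded startup order: database first, then the apps that link to it
    mkdir -p /srv/appdata/postgres

    docker run -d --name db \
        -v /srv/appdata/postgres:/var/lib/postgresql/data \
        postgres:9.4

    docker run -d --name webapp --link db:db -p 8080:8080 myschool/webapp
    docker run -d --name api    --link db:db -p 8081:8081 myschool/api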

neurotech
Apr 22, 2004

Deep in my dreams and I still hear her callin'
If you're alone, I'll come home.

minato posted:

If it all fits on one machine, just use shell scripts, i.e. a hard-coded startup order, hard-coded ports, etc. You don't need to overcomplicate things with service discovery or multi-host yet.

Data-only containers can be a bit of a pain, and they always made me a bit nervous: if you nuke the last reference to the data, it's reclaimed by Docker and removed. I'd avoid them entirely and just make a regular directory and bind-mount it inside the containers that need to access it.

Thanks. Your second point has kinda been rattling around in the back of my head recently, and I think I am going to go with your suggestion.

Erwin
Feb 17, 2006

Edit: one of the problems with GoCD is that the name is hard to search for. Turns out there's been some discussion of it in this thread already.

Does anybody have any strong opinions on Thoughtworks' Go? I was initially planning to move from Hudson to Jenkins (we went with Hudson back when it wasn't overshadowed by Jenkins), but I set up a test GoCD server and agents and I'm liking it so far. The thing I really dig is the pipeline visualizations, as we have dozens of separate repos in several different languages that all interact with each other, and in Hudson we do have some chaining set up, but not like it should be.

The concern I have with GoCD is that the community doesn't seem huge (one of the factors I take into account when choosing products, because of our team size, is whether the product's community is large enough - in other words, if I have a problem, is there someone out there that has already solved it?). I'm also not sure if it's grown much since they open-sourced it, or if it'll just wither and I'll be migrating again in a year.

Erwin fucked around with this message at 00:48 on Nov 17, 2015

wwb
Aug 17, 2004

My main concern is whether they'll maintain it any better than they maintained CC.Net -- which is to say not at all.

Skier
Apr 24, 2003

Fuck yeah.
Fan of Britches

Erwin posted:

Does anybody have any strong opinions on Thoughtworks' Go? I was initially planning to move from Hudson to Jenkins (we went with Hudson back when it wasn't overshadowed by Jenkins), but I set up a test GoCD server and agents and I'm liking it so far. The thing I really dig is the pipeline visualizations, as we have dozens of separate repos in several different languages that all interact with each other, and in Hudson we do have some chaining set up, but not like it should be.

The concern I have with GoCD is that the community doesn't seem huge (one of the factors I take into account when choosing products, because of our team size, is whether the product's community is large enough - in other words, if I have a problem, is there someone out there that has already solved it?). I'm also not sure if it's grown much since they open-sourced it, or if it'll just wither and I'll be migrating again in a year.

I've seen companies look at GoCD and end up with Jenkins + a pipeline plugin due to the big question mark around GoCD's future. Not that Jenkins is superb, but it worked after tweaking and experimenting. Much better community around it.

If Jenkins 2.0 gets traction it'd probably be the best solution.

aBagorn
Aug 26, 2004
Great thread. I'm looking for a CI/CD solution to sit between Bitbucket and Azure for a few different flavors of applications (some Node apps, some .NET API apps, etc.), and so far I haven't found anything I love, because nothing plays well with both Bitbucket and Azure Web Apps that I can see. I've seen some solutions for Azure Cloud Services or Azure Blob Storage, but (and it's probably me not googling well) nothing for the new(er) Web App platform that seems better than just customizing the deployment scripts manually (which I don't want to do).

wwb
Aug 17, 2004

TeamCity or Jenkins should be able to handle this pretty well.

On the Bitbucket side you just need something that speaks hg or git. On the orchestration side you need something that can run your scripts in the prescribed manner. Both of those can handle that with aplomb. TeamCity is a bit nicer and better with .NET out of the box, but both will get you there.

FateFree
Nov 14, 2003

I'm trying to migrate a few small web apps I'm hosting away from a dedicated server and onto a cloud solution. I'm looking at DigitalOcean and Docker, but I'm having a hard time finding info about how to migrate. I have some dumb questions about both of them:

If I pick a small digital ocean droplet, can it be easily upgraded to a higher priced plan if I need the speed?

How does docker work with a droplet in terms of multiple applications. Does each app have a separate docker container that all run on the same droplet?

How do databases work? Do I have a database bundled with each app, or is each app sharing one Docker container with MySQL installed?

FateFree fucked around with this message at 22:07 on Dec 3, 2015

Pollyanna
Mar 5, 2005

Milk's on them.


I've taken it upon myself to create a VM for our Rails project, because our current config setup and new engineer onboarding is a mess. One of our config steps is "export these plaintext passwords in your shell profile and get someone to send you a copy of our dev database" (:psyduck:) among other bizarre poo poo. Don't get me started on our data model... Man, I never thought I'd see the day where I'd be looked to as a real leader and force for change and continuous improvement.

To that end, I'm puzzling together Vagrant and Chef to make an Ubuntu-Rails-Postgres VM, where all you need to do is install Vagrant and VirtualBox, clone the repo, run vagrant up --provision, wait, and have a VM you can SSH into and run the project in. No figuring out how to install Postgres on each new machine, no moronic database sharing, no inability to do dev work if you're on a Windows machine, nothin'.

So far, so good. I've got something really basic working on OSX (and I assume Linux distros as well), although I still need to automate bundle exec rake db:create etc. With Windows, it's been another story, since I don't do Windows dev and had to figure out how to use SourceTree as a makeshift git client, and then found out that our project manager's laptop couldn't handle 64-bit VMs. :downs: I had to downgrade to trusty32. I work for a finance and insurance corporation, so Windows will be everywhere, sadly.
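
For the db:create bit, the likely answer is a shell provisioner tacked on after the Chef run, something like this (assuming the default /vagrant synced folder):

    #!/bin/sh -e
    # provision.sh: runs once Chef has installed Ruby and Postgres
    cd /vagrant
    bundle install
    bundle exec rake db:create db:migrate db:seed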

I find it kinda fun! I also want to get CI/CD going for the rest of the project, 'cause I refuse to work without some sort of automation pipeline in place. Better to get it done now rather than later - every project worth its salt moves to CI/CD eventually.

The one hitch in the plan is ActiveDirectory and VPN weirdness. The company is really strict and risk-averse, so it's got its own weird connection issues and the projects rely on ActiveDirectory being available because llllooooooolllll. We want to abstract AD away from the rest of the project, but it's not that simple due to aforementioned private network BS. (Our project has different configuration depending on whether you have WiFi or not. Yeah.)

Wish me luck :downs:

minato
Jun 7, 2004

cutty cain't hang, say 7-up.
Taco Defender

Pollyanna posted:

found out that our project manager's laptop couldn't handle 64-bit VMs. :downs:
Did you check the BIOS settings? Usually it's just a matter of enabling "Virtualization Technology" if it's an Intel chip, and then you can run 64-bit VMs. The laptop would have to be really old not to have that option.

wwb
Aug 17, 2004

^^^ this. It's on by default now, but there are 5+ years of laptops out there with it disabled by default. Of course, changing BIOS settings in an environment like that might require filing a 42-page form in triplicate and 3 dozen meetings.

As for the task at hand, it is a great thing when you get it done. The big trick is getting Vagrant provisioning to do all the dirty work; getting to 100% automated is tricky. Another trick is how to schlep around the database backups, if that matters. We are using subrepos for that and it is working OK except for a few sites that have gigabytes of stuff.

For your AD problem, a few notes. First, in my experience, any outgoing network activity is going to be slow as bjezus if you are using a typical Vagrant/VirtualBox setup, especially on Windows. Second, Rails tends to talk LDAP rather than AD per se, so the approach I would start with is to find some LDAP server you could set up in the dev environment to handle those calls and take the remote AD out of the loop.
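
Inside the dev VM that can be as simple as a local slapd (the base DN, password, and LDIF file here are invented):

    # a throwaway LDAP server in the dev VM to stand in for AD
    sudo apt-get install -y slapd ldap-utils
    sudo dpkg-reconfigure slapd          # set a dev base DN and admin password
    ldapadd -x -D cn=admin,dc=dev,dc=local -w devonly -f test-users.ldif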

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
Has anyone built a CI/CD pipeline for a Unity 3D app? I've built LAMP and JVM pipelines but never a Windows/C# one. I just need a good blog post that walks through the options at the different steps.

New Yorp New Yorp
Jul 18, 2003

Only in Kenya.
Pillbug

StabbinHobo posted:

Has anyone built a CI/CD pipeline for a Unity 3D app? I've built LAMP and JVM pipelines but never a Windows/C# one. I just need a good blog post that walks through the options at the different steps.

What type of application is this? I assume from hearing "Unity" that it's a desktop application. Continuous integration is easy: You build it. You run code analysis. You run unit tests. However, you can't really do "continuous" delivery of desktop applications, except maybe to QA lab environments for running a suite of system/UI tests. When you're dealing with desktop applications, the best you can do is publish an installer or something like a ClickOnce (ugh) package.
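
For the build itself, Unity can be driven headless from the command line, something like this (BuildScript.PerformBuild would be a static editor method you write yourself around BuildPipeline.BuildPlayer; paths and target are placeholders):

    # headless build on a CI agent with Unity installed
    Unity -batchmode -quit -nographics \
        -projectPath "$PWD" \
        -buildTarget android \
        -executeMethod BuildScript.PerformBuild \
        -logFile build.log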

In any case, this is all pretty easy stuff in the Microsoft world these days... they've been putting a lot of effort into making it discoverable and comprehensible over the past few years.

What are you currently using for source control? What branching strategy are you using?

StabbinHobo
Oct 18, 2002

by Jeffrey of YOSPOS
vr app, targeting gear-vr/cardboard. total greenfield.

0zzyRocks
Jul 10, 2001

Lord of the broken bong

Pollyanna posted:

I've taken it upon myself to create a VM for our Rails project, because our current config setup and new engineer onboarding is a mess...

To that end, I'm puzzling together Vagrant and Chef to make an Ubuntu-Rails-Postgres VM, where all you need to do is install Vagrant and VirtualBox, clone the repo, run vagrant up --provision, wait, and have a VM you can SSH into and run the project in. No figuring out how to install Postgres on each new machine, no moronic database sharing, no inability to do dev work if you're on a Windows machine, nothin'.

I basically did exactly this at my current job for a CentOS/PHP/Magento/Percona VM, except we use Packer to build out versioned boxes (provisioned with Chef recipes and some cleanup shell scripts) because the initial Chef learning curve is pretty high and people are lazy. We eventually want to get Docker involved to build out layers which we can apply to CI environments in the future... but I haven't even looked at Docker yet.

Chef has a nice IRC channel where lots of devs hang out; trust me, you'll end up needing it.
