Thalagyrt
Aug 10, 2006

xenilk posted:

Thank you!

ended up shortening ending_soon? to
Ruby code:
  def ending_soon?
    self.date_end >= Date.today && self.date_end <= 1.month.from_now.to_date
  end
since without the .to_date I was getting this error:
comparison of Date with ActiveSupport::TimeWithZone failed

Also, thanks for the example, exactly what I wanted! :)

Sounds like there's some confusion in your code based on that error: part of it is using naive datetimes, and part is using time-zone-aware datetimes. You should really look into what's happening instead of bandaging it, as you might end up with dates being stored or retrieved incorrectly, or other time-zone-related problems, if you're not using time-zone-aware datetimes across the board. Date.today, for example, is one you shouldn't use - use Time.zone.today instead to get a time-zone-aware date object.
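For illustration, the same class of error can be reproduced with stdlib types alone - a naive Date compared against a time-like object (in Rails it's Date vs ActiveSupport::TimeWithZone, but the cause is the same):

```ruby
require "date"

# A naive Date compared against a time-like object raises, just like
# the Date vs ActiveSupport::TimeWithZone error above:
begin
  Date.today >= Time.now
rescue ArgumentError => e
  puts e.message # comparison of Date with Time failed
end
```

Using Time.zone.today on both sides of the comparison (and converting at the boundary only) keeps every comparison between like, zone-consistent types.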

Thalagyrt
Aug 10, 2006

kayakyakr posted:

He's storing it as a Date type in the database. Depending on the intent, he could be ok with just using date. Probably would be better to store as a time with zone, though.

Hm, yeah that does just return a Date. Still worth storing it timezone aware in my opinion. The thing about a naive date is that the point where it rolls over from one day to the next is going to vary based on the timezone settings on the server, which may be undesirable.

Thalagyrt
Aug 10, 2006

Sub Par posted:

This is what I originally had, but because of the way Heroku works, I was getting nothing but 10.xx.xx.xx IPs that were likely some kind of Heroku load-balancing or whatever. Anyway I'm now trying this:
code:
@click.request_ip = request.env['HTTP_X_REAL_IP'] ||= request.env['REMOTE_ADDR']
We'll see how that goes.

Might want to double check that conditional assignment there. That looks like it should just be an ||, not an ||=, for what you're trying to do.
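To spell out the difference (with a hypothetical env hash standing in for request.env): ||= performs a conditional assignment into the left-hand side, mutating the hash as a side effect, while || merely evaluates to the first truthy operand.

```ruby
# Hypothetical env hash standing in for request.env:
request_env = { 'REMOTE_ADDR' => '203.0.113.7' }

# ||= is conditional ASSIGNMENT: if HTTP_X_REAL_IP is missing, it writes
# REMOTE_ADDR's value into the hash as a side effect.
ip = request_env['HTTP_X_REAL_IP'] ||= request_env['REMOTE_ADDR']
request_env.key?('HTTP_X_REAL_IP') # => true -- the env hash was mutated

# || simply evaluates to the first truthy operand, with no side effect:
env = { 'REMOTE_ADDR' => '203.0.113.7' }
ip = env['HTTP_X_REAL_IP'] || env['REMOTE_ADDR']
env.key?('HTTP_X_REAL_IP') # => false
```

Both expressions yield the same IP here, so the bug is invisible until something else reads the mutated env.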

Thalagyrt
Aug 10, 2006

EVGA Longoria posted:

Trying to get rid of code duplication. We have helpers for our API

Ruby code:
def post
  ::Post.find_by_slug_or_id(params[:id])
end

def published_post
  ::Post.published.find_by_slug_or_id(params[:id])
end
I don't like it, so I'm trying to combine it to a single function.

Ruby code:
def post(published = false)
  ??
end
After tweeting the Rails 4 Way authors, I've put the following together

Ruby code:
def post(published = false)
  ::Post.send(published ? :published : :all).find_by_slug_or_id(params[:id])
end
This accomplished my goal, but it's a bit ugly.

I'd probably write that a bit differently. It's not a one liner, but easier to understand at a quick glance what's going on.

Ruby code:
def post(published = false)
  posts = ::Post.all
  posts = posts.published if published
  posts.find_by_slug_or_id(params[:id])
end
This does kind of sound like it might be code that belongs in an authorization module instead of in your controller though - are you returning published vs all posts depending on the user that's retrieving the data?

Thalagyrt
Aug 10, 2006

EVGA Longoria posted:

It's based on which endpoint they're using, /posts vs /posts/feed. Technically /posts is restricted for some things, but there's also a difference in API representation and a few other things, so it's not as straightforward as authorization alone.

I'm still fairly new to Rails stuff - would we incur much of a performance penalty for going from .all to .published? My impulse was that we would.

Nope. Post.all.published and Post.published are nearly equivalent from a runtime perspective. Post.all simply returns a scope of all items which can be scoped further. Scopes aren't actually evaluated until you iterate over them (well, or a few other actions that necessitate making an actual SQL query). What they are is more like a set of parameters for a query that can be evaluated when needed to return a set of objects. In a lot of cases in Rails, that doesn't actually happen until you're displaying the returned records in a view.
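The laziness can be illustrated with a toy relation in plain Ruby (a sketch of the principle, not ActiveRecord's actual internals): chaining accumulates conditions, and evaluation only happens on iteration.

```ruby
# Toy relation sketching the principle (not ActiveRecord's internals):
# chaining accumulates query parameters; nothing runs until iteration.
class ToyRelation
  include Enumerable
  attr_reader :conditions

  def initialize(rows, conditions = [])
    @rows = rows
    @conditions = conditions
  end

  # Chaining returns a new relation -- no evaluation happens here,
  # just like Post.all.published building up a scope.
  def where(condition)
    ToyRelation.new(@rows, @conditions + [condition])
  end

  # Evaluation happens only here, like ActiveRecord firing the SQL
  # query when a view finally iterates over the records.
  def each(&block)
    @rows.select { |row| @conditions.all? { |c| c.call(row) } }.each(&block)
  end
end

posts = ToyRelation.new([{ published: true }, { published: false }])
scope = posts.where(->(row) { row[:published] }) # still no "query"
scope.to_a # => [{ published: true }] -- evaluated only now
```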

Thalagyrt
Aug 10, 2006

Just wrote up a blog post about how I handle authorization in my business's client portal. Figure you guys might be interested. Spoiler: It's neither CanCan nor Pundit. :)

https://www.vnucleus.com/blog/2014/8/15/18-solving-complex-authorization-requirements-with-scopes

Thalagyrt
Aug 10, 2006

kayakyakr posted:

I'd consider rendering them and putting them in public.

If you don't want to pre-render them, I'd suggest creating a controller to handle that. Application really should only contain things that you want to be on ALL controllers. I usually make a MainController to handle my site index and various other semi-static pages.

Seems to be a common design decision. I have a PagesController to handle the static pages. They all have a bit of dynamic content in them, so they have to be dynamically rendered.

I'm thinking of splitting it into a Pages module with a LandingsController, FeaturesController, so on and so forth so I can have a bit more structure to the static pages... All of these static pages in one directory is getting kind of unwieldy!

Thalagyrt
Aug 10, 2006

Smol posted:

The simplistic rails model also misses the service / actual business logic layer. Don't be a dummy and try to fit everything an app does into rails controllers or models. The dumber they both are (i.e. Controllers only respond to http requests, models pretty much just hold data), the better off your app will be in the long run.

I wish there were more resources out there that explained this, because it's very true. The Rails model works for simple applications, but once you get beyond simple it falls apart. My models in the vNucleus portal have plenty of logic on them since it's a pretty complex application, but that logic is entirely data consistency/manipulation logic, not business logic. The meat of the application is all factored out into separate classes that encapsulate that operation, making extensive use of Wisper for services that controllers consume, ActiveModel::Model based form objects for any complex action that takes data in, and plain old service classes with an interface that makes sense for the service for other units of logic. It really does make maintaining the application much easier in the long run! It's also extremely easy to test due to the way the logic has been factored out. I've got about 1500 tests and 96% coverage at present.
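As a rough sketch of the shape of such a service class (names and bodies hypothetical, not the actual vNucleus code): side effects are injected so they can be swapped for stubs in tests, while the model keeps only data manipulation.

```ruby
# Hypothetical service object (names invented, not the vNucleus code):
Server = Struct.new(:name, :state)

class SuspendServer
  def initialize(server, notifier: ->(server) {})
    @server = server
    @notifier = notifier
  end

  def call
    @server.state = :suspended # pure data manipulation stays on the model
    @notifier.call(@server)    # notification is a pluggable dependency
    @server
  end
end

notified = []
server = SuspendServer.new(Server.new("web1", :active),
                           notifier: ->(s) { notified << s.name }).call
server.state # => :suspended
notified     # => ["web1"]
```

Because the notifier is injected, testing the service needs no mail server and no database - just a lambda that records what it was called with.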

Thalagyrt
Aug 10, 2006

necrotic posted:

It is good knowledge, but after you've done it once, start using Devise. It's amazingly powerful and really quick to get all of the functionality you need for authentication (including pluggable schemes, like OAuth and 2FA).

Authorization is a different beast. I used to recommend CanCan, but it's been discontinued (although the community has picked it up with CanCanCan). I've started using Pundit, which is basically helpers for writing authorization rules inside of POROs, and I have been loving it. It's a bit more verbose, but makes testing and understanding the system so much easier.

I honestly disagree on using Devise for everything. We've rolled our own authentication in-house and couldn't be happier. Devise didn't do what I wanted - it got me 95% of the way to where I wanted to be, then I spent more time wrangling Devise to sort of, kind of, maybe do what I wanted than it would have taken to roll my own. I ended up ripping almost all of Devise's built-in functionality out, then realized I might as well roll my own. It meets a specific, and admittedly common, use case, but if you're not in that exact use case it falls apart spectacularly. This is the case with the majority of gems that try to be a solution for everyone like that.

Pundit's no exception to this one-size-fits-all problem. Pundit assumes that authorization is going to be based on roles on the user, and the second you want to bring a secondary object (user has x permissions on account y and z permissions on account q, for example) into the authorization scope, Pundit falls apart without extensive modification. The whole concept of doing authorization in functions that take two objects is kind of silly anyway. Why should you have both a function to check if a user has rights on an object as well as a scope to return only the objects the user has rights to? Simply use the scope all the time. Check out Consul for an example of that. It's far more flexible than Pundit and can handle extremely bizarre authorization requirements very elegantly.
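A minimal sketch of the scope-centric idea (plain Ruby, not Consul's actual API): one method returns the collection a user may see, and the "can this user access this record?" check is just membership in that scope.

```ruby
# Sketch of scope-centric authorization (NOT Consul's actual API):
User = Struct.new(:name, :admin)
Post = Struct.new(:title, :author)

class Power
  def initialize(user)
    @user = user
  end

  # In a real app this would return an ActiveRecord scope; an array
  # filter stands in here.
  def posts(all_posts)
    return all_posts if @user.admin
    all_posts.select { |post| post.author == @user.name }
  end

  # The per-record check reuses the scope, so there's exactly one
  # source of truth for who can see what.
  def post?(all_posts, post)
    posts(all_posts).include?(post)
  end
end

posts = [Post.new("a", "alice"), Post.new("b", "bob")]
power = Power.new(User.new("alice", false))
power.posts(posts).map(&:title) # => ["a"]
power.post?(posts, posts.last)  # => false
```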

Thalagyrt
Aug 10, 2006

necrotic posted:

This is true for any language and any set of 3rd party libraries: they work for the most common cases, but once you need something specialized (I would be interested in what use case you had that didn't fit) rolling your own is generally the best approach.


I had not heard of Consul and will check it out. I do like the idea of scope-centric authorization.

Henning (the author) has a great talk about Consul if you want to watch it!

http://bizarre-authorization.talks.makandra.com

Thalagyrt
Aug 10, 2006

necrotic posted:

My point is that when you include them the controller/model is still fat, it's just hidden. It also adds a pain point in figuring out "okay, so Butt.new.make_a_fart works, but where is it defined? Let's go see Butt! poo poo, it's not there. Okay, which of these 10 modules is it in?". Yes, you can grep 'def make_a_fart', but it's taken you more steps to find where it's defined than it ever should have. It also has a problem with method name collisions, which are (surprise) silent in Ruby and just take the most recently defined method as truth.

Hope you didn't accidentally override some rarely used but important method in your new module! (Yes, tests, because those always exist in the project you got hired to work on)

Last month I was working on building an APN abstraction before having my coffee and thought "send would be a great method name to use for the method that sends an APN!" Whoops.
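The hazard is easy to demonstrate - defining your own send shadows Object#send, which plenty of framework code relies on for dynamic dispatch, and Ruby won't warn you:

```ruby
class Notifier
  # Shadows Object#send -- Ruby gives no warning about this:
  def send(notification)
    "delivered #{notification}"
  end
end

n = Notifier.new
n.send("push")     # => "delivered push" -- looks fine in isolation...
n.send(:inspect)   # => "delivered inspect" -- ...but dynamic dispatch is broken
n.__send__(:class) # => Notifier -- __send__/public_send still work
```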

Thalagyrt
Aug 10, 2006

MALE SHOEGAZE posted:

My personal approach, just to keep me sane, is to only kill myself testing if:

a) I'm developing a feature, and writing tests is faster than testing manually
b) A failure in the system I'm working on involves money, or some sort of serious obligation

Otherwise, I just do the best I can with the time available. Some legacy systems are just too broken to test in a meaningful way. I can write tests for those things, and I'll be sure that my 'unit' will work, but I wont actually be ensuring that my unit isn't going to be breaking something.

RSpec here, and I take pretty much the same approach. Anything in my customer portal that can destroy/mess with servers, or deals with billing in any way has 100% test coverage. The stuff that's less critical like admin notes on an account or simple reports? Nah, not worth testing those. Overall I have about 95% test coverage, which is nice, but I don't have much intention to shoot for 100%.

Thalagyrt
Aug 10, 2006

MALE SHOEGAZE posted:

Also, I believe pretty firmly that you should not put things in the models folder unless it's backed by a table. But that's mostly because we have 680 files in our app/models folder (not counting subdirectories!) and it's terrible.

I firmly disagree on this. I have 280 files under app/models right now, however *everything* is namespaced into modules that make sense for the given set of functionality. There are no top-level models whatsoever, and at most I have 50 or so classes in a given module, with an average module size of about 10 classes (not all ActiveRecords). If you have hundreds of files in one directory, sure, I can see how you'd arrive at the conclusion you did. The thing is, when working in Rails, your models, services, lib, etc are all under the same namespace, so separating files out into a ton of directories that all share the same namespace introduces a bunch of gymnastics to make sure that you're not stomping over some other file in another directory with the same class name. Making a new class? Gotta make sure it doesn't already exist in /lib, /app/models, /app/decorators, /app/services, /app/helpers, /app/jobs, /app/use_cases, etc. Merging all of these into /app/models was the single best decision I ever made. Keeping everything together as much as possible and utilizing plain old Ruby modules to organize your code reduces the mental overhead necessary to understand the application a considerable amount.

Thalagyrt
Aug 10, 2006

MALE SHOEGAZE posted:

Then all you've done is taken your lib folder and stuck it in models for some reason. There's really no reason to have anything in your app folder that isn't a direct Rails subclass. I mean, you're probably not causing yourself any huge problems, but there's no reason all of that stuff couldn't be in lib, in the same directory structure you have now. The only exception being that your models would have to be in models.

You don't make a lib jobs/workers/services/decorators/etc, that would be ridiculous. You treat it like it's just full of unpacked gems. There's no issue with namespace conflict unless I'm duplicating functionality or giving things really weird names.

I'm not making any problems for myself. My codebase is incredibly easy to reason about, and is rather well factored. To me, the whole concept of putting your entire application under the library directory seems more than a bit silly. My lib directory is used for exactly that: external dependencies that I likely will extract into gems at some point. The meat of my application is not an external library. The only time putting your entire application under lib makes sense is if you plan to export the whole thing as a self-contained gem - which, if you're putting your entire DAO layer somewhere outside of what you're extracting to a gem, namely /app, isn't possible, since a gem that depends on the codebase that requires the gem in its Gemfile doesn't make any sense. So I'd argue that putting all your business logic in /lib makes even less sense, as now you have dependencies that cross a barrier in two directions (lib depends on app, app depends on lib) instead of one direction (app depends on lib only) and dependencies should only cross a barrier in one direction. And truly, don't try to kid yourself into thinking that extracting all your services and business logic out of /lib into a gem would make that gem make sense in any context other than when mixed with your specific app's set of ActiveRecords (regardless of the fact that they're DIed into the gem) because it wouldn't. There's still effectively a two way dependency in that type of situation. One makes no sense without the other. So treat it as part of your application, not an external library, or treat your *entire* domain model as an external library - DAOs and all.

Thalagyrt fucked around with this message at 02:27 on Jan 30, 2015

Thalagyrt
Aug 10, 2006

MALE SHOEGAZE posted:

:rolleyes: :rolleyes: :rolleyes:


By putting most of your code in the app/models folder, you're at least strongly implying that those classes depend on the ruby on rails framework. It's extremely likely that that dependency isn't actually necessary, but nevertheless, all of your code is now tightly coupled to the rails framework, and in fact tightly coupled to the rest of your application.

By pulling that code out of your rails application, and sticking it in the library, you're at least implying that your code should be easily composable throughout the rest of your application. Not only that, but you're setting yourself up to make it easy to break that code out into external services, or to reuse it in other applications.

Basically, the app folder is for implementation code, which means you're not writing interface code, which means you're probably wasting a lot of time.

Or you're doing it right but sticking everything in models because

The only bits coupled to Rails are the ActiveRecords themselves, which I pretty much entirely treat as DAOs. My entire argument is that extracting the *rest* as a library makes absolutely no sense, as that library does not function without the DAOs. You can DI them all you want, and I do, but even then you're still creating in effect a circular dependency where one part conceptually does not make any sense without the other. May as well treat them as one cohesive unit.

Thalagyrt
Aug 10, 2006

MALE SHOEGAZE posted:

I do agree that writing gems is really a total waste of time but in our case it makes sense because things have just gone too far in the other direction.

Gems make sense if you can truly black box them but yeah, I'm not that good.

I'd wager none of us here in the thread are Robert Martin levels of good. :p

Edit:

MALE SHOEGAZE posted:

I really think we're effectively arguing for the same thing, we just put them in different folders.

Yeah, I think so too!

Thalagyrt
Aug 10, 2006

MALE SHOEGAZE posted:

Honestly, our rails app has been around since rails 1.0 and I dont know where the rails app ends and our weird frankenstein begins.

Heh. I say you aren't truly a developer until you've been mired in some huge legacy application that's been around for years and is a hodgepodge of the various best practices throughout the years, always shifting toward the new hotness but never refactoring the old and busted out. In the Rails world, this means some fat controllers, some fat models, maybe 30% of your business logic implemented in services and the rest in huge models... Oh, and sprinkle some concerns in for good measure. Maybe one truly talented OO guy stormed through at some point and extracted 20% of your codebase into a gem, so you had that one really well-factored piece of business logic where everything's beautifully DIed and reusable, but people have just hacked functionality into the gem over the years and now it blows up if required anywhere other than your Rails app, and oh god why.

Thalagyrt
Aug 10, 2006

KoRMaK posted:

HAhahaahahaha

I use my models directory for stuff that is ActiveRecord only (backed by a table).

I need to figure out a synonym for other stuff that is models but isn't ActiveRecord backed.

Models is the right word. Model in software engineering doesn't mean ActiveRecord or DAO or some specific type of object, it means domain model. However, in the Rails world, for some reason "model" has become bastardized to mean ActiveRecord, which leads to a lot of confusion. If you take a step back to pretty much anything other than Rails (personally, I did C and Java back in the 90s, .NET in the early 2000s, Python late 2000s, and switched to Ruby in 2012) you'll find that the domain model is a concept that represents the entirety of your application. Losing the conflation that model == ActiveRecord is something that I think would help a lot of Rails guys out.

Thalagyrt
Aug 10, 2006

I kinda just said screw it to Rails conventions a long time ago because I think Rails conventions start to fall apart really quick when you start writing any application that's more complex than a blog. The whole concept of different directories that aren't separate namespaces annoys me to begin with. I'd honestly argue that Rails's directory layout is detrimental. The controllers, models, etc directories should IMO each be actual modules - i.e. YourApp::Controllers, YourApp::Models, etc, not a bunch of directories whose contents are all autoloaded into the same module. Views probably belongs somewhere else. If we got rid of this legacy cruft and fear of namespacing, I think Rails would be a lot better off. I'm pretty excited about Lotus because it actually embraces OO proper and lets you structure your code in modules however you see fit. http://lotusrb.org/

Edit: If I had my way, I'd probably have a MyApp::HTTP module that contains controllers and related frontend classes (datatables, helpers, etc) and MyApp::Application module that contains the actual application, with a strict enforcement that Frontend can depend on Application but not the other way around. I might actually do that, thinking about it more. It's already effectively how I've structured my code in the directory structures anyway.

Thalagyrt fucked around with this message at 02:57 on Jan 30, 2015

Thalagyrt
Aug 10, 2006

MALE SHOEGAZE posted:

Yeah I agree completely. The Rails directory structure was my number 1 pet peeve until I got good enough at ruby/the rails magic to just ignore all of it.

Basically everyone here wants to get away from rails at this point. We're probably moving towards SOA because I don't think anyone can figure out a better way to slowly refactor everything in a sane way. Unfortunately, we're never going to get twitter style capital so we can't just rewrite everything, but we're also successful enough that we do actually need to plan for the future.

When I first started designing the vNuc portal (because let's face it, all the billing software in the hosting world blows. Actually, considering cPanel, Plesk, SolusVM, etc, I'll just say all the software in the hosting world blows) I contemplated a strict SOA policy. Actually started implementing it that way. Then I realized I'm probably never ever going to even come close to the scale, both in terms of code complexity (it's a hosting control panel, it handles credit cards, VPSes, support tickets, Exchange accounts, their lifecycles, and that's about it) as well as scaling (again, hosting control panel, I'm never going to have even thousands of simultaneous users) to make it worth the added complexity. Would have been super cool and super fun I'm sure, but in this business the biggest my scaling needs are ever realistically going to get is probably adding a second app server. I'm likely never even going to be 1% of the size of say AWS, so I'm not going to face those problems.

Thalagyrt
Aug 10, 2006

Pollyanna posted:

Actually, that reminds me - a lot of the Rails projects I've made are just straight "models -> database schema -> CRUD ops" setups, with rarely anything much more complicated than Users and Posts. What cases are those where you would actually use non-AR backed models, modules and extensions, and other complicated stuff?

In my case VPS management - should a database access object with the responsibility of storing server details really know about how to create a Xen domain, or install Linux on a server? Billing stuff - it's really not an account DAO's job to know how to talk to the bank, is it? I shouldn't have to talk to the bank every time I want to test something in the application. Same deal goes with Exchange - why should a DAO know how to talk to Active Directory? If you put all of this stuff in a database model you're going to create a nightmare for yourself with the SRP violation and code complexity. These objects that have 10+ responsibilities become hard to test effectively. It's all about decoupling components, then composing them back together to get the needed work done. By decoupling your components, you make it easier to replace components, which makes it easier to test components by replacing their dependencies with mocks that can reliably simulate failure conditions. Also, by decoupling your components you make it easier to extend behavior.

A good example of this was me wanting to give a credit for the remaining time on a server to a customer when they cancel the server. I introduced a CreditingServerTerminator which wraps a ServerTerminator (which can be injected on initialize, but comes with a sane default) and simply adds the credit when the ServerTerminator does this thing. The CreditingServerTerminator is used in the client area TerminationsController's create action. If all of that lived in the Server model itself I'd have two methods that terminate a server, which I suppose could make sense, but again, consider how much responsibility the Server would have in this case. It'd easily be a 4000 line class. This also makes it super easy to test the behavior of crediting on termination. I can mock out the ServerTerminator to just succeed, and assert that the account's ledger gets a credit message sent. No database access necessary to ensure the behavior works.

For a bit more clarity, the ServerTerminator's job is to actually destroy the Xen domain with our backend system, and then update the Server DAO's state to indicate that it's actually terminated. It also kicks off a few background jobs to notify the admins and account users that the server was terminated, and places an entry in the account's audit log. Basically, all the responsibility of what it means to terminate a server goes in there. It's used when a user terminates a server, an admin terminates one (say in the case of abuse) or when automation terminates one, and it's easy to compose behavior on top of as seen above.

The audit logging is another good example of when to inject dependencies and how decoupling into smaller classes makes things easier. Rather than having it simply be Account#log_event(details), I have an EventLogger in the Accounts module, which can be set up with various params. Every service object that uses an EventLogger comes with sane defaults for the default use case, but you can inject an EventLogger as well - so any time one of these services is used in a controller, I inject an EventLogger that's been pre-loaded with the user and request details so that the acting user + IP address gets tagged on any logged audit events.
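A hypothetical sketch of that wrapping (class names from the posts above, bodies invented for illustration):

```ruby
# Class names from the post; bodies are invented for illustration.
Server = Struct.new(:state, :account_credits)

class ServerTerminator
  # The real version would destroy the Xen domain, enqueue notification
  # jobs, and write an audit log entry.
  def terminate(server)
    server.state = :terminated
  end
end

class CreditingServerTerminator
  # The wrapped terminator is injectable but comes with a sane default.
  def initialize(terminator: ServerTerminator.new)
    @terminator = terminator
  end

  def terminate(server)
    @terminator.terminate(server)
    # Compose the crediting behavior on top instead of fattening Server:
    server.account_credits << remaining_time_credit(server)
  end

  def remaining_time_credit(server)
    5.0 # placeholder; the real version would prorate the unused time
  end
end

server = Server.new(:active, [])
CreditingServerTerminator.new.terminate(server)
server.state           # => :terminated
server.account_credits # => [5.0]
```

In a test, the injected terminator can be replaced with a stub that just succeeds, so the crediting behavior is verifiable without touching the backend or the database.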

Thalagyrt fucked around with this message at 19:09 on Jan 30, 2015

Thalagyrt
Aug 10, 2006

MALE SHOEGAZE posted:

Yeah but there's also nothing wrong with having lib/services, lib/sweepers. My main point is that if you're trying to follow rails conventions, you probably shouldn't be putting it in models.

That said, I agree with Thalagyrt that rails conventions not always the best, and it's fine to treat your models folder as "domain models." Personally I'd prefer to keep all my domain stuff out of AR classes, though.

Yeah, my entire argument is that the conventions are bunk, and people are making it harder to reason about their codebase by splitting up a single namespace into more than one directory. I've never seen that done in anything other than Rails - in every other language, the module tree and directory tree in a given project will look the same, so to someone who hasn't seen this weird pattern before, it'd make sense to think that /lib would be a Lib module, /app/models App::Models, /app/controllers App::Controllers, /app/services App::Services, etc - but they're one big module that just globs all this stuff up instead. I think the concept of having app/services, app/actions, app/models, etc all actually being the same module makes it harder to reason about. In my opinion, organizing your code by functional module (in my case, Servers, Accounts, Users, Email, etc) makes it easier to reason about. I'm not thinking about whether something is a service, or AR model, or whatever when I'm thinking about where it belongs. Instead, I'm thinking about whether it's part of Accounts, or Servers, or Tickets, etc.

Here, check out my app/models. I think you might see why I argue for this. You might also call me crazy, who knows. :)

http://hastebin.com/laxuzoyufe.avrasm

Each one of these subdirectories is pretty much one self-contained component to the site. There are some dependencies between modules - most everything has a dependency on the Accounts and Users modules - but for the most part, dependencies stay within a module. I could probably split these up further into more subdirectories, but I haven't found that necessary. But the key takeaway is that my code is organized by logical groupings of functionality, not by whether something is a service, or an AR, or something else.
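As a hypothetical illustration of that layout (module names from the post, file names invented), the directory tree mirrors the module tree, so grouping is by functionality rather than by object kind:

```
app/models/
  accounts/
    account.rb           # Accounts::Account (ActiveRecord)
    event_logger.rb      # Accounts::EventLogger (PORO)
  servers/
    server.rb            # Servers::Server (ActiveRecord)
    server_terminator.rb # Servers::ServerTerminator (service)
  users/
    user.rb              # Users::User (ActiveRecord)
```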

Thalagyrt fucked around with this message at 01:01 on Feb 1, 2015

Thalagyrt
Aug 10, 2006

MALE SHOEGAZE posted:

No, I'm totally with you on keeping your modules discrete. It makes way, way more sense to do:
code:
Butts
/ Factories
/ Services
Dongs
/ Factories
/ Services
over
code:
Factories
/ Butts
/ Dongs
Services 
/ Butts
/ Dongs


This is what I mean about designing your code as if it's a gem. This is how (I would prefer, at least) my lib folder to be arranged. I don't care if it's in your app or your lib folder. I don't care if it's in the models folder either except that Rails has its own ideas about what should go in models.

Yeah, this is exactly what I mean. My only argument against throwing it all in lib is that in the vast majority of cases this stuff isn't going to make sense outside of the context of your app - and lib is traditionally a directory used for external/third party libraries - so unless it's something I'm going to extract into a gem, I keep it out of lib. I was looking at Discourse earlier today and found it incredibly weird that they basically have structured their files in such a way that it suggests that the core of their application is a third party library.

Edit: The top two results for "rails lib directory" on Google seem to agree with me, as well. These guys use it as a staging place for code that you could extract into a gem and reuse in another application.

http://blog.codeclimate.com/blog/2012/02/07/what-code-goes-in-the-lib-directory/
http://reefpoints.dockyard.com/ruby/2012/02/14/love-your-lib-directory.html

Thalagyrt fucked around with this message at 01:25 on Feb 1, 2015

Thalagyrt
Aug 10, 2006

prom candy posted:

I think I agree with you guys about file organization however I think there's an argument to be made that fighting against how Rails dictates that your files should be organized can make your codebase harder to reason about for a new developer joining your project. The organization of controllers, specifically, is also somewhat tied to how Rails expects things to be laid out in order to work quickly and easily with routing.

I think it would be interesting to open an app folder and see something like

code:
app
 posts
   - model.rb
   - decorator.rb
   - controller.rb
   views
     - index.html.haml
     - show.html.haml
It brings up some interesting questions though, like what if you had separate controllers and views for an admin section that operated on posts vs. the public user experience? What if you wanted to add API controllers and views? The plus side obviously is that when you open your app folder you can immediately see what your app is all about, I think Robert Martin has argued for something like this in Rails apps.

Controllers aren't part of your domain model. They're a boundary between HTTP and your application, and their entire job should be to take a request, tell your application to do one thing, and then render the outcome of that action as HTML/JSON/etc. Controllers having a completely separate namespace structure that lines up with your URL structure makes sense and I definitely wouldn't try to put them in the same set of modules as my domain model.
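In plain Ruby, that boundary shape looks something like this (hypothetical names; a real controller would subclass ApplicationController and call render, but the three responsibilities are the same - read the request, make one application call, render the outcome):

```ruby
class TerminationsController
  def initialize(terminator)
    @terminator = terminator
  end

  def create(params)
    result = @terminator.terminate(params[:server_id]) # one call into the app
    { status: 200, body: { terminated: result } }      # render the outcome
  end
end

# Because the application call is injected, the boundary is trivially
# testable with a stub:
stub_terminator = Class.new { def terminate(id); true; end }.new
TerminationsController.new(stub_terminator).create(server_id: 42)
# => { status: 200, body: { terminated: true } }
```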

Thalagyrt
Aug 10, 2006

xenilk posted:

i guess it's not purely AR but couldnt you do

.where("countries.alpha2 NOT IN (?)", @country_exclude_list)

That should generate equivalent SQL to .where.not(countries: { alpha2: @country_exclude_list }).

Example:

2.1.3 :003 > puts Accounts::Account.joins(:solus_servers).where.not(solus_servers: { id: [1, 2] }).to_sql
SELECT "accounts_accounts".* FROM "accounts_accounts" INNER JOIN "solus_servers" ON "solus_servers"."account_id" = "accounts_accounts"."id" WHERE ("solus_servers"."id" NOT IN (1, 2))

Thalagyrt
Aug 10, 2006

Smol posted:

MySQL stores everything in UTC as well. It doesn't have a TIMESTAMP WITH TIME ZONE or equivalent data type.

Not entirely correct. MySQL stores times without any timezone data at all, so saying it stores everything in UTC is incorrect - that would imply MySQL knows the timestamps you're storing are UTC, which would require a timezone-aware type. Since MySQL doesn't have one, best practice is to treat all times as UTC when storing and retrieving, but that doesn't mean MySQL considers the times UTC. As far as it's concerned, they're just times without any timezone.
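
For a Rails app on MySQL, the usual way to keep everything UTC across the board is via the framework settings - shown here as a sketch (these are standard Rails config keys, nothing specific to this thread):

```ruby
# config/application.rb - keep Rails and ActiveRecord on UTC so the
# timezone-naive MySQL columns are always written and read as UTC.
config.time_zone = "UTC"                      # zone used by Time.zone
config.active_record.default_timezone = :utc  # zone used when reading/writing the DB
```

With both set to UTC, the "naive" values MySQL stores line up with what ActiveRecord expects on the way back out.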

Thalagyrt
Aug 10, 2006

kayakyakr posted:

Use Sucker Punch for async and whenever for scheduled... drop delayed job, it's a PITA.

Backgrounding tasks within your web workers with absolutely no guarantee of job durability or completion is a terrible idea. If a worker dies for some reason, any jobs that haven't run yet are lost.

Delayed Job and other tools architected like it (read: with background worker processes and a persistent storage mechanism) are far more reliable and make a guarantee that a job will be completed. If you outgrow using the database for jobs, switch to Sidekiq or Resque. Sucker Punch is a hack for async on the free tier of Heroku when you only have one process to run in and shouldn't be used in any real production environment.
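
The durability argument can be sketched in a few lines of plain Ruby. This is a toy illustration, not Delayed Job's actual API or storage format: jobs held only in a worker's memory vanish with the process, while jobs persisted to durable storage can be picked up again after a restart.

```ruby
require "json"
require "tmpdir"

# Toy persistent queue: a JSON file stands in for the database table a tool
# like Delayed Job uses. Anything enqueued survives a worker crash/restart.
class PersistentQueue
  def initialize(path)
    @path = path
  end

  def enqueue(job)
    jobs = pending << job
    File.write(@path, JSON.dump(jobs))
  end

  def pending
    File.exist?(@path) ? JSON.parse(File.read(@path)) : []
  end
end

path = File.join(Dir.mktmpdir, "jobs.json")
PersistentQueue.new(path).enqueue("send_welcome_email")

# Simulate the worker process dying and a fresh one starting up:
restarted = PersistentQueue.new(path)
puts restarted.pending.inspect  # => ["send_welcome_email"]
```

An in-memory thread pool like Sucker Punch has no equivalent of that second step - when the process goes, the pending jobs go with it.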

Thalagyrt fucked around with this message at 23:02 on Jun 15, 2015

Thalagyrt
Aug 10, 2006

kayakyakr posted:

The others are also PITA to deploy, as KoRMaK is finding. If you plug in to the new ActiveJob stuff, then you're framework-flexible and can switch to whichever when you're ready to improve your deploy.

How is Delayed Job anything resembling a PITA to deploy? Set up supervisor or runit or whatever the heck you want to use to run a couple of instances of bundle exec rake jobs:work and you're done. Add a couple of command line options to write out pidfiles if you want to be able to easily restart them with, say, Capistrano.

Edit: If you're deploying on Heroku, a similar PaaS, or even just using Foreman on a VPS, it's even easier. Just add worker: bundle exec rake jobs:work to your Procfile and you're done.

In KoRMaK's case, I'd advise setting up supervisor. It's really rather easy - you just have to have it switch to the right user and then run rake jobs:work. It can be done in about 15 minutes even if you've never touched supervisor before.
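
For reference, a minimal supervisord stanza along those lines might look like this (the paths, user, and RAILS_ENV are assumptions you'd adjust for your own deploy):

```ini
[program:delayed_job]
directory=/home/deploy/app/current
command=bundle exec rake jobs:work
user=deploy
numprocs=2
process_name=%(program_name)s_%(process_num)02d
environment=RAILS_ENV="production"
autostart=true
autorestart=true
stopwaitsecs=30
```

numprocs gives you multiple workers from one stanza; process_name must include %(process_num)s (or a variant) whenever numprocs is greater than one.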

Thalagyrt fucked around with this message at 00:04 on Jun 16, 2015

Thalagyrt
Aug 10, 2006

KoRMaK posted:

Ugh, supervisor wants non-daemonized tasks, but the delayed job script is daemonized and the rake task doesn't let me specify multiple workers.

Should I just run rake jobs:work multiple times if I want multiple workers?

Yeah, just run them as foreground workers. I have 4 services set up with runit for mine, and have delayed_job configured to write out pid files so I can just kill `cat delayed_job.*.pid` to restart all the workers. Here's my runfile for runit - you should be able to do the same with supervisor easily. My ~/shared/environment file just loads up all the env variables - RAILS_ENV and other config.

code:
#!/bin/sh
su - betaforce -c "source ~/shared/environment && cd ~/current && bundle exec ruby bin/delayed_job run --identifier 1"
Adjust to taste.

Thalagyrt
Aug 10, 2006

KoRMaK posted:

I need to get better at the linux.

What does the --identifier option do? I'm not finding any info on it. And where are the pid files at? They were showing up in my pids dir when I did "delayed_job start" but when I do "delayed_job run" they don't show up.

It's explained right in the manual for delayed_job: https://github.com/collectiveidea/delayed_job/wiki/Delayed-job-command-details

It adds the numeric identifier to the process name, so you'll see delayed_job.1, delayed_job.2, etc.

Thalagyrt
Aug 10, 2006

KoRMaK posted:

Dammit, I need to browse the wiki pages instead of google searching and get more sleep.

Thank you so much!

e: Where should I see delayed_job.x? I'm looking in the system monitor and ps aux and not seeing anything but the console script I executed.

It will show up in the pidfile written out by delayed_job, which ends up in RAILS_ROOT/tmp/pids. Setting the identifier will cause the pidfile to be written out as delayed_job.identifier.pid instead of delayed_job.pid, which lets you keep track of each one individually.

code:
betaforce@portal:~/current/tmp/pids$ ls
delayed_job.1.pid  delayed_job.2.pid  delayed_job.3.pid  delayed_job.4.pid  puma.pid  scheduler_daemon.pid

Thalagyrt
Aug 10, 2006

KoRMaK posted:

I thought thats where it should be but I can't find it. I looked in my home directory/tmp/pids and the rails/tmp/pids directory and then did a search for everything with pid or delayed_job and I'm not finding any files.


Oh, I think I get what was wrong. Pid files usually only get made when a process is daemonized, and using delayed_job run keeps it in the foreground, thus no pid files.

Looks like they have an article on how to setup delayed job with monit https://github.com/collectiveidea/delayed_job/wiki/monitor-process

The pid files should be created whether or not it's daemonized. They're certainly created in my setup, and you can see the script I'm using to start it - it's not running daemonized. They won't be created if you start it with rake, though.

Thalagyrt
Aug 10, 2006

KoRMaK posted:

Here's a dumb question: I have rvm installed as my user, but when I go into sudo mode and do rvm it says it's not installed. As sudo, I source my user's bashrc file thinking that will fix it, but it doesn't.

Do I need to install RVM as sudo/root?

You don't want a web site running as root. Why are you using sudo?

Thalagyrt
Aug 10, 2006

KoRMaK posted:

I'm trying to get monit to launch the delayed_job script with the right env stuff. I thought I had to install rvm as sudo because monit likes to run stuff as sudo.

That was a couple hours ago, and I've come a long way since then. Here's what I found works:

On my local vm

start program = "/bin/su - kormak -c 'cd /media/my_app; script/delayed_job start -i 1'"

In production it changes to
start program = "/bin/su -c 'cd /media/my_app; script/delayed_job start -i 1'"

Seems pretty obvious now, but I didn't understand what I was doing and dropped the -c, so it kept crashing. I then started using /bin/bash, but that didn't work at first either until I used the -l (--login) option. So either /bin/su above works, or this:
start program = "/bin/bash -l -c 'cd /media/my_app; script/delayed_job start -i 1'"



So the adventure is almost over. Monit starts the thing, but then it reports that it failed to execute, which is confusing, and I can't stop it using monit.

Why are you trying to run delayed_job as root? Run it as the user that owns your web site, just like you do on your local VM.
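
A monit stanza along those lines, dropping to the site's user instead of running as root, might look like this (the kormak user, path, and identifier are taken from the posts above and would need adjusting; the pidfile name follows from the -i 1 identifier):

```
check process delayed_job with pidfile /media/my_app/tmp/pids/delayed_job.1.pid
  start program = "/bin/su - kormak -c 'cd /media/my_app && script/delayed_job start -i 1'"
  stop program  = "/bin/su - kormak -c 'cd /media/my_app && script/delayed_job stop -i 1'"
```

Using su - gives the worker the site user's login environment (rvm, PATH, etc.) the same way the local VM setup does.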

Thalagyrt
Aug 10, 2006

KoRMaK posted:

Back to a question I had to a little bit ago, how do I install rvm and gems so that ALL users or a subset of users use the same gemset? Is this a common pattern, where multiple users share gemsets or are you supposed to install it for each user?

That'd require a lot of permission hackery. I've never seen a need to do that, though. What are you trying to accomplish?

Thalagyrt
Aug 10, 2006

KoRMaK posted:

I'm having a hard time with writing tests. I'm using rspec and capybara.

I have a remote script where the user clicks a link and it should download a file. I'm checking that the download directory changes the count of files in it. So I do

code:
expect{click_link "Download File"}.to change{ Dir.glob(File.join('/tmp/chromedownloads', "*.pdf")).length }.by(1)
The test finishes before the file downloads and says there is an error. How do I get it to wait?

OR

I can stash the count before I click the link, then compare it after to the current one. I'd really like to use the change command on the variable instead of should.

I think you might want to rethink your test strategy. Why are you testing that the browser can download a file? That's not part of your application. Just make sure it's served up properly and call it done. Having a browser download a file and trying to make an assertion that the file's been downloaded is rather silly.

Edit: In the same vein, don't test ActiveRecord, don't test other external dependencies, but do test *your* code.

Thalagyrt
Aug 10, 2006

KoRMaK posted:

Yea, probably. The goal is to verify that our app can send data to a third party, who turns it into a PDF, and then the user can download that PDF.

Why? I'm getting my mindset adjusted to testing, so I probably have a bunch of silly approaches that I need to get set straight on.


So my current problem is how do I verify that my communication to the third party goes well and that I get what I expect as a response?

I was writing these as feature tests, and it would *almost* work except that the DB cleaner and rspec are exiting before the response is finished.

kayakyakr already covered it, but mock at any boundaries and test that your code responds to them properly. You don't need to test third-party APIs. For the downloads, you don't need to test that the browser can download the file - if Chrome couldn't download a file, that'd be a bug in Chrome, not a bug in your code. Just use a controller test and assert that the right content type and other relevant response headers are set. Beyond that, any browser will handle the download properly - so why waste time testing your browser?

Thalagyrt
Aug 10, 2006

KoRMaK posted:

I guess if I had to defend it, it would be that we want to make sure that our users are getting what they expect, and that includes knowing if our third party vendors are hosed up.

The majority of failures that'll arise when you're testing third parties will be random intermittent issues (throttling, network hiccups, maintenance, etc.), and you're gonna end up pulling your hair out over random failures in your CI. Your tests should be deterministic, and in order to have deterministic tests, you need to only test units under your control. That means mocking out *all* third-party requests and responses. Work under the assumption that the third party will be working, and mock up specific failure scenarios so you know you can gracefully handle third-party APIs being down or otherwise erroring out.
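
A cheap way to do that without any mocking library is plain dependency injection. Here's a sketch - FakePdfClient, ReportGenerator, and the response shape are hypothetical names, not from the thread:

```ruby
# The real client would hit the third-party PDF API; tests inject a fake
# that returns canned success or failure responses deterministically.
class FakePdfClient
  def initialize(response)
    @response = response
  end

  def render(_data)
    @response
  end
end

class ReportGenerator
  def initialize(client:)
    @client = client
  end

  # Returns the PDF bytes on success, or a symbol the caller can turn
  # into a friendly error page when the vendor is down.
  def generate(data)
    resp = @client.render(data)
    resp[:ok] ? resp[:pdf] : :service_unavailable
  end
end

happy = ReportGenerator.new(client: FakePdfClient.new(ok: true, pdf: "%PDF-1.4..."))
sad   = ReportGenerator.new(client: FakePdfClient.new(ok: false))

puts happy.generate({})  # => %PDF-1.4...
puts sad.generate({})    # => service_unavailable
```

The "vendor is down" case becomes a one-line fake instead of a flaky live request, so CI stays deterministic.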

Thalagyrt
Aug 10, 2006

Pardot posted:

This feels like you want DISTINCT ON (which is not the same as just DISTINCT), but I don't know how you get that through Active Record.

If you're using Arel 6 (so Rails 4.2 and up), there's an Arel method for distinct_on.

code:
table = Arel::Table.new(:users)
table.project(Arel.star).distinct_on(table[:id]).to_sql


Thalagyrt
Aug 10, 2006

Pollyanna posted:

Sounds good to me. I made a spec that mimics our current issue and from what I can tell, the script/class/module/thing-I-made handles it correctly. I guess logging into console works well. I tested it on my local copy of our DB, and it works there too.

We don't do production DB backups at my place, though, as far as I know. At least, not regularly. Something about it being too much data, or whatever.

We have a 1.5TB database at my day job that's fully backed up to S3 daily... What's your excuse for not caring about your business's data at all?

Thalagyrt fucked around with this message at 20:21 on Nov 11, 2015
