|
xenilk posted:Thank you! Sounds like there's some confusion in your code somewhere based on that error - part of it is using naive datetimes, the other is using time-zone aware datetimes. You should really look further into what's happening instead of bandaging it as you might end up with dates being stored/retrieved incorrectly or other time-zone related problems if you're not using time-zone aware datetimes across the board. Date.today for example is one that you shouldn't use - use Time.zone.today instead to get a time-zone aware date object.
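A quick plain-Ruby illustration of why naive dates are zone-dependent (`Time.zone.today` is ActiveSupport, so this sketch uses POSIX TZ strings instead; the zone names are made up):

```ruby
# Naive: Date.today / Time.now silently depend on the process's time zone.
ENV['TZ'] = 'UTC0'          # POSIX TZ string for UTC
utc_offset = Time.now.utc_offset       # 0

ENV['TZ'] = 'AHEAD-14'      # POSIX TZ string for a fictional UTC+14 zone
ahead_offset = Time.now.utc_offset     # 50400 (14 hours)

# Around midnight UTC, Date.today under these two settings returns different
# days -- the naive-date rollover problem. Time.zone.today pins the answer to
# the app-configured zone instead of whatever the server happens to use.
```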
|
# ¿ Jul 18, 2014 00:00 |
|

kayakyakr posted:He's storing it as a Date type in the database. Depending on the intent, he could be ok with just using date. Probably would be better to store as a time with zone, though. Hm, yeah that does just return a Date. Still worth storing it timezone aware in my opinion. The thing about a naive date is that the point where it rolls over from one day to the next is going to vary based on the timezone settings on the server, which may be undesirable.
|
# ¿ Jul 18, 2014 04:10 |
|
Sub Par posted:This is what I originally had, but because of the way Heroku works, I was getting nothing but 10.xx.xx.xx IPs that were likely some kind of Heroku load-balancing or whatever. Anyway I'm now trying this: Might want to double check that conditional assignment there. That looks like it should just be an || not an ||= for what you're trying to do.
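For the archive, the difference in miniature (variable names hypothetical):

```ruby
# ||= assigns only when the left-hand side is nil or false:
ip = "10.0.0.1"
ip ||= "203.0.113.5"          # no-op: ip is already truthy, stays "10.0.0.1"

# || simply evaluates to the first truthy operand; nothing is assigned:
forwarded = nil
client_ip = forwarded || ip   # => "10.0.0.1"
```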
|
# ¿ Jul 25, 2014 22:18 |
|
EVGA Longoria posted:Trying to get rid of code duplication. We have helpers for our API I'd probably write that a bit differently. It's not a one-liner, but it's easier to understand at a quick glance what's going on. Ruby code:
|
# ¿ Jul 31, 2014 17:08 |
|
EVGA Longoria posted:It's based on which endpoint they're using, /posts vs /posts/feed. Technically /posts is restricted for some things, but there's also a difference in API representation and a few other things, so it's not as straightforward an authorization. Nope. Post.all.published and Post.published are nearly equivalent from a runtime perspective. Post.all simply returns a scope of all items which can be scoped further. Scopes aren't actually evaluated until you iterate over them (well, or a few other actions that necessitate making an actual SQL query). What they are is more like a set of parameters for a query that can be evaluated when needed to return a set of objects. In a lot of cases in Rails, that doesn't actually happen until you're displaying the returned records in a view.
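A toy plain-Ruby sketch of that laziness (this is not ActiveRecord's implementation, just the shape of the idea):

```ruby
# Chaining accumulates query parameters; nothing "runs" until iteration.
class Scope
  attr_reader :conditions

  def initialize(conditions = [])
    @conditions = conditions
    @ran = false
  end

  def where(cond)
    Scope.new(conditions + [cond])   # a new, still-unevaluated scope
  end

  def to_sql
    "SELECT * FROM posts WHERE #{conditions.join(' AND ')}"
  end

  def each(&block)
    @ran = true                      # the real SQL query would fire here
    [].each(&block)
  end

  def ran?
    @ran
  end
end

scope = Scope.new.where("published = 't'").where("author_id = 1")
scope.ran?    # false -- still just a description of a query
scope.each { |post| }
scope.ran?    # true -- iteration forced evaluation
```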
|
# ¿ Jul 31, 2014 18:37 |
|
Just wrote up a blog post about how I handle authorization in my business's client portal. Figure you guys might be interested. Spoiler: It's neither CanCan nor Pundit. https://www.vnucleus.com/blog/2014/8/15/18-solving-complex-authorization-requirements-with-scopes
|
# ¿ Aug 15, 2014 03:19 |
|
kayakyakr posted:I'd consider rendering them and putting them in public. Seems to be a common design decision. I have a PagesController to handle the static pages. They all have a bit of dynamic content in them, so they have to be dynamically rendered. I'm thinking of splitting it into a Pages module with a LandingsController, FeaturesController, so on and so forth so I can have a bit more structure to the static pages... All of these static pages in one directory is getting kind of unwieldy!
|
# ¿ Aug 17, 2014 19:06 |
|
Smol posted:The simplistic rails model also misses the service / actual business logic layer. Don't be a dummy and try to fit everything an app does into rails controllers or models. The dumber they both are (i.e. Controllers only respond to http requests, models pretty much just hold data), the better off your app will be in the long run. I wish there were more resources out there that explained this, because it's very true. The Rails model works for simple applications, but once you get beyond simple it falls apart. My models in the vNucleus portal have plenty of logic on them since it's a pretty complex application, but that logic is entirely data consistency/manipulation logic, not business logic. The meat of the application is all factored out into separate classes that encapsulate that operation, making extensive use of Wisper for services that controllers consume, ActiveModel::Model based form objects for any complex action that takes data in, and plain old service classes with an interface that makes sense for the service for other units of logic. It really does make maintaining the application much easier in the long run! It's also extremely easy to test due to the way the logic has been factored out. I've got about 1500 tests and 96% coverage at present.
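A hand-rolled sketch of the form-object idea mentioned above (in Rails you'd include ActiveModel::Model for the validation plumbing; the class and fields here are illustrative):

```ruby
# A form object encapsulates one complex data-taking action, keeping
# validation and business rules out of the ActiveRecord model.
class TicketForm
  attr_reader :subject, :body, :errors

  def initialize(subject:, body:)
    @subject = subject
    @body = body
    @errors = []
  end

  def valid?
    errors.clear
    errors << "subject can't be blank" if subject.to_s.strip.empty?
    errors << "body can't be blank" if body.to_s.strip.empty?
    errors.empty?
  end

  def save
    return false unless valid?
    # ...persist via the relevant DAO, publish events, etc.
    true
  end
end

TicketForm.new(subject: "Help", body: "It broke").save   # true
TicketForm.new(subject: "", body: "").valid?             # false
```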
|
# ¿ Oct 12, 2014 17:29 |
|
necrotic posted:It is good knowledge, but after you've done it once start using Devise. It's amazingly powerful and really quick to get all of the functionality you need for authentication (including pluggable schemes, like OAuth and 2FA). I honestly disagree on using Devise for everything. We've rolled our own authentication in-house and couldn't be happier. Devise didn't do what I wanted - it got me 95% of the way to where I wanted to be, then I spent more time than it took to roll my own authentication wrangling Devise to sort of kind of maybe do what I wanted. I ended up ripping almost all of Devise's built-in functionality out, then realized I may as well roll my own. It meets a specific, and admittedly common, use case, but if you're not in that exact use case it falls apart spectacularly. This is the case with the majority of gems that try to be a solution for everyone like that. Pundit's no exception to this one-size-fits-all problem. Pundit assumes that authorization is going to be based on roles on the user, and the second you want to bring a secondary object (user has x permissions on account y and z permissions on account q, for example) into the authorization scope, Pundit falls apart without extensive modification. The whole concept of doing authorization in functions that take two objects is kind of silly anyway. Why should you have both a function to check if a user has rights on an object as well as a scope to return only the objects the user has rights to? Simply use the scope all the time. Check out Consul for an example of that. It's far more flexible than Pundit and can handle extremely bizarre authorization requirements very elegantly.
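A toy version of the scope-only idea (Consul's actual API differs; this just shows how one scope can answer both questions; all names are hypothetical):

```ruby
# One scope is the single source of truth for authorization.
class User
  attr_reader :account_ids

  def initialize(account_ids)
    @account_ids = account_ids
  end

  # "What can this user see?"
  def accessible_posts(posts)
    posts.select { |post| account_ids.include?(post[:account_id]) }
  end

  # "May this user see X?" is just membership in the same scope --
  # no separate two-object permission function needed.
  def can_see?(post, posts)
    accessible_posts(posts).include?(post)
  end
end

posts = [{ id: 1, account_id: 7 }, { id: 2, account_id: 9 }]
user = User.new([7])
user.accessible_posts(posts).map { |p| p[:id] }   # [1]
user.can_see?(posts[1], posts)                    # false
```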
|
# ¿ Oct 15, 2014 18:13 |
|
necrotic posted:This is true for any language and any set of 3rd party libraries: they work for the most common cases, but once you need something specialized (I would be interested in what use case you had that didn't fit) rolling your own is generally the best approach. Henning (the author) has a great talk about Consul if you want to watch it! http://bizarre-authorization.talks.makandra.com
|
# ¿ Oct 15, 2014 19:49 |
|
necrotic posted:My point is that when you include them the controller/model is still fat, it's just hidden. It also adds a pain point in figuring out "okay so Butt.new.make_a_fart works, but where is it defined? let's go see Butt! poo poo, it's not there. Okay, which of these 10 modules is it in?". Yes, you can grep 'def make_a_fart', but it's taken you more steps to end up finding where it's defined than it ever should have. It also has a problem with method name collisions, which are (surprise) silent in Ruby and just take the most recent as truth. Last month I was working on building an APN abstraction before having my coffee and thought "send would be a great method name to use for the method that sends an APN!" Whoops.
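The collision in miniature (class and method names hypothetical):

```ruby
# Object#send is Ruby's dynamic dispatch; defining your own `send`
# silently shadows it -- no warning, no error.
class Notification
  def send(payload)
    "pushed: #{payload}"   # our APN-ish method, hiding Object#send
  end
end

n = Notification.new
n.send(:upcase)      # "pushed: upcase" -- no longer dynamic dispatch
n.__send__(:class)   # Notification -- __send__ still reaches the original
```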
|
# ¿ Nov 26, 2014 18:32 |
|
MALE SHOEGAZE posted:My personal approach, just to keep me sane, is to only kill myself testing if: RSpec here, and I take pretty much the same approach. Anything in my customer portal that can destroy/mess with servers, or deals with billing in any way has 100% test coverage. The stuff that's less critical like admin notes on an account or simple reports? Nah, not worth testing those. Overall I have about 95% test coverage, which is nice, but I don't have much intention to shoot for 100%.
|
# ¿ Dec 21, 2014 05:36 |
|
MALE SHOEGAZE posted:Also, I believe pretty firmly that you should not put things in the models folder unless it's backed by a table. But that's mostly because we have 680 files in our app/models folder (not counting subdirectories!) and it's terrible. I firmly disagree on this. I have 280 files under app/models right now, however *everything* is namespaced into modules that make sense for the given set of functionality. There are no top-level models whatsoever, and at most I have 50 or so classes in a given module, with an average module size of about 10 classes (not all ActiveRecords). If you have hundreds of files in one directory, sure, I can see how you'd arrive at the conclusion you did. The thing is, when working in Rails, your models, services, lib, etc are all under the same namespace, so separating files out into a ton of directories that all share the same namespace introduces a bunch of gymnastics to make sure that you're not stomping over some other file in another directory with the same class name. Making a new class? Gotta make sure it doesn't already exist in /lib, /app/models, /app/decorators, /app/services, /app/helpers, /app/jobs, /app/use_cases, etc. Merging all of these into /app/models was the single best decision I ever made. Keeping everything together as much as possible and utilizing plain old Ruby modules to organize your code reduces the mental overhead necessary to understand the application a considerable amount.
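In code, the layout being argued for is just ordinary Ruby modules (module and class names here are illustrative, not the actual vNucleus codebase):

```ruby
# app/models/accounts/account.rb, app/models/servers/server.rb, etc.
# One directory, organized by functional module rather than by object kind:
module Accounts
  class Account; end          # an ActiveRecord in the real app
  class EventLogger; end      # a plain service object, same module
end

module Servers
  class Server; end
  class ServerTerminator; end
end

# No top-level names to collide with /lib, /app/services, and friends:
Accounts::Account.name   # "Accounts::Account"
Servers::Server.name     # "Servers::Server"
```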
|
# ¿ Jan 30, 2015 01:17 |
|
MALE SHOEGAZE posted:Then all you've done is taken your lib folder and stuck it in models for some reason. There's really no reason to have anything in your app folder that isn't a direct Rails subclass. I mean, you're probably not causing yourself any huge problems, but there's no reason all of that stuff couldn't be in lib, in the same directory structure you have now. The only exception being that your models would have to be in models. I'm not making any problems for myself. My codebase is incredibly easy to reason about, and is rather well factored. To me, the whole concept of putting your entire application under the library directory seems more than a bit silly. My lib directory is used for exactly that: external dependencies that I likely will extract into gems at some point. The meat of my application is not an external library. The only time putting your entire application under lib makes sense is if you plan to export the whole thing as a self-contained gem - which, if you're putting your entire DAO layer somewhere outside of what you're extracting to a gem, namely /app, isn't possible, since a gem that depends on the codebase that requires the gem in its Gemfile doesn't make any sense. So I'd argue that putting all your business logic in /lib makes even less sense, as now you have dependencies that cross a barrier in two directions (lib depends on app, app depends on lib) instead of one direction (app depends on lib only) and dependencies should only cross a barrier in one direction. And truly, don't try to kid yourself into thinking that extracting all your services and business logic out of /lib into a gem would make that gem make sense in any context other than when mixed with your specific app's set of ActiveRecords (regardless of the fact that they're DIed into the gem) because it wouldn't. There's still effectively a two way dependency in that type of situation. One makes no sense without the other. 
So treat it as part of your application, not an external library, or treat your *entire* domain model as an external library - DAOs and all. Thalagyrt fucked around with this message at 02:27 on Jan 30, 2015 |
# ¿ Jan 30, 2015 01:59 |
|
MALE SHOEGAZE posted:
The only bits coupled to Rails are the ActiveRecords themselves, which I pretty much entirely treat as DAOs. My entire argument is that extracting the *rest* as a library makes absolutely no sense, as that library does not function without the DAOs. You can DI them all you want, and I do, but even then you're still creating in effect a circular dependency where one part conceptually does not make any sense without the other. May as well treat them as one cohesive unit.
|
# ¿ Jan 30, 2015 02:30 |
|
MALE SHOEGAZE posted:I do agree that writing gems is really a total waste of time but in our case it makes sense because things have just gone too far in the other direction. I'd wager none of us here in the thread are Robert Martin levels of good. :p Edit: MALE SHOEGAZE posted:I really think we're effectively arguing for the same thing, we just put them in different folders. Yeah, I think so too!
|
# ¿ Jan 30, 2015 02:31 |
|
MALE SHOEGAZE posted:Honestly, our rails app has been around since rails 1.0 and I dont know where the rails app ends and our weird frankenstein begins. Heh. I say you aren't truly a developer until you've been mired in some huge legacy application that's been around for years and is a hodgepodge of the various best practices throughout the years, always shifting toward the new hotness but never refactoring the old and busted out. In the Rails world, this means some fat controllers, some fat models, maybe 30% of your business logic is implemented in services and the rest in huge models... Oh and sprinkle some concerns in for good measure. Maybe one truly talented OO guy stormed through at some point and extracted 20% of your codebase into a gem, so you had that one really well factored piece of business logic where everything's beautifully DIed and reusable, but people have just hacked functionality into the gem over the years and now it blows up if required anywhere other than your Rails app, and oh god why.
|
# ¿ Jan 30, 2015 02:39 |
|
KoRMaK posted:HAhahaahahaha Models is the right word. Model in software engineering doesn't mean ActiveRecord or DAO or some specific type of object, it means domain model. However, in the Rails world, for some reason "model" has become bastardized to mean ActiveRecord, which leads to a lot of confusion. If you take a step back to pretty much anything other than Rails (personally, I did C and Java back in the 90s, .NET in the early 2000s, Python late 2000s, and switched to Ruby in 2012) you'll find that the domain model is a concept that represents the entirety of your application. Losing the conflation that model == ActiveRecord is something that I think would help a lot of Rails guys out.
|
# ¿ Jan 30, 2015 02:42 |
|
I kinda just said screw it to Rails conventions a long time ago because I think Rails conventions start to fall apart really quick when you start writing any application that's more complex than a blog. The whole concept of different directories that aren't separate namespaces annoys me to begin with. I'd honestly argue that Rails's directory layout is detrimental. The controllers, models, etc directories should IMO each be actual modules - i.e. YourApp::Controllers, YourApp::Models, etc, not a bunch of directories whose contents are all autoloaded into the same module. Views probably belongs somewhere else. If we got rid of this legacy cruft and fear of namespacing, I think Rails would be a lot better off. I'm pretty excited about Lotus because it actually embraces OO proper and lets you structure your code in modules however you see fit. http://lotusrb.org/ Edit: If I had my way, I'd probably have a MyApp::HTTP module that contains controllers and related frontend classes (datatables, helpers, etc) and MyApp::Application module that contains the actual application, with a strict enforcement that Frontend can depend on Application but not the other way around. I might actually do that, thinking about it more. It's already effectively how I've structured my code in the directory structures anyway. Thalagyrt fucked around with this message at 02:57 on Jan 30, 2015 |
# ¿ Jan 30, 2015 02:50 |
|
MALE SHOEGAZE posted:Yeah I agree completely. The Rails directory structure was my number 1 pet peeve until I got good enough at ruby/the rails magic to just ignore all of it. When I first started designing the vNuc portal (because let's face it, all the billing software in the hosting world blows. Actually, considering cPanel, Plesk, SolusVM, etc, I'll just say all the software in the hosting world blows) I contemplated a strict SOA policy. Actually started implementing it that way. Then I realized I'm probably never ever going to even come close to the scale, both in terms of code complexity (it's a hosting control panel, it handles credit cards, VPSes, support tickets, Exchange accounts, their lifecycles, and that's about it) as well as scaling (again, hosting control panel, I'm never going to have even thousands of simultaneous users) to make it worth the added complexity. Would have been super cool and super fun I'm sure, but in this business the biggest my scaling needs are ever realistically going to get is probably adding a second app server. I'm likely never even going to be 1% of the size of say AWS, so I'm not going to face those problems.
|
# ¿ Jan 30, 2015 03:05 |
|
Pollyanna posted:Actually, that reminds me - a lot of the Rails projects I've made are just straight "models -> database schema -> CRUD ops" setups, with rarely anything much more complicated than Users and Posts. What cases are those where you would actually use non-AR backed models, modules and extensions, and other complicated stuff? In my case VPS management - should a database access object with the responsibility of storing server details really know about how to create a Xen domain, or install Linux on a server? Billing stuff - it's really not an account DAO's job to know how to talk to the bank, is it? I shouldn't have to talk to the bank every time I want to test something in the application. Same deal goes with Exchange - why should a DAO know how to talk to Active Directory? If you put all of this stuff in a database model you're going to create a nightmare for yourself with the SRP violation and code complexity. These objects that have 10+ responsibilities become hard to test effectively. It's all about decoupling components, then composing them back together to get the needed work done. By decoupling your components, you make it easier to replace components, which makes it easier to test components by replacing their dependencies with mocks that can reliably simulate failure conditions. Also, by decoupling your components you make it easier to extend behavior. A good example of this was me wanting to give a credit for the remaining time on a server to a customer when they cancel the server. I introduced a CreditingServerTerminator which wraps a ServerTerminator (which can be injected on initialize, but comes with a sane default) and simply adds the credit when the ServerTerminator does its thing. The CreditingServerTerminator is used in the client area TerminationsController's create action. 
If all of that lived in the Server model itself I'd have two methods that terminate a server, which I suppose could make sense, but again, consider how much responsibility the Server would have in this case. It'd easily be a 4000 line class. This also makes it super easy to test the behavior of crediting on termination. I can mock out the ServerTerminator to just succeed, and assert that the account's ledger gets a credit message sent. No database access necessary to ensure the behavior works. For a bit more clarity, the ServerTerminator's job is to actually destroy the Xen domain with our backend system, and then update the Server DAO's state to indicate that it's actually terminated. It also kicks off a few background jobs to notify the admins and account users that the server was terminated, and places an entry in the account's audit log. Basically, all the responsibility of what it means to terminate a server goes in there. It's used when a user terminates a server, an admin terminates one (say in the case of abuse) or when automation terminates one, and it's easy to compose behavior on top of as seen above. The audit logging is another good example of when to inject dependencies and how decoupling into smaller classes makes things easier. Rather than having it simply be Account#log_event(details), I have an EventLogger in the Accounts module, which can be set up with various params. Every service object that uses an EventLogger comes with sane defaults for the default use case, but you can inject an EventLogger as well - so any time one of these services is used in a controller, I inject an EventLogger that's been pre-loaded with the user and request details so that the acting user + IP address gets tagged on any logged audit events. Thalagyrt fucked around with this message at 19:09 on Jan 30, 2015 |
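A stripped-down sketch of that wrapper (hash-based stand-ins for the real DAOs; the class names come from the description above, but the bodies are illustrative, not actual vNucleus code):

```ruby
class ServerTerminator
  # Real version: destroys the Xen domain, updates the Server DAO,
  # enqueues notification jobs, writes an audit log entry.
  def terminate(server)
    server[:state] = :terminated
    server
  end
end

class CreditingServerTerminator
  # The inner terminator is injectable but has a sane default.
  def initialize(ledger:, terminator: ServerTerminator.new)
    @ledger = ledger
    @terminator = terminator
  end

  def terminate(server)
    @terminator.terminate(server)                     # delegate the real work
    @ledger << { credit: server[:remaining_value] }   # compose the new behavior
    server
  end
end

ledger = []
server = { state: :active, remaining_value: 12.5 }
CreditingServerTerminator.new(ledger: ledger).terminate(server)
server[:state]   # :terminated
ledger           # [{ credit: 12.5 }]
```

Swapping a mocked-out ServerTerminator in via the keyword argument is exactly how the crediting behavior gets tested without touching the backend.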
# ¿ Jan 30, 2015 19:03 |
|
MALE SHOEGAZE posted:Yeah but there's also nothing wrong with having lib/services, lib/sweepers. My main point is that if you're trying to follow rails conventions, you probably shouldn't be putting it in models. Yeah, my entire argument is that the conventions are bunk, and people are making it harder to reason about their codebase by splitting up a single namespace into more than one directory. I've never seen that done in anything other than Rails - in every other language, the module tree and directory tree in a given project will look the same, so to someone who hasn't seen this weird pattern before, it'd make sense to think that /lib would be a Lib module, /app/models App::Models, /app/controllers App::Controllers, /app/services App::Services, etc - but they're one big module that just globs all this stuff up instead. I think the concept of having app/services, app/actions, app/models, etc all actually being the same module makes it harder to reason about. In my opinion, organizing your code by functional module (in my case, Servers, Accounts, Users, Email, etc) makes it easier to reason about. I'm not thinking about whether something is a service, or AR model, or whatever when I'm thinking about where it belongs. Instead, I'm thinking about whether it's part of Accounts, or Servers, or Tickets, etc. Here, check out my app/models. I think you might see why I argue for this. You might also call me crazy, who knows. http://hastebin.com/laxuzoyufe.avrasm Each one of these subdirectories is pretty much one self-contained component to the site. There are some dependencies between modules - most everything has a dependency on the Accounts and Users modules - but for the most part, dependencies stay within a module. I could probably split these up further into more subdirectories, but I haven't found that necessary. 
But the key takeaway is that my code is organized by logical groupings of functionality, not by whether something is a service, or an AR, or something else. Thalagyrt fucked around with this message at 01:01 on Feb 1, 2015 |
# ¿ Feb 1, 2015 00:53 |
|
MALE SHOEGAZE posted:No, I'm totally with you on keeping your modules discrete. It makes way, way more sense to do: Yeah, this is exactly what I mean. My only argument against throwing it all in lib is that in the vast majority of cases this stuff isn't going to make sense outside of the context of your app - and lib is traditionally a directory used for external/third party libraries - so unless it's something I'm going to extract into a gem, I keep it out of lib. I was looking at Discourse earlier today and found it incredibly weird that they basically have structured their files in such a way that it suggests that the core of their application is a third party library. Edit: The top two results for "rails lib directory" on Google seem to agree with me, as well. These guys use it as a staging place for code that you could extract into a gem and reuse in another application. http://blog.codeclimate.com/blog/2012/02/07/what-code-goes-in-the-lib-directory/ http://reefpoints.dockyard.com/ruby/2012/02/14/love-your-lib-directory.html Thalagyrt fucked around with this message at 01:25 on Feb 1, 2015 |
# ¿ Feb 1, 2015 01:11 |
|
prom candy posted:I think I agree with you guys about file organization however I think there's an argument to be made that fighting against how Rails dictates that your files should be organized can make your codebase harder to reason about for a new developer joining your project. The organization of controllers, specifically, is also somewhat tied to how Rails expects things to be laid out in order to work quickly and easily with routing. Controllers aren't part of your domain model. They're a boundary between HTTP and your application, and their entire job should be to take a request, tell your application to do one thing, and then render the outcome of that action as HTML/JSON/etc. Controllers having a completely separate namespace structure that lines up with your URL structure makes sense and I definitely wouldn't try to put them in the same set of modules as my domain model.
|
# ¿ Feb 1, 2015 18:26 |
|
xenilk posted:i guess it's not purely AR but couldn't you do That should generate equivalent SQL to .where.not(countries: { alpha2: @country_exclude_list }). Example:
2.1.3 :003 > puts Accounts::Account.joins(:solus_servers).where.not(solus_servers: { id: [1, 2] }).to_sql
SELECT "accounts_accounts".* FROM "accounts_accounts" INNER JOIN "solus_servers" ON "solus_servers"."account_id" = "accounts_accounts"."id" WHERE ("solus_servers"."id" NOT IN (1, 2))
|
# ¿ Mar 6, 2015 01:49 |
|
Smol posted:MySQL stores everything in UTC as well. It doesn't have a TIMESTAMP WITH TIME ZONE or equivalent data type. Not entirely correct. MySQL stores time without timezone data at all, so saying it stores everything in UTC is incorrect as that would imply that MySQL is aware the timestamps you're storing are UTC, and it would have to be timezone aware for that. Since MySQL doesn't have a timezone aware datatype, best practice is to treat all times as UTC when storing/retrieving, but that doesn't mean that MySQL considers the times UTC. They're just times without any timezone as far as it's concerned.
|
# ¿ Mar 17, 2015 20:30 |
|
kayakyakr posted:Use Sucker Punch for async and whenever for scheduled... drop delayed job, it's a PITA. Backgrounding tasks within your web workers with absolutely no guarantees of job durability or completion is a terrible idea. If a worker dies for some reason any jobs that haven't run will be lost. Delayed Job and other tools architected like it (read: with background worker processes and a persistent storage mechanism) are far more reliable and make a guarantee that a job will be completed. If you outgrow using the database for jobs, switch to Sidekiq or Resque. Sucker Punch is a hack for async on the free tier of Heroku when you only have one process to run in and shouldn't be used in any real production environment. Thalagyrt fucked around with this message at 23:02 on Jun 15, 2015 |
# ¿ Jun 15, 2015 22:58 |
|
kayakyakr posted:The others are also PITA to deploy, as KoRMaK is finding. If you plug in to the new ActiveJob stuff, then you're framework-flexible and can switch to whichever when you're ready to improve your deploy. How is Delayed Job anything resembling a PITA to deploy? Set up supervisor or runit or whatever the heck you want to use to run a couple instances of bundle exec rake jobs:work and you're done. Add a couple command line options to write out pidfiles if you want to be able to easily restart them with, say, Capistrano. Edit: If you're deploying on Heroku or a similar PaaS or even just using Foreman on a VPS it's even easier. Just add worker: bundle exec rake jobs:work to your procfile and you're done. In KoRMaK's case, I'd advise setting up supervisor. It's really rather easy - just have to get it to switch to the right user and then run rake jobs:work and you're done. It can be done in about 15 minutes if you've never touched supervisor before. Thalagyrt fucked around with this message at 00:04 on Jun 16, 2015 |
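For reference, a supervisor program entry for this looks roughly like the following (paths, user, and worker count are assumptions, not anyone's actual config):

```ini
; Two foreground delayed_job workers under supervisor (sketch)
[program:delayed_job]
command=bundle exec rake jobs:work
directory=/home/deploy/app/current
user=deploy
numprocs=2
process_name=%(program_name)s_%(process_num)02d
environment=RAILS_ENV="production"
autostart=true
autorestart=true
stopsignal=TERM
```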
# ¿ Jun 15, 2015 23:47 |
|
KoRMaK posted:Ugh, supervisor wants non-daemonized tasks, but the delayed job script is daemonized and the rake task doesn't let me specify multiple workers. Yeah, just run them as foreground workers. I have 4 services set up with runit for mine, and have delayed_job configured to write out pid files so I can just kill `cat delayed_job.*.pid` to restart all the workers. Here's my runfile for runit - you should be able to do the same with supervisor easily. My ~/shared/environment file just loads up all the env variables - RAILS_ENV and other config. code:
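The original snippet didn't survive the archive; a typical runit run file matching that description would look something like this (paths, user, and identifier are all assumptions, not the poster's actual file):

```shell
#!/bin/sh
# runit run file for one foreground delayed_job worker (sketch)
exec 2>&1
cd /home/deploy/app/current
. /home/deploy/shared/environment      # loads RAILS_ENV and friends
exec chpst -u deploy \
  bundle exec bin/delayed_job run --identifier=1 --pid-dir=tmp/pids
```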
|
# ¿ Jun 16, 2015 19:01 |
|
KoRMaK posted:I need to get better at the linux. It's explained right in the manual for delayed_job: https://github.com/collectiveidea/delayed_job/wiki/Delayed-job-command-details It adds the numeric identifier to the process name, so you'll see delayed_job.1, delayed_job.2, etc.
|
# ¿ Jun 16, 2015 20:13 |
|
KoRMaK posted:Dammit, I need to browse the wiki pages instead of google searching and get more sleep. It will show up in the pidfile written out by delayed_job, which ends up in RAILS_ROOT/tmp/pids. Setting the identifier will cause the pidfile to be written out as delayed_job.identifier.pid instead of delayed_job.pid, which lets you keep track of each one individually. code:
|
# ¿ Jun 16, 2015 20:37 |
|
KoRMaK posted:I thought thats where it should be but I can't find it. I looked in my home directory/tmp/pids and the rails/tmp/pids directory and then did a search for everything with pid or delayed_job and I'm not finding any files. The pid files should be created whether or not it's daemonized. It's certainly created in my setup, and you can see the script I'm using to start it - it's not running daemonized. It won't be created if you start it with rake, though.
|
# ¿ Jun 16, 2015 21:27 |
|
KoRMaK posted:Here's a dumb question: I have rvm installed as my user, but when I go into sudo mode and do rvm it says its not installed. As sudo, I source my users bashrc file thinking that will fix it, but it doesn't. You don't want a web site running as root. Why are you using sudo?
|
# ¿ Jun 18, 2015 15:54 |
|
KoRMaK posted:I'm trying to get monit to launch the delayed_job script with the right env stuff. I thought I had to install rvm as sudo because monit likes to run stuff as sudo. Why are you trying to run delayed_job as root? Run it as the user that owns your web site, just like you do on your local VM.
|
# ¿ Jun 18, 2015 16:13 |
|
KoRMaK posted:Back to a question I had to a little bit ago, how do I install rvm and gems so that ALL users or a subset of users use the same gemset? Is this a common pattern, where multiple users share gemsets or are you supposed to install it for each user? That'd require a lot of permission hackery. I've never seen a need to do that, though. What are you trying to accomplish?
|
# ¿ Jun 18, 2015 16:48 |
|
KoRMaK posted:I'm having a hard time with writing tests. I'm using rspec and capybara. I think you might want to rethink your test strategy. Why are you testing that the browser can download a file? That's not part of your application. Just make sure it's served up properly and call it done. Having a browser download a file and trying to make an assertion that the file's been downloaded is rather silly. Edit: In the same vein, don't test ActiveRecord, don't test other external dependencies, but do test *your* code.
|
# ¿ Jun 26, 2015 01:59 |
|
KoRMaK posted:Yea, probably. The goal is to verify that we can verify that our app can send data to a third party, who turns it into a PDF, and then the user can download that PDF. kayakyakr already covered it, but mock at any boundaries and test that your code responds to the boundaries properly. You don't need to test third party APIs. For the downloads, you don't need to test that the browser can download the file. If Chrome couldn't download a file, that's a bug in Chrome, not a bug in your code. Just use a controller test and assert that the right content type is set and other information relevant to the request. Beyond that, any browser will handle the download properly - so why waste time testing your browser?
|
# ¿ Jun 26, 2015 16:49 |
|
KoRMaK posted:I guess if I had to defend it, it would be that we want to make sure that our users are getting what they expect, and that includes knowing if our third party vendors are hosed up. The majority of failures that'll arise when you're testing third parties will be random intermittent issues (throttling, network hiccups, maintenance, etc) and you're gonna end up pulling your hair out over random failures in your CI. Your tests should be deterministic. In order to have deterministic tests, you need to only test units under your control. That means mocking out *all* third party requests/responses. Work under the assumption that the third party will be working, and mock up specific failure scenarios so you know that you can gracefully handle third party APIs being down or otherwise erroring out.
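The shape of that advice, with the vendor client injected so a test can force the failure path deterministically (class names hypothetical; a real suite would use RSpec doubles or WebMock for this):

```ruby
# Production code talks to the third party through an injected client,
# so tests can swap in deterministic stubs.
class PdfDownloader
  def initialize(client)
    @client = client
  end

  def fetch(document_id)
    @client.render(document_id)
  rescue RuntimeError
    :vendor_unavailable      # graceful handling of a vendor outage
  end
end

class HappyVendor            # deterministic success stub
  def render(id)
    "PDF-#{id}"
  end
end

class DownVendor             # deterministic failure stub -- no flaky CI
  def render(_id)
    raise RuntimeError, "503 Service Unavailable"
  end
end

PdfDownloader.new(HappyVendor.new).fetch(42)   # "PDF-42"
PdfDownloader.new(DownVendor.new).fetch(42)    # :vendor_unavailable
```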
|
# ¿ Jun 26, 2015 16:56 |
|
Pardot posted:This feels like you want DISTINCT ON (which is not the same as just DISTINCT), but I dont know how you get that through active record. If you're using Arel 6 (so Rails 4.2 and up) there's an Arel method for distinct_on. code:
|
# ¿ Jul 11, 2015 18:47 |
|
Pollyanna posted:Sounds good to me. I made a spec that mimics our current issue and from what I can tell, the script/class/module/thing-I-made handles it correctly. I guess logging into console works well. I tested it on my local copy of our DB, and it works there too. We have a 1.5TB database at my day job that's fully backed up to S3 daily... What's your excuse for not caring about your business's data at all? Thalagyrt fucked around with this message at 20:21 on Nov 11, 2015 |
# ¿ Nov 11, 2015 20:18 |