Evil Trout
Nov 16, 2004

The evilest trout of them all
Chiming in to say that I'm quite experienced with Ruby and Rails after working with it full time for the last year, so I'll gladly answer any questions anyone might have.

My project is a web-based role playing game based on Internet culture. It's got over 100 models and uses a lot of awesome Ruby meta-programming tricks. It uses Haml for the templates and Sass for the stylesheets.

shopvac4christ posted:

I've tried installing Typo to blog.mysite.com. So I'd have something like ~/sites/blog.mysite.com/public symlinked to ~/public_html/blog and a subdomain created which points to that directory. First I had to figure out the trailing slash problem (the difference between mysite.com/blog/ and mysite.com/blog) but now that that's working, going to blog.mysite.com doesn't work at all. There's got to be a way to fix this without resorting to site-specific routing stuff, right?

What web server are you using? Deployment changes drastically depending on it. For example, with Apache 2 I'd set up a name-based virtual host (NameVirtualHost) pointing at whatever directory you're running your Typo install out of (forget the symlink).
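A name-based virtual host for that setup might look something like this (domain and paths are placeholders, and this only covers the vhost; how Apache then talks to Rails, via FastCGI or a proxy, is a separate step):

```apache
NameVirtualHost *:80

<VirtualHost *:80>
  # Serve the subdomain straight out of the Typo install's public dir
  ServerName blog.mysite.com
  DocumentRoot /home/you/sites/blog.mysite.com/public
</VirtualHost>
```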


Evil Trout
Nov 16, 2004

The evilest trout of them all

shopvac4christ posted:

My host (Site5) is using Apache, but I'm not sure how much control I have over it.

Unfortunately, controlling which subdomains point where is really a web server configuration issue; nothing specific to rails. If you can't touch Apache's configuration files, it will probably take some hackery outside of rails to get it working.

Evil Trout
Nov 16, 2004

The evilest trout of them all

shopvac4christ posted:

How would one deploy a common library to all sites on a server? I ask the question this way because I'm coming from ASP.NET, where if we updated something in a global utilities library, we could just copy the DLL to separate projects, or even better, import it server-wide. I'm guessing it has something to do with plugins and maybe using capistrano to update the plugins for each site in a special deployment, but beyond that I wouldn't have any ideas.

One of the great features of Subversion (which I assume you're using, since you mentioned capistrano) is externals. An external lets you point part of your source tree at another repository altogether.

What I would do in this case is build a new tree for your plugin, then reference it as an external in the plugins directory of each project. Then, whenever you do an svn update (through capistrano or locally, whatever), you'll get the latest stable version of your plugin.

I'm on edge rails right now, and I have my vendor/rails directory set up as an svn external. Every time I do an svn update, it grabs the latest version of rails for me, which is awesome. I do the same for Haml.
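For reference, the external itself is just a property on the parent directory; something like this (plugin name and repository URL are made up):

```text
# svn:externals property on vendor/plugins, set with:
#   svn propset svn:externals "my_plugin http://svn.example.com/my_plugin/trunk" vendor/plugins
my_plugin http://svn.example.com/my_plugin/trunk
```

After committing the property, every svn update pulls the plugin's latest revision along with the rest of the tree.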

Evil Trout
Nov 16, 2004

The evilest trout of them all

Space Kimchi posted:

If someone wants to waste their time on this crap, knock yourself out, I guess. I'll be over here making stuff that works and just worrying about the normal lovely timesinks involved in web dev, like cross-browser CSS compliance and having the w3c inspector not poo poo bricks when I send my site through it to please some dude in IRC who swears my problems with something unrelated has to do with not using & for link strings with an ampersand in them.

It's interesting, because most web developers I know just "get" the basics of rails (activerecord, controllers, views, rjs) when they first see it. The way it's organized and does the database handling is pretty intuitive.

I can't say the same for the REST stuff. It's often a head scratcher -- like why would I bother to do all that extra work when the end product is the same?

There is an advantage, though, and it's organization. One of the best things about a rails app is that when you need to debug it, you know exactly which file to open in which directory, since every app is organized the same way.

The REST stuff takes it a little further, in that it names the majority of your controllers and their methods for you. You always call the same methods to insert, update, delete, etc. Additionally, you get a lot of cool "black magic" for finding the paths of things.

Evil Trout
Nov 16, 2004

The evilest trout of them all

MrSaturn posted:

This is a question so stupid, I can't believe I'm asking it, but:

how do I link to a static page within my domain? I want to make a contact page for my application, but there's no dynamic content to be put on it. I tried making app/views/contact/index.rhtml, and I got a routing error when I pointed my browser there.

where do I put a page to link to, say
localhost:3000/contact?

If it's purely static, you can put it in public as people have mentioned, but if you want to do stuff like use layouts, an easy way to do it is to create a controller

script/generate controller contact

The controller doesn't need any methods. Even without actions defined, it'll look for a view matching whatever action you request, so your above url/file should work. I suspect you got the routing error because you didn't have a contact controller.
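A minimal sketch of what the generator leaves you with (file path per Rails convention):

```ruby
# app/controllers/contact_controller.rb
# No actions needed: with no method defined, Rails falls through to
# rendering app/views/contact/index.rhtml for /contact
class ContactController < ApplicationController
end
```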

Evil Trout
Nov 16, 2004

The evilest trout of them all

Hammertime posted:

Oh, and assuming this is a pre-production system. Don't bother creating separate migration files for an alter column. It's easier to migrate back to version 0, change the table declaration to include the column, and rake it back to the latest version. Before you actually push the system live, the practice is to group all the migration files together into a single file (not including sample/starting data if you have any). Ideally there's a migration file for each version of the system that gets pushed into production, having separate migration files for development is just a temporary measure.

This can actually be dangerous unless you are the only person working on the project. If multiple people are checking out your source, going back and editing earlier revisions is sure to ruin their day :) Just a warning for the new guys.

Evil Trout
Nov 16, 2004

The evilest trout of them all
Here's a script I created to add all new files to the repo. It's good for non-generated content like javascripts and stylesheets:

#!/bin/sh
# Add every file svn reports as unversioned ("?" status).
# Caveat: breaks on filenames that contain spaces.
svn add `svn status | grep "^?" | sed "s/^? *//"`

Evil Trout
Nov 16, 2004

The evilest trout of them all

shopvac4christ posted:

Right. But what I'm asking is why is there a deploy.rb file at all? Why aren't the deployment instructions completely independent of whether or not there's a copy of the source code checked out?

The way Capistrano works is that it tunnels through SSH and executes a series of instructions on a remote machine. It needs to be run FROM somewhere, usually on a development machine that has a copy of the source locally.

If it were a server process you could just notify it to deploy itself, sure.

The deploy.rb is part of the instructions you're running that in turn execute remote SSH commands.

Evil Trout
Nov 16, 2004

The evilest trout of them all

Hop Pocket posted:

What is the rails equivalent of a Quartz scheduler? I want something that will execute periodic jobs on the development/production servers. I'd prefer to stay outside of the OS level cron and inside the application if possible in case our production server ends up being hosted on Windows.

I looked for something like this months ago and couldn't come up with anything. So my site uses cron for all its jobs, invoking them through script/runner.
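A crontab entry driving script/runner might look like this (the path, schedule, and Job.run_hourly are all hypothetical):

```cron
# m h dom mon dow  command
0 * * * *  cd /var/www/myapp && ruby script/runner -e production "Job.run_hourly"
```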

Evil Trout
Nov 16, 2004

The evilest trout of them all

SeventySeven posted:

Embarrassingly retarded question... I can't believe I can't figure out the answer to this on my own. Imagine I've got something like the following:

code:
@posts = Post.find(:all)
Which then gets passed to the view, yadda yadda yadda, this is all basic stuff. But what if @posts is empty (in this case there's no data) and I want to alter the view accordingly. I thought it would be a matter of doing something like the following:

code:
<% if @posts.nil? %>
  <p>No posts!</p>
<% else %>
  # Loop through @posts
<% end %>
But apparently not. Oh god, how do I do this? Put me out of my misery.

code:
<% if @posts.size == 0 %>
  <p>No posts!</p>
<% else %>
  display posts
<% end %>

Evil Trout
Nov 16, 2004

The evilest trout of them all

SeventySeven posted:

:ssj: Why didn't I think of that? I tried @posts.length and @posts.count.

Thanks.

Actually @posts.length should work too. @posts.size is equivalent, so I'm not sure why that failed!
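On a plain ruby array all three checks agree, which is easy to verify outside Rails:

```ruby
posts = []

# All three agree on an empty collection
puts posts.size    # => 0
puts posts.length  # => 0
puts posts.empty?  # => true

posts = ["first", "second"]
puts posts.size    # => 2
puts posts.empty?  # => false
```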

Evil Trout
Nov 16, 2004

The evilest trout of them all

zigb posted:

- You still write SQL, for any sort of advanced find / sort. Not that I mind writing SQL, but I prefer to write it in one large understandable chunk instead of splitting it up into mini-clauses, like :conditions => "[some sql]", :order => "[some sql]", etc. Most find queries that required a JOIN or two had to be written using find_by_sql. Again, I like sql, but I feel like I'm cheating every time I use it in Rails.

This puzzles me. I just deployed an online RPG built with rails, with over 120 models, and I'd say I had to write manual SQL less than 1% of the time.

Especially the part where you say a join or two -- what kind of joins are you doing that aren't handled elegantly by ActiveRecord?

Evil Trout
Nov 16, 2004

The evilest trout of them all

Hop Pocket posted:

For those of you using Mongrel in a deployment environment, how many Mongrel processes do you start? I know that it's all dependent on the application, load, # of static pages, etc. I'm setting up some hosting with Joyent, and you have to request Mongrel ports for a shared hosting environment, so I was just curious.

My site, Forumwarz, runs on 10 Mongrels behind an nginx proxy. It's pretty snappy; each Mongrel process seems to use about 50-60MB of RAM on a 2.4GHz Core2Duo dedicated server.

Evil Trout
Nov 16, 2004

The evilest trout of them all
One thing to consider: if you use Haml for markup (which I do and love), it has its own puts methods you can use in helpers, so your above code would work!

Evil Trout
Nov 16, 2004

The evilest trout of them all

Hop Pocket posted:

This might be more of a scriptaculous question, but I can't seem to get my animations that my RJS template return to execute serially. In other words, they seem to always run at the same time. Is there a way to get any visual_effect to run only after the previous one has finished?

This is something that has begun to annoy me lately. My game uses a lot of scriptaculous and the best way I found to do things serially is to use the page.delay(seconds) command that uses a block.

code:
page.visual_effect :fade, 'something', :duration => 1.0
page.delay(1.2) do
  page.replace_html 'something', 'HELLO'
  page.visual_effect :appear, 'something'
end
The problem is, I've found in the real world that sometimes the delay will execute before the first visual_effect has finished (usually on slow computers). In the example above, this will cause the div to not appear properly as it will be updated before it's hidden.

If you're coding in pure javascript, Scriptaculous allows you to execute events once the animation has finished, however there is no API for doing this in RJS.

Evil Trout
Nov 16, 2004

The evilest trout of them all

ikari posted:

http://wiki.script.aculo.us/scriptaculous/show/EffectQueues

You can use EffectQueues to get that behaviour, they can sometimes behave strangely but it's likely what you want.

Interesting, I didn't know effect queues were supported by RJS!

Having said that, it wouldn't solve the problem I had above, since updating an element isn't an animation, but it would solve the initial question :)

Evil Trout
Nov 16, 2004

The evilest trout of them all

savetheclocktower posted:

Effect.Event is a skeleton effect that allows arbitrary functions to be treated as effects for the purpose of queueing.

Yes, but I was talking about RJS. It's not hard to trigger an arbitrary function call for a visual effect when writing your own Javascript, the issue is doing it from RJS. For example, you can say:

code:
render :update do |page|
  page.delay(3) do
   # something
  end
end
And I was saying I'm not aware of a way to do:
code:
render :update do |page|
  page.visual_effect :blind_down do
    #something
  end
end
Where something is not a page.visual_effect, for example, a replace_html or a toggle or something like that.

My goal is to avoid writing external javascript files for simple things like fades and updates :)

I've found this technique, which involves using the << operator to write your own first-class functions, but it seems a bit of a hack.

Evil Trout
Nov 16, 2004

The evilest trout of them all

skidooer posted:

How about something like this?

This is exactly the kind of thing I want (and think RJS needs). I'll take a look later and see how it works, thanks!

savetheclocktower posted:

RJS isn't a leakproof abstraction. It's really just a way to place Ajax logic in your controller, where it makes sense to live. If you try to write everything in pure RJS, never touching JavaScript at all, you'll just end up uglifying your controllers.

Again, I'm aware you can call arbitrary Javascript from RJS. I don't want to write everything in RJS, but there are many cases where you just want to chain a couple of animations to an update.

Regarding uglifying your controllers: anything more than a couple of lines of RJS goes in a .rjs file in my view directory.

Evil Trout
Nov 16, 2004

The evilest trout of them all

Xgkkp posted:

So I've used capistrano/mongrel to deploy and run, as the 'agile development with rails' book suggested. Does anyone know if:

a) This is the best way to deploy it
b) it's worth, and possible, preventing mongrel from accepting incoming connections outside of apache (i.e. nothing but localhost - I can access it at servername:8000 publically)
c) How I can easily get it to start at system boot - at the moment, the instances are started by capistrano directly, which obviously will die if the server goes down.

There are many ways to deploy a Rails app and how you want to do it has a lot to do with the hardware you have and the traffic you expect.

a) I personally run an nginx web server in front of a pack of Mongrels. It sounds like you're using Apache to do the same thing. I prefer nginx because it's less bulky, but the difference is minimal. Some benchmarks have shown that the best performance is Apache 2 with FastCGI Rails processes.

b) It is possible, and worth it, to configure what you're talking about. I recommend running a firewall on all public web servers. Basically, configure your firewall so that it only accepts incoming data on the few ports you need (usually HTTP/HTTPS/SSH/DNS and maybe SMTP). If you do this, local connections (from Apache to Mongrel) will work fine, but the outside world will only be able to reach the app by tunneling through Apache. On Linux, iptables is a good place to start looking.
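An iptables sketch along those lines (ports per your services; treat it as a starting point, not a hardened firewall):

```sh
# Allow loopback (Apache -> Mongrel on localhost) and established traffic
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT

# Open only the public services: SSH, HTTP, HTTPS
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT

# Default-deny everything else, including direct hits on the Mongrel ports
iptables -P INPUT DROP
```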

c) Mongrel has a page about how to configure it to start on boot:
http://mongrel.rubyforge.org/docs/mongrel_cluster.html - mine is set up using the points in that document and it works great.

Evil Trout
Nov 16, 2004

The evilest trout of them all

Hop Pocket posted:

I do love slicehost, but I'll be damned if gem does not work very well on a 256 slice. Very memory intensive. Installing one gem can take hours. I don't need a 512 slice, but am considering getting one just to make gem usage more palatable.

edit: 512 slice a billion times faster.

Considering rubygems is just a package management system, its resource usage is horrendous. Maybe it's doing some magic behind the scenes I'm unaware of, but simply updating the list of gems from the server can take up hundreds of megs of RAM.

Evil Trout
Nov 16, 2004

The evilest trout of them all

shopvac4christ posted:

Okay, so since apparently no one is actually deploying Rails projects, I have another question :)

No, some of us have Rails projects in production. Forumwarz, my MMO, is deployed on a dedicated host.

The technology is Nginx proxying to a pack of Mongrels.

I don't have any recommendations for small projects, but if you are doing mid-sized traffic (right now we're doing 20 dynamic rq/sec) a dedicated server is quite affordable at most hosting companies.

I just assumed that since you said you were interested in Phusion that you were working on something smaller.

Evil Trout
Nov 16, 2004

The evilest trout of them all

savetheclocktower posted:

Off-topic: Forumwarz is awesome. You guys do great stuff on the JavaScript side.

Thanks, that's mostly due to Prototype and Scriptaculous making it so easy!

Evil Trout
Nov 16, 2004

The evilest trout of them all

Nolgthorn posted:

Lets say I have about 20 tables that include more or less each a set of options for 20 different select boxes. These will change rarely, I have set up all the functionality required to display these select boxes and show the user's selected options somewhere all over a page someplace.

What is the best way to ensure there aren't 20 hits to the database on every edit request and a further 20 hits to the database on every page view where these results are displayed?

I can use page caching where the results are displayed but I am a bit dumbfounded by the idea of caching the select boxes that allow the users to change these settings as they need to be pre-populated with the values the user has selected.

Is the best way to just page cache on the edit page as well? Because the chances are that any time a user visits the edit page, they will be changing something that will clear the cache anyways.

In your controller, cache the results from the database calls.

If you're on Rails 2.1, caching is pretty easy to set up with a variety of stores (I can't recommend Memcached enough.)

http://www.thewebfellas.com/blog/2008/6/9/rails-2-1-now-with-better-integrated-caching

Evil Trout
Nov 16, 2004

The evilest trout of them all
In case anyone is interested, I just released Kawaii, which is kind of like a script/console web-front end for those of you like me who hang out in a console all day while developing in Rails.

It's open source, hosted at Github. Any feedback would be appreciated!

Evil Trout
Nov 16, 2004

The evilest trout of them all

bitprophet posted:

So, um. Why would I want to use Kawaii instead of, you know. A terminal window? :psyduck: This reminds me of Campfire's reinvention of IRC because "it wasn't a Web site" (and then of course a client-side app, Pyro, to interface with the Web site, making it even more :psyduck:...)

That said, I'm sure it was fun to make, and it looks neat, so good job at any rate!

Oh, it's because when you type any command whose output is longer than, say, two console lines, the result is ridiculously hard to read.

For example, I'd rather look at the nicely formatted output than the raw dump.

[Screenshots: the same command's output in Kawaii versus script/console.]

Evil Trout
Nov 16, 2004

The evilest trout of them all

j4on posted:

Let's say you work in a "fast and dirty" coding environment (news website) where most projects go from design to publish in under a week, sometimes only one day, and usually involve only one coder. Maintenance on old code is rarely needed, but deadlines absolutely can not be missed: if breaking the MVC paradigm will save one hour, by God break it. For these kinds of small data-driven "mini-sites" is RoR still better than PHP? Or should I be looking to learn something else entirely?

I would say yes, Rails is still better than PHP absolutely.

Once you get set up properly, you can spawn and build new apps very quickly. With Phusion Passenger you can deploy just as easily, too.

Evil Trout
Nov 16, 2004

The evilest trout of them all
I recently switched Forumwarz back from Phusion to a pack of Mongrels. Phusion seemed to perform more or less the same from a memory standpoint but I hated how I couldn't hot deploy safely.

With a pack of Mongrels, you can do a rolling restart so that your app never goes down while pushing updates (since nginx can proxy to the other mongrels). With Phusion, your only choice is to touch tmp/restart.txt, which shuts down all processes and starts them up again, preventing any requests from completing for up to 30 seconds. My users definitely noticed that!

Evil Trout
Nov 16, 2004

The evilest trout of them all

Nolgthorn posted:

I have an Entries model and I have a Votes model. Each Entry has many Votes and each time that an Entry is loaded, the associated Votes will also need to be loaded, is there a way I can use scope to automatically :include => :votes on each request?

Or is this not good practice?

In Edge rails, there's now default scoping: http://ryandaigle.com/articles/2008/11/18/what-s-new-in-edge-rails-default-scoping

Although if you aren't running Edge, you might want to just create a named_scope that does it, and just make sure you always use that named_scope when accessing your entries.
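The named_scope route might look like this (Rails 2.x syntax, model names taken from the question):

```ruby
class Entry < ActiveRecord::Base
  has_many :votes

  # Eager-load votes whenever entries are fetched through this scope
  named_scope :with_votes, :include => :votes
end
```

Then use Entry.with_votes wherever you'd otherwise write Entry, and the votes come back preloaded.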

Evil Trout
Nov 16, 2004

The evilest trout of them all

Nolgthorn posted:

Is there a way to easily report at the bottom of the page on a development application the processing time, requests and speed of which the page loaded? I am getting tired of always looking over to my terminal window on every page request to see even just a summary of what's been happening.

You might like FiveRuns Tuneup. Personally, I've tried things like this but always end up just looking at the console while optimizing.


Nolgthorn posted:

Also another question; I understand that the development environment runs a bit differently than in production. Now that I've got my db requests more or less optimized for the time being I'm noticing that my bottleneck seems to be loading partials, for instance I am using a commenting system and each comment is a partial with another partial in it for the rating system. When there are 25 comments being loaded on the page each one loads at a speed of between 0.7-1.7ms for the rating and 1-3ms per comment, in comparison the full amount of db requests on such a complex page load maybe 0.3ms in absolute total.

Why are the partials being loaded slowly, in production are these partials just going to be in memory or something?

Rails performs vastly differently in production mode. Templates are cached, so if you change one you have to restart the process. I have noticed performance problems from rendering too many partials in the past, but that's when there are hundreds. The overhead should be minor with 25.

You should do yourself a favor and enable production mode from time to time during development (just point your production entries in database.yml at the development database and it should work.)

Evil Trout
Nov 16, 2004

The evilest trout of them all

Nolgthorn posted:

I too don't like the way field_error_proc adds that horrible div around failing form elements, it's always messing up my layouts.

Why not just use CSS to not display it as a div? If you change the div's style to display: inline in your CSS, it behaves basically the same as a span.

Evil Trout
Nov 16, 2004

The evilest trout of them all

Pardot posted:

Rails 2.3 is out. http://guides.rubyonrails.org/2_3_release_notes.html - surprisingly great post about all the new stuff.

A small correction: it's a release candidate, not an official release yet.

http://weblog.rubyonrails.org/2009/2/1/rails-2-3-0-rc1-templates-engines-rack-metal-much-more

Evil Trout
Nov 16, 2004

The evilest trout of them all

chemosh6969 posted:

That got me closer. I did script/generate model YEAR1, but when I tried to pull the first record it died from trying to pull it from table YEAR1s

The problem here is that Rails is heavy on convention over configuration. In other words, it assumes that if a model is named Office, the table in the database will be named offices.

Of course this is less useful for existing databases but not hard to work around. You can still generate a model called Year1 -- but you'll have to add a line to the year1.rb file:

set_table_name "year1"

See http://api.rubyonrails.org/classes/ActiveRecord/Base.html#M002231

Evil Trout
Nov 16, 2004

The evilest trout of them all

Jargon posted:

I've been using Slicehost ($20 for an Ubuntu VPS with 256MB ram, enough to run a Rails site with light to moderate traffic), and it's a great value if you don't mind getting your hands dirty with a bare Linux box.

I just signed up and used you as a referrer :)

Forumwarz already runs on a cluster of three 8-core servers, but we're launching a new project that I wanted to partition away from our regular site. Slicehost looks really good as a lightweight simple host.

Evil Trout
Nov 16, 2004

The evilest trout of them all

GroceryBagHead posted:

You don't know how much you hate Mongrel until you need to do a rolling restart on a cluster of 4 app servers with 4 processes on each while fighting with Monit so it doesn't kill newly created mongrel processes. Passenger is just a must now.

Unicorn is worth a look. You can do a rolling restart pretty much out of the box by sending your master process a USR2.

You also only need monit to monitor one process (the master) because the children are restarted before you can even notice they're down.

I've run basically every Rails deployment system at one point or another and it's my current favorite.

Evil Trout
Nov 16, 2004

The evilest trout of them all

bitprophet posted:

Unicorn looks like has some excellent ideas behind it, but practically speaking I don't see any obvious benefits it has over Passenger for the average deployment situation. E.g. a soft restart is as easy as touch-ing a file or doing /etc/init.d/apache2 reload. If you migrated to Unicorn from Passenger I'd be interested in hearing why :)

I only ran Passenger for about a month. My site handles millions of Rails requests a day, and I very quickly discovered something about Passenger that I didn't like.

One dark secret about Ruby is that long-running processes accumulate memory. Every major Rails deployment I know of uses some combination of Monit/God to watch each process and restart it when it goes above a certain threshold. You can and should minimize restarts by profiling your memory usage, but even a finely tuned app will still require process restarts from time to time.

In Passenger there is no built-in way to monitor the memory of its processes. It does have a system to restart a process after a certain number of requests, though: PassengerMaxRequests. I've also seen people set up cron jobs that check memory usage on the command line and kill processes when they get too high, which works too.

The bad part is that when you restart a process (whether by sending it a terminating signal on the command line, touching tmp/restart.txt, or using something like PassengerMaxRequests), Passenger seems to stall all future requests while the new process spins up. In my app's case this could take 5-10 seconds, and that was noticeable and annoying.

So after a month of running Passenger I switched back to Mongrels and Monit. With Nginx in front, if one mongrel was restarting it would just proxy the request to another one and users wouldn't notice any interruption.

One flaw of Mongrel, as has been pointed out, is that hot restarts are quite difficult. In Unicorn, they're trivial. If you send a USR2 signal to the master process, it spins up a whole new copy of the application in the background and only kills the original once the new one is fully loaded. Unlike rolling restarts, there's no risk of users hitting a mix of the old and new versions of your app while other processes restart.

I've also found it performs a lot better, and the nginx configuration is a lot simpler with a unix domain socket. If I add or remove processes, the nginx configuration can stay the same. My monit configuration is also a lot easier, since I only monitor the master process. My memory bloat is managed by a simple script that runs every 30 minutes.
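The socket end of that setup is only a few lines of nginx config (socket path illustrative):

```nginx
upstream app {
  # Single socket owned by the Unicorn master; worker count can change
  # without touching nginx
  server unix:/var/www/myapp/tmp/sockets/unicorn.sock fail_timeout=0;
}

server {
  listen 80;
  location / {
    proxy_pass http://app;
  }
}
```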

Anyway, if your site is not receiving many requests per second, you might not experience these problems. For a medium-level traffic site like my own, Passenger was a headache :)

Evil Trout
Nov 16, 2004

The evilest trout of them all

Hammertime posted:

Anything in /app gets reloaded cleanly, anything in /lib /config or gem related needs a restart. I restart on route changes too, though not sure if that's required these days.

Actually /lib hasn't needed a restart in at least a year's worth of Rails releases.

Evil Trout
Nov 16, 2004

The evilest trout of them all

NotShadowStar posted:

script/console needs restarting if you look at it funny.

True. Fortunately there's an easy way to do it, by typing reload!.

Evil Trout
Nov 16, 2004

The evilest trout of them all

bitprophet posted:

1.9 is still way too new for serious deployment IMO, certainly at any outfit which has to maintain codebases that are more than a year or two old, and/or which deploys to servers not running on the bleeding edge.

I disagree. I've been running 1.9.1 in production for well over a year on a codebase that's 4 years old. The upgrade took maybe 3 days of work and resulted in a huge speed improvement (average of 2x as fast).

Evil Trout
Nov 16, 2004

The evilest trout of them all

Nolgthorn posted:

In the application I'm working on before showing things like last names, email addresses, etc. I have it set up so that privacy settings are assessed that ensure a user actually wants their email address or last name to be visible. The problem is that often a whole unnecessary query that traverses backwards through the user model from within the privacy model is run: "self.user".

Is there a way to magically give the user object into the privacy model when I reference the privacy object so it doesn't have to traverse backwards?

Having trouble solving this logic experiment, or else there is something very simple I just don't see!

code:
class User < ActiveRecord::Base
  belongs_to :privacy, :dependent => :destroy

  def display_name
    return self.firstname unless self.privacy.show_lastname?
    self.full_name
  end
end
code:
class Privacy < ActiveRecord::Base
  has_one :user

  def show_lastname?
    [false, true, self.determine_friend?][self.perm_lastname]
  end
  
  def determine_friend?
    return true if self.user.id == current_user.id
    return false unless relationship = self.user.relationship(current_user)
    relationship.is_friend?
  end
end

I had a similar issue once and was able to do something like this (by the way, you're using the self keyword far more than you need to; it's usually unnecessary):

code:
def display_name
  p = privacy
  p.user = self # hand the already-loaded user to the association, skipping the reverse query
  return firstname unless p.show_lastname?
  full_name
end
Having said that, it uglies up your code a lot, and I would guess it's not usually worth it unless you've found it to be a huge performance problem. Especially since ActiveRecord has a query cache, and if you previously retrieved the user by id it will not query the database a second time to do it in most cases, even if it shows up on your console as an extra query. (Look for the word cache beside it.)
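Incidentally, the array-indexing trick in the quoted show_lastname? is easy to see in isolation; a standalone mimic (names hypothetical):

```ruby
# perm: 0 = never show, 1 = always show, 2 = friends only
def show_lastname?(perm, friend)
  [false, true, friend][perm]
end

puts show_lastname?(0, true)   # => false
puts show_lastname?(1, false)  # => true
puts show_lastname?(2, true)   # => true
puts show_lastname?(2, false)  # => false
```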


Evil Trout
Nov 16, 2004

The evilest trout of them all

kalleth posted:

Datamapper is pretty awesome guys; yes it involved switching my rails 3.x site from authlogic to devise, but that was minimal :effort:. No migrations, define properties once, auto migrate/upgrade... utterly awesome for development. Thought you should know!

:ninja: edit - plus, its faster and much nicer to use than AR.

I spent almost a year working on a series of dating sites that used DataMapper, and let me tell you it was like pulling teeth. We had to make awful choices like sticking with a version that had known memory leaks or spending months upgrading to the latest version because they changed the API in huge awful ways.

Also, the killer feature of DM at the time, the Identity Map, doesn't work when you go across multiple databases.

If you find a bug in it, or have a question, good luck finding anyone to answer you. The IRC room would have 30 lurkers and nobody seemingly working on it or around at any time to answer questions.
