|
Chiming in to say that I'm quite experienced with Ruby and Rails after working with it full time for the last year, so I'll gladly answer any questions anyone might have. My project is a web-based role-playing game based on Internet culture. It's got over 100 models and uses a lot of awesome Ruby meta-programming tricks. It uses Haml for the templates and Sass for the stylesheets.

shopvac4christ posted:I've tried installing Typo to blog.mysite.com. So I'd have something like ~/sites/blog.mysite.com/public symlinked to ~/public_html/blog and a subdomain created which points to that directory. First I had to figure out the trailing slash problem (the difference between mysite.com/blog/ and mysite.com/blog) but now that that's working, going to blog.mysite.com doesn't work at all. There's got to be a way to fix this without resorting to site-specific routing stuff, right?

What web server are you using? Deployment changes drastically depending on that. For example, in Apache 2 I'd set up a name-based virtual host pointing at whatever directory you're running your Typo install out of (forget the symlink).
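As a rough sketch of what that virtual host could look like (the paths and hostname are placeholders, not from the original post, and this only covers the vhost itself, not how Apache talks to the Rails processes):

code:
# httpd.conf / sites-available -- name-based vhost pointing at Typo's public directory
NameVirtualHost *:80

<VirtualHost *:80>
  ServerName blog.mysite.com
  DocumentRoot /home/you/sites/blog.mysite.com/public
  <Directory /home/you/sites/blog.mysite.com/public>
    Options FollowSymLinks
    AllowOverride All
  </Directory>
</VirtualHost>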
|
# ¿ Sep 5, 2007 15:57 |
|
shopvac4christ posted:My host (Site5) is using Apache, but I'm not sure how much control I have over it.

Unfortunately, controlling which subdomains point where is really a web server configuration issue, nothing specific to Rails. If you can't touch Apache's configuration files, it will probably take some hackery outside of Rails to get it working.
|
# ¿ Sep 5, 2007 16:22 |
|
shopvac4christ posted:How would one deploy a common library to all sites on a server? I ask the question this way because I'm coming from ASP.NET, where if we updated something in a global utilities library, we could just copy the DLL to separate projects, or even better, import it server-wide. I'm guessing it has something to do with plugins and maybe using capistrano to update the plugins for each site in a special deployment, but beyond that I wouldn't have any ideas.

One of the great features of Subversion (which I assume you're using, since you mentioned Capistrano) is externals. An external lets you point part of your source tree at another source tree altogether. What I would do in this case is build a new tree for your plugin, then add it as an external in the plugins directory of each project. Then, whenever you do an svn update (through Capistrano or locally, whatever), you'll get the latest stable version of your plugin. I'm on edge Rails right now, and I have my vendor/rails directory set up as an svn external. Every time I do an svn update, it grabs the latest version of Rails for me, which is awesome. I also do it for Haml.
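Setting an external up is just a property on the parent directory; something like this (the repository URL and plugin name are placeholders):

code:
# Point vendor/plugins/shared_utils at the shared plugin's repository
svn propset svn:externals \
  "shared_utils http://svn.example.com/plugins/shared_utils/trunk" vendor/plugins
svn commit -m "Add shared plugin as an svn external" vendor/plugins
svn update   # pulls the external into vendor/plugins/shared_utils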
|
# ¿ Sep 8, 2007 18:46 |
|
Space Kimchi posted:If someone wants to waste their time on this crap, knock yourself out, I guess. I'll be over here making stuff that works and just worrying about the normal lovely timesinks involved in web dev, like cross-browser CSS compliance and having the w3c inspector not poo poo bricks when I send my site through it to please some dude in IRC who swears my problems with something unrelated has to do with not using & for link strings with an ampersand in them.

It's interesting, because most web developers I know just "get" the basics of Rails (ActiveRecord, controllers, views, RJS) when they first see it. The way it's organized and handles the database is pretty intuitive. I can't say the same for the REST stuff. It's often a head-scratcher -- why would I bother doing all that extra work when the end product is the same? There is an advantage, though: organization. One of the best things about a Rails app is that if you need to debug it, you know exactly which file to go to in which directory, since every app is organized the same way. The REST stuff takes that a little further, in that it names the majority of your controllers and their methods for you. You always call the same methods to insert, update, delete, etc. Additionally, you get a lot of cool "black magic" for finding the paths of things.
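For a concrete taste of that "black magic", here's a sketch of what a single resource route buys you (the Post model is just an example, not from the original discussion):

code:
# config/routes.rb
map.resources :posts

# The convention then expects PostsController to define
# index/show/new/create/edit/update/destroy, and you get path helpers for free:
#   posts_path             # => /posts
#   post_path(@post)       # => /posts/1
#   edit_post_path(@post)  # => /posts/1/edit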
|
# ¿ Sep 8, 2007 23:16 |
|
MrSaturn posted:This is a question so stupid, I can't believe I'm asking it, but:

If it's purely static, you can put it in public as people have mentioned, but if you want to do stuff like use layouts, an easy way is to create a controller: script/generate controller Contact. The controller doesn't need any methods. Even without actions defined, it'll look for a view matching whatever you request, so your above URL/file should work. I suspect you got the routing error because there was no contact controller.
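A minimal sketch of what you end up with (the /contact/about path is just an example, not from the original question):

code:
# app/controllers/contact_controller.rb
class ContactController < ApplicationController
  # No actions needed: with the default ':controller/:action' route,
  # a request to /contact/about renders app/views/contact/about.rhtml
  # inside your normal layout.
end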
|
# ¿ Sep 14, 2007 04:32 |
|
Hammertime posted:Oh, and assuming this is a pre-production system. Don't bother creating separate migration files for an alter column. It's easier to migrate back to version 0, change the table declaration to include the column, and rake it back to the latest version. Before you actually push the system live, the practice is to group all the migration files together into a single file (not including sample/starting data if you have any). Ideally there's a migration file for each version of the system that gets pushed into production; having separate migration files for development is just a temporary measure.

This can actually be dangerous unless you are the only person working on the project. If multiple people are checking out your source, going back and editing earlier migrations is sure to ruin their day. Just a warning for the new guys.
|
# ¿ Sep 24, 2007 15:01 |
|
Here's a script I created to add all new files to the repo. It's good for non-generated content like javascripts and stylesheets:

code:
#!/bin/sh
svn add `svn status | grep "^\?.*" | sed "s/^\? *//"`
|
# ¿ Sep 30, 2007 15:35 |
|
shopvac4christ posted:Right. But what I'm asking is why is there a deploy.rb file at all? Why aren't the deployment instructions completely independent of whether or not there's a copy of the source code checked out?

The way Capistrano works is that it tunnels through SSH and executes a series of instructions on a remote machine. It needs to be run FROM somewhere, usually a development machine that has a copy of the source locally. If it were a server process you could just notify it to deploy itself, sure. deploy.rb is the set of instructions you run locally, which in turn execute remote SSH commands.
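For reference, a bare-bones deploy.rb looks something like this (the names, URL, and paths are placeholders):

code:
# config/deploy.rb -- run locally with `cap deploy`
set :application, "myapp"
set :repository,  "http://svn.example.com/myapp/trunk"
set :deploy_to,   "/var/www/myapp"
set :user,        "deploy"

role :app, "server.example.com"
role :web, "server.example.com"
role :db,  "server.example.com", :primary => true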
|
# ¿ Oct 2, 2007 19:04 |
|
Hop Pocket posted:What is the rails equivalent of a Quartz scheduler? I want something that will execute periodic jobs on the development/production servers. I'd prefer to stay outside of the OS level cron and inside the application if possible in case our production server ends up being hosted on Windows.

I looked for something like this months ago and couldn't come up with anything, so my site uses cron for all of its jobs, calling script/runner so the code executes inside the Rails environment.
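A sketch of what those crontab entries look like (the job class and paths are hypothetical):

code:
# crontab -e
# Run a periodic task every hour inside the Rails environment
0 * * * * cd /var/www/myapp && ruby script/runner -e production "MaintenanceJob.run_hourly" >> log/cron.log 2>&1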
|
# ¿ Nov 12, 2007 18:49 |
|
SeventySeven posted:Embarrassingly retarded question... I can't believe I can't figure out the answer to this on my own. Imagine I've got something like the following: code:
|
# ¿ Nov 19, 2007 15:29 |
|
SeventySeven posted:Why didn't I think of that? I tried @posts.length and @posts.count.

Actually, @posts.length should work too. @posts.size is equivalent, so I'm not sure why that failed!
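For anyone following along, a quick sketch of how the three differ on an ActiveRecord association versus a plain array (general Rails behaviour, not code from the posts above):

code:
@user.posts.length  # loads the whole association into memory, then counts it
@user.posts.size    # uses the in-memory length if already loaded, otherwise runs a COUNT query
@user.posts.count   # always runs SELECT COUNT(*) against the database

# On a plain Array (e.g. the result of Post.find(:all)), length and size are aliases.
# Array#count only exists on Ruby 1.8.7 and later, which may explain the failure on an older Ruby.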
|
# ¿ Nov 19, 2007 15:47 |
|
zigb posted:- You still write SQL, for any sort of advanced find / sort. Not that I mind writing SQL, but I prefer to write it in one large understandable chunk instead of splitting it up into mini-clauses, like :conditions => "[some sql]", :order => "[some sql]", etc. Most find queries that required a JOIN or two had to be written using find_by_sql. Again, I like sql, but I feel like I'm cheating every time I use it in Rails.

This puzzles me. I just deployed an online RPG built on Rails with over 120 models, and I'd say I had to write manual SQL less than 1% of the time. Especially the part where you say a join or two -- what kind of joins are you doing that aren't handled elegantly by ActiveRecord?
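To illustrate, most one-or-two-table joins fall out of the association options rather than hand-written SQL; a sketch with made-up model and column names:

code:
# The JOIN comes from the associations; only the conditions mention SQL fragments
Post.find(:all,
  :include    => [:author, :comments],
  :conditions => ["posts.published = ? AND authors.banned = ?", true, false],
  :order      => "posts.created_at DESC",
  :limit      => 20)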
|
# ¿ Nov 20, 2007 15:12 |
|
Hop Pocket posted:For those of you using Mongrel in a deployment environment, how many Mongrel processes do you start? I know that it's all dependent on the application, load, # of static pages, etc. I'm setting up some hosting with Joyent, and you have to request Mongrel ports for a shared hosting environment, so I was just curious.

My site, Forumwarz, runs on 10 Mongrels behind an Nginx proxy. It's pretty snappy; each Mongrel process seems to use about 50-60 MB of RAM on a 2.4 GHz Core 2 Duo dedicated server.
|
# ¿ Nov 20, 2007 21:46 |
|
One thing to consider: if you use Haml for markup (which I do and love), it has its own puts-style helper methods you can use inside helpers, so your above code would work!
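A rough sketch of the idea -- the exact helper name depends on your Haml version (early releases exposed a puts helper; later ones call it haml_tag/haml_concat), and the flash example here is invented:

code:
# In a helper: write markup straight into the Haml output buffer
def flash_notice
  return if flash[:notice].blank?
  haml_tag :div, flash[:notice], :class => 'notice'
end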
|
# ¿ Nov 22, 2007 16:16 |
|
Hop Pocket posted:This might be more of a scriptaculous question, but I can't seem to get my animations that my RJS template return to execute serially. In other words, they seem to always run at the same time. Is there a way to get any visual_effect to run only after the previous one has finished?

This is something that has begun to annoy me lately. My game uses a lot of Scriptaculous, and the best way I've found to do things serially is the page.delay(seconds) command, which takes a block.
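The original snippet didn't survive the archive; here's a minimal sketch of the page.delay approach, with invented element IDs and timings:

code:
# update.rjs
page.visual_effect :fade, 'old_message', :duration => 0.5
page.delay(0.5) do
  page.replace_html 'message', 'Updated!'
  page.visual_effect :highlight, 'message'
end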
If you're coding in pure JavaScript, Scriptaculous lets you run callbacks once an animation has finished; however, there is no API for doing this from RJS.
|
# ¿ Dec 16, 2007 16:43 |
|
ikari posted:http://wiki.script.aculo.us/scriptaculous/show/EffectQueues

Interesting, I didn't know effect queues were supported by RJS! Having said that, it wouldn't solve the problem I had above, as updating an element isn't an animation, but it would solve the initial question.
|
# ¿ Dec 16, 2007 19:43 |
|
savetheclocktower posted:Effect.Event is a skeleton effect that allows arbitrary functions to be treated as effects for the purpose of queueing.

Yes, but I was talking about RJS. It's not hard to trigger an arbitrary function call for a visual effect when writing your own JavaScript; the issue is doing it from RJS. For example, you can say: code:
code:
My goal is to avoid writing external javascript files for simple things like fades and updates. I've found this technique, which involves using the << operator to write your own first-class functions, but it seems like a bit of a hack.
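For the curious, the << trick amounts to dropping down to raw JavaScript from inside the RJS template; a hedged sketch (the element IDs and the callback are invented for illustration):

code:
# something.rjs
page << <<-JS
  new Effect.Fade('comment_1', {
    afterFinish: function() { new Effect.Highlight('comment_list'); }
  });
JS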
|
# ¿ Dec 17, 2007 00:36 |
|
skidooer posted:How about something like this?

This is exactly the kind of thing I want (and think RJS needs). I'll take a look later and see how it works, thanks!

savetheclocktower posted:RJS isn't a leakproof abstraction. It's really just a way to place Ajax logic in your controller, where it makes sense to live. If you try to write everything in pure RJS, never touching JavaScript at all, you'll just end up uglifying your controllers.

Again, I'm aware you can call arbitrary JavaScript from RJS. I don't want to write everything in RJS, but there are many cases where you just want to chain a couple of animations onto an update. As for uglifying controllers: anything more than a couple of lines of RJS goes in a .rjs file in my view directory.
|
# ¿ Dec 17, 2007 16:16 |
|
Xgkkp posted:So I've used capistrano/mongrel to deploy and run, as the 'agile development with rails' book suggested. Does anyone know if:

There are many ways to deploy a Rails app, and how you want to do it has a lot to do with the hardware you have and the traffic you expect.

a) I run an Nginx web server in front of a pack of Mongrels. It sounds like you're using Apache to do the same thing. I personally prefer Nginx because it's less bulky, but the difference is minimal. Some benchmarks have shown the best performance coming from Apache 2 with FastCGI Rails processes.

b) It is possible, and worth it, to configure what you're talking about. I recommend a firewall on all public web servers. Basically, configure it so that it only accepts incoming traffic on the few ports you need (usually HTTP/HTTPS/SSH/DNS and maybe SMTP). If you do this, local connections (from Apache to Mongrel) will work fine, but people from the outside world will only be able to reach you through Apache. On Linux, iptables is a good place to start looking; see the sketch at the end of this post.

c) Mongrel has a page about how to configure it to start on boot: http://mongrel.rubyforge.org/docs/mongrel_cluster.html - mine is set up using the points in that document and it works great.
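The firewall setup from b) boils down to a handful of iptables rules; a rough sketch (the ports and default-drop policy are illustrative, so adjust for whatever services you actually run):

code:
# Allow loopback and established traffic, plus SSH/HTTP/HTTPS from anywhere
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
iptables -A INPUT -p tcp --dport 443 -j ACCEPT
# Everything else (including the Mongrel ports, e.g. 8000-8010) is dropped
iptables -A INPUT -j DROP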
|
# ¿ Dec 20, 2007 00:11 |
|
Hop Pocket posted:I do love slicehost, but I'll be damned if gem does not work very well on a 256 slice. Very memory intensive. Installing one gem can take hours. I don't need a 512 slice, but am considering getting one just to make gem usage more palatable.

Considering RubyGems is just a package management system, its resource usage is horrendous. Maybe it's doing some magic behind the scenes I'm unaware of, but simply updating the list of gems from the server can take up hundreds of megs of RAM.
|
# ¿ Feb 4, 2008 20:17 |
|
shopvac4christ posted:Okay, so since apparently no one is actually deploying Rails projects, I have another question

No, some of us have Rails projects in production. Forumwarz, my MMO, is deployed on a dedicated host; the stack is Nginx proxying to a pack of Mongrels. I don't have any recommendations for small projects, but if you're handling mid-sized traffic (right now we're doing 20 dynamic requests/sec), a dedicated server is quite affordable at most hosting companies. I just assumed that since you said you were interested in Phusion Passenger you were working on something smaller.
|
# ¿ Jun 30, 2008 03:14 |
|
savetheclocktower posted:Off-topic: Forumwarz is awesome. You guys do great stuff on the JavaScript side. Thanks, that's mostly due to Prototype and Scriptaculous making it so easy!
|
# ¿ Jul 2, 2008 19:45 |
|
Nolgthorn posted:Lets say I have about 20 tables that include more or less each a set of options for 20 different select boxes. These will change rarely, I have set up all the functionality required to display these select boxes and show the user's selected options somewhere all over a page someplace.

In your controller, cache the results of those database calls. If you're on Rails 2.1, caching is pretty easy to set up with a variety of stores (I can't recommend memcached enough): http://www.thewebfellas.com/blog/2008/6/9/rails-2-1-now-with-better-integrated-caching
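A minimal sketch of that pattern with the Rails 2.1 cache API (the Country model, cache key, and memcached address are placeholders):

code:
# config/environments/production.rb
config.cache_store = :mem_cache_store, 'localhost:11211'

# In the controller
def new
  @country_options = Rails.cache.fetch('country_options', :expires_in => 1.day) do
    Country.find(:all, :order => 'name').map { |c| [c.name, c.id] }
  end
end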
|
# ¿ Jul 18, 2008 15:31 |
|
In case anyone is interested, I just released Kawaii, which is kind of like a web front end for script/console, for those of you who, like me, hang out in a console all day while developing in Rails. It's open source, hosted on GitHub. Any feedback would be appreciated!
|
# ¿ Jul 20, 2008 21:59 |
|
bitprophet posted:So, um. Why would I want to use Kawaii instead of, you know. A terminal window? This reminds me of Campfire's reinvention of IRC because "it wasn't a Web site" (and then of course a client-side app, Pyro, to interface with the Web site, making it even more ...)

Oh, it's because when you type any command whose output runs longer than, say, two console lines, it becomes ridiculously hard to read. The same command's output is far easier to scan in Kawaii than dumped raw in script/console.
|
# ¿ Jul 21, 2008 01:27 |
|
j4on posted:Let's say you work in a "fast and dirty" coding environment (news website) where most projects go from design to publish in under a week, sometimes only one day, and usually involve only one coder. Maintenance on old code is rarely needed, but deadlines absolutely can not be missed: if breaking the MVC paradigm will save one hour, by God break it. For these kinds of small data-driven "mini-sites" is RoR still better than PHP? Or should I be looking to learn something else entirely?

I would say yes, Rails is absolutely still better than PHP. Once you get set up properly you can spin up and build new apps very quickly, and with Phusion Passenger you can deploy just as easily too.
|
# ¿ Jul 23, 2008 18:11 |
|
I recently switched Forumwarz back from Phusion Passenger to a pack of Mongrels. Passenger seemed to perform more or less the same from a memory standpoint, but I hated how I couldn't hot-deploy safely. With a pack of Mongrels, you can do a rolling restart so that your app never goes down while pushing updates (since Nginx can proxy to the other Mongrels). With Passenger, your only choice is to touch tmp/restart.txt, at which point it shuts down all processes and starts them up again, preventing any requests from completing for up to 30 seconds. My users definitely noticed that!
|
# ¿ Nov 18, 2008 17:13 |
|
Nolgthorn posted:I have an Entries model and I have a Votes model. Each Entry has many Votes and each time that an Entry is loaded, the associated Votes will also need to be loaded, is there a way I can use scope to automatically :include => :votes on each request?

In edge Rails, there's now default scoping: http://ryandaigle.com/articles/2008/11/18/what-s-new-in-edge-rails-default-scoping If you aren't running edge, you can create a named_scope that does it and make sure you always go through that named_scope when loading your entries.
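A quick sketch of the named_scope version (the model names come from the question above; the default_scope line is the edge alternative):

code:
class Entry < ActiveRecord::Base
  has_many :votes

  # On edge Rails you could instead write: default_scope :include => :votes
  named_scope :with_votes, :include => :votes
end

@entries = Entry.with_votes.find(:all)  # votes are eager-loaded alongside the entries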
|
# ¿ Dec 21, 2008 20:34 |
|
Nolgthorn posted:Is there a way to easily report at the bottom of the page on a development application the processing time, requests and speed of which the page loaded? I am getting tired of always looking over to my terminal window on every page request to see even just a summary of what's been happening.

You might like FiveRuns Tuneup. Personally, I've tried things like this but always end up just watching the console while optimizing.

Nolgthorn posted:Also another question; I understand that the development environment runs a bit differently than in production. Now that I've got my db requests more or less optimized for the time being I'm noticing that my bottleneck seems to be loading partials, for instance I am using a commenting system and each comment is a partial with another partial in it for the rating system. When there are 25 comments being loaded on the page each one loads at a speed of between 0.7-1.7ms for the rating and 1-3ms per comment, in comparison the full amount of db requests on such a complex page load maybe 0.3ms in absolute total.

Rails performs vastly differently in production mode. Templates are cached, so if you change one you have to restart the process. I have noticed performance problems with rendering too many partials in the past, but that was with hundreds of them; the overhead should be minor with 25. Do yourself a favor and enable production mode from time to time during development (just point the production entry in database.yml at the development database and it should work).
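The database.yml tweak for that production-mode check is tiny (the adapter and database name are whatever your development entry already uses):

code:
# config/database.yml
production:
  adapter:  mysql
  database: myapp_development   # temporarily reuse the development database
  username: root
  password:
  host:     localhost

Then start the app with something like script/server -e production to see it with cached templates and full caching behaviour.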
|
# ¿ Dec 29, 2008 01:55 |
|
Nolgthorn posted:I too don't like the way field_error_proc adds that horrible div around failing form elements, it's always messing up my layouts.

Why not just use CSS so it doesn't display as a block? If you give the div display: inline in your stylesheet, it's basically the same thing as a span.
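Concretely, assuming the default wrapper class name from that era of Rails (fieldWithErrors), it's a one-liner:

code:
.fieldWithErrors { display: inline; }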
|
# ¿ Dec 31, 2008 16:46 |
|
Pardot posted:Rails 2.3 is out. http://guides.rubyonrails.org/2_3_release_notes.html - surprisingly great post about all the new stuff.

A small correction: it's a release candidate, not an official release yet. http://weblog.rubyonrails.org/2009/2/1/rails-2-3-0-rc1-templates-engines-rack-metal-much-more
|
# ¿ Feb 2, 2009 21:43 |
|
chemosh6969 posted:That got me closer. I did script/generate model YEAR1, but when I tried to pull the first record it died from trying to pull it from table YEAR1s

The problem here is that Rails leans heavily on convention over configuration. In other words, it assumes that if a model is named Office, the table in the database will be named offices. That's less useful for existing databases, of course, but it isn't hard to work around. You can still generate a model called Year1 -- you'll just have to add a line to the year1.rb file: set_table_name "year1". See http://api.rubyonrails.org/classes/ActiveRecord/Base.html#M002231
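The whole model ends up looking like this (assuming your existing table really is named year1):

code:
# app/models/year1.rb
class Year1 < ActiveRecord::Base
  set_table_name "year1"   # override the inferred "year1s" table name
end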
|
# ¿ Apr 18, 2009 15:09 |
|
Jargon posted:I've been using Slicehost ($20 for an Ubuntu VPS with 256MB ram, enough to run a Rails site with light to moderate traffic), and it's a great value if you don't mind getting your hands dirty with a bare Linux box.

I just signed up and used you as a referrer. Forumwarz already runs on a cluster of three 8-core servers, but we're launching a new project that I wanted to partition away from our regular site, and Slicehost looks really good as a lightweight, simple host.
|
# ¿ May 14, 2009 16:15 |
|
GroceryBagHead posted:You don't know how much you hate Mongrel until you need to do a rolling restart on a cluster of 4 app servers with 4 processes on each while fighting with Monit so it doesn't kill newly created mongrel processes. Passenger is just a must now.

Unicorn is worth a look. You can do a rolling restart pretty much out of the box by sending the master process a USR2 signal. You also only need Monit to watch one process (the master), because the children are restarted before you can even notice they're down. I've run basically every Rails deployment system at one point or another, and it's my current favorite.
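In practice that restart is a one-liner (the pid file path depends on your unicorn.rb):

code:
# Spawns a replacement master next to the old one; the old master is then
# sent QUIT once the new workers are up (often automated via a before_fork hook).
kill -USR2 `cat /var/www/myapp/shared/pids/unicorn.pid`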
|
# ¿ Dec 29, 2009 21:50 |
|
bitprophet posted:Unicorn looks like it has some excellent ideas behind it, but practically speaking I don't see any obvious benefits it has over Passenger for the average deployment situation. E.g. a soft restart is as easy as touch-ing a file or doing /etc/init.d/apache2 reload. If you migrated to Unicorn from Passenger I'd be interested in hearing why

I only ran Passenger for about a month. My site handles millions of Rails requests a day, and I very quickly discovered something about Passenger that I didn't like.

One dark secret about Ruby is that long-running processes gain memory over time. Every major Rails deployment I know of uses some combination of Monit/God to watch each process and restart it when it grows past a certain threshold. You can and should minimize this by profiling your memory usage, but even a finely tuned app will still need process restarts from time to time.

Passenger has no built-in way to monitor the memory of its processes. It does have a mechanism to restart a process after a certain number of requests, though: PassengerMaxRequests. I've also seen people set up cron jobs that check memory from the command line and kill processes when they get too high, and that works too. The bad part is that when you restart a process (by sending it a terminating signal, touching tmp/restart.txt, or hitting something like PassengerMaxRequests), Passenger seems to stall all incoming requests while the new process spins up. In my app's case this could take 5-10 seconds, and that was noticeable and annoying.

So after a month of running Passenger I switched back to Mongrels and Monit. With Nginx in front, if one Mongrel was restarting, requests would just be proxied to another one and users wouldn't notice any interruption. One flaw of the Mongrel/Monit setup, as has been pointed out, is that hot restarts are quite difficult. In Unicorn, they're trivial. If you send a USR2 signal to the master process, it spins up a whole new copy of the application in the background and only kills the original once the new one is fully loaded. Unlike rolling restarts, there's no risk of people getting a mix of the old and new versions of your app while other Mongrels restart.

I've also found it performs a lot better, and the Nginx configuration is a lot simpler with a Unix domain socket: if I add or remove processes, the Nginx configuration stays the same. My Monit configuration is also a lot easier, since I only monitor the master process. Memory bloat is managed by a simple script that runs every 30 minutes.

Anyway, if your site is not receiving many requests per second, you might not run into these problems. For a medium-traffic site like my own, Passenger was a headache.
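For reference, the Nginx side with a Unix domain socket is about this small (the socket path is whatever your unicorn.rb binds to; everything here is a sketch, not my actual config):

code:
upstream app_server {
  server unix:/var/www/myapp/shared/sockets/unicorn.sock fail_timeout=0;
}

server {
  listen 80;
  server_name example.com;
  root /var/www/myapp/current/public;

  location / {
    proxy_set_header Host $http_host;
    proxy_redirect off;
    proxy_pass http://app_server;
  }
}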
|
# ¿ Dec 31, 2009 05:58 |
|
Hammertime posted:Anything in /app gets reloaded cleanly, anything in /lib /config or gem related needs a restart. I restart on route changes too, though not sure if that's required these days.

Actually, /lib hasn't needed a restart in at least a year's worth of Rails releases.
|
# ¿ Jan 12, 2010 16:21 |
|
NotShadowStar posted:script/console needs restarting if you look at it funny.

True. Fortunately there's an easy way to do it: just type reload!.
|
# ¿ Jan 14, 2010 16:38 |
|
bitprophet posted:1.9 is still way too new for serious deployment IMO, certainly at any outfit which has to maintain codebases that are more than a year or two old, and/or which deploys to servers not running on the bleeding edge.

I disagree. I've been running 1.9.1 in production for well over a year on a codebase that's 4 years old. The upgrade took maybe 3 days of work and resulted in a huge speed improvement (average of 2x as fast).
|
# ¿ May 31, 2010 00:29 |
|
Nolgthorn posted:In the application I'm working on before showing things like last names, email addresses, etc. I have it set up so that privacy settings are assessed that ensure a user actually wants their email address or last name to be visible. The problem is that often a whole unnecessary query that traverses backwards through the user model from within the privacy model is run: "self.user".

I had a similar issue once and I was able to do something like this (by the way, you're using the self keyword far more than you need to; it's usually unnecessary).
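The original snippet didn't survive the archive, so here's a hedged sketch of one common way to avoid that backwards query: pass in the user object you've already loaded instead of calling self.user (the model and method names below are invented for illustration):

code:
class PrivacySetting < ActiveRecord::Base
  belongs_to :user

  # Accept the already-loaded User instead of traversing back via self.user,
  # which would trigger a fresh SELECT.
  def visible_fields_for(user)
    fields = []
    fields << user.last_name if show_last_name?
    fields << user.email     if show_email?
    fields
  end
end

# Wherever @user is already in memory:
@user.privacy_setting.visible_fields_for(@user)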
|
# ¿ Jul 5, 2010 01:38 |
|
kalleth posted:Datamapper is pretty awesome guys; yes it involved switching my rails 3.x site from authlogic to devise, but that was minimal. No migrations, define properties once, auto migrate/upgrade... utterly awesome for development. Thought you should know!

I spent almost a year working on a series of dating sites that used DataMapper, and let me tell you, it was like pulling teeth. We had to make awful choices like sticking with a version that had known memory leaks, or spending months upgrading to the latest version because they changed the API in huge, awful ways. Also, the killer feature of DM at the time, the Identity Map, doesn't work when you go across multiple databases. And if you find a bug in it, or have a question, good luck finding anyone to answer you: the IRC room would have 30 lurkers and seemingly nobody working on the project or around to help.
|
# ¿ Jul 17, 2011 16:18 |