We have enough CPU-intensive tasks in our application that calculating them on the fly during the request-process-response cycle of the fcgi listeners is unacceptable performance-wise. As a solution, I've got a constantly running DRb process that the models within the Rails app call to obtain certain pre-calculated and/or cached values. Certain actions trigger the DRb process to recalculate values, but those work fine: they use threads so the fcgi responses don't slow down.

A couple of questions:

1) If anyone has deployed a Rails application with intense computation going on in the background (matching algorithms, constant score/average computation, etc.), contact me off-list, as I'd like to exchange notes. I'll summarize and report best practices back to the list.

2) If anyone has a scenario similar to the one I described above (our DRb processes are kept in SVN repos with the rest of the code), I'd be curious to hear how you deploy new versions of your app, how you keep the DRb processes deployed/updated and constantly running, whether all of that is automated, etc.

3) I've wrapped every call to the DRb process with error-catching code to make sure that if the process isn't there I don't get an application error -- but I'd also like something along the lines of a service manager and/or spinner/spawner setup to make sure my DRb process keeps running. Is there a good way to do that? How do people using the DRb session store handle this?
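For context, the call pattern I'm describing can be sketched roughly like this. The `ScoreCache` service, its URI, and `score_for` are hypothetical stand-ins for illustration, not our actual code; the relevant part is the rescue on `DRb::DRbConnError`, so a missing DRb process degrades to a fallback value instead of an application error:

```ruby
require 'drb/drb'

# Hypothetical pre-calculation service the DRb process would expose.
# In reality this would hold values computed in background threads.
class ScoreCache
  def initialize
    @scores = { 42 => 3.5 } # pretend these were expensive to compute
  end

  def score_for(id)
    @scores.fetch(id, 0.0)
  end
end

# Server side -- normally a separate, long-running process:
DRb.start_service('druby://127.0.0.1:9999', ScoreCache.new)

# Client side -- what a Rails model would call. If the DRb process
# is down, return a fallback rather than raising into the request.
def cached_score(id, fallback: 0.0)
  client = DRbObject.new_with_uri('druby://127.0.0.1:9999')
  client.score_for(id)
rescue DRb::DRbConnError
  fallback
end

puts cached_score(42)
```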
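On (3), absent a proper service manager, the crudest spinner-style watchdog I can imagine looks something like the sketch below -- the command string and restart cap are placeholders, and a real setup would sleep/back off between respawns and log each failure rather than silently looping:

```ruby
# Respawn a child command whenever it exits, up to a restart cap.
# Returns the number of restarts performed (handy for inspection).
def keep_running(cmd, max_restarts: 3)
  restarts = 0
  loop do
    pid = Process.spawn(cmd, out: File::NULL, err: File::NULL)
    Process.wait(pid)               # blocks until the child dies
    break if restarts >= max_restarts
    restarts += 1                   # a real watchdog would sleep here
  end
  restarts
end

puts keep_running('ruby -e ""')     # child exits at once; watchdog respawns it
```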