I'm currently working on a Ruby on Rails page that needs to query a web service, retrieve a few RSS feeds, and do a whole bunch of SQL queries. Right now, my code is doing all of this somewhat inefficiently. Ideally, I'd like to be able to do the HTTP requests on the page asynchronously if possible. However, I have yet to find any example code or documentation that shows how to do this. I saw the http-access2 library, though that seemed quite complicated compared to the open-uri method I'm using now. Any suggestions about how best to go about this? In the end, I really just need to have it work, and have it be fast.

--
Bob Aman
Bob-

How about this?

    require 'open-uri'   # gives Kernel#open the ability to fetch URLs

    def request_urls(urls = [])
      reqs = []
      urls.each do |url|
        reqs.push Thread.new { open(url) }
      end
      reqs.collect { |req| req.value }
    end

    responses = request_urls [ 'http://www.blahr.com/', 'http://www.news.com/' ]

This method would still require all the requests to be complete before you continued on to the next step of your application, but it would execute all of them (more or less) simultaneously. Otherwise, you could do something more complicated with a Queue, but it would take more lines of code.

Hope this helps,

Ben
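Ben's Queue alternative is not shown in the thread; what follows is only a hedged sketch of what such an approach might look like, with all names illustrative. A fixed pool of worker threads drains a Queue of URLs, which caps concurrency instead of spawning one thread per URL. The fetcher block is injectable purely so the sketch can be exercised without the network; with open-uri loaded, `URI.open` would be the real fetcher.

```ruby
require 'open-uri'

# Illustrative sketch: a fixed pool of worker threads pulls URLs off a
# Queue until it is drained, limiting how many requests run at once.
# The optional block replaces the real network fetch (handy for testing).
def request_urls_pooled(urls, workers: 4, &fetch)
  fetch ||= ->(url) { URI.open(url).read }   # real fetch via open-uri
  queue = Queue.new
  urls.each { |u| queue << u }
  results = {}
  mutex = Mutex.new                          # guards the shared results hash

  threads = Array.new(workers) do
    Thread.new do
      loop do
        url = begin
                queue.pop(true)              # non-blocking; raises when empty
              rescue ThreadError
                break                        # queue drained, worker exits
              end
        body = fetch.call(url)
        mutex.synchronize { results[url] = body }
      end
    end
  end

  threads.each(&:join)                       # wait for every worker to finish
  results                                    # url => response body
end
```

Like Ben's version, this still blocks until every request has finished; the difference is only that concurrency is bounded by `workers`.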
Typo requests Tadalist, Flickr and del.icio.us for syndication.

It uses a small cache which basically works like a hash that stops returning its content after a set amount of time (like 15 minutes), challenging the caller to get a new version of the syndication target and store it in the cache again.

Here is the class:
http://typo.leetsoft.com/trac.cgi/file/trunk/app/models/simple_cache.rb

Here are some usage examples:
http://typo.leetsoft.com/trac.cgi/file/trunk/app/helpers/application_helper.rb

Maybe this helps...

--
Tobi
http://www.snowdevil.ca - Snowboards that don't suck
http://www.hieraki.org - Open source book authoring
http://blog.leetsoft.com - Technical weblog
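For readers who do not follow the links above, a minimal sketch of the idea Tobi describes; the real SimpleCache class lives at the URL above and may differ, so the class and method names here are purely illustrative.

```ruby
# Illustrative sketch of a time-expiring cache: it behaves like a hash,
# but a read past the time-to-live returns nil, forcing the caller to
# re-fetch the syndication target and store it again.
class ExpiringCache
  def initialize(ttl_seconds)
    @ttl = ttl_seconds
    @entries = {}                # key => [value, stored_at]
  end

  def [](key)
    value, stored_at = @entries[key]
    return nil if stored_at.nil? || Time.now - stored_at > @ttl
    value
  end

  def []=(key, value)
    @entries[key] = [value, Time.now]
  end
end
```

A caller would then do something like `feed = cache[:flickr] || (cache[:flickr] = fetch_flickr_feed)`, where `fetch_flickr_feed` is a hypothetical fetcher.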
> This method would still require all the requests to be complete before
> you continued on to the next step of your application, but it would
> execute all of them (more or less) simultaneously. Otherwise, you
> could do something more complicated with a Queue, but it would take
> more lines of code.
>
> Hope this helps,
>
> Ben

Thanks! I think this is almost exactly what I was looking for. It'll require a bit of modification, but I think I can figure that out on my own.

> Typo requests Tadalist, Flickr and del.icio.us for syndication.
>
> It uses a small cache which basically works like a hash that stops
> returning its content after a set amount of time (like 15 minutes),
> challenging the caller to get a new version of the syndication target
> and store it in the cache again.

I already wrote a caching mechanism for the thing that works quite well. I borrowed the etag technique mentioned on RedHanded as well.

--
Bob Aman
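The ETag technique Bob refers to boils down to a conditional GET: remember the ETag header from the last fetch and send it back as If-None-Match, and a 304 Not Modified response means the cached body is still good and no feed data is re-transferred. A sketch using Net::HTTP follows; the `cached` hash and the injectable transport are illustrative conveniences, not part of any library (the transport exists only so the sketch can be exercised without the network).

```ruby
require 'net/http'
require 'uri'

# Illustrative sketch of a conditional GET with ETag validation.
# `cached` holds the last ETag and body; the optional transport block
# replaces the real HTTP round-trip (handy for testing).
def fetch_with_etag(url, cached = {}, &transport)
  transport ||= lambda do |uri, req|
    Net::HTTP.start(uri.host, uri.port, use_ssl: uri.scheme == 'https') do |http|
      http.request(req)
    end
  end

  uri = URI.parse(url)
  request = Net::HTTP::Get.new(uri.request_uri)
  request['If-None-Match'] = cached[:etag] if cached[:etag]

  response = transport.call(uri, request)
  if response.code == '304'
    cached[:body]                      # Not Modified: reuse the stored copy
  else
    cached[:etag] = response['ETag']   # remember the validator for next time
    cached[:body] = response.body
  end
end
```

On the first call the `cached` hash is empty, so a plain GET is sent; on later calls the saved ETag rides along and a 304 short-circuits the download.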
On Tue, 15 Mar 2005 11:45:59 -0700, Ben Schumacher <benschumacher@gmail.com> wrote:

> Bob-
>
> How about this?
>
>     def request_urls(urls = [])
>       reqs = []
>       urls.each do |url|
>         reqs.push Thread.new { open(url) }
>       end
>
>       reqs.collect { |req| req.value }
>     end
>
>     responses = request_urls [ 'http://www.blahr.com/', 'http://www.news.com/' ]

Just checked, and this does seem to work. Thanks!

--
Bob Aman