Hi all,

Does anyone have experience using Mechanize to scrape web pages? I've run into a problem using it to parse an enormous number of pages: every time I call agent.get to fetch a page, Mechanize keeps the page in its history, so the agent object grows and grows over time and eventually consumes all my memory (4 GB). Is there a solution to this?
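Roughly what my loop looks like (a minimal sketch; urls stands in for my real list of pages to fetch):

    require 'mechanize'

    agent = Mechanize.new

    urls.each do |url|
      page = agent.get(url)   # each fetched page seems to be retained by the agent
      # ... extract data from `page` ...
    end

I suspect something like agent.max_history = 1 (assuming Mechanize supports capping its page history) would stop old pages from piling up, but I haven't verified that. Has anyone tried it?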