My understanding was that using :post => true on a link_to() was supposed to prevent search engine crawlers from triggering the link. However, this does not seem to be working for me. Is there something else that I should be/can be doing to accomplish this? Thanks.

-Matt
On Sunday 16 Apr 2006 19:23, Belorion wrote:
> My understanding was that using the :post=>true on a link_to() was supposed
> to prevent search engine crawlers from triggering the link. However, this
> does not seem to be working for me. Is there something else that I should
> be/can be doing to accomplish this? Thanks.

Adding :post doesn't do anything other than insert dynamically generated (Javascript) <form> tags, so it won't do anything for non-Javascript clients such as web crawlers. From the Rails API:

"And a third for making the link do a POST request (instead of the regular GET) through a dynamically added form element that is instantly submitted. Note that if the user has turned off Javascript, the request will fall back on the GET. So its your responsibility to determine what the action should be once it arrives at the controller. The POST form is turned on by passing :post as true."

So basically, you need to check if the request is a GET, and if so most likely fall back to a second action which displays an actual form in order to call the first action again. In your controller...

  def destructive_action
    if request.post?
      # do some destructive action
    else
      redirect_to :action => "confirm_destruct"
    end
  end

... where confirm_destruct will be another action that displays an actual POST form which then goes on to call destructive_action again using the same parameters, but requiring an extra click of a form submission button.

HTH.

~Dave

--
Dave Silvester
Rent-A-Monkey Website Development
http://www.rentamonkey.com/
PGP Key: http://www.rentamonkey.com/pgpkey.asc
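[A rough sketch of what the confirm_destruct template could look like with Rails 1.x-era form helpers; the template name, the @item variable and the button label are assumptions, not anything from the thread:

  <%# confirm_destruct.rhtml: a plain POST form, so the destructive action only runs on an explicit submit %>
  <p>Are you sure you want to delete <%= @item.name %>?</p>
  <%= form_tag :action => "destructive_action", :id => @item %>
    <%= submit_tag "Yes, delete it" %>
  <%= end_form_tag %>

button_to "Delete", :action => "destructive_action", :id => @item would do much the same thing in a single helper call, since it also renders a small POST form rather than a plain link.]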
You can always use robots.txt to prevent search engines from indexing certain areas of your site:

http://en.wikipedia.org/wiki/Robots.txt

On 4/16/06, Belorion <belorion@gmail.com> wrote:
> My understanding was that using the :post=>true on a link_to() was
> supposed to prevent search engine crawlers from triggering the link.
> However, this does not seem to be working for me. Is there something else
> that I should be/can be doing to accomplish this? Thanks.
>
> -Matt
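[For example, a robots.txt served from the site root could keep crawlers away from a whole controller path; the /items/ prefix below is only a placeholder for whatever controller holds the destructive actions:

  User-agent: *
  Disallow: /items/

Note that robots.txt only keeps well-behaved crawlers out; it does nothing against other non-Javascript clients, so the request.post? check in the controller is still worth having.]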
Belorion <belorion@...> writes:
> My understanding was that using the :post=>true on a link_to() was supposed to
> prevent search engine crawlers from triggering the link. However, this does not
> seem to be working for me. Is there something else that I should be/can be
> doing to accomplish this? Thanks.
> -Matt

Google, Yahoo! and MSFT honor the rel="nofollow" attribute on a link. Here's some info:

http://googleblog.blogspot.com/2005/01/preventing-comment-spam.html

Also note that some crawlers are starting to parse and execute Javascript. We noticed that the Google crawler started executing code sitting behind Ajax.Updaters earlier this year.
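[In link_to terms, the attribute just goes in the html_options hash alongside :post; a minimal sketch, where the link text, action name and @item variable are illustrative assumptions:

  <%= link_to "Delete", { :action => "destroy", :id => @item },
              :post => true, :rel => "nofollow" %>

This keeps the Javascript POST behaviour for real users while asking the major crawlers not to follow the link.]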
I was wondering the opposite: if you have a 'single page' site model, are the crawlers smart enough to sift through the Javascript? And what about contextual ads: is there a way to trigger updates to these when substantial parts of the page content change?

--
Posted via http://www.ruby-forum.com/.