similar to: setting request headers via get()

Displaying 20 results from an estimated 500 matches similar to: "setting request headers via get()"

2006 Jul 28
2
s/IF_UNMODIFIED_SINCE/IF_MODIFIED_SINCE/ ???
I've noticed in the source code that Mongrel handles the IF_UNMODIFIED_SINCE header. I thought this rather odd, since IF_UNMODIFIED_SINCE is generally only useful with PUT. Even more odd is that there is no mention of IF_MODIFIED_SINCE in Mongrel, which is useful with GET (although not as useful as IF_NONE_MATCH, of course). Is this a bug in Mongrel? -Tim
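For reference, the conditional GET being discussed looks like this with plain net/http (URL and timestamp are made up; the server answers 304 Not Modified when nothing changed):

    require 'net/http'
    require 'time'

    uri = URI.parse('http://example.com/page.html')    # hypothetical URL
    last_fetch = Time.now - 3600                        # when we last saw the page

    Net::HTTP.start(uri.host, uri.port) do |http|
      req = Net::HTTP::Get.new(uri.path)
      req['If-Modified-Since'] = last_fetch.httpdate    # the GET-side conditional header
      res = http.request(req)
      puts res.code                                     # "304" if unchanged, "200" otherwise
    end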
2007 Jan 04
5
Help accessing http headers?
Hi, I'm using Mechanize, and I've developed a lot of code around it. I'd like to be able to check the ETag header during a get to see if the page has changed, as well as some other HTTP header information. Can I do that without hacking Mechanize myself? Does anyone have any examples of how to do this? William
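A minimal sketch of reading response headers from a Mechanize page, assuming a version where the page object exposes them as a hash via page.header (URL is hypothetical):

    require 'rubygems'
    require 'mechanize'

    agent = WWW::Mechanize.new                 # plain Mechanize.new on recent versions
    page  = agent.get('http://example.com/')   # hypothetical URL

    etag          = page.header['etag']            # response header keys are lowercase
    last_modified = page.header['last-modified']
    puts "ETag: #{etag.inspect}, Last-Modified: #{last_modified.inspect}"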
2008 Jan 16
3
setting headers in mechanize
Hi, a potential Mechanize user here. I've looked at the API but it's not clear what capacity there is to arbitrarily set the headers - does Mechanize allow for this? Mechanize is clearly great for web browsing, but I also need to construct GET and POST requests from scratch. I've looked at net/http, libcurl, and open-uri, but Mechanize seems simpler and more functional than
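As a sketch of what is being asked for: newer Mechanize releases let get take a hash of query parameters and a hash of arbitrary request headers (URL, parameters, and header values below are made up):

    require 'rubygems'
    require 'mechanize'

    agent = Mechanize.new
    # get(uri, parameters, referer, headers) on newer Mechanize versions
    page = agent.get('http://example.com/api',
                     { 'q' => 'ruby' },                  # query parameters
                     nil,                                # no referer
                     { 'Accept'           => 'application/xml',
                       'X-Requested-With' => 'XMLHttpRequest' })
    puts page.body[0, 200]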
2008 Jul 10
1
custom http headers in form.submit / upload without original form
Heyas :) I'm wondering how to send my custom HTTP headers when posting a form. agent.set_headers is a private method and I don't know how to get a reference to HTTPHeaders to use add_field and such. Since my request is a form, I'm sending it using agent.submit(form). Any hints? Bonus question: I would like to upload a file to a REST webservice, but I
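For what it's worth, later Mechanize versions accept a headers hash as the third argument to submit; a rough sketch under that assumption (URL and header name are made up), rather than a fix for the version in the post:

    require 'rubygems'
    require 'mechanize'

    agent = Mechanize.new
    page  = agent.get('http://example.com/upload')   # hypothetical form page
    form  = page.forms.first

    # submit(form, button = nil, headers = {}) on newer Mechanize
    result = agent.submit(form, nil, 'X-Custom-Header' => 'some-value')
    puts result.code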
2007 Jul 25
0
Being a polite client: maintaining history
Hi, folks. I'm investigating libraries to use in a rather specialized feed reader. Some of the sites I want to follow don't have RSS feeds (or have hopelessly broken feeds) so I was already planning on using Hpricot anyway -- Mechanize is looking good, here. In my research for my project, recipe 11.16 in O'Reilly's Ruby Cookbook references a website[1]
2007 Oct 07
1
How to store a Mechanize object in the database?
Hi, I am trying to save a Mechanize object in a database (using a Rails model), but the save operation throws a TypeError. Here "agent" is an instance of a Rails model and "user" is defined as a "text" column in the model. irb(main):039:0> agent.user = WWW::Mechanize.new #<WWW::Mechanize:0xb71295f0 @follow_meta_refresh=false, @key=nil,
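A Mechanize agent holds sockets, hooks, and parser state that do not serialize cleanly; a common alternative, sketched here, is to persist only the cookie jar using its own save/load methods (file name is made up, and exact method availability varies by Mechanize version):

    require 'rubygems'
    require 'mechanize'

    agent = WWW::Mechanize.new
    agent.get('http://example.com/login')        # hypothetical session setup

    agent.cookie_jar.save_as('cookies.yml')      # write the cookies out as YAML

    # Later, in a fresh process:
    agent2 = WWW::Mechanize.new
    agent2.cookie_jar.load('cookies.yml')        # restore the saved cookies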
2007 Sep 15
1
Apache 2.2.3 ETag weirdness.
I'm looking to clarify this entry I found in a changelog (around line 2062) on a CentOS 5 box for Apache 2.2.3: *) mod_include no longer allows an ETag header on 304 responses. PR 19355. [Geoffrey Young <geoff apache.org>, André Malo] Loading the mod_include module prevents any ETag headers from being sent from the box. If I comment out mod_include, ETags are sent as expected.
2008 May 06
0
Managing git submodules with git.rake
Hey all, If you're like me and use git submodules heavily (I vendor everything, and every plugin is a submodule), you might like to hear about code published this morning to make it easier to manage multiple git submodules in a shared-server environment. It's imaginatively titled 'git-rake', and it does Good Things like: * aggregates submodule commit logs into
2008 Mar 11
8
Mechanize#get vs Mechanize#fetch_page
So I found myself wanting to call Mechanize#get with a hash for arguments like this: WWW::Mechanize.new('http://api.flickr.com/services/rest/', {:method => 'flickr.auth.getFrob'... }) Granted, it looks like this isn't supported, but it led me to what looks like a bug: namely, that get calls fetch_page(abs_uri, request, cur_page, &block)
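For comparison, later Mechanize versions do accept a parameters hash on get, which covers the Flickr-style call (the API key below is a placeholder):

    require 'rubygems'
    require 'mechanize'

    agent = Mechanize.new
    # The hash is turned into the query string of the request
    page = agent.get('http://api.flickr.com/services/rest/',
                     :method  => 'flickr.auth.getFrob',
                     :api_key => 'YOUR_API_KEY')      # placeholder
    puts page.body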
2020 Jul 14
3
[PATCH nbdkit RFC 0/2] curl: Implement authorization scripts.
This is an RFC only; at the very least it lacks tests. This implements a rather complex new feature in nbdkit-curl-plugin allowing you to specify an external shell script that can be used to fetch an authorization token for services which require a token or cookie for access, especially if that token must be renewed periodically. The motivation can be seen in the changes to the docs in patch 2.
2007 Nov 12
3
Weird error downloading a gzip'ed file
Hi all, I've been using mechanize for a while and it rocks. Docs are pretty clear and so far I've been able to do it on my own. However, I'm stuck in a weird situation in a script to download my contact list from hotmail. I've used Firebug to check all urls, and tested it by hand while logged in via browser. Even in the script everything works well until the
2020 Jul 14
0
[PATCH nbdkit RFC 2/2] curl: Implement authorization scripts.
This rather complex feature solves a problem for certain web services that require a cookie or token for access, especially one which must be periodically renewed. For motivation on this see the included documentation, and item (1)(b) here: https://www.redhat.com/archives/libguestfs/2020-July/msg00069.html --- plugins/curl/nbdkit-curl-plugin.pod | 120 +++++++++++ plugins/curl/Makefile.am
2007 Feb 07
15
https with certificates
I poked around the web a little and didn't run across how to use https when it asks for certificate validation. I'm trying to connect to devices that don't have valid certificates, and in this case, I don't care whether they are valid or not. So when I use my browser to go to the site, Firefox asks me to allow the certificate, then one other question, then I get the
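One common answer with newer Mechanize versions is to turn off peer verification entirely, which fits the "I don't care" case but nothing security-sensitive; older WWW::Mechanize releases may not expose this setter (the URL is a made-up device address):

    require 'rubygems'
    require 'mechanize'
    require 'openssl'

    agent = Mechanize.new
    agent.verify_mode = OpenSSL::SSL::VERIFY_NONE    # skip certificate validation
    page = agent.get('https://192.168.1.10/')        # hypothetical self-signed device
    puts page.title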
2010 Oct 04
4
http caching a dynamic page
Is it possible to take advantage of HTTP caching/proxy caching with dynamic pages, i.e. pages with sections that can change over time while other parts of the page remain the same? I would rather not keep re-rendering the static parts, but I need to ensure the dynamic parts are rendered with fresh data. I am using memcached mostly as an object store so that I can minimize db hits, but I am still rendering a
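If a single freshness timestamp or cache key can be derived for the page, a conditional GET lets clients and proxies reuse their copy while the server still renders fresh data when something changed; a rough Rails sketch (model and attribute names are made up):

    class ArticlesController < ApplicationController
      def show
        @article = Article.find(params[:id])     # hypothetical model

        # Sends ETag/Last-Modified and answers 304 Not Modified when the
        # client's copy is still current; otherwise renders as usual.
        fresh_when :etag => @article, :last_modified => @article.updated_at
      end
    end

Fragment caching of the static partials is the usual complement to this.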
2007 Oct 05
3
basic_auth problem since 0.6.9
I have a site that I don't think "returns" a basic_auth request, but is able to use basic_auth. In the past on 0.6.8, I could use the following code: require 'rubygems' # gem 'mechanize', '=0.6.8' require 'mechanize' agent = WWW::Mechanize.new agent.basic_auth("username", "password")
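For comparison, a sketch of the 0.6.x call from the post alongside the add_auth form that later Mechanize versions use (credentials and URL are placeholders):

    require 'rubygems'
    require 'mechanize'

    # 0.6.x style, as in the post:
    agent = WWW::Mechanize.new
    agent.basic_auth('username', 'password')

    # Later Mechanize (2.x) style, with credentials scoped to a URI:
    #   agent = Mechanize.new
    #   agent.add_auth('http://example.com/', 'username', 'password')

    page = agent.get('http://example.com/protected')   # hypothetical protected URL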
2007 Feb 12
2
make check failure, internet.Rout.fail, Error in strsplit
I'm trying to build R on RedHat EL4. The compile went fine, but a make check ran into a problem and produced a file "internet.Rout.fail". Judging by the last part of that file, it was trying to run an R routine called "httpget" to retrieve the URL http://www.stats.ox.ac.uk/pub/datasets/csb/ch11b.dat. The precise error it encountered was: Error in
2012 Oct 02
4
Rails Default ETag Generation
How does Rails generate ETags by default? I've got config.action_controller.perform_caching set to true in production so that I can use page-level caching in a few specific places, but it seems that Rails is automatically setting ETags on *all* responses even though I'm not using fresh_when or the stale? helpers in any of my actions. How is Rails deciding to do this and how
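In Rails 3.x the automatic ETags usually come from the Rack::ETag middleware, which digests the already-rendered body (Rack::ConditionalGet then turns a matching If-None-Match into a 304); fresh_when and stale? are the opt-in alternative that can skip rendering entirely. A sketch of the opt-in form (model name is hypothetical):

    class PostsController < ApplicationController
      def show
        @post = Post.find(params[:id])     # hypothetical model

        # Opt-in conditional GET: sends a model-based ETag and answers
        # 304 Not Modified, skipping the render, when it still matches.
        if stale?(:etag => @post, :last_modified => @post.updated_at)
          respond_to do |format|
            format.html                    # normal rendering path
          end
        end
      end
    end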
2007 Mar 25
5
mechanize 0.6.6 Released
mechanize version 0.6.6 has been released! http://mechanize.rubyforge.org/ The Mechanize library is used for automating interaction with websites. Mechanize automatically stores and sends cookies, follows redirects, can follow links, and submit forms. Form fields can be populated and submitted. Mechanize also keeps track of the sites that you have visited as a history. Changes: =
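A minimal sketch of the browsing workflow described above, written against the newer post-1.0 API for brevity (site, link text, and field names are made up; 0.6.x spellings differ in places):

    require 'rubygems'
    require 'mechanize'

    agent = Mechanize.new
    page  = agent.get('http://example.com/login')     # hypothetical site

    form = page.forms.first                           # populate and submit a form
    form['username'] = 'someone'
    form['password'] = 'secret'
    page = agent.submit(form)

    link = page.links.find { |l| l.text == 'Inbox' }  # follow a link by its text
    page = link.click

    puts agent.history.size                           # visited pages are kept as history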
2009 Aug 29
3
DO NOT REPLY [Bug 6672] New: mtim.tv_nsec not used when reading time of a file
https://bugzilla.samba.org/show_bug.cgi?id=6672 Summary: mtim.tv_nsec not used when reading time of a file Product: rsync Version: 3.0.6 Platform: Other OS/Version: All Status: NEW Severity: major Priority: P3 Component: core AssignedTo: wayned at samba.org ReportedBy: antonio at
2007 Feb 26
1
some Mechanize objects never garbage collected?
Greetings, I'm using Mechanize to scrape dozens of pages and have noticed that the size of my Ruby process keeps growing. I set Mechanize.max_history to 0 with no effect on the memory use. I wrote a little test to show the objects left on the heap after mechanizing a single page and then doing a garbage collection. Sample list appended below. I can supply the test code if it helps.
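A rough version of the kind of test described: fetch one page, drop the agent, force a GC pass, and count the Mechanize page objects still reachable (old WWW:: namespace assumed; the URL is hypothetical, and a conservative GC may legitimately keep a few objects around):

    require 'rubygems'
    require 'mechanize'

    agent = WWW::Mechanize.new
    agent.max_history = 0                     # keep no pages in the agent's history
    agent.get('http://example.com/')          # hypothetical page

    agent = nil                               # drop our only reference
    GC.start

    count = 0
    ObjectSpace.each_object(WWW::Mechanize::Page) { |_| count += 1 }
    puts "Mechanize::Page objects still on the heap: #{count}"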