Hi,
The following is with the latest (and last) puppetmaster/puppetd on FC5.
I use puppet to configure a server and several workstations on my home
network. This includes managing several files of >5Mbyte on the workstations.
I have had problems recently with timeout errors on some of these files.
These errors seem consistent for a particular puppet setup/configuration, but
can change when the manifests or command line options change - earlier today
one file failed several times with puppetmaster --verbose but succeeded with
puppetmaster --debug.
The relevant parts of the puppetmaster and puppetd transcripts (both with
--debug) are included below. A pair of manifest files are attached -
* soundfonts.pp caused the failure in the transcripts
* tgz-etc.pp caused a failure with puppetmaster --verbose but not with
puppetmaster --debug
What's the best plan for avoiding the timeout errors - or is there a bug to
fix?
cheers
John Dubery
===================puppetmaster:
...
debug: Allowing authenticated client gateway(192.168.1.4) access to
fileserver.describe
debug: mount[soundfonts]: Describing
/soundfonts/Frank%20Wen/FluidR3122501.zip_expanded/FluidR3%20GS.SF2 for gateway
notice: Initializing values for
/mnt/server_raid/files_b/library/soundfonts/Frank
Wen/FluidR3122501.zip_expanded/FluidR3 GS.SF2
debug: /File[/mnt/server_raid/files_b/library/soundfonts/Frank
Wen/FluidR3122501.zip_expanded/FluidR3 GS.SF2]/checksum: Initializing checksum
hash
debug: /File[/mnt/server_raid/files_b/library/soundfonts/Frank
Wen/FluidR3122501.zip_expanded/FluidR3 GS.SF2]: Creating checksum
{md5}ac00a4343e86f1f734f4da7c8dcb4c83
debug: Allowing authenticated client
gateway(192.168.1.4) access to fileserver.describe
debug: mount[soundfonts]: Describing
/soundfonts/Frank%20Wen/FluidR3122501.zip_expanded/FluidR3%20GM.SF2 for gateway
notice: Initializing values for
/mnt/server_raid/files_b/library/soundfonts/Frank
Wen/FluidR3122501.zip_expanded/FluidR3 GM.SF2
debug: /File[/mnt/server_raid/files_b/library/soundfonts/Frank
Wen/FluidR3122501.zip_expanded/FluidR3 GM.SF2]/checksum: Initializing checksum
hash
debug: /File[/mnt/server_raid/files_b/library/soundfonts/Frank
Wen/FluidR3122501.zip_expanded/FluidR3 GM.SF2]: Creating checksum
{md5}4a198aca9c76bf81db5e0c9e873291f0
debug: Allowing authenticated client
gateway(192.168.1.4) access to fileserver.retrieve
info: mount[soundfonts]: Sending
/soundfonts/Frank%20Wen/FluidR3122501.zip_expanded/FluidR3%20GM.SF2 to gateway
===================puppetd:
...
debug: Calling fileserver.describe
debug: //File[/files/local/library/soundfonts/FluidR3 GM.sf2]: File does not
exist
debug: Calling fileserver.describe
debug: //File[/files/local/library/soundfonts/FluidR3 GM.sf2]: Changing ensure
debug: //File[/files/local/library/soundfonts/FluidR3 GM.sf2]: 1 change(s)
debug: //File[/files/local/library/soundfonts/FluidR3 GM.sf2]/ensure: setting
file (currently absent)
debug: Calling fileserver.retrieve
debug: Storing state
debug: Stored state in 16.26 seconds
debug: Creating default schedules
/usr/lib/ruby/1.8/timeout.rb:54:in `rbuf_fill': execution expired
(Timeout::Error)
from /usr/lib/ruby/1.8/timeout.rb:56:in `timeout'
from /usr/lib/ruby/1.8/timeout.rb:76:in `timeout'
from /usr/lib/ruby/1.8/net/protocol.rb:132:in `rbuf_fill'
from /usr/lib/ruby/1.8/net/protocol.rb:116:in `readuntil'
from /usr/lib/ruby/1.8/net/protocol.rb:126:in `readline'
from /usr/lib/ruby/1.8/net/http.rb:2017:in `read_status_line'
from /usr/lib/ruby/1.8/net/http.rb:2006:in `read_new'
from /usr/lib/ruby/1.8/net/http.rb:1047:in `request'
... 43 levels...
from /usr/lib/ruby/site_ruby/1.8/puppet/network/client/master.rb:311:in `run'
from /usr/lib/ruby/1.8/sync.rb:229:in `synchronize'
from /usr/lib/ruby/site_ruby/1.8/puppet/network/client/master.rb:297:in `run'
from /usr/sbin/puppetd:426
[root@gateway ~]#
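The Timeout::Error above comes out of Ruby's Net::HTTP read timeout (the rbuf_fill frames in the trace). As a standalone illustration of that mechanism - this is not Puppet's actual client code, and the host name and port are invented - the limit lives on the HTTP object:

```ruby
require 'net/http'

# Illustrative sketch only: host and port are made up, not from the transcript.
http = Net::HTTP.new('puppetmaster.example.com', 8140)

# Net::HTTP raises Timeout::Error from rbuf_fill when the server sends
# no data for read_timeout seconds (the default is 60).
http.read_timeout = 300
```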
_______________________________________________
Puppet-users mailing list
Puppet-users@madstop.com
https://mail.madstop.com/mailman/listinfo/puppet-users
On Sat, Jul 14, 2007 at 05:47:27PM +0100, John Dubery wrote:
> The following is with the latest (and last) puppetmaster/puppetd on FC5.
>
> I use puppet to configure a server and several workstations on my home
> network. This includes managing several files of >5Mbyte on the workstations.
>
> I have had problems recently with timeout errors on some of these files.
> These errors seem consistent for a particular puppet setup/configuration, but
> can change when the manifests or command line options change - earlier today
> one file failed several times with puppetmaster --verbose but succeeded with
> puppetmaster --debug.
[...]
> What's the best plan for avoiding the timeout errors - or is there a bug to
> fix?

It's a bug, caused by the inefficient way that files are copied at the
moment. Base64 encoding a large file on the server (which is necessary to
transfer it over XMLRPC) takes longer than the client is willing to wait.
The fix is to change the way that files are transferred to regular HTTP, but
nobody has stepped up and done the work yet. People just work around the
problem using http://reductivelabs.com/trac/puppet/wiki/DownloadFileRecipe.

- Matt

--
"Once one has achieved full endarkenment, one is happy to have an entirely
nonfunctional computer"
-- Steve VanDevender, ASR
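To put a rough number on that cost: Base64 output is 4/3 the size of its input before any XMLRPC or HTTP overhead, so a 6 Mbyte soundfont becomes roughly 8 Mbyte of request body that both ends must build and parse in memory. A quick standalone check (synthetic data, not one of the actual .SF2 files):

```ruby
require 'base64'

# Synthetic 6 Mbyte payload standing in for a large soundfont file.
payload = 'x' * (6 * 1024 * 1024)
encoded = Base64.encode64(payload)

# 4/3 expansion, plus a newline every 60 output characters.
ratio = encoded.bytesize.to_f / payload.bytesize
puts ratio.round(2)   # prints 1.36
```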
Thanks Matt,
but ... how can you then subscribe to the download - for example expanding
a compressed file when it changes, as in the following extract?
John
define tgz($filename, $source, $destination) {
    file { "/root/files_local/installers/tgz/$filename":
        source => $source,
        backup => false,
        group  => root,
        mode   => 775,
        owner  => root
    }
    file { $destination:
        ensure => directory,
        group  => root,
        mode   => 755,
        owner  => root
    }
    exec { "/bin/tar -xzf /root/files_local/installers/tgz/$filename -C $destination":
        subscribe   => File["/root/files_local/installers/tgz/$filename"],
        refreshonly => true,
        require     => File[$destination]
    }
}
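For reference, the define above would be instantiated along these lines - the resource title, filenames, and paths here are invented for illustration:

```puppet
tgz { "mytool":
    filename    => "mytool-1.0.tar.gz",
    source      => "puppet://puppetmaster/installers/mytool-1.0.tar.gz",
    destination => "/opt/mytool"
}
```

Each instance copies the tarball, creates the target directory, and re-runs the extraction whenever the copied file changes - which is exactly the subscribe relationship that an out-of-band download would lose.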
==========================================================================
Original message:

From: "Matthew Palmer" <mpalmer@hezmatt.org>
Sender: puppet-users-bounces@madstop.com
Reply to: "Puppet User Discussion" <puppet-users@madstop.com>
To: puppet-users@madstop.com
Date: Sun, 15 Jul 2007 09:21:03 +1000
Subject: Re: [Puppet-users] timeout error on file transfer
---------------------------------------------------------------------------
On Sat, Jul 14, 2007 at 05:47:27PM +0100, John Dubery wrote:
> The following is with the latest (and last) puppetmaster/puppetd on FC5.
>
> I use puppet to configure a server and several workstations on my home
> network. This includes managing several files of >5Mbyte on the workstations.
>
> I have had problems recently with timeout errors on some of these files.
> These errors seem consistent for a particular puppet setup/configuration, but
> can change when the manifests or command line options change - earlier today
> one file failed several times with puppetmaster --verbose but succeeded with
> puppetmaster --debug.
[...]
> What's the best plan for avoiding the timeout errors - or is there a bug to
> fix?
It's a bug, caused by the inefficient way that files are copied at the
moment. Base64 encoding a large file on the server (which is necessary to
transfer it over XMLRPC) takes longer than the client is willing to wait.
The fix is to change the way that files are transferred to regular HTTP, but
nobody has stepped up and done the work yet. People just work around the
problem using http://reductivelabs.com/trac/puppet/wiki/DownloadFileRecipe.
- Matt
--
"Once one has achieved full endarkenment, one is happy to have an entirely
nonfunctional computer"
-- Steve VanDevender, ASR
On Jul 19, 2007, at 1:53 PM, John Dubery wrote:
> but ... how can you then subscribe to the download - for example
> expanding
> a compressed file when it changes, as in the following extract?

You can't, which is why Puppet doesn't use rsync as a transport.

My major project in the next six weeks is switching from xmlrpc to REST, and
in the process I hope to enable prefetching on file transfers so that it's
one call per copy statement instead of one per file; both of these should
dramatically speed up file transfers, and you can expect them to be released
in the next couple of months.

--
You can't wait for inspiration. You have to go after it with a club.
-- Jack London

---------------------------------------------------------------------
Luke Kanies | http://reductivelabs.com | http://madstop.com