X4T
2009-Aug-11 12:26 UTC
[Puppet Users] possible Puppet Memory Leak? - maybe not the right Subject ;-)
Hello together,

I am trying to use Puppet to transfer a file (10 MB) through the Puppet
file service (puppet://server/files/filename) inside of a file directive.
The problem is that when the Puppet client starts to transfer the file,
puppetmasterd starts to consume all of the memory on my system, including
swap, until it finally dies. My question is: is there any file size
restriction inside the file transfer mechanism of puppetmasterd, and why
does puppetmasterd behave like this?
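For reference, the resource in question looks roughly like this (the
local target path here is just a placeholder):

file { "/opt/data/filename":
    source => "puppet://server/files/filename",
    owner  => "root",
    mode   => 644,
}

Best regards,

Sebastian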
Larry Ludwig
2009-Aug-11 12:35 UTC
[Puppet Users] Re: possible Puppet Memory Leak? - maybe not the right Subject ;-)
On Aug 11, 2009, at 8:26 AM, X4T wrote:

> Hello together,
>
> I am trying to use Puppet to transfer a file (10 MB) through the Puppet
> file service (puppet://server/files/filename) inside of a file
> directive. The problem is that when the Puppet client starts to
> transfer the file, puppetmasterd starts to consume all of the memory
> on my system, including swap, until it finally dies. My question is:
> is there any file size restriction inside the file transfer mechanism
> of puppetmasterd, and why does puppetmasterd behave like this?
>
> Best regards,
>
> Sebastian

Hi Sebastian,

With 0.24.8 your best bet is to move large files like this via other
means (i.e. rsync, ftp, nfs, etc.) rather than directly through Puppet.
With 0.25, which uses REST, this memory consumption issue will be much,
much smaller.
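For example, a rough sketch using exec with rsync (the server and paths
are just placeholders):

exec { "fetch-filename":
    command => "/usr/bin/rsync -a rsync://server/files/filename /var/tmp/filename",
    creates => "/var/tmp/filename",
}

Note that creates means this fetches only once; picking up later changes
to the file would need an onlyif check or similar.

-L

--
Larry Ludwig
Reductive Labs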
X4T
2009-Aug-11 12:43 UTC
[Puppet Users] Re: possible Puppet Memory Leak? - maybe not the right Subject ;-)
Hi Larry,

is it possible to do something like this:

file { ........
    ........
    source => "http://server/filename"
}

to make the Puppet client download the file through HTTP? Or do I need
to do it through "exec" (using wget) or something like that?

Regards,

Sebastian

On 11 Aug., 14:35, Larry Ludwig <la...@reductivelabs.com> wrote:
> Hi Sebastian,
>
> With 0.24.8 your best bet is to move large files like this via other
> means (i.e. rsync, ftp, nfs, etc.) rather than directly through
> Puppet. With 0.25, which uses REST, this memory consumption issue
> will be much, much smaller.
>
> -L
>
> --
> Larry Ludwig
> Reductive Labs
Larry Ludwig
2009-Aug-11 13:55 UTC
[Puppet Users] Re: possible Puppet Memory Leak? - maybe not the right Subject ;-)
On Aug 11, 2009, at 8:43 AM, X4T wrote:

> Hi Larry,
>
> is it possible to do something like this:
>
> file { ........
>     ........
>     source => "http://server/filename"
> }

At the moment, no, but you can do it via the exec type or create your
own resource.
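A minimal sketch with exec and wget (the URL and target path are just
placeholders):

exec { "download-filename":
    command => "/usr/bin/wget -q -O /var/tmp/filename http://server/filename",
    creates => "/var/tmp/filename",
}

The creates parameter keeps the download from running on every Puppet
run; the exec only fires while the target file is absent.

-L

--
Larry Ludwig
Reductive Labs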
Sven Mueller
2009-Aug-12 13:55 UTC
[Puppet Users] Re: possible Puppet Memory Leak? - maybe not the right Subject ;-)
Larry Ludwig schrieb:

> On Aug 11, 2009, at 8:43 AM, X4T wrote:
>
>> Hi Larry,
>>
>> is it possible to do something like this:
>>
>> file { ........
>>     ........
>>     source => "http://server/filename"
>> }
>
> At the moment, no, but you can do it via the exec type or create your
> own resource.

Here is what we did: we make the Puppet server available over HTTP and
provide a directory under /files which holds the large files. Then we
have a script that re-fetches a file if it changed on the server, using
an HTTP HEAD request to get the ETag header, which is (more or less)
guaranteed to change when the file changes. Here is that script:

=============================== cut here ==========================
#!/bin/bash
# re-fetches $DEST.tmp if needed (into the $FILECACHE directory)
# (re)creates/fetches $DEST.head and $DEST.head.tmp along the way

FILECACHE="${1}"
DEST="${FILECACHE}/${2}"
URL="${3}"

# bail out on errors
set -e

# fetch the ETag header for $URL
if ! curl -I "$URL" | grep ETag > "${DEST}.head.tmp"; then
    rm -f "${DEST}.head.tmp"
    exit 1
fi

# if there is nothing to compare with, remove the target file
if ! [ -f "${DEST}.head" ]; then
    if [ -f "${DEST}.tmp" ]; then
        rm -f "${DEST}.tmp"
    fi
else
    # else, if there are differences, remove it as well
    if ! diff "${DEST}.head" "${DEST}.head.tmp"; then
        if [ -f "${DEST}.tmp" ]; then
            rm -f "${DEST}.tmp"
        fi
    fi
fi

# at this point, the target file is either up to date according to the
# webserver, or we don't have it anymore
if ! [ -f "${DEST}.tmp" ]; then
    if wget -O "${DEST}.tmp" -N "$URL"; then
        # we successfully fetched the file, copy over the header
        cp "${DEST}.head.tmp" "${DEST}.head"
    else
        echo "Failed to fetch $URL" >&2
        exit 1
    fi
fi

exit 0
=============================== cut here ==========================

Here is the Puppet manifest that goes along with it. It needs
$bigfile_cache to be set to a directory that can be created with a
single file resource and that has enough disk space to hold two copies
of the downloaded files (at least the total size of all downloaded
files plus the size of the biggest one among them):

=============================== cut here ==========================
# define a few things common to all bigfiles (but installed only if
# the bigfile define is used)
@file { "$bigfile_cache":
    owner  => "root",
    mode   => 755,
    tag    => "puppet_filecache",
    ensure => directory,
}
@file { "/usr/sbin/refresh_bigfile":
    source => "puppet://$puppetmaster/files/bigfiles/refresh_bigfile",
    owner  => root,
    group  => root,
    mode   => 755,
    tag    => "puppet_filecache",
}

include virtual_packages

define bigfile ( $url, $destination="", $owner="root", $group="root",
                 $mode="644" ) {
    # gets a file from $url and puts it into $bigfile_cache/$destination
    # while providing a File resource for the latter. The download only
    # happens if no previous download was (completely) done.

    # make sure the cache directory, the helper script and curl are there
    File <| tag == "puppet_filecache" |>
    Package <| title == "curl.$architecture" |>

    if $destination == "" {
        $mydestination = $title
    } else {
        $mydestination = $destination
    }

    exec { "get_$mydestination.tmp":
        command => "/usr/sbin/refresh_bigfile '$bigfile_cache' '$mydestination' '$url'",
    }

    file { "$bigfile_cache/$mydestination":
        source  => "$bigfile_cache/$mydestination.tmp",
        require => Exec["get_$mydestination.tmp"],
        owner   => $owner,
        group   => $group,
        mode    => $mode,
    }

    exec { "cleanup_$mydestination":
        # remove the downloaded file and replace it with a
        # hardlink to the final destination
        command     => "rm $bigfile_cache/$mydestination.tmp; ln $bigfile_cache/$mydestination $bigfile_cache/$mydestination.tmp",
        path        => "/bin:/usr/bin",
        subscribe   => File["$bigfile_cache/$mydestination"],
        onlyif      => "test `stat -c %h $bigfile_cache/$mydestination` -lt 2",
        refreshonly => true,
    }
}
=============================== cut here ==========================

To act only when the downloaded file actually changed, subscribe to
File["$bigfile_cache/$title"], with $title being the title of your
bigfile{} stanza. Example:

=============================== cut here ==========================
bigfile { "test.tar.gz":
    url => "http://what.ever/files/test.tar.gz",
}
exec { "/bin/true":
    subscribe => File["$bigfile_cache/test.tar.gz"],
}
=============================== cut here ==========================

Hope this helps,

Sven