I'm using puppet (0.24, working on the 0.25 migration) to do rolling upgrades across our datacenter.

I'm running puppet as a daemon.

In order to change an application version, I modify a database, which in turn modifies the data that my puppet_node_classifier presents. I then ssh to the nodes that I want to upgrade and force a puppet run with:

    puppetd --server=foo --test --report

The problem I'm running into is that on a regular basis a node is already in the process of doing an update, and so I get back a message like this:

    Lock file /var/lib/puppet/state/puppetdlock exists; skipping catalog run

I can work around this by detecting that result and re-sshing into the node to run puppetd again, but that doesn't seem very elegant. What are other people doing to avoid this sort of situation?

Pete

You received this message because you are subscribed to the Google Groups "Puppet Users" group.
To post to this group, send email to puppet-users@googlegroups.com
To unsubscribe from this group, send email to puppet-users+unsubscribe@googlegroups.com
For more options, visit this group at http://groups.google.com/group/puppet-users?hl=en
Silviu Paragina
2009-Sep-15 21:02 UTC
[Puppet Users] Re: Preventing concurrent puppetd updates
The error message gives you the solution: check for the existence of /var/lib/puppet/state/puppetdlock.

My solution would be:

    invoke-rc.d puppet stop
    # or /etc/init.d/puppet stop, or whatever your distro uses
    while [ -f /var/lib/puppet/state/puppetdlock ]
    do
        sleep 1
    done

    # do your stuff

Silviu

On Tue, 15 Sep 2009 13:05:37 -0700, Pete Emerson <pemerson@gmail.com> wrote:
> [quoted text snipped]
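The wait loop above can be hardened with a timeout so a wedged puppetd run doesn't block the upgrade forever. A minimal sketch, assuming the 0.24 default lock path (the daemon should be stopped first with invoke-rc.d puppet stop or the local equivalent, which is elided here):

```shell
#!/bin/sh
# Wait for an in-flight puppetd run to release its lock, but give up
# after TIMEOUT seconds instead of blocking forever. The lock path is
# the 0.24 default; override PUPPET_LOCK if your layout differs.
LOCK="${PUPPET_LOCK:-/var/lib/puppet/state/puppetdlock}"
TIMEOUT="${TIMEOUT:-300}"

elapsed=0
while [ -f "$LOCK" ]; do
  if [ "$elapsed" -ge "$TIMEOUT" ]; then
    echo "timed out after ${TIMEOUT}s waiting for $LOCK" >&2
    exit 1
  fi
  sleep 1
  elapsed=$((elapsed + 1))
done
echo "lock released after ${elapsed}s"
```

Once this exits successfully, the forced puppetd run can proceed knowing no other run holds the lock.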
Silviu Paragina
2009-Sep-15 21:14 UTC
[Puppet Users] Re: Preventing concurrent puppetd updates
Now I realize that this is not so portable. You could try creating a simple .pp file and running it with puppet (not puppetd), which would essentially do the same thing.

Silviu

On Wed, 16 Sep 2009 00:02:07 +0300, Silviu Paragina <silviu@paragina.ro> wrote:
> [quoted text snipped]
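A sketch of that idea: write a throwaway manifest that stops the daemon via Puppet's own service type (so Puppet picks the right init mechanism per platform), then apply it with the standalone client. The manifest path is hypothetical, and the final puppet invocation is shown as an echo rather than a real run:

```shell
#!/bin/sh
# Generate a one-off manifest that stops the puppetd service using
# Puppet's service type, which abstracts over distro init systems.
# /tmp/stop-puppetd.pp is an illustrative path, not from the thread.
MANIFEST=/tmp/stop-puppetd.pp
cat > "$MANIFEST" <<'EOF'
service { "puppet":
  ensure => stopped,
}
EOF
# Apply it with the standalone client (dry run; drop the echo to
# execute for real on a box where puppet is installed):
echo "puppet $MANIFEST"
```

The wait-for-lock loop would still follow, since stopping the daemon does not interrupt a catalog run already in progress.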
Pete Emerson
2009-Sep-15 21:42 UTC
[Puppet Users] Re: Preventing concurrent puppetd updates
Silviu, I think it's a pretty good solution, though.

I'm actually contemplating writing a simple job scheduler that would eliminate this problem, but wanted to make sure that I'm not missing something obvious like a built-in queuing system or something like that.

On Sep 15, 2:14 pm, Silviu Paragina <sil...@paragina.ro> wrote:
> [quoted text snipped]
Pete Emerson
2009-Sep-17 14:14 UTC
[Puppet Users] Re: Preventing concurrent puppetd updates
With this solution, would I need to clear a cache? If I do two puppet runs right after each other, doesn't puppet cache the recipes for a period of time? If so, what do I need to do to wipe that local cache out?

Pete

On Tue, Sep 15, 2009 at 2:42 PM, Pete Emerson <pemerson@gmail.com> wrote:
> [quoted text snipped]
Pete Emerson
2009-Sep-17 15:12 UTC
[Puppet Users] Re: Preventing concurrent puppetd updates
Ah, in answer to my own question:

http://reductivelabs.com/trac/puppet/wiki/ConfigurationReference

--ignorecache should do the trick.

On Thu, Sep 17, 2009 at 7:14 AM, Pete Emerson <pemerson@gmail.com> wrote:
> [quoted text snipped]
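Putting the thread together, the forced run becomes the original command plus --ignorecache, so the freshly classified catalog is fetched rather than the locally cached one. A sketch, with "foo" being the placeholder master name from the original post and the command shown as a dry run:

```shell
#!/bin/sh
# Build the forced-run command: immediate one-shot run (--test),
# report back to the master (--report), and bypass the cached
# catalog (--ignorecache). "foo" is the thread's placeholder master.
SERVER=foo
CMD="puppetd --server=$SERVER --test --report --ignorecache"
echo "$CMD"   # drop the echo to execute for real
```

This would run after the daemon is stopped and the puppetdlock wait loop has completed, then the daemon can be restarted.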