I have a requirement where I want a fact to be stored in PuppetDB during the manifest run rather than during the initial fact-gathering phase. I know that I can, in my manifests, create a file in /etc/facter/facts.d, or I can write a Ruby script that is then distributed by pluginsync, but both of these methods only publish the fact during the initial phase of the puppet agent run. What I want is to set a fact during the manifest-application portion of the run and have it stored in PuppetDB without an additional puppet run.

For example, I want to use it to create an SSH key for the system and then immediately publish an ssh_root_key fact. Given the way it works at the moment, the fact would only be collected during the initial phase of the run, so two runs would be required before the fact was available and ready for other servers to pick up using puppetdbquery. Does anyone know if this can be forced in some way?

-- You received this message because you are subscribed to the Google Groups "Puppet Users" group. To unsubscribe from this group and stop receiving emails from it, send an email to puppet-users+unsubscribe@googlegroups.com. To post to this group, send email to puppet-users@googlegroups.com. Visit this group at http://groups.google.com/group/puppet-users. For more options, visit https://groups.google.com/groups/opt_out.
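The facts.d approach mentioned above can be sketched as an executable external fact. This is a minimal sketch, not confirmed from the thread: the fact name `ssh_root_key` and the key path are illustrative assumptions, and Facter runs such a script only at the start of the agent run, before the catalog is compiled.

```shell
#!/bin/sh
# Sketch of an executable external fact dropped into /etc/facter/facts.d/.
# Facter runs it during fact gathering and parses key=value output.
# The fact name and key path below are illustrative assumptions.
emit_ssh_root_key() {
    keyfile="$1"
    if [ -r "$keyfile" ]; then
        echo "ssh_root_key=$(cat "$keyfile")"
    fi
}

emit_ssh_root_key /root/.ssh/id_rsa.pub
```

If the key file does not yet exist when Facter runs, the script prints nothing and no fact is set, which is exactly the timing problem described above.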
Hi Paul,

Here's a diagram showing how the Puppet run process flows: http://www.aosabook.org/images/puppet/TimingDiagram.png

As you can see, Facter is run exactly once, before the catalog is created, and is not invoked again until the next run. I suppose you could have your SSH key resource notify the puppet service, which would subsequently trigger another run.
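That suggestion could look roughly like the following. This is a sketch only, with illustrative resource names and paths; note that notifying the agent service restarts the daemon (which runs again on startup) rather than cleanly scheduling a second run.

```puppet
# Sketch: generate a key once, then bounce the agent so its next
# startup run re-collects facts. Names and paths are illustrative.
exec { 'generate-root-ssh-key':
  command => '/usr/bin/ssh-keygen -t rsa -N "" -f /root/.ssh/id_rsa',
  creates => '/root/.ssh/id_rsa',
  notify  => Service['puppet'],
}

service { 'puppet':
  ensure => running,
}
```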
Hi Wolf,

Thanks for that diagram; that's incredibly helpful.

It seems a bit of an oversight not to allow facts to be updated during the manifest phase, since manifests make changes to the system and are therefore potentially modifying facts during their run. I might look into modifying the agent run so that facts can be updated during the manifest phase using a custom function of some description.

I was looking into calling exec to run "puppet facts upload", but for some reason this errors out during the Puppet run, even after auth.conf has been modified to allow the save call from nodes. It does work directly on the command line, though, which I assume means it is something to do with a lock or an environment variable that differs during the agent run.

On Monday, 7 October 2013 15:36:01 UTC+1, Wolf Noble wrote:
> as you can see, facter is run exactly once, before the catalog is created.
For anyone else wanting to do something similar: for now I've just used the postrun_command on the Puppet agents so that the facts are uploaded to the server once modifications have been made, e.g. in puppet.conf:

postrun_command = puppet facts upload

This re-uploads the facts once the puppet agent has run, giving anything querying PuppetDB the most up-to-date information. I really cannot understand for the life of me why this isn't the default functionality; it seems ludicrous that a tool designed to report facts back to the master would have out-of-date information once Puppet has modified the system.

On Tuesday, 8 October 2013 10:31:17 UTC+1, Paul Oyston wrote:
> I was looking into calling exec to use "puppet facts upload". But this for
> some reason errors out during the puppet run even after auth.conf has been
> modified to allow the save call from nodes.
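In puppet.conf that setting sits in the agent's section. A minimal fragment, assuming the standard config path for an open source agent of that era and that `puppet` is on the agent's PATH (both worth verifying locally):

```ini
# /etc/puppet/puppet.conf on each agent (path may differ by platform)
[agent]
postrun_command = puppet facts upload
```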
On Tuesday, October 8, 2013 4:31:17 AM UTC-5, Paul Oyston wrote:
> It seems a bit of an oversight not to allow facts to be updated during the
> manifest phase since manifests are making changes to the system and
> therefore potentially modifying facts during their run.

You need to better understand the operational model. Puppet evaluates the manifests provided to it in light of a target node identity and a set of facts to compute, all in advance, the details of the desired state of the target node. The result is packaged in a compiled form -- a catalog -- and handed off to the client-side Puppet runtime for those computed state details to be ensured.

Puppet uses fact values only during catalog compilation, before it changes anything about the target node. They can be interpolated into resource properties, such as file names or content, but their identity is thereby lost. Changing or setting fact values in PuppetDB during catalog application will not have any effect on the catalog application process, except that in principle you could apply Exec resources that perform PuppetDB queries and do something dependent on the result.

> I might look into modifying the agent run so that facts can be updated
> during the manifest phase using a custom function of some description.

Puppet functions run during catalog compilation, not during catalog application, so that particular approach wouldn't work. Even if you could make it work, you could not thereby modify the catalog during its application. These are not the hooks you're looking for.

John
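The point about interpolation can be illustrated with a trivial resource. This sketch uses Puppet 3-era `$::kernel` syntax; once the catalog is compiled on the master, the file content is a literal string and the fact's identity is gone.

```puppet
# The $::kernel fact is resolved at compile time on the master; the
# catalog shipped to the agent contains only the resulting string.
file { '/etc/motd':
  ensure  => file,
  content => "Running kernel: ${::kernel}\n",
}
```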
I'm really only after an up-to-date set of facts once the Puppet agent has finished making changes to the system; I'm not wanting to modify the catalog run by changing Facter values mid-run. I'm aware that the facts are evaluated and the manifests compiled at the beginning of the agent run. It's simply a matter of PuppetDB not containing the most up-to-date facts once that catalog has been applied.

In the simplest possible example, you might push a fact using pluginsync that checks whether mongo is installed. At the start of the agent run the fact is pulled in and evaluates to false. The manifests then make changes to that node, installing mongo. The agent finishes; if you ran the fact on the system now it would return true, but the puppet master still holds false because the fact was only evaluated at the start. You then can't rely on the PuppetDB information until the agent runs again, which could be another hour from the first run, and then another period of time before other nodes pick up on this new fact value.

What you explained about using Exec (in actuality I'm querying PuppetDB using puppetdbquery) is exactly what I want to do: have nodes perform actions based on the facts about other nodes stored in PuppetDB.

As I say, though, I've already got a workaround by having a postrun script that updates the facts using "puppet facts upload"; I just need to do some additional checking to see whether there were actually changes in the run, and only update the facts if there were.

On Tuesday, 8 October 2013 20:27:34 UTC+1, jcbollinger wrote:
> Puppet uses fact values only during catalog compilation, before it changes
> anything about the target node.
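The "only upload when something changed" check could consult the agent's last_run_summary.yaml. A minimal sketch, assuming the stock summary layout and state path of a 2013-era open source agent; both the path and the YAML structure are assumptions to verify locally.

```shell
#!/bin/sh
# Postrun sketch: upload facts only if the last run changed any resources.
# Scrapes the "changed:" counter under the "resources:" section of
# last_run_summary.yaml; the path and layout are assumptions to verify.
changed_resources() {
    summary="$1"
    awk '/^resources:/ { in_r = 1; next }
         /^[A-Za-z]/   { in_r = 0 }
         in_r && /changed:/ { print $2 }' "$summary"
}

SUMMARY="${1:-/var/lib/puppet/state/last_run_summary.yaml}"
if [ -r "$SUMMARY" ] && [ "$(changed_resources "$SUMMARY")" != "0" ]; then
    puppet facts upload
fi
```

Pointed at by postrun_command, this keeps the facts in PuppetDB current after runs that changed the node, while leaving no-op runs alone.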
On Tuesday, October 8, 2013 6:08:56 PM UTC-5, Paul Oyston wrote:
> As I say though, I've already provided a workaround by having a postrun
> script that updates the facts using "puppet facts upload".

I still say you are trying to put the recorded facts to a use for which they were not intended and for which they are not well suited. The recorded facts necessarily capture a snapshot of fact values for the target node, and all you're doing is changing the time point of that snapshot.
On most machines, most of the time, node facts will be the same after a catalog run as they were before, except for those that change continuously (e.g. uptime). And that brings me to my next point: to the extent that there is a risk of fact values changing during a Puppet run, you cannot rely on them remaining constant *between* Puppet runs, either.

The normal interpretation of the node facts stored in PuppetDB is "these are the facts that informed compilation of the node's latest catalog". What you are describing changes that meaning, which presents a moderate risk, given that PuppetDB is primarily an internal service for Puppet. Among other things, you may cause the master occasionally to serve up a stored catalog when it really needed to compile a fresh one.

John