In an effort to track catalog changes, we did the following:
* Make agents archive their cached catalog using the postrun_command
configuration
* Dump the certname_catalogs table periodically
* Compare the catalog similarity hash with the previous state
* When a hash changed, check the host's reports to see whether the change
seems justified
* If it does not seem justified, grab the archived catalogs from the host
* Use the puppet catalog diff tool (
https://github.com/ripienaar/puppet-catalog-diff ) to compare the archived
catalogs and look for the differences
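To make the periodic-dump step concrete, here is a minimal sketch of how we compare two snapshots of the certname_catalogs table. It assumes each dump has been loaded into a dict mapping certname to catalog hash; the function name and dump format are our own, not anything PuppetDB provides.

```python
# Compare two periodic dumps of the certname_catalogs table
# (assumed to be loaded as {certname: catalog_hash} dicts) and
# report the hosts whose catalog hash changed between snapshots.

def changed_catalogs(previous, current):
    """Return certnames present in both dumps whose hash differs."""
    return sorted(
        name for name, h in current.items()
        if name in previous and previous[name] != h
    )

previous = {"web01": "abc123", "db01": "def456"}
current = {"web01": "abc123", "db01": "999aaa"}
print(changed_catalogs(previous, current))  # ['db01']
```

Hosts reported here are the ones whose archived catalogs we then feed to the catalog diff tool.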
I don't know whether the logic in the method above is flawed; if so, please
correct me.
We see some hosts with changing catalog hashes that have no activity in
their reports, and no difference in their archived catalogs. We're no
Clojure experts here, but it looks like the similarity hash is made from
the certname, the resources, and the edges. We thought any change in these
should also be reflected in the cached catalogs on the agents, and the
changes should be visible with the puppet catalog diff tool.
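For what it's worth, here is an illustrative stand-in for how we understand the similarity hash to work: a digest over the certname, resources, and edges. This is our own sketch, not PuppetDB's actual Clojure implementation, but it shows why normalization matters: if the real hash sorts its inputs, a mere reordering of identical resources should not change it, whereas an unsorted hash would change with no visible catalog diff.

```python
import hashlib
import json

def similarity_hash(certname, resources, edges):
    # Illustrative only: digest over certname, resources, and edges.
    # Sorting the collections makes the hash insensitive to ordering;
    # without the sort, reordering otherwise identical resources
    # would produce a new hash even though nothing really changed.
    payload = json.dumps(
        {"certname": certname,
         "resources": sorted(resources, key=json.dumps),
         "edges": sorted(edges, key=json.dumps)},
        sort_keys=True)
    return hashlib.sha1(payload.encode()).hexdigest()

res = [{"type": "File", "title": "/etc/motd"},
       {"type": "Service", "title": "sshd"}]
edges = [{"source": "File[/etc/motd]", "target": "Service[sshd]"}]

h1 = similarity_hash("web01", res, edges)
h2 = similarity_hash("web01", list(reversed(res)), edges)
print(h1 == h2)  # True: reordering alone does not change the hash
```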
Are we on the wrong track? How else could we track down what is causing the
catalog hash changes in PuppetDB?
Cheers,
ak0ska
On Friday, July 12, 2013 1:30:20 PM UTC+2, ak0ska wrote:
>
> Hello,
>
> We have some performance issues with PuppetDB and I believe our low
> catalog duplication rate is partly responsible (7.5% atm). I would like to
> understand this problem better, and ask what's the best way to track down
> catalogs that change too often.
> I assumed the catalog hash (in certname_catalogs for example) shows
> whether two catalogs are dupes or not, is that correct? So when puppet
> runs, and there is a new hash associated with the given host in
> cername_catalogs, that means there was a change in configuration, and the
> old one is flushed, all its resources will be wiped when GC runs.
> If the above is correct, then what's the best way to monitor the changes
> in the catalog when the hash changes? Our agents send reports to Foreman,
> and I thought it's enough to look for reports where the number of applied
> resources is continuously greater than zero, but I found hosts where the
> applied value is 0, the skipped value is greater than zero, and the catalog
> hash changes after these runs. Does that mean that skipped steps could
> also count as a catalog change?
> Can PuppetDB's experimental report feature be used to easily track down
> these changes?
>
--
You received this message because you are subscribed to the Google Groups
"Puppet Users" group.