Galed Friedmann
2011-Sep-01 11:39 UTC
[Puppet Users] Managing dynamic instances with puppet
Hello,

My organization is currently running a complete production environment on Amazon EC2, and I'm now trying to implement some automation and scaling with Puppet. I have several instances that I want to be almost fully automatic: more instances come up whenever they're needed, and they're stopped when they're not. This should eventually be as automatic as possible; I don't want to know or care when they come up or go down, and I especially don't want to configure anything manually when that happens.

I currently have a nice Puppet configuration: when a node comes up it gets its entire configuration from Puppet, and exports several of its resources to remote nodes (such as Nagios checks, and entries for other instances' /etc/hosts files using the Host resource).

This is working fairly well. What I'm still not sure how to handle is node deletion. I want a way for a node's exported resources to disappear from the remote instances when the node goes down (meaning Nagios will stop monitoring that host, and its Host entry will be deleted from the remote server).

The only way I can see to do this is to run a cron job on the master that purges the exported-resources database every once in a while, and to use purging on the clients to remove resources that are no longer managed. While this sounds reasonable, it scares me a bit for several reasons:

- The master will need to purge the DB around the same time the nodes check their manifests again (so I'll have an up-to-date DB all the time).
- I also have several unmanaged resources (like other Nagios checks that I'm not currently managing through Puppet). Will purging Nagios resources cause ALL existing checks that are not managed by Puppet to disappear?

Has anyone ever dealt with this kind of dilemma? Are there any other best practices for this?

I'd really appreciate the help!

Thanks,
Galed.
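[For readers unfamiliar with the export/collect pattern described above, here is a minimal sketch; the host template and file paths are illustrative assumptions, not Galed's actual manifests:]

    # On every node: export a Nagios check and a Host entry describing
    # this machine. The @@ prefix marks the resources as exported, so
    # they are stored on the master rather than applied locally.
    @@nagios_host { $fqdn:
      ensure  => present,
      address => $ipaddress,
      use     => 'generic-host',  # assumed Nagios host template
    }

    @@host { $fqdn:
      ensure => present,
      ip     => $ipaddress,
    }

    # On the Nagios server: collect every exported nagios_host.
    Nagios_host <<| |>>

    # On any node that needs to resolve its peers: collect Host entries.
    Host <<| |>>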
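[On the purge worry: the resources metatype removes every instance of a type that Puppet can discover but does not manage, so it needs care. A hedged sketch of the usual mitigation follows; the paths are illustrative, and it assumes, as the parsedfile-based Nagios providers have historically behaved, that purging only considers the provider's default target file, so hand-written checks kept in other files are left alone:]

    # On the Nagios server: collect exported checks into a dedicated
    # file, clearly separated from hand-maintained configuration.
    Nagios_host <<| |>> {
      target => '/etc/nagios/conf.d/puppet_hosts.cfg',  # illustrative path
    }

    # Purge Host entries no longer exported by any live node.
    # CAUTION: this removes EVERY /etc/hosts entry Puppet does not
    # manage, including localhost, unless those entries are declared too.
    resources { 'host':
      purge => true,
    }

    host { 'localhost':
      ensure       => present,
      ip           => '127.0.0.1',
      host_aliases => ['localhost.localdomain'],
    }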
On 2011 9 1 18:44, "Galed Friedmann" <galed.friedmann@onavo.com> wrote:

> [full quote of the original message trimmed]

Yes, you can use the Puppet report status or the last compile time. I currently implemented it via the Foreman API instead of stored configs, but the principle should be the same.

Ohad
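[A minimal sketch of that approach on the master, assuming a hypothetical helper script that asks the Foreman API (or reads the report files) for nodes whose last report is older than some threshold and removes each one's stored configs — for example with "puppet node clean --unexport <certname>", if your Puppet version ships that face. The script name, path, and schedule are all assumptions:]

    # Run the cleanup every 30 minutes, roughly in step with the
    # agents' default run interval, so the stored-configs DB stays
    # current and clients purge stale entries on their next run.
    cron { 'purge-stale-nodes':
      ensure  => present,
      command => '/usr/local/bin/purge_stale_nodes.sh',  # hypothetical script
      user    => 'root',
      minute  => '*/30',
    }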