I just started digging into puppet and it looks like puppet is using a
pull model. You have a master server and clients talk to it to get
config info.

Is anyone out there using a push model? If not, why not? Are there
security reasons you would use one over the other?

It seems that cfengine also uses a pull model, so I wondered if this is
a "standard" or if there are specific reasons for this approach.

This is all fairly new to me, so I'm just getting my feet wet. Thanks
for any pointers.
Greetings,

> I just started digging into puppet and it looks like puppet is using
> a pull model. You have a master server and clients talk to it to get
> config info.
>
> Is anyone out there using a push model? If not, why not? Are there
> security reasons you would use one over the other?

I firmly believe that you have to do both in the real world. Most of
the time, the clients are just fine pulling from a master source, but
on occasion you do have to push to the end nodes quickly and have them
do something.

There is a bundled utility called puppetrun which triggers the pull
from the clients. Running as a user with the privileges to read the
master's CA certificates, it'll take --host <hostname> as an argument
to trigger a puppet run. I personally have it wrapped like:

#!/bin/sh
/foo/bin/puppetrun --confdir=/foo/conf/puppet --ignoreschedules $*

for those times I need to run something *now*. This isn't "push" per
se, but it does the right thing in triggering the behavior.

> It seems that cfengine also uses a pull model, so I wondered if this
> is a "standard" or if there are specific reasons for this approach.

I think it's "six of one..." I'm convinced you need a hybrid approach
where *most* of the time you'll just want the clients to pull.

I'd like to see a tiered puppetmaster setup (for geographically
separate data centers or just for pure load):

              |- local puppetmaster -- clientfoo0 ... clientfooN
puppetmaster -|
              |- local puppetmaster -- clientfoo0 ... clientfooN

It's doable for file serving, but I think there should be a way for a
change made on the top-level puppetmaster to propagate down across N
puppetmasters. Then just balance (DNS, Linux Virtual Server, BalanceNG,
etc.) the "close" puppetmasters.

I thought about using rsync on a cron job to sync up every couple of
minutes, which triggers a restart of the local puppetmaster.

> This is all fairly new to me, so I'm just getting my feet wet.

Same here :-)

Cheers,
Ryan
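[A sketch of that cron-plus-rsync idea; every path, hostname, and the
restart command here is an assumption, not Ryan's actual setup:

# /etc/cron.d/puppet-sync on each local puppetmaster.
# Every five minutes, mirror the manifests from the top-level master;
# rsync -i itemizes changes, so the local puppetmaster is only
# restarted when something actually changed.
*/5 * * * * root rsync -azi --delete master.example.com:/etc/puppet/ /etc/puppet/ | grep -q . && /etc/init.d/puppetmaster restart
]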
Quoting Ryan Dooley <rd@powerset.com>:

> I firmly believe that you have to do both in the real world. Most of
> the time, the clients are just fine pulling from a master source, but
> on occasion you do have to push to the end nodes quickly and have
> them do something.
>
> There is a bundled utility called puppetrun which triggers the pull
> from the clients. Running as a user with the privileges to read the
> master's CA certificates, it'll take --host <hostname> as an argument
> to trigger a puppet run.

This is still a pull model from my perspective. The idea of a push
model in my mind is daemons running on the clients as unprivileged
users, then some sort of cron job running to process what has come
across. Separating the command from the execution is required in our
situation.

> I personally have it wrapped like:
>
> #!/bin/sh
> /foo/bin/puppetrun --confdir=/foo/conf/puppet --ignoreschedules $*
>
> for those times I need to run something *now*.
>
> This isn't "push" per se, but it does the right thing in triggering
> the behavior.

I'll still give this a try.

> I think it's "six of one..." I'm convinced you need a hybrid approach
> where *most* of the time you'll just want the clients to pull.
>
> I'd like to see a tiered puppetmaster setup (for geographically
> separate data centers or just for pure load):
>
>               |- local puppetmaster -- clientfoo0 ... clientfooN
> puppetmaster -|
>               |- local puppetmaster -- clientfoo0 ... clientfooN
>
> It's doable for file serving, but I think there should be a way for a
> change made on the top-level puppetmaster to propagate down across N
> puppetmasters. Then just balance (DNS, Linux Virtual Server,
> BalanceNG, etc.) the "close" puppetmasters.

This is a neat idea. Why couldn't your remote puppetmasters be puppet
clients to the super master? Wouldn't that work like you're talking
about?

> I thought about using rsync on a cron job to sync up every couple of
> minutes, which triggers a restart of the local puppetmaster.

This seems like it would also be easy to set up.

Thanks for the response.

Mike B.
On Tue, Jul 03, 2007 at 09:47:11AM -0800, barsalou wrote:

> I just started digging into puppet and it looks like puppet is using
> a pull model. You have a master server and clients talk to it to get
> config info.
>
> Is anyone out there using a push model? If not, why not? Are there
> security reasons you would use one over the other?

I've always found http://www.infrastructures.org/bootstrap/pushpull.shtml
to be sufficiently persuasive that I have no intention of even
attempting to use a push model any time soon.

- Matt
On Jul 3, 2007, at 3:22 PM, barsalou wrote:

> This is still a pull model from my perspective. The idea of a push
> model in my mind is daemons running on the clients as unprivileged
> users, then some sort of cron job running to process what has come
> across. Separating the command from the execution is required in our
> situation.

Puppet could be relatively easily modified to be push instead of pull
-- it's probably not more than a day's development, if that. I've got
one company possibly interested in funding the work, but I haven't
heard from them in a little while.

If this is something you really want, I could point you to the right
place to develop it, or you could fund my development of it.

--
The whole secret of life is to be interested in one thing profoundly
and in a thousand things well. -- Horace Walpole
---------------------------------------------------------------------
Luke Kanies | http://reductivelabs.com | http://madstop.com
On Jul 3, 2007, at 4:33 PM, Matthew Palmer wrote:

> I've always found
> http://www.infrastructures.org/bootstrap/pushpull.shtml to be
> sufficiently persuasive that I have no intention of even attempting
> to use a push model any time soon.

I have seen cases where push was required because of security reasons
-- if your clients are in a DMZ and your server is on the LAN, then
your server will need to initiate all connections, which is
effectively push.

--
A computer lets you make more mistakes faster than any invention in
human history -- with the possible exceptions of handguns and tequila.
-- Mitch Ratcliffe
---------------------------------------------------------------------
Luke Kanies | http://reductivelabs.com | http://madstop.com
> > I've always found
> > http://www.infrastructures.org/bootstrap/pushpull.shtml to be
> > sufficiently persuasive that I have no intention of even attempting
> > to use a push model any time soon.
>
> I have seen cases where push was required because of security reasons
> -- if your clients are in a DMZ and your server is on the LAN, then
> your server will need to initiate all connections, which is
> effectively push.

I'll put up my hand here and say that's exactly the model I work with
every day. We have approximately 100 Linux servers in various DMZs and
the only direction of access is from our internal network to the DMZs,
not the other way.

Connections are limited to SSH as well, but you can tunnel a lot over
SSH ;-) (and of course, if you create a tunnel you can connect back
the other way, but security doesn't like that for obvious reasons).

James
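[For concreteness, a sketch of the tunnel trick James mentions. The
hostnames are placeholders and 8140 is puppetmasterd's default port;
note that puppet's SSL hostname checking would still need to be
satisfied on the DMZ side, e.g. with a hosts alias matching the
master's certificate name:

# Run from the internal network: forward port 8140 on the DMZ host
# back to the puppetmaster on the LAN, so the DMZ client can "pull"
# over a connection that the internal side initiated.
ssh -R 8140:puppetmaster.internal.example.com:8140 admin@dmz-host.example.com
]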
On Jul 3, 2007, at 6:49 PM, HARRIS Jimmy (AXA-Tech-AU) wrote:

> I'll put up my hand here and say that's exactly the model I work with
> every day. We have approximately 100 Linux servers in various DMZs
> and the only direction of access is from our internal network to the
> DMZs, not the other way.
>
> Connections are limited to SSH as well, but you can tunnel a lot over
> SSH ;-) (and of course, if you create a tunnel you can connect back
> the other way, but security doesn't like that for obvious reasons).

Configuring a client is a pretty simple process -- get the facts from
the client, feed them to the compiler, and send the configuration back
to the client. You'd need some work on the server to take care of
those bits for you, although things are getting easier with the work
from Kinial, and then you'd need some work on the client so that it
was used to being fed the configuration instead of retrieving it
itself.

If anyone's interested in doing the development here, I'd be glad to
work with them to develop an implementation plan, including showing
where the complications will be.

--
It is well to remember that the entire universe, with one trifling
exception, is composed of others. -- John Andrew Holmes
---------------------------------------------------------------------
Luke Kanies | http://reductivelabs.com | http://madstop.com
I'd like to put my hand up here and say that I am also quite
interested in seeing push-based delivery in puppet (in combination
with pull-based, naturally). We manage 60+ external systems, all of
which would be better served getting their configurations pushed to
them.

Luke Kanies wrote:

> Configuring a client is a pretty simple process -- get the facts from
> the client, feed them to the compiler, and send the configuration
> back to the client. You'd need some work on the server to take care
> of those bits for you, although things are getting easier with the
> work from Kinial, and then you'd need some work on the client so that
> it was used to being fed the configuration instead of retrieving it
> itself.

How would fileserving work though?

I can see from looking at the localconfig.yaml that all non-fileserver
data seems to be present, with all variables and templates
interpolated into it, so that shouldn't be too much of a problem.
Fileserving seems to be a different kettle of fish, with files pulled
down on an individual basis and installed directly from the
puppetmaster.

Is there any potential for the client to look at a local fileserver
'cache'? I.e., if I could make available a compiled structure of files
(relating solely to the client) on the server, and then arrange for
that to be rsync'd to the client, for the client puppetd to pick up
locally. Could that solve the problem?

> If anyone's interested in doing the development here, I'd be glad to
> work with them to develop an implementation plan, including showing
> where the complications will be.

I'm happy to help here, with whatever needs doing, though my Ruby
skills are somewhat lacking (though increasing, thankfully!). Please
let me know what you need.

mike
> Luke Kanies wrote:
>> Configuring a client is a pretty simple process -- get the facts
>> from the client, feed them to the compiler, and send the
>> configuration back to the client. [...]

So for me it would work something like:

- Configure the master to know what the client should need.
- Push it out to the client.
- The client then compiles what it needs and executes it.

The goal here is that the communication between the master and the
client uses an unprivileged user, even though the end results are
going to be executed by a privileged user of some sort (maybe not
root).

> How would fileserving work though?

What kind of files are folks serving up here? Config files? It seems
like rsync is the choice here... what stops this from satisfying all
"file serving" needs?

> Is there any potential for the client to look at a local fileserver
> 'cache'? [...] Could that solve the problem?

See above.

> I'm happy to help here, with whatever needs doing, though my Ruby
> skills are somewhat lacking (though increasing, thankfully!). Please
> let me know what you need.

Me too.
On Jul 4, 2007, at 10:30 PM, barsalou wrote:

> So for me it would work something like:
>
> - Configure the master to know what the client should need.
> - Push it out to the client.
> - The client then compiles what it needs and executes it.

Having the client do the compile but only sending it a subset of files
is not something that Puppet could easily do right now. You could
quite easily compile the configuration on the server and send it to
the client, but you can't keep least-access yet have the client do the
compile.

> The goal here is that the communication between the master and the
> client uses an unprivileged user, even though the end results are
> going to be executed by a privileged user of some sort (maybe not
> root).

It shouldn't be hard to set this up so that it's
transport-independent, so Puppet shouldn't care how the configuration
gets there. Of course, the configuration will still need to be run as
root, I would think (well, either root, or some root-equivalent, or
you won't be able to use Puppet to actually maintain your system).

> What kind of files are folks serving up here? Config files? It seems
> like rsync is the choice here... what stops this from satisfying all
> "file serving" needs?

If you want services to get restarted or whatever when files change,
you need to use Puppet's fileserver.

--
The great tragedy of Science -- the slaying of a beautiful hypothesis
by an ugly fact. -- Thomas H. Huxley
---------------------------------------------------------------------
Luke Kanies | http://reductivelabs.com | http://madstop.com
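[To make the file-change/service-restart connection concrete, this is
the usual pattern; a generic sketch, not from the thread, with the
file, service, and mount names as placeholders:

# Because puppet manages the file itself, it notices when the contents
# change and can tell the dependent service to restart.
file { "/etc/ssh/sshd_config":
    source => "puppet://puppetmaster.example.com/files/sshd_config",
}

service { "sshd":
    ensure    => running,
    subscribe => File["/etc/ssh/sshd_config"],
}

If something outside puppet (rsync, say) delivers the file, puppet
never sees the change, so nothing triggers the restart.]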
On Jul 4, 2007, at 6:17 PM, Mike Pountney wrote:

> I'd like to put my hand up here and say that I am also quite
> interested in seeing push-based delivery in puppet (in combination
> with pull-based, naturally). We manage 60+ external systems, all of
> which would be better served getting their configurations pushed to
> them.

Great. If you join #puppet at some point we can go through the needed
design changes, and then we can come up with an implementation plan.

I'll be able to help with the planning and vetting of patches, but not
much more, unless someone's sponsoring the work.

> How would fileserving work though?

Just use the file() function instead of the fileserver. It's
functionally equivalent but sends all of the files as part of the
configuration. Clearly not a great idea for really large files, but
otherwise works well.

> Is there any potential for the client to look at a local fileserver
> 'cache'? [...] Could that solve the problem?

You could certainly do it that way, too.

> I'm happy to help here, with whatever needs doing, though my Ruby
> skills are somewhat lacking (though increasing, thankfully!). Please
> let me know what you need.

Heh, well, I need someone to do all of the work. :)

I'm willing to help in the planning, but I don't have the time to do
the development.

--
Finn's Law: Uncertainty is the final test of innovation.
---------------------------------------------------------------------
Luke Kanies | http://reductivelabs.com | http://madstop.com
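[In manifest terms the difference looks roughly like this; paths and
hostnames are placeholders:

# Fileserver version: the client fetches the file from the
# puppetmaster in a separate request when the configuration is applied.
file { "/etc/motd":
    source => "puppet://puppetmaster.example.com/files/motd",
}

# file() version: the contents are read on the server at compile time
# and shipped inline as part of the compiled configuration.
file { "/etc/motd":
    content => file("/var/puppet/files/motd"),
}
]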
Quoting Luke Kanies <luke@madstop.com>:

> Having the client do the compile but only sending it a subset of
> files is not something that Puppet could easily do right now. You
> could quite easily compile the configuration on the server and send
> it to the client, but you can't keep least-access yet have the client
> do the compile.

I'm not sure I'm understanding your point where it says "...client,
but you can't..."; could you re-word it?

> It shouldn't be hard to set this up so that it's
> transport-independent, so Puppet shouldn't care how the configuration
> gets there. Of course, the configuration will still need to be run
> as root, I would think (well, either root, or some root-equivalent,
> or you won't be able to use Puppet to actually maintain your system).

'maintain' seems fairly broad here; maybe I have some machines where I
only have access to certain services (I'm thinking of a hosted box),
but I get your point. Just want to put the idea out there that I might
not always have root access to a box I want to 'maintain'.

> If you want services to get restarted or whatever when files change,
> you need to use Puppet's fileserver.

I don't see how these two things are connected. I would assume that
any 'recipe' you have that is doing stuff to services would handle the
service restart. Why would that require serving the files through
Puppet?

Mike B.
> Great. If you join #puppet at some point we can go through the needed
> design changes, and then we can come up with an implementation plan.
>
> I'll be able to help with the planning and vetting of patches, but
> not much more, unless someone's sponsoring the work.

I wish I could encourage my work to do just that, but puppet's pretty
new in our infrastructure and so its benefits have not yet been seen
by the powers that be. I'm about 10 modules away from that ;)

I've got a reasonable plan of attack for the problem, so will document
that shortly and forward it on to the group.

> Just use the file() function instead of the fileserver. It's
> functionally equivalent but sends all of the files as part of the
> configuration. Clearly not a great idea for really large files, but
> otherwise works well.

That's my problem. I take it that wrapping files up using file() would
involve them going into the localconfig.yaml in the same way that
template() does? Works well for small config files and the like, but
what happens when you start including RPMs, tgz files, database dumps,
etc.?

> You could certainly do it that way, too.

Cool, that's the approach I'd plan to take -- a client-specific
compiled localconfig.yaml plus a file hierarchy relating just to the
client that can be pushed out via some kind of rsync distribution
daemon (puppetdister?)

> Heh, well, I need someone to do all of the work. :)
>
> I'm willing to help in the planning, but I don't have the time to do
> the development.

Alrighty, I'll take that bait. I've got an old system that I wrote a
few years back in Perl that used pretty much the distrib method above.
I've always wanted to translate it into Ruby. What I'd really need is
help figuring out the method to get puppetmaster to spit out the
compiled form, and help with the changes needed to puppetd to get it
to look at a local file cache.

Cheers,
mike
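[A sketch of that rsync distribution step; the client list, paths, and
hostnames are invented for illustration, and how the client then
applies from its local cache is the open question Luke picks up below:

#!/bin/sh
# "puppetdister" sketch: for each client, push out its compiled
# localconfig.yaml plus its private slice of the file hierarchy.
while read client; do
    rsync -az --delete "/var/puppet/compiled/$client/" \
        "root@$client:/var/puppet/cache/"
done < /etc/puppet/clients.list
]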
Quoting Luke Kanies <luke@madstop.com>:

> You said something about compiling configurations on the client after
> determining what it would need.
>
> Puppet currently follows least-access (machines get the least access
> that suffices for them) by compiling configurations on the server and
> sending the client only the configuration it needs. If you compile on
> the client, you'll need to copy down essentially all of your Puppet
> scripts, meaning that you'll break least-access because every client
> will have all configurations.

Good point.

> You can push configurations but still compile on the server and thus
> preserve least-access.
>
> Sure, it's just that there's so little of Puppet that's useful
> without root access, and I don't even model internally whether I have
> it or not. I probably should, but, well, I don't.
>
> The point is that if Puppet pulls the file down, then it notices when
> the file changes; if something else pulls the file down, then you
> will still need to have Puppet monitor the file so that it knows to
> restart a service if the configuration file changes.

This makes sense to me now.

Thanks for the feedback, Luke.
On Jul 5, 2007, at 5:16 PM, Mike Pountney wrote:

> I wish I could encourage my work to do just that, but puppet's pretty
> new in our infrastructure and so its benefits have not yet been seen
> by the powers that be. I'm about 10 modules away from that ;)
>
> I've got a reasonable plan of attack for the problem, so will
> document that shortly and forward it on to the group.

Great.

> That's my problem. I take it that wrapping files up using file()
> would involve them going into the localconfig.yaml in the same way
> that template() does? Works well for small config files and the like,
> but what happens when you start including RPMs, tgz files, database
> dumps, etc.?

Yep, that's definitely a concern.

> Cool, that's the approach I'd plan to take -- a client-specific
> compiled localconfig.yaml plus a file hierarchy relating just to the
> client that can be pushed out via some kind of rsync distribution
> daemon (puppetdister?)

Cool.

> Alrighty, I'll take that bait. I've got an old system that I wrote a
> few years back in Perl that used pretty much the distrib method
> above. I've always wanted to translate it into Ruby. What I'd really
> need is help figuring out the method to get puppetmaster to spit out
> the compiled form, and help with the changes needed to puppetd to get
> it to look at a local file cache.

It's basically all in the getconfig() method in
network/handler/master.rb. That method is normally called by the
client, with the client facts passed in as the main argument, but you
can call the method yourself within a program.

The dirty way to do this is just to write what amounts to a script
that connects to a client, collects the facts, creates a Master
handler, calls getconfig(), connects to the client, and writes that
config to localconfig.yaml. You'd need to force the client to always
use the cache, which might not actually be possible at the moment, but
that should be a pretty small change.

The better way to do this is to continue some of the work I started a
while ago -- break the getconfig() method up into three steps, where
the first one stores the client facts via a simple API, the second
step retrieves class membership, and the third step compiles the
configuration. Each step uses the previous steps to succeed, using
APIs to get the data it needs.

That is, rather than having a single script that gets the facts and
calls getconfig(), you could have a script that periodically got the
facts from the clients and wrote them to whatever backend you wanted.
Then you could recompile the configuration whenever you wanted, using
the most recently stored facts.

It probably makes sense to start the dirty way, but I'd really prefer
to do the better way up-front. We could skip the node step for now,
but Kinial will be that middle step soon, hopefully.

--
Morgan's Second Law: To a first approximation all appointments are
canceled.
---------------------------------------------------------------------
Luke Kanies | http://reductivelabs.com | http://madstop.com
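[For whatever it's worth, a sketch of that "dirty way" script. Only
getconfig() and its facts argument come from Luke's description; the
handler lookup, the constructor options, and the two helpers are
assumptions/placeholders, not verified against the Puppet source:

#!/usr/bin/env ruby
require 'puppet'
require 'puppet/network/handler/master'

host = ARGV[0] or abort "usage: pushconfig <hostname>"

# Hypothetical helper: e.g. ssh to the host and run facter there.
facts = collect_facts(host)

# Handler lookup and constructor options are an assumption.
master = Puppet::Network::Handler.handler(:master).new(:Local => true)
config = master.getconfig(facts, "yaml")

# Hypothetical helper: e.g. scp the compiled config into place; the
# client then has to be forced to run from its cache, which Luke notes
# may need a small change.
push_file(host, "/var/puppet/client_data/localconfig.yaml", config)
]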