There's definitely enough complexity in some of the modules out there to
warrant solid test coverage, especially if people start extending a module
to support more distributions and OSes while trying to keep the existing
support working. That's even before you start thinking about functions,
facts, and native types. They're *really* in need of solid testing, being
all native Ruby.

I'm a bit twitchy about widely distributing my modules without solid
testing, and certainly I can't accept non-trivial patches for them without
some semblance of testing (breaking production manifests would be...
ill-advised). Then there's testing that everything still works with newer
versions of Puppet -- by the look of it, type APIs and possibly even
manifest syntax are going to be different in the soon-to-be-released
version, and my idea of good testing isn't "put it out there and cross
your fingers", but it seems like that's the best I can do at the moment.

So, has anyone put any thought into testing modules?

- Matt

--
Software engineering: that part of computer science which is too
difficult for the computer scientist.
On Nov 17, 2007, at 11:24 PM, Matt Palmer wrote:

> There's definitely enough complexity in some of the modules out there to
> warrant solid test coverage, especially if people start extending a
> module to support more distributions and OSes while trying to keep the
> existing support working. That's even before you start thinking about
> functions, facts, and native types. They're *really* in need of solid
> testing, being all native Ruby.

I agree on all counts.

> I'm a bit twitchy about widely distributing my modules without solid
> testing, and certainly I can't accept non-trivial patches for them
> without some semblance of testing (breaking production manifests would
> be... ill-advised). Then there's testing that everything still works
> with newer versions of Puppet -- by the look of it, type APIs and
> possibly even manifest syntax are going to be different in the
> soon-to-be-released version, and my idea of good testing isn't "put it
> out there and cross your fingers", but it seems like that's the best I
> can do at the moment.
>
> So, has anyone put any thought into testing modules?

Yeah, I spent a lot of time thinking about this while at LISA last week.

I think that testing Puppet code should be split into two pieces, one of
which is easy to test, and one of which is not.

The easy-to-test bit is that the code generates the resources you expect.
It shouldn't be too hard to create a testing harness that makes it easy
to set some variables, generate the resources, and make sure the
resources are what you expect. For instance, if you want to test an SSH
class, then you might say "if the client is Debian, it should create a
service resource named sshd". This should be pretty easy to integrate
with rspec.

The second part, which is that the generated resources provide the
functionality you want... that's going to be much harder, and is really
only possible if you have a full VM to do integration testing. You could
reduce the test cost some by keeping the tests to noop mode, but it's
always going to be difficult to test a resource that's only functional on
certain platforms.
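To make the first part concrete, a harness like that might let you write
specs along these lines -- just a sketch of the idea; compile_resources
and the resource objects it returns are hypothetical, not anything that
exists today:

    require 'spec'
    require 'puppet/module_test_helper'  # hypothetical harness

    describe "ssh class" do
      it "creates an sshd service on Debian" do
        # compile_resources (hypothetical) would evaluate the manifest
        # against a supplied set of facts and return the resources.
        resources = compile_resources("include ssh",
                                      "operatingsystem" => "Debian")
        sshd = resources.find { |r|
          r.type == "Service" && r.title == "sshd"
        }
        sshd.should_not be_nil
      end
    end

--
It's impossible to foresee the consequences of being clever.
    -- Christopher Strachey
---------------------------------------------------------------------
Luke Kanies | http://reductivelabs.com | http://madstop.com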
On Sun, Nov 18, 2007 at 12:13:54PM -0600, Luke Kanies wrote:
> The easy-to-test bit is that the code generates the resources you
> expect. It shouldn't be too hard to create a testing harness that makes
> it easy to set some variables, generate the resources, and make sure
> the resources are what you expect. For instance, if you want to test an
> SSH class, then you might say "if the client is Debian, it should
> create a service resource named sshd". This should be pretty easy to
> integrate with rspec.

It's not the integration with testing frameworks that has me concerned;
it's linking up "out of tree" modules with Puppet itself.

What I'd like, in a perfect world, would be for modules to have a 'test'
directory, which would contain test files with just one require ("require
'puppet/module_test_helper'", perhaps) and then filled with regular test
cases.

> The second part, which is that the generated resources provide the
> functionality you want... that's going to be much harder, and is really
> only possible if you have a full VM to do integration testing. You
> could reduce the test cost some by keeping the tests to noop mode, but
> it's always going to be difficult to test a resource that's only
> functional on certain platforms.

/me does the "it's all mocks" dance. Again.

Seriously. You can simulate any and every platform available by providing
the facts and any mock filesystem and network calls that need to get
made. You've done a mighty fine job of abstracting OS-specific stuff in
Facter and the Puppet providers; why throw that all away now by failing
to make the testing infrastructure platform-agnostic?
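Picture something like this -- a hypothetical layout, obviously, since
the module_test_helper doesn't exist yet:

    mymodule/
      manifests/
        init.pp
      templates/
      plugins/
      test/
        apache_test.rb

where test/apache_test.rb is nothing more exotic than:

    require 'puppet/module_test_helper'  # hypothetical
    require 'test/unit'

    class ApacheModuleTest < Test::Unit::TestCase
      def test_debian_gets_apache2_package
        # compile_module (hypothetical) evaluates the module's manifests
        # against the given facts and returns the generated resources.
        resources = compile_module('apache',
                                   'operatingsystem' => 'Debian')
        assert resources.any? { |r|
          r.type == 'Package' && r.title == 'apache2'
        }
      end
    end

- Matt

--
"As far as I'm concerned, spammers are nothing more than electronic
home-invasion gangs."
    -- Andy Markley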
On Nov 18, 2007, at 1:30 PM, Matt Palmer wrote:
> It's not the integration with testing frameworks that has me concerned;
> it's linking up "out of tree" modules with Puppet itself.
>
> What I'd like, in a perfect world, would be for modules to have a
> 'test' directory, which would contain test files with just one require
> ("require 'puppet/module_test_helper'", perhaps) and then filled with
> regular test cases.

Right -- that's what I was trying to point to, but I should have been
more explicit. Then we need some kind of 'rake test' target that will run
the unit tests on all of the modules in the module path.

> /me does the "it's all mocks" dance. Again.
>
> Seriously. You can simulate any and every platform available by
> providing the facts and any mock filesystem and network calls that need
> to get made. You've done a mighty fine job of abstracting OS-specific
> stuff in Facter and the Puppet providers; why throw that all away now
> by failing to make the testing infrastructure platform-agnostic?

I do understand what you're saying and I agree... to an extent. You could
go further than what I did by validating that a given set of resources
all pass validation on a given platform or whatever, and that would make
a lot of sense -- I should have thought of that earlier.

At some point you really do need to validate that your resources make
sense, though -- this includes validating that packages with that name
actually exist, that they can actually be installed, that your init
scripts actually have 'restart' commands, etc. I completely agree that we
can get very far using mocks, but at some point you need to verify that
your code does what you expect on the stupid computer, and it just can't
be turtles all the way down.
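Roughly this sort of thing, I mean -- just a sketch, and the test/
directory convention is the one you proposed above, not something Puppet
knows about yet:

    # Rakefile: run every test file found under <module>/test for each
    # module on the module path.
    require 'rake'
    require 'rake/testtask'

    MODULEPATH = ENV['MODULEPATH'] || '/etc/puppet/modules'

    Rake::TestTask.new(:test) do |t|
      t.test_files = FileList["#{MODULEPATH}/*/test/*_test.rb"]
      t.verbose = true
    end

--
The point of living and of being an optimist, is to be foolish enough to
believe the best is yet to come.
    -- Peter Ustinov
---------------------------------------------------------------------
Luke Kanies | http://reductivelabs.com | http://madstop.com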
Hi all!

Thank you Matt, for bringing up this topic!

On Sunday 18 November 2007, Luke Kanies wrote:
> On Nov 17, 2007, at 11:24 PM, Matt Palmer wrote:
> > There's definitely enough complexity in some of the modules out there
> > to warrant solid test coverage [...]
>
> I agree on all counts.

Having quite a few modules myself, I can only agree here too.

> > So, has anyone put any thought into testing modules?
>
> The easy-to-test bit is that the code generates the resources you
> expect. It shouldn't be too hard to create a testing harness that makes
> it easy to set some variables, generate the resources, and make sure
> the resources are what you expect. For instance, if you want to test an
> SSH class, then you might say "if the client is Debian, it should
> create a service resource named sshd". This should be pretty easy to
> integrate with rspec.

I fear that handcoding the tests is not going to be feasible for
manifests. In contrast to unit tests for intricate code, tests for
manifests would be of the same complexity as the manifests themselves.

I can imagine creating most of the tests automatically, by comparing the
created resources to a stored "known good" result. If this "known good"
result could be stored in a stable, human-readable format, it could also
be used for reviewing and "what-if" change-tracking scenarios.

> The second part, which is that the generated resources provide the
> functionality you want... that's going to be much harder, and is really
> only possible if you have a full VM to do integration testing. You
> could reduce the test cost some by keeping the tests to noop mode, but
> it's always going to be difficult to test a resource that's only
> functional on certain platforms.

My way around this is to add nagios tests to the modules, testing the
installed functionality. While this doesn't assure me about the
functionality on all platforms, it does cover a very important subset.
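Something like this is what I have in mind for the comparison -- purely a
sketch, where compile_resources is the same hypothetical harness
discussed earlier in the thread:

    require 'yaml'

    # Serialize the generated resources in a canonical order, so the
    # snapshot is stable and human-readable.
    def resources_as_yaml(resources)
      resources.sort_by { |r| [r.type, r.title] }.map { |r|
        { 'type' => r.type, 'title' => r.title, 'params' => r.params }
      }.to_yaml
    end

    current = resources_as_yaml(
      compile_resources('include ssh', 'operatingsystem' => 'Debian'))
    known_good = File.read('test/known_good/ssh_debian.yaml')

    puts(current == known_good ? 'OK' : 'resources changed!')

Regards, DavidS

--
The primary freedom of open source is not the freedom from cost, but the
freedom to shape software to do what you want. This freedom is /never/
exercised without cost, but is available /at all/ only by accepting the
very different costs associated with open source, costs not in money,
but in time and effort.
--
http://www.schierer.org/~luke/log/20070710-1129/on-forks-and-forking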
On Mon, Nov 19, 2007 at 08:21:03AM +0100, David Schmitt wrote:
> On Sunday 18 November 2007, Luke Kanies wrote:
> > The second part, which is that the generated resources provide the
> > functionality you want... that's going to be much harder, and is
> > really only possible if you have a full VM to do integration testing.
>
> My way around this is to add nagios tests to the modules, testing the
> installed functionality. While this doesn't assure me about the
> functionality on all platforms, it does cover a very important subset.

Ah, another devotee of test-first systems administration. I do this too,
but plenty of the things that are configured with Puppet aren't testable
using Nagios to the degree that is really required. Also, I'm working
with some massively HA systems, where having Nagios tell me that stuff is
broken is fairly useless in the great, grand scheme of things --
significant damage has already been done. If Puppet was the cause of the
outage, it wouldn't be there for much longer.

- Matt

--
"I invented the term object-oriented, and I can tell you I did not have
C++ in mind."
    -- Alan Kay
On Monday 19 November 2007, Matt Palmer wrote:
> On Mon, Nov 19, 2007 at 08:21:03AM +0100, David Schmitt wrote:
> > My way around this is to add nagios tests to the modules, testing the
> > installed functionality. While this doesn't assure me about the
> > functionality on all platforms, it does cover a very important
> > subset.
>
> Ah, another devotee of test-first systems administration.

Yes :) except that I don't have enough resources to do proper pre-rollout
tests and therefore usually operate on my living systems. I do admit,
though, that it is a drag :-/

> I do this too, but plenty of the things that are configured with Puppet
> aren't testable using Nagios to the degree that is really required.
> Also, I'm working with some massively HA systems, where having Nagios
> tell me that stuff is broken is fairly useless in the great, grand
> scheme of things -- significant damage has already been done. If Puppet
> was the cause of the outage, it wouldn't be there for much longer.

In my book, this would require a 1:1 test system to test deployment
before roll-out to production. While Puppet can help here by being able
to make two 100% identical roll-outs, I fail to see how it could improve
the situation beyond that.

Regards, David
This one time, at band camp, David Schmitt wrote:
> In my book, this would require a 1:1 test system to test deployment
> before roll-out to production. While Puppet can help here by being able
> to make two 100% identical roll-outs, I fail to see how it could
> improve the situation beyond that.

Yeah, I have thought through this whole thread that what Matt is after is
a canary machine, a QA environment. Otherwise you're going to be
attempting to reimplement the entire system in mocks. Personally I think
it's cheaper to have the QA machine, and a staged rollout into the
production systems if you fear taking down an HA cluster. There are going
to be errors in your testing that will be exposed when it gets rolled
out, so to me the module testing seems like a lot of effort with low
return.
On Tue, Nov 20, 2007 at 09:40:51AM +1100, Jamie Wilkinson wrote:
> Yeah, I have thought through this whole thread that what Matt is after
> is a canary machine, a QA environment. Otherwise you're going to be
> attempting to reimplement the entire system in mocks. Personally I
> think it's cheaper to have the QA machine, and a staged rollout into
> the production systems if you fear taking down an HA cluster. There are
> going to be errors in your testing that will be exposed when it gets
> rolled out, so to me the module testing seems like a lot of effort with
> low return.

That exact same argument can be made against all automated testing,
though. Incomplete testing is just evidence of a lack of imagination.
<grin>

I *really* want automated testing for modules so that I can accept
patches to my modules from other contributors with minimal risk -- I want
to be able to know, within a minute or two of applying the patch, that
the functionality *I* rely on in the module isn't broken by the patch. If
I have to respin my QA infrastructure and do a massive pile of manual
testing to check a patch, I'm just not going to bother accepting patches,
which largely defeats the purpose of releasing them in the first place.

- Matt
Totally agree.

-----Original Message-----
From: puppet-users-bounces@madstop.com On Behalf Of Matt Palmer
Sent: 19 November 2007 23:30
To: puppet-users@madstop.com
Subject: Re: [Puppet-users] Testing modules

> I *really* want automated testing for modules so that I can accept
> patches to my modules from other contributors with minimal risk -- I
> want to be able to know, within a minute or two of applying the patch,
> that the functionality *I* rely on in the module isn't broken by the
> patch. [...]
On Tuesday 20 November 2007, Matt Palmer wrote:
> I *really* want automated testing for modules so that I can accept
> patches to my modules from other contributors with minimal risk -- I
> want to be able to know, within a minute or two of applying the patch,
> that the functionality *I* rely on in the module isn't broken by the
> patch. If I have to respin my QA infrastructure and do a massive pile
> of manual testing to check a patch, I'm just not going to bother
> accepting patches, which largely defeats the purpose of releasing them
> in the first place.

Without having code to show, I think most of the testing benefits on the
manifest level of modules can be reaped by comparing the generated
resource lists. The input (facts) can be gathered from live systems, and
the output would need a canonical ordering, but that shouldn't be too
hard either.

Staying on this level also has the added value of making it very clear
what kinds of errors can be caught and which cannot, as well as giving an
indication of which aspects have to be checked after applying a change.

> That exact same argument can be made against all automated testing,
> though. Incomplete testing is just evidence of a lack of imagination.
> <grin>

In the end the configuration has to be tested on a running system. I can
imagine a world where even more aspects than pure resource-list
comparison can be tested by creating chroots/vservers/domUs with the new
manifests applied within, but for 100% test coverage, you also have to
have 100% fidelity in your test system.
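Gathering the input is the easy half; Facter can already dump everything
it knows, so capturing facts from a live system for later replay is just
(a sketch, though Facter.to_hash is a real API):

    require 'facter'
    require 'yaml'

    # Capture the facts of a live system so they can later be replayed
    # as the input to a resource-list comparison test.
    File.open("#{Facter.value(:hostname)}-facts.yaml", 'w') do |f|
      f.write(Facter.to_hash.to_yaml)
    end

Regards, David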
On Nov 19, 2007, at 5:29 PM, Matt Palmer wrote:
> That exact same argument can be made against all automated testing,
> though. Incomplete testing is just evidence of a lack of imagination.
> <grin>
>
> I *really* want automated testing for modules so that I can accept
> patches to my modules from other contributors with minimal risk -- I
> want to be able to know, within a minute or two of applying the patch,
> that the functionality *I* rely on in the module isn't broken by the
> patch. If I have to respin my QA infrastructure and do a massive pile
> of manual testing to check a patch, I'm just not going to bother
> accepting patches, which largely defeats the purpose of releasing them
> in the first place.

Ok, so what's missing for you to provide this? It doesn't look like
there's any real issue with doing this kind of testing right now.

--
If I want your opinion, I'll read your entrails.
    -- Doug Shewfelt
---------------------------------------------------------------------
Luke Kanies | http://reductivelabs.com | http://madstop.com
On Tue, 20 Nov 2007, David Schmitt wrote:
> Without having code to show, I think most of the testing benefits on
> the manifest level of modules can be reaped by comparing the generated
> resource lists. The input (facts) can be gathered from live systems,
> and the output would need a canonical ordering, but that shouldn't be
> too hard either.

What I really miss from puppet is a true 'dry-run' mode: just show me
what changes would be executed. I know it cannot be complete, as one
change can affect/trigger others, but one could live with it.

As a side comment, the '--test' flag of puppetd is unfortunately quite
misleading, as one can easily assume that no change will happen on the
managed node.

Best regards,
Jozsef
--
E-mail : kadlec@sunserv.kfki.hu, kadlec@blackhole.kfki.hu
PGP key: http://www.kfki.hu/~kadlec/pgp_public_key.txt
Address: KFKI Research Institute for Particle and Nuclear Physics
         H-1525 Budapest 114, POB. 49, Hungary
On Tue, Nov 20, 2007 at 09:35:42AM +0100, David Schmitt wrote:
> Without having code to show, I think most of the testing benefits on
> the manifest level of modules can be reaped by comparing the generated
> resource lists.

Who's testing just the manifest level? That's the *least* interesting
part of the whole endeavour. There are custom facts, functions, types,
and providers to worry about, too. *None* of those can be tested in
modules currently, as far as I can tell.

> > That exact same argument can be made against all automated testing,
> > though. Incomplete testing is just evidence of a lack of imagination.
> > <grin>
>
> In the end the configuration has to be tested on a running system.

You're making an unsupported assertion there.

- Matt
On Tue, Nov 20, 2007 at 07:25:10PM +0100, Kadlecsik Jozsi wrote:
> What I really miss from puppet is a true 'dry-run' mode: just show me
> what changes would be executed. I know it cannot be complete, as one
> change can affect/trigger others, but one could live with it.

What does the current --noop mode fail to do for you?

- Matt
On Tue, Nov 20, 2007 at 10:27:16AM -0600, Luke Kanies wrote:
> Ok, so what's missing for you to provide this?

A well-factored codebase that I can just include and call some method
(with a suitable set of fact data) to get a tree of resources for a given
manifest, and that I can then easily poke (with suitable mocks) to make
sure that the resources (individually and collectively) do the right
thing.

> It doesn't look like there's any real issue with doing this kind of
> testing right now.

So you've got modules with complete, self-contained test suites that I
can examine to see how you do it?

- Matt
On Wed, 21 Nov 2007, Matthew Palmer wrote:
> > What I really miss from puppet is a true 'dry-run' mode: just show me
> > what changes would be executed.
>
> What does the current --noop mode fail to do for you?

It seems I simply missed that flag! Thanks!

Best regards,
Jozsef
On Nov 20, 2007, at 3:14 PM, Matthew Palmer wrote:
> > Ok, so what's missing for you to provide this?
>
> A well-factored codebase that I can just include and call some method
> (with a suitable set of fact data) to get a tree of resources for a
> given manifest, and that I can then easily poke (with suitable mocks)
> to make sure that the resources (individually and collectively) do the
> right thing.

Yeah, I'm working on it, but you can actually make progress now, even
though it's not ideal, rather than just wishing for something.

If you doubt that I'm working on it, please compare the current compile,
configuration, and interpreter classes in parser/ to what are currently
released, or compare the code it takes to evaluate a configuration now
vs. the current release.

> > It doesn't look like there's any real issue with doing this kind of
> > testing right now.
>
> So you've got modules with complete, self-contained test suites that I
> can examine to see how you do it?

No, I don't; if I did, I would have pointed you to them a while ago.

You know the code base pretty darn well, though, and you seem fond enough
of testing that you probably know testing pretty well. Find the unit
tests that are similar to what you want and copy them with the
modifications you need. No, it's not pleasant, but it's at least
straightforward.

(Replying to your other mail on this thread.)

> Who's testing just the manifest level? That's the *least* interesting
> part of the whole endeavour. There are custom facts, functions, types,
> and providers to worry about, too. *None* of those can be tested in
> modules currently, as far as I can tell.

What's different about modules that suddenly you can't test Ruby in them?
They're just a set of directories following a specific convention. I'm
pretty confused.

You've got tests for the existing functions; why not emulate those tests
for your module functions et al, but in $module/test or $module/spec? I
feel like there's something I'm missing, because this really isn't that
hard.

--
I'm seventeen and I'm crazy. My uncle says the two always go together.
When people ask your age, he said, always say seventeen and insane.
    -- Ray Bradbury
---------------------------------------------------------------------
Luke Kanies | http://reductivelabs.com | http://madstop.com
Matthew Palmer schrieb:
> On Tue, Nov 20, 2007 at 09:35:42AM +0100, David Schmitt wrote:
> > Without having code to show, I think most of the testing benefits on
> > the manifest level of modules can be reaped by comparing the
> > generated resource lists.
>
> Who's testing just the manifest level? That's the *least* interesting
> part of the whole endeavour. There are custom facts, functions, types,
> and providers to worry about, too. *None* of those can be tested in
> modules currently, as far as I can tell.

All of which are Ruby code and therefore can be tested with rspec or
similar Ruby tools (see the sketch at the end of this message).

> > > That exact same argument can be made against all automated testing,
> > > though. Incomplete testing is just evidence of a lack of
> > > imagination. <grin>
> >
> > In the end the configuration has to be tested on a running system.
>
> You're making an unsupported assertion there.

I'm at a loss how else I could verify "include exim4::with_spamscanning"
or something from my hosting module (which installs and configures
VServers) except by running it on a system and checking whether all the
required nagios tests[1] are installed and check out OK.

Regards, David

[1] Of course, those should also include things like "'exim4 -bt
david@schmitt.edv-bus.at' still routes my mail to my mailbox".
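To illustrate the point about facts: a custom fact shipped in a module is
just Ruby, so something like this works today with test/unit and mocha --
the vserver_host fact here is invented for the example, defined inline to
keep the sketch self-contained:

    require 'test/unit'
    require 'mocha'
    require 'facter'

    # Hypothetical custom fact of the kind a module might ship in
    # plugins/facter/.
    Facter.add(:vserver_host) do
      setcode { File.directory?('/proc/virtual') }
    end

    class VserverHostFactTest < Test::Unit::TestCase
      def test_fact_is_true_when_proc_virtual_exists
        # Mock the filesystem call so the test passes on any platform.
        File.stubs(:directory?).with('/proc/virtual').returns(true)
        assert_equal true, Facter.value(:vserver_host)
      end
    end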
On Fri, Nov 23, 2007 at 10:54:16AM +0100, David Schmitt wrote:
> Matthew Palmer schrieb:
> > Who's testing just the manifest level? That's the *least* interesting
> > part of the whole endeavour. There are custom facts, functions,
> > types, and providers to worry about, too. *None* of those can be
> > tested in modules currently, as far as I can tell.
>
> All of which are Ruby code and therefore can be tested with rspec or
> similar Ruby tools.

You seem very confident of this, so I take it you've got an example I
could crib from?

> > > In the end the configuration has to be tested on a running system.
> >
> > You're making an unsupported assertion there.
>
> I'm at a loss how else I could verify "include
> exim4::with_spamscanning" or something from my hosting module (which
> installs and configures VServers) except by running it on a system and
> checking whether all the required nagios tests are installed and check
> out OK.

By specifying the initial state, running the code, and verifying either
the actions taken, the final state, or both.
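That is: mock out the system interface, run the code, and assert on what
it tried to do. A sketch of the shape of it with mocha -- the Restarter
class and its runner interface are invented purely for illustration:

    require 'test/unit'
    require 'mocha'

    # Invented example: a tiny "service restarter" standing in for
    # whatever the module manages. The point is that the system calls go
    # through one interface, which the test can mock.
    class Restarter
      def initialize(runner)
        @runner = runner
      end

      def restart(service)
        @runner.run("/etc/init.d/#{service} restart")
      end
    end

    class RestarterTest < Test::Unit::TestCase
      def test_restart_runs_the_init_script
        runner = mock('runner')
        # Verify the action taken, without touching the real system.
        runner.expects(:run).with('/etc/init.d/exim4 restart')
        Restarter.new(runner).restart('exim4')
      end
    end

- Matt

--
I am cow, hear me moo, I weigh twice as much as you.
I'm a cow, eating grass, methane gas comes out my ass.
I'm a cow, you are too; join us all! Type apt-get moo.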
Matt Palmer schrieb:
> On Fri, Nov 23, 2007 at 10:54:16AM +0100, David Schmitt wrote:
> > All of which are Ruby code and therefore can be tested with rspec or
> > similar Ruby tools.
>
> You seem very confident of this, so I take it you've got an example I
> could crib from?

http://reductivelabs.com/trac/puppet/browser/spec/unit/ral/types/exec.rb
is a small example of how a native type can be tested. For a more
comprehensive test of a provider, there is
http://reductivelabs.com/trac/puppet/browser/spec/unit/ral/provider/interface/redhat.rb

Of course, the requires in the header will have to be adjusted, and one
needs a local puppet tree.

Functions are still tested with runit. Luke's tests are at
http://reductivelabs.com/trac/puppet/browser/test/language/functions.rb

Having read many of the runit tests and written a little bit of rspec
testing, I would really recommend, though, trying to start with rspec for
functions too. If you gain any experience applying these things in the
context of a module, I'm sure I won't be the only grateful one.

> > I'm at a loss how else I could verify "include
> > exim4::with_spamscanning" or something from my hosting module (which
> > installs and configures VServers) except by running it on a system
> > and checking whether all the required nagios tests are installed and
> > check out OK.
>
> By specifying the initial state, running the code, and verifying either
> the actions taken, the final state, or both.

The specific example I was thinking about was a patch adding tarpitting
to my exim4::with_spamscanning class. The chances are high that such a
patch would only change files/ and templates/, and only touch exim's
ACLs. Since I'm not capable of/interested in creating a third-party Exim
ACL tester, I might run exim4 -bh to test the new config. For that I need
to get quite a few external dependencies right: DNS, spamassassin callout
replies. This reaches dimensions where I think a separate testing system
is easier to implement.

But to save you one reply: "That exact same argument can be made against
all automated testing, though. Incomplete testing is just evidence of a
lack of imagination. <grin>"

Regards, DavidS