I got tired of restarting my puppetmaster when it stopped responding and finally switched to mongrel last night.

When running under mongrel, what sort of concurrent connection rates are people able to get? I don't know how many individual puppetmaster processes I should spawn, or how to tell when I should spawn more.

Perhaps I need more than two, because this morning I had a look at how things were going and I see sometime in the night I started getting this error on every run:

Nov 13 07:09:47 cormorant puppetd[1355]: Could not retrieve plugins: execution expired

I ran the two puppetmaster instances in verbose mode so I could see what was happening, and when I look now at their output, all I'm seeing is the puppetmasters processing reports for hosts, and no longer compiling configurations:

notice: Compiled configuration for plover.riseup.net in 28.09 seconds
info: Stored configuration for loon.riseup.net in 31.28 seconds
notice: Compiled configuration for loon.riseup.net in 34.52 seconds
info: Stored configuration for parrot.riseup.net in 35.49 seconds
notice: Compiled configuration for parrot.riseup.net in 36.93 seconds
info: Stored configuration for nuthatch.riseup.net in 65.28 seconds
notice: Compiled configuration for nuthatch.riseup.net in 69.41 seconds
info: Found kakapo in /etc/puppet/manifests/site.pp
info: Found eider in /etc/puppet/manifests/site.pp
info: Stored configuration for kakapo.riseup.net in 12.46 seconds
notice: Compiled configuration for kakapo.riseup.net in 16.36 seconds
info: Stored configuration for eider.riseup.net in 18.15 seconds
notice: Compiled configuration for eider.riseup.net in 21.55 seconds
info: Processing reports lastcheck for proxy.riseup.net
info: Processing reports lastcheck for mx3.riseup.net
info: Found puffin in /etc/puppet/manifests/site.pp
info: Found cassowary in /etc/puppet/manifests/site.pp
info: Stored configuration for cassowary.riseup.net in 7.48 seconds
notice: Compiled configuration for cassowary.riseup.net in 11.68 seconds
info: Stored configuration for puffin.riseup.net in 11.02 seconds
notice: Compiled configuration for puffin.riseup.net in 18.82 seconds
info: Processing reports lastcheck for plover.riseup.net
info: Processing reports lastcheck for crane.riseup.net
info: Processing reports lastcheck for loon.riseup.net
info: Processing reports lastcheck for penguin.riseup.net
info: Processing reports lastcheck for heron.riseup.net
info: Processing reports lastcheck for redwing.riseup.net
info: Processing reports lastcheck for primary.riseup.net
info: Processing reports lastcheck for spamd2.riseup.net
info: Processing reports lastcheck for goose.riseup.net
info: Processing reports lastcheck for parrot.riseup.net
info: Processing reports lastcheck for admin.riseup.net
info: Processing reports lastcheck for albatross.riseup.net
info: Processing reports lastcheck for kakapo.riseup.net
info: Processing reports lastcheck for dns1.riseup.net
info: Processing reports lastcheck for cormorant.riseup.net
info: Processing reports lastcheck for cassowary.riseup.net
info: Processing reports lastcheck for proxy.riseup.net
info: Processing reports lastcheck for swan.riseup.net
info: Processing reports lastcheck for spamd2.riseup.net
info: Processing reports lastcheck for egret.riseup.net
...

I'll be restarting the two puppetmaster instances, which will likely get them back in working order, but I was hoping that my switch to mongrel would keep me from having to do this.

Perhaps I should be running more than 2 puppetmaster instances? When running under mongrel, what sort of concurrent connection rates are people able to get? I don't know how many individual puppetmaster processes I should spawn, or how to tell when I should spawn more.

Micah
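For reference, a minimal sketch of how additional mongrel-backed puppetmaster processes were typically started in the 0.23.x era (this is not taken from the thread; the port numbers and pidfile paths are placeholders, and the exact flags should be checked against the installed version):

    # one puppetmaster per port; a front-end proxy balances across them
    puppetmasterd --servertype=mongrel --masterport=18140 \
        --pidfile=/var/run/puppet/puppetmasterd.18140.pid
    puppetmasterd --servertype=mongrel --masterport=18141 \
        --pidfile=/var/run/puppet/puppetmasterd.18141.pid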
On Nov 13, 2007 8:22 AM, Micah Anderson <micah@riseup.net> wrote:
> Perhaps I should be running more than 2 puppetmaster instances? When
> running under mongrel, what sort of concurrent connection
> rates are people able to get? I don't know how many individual
> puppetmaster processes I should spawn, or how to tell when I
> should spawn more.

So I'm spending a hell of a lot of time at the moment trying to debug why our puppet setup is bugging out every so often, and I'm starting to suspect problems in both mod_proxy_balancer and mongrel.

We're running 5000-odd clients against a single apache mod_proxy_balancer setup with 8 mongrel instances behind it. We have clients configured to run every half an hour, and it's close to one client connecting per second.

We've just moved to 0.23 on the server to try and resolve the issues I'm seeing, and they've changed, but not gone away completely.

Basically, once I see

lsof -i -P | grep -c CLOSE_WAIT

start ramping up, the puppetmaster processes will simply stop responding.

-- 
Nigel Kersten
MacOps @ Google
"Two kinds of Kool-Aid"
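For readers unfamiliar with the setup being described: a rough sketch of an Apache 2.2 front end with SSL terminated in Apache and mod_proxy_balancer spreading requests over several local mongrel ports. This is not Nigel's actual configuration; the paths, ports, and the client-certificate header names are assumptions, and only the directives themselves (mod_ssl, mod_headers, mod_proxy, mod_proxy_balancer) are stock Apache:

    Listen 8140
    <VirtualHost *:8140>
        SSLEngine on
        SSLCertificateFile    /etc/puppet/ssl/certs/puppetmaster.pem
        SSLCertificateKeyFile /etc/puppet/ssl/private_keys/puppetmaster.pem
        SSLCACertificateFile  /etc/puppet/ssl/ca/ca_crt.pem
        SSLVerifyClient require
        SSLVerifyDepth  1

        # forward the verified client identity to the mongrels;
        # the header names here are an assumption (they became
        # configurable on the puppet side)
        RequestHeader set X-Client-DN     "%{SSL_CLIENT_S_DN}e"
        RequestHeader set X-Client-Verify "%{SSL_CLIENT_VERIFY}e"

        <Proxy balancer://puppetmaster>
            BalancerMember http://127.0.0.1:18140
            BalancerMember http://127.0.0.1:18141
            # ... one member per mongrel instance
        </Proxy>
        ProxyPass        / balancer://puppetmaster/
        ProxyPassReverse / balancer://puppetmaster/
    </VirtualHost>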
On Nov 13, 2007, at 10:22 AM, Micah Anderson wrote:
>
> I'll be restarting the two puppetmaster instances, which will likely get
> them back in working order, but I was hoping that my switch to mongrel
> would keep me from having to do this.
>
> Perhaps I should be running more than 2 puppetmaster instances? When
> running under mongrel, what sort of concurrent connection
> rates are people able to get? I don't know how many individual
> puppetmaster processes I should spawn, or how to tell when I
> should spawn more.

How widespread is this problem? How many others are experiencing it?

I'm currently (albeit slowly) working toward a release, so if there's an important fix needed in this area I'd like to know ASAP. And, of course, if anyone (and it sounds like Nigel is on the scent) has any ideas on what the problem might be, I'd love to know what to focus on (or, even, just apply someone else's patch).

-- 
Do not think of knocking out another person's brains because he differs in opinion from you. It would be as rational to knock yourself on the head because you differ from yourself ten years ago. -- Horace Mann
---------------------------------------------------------------------
Luke Kanies | http://reductivelabs.com | http://madstop.com
On Tue, 13 Nov 2007 16:22:55 +0000, Micah Anderson wrote:
> I got tired of restarting my puppetmaster when it stopped responding and
> finally switched to mongrel last night.
>
> When running under mongrel, what sort of concurrent connection rates are
> people able to get? I don't know how many individual puppetmaster
> processes I should spawn, or how to tell when I should spawn more.
>
> Perhaps I need more than two, because this morning I had a look at how
> things were going and I see sometime in the night I started getting this
> error on every run:

I switched to 4 to see if this would improve my situation, but it doesn't seem to have done much at all.

What seems to happen is when I start things up, things seem to function normally. Then after a while the client spits out the following on start:

> Nov 13 07:09:47 cormorant puppetd[1355]: Could not retrieve plugins:
> execution expired

but still manages to get a compiled configuration from the master.

Then after a little while longer, the time to complete runs skyrockets:

Nov 12 21:56:50 eider puppetd[22167]: Finished configuration run in 1419.60 seconds

Pre-mongrel my times were much lower:

Nov 11 22:58:48 eider puppetd[22167]: Finished configuration run in 98.82 seconds

and the puppetmasters only show:

info: Processing reports lastcheck for hostname.domain

No more compiled configuration or stored configuration lines.

I've had to restart things at least 4 times today, and I'm thinking of switching back to webrick, as restarting once every 4 days is preferable to 4 times a day. Before I do that, I will likely try a different version of mongrel (I'm using 1.1.1).

I'm wondering if it has anything to do with the attempt to fix connection problems by restarting connections. I get these for both webrick and mongrel, but I wonder if one plays better with them:

Nov 13 11:58:31 eider puppetd[22167]: Other end went away; restarting connection and retrying

micah
What versions of Apache with mod_proxy_balancer are people running? I'm on 2.2.3 and notice a few balancer routing improvements in later 2.2.x versions.

After staring at Apache with a debug LogLevel for the last 20 minutes until it happened again, I'm no closer to working out where the problem lies. The puppetmaster processes are hung on a select(), Apache thinks it is still working OK, and requests keep working, just really, really slowly.
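In case anyone else wants to poke at a wedged setup the same way, a hedged sketch of the kind of checks being described here; these are standard tools rather than commands quoted from the thread, and <pid> and the port numbers are placeholders:

    # which syscall is the stuck puppetmaster/mongrel process blocked in?
    strace -p <pid>

    # are half-closed sockets piling up between Apache and the mongrels?
    lsof -i -P | grep -c CLOSE_WAIT
    netstat -tnp | grep -E '8140|1814[0-9]'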
I'm not sure if others are seeing this, but I'll keep posting odd things as I work them out.

After opening up the mongrel instances so they're accessible outside of the proxy (and ignoring cert validation), they continue to work when the mod_proxy_balancer in front of them has for all intents and purposes died.

The problem does seem to lie with apache/mod_proxy_balancer rather than with mongrel and puppet, as far as I can see.
On Tue, 13 Nov 2007 15:36:28 -0800, Nigel Kersten wrote:
> what versions of Apache with mod_proxy_balancer are people running?

I'm running apache 2.2.3-4+etch1 here.

> I'm on 2.2.3 and notice a few balancer routing improvements in later
> 2.2.x versions.

Micah
Nigel Kersten wrote:
> what versions of Apache with mod_proxy_balancer are people running?
>
> I'm on 2.2.3 and notice a few balancer routing improvements in later
> 2.2.x versions.

[~]% /usr/sbin/httpd.worker -V
Server version: Apache/2.2.3
Server built:   Sep 11 2006 09:44:40
Server's Module Magic Number: 20051115:3
Server loaded:  APR 1.2.7, APR-Util 1.2.8
Compiled using: APR 1.2.7, APR-Util 1.2.7
Architecture:   64-bit
Server MPM:     Worker
  threaded:     yes (fixed thread count)
  forked:       yes (variable process count)

With an in-house mongrel-1.0.1 (packaging is the only change from the standard mongrel, IIRC). Puppet is 0.23.2.

> the puppetmaster processes are hung on a select(), Apache thinks it is
> still working ok, and requests keep working, just really really slowly.

The only thing I notice, which is fixed for the next release, was on the client side (where the client would die if the server didn't respond fast enough). God[0] + puppet + puppet splay* options seem to do the trick for me.

Cheers,
Ryan

[0] - http://god.rubyforge.org/ by our own Tom Werner
On Tue, Nov 13, 2007 at 11:20:03PM +0000, Micah Anderson wrote:
> On Tue, 13 Nov 2007 16:22:55 +0000, Micah Anderson wrote:
> > I got tired of restarting my puppetmaster when it stopped responding and
> > finally switched to mongrel last night.
> >
> > When running under mongrel, what sort of concurrent connection rates are
> > people able to get? I don't know how many individual puppetmaster
> > processes I should spawn, or how to tell when I should spawn more.
> >
> > Perhaps I need more than two, because this morning I had a look at how
> > things were going and I see sometime in the night I started getting this
> > error on every run:
>
> I switched to 4 to see if this would improve my situation, but it doesn't
> seem to have done much at all.
>
> What seems to happen is when I start things up, things seem to function
> normally. Then after a while the client spits out the following on start:
>
> > Nov 13 07:09:47 cormorant puppetd[1355]: Could not retrieve plugins:
> > execution expired
>
> but still manages to get a compiled configuration from the master.

That's *really* strange -- that suggests that the fileserver is taking Too Damn Long to get the full list of files to be transferred from plugins -- or that the transfers themselves are taking too long. I'd imagine this might be the case if all of your mongrels are occupied with other connections -- is that what you're seeing?

Otherwise, how many files have you got in your various modules' plugins directories? That fileserver module should be fairly efficient, but I've not analysed its big-O performance, so it may well be nightmarish...

> Then after a little while longer, the time to complete runs skyrockets:
>
> Nov 12 21:56:50 eider puppetd[22167]: Finished configuration run in
> 1419.60 seconds
>
> pre-mongrel my times were much lower:
>
> Nov 11 22:58:48 eider puppetd[22167]: Finished configuration run in 98.82
> seconds

I don't suppose you're using storeconfigs (with exported resources, particularly) on sqlite, are you? I noticed when I started using exported resources with more client connections that sqlite started getting *really* slow (1500 second compilations were common for me, too). Switching to PgSQL (in my case; I'd imagine MySQL wouldn't be too different) dropped compile runs back to the single-digit-seconds range. If you're already running a "real" database, is it getting snotted during compilation?

> I'm wondering if it has anything to do with the attempt to fix connection
> problems by restarting connections. I get these for both webrick and
> mongrel, but I wonder if one plays better with them:
>
> Nov 13 11:58:31 eider puppetd[22167]: Other end went away; restarting
> connection and retrying

I'm having trouble understanding how that could be a problem, but I'm wondering if Apache might need specific config to allow keep-alive on the SSL connections?

Another thing to consider is that, if keep-alive *is* enabled, you will need enough mongrels to cover your peak concurrent connection count, as I'd imagine that one mongrel would be permanently assigned to one client connection for the length of that TCP connection (although mod_proxy's implementation might not do that, it seems likely that it would).

- Matt

-- 
If only more employers realized that people join companies, but leave bosses. A boss should be an insulator, not a conductor or an amplifier. -- Geoff Kinnel, in the Monastery
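For anyone following along, the storeconfigs database settings being discussed live in the puppetmaster's config file; a minimal sketch for MySQL follows. The setting names are as I recall them from the 0.23/0.24 series, so treat the exact names and the section layout as assumptions to be checked against your version, and the values are of course placeholders:

    [puppetmasterd]
        # store compiled resources (and exported resources) in a database
        storeconfigs = true
        dbadapter    = mysql
        dbserver     = localhost
        dbname       = puppet
        dbuser       = puppet
        dbpassword   = secret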
On Wed, 14 Nov 2007 17:50:47 +1100, Matt Palmer wrote:
> On Tue, Nov 13, 2007 at 11:20:03PM +0000, Micah Anderson wrote:
>> On Tue, 13 Nov 2007 16:22:55 +0000, Micah Anderson wrote:
>> > I got tired of restarting my puppetmaster when it stopped responding
>> > and finally switched to mongrel last night.
>> >
>> > When running under mongrel, what sort of concurrent connection rates
>> > are people able to get? I don't know how many individual puppetmaster
>> > processes I should spawn, or how to tell when I should spawn more.
>> >
>> > Perhaps I need more than two, because this morning I had a look at
>> > how things were going and I see sometime in the night I started
>> > getting this error on every run:
>>
>> I switched to 4 to see if this would improve my situation, but it
>> doesn't seem to have done much at all.
>>
>> What seems to happen is when I start things up, things seem to function
>> normally. Then after a while the client spits out the following on
>> start:
>>
>> > Nov 13 07:09:47 cormorant puppetd[1355]: Could not retrieve plugins:
>> > execution expired
>>
>> but still manages to get a compiled configuration from the master.
>
> That's *really* strange -- that suggests that the fileserver is taking
> Too Damn Long to get the full list of files to be transferred from
> plugins -- or that the transfers themselves are taking too long. I'd
> imagine this might be the case if all of your mongrels are occupied with
> other connections -- is that what you're seeing?

I'm not sure how I can tell this. This is my first use of mongrel, so I am not really familiar with how I can determine this sort of thing.

I do have a suspicion that a number of the puppetd clients somehow skewed to hitting the server at the same time. The reason I suspect this is because I switched back to webrick, and as soon as I started up the puppetmaster -v, 13 nodes all reported "info: Processing reports lastcheck for <node>". I think these were trying to check in at different times but couldn't, so they were retrying again and again for hours, and suddenly when the puppetmaster became available they all tried at once.

> Otherwise, how many files have you got in your various modules' plugins
> directories? That fileserver module should be fairly efficient, but
> I've not analysed its big-O performance, so it may well be
> nightmarish...

I don't think this should matter, because things were working better, with the same set of files, on webrick. But in case it does, it's a pretty small set:

# find /etc/puppet/modules -type f | grep -v .svn | wc
     47      47    1654

>> Then after a little while longer, the time to complete runs skyrockets:
>>
>> Nov 12 21:56:50 eider puppetd[22167]: Finished configuration run in
>> 1419.60 seconds
>>
>> pre-mongrel my times were much lower:
>>
>> Nov 11 22:58:48 eider puppetd[22167]: Finished configuration run in
>> 98.82 seconds
>
> I don't suppose you're using storeconfigs (with exported resources,
> particularly) on sqlite, are you? I noticed when I started using
> exported resources with more client connections that sqlite started
> getting *really* slow (1500 second compilations were common for me,
> too). Switching to PgSQL (in my case; I'd imagine MySQL wouldn't be too
> different) dropped compile runs back to the single-digit-seconds range.
> If you're already running a "real" database, is it getting snotted
> during compilation?

Yes, I am using storeconfigs, but I moved to mysql long ago because of these problems. That was a big win.
No, I didn't switch back to sqlite :)

>> I'm wondering if it has anything to do with the attempt to fix
>> connection problems by restarting connections. I get these for both
>> webrick and mongrel, but I wonder if one plays better with them:
>>
>> Nov 13 11:58:31 eider puppetd[22167]: Other end went away; restarting
>> connection and retrying
>
> I'm having trouble understanding how that could be a problem, but I'm
> wondering if Apache might need specific config to allow keep-alive on
> the SSL connections?

I don't really understand how the connection can go away in the first place (after all, these machines are connected to each other on a gigabit switch). I could see maybe with webrick when there aren't enough threads to handle connections, and the puppets just need to try at some later time, but with apache+mongrel is it really necessary?

It struck me as odd that I would get these (with webrick) when only one node was checking in. Is there an easy way I can turn this off to test to see if it's the root of the problem?

> Another thing to consider is that, if keep-alive *is* enabled, you will
> need enough mongrels to cover your peak concurrent connection count, as
> I'd imagine that one mongrel would be permanently assigned to one client
> connection for the length of that TCP connection (although mod_proxy's
> implementation might not do that, it seems likely that it would).

That would present a scalability problem. I would need almost as many mongrels as I have nodes then.

micah
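The keep-alive behaviour being debated here is governed by a handful of stock Apache 2.2 directives; a short sketch of the relevant knobs follows. The values are illustrative rather than recommendations from the thread, and the proxy-nokeepalive trick is an assumption about one way to test whether backend connection reuse is the culprit:

    # client-facing keep-alive
    KeepAlive On
    MaxKeepAliveRequests 100
    KeepAliveTimeout 15

    # force mod_proxy to open a fresh backend connection per request,
    # to test whether connection reuse to the mongrels is the problem
    SetEnv proxy-nokeepalive 1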
On Tue, 13 Nov 2007 18:12:32 -0800, Ryan Dooley wrote:
> Nigel Kersten wrote:
>> what versions of Apache with mod_proxy_balancer are people running?
>>
>> I'm on 2.2.3 and notice a few balancer routing improvements in later
>> 2.2.x versions.
>
> [~]% /usr/sbin/httpd.worker -V
> Server version: Apache/2.2.3
> Server built:   Sep 11 2006 09:44:40
> Server's Module Magic Number: 20051115:3
> Server loaded:  APR 1.2.7, APR-Util 1.2.8
> Compiled using: APR 1.2.7, APR-Util 1.2.7
> Architecture:   64-bit
> Server MPM:     Worker
>   threaded:     yes (fixed thread count)
>   forked:       yes (variable process count)

On mine:

% /usr/sbin/apache2 -V
Server version: Apache/2.2.3
Server built:   Jun 17 2007 20:16:04
Server's Module Magic Number: 20051115:3
Server loaded:  APR 1.2.7, APR-Util 1.2.7
Compiled using: APR 1.2.7, APR-Util 1.2.7
Architecture:   32-bit
Server MPM:     Worker
  threaded:     yes (fixed thread count)
  forked:       yes (variable process count)
Server compiled with....
 -D APACHE_MPM_DIR="server/mpm/worker"
 -D APR_HAS_SENDFILE
 -D APR_HAS_MMAP
 -D APR_HAVE_IPV6 (IPv4-mapped addresses enabled)
 -D APR_USE_SYSVSEM_SERIALIZE
 -D APR_USE_PTHREAD_SERIALIZE
 -D SINGLE_LISTEN_UNSERIALIZED_ACCEPT
 -D APR_HAS_OTHER_CHILD
 -D AP_HAVE_RELIABLE_PIPED_LOGS
 -D DYNAMIC_MODULE_LIMIT=128
 -D HTTPD_ROOT=""
 -D SUEXEC_BIN="/usr/lib/apache2/suexec"
 -D DEFAULT_PIDLOG="/var/run/apache2.pid"
 -D DEFAULT_SCOREBOARD="logs/apache_runtime_status"
 -D DEFAULT_ERRORLOG="logs/error_log"
 -D AP_TYPES_CONFIG_FILE="/etc/apache2/mime.types"
 -D SERVER_CONFIG_FILE="/etc/apache2/apache2.conf"

> With an in-house mongrel-1.0.1 (packaging is the only change from the
> standard mongrel, IIRC).

I switched to mongrel 1.1 last night (from 1.1.1), same problem. I can switch to any other version of mongrel and see what happens if this seems like a useful test.

> Puppet is 0.23.2.

Here I've got the Debian puppet/puppetmaster package 0.23.2-13.
On Wed, Nov 14, 2007 at 05:05:32PM +0000, Micah Anderson wrote:
> On Wed, 14 Nov 2007 17:50:47 +1100, Matt Palmer wrote:
> > On Tue, Nov 13, 2007 at 11:20:03PM +0000, Micah Anderson wrote:
> >> On Tue, 13 Nov 2007 16:22:55 +0000, Micah Anderson wrote:
> >> > I got tired of restarting my puppetmaster when it stopped responding
> >> > and finally switched to mongrel last night.
> >> >
> >> > When running under mongrel, what sort of concurrent connection rates
> >> > are people able to get? I don't know how many individual puppetmaster
> >> > processes I should spawn, or how to tell when I should spawn more.
> >> >
> >> > Perhaps I need more than two, because this morning I had a look at
> >> > how things were going and I see sometime in the night I started
> >> > getting this error on every run:
> >>
> >> I switched to 4 to see if this would improve my situation, but it
> >> doesn't seem to have done much at all.
> >>
> >> What seems to happen is when I start things up, things seem to function
> >> normally. Then after a while the client spits out the following on
> >> start:
> >>
> >> > Nov 13 07:09:47 cormorant puppetd[1355]: Could not retrieve plugins:
> >> > execution expired
> >>
> >> but still manages to get a compiled configuration from the master.
> >
> > That's *really* strange -- that suggests that the fileserver is taking
> > Too Damn Long to get the full list of files to be transferred from
> > plugins -- or that the transfers themselves are taking too long. I'd
> > imagine this might be the case if all of your mongrels are occupied with
> > other connections -- is that what you're seeing?
>
> I'm not sure how I can tell this. This is my first use of mongrel, so I
> am not really familiar with how I can determine this sort of thing.

Perhaps mod_proxy_balancer has some instrumentation you can use to see how many connections it's handling at once (or perhaps the status reporting stuff in Apache might do it); otherwise probably lsof on each of the mongrels, or netstat, to see if they're listening or actively connected, and whether the connections are long-lasting (same port-pair for a while) or transient (different port-pairs regularly).

> >> I'm wondering if it has anything to do with the attempt to fix
> >> connection problems by restarting connections. I get these for both
> >> webrick and mongrel, but I wonder if one plays better with them:
> >>
> >> Nov 13 11:58:31 eider puppetd[22167]: Other end went away; restarting
> >> connection and retrying
> >
> > I'm having trouble understanding how that could be a problem, but I'm
> > wondering if Apache might need specific config to allow keep-alive on
> > the SSL connections?
>
> I don't really understand how the connection can go away in the first
> place (after all, these machines are connected to each other on a gigabit
> switch).

The server could say "no more keep-alive for you, please come back again later" and drop the TCP connection. I suppose there could also be some SSL-level "max number of HTTP connections" setting, for security purposes, but that just seems lame to me.
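The balancer instrumentation Matt mentions can be exposed with the stock balancer-manager handler in Apache 2.2; a minimal sketch, with the location path and access restrictions as placeholders:

    <Location /balancer-manager>
        SetHandler balancer-manager
        Order deny,allow
        Deny from all
        Allow from 127.0.0.1
    </Location>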
The only time that the TCP connection going away gets detected higher up is when Puppet tries to re-use the SSL connection; that's when you get the log droppings and we re-connect.

> I could see maybe with webrick when there aren't enough threads
> to handle connections, and the puppets just need to try at some later
> time, but with apache+mongrel is it really necessary?

It's not a "too busy, retry" error, it's a "$DEITY be damned, SSL isn't a real socket layer" error. If SSL was doing its job like TCP does, it'd notice when the other end went away and reconnect (like TCP resending a packet that doesn't get through). But noooo, SSL has to be different and cause *us* all this pain.

> It struck me as odd that I would get these (with webrick) when only one
> node was checking in. Is there an easy way I can turn this off to test to
> see if it's the root of the problem?

patch -R, perhaps? I have to say that I don't think that anyone would want to turn off a 25% performance optimisation, so we didn't create any way to turn it off in the config.

I think if you disable the little block o' code between lines 155 and 161 in lib/puppet/network/xmlrpc/client.rb that deals with fiddling the @@http_cache, that should disable the connection caching stuff, and every HTTP request will be made down a freshly established connection. I make no promises that'll disable it, though. <grin>

> > Another thing to consider is that, if keep-alive *is* enabled, you will
> > need enough mongrels to cover your peak concurrent connection count, as
> > I'd imagine that one mongrel would be permanently assigned to one client
> > connection for the length of that TCP connection (although mod_proxy's
> > implementation might not do that, it seems likely that it would).
>
> That would present a scalability problem. I would need almost as many
> mongrels as I have nodes then.

Aaah, the ancient trade-off between interactive responsiveness and throughput rears its ugly head yet again.

- Matt

-- 
English is about as pure as a cribhouse whore. We don't just borrow words; on occasion, English has pursued other languages down alleyways to beat them unconscious and rifle their pockets for new vocabulary. -- James D. Nicoll, resident of rec.arts.sf.written
So before anyone else goes down this path: setting up the balancer-manager mod_status stuff doesn't help, in my case at least. The whole balancer becomes non-responsive, including the manager, although the mongrel instances behind it are still responsive.

I don't have a borked instance in front of me, but from memory, using lsof shows an even distribution of long-lasting connections.

I've set up an alternative virtual host and an entirely separate apache, both on other ports, to help isolate where the problem is occurring, and Jeff McCune and I have been chatting on and off today looking at Pound as an alternative to Apache as a reverse proxy, which is looking like it might possibly be viable with some small code tweaks.
On Nov 14, 2007, at 4:51 PM, Nigel Kersten wrote:
> I've set up an alternative virtual host and an entirely separate
> apache, both on other ports, to help isolate where the problem is
> occurring, and Jeff McCune and I have been chatting on and off today
> looking at Pound as an alternative to Apache as a reverse proxy,
> which is looking like it might possibly be viable with some small
> code tweaks.

As an update to everyone interested in Mongrel, I got Puppetmasterd+Mongrel+Pound working last night. I'm finalizing the patches now and will have patches submitted and documentation posted on the wiki this afternoon.

Cheers,
-- 
Jeff McCune
Systems Manager
The Ohio State University Department of Mathematics
On Nov 15, 2007, at 9:27 AM, Jeff McCune wrote:
> As an update to everyone interested in Mongrel, I got
> Puppetmasterd+Mongrel+Pound working last night. I'm finalizing the
> patches now and will have patches submitted and documentation posted
> on the wiki this afternoon.

That's great news. I thought Pound didn't do client certificate authentication.

-- 
If computers get too powerful, we can organize them into a committee -- that will do them in. -- Bradley's Bromide
---------------------------------------------------------------------
Luke Kanies | http://reductivelabs.com | http://madstop.com
On Nov 15, 2007, at 12:42 PM, Luke Kanies wrote:
> On Nov 15, 2007, at 9:27 AM, Jeff McCune wrote:
>
>> As an update to everyone interested in Mongrel, I got
>> Puppetmasterd+Mongrel+Pound working last night. I'm finalizing the
>> patches now and will have patches submitted and documentation posted
>> on the wiki this afternoon.
>
> That's great news. I thought Pound didn't do client certificate
> authentication.

I was under that impression as well, but Pound does do client certificate authentication in the version I've been playing with. I actually find it even better than Apache in that Pound allows you to specify finer-grained CA bundles.

As an example, I'm able to send a specific bundle to the client, informing it which CAs the server is configured to accept (to authorize a client), and yet another bundle that will not authorize a client directly, but may be used to "fill in the gaps" in a certificate chain leading from a trusted CA to a client certificate.

This will end up playing very nicely into the multiple certificate authority work I was doing a few months ago, and appears to be quite a bit more flexible.

By the way Luke, thanks for making the HTTP headers into configuration options. It really minimized the patch to puppet. =)

Cheers,
-- 
Jeff McCune
Systems Manager
The Ohio State University Department of Mathematics
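For anyone wanting to experiment along these lines, a rough sketch of a Pound front end with one bundle advertised to clients (CAlist) and another used only to verify certificate chains (VerifyList), balancing over two local mongrels. This is not Jeff's configuration: the file paths, ports, and the ClientCert arguments are assumptions and should be checked against the Pound documentation and Jeff's eventual patches:

    ListenHTTPS
        Address 0.0.0.0
        Port    8140
        Cert       "/etc/pound/puppetmaster.pem"
        # bundle sent to clients as the list of acceptable CAs
        CAlist     "/etc/pound/client_ca_bundle.pem"
        # additional CAs used only to complete/verify chains
        VerifyList "/etc/pound/chain_ca_bundle.pem"
        # require a client certificate; second argument is verification depth
        ClientCert 1 2

        Service
            BackEnd
                Address 127.0.0.1
                Port    18140
            End
            BackEnd
                Address 127.0.0.1
                Port    18141
            End
        End
    End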
Jeff McCune wrote:
> On Nov 14, 2007, at 4:51 PM, Nigel Kersten wrote:
>> I've set up an alternative virtual host and an entirely separate
>> apache, both on other ports, to help isolate where the problem is
>> occurring, and Jeff McCune and I have been chatting on and off today
>> looking at Pound as an alternative to Apache as a reverse proxy, which
>> is looking like it might possibly be viable with some small code tweaks.
>
> As an update to everyone interested in Mongrel, I got
> Puppetmasterd+Mongrel+Pound working last night. I'm finalizing the
> patches now and will have patches submitted and documentation posted on
> the wiki this afternoon.

There is some documentation available at:

http://reductivelabs.com/trac/puppet/wiki/UsingMongrelPound

Jeff - I cleaned up my first cut at the documentation, as it was a very fast brain dump. The Pound configuration needs adjustment, and I've referenced your #906 ticket and indicated that it probably won't work quite yet.

Cheers

James Turnbull

-- 
James Turnbull <james@lovedthanlost.net>
---
Author of:
Pro Nagios 2.0 (http://www.amazon.com/gp/product/1590596099/)
Hardening Linux (http://www.amazon.com/gp/product/1590594444/)
---
PGP Key (http://pgp.mit.edu:11371/pks/lookup?op=get&search=0x0C42DF40)