I updated the issue so I'm not going to repeat everything here, but I thought enough people would be interested that I'd add it to the list.

I found one memory problem and the issue basically tells how I did it (but without some of the flailing that happened first):
http://reductivelabs.com/redmine/issues/show/1395

Puppet is very flexible by design, so it isn't always straightforward for someone to reproduce problems caused by a particular configuration. The more we can clarify the conditions that cause a problem, or better still get the simplest configuration that causes it, the faster we can fix them.

Cheers,
Andrew
Can you link to where the patches are? I could not find them on your github.

-L

On Jul 26, 3:36 am, "Andrew Shafer" <and...@reductivelabs.com> wrote:
> I updated the issue so I'm not going to repeat everything here, but I
> thought enough people would be interested that I'd add it to the list.
> [...]
http://github.com/littleidea/puppet/tree/report_leak

On Sat, Jul 26, 2008 at 8:15 PM, Larry Ludwig <larrylud@gmail.com> wrote:
> Can you link to where the patches are? I could not find them on your
> github.
>
> -L
Hi

> I found one memory problem and the issue basically tells how I did it
> (but without some of the flailing that happened first)
> http://reductivelabs.com/redmine/issues/show/1395

Thanks for your work. I integrated your changes in our new 0.24.5 rpms. Unfortunately, however, I can't see any change:

before:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
 4913 root      15   0  179m 100m 2604 S    0  1.7   3:27.19 puppetd

after:

  PID USER      PR  NI  VIRT  RES  SHR S %CPU %MEM    TIME+  COMMAND
10564 root      15   0  174m  95m 2604 S    0  1.6   0:17.70 puppetd

So it looks like at least I wasn't really affected by the leak you fixed? Should I note that in the bug report?

greets
pete
> So it looks like at least I wasn't really affected by the leak you
> fixed? Should I note that in the bug report?

It doesn't seem to have made any difference for me, either. And I would kill for your memory usage. Here's mine:

22458 root      16   0  700m 614m 2560 S    0  7.7  53:37.16 puppetd

And that's after restarting about five hours ago (CentOS 5, x86_64). When I came in it was at about 1.1 GB size and 1 GB resident after about two days of running every fifteen minutes. Strangely, the 32-bit FreeBSD machines don't have anywhere near as much memory usage -- only about 35 MB.

I added the memory_profiler.rb from ticket 1131, but it doesn't seem to show anything interesting.

d.
Leaking is a different issue from memory usage, and the leak I found was obvious because it was leaking Ruby objects. If there is a leak below the level of the Ruby objects, that memory_profiler isn't going to find anything.

Peter and Darrell, can you provide more information about the catalog that gets compiled for those hosts?

It would be nice to have some utilities built into Puppet to dump the ObjectSpace and maybe some other info to a file. Attach it to a signal handler and get some standard info that we can all use to troubleshoot.

I have a few experiments using the memory_profiler, changing the classes assigned to a node while the profiler is running to see what data structures change.

Obviously the client saves the catalog from the last run, but I don't see why that would take up as much memory as some of the numbers we are seeing.

It might just be for my curiosity, but is everyone with the big memory footprint serving lots of files? If not, can I get some representative manifests? If yes, can someone take an inventory of all the files they are managing and record the size of all those files?
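For reference, something along these lines would probably do as a starting point. This is only a rough sketch of the utility described above, not code from Puppet or from the patch; the USR2 signal and the /tmp output path are placeholder choices:

  # Rough sketch: dump a per-class count of live Ruby objects when the
  # process receives SIGUSR2. Signal and output path are placeholders.
  Signal.trap("USR2") do
    counts = Hash.new(0)
    ObjectSpace.each_object { |obj| counts[obj.class.to_s] += 1 }

    File.open("/tmp/puppetd-objects.#{Process.pid}", "w") do |f|
      counts.sort_by { |klass, n| -n }.each do |klass, n|
        f.puts "#{n}\t#{klass}"
      end
    end
  end

Comparing two such dumps taken a few runs apart would show whether the growth is in Ruby objects at all, or somewhere below the interpreter.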
Hi Andrew,

I've applied your patch, without major success. I'm attaching the memory usage of the first 24 hours of a puppetd process:

Tue Jul 29 10:41:44 SGT 2008  rss: 24860, size: 22548, vsize: 57352, cpu: 27.0
Tue Jul 29 11:11:44 SGT 2008  rss: 130596, size: 128180, vsize: 162984, cpu: 1.2
Tue Jul 29 11:41:44 SGT 2008  rss: 195296, size: 193032, vsize: 227836, cpu: 1.2
Tue Jul 29 12:11:44 SGT 2008  rss: 234692, size: 232548, vsize: 267352, cpu: 1.1
Tue Jul 29 12:41:44 SGT 2008  rss: 236268, size: 234216, vsize: 269020, cpu: 1.1
Tue Jul 29 13:11:44 SGT 2008  rss: 237480, size: 235152, vsize: 269956, cpu: 1.1
Tue Jul 29 13:41:44 SGT 2008  rss: 240240, size: 237900, vsize: 272704, cpu: 1.0
Tue Jul 29 14:11:44 SGT 2008  rss: 241340, size: 239004, vsize: 273808, cpu: 1.0
Tue Jul 29 14:41:44 SGT 2008  rss: 242108, size: 239692, vsize: 274496, cpu: 1.0
Tue Jul 29 15:11:44 SGT 2008  rss: 243300, size: 240972, vsize: 275776, cpu: 1.0
Tue Jul 29 15:41:44 SGT 2008  rss: 243728, size: 241600, vsize: 276404, cpu: 1.1
Tue Jul 29 16:11:44 SGT 2008  rss: 244468, size: 242476, vsize: 277280, cpu: 1.1
Tue Jul 29 16:41:44 SGT 2008  rss: 244952, size: 242872, vsize: 277676, cpu: 1.1
Tue Jul 29 17:11:44 SGT 2008  rss: 244948, size: 242836, vsize: 277640, cpu: 1.1
Tue Jul 29 17:41:44 SGT 2008  rss: 244888, size: 242512, vsize: 277316, cpu: 1.1
Tue Jul 29 18:11:44 SGT 2008  rss: 245492, size: 243028, vsize: 277832, cpu: 1.1
Tue Jul 29 18:41:44 SGT 2008  rss: 246788, size: 244556, vsize: 279360, cpu: 1.1
Tue Jul 29 19:11:44 SGT 2008  rss: 246544, size: 244104, vsize: 278908, cpu: 1.1
Tue Jul 29 19:41:44 SGT 2008  rss: 248796, size: 246620, vsize: 281424, cpu: 1.1
Tue Jul 29 20:11:44 SGT 2008  rss: 249248, size: 246912, vsize: 281716, cpu: 1.1
Tue Jul 29 20:41:44 SGT 2008  rss: 249792, size: 247616, vsize: 282420, cpu: 1.1
Tue Jul 29 21:11:43 SGT 2008  rss: 252376, size: 250104, vsize: 284908, cpu: 1.1
Tue Jul 29 21:41:43 SGT 2008  rss: 252496, size: 250208, vsize: 285012, cpu: 1.1
Tue Jul 29 22:11:43 SGT 2008  rss: 252776, size: 250728, vsize: 285532, cpu: 1.1
Tue Jul 29 22:41:43 SGT 2008  rss: 253792, size: 251652, vsize: 286456, cpu: 1.1
Tue Jul 29 23:11:43 SGT 2008  rss: 254164, size: 251864, vsize: 286668, cpu: 1.1
Tue Jul 29 23:41:43 SGT 2008  rss: 256076, size: 253836, vsize: 288640, cpu: 1.1
Wed Jul 30 00:11:43 SGT 2008  rss: 255604, size: 253556, vsize: 288360, cpu: 1.1
Wed Jul 30 00:41:43 SGT 2008  rss: 256344, size: 254208, vsize: 289012, cpu: 1.1
Wed Jul 30 01:11:43 SGT 2008  rss: 257144, size: 254860, vsize: 289664, cpu: 1.1
Wed Jul 30 01:41:43 SGT 2008  rss: 256332, size: 253992, vsize: 288796, cpu: 1.1
Wed Jul 30 02:11:43 SGT 2008  rss: 258396, size: 255948, vsize: 290752, cpu: 1.1
Wed Jul 30 02:41:42 SGT 2008  rss: 258780, size: 256328, vsize: 291132, cpu: 1.1
Wed Jul 30 03:11:42 SGT 2008  rss: 259016, size: 256736, vsize: 291540, cpu: 1.1
Wed Jul 30 03:41:43 SGT 2008  rss: 259040, size: 256816, vsize: 291620, cpu: 1.0
Wed Jul 30 04:11:43 SGT 2008  rss: 259868, size: 257688, vsize: 292492, cpu: 1.0
Wed Jul 30 04:41:43 SGT 2008  rss: 260812, size: 258432, vsize: 293236, cpu: 1.0
Wed Jul 30 05:11:43 SGT 2008  rss: 262148, size: 260228, vsize: 295032, cpu: 1.0
Wed Jul 30 05:41:42 SGT 2008  rss: 263212, size: 261280, vsize: 296084, cpu: 1.1
Wed Jul 30 06:11:42 SGT 2008  rss: 263544, size: 261424, vsize: 296228, cpu: 1.1
Wed Jul 30 06:41:42 SGT 2008  rss: 264484, size: 262236, vsize: 297040, cpu: 1.1
Wed Jul 30 07:11:42 SGT 2008  rss: 265760, size: 263568, vsize: 298372, cpu: 1.1
Wed Jul 30 07:41:42 SGT 2008  rss: 266032, size: 263972, vsize: 298776, cpu: 1.1
Wed Jul 30 08:11:42 SGT 2008  rss: 266516, size: 264316, vsize: 299120, cpu: 1.1
Wed Jul 30 08:41:42 SGT 2008  rss: 267704, size: 265720, vsize: 300524, cpu: 1.1
Wed Jul 30 09:11:42 SGT 2008  rss: 268576, size: 266644, vsize: 301448, cpu: 1.1
Wed Jul 30 09:41:42 SGT 2008  rss: 269016, size: 266812, vsize: 301616, cpu: 1.1
Wed Jul 30 10:11:42 SGT 2008  rss: 268796, size: 266748, vsize: 301552, cpu: 1.1
Wed Jul 30 10:41:42 SGT 2008  rss: 269488, size: 267336, vsize: 302140, cpu: 1.1

sudo du -sk `sudo grep -i path\: localconfig.yaml | awk '{print $2}'` 2>/dev/null | awk '{total += $1; count++} END {print count " files with a total of " total "KB"}'
46 files with a total of 316KB

Additionally, I've tried to find out how many file transfers are happening, but it's quite hard to get an exact figure, in state

Hope this helps..
Ohad

On Tue, Jul 29, 2008 at 11:37 AM, Andrew Shafer <andrew@reductivelabs.com> wrote:
> Leaking is a different issue from memory usage, and the leak I found was
> obvious because it was leaking Ruby objects. If there is a leak below the
> level of the Ruby objects, that memory_profiler isn't going to find
> anything.
> [...]
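In case anyone else wants to collect the same kind of numbers, here is a rough sketch of a watcher script. The 30-minute interval and the ps columns are assumptions for illustration, not necessarily what produced the log above:

  # Rough sketch: print memory and CPU figures for a pid every 30 minutes.
  # Interval and columns are illustrative choices.
  pid = ARGV[0] or abort("usage: memwatch.rb <puppetd pid>")

  loop do
    rss, vsz, cpu = `ps -o rss=,vsz=,pcpu= -p #{pid}`.split
    puts "#{Time.now}  rss: #{rss}, vsize: #{vsz}, cpu: #{cpu}"
    sleep 30 * 60
  end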
> Peter and Darrell, can you provide more information about the
> catalog that gets compiled for those hosts?

Hmm.. What sort of information? I don't think it's particularly complicated. localconfig.yaml on that host is 127k, and a good chunk of that is processed templates.

> It might just be for my curiosity, but is everyone with the big
> memory footprint serving lots of files? If not, can I get some
> representative manifests? If yes, can someone take an inventory of
> all the files they are managing and record the size of all those
> files.

Grepping for 'source:' in the localconfig.yaml shows 46 files, none of which are over a few k in size -- they're mostly config files. There are five templates processed on this host, again none over a few k. Overall there are 220 named objects.

FWIW, I rescind my earlier comment about the memory leak patch not helping -- it seems to have stopped growing out of control. It is, however, still at 702m virtual and 616m resident (down from 1.2 gigs after three days). It gets to the current size very quickly after starting.

I'm glad to provide any other debugging info I can.

Darrell
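If it helps, a quick sketch of the kind of inventory being asked for, driven off localconfig.yaml in the same spirit as the greps above -- the yaml path is just the usual default and is an assumption, as is matching on 'path:':

  # Sketch: tally the path: entries in localconfig.yaml and sum the sizes
  # of those that exist locally. The yaml location is an assumption;
  # adjust for your vardir.
  yaml  = File.read("/var/lib/puppet/localconfig.yaml")
  paths = yaml.scan(/^\s*path:\s*(\S+)/).flatten.uniq
  total = paths.select { |p| File.file?(p) }.inject(0) { |sum, p| sum + File.size(p) }
  puts "#{paths.size} paths, #{total / 1024} KB on disk"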
Darrell,

It's not particularly complicated, it's just not anywhere I can look at it :/

I was running a very simple manifest when I found this first leak, but I did notice something similar to what everyone else is seeing. The memory would start at about 20 MB and grow to about 30 MB, then stabilize over the first 20-30 minutes. I had the run interval at 30 seconds at that point, and that growth was without an increase in the number of Ruby objects.

On Wed, Jul 30, 2008 at 10:55 AM, Darrell Fuhriman <darrell@garnix.org> wrote:
> Grepping for 'source:' in the localconfig.yaml shows 46 files, none of
> which are over a few k in size -- they're mostly config files. There
> are five templates processed on this host, again none over a few k.
> [...]