Hi there,

We've got a Puppet (0.24.8) instance with something like a hundred nodes on it. The puppetmaster is running under Passenger, and we've got both stored configs and Dashboard reports going to a MySQL database on the same host. The Dashboard itself is now in production use as our external node configuration and reporting tool. Oh yes, we're definitely living the dream.

Our problem is that the Dashboard is just getting slower and slower as time goes by, and the database is becoming swamped. It's a concern now that it has become such a key tool.

Can anyone enlighten me as to whether there is any housekeeping that can be done to the Dashboard database, in order to make the application any faster? Are old reports purged at any time, or will they hang about for ever? Our dashboard_production.reports table now contains ~380,000 entries and consumes 814MB of disk space. Would it help to prune these to a certain time-period?

Might there be any indexes missing from my database? I think I installed the Dashboard at version 1.0, but ran the database upgrade script between 1.0.1 and 1.0.3.

I'm doing incremental MySQL tuning anyway, but I'd like to know if anyone else has any suggestions or similar experiences.

Thanks.

-- 
Ben Tullis
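For reference, per-table row counts and data/index sizes such as those quoted above can be read out of information_schema. The query below is only a sketch: it assumes MySQL 5.0 or later and a schema named dashboard_production, and table_rows is approximate for InnoDB.

# Show where the space is going in the Dashboard database
# (schema name is an assumption; adjust if yours differs)
mysql -e "SELECT table_name,
                 table_rows,
                 ROUND(data_length  / 1024 / 1024) AS data_mb,
                 ROUND(index_length / 1024 / 1024) AS index_mb
          FROM information_schema.tables
          WHERE table_schema = 'dashboard_production'
          ORDER BY data_length DESC;"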
qutic development
2010-Sep-16 09:51 UTC
Re: [Puppet Users] Dashboard database optimization
On 16.09.2010, at 11:31, Ben Tullis wrote:

> Are old reports purged at any time, or will they hang about for ever?
> Our dashboard_production.reports table now contains ~380,000 entries
> and consumes 814MB of disk space. Would it help to prune these to a
> certain time-period?

Rails logs are not rotated by default. The Rails world would use a capistrano task:

http://blog.daeltar.org/logrotate-with-capistrano-generated-configura

This creates a logrotate file - which can be done with Puppet too.

> Might there be any indexes missing from my database? I think I
> installed the Dashboard at version 1.0, but ran the database upgrade
> script between 1.0.1 and 1.0.3.

Rack::Bug is a tool you can use to get an idea about missing indexes:

http://github.com/brynary/rack-bug
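As a rough illustration of the logrotate half (only a sketch: the log path is an assumed location for a Dashboard install, and the same file could just as easily be delivered as a Puppet file resource instead of a shell heredoc):

# Drop a logrotate stanza for the Dashboard's Rails logs
# (/opt/puppet-dashboard/log is an assumed path; adjust to your install)
cat > /etc/logrotate.d/puppet-dashboard <<'EOF'
/opt/puppet-dashboard/log/*.log {
    weekly
    rotate 4
    compress
    missingok
    notifempty
    copytruncate
}
EOF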
Thanks for the response, but I think you've misunderstood the first bit.

> Rails logs are not rotated by default. The Rails world would use a
> capistrano task:
>
> http://blog.daeltar.org/logrotate-with-capistrano-generated-configura
>
> This creates a logrotate file - which can be done with Puppet too.

It's not a log file that is causing the problem, it's the sheer size of the database reports table. I have already put logrotate files in place for the Rails log files, and they're fine.

> > Might there be any indexes missing from my database? I think I
> > installed the Dashboard at version 1.0, but ran the database upgrade
> > script between 1.0.1 and 1.0.3.
>
> Rack::Bug is a tool you can use to get an idea about missing indexes:
>
> http://github.com/brynary/rack-bug

That's an interesting technique, but I'd rather not get into modifying the application itself to put diagnostics in-line, especially as it's in production.

For reference, the indexes that I have on the reports table are these:

mysql> show indexes in reports;
+---------+------------+--------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| Table   | Non_unique | Key_name                 | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment |
+---------+------------+--------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
| reports |          0 | PRIMARY                  |            1 | id          | A         |      380511 |     NULL | NULL   |      | BTREE      |         |
| reports |          1 | index_reports_on_node_id |            1 | node_id     | A         |         229 |     NULL | NULL   | YES  | BTREE      |         |
| reports |          1 | index_reports_on_time    |            1 | time        | A         |      380511 |     NULL | NULL   | YES  | BTREE      |         |
+---------+------------+--------------------------+--------------+-------------+-----------+-------------+----------+--------+------+------------+---------+
3 rows in set (0.06 sec)

I've got MySQL logging queries that can't use an index, so I'll analyse that to see if anything jumps out at me.

This table currently has 800MB of data and uses 14MB for the indexes.
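For anyone wanting to set up the same logging, the options look roughly like this. A sketch only: MySQL 5.0 option spellings, example path and threshold, a my.cnf that includes a conf.d directory (otherwise the lines go straight into my.cnf), and mysqld needs a restart to pick them up.

# Log slow queries plus queries that use no index
# (path and 2-second threshold are examples only)
cat > /etc/mysql/conf.d/slow-queries.cnf <<'EOF'
[mysqld]
log-slow-queries = /var/log/mysql/mysql-slow.log
long_query_time = 2
log-queries-not-using-indexes
EOF

# after restarting mysqld, watch for Dashboard queries doing full scans
tail -f /var/log/mysql/mysql-slow.log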
On Sep 16, 2010, at 5:31 AM, Ben Tullis wrote:

> Our problem is that the Dashboard is just getting slower and slower as
> time goes by, and the database is becoming swamped. It's a concern now
> that it has become such a key tool.
>
> Can anyone enlighten me as to whether there is any housekeeping that
> can be done to the Dashboard database, in order to make the
> application any faster?

We had the same issues. I have this in `/etc/cron.daily` which blows away all but the last 14 days of activity for Dashboard.

#!/bin/sh

# filesystem
/usr/bin/find /var/lib/puppet/reports/ -type f -mtime +60 -exec rm {} \;
# these directories should be empty after the previous command
/usr/bin/find /var/lib/puppet/reports/ -maxdepth 1 -mtime +60 -type d -exec rmdir {} \;

# dashboard database
/usr/bin/rake -f /opt/puppet-dashboard/Rakefile RAILS_ENV=production reports:prune upto=14 unit=day

I would start higher and crank it down until you get acceptable performance. I'd like to have more than 14 days honestly, but it was just too slow otherwise. (You'll note that I keep 60 days worth of YAML reports, so I could always import those if I really needed the data in the Dashboard.)

-- 
Rob McBroom <http://www.skurfer.com/>
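One caveat to add to the database step: deleting rows does not by itself hand disk space back. After a large prune, rebuilding the table reclaims the freed pages; a sketch of that step is below. It locks the table while it runs, so pick a quiet moment, and with InnoDB in a shared tablespace the ibdata file itself will not shrink, the space is simply reused.

# Rebuild the reports table after pruning to reclaim free space
# (locks the table for the duration; schema name is an example)
mysql dashboard_production -e "OPTIMIZE TABLE reports;"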
On Sep 16, 12:38 pm, Ben Tullis <b...@tiger-computing.co.uk> wrote:

> I've got MySQL logging queries that can't use an index, so I'll
> analyse that to see if anything jumps out at me.
>
> This table currently has 800MB of data and uses 14MB for the indexes.

Consider yourself lucky, mine takes up over 2GB. To save on space, if you are using MySQL 5.1 you might consider using the InnoDB plugin and the Barracuda compressed row format. As the reports are the big space hog and are mostly text, they compress well.

There are some details regarding database performance on the following ticket:

http://projects.puppetlabs.com/issues/4357

In particular, upgrading to 1.0.4 (which I see has just made RC1) should help front-page performance. My latest update on the above issue also includes how to add another index to the reports table, which speeds up front-page performance yet again.
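For the compression Oliver mentions, the rough shape is below. This is only a sketch: it assumes MySQL 5.1 with the InnoDB plugin enabled and innodb_file_per_table switched on, the KEY_BLOCK_SIZE of 8 is just an example, and the ALTER rewrites (and locks) the whole table, so test it on a copy first. The extra reports index he refers to is spelled out on the ticket itself.

# Allow the Barracuda file format (InnoDB plugin, MySQL 5.1+); also set
# innodb_file_format=Barracuda in my.cnf so it survives a restart
mysql -e "SET GLOBAL innodb_file_format = 'Barracuda';"

# Rebuild the reports table with compressed rows (example block size)
mysql dashboard_production -e "
    ALTER TABLE reports
        ROW_FORMAT=COMPRESSED
        KEY_BLOCK_SIZE=8;"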
Hi Oliver and Rob,

I think I'll be implementing all of those suggestions in the near future then. Many thanks to you both.

We're only on MySQL 5.0 for now, so no compressed rows, but that's a very interesting technique. I'll keep my ear to the ground for 1.0.4 as well.

Ben
On Thu, 16 Sep 2010 05:34:33 -0700, Oliver Hookins wrote:

> In particular, upgrading to 1.0.4 (which I see has just made RC1)
> should help front-page performance. My latest update on the above
> issue also includes how to add another index to the reports table,
> which speeds up front-page performance yet again.

This new index is planned to appear in RC2, and the 1.0.4 final.

-- 
Jacob Helwig