Hello list,

I am soliciting opinions here, as opposed to technical help, for an idea I have. I've set up a Bacula backup system on an AWS volume. Bacula stores a LOT of information in its MySQL database (in my setup; you can also use PostgreSQL or SQLite if you choose). Since I started doing this I've noticed that the MySQL data directory has swelled to over 700GB! That's quite a lot, and it's eating up valuable disk space.

So I had an idea. What about using the FUSE-based s3fs to mount an S3 bucket on the local filesystem and using that as the MySQL data dir? In other words, mount your S3 bucket on /var/lib/mysql.

I used this article to set up the s3fs file system:

http://benjisimon.blogspot.com/2011/01/setting-up-s3-backup-solution-on-centos.html

And everything went as planned. So my question to you, dear listers, is: if I do start using a locally mounted S3 bucket as my mysqld data dir, will performance of the database be acceptable? If so, why? If not, are there any other reasons why it would NOT be a good idea to do this?

The steps I have in mind are basically this:

1) mysqldump --all-databases > alldb.sql
2) stop mysql
3) rm -rf /var/lib/mysql/*
4) mount the s3 bucket on /var/lib/mysql
5) start mysql
6) restore the alldb.sql dump

Thanks for your opinions on this!

Tim

--
GPG me!!

gpg --keyserver pool.sks-keyservers.net --recv-keys F186197B
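The six steps above could be sketched roughly as follows. This is only a sketch under assumptions: a CentOS-style SysV init, the old system-tables bootstrap via mysql_install_db, and a hypothetical bucket name and dump path. Because it stops mysqld and wipes the data directory, it refuses to do anything unless RUN_MIGRATION=1 is set.

```shell
#!/bin/sh
# Sketch of the proposed 6-step migration. Bucket name and dump path are
# placeholders, not recommendations. Guarded: a plain run only prints a
# dry-run notice and changes nothing.
set -eu

if [ "${RUN_MIGRATION:-0}" = "1" ]; then
    # 1) dump everything while mysqld is still up
    mysqldump --all-databases > /root/alldb.sql
    # 2) stop mysql
    service mysqld stop
    # 3) clear the old data dir
    rm -rf /var/lib/mysql/*
    # 4) mount the S3 bucket over the data dir (hypothetical bucket name)
    s3fs my-bacula-bucket /var/lib/mysql -o allow_other
    # 5) recreate the system tables in the now-empty data dir, then start
    mysql_install_db --user=mysql
    service mysqld start
    # 6) restore the dump
    mysql < /root/alldb.sql
    MODE="executed"
else
    MODE="dry-run"
    echo "dry run only; set RUN_MIGRATION=1 to execute"
fi
```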
On 01.10.2012 19:24, Tim Dunphy wrote:
> So I had an idea. What about uses the fuse based s3fs to mount an S3
> bucket on the local filesystem and use that as your mysql data dir?
> [...]

What a wild idea! :-) I don't think it will work; S3 is not really a filesystem, AFAIK.

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro
Hi,

700 GB is quite large. What is the total amount of data you have backed up to this point? It could be that the catalog data is building up; have you looked at

http://www.bacula.org/5.0.x-manuals/en/main/main/Catalog_Maintenance.html

While your setup should work in theory, have you tested restores? Searches through the catalog might be slower than you expect at 700 GB.

- jb

On 10/1/12, Tim Dunphy <bluethundr at gmail.com> wrote:
> So I had an idea. What about uses the fuse based s3fs to mount an S3
> bucket on the local filesystem and use that as your mysql data dir?
> [...]
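Before blaming disk space in general, it is worth seeing which catalog tables are actually eating it (File and Path are usually the big ones). A sketch, assuming the catalog database is named "bacula" and that credentials are available; it is guarded so a plain run only prints what it would do:

```shell
#!/bin/sh
# Show per-table on-disk size of the Bacula catalog, largest first.
# Database name "bacula" is an assumption; adjust to your setup.
set -eu

QUERY="SELECT table_name,
  ROUND((data_length + index_length) / 1024 / 1024 / 1024, 2) AS size_gb
  FROM information_schema.tables
  WHERE table_schema = 'bacula'
  ORDER BY (data_length + index_length) DESC;"

if [ "${RUN_CHECK:-0}" = "1" ]; then
    # -p prompts for the root password interactively
    mysql -u root -p -e "$QUERY"
else
    echo "dry run; set RUN_CHECK=1 on the database host to execute"
fi
```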
On 10/01/12 11:24 AM, Tim Dunphy wrote:
> So I had an idea. What about uses the fuse based s3fs to mount an S3
> bucket on the local filesystem and use that as your mysql data dir?
> [...]

Databases need fast, reliable, committed, random small-block writes. S3 doesn't provide that capability. Those sorts of fake file systems tend to only work well for sequential whole-file reads and writes.

--
john r pierce                            N 37, W 122
santa cruz ca                         mid-left coast
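John's point can be measured directly: InnoDB forces its redo log to stable storage on every commit, and synchronous small-block writes are exactly what s3fs handles worst. A rough probe using GNU dd with oflag=dsync; run it once against local disk and once against the s3fs mount and compare the elapsed times dd reports (TARGET_DIR is a placeholder, defaulting to /tmp):

```shell
#!/bin/sh
# Time 100 synchronous 4 KiB writes -- a crude stand-in for what InnoDB's
# redo log does on each transaction commit. oflag=dsync forces every block
# to stable storage before the next one is written.
set -eu

TARGET_DIR="${TARGET_DIR:-/tmp}"
TESTFILE="$TARGET_DIR/dsync_probe"

# dd prints its summary (bytes copied, seconds, throughput) on stderr
dd if=/dev/zero of="$TESTFILE" bs=4k count=100 oflag=dsync 2>&1 | tail -n 1

rm -f "$TESTFILE"
```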
Am 01.10.2012 um 20:24 schrieb Tim Dunphy:
> The steps I have in mind are basically this:
>
> 1) mysqldump --all-databases > alldb.sql
> 2) stop mysql
> 3) rm -rf /var/lib/mysql/*
> 4) mount the s3 bucket on /var/lib/mysql
> 5) start mysql
> 6) restore the alldb.sql dump

Your motivation is to save the resources that are occupied by /var/lib/mysql?

Please check the size of your "mysqldump --all-databases > alldb.sql". Is the dump also that big?

--
LF
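LF's comparison is worth making concrete: the logical dump can be far smaller than the data directory, since InnoDB's shared ibdata1 tablespace never shrinks after deletes. A sketch of the comparison (guarded, since it needs a running mysqld and credentials):

```shell
#!/bin/sh
# Compare the data dir's on-disk size with the logical dump size. A large
# gap usually means free space trapped inside ibdata1. Dry run by default.
set -eu

if [ "${RUN_COMPARE:-0}" = "1" ]; then
    du -sh /var/lib/mysql
    # count bytes in the logical dump without writing it anywhere
    mysqldump --all-databases | wc -c
else
    echo "dry run; set RUN_COMPARE=1 on the database host to execute"
fi
```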
On Oct 1, 2012, at 11:24 AM, Tim Dunphy wrote:
> The steps I have in mind are basically this:
>
> 1) mysqldump --all-databases > alldb.sql
> 2) stop mysql
> 3) rm -rf /var/lib/mysql/*
> 4) mount the s3 bucket on /var/lib/mysql
> 5) start mysql
> 6) restore the alldb.sql dump

Specifically regarding the above 6 steps: keep /var/lib/mysql. If you delete it you would lose everything and would have to set it up all over again. At some point, if you actually go with your idea and it works for you, you can simply drop the bacula database from within MySQL and recover the space used.

That said, and concurring with others, the notion of putting the database files on a remote server sounds like a terrible idea. Databases are useful/usable when they exist on a local, fast filesystem. Hard disk drives are cheap, and I think that you really should be fixing the actual cause of the problem.
I use Bacula and I prefer PostgreSQL over MySQL, but regardless, the database grows as the number of files backed up (without purges) grows. Thus you should probably be considering some type of rotation which regularly purges old backups.

Craig
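The rotation Craig describes is normally driven by the retention settings in the Director's Client resource: once File Retention or Job Retention expires and AutoPrune is on, Bacula removes the corresponding records from the catalog. A hypothetical fragment for bacula-dir.conf; the name, address, password, and periods are illustrative only, not recommendations:

```
Client {
  Name = backup-fd
  Address = backup.example.com
  Catalog = MyCatalog
  Password = "notarealpassword"
  File Retention = 60 days   # drop per-file catalog records after this
  Job Retention = 6 months   # drop job records after this
  AutoPrune = yes            # apply the retentions automatically
}
```

File Retention is the one that matters most for catalog size, since the File table typically dwarfs everything else.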