We have multiple servers (approximately 10), each with about 100 GB of data in
the /var/lib/mysql directory. Excluding tar, mysqldump, and replication, how do
we back up these databases to a remote machine and store them date-wise? (The
remote machine has a 2 TB HDD.)

Currently tar is not feasible because the data is too large, and the same goes
for mysqldump.

Suggestions will be of great help.

--
Regards
Agnello D'souza
On 08/14/2010 12:51 PM, Agnello George wrote:
> [original question snipped]

Assuming you installed using LVM partitions (and that you left space for
snapshots ;) ): stop the database, take an LVM snapshot, restart the database,
rsync the MySQL data directory to the other machine, then release the snapshot.

--
Benjamin Franz
On 14/08/2010 21:51, Agnello George wrote:
> [original question snipped]

The problem I have encountered with this kind of setup is that huge backups
take a long time, and (depending on the backend in use) that can mean a long
downtime for the production instance. I generally don't find it feasible to do
backups on production instances. Instead, I suggest setting up a slave
instance on another machine. The backup procedure would roughly be:

* stop the slave instance
* make an LVM snapshot
* start the slave instance again
* do whatever you want with the snapshot data

The advantages as I see them: backup peak I/O is moved away from the
production instance (you do keep the constant replication overhead, though as
a variation you could start the replication thread only periodically); the
production instance does not have to be shut down, i.e. it is not affected
directly by backups; and the slave instance can also act as a failover.

Regards,
Markus
On Sun, 15 Aug 2010, Agnello George wrote:
> Subject: [CentOS] best ways to do mysql backup
> [original question snipped]

Would there be some way of tee-ing off the SQL statements to a remote file in
real time? In effect you would be creating a text-file dump of the databases
in real time.

Kind Regards,

Keith Roberts
Agnello George wrote:
> [original question snipped]

Why not mysqldump? I suggest mysqldump to a local disk, then backing that up
to the remote machine. I use it with __bacula__.

--
Tuptus
Why not mysqldump + binlog + rsync?

Tang Jianwei

On 08/15/2010 03:51 AM, Agnello George wrote:
> [original question snipped]
> currently tar is not feasible as the data is too huge and the same goes
> with mysqldump
>
> suggestion will be of great help

Not really an answer, but two good books on the subject:

http://oreilly.com/catalog/9780596807290/
http://oreilly.com/catalog/9780596101718/

Matt