Julien Groselle
2011-Aug-01 07:00 UTC
[Gluster-users] [Gluster 3.2.1] Replication issues on a two bricks volume
Hello,

I installed GlusterFS one month ago, and replication has many issues.

First of all, our infrastructure: 2 storage arrays of 8TB in replication mode. We keep our backup files on these arrays, so 6TB of data.

I want to replicate the data onto the second storage array, so I used this command:

# gluster volume rebalance REP_SVG migrate-data start

Gluster started to replicate, and in 2 weeks we had 2.6TB of data replicated. But now replication fails after about one day, with many errors.

So I have two questions: is there any option or command to speed up replication? We have to keep backing up our servers, so during the replication many files are rotated/moved/added. Is it a problem for Gluster to replicate data during a backup session?

For now, we can't replicate any more data! We need help.

FYI:

# gluster --version
glusterfs 3.2.1 built on Jun 12 2011 12:29:36
Repository revision: v3.2.1
Copyright (c) 2006-2010 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU Affero General Public License.

# uname -a
Linux toomba 2.6.32-5-amd64 #1 SMP Wed Jan 12 03:40:32 UTC 2011 x86_64 GNU/Linux

# cat /etc/debian_version
6.0.2

# gluster peer status
Number of Peers: 1

Hostname: kaiserstuhl-svg.coe.int
Uuid: 5b79b4bc-c8d2-48d4-bd43-37991197ab47
State: Peer in Cluster (Connected)

# gluster volume info all

Volume Name: REP_SVG
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: toomba-svg.coe.int:/storage/backup
Brick2: kaiserstuhl-svg.coe.int:/storage/backup
Options Reconfigured:
performance.write-behind-window-size: 16MB
performance.cache-size: 256MB
diagnostics.brick-log-level: WARNING

*Julien Groselle*
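When debugging replication failures like this, the client and brick logs are the first place to look. A minimal sketch, assuming the default GlusterFS 3.2 log locations on Debian under /var/log/glusterfs/ (the mount point used in the filenames is hypothetical):

```shell
# Client-side log of the FUSE mount; the filename mirrors the mount
# point, e.g. a volume mounted at /mnt/backup logs to mnt-backup.log.
tail -n 100 /var/log/glusterfs/mnt-backup.log

# Brick-side log on each server; the filename mirrors the brick path
# (/storage/backup -> storage-backup.log).
tail -n 100 /var/log/glusterfs/bricks/storage-backup.log

# Note: with diagnostics.brick-log-level set to WARNING, only
# warnings and errors will appear in the brick logs.
```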
Anand Avati
2011-Aug-01 07:18 UTC
[Gluster-users] [Gluster 3.2.1] Replication issues on a two bricks volume
Can you please provide logs for the errors you are facing?

Also, rebalance was not the right operation to be done in your situation. You don't seem to have a distributed setup (but a pure replicate instead), in which case rebalance is really not achieving anything for you.

Avati

On Mon, Aug 1, 2011 at 12:30 PM, Julien Groselle <julien.groselle at gmail.com> wrote:
> I want to replicate the data onto the second storage array, so I used this command:
> # gluster volume rebalance REP_SVG migrate-data start
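Since REP_SVG is a pure two-brick replicate volume, the usual way to populate the second brick in GlusterFS 3.2 is to trigger AFR self-heal by walking the volume through a native client mount, rather than running rebalance. A hedged sketch; the /mnt/rep_svg mount point is an assumption:

```shell
# Mount the volume via the native (FUSE) client.
mount -t glusterfs toomba-svg.coe.int:/REP_SVG /mnt/rep_svg

# Stat every file through the mount. In GlusterFS 3.2 (before the
# self-heal daemon in 3.3), this per-file lookup is what triggers
# replicate (AFR) self-heal, copying files to the brick missing them.
find /mnt/rep_svg -noleaf -print0 | xargs --null stat >/dev/null
```

Because the heal is driven file by file, it can be resumed after interruption by simply re-running the find.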