Hello, dear GlusterFS experts,

I've found that my cluster of only two nodes has run into big trouble. Here is some information about my GlusterFS setup:

[root@yq35 ~]# gluster --version
glusterfs 3.2.6 built on Mar 22 2012 10:44:28
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2011 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU General Public License.

[root@yq35 ~]# gluster volume info
Volume Name: volume1
Type: Replicate
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: 10.67.15.33:/media/data1/brick1
Brick2: 10.67.15.35:/media/data1/brick1
Options Reconfigured:
features.quota: off

When I use the 'peer status' command, one of the nodes shows strange information, as follows:

[root@yq33 ~]# gluster peer status
Number of Peers: 2

Hostname: 10.67.15.33
Uuid: c213f7ab-18c1-40e1-85c8-dd7ae97fad03
State: Peer in Cluster (Connected)

Hostname: 10.67.15.35
Uuid: 98949cd6-2b61-4ba8-8b67-76d8b58d4ce8
State: Peer in Cluster (Connected)

(Note: yq33 has the address 10.67.15.33, so it is listing itself as a peer.)

The same command, however, shows different information on the other node:

[root@yq35 ~]# gluster peer status
Number of Peers: 1

Hostname: 10.67.15.33
Uuid: c213f7ab-18c1-40e1-85c8-dd7ae97fad03
State: Accepted peer request (Connected)

(Note: yq35 has the address 10.67.15.35.)

I also find that there are two files in glusterd's peers directory on yq33:

[root@yq33 ~]# ll /etc/glusterd/peers/
total 8
-rw-r--r-- 1 root root 72 Oct 25 00:11 98949cd6-2b61-4ba8-8b67-76d8b58d4ce8
-rw-r--r-- 1 root root 72 Aug 22 17:38 c213f7ab-18c1-40e1-85c8-dd7ae97fad03
[root@yq33 ~]# cat /etc/glusterd/peers/c213f7ab-18c1-40e1-85c8-dd7ae97fad03
uuid=c213f7ab-18c1-40e1-85c8-dd7ae97fad03
state=3
hostname1=10.67.15.33
[root@yq33 ~]# cat /etc/glusterd/peers/98949cd6-2b61-4ba8-8b67-76d8b58d4ce8
uuid=98949cd6-2b61-4ba8-8b67-76d8b58d4ce8
state=3
hostname1=10.67.15.35

but there is only one such file on yq35:

[root@yq35 ~]# ll /etc/glusterd/peers/c213f7ab-18c1-40e1-85c8-dd7ae97fad03
-rw-r--r-- 1 root root 72 Oct 25 00:11 /etc/glusterd/peers/c213f7ab-18c1-40e1-85c8-dd7ae97fad03
[root@yq35 ~]# cat /etc/glusterd/peers/c213f7ab-18c1-40e1-85c8-dd7ae97fad03
uuid=c213f7ab-18c1-40e1-85c8-dd7ae97fad03
state=4
hostname1=10.67.15.33

My question is: can I restore my cluster by simply editing these files, that is, deleting the self-referencing file c213f7ab-18c1-40e1-85c8-dd7ae97fad03 on yq33 and changing 'state' to 3 in the corresponding file on yq35? If so, how exactly should I do it?

Thanks a lot.

Regards.
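P.S. Concretely, the repair I have in mind would be roughly the following. This is an untested sketch only: I'm assuming glusterd can be stopped and started with 'service glusterd' on these machines, and I would back up /etc/glusterd on both nodes before touching anything.

    # on both nodes: stop the management daemon before editing its state files
    service glusterd stop

    # on yq33: back up, then remove the peer file that refers to yq33 itself
    cp -a /etc/glusterd /root/glusterd.bak
    rm /etc/glusterd/peers/c213f7ab-18c1-40e1-85c8-dd7ae97fad03

    # on yq35: back up, then change state=4 ("Accepted peer request") to
    # state=3, which is what the fully joined peers on yq33 show
    cp -a /etc/glusterd /root/glusterd.bak
    sed -i 's/^state=4$/state=3/' /etc/glusterd/peers/c213f7ab-18c1-40e1-85c8-dd7ae97fad03

    # on both nodes: restart and check the result
    service glusterd start
    gluster peer status

Would these steps be enough, or is there something else that needs to be kept in sync between the two nodes?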