(GlusterFS 3.3.0, Ubuntu 12.04)
I have two nodes in my test setup: dev-storage1 and dev-storage2.
While dev-storage2 was powered down, I added a new volume "single1" on
dev-storage1 (using only bricks on dev-storage1).
However, when I brought dev-storage2 back online, "gluster volume info"
on that node doesn't show the newly-created volume.
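For the record, "single1" was created more or less like this, while
dev-storage2 was powered off:

root@dev-storage1:~# gluster volume create single1 dev-storage1:/disk/storage1/single1
root@dev-storage1:~# gluster volume start single1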
So my question is, how do I go about getting the node info in sync?
I found "gluster volume sync", but this by itself doesn't seem to
be enough,
regardless of which node I run it on:
root@dev-storage1:~# gluster volume sync all
please delete all the volumes before full sync
root@dev-storage1:~# gluster volume sync single1
please delete all the volumes before full sync
root@dev-storage2:~# gluster volume sync single1
please delete all the volumes before full sync
root@dev-storage2:~# gluster volume sync all
please delete all the volumes before full sync
Hmm, I don't want to delete all volumes in case the deletions replicate
to the other nodes too. What am I supposed to do now? Perhaps "peer
detach" first, then delete everything on dev-storage2, then sync?
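Something like this, maybe? (Completely untested; I'm guessing that the
volume definitions live under /var/lib/glusterd/vols, and I don't know
whether "peer detach" will even be allowed while dev-storage2 still
holds bricks for "fast" and "safe".)

root@dev-storage1:~# gluster peer detach dev-storage2
root@dev-storage2:~# service glusterfs-server stop
root@dev-storage2:~# rm -rf /var/lib/glusterd/vols/*
root@dev-storage2:~# service glusterfs-server start
root@dev-storage1:~# gluster peer probe dev-storage2
root@dev-storage2:~# gluster volume sync dev-storage1 all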
The volume info is pasted below.
(Side question: why is there even an option to sync a single volume
rather than "all", if you have to delete all the volumes first anyway?)
Thanks,
Brian.
---xxx---
root@dev-storage1:~# gluster volume info

Volume Name: fast
Type: Distribute
Volume ID: 864fd12d-d879-4310-abaa-a2cb99b7f695
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: dev-storage1:/disk/storage1/fast
Brick2: dev-storage2:/disk/storage2/fast

Volume Name: single1
Type: Distribute
Volume ID: 74d62eb4-176e-4671-8471-779d909e19f0
Status: Started
Number of Bricks: 1
Transport-type: tcp
Bricks:
Brick1: dev-storage1:/disk/storage1/single1

Volume Name: safe
Type: Replicate
Volume ID: 47a8f326-0e48-4a71-9cfe-f9ef8d555db7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: dev-storage1:/disk/storage1/safe
Brick2: dev-storage2:/disk/storage2/safe

root@dev-storage1:~# gluster peer status
Number of Peers: 1

Hostname: dev-storage2
Uuid: e95a9441-aec3-41a5-8d3d-615b61f3f2d3
State: Peer in Cluster (Connected)
---xxx---
root@dev-storage2:~# gluster volume info

Volume Name: safe
Type: Replicate
Volume ID: 47a8f326-0e48-4a71-9cfe-f9ef8d555db7
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: dev-storage1:/disk/storage1/safe
Brick2: dev-storage2:/disk/storage2/safe

Volume Name: fast
Type: Distribute
Volume ID: 864fd12d-d879-4310-abaa-a2cb99b7f695
Status: Started
Number of Bricks: 2
Transport-type: tcp
Bricks:
Brick1: dev-storage1:/disk/storage1/fast
Brick2: dev-storage2:/disk/storage2/fast

root@dev-storage2:~# gluster peer status
Number of Peers: 1

Hostname: 10.0.1.1
Uuid: a1a8a3cd-468e-44d5-8c08-23821db4c80f
State: Peer in Cluster (Connected)