On a hunch, I attempted the "volume replace-brick <VOLNAME>
<BRICK> <NEW-BRICK> commit" command and, without much fanfare,
the volume information was updated:
> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 192.168.1.1:/srv/sda8 commit
> replace-brick commit successful
>
> gluster> volume info
>
> Volume Name: Repositories
> Type: Distributed-Replicate
> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
> Status: Started
> Number of Bricks: 2 x 2 = 4
> Transport-type: tcp
> Bricks:
> Brick1: 192.168.1.1:/srv/sda8
> Brick2: 192.168.1.2:/srv/sda7
> Brick3: 192.168.1.1:/srv/sdb7
> Brick4: 192.168.1.2:/srv/sdb7
>
> gluster> volume status
> Status of volume: Repositories
> Gluster process                                 Port    Online  Pid
> ------------------------------------------------------------------------------
> Brick 192.168.1.1:/srv/sda8                     24012   Y       13796
> Brick 192.168.1.2:/srv/sda7                     24009   Y       4946
> Brick 192.168.1.1:/srv/sdb7                     24010   Y       5438
> Brick 192.168.1.2:/srv/sdb7                     24010   Y       4951
> NFS Server on localhost                         38467   Y       13803
> Self-heal Daemon on localhost                   N/A     Y       13808
> NFS Server on 192.168.1.2                       38467   Y       7969
> Self-heal Daemon on 192.168.1.2                 N/A     Y       7974
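
For what it's worth, I'd assume that any lingering reference to the old brick in
glusterd's on-disk volume definition could be spotted with something like this
(only a sketch, assuming the default glusterd working directory,
/var/lib/glusterd):

> [eric@sn1 ~]$ # list any volume files that still mention each brick path
> [eric@sn1 ~]$ for b in sda7 sda8 ; do echo "== srv/$b ==" ; sudo grep -rl "srv/$b" /var/lib/glusterd/vols/Repositories/ ; done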
The XFS attributes are still intact on the old brick, however:
> [eric@sn1 ~]$ for x in sda7 sdb7 sda8 ; do sudo getfattr -m - /srv/$x 2> /dev/null ; done
> # file: srv/sda7
> trusted.afr.Repositories-client-0
> trusted.afr.Repositories-client-1
> trusted.afr.Repositories-io-threads
> trusted.afr.Repositories-replace-brick
> trusted.gfid
> trusted.glusterfs.dht
> trusted.glusterfs.volume-id
>
> # file: srv/sdb7
> trusted.afr.Repositories-client-2
> trusted.afr.Repositories-client-3
> trusted.gfid
> trusted.glusterfs.dht
> trusted.glusterfs.volume-id
>
> # file: srv/sda8
> trusted.afr.Repositories-io-threads
> trusted.afr.Repositories-replace-brick
> trusted.gfid
> trusted.glusterfs.volume-id
Is this intentional (i.e., leaving the attributes intact)? Or is this functionality
that has yet to be implemented?
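
If it turns out that cleaning up the retired brick is simply left to the
administrator, I assume something along these lines would strip the stale
attributes from /srv/sda7 by hand (only a sketch; the attribute names are the
ones reported by getfattr above):

> [eric@sn1 ~]$ for attr in trusted.afr.Repositories-client-0 \
>     trusted.afr.Repositories-client-1 trusted.afr.Repositories-io-threads \
>     trusted.afr.Repositories-replace-brick trusted.gfid \
>     trusted.glusterfs.dht trusted.glusterfs.volume-id ; do
>         # setfattr -x removes the named extended attribute from the old brick
>         sudo setfattr -x "$attr" /srv/sda7
>     done

As far as I understand it, trusted.glusterfs.volume-id is what would otherwise
prevent the path from being reused as a brick later on, so that one in
particular seems worth clearing, but I may be wrong about that.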
Eric Pretorious
Truckee, CA
>________________________________
> From: Eric <epretorious at yahoo.com>
>To: "gluster-users at gluster.org" <gluster-users at
gluster.org>
>Sent: Wednesday, September 5, 2012 5:05 PM
>Subject: [Gluster-users] migration operations: Stopping a migration
>
>
>I've created a distributed replicated volume:
>
>
>> gluster> volume info
>>
>> Volume Name: Repositories
>> Type: Distributed-Replicate
>> Volume ID: 926262ae-2aa6-4bf7-b19e-cf674431b06c
>> Status: Started
>> Number of Bricks: 2 x 2 = 4
>> Transport-type: tcp
>> Bricks:
>> Brick1: 192.168.1.1:/srv/sda7
>> Brick2: 192.168.1.2:/srv/sda7
>> Brick3: 192.168.1.1:/srv/sdb7
>> Brick4: 192.168.1.2:/srv/sdb7
>
>
>
>...and begun migrating data from one brick to another as a PoC:
>
>
>> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 192.168.1.1:/srv/sda8 start
>> replace-brick started successfully
>>
>> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 192.168.1.1:/srv/sda8 status
>> Number of files migrated = 5147        Current file= /centos/5.8/os/x86_64/CentOS/gnome-pilot-conduits-2.0.13-7.el5.x86_64.rpm
>>
>> gluster> volume replace-brick Repositories 192.168.1.1:/srv/sda7 192.168.1.1:/srv/sda8 status
>> Number of files migrated = 24631       Migration complete
>
>
>After the migration is finished, though, the list of bricks is wrong:
>
>
>
>> gluster> volume heal Repositories info
>> Heal operation on volume Repositories has been successful
>>
>> Brick 192.168.1.1:/srv/sda7
>> Number of entries: 0
>>
>> Brick 192.168.1.2:/srv/sda7
>> Number of entries: 0
>>
>> Brick 192.168.1.1:/srv/sdb7
>> Number of entries: 0
>>
>> Brick 192.168.1.2:/srv/sdb7
>> Number of entries: 0
>
>
>...and the XFS attributes are still intact on the old brick:
>
>
>> [eric@sn1 ~]$ for x in sda7 sdb7 sda8 ; do sudo getfattr -m - /srv/$x 2> /dev/null ; done
>> # file: srv/sda7
>> trusted.afr.Repositories-client-0
>> trusted.afr.Repositories-client-1
>> trusted.afr.Repositories-io-threads
>> trusted.afr.Repositories-replace-brick
>> trusted.gfid
>> trusted.glusterfs.dht
>> trusted.glusterfs.pump-path
>> trusted.glusterfs.volume-id
>>
>> # file: srv/sdb7
>> trusted.afr.Repositories-client-2
>> trusted.afr.Repositories-client-3
>> trusted.gfid
>> trusted.glusterfs.dht
>> trusted.glusterfs.volume-id
>>
>> # file: srv/sda8
>> trusted.afr.Repositories-io-threads
>> trusted.afr.Repositories-replace-brick
>> trusted.gfid
>> trusted.glusterfs.volume-id
>
>
>
>Have I missed a step? Or: Is this (i.e., clean-up) a bug or functionality
that hasn't been implemented yet?
>
>
>Eric Pretorious
>Truckee, CA
>
>_______________________________________________
>Gluster-users mailing list
>Gluster-users at gluster.org
>http://gluster.org/cgi-bin/mailman/listinfo/gluster-users
>
>
>