Maybe remove the peer glusterp3 via "gluster peer detach" and then re-add it?
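Something like this, assuming the bricks on glusterp1 and glusterp2 are
healthy and you are happy to re-sync glusterp3's brick from them (the "force"
keyword is needed because glusterp3 still hosts a brick of volume1):

    # run on glusterp1
    gluster peer detach glusterp3 force
    gluster peer probe glusterp3
    gluster volume heal volume1 full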
On 14 October 2016 at 12:16, Thing <thing.thing at gmail.com> wrote:
> I seem to have a broken volume on glusterp3 which I don't seem to be able to
> fix. How do I fix it, please?
>
> =======
> [root at glusterp1 /]# ls -l /data1
> total 4
> -rw-r--r--. 2 root root 0 Dec 14 2015 file1
> -rw-r--r--. 2 root root 0 Dec 14 2015 file2
> -rw-r--r--. 2 root root 0 Dec 14 2015 file3
> -rw-r--r--. 2 root root 0 Dec 14 2015 file.ipa1
> [root at glusterp1 /]# gluster volume status
> Staging failed on glusterp3.graywitch.co.nz. Error: Volume volume1 does not
> exist
>
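That staging error means glusterd on glusterp3 has no record of volume1 at
all. A quick check, assuming the default state directory: compare what each
node has under /var/lib/glusterd/vols; glusterp1 and glusterp2 will probably
show volume1 while glusterp3 shows nothing.

    ls /var/lib/glusterd/vols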
> [root at glusterp1 /]# gluster
> gluster> volume info
>
> Volume Name: volume1
> Type: Replicate
> Volume ID: 91eef74e-4016-4bbe-8e86-01c88c64593f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: glusterp1.graywitch.co.nz:/data1
> Brick2: glusterp2.graywitch.co.nz:/data1
> Brick3: glusterp3.graywitch.co.nz:/data1
> Options Reconfigured:
> performance.readdir-ahead: on
> gluster> exit
> [root at glusterp1 /]# gluster volume heal volume1 info
> Brick glusterp1.graywitch.co.nz:/data1
> Status: Connected
> Number of entries: 0
>
> Brick glusterp2.graywitch.co.nz:/data1
> Status: Connected
> Number of entries: 0
>
> Brick glusterp3.graywitch.co.nz:/data1
> Status: Connected
> Number of entries: 0
>
> [root at glusterp1 /]# gluster volume info
>
> Volume Name: volume1
> Type: Replicate
> Volume ID: 91eef74e-4016-4bbe-8e86-01c88c64593f
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: glusterp1.graywitch.co.nz:/data1
> Brick2: glusterp2.graywitch.co.nz:/data1
> Brick3: glusterp3.graywitch.co.nz:/data1
> Options Reconfigured:
> performance.readdir-ahead: on
> [root at glusterp1 /]# gluster volume heal volume1 full
> Launching heal operation to perform full self heal on volume volume1 has
> been unsuccessful on bricks that are down. Please check if all brick
> processes are running.
> [root at glusterp1 /]#
> ============
>
> On 14 October 2016 at 12:40, Thing <thing.thing at gmail.com> wrote:
>>
>> So glusterp3 is in a reject state,
>>
>> [root at glusterp1 /]# gluster peer status
>> Number of Peers: 2
>>
>> Hostname: glusterp2.graywitch.co.nz
>> Uuid: 93eebe2c-9564-4bb0-975f-2db49f12058b
>> State: Peer in Cluster (Connected)
>> Other names:
>> glusterp2
>>
>> Hostname: glusterp3.graywitch.co.nz
>> Uuid: 5d59b704-e42f-46c6-8c14-cf052c489292
>> State: Peer Rejected (Connected)
>> Other names:
>> glusterp3
>> [root at glusterp1 /]#
>>
>> =======
>>
>> [root at glusterp2 /]# gluster peer status
>> Number of Peers: 2
>>
>> Hostname: glusterp1.graywitch.co.nz
>> Uuid: 4ece8509-033e-48d1-809f-2079345caea2
>> State: Peer in Cluster (Connected)
>> Other names:
>> glusterp1
>>
>> Hostname: glusterp3.graywitch.co.nz
>> Uuid: 5d59b704-e42f-46c6-8c14-cf052c489292
>> State: Peer Rejected (Connected)
>> Other names:
>> glusterp3
>> [root at glusterp2 /]#
>>
>> =======
>>
>> [root at glusterp3 /]# gluster peer status
>> Number of Peers: 2
>>
>> Hostname: glusterp1.graywitch.co.nz
>> Uuid: 4ece8509-033e-48d1-809f-2079345caea2
>> State: Peer Rejected (Connected)
>> Other names:
>> glusterp1
>>
>> Hostname: glusterp2.graywitch.co.nz
>> Uuid: 93eebe2c-9564-4bb0-975f-2db49f12058b
>> State: Peer Rejected (Connected)
>> Other names:
>> glusterp2
>>
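That "Peer Rejected (Connected)" state usually means glusterp3's copy of the
cluster configuration has drifted out of sync with the other two nodes. The
usual recovery is a sketch like the following -- back up /var/lib/glusterd on
glusterp3 first -- which wipes everything under /var/lib/glusterd except
glusterd.info, restarts glusterd, and re-probes a good node:

    # on glusterp3
    systemctl stop glusterd
    cd /var/lib/glusterd
    # remove everything except glusterd.info (which holds this node's UUID)
    find . -mindepth 1 -maxdepth 1 ! -name glusterd.info -exec rm -rf {} +
    systemctl start glusterd
    gluster peer probe glusterp1
    systemctl restart glusterd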
>> =========
>> on glusterp3 gluster is dead and will not start,
>>
>> [root at glusterp3 /]# systemctl status gluster
>> ● gluster.service
>> Loaded: not-found (Reason: No such file or directory)
>> Active: inactive (dead)
>>
>> [root at glusterp3 /]# systemctl restart gluster
>> Failed to restart gluster.service: Unit gluster.service failed to load: No
>> such file or directory.
>> [root at glusterp3 /]# systemctl enable gluster
>> Failed to execute operation: Access denied
>> [root at glusterp3 /]# systemctl enable gluster.service
>> Failed to execute operation: Access denied
>> [root at glusterp3 /]# systemctl start gluster.service
>> Failed to start gluster.service: Unit gluster.service failed to load: No
>> such file or directory.
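The unit shipped by the glusterfs-server package is glusterd.service, not
gluster.service, which is why systemctl reports "No such file or directory".
Assuming a stock 3.8.4 RPM install, these should work on glusterp3:

    systemctl enable glusterd
    systemctl start glusterd
    systemctl status glusterd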
>>
>> =========
>>
>> [root at glusterp3 /]# rpm -qa |grep gluster
>> glusterfs-client-xlators-3.8.4-1.el7.x86_64
>> glusterfs-server-3.8.4-1.el7.x86_64
>> nfs-ganesha-gluster-2.3.3-1.el7.x86_64
>> glusterfs-cli-3.8.4-1.el7.x86_64
>> glusterfs-api-3.8.4-1.el7.x86_64
>> glusterfs-fuse-3.8.4-1.el7.x86_64
>> glusterfs-ganesha-3.8.4-1.el7.x86_64
>> glusterfs-3.8.4-1.el7.x86_64
>> centos-release-gluster38-1.0-1.el7.centos.noarch
>> glusterfs-libs-3.8.4-1.el7.x86_64
>> [root at glusterp3 /]#
>>
>> ?
>>
>> On 14 October 2016 at 12:31, Thing <thing.thing at gmail.com> wrote:
>>>
>>> Hmm seem I have something rather inconsistent,
>>>
>>> [root at glusterp1 /]# gluster volume create gv1 replica 3
>>> glusterp1:/brick1/gv1 glusterp2:/brick1/gv1 glusterp3:/brick1/gv1
>>> volume create: gv1: failed: Host glusterp3 is not in 'Peer in Cluster'
>>> state
>>> [root at glusterp1 /]# gluster peer probe glusterp3
>>> peer probe: success. Host glusterp3 port 24007 already in peer list
>>> [root at glusterp1 /]# gluster peer probe glusterp2
>>> peer probe: success. Host glusterp2 port 24007 already in peer list
>>> [root at glusterp1 /]# gluster volume create gv1 replica 3
>>> glusterp1:/brick1/gv1 glusterp2:/brick1/gv1 glusterp3:/brick1/gv1
>>> volume create: gv1: failed: /brick1/gv1 is already part of a volume
>>> [root at glusterp1 /]# gluster volume show
>>> unrecognized word: show (position 1)
>>> [root at glusterp1 /]# gluster volume
>>> add-brick    delete           info     quota          reset      status
>>> barrier      geo-replication  list     rebalance      set        stop
>>> clear-locks  heal             log      remove-brick   start      sync
>>> create       help             profile  replace-brick  statedump  top
>>> [root at glusterp1 /]# gluster volume list
>>> volume1
>>> [root at glusterp1 /]# gluster volume start gv0
>>> volume start: gv0: failed: Volume gv0 does not exist
>>> [root at glusterp1 /]# gluster volume start gv1
>>> volume start: gv1: failed: Volume gv1 does not exist
>>> [root at glusterp1 /]# gluster volume status
>>> Status of volume: volume1
>>> Gluster process                                TCP Port  RDMA Port  Online  Pid
>>> ------------------------------------------------------------------------------
>>> Brick glusterp1.graywitch.co.nz:/data1         49152     0          Y       2958
>>> Brick glusterp2.graywitch.co.nz:/data1         49152     0          Y       2668
>>> NFS Server on localhost                        N/A       N/A        N       N/A
>>> Self-heal Daemon on localhost                  N/A       N/A        Y       1038
>>> NFS Server on glusterp2.graywitch.co.nz        N/A       N/A        N       N/A
>>> Self-heal Daemon on glusterp2.graywitch.co.nz  N/A       N/A        Y       676
>>>
>>> Task Status of Volume volume1
>>> ------------------------------------------------------------------------------
>>> There are no active volume tasks
>>>
>>> [root at glusterp1 /]#
>>>
>>> On 14 October 2016 at 12:20, Thing <thing.thing at gmail.com> wrote:
>>>>
>>>> I deleted a gluster volume gv0 as I wanted to make it thin provisioned.
>>>>
>>>> I have rebuilt "gv0" but I am getting a failure,
>>>>
>>>> =========
>>>> [root at glusterp1 /]# df -h
>>>> Filesystem                     Size  Used Avail Use% Mounted on
>>>> /dev/mapper/centos-root         20G  3.9G   17G  20% /
>>>> devtmpfs                       1.8G     0  1.8G   0% /dev
>>>> tmpfs                          1.8G   12K  1.8G   1% /dev/shm
>>>> tmpfs                          1.8G  8.9M  1.8G   1% /run
>>>> tmpfs                          1.8G     0  1.8G   0% /sys/fs/cgroup
>>>> /dev/mapper/centos-tmp         3.9G   33M  3.9G   1% /tmp
>>>> /dev/mapper/centos-home         50G   41M   50G   1% /home
>>>> /dev/mapper/centos-data1       120G   33M  120G   1% /data1
>>>> /dev/sda1                      997M  312M  685M  32% /boot
>>>> /dev/mapper/centos-var          20G  401M   20G   2% /var
>>>> tmpfs                          368M     0  368M   0% /run/user/1000
>>>> /dev/mapper/vol_brick1-brick1  100G   33M  100G   1% /brick1
>>>> [root at glusterp1 /]# mkdir /brick1/gv0
>>>> [root at glusterp1 /]# gluster volume create gv0 replica 3
>>>> glusterp1:/brick1/gv0 glusterp2:/brick1/gv0 glusterp3:/brick1/gv0
>>>> volume create: gv0: failed: Host glusterp3 is not in 'Peer in Cluster'
>>>> state
>>>> [root at glusterp1 /]# gluster peer probe glusterp3
>>>> peer probe: success. Host glusterp3 port 24007 already in peer list
>>>> [root at glusterp1 /]# gluster volume create gv0 replica 3
>>>> glusterp1:/brick1/gv0 glusterp2:/brick1/gv0 glusterp3:/brick1/gv0
>>>> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>>>> [root at glusterp1 /]# gluster volume start gv0
>>>> volume start: gv0: failed: Volume gv0 does not exist
>>>> [root at glusterp1 /]# gluster volume create gv0 replica 3
>>>> glusterp1:/brick1/gv0 glusterp2:/brick1/gv0 glusterp3:/brick1/gv0 --force
>>>> unrecognized option --force
>>>> [root at glusterp1 /]# gluster volume create gv0 replica 3
>>>> glusterp1:/brick1/gv0 glusterp2:/brick1/gv0 glusterp3:/brick1/gv0
>>>> volume create: gv0: failed: /brick1/gv0 is already part of a volume
>>>> [root at glusterp1 /]#
>>>> =========
>>>>
>>>> Obviously something isn't happy here, but I have no idea what.
>>>>
>>>> How do I fix this, please?
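The "already part of a volume" error is glusterd refusing to reuse a brick
directory that still carries the volume-id extended attribute (and the
.glusterfs directory) left over from the deleted gv0. A sketch of the usual
cleanup, run on each of the three nodes, assuming nothing else still uses
/brick1/gv0:

    # clear the leftover gluster metadata on the brick directory
    setfattr -x trusted.glusterfs.volume-id /brick1/gv0
    setfattr -x trusted.gfid /brick1/gv0
    rm -rf /brick1/gv0/.glusterfs

Once that is done and glusterp3 is back in 'Peer in Cluster' state, the
volume create should go through. Note also there is no "--force" option in
the gluster CLI; the force keyword goes bare at the end of the command.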
>>>
>>>
>>
>
>
> _______________________________________________
> Gluster-users mailing list
> Gluster-users at gluster.org
> http://www.gluster.org/mailman/listinfo/gluster-users
--
Lindsay