2017 Sep 13
3
one brick one volume process dies?
hi, here:
$ gluster vol info C-DATA
Volume Name: C-DATA
Type: Replicate
Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
Brick2: 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
Brick3: 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
Options...
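
The layout above ("Number of Bricks: 1 x 3 = 3") is a single replica set spanning three peers, so every file lives on all three bricks. A quick way to see which brick process has died is the status command; a minimal sketch against the volume above:

$ gluster volume status C-DATA
# The output lists every brick with its port, PID and an
# Online (Y/N) column; a brick showing "N" has lost its
# glusterfsd process, while the other two replicas keep
# serving data in the meantime.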
2017 Sep 13
0
one brick one volume process dies?
Please provide the output of gluster volume info, gluster volume status and
gluster peer status.
Apart from the above info, please also provide the glusterd logs and cmd_history.log.
Thanks
Gaurav
On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi everyone
>
> I have a 3-peer cluster with all vols in replica mode, 9 vols.
> What I see, unfortunately, is one brick
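
For reference, the diagnostics requested above could be gathered with something like the following; the log paths are the usual defaults under /var/log/glusterfs and may differ between versions:

# Cluster-wide state, as requested:
$ gluster volume info
$ gluster volume status
$ gluster peer status

# Management-daemon log and the CLI command history
# (assumed default locations; older releases name the
# glusterd log after its volfile):
$ less /var/log/glusterfs/glusterd.log
$ less /var/log/glusterfs/cmd_history.log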
2017 Sep 12
2
one brick one volume process dies?
hi everyone
I have a 3-peer cluster with all vols in replica mode, 9 vols.
What I see, unfortunately, is that one brick fails in one vol;
when it happens it is always the same vol on the same brick.
The command gluster vol status $vol shows the brick as not online.
Restarting glusterd with systemctl does not help; only a system
reboot seems to help, until it happens the next time.
How do I troubleshoot this?
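
A common first step here, not taken from this thread, is to respawn just the dead brick process instead of rebooting the node; a sketch, with $vol standing for the affected volume:

# Confirm which brick is offline:
$ gluster volume status $vol

# "start ... force" starts any brick processes that are not
# running and leaves already-online bricks untouched:
$ gluster volume start $vol force

# Then check that self-heal is catching the brick up again:
$ gluster volume heal $vol info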