Displaying 14 results from an estimated 14 matches for "0gluster".
2017 Aug 01
4
connection to 10.5.6.32:49155 failed (Connection refused); disconnecting socket
...lenty of these on all three peers.
hi guys
I've recently upgraded from 3.8 to 3.10 and I'm seeing weird
behavior.
I see: $ gluster vol status $_vol detail; takes a long time and
mostly times out.
I do:
$ gluster vol heal $_vol info
and I see:
Brick
10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA
Status: Transport endpoint is not connected
Number of entries: -
Brick
10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA
Status: Connected
Number of entries: 0
Brick
10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA
Status: Transport endpoint is not connected
Nu...
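A status of "Transport endpoint is not connected" means glusterd cannot reach the brick process on that peer. A minimal first check, assuming the volume is named CYTO-DATA (per the brick paths) and using the port 49155 from the subject line, run on 10.5.6.32:
$ gluster vol status CYTO-DATA   # which bricks report N under Online?
$ ss -tlnp | grep 49155          # is anything still listening on the brick port?
$ ps -ef | grep glusterfsd       # are the brick daemons running at all?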
2017 Aug 02
0
connection to 10.5.6.32:49155 failed (Connection refused); disconnecting socket
...>
> I've recently upgraded from 3.8 to 3.10 and I'm seeing weird
> behavior.
> I see: $ gluster vol status $_vol detail; takes a long time and
> mostly times out.
> I do:
> $ gluster vol heal $_vol info
> and I see:
> Brick
> 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA
> Status: Transport endpoint is not connected
> Number of entries: -
>
> Brick
> 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA
> Status: Connected
> Number of entries: 0
>
> Brick
> 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA...
2017 Sep 13
3
one brick one volume process dies?
...istinfo/gluster-users>
>
>
hi, here:
$ gluster vol info C-DATA
Volume Name: C-DATA
Type: Replicate
Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1:
10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
Brick2:
10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
Brick3:
10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
Options Reconfigured:
performance.md-cache-timeout: 600
performance.cache-invalidation: on
performance.stat-prefetch: on
features.cache-invalidation-timeout:...
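The excerpt cuts off mid-option; on 3.8 and later, gluster volume get lists a volume's options without relying on the truncated "Options Reconfigured" block:
$ gluster volume get C-DATA all                            # every option with its current value
$ gluster volume get C-DATA performance.md-cache-timeout   # a single option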
2017 Sep 28
1
one brick one volume process dies?
...ily (or not) I now see, a week after:
gluster vol status CYTO-DATA
Status of volume: CYTO-DATA
Gluster process                                                      TCP Port  RDMA Port  Online  Pid
------------------------------------------------------------------------------------------------------
Brick 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA   49161     0          Y       1743719
Brick 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA  49152     0          Y       20438
Brick 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-CYTO-DATA   49152     0          Y       5607
Self-heal D...
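The Pid column can be cross-checked on the owning peer to confirm the brick process glusterd reports as online is really alive; for example, on 10.5.6.32 with the PID from the table above:
$ ps -fp 5607    # should show a glusterfsd process serving 0GLUSTER-CYTO-DATA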
2017 Sep 13
2
one brick one volume process dies?
...> Volume Name: C-DATA
>> Type: Replicate
>> Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 1 x 3 = 3
>> Transport-type: tcp
>> Bricks:
>> Brick1: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
>> Brick2: 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
>> Brick3: 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
>> Options Reconfigured:
>> performance.md-cache-timeout: 600
>> performance.cache-invalidation: on
>> performance.st...
2017 Sep 13
0
one brick one volume process dies?
...$ gluster vol info C-DATA
>
> Volume Name: C-DATA
> Type: Replicate
> Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
> Brick2: 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
> Brick3: 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
> Options Reconfigured:
> performance.md-cache-timeout: 600
> performance.cache-invalidation: on
> performance.stat-prefetch: on
> fea...
2017 Sep 13
0
one brick one volume process dies?
...> Type: Replicate
>>> Volume ID: 18ffba73-532e-4a4d-84da-fceea52f8c2e
>>> Status: Started
>>> Snapshot Count: 0
>>> Number of Bricks: 1 x 3 = 3
>>> Transport-type: tcp
>>> Bricks:
>>> Brick1: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
>>> Brick2: 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
>>> Brick3: 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-C-DATA
>>> Options Reconfigured:
>>> performance.md-cache-timeout: 600
>>> performance.cache-invalidation: on
>...
2017 Sep 04
2
heal info OK but statistics not working
...s worked again.
can you not reproduce it?
$ gluster vol info QEMU-VMs
Volume Name: QEMU-VMs
Type: Replicate
Volume ID: 8709782a-daa5-4434-a816-c4e0aef8fef2
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 3 = 3
Transport-type: tcp
Bricks:
Brick1:
10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
Brick2:
10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
Brick3:
10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
storage.owner-gid: 107
storage.owner-uid: 107
performance.readdir-ahead: on
geo-re...
2017 Sep 04
0
heal info OK but statistics not working
...uster vol info QEMU-VMs
>
> Volume Name: QEMU-VMs
> Type: Replicate
> Volume ID: 8709782a-daa5-4434-a816-c4e0aef8fef2
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 1 x 3 = 3
> Transport-type: tcp
> Bricks:
> Brick1: 10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
> Brick2: 10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
> Brick3: 10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
> Options Reconfigured:
> transport.address-family: inet
> nfs.disable: on
> storage.owner-gid: 107
> storage.owner-uid: 107
>...
2017 Sep 13
0
one brick one volume process dies?
Please provide the output of gluster volume info, gluster volume status and
gluster peer status.
Apart from the above info, please provide the glusterd logs and cmd_history.log.
Thanks
Gaurav
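For reference, those logs normally live under /var/log/glusterfs; depending on the release, the glusterd log is named glusterd.log or etc-glusterfs-glusterd.vol.log:
$ less /var/log/glusterfs/glusterd.log      # glusterd's own log
$ less /var/log/glusterfs/cmd_history.log   # every gluster CLI command run on this peer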
On Tue, Sep 12, 2017 at 2:22 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi everyone
>
> I have a 3-peer cluster with all vols in replica mode, 9 vols.
> What I see, unfortunately, is one brick
2017 Sep 12
2
one brick one volume process dies?
hi everyone
I have a 3-peer cluster with all vols in replica mode, 9 vols.
What I see, unfortunately, is one brick fail in one vol;
when it happens it's always the same vol on the same brick.
Command: gluster vol status $vol - would show the brick not online.
Restarting glusterd with systemctl does not help; only a
system reboot seems to help, until it happens the next time.
How to troubleshoot this
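A less drastic step than rebooting, assuming only the one brick process has died: gluster vol start ... force respawns any missing brick processes without disturbing the bricks that are still up:
$ gluster vol status $vol        # confirm which brick shows N under Online
$ gluster vol start $vol force   # restart only the dead brick process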
2017 Sep 04
0
heal info OK but statistics not working
Please provide the output of gluster volume info, gluster volume status and
gluster peer status.
On Mon, Sep 4, 2017 at 4:07 PM, lejeczek <peljasz at yahoo.co.uk> wrote:
> hi all
>
> this:
> $ gluster vol heal $_vol info
> outputs ok and exit code is 0
> But if I want to see statistics:
> $ gluster vol heal $_vol statistics
> Gathering crawl statistics on volume GROUP-WORK
2017 Sep 04
2
heal info OK but statistics not working
hi all
this:
$ gluster vol heal $_vol info
outputs ok and exit code is 0
But if I want to see statistics:
$ gluster vol heal $_vol statistics
Gathering crawl statistics on volume GROUP-WORK has been
unsuccessful on bricks that are down. Please check if all
brick processes are running.
I suspect gluster's inability to cope with a situation where
one peer (which is not even a brick for a single vol on
0
modifying data via fuse causes heal problem
hi there
I run 3.10.5 and have 3 peers with vols in replication.
Each time I copy some data on a client (which is a peer too)
I see something like this:
# for QEMU-VMs:
Gathering count of entries to be healed on volume QEMU-VMs
has been successful
Brick
10.5.6.32:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
Number of entries: 0
Brick
10.5.6.49:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
Number of entries: 2
Brick
10.5.6.100:/__.aLocalStorages/0/0-GLUSTERs/0GLUSTER-QEMU-VMs
Number of entries: 1
# end of QEMU-VMs:
which heals (automatically) later OK, but why would this
happen in the...
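The per-brick "Number of entries" output above has the shape of heal-count output, so one way to watch these transient entries drain after a copy, assuming the same QEMU-VMs volume:
$ gluster vol heal QEMU-VMs statistics heal-count   # pending entries per brick
$ gluster vol heal QEMU-VMs info                    # the specific files/gfids still pending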