Displaying 20 results from an estimated 150 matches for "gv0".
2017 Aug 16
0
Is transport=rdma tested with "stripe"?
...er] 0-epoll: Started thread with index 1
[2017-08-16 10:49:00.033092] I [socket.c:2415:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
[2017-08-16 10:49:03.032434] I [socket.c:2415:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
When I do:
gluster> volume create gv0 stripe 2 transport rdma gluster-s1-fdr:/data/brick1/gv0 gluster-s2-fdr:/data/brick1/gv0
volume create: gv0: success: please start the volume to access data
gluster> volume start gv0
volume start: gv0: success
The following appeared in glusterd.log. Note the "E" flag on the last line....
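To separate the two variables, one option (a sketch only; the test volume name and its brick directory below are hypothetical, the hosts are the ones from the post) is to create a plain distributed volume over RDMA on the same pair and see whether the EPOLLERR/disconnect messages still appear with stripe out of the picture:
# gluster volume create gvtest transport rdma gluster-s1-fdr:/data/brick1/gvtest gluster-s2-fdr:/data/brick1/gvtest
# gluster volume start gvtest
If the plain RDMA volume mounts and behaves, the problem is more likely in the stripe translator than in the RDMA transport itself.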
2017 Aug 15
2
Is transport=rdma tested with "stripe"?
On Tue, Aug 15, 2017 at 01:04:11PM +0000, Hatazaki, Takao wrote:
> Ji-Hyeon,
>
> You're saying that "stripe=2 transport=rdma" should work. Ok, that
> was firstly I wanted to know. I'll put together logs later this week.
Note that "stripe" is not tested much and practically unmaintained. We
do not advise you to use it. If you have large files that you
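For what it is worth, the commonly suggested alternative to stripe for large files on recent releases is sharding; a minimal sketch, assuming the gv0 volume above and otherwise default settings:
# gluster volume set gv0 features.shard on
# gluster volume set gv0 features.shard-block-size 64MB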
2017 Sep 29
1
Gluster geo replication volume is faulty
...replica 2 arbiter 1 volumes with 9 bricks
[root@gfs1 ~]# gluster volume info
Volume Name: gfsvol
Type: Distributed-Replicate
Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gfs2:/gfs/brick1/gv0
Brick2: gfs3:/gfs/brick1/gv0
Brick3: gfs1:/gfs/arbiter/gv0 (arbiter)
Brick4: gfs1:/gfs/brick1/gv0
Brick5: gfs3:/gfs/brick2/gv0
Brick6: gfs2:/gfs/arbiter/gv0 (arbiter)
Brick7: gfs1:/gfs/brick2/gv0
Brick8: gfs2:/gfs/brick2/gv0
Brick9: gfs3:/gfs/arbiter/gv0 (arbiter)
Options Reconfigured:
nfs.disable:...
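When a geo-replication session goes faulty, the per-session status usually narrows things down; a sketch, assuming a hypothetical slave host and slave volume name alongside the gfsvol master shown above:
# gluster volume geo-replication gfsvol geoslave::gfsvol_slave status detail
The detail output shows which brick's worker is Faulty, and the gsyncd log under /var/log/glusterfs/geo-replication/ on that node normally carries the actual error.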
2018 Mar 12
2
Can't heal a volume: "Please check if all brick processes are running."
Hello,
We have a very fresh gluster 3.10.10 installation.
Our volume is created as distributed volume, 9 bricks 96TB in total
(87TB after 10% of gluster disk space reservation)
For some reasons I can't "heal" the volume:
# gluster volume heal gv0
Launching heal operation to perform index self heal on volume gv0 has
been unsuccessful on bricks that are down. Please check if all brick
processes are running.
Which processes should be run on every brick for heal operation?
# gluster volume status
Status of volume: gv0
Gluster process...
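That message almost always means one or more glusterfsd brick processes are not running. A minimal sketch of the usual check-and-recover sequence (nothing assumed beyond the volume name from the post):
# gluster volume status gv0
# gluster volume start gv0 force
The status output lists every brick with its Online column; "start ... force" respawns any brick process that is down without touching bricks that are already running.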
2018 May 10
2
broken gluster config
...has now finished but I still have this,
I can't find anything so far telling me how to fix it. Looking at
http://staged-gluster-docs.readthedocs.io/en/release3.7.0beta1/Features/heal-info-and-split-brain-resolution/
I can't determine what file or dir in gv0 is actually the issue.
[root@glusterp1 gv0]# gluster volume heal gv0 info split-brain
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
Status: Connected
Number of entries in split-brain: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693>
Status: Connected
Number of ent...
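Once a gfid shows up under "info split-brain", the CLI can resolve it directly by policy; a sketch using the gfid from the output above and the latest-mtime policy (bigger-file and source-brick are the other documented choices):
# gluster volume heal gv0 split-brain latest-mtime gfid:eafb8799-4e7a-4264-9213-26997c5a4693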
2018 Mar 13
4
Can't heal a volume: "Please check if all brick processes are running."
...>>
>> We have a very fresh gluster 3.10.10 installation.
>> Our volume is created as distributed volume, 9 bricks 96TB in total
>> (87TB after 10% of gluster disk space reservation)
>>
>> For some reasons I can't "heal" the volume:
>> # gluster volume heal gv0
>> Launching heal operation to perform index self heal on volume gv0 has
>> been unsuccessful on bricks that are down. Please check if all brick
>> processes are running.
>>
>> Which processes should be run on every brick for heal operation?
>>
>> # gluster...
2017 Aug 18
1
Is transport=rdma tested with "stripe"?
...with index 1
> [2017-08-16 10:49:00.033092] I [socket.c:2415:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
> [2017-08-16 10:49:03.032434] I [socket.c:2415:socket_event_handler] 0-transport: EPOLLERR - disconnecting now
>
> When I do:
>
> gluster> volume create gv0 stripe 2 transport rdma gluster-s1-fdr:/data/brick1/gv0 gluster-s2-fdr:/data/brick1/gv0
> volume create: gv0: success: please start the volume to access data
> gluster> volume start gv0
> volume start: gv0: success
>
> The following appeared in glusterd.log. Note the "E"...
2018 May 10
0
broken gluster config
trying to read,
I can't understand what is wrong?
[root@glusterp1 gv0]# gluster volume heal gv0 info
Brick glusterp1:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connected
Number of entries: 1
Brick glusterp2:/bricks/brick1/gv0
<gfid:eafb8799-4e7a-4264-9213-26997c5a4693> - Is in split-brain
Status: Connecte...
2017 Oct 06
0
Gluster geo replication volume is faulty
...[root at gfs1 ~]# gluster volume info
> Volume Name: gfsvol
> Type: Distributed-Replicate
> Volume ID: c2fb4365-480b-4d37-8c7d-c3046bca7306
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x (2 + 1) = 9
> Transport-type: tcp
> Bricks:
> Brick1: gfs2:/gfs/brick1/gv0
> Brick2: gfs3:/gfs/brick1/gv0
> Brick3: gfs1:/gfs/arbiter/gv0 (arbiter)
> Brick4: gfs1:/gfs/brick1/gv0
> Brick5: gfs3:/gfs/brick2/gv0
> Brick6: gfs2:/gfs/arbiter/gv0 (arbiter)
> Brick7: gfs1:/gfs/brick2/gv0
> Brick8: gfs2:/gfs/brick2/gv0
> Brick9: gfs3:/gfs/arbiter/gv0 (arb...
2013 Nov 29
1
Self heal problem
...d
split-brains. However, during my initial testing when intentionally and
gracefully restarting the node "ned", a split-brain/self-heal error
occurred.
The log on "todd" and "rod" gives:
[2013-11-29 12:34:14.614456] E [afr-self-heal-data.c:1270:afr_sh_data_open_cbk] 0-gv0-replicate-0: open of <gfid:09b6d1d7-e583-4cee-93a4-4e972346ade3> failed on child gv0-client-2 (No such file or directory)
The reason is probably that the file was deleted and recreated with the
same file name during the time the node was offline, i.e. new inode and
thus new gfid.
Is this e...
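If that theory is right, the gfids on the bricks will disagree, and comparing the trusted.gfid xattr of the file on each brick confirms it. A sketch with a hypothetical brick path, since the post does not show one:
# getfattr -n trusted.gfid -e hex /data/brick1/gv0/path/to/the/file
Run it on each replica; differing values mean the copies are genuinely different files, and the stale copy (together with its .glusterfs hard link) typically has to be removed before self-heal can proceed.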
2018 Feb 04
1
Troubleshooting glusterfs
Please help troubleshooting glusterfs with the following setup:
Distributed volume without replication. Sharding enabled.
[root@master-5f81bad0054a11e8bf7d0671029ed6b8 uploads]# gluster volume info
Volume Name: gv0
Type: Distribute
Volume ID: 1a7e05f6-4aa8-48d3-b8e3-300637031925
Status: Started
Snapshot Count: 0
Number of Bricks: 27
Transport-type: tcp
Bricks:
Brick1: gluster3.qencode.com:/var/storage/brick/gv0
Brick2: encoder-376cac0405f311e884700671029ed6b8.qencode.com:
/var/storage/brick/gv0
Bric...
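With sharding enabled on a pure distribute volume, the effective shard settings are worth confirming before digging further; a sketch, assuming nothing beyond the volume name shown above:
# gluster volume get gv0 features.shard
# gluster volume get gv0 features.shard-block-size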
2018 Feb 04
1
Fwd: Troubleshooting glusterfs
...oubleshooting glusterfs with the following setup:
Distributed volume without replication. Sharding enabled.
# cat /etc/centos-release
CentOS release 6.9 (Final)
# glusterfs --version
glusterfs 3.12.3
[root@master-5f81bad0054a11e8bf7d0671029ed6b8 uploads]# gluster volume info
Volume Name: gv0
Type: Distribute
Volume ID: 1a7e05f6-4aa8-48d3-b8e3-300637031925
Status: Started
Snapshot Count: 0
Number of Bricks: 27
Transport-type: tcp
Bricks:
Brick1: gluster3.qencode.com:/var/storage/brick/gv0
Brick2: encoder-376cac0405f311e884700671029ed6b8.qencode.com:/var/storage/
brick/gv0
Bric...
2017 Sep 20
1
"Input/output error" on mkdir for PPC64 based client
...r/lib64/libgfrpc.so.0(rpc_clnt_submit-0x29300)[0x3fff9ebd69b0] (-->
/usr/lib64/glusterfs/3.10.5/xlator/protocol/client.so(+0x182e0)[0x3fff939182e0]
))))) 0-: 10.50.80.104:49152: ping timer event already removed
[2017-09-20 13:34:23.346070] D [MSGID: 0]
[dht-common.c:1002:dht_revalidate_cbk] 0-gv0-dht: revalidate lookup of /
returned with op_ret 0 [Structure needs cleaning]
[2017-09-20 13:34:23.347612] D [MSGID: 0] [dht-common.c:2699:dht_lookup]
0-gv0-dht: Calling fresh lookup for /tempdir3 on gv0-replicate-0
[2017-09-20 13:34:23.348013] D [MSGID: 0]
[client-rpc-fops.c:2936:client3_3_look...
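Debug-level client entries like the dht_revalidate and dht_lookup lines above only appear once the client log level has been raised; a sketch for reproducing the same kind of trace (a real volume option, with DEBUG assumed to be the level wanted):
# gluster volume set gv0 diagnostics.client-log-level DEBUG
The resulting messages land in the FUSE mount log under /var/log/glusterfs/ on the client.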
2018 Mar 13
0
Can't heal a volume: "Please check if all brick processes are running."
...ytriyev wrote:
> Hello,
>
> We have a very fresh gluster 3.10.10 installation.
> Our volume is created as distributed volume, 9 bricks 96TB in total
> (87TB after 10% of gluster disk space reservation)
>
> For some reasons I can't "heal" the volume:
> # gluster volume heal gv0
> Launching heal operation to perform index self heal on volume gv0 has
> been unsuccessful on bricks that are down. Please check if all brick
> processes are running.
>
> Which processes should be run on every brick for heal operation?
>
> # gluster volume status
> Status...
2018 May 22
1
split brain? but where?
I tried looking for a file of the same size and the gfid doesn't show up,
8><---
[root@glusterp2 fb]# pwd
/bricks/brick1/gv0/.glusterfs/ea/fb
[root@glusterp2 fb]# ls -al
total 3130892
drwx------. 2 root root 64 May 22 13:01 .
drwx------. 4 root root 24 May 8 14:27 ..
-rw-------. 1 root root 3294887936 May 4 11:07
eafb8799-4e7a-4264-9213-26997c5a4693
-rw-r--r--. 1 root root 1396 May 22 13:01 gfi...
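For a regular file, the entry under .glusterfs is a hard link to the real file on the same brick, so it can be mapped back to a path without guessing by size; a sketch using the paths shown above:
# find /bricks/brick1/gv0 -samefile /bricks/brick1/gv0/.glusterfs/ea/fb/eafb8799-4e7a-4264-9213-26997c5a4693 -not -path '*/.glusterfs/*'
(For a directory the .glusterfs entry is a symlink instead, so readlink on it shows the parent gfid and the directory name.)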
2017 Oct 17
0
gfid entries in volume heal info that do not heal
Attached is the heal log for the volume as well as the shd log.
>> Run these commands on all the bricks of the replica pair to get the attrs set on the backend.
[root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m . /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
getfattr: Removing leading '/' from absolute path names
# file: exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
security.selinux=0x73797374656d5f753a6f626a6563745f723a756e6c6162656c65645f743a733000
trusted.afr.dirty=0x00...
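The same getfattr has to be compared across every brick of that replica before the trusted.afr values can be interpreted; a sketch for another brick of the pair, with the brick path below being a placeholder for whatever that brick actually uses:
# getfattr -d -e hex -m . /<other-brick-path>/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2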
2017 Oct 17
3
gfid entries in volume heal info that do not heal
...ould be appreciated, but I'm definitely just wanting them gone. I forgot
> to mention earlier that the cluster is running 3.12 and was upgraded from
> 3.10; these files were likely stuck like this when it was on 3.10.
>
>
>
> [root@tpc-cent-glus1-081017 ~]# gluster volume info gv0
>
>
>
> Volume Name: gv0
>
> Type: Distributed-Replicate
>
> Volume ID: 8f07894d-e3ab-4a65-bda1-9d9dd46db007
>
> Status: Started
>
> Snapshot Count: 0
>
> Number of Bricks: 4 x (2 + 1) = 12
>
> Transport-type: tcp
>
> Bricks:
>
> Brick1: t...
2017 Oct 18
1
gfid entries in volume heal info that do not heal
...ymack at nsgdv.com> wrote:
> Attached is the heal log for the volume as well as the shd log.
>
> >> Run these commands on all the bricks of the replica pair to get the
> attrs set on the backend.
>
> [root@tpc-cent-glus1-081017 ~]# getfattr -d -e hex -m .
> /exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> getfattr: Removing leading '/' from absolute path names
> # file: exp/b1/gv0/.glusterfs/10/86/108694db-c039-4b7c-bd3d-ad6a15d811a2
> security.selinux=0x73797374656d5f753a6f626a6563
> 745f723a756e6c6162656c65645f743a733000
&...
2018 May 23
0
Rebalance state stuck or corrupted
We have had a rebalance operation going on for a few days. After a couple
days the rebalance status said "failed". We stopped the rebalance operation
by doing gluster volume rebalance gv0 stop. Rebalance log indicated gluster
did try to stop the rebalance. However, when we try now to stop the volume
or try to restart rebalance it says there's a rebalance operation going on
and volume can't be stopped. I tried restarting all the glusterfs-server
service (we're using Glust...
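When the volume still reports a rebalance in progress after a failed stop, the first things to look at are the task status and the glusterd state on every node; a minimal sketch (the service name differs by distribution: glusterd on RPM-based systems, glusterfs-server on Debian/Ubuntu):
# gluster volume rebalance gv0 status
# gluster volume status gv0
# systemctl restart glusterd
Restarting glusterd only restarts the management daemon, not the brick processes, so it is generally safe while clients stay mounted.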
2018 Mar 14
2
Can't heal a volume: "Please check if all brick processes are running."
...fresh gluster 3.10.10 installation.
>>> Our volume is created as distributed volume, 9 bricks 96TB in total
>>> (87TB after 10% of gluster disk space reservation)
>>>
>>> For some reasons I can't "heal" the volume:
>>> # gluster volume heal gv0
>>> Launching heal operation to perform index self heal on volume gv0 has
>>> been unsuccessful on bricks that are down. Please check if all brick
>>> processes are running.
>>>
>>> Which processes should be run on every brick for heal operation?
>>...