Displaying 16 results from an estimated 16 matches for "gv01".
2018 Apr 09
2
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...nction then replicate what
changed to the first (primary) when it's back online? Or should I just
go for a third node to allow for this?
Also, how safe is it to set the following to none?
cluster.quorum-type: auto
cluster.server-quorum-type: server
[root at nfs01 /]# gluster volume start gv01
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
[root at nfs01 /]#
[root at nfs01 /]# gluster volume status
Status of volume: gv01
Gluster process TCP Port RDMA Port Online Pid
---------------------------------------------------------------...
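For reference, a sketch of how the quorum settings in question could be relaxed on a two-node volume (volume name gv01 as above; running a replica 2 setup without quorum risks split-brain):

  gluster volume set gv01 cluster.quorum-type none
  gluster volume set all cluster.server-quorum-type none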
2018 Apr 09
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...he first (primary) when it's back online? Or should I just
> go for a third node to allow for this?
>
> Also, how safe is it to set the following to none?
>
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
>
>
> [root at nfs01 /]# gluster volume start gv01
> volume start: gv01: failed: Quorum not met. Volume operation not allowed.
> [root at nfs01 /]#
>
>
> [root at nfs01 /]# gluster volume status
> Status of volume: gv01
> Gluster process TCP Port RDMA Port Online Pid
>
> ----------------...
2018 Apr 11
3
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...it's back online? Or should I just
> go for a third node to allow for this?
>
> Also, how safe is it to set the following to none?
>
> cluster.quorum-type: auto
> cluster.server-quorum-type: server
>
>
> [root at nfs01 /]# gluster volume start gv01
> volume start: gv01: failed: Quorum not met. Volume operation not allowed.
> [root at nfs01 /]#
>
>
> [root at nfs01 /]# gluster volume status
> Status of volume: gv01
> Gluster process                              TCP Port  RDMA Port
> On...
2018 Apr 11
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
>> ...just
>> go for a third node to allow for this?
>>
>> Also, how safe is it to set the following to none?
>>
>> cluster.quorum-type: auto
>> cluster.server-quorum-type: server
>>
>>
>> [root at nfs01 /]# gluster volume start gv01
>> volume start: gv01: failed: Quorum not met. Volume operation not allowed.
>> [root at nfs01 /]#
>>
>>
>> [root at nfs01 /]# gluster volume status
>> Status of volume: gv01
>> Gluster process TC...
2018 May 08
1
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...m using gluster as the backend for my NFS storage. I have gluster
running on two nodes, nfs01 and nfs02. It's mounted on /n on each host.
The path /n is in turn shared out by NFS Ganesha. It's a two-node
setup with quorum disabled as noted below:
[root at nfs02 ganesha]# mount|grep gv01
nfs02:/gv01 on /n type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
[root at nfs01 glusterfs]# mount|grep gv01
nfs01:/gv01 on /n type fuse.glusterfs
(rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
Gluster al...
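A fuse mount like the ones shown above would typically come from something along these lines (a sketch; hostnames and mountpoint taken from the post):

  mount -t glusterfs nfs01:/gv01 /n
  # or persistently via /etc/fstab:
  nfs01:/gv01  /n  glusterfs  defaults,_netdev  0 0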
2018 May 22
1
[SOLVED] [Nfs-ganesha-support] volume start: gv01: failed: Quorum not met. Volume operation not allowed.
...ckend for my NFS storage. I have gluster
> running on two nodes, nfs01 and nfs02. It's mounted on /n on each host.
> The path /n is in turn shared out by NFS Ganesha. It's a two-node
> setup with quorum disabled as noted below:
>
> [root at nfs02 ganesha]# mount|grep gv01
> nfs02:/gv01 on /n type fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072)
>
>
> [root at nfs01 glusterfs]# mount|grep gv01
> nfs01:/gv01 on /n type fuse.glusterfs
> (rw,relatime,user_id=0,group_id=0,default_permissions,all...
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...ot at nfs01 n]# dd if=/dev/zero of=./some-file.bin bs=1M count=10000 oflag=direct
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 96.3228 s, 109 MB/s
[root at nfs01 n]# rm some-file.bin
rm: remove regular file ‘some-file.bin’? y
[ Via XFS ]
[root at nfs01 n]# cd /bricks/0/gv01/
[root at nfs01 gv01]# dd if=/dev/zero of=./some-file.bin bs=1M count=10000 oflag=direct
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 44.79 s, 234 MB/s
[root at nfs01 gv01]#
top - 12:49:48 up 1 day, 9:39, 2 users, load average: 0.66, 1.15, 1.82
Tasks: 165 total,...
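The dd runs above exercise large sequential writes; a rough small-file test on the same fuse mount might look like this (a sketch with hypothetical file names, not from the post):

  cd /n
  time bash -c 'for i in $(seq 1 1000); do dd if=/dev/zero of=small-$i.bin bs=4k count=1 conv=fsync status=none; done'
  rm -f small-*.bin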
2018 Feb 08
5
self-heal trouble after changing arbiter brick
...as follows:
# gluster volume info
Volume Name: myvol
Type: Distributed-Replicate
Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8
Status: Started
Snapshot Count: 0
Number of Bricks: 3 x (2 + 1) = 9
Transport-type: tcp
Bricks:
Brick1: gv0:/data/glusterfs
Brick2: gv1:/data/glusterfs
Brick3: gv4:/data/gv01-arbiter (arbiter)
Brick4: gv2:/data/glusterfs
Brick5: gv3:/data/glusterfs
Brick6: gv1:/data/gv23-arbiter (arbiter)
Brick7: gv4:/data/glusterfs
Brick8: gv5:/data/glusterfs
Brick9: pluto:/var/gv45-arbiter (arbiter)
Options Reconfigured:
nfs.disable: on
transport.address-family: inet
storage.owner-gid...
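After swapping an arbiter brick in a layout like this, heal state is usually inspected, and if necessary re-triggered, with something like (a sketch; volume name myvol from the post):

  gluster volume heal myvol info
  gluster volume heal myvol full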
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...SHR S %CPU %MEM TIME+ COMMAND
14691 root 20 0 107948 608 512 R 25.0 0.0 0:34.29 dd
1334 root 20 0 2694264 61076 2228 S 2.7 1.6 283:55.96 ganesha.nfsd
The result of a dd command directly against the brick FS itself is of
course much better:
[root at nfs01 gv01]# dd if=/dev/zero of=./some-file.bin
5771692+0 records in
5771692+0 records out
2955106304 bytes (3.0 GB) copied, 35.3425 s, 83.6 MB/s
[root at nfs01 gv01]# pwd
/bricks/0/gv01
[root at nfs01 gv01]#
Tried a few tweak options with no effect:
[root at nfs01 glusterfs]# gluster volume info
Volume N...
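When tweak options show no effect, per-brick call latency can be examined with gluster's built-in profiler, roughly as follows (a sketch; volume gv01 as above):

  gluster volume profile gv01 start
  gluster volume profile gv01 info
  gluster volume profile gv01 stop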
2018 Feb 09
0
self-heal trouble after changing arbiter brick
...ust 200 are not related to gfids (and yes, number of gfids is well beyond 64999):
# grep -c gfid heal-info.fpack
80578
# grep -v gfid heal-info.myvol
Brick gv0:/data/glusterfs
Status: Connected
Number of entries: 0
Brick gv1:/data/glusterfs
Status: Connected
Number of entries: 0
Brick gv4:/data/gv01-arbiter
Status: Connected
Number of entries: 0
Brick gv2:/data/glusterfs
/testset/13f/13f27c303b3cb5e23ee647d8285a4a6d.pack
/testset/05c - Possibly undergoing heal
/testset/b99 - Possibly undergoing heal
/testset/dd7 - Possibly undergoing heal
/testset/0b8 - Possibly undergoing heal
/testset/f...
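The counts above come from a captured heal report; the capture itself is just (a sketch; file name as in the post):

  gluster volume heal myvol info > heal-info.myvol
  grep -c gfid heal-info.myvol   # entries identified only by gfid
  grep -v gfid heal-info.myvol   # entries with a resolvable path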
2018 Feb 09
0
self-heal trouble after changing arbiter brick
...vol
> Type: Distributed-Replicate
> Volume ID: 43ba517a-ac09-461e-99da-a197759a7dc8
> Status: Started
> Snapshot Count: 0
> Number of Bricks: 3 x (2 + 1) = 9
> Transport-type: tcp
> Bricks:
> Brick1: gv0:/data/glusterfs
> Brick2: gv1:/data/glusterfs
> Brick3: gv4:/data/gv01-arbiter (arbiter)
> Brick4: gv2:/data/glusterfs
> Brick5: gv3:/data/glusterfs
> Brick6: gv1:/data/gv23-arbiter (arbiter)
> Brick7: gv4:/data/glusterfs
> Brick8: gv5:/data/glusterfs
> Brick9: pluto:/var/gv45-arbiter (arbiter)
> Options Reconfigured:
> nfs.disable: on
> tra...
2018 Mar 19
2
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi,
As I posted in my previous emails, glusterfs can never match NFS (especially an async one) on small-file performance and latency. That's a consequence of the design.
Nothing you can do about it.
Ondrej
-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Rik Theys
Sent: Monday, March 19, 2018 10:38 AM
To: gluster-users at
2018 Feb 09
1
self-heal trouble after changing arbiter brick
...ust 200 are not related to gfids (and yes, number of gfids is well beyond 64999):
# grep -c gfid heal-info.fpack
80578
# grep -v gfid heal-info.myvol
Brick gv0:/data/glusterfs
Status: Connected
Number of entries: 0
Brick gv1:/data/glusterfs
Status: Connected
Number of entries: 0
Brick gv4:/data/gv01-arbiter
Status: Connected
Number of entries: 0
Brick gv2:/data/glusterfs
/testset/13f/13f27c303b3cb5e23ee647d8285a4a6d.pack
/testset/05c - Possibly undergoing heal
/testset/b99 - Possibly undergoing heal
/testset/dd7 - Possibly undergoing heal
/testset/0b8 - Possibly undergoing heal
/testset/f...
2018 Mar 20
2
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
...ile.bin bs=1M count=10000 oflag=direct
> 10000+0 records in
> 10000+0 records out
> 10485760000 bytes (10 GB) copied, 96.3228 s, 109 MB/s
> [root at nfs01 n]# rm some-file.bin
> rm: remove regular file ‘some-file.bin’? y
>
> [ Via XFS ]
> [root at nfs01 n]# cd /bricks/0/gv01/
> [root at nfs01 gv01]# dd if=/dev/zero of=./some-file.bin bs=1M count=10000 oflag=direct
> 10000+0 records in
> 10000+0 records out
> 10485760000 bytes (10 GB) copied, 44.79 s, 234 MB/s
> [root at nfs01 gv01]#
>
>
>
> top - 12:49:48 up 1 day, 9:39, 2 users, load...
2018 Mar 19
3
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi,
On 03/19/2018 03:42 PM, TomK wrote:
> On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
> Removing NFS or NFS Ganesha from the equation, I'm not very impressed with my
> own setup either. For the writes it's doing, that's a lot of CPU usage
> in top. It seems bottlenecked on a single execution core somewhere, trying
> to facilitate reads/writes to the other bricks.
>
>
2013 May 24
0
Problem After adding Bricks
...- 1 torque torque 90287 May 18 20:47 cache
-rw------- 1 torque torque 667180 May 18 23:35 file_mapping
drwx------ 3 torque torque 8192 May 23 11:31 mirror_trash
drwx------ 3 torque torque 8192 May 23 11:31 mirror_trash
drwx------ 3 torque torque 8192 May 23 11:31 mirror_trash
Volume Name: gv01
Type: Distributed-Replicate
Volume ID: 03cf79bd-c5d8-467d-9f31-6c3c40dd94e2
Status: Started
Number of Bricks: 11 x 2 = 22
Transport-type: tcp
Bricks:
Brick1: fs01:/bricks/b01
Brick2: fs02:/bricks/b01
Brick3: fs01:/bricks/b02
Brick4: fs02:/bricks/b02
Brick5: fs01:/bricks/b03
Brick6: fs02:/bricks/b03...
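After an add-brick on a distributed volume like this, existing data is only spread across the new bricks once a rebalance runs, roughly (a sketch; volume gv01 as above):

  gluster volume rebalance gv01 start
  gluster volume rebalance gv01 status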