Displaying 20 results from an estimated 2000 matches similar to: "volume start: gv01: failed: Quorum not met. Volume operation not allowed."
2018 Apr 09
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hi,
You need at least 3 nodes to have quorum enabled. In a 2-node setup you need
to disable quorum so that you can still use the volume when one of the
nodes goes down.
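For a volume named gv01 (the one from this thread), the relevant commands
would be along these lines; a sketch, not the poster's exact invocation:

# disable client-side and server-side quorum on the volume
gluster volume set gv01 cluster.quorum-type none
gluster volume set gv01 cluster.server-quorum-type none

Keep in mind that running without quorum on two nodes trades availability
for split-brain risk.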
On Mon, Apr 9, 2018, 09:02 TomK <tomkcpr at mdevsys.com> wrote:
> Hey All,
>
> In a two-node glusterfs setup, with one node down, I can't use the second
> node to mount the volume. I understand this is
2018 Apr 11
3
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/9/2018 2:45 AM, Alex K wrote:
Hey Alex,
With two nodes the setup works, but both sides went down whenever one node was
missing. Setting the two params below to none solved my issue:
cluster.quorum-type: none
cluster.server-quorum-type: none
Thank you for that.
Cheers,
Tom
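For reference, whether the new values took effect can be checked per volume;
a quick sketch, assuming the volume is named gv01:

gluster volume get gv01 cluster.quorum-type
gluster volume get gv01 cluster.server-quorum-type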
> Hi,
>
> You need at least 3 nodes to have quorum enabled. In a 2-node setup you
> need to
2018 Apr 11
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On Wed, Apr 11, 2018 at 4:35 AM, TomK <tomkcpr at mdevsys.com> wrote:
> On 4/9/2018 2:45 AM, Alex K wrote:
> Hey Alex,
>
> With two nodes the setup works, but both sides went down whenever one node is
> missing. Setting the two params below to none solved my issue:
>
> cluster.quorum-type: none
> cluster.server-quorum-type: none
>
> Yes, this disables
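The alternative usually suggested instead of disabling quorum is a third,
storage-light arbiter node; a sketch with hypothetical hosts and brick paths:

# two data bricks plus one metadata-only arbiter brick
gluster volume create gv01 replica 3 arbiter 1 \
  nfs01:/bricks/gv01 nfs02:/bricks/gv01 arb01:/bricks/gv01
gluster volume start gv01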
2018 May 08
1
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/11/2018 11:54 AM, Alex K wrote:
Hey Guys,
Returning to this topic: after disabling the quorum:
cluster.quorum-type: none
cluster.server-quorum-type: none
I've run into a number of gluster errors (see below).
I'm using gluster as the backend for my NFS storage. I have gluster
running on two nodes, nfs01 and nfs02. It's mounted on /n on each host.
The path /n is
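A typical fuse mount for that layout might look like the following; a sketch,
with the volume name gv01 and the fallback option being assumptions:

# mount the volume at /n, falling back to nfs02 if nfs01 is unreachable
mount -t glusterfs -o backup-volfile-servers=nfs02 nfs01:/gv01 /n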
2018 Feb 07
2
IP-based peer probe volume create error
On 8/02/2018 4:45 AM, Gaurav Yadav wrote:
> After seeing the command history, I could see that you have 3 nodes, and
> firstly you are peer probing 51.15.90.60 and 163.172.151.120 from
> 51.15.77.14.
> So you already have a 3-node cluster here; after all this you are going
> to node 2 and again peer probing 51.15.77.14.
> Ideally it should work with the above steps, but due to some
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On 3/19/2018 10:52 AM, Rik Theys wrote:
> Hi,
>
> On 03/19/2018 03:42 PM, TomK wrote:
>> On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
>> Removing NFS or NFS Ganesha from the equation, not very impressed with my
>> own setup either. For the writes it's doing, that's a lot of CPU usage
>> in top. Seems bottlenecked via a single execution core somewhere trying
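One way to pin down where those cycles go is gluster's built-in profiler;
a sketch, assuming the volume is gv01:

gluster volume profile gv01 start
# run the small-file workload, then inspect per-brick latency stats
gluster volume profile gv01 info
gluster volume profile gv01 stop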
2018 Apr 09
2
Gluster cluster on two networks
Hi all!
I have set up a replicated/distributed gluster cluster, 2 x (2 + 1).
Centos 7 and gluster version 3.12.6 on server.
All machines have two network interfaces and are connected to two different networks:
10.10.0.0/16 (with hostnames in /etc/hosts, gluster version 3.12.6)
192.168.67.0/24 (with ldap, gluster version 3.13.1)
Gluster cluster was created on the 10.10.0.0/16 net, gluster peer
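Since peers are resolved by hostname, which network gluster traffic uses
follows name resolution; a sketch of /etc/hosts entries pinning peers to the
10.10.0.0/16 net (the specific addresses here are made up):

# /etc/hosts - resolve gluster peers on the storage network
10.10.0.1   urd-gds-000
10.10.0.2   urd-gds-001
10.10.0.3   urd-gds-002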
2018 Apr 10
0
Gluster cluster on two networks
Marcus,
Can you share the server-side gluster peer probe and client-side mount
command lines?
On Tue, Apr 10, 2018 at 12:36 AM, Marcus Pedersén <marcus.pedersen at slu.se>
wrote:
> Hi all!
>
> I have set up a replicated/distributed gluster cluster, 2 x (2 + 1).
>
> Centos 7 and gluster version 3.12.6 on server.
>
> All machines have two network interfaces and are connected to
2018 Apr 10
1
Gluster cluster on two networks
Yes,
In first server (urd-gds-001):
gluster peer probe urd-gds-000
gluster peer probe urd-gds-002
gluster peer probe urd-gds-003
gluster peer probe urd-gds-004
gluster pool list (from urd-gds-001):
UUID                                  Hostname     State
bdbe4622-25f9-4ef1-aad1-639ca52fc7e0  urd-gds-002  Connected
2a48a3b9-efa0-4fb7-837f-c800f04bf99f  urd-gds-003  Connected
ad893466-ad09-47f4-8bb4-4cea84085e5b  urd-gds-004
2018 May 22
1
[SOLVED] [Nfs-ganesha-support] volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey All,
Appears I solved this one and NFS mounts now work on all my clients. No
issues since fixing it a few hours back.
RESOLUTION
SELinux was to blame for the trouble (auditd merely logged the AVC denials
below). Noticed this in the logs on 2 of the 3 NFS servers (nfs01, nfs02,
nfs03):
type=AVC msg=audit(1526965320.850:4094): avc: denied { write } for
pid=8714 comm="ganesha.nfsd" name="nfs_0"
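The standard way to turn such AVC denials into a local SELinux policy module
is audit2allow; a sketch (whether this matches the poster's actual fix is an
assumption, as the resolution text is truncated above):

# build and load a local policy module from the logged denials
grep ganesha.nfsd /var/log/audit/audit.log | audit2allow -M ganesha_nfsd
semodule -i ganesha_nfsd.pp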
2017 Jul 20
1
Error while mounting gluster volume
Hi Team,
While mounting the gluster volume with the 'mount -t glusterfs' command, the
mount fails.
When we checked the log file, we found the entries below:
[1970-01-02 10:54:04.420065] E [MSGID: 101187]
[event-epoll.c:391:event_register_epoll] 0-epoll: failed to add fd(=7) to
epoll fd(=0) [Invalid argument]
[1970-01-02 10:54:04.420140] W [socket.c:3095:socket_connect] 0-: failed to
register
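Two things worth noting from that log: the 1970 timestamps suggest the system
clock was never set, and a failing fuse mount can be re-run with verbose
client logging; a sketch, with server and volume names as placeholders:

mount -t glusterfs -o log-level=DEBUG,log-file=/var/log/glusterfs/mnt-debug.log \
  server1:/myvol /mnt/gluster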
2018 Mar 19
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
Removing NFS or NFS Ganesha from the equation, not very impressed with my
own setup either. For the writes it's doing, that's a lot of CPU usage
in top. Seems bottlenecked via a single execution core somewhere trying
to facilitate read / writes to the other bricks.
Writes to the gluster FS from within one of the gluster participating
bricks:
2018 Mar 19
3
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi,
On 03/19/2018 03:42 PM, TomK wrote:
> On 3/19/2018 5:42 AM, Ondrej Valousek wrote:
> Removing NFS or NFS Ganesha from the equation, not very impressed with my
> own setup either. For the writes it's doing, that's a lot of CPU usage
> in top. Seems bottlenecked via a single execution core somewhere trying
> to facilitate read / writes to the other bricks.
>
>
2018 Mar 19
2
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Hi,
As I posted in my previous emails, glusterfs can never match NFS (especially
async NFS) on small-file performance/latency. That's inherent to the design.
Nothing you can do about it.
Ondrej
-----Original Message-----
From: gluster-users-bounces at gluster.org [mailto:gluster-users-bounces at gluster.org] On Behalf Of Rik Theys
Sent: Monday, March 19, 2018 10:38 AM
To: gluster-users at
2018 Feb 24
0
Failed heal volume
When I try to run heal on the volume I get the log errors below, and 3221
files are not healing:
[2018-02-24 15:32:00.915219] W [socket.c:3216:socket_connect] 0-glusterfs:
Error disabling sockopt IPV6_V6ONLY: "Protocollo non disponibile" (Italian:
"Protocol not available")
[2018-02-24 15:32:00.915854] I [MSGID: 101190]
[event-epoll.c:613:event_dispatch_epoll_worker] 0-epoll: Started thread
with index 1
[2018-02-24 15:32:01.925714] E
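For reference, the commands typically used to inspect and retrigger healing;
a sketch, with <volname> as a placeholder:

gluster volume heal <volname> info              # list entries pending heal
gluster volume heal <volname> info split-brain  # list split-brain entries
gluster volume heal <volname> full              # force a full heal crawl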
2017 Aug 15
2
Is transport=rdma tested with "stripe"?
On Tue, Aug 15, 2017 at 01:04:11PM +0000, Hatazaki, Takao wrote:
> Ji-Hyeon,
>
> You're saying that "stripe=2 transport=rdma" should work. Ok, that
> was the first thing I wanted to know. I'll put together logs later this week.
Note that "stripe" is not tested much and practically unmaintained. We
do not advise you to use it. If you have large files that you
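The feature usually recommended in place of stripe for large files is
sharding; a sketch, with the volume name as a placeholder:

gluster volume set <volname> features.shard on
gluster volume set <volname> features.shard-block-size 64MB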
2017 Jun 15
1
peer probe failures
Hi,
I'm having a similar issue; were you able to solve it?
Thanks.
Hey all,
I've got a strange problem going on here. I've installed glusterfs-server
on Ubuntu 16.04:
glusterfs-client/xenial,now 3.7.6-1ubuntu1 amd64 [installed,automatic]
glusterfs-common/xenial,now 3.7.6-1ubuntu1 amd64 [installed,automatic]
glusterfs-server/xenial,now 3.7.6-1ubuntu1 amd64 [installed]
I can
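When probes fail, the usual first checks are the management daemon and its
log; a sketch for the Ubuntu install above:

systemctl status glusterfs-server   # service name on this Ubuntu package; glusterd on most other distros
tail -f /var/log/glusterfs/glusterd.log   # watch while re-running the probe
gluster peer status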
2018 Feb 12
0
Failed to get quota limits
Hi,
Can you provide more information, like the volume configuration, the
quota.conf file, and the log files?
On Sat, Feb 10, 2018 at 1:05 AM, mabi <mabi at protonmail.ch> wrote:
> Hello,
>
> I am running GlusterFS 3.10.7 and just noticed by doing a "gluster volume
quota <volname> list" that my quotas on that volume are broken. The command
returns no output and no errors
2018 Feb 13
2
Failed to get quota limits
Hi Hari,
Sorry for not providing more details from the start. Below you will find all the relevant log entries and info. Regarding the quota.conf file: I found one for my volume, but it is a binary file. Is it supposed to be binary or text?
Regards,
M.
*** gluster volume info myvolume ***
Volume Name: myvolume
Type: Replicate
Volume ID: e7a40a1b-45c9-4d3c-bb19-0c59b4eceec5
Status:
2018 Feb 09
3
Failed to get quota limits
Hello,
I am running GlusterFS 3.10.7 and just noticed by doing a "gluster volume quota <volname> list" that my quotas on that volume are broken. The command returns no output and no errors but by looking in /var/log/glusterfs/cli.log I found the following errors:
[2018-02-09 19:31:24.242324] E [cli-cmd-volume.c:1674:cli_cmd_quota_handle_list_all] 0-cli: Failed to get quota limits for
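For reference, a minimal reproduction of the failing check; a sketch, using
the volume name given later in this thread:

gluster volume quota myvolume list
# CLI-side failures land in /var/log/glusterfs/cli.log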