Displaying 20 results from an estimated 900 matches similar to: "Lose gnfs connection during test"
2017 Sep 08
1
pausing scrub crashed scrub daemon on nodes
Hi,
I am using glusterfs 3.10.1 with 30 nodes each with 36 bricks and 10 nodes
each with 16 bricks in a single cluster.
By default I have paused the scrub process so that it only runs manually. The
first time I tried to run scrub-on-demand it was running fine,
but after some time I decided to pause the scrub process due to high CPU usage
and users reporting that folder listing was taking time.
But scrub
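The scrub controls being described can be sketched as follows; this is a minimal illustration assuming a volume named "myvol" (the volume name is hypothetical, the commands are the standard BitRot scrubber controls):

```shell
# Check the scrubber's current state for the volume
gluster volume bitrot myvol scrub status

# Kick off an on-demand scrub run
gluster volume bitrot myvol scrub ondemand

# Pause the scrubber (e.g. when CPU usage gets too high)
gluster volume bitrot myvol scrub pause

# Resume it later
gluster volume bitrot myvol scrub resume
```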
2023 Mar 14
1
can't set up geo-replication: can't fetch slave details
Hi,
using Gluster 9.2 on debian 11 I'm trying to set up geo replication. I
am following this guide:
https://docs.gluster.org/en/main/Administrator-Guide/Geo-Replication/#password-less-ssh
I have a volume called "ansible" which is only a small volume and
seemed like an ideal test case.
Firstly, for a bit of feedback (this isn't my issue as I worked around
it) I had this
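The setup steps from the linked guide can be sketched roughly as below, using the "ansible" volume from the post; the secondary host "backup1" and secondary volume "ansible-slave" are made-up names for illustration:

```shell
# On the primary: generate and distribute the geo-replication ssh keys
gluster-georep-sshkey generate

# Create the geo-replication session (push-pem distributes the pem keys)
gluster volume geo-replication ansible backup1::ansible-slave create push-pem

# Start the session and check its status
gluster volume geo-replication ansible backup1::ansible-slave start
gluster volume geo-replication ansible backup1::ansible-slave status
```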
2017 Nov 13
2
snapshot mount fails in 3.12
Hi,
quick question about snapshot mounting: Were there changes in 3.12 that
were not mentioned in the release notes for snapshot mounting?
I recently upgraded from 3.10 to 3.12 on CentOS (using
centos-release-gluster312). The upgrade worked flawlessly. The volume
works fine too. But mounting a snapshot fails with those two error messages:
[2017-11-13 08:46:02.300719] E
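For context, snapshot mounting normally works along these lines; a minimal sketch assuming a snapshot "snap1" of a volume "myvol" on host "server1" (all three names are hypothetical):

```shell
# Activate the snapshot first; inactive snapshots cannot be mounted
gluster snapshot activate snap1

# Mount the activated snapshot over FUSE; snapshots are exposed
# under the special /snaps/<snapname>/<volname> volfile path
mount -t glusterfs server1:/snaps/snap1/myvol /mnt/snap
```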
2017 Nov 13
0
snapshot mount fails in 3.12
Hi Richard,
Thanks for posting this.
This issue is caused by a regression in the 3.12.0 version [1], and
is already fixed in the 3.12.3 version [2] (3.12.3 is tagged now with a
couple more subdirectory-mount-related fixes).
[1] - https://bugzilla.redhat.com/show_bug.cgi?id=1501235
[2] - https://review.gluster.org/#/c/18506/
If you don't want to change the versions, then please remove
2017 Dec 29
1
cannot mount with glusterfs-fuse after NFS-Ganesha enabled
Hi,
I've created a 2-node GlusterFS test setup (Gluster 3.8).
Without enabling NFS-Ganesha, when I try to mount from a client using
glusterfs option - everything works.
However, after enabling NFS-Ganesha, when I try to mount from a client
using the glusterfs option (fuse), it fails with the following output (when
using the log-file option):
[2017-12-28 08:15:30.109110] I [MSGID: 100030]
2018 May 08
1
mount failing client to gluster cluster.
Hi,
On a debian 9 client,
========
root at kvm01:/var/lib/libvirt# dpkg -l glusterfs-client
8><---
ii glusterfs-client 3.8.8-1 amd64
clustered file-system (client package)
root at kvm01:/var/lib/libvirt#
=======
I am trying to do a mount to a CentOS 7 gluster setup,
=======
[root at glustep1 libvirt]# rpm -q glusterfs
glusterfs-4.0.2-1.el7.x86_64
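With a 3.8.8 client talking to a 4.0.2 server, an operating-version mismatch is a likely suspect. A quick way to inspect it on the server side (a sketch; run on any server node):

```shell
# Op-version the cluster is currently running at
gluster volume get all cluster.op-version

# Maximum op-version the installed server packages support
gluster volume get all cluster.max-op-version
```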
2017 Aug 07
2
Slow write times to gluster disk
Hi Soumya,
We just had the opportunity to try the option of disabling the
kernel NFS and restarting glusterd to start gNFS. However, the gluster
daemon crashes immediately on startup. What additional information,
besides what we provide below, would help with debugging this?
Thanks,
Pat
-------- Forwarded Message --------
Subject: gluster-nfs crashing on start
Date: Mon, 7 Aug 2017 16:05:09
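One common way to gather debugging information for a crash like this is a backtrace from the core dump; a sketch, assuming core dumps are enabled and the core path is a placeholder:

```shell
# Find the core file left by the crashing gNFS process (paths vary by distro)
coredumpctl list glusterfs 2>/dev/null || ls /core*

# Open the core with the matching binary and capture all thread backtraces
gdb /usr/sbin/glusterfs /path/to/core -batch -ex "thread apply all bt"
```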
2018 Apr 09
2
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hey All,
In a two node glusterfs setup, with one node down, can't use the second
node to mount the volume. I understand this is expected behaviour?
Any way to allow the secondary node to function and then replicate what
changed to the first (primary) when it's back online? Or should I just
go for a third node to allow for this?
Also, how safe is it to set the following to none?
2018 Apr 09
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
Hi,
You need at least 3 nodes to have quorum enabled. In a 2-node setup you need
to disable quorum so as to still be able to use the volume when one of the
nodes goes down.
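Disabling quorum for the volume in question can be sketched as below, using the gv01 volume name from the thread; note this trades availability for split-brain risk:

```shell
# Disable client-side and server-side quorum on a 2-node replica
# (only do this if you accept the increased split-brain risk)
gluster volume set gv01 cluster.quorum-type none
gluster volume set gv01 cluster.server-quorum-type none
```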
On Mon, Apr 9, 2018, 09:02 TomK <tomkcpr at mdevsys.com> wrote:
> Hey All,
>
> In a two node glusterfs setup, with one node down, can't use the second
> node to mount the volume. I understand this is
2016 Nov 02
0
Latest glusterfs 3.8.5 server not compatible with livbirt libgfapi access
Hi,
After updating glusterfs server to 3.8.5 (from Centos-gluster-3.8.repo) the
KVM virtual machines (qemu-kvm-ev-2.3.0-31) that access storage using
libgfapi are no longer able to start. The libvirt log file shows:
[2016-11-02 14:26:41.864024] I [MSGID: 104045] [glfs-master.c:91:notify]
0-gfapi: New graph 73332d32-3937-3130-2d32-3031362d3131 (0) coming up
[2016-11-02 14:26:41.864075] I [MSGID:
2023 Mar 21
1
can't set up geo-replication: can't fetch slave details
Hi,
is this a rare problem?
Cheers,
Kingsley.
On Tue, 2023-03-14 at 19:31 +0000, Kingsley Tart wrote:
> Hi,
>
> using Gluster 9.2 on debian 11 I'm trying to set up geo replication.
> I am following this guide:
>
>
https://docs.gluster.org/en/main/Administrator-Guide/Geo-Replication/#password-less-ssh
>
> I have a volume called "ansible" which is only a
2017 Aug 08
0
Slow write times to gluster disk
----- Original Message -----
> From: "Pat Haley" <phaley at mit.edu>
> To: "Soumya Koduri" <skoduri at redhat.com>, gluster-users at gluster.org, "Pranith Kumar Karampuri" <pkarampu at redhat.com>
> Cc: "Ben Turner" <bturner at redhat.com>, "Ravishankar N" <ravishankar at redhat.com>, "Raghavendra
2018 Jan 18
0
issues after botched update
Hi,
A client has a glusterfs cluster that's behaving weirdly after some
issues during upgrade.
They upgraded a glusterfs 2+1 cluster (replica with arbiter) from 3.10.9
to 3.12.4 on CentOS and now have weird issues, and some files may be
corrupted. They also switched from NFS-Ganesha, which crashed every couple
of days, to glusterfs subdirectory mounting. Subdirectory mounting was
the
2017 Aug 08
1
Slow write times to gluster disk
Soumya,
it's
[root at mseas-data2 ~]# glusterfs --version
glusterfs 3.7.11 built on Apr 27 2016 14:09:20
Repository revision: git://git.gluster.com/glusterfs.git
Copyright (c) 2006-2013 Red Hat, Inc. <http://www.redhat.com/>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
It is licensed to you under your choice of the GNU Lesser
General Public License, version 3 or any later version (LGPLv3
2017 Dec 26
0
trying to mount gluster volume
Hi,
I have built a small (2-node) glusterfs cluster and installed nfs-ganesha
on the nodes.
When trying from a different machine to mount an NFS share from the
glusterfs nodes - it works.
However, when I'm trying to mount the volume from a remote machine using
glusterfs-fuse - it fails. I'm getting this error.
[2017-12-26 07:49:38.587196] I [MSGID: 100030] [glusterfsd.c:2412:main]
2018 Apr 11
3
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/9/2018 2:45 AM, Alex K wrote:
Hey Alex,
With two nodes, the setup works but both sides go down when one node is
missing. Still, I set the below two params to none and that solved my issue:
cluster.quorum-type: none
cluster.server-quorum-type: none
Thank you for that.
Cheers,
Tom
> Hi,
>
> You need 3 nodes at least to have quorum enabled. In 2 node setup you
> need to
2018 Jan 23
2
Understanding client logs
Hi all,
I have a problem pinpointing an error: users of
my system experience processes that crash.
The thing that has changed since the crashes started
is that I added a gluster cluster.
Of course the users start to attack my gluster cluster.
I started looking at logs, starting from the client side.
I just need help understanding how to read them the right way.
I can see that every ten
2017 Jul 20
1
Error while mounting gluster volume
Hi Team,
While mounting the gluster volume using the 'mount -t glusterfs' command,
the mount fails.
When we checked the log file, we saw the logs below:
[1970-01-02 10:54:04.420065] E [MSGID: 101187]
[event-epoll.c:391:event_register_epoll] 0-epoll: failed to add fd(=7) to
epoll fd(=0) [Invalid argument]
[1970-01-02 10:54:04.420140] W [socket.c:3095:socket_connect] 0-: failed to
register
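Two quick checks worth trying here, as a sketch: the 1970-01-02 timestamps suggest the system clock was never set, and a debug-level client log usually narrows down mount failures (the server and volume names below are placeholders):

```shell
# Log timestamps of 1970-01-02 suggest the system clock is unset; verify it
date

# Retry the mount with verbose client-side logging
mount -t glusterfs -o log-level=DEBUG,log-file=/var/log/gluster-client.log \
      server1:/myvol /mnt/gluster
```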
2018 Jan 15
2
Using the host name of the volume, its related commands can become very slow
When a volume is created using host names, its related gluster commands (for example create, start, and stop volume, and the NFS-related commands) can become very slow, and in some cases the command returns "Error : Request timed out".
But if the volume is created using IP addresses, all gluster commands behave normally.
I have configured /etc/hosts correctly, because SSH can normally use the
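A quick way to rule name resolution in or out is to time a lookup on every node; a sketch, with "server1" standing in for each peer's host name:

```shell
# Verify the peer's host name resolves from /etc/hosts on every node
getent hosts server1

# Time the lookup; multi-second delays here would explain slow CLI commands
time getent hosts server1
```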
2018 Apr 11
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On Wed, Apr 11, 2018 at 4:35 AM, TomK <tomkcpr at mdevsys.com> wrote:
> On 4/9/2018 2:45 AM, Alex K wrote:
> Hey Alex,
>
> With two nodes, the setup works but both sides go down when one node is
> missing. Still I set the below two params to none and that solved my issue:
>
> cluster.quorum-type: none
> cluster.server-quorum-type: none
>
> yes this disables