Displaying 20 results from an estimated 1000 matches similar to: "peer probe failures"
2017 Aug 20
2
Glusterd not working with systemd in redhat 7
Hi!
I am having the same issue but I am running Ubuntu v16.04.
It does not mount during boot, but works if I mount it manually. I am
running the Gluster server on the same machines (3 machines).
Here is the /etc/fstab file:
/dev/sdb1 /data/gluster ext4 defaults 0 0
web1.dasilva.network:/www /mnt/glusterfs/www glusterfs
defaults,_netdev,log-level=debug,log-file=/var/log/gluster.log 0 0
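A common workaround for GlusterFS mounts that fail at boot is to let systemd mount the share on first access instead of during early boot. A sketch, assuming systemd manages the mounts and glusterd runs on the same host (the added x-systemd options are the illustrative part):
/dev/sdb1 /data/gluster ext4 defaults 0 0
web1.dasilva.network:/www /mnt/glusterfs/www glusterfs defaults,_netdev,noauto,x-systemd.automount,x-systemd.requires=glusterd.service,log-level=debug,log-file=/var/log/gluster.log 0 0
After editing, systemctl daemon-reload followed by mount /mnt/glusterfs/www tests the entry without a reboot.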
2017 Aug 21
0
Glusterd not working with systemd in redhat 7
On Mon, Aug 21, 2017 at 2:49 AM, Cesar da Silva <thunderlight1 at gmail.com>
wrote:
> Hi!
> I am having the same issue but I am running Ubuntu v16.04.
> It does not mount during boot, but works if I mount it manually. I am
> running the Gluster server on the same machines (3 machines).
> Here is the /etc/fstab file:
>
> /dev/sdb1 /data/gluster ext4 defaults 0 0
>
>
2017 Aug 21
1
Glusterd not working with systemd in redhat 7
Hi!
Please see below. Note that web1.dasilva.network is the address of the
local machine where one of the bricks is installed and that tries to mount.
[2017-08-20 20:30:40.359236] I [MSGID: 100030] [glusterfsd.c:2476:main]
0-/usr/sbin/glusterd: Started running /usr/sbin/glusterd version 3.11.2
(args: /usr/sbin/glusterd -p /var/run/glusterd.pid)
[2017-08-20 20:30:40.973249] I [MSGID: 106478]
2017 May 29
1
Failure while upgrading gluster to 3.10.1
Sorry for the big attachment in the previous mail... the last 1000 lines of those logs
are attached now.
On Mon, May 29, 2017 at 4:44 PM, Pawan Alwandi <pawan at platform.sh> wrote:
>
>
> On Thu, May 25, 2017 at 9:54 PM, Atin Mukherjee <amukherj at redhat.com>
> wrote:
>
>>
>> On Thu, 25 May 2017 at 19:11, Pawan Alwandi <pawan at platform.sh> wrote:
>>
2017 Jul 03
2
Failure while upgrading gluster to 3.10.1
Hello Atin,
I've gotten around to this and was able to get the upgrade done using 3.7.0
before moving to 3.11. For some reason 3.7.9 wasn't working well.
On 3.11 though I notice that gluster/nfs is really made optional and
nfs-ganesha is being recommended. We have plans to switch to nfs-ganesha
on new clusters but would like to have glusterfs-gnfs on existing clusters
so a seamless upgrade
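For reference, keeping gnfs in use is a per-volume switch; a sketch, assuming the glusterfs-gnfs package is installed and using the volume name www purely as an illustration:
gluster volume set www nfs.disable off
gluster volume status www nfs
The second command just confirms the gluster NFS server is listed for that volume.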
2017 Jul 03
0
Failure while upgrading gluster to 3.10.1
On Mon, 3 Jul 2017 at 12:28, Pawan Alwandi <pawan at platform.sh> wrote:
> Hello Atin,
>
> I've gotten around to this and was able to get the upgrade done using 3.7.0
> before moving to 3.11. For some reason 3.7.9 wasn't working well.
>
> On 3.11 though I notice that gluster/nfs is really made optional and
> nfs-ganesha is being recommended. We have plans to
2017 Jun 01
0
Gluster client mount fails in mid flight with signum 15
This has been solved, as far as we can tell.
The problem was with KillUserProcesses=1 in logind.conf. This has been shown to kill mounts made using mount -a, both by root and by any user with sudo, at session logout.
Hope this will help anybody else who runs into this.
Thanks 4 all your help and
cheers
Gabbe
On 1 June 2017 at 09:24, Gabriel Lindeborg <gabriel.lindeborg at
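A minimal sketch of that fix on a systemd-logind based system (standard paths assumed):
# /etc/systemd/logind.conf
KillUserProcesses=no
followed by systemctl restart systemd-logind so the setting applies to new sessions.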
2017 Jun 01
2
Gluster client mount fails in mid flight with signum 15
All four clients did run 3.10.2 as well.
The volumes had been running fine until we upgraded to 3.10, when we hit some issues with port mismatches. We restarted all the volumes, the servers and the clients, and now hit this issue.
We've since backed up the files, removed the volumes, removed the bricks, removed gluster, installed glusterfs 3.7.20, created new volumes on new bricks, restored the
2017 Jun 01
1
Gluster client mount fails in mid flight with signum 15
On Thu, Jun 01, 2017 at 01:52:23PM +0000, Gabriel Lindeborg wrote:
> This has been solved, as far as we can tell.
>
> The problem was with KillUserProcesses=1 in logind.conf. This has been
> shown to kill mounts made using mount -a, both by root and by any user
> with sudo, at session logout.
Ah, yes, that could well be the cause of the problem.
> Hope this will help anybody else who runs
2017 Sep 20
1
"Input/output error" on mkdir for PPC64 based client
I put the share into debug mode and then repeated the process from a ppc64
client and an x86 client. Weirdly the client logs were almost identical.
Here's the ppc64 gluster client log of attempting to create a folder...
-------------
[2017-09-20 13:34:23.344321] D
[rpc-clnt-ping.c:93:rpc_clnt_remove_ping_timer_locked] (-->
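Putting the share into debug mode is typically done through the client log-level option; a sketch, with the volume name share used purely as a placeholder:
gluster volume set share diagnostics.client-log-level DEBUG
gluster volume set share diagnostics.client-log-level INFO   # restore the default afterwards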
2018 Apr 11
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On Wed, Apr 11, 2018 at 4:35 AM, TomK <tomkcpr at mdevsys.com> wrote:
> On 4/9/2018 2:45 AM, Alex K wrote:
> Hey Alex,
>
> With two nodes, the setup works but both sides go down when one node is
> missing. Still I set the below two params to none and that solved my issue:
>
> cluster.quorum-type: none
> cluster.server-quorum-type: none
>
> yes this disables
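For completeness, those parameters are set per volume; a sketch against the gv01 volume from the subject line:
gluster volume set gv01 cluster.quorum-type none
gluster volume set gv01 cluster.server-quorum-type none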
2018 Apr 11
3
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/9/2018 2:45 AM, Alex K wrote:
Hey Alex,
With two nodes, the setup works but both sides go down when one node is
missing. Still I set the below two params to none and that solved my issue:
cluster.quorum-type: none
cluster.server-quorum-type: none
Thank you for that.
Cheers,
Tom
> Hi,
>
> You need at least 3 nodes to have quorum enabled. In a 2-node setup you
> need to
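The usual alternative to disabling quorum is adding a third, arbiter-only node; a sketch with hypothetical hostnames and brick paths:
gluster volume create gv01 replica 3 arbiter 1 \
  node1:/bricks/gv01 node2:/bricks/gv01 node3:/bricks/gv01-arbiter
gluster volume start gv01
The arbiter brick stores only metadata, so the third node can be small.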
2017 Sep 20
0
"Input/output error" on mkdir for PPC64 based client
Looks like it is an issue with architecture compatibility in the RPC layer (i.e.,
with XDRs and how they are used). Just glance through the logs of the client process
where you saw the errors, which could give some hints. If you don't
understand the logs, share them, so we will try to look into it.
-Amar
On Wed, Sep 20, 2017 at 2:40 AM, Walter Deignan <WDeignan at uline.com> wrote:
> I recently
2017 Sep 08
2
GlusterFS as virtual machine storage
The issue of I/O stopping may also be due to glusterfsd not being properly
killed before rebooting the server.
For example, in RHEL 7.4 with official Gluster 3.8.4, the glusterd service
does *not* stop glusterfsd when you run systemctl stop glusterd.
So give this a try on the node you wish to reboot:
1. Stop glusterd
2. Check if glusterfsd processes are still running. If they are, use:
killall
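In shell form, the suggested sequence on the node to be rebooted might look like this (a sketch; killall comes from psmisc, and process names can vary slightly between versions):
systemctl stop glusterd
pgrep -a glusterfsd        # any brick processes still running?
killall glusterfsd         # stop them if so
reboot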
2017 Sep 08
0
GlusterFS as virtual machine storage
Hi Diego,
indeed glusterfsd processes are running and that is the reason I do a
server reboot instead of systemctl stop glusterd. Is killall different
from a reboot in the way glusterfsd processes are terminated in CentOS
(init 1?)?
However I will try this and let you know.
-ps
On Fri, Sep 8, 2017 at 12:19 PM, Diego Remolina <dijuremo at gmail.com> wrote:
> The issue of I/O stopping may also
2017 Sep 08
1
GlusterFS as virtual machine storage
This is exactly the problem:
systemctl stop glusterd does *not* kill the brick processes.
On CentOS with gluster 3.10.x there is also a service meant to only stop
glusterfsd (brick processes). I think the reboot process may not be
properly stopping glusterfsd, or the network or firewall may be stopped
before glusterfsd, and so the nodes go into the long timeout.
Once again, in my case a simple
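One way to rule that out is to stop both units by hand before rebooting (a sketch; the separate brick-killing unit ships as glusterfsd.service in the CentOS packages of that era, but verify the name with systemctl list-unit-files):
systemctl list-unit-files | grep -i gluster
systemctl stop glusterd glusterfsd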
2017 Sep 08
1
GlusterFS as virtual machine storage
If your VMs use ext4 also check this:
https://joejulian.name/blog/keeping-your-vms-from-going-
read-only-when-encountering-a-ping-timeout-in-glusterfs/
I asked him what to do for VMs using XFS and he said he could not find a
fix (setting to change) for those.
HTH,
Diego
On Sep 8, 2017 6:19 AM, "Diego Remolina" <dijuremo at gmail.com> wrote:
> The issue of I/O stopping may
2017 Sep 08
0
GlusterFS as virtual machine storage
This is the qemu log of instance:
[2017-09-08 09:31:48.381077] C
[rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired]
0-gv_openstack_1-client-1: server 10.0.1.202:49152 has not responded
in the last 1 seconds, disconnecting.
[2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b]
(-->
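The "1 seconds" in that message suggests network.ping-timeout was lowered on the volume; a sketch of checking and raising it back, using the gv_openstack_1 name from the log:
gluster volume get gv_openstack_1 network.ping-timeout
gluster volume set gv_openstack_1 network.ping-timeout 30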
2017 Sep 08
3
GlusterFS as virtual machine storage
I think this should be considered a bug.
If you have a server crash, the glusterfsd process obviously doesn't exit
properly and thus this could lead to I/O stopping?
And server crashes are the main reason to use a redundant filesystem like
gluster.
On 8 Sep 2017 at 12:43 PM, "Diego Remolina" <dijuremo at gmail.com> wrote:
This is exactly the problem,
Systemctl stop glusterd does
2017 Sep 08
3
GlusterFS as virtual machine storage
Oh, you really don't want to go below 30s, I was told.
I'm using 30 seconds for the timeout, and indeed when a node goes down
the VMs freeze for 30 seconds, but I've never seen them go read-only for
that.
I _only_ use virtio though, maybe it's that. What are you using?
On Fri, Sep 08, 2017 at 11:41:13AM +0200, Pavel Szalbot wrote:
> Back to replica 3 w/o arbiter. Two fio jobs
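The fio jobs themselves aren't shown in this excerpt; purely as an illustrative sketch, two random-write jobs against a file on the gluster mount could be run as:
fio --name=gluster-test --filename=/mnt/glusterfs/testfile \
    --rw=randwrite --bs=4k --size=1G --ioengine=libaio --direct=1 \
    --numjobs=2 --time_based --runtime=60 --group_reporting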