search for: rpc_clnt_notifi

Displaying 20 results from an estimated 69 matches for "rpc_clnt_notifi".

2017 Sep 08
2
GlusterFS as virtual machine storage
The issue of I/O stopping may also be with glusterfsd not being properly killed before rebooting the server. For example, in RHEL 7.4 with official Gluster 3.8.4, the glusterd service does *not* stop glusterfsd when you run "systemctl stop glusterd". So give this a try on the node you wish to reboot: 1. Stop glusterd. 2. Check if glusterfsd processes are still running. If they are, use: killall
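A sketch of the sequence described above, assuming a systemd-based RHEL/CentOS 7 node (the exact commands are not spelled out in the snippet):

    systemctl stop glusterd        # stop the management daemon
    pgrep -a glusterfsd            # check whether brick processes are still running
    killall glusterfsd             # if any remain, terminate them
    reboot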
2017 Sep 08
0
GlusterFS as virtual machine storage
Hi Diego, indeed glusterfsd processes are running and it is the reason I do a server reboot instead of "systemctl stop glusterd". Is killall different from reboot in the way glusterfsd processes are terminated in CentOS (init 1?)? However I will try this and let you know. -ps On Fri, Sep 8, 2017 at 12:19 PM, Diego Remolina <dijuremo at gmail.com> wrote: > The issue of I/O stopping may also
2017 Sep 08
1
GlusterFS as virtual machine storage
This is exactly the problem: "systemctl stop glusterd" does *not* kill the brick processes. On CentOS with gluster 3.10.x there is also a service meant only to stop glusterfsd (the brick processes). I think the reboot process may not be properly stopping glusterfsd, or network or firewall may be stopped before glusterfsd, and so the nodes go into the long timeout. Once again, in my case a simple
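The separate brick-stopping unit mentioned here is shipped as glusterfsd.service in some packagings (verify the unit name on your system); a hedged sketch of stopping both pieces before a reboot:

    systemctl stop glusterd      # stops only the management daemon
    systemctl stop glusterfsd    # stops the brick processes, if this unit exists in your packaging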
2017 Sep 08
1
GlusterFS as virtual machine storage
If your VMs use ext4 also check this: https://joejulian.name/blog/keeping-your-vms-from-going-read-only-when-encountering-a-ping-timeout-in-glusterfs/ I asked him what to do for VMs using XFS and he said he could not find a fix (setting to change) for those. HTH, Diego On Sep 8, 2017 6:19 AM, "Diego Remolina" <dijuremo at gmail.com> wrote: > The issue of I/O stopping may
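The snippet does not reproduce the blog's actual fix; the usual ext4-specific mitigations inside the guest are along these lines (a sketch, not necessarily what the article recommends, and the device names are only examples):

    echo 60 > /sys/block/sda/device/timeout   # raise the guest block-device timeout above gluster's ping-timeout
    tune2fs -e continue /dev/sda1             # or keep ext4 from remounting read-only on an I/O error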
2017 Sep 08
0
GlusterFS as virtual machine storage
This is the qemu log of the instance: [2017-09-08 09:31:48.381077] C [rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired] 0-gv_openstack_1-client-1: server 10.0.1.202:49152 has not responded in the last 1 seconds, disconnecting. [2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b] (-->
2017 Sep 08
3
GlusterFS as virtual machine storage
I think this should be considered a bug. If you have a server crash, the glusterfsd process obviously doesn't exit properly, and thus this could lead to I/O stopping? And server crashes are the main reason to use a redundant filesystem like gluster. On 8 Sep 2017 12:43 PM, "Diego Remolina" <dijuremo at gmail.com> wrote: This is exactly the problem, Systemctl stop glusterd does
2017 Sep 08
3
GlusterFS as virtual machine storage
Oh, you really don't want to go below 30s, I was told. I'm using 30 seconds for the timeout, and indeed when a node goes down the VMs freeze for 30 seconds, but I've never seen them go read-only because of that. I _only_ use virtio though, maybe it's that. What are you using? On Fri, Sep 08, 2017 at 11:41:13AM +0200, Pavel Szalbot wrote: > Back to replica 3 w/o arbiter. Two fio jobs
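The timeout being discussed is gluster's network.ping-timeout volume option; checking and setting it would look roughly like this (the volume name is a placeholder):

    gluster volume get <volname> network.ping-timeout      # show the current value
    gluster volume set <volname> network.ping-timeout 30   # the thread suggests not going below 30s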
2018 Feb 26
1
Problems with write-behind with large files on Gluster 3.8.4
Hello, I'm having problems when write-behind is enabled on Gluster 3.8.4. I have 2 Gluster servers, each with a single brick that is mirrored between them. The code causing these issues reads two data files, each approx. 128G in size. It opens a third file, mmap()'s that file, and subsequently reads and writes to it. The third file, on successful runs (without write-behind enabled)
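For testing whether write-behind is really the trigger, the translator can be toggled per volume (the volume name is a placeholder):

    gluster volume set <volname> performance.write-behind off   # disable write-behind for the test
    gluster volume get <volname> performance.write-behind       # confirm the setting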
2013 Mar 14
1
glusterfs 3.3 self-heal daemon crash and can't be started
Dear glusterfs experts, Recently we have encountered a self-heal daemon crash issue after rebalancing a volume. Crash stack below: +------------------------------------------------------------------------------+ pending frames: patchset: git://git.gluster.com/glusterfs.git signal received: 11 time of crash: 2013-03-14 16:33:50 configuration details: argp 1 backtrace 1 dlfcn 1 fdatasync 1 libpthread
2013 Sep 06
2
[Gluster-devel] GlusterFS 3.3.1 client crash (signal received: 6)
It's a pity that I don't know how to re-create the issue, while there are 1-2 crashed clients out of 120 clients in total every day. Below is the gdb result: (gdb) where #0 0x0000003267432885 in raise () from /lib64/libc.so.6 #1 0x0000003267434065 in abort () from /lib64/libc.so.6 #2 0x000000326746f7a7 in __libc_message () from /lib64/libc.so.6 #3 0x00000032674750c6 in malloc_printerr () from
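A fuller backtrace from a crashed client is usually pulled from the core file, roughly as follows (paths are examples and core dumps are assumed to be enabled, e.g. via ulimit -c unlimited):

    gdb /usr/sbin/glusterfs /path/to/core
    (gdb) thread apply all bt full     # backtraces for every thread, including locals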
2017 Jun 17
1
client reconnect fails (was gluster heal entry reappears)
Hi Ravi, back to our client-cannot-reconnect-to-gluster-brick problem ... > From: Ravishankar N [ravishankar at redhat.com] > Sent: Monday, 29 May 2017 06:34 > To: Markus Stockhausen; gluster-users at gluster.org > Subject: Re: [Gluster-users] gluster heal entry reappears > > > On 05/28/2017 10:31 PM, Markus Stockhausen wrote: > > Hi, > > > > I'm
2017 May 29
1
Failure while upgrading gluster to 3.10.1
Sorry for the big attachment in the previous mail... the last 1000 lines of those logs are attached now. On Mon, May 29, 2017 at 4:44 PM, Pawan Alwandi <pawan at platform.sh> wrote: > > > On Thu, May 25, 2017 at 9:54 PM, Atin Mukherjee <amukherj at redhat.com> > wrote: > >> >> On Thu, 25 May 2017 at 19:11, Pawan Alwandi <pawan at platform.sh> wrote: >>
2017 Jul 03
2
Failure while upgrading gluster to 3.10.1
Hello Atin, I've gotten around to this and was able to get the upgrade done using 3.7.0 before moving to 3.11. For some reason 3.7.9 wasn't working well. On 3.11 though I notice that gluster/nfs is really made optional and nfs-ganesha is being recommended. We have plans to switch to nfs-ganesha on new clusters but would like to have glusterfs-gnfs on existing clusters so a seamless upgrade
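If the built-in gNFS server is still wanted on 3.11, it has to be enabled explicitly per volume (provided the gnfs bits are installed); a hedged sketch, with the volume name as a placeholder:

    gluster volume set <volname> nfs.disable off   # re-enable the built-in gNFS server for this volume
    gluster volume status <volname> nfs            # check that the NFS server process came up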
2017 Jun 14
0
No NFS connection due to GlusterFS CPU load
When executing a load test with the FIO tool, running the following job from the client drives the load on 2 CPU cores up to 100%. At that time, if another client is performing an NFS mount, the df command does not come back and I can not connect via NFS. The log below will continue to be output. I believe that if the CPU utilization were distributed, the load would be eliminated.
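The fio job itself is truncated out of this snippet; a purely hypothetical client-side invocation of the same kind of load test (not the poster's actual job, all parameters illustrative) might be:

    fio --name=nfs-load --directory=/mnt/nfs --rw=randrw --bs=4k --ioengine=libaio \
        --direct=1 --iodepth=32 --numjobs=4 --size=1g --runtime=120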
2018 Feb 26
0
rpc/glusterd-locks error
Good morning. We have a 6 node cluster. 3 nodes are participating in a replica 3 volume. Naming convention: xx01 - 3 nodes participating in ovirt_vol xx02 - 3 nodes NOT participating in ovirt_vol Last week, we restarted glusterd on each node in the cluster to update (one at a time). The three xx01 nodes all show the following in glusterd.log: [2018-02-26 14:31:47.330670] E
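A first step for lock errors of this kind is usually to confirm cluster membership and whether a stale volume lock is still held (generic commands, not taken from the thread):

    gluster peer status      # all 6 nodes should show "Peer in Cluster (Connected)"
    gluster volume status    # may fail with "Another transaction is in progress" while a lock is held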
2017 Jul 03
0
Failure while upgrading gluster to 3.10.1
On Mon, 3 Jul 2017 at 12:28, Pawan Alwandi <pawan at platform.sh> wrote: > Hello Atin, > > I've gotten around to this and was able to get upgrade done using 3.7.0 > before moving to 3.11. For some reason 3.7.9 wasn't working well. > > On 3.11 though I notice that gluster/nfs is really made optional and > nfs-ganesha is being recommended. We have plans to
2013 Nov 09
2
Failed rebalance - lost files, inaccessible files, permission issues
I'm starting a new thread on this, because I have more concrete information than I did the first time around. The full rebalance log from the machine where I started the rebalance can be found at the following link. It is slightly redacted - one search/replace was made to replace an identifying word with REDACTED. https://dl.dropboxusercontent.com/u/97770508/mdfs-rebalance-redacted.zip
2017 Sep 05
0
Glusterd process hangs on reboot
Some corrections about the previous mails. The problem does not happen when no volumes are created. The problem happens when volumes are created but in the stopped state. The problem also happens when volumes are in the started state. Below are the 5 stack traces taken at 10 min intervals with volumes in the stopped state. --1-- Thread 8 (Thread 0x7f413f3a7700 (LWP 104249)): #0 0x0000003d99c0f00d in nanosleep () from /lib64/libpthread.so.0 #1
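The traces were presumably captured with pstack or a similar gdb-based tool against the running daemon (the exact command is not shown in the snippet):

    pstack $(pidof glusterd) > glusterd-stack-$(date +%s).txt   # one snapshot; repeat every 10 minutes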
2017 Sep 05
1
Glusterd process hangs on reboot
On Tue, Sep 5, 2017 at 6:13 PM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Some corrections about the previous mails. Problem does not happen > when no volumes created. > Problem happens volumes created but in stopped state. Problem also > happens when volumes started state. > Below is the 5 stack traces taken by 10 min intervals and volumes stopped > state. > As
2017 Jul 20
1
Error while mounting gluster volume
Hi Team, While mounting the gluster volume using the 'mount -t glusterfs' command, the mount fails. When we checked the log file, we found the logs below: [1970-01-02 10:54:04.420065] E [MSGID: 101187] [event-epoll.c:391:event_register_epoll] 0-epoll: failed to add fd(=7) to epoll fd(=0) [Invalid argument] [1970-01-02 10:54:04.420140] W [socket.c:3095:socket_connect] 0-: failed to register
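To gather more detail than the default client log, the mount can be retried at a higher log level (server, volume, and paths below are placeholders):

    mount -t glusterfs -o log-level=DEBUG,log-file=/var/log/glusterfs/mnt-debug.log \
        server1:/myvol /mnt/gluster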