Displaying 20 results from an estimated 23 matches for "saved_frames_unwind".
2017 Sep 08
2
GlusterFS as virtual machine storage
...lbot at gmail.com> wrote:
This is the qemu log of the instance:
[2017-09-08 09:31:48.381077] C
[rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired]
0-gv_openstack_1-client-1: server 10.0.1.202:49152 has not responded
in the last 1 seconds, disconnecting.
[2017-09-08 09:31:48.382411] E [rpc-clnt.c:365:saved_frames_unwind]
(--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7fbcadb09e8b]
(--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7fbcad8d08ee]
(--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7fbcad8d09fe]
(--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7fbcad8d2170] (...
2017 Sep 08
3
GlusterFS as virtual machine storage
Oh, you really don't want to go below 30s, I was told.
I'm using 30 seconds for the timeout, and indeed when a node goes down
the VMs freeze for 30 seconds, but I've never seen them go read-only
because of that.
I _only_ use virtio though, maybe it's that. What are you using?
On Fri, Sep 08, 2017 at 11:41:13AM +0200, Pavel Szalbot wrote:
> Back to replica 3 w/o arbiter. Two fio jobs
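The timeout being discussed here is GlusterFS's network.ping-timeout volume option. A minimal sketch of checking and setting it, assuming a volume named gv_openstack_1 as in the log above (verify option support on your release):
    # show the current client-side ping timeout (default is 42 seconds)
    gluster volume get gv_openstack_1 network.ping-timeout
    # set it to the 30-second value mentioned above
    gluster volume set gv_openstack_1 network.ping-timeout 30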
2018 Feb 26
1
Problems with write-behind with large files on Gluster 3.8.4
...: failed to send the fop
[2018-02-22 18:07:45.221368] I [MSGID: 114018] [client.c:2280:client_rpc_notify] 2-scratch-client-0: disconnected from scratch-client-0. Client process will keep trying to connect to glusterd until brick's port is available
[2018-02-22 18:07:45.221576] E [rpc-clnt.c:365:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x2b2ee5874752] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x2b2ee5b4a8ce] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x2b2ee5b4a9de] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x2b2ee5b4c150] (-...
2017 Jun 17
1
client reconnect fails (was gluster heal entry reappears)
...52.1:49154 has not responded in the last 42 seconds, disconnecting.
[2017-06-17 09:08:26.033302] C [rpc-clnt-ping.c:160:rpc_clnt_ping_timer_expired] 0-gluster1-client-2: server 100.64.252.1:49155 has not responded in the last 42 seconds, disconnecting.
[2017-06-17 09:08:26.033751] E [rpc-clnt.c:365:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x192)[0x7f8fa739e162] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f8fa716595e] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f8fa7165a6e] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x84)[0x7f8fa71671d4] ))...
2017 Jun 14
0
No NFS connection due to GlusterFS CPU load
...s : 2
server.outstanding-rpc-limit : 64
nfs.outstanding-rpc-limit : 16
performance.io-thread-count : 16
/var/log/glusterfs/nfs.log
[2017-06-14 10:02:03.964405] I [MSGID: 108006] [afr-common.c:4941:afr_local_init] 0-gvol01-replicate-0: no subvolumes up
[2017-06-14 10:02:04.026299] E [rpc-clnt.c:365:saved_frames_unwind] (--> /lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f2729b3ae8b] (--> /lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f27299018ee] (--> /lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f27299019fe] (--> /lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7f2729903170] (-...
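The option listing at the top of this message matches GlusterFS volume options. A hedged sketch of applying the same values with the CLI, assuming the gvol01 volume named in the log (the numbers are the ones reported above, not tuning advice):
    gluster volume set gvol01 server.outstanding-rpc-limit 64
    gluster volume set gvol01 nfs.outstanding-rpc-limit 16
    gluster volume set gvol01 performance.io-thread-count 16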
2018 Feb 26
0
rpc/glusterd-locks error
....so(+0x2322a)
[0x7f46020e922a]
-->/usr/lib64/glusterfs/3.12.5/xlator/mgmt/glusterd.so(+0x2d198)
[0x7f46020f3198]
-->/usr/lib64/glusterfs/3.12.5/xlator/mgmt/glusterd.so(+0xe4755)
[0x7f46021aa755] ) 0-management: Lock for vol ovirtprod_vol not held
[2018-02-26 14:31:47.331066] E [rpc-clnt.c:350:saved_frames_unwind] (-->
/lib64/libglusterfs.so.0(_gf_log_callingfn+0x13b)[0x7f460d64dedb] (-->
/lib64/libgfrpc.so.0(saved_frames_unwind+0x1de)[0x7f460d412e6e] (-->
/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f460d412f8e] (-->
/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x90)[0x7f460d414710] (-...
2017 May 29
1
Failure while upgrading gluster to 3.10.1
...2.183032] W [socket.c:852:__socket_keepalive]
> 0-socket: failed to set keep idle -1 on socket 20, Invalid argument
> [2017-05-29 11:04:52.183052] E [socket.c:2966:socket_connect]
> 0-management: Failed to set keep-alive: Invalid argument
> [2017-05-29 11:04:52.183622] E [rpc-clnt.c:362:saved_frames_unwind] (-->
> /usr/lib/x86_64-linux-gnu/libglusterfs.so.0(_gf_log_callingfn+0x1a3)[0x7f767c46d483]
> (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_unwind+0x1cf)[0x7f767c2383af]
> (--> /usr/lib/x86_64-linux-gnu/libgfrpc.so.0(saved_frames_destroy+0xe)[0x7f767c2384ce]
> (-->...
2013 Nov 09
2
Failed rebalance - lost files, inaccessible files, permission issues
...wheels started falling off about 26 hours into
the rebalance.
[2013-10-29 23:13:17.193108] C
[client-handshake.c:126:rpc_client_ping_timer_expired] 0-mdfs-client-1:
server 10.116.0.22:24025 has not responded in the last 42 seconds,
disconnecting.
[2013-10-29 23:13:17.200616] E [rpc-clnt.c:373:saved_frames_unwind]
(-->/usr/lib64/libgfrpc.so.0(rpc_clnt_notify+0x78) [0x36de60f808]
(-->/usr/lib64/libgfrpc.so.0(rpc_clnt_connection_cleanup+0xb0)
[0x36de60f4c0] (-->/usr/lib64/libgfrpc.so.0(saved_frames_destroy+0xe)
[0x36de60ef2e]))) 0-mdfs-client-1: forced unwinding frame type(GlusterFS
3.1) op(STAT...
2011 Jun 09
1
NFS problem
...0000-000000000000) when it is supposed to be not present
[2011-06-09 17:01:35.784610] W [socket.c:1494:__socket_proto_state_machine] 0-poolsave-client-0: reading from socket failed. Error (Transport endpoint is not connected), peer (10.68.217.85:24014)
[2011-06-09 17:01:35.784745] E [rpc-clnt.c:338:saved_frames_unwind] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_notify+0xb9) [0x2ab58145f7f9] (-->/usr/local/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e) [0x2ab58145ef8e] (-->/usr/local/lib/libgfrpc.so.0(saved_frames_destroy+0xe) [0x2ab58145eefe]))) 0-poolsave-client-0: forced unwinding frame type(Glust...
2018 Apr 11
0
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On Wed, Apr 11, 2018 at 4:35 AM, TomK <tomkcpr at mdevsys.com> wrote:
> On 4/9/2018 2:45 AM, Alex K wrote:
> Hey Alex,
>
> With two nodes, the setup works but both sides go down when one node is
> missing. Still, I set the two params below to none and that solved my issue:
>
> cluster.quorum-type: none
> cluster.server-quorum-type: none
>
> yes this disables
2010 Dec 24
1
node crashing on 4 replicated-distributed cluster
...28:server_rpc_notify]
vms-server: disconnected connection from 192.168.7.1:1018
[2010-12-24 15:59:02.181278] I [server-handshake.c:535:server_setvolume]
vms-server: accepted client from 192.168.7.1:1018
On nfs.log of node1 (many, operations changing):
[2010-12-24 15:58:49.263361] E [rpc-clnt.c:338:saved_frames_unwind]
(-->/usr/lib/libgfrpc.so.0(rpc_clnt_notify+0x77) [0x7fabdcf5bd17]
(-->/usr/lib/libgfrpc.so.0(rpc_clnt_connection_cleanup+0x7e)
[0x7fabdcf5b4ae] (-->/usr/lib/libgfrpc.so.0(saved_frames_destroy+0xe)
[0x7fabdcf5b40e]))) rpc-clnt: forced unwinding frame type(GlusterFS 3.1)
op(WRITE(13)) calle...
2018 Apr 11
3
volume start: gv01: failed: Quorum not met. Volume operation not allowed.
On 4/9/2018 2:45 AM, Alex K wrote:
Hey Alex,
With two nodes, the setup works but both sides go down when one node is
missing. Still, I set the two params below to none and that solved my issue:
cluster.quorum-type: none
cluster.server-quorum-type: none
Thank you for that.
Cheers,
Tom
> Hi,
>
> You need at least 3 nodes to have quorum enabled. In a 2-node setup you
> need to
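For reference, the two quorum options quoted above are per-volume settings. A minimal sketch against the gv01 volume from the subject line, assuming a 2-node replica; note that disabling quorum trades split-brain protection for availability:
    gluster volume set gv01 cluster.quorum-type none
    gluster volume set gv01 cluster.server-quorum-type none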
2018 Feb 28
1
Intermittent mount disconnect due to socket poller error
...108006]
[afr-common.c:5164:__afr_handle_child_down_event] 0-VOL-replicate-0: All
subvolumes are down. Going offline until atleast one of them comes back up.
[2018-02-28 19:35:58.486146] E [MSGID: 101046]
[dht-common.c:1501:dht_lookup_dir_cbk] 0-VOL-dht: dict is null <67 times>
<lots of saved_frames_unwind messages>
[2018-02-28 19:38:06.428607] E [socket.c:2648:socket_poller]
0-VOL-client-1: socket_poller SERVER2:24007 failed (No data available)
[2018-02-28 19:40:12.548650] E [socket.c:2648:socket_poller]
0-VOL-client-1: socket_poller SERVER2:24007 failed (No data available)
<manual umount /...