Displaying 20 results from an estimated 32 matches for "server_rpc_notify".
2017 Jun 01
0
Gluster client mount fails in mid flight with signum 15
...10.205040+02:00] [] []
/DAEMON/INFO [2017-05-31T08:21:21.580846+02:00] [] []
/DAEMON/INFO [2017-05-31T08:21:21.581623+02:00] [] []
And these are 40 last rows of the brick
==> /var/log/glusterfs/bricks/gluster-alfresco-brick.log <==
[2017-05-31 05:52:19.117394] I [MSGID: 115036] [server.c:559:server_rpc_notify] 0-alfresco-server: disconnecting connection from tms02.horisont.svenskaspel.se-3219-2017/05/31-05:52:19:118674-alfresco-client-2-0-0
[2017-05-31 05:52:19.117469] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-alfresco-server: Shutting down connection tms02.horisont.svenskaspel.se-3219-2017/0...
2017 Jun 01
2
Gluster client mount fails in mid flight with signum 15
...10.205040+02:00] [] []
/DAEMON/INFO [2017-05-31T08:21:21.580846+02:00] [] []
/DAEMON/INFO [2017-05-31T08:21:21.581623+02:00] [] []
And these are 40 last rows of the brick
==> /var/log/glusterfs/bricks/gluster-alfresco-brick.log <==
[2017-05-31 05:52:19.117394] I [MSGID: 115036] [server.c:559:server_rpc_notify] 0-alfresco-server: disconnecting connection from tms02.horisont.svenskaspel.se-3219-2017/05/31-05:52:19:118674-alfresco-client-2-0-0
[2017-05-31 05:52:19.117469] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-alfresco-server: Shutting down connect...
2017 Jun 01
1
Gluster client mount fails in mid flight with signum 15
...AEMON/INFO [2017-05-31T08:21:21.580846+02:00] [] []
> /DAEMON/INFO [2017-05-31T08:21:21.581623+02:00] [] []
>
> And these are 40 last rows of the brick
> ==> /var/log/glusterfs/bricks/gluster-alfresco-brick.log <==
> [2017-05-31 05:52:19.117394] I [MSGID: 115036] [server.c:559:server_rpc_notify] 0-alfresco-server: disconnecting connection from tms02.horisont.svenskaspel.se-3219-2017/05/31-05:52:19:118674-alfresco-client-2-0-0
> [2017-05-31 05:52:19.117469] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-alfresco-server: Shutting down connection tms02.horisont.svenskaspel.se-3219-2...
2017 Oct 17
1
Distribute rebalance issues
...isconnects like this.
Steve
[2017-10-17 02:22:13.453575] I [MSGID: 115029]
[server-handshake.c:692:server_setvolume] 0-video-server: accepted
client from node-dc4-03-5825-2017/08/30-20:45:55:170091-video-client-4-2-318
(version: 3.8.15)
[2017-10-17 02:22:31.353286] I [MSGID: 115036]
[server.c:548:server_rpc_notify] 0-video-server: disconnecting
connection from
node-dc4-02-29040-2017/08/04-09:31:22:842268-video-client-4-7-403
[2017-10-17 02:22:31.353326] I [MSGID: 101055]
[client_t.c:415:gf_client_unref] 0-video-server: Shutting down
connection node-dc4-02-29040-2017/08/04-09:31:22:842268-video-client-4-7-403...
2018 Jan 05
0
Another VM crashed
...ck2/brick (arbiter)
Options Reconfigured:
nfs.disable: on
performance.readdir-ahead: on
transport.address-family: inet
[2017-12-31 05:25:01.724213] I [glusterfsd-mgmt.c:1600:mgmt_getspec_cbk]
0-glusterfs: No change in volfile, continuing
[2018-01-02 07:56:32.763516] I [MSGID: 115036]
[server.c:548:server_rpc_notify] 0-datastore1-server: disconnecting
connection from srvpve4-1
652-2017/05/03-19:41:30:493103-datastore1-client-1-0-2
[2018-01-02 07:56:32.763554] I [MSGID: 101055]
[client_t.c:415:gf_client_unref] 0-datastore1-server: Shutting down
connection srvpve4-1652-2
017/05/03-19:41:30:493103-datastore1-clie...
2017 Oct 17
0
Distribute rebalance issues
On 17 October 2017 at 14:48, Stephen Remde <stephen.remde at gaist.co.uk>
wrote:
> Hi,
>
>
> I have a rebalance that has failed on one peer twice now. Rebalance logs below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick, but immediately stops the rebalance on that peer instead of waiting for reconnection - which happens
2017 Oct 17
2
Distribute rebalance issues
Hi,
I have a rebalance that has failed on one peer twice now. Rebalance
logs below (directories anonymised and some irrelevant log lines cut).
It looks like it loses connection to the brick, but immediately stops
the rebalance on that peer instead of waiting for reconnection - which
happens a second or so later.
Is this normal behaviour? So far it has been the same server and the
same (remote)
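Several of the results above show the same two-line pattern: an MSGID 115036 `server_rpc_notify` disconnect in the brick log, immediately followed by a `gf_client_unref` shutdown. A minimal sketch for pulling those disconnect events out of a brick log (the sample file and its path are hypothetical, reusing the log format quoted in the excerpts above):

```shell
# Build a small sample brick log so the pipeline can be tried offline;
# on a real node you would point at /var/log/glusterfs/bricks/<brick>.log.
cat > /tmp/sample-brick.log <<'EOF'
[2017-10-17 02:22:13.453575] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-video-server: accepted client from node-dc4-03-5825-2017/08/30-20:45:55:170091-video-client-4-2-318 (version: 3.8.15)
[2017-10-17 02:22:31.353286] I [MSGID: 115036] [server.c:548:server_rpc_notify] 0-video-server: disconnecting connection from node-dc4-02-29040-2017/08/04-09:31:22:842268-video-client-4-7-403
EOF

# MSGID 115036 is the server_rpc_notify disconnect message; print the
# timestamp (first two fields) and the client identifier (last field).
grep 'MSGID: 115036' /tmp/sample-brick.log | awk '{print $1, $2, $NF}'
```

Comparing the timestamps this prints against the rebalance log makes it easy to see whether a brick disconnect coincides with the rebalance failure.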
2012 Dec 17
2
Transport endpoint
...r3/rpr3_sparky/matrix/.4d_ccnoesy.ucsf.QTQswL
[2012-12-15 00:53:24.743400] I [server-helpers.c:741:server_connection_put] 0-RedhawkShared-server: Shutting down connection mualhpcp01.hpc.muohio.edu-17684-2012/12/13-17:25:16:994209-RedhawkShared-client-0-0
[2012-12-15 00:53:24.743368] I [server.c:685:server_rpc_notify] 0-RedhawkShared-server: disconnecting connectionfrom mualhpcp01.hpc.muohio.edu-17684-2012/12/13-17:25:16:994209-RedhawkShared-client-0-0
[2012-12-15 00:53:24.740055] W [socket.c:195:__socket_rwv] 0-tcp.RedhawkShared-server: readv failed (Connection reset by peer)
I can't find relevant logs on...
2018 Jan 19
2
geo-replication command rsync returned with 3
...f-4a9c-9106-eb8404b95622
[2018-01-19 14:23:21.995049] I [MSGID: 115029]
[server-handshake.c:690:server_setvolume] 0-mvol1-server: accepted
client from
gl-master-01-22759-2018/01/19-14:23:21:928705-mvol1-client-0-0-0
(version: 3.7.18)
[2018-01-19 14:23:23.392692] I [MSGID: 115036]
[server.c:552:server_rpc_notify] 0-mvol1-server: disconnecting
connection from
gl-master-04-8871-2018/01/19-14:23:18:129523-mvol1-client-0-0-0
[2018-01-19 14:23:23.392746] I [MSGID: 101055]
[client_t.c:420:gf_client_unref] 0-mvol1-server: Shutting down
connection gl-master-04-8871-2018/01/19-14:23:18:129523-mvol1-client-0-0-0...
2010 Dec 24
1
node crashing on 4 replicated-distributed cluster
...n node4, I've found many of:
[2010-12-24 15:58:50.247688] C [rpcsvc.c:1118:rpcsvc_notify] rpcsvc: got
MAP_XID event, which should have not come
[2010-12-24 15:58:50.264731] E [rpcsvc.c:874:rpcsvc_request_create]
rpc-service: RPC call decoding failed
[2010-12-24 15:58:50.264835] I [server.c:428:server_rpc_notify]
vms-server: disconnected connection from 192.168.7.1:1001
[2010-12-24 15:58:50.279233] I [server-handshake.c:535:server_setvolume]
vms-server: accepted client from 192.168.7.1:1018
[2010-12-24 15:59:02.100081] E [rpcsvc.c:874:rpcsvc_request_create]
rpc-service: RPC call decoding failed
[2010-12-24...
2018 Feb 28
1
Intermittent mount disconnect due to socket poller error
...socket_poller SERVER2:24007 failed (No data available)
<manual umount / mount>
SERVER2:/var/log/glusterfs/bricks/VOL-brick2.log
[2018-02-28 19:35:58.379953] E [socket.c:2632:socket_poller]
0-tcp.VOL-server: poll error on socket
[2018-02-28 19:35:58.380530] I [MSGID: 115036]
[server.c:527:server_rpc_notify] 0-VOL-server: disconnecting connection
from CLIENT-30688-2018/02/28-03:11:39:784734-VOL-client-1-0-0
[2018-02-28 19:35:58.380932] I [socket.c:3672:socket_submit_reply]
0-tcp.VOL-server: not connected (priv->connected = -1)
[2018-02-28 19:35:58.380960] E [rpcsvc.c:1364:rpcsvc_submit_generic]...
2018 Jan 19
0
geo-replication command rsync returned with 3
...[2018-01-19 14:23:21.995049] I [MSGID: 115029]
>[server-handshake.c:690:server_setvolume] 0-mvol1-server: accepted
>client from
>gl-master-01-22759-2018/01/19-14:23:21:928705-mvol1-client-0-0-0
>(version: 3.7.18)
>[2018-01-19 14:23:23.392692] I [MSGID: 115036]
>[server.c:552:server_rpc_notify] 0-mvol1-server: disconnecting
>connection from
>gl-master-04-8871-2018/01/19-14:23:18:129523-mvol1-client-0-0-0
>[2018-01-19 14:23:23.392746] I [MSGID: 101055]
>[client_t.c:420:gf_client_unref] 0-mvol1-server: Shutting down
>connection
>gl-master-04-8871-2018/01/19-14:23:18:1...
2018 Mar 07
0
Intermittent mount disconnect due to socket poller error
...nt>
>>
>>
>> SERVER2:/var/log/glusterfs/bricks/VOL-brick2.log
>> [2018-02-28 19:35:58.379953] E [socket.c:2632:socket_poller]
>> 0-tcp.VOL-server: poll error on socket
>> [2018-02-28 19:35:58.380530] I [MSGID: 115036]
>> [server.c:527:server_rpc_notify] 0-VOL-server: disconnecting
>> connection from
>> CLIENT-30688-2018/02/28-03:11:39:784734-VOL-client-1-0-0
>> [2018-02-28 19:35:58.380932] I [socket.c:3672:socket_submit_reply]
>> 0-tcp.VOL-server: not connected (priv->connected = -1)
>> [2018-02-2...
2018 Jan 18
0
issues after botched update
...18-01-18 08:38:39.040853] I [MSGID: 115029]
[server-handshake.c:793:server_setvolume] 0-home-server: accepted
client from
gluster00.cluster.local-21570-2018/01/18-08:38:38:940331-home-client-0-0-0
(version: 3.12.4)
[2018-01-18 08:38:39.412234] I [MSGID: 115036]
[server.c:527:server_rpc_notify] 0-home-server: disconnecting
connection from
gluster00.cluster.local-21570-2018/01/18-08:38:38:940331-home-client-0-0-0
[2018-01-18 08:38:39.435180] I [MSGID: 101055]
[client_t.c:443:gf_client_unref] 0-home-server: Shutting down
connection
storage00.railscluster.local-21570...
2023 Feb 23
1
Big problems after update to 9.6
...C_NAME:gvol0-client-0-RECON_NO:-0
(version: 9.1) with subvol /nodirectwritedata/gluster/gvol0
[2023-02-23 20:22:56.741545 +0000] W [socket.c:766:__socket_rwv]
0-tcp.gvol0-server: readv on 10.20.20.11:49144 failed (No data available)
[2023-02-23 20:22:56.741599 +0000] I [MSGID: 115036]
[server.c:500:server_rpc_notify] 0-gvol0-server: disconnecting connection
[{client-uid=CTX_ID:46b23c19-5114-4a20-9306-9ea6faf02d51-GRAPH_ID:0-PID:35568-HOST:br.m5voip.com-PC_NAME:gvol0-client-0-RECON_NO:-0}]
[2023-02-23 20:22:56.741866 +0000] I [MSGID: 101055]
[client_t.c:397:gf_client_unref] 0-gvol0-server: Shutting down connec...
2013 Mar 28
1
Glusterfs gives up with endpoint not connected
...6-2010 Gluster Inc. <http://www.gluster.com>
GlusterFS comes with ABSOLUTELY NO WARRANTY.
You may redistribute copies of GlusterFS under the terms of the GNU Affero
General Public License.
My /var/log/glusterfs/bricks/raid5hs-glusterfs-export.log
[2013-03-28 10:47:07.243980] I [server.c:438:server_rpc_notify]
0-sambavol-server: disconnected connection from 192.168.130.199:1023
[2013-03-28 10:47:07.244000] I
[server-helpers.c:783:server_connection_destroy] 0-sambavol-server:
destroyed connection of
tuepdc.local-16600-2013/03/28-09:32:28:258428-sambavol-client-0
[root at tuepdc bricks]# gluster volume...
2011 Jun 10
1
Crossover cable: single point of failure?
Dear community,
I have a 2-node Gluster cluster with one replicated volume shared with a
client via NFS. I discovered that if the replication link (an Ethernet
crossover cable) between the Gluster nodes breaks, my whole storage
becomes unavailable.
I am using Pacemaker/corosync with two virtual IPs (service IPs exposed
to the clients), so each node has its corresponding virtual IP, and
2023 Feb 24
1
Big problems after update to 9.6
...server: readv on 10.20.20.11:49144 failed (No data available)
[2023-02-23 20:22:56.741599 +0000] I [MSGID: 115036] [server.c:500:server_rpc_notify] 0-gvol0-server: disconnecting connection [{client-uid=CTX_ID:46b23c19-5114-4a20-9306-9ea6faf02d51-GRAPH_ID:0-PID:35568-HOST:br.m5voip.com-PC_NAME:gvol0-client-0-RECON_NO:-0}]
[2023-02-23 20:22:56.741866 +0000] I [MSGID: 101055] [client_t.c:397:gf_client_unref] 0-gvol0-server: Shutting down connect...
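In the 9.x releases the disconnect message quoted above switched to a structured `[{key=value}]` suffix, so the client is identified by a `client-uid` field rather than a trailing token. A minimal sketch for extracting it (the sample file is hypothetical, copied from the excerpt above):

```shell
# Hypothetical sample in the structured 9.x brick-log format quoted above.
cat > /tmp/sample-96.log <<'EOF'
[2023-02-23 20:22:56.741599 +0000] I [MSGID: 115036] [server.c:500:server_rpc_notify] 0-gvol0-server: disconnecting connection [{client-uid=CTX_ID:46b23c19-5114-4a20-9306-9ea6faf02d51-GRAPH_ID:0-PID:35568-HOST:br.m5voip.com-PC_NAME:gvol0-client-0-RECON_NO:-0}]
EOF

# Extract the client-uid value from the [{key=value}] suffix; the HOST:
# and PID: parts identify which client dropped the connection.
sed -n 's/.*client-uid=\([^}]*\)}.*/\1/p' /tmp/sample-96.log
```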
2013 May 13
0
Fwd: Seeing non-priv port + auth issue in the gluster brick log
...ion module is interested in accepting
remote-client (null)
[2013-05-11 06:40:19.912639] E [server-handshake.c:587:server_setvolume]
0-dpkvol-server: Cannot authenticate client from
vdsm_tsm_int-7221-2013/05/11-06:38:54:195128-dpkvol-client-0-0 3.4.0beta1
[2013-05-11 06:40:20.611853] I [server.c:771:server_rpc_notify]
0-dpkvol-server: disconnecting connectionfrom
vdsm_tsm_int-7221-2013/05/11-06:38:54:195128-dpkvol-client-0-0
[2013-05-11 06:40:20.611908] I
[server-helpers.c:735:server_connection_put] 0-dpkvol-server: Shutting
down connection
vdsm_tsm_int-7221-2013/05/11-06:38:54:195128-dpkvol-client-0-0
[2013-05...
2013 Nov 29
1
Self heal problem
Hi,
I have a glusterfs volume replicated on three nodes. I am planning to use
the volume as storage for VMware ESXi machines using NFS. The reason for
using three nodes is to be able to configure quorum and avoid
split-brains. However, during my initial testing, when I intentionally
and gracefully restarted the node "ned", a split-brain/self-heal error
occurred.
The log on "todd"