Displaying 20 results from an estimated 46 matches for "server_setvolume".
2017 Jun 01
0
Gluster client mount fails in mid flight with signum 15
...lfresco-client-2-0-0
[2017-05-31 05:52:19.117469] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-alfresco-server: Shutting down connection tms02.horisont.svenskaspel.se-3219-2017/05/31-05:52:19:118674-alfresco-client-2-0-0
[2017-05-31 05:52:43.020403] I [MSGID: 115029] [server-handshake.c:695:server_setvolume] 0-alfresco-server: accepted client from tms01.horisont.svenskaspel.se-22656-2017/05/31-05:52:43:701-alfresco-client-2-0-0 (version: 3.10.2)
[2017-05-31 05:52:43.027816] I [MSGID: 115036] [server.c:559:server_rpc_notify] 0-alfresco-server: disconnecting connection from tms01.horisont.svenskaspel.se...
2017 Jun 01
2
Gluster client mount fails in mid flight with signum 15
...469] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-alfresco-server: Shutting down connection tms02.horisont.svenskaspel.se-3219-2017/05/31-05:52:19:118674-alfresco-client-2-0-0
[2017-05-31 05:52:43.020403] I [MSGID: 115029] [server-handshake.c:695:server_setvolume] 0-alfresco-server: accepted client from tms01.horisont.svenskaspel.se-22656-2017/05/31-05:52:43:701-alfresco-client-2-0-0 (version: 3.10.2)
[2017-05-31 05:52:43.027816] I [MSGID: 115036] [server.c:559:server_rpc_notify] 0-alfresco-server: disconnecting...
2017 Jun 01
1
Gluster client mount fails in mid flight with signum 15
...ient-2-0-0
> [2017-05-31 05:52:19.117469] I [MSGID: 101055] [client_t.c:436:gf_client_unref] 0-alfresco-server: Shutting down connection tms02.horisont.svenskaspel.se-3219-2017/05/31-05:52:19:118674-alfresco-client-2-0-0
> [2017-05-31 05:52:43.020403] I [MSGID: 115029] [server-handshake.c:695:server_setvolume] 0-alfresco-server: accepted client from tms01.horisont.svenskaspel.se-22656-2017/05/31-05:52:43:701-alfresco-client-2-0-0 (version: 3.10.2)
> [2017-05-31 05:52:43.027816] I [MSGID: 115036] [server.c:559:server_rpc_notify] 0-alfresco-server: disconnecting connection from tms01.horisont.svenskasp...
2017 Oct 17
1
Distribute rebalance issues
...ease the logging level of the brick? There is
nothing obvious (to me) in the log (see below for the same time period as
the latest rebalance failure). This is the only brick on that server that
has disconnects like this.
Steve
[2017-10-17 02:22:13.453575] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-video-server: accepted client from node-dc4-03-5825-2017/08/30-20:45:55:170091-video-client-4-2-318 (version: 3.8.15)
[2017-10-17 02:22:31.353286] I [MSGID: 115036] [server.c:548:server_rpc_notify] 0-video-server: disconnecting connection from node-dc4-02-29040-2017/08/04-09:31:22:842268-video-c...
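A minimal sketch of raising the brick log level to capture more detail around such disconnects, assuming the volume name video inferred from the 0-video-server log prefix above:

gluster volume set video diagnostics.brick-log-level DEBUG   # verbose brick logs
gluster volume set video diagnostics.brick-log-level INFO    # revert after capturing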
2011 Feb 04
1
3.1.2 Debian - client_rpc_notify "failed to get the port number for remote subvolume"
I have glusterfs 3.1.2 running on Debian; I'm able to start the volume
and mount it via mount -t glusterfs, and I can see everything. I am
still seeing the following error in /var/log/glusterfs/nfs.log:
[2011-02-04 13:09:16.404851] E [client-handshake.c:1079:client_query_portmap_cbk] bhl-volume-client-98: failed to get the port number for remote subvolume
[2011-02-04 13:09:16.404909] I
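The portmap error above usually means the client graph could not learn a brick's port from glusterd, most often because a brick process is not running. A minimal check, assuming the volume name bhl-volume from the log prefix (note that gluster volume status postdates 3.1.x):

gluster volume info bhl-volume     # confirm the volume is started
gluster volume status bhl-volume   # on newer releases: brick PIDs and listening ports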
2018 Jan 05
0
Another VM crashed
...:41:30:493103-datastore1-client-1-0-2
[2018-01-02 07:56:32.763554] I [MSGID: 101055] [client_t.c:415:gf_client_unref] 0-datastore1-server: Shutting down connection srvpve4-1652-2017/05/03-19:41:30:493103-datastore1-client-1-0-2
[2018-01-03 07:58:37.342761] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-datastore1-server: accepted client from srvpve4-1573-2018/01/03-07:58:40:761060-datastore1-client-1-0-0 (version: 3.8.13)
root@srvpve2:/var/log/glusterfs/bricks# more data-brick2-brick.log
[2017-12-31 05:25:01.725328] I [glusterfsd-mgmt.c:1600:mgmt_getspec_cbk] 0-glusterfs: No change in volf...
2017 Oct 17
0
Distribute rebalance issues
On 17 October 2017 at 14:48, Stephen Remde <stephen.remde@gaist.co.uk> wrote:
> Hi,
>
>
> I have a rebalance that has failed on one peer twice now. Rebalance logs below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick, but immediately stops the rebalance on that peer instead of waiting for reconnection - which happens
2018 Jan 14
0
Cannot write data to a volume when a quota limits its capacity and the volume is mounted on the server itself, on the arm64 (aarch64) architecture
...[root@f08n25bricks]# tail -n 100 data-gluster-1045fe63-bf09-42f5-986b-ce2d3d63def0-test_vol.log
[2018-02-02 11:23:28.615522] I [login.c:81:gf_auth] 0-auth/login: allowed user names: ff3d40c1-1ef1-4eb3-a448-19ec3cb60f16
[2018-02-02 11:23:28.615573] I [MSGID: 115029] [server-handshake.c:690:server_setvolume] 0-test_vol-server: accepted client from f08n33-90322-2018/01/14-06:17:22:557038-test_vol-client-1-0-0 (version: 3.7.20)
[2018-02-02 11:23:28.730628] I [login.c:81:gf_auth] 0-auth/login: allowed user names: ff3d40c1-1ef1-4eb3-a448-19ec3cb60f16
[2018-02-02 11:23:28.730683] I [MSGID: 115029] [server-...
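For reference, the quota setup described in this report would be created roughly like this, assuming the volume name test_vol from the log, an illustrative 10GB limit, and the host f08n25 from the prompt for the self-mount:

gluster volume quota test_vol enable
gluster volume quota test_vol limit-usage / 10GB    # cap the volume root
mount -t glusterfs f08n25:/test_vol /mnt/test_vol   # mount the volume on the server itself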
2017 Oct 17
2
Distribute rebalance issues
Hi,
I have a rebalance that has failed on one peer twice now. Rebalance
logs below (directories anonymised and some irrelevant log lines cut).
It looks like it loses connection to the brick, but immediately stops
the rebalance on that peer instead of waiting for reconnection - which
happens a second or so later.
Is this normal behaviour? So far it has been the same server and the
same (remote)
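A sketch of how a failed rebalance is typically inspected and retried, assuming the volume name video taken from the brick log quoted elsewhere in this thread:

gluster volume rebalance video status   # per-node progress, failure counts
gluster volume rebalance video start    # a failed rebalance is retried by starting it again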
2018 Jan 19
2
geo-replication command rsync returned with 3
...or] Monitor: worker(/brick1/mvol1) died in startup phase
/var/log/glusterfs/bricks/brick1-mvol1.log
[2018-01-19 14:23:18.264649] I [login.c:81:gf_auth] 0-auth/login: allowed user names: 2bc51718-940f-4a9c-9106-eb8404b95622
[2018-01-19 14:23:18.264689] I [MSGID: 115029] [server-handshake.c:690:server_setvolume] 0-mvol1-server: accepted client from gl-master-04-8871-2018/01/19-14:23:18:129523-mvol1-client-0-0-0 (version: 3.7.18)
[2018-01-19 14:23:21.995012] I [login.c:81:gf_auth] 0-auth/login: allowed user names: 2bc51718-940f-4a9c-9106-eb8404b95622
[2018-01-19 14:23:21.995049] I [MSGID: 115029] [ser...
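Per the rsync man page, exit code 3 means "errors selecting input/output files, dirs". A minimal sketch for inspecting the session, assuming the master volume mvol1 from the log and an illustrative slave gl-slave-01::svol1:

gluster volume geo-replication mvol1 gl-slave-01::svol1 status detail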
2013 Sep 16
1
Gluster 3.4 QEMU and Permission Denied Errors
...qcow2 gluster://localhost/vmdata/test1.qcow 8G
I'm able to boot my created virtual machine but in the logs I see this:
[2013-09-16 15:16:04.471205] E [addr.c:152:gf_auth] 0-auth/addr: client is bound to port 46021 which is not privileged
[2013-09-16 15:16:04.471277] I [server-handshake.c:567:server_setvolume] 0-vmdata-server: accepted client from gluster1.local-1061-2013/09/16-15:16:04:441166-vmdata-client-1-0 (version: 3.4.0)
[2013-09-16 15:16:04.488000] I [server-rpc-fops.c:1572:server_open_cbk] 0-vmdata-server: 18: OPEN /test1.qcow (6b63a78b-7d5c-4195-a172-5bb6ed1e7dac) ==> (Permission denied)
I...
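The E-level line shows the server rejecting a client bound to an unprivileged port (>1024), which is how qemu's libgfapi client connects. The commonly documented remedy, sketched here for the vmdata volume:

gluster volume set vmdata server.allow-insecure on
# and in /etc/glusterfs/glusterd.vol on each server, followed by a glusterd restart:
#   option rpc-auth-allow-insecure on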
2013 Nov 29
1
Self heal problem
Hi,
I have a glusterfs volume replicated on three nodes. I am planning to
use the volume as storage for VMware ESXi machines using NFS. The
reason for using three nodes is to be able to configure Quorum and
avoid split-brains. However, during my initial testing, when I
intentionally and gracefully restarted the node "ned", a
split-brain/self-heal error occurred.
The log on "todd"
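For context, the quorum configuration a three-node replica is usually given looks like this; the volume name vol0 is illustrative:

gluster volume set vol0 cluster.quorum-type auto          # client-side: writes require a majority of replicas
gluster volume set vol0 cluster.server-quorum-type server # server-side: bricks go down if peers lose quorum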
2010 Dec 24
1
node crashing on 4 replicated-distributed cluster
...ld have not come
[2010-12-24 15:58:50.264731] E [rpcsvc.c:874:rpcsvc_request_create] rpc-service: RPC call decoding failed
[2010-12-24 15:58:50.264835] I [server.c:428:server_rpc_notify] vms-server: disconnected connection from 192.168.7.1:1001
[2010-12-24 15:58:50.279233] I [server-handshake.c:535:server_setvolume] vms-server: accepted client from 192.168.7.1:1018
[2010-12-24 15:59:02.100081] E [rpcsvc.c:874:rpcsvc_request_create] rpc-service: RPC call decoding failed
[2010-12-24 15:59:02.100160] I [server.c:428:server_rpc_notify] vms-server: disconnected connection from 192.168.7.1:1018
[2010-12-24 15:59:02...
2018 Jan 19
0
geo-replication command rsync returned with 3
...l1) died in startup phase
>
>
>/var/log/glusterfs/bricks/brick1-mvol1.log
>
>[2018-01-19 14:23:18.264649] I [login.c:81:gf_auth] 0-auth/login: allowed user names: 2bc51718-940f-4a9c-9106-eb8404b95622
>[2018-01-19 14:23:18.264689] I [MSGID: 115029] [server-handshake.c:690:server_setvolume] 0-mvol1-server: accepted client from gl-master-04-8871-2018/01/19-14:23:18:129523-mvol1-client-0-0-0 (version: 3.7.18)
>[2018-01-19 14:23:21.995012] I [login.c:81:gf_auth] 0-auth/login: allowed user names: 2bc51718-940f-4a9c-9106-eb8404b95622
>[2018-01-19 14:23:21.995049]...
2013 Jun 03
2
recovering gluster volume || startup failure
...09:29:03.963001] W [socket.c:410:__socket_keepalive] 0-socket: failed to set keep idle on socket 8
[2013-06-02 09:29:03.963046] W [socket.c:1876:socket_server_event_handler] 0-socket.glusterfsd: Failed to set keep-alive: Operation not supported
[2013-06-02 09:29:04.850120] I [server-handshake.c:571:server_setvolume] 0-gvol1-server: accepted client from iiclab-oel1-9347-2013/06/02-09:29:00:835397-gvol1-client-0-0 (version: 3.3.1)
[2013-06-02 09:32:16.973786] W [glusterfsd.c:831:cleanup_and_exit] (-->/usr/lib64/libgfrpc.so.0(rpcsvc_notify+0x93) [0x30cac0a5b3] (-->/usr/lib64/libgfrpc.so.0(rpcsvc_handle_rpc...
2018 Jan 18
0
issues after botched update
...te] 0-/data/home/brick1: allowed = "*", received addr = "*.*.*.*"
[2018-01-18 08:38:39.040813] I [login.c:76:gf_auth] 0-auth/login: allowed user names: 2d225def-3f34-472a-b8e4-c183acafc151
[2018-01-18 08:38:39.040853] I [MSGID: 115029] [server-handshake.c:793:server_setvolume] 0-home-server: accepted client from gluster00.cluster.local-21570-2018/01/18-08:38:38:940331-home-client-0-0-0 (version: 3.12.4)
[2018-01-18 08:38:39.412234] I [MSGID: 115036] [server.c:527:server_rpc_notify] 0-home-server: disconnecting connection from gluster00.cluste...
2023 Feb 23
1
Big problems after update to 9.6
...ectwritedata/gluster/gvol0: allowed = "*", received addr = "10.20.20.11"
[2023-02-23 20:22:56.717817 +0000] I [login.c:110:gf_auth] 0-auth/login: allowed user names: a26c7de4-1236-4e0a-944a-cb82de7f7f0e
[2023-02-23 20:22:56.717840 +0000] I [MSGID: 115029] [server-handshake.c:561:server_setvolume] 0-gvol0-server: accepted client from CTX_ID:46b23c19-5114-4a20-9306-9ea6faf02d51-GRAPH_ID:0-PID:35568-HOST:br.m5voip.com-PC_NAME:gvol0-client-0-RECON_NO:-0 (version: 9.1) with subvol /nodirectwritedata/gluster/gvol0
[2023-02-23 20:22:56.741545 +0000] W [socket.c:766:__socket_rwv] 0-tcp.gvol0-serve...
2011 Jun 10
1
Crossover cable: single point of failure?
Dear community,
I have a 2-node gluster cluster with one replicated volume shared to a
client via NFS. I discovered that if the replication link (Ethernet
crossover cable) between the Gluster nodes breaks, my whole storage
becomes unavailable.
I am using Pacemaker/corosync with two virtual IPs (service IPs exposed
to the clients), so each node has its corresponding virtual IP, and
2023 Feb 24
1
Big problems after update to 9.6
...ectwritedata/gluster/gvol0: allowed = "*", received addr = "10.20.20.11"
[2023-02-23 20:22:56.717817 +0000] I [login.c:110:gf_auth] 0-auth/login: allowed user names: a26c7de4-1236-4e0a-944a-cb82de7f7f0e
[2023-02-23 20:22:56.717840 +0000] I [MSGID: 115029] [server-handshake.c:561:server_setvolume] 0-gvol0-server: accepted client from CTX_ID:46b23c19-5114-4a20-9306-9ea6faf02d51-GRAPH_ID:0-PID:35568-HOST:br.m5voip.com-PC_NAME:gvol0-client-0-RECON_NO:-0 (version: 9.1) with subvol /nodirectwritedata/gluster/gvol0
[2023-02-23 20:22:56.741545 +0000] W [socket.c:766:__socket_rwv] 0-tcp.gvol0-serve...
2018 Jan 16
1
[Possible SPAM] Re: Problem with Gluster 3.12.4, VM and sharding
...fs/glustershd.log:
[2018-01-14 02:23:02.731245] I [glusterfsd-mgmt.c:1821:mgmt_getspec_cbk] 0-glusterfs: No change in volfile,continuing
(empty too)
/var/log/glusterfs/bricks/brick-brick2-gv2a2.log (the volume in question):
[2018-01-16 15:14:37.809965] I [MSGID: 115029] [server-handshake.c:793:server_setvolume] 0-gv2a2-server: accepted client from ovh-ov1-10302-2018/01/16-15:14:37:790306-gv2a2-client-0-0-0 (version: 3.12.4)
[2018-01-16 15:16:41.471751] E [MSGID: 113020] [posix.c:1485:posix_mknod] 0-gv2a2-posix: setting gfid on /bricks/brick2/gv2a2/.shard/62335cb9-c7b5-4735-a879-59cff93fe622.4 failed...