search for: server_rpc_notifi

Displaying 20 results from an estimated 32 matches for "server_rpc_notifi".

Did you mean: server_rpc_notify
2017 Jun 01
0
Gluster client mount fails in mid flight with signum 15
This has been solved, as far as we can tell. The problem was with KillUserProcesses=1 in logind.conf. This has been shown to kill mounts made using mount -a, both by root and by any user with sudo, at session logout. Hope this helps anybody else who runs into this. Thanks for all your help and cheers Gabbe On 1 June 2017, at 09:24, Gabriel Lindeborg <gabriel.lindeborg at
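For anyone hitting the same behaviour, the logind change described above usually looks roughly like this (a sketch only; file location and reload method can vary by distribution):

    # /etc/systemd/logind.conf -- stop logind from killing FUSE mounts at session logout
    [Login]
    KillUserProcesses=no

    # reload the setting
    systemctl restart systemd-logind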
2017 Jun 01
2
Gluster client mount fails in mid flight with signum 15
All four clients did run 3.10.2 as well. The volumes had been running fine until we upgraded to 3.10, when we hit some issues with port mismatches. We restarted all the volumes, the servers and the clients, and now hit this issue. We've since backed up the files, removed the volumes, removed the bricks, removed gluster, installed glusterfs 3.7.20, created new volumes on new bricks, restored the
2017 Jun 01
1
Gluster client mount fails in mid flight with signum 15
On Thu, Jun 01, 2017 at 01:52:23PM +0000, Gabriel Lindeborg wrote: > This has been solved, as far as we can tell. > > The problem was with KillUserProcesses=1 in logind.conf. This has been shown to > kill mounts made using mount -a, both by root and by any user with > sudo, at session logout. Ah, yes, that could well be the cause of the problem. > Hope this helps anybody else who runs
2017 Oct 17
1
Distribute rebalance issues
Nithya, Is there any way to increase the logging level of the brick? There is nothing obvious (to me) in the log (see below for the same time period as the latest rebalance failure). This is the only brick on that server that has disconnects like this. Steve [2017-10-17 02:22:13.453575] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-video-server: accepted client from
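One way to raise brick-side logging for this kind of disconnect, assuming the 3.x CLI and that the volume is the 'video' volume seen in the log above (verify the option name against your release):

    gluster volume set video diagnostics.brick-log-level DEBUG
    # drop it back once the disconnect has been captured
    gluster volume set video diagnostics.brick-log-level INFO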
2018 Jan 05
0
Another VM crashed
Hi all, I still experience VM crashes with glusterfs. The VM I had problems with (it kept crashing) was moved away from gluster and has had no problems since. Now another VM is doing the same: it just shuts down. Gluster is 3.8.13. I know you are now on 3.10 and 3.12, but I had trouble upgrading another cluster to 3.10 (although the processes were off and no files were in use, gluster had to heal
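When pending heals block an upgrade like this, the usual first checks look roughly as follows (the volume name is a placeholder):

    gluster volume heal <volname> info
    gluster volume heal <volname> info split-brain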
2017 Oct 17
0
Distribute rebalance issues
On 17 October 2017 at 14:48, Stephen Remde <stephen.remde at gaist.co.uk> wrote: > Hi, > > > I have a rebalance that has failed on one peer twice now. Rebalance logs below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick, but immediately stops the rebalance on that peer instead of waiting for reconnection - which happens
2017 Oct 17
2
Distribute rebalance issues
Hi, I have a rebalance that has failed on one peer twice now. Rebalance logs below (directories anonymised and some irrelevant log lines cut). It looks like it loses connection to the brick, but immediately stops the rebalance on that peer instead of waiting for reconnection - which happens a second or so later. Is this normal behaviour? So far it has been the same server and the same (remote)
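For tracking a failed rebalance like this, the standard status commands are roughly as follows (the volume name is a placeholder):

    gluster volume rebalance <volname> status
    # per-brick client connections, to see which client is dropping
    gluster volume status <volname> clients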
2012 Dec 17
2
Transport endpoint
Hi, I've got a Gluster error: Transport endpoint not connected. It came up twice after trying to rsync a 2 TB filesystem over; it reached about 1.8 TB and got the error. Logs on the server side (in reverse time order): [2012-12-15 00:53:24.747934] I [server-helpers.c:629:server_connection_destroy] 0-RedhawkShared-server: destroyed connection of
2018 Jan 19
2
geo-replication command rsync returned with 3
Dear All, we are running a distributed replicated volume on 4 nodes, including geo-replication to another location. The geo-replication was running fine for months; since 18th Jan. it has been faulty. The geo-rep log on the master shows the following error in a loop, while the logs on the slave just show informational ('I') messages... Somewhat suspicious are the frequent 'shutting down connection'
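For a faulty geo-replication session, the usual checks and restart look roughly like this (master volume, slave host and slave volume are placeholders):

    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> status detail
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> stop
    gluster volume geo-replication <mastervol> <slavehost>::<slavevol> start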
2010 Dec 24
1
node crashing on 4 replicated-distributed cluster
Hi, I've got trouble after a few minutes of glusterfs operation. I set up a 4-node replica 4 storage, with 2 bricks on every server: # gluster volume create vms replica 4 transport tcp 192.168.7.1:/srv/vol1 192.168.7.2:/srv/vol1 192.168.7.3:/srv/vol1 192.168.7.4:/srv/vol1 192.168.7.1:/srv/vol2 192.168.7.2:/srv/vol2 192.168.7.3:/srv/vol2 192.168.7.4:/srv/vol2 I started copying files with
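To confirm how the eight bricks above were grouped into replica sets, something like the following is typically used (a sketch, not from the original thread):

    gluster volume info vms
    gluster volume status vms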
2018 Feb 28
1
Intermittent mount disconnect due to socket poller error
We've been on the Gluster 3.7 series for several years with things pretty stable. Given that it's reached EOL, yesterday I upgraded to 3.13.2. Every Gluster mount and server was disabled then brought back up after the upgrade, changing the op-version to 31302 and then trying it all out. It went poorly. Every sizable read and write (100s of MB) led to 'Transport endpoint not
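The op-version bump described above is normally done along these lines (a sketch; the option names assume GlusterFS 3.10 or later):

    # what the cluster is currently running at
    gluster volume get all cluster.op-version
    # highest op-version the installed binaries support
    gluster volume get all cluster.max-op-version
    # raise it only after every server and client has been upgraded
    gluster volume set all cluster.op-version 31302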
2018 Jan 19
0
geo-replication command rsync returned with 3
Fwiw, rsync error 3 is: "Errors selecting input/output files, dirs" On January 19, 2018 7:36:18 AM PST, Dietmar Putz <dietmar.putz at 3qsdn.com> wrote: >Dear All, > >we are running a dist. repl. volume on 4 nodes including >geo-replication >to another location. >the geo-replication was running fine for months. >since 18th jan. the geo-replication is faulty.
2018 Mar 07
0
Intermittent mount disconnect due to socket poller error
I happened to review the status of volume clients and realized they were reporting a mix of different op-versions: 3.13 clients were still connecting to the downgraded 3.12 server (likely a timing issue between downgrading clients and mounting volumes). Remounting the reported clients has resulted in the correct op-version all around and about a week free of these errors. On 2018-03-01
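Checking which clients each brick sees is roughly as follows (volume name is a placeholder; recent releases include an op-version column in this output, older ones may not):

    gluster volume status <volname> clients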
2018 Jan 18
0
issues after botched update
Hi, A client has a glusterfs cluster that's behaving weirdly after some issues during upgrade. They upgraded a glusterfs 2+1 cluster (replica with arbiter) from 3.10.9 to 3.12.4 on CentOS and now have weird issues and some files that may be corrupted. They also switched from NFS-Ganesha, which crashed every couple of days, to glusterfs subdirectory mounting. Subdirectory mounting was the
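For reference, subdirectory mounting (available from GlusterFS 3.12 onwards) looks roughly like this, with server, volume and path names as placeholders:

    mount -t glusterfs server1:/myvol/subdir /mnt/subdir
    # or the equivalent fstab entry
    server1:/myvol/subdir  /mnt/subdir  glusterfs  defaults,_netdev  0 0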
2023 Feb 23
1
Big problems after update to 9.6
Hello, We have a cluster with two nodes, "sg" and "br", which were running GlusterFS 9.1, installed via the Ubuntu package manager. We updated the Ubuntu packages on "sg" to version 9.6, and now have big problems. The "br" node is still on version 9.1. Running "gluster volume status" on either host gives "Error : Request timed out". On
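The usual first checks when the CLI times out like this are roughly (run on both nodes):

    gluster peer status
    gluster pool list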
2013 Mar 28
1
Glusterfs gives up with endpoint not connected
Dear all, right out of the blue glusterfs is not working fine any more; every now and then it stops working, telling me 'Endpoint not connected' and writing core files: [root at tuepdc /]# file core.15288 core.15288: ELF 64-bit LSB core file AMD x86-64, version 1 (SYSV), SVR4-style, from 'glusterfs' My version: [root at tuepdc /]# glusterfs --version glusterfs 3.2.0 built on Apr 22 2011
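To get a backtrace out of a core file like the one above, something along these lines is typically used (the binary path is an assumption and may differ per distribution):

    gdb -batch -ex 'thread apply all bt' /usr/sbin/glusterfs core.15288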
2011 Jun 10
1
Crossover cable: single point of failure?
Dear community, I have a 2-node gluster cluster with one replicated volume shared to a client via NFS. I discovered that if the replication link (Ethernet crossover cable) between the Gluster nodes breaks, my whole storage is no longer available. I am using Pacemaker/corosync with two virtual IPs (service IPs exposed to the clients), so each node has its corresponding virtual IP, and
2023 Feb 24
1
Big problems after update to 9.6
Hi David, It seems like a network issue to me, as it's unable to connect to the other node and is getting timeouts. A few things you can check: * Check the /etc/hosts file on both servers and make sure it has the correct IP of the other node. * Are you binding gluster to a specific IP that changed after your update? * Check if you can access port 24007 from the other host (see the sketch below). If
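A sketch of the port check mentioned above (the peer address is a placeholder; brick ports default to 49152 and up):

    nc -zv <other-node-ip> 24007
    nc -zv <other-node-ip> 49152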
2013 May 13
0
Fwd: Seeing non-priv port + auth issue in the gluster brick log
Forwarding to Gluster users, in the hope that many more people can see this and hopefully provide some clues. Thanks, Deepak -------- Original Message -------- Subject: [Gluster-devel] Seeing non-priv port + auth issue in the gluster brick log Date: Sat, 11 May 2013 12:43:20 +0530 From: Deepak C Shetty <deepakcs at linux.vnet.ibm.com> Organization: IBM India Pvt. Ltd. To: Gluster
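If the problem is indeed clients connecting from non-privileged (>1024) ports, the commonly cited settings look roughly like this (verify the option names against your version):

    gluster volume set <volname> server.allow-insecure on
    # and in /etc/glusterfs/glusterd.vol on every server, followed by a glusterd restart:
    option rpc-auth-allow-insecure on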
2013 Nov 29
1
Self heal problem
Hi, I have a glusterfs volume replicated on three nodes. I am planning to use the volume as storage for VMware ESXi machines using NFS. The reason for using three nodes is to be able to configure quorum and avoid split-brains. However, during my initial testing, when I intentionally and gracefully restarted the node "ned", a split-brain/self-heal error occurred. The log on "todd"
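The quorum configuration usually recommended for a three-way replica like this is roughly (volume name is a placeholder; check the option names for your release):

    gluster volume set <volname> cluster.quorum-type auto
    gluster volume set <volname> cluster.server-quorum-type server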