similar to: Intermittent mount disconnect due to socket poller error

Displaying 20 results from an estimated 1000 matches similar to: "Intermittent mount disconnect due to socket poller error"

2018 Mar 07
0
Intermittent mount disconnect due to socket poller error
I happened to review the status of volume clients and realized they were reporting a mix of different op-versions: 3.13 clients were still connecting to the downgraded 3.12 server (likely a timing issue between downgrading clients and mounting volumes). Remounting the reported clients has resulted in the correct op-version all around and about a week free of these errors. On 2018-03-01
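A quick way to spot this kind of op-version mix is to compare what the cluster advertises with what the connected clients report. A minimal sketch, assuming a volume named myvolume (a placeholder) and a reasonably recent gluster CLI; the exact output fields vary by release:

    # cluster-wide operating version and the maximum the installed packages support
    gluster volume get all cluster.op-version
    gluster volume get all cluster.max-op-version

    # per-client connection details for one volume; newer releases list each client's op-version
    gluster volume status myvolume clients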
2023 Feb 23
1
Big problems after update to 9.6
Hello, We have a cluster with two nodes, "sg" and "br", which were running GlusterFS 9.1, installed via the Ubuntu package manager. We updated the Ubuntu packages on "sg" to version 9.6, and now have big problems. The "br" node is still on version 9.1. Running "gluster volume status" on either host gives "Error : Request timed out". On
2023 Feb 24
1
Big problems after update to 9.6
Hi David, It seems like a network issue to me, as it's unable to connect to the other node and times out. A few things you can check: * Check the /etc/hosts file on both servers and make sure it has the correct IP of the other node. * Are you binding gluster to a specific IP that changed after your update? * Check whether you can access port 24007 from the other host. If
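A rough sketch of those checks with standard tools, assuming the peer's hostname is br and whichever of netcat or telnet is installed:

    # confirm name resolution matches what /etc/hosts should say
    getent hosts br

    # confirm the glusterd management port is reachable from this node
    nc -vz br 24007        # or: telnet br 24007

    # confirm the peers still see each other
    gluster peer status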
2017 Dec 15
3
Production Volume will not start
Hi all, I have an issue where our volume will not start from any node. When attempting to start the volume it will eventually return: Error: Request timed out For some time after that, the volume is locked and we either have to wait or restart Gluster services. In the glusterd.log, it shows the following: [2017-12-15 18:00:12.423478] I [glusterd-utils.c:5926:glusterd_brick_start]
2017 Dec 18
0
Production Volume will not start
On Sat, Dec 16, 2017 at 12:45 AM, Matt Waymack <mwaymack at nsgdv.com> wrote: > Hi all, > I have an issue where our volume will not start from any node. When attempting to start the volume it will eventually return: > Error: Request timed out > For some time after that, the volume is locked and we either have to wait or restart
2018 Jan 18
0
issues after botched update
Hi, A client has a glusterfs cluster that's behaving weirdly after some issues during an upgrade. They upgraded a glusterfs 2+1 cluster (replica with arbiter) from 3.10.9 to 3.12.4 on CentOS and now have odd issues and some files that may be corrupted. They also switched from NFS-Ganesha, which crashed every couple of days, to glusterfs subdirectory mounting. Subdirectory mounting was the
2018 Feb 26
0
rpc/glusterd-locks error
Good morning. We have a 6 node cluster. 3 nodes are participating in a replica 3 volume. Naming convention: xx01 - 3 nodes participating in ovirt_vol xx02 - 3 nodes NOT participating in ovirt_vol Last week, we restarted glusterd on each node in the cluster to update (one at a time). The three xx01 nodes all show the following in glusterd.log: [2018-02-26 14:31:47.330670] E
2018 Jan 19
2
geo-replication command rsync returned with 3
Dear All, we are running a distributed replicated volume on 4 nodes, including geo-replication to another location. The geo-replication was running fine for months; since 18th Jan it has been faulty. The geo-rep log on the master shows the following error in a loop, while the logs on the slave just show 'I'nformational messages... Somewhat suspicious are the frequent 'shutting down connection'
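When a geo-replication session turns faulty like this, the usual first step is to check the worker state and then the gsyncd log on the master. A minimal sketch with placeholder names (mastervol, slavehost, slavevol); the session directory name under the log path varies by version:

    # show the state (Active/Passive/Faulty) of each geo-rep worker
    gluster volume geo-replication mastervol slavehost::slavevol status detail

    # master-side geo-rep logs live under /var/log/glusterfs/geo-replication/
    tail -f /var/log/glusterfs/geo-replication/mastervol*/*.log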
2017 Nov 24
1
SSL configuration
Hello subscribers, I have a very strange question regarding SSL setup on gluster storage. I have created a common CA and signed certificates for my gluster nodes, placed the host certificate, key, and common CA certificate into /etc/ssl/, and created a file called secure-access in /var/lib/glusterd/. Then I started glusterd on all nodes; the system works fine, and peer status shows all of my nodes. No problem.
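For reference, the TLS setup described here usually comes down to the following layout on every node. A sketch assuming the default paths documented for GlusterFS and a placeholder volume name; adjust to your own CA workflow:

    # per-host key and certificate, plus the shared CA bundle
    ls /etc/ssl/glusterfs.key /etc/ssl/glusterfs.pem /etc/ssl/glusterfs.ca

    # enable TLS on the management path (glusterd to glusterd, and the CLI)
    touch /var/lib/glusterd/secure-access

    # optionally enable TLS on a volume's I/O path as well
    gluster volume set myvolume client.ssl on
    gluster volume set myvolume server.ssl on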
2014 Dec 10
0
[PATCH v3 2/2] fb/nvaa: Enable non-isometric poller on NVAA/NVAC
(This is a v3 of patch "drm/nouveau/fb/nv50: Add PFB writes") This fixes a GPU lockup on 9400M (NVAC) when using acceleration, see https://bugs.freedesktop.org/show_bug.cgi?id=27501 v2: - Move code to subdev/fb/nv50.c as suggested by Roy Spliet; - Remove arbitrary writes to 100c18/100c24 - Replace the arbitrary value written to 100c1c with the address of a scratch page as proposed by Ilia
2017 Jun 01
0
Gluster client mount fails in mid flight with signum 15
This has been solved, as far as we can tell. The problem was KillUserProcesses=1 in logind.conf. This has been shown to kill mounts made using mount -a, both by root and by any user with sudo, at session logout. Hope this helps anybody else who runs into this. Thanks for all your help and cheers Gabbe On 1 June 2017 at 09:24, Gabriel Lindeborg <gabriel.lindeborg at
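For anyone hitting the same thing, the fix described above corresponds to a one-line change in systemd-logind's configuration. A minimal sketch; the setting is standard systemd, though how you restart logind may vary by distribution:

    # /etc/systemd/logind.conf
    # keep user processes (including fuse mounts started via sudo) alive after logout
    KillUserProcesses=no

    # apply the change
    systemctl restart systemd-logind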
2017 Jun 01
2
Gluster client mount fails in mid flight with signum 15
All four clients did run 3.10.2 as well. The volumes had been running fine until we upgraded to 3.10, when we hit some issues with port mismatches. We restarted all the volumes, the servers and the clients, and now hit this issue. We've since backed up the files, removed the volumes, removed the bricks, removed gluster, installed glusterfs 3.7.20, created new volumes on new bricks, restored the
2017 Jun 01
1
Gluster client mount fails in mid flight with signum 15
On Thu, Jun 01, 2017 at 01:52:23PM +0000, Gabriel Lindeborg wrote: > This has been solved, as far as we can tell. > The problem was KillUserProcesses=1 in logind.conf. This has been shown to kill mounts made using mount -a, both by root and by any user with sudo, at session logout. Ah, yes, that could well be the cause of the problem. > Hope this helps anybody else who runs
2018 Feb 13
2
Failed to get quota limits
Hi Hari, Sorry for not providing you more details from the start. Here below you will find all the relevant log entries and info. Regarding the quota.conf file I have found one for my volume but it is a binary file. Is it supposed to be binary or text? Regards, M. *** gluster volume info myvolume *** Volume Name: myvolume Type: Replicate Volume ID: e7a40a1b-45c9-4d3c-bb19-0c59b4eceec5 Status:
2017 Oct 17
1
Distribute rebalance issues
Nithya, Is there any way to increase the logging level of the brick? There is nothing obvious (to me) in the log (see below for the same time period as the latest rebalance failure). This is the only brick on that server that has disconnects like this. Steve [2017-10-17 02:22:13.453575] I [MSGID: 115029] [server-handshake.c:692:server_setvolume] 0-video-server: accepted client from
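On raising the brick log level, gluster exposes per-volume diagnostics options for this. A minimal sketch, assuming the volume is called video as the log prefix above suggests; remember to lower the level again afterwards, since DEBUG is very chatty:

    # raise brick-side logging for the volume
    gluster volume set video diagnostics.brick-log-level DEBUG

    # reproduce the disconnect, then drop back to the default
    gluster volume set video diagnostics.brick-log-level INFO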
2018 Feb 26
1
Problems with write-behind with large files on Gluster 3.8.4
Hello, I'm having problems when write-behind is enabled on Gluster 3.8.4. I have 2 Gluster servers, each with a single brick that is mirrored between them. The code causing these issues reads two data files, each approx. 128G in size. It opens a third file, mmap()'s that file, and subsequently reads and writes to it. The third file, on successful runs (without write-behind enabled)
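One way to confirm write-behind is the trigger is to switch the translator off for the volume while testing. A minimal sketch with a placeholder volume name:

    # disable the write-behind translator for this volume
    gluster volume set myvolume performance.write-behind off

    # re-run the mmap() read/write workload; re-enable afterwards if it was not the cause
    gluster volume set myvolume performance.write-behind on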
2018 Feb 13
0
Failed to get quota limits
Hi, A part of the log won't be enough to debug the issue; I need all the log messages to date. You can send them as attachments. Yes, quota.conf is a binary file. And I need the volume status output too. On Tue, Feb 13, 2018 at 1:56 PM, mabi <mabi at protonmail.ch> wrote: > Hi Hari, > Sorry for not providing you more details from the start. Here below you will > find all
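When gathering what is being asked for here, the relevant pieces usually come from the standard log directory plus the volume status output. A rough sketch, assuming default log locations (file names vary slightly between releases) and the volume name used in this thread:

    # quota and management daemon logs, default locations
    ls -l /var/log/glusterfs/quotad.log /var/log/glusterfs/glusterd.log

    # volume status output, as requested
    gluster volume status myvolume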
2018 Jan 19
0
geo-replication command rsync returned with 3
Fwiw, rsync error 3 is: "Errors selecting input/output files, dirs" On January 19, 2018 7:36:18 AM PST, Dietmar Putz <dietmar.putz at 3qsdn.com> wrote: >Dear All, > >we are running a dist. repl. volume on 4 nodes including >geo-replication >to another location. >the geo-replication was running fine for months. >since 18th jan. the geo-replication is faulty.
2018 Feb 13
2
Failed to get quota limits
Thank you for your answer. This problem seems to have started last week, so should I also send you the same log files but for last week? I think logrotate rotates them on a weekly basis. The only two quota commands we use are the following: gluster volume quota myvolume limit-usage /directory 10GB gluster volume quota myvolume list basically to set a new quota or to list the current
2018 Jan 14
0
Volume can not write to data if this volume quota limits capacity and mount itself volume on arm64(aarch64) architecture
Thanks for reading this email. I found a problem while using GlusterFS. First, I created a Distributed Dispersed volume on three nodes and limited the volume capacity with the quota command; this volume is auto-mounted on /run/gluster/VOLUME_NAME. This volume can be read and written normally. Afterwards, I manually mounted the volume at another path to provide data storage for SAMBA and iSCSI services, after
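The setup described above roughly corresponds to the following sequence. A sketch with placeholder names (VOLUME_NAME, node1..node3, brick paths) and an example 2+1 disperse geometry, not the poster's exact commands:

    # distributed-dispersed volume: two 2+1 disperse groups across three nodes
    gluster volume create VOLUME_NAME disperse 3 redundancy 1 \
        node1:/bricks/b1 node2:/bricks/b1 node3:/bricks/b1 \
        node1:/bricks/b2 node2:/bricks/b2 node3:/bricks/b2
    gluster volume start VOLUME_NAME

    # cap the whole volume with a quota
    gluster volume quota VOLUME_NAME enable
    gluster volume quota VOLUME_NAME limit-usage / 100GB

    # mount the volume at a second path for SAMBA / iSCSI to use
    mkdir -p /mnt/VOLUME_NAME
    mount -t glusterfs localhost:/VOLUME_NAME /mnt/VOLUME_NAME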