similar to: Intermittent mount disconnect due to socket poller error

Displaying 20 results from an estimated 300 matches similar to: "Intermittent mount disconnect due to socket poller error"

2018 Feb 28
1
Intermittent mount disconnect due to socket poller error
We've been on the Gluster 3.7 series for several years with things pretty stable. Given that it's reached EOL, yesterday I upgraded to 3.13.2. Every Gluster mount and server was disabled, then brought back up after the upgrade, changing the op-version to 31302, and then I tried it all out. It went poorly. Every sizable read and write (hundreds of MB) led to 'Transport endpoint not
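For reference, the op-version bump mentioned in this post is done with the standard gluster CLI; a minimal sketch (it needs a running glusterd, so it is guarded to be a no-op on machines without the CLI):

```shell
# Sketch of the op-version bump described above (31302 corresponds to 3.13.2).
# Requires a working glusterd; on a machine without the gluster CLI this
# only prints a note.
if command -v gluster >/dev/null 2>&1; then
  gluster volume get all cluster.op-version      # show the current value
  gluster volume set all cluster.op-version 31302
else
  echo "gluster CLI not installed; skipping"
fi
```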
2014 Dec 10
0
[PATCH v3 2/2] fb/nvaa: Enable non-isometric poller on NVAA/NVAC
(This is a v3 of patch "drm/nouveau/fb/nv50: Add PFB writes") This fixes a GPU lockup on 9400M (NVAC) when using acceleration; see https://bugs.freedesktop.org/show_bug.cgi?id=27501 v2: - Move code to subdev/fb/nv50.c as suggested by Roy Spliet; - Remove arbitrary writes to 100c18/100c24 - Replace write to 100c1c of arbitrary value by the address of a scratch page as proposed by Ilia
2007 May 21
10
[Bug 1316] New: Add LDAP support to sshd
http://bugzilla.mindrot.org/show_bug.cgi?id=1316 Summary: Add LDAP support to sshd Product: Portable OpenSSH Version: 4.6p1 Platform: All URL: http://dev.inversepath.com/trac/openssh-lpk OS/Version: All Status: NEW Severity: enhancement Priority: P2 Component: PAM support AssignedTo:
2014 Dec 11
1
[PATCH v3 2/2] fb/nvaa: Enable non-isometric poller on NVAA/NVAC
On Wed, Dec 10, 2014 at 5:53 PM, Pierre Moreau <pierre.morrow at free.fr> wrote: > (This is a v3 of patch "drm/nouveau/fb/nv50: Add PFB writes") > > This fixes a GPU lockup on 9400M (NVAC) when using acceleration, see > https://bugs.freedesktop.org/show_bug.cgi?id=27501 > > v2: > - Move code to subdev/fb/nv50.c as suggested by Roy Spliet; > - Remove arbitrary
2017 Dec 18
0
Production Volume will not start
On Sat, Dec 16, 2017 at 12:45 AM, Matt Waymack <mwaymack at nsgdv.com> wrote: > Hi all, > > > > I have an issue where our volume will not start from any node. When > attempting to start the volume it will eventually return: > > Error: Request timed out > > > > For some time after that, the volume is locked and we either have to wait > or restart
2018 Jan 18
0
issues after botched update
Hi, A client has a glusterfs cluster that's behaving weirdly after some issues during an upgrade. They upgraded a glusterfs 2+1 cluster (replica with arbiter) from 3.10.9 to 3.12.4 on CentOS and now have weird issues, and some files may be corrupted. They also switched from NFS-Ganesha, which crashed every couple of days, to glusterfs subdirectory mounting. Subdirectory mounting was the
2018 Feb 26
0
rpc/glusterd-locks error
Good morning. We have a 6-node cluster. 3 nodes are participating in a replica 3 volume. Naming convention: xx01 - 3 nodes participating in ovirt_vol xx02 - 3 nodes NOT participating in ovirt_vol Last week, we restarted glusterd on each node in the cluster to update (one at a time). The three xx01 nodes all show the following in glusterd.log: [2018-02-26 14:31:47.330670] E
2017 Nov 24
1
SSL configuration
Hello subscribers, I have a very strange question regarding SSL setup on Gluster storage. I created a common CA and signed certificates for my Gluster nodes, placed the host certificate, key, and common CA certificate into /etc/ssl/, and created a file called secure-access in /var/lib/glusterd/. Then I started glusterd on all nodes; the system works fine, and with peer status I see all of my nodes. No problem.
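The setup this poster describes maps to a short sequence of openssl commands. A minimal sketch, with placeholder CN values and the files generated in the working directory (GlusterFS conventionally looks for glusterfs.pem, glusterfs.key, and glusterfs.ca under /etc/ssl/):

```shell
# Sketch of the TLS setup described above. CN values are placeholders;
# in a real deployment the resulting files go to /etc/ssl/ on each node.

# 1. One common CA: a private key plus a self-signed CA certificate.
openssl genrsa -out ca.key 2048
openssl req -new -x509 -key ca.key -subj "/CN=gluster-ca" -days 365 -out glusterfs.ca

# 2. Per node: key, CSR, and a certificate signed by the common CA.
openssl genrsa -out glusterfs.key 2048
openssl req -new -key glusterfs.key -subj "/CN=node1.example.com" -out node1.csr
openssl x509 -req -in node1.csr -CA glusterfs.ca -CAkey ca.key \
    -CAcreateserial -days 365 -out glusterfs.pem

# 3. The touch file that enables TLS on the management path (per node):
#    touch /var/lib/glusterd/secure-access   # then restart glusterd
```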
2017 Dec 15
3
Production Volume will not start
Hi all, I have an issue where our volume will not start from any node. When attempting to start the volume it will eventually return: Error: Request timed out For some time after that, the volume is locked and we either have to wait or restart Gluster services. The glusterd.log shows the following: [2017-12-15 18:00:12.423478] I [glusterd-utils.c:5926:glusterd_brick_start]
2013 Feb 26
0
Replicated Volume Crashed
Hi, I have a gluster volume that consists of 22 bricks and includes a single folder with 3.6 million files. Yesterday the volume crashed and turned out to be completely unresponsive, and I was forced to perform a hard reboot on all gluster servers because they were so heavily overloaded that they could not execute a reboot command issued from the shell. Each gluster server has 12 CPU cores
2023 Feb 23
1
Big problems after update to 9.6
Hello, We have a cluster with two nodes, "sg" and "br", which were running GlusterFS 9.1, installed via the Ubuntu package manager. We updated the Ubuntu packages on "sg" to version 9.6, and now have big problems. The "br" node is still on version 9.1. Running "gluster volume status" on either host gives "Error : Request timed out". On
2023 Feb 24
1
Big problems after update to 9.6
Hi David, It seems like a network issue to me, as it's unable to connect to the other node and times out. A few things you can check: * Check the /etc/hosts file on both servers and make sure it has the correct IP of the other node. * Are you binding gluster to any specific IP that changed after your update? * Check if you can access port 24007 from the other host. If
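The checks suggested here can be scripted. A hedged sketch using bash's built-in /dev/tcp redirection (the peer hostname "br" is from this thread; substitute your own):

```shell
# Sketch of the connectivity checks suggested above. "br" is the peer
# hostname from this thread; replace with your own. Uses bash's /dev/tcp
# so no extra tools are needed (24007 is glusterd's management port).
check_port() {  # check_port HOST PORT -> exit 0 if a TCP connect succeeds
  timeout 3 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null
}

peer=br
getent hosts "$peer" || echo "no entry for $peer in /etc/hosts or DNS"
if check_port "$peer" 24007; then
  echo "glusterd port 24007 on $peer is reachable"
else
  echo "cannot reach 24007 on $peer (firewall, wrong IP, or glusterd down)"
fi
```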
2017 Jan 11
0
memory usage and error with samba 4.4.5
Hi, I have a Samba 4 AD, running fine, but last night I detected that during some minutes something was not OK. During those minutes the Samba 4 AD server had a high load average and some services were not working correctly. It seems that this generated an increment of swap usage by the samba process. Logs and info collected: Jan 11 04:44:02 server winbindd[25326]:
2007 Feb 22
0
Problem with Cacti and CentOS 4.4
I just for the first time attempted to install Cacti from the RPMForge repo on CentOS 4.4. And I have to say that it hasn't gone very well at all. I have installed Cacti on Fedora a good 20 times with absolutely no problem. Basically the default rrds/graphs for localhost are generated but no other rrds/graphs are generated. When I look at the poller cache or snmp cache I see all the data.
2017 Jan 11
1
memory usage and error with samba 4.4.5
Hi, do you know if these memory leaks are solved in the latest 4.4.x? For our environment it is difficult to upgrade to a major version. Thanks 2017-01-11 9:34 GMT+01:00 L.P.H. van Belle <belle at bazuin.nl>: > if you're up to it, upgrade to 4.5.3. > > I had these too but they are gone now; I can't recall the bugfix nr., sorry. > And as an extra, if you upgrade, they have fixed some
2018 Feb 28
0
[Gluster-Maintainers] [Gluster-devel] Release 4.0: RC1 tagged
I found the following memory leak present in 3.13, 4.0 and master: https://bugzilla.redhat.com/show_bug.cgi?id=1550078 I will clone/port to 4.0 as soon as the patch is merged. On Wed, Feb 28, 2018 at 5:55 PM, Javier Romero <xavinux at gmail.com> wrote: > Hi all, > > Have tested on CentOS Linux release 7.4.1708 (Core) with Kernel > 3.10.0-693.17.1.el7.x86_64 > > This
2018 Feb 28
2
[Gluster-devel] [Gluster-Maintainers] Release 4.0: RC1 tagged
Hi all, Have tested on CentOS Linux release 7.4.1708 (Core) with Kernel 3.10.0-693.17.1.el7.x86_64 This package works ok http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm # yum install http://cbs.centos.org/kojifiles/work/tasks/1548/311548/centos-release-gluster40-0.9-1.el7.centos.x86_64.rpm # yum install glusterfs-server # systemctl
2014 Oct 21
3
Questions about some PFB registers on NVAC cards
(Sending it to the correct Nvidia mailing list, sorry for the spam) Hi, When using acceleration with Nouveau on MacBook Pros with a 9400M (NVAC) card, a PFIFO interrupt 0x00400000 is thrown during the initialisation of that card (sometime after PFIFO and PGRAPH initialisation) and the laptop will lock up [1], forcing users to load Nouveau without acceleration. After some investigation, I found
2010 Mar 28
0
CESA-2010:0165 Moderate CentOS 5 x86_64 nss Update
CentOS Errata and Security Advisory 2010:0165 Moderate Upstream details at : http://rhn.redhat.com/errata/RHSA-2010-0165.html The following updated files have been uploaded and are currently syncing to the mirrors: ( md5sum Filename ) x86_64: d678af0805ff84a1fc8266862df0cd3c nspr-4.8.4-1.el5_4.i386.rpm 6c9064fa9edec639f9eca3d521ebc593 nspr-4.8.4-1.el5_4.x86_64.rpm
2010 Jul 14
0
CEBA-2010:0526 CentOS 5 x86_64 nss Update
CentOS Errata and Bugfix Advisory 2010:0526 Upstream details at : http://rhn.redhat.com/errata/RHBA-2010-0526.html The following updated files have been uploaded and are currently syncing to the mirrors: ( md5sum Filename ) x86_64: 22dcab2f8a3d44a31fb3426d1c7bcbd3 nss-3.12.6-2.el5.centos.i386.rpm 86a5a82213fde8aec7a0a47af32b4e89 nss-3.12.6-2.el5.centos.x86_64.rpm