brandon at thinkhuge.net
2019-Mar-27 16:16 UTC
[Gluster-users] Transport endpoint is not connected failures in
Hello Amar and list,

I wanted to follow up to confirm that upgrading to 5.5 seems to have fixed the "Transport endpoint is not connected" failures for us. We did not have any of these failures in this past weekend's backup cycle. Thank you very much for fixing whatever the problem was.

I also removed some volume config options; one or more of those settings was contributing to the slow directory listing.

Here is our current volume info:

[root at lonbaknode3 ~]# gluster volume info

Volume Name: volbackups
Type: Distribute
Volume ID: 32bf4fe9-5450-49f8-b6aa-05471d3bdffa
Status: Started
Snapshot Count: 0
Number of Bricks: 8
Transport-type: tcp
Bricks:
Brick1: lonbaknode3.domain.net:/lvbackups/brick
Brick2: lonbaknode4.domain.net:/lvbackups/brick
Brick3: lonbaknode5.domain.net:/lvbackups/brick
Brick4: lonbaknode6.domain.net:/lvbackups/brick
Brick5: lonbaknode7.domain.net:/lvbackups/brick
Brick6: lonbaknode8.domain.net:/lvbackups/brick
Brick7: lonbaknode9.domain.net:/lvbackups/brick
Brick8: lonbaknode10.domain.net:/lvbackups/brick
Options Reconfigured:
performance.io-thread-count: 32
performance.client-io-threads: on
client.event-threads: 8
diagnostics.brick-sys-log-level: WARNING
diagnostics.brick-log-level: WARNING
performance.cache-max-file-size: 2MB
performance.cache-size: 256MB
cluster.min-free-disk: 1%
nfs.disable: on
transport.address-family: inet
server.event-threads: 8
[root at lonbaknode3 ~]#
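For anyone tidying up options the same way: a reconfigured setting can be returned to its default with "gluster volume reset". A minimal sketch follows; the option name is only a placeholder, not one of the settings actually removed here:

    # Return a single reconfigured option to its default value
    gluster volume reset volbackups <option-name>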
Sankarshan Mukhopadhyay
2019-Mar-27 23:57 UTC
[Gluster-users] Transport endpoint is not connected failures in
On Wed, Mar 27, 2019 at 9:46 PM <brandon at thinkhuge.net> wrote:
>
> Hello Amar and list,
>
> I wanted to follow up to confirm that upgrading to 5.5 seems to have
> fixed the "Transport endpoint is not connected" failures for us.
>
> We did not have any of these failures in this past weekend's backup cycle.
>
> Thank you very much for fixing whatever the problem was.

As always, thank you for circling back to the list and sharing that the issues have been addressed.

> I also removed some volume config options; one or more of those settings
> was contributing to the slow directory listing.
>
> Here is our current volume info.

This is very useful!

> [root at lonbaknode3 ~]# gluster volume info
> [volume info snipped; quoted in full in the original message above]
Raghavendra Gowdappa
2019-Mar-28 01:54 UTC
[Gluster-users] Transport endpoint is not connected failures in
On Wed, Mar 27, 2019 at 9:46 PM <brandon at thinkhuge.net> wrote:
>
> Hello Amar and list,
>
> I wanted to follow up to confirm that upgrading to 5.5 seems to have
> fixed the "Transport endpoint is not connected" failures for us.

What was the version you saw failures in? Were there any logs matching the pattern "ping_timer_expired" earlier?

> We did not have any of these failures in this past weekend's backup cycle.
>
> Thank you very much for fixing whatever the problem was.
>
> I also removed some volume config options; one or more of those settings
> was contributing to the slow directory listing.
>
> Here is our current volume info.
>
> [volume info snipped; quoted in full in the original message above]
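For checking the earlier logs for that pattern, something like the following should work (a sketch, assuming the default log location of /var/log/glusterfs):

    # Search all gluster client and brick logs for ping timer expiry messages
    grep -r "ping_timer_expired" /var/log/glusterfs/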
Nithya Balachandran
2019-Mar-28 03:12 UTC
[Gluster-users] Transport endpoint is not connected failures in
On Wed, 27 Mar 2019 at 21:47, <brandon at thinkhuge.net> wrote:
>
> Hello Amar and list,
>
> I wanted to follow up to confirm that upgrading to 5.5 seems to have
> fixed the "Transport endpoint is not connected" failures for us.
>
> We did not have any of these failures in this past weekend's backup cycle.
>
> Thank you very much for fixing whatever the problem was.
>
> I also removed some volume config options; one or more of those settings
> was contributing to the slow directory listing.

Hi Brandon,

Which options were removed?

Thanks,
Nithya

> Here is our current volume info.
>
> [volume info snipped; quoted in full in the original message above]
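One quick way to capture the current state when answering questions like this (a sketch, assuming a gluster release recent enough to support "volume get ... all", i.e. 3.8 or later):

    # List every volume option with its current value, including defaults
    gluster volume get volbackups all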