W Kern
2023-Jul-21 19:02 UTC
[Gluster-users] log file spewing on one node, but not the others
we have an older 2+1 arbiter gluster cluster running 6.10 on Ubuntu 18 LTS. It has run beautifully for years, only occasionally needing attention as drives have died, etc.

Each peer has two volumes, G1 and G2, with a shared 'gluster' network.

Since July 1st one of the peers for one volume has been spewing the logfile /var-lib-G1.log with the following errors. The other volume (G2) is not showing this, nor are there issues with the other peer or the arbiter for the G1 volume. So it's one machine with one volume that has the problem. There have been NO issues with the volumes themselves. It is simply a matter of the logfiles generating GBs of entries every hour (which is how we noticed it, when we started running out of log space).

According to Google there are mentions of this error, but it was supposedly fixed in the 6.x series. I can find no other mentions.

I have tried restarting glusterd with no change, and there don't seem to be any hardware issues.

I am wondering if perhaps this is an XFS file corruption issue, and whether unmounting the Gluster volume, running xfs_repair, and bringing it back would solve the issue. Any other suggestions?

[2023-07-21 18:51:38.260507] W [inode.c:1638:inode_table_prune] (-->/usr/lib/x86_64-linux-gnu/glusterfs/6.10/xlator/features/shard.so(+0x21b47) [0x7fb261c13b47] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_unref+0x36) [0x7fb26947f416] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x3337a) [0x7fb26947f37a] ) 0-GLB1image-shard: Empty inode lru list found but with (-2) lru_size
[2023-07-21 18:51:38.261231] W [inode.c:1638:inode_table_prune] (-->/usr/lib/x86_64-linux-gnu/glusterfs/6.10/xlator/mount/fuse.so(+0xba51) [0x7fb266cdca51] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_unref+0x36) [0x7fb26947f416] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x3337a) [0x7fb26947f37a] ) 0-fuse: Empty inode lru list found but with (-2) lru_size
[2023-07-21 18:51:38.261377] W [inode.c:1638:inode_table_prune] (-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(loc_wipe+0x12) [0x7fb26946bd72] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_unref+0x36) [0x7fb26947f416] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x3337a) [0x7fb26947f37a] ) 0-GLB1image-shard: Empty inode lru list found but with (-2) lru_size
[2023-07-21 18:51:38.261806] W [inode.c:1638:inode_table_prune] (-->/usr/lib/x86_64-linux-gnu/glusterfs/6.10/xlator/cluster/replicate.so(+0x5ca57) [0x7fb26213ba57] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_unref+0x36) [0x7fb26947f416] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x3337a) [0x7fb26947f37a] ) 0-GLB1image-replicate-0: Empty inode lru list found but with (-2) lru_size
[2023-07-21 18:51:38.261933] W [inode.c:1638:inode_table_prune] (-->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(fd_unref+0x1ef) [0x7fb269495eaf] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_unref+0x36) [0x7fb26947f416] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x3337a) [0x7fb26947f37a] ) 0-GLB1image-client-1: Empty inode lru list found but with (-2) lru_size
[2023-07-21 18:51:38.262684] W [inode.c:1638:inode_table_prune] (-->/usr/lib/x86_64-linux-gnu/glusterfs/6.10/xlator/cluster/replicate.so(+0x5ca57) [0x7fb26213ba57] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(inode_unref+0x36) [0x7fb26947f416] -->/usr/lib/x86_64-linux-gnu/libglusterfs.so.0(+0x3337a) [0x7fb26947f37a] ) 0-GLB1image-replicate-0: Empty inode lru list found but with (-2) lru_size

-wk
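As a stopgap while the cause is tracked down, size-based log rotation can keep a log that grows by GBs per hour from filling the disk. A minimal sketch, written as shell that drops in an extra logrotate policy; the /var/log/glusterfs/ glob, the file name glusterfs-emergency, and the size/keep limits are assumptions, and Ubuntu's gluster packages may already ship a logrotate file that this overlaps with:

    # Hypothetical stopgap only -- adjust the glob and limits to the actual log location.
    cat > /etc/logrotate.d/glusterfs-emergency <<'EOF'
    /var/log/glusterfs/*.log {
        size 100M        # rotate as soon as a log crosses 100 MB
        rotate 5         # keep at most five rotated copies
        compress
        missingok
        notifempty
        copytruncate     # truncate in place so the glusterfs client keeps its open fd
    }
    EOF
    logrotate -d /etc/logrotate.d/glusterfs-emergency   # -d = dry run, shows what would be rotated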
wk
2023-Jul-23 19:39 UTC
[Gluster-users] [EXT] [Glusterusers] RESOLVED log file spewing on one node but not the others
Unmounting the FUSE mount and running xfs_repair seems to have solved the problem. Once remounted, and after a HUP to glusterd and glusterfs, there are no more log spews.

xfs_repair didn't show any errors, so I suspect the problem was with the FUSE mount and it just needed to be refreshed; it had been up for hundreds of days.

-wk
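For anyone landing here with the same warning, the fix wk describes boils down to refreshing the long-running FUSE client and optionally checking the brick filesystem. A rough shell sketch; the mount point /gluster/G1, volume peer1:/G1 and brick device /dev/sdb1 are placeholders, and xfs_repair can only be run against an unmounted brick filesystem (i.e. with that brick taken offline first):

    # Placeholders: /gluster/G1 = FUSE mount point, peer1:/G1 = volume, /dev/sdb1 = brick device.
    umount /gluster/G1                         # detach the long-running FUSE client (move workloads off first)

    xfs_repair -n /dev/sdb1                    # -n = check only; needs the brick FS unmounted, so take the brick offline

    mount -t glusterfs peer1:/G1 /gluster/G1   # remount the client (or just 'mount /gluster/G1' if it is in fstab)
    pkill -x -HUP glusterd                     # the HUPs mentioned above; -x matches the exact process name
    pkill -x -HUP glusterfs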
Strahil Nikolov
2023-Jul-25 20:22 UTC
[Gluster-users] log file spewing on one node, but not the others
What is the uptime of the affected node?

There is a similar error reported in https://access.redhat.com/solutions/5518661 which could indicate a possible problem in a memory area named 'lru'. Have you noticed any ECC errors in dmesg/IPMI of the system?

At least I would reboot the node and run hardware diagnostics to check that everything is fine.

Best Regards,
Strahil Nikolov

Sent from Yahoo Mail for iPhone
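Strahil's suggestion of looking for memory problems can be sanity-checked before scheduling a reboot. A few quick checks, assuming the edac-utils and ipmitool packages are installed; on some hardware these may simply return nothing:

    dmesg -T | grep -iE 'mce|edac|ecc'   # kernel-reported machine-check / ECC events
    edac-util -v                         # per-DIMM corrected/uncorrected error counts (edac-utils package)
    ipmitool sel elist                   # BMC system event log, where DIMM errors are often recorded
    # For a definitive answer, run memtest86+ during a maintenance window.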