Displaying 3 results from an estimated 3 matches for "nybaknode1".
2023 May 04
1
'error=No space left on device' but there is plenty of space on all nodes
...des to refresh
them and it happened again. That was mentioned in this doc
https://access.redhat.com/solutions/276483 as an idea.
Does anyone know what we might check next?
glusterfs-server-10.4-1.el8s.x86_64
glusterfs-fuse-10.4-1.el8s.x86_64
Here is the info (hostnames changed) below.
[root@nybaknode1 ~]# gluster volume status volbackups detail
Status of volume: volbackups
------------------------------------------------------------------------------
Brick                : Brick nybaknode9.example.net:/lvbackups/brick
TCP Port             : 60039
RDMA Port            : 0
Online               : Y...
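A minimal sketch of the space and inode checks being described, run from any one node; the hostnames and brick path are the renamed examples from the post, not real ones:

# Compare block and inode usage on each brick node (hostnames illustrative)
for h in nybaknode1 nybaknode9; do
    echo "== $h =="
    ssh "$h" 'df -h /lvbackups/brick && df -i /lvbackups/brick'
done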
2023 May 02
1
'error=No space left on device' but there is plenty of space on all nodes
...des to refresh
them and it happened again. That was mentioned in this doc
https://access.redhat.com/solutions/276483 as an idea.
Does anyone know what we might check next?
glusterfs-server-10.4-1.el8s.x86_64
glusterfs-fuse-10.4-1.el8s.x86_64
Here is the info (hostnames changed) below.
[root@nybaknode1 ~]# gluster volume status volbackups detail
Status of volume: volbackups
------------------------------------------------------------------------------
Brick                : Brick nybaknode9.example.net:/lvbackups/brick
TCP Port             : 60039
RDMA Port            : 0
Online               :...
2023 May 04
1
'error=No space left on device' but there is plenty of space on all nodes
Hi Strahil and Gluster users,
Yes, I had checked, but I checked again: only 1% inode usage, 99% free. Same on every node.
Example:
[root@nybaknode1 ]# df -i /lvbackups/brick
Filesystem                          Inodes IUsed      IFree IUse% Mounted on
/dev/mapper/vgbackups-lvbackups 3108921344 93602 3108827742    1% /lvbackups
[root@nybaknode1 ]#
I neglected to clarify in the original post that this issue is actually being seen through nfs-gan...
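Since both block and inode usage look healthy, one more thing worth inspecting is Gluster's own reservation thresholds, which can return ENOSPC before the filesystem actually fills. A hedged sketch, using the volume name from these posts; the options shown are standard GlusterFS volume options, not something the thread itself confirms as the cause:

# Reservation-related options that can trigger ENOSPC despite free space/inodes
gluster volume get volbackups cluster.min-free-disk
gluster volume get volbackups cluster.min-free-inodes
gluster volume get volbackups storage.reserve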