Displaying 4 results from an estimated 4 matches for "vgbackups".
2023 May 02
1
'error=No space left on device' but there is plenty of space on all nodes
...------------------------------------------------------------
Brick : Brick nybaknode9.example.net:/lvbackups/brick
TCP Port : 60039
RDMA Port : 0
Online : Y
Pid : 1664
File System : xfs
Device : /dev/mapper/vgbackups-lvbackups
Mount Options : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=512,swidth=512,noquota
Inode Size : 512
Disk Space Free : 6.1TB
Total Disk Space : 29.0TB
Inode Count : 3108974976
Free Inodes : 3108881513
---------------------------------...
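The per-brick block quoted above appears to be the output of Gluster's own status query. A minimal sketch of collecting it for every brick, assuming a hypothetical volume name backupvol (the real volume name is not shown in the excerpt):

    # "gluster volume status <VOLNAME> detail" prints the per-brick fields quoted
    # above (Disk Space Free, Inode Count, Free Inodes), so running it from any
    # node gives a quick comparison of free space and free inodes across bricks
    gluster volume status backupvol detail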
2023 May 04
1
'error=No space left on device' but there is plenty of space on all nodes
...---------------------------------------------------------------
Brick : Brick nybaknode9.example.net:/lvbackups/brick
TCP Port : 60039
RDMA Port : 0
Online : Y
Pid : 1664
File System : xfs
Device : /dev/mapper/vgbackups-lvbackups
Mount Options : rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,sunit=512,swidth=512,noquota
Inode Size : 512
Disk Space Free : 6.1TB
Total Disk Space : 29.0TB
Inode Count : 3108974976
Free Inodes : 3108881513
-----------------------------------...
2023 May 04
1
'error=No space left on device' but there is plenty of space on all nodes
Hi Strahil and Gluster users,
Yes, I had checked, but I checked again and there is only 1% inode usage, 99% free. Same on every node.
Example:
[root@nybaknode1 ]# df -i /lvbackups/brick
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/vgbackups-lvbackups 3108921344 93602 3108827742 1% /lvbackups
[root@nybaknode1 ]#
I neglected to clarify in the original post that this issue is actually being seen through nfs-ganesha remote client mounts to Gluster. It shows up after ~12-24 hours of backups uploading over NFS the last couple of weekends....
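As a cross-check of the per-node figures, a minimal sketch of comparing block and inode usage on every brick node, assuming hypothetical node names nybaknode1 through nybaknode9 (only nodes 1 and 9 appear in the excerpts) and the brick mount point from the post:

    # loop over the backup nodes and report both block (df -h) and inode (df -i)
    # usage of the brick filesystem, so a node that really is full stands out
    for node in nybaknode{1..9}.example.net; do
        echo "== $node =="
        ssh "$node" 'df -h /lvbackups/brick; df -i /lvbackups/brick'
    done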
2010 Jul 14
2
tunefs.lustre --print fails on mounted mdt/ost with mmp
Just checking to be sure this isn't a known bug or problem. I couldn't
find a bz for this, but it would appear that tunefs.lustre --print fails
on a Lustre MDT or OST device if it is mounted with MMP enabled.
Is this expected behavior?
TIA
mds1-gps:~ # tunefs.lustre --print /dev/mapper/mdt1
checking for existing Lustre data: not found
tunefs.lustre FATAL: Device /dev/mapper/mdt1 has not
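tunefs.lustre reads the target configuration from an unmounted device, and with MMP enabled the in-use protection also blocks access while the target is mounted, which would explain the FATAL above. A minimal sketch of the workaround, assuming the device path from the post and that a maintenance window allows unmounting the target:

    # check whether the target is currently mounted; with MMP enabled,
    # tunefs.lustre --print only succeeds once the device is no longer in use
    mount | grep /dev/mapper/mdt1
    umount /dev/mapper/mdt1        # only during a scheduled maintenance window
    tunefs.lustre --print /dev/mapper/mdt1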