search for: xfs_fsr

Displaying 4 results from an estimated 4 matches for "xfs_fsr".

2010 Apr 13
2
XFS-filesystem corrupted by defragmentation Was: Performance problems with XFS on Centos 5.4
Before I'd try to defragment my whole filesystem (see attached mail for the whole story), I figured "Let's try it on some file". So I did:
> xfs_bmap /raid/Temp/someDiskimage.iso
[output shows 101 extents and 1 hole]
Then I defragmented the file:
> xfs_fsr /raid/Temp/someDiskimage.iso
extents before:101 after:3 DONE
> xfs_bmap /raid/Temp/someDiskimage.iso
[output shows 3 extents and 1 hole]
And now comes the bummer: I wanted to check the fragmentation of the whole filesystem (just for checking):
> xfs_db -r /dev/mapper/VolGroup00-LogVol04 xf...
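The per-file workflow above can be sketched end to end. The paths are the poster's own (substitute a file and device on your XFS filesystem), and `xfs_db -r -c frag` is the usual read-only way to get the whole-filesystem fragmentation report the poster was after:

```shell
#!/bin/sh
# Sketch of the per-file defragmentation check from the post.
# Assumes a mounted XFS filesystem; FILE and DEV are the poster's
# paths, not real ones on this machine.
FILE=/raid/Temp/someDiskimage.iso
DEV=/dev/mapper/VolGroup00-LogVol04

xfs_bmap "$FILE"            # extent map before: one line per extent
xfs_fsr -v "$FILE"          # defragment only this file, verbosely
xfs_bmap "$FILE"            # extent map after: ideally far fewer extents
xfs_db -r -c frag "$DEV"    # -r = read-only; prints a fragmentation factor
```

Running `xfs_db -r` with `-c frag` only reads metadata, so it is safe on a mounted filesystem, unlike interactive write-mode `xfs_db` use.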
2010 Nov 04
2
SIS Error
.../dc/73/dc7398c85dd02efe8a14fe6cc019b2cf07eec600-d5ca962aaae7d14c587400003bc41c5f size mismatch: 122626 != 165655
There are about a dozen different file entries listed in the error log. I'm using 2.0.6, mdbox, and the mails are stored on a local XFS partition. I did recently start running xfs_fsr to defragment - could that be the cause? -- Daniel
2012 Jun 11
3
centos 6.2 xfs + nfs space allocation
Centos 6.2 system with xfs filesystem. I'm sharing this filesystem using NFS. When I create a 10 gigabyte test file from an NFS client system:
> dd if=/dev/zero of=10Gtest bs=1M count=10000
10000+0 records in
10000+0 records out
10485760000 bytes (10 GB) copied, 74.827 s, 140 MB/s
Output from 'ls -al ; du' during this test:
-rw-r--r-- 1 root root 429170688 Jun 8 10:13 10Gtest
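The mismatch above is the classic apparent-size vs allocated-blocks split: `ls -l` reports the file's apparent size, while `du` counts blocks actually on disk, which lag behind under XFS delayed allocation and NFS client-side write caching. A minimal local sketch of the same effect, using a sparse file and a made-up filename rather than NFS:

```shell
#!/bin/sh
# Demonstrates why 'ls -l' and 'du' can disagree: ls shows the apparent
# size, du shows blocks actually allocated. A sparse file (the filename
# here is made up for illustration) has a large apparent size with few
# or no allocated blocks.
truncate -s 10M demo-sparse.img

apparent=$(stat -c %s demo-sparse.img)    # bytes, what 'ls -l' reports
allocated=$(stat -c %b demo-sparse.img)   # 512-byte blocks, what 'du' counts

echo "apparent bytes:   $apparent"
echo "allocated blocks: $allocated"
rm -f demo-sparse.img
```

On the NFS setup in the thread the gap closes once the client flushes and the server finishes allocation; here it persists because the file is genuinely sparse.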
2019 Nov 28
1
Stale File Handle Errors During Heavy Writes
...ut they're supposed to have the same gfid.
> This is something that needs the DHT team's attention.
> Do you mind raising a bug in bugzilla.redhat.com against glusterfs and component 'distribute' or 'DHT'?"
>
> For me, replicating it was easiest by running xfs_fsr (which is very write-intensive on fragmented volumes) from within a VM, but it could happen with a simple yum install, a docker run (with a new image), or a general test with dd or mkfs.xfs, or just at random, which was the normal case. But I have to say my workload is mostly write-intensive, like yours. >...