Patrick J. LoPresti
2010-Jun-16 02:08 UTC
[Ocfs2-users] Non-clean fsck on almost-new filesystem
My O/S is SUSE Linux Enterprise Server 11 Service Pack 1.

My SCSI device is a hardware iSCSI RAID chassis. I have done a variety of reads and writes to this device without any problems, and there are no network or I/O errors in any logs.

The steps I took:

1) mkfs.ocfs2 -N 4 -b 4k -C 512k -J block64 -T datafiles /dev/sdd1 268435456

2) fsck.ocfs2 -fy /dev/sdd1   # no problems

3) Mounted the file system as "/foo" on two nodes using "crm resource start fs", which is just set up to do a mount.ocfs2 -o noatime

4) Ran "rsync -aHPv /local/source/dir. /foo/." on one of the two nodes

5) Ran "crm resource stop fs" to unmount the file system from both nodes

6) fsck.ocfs2 -fy /dev/sdd1

Output:

Checking OCFS2 filesystem in /dev/sdd1:
  Label:              <NONE>
  UUID:               93F5216F0A2041009BA37A97C0099F70
  Number of blocks:   268435456
  Block size:         4096
  Number of clusters: 2097152
  Cluster size:       524288
  Number of slots:    4

/dev/sdd1 was run with -f, check forced.
Pass 0a: Checking cluster allocation chains
Pass 0b: Checking inode allocation chains
Pass 0c: Checking extent block allocation chains
Pass 1: Checking inodes and blocks.
[CLUSTER_ALLOC_BIT] Cluster 97531 is marked in the global cluster bitmap but it isn't in use. Clear its bit in the bitmap? y
[CLUSTER_ALLOC_BIT] Cluster 98873 is marked in the global cluster bitmap but it isn't in use. Clear its bit in the bitmap? y
[CLUSTER_ALLOC_BIT] Cluster 99609 is marked in the global cluster bitmap but it isn't in use. Clear its bit in the bitmap? y
[CLUSTER_ALLOC_BIT] Cluster 99696 is marked in the global cluster bitmap but it isn't in use. Clear its bit in the bitmap? y
Pass 2: Checking directory entries.
Pass 3: Checking directory connectivity.
Pass 4a: checking for orphaned inodes
Pass 4b: Checking inodes link counts.
All passes succeeded.

The other node was completely idle the entire time.

A recursive "diff" shows that the directory tree copied successfully.

Any ideas/thoughts/suggestions would be appreciated.
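(As an aside, the geometry fsck reports is internally consistent with the mkfs.ocfs2 invocation: 268435456 blocks of 4096 bytes divided by a 512k cluster size works out to exactly 2097152 clusters. A quick sanity check, plain shell arithmetic, no ocfs2 tools needed:)

```shell
# Cross-check the fsck-reported cluster count against the mkfs.ocfs2 parameters.
blocks=268435456      # block count passed to mkfs.ocfs2
block_size=4096       # -b 4k
cluster_size=524288   # -C 512k
clusters=$(( blocks * block_size / cluster_size ))
echo "clusters=$clusters"   # fsck reports: Number of clusters: 2097152
```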
I do not think I can deploy a solution that shows this sort of flakiness. (Or are my expectations misplaced, and are these messages from fsck normal for a cleanly unmounted partition?)

Thanks.

- Pat
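(One way to narrow this down, as a sketch rather than anything from the thread: repeat the clean mount/unmount cycle and run fsck read-only each time, so you can see whether the bitmap complaints recur without fsck itself modifying the volume. With fsck.ocfs2, -f forces the check and -n answers "no" to every repair prompt:)

```shell
# Device path from the report; adjust for your setup.
DEV=/dev/sdd1
# -f forces the check even when the fs looks clean; -n answers "no" to
# all repair prompts, so the check is read-only and repeatable.
CHECK="fsck.ocfs2 -fn $DEV"
echo "$CHECK"   # run this by hand after each clean unmount
```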
For issues on SLES, file a bz with Novell. Make sure you list out the fs features that had been enabled.

On 06/15/2010 07:08 PM, Patrick J. LoPresti wrote:
> [quoted text trimmed]

_______________________________________________
Ocfs2-users mailing list
Ocfs2-users at oss.oracle.com
http://oss.oracle.com/mailman/listinfo/ocfs2-users
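(On listing the enabled fs features for the bug report: one way to get them, an assumption on my part rather than advice from this thread, is to dump the superblock with debugfs.ocfs2, whose "stats" output includes the Feature Compat/Incompat/RO Compat lines:)

```shell
# Device path from the report; adjust for your setup.
DEV=/dev/sdd1
# "stats" dumps the superblock, including the feature flag lines that
# record which fs features mkfs.ocfs2/tunefs.ocfs2 enabled.
QUERY="debugfs.ocfs2 -R stats $DEV"
echo "$QUERY"   # pipe its output through: | grep -i feature
```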