No, only a fsck will remove the two inodes from the orphan dir. Until
that's run, that message will be printed every 10 minutes on some node
in the cluster.
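
For reference, a minimal sketch of what that repair would look like,
assuming /dev/sdb is the shared volume (the mount point below is just an
example, and fsck.ocfs2 refuses to run until the volume is unmounted on
every node):

  # unmount the OCFS2 volume on every node in the cluster
  umount /mnt/ocfs2

  # then, on one node only, force a full check so the stale
  # orphan dir entries get cleaned up
  fsck.ocfs2 -f /dev/sdb
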
The bug that led to this problem was fixed in 2.6.34.
http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=3939fda4b389993caf8741df5739b3e49f33a263
Sunil
On 06/18/2010 10:58 AM, Luben Karavelov wrote:
> Hello,
>
> I have some minor problems with a production system here. Every 30 minutes
> I get messages like this on every connected OCFS2 node:
>
> (ocfs2_wq,446,2):ocfs2_query_inode_wipe:923 ERROR: Inode 212194282
> (on-disk 212194282) not orphaned! Disk flags 0x9, inode flags 0x20
> (ocfs2_wq,446,2):ocfs2_delete_inode:1053 ERROR: status = -17
> (ocfs2_wq,446,2):ocfs2_query_inode_wipe:923 ERROR: Inode 212194387
> (on-disk 212194387) not orphaned! Disk flags 0x9, inode flags 0x20
> (ocfs2_wq,446,2):ocfs2_delete_inode:1053 ERROR: status = -17
>
> I have traced the blocks to deleted files and I see them in 2 different
> orphan_dirs:
>
> root@rho1:~# debugfs.ocfs2 -R 'ls //orphan_dir:0002' /dev/sdb
> 14 16 1 2 .
> 6 16 2 2 ..
> 297544487 28 16 1 0000000011bc2b27
> 272787803 28 16 1 000000001042695b
>
>> 212194387 28 16 1 000000000ca5d453
>> 212194282 28 16 1 000000000ca5d3ea
>>
> 5064920 28 16 1 00000000004d48d8
> 345314234 28 16 1 00000000149513ba
> 465323168 3896 16 1 000000001bbc44a0
>
> root@rho1:~# debugfs.ocfs2 -R 'ls //orphan_dir:0004' /dev/sdb
> 16 16 1 2 .
> 6 44 2 2 ..
> 139143299 28 16 1 00000000084b2883
> 198068069 140 16 1 000000000bce4765
>
>> 212194387 140 16 1 000000000ca5d453
>> 212194282 3728 16 1 000000000ca5d3ea
>>
> So my question: Is there a way to fix this issue without fsck? Here, a
> typical fsck run takes 32-36 hours on a half-full 3 TB fs, so it is not an
> option for a production system.
>
> All nodes are running linux-2.6.34.
>
> Here are the enabled ocfs2 features: sparse, inline-data, unwritten
>
> The installed ocfs2-tools are v1.4.3, but I could install another version
> (also from the git repository).
>
> Thanks in advance for any suggestion
>
>