similar to: Many orphaned inodes after resize2fs

Displaying 20 results from an estimated 4000 matches similar to: "Many orphaned inodes after resize2fs"

2014 Apr 18
2
Many orphaned inodes after resize2fs
Hello, yesterday I experienced the following problem with my ext3 filesystem: - I had an ext3 filesystem of a few TB with a journal. I correctly unmounted it and it was marked clean. - I then ran fsck.ext3 -f on it and it did not find any problem. - After increasing the size of its LVM volume by 1.5 TB I resized the filesystem by resize2fs lvm_volume and it finished without problem. - But
2014 Apr 18
0
Re: Many orphaned inodes after resize2fs
On Fri, Apr 18, 2014 at 06:56:57PM +0200, Patrik Horník wrote: > > yesterday I experienced the following problem with my ext3 filesystem: > > - I had an ext3 filesystem of a few TB with a journal. I correctly > unmounted it and it was marked clean. > > - I then ran fsck.ext3 -f on it and it did not find any problem. > > - After increasing the size of its LVM volume by
2014 Apr 18
3
Re: Many orphaned inodes after resize2fs
Hi, it seems you got it right! I don't know if you read the email I sent you before posting to the mailing list, but I accidentally diagnosed the cause... :) I've noticed that the inodes fsck warned me about, at least the ones I checked, all have all four timestamps no later than 2010... The filesystem has a maximum of 1281998848 inodes, which read as a timestamp falls in August 2010. I don't know how it got
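The coincidence the poster noticed can be checked directly: interpreting the filesystem's maximum inode count (1281998848) as a Unix timestamp does land in August 2010, which is why inode numbers were plausibly being misread as timestamps. A minimal sketch of that conversion:

```python
from datetime import datetime, timezone

# The filesystem's maximum inode count from the thread, read as a
# Unix timestamp (seconds since the 1970 epoch).
max_inodes = 1281998848
as_time = datetime.fromtimestamp(max_inodes, tz=timezone.utc)
print(as_time)  # → 2010-08-16 22:47:28+00:00
```

Any inode number close to that maximum would therefore decode to a date in mid-2010, matching the suspicious timestamps fsck reported.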
2014 Apr 19
0
Re: Many orphaned inodes after resize2fs
On Sat, Apr 19, 2014 at 05:42:12PM +0200, Patrik Horník wrote: > > Please confirm that this is a fully correct solution (for my purpose, not > an elegant clean way for an official fix) and it has no negative consequences. It > seems that way but I did not analyze all code paths the fixed code is in. Yes, that's a fine solution. What I'll probably do is disable the check if
2002 Jul 24
0
ext3 orphaned inode list error
Hello, the last fsck says my /tmp and /home file systems have errors; here is the log from fsck using the read-only option: How do I recover from this safely? How can I avoid it in the future? I am using kernel 2.4.18 with the ACL patches from acl.bestbits.at and the patches for the fileutils and e2fsprogs. If you need more information, please mail.
2002 Jul 24
2
ext3 orphaned inode list
Hello, the last fsck says my /tmp and /home file systems have errors; here is the log from fsck using the read-only option: How do I recover from this safely? How can I avoid it in the future? I am using kernel 2.4.18 with the ACL patches from acl.bestbits.at and the patches for the fileutils and e2fsprogs. If you need more information, please mail.
2001 Feb 12
3
That darned orphaned socket hang
Stephen, OK, I can now reproduce this hang at will, purely by pulling the plug on my desktop when logged in and then rebooting. It's a GNOME desktop box with a few partitions and ext3 on all of them, so I guess it's getting a pile of GNOME or ssh related sockets kept in /tmp, which is on root. To recap, when the machine is suffering from this, it hangs at the point of mounting the root filesystem
2011 Nov 25
2
Case: package removed from CRAN, but not orphaned
Dear R-Devel subscribers, I would like to raise a topic and ask for your advice and guidance. Today on R-help an issue popped up with a certain package that has been removed from CRAN because it failed the checks and/or its dependencies are no longer available. The package maintainer has been alerted to this issue a couple of times and kindly asked to fix the code so that it fulfills the
2010 Jun 16
2
Samba 4 Orphaned DC
Is there a list of commands for cleaning up orphaned DCs? I used ntdsutil on the Windows boxes with no problem, and an LDAP GUI tool to remove the LDAP entries. Sadly Samba4 is still trying to contact the orphan. Any guidance or manual page would be appreciated. Cheers, TMS III
2010 Aug 20
0
[PATCH] ocfs2: Don't delete orphaned files if we are in the process of umount.
Generally, the orphan scan runs in ocfs2_wq and is used to replay the orphan dir. On some low-end iSCSI devices, delete_inode may take a long time (on some devices I have seen deleting 500 files take about 15 secs). This will eventually cause umount to livelock (umount has to flush ocfs2_wq, which waits until the orphan scan finishes). So this patch just tries to finish the orphan scan
2004 Oct 15
0
corrupt orphan inode list
Hello, I am running kernel 2.6.4 on XScale/ARM with BusyBox. Quite frequently after shutdown I have a number of orphaned inodes on the ext3 filesystem. (I have yet to figure out the precise cause of the orphans, but I believe it has to do with umount failing at shutdown due to a bug in BusyBox init that does not properly kill processes.) It is my understanding the orphaned inode list should be
2015 Sep 15
0
Re: Question: running appliance commands over guest fs (resize2fs -P).
On Tue, Sep 15, 2015 at 04:31:46PM +0300, Maxim Perevedentsev wrote: > Hello everyone! > > I am working on resizing qcow2 images using virt-resize+libguestfs. If you're shrinking, I believe a better way to do this is to sparsify the image. > E.g. when shrinking a partition, I have to resize the filesystem using > resize2fs-size. The problem is that I cannot find out the minimal >
2019 Jun 20
3
LLD handling of orphaned sections
Hello, The handling of orphaned sections in LLD 8 has changed from GNU LD behavior (note that LLD 7 didn't show this behavior). I've reported this as: https://bugs.llvm.org/show_bug.cgi?id=42327 It's not clear to me however whether this is intentional or a regression when compared to LLD 7. As stated in that bug report it would be helpful for me to get some kind of documentation
2019 Jun 21
2
LLD handling of orphaned sections
On Fri, 21 Jun 2019 at 13:05, Rui Ueyama via llvm-dev <llvm-dev at lists.llvm.org> wrote: > > I think Geroge (cc'ed) knows better than me in that area. > > lld is underdocumented, and in particular there is virtually no documentation about its linker script support. Our basic strategy is to follow the GNU's documentation and the implementations unless it is too hard or
2010 Nov 04
1
orphan inodes deleted issue
Dear All, my servers are running CentOS 5.5 x86_64 with kernel 2.6.18.194.17.4.el, a Gigabyte motherboard and 2 hard disks (Seagate 500GB). My CentOS boxes are configured with RAID 1; yesterday and today I had the same problem on 2 servers with the same configuration. See the following error messages for details: EXT3-fs: INFO: recovery required on readonly filesystem. EXT3-fs: write access will be enabled during
2015 Sep 15
4
Question: running appliance commands over guest fs (resize2fs -P).
Hello everyone! I am working on resizing qcow2 images using virt-resize+libguestfs. E.g. when shrinking a partition, I have to resize the filesystem using resize2fs-size. The problem is that I cannot find out the minimal partition size (aka resize2fs -P). The only way is calling "resize2fs-size 1K", waiting for resize2fs to claim "resize2fs: New size smaller than minimum (510050)"
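The workaround the poster describes, deliberately requesting an impossibly small size and reading the minimum out of the error message, can be scripted by parsing the quoted diagnostic. A sketch, assuming the error text has exactly the wording quoted above (it may differ between e2fsprogs versions):

```python
import re

# Error text as quoted in the thread; treat the exact wording as an
# assumption, not a stable interface.
stderr = "resize2fs: New size smaller than minimum (510050)"

m = re.search(r"New size smaller than minimum \((\d+)\)", stderr)
min_blocks = int(m.group(1)) if m else None
print(min_blocks)  # → 510050
```

Parsing human-readable error output is fragile by design here; resize2fs -P (where available in the appliance) is the robust route, which is what the thread is ultimately about.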
2009 Nov 12
0
[PATCH 05/12] Btrfs: Avoid orphan inodes cleanup during replaying log
We do log replay in a single transaction, so it's not good to do unbound operations while replaying the log. This patch makes orphan inode cleanup run after the log has been replayed. It also avoids doing other unbound operations, such as truncating a file, during log replay. These unbound operations are postponed to the orphan inode cleanup stage. Signed-off-by: Yan Zheng
2011 Aug 30
3
resize2fs
Hi All: I am trying to resize a CentOS (5.2) VM drive. I use VMware and I have increased the size of the drive by 40G. I am running resize2fs on /dev/sdb1 (which is my root partition), but when I do I get this error: [root at centos ~]# resize2fs /dev/sdb1 120G resize2fs 1.39 (29-May-2006) The containing partition (or device) is only 19970795 (4k) blocks. You requested a new size of 31457280
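The block counts in that error tell the story: the containing partition is still only about 76 GiB, so the 40G added in VMware never reached /dev/sdb1, and the partition table has to be grown (e.g. with fdisk or parted) before resize2fs can succeed. A quick sketch of the arithmetic, using the numbers from the error message:

```python
BLOCK = 4096            # 4 KiB filesystem blocks, per the resize2fs message
have = 19970795         # blocks the containing partition actually holds
want = 31457280         # blocks requested for the 120G target

def gib(blocks):
    return blocks * BLOCK / 2**30

print(f"partition: {gib(have):.1f} GiB, requested: {gib(want):.1f} GiB")
# → partition: 76.2 GiB, requested: 120.0 GiB
```

So resize2fs is behaving correctly; the request simply exceeds what the partition can currently hold.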
2015 Sep 15
0
Fwd: Re: Question: running appliance commands over guest fs (resize2fs -P).
-------- Forwarded Message -------- Subject: Re: [Libguestfs] Question: running appliance commands over guest fs (resize2fs -P). Date: Tue, 15 Sep 2015 17:17:16 +0300 From: Maxim Perevedentsev <mperevedentsev@virtuozzo.com> To: Richard W.M. Jones <rjones@redhat.com> On 09/15/2015 04:57 PM, Richard W.M. Jones wrote: >> 2) More general, how to execute commands from
2011 Jun 24
1
How long should resize2fs take?
Hullo! First mail, sorry if this is the wrong place for this kind of question. I realise this is a "piece of string" type question. tl;dr version: I have resize2fs shrinking an ext4 filesystem from ~4TB to ~3TB and it's been running for ~2 days. Is this normal? strace shows lots of: lseek(3, 42978250752, SEEK_SET) = 42978250752 read(3,
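The lseek offset in that strace gives at least a rough sense of position, assuming the ~4 TB figure from the post; note that a shrink moves blocks non-sequentially, so this is only a crude indicator, not a linear progress bar:

```python
offset = 42978250752      # byte offset from the lseek in the strace above
size = 4 * 2**40          # assumed ~4 TiB device (from the post)

print(f"{offset / 2**30:.1f} GiB in, ~{100 * offset / size:.1f}% of the device")
# → 40.0 GiB in, ~1.0% of the device
```

If the offsets keep advancing between strace samples, the shrink is still making progress, just slowly.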