Displaying 20 results from an estimated 220 matches similar to: "orphan inodes deleted issue"
2001 Sep 24
4
part of files in another file after crash
For strange reasons my notebook sometimes crashes shortly after startup
(but that's not ext3's fault, maybe bad memory? When I wait several minutes it
works without problems).
The problem is that after 3 crashes at startup, when my notebook finally
worked, I got the message:
Sep 23 23:29:17 blackbox kernel: EXT3-fs warning (device ide0(3,3)):
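For a crash pattern like this, a forced full check from a rescue environment is the usual way to confirm that journal replay and orphan processing left the filesystem consistent; a minimal sketch, with an illustrative device name:

  # unmount (or boot a rescue system), then force a complete check
  umount /dev/hda3
  e2fsck -f -v /dev/hda3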
2001 Oct 09
2
Assert in jbd-kernel.c
Hello. I have installed the ext3 file system on a test system, and
sometimes I have a problem: I get an assert from within jbd-kernel.c,
and whatever program was writing to the disk when this happens is unable
to continue.
The system is a server I built, which I named "dax". It is running
Debian unstable, and I updated it to all the latest packages in Debian
unstable as of today.
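When reproducing an assertion like this, it can help to capture the kernel log around the assert and identify the writer stuck in uninterruptible sleep; a rough sketch:

  # save the assertion message and surrounding context
  dmesg | grep -B 2 -A 10 -i 'assert' > jbd-assert.log
  # list processes blocked in D state (uninterruptible disk wait)
  ps axo pid,stat,comm | awk '$2 ~ /D/'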
2001 Oct 17
3
"ext2fs_check_if_mount: No such file or directory while determining whether" messages
Hi. I was using 2.4.10 with ext3 0.9.10 and thought it was
time to use -ac for the first time because 2.4.12-ac3
includes 0.9.12.
I don't know what I did to get the following messages, but during
my last boot I removed /etc/mtab (at runtime) and made it a
symlink to /proc/mounts. Not sure if that was a bad idea, but the only
problem until I rebooted was needing losetup -d.
When I rebooted, all
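For reference, the /etc/mtab replacement described above amounts to the following; whether it is advisable depends on the tools in use (loop devices then need an explicit losetup -d, since umount can no longer record them in mtab):

  # replace the writable mtab with the kernel's own view of mounts
  rm /etc/mtab
  ln -s /proc/mounts /etc/mtab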
2013 Sep 05
4
Bug#721946: xen-hypervisor-4.1-amd64: dom0_mem cannot exceed some value
Package: xen-hypervisor-4.1-amd64
Version: 4.1.4-3+deb7u1
Severity: normal
I tried GRUB_CMDLINE_XEN="dom0_mem=8192M": that delivers 6964868K total, then
crashes when used=2837436K free=4127432K.
By crash I mean the GNOME screen was blown away, replaced by a black screen
with white log lines. That seems to happen every time dom0 uses a large amount
of memory.
After setting
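A common workaround for dom0 memory instability is to pin dom0 to a fixed, smaller allocation instead of letting it balloon; a sketch for Debian's GRUB setup (the 4096M figure is an assumption for illustration, not taken from the report):

  # /etc/default/grub: fix dom0 memory at a constant size
  GRUB_CMDLINE_XEN="dom0_mem=4096M,max:4096M"
  # then regenerate the grub configuration
  update-grub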
2011 Aug 11
6
unable to mount zfs file system... please help
# uname -a
Linux testbox 2.6.18-194.el5 #1 SMP Tue Mar 16 21:52:39 EDT 2010 x86_64 x86_64 x86_64 GNU/Linux
# rpm -qa|grep zfs
zfs-test-0.5.2-1
zfs-modules-0.5.2-1_2.6.18_194.el5
zfs-0.5.2-1
zfs-modules-devel-0.5.2-1_2.6.18_194.el5
zfs-devel-0.5.2-1
# zfs list
NAME        USED  AVAIL  REFER  MOUNTPOINT
pool1       120K   228G    21K  /pool1
pool1/fs1    21K   228G    21K  /vik
[root at
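Since the excerpt is cut off before the actual error, the usual first steps are to check whether the dataset believes it is mounted and whether the mountpoint is usable; dataset and mountpoint names are taken from the listing above:

  # check mount state and configured mountpoint
  zfs get mounted,mountpoint pool1/fs1
  # make sure the target directory exists, then mount explicitly
  mkdir -p /vik
  zfs mount pool1/fs1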
2007 Jan 15
1
inodes and ocfs
Hi. ocfs-2.4.9-e-enterprise-1.0.12-1, Red Hat 2.1 enterprise kernel.
Don't know if this is a problem or not, but our monitoring software picked up
that an OCFS filesystem is running out of available inodes:
[root]# df -i
Filesystem      Inodes  IUsed  IFree IUse% Mounted on
/dev/sdh        163825 163527    298  100% /a04
This filesystem has about 11 files in it and isn't used, so
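A quick cross-check when df -i disagrees with the visible file count is to enumerate everything actually on that filesystem, including directories and hidden files (on OCFS, inodes may also be consumed by filesystem metadata, so a mismatch is not necessarily corruption):

  # count every directory entry reachable on /a04, staying on one fs
  find /a04 -xdev | wc -l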
2011 Aug 08
0
[PATCH] Btrfs: fix how we reserve space for deleting inodes
I converted btrfs_truncate to do sane reservations for truncate, but didn't
convert btrfs_evict_inode. Basically we need to save the orphan_rsv for
deleting the orphan item, and do normal reservations for our truncate. Thanks,
Signed-off-by: Josef Bacik <josef@redhat.com>
---
fs/btrfs/inode.c | 31 +++++++++++++++++++------------
1 files changed, 19 insertions(+), 12
2012 Apr 09
0
[PATCH] Btrfs-progs: make btrfsck aware of free space inodes
The new xfstests will run fsck against the volume to make sure we didn't
introduce any inconsistencies, which is nice except we will error out
immediately if we mount with inode_cache. We need to make btrfsck skip the
special free space cache items and then just assume that we have a link for
the free space cache inode item. This makes btrfsck pass with success on a
fs with inode cache
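The scenario being fixed can be reproduced along these lines (the device name is illustrative; inode_cache was an optional btrfs mount feature at the time):

  # mount once with the inode cache enabled so the cache items get created
  mount -o inode_cache /dev/sdb /mnt
  umount /mnt
  # an fsck unaware of the free-space/inode-cache items errors out here
  btrfsck /dev/sdb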
2014 Apr 19
0
Re: Many orphaned inodes after resize2fs
On Sat, Apr 19, 2014 at 05:42:12PM +0200, Patrik Horník wrote:
>
> Please confirm that this is a fully correct solution (for my purpose, not
> an elegant clean way for an official fix) and that it has no negative
> consequences. It seems that way, but I did not analyze all the code paths
> the fixed code is in.
Yes, that's a fine solution. What I'll probably do is disable the
check if
2019 Nov 27
0
Re: [v2v PATCH] v2v: require 100 availabe inodes on each filesystem (RHBZ#1764569)
On 11/27/19 11:00 AM, Pino Toscano wrote:
In the subject: s/availabe/available/
also, is it necessary to stick RHBZ#... in the subject, or is that a
better fit in the commit body?
> Enough free space in a filesystem does not imply available inodes to
> create/modify files on that filesystem. Hence, require at least 100
> available inodes on filesystems that can provide inode counts.
2023 Mar 08
1
[PATCH v7 0/6] evm: Do HMAC of multiple per LSM xattrs for new inodes
On Thu, Dec 1, 2022 at 5:42 AM Roberto Sassu
<roberto.sassu at huaweicloud.com> wrote:
>
> From: Roberto Sassu <roberto.sassu at huawei.com>
>
> One of the major goals of LSM stacking is to run multiple LSMs side by side
> without interfering with each other. The ultimate decision will depend on
> each individual LSM's decision.
>
> Several changes need to be made to
2014 Apr 18
0
Many orphaned inodes after resize2fs
Hello,
yesterday I experienced the following problem with my ext3 filesystem:
- I had an ext3 filesystem of a few TB, with a journal. I correctly
unmounted it and it was marked clean.
- I then ran fsck.ext3 -f on it and it did not find any problems.
- After increasing the size of its LVM volume by 1.5 TB I resized the
filesystem with resize2fs lvm_volume, and it finished without problems.
- But
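For comparison, the offline-resize sequence described above, with a re-check after the resize (which is where the orphan problem surfaced); volume names are illustrative:

  umount /dev/vg0/data
  e2fsck -f /dev/vg0/data          # confirm clean before resizing
  lvextend -L +1.5T /dev/vg0/data  # grow the LVM volume
  resize2fs /dev/vg0/data          # grow the filesystem to fill it
  e2fsck -f /dev/vg0/data          # re-check; errors showed up at this step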
2002 Feb 14
1
[BUG] [PATCH]: handling bad inodes in 2.4.x kernels
Hi folks,
I already posted this to the kernel mailing list a few days ago, but nobody
there seems to be interested in what I found out.
Since I believe this is a serious bug, I'm posting my observations again...
The bug concerns the handling of bad inodes in at least the 2.4.16, .17,
.18-pre9 and .9 kernel releases (I suspect all 2.4 kernels are affected)
and causes the names_cache to get
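A leak like the one described can be observed by watching the slab cache grow while exercising the bad-inode path; a rough sketch, assuming names_cache appears in /proc/slabinfo on the affected kernels:

  # snapshot the slab, trigger the bad-inode path, snapshot again
  grep names_cache /proc/slabinfo
  # ... perform the operations that hit bad inodes ...
  grep names_cache /proc/slabinfo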
2005 Sep 29
2
Maildir and counting inodes
I've been testing with alpha 3 and I am about ready to go into production. I
am switching from mbox to maildir and I'd like to know if there is a
formula or rule of thumb for determining whether your file system will have
enough inodes to handle all the mail message files.
I could write a script to look at each user's mbox files, count the
number of messages and calculate the average number
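A rough sketch of such a script, counting messages per mbox by their "From " separator lines (this can overcount slightly if unescaped "From " lines appear in message bodies); the spool path is illustrative:

  # one maildir message ~= one inode, plus a few per maildir directory
  total=0
  for f in /var/mail/*; do
      n=$(grep -c '^From ' "$f")
      echo "$f: $n messages"
      total=$((total + n))
  done
  echo "estimated inodes needed: $total"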
2002 May 07
3
inodes 100% full, how do I know?
How can you know beforehand, without running fsck, that all inodes of a
particular ext3 filesystem are used? The default system tools use output from
df, which shows only 50% usage of the filesystem, and pretend nothing is
wrong, while you really can't move or copy a file to it. So I only found
out when running fsck.
This is my output from fsck (RH7.2, stock kernel, stock? ext3):
root# fsck
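Without running fsck, inode usage can be read directly with df's inode mode, which the poster's default tools were not using:

  # -i reports inodes instead of blocks; IUse% shows the real exhaustion
  df -i /mountpoint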
2003 Aug 14
1
/usr: create/symlink failed, no inodes free
Hi sirs,
I tried to install galeon2 on my machine but got an error message that I really do not understand while building mozilla.
The errors appear at the very beginning of the make command.
I attach my uname output, the df output both before and after make, and the error messages with this mail.
In which cases are inodes not sufficient? And how do I get rid of this kind of error?
Thank you in advance for
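When a filesystem runs out of inodes like this, a usual follow-up is to find which directories hold the most entries; a portable sketch:

  # count entries per second-level directory under /usr, largest last
  find /usr -xdev | cut -d/ -f1-3 | sort | uniq -c | sort -n | tail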
2012 Sep 28
1
Changes to inodes discovered by aide
Hi.
On one of my servers aide just reported inode changes to a large number of files in a variety of directories, e.g. /usr/bin, /usr/sbin etc. This machine sits behind a couple of firewalls and would be hard to get to.
The day before I updated "clam*" and updated the aide database right after that:
-rw------- 1 root root 7407412 Sep 26 10:58 aide.db.gz
The problem was that the
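Package updates legitimately replace files (and thus their inodes), so after verifying that the reported changes match the update, the database is typically refreshed; a sketch using common default paths, which vary by distribution:

  # compare the system against the current database
  aide --check
  # if the differences are explained by the update, regenerate it
  aide --update
  mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz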
2019 Nov 28
1
[v2v PATCH v2] v2v: require 100 available inodes on each filesystem
Enough free space in a filesystem does not imply available inodes to
create/modify files on that filesystem. Hence, require at least 100
available inodes on filesystems that can provide inode counts.
Related to: RHBZ#1764569
---
 docs/virt-v2v.pod |  3 +++
 v2v/v2v.ml        | 13 +++++++++++--
 2 files changed, 14 insertions(+), 2 deletions(-)
diff --git a/docs/virt-v2v.pod b/docs/virt-v2v.pod
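The failure mode this patch guards against (free space but no free inodes) is easy to reproduce on a scratch image; everything below is illustrative:

  # tiny ext4 image with deliberately few inodes
  dd if=/dev/zero of=/tmp/fs.img bs=1M count=64
  mkfs.ext4 -F -N 128 /tmp/fs.img
  mkdir -p /mnt/test && mount -o loop /tmp/fs.img /mnt/test
  # exhaust inodes with empty files: plenty of space left, no inodes
  i=0; while touch /mnt/test/f$i 2>/dev/null; do i=$((i+1)); done
  df -h /mnt/test; df -i /mnt/test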
2007 Apr 03
2
Corrupt inodes on shared disk...
I am having problems when using a Dell PowerVault MD3000 with multipath
from a Dell PowerEdge 1950. I have 2 cables connected and mount the
partition on the DAS array. I am using RHEL 4.4 with RHCS and a two-node
cluster. Only one node is "active" at a time; it mounts
the partition, and if there is an issue RHCS will fence the device
and then the other node will mount the
2008 Aug 05
3
dovecot reporting "No space left on device" - yet df shows plenty of space / inodes.
Hi,
I am running dovecot 1.0.rc7 on a SUSE Linux server. The server has approx. 200+ mailboxes.
Last week the filesystem (/dev/mapper/datavg/dat2lv) ran out of space, causing it to go into read-only mode. When I realised this I allocated some more space and rebooted the machine...
Strangely, it seems that dovecot is still having problems... It's like dovecot doesn't realise that