Displaying 20 results from an estimated 20000 matches similar to: "ext3 efficiency, larger vs smaller file system, lots of inodes..."

2009 Apr 17
2
E2fsck and large file
How big does a file have to be for e2fsck to consider it a "large file"? 814611 blocks used (42.79%) 0 bad blocks 1 large file <----- that Thanks, John Nelson
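For reference, e2fsck counts a file as "large" once it is over 2 GiB, the size that requires ext2/3's large_file feature bit; locating such files on the mounted filesystem is straightforward with find (the mount point below is a placeholder):

    # list files over 2 GiB, staying on this one filesystem (-xdev)
    find /mnt/point -xdev -type f -size +2G -exec ls -lh {} \;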
2002 May 07
3
inodes 100% full, how do I know?
How can you know beforehand, without running fsck, that all inodes of a particular ext3 filesystem are used? The default system tools use the output of df, which shows only 50% usage of the filesystem and pretends nothing is wrong, while you really can't move or copy a file to it. So I only found out when running fsck. This is my output from fsck (RH7.2, stock kernel, stock? ext3): root# fsck
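For what it's worth, df can report inode usage directly, which answers the "how do I know beforehand" question without an fsck:

    # -i switches df from block usage to inode usage
    df -i /mount/point
    # IUse% at 100% means no new files can be created, even with free blocks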
2014 Apr 18
2
Many orphaned inodes after resize2fs
Hello, yesterday I experienced the following problem with my ext3 filesystem: - I had an ext3 filesystem of a few TB, with journal. I correctly unmounted it and it was marked clean. - I then ran fsck.ext3 -f on it and it did not find any problem. - After increasing the size of its LVM volume by 1.5 TB I resized the filesystem with resize2fs lvm_volume and it finished without problem. - But
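For context, the grow sequence described here is the standard offline one; a sketch, with device names as placeholders:

    umount /mnt/data
    e2fsck -f /dev/vg0/data            # force a full check first
    lvextend -L +1.5T /dev/vg0/data    # grow the LVM volume
    resize2fs /dev/vg0/data            # grow ext3 to fill the new space
    e2fsck -f /dev/vg0/data            # re-check before mounting again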
2009 Dec 08
3
botched RAID, now e2fsck or what?
Hi all, Somehow I managed to mess up a RAID array containing an ext3 partition. In parenthesis, if it matters: I physically disconnected a drive while the array was online. Next thing, I lost the right order of the drives in the array. While trying to re-create it, I overwrote the RAID superblocks. Luckily, the array was a degraded RAID5, so whenever I re-created it, it didn't go into sync;
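Before any re-create attempt in a situation like this, the surviving metadata is worth recording; mdadm can dump whatever is left of each member's superblock without writing anything (device names are placeholders):

    # print the RAID superblock of each member disk, read-only
    mdadm --examine /dev/sd[bcde]1
    # after a re-create, test the guessed drive order with a read-only mount
    mount -o ro /dev/md0 /mnt/test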
2009 Feb 05
1
Questions regarding journal replay
Today, I had to uncleanly shut down one of our machines due to an error in 2.6.28.3. During the boot sequence, the ext4 partition /home experienced a journal replay. /home looks like this: /dev/mapper/volg1-logv1 on /home type ext4 (rw,noexec,nodev,noatime,errors=remount-ro) Filesystem Size Used Avail Use% Mounted on /dev/mapper/volg1-logv1 2,4T 1,4T 1022G 58% /home Filesystem
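After a replay, the superblock records whether the filesystem was left clean; dumpe2fs can show this even while the filesystem is mounted:

    # -h prints only the superblock summary
    dumpe2fs -h /dev/mapper/volg1-logv1 | grep -Ei 'state|mount|error'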
2014 Apr 18
0
Re: Many orphaned inodes after resize2fs
On Fri, Apr 18, 2014 at 06:56:57PM +0200, Patrik Horník wrote: > > yesterday I experienced the following problem with my ext3 filesystem: > > - I had an ext3 filesystem of a few TB, with journal. I correctly > unmounted it and it was marked clean. > > - I then ran fsck.ext3 -f on it and it did not find any problem. > > - After increasing the size of its LVM volume by
2014 Apr 18
3
Re: Many orphaned inodes after resize2fs
Hi, it seems you got it right! I don't know if you read the email I sent you before posting to the mailing list, but I accidentally diagnosed the cause... :) I've noticed that the inodes fsck warned me about, at least the ones I checked, all have all four timestamps no later than 2010... The filesystem has a maximum of 1281998848 inodes, which, read as a Unix timestamp, is a date in August 2010. I don't know how it got
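The coincidence is easy to verify by reading the maximum inode number as a Unix epoch timestamp:

    # GNU date: print 1281998848 seconds after the epoch
    date -u -d @1281998848
    # -> Mon Aug 16 22:47:28 UTC 2010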
2009 Apr 26
1
ext4 mount fails with "resize inode not valid" after a reboot
With kernel 2.6.30-rc2-git6 and prior I am having problems mounting ext4 partitions after reboot. A successful mount looks like this: /dev/cciss/c0d0p8 on /squid-cache0 type ext4 (rw,noexec,nodev,noatime,data=writeback,errors=panic) /dev/cciss/c0d0p9 on /squid-cache1 type ext4 (rw,noexec,nodev,noatime,data=writeback,errors=panic) /dev/cciss/c0d0p10 on /squid-data type ext4
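When mount rejects a filesystem over the resize inode, a forced offline check will normally offer to rebuild it; a sketch against one of the partitions above:

    # run unmounted; e2fsck prompts "Resize inode not valid. Recreate?"
    e2fsck -f /dev/cciss/c0d0p8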
2009 Mar 04
1
file system, kernel or hardware raid failure?
I had a busy mailserver fail on me the other day. Below is what was printed in dmesg. We first suspected a hardware failure (raid controller or something else), so we moved the drives to another (identical hardware) machine and ran fsck. Fsck complained ("short read while reading inode") and asked if I wanted to ignore and rewrite (which I did). After booting up again, the problem came
2008 Oct 24
1
e2fsck discrepancies
Hi, yesterday I ran e2fsck -n on a mounted file system and got: /dev/sdb1 contains a file system with errors, check forced. According to Ted, the lines that followed were not to be trusted, because the file system was mounted. But that error statement suggests running a check with the fs unmounted. Today we scheduled a downtime and ran the check. It came back completely clean: ~:
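If the volume sits on LVM, one way to check a live filesystem without the false positives of e2fsck -n on a mounted device is to fsck a snapshot of it (a sketch; volume names are placeholders):

    # freeze a point-in-time view and check that instead of the live device
    lvcreate -L 1G -s -n check_snap /dev/vg0/data
    e2fsck -f /dev/vg0/check_snap
    lvremove /dev/vg0/check_snap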
2007 Apr 03
2
Corrupt inodes on shared disk...
I am having problems using a Dell PowerVault MD3000 with multipath from a Dell PowerEdge 1950. I have 2 cables connected and mount the partition on the DAS array. I am using RHEL 4.4 with RHCS and a two-node cluster. Only one node is "Active" at a time; it mounts the partition, and if there is an issue RHCS will fence the device and then the other node will mount the
2009 Jun 04
3
Patches that adds delayed orphan scan timer (rev 3)
Resending after implementing review comments.
2008 Apr 03
1
Shrink ext3 filesystem , running out of inode questions
Hi, I have an ext3 file system created with the -T largefile4 option. Now it is running out of inodes, but it's only about 10% full. - Is there a way now to increase the number of inodes without making a new file system? - If not, I am thinking about shrinking the file system, then using the freed-up space to create a new file system with more inodes, and moving the data over. Since I am
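The inode count of an existing ext2/3 filesystem is fixed when it is created, so the rebuild route is the workable one; when re-creating, the inode density can be chosen explicitly (device name is a placeholder):

    # one inode per 16 KB of space instead of largefile4's one per 4 MB
    mkfs.ext3 -i 16384 /dev/vg0/newfs
    # or request an absolute number of inodes
    mkfs.ext3 -N 5000000 /dev/vg0/newfs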
2014 Apr 19
0
Re: Many orphaned inodes after resize2fs
On Sat, Apr 19, 2014 at 05:42:12PM +0200, Patrik Horník wrote: > > Please confirm that this is a fully correct solution (for my purpose, not > an elegant clean way for an official fix) and that it has no negative consequences. It > seems that way, but I did not analyze all the code paths the fixed code is in. Yes, that's a fine solution. What I'll probably do is disable the check if
2013 Sep 16
2
Re: Numbers behind "df" and "tune2fs"
Thanks for your help. I also tried adding some other information as you suggested. I can also take into account: - "Reserved block count: XXXXXXX" from tune2fs, which gives me the number of blocks reserved for root - Reserved GDT blocks: XXX But I didn't think about the FS journal. How can I gather information about it (its size and anything else)? 2013/9/16
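Recent e2fsprogs versions report the journal from the superblock, which answers the size question:

    # -h limits dumpe2fs to the superblock; journal details are listed there
    dumpe2fs -h /dev/sdXN | grep -i journal
    # typical lines include "Journal inode: 8" and "Journal size: 128M"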
2013 Apr 15
8
[PATCH] btrfs-progs: No-op when called as fsck.btrfsck
Hi, I thought that I would attempt a quick little patch that will make btrfsck into a no-op when called as fsck.btrfsck. The reasoning is that the FAQ states that it is recommended and safe to do so, and the current 12.04 version of Ubuntu just symlinks fsck.btrfsck to btrfsck instead of /bin/true. PS - Apologies if I mess this git send-email up! Dan McGrath (1): btrfs-progs: No-op when
2009 Jan 12
1
Bug in inode deletion code leading to stale inodes
Hello, I've hit a bug in OCFS2 delete code which results in inodes being left on disk without any links to them. The workload triggering this creates directories on one node and deletes them on another node in the cluster. The inode is not deleted because both nodes bail out from ocfs2_delete_inode() with: Skipping delete of 100405 because it is in use on other nodes The scenario which I
2002 May 15
2
when is fsck required?
Hi, can anyone give me an example of when an fsck would repair something that the ext3 driver would not? with full "data=journal" journaling, would fsck ever need to be run if all the partitions were ext3? the ext3 mini-howto refers to "certain rare hardware failure cases (e.g. hard drive failures)" that would require a filesystem check, but doesn't go into details.
2008 Jan 22
2
forced fsck (again?)
hello everyone. i guess this has been asked before, but haven't found it in the faq. i have the following issue... it is not uncommon nowadays to have desktops with filesystems in the order of 500gb/1tb. now, my kubuntu (but other distros do the same) forces a fsck on ext3 every so often, no matter what. in the past it wasn't a big issue. but with sizes increasing so much, users are
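For reference, the periodic check is driven by two superblock counters that tune2fs can change; disabling them looks like this (whether that is wise is the debate in this thread):

    # -c 0 disables the mount-count trigger, -i 0 the time-interval trigger
    tune2fs -c 0 -i 0 /dev/sdXN
    # inspect the current values
    tune2fs -l /dev/sdXN | grep -Ei 'mount count|check'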
2013 Sep 16
0
Re: Numbers behind "df" and "tune2fs"
On 9/16/13 9:44 AM, Nicolas Michel wrote: > Thanks for your help. I also tried adding some other information as you suggested. > I can also take into account: > - "Reserved block count: XXXXXXX" from tune2fs, which gives me the > number of blocks reserved for root > - Reserved GDT blocks: XXX > > But I didn't think about the FS journal. How can I gather