similar to: How do I check fragmentation amount?

Displaying 20 results from an estimated 10000 matches similar to: "How do I check fragmentation amount?"

2012 May 30
4
Reproducing fragmentation and out of space error
Recently I ran into a situation where an ocfs2 (1.4) volume was reporting it was out of space when it was not. Deleting some files helped short term, but the problem quickly came back. I believe this is due to the fragmentation bug that I have seen referenced in the mailing list archive. I am trying to reproduce the problem on a test system so that I can validate that upgrading to 1.6
2010 May 21
2
fsck.ocfs2 using huge amount of memory?
We are setting up 2 new EL5 U4 machines to replace our current database servers running our demo environment. We use 3Par SANs and their snap clone options. The current production system we snap clone from is EL4 U5 with ocfs2 1.2.9, the new servers have ocfs2 1.4.3 installed. Part of the refresh process is to run fsck.ocfs2 on the volume to recover, but right now as I am trying to run it on our
2006 Mar 20
1
fixing a corrupt /dev/hdar .. debugfs assistance...
I used ddrescue to copy /dev/md1 to a disk of sufficient size, and re-ran e2fsck, and still get the error message that there's no root file system (I've tried most every superblock): # fsck -y -b 7962624 /dev/sdf fsck 1.36 (05-Feb-2005) e2fsck 1.36 (05-Feb-2005) Superblock has a bad ext3 journal (inode 8). Clear? yes *** ext3 journal has been deleted - filesystem is now ext2 only ***
2006 Oct 19
1
Fragmentation problem: Archive logs on ocfs1 and ocfs2
Hello All, I have a few questions about our use of ocfs1/2 for archive logs on 10G RAC. Is there an article out there describing why fragmentation is a special concern for ocfs1/2? Are there ways to remove fragmentation short of rebuilding the fs? Is there a way to estimate how often we will need to rebuild the fs? Any special tools/packages available to handle this issue? Regards, Pradeep.
2011 Dec 06
2
OCFS2 showing "No space left on device" on a device with free space
Hi, I am getting the error "No space left on device" on an ocfs2 filesystem that still has free space. Additional information is as below, [root at sai93 staging]# debugfs.ocfs2 -n -R "stats" /dev/sdb1 | grep -i "Cluster Size" Block Size Bits: 12 Cluster Size Bits: 15 [root at sai93 ~]# cat /etc/redhat-release Red Hat Enterprise Linux Server release
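The debugfs.ocfs2 "stats" output above reports sizes as powers of two ("bits") rather than bytes. As a minimal illustration (a sketch added here, not part of the original thread), the reported values convert to byte sizes like this:

```python
# Convert the "bits" values reported by `debugfs.ocfs2 -R stats` into bytes.
# The values below come from the snippet above; the conversion is a shift.
block_size_bits = 12    # "Block Size Bits: 12"
cluster_size_bits = 15  # "Cluster Size Bits: 15"

block_size = 1 << block_size_bits      # bytes per filesystem block
cluster_size = 1 << cluster_size_bits  # bytes per allocation cluster

print(block_size, cluster_size)  # 4096 32768
```

So this volume uses 4 KB blocks and 32 KB clusters; free clusters can exist while no contiguous extent is available, which is consistent with the fragmentation theme of these threads.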
2006 Aug 02
1
Free space oddities on OCFS2
Hi all, I'm testing OCFS2 as a cluster filesystem for a mail system based on maildir, so basically the filesystem must be able to deal with lots of directories, and lots of small files. The first "oddity" is that when I mount a newly formatted ocfs2 fs, it already contains used space: [root@ocfs1 /]# df /cgp02 Filesystem 1K-blocks Used Available Use% Mounted on
2011 Dec 20
8
ocfs2 - Kernel panic on many write/read from both
Sorry, I didn't copy everything: TEST-MAIL1# echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc debugfs.ocfs2 1.6.4 5239722 26198604 246266859 TEST-MAIL1# echo "ls //orphan_dir:0001"|debugfs.ocfs2 /dev/dm-0|wc debugfs.ocfs2 1.6.4 6074335 30371669 285493670 TEST-MAIL2 ~ # echo "ls //orphan_dir:0000"|debugfs.ocfs2 /dev/dm-0|wc debugfs.ocfs2 1.6.4 5239722 26198604
2006 Sep 20
6
ocfs2 - disk usage inconsistencies
Hi all. I have a 50 GB OCFS2 file system. I'm currently using ~26GB of space but df is reporting 43 GB used. Any ideas how to find out where the missing 17GB is at? The file system was formatted with a 16K cluster & 4K block size. Thanks, Matt
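One likely contributor to a gap like this is cluster-granular allocation: every non-empty file consumes a whole number of clusters, so with a 16K cluster size small files can inflate "used" space well past their logical size. A back-of-the-envelope sketch (the file counts here are hypothetical, not from the thread):

```python
import math

def allocated_bytes(file_size, cluster_size=16 * 1024):
    """Space consumed when allocation is rounded up to whole clusters."""
    if file_size == 0:
        return 0
    return math.ceil(file_size / cluster_size) * cluster_size

# A hypothetical 1 KB file still occupies one full 16 KB cluster:
print(allocated_bytes(1024))  # 16384

# A million such files would waste roughly 14 GiB beyond their logical size:
waste = 1_000_000 * (allocated_bytes(1024) - 1024)
print(waste // 2**30)  # 14
```

This does not prove that is what happened on this volume, but it shows how many-small-files workloads can account for multi-GB discrepancies between file totals and df output.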
2008 Sep 25
1
ocfs2 filesystem seems out of sync
Hi there I recently installed an OCFS2 filesystem on our FC-SAN. Everything seemed to work fine and I could read & write the filesystem from both servers that are mounting it. After a while though, writes coming from one node do not appear on the other node and vice versa. I am not sure what's causing this, and not very experienced at debugging filesystems. If anybody has any
2008 Jan 11
3
systems hang when accessing parts of the OCFS2 file system
Hi everyone Firstly, apologies for the cross post, I am not sure which list is most appropriate for this question. I should also point out, that I did not install OCFS2 and I am not the person that normally looks after these kind of things, so please can you bear that in mind when you make any suggestions (I will need a lot of detail!) The problem: accessing certain directories within the
2012 Feb 01
3
A Billion Files on OCFS2 -- Best Practices?
We have an application that has many processing threads writing more than a billion files ranging from 2KB to 50KB, with 50% under 8KB (currently there are 700 million files). The files are never deleted or modified; they are written once, and read infrequently. The files are hashed so that they are evenly distributed across ~1,000,000 subdirectories up to 3 levels deep, with up to 1000 files
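The arithmetic behind that layout: a billion files spread evenly over ~1,000,000 leaf directories averages ~1000 files per directory. A minimal sketch of such hash sharding follows; this layout (100 x 100 x 100 directories, SHA-1 based) is purely hypothetical and is not the poster's actual scheme:

```python
import hashlib

def shard_path(name, levels=3, fanout=100):
    """Map a file name to a fixed 3-level subdirectory path.
    fanout=100 per level gives 100**3 = 1,000,000 leaf directories,
    matching the "~1,000,000 subdirectories up to 3 levels deep" above."""
    digest = hashlib.sha1(name.encode()).digest()
    parts = [
        str((digest[2 * i] << 8 | digest[2 * i + 1]) % fanout).zfill(2)
        for i in range(levels)
    ]
    return "/".join(parts + [name])

print(shard_path("message-000001.dat"))
```

Because the path is derived from the name, reads need no index lookup, and an even hash keeps each directory near the ~1000-entry average.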
2008 May 07
1
[PATCH]ocfs2-1.2: Add dput for uuid entry.
In ocfs2-1.2, when we mount a device, a debugfs dir will be created using its uuid. When 2 devices have the same uuid, after the 1st device is mounted, the 2nd one can't be mounted. This is OK. But the problem is that the dentry's reference is added. So when the 1st volume is umounted, none of these 2 volumes can be mounted then. So this fix solves this problem by dputting the
2014 Jan 07
10
[PATCH net-next v2 1/4] net: allow > 0 order atomic page alloc in skb_page_frag_refill
skb_page_frag_refill currently permits only order-0 page allocs unless GFP_WAIT is used. Change skb_page_frag_refill to attempt higher-order page allocations whether or not GFP_WAIT is used. If memory cannot be allocated, the allocator will fall back to successively smaller page allocs (down to order-0 page allocs). This change brings skb_page_frag_refill in line with the existing page allocation
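The fallback strategy this patch describes can be sketched in language-neutral terms: attempt the highest page order first, then step down order by order until order-0. A minimal Python model follows; the allocator callback is a stand-in for illustration, not kernel code:

```python
def alloc_with_fallback(try_alloc, max_order=3):
    """Attempt a high-order allocation, falling back to successively
    smaller orders down to order 0, mirroring the strategy described
    for skb_page_frag_refill. `try_alloc(order)` is a stand-in that
    returns an allocation or None on failure."""
    for order in range(max_order, -1, -1):
        block = try_alloc(order)
        if block is not None:
            return order, block
    return None  # even order-0 failed

# Simulated allocator where only order-1 (two-page) blocks are available:
available_orders = {1}
result = alloc_with_fallback(lambda o: "pages" if o in available_orders else None)
print(result)  # (1, 'pages')
```

The point of the kernel change is the same shape: higher-order attempts are opportunistic, and failure degrades gracefully to the order-0 behavior that existed before.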
2011 Jul 06
2
Slow umounts on SLES10 patchlevel 3 ocfs2
Hi, we are using SLES10 Patchlevel 3 with 12 nodes hosting Tomcat application servers. The cluster had been running for some time (about 200 days) without problems. Recently we needed to shut down the cluster for maintenance and experienced very long times for the umount of the filesystem. It took something like 45 minutes per node and filesystem (12 x 45 minutes shutdown time). As a result the planned
2010 Apr 26
1
slowdown - fragmentation?
2014 Sep 10
1
How to unlock a blocked resource? Thanks
Hi All: We are testing with two nodes in one OCFS2 cluster. The cluster hangs, possibly because of a deadlock. Using the debugfs.ocfs2 tool we found that one resource has been held by one node for a long time while the other node is still waiting for it, so the cluster hangs. debugfs.ocfs2 -R "fs_locks -B" /dev/dm-0 debugfs.ocfs2 -R "dlm_locks LOCKID_XXX" /dev/dm-0 How
2009 Jan 15
5
[PATCH 0/3] ocfs2: Inode Allocation Strategy Improvement.v2
Changelog from V1 to V2: 1. Modified some code according to Mark's advice. 2. Attached some test statistics in the commit log of patch 3 and in this e-mail as well. See below. Hi all, In ocfs2, when we create a fresh file system and create inodes in it, they are contiguous and good for readdir+stat. But if we delete all the inodes and create them again, the new inodes will get spread out, and that