Displaying 20 results from an estimated 100 matches similar to: "[PATCH] OCFS2: Pagecache usage optimization on OCFS2"

2009 Jul 13
1
[PATCH 1/1] adds mlogs to aops.c
This patch adds mlogs to aops.c to help with tracing. Signed-off-by: Wengang Wang <wen.gang.wang at oracle.com> --- fs/ocfs2/aops.c | 233 ++++++++++++++++++++++++++++++++++++++++++++----------- 1 files changed, 189 insertions(+), 44 deletions(-) diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c index b2c52b3..b730010 100644 --- a/fs/ocfs2/aops.c +++ b/fs/ocfs2/aops.c @@ -90,7 +90,7 @@ static int
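For readers unfamiliar with the instrumentation being added, the sketch below shows the usual shape of mlog usage with the fs/ocfs2/cluster/masklog.h entry/exit/error helpers of that era. The function, arguments and error path are illustrative assumptions, not lines from the patch itself:

/* Hypothetical example of mlog instrumentation in an ocfs2 address-space
 * operation; kernel context, compiles only inside fs/ocfs2 where
 * cluster/masklog.h provides mlog_entry/mlog_errno/mlog_exit. */
static int ocfs2_example_readpage(struct file *file, struct page *page)
{
	struct inode *inode = page->mapping->host;
	int ret = 0;

	mlog_entry("(0x%p, %lu)\n", file, page->index);  /* trace function entry */

	if (!inode) {
		ret = -EINVAL;
		mlog_errno(ret);                         /* record the error code */
		goto out;
	}

	/* ... the real read path would go here ... */

out:
	mlog_exit(ret);                                  /* trace exit with status */
	return ret;
}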
2009 Jul 21
1
(no subject)
From c70adcaca99acf93bc00cf2edc4d549b83e2f95d Mon Sep 17 00:00:00 2001 From: Wengang Wang <wen.gang.wang at oracle.com> Date: Tue, 21 Jul 2009 10:52:52 +0800 Subject: [PATCH 1/1] ocfs2: adds mlogs to aops.c -V2 This patch adds some mlogs to aops.c to help with tracing and narrowing down bugs. Signed-off-by: Wengang Wang <wen.gang.wang at oracle.com> --- fs/ocfs2/aops.c | 242
2009 Jul 21
1
[PATCH 1/1] ocfs2: adds mlogs to aops.c -V2
This patch adds some mlogs to aops.c to help with tracing and narrowing down bugs. Signed-off-by: Wengang Wang <wen.gang.wang at oracle.com> --- fs/ocfs2/aops.c | 242 +++++++++++++++++++++++++++++++++++++++++++++---------- 1 files changed, 198 insertions(+), 44 deletions(-) diff --git a/fs/ocfs2/aops.c b/fs/ocfs2/aops.c index b2c52b3..4527f16 100644 --- a/fs/ocfs2/aops.c +++
2009 Jun 09
2
[PATCH] OCFS2: fdatasync should skip unimportant metadata writeout
Hi. In ocfs2, fdatasync and fsync are identical. I think fdatasync should skip committing the transaction when inode->i_state has only I_DIRTY_SYNC set, which indicates that only atime and/or mtime were updated. The following patch improves fdatasync throughput. #sysbench --num-threads=16 --max-requests=300000 --test=fileio --file-block-size=4K --file-total-size=16G --file-test-mode=rndwr
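The reasoning follows the generic VFS convention: I_DIRTY_DATASYNC marks metadata that matters for data integrity, while I_DIRTY_SYNC alone means only timestamps changed, so a pure fdatasync can return without forcing the journal. A minimal sketch of that check, using the 2.6.30-era fsync prototype; the handler name and commit helper are placeholders, not the actual ocfs2 patch:

/* Sketch only: skip the journal commit for fdatasync when nothing but
 * timestamps are dirty.  example_commit_metadata() is a placeholder. */
static int example_sync_file(struct file *file, struct dentry *dentry,
			     int datasync)
{
	struct inode *inode = dentry->d_inode;

	/* I_DIRTY_SYNC without I_DIRTY_DATASYNC => only atime/mtime changed,
	 * which fdatasync is allowed to ignore. */
	if (datasync && !(inode->i_state & I_DIRTY_DATASYNC))
		return 0;

	return example_commit_metadata(inode);  /* force the transaction/journal */
}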
2009 Jun 08
1
[PATCH] Btrfs: fdatasync should skip metadata writeout
Hi. In btrfs, fdatasync and fsync are identical. I think fdatasync should skip committing the transaction when inode->i_state has only I_DIRTY_SYNC set, which indicates that only atime and/or mtime were updated. The following patch improves fdatasync throughput. #sysbench --num-threads=16 --max-requests=10000 --test=fileio --file-block-size=4K --file-total-size=16G --file-test-mode=rndwr
2009 Mar 17
33
[git patches] Ocfs2 updates for 2.6.30
Hi, The following patches comprise the bulk of Ocfs2 updates for the 2.6.30 merge window. Aside from larger, more involved fixes, we're adding the following features, which I will describe in the order their patches are mailed. Sunil's exported some more state to our debugfs files, and consolidated some other aspects of our debugfs infrastructure. This will further aid us in debugging
2009 Feb 02
5
[PATCH] btrfs: call mark_inode_dirty when i_size is updated
Hi Chris. I think mark_inode_dirty() needs to be called when the file size expands, in order to flush metadata updates to disk through the sync() syscall or background_writeout(). Thanks. Signed-off-by: Hisashi Hifumi <hifumi.hisashi@oss.ntt.co.jp> diff -Nrup linux-2.6.29-rc3.org/fs/btrfs/file.c linux-2.6.29-rc3/fs/btrfs/file.c --- linux-2.6.29-rc3.org/fs/btrfs/file.c 2009-02-02
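To make the point concrete, here is a minimal hypothetical sketch of the pattern being argued for, i.e. marking the inode dirty whenever a write extends i_size so the new size is picked up by sync()/background writeback (illustrative only, not the btrfs patch):

/* Extend i_size and queue the inode for metadata writeback. */
static void example_update_isize(struct inode *inode, loff_t new_size)
{
	if (new_size > i_size_read(inode)) {
		i_size_write(inode, new_size);
		mark_inode_dirty(inode);  /* without this, sync() may never write the new size */
	}
}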
2007 Apr 27
2
ARC, mmap, pagecache...
Hi, I was wondering about the ARC and its interaction with the VM pagecache... When a file on a ZFS filesystem is mmapped, does the ARC cache get mapped into the process's virtual memory? Or is there another copy? -Manoj
2017 Nov 06
0
Has libvirt guest pagecache level ?
Greetings. Does libvirt have a dedicated page cache area for the guest? If not, what is the difference between cache='none' and cache='directsync'? > The optional cache attribute controls the cache mechanism, possible > values are "default", "none", "writethrough", "writeback", "directsync" > (like "writethrough", but it
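For reference, the cache mode in question is set per disk on the <driver> element of the domain XML; an illustrative fragment (paths and device names are examples only):

<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>  <!-- or cache='directsync' -->
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>

Broadly, cache='none' opens the backing storage with O_DIRECT, bypassing the host page cache, while cache='directsync' adds O_DSYNC on top of that so writes complete only once they reach stable storage.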
2017 Nov 14
0
Re: dramatic performance slowdown due to THP allocation failure with full pagecache
On Tue, Nov 14, 2017 at 10:52:03AM -0700, Blair Bethwaite wrote: > Thanks for the reply Daniel, > > However I think you slightly misunderstood the scenario... > > On 14 November 2017 at 10:32, Daniel P. Berrange <berrange@redhat.com> wrote: > > IOW, if your application has a certain expectation of performance that can only > > be satisfied by having the KVM guest
2017 Nov 14
1
Re: dramatic performance slowdown due to THP allocation failure with full pagecache
On 14 November 2017 at 10:56, Daniel P. Berrange <berrange@redhat.com> wrote: > Oh well THP usage inside the guest is then not really anything todo with > virt, just a regular Linux questions, so not sure libvirt is the best > place to ask. True, I just hoped you or one of the other devs might have some insight on reclaim behaviour that would provide a clue. I guess I'll try a
2017 Nov 14
0
Re: dramatic performance slowdown due to THP allocation failure with full pagecache
On Tue, Nov 14, 2017 at 10:23:56AM -0700, Blair Bethwaite wrote: > Hi all, > > This is not really a libvirt issue but I'm hoping some of the smart folks > here will know more about this problem... > > We have noticed when running some HPC applications on our OpenStack > (libvirt+KVM) cloud that the same application occasionally performs much > worse (4-5x slowdown)
2009 Jun 09
4
[PATCH] btrfs: fix write_dev_supers
Hi. I got the following BUG trace. It is a violation of the BUG_ON(!buffer_locked(bh)) check in the submit_bh() function. In write_dev_supers(), if the wait parameter is set and the buffer_uptodate() check is negative, submit_bh() is executed and hits the above BUG_ON. So I fixed this issue. Thanks. Jun 9 00:41:32 dl580 kernel: ------------[ cut here ]------------ Jun 9 00:41:32 dl580 kernel: kernel BUG at
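The invariant being tripped is the generic buffer-head rule: submit_bh() asserts that the buffer is locked, so any path that resubmits a superblock buffer must lock it first. A sketch of the usual pattern (not the btrfs fix itself; the helper name is made up), using the 2009-era submit_bh(rw, bh) signature:

/* General pattern only: prepare and resubmit a buffer_head for a
 * synchronous write without tripping BUG_ON(!buffer_locked(bh)). */
static int example_resubmit_super(struct buffer_head *bh)
{
	lock_buffer(bh);                       /* submit_bh() requires a locked bh */
	get_bh(bh);                            /* end_buffer_write_sync() drops a ref */
	bh->b_end_io = end_buffer_write_sync;  /* unlocks the bh on completion */
	return submit_bh(WRITE, bh);
}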
2017 Nov 14
2
Re: dramatic performance slowdown due to THP allocation failure with full pagecache
Thanks for the reply Daniel, However I think you slightly misunderstood the scenario... On 14 November 2017 at 10:32, Daniel P. Berrange <berrange@redhat.com> wrote: > IOW, if your application has a certain expectation of performance that can only > be satisfied by having the KVM guest backed by huge pages, then you should > really change to explicitly reserve huge pages for the
2009 Jun 18
8
Patches backported from mainline
All, Please review the patches backported to 1.4 from mainline. Sunil
2007 Mar 01
4
pagecache corruption on Tyan S3870
A couple of months ago I reported some problems with a batch of Tyan K8SSA (S3870) based machines. We are continuing to have an odd problem with these boxes, and if anyone has seen something similar elsewhere, I'd appreciate hearing about it. These boxes are running CentOS 4.4 x86_64 with kernel 2.6.9-42.0.3.ELsmp. They are dual Opteron 265s (dual core) with 4x2GB DIMMs. The
2017 Nov 14
2
dramatic performance slowdown due to THP allocation failure with full pagecache
Hi all, This is not really a libvirt issue but I'm hoping some of the smart folks here will know more about this problem... We have noticed when running some HPC applications on our OpenStack (libvirt+KVM) cloud that the same application occasionally performs much worse (4-5x slowdown) than normal. We can reproduce this quite easily by filling pagecache (i.e. dd-ing a single large file to
2005 Jan 04
0
[PATCH] BUG on error handlings in Ext3 under I/O failure condition
Hello. I found bugs in the error handling of several functions around the ext3 file system, which cause synchronous write I/O operations to complete incorrectly when disk I/O failures occur. Both 2.4 and 2.6 have this problem. I carried out the following experiment: 1. Mount an ext3 file system on a SCSI disk in ordered mode. 2. Open a file on the file system with O_SYNC|O_RDWR|O_TRUNC|O_CREAT
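A minimal userspace reproduction of step 2 of that experiment (the file path is an example; the point is that with O_SYNC each write must reach stable storage or return an error such as EIO):

/* Open a test file with O_SYNC and attempt a synchronous write; under a
 * failing disk the write should report an error rather than succeed. */
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int fd = open("/mnt/ext3/testfile",
		      O_SYNC | O_RDWR | O_TRUNC | O_CREAT, 0644);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	if (write(fd, "data", 4) != 4)
		perror("write");
	return close(fd) ? 1 : 0;
}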
2009 Jun 16
0
[GIT PULL] ocfs2 updates for 2.6.31
Linus, et al, Here are the ocfs2 updates for 2.6.31. It's a quiet cycle, almost completely composed of fixes. There is a nice performance improvement from Hisashi Hifumi for fdatasync. Please pull. Joel The following changes since commit b4348f32dae3cb6eb4bc21c7ed8f76c0b11e9d6a: Linus Torvalds (1): Merge branch 'for-linus' of git://oss.sgi.com/xfs/xfs are available in
2005 Nov 01
2
xen, lvm, drbd, bad kernel messages
Regardless of the filesystem (I've used reiserfs, xfs, ext3), whenever I mount a fresh DRBD partition I get some nasty kernel messages. This is under Debian Sarge, Xen kernel 2.6.11.12-xen0 (dom0) using DRBD v0.7.11 (pulled from Debian "testing"). This is what I did to create the partition. On both nodes I created a new LVM storage device and started DRBD: # lvcreate