2004 Jun 22 (1 reply)
The truncate_inode_page call in ocfs_file_release causes the severe throughput drop of file reading in OCFS2.
...eaned when this inode is first opened.
In this case, the file reading operation always reads data directly from
the disk, whose throughput is only 16 Mbytes/sec on our development
machine. But if we bypass the call to truncate_inode_page(), the file
reading throughput on one node can reach 1300 Mbytes/sec, which is about
75% of that of ext3.
I think it is not a good idea to clean all page caches of an inode when
its last reference is closed. The inode may be reopened very soon and
its cached pages accessed again.
I guess your intention in calling truncate_inode_page() is to avoid...
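The effect being reported is easy to reproduce from userspace. Below is a
hedged sketch, not OCFS2 code: posix_fadvise(POSIX_FADV_DONTNEED) asks the
kernel to evict a file's cached pages, mimicking what a
truncate_inode_page() call in the release path would do, so the next
open+read goes to disk instead of the page cache.

    /* Userspace analogue (illustration only, not the OCFS2 source) of
     * "clean all page caches of an inode on last close". */
    #define _POSIX_C_SOURCE 200112L
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(int argc, char **argv)
    {
        if (argc < 2) {
            fprintf(stderr, "usage: %s <file>\n", argv[0]);
            return 1;
        }
        int fd = open(argv[1], O_RDONLY);
        if (fd < 0) {
            perror("open");
            return 1;
        }
        /* Evict this file's pages from the page cache before closing;
         * a subsequent reader must fetch the data from disk, which is
         * the slow path the post measures at ~16 Mbytes/sec. */
        posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
        close(fd);
        return 0;
    }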
2007 Nov 29 (6 replies)
PCI Passthrough to HVM on xen-unstable
...ed Hat Enterprise Linux Server (2.6.18-8.el5-up)
        root (hd0,0)
        kernel /vmlinuz-2.6.18-8.el5 ro root=/dev/VolGroup00/LogVol01 crashkernel=128M@16M maxcpus=1
        initrd /initrd-2.6.18-8.el5.img
#2
title RHEL5-XEN311-RC2
        root (hd0,0)
        kernel /xen311/xen-3.1.1-rc2.gz dom0_mem=1300M loopback.nloopbacks=16
        module /xen311/vmlinuz-2.6.18-xen-311 root=/dev/VolGroup00/LogVol01 ro showopts console=tty0
        module /xen311/initrd-2.6.18-xen-311.img
#3
title RHEL5-XEN320-UNSTABLE
        root (hd0,0)
        kernel /xen320-unstable/xen-3.2-unstable.gz dom0_mem=1300M loopback.nloopbacks=...
2004 Jun 22 (1 reply)
The truncate_inode_page call in ocfs_file_release causes the severe throughput drop of file reading in OCFS2.
>-----Original Message-----
>From: ocfs2-devel-bounces@oss.oracle.com
>[mailto:ocfs2-devel-bounces@oss.oracle.com] On Behalf Of Wim Coekaerts
>Sent: June 22, 2004 16:01
>To: Zhang, Sonic
>
>the problem is, how can we notify. I think we don't want to notify
>every node on every change, otherwise we overload the interconnect and
>we don't
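To see why per-change notification worries the thread, here is a toy
message count (my illustration; the node and operation counts are
invented, not from the thread) comparing broadcasting an invalidate on
every change against revalidating only when a node opens the file:

    /* Toy model, not OCFS2 code: rough message counts for the two
     * cache-coherency strategies the reply is weighing. */
    #include <stdio.h>

    #define NODES 8

    int main(void)
    {
        long writes = 1000000;   /* changes made cluster-wide */
        long opens  = 1000;      /* times any node reopens the file */

        /* Strategy A: notify every other node on every change. */
        long broadcast_msgs = writes * (NODES - 1);

        /* Strategy B: a node checks/refreshes its cache only when the
         * inode is first opened (the "clean on first open" approach). */
        long revalidate_msgs = opens;

        printf("broadcast on change: %ld messages\n", broadcast_msgs);
        printf("revalidate on open:  %ld messages\n", revalidate_msgs);
        return 0;
    }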