Displaying 20 results from an estimated 4000 matches similar to: "O_DIRECT support"
2002 Jul 06
2
ext3 and raw devices
I doubt that this is actually an ext3 problem so I'm taking
a bit of a liberty, but...
We have several machines that have two ext3 partitions
(/, /boot) and a large raw partition. Something happened that
destroyed both the ext3 partitions on all machines.
It appears that the app writing to the raw partition had
a bug that caused it to write to negative block numbers
and that this wrote all
2002 Oct 21
1
All data gone AWOL
Hi,
We have been using ext3 with numerous machines and they
take quite a bit of abuse without incident. This morning I
got a call to say that a power cut had rendered one box unbootable.
Investigations revealed that the /boot partition was in good order.
An fsck of / also showed things to be well - except that the partition
was completely empty, apart from /lost+found.
It might appear that
2004 Feb 05
3
increasing ext3 or io responsiveness
Our Invoice posting routine (intensive hard-drive I/O) freezes every few
seconds to flush the cache. Reading this:
https://listman.redhat.com/archives/ext3-users/2002-November/msg00070.html
I decided to try:
# elvtune -r 2048 -w 131072 /dev/sda
# echo "90 500 0 0 600000 600000 95 20 0" >/proc/sys/vm/bdflush
# run_post_routine
# elvtune -r 128 -w 512 /dev/sda
# echo "30 500 0 0
2005 Oct 19
2
rsync and o_direct
Hi
We currently use rsync for various jobs at our company. We are now
looking at using it to create an offsite synchronised copy of an Oracle 10g
RAC archive logs area. The source area is on an Oracle OCFS filesystem.
The OCFS filesystem requires all reads/writes to be performed with the
O_DIRECT option, thus bypassing cache. Oracle provide an updated
coreutils package which includes the
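Not from the thread, but as a hedged illustration of what the O_DIRECT requirement means for a copy tool: the read buffer, file offset and transfer length all have to be block-aligned, which is why a stock rsync cannot simply be pointed at such a filesystem. The 4096-byte alignment below is an assumption; OCFS may require a different granularity.
#define _GNU_SOURCE            /* for O_DIRECT on Linux */
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const size_t blk = 4096;   /* assumed alignment; check the fs/device */
    void *buf;
    ssize_t n;
    int fd;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    if (posix_memalign(&buf, blk, blk) != 0) {   /* buffer must be aligned */
        perror("posix_memalign");
        return 1;
    }
    if ((fd = open(argv[1], O_RDONLY | O_DIRECT)) < 0) {
        perror("open(O_DIRECT)");
        return 1;
    }
    while ((n = read(fd, buf, blk)) > 0)
        ;                      /* each read is a blk-sized, aligned I/O */
    if (n < 0)
        perror("read");
    close(fd);
    free(buf);
    return 0;
}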
2007 Mar 07
1
ioemu in config file and O_DIRECT option
In the config file, 'type=ioemu' is added with vif. Does it make a
difference if we don't add it?
I ask because I did not notice any difference in either para- or full
virtualization.
The same goes for exporting the disk to the guest dom. Is it just a placeholder,
or something else?
Second thing: I am running RHEL3 as an HVM DomU on xen-3.0.4. Everything
works fine, but I had a problem to
2004 Mar 04
1
[debian-knoppix] warning: updated with obsolete bdflush call
I get this warning during the bootup ext3 file checks on 2.6.* kernels.
It is apparently harmless, but how do I fix it?
2010 Feb 03
0
[PATCH] ocfs2: Add parenthesis to wrap the check for O_DIRECT.
Add parenthesis to wrap the check for O_DIRECT.
Signed-off-by: Tao Ma <tao.ma at oracle.com>
---
fs/ocfs2/file.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
index 06ccf6a..b2ca980 100644
--- a/fs/ocfs2/file.c
+++ b/fs/ocfs2/file.c
@@ -2013,8 +2013,8 @@ out_dio:
/* buffered aio wouldn't have proper lock coverage today
2006 Oct 15
3
open(2) O_DIRECT on smbmount gives EINVAL
Does samba 3.0.23c not support the use of O_DIRECT? When I try to open an
smbmount'd file using O_DIRECT, I get EINVAL. I am able to use O_DIRECT with no
problems on a block device and nfs mounts, so I know the kernel supports it.
samba: 3.0.23c
kernel: 2.6.9-42.0.3.EL (32-bit)
I am using the below code for my test. smb fails on open(2).
#include <fcntl.h>
#include
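A hedged reconstruction of the kind of test described (open an existing file with O_DIRECT and report the error); the default path below is a placeholder, not from the original message.
#define _GNU_SOURCE            /* for O_DIRECT */
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    /* placeholder path; point it at a file on the smbmount'd share */
    const char *path = argc > 1 ? argv[1] : "/mnt/smb/testfile";
    int fd = open(path, O_RDONLY | O_DIRECT);

    if (fd < 0) {
        fprintf(stderr, "open(%s, O_DIRECT): %s\n", path, strerror(errno));
        return 1;
    }
    printf("open(%s, O_DIRECT) succeeded\n", path);
    close(fd);
    return 0;
}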
2005 Oct 25
0
Fwd: rsync and o_direct
Guys, posted this last week and had no response so far. Just posting
again in case anyone missed it. I really could do with knowing as it's
delaying the rollout of a new project I'm working on.
Thanks, Simon
--
Hi
We currently use rsync for various jobs at our company. We are now
looking at using it to create an offsite synchronised copy of an Oracle 10g
RAC archive logs area. The
2006 Jul 24
1
O_DIRECT
It'd be pretty cool if rsync supported use of O_DIRECT on platforms that
support it, with or without my odirect package:
http://dcs.nac.uci.edu/~strombrg/odirect/
I say this because rsync is sometimes used to move a mountain of data,
just once. So there's little point in rsync toasting one's buffer cache.
I gather there's something like O_DIRECT on Windows too, but it's
2003 May 01
3
Performance problem with mysql on a 3ware 1+0 raid array
Hi all,
We are observing a consistent interval of about 4 minutes at which there
are large sustained writes to disk that cause mysqld to block and not
respond for the entire period.
We are using data=journal with a 128M journal and the filesystem is
150GB in size.
We get about 300kb/sec in writes and that will jump to about 2000kb/sec
during the periods of large sustained writes. Those
2003 Mar 21
1
O_DIRECT
Hello,
Just became curious - is O_DIRECT already supported
on ext3 or not yet?
And a little bit off-topic :) - is that flag supported
on any filesystem other than ext2?
Thanks,
Mindaugas
2002 Nov 21
2
/proc/sys/vm/bdflush
I'm lacking some understanding of how to tune, and when to tune, /proc/sys/vm/bdflush.
Where can I read up on this?
Our current problem: load is low, but every so often the system decides
to do some serious disk I/O, which causes all processes to wait for
disk I/O -- load explodes (rises linearly up into the 20s-30s) just to
fall linearly right after that.
We think there might be some
2009 Mar 11
1
Enterprise Application with O_DIRECT access
Hello everyone,
I am learning about and evaluating GlusterFS for film/video editing facilities.
Some major real-time film/video editing applications use
O_DIRECT file access for video/audio data files.
The GLFS client, via the FUSE mechanism, disallows opening a file with the
O_DIRECT flag.
I made a little sample program that reads a file with the O_DIRECT flag, and
tried to open files on GLFS volumes.
It
2009 Aug 18
0
[PATCH] tapdisk:check O_DIRECT on hole file for performance
Although tapdisk has been superseded by tapdisk-ioemu, it is still used on
some old xen-3.2.
So fix a performance problem here.
--
Kernel AIO will retry when it encounters a block that isn't allocated, and then
do the I/O asynchronously,
but the O_DIRECT flag was set, so it io_waits on the data synchronously.
Also clean up a little code style.
Signed-off-by: Wei Kong <weikong.cn@gmail.com>
2010 Mar 13
1
O_DIRECT, avoiding system cache?
Is it possible (planned?) to make rsync avoid going through system cache
and use direct IO?
Right now, if you decide to back up your desktop system (but it's not
only about desktop systems; it's rather more about one-time-only data
transfers) to an external disk, you will notice that your X session lags
terribly, mainly because (see the sketch below):
- the system caches everything that rsync reads from the original drive,
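A hedged sketch of one alternative sometimes used instead of O_DIRECT for one-off bulk reads: read normally and ask the kernel to drop the cached pages afterwards with posix_fadvise(POSIX_FADV_DONTNEED). Illustrative only, not code from the thread or from rsync.
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    char buf[1 << 16];
    off_t done = 0;
    ssize_t n;
    int fd;

    if (argc < 2) {
        fprintf(stderr, "usage: %s <file>\n", argv[0]);
        return 1;
    }
    if ((fd = open(argv[1], O_RDONLY)) < 0) {
        perror("open");
        return 1;
    }
    while ((n = read(fd, buf, sizeof buf)) > 0) {
        done += n;
        /* tell the kernel it may drop what we have already read */
        posix_fadvise(fd, 0, done, POSIX_FADV_DONTNEED);
    }
    close(fd);
    return 0;
}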
2008 Feb 26
2
[PATCH]: Make Xen 3.1 IDE flush on O_DIRECT with drive caching off
All,
Long ago Xen added code to the device model to basically do an fsync()
after every data write if the user in the guest specified that IDE write caching
should be disabled. This works fine, except in the case where you are doing
O_DIRECT writes inside the guest (a la dd if=/dev/zero of=/dev/hdb oflag=direct).
This is because you can get out of ide_write_dma_cb() in the middle of the loop
2013 May 24
0
Re: [Qemu-devel] use O_DIRECT to open disk images for IDE failed under xen-4.1.2 and qemu upstream
On Fri, May 24, 2013 at 02:59:05AM +0000, Gonglei (Arei) wrote:
> Hi,
>
> >
> > On Thu, 23 May 2013, Gonglei (Arei) wrote:
> > > Hi, all
> > >
> > > I use O_DIRECT to open disk images for IDE, but it failed. After debugging, I get
> > the below logs:
> > > [2013-05-22 23:25:46] ide: CMD=c8
> > > [2013-05-22 23:25:46] bmdma:
2004 Dec 01
2
cp --o_direct
Another question.
When my database is running, I do
[oracle@LNCSTRTLDB03 LPTE3]$ cp --o_direct xdb01.dbf /tmp
cp: cannot open `xdb01.dbf' for reading: Permission denied
[oracle@LNCSTRTLDB03 LPTE3]$
When the database is shut down it works.
Is this normal for OCFS? Because with any other filesystem I can just
copy a file at any time. (It's only for testing, I know I can't copy
datafiles and have
2010 Sep 02
3
[patch] O_DIRECT: fix the splitting up of contiguous I/O
Andrew, can you please send this on to Linus and -stable ASAP? It's
causing massive problems for our users.
On Thu, Aug 12, 2010 at 04:50:59PM -0400, Jeff Moyer wrote:
> Hi,
>
> commit c2c6ca4 (direct-io: do not merge logically non-contiguous
> requests) introduced a bug whereby all O_DIRECT I/Os were submitted a
> page at a time to the block layer. The problem is that the