Displaying 20 results from an estimated 8000 matches similar to: "[PATCH] tapdisk:check O_DIRECT on hole file for performance"
2012 May 23
0
A little confusion between "tapdisk" and "tapdisk-ioemu"
Hi everyone,
From what I've learned from this link <http://wiki.xensource.com/xenwiki/blktap>,
I know that when I specify tap:aio I'm actually using the blktap driver and
ultimately "tapdisk" to write to the raw image file.
But the truth is that even when I delete /usr/sbin/tapdisk, the domU can still
boot on my machine with the tap:aio protocol.
After a deeper look into the scene,
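For reference, a tap:aio disk line in an xm-style guest config looks like the following (the image path here is a made-up example):
disk = [ 'tap:aio:/var/lib/xen/images/guest.img,xvda,w' ]
With a spec like this, blktapctrl/tapdisk in dom0 is expected to service the image, rather than the loopback path used by plain file: disks.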
2008 Jul 08
0
[PATCH] stubdom: do not build tapdisk as it is not supposed to build and we don't need it
stubdom: do not build tapdisk as it is not supposed to build and we
don't need it
Signed-off-by: Samuel Thibault <samuel.thibault@eu.citrix.com>
diff -r 4024164e7572 stubdom/Makefile
--- a/stubdom/Makefile Tue Jul 08 16:11:49 2008 +0100
+++ b/stubdom/Makefile Tue Jul 08 17:12:38 2008 +0100
@@ -190,7 +190,7 @@
[ -f ioemu/config-host.mak ] || \
( cd ioemu ; \
2008 Apr 22
0
[PATCH] blktap: Automatically start tapdisk-ioemu on demand
When a domain wants to use a tap:ioemu disk but has no device model,
start a tapdisk-ioemu instance as a provider. Also, move the creation and
removal of communication pipes to xend so that qemu-dm doesn't need the
unwanted SIGHUP handler anymore.
Signed-off-by: Kevin Wolf <kwolf@suse.de>
2007 Mar 07
1
ioemu in config file and O_DIRECT option
In the config file, 'type=ioemu' is added to the vif line. Does it make any
difference if we don't add it?
I did not notice any difference in either para- or full
virtualization.
The same goes for exporting the disk to the guest dom. Is it just a placeholder,
or something else?
Second thing: I am running RHEL3 as an HVM DomU on xen-3.0.4. Everything
works fine, but I had a problem to
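For reference, the option in question usually appears like this in an xm config (MAC and bridge are made-up examples):
vif = [ 'type=ioemu, mac=00:16:3e:00:00:01, bridge=xenbr0' ]
As far as I know, type=ioemu selects the qemu-dm emulated NIC for HVM guests, which would explain why it makes no visible difference for paravirtualized ones.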
2015 Mar 12
2
Tapdisk processes being left behind when hvm domu's migrate/shutdown
Hi All,
I'm seeing tapdisk processes not being terminated after an HVM VM is shut down or migrated away. I don't see this problem with Linux paravirt domUs, just Windows HVM ones.
xl.cfg:
name = 'nathanwin'
memory = 4096
vcpus = 2
disk = [ 'file:/mnt/gtc_disk_p1/nathanwin/drive_c,hda,w' ]
vif = [ 'mac=00:16:3D:01:03:E0,bridge=vlan208' ]
builder =
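A quick way to check for such leftover processes once every guest is gone is something like:
ps ax | grep [t]apdisk        # the [t] keeps grep from matching itself
Any tapdisk lines still listed after all domUs have been shut down would be the orphans described above.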
2010 Jun 22
2
domU can not start in Xen 4.0.1-rc3-pre using tapdisk
The domU is using pygrub to boot its own 2.6.18.8-xen kernel. It can be
booted successfully under 2.6.18.8-xen dom0 and xen 3.3.1.
However, after upgrading dom0 to 2.6.32.15 and Xen 4.0.1-rc3-pre, the domU
cannot boot with tapdisk. I wonder whether it is something related to the blktap
driver.
*When using tap:aio:* PATH/disk.img in the domU disk configuration, the boot
process hangs at a prompt:
XENBUS:
2012 Oct 17
0
vhd format support failed on suse11 x86_64 sp2
Hi all,
I ran into a problem with the vhd format when installing Xen 4.2.0 from source
code on suse11_x86_64_sp2.
Steps to reproduce:
1) Install a full SUSE 11 SP2 on the host PC
2) Boot from the Xen kernel menu
3) Download the Xen 4.2.0 source code from http://xen.org/download/index_4.2.0.html
4) Download bin86-0.16.19.tar.gz cmake-2.8.9.tar.gz Dev86src-0.16.19.tar.gz gettext-0.18.1.1.tar.gz
2010 Feb 03
0
[PATCH] ocfs2: Add parenthesis to wrap the check for O_DIRECT.
Add parenthesis to wrap the check for O_DIRECT.
Signed-off-by: Tao Ma <tao.ma at oracle.com>
---
fs/ocfs2/file.c | 4 ++--
1 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/fs/ocfs2/file.c b/fs/ocfs2/file.c
index 06ccf6a..b2ca980 100644
--- a/fs/ocfs2/file.c
+++ b/fs/ocfs2/file.c
@@ -2013,8 +2013,8 @@ out_dio:
/* buffered aio wouldn't have proper lock coverage today
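The hunk above is cut off. As a hypothetical illustration (not the actual ocfs2 code) of why flag checks get wrapped in parentheses, note that == binds tighter than & in C:
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    int flags = O_DIRECT;
    /* Parsed as flags & (O_DIRECT == O_DIRECT), i.e. flags & 1. */
    if (flags & O_DIRECT == O_DIRECT)
        printf("unparenthesized check matched\n");
    /* Tests the O_DIRECT bit, as intended. */
    if ((flags & O_DIRECT) == O_DIRECT)
        printf("parenthesized check matched\n");
    return 0;
}
On Linux/x86 O_DIRECT is 040000, so its low bit is clear and only the second printf fires.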
2007 Nov 28
1
RFC: add tapdisk link in xen-common for blktap
Hi,
While trying to get blktap-based domUs running with the current Debian Xen
3.1.2, I came across a missing link: the tapdisk binary should be added
to GLOBAL_SCRIPTS in xen-common/scripts/Makefile. Otherwise blktap-based
domUs don't find their disks. blktapctrl forks tapdisk processes.
Peter
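Presumably the fix is a one-liner; a sketch, assuming GLOBAL_SCRIPTS is an ordinary make variable listing the binaries to install:
GLOBAL_SCRIPTS += tapdisk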
2011 Sep 21
1
[PATCH] libxl: attempt to cleanup tapdisk processes on disk backend destroy
# HG changeset patch
# User Ian Campbell <ian.campbell@citrix.com>
# Date 1316609964 -3600
# Node ID b43fd821d1aebc8671e684bfc285cda7a6002ff1
# Parent 206afa070919e3fe0b13a03f870ca2da44ab604a
libxl: attempt to cleanup tapdisk processes on disk backend destroy.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
diff -r 206afa070919 -r b43fd821d1ae
2012 Jan 17
0
Attaching GDB to Tapdisk for Debugging
Hi all,
Has anyone used gdb to debug tapdisk problems? I've implemented a
custom tapdisk interface and I'm trying to debug a few kernel paging
crashes.
-Jack
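For the userspace side, plain gdb attachment should work (assuming a single running tapdisk instance; kernel paging crashes themselves would need a kernel debugger):
gdb -p $(pidof tapdisk)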
2015 Mar 12
0
Tapdisk processes being left behind when hvm domu's migrate/shutdown
On Thu, Mar 12, 2015 at 6:11 PM, Nathan March <nathan at gt.net> wrote:
> Hi All,
>
>
>
> I'm seeing tapdisk processes not being terminated after an HVM VM is shut down
> or migrated away. I don't see this problem with Linux paravirt domUs, just
> Windows HVM ones.
Interesting -- actually you get the same effect just starting and
shutting down a guest. It
2011 Apr 06
1
Xen page sharing
Hi Sahil,
I think the reason you cannot get the page shared is due to the gref you got.
A gref refers to a page allocated from the domU; in my understanding it should not be
0, that is, a gref of 0 cannot be shared, which is why I skip nominating gref 0.
The gref is nominated to Xen and later used to find a corresponding MFN, so it will not always be the same.
2005 Oct 19
2
rsync and o_direct
Hi
We currently use rsync for various jobs at our company. We are now
looking at using it to create an offsite synchronised copy of an Oracle 10g
RAC archive logs area. The source area is on an Oracle OCFS filesystem.
The OCFS filesystem requires all reads/writes to be performed with the
O_DIRECT option, thus bypassing the cache. Oracle provides an updated
coreutils package which includes the
2006 Oct 15
3
open(2) O_DIRECT on smbmount gives EINVAL
Does samba 3.0.23c not support the use of O_DIRECT? When I try to open an
smbmount'd file using O_DIRECT, I get EINVAL. I am able to use O_DIRECT with no
problems on a block device and nfs mounts, so I know the kernel supports it.
samba: 3.0.23c
kernel: 2.6.9-42.0.3.EL (32-bit)
I am using the code below for my test; smb fails on open(2).
#include <fcntl.h>
#include
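The listing is cut off above; a minimal O_DIRECT read test of the same shape might look like this (a sketch, not the poster's actual code — path and alignment are assumptions):
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    void *buf;
    int fd;
    ssize_t n;

    /* O_DIRECT transfers usually need sector-aligned buffers. */
    if (posix_memalign(&buf, 512, 4096)) {
        perror("posix_memalign");
        return 1;
    }
    fd = open(argc > 1 ? argv[1] : "testfile", O_RDONLY | O_DIRECT);
    if (fd < 0) {
        /* EINVAL here is what the poster reports on the smb mount. */
        fprintf(stderr, "open: %s\n", strerror(errno));
        return 1;
    }
    n = read(fd, buf, 4096);
    printf("read returned %zd\n", n);
    close(fd);
    free(buf);
    return 0;
}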
2005 Oct 25
0
Fwd: rsync and o_direct
Guys, I posted this last week and have had no response so far. I'm just posting
it again in case anyone missed it. I could really do with an answer, as it's
delaying the rollout of a new project I'm working on.
Thanks, Simon
--
Hi
We currently use rsync for various jobs at our company. We are now
looking at using it to create an offsite synchronised copy of an Oracle 10g
RAC archive logs area. The
2013 May 24
0
Re: [Qemu-devel] use O_DIRECT to open disk images for IDE failed under xen-4.1.2 and qemu upstream
On Fri, May 24, 2013 at 02:59:05AM +0000, Gonglei (Arei) wrote:
> Hi,
>
> >
> > On Thu, 23 May 2013, Gonglei (Arei) wrote:
> > > Hi, all
> > >
> > > I use O_DIRECT to open disk images for IDE, but it failed. After debugging, I got
> > the logs below:
> > > [2013-05-22 23:25:46] ide: CMD=c8
> > > [2013-05-22 23:25:46] bmdma:
2009 Mar 11
1
Enterprise Application with O_DIRECT access
Hello everyone,
I am learning about and evaluating GlusterFS for film/video editing facilities.
Some major real-time film/video editing applications use
O_DIRECT file access for their video/audio data files.
The GlusterFS (GLFS) client, via the FUSE mechanism, disallows opening files with the
O_DIRECT flag.
I made a little sample program that reads a file with the O_DIRECT flag, and
tried to open the files on GLFS volumes.
It
2010 Oct 09
2
[PATCH 1/2] Ocfs2: Add a mount option "coherency=*" for O_DIRECT writes.
Currently, the default behavior of O_DIRECT writes allows
concurrent writing among nodes with no cluster coherency guaranteed
(no EX locks are taken); this hurts buffered reads on other nodes,
which read stale data from cache.
The new mount option introduces a choice between two different
behaviors for O_DIRECT writes:
* coherency=full, the default value, will disallow
concurrent
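For context, a mount option like this would be selected at mount time, e.g. (device and mountpoint are made-up examples):
mount -t ocfs2 -o coherency=full /dev/sdb1 /mnt/ocfs2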