Displaying 4 results from an estimated 4 matches for "bdrv_o_nocach".
2013 Apr 05
0
[PATCHv2 1/2] Xen PV backend (for qemu-upstream-4.2-testing): Move call to bdrv_new from blk_init to blk_connect
...v *blkdev = container_of(xendev, struct XenBlkDev, xendev);
- int index, qflags, info = 0;
+ int info = 0;
/* read xenstore entries */
if (blkdev->params == NULL) {
@@ -603,18 +603,49 @@ static int blk_init(struct XenDevice *xendev)
}
/* read-only ? */
- qflags = BDRV_O_NOCACHE | BDRV_O_CACHE_WB | BDRV_O_NATIVE_AIO;
- if (strcmp(blkdev->mode, "w") == 0) {
- qflags |= BDRV_O_RDWR;
- } else {
+ if (strcmp(blkdev->mode, "w")) {
info |= VDISK_READONLY;
}
-
+
/* cdrom ? */
if (blkdev->devtype &&...
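The lines removed above are the open-flag setup that this patch moves out of blk_init(); below is a minimal reconstruction of that logic, assuming it is simply reassembled at connect time next to the bdrv_new()/bdrv_open() call. The blk_connect() side is not part of this excerpt, so the helper name and anything beyond what is visible in the diff are assumptions.

/* Sketch only: rebuild the open flags removed from blk_init() above.
 * blk_open_flags() is a hypothetical helper; BDRV_O_* and struct XenBlkDev
 * come from the surrounding QEMU sources. */
static int blk_open_flags(struct XenBlkDev *blkdev)
{
    /* O_DIRECT-style access, writeback caching, Linux native AIO */
    int qflags = BDRV_O_NOCACHE | BDRV_O_CACHE_WB | BDRV_O_NATIVE_AIO;

    /* the xenstore "mode" entry is "w" for a writable disk */
    if (strcmp(blkdev->mode, "w") == 0) {
        qflags |= BDRV_O_RDWR;
    }
    return qflags;
}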
2010 Dec 10
4
qemu VS tapdisk2 VS blkback benchmarks
...ults you need to use the new qemu with
linux aio and O_DIRECT as disk backend:
- apply the libxl patches that Anthony sent to the list a little while ago;
- compile qemu with linux aio support; you might need a few hacks to work
around limitations of the glibc/libaio installed on your system (a minimal
O_DIRECT/Linux AIO sketch follows this list);
- add BDRV_O_NOCACHE|BDRV_O_NATIVE_AIO to the flags used by qemu to open
the disks;
- some gntdev fixes to allow aio and O_DIRECT on granted pages, not yet
sent to the list (but soon).
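The sketch referenced in the list above: a minimal, standalone illustration of what "linux aio and O_DIRECT" means at the system-call level, independent of qemu. The file path and sizes are placeholders; it assumes libaio is available (link with -laio) and a 512-byte sector size.

#define _GNU_SOURCE              /* for O_DIRECT */
#include <fcntl.h>
#include <libaio.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "/tmp/aio-test.img";  /* placeholder path */
    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    /* O_DIRECT requires sector-aligned buffers, lengths and offsets */
    void *buf;
    if (posix_memalign(&buf, 512, 4096) != 0) {
        fprintf(stderr, "posix_memalign failed\n"); return 1;
    }

    io_context_t ctx = 0;
    if (io_setup(1, &ctx) < 0) { fprintf(stderr, "io_setup failed\n"); return 1; }

    /* queue one 4 KiB read from offset 0 and wait for it to complete */
    struct iocb cb, *cbs[1] = { &cb };
    io_prep_pread(&cb, fd, buf, 4096, 0);
    if (io_submit(ctx, 1, cbs) != 1) { fprintf(stderr, "io_submit failed\n"); return 1; }

    struct io_event ev;
    if (io_getevents(ctx, 1, 1, &ev, NULL) != 1) {
        fprintf(stderr, "io_getevents failed\n"); return 1;
    }
    printf("read %ld bytes\n", (long)ev.res);

    io_destroy(ctx);
    free(buf);
    close(fd);
    return 0;
}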
TEST HARDWARE
-------------
I am using a not-so-new test box running a 64-bit 2.6.37 dom0 with 752MB of RAM.
The guest is a 64 bit PV...
2013 May 24
0
Re: [Qemu-devel] use O_DIRECT to open disk images for IDE failed under xen-4.1.2 and qemu upstream
...> > > * If O_DIRECT is used the buffer needs to be aligned on a sector
> > > * boundary. Check if this is the case or tell the low-level
> > > * driver that it needs to copy the buffer.
> > > */
> > > if ((bs->open_flags & BDRV_O_NOCACHE)) {
> > > if (!bdrv_qiov_is_aligned(bs, qiov)) { // if the address is
> > 512-byte aligned, it will not meet this condition
> > > type |= QEMU_AIO_MISALIGNED;
> > > #ifdef CONFIG_LINUX_AIO
> > > } else if (s->use_aio) {
> > >...
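In other words, the check quoted above only asks whether every buffer in the request sits on a sector boundary; when BDRV_O_NOCACHE (O_DIRECT) is in effect and the answer is no, the request is flagged misaligned so the low-level driver bounces it through a copy. Below is a standalone sketch of that test, assuming a fixed 512-byte sector size; real QEMU derives the required alignment from the backing device, and the iovec type here is just a stand-in.

/* Sketch of the alignment test the quoted raw-posix code relies on. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define SECTOR_SIZE 512          /* assumed; QEMU queries the real value */

struct iovec_like {              /* stand-in for struct iovec / QEMUIOVector entries */
    void  *base;
    size_t len;
};

static bool buffer_is_aligned(const struct iovec_like *iov, int niov)
{
    for (int i = 0; i < niov; i++) {
        if ((uintptr_t)iov[i].base % SECTOR_SIZE != 0 ||
            iov[i].len % SECTOR_SIZE != 0) {
            return false;        /* misaligned: driver must copy the buffer */
        }
    }
    return true;                 /* every segment starts and ends on a sector boundary */
}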
2012 Aug 10
6
qemu-xen-traditional: NOCACHE or CACHE_WB to open disk images for IDE
Hi list,
Recently I was debugging an L2 guest slow-boot issue in a nested virtualization environment (both the L0 and L1 hypervisors are Xen).
To boot an L2 Linux guest (RHEL6u2), it needs to wait more than 3 minutes after GRUB is loaded. I did some profiling and saw that the guest is doing disk operations through the int13 BIOS procedure.
Even without considering the nested case, I saw there is a bug report that a normal VM