I have some Xen systems running Xen-4.1.1, dom0 on a patched linux-2.6.38 (it's gentoo's xen-sources), and domUs running linux-3.0.4 (vanilla sources from kernel.org). Block devices are phy on LVM2 volumes on DRBD-8.3.9 devices.

Not immediately after boot, but after some I/O load on the disks, I start seeing these in the domUs:

blkfront: barrier: empty write xvdb1 op failed
blkfront: xvdb1: barrier or flush: disabled

The systems appear to keep working correctly anyway. I first noticed this in domUs with linux-3.0.3 and got the same with 3.0.4, with or without cleancache enabled, mounting either an ext3 or an ext4 filesystem. The only domU I have running Ubuntu with its kernel 2.6.24-28-xen seems to be immune.

Doing something like "dd if=/dev/zero of=/dev/xvdb1" I wasn't able to trigger those messages, even trying several different 'bs=XXX' values. Running a single mkfs.ext3 or mkfs.ext4 *will* trigger those messages every time.

Is this something I should be worried about? Should I switch back to an older kernel or change some other setting?

--
Luca Lesinigo
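For reference, a minimal sketch of the two tests described above (xvdb1 as in this setup; run it only against a scratch volume, since mkfs destroys its contents):

    # raw sequential writes: these never triggered the messages here,
    # regardless of block size
    dd if=/dev/zero of=/dev/xvdb1 bs=4k count=100000
    dd if=/dev/zero of=/dev/xvdb1 bs=1M count=400

    # mkfs ends by flushing the device, so it exercises the
    # barrier/flush path and triggered the messages every time
    mkfs.ext3 /dev/xvdb1
    dmesg | tail   # look for 'blkfront: barrier: empty write ... op failed'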
After upgrading the domU from linux-3.0.4 to 3.0.11, I still get these nasty messages in dmesg when mounting the EXT3 filesystem during boot:

blkfront: barrier: empty write xvda1 op failed
blkfront: xvda1: barrier or flush: disabled
EXT3-fs (xvda1): using internal journal

This 64bit domU is running under Xen-4.1.1 with a 2.6.38 dom0 kernel (gentoo's xen-sources, which is basically a backport of the XenLinux patches). The system appears to keep working correctly anyway, but I suspect it slows down a little too much under heavy I/O (for example, copying many gigabytes from one ext3 filesystem to another).

Working with raw blocks (eg. swap volumes, or running dd over a not-mounted ext3 volume) does not trigger that message; running a single mkfs.ext3 or mkfs.ext4 does make it appear. I usually have cleancache enabled over tmem; disabling it doesn't seem to make any difference. Ubuntu domUs using their stock 2.6.24-??-xen kernel seem to be immune.

Is this something I should be worried about? What would you suggest to try to correct it? (eg. is it something related to the domU kernel, dom0 kernel, hypervisor, domU config, whatever?) Thanks.

--
Luca Lesinigo
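A quick way to check whether the warnings recur under load or are only emitted once per device at mount time (plain grep over the kernel log; nothing Xen-specific is assumed):

    dmesg | grep -E 'blkfront|barrier'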
Dom0 blkback in the upstream kernel only has barrier support starting from 3.2; set barrier=0 in fstab so the error is not shown with kernel 3.0 or 3.1.
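A sketch of the suggested fstab change (device and mount point are placeholders for your own setup; note that barrier=0 also gives up the integrity guarantee barriers provide, as discussed in the follow-up below):

    # /etc/fstab -- disable write barriers on the ext3 root
    /dev/xvda1   /   ext3   defaults,barrier=0   0 1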
On 01 Dec 2011, at 11:41, Fantu wrote:
> Dom0 blkback in the upstream kernel only has barrier support starting from 3.2; set barrier=0 in fstab so the error is not shown with kernel 3.0 or 3.1.

Thanks for the tip. It pointed me in the right direction to find out something more. This problem seems to be connected with the issues discussed in this thread: http://old-list-archives.xen.org/archives/html/xen-devel/2011-09/msg00385.html

If I understood correctly, the whole issue is:
- my dom0 (2.6.38.x derived from Suse) does not advertise/support flush-cache; it does advertise feature-barrier="1" (checked with xenstore-ls) but does not actually have the WRITE_BARRIER functionality
- the domU (3.0.x) sees feature-barrier and tries to use barriers, but then fails because dom0 doesn't actually implement them
- turning barriers off in ext3 makes the log line disappear, unless something else triggers the use of barriers (eg. mkfs.ext3/.ext4, or anything doing WRITE_ODIRECT)
- but then you're running with no barrier or similar functionality, which exposes the system to corruption during hard crashes or power outages

If I update my dom0 kernels to newer ones, they should advertise feature-flush-cache="1" (should be in linux >= 3.0 afaik) and my domUs will stop trying to use barriers and use flush-cache instead. Problem solved there.

But what about older domU systems, like Ubuntu 10.xx linux-xen kernels, or HVM Windows guests with GPLPV drivers? Will they work correctly on a newer dom0 with flush-cache and without barriers? Do I understand correctly that there isn't any dom0 kernel available which actually implements both barriers (for older domU systems) and flush-cache (for newer ones)?

What if I go back to an older dom0 kernel which implements barriers? (should be kernels before 2.6.37 afaik) Would that make every 'generation' of domU systems happily use barriers and prevent the above-mentioned corruption risks?

Thanks.
--
Luca Lesinigo
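For the record, a sketch of the xenstore check mentioned above (domain id 7 and virtual device 51712, i.e. xvda, are example values; substitute your own guest's id and device):

    # from dom0: list everything blkback advertises for the guest's disk
    xenstore-ls /local/domain/0/backend/vbd/7/51712

    # or read the two feature flags directly
    xenstore-read /local/domain/0/backend/vbd/7/51712/feature-barrier
    xenstore-read /local/domain/0/backend/vbd/7/51712/feature-flush-cache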