search for: blockio

Displaying 20 results from an estimated 21 matches for "blockio".

2009 Sep 02
2
a room for blkio-cgroup in struct page_cgroup
Hi Kamezawa-san, As you wrote before (http://lkml.org/lkml/2009/7/22/65) > To be honest, what I expected in these days for people of blockio > cgroup is like following for getting room for themselves. <<snip>> > --- mmotm-2.6.31-Jul16.orig/include/linux/page_cgroup.h > +++ mmotm-2.6.31-Jul16/include/linux/page_cgroup.h > @@ -13,7 +13,7 @@ > struct page_cgroup { > unsigned long flags; > struc...
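The approach under discussion is to avoid adding a new field to struct page_cgroup by packing a small blkio-cgroup id into spare bits of the existing flags word. A minimal, self-contained C sketch of that bit-packing idea follows; the field widths and names are illustrative, not the actual kernel patch.

/* Illustrative only: pack a small cgroup id into the high bits of an
 * existing flags word so no extra field is needed.  Widths are made up. */
#include <stdio.h>

#define PCG_FLAG_BITS  16UL                    /* low bits keep the flags */
#define PCG_ID_SHIFT   PCG_FLAG_BITS
#define PCG_ID_MASK    (~0UL << PCG_ID_SHIFT)  /* high bits hold the id   */

static unsigned long pcg_set_id(unsigned long flags, unsigned long id)
{
        return (flags & ~PCG_ID_MASK) | (id << PCG_ID_SHIFT);
}

static unsigned long pcg_get_id(unsigned long flags)
{
        return (flags & PCG_ID_MASK) >> PCG_ID_SHIFT;
}

int main(void)
{
        unsigned long flags = 0x5;          /* pretend some flags are set */

        flags = pcg_set_id(flags, 42);      /* store blkio-cgroup id 42   */
        printf("flags=%#lx id=%lu\n", flags, pcg_get_id(flags));
        return 0;
}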
2023 Mar 30
0
About libvirt domain dump state and persistent state
...re double used: 2021-09-28T16:00:03.816682107-07:00 stderr F I0928 23:00:03.816558 1 cephvolume.go:73] attach disk &{XMLName:{Space: Local:} Device:disk RawIO: SGIO: Snapshot: Model: Driver:0xc00073b420 Auth:0xc000f49e40 Source:0xc000159860 BackingStore:<nil> Geometry:<nil> BlockIO:<nil> Mirror:<nil> Target:0xc000a58b40 IOTune:0xc00019b970 ReadOnly:<nil> Shareable:<nil> Transient:<nil> Serial:pvc-33003998-6624-4ac9-a923-d94f9401abdf WWN: Vendor: Product: Encryption:<nil> Boot:<nil> Alias:<nil> Address:0xc001420b40} error: virEr...
2020 Feb 11
2
[PATCH v2] lib: add support for disks with 4096 bytes sector size
...ackends support this parameter (currently only the +libvirt and direct backends do). + =back" }; { defaults with @@ -558,6 +568,10 @@ Disks with the E<lt>readonly/E<gt> flag are skipped. =back +If present, the value of C<logical_block_size> attribute of E<lt>blockio/E<gt> +tag in libvirt XML will be passed as C<blocksize> parameter to +C<guestfs_add_drive_opts>. + The other optional parameters are passed directly through to C<guestfs_add_drive_opts>." }; @@ -597,6 +611,10 @@ The optional C<readonlydisk> parameter controls...
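On the caller side, the block size taken from the <blockio> element would end up in a guestfs_add_drive_opts call. A hedged C sketch of that call, assuming the optional blocksize argument these patches add (the GUESTFS_ADD_DRIVE_OPTS_BLOCKSIZE name follows libguestfs's usual optional-argument convention and should be checked against the installed guestfs.h):

/* Sketch: open a disk image with a 4096-byte block size via the optional
 * "blocksize" argument proposed in this patch series. */
#include <stdio.h>
#include <stdlib.h>
#include <guestfs.h>

int main(int argc, char *argv[])
{
    guestfs_h *g;

    if (argc != 2) {
        fprintf(stderr, "usage: %s disk.img\n", argv[0]);
        exit(EXIT_FAILURE);
    }

    g = guestfs_create();
    if (g == NULL)
        exit(EXIT_FAILURE);

    if (guestfs_add_drive_opts(g, argv[1],
                               GUESTFS_ADD_DRIVE_OPTS_READONLY, 1,
                               GUESTFS_ADD_DRIVE_OPTS_BLOCKSIZE, 4096,
                               -1) == -1)
        exit(EXIT_FAILURE);

    if (guestfs_launch(g) == -1)
        exit(EXIT_FAILURE);

    guestfs_close(g);
    return 0;
}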
2009 Jul 21
2
Best Practices for PV Disk IO?
...'s compiled a list of places to look to reduce Disk IO Latency for Xen PV DomUs. I've gotten reasonably acceptable performance from my setup (Dom0 as an iSCSI initiator, providing phy volumes to DomUs), at about 45MB/sec writes, and 80MB/sec reads (this is to an IET target running in blockio mode). As always, reducing latency for small disk operations would be nice, but I'm not sweating it. I'm just wondering if anyone's experienced similar behavior, and if they've found ways to improve performance. Cheers cc -- Chris Chen <muffaleta@gmail.com> ...
2020 Feb 10
1
[PATCH] lib: allow to specify physical/logical block size for disks
...ter (currently only the +libvirt and direct backends do). + +The default value is 0. + =back" }; { defaults with @@ -558,6 +580,10 @@ Disks with the E<lt>readonly/E<gt> flag are skipped. =back +If present, the value of C<logical_block_size> attribute of E<lt>blockio/E<gt> +tag in libvirt XML will be passed as C<blocksize> parameter to +C<guestfs_add_drive_opts>. + The other optional parameters are passed directly through to C<guestfs_add_drive_opts>." }; @@ -597,6 +623,10 @@ The optional C<readonlydisk> parameter controls...
2008 Feb 14
2
RE: [Iscsitarget-devel] Performance Question
>Yes, jumbo frames, no irq coalescence, blockio and see if >you can get Backup Exec to use large io request sizes when >reading and writing the data. The larger the better. Ok, Jumbo's enabled on the switch and media server. For the sake of our sanity jumping back and forth, I am trying to enable jumbo's on the bonded pair in the...
2014 Jan 19
3
Yet another disk I/O performance issue
...t and guest are using deadline as I/O scheduler. The VM uses an ext4 filesystem, while the image is saved on an ext3 disk. I mounted the host and guest filesystems specifying nodiratime and noatime options. Even if I convert the image to raw format nothing changes. I didn't mess with iotune nor blockio. Is there something that I overlooked or any other suggestion? Thanks in advance for your help. Matteo -- A refund for defective software might be nice, except it would bankrupt the entire software industry in the first year. Andrew S. Tanenbaum, Computer Networks, 2003, Introduction, page...
2020 Feb 10
0
Re: [RFC] lib: allow to specify physical/logical block size for disks
...aunch-direct.c b/lib/launch-direct.c > index ae6ca093b..518bd24fc 100644 > --- a/lib/launch-direct.c > +++ b/lib/launch-direct.c > @@ -273,6 +273,26 @@ add_drive_standard_params (guestfs_h *g, struct backend_direct_data *data, > return -1; > } > > +/** > + * Add the blockio elements of the C<-device> parameter. > + */ > +static int > +add_device_blockio_params (guestfs_h *g, struct qemuopts *qopts, > + struct drive *drv) > +{ > + if (drv->pblocksize) > + append_list_format ("physical_block_size=%d",...
2013 Mar 26
3
iSCSI connection corrupts Xen block devices.
...h VMs being run on the same physical dom0 instance. The problem occurs regardless of the type of device backend used for the domU block device exported by SCST. The behavior has been verified with blktap, image over loop and qdisk. The problem also occurs when either FILEIO or BLOCKIO is used for the SCST virtual disk. As I said at the outset, exposing a device to blkback twice may be something it was never designed to do. That being said, using VMs for this type of testing certainly makes sense and the behavior is unexpected. Let us know if there are any questions o...
2009 Oct 09
1
Possible to run iscsi-target and initiator on same server?
I am trying to install Oracle RAC in a two node cluster for testing purposes, so performance is not something that concerns me. I just want to go through the process all the way to creating a database. I have all the prerequisites except the shared storage and thought I'd give this a try. I'm running: - CentOS 5.3 kernel 2.6.18-164.el5 - iscsitarget-1.4.18-1 -
2020 Feb 07
8
[RFC] lib: allow to specify physical/logical block size for disks
...(from guestfs_config). */ diff --git a/lib/launch-direct.c b/lib/launch-direct.c index ae6ca093b..518bd24fc 100644 --- a/lib/launch-direct.c +++ b/lib/launch-direct.c @@ -273,6 +273,26 @@ add_drive_standard_params (guestfs_h *g, struct backend_direct_data *data, return -1; } +/** + * Add the blockio elements of the C<-device> parameter. + */ +static int +add_device_blockio_params (guestfs_h *g, struct qemuopts *qopts, + struct drive *drv) +{ + if (drv->pblocksize) + append_list_format ("physical_block_size=%d", drv->pblocksize); + if (drv-&g...
2018 Aug 09
0
Re: Windows Guest I/O performance issues (already using virtio) (Matt Schumacher)
...in windows VMs? > 3. Does my virtualized CPU model make sense? I defined Haswell-noTSX-IBRS and libvirt added the features. > 4. Which kernel branch offers the best stability and performance? > 5. Are there performance gains in using UEFI booting the windows guest and defining “<blockio logical_block_size='4096' physical_block_size='4096'/>”? Perhaps better block size consistency through to the zvol? > > >Here is my setup: > >48 core Haswell CPU >192G Ram >Linux 4.14.61 or 4.9.114 (testing both) >ZFS file system on optane SSD drive or ZF...
2018 Aug 08
1
Windows Guest I/O performance issues (already using virtio)
...e difference in windows VMs? 3. Does my virtualized CPU model make sense? I defined Haswell-noTSX-IBRS and libvirt added the features. 4. Which kernel branch offers the best stability and performance? 5. Are there performance gains in using UEFI booting the windows guest and defining “<blockio logical_block_size='4096' physical_block_size='4096'/>”? Perhaps better block size consistency through to the zvol? Here is my setup: 48 core Haswell CPU 192G Ram Linux 4.14.61 or 4.9.114 (testing both) ZFS file system on optane SSD drive or ZFS file system on dumb HBA with 8...
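For reference, the <blockio> element asked about above sits inside a <disk> definition. A minimal C sketch using the public libvirt API to attach such a disk to a running guest; the domain name, zvol path and target device below are placeholders, not values from this thread:

/* Sketch: attach a disk whose <blockio> advertises 4096-byte logical and
 * physical sectors.  Domain name, source path and target dev are
 * illustrative placeholders. */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

static const char *disk_xml =
    "<disk type='block' device='disk'>"
    "  <driver name='qemu' type='raw' cache='none' io='native'/>"
    "  <source dev='/dev/zvol/tank/win-disk0'/>"
    "  <blockio logical_block_size='4096' physical_block_size='4096'/>"
    "  <target dev='vdb' bus='virtio'/>"
    "</disk>";

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virDomainPtr dom;

    if (conn == NULL)
        exit(EXIT_FAILURE);

    dom = virDomainLookupByName(conn, "windows-guest");   /* placeholder */
    if (dom == NULL) {
        virConnectClose(conn);
        exit(EXIT_FAILURE);
    }

    /* Apply to the live domain and persist it in the config. */
    if (virDomainAttachDeviceFlags(dom, disk_xml,
                                   VIR_DOMAIN_AFFECT_LIVE |
                                   VIR_DOMAIN_AFFECT_CONFIG) < 0)
        fprintf(stderr, "attach failed\n");

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}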
2015 Sep 24
0
[PATCH] com32/disk: add UEFI support
...ic EFI_STATUS find_all_block_devs(EFI_HANDLE **bdevs, + unsigned long *bdevsno) +{ + EFI_STATUS status; + unsigned long len = 0; + + *bdevsno = 0; + + status = uefi_call_wrapper(BS->LocateHandle, 5, ByProtocol, + &BlockIoProtocol, NULL, &len, NULL); + if (EFI_ERROR(status) && status != EFI_BUFFER_TOO_SMALL) { + printf("%s: failed to locate BlockIo device handles\n", __func__); + return status; + } + + *bdevs = malloc(len); + if (!*bdevs) { + status = EFI_OUT_OF_...
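Once the handle enumeration shown above succeeds, each handle can be queried for its EFI_BLOCK_IO interface to read the media geometry. A small gnu-efi-style sketch of that follow-up step (not part of the patch; it assumes the usual gnu-efi globals such as BS and BlockIoProtocol have been set up, e.g. via InitializeLib):

/* Illustrative follow-up (not from the patch): open EFI_BLOCK_IO on one of
 * the located handles and print its media geometry. */
#include <efi.h>
#include <efilib.h>

static EFI_STATUS print_block_dev(EFI_HANDLE handle)
{
    EFI_BLOCK_IO *bio;
    EFI_STATUS status;

    status = uefi_call_wrapper(BS->HandleProtocol, 3, handle,
                               &BlockIoProtocol, (void **)&bio);
    if (EFI_ERROR(status))
        return status;

    Print(L"block size %d, last LBA 0x%lx, media present %d\n",
          (INT32)bio->Media->BlockSize,
          (UINT64)bio->Media->LastBlock,
          (INT32)bio->Media->MediaPresent);
    return EFI_SUCCESS;
}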
2018 Sep 26
0
Re: OpenStack output workflow
...t User-Agent: Mutt/1.5.21 (2010-09-15) Turns out this is fairly easy, although quite obscure. Just use ‘systemd-run --pipe’ to run the virt-v2v command in a cgroup. The ‘--pipe’ option ensures it is still connected to stdin/stdout/stderr (but see below). $ systemd-run --user --pipe \ -p BlockIOWriteBandwidth="/dev/sda2 1K" \ virt-v2v -i disk /var/tmp/fedora-27.img -o local -os /var/tmp Running as unit: run-u4429.service [ 0.0] Opening the source -i disk /var/tmp/fedora-27.img [ 0.0] Creating an overlay to protect the source from being modified etc. See system...
2018 Sep 26
2
Re: OpenStack output workflow
[Adding Tomas Golembiovsky] On Wed, Sep 26, 2018 at 12:11 PM Richard W.M. Jones <rjones@redhat.com> wrote: > > Rather than jumping to a solution, can you explain what the problem > is that you're trying to solve? > > You need to do <X>, you tried virt-v2v, it doesn't do <X>, etc. > Well, those are mainly IMS-related challenges. We're working on
2018 Sep 26
3
Re: OpenStack output workflow
...n > Turns out this is fairly easy, although quite obscure. > > Just use ‘systemd-run --pipe’ to run the virt-v2v command in a cgroup. > The ‘--pipe’ option ensures it is still connected to stdin/stdout/ > stderr (but see below). > > $ systemd-run --user --pipe \ > -p BlockIOWriteBandwidth="/dev/sda2 1K" \ > virt-v2v -i disk /var/tmp/fedora-27.img -o local -os /var/tmp > > Running as unit: run-u4429.service > [ 0.0] Opening the source -i disk /var/tmp/fedora-27.img > [ 0.0] Creating an overlay to protect the source from being mod...
2015 May 22
2
libvirt with gcc5 Test failing
...... OK 374) QEMU XML-2-ARGV disk-ide-drive-split ... OK 375) QEMU XML-2-ARGV disk-ide-wwn ... OK 376) QEMU XML-2-ARGV disk-geometry ... OK 377) QEMU XML-2-ARGV disk-blockio ... OK 378) QEMU XML-2-ARGV video-device-pciaddr-default ... OK 379) QEMU XML-2-ARGV video-vga-nodevice ... OK 380) QEMU XML-2-ARGV video-vga-device ... OK 381) QEMU XML-2-ARGV...