similar to: read/write performance through mount point by guestmount

Displaying 20 results from an estimated 700 matches similar to: "read/write performance through mount point by guestmount"

2017 Jul 31
2
Re: read/write performance through mount point by guestmount
On Mon, Jul 31, 2017 at 09:49:00AM +0100, Richard W.M. Jones wrote: > On Mon, Jul 31, 2017 at 12:20:10PM +0800, lampahome wrote: > > I mount the disk.qcow2 on the /home/test/, and create a 50GB file. > > Mount the disk how? OK, subject says using guestmount. I'm surprised the slowdown isn't more than 95%. It's using FUSE which goes through an insane number of layers,
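For context, a minimal sketch of the guestmount setup under discussion (the image name, partition, and mount point are placeholders inferred from the thread, not commands quoted from it):

  # Mount a qcow2 image through FUSE with guestmount
  $ guestmount -a disk.qcow2 -m /dev/sda1 /home/test
  # ... reads and writes under /home/test traverse the FUSE stack ...
  $ guestunmount /home/test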
2017 Jul 31
2
Re: read/write performance through mount point by guestmount
On Mon, Jul 31, 2017 at 06:52:28PM +0800, lampahome wrote: > If I mount through the guestfs library in Python or guestfish, does the same > condition happen? > > I mean the insane number of layers and the performance No. The layers are only present because guestmount uses FUSE. libguestfs itself performs very well if you are careful to use it in the correct way. The architecture of
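To illustrate the non-FUSE path being recommended, a guestfish sketch might look like this (image, partition, and file names are assumptions, not taken from the thread); guestfish drives the same libguestfs API as the Python bindings:

  # Copy a file into the image via the libguestfs appliance -- no FUSE involved
  $ guestfish --rw -a disk.qcow2 -m /dev/sda1 <<'EOF'
  upload /tmp/bigfile /bigfile
  EOF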
2012 May 13
1
Problem compiling package LogicReg - make Error 255
Hello all, I've been using the R package LogicReg, but ended up having to change a certain parameter in the Fortran 77 code (namely, I had to change LGCntrMax to 25 in the file slogic.f). I am using a 64-bit Windows 7 machine. When I tried to compile, I got the following error: C:\Program
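A hedged sketch of the rebuild step after editing the Fortran source (assumes the unpacked package source sits in ./LogicReg and that Rtools is installed and on PATH on Windows):

  # Reinstall the package from the modified source directory
  $ R CMD INSTALL LogicReg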
2017 Jul 27
2
Re: performance between guestfish and qemu-nbd
2017-07-27 20:18 GMT+08:00 Richard W.M. Jones <rjones@redhat.com>: > On Thu, Jul 27, 2017 at 06:34:13PM +0800, lampahome wrote: > > I can mount qcow2 img to nbd devices through guestfish or qemu-nbd > > > > I'm curious about which performance is better? > > They do quite different things, they're not comparable. > > Can you specifically give the
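For reference, the qemu-nbd path the question refers to looks roughly like this (device, partition, and image names are placeholders):

  # Expose the qcow2 image as a kernel block device over NBD
  $ modprobe nbd max_part=8
  $ qemu-nbd --connect=/dev/nbd0 disk.qcow2
  $ mount /dev/nbd0p1 /mnt
  # ... use /mnt, then tear down ...
  $ umount /mnt
  $ qemu-nbd --disconnect /dev/nbd0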
2017 Jul 28
1
Re: performance between guestfish and qemu-nbd
2017-07-28 0:31 GMT+08:00 Richard W.M. Jones <rjones@redhat.com>: > On Fri, Jul 28, 2017 at 12:23:04AM +0800, lampahome wrote: > > 2017-07-27 20:18 GMT+08:00 Richard W.M. Jones <rjones@redhat.com>: > > > > > On Thu, Jul 27, 2017 at 06:34:13PM +0800, lampahome wrote: > > > > I can mount qcow2 img to nbd devices through guestfish or qemu-nbd > >
2017 Jul 31
0
Re: read/write performance through mount point by guestmount
On Mon, Jul 31, 2017 at 12:20:10PM +0800, lampahome wrote: > I mount the disk.qcow2 on the /home/test/, and create a 50GB file. Mount the disk how? Rich. -- Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones Read my programming and virtualization blog: http://rwmj.wordpress.com virt-builder quickly builds VMs from scratch http://libguestfs.org/virt-builder.1.html
2017 Jul 31
0
Re: read/write performance through mount point by guestmount
If I mount through the guestfs library in Python or guestfish, does the same condition happen? I mean the insane number of layers and the performance. 2017-07-31 16:52 GMT+08:00 Richard W.M. Jones <rjones@redhat.com>: > On Mon, Jul 31, 2017 at 09:49:00AM +0100, Richard W.M. Jones wrote: > > On Mon, Jul 31, 2017 at 12:20:10PM +0800, lampahome wrote: > > > I mount the disk.qcow2
2017 Jul 25
2
Re: build from github source
I tried to install as below:

apt-get install libyajl2
apt-get install libyajl2-dev
apt-get install libyajl2-dbg

and rebuilt:

> ./configure
> make

The same errors still happened. Has anyone hit the same issue? 2017-07-25 18:27 GMT+08:00 Cedric Bosdonnat <cbosdonnat@suse.com>: > On Tue, 2017-07-25 at 17:42 +0800, lampahome wrote: > > why is undefined reference to
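On Debian/Ubuntu, one way to avoid chasing -dev packages one at a time is apt's build-dep (a sketch; it assumes deb-src lines are enabled in /etc/apt/sources.list, and it pulls the build dependencies of the distro's own libguestfs package rather than those of the git tree):

  $ sudo apt-get build-dep libguestfs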
2018 Mar 20
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Excellent description, thank you.

With performance.write-behind-trickling-writes ON (default):

## 4k randwrite
# fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=32 --size=256MB --readwrite=randwrite
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process
Jobs: 1
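The option under test is toggled per volume; a sketch (the volume name is a placeholder, and this assumes the Gluster release in use exposes the option through the CLI):

  $ gluster volume set myvol performance.write-behind-trickling-writes off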
2017 Jul 27
2
performance between guestfish and qemu-nbd
I can mount a qcow2 image to NBD devices through guestfish or qemu-nbd. I'm curious which gives better performance?
2017 Jul 25
2
build from github source
I followed http://libguestfs.org/guestfs-building.1.html to build from source:

git clone https://github.com/libguestfs/libguestfs
cd libguestfs
./autogen.sh
make

I installed the libraries I didn't have, and everything was OK until I entered *make*, when I got this error message:

> CCLD libguestfs.la
> ar: `u' modifier ignored since `D' is the default (see `U')
> make[3]:
2014 Mar 23
0
for Chris Mason ( iowatcher graphs)
Hello. Sorry for writing to the btrfs mailing list, but personal mail rejects my message, saying: "<chris.mason@fusionio.com>: host 10.101.1.19[10.101.1.19] said: 554 5.4.6 Hop count exceeded - possible mail loop (in reply to end of DATA command) Final-Recipient: rfc822; chris.mason@fusionio.com Action: failed Status: 5.0.0 Diagnostic-Code: X-Spam-&-Virus-Firewall; host
2012 Mar 25
3
attempt to access beyond end of device and livelock
Hi Dongyang, Yan, When testing BTRFS with RAID 0 metadata on linux-3.3, we see discard ranges exceeding the end of the block device [1], potentially causing data loss; when this occurs, filesystem writeback becomes catatonic due to continual resubmission. The reproducer is quite simple [2]. Hope this proves useful... Thanks, Daniel --- [1] attempt to access beyond end of device ram0: rw=129,
2017 Aug 03
0
Re: read/write performance through mount point by guestmount
2017-07-31 18:57 GMT+08:00 Richard W.M. Jones <rjones@redhat.com>: > On Mon, Jul 31, 2017 at 06:52:28PM +0800, lampahome wrote: > > if I mount through guestfs library in python or guestfish, the same > > condition happenes? > > > > I mean the insane number of layers and the performance > > No. The layers are only present because guestmount uses FUSE. >
2019 Aug 08
3
[Bug] Cannot create file but read/write is ok
I don't know why I can't register on Bugzilla, so I post here. I have two Ubuntu 16.04 machines, A and B; A has Samba version 4.10.6 installed and B has version 4.3.11. I use Samba VFS on machine A. I mount the share of A on B, e.g.: sudo mount -t cifs -o username='ppp',password='admin' //IP/public /home/ppp/test And I found I cannot create a file, e.g. $ touch test/yo But I can read/write
2017 Jul 25
1
Re: build from github source
On Tue, Jul 25, 2017 at 01:30:01PM +0100, Richard W.M. Jones wrote: > On Tue, Jul 25, 2017 at 06:52:56PM +0800, lampahome wrote: > > I try to install like below: > > apt-get install libyajl2 > > apt-get install libyajl2-dev > > apt-get install libyajl2-dbg > > So this is Debian or Ubuntu? Which precise version? FWIW I just built libguestfs from git on Debian 9
2012 Dec 02
3
[PATCH] vhost-blk: Add vhost-blk support v6
vhost-blk is an in-kernel virtio-blk device accelerator. Due to the lack of a proper in-kernel AIO interface, this version converts the guest's I/O requests to bios and uses submit_bio() to submit I/O directly. So this version only supports raw block devices as the guest's disk image, e.g. /dev/sda, /dev/ram0. We can add file-based image support to vhost-blk once we have an in-kernel AIO interface. There are
2012 Dec 02
3
[PATCH] vhost-blk: Add vhost-blk support v6
vhost-blk is an in-kernel virtio-blk device accelerator. Due to the lack of a proper in-kernel AIO interface, this version converts the guest's I/O requests to bios and uses submit_bio() to submit I/O directly. So this version only supports raw block devices as the guest's disk image, e.g. /dev/sda, /dev/ram0. We can add file-based image support to vhost-blk once we have an in-kernel AIO interface. There are
2017 Oct 10
0
small files performance
I just tried setting:

performance.parallel-readdir on
features.cache-invalidation on
features.cache-invalidation-timeout 600
performance.stat-prefetch
performance.cache-invalidation
performance.md-cache-timeout 600
network.inode-lru-limit 50000
performance.cache-invalidation on

and clients could not see their files with ls when accessing via a fuse mount. The files and directories were there,
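Each of those options is set per volume with gluster volume set; for example (the volume name is a placeholder):

  $ gluster volume set myvol performance.parallel-readdir on
  $ gluster volume set myvol features.cache-invalidation on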
2017 Oct 10
2
small files performance
2017-10-10 8:25 GMT+02:00 Karan Sandha <ksandha at redhat.com>: > Hi Gandalf, > > We have multiple tunings to apply for small files which decrease the time for > negative lookups: metadata caching and parallel readdir. Bumping the server > and client event threads will help you increase small-file > performance. > > gluster v set <vol-name> group
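The event-thread bump mentioned above would be applied like this (volume name and thread counts are illustrative placeholders):

  $ gluster volume set <vol-name> client.event-threads 4
  $ gluster volume set <vol-name> server.event-threads 4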