Displaying 20 results from an estimated 10000 matches similar to: "Can open requests be handled asynchronously?"
2005 Sep 07
7
Asynchronous IO
Hi,
I have installed Xen on Linux 2.6.11.10 and I am trying to do asynchronous
direct IO on SAS drives. The application that does the asynchronous direct
IO on the SAS drive is running on Domain 0. The IOPS I get for a 512-byte
IO size is 67, but if I do the same operation on a native Linux 2.6.11.10
kernel, I get 267 IOPS. Can anyone
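For anyone reproducing this kind of test: asynchronous direct IO on Linux is
usually driven through libaio against a file descriptor opened with O_DIRECT.
The sketch below is a minimal single-request example, not the poster's
application; the device path and queue depth are assumptions (compile with
-laio).

#define _GNU_SOURCE
#include <libaio.h>
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
    /* O_DIRECT requires sector-aligned buffers and transfer sizes. */
    int fd = open("/dev/sdb", O_RDONLY | O_DIRECT);  /* hypothetical SAS device */
    if (fd < 0) { perror("open"); return 1; }

    void *buf;
    if (posix_memalign(&buf, 512, 512)) return 1;    /* 512-byte aligned buffer */

    io_context_t ctx = 0;
    if (io_setup(32, &ctx) != 0) return 1;           /* AIO context, depth 32 */

    struct iocb cb, *cbs[1] = { &cb };
    io_prep_pread(&cb, fd, buf, 512, 0);             /* one 512-byte read at offset 0 */
    if (io_submit(ctx, 1, cbs) != 1) return 1;

    struct io_event ev;
    io_getevents(ctx, 1, 1, &ev, NULL);              /* block until it completes */
    printf("read returned %ld\n", (long)ev.res);

    io_destroy(ctx);
    free(buf);
    close(fd);
    return 0;
}

Keeping many such requests in flight (submitting a batch and reaping
completions as they arrive) is what lets a drive reach its native IOPS; a
queue depth of 1 mostly measures per-request latency rather than throughput.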
2016 Oct 01
3
fw:ctdb cannot restart smb ?
<bjq1016> 2016-10-01 09:30:16 wrote:
Hello everyone,
First, I configured an smb cluster of 3 nodes with ctdb. I found that when smb is killed or stopped normally, it cannot be restarted by ctdb, but the public IP reallo
2012 Mar 30
4
[PATCH] virtio_blk: Drop unused request tracking list
Benchmark shows small performance improvement on fusion io device.
Before:
seq-read : io=1,024MB, bw=19,982KB/s, iops=39,964, runt= 52475msec
seq-write: io=1,024MB, bw=20,321KB/s, iops=40,641, runt= 51601msec
rnd-read : io=1,024MB, bw=15,404KB/s, iops=30,808, runt= 68070msec
rnd-write: io=1,024MB, bw=14,776KB/s, iops=29,552, runt= 70963msec
After:
seq-read : io=1,024MB, bw=20,343KB/s,
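Output in the format above comes from fio. A minimal invocation that produces
a comparable sequential-read job is sketched below; the device path is a
placeholder and the block size is inferred from the numbers (roughly 40k IOPS
at roughly 20 MB/s works out to 512-byte requests), not taken from the actual
job file used for the patch.

# fio --name=seq-read --filename=/dev/vdb --direct=1 --rw=read --bs=512 --size=1G --ioengine=libaio --iodepth=32

The rnd-read and rnd-write lines correspond to --rw=randread and
--rw=randwrite with the same parameters.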
2009 Dec 04
2
measuring iops on linux - numbers make sense?
Hello,
When approaching hosting providers for services, the first question
many of them asked us was how many IOPS the disk system should support.
While we stress-tested our service, we recorded between 4000 and 6000
"merged io operations per second" as seen in "iostat -x" and collectd
(varies between the different components of the system, we have a few
such
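For what it's worth, in iostat -x output the merged figures are the rrqm/s
and wrqm/s columns, while the requests actually issued to the device per
second are r/s and w/s; a hosting provider asking for an IOPS number usually
means the issued rate, which can be much lower than the merged rate. For
example:

# iostat -x 1 /dev/sda
(r/s + w/s = issued IOPS; rrqm/s + wrqm/s = requests merged before issue)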
2013 Mar 18
2
Disk iops performance scalability
Hi,
Seeing a drop-off in IOPS when more vCPUs are added:
3.8.2 kernel/xen-4.2.1/single domU/LVM backend/8GB RAM domU/2GB RAM dom0
dom0_max_vcpus=2 dom0_vcpus_pin
domU 8 cores fio result 145k iops
domU 10 cores fio result 99k iops
domU 12 cores fio result 89k iops
domU 14 cores fio result 81k iops
ioping . -c 3
4096 bytes from . (ext4 /dev/xvda1): request=1 time=0.1 ms
4096 bytes
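The fio job behind these numbers isn't shown; a sketch of the kind of
parallel random-read test that exposes this sort of vCPU scaling (every
parameter here is an assumption) would be:

# fio --name=randread --directory=/mnt/test --size=4G --direct=1 --rw=randread --bs=4k --ioengine=libaio --iodepth=32 --numjobs=8 --runtime=60 --time_based --group_reporting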
2012 Jun 01
4
[PATCH v3] virtio_blk: unlock vblk->lock during kick
Holding the vblk->lock across kick causes poor scalability in SMP
guests. If one CPU is doing a virtqueue kick and another CPU touches the
vblk->lock, it will have to spin until the kick completes.
This patch reduces system% CPU utilization in SMP guests that are
running multithreaded I/O-bound workloads. The improvements are small
but grow as IOPS and the number of vCPUs are increased.
Khoa Huynh
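In code terms the change splits the kick into its two halves so that only
the cheap half runs under the lock. A condensed sketch of the pattern (not
the full patch) using the kernel's virtqueue API, with driver boilerplate
omitted:

spin_lock_irqsave(&vblk->lock, flags);
/* ... queue the request into the virtqueue while holding the lock ... */
if (virtqueue_kick_prepare(vblk->vq)) {
        /* Drop the lock across the expensive exit to the host so other
         * CPUs can keep queuing requests while the notify is in flight. */
        spin_unlock_irqrestore(&vblk->lock, flags);
        virtqueue_notify(vblk->vq);
        spin_lock_irqsave(&vblk->lock, flags);
}
spin_unlock_irqrestore(&vblk->lock, flags);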
2014 Jan 24
2
IOPS required by Asterisk for Call Recording
Hi
What are the disk IOPS required for Asterisk call recording?
I am trying to find out the number of disks required in a RAID array to
record 500 calls.
Is there a formula to calculate the IOPS required by Asterisk call
recording? That would help me estimate IOPS at different scales.
If I assume that Asterisk writes data to disk every second for each
call, I will need a disk array to support a minimum
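A rough back-of-the-envelope under that one-write-per-second-per-call
assumption: 500 concurrent recordings is about 500 write IOPS sustained,
before filesystem metadata and journal writes are added on top. Bandwidth is
modest by comparison: a signed-linear 8 kHz, 16-bit mono WAV is about
16 KB/s per call, so 500 calls is roughly 8 MB/s. In practice the flush
interval depends on the recording format and filesystem buffering, so the
one-second figure should be treated as an assumption to verify, not a
property of Asterisk.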
2010 Oct 12
2
Multiple SLOG devices per pool
I have a pool with a single SLOG device rated at Y iops.
If I add a second (non-mirrored) SLOG device also rated at Y iops will
my zpool now theoretically be able to handle 2Y iops? Or close to
that?
Thanks,
Ray
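For reference, a second non-mirrored log device is added with (pool and
device names are placeholders):

# zpool add tank log c1t2d0

ZFS allocates ZIL blocks across all log vdevs, so aggregate synchronous-write
throughput should scale toward 2Y, although a single synchronous stream will
not necessarily see the full doubling.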
2018 Mar 20
0
Gluster very poor performance when copying small files (1x (2+1) = 3, SSD)
Excellent description, thank you.
With performance.write-behind-trickling-writes ON (default):
## 4k randwrite
# fio --randrepeat=1 --ioengine=libaio --gtod_reduce=1 --name=test --filename=test --bs=4k --iodepth=32 --size=256MB --readwrite=randwrite
test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=32
fio-3.1
Starting 1 process
Jobs: 1
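The option referred to above is an ordinary volume option, so the OFF case
can be tested at runtime with something like (volume name is a placeholder):

# gluster volume set myvol performance.write-behind-trickling-writes off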
2018 May 28
4
Re: VM I/O performance drops dramatically during storage migration with drive-mirror
Cc the QEMU Block Layer mailing list (qemu-block@nongnu.org), who might
have more insights here; and wrap long lines.
On Mon, May 28, 2018 at 06:07:51PM +0800, Chunguang Li wrote:
> Hi, everyone.
>
> Recently I am doing some tests on the VM storage+memory migration with
> KVM/QEMU/libvirt. I use the following migrate command through virsh:
> "virsh migrate --live
2019 Jan 05
1
Re: [PATCH nbdkit 0/7] server: Implement NBD_FLAG_CAN_MULTI_CONN.
Here are some more interesting numbers, concentrating on the memory
plugin and RAM disks. All are done using 8 threads and multi-conn, on
a single unloaded machine with 16 cores, using a Unix domain socket.
(1) The memory plugin using the sparse array, as implemented upstream
in 1.9.8:
read: IOPS=103k, BW=401MiB/s (420MB/s)(46.0GiB/120002msec)
write: IOPS=103k, BW=401MiB/s
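For context, a run of this shape can be reproduced by exporting the memory
plugin over a Unix socket and pointing fio's nbd ioengine at it; the exact
job file isn't shown here, so the parameters below are assumptions:

$ nbdkit -U /tmp/nbd.sock memory size=46G
$ fio --name=randrw --ioengine=nbd --uri='nbd+unix:///?socket=/tmp/nbd.sock' --rw=randrw --bs=4k --iodepth=64 --numjobs=8 --runtime=120 --time_based --group_reporting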
2014 Jun 26
7
[PATCH v2 0/2] block: virtio-blk: support multi vq per virtio-blk
Hi,
These patches add support for multiple virtual queues (multi-vq) in one
virtio-blk device, and map each virtual queue (vq) to a blk-mq hardware
queue.
With this approach, both scalability and performance of the virtio-blk
device can be improved.
To verify the improvement, I implemented virtio-blk multi-vq over
qemu's dataplane feature, handling both host notification
from each vq and
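On the guest side, the mapping described above amounts to sizing blk-mq's
hardware-queue count from the number of virtqueues the device provides. A
condensed sketch using the blk-mq tag-set API (field names are the kernel's;
the surrounding driver code and error handling are omitted):

/* one blk-mq hardware queue per negotiated virtqueue */
memset(&vblk->tag_set, 0, sizeof(vblk->tag_set));
vblk->tag_set.ops          = &virtio_mq_ops;   /* the driver's blk_mq_ops */
vblk->tag_set.nr_hw_queues = vblk->num_vqs;    /* == number of virtqueues */
vblk->tag_set.queue_depth  = 128;              /* example depth; derived from the device in practice */
vblk->tag_set.numa_node    = NUMA_NO_NODE;
int err = blk_mq_alloc_tag_set(&vblk->tag_set);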
2012 Jan 18
4
Performance of Maildir vs sdbox/mdbox
Hi Guys,
I've been desperately trying to find some comparative performance
information about the different mailbox formats supported by Dovecot in
order to make an assessment on which format is right for our environment.
This is a brand new build, with customer mailboxes to be migrated in over
the course of 3-4 months.
Some details on our new environment:
* Approximately 1.6M+
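For reference, the mailbox format in Dovecot is chosen per mail_location,
e.g. (paths are placeholders):

mail_location = maildir:~/Maildir
mail_location = sdbox:~/sdbox
mail_location = mdbox:~/mdbox

One operational difference worth noting: mdbox does not delete mail at
expunge time, so it needs periodic doveadm purge runs to reclaim space.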
2009 Jun 10
6
Asymmetric mirroring
Hello everyone,
I'm wondering if the following makes sense:
To configure a system for high IOPS, I want to have a zpool of 15K RPM SAS
drives. For high IOPS, I believe it is best to let ZFS stripe them, instead
of doing a raidz1 across them. Therefore, I would like to mirror the drives
for reliability.
Now, I'm wondering if I can get away with using a large capacity 7200
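The striped-mirror layout described here is expressed as multiple mirror
vdevs in a single pool (device names are placeholders):

# zpool create fastpool mirror sas0 sas1 mirror sas2 sas3

ZFS stripes across the mirror vdevs automatically, and each mirror vdev can
serve random reads from either side, whereas a raidz1 vdev delivers roughly
the random-read IOPS of a single drive.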
2017 Sep 08
4
GlusterFS as virtual machine storage
Gandalf, SIGKILL (killall -9 glusterfsd) did not stop I/O after a few
minutes. SIGTERM, on the other hand, causes a crash, but this time it is
not a read-only remount, just around 10 IOPS at most and 2 IOPS on average.
-ps
On Fri, Sep 8, 2017 at 1:56 PM, Diego Remolina <dijuremo at gmail.com> wrote:
> I currently only have a Windows 2012 R2 server VM in testing on top of
> the gluster storage,
2014 May 30
4
[PATCH] block: virtio_blk: don't hold spin lock during world switch
First, it isn't necessary to hold vblk->vq_lock
while notifying the hypervisor about queued I/O.
Second, virtqueue_notify() causes a world switch, which
may take a long time on some hypervisors (such as qemu-arm),
so it isn't good to hold the lock and block other vCPUs.
On an arm64 quad-core VM (qemu-kvm), the patch increases I/O
performance a lot with VIRTIO_RING_F_EVENT_IDX