Displaying 20 results from an estimated 27 matches for "thoughput".
Did you mean: throughput?
2014 Jun 26
7
[PATCH v2 0/2] block: virtio-blk: support multi vq per virtio-blk
...ndread, iodepth=64, bs=4K, jobs=N) is run inside the VM to
verify the improvement.
I just created a small quad-core VM and ran fio inside the VM, and
num_queues of the virtio-blk device is set to 2, but it looks like the
improvement is still obvious.
1), about scalability
- without multi-vq feature
-- jobs=2, throughput: 145K iops
-- jobs=4, throughput: 100K iops
- with multi-vq feature
-- jobs=2, throughput: 193K iops
-- jobs=4, throughput: 202K iops
2), about throughput
- without multi-vq feature
-- throughput: 145K iops
- with multi-vq feature
-- throughput: 202K iops
So in my test, even for a quad-core VM, if...
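For reference, a fio job along the lines of the parameters quoted above (randread, iodepth=64, bs=4K, jobs=N) might look like the sketch below; the /dev/vdb target, the libaio engine, and the runtime are assumptions, not taken from the original posting.

  # hypothetical invocation matching the quoted parameters (N=4 shown)
  fio --name=vblk-randread --filename=/dev/vdb --direct=1 --ioengine=libaio \
      --rw=randread --bs=4k --iodepth=64 --numjobs=4 --group_reporting \
      --time_based --runtime=60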
2014 Jun 26
0
[PATCH v2 0/2] block: virtio-blk: support multi vq per virtio-blk
...nside the VM to
> verify the improvement.
>
> I just created a small quad-core VM and ran fio inside the VM, and
> num_queues of the virtio-blk device is set to 2, but it looks like the
> improvement is still obvious.
>
> 1), about scalability
> - without multi-vq feature
> -- jobs=2, throughput: 145K iops
> -- jobs=4, throughput: 100K iops
> - with multi-vq feature
> -- jobs=2, throughput: 193K iops
> -- jobs=4, throughput: 202K iops
>
> 2), about throughput
> - without multi-vq feature
> -- throughput: 145K iops
> - with multi-vq feature
> -- throughput: 202K...
2014 Jun 13
6
[RFC PATCH 0/2] block: virtio-blk: support multi vq per virtio-blk
...ndread, iodepth=64, bs=4K, jobs=N) is run inside the VM to
verify the improvement.
I just created a small quad-core VM and ran fio inside the VM, and
num_queues of the virtio-blk device is set to 2, but it looks like the
improvement is still obvious.
1), about scalability
- without multi-vq feature
-- jobs=2, throughput: 145K iops
-- jobs=4, throughput: 100K iops
- with multi-vq feature
-- jobs=2, throughput: 186K iops
-- jobs=4, throughput: 199K iops
2), about throughput
- without multi-vq feature
-- top throughput: 145K iops
- with multi-vq feature
-- top throughput: 199K iops
So even for one quad-core VM,...
2014 Jul 01
2
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
...> I just created a small quad-core VM and ran fio inside the VM, and
>> num_queues of the virtio-blk device is set to 2, but it looks like the
>> improvement is still obvious. The host is a 2-socket, 8-core (16-thread)
>> server.
>>
>> 1), about scalability
>> - jobs = 2, throughput: +33%
>> - jobs = 4, throughput: +100%
>>
>> 2), about top throughput: +39%
>>
>> So in my test, even for a quad-core VM, if the virtqueue number
>> is increased from 1 to 2, both scalability and performance can
>> be improved a lot.
>>
>> In abo...
2007 Jan 24
2
Throughput
Hi,
I am after a feel for the throughput capabilities of TC and iptables in
comparison to dedicated hardware. I have heard talk of 1Gb+ throughput
with minimal performance impact using around 50 TC rules and 100+ iptables
rules.
Is anyone here running large-throughput / large configurations, and if
so, what sort of figures are you seeing?
Regards
Dan
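For a sense of what such a ruleset looks like, a minimal sketch follows; the interface name, rates, and the specific rule are assumptions, not from the original post.

  # hypothetical shaping class plus one filtering rule of the kind described above
  tc qdisc add dev eth0 root handle 1: htb default 10
  tc class add dev eth0 parent 1: classid 1:10 htb rate 900mbit ceil 1gbit
  iptables -A FORWARD -p tcp --dport 80 -j ACCEPT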
2014 Jun 20
3
[PATCH v1 0/2] block: virtio-blk: support multi vq per virtio-blk
...iodepth=64, bs=4K, jobs=N) is run inside the VM to
verify the improvement.
I just created a small quad-core VM and ran fio inside the VM, and
num_queues of the virtio-blk device is set to 2, but it looks like the
improvement is still obvious.
1), about scalability
- without multi-vq feature
-- jobs=2, throughput: 145K iops
-- jobs=4, throughput: 100K iops
- with multi-vq feature
-- jobs=2, throughput: 186K iops
-- jobs=4, throughput: 199K iops
2), about throughput
- without multi-vq feature
-- top throughput: 145K iops
- with multi-vq feature
-- top throughput: 199K iops
So in...
2014 Jun 26
0
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
...y the improvement.
>
> I just created a small quad-core VM and ran fio inside the VM, and
> num_queues of the virtio-blk device is set to 2, but it looks like the
> improvement is still obvious. The host is a 2-socket, 8-core (16-thread)
> server.
>
> 1), about scalability
> - jobs = 2, throughput: +33%
> - jobs = 4, throughput: +100%
>
> 2), about top throughput: +39%
>
> So in my test, even for a quad-core VM, if the virtqueue number
> is increased from 1 to 2, both scalability and performance can
> be improved a lot.
>
> In the above qemu implementation of virtio-blk...
2014 Jul 01
0
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
...ll quad-core VM and ran fio inside the VM, and
>>> num_queues of the virtio-blk device is set to 2, but it looks like the
>>> improvement is still obvious. The host is a 2-socket, 8-core (16-thread)
>>> server.
>>>
>>> 1), about scalability
>>> - jobs = 2, throughput: +33%
>>> - jobs = 4, throughput: +100%
>>>
>>> 2), about top throughput: +39%
>>>
>>> So in my test, even for a quad-core VM, if the virtqueue number
>>> is increased from 1 to 2, both scalability and performance can
>>> be improved a l...
2014 Jun 26
6
[PATCH v3 0/2] block: virtio-blk: support multi vq per virtio-blk
...=4K, jobs=N) is run inside the VM to
verify the improvement.
I just created a small quad-core VM and ran fio inside the VM, and
num_queues of the virtio-blk device is set to 2, but it looks like the
improvement is still obvious. The host is a 2-socket, 8-core (16-thread)
server.
1), about scalability
- jobs = 2, throughput: +33%
- jobs = 4, throughput: +100%
2), about top throughput: +39%
So in my test, even for a quad-core VM, if the virtqueue number
is increased from 1 to 2, both scalability and performance can
be improved a lot.
In the above qemu implementation of the virtio-blk-mq device, only one
IOthread handles requ...
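As a rough sketch of how a guest like the one above is given a multi-queue virtio-blk device: in present-day QEMU the queue count is the num-queues property of virtio-blk-pci; the disk path, and whether the 2014 patch series used this exact property name, are assumptions.

  # hypothetical command line; num-queues as in current QEMU, not necessarily the 2014 patches
  qemu-system-x86_64 -enable-kvm -smp 4 -m 4096 \
      -drive if=none,id=d0,file=/path/to/test.img,format=raw,cache=none,aio=native \
      -device virtio-blk-pci,drive=d0,num-queues=2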
2017 Sep 12
2
SMB data transfer performance on AD mode
On Tue, 2017-09-12 at 09:11 -0700, Jeremy Allison via samba wrote:
> On Tue, Sep 12, 2017 at 12:52:29PM -0300, Dante Colo via samba wrote:
> > Hi Everyone !
> >
> > I note that all of the Samba AD servers I maintain are not so fast in terms of data transfer; more specifically, none of them goes over 40 MB/s, one in particular which I'm trying to find out why doesn't go
2009 Apr 23
1
Unexpectedly poor 10-disk RAID-Z2 performance?
Hail, Caesar.
I've got a 10-disk RAID-Z2 backed by the 1.5 TB Seagate drives
everyone's so fond of. They've all received a firmware upgrade (the
sane one, not the one that caused your drives to brick if the internal
event log hit the wrong number on boot).
They're attached to an ARC-1280ML, a reasonably good SATA controller,
which has 1 GB of ECC DDR2 for
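For reference, a 10-disk raidz2 pool of the shape described above would typically be created along these lines; the pool name and device names are assumptions, since the original post does not show the command.

  # hypothetical pool creation matching the 10-disk RAID-Z2 layout described above
  zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0 \
      c1t5d0 c1t6d0 c1t7d0 c2t0d0 c2t1d0
  zpool status tank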
2006 Dec 07
1
-- Called 12127773456@OOH323 Segmentation fault (core dumped)
...clients will be placed in.
;Default - default
context=default
;Sets rtptimeout for all clients, unless overridden
;Default - 60 seconds
;rtptimeout=60 ; Terminate call if 60 seconds of no RTP activity
; when we're not on hold
;Type of Service
;Default - none (lowdelay, throughput, reliability, mincost, none)
;tos=none
;amaflags = default
;The account code used by default for all clients.
;accountcode=h3230101
;The codecs to be used for all clients. Only ulaw and gsm supported as of now.
;Default - ulaw
; ONLY ulaw, gsm, g729 and g7231 supported as of now
allow=all ;No...
2008 Nov 14
0
No subject
...sider it sequential because it improved
fairness in some sequential workloads (the CIC_SEEKY heuristic is used
also to determine the idle_window length in [bc]fq_arm_slice_timer()).
Anyway, we're dealing with heuristics, and they tend to favor some
workloads over others. If recovering this throughput loss is more
important than a transient unfairness due to short idling windows assigned
to sequential processes when they start, I've no problems in switching
the CIC_SEEKY logic to consider a process seeky when it starts.
Thank you for testing and for pointing out this issue, we missed it
in...
2003 Jun 05
0
Summary of hanging HP ProLiant
..."reasonable" level. (CPU at this hang time would be about 90-95% idle,
and load would go from about 40 to 100.)
We then changed the 5i config to present each disk as a RAID 0 device
(no processing on the card now) and used Linux RAID to do the mirroring
and RAID 5. We now get about 100-105 MB/s throughput to the SW RAID 5,
and no more apparent hangs.
If anyone is using a SmartArray device, you may want to experiment with
SW RAID instead.
Dan Liebster
Adecco
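A sketch of the software-RAID layer described above; the cciss device names, partition layout, and array sizes are assumptions, since the original summary does not give the exact commands.

  # hypothetical md setup over single-disk logical drives exported by the SmartArray 5i
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/cciss/c0d0p1 /dev/cciss/c0d1p1
  mdadm --create /dev/md1 --level=5 --raid-devices=4 \
      /dev/cciss/c0d2p1 /dev/cciss/c0d3p1 /dev/cciss/c0d4p1 /dev/cciss/c0d5p1
  cat /proc/mdstat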