Displaying 20 results from an estimated 58 matches for "qps".
2009 Apr 15
3
MySQL On ZFS Performance (fsync) Problem?
...l
I ran some tests of MySQL's insert performance on ZFS, and hit a big
performance problem; *I'm not sure what the cause is*.
Environment
2 Intel X5560 (8 cores), 12 GB RAM, 7 SLC SSDs (Intel).
A Java client runs 8 threads concurrently inserting into one InnoDB table:
*~600 qps when sync_binlog=1 & innodb_flush_log_at_trx_commit=1
~600 qps when sync_binlog=10 & innodb_flush_log_at_trx_commit=1
~600 qps when sync_binlog=0 & innodb_flush_log_at_trx_commit=1
~900 qps when sync_binlog=1 & innodb_flush_log_at_trx_commit=0*
~5500 qps when sync_binlog=10 & i...
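The two durability knobs being compared correspond to standard MySQL options; a minimal my.cnf sketch of the fully durable configuration tested above (values illustrative):

```ini
# Hypothetical my.cnf fragment mirroring the settings compared above.
[mysqld]
# 1 = fsync the binary log on every commit; N = every Nth commit;
# 0 = leave binlog flushing to the operating system.
sync_binlog = 1
# 1 = flush the InnoDB redo log to disk at every transaction commit
# (full durability); 0 = write and flush roughly once per second.
innodb_flush_log_at_trx_commit = 1
```

The figures quoted appear to suggest that innodb_flush_log_at_trx_commit=1 dominates throughput on this setup, since qps barely moves with sync_binlog while it is set.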
2019 Apr 15
4
[RFC 0/3] VirtIO RDMA
...
> > Open issues/Todo list:
> > List is huge; this is only a starting point for the project.
> > Anyway, here is one example of an item on the list:
> > - Multi VirtQ: Every QP has two rings and every CQ has one. This means that
> > in order to support, for example, 32K QPs, we will need 64K VirtQs. Not sure
> > that this is reasonable, so one option is to have one for all and
> > multiplex the traffic on it. This is not a good approach, as by design it
> > introduces potential starvation. Another approach would be multi
> > queues and rou...
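The design tension quoted above (dedicated virtqueues per QP versus one shared, multiplexed queue) can be sketched with a toy model; everything here is invented for illustration and is not the VirtIO RDMA code:

```python
# Toy sketch: multiplexing many QPs onto a small, fixed set of queues
# by hashing the QP number, instead of allocating two virtqueues per QP.
NUM_SHARED_QUEUES = 16

def queue_for_qp(qp_num: int) -> int:
    """Map a QP number onto one of the shared queues."""
    return qp_num % NUM_SHARED_QUEUES

# 32K QPs now share 16 queues instead of needing 64K virtqueues,
# at the cost of head-of-line blocking (the starvation risk noted above).
queues_used = {queue_for_qp(qp) for qp in range(32 * 1024)}
print(len(queues_used))  # 16
```

This shows only the resource-count side of the trade-off; it is the queue contention, not the mapping itself, that creates the starvation concern raised in the thread.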
2019 Apr 22
1
[Qemu-devel] [RFC 0/3] VirtIO RDMA
...> > > > List is huge; this is only a starting point for the project.
> > > > Anyway, here is one example of an item on the list:
> > > > - Multi VirtQ: Every QP has two rings and every CQ has one. This means that
> > > > in order to support, for example, 32K QPs, we will need 64K VirtQs. Not sure
> > > > that this is reasonable, so one option is to have one for all and
> > > > multiplex the traffic on it. This is not a good approach, as by design it
> > > > introduces potential starvation. Another approach would be...
2019 Apr 19
0
[Qemu-devel] [RFC 0/3] VirtIO RDMA
...>> Open issues/Todo list:
>>> List is huge; this is only a starting point for the project.
>>> Anyway, here is one example of an item on the list:
>>> - Multi VirtQ: Every QP has two rings and every CQ has one. This means that
>>> in order to support, for example, 32K QPs, we will need 64K VirtQs. Not sure
>>> that this is reasonable, so one option is to have one for all and
>>> multiplex the traffic on it. This is not a good approach, as by design it
>>> introduces potential starvation. Another approach would be multi
>>>...
2016 Apr 05
0
[PATCH] VSOCK: Detach QP check should filter out non matching QPs.
The check in vmci_transport_peer_detach_cb should only allow a
detach when the qp handle of the transport matches the one in
the detach message.
Testing: Before this change, a detach from a peer on a different
socket would cause an active stream socket to register a detach.
Reviewed-by: George Zhang <georgezhang at vmware.com>
Signed-off-by: Jorgen Hansen <jhansen at vmware.com>
---
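The idea behind the check can be sketched as a small toy model; the types and names below are invented, not the actual vmci_transport code:

```python
# Toy model of the patch's check: a detach notification should only
# take effect when the QP handle in the detach message matches the
# handle stored on the transport.
from typing import NamedTuple

class QpHandle(NamedTuple):
    context: int
    resource: int

def should_detach(transport_qp: QpHandle, msg_qp: QpHandle) -> bool:
    """Filter out detach messages for non-matching QPs."""
    return transport_qp == msg_qp

# A detach for a peer on a different socket (different handle) is ignored.
print(should_detach(QpHandle(1, 2), QpHandle(1, 2)))  # True
print(should_detach(QpHandle(1, 2), QpHandle(3, 4)))  # False
```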
2019 Apr 22
2
[Qemu-devel] [RFC 0/3] VirtIO RDMA
...> > > > List is huge; this is only a starting point for the project.
> > > > Anyway, here is one example of an item on the list:
> > > > - Multi VirtQ: Every QP has two rings and every CQ has one. This means that
> > > > in order to support, for example, 32K QPs, we will need 64K VirtQs. Not sure
> > > > that this is reasonable, so one option is to have one for all and
> > > > multiplex the traffic on it. This is not a good approach, as by design it
> > > > introduces potential starvation. Another approach would be...
2019 Apr 11
4
[RFC 0/3] VirtIO RDMA
...
> > Open issues/Todo list:
> > List is huge; this is only a starting point for the project.
> > Anyway, here is one example of an item on the list:
> > - Multi VirtQ: Every QP has two rings and every CQ has one. This means that
> > in order to support, for example, 32K QPs, we will need 64K VirtQs. Not sure
> > that this is reasonable, so one option is to have one for all and
> > multiplex the traffic on it. This is not a good approach, as by design it
> > introduces potential starvation. Another approach would be multi
> > queues and rou...
2017 Nov 21
2
[PATCH] VSOCK: Don't call vsock_stream_has_data in atomic context
...b a mutex for any
queue pair access. In the detach callback for the vmci vsock
transport, we call vsock_stream_has_data while holding a spinlock,
and vsock_stream_has_data will access a queue pair.
To avoid this, we can simply omit calling vsock_stream_has_data
for host-side queue pairs, since the QPs are empty by default
when the guest has detached.
This bug affects users of VMware Workstation using kernel version
4.4 and later.
Testing: Ran vsock tests between guest and host, and verified that
with this change, the host isn't calling vsock_stream_has_data
during detach. Ran mixedTest be...
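A toy model of the fix's logic (names invented; not the kernel code): in the detach callback, which runs in atomic context, report no data for host-side queue pairs instead of touching the mutex-protected queue pair:

```python
# Toy model: the callback may not sleep, so it must not take the mutex
# guarding queue-pair access. For host-side QPs the answer is known to
# be "empty" once the guest detaches, so we can skip the probe entirely.
def detach_cb_has_data(is_host_side: bool, queued_bytes: int) -> int:
    if is_host_side:
        return 0  # safe: no queue-pair access while holding the spinlock
    return queued_bytes  # guest side: value as read from the QP

print(detach_cb_has_data(True, 4096))   # 0
print(detach_cb_has_data(False, 4096))  # 4096
```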
2007 Nov 01
0
Samba and Dual Core Utilization
...and the Linux 2.6.20
kernel, I have observed something interesting about Samba and the
utilization of multiple CPU cores.
On a single dual-core 3 GHz Xeon machine, when I am hammering the Samba
server with requests from many client machines, a variety of CPU
utilization tools like Gkrellm and Qps show that CPU utilization is
around 30+ percent on one core and only 1 to 2 percent on the other
core. Qps shows that a single user's smbd process sometimes hops from
one core to another (which would seem generally undesirable) but all
smbd processes mostly spend their time on CPU 0.
When...
2013 Nov 06
1
Frequent RRL false negatives when using multiple server processes on Linux
...i,
Please advise how to use Response Rate Limiting on a server which has
multiple NSD server processes (nsd.conf server section has server-count
> 1).
We have a problem with NSD v3.2.16 repeatedly unblocking and blocking
again a single source which is flooding positive queries at a ~steady
700 qps rate. rrl-ratelimit setting is the default 200 qps. The
unblock-block happens multiple times a minute. This is causing false
negatives: NSD bursts out 200 responses on every unblock:
Nov 6 10:11:18 dnstest1 nsd[6881]: ratelimit block demo.funet.fi. type
positive target 193.166.5.0/24 query 193...
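One plausible reading of the report, with server-count > 1, is that each NSD process applies the rrl-ratelimit independently, so a flooding source is throttled per process rather than globally. A toy model under that assumption (the formula is invented for illustration; the numbers follow the report above):

```python
# Toy model: N independent server processes, each applying the same
# per-second response limit to its own share of a flooding source.
def responses_per_second(flood_qps: int, limit: int, server_count: int) -> int:
    """Upper bound on responses when each process rate-limits alone."""
    per_process_load = flood_qps / server_count
    per_process_sent = min(per_process_load, limit)
    return int(per_process_sent * server_count)

# A single process caps the source at the intended 200 qps...
print(responses_per_second(700, 200, 1))   # 200
# ...but with 4 processes, each sees only 175 qps (below its own limit),
# so the full 700 qps flood gets answered.
print(responses_per_second(700, 200, 4))   # 700
```

This is only a model of why per-process limits can undercount a shared source; it does not reproduce NSD's actual RRL bucketing.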
2019 Apr 15
0
[Qemu-devel] [RFC 0/3] VirtIO RDMA
...huge; this is only a starting point for the project.
> > > > > > Anyway, here is one example of an item on the list:
> > > > > > - Multi VirtQ: Every QP has two rings and every CQ has one. This means that
> > > > > > in order to support, for example, 32K QPs, we will need 64K VirtQs. Not sure
> > > > > > that this is reasonable, so one option is to have one for all and
> > > > > > multiplex the traffic on it. This is not a good approach, as by design it
> > > > > > introduces potential starvation...
2017 Nov 24
1
[PATCH v2] VSOCK: Don't call vsock_stream_has_data in atomic context
...b a mutex for any
queue pair access. In the detach callback for the vmci vsock
transport, we call vsock_stream_has_data while holding a spinlock,
and vsock_stream_has_data will access a queue pair.
To avoid this, we can simply omit calling vsock_stream_has_data
for host-side queue pairs, since the QPs are empty by default
when the guest has detached.
This bug affects users of VMware Workstation using kernel version
4.4 and later.
Testing: Ran vsock tests between guest and host, and verified that
with this change, the host isn't calling vsock_stream_has_data
during detach. Ran mixedTest be...
2019 Apr 30
0
[Qemu-devel] [RFC 0/3] VirtIO RDMA
...> List is huge; this is only a starting point for the project.
> > > > > Anyway, here is one example of an item on the list:
> > > > > - Multi VirtQ: Every QP has two rings and every CQ has one. This means that
> > > > > in order to support, for example, 32K QPs, we will need 64K VirtQs. Not sure
> > > > > that this is reasonable, so one option is to have one for all and
> > > > > multiplex the traffic on it. This is not a good approach, as by design it
> > > > > introduces potential starvation. Another ap...
2012 Jul 25
5
problem with machine "freezing" for short periods
I have two HP dc7800 convertible minitowers that are exhibiting the
following issue: every 5-10 minutes, they will "freeze" for about 30
seconds, and then pick right back up again. During the freeze, it seems
that nothing at all happens on the system; the clock doesn't even advance
(it just picks up again with the next second, and that 30-or-so seconds
are lost).
I've tried both