Displaying 8 results from an estimated 8 matches for "600kpps".
2006 Aug 21
5
New hardware
Hi!
I want to upgrade the hardware on my router (iptables, htb, >1000 users).
It is currently
based on an ordinary desktop PC (Intel Prescott P4 3.00 GHz, 1 GB RAM). The
reason for the hardware upgrade is the growing number of users; we are also
planning to increase the upstream link from 100 Mbit/s to 1 Gbit/s.
Iptables rules are now optimized with the ipset tool, and for tc I'm using
hash tables as well. So I
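The two optimizations mentioned might look roughly like the fragment below. This is an illustrative sketch only: the set name, interface, and addresses are made up, not taken from the post. One ipset lookup replaces thousands of per-user iptables rules, and a u32 hash table lets tc classify in O(1) instead of scanning filters linearly.

```shell
# Match all users via a single hash set instead of one rule per user
# (set name "users" and addresses are hypothetical):
ipset create users hash:ip
ipset add users 10.0.0.42
iptables -A FORWARD -m set --match-set users src -j ACCEPT

# tc u32 hash table: hash on the last octet of the destination IP
# so lookup cost stays constant as the user count grows:
tc qdisc add dev eth0 root handle 1: htb
tc filter add dev eth0 parent 1: prio 1 handle 2: protocol ip u32 divisor 256
tc filter add dev eth0 parent 1: protocol ip prio 1 u32 \
    ht 800:: match ip dst 10.0.0.0/24 \
    hashkey mask 0x000000ff at 16 link 2:
```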
2020 Jul 01
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...the host to the VM does not make a big change with the
> patches in small packets scenario (minimum, 64 bytes, about 645
> without the patch, ~625 with batch and batch+buf api). If the packets
> are bigger, I can see a performance increase: with 256 bits, it goes
> from 590kpps to about 600kpps, and in case of 1500 bytes payload it
> gets from 348kpps to 528kpps, so it is clearly an improvement.
>
> * with testpmd and event_idx=on, batching+buf api perform similarly in
> both directions.
>
> All of testpmd tests were performed with no linux bridge, just a
> host'...
2020 Jul 01
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...ig change with the
> patches in small packets scenario (minimum, 64 bytes, about 645
> without the patch, ~625 with batch and batch+buf api). If the packets
> are bigger, I can see a performance increase: with 256 bits,
I think you meant bytes?
> it goes
> from 590kpps to about 600kpps, and in case of 1500 bytes payload it
> gets from 348kpps to 528kpps, so it is clearly an improvement.
>
> * with testpmd and event_idx=on, batching+buf api perform similarly in
> both directions.
>
> All of testpmd tests were performed with no linux bridge, just a
> host's...
2020 Jul 01
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...about 645
>>> without the patch, ~625 with batch and batch+buf api). If the packets
>>> are bigger, I can see a performance increase: with 256 bits,
>>
>> I think you meant bytes?
>>
> Yes, sorry.
>
>>> it goes
>>> from 590kpps to about 600kpps, and in case of 1500 bytes payload it
>>> gets from 348kpps to 528kpps, so it is clearly an improvement.
>>>
>>> * with testpmd and event_idx=on, batching+buf api perform similarly in
>>> both directions.
>>>
>>> All of testpmd tests were perfor...
2020 Jul 09
0
[PATCH RFC v8 02/11] vhost: use batched get_vq_desc version
...the packets
> > >>> are bigger, I can see a performance increase: with 256 bits,
> > >>
> > >> I think you meant bytes?
> > >>
> > > Yes, sorry.
> > >
> > >>> it goes
> > >>> from 590kpps to about 600kpps, and in case of 1500 bytes payload it
> > >>> gets from 348kpps to 528kpps, so it is clearly an improvement.
> > >>>
> > >>> * with testpmd and event_idx=on, batching+buf api perform similarly in
> > >>> both directions.
> > >>...
2020 Jun 11
27
[PATCH RFC v8 00/11] vhost: ring format independence
This still causes corruption issues for people so don't try
to use in production please. Posting to expedite debugging.
This adds infrastructure required for supporting
multiple ring formats.
The idea is as follows: we convert descriptors to an
independent format first, and process that converting to
iov later.
Used ring is similar: we fetch into an independent struct first,
convert that to
2006 Jan 27
23
5,000 concurrent calls system rollout question
Hi,
we are currently considering different options for rolling out a large-scale IP PBX to handle around 3,000+ concurrent calls.
Can this be done with Asterisk? Has it been done before?
I would really appreciate input on this.
Thanks!