Displaying 19 results from an estimated 19 matches for "loadtests".
2010 Oct 07
5
Per User Quotas with LDAP on Dovecot 1.x
Alle,
We're running Dovecot V1.0.7 on RHEL5.5, using maildir. We would like
to use per-user quotas with an OpenLDAP (V2.3.43) backend.
We have set up a default quota in /etc/dovecot.conf:
quota = maildir:storage=10240:ignore=Trash
And we have the following userdb config in /etc/dovecot.conf:
userdb ldap {
args = /etc/dovecot-ldap.conf
}
and the following user_attrs defined in
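For reference, a minimal sketch of the kind of per-user override that could
go in /etc/dovecot-ldap.conf. The LDAP attribute name (mailQuota) is
hypothetical, and with the plain attribute=field mapping used by Dovecot
1.0.x the attribute value would need to carry the full quota string, e.g.
"maildir:storage=20480":

# hypothetical attribute holding the complete quota string
user_attrs = homeDirectory=home, uidNumber=uid, gidNumber=gid, mailQuota=quota

Users whose LDAP entry lacks the attribute should then fall back to the
default quota configured in dovecot.conf.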
2020 Jan 13
2
Load balancing Icecast - aggregated logs
Hi
I have a potential project for which my client requests that we load
balance the streaming service.
Of course, the Icecast server scales very well.
- http://icecast.org/loadtest/
However, the client requests high-availability and, due to the scale of the
potential project, we would like to load balance the service over two or
more servers.
I think the load balancing aspect is not my
2016 Jun 10
1
icecast relay server performance testing
I’m going to try to run multiple curl processes. The libuv code that I wrote is not of very good quality (even though it’s really simple).
thanks!
—zahar
> On Jun 10, 2016, at 2:43 PM, Alejandro <cdgraff at gmail.com> wrote:
>
> In the past, I had used this method:
>
> http://icecast.org/loadtest/1/ <http://icecast.org/loadtest/1/>
>
> But to be honest,
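For a rough idea of what "multiple curl processes" amounts to, here is a
small Python sketch that opens many concurrent listener connections and reads
from the stream; the mount URL and listener count are placeholders, not
values from this thread:

import threading
import urllib.request

STREAM_URL = "http://localhost:8000/stream.ogg"  # placeholder mountpoint
LISTENERS = 200                                  # placeholder listener count

def listen():
    # Behave roughly like a listener: open the stream and read ~1 MB of audio.
    with urllib.request.urlopen(STREAM_URL) as resp:
        remaining = 1_000_000
        while remaining > 0:
            chunk = resp.read(4096)
            if not chunk:
                break
            remaining -= len(chunk)

threads = [threading.Thread(target=listen) for _ in range(LISTENERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()

As pointed out later in the thread, a test like this drives all connections
from one or two IPs, which is not quite the same as real listeners.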
2020 Jan 13
3
Load balancing Icecast - aggregated logs
Good afternoon Philipp
Many thanks for your reply.
Sorry for not being clear.
I think the problem I have would be in the implementation.
How do I run two versions of Icecast on two servers, with load balancing
between the two (perhaps using RR-DNS), but present my client with one
unified log file for the audience statistics? Should the two servers write
their logs via a network file share to
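One way to get a single file for the statistics tool (a sketch only, not what
the list recommended): let each Icecast instance keep its own access.log and
merge them by timestamp afterwards. This assumes the default NCSA-style
access log, where the timestamp sits in square brackets, and a placeholder
per-server directory layout:

import glob
import re
from datetime import datetime

# Sort key: the [dd/Mon/yyyy:HH:MM:SS +zzzz] timestamp of an access log line.
TS_RE = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2}) [+-]\d{4}\]")

def ts(line):
    m = TS_RE.search(line)
    return datetime.strptime(m.group(1), "%d/%b/%Y:%H:%M:%S") if m else datetime.min

lines = []
for path in glob.glob("logs/*/access.log"):   # one directory per server (placeholder)
    with open(path) as f:
        lines.extend(f)

with open("access-merged.log", "w") as out:
    out.writelines(sorted(lines, key=ts))

This ignores timezone offsets, which is fine as long as both servers log in
the same zone.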
2016 Jun 10
2
icecast relay server performance testing
I wrote a test application which is based on libuv. iptables is disabled.
I’m running the test application from two other machines.
Do you have any suggestions for testing?
thanks!
—zahar
> On Jun 10, 2016, at 2:38 PM, Alejandro <cdgraff at gmail.com> wrote:
>
> Zahar, how are you testing? with some CURL stress test? BTW, IPTABLES is enabled?
>
> I was running most time
2005 Nov 15
0
New icecast load test reports
Excellent results Oddsock,
I'm surprised by the amount of free memory that is available when
using Icecast instead of scast in high-demand situations, impressive!
Regards.
-----Original Message-----
From: icecast-bounces@xiph.org [mailto:icecast-bounces@xiph.org] On
Behalf Of oddsock
Sent: 15 November 2005 4:52 AM
To: icecast@xiph.org
Subject: [Icecast] New icecast load test reports
2005 Nov 22
0
Server question - if you had ...
Yup,
It's all about the bandwidth!
Why not check out the load tests that were done by oddsock?
http://icecast.org/loadtest.php
Regards
-----Original Message-----
From: icecast-bounces@xiph.org [mailto:icecast-bounces@xiph.org] On
Behalf Of Steven Clift
Sent: 21 November 2005 4:03 PM
To: icecast@xiph.org
Subject: [Icecast] Server question - if you had ...
If you had $3,000 for a new Icecast2
2020 Jan 13
0
Load balancing Icecast - aggregated logs
Good afternoon,
On Mon, 2020-01-13 at 13:30 +0000, Chip wrote:
> Hi
>
> I have a potential project for which my client requests that we load
> balance the streaming service.
>
> Of course, the Icecast server scales very well.
>
> - http://icecast.org/loadtest/
>
> However, the client requests high-availability and, due to the scale of the
> potential
2006 Dec 13
2
Server Requirements
Hi Chip,
>hi david
>> Hi,
>>
>> I'm looking to stream a large number of concurrent connections and want to
>> use Icecast, probably on a CentOS base. The bandwidth requirements are over
>> 60 Mbps before any overheads. I'm looking at using some Dell SC1425 machines
>> but am not sure how many I'd need; I'd appreciate hearing your experiences
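For a rough sense of scale, assuming a 128 kbps stream (a bitrate not stated
in the thread): 60 Mbps is about 60,000 kbps, and 60,000 / 128 is roughly 470
concurrent listeners; at 64 kbps it would be roughly twice that. This is why
earlier replies in these threads stress that it's all about the bandwidth
rather than the server hardware.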
2017 Apr 20
1
[PATCH net-next v2 2/5] virtio-net: transmit napi
>> static int xmit_skb(struct send_queue *sq, struct sk_buff *skb)
>> {
>> struct virtio_net_hdr_mrg_rxbuf *hdr;
>> @@ -1130,9 +1172,11 @@ static netdev_tx_t start_xmit(struct sk_buff *skb,
>> struct net_device *dev)
>> int err;
>> struct netdev_queue *txq = netdev_get_tx_queue(dev, qnum);
>> bool kick =
2016 Feb 28
5
Add support for in-process profile merging in profile-runtime
Justin, looks like there is some misunderstanding in my email. I want to
clarify it here first:
1) I am not proposing changing the default profile dumping model as used
today. The online merging is totally optional;
2) the on-line profile merging is not doing conversion from raw to indexed
format. It does very simple raw-to-raw merging using existing runtime APIs.
3) the change to existing profile
2020 Jan 13
0
Load balancing Icecast - aggregated logs
Good afternoon,
On Mon, 2020-01-13 at 13:41 +0000, Chip wrote:
> Good afternoon Philipp
>
> Many thanks for your reply.
> Sorry for not being clear.
>
> I think the problem I have would be in the implementation.
Ok.
> How do I run two versions of Icecast on two servers, with load balancing
> between the two (perhaps using RR-DNS),
Generally I recommend RR-DNS as it
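The RR-DNS part boils down to publishing one hostname with one A record per
Icecast server, for example in BIND zone-file syntax (names and addresses are
placeholders):

stream.example.com.   300   IN   A   192.0.2.10
stream.example.com.   300   IN   A   192.0.2.11

A short TTL makes it easier to pull a failed server out of rotation, but
RR-DNS by itself does not notice failures, which is where the
high-availability requirement mentioned earlier still needs separate
handling.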
2016 Feb 28
0
Add support for in-process profile merging in profile-runtime
> On Feb 28, 2016, at 12:46 AM, Xinliang David Li via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> Justin, looks like there is some misunderstanding in my email. I want to clarify it here first:
>
> 1) I am not proposing changing the default profile dumping model as used today. The online merging is totally optional;
> 2) the on-line profile merging is not doing
2016 Jun 10
0
icecast relay server performance testing
In the past, I had used this method:
http://icecast.org/loadtest/1/
But to be honest, nothing compares with a real use case; we found many
issues when the connections arrive from many different IPs, while the stress
test opens them all from a small set of IPs. Still, this test case is used by
many others and gives good results.
2016-06-10 2:40 GMT-03:00 Popov, Zahar <zahar.popov1978 at
2016 Feb 29
2
Add support for in-process profile merging in profile-runtime
+ 1 to Sean's suggestion of using a wrapper script to call profdata merge.
David, does that work for your use case?
Some inline comments ---
> On Feb 28, 2016, at 10:45 AM, Mehdi Amini via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
>>
>> On Feb 28, 2016, at 12:46 AM, Xinliang David Li via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>>
>>
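For context, the wrapper-script idea amounts to gathering the .profraw files
that the separate runs produce and handing them to llvm-profdata in one step.
A minimal sketch (the profiles/ directory and output name are placeholders):

import glob
import subprocess

# Merge the raw profiles from all runs into a single indexed profile.
raw_files = glob.glob("profiles/*.profraw")
subprocess.run(
    ["llvm-profdata", "merge", "-o", "merged.profdata", *raw_files],
    check=True,
)

The point of the thread is that this offline step becomes unwieldy when the
instrumented binary is run a very large number of times, which is what the
proposed in-process merging is meant to avoid.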
2016 Feb 28
0
Add support for in-process profile merging in profile-runtime
Xinliang David Li via llvm-dev <llvm-dev at lists.llvm.org> writes:
> One of the main missing features in the Clang/LLVM profile runtime is the lack
> of support for online/in-process profile merging. Profile data collected
> for different workloads for the same executable binary needs to be collected
> and merged later by the offline post-processing tool. This limitation
2015 Sep 23
1
Use case question
Hi Guys,
Can anyone provide more details about where the lag would occur if I did
try to pursue a push-to-talk scenario with Icecast? I'm not sure exactly
how it works now, so I'll outline the two potential discussion flows I have
in mind.
Perhaps someone could clarify between which steps the 5-10 seconds of
lag would come into play.
Mumble does look like it might be a better fit
2016 Feb 28
5
Add support for in-process profile merging in profile-runtime
One of the main missing features in the Clang/LLVM profile runtime is the lack
of support for online/in-process profile merging. Profile data
collected for different workloads for the same executable binary needs to be
collected and merged later by the offline post-processing tool. This
limitation makes it hard to handle cases where the instrumented binary
needs to be run with a large number of