Displaying 20 results from an estimated 2000 matches similar to: "minor error in http response"
2004 Aug 06
2
improved error.log output --diff
diff -u --recursive icecast/src/admin.c icecast-new/src/admin.c
--- icecast/src/admin.c 2003-07-18 16:29:23.000000000 -0400
+++ icecast-new/src/admin.c 2003-08-06 19:18:32.000000000 -0400
@@ -213,7 +213,7 @@
html_write(client, "HTTP/1.0 200 OK\r\n"
"Content-Type: text/html\r\n"
"\r\n");
- DEBUG1("Sending XSLT
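The hunk above sends the whole response head with one html_write() call. As a stand-alone sketch of the same pattern (build_response_head is an illustrative helper, not icecast's admin.c):

```c
#include <stdio.h>
#include <string.h>

/* Build the HTTP/1.0 response head that the hunk above passes to
 * html_write().  Stand-alone sketch, not icecast code. */
static int build_response_head(char *buf, size_t len)
{
    return snprintf(buf, len,
                    "HTTP/1.0 200 OK\r\n"
                    "Content-Type: text/html\r\n"
                    "\r\n");
}
```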
2010 Jul 30
33
[PATCHES] Smartjog PatchDump
Hello,
I work at SmartJog.com; we have some patches on IceCast here for
performance and reliability. These are mostly client/connection/source
cleanups (a slave merge is underway, and some more good stuff (c)),
but we'd like them to be merged in before the list gets any longer.
Please find attached a list of our patches with a short desc:
This one is actually not from us/me, it was found
2006 Sep 24
1
Add-on patch to support .pls .asx .ram .qtl listing formats
Hi,
If you have multiple players installed on your PC/Mac, a .m3u link will
always open in whichever media player is registered as the default for
the m3u extension and MIME type.
On your web site you may want to force a link to open RealPlayer or
QuickTime/iTunes. You need to create a .pls to force Winamp to load the
stream, because Windows Media Player won't open .pls etc.
If you add a .pls
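A .pls file is just a short INI-style text listing, so generating one per mount is cheap. A minimal sketch (write_pls is a hypothetical helper for illustration, not icecast's playlist code):

```c
#include <stdio.h>
#include <string.h>

/* Write a minimal single-entry .pls playlist into buf.
 * Hypothetical helper, not the code from the patch. */
static int write_pls(char *buf, size_t len,
                     const char *url, const char *title)
{
    return snprintf(buf, len,
                    "[playlist]\n"
                    "NumberOfEntries=1\n"
                    "File1=%s\n"
                    "Title1=%s\n"
                    "Length1=-1\n"
                    "Version=2\n",
                    url, title);
}
```

Winamp and most players open this directly, which is what lets a site force a particular player regardless of the m3u default.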
2004 Aug 06
5
Missing headers in Icecast2
Hi Karl,
Thanks for your help,
About the "Connection:" header, you are right, it's:
"Connection: close" and NOT "Connection: keep-alive". The protocol when the
SERVER sends the data is HTTP/1.0. It's HTTP/1.1 when the browser requests
the data.
I don't understand the "Content-Length: 54000000" header either. Also I
noticed the flash player on
2004 Aug 06
2
icecast 2 compatibility with older clients
I've attached a small patch against icecast 2 which converts ice-
headers to icy- headers for clients that include icy- headers in their
request. This allows a few clients (notably xmms) to pick up stream
info they otherwise miss.
-b
Index: src/format.c
===================================================================
RCS file:
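The idea of the patch can be sketched in isolation: when a client sent icy- request headers, rewrite the ice- response header names to icy- so clients like xmms pick up the stream info. The helper below (ice_to_icy is an illustrative name, not the code in src/format.c) does the in-place rename:

```c
#include <string.h>

/* Rewrite an "ice-" header name to "icy-" in place, for legacy
 * clients that sent icy- request headers.  Sketch only; the real
 * patch lives in src/format.c. */
static void ice_to_icy(char *header)
{
    if (strncmp(header, "ice-", 4) == 0)
        memcpy(header, "icy-", 4);   /* same length, safe in place */
}
```

Because "ice-" and "icy-" are the same length, the rewrite never moves the rest of the header line.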
2004 Aug 06
0
icecast 2 compatibility with older clients
On Wednesday, 02 July 2003 at 19:19, Brendan Cully wrote:
> I've attached a small patch against icecast 2 which converts ice-
> headers to icy- headers for clients that include icy- headers in their
> request. This allows a few clients (notably xmms) to pick up stream
> info they otherwise miss.
Per Jack's suggestion, here's a different version that forces icy mode
when
2009 Dec 23
0
icecast 2.3.2 generated buildm3u doesn't support authenticated streaming via https
Hi,
I've set up an HTTPS-only icecast streaming server with
authentication. After logging in to the stream, I still get an m3u file
containing the HTTP URL, e.g.:
http://user:pass@icecast-server:8000/stream.ogg
Expected:
https://user:pass@icecast-server:8001/stream.ogg
Test:
$ curl --insecure
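The fix presumably needs the m3u generator to pick the scheme and port from the listener's own connection rather than hard-coding http. A minimal sketch, assuming an is_tls flag and illustrative parameter names rather than icecast 2.3.2's real structures:

```c
#include <stdio.h>
#include <string.h>

/* Build the one-line m3u body, choosing the scheme from how the
 * listener connected.  is_tls and the parameter names are
 * assumptions, not icecast 2.3.2 fields. */
static int build_m3u(char *buf, size_t len, int is_tls,
                     const char *host, int port, const char *mount)
{
    return snprintf(buf, len, "%s://%s:%d%s\n",
                    is_tls ? "https" : "http", host, port, mount);
}
```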
2004 Aug 06
2
Re: PATCH: increase network congestion resilience
On Friday 17 January 2003 20:17, Karl Heyes shaped the electrons to say:
> I would suggest a slightly different approach.
>
> Instead of increasing the syscall overhead for all sockets to trap
> uncommon cases, try sock_write_bytes, and if that is
> continually having to queue (i.e. not all data can be sent) then display
> the warning; maybe make it a run-time option
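Karl's suggestion can be sketched as a per-socket counter: attempt the plain write, and only warn once partial writes persist. The threshold and names below are assumptions, not icecast code:

```c
#include <sys/types.h>

/* Warn only after writes have repeatedly failed to drain the whole
 * buffer.  CONGESTION_THRESHOLD and note_write are assumed names. */
#define CONGESTION_THRESHOLD 10

static int short_writes;   /* consecutive partial writes */

/* Record one write attempt; returns 1 when the warning should fire. */
static int note_write(ssize_t requested, ssize_t sent)
{
    if (sent < requested)
        short_writes++;      /* still congested */
    else
        short_writes = 0;    /* fully drained, reset */
    return short_writes >= CONGESTION_THRESHOLD;
}
```

This keeps the common case (writes that drain fully) at one syscall with no extra bookkeeping beyond a counter reset.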
2005 Aug 24
0
verbose imap logging
Implements more verbose imap logging (when client exits).
Similar to existing logging in POP3.
Due to the nature of IMAP and my incomplete understanding of the
dovecot code, the session statistics reported have varying accuracy.
Much better than nothing, IMO.
Cheers,
Jens
2017 Aug 23
0
Glusterd process hangs on reboot
Hi Atin,
Do you have time to check the logs?
On Wed, Aug 23, 2017 at 10:02 AM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> Same thing happens with 3.12.rc0. This time perf top shows hanging in
> libglusterfs.so and below are the glusterd logs, which are different
> from 3.10.
> With 3.10.5, after 60-70 minutes CPU usage becomes normal and we see
> brick processes come
2017 Aug 23
0
Glusterd process hangs on reboot
Would you be able to provide a pstack dump of the glusterd process?
On Wed, 23 Aug 2017 at 20:22, Atin Mukherjee <amukherj at redhat.com> wrote:
> Not yet. Gaurav will be taking a look at it tomorrow.
>
> On Wed, 23 Aug 2017 at 20:14, Serkan Çoban <cobanserkan at gmail.com> wrote:
>
>> Hi Atin,
>>
>> Do you have time to check the logs?
>>
2017 Aug 23
2
Glusterd process hangs on reboot
Same thing happens with 3.12.rc0. This time perf top shows hanging in
libglusterfs.so and below are the glusterd logs, which are different
from 3.10.
With 3.10.5, after 60-70 minutes CPU usage becomes normal and we see
brick processes come online and system starts to answer commands like
"gluster peer status".
[2017-08-23 06:46:02.150472] E [client_t.c:324:gf_client_ref]
2017 Aug 24
0
Glusterd process hangs on reboot
Restarting glusterd causes the same thing. I tried with 3.12.rc0,
3.10.5, 3.8.15, 3.7.20; all show the same behavior.
My OS is CentOS 6.9; I tried with CentOS 6.8 and the problem remains...
The only way back to a healthy state is to destroy the gluster
config/rpms, reinstall, and recreate the volumes.
On Thu, Aug 24, 2017 at 8:49 AM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> Here you can find 10 stack trace samples
2017 Aug 23
2
Glusterd process hangs on reboot
Not yet. Gaurav will be taking a look at it tomorrow.
On Wed, 23 Aug 2017 at 20:14, Serkan Çoban <cobanserkan at gmail.com> wrote:
> Hi Atin,
>
> Do you have time to check the logs?
>
> On Wed, Aug 23, 2017 at 10:02 AM, Serkan Çoban <cobanserkan at gmail.com>
> wrote:
> > Same thing happens with 3.12.rc0. This time perf top shows hanging in
> >
2017 Aug 24
0
Glusterd process hangs on reboot
Thank you Gaurav,
Here are more findings:
The problem does not happen using only 20 servers, each with 68 bricks
(peer probe only 20 servers).
If we use 40 servers with a single volume, the glusterd 100% CPU state
continues for 5 minutes and then returns to normal.
With 80 servers we have no working state yet...
On Thu, Aug 24, 2017 at 1:33 PM, Gaurav Yadav <gyadav at redhat.com> wrote:
>
> I am
2017 Aug 29
0
Glusterd process hangs on reboot
So far I haven't found anything significant.
Can you send me the gluster logs along with the command-history logs
for these scenarios:
Scenario 1: 20 servers
Scenario 2: 40 servers
Scenario 3: 80 servers
Thanks
Gaurav
On Mon, Aug 28, 2017 at 11:22 AM, Serkan Çoban <cobanserkan at gmail.com>
wrote:
> Hi Gaurav,
> Any progress about the problem?
>
> On Thursday, August 24,
2017 Aug 29
0
Glusterd process hangs on reboot
I believe the logs you have shared consist of a volume create followed
by starting the volume.
However, you have mentioned that when a node from the 80-server cluster
gets rebooted, the glusterd process hangs.
Could you please provide the logs which led glusterd to hang for all the
cases, along with the glusterd process utilization.
Thanks
Gaurav
On Tue, Aug 29, 2017 at 2:44 PM, Serkan Çoban
2017 Aug 29
0
Glusterd process hangs on reboot
glusterd returned to normal; here are the logs:
https://www.dropbox.com/s/41jx2zn3uizvr53/80servers_glusterd_normal_status.zip?dl=0
On Tue, Aug 29, 2017 at 1:47 PM, Serkan Çoban <cobanserkan at gmail.com> wrote:
> Here are the logs after stopping all three volumes and restarting
> glusterd in all nodes. I waited 70 minutes after the glusterd restart but
> it is still consuming 100% CPU.
2017 Sep 04
0
Glusterd process hangs on reboot
>1. On the 80-node cluster, did you reboot only one node or multiple ones?
Tried both; the result is the same, but the logs/stacks are from stopping
and starting glusterd on only one server while the others are running.
>2. Are you sure that pstack output was always constantly pointing at strcmp being stuck?
It stays in the 100% CPU state for 70-80 minutes; the stacks I sent are
from the first 5-10 minutes.
2017 Sep 04
0
Glusterd process hangs on reboot
I have been using a 60-server, 1560-brick 3.7.11 cluster without
problems for 1 year. I did not see this problem with it.
Note that this problem does not happen when I install packages, start
glusterd, peer probe, and create the volumes, but only after a glusterd
restart.
Also note that this still happens without any volumes, so I don't think
it is related to brick count...
On Mon, Sep 4, 2017