similar to: bug report w.r.t. streaming of metadata in icecast

Displaying 20 results from an estimated 200 matches similar to: "bug report w.r.t. streaming of metadata in icecast"

2004 Aug 06
2
bug report w.r.t. streaming of metadata in icecast
On Tue, 7 Aug 2001, Brendan Cully wrote: > On Tuesday, 07 August 2001 at 12:56, Richard Fromm wrote: > > I believe there is a bug in the following line in write_chunk_with_metadata() > > in source.c. Here is the original: > > > > if (source->info.udpseqnr == clicon->food.client->udpseqnr) { > > > > and here is the change: > > > >
2004 Aug 06
0
bug report w.r.t. streaming of metadata in icecast
On Tuesday, 07 August 2001 at 12:56, Richard Fromm wrote: > I've been trying to get title streaming of metadata to work with icecast > 1.3.10. I've found what I believe to be a bug -- is this the right place to > file a bug report? I think so. > It appears that this information should be periodically inserted into the data > stream. The behavior that I was seeing was
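
The "periodically inserted" mechanism under discussion is easiest to see in the ICY/Shoutcast-style inline scheme, where a short metadata block is written into the audio stream at a fixed byte interval. The C sketch below illustrates only that general idea, under the assumption of an icy-metaint-style interval; it is not icecast 1.3.10's actual code (which, as the thread notes, also has a UDP sequence-number path for metadata), and META_INTERVAL, write_chunk(), and the buffer sizes are invented for the example.

/*
 * Minimal sketch of ICY/Shoutcast-style inline metadata: after every
 * META_INTERVAL bytes of audio, a metadata block is inserted whose first
 * byte gives its length in 16-byte units.  Illustration only; the names
 * here are made up and are not icecast functions.
 */
#include <stdio.h>
#include <string.h>

#define META_INTERVAL 8192            /* audio bytes between metadata blocks */

static size_t bytes_since_meta = 0;   /* audio bytes sent since last block   */

static void write_chunk(FILE *out, const char *audio, size_t len, const char *title)
{
    while (len > 0) {
        size_t room = META_INTERVAL - bytes_since_meta;
        size_t n = len < room ? len : room;

        fwrite(audio, 1, n, out);     /* plain audio data */
        audio += n;
        len   -= n;
        bytes_since_meta += n;

        if (bytes_since_meta == META_INTERVAL) {
            char meta[1 + 255 * 16];
            int  text = snprintf(meta + 1, sizeof(meta) - 1, "StreamTitle='%s';", title);
            unsigned char blocks;

            if (text < 0) text = 0;
            if (text > (int)sizeof(meta) - 2) text = (int)sizeof(meta) - 2;
            blocks  = (unsigned char)((text + 15) / 16);
            meta[0] = (char)blocks;                              /* length byte     */
            memset(meta + 1 + text, 0, (size_t)blocks * 16 - (size_t)text); /* pad  */
            fwrite(meta, 1, 1 + (size_t)blocks * 16, out);       /* metadata block  */
            bytes_since_meta = 0;
        }
    }
}

int main(void)
{
    const char silence[META_INTERVAL] = {0};   /* stand-in for real audio data */
    write_chunk(stdout, silence, sizeof(silence), "Artist - Title");
    return 0;
}
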
2004 Aug 06
5
Missing headers in Icecast2
Hi Karl, Thanks for your help. About the "Connection:" header, you are right: it's "Connection: close" and NOT "Connection: keep-alive". The protocol when the SERVER sends the data is HTTP 1.0; it's HTTP 1.1 when the browser requests the data. I don't understand the "Content-Length: 54000000" header either. Also I noticed the flash player on
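
For comparison, the sketch below formats the kind of HTTP/1.0 response header block a streaming server can send: HTTP/1.0 has no persistent connections (hence "Connection: close"), and a live stream has no known length, which is why a fixed "Content-Length: 54000000" looks out of place. This is an illustration only, not Icecast2's actual output; the status line, the audio/mpeg content type, and format_stream_headers() are assumptions made for the example.

/*
 * Illustrative only: a plausible HTTP/1.0-style header block for an
 * endless audio stream.  Not Icecast2's real output.
 */
#include <stdio.h>

static int format_stream_headers(char *buf, size_t size, const char *content_type)
{
    /* HTTP/1.0 has no keep-alive, hence "Connection: close"; a live stream
     * has no known length, so no Content-Length header is emitted here. */
    return snprintf(buf, size,
                    "HTTP/1.0 200 OK\r\n"
                    "Content-Type: %s\r\n"
                    "Connection: close\r\n"
                    "\r\n",
                    content_type);
}

int main(void)
{
    char headers[256];
    format_stream_headers(headers, sizeof(headers), "audio/mpeg");
    fputs(headers, stdout);
    return 0;
}
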
2004 Aug 06
2
bug report w.r.t. streaming of metadata in icecast
On Thursday, 09 August 2001 at 18:08, Richard Fromm wrote: > On Tue, 7 Aug 2001, Richard Fromm wrote: > > > To clarify, the behavior that I saw was that most of the time the title > > streaming appeared in the client, but sometimes it did not. > > I think I've nailed this down. The problem is if the icecast server > starts in the middle of a song already being
2017 Aug 23
2
Glusterd process hangs on reboot
The same thing happens with 3.12.rc0. This time perf top shows hanging in libglusterfs.so, and below are the glusterd logs, which are different from 3.10. With 3.10.5, after 60-70 minutes CPU usage becomes normal and we see brick processes come online and the system starts to answer commands like "gluster peer status". [2017-08-23 06:46:02.150472] E [client_t.c:324:gf_client_ref]
2017 Aug 23
0
Glusterd process hangs on reboot
Hi Atin, Do you have time to check the logs? On Wed, Aug 23, 2017 at 10:02 AM, Serkan Çoban <cobanserkan at gmail.com> wrote: > The same thing happens with 3.12.rc0. This time perf top shows hanging in > libglusterfs.so, and below are the glusterd logs, which are different > from 3.10. > With 3.10.5, after 60-70 minutes CPU usage becomes normal and we see > brick processes come
2017 Aug 23
2
Glusterd process hangs on reboot
Not yet. Gaurav will be taking a look at it tomorrow. On Wed, 23 Aug 2017 at 20:14, Serkan Çoban <cobanserkan at gmail.com> wrote: > Hi Atin, > > Do you have time to check the logs? > > On Wed, Aug 23, 2017 at 10:02 AM, Serkan Çoban <cobanserkan at gmail.com> > wrote: > > The same thing happens with 3.12.rc0. This time perf top shows hanging in > >
2017 Aug 23
0
Glusterd process hangs on reboot
Would you be able to provide a pstack dump of the glusterd process? On Wed, 23 Aug 2017 at 20:22, Atin Mukherjee <amukherj at redhat.com> wrote: > Not yet. Gaurav will be taking a look at it tomorrow. > > On Wed, 23 Aug 2017 at 20:14, Serkan Çoban <cobanserkan at gmail.com> wrote: > >> Hi Atin, >> >> Do you have time to check the logs? >>
2017 Aug 24
6
Glusterd process hangs on reboot
Here you can find 10 stack trace samples from glusterd. I waited 10 seconds between each trace. https://www.dropbox.com/s/9f36goq5xn3p1yt/glusterd_pstack.zip?dl=0 The content of the first stack trace is here: Thread 8 (Thread 0x7f7a8cd4e700 (LWP 43069)): #0 0x0000003aa5c0f00d in nanosleep () from /lib64/libpthread.so.0 #1 0x000000303f837d57 in ?? () from /usr/lib64/libglusterfs.so.0 #2
2017 Aug 24
2
Glusterd process hangs on reboot
I am working on it and will share my findings as soon as possible. Thanks Gaurav On Thu, Aug 24, 2017 at 3:58 PM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Restarting glusterd causes the same thing. I tried with 3.12.rc0, > 3.10.5, 3.8.15, 3.7.20, all with the same behavior. > My OS is CentOS 6.9; I tried with CentOS 6.8 and the problem remains... > The only way to a healthy state is
2017 Aug 24
0
Glusterd process hangs on reboot
Restarting glusterd causes the same thing. I tried with 3.12.rc0, 3.10.5, 3.8.15, 3.7.20, all with the same behavior. My OS is CentOS 6.9; I tried with CentOS 6.8 and the problem remains... The only way to a healthy state is to destroy the gluster config/RPMs, reinstall, and recreate the volumes. On Thu, Aug 24, 2017 at 8:49 AM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Here you can find 10 stack trace samples
2017 Aug 28
2
Glusterd process hangs on reboot
Hi Gaurav, Any progress on the problem? On Thursday, August 24, 2017, Serkan Çoban <cobanserkan at gmail.com> wrote: > Thank you Gaurav, > Here are more findings: > The problem does not happen using only 20 servers, each with 68 bricks > (peer probe only 20 servers). > If we use 40 servers with a single volume, the glusterd 100% CPU state > continues for 5 minutes and it goes to
2017 Aug 22
0
Glusterd process hangs on reboot
I rebooted multiple times; I also destroyed the gluster configuration and recreated it multiple times. The behavior is the same. On Tue, Aug 22, 2017 at 6:47 PM, Atin Mukherjee <amukherj at redhat.com> wrote: > My guess is there is a corruption in the vol list or peer list which has led > glusterd to get into an infinite loop of traversing a peer/volume list and > the CPU to hog up. Again this is a
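
The hypothesis above (a corrupted peer/volume list sending glusterd into an endless traversal at 100% CPU) can be illustrated generically. The C sketch below is not glusterd code; struct peer, count_peers(), and list_has_cycle() are invented names. It only shows why a list corrupted into a cycle makes a plain traversal spin forever, plus one standard safeguard (Floyd's tortoise-and-hare) for detecting such a cycle.

/*
 * Generic illustration (not glusterd code) of the hypothesized failure mode:
 * if a node's "next" pointer is corrupted to point back into the list, a
 * traversal that expects a NULL terminator never ends and the thread spins.
 */
#include <stddef.h>
#include <stdio.h>

struct peer {
    const char  *hostname;
    struct peer *next;
};

/* Counts peers; never returns if the list has been corrupted into a cycle. */
static size_t count_peers(const struct peer *head)
{
    size_t n = 0;
    for (const struct peer *p = head; p != NULL; p = p->next)
        n++;                          /* spins forever once a cycle is entered */
    return n;
}

/* Floyd's cycle detection while walking the list. */
static int list_has_cycle(const struct peer *head)
{
    const struct peer *slow = head, *fast = head;
    while (fast != NULL && fast->next != NULL) {
        slow = slow->next;
        fast = fast->next->next;
        if (slow == fast)
            return 1;                 /* corruption: traversal would never terminate */
    }
    return 0;
}

int main(void)
{
    struct peer a = {"peer-a", NULL}, b = {"peer-b", NULL}, c = {"peer-c", NULL};

    a.next = &b; b.next = &c;                      /* healthy list: a -> b -> c */
    printf("peers: %zu\n", count_peers(&a));       /* prints 3 */

    c.next = &b;                                   /* simulate corruption: c -> b cycle */
    printf("cycle detected: %d\n", list_has_cycle(&a));
    /* count_peers(&a) would now spin forever, pinning one CPU core. */
    return 0;
}
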
2017 Aug 29
2
Glusterd process hangs on reboot
Here are the requested logs: https://www.dropbox.com/s/vt187h0gtu5doip/gluster_logs_20_40_80_servers.zip?dl=0 On Tue, Aug 29, 2017 at 7:48 AM, Gaurav Yadav <gyadav at redhat.com> wrote: > So far I haven't found anything significant. > > Can you send me the gluster logs along with the command-history logs for these > scenarios: > Scenario 1: 20 servers > Scenario 2: 40
2017 Aug 24
0
Glusterd process hangs on reboot
Thank you Gaurav, Here are more findings: The problem does not happen using only 20 servers, each with 68 bricks (peer probe only 20 servers). If we use 40 servers with a single volume, the glusterd 100% CPU state continues for 5 minutes and then it goes back to a normal state. With 80 servers we have no working state yet... On Thu, Aug 24, 2017 at 1:33 PM, Gaurav Yadav <gyadav at redhat.com> wrote: > > I am
2017 Sep 01
0
Glusterd process hangs on reboot
Serkan, I have gone through the other mails in the thread as well but am responding to this one specifically. Is this a source install or an RPM install? If this is an RPM install, could you please install the glusterfs-debuginfo RPM and retry capturing the gdb backtrace. If this is a source install, then you'll need to configure the build with --enable-debug and reinstall and retry
2017 Sep 04
2
Glusterd process hangs on reboot
On Fri, Sep 1, 2017 at 8:47 AM, Milind Changire <mchangir at redhat.com> wrote: > Serkan, > I have gone through the other mails in the thread as well but am responding > to this one specifically. > > Is this a source install or an RPM install? > If this is an RPM install, could you please install the > glusterfs-debuginfo RPM and retry capturing the gdb backtrace. >
2017 Aug 29
0
Glusterd process hangs on reboot
So far I haven't found anything significant. Can you send me the gluster logs along with the command-history logs for these scenarios: Scenario 1: 20 servers, Scenario 2: 40 servers, Scenario 3: 80 servers. Thanks Gaurav On Mon, Aug 28, 2017 at 11:22 AM, Serkan Çoban <cobanserkan at gmail.com> wrote: > Hi Gaurav, > Any progress on the problem? > > On Thursday, August 24,
2017 Aug 29
2
Glusterd process hangs on reboot
Here are the logs after stopping all three volumes and restarting glusterd on all nodes. I waited 70 minutes after the glusterd restart but it is still consuming 100% CPU. https://www.dropbox.com/s/pzl0f198v03twx3/80servers_after_glusterd_restart.zip?dl=0 On Tue, Aug 29, 2017 at 12:37 PM, Gaurav Yadav <gyadav at redhat.com> wrote: > > I believe the logs you have shared consist of
2017 Aug 29
0
Glusterd process hangs on reboot
I believe the logs you have shared consist of a volume creation followed by starting the volume. However, you have mentioned that when a node from the 80-server cluster gets rebooted, the glusterd process hangs. Could you please provide the logs which led glusterd to hang for all the cases, along with the glusterd process utilization? Thanks Gaurav On Tue, Aug 29, 2017 at 2:44 PM, Serkan Çoban