Unfortunately I can't reproduce your memory leak reports. For example, here is the line about my production darkice instance from top:

  PID   USER  PRI  NI  SIZE  RSS   SHARE  STAT  %CPU  %MEM  TIME   COMMAND
  27516 root  20   0   3564  3564  1340   S     95.6  1.3   47:06  darkice

This is after approx. 50 minutes from start, but the memory load doesn't change for the whole 4 hours it runs. What's even stranger is that darkice does not allocate memory itself after initialization. I can only guess that the only place memory can leak is in the libraries darkice uses. Kristjan or Nicu, can you send me the exact libraries that you're using?

Akos
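For reference, a rough sketch of how the RSS of a running instance could be logged over time (the PID below is just the one from the top output above, and the interval is arbitrary):

  #!/bin/sh
  # log the resident set size of a process once a minute
  PID=27516                        # replace with the darkice PID on your box
  while kill -0 "$PID" 2>/dev/null; do
      # VmRSS is reported in kB by the kernel
      echo "$(date +%H:%M:%S) $(grep VmRSS /proc/$PID/status)"
      sleep 60
  done >> darkice-rss.log

If that column stays flat while "free" memory shrinks, the memory is going somewhere other than the darkice process itself.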
Tried both the binary and compiling the src. Both leaking.

The settings suggested in "INSTALL.lame" didn't work, so I suspected the compilation settings and tried various ones. The compiler version is the one coming with RedHat 7.2, gcc3-3.0.1-3.

Kristjan

-----Original Message-----
From: owner-icecast@xiph.org [mailto:owner-icecast@xiph.org] On Behalf Of Akos Maroy
Sent: 8. april 2002 17:53
To: icecast@xiph.org
Subject: Re: [icecast] Darkice memory leak

Kristjan Guðni Bjarnason wrote:
> ups, sorry for fuzzy description. I use "top" to watch the running
> processes. It updates every 5 seconds.

I see, thanks.

> I am running RedHat 7.2, downloaded from one of RedHat's mirrors. Haven't
> installed any updates of RedHat 7.2. The box is a fresh PC with an AMD
> Athlon 1700XP, 128MB, two SoundBlaster 16 PCI using OSS drivers.

I have about the same box at home, and have no such problems.

> Any info missing?

How did you compile lame? Or are you using a binary?

Akos
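Since the exact compiler and library versions were asked for above, a couple of commands that would capture them (this assumes lame and darkice were installed as RPMs; otherwise the version banners alone still help):

  # compiler and library versions, for comparing the two setups
  gcc --version
  rpm -q lame darkice 2>/dev/null     # only meaningful if installed from RPMs
  lame --help 2>&1 | head -1          # lame prints its version banner first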
Kristjan Guðni Bjarnason wrote:
> Tried both the binary and compiling the src. Both leaking.
>
> The settings suggested in "INSTALL.lame" didn't work, so I suspected the
> compilation settings and tried various ones. The compiler version
> is the one coming with RedHat 7.2, gcc3-3.0.1-3.
>
> Kristjan

I don't think it's a memory leak ... of course free memory is decreasing, it's just buffering ... (change the kernel params in /proc if you want different behaviour). I didn't see it start swapping, and the memory comes back when another program needs it. Just leave "vmstat 1" running and see if it really eats up all your memory. Try running without swap.

Nicu.
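For reading those numbers later, a minimal way to keep a timestamped vmstat record around (the interval and file name are arbitrary):

  # one sample per second; prepend a timestamp to each vmstat line
  vmstat 1 | while read line; do
      echo "$(date +%H:%M:%S) $line"
  done >> vmstat.log

Free memory shrinking while the buff/cache columns grow is ordinary caching; sustained non-zero si/so while free stays low is what would point at real memory pressure.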
Nicu, maybe you are right, but what got me started watching the memory was this log in /var/log/messages after darkice went down:

Apr 8 00:16:54 nitro kernel: Out of Memory: Killed process 1997 (bash).
Apr 8 00:16:59 nitro login(pam_unix)[1996]: session closed for user kgb
Apr 8 00:17:01 nitro kernel: Out of Memory: Killed process 2025 (bash).
Apr 8 00:17:15 nitro kernel: Out of Memory: Killed process 2112 (darkice).
Apr 8 00:17:16 nitro login(pam_unix)[2024]: session closed for user kgb

vmstat 1 tells me that somebody is allocating 24 kB/s, because the free (idle) memory keeps decreasing. The buffer size stays almost constant.

I watched the memory usage for a while. Free memory went down to approx. 2800 kB (at 24 kB/s), but then it stopped and the buffer size started to decrease by the same amount. During both the free memory and the buffer decrease, the swap memory stayed almost constant. Once the buffer memory was down to approx. 400 kB, the swap memory started to decrease.

I haven't changed the kernel params, but I guess that won't change anything.

Kristjan
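One way to see whether that 24 kB/s is really landing in a process (rather than in kernel buffers, which no process would show) is to snapshot per-process memory alongside vmstat; the interval and file name here are arbitrary:

  #!/bin/sh
  # every 10 seconds, record the five biggest processes by resident size
  while true; do
      date +%H:%M:%S
      ps aux | sort -k6 -nr | head -5     # column 6 of "ps aux" is RSS in kB
      sleep 10
  done >> rss-snapshots.log

If darkice's RSS climbs in step with the drop in free memory, the leak is in the process (or in the libraries it links against); if no process grows, the memory is going to the kernel.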