Displaying 20 results from an estimated 100 matches similar to: "Smartjog patchs"
2011 Jun 16
0
Smartjog patchs
ldefert <ldefert at smartjog.com> writes:
> I work for the company Smartjog (http://smartjog.com/) where we use a
> modified version of Icecast. We would like to contribute those patches
> back, and hopefully have them merged into the official repository. I have
> posted them on the patch tracker, so feel free to leave a comment on them.
^^^^^^^^^^^^^
for those (like
2010 Jul 30
33
[PATCHES] Smartjog PatchDump
Hello,
I work at SmartJog.com; we have some patches for Icecast covering
performance and reliability. These are mostly client/connection/source
cleanups (a slave merge is underway, and some more good stuff (c)),
but we'd like these to be merged before the list gets any longer.
Please find attached a list of our patches, each with a short description:
This one is actually not from us/me, it was found
2010 Aug 01
2
[PATCHES] Smartjog PatchDump
(answering both mails in one, as they call back each other)
Michael Smith <msmith at xiph.org> writes:
> Can you file bugs and attach the patches to them in our bugtracking
> system? http://trac.xiph.org/
Sorry, I really don't feel like creating 3x tickets right now =)
> Mail dumps of patches are pretty much guaranteed to not get merged.
Understandable.
> From a very
2010 Jul 30
2
[PATCHES] Smartjog PatchDump
On Friday 30 July 2010 at 12:25:48, Michael Smith wrote:
> All that said: Icecast2 is largely unmaintained these days - I don't
> know if anyone is interested in going through these and figuring out
> which ones are mergeable, which need fixing, and which shouldn't be
> used at all.
Maybe it's time to include new contributors?
If no one has time to review the
2010 Jul 30
0
[PATCHES] Smartjog PatchDump
On Fri, Jul 30, 2010 at 10:43 AM, Romain Beauxis <toots at rastageeks.org> wrote:
> On Friday 30 July 2010 at 12:25:48, Michael Smith wrote:
>> All that said: Icecast2 is largely unmaintained these days - I don't
>> know if anyone is interested in going through these and figuring out
>> which ones are mergeable, which need fixing, and which shouldn't be
2007 Dec 25
5
[Bug 13815] New: Wrong URL for HTTP GET
http://bugs.freedesktop.org/show_bug.cgi?id=13815
Summary: Wrong URL for HTTP GET
Product: swfdec
Version: git
Platform: Other
URL: http://www.betterworldbooks.com/Flash/output.swf
OS/Version: All
Status: NEW
Severity: normal
Priority: medium
Component: library
AssignedTo: swfdec at
2008 Jan 31
1
icecast crash (segv)
Hi all,
I'm experiencing a systematic crash with icecast2 in the following case
(tested with Debian 2.3.1-5, 2.3.1-6, and latest trunk):
when there are more than ~1400 clients connected and a mountpoint that was not
relaying comes online, it crashes.
Without this I can go up to 5000 clients without problems. Changing the number
of thread pools doesn't change anything.
Here is the terminal log:
2011 Dec 21
1
Migrate Users from existing Samba4 Domain?
I've been using one Samba4 server based on a Fedora 14 distro, but because
of my continuing issues with Bind 9.7 and dynamic DNS updates, I'm trying
to move to a Fedora 16 base, which includes Bind 9.8 by default (not to
mention a bucketload of other updates).
My question: What is the best way to pull the current domain data from the
first server to the second one? In particular, I'm
2006 Nov 17
3
Bug#399073: xen-hypervisor-3.0.3-1-i386: dom0 crashes with a domU that defines more than 6 vdbs
Package: xen-hypervisor-3.0.3-1-i386
Version: 3.0.3-0-2
Severity: important
When I try to start a domU with more than 6 vdbs, it crashes.
The first time, the VM does not launch and apparently waits for the loop
device creation (some scripts/block add, udev --daemon, and sleep 1 processes are running), and nothing happens until I Ctrl+C the xm create.
My loop module is loaded with 64 loops.
After this first
2011 May 09
1
You don't check for malloc failure
On 05/08/2011 01:06 AM, Romain Beauxis wrote:
> Running out of memory is not considered as an error when calling malloc?
On Linux, the only way to get an error when calling malloc() is to
disable memory overcommitting. On regular Linux systems, this is the
_only_ way for malloc() to return NULL. If an icecast process reaches
that point, it's screwed anyway; it won't be able to do
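To make the point concrete, here is a minimal C sketch of the fail-fast pattern this argues for; the xmalloc wrapper is a hypothetical name for illustration, not code from Icecast:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical fail-fast wrapper: on a default (overcommitting) Linux
 * system malloc() almost never returns NULL, and a process that does
 * hit NULL is unlikely to recover, so log and abort rather than
 * propagating NULL through the code base. */
static void *xmalloc(size_t size)
{
    void *p = malloc(size);
    if (p == NULL) {
        fprintf(stderr, "out of memory allocating %zu bytes\n", size);
        abort();
    }
    return p;
}

int main(void)
{
    char *buf = xmalloc(4096);  /* never returns NULL to the caller */
    memset(buf, 0, 4096);
    free(buf);
    return 0;
}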
2012 Dec 17
5
Feeback on RAID1 feature of Btrfs
Hello,
I'm testing the Btrfs RAID1 feature on 3 disks of ~10GB. The last one is not
exactly 10GB (that would be too easy).
About the test machine: it's a KVM VM running an up-to-date Arch Linux
with Linux 3.7 and btrfs-progs 0.19.20121005.
# uname -a
Linux seblu-btrfs-1 3.7.0-1-ARCH #1 SMP PREEMPT Tue Dec 11 15:05:50 CET 2012 x86_64 GNU/Linux
The filesystem was created with:
# mkfs.btrfs -L
2010 Jun 07
0
No subject
changes, there are lots of formatting changes, and things like
additions of "XXX: ..." comments that don't say anything helpful -
these will need removing before the patches are really reviewable.
A higher-level description of what you're attempting to accomplish
with these patchsets would also be very helpful - but much more detail
than "performance and reliability".
2009 Mar 04
4
Should I be worried?
When doing my updates, I got this message:
error: unpacking of archive failed on file /boot/System.map-2.6.9-78.0.13.EL;49a
Unfortunately, I installed CentOS 4 about two years ago on my server
that sits in the corner and does its job of faithfully providing
services without me touching it, except to run a backup script and do a
YUM update every week or so. The result is that I have
2011 May 24
0
Xen and live migration with VIR_MIGRATE_PERSIST_DEST
Hi,
I'm having a hard time figuring out how to perform Xen live migration using
the Python API with the same success rate as virt-manager.
# virsh version
Compiled against library: libvir 0.8.3
Using library: libvir 0.8.3
Using API: Xen 3.0.1
Running hypervisor: Xen 3.2.0
The goal is to have the migration done in only one call, including a
definition in the remote xend and undefinition
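For reference, a minimal sketch of that single-call migration through libvirt's C API (the Python binding mirrors these calls); the connection URIs and domain name are placeholder assumptions, and error cleanup is abbreviated:

/* Build with: gcc migrate.c -o migrate $(pkg-config --cflags --libs libvirt) */
#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr src = virConnectOpen("xen:///");                /* local xend */
    virConnectPtr dst = virConnectOpen("xen+ssh://remote-host/"); /* placeholder URI */
    if (src == NULL || dst == NULL) {
        fprintf(stderr, "failed to open connection(s)\n");
        return 1;
    }

    virDomainPtr dom = virDomainLookupByName(src, "guest1");      /* placeholder name */
    if (dom == NULL) {
        fprintf(stderr, "domain not found\n");
        return 1;
    }

    /* One call: migrate live, define the guest on the destination,
     * and undefine it on the source. */
    unsigned long flags = VIR_MIGRATE_LIVE
                        | VIR_MIGRATE_PERSIST_DEST
                        | VIR_MIGRATE_UNDEFINE_SOURCE;
    virDomainPtr migrated = virDomainMigrate(dom, dst, flags, NULL, NULL, 0);
    if (migrated == NULL) {
        fprintf(stderr, "migration failed\n");
        return 1;
    }

    virDomainFree(migrated);
    virDomainFree(dom);
    virConnectClose(src);
    virConnectClose(dst);
    return 0;
}

Matching virt-manager's success rate then mostly comes down to passing the same flags and URIs that it uses for the equivalent operation.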
2007 Jan 16
2
Some patches
Here are two patches:
A patch for switcher to disable the window list. Everything was already present
in the code, just missing an option :)
A patch for place; it's my old patch, I just fixed a stupid segfault!
I'll try to add some other placement modes...
http://puffy.homelinux.org/%7Egnumdk/compiz/patch/switcher.patch
http://puffy.homelinux.org/%7Egnumdk/compiz/patch/place.patch
Cedric
2005 Aug 23
1
Process build properly after applying patches to 3.0.20
Hi,
I am trying to test some patches that have been made post 3.0.20. I
have Samba 3.0.20 src installed. I have applied two patches from SVN
(9484, 9481).
I have run autoconf. Poking around after "configure", I discovered I
needed to run "autoheader" as well. This is fine, but I am wondering if
I have missed any other steps?
My steps...
tar -xvzf samba-3.0.20.tar.gz
2009 Feb 10
2
[PATCHES] Included 3 patches that update documentation
Included are 3 patches that update documentation.
This completes a 5-patch set.
If you prefer me to resend all of them as one patch, attached,
for discussion or whatever, you're welcome.
Best regards,
vicente
From 7cec3ad78c8454408c8b6a1950d441e02d56d138 Mon Sep 17 00:00:00 2001
From: Vicente Jimenez Aguilar <googuy at gmail.com>
Date: Fri, 23 Jan 2009 00:57:48 +0100
Subject: [PATCH]
2006 Jun 24
2
Patch criteria
Greetings everyone!
During the 5 months since Compiz was released, lots of work has been
done outside of the official team by the new Compiz community in order
to improve Compiz's features and usability. These unofficial
developers are mainly using the compiz.net boards, #xgl or #compiz-dev
IRC channels on Freenode to get in touch, discuss ideas, features,
patches and bugs... Most patches
2014 Aug 21
1
Cluster blocked, so we have to reboot all nodes to avoid it. Is there any patch for it? Thanks.
Hi, everyone
We have hit the blocked cluster several times, and the log is always the same;
we have to reboot all the nodes of the cluster to recover. Is there any patch that fixes this bug?
[<ffffffff817539a5>] schedule_timeout+0x1e5/0x250
[<ffffffff81755a77>] wait_for_completion+0xa7/0x160
[<ffffffff8109c9b0>] ? try_to_wake_up+0x2c0/0x2c0
[<ffffffffa0564063>]