This patch does two things.
This test program results (on i386) in an error about _NSIG:
#include <signal.h>
#if defined (SIGRTMAX)
int rtmax = SIGRTMAX;
#endif
The cause is that the kernel signal.h defines SIGRTMAX as _NSIG,
then makes _NSIG invisible by hiding it inside #ifdef __KERNEL__.
Perhaps it's more elegant to solve this in the kernel,
but the ramifications of that scare me.
The other issue is that signal() is not provided by klibc.
Here it's defined to be equal to bsd_signal, consistent with glibc
behaviour. I'm none too sure about the need for this one.
Regards,
Erik
diff -urN klibc-0.202-pristine/include/signal.h klibc-0.202/include/signal.h
--- klibc-0.202-pristine/include/signal.h 2004-06-06 11:18:04.000000000 +0200
+++ klibc-0.202/include/signal.h 2005-03-02 22:09:43.000000000 +0100
@@ -12,6 +12,13 @@
#include <sys/types.h>
#include <asm/signal.h>
+/* Kernel defines SIGRTMAX as _NSIG, then keeps _NSIG
+ hidden under ifdefs... */
+#ifndef _NSIG
+#undef SIGRTMIN
+#undef SIGRTMAX
+#endif
+
#include <klibc/archsignal.h>
/* glibc seems to use sig_atomic_t as "int" pretty much on all
architectures.
@@ -67,6 +74,8 @@
__extern __sighandler_t __signal(int, __sighandler_t, int);
__extern __sighandler_t sysv_signal(int, __sighandler_t);
__extern __sighandler_t bsd_signal(int, __sighandler_t);
+/* make sure classic code works */
+#define signal bsd_signal
__extern int sigaction(int, const struct sigaction *, struct sigaction *);
__extern int sigprocmask(int, const sigset_t *, sigset_t *);
__extern int sigpending(sigset_t *);