I am trying the small attached program on FreeBSD 6.3 (amd64, SCHED_4BSD) and 7-STABLE (i386, SCHED_ULE), both with libthr as the threads library, and on both it produces the "BROKEN" message.

I compile this program as follows:
cc sched_test.c -o sched_test -pthread

I believe that the behavior I observe is broken because: if thread #1 releases a mutex and then tries to re-acquire it while thread #2 was already blocked waiting on that mutex, then thread #1 should be queued after thread #2 in the mutex's waiter list.

Is there any option (thread scheduler, etc.) that I could try to achieve the "good" behavior?

P.S. I understand that all this is subject to (thread) scheduler policy, but I think that what I expect is more reasonable, at least for my application.

-- 
Andriy Gapon
on 14/05/2008 18:17 Andriy Gapon said the following:
> I am trying the small attached program on FreeBSD 6.3 (amd64,
> SCHED_4BSD) and 7-STABLE (i386, SCHED_ULE), both with libthr as the
> threads library, and on both it produces the "BROKEN" message.
>
> I compile this program as follows:
> cc sched_test.c -o sched_test -pthread
>
> I believe that the behavior I observe is broken because: if thread #1
> releases a mutex and then tries to re-acquire it while thread #2 was
> already blocked waiting on that mutex, then thread #1 should be queued
> after thread #2 in the mutex's waiter list.
>
> Is there any option (thread scheduler, etc.) that I could try to achieve
> the "good" behavior?
>
> P.S. I understand that all this is subject to (thread) scheduler policy,
> but I think that what I expect is more reasonable, at least for my
> application.

Daniel Eischen has just kindly notified me that the code (as an attachment) didn't make it to the list, so here it is inline.

#include <stdio.h>
#include <unistd.h>
#include <stdlib.h>
#include <pthread.h>

pthread_mutex_t mutex;
int count = 0;

/* Thread #1: repeatedly take the mutex, hold it for a second, then drop
 * it and immediately loop around to take it again. */
static void *
thrfunc(void *arg)
{
	while (1) {
		pthread_mutex_lock(&mutex);
		count++;
		if (count > 10) {
			fprintf(stderr, "you have a BROKEN thread scheduler!!!\n");
			exit(1);
		}
		sleep(1);
		pthread_mutex_unlock(&mutex);
	}
}

int
main(void)
{
	pthread_t thr;

#if 0
	pthread_mutexattr_t attr;
	pthread_mutexattr_init(&attr);
	pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE_NP);
	pthread_mutex_init(&mutex, &attr);
#else
	pthread_mutex_init(&mutex, NULL);
#endif

	pthread_create(&thr, NULL, thrfunc, NULL);
	sleep(2);

	/* Thread #2: by now thread #1 holds the mutex, so this blocks; if
	 * we ever acquire it, the waiters were queued the way I expect. */
	pthread_mutex_lock(&mutex);
	count = 0;
	printf("you have good thread scheduler\n");
	pthread_mutex_unlock(&mutex);
	return 0;
}

-- 
Andriy Gapon
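(As a side experiment, the "BROKEN" result seems tied to thread #1 immediately winning the re-acquire race. A variant of the loop that gives up the CPU after the unlock may behave differently, though nothing guarantees it; this sketch reuses the globals above, and while sched_yield() is POSIX, its effect here is entirely scheduler-dependent.)

#include <sched.h>	/* sched_yield(); add to the includes above */

/* Same loop as thrfunc(), but yielding after the unlock so a blocked
 * waiter gets a chance to run before we try to re-acquire the mutex. */
static void *
thrfunc_yielding(void *arg)
{
	while (1) {
		pthread_mutex_lock(&mutex);
		count++;
		sleep(1);
		pthread_mutex_unlock(&mutex);
		sched_yield();	/* may let the waiter lock first -- not guaranteed */
	}
}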
> I am trying the small attached program on FreeBSD 6.3 (amd64,
> SCHED_4BSD) and 7-STABLE (i386, SCHED_ULE), both with libthr as the
> threads library, and on both it produces the "BROKEN" message.
>
> I compile this program as follows:
> cc sched_test.c -o sched_test -pthread
>
> I believe that the behavior I observe is broken because: if thread #1
> releases a mutex and then tries to re-acquire it while thread #2 was
> already blocked waiting on that mutex, then thread #1 should be queued
> after thread #2 in the mutex's waiter list.
>
> Is there any option (thread scheduler, etc.) that I could try to achieve
> the "good" behavior?
>
> P.S. I understand that all this is subject to (thread) scheduler policy,
> but I think that what I expect is more reasonable, at least for my
> application.
>
> -- 
> Andriy Gapon

Are you out of your mind?! You are specifically asking for the absolute worst possible behavior!

If you have fifty tiny things to do on one side of the room and fifty tiny things to do on the other side, do you cross the room after each one? Of course not. That would be *ludicrous*.

If you want/need strict alternation, feel free to code it. But it's the maximally inefficient scheduler behavior, and it sure as hell had better not be the default.

DS
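(Coding strict alternation is straightforward with a condition variable and a turn counter. Here is a minimal sketch; NTHREADS, take_turn() and the other names are made up for illustration, not any library API.)

#include <pthread.h>

#define NTHREADS 2

static pthread_mutex_t turn_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  turn_cv   = PTHREAD_COND_INITIALIZER;
static int turn = 0;		/* id of the thread allowed to go next */

/* Block until it is thread `id`'s turn, run the work, then pass the
 * turn to the next thread in the cycle. */
static void
take_turn(int id, void (*work)(void))
{
	pthread_mutex_lock(&turn_lock);
	while (turn != id)
		pthread_cond_wait(&turn_cv, &turn_lock);
	work();
	turn = (turn + 1) % NTHREADS;
	pthread_cond_broadcast(&turn_cv);	/* wake all; only the next id proceeds */
	pthread_mutex_unlock(&turn_lock);
}

Each of the NTHREADS threads calls take_turn(id, work) in a loop, and the threads then run their work functions in strict round-robin order, with exactly the per-handoff context switch cost described above.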
Andriy Gapon wrote:
> I am trying the small attached program on FreeBSD 6.3 (amd64,
> SCHED_4BSD) and 7-STABLE (i386, SCHED_ULE), both with libthr as the
> threads library, and on both it produces the "BROKEN" message.
>
> I compile this program as follows:
> cc sched_test.c -o sched_test -pthread
>
> I believe that the behavior I observe is broken because: if thread #1
> releases a mutex and then tries to re-acquire it while thread #2 was
> already blocked waiting on that mutex, then thread #1 should be queued
> after thread #2 in the mutex's waiter list.
>
> Is there any option (thread scheduler, etc.) that I could try to achieve
> the "good" behavior?
>
> P.S. I understand that all this is subject to (thread) scheduler policy,
> but I think that what I expect is more reasonable, at least for my
> application.

In fact, libthr tries to avoid this convoying: if thread #1 hands ownership directly to thread #2, it causes lots of context switches. In an ideal world I would let thread #1 run until it exhausts its time slice, and at the end of that slice thread #2 would get the mutex ownership. Of course it is difficult to make this work on SMP, but on UP I would expect the result to be close enough if the thread scheduler is sane. So we do not raise priority in the kernel umtx code when a thread blocks; this gives thread #1 some time to re-acquire the mutex without context switches and increases throughput.

Regards,
David Xu
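(To make the contrast concrete, here is a toy model of the two unlock policies. Every type and helper below is made up for illustration only; this is not libthr source.)

struct toy_thread;			/* stand-in for a blocked thread */
struct toy_mutex {
	struct toy_thread *owner;	/* NULL when unowned */
	int nwaiters;			/* threads blocked on this mutex */
};

/* Assumed primitives, declared but not defined in this sketch. */
struct toy_thread *dequeue_waiter(struct toy_mutex *m);
void switch_to(struct toy_thread *t);
void wake(struct toy_thread *t);

/* Direct handoff: ownership moves to the head waiter at unlock time, so
 * the unlocker cannot simply re-lock; every contended unlock costs a
 * context switch. */
void
unlock_handoff(struct toy_mutex *m)
{
	if (m->nwaiters > 0) {
		m->nwaiters--;
		m->owner = dequeue_waiter(m);
		switch_to(m->owner);
	} else
		m->owner = NULL;
}

/* Greedy unlock (closer to what libthr does): release and wake one
 * waiter without boosting it; whoever runs first -- usually the
 * unlocker, which is already on the CPU -- wins the race to lock again. */
void
unlock_greedy(struct toy_mutex *m)
{
	m->owner = NULL;
	if (m->nwaiters > 0) {
		m->nwaiters--;
		wake(dequeue_waiter(m));	/* no forced context switch */
	}
}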
On Wed, 14 May 2008, Andriy Gapon wrote:

> I believe that the behavior I observe is broken because: if thread #1
> releases a mutex and then tries to re-acquire it while thread #2 was
> already blocked waiting on that mutex, then thread #1 should be queued
> after thread #2 in the mutex's waiter list.

The behavior of scheduling with respect to mutex contention (apart from pthread_mutexattr_setprotocol()) is not specified by POSIX, to the best of my knowledge, and thus is left to the discretion of the implementation.

> Is there any option (thread scheduler, etc.) that I could try to achieve
> the "good" behavior?

No portable mechanism, and no mechanism in the operating systems with which I am familiar. That said, as the behavior is not specified by POSIX, there would be nothing preventing an implementation from providing this as an optional behavior through a custom pthread_mutexattr_???_np() interface.

> P.S. I understand that all this is subject to (thread) scheduler policy,
> but I think that what I expect is more reasonable, at least for my
> application.

As other responders have indicated, the behavior you desire is as suboptimal as possible for the general case. If your application would truly benefit from this sort of queueing behavior, I'd suggest either that you implement your own mechanism to accomplish the queueing (probably the easier fix), or that you redesign the threading architecture of your application in a manner that avoids this problem (probably the more difficult fix).

Brent

-- 
Brent Casavant				Dance like everybody should be watching.
www.angeltread.org
KD5EMB, EN34lv
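(For the first option, the queueing can be layered on top of the portable primitives. Below is a minimal sketch of a FIFO "ticket" lock; fifo_lock and the function names are illustrative, not an existing API.)

#include <pthread.h>

/* Waiters are served strictly in arrival order: each acquirer draws a
 * ticket and blocks until its number comes up. */
struct fifo_lock {
	pthread_mutex_t m;
	pthread_cond_t  cv;
	unsigned long   next_ticket;	/* next ticket to hand out */
	unsigned long   now_serving;	/* ticket allowed to proceed */
};

#define FIFO_LOCK_INITIALIZER \
	{ PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 0 }

static void
fifo_acquire(struct fifo_lock *fl)
{
	pthread_mutex_lock(&fl->m);
	unsigned long my_ticket = fl->next_ticket++;
	while (fl->now_serving != my_ticket)
		pthread_cond_wait(&fl->cv, &fl->m);
	pthread_mutex_unlock(&fl->m);
}

static void
fifo_release(struct fifo_lock *fl)
{
	pthread_mutex_lock(&fl->m);
	fl->now_serving++;
	pthread_cond_broadcast(&fl->cv);	/* wake all; only the next ticket runs */
	pthread_mutex_unlock(&fl->m);
}

With this, the test program's critical sections become fifo_acquire(&lock); ... fifo_release(&lock);, and thread #1, drawing a fresh ticket after thread #2 is already waiting, queues behind it exactly as the original poster expected.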