similar to: Spinlock with ZAPTEL

Displaying 20 results from an estimated 10000 matches similar to: "Spinlock with ZAPTEL"

2005 Jul 22
1
Re: zaptel make problems
On a different note, using Fedora Core 3 I get: CC [M] /usr/src/zaptel/zaptel.o /usr/src/zaptel/zaptel.c: In function `zt_chan_write': /usr/src/zaptel/zaptel.c:1745: warning: ignoring return value of `copy_from_user', declared with attribute warn_unused_result /usr/src/zaptel/zaptel.c: In function `ioctl_load_zone': /usr/src/zaptel/zaptel.c:2392: warning: ignoring return value of
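The warning appears because the kernel declares copy_from_user() with warn_unused_result, so its return value must be tested. Below is a minimal sketch of the usual pattern, not the actual zaptel fix; the function name and the 256-byte buffer are made up for illustration:

#include <linux/fs.h>
#include <linux/uaccess.h>

/* Hypothetical write handler; not from zaptel.c. */
static ssize_t example_chan_write(struct file *file, const char __user *usrbuf,
                                  size_t count, loff_t *ppos)
{
	char kbuf[256];

	if (count > sizeof(kbuf))
		count = sizeof(kbuf);

	/* copy_from_user() returns the number of bytes it could NOT copy;
	 * checking that value is what satisfies warn_unused_result. */
	if (copy_from_user(kbuf, usrbuf, count))
		return -EFAULT;

	/* ... process kbuf ... */
	return count;
}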
2014 Feb 26
0
[PATCH RFC v5 4/8] pvqspinlock, x86: Allow unfair spinlock in a real PV environment
Locking is always an issue in a virtualized environment, as the virtual CPU that is waiting on a lock may get scheduled out and hence block any progress in lock acquisition even when the lock has been freed. One solution to this problem is to allow an unfair lock in a para-virtualized environment. In this case, a new lock acquirer can come and steal the lock if the next-in-line CPU to get the lock is
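As a rough illustration of the idea in this snippet (a userspace C11 sketch, not the code from the pvqspinlock series), an unfair lock is essentially a test-and-set lock: whichever vCPU finds the lock free takes it immediately, so a waiter that has been scheduled out cannot hold up everyone queued behind it.

#include <stdatomic.h>

struct unfair_lock {
	atomic_flag locked;	/* initialise with ATOMIC_FLAG_INIT */
};

/* No queue and no fairness: any caller that sees the lock free wins it. */
static inline void unfair_lock_acquire(struct unfair_lock *l)
{
	while (atomic_flag_test_and_set_explicit(&l->locked,
						 memory_order_acquire))
		;	/* busy-wait; a real PV guest would yield to the hypervisor here */
}

static inline void unfair_lock_release(struct unfair_lock *l)
{
	atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

The obvious cost is starvation: nothing guarantees a long-spinning vCPU ever gets the lock, which is exactly the fairness trade-off debated in the rest of this thread.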
2014 Mar 12
0
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
Locking is always an issue in a virtualized environment, as the virtual CPU that is waiting on a lock may get scheduled out and hence block any progress in lock acquisition even when the lock has been freed. One solution to this problem is to allow an unfair lock in a para-virtualized environment. In this case, a new lock acquirer can come and steal the lock if the next-in-line CPU to get the lock is
2014 May 30
0
[PATCH v11 09/16] qspinlock, x86: Allow unfair spinlock in a virtual guest
Locking is always an issue in a virtualized environment because of two different types of problems: 1) lock holder preemption and 2) lock waiter preemption. One solution to the lock waiter preemption problem is to allow an unfair lock in a virtualized environment. In this case, a new lock acquirer can come and steal the lock if the next-in-line CPU to get the lock is scheduled out. A simple unfair queue
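A hedged sketch of the "steal first, queue second" idea this snippet describes (illustrative only; queued_lock_slowpath() is a placeholder for whatever fair queueing mechanism the real lock uses, not a function from the patch):

#include <stdatomic.h>
#include <stdbool.h>

struct qlock {
	atomic_int val;		/* 0 = free, 1 = held */
};

static inline bool qlock_trylock(struct qlock *l)
{
	int expected = 0;
	return atomic_compare_exchange_strong_explicit(&l->val, &expected, 1,
						       memory_order_acquire,
						       memory_order_relaxed);
}

void queued_lock_slowpath(struct qlock *l);	/* fair, MCS-style queue (omitted) */

static inline void qlock_lock(struct qlock *l)
{
	/* Lock stealing: grab the lock directly if it happens to be free,
	 * even though other CPUs may already be queued in the slow path. */
	if (qlock_trylock(l))
		return;
	queued_lock_slowpath(l);
}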
2014 May 07
0
[PATCH v10 10/19] qspinlock, x86: Allow unfair spinlock in a virtual guest
Locking is always an issue in a virtualized environment because of two different types of problems: 1) lock holder preemption and 2) lock waiter preemption. One solution to the lock waiter preemption problem is to allow an unfair lock in a virtualized environment. In this case, a new lock acquirer can come and steal the lock if the next-in-line CPU to get the lock is scheduled out. A simple unfair lock
2014 Feb 26
2
[PATCH RFC v5 4/8] pvqspinlock, x86: Allow unfair spinlock in a real PV environment
On Wed, Feb 26, 2014 at 10:14:24AM -0500, Waiman Long wrote: > Locking is always an issue in a virtualized environment as the virtual > CPU that is waiting on a lock may get scheduled out and hence block > any progress in lock acquisition even when the lock has been freed. > > One solution to this problem is to allow unfair lock in a > para-virtualized environment. In this case,
2015 Jun 16
0
[PATCH 0/6] x86: reduce paravirtualized spinlock overhead
AFAIK there are no outstanding questions for more than one month now. I'd appreciate some feedback, or these patches being accepted. Juergen On 04/30/2015 12:53 PM, Juergen Gross wrote: > Paravirtualized spinlocks produce some overhead even if the kernel is > running on bare metal. The main reason are the more complex locking > and unlocking functions. Especially unlocking is no longer
2014 Mar 19
2
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
On 03/18/2014 04:14 AM, Paolo Bonzini wrote: > Il 17/03/2014 20:05, Konrad Rzeszutek Wilk ha scritto: >>> > Measurements were done by Gleb for two guests running 2.6.32 with 16 >>> > vcpus each, on a 16-core system. One guest ran with unfair locks, >>> > one guest ran with fair locks. Two kernel compilations ("time make >> And when you say fair
2006 Jan 24
4
Asterisk with SuSe 10
Has anyone had any experience with Asterisk on a SuSE 10 platform? I'm currently using FC3, but because we use SuSE within other parts of the business I'm being pushed to change the OS. Regards Lee
2005 Jul 22
1
SATA
Has anyone had any problems with SATA, either on-board or a 3rd-party setup? I've currently got a problem where an AMD non-SATA FC2 system is working fine but an Intel system with a 3Ware SATA card and FC3 is randomly not syncing with the ISDN30. It allows and receives calls but at random intervals drops them. Regards Lee
2015 Feb 12
0
[PATCH V3] x86 spinlock: Fix memory corruption on completing completions
On Thu, Feb 12, 2015 at 05:17:27PM +0530, Raghavendra K T wrote: > Paravirt spinlock clears slowpath flag after doing unlock. > As explained by Linus currently it does: > prev = *lock; > add_smp(&lock->tickets.head, TICKET_LOCK_INC); > > /* add_smp() is a full mb() */ > > if
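The snippet is cut off, but the race it refers to is that add_smp() releases the lock the moment it bumps tickets.head: the next owner can then acquire, release, and even free the structure embedding the lock before this CPU re-reads it. An annotated paraphrase of that pre-fix unlock pattern (based on the lines quoted above, not the patch itself):

/*
 *	prev = *lock;					// snapshot head/tail
 *	add_smp(&lock->tickets.head, TICKET_LOCK_INC);	// lock is free from here on
 *	// another CPU may now take the lock, drop it, and free the object
 *	// that contains it ...
 *	if (unlikely(lock->tickets.tail & TICKET_SLOWPATH_FLAG))
 *		__ticket_unlock_slowpath(lock, prev);	// ... so this re-read of
 *							// *lock can hit freed memory
 */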
2015 Feb 15
0
[PATCH V5] x86 spinlock: Fix memory corruption on completing completions
On 02/15/2015 11:25 AM, Raghavendra K T wrote: > Paravirt spinlock clears slowpath flag after doing unlock. > As explained by Linus currently it does: > prev = *lock; > add_smp(&lock->tickets.head, TICKET_LOCK_INC); > > /* add_smp() is a full mb() */ > > if (unlikely(lock->tickets.tail &
2014 Mar 04
0
[PATCH RFC v5 4/8] pvqspinlock, x86: Allow unfair spinlock in a real PV environment
On 03/03/2014 05:55 AM, Paolo Bonzini wrote: > Il 28/02/2014 18:06, Waiman Long ha scritto: >> On 02/26/2014 12:07 PM, Konrad Rzeszutek Wilk wrote: >>> On Wed, Feb 26, 2014 at 10:14:24AM -0500, Waiman Long wrote: >>>> Locking is always an issue in a virtualized environment as the virtual >>>> CPU that is waiting on a lock may get scheduled out and hence
2014 Mar 18
0
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
Il 17/03/2014 20:05, Konrad Rzeszutek Wilk ha scritto: >> > Measurements were done by Gleb for two guests running 2.6.32 with 16 >> > vcpus each, on a 16-core system. One guest ran with unfair locks, >> > one guest ran with fair locks. Two kernel compilations ("time make > And when you say fair locks are you saying PV ticketlocks or generic > ticketlocks?
2014 Mar 19
0
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
Il 19/03/2014 04:15, Waiman Long ha scritto: >>> You should see the same values with the PV ticketlock. It is not clear >>> to me if this testing did include that variant of locks? >> >> Yes, PV is fine. But up to this point of the series, we are concerned >> about spinlock performance when running on an overcommitted hypervisor >> that doesn't
2014 Mar 19
1
[PATCH v6 05/11] pvqspinlock, x86: Allow unfair spinlock in a PV guest
On 03/19/2014 06:07 AM, Paolo Bonzini wrote: > Il 19/03/2014 04:15, Waiman Long ha scritto: >>>> You should see the same values with the PV ticketlock. It is not clear >>>> to me if this testing did include that variant of locks? >>> >>> Yes, PV is fine. But up to this point of the series, we are concerned >>> about spinlock performance when