Displaying 20 results from an estimated 124 matches for "prolong".
2006 May 25
0
Prolonged transfers turn hard drive into read-only.
Good day,
I'm hoping someone can provide me with some direction to get a
(possibly) samba issue fixed.
After large/long transfers, usually a couple hundred megs or more, the disc
I'm transferring the files to turns into a read-only drive. I have to
unmount and remount the drive for it to be writable again. I'm not a
linux expert, so I'm sure I didn't set up everything optimally.
2014 Apr 18
1
[PATCH v9 06/19] qspinlock: prolong the stay in the pending bit path
On Thu, Apr 17, 2014 at 09:46:04PM -0400, Waiman Long wrote:
> BTW, I didn't test out your atomic_test_and_set() change. Did it provide a
> noticeable performance benefit when compared with cmpxchg()?
I've not tested that I think. I had a hard time showing that cmpxchg
loops were slower, but once I did, I simply replaced all cmpxchg loops
with unconditional ops where possible.
The
2014 May 10
0
[PATCH v10 06/19] qspinlock: prolong the stay in the pending bit path
On 05/08/2014 02:58 PM, Peter Zijlstra wrote:
> On Wed, May 07, 2014 at 11:01:34AM -0400, Waiman Long wrote:
>> @@ -221,11 +222,37 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
>> */
>> for (;;) {
>> /*
>> - * If we observe any contention; queue.
>> + * If we observe that the queue is not empty,
>> + * return
2014 Jun 12
0
[PATCH v11 06/16] qspinlock: prolong the stay in the pending bit path
On 06/12/2014 02:00 AM, Peter Zijlstra wrote:
> On Wed, Jun 11, 2014 at 05:22:28PM -0400, Long, Wai Man wrote:
>
>>>> @@ -233,11 +233,25 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
>>>> */
>>>> for (;;) {
>>>> /*
>>>> - * If we observe any contention; queue.
>>>> + * If we observe
2014 Jun 11
0
[PATCH v11 06/16] qspinlock: prolong the stay in the pending bit path
On 6/11/2014 6:26 AM, Peter Zijlstra wrote:
> On Fri, May 30, 2014 at 11:43:52AM -0400, Waiman Long wrote:
>> ---
>> kernel/locking/qspinlock.c | 18 ++++++++++++++++--
>> 1 files changed, 16 insertions(+), 2 deletions(-)
>>
>> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
>> index fc7fd8c..7f10758 100644
>> ---
2014 May 30
0
[PATCH v11 06/16] qspinlock: prolong the stay in the pending bit path
There is a problem in the current pending bit spinning code. When the
lock is free, but the pending bit holder hasn't grabbed the lock &
cleared the pending bit yet, the spinning code will not be run.
As a result, the regular queuing code path might be used most of
the time even when there are only 2 tasks contending for the lock.
Assuming that the pending bit holder is going to get the
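For readers skimming these results, the change the patch series describes boils down to the decision loop below. This is a simplified, standalone C11-atomics sketch, not the actual kernel code; the macro names, bit layout, and function name are assumptions made for illustration, loosely following the qspinlock word layout of a lock bit, a pending bit, and an MCS queue tail field.

#include <stdatomic.h>
#include <stdbool.h>
#include <stdint.h>

/* Assumed, simplified layout of the lock word (illustration only):
 * bit 0       - locked
 * bit 8       - pending
 * bits 16-31  - MCS queue tail (non-zero means the queue is not empty)
 */
#define DEMO_LOCKED_VAL   (1u << 0)
#define DEMO_PENDING_VAL  (1u << 8)
#define DEMO_TAIL_MASK    (~0u << 16)

struct demo_qspinlock { _Atomic uint32_t val; };

/* Decide whether a contender should fall back to the MCS queue or keep
 * trying the pending-bit fast path.  Returns true when it should queue. */
bool demo_should_queue(struct demo_qspinlock *lock)
{
	uint32_t val = atomic_load_explicit(&lock->val, memory_order_relaxed);

	for (;;) {
		/* Queue only if the queue is already non-empty, or if both
		 * the pending and lock bits are set (two waiters ahead). */
		if ((val & DEMO_TAIL_MASK) ||
		    (val == (DEMO_LOCKED_VAL | DEMO_PENDING_VAL)))
			return true;

		/* Lock is free but the pending bit is still set: the pending
		 * holder is expected to take the lock and clear the bit soon,
		 * so wait here instead of queuing (the "prolonged stay"). */
		if (val == DEMO_PENDING_VAL) {
			val = atomic_load_explicit(&lock->val,
						   memory_order_relaxed);
			continue;
		}

		/* Lock word is 0 or only the lock bit is set: stay on the
		 * pending-bit path (caller tries to set the pending bit). */
		return false;
	}
}

The point of the change is the middle branch: before it, any non-zero bits besides the lock bit, including a lone pending bit, pushed the second contender straight into the queue.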
2014 May 07
0
[PATCH v10 06/19] qspinlock: prolong the stay in the pending bit path
There is a problem in the current trylock_pending() function. When the
lock is free, but the pending bit holder hasn't grabbed the lock &
cleared the pending bit yet, the trylock_pending() function will fail.
As a result, the regular queuing code path will be used most of
the time even when there are only 2 tasks contending for the lock.
Assuming that the pending bit holder is going to get
2014 Apr 17
0
[PATCH v9 06/19] qspinlock: prolong the stay in the pending bit path
There is a problem in the current trylock_pending() function. When the
lock is free, but the pending bit holder hasn't grabbed the lock &
cleared the pending bit yet, the trylock_pending() function will fail.
As a result, the regular queuing code path will be used most of
the time even when there are only 2 tasks contending for the lock.
Assuming that the pending bit holder is going to get
2014 Apr 18
0
[PATCH v9 06/19] qspinlock: prolong the stay in the pending bit path
On 04/17/2014 12:36 PM, Peter Zijlstra wrote:
> On Thu, Apr 17, 2014 at 11:03:58AM -0400, Waiman Long wrote:
>> There is a problem in the current trylock_pending() function. When the
>> lock is free, but the pending bit holder hasn't grabbed the lock &
>> cleared the pending bit yet, the trylock_pending() function will fail.
> I remember seeing some of this..
>
2014 May 08
2
[PATCH v10 06/19] qspinlock: prolong the stay in the pending bit path
On Wed, May 07, 2014 at 11:01:34AM -0400, Waiman Long wrote:
> @@ -221,11 +222,37 @@ static inline int trylock_pending(struct qspinlock *lock, u32 *pval)
> */
> for (;;) {
> /*
> - * If we observe any contention; queue.
> + * If we observe that the queue is not empty,
> + * return and be queued.
> */
> - if (val & ~_Q_LOCKED_MASK)
> + if (val
2008 Dec 12
0
Is there anyone in charge of package wmtsa?
Here is another occurrence of a wmTSA internal error.
My time series is a short breathing cycle (2425-Cyle_9.txt).
Since wmtsa functions that extract extrema seem to expect longer series than I have, I tried the following two tricks:
1) I prolong the 1-cycle series on both ends by duplicating the first value (on the left end) and the last value (on the right end) as many times as the 1-cycle series length. Then I apply CWT to the prolonged series and then obtain the maxima trees and the peaks as follows:
aal <- rep (aa[1], length...
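The padding trick in 1) does not depend on R or wmtsa; it is plain endpoint repetition before the transform. A minimal sketch in C, with an illustrative function name and a double array standing in for the series (the truncated rep() snippet above appears to do the same in R):

#include <stdlib.h>
#include <string.h>

/* Extend a series of length n to length 3*n: the first value repeated n
 * times on the left, the original series in the middle, and the last value
 * repeated n times on the right.  The caller frees the result and applies
 * the transform (CWT in the post above) to it. */
double *pad_series(const double *aa, size_t n)
{
	if (n == 0)
		return NULL;

	double *padded = malloc(3 * n * sizeof *padded);
	if (!padded)
		return NULL;

	for (size_t i = 0; i < n; i++)
		padded[i] = aa[0];                 /* left pad  */
	memcpy(padded + n, aa, n * sizeof *aa);    /* original  */
	for (size_t i = 0; i < n; i++)
		padded[2 * n + i] = aa[n - 1];     /* right pad */

	return padded;
}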
2014 Jun 12
2
[PATCH v11 06/16] qspinlock: prolong the stay in the pending bit path
On Wed, Jun 11, 2014 at 05:22:28PM -0400, Long, Wai Man wrote:
> >>@@ -233,11 +233,25 @@ void queue_spin_lock_slowpath(struct qspinlock *lock, u32 val)
> >> */
> >> for (;;) {
> >> /*
> >>- * If we observe any contention; queue.
> >>+ * If we observe that the queue is not empty or both
> >>+ * the pending and lock bits
2014 Jun 11
2
[PATCH v11 06/16] qspinlock: prolong the stay in the pending bit path
On Fri, May 30, 2014 at 11:43:52AM -0400, Waiman Long wrote:
> ---
> kernel/locking/qspinlock.c | 18 ++++++++++++++++--
> 1 files changed, 16 insertions(+), 2 deletions(-)
>
> diff --git a/kernel/locking/qspinlock.c b/kernel/locking/qspinlock.c
> index fc7fd8c..7f10758 100644
> --- a/kernel/locking/qspinlock.c
> +++ b/kernel/locking/qspinlock.c
> @@ -233,11 +233,25
2008 Apr 27
4
Smoothing out effects / consistent effects every time
...nt each time.
Sometimes it is rough, sometimes it is smooth, and other times it
seems that the effect is almost random.
Thanks,
--
Leonard Burton, N9URK
http://www.jiffyslides.com
service-CbOvBfcOUrWrJCssh9Shfg@public.gmane.org
leonardburton-Re5JQEeQqe8AvxtiuMwx3w@public.gmane.org
"The prolonged evacuation would have dramatically affected the
survivability of the occupants."
2013 Jan 03
3
[LLVMdev] Does loop vectorizer inquire about target's SIMD capabilities?
...ilities of the
target architecture when it decides whether it is profitable to vectorize a
loop? I am asking this because I would like to have loop vectorization
disabled for targets that don't support SIMD instructions (for example,
standard mips32). Loop vectorization bloats the code size and prolongs
compilation time without any improvement to performance for such targets.
2014 Apr 17
2
[PATCH v9 06/19] qspinlock: prolong the stay in the pending bit path
On Thu, Apr 17, 2014 at 11:03:58AM -0400, Waiman Long wrote:
> There is a problem in the current trylock_pending() function. When the
> lock is free, but the pending bit holder hasn't grabbed the lock &
> cleared the pending bit yet, the trylock_pending() function will fail.
I remember seeing some of this..
> It can be seen that the queue spinlock is slower than the ticket