Displaying 20 results from an estimated 72 matches for "schedule_timeout_interruptible".
2014 Sep 14
3
[PATCH 2/2] virtio-rng: fix stuck in catting hwrng attributes
...it's userspace context (not interrupt context).
Userspace context doesn't allow other user contexts to run on that CPU,
unless the kernel code sleeps for some reason.
In this case, need_resched() doesn't work.
My solution is to remove need_resched() and use an appropriate delay via
schedule_timeout_interruptible(10).
Thanks, Amos
> > If we're really high priority (vs. the sysfs process) then I can see why
> > we'd need schedule_timeout_interruptible() instead of just schedule(),
> > and in that case, need_resched() would be false too.
> >
> > You could argue that's...
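As context for the exchange above, a minimal sketch of what the rng_dev_read() loop tail looks like after Amos's change, condensed from the diffs quoted later in these threads (the helper name rng_read_iteration() and the elided copy-out are assumptions, not the actual driver code):

#include <linux/mutex.h>
#include <linux/sched.h>
#include <linux/errno.h>

static DEFINE_MUTEX(rng_mutex);

/* Hypothetical, condensed iteration of the rng_dev_read() loop; the real
 * code in drivers/char/hw_random/core.c copies entropy to a user buffer. */
static int rng_read_iteration(void)
{
	mutex_lock(&rng_mutex);
	/* ... read from the hardware RNG into the user's buffer ... */
	mutex_unlock(&rng_mutex);

	/* Was: if (need_resched()) schedule_timeout_interruptible(1);
	 * Now: always sleep for 10 jiffies, so that on a UP guest a sysfs
	 * reader blocked on rng_mutex gets a chance to run. */
	schedule_timeout_interruptible(10);

	if (signal_pending(current))
		return -ERESTARTSYS;
	return 0;
}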
2014 Sep 11
2
[PATCH 2/2] virtio-rng: fix stuck in catting hwrng attributes
...). But need_resched()
> doesn't work because rng_dev_read() is executing in user context.
I don't understand this explanation? I'd expect the sysfs process to be
woken by the mutex_unlock().
If we're really high priority (vs. the sysfs process) then I can see why
we'd need schedule_timeout_interruptible() instead of just schedule(),
and in that case, need_resched() would be false too.
You could argue that's intended behaviour, but I can't see how it
happens in the normal case anyway.
What am I missing?
Thanks,
Rusty.
> This patch removed need_resched() and increased the delay to 10 jiffi...
2014 Sep 15
1
[PATCH 2/2] virtio-rng: fix stuck in catting hwrng attributes
...> > > > > @@ -195,8 +195,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
> > > > >
> > > > > mutex_unlock(&rng_mutex);
> > > > >
> > > > > - if (need_resched())
> > > > > - schedule_timeout_interruptible(1);
> > > > > + schedule_timeout_interruptible(10);
If cond_resched() does not work, it is a bug elsewhere.
> The problem only occurred in a non-SMP guest; we can improve it to:
>
> if (!is_smp())
> schedule_timeout_in...
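For reference, the distinction the reviewer is drawing: cond_resched() only yields when the scheduler has already flagged the task, while schedule_timeout_interruptible() sleeps unconditionally. A sketch of the two options after dropping the lock (assumed context, not a quoted patch):

	mutex_unlock(&rng_mutex);

	/* Option 1: yields the CPU only if TIF_NEED_RESCHED is set;
	 * otherwise a no-op, which is why the hang can persist. */
	cond_resched();

	/* Option 2: unconditionally sleeps for at least one jiffy,
	 * letting the stuck sysfs reader take rng_mutex. */
	schedule_timeout_interruptible(1);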
2005 Aug 15
3
[-mm PATCH 2/32] fs: fix-up schedule_timeout() usage
...fs/cifsfs.c 2005-08-07 09:57:37.000000000 -0700
+++ 2.6.13-rc5-mm1-dev/fs/cifs/cifsfs.c 2005-08-10 15:03:11.000000000 -0700
@@ -781,14 +781,11 @@ static int cifs_oplock_thread(void * dum
oplockThread = current;
do {
- set_current_state(TASK_INTERRUPTIBLE);
-
- schedule_timeout(1*HZ);
+ schedule_timeout_interruptible(1*HZ);
spin_lock(&GlobalMid_Lock);
if(list_empty(&GlobalOplock_Q)) {
spin_unlock(&GlobalMid_Lock);
- set_current_state(TASK_INTERRUPTIBLE);
- schedule_timeout(39*HZ);
+ schedule_timeout_interruptible(39*HZ);
} else {
oplock_item = list_entry(GlobalOplock_Q.next,...
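The fix-up above is behavior-preserving because schedule_timeout_interruptible() is a thin wrapper over the two-step pattern it replaces; its kernel implementation is essentially:

signed long __sched schedule_timeout_interruptible(signed long timeout)
{
	__set_current_state(TASK_INTERRUPTIBLE);
	return schedule_timeout(timeout);
}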
2014 Sep 10
5
[PATCH 0/2] fix stuck in catting hwrng attributes
If we read hwrng with a long-running dd process, it takes too much CPU time.
When we check hwrng attributes from sysfs with cat, it gets stuck.
The problem can only be reproduced on a non-SMP guest with a slow backend.
This patchset changes the hwrng core to always delay 10 jiffies, so the cat
process has a chance to execute the protected code and the problem is resolved.
Thanks.
Amos Kong (2):
virtio-rng cleanup: move
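The contention described above: the sysfs handlers take the same rng_mutex that the dd reader re-acquires almost immediately on a UP guest. A hypothetical, simplified sketch of the reader side that cat ends up in (the real handler formats the current rng's name; this body is illustrative):

static ssize_t hwrng_attr_current_show(struct device *dev,
				       struct device_attribute *attr,
				       char *buf)
{
	int err;

	/* Blocks here while the dd loop holds rng_mutex almost
	 * continuously, which is the reported hang. */
	err = mutex_lock_interruptible(&rng_mutex);
	if (err)
		return -ERESTARTSYS;
	/* ... format the current rng's name into buf ... */
	mutex_unlock(&rng_mutex);

	return strlen(buf);
}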
2014 Sep 10
2
RFC virtio-rng: fail to read sysfs of a busy device
...rivers/char/hw_random/core.c
> @@ -194,6 +194,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
> }
>
> mutex_unlock(&rng_mutex);
> + udelay(100);
We have a need_resched() right below. Why doesn't that work?
> if (need_resched())
> schedule_timeout_interruptible(1);
> @@ -233,10 +234,10 @@ static ssize_t hwrng_attr_current_store(struct device *dev,
> int err;
> struct hwrng *rng;
The following hunk doesn't work:
> + err = -ENODEV;
> err = mutex_lock_interruptible(&rng_mutex);
err is being set to another value in the next lin...
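The review point is that the added assignment is dead: whatever err holds before mutex_lock_interruptible() is immediately overwritten by its return value. A sketch of the ordering that actually keeps -ENODEV as the default for the list search, which is what the unpatched code (quoted in the "mutex" thread below) does:

	err = mutex_lock_interruptible(&rng_mutex);
	if (err)
		return -ERESTARTSYS;

	err = -ENODEV;	/* default result if no entry in rng_list matches */
	list_for_each_entry(rng, &rng_list, list) {
		/* ... on a match: set err to the op's result and break ... */
	}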
2014 Sep 15
2
[PATCH v2 3/3] hw_random: increase schedule timeout in rng_dev_read()
...b/drivers/char/hw_random/core.c
> index 263a370..b5d1b6f 100644
> --- a/drivers/char/hw_random/core.c
> +++ b/drivers/char/hw_random/core.c
> @@ -195,7 +195,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
>
> mutex_unlock(&rng_mutex);
>
> - schedule_timeout_interruptible(1);
> + schedule_timeout_interruptible(10);
>
> if (signal_pending(current)) {
> err = -ERESTARTSYS;
Does a schedule of 1 ms or 10 ms decrease the throughput?
I think we need some benchmarks.
--
Michael
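One nit on the units in the question above: schedule_timeout_interruptible() takes jiffies, not milliseconds, so a timeout of 1 is 1 ms only at HZ=1000 (and 10 ms at HZ=100). A sketch of an HZ-independent way to express roughly a 10 ms sleep:

#include <linux/jiffies.h>

	/* msecs_to_jiffies() converts milliseconds into the tick count
	 * schedule_timeout_interruptible() expects for any CONFIG_HZ. */
	schedule_timeout_interruptible(msecs_to_jiffies(10));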
2014 Sep 15
7
[PATCH v2 0/3] fix stuck in accessing hwrng attributes
If we read hwrng with a long-running dd process, it takes too much CPU
time and almost always holds the mutex lock. When we check hwrng
attributes from sysfs with cat, it gets stuck waiting for the lock to be
released.
The problem can only be reproduced on a non-SMP guest with a slow backend.
This patchset resolves the issue by changing rng_dev_read() to always
schedule 10 jiffies after releasing the mutex lock, then cat
2014 Sep 10
0
RFC virtio-rng: fail to read sysfs of a busy device
...> > mutex_unlock(&rng_mutex);
> > + udelay(100);
>
> We have a need_resched() right below. Why doesn't that work?
need_resched() is giving a chance for userspace to
> > if (need_resched())
It never succeeds in my debugging.
If we remove this check and always call schedule_timeout_interruptible(1),
the problem also disappears.
diff --git a/drivers/char/hw_random/core.c b/drivers/char/hw_random/core.c
index aa30a25..263a370 100644
--- a/drivers/char/hw_random/core.c
+++ b/drivers/char/hw_random/core.c
@@ -195,8 +195,7 @@ static ssize_t rng_dev_read(struct file *filp,
char __user *buf,...
2014 Sep 13
0
[PATCH 2/2] virtio-rng: fix stuck in catting hwrng attributes
...don't understand this explanation? I'd expect the sysfs process to be
> woken by the mutex_unlock().
But actually the sysfs process is not always woken; this is why the
process gets stuck.
> If we're really high priority (vs. the sysfs process) then I can see why
> we'd need schedule_timeout_interruptible() instead of just schedule(),
> and in that case, need_resched() would be false too.
>
> You could argue that's intended behaviour, but I can't see how it
> happens in the normal case anyway.
>
> What am I missing?
>
> Thanks,
> Rusty.
>
> > This patc...
2014 Sep 14
0
[PATCH 2/2] virtio-rng: fix stuck in catting hwrng attributes
.../drivers/char/hw_random/core.c
> > > > @@ -195,8 +195,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
> > > >
> > > > mutex_unlock(&rng_mutex);
> > > >
> > > > - if (need_resched())
> > > > - schedule_timeout_interruptible(1);
> > > > + schedule_timeout_interruptible(10);
The problem only occurred in a non-SMP guest; we can improve it to:
if (!is_smp())
        schedule_timeout_interruptible(10);
is_smp() is only available on the ARM arch; we need a general one....
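As a sketch of the "general one" being asked for: num_online_cpus() is available on every architecture, so a hypothetical portable version of the guard might look like this (an assumption, not a posted patch):

#include <linux/cpumask.h>

	/* Hypothetical replacement for the ARM-only is_smp(): force the
	 * sleep only when a single CPU is online, i.e. on a UP guest. */
	if (num_online_cpus() == 1)
		schedule_timeout_interruptible(10);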
2014 Sep 16
0
[PATCH v2 3/3] hw_random: increase schedule timeout in rng_dev_read()
...> > index 263a370..b5d1b6f 100644
> > --- a/drivers/char/hw_random/core.c
> > +++ b/drivers/char/hw_random/core.c
> > @@ -195,7 +195,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
> >
> > mutex_unlock(&rng_mutex);
> >
> > - schedule_timeout_interruptible(1);
> > + schedule_timeout_interruptible(10);
> >
> > if (signal_pending(current)) {
> > err = -ERESTARTSYS;
>
> Does a schedule of 1 ms or 10 ms decrease the throughput?
In my test environment, 1 jiffy always works (100%); as suggested by
Amit, 10 jiffies is...
2014 Sep 09
2
mutex
...ivers/char/hw_random/core.c
index aa30a25..fa69020 100644
--- a/drivers/char/hw_random/core.c
+++ b/drivers/char/hw_random/core.c
@@ -194,6 +194,7 @@ static ssize_t rng_dev_read(struct file *filp, char __user *buf,
}
mutex_unlock(&rng_mutex);
+ udelay(100);
if (need_resched())
schedule_timeout_interruptible(1);
@@ -233,10 +234,10 @@ static ssize_t hwrng_attr_current_store(struct device *dev,
int err;
struct hwrng *rng;
+ err = -ENODEV;
err = mutex_lock_interruptible(&rng_mutex);
if (err)
return -ERESTARTSYS;
- err = -ENODEV;
list_for_each_entry(rng, &rng_list, list) {
if (str...