Displaying 20 results from an estimated 96 matches for "rmw".
2013 May 23
11
raid6: rmw writes all the time?
Hi all,
we got a new test system here and I just also tested btrfs raid6 on
that. Write performance is slightly lower than hw-raid (LSI megasas) and
md-raid6, but it probably would be much better than any of these two, if
it wouldn't read all the time during the writes. Is this a known issue? This
is with linux-3.9.2.
Thanks,
Bernd
2012 May 16
0
[RESEND PATCH] Btrfs: set ioprio of scrub readahead to idle
...tree, and so it also defines (optimal)
* block layout.
diff --git a/fs/btrfs/reada.c b/fs/btrfs/reada.c
index ac5d010..48a4882 100644
--- a/fs/btrfs/reada.c
+++ b/fs/btrfs/reada.c
@@ -718,13 +718,18 @@ static void reada_start_machine_worker(struct btrfs_work *work)
{
struct reada_machine_work *rmw;
struct btrfs_fs_info *fs_info;
+ int old_ioprio;
rmw = container_of(work, struct reada_machine_work, work);
fs_info = rmw->fs_info;
kfree(rmw);
+ old_ioprio = IOPRIO_PRIO_VALUE(task_nice_ioclass(current),
+ task_nice_ioprio(current));
+ set_task_ioprio(current, BTRFS_IOP...
2011 Jun 29
14
[PATCH v4 0/6] btrfs: generic readahead interface
This series introduces a generic readahead interface for btrfs trees.
The intention is to use it to speed up scrub in a first run, but balance
is another hot candidate. In general, every tree walk could be accompanied
by a readahead. Deletion of large files comes to mind, where the fetching
of the csums takes most of the time.
Also the initial build-ups of free-space-caches and
2015 Apr 02
3
[PATCH 8/9] qspinlock: Generic paravirt support
On Thu, Apr 02, 2015 at 12:28:30PM -0400, Waiman Long wrote:
> On 04/01/2015 05:03 PM, Peter Zijlstra wrote:
> >On Wed, Apr 01, 2015 at 03:58:58PM -0400, Waiman Long wrote:
> >>On 04/01/2015 02:48 PM, Peter Zijlstra wrote:
> >>I am sorry that I don't quite get what you mean here. My point is that in
> >>the hashing step, a cpu will need to scan an empty
2015 Apr 02
0
[PATCH 8/9] qspinlock: Generic paravirt support
...;
for (;;) {
for (loop = SPIN_THRESHOLD; loop; loop--) {
@@ -126,29 +207,47 @@ static void pv_wait_head(struct qspinloc
cpu_relax();
}
- this_cpu_write(__pv_lock_wait, lock);
- /*
- * __pv_lock_wait must be set before setting _Q_SLOW_VAL
- *
- * [S] __pv_lock_wait = lock [RmW] l = l->locked = 0
- * MB MB
- * [S] l->locked = _Q_SLOW_VAL [L] __pv_lock_wait
- *
- * Matches the xchg() in pv_queue_spin_unlock().
- */
- if (!cmpxchg(&l->locked, _Q_LOCKED_VAL, _Q_SLOW_VAL))
- goto done;
+ if (!hb) {
+ hb = pv_hash...
2017 Feb 28
2
rL296252 Made large integer operation codegen significantly worse.
I see we're missing an isel pattern for add producing carry and doing a
memory RMW. I'm going to see if adding that helps anything.
~Craig
On Mon, Feb 27, 2017 at 8:47 PM, Nirav Davé via llvm-dev <
llvm-dev at lists.llvm.org> wrote:
> Yes. I'm seeing that as well. Not clear what's going on.
>
> In any case it looks to be unrelated to the alias analys...
2012 Oct 01
2
enquiry
hi,
I am new to using R. I have 2 datasets with dates common in them; how
can I take out the common dates within them?
karan
[[alternative HTML version deleted]]
2012 Oct 25
3
Change to daily digest
Hello folks,
I am currently receiving a lot of emails from the list, which proves that this is a very important place to get good feedback and tips and that the community is here to help. Excellent thing.
I am, however, not able to log in to my subscriber space to change the email reception to a daily digest, or I am not looking in the right place; if someone can point me to the right URL that
2011 Jun 10
6
[PATCH v2 0/6] btrfs: generic readahead interface
This series introduces a generic readahead interface for btrfs trees.
The intention is to use it to speed up scrub in a first run, but balance
is another hot candidate. In general, every tree walk could be accompanied
by a readahead. Deletion of large files comes to mind, where the fetching
of the csums takes most of the time.
Also the initial build-ups of free-space-caches and
2015 Feb 28
1
OT: AF 4k sector drives with 512 emulation
...ives. That
pdf states that they come in 2,3,4 TB models. (A6 in the model # represents
512n). But there are almost no reviews on these HGST native 512n drives
online.
> 4Kn drives are appearing now also. I don't expect these drives to be
> bootable except possibly by systems with UEFI firmware. It's also
> possible hardware RAID will reject them unless explicitly supported.
>
> http://www.hgst.com/tech/techlib.nsf/techdocs/29C9312E3B7D10CE88257D41000D8D16/$file/Ultrastar-7K6000-DS.pdf
>
>
> > Some have better 512e emulation than others. Looking for some advice...
2013 Feb 18
1
RAID5/6 Implementation - Understanding first
...hare the same stripe.
As I understand it, what we are doing is trying to hide the underlying extents architecture to gain some advantages in the higher level code. I have been digging in the code, and believe I know the answer to this question. So by "higher levels" does this mean that RMW, snapshots, checksums and duplicate detection are all unaware of RAID architecture?
If so, I might have some points to consider in this space. If not, I will need to dig deeper in the code to understand how some of my concerns can be realized and how I missed the answer to my question.
Thank you...
2015 Mar 18
2
[PATCH 8/9] qspinlock: Generic paravirt support
...READ_ONCE(node->locked))
> + goto done;
> +
> + cpu_relax();
> + }
> +
> + /*
> + * Order pn->state vs pn->locked thusly:
> + *
> + * [S] pn->state = vcpu_halted [S] next->locked = 1
> + * MB MB
> + * [L] pn->locked [RmW] pn->state = vcpu_running
> + *
> + * Matches the xchg() from pv_kick_node().
> + */
> + (void)xchg(&pn->state, vcpu_halted);
> +
> + if (READ_ONCE(node->locked))
> + goto done;
> +
> + pv_wait(&pn->state, vcpu_halted);
> + }
> +done:...
2012 Oct 22
4
help stored permanently
Hi,
Each > help.start() generates a new tree of the R help system, somewhere
in 127.0.0.1:xxx, each xxx being different. This tree disappears when
exiting R. How can the current help tree be copied to a permanent place for
reference outside a running R? This would be practical for not having to
enter M-x R .
TIA
--Christian
--
Christian W. Hoffmann,
CH - 8915 Hausen am Albis, Switzerland
2015 Apr 13
1
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
...t; >>+ }
> >>+
> >>+ WRITE_ONCE(pn->state, vcpu_halted);
> >>+ if (!lp)
> >>+ lp = pv_hash(lock, pn);
> >>+ /*
> >>+ * lp must be set before setting _Q_SLOW_VAL
> >>+ *
> >>+ * [S] lp = lock [RmW] l = l->locked = 0
> >>+ * MB MB
> >>+ * [S] l->locked = _Q_SLOW_VAL [L] lp
> >>+ *
> >>+ * Matches the cmpxchg() in pv_queue_spin_unlock().
> >>+ */
> >>+ if (!slow_set&&
> >>...
2020 Jul 23
2
RFC: nbdkit block size advertisement
I'm thinking of adding one or more callbacks to nbdkit to let
plugins/filters enforce various block size alignments (for example, the
swab filter requires 2/4/8 alignment, or VDDK requires 512 alignment,
etc). The NBD protocol currently has NBD_INFO_BLOCK_SIZE which can be
sent in reply to NBD_OPT_GO to tell the client about sizing constraints;
qemu already implements it as both client
2015 Apr 07
0
[PATCH v15 13/15] pvqspinlock: Only kick CPU at unlock time
...SPIN_THRESHOLD; loop; loop--) {
if (READ_ONCE(node->locked))
return;
-
cpu_relax();
}
@@ -198,17 +211,21 @@ static void pv_wait_node(struct mcs_spinlock *node)
*
* [S] pn->state = vcpu_halted [S] next->locked = 1
* MB MB
- * [L] pn->locked [RmW] pn->state = vcpu_running
+ * [L] pn->locked [RmW] pn->state = vcpu_hashed
*
- * Matches the xchg() from pv_kick_node().
+ * Matches the cmpxchg() from pv_scan_next().
*/
(void)xchg(&pn->state, vcpu_halted);
if (!READ_ONCE(node->locked))
pv_wait(&pn...
2015 Feb 11
3
[PATCH] x86 spinlock: Fix memory corruption on completing completions
On 02/10, Jeremy Fitzhardinge wrote:
>
> On 02/10/2015 05:26 AM, Oleg Nesterov wrote:
> > On 02/10, Raghavendra K T wrote:
> >> Unfortunately xadd could result in head overflow as tail is high.
> >>
> >> The other option was repeated cmpxchg which is bad I believe.
> >> Any suggestions?
> > Stupid question... what if we simply move SLOWPATH