Displaying 20 results from an estimated 176 matches for "v15".
2011 Dec 30
2
Applying mode() or class() to each column of a data.frame XXXX
Hi everyone,
I am attempting to use the apply() function to obtain the mode and class of
each column in a data frame; however, I am encountering unexpected results.
I have the following example data:
# example data: integer, numeric, character, factor and logical columns
v13 <- 1:6
v14 <- c(1, 2, 3, 3, NA, 1)
v15 <- c("Good", "Bad", NA, "Good", "Bad", "Bad")
f4 <- factor(rep(c("Blue", "Red", "Green"), 2))
v16 <- c(FALSE, TRUE, FALSE, FALSE, TRUE, FALSE)
data6 <- data.frame(v13, v14, v15, f4, v16)
data6
Here is my function definition:
contents<-function(x){...
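For reference, the usual cause of this behaviour is that apply() runs as.matrix() on a data frame first, so the mixed columns are coerced to a common mode and every column reports the same mode and class. Below is a minimal sketch of the usual column-wise alternative, using the data6 built above rather than the poster's contents() function:
apply(data6, 2, class)    # data frame coerced to a matrix first: every column reports "character"
sapply(data6, class)      # operates on the columns themselves: one class per column
sapply(data6, mode)       # likewise for each column's storage mode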
2016 Mar 10
2
Greedy register allocator allocates live sub-register
...g375, 14, pred:1,
pred:%noreg, 5; VRF128:%vreg306 VRF64_l:%vreg375
* bar 30, %vreg306; VRF128:%vreg306
6804B STORE128 %vreg304, <fi#33>, 0; mem:ST16[FixedStack33] VRF128:%vreg304
For this sequence of instructions, when allocating a register for %vreg375
the greedy register allocator chooses V15_l. The problem here is that it
had previously allocated V15 (V15_l is a sub-register of V15) to %vreg304.
%vreg304 is defined at 6768B and last used at 6804B, so the instruction
LOAD_v4i16 at 6796B ends up clobbering the value in V15 before its last
use. This is the output of the allocator itself...
2012 Oct 19
0
impute multilevel data in MICE
...a question about 2lonly.pmm() and 2lonly.norm(): I get the following error quite often. Here is the code and the error; could you give me some advice, please? Am I using it in the right way?
> ini=mice(bhrm,maxit=0)
> pred=ini$pred
> pred
V1 V2 V3 V4 V5 V6 V7 V8 V9 V10 V11 V12 V13 V14 V15 V16 V17 V18
V1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0
V2 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0
V3 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 0
V4 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 1 0
V5 1 1 1 1 0...
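For reference, the 2lonly.* methods expect the class (cluster) identifier to be flagged with -2 in the predictorMatrix, and the method vector to name the level-2 method for the incomplete level-2 variable; the default matrix returned by mice(bhrm, maxit=0) carries no such flag. A hedged sketch of that setup, assuming purely for illustration that V18 is the cluster identifier and V15 the level-2 variable to be imputed:
ini <- mice(bhrm, maxit = 0)
pred <- ini$predictorMatrix
meth <- ini$method
pred[, "V18"] <- -2            # mark the cluster identifier as the class variable
pred["V18", ] <- 0             # the identifier itself is not imputed
meth["V15"] <- "2lonly.pmm"    # impute the level-2 variable at the cluster level
imp <- mice(bhrm, predictorMatrix = pred, method = meth, maxit = 5)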
2016 Jan 04
0
New nutdrv_qx sub driver - protocol v15&16
Hi,
I have been testing the nutdrv_qx additions introduced by @zykh for a
couple of months now and have changed only small things.
You can reach the repository via [1].
Please review the item_t array (starting at [2]) and give me feedback,
or change it directly.
The complete documentation is still missing; I have been trying to find
time for it for the last six months, but haven't managed to yet.
Best, Nick
[1]
2015 Apr 08
1
[Xen-devel] [PATCH v15 12/15] pvqspinlock, x86: Enable PV qspinlock for Xen
On 07/04/15 03:55, Waiman Long wrote:
> This patch adds the necessary Xen specific code to allow Xen to
> support the CPU halting and kicking operations needed by the queue
> spinlock PV code.
This basically looks the same as the version I wrote, except I think you
broke it.
> +static void xen_qlock_wait(u8 *byte, u8 val)
> +{
> + int irq = __this_cpu_read(lock_kicker_irq);
2015 Apr 09
0
[PATCH v15 16/16] unfair qspinlock: a queue based unfair lock
On Wed, Apr 08, 2015 at 02:32:19PM -0400, Waiman Long wrote:
> For a virtual guest with the qspinlock patch, a simple unfair byte lock
> will be used if PV spinlock is not configured in or the hypervisor
> isn't either KVM or Xen. The byte lock works fine with a small guest
> of just a few vCPUs. On a much larger guest, however, the byte lock can
> have serious performance problems.
2015 Apr 09
0
[PATCH v15 16/16] unfair qspinlock: a queue based unfair lock
On Thu, Apr 09, 2015 at 09:16:24AM -0400, Rik van Riel wrote:
> On 04/09/2015 03:01 AM, Peter Zijlstra wrote:
> > On Wed, Apr 08, 2015 at 02:32:19PM -0400, Waiman Long wrote:
> >> For a virtual guest with the qspinlock patch, a simple unfair byte lock
> >> will be used if PV spinlock is not configured in or the hypervisor
> >> isn't either KVM or Xen. The
2015 Apr 09
0
[PATCH v15 09/15] pvqspinlock: Implement simple paravirt support for the qspinlock
On Thu, Apr 09, 2015 at 08:13:27PM +0200, Peter Zijlstra wrote:
> On Mon, Apr 06, 2015 at 10:55:44PM -0400, Waiman Long wrote:
> > +#define PV_HB_PER_LINE (SMP_CACHE_BYTES / sizeof(struct pv_hash_bucket))
> > +static struct qspinlock **pv_hash(struct qspinlock *lock, struct pv_node *node)
> > +{
> > + unsigned long init_hash, hash = hash_ptr(lock, pv_lock_hash_bits);
2019 Jan 03
0
[PATCH v15 23/26] sched: early boot clock
Pavel Tatashin wrote on Thu, Jan 03, 2019:
> Could you please send the config file and qemu arguments that were
> used to reproduce this problem.
Running qemu by hand, nothing fancy; e.g. this works:
# qemu-system-x86_64 -m 1G -smp 4 -drive file=/root/kvm-wrapper/disks/f2.img,if=virtio -serial mon:stdio --enable-kvm -cpu Haswell -device virtio-rng-pci -nographic
(used a specific cpu just
2019 Jul 05
0
[PATCH v15 6/7] ext4: disable map_sync for async flush
Don't support 'MAP_SYNC' with non-DAX files and DAX files
with an asynchronous dax_device. Virtio pmem provides an
asynchronous host page cache flush mechanism. We don't
support 'MAP_SYNC' with virtio pmem and ext4.
Signed-off-by: Pankaj Gupta <pagupta at redhat.com>
Reviewed-by: Jan Kara <jack at suse.cz>
---
fs/ext4/file.c | 10 ++++++----
1 file changed, 6
2023 Aug 28
0
[PATCH v15 11/23] dma-resv: Add kref_put_dma_resv()
Am 27.08.23 um 19:54 schrieb Dmitry Osipenko:
> Add simple kref_put_dma_resv() helper that wraps around kref_put_ww_mutex()
> for drivers that need to lock dma-resv on kref_put().
>
> It's not possible to easily add this helper to kref.h because of the
> headers inclusion dependency, hence add it to dma-resv.h.
I was never really a big fan of kref_put_mutex() in the first
2015 Mar 19
0
[Xen-devel] [PATCH 0/9] qspinlock stuff -v15
On 16/03/15 13:16, Peter Zijlstra wrote:
>
> I feel that if someone were to do a Xen patch we can go ahead and merge this
> stuff (finally!).
This seems to work for me, but I've not had time to give it more thorough
testing.
You can fold this into your series.
There doesn't seem to be a way to disable QUEUE_SPINLOCKS when supported by
the arch; is this intentional? If so, the
2015 Mar 25
0
[PATCH 0/9] qspinlock stuff -v15
On Mon, Mar 16, 2015 at 02:16:13PM +0100, Peter Zijlstra wrote:
> Hi Waiman,
>
> As promised; here is the paravirt stuff I did during the trip to BOS last week.
>
> All the !paravirt patches are more or less the same as before (the only real
> change is the copyright lines in the first patch).
>
> The paravirt stuff is 'simple' and KVM only -- the Xen code was a
2015 Mar 27
0
[PATCH 0/9] qspinlock stuff -v15
On Thu, Mar 26, 2015 at 09:21:53PM +0100, Peter Zijlstra wrote:
> On Wed, Mar 25, 2015 at 03:47:39PM -0400, Konrad Rzeszutek Wilk wrote:
> > Ah nice. That could be spun out as a separate patch to optimize the existing
> > ticket locks I presume.
>
> Yes I suppose we can do something similar for the ticket and patch in
> the right increment. We'd need to restructure the
2015 Mar 30
0
[PATCH 0/9] qspinlock stuff -v15
On Mon, Mar 30, 2015 at 12:25:12PM -0400, Waiman Long wrote:
> I did it differently in my PV portion of the qspinlock patch. Instead of
> just waking up the CPU, the new lock holder will check if the new queue head
> has been halted. If so, it will set the slowpath flag for the halted queue
> head in the lock so as to wake it up at unlock time. This should eliminate
> your concern