Displaying 20 results from an estimated 10000 matches similar to: "[LLVMdev] L1, L2 Cache line sizes in TargetData?"
2011 May 03
5
[LLVMdev] Memory Subsystem Representation
For a while now we (Cray) have had some very primitive cache structure
information encoded into our version of LLVM. Given the more complex
memory structures introduced by Bulldozer and various accelerators, it's
time to do this Right (tm).
So I'm looking for some feedback on a proposed design.
The goal of this work is to provide Passes with useful information such
as cache sizes,
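Purely as an illustration of the kind of per-target query interface such a proposal implies (every name below is hypothetical; none of it comes from the thread or from LLVM itself), a sketch of what a pass might want to ask could look like:

    // Hypothetical sketch only -- the kind of memory-system queries a pass
    // might want; these classes and methods do not exist in LLVM.
    #include <cstdint>
    #include <optional>

    struct CacheLevelInfo {
      unsigned Level;          // 1 = L1, 2 = L2, ...
      uint64_t SizeBytes;      // total capacity
      unsigned LineSizeBytes;  // cache line size
      unsigned Associativity;  // ways of associativity
      bool SharedBetweenCores; // e.g. a module-shared L2 on Bulldozer
    };

    struct TargetMemorySystemInfo {
      virtual unsigned getNumCacheLevels() const = 0;
      virtual std::optional<CacheLevelInfo> getCacheLevel(unsigned Level) const = 0;
      virtual ~TargetMemorySystemInfo() = default;
    };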
2011 Jun 27
4
How many L1/L2 caches does my CPU have?
Hi
Could anybody explain to me how to check how many L1/L2 caches my CPU has?
I'm using CentOS 5.6
cat /proc/cpuinfo | grep CPU
model name : Intel(R) Core(TM)2 Duo CPU T9300 @ 2.50GHz
model name : Intel(R) Core(TM)2 Duo CPU T9300 @ 2.50GHz
[Figure: diagram of a generic dual-core processor, with CPU-local level 1 caches and a shared, on-die level 2 cache.]
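Beyond /proc/cpuinfo, Linux exposes the cache hierarchy per-CPU under sysfs; a minimal sketch that enumerates it (the /sys/devices/system/cpu/cpu0/cache layout is assumed and may only be partially populated on older kernels such as CentOS 5.6's):

    // Minimal sketch: list the cache levels Linux reports for CPU 0 via sysfs.
    #include <fstream>
    #include <iostream>
    #include <string>

    int main() {
      for (int i = 0; ; ++i) {
        std::string base =
            "/sys/devices/system/cpu/cpu0/cache/index" + std::to_string(i) + "/";
        std::ifstream level(base + "level"), type(base + "type"), size(base + "size");
        if (!level.is_open()) break;          // no more cache entries
        std::string lvl, ty, sz;
        level >> lvl; type >> ty; size >> sz; // e.g. "1", "Data", "32K"
        std::cout << "L" << lvl << " " << ty << ": " << sz << "\n";
      }
      return 0;
    }

On a Core 2 Duo this would typically show a per-core L1 data and instruction cache plus the shared L2, matching the diagram above.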
2011 May 03
0
[LLVMdev] Memory Subsystem Representation
On Tue, May 3, 2011 at 8:40 AM, David Greene <dag at cray.com> wrote:
> For a while now we (Cray) have had some very primitive cache structure
> information encoded into our version of LLVM. Given the more complex
> memory structures introduced by Bulldozer and various accelerators, it's
> time to do this Right (tm).
>
> So I'm looking for some feedback on a
2005 Aug 15
3
Pinning L1 page table with wrpt
hi,
it seems I no longer need to pin L1 pages when running with writable
page tables in xen-unstable -- e.g. I can now pin an L2 table without
having pinned its L1 descendants first, without Xen complaining. Do I
understand things correctly?
thanks,
Jacob
2017 Mar 11
3
Is there a way to know the target's L1 data cache line size?
I guess that in this case, what I would like to know is a reasonable
upper bound of the cache line size on the target architecture. Something
that I can align my data structures on at compile time so as to minimize
the odds of false sharing. Think
std::hardware_destructive_interference_size in C++17.
On 11/03/2017 at 13:16, Bruce Hoult wrote:
> There's no way to know, until you run
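For the compile-time alignment use case described above, a minimal C++17 sketch (falling back to an assumed 64-byte line where the library constant is unavailable) could be:

    // Minimal sketch: keep per-thread counters on separate cache lines to
    // avoid false sharing. The 64-byte fallback is an assumption.
    #include <atomic>
    #include <cstddef>
    #include <new>

    #ifdef __cpp_lib_hardware_interference_size
    constexpr std::size_t kInterference = std::hardware_destructive_interference_size;
    #else
    constexpr std::size_t kInterference = 64;
    #endif

    struct alignas(kInterference) PerThreadCounter {
      std::atomic<long> value{0};
    };

    PerThreadCounter counters[4];  // each element starts on its own cache line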
2017 Mar 11
2
Is there a way to know the target's L1 data cache line size?
Thank you! Is this information available programmatically through some
LLVM API, so that next time some hardware manufacturer does some crazy
experiment, my code can be automatically compatible with it as soon as
LLVM is?
On 11/03/2017 at 13:38, Bruce Hoult wrote:
> PowerPC G5 (970) and all recent IBM Power have 128 byte cache lines. I
> believe Itanium is also 128.
>
> Intel
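Later LLVM versions did grow such hooks in TargetTransformInfo (added after this 2017 thread; whether a given target reports anything useful is up to that target). A rough in-pass sketch, assuming a recent LLVM:

    // Rough sketch: query cache parameters through TargetTransformInfo
    // from inside a new-pass-manager function pass.
    #include "llvm/Analysis/TargetTransformInfo.h"
    #include "llvm/IR/PassManager.h"
    using namespace llvm;

    struct CacheQueryPass : PassInfoMixin<CacheQueryPass> {
      PreservedAnalyses run(Function &F, FunctionAnalysisManager &FAM) {
        auto &TTI = FAM.getResult<TargetIRAnalysis>(F);
        unsigned LineSize = TTI.getCacheLineSize();  // 0 if the target says nothing
        auto L1Size = TTI.getCacheSize(TargetTransformInfo::CacheLevel::L1D);
        (void)LineSize; (void)L1Size;
        return PreservedAnalyses::all();
      }
    };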
2015 Nov 13
2
[PATCH] virtio_ring: Shadow available ring flags & index
On Wed, Nov 11, 2015 at 02:34:33PM +0200, Michael S. Tsirkin wrote:
> On Tue, Nov 10, 2015 at 04:21:07PM -0800, Venkatesh Srinivas wrote:
> > Improves cacheline transfer flow of available ring header.
> >
> > Virtqueues are implemented as a pair of rings, one producer->consumer
> > avail ring and one consumer->producer used ring; preceding the
> > avail ring
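An illustrative sketch of the idea (not the actual virtio_ring.c code): the avail header is two adjacent u16 fields, and the patch has the driver keep its own private copy of idx so that publishing a buffer is a plain store to the shared cache line rather than a read-modify-write of it:

    // Illustrative sketch only -- not the kernel implementation.
    #include <cstdint>

    struct vring_avail_hdr {     // two contiguous u16 fields, as described
      volatile uint16_t flags;   // driver-written, device-read
      volatile uint16_t idx;     // driver-written, device-read
      // uint16_t ring[];        // descriptor indices follow in memory
    };

    struct vq_state {
      uint16_t shadow_avail_idx; // private copy, lives in driver-local memory
      vring_avail_hdr *avail;    // header shared with the device
    };

    inline void publish(vq_state &vq) {
      // ...ring slot at (shadow_avail_idx % ring_size) already filled in...
      vq.shadow_avail_idx++;               // bump the private shadow
      vq.avail->idx = vq.shadow_avail_idx; // single store to the shared line
    }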
2009 Oct 14
1
different L2 regularization behavior between lrm, glmnet, and penalized?
The following R code using different packages gives the same results for a
simple logistic regression without regularization, but different results
with regularization. This may just be a matter of different scaling of the
regularization parameters, but if anyone familiar with these packages has
insight into why the results differ, I'd appreciate hearing about it. I'm
new to
2007 Oct 16
8
Xeno Linux never pins L1 tables?
hi,
I'm developing my own 32-bit (no PAE) paravirtualized kernel for Xen with
Mini-OS as a starting point. I am currently working on process page table
support (equivalent of arch/i386/mm/pgtable-xen.c) and mostly following
Linux for the moment. I noticed that linux-2.6.18-xen never pins an L1 table
(a pte), yet __pgd_pin() walks the page directory and gives up write access
on the kernel
2014 Jan 16
7
Re: Double fault panic in L2 upon v2v conversion
Thanks Richard for a fast reply.
Yes, indeed, I'm working in a nested environment. I'm trying to run v2v inside a
VM (L1) and to create an L2 through the conversion process. And on Intel. As I
wrote, it fails once every few runs, mainly when there is memory pressure
on L0.
Kashyap, can you please share your experience? Why would it crash during
nested conversion? I'm not too familiar with libguestfs
2016 Sep 19
2
How to set QEMU qcow2 l2-cache-size using libvirt xml?
QEMU's default qcow2 L2 cache size is too small for large images (and small cluster sizes), resulting in very bad performance.
https://blogs.igalia.com/berto/2015/12/17/improving-disk-io-performance-in-qemu-2-5-with-the-qcow2-l2-cache/
shows a huge performance hit for a 20GB qcow2 with the default 64kB cluster size:
L2 Cache (MiB)    Average IOPS
1 (default)       5100
1.5
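At the time of that thread libvirt had no dedicated element for this setting, so the common workaround was the QEMU command-line passthrough namespace. A hedged sketch (the drive alias drive-virtio-disk0 and the 8 MiB value are assumptions to adapt to the actual domain):

    <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
      <!-- ... rest of the domain definition ... -->
      <qemu:commandline>
        <!-- assumption: the disk's QEMU id; verify against the generated command line -->
        <qemu:arg value='-set'/>
        <qemu:arg value='drive.drive-virtio-disk0.l2-cache-size=8388608'/>
      </qemu:commandline>
    </domain>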
2007 Apr 07
1
OT: general question re processor, l2 and l3 cache etc
Greetings
Please forgive the OT question, but I highly value the experience and wisdom
on this list.
I am wondering if anyone here can address the performance difference between
having a processor board with, say, 256KB L2 *and* 2048KB L3 cache *VERSUS*
the same processor board with just the L2 cache, in a CentOS
server environment...
Please figure that all other necessary and related
2014 Jan 15
4
Double fault panic in L2 upon v2v conversion
Hi everybody,
Wanted to hear your opinion and to receive a smart advice.
I'm trying to use virt-v2v in order to convert an OVA image (exported from
vCenter) to run on libvirt/KVM - all this inside a Fedora VM.
The converted image is also Fedora.
During the conversion process, at some point of libguestfs activity, I get a
double fault panic from L2 (printed as part of the libguestfs output) and the
2015 Nov 11
2
[PATCH] virtio_ring: Shadow available ring flags & index
Improves cacheline transfer flow of available ring header.
Virtqueues are implemented as a pair of rings, one producer->consumer
avail ring and one consumer->producer used ring; preceding the
avail ring in memory are two contiguous u16 fields -- avail->flags
and avail->idx. A producer posts work by writing to avail->idx and
a consumer reads avail->idx.
The flags and idx fields
2015 Nov 18
2
[PATCH] virtio_ring: Shadow available ring flags & index
On Mon, Nov 16, 2015 at 7:46 PM, Xie, Huawei <huawei.xie at intel.com> wrote:
> On 11/14/2015 7:41 AM, Venkatesh Srinivas wrote:
> > On Wed, Nov 11, 2015 at 02:34:33PM +0200, Michael S. Tsirkin wrote:
> >> On Tue, Nov 10, 2015 at 04:21:07PM -0800, Venkatesh Srinivas wrote:
> >>> Improves cacheline transfer flow of available ring header.
> >>>
>
2015 Nov 19
1
[PATCH] virtio_ring: Shadow available ring flags & index
On 11/18/2015 12:28 PM, Venkatesh Srinivas wrote:
> On Tue, Nov 17, 2015 at 08:08:18PM -0800, Venkatesh Srinivas wrote:
>> On Mon, Nov 16, 2015 at 7:46 PM, Xie, Huawei <huawei.xie at intel.com> wrote:
>>
>>> On 11/14/2015 7:41 AM, Venkatesh Srinivas wrote:
>>>> On Wed, Nov 11, 2015 at 02:34:33PM +0200, Michael S. Tsirkin wrote:
>>>>> On Tue,
2018 Nov 02
2
RFC: System (cache, etc.) model for LLVM
On Thu, 1 Nov 2018 at 16:56, David Greene <dag at cray.com> wrote:
> Ok. I would like to start posting patches for review without
> speculating too much on fancy/exotic things that may come later. We
> shouldn't do anything that precludes extensions but I don't want to get
> bogged down in a lot of details on things related to a small number of
> targets.