Displaying 20 results from an estimated 85 matches for "parallelisable".
2013 Feb 07
0
[LLVMdev] Parallel Loop Metadata
Hi Nadav,
On 02/07/2013 07:46 PM, Nadav Rotem wrote:
> Pekka suggested that we add two kinds of metadata: llvm.loop.parallel
> (attached to each loop latch) and llvm.mem.parallel (attached to each memory
> instruction!). I think that the motivation for the first metadata is clear -
> it says that the loop is data-parallel. I can also see us adding additional
> metadata such as
2011 Oct 11
2
[LLVMdev] Speculative parallelisation in the LLVM compiler infrastructure!
Hi,
I am involved in the task of achieving speculative parallelisation in
LLVM. I have started my work by trying to see whether a simple for loop can be
parallelised in LLVM. The problem is that I want to know how to check if a
program is automatically parallelised when compiled with LLVM, or, if I
need to do it explicitly, how I can go about parallelising a for loop using
the LLVM compiler infrastructure. How
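The general idea the poster is after can be sketched outside LLVM: when every iteration of a loop is independent, the iterations can be mapped across workers. A minimal Python illustration of the concept (not LLVM's auto-parallelisation; the loop body is hypothetical):

```python
from concurrent.futures import ThreadPoolExecutor

def body(i):
    # Hypothetical loop body: any pure function of the index.
    return i * i

def run_serial(n):
    return [body(i) for i in range(n)]

def run_parallel(n, workers=4):
    # Iterations are independent, so they can safely be mapped across workers.
    with ThreadPoolExecutor(max_workers=workers) as ex:
        return list(ex.map(body, range(n)))
```

Whether the parallel version is legal at all hinges on the absence of cross-iteration dependences, which is exactly what an auto-parallelising compiler has to prove (or speculate on).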
2013 Feb 07
3
[LLVMdev] Parallel Loop Metadata
Hi,
I am continuing the discussion about Parallel Loop Metadata from here: http://lists.cs.uiuc.edu/pipermail/llvmdev/2013-February/059168.html and here: http://lists.cs.uiuc.edu/pipermail/llvmdev/2013-February/058999.html
Pekka suggested that we add two kind of metadata: llvm.loop.parallel (attached to each loop latch) and llvm.mem.parallel (attached to each memory instruction!). I think
2006 Aug 30
3
Antwort: Buying more computer for GLM
Hello,
at the moment I am doing quite a lot of regression, especially
logistic regression, on 20000 or more records with 30 or more
factors, using the "step" function to search for the model with the
smallest AIC. This takes a lot of time on this 1.8 GHz Pentium
box. Memory does not seem to be such a big problem; not much
swapping is going on and CPU usage is at or close to
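The `step` search the poster describes boils down to repeatedly comparing candidate models by AIC = 2k - 2*lnL. A minimal sketch of that comparison, with hypothetical parameter counts and log-likelihoods:

```python
# AIC = 2k - 2*lnL: smaller is better. Given each candidate model's
# parameter count k and maximised log-likelihood lnL, stepwise
# selection keeps the candidate with the smallest AIC.
def aic(k, lnL):
    return 2 * k - 2 * lnL

# Hypothetical candidates: (name, number of parameters, log-likelihood).
candidates = [("m1", 3, -1200.0), ("m2", 5, -1195.0), ("m3", 8, -1194.5)]
best = min(candidates, key=lambda m: aic(m[1], m[2]))
```

The expensive part is not the comparison but refitting the model for every candidate, which is why the search is slow on large data sets.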
2011 Mar 21
3
[LLVMdev] Contributing to Polly with GSOC 2011
Dear all,
I am Raghesh, a student pursuing M.Tech at Indian Institute of
Technology, Madras, India.
I would like to make a contribution to the Polly project
(http://wiki.llvm.org/Polyhedral_optimization_framework) as part of
GSOC 2011. I have gained some experience working in OpenMP Code
generation for Polly. This is almost stable now, and I am planning to test
with the polybench benchmarks.
Some of
2005 Jun 07
1
R and MLE
...when I was using vector/matrix notation. I think the greatness of R
lies in a lovely vector/matrix notation, and it seems like a shame
to have to not use that when trying to do deriv().
* For iid problems, the computation of the likelihood function and
its gradient vector is inherently parallelisable. How would one go
about doing this within R?
--
Ajay Shah Consultant
ajayshah at mayin.org Department of Economic Affairs
http://www.mayin.org/ajayshah Ministry of Finance, New Delhi
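For iid data the log-likelihood is a sum over observations, so disjoint chunks can be evaluated independently and added up. A sketch of the idea in Python (in R, snow or parallel::mclapply play this role); the unit-variance normal model and data here are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

def chunk_loglik(chunk, mu):
    # Log-likelihood contribution of one chunk under a unit-variance
    # normal model (constant terms dropped).
    return sum(-0.5 * (x - mu) ** 2 for x in chunk)

def loglik_parallel(data, mu, nchunks=4):
    # Split the sample, evaluate chunks concurrently, sum the pieces.
    size = max(1, len(data) // nchunks)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor() as ex:
        return sum(ex.map(lambda c: chunk_loglik(c, mu), chunks))

data = [0.5, 1.5, 2.0, 1.0]
```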
2011 Jun 12
1
snow package
Hi
I am trying to parallelise some code using the snow package and the following lines:
cl <- makeSOCKcluster(8)
pfunc <- function(x) if (x <= -th) 1 else 0  ### 1 when the correlation coefficient x is at or below -th
clusterExport(cl,c("pfunc","th"))
cor.c.f <- parApply(cl,tms,c(1,2),FUN=pfunc)
The parApply results in the error message:
> cor.c.f <- parApply(cl,tms,c(1,2),FUN=pfunc)
Error
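The pattern in the snippet (export a threshold to the workers, then apply an indicator function to every cell of a matrix) can be sketched in Python as a stand-in for snow's parApply; `th` and the matrix values here are illustrative:

```python
from concurrent.futures import ThreadPoolExecutor

th = 0.5  # threshold; in the R snippet, `th` must be exported to the workers

def pfunc(x):
    # Indicator: 1 when x is at or below -th, else 0.
    return 1 if x <= -th else 0

def par_apply(matrix):
    # Apply pfunc to every cell, farming each row's cells out to a pool.
    with ThreadPoolExecutor() as ex:
        return [list(ex.map(pfunc, row)) for row in matrix]
```

Note that, as in snow, any free variable the function uses (here `th`) has to be visible to the workers, which is what `clusterExport` arranges in the original code.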
2007 Mar 06
2
How to utilise dual cores and multi-processors on WinXP
Hello,
I have a question that I was wondering if anyone had a fairly straightforward answer to: what is the quickest and easiest way to take advantage of the extra cores / processors that are now commonplace on modern machines? And how do I do that in Windows?
I realise that this is a complex question that is not answered easily, so let me refine it some more. The type of scripts that I'm
2012 Sep 26
0
[LLVMdev] [PATCH / PROPOSAL] bitcode encoding that is ~15% smaller for large bitcode files...
On 26 Sep 2012, at 01:08, Jan Voung wrote:
> I've been looking into how to make llvm bitcode files smaller. There is one simple change that appears to shrink linked bitcode files by about 15%
Whenever anyone proposes a custom compression scheme for a data format, the first question that should always be asked is how does it compare to using a generic off-the-shelf compression algorithm.
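That baseline is easy to measure: run the same bytes through an off-the-shelf compressor and see how much it already saves. A sketch, with a stand-in byte string rather than a real bitcode file:

```python
# Baseline for any custom compression proposal: how much does an
# off-the-shelf compressor already save on the same bytes?
import zlib, lzma

def savings(data):
    # Fraction of the input size saved by each compressor.
    return {name: 1 - len(comp) / len(data)
            for name, comp in (("zlib", zlib.compress(data, 9)),
                               ("lzma", lzma.compress(data)))}

sample = b"BC\xc0\xde" + b"\x00\x01\x02\x03" * 4096  # stand-in, not real bitcode
```

A custom scheme is only worth its maintenance cost if it beats (or composes well with) these generic ratios on representative inputs.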
2010 Sep 10
0
plyr: version 1.2
plyr is a set of tools for a common set of problems: you need to
__split__ up a big data structure into homogeneous pieces, __apply__ a
function to each piece and then __combine__ all the results back
together. For example, you might want to:
* fit the same model to each patient subset of a data frame
* quickly calculate summary statistics for each group
* perform group-wise transformations
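The split/apply/combine cycle that plyr automates can be sketched in a few lines; the patient records here are hypothetical:

```python
# Split-apply-combine without plyr: split records by a key, apply a
# summary function to each group, combine the results into one dict.
from collections import defaultdict

def split_apply_combine(records, key, apply):
    groups = defaultdict(list)
    for rec in records:                              # split
        groups[rec[key]].append(rec)
    return {k: apply(v) for k, v in groups.items()}  # apply + combine

patients = [{"id": "a", "score": 1}, {"id": "a", "score": 3},
            {"id": "b", "score": 4}]
means = split_apply_combine(patients, "id",
                            lambda g: sum(r["score"] for r in g) / len(g))
```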
2010 Mar 02
1
Output to sequentially numbered files... also, ideas for running R on Xgrid
Hello,
I have some code to run on an XGrid cluster. Currently the code is written
as a single, large job... this is no good for trying to run in parallel. To
break it up I have basically taken out the highest level for-loop and am
planning on batch-running many jobs, each one representing an instance of
the removed loop.
However, when it comes to output I am stuck. Previously the output was
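One common way out is to have each batch job write to a file named after its (removed) loop index, zero-padded so the pieces reassemble in order afterwards. A sketch, with a hypothetical naming scheme:

```python
# One output file per batch job, named by the removed loop's index.
import os, tempfile

def output_name(i, prefix="result", ext="csv"):
    # Zero-padded index so filenames sort in loop order.
    return f"{prefix}_{i:04d}.{ext}"

def write_result(directory, i, text):
    path = os.path.join(directory, output_name(i))
    with open(path, "w") as f:
        f.write(text)
    return path
```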
2011 Jan 09
0
[LLVMdev] Proposal: Generic auto-vectorization and parallelization approach for LLVM and Polly
On 01/08/2011 07:34 PM, Renato Golin wrote:
> On 9 January 2011 00:07, Tobias Grosser <grosser at fim.uni-passau.de> wrote:
>> Matching the target vector width in our heuristics will obviously give the
>> best performance. So to get optimal performance Polly needs to take target
>> data into account.
>
> Indeed! And even if you lack target information, you
2002 Feb 15
2
ext3 fsck question
Hi,
After our big ext3 file server crashes, I notice the fsck spends some time
replaying the journals (about 5-10 mins for all volumes on the server in
question). I guess it must do this should you want to mount the volumes as
ext2.
My question: is it (theoretically) possible to tell fsck only to replay
half-finished transactions and to knock out incomplete ones from the journals,
leaving the kernel
2017 Aug 21
4
RISC-V LLVM status update
As you will have seen from previous postings, I've been working on upstream
LLVM support for the RISC-V instruction set architecture. The initial RFC
<http://lists.llvm.org/pipermail/llvm-dev/2016-August/103748.html>
provides a good overview of my approach. Thanks to funding from a third party,
I've recently been able to return to this effort as my main focus. Now feels
like a good
2011 Jan 09
2
[LLVMdev] Proposal: Generic auto-vectorization and parallelization approach for LLVM and Polly
On 9 January 2011 00:07, Tobias Grosser <grosser at fim.uni-passau.de> wrote:
> Matching the target vector width in our heuristics will obviously give the
> best performance. So to get optimal performance Polly needs to take target
> data into account.
Indeed! And even if you lack target information, you won't generate
wrong code. ;)
> Talking about OpenCL. The lowering
2011 Apr 09
1
How do I make this faster?
I was on vacation the last week and wrote some code to run a 500-day
correlation between the Nasdaq tracking stock (QQQ) and 191 currency pairs
for 500 days. The initial run took 9 hours(!) and I'd like to make it
faster. So, I'm including my code below, in hopes that somebody will be able
to figure out how to make it faster, either through parallelisation, or by
making changes. I've
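The per-pair correlations are independent of one another, so one easy speed-up is to farm them out to a pool of workers. A Python sketch of the idea (the input layout and pair names are hypothetical):

```python
# Each (QQQ, currency-pair) correlation is independent of the others,
# so the per-pair work can be mapped across a pool of workers.
from concurrent.futures import ThreadPoolExecutor
from math import sqrt

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(vx * vy)

def correlate_all(qqq, pairs):
    # pairs: dict of pair name -> price series (hypothetical layout)
    with ThreadPoolExecutor() as ex:
        results = ex.map(lambda kv: (kv[0], pearson(qqq, kv[1])),
                         pairs.items())
        return dict(results)
```

In R, the same shape falls out of handing the per-pair computation to parApply or mclapply; often vectorising the inner correlation is a bigger win than the parallelism itself.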
2023 Mar 14
1
[V2V PATCH v3 5/6] v2v, in-place: introduce --block-driver command line option
On Tue, Mar 14, 2023 at 04:06:18PM +0200, Andrey Drobyshev wrote:
> Speaking of "make check": could you point out, for future reference,
> which particular sub-target you're referring to here? I can see these:
> check-am, check-recursive, check-slow, check-TESTS, check-valgrind. And
> none of them seems to refer to checking docs integrity. Yet running
> entire
2015 Aug 12
2
Proposal/patch: simple parallel LTO code generation
Hi all,
The most time consuming part of LTO at opt level 1 is by far the backend code
generator. (As a reminder, LTO opt level 1 runs a minimal set of passes;
it is most useful where the motivation behind the use of LTO is to deploy
a transformation that requires whole program visibility such as control
flow integrity [1], rather than to optimise the program using whole program
visibility). Code
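The shape of the proposal, partitioning the merged module and running the expensive backend on the partitions concurrently, can be sketched with a stand-in for the real code generator:

```python
# Sketch of parallel LTO code generation: split the merged module into
# partitions and run the (expensive) backend on each one concurrently.
# `codegen` is a stand-in for the real backend, not LLVM's API.
from concurrent.futures import ThreadPoolExecutor

def codegen(partition):
    # Hypothetical backend: "compiles" a list of function names.
    return [f"{fn}.o" for fn in partition]

def parallel_codegen(functions, nparts=4):
    parts = [functions[i::nparts] for i in range(nparts)]
    with ThreadPoolExecutor(max_workers=nparts) as ex:
        objs = [o for part in ex.map(codegen, parts) for o in part]
    return sorted(objs)  # deterministic output order for linking
```

The interesting design questions are the ones the sketch elides: how to partition so the pieces are balanced, and how to keep the final link deterministic.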
1999 Mar 10
3
re: smp in Linux
A question to all you R-gurus:
Can R (or S-plus, for that matter) make efficient use
of multiple Intel Processors running under Linux (within
the same PC, not over a net)?
With the release of the new 2.2 kernel, this would seem
an interesting and cost-efficient way of boosting the
computational power of Intel/Linux platforms when using
R (or S-plus).
Thanks for any wise words,
Kenneth