Displaying 20 results from an estimated 710 matches for "ipoe".
2013 Jul 17
2
[LLVMdev] [Proposal] Parallelize post-IPO stage.
On 7/17/13 12:35 PM, Diego Novillo wrote:
> On Fri, Jul 12, 2013 at 3:49 PM, Shuxin Yang <shuxin.llvm at gmail.com> wrote:
>
>> 3. How to parallelize post-IPO stage
>> ====================================
>>
>> From 5k' high, the concept is very simple, just to
step 1) divide the merged IR into small pieces,
step 2) and compile
2013 Jul 17
0
[LLVMdev] [Proposal] Parallelize post-IPO stage.
On Wed, Jul 17, 2013 at 1:06 PM, Shuxin Yang <shuxin.llvm at gmail.com> wrote:
>
> On 7/17/13 12:35 PM, Diego Novillo wrote:
>>
>> On Fri, Jul 12, 2013 at 3:49 PM, Shuxin Yang <shuxin.llvm at gmail.com>
>> wrote:
>>
>>> 3. How to parallelize post-IPO stage
>>> ====================================
>>>
>>> From 5k'
2016 May 04
3
status of IPO/IPCP?
Sean Silva via llvm-dev <llvm-dev at lists.llvm.org> writes:
> No tests fail with the patch below, so I would say it's pretty useless. It
> seems that the C bindings are the only user but we can probably just have them
> return IPSCCP instead.
I don't necessarily think your conclusion is wrong, but the patch isn't
proving what you think it's proving. In fact, the
2008 Jun 05
7
Improving data processing efficiency
Hi everyone!
I have a question about data processing efficiency.
My data are as follows: I have a data set on quarterly institutional
ownership of equities; some of them have had recent IPOs, some have not
(I have a binary flag set). The total dataset size is 700k+ rows.
My goal is this: For every quarter since issue for each IPO, I need to
find a "matched" firm in the same
2016 May 03
2
status of IPO/IPCP?
The pass is pretty rudimentary (as the comment at the top of the file
hints), and it seems LLVM already has IPSCCP (which should do a better
job at interprocedural constant propagation).
I'm also not entirely sure it's used anywhere.
Is there any reason to keep it around?
Thanks,
--
Davide
"There are no solved problems; there are only problems that are more
or less solved" --
2013 Jul 29
5
[LLVMdev] IR Passes and TargetTransformInfo: Straw Man
On Jul 27, 2013, at 5:47 PM, Shuxin Yang <shuxin.llvm at gmail.com> wrote:
> Hi, Sean:
>
> I'm sorry I lied. I didn't mean to lie. I did try to avoid making a *BIG* change
> to the IPO pass ordering for now. However, when I made a minor change to
> populateLTOPassManager() by separating module passes and non-module passes, I
> saw quite a few performance
2013 Jul 16
0
[LLVMdev] [Proposal] Parallelize post-IPO stage.
On 12 July 2013 15:49, Shuxin Yang <shuxin.llvm at gmail.com> wrote:
> Hi, There:
>
> This is the proposal for parallelizing the post-IPO stage. See the following
> for details.
>
> I also attach a toy-grade rudimentary implementation. This
> implementation can be
> used to illustrate some concepts here. This patch is not going to be
> committed.
>
>
2013 Jul 18
3
[LLVMdev] IR Passes and TargetTransformInfo: Straw Man
Andy and I briefly discussed this the other day; we have not yet had a
chance to list a detailed pass order for the pre- and post-IPO scalar
optimizations.
This is the wish-list we have in mind:
pre-IPO: based on the ordering he proposes, get rid of inlining (or just
inline tiny functions), get rid of all loop transforms...
post-IPO: get rid of inlining, or maybe we still need it, only
2013 Jul 17
0
[LLVMdev] [Proposal] Parallelize post-IPO stage.
On Fri, Jul 12, 2013 at 3:49 PM, Shuxin Yang <shuxin.llvm at gmail.com> wrote:
> 3. How to parallelize post-IPO stage
> ====================================
>
> From 5k' high, the concept is very simple, just to
> step 1) divide the merged IR into small pieces,
> step 2) and compile each of these pieces independently.
> step 3) the objects of each piece
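
As a rough, illustrative sketch of the partition-and-compile scheme described in this snippet (not the toy patch attached to the original proposal; the Partition struct and compilePartition helper are assumed names, not LLVM API):

// Hypothetical sketch of "divide the merged IR, compile each piece in
// parallel, collect the objects for the final link". Names are illustrative.
#include <cstddef>
#include <string>
#include <thread>
#include <vector>

struct Partition {
  std::string Name;                    // e.g. "merged.part0"
  std::vector<std::string> Functions;  // functions assigned to this piece
};

// Placeholder for "optimize + codegen one piece and emit an object file".
void compilePartition(const Partition &P, std::string &ObjFileOut) {
  ObjFileOut = P.Name + ".o";  // real work: clone functions, run codegen
}

// step 1) the caller divides the merged IR into Pieces,
// step 2) each piece is compiled independently (one thread per piece here),
// step 3) the resulting objects are returned for the final link.
std::vector<std::string>
compileInParallel(const std::vector<Partition> &Pieces) {
  std::vector<std::string> Objects(Pieces.size());
  std::vector<std::thread> Workers;
  for (std::size_t I = 0; I != Pieces.size(); ++I)
    Workers.emplace_back(compilePartition, std::cref(Pieces[I]),
                         std::ref(Objects[I]));
  for (std::thread &W : Workers)
    W.join();
  return Objects;
}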
2016 Mar 21
2
[Inliner] Loop info in the inliner
Hi, It seems the inliner does not take into account whether a call is inside a loop. I'm trying to figure out if loop info can be made available to the inliner.
When I try to add LoopInfoWrapperPass to Inliner.cpp,
diff --git a/llvm/lib/Transforms/IPO/Inliner.cpp b/llvm/lib/Transforms/IPO/Inliner.cpp
index 568707d..cb51ea8 100644
--- a/llvm/lib/Transforms/IPO/Inliner.cpp
+++
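
A minimal sketch of what "adding LoopInfoWrapperPass to Inliner.cpp" typically means for a legacy pass (assumed code, not the poster's actual diff; as the reply further down explains, the legacy CallGraphSCC pass manager cannot schedule function/loop analyses this way, so this attempt does not work as-is):

// Declare the dependency in the legacy inliner's getAnalysisUsage ...
#include "llvm/Analysis/LoopInfo.h"

void Inliner::getAnalysisUsage(AnalysisUsage &AU) const {
  AU.addRequired<LoopInfoWrapperPass>();   // the attempted addition
  CallGraphSCCPass::getAnalysisUsage(AU);  // keep the existing requirements
}

// ... and query it when deciding whether a call site sits inside a loop:
//   LoopInfo &LI = getAnalysis<LoopInfoWrapperPass>(*Caller).getLoopInfo();
//   bool CallIsInLoop =
//       LI.getLoopFor(CS.getInstruction()->getParent()) != nullptr;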
2013 Jul 28
0
[LLVMdev] IR Passes and TargetTransformInfo: Straw Man
Hi, Sean:
I'm sorry I lied. I didn't mean to lie. I did try to avoid making a *BIG*
change to the IPO pass ordering for now. However, when I made a minor change
to populateLTOPassManager() by separating module passes and non-module
passes, I saw quite a few performance differences, most of them degradations.
Attacking
these degradations one by one in a piecemeal manner is wasting
2012 Jun 08
1
ipoe vs pppoe...
Hi,
we are currently using pppd to connect to the net through pppoe.
Some providers seem to offer ipoe instead of pppoe, and I cannot find any clue on how/where/whether it is possible to set up such a connection in CentOS.
Does anybody have any experience with ipoe...?
Thx,
JD
2015 Jun 04
5
[LLVMdev] Removing AvailableExternal values in GlobalDCE (was Re: RFC: ThinLTO Implementation Plan)
On Thu, Jun 4, 2015 at 3:58 PM, Duncan P. N. Exon Smith <dexonsmith at apple.com> wrote:
>
> > Personally, I think the right approach is to add a bool to
> createGlobalDCEPass defaulting to true named something like
> IsAfterInlining. In most standard pass pipelines, GlobalDCE runs after
> inlining for obvious reasons, so the default makes sense. The special case
> is
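
A minimal sketch of the interface change being proposed in this message (the parameter name and default come from the text above; the rest is assumed, and the effect on available_externally definitions is inferred from the thread subject):

// Proposed: GlobalDCE learns whether it runs after inlining, presumably
// controlling whether available_externally definitions may be discarded.
ModulePass *createGlobalDCEPass(bool IsAfterInlining = true);

// A pipeline that runs GlobalDCE before the inliner would opt out explicitly:
//   PM.add(createGlobalDCEPass(/*IsAfterInlining=*/false));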
2013 Jul 15
0
[LLVMdev] [Proposal] Parallelize post-IPO stage.
On Jul 12, 2013, at 3:49 PM, Shuxin Yang <shuxin.llvm at gmail.com> wrote:
> 6) Miscellaneous
> ===========
> Will partitioning degrade performance in theory? I think it depends on the definition of
> performance. If performance means execution time, I guess it does not.
> However, if performance includes code size, I think it may have some negative impact.
> Following
2008 Jan 23
2
[LLVMdev] Issues with IPO optimization passes and JIT
Hello,
I am working on an LLVM-based JIT for a dynamically typed language
(freemat.sf.net), and would like to commend the LLVM team on an
awesome piece of work. There is one issue I ran into that I was hoping to
get some clarification on. Nominally, I had started out by performing code
generation from the AST into a function that was added to the current
module. I then ran a PassManager on that module to
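
Roughly, the flow described above, sketched with today's legacy pass manager API (an approximation only; the 2008 API the poster used differed, and the module-level IPO passes shown are just examples):

#include "llvm/IR/LegacyPassManager.h"
#include "llvm/IR/Module.h"
#include "llvm/Transforms/IPO.h"

// Sketch: after generating each AST function into the current module, run a
// module-level PassManager over it. IPO passes see every function in the
// module at once, unlike the per-function flow a JIT otherwise follows.
void optimizeModuleForJIT(llvm::Module &M) {
  llvm::legacy::PassManager PM;
  PM.add(llvm::createFunctionInliningPass());  // interprocedural inlining
  PM.add(llvm::createGlobalDCEPass());         // drop unreferenced globals
  PM.run(M);
}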
2001 Jan 18
0
Obtain Biotech IPOs! 93
2001 Jan 18
0
Obtain Biotech IPOs! 93
2013 Jul 12
14
[LLVMdev] [Proposal] Parallelize post-IPO stage.
Hi, There:
This is the proposal for parallelizing the post-IPO stage. See the
following for details.
I also attach a toy-grade rudimentary implementation. This
implementation can be
used to illustrate some concepts here. This patch is not going to be
committed.
Unfortunately, this weekend I will be too busy to read emails. Please
do not construe a delayed response as being rude :-).
2017 Sep 16
3
RFC: Use closures to delay construction of optimization remarks
Another alternative could be:
ORE.emitMissed(DEBUG_TYPE, ...) << ...
Then the first line of emitMissed checks whether it is enabled and, if not,
returns a dummy stream that does nothing for operator<< (and
short-circuits all the stream operations)
On Sep 15, 2017 2:21 PM, "Adam Nemet via llvm-dev" <llvm-dev at lists.llvm.org>
wrote:
For better readability we
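
One simple way to get the effect described above, a stream whose operator<< is a no-op when remarks are disabled, is sketched below (illustrative only; the class and member names are assumed, not LLVM's actual ORE API):

#include <string>
#include <utility>

// Illustrative sketch of the "return a dummy stream" alternative: when
// remarks are disabled, operator<< short-circuits and none of the streamed
// arguments are formatted into a remark.
class RemarkStream {
  bool Enabled;
  std::string Buffer;

public:
  explicit RemarkStream(bool Enabled) : Enabled(Enabled) {}

  template <typename T> RemarkStream &operator<<(T &&Value) {
    if (Enabled)                       // short-circuit when disabled
      append(std::forward<T>(Value));  // only then pay for formatting
    return *this;
  }

private:
  void append(const std::string &S) { Buffer += S; }
  void append(int N) { Buffer += std::to_string(N); }
};

class RemarkEmitterSketch {
  bool RemarksEnabled = false;

public:
  // The first line checks whether remarks are enabled; if not, the returned
  // stream does nothing for operator<<, as suggested in the message above.
  RemarkStream emitMissed(const char * /*PassName*/) {
    return RemarkStream(RemarksEnabled);
  }
};

// Usage resembling the proposed spelling:
//   ORE.emitMissed(DEBUG_TYPE) << "failed to vectorize loop, trip count " << 4;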
2016 Mar 22
0
[Inliner] Loop info in the inliner
FYI - There is currently an architectural issue which prevents the SCC
pass manager (which runs the inliner) from relying on Function or Loop
analysis passes. This is the primary motivation of the pass manager
rewrite that Chandler Carruth has been working on for the last two
years. He's getting relatively close to that project being done, but
until then you are going to be effectively