Hello Chris,
Thursday, January 8, 2004, 5:28:47 PM, you wrote:
CL> Though I am mostly ignorant about leading edge distributed (i.e., loosely
CL> coupled parallel) programming stuff, I _think_ that network latencies are
CL> such that you really need the "big picture" of what a program is doing to
CL> know how to profitably distribute and adapt the program to the
CL> environment it finds itself running in.
1. Hard disks are much slower than registers, but they are used
quite automatically for swapping. By analogy: if some module/function
is very active for a long time, but does not communicate much with its
environment, then it might be transferred over a slow network to continue
somewhere else.
2. What about run-time profiling? It might be quite a good basis
for getting *real* info about the interactions between functions/modules,
without this "big picture" provided by the programmer, who could be
very wrong in his/her prediction of the profile of his/her own program :)
(haven't you ever been surprised looking at the profile of your
own program? I am surprised very often.)
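To make this concrete, here is a rough sketch in plain C of the kind of
call-edge counting I have in mind; the function names and the counter
table are of course invented just for the example, not taken from any
real tool:

  /* Sketch of call-edge counting: each instrumented call site bumps a
   * counter, so at exit we see how often modules really talk to each
   * other.  All names here are made up for the illustration. */
  #include <stdio.h>

  enum { F_MAIN, F_SOLVE, F_IO, N_FUNCS };

  static const char *names[N_FUNCS] = { "main", "solve", "io" };
  static unsigned long edge[N_FUNCS][N_FUNCS]; /* edge[caller][callee] */

  static void io_write(int caller, int x) { edge[caller][F_IO]++; printf("%d\n", x); }
  static int  solve(int caller, int n)    { edge[caller][F_SOLVE]++; return n * n; }

  int main(void)
  {
      for (int i = 0; i < 1000; i++)
          io_write(F_MAIN, solve(F_MAIN, i));

      /* dump the caller -> callee interaction counts we actually saw */
      for (int c = 0; c < N_FUNCS; c++)
          for (int e = 0; e < N_FUNCS; e++)
              if (edge[c][e])
                  printf("%s -> %s : %lu calls\n", names[c], names[e], edge[c][e]);
      return 0;
  }

The point is only that such counts come from the running program, not
from the programmer's guess about the "big picture".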
CL> Our mid-term plans include looking at multithreading/parallel processing
CL> kinds of things, at least for shared memory systems.
cool... guys, I pray you get your "financial air" on a regular basis!
What you do is very important.
CL> At the LLVM level, we are interested in _exposing_ parallelism.
hm... strange. You mean you are going to define explicitly what to
parallelize and how? Why then don't we need similar unpleasant things for
registers?..
CL> In the fib example you are
CL> using, for example, it is pretty easy to show that all of the subcalls to
CL> fib can be run independently of each other (ie, in parallel). Combined
CL> with a suitable runtime library (e.g., pthreads), you could imagine an
CL> LLVM transformation that allows it to run well on a machine with 10K
CL> processors. :)
Chris, that's clear. But (let me use this annoying analogy between
memory and CPUs) if you have a strategy today for where to allocate a
variable, then you also need a strategy for whether or not to start a new
thread. Are there any good strategies for threads, as good as the ones for
memory allocation?
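To illustrate with the fib example you quoted above: the simplest
strategy I can imagine is a cutoff, the same way small objects stay in
registers or on the stack while big ones go to the heap. Spawn a thread
only while the subproblem is large enough; below that, plain recursion is
cheaper than thread creation. This is only a sketch with pthreads, and
the cutoff value is an arbitrary guess, not a tuned number:

  /* Naive strategy sketch: spawn a new thread for one of the two
   * recursive fib calls only while n is above some cutoff. */
  #include <pthread.h>
  #include <stdio.h>

  #define CUTOFF 25  /* below this, thread creation costs more than it saves */

  static long fib_seq(int n) { return n < 2 ? n : fib_seq(n - 1) + fib_seq(n - 2); }

  struct job { int n; long result; };

  static void *fib_thread(void *arg);

  static long fib_par(int n)
  {
      if (n < CUTOFF)
          return fib_seq(n);

      struct job j = { n - 1, 0 };
      pthread_t t;
      pthread_create(&t, NULL, fib_thread, &j); /* fib(n-1) in a new thread */
      long right = fib_par(n - 2);              /* fib(n-2) in this thread  */
      pthread_join(t, NULL);
      return j.result + right;
  }

  static void *fib_thread(void *arg)
  {
      struct job *j = arg;
      j->result = fib_par(j->n);
      return NULL;
  }

  int main(void)
  {
      printf("fib(35) = %ld\n", fib_par(35));
      return 0;
  }

But is such a fixed threshold really the best we can do, compared with
the rather mature theory we have behind register allocation?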
>> Actually, maybe the problem is that science is not ready around
>> phi-functions, SSA and so on, if we speak about a computing machine with
>> several CPUs spread over a network... What could the LLVM gurus say here?
CL> "It's all about the abstraction." :) If you come up with
a clean design
CL> that integrates well with LLVM, and makes sense, it's quite possible
that
CL> we can integrate it. If it doesn't fit well with LLVM, or, well,
doesn't
CL> make sense, then we'll probably wait for something that does. :)
That
CL> said, LLVM is actually a really nice platform for playing around with
CL> "experimental" ideas like this, so feel free to start hacking
once you
CL> get a concrete idea!
oopsss, Chris, but I was talking about a scientific basis. Abstraction
has a lot to do with it as well, but, as a mathematician, I could say:
if the theory is ready (e.g. orthogonal systems of functions), then you can
play with special-purpose applications of this theory (FFT, wavelets,
etc).
Otherwise, one could die in the debris of heuristics.
SSA, the phi-operator and the like are not just empty words, are they? I
think there is a lot of reasonable theory around them, which lets you
think about the design and an adequate implementation instead of thinking
about how to formulate the task, what properties the constructions you use
have, and so on... The theory is either ready (then happy hacking!) or not
ready (then be careful before hacking yourself into big trouble).
Or?..
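P.S. Just to make concrete what I mean by the phi-operator: the textbook
situation is a value defined on two branches and merged afterwards; in
SSA form the merge point gets a phi choosing between the two definitions.
A tiny C sketch, with the SSA reading only in the comments:

  /* 'x' gets a different value on each branch; SSA form needs a single
   * definition at the merge point, and that is where the phi appears. */
  int clamp_positive(int n)
  {
      int x;
      if (n > 0)
          x = n;      /* definition x1 on the "then" branch */
      else
          x = 0;      /* definition x2 on the "else" branch */
      /* at this merge point SSA inserts something like
       *   x3 = phi(x1 from then-branch, x2 from else-branch)   */
      return x;
  }

That much theory is ready; my question is whether anything comparably
ready exists for CPUs spread over a network.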
--
Best regards,
Valery A.Khamenya mailto:khamenya at mail.ru
Local Time: 20:46