On 6/6/07, Sandro Magi <naasking at gmail.com> wrote:
> On 6/5/07, John Criswell <criswell at cs.uiuc.edu> wrote:
> >
> > To be honest, while I understand your questions, I do not understand
> > the context in which you are asking them. Are you asking if LLVM
> > provides any facilities to provide these protections, or are you
> > asking if we've added any special features to LLVM (or to SVA; our
> > research work based on LLVM) to provide these protections beyond what
> > is commonly provided by systems today? Can you provide more
> > information on how you want to use LLVM for your project?
>
> The context was fully detailed in my original message:
> http://lists.cs.uiuc.edu/pipermail/llvmdev/2007-June/009261.html
>
> Basically, if you want to be able to download and execute potentially
> malicious code safely, no existing VM is sufficiently safe, since the
> malicious code can still DoS the CPU and the memory subsystems. This
> is because all VMs of which I'm aware provide insufficient resource
> accounting; the only efforts to minimize these DoS opportunities are
> the CapROS/EROS and Coyotos secure operating systems. Secure mobile
> code will remain a pipe dream until such isolation is addressed.
Personally, I wonder whether it may be a little too early for LLVM to
take on these fine-grained confinement problems before SVA matures.
Real-world virtual machines like VMware or Xen already provide some
level of confinement, which in most circumstances can prevent an
attacker in a guest OS from overwhelming the host. (For example, I have
native Debian installed on a Core Duo system. If I configure VMware
with one virtual CPU, I always have enough CPU power left to manage my
native Debian, even with a DoS running inside VMware.) SVA is a step
forward in that it is more analyzable than VMware or Xen, and more
confined policies seem to be steps further still.
>
> So I was proposing an extension to LLVM to address the problem, and
> asking about the feasibility of the extension as detailed in the above
> message.
>
> Static analyses are certainly preferable to a runtime solution, but I
> have yet to see a static analysis that attempts to extract these
> specific properties. One sort of analysis that might be able to do so
> with sufficient flexibility is "sized types", which express
> algorithmic runtime and space complexity in types. I'm doubtful that
> this approach is feasible with low-level LLVM code, but I'd love to be
> wrong!
I think John Criswell was referring to particular cases like this one:

for (i = 0; i < 10; ++i) p[i] = malloc(10 * sizeof(int));

If we statically "pattern match" the loop, we can see that this line of
code will allocate 100*sizeof(int) bytes of memory in total. This static
information may be fed to the optimizer (can we allocate 100*sizeof(int)
contiguous memory at a time?) or to the security protector (is this line
of code allocating more than expected?).
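As a small C sketch of what such a transformation might produce (the
contiguous-allocation rewrite below is only my illustration of the
idea, not an existing LLVM pass):

#include <stdlib.h>

int main(void) {
    int *p[10];

    /* Original pattern: ten separate allocations of 10 ints each:
     *   for (i = 0; i < 10; ++i) p[i] = malloc(10 * sizeof(int));
     * Hypothetical optimized form: one contiguous block of
     * 10 * 10 = 100 ints, with the loop handing out 10-int slices. */
    int *block = malloc(100 * sizeof(int));
    if (!block)
        return 1;
    for (int i = 0; i < 10; ++i)
        p[i] = block + i * 10;

    /* ... use p[0..9] as before ... */

    free(block);  /* one free replaces ten */
    return 0;
}

(Note that the rewrite changes the deallocation contract: the ten
pointers can no longer be freed individually, which is exactly the sort
of condition such an analysis would have to verify.)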
Correct me if I am wrong. :-)
Best Regards,
Nai
>
> Another attack specific to a JIT is self-modifying code (SMC); if a
> piece of SMC can repeatedly get the VM to re-JIT code, the VM had
> better make sure that the JITting is done under the SMC's schedule,
> and using memory booked to the SMC. Otherwise, this is another DoS
> vulnerability possible due to improper resource accounting.
>
> I was going to ask about SMC in a separate e-mail, but since I brought
> it up here: can LLVM support languages with SMC or runtime code
> generation like MetaOCaml? I don't see how it could be done from what
> I understand of LLVM, but perhaps others see a way. It might be
> possible with additional intrinsics that invoke the JIT, but I don't
> see how native LLVM can express SMC.
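(Just to make "runtime code generation" concrete at the lowest level,
here is a bare-bones sketch for x86-64 Linux with no LLVM involved; the
byte sequence encodes "mov eax, 42; ret". A JIT ultimately does
something like this, only with bytes produced by a compiler:)

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

int main(void) {
    /* x86-64 machine code for: mov eax, 42; ret */
    unsigned char code[] = { 0xB8, 0x2A, 0x00, 0x00, 0x00, 0xC3 };

    /* Map a page that is writable and executable, then copy the code in. */
    void *buf = mmap(NULL, sizeof(code), PROT_READ | PROT_WRITE | PROT_EXEC,
                     MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED)
        return 1;
    memcpy(buf, code, sizeof(code));

    /* Call the freshly generated function. */
    int (*fn)(void) = (int (*)(void))buf;
    printf("generated code returned %d\n", fn());

    munmap(buf, sizeof(code));
    return 0;
}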
>
> Thanks for your detailed response. Hope I've been able to clear
> everything up. :-)
>
> Sandro
>
> > To answer the first question, yes, there are simple ways in which
> > LLVM can provide these protections. Programs compiled to LLVM
> > bytecode are either translated to native code ahead of time or run
> > with a JIT. Either way, each program is run within a separate process
> > which has its own set of operating system imposed memory limits and
> > CPU time limits. Code can execute the fork() and clone() system calls
> > to create new threads and processes (I'm not sure if our JIT can
> > handle multi-threaded code, but in theory, it could). You could, in
> > fact, write an LLVM transformation pass that inserts calls to
> > fork()/clone()/pthread_create()/etc. to create new processes/threads
> > as needed to enforce your policies. In short, a program compiled to
> > LLVM bytecode gets whatever protections the OS itself provides
> > against such attacks, and I believe these guarantees extend to the
> > JIT as well as to the code running within the JIT.
> >
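(A quick sketch of the OS-level enforcement John describes, using plain
POSIX calls: the parent forks, and the child imposes CPU and memory
limits on itself via setrlimit() before running the untrusted code. The
run_untrusted_code() stub is hypothetical; here it deliberately spins so
the CPU limit fires:)

#include <stdio.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

/* Stand-in for the JIT-compiled, untrusted code (hypothetical). */
static void run_untrusted_code(void) {
    for (;;) { }                    /* deliberate CPU DoS */
}

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {                 /* child: confine itself, then run */
        struct rlimit cpu = { 5, 5 };               /* 5 seconds of CPU */
        struct rlimit mem = { 64 << 20, 64 << 20 }; /* 64 MiB of memory */
        setrlimit(RLIMIT_CPU, &cpu);
        setrlimit(RLIMIT_AS, &mem);
        run_untrusted_code();
        _exit(0);
    }
    int status;
    waitpid(pid, &status, 0);       /* the parent survives the child's DoS */
    if (WIFSIGNALED(status))
        fprintf(stderr, "untrusted code killed by signal %d\n",
                WTERMSIG(status));
    return 0;
}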
> > To answer the second question, no, we have not done any research into
> > how to provide more fine-grained protections against these attacks in
> > our Secure Virtual Architecture (which is based on LLVM). In
> > particular, there is nothing that ensures that there are any
> > protections against these sorts of attacks on kernel-level code
> > compiled for SVA, and we have not done any work to ensure that our
> > JIT, if run below the OS, would be immune to such attacks.
> >
> > However, I suspect that adding such features would be an interesting
> > research question. I would think that static program analysis could
> > be used to determine whether code follows a particular memory
> > allocation/CPU usage policy. I think it would also be possible to use
> > automatic program transformation to modify code to have run-time
> > checks to ensure that such policies were followed. However, I'm not
> > familiar with the work in this area, so I don't know what has been
> > done or what challenges one would need to overcome.
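(To illustrate the kind of run-time check such a transformation might
insert, here is a sketch of an allocation wrapper that enforces a
per-program memory budget. The names checked_malloc and MEM_BUDGET are
mine, not part of LLVM or SVA:)

#include <stdio.h>
#include <stdlib.h>

/* Hypothetical policy: this program may allocate at most 1 MiB total. */
#define MEM_BUDGET ((size_t)1 << 20)

static size_t allocated_so_far = 0;

/* A transformation pass could rewrite every call to malloc() into a
 * call to this wrapper, which enforces the policy at run time. */
static void *checked_malloc(size_t n) {
    if (n > MEM_BUDGET - allocated_so_far) {
        fprintf(stderr, "memory policy violated: %zu bytes requested\n", n);
        abort();
    }
    allocated_so_far += n;
    return malloc(n);
}

int main(void) {
    int *p = checked_malloc(100 * sizeof(int));  /* within budget */
    free(p);
    /* A real wrapper would also hook free() to return bytes to the
     * budget; this sketch only tracks cumulative allocation. */
    return 0;
}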
> >
> > -- John T.