Chandler Carruth
2014-May-10 01:33 UTC
[LLVMdev] Finding safe thread suspension points while JIT-ing (was: Add pass run listeners to the pass manager.)
So, I'm bringing a discussion that has thus far been on llvm-commits to llvmdev because I finally (and thanks for helping me understand this, Andy) understand what is *really* going on, and I think lots of others need to be aware of this and involved to figure out the right path forward.

You can find the full review thread and more context under the subject "[PATCH][PM] Add pass run listeners to the pass manager.", but here is the important bit from Juergen's initial email:

> this patch provides the necessary C/C++ APIs and infrastructure to enable
> fine-grain progress report and safe suspension points after each pass in
> the pass manager.
>
> Clients can provide a callback function to the pass manager to call after
> each pass. This can be used in a variety of ways (progress report, dumping
> of IR between passes, safe suspension of threads, etc).

I had wrongly (perhaps because of the implementation, but still, wrongly) focused on the progress report and IR dumping use cases. It sounds (from talking to Andy offline, sorry for that confusion) like the real use case is safe suspension of threads. The idea is that we would have a callback from the pass manager into the LLVMContext which would be used to recognize safe points at which the entire LLVMContext could be suspended, and to expose these callbacks through the C API to JIT users.

Increasingly, I can't fathom a way to get a good design for safe suspension of JIT-ing threads using callbacks tied to when passes run today. I think it is a huge mistake to bake this into the C API at this point. If you need this functionality in the C API, with a design we can use going forward, I'd like to see a *really* careful write-up of exactly what the suspension-point requirements are and a design for achieving them. I think it should be completely independent of any infrastructure for reporting or dumping IR in pass managers.

I think something much simpler than this might work outside of the C API, where we can essentially change how it works when we start designing how multiple threads will actually work within an LLVMContext. Would that work? Is there a way to make progress more rapidly there?

Ultimately, this is opening a huge can of worms if we put it into the C API, as I think it is going to fundamentally impact what options we actually have for parallelizing parts of LLVM in the future. If we want to go there, we need to be *incredibly* explicit about what assumptions are being made here.

-Chandler
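To keep the rest of the thread concrete, here is a minimal sketch of the *shape* of the API being discussed, as seen from a JIT client. The names LLVMPassRunListenerHandlerTy and LLVMContextAddPassRunListener are hypothetical stand-ins, not the patch's actual interface; the only point is that the pass manager would call back into client code after each pass, and the client would treat that call as a potential safe point.

  // Sketch only: the typedef and registration function below are hypothetical
  // stand-ins for the patch under review, not an existing llvm-c interface.
  #include <cstdio>

  extern "C" {
  typedef struct LLVMOpaqueContext *LLVMContextRef; // opaque handle, as in llvm-c

  // Hypothetical: a callback the pass manager would invoke after each pass,
  // at a point where the whole LLVMContext is known to be quiescent.
  typedef void (*LLVMPassRunListenerHandlerTy)(LLVMContextRef Ctx, void *Opaque);

  // Hypothetical registration entry point in the C API.
  void LLVMContextAddPassRunListener(LLVMContextRef Ctx,
                                     LLVMPassRunListenerHandlerTy Handler,
                                     void *Opaque);
  }

  // A JIT client would treat every invocation as a point at which the
  // compiling thread may be safely paused.
  void OnAfterPass(LLVMContextRef Ctx, void *Opaque) {
    (void)Opaque;
    std::fprintf(stderr, "pass finished in context %p; safe to suspend here\n",
                 static_cast<void *>(Ctx));
  }

  int main() {
    // With a real context, the client would register the listener roughly as:
    //   LLVMContextAddPassRunListener(Ctx, OnAfterPass, nullptr);
    return 0;
  }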
Andrew Trick
2014-May-10 02:36 UTC
[LLVMdev] Finding safe thread suspension points while JIT-ing (was: Add pass run listeners to the pass manager.)
On May 9, 2014, at 6:33 PM, Chandler Carruth <chandlerc at google.com> wrote:

> So, I'm bringing a discussion that has thus far been on llvm-commits to llvmdev because I finally (and thanks for helping me understand this, Andy) understand what is *really* going on, and I think lots of others need to be aware of this and involved to figure out the right path forward.
>
> You can find the full review thread and more context under the subject "[PATCH][PM] Add pass run listeners to the pass manager.", but here is the important bit from Juergen's initial email:
>
> this patch provides the necessary C/C++ APIs and infrastructure to enable fine-grain progress report and safe suspension points after each pass in the pass manager.
>
> Clients can provide a callback function to the pass manager to call after each pass. This can be used in a variety of ways (progress report, dumping of IR between passes, safe suspension of threads, etc).
>
> I had wrongly (perhaps because of the implementation, but still, wrongly) focused on the progress report and IR dumping use cases. It sounds (from talking to Andy offline, sorry for that confusion) like the real use case is safe suspension of threads. The idea is that we would have a callback from the pass manager into the LLVMContext which would be used to recognize safe points at which the entire LLVMContext could be suspended, and to expose these callbacks through the C API to JIT users.

Good. Let’s table the discussion of how to report passes and just focus on the thread suspension API. It never occurred to me that a client using the new API for thread scheduling would not *already* be making an assumption about one thread per context. I believe anything else will break these clients regardless of the API. So I didn’t see this API as imposing a new restriction. The more explicit we can be about this, the better.

> Increasingly, I can't fathom a way to get a good design for safe suspension of JIT-ing threads using callbacks tied to when passes run today. I think it is a huge mistake to bake this into the C API at this point. If you need this functionality in the C API, with a design we can use going forward, I'd like to see a *really* careful write-up of exactly what the suspension-point requirements are and a design for achieving them. I think it should be completely independent of any infrastructure for reporting or dumping IR in pass managers.

Yes, there absolutely needs to be a way to expose functionality within LLVM in its current form through the C API. We can say that the API works under some explicit set of rules. If some future LLVM can be configured in a way that breaks the rules, you don’t get the callback in that case.

> I think something much simpler than this might work outside of the C API, where we can essentially change how it works when we start designing how multiple threads will actually work within an LLVMContext. Would that work? Is there a way to make progress more rapidly there?
>
> Ultimately, this is opening a huge can of worms if we put it into the C API, as I think it is going to fundamentally impact what options we actually have for parallelizing parts of LLVM in the future. If we want to go there, we need to be *incredibly* explicit about what assumptions are being made here.

Let’s be explicit then.

We will always need to be able to configure LLVM with one thread per context. Always. So it’s not like we’re adding something that could become unusable in the future. Does anyone disagree?
Incidentally, I have no idea why the callback would not work with parallel contexts. If you suspend a thread within a thread group, it is totally expected that the other threads will also eventually block.

Tangentially, how many other places do we assume that an LLVMContext corresponds to a thread?

-Andy
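For reference, the one-thread-per-context discipline described above is roughly what a C++ API client already looks like today. The sketch below is an illustration under that assumption, not code from the thread: each worker owns its LLVMContext and Module and never hands them to another thread, so no locking is needed and a suspension callback fired on that thread only ever stalls that one context (it assumes LLVM headers and libraries are available).

  // Sketch, assuming the LLVM C++ headers are available: each worker thread
  // owns its own LLVMContext and Module, so nothing is shared across threads.
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"
  #include <memory>
  #include <string>
  #include <thread>
  #include <vector>

  static void CompileJob(unsigned Id) {
    llvm::LLVMContext Ctx; // strictly thread-local; never shared
    auto M = std::make_unique<llvm::Module>(
        "jit_module_" + std::to_string(Id), Ctx); // owned by this thread only
    // ... build IR, run passes, JIT; all state stays on this thread ...
  }

  int main() {
    std::vector<std::thread> Workers;
    for (unsigned I = 0; I != 4; ++I)
      Workers.emplace_back(CompileJob, I); // one context per worker thread
    for (auto &W : Workers)
      W.join();
  }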
Juergen Ributzka
2014-May-10 18:11 UTC
[LLVMdev] Finding safe thread suspension points while JIT-ing (was: Add pass run listeners to the pass manager.)
On May 9, 2014, at 7:36 PM, Andrew Trick <atrick at apple.com> wrote:

> On May 9, 2014, at 6:33 PM, Chandler Carruth <chandlerc at google.com> wrote:
>
>> So, I'm bringing a discussion that has thus far been on llvm-commits to llvmdev because I finally (and thanks for helping me understand this, Andy) understand what is *really* going on, and I think lots of others need to be aware of this and involved to figure out the right path forward.
>>
>> You can find the full review thread and more context under the subject "[PATCH][PM] Add pass run listeners to the pass manager.", but here is the important bit from Juergen's initial email:
>>
>> this patch provides the necessary C/C++ APIs and infrastructure to enable fine-grain progress report and safe suspension points after each pass in the pass manager.
>>
>> Clients can provide a callback function to the pass manager to call after each pass. This can be used in a variety of ways (progress report, dumping of IR between passes, safe suspension of threads, etc).
>>
>> I had wrongly (perhaps because of the implementation, but still, wrongly) focused on the progress report and IR dumping use cases. It sounds (from talking to Andy offline, sorry for that confusion) like the real use case is safe suspension of threads. The idea is that we would have a callback from the pass manager into the LLVMContext which would be used to recognize safe points at which the entire LLVMContext could be suspended, and to expose these callbacks through the C API to JIT users.
>
> Good. Let’s table the discussion of how to report passes and just focus on the thread suspension API. It never occurred to me that a client using the new API for thread scheduling would not *already* be making an assumption about one thread per context. I believe anything else will break these clients regardless of the API. So I didn’t see this API as imposing a new restriction. The more explicit we can be about this, the better.

They not only have to make the assumption of one thread per context, but they actually have to enforce it. According to the comments in LLVMContext, there is no locking guarantee and the user/client has to be careful to use one context per thread. This is the current C API, and that is how clients are using it right now.

Any future extension to the LLVMContext and to the pass manager that changes this requirement - namely, running in parallel - should be backwards compatible. I don’t see how this could or should be an issue to begin with, as long as we default to the current single-threaded execution model per LLVMContext. Anything that changes this behavior would have to be explicitly requested by the client. That means there has to be a new C API call to communicate this information. For now, all threads are created by the client, and I think this should stay that way in the future.

>> Increasingly, I can't fathom a way to get a good design for safe suspension of JIT-ing threads using callbacks tied to when passes run today. I think it is a huge mistake to bake this into the C API at this point. If you need this functionality in the C API, with a design we can use going forward, I'd like to see a *really* careful write-up of exactly what the suspension-point requirements are and a design for achieving them. I think it should be completely independent of any infrastructure for reporting or dumping IR in pass managers.
>
> Yes, there absolutely needs to be a way to expose functionality within LLVM in its current form through the C API.
> We can say that the API works under some explicit set of rules. If some future LLVM can be configured in a way that breaks the rules, you don’t get the callback in that case.

It is already a conscious choice of the client if and how to use threads. This choice already affects how the callbacks we already have are implemented by the client. The same would apply to the proposed callback. The client knows the conditions exactly, because it is in full control of setting up the environment.

>> I think something much simpler than this might work outside of the C API, where we can essentially change how it works when we start designing how multiple threads will actually work within an LLVMContext. Would that work? Is there a way to make progress more rapidly there?
>>
>> Ultimately, this is opening a huge can of worms if we put it into the C API, as I think it is going to fundamentally impact what options we actually have for parallelizing parts of LLVM in the future. If we want to go there, we need to be *incredibly* explicit about what assumptions are being made here.

Yes, this will definitely impact the design, but only in a positive way :D

There is only one big requirement, and it is a given: the thread cannot hold a global mutex when making this call. That would deadlock everything - even other concurrently running contexts in today's implementation. When a thread group is running concurrently in the future pass manager, it is clear that suspending any thread in that group might deadlock the remaining threads in the group, and that is perfectly fine. Having this callback fire concurrently is fine too; the client created a parallel pass manager and has to make the callback thread-safe.

The important thing here is that LLVM is holding the thread hostage and we need control back to suspend it safely. It is possible to suspend the thread from outside, but then it might be inside a system call or library call that holds a mutex, which could deadlock the whole application. By giving control back to the client via the callback, we know that this cannot happen. We know that LLVM might hold some mutex local to the context, but that is fine and won’t deadlock the whole application.

-Juergen

> Let’s be explicit then.
>
> We will always need to be able to configure LLVM with one thread per context. Always. So it’s not like we’re adding something that could become unusable in the future. Does anyone disagree?
>
> Incidentally, I have no idea why the callback would not work with parallel context. If you suspend a thread within a thread group, it is totally expected that the other threads will also eventually block.
>
> Tangentially, how many other places do we assume that an LLVMContext corresponds to a thread?
>
> -Andy
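To make the hand-back-control flow described above concrete, here is a minimal client-side sketch. It is an illustration with a hypothetical callback signature, not code from the patch: the callback touches only client-owned state and parks the compiling thread on a condition variable, so suspension never happens while the thread might be inside a system or library call that holds a mutex.

  // Client-side suspension gate (sketch). The AfterPassCallback signature is a
  // hypothetical stand-in for whatever the pass manager would invoke after each
  // pass; only client-owned state is touched, so no LLVM-internal or system
  // mutex can be held across the suspension.
  #include <condition_variable>
  #include <mutex>

  class SuspensionGate {
    std::mutex M;
    std::condition_variable CV;
    bool Suspended = false;

  public:
    // Called by a controller thread to request suspension at the next safe point.
    void requestSuspend() {
      std::lock_guard<std::mutex> Lock(M);
      Suspended = true;
    }

    // Called by the controller thread to let the compiling thread continue.
    void resume() {
      {
        std::lock_guard<std::mutex> Lock(M);
        Suspended = false;
      }
      CV.notify_all();
    }

    // Called from the compiling thread at each safe point; blocks while a
    // suspension is requested and returns immediately otherwise.
    void safePoint() {
      std::unique_lock<std::mutex> Lock(M);
      CV.wait(Lock, [this] { return !Suspended; });
    }
  };

  // Client-owned state shared between the controller and the JIT thread.
  SuspensionGate Gate;

  // Hypothetical after-each-pass callback handed to the pass manager through
  // the C API; LLVM gives control back here, and the thread parks itself.
  void AfterPassCallback(void * /*Ctx*/, void * /*Opaque*/) {
    Gate.safePoint();
  }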