Here's how I did things, back when I got to write my own infrastructure.
It's still O(n^2) in the worst case, but *much* quicker in the expected case.

Number all the basic blocks, 0 to n-1, and build a vector mapping from
integer to block. Requires O(n) time and space.

For each block, compute the set containing its dominance frontier, based on
Figure 10 of

    *Efficiently Computing Static Single Assignment Form and ...*
    Cytron, Ferrante, Rosen, Wegman

During the calculation, represent the current frontier, DF(X), as suggested in

    *An Efficient Representation for Sparse Sets*
    Briggs, Torczon

This lets us add members and clear the set in constant time, and allows us to
enumerate the members in time proportional to the number of members. O(n)
space.

Then, having computed the DF for a single block, allocate a vector of the
appropriate length and copy the members of the set into the vector. This
requires time and space proportional to the actual size of the DF, typically
much smaller than O(n) per block.

When you're done computing DFs, throw away the big sparse set (you only need
one as a workspace) and retain the dense vectors.

When placing phi nodes, Figure 11 of Cytron et al., the only use of the DF is
to walk over the members in a for loop, and the dense vector is perfect for
the occasion. More general representations waste space.

So, the only real trick is to change the set representation between the time
the DFs are computed and the time they are used. Some people object to
numbering the basic blocks, but I've never understood this objection. I did
it often, whenever I needed it; it's not some expensive property that you
need to maintain across many phases.

Preston

On Fri, Dec 23, 2011 at 3:53 PM, Chris Lattner <clattner at apple.com> wrote:
>
> On Dec 23, 2011, at 1:35 PM, Preston Briggs wrote:
>
> > Reading the comments in Analysis/DominanceFrontier.h, I see a note that
> > the structure is deprecated and we're not to use it for anything new.
> >
> > Has it been replaced with something equally useful, or shall I redo the
> > calculation for myself, or ...?
>
> We're hoping to remove them, because they're inherently an N^2 data
> structure and there are usually better ways to do things. What are you
> trying to do? SSA construction?
>
> -Chris
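For concreteness, here is a minimal C++ sketch of the representation switch
described above: one Briggs-Torczon sparse set reused as a workspace, with
each finished DF(X) copied into a right-sized dense vector. It assumes the
blocks have already been numbered 0 to n-1; the names (SparseSet,
computeAllDFs, denseDF) are invented for the sketch and are not code from
this thread or from LLVM.

#include <cstdint>
#include <vector>

// Briggs-Torczon sparse set over block numbers 0..n-1: O(1) insert and
// clear, enumeration proportional to the number of members, O(n) space.
class SparseSet {
  std::vector<uint32_t> dense;   // the members, packed
  std::vector<uint32_t> sparse;  // block number -> index into 'dense'
  uint32_t members = 0;
public:
  explicit SparseSet(uint32_t n) : dense(n), sparse(n) {}
  bool contains(uint32_t x) const {
    return sparse[x] < members && dense[sparse[x]] == x;
  }
  void insert(uint32_t x) {
    if (!contains(x)) { sparse[x] = members; dense[members++] = x; }
  }
  void clear() { members = 0; }  // no need to touch the arrays
  const uint32_t *begin() const { return dense.data(); }
  const uint32_t *end() const { return dense.data() + members; }
};

// One workspace for the whole routine; per block, only the dense copy of
// DF(x) is retained, costing space proportional to |DF(x)| rather than n.
std::vector<std::vector<uint32_t>> computeAllDFs(uint32_t n) {
  SparseSet workspace(n);
  std::vector<std::vector<uint32_t>> denseDF(n);
  for (uint32_t x = 0; x < n; ++x) {
    workspace.clear();
    // ... fill 'workspace' with DF(x), per Figure 10 of Cytron et al. ...
    denseDF[x].assign(workspace.begin(), workspace.end());
  }
  return denseDF;  // the workspace dies here; the dense vectors survive
}

The copy into denseDF[x] is the only representation switch needed; everything
downstream just walks the members in order.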
I executed

    cmake -G "Visual Studio 10" .

on Windows, and got the following error message. I have Python installed and
in the path.

CMake Error at CMakeLists.txt:256 (message):
  Unexpected failure executing llvm-build: Traceback (most recent call last):

    File "C:/Users/xzx/llvm/utils/llvm-build/llvm-build", line 3, in <module>
      import llvmbuild
    File "C:\Users\xzx\llvm\utils\llvm-build\llvmbuild\__init__.py", line 1, in <module>
      from main import main

  ImportError: No module named main

-- Configuring incomplete, errors occurred!
Python 2.x works. Thanks, Joey.

On 2011/12/25 18:41, Joey wrote:
> Which version of Python?
>
> Looks like the same error I had when trying to use 3.x. You have to use a
> 2.x version of Python.
>
> Joey
>
> On Sunday, 25 December 2011, Xu Zhongxing <xu_zhong_xing at 163.com> wrote:
> > I executed
> >
> >     cmake -G "Visual Studio 10" .
> >
> > on Windows, and got the following error message. I have Python installed
> > and in the path.
> >
> > CMake Error at CMakeLists.txt:256 (message):
> >   Unexpected failure executing llvm-build: Traceback (most recent call last):
> >
> >     File "C:/Users/xzx/llvm/utils/llvm-build/llvm-build", line 3, in <module>
> >       import llvmbuild
> >     File "C:\Users\xzx\llvm\utils\llvm-build\llvmbuild\__init__.py", line 1, in <module>
> >       from main import main
> >
> >   ImportError: No module named main
> >
> > -- Configuring incomplete, errors occurred!
On Dec 24, 2011, at 7:48 AM, Preston Briggs wrote:

> Here's how I did things, back when I got to write my own infrastructure.
> It's still O(n^2) in the worst case, but much quicker in the expected case.
>
> Number all the basic blocks, 0 to n-1, and build a vector mapping from
> integer to block. Requires O(n) time and space.
>
> For each block, compute the set containing its dominance frontier, based on
> Figure 10 of
>
>     Efficiently Computing Static Single Assignment Form and ...
>     Cytron, Ferrante, Rosen, Wegman
>
> During the calculation, represent the current frontier, DF(X), as suggested
> in
>
>     An Efficient Representation for Sparse Sets
>     Briggs, Torczon
>
> This lets us add members and clear the set in constant time, and allows us
> to enumerate the members in time proportional to the number of members.
> O(n) space.
>
> Then, having computed the DF for a single block, allocate a vector of the
> appropriate length and copy the members of the set into the vector. This
> requires time and space proportional to the actual size of the DF,
> typically much smaller than O(n) per block.
>
> When you're done computing DFs, throw away the big sparse set (you only
> need one as a workspace) and retain the dense vectors.
>
> When placing phi nodes, Figure 11 of Cytron et al., the only use of the DF
> is to walk over the members in a for loop, and the dense vector is perfect
> for the occasion. More general representations waste space.
>
> So, the only real trick is to change the set representation between the
> time the DFs are computed and the time they are used. Some people object to
> numbering the basic blocks, but I've never understood this objection. I did
> it often, whenever I needed it; it's not some expensive property that you
> need to maintain across many phases.

The other major problem that I forgot to mention is the "updating" problem.
If you have many clients that are using DF, then you want anything that
changes the CFG to update DF. This can be non-trivial, and (again) is a
waste of time in most cases IMO.

-Chris
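As a purely illustrative alternative to incremental updating (none of these
names or types exist in LLVM, and this is not code from the thread), a client
could cache the dense DF vectors against a CFG version number and rebuild
them wholesale whenever the CFG has changed since the last computation:

#include <cstdint>
#include <vector>

struct CFG {
  uint64_t version = 0;    // assume every edge-changing edit bumps this
  uint32_t numBlocks = 0;
  // ... blocks and edges elided ...
};

// Stub standing in for the full Figure-10-style DF computation.
static std::vector<std::vector<uint32_t>> computeAllDFs(const CFG &cfg) {
  return std::vector<std::vector<uint32_t>>(cfg.numBlocks);
}

class DominanceFrontierCache {
  uint64_t computedAt = UINT64_MAX;            // "never computed"
  std::vector<std::vector<uint32_t>> denseDF;  // per-block DF, dense vectors
public:
  // A stale cache is simply rebuilt; nothing ever patches DF in place.
  const std::vector<std::vector<uint32_t>> &get(const CFG &cfg) {
    if (computedAt != cfg.version) {
      denseDF = computeAllDFs(cfg);
      computedAt = cfg.version;
    }
    return denseDF;
  }
};

Whether recomputation beats updating in place depends on how often the CFG
changes between uses, which is the trade-off being weighed here.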
On Wed, Dec 28, 2011 at 11:39 PM, Chris Lattner <clattner at apple.com> wrote:
> The other major problem that I forgot to mention is the "updating" problem.
> If you have many clients that are using DF, then you want anything that
> changes the CFG to update DF. This can be non-trivial, and (again) is a
> waste of time in most cases IMO.

Interesting. I never updated my DF representation; instead, I computed the
DFs from scratch whenever necessary. But I only used them for placing phi
nodes, and I only did that in a batch fashion. That is, when I converted a
routine to SSA form, I built DFs for the entire routine and then used them to
insert phi nodes for the entire routine. If I was in the middle of register
allocation (when the CFG didn't change), I carried the DFs along; otherwise,
I threw them away.

I expect most of our difference in perspective goes back to my paying
attention to the performance of individual optimizations, versus the
performance of the compiler as a whole.

Preston
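For the batch phi placement described above, here is a hedged C++ sketch in
the spirit of Figure 11 of Cytron et al., again with invented names and not
taken from the thread: it places phis for a single variable, consuming the
per-block dense DF vectors.

#include <cstdint>
#include <vector>

using BlockId = uint32_t;

// denseDF[b] = dominance frontier of block b, as a dense vector.
// defSites   = blocks containing an assignment to the variable in question.
// Returns the blocks that receive a phi node for that variable.
std::vector<BlockId>
placePhis(const std::vector<std::vector<BlockId>> &denseDF,
          const std::vector<BlockId> &defSites) {
  const size_t n = denseDF.size();
  std::vector<bool> hasPhi(n, false), onWorklist(n, false);
  std::vector<BlockId> worklist(defSites), phiBlocks;
  for (BlockId b : defSites) onWorklist[b] = true;

  while (!worklist.empty()) {
    BlockId x = worklist.back();
    worklist.pop_back();
    for (BlockId y : denseDF[x]) {   // the only use of DF: a simple walk
      if (!hasPhi[y]) {
        hasPhi[y] = true;
        phiBlocks.push_back(y);
        if (!onWorklist[y]) {        // a phi is also a definition
          onWorklist[y] = true;
          worklist.push_back(y);
        }
      }
    }
  }
  return phiBlocks;
}

Since the only operation performed on DF(x) is that inner walk, the dense
vector is exactly the right shape for this phase.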