Quentin Colombet via llvm-dev
2018-Nov-28 01:01 UTC
[llvm-dev] [RFC] Tablegen-erated GlobalISel Combine Rules
Hi Daniel,

Let me try to clarify my concern.

Le mar. 27 nov. 2018 à 14:23, Daniel Sanders <daniel_l_sanders at apple.com> a écrit :

> On Nov 26, 2018, at 10:11, Quentin Colombet <quentin.colombet at gmail.com> wrote:
>
> Hi Daniel,
>
> Thanks for the reply!
>
> Le lun. 26 nov. 2018 à 10:00, Daniel Sanders <daniel_l_sanders at apple.com> a écrit :
>
> Hi Quentin,
>
> Sorry for the slow reply.
>
> On Nov 16, 2018, at 09:25, Quentin Colombet <quentin.colombet at gmail.com> wrote:
>
> Hi Daniel,
>
> I finally read the proposal.
>
> I have a couple of comments that reinforce what I was thinking initially: it is too early IMHO to develop a full tablegen backend for this. I believe we still miss ways to represent things that matter (see my comments below) and I am afraid we still don't know all that needs to be represented.
>
> It can reach across basic blocks when it's safe to do so. We may need to introduce a means to limit the inter-basic-block reach as I can foresee situations where it may pull expensive operations into a loop if it's not careful about this.
>
> I think we would need a syntax for that and, in particular, I don't think a generic heuristic based on frequency can solely solve the problem (i.e., what you're hinting at in your RFC).
>
> That's something I'm intending to look into properly as part of the algorithm work. The syntax can support whatever mechanism we need for this through GIMatchPredicate subclasses, and there's room for syntactic sugar by defining new operator classes.
>
> That's the kind of thing that worries me. My cynical view makes me think that more syntactic sugar sounds like we develop a hack on top of the suggested syntax, thus I'm pushing for more forward thinking. I admit I may be wrong and that additional syntactic sugar would fit well, but I haven't seen evidence one way or another. Therefore, again, I am hesitant to stamp a new tablegen backend based on the few elements we have.
>
> I don't think I understand what you're getting at.

What I am getting at is I feel we are defining a model for a problem we don't fully understand yet. I understand the model is flexible enough to inject C++ code wherever we want, but I question why we should have to do that if we had captured the problem properly in the first place. I am not saying we don't capture it properly with this design; I am again saying that I don't have evidence to know.

The reason I am so hesitant here is that it sounds like this syntax is the final solution, whereas I think we could explore other directions (for instance, extending tablegen itself). However, by committing to this syntax/tablegen backend, when people start to adopt it, we will have to support them and, more importantly, migrate them if we decide to change something; that's why I believe this is premature.

Now, again, you're the one doing the work, so if you believe this step brings us closer to the final design, I have no issue with that :).

Cheers,
-Quentin

> At the moment we have C++ code fragments which can be given names via GIMatchPredicate and are invoked like a macro with '(GIMatchPredicateDef arg1, arg2)', and everything in the current match section can be implemented in terms of that. We have instruction matching which is invoked with '(InstructionDef op1, op2, op3)' so we can analyze the structure of the DAG and decide how best to match it. We have room for different kinds of code blocks using '[{ ... some code ... }]' and can choose what they mean. We can also have other kinds of matching invoked by '(SomethingWeHaventThoughtOfYet ...)', which tblgen can distinguish from instruction matching by checking the base class of the 'SomethingWeHaventThoughtOfYet' def.
>
> We have lots of extension points we can use to address things beyond match/apply. What kind of things do you want to consider?
>
> Another thing that we miss has to do with the number of users. E.g., should a pattern only be applied if there is only one user? Is it valid if there is more than one user? If we cannot move something into the destination block, can we still apply the pattern if we move the result into the source block? Etc.
>
> The existing hasOneUse() checks can be done by defining a GIMatchPredicate like so:
>
>   def hasOneUse : GIMatchPredicate<(ins reg:$A), bool, [{
>     return MRI.hasOneUse(${A});
>   }]>;
>
> and then using it in the match section. Improving on hasOneUse() to support the cases where some/all uses are combinable will require algorithm changes, and I'm intending to look into that as part of that work.
>
> The number of users may not be that relevant, but what this gives us is a notion of profitability (it is just how SDISel does this). This actually ties back to the inter-basic-block case: is this pattern profitable if it is between basic blocks with different frequencies? In other words, I believe we should start to think in terms of how profitable a pattern is. E.g., the source pattern has some cost and the destination pattern has some other cost. Applying the pattern should give us a profitability score, which would be higher when it produces dead code (i.e., what the oneUse check tries to identify in SDISel).
>
> Following that logic, we may get away with all the disabling-specific-patterns kind of thing. I.e., if something is not profitable it does not get applied.
>
> I agree. We'll need to decide on precise means to measure profitability, but the underlying mechanism to support this is the metadata. I'd expect a portion of the cost measurement to be gathered implicitly (more nodes => higher cost, some contribution from the scheduler model, etc.) and a portion to be gathered explicitly via GIMatchPredicate. That value (or values) would then be delivered to the apply step via metadata.
>
> All in all, the syntax that you're suggesting is reasonable (though IMHO MIR is going to limit our ability to share more logic with IR, but maybe that's fine to give up on that goal?), but I feel it actually distracts us from thinking about the problems we are solving and thus I am hesitant to stamp a work that would write a new tablegen backend.
>
> I think on that last point we're drawing opposite conclusions from the same thing :-). I think you see the syntax as defining the Combine algorithm (is that right?)
>
> No, I really worry that we tie ourselves to a syntax that is not the right abstraction to define the work. Again, where I would like to go is a description that can be used for high-level combines as well, and MIR seems wrong for this.
>
> There are two things that get locked in. The first is the overall structure:
>
>   (defs ...)
>   (match ...)
>   (apply ...)
>
> 'match' and 'apply' directly correspond to the two major steps in a combine algorithm: recognizing existing IR and changing it to something else. I think these two are safe to lock in as they're already key actions that any combiner implementation must do. 'defs' is less clear since it's not as directly connected to a combiner's actions, but I still think it's safe to lock in as a concept since the match step needs to tell the apply step what it matched.
>
> The other bit that gets locked in is that the contents of these sections must fit within tblgen's 'dag' syntax. That doesn't seem like much of a burden as code blocks and DagArg/DagArgList (https://llvm.org/docs/TableGen/LangRef.html#values) are very lenient about their contents. The operator DagArg can be any def within tablegen, and different backends can act on those defs as they see fit.
>
> The idea of sharing code between InstCombine and MIR for combines that make sense to both isn't something I intend to work on, but it does fit within the syntax proposed. It would be something along the lines of:
>
>   class IRIndependentBinaryOp;
>   def IRIndependentAdd;
>   def IRIndependentMul;
>   def : GICombineRule<(defs ...),
>                       (match (IRIndependentMul $C, $A, 2)),
>                       (apply (IRIndependentAdd $C, $A, $A))>;
>
> The backend(s) would need to know how to match and create LLVM-IR and MIR versions of IRIndependentMul and IRIndependentAdd, much like the GlobalISel Combiner backend needs to know how to match and create defs that inherit from Instruction.
>
> * Just as an aside, it's technically an extension on the original syntax and could co-exist with the MIR code blocks, but there's little point in supporting both methods.
>
> but I see the syntax as abstracting it away from the rules. One of my key goals with this syntax is to allow the reorganization of the match and apply steps so that I can try out a different Combine algorithm without blocking the development of Combines with the current algorithm.
>
> That said, again, I am not the one doing the work!
>
> Cheers,
> -Quentin
>
> [snip]
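[Editor's note: pulling together the pieces discussed in this message, a complete rule under the proposed syntax might look roughly like the sketch below. The rule name, the defs operands, and the shift-fold itself are illustrative assumptions, not taken from the RFC; only GICombineRule, GIMatchPredicate, and the defs/match/apply structure come from the thread.]

```tablegen
// Hypothetical sketch: fold (x + x) into (x << 1) under the proposed
// defs/match/apply structure. G_ADD/G_SHL are GlobalISel generic opcodes;
// the exact operand conventions here are assumed for illustration.
def hasOneUse : GIMatchPredicate<(ins reg:$A), bool, [{
  return MRI.hasOneUse(${A});
}]>;

def AddToShlSketch : GICombineRule<
  (defs root:$D, reg:$A),
  (match (G_ADD $D, $A, $A),
         (hasOneUse $A)),          // only fire when $A has a single use
  (apply (G_SHL $D, $A, 1))>;
```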
Amara Emerson via llvm-dev
2018-Nov-28 18:34 UTC
[llvm-dev] [RFC] Tablegen-erated GlobalISel Combine Rules
> On Nov 27, 2018, at 5:01 PM, Quentin Colombet via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
> [snip]
>
> What I am getting at is I feel we are defining a model for a problem we don't fully understand yet. I understand the model is flexible enough to inject C++ code wherever we want, but I question why we should have to do that if we had captured the problem properly in the first place. I am not saying we don't capture it properly with this design; I am again saying that I don't have evidence to know.
>
> The reason I am so hesitant here is that it sounds like this syntax is the final solution, whereas I think we could explore other directions (for instance, extending tablegen itself). However, by committing to this syntax/tablegen backend, when people start to adopt it, we will have to support them and, more importantly, migrate them if we decide to change something; that's why I believe this is premature.

I think Daniel’s efforts here were to allow that kind of change to happen without disrupting users much/at all.

Ultimately it comes down to whether or not the expression of combines accurately captures the semantics of what the combines are really trying to do. Having the expressions be declarative is a big step in that direction IMO. In the absence of concrete examples where such a semantic representation isn’t sufficient to do something important, where is the path forward? Should we have more examples of existing combines being represented in this way as evidence? And are there any realistic alternatives for declarative representations of combines?

Amara

> Now, again, you're the one doing the work, so if you believe this step brings us closer to the final design, I have no issue with that :).
>
> Cheers,
> -Quentin
>
> [snip]
>
> _______________________________________________
> LLVM Developers mailing list
> llvm-dev at lists.llvm.org
> http://lists.llvm.org/cgi-bin/mailman/listinfo/llvm-dev
Quentin Colombet via llvm-dev
2018-Nov-28 18:40 UTC
[llvm-dev] [RFC] Tablegen-erated GlobalISel Combine Rules
Le mer. 28 nov. 2018 à 10:34, Amara Emerson <aemerson at apple.com> a écrit :

> [snip]
>
> Ultimately it comes down to whether or not the expression of combines accurately captures the semantics of what the combines are really trying to do. Having the expressions be declarative is a big step in that direction IMO. In the absence of concrete examples where such a semantic representation isn’t sufficient to do something important, where is the path forward? Should we have more examples of existing combines being represented in this way as evidence? And are there any realistic alternatives for declarative representations of combines?

Realistically, I would have thought we can use the syntax we already have for SDISel. In other words, people wouldn’t have to learn yet another way of doing combines, and when we are ready we can move everything (isel + combines) in one go.

> Amara
>
> [snip]
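[Editor's note: for comparison, the existing SDISel-style TableGen syntax Quentin alludes to expresses a rewrite as a Pat, with the source DAG on the left and its replacement on the right. The specific fold and names below (mul-by-2 to add, GPR32, ADDWrr) are an illustrative AArch64-flavoured sketch, not taken from the thread.]

```tablegen
// Rough sketch of the existing SDISel pattern style: match a multiply
// by 2 of a 32-bit GPR value and emit a register-register add instead.
// Register class and instruction names are assumptions for illustration.
def : Pat<(mul GPR32:$a, (i32 2)),
          (ADDWrr GPR32:$a, GPR32:$a)>;
```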
Justin Bogner via llvm-dev
2018-Nov-28 18:54 UTC
[llvm-dev] [RFC] Tablegen-erated GlobalISel Combine Rules
Amara Emerson via llvm-dev <llvm-dev at lists.llvm.org> writes:

> On Nov 27, 2018, at 5:01 PM, Quentin Colombet via llvm-dev <llvm-dev at lists.llvm.org> wrote:
>
>> What I am getting at is I feel we are defining a model for a problem we don't fully understand yet. I understand the model is flexible enough to inject C++ code wherever we want, but I question why we should have to do that if we had captured the problem properly in the first place. I am not saying we don't capture it properly with this design; I am again saying that I don't have evidence to know.
>>
>> The reason I am so hesitant here is that it sounds like this syntax is the final solution, whereas I think we could explore other directions (for instance, extending tablegen itself). However, by committing to this syntax/tablegen backend, when people start to adopt it, we will have to support them and, more importantly, migrate them if we decide to change something; that's why I believe this is premature.
>
> I think Daniel’s efforts here were to allow that kind of change to happen without disrupting users much/at all.
>
> Ultimately it comes down to whether or not the expression of combines accurately captures the semantics of what the combines are really trying to do. Having the expressions be declarative is a big step in that direction IMO. In the absence of concrete examples where such a semantic representation isn’t sufficient to do something important, where is the path forward? Should we have more examples of existing combines being represented in this way as evidence?

To add to this, I think we'll be in a better position to change things once we adopt declarative expressions of combines like Daniel is suggesting than we are now. With the declarative model, the risk is that we've come up with something overconstrained and have to come up with something more expressive. If this happens, it should be reasonable to script updating the rules to whatever that more expressive form is, whether that's some different tablegen syntax or even directly expressing things in C++ with a bit of library support.

If we go with the alternative of continuing to write all of the rules in C++ for now, it'll be much more difficult to change in the future. The expressiveness of C++ is unconstrained, and when the same thing can be written six different ways, it inevitably will be. If we decide to change our model in the future, updating the rules from arbitrary C++ to that model will be a manual and labour-intensive process.

Basically, I agree that we risk having to migrate people in the future, but I think we already have that risk with the status quo, and this proposal will make it more practical to handle if we end up needing to.

> And are there any realistic alternatives for declarative representations of combines?
>
> Amara
>
> [snip]