Displaying 4 results from an estimated 4 matches for "ir2native".
2020 Apr 08 · 6 · RFC: a practical mechanism for applying Machine Learning for optimization policies in LLVM
...on, model training, and iterative data collection/model training. We use TensorFlow as our ML framework.
Relatedly, we also needed to learn a separate model that evaluates the native size of a function, given its IR, in order to calculate a more precise reward for the reinforcement learning algorithm (“IR2Native”). We evaluated ‘just counting IR’ and TargetTransformInfo, but they provided too noisy a signal for the reward, insofar as the RL training algorithm for the inlining model was concerned. This model is only used during training.
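The RFC does not show code for this estimator. As a rough illustration only, here is a minimal TensorFlow sketch of the idea: a small regressor from IR-level features to native size, plus a reward helper built on it. The function names, the feature count, and the specific architecture are all assumptions for the sake of the example, not the project's actual implementation.

import tensorflow as tf

NUM_IR_FEATURES = 8  # hypothetical feature count (e.g. instruction/block counts)

def build_ir2native_model() -> tf.keras.Model:
    """Small regressor: IR-level features -> estimated native size in bytes."""
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, activation="relu",
                              input_shape=(NUM_IR_FEATURES,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1),  # predicted native size
    ])
    model.compile(optimizer="adam", loss="mse")
    return model

def size_reward(model, features_before, features_after):
    """Reward for one inlining decision: estimated native bytes saved.

    The learned estimator stands in for 'just counting IR' or
    TargetTransformInfo, which the RFC found too noisy for RL training.
    """
    before = float(model(tf.constant([features_before], dtype=tf.float32))[0, 0])
    after = float(model(tf.constant([features_after], dtype=tf.float32))[0, 0])
    return before - after  # positive when the decision shrank the function

In this sketch the estimator would be fit offline on pairs of IR features and measured native sizes (model.fit(features, sizes)), and, as the excerpt notes, it is consulted only during training, not when the deployed inlining policy runs.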
RL - Training data collection: the training data we nee...
2020 Apr 08 · 2 · RFC: a practical mechanism for applying Machine Learning for optimization policies in LLVM
2020 Apr 09 · 3 · RFC: a practical mechanism for applying Machine Learning for optimization policies in LLVM
2020 Apr 09 · 2 · RFC: a practical mechanism for applying Machine Learning for optimization policies in LLVM