search for: raw_fd_stream

Displaying 8 results from an estimated 8 matches for "raw_fd_stream".

Did you mean: raw_fd_ostream
2010 Jun 16
1
[LLVMdev] Should file opening error during raw_fd_stream::raw_fd_stream exit instead of passing the error up to the caller?
In several places, for example in JITDebugRegisterer::MakeELF, a stream is opened and the error is ignored even when an error is actually returned by open. There are two solutions (assuming there are no exceptions): 1. check the error string after every raw_fd_stream::raw_fd_stream and fix all the places where the check is missing; 2. make raw_fd_stream::raw_fd_stream exit. I suggest the second, since there are really no situations in which the file error should be tolerated. Yuri
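A minimal sketch of option 1 against today's API (the class is now raw_fd_ostream, and its constructor reports open() failures through a std::error_code out-parameter rather than the error string this 2010 message refers to); the function name and path handling are illustrative only:

    #include "llvm/Support/FileSystem.h"
    #include "llvm/Support/raw_ostream.h"

    using namespace llvm;

    // Option 1 from the message: check for failure right after constructing the
    // stream and pass the error up to the caller instead of exiting.
    bool writeOutput(StringRef Path) {
      std::error_code EC;
      raw_fd_ostream OS(Path, EC, sys::fs::OF_None);
      if (EC) {
        errs() << "cannot open '" << Path << "': " << EC.message() << "\n";
        return false; // the caller decides what to do with the failure
      }
      OS << "...contents...\n";
      return true;
    }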
2015 Jan 23
3
[LLVMdev] Behaviour of outs()?
I was just fixing a bug that was caused by `stdout` being closed before the runtime has done `fflush(stdout)` [or however this is implemented in the runtime]. The cause of this seems to be that `outs()` returns a static object created from `raw_fd_stream(STDOUT_FILENO, true)` - the `true` being the `shouldClose` parameter. Surely LLVM is not supposed to close `stdout` as part of its operations? -- Mats
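A minimal sketch (not the actual LLVM source) of the pattern the message describes: a function-local static stream wrapping STDOUT_FILENO whose destructor closes fd 1 at program exit because shouldClose is true. The function name is made up:

    #include <unistd.h>
    #include "llvm/Support/raw_ostream.h"

    using namespace llvm;

    // With shouldClose == true the destructor of the static object close()s
    // STDOUT_FILENO during static destruction, so a later fflush(stdout) in the
    // C runtime operates on an already-closed descriptor.
    raw_fd_ostream &myOuts() {
      static raw_fd_ostream S(STDOUT_FILENO, /*shouldClose=*/true); // problematic
      return S;
    }

Passing shouldClose = false (or flushing rather than closing at destruction) avoids handing ownership of stdout to LLVM, which is what the thread is objecting to.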
2015 Jan 23
3
[LLVMdev] Behaviour of outs()?
...> > I was just fixing a bug that was caused by `stdout` being closed > > before the runtime has done `fflush(stdout)` [or however this is > > implemented in the runtime]. > > > > The cause of this seems to be that `outs()` returns a static object > > created from `raw_fd_stream(STDOUT_FILENO, true)` - the `true` being > > the `shouldClose` parameter. > > > > Surely LLVM is not supposed to close `stdout` as part of its operations? > > Looks like this was added in r111643: > > commit 5d56d9df928c48571980efe8d4205de8ab557b7c > Author: Dan Goh...
2018 Mar 22
2
Commit module to Git after each Pass
Oh, well... as usual the answer appears to be pretty obvious. 99% of the time is spent inside the plain write. -print-after-all prints into llvm::errs(), which is an *unbuffered* raw_fd_stream. And -git-commit-after-all opens a *buffered* raw_fd_stream. As soon as I hacked -print-after-all to use a buffered stream to stderr, performance went up to the normal expected values: ] time bin/opt -O1 big-ir.ll -disable-output -print-after-all -print-module-scope 2>&1 | grep -c "^...
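A minimal sketch of the "buffered stream to stderr" hack described above, assuming the raw_fd_ostream fd constructor and Module::print; the helper name is hypothetical and this is not the actual patch:

    #include <unistd.h>
    #include "llvm/IR/Module.h"
    #include "llvm/Support/raw_ostream.h"

    using namespace llvm;

    // errs() is deliberately unbuffered, so each small write hits the OS
    // directly.  Wrapping STDERR_FILENO in our own stream with an explicit
    // buffer batches the many small writes into a few large ones.
    void printModuleBuffered(const Module &M) {
      raw_fd_ostream BufferedErr(STDERR_FILENO, /*shouldClose=*/false);
      BufferedErr.SetBufferSize(256 * 1024);
      M.print(BufferedErr, /*AAW=*/nullptr);
      BufferedErr.flush(); // flush before the stream goes out of scope
    }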
2018 Jun 14
3
Commit module to Git after each Pass
...Sergeev via llvm-dev <llvm-dev at lists.llvm.org> wrote: > >> Oh, well... as usual the answer appears to be pretty obvious. >> 99% of the time is spent inside the plain write. >> >> -print-after-all prints into llvm::errs(), which is an *unbuffered* >> raw_fd_stream. >> And -git-commit-after-all opens a *buffered* raw_fd_stream. >> >> As soon as I hacked -print-after-all to use a buffered stream to stderr >> performance went >> up to the normal expected values: >> >> ] time bin/opt -O1 big-ir.ll -disable-output -print-...
2018 Mar 21
0
Commit module to Git after each Pass
On 03/16/2018 01:21 AM, Fedor Sergeev via llvm-dev wrote: > The git-commit-after-all solution has one serious issue - it has hardcoded git handling, which > makes it look problematic from many angles (picking a proper git, > selecting the exact way of storing information, creating a repository, replacing the file, etc.). > > Just dumping information in a way that allows easy
2018 Jun 15
2
Commit module to Git after each Pass
...8:06 AM Fedor Sergeev via llvm-dev <llvm-dev at lists.llvm.org> wrote: Oh, well... as usual the answer appears to be pretty obvious. 99% of the time is spent inside the plain write. -print-after-all prints into llvm::errs(), which is an *unbuffered* raw_fd_stream. And -git-commit-after-all opens a *buffered* raw_fd_stream. As soon as I hacked -print-after-all to use a buffered stream to stderr, performance went up to the normal expected values: ] time bin/opt -O1 big-ir.ll -disable-output -print-after-all -print-module-scope 2>&1 | grep -c "^;...
2018 Mar 15
4
Commit module to Git after each Pass
The git-commit-after-all solution has one serious issue - it has hardcoded git handling, which makes it look problematic from many angles (picking a proper git, selecting the exact way of storing information, creating a repository, replacing the file, etc.). Just dumping information in a way that allows easy subsequent machine processing seems to be a more flexible, less cluttered and overall clean