Displaying 20 results from an estimated 344 matches for "xfails".
2016 Sep 28
6
[RFC] Require PRs for XFAILing tests
...parameter for specifying the bug number. This got me thinking.
I believe that any test that is marked XFAIL is a bug, and we can use LIT to enforce that. So I wrote a patch (https://reviews.llvm.org/D25035) to add a feature to LIT which would support mapping XFAILs to PRs, and a flag to turn XFAILs without PRs into failures.
My proposal is to add this feature to LIT (after proper code review, and I still need to write tests for it), and then to annotate all our XFAILS with PRs. Once all the PRs are annotated I think we should enable this behavior by default...
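For reference, a minimal sketch of what a PR-annotated XFAIL might look like in a test's header. The annotation convention below is hypothetical, the exact syntax under review in D25035 may differ, and PR12345 is a placeholder number:

    ; Hypothetical convention: every expected failure cites the bug
    ; that tracks it, so stale markings can be audited.
    ; XFAIL: *
    ; Tracked by PR12345 (placeholder): description of the known failure.

With such a link in place, a lit flag could treat any XFAIL lacking a PR reference as a failure, which is the enforcement the proposal describes.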
2013 Dec 19
2
[LLVMdev] How to XFAIL test cases with buildbot LNTFactory
...was previously
failing.
Does anyone know the canonical way to mark xfailures when
running nightly tests on the buildbots?
Should I switch to the NightlyTestBuilder? Or is there a way to convince
the LNTBuilder to accept XFAILures e.g. by allowing
the LitTestCommand to take an optional xfails argument according
to which test results will be relabeled for the buildbot?
Cheers,
Tobias
2016 Sep 29
2
[RFC] Require PRs for XFAILing tests
...ead is worth the added value, but then I'm a process
> kind of guy.
>
>
> On 28 September 2016 at 10:28, Renato Golin via llvm-dev
> <llvm-dev at lists.llvm.org <mailto:llvm-dev at lists.llvm.org>> wrote:
> > We already have an unwritten rule to create PRs for XFAILs, and we
> > normally don't XFAIL lightly (I don't, at least). But creating one PR
> > for every existing XFAIL may end up as a long list of never-looked-at
> > PRs. :)
>
> As opposed to the other ~9000 open PRs? At least they would be tracked.
>
> I'd be in...
2016 Sep 28
6
[RFC] Require PRs for XFAILing tests
...common that is, although I'm
sure it does happen.
I think the overhead is worth the added value, but then I'm a process
kind of guy.
On 28 September 2016 at 10:28, Renato Golin via llvm-dev
<llvm-dev at lists.llvm.org> wrote:
> We already have an unwritten rule to create PRs for XFAILs, and we
> normally don't XFAIL lightly (I don't, at least). But creating one PR
> for every existing XFAIL may end up as a long list of never-looked-at
> PRs. :)
As opposed to the other ~9000 open PRs? At least they would be tracked.
--paulr
2016 Sep 28
3
[RFC] Require PRs for XFAILing tests
This may be an unpopular opinion (and I don’t have the full context on those specific issues), but I believe that these are an abuse of XFAIL, and should probably be written in terms of REQUIRES instead of XFAIL.
I believe XFAIL tests actually execute, and are just marked as expected failure. If a test is not expected to ever succeed, we shouldn’t bother running it, which is what the REQUIRES
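The operational difference, as a minimal lit sketch of the two alternative markings (the feature name is illustrative):

    ; Marked expected-to-fail: the test still runs; a failure is
    ; reported as XFAIL and a pass as an unexpected pass (XPASS).
    ; XFAIL: *

    ; Feature-gated: the test is skipped entirely unless the named
    ; feature is available, so it never runs where it cannot succeed.
    ; REQUIRES: some-feature

So a test that can never succeed in a given configuration still burns execution time under XFAIL, while REQUIRES avoids running it at all.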
2014 Feb 21
3
[LLVMdev] make check issue with llvm-cov
> > And in the test file there is a line:
> > XFAIL: powerpc64, s390x, mips, sparc
>
> This is a crude attempt at "XFAIL: big-endian". The mips entry here is just
> wrong if the system is little-endian - the test passes on little-endian machines
> and fails on big-endian. This is obviously a problem.
'XFAIL: mips' counts as an XFAIL for all mips targets
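In lit terms, the target list is standing in for an endianness property that the harness does not track by default; a sketch of the two forms (a big-endian feature is hypothetical here and would have to be defined by the test configuration, e.g. in lit.cfg):

    ; What the test says: expect failure on all of these targets,
    ; including little-endian mips, where it actually passes.
    ; XFAIL: powerpc64, s390x, mips, sparc

    ; What was meant, if the configuration defined an endianness feature:
    ; XFAIL: big-endian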
2016 Oct 03
2
[RFC] Require PRs for XFAILing tests
> -----Original Message-----
> From: llvm-dev [mailto:llvm-dev-bounces at lists.llvm.org] On Behalf Of
> Krzysztof Parzyszek via llvm-dev
> Sent: Monday, October 03, 2016 10:40 AM
> To: llvm-dev at lists.llvm.org
> Subject: Re: [llvm-dev] [RFC] Require PRs for XFAILing tests
>
> On 10/3/2016 12:21 PM, Robinson, Paul via llvm-dev wrote:
> > As David Blaikie mentioned,
2014 Feb 21
3
[LLVMdev] make check issue with llvm-cov
If you can help get it working on big-endian systems, we should be able to remove the XFAIL. That seems like the cleanest way out of this. Yuchen sent a patch to llvm-commits on 12/19/13. (I can resend it to you if you don’t have that.) Can you try that out on a BE mips system?
On Feb 21, 2014, at 7:11 AM, Reed Kotler <Reed.Kotler at imgtec.com> wrote:
> On 02/21/2014 02:58 AM, Daniel
2004 Nov 27
6
[LLVMdev] QMTest vs. Dejagnu
...) Has a gui (some prefer this).
Cons of QMTest:
1) You have to use the gui to add directories.
2) You have to use the gui to XFAIL a test.
3) It uses something called expectation files that you must load
to view which tests XFAIL. There is no way (that I have found) to
get a complete list of XFAILs.
4) It is also hard to XFAIL across platforms, because it requires
hacking an expectation file for each
target, which must be done with the gui.
5) Intermediate output placement can not be controlled.
6) The output logs are not as clean.
7) Right now we are dependent on a specific version of...
2012 Aug 27
1
[LLVMdev] powerpc XFAIL question
Hi all,
I'm investigating the following test case that reports as an unexpected
pass on powerpc64-unknown-linux-gnu.
Clang :: CodeGenCXX/member-alignment.cpp
This test case is marked as XFAIL for arm and powerpc. However, the test
passes fine for powerpc64-unknown-linux-gnu. There are two tests of this
form:
void
t::bar(void) {
// CHECK: _ZN1t3barEv{{.*}} align 2
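A rough reconstruction of the test's shape, for orientation only (the RUN line, XFAIL list, and class definition are paraphrased assumptions, not the actual file contents):

    // RUN: %clang_cc1 -emit-llvm %s -o - | FileCheck %s
    // XFAIL: arm, powerpc
    struct t {
      void bar(void);
    };
    void
    t::bar(void) {
    // CHECK: _ZN1t3barEv{{.*}} align 2
    }

The unexpected pass arises because the marking applies on powerpc64-unknown-linux-gnu, where the check nonetheless succeeds.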
2016 Oct 03
2
[RFC] Require PRs for XFAILing tests
...o find the set of tests citing the generic PR, but
somebody would have to take it upon themselves to go looking for them.
By the time that happened, the kinds of details we'd want to see in a
bug would be just as missing as if we had no XFAIL-to-PR link at all.
Conversely, requiring short-term XFAILs to have their own PR means
that if somebody fixed the test and forgot to close the PR, that
dangling PR would be easy to recognize as something that could be
summarily closed if anybody decided to go look at all the XFAIL-linked
PRs. This scenario leaves an open PR kicking around, O the horror,
bu...
2010 Jul 22
2
[LLVMdev] Marking a test suite test XFAIL
From http://llvm.org/docs/TestingGuide.html
Some tests are known to fail. Some are bugs that we have not fixed yet;
others are features that we haven't added yet (or may never add). In
DejaGNU, the result for such tests will be XFAIL (eXpected FAILure). In
this way, you can tell the difference between an expected and unexpected
failure.
The tests in the test suite have no such feature at
2006 Aug 10
1
Daily Xen-HVM Build Testing: cs11011
changeset: 11011:b60ea69932b1
tag: tip
parent: 11010:e4f1519b473f
parent: 10999:15304ad81c50
user: kfraser@localhost.localdomain
date: Wed Aug 9 12:04:20 2006 +0100
summary: Merge with xenppc-unstable.
Hardware: x460
NOTE: These runs were done with the latest version of Harry's disk.iso patch.
******************** x86_32(no PAE):
2010 Jul 22
0
[LLVMdev] Marking a test suite test XFAIL
On Jul 22, 2010, at 2:44 PM PDT, Patrick Alexander Simmons wrote:
> From http://llvm.org/docs/TestingGuide.html
>
> Some tests are known to fail. Some are bugs that we have not fixed
> yet;
> others are features that we haven't added yet (or may never add). In
> DejaGNU, the result for such tests will be XFAIL (eXpected FAILure).
> In
> this way, you can tell the
2010 Jul 25
2
[LLVMdev] Marking a test suite test XFAIL
Thanks, Dale, that really helps.
What about disabling only one backend of a specific test?
Thanks,
--Patrick
On 07/22/10 16:04, Dale Johannesen wrote:
>
> On Jul 22, 2010, at 2:44 PM PDT, Patrick Alexander Simmons wrote:
>
>> From http://llvm.org/docs/TestingGuide.html
>>
>> Some tests are known to fail. Some are bugs that we have not fixed yet;
>> others are
2004 Nov 29
0
[LLVMdev] QMTest vs. Dejagnu
...t;
> Cons of QMTest:
> 1) You have to use the gui to add directories.
> 2) You have to use the gui to XFAIL a test.
> 3) It uses something called expectation files that you must load
> to view which tests XFAIL. There is no way (that I have found) to
> get a complete list of XFAILs.
> 4) It is also hard to XFAIL across platforms, because it requires
> hacking an expectation file for each
> target, which must be done with the gui.
QMTest could use the same expectations file for all platforms; I
eventually chose not to do that because I need some tests to pass on...
2013 Feb 14
1
[LLVMdev] How to XFAIL JIT tests for AArch64
Hi,
Currently, no tests that use lli without "-force-interpreter" are
expected to pass when executing on an AArch64 model. However, they
will pass if built and run on (say) X86, just setting the default
target triple.
So XFAILing them gives unexpected passes on a compiler merely targeting
AArch64, while leaving the tests as they are gives unexpected failures
when they're run on a model.
Does
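One way to express "only meaningful when the JIT can execute for the host" is a feature requirement instead of an XFAIL; a minimal sketch, assuming the suite's lit configuration defines a feature that is set only when the default target triple matches the host (LLVM's test config exposes one named "native", though relying on it for this setup is an assumption):

    ; Skipped, rather than XFAILed, whenever the default target triple
    ; does not match the host, so neither XPASS nor a spurious failure
    ; is reported on cross-targeting builds.
    ; REQUIRES: native
    ; RUN: lli %s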
2013 May 31
4
[LLVMdev] [POLLY] fix Bug 15817
The attached patch eliminates http://llvm.org/bugs/show_bug.cgi?id=15817 by removing the remaining
"; XFAIL:*" added in http://lists.cs.uiuc.edu/pipermail/llvm-commits/Week-of-Mon-20130415/171812.html.
The Isl/CodeGen/scevcodegen-1.ll testcase in polly appears as an XPASS in current llvm/polly 3.3
and trunk svn for both x86_64-apple-darwin* and x86_64 Fedora 15 when built against isl
2006 Aug 29
0
Daily Xen-HVM Build Testing: cs11278
changeset: 11278:8273f730371b
tag: tip
user: Ian Campbell <ian.campbell@xensource.com>
date: Tue Aug 29 06:23:11 2006 +0100
summary: Fix definition of LINUX_VER so that doesn't pickup LINUX_VER3
Hardware: x460
******************** x86_32(no PAE): ***************************
* dom0: SLES10 GM
* dom0 boots fine
* xend starts without problem
--- Linux
2013 Feb 26
2
[LLVMdev] ARMv5 Buildbot
Hi folks,
The llvm-arm-linux buildbot, although old, is up and running, and the only
failures I can see are XFAILs, which are still being run on ARM:
http://lab.llvm.org:8011/builders/llvm-arm-linux/builds/2158
Any ideas why they're still being run on that buildbot?
If we can clear those, we can get it passing again.
cheers,
--renato