Ken Hofsass wrote:
> Counting it as a failure kind of defeats the purpose of skipping the
> test. The tests already fail or error. I want to make sure that the
> WSIT/JAXWS developers who are required to run these tests before
> check-in can assume all tests that get run are expected to pass... which
> has not been the case thus far... I think that expecting a developer to
> go look at Hudson to make sure he/she is getting the same number & kind
> of failures/errors would likely cause more problems than skipping tests
> that are known to be broken.
OK, in that case I propose my plan B, which is to exclude the test only
from the current version. We already support version markers, right?
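Purely to illustrate the idea (this is not the actual harness syntax;
the marker name and version strings below are made up), an exclusion
that only applies to the current version could look roughly like this:

    // Hypothetical sketch of a version-scoped exclusion check.
    // The real harness marker syntax and version handling may differ.
    public class VersionExclusion {
        /** True if a test carrying an "excluded-in" marker should be skipped now. */
        static boolean excludedFor(String excludedInVersion, String currentVersion) {
            return excludedInVersion != null && excludedInVersion.equals(currentVersion);
        }

        public static void main(String[] args) {
            System.out.println(excludedFor("2.1", "2.1")); // true  -> skipped in 2.1
            System.out.println(excludedFor("2.1", "2.0")); // false -> still runs in 2.0
        }
    }

That way the test keeps running (and passing) against older versions and
only drops out of the run where it is known to be broken.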
> I think it would be really handy if we could separate out the count/list
> of skipped tests... either have JUnit keep track of them (somehow) or
> add code to the harness itself to track & record them. Does something
> like that sound acceptable?
Since we build on JUnit for test execution, we can't really do anything
that goes against JUnit's model of a test. JUnit has no "not-run" status
for a test: a test it runs either succeeds, fails, or errors out, so I
don't think we can track skipped tests separately like that.
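For reference, this is roughly all the harness can see from JUnit; a
minimal sketch, assuming the JUnit 3.x API we build on, with SomeTest
being a made-up test case:

    import junit.framework.TestCase;
    import junit.framework.TestResult;
    import junit.framework.TestSuite;

    public class ResultDemo {
        public static void main(String[] args) {
            TestResult result = new TestResult();
            new TestSuite(SomeTest.class).run(result);
            // TestResult only exposes run/failure/error counts;
            // there is no "skipped" or "not-run" bucket to report.
            System.out.println("run=" + result.runCount()
                    + " failures=" + result.failureCount()
                    + " errors=" + result.errorCount());
        }

        public static class SomeTest extends TestCase {
            public void testSomething() { assertTrue(true); }
        }
    }

So the only way to surface a skipped test in those numbers would be to
fake a failure, which is exactly what Ken wants to avoid.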
--
Kohsuke Kawaguchi
Sun Microsystems kohsuke.kawaguchi_at_sun.com