ftracetest: Add POSIX.3 standard and XFAIL result codes
Add XFAIL and the POSIX 1003.3 standard codes (UNRESOLVED/
UNTESTED/UNSUPPORTED) as result codes. These are used for tests
that are expected to fail, or that exercise a feature the kernel
does not support (e.g. because of its configuration).

To return these result codes, this introduces the exit_unresolved,
exit_untested, exit_unsupported and exit_xfail functions, which
use real-time signals to report the result code to ftracetest.

This also sets the "errexit" option for the testcases, so that
the tests do not need to exit explicitly on failure.

Note that if a test returns UNRESOLVED, UNSUPPORTED, or FAIL,
its test log, including the executed commands, is shown on the
console and in the main logfile, as below.

  ------
  # ./ftracetest samples/
  === Ftrace unit tests ===
  [1] failure-case example        [FAIL]
  execute: /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/fail.tc
  + . /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/fail.tc
  ++ cat non-exist-file
  cat: non-exist-file: No such file or directory
  [2] pass-case example   [PASS]
  [3] unresolved-case example     [UNRESOLVED]
  execute: /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/unresolved.tc
  + . /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/unresolved.tc
  ++ trap exit_unresolved INT
  ++ kill -INT 29324
  +++ exit_unresolved
  +++ kill -s 38 29265
  +++ exit 0
  [4] unsupported-case example    [UNSUPPORTED]
  execute: /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/unsupported.tc
  + . /home/fedora/ksrc/linux-3/tools/testing/selftests/ftrace/samples/unsupported.tc
  ++ exit_unsupported
  ++ kill -s 40 29265
  ++ exit 0
  [5] untested-case example       [UNTESTED]
  [6] xfail-case example  [XFAIL]

  # of passed:  1
  # of failed:  1
  # of unresolved:  1
  # of untested:  1
  # of unsupported:  1
  # of xfailed:  1
  # of undefined(test bug):  0
  ------
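The `kill -s 38`/`kill -s 40` lines in the log above are the result-code signal protocol at work. The following is a hedged, standalone sketch of that mechanism, not the patch's code; it assumes a Linux shell that accepts numeric real-time signals in `trap` and `kill` (as ftracetest itself does), and reuses the patch's SIG_BASE/UNSUPPORTED values:

```shell
#!/bin/sh
# Sketch: a subshell "testcase" reports UNSUPPORTED to its parent by
# signal (assumption: Linux, where 40 = SIG_BASE(36) + UNSUPPORTED(4)).
SIG_BASE=36
UNSUPPORTED=4
SIG_UNSUPPORTED=$((SIG_BASE + UNSUPPORTED))
SIG_PID=$$

SIG_RESULT=0
# The parent records the result code when the signal arrives.
trap 'SIG_RESULT=$UNSUPPORTED' $SIG_UNSUPPORTED

exit_unsupported() { # what a testcase calls
	kill -s $SIG_UNSUPPORTED $SIG_PID
	exit 0
}

# The "testcase" runs in a subshell, like ". $1" in run_test does;
# $$ inside it still names the parent, so the signal reaches ftracetest.
( exit_unsupported )
echo "SIG_RESULT=$SIG_RESULT"
```

The trap fires after the foreground subshell finishes, so the parent sees the code in SIG_RESULT even though the subshell exited with status 0.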

Link: http://lkml.kernel.org/p/20140929120211.30203.99510.stgit@kbuild-f20.novalocal

Acked-by: Namhyung Kim <namhyung@kernel.org>
Signed-off-by: Masami Hiramatsu <masami.hiramatsu.pt@hitachi.com>
Signed-off-by: Steven Rostedt <rostedt@goodmis.org>
Masami Hiramatsu authored and Steven Rostedt committed Oct 3, 2014
1 parent 2909ef2 commit 915de2a
Showing 13 changed files with 189 additions and 33 deletions.
37 changes: 37 additions & 0 deletions tools/testing/selftests/ftrace/README
@@ -38,6 +38,43 @@ extension) and rewrite the test description line.
 * The test cases should run on dash (busybox shell) for testing on
   minimal cross-build environments.
 
+* Note that the tests are run with the "set -e" (errexit) option. If any
+  command fails, the test will be terminated immediately.
+
+* The tests can return some result codes instead of pass or fail by
+  using exit_unresolved, exit_untested, exit_unsupported and exit_xfail.
+
+Result code
+===========
+
+Ftracetest supports the following result codes.
+
+* PASS: The test succeeded as expected. A test which exits with 0 is
+  counted as a passed test.
+
+* FAIL: The test failed, but was expected to succeed. A test which exits
+  with !0 is counted as a failed test.
+
+* UNRESOLVED: The test produced unclear or intermediate results;
+  for example, the test was interrupted,
+  the test depends on a previous test which failed,
+  or the test was set up incorrectly.
+  A test in any of these situations must call exit_unresolved.
+
+* UNTESTED: The test was not run; currently just a placeholder.
+  In this case, the test must call exit_untested.
+
+* UNSUPPORTED: The test failed because of lack of feature.
+  In this case, the test must call exit_unsupported.
+
+* XFAIL: The test failed, and was expected to fail.
+  To return XFAIL, call exit_xfail from the test.
+
+There are some sample test scripts for these result codes under samples/.
+You can also run the samples as below:
+
+  # ./ftracetest samples/
+
 TODO
 ====
 
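The errexit behavior the README describes can be illustrated with a small, self-contained sketch (not part of the commit); the subshell stands in for a testcase:

```shell
#!/bin/sh
# Demonstrates "set -e": inside the subshell, the first failing command
# terminates execution, just as a failing command ends a testcase and
# makes ftracetest count it as FAIL.
out=$( (set -e; echo before; false; echo after) 2>/dev/null )
status=$?
echo "output='$out' status=$status"
```

The `echo after` never runs: `false` fails, errexit aborts the subshell with status 1, and only "before" is captured.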
124 changes: 109 additions & 15 deletions tools/testing/selftests/ftrace/ftracetest
@@ -114,46 +114,140 @@ prlog "=== Ftrace unit tests ==="
 
 
 # Testcase management
+# Test result codes - Dejagnu extended code
+PASS=0		# The test succeeded.
+FAIL=1		# The test failed, but was expected to succeed.
+UNRESOLVED=2	# The test produced indeterminate results. (e.g. interrupted)
+UNTESTED=3	# The test was not run, currently just a placeholder.
+UNSUPPORTED=4	# The test failed because of lack of feature.
+XFAIL=5		# The test failed, and was expected to fail.
+
+# Accumulations
 PASSED_CASES=
 FAILED_CASES=
+UNRESOLVED_CASES=
+UNTESTED_CASES=
+UNSUPPORTED_CASES=
+XFAILED_CASES=
+UNDEFINED_CASES=
+TOTAL_RESULT=0
+
 CASENO=0
 testcase() { # testfile
   CASENO=$((CASENO+1))
   prlog -n "[$CASENO]"`grep "^#[ \t]*description:" $1 | cut -f2 -d:`
 }
-failed() {
-  prlog " [FAIL]"
-  FAILED_CASES="$FAILED_CASES $CASENO"
-}
+
+eval_result() { # retval sigval
+  local retval=$2
+  if [ $2 -eq 0 ]; then
+    test $1 -ne 0 && retval=$FAIL
+  fi
+  case $retval in
+    $PASS)
+      prlog " [PASS]"
+      PASSED_CASES="$PASSED_CASES $CASENO"
+      return 0
+    ;;
+    $FAIL)
+      prlog " [FAIL]"
+      FAILED_CASES="$FAILED_CASES $CASENO"
+      return 1 # this is a bug.
+    ;;
+    $UNRESOLVED)
+      prlog " [UNRESOLVED]"
+      UNRESOLVED_CASES="$UNRESOLVED_CASES $CASENO"
+      return 1 # this is a kind of bug.. something happened.
+    ;;
+    $UNTESTED)
+      prlog " [UNTESTED]"
+      UNTESTED_CASES="$UNTESTED_CASES $CASENO"
+      return 0
+    ;;
+    $UNSUPPORTED)
+      prlog " [UNSUPPORTED]"
+      UNSUPPORTED_CASES="$UNSUPPORTED_CASES $CASENO"
+      return 1 # this is not a bug, but the result should be reported.
+    ;;
+    $XFAIL)
+      prlog " [XFAIL]"
+      XFAILED_CASES="$XFAILED_CASES $CASENO"
+      return 0
+    ;;
+    *)
+      prlog " [UNDEFINED]"
+      UNDEFINED_CASES="$UNDEFINED_CASES $CASENO"
+      return 1 # this must be a test bug
+    ;;
+  esac
+}
+
+# Signal handling for result codes
+SIG_RESULT=
+SIG_BASE=36	# Use realtime signals
+SIG_PID=$$
+
+SIG_UNRESOLVED=$((SIG_BASE + UNRESOLVED))
+exit_unresolved () {
+  kill -s $SIG_UNRESOLVED $SIG_PID
+  exit 0
+}
+trap 'SIG_RESULT=$UNRESOLVED' $SIG_UNRESOLVED
+
+SIG_UNTESTED=$((SIG_BASE + UNTESTED))
+exit_untested () {
+  kill -s $SIG_UNTESTED $SIG_PID
+  exit 0
+}
-passed() {
-  prlog " [PASS]"
-  PASSED_CASES="$PASSED_CASES $CASENO"
-}
+trap 'SIG_RESULT=$UNTESTED' $SIG_UNTESTED
+
+SIG_UNSUPPORTED=$((SIG_BASE + UNSUPPORTED))
+exit_unsupported () {
+  kill -s $SIG_UNSUPPORTED $SIG_PID
+  exit 0
+}
+trap 'SIG_RESULT=$UNSUPPORTED' $SIG_UNSUPPORTED
+
+SIG_XFAIL=$((SIG_BASE + XFAIL))
+exit_xfail () {
+  kill -s $SIG_XFAIL $SIG_PID
+  exit 0
+}
+trap 'SIG_RESULT=$XFAIL' $SIG_XFAIL
 
 # Run one test case
 run_test() { # testfile
   local testname=`basename $1`
   local testlog=`mktemp --tmpdir=$LOG_DIR ${testname}-XXXXXX.log`
   testcase $1
   echo "execute: "$1 > $testlog
-  (cd $TRACING_DIR; set -x ; . $1) >> $testlog 2>&1
-  ret=$?
-  if [ $ret -ne 0 ]; then
-    failed
-    catlog $testlog
-  else
-    passed
+  SIG_RESULT=0
+  # setup PID and PPID, $$ is not updated.
+  (cd $TRACING_DIR; read PID _ < /proc/self/stat ;
+   set -e; set -x; . $1) >> $testlog 2>&1
+  eval_result $? $SIG_RESULT
+  if [ $? -eq 0 ]; then
+    # Remove test log if the test was done as it was expected.
     [ $KEEP_LOG -eq 0 ] && rm $testlog
+  else
+    catlog $testlog
+    TOTAL_RESULT=1
   fi
 }
 
 # Main loop
 for t in $TEST_CASES; do
   run_test $t
 done
 
 prlog ""
 prlog "# of passed: " `echo $PASSED_CASES | wc -w`
 prlog "# of failed: " `echo $FAILED_CASES | wc -w`
-
-test -z "$FAILED_CASES" # if no error, return 0
+prlog "# of unresolved: " `echo $UNRESOLVED_CASES | wc -w`
+prlog "# of untested: " `echo $UNTESTED_CASES | wc -w`
+prlog "# of unsupported: " `echo $UNSUPPORTED_CASES | wc -w`
+prlog "# of xfailed: " `echo $XFAILED_CASES | wc -w`
+prlog "# of undefined(test bug): " `echo $UNDEFINED_CASES | wc -w`
+
+# if no error, return 0
+exit $TOTAL_RESULT
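run_test's comment notes that `$$` is not updated in a subshell, which is why each testcase reads its real PID from the first field of /proc/self/stat. A small standalone sketch of that trick (assuming Linux procfs; not part of the commit):

```shell
#!/bin/sh
# Inside a subshell, $$ still names the parent shell; the first field
# of /proc/self/stat gives the subshell's own PID (Linux procfs).
subshell_pid=$( ( read PID _ < /proc/self/stat; echo $PID ) )
echo "parent \$\$=$$ subshell PID=$subshell_pid"
```

This is what lets a sample such as unresolved.tc do `kill -INT $PID` and hit itself rather than ftracetest.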
4 changes: 4 additions & 0 deletions tools/testing/selftests/ftrace/samples/fail.tc
@@ -0,0 +1,4 @@
+#!/bin/sh
+# description: failure-case example
+cat non-exist-file
+echo "this is not executed"
3 changes: 3 additions & 0 deletions tools/testing/selftests/ftrace/samples/pass.tc
@@ -0,0 +1,3 @@
+#!/bin/sh
+# description: pass-case example
+return 0
4 changes: 4 additions & 0 deletions tools/testing/selftests/ftrace/samples/unresolved.tc
@@ -0,0 +1,4 @@
+#!/bin/sh
+# description: unresolved-case example
+trap exit_unresolved INT
+kill -INT $PID
3 changes: 3 additions & 0 deletions tools/testing/selftests/ftrace/samples/unsupported.tc
@@ -0,0 +1,3 @@
+#!/bin/sh
+# description: unsupported-case example
+exit_unsupported
3 changes: 3 additions & 0 deletions tools/testing/selftests/ftrace/samples/untested.tc
@@ -0,0 +1,3 @@
+#!/bin/sh
+# description: untested-case example
+exit_untested
3 changes: 3 additions & 0 deletions tools/testing/selftests/ftrace/samples/xfail.tc
@@ -0,0 +1,3 @@
+#!/bin/sh
+# description: xfail-case example
+cat non-exist-file || exit_xfail
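The `|| exit_xfail` guard in the sample above keeps errexit from treating the expected failure as FAIL. A self-contained sketch of the pattern, with exit_xfail stubbed out (the real helper signals ftracetest with SIG_BASE+XFAIL):

```shell
#!/bin/sh
# Stubbed sketch: the expected failure is guarded with ||, so errexit
# does not abort; exit_xfail then ends the "testcase" cleanly.
out=$(
  set -e
  exit_xfail() { echo XFAIL; exit 0; }
  cat non-exist-file 2>/dev/null || exit_xfail
  echo "not reached when cat fails"
)
echo "result: $out"
```

Because the failing `cat` sits on the left of `||`, errexit ignores its status and control passes to the stub instead.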
3 changes: 2 additions & 1 deletion tools/testing/selftests/ftrace/test.d/00basic/basic2.tc
@@ -1,6 +1,7 @@
 #!/bin/sh
 # description: Basic test for tracers
+test -f available_tracers
 for t in `cat available_tracers`; do
-echo $t > current_tracer || exit 1
+echo $t > current_tracer
 done
 echo nop > current_tracer
6 changes: 3 additions & 3 deletions tools/testing/selftests/ftrace/test.d/00basic/basic3.tc
@@ -1,8 +1,8 @@
 #!/bin/sh
 # description: Basic trace clock test
-[ -f trace_clock ] || exit 1
+test -f trace_clock
 for c in `cat trace_clock | tr -d \[\]`; do
-echo $c > trace_clock || exit 1
-grep '\['$c'\]' trace_clock || exit 1
+echo $c > trace_clock
+grep '\['$c'\]' trace_clock
 done
 echo local > trace_clock
12 changes: 6 additions & 6 deletions tools/testing/selftests/ftrace/test.d/kprobe/add_and_remove.tc
@@ -1,11 +1,11 @@
 #!/bin/sh
 # description: Kprobe dynamic event - adding and removing
 
-[ -f kprobe_events ] || exit 1
+[ -f kprobe_events ] || exit_unsupported # this is configurable
 
-echo 0 > events/enable || exit 1
-echo > kprobe_events || exit 1
-echo p:myevent do_fork > kprobe_events || exit 1
-grep myevent kprobe_events || exit 1
-[ -d events/kprobes/myevent ] || exit 1
+echo 0 > events/enable
+echo > kprobe_events
+echo p:myevent do_fork > kprobe_events
+grep myevent kprobe_events
+test -d events/kprobes/myevent
 echo > kprobe_events
15 changes: 7 additions & 8 deletions tools/testing/selftests/ftrace/test.d/kprobe/busy_check.tc
@@ -1,14 +1,13 @@
 #!/bin/sh
 # description: Kprobe dynamic event - busy event check
 
-[ -f kprobe_events ] || exit 1
+[ -f kprobe_events ] || exit_unsupported
 
-echo 0 > events/enable || exit 1
-echo > kprobe_events || exit 1
-echo p:myevent do_fork > kprobe_events || exit 1
-[ -d events/kprobes/myevent ] || exit 1
-echo 1 > events/kprobes/myevent/enable || exit 1
+echo 0 > events/enable
+echo > kprobe_events
+echo p:myevent do_fork > kprobe_events
+test -d events/kprobes/myevent
+echo 1 > events/kprobes/myevent/enable
 echo > kprobe_events && exit 1 # this must fail
-echo 0 > events/kprobes/myevent/enable || exit 1
+echo 0 > events/kprobes/myevent/enable
 echo > kprobe_events # this must succeed
5 changes: 5 additions & 0 deletions tools/testing/selftests/ftrace/test.d/template
@@ -1,4 +1,9 @@
 #!/bin/sh
 # description: %HERE DESCRIBE WHAT THIS DOES%
 # you have to add ".tc" extention for your testcase file
+# Note that all tests are run with "errexit" option.
+
 exit 0 # Return 0 if the test is passed, otherwise return !0
+# If the test could not run because of lack of feature, call exit_unsupported
+# If the test returned unclear results, call exit_unresolved
+# If the test is a dummy, or a placeholder, call exit_untested
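A hypothetical testcase following the template above (the description and the /proc/version check are illustrative assumptions, not from the commit; a real test would probe tracing files and call exit_unsupported when a feature is missing):

```shell
#!/bin/sh
# description: check that /proc/version names a Linux kernel
# Hypothetical example; with errexit in effect, any failing line
# ends the test with a non-zero status (counted as FAIL).
test -f /proc/version
kernel=$(cut -d' ' -f1 /proc/version)
echo "running on: $kernel"
```

Reaching the end without a failure leaves the exit status at 0, which ftracetest counts as PASS.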
