When used in parallel mode ("make -jN ..."), FreeBSD make sometimes reports a
successful exit status from recipes that should fail, when such recipes are
prefixed with `@' and unset the `errexit' shell flag.  Example:

  $ echo 'all: ; @set +e; false' | make -j2 -f-; echo status: $?
  status: 0

I found this problem while testing Automake; see Automake bug#9245:
<http://debbugs.gnu.org/cgi/bugreport.cgi?bug=9245>

This issue causes Automake-generated Makefiles using the newer parallel-tests
harness to always report success when run with "make -jN check"; a really
nasty case of spurious success IMHO, which is why I have labeled this bug as
"serious".

How-To-Repeat:

Here is how to reproduce the issue:

  $ echo 'all: ; @set +e; false' | make -j2 -f-; echo status: $?
  status: 0

Notice that all of `-j', `@' and `set +e' seem to be required to trigger the
bug:

  $ echo 'all: ; set +e; false' | make -j2 -f-
  set +e; false
  *** Error code 1
  1 error

  $ echo 'all: ; @false' | make -j2 -f-
  *** Error code 1
  1 error

  $ echo 'all: ; @set +e; false' | make -f-
  *** Error code 1
  Stop in /tmp.
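The masking can be reproduced by hand without make at all.  The script below
is a sketch of the kind of input a -j make feeds to a single shell for the
`@set +e; false' recipe (the exact text make generates is an assumption here,
see the analysis that follows):

```shell
#!/bin/sh
# Simulate the single shell a -j make starts with -e and -v enabled.
# The '@' modifier brackets the recipe command with 'set -' / 'set -v';
# the final 'set -v' succeeds, so the shell exits 0 despite 'false'.
printf 'set -ev\nset +e; false\nset -\nset -v\n' | sh
echo "status: $?"    # prints: status: 0
```

Running this prints "status: 0": the failure of `false' is masked because
errexit was turned off by the recipe and the last command the shell runs is
the harmless `set -v'.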
Stefano Lattarini reports:
> [echo 'all: ; @set +e; false' | make -j2 -f- fails to detect error]

I can reproduce this.  The cause is that make(1) runs all commands for a
target in a single shell when -j is in effect.  This shell has the -e and -v
options initially enabled and reads the script from standard input.  In that
case, the '@' modifier is handled by surrounding the command with a line
'set -' and a line 'set -v', and filtering the 'set -' from the output.
(Similarly, the '+' modifier temporarily disables '-e'.)  The 'set -v' has
exit status 0 and masks the exit status of the command that was supposed to
be tested.

More generally, using a single shell in this manner will lead to results
that do not match the POSIX specification, because changes to the shell
environment remain in effect across command lines.

A possible fix is to put each command (line) in a subshell environment with
parentheses.  To keep proper output, this requires replacing 'set -v' with
explicit printf commands.  If the command ends with an external program, this
does not result in extra forks with our sh, although there will be more
copy-on-write faults.

By the way, having sh read the script from standard input will lead to
suboptimal performance.  One reason is that our sh has an optimization for -c
that skips a fork for a final external program (with a few exceptions, such
as when trap handlers are set), but there is no such optimization for scripts
named on the command line or commands passed via standard input.  Another
reason is that POSIX requires sh, when it is reading commands from standard
input, to position the file pointer directly after each command before
executing it.  Our sh currently does not do this, however.  If {ARG_MAX}
permits, it is more efficient to pass the commands via -c.

-- 
Jilles Tjoelker
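The subshell fix sketched above can be approximated by hand as follows (this
is a hand-written illustration of the approach, not the actual patch; the
generated script text is an assumption):

```shell
#!/bin/sh
# With each recipe line wrapped in ( ... ), 'set +e' is confined to its
# own subshell environment.  The subshell's nonzero exit status is then
# seen by the parent shell's -e flag, which aborts before the next line.
printf '( set +e; false )\n( echo not reached )\n' | sh -e
echo "status: $?"    # prints: status: 1, and "not reached" never appears
```

Here the failure is reported correctly: the first subshell exits 1, the
parent's errexit stops the script, and the second command line never runs.
For recipe lines not marked '@', make would additionally printf the command
text itself in place of relying on 'set -v'.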