Merging STDOUT and STDERR

Here’s a Unix shell lesson I always forget. Shell commands can send output to two places, standard output and standard error. The two are separated so that if you’re redirecting the output of the command to a file or piping it to another command, error messages generated by the command aren’t included with the expected output of the command.
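For example, an ordinary redirect or pipe only touches standard output (my_command and output.log here are just placeholders):

$ my_command > output.log          # STDOUT lands in the file; errors still print to the terminal
$ my_command | grep "somestring"   # grep only sees STDOUT; errors bypass the pipe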

This becomes a problem when you want to capture STDERR in a file, or pipe it to a command like grep so that you can search for specific things in the errors.

Fortunately, there’s a way to do this:

$ my_command 2>&1 | grep "somestring" 

The magic here is 2>&1. It means, “send stream 2 to wherever stream 1 currently points.” In Bourne shell derivatives (like bash, zsh, and ksh), stream 2 is STDERR and stream 1 is STDOUT. Once the streams are merged, you can do whatever you want with the single output stream, like pipe it to grep or redirect it to a file.
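One wrinkle: redirections are processed left to right, so when you’re sending the merged output to a file, the file redirect has to come first. (A pipe is different: the shell connects stream 1 to the pipe before it processes the redirections, which is why 2>&1 | grep works.) For example:

$ my_command > output.log 2>&1    # both streams end up in output.log
$ my_command 2>&1 > output.log    # STDERR still goes to the terminal; only STDOUT goes to the file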

One common construct you’ll see in cron jobs is this:

1 0 * * *  run_some_script.sh > /dev/null 2>&1

Cron helpfully sends an email to the user who owns the cron job whenever a command it runs produces any output. Putting the merged redirect to /dev/null in there sends both STDERR and STDOUT to the bit bucket so that no email is generated. Of course, that also means cron is eating the error messages, so if something goes wrong you won’t be notified in the traditional way.
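If you only want to silence the routine output but still hear about failures, one option is to discard STDOUT alone and leave STDERR attached, so cron still emails whatever shows up on the error stream:

1 0 * * *  run_some_script.sh > /dev/null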

3 Comments

  1. Jacob Davies

    May 8, 2009 at 4:57 pm

    You can get around that to some extent by making your cron’d jobs send you email if something really bad happens. You just have to remember to actually do that, and not to just output to stderr under the assumption that someone will read it.

    It’s tricky because a lot of programs generate stderr output under normal operating conditions. I think that is a mistake, but it’s a mistake well-ingrained by now.

  2. I’ve come up with a cron wrapper script that captures stdout and stderr to a log file that I can check if I need to (rotated daily and cleaned up after a set period of time). The wrapper handles timestamping, etc., so I can just have our cron scripts “log” to stdout. The best part is then that if the script exits non-zero, it sends an email to me and the other systems folks, and for real emergency situations, an exit code of 255 will page us. (See the sketch after these comments for roughly what this looks like.)

    The script is really simple, and we’ve moved nearly all of our crons to this system, so everything is logged in the same place and works the same way. It gives me one less thing to worry about.

  3. I’m not a huge fan of muting cron jobs either. I use logger(1) and other wrappers for syslog where cron mail is impractical or not available.

    Put all the noise into wherever cron.info goes:

    0 0 * * * some_script.sh 2>&1 | logger -p cron.info

    This will get you something like:

    May 9 22:01:29 hostname logger: bash: some_script.sh: command not found

    That’s 90-95% of my cron mysteries right there.

    Log STDERR to cron.err and be fancy about it:

    0 0 * * * some_script.sh 2>&1 1>/dev/null | logger -t cronlog -i -p cron.err

    May 9 22:07:15 hostname cronlog[34176]: bash: some_script.sh: command not found

    As the saying goes: “now you have two problems” – syslog and its variants each have their own features and challenges. YMMV.
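The wrapper described in the second comment might look roughly like the minimal sketch below. This is a guess at the shape of such a script, not the commenter’s actual code; the log directory, mail addresses, and 14-day retention are all placeholders.

#!/bin/bash
# cronwrap.sh -- hypothetical sketch of a cron wrapper like the one described above.
# Usage (in a crontab):  3 0 * * *  /usr/local/bin/cronwrap.sh /usr/local/bin/run_some_script.sh

LOGDIR=/var/log/cronwrap                      # placeholder location
LOG="$LOGDIR/$(basename "$1").$(date +%Y%m%d).log"
mkdir -p "$LOGDIR"

# Run the wrapped command, merge STDERR into STDOUT, and timestamp every line.
"$@" 2>&1 | while IFS= read -r line; do
    printf '%s %s\n' "$(date '+%Y-%m-%d %H:%M:%S')" "$line"
done >> "$LOG"
status=${PIPESTATUS[0]}                       # exit code of the wrapped command, not the pipeline

# Clean up old logs after two weeks (arbitrary retention for the sketch).
find "$LOGDIR" -name '*.log' -mtime +14 -delete

# Mail the log on any failure; treat exit code 255 as "page somebody".
if [ "$status" -eq 255 ]; then
    mail -s "PAGE: $1 exited 255" pager@example.com < "$LOG"
elif [ "$status" -ne 0 ]; then
    mail -s "cron job $1 exited $status" ops@example.com < "$LOG"
fi

exit "$status"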
