Thursday, May 31, 2012

Linux: Prevent a background process from being stopped after closing SSH client

I'm working on a Linux machine through SSH (PuTTY). I need to leave a process running overnight, so I thought I could do that by starting the process in the background (with an ampersand at the end of the command) and redirecting stdout to a file. To my surprise, that doesn't work. As soon as I close the PuTTY window, the process is stopped.

How can I prevent that from happening?

Source: Tips4all


  1. Check out the "nohup" program.

  2. I would recommend using GNU Screen. It allows you to disconnect from the server while all of your processes continue to run. I don't know how I lived without it before I knew it existed.
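    A minimal sketch of the workflow (the session name and long_running_job are placeholders); -dmS starts the session already detached, so the same lines also work non-interactively:

```shell
# start a detached session named "nightjob" running the long job
screen -dmS nightjob sh -c 'long_running_job > job.log 2>&1'

# later, even from a brand-new SSH login: list sessions and reattach
screen -ls
screen -r nightjob    # Ctrl-a d detaches again without stopping the job
```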

  3. When the session is closed the process receives the SIGHUP signal which it is apparently not catching. You can use the nohup command when launching the process or the bash built-in command disown -h after starting the process to prevent this from happening:

    > help disown
    disown: disown [-h] [-ar] [jobspec ...]
    By default, removes each JOBSPEC argument from the table of active jobs.
    If the -h option is given, the job is not removed from the table, but is
    marked so that SIGHUP is not sent to the job if the shell receives a
    SIGHUP. The -a option, when JOBSPEC is not supplied, means to remove all
    jobs from the job table; the -r option means to remove only running jobs.
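    A short sketch of both options (long_job is a placeholder command):

```shell
# option 1: start the job immune to SIGHUP from the outset
nohup long_job > job.log 2>&1 &

# option 2: the job is already running in the background
long_job > job2.log 2>&1 &
disown -h %+    # keep it in the job table, but don't send it SIGHUP on logout
```

    Note that disown is a bash builtin; in other shells, nohup is the portable choice.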

  4. Personally, I like the 'batch' command.

    $ batch
    > mycommand -x arg1 -y arg2 -z arg3
    > ^D

    This stuffs it into the background, and then mails the results to you. It's part of the at package.

  5. nohup blah &

    Substitute your process name for blah!

  6. As others have noted, to run a process in the background so that you can disconnect from your SSH session, you need to have the background process properly disassociate itself from its controlling terminal - which is the pseudo-tty that the SSH session uses.

    You can find information about daemonizing processes in books such as Stevens' "UNIX Network Programming, Vol 1, 3rd Edn" or Rochkind's "Advanced Unix Programming".

    I recently (in the last couple of years) had to deal with a recalcitrant program that did not daemonize itself properly. I ended up dealing with that by creating a generic daemonizing program - similar to nohup but with more controls available.

    Usage: daemonize [-abchptxV][-d dir][-e err][-i in][-o out][-s sigs][-k fds][-m umask] -- command [args...]
    -V print version and exit
    -a output files in append mode (O_APPEND)
    -b both output and error go to output file
    -c create output files (O_CREAT)
    -d dir change to given directory
    -e file error file (standard error - /dev/null)
    -h print help and exit
    -i file input file (standard input - /dev/null)
    -k fd-list keep file descriptors listed open
    -m umask set umask (octal)
    -o file output file (standard output - /dev/null)
    -s sig-list ignore signal numbers
    -t truncate output files (O_TRUNC)
    -p print daemon PID on original stdout
    -x output files must be new (O_EXCL)

    The double-dash is optional on systems not using the GNU getopt() function; it is necessary (or you have to specify POSIXLY_CORRECT in the environment) on Linux etc. Since double-dash works everywhere, it is best to use it.

    Contact me (firstname dot lastname at gmail dot com) if you want the source for daemonize.
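    If you don't have such a tool to hand, util-linux's setsid plus explicit redirections approximates the same idea (myprog and the log path are placeholders):

```shell
# run the command in a new session, detached from the controlling terminal,
# with all three standard streams pointed away from the tty
setsid myprog </dev/null >/tmp/myprog.log 2>&1 &
```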

  7. Use screen. It is very simple to use and works like VNC for terminals.

  8. Nohup prevents a child process from being killed when its parent dies (for example, when you log out). Even better, use:
    nohup /bin/sh -c "echo \$\$ > $pidfile; exec $FOO_BIN $FOO_CONFIG " > /dev/null

    Nohup makes the process you start immune to the SIGHUP that is sent to your SSH session and its child processes when you log out. The command above also stores the PID of the application in a pid file, so that you can correctly kill it later, and it lets the process keep running after you have logged out.
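    As a concrete sketch of that pid-file pattern, with sleep 300 standing in for the real program and a throwaway pidfile path:

```shell
pidfile=/tmp/foo.pid    # placeholder path
nohup /bin/sh -c "echo \$\$ > $pidfile; exec sleep 300" >/dev/null 2>&1 &

# later, possibly from a completely new login session:
kill "$(cat "$pidfile")"    # stop the process by its recorded PID
```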

  9. If you use screen to run a process as root, beware of the possibility of privilege elevation attacks. If your own account gets compromised somehow, there will be a direct way to take over the entire server.

    If this process needs to run regularly and you have sufficient access on the server, a better option would be to use cron to run the job. You could also use inetd (the super-server) to start your process in the background, and it can terminate as soon as it's done.
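    For the cron route, the crontab entry is just a schedule plus the command (the times and script path here are placeholders):

```
# m  h  dom mon dow  command -- run the job every night at 02:30
30   2  *   *   *    /usr/local/bin/nightly-job.sh >> /tmp/nightly-job.log 2>&1
```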

  10. i know this thread is old, but ...

    daemonize? nohup? SCREEN? (tmux ftw, screen is junk ;-)

    just do what every other app has done since the beginning -- double fork.

    # ((exec sleep 30)&)
    # grep PPid /proc/`pgrep sleep`/status
    PPid: 1
    # jobs
    # disown
    bash: disown: current: no such job

    bang! done :-) I've used this countless times on all types of apps and many old machines. you can combine with redirects and whatnot to open a private channel between you and the process ...



    run_in_coproc () {
        echo "coproc[$1] -> main"
        read -r; echo $REPLY
    }

    # dynamic-coprocess-generator. nice.
    _coproc () {
        local i o e n=${1//[^A-Za-z0-9_]}; shift
        exec {i}<> <(:) {o}<> >(:) {e}<> >(:)
        . /dev/stdin <<COPROC "${@}"
        (("\$@")&) <&$i >&$o 2>&$e
        $n=( $o $i $e )
    COPROC
    }

    # pi-rads-of-awesome?
    for x in {0..5}; do
        _coproc COPROC$x run_in_coproc $x
        declare -p COPROC$x
    done

    for x in COPROC{0..5}; do
    . /dev/stdin <<RUN
        read -r -u \${$x[0]}; echo \$REPLY
        echo "$x <- main" >&\${$x[1]}
        read -r -u \${$x[0]}; echo \$REPLY
    RUN
    done

    ... save as ...

    # ./
    declare -a COPROC0='([0]="21" [1]="16" [2]="23")'
    declare -a COPROC1='([0]="24" [1]="19" [2]="26")'
    declare -a COPROC2='([0]="27" [1]="22" [2]="29")'
    declare -a COPROC3='([0]="30" [1]="25" [2]="32")'
    declare -a COPROC4='([0]="33" [1]="28" [2]="35")'
    declare -a COPROC5='([0]="36" [1]="31" [2]="38")'
    coproc[0] -> main
    COPROC0 <- main
    coproc[1] -> main
    COPROC1 <- main
    coproc[2] -> main
    COPROC2 <- main
    coproc[3] -> main
    COPROC3 <- main
    coproc[4] -> main
    COPROC4 <- main
    coproc[5] -> main
    COPROC5 <- main

    ... and there you go, spawn whatever. the <(:) opens an anonymous pipe via process substitution, which dies, but the pipe sticks around because you have a handle to it. i usually do a sleep 1 instead of : because it's slightly racy and i'd get a "file busy" error -- that never happens if a real command is run (eg, command true)

    ... "heredoc sourcing":

    . /dev/stdin <<EOF

    ... works on every single shell i've ever tried, including busybox/etc (initramfs). i've never seen it done before -- i independently discovered it while prodding, who knew source could accept args?? -- but it often serves as a much more manageable form of eval, if there is such a thing ...
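    the trick in isolation, for the curious -- in bash, the . (source) builtin accepts positional arguments after the file name, and /dev/stdin points at the heredoc; the quoted terminator here keeps the body from expanding in the outer shell:

```shell
# source the heredoc with "one" and "two" as $1 and $2
. /dev/stdin one two <<'EOF'
echo "got: $1 $2"
EOF
```

    which prints got: one two.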

  11. If you're willing to run X applications as well - use xpra together with "screen".

  12. I had the same issue. I created an empty nohup.out file in the user's home directory, retyped the command, and my application now works fine.

  13. i would also go for the screen program (i know someone else's answer already was screen, but this is a completion)

    not only may &, ctrl+z, bg, disown, nohup, etc. give you the nasty surprise that the job is still killed when you log off (i don't know why, but it did happen to me, and it didn't bother me because i switched to screen -- though i guess anthonyrisinger's double-forking solution would fix that), screen also has a major advantage over plain backgrounding:

    screen will background your process without losing interactive control over it

    and btw, this is a question i would never have asked in the first place :) ... i have used screen since my very beginnings of doing anything on any unix ... i (almost) NEVER work in a unix/linux shell without starting screen first ... and i should stop now, or i'll start an endless presentation of how good screen is and what it can do for ya ... look it up yourself, it is worth it ;)