FAQ

Why are my print statements funky?

With multiprocessing, print statements often come out garbled because multiple processes write to sys.stdout at the same time. One way to deal with this is to guard each print with a multiprocessing.Lock(). An example of how to use the lock in a pre or post hook is shown below:

from multiprocessing import Lock

# Module-level Lock so all child processes share it (inherited on fork)
_print_lock_ = Lock()

def my_hook(*args):
    """ Process-safe print for a pre/post hook

    :param args: Tuple of ( *args, <ServerJob> )
    :return: None
    """
    thisjob = list(args).pop()  # the ServerJob is the last argument
    with _print_lock_:
        print(str(thisjob.name))

Note

As of sshreader v3.2 you can also use the sshreader.echo method, which implements a multiprocessing.Lock for you on the fly.

Where did my output go?

Say you have a script (one that uses sshreader or otherwise) whose output you are piping to another unix command, something similar to the following:

./myscript.py | wc

but you keep getting 0 from the output of wc. This is due to Python buffering stdout when it is not attached to a terminal. To overcome this “feature”, either run your script as follows:

python -u myscript.py | wc

or change the shebang at the top of your Python script to the following (note that some systems pass at most one argument on a shebang line, so this form is not universally portable):

#!/usr/bin/env python -u

or add the following to your code directly after a print statement:

import sys

# Print something to stdout and immediately flush (unbuffered output)
print('Unbuffered output')
sys.stdout.flush()  # In Python 3.3 and above you can instead pass flush=True to print()
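On Python 3.3 and newer, the flush can also be folded into the print call itself, which is equivalent to the two lines above:

```python
# flush=True flushes the stream in the same call,
# so no separate sys.stdout.flush() is needed
print('Unbuffered output', flush=True)
```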

Note

As of sshreader v3.4.4 the sshreader.echo method issues a sys.stdout.flush() after calling the standard Python print function, giving you easy access to unbuffered output.

Byte-String vs. Unicode-String

In Python 2, a string can hold either bytes or unicode text, whereas in Python 3 all strings are unicode and bytes are a separate type. This can cause issues when using a module like sshreader, because output from Paramiko and the subprocess module comes back as byte-strings. Fortunately, sshreader includes a kwarg that enables automatic decoding of byte-strings to unicode strings.

import sshreader

# sshreader can automatically decode bytestrings for you (for stdout and stderr)
# This works for both the shell_command and ssh_command methods
uname_cmd = sshreader.shell_command('uname -a', decodebytes=True)
uname_cmd.stdout.split()  # stdout is a unicode string, so str methods just work

Note

As of version 3.3, sshreader's default behavior is to automatically decode byte-strings to unicode strings. If you do NOT want byte-strings decoded, set this flag to False.
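If you do set decodebytes=False, stdout and stderr come back as byte-strings and decoding is up to you. This sketch (using made-up command output) shows the equivalent of what the flag does for you:

```python
# With decodebytes=False, stdout/stderr arrive as bytes, e.g.:
raw_stdout = b'Linux myhost 5.4.0 x86_64 GNU/Linux\n'

# Decode manually to get a unicode string that str methods work on
decoded = raw_stdout.decode('utf-8')
words = decoded.split()
```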

Pseudo Terminals

Sometimes when using SSH you will see an error like the following:

import sshreader
with sshreader.SSH('myhost.example.com', username='jdoe', keyfile='~/.ssh/id_rsa') as s:
    s.ssh_command('sudo touch /')
>> ShellCommand(cmd='sudo touch /', stdout='', stderr='sudo: sorry, you must have a tty to run sudo', return_code=1)

This is due to your SSH connection not having a terminal attached. The usual fix for this type of error is to disable the requiretty option in your /etc/sudoers file. However, a quicker way to get around it is to request a pseudo terminal when creating your SSH connection. Sshreader will do this for you when you use the combine option when sending an ssh_command.
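Continuing the example above, that would look something like the following (this sketch assumes combine is passed as a boolean keyword argument; the host and credentials are placeholders):

```python
import sshreader

with sshreader.SSH('myhost.example.com', username='jdoe', keyfile='~/.ssh/id_rsa') as s:
    # combine requests a pseudo terminal, so stderr is merged into stdout
    result = s.ssh_command('sudo touch /', combine=True)
```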

Note

When using a pseudo terminal, stderr is piped into stdout. At the moment this is simply a “feature” of Paramiko. If this ever changes in the future, we will certainly support pseudo terminals with stdout and stderr as separate outputs.

Changing Logging

As of version 3.5 of sshreader, the debuglevel option is no longer available on ServerJob objects or the sshread method. To enable debug logging going forward, change the level of the sshreader logger:

import logging
logging.getLogger('sshreader').setLevel(logging.DEBUG)

To learn which levels can be set, check out the logging module’s documentation.
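Keep in mind that the level only takes effect if a handler is attached somewhere up the logger hierarchy; a minimal sketch using the root handler:

```python
import logging

# Attach a root handler; without one, DEBUG records from the
# 'sshreader' logger are never actually emitted anywhere
logging.basicConfig(level=logging.DEBUG)
logging.getLogger('sshreader').setLevel(logging.DEBUG)
```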