IP Monitoring & Diagnostics With Command Line Tools: Part 6 - Advanced Command Line Tools

We continue our series with some small code examples that will make your monitoring and diagnostic scripts more robust and reliable.

Variable scope and inheritance

When variables are created, the scope determines their availability in child processes. They cannot be passed upwards to a calling parent.

Local variables only exist within the current shell level. They will not be inherited by child processes. Assign and remove them like this:

MY_LOCAL_VAR="Some value"
unset MY_LOCAL_VAR

Environment variables will be inherited by child processes. Create and destroy them like this:

export MY_ENV_VAR="Some value"
export -n MY_ENV_VAR

Use the set command without parameters to view all the local and environment variables in the current shell.

Alternatively, use the export command without parameters to only list the environment variables that will be available in child processes.

Call the set or export commands inside a shell script to observe what has been inherited from the calling parent.
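
The difference in inheritance can be demonstrated from a single script. This is a minimal sketch; the variable names are hypothetical:

```shell
#!/bin/sh
# Demonstrate scope: only the exported variable reaches a child shell.
# Variable names are hypothetical.

MY_LOCAL_VAR="local only"         # shell-local: not inherited
export MY_ENV_VAR="inherited"     # environment: inherited

# Ask a child shell to report what it can see.
CHILD_SEES_LOCAL=$(sh -c 'echo "${MY_LOCAL_VAR}"')
CHILD_SEES_ENV=$(sh -c 'echo "${MY_ENV_VAR}"')

echo "local in child: '${CHILD_SEES_LOCAL}'"
echo "env in child:   '${CHILD_SEES_ENV}'"
```

The first echo prints an empty value because the local variable never reached the child process.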

Avoiding code duplication

Important data values should only be defined in one place to avoid inconsistent behaviour.

An alternative way to run a shell script uses the source command which can be replaced with a single dot (.). We call this 'dot-running' a script. The script content will execute in the context of the current shell and variable assignments will persist afterwards.

Dot-running is useful for constructing shared configurations that are invoked from multiple scripts. This avoids the need to duplicate code and is optimal for defining important data values only once. Everything is consistent across the whole piece. This is like using the include mechanisms in other languages.
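
As a sketch of the idea, the fragment below writes a shared configuration file and then dot-runs it; the file name and variable names are hypothetical:

```shell
#!/bin/sh
# Sketch: a shared configuration dot-run into the current shell.
# File and variable names are hypothetical.

MY_CONFIG="/tmp/shared_config_$$.sh"

# In real use this file would be created once and shared by many scripts.
cat > "${MY_CONFIG}" <<'EOF'
MY_LOG_DIR="/var/log/monitoring"
MY_RETRY_COUNT=3
EOF

# Dot-run it: the assignments persist in the current shell afterwards.
. "${MY_CONFIG}"

echo "Log directory: ${MY_LOG_DIR}"
echo "Retries:       ${MY_RETRY_COUNT}"

rm "${MY_CONFIG}"
```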

Self-installing code

The script filename and path are passed in the $0 (dollar zero) special variable. Remove the script filename with the dirname command to establish the directory where the script is located in the file system:

MY_BASE_PATH=$(dirname "$0")

Use this in a dot-run script to create self-configuring code that avoids the need for editing when it is deployed onto a new machine.

MY_SUB_DIRECTORY="${MY_BASE_PATH}/some_sub_directory"
cat "${MY_SUB_DIRECTORY}/list_of_items.dat"

NOTE: Use braces around the variable names to make them less ambiguous when adding directory paths.

Enclose file and directory references in double quotes to avoid problems with modern systems that allow spaces in file names.

Passing parameters

Pass parameters to scripts like any other command line tool, even when dot-running a script. Use as few as possible to reduce complexity.

The words argument and parameter are often used interchangeably. My preference is for the calling process to pass parameters to a command and for the target script or function to receive arguments.

Positional arguments are numbered corresponding to their index within the calling command. The first nine are accessed with the $1 to $9 special variables:

echo "$1"
echo "$9"

On the rare occasions where more than nine parameters are passed, place curly braces around the index:

echo "${10}"
echo "${11}"
... etc

There is an alternative (more complex) mechanism that names the parameters and allows them to be presented in any order. It is useful when creating a large library of reusable scripts but for our needs the positional arguments are fine.
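
For completeness, here is a brief sketch of the named style using the standard getopts builtin; the option letters, defaults and variable names are all hypothetical:

```shell
#!/bin/sh
# Sketch: named parameters parsed with the standard getopts builtin.
# Option letters, defaults and variable names are hypothetical.

MY_HOST="localhost"
MY_COUNT=4

parse_options() {
    while getopts "h:c:" MY_OPT
    do
        case "${MY_OPT}" in
            h) MY_HOST="${OPTARG}" ;;
            c) MY_COUNT="${OPTARG}" ;;
            *) echo "Usage: [-h host] [-c count]" >&2 ; return 64 ;;
        esac
    done
}

# The options can be presented in any order.
parse_options -c 8 -h 192.168.1.10

echo "Host: ${MY_HOST}  Count: ${MY_COUNT}"
```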

Use exit status values to indicate errors

When a script or command is called, it runs in a child process. On completion, an exit status value is returned to describe the outcome. The standard output and error streams can also be redirected. A non-zero exit status indicates that a problem occurred.

User defined exit status values should use the range 64-113 or 131-255 to avoid clashes with well-known reserved values. Script exit status values should never be higher than 255.

This example expects five arguments. The $# special variable is a count of how many have been presented. Check the value and return an error message with a non-zero exit status if there were too few.

if [ "$#" -lt "5" ]
then
   echo "Too few arguments" >&2
   exit 105
fi

echo "Everything OK"
exit 0

The error message is written to standard error (>&2) so the calling script can separate it from the standard output stream.

The optional exit 0 at the end of a script is a safety net in case a previously executed command propagates a non-zero exit status back to the caller.

NOTE: Do not call the exit command in dot-run scripts because it will exit the current shell level.

Handling exit status values

Detect non-zero exit status results to handle exceptions without halting the script. Only check them where there is a risk of an error happening.

Capture the exit status immediately from the $? special variable before it is overwritten. This example reacts to the argument count error that was detected in the previous script:

MY_LINE_NUMBER=$LINENO ; parm_count.sh

MY_ERR=$?
# $? is already 0 again because the assignment was successful

if [ "${MY_ERR}" -ne "0" ]
then
   echo "Error ${MY_ERR} in line: ${MY_LINE_NUMBER}" >> err.log
   # Some remedial actions here
fi
Save the line number of the command we will error check. The semicolon executes two separate commands on one line, so $LINENO records the correct line number.

Save the exit status for use in the echo and then check for a non-zero result. Write the line number and message to an error log to preserve it.

Detect duplicate values with command substitution

The shell substitutes the result of a command by enclosing it in brackets and prefixing them with a dollar sign: $(command). This example uses substitutions to detect duplicate items in a file or other output stream:


LINES=$(cat "${MY_FILE}" | wc -l | tr -d ' ')

DEDUPED=$(cat "${MY_FILE}" | sort | uniq | wc -l | tr -d ' ')

if [ "${LINES}" -gt "${DEDUPED}" ]
then
   echo "Duplicates detected"
else
   echo "No duplicates"
fi

After sorting the input, the uniq command removes duplicates before wc counts the number of lines. Omitting the sort and uniq commands counts the lines in the original source input. Use the cat command to avoid echoing the file name in the output and tr to remove padding spaces.

Use this technique to count processes, measure disk space and check file sizes. More complex diagnostics can be implemented as shell scripts or compiled executables that are called in the same way.
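
The whole check can be exercised end-to-end with a small generated sample; the file name and addresses here are hypothetical:

```shell
#!/bin/sh
# Sketch: duplicate detection exercised on a generated sample file.
# The file name and addresses are hypothetical.

MY_FILE="/tmp/hosts_$$.dat"
printf '%s\n' 192.168.1.1 192.168.1.2 192.168.1.1 > "${MY_FILE}"

LINES=$(cat "${MY_FILE}" | wc -l | tr -d ' ')
DEDUPED=$(cat "${MY_FILE}" | sort | uniq | wc -l | tr -d ' ')

if [ "${LINES}" -gt "${DEDUPED}" ]
then
   MY_RESULT="Duplicates detected"
else
   MY_RESULT="No duplicates"
fi

echo "${MY_RESULT}"
rm "${MY_FILE}"
```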

Avoid collisions when creating temporary files

Store intermediate results in a temporary file. Include the process ID (PID) from the $$ special variable in the filename to prevent other processes from overwriting your files. Retain the filename in a variable to garbage collect (delete) the file later:

MY_TEMP_FILE_NAME="/tmp/xxx_$$.tmp"

Refer to the temporary file in your code using the variable as an indirect reference:

echo "some text" > "${MY_TEMP_FILE_NAME}"

Modern systems provide the mktemp command which does all the work for you and creates an empty file ready to use. It returns the file name and path for you to retain:

MY_TEMP_FILE_NAME=$(mktemp)

At the end of your script, garbage-collect the temporary file with the rm command:

rm "${MY_TEMP_FILE_NAME}"

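Putting the pieces together, the whole temporary-file lifecycle looks like this (a minimal sketch):

```shell
#!/bin/sh
# Sketch: full lifecycle of a temporary file created with mktemp.

MY_TEMP_FILE_NAME=$(mktemp)     # create an empty, uniquely named file
echo "intermediate result" > "${MY_TEMP_FILE_NAME}"

MY_CONTENT=$(cat "${MY_TEMP_FILE_NAME}")

rm "${MY_TEMP_FILE_NAME}"       # garbage collect when done
echo "${MY_CONTENT}"
```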
Atomic file transfer operations

Operations are atomic when they happen in a single step.

File transfers are never atomic because the content is incomplete while the data is being copied. Existing files are destroyed at the outset and the new file is visible as soon as it is opened for writing. Another process attempting to use the incomplete file during the transfer will probably crash as a result.

This is an issue when you want to replace an existing configuration file with a new version or drop a new content file into a container.

Fix this by using an intermediate filename followed by a file rename at the end. The new content is created invisibly beside the old and replaces it instantaneously. The transfer is now atomic because the new content is already complete before the rename happens.

Another process opening the file before the rename will get the old but still viable version. After the rename, it sees the new version. Incomplete files are never accessed and a potential crash is eliminated.

Note the trailing underscore on the temporary filename. Processes watching an inbox for files with an "m4v" file extension will not see the "m4v_" files before they are renamed.


a_copy_command "${MY_ATOMIC_FILE}"   "${MY_ATOMIC_FILE}_"

a_rename_command "${MY_ATOMIC_FILE}_"   "${MY_ATOMIC_FILE}"

This works in a variety of contexts provided a rename command can be executed on the target system.
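
On a local file system, cp and mv can stand in for the copy and rename commands. A minimal sketch, with a hypothetical file name:

```shell
#!/bin/sh
# Sketch: atomic replacement of a file using a rename.
# The file name is hypothetical.

MY_ATOMIC_FILE="/tmp/config_$$.dat"
echo "old version" > "${MY_ATOMIC_FILE}"

# Build the new content invisibly beside the old version...
echo "new version" > "${MY_ATOMIC_FILE}_"

# ...then swap it in with a single rename. A mv within one file system
# is a rename, so readers see either the old file or the new one,
# never a partial copy.
mv "${MY_ATOMIC_FILE}_" "${MY_ATOMIC_FILE}"

MY_RESULT=$(cat "${MY_ATOMIC_FILE}")
rm "${MY_ATOMIC_FILE}"
echo "${MY_RESULT}"
```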


These are useful techniques to make your scripts more robust and reliable. The additional work is minor but the impact on reliability can be profound. Defensive coding may take a little longer to implement but the benefits are worthwhile. It is possible to deploy systems that operate reliably for years without any need for maintenance if you pre-empt the anticipated problems.

Try to avoid doing the work yourself if there is a tool that provides a simple solution. Let the system do all the heavy lifting for you.
