IP Monitoring & Diagnostics With Command Line Tools: Part 3 - Monitoring Your Remote Systems
Monitoring what is happening in a remote system depends on being able to ask for something to be checked and having the results reported back to you. There are many ways to do this. This article looks at some simple examples.
Last time, we found out how to check that our remote systems are reachable and working. Those machines are hosting important processes for us to interact with. Monitoring them can alert us right away when something starts to go wrong. There are many alternative solutions for gathering information from remote systems.
You can only communicate with a remote system if it is running a process that responds to your connection. These are most likely background processes with no visible user interface.
Background processes can be started from the command line and detached from the parent session so they continue running after you log out; otherwise they would halt when the session ends. They can also be started automatically by the rc init scripts when a machine boots.
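As a minimal sketch of starting a detached background process (the monitor script, its short work loop and the log file name are all illustrative stand-ins), nohup keeps a command running after the terminal session ends:

```shell
# Create a trivial stand-in for a monitoring script (illustrative only);
# a real monitor would loop indefinitely rather than three times.
printf '#!/bin/sh\nfor i in 1 2 3; do date; sleep 1; done\n' > monitor.sh
chmod +x monitor.sh

# Detach it from the terminal with nohup so it keeps running after you
# log out; stdout and stderr are captured in a log file.
nohup ./monitor.sh > monitor.log 2>&1 &
echo "monitor.sh running as PID $!"
```

A real monitor would be left running indefinitely and inspected through its log file.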
There are several different ways to run background processes in the UNIX environment. The optimum choice depends on what you need that process to do.
• Servers: A web server waits for requests to arrive on port 80 and supervises multiple child processes to handle them.
• Daemon: A single process running in the background. The syslogd daemon, for example, collates messages from other processes and stores them in a shared log file.
• Agent app: An application triggered on demand to act on your behalf, possibly via an ssh command.
• Service listener: A service that starts up when a remote machine connects to a specific port. The listener runs the application configured for that port and redirects the incoming data stream to it.
Service listeners are a very efficient solution for monitoring a remote system. They consume no resources when they are at rest and only start up (very quickly) on demand. They are very resilient to memory leak problems because they quit immediately on completion of the task.
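To make the service-listener idea concrete, here is the classic inetd approach as a hedged sketch; the `dstats` service name, the port number and the script path are all hypothetical, and modern systems may use xinetd or systemd socket units instead:

```
# /etc/services - give the port a name (hypothetical choice of 5555):
dstats    5555/tcp

# /etc/inetd.conf - inetd listens on that port and, for each incoming
# connection, runs the script with stdin/stdout wired to the socket:
dstats  stream  tcp  nowait  nobody  /usr/local/bin/dstats.sh  dstats.sh
```

Here inetd itself is the only resident process; the script starts on demand and quits when its task completes.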
Monitoring with ssh
Measure the available disk space on a remote system with the df command:
ssh [email protected] df
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/root 2451064 1017960 1330704 44% /
none 512652 0 512652 0% /dev
/tmp 516844 800 516044 1% /tmp
/run 516844 2700 514144 1% /run
/dev/shm 516844 4 516840 1% /dev/shm
For now, you must enter the password for the remote account manually. Once we install a shared security key, that step disappears; setting one up is straightforward and I will cover it in the next article.
A monitoring process can observe status with any of the tools the remote account has permission to use. If you elevate the privileges of the remote account, you should also add firewall protection to block connection attempts from unrecognised machines. The TCP Wrappers utility can do that, enforcing rules that lock out unauthorised access.
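As a sketch of acting on that df report (the 80% threshold, the function name and the alert format are my own illustrative choices), you could filter the output for filesystems that are running short of space:

```shell
# Print an alert for any filesystem above a usage threshold.
# Feed it the df report, e.g.:  ssh [email protected] df | df_alert
df_alert() {
  awk -v limit=80 'NR > 1 {
    use = $5
    sub(/%/, "", use)                 # strip the % from the Use% column
    if (use + 0 > limit)
      printf "ALERT: %s is %s%% full (mounted on %s)\n", $1, use, $6
  }'
}
```

Run locally, the same filter works just as well on the output of a plain `df`, so one function serves both local and remote checks.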
Using the netcat (nc) command
The nc command can be started at both ends of a remote connection, as a server or a client. Sending files to remote systems with it is very easy; you can also use nc interactively, typing your messages to it directly, or let other applications use it as an intermediary. Be aware that connecting outside your local network has security implications at the router, because the chosen ports may not be accessible.
At the remote end, you might use nc to listen on a particular port number and store any incoming data in a specific file. This example sets up a service listener on the remote machine using port number 5555:
nc -l 5555 > file_name.txt
On the local machine, use another instance of nc to communicate with the remote system on that port and send it some data from one of your files. Whatever you send will be stored in the target file:
nc host.example.com 5555 < filename.txt
If you want to send some input interactively from your keyboard, just fire up nc on your machine without an input file:
nc host.example.com 5555
Anything you type is then transmitted and stored in the remote file until you exit. Terminate the local interactive session by pressing [Control] + [D]; your local copy of nc exits and signals the remote instance to close its file and quit. If you are using nc from Windows, type [Control] + [Z] and then [Enter] to send the equivalent of [Control] + [D].
You could use this mechanism to store interlocks on a remote system that stop and start processing operations. A process checks the lock file contents on the remote machine before starting; if the file contains the word 'STOP', it aborts its start-up and checks again after a suitable interval.
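A minimal sketch of that check, wrapped as a shell function (the function name and file path are illustrative; the STOP convention matches the description above):

```shell
# Return 1 (abort) when the lock file contains the word STOP on a line
# by itself, or 0 (clear to proceed) otherwise.
check_interlock() {
  if grep -qx 'STOP' "$1" 2>/dev/null; then
    echo "Interlock is set - aborting start-up."
    return 1
  fi
  echo "Clear to proceed."
  return 0
}
```

A start-up script might run `check_interlock /tmp/interlock.txt` and, on failure, sleep for a suitable interval before trying again.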
To have nc running on a remote machine when you need it, something must have started it for you. Aside from starting it manually, you could do this remotely with an ssh command, but this gets very complex, very quickly.
Advanced topics like this are quite fascinating to solve and there is always a solution. When you find yourself wrestling with a complex scenario like this, try to step back and analyse the requirements again, because you might be approaching it in completely the wrong way and need a different type of remote intermediary process to act for you.
Using HTTPD for monitoring
A monitoring page served over HTTP can return its results in whatever format suits the consumer:
• CSV: Comma Separated Values, either as a single line or a grid for importing into Excel or a database.
• TSV: Tab Separated Values, using a tab character as the separator instead of a comma.
• XML: Strictly formatted mark-up similar to HTML. A useful way to serialise complex data structures, but it adds significant overhead when formatting the raw data.
• TXT: Unstructured data using any proprietary schema you want, or none at all.
To implement remote monitoring with HTTPD, build a web page (I would recommend using PHP). Let that code call out to the command-line environment and execute a command there, then capture the result and output it as the page contents.
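To show the shape of that page in the plain shell used throughout this series (a PHP page would follow the same pattern, running the command with PHP's shell_exec and echoing the result), here is a minimal CGI-style sketch. The function name is illustrative, and it assumes your web server is configured to execute CGI scripts:

```shell
# Emit an HTTP header block, then run a command and return its output
# as the page body - the whole status page in three steps.
status_page() {
  echo "Content-Type: text/plain"
  echo ""                  # blank line ends the headers
  df                       # the command output becomes the page contents
}
status_page
```

Saved as an executable script in the server's cgi-bin directory, fetching its URL returns a fresh df report each time.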
The individual components are quite simple to build and we will examine several examples of those in upcoming articles.
Using curl and wget
The curl and wget tools can communicate with remote systems and are easy to use. Either command could send a request to the status URL on your web server and store the result in a temporary file for local processing.
The wget tool is primarily designed to download webpages. It can use HTTP, HTTPS or FTP protocols to access the contents of a remote web server.
The curl tool is designed for transferring files of any kind and can use more sophisticated mechanisms. It can access remote resources with secure copy, SMB file sharing as well as all the protocols that wget supports. You could also use curl to fetch mail messages from a mail server using the POP3 protocol.
The two tools are often confused with each other because both will retrieve webpages. The curl tool is much more powerful than wget and is useful for accessing many more remote resources than wget can reach.
Here is an example of using wget to retrieve a web page:
wget -O d_space.txt https://www.***.com/d_space.php
We could use curl to do the same thing:
curl -o d_space.txt https://www.***.com/d_space.php
Note that the output filename option flags are different for each command. The file is saved in the current working directory in the d_space.txt file.
Because we are using HTTP connections via port 80, this can work around port mapping problems that you might have with other protocols. Port 80 is often opened on routers to allow access to web pages hosted on servers in another sub-net.
The curl command can upload files to a remote location via the SMB file sharing protocol; with wget, you could only do that with an HTTP POST, as if you were submitting a form.
Clearly curl is much more powerful but wget is simpler to use for many tasks. We will take a deeper dive into what wget and curl can do in a later article.
With messages and calls to action moving around the system automatically, integrating our individual computers and devices so they behave like a single system becomes much more practical.
In the next few articles we will explore how the UNIX command line environment can amplify your skills with these tools. For example, you can:
• Install a shared key on a remote system to simplify using ssh.
• Redirect the output into a file.
• Filter and edit the results.
• Chain multiple tools together to execute a pipeline in one call to action.
• Wrap commands into a script and call them like you might use a macro in MS Office.
• Schedule the execution of scripts on a regular basis.