Quick and dirty bash performance profiler

Submitted by badmin on Thu 04/05/2017 - 10:19

I recently got some comments on osync about performance issues with large filesets (600k files).
[Edit] The problem actually arose with rsync < 3.1.2 when transferring xattrs.[/Edit]
Before going insane with a debugger and other heavy tooling, I decided to write a very small performance profiler function.

My scripts often use forks in order to keep control over execution time, and that's where I hook the performance profiler (which is really a big word for something that simply logs CPU and memory usage).
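To illustrate the fork pattern mentioned above, here is a minimal sketch of running a task in a background fork and killing it once it exceeds a time limit. The names (long_task, MAX_EXEC_TIME) are illustrative only, not taken from osync:

```shell
#!/usr/bin/env bash
# Hypothetical sketch: fork a task and keep control over its execution time.
MAX_EXEC_TIME=3

long_task() { sleep 60; }   # stand-in for the real workload

long_task &
pid=$!

elapsed=0
while kill -0 "$pid" > /dev/null 2>&1; do
        if [ "$elapsed" -ge "$MAX_EXEC_TIME" ]; then
                # Time limit exceeded: terminate the fork
                kill "$pid"
                echo "Task killed after ${elapsed}s"
                break
        fi
        sleep 1
        elapsed=$((elapsed + 1))
done
```

The polling loop is also the natural place to call _PerfProfiler, since the parent already wakes up once per second anyway.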

The performance profiler function periodically outputs statistics about the script and all its child processes.

The function itself:

function _PerfProfiler {
        local perfString

        # CPU, memory, command and cumulated CPU time of the main process
        perfString=$(ps -p $$ -o %cpu,%mem,cmd,time)

        # Same stats for every direct child process (tail -1 strips the header)
        for i in $(pgrep -P $$); do
                perfString="$perfString\n"$(ps -p "$i" -o %cpu,%mem,cmd,time | tail -1)
        done

        # Add disk I/O statistics if iostat is available
        if type iostat > /dev/null 2>&1; then
                perfString="$perfString\n"$(iostat)
        fi

        echo -e "PerfProfiler: $perfString" >> ./performance.log
}

An example Linux bash script that uses the performance profiler (okay, the sleep command isn't very useful, but you can replace it with whatever you need to run):

sleep 10 &
pid=$!
sleep 5 &

# Poll every second while the first child is still alive
while kill -0 "$pid" > /dev/null 2>&1; do
        _PerfProfiler
        sleep 1
done

The kill -0 $pid command sends no actual signal; it only checks whether a signal could be delivered, which makes it a handy way to test whether the pid is still running. sleep 1 sets a trivial one-second sampling interval for the performance statistics.
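The kill -0 behavior can be demonstrated on its own with a short snippet:

```shell
#!/usr/bin/env bash
sleep 2 &
pid=$!

# The child is alive, so kill -0 succeeds
kill -0 "$pid" > /dev/null 2>&1 && echo "child is still running"

wait "$pid"

# The child has exited and been reaped, so kill -0 now fails
kill -0 "$pid" > /dev/null 2>&1 || echo "child has finished"
```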

At the end of the execution, just check the performance.log file.
My own version uses a logger function that shows output on screen and logs it to a file at the same time, which may be more useful.
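The logger function I use isn't shown here, but a minimal sketch of the idea, using tee to write to screen and file in one go, could look like this (the Logger name and LOG_FILE path are illustrative):

```shell
#!/usr/bin/env bash
LOG_FILE="./performance.log"

# Minimal logger sketch: print the message to stdout and append it
# to the log file in a single pass via tee -a.
Logger() {
        local value="$1"
        echo -e "$value" | tee -a "$LOG_FILE"
}

Logger "PerfProfiler: sample entry"
```

Inside _PerfProfiler, the final echo >> ./performance.log line would then simply become a Logger call.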