Avoid script running multiple times by file lock

Sometimes we need only a single instance of a script to run at a time. That is, the script itself should detect whether another instance of itself is already running and act accordingly.

When multiple instances of one script run at once, problems follow easily. I have seen about 350 instances of a status-checking script sitting there doing nothing, yet eating lots of system resources.

So it's an important feature for your scripting work, and you may already have faced similar issues in cron jobs, system monitoring scripts, or system backup procedures.

Well, how can we get rid of such problems?

The idea is simple: the first instance of the script opens a file and locks it, shutting out any later instance. When a subsequent instance tries to run and lock the file, it will fail. Here we show three examples of how to lock a script, in Shell, Python, and Perl.

How can we create a lock file in Shell?

An example looks like this:

# This is to examine whether the lock file exists
[ -f "${0}.lock" ] && exit 1
# Create the lock file (the lockfile command ships with procmail)
lockfile "${0}.lock"
# Your code goes here!
# Release the lock file manually
rm -f "${0}.lock"

Please keep in mind that since we use a relative path for the lock file here, we must be in the same working directory when creating the lock file as when deleting it.
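As an alternative sketch (assuming a Linux system with the util-linux flock utility available), the flock command can take an exclusive lock on an open file descriptor, with no dependency on procmail's lockfile; the path used below is just an example:

```shell
#!/bin/sh
# Open file descriptor 9 on a dedicated lock file (example path).
exec 9>"/tmp/myscript.lock"
# Try to take an exclusive lock without blocking (-n); fail fast if held.
if ! flock -n 9; then
    echo "another instance is already running" >&2
    exit 1
fi
# Your code goes here! The kernel drops the lock when the process
# exits, so there is no lock file to remove manually.
```

Because the kernel releases the lock when the process dies, this variant cannot leave a stale lock behind after a crash, which the lockfile approach can.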

How can we create a lock file in Perl?

With the help of Fcntl, it's easy. Fcntl is a core module shipped with Perl itself, so it is available on common Linux distributions. An example looks like this:

use Fcntl ':flock'; # import LOCK_* constants
open our $SELF, '<', $0
    or die "What!? $0: $!";
flock $SELF, LOCK_EX | LOCK_NB
    or die "$0 is already running somewhere!\n";

It has one disadvantage: on at least one system and one version of Perl (AS635 on Win2k), you won't see any error message or script-generated message, as Perl fails quietly if you try to run the script while it is already running.
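Python is the third example mentioned above. A minimal sketch using the standard library's fcntl module (Unix-only; the lock file path here is just an assumption) mirrors the Perl version:

```python
import fcntl
import sys

# Keep the file object alive for the life of the script; the lock
# is released automatically when the process exits.
lock_file = open("/tmp/myscript.lock", "w")
try:
    # LOCK_EX: exclusive lock; LOCK_NB: fail immediately instead of blocking
    fcntl.flock(lock_file, fcntl.LOCK_EX | fcntl.LOCK_NB)
except BlockingIOError:
    sys.exit(f"{sys.argv[0]} is already running somewhere!")

# Your code goes here!
```

As with the flock-based Perl version, the lock vanishes when the process exits, even after a crash.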


7 thoughts on "Avoid script running multiple times by file lock"

  1. The first test is superfluous, as the lockfile command will wait until the lock is released by the other process. By default it retries forever, every 8 seconds (the default sleeptime). If you want the script to exit instead of waiting, just set the retry count to 0 with -r0.

    Example, first process:

    lockfile /tmp/lock; sleep 10; rm -f /tmp/lock

    Second process will wait if the first process is busy:

    $ lockfile /tmp/lock && echo "Finally, I'm running..."; rm -f /tmp/lock

    Second process aborts if the first one is busy:

    $ lockfile -r0 /tmp/lock && echo "Finally, I'm running..."; rm -f /tmp/lock

  2. Ripat, thanks for your nice examples! Now the script can be rewritten like this, which is simpler than the former version:

    # Create the lock file
    lockfile -r0 "${0}.lock" && {
        # Your code goes here!
        # Release the lock file manually
        rm -f "${0}.lock"
    }

