Lockfile handling

Ervin Hegedüs airween at gmail.com
Tue Mar 31 10:50:12 EDT 2015


Hello,

there is an app, written in Python, which stores a few bytes of
data in a single file. The application uses threads. Every
thread can modify the file, but only one at a time. I'm using a
lock file to prevent multiple access.

Here is the lock method:

  # requires: import errno, os, sys, syslog, time
  while True:
      try:
          # O_CREAT | O_EXCL: creating the file fails if it already exists
          fl = os.open(self.lockfile, os.O_CREAT | os.O_EXCL | os.O_RDWR)
      except OSError as e:
          if e.errno != errno.EEXIST:
              raise
          time.sleep(0.2)   # someone else holds the lock; wait and retry
          continue
      except:
          # log anything unexpected, then go around the loop again
          syslog.syslog(syslog.LOG_DEBUG, "Sync error: " + str(sys.exc_info()[1]))
      else:
          break   # lock file created: lock acquired

This worked well for me for about 3-4 weeks. Then I got this
error:

OSError: [Errno 24] Too many open files: '/var/spool/myapp/queue.lock'


The app was restarted today, about 3-4 hours ago. Now I see this
under /proc/PID/fd:

lrwx------ 1 root     root     64 Mar 31 16:45 5 -> /var/spool/myapp/queue.lock (deleted)

There are about 50 such deleted FDs; the "(deleted)" suffix
means the lock file was unlinked while its descriptor was still
open. After a few weeks the process reaches the maximum number
of FDs.
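A likely cause of this pattern is a release path that unlinks
the lock file without ever closing the descriptor returned by
os.open(). A minimal sketch of a release that avoids this,
assuming the acquire loop saved the descriptor somewhere like
self.lockfd = fl (a hypothetical name):

  def unlock(self):
      # close the descriptor first, so no "(deleted)" entry lingers,
      # then remove the lock file so other threads can acquire it
      os.close(self.lockfd)
      os.unlink(self.lockfile)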

How can I prevent or avoid this issue? What's the correct way to
handle the lockfile in Python?
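
For comparison, one common alternative is fcntl.flock() with a
descriptor opened per acquisition: closing the descriptor
releases the lock, so nothing can leak even on error paths. A
minimal sketch under that assumption (the helper name "locked"
is made up here):

  import fcntl
  import os
  from contextlib import contextmanager

  @contextmanager
  def locked(path):
      # each acquisition opens its own descriptor, so flock() serializes
      # threads in this process as well as other processes
      fd = os.open(path, os.O_CREAT | os.O_RDWR)
      try:
          fcntl.flock(fd, fcntl.LOCK_EX)   # blocks until the lock is free
          yield fd
      finally:
          os.close(fd)   # closing the fd also drops the flock

Used as "with locked('/var/spool/myapp/queue.lock'): ...", every
writer opens, locks, and closes its own descriptor, so the fd
count stays flat.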


Thanks,


Ervin



