Are threads bad? - was: Future of Pypy?

Chris Angelico rosuav at gmail.com
Wed Feb 25 01:07:35 EST 2015


On Wed, Feb 25, 2015 at 5:02 PM, Ian Kelly <ian.g.kelly at gmail.com> wrote:
>> Uhh, I have seen *heaps* of code whose performance suffers from too
>> much locking. At the coarsest and least intelligent level, a database
>> program that couldn't handle concurrency at all, so I wrote an
>> application-level semaphore that stopped two people from running it at
>> once. You want to use that program? Ask the other guy to close it.
>> THAT is a performance problem. And there are plenty of narrower cases,
>> where it ends up being a transactions-per-second throughput limiter.
>
> Is the name of that database program "Microsoft Access" perchance?

No, though it wouldn't surprise me if it had the same issue. No, the
program was a DBase-backed one of my own development; it was the DBase
engine itself that couldn't handle concurrency, so I added some
startup code that checked to see if anyone else had the file open, and
popped up a "retry or cancel" prompt until the semaphore cleared.
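For anyone curious, that kind of startup semaphore can be sketched with an atomically-created lock file. This is my own rough reconstruction, not the original code, and the lock-file path and function names are invented for illustration:

```python
import os
import sys

LOCK_PATH = "app.lock"  # hypothetical location for the semaphore file

def acquire_lock(path=LOCK_PATH):
    """Try to create the lock file; return True if we got it."""
    try:
        # O_EXCL makes creation atomic: open fails if the file exists,
        # so two processes can't both think they hold the lock.
        fd = os.open(path, os.O_CREAT | os.O_EXCL | os.O_WRONLY)
        os.write(fd, str(os.getpid()).encode())
        os.close(fd)
        return True
    except FileExistsError:
        return False

def release_lock(path=LOCK_PATH):
    """Drop the lock so the next process can start."""
    try:
        os.remove(path)
    except FileNotFoundError:
        pass

def startup(path=LOCK_PATH):
    """Loop on a retry-or-cancel prompt until the semaphore clears."""
    while not acquire_lock(path):
        answer = input("Database is in use. Retry or cancel? [r/c] ")
        if answer.lower().startswith("c"):
            sys.exit(1)
```

The real program checked whether anyone else had the database file itself open; a separate lock file is just the simplest way to show the shape of it.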

Later on, I was able to shift the entire content into DB2 (once we no
longer needed compatibility with an even older DBase-backed program),
and voila, we had concurrency. I still needed to make use of
record-level locking (if you open a record for editing, it holds a
lock for as long as you have the window open; chances are, anyone else
who wants the same record is actually doing the same job, so erroring
out is the best option), but no more database-level locks.
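The record-level scheme is the same idea one level down: hold a lock per record for as long as the edit window is open, and fail fast rather than block if someone else already holds it. A toy in-memory version (my own sketch, not DB2's mechanism or the actual code):

```python
class RecordLocks:
    """Hypothetical per-record lock registry. Acquiring a record that
    someone else holds raises immediately instead of waiting, since the
    other user is probably doing the same job anyway."""

    def __init__(self):
        self._held = {}  # record id -> owner name

    def acquire(self, record_id, owner):
        holder = self._held.get(record_id)
        if holder is not None and holder != owner:
            raise RuntimeError(
                "record %r is locked by %s" % (record_id, holder))
        self._held[record_id] = owner

    def release(self, record_id, owner):
        # Only the holder may release; anything else is a no-op.
        if self._held.get(record_id) == owner:
            del self._held[record_id]
```

In a real database you'd get this from the engine itself (e.g. row locks taken when a record is opened for update) rather than rolling your own, but the fail-fast-on-conflict behaviour is the point.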

ChrisA



More information about the Python-list mailing list