[issue41699] Potential memory leak with asyncio and run_in_executor
Sophia Wisdom
report at bugs.python.org
Thu Oct 29 22:22:37 EDT 2020
Sophia Wisdom <sophia at reduct.video> added the comment:
While not calling executor.shutdown() may leave some resources in use, the amount should be small and fixed. Regularly calling executor.shutdown() and then instantiating a new ThreadPoolExecutor just to run an asyncio program does not seem like a good API to me.
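For concreteness, here is a minimal sketch of the shutdown-and-recreate pattern being criticized (the batch helper and worker counts are hypothetical, not from the original report):

```python
import asyncio
from concurrent.futures import ThreadPoolExecutor

# Hypothetical workaround: tear down the executor after every batch of
# work and build a fresh one, instead of reusing one long-lived executor.
async def run_batch(jobs):
    loop = asyncio.get_running_loop()
    executor = ThreadPoolExecutor(max_workers=4)
    try:
        results = await asyncio.gather(
            *(loop.run_in_executor(executor, job) for job in jobs)
        )
    finally:
        executor.shutdown(wait=True)  # release the worker threads each time
    return results

results = asyncio.run(run_batch([lambda: 1, lambda: 2]))
```

This works, but forces callers to manage executor lifetimes by hand for what should be routine usage.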
You mention there appear to be both an event loop leak and a futures leak -- I think I have a good test case for the futures, without using threads at all. This seems to be leaking `future._result`s somehow, even though their __del__ is called.
```
import asyncio
from concurrent.futures import Executor, Future
import gc

result_gcs = 0
suture_gcs = 0

class ResultHolder:
    def __init__(self, mem_size):
        self.mem = list(range(mem_size))  # so we can see the leak

    def __del__(self):
        global result_gcs
        result_gcs += 1

class Suture(Future):
    def __del__(self):
        global suture_gcs
        suture_gcs += 1

class SimpleExecutor(Executor):
    def submit(self, fn):
        future = Suture()
        future.set_result(ResultHolder(1000))
        return future

async def function():
    loop = asyncio.get_running_loop()
    for i in range(10000):
        loop.run_in_executor(SimpleExecutor(), lambda x: x)

def run():
    asyncio.run(function())
    print(suture_gcs, result_gcs)
```
Memory usage before the call is about 10MB:
```
> run()
10000 10000
```
Afterwards, memory usage is about 100MB.
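The 10MB/100MB figures were presumably read off an external memory profiler; a sketch of one way to take a comparable measurement in-process with tracemalloc (smaller iteration count and a plain list payload here, chosen for illustration):

```python
import asyncio
import tracemalloc
from concurrent.futures import Executor, Future

# Executor that completes every job immediately with a sizable payload,
# mirroring the SimpleExecutor from the test case above.
class SimpleExecutor(Executor):
    def submit(self, fn):
        future = Future()
        future.set_result(list(range(1000)))  # payload to make a leak visible
        return future

async def function():
    loop = asyncio.get_running_loop()
    for _ in range(1000):
        loop.run_in_executor(SimpleExecutor(), lambda: None)

tracemalloc.start()
asyncio.run(function())
current, peak = tracemalloc.get_traced_memory()
print(f"current={current} bytes, peak={peak} bytes")
tracemalloc.stop()
```

If the futures' results were freed promptly, `current` after the run should be far below `peak`.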
Both result_gcs and suture_gcs are 10000 every time. My best guess for why this happens (for me it doesn't seem to happen without loop.run_in_executor) is the conversion from a concurrent.futures.Future to an asyncio.Future, which involves callbacks to check on the result -- but that doesn't make sense, because __del__ is called on the result itself, yet somehow the memory isn't freed!
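The conversion in question can be sketched with the public asyncio.wrap_future API, which is (roughly) what loop.run_in_executor uses to hand the executor's concurrent.futures.Future back to asyncio; the internal chaining helpers are more involved than shown:

```python
import asyncio
from concurrent.futures import Future

async def demo():
    cf = Future()                 # what executor.submit() would return
    af = asyncio.wrap_future(cf)  # what run_in_executor hands back to asyncio
    cf.set_result("payload")      # as if an executor thread finished the job;
                                  # a done-callback copies the state across
    return await af               # the asyncio side sees the copied result

result = asyncio.run(demo())
print(result)  # payload
```

The copy happens through done-callbacks registered on both futures, so references to the result can live in more places than the futures themselves.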
----------
_______________________________________
Python tracker <report at bugs.python.org>
<https://bugs.python.org/issue41699>
_______________________________________