Add transport.closing property #248
Description
In aiohttp project I have an issue (aio-libs/aiohttp#370).
Long story short: for handling static files aiohttp uses code like:
    with open(filepath, 'rb') as f:
        chunk = f.read(self.limit)
        while chunk:
            resp.write(chunk)
            yield from resp.drain()
            chunk = f.read(self.limit)
When the client closes the HTTP connection (by a socket shutdown, for example), transport._force_close(...) schedules _call_connection_lost for the next loop iteration and sets the transport._closing flag (as well as transport._conn_lost).
The actual transport teardown happens on the next loop iteration, but the aiohttp static file handler has no way to learn that the transport is closing: stream.write() and the underlying transport.write() do not check the ._closing flag and always succeed.
transport.write() does check ._conn_lost, but it only logs a message instead of raising an exception -- and that behavior is perfectly correct.
As a result, the aiohttp static file handler sends no data on each resp.write(chunk) call, the stream buffer never fills up, and yield from resp.drain() always returns without pausing.
Thus an entire multi-megabyte file may be iterated over and pushed into the stream. Of course, no data is actually sent over the wire, but the whole process:
a) takes longer than necessary;
b) floods the asyncio logger with 'socket.send() raised exception.' warnings.
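To illustrate why the writes succeed silently, here is a minimal sketch of the write path once _conn_lost is set. The class is a simplified stand-in, not the real asyncio transport (only the threshold constant name follows asyncio); the point is that the data is dropped and, after a few calls, only a warning is logged.

```python
import logging

logger = logging.getLogger('asyncio')

# Name follows asyncio.constants; value here is illustrative.
LOG_THRESHOLD_FOR_CONNLOST_WRITES = 5

class FakeTransport:
    """Simplified stand-in for asyncio's selector transport."""

    def __init__(self):
        self._closing = False
        self._conn_lost = 0
        self._buffer = bytearray()

    def _force_close(self, exc):
        # The real transport also schedules _call_connection_lost here.
        self._closing = True
        self._conn_lost += 1

    def write(self, data):
        if self._conn_lost:
            # Data is silently dropped; the caller sees no error.
            if self._conn_lost >= LOG_THRESHOLD_FOR_CONNLOST_WRITES:
                logger.warning('socket.send() raised exception.')
            self._conn_lost += 1
            return
        self._buffer.extend(data)

t = FakeTransport()
t._force_close(None)
t.write(b'chunk')          # "succeeds", but nothing is buffered
print(len(t._buffer))      # → 0
```

Because write() returns normally and the buffer stays empty, a flow-control check in drain() never triggers.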
I propose adding a public read-only bool property .closing (returning the internal ._closing value).
It would give me a way to check whether the transport is in the middle of its closing procedure.
aiohttp.StreamWriter should also be modified: I suggest adding a .closing property to StreamWriter that mirrors the transport's.
The StreamWriter.drain() coroutine should check stream.closing first and call yield from asyncio.sleep(0), so that the previously scheduled transport._call_connection_lost runs before anything else.
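A sketch of what that drain() change could look like (written in modern async/await syntax for brevity rather than the yield from style above; the transport class and the elided flow-control details are assumptions):

```python
import asyncio

class FakeClosingTransport:
    """Stand-in for a transport that has already started closing."""
    closing = True

class StreamWriter:
    def __init__(self, transport):
        self._transport = transport

    @property
    def closing(self):
        # Mirror the underlying transport's closing state.
        return self._transport.closing

    async def drain(self):
        if self.closing:
            # Yield to the event loop so the already-scheduled
            # _call_connection_lost callback gets a chance to run.
            await asyncio.sleep(0)
            return
        # ... normal flow-control (pause/resume) logic would go here ...

writer = StreamWriter(FakeClosingTransport())
asyncio.run(writer.drain())   # returns immediately after yielding once
```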
Sorry for the long message; I hope I've described my problem well enough. Feel free to ask if my (rather complex) scenario is still unclear.
If there are no objections, I will prepare a patch.