Simple TCP proxy

Barry barry at barrys-emacs.org
Thu Jul 28 08:29:39 EDT 2022



> On 28 Jul 2022, at 10:31, Morten W. Petersen <morphex at gmail.com> wrote:
> 
> 
> Hi Barry.
> 
> Well, I can agree that using backlog is an option for handling bursts. But what if that backlog number is exceeded?  How easy is it to deal with such a situation?

You can make the backlog very large, if that makes sense.
But at some point, once you cannot keep up with the average
rate of incoming connections, you will be forced to reject some.
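Roughly, this is all it takes in stdlib Python (the address and backlog value below are illustrative, and on Linux the kernel silently caps the backlog at net.core.somaxconn):

```python
# Sketch: a listening socket with a large backlog, so the kernel queues
# burst connections instead of refusing them immediately. Once the queue
# fills faster than accept() drains it, new connections are still dropped;
# backlog absorbs bursts, not sustained overload.
import socket

BACKLOG = 4096  # illustrative; Linux caps this at net.core.somaxconn

sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
sock.listen(BACKLOG)         # kernel queues up to BACKLOG pending connections
sock.close()
```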


> 
> I just cloned twisted, and compared the size:
> 
> morphex at morphex-Latitude-E4310:~$ du -s stp; du -s tmp/twisted/
> 464 stp
> 98520 tmp/twisted/
> morphex at morphex-Latitude-E4310:~$ du -sh stp/LICENSE 
> 36K stp/LICENSE
> 
> >>> 464/98520.0
> 0.004709703613479496
> >>> 
> 
> It's quite easy to get an idea of what's going on in STP, as opposed to if something goes wrong in Twisted with the size of the codebase. I used to use emacs a lot, but then I came into a period where it was more practical to use nano, and I mostly use nano now, unless I need to for example search and replace or something like that.

I mentioned Twisted for context. Depending on your needs, the built-in Python 3 async support (asyncio) may well be sufficient. Using threads is not scalable.

In the places where I code, disk space of a few MiB is not an issue.

Barry

> 
> -Morten
> 
>> On Thu, Jul 28, 2022 at 8:31 AM Barry <barry at barrys-emacs.org> wrote:
>> 
>> 
>> > On 27 Jul 2022, at 17:16, Morten W. Petersen <morphex at gmail.com> wrote:
>> > 
>> > Hi.
>> > 
>> > I'd like to share with you a recent project, which is a simple TCP proxy
>> > that can stand in front of a TCP server of some sort, queueing requests and
>> > then allowing n number of connections to pass through at a time:
>> > 
>> > https://github.com/morphex/stp
>> > 
>> > I'll be developing it further, but the files committed in this tree
>> > seem to be stable:
>> > 
>> > https://github.com/morphex/stp/tree/9910ca8c80e9d150222b680a4967e53f0457b465
>> > 
>> > I just bombed that code with 700+ requests almost simultaneously, and STP
>> > handled it well.
>> 
>> What is the problem that this solves?
>> 
>> Why not just increase the allowed size of the socket listen backlog if you just want to handle bursts of traffic.
>> 
>> I do not think of this as a proxy, rather a tunnel.
>> And the tunnel is a lot more expensive than having the kernel keep the
>> connection in the listen socket backlog.
>> 
>> I work on a web proxy written in Python that handles huge load and
>> uses the backlog to absorb the bursts.
>> 
>> It’s async, using Twisted, as threads are not practical at scale.
>> 
>> Barry
>> 
>> > 
>> > Regards,
>> > 
>> > Morten
>> > 
>> > -- 
>> > I am https://leavingnorway.info
>> > Videos at https://www.youtube.com/user/TheBlogologue
>> > Twittering at http://twitter.com/blogologue
>> > Blogging at http://blogologue.com
>> > Playing music at https://soundcloud.com/morten-w-petersen
>> > Also playing music and podcasting here:
>> > http://www.mixcloud.com/morten-w-petersen/
>> > On Google+ here https://plus.google.com/107781930037068750156
>> > On Instagram at https://instagram.com/morphexx/
>> > -- 
>> > https://mail.python.org/mailman/listinfo/python-list
>> > 
>> 
> 
