urllib: geturl with max filesize?

Wendelin Auer wa at net-federation.de
Tue Jun 26 08:50:27 EDT 2001


Hello.
I'm planning to 'pull' a page header from a different URL than the rest of
my page. I'm not very experienced with Python, but I tried, and the following
code works well (scripted HTML in an output template of a
Python search server):


<!-- **********************start header**************************-->
<!--$
import urllib
# fetch the remote header fragment and inline it into the page
f = urllib.urlopen("http://www.some-url.com/0,1003,-XX-200---,FF.html")
data = f.read()
f.close()
write(data)
-->
<!-- **********************start content**************************-->


The URL comes from a parameter in the search request, so anybody can
change it.
So what if some nice guy puts the URL of a 10 GB file on a fast server in
this place? I have to prevent that.

- Does anybody know how to limit the file size retrieved by this urlopen()
  call? (See the sketch below for my current idea.)
- Is there maybe another security problem?
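
One approach I have been sketching is to read the document in small chunks
and abort as soon as a limit is exceeded, roughly like this (the 1 MB cap
and the chunk size are arbitrary guesses, and write() is the output function
of my template environment):

import urllib

MAXBYTES = 1024 * 1024   # arbitrary 1 MB cap; adjust to taste
CHUNKSIZE = 8192         # read in small pieces instead of all at once

f = urllib.urlopen("http://www.some-url.com/0,1003,-XX-200---,FF.html")
chunks = []
total = 0
while 1:
    chunk = f.read(CHUNKSIZE)   # read(n) returns at most n bytes
    if not chunk:
        break                   # end of document reached
    total = total + len(chunk)
    if total > MAXBYTES:
        f.close()
        raise IOError("remote document larger than %d bytes" % MAXBYTES)
    chunks.append(chunk)
f.close()
write("".join(chunks))

I guess one could also check the Content-Length header via
f.info().getheader("content-length") before reading, but as far as I know a
server does not have to send that header (and it could lie), so the chunked
read seems like the safer guard. For the second question, I suppose checking
the URL's host against a list of allowed servers (e.g. with urlparse) would
also help, since the parameter is attacker-controlled.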

Thank you,
Wendelin Auer




