urllib (sort of reload) question

David Fisher python at rose164.wuh.wustl.edu
Tue Apr 25 07:34:27 EDT 2000


Hi,
I've never actually done this, so caveat emptor.  A glance at RFC 2616 tells
me you're going to need to add a "Pragma: no-cache" header to the outgoing request.
Add the line:

h.putheader("Pramga","no-cache")

somewhere before h.endheaders() in the open_http() method of the URLopener
class.  Right next to the putrequest("GET", ...) call would probably be good.
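If you'd rather not patch urllib.py itself, something along these lines should
have the same effect -- an untested sketch, assuming your urllib's URLopener has
the addheader() method (the URL below is just a placeholder):

import urllib

opener = urllib.URLopener()
# Headers added here are sent by open_http() along with the GET request.
opener.addheader("Pragma", "no-cache")
# RFC 2616 (HTTP/1.1) caches look at Cache-Control; sending both shouldn't hurt.
opener.addheader("Cache-Control", "no-cache")

f = opener.open("http://www.example.com/page.html")   # placeholder URL
data = f.read()
f.close()

That way every request made through that opener bypasses the cache, without
touching the library source.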
Good luck,
David


----- Original Message -----
From: "John Littler" <linuxmusic at crosswinds.net>
Newsgroups: comp.lang.python
To: <python-list at python.org>
Sent: Friday, April 21, 2000 8:24 PM
Subject: urllib (sort of reload) question


>
> Hi,
> I use urllib to get a number of URLs and parse the
> results. One pesky URL doesn't expire it's content
> correctly and as an interim measure I need to reload
> the page in my browser before getting the data with
> python. I'm going through squid so I guess that's where
> the old stuff is being picked up.
> Does anyone know how to emulate browser "reload" in
> python? Looking at the urllib code didn't provide me
> with any clues.
> TIA
> John
>
>
> --
> pgp public key @ http://www.crosswinds.net/~linuxmusic/pubring.html
