Slurping Web Pages
Paul Rubin
phr-n2003b at NOSPAMnightsong.com
Sat Jan 25 14:50:09 EST 2003
"Tony Dunn" <tdunn at lynxxsolutions.com> writes:
> I've started a new project where I need to slurp web pages from a site that
> uses cookies to authenticate access. I've used *urllib* in the past to grab
> *public* web pages, but I'm not sure the best way to go about dealing with
> the cookie issue.
You can use urllib2 and set a Cookie header on the request (note it's
urllib2.Request, capitalized):

import urllib2

request = urllib2.Request(url, None, {"Cookie": your_cookie_header})
page = urllib2.urlopen(request)
slurped_contents = page.read()
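If the cookies come from a login step rather than a value you already have, the standard library can capture and resend them automatically instead of you pasting a header by hand. A rough sketch, written against Python 3's urllib.request (the renamed urllib2); the URL and cookie value here are placeholders, not anything from the original post:

```python
import http.cookiejar
import urllib.request

# A CookieJar attached via HTTPCookieProcessor stores any Set-Cookie
# headers the server sends, and replays them on later requests made
# through the same opener.
jar = http.cookiejar.CookieJar()
opener = urllib.request.build_opener(
    urllib.request.HTTPCookieProcessor(jar)
)

# Setting the header by hand still works the same way as with urllib2;
# "session=abc123" is a placeholder cookie value.
request = urllib.request.Request(
    "http://example.com/page",  # placeholder URL
    data=None,
    headers={"Cookie": "session=abc123"},
)
# opener.open(request).read() would then fetch the page.
```

A request opened through `opener` after a successful login would carry the session cookies without any manual header handling.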