BeautifulSoup

Paul McGuire ptmcg at austin.rr.com
Sat Aug 20 02:47:29 EDT 2005


Mike -

Thanks for asking.  Typically I hang back from these discussions of
parsing HTML or XML (*especially* XML), since there are already a
number of parsers out there that can handle the full language syntax.
But it seems that many people trying to parse HTML aren't interested in
fully parsing an HTML page, so much as they are trying to match some
tag pattern, to extract or modify the embedded data.  In these cases,
fully comprehending HTML syntax is rarely required.

In this particular instance, the OP had started another thread in which
he was trying to extract some HTML content using regexps, and this
didn't seem to be converging to a workable solution.  When he finally
revealed that what he was trying to do was extract and modify the URLs
in a web page's HTML source, this seemed like a tractable problem for a
quick pyparsing program.  In the interests of keeping things simple, I
admittedly provided a limited solution.  As you mentioned, no
additional attributes are handled by this code.  But many HTML scrapers
are able to make simplifying assumptions about what HTML features can
be expected, and I did not want to spend a lot of time solving problems
that may never come up.

So, you asked some good questions; let me try to give some reasonable
answers, or at least responses:

1. "If it were in the ports tree, I'd have grabbed it and tried it
myself."
By "ports tree", I assume you mean some directory of your Linux
distribution.  I'm sure my Linux ignorance is showing here, most of my
work occurs on Windows systems.  I've had pyparsing available on SF for
over a year and a half, and I do know that it has been incorporated (by
others) into several Linux distros, including Debian, Ubuntu,
Gentoo, and Fedora.  If you are interested in doing a port to another
Linux, that would be great!  But I was hoping that hosting pyparsing on
SF would make it easy enough for most people to get at it.

2. "How well does it deal with other attributes in front of the href,
like <A onClick="..." href="...">?"
*This* version doesn't deal with other attributes at all, in the
interests of simplicity.  However, pyparsing includes a helper method,
makeHTMLTags(), that *does* support arbitrary attributes within an
opening HTML tag.  It is called like:

    anchorStart, anchorEnd = makeHTMLTags("A")

makeHTMLTags returns a pair of pyparsing expressions, one for the
opening tag and one for the closing tag.  The opening-tag expression
*does* comprehend attributes, as well as opening tags that include
their own closing '/' (indicating an empty tag body).  Tag attributes
are accessible by name in the returned results tokens, without
requiring the setResultsName() calls used in the example.
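
Here is a minimal sketch of what I mean (the sample HTML and the
onClick attribute are just made up for illustration):

    from pyparsing import makeHTMLTags

    anchorStart, anchorEnd = makeHTMLTags("A")

    # sample input, with another attribute ahead of the href
    sample = '<A onClick="doStuff()" href="http://example.com/page">a link</A>'

    # the matched tokens expose each attribute by name
    for tokens, start, end in anchorStart.scanString(sample):
        print(tokens.href)        # -> http://example.com/page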

3. "How about if my HTML has things that look like HTML in attributes,
like <TAG ATTRIBUTE="stuff<A HREF=stuff">?"
Well, again, the simple example won't be able to tell the difference,
and it would process the contents of the ATTRIBUTE string as a real
tag.  To address
this, we would expand our statement to process quoted strings
explicitly, and separately from the htmlAnchor, as in:

    htmlPatterns = quotedString | htmlAnchor

and then use htmlPatterns for the transformString call:

    htmlPatterns.transformString( inputHTML )
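
Putting those pieces together, a rough sketch might look like this (the
URL-rewriting parse action and the input HTML are invented for
illustration, and makeHTMLTags stands in for the anchor expression from
the earlier post):

    from pyparsing import quotedString, makeHTMLTags

    anchorStart, anchorEnd = makeHTMLTags("A")

    # illustrative rewrite: replace each opening <A> tag with one pointing
    # at a made-up redirect URL
    def rewriteURL(tokens):
        return '<A href="http://example.invalid/redirect?u=%s">' % tokens.href
    anchorStart.setParseAction(rewriteURL)

    # quotedString is tried first, so quoted attribute values pass through
    # untouched, and tag-like text inside them is never treated as a tag
    htmlPatterns = quotedString | anchorStart

    inputHTML = '<TAG ATTRIBUTE="stuff<A HREF=stuff"><A href="http://example.com">x</A>'
    print(htmlPatterns.transformString(inputHTML))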

You didn't ask, but one feature that is easy to handle is comments.
pyparsing includes some common comment syntaxes, such as cStyleComment
and htmlComment.  To ignore them, one simply calls ignore() on the root
pyparsing node. In the simple example, this would look like:

    htmlPatterns.ignore( htmlComment )

By adding this single statement, all HTML comments would be ignored.
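
For example (the commented-out link and the page text here are
invented):

    from pyparsing import htmlComment, makeHTMLTags

    anchorStart, anchorEnd = makeHTMLTags("A")
    anchorStart.ignore(htmlComment)   # skip anything inside <!-- ... -->

    page = ('<!-- <A href="http://example.com/old">retired link</A> --> '
            '<A href="http://example.com/live">current link</A>')

    for tokens, start, end in anchorStart.scanString(page):
        print(tokens.href)        # only http://example.com/live is reported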


Writing a full HTML parser with pyparsing would be tedious, and not a
great way to spend your time, given the availability of other parsing
tools.  But for simple scraping and extracting, it can be a very
efficient way to go.

-- Paul



