[Mailman-Developers] UI for Mailman 3.0 update

Stephen J. Turnbull stephen at xemacs.org
Fri Jun 18 06:46:51 CEST 2010


Barry Warsaw writes:

 > It's an interesting idea, but I'm not quite sure how a webserver pipeline
 > would work.  The way the list server pipeline works now is by treating
 > messages as jobs that flow through the system.  A web request is kind of a
 > different beast.

Why?  Abstractly, both web requests and mail messages are packages of
data divided into metadata and payload.  You line up a sequence of
Handlers; each one looks at the metadata and decides whether it wants
a crack at the package or not.  If no, back to the pipeline.  If yes,
it may process the metadata or payload (possibly modifying them),
then decide to (a) do something final (reject/discard it, or send
something back out to the outside world), or (b) punt it back to the
pipeline for further processing.
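
Something like this, in rough Python (the names are mine, not
Mailman's actual Handler API):

    class Handler:
        def matches(self, job):          # look at the metadata only
            return True
        def process(self, job):          # may modify metadata/payload
            return 'continue'            # or 'reject', 'discard', 'respond'

    def run_pipeline(handlers, job):
        for handler in handlers:
            if not handler.matches(job):
                continue                 # not interested; next Handler
            verdict = handler.process(job)
            if verdict != 'continue':
                return verdict           # something final happened
        return 'continue'                # fell off the end; nothing final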

You can also keep state across requests.  If the state is
request-specific, the stateless nature of both HTTP and email means
you need cookies (i.e., one-time keys).  So, what's so different?
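
A one-time key is just a token you issue now and redeem later; a
rough sketch (again my names, nothing in Mailman):

    import uuid

    _pending = {}                        # token -> saved job state

    def issue_token(state):
        token = uuid.uuid4().hex
        _pending[token] = state          # goes into the confirmation URL
        return token                     # or the reply address

    def redeem_token(token):
        return _pending.pop(token, None) # one-time: gone after first use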

It seems to me that giving these "jobs" a unified format might also
make communication between the webserver and the mail server(s)
easier to organize (e.g., when the user sends email to
list-subscribe, then confirms by clicking on the web URL in the
response).
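
With one shape for both kinds of job, the Handlers wouldn't need to
care which door the package came in through.  Purely illustrative:

    class Job:
        def __init__(self, channel, metadata, payload):
            self.channel = channel       # 'smtp' or 'http'
            self.metadata = metadata     # mail headers, or request headers
            self.payload = payload       # message body, or request body

    # the same confirmation Handler could then field either of:
    #   Job('smtp', {'to': 'list-confirm+TOKEN@example.com'}, body)
    #   Job('http', {'path': '/confirm/TOKEN'}, '')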

It's possible that having a thousand Handlers all looking at
everything would be horribly inefficient; in that case you could
divide things up into subpipelines (in the Linux kernel firewall
they're called chains), with master Handlers in the toplevel pipeline
dispatching to lower-level subpipelines.
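
A chain is then just a Handler whose process step runs another
pipeline.  Building on the sketches above (still hypothetical names):

    mail_handlers = []                   # whatever Handlers you configure
    web_handlers = []

    class Chain(Handler):
        def __init__(self, predicate, handlers):
            self.predicate = predicate   # e.g. "is this an HTTP job?"
            self.handlers = handlers
        def matches(self, job):
            return self.predicate(job)
        def process(self, job):
            return run_pipeline(self.handlers, job)  # hand off to subpipeline

    toplevel = [Chain(lambda job: job.channel == 'smtp', mail_handlers),
                Chain(lambda job: job.channel == 'http', web_handlers)]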

I thought that was how Mailman 3 was organized.  I know that Mailman
2's mail pipeline has inspired a lot of my thoughts about how
Roundup's internal implementation could be improved.  (Roundup
"auditors" and "reactors" look a lot like Handlers.)  I guess I'd
better go look more closely at Mailman 3.


