[Mailman-Developers] Re: subscription confirmations

Gerald Oskoboiny gerald@impressive.net
Tue, 17 Jul 2001 02:35:24 -0400


On Mon, Jul 16, 2001 at 09:04:09PM -0700, Chuq Von Rospach wrote:
> I've removed mailman-users from the distro. We shouldn't be using both lists
> at the same time in discussions, and this is a developers/design issue.

ok... I was wondering about that, thanks.

> On 7/16/01 5:38 PM, "Gerald Oskoboiny" <gerald@impressive.net> wrote:
> > Sure... I agree that it's possible for a standard to be irrelevant
> > or just not meet the needs of the users for which it was written.
> > 
> > But I don't think that is the case here:
> 
> But -- you haven't dealt with the safety issue in any substantive way. If
> you can't build a standard that protects the user from abuse, I'd argue that
> the standard provides a false sense of security that is more destructive
> than not standardizing at all; because, as you noted, it'll tend to
> encourage developers to write to the standard, and not all of those writers
> will really understand the subtler issues involved. So if you can't make GET
> safe to automatically browse, even with the blackhats, I'd argue it's better
> to not create standards that'd encourage that -- or write the standard in
> such a way that these issues and limitations are very clear IN the standard.

I think the HTTP spec is fairly clear about most of this:

9.1.1 Safe Methods

    Implementors should be aware that the software represents the user in
    their interactions over the Internet, and should be careful to allow
    the user to be aware of any actions they might take which may have an
    unexpected significance to themselves or others.

    In particular, the convention has been established that the GET and
    HEAD methods SHOULD NOT have the significance of taking an action
    other than retrieval. These methods ought to be considered "safe".
    This allows user agents to represent other methods, such as POST, PUT
    and DELETE, in a special way, so that the user is made aware of the
    fact that a possibly unsafe action is being requested.

    Naturally, it is not possible to ensure that the server does not
    generate side-effects as a result of performing a GET request; in
    fact, some dynamic resources consider that a feature. The important
    distinction here is that the user did not request the side-effects,
    so therefore cannot be held accountable for them.

    -- http://www.w3.org/Protocols/rfc2616/rfc2616-sec9.html#sec9.1.1

but once all that's been said, it's really up to the implementations
to do the right thing.
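As a minimal sketch of what "doing the right thing" looks like (hypothetical handler and names, not Mailman's actual code): GET on a confirmation URL should only retrieve a page containing a form, and the subscription change should happen only on the POST that the user explicitly submits.

```python
# Sketch of an RFC 2616 sec. 9.1.1-respecting confirmation handler.
# Names and the toy "database" are invented for illustration.

subscribers = {"user@example.org"}  # toy subscription database

def handle_request(method, address):
    """Dispatch an unsubscribe-confirmation request by HTTP method."""
    if method == "GET":
        # Safe method: return a page containing a form; change nothing
        # server-side. A prefetching proxy or cache that follows this
        # URL does no harm.
        return (200, '<form method="POST">'
                     '<button>Confirm unsubscribe</button></form>')
    if method == "POST":
        # Unsafe method: the user explicitly submitted the form, so
        # performing the side effect here is what they asked for.
        subscribers.discard(address)
        return (200, "You have been unsubscribed.")
    return (405, "Method Not Allowed")
```

With this split, the emailed URL is safe to fetch any number of times; only a deliberate form submission removes the address.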

> > Fetching a URL into my local http cache doesn't cause a virus to
> > be executed or anything else bad to happen, and I wouldn't use
> > software where that kind of thing would be possible anyway.
> 
> No, but it can cause actions you'll regret. You started this by bringing up
> one as a problem. Now, however, you're saying "well, that's no big deal".
> 
> Which is it? No big deal? Or a problem? And if we can trigger actions you
> might or might not like, you can bet it'll honk off others. And if we can
> trigger actions, so can others, and those won't necessarily be innocent
> ones.

If it happens once in a while with an obscure site here and there,
that's much less of a problem than if some popular software like
Mailman is doing the wrong thing and sending out tens or hundreds
of thousands of these messages every day, in part because every
time one of these is used, it helps legitimize the practice of
'click this URL to unsub', which is the wrong message to be
sending to people.

I don't expect all the incorrect implementations in the world to
suddenly get fixed overnight, but I'm trying to get the ones I
know about fixed.

> > If you think the docs on this subject at W3C are lacking, by all
> > means let me know.
> 
> No, what I really was hoping for were examples of what W3 (or you) consider
> 'proper', to see how W3 thinks this ought to be done.

ok, I'll try to write something up on this sometime...

> > I think there is a plan to write a similar
> > one targeted towards site administrators, pointing out common
> > mistakes and raising awareness about little-known but important
> > RFC/spec details like this.
> 
> One of the best things I think W3 could do in these cases is not only to
> label things "good" or "bad", but to generate cookbooks of techniques, with
> explanations of why they're good, or why they ought to be avoided.
> Especially in the subtleties of the standards that might not be intuitively
> obvious, or which might be involved in emerging technologies (like wireless)
> that the typical designer hasn't had time to worry about yet (or doesn't
> know to worry about).

I agree this kind of thing needs to be done, but I think it can
usually be done quite well by third parties, in online courses
and articles, printed books, etc. But like I said above, I think
W3C will start doing a bit more of this than we have in the past;
it's just a matter of finding the time...

> > By "Mailman's fault" I meant that if mailman did this, it would
> > be the part of the equation causing problems by not abiding by
> > the HTTP spec. But this prefetching thing is just an example; the
> > main point is that the protocol has this stuff built in for a
> > reason, and there may be hundreds of other applications (current
> > and future) that need it to be there.
> 
> The standards also brought us, um, BLINK.

er... no, Netscape's programmers implemented BLINK one night
after they had been drinking :)

It's not in any HTML standard ever published by W3C or the IETF.

> >> And the more I think about it, the more it's an interesting point -- but on
> >> more than one level. Has W3C considered the implications of defining a
> >> standard that depends on voluntary acceptance here?
> > 
> > Which Internet standards *don't* depend on voluntary acceptance?
> 
> But there's a difference here -- we're talking about possible security
> issues, not just whether someone adopts a tag.

A bad implementation of a spec can always cause security problems.
This distinction between GET and POST in the HTTP protocol is
specifically there to *prevent* problems: if I make a stock trade
using an online brokerage and then hit my browser's "back" and
"forward" buttons, I don't want the same transaction executed
again! That's why brokerages, banks, and other quality sites use
POST for such transactions, and browsers written to the spec will
prompt the user for confirmation before rePOSTing a form.
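The browser-side re-POST prompt has a natural server-side complement, which careful sites also implement: a one-time token embedded in the form, so that even an accidental duplicate POST cannot execute the trade twice. A sketch (invented names, not any real brokerage's API):

```python
# One-time form tokens: the safe GET issues a token, the unsafe POST
# consumes it, so a duplicate submission is rejected. All names here
# are hypothetical, for illustration only.

import secrets

issued_tokens = set()
executed_trades = []

def render_order_form():
    """GET handler (safe): embed a fresh one-time token in the form."""
    token = secrets.token_hex(8)
    issued_tokens.add(token)
    return token  # would go in a hidden form field

def submit_trade(token, order):
    """POST handler (unsafe): execute only if the token is unused."""
    if token not in issued_tokens:
        return "duplicate or invalid submission; trade not repeated"
    issued_tokens.discard(token)  # consume the token
    executed_trades.append(order)
    return "trade executed"
```

The GET that renders the form stays side-effect-free in the sense that matters: fetching it repeatedly never moves money, while the POST is both explicit and guarded against replay.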

-- 
Gerald Oskoboiny <gerald@impressive.net>
http://impressive.net/people/gerald/