using the PSF license for one's code

Bengt Richter bokr at oz.net
Sat Nov 9 04:30:49 EST 2002


On Sat, 09 Nov 2002 04:49:55 GMT, Terry Hancock <hancock at anansispaceworks.com> wrote:

>Donnal Walter wrote:
>> program. Of course it would be difficult to market a spreadsheet, and
>> I have had no desire to consider it. I also have no desire to release
>> it as open source, as long as it is in the form of a spreadsheet. But
>> my point is simply that this is the kind of application that cries out
>> for *some kind* of open source distribution. On the other hand, most
>> of the applications we have in mind have more to do with organizing
>> information than with performing calculations.
>
>Yep: "high use-value, low-sale-value" fits Eric Raymond's analysis anyway.
>
>> Incidentally, there is (now) a commercial product available to do this
>> task, and it carries a disclaimer that the user is responsible for
>> verifying the accuracy of output. Of course, the source code is not
>> available for us to look at.
>
>I like the idea that *full-disclosure* should be a defense against
>liability claims.  Because it seems fair to me that if I can find out
>fully how something works, it's reasonable for me to take
>responsibility for it.
I think that is an interesting point, though usually you depend on trust
a fair amount. There just isn't enough time to do source walkthroughs of
all the software you are going to run. I wonder if you could design open
software to generate module coverage/use statistics that could be contributed
with a click or two to a QA server, which would generate automatic reliability
ratings. That way you could distinguish an absence of problem reports from an
absence of use.
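To make the idea concrete, here is a minimal sketch of client-side usage
tracking: count how often each module's functions are actually exercised, so
that usage data could accompany (or explain the absence of) problem reports.
The "QA server" upload itself is left as a stub, and all names here are
hypothetical, not part of any existing tool.

```python
# Minimal sketch: per-module usage counters that a client could submit
# to a hypothetical QA server "with a click or two".
from collections import Counter
from functools import wraps

usage_stats = Counter()  # module name -> number of calls observed

def track_usage(func):
    """Decorator that records each call against the function's module."""
    @wraps(func)
    def wrapper(*args, **kwargs):
        usage_stats[func.__module__] += 1
        return func(*args, **kwargs)
    return wrapper

@track_usage
def compute_dose(weight_kg, mg_per_kg):
    """An example tracked function (a trivial dosage calculation)."""
    return weight_kg * mg_per_kg

def usage_report():
    """The payload a client might submit to the QA server."""
    return dict(usage_stats)

compute_dose(1.6, 5.0)
compute_dose(3.2, 5.0)
print(usage_report())  # totals per module
```

With enough such reports aggregated server-side, "no bug reports against a
heavily exercised module" starts to mean something.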

>
>I think the thing to realize about open-source versus closed-source
>for reliability is that using closed source may reduce your responsibility
>-- i.e. you can pass the blame onto the source of the software more
>easily if it fails and kills somebody.  But it doesn't do anything to
>reduce the actual chance of it killing somebody. In fact, the "fewer
>eyeballs" effect -- an indirect, but definite correlation to being closed-
>source -- probably *increases* the actual risk.
>
>Or to put it another way -- no amount of life insurance will save your
>life. Wear a seatbelt instead.
>
>For people who *really* care about safety (as opposed to liability), 
>open-source is usually going to be a good choice.  That's because
>there's just no practical way to get so many people to check closed-
>source code.  I'd like to think that when the stakes are life and limb,
>that people *are* more concerned about safety than liability.
I think so too; however, attention has to be attracted and eyes motivated
by some presentation of the information. It's not automatic.

>
>I think this is the reasoning behind the military and data security
>folks who argue for using open-source code. Certainly it was not
>an obvious result. I think it started because a lot of users had a
>feeling that open-source code was more reliable without actually
>knowing why. Then I remember there were some articles talking
>about proving it by running the Gnu utilities and the control
>group of commercial Unix utilities against random data and counting
>the crashes. It was only after that that I started seeing arguments
>for *why* this should be so. (IIRC -- other people may remember
>this differently, I certainly don't have references). 
>
>It was certainly a revelation for me -- I guess I'm a "free-software 
>convert".
>
>> David Brown:
>>> Have you considered whether Python is really a suitable
>>> language for this job?  It is probably ideal for your first
>>> project, but not necessarily for the second one.  Python
>>> makes it easy to write great software, but it also makes it
>>> easy to make mistakes which can only be found at run-time.
>
>Really, though, the software should be extensively tested at the
>run-time level, anyway.   Also, the real question is "what happens
>if it does break?".  In this case, it sounds like a pharmacist checks
>it anyway. If it's too far out of bounds (i.e. deadly), they're going
>to catch it right-away (humans are good at that). If it's subtly
>wrong, it probably won't kill anybody -- medicine isn't *that*
>precise.  I remember being somewhat shocked at how imprecisely
>pharmaceutical units are defined or measured (how big is a "drop"?).
>
>And as the OP points out, the accuracy ratio is better compared
>to hand-calculation anyway, so while some hypothetical ADA-based
>solution might be even more reliable, it seems like the problem
>is getting solved.

There is no such thing as a "safe" language that can turn a bad or
incomplete specification into a good program. An incomplete specification
does not adequately reflect the role of the software in the total system, and
it unnecessarily leaves it to humans to notice that, e.g., a 16-kg preemie
weight should probably be 1.6 kg. An Ada program that just calculates a
proportional dosage can be "correct," but garbage-in, garbage-out is not the
right attitude for a system affecting lives. The spec should say something
about human keying errors and the like. Re that, see

    http://catless.ncl.ac.uk/Risks/21.49.html#subj2

and for a read that will turn your stomach re software and medical devices, follow
the link to

    http://courses.cs.vt.edu/~cs3604/lib/Therac_25/Therac_1.html

(for a mix of more or less consequential stuff, browse http://groups.google.com/groups?q=comp.risks)
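The kind of keying-error check a spec should call for might look like the
sketch below: a neonatal weight of 16 kg is far outside any plausible range,
and very likely a misplaced decimal for 1.6 kg. The bounds here are purely
illustrative (not clinical values), and the function is hypothetical.

```python
# Illustrative plausibility check for a keyed-in preemie weight.
# The range below is made up for the example, not medical data.
PREEMIE_WEIGHT_RANGE_KG = (0.4, 6.0)

def check_weight(weight_kg, plausible=PREEMIE_WEIGHT_RANGE_KG):
    """Return (ok, message). Flags implausible values and suggests a
    decimal-shift correction when one would land in range."""
    lo, hi = plausible
    if lo <= weight_kg <= hi:
        return True, "ok"
    shifted = weight_kg / 10.0
    if lo <= shifted <= hi:
        return False, ("%.1f kg is implausible; did you mean %.1f kg?"
                       % (weight_kg, shifted))
    return False, "%.1f kg is outside the plausible range" % weight_kg

print(check_weight(1.6))   # passes
print(check_weight(16.0))  # flagged, with a suggested correction
```

The point is not the particular heuristic but that the *spec* demands some
such check, instead of trusting whatever number was typed.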

Would open source have done better? Well, I think we should say "the open source
community" rather than just "open source," because the key is a bunch of _people_
who care to engage their minds, and a relatively efficient mechanism for getting
their attention and collaborating.

I am sure many suggestions for improvements would have appeared in short order
if the Therac software had been posted to a public medical-software QA newsgroup,
where people would be extra motivated to help make it right. An OT post here
would probably not do too badly either ;-)

I have no doubt that the open source phenomenon could be brought to bear on the
concept and specification phase of systems design much more generally than
just the software aspects. I.e., if the software is used by a doctor, but the
result always goes through the hands of a pharmacist and a nurse, what would make
the system as a whole more reliable? It takes a combination of people who intimately
understand various aspects and interactions in the system, and what is critical and
what isn't, and also others who can imagine technical _system_ possibilities as well
as implementations of particular micro-jobs. E.g., what if everyone involved had wifi
PDAs, and there was a system monitoring events in time and space, sanity-checking
values and delays, and lighting big red lights and making automated cell calls when
something is out of norms? Well, in an open forum there will be people who see both
flaws and opportunities in blurts like that, and so the entire process becomes a kind
of genetic algorithm operating to evolve and refine ideas.

Regards,
Bengt Richter



More information about the Python-list mailing list