[EuroPython] Europython 2004 feedback results (warning - long)

Beatrice Fontaine bea at webwitches.com
Tue Oct 19 14:43:46 CEST 2004


On Mon, 2004-10-18 at 20:11, Magnus Lycka wrote:

> Considering the cost of a decent wedding, I can imagine why you'd want
> to take a free ride on EPC with your wedding party. You're probably 
> just saying this to get a lot of wedding presents! ;)

Erm, who was talking about a free ride? Don't spread indecent
rumours...!

> Seriously, I agree with Harald that we should use some caution in our
> evaluation of the questionnaire.

That is what one should always do when talking about statistics. But
isn't it always nicer to exercise caution over statistics that are _not_
the nails in your coffin? Caution or not, these are good results, and
that is a good thing.

> It seems clear that we can draw the conclusion that most of the people 
> who filled in the questionnaire were happy about the conference, and I 
> don't think a lot of people left after half of the conference, or skipped 
> the questionnaire because they were so pissed off by EPC, so we might even
> make the guess that most people who visited the conference were happy
> about it. Still, with 127 responses, most people who visited the 
> conference didn't hand in any response, and while a *random* sample of
> n=127 is decent for a thingie like this, the act of writing and handing
> in a questionnaire isn't random...

That's why "indicators" are called just that ;)

> Another thing we *don't* know, is to what extent there were people who 
> didn't come to the conference for one reason or another, what they
> think, what could have made them come, and whether the conference would
> have been better with them present... Maybe there were people who didn't
> feel convinced about the quality because the program appeared too late,
> or who didn't want to travel so far north, or who didn't get
> sufficiently impressed by the web site etc.

Well, that is really market analysis and doesn't have much to do with
establishing statistical indicators for why the conference as such was a
success for those who _did_ participate. I grant you that market
analysis (pull) is a worthwhile endeavour if we want to increase the
impact of the conference, but it is also by far more work than offering
a conference to the best of one's abilities (push). If we go further
down the road of marketing the conference outside of the community in
order to draw in new user groups, as Harald and some others suggested,
then that is an exercise worth going through.

> I don't know how to get around this problem in a convenient manner, but
> we should remember that there is a bias in the questionnaire, particularly
> regarding questions that might have affected people's decision to turn up
> or not, such as location, program and registration process etc.

By benchmarking against other conferences that operate at the same
level, draw five times more visitors who come back every year, and which
charge those visitors far higher entrance fees and still get them to
come.

> That shouldn't prevent us from being happy and proud over EPC 2004, I'm
> basically just damaged from being married to a scientist who performs
> statistical analyses all day long...

Then you know all about it already, anyhow :) And yes, I think we can
all be happy and proud because it was obvious to the naked eye there,
not only on the questionnaires, that people took home a good vibe, no
matter what projector flaked where, how many talks overlapped, and
whether the registration process drove you nuts or not. I think that for
those people who actively helped organise the conference, that is a very
motivating factor, isn't it? While we are sucking up all the criticism,
because it will help improve everything that didn't work out, we can
also lap up the good stuff along the way. Since the event was a success,
it is important to remember _that_ more than anything else, because
_that_ is where the marketing sits. We need an evaluation of all that
was good. The good stuff sells, and we want more people to come.

> Another interesting thing is to see what things people liked, and what they
> didn't like. Just listing the number of excellent minus poor gives the
> following list:
> 
> 65 overall impression
> 63 conference dinner
> 55 food
> 41 well organized program
> 34 accomodation
> 30 good talks
> 24 internet access
> 10 web site
> 
> Even the worst listed aspect has a few more "excellent" than "poor", but 
> it's a bit surprising that "overall impression" rated higher than any 
> specific factor. I'm no expert in questionnaire psychology, maybe someone 
> can figure this out. Did we ask the wrong questions? Maybe it's just 
> that we didn't have the same scale for all questions?

It means that not all aspects are of equal importance to all people and
that many people thought it excellent even though there were irritating
factors. For instance, you may think that the conference was brilliant
and not give a toss about what the food is like, because it is not a
priority to you at all. Also, there was some muttering about why there
wasn't any soap in the bathrooms, which means that some people gave less
than "excellent" to an accommodation that was superbly cheap, clean,
friendly, comfortable, within walking distance of the conference, and
with a supermarket. Beats me how that can be less than excellent... Yet
only 34 people said it was. It makes me wonder what people expect (and
at what price!). I had soap with me, because I figured that at that
price, there wouldn't be any. But that is _my_ personal opinion and is
of no relevance to someone who obviously had different expectations.
That doesn't necessarily mean the overall event was less than excellent
to them, anyhow.
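As a side note, the "excellent minus poor" net score Magnus lists is easy to reproduce. A minimal sketch (the per-category tallies below are hypothetical, since the post only gives the net figures, not the raw vote counts):

```python
# Hypothetical (excellent, poor) vote counts per aspect; only the
# net scores appear in the post, so these raw tallies are made up
# to illustrate the calculation.
tallies = {
    "overall impression": (70, 5),
    "conference dinner": (65, 2),
    "web site": (10, 0),
}

# Net score as described: number of "excellent" minus number of "poor".
net = {aspect: exc - poor for aspect, (exc, poor) in tallies.items()}

# Print the ranked list, highest net score first.
for aspect, score in sorted(net.items(), key=lambda kv: -kv[1]):
    print(f"{score:3d}  {aspect}")
```

The caveat Magnus raises applies here too: a net score collapses the distribution, so "70 excellent, 5 poor" and "10 excellent, 0 poor" are not directly comparable if the number of respondents per question differs.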

There were 10 people who thought the website was excellent and I (as one
of its cooks) think that that is certainly not true, but it just goes to
show that people perceive things differently from the outside. Even
though most questionnaires said that it was "good" and only a few said it
was truly terrible, it is entirely our choice whether to set that as a
benchmark or not. Does this evaluation mean it is good enough (=should
stay this way), or do we want/need more than 10 "excellent" — and if so,
at the expense of how much work, and _who_ wants to take care of making
it truly excellent?

People have been going on about the whole registration thing for months
now, yet now we read that many people thought it was easy, whereas more
than 150 people didn't even hand in the questionnaire. Not handing in a
questionnaire, from experience, means that it was all "OK" as far as
they are concerned. People usually fill them in because they 1) have
just had the most wonderful time of their life, 2) think it (or at
least part of it) was a total and utter disaster that _must_ be commented
on, or 3) know the organisers and want to be nice to them.
Everybody else just floats along, and that is perfectly fine. Most
people happily floating along is the way it should be, especially if the
event becomes larger.

Anyhow, I think that the free comments at the bottom were actually the
most enriching part, because that is where people put down what really
made _them_ happy/unhappy as individuals. It shows what really stuck out
for them. That is the most helpful part for me, personally, because it
can't be washed into numbers. Let's use it all.

bea

-- 
bea at webwitches.com
"My agenda is so hidden that I can't find it myself". Me.
