[Edu-sig] An engine to navigate between lessons

Andrew Harrington aharrin at luc.edu
Mon May 15 04:27:29 CEST 2006


Navigating Between Lessons

I have been reading the discussion and trying the prototypes for Python 
content delivery with great excitement.  Seeing how well the content 
delivery was coming along, my imagination took off, and I wrote down a 
whole bunch of brainstorming ideas below on an important, independent 
aspect of an interactive learning environment.  Just tonight the "Only 
Python" thread touched on the area I have been thinking about, with the 
idea of choosing what comes next after evaluating code in the Crunchy Frog.

I am interested in the learning model and engine for choosing the next 
lesson to present.  I imagine a nonlinear web of potential next steps 
and a flexible engine to determine the best next step.  I would like to 
carry this on with like-minded folks, in parallel with the people 
working on the delivery-environment choices and the lesson/example/test 
content.  I see the lesson-choosing engine as essentially independent 
of the type of lessons being presented, so this work should fit into 
the larger Shuttleworth vision for computer-based learning in general.

I have written a rather long document here, with an overview, some more 
nitty-gritty aspects, and ideas for moving forward.  I refer to a web 
page at the end with more specific implementation ideas.

One model is simply to have a lot of tutorials and let learners choose 
which one to do and which one to do after that.  When we (hopefully) 
get an enormous number of independent contributions dynamically added 
from our user/learner community, choosing the next tutorial entirely by 
hand will be onerous.  At a minimum we would like an engine to narrow 
the obvious choices of where to go next, while still letting the 
learner choose.

While some learners may be self-aware enough to know when they know one 
thing well enough and are ready to look for something else, others 
would rather avoid worrying about that, concentrate on the material, 
and let an intelligent engine determine when and where they should go 
next to advance toward their goal most efficiently.

The goals tie into a separate issue of motivation and reward addressed 
in recent threads: figuring out what the goal *is* and how to make it 
exciting.  Consider a learner starting from a place that immediately 
describes a bunch of different application areas that might be 
interesting.  Let learners choose goals by saying "I want to be able to 
do something like that myself!" -- whether it be as simple as Kirby's 
neat example of a Mad Lib generator or drawing shapes with a Turtle, or 
as complicated as a dynamic web site connected to a database or 
elaborate gaming.  Then let the skill set needed for the selected type 
of application determine the inventory of skills that the engine guides 
the learner through.

Alternatively, goals could be specified by some formal body like the 
Virginia Public Schools.  The idea of letting learners choose an 
interesting application to head for could still apply.  Applications 
would be chosen that include the required competencies, with some extra 
additions depending on the target application area the learner picks.  
The complete target skill set would then guide the path the engine 
provides through the lessons.  Letting the learner *choose* to take on 
the elaborated goal should help keep the learner on task and motivated.

To keep the rewards from being too remote, a sequence of simpler sample 
application targets could be chosen, incorporating more and more of the 
final desired skill set.  Again, these chosen goals might add a bit to 
what the learner needs to absorb and affect the path through the 
lessons, but the extra motivation should be worth it.

Anyway, to allow lessons to fit into a larger, at least partly 
automated environment, we would need an agreed-upon interface for 
messages passed back and forth between lessons and the lesson-choosing 
engine or other components.
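
To make that concrete, here is a minimal sketch in Python of one such 
message.  Everything here (the class name, the field names) is an 
invented placeholder to illustrate the idea, not a proposed standard:

    class LessonReport:
        """Hypothetical message a lesson sends the engine at session end."""
        def __init__(self, lesson_id, skills_attained, skills_missed,
                     too_hard=False):
            self.lesson_id = lesson_id
            self.skills_attained = set(skills_attained)  # outcomes demonstrated
            self.skills_missed = set(skills_missed)      # outcomes not yet shown
            self.too_hard = too_hard                     # learner was overwhelmed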

As I started thinking about this, I realized that there could be other 
useful modules integrating multiple lessons, for instance summarizing 
and reviewing exposition.  I thought of Zelle's text, where the main 
chapters have a lot of exposition with the actual syntax rules 
interspersed.  The book also has an appendix with just the new syntax 
from each chapter, in chapter order.  If the learner forgets some 
syntax and wants to check back, it would be nice to have a window pop 
up with all the syntax introduced so far, in the order the learner 
encountered it, dynamically generated from that particular learner's 
lesson history.  Or, for a more general review, see summary statements 
from the exposition in their lesson history.  Or maybe see a summary of 
things introduced so far, grouped by some topic tags, ...

There would need to be a further consistent interface to the lessons to 
do this.  (I would assume a lesson could omit implementing one of these 
extra features, return None in response to such a request, and have the 
system behave gracefully.)
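
For instance, a sketch of that graceful degradation, with an invented 
method name (syntax_summary) standing in for whatever request we agree on:

    class BaseLesson:
        def syntax_summary(self):
            """New syntax this lesson introduces, or None if not provided."""
            return None   # default: the lesson opts out of this feature

    def syntax_so_far(lesson_history):
        """Build the Zelle-style appendix from one learner's own history."""
        parts = []
        for lesson in lesson_history:
            summary = lesson.syntax_summary()
            if summary is not None:    # skip lessons that opted out
                parts.append(summary)
        return "\n".join(parts)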

I am sure I have not thought of everything in the integration of 
lessons that could be useful and would need an agreed interface to 
implement.

The most basic data structures and interfaces would revolve around 
finely divided skill sets: those that are prerequisites for a lesson 
and those that should be learned by its end.  I would hope we could get 
all lessons to interact on that level.  Low-level skills are the 
easiest to categorize, set dependencies for, and automatically test.  
It will be more of a challenge for higher-level learning outcomes, but 
the complication is likely to be in the lessons, while the basic 
lesson-choosing engine would be similar.
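
As a sketch of that minimal level (the class and field names are my own 
illustrative assumptions):

    class LessonInfo:
        """Hypothetical minimal metadata every lesson would supply."""
        def __init__(self, lesson_id, prerequisites, outcomes):
            self.lesson_id = lesson_id
            self.prerequisites = frozenset(prerequisites)  # skills assumed
            self.outcomes = frozenset(outcomes)            # skills taught

    # an invented example entry:
    loops101 = LessonInfo("loops101",
                          prerequisites={"assignment", "comparison"},
                          outcomes={"while-loop", "loop-counter"})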

The idea of skill sets is of course very general.  I would plan to set 
it up so the model interfaces with the certification system Jeff is 
working on for the State of Virginia.

You could have a very basic engine choosing lessons with just this 
information. You would have to depend on the learners to determine when 
they understood things well enough to go on to the next lesson. 
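
A sketch of such a bare-bones engine, reusing the hypothetical 
LessonInfo above: offer every untried lesson whose prerequisites are 
covered, and let the learner pick from that list.

    def eligible_lessons(catalog, skills_mastered, lessons_done):
        """Untried lessons whose prerequisites the learner has mastered."""
        return [lesson for lesson in catalog
                if lesson.lesson_id not in lessons_done
                and lesson.prerequisites <= skills_mastered]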

The next elaboration would be to send testing results in some form from 
the end of a lesson session to the central engine, indicating whether 
the learner attained the desired competencies, or which subset of them, 
or that more drill is needed, or that this was way too hard for the 
learner.  Possibilities for the engine, sketched in code below, would 
be to:
   go on to the next lesson in the author's prepared sequence
   if one learning outcome was incompletely attained, go to a drill 
lesson on that learning outcome
   jump back to an alternate lesson that goes slower, with more 
examples and smaller steps
   if the current module only provides exposition/exposure, go to a 
lesson that includes testing, ...
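
A rough sketch of that dispatch, assuming an invented report object 
(with the skills_missed and too_hard fields from the LessonReport 
sketch earlier) and invented catalog lookup methods -- none of this is 
a settled API:

    def choose_next(report, lesson, catalog):
        missed = report.skills_missed
        if report.too_hard:
            # way too hard: jump back to a slower alternate lesson
            return catalog.slower_alternative(lesson)
        if len(missed) == 1:
            # one outcome incompletely attained: drill just that one
            return catalog.drill_for(next(iter(missed)))
        if not lesson.includes_testing:
            # exposition/exposure only: follow with a lesson that tests
            return catalog.testing_followup(lesson)
        # default: the author's prepared sequence
        return catalog.next_in_sequence(lesson)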

Anyhow, agreeing on at least optional interfaces from lessons to a 
larger system would be useful.  I am interested in joining with others 
to think about that and report back.
- - - - - - - - - - - - - -
Nitty Gritty

It will of course be important to label the skills that are 
prerequisites and the skills demonstrated as an outcome of an exercise.  
It will be harder to deal with fuzzier ideas, like how many creative 
steps a specific challenge requires beyond the learner's explicit 
experience, versus how many such steps a learner can confidently handle 
while still feeling challenged.  Hopefully, as we make the engine more 
sophisticated, we can measure more individual pedagogical parameters 
for learners and incorporate them in the engine's decisions.

Ideally we would not only dynamically measure the needs of a learner as 
s/he progresses, but also measure and modify the overall response of 
the system.  We would first keep track of session histories locally 
and, with the user's permission, upload them to a central site to hone 
the engine's choice of the best place to go next.  For instance: we 
marked module N as closely following module M, but almost nobody got N 
on the first try after M -- so now mark it as being further off, and 
when the engine looks for the best module to follow M for a learner 
appropriately challenged by a certain-sized jump, it finds module M1 
instead ... or FLAG: we really need to write an intermediate module 
between M and N!  We could measure not only the distance between 
lessons but also the effectiveness of a lesson.  We could either flag a 
lesson as not working or dynamically offer an alternate lesson that 
leads students to the same goals faster.
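
A sketch of one such honing step; the accessor names, the 30% success 
threshold, and the distance cutoff are all arbitrary illustrative choices:

    def hone_distance(stats, distances, m, n, flag_gap):
        """Push lesson n further from m if first-try success after m is rare."""
        attempts = stats.first_try_attempts(m, n)    # invented accessors
        successes = stats.first_try_successes(m, n)
        if attempts and successes / attempts < 0.3:
            distances[(m, n)] += 1        # n is a bigger jump than we marked
            if distances[(m, n)] > 3:
                flag_gap(m, n)  # we really need an intermediate module!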

Learning styles are very important and diverse, as I have found with my 
children and my learners.  These should be the basis of user options 
guiding the choice of modules.  For example: introduce examples first 
vs. rules first; supply little bites vs. a major overview before 
looking for feedback; written vs. verbal or video presentation; module 
requires cognitive (grade) level N ...  A more sophisticated later 
approach would be to explicitly test the learner's learning style and 
then set some of these parameters automatically.  There are also many 
other user-chosen options.  For example: I like modules written by X, 
modules approved by Y, modules approved by a learned professor, modules 
using analogies to Java or C or music or football....
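
These options could start out as simple descriptor tags on each lesson, 
ranked against the learner's stated preferences -- a sketch, assuming 
each lesson carries a set of tags (the tag names are invented):

    def rank_by_preferences(candidates, preferred_tags):
        """Order eligible lessons by overlap with the learner's preferred
        tags, e.g. {'examples-first', 'little-bites', 'author:X'}."""
        return sorted(candidates,
                      key=lambda lesson: len(lesson.tags & preferred_tags),
                      reverse=True)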

Have an easy interface to add your own lesson and label it in the 
general taxonomy so it works with the choice engine.  It would be neat 
to say to learners: if you struggled through this and finally got it, 
how would you redo it to make it easier for yourself?  OK, do it for 
everyone who comes after you!  Upload it and label how your 
presentation differs from the previous one (e.g., smaller steps -- 
comes before the last one, or more examples provided, or just 'written 
by X').  A learner may find s/he relates particularly well to what X 
writes.

On categories: the learned professors can specify a bunch of categories 
and tags, but the del.icio.us model shows the power of organic growth: 
what proves to be a useful tag in practice will stick.  If we do not 
allow this initially, we should at least design a system where this 
would be a modular generalization to add.  If we are looking for user 
contributions and large organic growth, I think flexible 
preference/descriptor tags could be very useful.

Allow a record of learning both via secure external certification and 
locally, without any special security to prevent funny business.

Along the way, the learner should be able to see a display of which 
skills s/he has mastered and which remain for the current goal (based 
either on informal feedback and testing or on official certification).
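
That display falls out as plain set arithmetic on the skill sets 
already described -- a sketch:

    def progress_display(goal_skills, skills_mastered):
        """Show which of the goal's skills are done and which remain."""
        done = sorted(goal_skills & skills_mastered)
        remaining = sorted(goal_skills - skills_mastered)
        print("Mastered: ", ", ".join(done) or "(none yet)")
        print("Remaining:", ", ".join(remaining) or "(goal reached!)")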

Way out in front: for video lessons, which fit some people's learning 
style, there are very effective commercial systems that automatically 
record, in real time, a movie of what appears on the computer screen 
plus synchronized voice, and later compact it.  Modern commercial 
systems are not large resource hogs.  If we are to allow anyone to add 
video lessons, a free alternative would be good to develop!

--------------------

Moving Forward

I would like to explore a more general engine, since people start in 
different places, head toward different personal goals, and have 
different speeds and styles of learning and different computer 
capacities.  We have also discussed an environment that makes user 
contributions easy.  We want to integrate these contributions 
dynamically, and they may not fit into preplanned niches.  All this 
suggests a nonlinear web of potential next steps and a flexible engine, 
based on inputs like those listed above, to determine the best next step.

If people agree that a "choose the next module" engine like this is 
useful, I would like to develop and reach agreement on the external 
interface, tied into data structures for lesson dependencies, user 
preferences, and outcomes from modules.  Hopefully I will find others 
to develop it with me, moving to a more specialized list for 
implementation details and reporting progress back to the main list.

Some concepts for this engine might fall under well-developed 
interactive learning theory that I have not seen.  At least in the 
early brainstorming stages I am more of an "imaginer from a clean 
slate" than a "searcher for what is out there to avoid reinventing the 
wheel".  References to existing "wheels" you have seen are appreciated.  
I think there should still be a lot of room for new ideas around open 
source, a dynamic response to a networked community of learners and 
contributors, and flexible relationships that do not depend on a 
static, predetermined, limited vision.

I have thought about some more detailed design ideas for classes that 
are likely to be involved.  See 
http://www.edupython.org/tutorialEngine/brainstormingEngineObjects.html

The first concrete project I would be interested in interfacing with is 
the Crunchy Frog.

-- 
  Andrew N. Harrington
  Computer Science Department      Undergraduate Program Director
  Loyola University Chicago        http://www.cs.luc.edu/~anh
  512B Lewis Towers (office)       Office Phone: 312-915-7982
  Snail mail to Lewis Towers 416   Dept. Fax:    312-915-7998
  820 North Michigan Avenue        aharrin at luc.edu
  Chicago, Illinois 60611          


