Python compilers?

SeeBelow at SeeBelow.Nut
Tue May 18 22:14:10 EDT 2004


Peter Hansen wrote:
<snip>
> The bottleneck is almost certainly in evaluating the fitness
> function, not in performing the mutations and cross-overs.
> What does your fitness function do with all those floats?
> Perhaps it can be handled much faster with one of the numeric
> extensions for Python...  or with Pyrex.

You are right, the worst bottleneck is the fitness function.  Every
population member is an ANN (artificial neural network), and the ANNout()
function must be called for each fitness evaluation.  As you can see
below, there is a lot of looping.  This C code runs very quickly:
-----------------------------------------------------------------------
/* neursubs.c - for computing output of artificial neural network - for EvSail-2.2
   M. Timin, August, 2003

   piecewise parabolic approximator replaced conventional sigmoid, October, 2003
*/


#include <math.h>
#include <stdlib.h>

#define NEUR_MAX 80                    /* maximum number of neurons in a layer */
#define LOOP(i,N) for(i=0; i<N; i++)   /* be careful using this! */

/* This 4-piece curve is a good sigmoid approximator. */
float sigAprox(register float x)  {
    register float z;

    if(x <= -4.0)
        return 0.0;
    else if(x <= 0.0) {
        z = x + 4.0;
        return z*z/32;
    }
    else if(x < 4.0)  {
        z = x - 4.0;
        return 1.0 - z*z/32;
    }
    else
        return 1.0;
}

/* oneNeuron() uses a vector of inputs and a vector of weights, and the
   sigmoid activity function, to compute the output of one neuron.  It is
   assumed that an extra input of 1.0 is at the beginning of the input
   vector, and that there is a corresponding value at the beginning of the
   weight vector.  This is actually the bias.  So upon entering this
   function, wptr points to the bias and inptr points to 1.0.  The
   inputCount should include the bias, so it should be one more than the
   number of inputs.  */
float oneNeuron(float *inptr, float *wptr, int inputCount) {
    int i;
    float sum = 0.0;
    
    LOOP(i, inputCount)  {              /* summation loop */
        sum += *inptr++ * *wptr++;
    }
    return sigAprox(sum);       /* apply the sigmoid approximation to the sum */
}

/* This is the routine which calculates the outputs of the ANN.  Before
   calling it, the input values must be in the array pointed to by inptr.
   Values of the outputs will be placed in the array pointed to by
   outValues. */
void ANNout(int numIn,  /* number of inputs to the ANN */
        int numHid,     /* number of neurons that receive the inputs */
        int numOut,     /* number of final output neurons */
        float *inptr,   /* pointer to the array of input values */
        float *wptr,    /* pointer to array of weights & biases in a specific order */
        float *outValues)   /* pointer to where to write the output */
{
        float t1[NEUR_MAX];    /* NEUR_MAX defined above */
        float t2[NEUR_MAX];
        int i;
        
        /* prepare the input array: */
        t1[0] = 1.0;
        LOOP(i, numIn)
            t1[i+1] = *inptr++;
        /* compute and store intermediate outputs: */
        t2[0] = 1.0;
        LOOP(i, numHid) 
        {
            t2[i+1] = oneNeuron(t1, wptr, numIn+1);
            wptr += numIn+1;    
        }  
        /* do similar for final layer, writing to destination */
        LOOP(i, numOut) 
        {
            outValues[i] = oneNeuron(t2, wptr, numHid+1);
            wptr += numHid+1;
        } 
}


-----------------------------------------------------------------------
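
Since Peter mentioned the numeric extensions: below is a minimal, untested
sketch of how ANNout() might look vectorized with the Numeric extension.
The function and variable names (sig_aprox, ann_out, w_hid, w_out) are my
own inventions for illustration, and I am assuming the weights get loaded
into a numHid x (numIn+1) matrix and a numOut x (numHid+1) matrix with the
bias in column 0, which matches the flat layout the C code walks through.
The inner summation loops collapse into dot() calls that run in C inside
Numeric:
-----------------------------------------------------------------------
# ann_numeric.py - illustrative sketch only, not benchmarked
from Numeric import dot, clip, concatenate, less_equal, where

def sig_aprox(x):
    """Vectorized version of the 4-piece sigmoid approximator."""
    x = clip(x, -4.0, 4.0)                # flat tails outside [-4, 4]
    lo = (x + 4.0)**2 / 32.0              # left parabola, for x in [-4, 0]
    hi = 1.0 - (x - 4.0)**2 / 32.0        # right parabola, for x in (0, 4)
    return where(less_equal(x, 0.0), lo, hi)

def ann_out(inputs, w_hid, w_out):
    """inputs: length-numIn vector of input values
       w_hid:  numHid x (numIn+1) weights, bias in column 0
       w_out:  numOut x (numHid+1) weights, bias in column 0"""
    t1 = concatenate(([1.0], inputs))     # prepend the 1.0 bias input
    hidden = sig_aprox(dot(w_hid, t1))    # all hidden-layer sums at once
    t2 = concatenate(([1.0], hidden))     # bias again for the final layer
    return sig_aprox(dot(w_out, t2))      # vector of final outputs
-----------------------------------------------------------------------
I haven't timed this against the C version, but since each dot() replaces a
whole layer's worth of explicit loops, it should close much of the gap.
Wrapping neursubs.c itself with Pyrex, or keeping it as a plain C extension
module, is the other route Peter suggested.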
Humans may write to me at this address: zenguy at shaw dot ca


