[Python Edinburgh] Functional Programming in Python

Mark Smith mark.smith at practicalpoetry.co.uk
Fri Dec 24 13:00:24 CET 2010


Hey Tef,

I'll definitely check out those references.

I agree with everything you say here... my point wasn't so much 'how do you
write a complete application so that it is /optimally/ distributed' - that's
definitely something that needs to be designed-in. My point was more 'how
can we extend Python so that the runtime knows enough to be /able/ to
optimise (to some degree) for this environment.' I don't want my favourite
programming language to be obsoleted by this development in hardware
architecture. I feel that the most-used programming languages are blindly
walking towards this problem though, so we're not alone.

You coming along on Tuesday?

--Mark

On 24 December 2010 11:06, Thomas Figg <thomas.figg at gmail.com> wrote:

>
> On 24 Dec 2010, at 10:32, Mark Smith wrote:
>
> > To fully utilise these processors, then, we need a new approach to coding
> that can distribute the processing load
>
> You might want to look at Chapel - a language from Cray - to see a modern
> attempt at a language for distributed and parallel number crunching.
> Alternatively, the recent language 'Plaid' has some novel ways of using
> data-flow for parallelism.
>
> Anyway, there are two main approaches to multiple processors: parallel
> processing and concurrency. What works for one might not work for the other.
>
> For example, Google use map-reduce (a batch framework for out-of-core
> processing) and Pregel (a bulk synchronous parallel model for graph
> calculations) - both with elements of parallelism, but massively different
> architectures.
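The map/reduce split can be sketched in a few lines of plain Python. This is only an in-process illustration (the `mapper`/`reducer` names are mine, not Google's API) - the point of the real framework is that each phase is sharded across many machines:

```python
from functools import reduce
from itertools import chain

# Map phase: each document is processed independently (and so could
# run on any machine), emitting (word, 1) pairs.
def mapper(doc):
    return [(word, 1) for word in doc.split()]

# Reduce phase: pairs sharing a key are combined into a running total.
def reducer(counts, pair):
    word, n = pair
    counts[word] = counts.get(word, 0) + n
    return counts

docs = ["the cat sat", "the cat ran"]
pairs = chain.from_iterable(map(mapper, docs))
totals = reduce(reducer, pairs, {})
print(totals)  # {'the': 2, 'cat': 2, 'sat': 1, 'ran': 1}
```

Because `mapper` is pure, the map phase parallelises trivially; all the coordination cost lives in shuffling pairs to the reducers.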
>
> Parallelising an algorithm is hard, and the sufficiently smart compilers
> haven't arrived yet - at least not for imperative languages. Functional
> programs are easier to re-order and rewrite because they lack side
> effects and are referentially transparent. Languages with restrictions on
> aliasing (Fortran) and assignment (Single Assignment C) are simpler to
> optimise. Using annotations is bug-prone, and debugging parallel programs
> is no mean feat.
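To make the re-ordering point concrete, here is a toy sketch (`square` is just a stand-in for any side-effect-free computation): a pure function gives the runtime freedoms that an effectful one forbids.

```python
# A pure function: the output depends only on the input, no shared state.
def square(x):
    return x * x

data = [3, 1, 4, 1, 5]

# Because square has no side effects, these calls can be evaluated in
# any order - or handed to multiprocessing.Pool.map across cores -
# without changing the result.
forward = [square(x) for x in data]
backward = list(reversed([square(x) for x in reversed(data)]))
assert forward == backward == [9, 1, 16, 1, 25]

# Contrast an impure version, where evaluation order leaks out:
log = []
def noisy_square(x):
    log.append(x)  # side effect - now the order of calls is observable
    return x * x
```

With `noisy_square`, the contents of `log` depend on evaluation order, which is exactly what stops a compiler (or a parallel runtime) from re-ordering the calls.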
>
> Even when you do parallelise something, often the returns aren't that
> great - the overhead of synchronisation eats away at performance gains
> (cf. Amdahl's law).
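Amdahl's law makes those diminishing returns easy to quantify: if a fraction p of the work can be parallelised, n cores give at best a speedup of 1 / ((1 - p) + p/n). A quick sketch:

```python
# Amdahl's law: best-case speedup on n cores when a fraction p of the
# program's work is parallelisable.
def amdahl_speedup(p, n):
    return 1.0 / ((1.0 - p) + p / n)

# Even with 95% of the work parallelised, the serial 5% dominates:
for n in (2, 8, 128):
    print(n, round(amdahl_speedup(0.95, n), 1))
# 2 cores -> 1.9x, 8 cores -> 5.9x, 128 cores -> 17.4x
# (and the limit as n grows is only 1/0.05 = 20x)
```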
>
> We also face the larger problem of locality. We still program as if access
> to memory is universally cheap, when in reality some memory is much cheaper
> to read than other bits (cache hits versus misses). This gets much harder
> to deal with when you bring in extra cores and machines.
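Locality can be glimpsed even from Python, though only suggestively - CPython's boxed objects blunt the cache effect that dominates in C or NumPy - so this sketch asserts only that the two traversal orders agree, not their timings:

```python
import time

N = 500
matrix = [[1] * N for _ in range(N)]

def traverse(order):
    # Row-major walks each inner list in sequence; column-major jumps
    # to a different row on every access, touching a different region
    # of memory each step (a cache miss at the C level).
    if order == "row":
        return sum(matrix[i][j] for i in range(N) for j in range(N))
    return sum(matrix[i][j] for j in range(N) for i in range(N))

for order in ("row", "col"):
    start = time.perf_counter()
    total = traverse(order)
    print(order, total, time.perf_counter() - start)

assert traverse("row") == traverse("col") == N * N
```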
>
> I guess what I'm trying to say is that effective use of multiple processors
> requires good design from the outset, rather than annotations on for loops.
>
> Aside: You might enjoy this talk about the architecture behind a
> low-latency trade system, and why they avoided the traditional models:
> http://bit.ly/i22X6C
>
>
> -tef
>
> _______________________________________________
> Edinburgh mailing list
> Edinburgh at python.org
> http://mail.python.org/mailman/listinfo/edinburgh
>

