Friday, September 15, 2006

If I had a Concurrent Python, what would I do with it?

Something like this:

from threading import Thread

CORE_COUNT = 2 # This is how many CPUs or cores we have to play with.

def xmap(fn, seq):
    """Run a function over a sequence, and return the results.
    The workload is split over multiple threads.
    """
    class Mapper(Thread):
        def __init__(self, fn, seq):
            Thread.__init__(self)
            self.fn, self.seq = fn, seq

        def run(self):
            self.results = map(self.fn, self.seq)

    # Split seq into CORE_COUNT roughly equal slices.
    newseq = []
    n = len(seq) / CORE_COUNT
    r = len(seq) % CORE_COUNT
    b, e = 0, n + min(1, r)
    for i in xrange(CORE_COUNT):
        newseq.append(seq[b:e])
        r = max(0, r - 1)
        b, e = e, e + n + min(1, r)

    # Start a thread for each slice, then join them all and
    # collect their results in order.
    threads = [Mapper(fn, s) for s in newseq]
    for thread in threads:
        thread.start()
    results = []
    for thread in threads:
        thread.join()
        results.extend(thread.results)
    return results

This function takes care of starting, joining, and collecting results from threads. It lets the programmer map a function over a sequence and have the work done in parallel. Of course, this won't work in CPython, where the GIL keeps threads from running Python bytecode in parallel, but it might prove useful in PyPy or IronPython. Parallel processing in this style would be very useful for applications that work with large data sets, i.e. games!
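For anyone trying this in a later Python (which has no xrange, and whose map returns a lazy iterator), here is a compact, self-contained Python 3 sketch of the same idea; the name xmap3 and the workers parameter are made up for this sketch:

```python
from threading import Thread

def xmap3(fn, seq, workers=2):
    """Map fn over seq, splitting the work across `workers` threads."""
    n, r = divmod(len(seq), workers)
    slices, b = [], 0
    for i in range(workers):
        e = b + n + (1 if i < r else 0)  # the first r slices get one extra item
        slices.append(seq[b:e])
        b = e

    class Mapper(Thread):
        def __init__(self, chunk):
            Thread.__init__(self)
            self.chunk = chunk
        def run(self):
            self.results = [fn(x) for x in self.chunk]

    threads = [Mapper(chunk) for chunk in slices]
    for t in threads:
        t.start()
    results = []
    for t in threads:  # joining in order keeps results in sequence order
        t.join()
        results.extend(t.results)
    return results

print(xmap3(lambda x: x * x, list(range(7))))  # [0, 1, 4, 9, 16, 25, 36]
```

The same caveat applies: under CPython's GIL the threads take turns rather than truly running in parallel, so this only buys real concurrency on interpreters without a GIL.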

A game could be written to take advantage of the xmap function whenever possible. It would only make sense to use it when iterating over large data sets or calling long-running functions.

1 comment:

Simon Wittber said...

There are many problems with this code. For one thing, the list slicing code needs to be replaced with something that doesn't create copies of the list, which is needless and expensive overhead.
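One way to avoid those copies, sketched here as a suggestion rather than the author's fix: hand each thread a pair of indices into the shared sequence instead of a sliced copy, and have it write into a shared results list (Python 3; the name xmap_noslice is hypothetical):

```python
from threading import Thread

def xmap_noslice(fn, seq, workers=2):
    """Map fn over seq without slicing: each thread gets (start, end)
    indices into the original sequence and fills its own region of a
    preallocated results list, so no copies of seq are made."""
    n, r = divmod(len(seq), workers)
    results = [None] * len(seq)

    def work(b, e):
        # Each thread writes only to its own disjoint region of results,
        # so no locking is needed.
        for i in range(b, e):
            results[i] = fn(seq[i])

    threads, b = [], 0
    for i in range(workers):
        e = b + n + (1 if i < r else 0)
        threads.append(Thread(target=work, args=(b, e)))
        b = e
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Because the regions are disjoint, the threads never touch the same result slot, and the output comes back already in order without any extend step.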
