Tuesday, December 25, 2007

Duke Nukem is coming.

I've just seen the Duke Nukem Forever trailer. Bwahaahahah!

It's still making me laugh. Bwahahaha. I remember many misspent evenings, days and weekends playing Duke3D over a null modem cable, and a home-made setup with two modems and a 9 volt battery.

The last line at the end of the trailer... genius. My friend Iven disagrees, though I do understand his point of view.

Tuesday, December 18, 2007

Pylons Rocks.

I've built sites with Django, TurboGears and Pylons. I've come to prefer Pylons. Why?

Pylons gets out of the way, and stays out of the way. It lets me use SQLAlchemy, the best Python ORM available. It lets me do things the right way, and it always lets me do things my way. It doesn't hold my hand, and I don't want it to.

For example, from the Django documentation:
django.middleware.common.CommonMiddleware handles ETags based on the USE_ETAGS setting. If USE_ETAGS is set to True, Django will calculate an ETag for each request by MD5-hashing the page content, and it’ll take care of sending Not Modified responses, if appropriate.
This means that your Django application most likely needs to talk to a database, run some code, and parse and render a template before it can generate an ETag. If you want to change this behavior... best of luck to you.

With Pylons, I can do one database lookup, generate an ETag, and short-circuit the whole operation, shaving CPU time and RAM requirements. Sure, I had to write this code myself (2 lines including the db lookup) for each relevant controller method. However, my web service will scale much better because of it.
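The short-circuit looks something like this (a sketch with made-up names, not an actual Pylons API): derive the ETag from one cheap lookup result instead of the rendered page body, and bail out before templates are touched.

```python
import hashlib

def make_etag(row_id, last_modified):
    # hash a cheap identity + timestamp pair instead of the full page content
    return hashlib.md5(('%s:%s' % (row_id, last_modified)).encode()).hexdigest()

def respond(request_etag, row_id, last_modified):
    # request_etag is the If-None-Match value sent by the client (if any)
    etag = make_etag(row_id, last_modified)
    if request_etag == etag:
        return 304, etag   # Not Modified: skip templates and rendering entirely
    return 200, etag       # render the full page as usual
```

The point is that the only database work needed to answer "has this changed?" is a single timestamp lookup.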

It is small details like this, and the tweakability of a Pylons installation, that make it a much better framework for HTTP and REST nuts.

Now, I'm betting that most web developers out there don't really care about ETags. If this is you, then perhaps Django is the right tool for you. However, with a tad more effort and learning, Pylons will let you write better software.

CentOS, -1

I've been struggling to set up a web/db environment on CentOS.

It sucks. Badly. Why rename apache to httpd? Why let PostgreSQL config files live in the data directory instead of /etc? Is there _any_ compelling reason to use this pile of junk instead of Debian or Ubuntu?


I guess I've been spoiled by the (comparatively) pure joy of configuring, maintaining and running my code on Debian based distributions.

Friday, December 14, 2007

Is hard real time Python possible?

I've been investigating the possibility of hard real time software development with Python. From some brief research, I've come up with the following:

* A Real Time OS is required.
* The .NET framework apparently cannot be used in hard real time software.
* Windows is not an RTOS; however, there is a real time Linux kernel (which I can install with Ubuntu!).

Assuming I do have an RTOS available, I might be able to use Python if:

* I turn off GC
* I don't create referential cycles
* I avoid using threads
* I use some best practices for real time software development.
* I read a good book on HRT software development.
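The first item on that list is at least easy to sketch with the standard library gc module: disabling the cyclic collector means collection pauses cannot fire at unpredictable times, while reference counting still reclaims acyclic objects deterministically.

```python
import gc

# disable the cyclic garbage collector so collection pauses cannot occur
# at unpredictable times; reference counting still frees acyclic objects
gc.disable()
assert not gc.isenabled()

# ... the time-critical section would run here, taking care
# not to create referential cycles ...

gc.enable()   # re-enable outside the time-critical region
```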

I'm hoping that I can somehow get access (in Python) to a high resolution timer and integrate it into my fibra framework, so I can add temporal tests/constraints for individual tasklets.
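Whatever timer ends up being available, its usable resolution can be measured empirically. A small sketch (timer_resolution is a made-up helper, not part of any library): spin until the reported time changes, and take the smallest observed increment.

```python
import time

def timer_resolution(clock, samples=5):
    # measure the smallest observable increment of a clock by spinning
    # until the reported time changes
    deltas = []
    for _ in range(samples):
        t0 = clock()
        t1 = clock()
        while t1 == t0:
            t1 = clock()
        deltas.append(t1 - t0)
    return min(deltas)

resolution = timer_resolution(time.time)
```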

Has anyone done this sort of thing with Python before? I'm looking for advice. I'm willing to consider other languages and platforms if they fit.

Wednesday, December 12, 2007

Do names belong in a URL?

Dear Lazyweb.

Imagine a nice RESTful interface for working with Tags. The URL:
will return a list of all the tags.

The URL:
will return a list of all the items that are associated with the tag "foo".

Or should it?

What happens when you may have tags in different languages? Is something like this:
possible or even desirable? (These characters were copied from a spam email, I have no idea what it says.)
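Non-ASCII tag names are at least possible in a URL if they are UTF-8 encoded and percent-escaped. A hedged sketch using Python 3's urllib.parse (the tag name here is made up):

```python
from urllib.parse import quote, unquote

tag = 'caf\u00e9'                  # a non-ASCII tag name
path = '/tags/' + quote(tag)       # percent-encoded UTF-8
assert unquote(path) == '/tags/caf\u00e9'
```

Whether such URLs are desirable is a separate question from whether they work.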

Should the tag collection be accessed by id, rather than name? Eg:
This is uglier, but more usable across languages and character sets.

Hmmm. What do I do....?

Monday, December 10, 2007

Eventlet Looks Familiar

Linden Lab (the Second Life developers) are using a library named Eventlet to abstract non-blocking network IO behind coroutines.

I'm glad I've seen this, as Eventlet uses similar techniques to fibra, and validates my approach. In fact, this quote from the Eventlet wiki, applies equally to fibra:

Eventlet began life as Donovan Preston was talking to Bob Ippolito about coroutine-based non-blocking networking frameworks in Python. Most non-blocking frameworks require you to run the "main loop" in order to perform all network operations, but Donovan wondered if a library written using a trampolining style could get away with transparently running the main loop any time i/o was required, stopping the main loop once no more i/o was scheduled.
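The trampolining idea in that quote can be sketched in a few lines (a toy illustration, not Eventlet's or fibra's actual API): a scheduler round-robins generator-based tasklets, dropping each one when it is exhausted.

```python
from collections import deque

def run(tasklets):
    # round-robin each generator-based tasklet until all are exhausted,
    # collecting whatever values they yield
    queue = deque(tasklets)
    order = []
    while queue:
        task = queue.popleft()
        try:
            order.append(next(task))
        except StopIteration:
            continue            # tasklet finished; do not requeue it
        queue.append(task)
    return order

def counter(name, n):
    for i in range(n):
        yield (name, i)

result = run([counter('a', 2), counter('b', 1)])
# round-robin interleaving: [('a', 0), ('b', 0), ('a', 1)]
```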

Saturday, December 08, 2007

What is the 2007 Independent Game of the Year?

Game Tunnel are running a poll to determine the Independent Game of the Year. Galcon consumed most of my available game time earlier this year, so it gets my vote.

If you enjoyed Galcon, go and vote for it! It would be great to see a Python game come out on top!

Wednesday, December 05, 2007

import antigravity

import antigravity t-shirts.

I need some new shirts for work, I thought I'd share these in case anyone else is thinking the same...

Thursday, November 29, 2007

Ubisoft coming to Perth?

I went to the pulse expo this evening, where one of the speakers mentioned that Ubisoft are in town (Perth, Western Australia), looking to set up a studio.


Wednesday, November 28, 2007

A chat server using fibra.

The following code is a very simple chat server implemented using cooperative threads in the fibra 0.01 framework. Explanation follows below.
import fibra
import fibra.plugins.network as network
import fibra.plugins.tasks as tasks

class Chatter(object):
    def __init__(self, address):
        self.address = address
        self.members = {}

    def listener(self):
        while True:
            conn = (yield network.listen(self.address))
            yield tasks.spawn(self.login(conn))

    def login(self, conn):
        handle = None
        while handle is None:
            yield network.send(conn, 'What is your handle?')
            handle = (yield network.receive(conn))
            if handle in self.members:
                yield network.send(conn, 'That handle is already taken.')
                handle = None
        self.members[handle] = conn
        yield tasks.spawn(self.chat(conn, handle))

    def lost_conn(self, conn, handle):
        if handle in self.members:
            del self.members[handle]
            yield self.broadcast('%s has left the chat.' % handle)

    def broadcast(self, text):
        for member, conn in self.members.items():
            try:
                yield network.send(conn, text)
            except network.NetworkError:
                yield tasks.on_finish(self.lost_conn(conn, member))

    def closed_socket(self, conn, handle):
        yield network.on_lost_connection(conn)
        yield self.lost_conn(conn, handle)

    def chat(self, conn, handle):
        yield tasks.on_finish(self.lost_conn(conn, handle))
        yield tasks.spawn(self.closed_socket(conn, handle))
        yield self.broadcast('%s has joined the chat.' % handle)
        data = ''
        while True:
            try:
                data = yield network.receive(conn)
            except network.NetworkError:
                break
            if data == '/quit': break
            yield self.broadcast('%s says: %s' % (handle, data))
        yield network.close(conn)

if __name__ == "__main__":
    chatter = Chatter(('localhost', 1980))
    s = fibra.Schedule()
    #register the network plugin and install the main listener tasklet
    s.register_plugin(network.NetworkPlugin())
    s.install(chatter.listener())
    while s.tick(): pass

The Chatter class has six methods, all of which are Python generators; in the context of fibra, I call these tasklets.

At the bottom of the code, a fibra Schedule is created, and the NetworkPlugin is registered. The NetworkPlugin allows tasklets to yield certain values which perform network related operations. The main tasklet, chatter.listener, is installed into the scheduler, then the schedule is continually ticked in a while loop. The while loop will finish when there are no more tasklets to run.

So, what does the listener method do? It creates a tasklet, and yields the network.listen object, which will return a new connection when someone connects to the address passed into the network.listen call. This is a non-blocking operation. When the connection object is returned, the listener method spawns a new tasklet (self.login) with the connection, then goes back to listening for another new connection. The self.login tasklet will continue to run concurrently while the listen method is waiting.

The login method sends a prompt to the new connection, asking for a handle to identify the user. If the handle has not already been used, it spawns a chat tasklet and then exits.

The first line of the chat tasklet schedules another tasklet (self.lost_conn) which will be run when the chat tasklet finishes. The second line spawns a tasklet (self.closed_socket) which waits for the socket to close unexpectedly. The chat tasklet then broadcasts a message to any users who are already logged in. It then loops, sending chat messages to all users as they are received. Finally, if the tasklet receives a '/quit' line, it breaks out of the loop and closes the socket and finishes. At this point, the self.lost_conn task is awakened and runs.

If you want to test this code yourself, start the server, and telnet to localhost 1980.

Friday, November 23, 2007

Why do sockets die?

I'm testing a network lib, which uses select for polling sockets.

I'm running a stress test, which connects 100 sockets to a server socket (all in the same process), then echoes data back and forth as quickly as possible. If a socket dies, it gets removed. Each loop iteration I print time passed, and the number of active sockets left.

As it runs, I watch the number of sockets slowly decline, until I have a set of 15 sockets left, which seem to keep running happily. Why do the other 85 sockets die? They either raise ECONNRESET, EPIPE or ETIMEDOUT. I imagined sockets connected via localhost would be quite reliable...

Update: The same test between two different machines does _not_ show this same problem. So what's up with localhost?

Tuesday, November 20, 2007

Looking for Web Designer.

I'm looking for an XHTML/CSS expert. Must also have excellent graphics/design skills. Must have a portfolio I can view online, and be comfortable learning new technologies.

Send me an email (simonwittber at gmail dot com) if you are interested in full-time work, and can work in Perth, Western Australia or are willing to relocate.

On the way to work this morning...

Monday, November 19, 2007

TCP Networked Tasklets in Python

Fibra 3 introduces some networking features.

Generator based tasklets can now communicate with each other over TCP! The networking plugin uses Twisted to do its magic. What other sorts of plugins might be useful? I'm running out of ideas now. :-)

This is the code for a simple server which echoes everything it receives, and starts the conversation with a 'hi.' message:
import fibra
import fibra.plugins.sleep
import fibra.plugins.tasks as tasks
import fibra.plugins.network as network

def listener():
    conn = (yield network.ListenForNewConnection(1980))
    yield echo(conn)
    yield watch(conn)

def watch(connection):
    yield network.WaitForLostConnection(connection)
    print connection, ' has been lost'

def echo(connection):
    while True:
        data = (yield network.WaitForData(connection))

s = fibra.Schedule()

while s.tick(): pass

This is the code for a simple client which echoes everything it receives:
import fibra
import fibra.plugins.sleep
import fibra.plugins.tasks as tasks
import fibra.plugins.network as network

def watch(connection):
    yield network.WaitForLostConnection(connection)
    print connection, ' has been lost'

def echo(connection):
    while True:
        data = (yield network.WaitForData(connection))

def connector():
    conn = (yield network.ConnectToHost(('localhost',1980)))
    yield echo(conn)
    yield watch(conn)

s = fibra.Schedule()

while s.tick(): pass

Sunday, November 18, 2007

New Code Repositories and Docs.

I'm now using bzr instead of svn. I'm pushing my repositories to:


I'm also auto publishing documentation to:


Friday, November 16, 2007

Cooperative + Preemptive Concurrency

I've just uploaded Fibra 2 to the cheeseshop. Fibra 2 includes the promised non-blocking plugin, which allows a generator based task to momentarily run in a separate thread.
import fibra
import fibra.plugins.nonblock
import fibra.plugins.sleep
import time

def stuff():
    yield fibra.plugins.nonblock.WillBlock()
    print 'I am running inside a thread.'
    print 'I am still running inside a thread.'
    print 'I am exiting the thread, going back into cooperative mode.'
    yield None
    for i in xrange(3):
        print 'I am running cooperatively too.'
        yield 1

def other_stuff():
    for i in xrange(5):
        print 'I am running cooperatively.'
        yield 1

s = fibra.Schedule()
while s.tick(): pass

Bitten by Configuration Management

I've learnt a lesson re. configuration management.

When setting up new projects, esp. projects built on frameworks like TurboGears, you should keep your eggs handy, for all possible platforms.

An "easy_install TurboGears==1.0.1" in September will not necessarily download the same code in October. In particular, it seems RuleDispatch has changed, which exposes a new bug in my application... grrr.

Cool Unix Tools

Today I was faced with dumping a postgres database, compressing it, downloading it, uncompressing it and restoring it on my local dev machine. It's a painfully slow process, which I've always had to supervise.

Surely, there must be a better way to do this...

Some small research time later I discovered netcat, or nc for short. nc lets you create a pipe over a network. As its man page states, it's a TCP/IP swiss army knife.

This is how I ended up automating most of the process.

On my local machine:
nc -l -p 1979 | bunzip2 | psql database_name

On the remote machine:
pg_dump database_name | bzip2 | nc my_local_ip 1979

Voila! The database dumped its data, piped it through bzip2 then over the network, to my waiting nc process which received the data, then piped it through bunzip2 and then into psql.

Cool. Another demonstration showing that Unix is such a great environment.

Pyglet amazes me again.

Pyglet 1.0 beta2 was released to the world a few days ago. I svn updated my local copy of the repository and noticed a new soundspace folder in the examples directory.

Curious, I tried to run soundspace.py. Hmm, seems I require an extra 'AVbin' library... I try the example again...

Wow. Cool. Hang on... I can drag these widgets around, change their direction... and the audio changes too!

This is 3D positional audio! Pyglet rocks! This is a seriously cool, feature-full library. The more I look into Pyglet, the more I am pleasantly surprised. I need to find a project where I need to use these kinds of cool features!

Well done Alex.

Thursday, November 15, 2007

Spatial Hashing

Often, when building a game, you need to test if objects are colliding. The objects could be spaceships, rocks, mouse pointers, laser beams... whatever. The simple approach is to iterate over all your objects, and test if they collide with a specific point.

If you do this using a linear algorithm, you'll quickly find that as you get more and more objects, your collision detection code will slow down at the same rate.

To get around this, you can test against a smaller set of objects, by using a spatial index. A spatial index (in this example, a Spatial Hash / Hash Map) stores all object positions, and can quickly tell you what objects _might_ be colliding in a certain area. You can then iterate through this smaller list, testing for exact collisions if needed. This is called a broad phase collision detection strategy.

from math import floor

class HashMap(object):
    """
    HashMap is a spatial index which can be used for a broad-phase
    collision detection strategy.
    """
    def __init__(self, cell_size):
        self.cell_size = cell_size
        self.grid = {}

    @classmethod
    def from_points(cls, cell_size, points):
        """
        Build a HashMap from a list of points.
        """
        hashmap = cls(cell_size)
        setdefault = hashmap.grid.setdefault
        key = hashmap.key
        for point in points:
            setdefault(key(point), []).append(point)
        return hashmap

    def key(self, point):
        cell_size = self.cell_size
        return (
            int(floor(point[0] / cell_size)),
            int(floor(point[1] / cell_size)),
            int(floor(point[2] / cell_size)))

    def insert(self, point):
        """
        Insert point into the hashmap.
        """
        self.grid.setdefault(self.key(point), []).append(point)

    def query(self, point):
        """
        Return all objects in the cell specified by point.
        """
        return self.grid.setdefault(self.key(point), [])

The above class implements a spatial hash. A simple way of putting it is: "we store these points in a grid, and you can retrieve an entire grid cell with its points."

if __name__ == '__main__':

    from random import uniform
    from time import time

    NUM_POINTS = 100000
    new_point = lambda: (
        uniform(-100, 100), uniform(-100, 100), uniform(-100, 100))

    points = [new_point() for i in xrange(NUM_POINTS)]
    T = time()
    hashmap = HashMap.from_points(10, points)
    print 1.0 / (time() - T), '%d point builds per second.' % NUM_POINTS

    T = time()
    for point in points:
        hashmap.query(point)
    print 1.0 / (time() - T), '%d point queries per second.' % NUM_POINTS

This example inserts 100000 points into the hashmap, using a cell size of 10. This means, when we query point (0,0,0), we retrieve all points in the cube defined by (0,0,0),(10,10,10).

On my machine, I can build a 100000 point hashmap 2.7 times per second, and query it 70000 times per second. This makes it great for colliding static points, but not so great for colliding moving points. I imagine the from_points method could be improved somewhat. Any suggestions?
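For what it's worth, one possible from_points tweak (a sketch, untimed here; build_grid is a hypothetical helper, not part of the class above) is to use collections.defaultdict and compute keys inline, avoiding a method call and setdefault per point:

```python
from collections import defaultdict
from math import floor

def build_grid(cell_size, points):
    # same broad-phase structure as HashMap.grid, built with defaultdict
    grid = defaultdict(list)
    for x, y, z in points:
        cell = (int(floor(x / cell_size)),
                int(floor(y / cell_size)),
                int(floor(z / cell_size)))
        grid[cell].append((x, y, z))
    return grid

grid = build_grid(10, [(0, 0, 0), (5, 5, 5), (15, 0, 0)])
# (0,0,0) and (5,5,5) share cell (0,0,0); (15,0,0) lands in cell (1,0,0)
```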

Wednesday, November 14, 2007

XPS M1330 Review

The Dell XPS M1330 arrived today. I'll be using this machine in my new office in the Big Blue Room. :)

The screen is backlit by WLEDs. This means it's a lot brighter than most laptop LCDs, but it is still not as bright as my iMac LCD.

Ubuntu Gutsy installed perfectly (had to install using safe mode video, after that, the nvidia drivers worked fine). On the first boot, the battery monitor happily reported 6.5 hours of battery life left. Excellent! Everything works, including the multimedia card reader and the sound card.

The build quality feels rather sturdy, even though the machine is very light. The keyboard looks and feels cheap. The battery isn't a perfect fit, and has a very slight wobble.

Pystone reports 68000 pystones, glxgears reports 4800 fps. Not very accurate benchmarks, but at least you get an idea of what to expect.

Overall, I'm happy with it. The build quality is better than the usual Dell standard, and the performance is great, considering the price!

Ubuntu is killing your laptop.

Bronwen sent me a very interesting link the other day. I'm glad she did, because it turns out that Ubuntu has been killing my hard disk!

How? When your Ubuntu laptop is running on battery, the disk heads are parked as part of the power saving strategy. When the disk needs to be accessed, the heads are unparked. Apparently this can only happen about 600,000 times before a disk becomes likely to fail. This is all well and good; however, the Feisty and Gutsy releases of Ubuntu do this up to 4 times per minute, which is _bad_, considering that kind of frequency gives your hard disk a life expectancy of 104 days!

This problem only occurs when your laptop is running on battery. The way to solve it is (replace sda with your hard disk device):
sudo hdparm -B 255 /dev/sda
which turns off the aggressive power management features of your hard drive.

If you want to check how much life there is left in your hard disk:
sudo smartctl -d ata -a /dev/sda | grep Load_Cycle_Count
My laptop has clocked up 300000 cycles... which is amazing (in a bad way!), considering it is rarely unplugged.

Update: I've just discovered this is old news. I don't read slashdot anymore... :-) Shame there hasn't been a fix yet.

Update: According to an Ubuntu Dev, Ubuntu does not alter hard disk settings. So, it would appear that aggressive power management is not the problem. The problem is something is writing to the disk too frequently, which will unpark the disk heads. This is still an Ubuntu issue IMO.

Tuesday, November 13, 2007

Concurrency is Fun!

I find myself spending too much time building cool features for my scheduler. It's a real time and brain sink. :-)

I've just uploaded Fibra 1, which provides a plugin which lets tasklets spawn other tasklets, wait for other tasklets to complete, and spawn tasklets when the current tasklet terminates. This is very neat for building sequences of actions, and is much more natural than the way I used to do it.

I'd like to build functionality so that a tasklet can watch what another tasklet is producing, watch if it raises an exception etc. I've also got to replicate the non blocking yield magic I put into nanothreads.

New 'Office' for the Summer

This Summer, I'll be working from a few new locations, around and about the city.

This particular spot looks like a good candidate! Working outdoors will require a few changes. I'll need to swap my power hungry laptop for something more portable and long lived, and I'll need to sort out a mobile broadband solution. On top of that, I'll need to carefully plan what I would like to achieve each day, so that I don't get too distracted... and achieve nothing. :)

Monday, November 12, 2007

Building Games in Small Pieces - The Scheduler

I've just uploaded fibra to the cheeseshop. This is another small piece of code which I find very useful in developing simulations and games.

Fibra is the scheduler I used in ICCARUS to simulate concurrency. It uses Python generators as 'tasklets' which are iterated cooperatively. It does a job which is very similar to another library I've written, but is different in that it is very light weight, and uses a plugin system to provide extra functionality, such as sleeping, deferring execution, spawning into a real thread etc. To achieve this, it uses new Python 2.5 generator methods, so I decided to split it out of the older nanothreads module and create a new package.

Usually, I have a global scheduler available in the game, so any part of the code can defer a function call, or install a new tasklet. I usually iterate the scheduler just after I handle GUI events.

It's nothing new, it's been done before, but I imagine with the right plugins, it could achieve much of what people want when they talk about concurrent Python.

This is a simple example:

import fibra
import fibra.plugins.sleep

def sleeper(x):
    while True:
        print x
        yield x

def normal():
    while True: yield None

s = fibra.Schedule()
#tell the schedule that Sleep, float and int objects should be
#handled by the SleepPlugin
s.register_plugin(fibra.plugins.sleep.SleepPlugin(), (fibra.plugins.sleep.Sleep, float, int))
#the SleepPlugin provides a new method which lets us defer
#a tasklet's start for X seconds.
s.defer(4, sleeper(0.5))
#install a sleep task that will only iterate once every 2 seconds
s.install(sleeper(2))
#install a normal task that will iterate on every tick
s.install(normal())
#iterate the scheduler
while s.tick(): pass
Sunday, November 11, 2007

REST in Pylons needs work.

RESTful controllers, in Pylons, are second class citizens.

The issue is, Routes does not support nested dispatching to a depth > 1. This means if you want to build a RESTful API which looks like this:


you are bang out of luck. No dice. Can't do it. You are restricted to:


which is an artificial restriction that gets in the way. Blocks the whole road, in fact.

pylonshq.com states that, if you don't like the behavior of routes, you can 'plug in your favorite' dispatcher. This is much harder than it sounds, as the WSGIController classes you've probably already written likely depend on information coming from the routes layer. IMO this makes routes (in the context of Pylons) part of the application. It is not middleware.

It would be great if someone could please tell me I'm wrong, and show me the way!

Wednesday, November 07, 2007

Monetizing a Web Service?

I felt like doing something new last night, so I built a web service using the Pylons 0.9.6 and SQLAlchemy 0.4 libraries. It was really just an excuse to catch up on the latest releases of the libraries...

...so now I've got a cool little web service which provides online storage (like Amazon's S3) with a sequence/list style API. It has an XMLRPC and a RESTful API. It's useful and well-featured enough that I am going to try to monetize the service. I'm thinking of having free accounts with a fixed monthly quota. If a user pays a subscription, the quota increases. Pretty simple, and self sustaining. I hope! :)

The target audience is other developers who want to provide a synchronized database to their distributed applications. Think TODO lists, High score lists for games or even twitter-like chat applications.

I still need to design a human interface, write some docs etc. I'll need some beta testers too. Anyone interested?

BTW: SQLAlchemy 0.4 has added some very cool features. It is still the best ORM for Python. Everyone else is playing catch up.

Monday, October 22, 2007

Building Games in Small Pieces

A few people have been asking for ICCARUS source code.

In its current state, it is a big ball of mud, and probably useless to most people. So, I'm attempting to refactor parts of the code and release them as re-usable units of functionality, which can be easily tested, documented and even used! :-) I'll think about a complete ICCARUS release later.

The first useful chunk is my mainloop module. This module supplies a generic Loop construct (which implements frame skipping and a fixed simulation update) for your pyglet/pygame/opengl application.


It's rather simple. To use it, you create three callback functions to handle tick, render and GUI events.
>>> import time
>>> from mainloop import Loop
>>> loop = Loop(time.time, time.sleep, tick_func, render_func, gui_func)
>>> loop.start()
It's quite a tight loop, and IMO is probably about the most code you should have as your inner game loop.
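The frame-skipping, fixed-update idea can be sketched as an accumulator loop (illustrative names; this is not necessarily the mainloop module's actual signature). Elapsed wall time is accumulated, and simulation ticks of a fixed size are run as they fall due, so a slow frame is followed by several catch-up ticks before the next render.

```python
def run_loop(clock, tick_func, render_func, frames=4, dt=2):
    # accumulate elapsed time; run fixed-size simulation ticks as they
    # fall due, then render once per frame (frame skipping happens when
    # several ticks are owed at once)
    accumulator = 0
    last = clock()
    for _ in range(frames):
        now = clock()
        accumulator += now - last
        last = now
        while accumulator >= dt:
            tick_func(dt)
            accumulator -= dt
        render_func()

# exercise the loop with a fake clock that advances one unit per call
t = [0]
def fake_clock():
    t[0] += 1
    return t[0]

ticks, renders = [], []
run_loop(fake_clock, ticks.append, lambda: renders.append(1))
# with dt=2 and one unit per call: 2 simulation ticks across 4 rendered frames
```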

Mechanics and Coders

Took my MX5 in for a major service today. Walked back into the shop at closing time to pick up the keys and take it home. 3 hours later I drive away in a car with a non-working handbrake. Which is not a good thing, especially considering I almost live on the side of a mountain.

Finding a good car mechanic is like finding a good coder. Next to impossible. From my personal experience, I believe most programmers don't really care about their work, and are satisfied with mediocrity. Perhaps the same can be said of auto mechanics.

Unicode Madness

I don't think I completely understand unicode.

>>> x = u'\xbb'
>>> x.encode('ascii', 'ignore')
>>> x.decode('ascii', 'ignore')
Traceback (most recent call last):
File "<stdin>", line 1, in ?
UnicodeEncodeError: 'ascii' codec can't encode character u'\xbb' in position 0: ordinal not in range(128)

Why does the decode call raise an exception even though I've asked it to 'ignore' Unicode problems?

Friday, October 19, 2007

Do you work from home?

For the last 12 months, I've been working for myself, from my home office.

The single best thing I can recommend to anyone doing the same is to get a decent high-backed office chair. I received one as a gift last night, and sitting here right now... ahh, it's magic!

I'm sure it helps me write better code! :-)

Update: The chair is pictured, and can be bought from IKEA.

Wednesday, October 17, 2007

Meeting fellow enthusiasts...

I went to a DCiRG (Digital Content Industry Reference Group) meeting this evening, and met a few fellow enthusiasts. If you cannot work out what might happen at a DCiRG meeting... it was mostly about computer games and multimedia technologies. :-)

We talked about Python, Pygame, Pyglet and OpenGL. There was also talk about XNA + IronPython... It's great to talk to other people about all this stuff... It's rare that I get to meet Python people who like to build games too! Our technical colleges and Universities are offering (or will be offering) Python courses, centered around game development. Woohoo!

Thursday, September 06, 2007

The Motorola Z6

I've just bought a Motorola Z6, a Linux based, music oriented mobile phone.

There are some serious bugs in the phone. Trying to mount the phone as a USB mass storage device does not work at all in Linux. It has some problems in OS X, but is still usable after a few tries. I haven't been able to test in Win32 yet.

Also, the bundled S9 bluetooth stereo headset cuts in and out while playing music. Unfortunately, this makes the headset useless for listening to music. :-(


Hopefully a firmware update will eventually solve these issues. Other aspects of the phone are great. The menus and navigation in general are very fast, and it has the most readable display in direct sunlight I've ever used.

Monday, August 20, 2007

Check out Pyglet!

Pyglet is a cross-platform windowing and multimedia library for Python. If you've been using pygame all your life (like me), you need to check it out.

Pyglet uses OpenGL. It makes blitting a 2D image as simple as
The API is exceptionally clean, IMO, and is accompanied by a programming guide and an API reference.

It also allows you to code for multi-head setups, and provides positional audio via OpenAL. Awesome! And best of all... there is no building required! It's 100% Pure Python.

Pyglet is still in alpha phase, so don't expect everything to work perfectly. Having said that... it works well enough for me. I'm converted.

Friday, August 17, 2007


The chaps over at viddler are hosting a video of our ICCARUS presentation. The video skips the first moments of the presentation, where I load a few instances of the Python interpreter... to jeers (maybe cheers :-) from (I'm guessing here) the Rails aficionados in the crowd.

Update: ICCARUS Screencast is now available on scouta.

Wednesday, August 15, 2007

ICCARUS wins at Webjam

The ICCARUS (Interactive Command Console and Relational User Statistics) tool was launched at webjam this evening, and won first place via popular vote.

Some nervous moments were had, due to the Ubuntu laptop not playing nicely with the projector setup... a few minutes of xorg.conf hacking... and we were up and running... phew!

ICCARUS provides a three dimensional visualisation of the data behind scouta.com. It shows the social network between members, the memberships of scouta groups, and the links between members and the videos/podcasts which they enjoy.

The 'galaxy' can be navigated by clicking on points of interest, or searched using commands.

The really neat thing is that the galaxy is not static. It can be dynamically reconfigured via custom constraint expressions, to show how the different node types cluster around items, groups and users. This provides instant visual feedback on the health and growth of the system, and how people are using it.

The data is fetched via web services provided by TurboGears, and uses an (as yet) unreleased version of the GFX library to create the visuals.

Tuesday, August 14, 2007

ICCARUS is coming...

The Python-Powered ICCARUS will make its first public appearance on Wednesday, August 15, 2007 during webjam at the Velvet Lounge, Perth, West Australia.

Wednesday, August 01, 2007

Scouta JetBlack, launched!

The JetBlack release of scouta has just been launched. It comes with major interface changes, and iTunes integration via a mac-only client application.

GFX Demo Code

This little piece of code draws random sprites all over an 800x600 window using GFX. The GFX specific stuff has been commented, all the rest is standard Python/Pygame stuff.

import random
import pygame
from gfx import gl, array, ext

def main():
    pygame.init()
    flags = pygame.OPENGL|pygame.DOUBLEBUF|pygame.HWSURFACE
    pygame.display.set_mode((800,600), flags)
    #setup the opengl window
    gl.init((1280,800))
    #create an image batch of 10000 images, which uses the texture 'sprite.png'
    image_count = 10000
    texture = ext.GLSurface(pygame.image.load('sprite.png'))
    image_batch = ext.ImageBatch(image_count, texture)
    #create 10000 random images, and use the whole texture for each image
    for i in xrange(image_count):
        x,y = random.randint(0,795), random.randint(0,595)
        w,h = 5,5
        vertices = (x,y),(x,y+h),(x+w,y+h),(x+w,y)
        texture_coords = (0,0),(0,1),(1,1),(1,0)
        image_batch.set_quad(i, vertices, texture_coords=texture_coords)
    clock = pygame.time.Clock()
    running = True
    while running:
        #clear the display
        gl.clear((0.0,0.0,0.0,1.0))
        #draw the image batch
        image_batch.draw()
        clock.tick()
        pygame.display.flip()
        if pygame.QUIT in (i.type for i in pygame.event.get()):
            running = False
        print 'FPS:', clock.get_fps()

if __name__ == "__main__":
    main()

Tuesday, July 31, 2007

9 million sprites per second

Well... almost.

I've just clocked my latest GFX lib release rendering 1 million 5x5 alpha blended sprites at 9 frames per second. Larger sprite sizes increase the rendering time, for example a 64x64 sprite can be rendered 6 hundred thousand times per second... but 9 million is a better number for a headline. :-)

This kind of speed comes courtesy of OpenGL batch mode operations, implemented via vertex arrays, which I am using via a Pyrex wrapper. I've re-implemented a tiny sub-set of numpy array functionality, so you can do things like move sprites around by adding an array of velocities to the sprite vertices... which is useful when implementing particle systems for example.
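The batch-update idea can be sketched in plain Python: move every sprite by adding its velocity to all four of its quad vertices in one pass. (The GFX lib does this inside a Pyrex-wrapped vertex array; the function name and data layout here are illustrative only.)

```python
# Move each sprite quad by its velocity, scaled by the timestep.
# Plain-Python sketch of the batched vertex update described above.
def advance(quads, velocities, dt):
    return [
        [(x + vx * dt, y + vy * dt) for (x, y) in quad]
        for quad, (vx, vy) in zip(quads, velocities)
    ]

# one 5x5 sprite quad moving right 1 unit and up 2 units per tick
quads = [[(0, 0), (0, 5), (5, 5), (5, 0)]]
velocities = [(1.0, 2.0)]
moved = advance(quads, velocities, 1.0)
# moved == [[(1.0, 2.0), (1.0, 7.0), (6.0, 7.0), (6.0, 2.0)]]
```

Doing this as one array operation over all sprites, rather than per-sprite Python calls, is what makes particle systems at this scale feasible.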

I imagine it will also be useful for rendering large tilemaps, game levels etc. Stay tuned for some over the top particle effects in my future games! :-)

Python Education in Perth

Just received an invitation in my email to attend a bi-monthly game dev meetup at TAFE (a technical college) in Perth, WA.

Python - what is it and where to start:
You may have heard of Python - a very high level open source scripting language that has been used not only for introducing programming concepts to students, but also for writing game scripts and controlling associated 3D rendering engines, amongst many other applications and uses. Find out what you need to download to install Python and PyGame, and the pitfalls you will encounter getting it all up and running, from a layman's perspective. See an example of what can be achieved after just a few days of development.
Presented by Steve Eggleston, Head of Programs, New Media Studies, Central TAFE

These chaps are using Python to teach game development! That's great! I wonder if they know of pyweek...

Monday, July 23, 2007

Generators hold references too!

For some reason, our TG app was leaking transactions, causing predictable crashes. With every page view, Postgres was left holding onto a transaction which never committed.


After much brute force debugging, I discovered the problem. A controller was calling a method in the model, and passing the return value through to the template, where incidentally, the value was no longer being used. "Easy fix", I thought, "just remove the method call."

But, I couldn't leave it at that, I had to find out what this method was doing differently which was causing transaction leakage... As it turns out, the method was returning a generator, via a generator expression. The generator itself was holding references to the transaction, preventing it from closing. The transaction would only close once the generator had been iterated through completely, finally calling the clean up code.
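The effect is easy to reproduce outside of TurboGears. A minimal, self-contained sketch, using a weakref and a stand-in class in place of the real database transaction:

```python
import weakref

class Transaction(object):
    """Stand-in for the database transaction that was being leaked."""
    pass

def fetch(txn):
    # a generator expression closing over `txn`, like the model method did
    return (row for row in range(3) if txn)

txn = Transaction()
alive = weakref.ref(txn)
rows = fetch(txn)
del txn
assert alive() is not None  # the un-iterated generator keeps txn alive

list(rows)                  # iterate the generator to completion
del rows
assert alive() is None      # only now is the transaction released
```

The generator's frame holds a reference to everything it closed over, and that frame lives until the generator is exhausted or garbage collected.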

Woohoo. It's a good feeling when you kill a nasty bug like this. I'll be more careful with generator expressions from now on!

Sunday, July 22, 2007

Rectangle Operations

The Rect class in Pygame is very useful for normal screen based apps. It doesn't work so well in OpenGL code, which uses a y-up coordinate system (the positive Y axis points up the screen). Also, pygame.Rect uses integers for internal coordinates, which means you cannot use it at sub-pixel accuracy.

So... I wrote a new Rect class to fix these problems.
easy_install rect

>>> from rect import Rect
>>> r = Rect((0,0,10,10))
>>> r.top
>>> r.bottom
Also included in the rect package is a quadtree spatial index, and a rectangle-bin packing function, useful for packing sprite strips, pre-rendered font characters or perhaps tile maps.

Update: Oops, a link would be helpful :-) http://cheeseshop.python.org/pypi/Rect/
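As a rough illustration of the difference, here is a sketch of a float-based, y-up rectangle. This is not the actual Rect package API; the class and property names are illustrative only.

```python
# A float-based rectangle with a y-up axis, so `top` is the larger
# y value. Illustrative sketch, not the Rect package itself.
class FloatRect(object):
    def __init__(self, rect):
        x, y, w, h = rect
        self.x, self.y = float(x), float(y)
        self.w, self.h = float(w), float(h)

    @property
    def bottom(self):
        return self.y            # y-up: bottom edge is the smaller y

    @property
    def top(self):
        return self.y + self.h   # y-up: top edge is y + height

r = FloatRect((0, 0, 10, 10))
# r.bottom == 0.0 and r.top == 10.0, and sub-pixel values survive:
assert FloatRect((0.25, 0.25, 1.5, 1.5)).top == 1.75
```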

Tuesday, July 10, 2007

A General Pygame Main-Loop

Will McGugan's post about mastering time in pygame started me thinking about my game loops, and how I might implement frame skipping, and other things.

The benefit of having a fixed 'step size' for your simulation might not be immediately apparent, however it has obvious benefits when trying to synchronize network games, or when working with physics libraries (e.g. ODE) which can go non-deterministic with a variable step size...

This is what my new general game loop looks like. If I ever need to implement frame-skipping, I believe I'll need to write some more code at the frame-skip comment below, to detect if every frame is being skipped... :-) I use this loop to generate Tick and Render events which get handled elsewhere.

import pygame
from pygame.locals import *

def main():
    pygame.init()
    pygame.display.set_mode((320,200))

    #time is specified in milliseconds
    #fixed simulation step duration
    step_size = 20
    #max duration to render a frame
    max_frame_time = 100

    now = pygame.time.get_ticks()
    while(True):
        #handle events
        if QUIT in [e.type for e in pygame.event.get()]:
            break

        #get the current real time
        T = pygame.time.get_ticks()

        #if elapsed time since last frame is too long...
        if T-now > max_frame_time:
            #slow the game down by resetting clock
            now = T - step_size
            #alternatively, do nothing and frames will auto-skip, which
            #may cause the engine to never render!

        #this code will run only when enough time has passed, and will
        #catch up to wall time if needed.
        while(T-now >= step_size):
            #save old game state, update new game state based on step_size
            now += step_size
        else:
            pygame.time.wait(10)

        #render game state. use 1.0/(step_size/(T-now)) for interpolation

if __name__ == "__main__":
    main()

Update: Thanks to a suggestion from Marius in the comments, the loop is now environmentally friendly. :-)

Monday, July 09, 2007

Mediator Pattern in Python

class Mediator(object):
    def __init__(self):
        self.signals = {}

    def signal(self, signal_name, *args, **kw):
        for handler in self.signals.get(signal_name, []):
            handler(*args, **kw)

    def connect(self, signal_name, receiver):
        handlers = self.signals.setdefault(signal_name, [])
        handlers.append(receiver)

    def disconnect(self, signal_name, receiver):
        self.signals[signal_name].remove(receiver)

This class helps promote loose coupling in my games. I imagine that with a few lines from Pygnet, I could probably 'loosely couple' objects over a network. :-)
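To see the loose coupling in action, here is a condensed, runnable copy of the class together with a usage sketch. The signal name and handler are illustrative; any callable will do.

```python
class Mediator(object):
    # condensed copy of the class above, so this sketch runs standalone
    def __init__(self):
        self.signals = {}
    def signal(self, signal_name, *args, **kw):
        for handler in self.signals.get(signal_name, []):
            handler(*args, **kw)
    def connect(self, signal_name, receiver):
        self.signals.setdefault(signal_name, []).append(receiver)
    def disconnect(self, signal_name, receiver):
        self.signals[signal_name].remove(receiver)

# two game objects can communicate without referencing each other:
hits = []
mediator = Mediator()
mediator.connect('player_hit', lambda damage: hits.append(damage))
mediator.signal('player_hit', damage=10)
mediator.signal('player_hit', damage=5)
assert hits == [10, 5]
```

The sender only knows the signal name, never the receivers, which is exactly the decoupling the pattern is for.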

Friday, June 15, 2007

Safer Serialization

The need for a secure / safe serialization module for the built-in Python types has reared its head again.

After looking around at the alternatives, I decided I should simply update my 2005-era gherkin module, and make it an easy installable package. When the cheeseshop comes back online I'll do the upload.

I've added support for sets, and complex numbers. I even tried to make it faster, but ended up in defeat. I had forgotten how much time I had already spent optimizing the thing... My 2007 brain could not best my 2005 brain... hmmm must be getting old.

In other news, the new Super Ajax-ified Media Widget (Scouta Play) went live earlier this week on the front page of scouta.com. It doesn't use gherkin, it uses json. :-) Band of None have also released a new tune, Hoffburger.

Saturday, June 09, 2007

Building Corsair Redux

I've got a few things I want Corsair Redux to feature.
  • User Generated Content - Players to be able to upload models and textures for use in game.
  • Exploration - Exploration and discovery should be key gameplay elements.

While thinking about game rules, I realised I need to answer a few unasked questions...

Is a conquer-the-universe gameplay objective going to be able to create a long lived game? Should it have RPG elements? Should it focus more on trading, economics, or something else?

Maybe a sim-solar-system, with players creating trade routes, discovering and colonising new planets... hmm.

The first Corsair project taught me that it really helps to get gameplay rules correct, before coding starts. :-)

Friday, June 08, 2007

Minimal Pygame Networking Achieved

Hello pygnet! pygnet sits on Twisted, and makes it easy to trade marshal-able Python objects between clients and a server, using TCP.

It also (seems to) integrate well with the de-facto standard pygame event loop, as long as you call the poll function regularly.

So... what is next in my quest to complete a multiplayer-persistent-world-online-game? I think I need to define the game rules, and a backend strategy for implementing them. Also need some way to persist data to disk. ZODB perhaps?

Sunday, June 03, 2007

Ubuntu + iMac = Greased Lightning

I haven't booted my iMac into Ubuntu Linux for a while, due to my newly discovered dependence on iTunes. However, I had to fetch some spreadsheets out of the Ubuntu system, which required me of course to reboot the iMac... select Ubuntu... yada yada.

Woah! I didn't know this machine could run so quickly! Hey, I've even got more eye candy in here! I had forgotten just how quickly Linux can run on an iMac. OSX feels like wading through treacle compared to this... Hmmm, why aren't I using Ubuntu full-time?

Ah. That's right. iTunes. And I still haven't got audio in Ubuntu to work correctly. (It needs max volume to barely hear any sound.)

I might upgrade to Ubuntu/Feisty and see if it resolves the issue.

Update: The upgrade is complete, but I'm still only getting very low volume audio. Looks like it's a known problem though.

Saturday, June 02, 2007

Something looks like a Pylons caching problem...

...but isn't.

For a while now, in my in-development Pylons web app, I've been experiencing what I thought were cache problems. Every X clicks of the refresh button (usually 5 or 10) the page would revert to an older version. I thought this might be some middleware playing tricks on me... but it wasn't. Well not quite.

I use some custom SQLAlchemy magic in my model to dynamically switch between SQLite databases based on the URL, yet still keep the 'core' database available in the same model namespace. The code to setup the correct SQLAlchemy sessions sits in the __before__ method of lib.base.BaseController. As it turns out, I also need to call session.clear() on my SA session object between requests, as it seems some junk is left around, waiting to trick the unwary programmer.

This problem was solved with the help of Band of None and their new tune, Your Myspace Page.

Wednesday, May 30, 2007

Why use Twisted?

I've suspected that Unix sockets sometimes work differently to Win32 sockets. After considering updating my FibraNet networking code, I came across a document, which explains the differences between the two platforms.

The document provides a good argument for tackling the Twisted learning curve, as Twisted takes care of the listed incompatibilities and provides consistent behaviour for the programmer.

Tuesday, May 29, 2007

Game Programming Fun

Over the last few years, I've had a lot of fun building simple games with some rather clever and talented friends. I've done few things which can rival the feeling of achievement I received from building these tiny little pieces of entertainment.

This video shows the highlights of some of our efforts.

I haven't had much time for personal projects lately, but I figured starting something, and working away at it slowly, is better than not doing anything at all. I might even make the effort a bit more public, a bit more open, so others can pitch in if they get interested.

The first thing I'm going to decide... is whether to base my networking code on Twisted, or risk using my own home grown FibraNet. :-)

Tuesday, May 22, 2007

A memcached helper class.

This little class wraps the memcache functionality:
  • It allows objects to expire after a default or custom age.
  • It allows lazy evaluation of default values (think dict.setdefault, with real lazy behavior) by hiding the default value behind a lambda.

import memcache
import time

class Cache(object):
    def __init__(self, cache_addresses, default_timeout=30*60):
        self.default_timeout = default_timeout
        self.cache = memcache.Client(cache_addresses)

    def get(self, key, default=lambda:None, timeout=None):
        obj_timestamp = self.cache.get(key)
        if obj_timestamp is None:
            obj = default()
            self.set(key, obj)
            return obj
        obj, timestamp = obj_timestamp
        if timeout is None: timeout = self.default_timeout
        if time.time() - timestamp > timeout:
            obj = default()
            self.set(key, obj)
        return obj

    def set(self, key, obj):
        self.cache.set(key, (obj, time.time()))

>>> tc = Cache([''])
>>> results = tc.get('some_big_list', lambda:build_some_big_list())

An advantage of using this class, is that if the memcached process dies, your process will continue to work as normal, as long as a default is provided to the .get method call.
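The same expire-by-timestamp trick works with any key/value store. Here is a dict-backed variant, so the technique can be tried without a memcached server; the class name is illustrative.

```python
import time

class DictCache(object):
    """Same expiry-by-timestamp technique as the memcached helper
    above, but backed by a plain dict."""
    def __init__(self, default_timeout=30 * 60):
        self.default_timeout = default_timeout
        self.store = {}

    def get(self, key, default=lambda: None, timeout=None):
        entry = self.store.get(key)
        if entry is None:
            obj = default()
            self.set(key, obj)
            return obj
        obj, timestamp = entry
        if timeout is None:
            timeout = self.default_timeout
        if time.time() - timestamp > timeout:
            obj = default()
            self.set(key, obj)
        return obj

    def set(self, key, obj):
        self.store[key] = (obj, time.time())

c = DictCache()
assert c.get('k', lambda: 42) == 42   # computed once...
assert c.get('k', lambda: 99) == 42   # ...then served from cache
assert c.get('k', lambda: 99, timeout=-1) == 99  # forced expiry
```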

An Aha! Moment

Something clicked in my brain this morning...

lambda is a really neat way to implement lazy evaluation.

cache = {}
def get_from_cache(key, lazy_else=lambda:None):
    if key not in cache:
        cache[key] = lazy_else()
    return cache[key]

>>> obj = get_from_cache('big_list', lambda:function_to_build_big_list())

This is an obviously contrived example (which is slightly broken), but I'm using something similar in a web app, to fetch the results of costly SQL queries from memcached.

It's simple, elegant, and a good argument for lambda's simple syntax.

Saturday, May 12, 2007

Don't kill your HTTP cache-ability!

In the 7 years I've been actively involved in web development, I have never seen any of my peers bother implementing proper controls to allow web proxies, and browser caches to correctly cache dynamic content.

I took the time to do this for a recent project, and I partly credit the HTTP caching code for allowing the site to survive huge traffic surges driven by TechCrunch and BoingBoing articles. I believe correctly implemented HTTP caching for a dynamic site is one of the smartest things a developer can do to mitigate the effect of these surges, and make best use of the CPU cycles and bandwidth of a web server.
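The core of the short-circuit idea can be sketched framework-free: derive a cheap validator for the resource, and answer 304 Not Modified before doing any expensive rendering. The function names here are illustrative, not any framework's API.

```python
import hashlib

def make_etag(resource_id, last_modified):
    # cheap to compute: one value from the database, no template rendering
    raw = '%s:%s' % (resource_id, last_modified)
    return '"%s"' % hashlib.md5(raw.encode('utf-8')).hexdigest()

def render_expensive_page():
    return '<html>...</html>'

def respond(if_none_match, resource_id, last_modified):
    etag = make_etag(resource_id, last_modified)
    if if_none_match == etag:
        return 304, etag, ''        # client/proxy cache is still valid
    body = render_expensive_page()  # only rendered on a cache miss
    return 200, etag, body

status, etag, _ = respond(None, 7, '2007-05-12')
assert status == 200
status, _, body = respond(etag, 7, '2007-05-12')
assert status == 304 and body == ''
```

A proxy or browser that sends `If-None-Match` back gets a tiny 304 response, which is where the bandwidth and CPU savings come from.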

For a long time though, I struggled with strange caching bugs, and I ended up having to turn off the caching mechanisms for authenticated users. Not an optimal solution... then last week I came across a comment on mnot.net which spelt out my error.

Changing the content based on IP address or cookies really damages cacheability and the idea that a GET is idempotent, as I understand it.

This is exactly what I had been doing... dynamic customisation of page content based on whether a user is authenticated... or not. Argh!

Unfortunately, most of the Python web frameworks I've worked with encourage, and even demonstrate, this technique in documentation and examples.

The solution to this problem is shown in the same comment.

Rather than separating a user identifier...

Cookie: userid={USER_ID}

Why not try something like this for personalization...


Thanks for the tip l.m.orchard!

Monday, May 07, 2007

Brain Impedance and ZODB

Over the weekend, I've done some more programming in a Pylons application with ZODB.

I've discovered that my brain is finding it hard to let go of all the relational constraints and paradigms which I have been working with for the last 11 years. I keep imagining that I need to create an index for this or that, so that I can look it up real-quick-like... But then I realize a sequential scan isn't going to be that costly... so I should just write the simplest-thing-that-works... and it does just work!

ZODB really makes prototyping a web app simple and fast. I'm glad I took the time to learn how it works; it's already becoming a valuable tool.

As far as Pylons goes... I'm liking it. When compared to TurboGears, it feels much more composed rather than integrated. This might be a good, or a bad thing, depending on your point of view.

My current challenge is getting the methods on a RESTResource to return data encoded in XML, HTML or JSON, depending on values in the HTTP Accept header.
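The selection step itself is simple enough to sketch. This version ignores q-values and wildcards, and the renderer table is illustrative; it is not the RESTResource API.

```python
# Pick a renderer based on the media types in the Accept header.
# Simplified: real Accept parsing also honours q-values and wildcards.
def choose_renderer(accept_header, renderers, default='text/html'):
    for part in (accept_header or '').split(','):
        media_type = part.split(';')[0].strip()
        if media_type in renderers:
            return renderers[media_type]
    return renderers[default]

renderers = {
    'application/json': lambda data: '{"name": "%s"}' % data['name'],
    'text/html': lambda data: '<p>%s</p>' % data['name'],
    'text/xml': lambda data: '<name>%s</name>' % data['name'],
}

render = choose_renderer('application/json, text/html;q=0.9', renderers)
assert render({'name': 'spam'}) == '{"name": "spam"}'
assert choose_renderer(None, renderers)({'name': 'x'}) == '<p>x</p>'
```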

Thursday, May 03, 2007

I am no longer a Twit.

Though some may still argue otherwise.

The 'Delete My Account' account button is an amazing innovation.

Tuesday, May 01, 2007

Form Authentication and REST

I started playing with AuthKit inside a Pylons App last week. AuthKit works nicely, just like TurboGears identity management, but they both share a common problem when working with RESTful controllers.

When a 401 error is raised, the framework takes over and redirects to a login form. The login form then checks the validation, and redirects back to the original page, in effect converting a GET request into a POST request.

In your Pylons app, this could have the effect of calling your create method in your controller, rather than the index method.

I think the lesson is: only raise 401 on methods which are called by a POST request, or use standard HTTP Authentication systems.

Which is the lesser evil?

Tuesday, April 24, 2007

User Interfaces, as a Web Service

I came across JS-Kit via TechCrunch. JS-Kit provides user interface components as a web service.

This is a little embedded poll.

Rate this post:

It's a great idea. One issue I do see with building interfaces this way, is that they can make your content inaccessible to search engine spiders, and probably your google ads too.

Thursday, April 19, 2007

The State of Web Development, 2007.

iiNet (my ISP) has started offering movies on demand, by partnering with ANYTIME on VOLT. "Awesome! I can use this with my new Apple TV!" I think to myself. Sadly, it was not to be. I was greeted with a brain dead message telling me to restart my browser and load the site in IE6.

Flashback to 1999... egads. Do people really still develop this way? Whoever is running this company needs to sack the CTO.

Friday, April 13, 2007

ZODB/ZEO + Pylons?

I'm investigating ZODB for a small project, and am wondering how well it will fit in with a web app which is using Pylons.

I don't see any hurdles so far, but I am a little worried about a comment made in the ZODB/ZEO Programming Guide.

ZEO is written using asyncore, from the Python standard library. It assumes that some part of the user application is running an asyncore mainloop. For example, Zope run the loop in a separate thread and ZEO uses that. If your application does not have a mainloop, ZEO will not process incoming invalidation messages until you make some call into ZEO.

Does this mean that the incoming network buffer could fill up with invalidation messages, if my code doesn't make any calls into ZEO for a period of time?

I can see this possibly happening when running multiple instances of a Pylons app. Hmmm.

Sunday, April 01, 2007

Particle Fun for Pyweek

My recent experiments with Particle Systems have come in handy for Pyweek.

I'm thinking of building something... using particles. :-)

This particular display is generated from mesh data. I'm thinking of creating ways of morphing the point cloud data using other mesh points, and maybe magnet attractors or something. I could even make it explode, or implode. Hmmm. How do I work this into a game?

Perhaps some kind of abstract shooter... or a weird puzzle game. Hmmm, still not sure what I'll do.

Thursday, March 29, 2007

Letting Go...

I've been really keen to get something together over the last few months for the Nullarbor game compo, but I've had to let my entry slide. I've simply not had the time; real life commitments and real life work kept getting in the way.

The venture was not entirely unproductive, however. A friend composed a great tune for my demo, which he called 'Under Sufferance'. If you like electronic stuff, it's worth a listen. (Try the download, as the stream uses Windows Media Player. Egh.)

Wednesday, March 28, 2007

Checking out Pylons

Over the last few days I've been evaluating Pylons, because I'm having some trouble making TurboGears behave with SQLAlchemy.

The first thing that I noticed, was the awesome thru-the-web-debugger which pops up in your browser when your code raises an uncaught exception. Wow. That's gonna come in handy.

The other great thing is that Pylons doesn't try to look after your database for you. You've got to handle all that yourself, which is great, because I want to be responsible for that code. The good news is that most of my TurboGears model could be copied straight over into Pylons, and it just worked.

One thing I don't really like about Pylons is Routes. I know I'm going against the grain here, but I prefer to override CherryPy's .default method rather than write a bunch of strange looking rules. In the end, that's probably what I'm going to do with Routes: route every request to my own dispatch method, which, even though some may consider it hairy, provides a cleaner URL. I don't believe HTTP methods belong in a RESTful URL, which is what Routes seems to want to do.
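The catch-all idea looks something like this: one dispatch method walks the URL path, object-publisher style, similar in spirit to CherryPy's .default. This is a toy sketch, not actual Routes configuration.

```python
# Route every request to one dispatch method that walks the URL path
# as attribute lookups on a controller tree (illustrative sketch).
class Controller(object):
    def default(self, *path):
        node = self
        for part in path:
            node = getattr(node, part, None)
            if node is None:
                return '404'
        return node() if callable(node) else '404'

    def users(self):
        return 'all users'

c = Controller()
assert c.default('users') == 'all users'
assert c.default('nope') == '404'
```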

Sunday, March 25, 2007

Scouta gets TechCrunched

Scouta was TechCrunched last Friday. So far, the TurboGears / lighttpd combination has been holding together quite well, no hiccups at all.

Wednesday, March 21, 2007

Particle Simulations

I've left the super-optimized, runtime-bytecode-compiling 3D engine project for a short while, so I can focus on a demo for the Nullarbor demoparty.

Competing in Nullarbor will be particularly challenging for me, simply because I'm using Python, and can't afford expensive CPU consuming algorithms.

I've decided to base the demo around particle system effects, mainly because Numpy is able to make these sorts of simulations relatively fast.

A couple of hints for would-be particle-system programmers: Use additive blending, use the GL point sprite extension, and turn off depth testing. If I had found these 3 hints spelled out in one sentence, I would have saved quite a few hours research. :-)

In other news, someone decided to debianize my FibraNet package. People actually _use_ this code? :-)

Sunday, March 18, 2007

Runtime Bytecode Assembly

Yesterday, I realized I could transform a Scenegraph DAG structure into a flat list, and iterate over that each frame.

Today, I realized I can do a lot better than that. I can actually generate bytecode from that structure, turn it into a function, and call that function for each frame.

Wow. This is really cool, fun stuff. It provides immediate, obvious optimizations. I've effectively unrolled a render loop and removed all the dispatch mechanisms from my inner loop.

Fortunately it's quite easy to do, using the PEAK ByteCodeAssembler and the 'new' module in the standard library. This part of the PEAK documentation was the most helpful.

I'll post some code soon, I promise. :-)

Saturday, March 17, 2007

DAG finished, Interpreter next.

I've fixed up a problem or two with my DAG implementation, and uploaded it to the cheeseshop. It must be one of the smallest packages ever! :-)

While writing some tests to make sure my traversal function was processing nodes in the desired order, I realised that traversing the graph results in a 1D list of nodes (processing instructions) with a bunch of stack pops and pushes in the right places. Then I realised... this list is never going to change, unless I add or remove nodes, which in practice, rarely happens.

This makes the DAG really only one phase of the process, which can be re-visited as needed, rather than on every processing pass. The DAG is used to produce a linear list of instructions, which I can feed into an interpreter.

This sounds like a good optimisation to me, and a fun diversion for a Saturday afternoon.
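The idea can be sketched in a few lines: walk the graph once, and record a flat program of process/push/pop instructions that an interpreter can replay every frame. The node class and instruction names here are illustrative.

```python
def flatten(node, program=None):
    """One-off traversal that compiles the graph into a flat list of
    instructions; the per-frame interpreter just replays the list."""
    if program is None:
        program = []
    program.append(('process', node))
    children = getattr(node, 'children', [])
    if children:
        program.append(('push',))
        for child in children:
            flatten(child, program)
        program.append(('pop',))
    return program

class N(object):
    """Minimal stand-in for a scenegraph node."""
    def __init__(self, name, *children):
        self.name = name
        if children:
            self.children = list(children)

root = N('root', N('a', N('b')), N('c'))
ops = [op[0] if len(op) == 1 else (op[0], op[1].name)
       for op in flatten(root)]
assert ops == [('process', 'root'), 'push',
               ('process', 'a'), 'push', ('process', 'b'), 'pop',
               ('process', 'c'), 'pop']
```

The push/pop instructions mark where a renderer would save and restore state (e.g. the transform stack), so the flat list preserves the tree's structure without any per-frame recursion.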

Friday, March 16, 2007

Walking a Graph

I must have written at least 10 different DAG implementations, but I feel I still haven't found the perfect, Pythonic implementation.

I use these things to build and experiment with scenegraph based graphics engines, so therefore, the requirements are, in order of priority:

  1. Speed.

  2. Flexibility.

  3. Elegance.

I need speed, because I'd like to experiment with graphics techniques which require multiple traversals over the graph for each frame before it gets displayed to the screen.

I've tried, and given up previously, but now, I'm trying to approach the problem from a different angle, armed with Python2.5.

So, who wants to help build a super fast depth first traversal algorithm? :-)

This is the code for building my graph, with some extra features I'll need later for talking to OpenGL.

import weakref

class Node(object):
    """A node in a graph."""
    _instances = weakref.WeakValueDictionary()
    _instance_count = 0

    def __new__(cls, *args, **kw):
        instance = object.__new__(cls, *args, **kw)
        instance._id = Node._instance_count
        Node._instances[instance._id] = instance
        Node._instance_count += 1
        return instance

    @classmethod
    def get(cls, id):
        """Returns a node by its _id attribute."""
        return cls._instances[id]

    def __repr__(self):
        return "<%s #%s object>" % (self.__class__.__name__, self._id)

class Composite(Node):
    """A node in a graph, composed of other nodes."""
    def __init__(self, *children):
        self.children = list(children)

    def add(self, *nodes):
        self.children.extend(nodes)

    def remove(self, *nodes):
        for node in nodes:
            self.children.remove(node)

... and this is the code I'll use to benchmark my algorithms... with a free naive recursive walker included!

if __name__ == "__main__":
    import time

    def dispatch(node):
        pass

    def walk(node, indent=0):
        dispatch(node)
        for child in getattr(node, 'children', []):
            walk(child, indent+1)

    def build_test(node, depth=0):
        if depth > 5: return
        n = Composite()
        node.add(n)
        for i in xrange(5):
            n.add(*(Node() for x in xrange(5)))
            build_test(n, depth+1)

    root = Composite()
    build_test(root)
    t = time.clock()
    walk(root)
    print time.clock() - t


This is the fastest traversal function so far. It runs 1.37 times faster than the recursive walk. On my machine it walks 289261 nodes in 0.378 seconds. I think I can forget about this now, and work on something else :-)

from collections import deque

def stack_walk(root):
    stack = deque([root])
    stack_pop = stack.pop
    stack_extendleft = stack.extendleft
    while stack:
        node = stack_pop()
        dispatch(node)
        if hasattr(node, 'children'):
            stack_extendleft(node.children)

Of course, after writing a unit test to test the processing order of the traversal function, I discovered I got it wrong...

It should look like this:

from collections import deque

def traverse(root, dispatch):
    stack = deque([root])
    stack_pop = stack.popleft
    stack_extend = stack.extend
    stack_rotate = stack.rotate
    while stack:
        node = stack_pop()
        dispatch(node)
        if hasattr(node, 'children'):
            stack_extend(node.children)
            stack_rotate(len(node.children))

Having TurboGears Problems

I'm using TurboGears with the SQLAlchemy ORM Library for a couple of large projects. SQLAlchemy is magic, and TurboGears has definitely provided a rapid development environment for both projects, however it is becoming clear that the second-class-citizen status of SQLAlchemy is becoming a problem in one of my applications, where I need greater control over database transactions and exception handling. In this case, TG is just getting in the way.

It looks like most of these SA integration problems are being addressed in some future TG release, but I can't really wait, and I'm not inclined to fix the TG code (all that multimethod RuleDispatch code makes my brain hurt). I'm going to write my own specialised expose/identity/validate decorators, and try a specialised CherryPy3/Genshi/SQLAlchemy combination without TG.

Sunday, March 11, 2007

Interviewing at Google

I had a preliminary phone interview with Google last Wednesday. Seems they are chasing Python programmers for positions at YouTube, and my name came up on their radar.

The interview went well (for what is was), but since then I've discovered it may be very difficult to get the required E-3 visa which is needed for Australians to do this kind of work in the US.

The problem is, I need to qualify as a 'skilled person', which requires a BSc, or equivalent (8 years) work experience. I have part of a degree, which I'm not likely to finish, and only 6 years of commercial work experience, which basically means I don't qualify for the visa.

Anyone else had to tackle this problem before?

Saturday, March 10, 2007

Nullarbor Approaches

The Nullabor Demoparty is approaching quickly. This year it is being held at the GO3 Conference at the end of March.

I've still not started my entry in the competition, simply due to a lack of available free time. I'm not too worried though, last year I wrote Krool Joolz over one week, and it was good enough to come second... though I do expect the quality of the competition to be much higher this year... hmmm.

I think I'd better start something. Soon.

Friday, March 09, 2007

Sometimes, you _need_ AJAX.

The latest version of Scouta went live this week, with a cool new feature.

An asynchronous commenting system.

Asynchronous comments are very important in Scouta, as we want discussions to sit right next to the media item, yet we don't want to interrupt the video or audio stream if the user decides to post a comment while the media item is playing. At the same time, we can't generate the conversation thread completely inside Javascript, because that would prevent conversations being spidered by search engines...

Anyhow, MochiKit and JSON came to the rescue, I ended up sending back the HTML fragment (which represents the comment) inside the POST request, which then gets dropped into the UL container. I think this is a pretty common technique, and it works well.

To achieve all this asynchronous magic, I have needed to write a _lot_ of Javascript, which has made me really appreciate and understand the benefit of multiline anonymous functions. It sure would be great to have these in Python, though I cannot imagine what the syntax might look like...

Wednesday, February 21, 2007

Scouta Lives.

Scouta has launched.

Scouta is a recommendation system for videos, podcasts and other media. It is built on Python and TurboGears technology.

Sign up, poke around, and let us know what you think.

Wednesday, February 14, 2007

When I close my eyes...

I see purple, green, pink, blue and periwinkle planets. I see hundreds, sometimes thousands of colourful, floating triangles exploding in a particle system frenzy amidst rapidly decrementing digits, floating in the gaseous wasteland of a nebula.

No, I'm not hallucinating, I've just been playing too much Galcon.

If you haven't played Galcon yet, you are fortunate. It steals too much of my time; beware lest it steal valuable pieces of your life also. :-)

I've been playing for a while now, and I'm still amazed, everyday, by the new and innovative strategies my opponents come up with.

Now please excuse me, I have a Galaxy to Conquer!

Friday, February 09, 2007


There has been a lot of talk about Django, TurboGears, Pylons, Rails and others. People love crossing swords about this sort of stuff, and the Django Pronouncement added an interesting catalyst to the mix...

One point I haven't seen highlighted, and which I think some people might be missing, is the idea that competition breeds excellence. I don't think I really understood this myself, until I experienced its effects firsthand.

It was August 2005, and I was participating in the Pyweek game programming competition with a small team of friends. The first four days of the competition were not good days for us. We couldn't make any firm decisions, we were worried about over-extending ourselves, and we were getting slightly discouraged.

Then... we saw what some of the other teams were doing. 3D characters on a 3D board. Cool, liquid smooth interfaces. Bouncy, addictive soundtracks. Seeing what others were achieving drove our small team on. We pushed the envelope as far as we could, and ended up producing an entry far beyond our original expectations, which, IIRC, received a perfect score in the production category.

Looking back, I can see that without (worthy!) competition, our small team would probably have given up, and gone home.

How can this apply to Python web frameworks? If there were no competing frameworks, whichever web technology we had would quickly stagnate. Competition helps a team move quickly, make decisions, and stick with them. If they are a smart team, hopefully they will also make good decisions!

...so, I'm not worried about choosing between Django or TurboGears or Pylons or Rails or Xxx. I'll pick the right tool for the job, when it's time to do so. In the meantime, I'm happy that there is competition in the web framework world, because it means I get to use the best tools available, and it also means they are going to keep getting better.

Wednesday, February 07, 2007

RESTful TurboGears

Over the last 6 months, I've been involved with two large projects building web applications in the TurboGears framework.

We've put together some rather complex systems very quickly, and I believe our success was aided in part by a CherryPy controller that let us easily map RESTful, elegant URLs onto a sensible controller structure. We made sure that controller code did a minimum of work. Most of the time, the controller code only manipulated the model via method calls on the model objects.

For the uninitiated, a URL can usually be considered RESTful when the URL path identifies the resource being fetched (via a GET request) or modified (via a POST request).

An excellent side-effect of having a RESTful URL scheme is that it encourages clear thinking about your application, and if you have the right controller class, it can accelerate your normal web development cycle beyond rapid! At least, that's the way it happened to me :-)
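To make the idea concrete, here is a tiny, hypothetical sketch (the `route` function and its names are mine, for illustration only; this is not TurboGears code) of how a RESTful path plus the HTTP method select a controller action:

```python
# Hypothetical sketch of RESTful dispatch: the path names the
# resource, the HTTP method picks the action.
def route(http_method, path):
    tokens = [t for t in path.split('/') if t]
    if not tokens:
        # "/" is the collection itself
        return 'index' if http_method == 'GET' else 'new'
    # "/simon" names a single resource within the collection
    return 'get' if http_method == 'GET' else 'modify'

print(route('GET', '/'))        # lists the collection
print(route('POST', '/simon'))  # modifies one resource
```

The Resource class below implements essentially this decision tree, with loading, children and caching layered on top.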

The Resource class presented below is derived from a class developed for the Scouta project, and the fellows over there have kindly given permission for me to release it. It provides integration with the TurboGears validation framework, and also works with web and browser caches by doing last-modified-date checks when requested.

The original inspiration for this controller comes from a recipe in the Python Cookbook.

import cherrypy
from turbogears import redirect, expose, error_handler
from datetime import datetime
from time import gmtime, strptime

def parse_http_date(timestamp_string):
    if timestamp_string is None:
        return None
    test = timestamp_string[3]
    if test == ',':
        format = "%a, %d %b %Y %H:%M:%S GMT"   # RFC 1123
    elif test == ' ':
        format = "%a %b %d %H:%M:%S %Y"        # asctime()
    else:
        format = "%A, %d-%b-%y %H:%M:%S GMT"   # RFC 850
    return datetime(*strptime(timestamp_string, format)[:6])

class Resource(object):
    children = {}

    def __init__(self):
        error_function = getattr(self.__class__, 'error', None)
        if error_function is not None:
            #If this class defines an error handling method (self.error),
            #then we should decorate our methods with the TG error_handler.
            self.get = error_handler(error_function)(self.get)
            self.modify = error_handler(error_function)(self.modify)
            self.new = error_handler(error_function)(self.new)

    def get_child(self, token):
        return self.children.get(token, None)

    def default(self, *path, **kw):
        request = cherrypy.request
        path = list(path)
        resource = None
        http_method = request.method.lower()
        #check the http method is supported.
        try:
            method_name = dict(get='get', post='modify')[http_method]
        except KeyError:
            raise cherrypy.HTTPError(501)

        if not path: #If the request path is to a collection.
            if http_method == 'post':
                #If the method is a post, we call self.create which returns
                #a class which is passed into the self.new method.
                resource = self.create(**kw)
                assert resource is not None
                method_name = 'new'
            elif http_method == 'get':
                #If the method is a get, call the self.index method, which
                #should list the contents of the collection.
                return self.index(**kw)
            else:
                #Any other methods get rejected.
                raise cherrypy.HTTPError(501)

        if resource is None:
            #if we don't have a resource by now (it wasn't created),
            #then try and load one.
            token = path.pop(0)
            resource = self.load(token)
            if resource is None:
                #No resource found?
                raise cherrypy.HTTPError(404)

        #if we have a path, check if the first token matches this
        #class's children.
        if path:
            token = path.pop(0)
            child = self.get_child(token)
            if child is not None:
                child.parent = resource
                #call down into the child resource.
                return child.default(*path, **kw)
            raise cherrypy.HTTPError(404)

        if http_method == 'get':
            #if this resource has children, make sure it has a '/'
            #on the end of the URL
            if getattr(self, 'children', None) is not None:
                if request.path[-1:] != '/':
                    redirect(request.path + "/")
            #if the client already has the request in cache, check
            #if we have a new version, else tell the client not
            #to bother.
            modified_check = request.headers.get('If-Modified-Since', None)
            modified_check = parse_http_date(modified_check)
            if modified_check is not None:
                last_modified = self.get_last_modified_date(resource)
                if last_modified is not None:
                    if last_modified <= modified_check:
                        raise cherrypy.HTTPRedirect("", 304)

        #run the requested method, passing it the resource
        method = getattr(self, method_name)
        response = method(resource, **kw)
        #set the last modified date header for the response
        last_modified = self.get_last_modified_date(resource)
        if last_modified is None:
            last_modified = datetime(*gmtime()[:6])

        cherrypy.response.headers['Last-Modified'] = (
            datetime.strftime(last_modified, "%a, %d %b %Y %H:%M:%S GMT"))

        return response

    def get_last_modified_date(self, resource):
        """returns the last modified date of the resource."""
        return None

    def index(self, **kw):
        """returns the representation of a collection of resources."""
        raise cherrypy.HTTPError(403)

    def load(self, token):
        """loads and returns a resource identified by the token."""
        return None

    def create(self, **kw):
        """returns a class or function which will be passed into the
        self.new method."""
        raise cherrypy.HTTPError(501)

    def new(self, resource_factory, **kw):
        """uses the resource factory to create a resource and commit it to
        the model."""
        raise cherrypy.HTTPError(501)

    def modify(self, resource, **kw):
        """uses kw to modify the resource."""
        raise cherrypy.HTTPError(501)

    def get(self, resource, **kw):
        """fetches the resource, and returns a representation of the
        resource."""
        raise cherrypy.HTTPError(501)
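As a quick sanity check of the If-Modified-Since handling, the date parsing can be exercised on its own. The helper is restated here (a copy, so the snippet runs standalone) and fed the three date formats an HTTP client may send:

```python
from datetime import datetime
from time import strptime

# Restated copy of the parse_http_date helper above, so this
# snippet runs standalone.
def parse_http_date(timestamp_string):
    if timestamp_string is None:
        return None
    if timestamp_string[3] == ',':
        fmt = "%a, %d %b %Y %H:%M:%S GMT"   # RFC 1123
    elif timestamp_string[3] == ' ':
        fmt = "%a %b %d %H:%M:%S %Y"        # asctime()
    else:
        fmt = "%A, %d-%b-%y %H:%M:%S GMT"   # RFC 850
    return datetime(*strptime(timestamp_string, fmt)[:6])

# The same instant, in all three formats HTTP allows:
print(parse_http_date("Sun, 06 Nov 1994 08:49:37 GMT"))
print(parse_http_date("Sunday, 06-Nov-94 08:49:37 GMT"))
print(parse_http_date("Sun Nov  6 08:49:37 1994"))
```

All three calls yield the same datetime, which is what lets the controller compare any client-supplied If-Modified-Since header against the resource's last modified date.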

This Resource class looks complicated, but it really makes writing nice URL systems in TurboGears a piece of cake. A contrived example will illustrate best. :-)

The code below demonstrates how to set up two classes that allow users to be listed, individual users viewed, user posts listed, and individual posts viewed using these URLs.


Notice how the classes integrate quite nicely with TurboGears validators. You only need to define one error function, and the Resource controller makes sure it gets called if validation fails on any of your get, modify or new method calls. This example uses SQLAlchemy for its model.

from turbogears import validate, validators
#'model' is assumed to be the application's SQLAlchemy model module.

class Posts(Resource):
    def load(self, post_id):
        return model.Post.get_by(user_id=self.parent.user_id, post_id=post_id)

    def index(self):
        return dict(posts=model.Post.select_by(user_id=self.parent.user_id))

    def get(self, post):
        return dict(post=post)

class Users(Resource):
    children = dict(posts=Posts())

    def index(self):
        return dict(users=model.User.select())

    def load(self, user_name):
        return model.User.get_by(user_name=user_name)

    def create(self, **kw):
        return model.User

    def error(self, tg_errors=None):
        return tg_errors

    def new(self, User, **kw):
        new_user = User(**kw)
        return dict(user=new_user)

    def get_last_modified_date(self, user):
        return user.last_modified_date

    def get(self, user):
        return dict(user=user)

    @validate(validators=dict(
        display_name=validators.UnicodeString(max=255, if_empty=None)))
    def modify(self, user, **kw):
        user.display_name = kw['display_name']
        user.email_address = kw['email_address']
        return dict(user=user)

You may notice that the Resource class has no support for PUT or DELETE requests. I've intentionally left these out, as they are not well supported across all browsers. Fortunately, we don't need them.

To insert a new user in the above example, simply post to /users/ and the controller will call the new method. If you want to delete, you need to treat your deletes as modify operations, e.g., post to /users/simon and set a delete flag. This is not very elegant, but it is a good compromise, as I've rarely (never!) needed to do a real delete call on a resource.
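A minimal sketch of that compromise (the `DummyUser` class and the `deleted` attribute are my own inventions, not from the Scouta code): the modify method checks the posted form for a delete flag and soft-deletes the resource instead of removing it.

```python
class DummyUser(object):
    # stand-in for a mapped model object
    def __init__(self):
        self.deleted = False
        self.display_name = 'simon'

def modify(user, **kw):
    # POST to /users/simon with delete=1 -> treat the modify as a delete
    if kw.get('delete'):
        user.deleted = True   # soft-delete: the row stays in the database
        return dict(user=None)
    user.display_name = kw.get('display_name', user.display_name)
    return dict(user=user)

u = DummyUser()
modify(u, delete='1')
print(u.deleted)  # True
```

The soft-delete flag also means views just need to filter on `deleted`, and an accidental delete is reversible.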

Tuesday, February 06, 2007

SOAP... is a "hostile overlay of the Web"

Haha! It seems that Gartner's thinking on Web services agrees with statements I've made for years...
Web Services based on SOAP and WSDL are "Web" in name only. In fact, they are a hostile overlay of the Web based on traditional enterprise middleware architectural styles that has fallen far short of expectations over the past decade.

Yep. SOAP sucks. Anyone building enterprise architecture with SOAP (mostly .NET people) has made a big mistake, and Gartner has spelled it out for them.

Long live the RESTful architectural style!

