Wednesday, June 25, 2008

Google App Engine performance - Part 2

So the first analysis was to look at the gadget performance with 40,000 pixels, which gives a fair old number of calculations (it's 16 iterations, for those that want to know). My next consideration was what would happen with a larger image that was even further over the threshold. Would that see more issues?
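I haven't reproduced the actual gadget code here, but an escape-time style per-pixel loop capped at 16 iterations looks roughly like this (a sketch only; the function names and coordinate ranges are my assumptions, not the real code):

```python
# Sketch of an escape-time style per-pixel calculation capped at
# 16 iterations. Illustrative only: names, coordinate ranges and the
# render helper are assumptions, not the actual gadget code.
MAX_ITER = 16

def pixel_value(cx, cy):
    """Iterate z = z^2 + c and return the step at which z escapes."""
    zx, zy = 0.0, 0.0
    for i in range(MAX_ITER):
        if zx * zx + zy * zy > 4.0:  # escaped the radius-2 circle
            return i
        zx, zy = zx * zx - zy * zy + cx, 2.0 * zx * zy + cy
    return MAX_ITER

def render(width, height):
    """width * height pixels; 200 x 200 gives the 40,000 pixel case."""
    return [[pixel_value(-2.0 + 3.0 * x / width, -1.5 + 3.0 * y / height)
             for x in range(width)]
            for y in range(height)]
```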



Again it's that cliff (I need to go and look at the code history), but again it's remarkably stable after that point.


I know I shouldn't be surprised, but this is several times over the CPU quota limit (about 5 times, in fact), so I was expecting to see a bit more variation as it caned the processor.



Now this shows just how consistent the processing is. It's important to note here that this isn't what Google App Engine is being pitched at right now; given they've pitched it at data-read-intensive apps, I'm impressed at just how level the capacity is. A 2 × standard deviation sitting around +/- 2%, with even the "exceptional" items only bumping up around the 5% mark, indicates either a load of spare capacity (and therefore not much contention) or some very clever loading.
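For anyone wanting to run the same stats over their own timings, it's just this (a minimal sketch; the timing list is illustrative, not my measured data):

```python
import math

def stability_stats(timings):
    """Mean, the 2-sigma band as a percentage of the mean, and the
    worst single deviation as a percentage of the mean."""
    n = len(timings)
    mean = sum(timings) / n
    sigma = math.sqrt(sum((t - mean) ** 2 for t in timings) / n)
    two_sigma_pct = 100.0 * 2.0 * sigma / mean
    worst_pct = 100.0 * max(abs(t - mean) for t in timings) / mean
    return mean, two_sigma_pct, worst_pct

# Illustrative timings only, not the measured data.
mean, two_sigma, worst = stability_stats([1.00, 1.01, 0.99, 1.02, 0.98, 1.00])
print("mean %.3fs, 2-sigma +/- %.1f%%, worst outlier %.1f%%"
      % (mean, two_sigma, worst))
```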

The penultimate bit I wanted to see was whether the four-fold increase in calculations resulted in a linear increase in time.



What this graph shows is the raw performance and then the weighting (i.e. the blog performance divided by 4). Zooming in to compare the blog (160,000 pixel) weighted figures against the straight 40,000 pixel gadget figures, we get:

Which, very impressively, means that there is a slight performance gain from doing more calculations (and that really is a fraction of 1%: 0.5 means 0.5%, not 50%). It's not enough to be significant, but it is enough to say that the performance is pretty much linear even several times above the performance quota. The standard deviations are also pretty much in line, which indicates a decent amount of stability at this level.
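The weighting calculation itself is trivial, but for completeness here's a minimal sketch (the numbers are illustrative, not the measured ones):

```python
def weighted_gain_pct(small_times, big_times, factor=4):
    """Divide the big-image times by the calculation factor and compare
    the means; positive means the bigger run is faster per calculation."""
    small_mean = sum(small_times) / float(len(small_times))
    weighted_mean = sum(big_times) / float(len(big_times)) / factor
    return 100.0 * (small_mean - weighted_mean) / small_mean

# Illustrative numbers only: a big-image mean just under 4x the small
# mean shows up as a fraction-of-a-percent gain, e.g. 0.5 meaning 0.5%.
print("%.2f%% gain" % weighted_gain_pct([1.00, 1.01, 0.99], [3.98, 3.97, 3.99]))
```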

So while this isn't a linear scalability test in terms of horizontal scaling, it does indicate that you are pretty much bound to one CPU, and I'm not seeing much in the way of swapping out (you'd expect the max/min and the standard deviation on the larger run to be higher if swapping was a problem). So either Google have a massive amount of spare capacity or they are doing clever scheduling with what they have.

The final question is: what is the cut-off point...
