Walk through of how to build a simple protocol using Kamaelia

October 07, 2008 at 02:23 AM | categories: python, oldblog | View Comments

A question was raised on the Kamaelia list asking for a demo of how to nest protocols using Kamaelia. To keep things clear, and to avoid a protocol mismatch, I decided to build a protocol which is a composite of the following three things:
  • Something clumping data into messages
  • JSON data being forwarded in serialised form as those messages
  • The serialised messages encrypted (naively) using DES encryption.
Rather than repost all the steps leading to that here, you can find the post that does just that here:
You'd probably want to take the resulting code a little further than I've taken it if you were to use it in production, since (for example) DataChunker is designed to be used with larger & larger numbers of messages. It would be relatively simple to transform this into a pub/sub system/router for sending/receiving JavaScript fragments, but I wanted to leave the example looking relatively simple for now. I may come back to that as a pub/sub demo.

The final example looks like this:
 
#!/usr/bin/python

import cjson
import Axon

from Kamaelia.Chassis.Pipeline import Pipeline
from Kamaelia.Util.Chooser import Chooser
from Kamaelia.Util.Console import ConsoleEchoer

from Kamaelia.Internet.TCPClient import TCPClient
from Kamaelia.Chassis.ConnectedServer import ServerCore

from Kamaelia.Protocol.Framing import DataChunker, DataDeChunker
from Kamaelia.Apps.Grey.PeriodicWakeup import PeriodicWakeup

from Crypto.Cipher import DES

messages = [ {"hello": "world" },
             {"hello": [1,2,3] },
             {"world": [1,2,3] },
             {"world": {"game":"over"} },
           ]*10

class MarshallJSON(Axon.Component.component):
    def main(self):
        while not self.dataReady("control"):
            for j in self.Inbox("inbox"):
                j_encoded = cjson.encode(j)
                self.send(j_encoded, "outbox")
            if not self.anyReady():
                self.pause()
            yield 1

class DeMarshallJSON(Axon.Component.component):
    def main(self):
        while not self.dataReady("control"):
            for j in self.Inbox("inbox"):
                j_decoded = cjson.decode(j)
                self.send(j_decoded, "outbox")
            if not self.anyReady():
                self.pause()
            yield 1

class Encoder(object):
    """Null encoder/base encoder - returns the same string
    for encode/decode"""
    def __init__(self, key, **argd):
        super(Encoder, self).__init__(**argd)
        self.__dict__.update(argd)
        self.key = key
    def encode(self, some_string):
        return some_string
    def decode(self, some_string):
        return some_string

class DES_CRYPT(Encoder):
    def __init__(self, key, **argd):
        super(DES_CRYPT, self).__init__(key, **argd)
        self.key = self.pad_eight(key)[:8]
        self.obj = DES.new(self.key, DES.MODE_ECB)

    def encode(self, some_string):
        padded = self.pad_eight(some_string)
        encrypted = self.obj.encrypt(padded)
        return encrypted

    def decode(self, some_string):
        padded = self.obj.decrypt(some_string)
        decoded = self.unpad(padded)
        return decoded

    def pad_eight(self, some_string):
        X = len(some_string)
        if X % 8 != 0:
            pad_needed = 8 - (X % 8)
        else:
            pad_needed = 8
        PAD = pad_needed * chr(pad_needed)
        return some_string + PAD

    def unpad(self, some_string):
        x = ord(some_string[-1])
        return some_string[:-x]

class Encrypter(Axon.Component.component):
    key = "ABCD"
    def main(self):
        crypter = DES_CRYPT(self.key)
        while not self.dataReady("control"):
            for j in self.Inbox("inbox"):
                j_encoded = crypter.encode(j)
                self.send(j_encoded, "outbox")
            if not self.anyReady():
                self.pause()
            yield 1

class Decrypter(Axon.Component.component):
    key = "ABCD"
    def main(self):
        crypter = DES_CRYPT(self.key)
        while not self.dataReady("control"):
            for j in self.Inbox("inbox"):
                j_decoded = crypter.decode(j)
                self.send(j_decoded, "outbox")
            if not self.anyReady():
                self.pause()
            yield 1

def protocol(*args, **argd):
    return Pipeline(
        PeriodicWakeup(message="NEXT", interval=1),
        Chooser(messages),
        MarshallJSON(),
        Encrypter(),     # Encrypt on the way out
        DataChunker(),
    )

def json_client_prefab(ip, port):
    return Pipeline(
        TCPClient(ip, port=port),
        DataDeChunker(),
        Decrypter(),     # Decrypt on the way in
        DeMarshallJSON(),
        ConsoleEchoer(use_repr=True)
    )

ServerCore(protocol=protocol, port=2345).activate()
json_client_prefab("127.0.0.1", 2345).run()
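Incidentally, the pad_eight/unpad pair above implements a PKCS#5/7-style padding scheme for DES's 8-byte blocks: the value of each padding character records how many characters to strip, and a string that is already a multiple of 8 gets a whole extra block so unpadding is never ambiguous. As a standalone sketch (plain functions, separate from the class above):

```python
def pad_eight(some_string):
    # Pad to a multiple of 8 characters. When the input is already a
    # multiple of 8 this adds a whole extra block, so the last character
    # always records how much padding to strip.
    pad_needed = 8 - (len(some_string) % 8)   # always in 1..8
    return some_string + chr(pad_needed) * pad_needed

def unpad(some_string):
    # The last character encodes the number of padding characters.
    return some_string[:-ord(some_string[-1])]

print(repr(pad_eight("hi")))         # 'hi\x06\x06\x06\x06\x06\x06'
print(unpad(pad_eight("12345678")))  # '12345678' - the extra block is stripped
```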

Any sufficiently advanced...

October 01, 2008 at 12:14 PM | categories: python, oldblog | View Comments

Any sufficiently advanced...

The famous lisp quote - "Any sufficiently complicated C or Fortran program contains an ad hoc informally-specified bug-ridden slow implementation of half of Common Lisp" - made me think of something else today. Specifically, I've been building a system where users enter data, someone else enters rules (which are data), and the system goes away and infers more data and then more things to do. That made me realise this:

"Any sufficiently complicated data rich system, which responds based on rules stored as data contains an ad hoc informally-specified bug-ridden slow implementation of half of Prolog."

Now, I'm not about to suddenly jump ship and just start writing Prolog everywhere, but it's (to me) a useful realisation.


Not impressed with changes to shell service on sourceforge...

September 19, 2008 at 05:08 PM | categories: python, oldblog | View Comments

So, sourceforge are changing their services. Can't really complain since it is free, but it really is a pain. They've switched off shell access, and broken the web servers; our home page there now shows this:
Traceback (most recent call last):
File "/home/groups/k/ka/kamaelia/cgi-bin/Wiki/wiki", line 24, in ?
import pprint
File "/usr/lib64/python2.4/pprint.py", line 39, in ?
from cStringIO import StringIO as _StringIO
ImportError: /usr/lib64/python2.4/lib-dynload/cStringIO.so: failed to map segment from shared object: Cannot allocate memory
And there's no way (that I can see) to change the website to point at the main website now. We were building/readying a replacement server, but hadn't quite got there, so weren't putting redirects in place yet. Ho hum. I've updated the link on http://sourceforge.net/projects/kamaelia to point at the new server, but given we've moved SVN elsewhere, mailing lists elsewhere and the webserver elsewhere I think this final breakage by SF marks the end of the road for use with sourceforge.

I don't mind the idea that they're changing the maintenance of the website from shell to web - it's their site, their call. But disabling shell and breaking the webserver BEFORE putting the alternative in place is a real pain. It's kinda sad to leave sourceforge, but breaking subversion, spammed-to-death mailing lists and now breaking the website is the final straw really. You never know though - I'm sure the aim behind these changes is to improve things, so we may come back at some point if they fix things.



Shocking...

August 31, 2008 at 12:37 AM | categories: python, oldblog | View Comments

And people wonder why I have no desire to visit the US any more : http://www.salon.com/opinion/greenwald/2008/08/30/police_raids/index.html

Hope for the copyright system?

August 20, 2008 at 09:55 PM | categories: python, oldblog | View Comments

"Open source licensing has become a widely used method of creative collaboration that serves to advance the arts and sciences in a manner and at a pace that few could have imagined just a few decades ago" -- The U.S. Court of Appeal for the Federal Circuit

Unit test crib sheet

August 07, 2008 at 12:13 PM | categories: python, oldblog | View Comments

Mainly for me, but hopefully of use to others too:
#!/usr/bin/python
#
# running this as "./test_CribSheet.py -v"
# - gives you some cribsheet docs on what's going on and runs all the tests
#
# running this as "./test_CribSheet.py -v LikeCycleOfATest"
# - allows you to just run one of the suites.
#
# This doesn't replace documentation, and there's probably some hidden
# assumptions here, but it's quite useful.
#

import unittest
import os

class DemoDocStringsImpact(unittest.TestCase):
    # Note that the next test doesn't have a doc string. Look at the results in -v
    def test_DefaultVerboseMessage(self):
        pass

    # Note that the next test does have a doc string. Look at the results in -v
    def test_NonDefaultVerboseMessage(self):
        "This message will be shown in -v"
        pass


class LikeCycleOfATest(unittest.TestCase):

    def setUp(self):
        "We get called before every test_ in this class"
        self.value = 2

    def test_test1(self):
        "LifeCycle : 1 - we get called after setUp, but before tearDown"
        self.assertNotEqual(1, self.value)
        self.value = 1

    def test_test2(self):
        """LifeCycle : 2 - self.value wiped from previous test case
        - this is because setUp & tearDown are called before/after every test"""
        self.assertNotEqual(1, self.value)

    def tearDown(self):
        "We get called after *every* test_ in this class"
        # We could for example close the file used by every test, or close
        # a database or network connection


class Escape_tests(unittest.TestCase):

    def test_NullTest1(self):
        "assertNotEquals - fails with AssertionError if Equal"
        self.assertNotEqual(1, 2)

    def test_NullTest2(self):
        "assertEquals  - fails with AssertionError if not Equal"
        self.assertEqual(1, 1)

    def test_NullTest3(self):
        "assertEquals, custom error message -  - fails with AssertionError + custom message if not Equal"
        self.assertEqual(1, 1, "If you see this, the test is broken")

    def test_CallsSelfFailShouldBeCaughtByAssertionError(self):
        "self.fail - fail with AssertionError + custom message - useful for failing if an assertion does not fire when it should"
        try:
            self.fail("Fail!")
        except AssertionError:
            pass

    def test_NullTest4(self):
        "assert_ - for those times when you just want to assert something as true. Can have a custom message"
        self.assert_(1 ==1 , "one and one is two...")

    def test_NullTest5(self):
        "fail unless - This is essentially the same as self.assert_ really"
        self.failUnless(1 ==1)

    def test_NullTest6(self):
        "fail unless - code for this shows how to catch the Assertion error"
        try:
            self.failUnless(1 !=1 )
        except AssertionError:
            pass

    def test_NullTest7(self):
        "fail unless - how to extract the error message"
        try:
            self.failUnless(1 !=1, "Looks like the test is wrong!")
        except AssertionError, e:
            self.assert_(e.message == "Looks like the test is wrong!")

    def test_NullTest8(self):
        "assertRaises - can be useful for checking boundary cases of method/function calls."
        def LegendaryFail():
             1/0
        self.assertRaises(ZeroDivisionError, LegendaryFail)

    def test_NullTest9(self):
        "failUnlessRaises - can be useful for checking boundary cases of method/function calls. - can also pass in arguments"
        def LegendaryFail(left, right):
             left/right
        self.failUnlessRaises(ZeroDivisionError, LegendaryFail,1,0)


    def test_NullTest10(self):
        "assertRaises - how to simulate this so your assertion error can get a custom message"
        def LegendaryFail(left, right):
             left/right
        try:
            LegendaryFail(1,0)
            self.fail("This would fail here if LegendaryFail did not raise a ZeroDivisionError")
        except ZeroDivisionError:
            pass


if __name__=="__main__":
    # Next line invokes voodoo magic that causes all the testcases above to run.
    unittest.main()

Just running this without -v:
~/scratch> python test_CribSheet.py
...............
----------------------------------------------------------------------
Ran 15 tests in 0.002s

OK
Running this with -v:
~/scratch> python test_CribSheet.py -v
test_DefaultVerboseMessage (__main__.DemoDocStringsImpact) ... ok
This message will be shown in -v ... ok
self.fail - fail with AssertionError + custom message - useful for failing if an assertion does not fire when it should ... ok
assertNotEquals - fails with AssertionError if Equal ... ok
assertRaises - how to simulate this so your assertion error can get a custom message ... ok
assertEquals  - fails with AssertionError if not Equal ... ok
assertEquals, custom error message -  - fails with AssertionError + custom message if not Equal ... ok
assert_ - for those times when you just want to assert something as true. Can have a custom message ... ok
fail unless - This is essentially the same as self.assert_ really ... ok
fail unless - code for this shows how to catch the Assertion error ... ok
fail unless - how to extract the error message ... ok
assertRaises - can be useful for checking boundary cases of method/function calls. ... ok
failUnlessRaises - can be useful for checking boundary cases of method/function calls. - can also pass in arguments ... ok
LifeCycle : 1 - we get called after setUp, but before tearDown ... ok
LifeCycle : 2 - self.value wiped from previous test case ... ok

----------------------------------------------------------------------
Ran 15 tests in 0.003s

OK
Running this for just one group of tests:
~/scratch> python test_CribSheet.py -v DemoDocStringsImpact
test_DefaultVerboseMessage (__main__.DemoDocStringsImpact) ... ok
This message will be shown in -v ... ok

----------------------------------------------------------------------
Ran 2 tests in 0.001s

OK
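As an aside, the command-line selection shown above can also be done programmatically, which is handy if you want to drive a crib sheet like this from another script. A minimal sketch (the TestCase name here is mine, not from the crib sheet):

```python
import unittest

class PickMe(unittest.TestCase):
    def test_one(self):
        self.assertEqual(1 + 1, 2)

# Rough equivalent of "python test_CribSheet.py PickMe": load just one
# TestCase class into a suite and hand it to a runner directly.
suite = unittest.TestLoader().loadTestsFromTestCase(PickMe)
result = unittest.TextTestRunner(verbosity=0).run(suite)
print(result.testsRun, result.wasSuccessful())   # 1 True
```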
Corrections, improvements, suggestions welcome. Hopefully of use to someone :) (written since I tend to use existing tests as a crib)


Kamaelia Talks at Pycon UK, September 12-14th Birmingham

July 28, 2008 at 01:15 AM | categories: python, oldblog | View Comments

My talk suggestions have been accepted by the committee [*] and so I'll be giving the following two talks at Pycon UK:

Practical concurrent systems made simple using Kamaelia

This is a talk for beginners who want to primarily get started with using Kamaelia (probably to either make building systems more fun, or because they want to explore/use concurrency naturally). The full abstract can be found here:
This is the short version: this talk aims to teach you how to get started with Kamaelia, building a variety of systems, as well as walking through the design and implementation of some systems built in the last year. Systems built over the past year include tools for dealing with spam (greylisting), database modelling, video & image transcoding for a youtube/flickr type system, paint programs, webserving, XMPP, games, and a bunch of other things.

The other talk:

Sharing Data & Services Safely in Concurrent Systems using Kamaelia

Again, this covers stuff most people have found they've needed to know something about after using Kamaelia for non-trivial stuff for a few weeks. (essentially how to use the CAT & STM code) The full abstract can be found here:
The short version for that: whilst message passing and "shared nothing" systems like Kamaelia simplify many problems, sometimes you really do need to share data (eg a single pygame display!). Unconstrained concurrent access to data causes problems, so Kamaelia has two problems to solve: 1) How do you provide tools that enable access to shared data and services? 2) How do you do so without making people's heads explode? I'll be using the Speak N Write code to illustrate that.
[*] This won't be/shouldn't be a shock since I'm on the pycon UK committee, but I don't take things for granted :-)
There's also masses of great talks lined up, and the first batch of talks put up already that I'm interested in (scheduling allowing :) include:
  • Stretching Pyglet's Wings
  • How I used Python to Control my Central Heating System
  • The Advantages And Disadvantages Of Python In Commercial Applications
  • Getting Warmer with Python - Python's role in helping "solve" global warming.
  • Python in Higher Education: One Year On
  • PyPy's Python Interpreter - Status and Plans
  • Cloud Computing and Amazon Web Services
  • Distributed Serpents: Python, Peloton and highly available Services
  • Open Source Testing Tools In Practice
  • py.test - Rapid Testing with Minimal Effort
  • ... and naturally lots more :)

I'm also particularly looking forward to the keynotes by Ted Leung (Sun, previously OSAF) & Mark Shuttleworth (Ubuntu, Canonical). I've not heard Ted speak before so that'll be interesting in itself, however I've heard Mark speak twice before and he's a great speaker.

There's also plans afoot for a BOF to discuss people's gripes with python's packaging systems, and the need for things like an easy_uninstall. More BOFs welcome of course.

If you've not signed up, go take a look at the talks list and see part of what you're missing :-)

(yeah, I'm excited, but why not? It's exciting :-) )


George Bernard Shaw was wrong

July 27, 2008 at 02:00 PM | categories: python, oldblog | View Comments

... or rather incomplete. This quote is often used as a good rationale for sharing ideas, and as a nub of an idea it's good, but incomplete:
If you have an apple and I have an apple and we exchange apples then you and I will still each have one apple. But if you have an idea and I have an idea and we exchange these ideas, then each of us will have two ideas.
It's nice, and on the most basic of levels it's true. However it's utterly incomplete, as anyone who's worked on anything based on sharing ideas - be it brainstorming, collaborative working, open source or anything else - knows. What you actually get is more of a combinatorial explosion of ideas: with just two "completely atomic" ideas you never have just 2 ideas, you always have at least 3 - A, B, meld(AB). In fact this sequence should bring flashbacks to maths lessons for many people.

The reason it's wrong is because of this:
  • 2 ideas A, B -> A, B, meld(AB)
    • 3 possibilities

  • 3 ideas A, B, C -> A, B, C, meld(AB), meld(BC), meld(AC), meld(ABC)
    • 7 possibilities

  • 4 ideas A, B, C, D -> A, B, C, D, meld(AB), meld(AC), meld(AD), meld(BC), meld(BD), meld(CD), meld(ABC), meld(ABD), meld(ACD), meld(BCD), meld(ABCD)
    • 15 possibilities

  • More generally: 2**N -1 possibilities
This becomes even more true when you realise most ideas aren't atomic. OK, not all combinations are valid and useful, but very rarely does sharing 2 ideas leave both parties with just 2 ideas... :-)
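For the sceptical, the 2**N - 1 figure is just the count of non-empty subsets of N ideas, which is easy to check mechanically:

```python
from itertools import combinations

def melds(ideas):
    # Every non-empty subset of the ideas is either a lone idea or a meld.
    result = []
    for r in range(1, len(ideas) + 1):
        for combo in combinations(ideas, r):
            result.append("meld(" + "".join(combo) + ")" if r > 1 else combo[0])
    return result

for n in range(2, 5):
    print(n, len(melds("ABCD"[:n])), 2 ** n - 1)   # the two counts agree
```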


Concurrent software is not the problem - Intel talking about 1000s of cores

July 05, 2008 at 12:47 PM | categories: python, oldblog | View Comments

Intel have recently been talking about the fact that we'll probably have to deal with not hundreds but thousands of cores on a machine at some point in the near future (5-10 year time frame based on their 80 core prototype). Now many people seem to be worried about how to use even a dual core machine, so naturally many people are probably going wa-HUH? However I suspect the Erlang crowd have a similar reaction to us - which is "cool".

Why? Like the erlang group, in Kamaelia, the thing we've focussed on is making concurrency easy to work with, primarily by aiming for making concurrent software maintenance easier (for the average developer). In practical terms this has meant putting friendly metaphors (hopefully) on top of well established principles of message passing systems, as well as adding support for other forms of constrained shared data. (STM is a bit like version control for variables).
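As an aside, the "version control for variables" idea can be sketched in a few lines. This is a toy model of the concept, not the real Axon.STM API: a read hands back a version number, and a write only succeeds if nobody has updated the value in the meantime.

```python
class ConcurrentUpdate(Exception):
    pass

class ToyStore(object):
    """Toy 'version control for variables': optimistic concurrency."""
    def __init__(self):
        self.values = {}                       # name -> (version, value)

    def checkout(self, name, default=None):
        # Hand back (version, value); create the variable if it's new.
        return self.values.setdefault(name, (0, default))

    def checkin(self, name, version, value):
        # Only accept the write if the version hasn't moved on.
        current, _ = self.values[name]
        if current != version:                 # someone got there first
            raise ConcurrentUpdate(name)
        self.values[name] = (version + 1, value)

store = ToyStore()
ver, val = store.checkout("counter", 0)
store.checkin("counter", ver, val + 1)         # succeeds: version matched
try:
    store.checkin("counter", ver, val + 99)    # stale version -> conflict
except ConcurrentUpdate:
    print("conflict detected - re-checkout and retry")
```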

We've done this by using various application domains as the starting point, such as DVB, networking and use of audio/video etc, and used Python as the language of choice to do so. (Though we probably could've shouted about our application uses more/better - we're getting better, I think :-) However the approaches apply to more or less any non-functional language - so there are proof of concept versions of our miniaxon core in C++, Ruby, & Java as well. (The C++ & Ruby ones use a deliberately simple/naive coding style :)

This does mean that now when we approach a problem - such as the desire to build a tool that assists a child learning to read and write - we end up with a piece of code that internally exhibits high levels of concurrency. For example, even the simple Speak And Write application is made of 37 components, which at present all run in the same process, but could easily be made to use 37 processes... (by prepending all Pipelines & Graphlines with the word "Process")

Despite this, we don't normally think in terms of number of components or concurrent things, largely because you don't normally think of the number of functions you use in a piece of code - we just focus on the functionality we want from the system. I'm sure once upon a time though people did, but I don't know anyone who counts the number of functions or methods they have. The diagram below for example is the high level functionality of the system:


Unlike many diagrams though, this has a 1 to 1 correspondence with the code: (skipping some details below)

bgcolour = (255,255,180)
Backplane("SPEECH").activate()

Pipeline(
    SubscribeTo("SPEECH"),
    UnixProcess("while read word; do echo $word | espeak -w foo.wav --stdin ; aplay foo.wav ; done"),
).activate()

CANVAS  = Canvas( position=(0,40), size=(800,320),
                  bgcolour = bgcolour ).activate()
CHALLENGE  = TextDisplayer(size = (390, 200), position = (0,40),
                           bgcolour = bgcolour, text_height=48,
                           transparent =1).activate()
TEXT  = Textbox(size = (800, 100), position = (0,260), bgcolour = (255,180,255),
                text_height=48, transparent =1 ).activate()

Graphline(
    CHALLENGER  = Challenger(),
    CHALLENGE_SPLITTER = TwoWaySplitter(),
    CHALLENGE_CHECKER = Challenger_Checker(),
    SPEAKER  = PublishTo("SPEECH"),

    CHALLENGE  = CHALLENGE,
    TEXT  = TEXT,
    CANVAS  = CANVAS,

    PEN     = Pen(bgcolour = bgcolour),
    STROKER = StrokeRecogniser(),
    OUTPUT  = aggregator(),
    ANSWER_SPLITTER = TwoWaySplitter(),

    TEXTDISPLAY  = TextDisplayer(size = (800, 100), position = (0,380),
                                 bgcolour = (180,255,255), text_height=48 ),

    linkages = {
               ("CANVAS",  "eventsOut") : ("PEN", "inbox"),
               ("CHALLENGER","outbox")  : ("CHALLENGE_SPLITTER", "inbox"),
               ("CHALLENGE_SPLITTER","outbox")  : ("CHALLENGE", "inbox"),
               ("CHALLENGE_SPLITTER","outbox2")  : ("SPEAKER", "inbox"),
               ("PEN", "outbox")        : ("CANVAS", "inbox"),
               ("PEN", "points")        : ("STROKER", "inbox"),
               ("STROKER", "outbox")    : ("OUTPUT", "inbox"),
               ("STROKER", "drawing")   : ("CANVAS", "inbox"),
               ("OUTPUT","outbox")      : ("TEXT", "inbox"),
               ("TEXT","outbox")      : ("ANSWER_SPLITTER", "inbox"),
               ("ANSWER_SPLITTER","outbox")  : ("TEXTDISPLAY", "inbox"),
               ("ANSWER_SPLITTER","outbox2") : ("CHALLENGE_CHECKER", "inbox"),
               ("CHALLENGE_CHECKER","outbox") : ("SPEAKER", "inbox"),
               ("CHALLENGE_CHECKER", "challengesignal") : ("CHALLENGER", "inbox"),
    },
).run()

However, what has this got to do with 1000s of cores? After all, even a larger application (like the Whiteboard) only really exhibits a hundred or two hundred degrees of concurrency... Now, clearly if every application you were using was written using the approach of simpler, friendlier component metaphors that Kamaelia currently uses, then it's likely that you would probably start using all those CPUs. I say "approach", because I'd really like to see people taking our proofs of concept and making native versions for C++, Ruby, Perl, etc - I don't believe in the view of one language to rule them all. I'd hope it was easier to maintain and more bug free, because that's a core aim, but the proof of the approach is in the coding really, not the talking.

However, when you get to 1000s of cores a completely different issue suddenly arises that you didn't have with concurrency at the level of 1, 5, 10, 100 cores: software tolerance of hardware unreliability. That, not writing concurrent software, is the REAL problem.

It's been well noted that Google currently scale their applications across 1000s of machines using Map Reduce, which fundamentally is just another metaphor for writing code in a concurrent way. However, they are also well known to work on the assumption that they will have a number of servers fail every single day - which will fundamentally mean failing half way through doing something. Now with a web search, if something goes wrong, you can just redo the search, or just not aggregate the results of the search.

In a desktop application, what if the core that fails is handling audio output? Is it acceptable for the audio to just stop working? Or would you need to have some mechanism to back out from the error and retry? It was thinking about these issues early this morning that I realised that what you need is a way of capturing what was going to be running on that core before you execute it, and then launch it. In that scenario, if the CPU fails (assuming a detection mechanism) you can then restart the component on a fresh core.

The interesting thing here is that ProcessPipeline can help us out here. The way process pipeline works is as follows. Given the following system:

ProcessPipeline( Producer(), Transformer(), Consumer() ).run()

Such as:

ProcessPipeline( SimpleFileReader(), AudioDecoder(), AudioPlayer() ).run()

Then ProcessPipeline runs in the foreground process. For each of the components listed in the pipeline, it forks, and runs the component using the pprocess library, with data passing between components via the ProcessPipeline (based on the principle of the simplest thing that could possibly work). The interesting thing about this is this: ProcessPipeline therefore has a copy of each component from before it started executing. Fundamentally this allows ProcessPipeline (at some later point in time) to detect the erroneous death of a component (somehow :) ), whether due to bugs or hardware failure, and to restart it - masking the error from the other components in the system.
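To make the restart idea concrete, here's a toy model - nothing like the real ProcessPipeline code, and using plain function calls rather than real processes. The point is just that holding the recipe for a component, rather than only a running instance, lets a supervisor remake and rerun a crashed copy:

```python
def run_with_restarts(make_component, max_restarts=3):
    # Because we hold the recipe (make_component) rather than only a
    # running instance, a crashed component can simply be remade and
    # rerun, masking the failure from the rest of the system.
    restarts = 0
    while True:
        component = make_component()          # fresh copy each time
        try:
            component()                       # "run" the component
            return restarts
        except Exception:
            restarts += 1
            if restarts > max_restarts:
                raise                         # give up: persistent fault

# A component that simulates two "hardware failures", then works:
attempts = []
def flaky():
    attempts.append(1)
    if len(attempts) < 3:
        raise RuntimeError("simulated core failure")

print(run_with_restarts(lambda: flaky))       # 2 (restarted twice)
```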

Now, quite how that would actually work in practice, I'm not really sure - ProcessPipeline is after all experimental at present, with its issues being explored by a Google Summer of Code project aimed at a multi-process paint program (by a first year CS student...). However, it gives me warm fuzzy feelings about both our approach and its potential longevity - since we do have a clear, reasonable answer to how to deal with that (hardware) reliability issue.

So, whilst Intel may have some "unwelcome advice", and people may be reacting by thinking "how on earth do I even structure my code to work that way", the real problem is "how do I write application code that is resilient to, and works despite, hardware failure".

That's a much harder question, and the only solution to both that I can see is "break your code down into restartable, non-datasharing, message passing, replaceable components". I'm sure other solutions either exist or will come along though :-) After all, Kamaelia turns out to have similarities to Hugo Simpson's MASCOT (pdf, see also wikipedia link), which is over 30 years old but barely advertised, so I'm sure that other approaches exist.


Interesting post on requirements for project websites

July 04, 2008 at 09:06 AM | categories: python, oldblog | View Comments

I quite like this list of requirements for project websites by Brian Jones. We've been planning a rebuild of the Kamaelia website to focus on applications written using Kamaelia, how to modify them, how to make your own apps using it, and how to join in (hoping people are interested in doing so). Whilst that's a wider scope than some of the things he's suggesting, it's a good checklist. (After all, each of the applications themselves should actually have a project page with that sort of information.)
