At this early stage in my PhD, it seems instructive to look ahead and take a wild, naïve guess at what kind of deliverable simulation I might end up producing. If only to look back and laugh later on. ;-)

First of all, the challenge as I understand it at this time:

To model an autonomic system, specifically one inside a vehicle, most likely a car. A modern car has a wide array of sensors and actuators, and the system designer needs to be able to see how they are performing, in real time.

At the moment I’m envisioning a car, modelled in 3D, driving out onto a classic Tufte-ian grid.

(Image: a car pulling out onto the grid. A car that I'll never be able to model.)

My best guess at the moment for what kind of 3D engine I'll use is some modification of the “Source” engine, which powered Half-Life 2. The SDK comes with the game and I’ve played around with it. It’s very powerful.

Once the car comes to a stop, the outer panelling flies off, exposing a simplified version of the car’s innards. Thus begins the simulation, with (hopefully live) data being fed into the system and various activities shown on the display.

Statistics like network activity and CPU usage will be on-screen at all times, in the form of pie charts and “sparklines” that show trends over time. Atomic events, such as a sensor being activated or a sensor failing, will be shown as alerts, possibly through a picture-in-picture system that shows a zoomed-in view of the full model at the point where the incident occurred. Clicking on this PiP box will then focus the main view on that area via a camera movement. This device was used in the household simulation game “The Sims” to announce events like burglaries and house fires.

So, that’s what I’ve got so far. Of course, I’m leaving out all the bits about multiple displays and pulling elements from the main screen down onto a PDA or something crazy. It’s early days yet though, so this could still go in any direction.