After my initial foray into predicting the future was met with puzzlement, I’ve been thinking back over the idea of, as Aaron put it, “marrying the Scientific Visualisation with the Information Visualisation”. This seemed like the logical way to go, but right now it doesn’t look like what’s actually required or even desired for this project. Nonetheless, I want to write down the reasons I originally started thinking along this track.

Spatial Representation
First of all, because an autonomic system is made up of a large and fluctuating number of sensors and actuators, it made sense to have some form of spatial representation of where the sensors are located. This would let the person watching the visualisation say “show me activity for the sensors at the rear of the car”, or ask for those clustered around the engine, for example. That seems like a genuinely useful UI for interacting with the simulation.
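To make that a little more concrete, here is a minimal sketch of what the spatial side might look like, assuming each sensor carries a position in the vehicle’s coordinate frame. The Sensor and Region types, the field names, and the example region coordinates are all hypothetical, not part of any actual system.

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    sensor_id: str
    position: tuple[float, float, float]  # (x, y, z) in the vehicle's frame
    latest_reading: float = 0.0

@dataclass
class Region:
    """An axis-aligned box, e.g. the rear of the car or the engine bay."""
    minimum: tuple[float, float, float]
    maximum: tuple[float, float, float]

    def contains(self, point: tuple[float, float, float]) -> bool:
        return all(lo <= p <= hi
                   for lo, p, hi in zip(self.minimum, point, self.maximum))

def sensors_in_region(sensors: list[Sensor], region: Region) -> list[Sensor]:
    """Only the sensors whose positions fall inside the given region."""
    return [s for s in sensors if region.contains(s.position)]

# Named regions are what would let the viewer ask for "the rear of the car";
# the coordinates here are placeholders.
REGIONS = {
    "rear": Region(minimum=(-2.5, -1.0, 0.0), maximum=(-1.0, 1.0, 1.5)),
    "engine": Region(minimum=(1.0, -1.0, 0.0), maximum=(2.5, 1.0, 1.0)),
}
```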
Sensor Grouping
Beyond these ‘logical groupings’, the user could also simply drag a box around the sensors they were interested in, use the usual Shift-click/Ctrl-click interactions to add or remove sensors from the selection, and then generate the visualisation from that selection. Splitting the sensors driving the visualisation into groups like this would make it easier to focus on certain parts of the simulation, or to move parts of it onto other display devices (particularly low-power devices that lack the processing power to generate the entire visualisation).
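A rough sketch of how that selection model might hang together, building on the hypothetical Sensor and Region types above; the class and method names are my own invention, and a real implementation would be driven by whatever pointer events the UI toolkit provides.

```python
class SensorSelection:
    """Tracks which sensors should drive the visualisation."""

    def __init__(self) -> None:
        self.selected: set[str] = set()  # sensor ids currently selected

    def select_box(self, sensors: list[Sensor], box: Region) -> None:
        """Drag-box selection: replace the selection with everything in the box."""
        self.selected = {s.sensor_id for s in sensors if box.contains(s.position)}

    def toggle(self, sensor_id: str) -> None:
        """Shift-/Ctrl-click: add the sensor if absent, remove it if present."""
        self.selected ^= {sensor_id}

    def group(self, sensors: list[Sensor]) -> list[Sensor]:
        """The subset of sensors handed to the visualisation (or another device)."""
        return [s for s in sensors if s.sensor_id in self.selected]
```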
Using a 3D Camera
When sensors fail, as they are wont to do, the camera in the 3D environment could be positioned to show the location of the failure. The user could then select nearby sensors and get real-time data from just those surrounding the problematic one.
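As a sketch of that behaviour, again using the hypothetical Sensor type from above and assuming some camera object with a position and a look_at method; the search radius and camera offset are placeholder values.

```python
import math

def neighbours_of(failed: Sensor, sensors: list[Sensor],
                  radius: float = 0.5) -> list[Sensor]:
    """All other sensors within `radius` of the failed one, nearest first."""
    nearby = [s for s in sensors
              if s.sensor_id != failed.sensor_id
              and math.dist(s.position, failed.position) <= radius]
    return sorted(nearby, key=lambda s: math.dist(s.position, failed.position))

def frame_failure(camera, failed: Sensor,
                  offset: tuple[float, float, float] = (0.0, -1.0, 0.5)) -> None:
    """Move the (assumed) camera so the failed sensor sits at its focal point."""
    camera.position = tuple(p + o for p, o in zip(failed.position, offset))
    camera.look_at(failed.position)

# The nearby group could then be fed straight back into the same
# SensorSelection / visualisation path sketched earlier.
```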