Archive for November, 2005


We are preparing to install IBM’s Deep Computing Visualization software on a computer in the new Visualisation lab that’s hooked up to the DiamondTouch. This will allow us to both:

* “Explore the styles of interaction possible across different devices and a heterogeneous computing environment”
* Support simultaneous multi-user interactions across different displays

I am expecting to demo my simulation on the multiple displays in the Viz lab in the months ahead, so this will be a good introduction to the technology.

From the fact sheet (PDF):

High-end graphical images can be viewed in two visualization modes — SVN (Scalable Visual Networking) to increase screen resolution and multiplicity of physical displays; and RVN (Remote Visual Networking) to allow remote use of the application.

These two modes reflect two of the challenges in my PhD research: creating a visualisation of a large dataset across many displays, and allowing parts of the visualisation to migrate across devices.

After my initial foray into predicting the future was met with puzzlement, I’ve been thinking back over the idea of, as Aaron put it, “marrying the Scientific Visualisation with the Information Visualisation”. This seemed like the logical way to go, but right now it doesn’t look like what’s actually required or even desired for this project. Nonetheless, I want to write down the reasons I originally started thinking along this track.

Spatial Representation
First of all, because an autonomic system is made up of a large and fluctuating number of sensors and actuators, it made sense to have some form of spatial representation of where the sensors are located. This would let the person watching the visualisation ask for things like “show me activity for the sensors at the rear of the car”, or for those clustered in the engine. This would surely be a useful UI for interacting with the simulation.

Sensor Grouping
Beyond these ‘logical groupings’, they could also simply drag a box around the sensors they were interested in, use the usual Shift-click/Ctrl-click interaction to add or remove sensors from their selection, and then generate the visualisation from this selection. Splitting the sensors driving the visualisation into groups like this would simplify the task of focusing on certain parts of the simulation, or of moving parts of it onto other display devices (particularly low-power devices that lack the processing power to generate the entire visualisation).

Using a 3D Camera
When sensors fail, as they are wont to do, the camera in the 3D environment can be positioned to show the location of the failure. This would allow the user to select nearby sensors and get realtime data from just those surrounding the problematic one.
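
To make these three ideas a little more concrete, here is a minimal sketch of how they might hang together in code. None of this is real project code; the Sensor class and the function names are placeholders I have invented purely for illustration.

from dataclasses import dataclass
from math import dist

@dataclass(frozen=True)
class Sensor:
    id: str
    position: tuple[float, float, float]  # x, y, z in the car model's coordinate space
    region: str                           # a logical grouping, e.g. "rear" or "engine"

def sensors_in_region(sensors, region):
    # "Show me activity for the sensors at the rear of the car."
    return {s for s in sensors if s.region == region}

def update_selection(current, picked, shift=False, ctrl=False):
    # A plain click replaces the selection; Shift-click adds the picked
    # sensors; Ctrl-click toggles them in and out of the selection.
    if shift:
        return current | picked
    if ctrl:
        return current ^ picked
    return set(picked)

def nearest_to_failure(sensors, failed, k=5):
    # The k sensors closest to a failed one: a starting point for polling
    # realtime data around the problem and for aiming the 3D camera at it.
    others = [s for s in sensors if s.id != failed.id]
    return sorted(others, key=lambda s: dist(s.position, failed.position))[:k]

The nice side effect of thinking about it this way is that driving a visualisation from a selection, or handing part of it to a low-power device, becomes a matter of passing around a set of sensors rather than the whole model.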

It’s about time we started making some proper progress on the face recognition program we’re writing as part of Joe’s Software Engineering course. The team (that is, myself, Marko and Ado) have begun to thrash things out on the GForge message boards.

At this early stage in my PhD, it would be instructive to look ahead and take a wild, naïve guess as to what kind of deliverable simulation I may end up producing at the end. If only to look back and laugh later on. ;-)

First of all, the challenge as I understand it at this time:

To model an autonomic system, specifically one inside an automotive machine, most likely a car. A modern car will have a wide array of sensors and actuators, and the system designer needs to be able to see how they are performing, in real time.

At the moment I’m envisioning a car, modelled in 3D, driving out onto a classic Tufte-ian grid.

[Image: a car pulling out onto the grid; a car that I’ll never be able to model.]

My best guess at the moment is that the 3D engine will be some modification of the “Source” engine, which powered Half Life 2. The SDK comes with the game and I’ve played around with it. It’s very powerful.

Once the car comes to a stop, the outer panelling flies off, exposing a simplified version of the car’s innards. Thus begins the simulation, with (hopefully live) data being fed into the system and the display showing various activities on the screen.

Statistics like network activity and CPU usage will be on screen at all times, in the form of pie charts and “sparklines” that show trends over time. Atomic events, such as sensors being activated or failing, will be shown as alerts, possibly through a picture-in-picture system that shows a zoomed-in view of the full model at the point where the incident occurred. Clicking on this PiP box would then focus the main view on that area through a camera movement. This device was used in the household simulation game “The Sims” to announce events like burglaries and house fires.
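
As a rough sketch of the statistics side of this (the names and the rolling-window approach below are my own assumptions, not a design decision), the always-on figures could be kept as small rolling windows of readings and compressed into text sparklines, with atomic events queued up separately for the PiP alerts:

from collections import deque

BLOCKS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    # Compress a window of readings into a one-line trend glyph.
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0
    return "".join(BLOCKS[int((v - lo) / span * (len(BLOCKS) - 1))] for v in values)

class StatFeed:
    def __init__(self, window=30):
        self.series = {}       # statistic name -> rolling window of readings
        self.window = window
        self.events = deque()  # atomic events: sensor activated, sensor failed, ...

    def record(self, name, value):
        self.series.setdefault(name, deque(maxlen=self.window)).append(value)

    def alert(self, sensor_id, kind, position):
        # A PiP alert would zoom to `position`; clicking the PiP box could
        # hand that same position to the main camera as its new focus target.
        self.events.append((sensor_id, kind, position))

    def render(self):
        return {name: sparkline(vals) for name, vals in self.series.items()}

Calling record() for each statistic every frame, and render() when redrawing, would give the small trend strings to draw alongside the pie charts.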

So, that’s what I’ve got so far. Of course, I’m leaving out all the bits about multiple displays and pulling elements from the main screen down onto a PDA or something crazy. It’s early days yet though, so this could still go in any direction.

While looking into the possibilities of the Source SDK, specifically the FacePoser program, I had an idea that may or may not be useful. The SDK contains an assortment of tools for creating realistic human characters; characters that look, move and emote like real people. This goes as far as a very impressive facial expression modeller.

The characters created for games like Half Life 2 are very convincing. So, my idea was to create a human character (dressed in a lab coat and carrying a clipboard) who would stand beside the model of whatever autonomic system I was simulating. As the simulation wears on, the character would speak and emote various feedback cues to the user, like flailing his arms around when sensors fail. What form of feedback could be more intuitive than the one everyone is already most practised at interpreting?

The trick, of course, is not to let this become a more technologically advanced version of Clippy. The character, who could be thought of as an avatar for the autonomic system’s general health, would generally stay in the background. Much of the feedback he provides could even be picked up subconsciously, as he walks around the car performing ‘checks’, all the while providing subtle auditory hints and contorting his face to show his level of contentment.
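
Just to sketch how little logic the avatar itself might need (the names and thresholds here are invented for illustration, and this is not Source SDK code), the system’s general health could be folded into a single score that picks the character’s expression and background animation:

def health_score(total_sensors, failed_sensors):
    # 1.0 means everything is fine; 0.0 means every sensor has failed.
    return 1.0 - (failed_sensors / total_sensors) if total_sensors else 1.0

def avatar_cues(score):
    # Map the health score to a facial expression and an animation the
    # lab-coated character would play while pottering about the car.
    if score > 0.9:
        return {"expression": "content", "animation": "stroll_and_check"}
    if score > 0.6:
        return {"expression": "concerned", "animation": "inspect_closely"}
    return {"expression": "alarmed", "animation": "flail_arms"}

Because the cues are driven by an aggregate score rather than by every individual event, the character only changes demeanour when things genuinely drift, which is what should keep him in the background rather than turning him into Clippy.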

Of course, there are some faces that no amount of technology could ever emulate.

A very interesting paper, given how long ago it was written. Though informal, this was inspiring. The examples of possible pervasive systems — the vast majority of which have not yet been implemented — seem to have had a large impact on future ‘thinkers’.

“The Computer for the 21st Century,” by Mark Weiser.

@incollection{213017,
  author = {Mark Weiser},
  title = {The computer for the 21st century},
  booktitle = {Human-computer interaction: toward the year 2000},
  year = {1995},
  isbn = {1-55860-246-1},
  pages = {933--940},
  publisher = {Morgan Kaufmann Publishers Inc.},
  address = {San Francisco, CA, USA},
}