visualisation

Entries tagged with “visualisation”.


Aaron is running a workshop at AVI 2008 on “designing multi-touch interaction techniques for coupled public and private displays”. If you have a novel idea for an interactive system involving mobile devices, fixed displays and surfaces; or if you just need an excuse to put an iPhone on your research budget, head over to the PPD ’08 website.

The best t-shirt I never bought. Sold out! Bah.

Electric Sheep has become a very popular screensaver in the SRG office since it replaced my previous favourite, Fireflies, some months ago. A special presentation of the high-definition version (“Dreams in High Fidelity”) is being displayed at SIGGRAPH 2006, which starts today. We’ve seen some of the best sheep float through the office.

Me and two very tiny octopuses. Soon to be ex-octopuses thanks to Mike!
Say hello to my little friends.

I got word in April that my first paper, the alluringly-titled “Collaborating in Context: Immersive Visualisation Environments”, which I submitted in March to the Context in Advanced Interfaces workshop at AVI06, had been accepted. So, Mark, Mike and I headed off to Venice for the week to watch presentations, ride around on boats and eat octopuses.

The paper concerns the design and development of our unique visualisation lab here in UCD. My presentation at the workshop went fairly well, considering I had completed a cross-city dash minutes before starting (Venice is a big place!). My slides are available with the others at the workshop’s results page, and the paper has been published in the ACM digital library.

AVI 06 proper was an excellent conference, with plenty of interesting work going on and people to meet. My trip report is available:

[PDF] Trip Report: AVI 2006, May 23–26, Venice, Italy

Our own photos are online, and you can also check out the very lovely Geoffrey Ellis’s AVI photos (spot the goons!).

“2-D or not 2-D? (That is the question)” finds fault with some of the new graphical features coded into the latest version of Keynote, which were subsequently used by Steve Jobs at the latest Macworld keynote. Interestingly, the slides contained a number of basic but easily-made information visualisation mistakes: what Tufte would call a lack of “graphical integrity”.

Digital magpie Marko sends on a cool floor-screen demo apparently made by Nintendo and shown at last year’s E3 Expo (an event that is forever on my list of things to go to). It seems Nintendo are really pushing new ways of interacting with software, especially considering their plans for their next console’s controller.

Mark found an interesting video (80 MB) rendered with Blender and OpenGL 2.0, which has a nice zoomable interface. It’d look good running in the Viz lab.

IBM Research has some examples of a Weather Visualisation system they have designed.

We are preparing to install IBM’s Deep Computing Visualization software on a computer in the new Visualisation lab that’s hooked up to the DiamondTouch. This will allow us to both:

* “Explore the styles of interaction possible across different devices and a heterogeneous computing environment”
* Support simultaneous multi-user interactions across different displays

I am expecting to demo my simulation on the multiple displays in the Viz lab in the months ahead, so this will be a good introduction to the technology.

From the fact sheet (PDF):

High-end graphical images can be viewed in two visualization modes — SVN (Scalable Visual Networking) to increase screen resolution and multiplicity of physical displays; and RVN (Remote Visual Networking) to allow remote use of the application.

These two modes reflect two of the challenges in my PhD research: creating a visualisation of a large dataset across many displays, and allowing parts of the visualisation to migrate across devices.

After my initial foray into predicting the future was met with puzzlement, I’ve been thinking back over the idea of, as Aaron put it, “marrying the Scientific Visualisation with the Information Visualisation”. This seemed like the logical way to go, but right now it doesn’t look like what’s actually required or even desired for this project. Nonetheless, I want to write down the reasons I originally started thinking along this track.

Spatial Representation
First of all, because an autonomic system is made up of a large and fluctuating number of sensors and actuators, it made sense to have some form of spatial representation of where the sensors are located. This would let the person watching the visualisation ask, for example, “show me activity for the sensors at the rear of the car”, or for those clustered in the engine. This would surely be a useful UI for interacting with the simulation.
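
For concreteness, here is a minimal sketch of that idea in Python; the Sensor class, its fields and the example regions are all hypothetical (not taken from any actual system), just to show how a query like “the sensors at the rear of the car” could reduce to a simple filter on position or named region.

```python
from dataclasses import dataclass

@dataclass
class Sensor:
    id: str
    x: float        # position along the car: 0.0 = front, 1.0 = rear
    y: float
    z: float
    region: str     # a named 'logical grouping', e.g. "engine", "rear"

# A hypothetical handful of sensors placed around the vehicle.
sensors = [
    Sensor("s1", 0.05, 0.5, 0.3, "engine"),
    Sensor("s2", 0.10, 0.4, 0.3, "engine"),
    Sensor("s3", 0.95, 0.5, 0.4, "rear"),
]

def sensors_in_region(sensors, region):
    """'Show me activity for the sensors at the rear of the car.'"""
    return [s for s in sensors if s.region == region]

def sensors_in_box(sensors, x_range, y_range):
    """A purely spatial query, independent of any named grouping."""
    return [s for s in sensors
            if x_range[0] <= s.x <= x_range[1]
            and y_range[0] <= s.y <= y_range[1]]

rear_sensors = sensors_in_region(sensors, "rear")
```
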
Sensor Grouping
Beyond these ‘logical groupings’, the user could also simply drag a box around the sensors they are interested in, use the usual Shift-click/Ctrl-click interactions to add or remove sensors from the selection, and then generate the visualisation from that selection. Splitting the sensors driving the visualisation into groups like this would simplify the task of focusing on certain parts of the simulation, or of moving parts of it onto other display devices (particularly low-power devices without enough processing power to generate the entire visualisation).
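
Continuing the same hypothetical sketch (it reuses the Sensor objects and sensors_in_box from above), a selection set along these lines might behave as follows: a rubber-band drag replaces the selection, Shift-click adds to it, Ctrl-click toggles, and the resulting subset is what gets handed to whichever display renders it.

```python
class SensorSelection:
    """Tracks which sensors currently drive the visualisation."""

    def __init__(self):
        self.selected = set()   # sensor ids

    def rubber_band(self, sensors, x_range, y_range):
        """Dragging a box around sensors replaces the current selection."""
        self.selected = {s.id for s in sensors_in_box(sensors, x_range, y_range)}

    def shift_click(self, sensor):
        """Shift-click adds a sensor to the selection."""
        self.selected.add(sensor.id)

    def ctrl_click(self, sensor):
        """Ctrl-click toggles a sensor in or out of the selection."""
        self.selected ^= {sensor.id}

# e.g. select everything near the front of the car, then hand just that
# group off to a low-power display device.
selection = SensorSelection()
selection.rubber_band(sensors, (0.0, 0.2), (0.0, 1.0))
```
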
Using a 3D Camera
When sensors fail, as they are wont to do, the camera in the 3D environment can be positioned to show the location of the failure. This would allow the user to select nearby sensors and get real-time data from just those surrounding the problematic one.
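
And a last piece of the same sketch, this time with an entirely hypothetical camera API: on a failure event, aim the 3D camera at the failed sensor’s position and pull real-time data only from the sensors within some radius of it.

```python
import math

def neighbours(sensors, failed, radius=0.15):
    """Sensors within `radius` of the failed one (plain Euclidean distance)."""
    def dist(a, b):
        return math.sqrt((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2)
    return [s for s in sensors if s.id != failed.id and dist(s, failed) <= radius]

def on_sensor_failure(camera, sensors, failed):
    """Point the camera at the failure and return the sensors worth polling."""
    camera.look_at((failed.x, failed.y, failed.z))   # hypothetical camera call
    return neighbours(sensors, failed)
```
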
