Archive for December, 2005

[Image: Calvin and Hobbes cartoon.]

Of course, in the time it took me to colour this in, I could’ve written ten papers…

No, not the as-yet-unknown-quantity that is the paper I’m trying to put together for AVI 2006. I just got word from one of the editors at O’Reilly that the book I contributed to, PHP Hacks, has been published and is in shops. I should be getting my ‘author copy’ in the post over the next few days. Huzzah! :-)

I’ll be upgrading this blog to WordPress 2.0 soon, since the list of what’s new looks so good, in particular the new Ajax-ified admin panel. Update: Done. Works well.

Mark found an interesting video (80 MB) rendered with Blender and OpenGL 2.0, which has a nice zoomable interface. It’d look good running in the Viz lab.

After an interesting meeting today, we’ve each chosen a website to extract data from, to be fed into Construct as RDF. The idea of standardising on Python for all of the newly created sensors was brought up, which suits me, as I’ve already started working on my Python scraper.

Hidden Data

I didn’t mention this in the meeting, but some very useful data, like currency conversion rates, are generally not shown on public-facing websites. To get at them requires a form submission, and then scraping the resulting HTML page. Things I learned during my fourth year project may be able to help here, since one of the sites I tested on was this currency conversion page.

Access to a feed of realtime data costs $540 a year. With my project, for the cost of an HTTP GET request, you could have up-to-the-minute data on any currency available in their system. This was made possible by a very useful Perl module called HTML::Form, which allowed me to simulate form submissions and thus retrieve the HTTP response page. Something similar is bound to exist for Python.
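I haven’t looked for the Python equivalent yet, but the standard library alone can manage a simple form submission: passing a second argument to urllib.urlopen turns the request into a POST. A minimal sketch, where the URL and field names are invented placeholders rather than the real site’s:

import urllib

# Hypothetical form fields; the real currency site defines its own names.
params = urllib.urlencode({"From": "EUR", "To": "USD", "Amount": "1"})

# Supplying the second argument makes urllib perform a POST instead of a GET.
response = urllib.urlopen("http://www.example.com/convert", params)
html = response.read()
response.close()
# 'html' can now be parsed and scraped like any ordinary page.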

Working with Trees

There are two main approaches to screen-scraping: searching the raw page text for patterns with heavy, regular-expression-laden parsing, or constructing a tree-like representation of the page in memory and then traversing that tree looking for particular elements. My favoured method is the latter, since it is generally more robust to small cosmetic changes in the underlying HTML. Scraper rewrites are still required when a page is reorganised, but that happens far less often than a site shuffling a few colours around.

Beautiful Soup is a very useful Python package which will robustly convert even an invalid HTML page into a tree, and then provides the methods needed to traverse it. This way, scrapers can be bashed out pretty quickly. Here’s some code to set it up; after this comes the page-specific code that extracts the relevant table rows or whatever else is required.

import urllib, sys, re, BeautifulSoup
 
def get_page(url):
    """Fetches an arbitrary page from the web and prints it."""
    try:
        location = urllib.urlopen(url)
    except IOError, (errno, strerror):
        sys.exit("I/O error(%s): %s" % (errno, strerror))
    content = location.read()

    # Clear out all troublesome whitespace
    content = content.replace("\n", "")
    content = content.replace("\r", "")
    content = content.replace("\t", "")
    content = content.replace("> ", ">")
    content = content.replace("  ", " ")
    
    location.close()
    return content
    
def generate_tree(page):
    """Converts a string of HTML into a document tree."""
    return BeautifulSoup.BeautifulSoup(page)

Once you have this set up, fetching a certain element on a page becomes as easy as writing:

print generate_tree(get_page('http://www.imdb.com/')).first('table')

Polling Period

We discussed how often the sensors/scrapers should fetch their target webpage to re-parse it. Polling a page too often is likely to get your IP address blocked. Personally I don’t think this is as big a problem as was made out. Most RSS readers are designed to poll a feed once every 30 minutes to an hour. This is a reasonable period. Bar a few examples (stock quotes specifically), very few sites that we’re monitoring will be updating more frequently than that. In fact, the period could likely be increased. It would be relatively simple to set up a cron job to run each of the sensors in order every 30 minutes.
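As a first pass, one crontab entry could drive everything; something like the line below, where the wrapper script is just a placeholder for whatever ends up invoking the sensors in sequence:

# Run the sensor wrapper on the hour and half-hour (path and script are hypothetical).
0,30 * * * * /home/construct/sensors/run_all_sensors.sh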

This approach could then be extended. RSS readers are (or should be) designed to honour various HTTP headers so that they don’t continually re-fetch the same feed over and over again if it’s not changing. Ordinary HTML pages are served with those same headers, so each sensor could first make a cheap conditional request (a HEAD, or a GET with If-Modified-Since), and if the server answers 304 Not Modified, or the Last-Modified header falls within the last update cycle, defer the update until the next cycle.
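As a rough sketch of that check in Python (assuming urllib2, a server that honours If-Modified-Since, and a helper function name of my own invention):

import urllib2

def page_unchanged(url, last_fetch_time):
    """Returns True if the server says the page hasn't changed since
    last_fetch_time, an HTTP date string like 'Sat, 17 Dec 2005 10:00:00 GMT'."""
    request = urllib2.Request(url)
    request.add_header("If-Modified-Since", last_fetch_time)
    try:
        urllib2.urlopen(request)
    except urllib2.HTTPError, e:
        if e.code == 304:
            return True   # Not Modified: defer this sensor until the next cycle
        raise
    return False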

Ideally, the polling would be adaptive, so we’d have a single script that takes as input the derived update frequency of each page and writes out a new cron file with a suitable period for each site. Pages like the Dublin Bus timetables, which I’m working on, would then be re-parsed very infrequently, since the site is rarely updated. Conversely, sites that serve constantly-updated information, like stock quotes and currency conversion rates, would be fetched much more often (but never more often than some floor, say once every 10 minutes).
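A sketch of what that cron-writing script might boil down to, with invented sensor paths and periods, and anything slower than hourly simply lumped into a daily run:

# Minutes between polls for each sensor script (hypothetical values).
periods = {
    "/home/construct/sensors/dublin_bus.py":   1440,  # rarely updated
    "/home/construct/sensors/stock_quotes.py": 5,     # updates constantly
}

MINIMUM_PERIOD = 10  # never poll more often than once every 10 minutes

lines = []
for script, period in periods.items():
    period = max(period, MINIMUM_PERIOD)
    if period < 60:
        lines.append("*/%d * * * * %s" % (period, script))
    else:
        # Anything slower than hourly just runs once a day at midnight.
        lines.append("0 0 * * * %s" % script)

print "\n".join(lines)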

I’ve begun learning some Python, primarily because Mark found an open source 3D graphics package called Blender, which uses Python for its scripting.

So far, it looks like it’s similar in many ways to Perl, which is good because I already have plenty of experience with Perl, having used it for my final year project. Also, Lorcan is talking about doing some screen-scraping on major websites to glean data like movie showtimes and current stock prices, to be fed into Construct as contextual data.

I’ve done some screen-scraping in Perl before, but I’m guessing most of the others won’t want to write their screen-scrapers in Perl too. Mixing languages would lead to serious code maintainability problems, and maintenance is inevitable: every time a source website is updated, someone may have to recode some or all of the corresponding screen-scraper. Such is the life we’ve chosen. It would be best if we didn’t have to have a designated caretaker for each module, so standardising on one language for them all would be nice. And I know which way the tide is turning (Joe characterised Perl with “I don’t like any language where my cat can walk across the keyboard and it will still compile”. Touché).

Update: I’ve done some work on a scraper in Python.

IBM Research has some examples of a Weather Visualisation system they have designed.