Bloglines Web Services

Bloglines just announced their new web services, including feed synchronisation, and support is already enabled in several aggregators including FeedDemon, NetNewsWire and BlogBot.

I was totally blown away when I read that aggregators were now providing feed synchronisation across multiple machines using Bloglines as the central data source. “Amazing!”, I thought, “They’ve finally done it!” But sadly it doesn’t really seem as though they have. Whilst you can synchronise the list of feeds you’re subscribed to, you can’t truly synchronise the state of the items in those feeds: the desktop aggregator can retrieve the items in a feed that are marked as unread, but it can’t notify Bloglines about items it has marked as read.

I really hope they add the ability to do this. If they do, then voilà: a fully-featured centralised aggregator. I need never use another service!

Quoted on Robert Scoble’s blog, Nick Bradbury, creator of FeedDemon, says:

The best part is items you read in FeedDemon don’t show up as unread in Bloglines, and items you read through Bloglines don’t show up in FeedDemon. In other words, feed state is synchronized so that you don’t read the same item twice

But looking at the API calls Bloglines has published, I just can’t see how this is true. The APIs are all one-way: you can download all your unread items and either leave them marked as unread or mark them all as read. Unless I’m missing something, what’s needed is an API call that retrieves a single unread item and marks it as read in Bloglines, invoked whenever the aggregator marks an item as read.
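To make the asymmetry concrete, here’s a rough sketch of the two published calls as I read the announced API — the endpoint and parameter names here are my best guess at that API, so treat them as assumptions:

```javascript
// Sketch of the Bloglines sync calls described above (names are assumptions).
const BASE = "http://rpc.bloglines.com";

// listsubs returns your subscription list (as OPML).
function listSubsUrl() {
  return `${BASE}/listsubs`;
}

// getitems fetches a feed's unread items. The n parameter is all-or-nothing:
// n=1 marks EVERY returned item as read, n=0 leaves them all unread. There's
// no call to mark a single item read, which is why per-item read state can't
// flow back from the desktop aggregator to Bloglines.
function getItemsUrl(subId, markRead) {
  return `${BASE}/getitems?s=${subId}&n=${markRead ? 1 : 0}`;
}
```

So an aggregator can only choose between “download and leave unread” and “download and mark everything read” — exactly the one-way behaviour complained about above.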

The WellStyled Workshop

The WellStyled Workshop is yet another website about how to use standards-compliant code and CSS techniques to keep code clean, readable and semantic.

Normally I’d just file this away under my links, but this one’s slightly different. Whilst there may not be a ton of content to look at quite yet, there’s one feature in particular which deserves a closer look – every article is in two languages.

Unlike most sites, which provide the article in one language and put a simple link named “this article in French” (or whatever) somewhere on the page, the WellStyled Workshop puts the full text of both language versions on the same page. CSS and Javascript hide the version that’s not currently being displayed, and a cookie ensures that when you start navigating around, the rest of the site stays in the language you selected.

Not only this, but each article also shows a snippet of the other-language content (or the whole thing, if it’s short) in a smaller, different-coloured font by the side of the main article. Clicking on the snippet hides the active language and reveals the article in the alternative language.
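The mechanism described above could be sketched roughly like this — the element ids, cookie name and language codes here are my own invention, not the site’s actual markup:

```javascript
// Both language versions live in the page; JS shows one, hides the other,
// and remembers the choice in a cookie. Ids and cookie name are hypothetical.

// Pull the saved language out of a cookie string like "foo=1; lang=en".
function parseLangCookie(cookieString, fallback) {
  const match = /(?:^|;\s*)lang=(\w+)/.exec(cookieString || "");
  return match ? match[1] : fallback;
}

// Show the chosen version and hide the other; only runs in a browser.
function applyLanguage(lang) {
  if (typeof document === "undefined") return;
  const other = lang === "en" ? "cs" : "en";
  document.getElementById("article-" + lang).style.display = "block";
  document.getElementById("article-" + other).style.display = "none";
  document.cookie = "lang=" + lang + "; path=/";
}
```

Clicking the other-language snippet would just call `applyLanguage` with the alternative code, and every page load would call `parseLangCookie(document.cookie, defaultLang)` first.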

It’s a very simple trick, but they’ve pulled it off with aplomb. The design of this functionality is excellent, it’s always very obvious what to do, and it makes you wonder why it hasn’t been done before (which is, of course, a sure sign of a good idea).

And of course, it’s only now that I discover who the site is by – Pixy! It was following Pixy’s work about three years ago that made me interested in the web, how it works and the potential to be had but I’d unsubscribed a long time ago and forgotten all about him. Time to catch up I think.


FoafSpace

FoafSpace is a new FOAF search engine by Gene McCulley, just announced on the rdfweb-dev mailing list. Normally I’d just bookmark it, but the fact that I loaded the site, typed in my nick and got my details back in sub-second time was pretty impressive.

At the time of writing, it’s got 1,039,950 people listed and provides interesting stats, like the top 20 websites by number of FOAF files spidered, and which of the people it knows about have their birthday today.

I’d just started thinking about writing my own FOAF scutter: Jim Ley’s data dumps are in MySQL format (I’d prefer the raw triples) and a bit out of date, I need a chunk of FOAF data to do some analysis on, and you can never have too many hacky, poorly-written scutters 😉 Hopefully FoafSpace will release their data dump, and maybe others could do the same?

Ha! Now that I’ve said I’d prefer triples, I’ve found Morten Frederiksen’s Perl script for converting Jim’s DB dump file directly into triples. I’ll have to give that a go.

Jabber and FOAF – the hope is still alive

stpeter has made three very interesting posts (Disco RDF, Knowing and Making Friends) about the possibilities that would open up if Jabber clients (or, more likely, servers) started using FOAF to help Jabber users find people, chatrooms and other entities of interest on the Jabber network. It makes for interesting reading, and it would be good to see what direction the Jabber guys take this. Adoption by an organisation like theirs could be what it takes to get the FOAF guys to mark some more of those elements ‘stable’, and hence increase people’s trust in the vocabulary.

Exchange to iCal/Sunbird via Perl

I’ve got a small Perl script that logs into an Exchange server running IMAP, reads a mailbox that is really a calendar, pulls out the VCALENDAR parts, and formats them into an .ics file.
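Guessing at the shape of such a script (sketched here in Javascript rather than Perl): once you have the raw message text, the interesting part is just pulling the VCALENDAR/VEVENT blocks out and splicing them into one calendar. The IMAP login and real iCalendar property handling are omitted:

```javascript
// Scan raw message text for embedded calendars and collect their events.
// This is a simplification: it ignores MIME decoding and line folding.
function extractEvents(rawMail) {
  const events = [];
  const calRe = /BEGIN:VCALENDAR[\s\S]*?END:VCALENDAR/g;
  const evRe = /BEGIN:VEVENT[\s\S]*?END:VEVENT/g;
  for (const cal of rawMail.match(calRe) || []) {
    events.push(...(cal.match(evRe) || []));
  }
  return events;
}

// Wrap the collected VEVENTs in a single VCALENDAR for the .ics file.
function buildIcs(events) {
  return ["BEGIN:VCALENDAR", "VERSION:2.0", ...events, "END:VCALENDAR"].join("\r\n");
}
```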

Oh. My. God! And it’s just so small and simple. Unbelievable!

Oh, how badly do I want this? Very badly. But I don’t even know if our Exchange server has IMAP enabled! First thing Monday morning. First. Thing.

amazon bookmarklet

consume! is a bookmarklet for adding the book you’re browsing on Amazon straight to your collection.

Just drag the link above to your browser toolbar. Then, when you’re viewing a book description page on Amazon, click it and you’ll be taken to the “add this book to your collection” page. Add a reading status and a comment, hit “save”, and you’re done. Massively sped-up book consuming!

I freely admit I’m not the world’s best javascript hacker, so if there are improvements that can be made (especially to the regexes, which work, but only just about) please let me know!
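For what it’s worth, the ASIN-extraction part of such a bookmarklet might look something like this — the regex and the target URL below are my own guesses, not the actual bookmarklet’s code:

```javascript
// Amazon product URLs carry a 10-character ASIN/ISBN after /dp/, /ASIN/,
// /gp/product/ or the old /exec/obidos/ASIN/ path segment.
function extractAsin(url) {
  const m = /\/(?:dp|ASIN|gp\/product|exec\/obidos\/ASIN)\/([A-Z0-9]{10})(?:[\/?]|$)/.exec(url);
  return m ? m[1] : null;
}

// The bookmarklet itself would then be roughly (target site hypothetical):
// javascript:location.href='http://example.com/add?asin='+extractAsin(location.href)
```

Anchoring the path segments like this is a bit more robust than matching any ten alphanumerics anywhere in the URL, which is the sort of “works, but only just” regex the post alludes to.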