Review metadata in blogs and feeds

There just isn’t any.

If there is, I’d really like someone to point it out to me.

If you’re providing a one-off review, this kind of thing isn’t going to be very important, as you’d hope people would read (or skip) the whole thing. But if you’re regularly producing reviews (as on Val’s blog, which I came across earlier), then you need that review data (in his case, marks out of ten) to be part of the intrinsic metadata of your post.

At this point I’d normally launch into an RDF speech, but there’s not really any point. RDF would be the natural and best choice for this kind of thing, but to be honest, anything will do. Anything standard, that all the tools could support. Is the notion of a “Review” just too hard to support? It doesn’t need to specify what kind of review; that should be picked up by other forms of metadata. Just please, someone at one of the big publishing tools, start building in support for writing reviews and adding a score to them, so we can start aggregating.
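As a sketch of what such metadata might look like in a feed item – and to be clear, the review: namespace and its URI here are entirely hypothetical, since no such standard exists, which is rather the point:

```xml
<item xmlns:review="http://example.org/ns/review#">
  <title>Review: Some Album</title>
  <link>http://example.org/reviews/some-album</link>
  <!-- hypothetical review vocabulary: a score and the scale it's on -->
  <review:rating>7</review:rating>
  <review:maxRating>10</review:maxRating>
</item>
```

Anything along these lines – provided every tool agreed on it – would let an aggregator pick out the score without having to parse the prose.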

Thanks.

RSS Reading Habits and Aggregator Design

I’ve noticed that my RSS reading habits are markedly different from those of the other people I know who also use aggregators. Whether they use a desktop aggregator like FeedThing or SharpReader or a web-based aggregator like Bloglines, they all read feeds in the same way:

  1. load aggregator
  2. click on first feed with unread items
  3. read new items in feed, following links if full-text is not provided, until all items are marked as read
  4. click on next feed with unread items
  5. goto 3 until no more unread items

I use JabRSS as my aggregator, and as a result I read feeds differently. Here’s my process (there’s a little extra explanation for those who haven’t used JabRSS):

  1. load aggregator; a window appears with the title, link and summary of every unread item in a feed, and has a “next” button at the bottom for navigating to the next feed with unread items
  2. read the title and summary of each item; if an item interests me, click its link to open it in a new tab in Firefox
  3. click next
  4. goto 2 until window is dismissed
  5. switch to Firefox and read all items in tabs, closing tabs and del.icio.us-ing as I go until I’ve read all the items

JabRSS has no concept at all of a read or unread item; it just gives me the latest links and lets me follow them if I want. This is drastically different to just about every aggregator currently in use (I know that Bloglines now lets you display only the feeds which have unread items in them, and I imagine some desktop aggregators allow the same).

Having tried to go back to a desktop aggregator last week, I was amazed by a) how much of my time was spent navigating between feeds when I didn’t really care which feed a new item was in, b) how little of my screen was actually taken up with reading the content, and c) how my concentration was continually broken by incessant context switching as I moved from feed content -> list of unread items -> new item -> full content.

There’s no way this can be the best way of reading new items in your subscribed feeds and it becomes even more noticeable the more feeds you subscribe to. I get, on average, 150 new items in my aggregator every day, and all I care about is the content of the ones which may interest me. That means I need to be able to scan fast and then open in my browser. I probably spend less than five minutes a day actually in my aggregator, the rest of the time is reading the content.

Not only do desktop aggregators stop you from scanning quickly (although SauceReader’s Outlook-like auto-preview for new items does actually try to help with this), they also open items in their own bastardised version of a web browser, and on Windows, that built-in browser is 99% of the time going to be an IE ActiveX control. Thanks, but no thanks – I have a browser, it has all my tools built into it like bugmenot, popup blockers, del.icio.us and more, and that’s what I want to use immediately.

So what am I saying? I’m saying that most of the aggregators I’ve used actually obstruct what the user is trying to do – read the new contents of feeds they’ve subscribed to.

Traditional desktop aggregators do, of course, have benefits. Like search. And, er, filtering, which is automated search. Flagged items. There we go, that’s two. Except when I want to flag an item I’m reading I del.icio.us it with an appropriate tag. Then I actually get to see that “flagged” item on any computer.

And of course desktop aggregators let you go back at any point and look at old items in feeds, even when you’re offline, which can be useful. JabRSS doesn’t handle this very well: in order to scan quickly you need a summary rather than full content (which JabRSS can actually give you), so whilst you can use your Jabber client’s history to get the content of short items, you’ll need to rely on your browser’s cache for the rest.

Somewhere in between there’s a happy medium. A desktop aggregator which just lists new items with a link which opens in your browser. No list of feeds. No list of read items. No browser window. But with a secret management mode, which lets you perform searches and read old feeds. Is that too much to ask?

Blog design old and new

The current ‘design’ is actually just a sample layout I was putting together a year and a half ago and decided to test on my blog. And, as it turned out, that’s the way it’s stayed. Certainly not the intention!

So I need to design. Normally you’d expect people to say “redesign”, but that’s not what I’m actually starting from :). I’m powered by Blogger, but I don’t think this restricts me in any way, as all it’s doing is holding my content and pouring it into the container I provide (which can, of course, be PHP or JSP or anything).

It’s interesting to see just how widespread the vertical drop-shadow has become since the Blogger templates adopted it (of course, it’s only in the templates because the trend was just picking up when they were written), and how used to it I’ve become – when I saw the first few blogs switch to vertical shadows I hated it, but it’s a reasonably effective and inoffensive way of bounding content. Not that I’ll be using it ;).

If used properly we all know that a dropshadow can add another dynamic to a website, but do you ever get that “copycat” feeling when you use them? I am sure it’s just an ego thing with me but I try to avoid them now because I know everyone has done them to death and I don’t want to be the guy that creates a site that everyone has seen before. I might just be paranoid though.

Jon Hicks’ recent redesign is worth talking about for several reasons. First of all, it’s open – a stark contrast to every other website, which is centred, fixed-width and bounded by vertical drop-shadow borders (of course this is a sweeping generalisation and hideously untrue, but it’s certainly the way it feels). Next, despite the content being fixed-width, it actually provides something extra to those with larger monitors by way of an image on the side of the screen, more of which is displayed the higher your resolution (a trick I first saw on Haiko Hebig’s site – check out the image in the top left). This is important because it means Jon’s site fills the entire screen, no matter how large your resolution or monitor. It’s bright and airy, using colour very effectively. Compare and contrast with Paul Scrivens’ Whitespace.

It’s certainly food for thought, especially given that I can’t design for toffee. God knows when I’ll actually get around to it, too 🙂

JSPWiki at work

Back in April Russell Beattie rolled out JSPWiki at his workplace, which kind of inspired me, and in June we rolled out a JSPWiki of our own to a set of trial users, who all loved it. Soon after, we rolled it out to some of the other departments, and whilst there was some resistance, in the main it’s gone quite well.

In the five months we’ve been running we’ve created about 800 pages (although a healthy number of these are by-products of using the Todo List Plugin), which is a pretty immense output, but shows how well everyone took to the new technology.

The default theme is pretty grim, so we’re running a modified version of the Clean Template. Additionally, search was as slow as anything (like, 30 seconds to bring back any results), so I recently updated to use Lucene for search, which has made searching an absolute doddle – five seconds max for any search! Great!

We’ve had a couple of problems with JSPWiki along the way (and we still have some). First and foremost, inserting a carriage return doesn’t actually insert a <br>; you have to force one using the wiki syntax of a double backslash, which is pretty unintuitive and, for non-technical users, downright confusing. Fortunately Kieron Wilkinson has written a patch which inserts implicit line breaks as well as supporting the existing wiki syntax. I applied it to our wiki, and not only did it not break any pages, but now line breaks work the way they should! Hurrah that man!
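For reference, the forced-break wiki syntax looks like this – the trailing double backslash is what gets rendered as a line break:

```
The first line of the paragraph\\
and the second line, forced onto its own row.
```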

We’ve also had a problem with page naming. You have to make everyone follow a consistent style when naming pages whilst not being too draconian, but if someone creates an inappropriately named page then there’s not much you can do about it except create a whole new page, copy and paste the old content across, and make sure nothing else links to the old page. It’s awkward and a hassle to administer. Completely removing the unwanted pages means deleting the underlying physical files, which, on our system, broke the built-in search; I’ve not tried it since we switched to Lucene, but I’d imagine that would break too, since the deletion wouldn’t go through the wiki and so the index couldn’t be updated. There is a Rename Page patch which should remove the admin burden here, and which I’d like to apply as soon as possible, but so far I’ve not got around to trying it.

The two outstanding problems we have are both Mozilla-based. People are always inserting links to locations and files on the network via file://, and these links work just fine in IE, but security restrictions in Mozilla browsers mean that clicking on them does nothing – there’s no user feedback or notification of why the navigation failed (unless the Javascript console is open, of course, in which case they’ll see the message “Security Error: Content at [URL] may not load or link to [file]” thrown). At the moment the only workaround I have for this is to install the IEView extension, right-click the link and go “Open in IE”, which then works, of course.

The second is a problem in the default print CSS for the Clean Template which I haven’t solved yet (to be fair, I’ve only spent a couple of minutes looking): when you print a wiki page it’ll print the first paragraph or so, then insert a page break and continue on the next sheet of paper, so the only browser you can reliably print from is IE. Very annoying!

Despite these flaws though, JSPWiki has been a clear success for bottom-up information creation and sharing and I’d definitely recommend it to any company looking to increase the flow of information from the shop-floor (as it were), with the info not just flowing upwards as typically happens in official reports, but also sideways, which is oftentimes where it’s more needed.

Back in November an anonymous commenter posted a solution to the file://// URI problem on one of my old posts. Herewith:

Go to ‘about:config’ and use a filter of ‘checkloaduri’.

Set that value to false (double click on it) and you should be able to load from file:// urls.

And it works an absolute treat. I circulated this workaround around the office, and now we’re just down to the printing issue 🙂
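For anyone rolling this out to several machines, the same preference can be preset in each Mozilla profile’s user.js file – this assumes the pref name matches the about:config entry above, and bear in mind it’s disabling a security restriction, so it’s only sensible on a trusted internal network:

```
// user.js -- preset the pref so users don't have to edit about:config.
// Note: this disables a Mozilla security restriction on file:// links.
user_pref("security.checkloaduri", false);
```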

Half-formed

I don’t post to my blog very often any more. This tends to be because, as I have one idea for a blog entry and start writing it, I get an idea for another; so I jot down a note about the general gist and get on with the initial post, but my brain keeps being distracted by what I should write in the second. The times I’ve decided to abandon the first post to work on the second, exactly the same thing has occurred in reverse. On various occasions writing one post has spawned three others, none of which gets completed. I currently have six text files on my desktop, all of them posts I can’t complete because I can’t really remember what I was writing any more.

I suspect my brain is conspiring against weblogging.

Compiling JSPWiki 2.0 with Tomcat 5

Specifically, that’s JSPWiki 2.0.52 and Tomcat 5.0.29, but who’s counting anyway?

The compilation instructions for JSPWiki confidently state that you should copy servlet.jar from your $CATALINA_HOME/common/lib directory into either $JAVA_HOME/jre/lib/ext or JSPWiki/lib.

Tomcat 5 doesn’t use servlet.jar any more, so the JARs you need to copy are $CATALINA_HOME/common/lib/jsp-api.jar and $CATALINA_HOME/common/lib/servlet-api.jar. I just dropped them into my JSPWiki/lib dir because I didn’t want them influencing any of my other Java apps, but putting them into $JAVA_HOME/jre/lib/ext should also work.

Downloading music

So a representative of the BPI on BBC Breakfast this morning summed up his argument for suing people who download music with “[there are so many legal online alternatives that] there’s no excuse for downloading anymore”.

Well, I hate to burst that little bubble but it’s not true. I have songs in my music collection which are not only unavailable at any legal online music service, but are also unavailable at any shop I’ve been into (and I’ve asked!).

So what am I supposed to do? When I’m hunting for that one track I heard on an advert, I look it up on commercialbreaksandbeats.co.uk, do a search in the online sellers (it’s almost never there), and then fire up the music-downloading app-du-jour and get it. My dodgy music taste aside, what are the alternatives?

Just to go back for a moment, this representative also said that 60% of all CDs retail for £10 or under. Well now. Whilst that might well be true for new albums, singles still cost a fiver – and are then completely unavailable two months after release. So what are you supposed to do? It leaves the consumer stuck between a rock and a hard place, and in the end they’re more likely to get that one song they want off the web than fruitlessly hunt around car boot sales for months.

I can understand the music industry’s frustration at this bandwagon that appears to have left without them and undercuts their entire business model, but by denying that there are any real reasons at all that someone might download music, they just perpetuate the image of being out of touch and only looking after their best interests and not their customers’; and for as long as that continues, people will see downloading as a completely viable alternative method of getting the music they want to hear.

wiki-fying bbc.co.uk

Stef Magdalinski has done something very clever. Fresh from his work on theyworkforyou.com, he’s set up a proxy to the news.bbc.co.uk website which scans the content for capitalised phrases and acronyms and replaces them with links to Wikipedia articles, if they exist (he’s written about how it works in “Don’t get me wrong, I really like BBC News Online”).

For example, the current lead story “Trust us, Howard urges voters” has references to Michael Howard, the UKIP and Tony Blair. When you run the article through the proxy, all of these become clickable links to the relevant Wikipedia articles, providing not only background on all the relevant parties but a “further reading” list; so if you’re interested in something on the site, you can find out more about it using Wikipedia as your reference source (of course, there are some problems with this).

What would be nicer, of course, would be cross-referenced BBC articles, which they kind of do in the “See Also:” section of each news story; but those tend to be just similar articles which don’t actually add anything to the original, and may in fact pose more questions. For example, what’s really needed on the BBC article above are links to Michael Howard’s biography, Tony Blair’s biography and the UKIP Q&A – all on the news.bbc.co.uk site. Incidentally, those three articles are all the first results for the relevant searches using the BBC search engine.
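The core of the trick is simple enough to sketch. Here’s a minimal, hypothetical version in Python – the regex, the function name and the known_titles lookup are all my own illustration; the real proxy presumably checks Wikipedia itself to see whether an article exists:

```python
import re

# Base URL for article links (English Wikipedia).
WIKIPEDIA = "http://en.wikipedia.org/wiki/"

# Matches runs of one or more capitalised words, e.g. "Tony Blair" or "UKIP".
PHRASE = re.compile(r"\b([A-Z][A-Za-z]*(?:\s+[A-Z][A-Za-z]*)*)\b")

def wikify(text, known_titles):
    """Link capitalised phrases that correspond to known Wikipedia articles.

    `known_titles` stands in for an existence check against Wikipedia;
    phrases not in the set are left untouched.
    """
    def link(match):
        phrase = match.group(1)
        if phrase in known_titles:
            href = WIKIPEDIA + phrase.replace(" ", "_")
            return '<a href="%s">%s</a>' % (href, phrase)
        return phrase
    return PHRASE.sub(link, text)
```

A real implementation would also need to avoid sentence-initial capitals, skip text that’s already inside a link, and cope with HTML rather than plain text – which is presumably where the cleverness lies.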