RSS for fun and the public good

I follow Steve Messer’s blog – he’s a product manager in the UK Civil Service, making land and housing data easier to find, use and trust. And he writes weeknotes which I enjoy reading.

He mentioned this week that his team’s blog doesn’t produce an RSS feed. Well, I’ve just been doing that for my own blog so I thought, how hard can it be?

An hour later and I opened my pull request.

The Digital Land blog is an interesting beast: it’s built with a custom Python static site generator that uses GOV.UK components and renders with Jinja.

It’s been a couple of years since I last looked at much Python (probably whilst I was at GDS working as Head of Technology on the Clinically Extremely Vulnerable People Service) and I don’t think I’ve ever used Jinja, so my code is probably terrible but it does work on my machine!

Hopefully what I’ve done will be at least the right-shaped answer, but I imagine one of their actual Python experts will fix up my PR nicely.

(also, it’s an Atom feed that I generated, not an RSS feed, but the title didn’t scan as nicely)
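For the curious, the shape of the change is roughly this: a minimal Atom template rendered with Jinja. All the names below are illustrative, not the actual Digital Land code, but it shows how little is really needed.

```python
from datetime import datetime, timezone
from jinja2 import Template

# Illustrative only: the real templates and post model look different.
ATOM_TEMPLATE = Template("""\
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>{{ title }}</title>
  <id>{{ url }}</id>
  <updated>{{ updated }}</updated>
  <link rel="self" href="{{ url }}feed.xml"/>
  {%- for post in posts %}
  <entry>
    <title>{{ post.title }}</title>
    <id>{{ url }}{{ post.slug }}/</id>
    <updated>{{ post.updated }}</updated>
    <link rel="alternate" href="{{ url }}{{ post.slug }}/"/>
  </entry>
  {%- endfor %}
</feed>
""")

def render_feed(title, url, posts):
    """Render an Atom feed for a list of posts (dicts with title/slug/updated)."""
    updated = max(p["updated"] for p in posts) if posts else \
        datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return ATOM_TEMPLATE.render(title=title, url=url, posts=posts, updated=updated)
```

Jinja happily falls back from attribute access to dict lookup, so plain dicts work as the post model here.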

UPDATE: It’s merged! Look on my works, ye mighty.

Retrieve an Atom feed in .NET Core

After yesterday’s adventure and recent forehead-smacking, this seemed like an appropriate, and small, goal.

After some Googling I found RestSharp, which bills itself as a “Simple REST and HTTP API Client for .NET”. Sounds good.

I tried to work out how to add this to my project.json but couldn’t find any documentation on what it should look like, even after I remembered about things like NuGet.

So I guessed and typed dnu install restsharp, which seemed to fetch the right files. My project.json didn’t seem to have updated though, so I then did a dnu restore. This updated the project file, though that might have been overkill – the apparent failure to update may just have been a timing issue in my editor.

I now have a small file which will retrieve an Atom file and dump it to screen.
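I won’t inflict my rusty C# on anyone, but the whole job really is small. Here’s the same fetch-and-dump sketched in Python (which the rest of these posts use anyway) – the URL is whatever feed you want to pull:

```python
import urllib.request
import xml.etree.ElementTree as ET

ATOM_NS = "{http://www.w3.org/2005/Atom}"

def dump_feed(xml_text):
    """Print the feed title and each entry title from raw Atom XML."""
    root = ET.fromstring(xml_text)
    print(root.findtext(ATOM_NS + "title"))
    for entry in root.iter(ATOM_NS + "entry"):
        print(" -", entry.findtext(ATOM_NS + "title"))

def fetch_feed(url):
    """Retrieve an Atom document over HTTP and return it as a string."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8")

# usage: dump_feed(fetch_feed("https://example.org/feed.xml"))
```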

Another small step tomorrow.

Online ebook catalogs in Atom

As I recently wrote, I have a new-found interest in ebooks (I also bought four new textbooks from O’Reilly using a BOGOF offer to pick up 97 Things Every Programmer Should Know, 97 Things Every Project Manager Should Know, Beautiful Code and The Art of Agile Development).

I mainly read ebooks on my Android device, specifically using Aldiko.

Aldiko has a built-in browser for the feedbooks.com catalog, but also lets you add your own catalogs. A friend told me that Calibre, a popular ebook management programme, has a web interface which one of the other popular Android ebook readers (WordPlayer) can be pointed at to add custom catalogs. After a quick trial and a few Google searches, I realised that WordPlayer actually subscribes to an XML file hosted at http://localhost/calibre/stanza.

Opening this file shows it to be Atom, where each entry is a small metadata container and the link element is used to reference the actual book and images that represent it, like this:


    <link type="application/epub+zip" href="/get/epub/3"/>
    <link rel="x-stanza-cover-image" type="image/jpeg" href="/get/cover/3"/>
    <link rel="x-stanza-cover-image-thumbnail" type="image/jpeg" href="/get/thumb/3"/>

Another few searches showed this to be a draft specification called openpub. Aldiko supports this, so adding the /stanza URL to a custom catalog works there too! Voila, custom catalogs in Aldiko. Marvellous!

It should only require a tiny bit of work to write code that serves a catalog straight from the filesystem without the overhead of Calibre (which I found to be quite heavyweight). This is what I have started here.
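A first cut of that Calibre-free catalog could be as simple as walking a directory and emitting one Atom entry per epub. A sketch, with made-up paths, no cover images, and no pagination:

```python
import os
from xml.sax.saxutils import escape

def catalog_from_dir(book_dir, base_url="/get"):
    """Build a minimal Stanza/openpub-style Atom catalog from a directory of .epub files."""
    entries = []
    for name in sorted(os.listdir(book_dir)):
        if not name.endswith(".epub"):
            continue
        title = escape(os.path.splitext(name)[0])
        href = escape(base_url + "/epub/" + name)
        entries.append(
            "  <entry>\n"
            f"    <title>{title}</title>\n"
            f"    <id>{href}</id>\n"
            f'    <link type="application/epub+zip" href="{href}"/>\n'
            "  </entry>"
        )
    return (
        '<?xml version="1.0" encoding="utf-8"?>\n'
        '<feed xmlns="http://www.w3.org/2005/Atom">\n'
        "  <title>My bookshelf</title>\n"
        + "\n".join(entries)
        + "\n</feed>"
    )
```

Serve the result from any web server and point Aldiko’s custom catalog at it; the rel="x-stanza-cover-image" links could be added the same way once you have cover files to point at.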

Importing blog posts and comments from Blogger to WordPress

I tried this a year ago only to experience epic fail.

I tried this yesterday and it was a marvellous success.

Around this time last year I was locked out of my Google account and decided to move what I could over to my own server (a process I’ve still not completed!). As part of that move I used BloggerBackup to export all of my blog posts and comments, then tried to import them into WordPress, which didn’t work. I was resigned to writing a script to do the import but ran into a WordPress date-parsing bug that I had trouble tracking down. However, since my old blog was still available as static HTML on my server, I wasn’t really that worried about it.

Last night I tried the built-in WordPress import from Blogger. It uses OAuth to authenticate and then allows import of your posts and comments from the comfort of a couple of clicks in the WordPress admin interface. All very smooth, all very easy (apart from the slightly worrying disparity between the number of imported elements and the totals). I’ll have to move my images, but that’s no real bother.

My archives now go all the way back to May 2002 when it was a co-blog with my housemate of the time who is now an arty-philoso-programmer in Australia. Before that I maintained my blog by hand and I’m not sure I have copies.

A quick “thanks” to my colleague Tom Natt who helped me fix my .htaccess changes so that old links and Google searches still work (also thanks to Mark Pilgrim’s Cruft-free URLs in Movable Type which I could rather tragically remember as a useful post from five years ago).

Parsing Atom with libxml2

Whilst trying to parse some Atom (my Blogger backup) with libxml2, I appear to have run into the same problem that Aristotle hit two years ago in “XPath vs the default namespace: easy things should be easy”: you can’t match on the default namespace in XPath.


>>> import libxml2
>>> doc = libxml2.parseFile("/home/pip/allposts.xml")
>>> results = doc.xpathEval("//feed")
>>> len(results)
0

Unbelievable.

Immediate potential solutions:

  1. XSLT my Atom document to add “atom:” to all my default-namespaced elements
  2. use an entirely different method of parsing
  3. remove the atom namespace declaration from the top of the file
  4. something else

Option 3 looks like the only sane route to take in this one-off job, but I’m quite surprised that I have to do it at all.

Actually, this turned out to be my fault – I was parsing two documents at the same time, one with a namespace declaration set correctly (for parsing my Atom file), and one with no namespaces set. I used the latter for my xpath query, which clearly didn’t work – many thanks to everyone who left a comment!
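For anyone who hits the real version of this problem: the standard fix is to bind the default namespace to a prefix of your own choosing and use that prefix in the query. With libxml2 that means creating a context via xpathNewContext and calling xpathRegisterNs on it before xpathEval("//atom:feed"). The same trick in the standard library’s ElementTree, shown here because it needs no extra packages:

```python
import xml.etree.ElementTree as ET

# Map our chosen prefix to the Atom namespace URI.
NS = {"atom": "http://www.w3.org/2005/Atom"}

doc = """<feed xmlns="http://www.w3.org/2005/Atom">
  <title>allposts</title>
  <entry><title>first</title></entry>
</feed>"""

root = ET.fromstring(doc)

# An unprefixed path matches nothing, just like //feed did in libxml2:
assert root.findall("entry") == []

# Bind the default namespace to an "atom" prefix and it works:
entries = root.findall("atom:entry", NS)
assert len(entries) == 1
```

The prefix is purely local to your query; it doesn’t have to match anything declared in the document.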

HOWTO download your Google Reader starred items

How to create a backup of your starred items in Google Reader, should the need ever arise:

A screenshot of the Google Reader settings page

  • Log in to Google Reader
  • Click ‘Settings’ in the top-right of the window
  • Click the ‘Tags’ tab
  • Check the “Your starred items” box
  • Click the “Change sharing…” dropdown box and select “public”
  • Now click ‘View public page’, which has appeared to the right of “Your starred items” (this will open in a new window by default)
  • In the right-hand column there is a link to a feed. Right-click it and save it to disk.

Congratulations, you now have an Atom feed of your starred items to do with as you wish. With any luck it will even be valid.
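Well-formedness, at least, is easy to check before you trust the backup. A quick sketch – full Atom validity is a bigger job, best left to something like the W3C feed validator:

```python
import xml.etree.ElementTree as ET

ATOM_FEED = "{http://www.w3.org/2005/Atom}feed"

def looks_like_atom(path):
    """True if the file parses as XML and its root element is an Atom <feed>."""
    try:
        return ET.parse(path).getroot().tag == ATOM_FEED
    except ET.ParseError:
        return False
```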