After a few more moments of thought following my last post, I imagine what we could do is implement OpenID consumption so that, rather than forcing people to be pre-approved before they get a light-weight account, they can be post-approved.
That is to say: if you’ve registered for a light-weight account using an OpenID on a trusted server (say, anything in the .ac.uk domain) then your account is processed automatically and you can log in immediately. You still have to provide the username or email address of someone with a full Bath account, who will get an email asking whether they really do vouch for you, and the account is revoked after five days if it hasn’t been confirmed.
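A minimal sketch of that post-approval logic, assuming nothing about our real systems (the function names and the trusted-suffix list are my own invention for illustration):

```python
from datetime import date, timedelta
from urllib.parse import urlparse

# Hypothetical trusted provider suffixes -- the idea above is that
# anything under .ac.uk could be trusted for immediate post-approval.
TRUSTED_SUFFIXES = (".ac.uk",)

def auto_approve(openid_url):
    """Process the account immediately if the OpenID provider's
    hostname falls under a trusted domain."""
    host = urlparse(openid_url).hostname or ""
    return host.endswith(TRUSTED_SUFFIXES)

def should_revoke(created, vouched, today=None):
    """Revoke a post-approved account if no native user has vouched
    for it within five days of its creation."""
    if today is None:
        today = date.today()
    return not vouched and (today - created) > timedelta(days=5)
```

So an applicant with an OpenID at, say, `id.bath.ac.uk` gets in straight away, while the five-day clock runs until their sponsor confirms.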
Plenty of people have said in the last few days that there are more OpenID providers than there are OpenID-consuming services.
I have some basic code which, if enabled, would grant everyone at the university where I work an OpenID. That isn’t the challenge, though; where could we turn on OpenID consumption in our services?
We offer a lightweight account to external users so that researchers can use the tools we provide to collaborate with researchers here. However, we don’t give them out freely: each account application needs a native user who supports it. This gives us some technological and social-contract guarantees about who we grant access to.
It’s also a pain in the ass for anyone who just wants to sign in to look at some research data or whatever – they have to apply, or ask someone to apply for them, first.
Universities in the UK have signed up to rolling out Shibboleth in the next few years which should enable anyone from one university to sign in to the services at another university. I wonder how many of the lightweight accounts we currently grant this would take care of?
So, what are possible use-cases for OpenID at a University?
Two of my workmates are currently at the Future of Web Apps conference in London.
Not only have they had to pay for their wifi (which they were using to update a page on our wiki with details of the talks) but the connection is terrible: very slow and keeps dropping – they’ve given up trying to use it, despite having already paid. That’s pretty bad.
Maybe someone should tell the event organisers that the future of web apps probably involves working internet connections?
By the way, the wifi at Apachecon EU 2006 was free, always-on and brilliantly useful.
Mark‘s linked to David Pashley’s ogg player roundup. For what it’s worth, I’ve been looking into this recently too, except I had some additional restrictions which, given the players David puts on show, leave me with only one choice (the one I’d previously selected in my own research): the Samsung YP-U2 (1GB for £61 or 2GB for, er, £58).
My additional restrictions were:
- rechargeable internal battery
- must not need cable to connect to PC
The player I’d be replacing is the Creative Zen Nano Plus 1GB which I’d give three stars for general alright-ness.
Whether I can actually justify spending ~£60 on a new player which will only give me a minor playback upgrade (only a few tens of tracks in my 20GB compressed-format music collection are in ogg) and some usability benefits (no more lugging a mini-USB cable and spare batteries around) remains to be seen.
In fact, looking back, I’ve been wanting this player since June 2006. This is reasonably depressing – does this really mean there have been no decent new ogg-capable players for at least eight months?
A mental reminder: before learning Ruby, Rubyful Soup and FeedTools and writing your own parser, first check whether the new owner of a service you use provides RSS feeds for searches.
There’s two hours I’m not getting back.
There is little wiki syntax interoperability. In fact, it’s so bad that there are dedicated libraries for converting between almost every wiki system, the best probably being a Perl library called HTML::WikiConverter, which also has an online demo and can convert HTML into sixteen different syntaxes. Sixteen!
There have been several efforts over the years to come up with a common syntax, and they’ve all petered out. The latest is called Creole, and at their last workshop Ward Cunningham was on the panel. In their own words:
Creole is a common wiki markup language to be used across different Wikis. It’s not replacing existing markup but instead enabling wiki users to transfer content seamlessly across wikis, and for novice users to contribute more easily.
They also have a reasonably impressive list of wiki engines with plugins supporting this interchange syntax.
You can see the wiki syntax they propose in the latest version of the spec (0.4 at the time of writing).
It’s not too bad, so long as you can read words mixed between five vertical bars [[a bit|like this]] [[and|then]] [[maybe something|like this]]. This is obviously to maintain the “all formatting is double-character” rule and to allow people to keep putting things in square brackets. Which of course they do all the time. Or, in the several thousand wiki pages I’ve seen, twice.
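For reference, the core of the proposed markup looks roughly like this (paraphrased from memory of the 0.4 spec, so treat it as illustrative rather than definitive):

```
//italics// and **bold**
= Heading =
* bullet list item
# numbered list item
[[WikiPage|link text]]
[[http://example.com/|external link]]
{{{ verbatim, no wiki markup }}}
```

Note the double-character theme throughout: slashes, asterisks, brackets and braces all come in pairs.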
At least it doesn’t use MediaWiki’s markup for italics and bold. Visual clutter is crap.
The computer mouse is one of the worst human-computer interaction devices ever invented and I hope it dies a rapid death.
Every year or so I explode with fury at the absolute inadequacy of the mouse. It forces my hand to adopt some permanently cramped expression of agony only emulated in the natural world when you fall into a pit of lava and your outstretched arm is the last thing to be consumed by the all-devouring magma.
Not only that but it forces me to stretch out for it, my arms sitting as far apart from one another as if they were being held back by heavies as I’m eviscerated for not paying my loan shark in time.
Even then, I have to move my hand in ever smaller amounts, trying to pinpoint tiny little places on a screen where clicking one of the buttons underneath my fingers will actually perform some action. This never happens anywhere else in real life: it’s freaky and unnatural.
It’s absolute rubbish.
My anger has only built since I got my tablet PC, Nintendo DS and Nintendo Wii. The Wii in particular has a pointing and navigation device which is somewhere near what I might normally do; you know: up, down, left, right, select an item. These things aren’t that tough. I promise you I don’t need five buttons on my mouse.
I would guess that most of my time in front of a computer is spent either writing code or reading web pages. When I’m writing code I’m almost permanently using the keyboard. I can understand why this is. When I’m reading, why force me to hover inanely over my keyboard and mouse so that I can scroll up and down the page or move between links? Why not let me sit back with a little wireless widget which lets me just scroll up and down? Or give me a keyboard which dual-functions as a touchpad, or anything, something, just let me throw my mouse away and never suffer it again.
So here, yet again, begins my annual attempt to do without a mouse for as long as I can. My desktop aggregator, BlogBridge, has already proved itself a loser (because you can’t open links in posts in a browser window without the mouse). Ubuntu at home would be a lot easier if the Deskbar history ordered items by which one I run most often. More comments like this to come, I rather suspect.
The latest Atom-over-XMPP IETF draft expired two months ago.
Speakers and a microphone: Hm, remind me not to leave my rabbit near my mic.
Or to install Vista.