Government Digital Service developer docs

GDS has an absolute boat-load of repositories on GitHub, and no clear way to access or browse them.

One of those repositories, govuk-developers, is an ordered index to them, letting you find exactly what you’re looking for. It’s a static site generated with Middleman and served from Heroku, except that you need a GDS login to access it (presumably for cost reasons).

In the description for that repo it says “This project is being rebuilt and is undergoing rapid change”, but there haven’t been any changes for nearly a week, so I’ve forked it.

It’s now running in my Heroku account with no username and password (there’s a prompt, but it accepts empty credentials), so you too can browse the GDS repos and docs in a useful way.

Reading feeds again

When Google Reader went away I raged and sulked and tried half a dozen services touted as replacements, but eventually I installed FeedDemon on the computer I used most often and read my feeds there daily.

A few weeks ago I found the Newsfold feed reader app for Android, logged back in to my Feedly account and imported the OPML from FeedDemon.

Since then I have been happily checking my feeds every day on my phone, more or less like I used to with the Google Reader mobile app. I only have a dozen or so going at the moment, but hopefully this will slowly step up and I will actually know what’s going on in the world again without having to suffer Twitter.

The Guardian’s Christmas Gift Guide – on the web, not of the web

Screenshot of The Guardian's Christmas Gift List app

Congratulations to The Guardian for making me write a blog post.

It’s coming up to Christmas, and like all newspapers The Guardian have a Christmas Gift Guide. It actually looks quite nice, with well-sized images, responsive layout, good filtering and obligatory animated falling snow. When I opened it there were a couple of items which I was interested in buying. So far, so Christmas.


The whole thing is very similar to Amazon’s ‘stream’ app (“Interesting finds, updated daily”) for browsing items. The difference comes when you actually click an item.

On both the Guardian and Amazon sites the page is dimmed and a modal appears. This gives you more information about the product (Amazon’s one also has the decency to support pressing the Escape key to dismiss the modal), and a link to go through to the main product page.

Except that on the Guardian app, that’s not quite what happens.

Where Amazon’s pop-up display has a nice <a href="..."> for its link, meaning that you can open items in a new tab, send them to other tools, copy the link address and so on, the Guardian’s one has a <button>. A button with an attribute of data-target="...".

This means that all you can do is click it and hope for the best. Will it change the location of your current tab, taking you away from the gift list? (no) Will it open a product page in a new tab? (yes) If you’ve already clicked one item in the list and then click another will it open another tab or replace the existing one? (it replaces the existing one – you can’t have two tabs with two different items open at the same time).

As a user, this is incredibly frustrating, especially for something which I was so prepared to like.

Can I therefore take a short moment to suggest to web developers everywhere that if they are adding a link to a web page, they do so by using the markup intended to provide a link to a web page, and not jury-rigging their own link mechanisms with JavaScript on top of whatever element they think they can style the best?

Just in case anyone was wondering, I did go and look up the implementation to see if this was the fault of a crazy JavaScript framework. As far as I can tell, it’s not. Check out lines 237 and 238 of https://giftguide.imd1.uk/js/gui/xmas/view/singularProductView.js:

var targetUrl = $(event.currentTarget).attr('data-target');
window.open(targetUrl,"Christmas gift");
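For what it’s worth, the tab-replacing behaviour follows directly from that call: window.open’s second argument is a window *name*, not a title, so every click targets the same named window. Here is a minimal simulation of that registry behaviour (a plain Map standing in for the browser; not actual browser code):

```javascript
// window.open(url, name) treats its second argument as a window name.
// Calls that reuse a name re-navigate the existing window rather than
// opening a new one. A plain Map stands in for the browser's registry.
const namedWindows = new Map();

function openWindow(url, name) {
  if (namedWindows.has(name)) {
    namedWindows.get(name).url = url; // existing named window is re-navigated
  } else {
    namedWindows.set(name, { url });
  }
  return namedWindows.get(name);
}

openWindow('https://example.com/gift/1', 'Christmas gift');
openWindow('https://example.com/gift/2', 'Christmas gift');
console.log(namedWindows.size); // 1 – the second click replaced the first tab
```

So every product shares one “Christmas gift” window, which is exactly why you can’t have two items open at once.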

Sigh.

Render Conference 2016

Last week I was in Oxford for RenderConf.

It was brilliant.

Render Conference is two packed days of technical front-end talks from a seriously awesome line-up of speakers.

And when they said awesome, they meant it. Go check out the schedule!

My top three talks (in no particular order) were:

Val Head (@vlh): Designing meaningful animation (about web animation from Disney to CSS bezier curves)

Ashley G. Williams (@ag_dubs): If you wish to learn ES6/2015 from scratch, you must first invent the universe (Computer Science theory, Picasso and teaching)

Frederik Vanhoutte (@wblut): Rendering the obvious (Rainbows, categorisation, and Beethoven’s exploding head)

Between these three talks alone there was creativity, practical advice, inspiration and a call to change how you look at the world. Across the other 17 talks you could get everything from demoscene coding in WebGL to how to use the <canvas> element to print a new scarf. There was a really great range of presentations.

All the talks will apparently be available on http://lanyrd.com/2016/renderconf/coverage/ shortly, and the videos will be on Vimeo. My photos, which are mostly of the nice things I saw in between talks because I was too busy listening, are on Flickr.

As well as being an exciting, inspirational conference though, the whole thing was just executed really, really well. Regular breaks, pastries and drinks, retro gaming (enjoying losing at videogames is my superpower), provided lunch, table tennis, skittles, 3D printing pens, hacking competitions and more meant there was always something to do and always someone interesting or cool to talk to.

There was a party at the venue on the first evening where the food even came to us – freshly-made burritos and pizzas from vans outside the venue complemented the wine, beers and alcohol-free options inside. Hats off to all the organisers!

I think my only complaint was that given this was my first visit to Oxford, I was disappointed to find out that not every single building is a 1,000-year old cathedral. Absolutely gutted.

Probably lucky though, I hear they have terrible wifi. See you next year, Render!

Pimoroni Flotilla

“Flotilla – Friendly Electronics for All” is the headline, which sounds pretty damn good to me, someone who is permanently afraid of blowing up his Arduino and other kit by connecting something the wrong way around.

So this kit, which has a ton of sensors and outputs and a central dock for making them talk to each other, all working over USB, is perfect for me.

We bought the Mega Treasure Chest starter kit and it arrived the other day, but it’s early enough in its life that there are very, very few instructions on how to use it just yet. There are three “secret” URLs for getting going (the whole thing is controlled by a Raspberry Pi, so you’ll need one of those too!):

  1. http://flotil.la/start/ – instructions on setup are here
  2. http://flotil.la/rockpool – the web interface for controlling your flotilla is here
  3. http://flotil.la/cookbook/ – the two demos which do exist are here

We also used a first version of the “Getting Started” booklet to see us through the initial stages.

Once you’ve done those, you can watch the video on how to build your own robot, which you can control over your network using all the different sensors and controllers you get. My 6yo was able to put the whole thing together himself and learn how to control it predictably in under two hours, which was brilliant! We did actually run out of the plastic screws used to hold it all together – I think someone in the warehouse was tired and we ended up with fewer than expected – but we just about managed to get it all assembled and have been assured more are on their way!

The completed robot! (upside down!)

I was really glad to discover that the flotilla daemon runs on a Kano, which we’ve had for a couple of years now and which my son loves, because the forums had made me a bit nervous about which OSes were supported.

We’re really looking forward to the rest of the instructions arriving, but my son is already completely delighted with the fact that he has built, and can control, his own robot! I just want to know if the flotilla daemon will run from a Pi Zero powered by a power bar so we can put them on the back of the robot and cut the cord it currently needs to the regular Pi!

Estimating is good for your health

Matt Jukes has written a post called Don’t do Agile. Be agile. Or something. where he describes how estimating stories has not worked for them at the Office for National Statistics.

I think there is a more important reason to do estimation than just release planning, and that’s people management.

Over time, estimating allows you to create a sprint which is neither too daunting nor too trivial, and to build a good team cadence. A sprint with too much work in it can be depressing and ultimately slow down delivery; a sprint with too little runs the risk of falling foul of Parkinson’s law.

You could argue that if you have a good, committed team who are always up for the challenge of whatever’s put in front of them then it doesn’t matter – I’d argue the opposite and that you’re at risk of burning out your team.

Velocity, paired with your observations of the team, will give you the data to be able to work out if they are maintaining a sustainable pace or not.

As much as I am not a fan of Pivotal Tracker, it’s taught me that the burndown is not as important as having a graphical view onto historical velocity. Did the team deliver fewer points this sprint? Why? Was it story-related or human-related? If their velocity has been solid and steady for a long time, are they ready to step up, or do they need a rest? Most managers have a gut feel for these things, but having the data to be able to back it up makes it easier to discuss in retrospectives and planning sessions.
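To make that concrete, here is a tiny sketch of the kind of check a velocity history enables – the numbers and the 20% threshold are invented for illustration, not a recommendation:

```javascript
// Hypothetical points delivered per sprint, most recent last.
const history = [21, 23, 22, 24, 14];

const mean = arr => arr.reduce((a, b) => a + b, 0) / arr.length;

// Compare the latest sprint against the average of the sprints before it.
const baseline = mean(history.slice(0, -1));
const latest = history[history.length - 1];

// An arbitrary 20% dip is the trigger here; a real team would tune this.
if (latest < baseline * 0.8) {
  console.log('Velocity dipped – worth asking why in the retrospective.');
}
```

The point isn’t the arithmetic, which is trivial; it’s that having the numbers at all turns “I think the team is struggling” into a question you can actually put on the table.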

Estimation is not always easy, but so long as you don’t put too much importance on getting the “right” estimate, and so long as it’s really treated like an estimate, it can be extremely valuable for both release planning and maintaining the long-term health of your team.

Brick wall

So I wanted to parse the feed I retrieved yesterday, and now that I know a bit more about how the package system works, I just searched NuGet and found SimpleFeedReader:

$ dnu install SimpleFeedReader

At this point Visual Studio Code prompts me that it needs to run the `restore` command, so I click the button and in theory SimpleFeedReader becomes available from my code.

In theory, anyway. In practice it doesn’t look like it works in .NET Core, so I decided to try to use the System.Xml.XPath package to parse the feed myself. After a few hours of trying to include the dependencies in project.json correctly, I simply couldn’t do it and ended up going to install Visual Studio. I feel very disappointed about this – I was hoping I could treat .NET as just another language. I don’t need a specialised editor to write small scripts in almost any other language, but the docs around .NET Core just aren’t there yet. Hopefully I will be able to use Visual Studio to work out what I was getting wrong.

Running .NET code from Visual Studio Code

“Code” is the name of Microsoft’s new lightweight code editor. It’s been out since April but it seemed like this project was a good opportunity to use it properly.

It has a command palette much like the one in Sublime Text, but I couldn’t find a command to run my small .cs file.

Lots of the docs talk about using Grunt and Gulp to run your tasks, but that seemed like overkill for running a single command to build and run my code.

It turns out that you can configure your project.json with a list of commands which then appear in the palette.

A couple of small lines and now I can run my code directly from inside Code, hurray!
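For reference, the shape is roughly this – the command name and what it maps to are illustrative, so check the dnx documentation for the exact values your project needs:

```json
{
  "commands": {
    "run": "run"
  }
}
```

Each entry under "commands" then shows up in Code’s palette, ready to execute against the current project.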

Retrieve an Atom feed in .NET Core

After yesterday’s adventure and recent forehead-smacking, this seemed like an appropriate, and small, goal.

After some Googling I found RestSharp, which bills itself as a “Simple REST and HTTP API Client for .NET”. Sounds good.

I tried to work out how to add this to my project.json but couldn’t find any documentation on what it should look like, even after I remembered about things like NuGet.

So I guessed and typed dnu install restsharp, which seemed to fetch the right files. My project.json didn’t seem to have updated though, so I then did a dnu restore. This updated the file, although that might have been overkill and the delay just a timing issue in my editor.
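For anyone else guessing: the package ends up in project.json’s dependencies section, roughly like this (the version number is illustrative – use whatever the install actually pinned):

```json
{
  "dependencies": {
    "RestSharp": "105.2.3"
  }
}
```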

I now have a small file which will retrieve an Atom feed and dump it to the screen.

Another small step tomorrow.

Getting started with .NET Core

It’s been a few years since I last used .NET, so I thought I’d give it a go. It was a slightly more eventful 30 minutes than I’d have liked, so I thought I’d write it up.

I started by trying to install Visual Studio, but half an hour later it was still under 50% done. Since I only wanted to write scripts rather than applications, I started looking for smaller getting-started guides and came across Microsoft’s guide to getting started with .NET Core. I took a brief look at the new shape of the .NET stack, liked what I saw, and went back to the guide.

After installing the .NET Version Manager (dnvm) I had to restart PowerShell for the changes to my PATH to take effect. I also had to open an Administrator PowerShell window and run set-executionpolicy remotesigned because I hit the “Running scripts is disabled on this system” error message.

The code samples from the guide can’t simply be copied and pasted into an editor – each is actually a single line of code with JavaScript applied to make it appear as though it is spread across multiple lines. This is pretty disappointing: I thought something was wrong with my local editor configuration until I hit view source on the page.

I was using Visual Studio Code, and at this point was also disappointed that SHIFT+ALT+F didn’t format the C# code for me, although it did format the JSON.

I’d closed my PowerShell window by this point, and when I opened a new one and ran dnu restore I got an error message about it not being on my path (The term ‘dnu’ is not recognized as the name of a cmdlet, function, script file, or operable program.). Running dnvm upgrade seems to have fixed this and I can now run the commands from both PowerShell and Command Prompt!

My “Hello World” did at least run first time. Phew!

I should now have the infrastructure to get properly going, but this was a much rougher intro than I was hoping for.