Estimating is good for your health

Matt Jukes has written a post called Don’t do Agile. Be agile. Or something., in which he describes how estimating stories has not worked for his team at the Office for National Statistics.

I think there is a more important reason to do estimation than just release planning, and that’s people management.

Over time, estimating allows you to create a sprint which is neither too daunting nor too trivial, and to build a good team cadence. A sprint with too much work in it can be depressing and ultimately slow down delivery; a sprint with too little risks falling foul of Parkinson’s law.

You could argue that if you have a good, committed team who are always up for the challenge of whatever’s put in front of them, then it doesn’t matter. I’d argue the opposite: you’re at risk of burning out your team.

Velocity, paired with your observations of the team, will give you the data to be able to work out if they are maintaining a sustainable pace or not.

As much as I am not a fan of Pivotal Tracker, it’s taught me that the burndown is not as important as having a graphical view onto historical velocity. Did the team deliver fewer points this sprint? Why? Was it story-related or human-related? If their velocity has been solid and steady for a long time, are they ready to step up, or do they need a rest? Most managers have a gut feel for these things, but having the data to be able to back it up makes it easier to discuss in retrospectives and planning sessions.

Estimation is not always easy, but so long as you don’t put too much importance on getting the “right” estimate, and so long as it’s really treated like an estimate, it can be extremely valuable for both release planning and maintaining the long-term health of your team.

Brick wall

So I wanted to parse the feed I retrieved yesterday, and now that I know a bit more about how the package system works, I searched NuGet and found SimpleFeedReader:

$ dnu install SimpleFeedReader

At this point Visual Studio Code prompts me that it needs to run the `restore` command, so I click the button and in theory SimpleFeedReader becomes available from my code.

In theory, anyway. In practice it doesn’t look like it works in .NET Core, so I decided to try to use the System.Xml.XPath package to parse the feed myself. After a few hours of trying to include the dependencies in project.json correctly, I simply couldn’t do it and ended up installing Visual Studio after all. I feel very disappointed about this – I was hoping I could treat .NET as just another language. I don’t need a specialised editor to be able to write small scripts in almost any other language, but the docs around .NET Core just aren’t there yet. Hopefully I will be able to use Visual Studio to work out what I was getting wrong.
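For reference, the sort of System.Xml.XPath approach I was attempting looks roughly like this. It’s a sketch, not the code I actually got working – the file name is illustrative and assumes the feed has already been saved locally:

```csharp
using System;
using System.Xml;
using System.Xml.XPath;

class FeedTitles
{
    static void Main()
    {
        // Load a previously-fetched feed from disk ("feed.xml" is a placeholder)
        var doc = new XPathDocument("feed.xml");
        var nav = doc.CreateNavigator();

        // Atom elements live in a namespace, which trips up naive XPath queries
        var ns = new XmlNamespaceManager(nav.NameTable);
        ns.AddNamespace("atom", "http://www.w3.org/2005/Atom");

        // Print the title of each entry in the feed
        foreach (XPathNavigator title in nav.Select("/atom:feed/atom:entry/atom:title", ns))
        {
            Console.WriteLine(title.Value);
        }
    }
}
```

The namespace manager is the non-obvious part: without it, queries like //entry silently match nothing on an Atom feed.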

Running .NET code from Visual Studio Code

“Code” is the name of Microsoft’s new lightweight code editor. It’s been out since April, but this project seemed like a good opportunity to use it properly.

It has a command palette much like the one in Sublime Text, but I couldn’t find a command to run my small .cs file.

Lots of the docs talk about using Grunt and Gulp to run your tasks, but that seemed like overkill for running a single command to build and run my code.

It turns out that you can configure your project.json with a list of commands which then appear in the palette.
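The commands section is only a few lines. A minimal sketch – the command name and value here are illustrative, and the exact value depends on your project:

```json
{
  "commands": {
    "run": "ConsoleApp"
  }
}
```

With something like that in place, the command shows up in Code’s palette.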

A couple of small lines and now I can run my code directly from inside Code, hurray!

Retrieve an Atom feed in .NET Core

After yesterday’s adventure and recent forehead-smacking, this seemed like an appropriate, and small, goal.

After some Googling I found RestSharp, which bills itself as a “Simple REST and HTTP API Client for .NET”. Sounds good.

I tried to work out how to add this to my project.json but couldn’t find any documentation on what it should look like, even once I remembered about things like NuGet.

So I guessed and typed dnu install restsharp, which seemed to fetch the right files. My project.json didn’t seem to have updated though, so I then did a dnu restore. This updated the file, though that might have been overkill, or just a timing issue in my editor.
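For anyone else stuck on what the file should look like: after the restore, the dependencies section ended up as something like this (the version number is illustrative – use whatever dnu actually fetched for you):

```json
{
  "dependencies": {
    "RestSharp": "105.2.3"
  }
}
```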

I now have a small file which will retrieve an Atom file and dump it to screen.
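The file itself is tiny. A sketch of roughly what it looks like, using the RestSharp API as it stood at the time – the URL and path are placeholders for the feed you actually want:

```csharp
using System;
using RestSharp;

class FetchFeed
{
    static void Main()
    {
        // Base URL and resource path are placeholders
        var client = new RestClient("https://example.com");
        var request = new RestRequest("feed.atom", Method.GET);

        // Fetch the feed and dump the raw Atom XML to the console
        var response = client.Execute(request);
        Console.WriteLine(response.Content);
    }
}
```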

Another small step tomorrow.

Getting started with .NET Core

It’s been a few years since I last used .NET so I thought I’d give it a go. It was a slightly more eventful 30 minutes than I’d have liked, so I thought I’d write it up.

I started by trying to install Visual Studio, but half an hour later it was still under 50% done. Since I only wanted to write scripts rather than applications, I started looking for a smaller getting-started guide and came across Microsoft’s guide to getting started with .NET Core. I took a brief look at the new shape of the .NET stack, liked what I saw, and went back to the guide.

After installing the .NET Version Manager (dnvm) I had to restart PowerShell for the changes to my PATH to take effect. I also had to open an Administrator PowerShell window and run Set-ExecutionPolicy RemoteSigned because I hit the “Running scripts is disabled on this system” error message.

The code samples from the guide can’t simply be copied and pasted into an editor – each is actually a single line of code, with JavaScript applied to make it appear as though it is spread across multiple lines. This is pretty disappointing: I thought something was wrong with my local editor configuration until I hit View Source on the page.

I was using Visual Studio Code, and at this point was also disappointed that SHIFT+ALT+F didn’t format the C# code for me, although it did format the JSON.

I’d closed my PowerShell window by this point, and when I opened a new one and ran dnu restore I got an error about dnu not being on my path (“The term ‘dnu’ is not recognized as the name of a cmdlet, function, script file, or operable program.”). Running dnvm upgrade seems to have fixed this, and I can now run the commands from both PowerShell and Command Prompt!

My “Hello World” did at least run first time. Phew!

I should now have the infrastructure to get properly going, but this was a much rougher intro than I was hoping for.

Why contracting developers are a giant pain in the ass

I have just read Why contracting developers refuse to go permanent. It says:

Working with legacy technology is something the developers I spoke to were not particularly fond of. They explained that these pieces of software are often built using outdated methodologies and poorly documented, if at all.

This is true, but my experience of dealing with several legacy projects which were written by contracting developers is like this:

This one is in Rails from three years ago and is full of CSRFs! This one is in Ember! This one is hard-wired to a five-year-old version of WordPress!

Yeah, it’s definitely the organisations which are the problem.

Bristol Mini Maker Faire

Today I took my 6- and 2-year-olds into town to look around the Bristol Mini Maker Faire. It was really good and comes highly recommended.

We saw the Nao and Baxter robots from Active8 Robotics, a laser cutter from Just Add Sharks in action making press-out catapults, Raspberry Pi-powered wheel and track robots from Dawn Robotics and plenty of other things.

The kids were particularly taken by a couple who had reverse-engineered children’s electronic toys, like the Furby or basic motor-controlled toys, and rigged them up to simple push buttons to make them work.

We also liked seeing how Is Martin Running? works, and playing on the Makey Makey.

They were sadly a bit too young to make their own shonkbot or get a Petduino (needs soldering) but watching the RepRapPro in action was a novelty, and we came away with some printed robot figurines which they treasured!

Many thanks to the chap from Ragworm who valiantly tried to explain how circuit boards are printed to my 6 year old 🙂 it will no doubt be easier to comprehend when our Flotilla kit arrives!

There were lots of other homebrew displays from people who use the Bristol Hackspace, and I’m only sorry I can’t remember the names of their projects, but we all enjoyed it thoroughly!

Here’s looking forward to the next one!

Vertically aligning code

I remember being a new developer and thinking that vertical alignment of code, whilst having some minor upsides, was just too damn ugly to do.

Now I’m much older, and the less time I have to spend parsing someone else’s code before I can see whether it’s correct, the better.

Vertical alignment, the vast majority of the time, does make code vastly easier to read, and although it may have some diff-based downsides, they tend to be one-off rather than perpetual. I know which cost I’d rather bear.
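A contrived sketch of the difference:

```csharp
// Unaligned: the eye has to re-find where each value starts
var host = "localhost";
var portNumber = 8080;
var retryLimit = 3;

// Aligned: names and values scan as two clean columns
var host       = "localhost";
var portNumber = 8080;
var retryLimit = 3;
```

The diff-based downside is that renaming portNumber would touch all three aligned lines; but that cost is paid once, while the readability benefit recurs on every read.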

Just hanging out

I have started working from home one day a week to help make up the hours I lose by taking my son to school in the mornings.

This week, on my day at home, I wanted to have a remote talk with some people in the office. “Easy”, we all thought, “we’ll just use Google Hangouts”. Wrong.

Laggy, constant dropouts, confusing UI. In the end I hung up in frustration, and we IMed to exchange Skype details and used that instead, which was near-perfect.

It turns out that having more than 10 years’ experience of running a product does actually make a difference. Who knew?