Estimating is good for your health

Matt Jukes has written a post called “Don’t do Agile. Be agile. Or something.” in which he describes how estimating stories has not worked for them at the Office for National Statistics.

I think there is a more important reason to do estimation than just release planning, and that’s people management.

Over time, estimating allows you to create a sprint which is neither too daunting nor too trivial, and to build a good team cadence. A sprint with too much work in it can be depressing and ultimately slow delivery; a sprint with too little risks falling foul of Parkinson’s law.

You could argue that if you have a good, committed team who are always up for whatever challenge is put in front of them, then it doesn’t matter. I’d argue the opposite: you’re at risk of burning out your team.

Velocity, paired with your observations of the team, will give you the data to be able to work out if they are maintaining a sustainable pace or not.

As much as I am not a fan of Pivotal Tracker, it’s taught me that the burndown is not as important as having a graphical view onto historical velocity. Did the team deliver fewer points this sprint? Why? Was it story-related or human-related? If their velocity has been solid and steady for a long time, are they ready to step up, or do they need a rest? Most managers have a gut feel for these things, but having the data to be able to back it up makes it easier to discuss in retrospectives and planning sessions.

Estimation is not always easy, but so long as you don’t put too much importance on getting the “right” estimate, and so long as it’s really treated like an estimate, it can be extremely valuable for both release planning and maintaining the long-term health of your team.

Brick wall

So I wanted to parse the feed I retrieved yesterday. Now that I know a bit more about how the package system works, I just searched NuGet and found SimpleFeedReader:

$ dnu install SimpleFeedReader

At this point Visual Studio Code prompts me that it needs to run the `restore` command, so I click the button and in theory SimpleFeedReader becomes available from my code.
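Going by the package’s README, the usage I was hoping for is something along these lines. I’m writing the type and member names from memory, so treat them as approximate, and the feed URL is just a placeholder:

```csharp
using System;
using SimpleFeedReader;

public class Program
{
    public static void Main()
    {
        // FeedReader fetches and parses the feed in one call
        // (names from memory of the README; the URL is a placeholder)
        var reader = new FeedReader();
        var items = reader.RetrieveFeed("http://example.com/feed.atom");

        foreach (var item in items)
        {
            Console.WriteLine(item.Title);
        }
    }
}
```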

In theory, anyway. In practice it doesn’t look like it works in .NET Core, so I decided to try using the System.Xml.XPath package to parse the feed myself, but after a few hours of trying to include the dependencies in project.json correctly, I simply couldn’t do it and ended up installing Visual Studio. I feel very disappointed about this – I was hoping I could treat .NET as just another language. I don’t need a specialised editor to write small scripts in almost any other language, but the docs around .NET Core just aren’t there yet. Hopefully I will be able to use Visual Studio to work out what I was getting wrong.
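For the record, the sort of parsing I was trying to get running looks something like this on the full framework (the file name and element names here are placeholders):

```csharp
using System;
using System.Xml;
using System.Xml.XPath;

public class Program
{
    public static void Main()
    {
        // Load a previously saved copy of the feed
        var doc = new XPathDocument("feed.atom");
        var nav = doc.CreateNavigator();

        // Atom elements live in a namespace, which is easy to trip over
        var ns = new XmlNamespaceManager(nav.NameTable);
        ns.AddNamespace("atom", "http://www.w3.org/2005/Atom");

        // Print the title of each entry in the feed
        foreach (XPathNavigator title in nav.Select("//atom:entry/atom:title", ns))
        {
            Console.WriteLine(title.Value);
        }
    }
}
```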

Running .NET code from Visual Studio Code

“Code” is the name of Microsoft’s new lightweight code editor. It’s been out since April but it seemed like this project was a good opportunity to use it properly.

It has a command palette much like the one in Sublime Text, but I couldn’t find a command to run my small .cs file.

Lots of the docs talk about using Grunt and Gulp to run your tasks, but that seemed like overkill for running a single command to build and run my code.

It turns out that you can configure your project.json with a list of commands which then appear in the palette.
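From memory, the entry looks something like this: the name on the left is what shows up in the palette, and here I’m assuming a console project where the command just runs the project itself.

```json
{
  "commands": {
    "run": "run"
  }
}
```

With that in place, something like dnx . run from a prompt should do the same thing as picking the command from the palette.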

A couple of small lines and now I can run my code directly from inside Code, hurray!

Retrieve an Atom feed in .NET Core

After yesterday’s adventure and recent forehead-smacking, this seemed like an appropriate, and small, goal.

After some Googling I found RestSharp, which bills itself as a “Simple REST and HTTP API Client for .NET”. Sounds good.

I tried to work out how to add this to my project.json but couldn’t find any documentation on what it should look like, even after I remembered about things like NuGet.

So I guessed and typed dnu install restsharp, which seemed to fetch the right files. My project.json didn’t seem to have updated though, so I then ran dnu restore. This updated the project file, though that might have been overkill; it may just have been a timing issue with my editor picking up the change.
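For anyone else wondering what the entry should look like: the dependencies section of project.json is just a name-to-version mapping, something like this (the version number here is a guess; yours will be whatever dnu fetched):

```json
{
  "dependencies": {
    "RestSharp": "105.2.3"
  }
}
```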

I now have a small file which will retrieve an Atom feed and dump it to the screen.
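It only needs a handful of lines; a sketch of the shape of it, with the feed URL as a placeholder:

```csharp
using System;
using RestSharp;

public class Program
{
    public static void Main()
    {
        // Point the client at the feed's host, then request the feed's path
        // (the URL here is a placeholder)
        var client = new RestClient("http://example.com");
        var request = new RestRequest("feed.atom", Method.GET);

        // Execute the request and dump the raw Atom XML to the console
        var response = client.Execute(request);
        Console.WriteLine(response.Content);
    }
}
```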

Another small step tomorrow.

Getting started with .NET Core

It’s been a few years since I last used .NET, so I decided to give it a go. The first 30 minutes were slightly more eventful than I’d have liked, so I thought I’d write them up.

I started by trying to install Visual Studio, but half an hour later it was still under 50% done. Since I only wanted to write scripts rather than applications, I started looking for smaller getting-started guides and came across Microsoft’s guide to getting started with .NET Core. I took a brief look at the new shape of the .NET stack, liked what I saw, and went back to the guide.

After installing the .NET Version Manager (dnvm), I had to restart PowerShell for the changes to my PATH to take effect. I also had to open an Administrator PowerShell window and run set-executionpolicy remotesigned, because I’d hit the “Running scripts is disabled on this system” error message.

The code samples from the guide can’t simply be copied and pasted into an editor – each is actually a single line of code, with JavaScript applied to make it appear spread across multiple lines. This is pretty disappointing; I thought something was wrong with my local editor configuration until I hit view source on the page.

I was using Visual Studio Code, and at this point was also disappointed that Shift+Alt+F didn’t format the C# code for me, although it did format the JSON.

I’d closed my PowerShell window by this point, and when I opened a new one and ran dnu restore I got an error message about it not being on my path (“The term ‘dnu’ is not recognized as the name of a cmdlet, function, script file, or operable program.”). Running dnvm upgrade seemed to fix this, and I can now run the commands from both PowerShell and Command Prompt!

My “Hello World” did at least run first time. Phew!

I should now have the infrastructure to get properly going, but this was a much rougher intro than I was hoping for.