How much time is it sensible to track? Who needs this information?
At work we use Trac to look after our projects. It’s very useful for what we do because a) it’s free, b) it has tight Subversion integration, c) it has a wiki and d) it has a customisable issue tracker. It does also have downsides, like its atrociously unfriendly wiki syntax, but seeing as we weren’t using anything at all before, it’s a step in the right direction.
By default, Trac doesn’t do any kind of time management, so in a couple of projects we’ve added a custom time_estimate field to the issue tracker.
Trac is also useful to us because it generates iCalendar files for project roadmaps and RSS files for just about everything else.
So here’s the problem:
- I have multiple projects
- each project has its own Trac install
- each Trac install has a list of the issues assigned to me and the time estimate for each one
- I can retrieve and aggregate this data in order to find out how busy I’m going to be (we don’t store dates at the moment, but that could be done)
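The retrieval step above can be sketched in a few lines. Trac’s query module can export results as CSV (append format=csv), so, assuming our custom time_estimate field holds hours and substituting real project URLs for the placeholders below, totting up my workload looks something like this:

```python
# Sketch: total up time estimates across several Trac installs via the
# CSV export of the query module. "time_estimate" is our custom field;
# the URLs and owner name are placeholders, not real installs.
import csv
import urllib.request


def total_estimate(csv_text):
    """Sum the time_estimate column of a Trac CSV query export."""
    reader = csv.DictReader(csv_text.splitlines())
    return sum(float(row["time_estimate"] or 0) for row in reader)


def hours_assigned_to(owner, base_url):
    """Fetch the owner's open tickets as CSV and total their estimates."""
    url = (base_url + "/query?owner=" + owner
           + "&status=!closed&format=csv&col=id&col=time_estimate")
    with urllib.request.urlopen(url) as resp:
        return total_estimate(resp.read().decode("utf-8"))
```

Then `sum(hours_assigned_to("me", url) for url in project_urls)` gives a crude how-busy-am-I number; add dates later if we start storing them.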
So, what about work I do that isn’t on a project?
- investigating technology X
- mentoring John
- a day at a conference
Where do these days go?
Some suggested solutions:
- We use calendaring software; should I put completed items in there, and then use that as another data source to compile with my Trac data?
- should we use “proper” project management software (OpenWorkbench, Basecamp), dropping the Trac time_estimate field?
- if so, how do we keep the PM software and our Trac issue list in sync?
- should we switch to a mega-simple approach such as Joel Spolsky’s Painless Software Schedules using Excel?
At the moment I don’t really know what to do. I suspect that we should use something which imports Trac milestones and the corresponding issues, and use that tool to schedule the tasks. That would also give us the option to add in custom, one-off tasks like the ones listed above. I also suspect that we should do as Joel does: ignore dependencies and let the developers actually sort that out.
In fact, the more I think about it, the more I think that we should get the latest version of Jakarta POI and wire up Excel to Trac to keep track of it all. It’s got to be the easiest solution, hasn’t it?
Places is the new and exciting part of Firefox 2.
Except that now it’s the new and exciting part of Firefox 3.
Fortunately, Firefox 2 is keeping the mozStorage backend (i.e. SQLite), so all that data mining that I promised should still be available.
Here’s the relevant mailing list thread.
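As a taste of that data mining: since the backend is just an SQLite file, Python’s sqlite3 module will read it directly. The path and schema here are assumptions based on the places.sqlite file, whose moz_places table carries url, title and visit_count columns:

```python
# Sketch: query a Places database (an SQLite file) for the pages
# you visit most. Table and column names assume the moz_places
# schema (url, title, visit_count); point db_path at places.sqlite
# in your Firefox profile directory.
import sqlite3


def most_visited(db_path, limit=10):
    """Return (url, visit_count) pairs for the most-visited pages."""
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute(
            "SELECT url, visit_count FROM moz_places "
            "ORDER BY visit_count DESC LIMIT ?", (limit,)).fetchall()
    finally:
        conn.close()
```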
Remove Mark Pilgrim‘s sidebars and make the content full-width. Not perfect, but it works:
width: 90% !important;
padding: 0 !important;
padding-right: 10% !important;
I use the Stylish Firefox extension for maintaining site-specific CSS.
I appear to have timed my wxWidgets/XMPP-enabled three-paned aggregator (which uses Planet as part of the backend) perfectly.
There’s a little bit going around at the moment about microformat validation; for example Norm Walsh on Validating microformats, Bill de hÓra thinks it matters Not a whit and the mailing list is going apeshit.
Is it true? Does validation really not matter? If, as Bill says, it matters at publication time, then surely it is important.
It’s important to note the stated intention of microformats: to be “designed for humans first and machines second”. Whilst this might be true for the simple microformats like rel-tag, rel-nofollow and so on, it’s certainly not true for the compound microformats like hCard, for which there are already multiple creation tools. I would go further and say that writing compound microformats by hand, as part of a normal HTML page, is harder than writing a bare XML representation of your data.
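To illustrate, compare a minimal hand-written hCard (class names as per the hCard spec; the person and values are invented) with a bare XML rendering of the same data — the hCard makes you thread the semantics through presentational markup, the XML just states them:

```html
<!-- hCard: data carried in class attributes on presentational elements -->
<div class="vcard">
  <span class="fn">Joe Bloggs</span>,
  <span class="org">Example Ltd</span>,
  <a class="email" href="mailto:joe@example.com">joe@example.com</a>
</div>

<!-- vs. a bare (hypothetical) XML representation of the same data -->
<contact>
  <name>Joe Bloggs</name>
  <org>Example Ltd</org>
  <email>joe@example.com</email>
</contact>
```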
To be clear, I do think that uF are a good idea. But the humans-firstness, in the context of uF like hCard, is a joke; and with no validation at all of the uF (as opposed to the HTML) beyond seeing whether it works in multiple tools, authoring them is like actually wanting to smash your face repeatedly into a mirror.
I’ll accept a refutation when someone writes the equivalent of Mark Pilgrim’s Universal Feed Parser.
Ryan King says:
See, one of the problems with XML-on-the-web is the dialects. See Tantek’s Tower of Babel Problem for more explanation.
I know I might be sounding a bit idealistic to suggest that people can actually use a shared vocabulary, but that’s how the web works.
Yes, it’s called HTML and it’s a formal spec.
After a couple of years of trying countless tools, and despite the BBC’s ever-expanding list of podcasts, I think I’ve finally hit on the perfect combo for downloading those Radio 4 “listen again” RealPlayer files, or entire streams, from the BBC and converting them to mp3 (and ogg).
You will need two tools:
- Net Transport
- Switch
Both of these apps are Windows GUI tools, easy to use and free (shareware).
You can copy and paste .ram URLs into Net Transport and it will download the .rm or .ra files for you. You can also schedule downloading from a stream so that you can, for example, get The Breezeblock.
Once the download is complete, you can then open the downloaded RealPlayer file or files in Switch and convert them into mp3 or ogg, which you can then happily transfer over to your iPod or other mp3 player. Brilliant.
A word of warning – I’ve been using Switch for a couple of days, and it’s worked fine, but it does have a 14-day trial period after which the “advanced” features are turned off. The documentation states that this includes converting files from the following formats: .DSS, .SRI, .ACT, .RCD, .REC , and .SHN. Fortunately I’ve never heard of most of these so it shouldn’t be a problem, but if I run into any difficulties after another couple of weeks, I’ll let you know.