This is based on the excellent work of Aaron Hoffman which he’s written up here.
This process adds two git hooks to your repositories which prevent you from committing or pushing to a branch named master.
Git hooks live in each of your git repositories rather than in a global location, and so the first thing to do is create the files which will be copied into each repository when you do a git clone (or a git init in an existing local repository).

```shell
mkdir -p ~/.git-templates/hooks
```
Add git hooks
Create a file called pre-commit and give it this content:

```shell
#!/bin/sh
# prevent commit to local master branch
branch="$(git symbolic-ref HEAD)"
if [ "$branch" = "refs/heads/master" ]; then
    echo "pre-commit hook: Can not commit to the local master branch."
    exit 1
fi
```
Create a file called pre-push and give it this content:

```shell
#!/bin/sh
# Prevent push to remote master branch
while read local_ref local_sha remote_ref remote_sha
do
    if [ "$remote_ref" = "refs/heads/master" ]; then
        echo "pre-push hook: Can not push to remote master branch."
        exit 1
    fi
done
```
Mark them as executable:

```shell
chmod a+x ~/.git-templates/hooks/*
```
Set the git init.templateDir configuration variable:

```shell
git config --global init.templateDir '~/.git-templates'
```
Add the hooks to an existing local repository clone
Run git init while in your local clone directory. This is a non-destructive command which will copy the new hooks into your repository's .git/hooks directory.
You can double-check the git-init docs if you’re nervous before doing this!
Add the hooks when you clone a repository
Just clone the repo and your hooks will be in the .git/hooks directory!
You will no longer get the sample git hooks copied into REPO/.git/hooks, nor the sample excludes file, but they will continue to exist in /usr/share/git-core/templates and can be copied into your ~/.git-templates directory if you want to keep them.
I am finding myself using a lot of Pentaho Data Integration at the moment.
It’s a good, powerful, tool, but my god does it have some annoyances.
It’s a drag-and-drop tool that allows you to process massive amounts of data in parallel, without needing to be an almighty data analyst already. Its windows are mostly non-modal, which means that you can bring up the configuration windows for each data processing step you’re working with at the same time, so you can check you’ve named all your variables correctly, and so on.
It has a help system built in, which pops up a window containing the wiki page for the step you’re working with. Except that the help window is modal. The only modal window in the whole application is the one which gives you a guide on what to type into which box, or which contains examples and values that you might want to copy/paste into your step. Except you can’t. Because modal.
As you run your data process, Pentaho marks each step as in progress, or successful. Except that if you have your process divided up into multiple data transformations then you can only check the status correctly if you close all but the first transformation in the process, run it, and then re-open the sub-transformations from there. Baffling.
When your transformations are running you get a nice real-time log of what’s happening at the bottom of your screen, which you can scroll through. Except that as new lines are added, the log scrolls to the bottom. Good luck finding the log message you were looking at before!
More complaining into the void next time! Hope you’re looking forward to it as much as I am!
I have bought Go: The Complete Developer’s Guide on Udemy. It’s been a good intro so far, but there are more than 90 videos, and I’d quite like to do some each day at work, where streaming video isn’t always allowed.
The author of this course has allowed his videos to be downloaded, but that’s on a video-by-video basis. I’d like to grab all of them.
udemy-dl has me covered, downloading all the videos in mp4 format and grabbing the subtitles too. This is almost certainly against the Udemy terms and conditions, but in practice will be very useful. Who knows, maybe I’ll even end up buying more courses!
Go to about:config, look for dom.webnotifications.enabled, and set its value to false.
This will completely disable push notifications and Firefox won’t ask your permission again.
Source: How can I suppress web push requests for all websites?
I was busy writing a blog post about types of dependency injection, since it’s come up a few times at work, but it seems as though Roger has done it for me: Java: Spring Dependency Injection Patterns – The good, the bad, and the ugly.
Basically: use constructor injection.
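To make that concrete, here is a minimal sketch of constructor injection, with entirely hypothetical class names (not taken from Roger's post): the dependency arrives through the constructor, so it can never be forgotten, can be declared final, and is trivial to swap out in a test.

```java
import java.util.Objects;

// The dependency, expressed as an interface so any implementation can be injected.
interface PaymentGateway {
    boolean charge(int amountInPence);
}

class OrderService {
    // Injected via the constructor: immutable and guaranteed non-null.
    private final PaymentGateway gateway;

    OrderService(PaymentGateway gateway) {
        this.gateway = Objects.requireNonNull(gateway);
    }

    boolean placeOrder(int amountInPence) {
        return gateway.charge(amountInPence);
    }
}

public class Demo {
    public static void main(String[] args) {
        // In wiring code (or a test) you simply pass an implementation in;
        // here a lambda stands in for a real gateway.
        OrderService service = new OrderService(amount -> amount > 0);
        System.out.println(service.placeOrder(500)); // prints "true"
    }
}
```

Compare this with setter or field injection, where the object can exist in a half-constructed state and the dependency can silently be null.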
About a year after my son was born, when my memory started working again, I started writing a blog about the things he was getting up to.
I started it on 19 September 2010, 2718 days ago. In that time I have made 733 posts, roughly one every four days. Obviously there are peaks and troughs, but that’s a nice average to have. Enough time can pass between each one for something new, nice or surprising to happen and warrant recording.
I don’t use pictures, only words, because I’m keen that moving blogging platforms, or the vagaries of image resizing, don’t destroy it over time. In 18 months my son will be ten; I think that’ll probably be enough posts to get it printed out and bound. I’ve used https://www.lulu.com/ many times with great success, but there are other services which do WordPress-specific imports, so hopefully I’ll find something which lets me do it nice and easily as well as making something that looks good, and lets me preserve my record well past the lifetime of any digital records management.
Growing up I played the board games that you might expect a kid growing up in the 80s to play: Scrabble, Monopoly, Frustration, Cluedo and so on. Although I mostly enjoyed them, they were all tedious in their own ways. The more interesting the game, the longer it took, and the more “adult” it was seen to be, and it was therefore either out of reach of my younger sibling or took too long to play with my parents. Today is very different.
Sites like https://boardgamegeek.com/ mean that it’s possible to find games for my kids which don’t take too long to play and are also accessible for their ages, meaning they’re much more fun!
As well as some of the classics like “Guess Who” and “Connect 4” we’ve acquired Kingdomino, Castle Panic and Labyrinth, all of which are good fun.
On our horizon I can definitely see Catan Junior and Ticket to Ride: First Journey (Europe). Hopefully my kids will be able to look back on the board games they played with pleasure rather than mild horror, and will be able to play better, more interesting games as they grow up.
Last blog standing, “last guy dancing”: How Jason Kottke is thinking about kottke.org at 20
One of the compelling things about blogs, for me, was that you had individual people presenting links and information that were a little view into what that person was interested in, and what was interesting about this person.
Maven is a powerful build tool for Java and it tends to spit out a large amount of logging, requiring you to scroll back in your output window or console to see what’s happening. If you’re running it regularly, for example whilst running tests, then it’s easy to scroll back slightly too far and look at the results from a previous run by accident.
An easy way to avoid this is to configure Maven to output a timestamp on each log line. Just open up your MAVEN_HOME/conf/logging/simplelogger.properties and change the dateTimeFormat like this:
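The settings in that file are SLF4J SimpleLogger properties; a minimal change (the format string here is just an example, any java.text.SimpleDateFormat pattern should work) looks something like:

```properties
# show a timestamp on every Maven log line
org.slf4j.simpleLogger.showDateTime=true
# timestamp format, e.g. 14:03:27,512
org.slf4j.simpleLogger.dateTimeFormat=HH:mm:ss,SSS
```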
Not only will this make it easier to spot if you’re looking at the correct log lines but you’ll also be better able to see how long each stage is taking (although for real measurements here you’ll want a profiler).
I do not like debugging. I prefer good logging.
The log4j manual quotes Brian W. Kernighan and Rob Pike from their “truly excellent book” The Practice of Programming:
As personal choice, we tend not to use debuggers beyond getting a stack trace or the value of a variable or two. One reason is that it is easy to get lost in details of complicated data structures and control flow; we find stepping through a program less productive than thinking harder and adding output statements and self-checking code at critical places.
Clicking over statements takes longer than scanning the output of judiciously-placed displays. It takes less time to decide where to put print statements than to single-step to the critical section of code, even assuming we know where that is. More important, debugging statements stay with the program; debugging sessions are transient.
There are times when a debugger can be really helpful, but in my experience they are normally used as a fallback for a poorly documented system with an unclear flow of logic, or overly large methods with poor test coverage.
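To make the Kernighan and Pike quote concrete, here is a small, entirely hypothetical sketch of "output statements and self-checking code at critical places", using java.util.logging: the checks and log lines stay with the program, unlike a debugging session.

```java
import java.util.List;
import java.util.logging.Logger;

public class Reconciler {
    private static final Logger LOG = Logger.getLogger(Reconciler.class.getName());

    static int sumBalances(List<Integer> balances) {
        int total = 0;
        for (int b : balances) {
            // self-checking code: a negative balance here means bad upstream data
            if (b < 0) {
                LOG.warning("negative balance encountered: " + b);
            }
            total += b;
        }
        // a debugging statement that stays with the program
        LOG.fine("sumBalances over " + balances.size() + " entries = " + total);
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumBalances(List.of(10, 20, 30))); // prints 60
    }
}
```

Next time something looks wrong, the warning is already in the log, and turning the log level up to FINE shows the intermediate values without attaching a debugger.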