Microhabits - my core of software development

I was asked a while ago by one of the very, very talented juniors that I mentor:

"Why does all your programming look so easy, and yet when I do even the same things, it feels so hard and slow?"

I spent a while thinking about this, and what I realised is that, over years of working with the right people, developers who care about both the code they write and their own growth build up a set of what I call "microhabits" around how they go about development.

I consider these things almost "foundational" to how I write code, and interestingly they don't really apply to any particular language (even though my core skills are in Java).

These are the microhabits I find myself using "all the time", and that I find great value in. I'm pretty sure none of them are "new" or "revolutionary", but I was shocked to find that these sorts of things aren't even discussed by delivery teams these days, much to their detriment.

Always start with "Hello, World!"

When I am working on a new project, the very first thing I build will be an app that is pretty much:

System.out.println("Hello, World!");        

I'll then do all the work that's needed to get that app packaged, and into production. This will involve the CI pipelines, the packaging, the deployment, and potentially monitoring hooks.

Why? That's the hard bit for a lot of teams, and until you develop the necessary experience, you can't work out "why doesn't my app work?" in a large, already-developed application. There are literally zero moving parts in the Hello, World! app, so any failure to work is a result of the execution container/environment, and solving that problem becomes the focus, rather than some extended triage of what is going on.
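
For concreteness, the entire deployable app is a single class. A minimal sketch (the class name here is just an example):

// HelloWorld.java - the complete application. If this doesn't run in
// production, the problem is the environment, not the code.
public class HelloWorld {
    public static void main(String[] args) {
        System.out.println("Hello, World!");
    }
}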

The first test in a suite should be for object creation only

Very much like the above, my first test tends to look like:

MyObject mo = new MyObject();
assertNotNull(mo);        

Why? Mostly for the same reasons as above. It's a baseline triage point. Do I have the unit test framework included? Do I have the paths set up correctly? Is the configuration for testing set up the right way?
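
Fleshed out, that first test is a complete, if tiny, test class. A minimal sketch, assuming JUnit 5 and a hypothetical MyObject class with a no-argument constructor:

import static org.junit.jupiter.api.Assertions.assertNotNull;

import org.junit.jupiter.api.Test;

class MyObjectTest {

    @Test
    void createsMyObject() {
        // If this fails, the problem is the test setup itself
        // (framework dependency, classpath, configuration),
        // not the code under test.
        MyObject mo = new MyObject();
        assertNotNull(mo);
    }
}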

Always run the tests after you "pull" from others

If you're using some form of source control (and in this day and age, you really should), then every time you do something like git checkout, git pull, or git clone, you're getting code from "somebody else" (and if you're working by yourself, that somebody is still you, just a different you at a different time). Every time that happens, you must run the full test suite.

Why? If you're doing development, the slow part is working out "why is my stuff broken?". If you don't establish a baseline of what "working" is BEFORE you start, then when you find it broken (and you will), you don't know if it's you, somebody else, a difference in your environment, or DNS (hint: it's always DNS). Running the tests on arrival cuts down on triage/investigation/debug time because you know things "must be working" at the start.

Always run the tests before you "pull" from others

Much like the microhabit above, and for the same reason: I try to minimise the number of things that have changed since the last time I ran the tests and established a working baseline.

Run the tests all the time

I will run the unit tests (in my IDE, using a single keystroke) at the end of each thought pattern. I'll write a very small amount of code, then run the tests, and they'll complete in a handful of seconds, or sub-second. Again, this constantly re-establishes the baseline for the next piece of work.

Don't refactor and add new code at the same time

I adhere very strongly to the very old (and sadly mostly forgotten) "red/green/refactor" model of writing code. I compartmentalise the thought processes in such a way that I'm only dealing with the smallest amount of cognitive load at any point in time.

"red" - I'm writing a test either as a TDD design signpost and code extension vector, or I'm writing some unit tests for functionality not currently implemented.

"green" - I'm making that 1 failing test pass. If I can't, or if it gets messy, or I hate the code I wrote to make it pass, then I'll throw it ALL AWAY and start again. If it's how I like it, then I'll create a baseline by something like a check-in. (In the old days, I'd make more use of the IDE versioning facilities, so again, you don't need any particular tools to work this way)

Remember, I'm dealing with 2-3 minutes of thinking/designing/coding. I'll have learned something in that process, so throwing away the code costs nothing; I keep those lessons for the next attempt.

"refactor" - I have working code, it's mostly in a shape I like, but it's not displaying the sorts of characteristics I want from my code AS A WHOLE. I have a baseline (from "green" above) and now I can refactor the code (making it more expressive, removing duplication etc)

Constantly "pull" from others

Once I have a working baseline, I then have a choice: implement more, or include more from my team. In most cases, if I've just done a lot of heavy brain work, grabbing the code from others gives a nice break.

Why? Ain't nobody got time for a huge merge conflict. If you keep your code up to date with everybody else (and provide your code to everybody else) more frequently, then you don't need to worry.

Prioritise pushing working code to other team members

Like the above, if we share the code all the time, then our shared view of the code is far more consistent than if we have longer integration times. This is part of participating in a high-functioning team.

The other part of this (and a thing I constantly hear) is: "I can't do that because my code is on a branch, needs a PR, needs to be behind a feature flag", to which I say "stop doing all those dumb things, and work out better ways to deliver software, because they absolutely exist".

Every test MUST fail before it passes

If I write a test and it passes straight away, then I'm deeply suspicious. One of two things is happening here.

  1. My code already works for this test, and hence the functionality exists, and it's not a useful test (at this point in time - a little more on this later)
  2. The test is wrong

The number of times I've written a test that doesn't actually do what I thought it did (#2) is massive. The number of times I've found tests in suites written by others that don't test anything, or don't test what their authors think they do, is just as massive.

Why? A passing test with no failure doesn't give me any more useful information. That test is there to help us constrain our development so our code gives us the right answers. If there's no failure, there's no constraint.

There is one type of test that will often pass before it ever fails: what I call algorithmic tests. These are tests of "math" (like, in a simple case, adding two numbers). Once the algorithm is working, we might want to add a suite of additional tests that cover corner cases. Now, I don't normally go and "break" the algorithm to force my tests to fail; what I do instead is change the tests so they fail. I might invert a comparison, or compare to zero rather than the actual number. What I'm doing here is watching my change "toggle" the state of the test, from passing to failing, and then back to passing again, as in the sketch below.
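
A minimal sketch of that toggle, assuming JUnit 5 and a hypothetical add method standing in for the real algorithm:

import static org.junit.jupiter.api.Assertions.assertEquals;

import org.junit.jupiter.api.Test;

class AdderTest {

    // Stand-in for the real, already-working algorithm.
    int add(int a, int b) {
        return a + b;
    }

    @Test
    void addsTwoNumbers() {
        // This passes first time, so toggle it to prove it's "live":
        // 1. change the expectation to something wrong, e.g.
        //    assertEquals(0, add(2, 3)); and watch the test fail,
        // 2. restore the line below and watch it pass again.
        assertEquals(5, add(2, 3));
    }
}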

Work in small increments

I work in what most people would consider laughably trivial blocks. I will literally only do 2-3 minutes of work before I consider it "done" and ready to baseline/use/push or whatever. This has massive benefits for me, for the following reasons:

  1. I'm not very smart. I literally can't do all the incredible things other people do with their brains. As a result, I've come up with this coping mechanism that allows me to appear like I know what I'm doing, but really, I'm just very good at breaking things down into tiny parts and working on them in isolation, with no distractions.
  2. I've got a bad memory. I can't remember what I had for breakfast most days, and I certainly can't remember some long-winded explanation or huge complex model that I have to keep in my head to turn into software. I can do little things, if they're easy and take me a short time to do.
  3. I'm really picky about my code. My father is a master craftsman. He can make things in wood that you would think are impossible, with a level of quality and feel that is otherworldly. I can only aspire to be like him in what I do, so I do it with my code. I want others to feel about my code the same way I (and others) feel about my father's wood creations. Writing in small increments means I'm always able to throw code away and try again, rather than keep it and keep trying to force it to be better.


I've been writing code in this way for nearly 40 years, in nearly every domain of business. I know it works. I know it's possible, and I know it's generally applicable. If you read these and go "that can't work here", question why: maybe it's not the microhabit that's the problem, but the constraints you're putting on how you do software instead.
