Science can make us better at software development

Posted by Alex Sinclair

Some questions are perennial in development: “Do we build our teams by function, or by product?”, “Do we build microservices or a monolith?”, or “Do we use Python, Java, or something else?”.

Even a debate as simple as ‘tabs vs spaces’ has been raging since at least the 1980s. Most of us answer these questions by relying on our past experiences and those of our colleagues: we tried that before and it didn’t work, or that worked for us before, so we should do it again. Perhaps we’ve read articles like “You absolutely must do this to be successful!” or “10 things to never do in software development!”.

Most of the time we can find ways to make things work. But by always following what we’ve done before, we never explore the path untravelled, and never learn what would have happened if we’d taken another road - which means we never discover whether there was a better way. So how can we be sure what really matters, and what doesn’t?

Enter science! We live in the world of big data and LLMs, of robots and drones, of quantum mechanics and the James Webb Space Telescope. Surely we can do something beneficial with our 28 million software developers and our 60+ years of progress in this fantastic industry?

The answer is yes. Published in 2016, the "State of DevOps Report" made a bold, if common, claim: "if you do these things, you will be more successful than if you don’t". Two things made this claim different. First, the authors were Nicole Forsgren, PhD, a published academic with a track record of excellent science; Jez Humble, author of Continuous Delivery (2010); and Gene Kim, author of The Phoenix Project (2013) - all well-known and respected names in the field of DevOps. Second, they claimed that this wasn’t just opinion; it was proven with science. What science? We had to wait another two years to find out.

In 2018, “Accelerate: The Science of Lean Software and DevOps: Building and Scaling High Performing Technology Organizations” was published, and we finally had access to the data so we could judge for ourselves. The book is split into three sections: the results, what they mean, and the science behind them. The authors wrote:

"After asking 25,000 different individuals about the company they work for, their practices, and the success of that company, we finally had a picture of what really mattered".

They summarise their findings nicely in Appendix A.

One thing that really stood out to me in the book was that it doesn’t matter how you deliver your software. Whether you build a monolith, microservices, or desktop software sold shrink-wrapped in a Currys / PC World, the delivery model made no difference. What did matter was whether you had a good software architecture. Was your code split into independent modules, where changes to one module didn’t force changes in others? Or was it a big pile of spaghetti, where everything was connected to everything else?
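That architecture property - modules that can change independently - can be sketched in a few lines of code. The example below is my own minimal, hypothetical Python illustration (the names `OrderStore`, `InMemoryStore`, and `place_order` are mine, not from the book): the business logic depends only on an abstraction, so swapping the storage backend doesn’t force any change to it.

```python
from typing import Protocol


class OrderStore(Protocol):
    """The abstraction the business logic depends on; any backend can satisfy it."""
    def save(self, order_id: str, total: float) -> None: ...


class InMemoryStore:
    """One concrete backend; a database-backed store could replace it
    without touching place_order below."""
    def __init__(self) -> None:
        self.orders: dict[str, float] = {}

    def save(self, order_id: str, total: float) -> None:
        self.orders[order_id] = total


def place_order(store: OrderStore, order_id: str, total: float) -> None:
    # Only knows the OrderStore interface, never a specific backend.
    store.save(order_id, total)


store = InMemoryStore()
place_order(store, "A-1", 99.5)
print(store.orders)  # {'A-1': 99.5}
```

The spaghetti alternative would be `place_order` reaching directly into a specific store’s internals - the kind of coupling where one change ripples through everything else.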

The authors didn’t confine their investigation to code-related practices, either. Companies with a Generative culture - that is, ones that treat failures as opportunities for learning and improvement - were shown to be more successful than those that preferred shooting the messenger and punishing the team. Teams that encourage and make space for learning and knowledge-sharing often outperform those that don’t.

The insights from scientific studies like those in "Accelerate" - and its 1987 predecessor "Peopleware" - show that others have already done the hard work of learning what really matters. By listening to these evidence-based findings, we can avoid reinventing the wheel and focus our efforts on what has been proven to drive success. We can use science as a guide, invest our energy in improving what matters most, and consign 'tabs vs spaces' back to the bike shed.