How small is small?

I have great respect for the professional agile coach and scrum master community. Few people seem to systematically care for both humans and business, maintaining profitability without ever sacrificing the humans. Now, however, I will alienate vast swathes of them in one post. Hold on.

What is work in software development?

Most mature teams do two types of work: they look after a system – maintenance, keeping the lights on – and make small changes to it – the new features the business claims it wants. It is common to get an army in to build a new platform and then allow the teams to naturally attrit as transformational project work fizzles out, contractors leave, and the most marketable developers either get promoted out of the team or get better offers elsewhere. A small stream of fine-adjustment changes keeps coming in to the core team – effectively maintenance developers – that remains. Eventually this maintenance development work gets outsourced abroad.

A better way is to have teams of people that work together all day, every day. Don’t expand and contract or otherwise mess with teams; hire carefully from the beginning and keep new work flowing into teams rather than restructuring after a piece of work is complete. Contract experts to pair with the existing team if you need to teach the team a new technology, but don’t get mercenaries to do the work. It might be slower, but if you have a good team that you treat well, odds are better they’ll stay, and they will develop new features in a way that can be maintained better in the future, and they are less likely to cut corners, as any shortcuts will blow up in their own faces shortly after.

Why do we plan work?

When companies spend money on custom software development, a set of managers in very senior positions within the organisation have decided that investing in custom software is a competitive advantage, while several other managers think they are crazy to spend all this money on IT.

To mollify the greater organisation, there is some financial oversight and budgeting. Easily communicated projects are sold to the business – “we’ll put a McGuffin in the app”, “we’ll sprinkle some AI on it” or similar – and hopefully there is enough money in there to also do a bit of refactoring on the sly.

This pot of money is finite, so there is strong pressure to keep costs under control – don’t get any surprise AWS bills, or middle managers will have to move on. Cost runaway kills companies, so there are legitimately people not sleeping at night when there are big projects in play.

How do we plan?

Problem statement

Software development is very different from real work. If you build a physical thing, anything from a phone to a house, you can make good use of a detailed drawing describing exactly how the thing is constructed and the exact properties of the components that are needed. If you are to make changes or maintain it, you need these specifications. It is useful both for construction and maintenance.

If you write the exact same piece of software twice, you have some kind of compulsive issue, you need help. The operating system comes with commands to duplicate files. Or you could run the compiler twice. There are infinite ways of building the exact same piece of software. You don’t need a programmer to do that, it’s pointless. A piece of software is not a physical thing.

Things change, a lot. Fundamentally, people don’t know what they want until they see it, so even if you did not have problems with technology changing underneath your feet whilst developing software, you would still have problems with the fact that people did not know what they wanted back when they asked you to build something.

The big issue, though, is technology change. Back in the day, computer manufacturers would have the audacity to evolve the hardware in ways that made you have to re-learn how to write code. High-level languages came along, and now instead we live with Microsoft UI frameworks or JavaScript frameworks that are mandatory one day and obsolete the next. Things change.

How do you ever successfully plan to build software, then? Well… we have tried to figure that out for seven decades. The best general concept we have arrived at so far is iteration, i.e. delivering small chunks over time rather than trying to deliver all of it at once.

The wrong way

One of the most well-known but misunderstood papers is Managing The Development of Large Software Systems by Dr Winston W Royce, which launched the concept of Waterfall.

Basically, the waterfall software development process is outlined in distinct phases:

  1. System requirements
  2. Software requirements
  3. Analysis
  4. Program design
  5. Coding
  6. Testing
  7. Operations

For some reason people took this as gospel for several decades, despite the fact that the core, fundamental problem that dooms the process to failure is outlined right below figure 2 – the pretty waterfall illustration of the phases above that people keep referring to. It says:

I believe in this concept, but the implementation described above is risky and invites failure. The problem is illustrated in Figure 4. The testing phase which occurs at the end of the development cycle is the first event for which timing, storage, input/output transfers, etc., are experienced as distinguished from analyzed. These phenomena are not precisely analyzable. They are not the solutions to the standard partial differential equations of mathematical physics for instance. Yet if these phenomena fail to satisfy the various external constraints, then invariably a major redesign is required. A simple octal patch or redo of some isolated code will not fix these kinds of difficulties. The required design changes are likely to be so disruptive that the software requirements upon which the design is based and which provides the rationale for everything are violated. Either the requirements must be modified, or a substantial change in the design is required. In effect the development process has returned to the origin and one can expect up to a 100-percent overrun in schedule and/or costs.

Managing The Development of Large Software Systems, Dr Winston W Royce

Reading further, Royce realises that a more iterative approach is necessary, as pure waterfall is impossible in practice. His legacy, however, was not that.

Another wrong way – RUP

Rational Rose and the Rational Unified Process were the ChatGPT of the late nineties and early noughties. Basically, if you would only make a UML drawing in Rational Rose, it would give you a C++ program that executed. It was magical. Before PRINCE2 and SAFe, everyone was RUP certified. You had loads of planning meetings, wrote elaborate Use Cases on index cards, and eventually you had code. It sounds like waterfall with better tooling.

Agile

People realised that when things are constantly changing, it was doomed to fix a plan at the start and stay on it even when you knew the original goal was unattainable or undesirable. Loads of attempts were made, but one day some people got together to have a proper go at defining what should be the true way going forward.

On February 11-13, 2001, at The Lodge at Snowbird ski resort in the Wasatch mountains of Utah, seventeen people met to talk, ski, relax, and try to find common ground—and of course, to eat. What emerged was the Agile ‘Software Development’ Manifesto. Representatives from Extreme Programming, SCRUM, DSDM, Adaptive Software Development, Crystal, Feature-Driven Development, Pragmatic Programming, and others sympathetic to the need for an alternative to documentation driven, heavyweight software development processes convened.

History: The Agile Manifesto

So – everybody did that, and we all lived happily ever after?

Short answer: no. You don’t get to just spend cash, i.e. have developers do work, without making it clear what you are spending it on, why, and how you intend to know that it worked. Completely unacceptable, people thought.

The origins of tribalism within IT departments have been done to death in this blog alone, so for once they will not be rehashed. Suffice it to say, staff are often organised according to their speciality rather than in teams that produce output together. Budgeting is complex, and there can be political competition that is counterproductive for IT as a whole or for the organisation as a whole.

Attempts at running a midsize to large IT department that develops custom software have been made in the form of the Scaled Agile Framework (SAFe), DevOps and SRE (where SRE addresses the problem backwards, from running black-box software using monitoring, alerts, metrics and tracing to ensure operability and reliability of the software).

As part of some of the original frameworks that came in with the Agile Manifesto, a bunch of practices became part of Agile even though they were not “canon”, such as User Stories – said to be a few words on an index card, pinned to a noticeboard in the team office, just wordy enough to help you discuss a problem directly with your user. These of course eventually started to develop back into the verbose RUP Use Cases of yesteryear, but “agile, because they are in Jira”, and rules had to be created for the minimum amount of information needed on there to successfully deliver a feature. In the Toyota Production System – which originated Scrum, Lean Software Development and Six Sigma (sadly, an antipattern) – one of the key lessons is that the ideal batch size is 1, and generally to make smaller changes. This explosion in the size of the user story is symptomatic of the remaining problems in modern software development.

Current state of affairs

So what do we do?

As you can surmise if you read the previous paragraphs, we did not fix it for everybody; we still struggle to reliably make software.

The story and its size problems

The part of this blog post that will alienate the agile community is coming up. The units of work are too big. You can’t release something that is not a feature. Something smaller than a feature has no value.

If you work next to a normal human user, and they say – to offer an example – “we keep accidentally clicking on this button, so we end up sending a message to the customer too early, we are actually just trying to get to this area here to double-check before sending”, you can collaboratively determine the correct behaviour, make it happen, release in one day, and it is a testable and demoable feature.

Unfortunately requirements tend to be much bigger and less customer-facing. Like, department X wanting to see the reasons for turning down customer requests in their BI tooling being the feature, and then a “product backlog item” could be that service A and service B need to post messages on a message bus at various points in the user flow, identifying those reasons.

Iterating over and successfully releasing this style of feature to production is hard.

Years ago I saw Allen Holub speaking at SD&D in London, and his approach to software development is very pure. It is both depressing and enlightening to read the flamewars that erupt in his mentions on Twitter when he explains how to successfully make and release small changes. People scream and shout that it is not possible to do it his way.

In the years since, I have come to realise that nothing is more important than making smaller units of work. We need to make smaller changes. Everything gets better if and when we succeed. It requires a mindset shift – a move away from big detailed backlogs to smaller changes, discussed directly with the customer (in the XP sense; probably some other person in the business, or another development team). To combat the uncertainty, it is possible to mandate some kind of documentation update (a graph? a chart?) as part of the definition of done. Yes, needless documentation is waste, but if we need to keep a map of how the software is built, it is useful as long as people actually consult it. We don’t need any further artefacts of the story once the feature is live in production anyway.

How do we make smaller stories?

This is the challenge for our experts in agile software development. Teach us, be bothered, ignore the sighs of developers that still do not understand – the ones raging in Allen Holub’s mentions. I promise, they will understand when they see it first hand. Daily releases of bug-free code: they think people are lying to them when they hear us talk about it. When they experience it, though, they will love it.

When every day means a new story in production, you also get predictability. As soon as you are able to split incoming or proposed work into daily chunks, you also get the ability to forecast – roughly, better than most other forms of estimate – and since you deliver the most important new thing every day, you give the illusion of value back to those that pay your salary.

Surprises

TIL, TIFO or “I was today years old when”

Universal markers of new information. But it shouldn’t have been new. Documentation had been created.

I can tell you, the surprise was monumental.

In the olden days – if you ran a dotnet project without specifying UseUrls() or other tricks for local development, the containerised application would listen on ports 80 and 443.

Hence the default Dockerfile generated in some existing .NET projects will contain the rows EXPOSE 80 and EXPOSE 443. All of a sudden these two commands are completely useless, as the app quietly listens on ports 8080 and 8081 instead, meaning you are exposing a port that nothing listens to.

The purpose behind the change of default port is increased security. A non-root user cannot listen on ports below 1024, and can be set up with file access restrictions that add an additional layer of security, preventing some actions even if an attacker somehow gains entry to the website worker process.
You are still free to ignore the security benefits and keep running your website as root, but unless you take action, your docker-based apps will fail after you upgrade.

Alternatives

Embrace change – run the container as a low-privilege user

You need to change the Dockerfile to expose the ports the app listens on, so add EXPOSE statements allowing docker to map your ports. You also need to add the instruction USER app towards the end of the Dockerfile to make sure the container runs as that user.
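
As a sketch, the tail end of a Dockerfile for this option might look like this (stage and app names assumed to follow the default template):

# final stage – the runtime image
FROM mcr.microsoft.com/dotnet/aspnet:8.0 AS final
WORKDIR /app
# document the ports the app actually listens on under the new defaults
EXPOSE 8080
EXPOSE 8081
COPY --from=build /app/publish .
# run as the non-root user that ships with the .NET 8 base images
USER app
ENTRYPOINT ["dotnet", "webapp.dll"]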

Of course, since the container now exposes ports 8080 and 8081 instead, you must update whatever runtime environment you are using, such as ACA configuration or ECS task definitions, to take your non-standard ports into account.

Embrace change cautiously – run on new ports but as root

You need to change the Dockerfile to expose the ports the app listens on, so add EXPOSE statements allowing docker to map your ports – but skip the USER instruction.

Again, since the container now exposes ports 8080 and 8081 instead, you must update whatever runtime environment you are using, such as ACA configuration or ECS task definitions, to take your non-standard ports into account.

Reject change – embrace peace of mind

You can pretend the world is a safe place and override the default ports by setting environment variables in your Dockerfile towards the end, in the final stage:

ENV ASPNETCORE_HTTP_PORTS="80,8080" ASPNETCORE_HTTPS_PORTS="443,8081"

This leaves the status quo essentially intact.

Conclusion

I suspect option 3 is the most relevant if you have existing deployments and cloud configurations where modifying port numbers is either not available to you – because you would have needed to depend on another team to make a change on their own schedule – or where the change to your runtime configuration would simply be prohibitively time consuming.

Given the limited difference between options 1 and 2, it would seem silly to stop at changing port configurations without getting the security benefit of a least-privilege runtime user. So I would suggest: for new projects or low-complexity software estates, definitely pick option 1 and enjoy the improved security; in other cases, go with option 3 until the security benefit makes it worth spending your maintenance time making the change to non-root running.

Abstractions, abstractions everywhere


All work in software engineering is about abstractions.

Abstractions

All models are wrong, but some are useful

George Box

It began with assembly language: people were tired of writing large-for-their-time programs in raw binary instructions, so they made a language that basically mapped each binary instruction to a text value, and then there was an app that would translate that to raw binary on punch cards. Not a huge abstraction, but it started there. Then came high-level languages, and off we went. Now we can conjure virtual hardware out of thin air with regular programming languages.

The magic of abstractions really gives you amazing leverage, but at the same time you sacrifice actual knowledge of the implementation details, meaning you often get exposed to obscure errors that you either have no idea what they mean or – even worse – understand exactly what’s wrong but have no access to fix, because the source is just a machine-translated piece of Go, and there is no way to fix the translated C# directly, to take one example.

Granularity and collaboration across an organisation

Abstractions in code

Starting small

Most systems start small, solving a specific problem. This is done well, and the requirements grow, people begin to understand what is possible and features accrue. A monolith is built, and it is useful. For a while things will be excellent and features will be added at great speed, and developers might be added along the way.

A complex system that works is invariably found to have evolved from a simple system that worked

John Gall

Things take a turn

Usually, at some point some things go wrong – or auditors get involved, because regulatory compliance – and you prevent developers from deploying to production, hiring gatekeepers to protect the company from the developers. In the olden days – hopefully not anymore – you would hire testers to do manual testing to cover a shortfall in automated testing. Now you have a couple of hand-offs within the team: people write code, give it to testers who find bugs, and work goes the wrong way – backwards – for developers to clean up their own mess and try again. Eventually something will be available to release, and the gatekeepers will grudgingly allow a change to happen, at some point.

This leads to a slowdown in the feature factory. Some old design choices may cause problems that further slow down the pace of change, or – if you’re lucky – you just have too many developers in one team and somehow have to split them up into different teams, which means comms deteriorate and collaborating in one codebase becomes even harder. With the existing change prevention, struggles with quality and now poor cross-team communication, something has to be done to clear a path so that the two groups of people can collaborate effectively.

Separation of concerns

So what do we do? Well, every change needs to be covered by some kind of automated test, if only to guarantee, at first, that you aren’t making things worse. This way you can refactor the codebase to a point where the two groups have separate responsibilities and collaborate over well-defined API boundaries – separate deployable units, so that teams are free to deploy according to their own schedule.
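
As a concrete sketch of that first guarantee – a characterisation test (class and numbers invented) that pins down what the code currently does, so a refactoring can be verified against it:

using Xunit;

// Hypothetical legacy class we want to refactor without changing behaviour.
public class LegacyPriceCalculator
{
    public int TotalPence(int quantity, int unitPricePence)
    {
        var total = quantity * unitPricePence;
        if (quantity >= 10) total = total * 95 / 100; // undocumented bulk discount
        return total;
    }
}

public class LegacyPriceCalculatorTests
{
    // Characterisation tests assert what the code *does*, not what we think
    // it *should* do, giving the refactoring a safety net.
    [Theory]
    [InlineData(1, 10000, 10000)]
    [InlineData(10, 10000, 95000)]
    public void Preserves_current_pricing(int quantity, int unitPricePence, int expected) =>
        Assert.Equal(expected, new LegacyPriceCalculator().TotalPence(quantity, unitPricePence));
}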

If we can get better collaboration through early test design and front-loaded test automation, and befriend the ops gatekeepers to wire in monitoring so that teams are fully plugged into how their products behave in the live environment, we would be close to the optimum.

Unfortunately – this is very difficult. Taking a pile of software and making sense of it, deciding how to split it up between teams, gradually separating out features – it can be too daunting to really get started. You don’t want to break anything, and if you – as many are wont to do, especially when new in an organisation – decide to start over from scratch, you may run into one or more of the problems that occur when attempting a rewrite, one example being that you end up in a competition against a moving target. The same team has to own a feature in both the old and the new codebase, in that case, to stop that competition. For some companies it is simply worth the risk: they are aware they are wasting enormous sums of money, but they still accept the cost. You would have to be very brave.

Abstractions in Infrastructure

From FTP-from-within-the-editor to Cloud native IaC

When software is being deployed – and I am now largely ignoring native apps, focusing on web applications and APIs – there are a number of things actually happening that are at this point completely obscured by layers of abstraction.

The metal

The hardware needs to exist. This used to be a very physical thing: a brand new HP ProLiant howling in the corner of the office, onto which you installed a server OS and set up networking so that you could deploy software on it, before plugging it into a rack somewhere – probably a cupboard, hopefully with cooling and UPS. Then VM hosts became a thing, so you provisioned VMs using VMware or similar and got to be surprised at how expensive enterprise storage is per GB compared to commodity hardware. This could be done via the VMware CLI, but most likely an ops person pointed and clicked.

Deploying software

Once the VM was provisioned, things like Ansible, Chef and Puppet began to become a thing, abstracting away the stopping of websites, the copying of zip files, the unzipping, the rewriting of configuration and the restarting of the web app into a neat little script. Already here you see problems where “normal” issues, like a file being locked by a running process, show up as a very cryptic error message that the developer might not understand. You start to see cargo cult, where people blindly copy things from one app to another because they think two services are the same, and people don’t understand the details. Most of the time that’s fine, but it can also be problematic with a bit of bad luck.

Somebody else’s computer

Then cloud came, and all of a sudden you did not need to buy a server up front; instead you rent as much server as you need. Initially, all you had was VMs, so your Chef/Puppet/Ansible worked pretty much the same as before, and each cloud provider offered a different way of provisioning virtual hardware before you came to the point where the software deployment mechanism came into play. More abstractions to fundamentally do the same thing. Harder to analyse any failures: you sometimes have to dig out a virtual console just to see why or how an app is failing, because it is not even writing logs. Abstractions may exist, but they often leak.

Works on my machine-as-a-service

Just like the London Pool and the Docklands were rendered derelict by containerisation, a lot of people’s accumulated skills in Chef and Ansible have been rendered obsolete as app deployments have become smaller: each app simply unzipped on top of a brand new Linux OS, sprinkled with some configuration, and the image pushed to a registry somewhere. On one hand, it’s very easy. If you can build the image and run the container locally, it will work in the cloud (provided the correct access is provisioned, but at least AWS offers a fake service that lets you dry-run the app on your own machine and test various role assignments to make sure IAM is also correctly set up). On the other hand, somehow the “metal” is locked away even further, and you cannot really access a console anymore – just a focused log viewer that lets you see only events related to your ECS task, for instance.

Abstractions in Organisations

The above tales of ops vs test vs dev illustrate the problem of structuring an organisation incorrectly. If you structure it per function you get warring tribes and very little progress, because one team doesn’t want any change at all in order to maintain stability, another gets held responsible for every problem customers encounter, and the third just wants to add features. If you structured the organisation for business outcome, everyone would be on the same team, working towards the same goals with different skill sets. The way you think of the boxes in an org chart can have a massive impact on real-world performance.

There are no solutions, only trade-offs, so consider the effects of sprinkling people of various backgrounds across the organisation. If, instead of being kept in the cellar as usual, your developers start proliferating among the general population of the organisation, how do you ensure that every team follows the agreed best practices, and that no corners are cut even when a non-technical manager is demanding answers? How do you manage the performance of developers you have to go out of your way to see? I argue such things are solvable problems, but do ask your doctor if reverse Conway is right for you.

Conclusion

What is a good abstraction?

Coupling vs Cohesion

If a team can do all of their day-to-day work without waiting for another team to deliver something or approve something, if there are no hand-offs, then they have good cohesion. All the things needed are to hand. If the rest of the organisation understands what this team does and there is no confusion about which team to go to with this type of work, then you have high cohesion. It is a good thing.

If, however, one team is constantly worrying about what another team is doing, or where certain tickets are in another team’s sprint, in order to schedule their own work, then you have high coupling, and time is wasted. Some work has to be moved between teams, or the interface between the teams has to be made more explicit, in order to reduce this interdependency.

In Infrastructure, you want the virtual resources associated with one application to be managed within the same repository/area to offer locality and ease of change for the team.

Single Responsibility Principle

While dangerous to over-apply within software development (you get more coupling than cohesion if you are too zealous), this principle is generally useful within architecture and infrastructure.

Originally meaning that one class or method should only do one thing – an extrapolation of the UNIX principles – it can more generally be said to mean that on a given layer of abstraction, a team, infrastructure pipe, app, program, class […] should have one responsibility. This usually means a couple of things happen, but they conceptually belong together. They have the same reason to change.
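
As a sketch at the class level (domain names invented) – two small classes, each with a single reason to change:

using System;
using System.Collections.Generic;
using System.Linq;

public record LineItem(string Description, decimal UnitPrice, int Quantity);

// One reason to change: the rules for totalling an order.
public class OrderCalculator
{
    public decimal Total(IEnumerable<LineItem> items) =>
        items.Sum(i => i.UnitPrice * i.Quantity);
}

// A different reason to change: how an order is presented.
public class OrderPrinter
{
    public string Render(IEnumerable<LineItem> items, decimal total) =>
        string.Join(Environment.NewLine, items.Select(i => $"{i.Description} x {i.Quantity}"))
        + Environment.NewLine + $"Total: {total:C}";
}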

What – if any – pitfalls exist?

The major weakness of most abstractions is when they fall apart, when they leak. Not having access to a physical computer is fine, as long as the deployment pipeline is working and the observability is wired up correctly – but when it all falls down, you still need to be able to see console output, you need to understand how networking works, to some extent, and you need to understand what obscure operating system errors mean. Basically, when things go really wrong, you need to have already learned to run that app in that operating system, so you recognise the error messages and have some troubleshooting steps memorised.
So although we try to save our colleagues from the cognitive load of having to know everything we were forced to learn over the decades, to spare them the heartache, they still need to know. All of it. So yes, the danger with the proliferation of layers of abstraction lies in picking the correct ones, and in keeping the total bundle of layers as lean as possible, because otherwise someone will want to simplify or clarify these abstractions by adding another layer on top, and the cycle begins again.

Cutler, Star Wars, fast feedback and accuracy

As my father’s son I have always been raised to see the plucky heroes at AT&T and Bell Labs, Brian Kernighan and Dennis Ritchie, as the Jedi, fighting with the light side of the Force, thus foregoing force lightning and readable documentation, as those are only available to Dark Side force users such as Bill Gates. Anders Hejlsberg must be Anakin Skywalker in my father’s cinematic universe, as Anders first created Turbo Pascal, our favourite programming language when I was little, only to turn to the Dark Side and forge C# and TypeScript – presumably using red kyber crystals.

Anyway – the Count Dooku in this tortured analogy would be Dave Cutler, who created both VMS and Windows NT and was recently interviewed on Dave Plummer’s YouTube channel for several hours.

If you ever think you are going to watch that interview, do so now, unless you are OK with spoiler or spoiler-adjacent content. Also be warned the interview is long. Very, very long.

I find it fascinating how far removed his current work on Xbox game streaming (I think? XCloud sounds like a game streaming solution) is from where he started as he left college. His first foray into computing was in simulations, and he had to bring a pile of punch cards – a thousand punch cards, I think he says – to run a simulation at IBM, because the computing power requirement was too big for what they had locally at the paper company where he was working.

Later he complains about current developers’ lack of attention to detail, the host theorising that perhaps the faster inner loop of today’s developers makes them less likely to show the requisite skill or diligence.

Now, wait before you flood his YouTube video with angry comments – please let me agree, so that you can post angry comments here instead, thus driving engagement.

I want to say, I see where he’s coming from.

A lot of really hard problems have solutions now that didn’t exist when I started out, and weren’t even possible to conceptualise when Cutler started out. You can realistically do TDD now. If you don’t have access to a piece of hardware you need to write drivers for, you could still most likely have your employer buy enough hardware that you could simulate that hardware with very little effort. You can essentially make it so that you immediately know when you are breaking things as you are typing.

But also… you could save yourself from a majority of problems by taking a bit more care. Like, hey, are you stuck with legacy code with poor test coverage? Well – if that product is an API that is called by another team – just because you have Swagger documentation and the guys can reach you on Slack/Teams, don’t randomly change behaviours in an endpoint so that dependent services break. Why not just make a new endpoint for the new behaviour? And – seriously – when you write new code, why can’t at least that new bit be unit tested?
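
A minimal ASP.NET Core sketch of that additive approach (routes and payloads invented): leave the contract your dependants rely on alone, and put the new behaviour next to it.

var app = WebApplication.CreateBuilder(args).Build();

// Existing contract another team depends on: leave it alone.
app.MapGet("/api/v1/orders/{id}", (string id) =>
    Results.Ok(new { id, status = "Shipped" }));

// New behaviour gets a new endpoint instead of silently changing v1.
app.MapGet("/api/v2/orders/{id}", (string id) =>
    Results.Ok(new { id, status = "Shipped", carrier = "RoyalMail" }));

app.Run();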

Even if you still cannot add unit tests: if you had the mindset of being about to drive far away to test the software live, with only you and your boss, don’t you think you would review every single change more carefully? Really?

Even TDD avatars Feathers and Beck will acknowledge coming across code that isn’t unit tested but is still navigable and observably correct. You could achieve that without any extra build steps or frameworks. You could just pay attention; it has been done in real life.

Now, at the same time, when we old-timers get excited about the olden days we like to brag about how many hours we worked, and I can tell you from personal experience that when you don’t sleep, accuracy is the first thing to go. So if you are going to make changes in systems where you have nothing like unit tests or integration tests in place to help you, make sure you give yourself a couple of extra minutes, basically.

The main reason behind this post is to say:
We shouldn’t be Luddites rejecting modern development practices, but also – if you are a junior dev stuck in a team that writes and maintains legacy code, you can still write defect-free code; you just have to pay more attention and go a bit slower. There is no immediate get-out-of-jail-free card because the builds are flaky, like. And if you are struggling and you MUST add unit tests in order to keep high standards for code quality, you can probably refactor into acceptable test coverage more easily than you think.

Our luxury as software developers – being able to know well ahead of time exactly how our code will work in production – would be a dream in most disciplines, yet a surprisingly low number of bridges randomly collapse. People do sane things and catastrophes fail to appear.
The legacy project you got saddled with can cause you to go slow, but it really shouldn’t give you license to deliver new bugs. Don’t internalise acceptance of lower standards when it becomes evident in your team, as in “well, when X and Y happen and they refuse to Z, this is the best they’ll get”. Instead just don’t deliver, and explain why you cannot. If you don’t offer the lower standards, the resulting bugs won’t appear in your code.

McKinsey and the elusive IT department

I know that both my readers are software developers, so – this is a bit of a departure.

Background

Within a business, there are many financial decisions to be made, and nothing kills a business as fast as costs quietly running away. This is why companies have bureaucracy, to make sure that those who enter into contracts on behalf of the company (primarily sales and procurement) are doing so with due skill and care.

Managers follow up on performance down to the individual level. Commission and bonuses reward top performers; career progression means you are judged by the average scores of your subordinates. If you miss budget you are held accountable. What went wrong? What changes are you implementing, and why do you think those changes will make a difference?

CEOs are thinking: why is IT, and specifically IT software development, so unwilling to produce metrics and show their work? Are they not monitoring their people? Surely they will want to know who their worst performers are, so that they can train them – or ultimately get rid of them if all else fails?

In this environment, senior developers turned junior managers tread lightly, trying to explain to senior management the ways in which – compared to doing literally nothing – measuring incorrectly can cause much worse problems, in terms of incorrect optimisations or of alienating and losing the wrong individual contributors. But as far as I know, rarely have any inroads been made into fairly and effectively measuring individual performance in an IT department. You can spot toxic people, or people who plainly lied on their CV, but apart from clear-cut cases like that, there are so many team factors affecting the individual that getting rid of people instead of addressing systemic or process-related problems is like cutting off your nose to spite your face.

Bad metrics are bad

What do we mean by bad metrics? How about a fictionalised example of real metrics out there: LOC, Lines of Code. How many lines of code did Barry commit yesterday? 450? Excellent. Give the man a pay rise! How about Lucy? Oh dear… only 220. We shall have to look at devising a personal improvement plan and filing it with HR.

However, upon careful review – it turns out that out of Barry’s contribution yesterday, 300 lines consisted of a piece of text art spelling out BARRY WOZ ERE. Oh, that’s unfortunate, he outsmarted our otherwise bullet-proof metric. I know, we solve everything by redefining our metric to exclude comment lines. NCLOC, non-comment lines of code. By Jove, we have it! Lucy is vindicated, she doesn’t write comments in her code so she kept all her 220 lines and is headed for an Employee of the Month trophy if she keeps this up. Now unfortunately after this boost in visibility within the team, Lucy is tasked with supervising a graduate in the team, so they pair program together, sometimes on the graduate’s laptop and sometimes on Lucy’s, and because life is too short they don’t modify the git configuration for every single contribution, so half the day’s contributions fall under the graduate’s name in the repository and the rest under Lucy’s. The erstwhile department star sees her metrics plummet and can feel the unforgiving gaze from her line manager. Barry is looking good again, because he never helps anyone.

So – to the people on the factory floor of software development, the drawbacks and incompleteness of metrics for individual performance are quite obvious, but that doesn’t help senior management, who have legitimate concerns about how company resources are spent and want some oversight.

Good metrics are good

After several decades of this situation, leading experts in the field of DevOps decided to research how system development works in big organisations, to try to figure out what works and what doesn’t. In 2016 the result came out in the form of the DORA metrics – throughput (deployment frequency, lead time for changes) and stability (mean time to recover, change failure rate) – published in the State of DevOps report. These measure the output of a team or a department, not an individual, and the metrics help steer improvements in software delivery in a way that cannot be gamed to produce good metrics but negative actual outcomes. Again – the goal of the DORA metrics is to ensure that software design, construction, delivery and continuous improvement are successfully undertaken, avoiding the pitfalls of failed software projects and astronomical sums of money lost. Measuring team or individual performance is not what it is about.

In the post-pandemic recovery phase, a lot of organisations are looking at all of the above and asking for a way to get certainty, to get actual insight into what IT is doing with all the money, tired of the excuses made by evasive CTOs or software development managers. How hard could it be?

A proposal, the blowback and a suggestion

Whenever there is a willing customer, grifters will provide a service.

McKinsey took, among other things, the DORA metrics and NCLOC and cooked up their own custom metric to solve this problem once and for all.

Obviously, the response from software development specialists was unanimously critical, and the metric was destroyed up and down the internet. I must admit I enjoyed reading and watching several eviscerations; in particular Dave Farley, one of the fathers of DevOps, had a very effective critique of the paper on YouTube.

There was no solution, though. No one was trying to help the leaders get the insights they crave. There was limited understanding of the why. Surely you must trust your employees – why else did you hire them?

Then I stumbled upon a series of writings from Kent Beck and Gergely Orosz, trying to address this head on – to do what McKinsey hadn’t achieved – which I found so inspiring that this blog post exists just as an excuse to post the links:

https://newsletter.pragmaticengineer.com/p/measuring-developer-productivity
https://newsletter.pragmaticengineer.com/p/measuring-developer-productivity-part-2

If you need to know why the proposed McKinsey metrics are bad, watch Dave Farley’s video, and if you want to know what to do instead, read the writings of Beck and Orosz.

GitHub Action shenanigans

When considering which provider to use in order to cut and polish the rough diamonds that are your deployable units of code into the stunningly clear gems they deserve to be, you have probably considered CircleCI, Azure DevOps, GitHub Actions, TeamCity and similar.

After playing with GitHub Actions for a bit, I’m going to comment on a few recent experiences.

Overall Philosophy

Unlike TeamCity, but like CircleCI and – to some extent – Azure DevOps, it’s all about what is in the yaml. You modify your code and the way it gets built in the same commit – which is the way God intended it.
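
For illustration, a minimal workflow file – a hedged sketch, with the dotnet commands standing in for whatever your build actually does – that lives in the repository and changes with the code:

# .github/workflows/build.yml
name: build
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '8.0.x'
      - run: dotnet build --configuration Release
      - run: dotnet test --no-build --configuration Release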

There are countless benefits to this strategy over that of TeamCity, where the builds are defined in the UI. There, if you make a big restructuring of the source repository but need to hotfix a pre-restructure version of the code, you had better have kept an archived version of the old build chain, or you will have a bad day.

There is a downside, though. The artefact management and chaining in TeamCity is extremely intuitive, so if you build an artefact in one chain and deploy it in the next, it is really simple to make it work like clockwork. You can achieve this easily with ADO too, but those are predictably the bits that require some tickling of the UI.

Now, is this a real problem? Should not – in this modern world – builds be small and self-contained? Build-and-push to a docker registry, [generic-tool] up, Bob’s your uncle? Your artefact stuff and separate build/deployment pipelines smack of legacy – what are we, living in the past?! you exclaim.

Sure, but… Look, the various hallelujah solutions that offer “build-and-push-and-deploy”, you know as well as I do that at some point they are going to behave unpredictably, and all you can tell is that the wrong piece of code is running in production with no evidence offered as to why.

“My kingdom for a logfile” as it is written, so – you want to separate the build from the deploy, and then you need to stitch a couple of things together and the problems start.

Complex scenarios

When working with ADO, you can name builds (in the UI) so that you can reference their output from the yaml, and move on from there, to identify the tag of the docker container you just built and reference it when you are deploying cloud resources.

What about GitHub Actions?

Well…

Allegedly, you can define outputs, or you can create reusable workflows so that your “let’s build cloud resources” bit of yaml can be shared. In case you have multiple situations (different environments?) that you want to deploy at the same time, you can avoid duplication.

There are a couple of gotchas, though. If you define a couple of outputs in a workflow, for returning a couple of docker image tags for later consumption, they… exist, somewhere? Maybe. You may first discover that your tags are disqualified from being used as output in a step because they contain a secret(!), which in the AWS case can be resolved by supplying an undocumented parameter to the AWS Login action, encouraging it not to mask the account number. The big showstopper imho is that the scenario where you just want to grab some metadata from a historic run of a separate workflow file, to identify which docker images to deploy, doesn’t seem as clearly supported.
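
For reference, the shape of the outputs mechanism within a single workflow – names invented; this is the part where a value containing something masked as a secret gets dropped with a warning:

jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      image-tag: ${{ steps.meta.outputs.tag }}
    steps:
      - id: meta
        # if this value contains a masked secret (an AWS account id, say),
        # the output is skipped and downstream jobs see an empty string
        run: echo "tag=webapp:${GITHUB_SHA::7}" >> "$GITHUB_OUTPUT"

  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploying ${{ needs.build.outputs.image-tag }}"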

The idea of GitHub Actions workflows seems to be – at least at the time of writing – that you do all the things in one file, in one go, possibly with some flow control to pick which steps get skipped. There is no support for the legacy concept of “OK, I built it now, and deployed it to my test environment” – some manual testing happens – “OK, it was fine, of course, I was never worried” – and you deploy the same binaries to live. “Ah HAH! You WERE just complaining about legacy! I KNEW IT!” you shout triumphantly. Fair cop, but society is to blame.

If you were to consider replacing Azure DevOps with GitHub Actions for anything even remotely legacy, please be aware that things could end up being dicey. Imho.

If I’m wrong, I’m hoping to leverage Cunningham’s Law to educate me, because googling and reading the source sure did not reveal any magic to me.

.NET C# CI/CD in Docker

Works on my machine-as-a-service

When building software in the modern workplace, you want to automatically test and statically analyse your code before pushing it to production. This means that rather than tens of test environments and an army of manual testers, you have a bunch of automation that runs as close as possible to when the code is written. Tests are run, the proportion of code not covered by automated tests is calculated, test results are published to the build server user interface (so that in the event that – heaven forbid – tests are broken, the developer gets as much detail as possible to resolve the problem), and static analysis of the built piece of software is performed to make sure no known problematic code has been introduced by ourselves, also verifying that the dependencies we include are free from known vulnerabilities.

The classic Dockerfile generated when an ASP.NET Core Web API project is created features a multi-stage build layout, where an initial layer includes the full C# SDK, and this is where the code is built and published. The next layer is based on the lightweight .NET Core runtime; the output directory from the build layer is copied here and the entrypoint is configured so that the website starts when you run the finished docker image.

Even tried multi

Multi-stage builds were a huge deal when they were introduced. You get one docker image that only contains the things you need; any source code is safely binned off in other layers that – sure – are cached, but don’t exist outside the local docker host on the build agent. If you then push the finished image to a repository, none of the source will come along. In the before times you had to solve this with multiple Dockerfiles, which is quite undesirable. You want high cohesion but low coupling, and fiddling with multiple Dockerfiles when doing things like upgrading versions does not give you a premium experience, and invites errors to an unnecessary degree.

Where is the evidence?

Now, when you go to Azure DevOps, GitHub Actions or CircleCI to find out what went wrong with your build, the test results are available because the test runner has produced output in a format the build server can understand. If your test runner is not forthcoming with that information, all you will know is “computer says no”, and you will have to trawl through console data – if that – and that is not the way to improve your day.

So – what – what do we need? Well we need the formatted test output. Luckily dotnet test will give it to us if we ask it nicely.

The only problem is that those files will stay on the image that we are binning – you know, multi-stage builds and all that – since we don’t want those files to show up in the finished, supposedly slim, article.

Old world Docker

When a docker image is built, every relevant change will create a new layer, and eventually a final image is created and published that is an amalgamation of all constituent layers. In the olden days, the legacy builder would cache all of the intermediate layers and print a hash in the output, so that you could refer back to intermediate layers should you so choose.

This seems like the perfect way of forensically finding the test result files we need. Let’s add a LABEL so that we can find the correct layer after the fact, copy the test data output and push it to the build server.

FROM mcr.microsoft.com/dotnet/aspnet:7.0-bullseye-slim AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:7.0-bullseye-slim AS build
WORKDIR /
COPY ["src/webapp/webapp.csproj", "/src/webapp/"]
COPY ["src/classlib/classlib.csproj", "/src/classlib/"]
COPY ["test/classlib.tests/classlib.tests.csproj", "/test/classlib.tests/"]
# restore for all projects
RUN dotnet restore src/webapp/webapp.csproj
RUN dotnet restore src/classlib/classlib.csproj
RUN dotnet restore test/classlib.tests/classlib.tests.csproj
COPY . .
# test
# install the report generator tool
RUN dotnet tool install dotnet-reportgenerator-globaltool --version 5.1.20 --tool-path /tools
RUN dotnet test --results-directory /testresults --logger "trx;LogFileName=test_results.xml" /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura /p:CoverletOutput=/testresults/coverage/ /test/classlib.tests/classlib.tests.csproj
LABEL test=true
# generate html reports using report generator tool
RUN /tools/reportgenerator "-reports:/testresults/coverage/coverage.cobertura.xml" "-targetdir:/testresults/coverage/reports" "-reporttypes:HTMLInline;HTMLChart"
RUN ls -la /testresults/coverage/reports
 
ARG BUILD_TYPE="Release" 
RUN dotnet publish src/webapp/webapp.csproj -c $BUILD_TYPE -o /app/publish
# Package the published code as a zip file, perhaps? Push it to a SAST?
# Bottom line is, anything you want to extract forensically from this build
# process is done in the build layer.
FROM base AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "webapp.dll"]

The way you would leverage this test output is by fishing the temporary image out of the cache and creating a container from it, from which you can do plain file operations.

# docker images --filter "label=test=true"
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
<none>       <none>    0d90f1a9ad32   40 minutes ago   3.16GB
# export id=$(docker images --filter "label=test=true" -q | head -1)
# docker create --name testcontainer $id
# docker cp testcontainer:/testresults ./testresults
# docker rm testcontainer

All our problems are solved. Wrap this in a script and you’re done. I did, I mean they did, I stole this from another blog.

Unfortunately keeping an endless archive of temporary, orphaned layers became a performance and storage bottleneck for docker, so – sadly – the Modern Era began with some optimisations that rendered this method impossible.

The Modern Era of BuildKit

Since intermediate layers are mostly useless, just letting them fall by the wayside and focusing on actual output was much more efficient, according to the powers that be. The use of multi-stage Dockerfiles to additionally produce test data output was not recommended or recognised as a valid use case.

So what to do? Well – there is a new command called docker bake that lets you run docker build for multiple docker images, or – most importantly – build multiple targets from the same Dockerfile.

This means you can run one build all the way through to produce the final lightweight image and also have a second run that saves the intermediary image full of test results. Obviously the docker cache will make sure nothing is actually run twice, the second run is just about picking out the layer from the cache and making it accessible.

The Correct way of using bake is to format a bake file in HCL format:

group "default" {
  targets = [ "webapp", "webapp-test" ]
}
target "webapp" {
  output = [ "type=docker" ]
  dockerfile = "src/webapp/Dockerfile"
}
target "webapp-test" {
  output = [ "type=image" ]
  dockerfile = "src/webapp/Dockerfile"
  target = "build"
} 

If you run this with docker buildx bake -f docker-bake.hcl, you will be able to fish out the historic intermediary layer using the method described above.

Conclusion

So – using this mechanism you get a minimal number of Dockerfiles, you get all the build gubbins happening inside docker – giving you freedom from whatever limitations plague your build agent – yet the bloated mess that is the build process will be automagically discarded and forgotten as you march on into your bright future with a lightweight finished image.

Technical debt – the truth

Rationale

Within software engineering we often talk about Technical Debt. The term was coined by elder programmer and agilist Ward Cunningham, and likens the trade-offs made when designing and developing software to credit you can take on, but that has to be repaid eventually. The comparison further, correctly, implies that you have to service your credit every time you make new changes to the code, and that the interest compounds over time. Some companies literally go bankrupt over unmanageable technical debt, because the cost of development goes up as the speed of delivery plummets, and whatever use the software was intended to have is eventually completely covered by a competitor unburdened by technical debt.

But what do you mean technical debt?

The decision to take a shortcut can concern a couple of things – usually the number of features, or the time spent per feature. It could mean postponing required features to meet a deadline or milestone, despite it taking longer to circle back and do the work later in the process. If there is no cost to this scheduling change, it’s just good planning; for it to be defined as technical debt, there has to be a cost associated with the rescheduling.

It is also possible to sacrifice quality to meet a deadline. “Let’s not apply test driven development because it takes longer; we can write tests after the feature code instead.” That would mean that instead of iteratively writing a failing test first, followed by the feature code that makes that test pass, we get into a state of flow and churn out code as we solve the problem at varying levels of abstraction, retrofitting tests as we deem necessary. It feels fast, but the tests get big, incomplete, unwieldy and brittle compared to the plentiful, small, specific tests TDD brings. A debt you are taking on, to be paid later.
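
To make the contrast concrete, a minimal red-green cycle (invented example): the failing test exists before the feature code that satisfies it.

using Xunit;

public class DiscountTests
{
    // Red: this test is written first and fails until Discount exists.
    [Fact]
    public void Orders_of_100_or_more_get_five_percent_off() =>
        Assert.Equal(95m, Discount.Apply(100m));
}

// Green: the smallest feature code that makes the test pass.
public static class Discount
{
    public static decimal Apply(decimal amount) =>
        amount >= 100m ? amount * 0.95m : amount;
}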

A third kind of technical debt – which I suspect is the most common – is also the one that fits the comparison to financial debt the least. A common way to cut corners is to not continuously maintain your code as it evolves over time. It’s more akin to the cost of not looking after your house as it is attacked by weather and nature, more dereliction than anything else really.

Let’s say your business had a physical product it would sell back when a certain piece of software was written. Now the product sold is essentially a digital license of some kind, but in the source code you still have inventory, shipping et cetera that has been modified to handle selling a digital product in a way that kind of works, but every time you introduce a new type of digital product you have to write further hacks to make it appear like a physical product as far as the system knows.

The correct way to deal with this would have been to make a more fundamental change the first time digital products were introduced. Maybe copy the physical process at first and cut things out that don’t make sense whilst you determine how digital products work, gradually refactoring the code as you learn.

Interest

What does compound interest mean in the context of technical debt? Let’s say you have created a piece of software, and your initial tech debt in this story is that you are thin on unit tests but have tried to compensate by making more elaborate integration tests. Then the time comes to add an integration: say a JSON payload needs to be posted to a third-party service over HTTP, with a bespoke authentication behaviour.

If you had applied TDD, you would most likely have a fairly solid abstraction over the REST payload, so that an integration test could be simple and small.
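
Something like this sketch, say (names invented): the bespoke authentication sits behind one small seam, so a test can cover the integration without dragging the whole feature along.

using System.Net.Http;
using System.Net.Http.Json;
using System.Threading.Tasks;

// The bespoke authentication is isolated behind one seam...
public interface IThirdPartyAuth
{
    Task SignAsync(HttpRequestMessage request);
}

// ...so the client stays small enough to test in isolation.
public class ThirdPartyClient
{
    private readonly HttpClient _http;
    private readonly IThirdPartyAuth _auth;

    public ThirdPartyClient(HttpClient http, IThirdPartyAuth auth) =>
        (_http, _auth) = (http, auth);

    public async Task PostPayloadAsync(object payload)
    {
        var request = new HttpRequestMessage(HttpMethod.Post, "orders")
        {
            Content = JsonContent.Create(payload)
        };
        await _auth.SignAsync(request);
        (await _http.SendAsync(request)).EnsureSuccessStatusCode();
    }
}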

But in our hypothetical you have less than ideal test coverage, so you need to write a fairly elaborate integration test that needs to verify parts of the surrounding feature along with the integration itself to truly know the integration works.

Like with some credit cards, you have two options on your hypothetical tech debt statement: either pay the larger amount and build the elaborate new integration test at significant cost – a day? Three? – or you avert your eyes, choose the second, smaller amount, and increase your tech debt principal by not writing an automated test at all, vowing to test this area of code by hand every time you make a change. The technical debt equivalent of a payday loan.

Critique

So what’s wrong with this perfect description of engineering trade-offs? We addressed above how a common type of debt doesn’t fit the debt model very neatly, which is one issue, but I think the bigger problem is – to the business, we just sound like cowboy builders.

Would you accept that a builder under-specified a steel beam for an extension you are having built? “It’s cheaper and although it is not up to code, it’ll still take the weight of the side of the house and your kids and a few of their friends. Don’t worry about it.“ No, right? Or an electrician getting creative with the earthing of the power shower as it’s Friday afternoon, and he had promised to be done by now. Heck no, yes?

The difference of course is that within programming there is no equivalent of the Gas Safe Register, no NICEIC et cetera. There are no safety regulations for how you write code. Yet.

This means some people will offer harmful ways of cutting corners to people that don’t have the context to know the true cost of the technical debt involved.

We will complain that product owners are unwilling to spend budget on necessary technical work, so as to blame product rather than take some responsibility. The business expects us to flag up if there are problems. Refactoring as we go, upgrading third party dependencies as we go should not be something the business has to care about. Just add it to the tickets, cost of doing business.

Sure, there are big singular incidents – such as a form of authentication being decommissioned or a framework being sunset – that require big coordinated change involving product, but usually those changes aren’t that hard to sell to the business. It is unpleasant, but the business can understand this type of work being necessary.

The stuff that is hard to sell is the bunched-up refactorings you should have done along the way but didn’t – and now you want to do them because it’s starting to hurt. Tech debt amortisation is very hard to sell: things are not totally broken now, so why do we have to eat the cost of this massive ticket when everything works and is making money? Are you sure you aren’t just gold-plating something out of vanity? The budget is finite and product has other things on their mind. Leave it for now, we’ll come back to it (when it’s already fallen over).

The business expects you to write code that is reliable, performant and maintainable. Even if you warn them that you are offering to cut corners at the expense of future speed of execution, a non-developer may have no idea of the scale of the implications.

If they spent a big chunk of their budget one year – the equivalent of a new house in a good neighbourhood – so that a bunch of people could build a piece of software, in the hope that this brand-new widget on a website or new line-of-business app will bring increased profits over the coming years, they don’t want to hear roughly the same group of people refer to it as “legacy code” already at the end of the following financial year.

Alternative

Think of your practices as regulations that you simply cannot violate. Stop offering solutions that involve sacrificing quality! Please, even.

We are told that an elaborate big-design-upfront is waterfall and bad – but how about some-design-upfront? Just enough thinking ahead to decide where to extend existing functionality and where to instead put a fork in the road and begin a more separate flow in the code, which you then develop iteratively.

If you have to make the bottom line more appealing to stakeholders for them to dare invest in building new product through you rather than through dubious shadow IT, try to figure out a way to start smaller and deliver value sooner, rather than tricking yourself into accepting work that you cannot possibly deliver safely and responsibly.

Size matters

When I started out, I had no idea about lean, agile or TDD. I did read a lot about object-oriented programming. I even thought I was practising it. My first decade of programming would not pass for professional grade by modern standards, but even my poor standard back then was – and still would be – not the worst. Sadly.

Eventually I read some pamphlets about TDD, tried some katas, read books about lean software development and The Phoenix Project, and was subjected to Fred George‘s Object Bootcamp with my co-workers a couple of employers ago.

Then I understood. All of it. Object-oriented programming, TDD, lean – all of it. Because of how steeped I am in my old bad habits, it still takes conscious effort to practise all these things, and I sometimes fail to enforce them, but at least at this point I know what good looks like. If I had stronger discipline I could refactor towards it as often as I like.

Like, if you make small enough, focused classes, object orientation makes sense, and you find yourself binning and replacing classes that are no longer fit for purpose rather than rewriting them. Open-closed principle. There are suddenly more value objects and composable domain classes that cleanly describe domain behaviour, and you get away from the cascading changes you thought were inherent in OO. But is it easy? No. It requires constant effort to maintain.
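To make that a little more concrete, a hypothetical sketch – all names invented – of the small, composable style this paragraph is gesturing at:

```csharp
using System;

// A tiny value object encoding one domain rule.
public readonly record struct Money(decimal Amount, string Currency)
{
    public Money Add(Money other) =>
        other.Currency == Currency
            ? this with { Amount = Amount + other.Amount }
            : throw new InvalidOperationException("Currency mismatch");
}

// New behaviour arrives as a new small class implementing IAdjustment,
// not as an edit to Quote – closer to open-closed than a sprawling god class.
public interface IAdjustment
{
    Quote Adjust(Quote quote);
}

public sealed record Quote(Money Premium)
{
    public Quote Apply(IAdjustment adjustment) => adjustment.Adjust(this);
}
```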

The most important thing is size. Make the smallest possible change that makes the test go green. People – me, you – we? – overcomplicate things and fail to comprehend just how small small should be, and in the refactor step we rarely refactor enough, leaving classes too big. I find this really hard: the commit-by-commit design step where you are supposed to refactor code into proper shape, reducing classes down to their most composable form, their bare essence. On the other hand, “It’s your only job, Rachel!” as Jimmy Carr says on Eight out of Ten Cats Does Countdown when she can’t solve a numbers game right away. We are paid not only to sit in front of our computers and type but also to think. If we do a bit more of the thinking and a little less of the typing, nobody is going to get upset.
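As a reminder of just how small the green step is allowed to be, a minimal hypothetical xUnit example – hard-coding the answer is legitimate; a second test case then forces the real rule out of you, and the design happens in the refactor step:

```csharp
using Xunit;

public static class Price
{
    // Smallest change that makes the test below go green: hard-code it.
    // A second test (say, WithVat(200m) == 240m) forces the general rule.
    public static decimal WithVat(decimal net) => 120m;
}

public class PriceTests
{
    [Fact]
    public void Vat_is_added_to_the_net_price() =>
        Assert.Equal(120m, Price.WithVat(100m));
}
```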

Allen Holub talks about user stories on Twitter, reminding us that a story isn’t a neologism for “requirement” and that a user story means literally that: a software user telling us a story about a domain problem they are having. Not “as a user I want to authenticate so that I can have access to the system” but actual meat on the bone: “As an underwriter I want to bind a quote as I have agreed terms with the broker”. Requirements for authentication will come as part of some story, but start stupid simple with a .htaccess file or similar. For some subsystems that requirement will never come, or can be trivially covered with infrastructure, and you just saved yourself a bunch of maintenance. Code is not an asset; it’s a liability.

Allen Holub also uses TDD when designing systems – as in, uses Java and JUnit to TDD integrations between microservices, like a less-trendy Jupyter notebook proving his overall system design, even ending up with some code that could describe integration points. Start smaller. No, even smaller than that.

Then we have the No Estimates crowd. It seems insane to people used to the widespread non-story user stories, with large blocks of detailed requirements that necessitate complex up-front engineering. How could teams take on these big stories without estimating how long they will take to build – what if we spend three months and don’t even achieve what we want?!

That’s right. But – do you remember Agile? Yeah, see, the point was to build the smallest possible implementation that can prove viability. If your stories are sized consistently and correctly – i.e. smaller than you think possible – the consistent implementation work will give you the foresight you need to coordinate and plan as necessary.

We can then add features incrementally, as users examine the existing product, realise its potential and – based on their experience as domain experts – get ideas for what they could do to meet their customers’ needs, or what experiments could be run to determine customer desires. In small increments still.

Even if you insist on keeping your estimates, you will find that making stories smaller improves delivery quality and adherence to timescales. As soon as stories get too big, they are harder to reason about, and you may even forget acceptance criteria you thought you had clear in your head. You will never regret making stories smaller.

SD&D 2022

I had the chance to attend the Software Design & Development conference at the Barbican Centre this week. It was my first conference since the plague, so it was a new experience. I will collect here some of the main points I gathered. I may go into specifics on a couple of topics that stood out to me, but this is mostly so that I have some notes to aid my memory.

Overall

The conference is very well organised; you can tell it’s not their first rodeo. From a practical point of view, the Barbican is equally easy/cumbersome to get to from all directions, which is about as good as you can get in London, where you usually end up favouring proximity to a subset of main line termini, making the location cumbersome for at least half the population (since there are airports in every direction out of the city). There were a number of timeslots where you truly had FOMO when choosing one track over another, which is a design goal for a program committee – so, well done SD&D! There was an unfortunate setting in the AV equipment that interfered with shortcuts in Visual Studio in interesting ways, but surely that can be addressed somehow.

Kevlin Henney

The first law of developer conferences states: Always Catch a Talk by Kevlin Henney if Available. It doesn’t teach you anything about a specific new thing, but it puts the entire universe into context and always inspires you to go and look up papers from the olden days that describe modern phenomena.

C# features

My biggest bafflement came not from the talks that showcased new C# features, such as the latest iteration of pattern matching and the like – I usually get introduced to those when they show up as options in ReSharper – but from the old abomination that is default implementations on interfaces in C# 8. This talk by Jeremy Clark was an eye-opener. The feature is so janky you wonder how it could ever be released into production. My guess is that the compatibility argument from MAUI was the big reason, and it makes sense, but basically my instinct to stay away from it was right – I suspect it will cause weird bugs at some point in the future.
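A small hypothetical example of the kind of jank in question – a default member lives on the interface rather than on the implementing class, so it is only reachable through an interface-typed reference:

```csharp
using System;

public interface ILogger
{
    void Log(string message);

    // C# 8 default implementation: belongs to the interface, not the class.
    void LogError(string message) => Log("ERROR: " + message);
}

public class ConsoleLogger : ILogger
{
    public void Log(string message) => Console.WriteLine(message);
}

public static class Demo
{
    public static void Run()
    {
        var logger = new ConsoleLogger();
        // logger.LogError("boom");    // does not compile: LogError is not
                                       // a member of ConsoleLogger...
        ILogger viaInterface = logger;
        viaInterface.LogError("boom"); // ...it only exists via the interface.
    }
}
```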

I also caught the C# Channels talk. I seem to recall the gestation of Channels – if I remember correctly, it was publicly brought into being through discussions on David Fowler’s Twitter account. Anyway, it is a highly civilised way of communicating between two async tasks with back pressure. Intuitive to use and safe. Like a concurrent queue, but a lot nicer.
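For flavour, a minimal sketch of the System.Threading.Channels API as I remember it – a bounded channel is what gives you the back pressure, since writers await when the buffer is full:

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

public static class ChannelDemo
{
    public static async Task RunAsync()
    {
        // Bounded capacity is what provides back pressure.
        var channel = Channel.CreateBounded<int>(capacity: 8);

        var producer = Task.Run(async () =>
        {
            for (var i = 0; i < 100; i++)
                await channel.Writer.WriteAsync(i); // awaits when the buffer is full
            channel.Writer.Complete();              // signal no more items
        });

        var consumer = Task.Run(async () =>
        {
            await foreach (var item in channel.Reader.ReadAllAsync())
                Console.WriteLine(item);
        });

        await Task.WhenAll(producer, consumer);
    }
}
```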

Microservices

Allen Holub had a number of talks on microservices, and I’m sure you should see more of them if you can; due to the abundance of other brilliant talks I only caught his test-driven architecture talk, which I mention below. Beyond him, there were talks by Juval Löwy and Neal Ford covering architecture and microservices. I caught Software architecture foundations: identifying characteristics by Neal Ford and Sander Hoogendoorn’s talk on migrating to microservices in small steps, both worth watching.

Security

There were a couple of security talks focussing on automating security as well as the updated OWASP Top 10 list of threats, and a couple of talks by Scott Brady about – I am guessing – Identity Server. The dizzying array of choices meant I didn’t go to the Identity Server talks, but they are definitely on my to-watch list if there is ever video from this.

The security automation ones, though – Continuous Security by Kim van Wilgen and Add Security into your Agile Process by Cecilia Wirén – took you through what you need to really improve the security posture of your development process. Kim van Wilgen went more in-depth on tooling, whilst Cecilia Wirén focused more on the whole process and what to consider where. The only sustainable way forward is to automate things like dependency vetting/tracking and as much static code analysis as you can, bringing these concerns as far left – meaning as early in the process – as possible. Rewriting a function before you have even committed it is easy; having an automated fuzzing tool discover a buffer overrun vulnerability when you thought you were done is worse from a cost-of-remediation point of view. Of course, letting a bad vulnerability out into the wild is an order of magnitude worse still, so whilst catching problems in the dev cycle is preferable, heavier automation that is too time-consuming to run on every commit is still worth running periodically – it’s better than the alternative.

Tests drive everything

How to stop testing and break your codebase by Clare Sudbery was an amazing talk that really hit home. It was like an experiment of “what if I just skip test-first for a bit and see what happens?”, brought on by time crunch, tiredness and a sense that it would be a safe trade-off because “I know what I’m doing” and – in her case – “I have acceptance tests”. As I have noticed in various side projects, when you let go of the discipline the drawbacks come at you hard and fast – almost cartoonishly so. It was one of the most relatable talks I have ever attended.

Allen Holub’s DbC (Design by Coding): applying TDD principles to architecture was fascinating. It started out a bit “old man yells at cloud” about how real agile is index cards on a board, not Jira – and although index cards or post-its on a physical board are preferable to an electronic board, the electronic board saves me from commuting four days out of five, so, no. The point raised about authoring a million detailed tickets ahead of time being a waste, though – hard agree – and the rest of the diatribe against big design up-front I was all aboard with and probably already doing to some extent. The interesting bit was still to come, though.
He presented a hypothetical technical problem that needed architecting. Instead of drawing a diagram or writing specs, he whipped out an editor with his favourite version of JUnit and wrote some Java code through TDD – all in one file – implementing tests and classes that symbolised microservices and their endpoints, as well as the interactions between them. Lightweight, easy for developers to read, pleasant tooling, and at least as useful as a diagram. In both cases you start writing the real code from scratch, but you have a design document that makes sense. I’m not 100% sold on the concept, but it’s worth considering.
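To illustrate the idea – transposed here from his Java/JUnit to C#/xUnit, with invented names, as a sketch of the technique rather than his actual code – classes stand in for services, methods for their endpoints, and a test pins down the interaction being designed:

```csharp
using Xunit;

// Stand-in "microservices": each class represents a service, each public
// method an endpoint. The test below documents the designed interaction.
public class PaymentService
{
    public bool Charge(decimal amount) => amount > 0m;
}

public class OrderService
{
    private readonly PaymentService _payments;

    public OrderService(PaymentService payments) => _payments = payments;

    public string PlaceOrder(decimal amount) =>
        _payments.Charge(amount) ? "accepted" : "rejected";
}

public class ArchitectureTests
{
    [Fact]
    public void Order_is_accepted_when_payment_succeeds()
    {
        var orders = new OrderService(new PaymentService());
        Assert.Equal("accepted", orders.PlaceOrder(42m));
    }
}
```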

Various highlights

The Tuesday morning keynote was about quantum computing, and reluctantly I must concede that it probably is the future. The community seems to be looking for converts, but I just can’t get from quantum entanglement to taking data from a web page and shoving it into a database, so I guess I have to wait those three years before it hits the mainstream and becomes digestible for the likes of me. If you are into cryptography – as in encryption, not the various Ponzi schemes – you should probably get into it now.

I caught a Kate Gregory talk – on naming – after previously only being a fan of her YouTube talks on C++ vs C; definitely worth seeing. The strategy she proposed – just mash the keyboard if you can’t think of a name and go back to it later – was also brought forward elsewhere at the conference. The point being: don’t get stuck trying to come up with a name; start writing the code, and as you start talking about what the thing you just wrote is and what it is responsible for, a great name will eventually become evident, and then you use that. The effort of coming up with a good name is worth it, and with refactoring tools it’s cheap to rename later rather than lock in a bad decision. Once you have finished the feature and moved on, the caveats and differences between the name and the actual implementation fade into obscurity, and when you come back to the same code in three weeks you will have forgotten all about it and can be misled by bad names as easily as if you hadn’t written the code yourself.

Conclusion

I really enjoyed this conference. Again, with a past on a program committee myself, I really admired the work they put in to cause so much anxiety when picking talks. Obviously the QE2 conference centre in Westminster is newer, so NDC London benefits from that, and if you want to see Troy Hunt and the asp.net core guys Fowler and Edwards you would be better off at NDC, but SD&D got all the core things right, offered a wider array of breakout sessions, and infrastructure like the food was a lot less chaotic than it usually is at NDC. BuildStuff and Øredev I only attended as a visitor to the city, so I didn’t have to commute, which of course is nice.

If you care about the environment, you’ll be pleased that there was no shilling or abundance of obscure t-shirts handed out to drain natural resources, though I suspect SD&D would have enjoyed more sponsorships and an expo floor, which allegedly they have had before. Since this conference was actually SD&D 2020, postponed several times over the course of two years, it is possible that SD&D 2023 will have pre-pandemic levels of shilling. All I know is I enjoy free t-shirts to clothe the child. Regardless, I strongly recommend attending this conference.