
Further on how time is wasted

I keep going on about why software development should consist of empowered cross-functional teams, and a lot of actual experts have written very well – at length – about this. Within manufacturing the Japanese showed why it matters in the 1980s, and the American automotive industry drew the wrong lessons from it, but that is a separate issue. For some light reading on these topics I recommend People Before Tech by Duena Blomstrom and also Eli Goldratt’s The Goal.

Incorrect conclusions from the Toyota Production System

The emergence of Six Sigma was a perfect example of drawing the completely wrong conclusion from the TPS1. In manufacturing, as in other processes where you do the exact same thing multiple times, you do need to do a few sensible things, like examine your value chain and eliminate waste. Figuring out exactly how to fit a dashboard 20 seconds faster in a car, or providing automated power tools that let the fitter apply exactly the correct torque without any manual settings, creates massive value, and communicating and re-evaluating these procedures to gradually optimise further has a direct tie-in with value creation.

But transferring that way of working to an office full of software developers where you hopefully solve different problems every single day (or you would use the software you have already written, or license existing software rather than waste resources building something that already exists) is purely negative, causing unnecessary bureaucracy that actually prevents value creation.

Also – the exact processes that were developed at Toyota, or even at its highly successful joint venture with General Motors – NUMMI – were never the success factor. The success factor was the introspection, the empowerment to adapt and inspect processes at the leaf nodes of the organisation. The attempts by GM to bring the exact processes back to Detroit failed miserably. The clue is in the meta process, the mission and purpose as discussed in Richard Pascale’s The Art of Japanese Management and in Tom Peters’ In Search of Excellence.

The value chain

The books I mentioned in the beginning explain how to look at work as it flows through an organisation as a chain, where pieces of work are handed off between different parts of the organisation as different specialists do their part. Obviously there needs to be authorisation and executive oversight to make sure nothing that runs contrary to the ethos of the company gets done in its name, and there are multiple regulatory and legal concerns that a company wants to safeguard – I want to make it clear that I am not proposing we remove those safeguards – but the total actual work that is done needs mapping out. Executives and especially developers rarely have a full picture of what is really going on; there can be workarounds created around perceived failings in the systems used that have never been reported accurately.

A more detailed approach, which is like a DLC on the Value Stream Mapping process, is called Event Storming, where you gather stakeholders in a room to map out exactly the actors and the information that make up a business process. This can take days, and may seem like a waste of a meeting room, but the knowledge that comes out of it is very real – as long as you make sure not to make this a fun exercise for architects only, but involve the very real people who work in these processes day-to-day.

The waste

What is waste then? Well – spending time that does not even indirectly create value. Having work sit waiting for approvals rather than shipping it. The product organisation creating tickets six months ahead of time that then need to be thrown away because the business needs to go in a different direction. Writing documentation nobody reads (it is the “that nobody reads” that is the problem there, so work on writing good, useful and discoverable documentation, and skip the rest). Having two or more teams work on solving the same problem without coordination – although there is a cost to coordination as well, so there is a tradeoff here. Sometimes it is less wasteful for two teams to independently solve the same problem if it leads to faster time to market, as long as the maintenance burden created is minimal.

On a value stream map it becomes painfully clear when you need information before it exists, or when dependencies flow in the wrong direction, and with enough seniority in the room you can make decisions on what matters, and with enough individual contributors in the room you can agree practical solutions that make people’s lives easier. You can see handoffs that are unnecessary or approvals that could be determined algorithmically, find ways of making late decisions based on correct data, or find ways of implementing safeguards in a way that does not cost the business time.

As a small cog in a big machine it is sometimes very difficult to know what parts of your daily struggles add or detract value from the business as a whole, and these types of exercises are very useful in making things visible. The organisation is also forced to resolve unspoken things like power struggles so that business decisions are made at a sensible level with clear levels of authority. Especially businesses with a lot of informal authority or informal hierarchies can struggle to put into words how they do things day-to-day, but it is very important that what is documented is the current unvarnished truth, or else it is like you learn repeatedly in The Goal – optimising any other place than the constraint is useless.

But what – why is a handoff between teams “waste”?

There are some unappealing truths in The Goal – e.g. the ideal batch size is 1, and you need slack in the system – but when you think about them, they are true:

Slack in the system

Say, for instance, you are afraid of your regulator – with good reason – and you know from bitter experience that software developers are cowboys. You hire an ops team to gatekeep, to prevent developers from running around with direct access to production, and now the ops team relies on detailed instructions from the developers on how to install the freshly created software into production, yet the developers are not privy to exactly how production works. Hilarity ensues and deployments often fail. It becomes hard for the business to know when their features go out, because both ops and dev are flying blind. In addition to this, the ops team is 100% utilised – they are deploying and configuring things all day – so any failed deployment (or, let’s be honest, botched configuration change ops attempts on their own without any developer to blame) always leads to delays, so the lead time for a deployment goes out to two weeks, and then further.

OK, so let’s say we solve that: ops build – or commission – a pipeline that they accept is secure and has enough controls and reliable rollback capabilities to be trusted to hand over to be used by a pair of developers. Bosh – we solve the deployment problem. Developers can only release code that works, or it will be rolled back without them needing the actual keys to the kingdom; they have a button and observability, that’s all they need. Of course, we developers will find new ways to break production, but the fact remains, rollback is easy to achieve with this new magical pipeline.

Now this state of affairs is magical thinking, no overworked ops team is going to have the spare capacity to work on tooling. What actually tended to happen was that the business hired a “devops team”, which unfortunately weren’t trusted with access to production either, so you might end up with separate automation among ops vs dev (“dev ops team” writes and maintains CI/CD tooling and some developer observability, ops team run their own management platform and liveness monitoring) which does not really solve the problem. The ops team needs time to introspect and improve processes, i.e. slack in the system.

Ideal batch size is 1

Let us say you have iterations, i.e. the “agile is multiple waterfalls in a row” trope. You work for a bit, you push the new code to classic QA who revise their test plans, and then they test your changes as much as they can before the release. You end up with 60 Jira tickets that need QA signoff on all browsers before the release, and you need to dedicate a middle manager to go around and hover behind the shoulders of devs and QA until all questions are straightened out and the various bugs and features have been verified in the dedicated test environment.

A test plan for the release is created, perhaps a dry run is carried out, you warn support that things are about to go down, you bring half the estate down and install updates on one side of the load balancer, and you let the QAs test the deployed tickets. They will try and test as much as they can of the 60 tickets without taking the proverbial, given that the whole estate is currently only serving prod traffic from a subset of the instances. Once they are happy, prod is switched over to the freshly deployed side, and a couple of interested parties start refreshing the logs to see if something bad seems to be happening as the second half of the system is deployed, the firewall is reset to normal and monitoring is enabled.

So that is a “normal” release for your department, and it requires quite a few people to go to DEFCON 2 and dedicate their time to shepherding the release into production. A lot of the complexity with the release is the sheer size of the changes. If you were deploying a small service with one change, the work to validate that it is working would be minimal, and you would also immediately know what is broken because you know exactly what you changed. With large change sets, if you start seeing an Internal Server Error becoming common, you have no exact clue as to what went wrong, unless you are lucky and the error message makes immediate sense – but unfortunately, if it was a simple problem, you would probably have caught it in the various test stages beforehand.

Now imagine that one month the marquee feature that was planned to be released was a little bit too big and wouldn’t be able to be released in its entirety, so the powers that be decide, hey let’s just combine this sprint with the next one and push the release out another two weeks.

Come QA validation before the delayed release, there are 120 tickets to be validated – do you think that takes twice the time to validate, or more? Well, you only get the same three days to do the job – it’s up to the Head of QA to figure it out – but the test plan is longer, which makes the release take ten hours, four of which are spent in limbo while the estate is running on half power.

So yes, you want to make releases easy to roll back, fast to do, and heavily reliant on automated validation to avoid needing manual verification – but most of all, you want to keep the change sets small. The automation helps that become an easier choice, but you could choose to release often even with manual validation; it just seems to be human nature to prefer a traumatic delivery every two weeks over slight nausea every afternoon.

So what are the right conclusions to draw from TPS?

Well, the main things are go and see, and continuous improvement. Allow the organisation to learn from previous performance, and empower people to make decisions at the level where it makes sense – e.g. discussions about tabs vs spaces should not travel further up the organisation than the developer collective, and some discussions should not be elevated beyond the team. If you give teams accountability for cloud spend, plus the visibility and the authority to effect change, you will see teams waste less money; but if the cost is hard to attribute and only visible on a departmental level, your developers are going to shrug their shoulders because they do not see how they can effect change. If you allow teams some autonomy in how to solve problems whilst giving them clear visibility of what the Customer – in the agile sense – wants and needs, the teams can make correct small tradeoffs at their level without derailing the greater good.

So – basically – make work visible, let teams know the truth about how their software is working and what their users feel about it. Let developers see how users use their software. Do not be afraid to show how much the team costs, so that they can make reasonable suggestions – like “we could automate these three things, and given our estimated running cost, that would cost the business roughly 50 grand, and we think that would be money well spent because a), b), c) […]” or alternatively “that would be cool, but there is no way we could produce that for you for a defensible amount of money, let us look at what exists on the market that we can plug into to solve the problem without writing code”.
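To make the cloud spend point concrete: something as small as the sketch below is often enough to give a team a number they can actually act on. It assumes your resources are consistently tagged with a team tag and uses the AWS Cost Explorer API via boto3; the tag key and dates are illustrative, not a prescription.

```python
# A minimal sketch of per-team cost visibility, assuming AWS resources are
# tagged with a "team" tag. The tag key and time period are illustrative.
import boto3


def cost_per_team(start: str, end: str) -> dict[str, float]:
    ce = boto3.client("ce")
    response = ce.get_cost_and_usage(
        TimePeriod={"Start": start, "End": end},   # e.g. "2024-01-01", "2024-02-01"
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
        GroupBy=[{"Type": "TAG", "Key": "team"}],  # assumes a consistent "team" tag
    )
    costs: dict[str, float] = {}
    for period in response["ResultsByTime"]:
        for group in period["Groups"]:
            team = group["Keys"][0]                # comes back as e.g. "team$payments"
            amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
            costs[team] = costs.get(team, 0.0) + amount
    return costs


if __name__ == "__main__":
    for team, amount in sorted(cost_per_team("2024-01-01", "2024-02-01").items()):
        print(f"{team}: ${amount:,.2f}")
```

Pipe that into a dashboard the team actually looks at and the “shrug of the shoulders” tends to go away on its own.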

Everyone at work is a grown-up, so if you think that you are getting unrealistic suggestions from your development teams, consider whether you have perhaps hidden too much relevant information from them. If we figure out how to make relevant information easily accessible, we can give everyone a better understanding not only of what is happening right now, but more importantly of what reasonably should happen next. This also helps upper management understand what each team is doing. If there is internal resistance to this, consider why, because that in itself could explain problems you might be having.

  1. The initialism TPS stands for Toyota Production System, as you may deduce from the proximity to the headline, but I acknowledge the existence of TPS reports in popular culture – i.e. Office Space. I do not believe they are related. ↩︎

Development Productivity

Why the rush to measure developer productivity?

A lot has been written on developer productivity, the best – indubitably – by Gergely Orosz and Kent Beck, but instead of methodical thinking and analysis I will just recall some anecdotes – it is Saturday, after all.

The reason a business even invests in custom software development is that the business realises it could get a competitive advantage by automating or simplifying tasks used when generating revenue; lowering cost, I would say, is only a secondary concern.

The main goal is to get the features and have some rapid response if there are any problems, as long as this happens there are no problems. Unfortunately as we know, software development is notorious for having problems.

The customer part of the organisation is measured relentlessly, and is sometimes incentivised based on generated profit, whilst they deal with a supplier part of the organisation that seems to have no benchmarks for individual performance – which is bewildering when projects keep slipping.

Why can’t we just measure the outcome of an individual software developer? Because an individual developer’s outcome depends on too many external factors. Sure, one developer might type the line that, once in production, led to a massive reduction in losses, but who should get credit? The developer that committed the code? The other developer that reviewed it? The ops person that approved the release into production? The product owner that distilled wants and needs into requirements that could be translated into code?

The traditional solution is to attempt to measure effort: velocity, lines of code et cetera. Those metrics can easily yield problematic emergent behaviour in teams, and they at no point measure whether the code written actually provides value to the business.

I would argue that the smallest unit of accountability is the development team – the motley crew of experts in various fields that work together to extract business requirements and convert them into running software. If they are empowered to talk to the business and trusted to release software responsibly into production, they can be held accountable for things like cloud spend, speed of delivery and reliability.

Unfortunately the above description of a development team and its empowerment sounds like a fairytale for many developers out there. There are decision gates, shared infrastructure, shared legacy code and domain complexities meaning that teams wait for each other, or wait for a specific specialist team to do some work only they are trained/allowed to do. I have likened it to an incandescent lightbulb before. A lot of heat loss, very little light. Most of the development effort is waste heat.

Why do software development teams have problems delivering features?

I will engage in some harmful stereotyping here to make a wider point. I have been around the block over the years and visited many organisations that exhibit the issues below to varying degrees, in different ways.

Getting new stuff into the pipeline

Fundamentally it is difficult to get the internal IT department to build you a new thing. A new thing means a project plan has to be drawn up to figure out how much you are willing to spend, the IT department needs to negotiate priorities with other things currently going on, and there can be power plays between various stakeholders – not to mention the larger projects IT are engaging in on their own because they mentioned them when their own budget was negotiated.

Some of this stuff is performative, literally a project needs to look “cool” to justify an increase in headcount. Now, I’m not saying upper management are careless, there are follow-ups and metrics on the department, and if you get a bunch of people in and the projected outcomes don’t materialise, you will be questioned, but the accountability does not change the fact that there is a marketing aspect when asking for a budget, which also means there is some work the IT department must do regardless of what the rest of the business wants, because they promised the CEO a specific shiny thing.

Gathering requirements

In some organisations, the business know “their” developers and can shoot questions via DM. There is a dark side to this when one developer becomes someone’s “guy” and their Teams in-tray becomes a shadow support ticket system. This deterioration is why delivery managers or scrum masters or engineering managers step in to protect the developers – and thus their timelines – because otherwise none of the project work gets done: the developers are working on pet projects for random people in the business, and all that budgeting and all those plans go out the window. The problem with this protectionism is that you remove direct feedback; the developers do not intuitively know how their users operate the software in anger. Many developers get anxiety pangs when they finally see the number of workarounds people do on a daily basis with hotkeys or various clipboard tricks – things that could just have been an event in the source code, or a few function calls, saving literally thousands of hours of employee time in a year.

The funnel of requirements therefore goes through specific people: a product owner or business analyst that has the unenviable task of gathering all kinds of requests into a slew of work that a representative in the business is authorised to approve. Instead of letting every Tom, Dick and Harry have a go at adding requirements, there is some control to prevent cost runaway and to offer a cohesive vision. Yes, great advantages, but one thing it means is that people on the front line – who work against commission and feel constant pressure whilst battling the custom software – never see their complaints about stuff being annoying or difficult addressed, and when every six months some new feature is presented, they notice that their pet peeve is still there, unaddressed. This creates a groundswell of dissatisfaction, sometimes unfounded, but unfortunately sometimes not.

Sometimes the IT department introduce a dual pipeline, one pipe for long-term feature work and one for small changes that can be addressed “in the run of play” to offer people the sense of feedback being quickly addressed, which adds the burden of having a separate control mechanism of “does this change make sense? is it small enough to qualify?” but some companies have had success with it.

The way to be effective here is to reduce gatekeeping but have transparent discussions on how much time can be spent addressing annoyances. Allowing teams to submit and vote for features works too, but generally just showing developers how the software is used by its users is eye-opening.

Building the right thing

“We have poor requirements” is the main complaint developers have when stories overrun their estimates. In my experience this happens when stories are too big, and maybe some more back and forth is needed with the business to sort it out. If developers and business are organisationally close enough, a half hour meeting could save a lot of time. It can be argued that estimates are a waste of time and should be replaced by budgets instead, but that’s a separate blog post.

Developers have all had experience of wasting time. Let us say a request comes down from the business with a vision of something; the team goes back and forth to figure out how to migrate from an existing system and how to put the new app in production. They write a first version to prove the concept, and then it never gets put into production for some external reason. Probably a perfectly valid reason – it is just never made clear to the developers why the last three months of meetings, documents and code were binned, which grates on them. As implied above, the fact that it was three months of work can explain why the thing was never released; its time may very well have passed.

My proposed solution to the business remains to build smaller initial deliverables to test the waters, I have yet to be convinced that doesn’t work. It is hard, yes, it requires different discussions, and I will concede, the IT department might already be at a disadvantage trying to promise faster deliverables.

Also, requirements change because reality changes – the business is not always just messing with you – and because your big or complex changes take time to get through to production, the problem of changing requirements gets exacerbated the slower delivery is. Also – don’t design everything up front. Figure out small chunks you can deliver and show people. This is difficult when you are building backend rails for something, but you can demo API payloads. Even semi-technical or non-technical people can understand fundamental concepts and spot omissions early: “why are you asking for x here? We don’t know that at this point of the customer journey”. Naming your API request and response objects the way the business would, i.e. ubiquitous language, makes this process a lot easier. Get feedback early. Keep a decision log. You don’t need diagrams of every single class, but you do need a glossary and a decision log.
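As a sketch of what demoing API payloads in the business’s own language can look like – the domain and every field below are made up purely for illustration – the point is that a reviewer from the business can read the names and spot what is missing:

```python
# A sketch of naming API payloads in the business's own language. The domain
# (motor insurance quotes) and every field here are invented for illustration.
from dataclasses import dataclass, asdict
from datetime import date


@dataclass
class QuoteRequest:
    """What the business calls 'asking for a quote' - nothing more, nothing less."""
    registration_number: str
    cover_start_date: date
    no_claims_years: int
    # Deliberately no payment details: at this point in the customer journey we
    # do not know them yet, and a business reviewer can spot that immediately.


@dataclass
class QuoteResponse:
    quote_reference: str
    monthly_premium_pence: int
    expires_on: date


# Demoing the payload to the business is just printing an example:
example = QuoteRequest("AB12 CDE", date(2025, 3, 1), 5)
print(asdict(example))
```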

Building the thing right

I have banged on about engineering practices on here before, so there is nothing really new here. Fundamentally, the main thing missing is tests – anything from unit, to feature, to integration, to contract – not to mention performance tests. Now, sometimes you write these tests and run them automatically. Ideally you do, but with contract tests for instance, the amount of ceremony you have to set up for your first automated contract test, plus its limited value until everything is contract tested, means that a fair tradeoff can be to agree to manually test contracts. The point is, you will have these tests regardless of whether they are automated or not, or else you will release code that does not work. The later you have the epiphany that test-first is superior, the more legacy you will have that is hard to test and hard to change.
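For a feel of what a contract check is, here is a minimal, hand-rolled consumer-side check – dedicated tooling such as Pact does this with more ceremony, and the endpoint shape below is invented for illustration:

```python
# A minimal, hand-rolled contract check: the consumer pins down the shape of the
# provider response it relies on. The fields below are made up for illustration.
from jsonschema import validate  # pip install jsonschema

# The contract: what the consumer actually depends on - nothing more.
QUOTE_RESPONSE_CONTRACT = {
    "type": "object",
    "required": ["quote_reference", "monthly_premium_pence"],
    "properties": {
        "quote_reference": {"type": "string"},
        "monthly_premium_pence": {"type": "integer", "minimum": 0},
    },
}


def test_quote_response_honours_contract():
    # In a real suite this would come from calling the provider's test
    # environment, or from a recorded example the provider has signed off on.
    provider_response = {
        "quote_reference": "Q-12345",
        "monthly_premium_pence": 4250,
        "expires_on": "2025-03-01",  # extra fields are fine; missing ones are not
    }
    validate(instance=provider_response, schema=QUOTE_RESPONSE_CONTRACT)
```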

Even if you are stuck with legacy code that is hard to test, you can always test manually. I prefer to have test specialists on a team that I can ask for help, because their devious minds can come up with better tests, and then I just perform those tests when I think I’m done. You hate manually testing? Good, good! Use the hate, automate – but there is really no reason not to test.

If there are other parts of the business calling an API you are changing, never break an interface, always version. Doesn’t matter how you try and communicate, people are busy with their own stuff, they will notice you broke them way too late. Of course, contract tests should catch this, but why tempt fate. Cross-team collaboration is hard, if you can put some of these contracts behind some form of validation, you will save a lot of heartache.
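A sketch of what “never break, always version” can look like at the route level – the framework choice and the payload shapes here are purely illustrative:

```python
# A sketch of versioning instead of breaking: /v1 keeps returning exactly what
# existing callers expect while /v2 carries the changed shape. FastAPI is used
# only as an example framework; paths and fields are made up.
from fastapi import FastAPI

app = FastAPI()


@app.get("/v1/quotes/{quote_reference}")
def get_quote_v1(quote_reference: str) -> dict:
    # Old shape: a single premium figure. Left untouched until callers migrate.
    return {"quote_reference": quote_reference, "premium": 42.50}


@app.get("/v2/quotes/{quote_reference}")
def get_quote_v2(quote_reference: str) -> dict:
    # New shape: premium split out, with currency and period made explicit.
    return {
        "quote_reference": quote_reference,
        "premium": {"amount_pence": 4250, "currency": "GBP", "period": "monthly"},
    }
```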

Operation

I have addressed in previous posts the olden-day triumvirate of QA, Ops and Dev and how they were opposing forces. Ops never wanted to make any changes, QA would have preferred if you made smaller changes, and Devs just churned out code, throwing it over the wall. These days it is not as bad – the DevOps culture attempts to build unity and decentralise so that teams are able to responsibly operate their own software – but a lot of the time there is a specific organisation that handles deployments separately from the development team. Partly it can be interpreted as being required by ITIL, but it also gives operations a final chance to protect the business from poor developer output. As with all gatekeepers and extra steps, though, it adds a column on the board where tickets gather, which makes for a bigger release when it finally hits production – and a bigger changeset means a bigger surface area and more problems.

The key problem with running a service is to understand its state and alert on it. If it takes you a few hours to know that a service isn’t performing well, you are at a disadvantage. There is a tradeoff between the amount of data you produce and what value it brings.

Once you can quickly detect that a release went bad and can quickly roll it back, ideally automatically, then you will have saved everyone a lot of time and improved the execution speed for the whole department. It is very important. If not, you will further alienate your users and their managers, which is even worse, politically.
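The shape of “detect a bad release and roll it back automatically” can be as small as the sketch below – the deploy and rollback calls are hypothetical placeholders for whatever your pipeline actually does; the health-check loop is the point:

```python
# A sketch of the "deploy, watch, roll back automatically" shape. The deploy()
# and rollback() callables are hypothetical placeholders for whatever your
# pipeline actually invokes; only the health-check loop matters here.
import time
import urllib.request

HEALTH_URL = "https://example.internal/healthz"  # illustrative endpoint


def healthy(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as response:
            return response.status == 200
    except OSError:
        return False


def deploy_with_auto_rollback(deploy, rollback, checks: int = 10, interval_s: int = 30) -> bool:
    deploy()
    for _ in range(checks):
        time.sleep(interval_s)
        if not healthy(HEALTH_URL):
            rollback()  # bad release detected - put the previous version back
            return False
    return True  # release stayed healthy through the observation window
```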

Technical advancements

People may argue that I have been telling you that lacking development productivity is the fault of basically everyone but the developers, but… come on – surely we will be able to just sprinkle some AI on this and be done with it? Or use one of those fancy low-code solutions? Surely we can avoid the problem of producing software slowly by not actually writing code?

The only line of code guaranteed to be bug free is the one not written. I am all for focusing on your core business and writing only the code you need to solve the business problem at hand. Less code to write sounds like less code to review and maintain.

Now, I won’t speak to all low-code solutions, because I have tended to work in high-code(?) environments for the last decade and a bit, but the ones I have seen glimpses of look very powerful: slap a text box on a canvas and, bosh, you have a field stored in a table. The people writing applications with these platforms will become very skilled at producing these applications quickly.

Will all your software live on this platform? What happens to the legacy apps that slow you down today – will they not require some changes in the future as well? Will the teams that currently struggle not to break shared APIs fare better with strings in the textbox of a website? Are you responsible for hosting this platform or is it SaaS, and how is the data stored? Will the BI team try to break into a third-party database to ingest data, or will they accept some kind of API? What about recently written microservices that have yet to pay back their cost of development? Bin them?

Conclusion

My belief is that the quest for developer productivity comes from a desire to reduce the time between idea and code running well in production. If that lead time was drastically cut, nobody on the other side of the business is going to care what developers do with their time.

Although development productivity is affected by the care and attention applied by individual developers, given the complexities of a software development department and the constraints of the development workflow, productivity is a function of the execution of the whole department. If your teams are cross-functional and autonomous you can hold them accountable on a team level, but that requires a degree of transparency around cost that takes engineering effort to achieve.

The speed with which features come to production is only in a limited way affected by the speed with which developers write code, and if you do not address political and organisational bottlenecks, you may not see any improvements if you go low-code or AI (“vibe coding”).

Our job as older people within a software development function is to do what we can to make sure features reach the business faster, regardless of how they are implemented. As always, measure first and optimise at the bottleneck before you measure again.

How small is small?

I have great respect for the professional agile coach and scrum master community. Few people seem to systematically care for both humans and business, maintaining profitability without ever sacrificing the humans. Now, however, I will alienate vast swathes of them in one post. Hold on.

What is work in software development?

Most mature teams do two types of work: they look after a system and make small changes to it – maintenance, keeping the lights on – and they build new features that the business claims it wants. It is common to get an army in to build a new platform and then allow the teams to naturally attrit as transformational project work fizzles out, contractors leave, and the most marketable developers either get promoted out of the team or get better offers elsewhere. A small stream of fine-adjustment changes keeps coming in to what is effectively a core team of maintenance developers that remains. Eventually this maintenance development work gets outsourced abroad.

A better way is to have teams of people that work together all day, every day. Don’t expand and contract or otherwise mess with teams; hire carefully from the beginning and keep new work flowing into teams rather than restructuring after a piece of work is complete. Contract experts to pair with the existing team if you need to teach the team new technology, but don’t get mercenaries to do the work. It might be slower, but if you have a good team that you treat well, odds are better they’ll stay, they will develop new features in a way that can be maintained better in the future, and they will be less likely to cut corners, as any shortcuts will blow up in their own faces shortly after.

Why do we plan work?

When companies spend money on custom software development, a set of managers at very high positions within the organisation have decided that investing in custom software is a competitive advantage, and several other managers think they are crazy to spend all this money on IT.

To mollify the greater organisation, there is some financial oversight and budgeting. Easily communicated projects are sold to the business “we’ll put a McGuffin in the app”, “we’ll sprinkle some AI on it” or similar, and hopefully there is enough money in there to also do a bit of refactoring on the sly.

This pot of money is finite, so there is strong pressure to keep costs under control, don’t get any surprise AWS bills or middle managers will have to move on. Cost runaway kills companies, so there are legitimately people not sleeping at night when there are big projects in play.

How do we plan?

Problem statement

Software development is very different from real work. If you build a physical thing, anything from a phone to a house, you can make good use of a detailed drawing describing exactly how the thing is constructed and the exact properties of the components that are needed. If you are to make changes or maintain it, you need these specifications. It is useful both for construction and maintenance.

If you write the exact same piece of software twice, you have some kind of compulsive issue, you need help. The operating system comes with commands to duplicate files. Or you could run the compiler twice. There are infinite ways of building the exact same piece of software. You don’t need a programmer to do that, it’s pointless. A piece of software is not a physical thing.

Things change, a lot. Fundamentally – people don’t know what they want until they see it, so even if you did not have problems with technology changing underneath your feet whilst developing software, you would still have problems with the fact that fundamentally people did not know what they wanted back when they asked you to build something.

The big issue, though, is technology change. Back in the day, computer manufacturers would have the audacity to evolve the hardware in ways that made you have to re-learn how to write code. High-level languages came along, and now instead we live with Microsoft UI frameworks or Javascript frameworks that are mandatory one day and obsolete the next. Things change.

How do you ever successfully plan to build software, then? Well… we have tried to figure that out for seven decades. The best general concept we have arrived at so far is iteration, i.e. deliver small chunks over time rather than to try and deliver all of it at once.

The wrong way

One of the most well-known but misunderstood papers is Managing The Development of Large Software Systems by Dr Winston W Royce1, which launched the concept of Waterfall.

Basically, the software development process in waterfall is outlined as distinct phases:

  1. System requirements
  2. Software requirements
  3. Analysis
  4. Program design
  5. Coding
  6. Testing
  7. Operations

For some reason people took this as gospel for several decades, despite the fact that the core, fundamental problem that dooms the process to failure is outlined right below figure 2 – the pretty waterfall illustration of the phases above that people keep referring to. It says:

I believe in this concept, but the implementation described above is risky and invites failure. The problem is illustrated in Figure 4. The testing phase which occurs at the end of the development cycle is the first event for which timing, storage, input/output transfers, etc., are experienced as distinguished from analyzed. These phenomena are not precisely analyzable. They are not the solutions to the standard partial differential equations of mathematical physics for instance. Yet if these phenomena fail to satisfy the various external constraints, then invariably a major redesign is required. A simple octal patch or redo of some isolated code will not fix these kinds of difficulties. The required design changes are likely to be so disruptive that the software requirements upon which the design is based and which provides the rationale for everything are violated. Either the requirements must be modified, or a substantial change in the design is required. In effect the development process has returned to the origin and one can expect up to a 100-percent overrun in schedule and/or costs.

Managing The Development of Large Software Systems, Dr Winston W Royce

Reading further, Royce realises that a more iterative approach is necessary as pure waterfall is impossible in practice. His legacy however was not that.

Another wrong way – RUP

Rational Rose and the Rational Unified Process were the ChatGPT of the late nineties and early noughties. Basically, if you just made a UML drawing in Rational Rose, it would give you a C++ program that executed. It was magical. Before PRINCE2 and SAFe, everyone was RUP certified. You had loads of planning meetings, wrote elaborate Use Cases on index cards, and eventually you had code. It sounds like waterfall with better tooling.

Agile

People realised that when things are constantly changing, starting with a fixed plan and staying on it even when you knew the original goal was unattainable or undesirable was doomed. Loads of attempts were made, but one day some people got together to actually have a proper go at defining what should be the true way going forward.

In February 11-13, 2001, at The Lodge at Snowbird ski resort in the Wasatch mountains of Utah, seventeen people met to talk, ski, relax, and try to find common ground—and of course, to eat. What emerged was the Agile ‘Software Development’ Manifesto. Representatives from Extreme Programming, SCRUM, DSDM, Adaptive Software Development, Crystal, Feature-Driven Development, Pragmatic Programming, and others sympathetic to the need for an alternative to documentation driven, heavyweight software development processes convened.

History: The Agile Manifesto

So – everybody did that, and we all lived happily ever after?

Short answer: No. You don’t get to just spend cash, i.e. have developers do work, without making it clear what you are spending it on, why, and how you intend to know that it worked. Completely unacceptable, people thought.

The origins of tribalism within IT departments have been done to death in this blog alone, so for once it will not be rehashed. Suffice to say, staff are often organised according to their speciality rather than in teams that produce output together. Budgeting is complex, and there can be political competition that is counterproductive for IT as a whole or for the organisation as a whole.

Attempts at running a midsize to large IT department that develops custom software have been made in form of Scaled Agile Framework (SAFe), DevOps and SRE (where SRE is addressing the problem backwards, from running black-box software using monitoring, alerts, metrics and tracing to ensure operability and reliability of the software).

As part of some of the original frameworks that came in with the Agile Manifesto, a bunch of practices became part of Agile even though they were not “canon”, such as User Stories, which were said to be a few words on an index card, pinned to a noticeboard in the team office, just wordy enough to help you discuss a problem directly with your user. This of course eventually started to develop back into the verbose RUP Use Cases of yesteryear – but “agile, because they are in Jira” – and rules had to be created for the minimum amount of information needed on there to successfully deliver a feature. In the Toyota Production System that originated Scrum, Lean Software Development and Six Sigma (sadly, an antipattern), one of the key lessons is that the ideal batch size is 1, and generally to make smaller changes. This explosion in the size of the user story is symptomatic of the remaining problems in modern software development.

Current state of affairs

So what do we do?

As you can surmise if you read the previous paragraphs, we did not fix it for everybody, we still struggle to reliably make software.

The story and its size problems

The part of this blog post that will alienate the agile community is coming up. The units of work are too big. You can’t release something that is not a feature. Something smaller than a feature has no value.

If you work next to a normal human user, and they say – to offer an example – “we keep accidentally clicking on this button, so we end up sending a message to the customer too early, we are actually just trying to get to this area here to double-check before sending”, you can collaboratively determine the correct behaviour, make it happen, release in one day, and it is a testable and demoable feature.

Unfortunately requirements tend to be much bigger and less customer-facing. Like “department X want to start seeing the reasons for turning down customer requests in their BI tooling” being a feature, and then a “product backlog item” could be that service A and service B need to post messages on a message bus at various points of the user flow, identifying those reasons.

Iterating over and successfully releasing this style of feature to production is hard.

Years ago I saw Allen Holub speaking at SD&D in London, and his approach to software development is very pure. It is both depressing and enlightening to read the flamewars that erupt in his mentions on Twitter when he explains how to successfully make and release small changes. People scream and shout that it is not possible to do it his way.

In the years since, I have come to realise that nothing is more important than making smaller units of work. We need to make smaller changes. Everything gets better if / when we succeed. It requires a mindset shift, a move away from big detailed backlogs to smaller changes, discussed directly with the customer (in the XP sense, probably some other person in the business, or another development team). To combat the uncertainty, it is possible to mandate some kind of documentation update (graph? chart?) as part of the definition of done. Yes, needless documentation is waste, but if we need to keep a map over how the software is built, as long as people actually consult it, it is useful. We don’t need any further artefacts of the story once the feature is live in production anyway.

How do we make smaller stories?

This is the challenge for our experts in agile software development. Teach us, be bothered, ignore the sighs of developers that still do not understand, the ones raging in Allen Holub’s mentions. I promise, they will understand when they see it first hand. Daily releases of bug free code. They think people are lying to them when they hear us talk about it. When they experience it though, they will love it.

When every day means a new story in production, you also get predictability. As soon as you are able to split incoming or proposed work into daily chunks, you also get the ability to forecast – roughly, better than most other forms of estimate – and since you deliver the most important new thing every day, you give the illusion of value back to those that pay your salary.