Death of Enterprise C# as we know it?

The constant battle between getting paid and reaching a wide audience has come to a head for a number of formerly open source products in the .NET space, as they move towards closed source, paid licensing in future versions. Is this the end of the world for enterprise C# in its current shape? If not – will we be OK?

Enterprise applications retrospective

I will tell this historical recapitulation as if it were all fact, but of course these are just my recollections, and I can't be bothered to verify anything at the moment.

When I got started in the dark ages, developing software using Microsoft technologies was not something real developers did unless forced. Microsoft developers used VB, and were primarily junior office workers who had graduated from writing Excel macros or Access forms apps. The pros used C++. Or, more truthfully, C/C++ – as people mostly wrote C with the occasional class sprinkled in. The obvious downside with the C part is that humans are not diligent enough to handle manual memory management, and C++ had yet to develop all the fancy memory safety it has now.

The solution came from Sun Microsystems. They invented Java, a language that was supposed to solve everything. It offered managed memory, i.e. it took much of the responsibility for memory management away from the developer. It also did not compile down to machine code directly; it compiled down to an intermediate language that could then quickly be interpreted (or just-in-time compiled) into machine code by a runtime. This abstraction layer made it possible to write Java once and run it on any platform. That was attractive to many vendors of complex software such as database engines, who wanted to compete in the Workstation market – Serious Computer Hardware for engineers and the like – because all of a sudden they could sell the same software onto multiple hardware platforms in a space where no single platform was ever going to be very numerous.

This was an outrageous success. Companies adopted Java immediately, there were bifurcations of the market, and open source Java Development Kits came about. Oracle got into the fray. C++ stopped being the default language for professional software development, and as a JVM-based ex-colleague of mine remarked, “it became the Cobol of our time”.

Microsoft saw this and wrote their own Java implementation for Windows (Visual J++), but of course they used the embrace-extend-extinguish playbook and played fast and loose with the specification. Sun knew what was coming, so they immediately rounded up their lawyers.

Hurt and rejected, Microsoft backed away from joining the JVM family and instead hired Turbo Pascal creator Anders Hejlsberg to design a new language that would be a bit more grown up than VB but also friendlier to beginners than C++. Basically, the brief was to rip off Java, which is evident if you look at the .NET Base Class Library today.

Now, the reason for this retrospective is to explain cultural context. Windows came from DOS, a single-user operating system, which means that Windows application security was deeply flawed – technically at first, but culturally for the longest time. Everybody runs with administrator privileges in a way they would never daily-drive as root on Linux, to the point that Microsoft had to introduce an extra annoyance layer, UAC, on top of normal Windows security, because even if they implemented an equivalent to sudo, there would be no way culturally to get people to stop granting their user membership in the Local Administrators group.

In the same way, the pipeline from BASICA in DOS and VBA in Office that fed people to VB, which fed people to C# – always with the goal of a low barrier to entry and beginner-friendly documentation – has meant that there is an enormous volume of really poor engineering practice in the .NET developer space. If you google “ASP.NET C# login screen” you will have nightmares when you read the accepted answers.

Culture in the .NET world vs Java

In the Java space meanwhile, huge strides were made. Before .NET developers even knew what unit tests were, enterprise Java had created test frameworks, runners, continuous integration, object-relational mappers, and all kinds of complex distributed systems that were the backbone of the biggest organisations on the internet. Not everything was a hit – Java browser applets died a quick death – but while Java development houses were exploring domain-driven design and microservices, Microsoft’s documentation still taught beginners about three-tier applications using BizTalk, WCF and Workflow Foundation.

Perhaps because Oracle and Sun were too busy suing everybody, perhaps because of a better understanding of what an ecosystem is, or perhaps through blind luck, a lot of products were created on the JVM and allowed to live on. Some even saw broad adoption elsewhere (Jenkins and TeamCity for build servers, just to name a couple), while .NET never really grew out of enterprise line-of-business CRUD. Microsoft developers read MSDN.com and possibly ventured to ComponentSource, but largely stayed away from the wider ecosystem. Just like in the days when you could never be fired for buying IBM, companies were reluctant to buy third party components. In the Windows and .NET world, only a few third party things ever really made it. ReSharper and Red Gate’s SQL Toolbelt can sell licenses the way almost no-one else can, but with every new version of Visual Studio, Microsoft absorbs more ReSharper features into the default product in a way that I don’t think would happen in the Java space. For one thing, there is no standard Java IDE – there are a few popular options, but no anointed main IDE that you must use (although if I had to attempt Java programming I’d use IntelliJ IDEA out of brand loyalty, but that’s just me) – while Microsoft definitely always has had an ironclad first party grip over its ecosystem.

Open Source seeping into .NET

There was an Alt.NET movement of contrarians that ported some popular Java libraries to .NET to try and build grown-up software on .NET, but before .NET Core it really seemed like nobody had ever had to consider performance when developing on .NET, and those efforts kept dying slow deaths. Sebastien Lambla created OpenWrap to provide a package manager for .NET developers; enthusiasts within Microsoft created NuGet, which basically stole the concept. German enthusiasts created Paket in order to deal with dependency graphs more successfully, and there was a lot of animosity before Microsoft finally stopped actively sabotaging it. After NuGet saw broad adoption, and .NET developers heard about the previous decade’s worth of inventions in the Java space, there was an explosion of growth. Open source got in under the radar and allowed .NET developers to taste the rainbow without going through procurement.

Microsoft developers heard about Dependency Injection. We were so far behind Java’s container wars that by the time Java developers joked about being caffeine-driven machines that turn XML into stack traces, our configuration files – if we even used them – would most likely have been JSON, because XML had fallen out of the mainstream between them and us discovering overcomplicated DI container configuration. However, since we did have proper generics – reified, unlike Java’s type-erased ones – we were able to use fluent interfaces to overcomplicate our DI instead of configuration files. This was another awkward phase that we will get back to later.

We got to hear about unit tests. NUnit was a port of Java’s JUnit test framework, and Microsoft included MSTest with Visual Studio, but it was so awful that NUnit stayed strong. Out of spite, Microsoft pivoted to promoting xUnit, so that even if MSTest always remained a failure, at least NUnit would suffer. xUnit is good, so it has defeated the rest.

Meanwhile, Sun had been destroyed, Oracle had taken stewardship of Java, C# had eclipsed Java in terms of language design, and the hippest developers had abandoned Java for Ruby, Scala, Python and JavaScript(!). Kids were changing the world using Ruby on Rails and Node JS. Microsoft were stuck with an enormous cadre of enterprise LOB app developers, and blogs titled “I’m leaving .NET” were trending.

The Force Awakens

Microsoft had internal problems with discipline. Some of the rogue agents that created NuGet proceeded to cause further trouble, and eventually, after decades of battles with the legal department, they managed to release some code from within Microsoft as open source, which was a massive cultural shift.

These rogue agents at Microsoft had seen Ruby on Rails and community efforts like FubuMVC and became determined to retrofit a model-view-controller paradigm onto legacy ASP.NET’s page rendering model. The improvement of Razor views over old ASPX pages was so great that adoption was immediate, and this – the ASP.NET MVC bit – was in practice the first thing that was made open source.

The Ruby web framework Sinatra spawned a new .NET thing called NancyFX that offered extremely lightweight web applications. Its creator Andreas Håkansson contributed to standardising a .NET web server interface called OWIN, but later developments would see Microsoft step away from this standard.

In this climate, although I am hazy about the exact reasons why, some troublemakers at Microsoft decided to build a cross-platform reimplementation of .NET, called .NET Core. Its first killer app was ASP.NET Core, a new high performance web framework that was supposed to eat Node.js’s lunch. It featured native support for middleware pipelines and would eventually move towards endpoint routing, which would have consequences for community efforts.

After an enormous investment in the new frameworks, and with contributions from the general public, .NET Core became good enough to use in anger, and it was a lot faster than anything Microsoft had ever built before. Additionally, you could now run your websites on Linux and save a fortune. Microsoft were just as happy to sell you compute on Azure to run Linux. It felt like a new world. While on the legacy .NET Framework the Container Wars had settled into trench warfare and a stalemate between Castle Windsor, Ninject and Unity among others, Microsoft created peace in our time by simply building a rudimentary DI system into .NET Core, de facto killing off one of the most successful open source ecosystems in .NET.
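
For those who never saw the before-times, the built-in container boils down to roughly this – a minimal console-app sketch using Microsoft.Extensions.DependencyInjection, with a made-up IGreeter service standing in for your own types:

using System;
using Microsoft.Extensions.DependencyInjection;

var services = new ServiceCollection();
services.AddSingleton<IGreeter, Greeter>();            // register interface -> implementation

using var provider = services.BuildServiceProvider();
var greeter = provider.GetRequiredService<IGreeter>(); // resolve, with any dependencies wired up
Console.WriteLine(greeter.Greet("world"));

// Made-up service, purely for illustration
public interface IGreeter { string Greet(string name); }
public class Greeter : IGreeter { public string Greet(string name) => $"Hello, {name}"; }

Rudimentary, yes – no conventions, no interceptors, no modules – but good enough for most line of business apps, which is exactly why the third party containers lost their audience.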

A few years into ASP.NET Core being a thing, the rogue agents within Microsoft introduced minimal APIs, which – thanks to middleware and endpoint routing – basically struck a blow against NancyFX, and also held back the F# web framework Giraffe for about a year, in the hope that it too would support OpenAPI/Swagger API documentation.
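
For reference, a minimal API today is roughly this – a sketch of an entire Program.cs in an ASP.NET Core project, nothing more:

// Program.cs – minimal API sketch
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Endpoint routing maps a route pattern straight onto a handler delegate
app.MapGet("/hello/{name}", (string name) => Results.Ok(new { Greeting = $"Hello, {name}" }));

app.Run();

You can see why NancyFX and friends found it hard to argue for their continued existence once this shipped in the box.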

After we finally caught up on the SOLID principles that Java developers had been talking about for a decade, we knew we had to separate our concerns and use all the patterns in the Gang of Four. Thankfully MediatR came about, so we could separate our web endpoints from the code that actually did the thing: map the incoming request onto a DTO, then pass that request DTO to MediatR, which would pick and execute the correct handler. Nice. The mapping would most commonly be done using AutoMapper. Both libraries are loved and hated in equal measure.
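
For anyone who has not had the pleasure, the pattern looks roughly like this – a condensed sketch with made-up types, assuming an ASP.NET Core project with the usual MediatR and AutoMapper registrations in place:

using System;
using System.Threading;
using System.Threading.Tasks;
using AutoMapper;
using MediatR;
using Microsoft.AspNetCore.Mvc;

// Made-up request/response types, purely for illustration
public record CreateOrderRequest(string CustomerId, decimal Amount);
public record CreateOrderCommand(string CustomerId, decimal Amount) : IRequest<OrderDto>;
public record OrderDto(Guid Id, decimal Amount);

// MediatR picks this handler based on the command type
public class CreateOrderHandler : IRequestHandler<CreateOrderCommand, OrderDto>
{
    public Task<OrderDto> Handle(CreateOrderCommand command, CancellationToken cancellationToken)
    {
        // ...the code that actually does the thing...
        return Task.FromResult(new OrderDto(Guid.NewGuid(), command.Amount));
    }
}

// The web endpoint maps the incoming request to a command and hands it to MediatR
[ApiController]
[Route("orders")]
public class OrdersController : ControllerBase
{
    private readonly IMediator _mediator;
    private readonly IMapper _mapper; // assumes a Profile with CreateMap<CreateOrderRequest, CreateOrderCommand>()

    public OrdersController(IMediator mediator, IMapper mapper)
    {
        _mediator = mediator;
        _mapper = mapper;
    }

    [HttpPost]
    public async Task<OrderDto> Post(CreateOrderRequest request) =>
        await _mediator.Send(_mapper.Map<CreateOrderCommand>(request));
}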

Before DevOps had become trendy, developers in the .NET space were using Team City to run tests on code merged into source control, and would have the build server produce a deployable package upon all tests being green, and when it was time to release, an ops person would either deploy it to production directly or approve access to a physical machine where the developer would carefully deploy the new software. If you had a particularly Microsoft-loyal CTO, you would at this time be running TFS as a build server, and use Visual Studio’s built in task management to track issues in TFS, but most firms had a more relaxed environment with Team City and Jira.

When the DevOps revolution came, Octopus Deploy would allow complex deployment automation directly out of Team City, enabling IT departments to do Continuous Deployment if they chose to. For us fortunate enough to only click buttons in Octopus Deploy it felt like the future, but the complexities of keeping those things going may have been vast.

I’ve looked at Cloud, from both sides now

Microsoft Azure has gone through a bunch of iterations. Initially, the goal was to compete with Amazon Web Services. Microsoft offered virtual machines and a few abstractions, like Web and Worker Roles and Blob Storage. Oh, and it was called Windows Azure to begin with.

A Danish startup called AppHarbor offered a .NET version of Heroku, i.e. a cloud application platform for .NET. This too felt like the future, and AppHarbor moved to the west coast of the USA and got funding.

Microsoft realised that this was a brilliant idea and created Azure Websites, offering PaaS within Azure. This was a shot below the waterline for AppHarbor, which finally shut down on 5 December 2022. After several iterations this is now known as Azure App Service, and it is more capable than AWS Lambda, which in turn is superior to Azure Functions.

Fundamentally, Azure was not great in the beginning. It exposed several shortcomings within Windows – it was basically the first time anybody had attempted to use Windows in anger at that scale – so Microsoft were shocked to discover all the problems. After a tremendous investment in Windows Server, as well as, to a certain extent, giving up and running stuff like software-defined networking on Linux, Azure performance has improved.

Microsoft was not satisfied and wanted control over the software development lifecycle everywhere. More investment went into TFS, wrestling it into something that could be put in the cloud. To hide its origins they renamed it Azure DevOps. You could define build and deployment pipelines as YAML files, which finally was an improvement over TeamCity, and while the deployment pipelines were not as good as Octopus Deploy, they were good enough that people abandoned TeamCity, Octopus Deploy and to some extent Jira, so that code, build pipelines and tickets would all go on to live in TFS/Azure DevOps.

To summarise, .NET developers are inherently suspicious of software that comes from third parties. Only a select few pass the vibe check. Once they have reached that point of success they become too successful and Microsoft turns on them.

Current State of Affairs

By the end of last year, I would say your run-of-the-mill C# house would build code in Visual Studio – hopefully with ReSharper installed – and keep source code, run tests, and track tickets in Azure DevOps. The code would use MediatR to dispatch commands to handlers, and AutoMapper to translate requests from the web front-end for the handlers and to format responses back out to the web endpoint. Most likely data would be stored in an Azure-hosted SQL database, possibly with Cosmos DB for non-relational storage and Redis for caches. It is also highly likely that unit tests used Fluent Assertions and Moq for mocks, because people like those.

Does everybody love this situation? Well, no. Tooling has improved so vastly over the years that you can easily navigate your way around your codebase by keyboard shortcut, or by a keyboard shortcut whilst clicking on an identifier. Except when you are using MediatR and AutoMapper, it all becomes complicated. Looking at AutoMapper mapping files, you wonder to yourself whether it wouldn't have been more straightforward to map the classes manually and have some unit tests to prove that they still work.
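
To be concrete, the hand-rolled alternative is not exactly arduous – a sketch reusing the same made-up types from the earlier example:

using Xunit;

// Hand-written mapping: boring, explicit, and go-to-definition takes you straight to it
public static class OrderMappings
{
    public static CreateOrderCommand ToCommand(this CreateOrderRequest request) =>
        new(request.CustomerId, request.Amount);
}

// ...and a test that proves the mapping still works
public class OrderMappingTests
{
    [Fact]
    public void ToCommand_copies_all_fields()
    {
        var command = new CreateOrderRequest("cust-1", 99.95m).ToCommand();

        Assert.Equal("cust-1", command.CustomerId);
        Assert.Equal(99.95m, command.Amount);
    }
}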

Fluent syntax, as mentioned a couple of places above, was a fad in the early 2010s. You wrote a set of interfaces with a set of methods on them that in turn returned objects with other interfaces on them, and you used these interfaces to build grammars. Basically, a fluent assertion library could offer a set of extension methods that would allow you to build assertion expressions directly off the back of a variable, like:

var result = _sut.CallMethod(Value);
result.Should().Not().BeNull(); 

The problem with fluent interfaces is that the grammar is not standardised, so you would type Is.NotNull, or Is().Not().Null, or any combination of the above and after a number of years, it all blends together. If you happen to have both a mocking library and an assertion library within your namespace, your IntelliSense will suggest all kinds of extension methods when you hit the full stop, and you are never quite sure if it is a filter expression for a mocking library or a constraint for an assertion library.
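
To illustrate the blending, here is a sketch of three grammars for roughly similar intent living in one test class – NUnit constraints, Fluent Assertions and a Moq argument filter – using a made-up IOrderRepository:

using FluentAssertions;
using Moq;
using NUnit.Framework;

public class GrammarSoupTests
{
    public interface IOrderRepository { void Save(string order); }

    [Test]
    public void Three_grammars_for_similar_ideas()
    {
        var repositoryMock = new Mock<IOrderRepository>();
        repositoryMock.Object.Save("order-1");

        string result = "something";

        Assert.That(result, Is.Not.Null);                                      // NUnit constraint grammar
        result.Should().NotBeNull();                                           // Fluent Assertions grammar
        repositoryMock.Verify(r => r.Save(It.IsAny<string>()), Times.Once());  // Moq filter grammar
    }
}

Three dialects, one full stop away from each other in IntelliSense.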

The Impending Apocalypse

Above, I described the difficulty of threading the needle between getting any traction in the .NET space and not becoming so successful that Microsoft decides to erase your company from the map. Generally, the audacity of open source contributors to want access to food and shelter is severely frowned upon on the internet. How dare they, is the general sentiment. After the success of NuGet and the ensuing avalanche of modern open source tooling becoming available to regular folks working the salt mines of .NET, GitHub issues have been flooded by entitled professional developers making demands of open source maintainers without any reciprocity in the form of a support contract or source code contributions. In fairness to these entitled developers, they are pawns in a game where they have little agency over how they spend time or money: contributing a fix to an open source library could have detrimental effects on one’s employment status, and there is no way to get approval to buy a support contract without having a senior manager lodge a ticket with procurement and endure the ensuing political consequences. To the open source maintainer, though, it is still a nightmare, regardless of people’s motivations.

A year ago, Moq shocked the developer community by including an obfuscated piece of tracking software, and more recently Fluent Assertions, MediatR, AutoMapper and MassTransit have announced they will move towards a paid license.

Conclusion

What does this mean for everybody? Well… it depends. For an enterprise, the procurement dance starts, so heaven knows when there will be an outcome from that. Moving away from AutoMapper and MediatR is most likely too dramatic to consider, as by necessity these things become choke points where literally everything in the app goes through mapping or dynamic dispatch. However, there are alternatives. Most likely the open source versions will be forked and maintained by others, but given the general state of .NET open source, it is probably more likely that instead of almost market-wide adoption, you will see a much more selective user base going forward. I want all programmers to be able to eat, and I wish Jimmy Bogard all the success going forward. He has had an enormous impact on the world of software development and deserves all the success he can get.

Ever since watching 8 Lines of Code by Greg Young I have been a luddite when it comes to various forms of magic, and a proponent of manual DI and mapping. My advice would be to just rip this stuff out and inject your handler through the constructor the old-fashioned way. Unless you have very specific requirements, I can almost guarantee that you will find it more readable and easier to maintain. Also, like Greg says – the friction is there for a reason. If your project becomes too big to understand without magic, that’s a sign to divide your solution into smaller deployable units.
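
In practice that means something like this – the controller from the earlier sketch rewritten so the hypothetical handler is injected and called directly, no dispatcher in between:

using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

// Same made-up types as before – go-to-definition now lands exactly where the work happens
[ApiController]
[Route("orders")]
public class OrdersController : ControllerBase
{
    private readonly CreateOrderHandler _handler; // or an interface, if you need to fake it in tests

    public OrdersController(CreateOrderHandler handler) => _handler = handler;

    [HttpPost]
    public Task<OrderDto> Post(CreateOrderRequest request, CancellationToken cancellationToken) =>
        _handler.Handle(request.ToCommand(), cancellationToken);
}

// Registration in Program.cs, using the built-in container:
// builder.Services.AddScoped<CreateOrderHandler>();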

The march continues, with the teams at Microsoft usurping more and more features from the general .NET ecosystem and including them in future versions of .NET, C# or Visual Studio, squeezing the life out of smaller companies along the way.

I would like to see more situations like Paket and xUnit, where Microsoft have stayed their hand and allowed the thing to exist. A coexistence of multiple valid solutions would be healthier for all parties. I do not know how to bring that about. Developers in the .NET space remain loyal to the first party, with a very small number of exceptions. I think it is necessary to build a marketplace where developers can buy or subscribe to software tools and libraries, one that also offers software bill-of-materials information so that managers have some control over which licenses are in play, and perhaps gives developers a budget for how much they can spend on tools. That way enterprise devs can buy the tools they need, with the transparency that compliance requires and controls that allow an employer to limit spending, whilst simultaneously allowing tool developers to feed their kids. I mean, Steam exists, and that’s a marketplace on a Microsoft platform. Imagine a similar thing but for NuGet packages.

Barriers – trade and otherwise

The last few days of highly retro American foreign policy and the ensuing global market chaos have made everyone aware of trade barriers, and will make everyone fully grasp the consequences of a trade war, the same way the pandemic allowed us to re-learn the lessons of 1919 and the Spanish flu.

Background

To take our minds off the impending doom, I will instead address other barriers, ones that are more related to technology and software, but still cause real world problems for real people.

There are a number of popular websites that enumerate falsehoods programmers believe about names, addresses and even time zones. I do not have the stamina currently to produce an equivalent treatise on falsehoods people believe about residency, but I can manage to put together a bit of a moan on the subject at least.

Identity

In some countries there are surrogate keys that can be used to identify a person. If these are ubiquitous, you come to rely on everyone having one and may require it as part of customer onboarding. Do remember that tourists exist – as do legal residents who somehow still do not have this magical number. Countries that have had reason to become paranoid about dodgy people buying train tickets online, such as Spain, may demand an identity number. If you find yourself in such a situation, try to accept an alternative such as a passport number.

In other countries, where there is no surrogate key – you are your address, essentially – you will instead need to consider how you handle, well, tourists again, but also students who may still reside with their parents. Also, make it easy to update people’s addresses, as they inevitably have to move in our current neo-feudal society of landed gentry and renter class.

Residence vs Citizenship

In Europe at least, it has been relatively common for people to live permanently in a country other than one of which they are a citizen. Many countries put up barriers to becoming a citizen, to ensure that only those who care become naturalised and vote on important matters. Other countries, like Sweden, hand out citizenship in cereal boxes. Regardless – just because someone lives in a country and pays taxes there does not mean they are a citizen of that country. If you ask for a “country”, please be explicit about whether you mean residency or nationality. Also, people sometimes have multiple citizenships. Does this matter to you? Do you need this information? Really? There are a lot of data privacy concerns here, so please also consider the purpose and retention of this data.
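
To make that concrete, a customer model might keep the two concepts apart – a hypothetical sketch, with the field names made up for illustration:

using System.Collections.Generic;

// Residency and nationality answer different questions, so model them separately –
// and only collect what you have an actual purpose (and retention policy) for.
public record Customer(
    string Name,
    string CountryOfResidence,            // where they live and pay taxes
    IReadOnlyList<string> Nationalities); // may legitimately contain more than one entry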

As KYC becomes a bigger deal, and countries invent new barriers to living and working abroad, you as – let’s say – the vendor or implementer of systems involved in recruitment will have to pay attention to how things really work, to ensure you gatekeep correctly and only refer people who can legally work in the jurisdiction you are serving. This seems onerous, so you will immediately partner with an identity and verification firm that deals with it for you. Great – as long as that supplier is up to date.

As an example – if Her Majesty (RIP) bestowed Settled Status upon you as an EU citizen and UK resident, you will not be eligible for a biometric residence permit from the Home Office. This is still a surprise to many embassies and IT systems the world over. And no, you can’t blag one from the Home Office. The IT system or embassy will need to deal with share codes from the EU Settlement Scheme that prove the right to live and work in Blighty. Again, pay attention.

Payments

Unfortunately, the very real threat of terror attacks and the less gruesome but more prevalent risk of fraud have meant that what used to be a stupidly simple experience – buying things online – has become more complex.

Here you are usually bound by whatever measures your financial rails or your regulator put in place to reduce risk, but at least consider whether you need to require the card to be registered to the shipping address, or whether you, as a UK company, really need a bank account number and sort code to complete payment details, as these are not printed on foreign debit cards. You will need them for direct debit of course, but always? Think about it, is all I ask.

I tried to buy car insurance when I was new in the UK, and I had to go to a brick-and-mortar insurer so that I could pay with my Swedish Visa debit card, as all online businesses required UK debit cards. It is hard to get a bank account as a n00b in any country, and that degree of difficulty is something other businesses rely on – if you can show a bank statement mailed to your address, that’s prime identity document stuff in the UK.

Why all of this friction? Why?

Surely, someone has solved all of this. Yes, but I have never been a resident of Estonia, so I cannot tell you how utopia works.

I can warn you of what has happened in Sweden. Everyone – well, not everyone, but most people – has an electronic certificate called Bank ID. You can use it to pay, to do your taxes, to loan money, to buy a house, to start a business. Anything. In the high trust society of old you could just punch in your surrogate key, the personnummer, and you would without any further verification be subscribed to a newspaper. Both the newspaper and the invoice would arrive at your registered address. Yes the state would tell the newspaper where you live.

With BankID, you will now get a follow-up question and need to verify in an app. So people are creating shell companies, taking out loans, buying property and transferring funds on behalf of other people, using social engineering to make sure the app gets clicked.

There are calls for slowing down bureaucracy, making people show up in person to do some of this stuff due to the proliferation of fake identities issued through misuse of proper channels, hijacked cryptographic identities and malevolent automation. Some old fashioned bureaucracy can help reduce fraud by stalling, basically. Consider abuse as you design systems.

How small is small?

I have great respect for the professional agile coach and scrum master community. Few people seem to systematically care for both humans and business, maintaining profitability without ever sacrificing the humans. Now, however, I will alienate vast swathes of them in one post. Hold on.

What is work in software development?

Most mature teams do two types of work: they look after a system and make small changes to it – maintenance, keeping the lights on – and they build the new features that the business claims it wants. It is common to get an army in to build a new platform and then allow the team to naturally attrit as the transformational project work fizzles out, contractors leave, and the most marketable developers either get promoted out of the team or get better offers elsewhere. A small stream of fine-adjustment changes keeps coming in to the core of maintenance developers that effectively remains. Eventually this maintenance work gets outsourced abroad.

A better way is to have teams of people that work together all day every day. Don’t expand and contract or otherwise mess with teams; hire carefully from the beginning and keep new work flowing into teams rather than restructuring after a piece of work is complete. Contract experts to pair with the existing team if you need to teach the team new technology, but don’t get mercenaries to do the work. It might be slower, but if you have a good team that you treat well, odds are better that they’ll stay, and they will build new features so that they can be maintained better in the future, and they will be less likely to cut corners, as any shortcuts will blow up in their own faces shortly afterwards.

Why do we plan work?

When companies spend money on custom software development, a set of managers in very high positions within the organisation have decided that investing in custom software is a competitive advantage, while several other managers think they are crazy to spend all this money on IT.

To mollify the greater organisation, there is some financial oversight and budgeting. Easily communicated projects are sold to the business “we’ll put a McGuffin in the app”, “we’ll sprinkle some AI on it” or similar, and hopefully there is enough money in there to also do a bit of refactoring on the sly.

This pot of money is finite, so there is strong pressure to keep costs under control, don’t get any surprise AWS bills or middle managers will have to move on. Cost runaway kills companies, so there are legitimately people not sleeping at night when there are big projects in play.

How do we plan?

Problem statement

Software development is very different from real work. If you build a physical thing, anything from a phone to a house, you can make good use of a detailed drawing describing exactly how the thing is constructed and the exact properties of the components that are needed. If you are to make changes or maintain it, you need these specifications. It is useful both for construction and maintenance.

If you write the exact same piece of software twice, you have some kind of compulsive issue, you need help. The operating system comes with commands to duplicate files. Or you could run the compiler twice. There are infinite ways of building the exact same piece of software. You don’t need a programmer to do that, it’s pointless. A piece of software is not a physical thing.

Things change, a lot. Fundamentally, people don’t know what they want until they see it, so even if you did not have problems with technology changing underneath your feet whilst developing software, you would still have the problem that people did not know what they wanted back when they asked you to build something.

The big issue, though, is technology change. Back in the day, computer manufacturers would have the audacity to evolve the hardware in ways that made you re-learn how to write code. High level languages came along, and now instead we live with Microsoft UI frameworks or JavaScript frameworks that are mandatory one day and obsolete the next. Things change.

How do you ever successfully plan to build software, then? Well… we have tried to figure that out for seven decades. The best general concept we have arrived at so far is iteration, i.e. deliver small chunks over time rather than to try and deliver all of it at once.

The wrong way

One of the most well-known but misunderstood papers is Managing The Development of Large Software Systems by Dr Winston W Royce, the paper that launched the concept of Waterfall.

Basically, the software development process in waterfall is laid out in distinct phases:

  1. System requirements
  2. Software requirements
  3. Analysis
  4. Program design
  5. Coding
  6. Testing
  7. Operations

For some reason people took this as gospel for several decades, despite the fact that the core problem that dooms the process to failure is spelled out right below figure 2 – the pretty waterfall illustration of the phases above that people keep referring to. It says:

I believe in this concept, but the implementation described above is risky and invites failure. The problem is illustrated in Figure 4. The testing phase which occurs at the end of the development cycle is the first event for which timing, storage, input/output transfers, etc., are experienced as distinguished from analyzed. These phenomena are not precisely analyzable. They are not the solutions to the standard partial differential equations of mathematical physics for instance. Yet if these phenomena fail to satisfy the various external constraints, then invariably a major redesign is required. A simple octal patch or redo of some isolated code will not fix these kinds of difficulties. The required design changes are likely to be so disruptive that the software requirements upon which the design is based and which provides the rationale for everything are violated. Either the requirements must be modified, or a substantial change in the design is required. In effect the development process has returned to the origin and one can expect up to a 100-percent overrun in schedule and/or costs.

Managing The Development of Large Software Systems, Dr Winston W Royce

Reading further, Royce realises that a more iterative approach is necessary, as pure waterfall is impossible in practice. His legacy, however, was not that.

Another wrong way – RUP

Rational Rose and the Rational Unified Process were the ChatGPT of the late nineties and early noughties. Basically, if you only made a UML drawing in Rational Rose, it would give you a C++ program that executed. It was magical. Before PRINCE2 and SAFe, everyone was RUP certified. You had loads of planning meetings, wrote elaborate Use Cases on index cards, and eventually you had code. It sounds like waterfall with better tooling.

Agile

People realised that when things are constantly changing, it is doomed to fix a plan up front and stay on it even when you know that the original goal has become unattainable or undesirable. Loads of attempts were made to do better, but one day some people got together to actually have a proper go at defining what the true way forward should be.

In February 11-13, 2001, at The Lodge at Snowbird ski resort in the Wasatch mountains of Utah, seventeen people met to talk, ski, relax, and try to find common ground—and of course, to eat. What emerged was the Agile ‘Software Development’ Manifesto. Representatives from Extreme Programming, SCRUM, DSDM, Adaptive Software Development, Crystal, Feature-Driven Development, Pragmatic Programming, and others sympathetic to the need for an alternative to documentation driven, heavyweight software development processes convened.

History: The Agile Manifesto

So – everybody did that, and we all lived happily ever after?

Short answer: No. You don’t get to just spend cash, i.e. have developers do work, without making it clear what you are spending it on, why, and how you intend to know that it worked. Completely unacceptable, people thought.

The origins of tribalism within IT departments have been done to death in this blog alone, so for once it will not be rehashed. Suffice to say, staff are often organised according to their speciality rather than in teams that produce output together. Budgeting is complex, and there can be political competition that is counterproductive for IT as a whole or for the organisation as a whole.

Attempts at running a midsize to large IT department that develops custom software have been made in the form of the Scaled Agile Framework (SAFe), DevOps and SRE (where SRE addresses the problem backwards, from running black-box software using monitoring, alerts, metrics and tracing to ensure operability and reliability).

As part of some of the original frameworks that came in with the Agile Manifesto, a bunch of practices became part of Agile even though they were not “canon”, such as User Stories, which were said to be a few words on an index card, pinned to a noticeboard in the team office, just wordy enough to help you discuss a problem directly with your user. These of course eventually developed back into the verbose RUP Use Cases of yesteryear – but “agile, because they are in Jira” – and rules had to be created for the minimum amount of information needed to successfully deliver a feature. In the Toyota Production System that inspired Scrum, Lean Software Development and Six Sigma (sadly, an antipattern), one of the key lessons is that the ideal batch size is 1, and more generally to make smaller changes. This explosion in the size of the user story is symptomatic of the remaining problems in modern software development.

Current state of affairs

So what do we do?

As you can surmise if you read the previous paragraphs, we did not fix it for everybody; we still struggle to reliably make software.

The story and its size problems

The part of this blog post that will alienate the agile community is coming up. The units of work are too big. You can’t release something that is not a feature. Something smaller than a feature has no value.

If you work next to a normal human user, and they say – to offer an example – “we keep accidentally clicking on this button, so we end up sending a message to the customer too early, we are actually just trying to get to this area here to double-check before sending”, you can collaboratively determine the correct behaviour, make it happen, release in one day, and it is a testable and demoable feature.

Unfortunately, requirements tend to be much bigger and less customer facing. Like, “department X want to see the reasons for turning down customer requests in their BI tooling” being the feature, and then a “product backlog item” could be that service A and service B need to post messages on a message bus at various points in the user flow, identifying those reasons.

Iterating over and successfully releasing this style of feature to production is hard.

Years ago I saw Allen Holub speaking at SD&D in London, and his approach to software development is very pure. It is both depressing and enlightening to read the flamewars that erupt in his mentions on Twitter when he explains how to successfully make and release small changes. People scream and shout that it is not possible to do it his way.

In the years since, I have come to realise that nothing is more important than making smaller units of work. We need to make smaller changes. Everything gets better if and when we succeed. It requires a mindset shift, a move away from big detailed backlogs to smaller changes, discussed directly with the customer (in the XP sense – probably some other person in the business, or another development team). To combat the uncertainty, it is possible to mandate some kind of documentation update (a graph? a chart?) as part of the definition of done. Yes, needless documentation is waste, but if we need to keep a map of how the software is built, and people actually consult it, it is useful. We don’t need any further artefacts of the story once the feature is live in production anyway.

How do we make smaller stories?

This is the challenge for our experts in agile software development. Teach us, be bothered, ignore the sighs of developers that still do not understand, the ones raging in Allen Holub’s mentions. I promise, they will understand when they see it first hand. Daily releases of bug free code. They think people are lying to them when they hear us talk about it. When they experience it though, they will love it.

When every day means a new story in production, you also get predictability. As soon as you are able to split incoming or proposed work into daily chunks, you gain the ability to forecast – roughly, but better than most other forms of estimate – and since you deliver the most important new thing every day, you give the illusion of value back to those that pay your salary.

Surprises

TIL, TIFO or “I was today years old when”

Universal markers of new information. But it shouldn’t have been new. Documentation had been created.

I can tell you, the surprise was monumental.

In the olden days – i.e. before .NET 8 – if you ran a .NET project without specifying UseUrls() or other tricks for local development, the containerised application would listen on ports 80 and 443.

Hence the default Dockerfile generated in some existing .NET projects will contain the lines EXPOSE 80 and EXPOSE 443. All of a sudden these two commands are completely useless, as the app instead quietly listens on ports 8080 and 8081 respectively, meaning you are exposing ports that nothing listens to.

The purpose behind the change of default ports is increased security. A non-root user cannot listen on ports below 1024, and can be set up with file access restrictions that add an additional layer of security, preventing some actions even if an attacker somehow gains entry to the website worker process.
You are still free to ignore the security benefits and keep running your website as root, but unless you take action, your Docker-based apps will fail after you upgrade.

Alternatives

Embrace change – run the container as a low-privilege user

You need to change the Dockerfile to expose the ports the app listens on, so add EXPOSE statements allowing Docker to map your ports. You also need to add the instruction USER app towards the end of the Dockerfile to make sure the container runs as that user.
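
The tail end of the Dockerfile then ends up looking something like this – a sketch that assumes the stock .NET 8 ASP.NET base image (which ships with the app user) and a hypothetical MyWebApp.dll:

# final stage – new default ports, non-root user
EXPOSE 8080
EXPOSE 8081
USER app
ENTRYPOINT ["dotnet", "MyWebApp.dll"]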

Of course, since the container now exposes ports 8080 and 8081 instead, you must update whatever runtime environment you are using, such as ACA configuration or ECS task definitions, to take your non-standard ports into account.

Embrace change cautiously – run on new ports but as root

As above, you need to change the Dockerfile to expose the ports the app listens on, so add EXPOSE statements allowing Docker to map your ports – you just skip the USER instruction.

And again, since the container now exposes ports 8080 and 8081, you must update whatever runtime environment you are using, such as ACA configuration or ECS task definitions, to take your non-standard ports into account.

Reject change – embrace peace of mind

You can pretend the world is a safe place and override the default ports by setting environment variables towards the end of your Dockerfile, in the final stage:

ENV ASPNETCORE_HTTP_PORTS="80,8080" ASPNETCORE_HTTPS_PORTS="443,8081"

This leaves the status quo essentially intact.

Conclusion

I suspect option 3 is the most relevant if you have existing deployments and cloud configurations where modifying port numbers is either not available to you – because you would have needed to depend on another team making a change on their own schedule – or where the change to your runtime configuration would simply be prohibitively time consuming.

Given the limited difference between options 2 and 1, it would seem silly to stop at changing port configurations without getting the security benefit of a least-privilege runtime user. So for new projects or low complexity software estates, definitely pick option 1 and enjoy the improved security; in other cases, go with option 3 until the security benefit makes it worth spending your maintenance time moving to non-root running.

Abstractions, abstractions everywhere

X, X everywhere meme template, licensed by imgflip.com

All work in software engineering is about abstractions.

Abstractions

All models are wrong, but some are useful

George Box

It began with assembly language: when people got tired of writing large-for-the-time programs in raw binary instructions, they made a language that basically mapped each machine instruction to a text mnemonic, and an assembler translated that back to raw binary and punched cards. Not a huge abstraction, but it started there. Then came high level languages, and off we went. Now we can conjure virtual hardware out of thin air with regular programming languages.

The magic of abstractions really gives you amazing leverage, but at the same time you sacrifice actual knowledge of the implementation details, meaning you are often exposed to obscure errors that you either have no idea what they mean or, even worse, understand exactly what is wrong but cannot fix, because the source is a machine-translated piece of Go and there is no way to patch the generated C# directly, to take one example.

Granularity and collaboration across an organisation

Abstractions in code

Starting small

Most systems start small, solving a specific problem. This is done well, and the requirements grow, people begin to understand what is possible and features accrue. A monolith is built, and it is useful. For a while things will be excellent and features will be added at great speed, and developers might be added along the way.

A complex system that works is invariably found to have evolved from a simple system that worked

John Gall

Things take a turn

Usually, at some point some things go wrong – or auditors get involved because of regulatory compliance – and you prevent developers from deploying to production, hiring gatekeepers to protect the company from the developers. In the olden days – hopefully not anymore – you hired testers to do manual testing to cover a shortfall in automated testing. Now you have a couple of hand-offs within the team: people write code, give it to testers who find bugs, and work goes the wrong way – backwards – for developers to clean up their own mess and try again. Eventually something is available to release, and the gatekeepers will grudgingly allow a change to happen, at some point.

This leads to a slowdown in the feature factory. Some old design choices may cause problems that further slow down the pace of change, or – if you’re lucky – you just have too many developers in one team and somehow have to split them up into different teams, which means comms deteriorate and collaborating in one codebase becomes even harder. With the existing change prevention, struggles with quality and now poor cross-team communication, something has to be done to clear a path so that the two groups of people can collaborate effectively.

Separation of concerns

So what do we do? Well, every change needs to be covered by some kind of automated test, if only to at first guarantee that you aren’t making things worse. This way you can now refactor the codebase to a point where the two groups can have separate responsibilities, and collaborate over well defined API boundaries, for instance. Separate deployable units, so that teams are free to deploy according to their own schedule.

If we can get better collaboration with early test designs and front-load test automation, and befriend the ops gatekeepers to wire in monitoring so that teams are fully wired in to how their products behave in the live environment, we would be close to optimum.

Unfortunately, this is very difficult. Taking a pile of software and making sense of it, deciding how to split it up between teams, and gradually separating out features can be too daunting to really get started on. You don’t want to break anything, and if you – as many are wont to do, especially when new in an organisation – decide to start over from scratch, you may run into one or more of the problems that occur when attempting a rewrite; one example being that you end up in a competition against a moving target, which is why the same team has to own a feature in both the old and the new codebase. For some companies it is simply worth the risk: they are aware they are wasting enormous sums of money, but they still accept the cost. You would have to be very brave.

Abstractions in Infrastructure

From FTP-from-within-the-editor to Cloud native IaC

When software is being deployed – and I am now largely ignoring native apps and focusing on web applications and APIs – there are a number of things actually happening that are at this point completely obscured by layers of abstraction.

The metal

The hardware needs to exist. This used to be a very physical thing, a brand new HP ProLiant howling in the corner of the office onto which you installed a server OS and set up networking so that you could deploy software on it, before plugging it into a rack somewhere, probably a cupboard – hopefully with cooling and UPS. Then VM hosts became a thing, so you provisioned apps using VMWare or similar and got to be surprised at how expensive enterprise storage is per GB compared to commodity hardware. This could be done via VMWare CLI, but most likely an ops person pointed and clicked.

Deploying software

Once the VM was provisioned, tools like Ansible, Chef and Puppet began to appear, abstracting away the stopping of websites, the copying of zip files, the unzipping, the rewriting of configuration and the restarting of the web app into a neat little script. Already here you see issues where “normal” problems, like a file being locked by a running process, show up as very cryptic error messages that the developer might not understand. You start to see cargo culting, where people blindly copy things from one app to another because they think two services are the same, and nobody understands the details. Most of the time that’s fine, but it can also be problematic with a bit of bad luck.

Somebody else’s computer

Then cloud came, and all of a sudden you did not need to buy a server up front and could instead rent as much server as you need. Initially all you had was VMs, so your Chef/Puppet/Ansible worked pretty much the same as before, and each cloud provider offered a different way of provisioning virtual hardware before you came to the point where the software deployment mechanism came into play. More abstractions to fundamentally do the same thing. Harder to analyse any failures – you sometimes have to dig out a virtual console just to see why or how an app is failing, because it’s not even writing logs. Abstractions may exist, but they often leak.

Works on my machine-as-a-service

Just like the Pool of London and the Docklands were rendered derelict by containerisation, a lot of people’s accumulated skills in Chef and Ansible have been rendered obsolete as app deployments have become smaller: each app simply unzipped on top of a brand new Linux OS, sprinkled with some configuration, and the resulting image pushed to a registry somewhere. On one hand, it’s very easy. If you can build the image and run the container locally, it will work in the cloud (provided the correct access is provisioned, but at least AWS offer a fake service that lets you dry-run the app on your own machine and test various role assignments to make sure IAM is also correctly set up). On the other hand, somehow the “metal” is locked away even further and you cannot really access a console anymore, just a focused log viewer that lets you see only events related to your ECS task, for instance.

Abstractions in Organisations

The above tales of ops vs test vs dev illustrate the problem of structuring an organisation incorrectly. If you structure it per function you get warring tribes and very little progress, because one team doesn’t want any change at all in order to maintain stability, another gets held responsible for every problem customers encounter, and the third just wants to add features. If you structured the organisation for business outcomes, everyone would be on the same team working towards the same goals with different skill sets, so the way you think about the boxes in an org chart can have a massive impact on real world performance.

There are no solutions, only trade-offs, so consider the effects of sprinkling people of various backgrounds across the organisation. If, instead of being kept in the cellar as usual, you start proliferating your developers among the general population of the organisation, how do you ensure that every team follows the agreed best practices, and that no corners are cut even when a non-technical manager is demanding answers? How do you manage the performance of developers you have to go out of your way to see? I argue such things are solvable problems, but do ask your doctor if reverse Conway is right for you.

Conclusion

What is a good abstraction?

Coupling vs Cohesion

If a team can do all of their day-to-day work without waiting for another team to deliver something or approve something, if there are no hand-offs, then they have good cohesion. All the things needed are to hand. If the rest of the organisation understands what this team does and there is no confusion about which team to go to with this type of work, then you have high cohesion. It is a good thing.

If, however, one team is constantly worrying about what another team is doing, or where certain tickets are in their sprint in order to schedule its own work, then you have high coupling and time is wasted. Some work has to be moved between teams, or the interface between the teams has to be made more explicit, in order to reduce this interdependency.

In Infrastructure, you want the virtual resources associated with one application to be managed within the same repository/area to offer locality and ease of change for the team.

Single Responsibility Principle

While dangerous to over-apply within software development (you get more coupling than cohesion if you are too zealous), this principle is generally useful within architecture and infrastructure.

Originally meaning that one class or method should only do one thing – an extrapolation of the UNIX principles – it can more generally be said to mean that on a given layer of abstraction, a team, infrastructure pipeline, app, program, class […] should have one responsibility. That usually means a couple of things happen, but they conceptually belong together: they have the same reason to change.
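
As a deliberately tiny, hypothetical code-level illustration – parsing and persistence change for different reasons, so they live in different classes:

using System.Collections.Generic;

// Each class has one responsibility and therefore one reason to change
public class ReportParser
{
    public Report Parse(string rawCsv) =>
        new Report(rawCsv.Split(',')); // parsing rules live here, and only here
}

public class ReportRepository
{
    public void Save(Report report)
    {
        // persistence concerns (connection strings, retries, and so on) live here
    }
}

public record Report(IReadOnlyList<string> Fields);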

What – if any – pitfalls exist?

The major weakness of most abstractions is when they fall apart, when they leak. Not having access to a physical computer is fine as long as the deployment pipeline is working and the observability is wired up correctly, but when it falls down you still need to be able to see console output, you need to understand how networking works, to some extent, and you need to understand what obscure operating system errors mean. Basically, when things go really wrong you need to have already learned to run that app on that operating system, so that you recognise the error messages and have some troubleshooting steps memorised.
So although we try to save our colleagues from the cognitive load of having to know everything we were forced to learn over the decades, to spare them the heartache, they still need to know. All of it. So yes, the challenge with the proliferation of layers of abstraction is to pick the correct ones, and to try to keep the total bundle of layers as lean as possible, because otherwise someone will want to simplify or clarify these abstractions by adding another layer on top, and the cycle begins again.

Cutler, Star Wars, fast feedback and accuracy

As my father’s son I have always been raised to see the plucky heroes at AT&T and Bell Labs, Brian Kernighan, Ken Thompson and Dennis Ritchie, as the Jedi, fighting with the light side of the Force, thus forgoing force lightning and readable documentation, as those are only available to Dark Side force users such as Bill Gates. Anders Hejlsberg must be Anakin Skywalker in my father’s cinematic universe, as Anders first created Turbo Pascal, our favourite programming language when I was little, only to turn to the Dark Side and forge C# and TypeScript – presumably using red kyber crystals.

Anyway – the Count Dooku of this tortured analogy would be Dave Cutler, who created both VMS and Windows NT, and who was recently interviewed on Dave Plummer’s YouTube channel for several hours.

If you ever think you are going to watch that interview, do so now, unless you are OK with spoilers or spoiler-adjacent content. Also be warned: the interview is long. Very, very long.

I find it fascinating how far removed his current work on Xbox game streaming (I think? xCloud sounds like a game streaming solution) is from where he started when he left college. His first foray into computing was in simulations, and he had to bring a pile of punch cards – a thousand punch cards, I think he says – to run a simulation at IBM, because the computing power required was too big for what they had locally at the paper company where he was working.

Later he complains about current developers’ lack of attention to detail, with the host theorising that perhaps the faster inner loop for today’s developers makes them less likely to show the requisite skill or diligence.

Now, wait before you flood his YouTube video with angry comments – please let me agree, so that you can post angry comments here instead, thus driving engagement.

I want to say, I see where he’s coming from.

A lot of really hard problems have solutions now that didn’t exist when I started out, and weren’t even possible to conceptualise when Cutler started out. You can realistically do TDD now. If you don’t have access to a piece of hardware you need to write drivers for, you could still most likely have your employer buy enough computing power to simulate that hardware with very little effort. You can essentially make it so that you know immediately, as you are typing, when you are breaking things.

But also… you could save yourself from a majority of problems by taking a bit more care. Like, hey, are you stuck with legacy code with poor test coverage? Well – if that product is an API that is called by another team – just because you have Swagger documentation and the guys can reach you on Slack/Teams, don’t randomly change behaviours in an endpoint so that dependent services break. Why not just make a new endpoint for the new behaviour? And – seriously – when you write new code, why can’t at least that new bit be unit tested?

Even if you still cannot add unit tests, if you had the mindset that you were about to drive far away to test the software live, with only you and your boss present – don’t you think you would review every single change more carefully? Really?

Even TDD avatars Feathers and Beck will acknowledge coming across code that isn’t unit tested but is still navigable and observably correct. You could achieve that without any extra build steps or frameworks. You could just pay attention – it has been done.

Now, at the same time, when we old timers get excited about the olden days we like to brag about how many hours we worked, and I can tell you from personal experience that when you don’t sleep, accuracy is the first thing to go. So – if you are going to make changes in systems where you have nothing like unit tests or integration tests in place to help you, make sure you give yourself a couple of extra minutes, basically.

The main reason behind this post is to say:
We shouldn’t be luddites and reject modern development practices, but also – if you are a junior dev stuck in a team that writes and maintains legacy code, you can still write defect-free code, you just have to pay more attention and go a bit slower. There is no automatic get-out-of-jail-free card just because the builds are flaky, like. And if you are struggling and you MUST add unit tests in order to keep high standards for code quality, you could probably refactor your way to acceptable test coverage more easily than you think.

Our luxury as software developers – being able to know well ahead of time exactly how our code will work in production – would be a dream in most disciplines, yet surprisingly few bridges randomly collapse. People do sane things and catastrophes fail to appear.
The legacy project you got saddled with can cause you to go slow, but it really shouldn’t give you license to deliver new bugs. Don’t internalise the acceptance of lower standards when it creeps into your team, as in “well, when X and Y happen and they refuse to Z, this is the best they’ll get”. Instead just don’t deliver, and explain why you cannot. If you don’t offer the lower standard, the resulting bugs won’t appear in your code.

McKinsey and the elusive IT department

I know that both my readers are software developers, so – this is a bit of a departure.

Background

Within a business, there are many financial decisions to be made, and nothing kills a business as fast as costs quietly running away. This is why companies have bureaucracy, to make sure that those who enter into contracts on behalf of the company (primarily sales and procurement) are doing so with due skill and care.

Managers follow up on performance down to the individual level. Commission and bonuses reward top performers, and career progression means you are judged by the average scores of your subordinates. If you miss budget you are held accountable. What went wrong? What changes are you implementing, and why do you think those changes will make a difference?

CEOs are thinking: why is IT – and specifically software development – so unwilling to produce metrics and show its work? Are they not monitoring their people? Surely they want to know who their worst performers are, so that they can train them or ultimately get rid of them if all else fails?

In this environment, senior developers turned junior managers tread lightly, trying to explain to senior management how – compared to doing literally nothing – measuring incorrectly can cause much worse problems, in the shape of incorrect optimisations or alienating and losing the wrong individual contributors. As far as I know, few inroads have been made into fairly and effectively measuring individual performance in an IT department. You can spot toxic people, or people who plainly lied on their CV, but apart from clear-cut cases like that there are so many team factors affecting the individual that getting rid of people instead of addressing systemic or process-related problems is like cutting off your nose to spite your face.

Bad metrics are bad

What do we mean by bad metrics? How about a fictionalised example of real metrics out there: LOC – Lines of Code. How many lines of code did Barry commit yesterday? 450? Excellent. Give the man a pay rise! How about Lucy? Oh dear… only 220. We shall have to look at devising a personal improvement plan and file it with HR.

However, upon careful review – it turns out that out of Barry’s contribution yesterday, 300 lines consisted of a piece of text art spelling out BARRY WOZ ERE. Oh, that’s unfortunate, he outsmarted our otherwise bullet-proof metric. I know, we solve everything by redefining our metric to exclude comment lines. NCLOC, non-comment lines of code. By Jove, we have it! Lucy is vindicated, she doesn’t write comments in her code so she kept all her 220 lines and is headed for an Employee of the Month trophy if she keeps this up. Now unfortunately after this boost in visibility within the team, Lucy is tasked with supervising a graduate in the team, so they pair program together, sometimes on the graduate’s laptop and sometimes on Lucy’s, and because life is too short they don’t modify the git configuration for every single contribution, so half the day’s contributions fall under the graduate’s name in the repository and the rest under Lucy’s. The erstwhile department star sees her metrics plummet and can feel the unforgiving gaze from her line manager. Barry is looking good again, because he never helps anyone.

So – to the people on the factory floor of software development, the drawbacks and incompleteness of metrics for individual performance are quite obvious, but that doesn’t help senior management, who have legitimate concerns about how company resources are spent and want some oversight.

Good metrics are good

After several decades of this situation, leading experts in the field of DevOps decided to research how system development works in big organisations, to try and figure out what works and what doesn’t. In 2016 the result came out in the form of the DORA metrics – throughput (deployment frequency, lead time for changes) and stability (mean time to recover, change failure rate) – published in the State of DevOps report. These measure the output of a team or a department, not an individual, and they help steer improvements in software delivery in a way that cannot easily be gamed into good numbers with bad actual outcomes. Again – the goal of the DORA metrics is to ensure that software design, construction and delivery are continuously improved, to avoid the pitfalls of failed software projects and astronomical sums of money lost – measuring team or individual performance is not what they are about.

In the post-pandemic recovery phase a lot of organisations are looking at all of the above and are asking for a way to get certainty and to get actual insight into what IT is doing with all the money, tired of the excuses being made by evasive CTOs or software development managers. How hard could it be?

A proposal, the blowback and a suggestion

Whenever there is a willing customer, grifters will provide a service.

McKinsey took – among other things – the DORA metrics and NCLOC and cooked up their own custom metric to solve this problem once and for all.

Obviously, the response from software development specialists was unanimously critical, and the metric was destroyed up and down the internet. I must admit I enjoyed reading and watching several of the eviscerations – in particular Dave Farley, one of the fathers of DevOps, delivered a very effective critique of the paper on YouTube.

There was no solution offered, though. No-one was trying to help the leaders get the insight they crave, and there was limited understanding of why they want it in the first place. Surely you must trust your employees – why else did you hire them?

Then I stumbled upon a series of writings from Kent Beck and Gergely Orosz, trying to address this head on – to do what McKinsey hadn’t achieved – which I found so inspiring that this blog post exists just as an excuse to post the links:

https://newsletter.pragmaticengineer.com/p/measuring-developer-productivity
https://newsletter.pragmaticengineer.com/p/measuring-developer-productivity-part-2

If you need to know why the proposed McKinsey metrics are bad, watch Dave Farley’s video, and if you want to know what to do instead, read the writings of Beck/Orosz.

GitHub Action shenanigans

When considering what provider to use to cut and polish the rough stones that are your deployable units of code into the stunningly clear diamonds they deserve to be, you have probably considered CircleCI, Azure DevOps, GitHub Actions, TeamCity and similar.

After playing with GitHub Actions for a bit, I’m going to comment on a few recent experiences.

Overall Philosophy

Unlike TeamCity, but like CircleCI and – to some extent – Azure DevOps, it’s all about what is in the yaml. You modify your code and the way it gets built in the same commit – which is the way God intended it.
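To make that concrete, a minimal workflow along these lines (file name, project layout and .NET version are illustrative, not from any particular repo) lives in the repository next to the code it builds, and travels with every branch and tag:

# .github/workflows/build.yml (illustrative sketch)
name: build
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-dotnet@v4
        with:
          dotnet-version: '7.0.x'
      - run: dotnet restore
      - run: dotnet build --no-restore -c Release
      - run: dotnet test --no-build -c Release

Check out an old tag and you get the workflow that built that old code – which is precisely the point.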

There are countless benefits to this strategy over that of TeamCity, where the builds are defined in the UI. There, if you make a big restructuring of the source repository but need to hotfix a pre-restructure version of the code, you had better have kept an archived version of the old build chain or you will have a bad day.

There is a downside, though. The artefact management and chaining in TeamCity is extremely intuitive, so if you build an artefact in one chain and deploy it in the next, it is really simple to make it work like clockwork. You can achieve this easily with ADO too, but those are predictably the bits that require some tickling of the UI.

Now, is this a real problem? Should not – in this modern world – builds be small and self-contained? Build-and-push to a docker registry, [generic-tool] up, Bob’s your uncle? Your artefact stuff and separate build/deployment pipelines smack of legacy – what are we, living in the past?! you exclaim.

Sure, but… Look, the various hallelujah solutions that offer “build-and-push-and-deploy” – you know as well as I do that at some point they are going to behave unpredictably, and all you will be able to tell is that the wrong piece of code is running in production, with no evidence offered as to why.

“My kingdom for a logfile” as it is written, so – you want to separate the build from the deploy, and then you need to stitch a couple of things together and the problems start.

Complex scenarios

When working with ADO, you can name builds (in the UI) so that you can reference their output from the yaml and move on from there – for instance to identify the tag of the docker container you just built and reference it when you are deploying cloud resources.
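For comparison, the ADO side of that looks roughly like this – a pipeline resource whose alias and (UI-defined) name are invented for the example, with the run metadata available as predefined variables when you deploy. Treat it as a sketch from memory rather than gospel:

# azure-pipelines.yml (illustrative sketch)
resources:
  pipelines:
    - pipeline: webappBuild              # alias used below (made up)
      source: 'webapp-docker-build'      # the name the build pipeline has in the ADO UI (made up)
      trigger: true

pool:
  vmImage: ubuntu-latest

steps:
  - download: webappBuild                # fetch that run's published artifacts
  - script: echo "Deploying image built by run $(resources.pipeline.webappBuild.runName) from commit $(resources.pipeline.webappBuild.sourceCommit)"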

What about GitHub Actions?

Well…

Allegedly, you can define outputs, or you can create reusable workflows so that your “let’s build cloud resources” bit of yaml can be shared – in case you have multiple situations (different environments?) that you want to deploy in the same way – and avoid duplication.

There are a couple of gotchas, though. If you define a couple of outputs in a workflow for returning a couple of docker image tags for later consumption, they … exist, somewhere? Maybe. You could first discover that your tags are disqualified from being used as output from a step because they contain a secret (!), which in the AWS case can be resolved by supplying an undocumented parameter to the AWS login action, encouraging it not to mask the account number. The big showstopper imho is the scenario where you just want to grab some metadata from a historic run of a separate workflow file to identify which docker images to deploy – that doesn’t seem as clearly supported.
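For the record, the outputs mechanism itself looks roughly like this – a reusable workflow exposing an image tag, and a caller consuming it (file names, job names and the tag scheme are all invented for the sketch):

# .github/workflows/build-image.yml (illustrative)
name: build-image
on:
  workflow_call:
    outputs:
      image-tag:
        value: ${{ jobs.build.outputs.image-tag }}
jobs:
  build:
    runs-on: ubuntu-latest
    outputs:
      image-tag: ${{ steps.meta.outputs.tag }}
    steps:
      - id: meta
        run: echo "tag=1.0.${{ github.run_number }}" >> "$GITHUB_OUTPUT"
      # ... build and push the image here ...

# .github/workflows/deploy.yml (illustrative)
name: deploy
on: push
jobs:
  build:
    uses: ./.github/workflows/build-image.yml
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - run: echo "deploying image ${{ needs.build.outputs.image-tag }}"

Note that this only flows within the single run that calls the reusable workflow – and, as mentioned, any output GitHub decides contains a secret gets dropped – which is exactly why the “grab the tag from a historic build run” scenario is the awkward one.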

The idea with GitHub Actions workflows seems to be – at least at the time of writing – that you do all the things in one file, in one go, possibly with some flow control to pick which steps get skipped. There is no support for the legacy concept of “OK, I built it now and deployed it to my test environment” – some manual testing happens – “OK, it was fine, of course, I was never worried” – and then you deploy the same binaries to live. “Ah HAH! You WERE just complaining about legacy! I KNEW IT!” you shout triumphantly. Fair cop, but society is to blame.
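In sketch form, the blessed path is something like this – everything in one workflow run, with needs/if as the flow control (the script names are placeholders, not real tooling):

# .github/workflows/ci-cd.yml (illustrative)
name: ci-cd
on: push
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./build-and-push.sh        # hypothetical build script
  deploy-test:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh test           # hypothetical deploy script
  deploy-live:
    needs: deploy-test
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./deploy.sh live           # hypothetical deploy script

What it does not give you out of the box is “pick up the binaries from an arbitrary earlier run and promote them”.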

If you were to consider replacing Azure DevOps with GitHub Actions for anything even remotely legacy, please be aware that things could end up being dicey. Imho.

If I’m wrong, I’m hoping to leverage Cunningham’s Law to educate me, because googling and reading the source sure did not reveal any magic to me.

.NET C# CI/CD in Docker

Works on my machine-as-a-service

When building software in the modern workplace, you want to automatically test and statically analyse your code before pushing it to production. This means that rather than tens of test environments and an army of manual testers, you have a bunch of automation that runs as close as possible to when the code is written. Tests are run, code coverage is calculated, test results are published to the build server user interface (so that in the event that – heaven forbid – tests are broken, the developer gets as much detail as possible to resolve the problem), and static analysis of the built piece of software is performed to make sure no known problematic code has been introduced by ourselves, as well as verifying that the dependencies we pull in are free from known vulnerabilities.

The classic Dockerfile you get when scaffolding an ASP.NET Core Web API project in C# features a multi-stage build layout, where an initial layer includes the full .NET SDK and is where the code is built and published. The next layer is based on the lightweight ASP.NET Core runtime; the output directory from the build layer is copied there and the entrypoint is configured so that the website starts when you run the finished docker image.

Even tried multi

Multistage builds were a huge deal when they were introduced. You get one docker image that only contains the things you need; any source code is safely binned off in other layers that – sure – are cached, but don’t exist outside the local docker host on the build agent. If you then push the finished image to a repository, none of the source will come along. In the before times you had to solve this with multiple Dockerfiles, which is quite undesirable. You want high cohesion but low coupling, and fiddling with multiple Dockerfiles when doing things like upgrading versions does not give you a premium experience and invites errors to an unnecessary degree.

Where is the evidence?

Now, when you go to Azure DevOps, GitHub Actions or CircleCI to find out what went wrong with your build, the test results are available because the test runner has produced output in a format that particular build server can understand. If your test runner is not forthcoming with that information, all you will know is “computer says no” and you will have to trawl through console output – if that is even available – and that is not the way to improve your day.

So – what do we need? Well, we need the formatted test output. Luckily dotnet test will give it to us if we ask it nicely.

The only problem is that those files will stay on the image that we are binning – you know, multistage builds and all that – since we don’t want these files to show up in the finished, supposedly slim, article.

Old world Docker

When a docker image is built, every relevant change creates a new layer, and eventually a final image is created and published that is an amalgamation of all the constituent layers. In the olden days, the legacy builder would cache all of the intermediate layers and print a hash in the output, so that you could refer back to intermediate layers should you so choose.

This seems like the perfect way of forensically finding the test result files we need. Let’s add a LABEL so that we can find the correct layer after the fact, copy the test data output and push it to the build server.

FROM mcr.microsoft.com/dotnet/aspnet:7.0-bullseye-slim AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:7.0-bullseye-slim AS build
WORKDIR /
COPY ["src/webapp/webapp.csproj", "/src/webapp/"]
COPY ["src/classlib/classlib.csproj", "/src/classlib/"]
COPY ["test/classlib.tests/classlib.tests.csproj", "/test/classlib.tests/"]
# restore for all projects
RUN dotnet restore src/webapp/webapp.csproj
RUN dotnet restore src/classlib/classlib.csproj
RUN dotnet restore test/classlib.tests/classlib.tests.csproj
COPY . .
# test
# install the report generator tool
RUN dotnet tool install dotnet-reportgenerator-globaltool --version 5.1.20 --tool-path /tools
RUN dotnet test --results-directory /testresults --logger "trx;LogFileName=test_results.xml" /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura /p:CoverletOutput=/testresults/coverage/ /test/classlib.tests/classlib.tests.csproj
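# label the image from this point on so the layer holding /testresults can be found from the build agent afterwards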
LABEL test=true
# generate html reports using report generator tool
RUN /tools/reportgenerator "-reports:/testresults/coverage/coverage.cobertura.xml" "-targetdir:/testresults/coverage/reports" "-reporttypes:HTMLInline;HTMLChart"
RUN ls -la /testresults/coverage/reports
 
ARG BUILD_TYPE="Release" 
RUN dotnet publish src/webapp/webapp.csproj -c $BUILD_TYPE -o /app/publish
# Package the published code as a zip file, perhaps? Push it to a SAST?
# Bottom line is, anything you want to extract forensically from this build
# process is done in the build layer.
FROM base AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "webapp.dll"]

The way you would leverage this test output is by fishing the temporary layer out of the cache and creating a container from it, from which you can do plain file operations.

# docker images --filter "label=test=true"
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
<none>       <none>    0d90f1a9ad32   40 minutes ago   3.16GB
# export id=$(docker images --filter "label=test=true" -q | head -1)
# docker create --name testcontainer $id
# docker cp testcontainer:/testresults ./testresults
# docker rm testcontainer

All our problems are solved. Wrap this in a script and you’re done. I did, I mean they did, I stole this from another blog.

Unfortunately keeping an endless archive of temporary, orphaned layers became a performance and storage bottleneck for docker, so – sadly – the Modern Era began with some optimisations that rendered this method impossible.

The Modern Era of BuildKit

Since intermediate layers are mostly useless, just letting them fall by the wayside and focusing on the actual output was much more efficient, according to the powers that be. Using multistage Dockerfiles to additionally produce test output was not recommended, or even recognised as a valid use case.

So what to do? Well – there is a newer command, docker buildx bake, that lets you run docker build for multiple docker images in one go, or – most importantly – build multiple targets from the same Dockerfile.

This means you can run one build all the way through to produce the final lightweight image, and also have a second run that saves the intermediary image full of test results. Obviously the docker cache will make sure nothing is actually built twice; the second run is just about picking the layer out of the cache and making it accessible.

The correct way of using bake is to write a bake file in HCL:

group "default" {
  targets = [ "webapp", "webapp-test" ]
}
target "webapp" {
  output = [ "type=docker" ]
  dockerfile = "src/webapp/Dockerfile"
}
target "webapp-test" {
  output = [ "type=image" ]
  dockerfile = "src/webapp/Dockerfile"
  target = "build"
} 

If you run this with docker buildx bake -f docker-bake.hcl, you will be able to fish out the historic intermediary layer using the method described above.

Conclusion

So – using this mechanism you get a minimal number of Dockerfiles and all the build gubbins happening inside docker, giving you freedom from whatever limitations plague your build agent, yet the bloated mess that is the build process is automagically discarded and forgotten as you march on into your bright future with a lightweight finished image.

Technical debt – the truth

Rationale

Within software engineering we often talk about Technical Debt. The metaphor was coined by elder programmer and agilist Ward Cunningham, and it likens trade-offs made when designing and developing software to credit you can take on, but that has to be repaid eventually. The comparison further – and correctly – implies that you have to service the debt every time you make new changes to the code, and that the interest compounds over time. Some companies literally go bankrupt over unmanageable technical debt, because the cost of development goes up as the speed of delivery plummets, and whatever need the software was intended to serve is eventually covered by a competitor as yet unburdened by technical debt.

But what do you mean technical debt?

The decision to take a shortcut can concern a couple of things – usually the number of features, or the time spent per feature. It could mean postponing required features to meet a deadline/milestone, despite it taking longer to circle back and do the work later in the process. If there is no cost to this scheduling change, it’s just good planning. For it to count as technical debt there has to be a cost associated with the rescheduling.

It is also possible to sacrifice quality to meet a deadline. “Let’s not apply test driven development because it takes longer – we can write tests after the feature code instead.” That would mean that instead of iteratively writing a failing test first, followed by the feature code that makes that test pass, we get into a state of flow and churn out code as we solve the problem at varying levels of abstraction, retrofitting tests as we deem necessary. It feels fast, but the tests end up big, incomplete, unwieldy and brittle compared to the plentiful, small and specific tests TDD brings. A debt you are taking on to be paid later.

A third kind of technical debt – which I suspect is the most common – is also the one that fits the comparison to financial debt the least. A common way to cut corners is to not continuously maintain your code as it evolves over time. It’s more akin to the cost of not looking after your house as it is attacked by weather and nature – more dereliction than debt, really.

Let’s say your business sold a physical product back when a certain piece of software was written. Now the product sold is essentially a digital licence of some kind, but in the source code you still have inventory, shipping et cetera, modified to handle selling a digital product in a way that kind of works – but every time you introduce a new type of digital product you have to write further hacks to make it look like a physical product as far as the system knows.

The correct way to deal with this would have been to make a more fundamental change the first time digital products were introduced. Maybe copy the physical process at first and cut things out that don’t make sense whilst you determine how digital products work, gradually refactoring the code as you learn.

Interest

What does compound interest mean in the context of technical debt? Let’s say you have created a piece of software, and your initial tech debt in this story is that you are thin on unit tests but have tried to compensate with more elaborate integration tests. Then the time comes to add an integration: a json payload needs to be posted to a third-party service over HTTP, with some bespoke authentication behaviour.

If you had applied TDD, you would most likely have a fairly solid abstraction over the REST payload, so that an integration test could be simple and small.

But in our hypothetical you have less than ideal test coverage, so you need to write a fairly elaborate integration test that needs to verify parts of the surrounding feature along with the integration itself to truly know the integration works.

Like with some credit cards, you have two options on your hypothetical tech debt statement: either pay the significant cost of building the elaborate new integration test – a day? three? – or you avert your eyes, choose the second, smaller, amount, and increase your tech debt principal by not writing an automated test at all, vowing to test this area of code by hand every time you make a change. The technical debt equivalent of a payday loan.

Critique

So what’s wrong with this perfect description of engineering trade offs? We addressed above how a common type of debt doesn’t fit the debt model very neatly, which is one issue, but I think the bigger problem is – to the business we just sound like cowboy builders.

Would you accept that a builder under-specified a steel beam for an extension you are having built? “It’s cheaper and although it is not up to code, it’ll still take the weight of the side of the house and your kids and a few of their friends. Don’t worry about it.“ No, right? Or an electrician getting creative with the earthing of the power shower as it’s Friday afternoon, and he had promised to be done by now. Heck no, yes?

The difference of course is that within programming there is no equivalent of the Gas Safe Register, no NICEIC et cetera. There are no safety regulations for how you write code – yet.

This means some people will offer harmful ways of cutting corners to people that don’t have the context to know the true cost of the technical debt involved.

We complain that product owners are unwilling to spend budget on necessary technical work, as a way to blame product rather than take some responsibility ourselves. The business expects us to flag up if there are problems. Refactoring as we go and upgrading third-party dependencies as we go should not be something the business has to care about. Just add it to the tickets – cost of doing business.

Sure there are big singular incidents such as a form of authentication being decommissioned or a framework being sunset that will require big coordinated change involving product, but usually those changes aren’t that hard to sell to the business. It is unpleasant but the business can understand this type of work being necessary.

The stuff that is hard to sell is the bunched-up refactorings you should have done along the way but didn’t – and now you want to do them because it’s starting to hurt. Tech debt amortisation is very hard to sell, because things are not totally broken now – why do we have to eat the cost of this massive ticket when everything works and is making money? Are you sure you aren’t just gold plating something out of vanity? The budget is finite and product has other things on their mind to deal with. Leave it for now, we’ll come back to it (when it’s already fallen over).

The business expects you to write code that is reliable, performant and maintainable. Even if you warn them that you are offering to cut corners at the expense of future speed of execution, a non-developer may have no idea of the scale of the implications of what you are offering.

If they spent a big chunk out of their budget one year – the equivalent of a new house in a good neighbourhood – so that a bunch of people could build a piece of software with the hope that this brand new widget in a website or new line-of-business app will bring increased profits over the coming years, they don’t want to hear roughly the same group of people refer to it as “legacy code” already at the end of the following financial year.

Alternative

Think of your practices as regulations that you simply cannot violate. Stop offering solutions that involve sacrificing quality! Please even.

We are told that making an elaborate big design up front is waterfall and bad – but how about some design up front? Just enough thinking ahead to decide where to extend existing functionality, and where to instead put a fork in the road and begin a more separate flow in the code, which you then develop iteratively.

If you have to make the bottom line more appealing to the stakeholders for them to dare invest in making new product through you and not through dubious shadow-IT, try and figure out a way to start smaller and deliver value sooner rather than tricking yourself into accepting work that you cannot possibly deliver safely and responsibly.