
Unix mail

E-mail used to be a file. A file called something like /var/spool/mail/[username], and new mail arrived as text appended to that file. The idea was that the system could send you notifications by appending messages there, and you could send messages to other users by appending text to their files, using the program mail.

Later on you could send email to other machines on the network, by addressing it with a user name, an @ sign and the name of the computer. I am not 100% sure, and I am too lazy to look it up, but the way this communication happened was SMTP, the Simple Mail Transfer Protocol. You’ll notice that SMTP only lets you send mail (implicitly appending it to the file belonging to the user you are sending to).

Much later the Post Office Protocol was invented, so that you could fetch your email from your computer at work and download it to your Eudora email client on your Windows machine at home. It would just fetch the email from your file, optionally removing it from that file as it did so.

As Lotus and Microsoft built groupware solutions loosely based on email, people wanted to access their email on the server rather than always download it, and to have it organised in folders, which led to the introduction of IMAP.

Why am I mentioning this? Well, if you are using a UNIX-like operating system you may still see the notification “You have mail” as you open a new terminal. It is not as exciting as you may think, it is probably a guy called cron that’s emailing you, but still – the mailbox is the void into which the system screams when it wants your help, so it would be nice to wire it into your mainstream email reader.

Because I am running Outlook to handle my personal email on my computer, I had to hack together a Python script that does this work. It seems that if I were using Thunderbird I could read UNIX mail natively, but… it’s not worth it.
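
For the curious, the gist of such a script is tiny. The sketch below is in C# rather than Python, and every path, host and address in it is a placeholder, but it shows the idea: read the spool file, forward the contents to the mailbox your mainstream client actually reads, and truncate the spool the way POP3 would.

    using System;
    using System.IO;
    using System.Net.Mail;

    // Sketch only: forward whatever is sitting in the local Unix spool to the mailbox
    // that Outlook actually polls. Host names and addresses are placeholders.
    var spool = $"/var/spool/mail/{Environment.UserName}";
    if (!File.Exists(spool) || new FileInfo(spool).Length == 0)
        return; // no mail, nothing to do

    var body = File.ReadAllText(spool);

    using var smtp = new SmtpClient("smtp.example.com"); // your real mail relay
    smtp.Send(new MailMessage(
        $"{Environment.UserName}@localhost",             // from
        "me@example.com",                                // the address Outlook reads
        "You have mail (forwarded from the Unix spool)",
        body));

    File.WriteAllText(spool, string.Empty); // truncate the spool, POP3 style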

What Is and Should Never Be

I have been banging on about the perils of the Great Rewrite in many previous posts. Huge risks regarding feature creep, lost requirements, hidden assumptions, spiralling cost, internal unhealthy competition where the new system can barely keep up with the evolving legacy system, et cetera.

I will in this post attempt to argue the opposite case. Why should you absolutely embark on a Great Rewrite? How do you easily skirt around the pitfalls? What rewards lie in front of you? I will start from my usual standpoint: what are invalid excuses for embarking on this type of journey?

When should you NOT give up on gradual refactoring?

If you analyse your software problems and notice that they are not technical in nature but political, there is no point in embarking on a massive adventure, because the dysfunction will not go away. No engineering problem is unsolvable, but political roadblocks can be completely immovable under certain circumstances. You cannot make two teams collaborate where the organisation is deeply invested in making that collaboration impossible. This is usually a problem of P&L, where the hardest thing about a complex migration is the accounting and budgeting involved in getting a key set of subject matter experts to collaborate cross-functionally.

The most horrible instances of shadow IT or Frankenstein middleware were created because the people who ought to have done the work were not available, so some other people had to do it themselves.

Basically, if – regardless of size of work – you cannot get a piece of work funnelled through the IT department into production in an acceptable time, and the chief problem is the way the department operates, you cannot fix that by ripping out the code and starting over.

When SHOULD you give up on gradual refactoring?

Impossible to enact change in a reasonable time frame.

Let us say you have an existing centralised datastore that several small systems integrate across in undocumented ways, and your largest legacy system is getting to the point where its libraries cannot be upgraded any more. Every deployment is risky, performance characteristics are unpredictable for every change, and your business side – your customer in the lean sense – demands quicker adoption of new products. You literally cannot deliver what the business wants in a defensible time.

It may be better to start building a new system for the new products, and refactor the new system to bring older products across after a while. Yes, the risk of a race condition between new and old teams is enormous, so ideally teams should own the business function in both the new and the old system, so that the developers get some accidental domain knowledge which is useful when migrating.

Radically changed requirements

Has the world changed drastically since the system was first created? Are you laden with legacy code that you would just like to throw away, except the way the code is structured means you would first need to do a great refactor before you can throw bits away, and the test coverage is too low to do so safely?

One example of radically changed requirements: you started out as a small site catering only to a domestic audience, but then success happens and you need to deal with multiple languages and the dreaded concept of timezones. Some of the changes necessary can be of such magnitude that you are better off throwing away the old code rather than touching almost every area of it to use resources instead of hard-coded text. This might be an example of paying down well-adjudicated technical debt. The time-to-market gain you made by not internationalising your application first time round could have been the difference that made you a success, but still – now that choice is coming back to haunt you.

Pick a piece of functionality that you want to keep, and write a test around both the legacy and the new version to make sure you cover all requirements you have forgotten over the years (this is Very Hard to Do). Once you have correctly implemented this feature, bring it live and switch off this feature in the legacy system. Pick the next keeper feature and repeat the process, until nothing remains that you want to salvage from the old system and you can decommission the charred remains.
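
To make that concrete, the parity check can be a perfectly ordinary test. The sketch below invents a tiny fee-calculation domain purely so it compiles; in reality the two classes would wrap the legacy code path and the rewrite respectively.

    using Xunit;

    public interface IFeeCalculator
    {
        int CalculateFeeInPence(int orderValueInPence, string customerTier);
    }

    public class LegacyFeeCalculator : IFeeCalculator   // stand-in for the old code path
    {
        public int CalculateFeeInPence(int orderValueInPence, string customerTier)
            => customerTier == "gold" ? orderValueInPence / 100 : orderValueInPence / 50;
    }

    public class NewFeeCalculator : IFeeCalculator      // stand-in for the rewrite
    {
        public int CalculateFeeInPence(int orderValueInPence, string customerTier)
            => customerTier == "gold" ? orderValueInPence / 100 : orderValueInPence / 50;
    }

    public class FeeParityTests
    {
        private readonly IFeeCalculator _legacy = new LegacyFeeCalculator();
        private readonly IFeeCalculator _rewrite = new NewFeeCalculator();

        [Theory]
        [InlineData(10000, "standard")]
        [InlineData(10000, "gold")]
        [InlineData(0, "standard")]   // the edge cases are where the forgotten requirements hide
        public void Rewrite_matches_legacy(int orderValueInPence, string tier)
        {
            Assert.Equal(_legacy.CalculateFeeInPence(orderValueInPence, tier),
                         _rewrite.CalculateFeeInPence(orderValueInPence, tier));
        }
    }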

Pitfalls

Race condition

Basically, you have a team of developers implementing client onboarding in the new system. Some internal developers and a couple of external boutique consultants from some firm near Old Street. They have meetings with the business, i.e. strategic sales and marketing are involved, and they have an external designer on hand to make sure the visuals are top notch. Meanwhile, in the damp lower ground floor, the legacy team has Compliance in their ear about the changes that need to go live NOW, or else the business risks being in violation of some treaty that enters into force next week.

In other words, as the new system is slowly polished, made accessible, and perhaps bikeshedded a bit as too many senior stakeholders get involved, the requirements for the actual behind-the-scenes criteria that need to be implemented are rapidly changing. To the team involved in the rework it seems the goalposts never stop moving, and most of the time they are never even told, because Compliance “already told IT”, i.e. the legacy team.

What is the best way to avoid this? Well, if legacy functionality seems to have high churn, move it out into a “neutral venue” – a separate service that can be accessed from both new and old systems – and remove the legacy remains to avoid confusion. Once the legacy system is fully decommissioned you can take a view on whether you want to absorb these halfway houses or are happy with how they are factored. The important thing is that key functionality only exists in one location at all times.

Stall

A brave head of engineering sets out to implement a new, modern web front-end, replacing a server-rendered website that communicates via SOAP with a legacy backend where all the business logic lives. Some APIs have to be created to do the processing that the legacy website did on its own before or after calling into the service. On top of that, a strangler fig pattern is implemented around the calls to the legacy monolith, primarily to isolate the use of SOAP away from the new code, but also to obviate some of the calls deemed not worth the round trip over SOAP. Unfortunately, after the new website is live and complete, the strangler fig has not actually strangled the backend service, and a desktop client app is still talking SOAP directly to it with no intention of ever caring about, or even acknowledging, the strangler fig. Progress ceases and you are stuck with a half-finished API that in some cases implements the same features as the backend service, but in most cases just acts as a wrapper around SOAP. Some features live in two places, and nobody is happy.
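
For reference, the intended shape of such a strangler facade is roughly the sketch below. Every type name is invented and the feature flag is just one way of routing; the point is that the new code only ever sees the facade, and operations get strangled one at a time.

    using System.Threading.Tasks;

    public interface IQuoteService      { Task<decimal> GetQuoteAsync(string productCode); }
    public interface ILegacySoapClient  { Task<decimal> GetQuoteAsync(string productCode); } // generated SOAP proxy
    public interface INativeQuoteEngine { Task<decimal> QuoteAsync(string productCode); }    // the rewritten logic
    public interface IFeatureFlags      { bool IsEnabled(string flag); }

    public class StranglerQuoteService : IQuoteService
    {
        private readonly ILegacySoapClient _soap;
        private readonly INativeQuoteEngine _native;
        private readonly IFeatureFlags _flags;

        public StranglerQuoteService(ILegacySoapClient soap, INativeQuoteEngine native, IFeatureFlags flags)
        {
            _soap = soap;
            _native = native;
            _flags = flags;
        }

        public Task<decimal> GetQuoteAsync(string productCode)
            => _flags.IsEnabled("quotes-native")
                ? _native.QuoteAsync(productCode)   // strangled: served by the new code
                : _soap.GetQuoteAsync(productCode); // not yet: pass straight through to the monolith
    }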

How to avoid it? Well, things may happen that prevent you from completing a long term plan, but ideally, if you intend to strangle a service, make sure all stakeholders are bought into the plan. This can be complex if the legacy platform being strangled is managed by another organisation, e.g. an outsourcing partner.

Reflux

Let’s say you have a monolithic storage, the One Database. Over the years BI and financial ops have gotten used to querying directly into the One Database to capture reports. Since the application teams are never told about this work, the reports are often broken, but they persevere and keep maintaining these reports anyway. The big issue for engineering is the host of “batch jobs”, i.e. small programs run from a hand-built task scheduler from 2001 that does some rudimentary form of logging directly into a SchedulerLogs database. Nobody knows what these various programs do, or which tables in the One Database they touch, just that the jobs are Important. The source code for these small executables exists somewhere, probably… Most likely in the old CVS install on a snapshot of a Windows Server 2008 VM that is an absolute pain to start up, but there is a batch file from 2016 that does the whole thing; it usually works.

Now, a new system is created. Finally, the data structure in the New Storage is fit for purpose; new and old products can be maintained and manipulated correctly because there are no secret dependencies. An entity relationship that was stuck as 1-1 due to an old, bad design that had never been possible to rectify – as it would break the reconciliation batch job that nobody wants to touch – can finally be put right, and several years’ worth of poor data quality can finally be addressed.

Then fin ops and BI write an angry email to the CFO that the main product no longer reports data to their models, and how can life be this way, and there is a crisis meeting amongst the C-level execs and an edict is brought down to the floor, and the head of engineering gets told off for threatening to obstruct the fiduciary duties of the company, and is told to immediately make sure data is populated in the proper tables… Basically, automatically sync the new data to the old One Database to make sure that the legacy Qlik reports show the correct data, which also means that some of the new data structures have to be dismantled as they cannot be meaningfully mapped back to the legacy database.

How do you avoid this? Well, loads of things were wrong in this scenario, but my hobby-horse is abstractions, i.e. make sure any reports pointing directly into an operational database do not do that anymore. Ideally you should have a data platform for all reporting data where people can subscribe to published datasets, i.e. you get contracts between producer and consumer of data so that the dependencies are explicit and can be enforced, but at a minimum have some views or temporary tables that define the data used by the people making the report. That way they can ask you to add certain columns, and as a developer you are aware that your responsibility is to not break those views at any cost, but you are still free to refactor underneath and make sure the operational data model is always fit for purpose.

Conclusion

You can successfully execute a great rewrite, but unless you are in a situation where the company has made a great pivot and large swathes of the features in the legacy system can just be deleted, you will always contend with legacy data and legacy features, so fundamentally it is crucial to avoid at least the pitfalls listed above (add more in the comments, and I’ll add them and pretend they were there all along). Things like how reporting will work must be sorted out ahead of time. There will be a lack of understanding, shock and dismay, because what we see as hard coupling and poor cohesion, some folks will see as a single pane of glass, so some people will think it is ludicrous not to use the existing database structure forever. All the data is there already?!

Once there is a strategy and a plan in place for how the work will take place, the organisation will have to be told that, even though nobody thought we were moving quickly before, we will for a significant time respond even more slowly to new feature requests, as we dedicate considerable resources to a major upgrade of our platform into a state that is more flexible and easier to change.

Then the main task is to only move forward at pace, and to atomically go feature by feature into the new world, removing legacy as you go, and use enough resources to keep the momentum going. Best of luck!

Are the kids alright?

I know in the labour market of today, suggesting someone pick up programming from scratch is akin to suggesting someone dedicate their life to being a cooper. Sure, in very specific places, that is a very sought after skill that will earn you a good living, but compared to its heyday the labour market has shrunk considerably.

Getting into the biz

How do people get into this business? As with, I suspect, most things, there has to be a lot of initial positive reinforcement. Like – you do not get to be a great athlete without several thousand hours of effort, rain or shine, whether you enjoy it or not – but the reason some people with “talent” end up succeeding is that they have enough early success to catch the “bug” and stick at it when things inevitably get difficult and sacrifices have to be made.

I think the same applies here, but beyond e-sports and streamer fame, it has always been more of an internal motivation, the feeling of “I’m a genius!” when you acquire new knowledge and see things working. It used to help to have literally nothing else going on in life that was more rewarding, because just like that fleeting sensation of understanding the very fibre of the universe, there is also the catastrophic feeling of being a fraud and the worst person in the world once you stumble upon something beyond your understanding, so if you had anything else to occupy yourself with, the temptation to just chuck it in must be incredibly strong.

Until recently, software development was seen as a fairly secure career choice, so people had a financial motivator to get into it – but still, anecdotally it seems people often got into software development by accident. They had to edit a web page and discovered JavaScript and PHP, or had to do some programming as part of a lab at university and quite enjoyed it, et cetera. Some were trying to become real engineers but had to settle for software development; some were actuaries in insurance and ended up programming Python for a living.

I worry that as the economic prospects of getting into the industry as a junior developer are eaten up by AI budgets, we will see a drop-off of those who accidentally end up in software development, and we will be left with only the ones with what we could kindly call a “calling”, or what I would call “no other marketable skills”, like back in my day.

Dwindling power of coercion

Microsoft of course is the enemy of any right-thinking 1337 h4xx0r, but for quite a while, if you wanted a Good Job, learning .NET and working for a large corporation on a Lenovo Thinkpad was the IT equivalent of working at a factory in the 1960s. Not super joyous, but a Good Job. You learned .NET 4.5 and you pretended to like it. WCF, BizTalk and all. The economic power was unrelenting.

Then crazy Web 2.0 happened and the cool kids were using Ruby on Rails. If you wanted to start using Ruby, it was super easy. It was like back in my day, but instead of typing ABC80 BASIC – see below – they used the read-evaluate-print loop in Ruby. A super friendly way of feeling like a genius while gradually increasing the level of difficulty.

Meanwhile legacy Java and C# were very verbose; you had to explain things like static, class, void and include, not to mention braces and semicolons, to people before they could create a loop of a bad word filling the terminal.

People would rather still learn PHP or Ruby, because they saw no value in those old stodgy languages.

Oracle were too busy being in court suing people to notice, but on the JVM there were other attempts at creating something less verbose – Scala and eventually Kotlin happened.

Eventually Microsoft noticed what was going on, and as the cool kids jumped ship from Ruby onto NodeJS, Microsoft were determined not to miss the boat this time, so they threw away the .NET Framework – or “threw away”, as much as Microsoft have ever broken with legacy, since it remained fairly backward compatible – and started from scratch with .NET Core and a renewed focus on performance and lowered barriers to entry.

The pressure really came as data science folks rediscovered Python. It too has a super low barrier to entry, and there is a pipeline straight into data science. Microsoft really failed to break into that market due to the continuous mismanagement of F#, so instead they attacked it from the Azure side and get the money that way – despite people writing Python.

Their new ASP.NET Core web stack borrowed concepts like the minimal API from Sinatra and Nancy, and they introduced top-level statements to allow people to immediately get the satisfaction of creating a script that loops and emits rude words using only two lines of code.

But still, the canonical way of writing this code was to install Visual Studio and create a New Project – Console App, and when you save that to disk you have a whole bunch of extra nonsense there (a csproj file, a bunch of editor metadata stuff that you do not want to have to explain to a n00b et cetera), which is not beginner friendly enough.

This past Wednesday, Microsoft introduced .NET 10 and Visual Studio 2026. In it, they have introduced file based apps, where you can write one file that can reference NuGet packages or other C# projects, import namespaces and declare build-time variables inline. It seems like an evolution of scriptcs, but slightly more complete. You can now give people a link to the SDK installer and then give them this to put in a file called file.cs:
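
Something along these lines, give or take the choice of rude word (the shebang path below is the documented form, but it may differ depending on where the SDK ends up on your system):

    #!/usr/bin/dotnet run
    // file.cs - this is the whole program; the shebang is only needed for the chmod route below
    while (true) Console.WriteLine("your rude word here");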

Then, like in most programming tutorials out there, you can tell them to do sudo chmod +x file.cs if they are running a Unix-like OS. In that case, the final step is ./file.cs and your rude word will fill the screen…

If you are safely on Windows, or if you don’t feel comfortable with chmod, you can just type dotnet run file.cs and see the screen fill with creativity.

Conclusion

Is the bar low enough?

Well, if they are competing with PHP, yes, you can give half a page’s worth of instructions and get people going with C#, which is roughly what it takes to get going with any other language on Linux or Mac, and definitely easier than setting up PHP. The difficulty with C#, and with Python as well, is that they are old. Googling will give you C# constructs from ages ago that may not translate well to a file-based project world. Googling for help with Python will give you a mix of Python 2 and Python 3, and with Python it is really hard to know what is a pip thing and what is an elaborate hoax, due to the naming standards. The conclusion is therefore that dotnet is now in the same ballpark as the others in terms of complexity, but it depends on what resources remain available. Python has a whole gigantic world out there of “how to get started from 0”, whilst C# has a legacy of really bad code from the ASP.NET WebForms days. Microsoft have historically been excellent at providing documentation, so we shall see if their MVP/RD network floods the market with intro pages.

At the same time, Microsoft is going through yet another upheaval with Windows 10 going out of support and Microsoft tightening the noose around needing to have a Microsoft Account to run Windows 11, and at the same time Steam have released the Steam Console running Windows software on Linux, meaning people will have less forced exposure to Windows even to game, whilst Google own the school market. Microsoft will still have corporate environments that are locked to Windows for a while longer, but they are far from the situation they used to be in.

I don’t know if C# is now easy enough to adopt that people who are curious about learning programming would install it over anything else on their Mac or Linux box.

High or low bar, should people even learn to code?

Yes, some people are going to have to learn programming in the future. AGI is not happening, and new models can only train on what is out there. Today’s generative AI can do loads of things, but in order to develop the skills necessary to leverage it responsibly, you need to be familiar with all the baggage underneath, or else you risk releasing software that is incredibly insecure or that will destroy customer data. As Bjarne Stroustrup said, “C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do it blows your whole leg off” – this can apply to AI-generated code as well.

Why does code rot?

The dichotomy

In popular parlance you have two categories of code: your own, freshly written code, which is the best code, code that never will be problematic – and then there is legacy code, which is someone else’s code, untested, undocumented and awful. Code gradually goes from good to legacy in some ways that appear mystical, and in the end you change jobs or they bring in new guys to do a Great Rewrite with mixed results.

So, to paraphrase Baldrick from Blackadder Goes Forth: “The way I see it, these days there’s a [legacy code mess], right? And, ages ago, there wasn’t a [legacy code mess], right? So, there must have been a moment when there not being a [legacy code mess] went away, right? And there being a [legacy code mess] came along. So, what I want to know is: how did we get from the one case of affairs to the other case of affairs?”

The hungry ostrich

Why does code start to deteriorate? What precipitates the degradation that eventually leads to terminal decline? What is the first bubble of rust appearing by the wheel arches? This is hard to state in general terms, but the causes I have personally seen over the years boil down to being prevented from making changes in a defensible amount of time.

Coupling via schema – explicit or not

For example, it could be that you have another system accessing your storage directly. It doesn’t matter whether you are using schemaless storage or not: as long as two different codebases need to make sense of the same data, you have a schema whether you admit it or not, and at some point those systems will need to coordinate their changes to avoid breaking functionality.

Fundamentally, as soon as you start going “nah, I won’t remove/rename/change the type of that old column because I have no idea who still uses it”, you are in trouble. Each storage must have one service in front of it that owns it, so that it can safely manage schema migrations, and anyone wanting to access that data needs to use a well-defined API to do so. The service maintainers can thereafter be held responsible for maintaining this API in perpetuity, and easily so, since the dependency is explicit and documented. If the other service just queried the storage directly, the maintainer would be completely unaware (yes, this goes for BI teams as well).
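
In code, that contract can be as small as a minimal API endpoint: the route is the explicit, documented dependency, and the tables behind it stay private to the owning service. The names and route below are of course made up.

    using Microsoft.AspNetCore.Builder;

    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    // This is the contract you promise to keep working in perpetuity.
    app.MapGet("/api/v1/customers/{id:int}", (int id) =>
        new CustomerDto(id, "Example Ltd", "GB"));   // a real version would read the service's own storage here

    app.Run();

    // Consumers depend on this shape, never on the tables behind it.
    public record CustomerDto(int Id, string Name, string CountryCode);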

Barnacles settling

If every feature request leads to functions and classes growing as new code is added like barnacles, without regular refactoring to more effective patterns, the code gradually gets harder to change. This is commonly a side effect of high turnover or outsourcing. The developers do not feel empowered to make structural changes, or perhaps have not had enough time to get acquainted with the architecture as it was once intended. Make sure that whoever maintains your legacy code is fully aware of their responsibility to refactor as they go along.

Test after

When interviewing engineers it is very common that they say they “practice TDD, but…”, meaning they test after. To me at least, the difference in test quality is obvious between writing the tests first and getting into the zone, writing the feature first and then trying to retrofit tests afterwards. Hint: there is usually a lot less mocking if you test first. As the tests get more complex, adding new code to a class under test gets harder, and if the developer does not feel empowered to refactor first, the tests are likely not to cover the added functionality properly, so perhaps a complex integration test is modified to validate the new code, or maybe the change is just tested manually…

Failure to accept Conway’s law

The reason people got hyped about microservices was the idea that you could deploy individual features independently of the rest of the organisation and the rest of the code. This is lovely, as long as you do it right. You can also go too granular, but in my experience that rarely happens. The problem that does happen is that separate teams have interests in the same code and modify the same bits, and releases can’t go out without a lot of coordination. If you also have poor automated test coverage you will get a manual verification burden that further slows down releases. At your earliest convenience you must spend time restructuring your code, or at least the ownership of it, so that teams fully own all aspects of the thing they are responsible for and can release code independently, with any remaining cross-team dependencies made explicit and automatically verifiable.

Casual attitude towards breaking changes

If you have a monolith providing core features to your estate and you have a publicly accessible API operation, assume it is being used by somebody. Basically, if you must change its required parameters or its output, create a new versioned endpoint or one with a different name. Does this make things less messy? No, but at least you don’t break a consumer you don’t know about. Tech leads will hope that you message around to try and identify who uses it and coordinate a good outcome, but historically that seems too much to ask. We are only human after all.

Until you have PACT tests for everything, and solid coverage, never break a public method.
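
By way of illustration, versioning an endpoint rather than breaking it can be as blunt as the sketch below (routes and payload shapes are invented for the example):

    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Http;

    var app = WebApplication.CreateBuilder(args).Build();

    // v1 keeps its original shape forever, because somebody you have never met calls it
    app.MapPost("/api/v1/orders", (OrderRequestV1 req) =>
        Results.Ok(new { orderId = 42 }));

    // v2 carries the new required field; existing consumers are untouched
    app.MapPost("/api/v2/orders", (OrderRequestV2 req) =>
        Results.Ok(new { orderId = 42, currency = req.Currency }));

    app.Run();

    public record OrderRequestV1(string ProductCode, int Quantity);
    public record OrderRequestV2(string ProductCode, int Quantity, string Currency);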

Outside of support horizon

Initially it does not seem that bad to be stuck on a slightly unsupported version of a library, but as time moves on, all of a sudden you are stuck for a week with a zero-day bug that you can’t patch because three other libraries are out of date and contain breaking changes. It is much better if you keep making changes as you go along. Taken one at a time, a breaking change usually leaves you options, but when you are already exposed to a potential security breach, you have to make bad decisions due to lack of time.

Complex releases

Finally, it is worth mentioning that you want to avoid manual steps in your releases. Today there is really no excuse for making a release more complex than one button click. Ideally abstract away configuration so that there is no file.prod.config template that is separate from file.uat.config, or else that prod template file is almost guaranteed to break the release, much like the grille was the only thing rusting on the Rover 400, a car that was otherwise almost completely a Honda (except for the grille).

Stopping Princip

So how do we avoid the decline, the rot? As with shifting quality and security left, the earlier you spot these problems the cheaper they are to address, so if you find yourself in any of the situations above, address them with haste.

  1. Avoid engaging “maintenance developers”; their remit may explicitly prevent them from doing major refactoring even when it is necessary.
  2. Keep assigning resources to keep dependencies updated. Use dependency scanning to validate that your dependencies are not vulnerable.
  3. Disallow and remove integration-by-database at all costs. This is hard to fix, but worth it. This alone solves 90% of the niggling small problems you keep having, as you can fit your data structure to your current problems rather than the ones you had 15 years ago. If you cannot create a true data platform for reporting data, at least define agreed views/indexes that can act as an interface for external consumers. That way you have a layer of abstraction between external consumers and yourself and stay free to refactor, as long as you make sure that the views still work.
  4. Make dependencies explicit. Ideally PACT tests, but if not that, at least integration tests. This way you avoid needing shared integration environments where teams are shocked to discover that the changes they have been working on for two weeks break some other piece of software they didn’t know existed.

The sky is falling?

Outage

If you tried to do anything online today you may have had more problems than usual. All kinds of services were failing, because some storage at AWS on the eastern seaboard was having problems.

Now, there are plenty of people that love to point out that the cloud has a lot of all-eggs-in-one-basket where one service being unreliable can knock out an insane percentage of the infrastructure of the internet, and they say we should go back to having our own servers in our own basement.

There is a lot of valid maths behind that kind of stance, as renting a big enough chunk of cloud infrastructure is incredibly expensive, even compared to buying really hot computers of your own. Now, I remember back when installing a server meant an HP ProLiant 1U server would show up at your desk, you’d plug it in, annoy everyone else in the office with the fan noise, and stick some software on it – but of course that’s not the time people want to go back to. People want to go back to giant VMware clusters where you could provision a new VM conveniently from your desk. Except of course storage was always ridiculously expensive, with enterprise SAN storage costing per GB what 10 TB cost on the street.

Where did cloud come from?

Why did we end up where we are? Well, AWS offered people a chance to provision apps on virtual hardware without buying a bunch of servers first. This is an advantage that cloud still has to this day. You can just get started, gauge how much interest there is, what amount of hardware makes sense, what the costs look like, and then you can possibly decide to bring it all home to your basement. Of course, cloud providers will try to entice you with database systems and queueing systems that are vendor specific to prevent you from moving your apps home, but it is not insurmountable.

I also remember a time when companies would have server rooms in their offices where they stashed their electronic equipment, hopefully – but not mandatorily – arranging for improved cooling and redundant power. After a while people realised it would make more sense to rent space in a colocated datacentre, where your servers can socialise with other servers, all managed by a hosting partner that provides a certain level of physical security, climate control and fire suppression. At this point, though, you are probably leasing your servers, leasing your rackspace and paying fees for this arrangement. Are you sure you are saving an enormous amount of money this way versus running cloud native apps in the cloud?

If your product is something like an email provider, of course, you will probably have network and storage needs on a scale that merits building your own datacentre, still reducing cost versus cloud hosting, but – and this may be hard for some leaders to accept – your company’s product is probably not Gmail. It is worth making the calculation though.

Why are problems in US East 1 enough to break half the internet?

So, yes, having multiple active copies of your infrastructure up and running globally is expensive, but the main reason businesses keep building their infrastructure in US East 1 is that there are very complex problems with consistency and availability as soon as you have multiple replicas being updated simultaneously, so if there is any way to have just one database instance, you do that – and a lot of American businesses prefer to keep their code in Virginia, or something. Or maybe it’s because US East 1 is the default region. This is not an inherent property of cloud apps; you are free to have your single copy of your infrastructure in another region, or – heck – have a cold failover that you can spin up in another region.

“I hate sitting around, I want to never experience this again”

I hear you – you are looking for solutions, I like it.

Multi Cloud – no

Grifters are going to say “Multi cloud! They can’t all be down at the same time!”, and… sure, but I have yet to see a good multi-cloud setup. There is no true cross-platform IaC, so you’ll have to write a whole bunch of duplicate infra and pay for it to sit around waiting for the other cloud to go down, or if you run active-active you’ll pay egress and ingress to synchronise data across worlds and get a whole new class of problems with consistency and latency.

No – this is a bad option. You are spending loads of money on a solution that you cannot even fully use, since you are limited to the lowest common denominator.

Bring it in house – meh, maybe

If you are going to bring your software home… take the numbers for a spin again, because I doubt that they will make sense.

If you are going to do it, do it properly, i.e. use the tools that didn’t exist back when we built stuff for on-premise. Use containers and ephemeral compute instances. Unfortunately, if you don’t have enough money to lease rack space in multiple datacentres you still have a single point of failure, and if you do have enough money for that, then you will have that synchronisation problem again, so the hard engineering really doesn’t go away. Again, make sure the contracts for your data centres, plus the additional cost of hiring people to manage the on-prem apps you will need to replace that fancy managed infrastructure your cloud provider offered (like, yes, now you need to hire a couple of ZooKeeper and Kafka admins), don’t exceed the cloud cost – or at least that your expected uptime is better than what your cloud provider is offering.

Do nothing – my favourite option

Well… did you get away with the outage? Did you lose less money than it would cost to take decisive action? How many times can the cloud fall over before it’s worth it? Sure, some IT security experts say that if China goes to war with Taiwan, the cyber attack that strikes the US will probably take out the large cloud providers, since that seems to be so effective at crippling infrastructure. Do you think that is likely? Will it hurt your business specifically?

If you can get away with telling your users to “email you tomorrow when the cloud is back up”, or words to that effect, you should probably take advantage of that and not spend more money than you need. If, on the other hand, you need 100% uptime – as in no nines, 100% – there is an IBM mainframe that offers that, and you can configure it to behave like an insane number of Linux machines all in one trench coat, so you can run your existing apps on it, kind of.

Presumably, your system needs are somewhere on that continuum between “that’s OK, we’ll try again tomorrow” and “100% or else”, and I cannot make blanket guarantees, but if you chat to the business, they will probably have very specific ideas of what is acceptable and unacceptable downtime. If you agree with them about that, you are – I am guessing – going to be surprised at how OK people will be with staying in the cloud and taking your chances, as long as there is some observability and feedback.

Heck, if I am wrong, buy that mainframe.

Busman’s Holiday

Rusted bolts vs a pristine manifold

I used to spend time with automotively inclined gentlemen. There were two distinct schools of the car hobby at that time. Finbilsmek, i.e. fancy-car mechanics: renovating a classic car or preparing a race car – sure, it eats all your money in parts, but you get to listen to music and carefully admire your new components as you fit them to your clean project car, unless it’s the weekend before the race and the stress level is high. The other school is bruksbilsmek, i.e. fixing your daily driver. It is the night before the MOT, it’s by the side of the road, it’s with a subset of your tools in a Halfords car park. Only if you are lucky does it take place in your garage, on a car lift – and even if you are that lucky, salt and grime are constantly falling in your face, and if you fail to sort the problem it will have a massive impact on your daily life.

A similar thing exists in IT. If you are tinkering with your computer at home you have time to google bits, listen to music, type random stuff and see if it works. Worst case you just wipe it and start over. It’s enjoyable to install some weird hardware or software and try to get it going.

However if your work laptop starts having problems, or a thing that you need to sort out for work is broken, the enjoyment goes away and there is only rage. Therefore, at least at my age, I wouldn’t build a computer for work, nor do I have any wish to maintain the operating system or mess with networking or access rights – there are pros that do that stuff and keep abreast of all the bulletins of which security holes out there have been patched, I happily let them worry about it, I just accept their vetted upgrades and make sure I restart when I’m asked to.

Baby-proofing a laptop

This is why I’m not against working in a baby-proofed environment in principle, i.e. one where you as a developer do not have true admin rights, no access to customer data, and no direct access to production. I would love that – as long as it still meant I could install everything I need to work, I can test all my logic locally (code, deployment, monitoring – all of it) and all my developer tools work. Having all the networking, server patching, resource provisioning and patch testing magically taken care of by someone else is very nice indeed, allowing me to focus on delivering trustworthy code, which I’m sure sounds super boring to others.

Unfortunately achieving such an environment – a baby-proofed one – requires a lot of engineering. We would like to be in a situation where a company onboards someone and, without any manual intervention whatsoever, they get a user account provisioned with all the correct group memberships and access, and after plugging in the laptop, setting up MFA, locking the screen and going for coffee, all necessary apps will be installed onto the laptop, ready for immediate productivity. That would require a lot of cooperation between HR, ops, dev and procurement, plus enough resources to implement and test all aspects of this, and everyone involved would already have a day job, so this would be extra.

Root of all evil

The biggest technical obstacle that makes developers special is that developers use software that needs to attach a debugger to a process and to open ports, i.e. listen for incoming traffic/requests – which is what a web app does. An operating system thinks these are dangerous things. Generally you get to listen on some ports with high numbers, but “well-known” ports require admin access; i.e. you can’t open ports 80 and 443 without admin access, because it would be dangerous if some random code tried to play web server. Attaching a debugger is even more dangerous: you literally have access to all of the process’s memory. You could read any secrets you wanted out of there, so – yeah – not something you get to do without admin access. Opening ports with high numbers was not a problem at the time, but in some cases you still needed to attach a debugger to IIS, which required admin access.
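
A five-line experiment shows what the operating system actually objects to. On Linux, binding a well-known port fails for an ordinary user while a high port is fine; Windows polices this differently, largely through HTTP.sys URL reservations rather than the raw socket.

    using System;
    using System.Net;
    using System.Net.Sockets;

    foreach (var port in new[] { 80, 5000 })
    {
        try
        {
            var listener = new TcpListener(IPAddress.Loopback, port);
            listener.Start();
            Console.WriteLine($"Port {port}: bound fine");
            listener.Stop();
        }
        catch (SocketException ex)
        {
            // Without elevation, port 80 typically fails with AccessDenied on Linux.
            Console.WriteLine($"Port {port}: {ex.SocketErrorCode}");
        }
    }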

On unix-like operating systems, which were multi-user aware from the beginning, there has been a culture of creating your own user for day-to-day work, and keeping an admin account called root that you only use for things the operating system thinks are serious, like writing to the /etc directory or running programs in /sbin. Later the concept of sudo arrived, where you basically give accounts the opportunity to temporarily acquire root privileges after typing in their own password again, meaning you can delegate the right to install software without permanently giving the user elevated rights or handing them a root password. Also, the need to type in the password makes it harder to abuse by trickery, though it is by no means bulletproof.

Windows came from DOS, a single-user operating system. Although Windows NT, the kernel, has a decent security design, the culture among Windows users was generally that you just put yourself in the Administrators group when you installed your computer; you were “root” and life was easy. The lax security culture meant that many apps simply could not function if the user was not part of the Administrators group, so there was evidently no practical adoption of healthy practices. Windows machines were extremely susceptible to malware, and as popular as Windows XP was, something had to be done. When Windows Vista came, the most hated new feature was User Account Control, a new layer of obstinance on top of Windows security, meaning the operating system threw a popup in your face when you did something risky – like opening any port at any number, or writing files to a suspicious folder, such as editing C:\Windows\System32\drivers\etc\hosts, which is the Windows version of /etc/hosts.

People hated UAC, and switching it off was the new thing people did directly after installing – add yourself to Administrators and switch off UAC. But unfortunately you couldn’t argue with the results. The spread of malware was slowed down quite dramatically. Effectively UAC was a bolt-on sudo copy that just made you click on something to confirm. If you didn’t have the access rights, it would ask you to type in some credentials that did have the power to approve the action. This meant that corporations started to give you a separate admin account that only worked on your machine, but gave you enough rights to open ports or install programs. An analogue to sudo, but more cumbersome.

Windows 7 made UAC back off a bit to increase adoption, and the results continued to be impressive. However – although Microsoft built a simple web server for development, IIS Express, that didn’t require administrative access when debugging – UAC would still sometimes ask you for approval to start things like an Android emulator, an Azure Storage emulator or even an Azure Functions host, thus still requiring users to have some way of elevating, i.e. typing in admin credentials just to do their work. This has to be addressed if we are to move into the glorious future where developers are fully embedded in a padded cell where we can do no harm.

Forbidden knowledge

At Netflix, among other places, they devised a way to provide an ether of configuration that apps can just absorb, meaning that the app announces who it is and receives its configuration, i.e. you remove the problem of needing to know how the production environment is set up; you just ask for things and they are provided. That way apps can be secured and configured without any knowledge of the production environment leaking out to developers.
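
Crudely sketched, the application side of that idea is just “say who you are, get your configuration back”. The endpoint, port and payload below are pure invention, standing in for whatever sidecar or agent the platform provides.

    using System;
    using System.Collections.Generic;
    using System.Net.Http;
    using System.Net.Http.Json;

    // The app only knows its own name; everything else is handed to it.
    var serviceName = Environment.GetEnvironmentVariable("SERVICE_NAME") ?? "quote-service";

    using var http = new HttpClient();
    var config = await http.GetFromJsonAsync<Dictionary<string, string>>(
        $"http://localhost:8500/v1/config/{serviceName}"); // placeholder sidecar/agent endpoint

    // No knowledge of how production is wired ever reaches the developer.
    Console.WriteLine($"Got {config?.Count ?? 0} configuration values");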

Containerisation lets us effectively ship a little egg of code into production, with a defined contract of what the application needs from the outside world. Combine this with a sidecar as above that handles communication between services, and you achieve the perfect state of developers being safely prevented from knowing anything concrete about how the production environment is configured, yet being able to deliver tested apps into production.

The biggest obstacle here is leaky abstractions. DAPR, for instance, promises to abstract away how things like message queues work, but it doesn’t actually. You cannot locally test something against Redis or RabbitMQ as the message broker when you intend to run it on Azure Service Bus in prod. You need to be able to integration test automatically, or else it is unacceptable. The tests need to be able to run realistically in every environment.

Let me VNC onto the server

Back in the day, when VMs were commonly used for hosting websites, you sometimes had to log into a virtual server and look into eventvwr.exe to see what was actively going wrong; maybe a particular executable was eating all the memory and needed a bit of encouragement to get over itself. This type of access is of course dangerous to have, and today it would be nearly unheard of for a developer to have this type of access to production hardware even when troubleshooting; instead there will be alerts that automatically destroy an instance of an app that is misbehaving, having already spun up a replacement. In the rare cases where you still need to use a VM, you install agents on it that allow people to perform certain maintenance tasks without ever logging in. Fundamentally this has been solved in the way I foresee all of this being solved: by abstracting away the problem.

Conclusion

We are closer than ever to utopia, and the level of hand-cranking required to reach nirvana is lower than ever, but there is still too much manual effort required. There is plenty of scope for disruption. A cocoon world for developers that allows for low-faff development and testing of containerised apps, where you can conclusively prove that monitoring and dependency acquisition work locally before pushing the code to CI, is a minimum. This, depending on your cloud provider, is still anything from impossible to a massive PITA. There are, according to a quick search, new IAM solutions that look like they offer identity and app provisioning in a seamless way, so the future is on its way, somehow.

Development Productivity

Why the rush to measure developer productivity?

A lot has been written on developer productivity, the best – indubitably – by Gergely Orosz and Kent Beck, but instead of methodical thinking and analysis I will just recall some anecdotes; it is Saturday after all.

The reason a business even invests in custom software development is that it realises it could get a competitive advantage by automating or simplifying the tasks used when generating revenue – and, I would say, only as a secondary concern to lower cost.

The main goal is to get the features and have some rapid response if there are any problems, as long as this happens there are no problems. Unfortunately as we know, software development is notorious for having problems.

The customer part of the organisation is measured relentlessly, and is sometimes incentivised based on generated profit, whilst it deals with a supplier part of the organisation that seems to have no benchmarks for individual performance – which is bewildering when projects keep slipping.

Why can’t we just measure outcome of an individual software developer? Because an individual developer’s outcome depends on too many external factors. Sure, one developer might type the line that once in production led to a massive reduction in losses, but who should get credit? The developer that committed the code, the other developer that reviewed it? The ops person that approved the release into production? The product owner that distilled wants and needs into requirements that could be translated into code?

The traditional solution is to attempt to measure effort, velocity, lines of code et cetera. Those metrics can easily yield problematic emergent behaviour in teams, and they at no point measure whether the code written actually provides value to the business.

I would argue that the smallest unit of accountability is the development team: the motley crew of experts in various fields that work together to extract business requirements and convert them into running software. If they are empowered to talk to the business and trusted to release software responsibly into production, they can be held accountable for things like cloud spend, speed of delivery and reliability.

Unfortunately the above description of a development team and its empowerment sounds like a fairytale for many developers out there. There are decision gates, shared infrastructure, shared legacy code and domain complexities meaning that teams wait for each other, or wait for a specific specialist team to do some work only they are trained/allowed to do. I have likened it to an incandescent lightbulb before. A lot of heat loss, very little light. Most of the development effort is waste heat.

Why do software development teams have problems delivering features?

I will engage in some harmful stereotyping here to make a wider point. I have been around the block over the years and visited many organisations that have the issues below to varying degrees, in different ways.

Getting new stuff into the pipeline

Fundamentally, it is difficult to get the internal IT department to build you a new thing. A new thing means a project plan has to be drawn up to figure out how much you are willing to spend, the IT department needs to negotiate priorities with other things currently going on, and there can be power plays between various stakeholders and the larger projects IT are engaging in on their own, because they mentioned them when their own budget was negotiated.

Some of this stuff is performative: a project literally needs to look “cool” to justify an increase in headcount. Now, I’m not saying upper management are careless – there are follow-ups and metrics on the department, and if you get a bunch of people in and the projected outcomes don’t materialise, you will be questioned – but the accountability does not change the fact that there is a marketing aspect to asking for a budget, which also means there is some work the IT department must do regardless of what the rest of the business wants, because they promised the CEO a specific shiny thing.

Gathering requirements

In some organisations, the business know “their” developers and can shoot questions over via DM. There is a dark side to this, when one developer becomes someone’s “guy” and his Teams in-tray becomes a shadow support ticket system. This deterioration is why delivery managers or scrum masters or engineering managers step in to protect the developers – and thus their timelines – because otherwise none of the project work gets done: the developers are working on pet projects for random people in the business, and all that budgeting and all those plans go out the window. The problem with this protectionism is that you remove direct feedback; the developers do not intuitively know how their users operate the software in anger. So many developers get anxiety pangs when they finally see the amount of workarounds people do on a daily basis with hotkeys or various clipboard tricks – things that could just have been an event in the source code, or a few function calls, saving literally thousands of hours of employee time in a year.

The funnel of requirements therefore goes through specific people, a product owner or business analyst who has the unenviable task of gathering all kinds of requests into a slew of work that a representative in the business is authorised to approve, meaning that instead of letting every Tom, Dick and Harry have a go at adding requirements, there is some control to prevent cost runaway and to offer a cohesive vision. Yes, great advantages, but it also means that the people on the front line – who work on commission and feel a constant pressure whilst battling the custom software – never see their complaints about annoying or difficult things being addressed, and when some new feature is presented every six months, they notice that their pet peeve is still there, unaddressed. This creates a groundswell of dissatisfaction, sometimes unfounded, but unfortunately sometimes not.

Sometimes the IT department introduce a dual pipeline, one pipe for long-term feature work and one for small changes that can be addressed “in the run of play” to offer people the sense of feedback being quickly addressed, which adds the burden of having a separate control mechanism of “does this change make sense? is it small enough to qualify?” but some companies have had success with it.

The way to be effective here is to reduce gatekeeping but have transparent discussions about how much time can be spent addressing annoyances. Allowing teams to submit and vote for features works too, but generally just showing developers how the software is used by its users is eye-opening.

Building the right thing

“We have poor requirements” is the main complaint developers have when stories overrun their estimates. In my experience this happens when stories are too big, and maybe some more back and forth is needed with the business to sort it out. If developers and business are organisationally close enough, a half hour meeting could save a lot of time. It can be argued that estimates are a waste of time and should be replaced by budgets instead, but that’s a separate blog post.

Developers have all had the experience of wasting time. Let us say a request comes down from the business with a vision of something; the team goes back and forth to figure out how to migrate from an existing system and how to put the new app in production. They write a first version to prove the concept, and then it never gets put into production for some external reason. Probably a perfectly valid reason – it is just never made clear to the developers why the last three months of meetings, documents and code were binned, which grates. As implied above, the fact that it was three months of work can itself explain why the thing was never released; its time may very well have passed.

My proposed solution to the business remains to build smaller initial deliverables to test the waters; I have yet to be convinced that doesn’t work. It is hard, yes, it requires different discussions, and I will concede the IT department might already be at a disadvantage when trying to promise faster deliverables.

Also, requirements change because reality changes – the business is not always just messing with you – and because your big or complex changes take time to get through to production, the problem of changing requirements gets exacerbated the slower delivery is. Also, don’t design everything up front. Figure out small chunks you can deliver and show people. This is difficult when you are building backend rails for something, but you can demo API payloads. Even semi-technical or non-technical people can understand fundamental concepts and spot omissions early: “why are you asking for x here? We don’t know that at this point of the customer journey”. Naming your API request and response objects the way the business would, i.e. ubiquitous language, makes this process a lot easier. Get feedback early. You don’t need diagrams of every single class, but you do need a glossary and a decision log.
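
A tiny, made-up example of what that naming looks like in practice. The domain is invented, but the point is that someone in sales can read the payload and object to a field without reading any code.

    // Request/response objects named in the business's own words (ubiquitous language).
    public record QuoteRequest(string CustomerNumber, string ProductCode, decimal NotionalAmount);

    // "Why is there a SettlementAccount here? We don't know that until the quote is accepted"
    // is exactly the kind of early feedback this naming invites.
    public record QuoteAccepted(string QuoteReference, DateOnly SettlementDate, string SettlementAccount);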

Building the thing right

I have banged on about engineering practices on here before, so there is nothing really new here. Fundamentally, the main thing is missing tests – anything from unit, to feature, to integration, to contract, not to mention performance tests. Now, ideally you write these tests and run them automatically, but with contract tests, for instance, the amount of ceremony you have to set up to get your first automated contract test, plus its limited value until everything is contract tested, means that a fair trade-off can be to agree to test contracts manually. The point is, you will have these tests regardless of whether they are automated or not, or else you will release code that does not work. The later you have the epiphany that test-first is superior, the more legacy you will have that is hard to test and hard to change.

Even if you are stuck with legacy code that is hard to test, you can always test manually. I prefer to have test specialists on a team that I can ask for help, because their devious minds come up with better tests, and then I just perform those tests when I think I’m done. You hate manual testing? Good, good! Use the hate, automate, but there is really no reason not to test.

If there are other parts of the business calling an API you are changing, never break an interface, always version. No matter how you try to communicate, people are busy with their own stuff and will notice that you broke them way too late. Of course, contract tests should catch this, but why tempt fate. Cross-team collaboration is hard; if you can put some of these contracts behind some form of validation, you will save a lot of heartache.
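
A minimal sketch of what “always version” can look like in practice, here with ASP.NET Core minimal APIs and URL-segment versioning; the routes and payloads are invented, and attribute- or header-based versioning works just as well:

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// The old contract stays exactly as it was, so existing callers keep working.
app.MapGet("/v1/customers/{id}", (string id) =>
    Results.Ok(new { id, name = "Ada Lovelace" }));

// The new shape lives under a new version instead of breaking /v1.
app.MapGet("/v2/customers/{id}", (string id) =>
    Results.Ok(new { id, fullName = "Ada Lovelace", segment = "retail" }));

app.Run();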

Operation

I have addressed in previous posts the olden-day triumvirate of QA, Ops and Dev and how they were opposing forces. Ops never wanted to make any changes, QA would have preferred that you made smaller changes, and Devs just churned out code and threw it over the wall. Recently it is not as bad; DevOps culture attempts to build unity and decentralise so that teams are able to responsibly operate their own software, but a lot of the time there is still a specific organisation that handles deployments separately from the development team. Partly this can be interpreted as being required by ITIL, and it also gives operations a final chance to protect the business from poor developer output, but as with all gatekeepers and extra steps, it adds a column on the board where tickets gather. That makes for a bigger release when it finally hits production, and a bigger changeset means a bigger surface area and more problems.

The key problem with running a service is understanding its state and alerting on it. If it takes you a few hours to know that a service isn’t performing well, you are at a disadvantage. There is a trade-off between the amount of data you produce and the value it brings.
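
The cheapest possible starting point in ASP.NET Core is a health endpoint that a monitor or load balancer can poll, so “the service is unwell” is detected in minutes rather than hours. A minimal sketch, with the degraded dependency check invented for the example:

using Microsoft.Extensions.Diagnostics.HealthChecks;

var builder = WebApplication.CreateBuilder(args);

// One liveness check and one hypothetical dependency check.
builder.Services.AddHealthChecks()
    .AddCheck("self", () => HealthCheckResult.Healthy())
    .AddCheck("downstream-api", () =>
        HealthCheckResult.Degraded("latency above threshold")); // hard-coded for the sketch

var app = builder.Build();
app.MapHealthChecks("/health");   // monitoring polls this and alerts on anything non-healthy
app.Run();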

Once you can quickly detect that a release went bad and can quickly roll it back, ideally automatically, you will have saved everyone a lot of time and improved the execution speed of the whole department. This is very important: if you cannot, you will further alienate your users and their managers, which is even worse politically.

Technical advancements

People may argue that I have been telling you that lacking development productivity is the fault of basically everyone but the developers, but… come on – surely we can just sprinkle some AI on this and be done with it? Or use one of those fancy low-code solutions? Surely we can avoid the problem of producing software slowly by not actually writing code?

The only line of code guaranteed to be bug-free is the one not written. I am all for focusing on your core business and writing only the code you need to solve the business problem at hand. Less code to write sounds like less code to review and maintain.

Now, I won’t speak for all low-code solutions, because I have tended to work in high-code(?) environments for the last decade and a bit, but the ones I have seen glimpses of look very powerful: slap a text box on a canvas and, bosh, you have a field stored in a table. The people writing applications with these platforms will become very skilled at producing those applications quickly.

Will all your software live on this platform? What happens to the legacy apps that slow you down today – will they not require some changes in the future as well? Will the teams that currently struggle to avoid breaking shared APIs fare better with strings in the text box of a website? Are you responsible for hosting this platform or is it SaaS, and how is data stored? Will the BI team try to break into a third-party database to ingest data, or will they accept some kind of API? What about recently written microservices that have yet to pay back their cost of development? Bin them?

Conclusion

My belief is that the quest for developer productivity comes from a desire to reduce the time between idea and code running well in production. If that lead time was drastically cut, nobody on the other side of the business is going to care what developers do with their time.

Although development productivity is affected by the care and attention applied by individual developers, given the complexities of a software development department and the constraints of the development workflow, productivity is a function of the execution of the whole department. If your teams are cross-functional and autonomous you can hold them accountable at a team level, but that requires a level of transparency around cost which itself takes engineering effort to acquire.

The speed with which features come to production is only to a limited extent affected by the speed with which developers write code, and if you do not address political and organisational bottlenecks, you may not see any improvements if you go low-code or AI (“vibe coding”).

Our job as the older people within a software development function is to do what we can to make sure features reach the business faster, regardless of how they are implemented. As always: measure first, optimise at the bottleneck, then measure again.

Assembly and abstractions

Machine Code and Assembly

Up until the 1980s, when you bought a computer, you occasionally needed to program it using machine code, i.e. directly feeding the processor instructions in its native instruction set. Programming language abstractions had existed since the 1960s, but for small computers the overhead of compilers and the lack of optimisation made them unfeasible for certain applications. Leading mathematicians and computer scientists designed languages like Algol 68 that influenced many of the languages still around today, but everyone knew that if you wanted to build something truly performant – on simple hardware, or for applications that crunch a lot of data – you would have to write it in assembly (basically machine code, but with names instead of instruction numbers and labels for branches: a very transparent abstraction over machine code). A compiler would not always create optimal machine code from the higher-level language, but eventually the time saved reading and writing a high-level language, compared with maintaining large codebases in assembly, outweighed the increasingly minute performance differences.

High level languages and new categories of work

As computer programs got bigger, a couple of problems commonly arose. Shared mutable state turned out to be risky, as it becomes hard to understand which code changes what. Code that affects a certain concept should ideally live next to the rest of the code that affects that area, so that it becomes easier to understand. Naming becomes critical for maintenance, etc. Structured programming was introduced, then object-oriented programming, declarative programming et cetera.

When the declarative Structured Query Language (SQL) was invented by Chamberlin and Boyce at IBM, it was a domain-specific language for structuring, managing and retrieving data. The idea was that you would no longer need software developers to query and update data. However, nobody could be bothered with that. Sure, a lot of people made little apps in Microsoft Access, but largely what happened was that a brand new specialist group of IT folks emerged, the DBAs, who understood all the complexities around storage architecture, query design, troubleshooting and optimisation.

JavaScript-as-assembly

JavaScript became the lingua franca of the web, and Babel allowed people to be creative. As a simple example, you can write scripts for the web in F# using Fable; the code that actually runs in the browser is JavaScript that has been transpiled and optimised by tooling, while source maps let the browser’s developer tools locate and display the original F# source. This means that you can run essentially any language you like and compile it down to JavaScript. You commit your original source file into your source code versioning system and rely on build pipelines to generate the JavaScript, the same way you don’t commit binaries to source control but build them in the pipeline.

What about AI?

Just like with SQL, a lot of people are convinced that we can stop hiring developers now that we have all these sophisticated programmer AIs that can do all the work. I am selfishly not convinced that is true. Maybe it is because of all the other times they were going to get rid of programmers that either fizzled out (Rational Rose) or just turned out to create new jobs, like the SQL example above.

One problem is that abstractions are leaky, i.e. you cannot fully escape the limitations of the underlying platform, meaning that you will need to know a few fundamentals to manage a software system. Like – you may not need to be able to adjust a carburettor in order to drive a car anymore, but you should probably know about fuel, air and spark to be able to troubleshoot if something is wrong.

Another potential problem is that senior developers are not spawned fully formed like Pallas Athena; they emerge out of the chrysalis of a junior developer. Good judgement comes from experience, experience comes from bad decisions, et cetera. For senior developers to be around to wrangle AI agents, we somehow need to employ juniors first.

I think there will be some turmoil in the industry for a while, but eventually we will settle on a new kind of source code: probably a rather onerous specification, written in English by one or more AI agents, and then a build pipeline that can deterministically generate the software from that specification. The level of specificity required probably makes the specification unmanageable for a human to write, but it can at least be read, reviewed and understood. Once that determinism exists, the spec will be the source code, and the traditional source code will be like the JavaScript that runs in the browser today: you most likely don’t even have to look at it unless something mysterious has gone wrong, and it definitely won’t need committing to your versioning tool.

Death of Enterprise C# as we know it?

The constant battle between getting paid and reaching a wide audience has simultaneously hit a number of formerly open source products in the .NET space, as they move towards closed source, paid licensing in future versions. Is this the end of the world for enterprise C# in its current shape? If not – will we be OK?

Enterprise applications retrospective

I will tell this historical recapitulation as if it is all fact, but of course these are just my recollections at this point, and I can’t be bothered to verify anything at the moment.

When I got started in the dark ages, developing software using Microsoft technologies was not something real developers did unless forced. Microsoft developers used VB, and were primarily junior office workers that had graduated from writing Excel macros or Access forms apps. The pros used C++. Or, more truthfully C/C++ – as people mostly wrote C with the occasional class sprinkled in. The obvious downside with the C part is that humans are not diligent enough to handle manual memory management, and C++ had yet to develop all the fancy memory safety it has now.

The solution came from Sun Microsystems. They invented Java, a language that was supposed to solve everything. It offered managed memory, i.e. it took much of the responsibility for memory management away from the developer. It also did not compile down to machine code directly; it compiled down to an intermediate language that could then quickly be interpreted into machine language by a runtime. This abstraction layer made it possible to write Java once and run it on any platform. That was attractive to many vendors of complex software, such as database engines, that wanted to compete in the workstation market, i.e. Serious Computer Hardware for engineers and others: being able to sell the same software onto multiple hardware platforms in that space suddenly mattered, since no single platform there was ever going to be very numerous.

This was an outrageous success. Companies adopted Java immediately, there were bifurcations of the market, and open source Java Development Kits came about. Oracle got into the fray. C++ stopped being the default language for professional software development and, as a JVM-based ex-colleague of mine remarked, “it became the Cobol of our time”.

Microsoft saw this and wrote a Java implementation for Windows (J++), but of course they used the embrace-extend-extinguish playbook and played fast and loose with the specification. Sun knew what was coming, so they immediately rounded up their lawyers.

Hurt and rejected, Microsoft backed off from joining the JVM family and instead hired Turbo Pascal inventor Anders Hejlsberg to create a new language that would be a bit more grown up than VB but also more friendly to beginners than C++ was. Basically, the brief was to rip off Java, which is evident if you look at the .NET Base Class Library today.

Now, the reason for this retrospective is to explain cultural context. Windows having come from DOS, a single-user operating system, meant that Windows application security was deeply flawed: technically at first, but culturally for far longer. Everybody runs with administrator privileges in a way they would never run as root day-to-day on Linux, to the point that Microsoft had to introduce an extra annoyance layer, UAC, on top of normal Windows security, because even if they implemented an equivalent to sudo there would be no way, culturally, to get people to stop granting their user membership in the Local Administrators group.

The same pipeline – BASICA in DOS and VBA in Office feeding people to VB, which fed people to C#, always with the goal of a low barrier to entry and beginner-friendly documentation – has meant that there is an enormous volume of really poor engineering practice in the .NET developer space. If you google “ASP.NET C# login screen” you will have nightmares when you read the accepted answers.

Culture in the .NET world vs Java

In the Java space meanwhile, huge strides were made. Before .NET developers even knew what unit tests were, enterprise Java had created test suites, runners, continuous integration, object-relational mappers and all kinds of complex distributed systems that were the backbone of the biggest organisations on the internet. Not everything was a hit – Java browser applets died a death fairly quickly – but while Java development houses were exploring domain-driven design and microservices, Microsoft’s documentation still taught beginners about three-tier applications using BizTalk, WCF and Workflow Foundation.

Perhaps because Oracle and Sun were too busy suing everybody, perhaps because of a better understanding of what an ecosystem is, or perhaps just through blind luck, a lot of products were created on the JVM and allowed to live on. Some even saw broad adoption elsewhere (Jenkins and TeamCity for build servers, just to name a couple), while .NET never really grew out of enterprise line-of-business CRUD. Microsoft developers read MSDN and possibly ventured to ComponentSource, but largely stayed away from the wider ecosystem. Just like in the days when you could never be fired for buying IBM, companies were reluctant to buy third-party components, and in the Windows and .NET world only a few third-party things ever really made it. ReSharper and the Red Gate SQL Tool Belt can sell licenses the way almost nobody else can, but with every new version of Visual Studio, Microsoft steals more features from ReSharper to include by default, in a way that I don’t think would happen in the Java space. For one thing, there is no standard Java IDE: there are a few popular options, but no anointed main IDE that you must use (although if I had to attempt Java programming I’d use IntelliJ IDEA out of brand loyalty, but that’s just me), while Microsoft has always had an ironclad first-party grip over its ecosystem.

Open Source seeping into .NET

There was an Alt.NET movement of contrarians that ported some popular Java libraries to .NET to try and build grown-up software on the platform, but before .NET Core it really seemed like nobody had ever had to consider performance when developing on .NET, and those efforts kept dying a slow death. Sebastien Lambla created OpenWrap to provide a package manager for .NET developers. Enthusiasts within Microsoft created NuGet, which basically stole the concept. German enthusiasts created Paket in order to deal with dependency graphs more successfully, and there was a lot of animosity before Paket finally stopped being actively sabotaged by Microsoft. After NuGet saw broad adoption, and .NET developers heard about the previous decade’s worth of inventions in the Java space, there was an explosion of growth. Open Source got in below the radar and allowed .NET developers to taste the rainbow without going through procurement.

Microsoft developers heard about Dependency Injection. We were so far behind Java’s container wars that by the time Java developers joked about being caffeine-driven machines that turn XML into stack traces, XML had already died out of the mainstream; if we used configuration files at all, they would most likely have been JSON by then. However, since we did have proper generics – which Java did not until much later – we were able to use fluent interfaces to overcomplicate our DI instead of configuration files. This was another awkward phase that we will get back to later.

We got to hear about unit tests: NUnit was a port of Java’s JUnit test framework, and Microsoft included MSTest with Visual Studio, but it was so awful that NUnit stayed strong. Out of spite, Microsoft pivoted to promote xUnit, so that even if MSTest remained a failure, at least NUnit would suffer. xUnit is good, so it has defeated the rest.

Meanwhile, Sun had been destroyed, Oracle had taken stewardship of Java, C# had eclipsed Java in terms of language design, and the hippest developers had abandoned Java for Ruby, Scala, Python and JavaScript(!). Kids were changing the world using Ruby on Rails and Node JS. Microsoft were stuck with an enormous cadre of enterprise LOB app developers, and blogs titled “I’m leaving .NET” were trending.

The Force Awakens

Microsoft had internal problems with discipline, and some of the rogue agents that created NuGet proceeded to cause further problems; eventually, after years of battles with the legal department, they managed to release some code as open source from within Microsoft, which was a massive cultural shift.

These rogue agents at Microsoft had seen Ruby on Rails and community efforts like FubuMVC, and became determined to retrofit a model-view-controller paradigm onto legacy ASP.NET’s page-rendering model. The improvement of Razor views over the old ASPX pages was so great that adoption was immediate, and this, the ASP.NET MVC bit, was in practice the first thing to be made open source.

The Ruby web framework Sinatra spawned a .NET counterpart called NancyFX, which offered extremely lightweight web applications. Its creator Andreas Håkansson contributed to standardising a .NET web server interface called OWIN, but later developments would see Microsoft step away from this standard.

In this climate, although I am hazy about the exact reasons why, some troublemakers at Microsoft decided to build a cross-platform reimplementation of .NET, called .NET Core. Its first killer app was ASP.NET Core, a new high-performance web framework that was supposed to steal Node JS’s lunch. It featured native support for middleware pipelines and would eventually move towards endpoint routing, which would have consequences for community efforts.

After an enormous investment in the new frameworks, and with contributions from the general public, .NET Core became good enough to use in anger, and it was a lot faster than anything Microsoft had ever built before. Additionally, you could now run your websites on Linux and save a fortune; Microsoft were just as happy to sell you Azure compute to run Linux. It felt like a new world. While on the legacy .NET Framework the Container Wars had settled into trench warfare and a stalemate between Castle Windsor, Ninject and Unity among others, Microsoft created peace in our time by simply building a rudimentary DI container into .NET Core, de facto killing off one of the most successful open source ecosystems in .NET.
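
For readers who never saw the Container Wars, a minimal sketch of the registration style the built-in container offers; the IClock and SystemClock types are invented for the example:

using System;
using Microsoft.Extensions.DependencyInjection;

// Invented abstraction and implementation, just to have something to register.
public interface IClock { DateTime UtcNow { get; } }
public sealed class SystemClock : IClock { public DateTime UtcNow => DateTime.UtcNow; }

public static class Program
{
    public static void Main()
    {
        var services = new ServiceCollection();
        services.AddSingleton<IClock, SystemClock>();   // the whole registration ceremony

        using var provider = services.BuildServiceProvider();
        Console.WriteLine(provider.GetRequiredService<IClock>().UtcNow);
    }
}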

A few years into ASP.NET Core being a thing, the rogue agents within Microsoft introduced minimal APIs, which, thanks to middleware and endpoint routing, basically struck a blow to NancyFX, and also blocked the F# web service project Giraffe for about a year in the hope that it too would support OpenAPI/Swagger API documentation.
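
For completeness, a minimal sketch of what a minimal API with a piece of middleware and endpoint routing looks like; the logging middleware and the /ping route are invented for the example:

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// A tiny piece of middleware: runs around every request in the pipeline.
app.Use(async (context, next) =>
{
    Console.WriteLine($"Handling {context.Request.Path}");
    await next();
});

// Endpoint routing: route and handler declared in one line, NancyFX-style.
app.MapGet("/ping", () => Results.Ok(new { status = "pong" }));

app.Run();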

After we finally caught up on the SOLID principles that Java developers had been talking about for a decade, we knew we had to separate our concerns and use all the patterns in the Gang of Four. Thankfully MediatR came about, so we could separate our web endpoints from the code that actually did the thing, by mapping the request object to a DTO and then just passing that request DTO to MediatR, which would pick and execute the correct handler. Nice. The mapping would most commonly be done using AutoMapper. Both of these libraries are loved and hated in equal measure.
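
A minimal sketch of that shape, using an invented create-customer use case; the MediatR and AutoMapper types used here are the real library APIs, everything else is hypothetical, and the DI registration is omitted:

using System;
using System.Threading;
using System.Threading.Tasks;
using AutoMapper;
using MediatR;

// Hypothetical web-facing request and the command the handler actually wants.
public record CreateCustomerRequest(string Name);
public record CreateCustomerCommand(string Name) : IRequest<Guid>;

public class CustomerProfile : Profile
{
    public CustomerProfile() => CreateMap<CreateCustomerRequest, CreateCustomerCommand>();
}

// The code that actually does the thing, located and executed by MediatR.
public class CreateCustomerHandler : IRequestHandler<CreateCustomerCommand, Guid>
{
    public Task<Guid> Handle(CreateCustomerCommand command, CancellationToken ct)
        => Task.FromResult(Guid.NewGuid());   // pretend something was persisted
}

// In the endpoint: map, dispatch, return – the endpoint never sees the handler.
public static class CustomerEndpoint
{
    public static async Task<Guid> Post(
        CreateCustomerRequest request, IMapper mapper, IMediator mediator)
    {
        var command = mapper.Map<CreateCustomerCommand>(request);
        return await mediator.Send(command);
    }
}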

Before DevOps had become trendy, developers in the .NET space were using Team City to run tests on code merged into source control, and would have the build server produce a deployable package upon all tests being green, and when it was time to release, an ops person would either deploy it to production directly or approve access to a physical machine where the developer would carefully deploy the new software. If you had a particularly Microsoft-loyal CTO, you would at this time be running TFS as a build server, and use Visual Studio’s built in task management to track issues in TFS, but most firms had a more relaxed environment with Team City and Jira.

When the DevOps revolution came, Octopus Deploy would allow complex deployment automation directly out of Team City, enabling IT departments to do Continuous Deployment if they chose to. For us fortunate enough to only click buttons in Octopus Deploy it felt like the future, but the complexities of keeping those things going may have been vast.

I’ve looked at Cloud, from both sides now

Microsoft Azure has gone through a bunch of iterations. Initially the goal was to try and compete with Amazon Web Services: Microsoft offered virtual machines and a few abstractions, like Web and Worker Roles and Blob Storage. Oh, and it was called Windows Azure to begin with.

A Danish startup called AppHarbor offered a .NET version of Heroku, i.e. a cloud application platform for .NET. This too felt like the future, and AppHarbor moved to the west coast of the USA and got funding.

Microsoft realised that this was a brilliant idea and created Azure Websites, offering PaaS within Azure. This was a shot below the waterline for AppHarbor, which finally shut down on 5 December 2022. After several iterations this is now known as Azure App Service, and it is more capable than AWS Lambda, which in turn is superior to Azure Functions.

Fundamentally, Azure was not great in the beginning. It exposed several shortcomings within Windows – it was basically the first time anybody had attempted to use Windows in anger at that scale – and Microsoft were shocked to discover all the problems. After a tremendous investment in Windows Server, as well as to a certain extent giving up and running things like software-defined networking on Linux, Azure performance has improved.

Microsoft was not satisfied and wanted control over the software development lifecycle everywhere. More investment went into TFS, wrestling it into something that could be put in the cloud; to hide its origins, they renamed it Azure DevOps. You could define build and deployment pipelines as YAML files, which finally was an improvement over TeamCity. The deployment pipelines were not as good as Octopus Deploy, but they were good enough that people abandoned TeamCity, Octopus Deploy and to some extent Jira, so that code, build pipelines and tickets would all go on to live in TFS/Azure DevOps.

To summarise, .NET developers are inherently suspicious of software that comes from third parties. Only a select few pass the vibe check. Once they have reached that point of success they become too successful and Microsoft turns on them.

Current State of Affairs

By the end of last year, I would say your run-of-the-mill C# house would build code in Visual Studio (hopefully with ReSharper installed), and keep source code, run tests and track tickets in Azure DevOps. The code would use MediatR to dispatch commands to handlers, and AutoMapper to translate requests from the web front-end to handlers and format responses back out to the web endpoint. Most likely data would be stored in an Azure-hosted SQL database, possibly with Cosmos DB for non-relational storage and Redis for caches. It is also highly likely that unit tests used Fluent Assertions and Moq for mocks, because people like those.

Does everybody love this situation? Well, no. Tooling has over the years improved so vastly that you can easily navigate your way around a codebase with keyboard shortcuts, or a keyboard shortcut while clicking on an identifier. Except when you are using MediatR and AutoMapper, where it all becomes complicated: go to definition on a dispatched request takes you to the library, not to the handler that actually does the work. Looking at AutoMapper mapping profiles, you wonder to yourself whether it would not have been more straightforward to map the classes manually and have some unit tests to prove that they still work.
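
For what it is worth, a minimal sketch of that manual alternative, with invented Customer types: the mapping is a plain static method, and the proof that it still works is an ordinary unit test.

using Xunit;

// Hypothetical web DTO and domain object.
public record CustomerDto(string Name, string Email);
public record Customer(string Name, string Email);

public static class CustomerMapper
{
    // Boring, explicit, and go to definition takes you straight here.
    public static Customer ToDomain(CustomerDto dto) => new(dto.Name, dto.Email);
}

public class CustomerMapperTests
{
    [Fact]
    public void Maps_all_fields()
    {
        var customer = CustomerMapper.ToDomain(new CustomerDto("Ada", "ada@example.com"));

        Assert.Equal("Ada", customer.Name);
        Assert.Equal("ada@example.com", customer.Email);
    }
}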

Fluent syntax, as mentioned in a couple of places above, was a fad in the early 2010s. You wrote a set of interfaces with methods that in turn returned objects exposing other interfaces, and you used these interfaces to build grammars. Basically, a fluent assertion library could offer a set of extension methods that allowed you to build assertion expressions directly off the back of a variable, like:

var result = _sut.CallMethod(Value);
result.Should().Not().BeNull(); 

The problem with fluent interfaces is that the grammar is not standardised, so you would type Is.NotNull, or Is().Not().Null, or any combination of the above and after a number of years, it all blends together. If you happen to have both a mocking library and an assertion library within your namespace, your IntelliSense will suggest all kinds of extension methods when you hit the full stop, and you are never quite sure if it is a filter expression for a mocking library or a constraint for an assertion library.
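
To make the confusion concrete, here is a small illustrative test that mixes three real grammars that can happily coexist in one file – NUnit’s constraint model, FluentAssertions and Moq – against an invented IGreeter interface:

using FluentAssertions;
using Moq;
using NUnit.Framework;

public interface IGreeter { string Greet(string name); }   // invented dependency

[TestFixture]
public class GrammarSoupTests
{
    [Test]
    public void Three_dialects_in_one_test()
    {
        var greeter = new Mock<IGreeter>();
        greeter.Setup(g => g.Greet(It.IsAny<string>())).Returns("hello");   // Moq's grammar

        var result = greeter.Object.Greet("Ada");

        Assert.That(result, Is.Not.Null);   // NUnit's constraint grammar
        result.Should().NotBeNull();        // FluentAssertions' grammar
    }
}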

The Impending Apocalypse

Above is described the difficulty of threading the needle between getting any traction at all in the .NET space and becoming so successful that Microsoft decides to erase your company from the map. Generally, the audacity of open source contributors to want access to food and shelter is severely frowned upon on the internet. How dare they, is the general sentiment. After the success of NuGet and the avalanche of modern open source tooling that became available to regular folks working the salt mines of .NET, GitHub issues have been flooded by entitled professional developers making demands of open source maintainers without any reciprocity in the form of a support contract or source code contributions. In fairness to these entitled developers, they are pawns in a game where they have little agency over how they spend time or money: deciding to contribute a fix to an open source library could have detrimental effects on one’s employment status, and there is no way to buy a support contract without having a senior manager lodge a ticket with procurement and endure the ensuing political consequences. To the open source maintainer it is still a nightmare, regardless of people’s motivations.

A year ago, Moq shocked the developer community by including an obfuscated piece of tracking software, and more recently Fluent Assertions, MediatR, AutoMapper and MassTransit have announced that they will move towards paid licenses.

Conclusion

What does this mean for everybody? Well… it depends. For an enterprise, the procurement dance starts, so heaven knows when there will be an outcome from that. Moving away from AutoMapper and MediatR is most likely too dramatic to consider, as by necessity these libraries become choke points where literally everything in the app goes through mapping or dynamic dispatch; however, there are alternatives. Most likely the open source versions will be forked and maintained by others, but given the general state of .NET open source, it is probably more likely that instead of almost market-wide adoption, you will see a much more selective user base going forward. I want all programmers to be able to eat, and I wish Jimmy Bogard all the success going forward. He has had an enormous impact on the world of software development and deserves all the success he can get.

Ever since watching 8 Lines of Code by Greg Young I have been a luddite when it comes to various forms of magic, and a proponent of manual DI and mapping. My advice would be to just rip this stuff out and inject your handler through the constructor the old-fashioned way. Unless you have very specific requirements, I can almost guarantee that you will find it more readable and easier to maintain. Also, like Greg says – the friction is there for a reason. If your project becomes too big to understand without magic, that is a sign to divide your solution into smaller deployable units.
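
As a minimal sketch of that advice, assuming an invented create-customer use case: the endpoint class takes the handler in its constructor, so navigation and the call graph stay obvious. DI registration is omitted.

using System;
using System.Threading;
using System.Threading.Tasks;

// Invented command and handler: no mediator, no mapping profile.
public record CreateCustomerCommand(string Name);

public class CreateCustomerHandler
{
    public Task<Guid> Handle(CreateCustomerCommand command, CancellationToken ct)
        => Task.FromResult(Guid.NewGuid());   // pretend something was persisted
}

// The endpoint takes the handler in its constructor; go to definition just works.
public class CustomersEndpoint
{
    private readonly CreateCustomerHandler _handler;

    public CustomersEndpoint(CreateCustomerHandler handler) => _handler = handler;

    public Task<Guid> Post(string name, CancellationToken ct)
        => _handler.Handle(new CreateCustomerCommand(name), ct);
}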

The march continues: teams at Microsoft usurp more and more features from the general .NET ecosystem and include them in future versions of .NET, C# or Visual Studio, squeezing the life out of smaller companies along the way.

I would like to see more situations like Paket and xUnit, where Microsoft have stayed their hand and allowed the thing to exist. A coexistence of multiple valid solutions would be healthier for all parties; I just do not know how to bring that about. Developers in the .NET space remain first-party loyal, except for a very small number of products. I think it is necessary to build a marketplace where developers can buy or subscribe to software tools and libraries, one which also offers software bill-of-materials type information so that managers have some control over which licenses are in play, and perhaps gives developers a budget for how much they can spend on tools. That way enterprise devs can buy the tools they need whilst offering the transparency that is needed for compliance, with controls that allow an employer to limit spending whilst simultaneously allowing tool developers to feed their kids. I mean, Steam exists, and that is a marketplace that also lives on a Microsoft platform. Imagine a similar thing but for NuGets.

How small is small?

I have great respect for the professional agile coach and scrum master community. Few people seem to systematically care for both humans and business, maintaining profitability without ever sacrificing the humans. Now, however, I will alienate vast swathes of them in one post. Hold on.

What is work in software development?

Most mature teams do two types of work: they look after a system and make small changes to it – maintenance, keeping the lights on – and they build new features that the business claims it wants. It is common to get an army in to build a new platform and then allow the teams to naturally attrit as transformational project work fizzles out, contractors leave, and the most marketable developers either get promoted out of the team or get better offers elsewhere. A small stream of fine-adjustment changes keeps coming in to the core team of maintenance developers that effectively remains. Eventually this maintenance development work gets outsourced abroad.

A better way is to have teams of people that work together all day every day. Don’t expand and contract or otherwise mess with teams: hire carefully from the beginning and keep new work flowing into teams rather than restructuring after a piece of work is complete. Contract experts to pair with the existing team if you need to teach the team new technology, but don’t get mercenaries to do the work. It might be slower, but if you have a good team that you treat well, the odds are better that they will stay, that they will build new features in a way that can be maintained in the future, and that they will be less likely to cut corners, since any shortcuts will blow up in their own faces shortly after.

Why do we plan work?

When companies spend money on custom software development, a set of managers at very high positions within the organisation have decided that investing in custom software is a competitive advantage, and several other managers think they are crazy to spend all this money on IT.

To mollify the greater organisation, there is some financial oversight and budgeting. Easily communicated projects are sold to the business “we’ll put a McGuffin in the app”, “we’ll sprinkle some AI on it” or similar, and hopefully there is enough money in there to also do a bit of refactoring on the sly.

This pot of money is finite, so there is strong pressure to keep costs under control: a surprise AWS bill can mean middle managers have to move on. Cost runaway kills companies, so there are legitimately people not sleeping at night when there are big projects in play.

How do we plan?

Problem statement

Software development is very different from real work. If you build a physical thing, anything from a phone to a house, you can make good use of a detailed drawing describing exactly how the thing is constructed and the exact properties of the components that are needed. If you are to make changes or maintain it, you need these specifications. It is useful both for construction and maintenance.

If you write the exact same piece of software twice, you have some kind of compulsive issue; you need help. The operating system comes with commands to duplicate files, or you could run the compiler twice. There are infinite ways of producing the exact same piece of software, and you don’t need a programmer for any of them; it’s pointless. A piece of software is not a physical thing.

Things change, a lot. Fundamentally, people don’t know what they want until they see it, so even if technology did not change underneath your feet whilst you were developing software, you would still have the problem that people did not know what they wanted back when they asked you to build something.

The big issue, though, is technology change. Back in the day, computer manufacturers would have the audacity to evolve the hardware in ways that made you have to re-learn how to write code. High-level languages came along, and now instead we live with Microsoft UI frameworks or JavaScript frameworks that are mandatory one day and obsolete the next. Things change.

How do you ever successfully plan to build software, then? Well… we have tried to figure that out for seven decades. The best general concept we have arrived at so far is iteration, i.e. deliver small chunks over time rather than to try and deliver all of it at once.

The wrong way

One of the most well-known but misunderstood papers is Managing the Development of Large Software Systems by Dr Winston W. Royce, the paper that launched the concept of Waterfall.

Basically, the software development process in waterfall is outlined as distinct phases:

  1. System requirements
  2. Software requirements
  3. Analysis
  4. Program design
  5. Coding
  6. Testing
  7. Operations

For some reason people took this as gospel for several decades, despite the fact that the core, fundamental problem that dooms the process to failure is outlined right below Figure 2 – the pretty waterfall illustration of the phases above that people keep referring to:

I believe in this concept, but the implementation described above is risky and invites failure. The problem is illustrated in Figure 4. The testing phase which occurs at the end of the development cycle is the first event for which timing, storage, input/output transfers, etc., are experienced as distinguished from analyzed. These phenomena are not precisely analyzable. They are not the solutions to the standard partial differential equations of mathematical physics for instance. Yet if these phenomena fail to satisfy the various external constraints, then invariably a major redesign is required. A simple octal patch or redo of some isolated code will not fix these kinds of difficulties. The required design changes are likely to be so disruptive that the software requirements upon which the design is based and which provides the rationale for everything are violated. Either the requirements must be modified, or a substantial change in the design is required. In effect the development process has returned to the origin and one can expect up to a 100-percent overrun in schedule and/or costs.

Managing The Development of Large Software Systems, Dr Winston W Royce

Reading further, Royce concludes that a more iterative approach is necessary, as pure waterfall is impossible in practice. His legacy, however, was not that.

Another wrong way – RUP

Rational Rose and the Rational Unified Process were the ChatGPT of the late nineties and early noughties. Basically, if you would only make a UML drawing in Rational Rose, it would give you a C++ program that executed. It was magical. Before PRINCE2 and SAFe, everyone was RUP certified. You had loads of planning meetings, wrote elaborate Use Cases on index cards, and eventually you had code. It sounds like waterfall with better tooling.

Agile

People realised that when things are constantly changing, it is doomed to fix a plan at the start and stay on it even when you know the original goal is unattainable or undesirable. Loads of attempts were made, but one day some people got together to have a proper go at defining what the true way forward should be.

In February 11-13, 2001, at The Lodge at Snowbird ski resort in the Wasatch mountains of Utah, seventeen people met to talk, ski, relax, and try to find common ground—and of course, to eat. What emerged was the Agile ‘Software Development’ Manifesto. Representatives from Extreme Programming, SCRUM, DSDM, Adaptive Software Development, Crystal, Feature-Driven Development, Pragmatic Programming, and others sympathetic to the need for an alternative to documentation driven, heavyweight software development processes convened.

History: The Agile Manifesto

So – everybody did that, and we all lived happily ever after?

Short answer: No. You don’t get to just spend cash, i.e. have developers do work, without making it clear what you are spending it on, why, and how you intend to know that it worked. Completely unacceptable, people thought.

The origins of tribalism within IT departments have been done to death in this blog alone, so for once they will not be rehashed. Suffice to say, staff are often organised according to their speciality rather than in teams that produce output together. Budgeting is complex, and there can be political competition that is counterproductive for IT as a whole or for the organisation as a whole.

Attempts at running a midsize to large IT department that develops custom software have been made in the form of the Scaled Agile Framework (SAFe), DevOps and SRE (where SRE addresses the problem backwards: running black-box software using monitoring, alerts, metrics and tracing to ensure operability and reliability).

As part of some of the original frameworks that came in with the Agile Manifesto, a bunch of practices became part of Agile even though they were not “canon”, such as User Stories. These were said to be a few words on an index card, pinned to a noticeboard in the team office, just wordy enough to help you discuss a problem directly with your user. This of course eventually developed back into the verbose RUP Use Cases of yesteryear – but “agile, because they are in Jira” – and rules had to be created for the minimum amount of information needed to successfully deliver a feature. In the Toyota Production System, which inspired Scrum, Lean Software Development and Six Sigma (sadly, an antipattern), one of the key lessons is that the ideal batch size is 1, and more generally to make smaller changes. The explosion in size of the user story is symptomatic of the remaining problems in modern software development.

Current state of affairs

So what do we do?

As you can surmise if you read the previous paragraphs, we did not fix it for everybody; we still struggle to reliably make software.

The story and its size problems

The part of this blog post that will alienate the agile community is coming up: the units of work are too big. The prevailing assumption is that you cannot release something that is not a feature, and that something smaller than a feature has no value.

If you work next to a normal human user, and they say – to offer an example – “we keep accidentally clicking on this button, so we end up sending a message to the customer too early, we are actually just trying to get to this area here to double-check before sending”, you can collaboratively determine the correct behaviour, make it happen, release in one day, and it is a testable and demoable feature.

Unfortunately, requirements tend to be much bigger and less customer-facing. Like: department X wants to see the reasons for turning down customer requests in their BI tooling, and the resulting “product backlog item” becomes “service A and service B need to post messages on a message bus at various points in the user flow, identifying those reasons”.
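
As a hedged sketch of the kind of change hiding inside such an item – the event type and the IMessageBus abstraction are invented stand-ins for whatever bus client the services already use:

using System;
using System.Threading;
using System.Threading.Tasks;

// Invented event published whenever a customer request is declined,
// so BI can aggregate decline reasons downstream.
public record CustomerRequestDeclined(Guid RequestId, string Reason, DateTime DeclinedAtUtc);

// Stand-in for whatever message bus abstraction the services already use.
public interface IMessageBus
{
    Task PublishAsync<T>(T message, CancellationToken ct);
}

public class DeclineService
{
    private readonly IMessageBus _bus;
    public DeclineService(IMessageBus bus) => _bus = bus;

    // One of the "various points in the user flow" where the reason is emitted.
    public Task DeclineAsync(Guid requestId, string reason, CancellationToken ct)
        => _bus.PublishAsync(new CustomerRequestDeclined(requestId, reason, DateTime.UtcNow), ct);
}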

Iterating over and successfully releasing this style of feature to production is hard.

Years ago I saw Allen Holub speak at SD&D in London, and his approach to software development is very pure. It is both depressing and enlightening to read the flamewars that erupt in his mentions on Twitter when he explains how to successfully make and release small changes. People scream and shout that it is not possible to do it his way.

In the years since, I have come to realise that nothing is more important than making smaller units of work. We need to make smaller changes. Everything gets better if and when we succeed. It requires a mindset shift: a move away from big, detailed backlogs to smaller changes, discussed directly with the customer (in the XP sense, probably some other person in the business, or another development team). To combat the uncertainty, it is possible to mandate some kind of documentation update (graph? chart?) as part of the definition of done. Yes, needless documentation is waste, but if we need to keep a map of how the software is built, and as long as people actually consult it, it is useful. We don’t need any further artefacts of the story itself once the feature is live in production anyway.

How do we make smaller stories?

This is the challenge for our experts in agile software development. Teach us, be bothered, ignore the sighs of the developers that still do not understand, the ones raging in Allen Holub’s mentions. I promise, they will understand when they see it first-hand. Daily releases of bug-free code. They think people are lying to them when they hear us talk about it. When they experience it, though, they will love it.

When every day means a new story in production, you also get predictability. As soon as you are able to split incoming or proposed work into daily chunks, you also get the ability to forecast – roughly, better than most other forms of estimate – and since you deliver the most important new thing every day, you give the illusion of value back to those that pay your salary.