I am going to armchair quarterback the future of Windows. I have absolutely no insight into what Microsoft are doing, and no deeper knowledge of the internals of Windows than having read the odd Windows Internals book ages ago, as well as having followed some Microsoft adventures, such as the super deprived Windows images they tried to introduce as Docker base images for people to use.
What is wrong with Windows, you say? Apart from the new ream of bugs, the ads, the telemetry, the Copilot integration, the constant UI changes that still don’t solve the fundamental problems?
Well – I think the kernel architecture of Windows is sound. It was designed by the man behind VMS, and it is very flexible. Things like WSL1 were possible thanks to the pluggable subsystem architecture. That flexibility may carry a performance penalty compared to less heavily abstracted systems, but on the other hand it has its benefits in the situation we are in.
I think NTFS is sound in terms of what it does: it supports journaling, it has granular security, and it is arguably better than EXT4 in certain ways – except it really struggles with small files and big directories. The main problem with NTFS is that rumour says it is so legacy within the company that developers would rather create new filesystems than work on NTFS. This is where modern AI tools actually shine: they let you wrestle legacy code under control, refactor it and make it maintainable for new developers. If the future of computing in general – and not just on Linux – is inevitably moving towards big directories full of small files, a change has to be made, or Microsoft could choose to natively support EXT4 or ZFS.
The big problem Windows needs to solve is the terrible Windows API, as in the GDI bitmap nonsense and the Windows message queues.
They have tried so many times to make XAML a thing, and I guess I haven’t given up on that; it is, after all, fairly close to the macOS/NeXT Interface Builder, which clearly works despite being awful to use. Just make it compile into something closer to the metal instead of having it managed. Also, don’t drag OLE2 out of the mausoleum for this like you did with Windows 8 – just have an app model that starts a process with main() and then hooks into the UI stack. Sure, sandbox the apps, as the operating system now has a bunch of built-in security concepts that didn’t exist in DOS, which is the foundation the current Windows API stack was built on, but use some sane method that is compatible with how processes should work in an operating system not from 1980.
Basically we need to leave device context handles behind and get onto canvases. I don’t know what a good UI stack should look like, but fundamentally, security – i.e. multi-user access and remote control – as a first-class design concern, vector graphics, hardware acceleration and effective use of modern processors need to be high on the list of demands. Surely starting from scratch would allow you to better handle things like clipboards securely.
So – breaking compatibility, I say? Yes – and no. I think Microsoft should create a new subsystem for running apps on a completely separate UI stack, with security and performance as the key metrics during design. If you insist on compatibility, let the XAML used for the new stack be mostly compatible with whichever XAML dialect has the biggest number of running apps, and find a way to run old apps in a GDI32 emulator on the new desktop. Performance penalty? Yes, allow it. The new UI paradigm is the future, but if people choose to install the legacy subsystem, their old apps will still run. Security features and a well-defined upgrade path through Microsoft’s developer tooling should allow enterprise IT departments to force their software estates towards relying only on the new stack.
They should use their enormous network of influencers – MVPs and RDs – to bully people into upgrading their apps to the new native 64-bit UI stack, like Apple have done multiple times (classic Mac OS to Mac OS X, Motorola 68000 to PowerPC, PowerPC to Intel, Intel to Apple Silicon).
It would end the previously endless cycle of Microsoft attempting to shoehorn a new UI into old Windows, and would firmly hang a Sword of Damocles over all legacy Windows apps, creating a clear, sustainable path to the future.
If the technical design is clean enough, it will be possible to convert the desktop virtualisation giants to the new stack early on to gain strong market share, leaving you dependent only on individual app developers to keep up. And given that the Office suite is such a key player, binning the legacy UI and moving Office itself to the new stack would directly benefit adoption.
You will have seen property magnates strangle the property market by building data centres on spec (often poorly, as the demands of a data centre are quite different from those of an office building or a warehouse), and you will have seen years of future production of memory chips, storage chips and graphics processors already bought up by AI giants, effectively barring the average gaming enthusiast from upgrading their computer or getting into the hobby at all. Apple is only exempt because they too have bought their future capacity a few years ahead, but at some point, when these contracts renew or when China invades Taiwan, the lost capacity at TSMC will hit MacBook and iPhone prices as well. The same goes for gaming consoles, cars and the crippled computers handed out in bulk to the regular corporate drone.
For what, you may ask? For AI memes? For disinformation? For hallucinated information in corporate reports that steers companies in the wrong direction?
In defence of “artificial intelligence”
If you have read AI slop, it seems baffling that anyone would want to use it. Especially someone who writes things for a living – yet we see newspaper articles and web pages that contain “If you want I can rewrite this in a more enticing style…” remnants from ChatGPT that the author forgot to remove. I.e. people who get paid to write use AI to generate text. Some musicians use Suno to make backing tracks to practice alongside. Why? Is this not like turkeys voting for Christmas?
Not all writing is a Hemingwayesque six-month stay in Key West; sometimes you just need to correctly structure text based on 15 news bulletins you got from Bloomberg so that you eventually get paid at the end of the month. Give a model some rules about writing a news push article (for example: a first paragraph that states when, what and who, then progressively add details in descending order of importance so that the text can be liberally cut from the end if necessary) and have the piece produced in seconds. Or, in the musician’s case, rather than getting four mates in a room at the same time just so that you can bore them by practicing soloing in a particular mode, you can have the AI produce a 20-minute fake track with the right chords and not drive anyone insane except yourself.
With software construction it is even more attractive. All the code we write is supposed to not be creative. It is supposed to be familiar, predictable and in keeping with the style of all the rest of the code in the codebase. I.e. the repetition and theft is a feature.
The biggest problem in software development is that we are mere humans with human failings. Some business rule changes somewhere, we change the code, observe – hopefully using automated tests – that the change works, and then we are onto the next thing, failing to notice that the function name no longer describes what the code does, or even more commonly, that an old comment has now become a complete lie.
We had already solved some of these problems with automated refactoring tools that interpret the code and follow the call chain, so that you can rename a function and have every usage across the code base renamed with it, significantly lowering the threshold for keeping names relevant after code changes. There are also tools that let you automatically extract a piece of code out of a bigger function to reduce the size of functions. From our perspective – as not-vibe-coders – AI tools are just an extension of that. I can now ask a developer tool to refactor existing code into a certain pattern. A year ago that could have meant catastrophic corruption of the code, and although we had things like source control (effectively save points for programmers: you just turn time back to before the boss battle), compilers/linters and tests that limited how crazy things could get, it was still way too interesting. Recently the tools actually produce sane code if you prompt them correctly, even if we obviously still keep the guardrails in place.
The irony is that the code we have trained the models on was written by mortal humans, and the text you get back from the models sounds like talking to a very junior developer. “All tests run except […] which aren’t important” or “There is one broken test that is unrelated to our change” – highly amusing. It will also happily disable authentication if it becomes too “difficult” to deal with, which is super dangerous. Guardrails, rules and commands are very important, but at the end of the day you are responsible for the code your model produces.
Risks
I mentioned companies making decisions on hallucinated data before. That is a risk, but on the other hand – how long is the list of companies that went bankrupt because some clever soul accidentally replaced one formula in A Critical Excel Worksheet with the same value as a constant? Every tool is dangerous if used incorrectly.
Of course, completely vibe-coded applications that have never been analysed from a security perspective can have an unlimited array of vulnerabilities. It is fundamentally up to the tool makers to protect their users, which seems to be lacking for certain tools. Giving an AI agent full access to your own account and terminal, or your own email, is of course very dangerous.
The Tesla Full Self-Driving fallacy can happen with agents as well – you give an AI agent an administrative task that saves you a week of gruelling, boring, mundane work and everything is fine, so you just add more rights until there is a disaster; the same way that Tesla drivers, after a few successful uncomplicated drives on the motorway, start napping behind the wheel until they slam into a truck, best case.
That is the opposite of guard rails. I foresee that the same type of libraries available for querying databases whilst preventing SQL injection will come around for prompt creation to avoid prompt injection, but also that prompt injection will climb the charts of popular exploits.
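For reference, this is what the database-side version of that idea looks like – parameters mean user input is treated as data and never parsed as query text. A minimal sketch (connection string and table invented for the example):

using Microsoft.Data.SqlClient;

// Hypothetical connection string and table, purely for illustration.
var connectionString = "Server=localhost;Database=School;Integrated Security=true";
var userInput = "Robert'); DROP TABLE Students;--";   // classic hostile input

using var conn = new SqlConnection(connectionString);
conn.Open();

// The placeholder means the input can never be interpreted as SQL.
using var cmd = new SqlCommand("SELECT Name FROM Students WHERE Name = @name", conn);
cmd.Parameters.AddWithValue("@name", userInput);

using var reader = cmd.ExecuteReader();
while (reader.Read())
    Console.WriteLine(reader.GetString(0));

Whether prompts can ever be escaped this cleanly remains to be seen – a model has no formal grammar to hide behind the way SQL does.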
So what then?
Fundamentally, everyone is going to use AI – not because it is forced upon you by Microsoft, but because there will be an application that is useful to you. I have no idea if this is the end of white-collar work – it could be, but it could also just be yet another tool in the arsenal. The only thing I am fairly certain about is that we cannot turn back time, although of course future wars may force us back to the kind of electronics we can manufacture in the West, meaning 1980s tech at best, which would strike uniquely hard at the IT sector. The future is wide open.
Us old folks reminisce about the olden days when you would hit a button and the corresponding character would appear on the screen immediately. A lot of old custom-built ERP or POS software would be impenetrable to a new user, with incredibly cluttered text-based user interfaces, BUT there would be no latency. You cannot get that snappiness anymore, despite modern computers being several orders of magnitude faster.
Hardware
Back in the day, the keyboard was attached to a dedicated DIN port – most definitely not plug and play. The processor would be yanked out of whatever it was doing to tend to keyboard input whenever signals came in on the serial port. Today there is a Universal Serial Bus with a lot more distributed decision making and plug & play. Devices announce their presence, there is a whole ceremony to ensure that the correct drivers are in place, and the processor gets to deal with incoming traffic when it chooses to. Great for the overall smoothness of using the computer, but for keyboards it usually means latency. Gaming mouse and keyboard manufacturers work to reduce it, but fundamentally USB means latency.
Operating systems
Back in the day, the software would run on a DOS machine – an OS so lightweight it barely qualified as an operating system; as an application developer you mostly talked directly to the hardware. These days, your process waits to be told by the OS when a user has typed text in your app. That process not only sounds complicated, it IS complex. The operating system first has to make sure the correct thread is currently running, and then deliver the keystroke.
Applications
In the before times, the application would receive the keystroke, confer with its internal state (am I in a menu or am I in a text editor? Was this a special key, like a function key or similar?) and then immediately render the character on the screen if appropriate.
Today, all kinds of things can happen. Are you editing a Word document on SharePoint – or using Google Docs? In that case your keystrokes go to the cloud first. Bonus points: no save button, but also massive latency, an order of magnitude greater than that USB malarkey. Also, either app – sometimes even text boxes in the OS – will spell-check words when you press the space bar or stop typing for a bit.
As developers we are aware of IntelliSense, i.e. predictive text for developers. Yet another order of magnitude of latency, because the developer tool has to more or less recompile large parts of the app underneath your fingers. Even though the tool tries to be clever about doing as little work as possible, you can imagine how insanely much more work that is compared to writing a character to the screen in text mode.
A possibly redeeming factor is that while personal computers in the olden days had a single thread of execution, literally doing one thing at a time, in the modern world these distractions can happen literally in parallel with your typing, so the latency is not quite as bad as it could have been.
What to do?
I suggest you go ask the people most keenly interested in low latency, i.e. gamers. There will be tests online you can peruse before picking a keyboard. Your operating system may offer tweaks to prioritise UI responsiveness in its scheduling, and you can switch off interactive features. You can run Linux in text mode with tmux. Or you can just accept that the days of snappy UIs are over and let the computer go off and do its thing, like an otherwise faithful dog that only listens to a subset of commands.
I hail from the depths of the northern wastelands of Sweden, one of the other Laplands where Santa does not live. Despite living abroad for more than a decade, I attempt to keep tabs on the motherland, of course, since I vote and am of the age when – in the olden days – I would be writing unhinged letters to the editor in the local newspaper, which today means ranting on Facebook to innocent bystanders that probably have me muted.
Without getting into specifics of what has changed since I left, one of the weirdnesses about Sweden is that almost all your data is public. Imagine the phonebook, but with your income, the deed to your flat, including every extension ever made, everything available to search without restriction by anyone that is interested.
Traditionally this was used when the tabloids had problems creating content because there were no daily bombings to write about: they would run “the 10 richest people in YOUR BOROUGH, this is how they live”, and this information was just a couple of phone calls away – no whistleblower needed. FOIA on steroids.
Again, without getting into what has changed – the fact is that a growing number of people are seeking exemption from this public status, i.e. a protected identity (skyddad identitet), which means that all of your information has to be kept secret and you get a fake address maintained by the tax authority through which all your official physical mail is proxied. The concept of protected identity was created to protect battered women from being easily located by a violent ex, but today 50% of the people using the service are social services employees, police officers and others facing active threats to their lives because of their work. I’m not saying it was OK that the system was poorly designed before, but the number of impacted people has risen sharply, so what used to be a once-in-a-million thing for people to encounter – which explained some of the friction – has become a much broader phenomenon.
In a UK context: you know that slip you get from the council where they need to confirm you are correctly registered on the electoral roll? Imagine the checkbox that makes your data available to advertisers, but permanently ticked.
Unfortunately, in the UK, not ticking that box has consequences: many automated systems do not believe you exist, and you need alternative forms of identity verification – you carry your council tax and gas bills everywhere – whilst if you are on the public register a lot of things work relatively seamlessly. In Sweden, by contrast, the proportion of people not generally available in the tax authority’s ledger of all residents is still so small that nobody considers them at any point, meaning that somebody in the family having a protected identity has broad consequences in everyday life. It is impossible to pick up a prescription for your children at the pharmacy, because the system does not accept that you are related to them, and there are problems collecting parcels because of the way identities are validated – to name just a couple. That proxy address the government gives you only works for mail, not for parcels, which I guess makes sense: if the tax authority had to get into logistics as well, that might be a step too far, even for Sweden, even if they couldn’t possibly do a worse job than PostNord. But I digress.
I have written before about problems like these, where nobody involved in designing systems intended for the general public considers use cases beyond their own nose – because it is very difficult to accurately do so. But in this case I wonder if a radical redesign would be better, with privacy by default and clear consent to share one’s information – you know, like GDPR. We all had to do it in every other IT system; why should the public sector be exempt?
In Sweden there have been catastrophic failures of public procurement, where a new system for the state rail and road authority was unable to correctly protect state secrets, and similar problems at the public health insurance authority, which could not deal with the concept of information classification – a whole class of problems that could be solved with a radical redesign. This is one of the cases where I think breaking everything is worth it, because retrofitting privacy is nearly impossible, and any attempt at backwards compatibility is like trying to turn DOS into a multi-user operating system – there will be gaps everywhere, since the foundational design is inherently counter to what you are trying to achieve.
E-mail used to be a file. A file called something like /var/spool/mail/[username], and new email would appear as text appended to that file. The idea was that the system could send notifications to you by appending messages there, and you could send messages to other users by appending text to the files belonging to them, using the program mail.
Later on you could send email to other machines on the network by addressing it with a user name, an @ sign and the name of the computer. I am not 100% sure, and I am too lazy to look it up, but the way this communication happened was SMTP, the Simple Mail Transfer Protocol. You’ll notice that SMTP only lets you send mail (implicitly appending it to the file belonging to the user you are sending to).
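The protocol is charmingly simple – you can still speak it more or less by hand. A minimal sketch (assuming an old-school, non-TLS server listening on port 25, which your provider almost certainly no longer runs):

using System.Net.Sockets;

using var client = new TcpClient("mailhost.example.com", 25);
using var stream = client.GetStream();
using var reader = new StreamReader(stream);
using var writer = new StreamWriter(stream) { AutoFlush = true, NewLine = "\r\n" };

Console.WriteLine(reader.ReadLine());        // 220 greeting from the server
writer.WriteLine("HELO myhost");             // introduce yourself
Console.WriteLine(reader.ReadLine());        // 250 ok
writer.WriteLine("MAIL FROM:<[email protected]>");
Console.WriteLine(reader.ReadLine());
writer.WriteLine("RCPT TO:<[email protected]>");   // the user whose file gets appended to
Console.WriteLine(reader.ReadLine());
writer.WriteLine("DATA");
Console.WriteLine(reader.ReadLine());        // 354 go ahead
writer.WriteLine("Subject: hello");
writer.WriteLine("");                        // blank line separates headers from body
writer.WriteLine("You have mail.");
writer.WriteLine(".");                       // a lone dot ends the message
Console.WriteLine(reader.ReadLine());        // 250 accepted
writer.WriteLine("QUIT");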
Much later the Post Office Protocol was invented, so that you could fetch the email from your computer at work and download it to your Eudora email client on your Windows machine at home. It would just fetch the email from your file, optionally removing it from that file as it did so.
As Lotus and Microsoft built groupware solutions loosely on top of email, people wanted to access their email on the server rather than always downloading it, and to have the emails organised in folders, which led to the introduction of IMAP.
Why am I mentioning this? Well, if you are using a UNIX operating system you may still see the notification “You have mail” as you open a new terminal. It is not as exciting as you may think – it is probably a guy called cron that’s emailing you – but still, the mailbox is the void into which the system screams when it wants your help, so it would be nice to wire it into your mainstream email reader.
Because I am running Outlook to handle my personal email on my computer, I had to hack together a Python script that does this work. It seems that if I were using Thunderbird I could still have UNIX mail access, but… it’s not worth it.
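The mbox format is trivial to parse, which is why the script stays short: every message starts with a “From ” separator line, headers follow, and a blank line starts the body. The gist of it – sketched here in C# rather than my actual Python:

var mbox = $"/var/spool/mail/{Environment.UserName}";

if (File.Exists(mbox))
{
    foreach (var line in File.ReadLines(mbox))
    {
        if (line.StartsWith("From "))           // separator line: a new message begins
            Console.WriteLine("--- " + line);
        else if (line.StartsWith("Subject: "))  // good enough for cron's nagging
            Console.WriteLine(line);
    }
}

// The real work is then forwarding these on to wherever you actually read email.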
I have been banging on about the perils of the Great Rewrite in many previous posts. Huge risks regarding feature creep, lost requirements, hidden assumptions, spiralling cost, internal unhealthy competition where the new system can barely keep up with the evolving legacy system, et cetera.
In this post I will attempt to argue the opposite case. Why should you absolutely embark on a Great Rewrite? How do you skirt around the pitfalls? What rewards lie in front of you? I will start from my usual standpoint: what are invalid excuses for embarking on this type of journey?
When NOT to give up on gradual refactoring
If you analyse your software problems and notice that they are not technical in nature but political, there is no point in embarking on a massive adventure, because the dysfunction will not go away. No engineering problem is unsolvable, but political roadblocks can be completely immovable under certain circumstances. You cannot make two teams collaborate when the organisation is deeply invested in making that collaboration impossible. This is usually a P&L problem, where the hardest thing about a complex migration is the accounting and budgeting involved in having a key set of subject matter experts collaborate cross-functionally.
The most horrible instances of shadow IT or Frankenstein middleware have been created because the people that ought to do something were not available so some other people had to do something themselves.
Basically, if – regardless of the size of the work – you cannot get a piece of work funnelled through the IT department into production in an acceptable time, and the chief problem is the way the department operates, you cannot fix that by ripping out the code and starting over.
When TO give up on gradual refactoring
Impossible to enact change in a reasonable time frame.
Let us say you have an existing centralised datastore with several small systems integrating across it in undocumented ways, and your largest legacy systems are getting to the point where their libraries cannot be upgraded anymore. Every deployment is risky, performance characteristics are unpredictable with every change, and your business side – your customer in the lean sense – demands quicker adoption of new products. You literally cannot deliver what the business wants in a defensible time.
It may be better to start building a new system for the new products, and refactor the new system to bring older products across after a while. Yes, the risk of a race condition between the new and old teams is enormous, so ideally teams should own the business function in both the new and the old system, so that the developers pick up some accidental domain knowledge that is useful when migrating.
Radically changed requirements
Has the world changed drastically since the system was first created? Are you laden with legacy code that you would just like to throw away, except the way the code is structured you would first need to do a great refactor before you can throw bits away, but the test coverage is too low to do so safely?
One example of radically changed requirements: you started out as a small site catering only to a domestic audience, but then success happens and you need to deal with multiple languages and the dreaded concept of timezones. Some of the changes necessary can be of a magnitude where you are better off throwing away the old code rather than touching almost every area of it to use resource files instead of hard-coded text. This might be an example of paying off well-adjudicated technical debt: the time-to-market gain you made by not internationalising your application the first time round could have been the difference that made you a success, but still – now that choice is coming back to haunt you.
Pick a piece of functionality that you want to keep, and write a test around both the legacy and the new version to make sure you cover all requirements you have forgotten over the years (this is Very Hard to Do). Once you have correctly implemented this feature, bring it live and switch off this feature in the legacy system. Pick the next keeper feature and repeat the process, until nothing remains that you want to salvage from the old system and you can decommission the charred remains.
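A sketch of what such a test can look like – the same inputs run through both implementations, failing on any divergence. The pricer types and xUnit usage are invented stand-ins, not a prescription:

using Xunit;

public interface IPricer { decimal Price(int productId, int quantity); }

// Stand-ins: one wraps the old code path, the other is the rebuilt keeper feature.
public class LegacyPricer : IPricer { public decimal Price(int p, int q) => q * 9.99m; }
public class NewPricer : IPricer { public decimal Price(int p, int q) => q * 9.99m; }

public class PricingParityTests
{
    private readonly IPricer _legacy = new LegacyPricer();
    private readonly IPricer _replacement = new NewPricer();

    [Theory]
    [InlineData(1, 1)]
    [InlineData(42, 100)]   // ideally replayed production traffic, not invented cases
    public void New_implementation_matches_legacy(int productId, int quantity)
    {
        Assert.Equal(_legacy.Price(productId, quantity),
                     _replacement.Price(productId, quantity));
    }
}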
Pitfalls
Race condition
Basically, you have a team of developers implementing client onboarding in the new system – some internal developers and a couple of external boutique consultants from some firm near Old Street. They have meetings with the business, i.e. strategic sales and marketing are involved, and there is an external designer to make sure the visuals are top notch. Meanwhile, in the damp lower ground floor, the legacy team has Compliance in their ear about the changes that need to go live NOW, or else the business risks being in violation of some treaty that enters into force next week.
I.e. while the new system is slowly polished, made accessible, perhaps even a bit bikeshedded as too many senior stakeholders get involved, the requirements for the actual behind-the-scenes criteria are rapidly changing. To the team doing the rework it seems the goalposts never stop moving – and most of the time they are never told, because Compliance “already told IT”, i.e. the legacy team.
What is the best way to avoid this? Well, if legacy functionality sees high churn, move it out into a “neutral venue” – a separate service that can be accessed from both the new and old systems – and remove the legacy remains to avoid confusion. Once the legacy system is fully decommissioned you can take a view on whether to absorb these halfway houses or leave them as they are factored. The important thing is that key functionality exists in only one location at all times.
Stall
A brave head of engineering sets out to implement a new, modern web front-end, replacing a server-rendered website that communicates via SOAP with a legacy backend where all business logic lives. Some APIs have to be created to do processing that the legacy website did on its own before or after calling into the service. On top of that, a strangler fig pattern is implemented around the calls to the legacy monolith, primarily to isolate the use of SOAP away from the new code, but also to bypass some calls deemed not worth the round trip over SOAP. Unfortunately, after the new website is live and complete, the strangler fig has not actually strangled the backend service: a desktop client app is still talking SOAP directly to it, with no intention of ever caring about or even acknowledging the strangler fig. Progress ceases and you are stuck with a half-finished API that in some cases implements the same features as the backend service, but in most cases just acts as a wrapper around SOAP. Some features live in two places, and nobody is happy.
How to avoid it? Well, things may happen that prevent you from completing a long-term plan, but ideally, if you intend to strangle a service, make sure all stakeholders are bought into the plan. This can be complex if the legacy platform being strangled is managed by another organisation, e.g. an outsourcing partner.
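For what it is worth, the fig itself is not complicated – it is an ordinary facade that every consumer must agree to go through. A sketch with invented types:

public record Quote(int ProductId, decimal Premium);

public interface IQuoteService { Quote GetQuote(int productId); }

// Migrated products are served by new code; the rest still go over SOAP.
// The pattern only works if *every* consumer calls this facade – including
// that desktop client nobody told you about.
public class StranglerQuoteService : IQuoteService
{
    private readonly IQuoteService _legacySoap;  // wrapper around the generated SOAP proxy
    private readonly IQuoteService _rebuilt;     // the new implementation, growing over time
    private readonly HashSet<int> _migrated;     // which products have crossed over

    public StranglerQuoteService(IQuoteService legacySoap, IQuoteService rebuilt,
                                 HashSet<int> migrated)
        => (_legacySoap, _rebuilt, _migrated) = (legacySoap, rebuilt, migrated);

    public Quote GetQuote(int productId) =>
        _migrated.Contains(productId)
            ? _rebuilt.GetQuote(productId)
            : _legacySoap.GetQuote(productId);
}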
Reflux
Let’s say you have a monolithic storage, the One Database. Over the years, BI and financial ops have gotten used to querying directly into the One Database to produce reports. Since the application teams are never told about this work, the reports are often broken, but they persevere and keep maintaining the reports anyway. The big issue for engineering is the host of “batch jobs”, i.e. small programs run from a home-built task scheduler from 2001 that does some rudimentary form of logging directly into a SchedulerLogs database. Nobody knows what these various programs do, or which tables in the One Database they touch, just that the jobs are Important. The source code for these small executables exists somewhere, probably… most likely in the old CVS install on a snapshot of a Windows Server 2008 VM that is an absolute pain to start up, but there is a batch file from 2016 that does the whole thing; it usually works.
Now, a new system is created. Finally, the data structure in the New Storage is fit for purpose; new and old products can be maintained and manipulated correctly because there are no secret dependencies. An entity relationship that was stuck as 1:1 due to an old, bad design that had never been possible to rectify – as it would break the reconciliation batch job that nobody wants to touch – can finally be put right, and years’ worth of poor data quality can finally be addressed.
Then fin ops and BI write an angry email to the CFO that the main product no longer reports data to their models, and how can life be this way, and there is a crisis meeting amongst the C-level execs and an edict is brought down to the floor, and the head of engineering gets told off for threatening to obstruct the fiduciary duties of the company, and is told to immediately make sure data is populated in the proper tables… Basically, automatically sync the new data to the old One Database to make sure that the legacy Qlik reports show the correct data, which also means that some of the new data structures have to be dismantled as they cannot be meaningfully mapped back to the legacy database.
How do you avoid this? Well, loads of things were wrong in this scenario, but my hobby-horse is abstractions: make sure any reports pointing directly into an operational database stop doing that. Ideally you should have a data platform for all reporting data, where people can subscribe to published datasets – i.e. you get contracts between producer and consumer of data, so the dependencies are explicit and can be enforced. But at minimum, have some views or temporary tables that define the data used by the people making the report. That way they can ask you to add certain columns, and as a developer you know your responsibility is to not break those views at any cost, while staying free to refactor underneath and keep the operational data model fit for purpose.
Conclusion
You can successfully execute a great rewrite, but unless you are in a situation where the company has made a great pivot and large swathes of the features in the legacy system can simply be deleted, you will always contend with legacy data and legacy features, so fundamentally it is crucial to avoid at least the pitfalls listed above (add more in the comments, and I’ll add them and pretend they were there all along). Things like how reporting will work must be sorted out ahead of time. There will be lack of understanding, shock and dismay, because what we see as hard coupling and poor cohesion, some folks will see as a single pane of glass – so some people will think it is ludicrous not to use the existing database structure forever. All the data is there already?!
Once there is a strategy and a plan in place for how the work will take place, the organisation will have to be told that, although they may not have thought we were moving quickly before, our response times on new features will actually worsen for a significant period, as we dedicate considerable resources to a major platform upgrade that will leave us more flexible and easier to change.
Then the main task is to only move forward at pace, and to atomically go feature by feature into the new world, removing legacy as you go, and use enough resources to keep the momentum going. Best of luck!
I know in the labour market of today, suggesting someone pick up programming from scratch is akin to suggesting someone dedicate their life to being a cooper. Sure, in very specific places, that is a very sought after skill that will earn you a good living, but compared to its heyday the labour market has shrunk considerably.
Getting into the biz
How do people get into this business? As with I suspect most things, there has to be a lot of initial positive reinforcement. Like – you do not get to be a great athlete without several thousands of hours of effort rain or shine, whether you enjoy it or not – but the reason some people with “talent” end up succeeding is that they have enough early success to catch the “bug” and stick at it when things inevitably get difficult and sacrifices have to be made.
I think the same applies here, but beyond e-sports and streamer fame, it has always been more of an internal motivation, the feeling of “I’m a genius!” when you acquire new knowledge and see things working. It used to help to have literally nothing else going on in life that was more rewarding, because just like that fleeting sensation of understanding the very fibre of the universe, there is also the catastrophic feeling of being a fraud and the worst person in the world once you stumble upon something beyond your understanding, so if you had anything else to occupy yourself with, the temptation to just chuck it in must be incredibly strong.
Until recently, software development was seen as a fairly secure career choice, so people had a financial motivator to get into it – but still, anecdotally it seems many people got into software development by accident. They had to edit a web page and discovered JavaScript and PHP, or had to do programming as part of some lab at university and quite enjoyed it, etc. Some were trying to become real engineers but had to settle for software development; some were actuaries in insurance and ended up programming Python for a living.
I worry that as the economic prospects of getting into the industry as a junior developer are eaten up by AI budgets, we will see a drop-off of those who accidentally end up in software development, and we will be left with only the ones with what we could kindly call a “calling” – or what I would call “no other marketable skills”, like back in my day.
Dwindling power of coercion
Microsoft of course is the enemy of any right-thinking 1337 h4xx0r, but for quite a while, if you wanted a Good Job, learning .NET and working for a large corporation on a Lenovo ThinkPad was the IT equivalent of working at a factory in the 1960s. Not super joyous, but a Good Job. You learned .NET 4.5 and you pretended to like it. WCF, BizTalk and all. The economic power was unrelenting.
Then the crazy web 2.0 era happened and the cool kids were using Ruby on Rails. If you wanted to start using Ruby, it was super easy. It was like back in my day, but instead of typing ABC80 BASIC – see below – they used the read-eval-print loop in Ruby. A super friendly way of feeling like a genius while gradually increasing the level of difficulty.
10 PRINT "BAD WORD ";
20 GOTO 10
Meanwhile, legacy Java and C# were very verbose: you had to explain things like static, class, void and import, not to mention braces and semicolons etc., before people could create a loop of a bad word filling the terminal.
People would rather learn PHP or Ruby, because they saw no value in those old, stodgy languages.
Oracle were too busy being in court suing people to notice, but on the JVM there were other attempts at creating something less verbose – Scala and eventually Kotlin happened.
Eventually Microsoft noticed what was going on, and as the cool kids jumped ship from Ruby onto NodeJS, Microsoft were determined not to miss the boat this time, so they threw away the .NET Framework – or “threw away” as much as Microsoft have ever broken with legacy, i.e. still fairly backward compatible – and started from scratch with .NET Core and a renewed focus on performance and lowered barriers to entry.
The pressure really came as data science folks rediscovered Python. It too has a super low barrier to entry, except there is a pipeline into data science, and Microsoft really failed to break into that market due to the continuous mismanagement of F# – except they attacked it from the Azure side and got the money that way, despite people writing Python.
Their new ASP.NET Core web stack “borrowed” concepts like minimal APIs from Sinatra and Nancy, and they introduced top-level statements to let people immediately get the satisfaction of creating a script that loops and emits rude words in a line or two of code:
while (true) Console.Write("Rude word ");
But still, the canonical way of writing this code was to install Visual Studio and create a New Project – Console App – and when you save that to disk you have a whole bunch of extra nonsense there (a csproj file, a bunch of editor metadata that you do not want to have to explain to a n00b, et cetera), which is not beginner-friendly enough.
This past Wednesday, Microsoft introduced .NET 10 and Visual Studio 2026. With it, they have introduced file-based apps, where you can write one file that can reference NuGet packages or other C# projects, import namespaces and declare build-time variables inline. It seems like an evolution of scriptcs, but slightly more complete. You can now give people a link to the SDK installer and then give them this to put in a file called file.cs:
#!/usr/bin/env dotnet run
while (true)
Console.Write("rude word ");
Then, like in most programming tutorials out there, you can tell them to do chmod +x file.cs if they are running a unix-like OS (no sudo needed – it’s their own file). In that case, the final step is ./file.cs and the rude word will fill the screen…
If you are safely on Windows, or if you don’t feel comfortable with chmod, you can just type dotnet run file.cs and see the screen fill with creativity.
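And because a file-based app can pull in NuGet packages with a directive, the step from toy loop to something useful is short. If I have read the release notes correctly, it looks something like this (Humanizer chosen arbitrarily as the example package):

#!/usr/bin/env dotnet run
#:package [email protected]

using Humanizer;

// One file, one package reference, no csproj in sight.
Console.WriteLine("rude word".Pluralize());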
Conclusion
Is the bar low enough?
Well, if they are competing with PHP – yes, you can give half a page of instructions and get people going with C#, which is roughly what it takes to get going with any other language on Linux or Mac, and definitely easier than setting up PHP. The difficulty with C# – and with Python as well – is that they are old. Googling will give you C# constructs from ages ago that may not translate well to a file-based project world. Googling for help with Python will give you a mix of Python 2 and Python 3, and with Python it is really hard to know what is a pip thing and what is an elaborate hoax, given the naming standards. The conclusion is therefore that dotnet is now in the same ballpark as the others in terms of complexity, but it depends on what resources remain available. Python has a gigantic world of “how to get started from 0” out there, whilst C# has a legacy of really bad code from the ASP.NET WebForms days. Microsoft have historically been excellent at providing documentation, so we shall see if their MVP/RD network floods the market with intro pages.
At the same time, Microsoft is going through yet another upheaval, with Windows 10 going out of support and Microsoft tightening the noose around needing a Microsoft Account to run Windows 11, while Steam have announced the Steam Machine, which runs Windows software on Linux – meaning people will have less forced exposure to Windows even to game, whilst Google own the school market. Microsoft will still have corporate environments locked to Windows for a while longer, but they are far from the situation they used to be in.
I don’t know if C# is now easy enough to adopt that people who are curious about learning programming would install it over anything else on their Mac or Linux box.
High or low bar, should people even learn to code?
Yes, some people are going to have to learn programming in the future. AGI is not happening, and new models can only train on what is out there. Today’s generative AI can do loads of things, but in order to develop the skills needed to leverage it responsibly, you need to be familiar with all the baggage underneath, or else you risk releasing software that is incredibly insecure or that will destroy customer data. As Bjarne Stroustrup said: “C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do it blows your whole leg off” – this can apply to AI-generated code as well.
In popular parlance you have two categories of code: your own, freshly written code, which is the best code, code that never will be problematic – and then there is legacy code, which is someone else’s code, untested, undocumented and awful. Code gradually goes from good to legacy in some ways that appear mystical, and in the end you change jobs or they bring in new guys to do a Great Rewrite with mixed results.
So, to paraphrase Baldrick from Blackadder Goes Forth: “The way I see it, these days there’s a [legacy code mess], right? And, ages ago, there wasn’t a [legacy code mess], right? So, there must have been a moment when there not being a [legacy code mess] went away, right? And there being a [legacy code mess] came along. So, what I want to know is: how did we get from the one state of affairs to the other state of affairs?”
The hungry ostrich
Why does code start to deteriorate? What precipitates the degradation that eventually leads to terminal decline? What is the first bubble of rust appearing by the wheel arches? This is hard to state generally, but the causes I have personally seen over the years boil down to being prevented from making changes in a defensible amount of time.
Coupling via schema – explicit or not
E.g. it could be that you have another system accessing your storage directly. It doesn’t matter whether you are using schemaless storage or not: as long as two different codebases need to make sense of the same data, you have a schema whether you admit it or not, and at some point those systems will need to coordinate their changes to avoid breaking functionality.
Fundamentally, as soon as you start going “nah, I won’t remove/rename/change the type of that old column because I have no idea who still uses it”, you are in trouble. Each storage must have one service in front of it that owns it, so that it can safely manage schema migrations, and anyone wanting to access that data needs to use a well-defined API to do so. The service maintainers can then be held responsible for maintaining this API in perpetuity – easily, since the dependency is explicit and documented. If another service just queried the storage directly, the maintainer would be completely unaware (yes, this goes for BI teams as well).
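As a sketch, the contract can be as small as this: a minimal API endpoint in front of the storage, where the response shape is the promise and everything behind it stays private (names invented):

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// Pretend storage – in reality the tables/documents we remain free to restructure.
var orders = new Dictionary<int, (string Status, decimal Total)>
{
    [42] = ("Shipped", 99.50m)
};

// The explicit, documented dependency: consumers get this shape and nothing else.
app.MapGet("/api/orders/{id:int}", (int id) =>
    orders.TryGetValue(id, out var o)
        ? Results.Ok(new { id, status = o.Status, total = o.Total })
        : Results.NotFound());

app.Run();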
Barnacles settling
If every feature request leads to functions and classes growing as new code is added like barnacles, without regular refactoring to more effective patterns, the code gradually gets harder to change. This is commonly a side-effect of high turnover or outsourcing: the developers do not feel empowered to make structural changes, or perhaps have not had enough time to get acquainted with the architecture as it was once intended. Make sure that whoever maintains your legacy code is fully aware of their responsibility to refactor as they go along.
Test after
When interviewing engineers it is very common to hear that they “practice TDD, but…”, meaning they test after. At least for me, the difference in test quality is obvious between writing the tests first and getting into the zone, writing the feature first and then trying to retrofit tests afterwards. Hint: there is usually a lot less mocking if you test first. As the tests get more complex, adding new code to a class under test gets harder, and if the developer does not feel empowered to refactor first, the tests are likely not to cover the added functionality properly – so perhaps a complex integration test is modified to validate the new code, or maybe the change is just tested manually…
Failure to accept Conway’s law
The reason people got hyped about microservices was the idea that you could deploy individual features independently of the rest of the organisation and the rest of the code. This is lovely, as long as you do it right. You can also go too granular, but in my experience that rarely happens. The problem that does happen is that separate teams have interests in the same code and modify the same bits, and releases can’t go out without a lot of coordination. If you also have poor automated test coverage, you get a manual verification burden that further slows down releases. At your earliest convenience you must spend time restructuring your code – or at least the ownership of it – so that teams fully own all aspects of the thing they are responsible for and can release code independently, with any remaining cross-team dependencies made explicit and automatically verifiable.
Casual attitude towards breaking changes
If you have a monolith that is providing core features to your estate, and you have a publicly accessible API Operation, assume it is being used by somebody. Basically, if you must change its required parameters or its output, create a new versioned endpoint or one by a different name. Does this make things less messy? No, but at least you don’t break a consumer you don’t know about. Tech leads will hope that you message around to try and identify who uses it and coordinate a good outcome, but historically that seems too much to ask. We are only human after all.
Until you have PACT tests for everything, and solid coverage, never break a public method.
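In practice that can be as blunt as keeping the old route frozen next to the new one (shapes invented for the example):

var app = WebApplication.CreateBuilder(args).Build();

// v1 is a contract with unknown consumers: frozen forever, warts and all.
app.MapGet("/api/v1/customers/{id:int}", (int id) =>
    Results.Ok(new { id, name = "Acme Ltd" }));

// The breaking change lives in v2; v1 callers never notice.
app.MapGet("/api/v2/customers/{id:int}", (int id) =>
    Results.Ok(new { id, legalName = "Acme Ltd", tradingName = "Acme" }));

app.Run();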
Outside of support horizon
Initially it does not seem that bad to be stuck on a slightly unsupported version of a library, but as time moves on, all of a sudden you are stuck for a week with a zero-day vulnerability that you can’t patch, because three other libraries are out of date and contain breaking changes. It is much better to be ready to make changes as you go along. One breaking change at a time usually leaves you options, but when you are already exposed to a potential security breach, you have to make bad decisions due to lack of time.
Complex releases
Finally, it is worth mentioning that you want to avoid manual steps in your releases. Today there is really no excuse for making a release more complex than one button click. Ideally, abstract away configuration so that there is no file.prod.config template separate from file.uat.config, or else that prod template file is almost guaranteed to break the release – much like the grille was the only thing rusting on the Rover 400, the one part of the car that wasn’t Honda.
Stopping Princip
So how do we avoid the decline, the rot? As with shifting quality and security left, it is much cheaper to address these problems the earlier you spot them, so if you find yourself in any of the situations above, address them with haste.
Avoid engaging “maintenance developers”; their remit may explicitly mean they cannot do major refactoring even when necessary.
Keep assigning resources to keep dependencies updated, and use SCA/dependency scanning to validate that your dependencies are not vulnerable.
Disallow and remove integration-by-database at all costs. This is hard to fix, but worth it. This alone solves 90% of the niggling small problems you keep having, as you can fix your data structure to fit your current problems rather than the ones you had 15 years ago. If you cannot create a true data platform for reporting data, at least define agreed views/indexes that can act as an interface for external consumers. That way you have a layer of abstraction between external consumers and yourself, and you stay free to refactor as long as you make sure the views still work.
Make dependencies explicit. Ideally PACT tests, but if not that, at least integration tests. This way you avoid shared integration environments where teams are shocked to discover that the changes they have been working on for two weeks break some other piece of software they didn’t know existed.
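Short of adopting a full contract-testing tool, even a hand-rolled consumer test makes the dependency visible: the consumer writes down the exact fields it reads, and the provider runs the test in its build. A sketch with an invented endpoint and fields:

using System.Net.Http;
using System.Text.Json;
using Xunit;

public class OrdersContractTests
{
    [Fact]
    public async Task Order_response_still_carries_the_fields_this_consumer_reads()
    {
        using var client = new HttpClient { BaseAddress = new Uri("http://localhost:5000") };
        var json = await client.GetStringAsync("/api/orders/42");

        using var doc = JsonDocument.Parse(json);
        // The whole contract: this consumer reads these two fields and nothing else.
        Assert.True(doc.RootElement.TryGetProperty("status", out _));
        Assert.True(doc.RootElement.TryGetProperty("total", out _));
    }
}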
If you tried to do anything online today you may have had more problems than usual. All kinds of services were failing, because some storage at AWS on the eastern seaboard was having problems.
Now, there are plenty of people that love to point out that the cloud has a lot of all-eggs-in-one-basket where one service being unreliable can knock out an insane percentage of the infrastructure of the internet, and they say we should go back to having our own servers in our own basement.
There is a lot of valid maths behind that kind of stance, as renting a big enough chunk of cloud infrastructure is incredibly expensive compared to buying some really hot computers of your own. Now, I remember back when installing a server meant an HP ProLiant 1U server would show up at your desk: you’d plug it in, annoy everyone else in the office with the fan noise, and stick some software on it. But of course that’s not the time people want to go back to – people want to go back to giant VMware clusters where you could provision a new VM conveniently from your desk. Except, of course, storage was always ridiculously expensive, with enterprise SAN storage costing per GB what 10 TB cost on the street.
Where did cloud come from?
Why did we end up where we are? Well, AWS offered people a chance to provision apps on virtual hardware without buying a bunch of servers first. This is an advantage cloud still has to this day. You can just get started, gauge how much interest there is, what amount of hardware makes sense and what the costs look like – and then you can possibly decide to bring it all home to your basement. Of course, cloud providers will try to entice you with database systems and queueing systems that are vendor-specific to prevent you from moving your apps home, but it is not insurmountable.
Also, I remember a time when companies would have server rooms in their offices where they stashed their electronic equipment, hopefully – but not mandatorily – arranging for improved cooling and redundant power. After a while people realised it would make more sense to rent space in a colocated datacentre, where your servers can socialise with other servers, all managed by a hosting partner that provides a certain level of physical security, climate control and fire suppression. At that point, though, you are probably leasing your servers, leasing your rackspace and paying fees for the privilege. Are you sure you are saving an enormous amount of money this way versus running cloud-native apps in the cloud?
If your product is something like an email provider, you will probably have network and storage needs on a scale that merits building your own datacentre, still reducing cost versus cloud hosting. But – and this may be hard for some leaders to accept – your company’s product is probably not Gmail. It is worth making the calculation, though.
Why is US East 1 having problems enough to break half the internet?
So, yes, having multiple active copies of your infrastructure up and running globally is expensive, but the main reason businesses keep building their infrastructure in US East 1 is that there are very complex problems with consistency and availability as soon as you have multiple replicas out there being updated simultaneously. If there is any way to just have one database instance, you do that – and a lot of American businesses prefer to keep their code in Virginia, or something. OR maybe it’s because US East 1 is the default region. This is not an inherent property of cloud apps: you are free to have your single copy of your infrastructure in another region, or – heck – have a cold failover that you can spin up in a second region.
“I hate sitting around, I want to never experience this again”
I hear you – you are looking for solutions, I like it.
Multi Cloud – no
Grifters are going to say “Multi cloud! They can’t all be down at the same time!”, and… sure, but I have yet to see a good multi-cloud setup. There is no true cross-platform IaC, so you’ll have to write a whole bunch of duplicate infra and pay for it to sit around waiting for the other cloud to go down – or, if you run active-active, you’ll pay egress and ingress to synchronise data across worlds and get a whole new class of problems with consistency and latency.
No – this is a bad option. You are spending loads of money on a solution that you cannot even fully use, since you are limited to the lowest common denominator.
Bring it in house – meh, maybe
If you are going to bring your software home… take the numbers for a spin again, because I doubt they will make sense.
If you are going to do it, do it properly, i.e. use the tools that didn’t exist back when we built stuff for on-premise. Use containers and ephemeral compute instances. Unfortunately, if you don’t have enough money to lease rack space in multiple datacentres you still have a single point of failure, and if you do have enough money for that, you will have the synchronisation problem again – the hard engineering really doesn’t go away. Again, make sure that the contracts for your data centres, plus the additional cost of hiring people to manage the on-prem apps you will need to replace that fancy managed infrastructure your cloud provider offered (yes, you now need to hire a couple of ZooKeeper and Kafka admins), don’t exceed the cloud cost – or at least that your expected uptime is better than what your cloud provider is offering.
Do nothing – my favourite option
Well… did you get away with the outage? Did you lose less money than it would cost to take decisive action? How many times can the cloud fall over before action is worth it? Sure, some IT security experts say that when China goes to war with Taiwan, the cyber attack that strikes the US will probably take out the large cloud providers, since that seems to be so effective at crippling infrastructure – but do you think that is likely? Will it hurt your business specifically?
If you can get away with telling your users to “email you tomorrow when the cloud is back up”, or words to that effect, you should probably take advantage of that and not spend more money than you need. On the other hand, if you need 100% uptime – as in no nines, 100% – there is an IBM mainframe that offers that, and you can configure it to behave like an insane number of Linux machines all in one trench coat, so you can run your existing apps on it, kind of.
Presumably, your system’s needs are somewhere on that continuum between “that’s OK, we’ll try again tomorrow” and “100% or else”. I cannot make blanket guarantees, but if you chat to the business, they will probably have very specific ideas of what is acceptable and unacceptable downtime, and if you agree with them about that, you are – I am guessing – going to be surprised at how OK people will be with staying in the cloud and taking your chances, as long as there is some observability and feedback.
I used to spend time with automotively inclined gentlemen. There were two distinct schools of the car hobby at that time. Finbilsmek (“fancy-car mechanics”), i.e. renovating a classic car or preparing a race car – sure, it eats all your money in parts, but you get to listen to music and carefully admire your new components as you fit them to your clean project car, unless it’s the weekend before the race and the stress level is high. The other school is bruksbilsmek (“daily-driver mechanics”), i.e. fixing your daily driver. It is the night before the MOT, it’s by the side of the road, it’s with a subset of your tools in a Halfords car park. Only if you are lucky does it take place in your garage, on a car lift – and even if you are that lucky, salt and grime are constantly falling in your face, and if you fail to sort the problem it will have a massive impact on your daily life.
A similar thing exists in IT. If you are tinkering with your computer at home you have time to google bits, listen to music, type random stuff and see if it works. Worst case you just wipe it and start over. It’s enjoyable to install some weird hardware or software and try to get it going.
However, if your work laptop starts having problems, or a thing that you need to sort out for work is broken, the enjoyment goes away and there is only rage. Therefore, at least at my age, I wouldn’t build a computer for work, nor do I have any wish to maintain the operating system or mess with networking or access rights – there are pros that do that stuff and keep abreast of all the bulletins about which security holes have been patched. I happily let them worry about it; I just accept their vetted upgrades and make sure I restart when I’m asked to.
Baby-proofing a laptop
This is why I’m not, in principle, against working in a baby-proofed environment, i.e. one where you as a developer do not have true admin rights, no access to customer data and no direct access to production. I would love that – as long as it still meant I could install everything I need to work, test all my logic locally (code, deployment, monitoring – all of it) and all my developer tools kept working. Having the networking, the patching of servers, the provisioning of resources and the testing of patches magically taken care of by someone else is very nice indeed, allowing me to focus on delivering trustworthy code – which I’m sure sounds super boring to others.
Unfortunately, achieving such a baby-proofed environment requires a lot of engineering. We would like to be in a situation where a company onboards someone and, without any manual intervention whatsoever, they get a user account provisioned with all the correct group memberships and access, and after plugging in the laptop, setting up MFA, locking the screen and going for coffee, all necessary apps will be installed onto the laptop, ready for immediate productivity. That would require a lot of cooperation between HR, ops, dev and procurement, plus enough resources to implement and test all aspects of this – and everyone involved would already have a day job, so this would be extra.
Root of all evil
The biggest technical obstacle that makes developers special is that developers use software that needs to attach a debugger to a process, and to open ports, i.e. listen for incoming traffic/requests – which is what a web app does. An operating system thinks these are dangerous things. Generally you get to listen on some high-numbered ports, but “well-known” ports require admin access, i.e. you can’t open ports 80 and 443 without it, because it would be dangerous if some random code tried to play web server. Attaching a debugger is even more dangerous: you literally have access to all of the process’s memory. You could read any secrets you wanted out of there, so – yeah – not something you get to do without admin access. Opening high-numbered ports was never the problem, but in some cases you still needed to attach a debugger to IIS, which required admin access.
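A minimal sketch of the port half of this, assuming a unix-like system (on Windows the failure mode differs and the error may surface as a generic OSError):

```python
# Binding below port 1024 requires elevation on most unix-like systems;
# a high-numbered port works for any user.
import socket

def try_bind(port: int) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        sock.bind(("127.0.0.1", port))
        print(f"port {port}: bound OK")
    except PermissionError:
        print(f"port {port}: permission denied - this is the admin wall")
    finally:
        sock.close()

try_bind(8080)  # high port: fine as a normal user
try_bind(80)    # well-known port: typically needs root/admin
```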
On unix-like operating systems, which were multi-user aware from the beginning, there has been a culture of creating your own user for day-to-day work and keeping an admin account called root that you only use for things the operating system thinks are serious, like writing to the /etc directory or running programs in /sbin. Later the concept of sudo arrived, where you give accounts the ability to temporarily acquire root privileges after typing in their own password again, meaning you can delegate the right to install software without permanently giving the user elevated rights or handing out the root password. The need to re-type the password also makes it harder to abuse by trickery, though it is by no means bulletproof.
Windows came from DOS, a single-user operating system. Although the Windows NT kernel has a decent security design, the culture among Windows users was generally that you just put yourself in the Administrators group when you set up your computer, and you were “root” and life was easy. The lax security culture meant that many apps simply could not function if the user was not part of the Administrators group, so there was evidently no practical adoption of healthy practices. Windows machines were extremely susceptible to malware, and as popular as Windows XP was, something had to be done. When Windows Vista came, the most hated new feature was User Account Control (UAC), a new layer of obstinacy on top of Windows security: the operating system threw a popup in your face when you did something risky – like opening any port at any number, or writing files to suspicious folders, such as editing C:\Windows\System32\drivers\etc\hosts, the Windows version of /etc/hosts.
People hated UAC, and the first thing people did after installing was to add themselves to Administrators and switch off UAC. But you couldn’t argue with the results: the spread of malware slowed down quite dramatically. Effectively, UAC was a bolt-on sudo copy that just made you click on something to confirm. If you didn’t have the access rights yourself, it would ask you to type in credentials that did have the power to approve the action. This meant that corporations started giving you a separate admin account that only worked on your machine but gave you enough rights to open ports or install programs – an analogue to sudo, but more cumbersome.
Windows 7 made UAC back off a bit to increase adoption, and the results continued to be impressive. However – although Microsoft built a simple web server for development, IIS Express, that didn’t require administrative access when debugging – UAC would still sometimes ask you for approval to start things like an Android emulator, an Azure Storage emulator or even an Azure Functions host, thus still requiring users to have some way of elevating, i.e. typing in admin credentials just to do work. This has to be addressed if we are to move into the glorious future where developers are fully embedded in a padded cell where we can do no harm.
Forbidden knowledge
At Netflix, among other places, they devised a way to provide an ether of configuration that apps can just absorb: the app announces who it is and receives its configuration. You remove the problem of needing to know how the production environment is set up – you just ask for things and they are provided. That way apps can be secured and configured without any knowledge of the production environment leaking out to developers.
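A minimal sketch of the announce-and-absorb idea – the config-service URL and route below are hypothetical stand-ins, not Netflix’s actual tooling:

```python
# The app states its identity; the platform decides what it gets back.
# SERVICE_NAME, CONFIG_SERVICE_URL and the /config/<name> route are all
# hypothetical - real equivalents would be a config server or a sidecar.
import json
import os
import urllib.request

SERVICE_NAME = os.environ.get("SERVICE_NAME", "billing-api")
CONFIG_SERVICE_URL = os.environ.get("CONFIG_SERVICE_URL", "http://localhost:8500")

def fetch_config(service: str) -> dict:
    with urllib.request.urlopen(f"{CONFIG_SERVICE_URL}/config/{service}") as resp:
        return json.load(resp)

# The app never learns how production is wired - it only sees the values
# it was handed, e.g. config["queue_connection_string"].
config = fetch_config(SERVICE_NAME)
```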
Containerisation lets us effectively ship a little egg of code into production, with a defined contract of what the application needs from the outside world. Combine this with a sidecar as above that handles communication between services, and you achieve the perfect state of developers being safely prevented from knowing anything concrete about how the production environment is configured, yet being able to deliver tested apps into production.
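What that “defined contract” can look like in practice – a sketch where the app declares exactly which settings it expects the platform to inject, and refuses to start otherwise (the variable names are hypothetical):

```python
# Fail fast if the platform didn't provide the declared contract.
import os
import sys

REQUIRED = ["QUEUE_URL", "DB_CONNECTION_STRING", "LISTEN_PORT"]

def load_contract() -> dict:
    missing = [name for name in REQUIRED if name not in os.environ]
    if missing:
        # Better to refuse to start than to limp along half-configured.
        sys.exit(f"missing required configuration: {', '.join(missing)}")
    return {name: os.environ[name] for name in REQUIRED}

settings = load_contract()
```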
The biggest obstacle here is leaky abstractions. Dapr, for instance, promises to abstract away how things like message queues work, but it doesn’t, not fully: you cannot locally test something against Redis (as a message broker) or RabbitMQ and assume it will behave the same on Azure Service Bus in prod. You need to be able to integration test automatically, or else it is unacceptable. The tests need to be able to run realistically in every environment.
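What “the same test in every environment” could look like: one test, parameterised over broker implementations. The Broker protocol and the in-memory stand-in are hypothetical; in a real suite the fixture list would include clients for whatever brokers each environment actually runs:

```python
# One integration test, many backends (pytest).
from typing import Protocol
import pytest

class Broker(Protocol):
    def publish(self, topic: str, message: str) -> None: ...
    def receive(self, topic: str) -> str: ...

class InMemoryBroker:
    """Stand-in; real fixtures would wrap RabbitMQ, Azure Service Bus, etc."""
    def __init__(self) -> None:
        self.queues: dict[str, list[str]] = {}
    def publish(self, topic: str, message: str) -> None:
        self.queues.setdefault(topic, []).append(message)
    def receive(self, topic: str) -> str:
        return self.queues[topic].pop(0)

@pytest.fixture(params=[InMemoryBroker])
def broker(request):
    return request.param()

def test_roundtrip(broker: Broker):
    broker.publish("orders", "order-42")
    assert broker.receive("orders") == "order-42"
```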
Let me VNC onto the server
Back in the day, when VMs were commonly used for hosting websites, you sometimes had to log into a virtual server and look into eventvwr.exe to see what was actively going wrong – maybe a particular executable was eating all the memory and needed a bit of encouragement to get over itself. This type of access is of course dangerous to have, and it would be nearly unheard of today for a developer to have it on production hardware, even when troubleshooting. Instead there are alerts that automatically destroy an instance of a misbehaving app, having already spun up a replacement. In the rare cases where you still need a VM, you install agents on it that allow people to perform certain maintenance tasks without ever logging in. Fundamentally, this has been solved in the way I foresee all of this being solved: by abstracting away the problem.
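A toy sketch of the destroy-and-replace pattern (real platforms – Kubernetes liveness probes, autoscaling groups – start the replacement on other hardware before the old instance dies; this single-machine version has to kill first, and the URL and command are made up):

```python
# Supervisor loop: poll a health endpoint, replace the worker if it stops
# answering. Nobody ever logs in to "encourage" the process by hand.
import subprocess
import time
import urllib.request

HEALTH_URL = "http://localhost:8080/healthz"  # hypothetical
START_CMD = ["python", "app.py"]              # hypothetical

def healthy() -> bool:
    try:
        with urllib.request.urlopen(HEALTH_URL, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

proc = subprocess.Popen(START_CMD)
while True:
    time.sleep(10)
    if not healthy():
        proc.kill()                         # put the misbehaving instance down
        proc.wait()
        proc = subprocess.Popen(START_CMD)  # and spin up a fresh one
```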
Conclusion
We are closer than ever to utopia, and the level of hand-cranking required to reach nirvana is lower than ever, but there is still too much manual effort required. There is plenty of scope for disruption. A cocoon world for developers – low-faff development and testing of containerised apps, with the ability to conclusively prove that monitoring and dependency acquisition work locally before pushing the code to CI – is the minimum, and depending on your cloud provider it is still anything from a massive PITA to impossible. There are, according to a quick search, new IAM solutions that look like they offer identity and app provisioning in a seamless way, so the future is on its way, somehow.