Monthly Archives: March 2026

You are welcome

I have solved your biggest problem.

Picture the scene: you are getting warnings by your OS that you are running out of disk space, but you can’t understand the output of du well enough to take action. You miss the old school tooling on windows where you would see a pie chart, where you knew you would have to attach that biggest pie. Also, after I discovered the concept of sixel for graphs in a terminal, I was convinced that I deserve to have both the terminal and a pie chart.

So, like the hypocrite I am, rather than investigating tools like GNU’s faster du tools to see what already exists out there, I had cursor build me the tool I needed.

And – an overview in how it works:

Again – you are welcome.

Speculation on the future of Windows

I am going to armchair quarterback the future of Windows. I have absolutely no insight into what Microsoft are doing, and no deeper knowledge into the internals of Windows than having read the odd Windows Internals book ages ago, as well as following some Microsoft adventures, such as the super deprived windows images they tried to introduce as docker base images for people to use.

What is wrong with Windows you say? Apart from the new ream of bugs, the ads, the telemetry, the copilot integration, the constant UI changes that still don’t solve the fundamental problems?

Well – I think the kernel architecture of Windows is sound. It’s designed by the guy behind VMS, it is very flexible. Things like WSL1 was possible due to the pluggable architecture, I think it may have a performance drawback compared to less heavily abstracted systems, but OTOH, it does have its benefits in the situation we are.

I think NTFS is sound in terms of what it does, it supports journaling, it has granular security, it’s arguably better than EXT4 in certain ways – except it really struggles with small files and big directories. The main problem with NTFS is that rumours say it is so legacy now within the company that developers always try to create new filesystems rather than work on NTFS. I think where modern AI tools actually shine is the way it allows you to wrestle legacy code under control, refactor and make the code maintainable for new developers. If the future of computing in general – and not just in Linux – is inevitably moving towards big folders with small files, a change has to be made, or Microsoft could choose to natively support EXT4 or ZFS.

The big problem Windows needs to solve is the terrible Windows API, as in the GDI bitmap nonsense and the Windows message stacks.

They have tried so many times to make XAML a thing, and I guess I haven’t given up about that, I mean it is fairly close to MacOS X+/NeXT’s interfacebuilder, which clearly works despite being awful to use. Just make compile into something closer to the metal instead of having it managed. Also, don’t drag OLE2 out of the mausoleum for this like you did with Windows 8, just have an app model that starts a process with main() and then hooks into the UI stack. Sure, sandbox the apps as the operating system now has a bunch of built-in security concepts that didn’t exist in DOS that is the foundation for the current Windows API stack, but use some sane method that is compatible with how processes should work in a operating system not from 1980.

Basically we need to leave device contexts handles behind and get onto canvases. I don’t know what a good UI stack should look like, but fundamentally – security – i.e. multi user access and remote control- as a first class design concern, vector graphics, hardware acceleration and effective use of modern processors need to be high on the list of demands. Surely, starting from scratch would allow you to better handle things like clipboards securely.

So – breaking compatibility I say? Yes – and no. I think Microsoft should create a new subsystem for running apps on a completely separate UI stack, with security and performance as the key metrics during design. If you insist on compatibility, let the XAML used for the new stack to be mostly compatible with whatever XAML dialect that has the biggest number of running apps, and find a way to run old apps in a GDI32 emulator in the new desktop. Performance penalty? Yes, allow it. The new UI paradigm is the future, but if people choose to install the legacy subsystem, their old apps will still run. Security features and a well defined upgrade path through microsoft’s developer tooling should allow Enterprise IT departments to force the software estates towards only relying on the new stack.

They should use their enormous network of influencers – MVPs and RDs – to bully people into upgrading their apps to native 64 bit new UI stack like Apple have done multiple times (MacOs legacy => MacOS X+, Motorola 68000 to PowerPC, PowerPC to Intel, Intel to Apple Silicon).

It will end the previously endless cycle of Microsoft attempting to shoehorn a new UI into old windows, and will firmly hang a Sword of Damocles over all legacy windows apps allowing a clear sustainable path to the future.

If the technical design is clean enough, it will be possible to convert desktop virtualisation giants to the new stack early on to get a strong market share, letting you only be dependent on individual app developers to keep up, and given the Office Suite is such a key player, binning the legacy UI and moving to the new stack for themselves among to will directly benefit adoption.

Why AI, man?

You will have seen property magnates strangle the property market by building data centres on spec (often poorly as the demands of a data centre are quite different than those of an office building or a warehouse), you will have seen the future production for years of memory chips, storage chips and graphics processor already being bought by AI giants, effectively barring the average gaming enthusiast from upgrading their computers or getting into the hobby at all. Apple only being exempt because they too have bought their future capacity for a few years ahead, but at some point when these contracts renew or when China invades Taiwan, the lost capacity at TSMC will hit MacBook and iPhone prices as well. Same thing with gaming consoles, cars and the crippled computers handed out in bulk to the regular corporate drone.

For what, you may ask? For AI memes? For desinformation? For hallucinated information in corporate reports that steer companies in the wrong direction?

In defence of “artificial intelligence”

If you have read AI slop it seems baffling that anyone would like to use it. Especially someone that writes things for a living, yet we see newspaper articles and web pages that contain “If you want I can rewrite this is a more enticing style…” stuff from Chat GPT that they forgot to remove. I.e. people that get paid to write use AI to generate text. Some musicians use Suno to make backing tracks to practice alongside. Why? Is this not like turkeys voting for Christmas?

Not all writing is a Hemingwayeque six month stay in Key West, sometimes you just need to correctly structure text based on 15 news bulletins you got from Bloomberg so that you eventually get paid at the end of the month. Giving an AI a model some rules about writing a news push article (first paragraph that states when, what and who, and then progressively add details in descending order of importance so that the text can be liberally cut from the end if necessary – as an example) and have it produced in seconds. Or in the musician case, rather than getting four mates in a room at the same time just so that you can bore them with practicing soloing in a particular mode, you can have the AI produce a 20 minute fake track with the right chords and not drive anyone insane except yourself.

With software construction it is even more attractive. All the code we write is supposed to not be creative. It is supposed to be familiar, predictable and in keeping with the style of all the rest of the code in the codebase. I.e. the repetition and theft is a feature.

The biggest problem in software development is that we are mere humans with human failings. e.g., some business rules change somewhere, we change the code, observe – hopefully using automated tests – that the change works and then we are onto the next, because we fail to notice that the function name no longer describes what the code does, or even more commonly, an old comment now has become a complete lie.

We had already solved some of these problems with automated refactoring tools that interpret the code to follow the call chain, so that you can rename a function, and it renames every instance of it being used in the code base, thus significantly lowering the threshold for keeping names relevant after code changes. There are also tools that let you automatically extract a piece of code out of a bigger function to reduce the size of functions. From our perspective, as not a vibe coder, AI tools are just an extension of that. I can now ask a developer tool to refactor the existing code into a certain pattern, and although a year ago that could have meant catastrophic corruption of the code, we had things like source control (effectively save points for programmers, you just turn time back to before the boss battle) compilers/linters and tests that limited how crazy things could get, it was still way too interesting. Recently the tools actually produce sane code if you prompt it correctly, even if we still obviously have the guardrails in place.

The irony is that the code we have trained the model is written by mortal humans, and the text you get back from the models sounds like talking to a very junior developer. “All tests run except […] that aren’t important” or “There is one broken test that is unrelated to our change” which is highly amusing. It will also easily disable authentication if it becomes too “difficult” to deal with, which is super dangerous. Guardrails, rules and commands are very important, but at the end of the day you are responsible for the code your model produces.

Risks

I mentioned companies making decisions on hallucinated data before. That is a risk, but on the other hand – how long is the list of companies that went bankrupt because some clever soul accidentally replaced one formula in A Critical Excel Worksheet with the same value as a constant? Every tool is dangerous if used incorrectly.

Of course, completely vibe coded applications that have not been analysed from a security perspective can have an unlimited array of vulnerabilities, it is fundamentally up to the tool makers to protect their users, which seems to be lacking for certain tools.
Giving an AI agent full access to your own account and terminal, or your own email is of course very dangerous.

The Tesla Full Self Driving fallacy can happen with agents as well – you give an AI agent an administrative task that saves you a week of gruelling boring mundane work and everything is fine, so you just add more rights until there is a disaster, the same way that Tesla drivers after a few successful uncomplicated drives on the motorway start napping behind the wheel until they slam into a truck, best case.

That is the opposite of guard rails. I foresee that the same type of libraries available to query databases whilst disabling SQL injection will come around for prompt creation to avoid prompt injection, but also that prompt injection will climb the charts of popular exploits.

So what then?

Fundamentally, everyone is going to use AI – not because it is forced upon you by Microsoft, but because there will be an application that is useful to you. I have no idea if this is the end of white collar work, it could be, but it could also just be yet another tool in the arsenal. The only thing I am fairly certain about is that we cannot turn back time, but of course future wars may force us to go back to the kind of electronics that we can manufacture in the west, meaning 1980s tech at best, which could uniquely strike against the IT sector. The future is wide open.