Tag Archives: AI

Why AI, man?

You will have seen property magnates strangle the property market by building data centres on spec (often poorly as the demands of a data centre are quite different than those of an office building or a warehouse), you will have seen the future production for years of memory chips, storage chips and graphics processor already being bought by AI giants, effectively barring the average gaming enthusiast from upgrading their computers or getting into the hobby at all. Apple only being exempt because they too have bought their future capacity for a few years ahead, but at some point when these contracts renew or when China invades Taiwan, the lost capacity at TSMC will hit MacBook and iPhone prices as well. Same thing with gaming consoles, cars and the crippled computers handed out in bulk to the regular corporate drone.

For what, you may ask? For AI memes? For desinformation? For hallucinated information in corporate reports that steer companies in the wrong direction?

In defence of “artificial intelligence”

If you have read AI slop it seems baffling that anyone would like to use it. Especially someone that writes things for a living, yet we see newspaper articles and web pages that contain “If you want I can rewrite this is a more enticing style…” stuff from Chat GPT that they forgot to remove. I.e. people that get paid to write use AI to generate text. Some musicians use Suno to make backing tracks to practice alongside. Why? Is this not like turkeys voting for Christmas?

Not all writing is a Hemingwayeque six month stay in Key West, sometimes you just need to correctly structure text based on 15 news bulletins you got from Bloomberg so that you eventually get paid at the end of the month. Giving an AI a model some rules about writing a news push article (first paragraph that states when, what and who, and then progressively add details in descending order of importance so that the text can be liberally cut from the end if necessary – as an example) and have it produced in seconds. Or in the musician case, rather than getting four mates in a room at the same time just so that you can bore them with practicing soloing in a particular mode, you can have the AI produce a 20 minute fake track with the right chords and not drive anyone insane except yourself.

With software construction it is even more attractive. All the code we write is supposed to not be creative. It is supposed to be familiar, predictable and in keeping with the style of all the rest of the code in the codebase. I.e. the repetition and theft is a feature.

The biggest problem in software development is that we are mere humans with human failings. e.g., some business rules change somewhere, we change the code, observe – hopefully using automated tests – that the change works and then we are onto the next, because we fail to notice that the function name no longer describes what the code does, or even more commonly, an old comment now has become a complete lie.

We had already solved some of these problems with automated refactoring tools that interpret the code to follow the call chain, so that you can rename a function, and it renames every instance of it being used in the code base, thus significantly lowering the threshold for keeping names relevant after code changes. There are also tools that let you automatically extract a piece of code out of a bigger function to reduce the size of functions. From our perspective, as not a vibe coder, AI tools are just an extension of that. I can now ask a developer tool to refactor the existing code into a certain pattern, and although a year ago that could have meant catastrophic corruption of the code, we had things like source control (effectively save points for programmers, you just turn time back to before the boss battle) compilers/linters and tests that limited how crazy things could get, it was still way too interesting. Recently the tools actually produce sane code if you prompt it correctly, even if we still obviously have the guardrails in place.

The irony is that the code we have trained the model is written by mortal humans, and the text you get back from the models sounds like talking to a very junior developer. “All tests run except […] that aren’t important” or “There is one broken test that is unrelated to our change” which is highly amusing. It will also easily disable authentication if it becomes too “difficult” to deal with, which is super dangerous. Guardrails, rules and commands are very important, but at the end of the day you are responsible for the code your model produces.

Risks

I mentioned companies making decisions on hallucinated data before. That is a risk, but on the other hand – how long is the list of companies that went bankrupt because some clever soul accidentally replaced one formula in A Critical Excel Worksheet with the same value as a constant? Every tool is dangerous if used incorrectly.

Of course, completely vibe coded applications that have not been analysed from a security perspective can have an unlimited array of vulnerabilities, it is fundamentally up to the tool makers to protect their users, which seems to be lacking for certain tools.
Giving an AI agent full access to your own account and terminal, or your own email is of course very dangerous.

The Tesla Full Self Driving fallacy can happen with agents as well – you give an AI agent an administrative task that saves you a week of gruelling boring mundane work and everything is fine, so you just add more rights until there is a disaster, the same way that Tesla drivers after a few successful uncomplicated drives on the motorway start napping behind the wheel until they slam into a truck, best case.

That is the opposite of guard rails. I foresee that the same type of libraries available to query databases whilst disabling SQL injection will come around for prompt creation to avoid prompt injection, but also that prompt injection will climb the charts of popular exploits.

So what then?

Fundamentally, everyone is going to use AI – not because it is forced upon you by Microsoft, but because there will be an application that is useful to you. I have no idea if this is the end of white collar work, it could be, but it could also just be yet another tool in the arsenal. The only thing I am fairly certain about is that we cannot turn back time, but of course future wars may force us to go back to the kind of electronics that we can manufacture in the west, meaning 1980s tech at best, which could uniquely strike against the IT sector. The future is wide open.