Category Archives: Evil

Are the kids alright?

I know in the labour market of today, suggesting someone pick up programming from scratch is akin to suggesting someone dedicate their life to being a cooper. Sure, in very specific places, that is a very sought after skill that will earn you a good living, but compared to its heyday the labour market has shrunk considerably.

Getting into the biz

How do people get into this business? As with, I suspect, most things, there has to be a lot of initial positive reinforcement. Like – you do not get to be a great athlete without several thousand hours of effort, rain or shine, whether you enjoy it or not – but the reason some people with “talent” end up succeeding is that they have enough early success to catch the “bug” and stick at it when things inevitably get difficult and sacrifices have to be made.

I think the same applies here, but beyond e-sports and streamer fame, it has always been more of an internal motivation, the feeling of “I’m a genius!” when you acquire new knowledge and see things working. It used to help to have literally nothing else going on in life that was more rewarding, because just like that fleeting sensation of understanding the very fibre of the universe, there is also the catastrophic feeling of being a fraud and the worst person in the world once you stumble upon something beyond your understanding, so if you had anything else to occupy yourself with, the temptation to just chuck it in must be incredibly strong.

Until recently – software development was seen as a fairly secure career choice, so people had a financial motivator to get into it – but still, anecdotally it seems people often got into software development by accident. They had to edit a web page and discovered JavaScript and PHP, or had to do programming as part of some lab at university and quite enjoyed it, et cetera. Some were trying to become real engineers but had to settle for software development, some were actuaries in insurance and ended up programming Python for a living.

I worry that as the economic prospects of getting into the industry as a junior developer are eaten up by AI budgets, we will see a drop-off of those that accidentally end up in software development, and we will be left with only the ones with what we could kindly call a “calling”, or what I would call “no other marketable skills”, like back in my day.

Dwindling power of coercion

Microsoft of course is the enemy of any right-thinking 1337 h4xx0r, but for quite a while, if you wanted a Good Job, learning .NET and working for a large corporation on a Lenovo Thinkpad was the IT equivalent of working at a factory in the 1960s. Not super joyous, but a Good Job. You learned .NET 4.5 and you pretended to like it. WCF, BizTalk and all. The economic power was unrelenting.

Then the crazy web 2.0 thing happened and the cool kids were using Ruby on Rails. If you wanted to start using Ruby, it was super easy. It was like back in my day, but instead of typing ABC80 BASIC – see below – they used the read-evaluate-print loop in Ruby. A super friendly way of feeling like a genius while gradually increasing the level of difficulty.

Meanwhile, legacy Java and C# were very verbose: you had to explain things like static, class, void and import – not to mention braces and semicolons – to people before they could create a loop of a bad word filling the terminal.
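
For reference, this is roughly the ceremony being talked about – a minimal sketch of the old-school console app, where every single keyword has to be explained before the beginner gets their loop:

    using System;

    namespace RudeWords
    {
        class Program
        {
            static void Main(string[] args)
            {
                // The actual goal: fill the terminal with a bad word.
                while (true)
                {
                    Console.WriteLine("bum");
                }
            }
        }
    }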

People would still rather learn PHP or Ruby, because they saw no value in those old stodgy languages.

Oracle were too busy suing people in court to notice, but on the JVM there were other attempts at creating something less verbose – Scala and eventually Kotlin happened.

Eventually Microsoft noticed what was going on, and as the cool kids jumped ship from Ruby onto NodeJS, Microsoft were determined not to miss the boat this time, so they threw away the .NET Framework – or “threw away”, as much as Microsoft have ever broken with legacy, it remained fairly backward compatible – and started from scratch with .NET Core and a renewed focus on performance and lowered barriers to entry.

The pressure really came as data science folks rediscovered Python. It too has a super low barrier to entry, except there is a pipeline into data science, and Microsoft really failed to break into that market due to the continuous mismanagement of F#. Instead they attacked it from the Azure side and got the money that way – despite people writing Python.

Their new ASP.NET Core web stack borrowed concepts like minimal APIs from Sinatra and Nancy, and they introduced top-level statements to allow people to immediately get the satisfaction of creating a script that loops and emits rude words using only two lines of code.

But still, the canonical way of writing this code was to install Visual Studio and create a New Project – Console App, and when you save that to disk you have a whole bunch of extra nonsense there (a csproj file, a bunch of editor metadata stuff that you do not want to have to explain to a n00b, et cetera), which is not beginner-friendly enough.

This past Wednesday, Microsoft introduced .NET 10 and Visual Studio 2026. In it, they have introduced file-based apps, where you can write one file that can reference NuGet packages or other C# projects, import namespaces and declare build-time variables inline. It seems like an evolution of scriptcs, but slightly more complete. You can now give people a link to the SDK installer and then give them this to put in a file called file.cs:
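
Something along these lines – a minimal sketch where the shebang line and the commented-out inline package directive follow the preview documentation, so treat the exact syntax as an assumption rather than gospel:

    #!/usr/bin/env dotnet
    // File-based app: no csproj, no project folder, just this file.
    // NuGet references are declared inline with directives along these lines:
    // #:package Humanizer@2.14.1

    while (true)
        Console.WriteLine("bum");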

Then, like in most programming tutorials out there, you can tell them to do sudo chmod +x file.cs if they are running a Unix-like OS. In that case, the final step is ./file.cs and your rude word will fill the screen…

If you are safely on Windows, or if you don’t feel comfortable with chmod, you can just type dotnet file.cs and see the screen fill with creativity.

Conclusion

Is the bar low enough?

Well, if they are competing with PHP, yes: you can give half a page’s worth of instructions and get people going with C#, which is roughly what it takes to get going with any other language on Linux or Mac, and definitely easier than setting up PHP. The difficulty with C#, and with Python as well, is that they are old. Googling will give you C# constructs from ages ago that may not translate well to a file-based project world. Googling for help with Python will give you a mix of Python 2 and Python 3, and with Python it is really hard to know what is a pip thing and what is an elaborate hoax, due to the naming standards. The conclusion is therefore that .NET is now in the same ballpark as the others in terms of complexity, but it depends on what resources remain available. Python has a gigantic world out there of “how to get started from 0”, whilst C# has a legacy of really bad code from the ASP.NET WebForms days. Microsoft have historically been excellent at providing documentation, so we shall see if their MVP/RD network floods the market with intro pages.

At the same time, Microsoft is going through yet another upheaval, with Windows 10 going out of support and Microsoft tightening the noose by requiring a Microsoft Account to run Windows 11, while Steam have released the Steam Console running Windows software on Linux, meaning people will have less forced exposure to Windows even to game, whilst Google own the school market. Microsoft will still have corporate environments that are locked to Windows for a while longer, but they are far from the situation they used to be in.

I don’t know if C# is now easy enough to adopt that people that are curious about learning programming would install it over anything else on their mac or linux box.

High or low bar, should people even learn to code?

Yes, some people are going to have to learn programming in the future. AGI is not happening, and new models can only train on what is out there. Today’s generative AI can do loads of things, but in order to develop the necessary skills to leverage it responsibly, you need to be familiar with all the baggage underneath, or else you risk releasing software that is incredibly insecure or that will destroy customer data. As Bjarne Stroustrup said, “C makes it easy to shoot yourself in the foot; C++ makes it harder, but when you do it blows your whole leg off” – this can apply to AI-generated code as well.

Why does code rot?

The dichotomy

In popular parlance you have two categories of code: your own, freshly written code, which is the best code, code that never will be problematic – and then there is legacy code, which is someone else’s code, untested, undocumented and awful. Code gradually goes from good to legacy in some ways that appear mystical, and in the end you change jobs or they bring in new guys to do a Great Rewrite with mixed results.

So, to paraphrase Baldrick from Blackadder Goes Forth: “The way I see it, these days there’s a [legacy code mess], right? And, ages ago, there wasn’t a [legacy code mess], right? So, there must have been a moment when there not being a [legacy code mess] went away, right? And there being a [legacy code mess] came along. So, what I want to know is: how did we get from the one case of affairs to the other case of affairs?”

The hungry ostrich

Why does code start to deteriorate? What precipitates the degradation that eventually leads to terminal decline? What is the first bubble of rust appearing by the wheel arches? This is hard to state in general, but the causes I have personally seen over the years boil down to being prevented from making changes in a defensible amount of time.

Coupling via schema – explicit or not

For example, it could be that you have another system accessing your storage directly. It doesn’t matter whether you are using schemaless storage or not: as long as two different codebases need to make sense of the same data, you have a schema whether you admit it or not, and at some point those systems will need to coordinate their changes to avoid breaking functionality.

Fundamentally – as soon as you start going “nah, I won’t remove/rename/change the type of that old column because I have no idea who still uses it” you are in trouble. Each data store must have one service in front of it that owns it, so that it can safely manage schema migrations, and anyone wanting to access that data needs to use a well-defined API to do so. The service maintainers can then be held responsible for maintaining this API in perpetuity, and easily so, since the dependency is explicit and documented. If the other service just queried the storage directly, the maintainer would be completely unaware (yes, this goes for BI teams as well).

Barnacles settling

If every feature request leads to functions and classes growing as new code is added like barnacles, without regular refactoring to more effective patterns, the code gradually gets harder to change. This is commonly a side-effect of high turnover or outsourcing. The developers do not feel empowered to make structural changes, or perhaps have not had enough time to get acquainted with the architecture as it was originally intended. Make sure that whoever maintains your legacy code is fully aware of their responsibility to refactor as they go along.

Test after

When interviewing engineers it is very common that they say they “practice TDD, but…”, meaning they test after. At least for me, the test quality is obviously different if I write the tests first versus if I get into the zone, write the feature first and then try to retrofit tests afterwards. Hint: there is usually a lot less mocking if you test first. As the tests get more complex, adding new code to a class under test gets harder, and if the developer does not feel empowered to refactor first, the tests are likely not to cover the added functionality properly – so perhaps a complex integration test is modified to validate the new code, or maybe the change is just tested manually…

Failure to accept Conway’s law

The reason people got hyped about microservices was the idea that you could deploy individual features independently of the rest of the organisation and the rest of the code. This is lovely, as long as you do it right. You can also go too granular, but in my experience that rarely happens. The problem that does happen is that separate teams have interests in the same code and modify the same bits, and releases can’t go out without a lot of coordination. If you also have poor automated test coverage you will get a manual verification burden that further slows down releases. At your earliest convenience you must spend time restructuring your code, or at least the ownership of it, so that teams fully own all aspects of the thing they are responsible for and can release code independently, with any remaining cross-team dependencies made explicit and automatically verifiable.

Casual attitude towards breaking changes

If you have a monolith that is providing core features to your estate and you have a publicly accessible API operation, assume it is being used by somebody. Basically, if you must change its required parameters or its output, create a new versioned endpoint or one by a different name. Does this make things less messy? No, but at least you don’t break a consumer you don’t know about. Tech leads will hope that you message around to try and identify who uses it and coordinate a good outcome, but historically that seems too much to ask. We are only human after all.

Until you have PACT tests for everything, and solid coverage, never break a public method.
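
To make that concrete, here is a minimal ASP.NET Core sketch of the versioned-endpoint approach; the routes and the extra required parameter are hypothetical, not anyone’s real API:

    // Assumes the ASP.NET Core web SDK and minimal APIs.
    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    // The original contract: assume somebody, somewhere, still calls this.
    app.MapGet("/api/v1/customers/{id}", (int id) =>
        Results.Ok(new { id, name = "Ada" }));

    // The breaking change (a new required parameter) goes into a new versioned
    // endpoint instead of changing the old one underneath an unknown consumer.
    app.MapGet("/api/v2/customers/{id}", (int id, string region) =>
        Results.Ok(new { id, name = "Ada", region }));

    app.Run();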

Outside of support horizon

Initially it does not seem that bad to be stuck with a slightly unsupported version of a library, but as time moves on, all of a sudden you are stuck for a week with a zero-day vulnerability that you can’t patch because three other libraries are out of date and contain breaking changes. It is much better if you are ready to make changes as you go along. One breaking change at a time usually leaves you options, but when you are already exposed to a potential security breach, you have to make bad decisions due to lack of time.

Complex releases

Finally, it is worth mentioning that you want to avoid manual steps in your releases. Today there is really no excuse for making a release more complex than one button click. Ideally, abstract away configuration so that there is no file.prod.config template separate from file.uat.config, or else that prod template file is almost guaranteed to break the release – much like the grille was the only thing rusting on the Rover 400, the one part that wasn’t Honda.
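
As an illustration – assuming an ASP.NET Core app – the built-in configuration system already layers appsettings.json, appsettings.{Environment}.json and environment variables, so the same artefact can be promoted between environments unchanged; the Payments:BaseUrl key below is made up:

    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    // Locally this comes from appsettings.Development.json; in uat/prod it comes
    // from the environment variable Payments__BaseUrl, with no per-environment
    // template file baked into the release artefact.
    var paymentsUrl = app.Configuration["Payments:BaseUrl"];

    app.MapGet("/", () => $"Talking to {paymentsUrl}");
    app.Run();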

Stopping Princip

So how do we avoid the decline, the rot? As with shifting quality and security left, the earlier you spot these problems the cheaper they are to address, so if you find yourself in any of the situations above, address them with haste.

  1. Avoid engaging “maintenance developers”; their remit may explicitly prevent them from doing major refactoring even when it is necessary.
  2. Keep assigning resources to keep dependencies updated. Use dependency scanning (SCA) to validate that your dependencies are not vulnerable.
  3. Disallow and remove integration-by-database, at any cost. This is hard to fix, but worth it. This alone solves 90% of the niggling small problems you keep having, as you can fix your data structure to fit your current problems rather than the ones you had 15 years ago. If you cannot create a true data platform for reporting data, at least define agreed views/indexes that can act as an interface for external consumers. That way you have a layer of abstraction between external consumers and yourself and stay free to refactor, as long as you make sure the views still work.
  4. Make dependencies explicit. Ideally PACT tests, but if not that, at least integration tests. This way you avoid needing shared integration environments where teams are shocked to find out that the changes they have been working on for two weeks break some other piece of software they didn’t know existed.

Biscuits

What are cookies? Why do they exist? Why on earth would I ever NOT want to accept delicious cookies? What is statelessness? All that and more in this treatise on cookies and privacy.

Requests

The original humble website (info.cern.ch) looked very different from the sites that currently power banking, commerce and even interactions with government. Fundamentally a web server is a program that lets you create, request, update, or delete resources. It tells you some information about the content, what type it is, how big it is, when it was last modified, if you are supposed to cache it or not, among other things. These metadata are returned as headers, i.e. bits of content before the main content, so to speak.

To over-simplify the process: the client, e.g. the browser, simply breaks down the address in the address bar into the scheme – usually http or https – the host name – info.cern.ch – and the path – /. If the scheme is http and no port number was explicitly given, the browser will contact info.cern.ch on port 80 and then send the command GET /. The browser sends information in headers, such as User-Agent, i.e. it tells the web server which browser it is, and it can send a Referer as well, i.e. which website linked to this one. These headers are sent by the browser, but they are not mandatory, and any low-level HTTP client can set its own Referer and User-Agent headers, so it is important to realise that these headers are not guaranteed to be correct. The server too will offer up information in headers. Sometimes the server will – in headers alongside the content it serves you – announce what type of web server it is (software and platform), which is something you should ideally disable, because that information is only helpful for targeting malware, with no real valid use case.
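
If you want to see those headers for yourself, a small sketch like this (modern .NET, top-level statements) makes the same kind of request a browser would and prints the metadata that comes back:

    using System;
    using System.Linq;
    using System.Net.Http;

    using var client = new HttpClient();
    // The client identifies itself with a User-Agent header; nothing forces it to be truthful.
    client.DefaultRequestHeaders.UserAgent.ParseAdd("NotReallyABrowser/1.0");

    var response = await client.GetAsync("http://info.cern.ch/");
    Console.WriteLine($"HTTP {(int)response.StatusCode} {response.ReasonPhrase}");

    // Response headers plus content headers: type, length, caching, server software and so on.
    foreach (var header in response.Headers.Concat(response.Content.Headers))
        Console.WriteLine($"{header.Key}: {string.Join(", ", header.Value)}");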

Why this technical mumbo jumbo? Well, the thing you didn’t see in the above avalanche of tech stuff was the browser authenticating the user to the server in any way. The browser rocked up, asked for something and received it, at no point was a session created, or credentials exchanged. Now, info.cern.ch is a very simple page, but it does have a favicon, i.e. the little picture that adorns the top left of the browser tab, so when the page is requested, it actually makes two calls to the Swiss web server. One for the HTML content, and one for the picture. Now with modern HTTP protocol versions this is changing somewhat, but let’s ignore that for now, the point is – the server does not keep session state, it does not know if you are the same browser refreshing the page, or if you are someone completely new that requests the page and the favicon.

There was no mechanism to “log in”, to start a session; there was no way to know if it was the same user coming back that you already knew, because no such facility existed within the protocol. From fairly early on you could have the server return status code 401 to say “you need to log in”, and there was a provision for the browser to then supply some credentials using a header called Authorization, but you had to supply that header with every request or else it wouldn’t work. This is how APIs still work: each request is a new world, you authenticate with every call.

The solution, the way to log into a website, to exchange credentials once and then create a session that knows who you are whilst you are on a website, was using cookies.

Taking the biscuit

What is a cookie? Well, it is a small piece of data that the browser stores somewhere among the user’s local files.

The server returns a header called Set-Cookie, which tells the browser to remember some data: basically a name and a value, and possibly a domain.

Once that has happened, there is a gentleman’s agreement that the browser will always send those cookies along when a subsequent call is made to that same server. The normal flow is that the server sets a cookie like “cool-session-id=a234f32d”, and upon subsequent requests the server reads the cool-session-id cookie and knows which session the request belongs to: “a234f32d, ah, long time no see – carry on”. Some cookies live for a very long time (“don’t ask again”), and some, the session ones, live for 5 minutes or similar. When the cookies expire, the browser will no longer send them along with requests to the server, and you will have to log in again, or similar.
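
Expressed as a minimal ASP.NET Core sketch – the endpoints and the hard-coded session id are purely illustrative – the two halves of that agreement look something like this:

    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    app.MapGet("/login", (HttpContext ctx) =>
    {
        // Sends Set-Cookie: cool-session-id=a234f32d; ... back to the browser.
        ctx.Response.Cookies.Append("cool-session-id", "a234f32d", new CookieOptions
        {
            HttpOnly = true,
            Secure = true,
            MaxAge = TimeSpan.FromMinutes(5)
        });
        return "Logged in";
    });

    app.MapGet("/whoami", (HttpContext ctx) =>
        // Reads the Cookie header the browser sends along on subsequent requests.
        ctx.Request.Cookies.TryGetValue("cool-session-id", out var id)
            ? $"Welcome back, session {id}"
            : "I have no idea who you are");

    app.Run();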

How the cookie crumbles

What could possibly go wrong – these cookies seem perfect, with no downsides whatsoever? Yes, and no. An HTML page, a hypertext document, contains text, images and links. Usually you build up a web page using text content and images that you host on your own machine, so the browser keeps talking to the same server to get all the necessary content, but sometimes you use content from somewhere else, like the under-construction.gif that was popular back in my day. That means the server where under-construction.gif is hosted can set cookies as well, because the call to its server to download that picture is the same kind of thing as the call to my server where the HTML lives; those calls work the same way. If the person hosting under-construction.gif wanted to, they could use those cookies to figure out which pages each person visits. If it was 1995, then under-construction.gif could be referenced from 1000 websites, and by setting cookies, the host of under-construction.gif could start keeping a list of the times when the same cookie showed up on requests for under-construction.gif from different websites. The combination of the Referer header and the cookie set in each browser would allow interesting statistics to be kept.

Let’s say this isn’t under-construction.gif, but rather a Paypal Donate button, a Ko-Fi button, a Facebook button or a Google script manager, and you start seeing the problem. These third party cookies are sometimes called tracking cookies, because, well – that’s what they do.

Why the sweet tooth?

Why do people allow content on their website that they know will track their users? Well, for the plebs, like this blog here, I suspect the main thing is that the site creator cannot be bothered to clean house. You use some pre-built tool, like WordPress, and accept that it will drop cookies like a medieval fairytale, because you can’t be arsed to wade into PHP in your spare time to stop the site from doing so. Then there’s the naive greed: if I add a Paypal Donate button, or an Amazon affiliate link, I could make enough money to buy several 4-packs of Coke Zero, infinite money glitch !!1one.

For companies and commercial websites, I am fairly convinced that Google Analytics is the biggest culprit. Even if you have zero interest in monetising the website itself, and you never intend to place ads at any time, Google Analytics is a popular tool to track how users use your application. You can tag up buttons with identifiers and see which features either are not discovered or are too complex, i.e. users abandon multi-step processes halfway through. From a product design perspective these seem like useful signals, and from a pure engineering perspective it allows you to build realistic monitoring and performance tests, because you have reasonably accurate evidence of how real-world users use your website. The noble goal of making the world a better place aside, the fact is that you are still using that third-party cookie from Google, and they use it for whatever purposes they have; the only difference is that you get to use some of that data too.

Achieving the same level of insight about how your users use your app with an analytics tool you built in-house would take a herculean effort, and for most companies that cost would not be defensible. You see a similar problem after Sales develops a load-bearing Excel template, and you realise that building a line-of-business web app to replace that template would be astronomically expensive and still miss out on some features Excel has built in.

Consent is fundamental

As you can tell, the technical difference between a marketing cookie and a cookie used for improving the app or monitoring quality is nonexistent. It is all about intent. The General Data Protection Regulation was an attempt at safeguarding people’s data by requiring companies to be upfront about the intent and scope of the information they keep, and to hold them accountable in case they suffer data breaches. One of the most visible aspects of the regulation is the cookie consent popup that quickly became ubiquitous across the whole of the internet.

Now, this quickly became an industry of its own, where companies buy third-party javascript apps that allow you to switch off optional cookies and present comprehensive descriptions of the purpose of each cookie. I personally think it is a bit of a racket preying on the internal fear of the Compliance department in large corporations, but still – these apps do provide a service. The only problem is that you, as the site maintainer, get to define whether a cookie is mandatory or not. You can designate a tracking cookie as required, and it will basically be up to the courts to decide if you are in violation. Some sites, like spammy news aggregators, do this upfront: they designate their tracking cookies as mandatory.

Conclusion

So, are cookies always harmful, or can you indulge in the odd one now and then without feeling bad? The simple answer is: it depends. Every time you approve a third-party cookie, know that you are being tracked across websites. You may not mind, because it’s your favourite oligopoly Apple, or you might mind because it’s ads.doubleclick.net – it is up to you. And if you are building a website with a limited budget that does not also include building a bespoke analytics platform, you may hold your nose and add Google Analytics, knowing full well that a lot of people will block that cookie anyway, reducing your feedback in potentially statistically significant ways. Fundamentally it is about choice. At least this way you can stay informed.

Abstractions, abstractions everywhere

X, X everywhere meme template, licensed by imgflip.com

All work in software engineering is about abstractions.

Abstractions

All models are wrong, but some are useful

George Box

It began with assembly language: when people were tired of writing large-for-their-time programs in raw binary instructions, they made a language that basically mapped each binary instruction to a text value, and then there was a program that would translate that to raw binary and produce punch cards. Not a huge abstraction, but it started there. Then came high-level languages and off we went. Now we can conjure virtual hardware out of thin air with regular programming languages.

The magic of abstractions really gives you amazing leverage, but at the same time you sacrifice actual knowledge of the implementation details, meaning you often get exposed to obscure errors that you either have no idea what they mean or – even worse – understand exactly what is wrong but have no access to change, because the source is just a machine-translated piece of Go and there is no way to fix the translated C# directly, just to take one example.

Granularity and collaboration across an organisation

Abstractions in code

Starting small

Most systems start small, solving a specific problem. If this is done well, the requirements grow, people begin to understand what is possible and features accrue. A monolith is built, and it is useful. For a while things will be excellent, features will be added at great speed, and developers might be added along the way.

A complex system that works is invariably found to have evolved from a simple system that worked

John Gall

Things take a turn

Usually, at some point things go wrong – or auditors get involved because of regulatory compliance – and you prevent developers from deploying to production, hiring gatekeepers to protect the company from the developers. In the olden days – hopefully not anymore – you hired testers to do manual testing to cover a shortfall in automated testing. Now you have a couple of hand-offs within the team, meaning people write code, give it to testers who find bugs, and work goes the wrong way – backwards – for developers to clean up their own mess and try again. Eventually something will be available to release, and the gatekeepers will grudgingly allow a change to happen, at some point.

This leads to a slowdown in the feature factory. Some old design choices may cause problems that further slow down the pace of change, or – if you’re lucky – you just have too many developers in one team and you somehow have to split them up into different teams, which means comms deteriorate and collaborating in one codebase becomes even harder. With the existing change prevention, struggles with quality and now poor cross-team communication, something has to be done to clear a path so that the two groups of people can collaborate effectively.

Separation of concerns

So what do we do? Well, every change needs to be covered by some kind of automated test, if only at first to guarantee that you aren’t making things worse. This way you can now refactor the codebase to a point where the two groups can have separate responsibilities and collaborate over well-defined API boundaries, for instance. Separate deployable units, so that teams are free to deploy according to their own schedule.

If we can get better collaboration with early test designs and front-load test automation, and befriend the ops gatekeepers to wire in monitoring so that teams are fully wired in to how their products behave in the live environment, we would be close to optimum.

Unfortunately – this is very difficult. Taking a pile of software and making sense of it, deciding how to split it up between teams and gradually separating out features can be too daunting to really get started on. You don’t want to break anything, and if you – as many are wont to do, especially if you are new in an organisation – decide to start over from scratch, you may run into one or more of the problems that occur when attempting a rewrite, one example being that you end up in a competition against a moving target; to stop that competition, the same team has to own a feature in both the old and the new codebase. For some companies it is simply worth the risk: they are aware they are wasting enormous sums of money, but they still accept the cost. You would have to be very brave.

Abstractions in Infrastructure

From FTP-from-within-the-editor to Cloud native IaC

When software is being deployed – and I am largely ignoring native apps now, focusing on web applications and APIs – there are a number of things actually happening that are at this point completely obscured by layers of abstraction.

The metal

The hardware needs to exist. This used to be a very physical thing, a brand new HP ProLiant howling in the corner of the office onto which you installed a server OS and set up networking so that you could deploy software on it, before plugging it into a rack somewhere, probably a cupboard – hopefully with cooling and UPS. Then VM hosts became a thing, so you provisioned apps using VMWare or similar and got to be surprised at how expensive enterprise storage is per GB compared to commodity hardware. This could be done via VMWare CLI, but most likely an ops person pointed and clicked.

Deploying software

Once the VM was provisioned, tools like Ansible, Chef and Puppet began to become a thing, abstracting away the stopping of websites, the copying of zip files, the unzipping, the rewriting of configuration and the restarting of the web app into a neat little script. Already here you see problems where “normal” failures, like a file being locked by a running process, show up as a very cryptic error message that the developer might not understand. You start to see cargo culting, where people blindly copy things from one app to another because they think two services are the same, and people don’t understand the details. Most of the time that’s fine, but it can also be problematic with a bit of bad luck.

Somebody else’s computer

Then cloud came, and all of a sudden you did not need to buy a server up front and could instead rent as much server as you need. Initially, all you had was VMs, so your Chef/Puppet/Ansible worked pretty much the same as before, and each cloud provider offered a different way of provisioning virtual hardware before you got to the point where the software deployment mechanism came into play. More abstractions to fundamentally do the same thing. Harder to analyse any failures; you sometimes have to dig out a virtual console just to see why or how an app is failing, because it’s not even writing logs. Abstractions may exist, but they often leak.

Works on my machine-as-a-service

Just like the London Pool and the Docklands were rendered derelict by containerisation, a lot of people’s accumulated skills in Chef and Ansible have been rendered obsolete as app deployments have become smaller: each app is simply unzipped on top of a brand new Linux OS, sprinkled with some configuration, and the image is pushed to a registry somewhere. On one hand, it’s very easy. If you can build the image and run the container locally, it will work in the cloud (provided the correct access is provisioned, but at least AWS offer a fake service that lets you dry-run the app on your own machine and test various role assignments to make sure IAM is also correctly set up). On the other hand, somehow the “metal” is locked away even further and you cannot really access a console anymore, just a focused log viewer that lets you see only events related to your ECS task, for instance.

Abstractions in Organisations

The above tales of ops vs test vs dev illustrate the problem of structuring an organisation incorrectly. If you structure it per function you get warring tribes and very little progress, because one team doesn’t want any change at all in order to maintain stability, the other one gets held responsible for every problem customers encounter, and the third one just wants to add features. If you structured the organisation for business outcomes, everyone would be on the same team working towards the same goals with different skill sets, so the way you think of the boxes in an org chart can have a massive impact on real-world performance.

There are no solutions, only trade-offs, so consider the effects of sprinkling people of various backgrounds across the organisation. If, instead of being kept in the cellar as usual, your developers proliferate among the general population of the organisation, how do you ensure that every team follows the agreed best practices and that no corners are cut, even when a non-technical manager is demanding answers? How do you manage the performance of developers you have to go out of your way to see? I argue such things are solvable problems, but do ask your doctor if reverse Conway is right for you.

Conclusion

What is a good abstraction?

Coupling vs Cohesion

If a team can do all of their day-to-day work without waiting for another team to deliver something or approve something, if there are no hand-offs, then they have good cohesion. All the things needed are to hand. If the rest of the organisation understands what this team does and there is no confusion about which team to go to with this type of work, then you have high cohesion. It is a good thing.

If, however, one team is constantly worrying about what another team is doing, or where certain tickets are in their sprint in order to schedule their own work, then you have high coupling and time is wasted. Some work has to be moved between teams, or the interface between the teams has to be made more explicit, in order to reduce this interdependency.

In Infrastructure, you want the virtual resources associated with one application to be managed within the same repository/area to offer locality and ease of change for the team.

Single Responsibility Principle

While dangerous to over-apply within software development (you get more coupling than cohesion if you are too zealous), this principle is generally useful within architecture and infrastructure.

Originally meaning that one class or method should only do one thing – an extrapolation of the UNIX principles – it can more generally be said to mean that, on any given layer of abstraction, a team, infrastructure pipeline, app, program, class […] should have one responsibility. This usually means a couple of things happen, but they conceptually belong together. They have the same reason to change.

What – if any – pitfalls exist?

The major weakness of most abstractions is when they fall apart, when they leak. Not having access to a physical computer is fine as long as the deployment pipeline is working and the observability is wired up correctly, but when it falls down you still need to be able to see console output, you need to understand how networking works, to some extent, and you need to understand what obscure operating system errors mean. Basically, when things go really wrong, you need to have already learned to run that app on that operating system before, so that you recognise the error messages and have some troubleshooting steps memorised.
So although we try to save our colleagues from the cognitive load of having to know everything we were forced to learn over the decades, to spare them the heartache, they still need to know. All of it. So yes, the challenge with the proliferation of layers of abstraction is to pick the correct ones, and to try and keep the total bundle of layers as lean as possible, because otherwise someone will want to simplify or clarify these abstractions by adding another layer on top, and the cycle begins again.

Desktop OS for developers

The results of the latest StackOverflow Developer Survey just came out, showing – among other interesting things – that Windows is dying as a developer OS. Not one to abandon ship any time soon I’d still like to offer up some suggestions.

TL;DR

  • Make the command line deterministic.
  • Copying files across the network cannot be a lottery.
  • Stop rebooting UI frameworks.
  • Make F# the flagship language.

Back in the day, Microsoft through VB and Visual C++ overcame some of the hurdles of developing software for Windows – then effectively the only desktop OS in the enterprise. Developers, and their managers, rallied behind these products and several million kilometres of code were written over a couple of decades.

The hurdles that were overcome were related to the boilerplate needed to register window classes, creating a window and responding to the basic window messages required to show the window in Windows and have the program behave as expected vis-a-vis the expectations a Windows user might have. Nowhere in VB6 samples was anybody discussing how to write tests or how, really, to write good code. In fact, sample code, simplified on purpose to only showcase one feature at a time, would not contain any distractions such as test code.

When Classic ASP was created, a lot of this philosophy came across to the web, and Microsoft managed to create something as horrible as PHP, but with fewer features, telling a bunch of people that it’s OK to be a cowboy.

When the .NET Framework was created as a response to Java, a lot of VB6 and Classic ASP programmers came across, and I think Microsoft started to see what they had created. Things like Patterns & Practices came out and the certification programmes were taking software design and testing into consideration. Sadly, however, they tended to give poor advice that was only marginally better than what was out there in the wild.

Missed the boat on civilised software development

It was a shock to the system when the ALT.NET movement came along and started to bring in things that were completely mainstream in the Java community but almost esoteric in .NET: continuous integration, unit testing, TDD, DDD. Microsoft tried to keep up by creating TFS, which apart from source code versioning had ALM tools to manage bugs and features as well as a built-in build server, but it became clear to more and more developers that Microsoft really didn’t understand the whole thing about testing first or how lean software development needs to happen.

While Apple had used their iron fist to force people to dump Mac OS for the completely different, Unix-based operating system OS X (with large bits of NeXTSTEP brought across, like the API and Interface Builder), Microsoft were considering their enterprise customers and never made a clean break with GDI32. Longhorn was supposed to solve everything, making WPF native and super fast, obsoleting the old BitBlt malarkey and instead ushering in a brighter future.

As you are probably aware, this never happened. .NET code in the kernel was a horrible idea and the OS division banned .NET from anything ever being shipped with Windows, salvaged whatever they could duct-tape together – and the result of that was Vista. Yes, .NET was banned from Windows and stayed banned up until PowerShell became mainstream a long, long time later. Now, with Universal Windows Apps, a potentially viable combo of C++ code and vector UI has finally been introduced, but since it is the fifth complete UI stack reboot since Longhorn folded, it is probably too little too late, and too many previously enthusiastic Silverlight or WPF people have already fallen by the wayside. Oh, and many of the new APIs are still really hard to write tests around, and it is easy to find yourself in a situation where you need to install Visual Studio and some SDK on a build server, because the dependency relies on the Registry or the GAC rather than things that come with the source.

Automation

As Jeffrey Snover mentions in several talks, Windows wasn’t really designed with automation in mind. OLE Automation possibly, but scripting? Nooo. Now, with more grown-up ways of developing software, automation becomes more critical. The Windows world has developed alternative ways of deploying software to end-user machines that work quite well, but for things like automated integration tests and build automation you should still be able to rely on scripting to set things up.

This is where Windows really lets the developer community down. Simple operations in Windows aren’t deterministic. For a large majority of things you call on the command line, you are the only one responsible for determining whether the command ran successfully. The program you called may very well have failed despite returning a 0 exit code. The execution might not have finished despite the process having ended, so some files may still be locked. For a while, you never know. Oh, and mounting network drives is magic and often fails for no reason.

End result

Some people leave for Mac because everything just works, if you can live with bad security practices and sometimes a long delay before you get things like Java updates. Some people leave for Linux because if you script everything, you don’t really mind all those times you have to reinstall because things like a change in screen resolution or a security update killed the OS to the point that you can’t log in anymore; you just throw away the partition and rerun the scripts. Also, from a developer standpoint, everything just works, in terms of available tools and frameworks.

What to do about it

If Microsoft wants to keep making developer tools and frameworks, they need to start listening to the developers that engage whenever Microsoft open-sources things. Those developers most likely have valuable input into how things are used by serious users – beyond the tutorials.

Stop spending resources duplicating things already existing for Windows or .NET as that strikes precisely at the enthusiasts that Microsoft needs in order to stop hemorrhaging developers.

What is .NET Core – really? Stop rewriting the same things over and over. At least solve the problems the rewrite was supposed to address first before adding fluff. Also – giving people the ability to work cross-platform means people will, so you are sabotaging yourselves while building some good-will, admittedly.

Most importantly – treat F# like Apple treats Swift. Something like: we don’t hate C# – there is a lot of legacy there – but F# is new, trendier and better. F# is far better than Swift and has been used in high-spec applications for nine years already. Still, after years of beta testing, Microsoft manages to release a JITer with broken tail call optimisation (a cornerstone of functional runtimes, as it lets you do recursion effectively). That is simply UNACCEPTABLE and I would have publicly shamed then fired so many managers for letting that happen. Microsoft needs to take F# seriously – ensure it gets the best possible performance, tooling and templating. It is a golden opportunity to separate professional developers from the morons you find if you google “asp.net login form” or similar.

In other words – there are many simple things Microsoft could do to turn the tide, but I’m not sure they will manage, despite the huge strides taken of late. It is also evident that developers hold a grudge for ages.

UTF-8

Having grown up in a society evolved beyond the confines of 7-bit ASCII and lived through the nightmare of codepages as well as cursed the illiterate that have so little to say they can manage with 26 letters in their alphabet, I was pleased when I read Joel Spolsky’s tutorial on Unicode. It was a relief – finally somebody understood.

Years passed and I thought I knew now how to do things right. And then I had to do Windows C in anger and was lost in the jungle of wchar_t and TCHAR and didn’t know where to turn.

Finally I found this resource here:

http://utf8everywhere.org/

And the strategies outlined to deal with UTF-8 in Windows are clear:

  • Define UNICODE and _UNICODE
  • Don’t use wchar_t or TCHAR or any of the associated macros. Always assume std::string and char * are UTF-8. Call the wide Windows APIs and use Boost.Nowide or similar to widen strings going in and narrow them coming out.
  • Never produce any text that isn’t UTF-8.

Do note however that the observations that Windows would not support true UTF-16 are incorrect as this was fixed before Windows 7.

Mail server and collaborative calendar

I was thinking, in light of the fact that all our data, including this blog, is being scrutinized by foreign (to me and most internet users) powers, that maybe one should try to replace Google Apps and Outlook.com, well Google Apps really. The problem is that Google Apps is pretty damn useful.

So what does one need? I have no money to spend, so it has to be free.

One needs a full-featured SMTP server and a good web interface with a simple design that also renders well on mobile, in which you can search through (and find!) email and appointments. Some would argue that you need chat and video conferencing as well, and I guess one where neither the Chinese nor the US military is also on the call would be preferable, but I can live without it.

I cannot, however, live without civilized charset support. As in, working iso-8859-1 or something proper that can display the letters of honor and might, aka åäö. It would also not seem like a grown-up solution if it wasn’t trivial to add encryption and/or signing of e-mails.

You also need shared calendars and the option of reserving common resources. Not perhaps in the home, but the very first thing you do when you use shared calendars in a company is that you start booking things, such as conference rooms, portable projectors et c.

The user catalog needs to be easily managed. You need folders/labels of some kind and filters to manage the flow of e-mail.

I also, probably, need a way to access my e-mail via IMAP.

So the question is, how much of this do I have to build? How much exists in Linux? Does it look decent or does it look Linuxy? By decent I mean Web 2.0-ish, as opposed to Linuxy or Windowsy, such as Outlook Web Access.

I’d welcome any suggestions. I am prepared to code, albeit in C#/Mono, but I would love to use as much Linux OSS as is available, as I probably suck at writing this kind of stuff compared to those who have already done so.

So far I have found the following:
http://www.howtoforge.com/perfect-server-ubuntu-13.04-nginx-bind-dovecot-ispconfig-3
But instead of an ISP tool, a custom website? Perhaps Solr or Lucene indexing each user’s maildir. But what about a calendar?

Why not just install Citadel?
This requires testing.

Windows Phone 6.1

I managed to lose my employer’s Nokia N82, and as punishment my boss stuck me with an HTC Touch Cruise Windows Phone Classic 6.1 phone that nobody had wanted to use since 2008. I have tweeted about my findings with the hashtag #punishmentphone.

In short, the experience has been mixed. Synchronization with Google Apps works like a charm with e-mail, contacts and calendar, and the messaging function is quite OK in the way e-mail works; the SMS part has conversations just like the iPhone. Sadly, though, the general Windows Mobile feel remains, with very bad tactile feedback from the touch interface and a borderline unusable virtual keyboard, and having a Windows interface on a phone means that user stories like “Create new SMS” or “Make a phone call” are at least a few clicks too far away for comfort. Oh, and another pet peeve: when the phone boots, it throws the SIM-card PIN-code dialog at me first, but that gets hidden by the WinMo desktop, and I have to go into the comm manager and disable the phone and re-enable it to get the PIN dialog to a place where I can actually punch the numbers in. WinMo has improved since before, though, as the phone has only died on me once so far for no reason, which is vastly better than the QTek S100 I wrestled with years before.

The Dark Side

Back in the nineties, when I trained as an apprentice coder at the University of Umeå, I was first exposed to the Dark Side. It was very seductive, with an intuitive TCP/IP stack, simple signal management conventions, concise UIs, lean config files, powerful scripts, nifty daemons and of course the Bible: Advanced Programming in the UNIX Environment by W. Richard Stevens. I marveled at the distributed GUI, broken and deeply insecure though it was, and I admired that multi-user security concerns had been addressed back in the ’70s already, not as an afterthought in 1989. Then I was brought out into the real world, began appreciating GUI conventions, speed of implementation, solving customer problems and so on, and joined the Just and Fair in the Microsoft camp and haven’t strayed since. Much.

But

But now and then the urge to complicate things gets stronger and stronger, and with members of my immediate family having been lost to the Others for years now, the lure became overwhelming. After years of propaganda from my father I finally decided to install OpenSolaris on a virtual machine. Of course Solaris will not install on Windows Virtual PC Beta: it does not recognize any of the virtual SATA devices and thus cannot install. What to do? Well, I gave up, of course.

For a week or two, until I thought I should make a serious attempt and actually downloaded Sun’s VirtualBox.

VirtualBox vs Windows Virtual PC Beta

I really wanted to hate it, it not being made by the Just and Fair folks at Redmond. However, it is difficult to argue with 64-bit, high-performance virtualization with USB, disk and network integration. It sucks the life out of the host machine, but in return both guest and host perform acceptably, as long as you don’t allocate too much memory to the guest “computer” – unlike Windows Virtual PC, where you get low performance (but better than Virtual PC 2007) on the handful of platforms it does handle.

Of course, I popped the ISO into an empty virtual machine (ironically using the same empty VHD Windows Virtual PC Beta created but couldn’t make available to Solaris) and of course the setup just chugged along, event-free. A vast improvement over previous UNIXes I installed back in the day. It refused to give me any options but to fill my VHD with the One Solaris partition to bind them all. None of that allocating partitions to /root, /home, /usr/local et cetera business that used to be 90% of the fun of trying to set up a Unix-style system.

Stereo? Not for me

Of course, once everything is set up you want to do stuff. Back in the day I would have set up FTP and the Apache httpd, created logins for friends and acquaintances and set up the firewall to allow SSHing into my computer. My computer in this case being nothing but a figment of my laptop’s imagination, that would be even more pointless than it was back in the day. So: what to do? Well, of course: develop in .NET! After all, if I am to achieve global supremacy, I need to be able to code on Solaris as well, at some point.

This is where it all came back to me. Because it is a proper UNIX and not some humble Linux, allegedly, they haven’t gotten Mono to run consistently on Solaris. The hours of gunzipping and ./configuring and make-believing. Compile errors upon compile errors. I downloaded various other package managers to appease the evildoers. Nothing I did could make Mono build, despite gigabytes of source code. Ah well… Now I gave up properly.

OpenSUSE

Until I tried OpenSUSE, that is. Similar scenario: clean VM in VirtualBox, popped a physical DVD into my drive and shared it to VirtualBox, the empty VM booted and started the OpenSUSE setup. I got to fiddle with my partitions (yay!) but I chose not to. Again, nothing fun happened during install, it just worked. A couple of reboots later I was able to log in on my new machine, open the package installer, select everything Mono-related and click install. After 20 minutes or so everything was downloaded and installed, and I just went to the Start menu (or what do you guys call it? The non-Start menu?) → Applications → Development → Integrated Development Environment (MonoDevelop) and started to code. Very painless. Press play, off you go, and you have ASP.NET being hosted on a Linux machine.

Then I noticed that my window was a bit on the smallish side – I would have preferred a higher resolution, like the one I had on the Solaris VM – so I go into the yast2 thingy, change the display settings to something I consider appropriate and save them. I am instructed to restart the window manager, so I reboot the computer (yes, I know, but it involved less interaction) and log in again. Everything looks fine until, after 5 seconds, the display gets garbled and I’m thrown back out to the login screen. I try all kinds of failsafe settings but to no avail. Of course I could manually edit a conf file somewhere to solve the issue, but Google has yet to reaffirm its friendship with me by coming up with a link to a Q&A forum where the question has been answered.

So?

Back to the good side it is, where stuff just works. Of course you can also end up in situations where stuff just doesn’t work (oi! CRM4! There’s this new thing, Windows Server 2008. It’s been out for a good two years now!), but resources are more plentiful and, as an added bonus, you feel better cursing a big successful company than the makers of free software.