
Mob / Pair vs Solo and Speed

I have recently “thought led” on LinkedIn, claiming that the future of software development lies in mob programming. I think this take automatically flips my bozo bit in the minds of certain listeners, whilst for many people that is a statement about as revolutionary as saying water is wet.

Some definitions (based on my vague recollection and some lazy googling to verify – please let me know if you know better) of what I mean by mob and pair programming.

Solo development

This is what you think it is. Not usually completely alone in the dark wearing a hoodie like in the films, but at least you sit at your own screen, pick up a ticket, write your code, open a PR, tag your mates to review it, get a coffee, then go through and see if there are PRs from other developers for you to review. Rinse/repeat.

The benefit here is you get to have your own keyboard shortcuts, your own Spotify playlist, and you can respond immediately to chat messages from the boss. The downside is that regulators don’t trust developers – not alone – so you need someone else to check your work. We used to have a silo for testers, but like trying to season the food afterwards, it is impossible to retrofit quality, so we have modified our ways of working. Still, the pull request review queue remains a bottleneck, and if you are unlucky, you lose the “race” and need to resolve merge conflicts before your changes can be applied to the trunk of the source tree.

Pair programming

Origins

Pair programming is one of the OG practices of Extreme Programming (XP), developed in 1996 by Kent Beck, Ward Cunningham and Ron Jeffries, and later publicised in the book Extreme Programming Explained (Beck). It basically means one computer, two programmers. One person types – drives – while the other navigates. It makes it easier to retain context in the minds of both people, it is easier to recover state if you get interrupted, and you spread knowledge incredibly quickly. There are limitations, of course: if the navigator is disengaged, or if the two people have strong egos, you get unproductive discussions over syntactic preference – but that would have played out in pull requests/code review anyway, so at least this is resolved live. In practical terms this is rarely a problem.

Having two people work on code is much more efficient than reviewing after the fact. It is of course not a complete guarantee against defects, but it is pretty close. The only time I have worked in a development team that produced literally zero defects, pair programming was mandatory, change sets were small, and releases were frequent. We recruited a pair of developers who had already adopted these practices at a previous job, and in a chat with some of our developers ahead of joining they had mentioned that their teams had zero defects – and our people laughed, because surely that’s impossible. Then they showed us. Test first, pair program, release often. It works. There were still occasions where we had missed a requirement, but that was discovered before the code went live, and of course it still led us to evolve our ways of working until that didn’t happen either.

Downsides?

The most obvious naive observation would be: 2 developers, one computer – surely you get half the output? Now, typing speed is not the bottleneck when it comes to software development, but more importantly – code has no intrinsic value. The value is in the software delivering the right features at the least possible investment of time and money (whether in creation or maintenance), so writing the right code – including writing only code that differentiates your business from the competition – is a lot more important than writing the most code. Most people in the industry are aware of this simple fact, so generally the “efficiency loss” of having two people operating one computer is understood to be outweighed by delivering the right code faster.

On the human level, people rarely love having a backseat driver when coding at first: either you are self-conscious about your typing speed or your rate of typos, or you feel like you are slowing people down, but by rotating pairs and the driver/navigator roles frequently, the ice breaks quickly. You need a situation where a junior feels safe to challenge the choices of a senior, i.e. psychological safety, but that is generally true of any innovative and efficient workplace, so if you don’t have that – start there. Another niggle is that I am still looking for a way to do it frictionlessly online. It is doable over Teams, but it isn’t ideal. I have had very limited success with the collab features in VS Code and Visual Studio, but if they work for you – great!

Overall

People that have given it a proper go seem to almost universally agree on the benefits, even when it began as a thing forced upon them by an engineering manager. It does take a lot of mental effort: the normal breaks to think as you type get skipped because your navigator is completely on it, so you write the whole time, and similarly the navigator can keep the whole problem in mind without having to deal with browsing file trees or triggering compilations and test runs – they can focus on the next thing. All in all this means that after about 6-7 hours, you are done. Just give up, finish off the day writing documentation, reporting time, doing other admin and checking emails – because thinking about code will have ceased. By this time in the afternoon you will probably have pushed a piece of code into production, so it’s also a fantastic opportunity to get a snack and pat yourself on the back as the monitoring is all green and everything is working.

Mob programming

Origins

In 2011, a software development team at Hunter Industries happened upon mob programming as an evolution of practicing TDD and Coding Dojos, applying those techniques to get up to speed on a project that had been put on hold for several months. A gradual evolution of practices, as well as a daily inspection and adaptation cycle, resulted in the approach that is now known as Mob Programming.

In 2014, Woody Zuill described Mob Programming in an Experience Report at Agile2014, based on the experiences of his team at Hunter Industries.

Mob programming is next-level pair programming. Fundamentally, the team is seated together in one area. One person writes code at a time, usually projected or connected to a massive TV for everyone to see. Other computers are available for research, looking at logs or databases, but everyone stays in the room, both physically and mentally – nobody gets to sit at the table behind their own open laptop; the focus is on the big screen. People talk out loud and guide the work forward. Communication is direct.

Downsides

I mean, it is hard to go tell a manager that a whole team needs to book a conference room or secluded collaboration area and hang out there all day, every day, going forward – it seems like a ludicrously expensive meeting, and you want to expense an incredibly large flatscreen TV as well – are the Euros coming up or what? Let me guess, you want Sky Sports with that? All joking aside, the optics can be problematic, just like it was problematic getting developers multiple big monitors back in the day. At some companies you have to let your back problems become debilitating before you are allowed to create discord by getting a fancier chair than the rest of the populace – those dynamics can play in as well.

The same problems of fatigue from being on 100% of the time can appear in a mob, and because there are more people involved, the complexities grow. Making sure the whole team buys in ahead of time is crucial; it is not something that can be successfully imposed from above. However, again, people that have tried it properly seem to agree on its benefits. A possible compromise can be to pair on tickets, but code review in a mob.

Overall

The big leap in productivity here lies in the advent of AI. If you can mob on code design and construction, you can avoid reviewing massive PRs, evade the ensuing complex merge conflicts and instead safely deliver features often – with the help of AI agents, yet with a team of expert humans still in the loop. I am convinced a mob approach to AI-assisted software development is going to be a game changer.

Whole-team approach – origins?

The book The Mythical Man-Month came out in 1975, a fantastic year, and addresses a lot of mistakes around managing teamwork. Most famously, the book shows how and why adding new team members to speed up development actually slows things down. The thing I was fascinated by when I read it was essay 3, The Surgical Team: a proposal by Harlan Mills for something akin to a team of surgeons, with multiple specialised roles working together. Remember, at the time Brooks was collating the ideas in this book, CPU time was absurdly expensive and terminals were not yet a thing, so you wrote code on paper and handed it off to be hole-punched before that stack went off to an operator. Technicians wore a white coat when they went on site to service a mainframe so that people took them seriously. The archetypal Java developer in cargo shorts, t-shirt and a beard was still far away, at least from Europe.

The idea was basically to move from private art to public practice, and was founded on having a team of specialists that all worked together:

  • a surgeon – Mills calls him a chief programmer – basically the most senior developer
  • a copilot – a chief programmer in waiting, who basically acts as a sounding board
  • an administrator – room bookings, wages, holidays, HR [..]
  • an editor – technical writer that ensures all documentation is readable and discoverable
  • two secretaries that handle all communication from the team
  • a program clerk – a secretary that understands code and can organise the work product, i.e. manages the output files and basically does versioning, as well as keeping notes and records of recent runs – again, this was pre-git, pre-CI.
  • the toolsmith – basically maintains all the utilities the surgeon needs to do his or her job
  • the tester – classic QA
  • the language lawyer – basically a Staff Programmer that evaluates new techniques in spikes and comes back with new viable ways of working. This was intended as a shared role where one LL could serve multiple surgeons.

So – why was I fascinated? This is clear lunacy, you think – who has secretaries anymore?! Yes, clearly several of these roles have been usurped by tooling, such as the secretaries, the program clerk and the editor (unfortunately – I’d love having access to a proper technical writer). Parts of the Administrator’s job are sometimes handled by delivery leads, and few developers have to line manage, as it is seen as a separate skill. Although it still happens, it is not a requirement for a senior developer, but rather a role that a developer adopts in addition to their existing role as a form of personal development.

No, I liked the way the concept accepts that you need multiple flavours of people to make a good unit of software construction.

The idea of a Chief Programmer in a team is clearly unfit for a world where CPU time is peanuts compared to human time and programmers themselves are cheap as chips compared to surgeons, and the siloing effect of having only two people in a team understand the whole system is undesirable.

But in the actual act of software development – having one person behind the keyboard and a group of people behind them constantly thinking about different aspects of the problem being solved, each with their own niche, able to propose good tests to add, risks to consider and suitable mitigations – seen from a future where a lot of the typing is done by an AI agent, the concept really has legs. The potential for quick feedback and immediate help is perfect, and the disseminated context across the whole team lets you remain productive even if the occasional team member goes on leave for a few days. The obvious differences in technical context aside, it seems there was an embryo there of what has, through repeated experimentation and analysis, developed into the Mob Programming of today.

So what is the bottleneck then?

I keep writing that typing speed is not the bottleneck, so what is? Why is everything so bad out there?

Fundamentally, code is text. Back in the day you would write huge files of text and struggle not to overwrite each other’s changes. Eventually, code versioning came along, and you could “check out” code like a library book, and then only you could check that file back in. This was unsustainable when people went on annual leave and forgot to check their code back in, and eventually tooling improved to support merging code files automatically, with some success.

In some organisations you would have one team working on the next version of a piece of software, and another team maintaining the current version that was live. At the end of a year-long development cycle it would be time to spend a month integrating into the new version all the fixes that had been made to the old version over a whole year of teams working full time. Unless you have been involved in something like that, you cannot imagine how hard it is to do. Long-lived branches become a problem way before you hit a year; a couple of days is enough to make you question your life choices. And the time spent on integration is of literally zero value to the business. All you are doing is shoehorning changes already written into a state where the new version can be released – that whole month of work is waste. Not to mention the colossal load on testing it is to verify a year’s worth of features before going live.

People came up with Continuous Integration, where you agree to continuously integrate your changes into a common area, making sure that the source code is releasable and correct at all times. In practice this means you don’t get to have a branch live longer than a day: you have to merge your changes to the agreed integration area every day.

Now, CI – like Behaviour Driven Development – has come to mean a tool. That is, do we use continuous integration? Yeah, we have Azure DevOps – the same way BDD has become we use SpecSharp for acceptance tests. But I believe it is important to understand what the words really mean. I loathe the work involved in setting up a good grammar for a set of cucumber tests in the true sense of the word, but I love giving tests names that adhere to the BDD style, and I find that testers can understand what the tests do even if they are in C# instead of English.
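To make that concrete, here is a minimal sketch of the style I mean – a plain xUnit test whose method name reads as Given/When/Then. The Basket and Item types are invented for the illustration:

```csharp
using System.Collections.Generic;
using System.Linq;
using Xunit;

// Hypothetical domain types, just enough to make the test read well.
public record Item(string Name, decimal Price);

public class Basket
{
    private readonly List<Item> _items = new();
    public void Add(Item item) => _items.Add(item);
    public decimal Total => _items.Sum(i => i.Price);
}

public class BasketTests
{
    // The name carries the specification; a tester can read it without knowing C#.
    [Fact]
    public void Given_an_empty_basket_when_an_item_is_added_then_the_total_is_the_item_price()
    {
        var basket = new Basket();

        basket.Add(new Item("book", 40m));

        Assert.Equal(40m, basket.Total);
    }
}
```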

The point is, activities like the integration of long-lived branches and code reviews of large PRs become more difficult purely due to their size, and if you need to do any manual verification, working on a huge change set is exponentially more difficult than dealing with smaller ones.

But what about the world of AI? I believe the future will consist of programmers herding AI agents that do a lot of the actual typing and prototyping, and of regulators deeply worried about what this means for accountability and auditability.

The solution from legislators seems to be Human-in-the-Loop, and the only way to avoid the pitfalls of large change sets whilst giving the business the execution speed they have heard AI owes them is to modify our ways of working so that the output of a mob of programmers can be equated to reviewed code – because, let’s face it, it has been reviewed by a whole team of people. Regulators worry about singular rogue employees being able to push malicious code into production, and if anything this holds up well from a security perspective: an evildoer who previously only needed to bribe two developers would now have to bribe a whole team without getting exposed. Technically, of course, pushes would still need to be signed off by multiple people, for there to be accountability on record and to prevent malware from wreaking havoc, but that is a rather simple variation on existing workflows. The thing we are trying to avoid is an actual PR review queue holding up work, especially since reviewing a massive PR is what humans do worst.

Is this going to be straightforward? No, probably not. As with anything, we need to inspect and adapt – carefully observe what works and what does not – but I am fairly certain that the most highly productive teams of the future will have a workflow that incorporates a substantial share of mob programming.

Why does code rot?

The dichotomy

In popular parlance there are two categories of code: your own, freshly written code, which is the best code – code that will never be problematic – and then there is legacy code, which is someone else’s code: untested, undocumented and awful. Code gradually goes from good to legacy in ways that appear mystical, and in the end you change jobs, or they bring in new guys to do a Great Rewrite, with mixed results.

So, to paraphrase Baldrick from Blackadder Goes Forth: “The way I see it, these days there’s a [legacy code mess], right? And, ages ago, there wasn’t a [legacy code mess], right? So, there must have been a moment when there not being a [legacy code mess] went away, right? And there being a [legacy code mess] came along. So, what I want to know is: how did we get from the one case of affairs to the other case of affairs?”

The hungry ostrich

Why does code start to deteriorate? What precipitates the degradation that eventually leads to terminal decline? What is the first bubble of rust appearing by the wheel arches? This is hard to state generally, but the causes I have personally seen over the years boil down to being prevented from making changes in a defensible amount of time.

Coupling via schema – explicit or not

For example, it could be that you have another system accessing your storage directly. It doesn’t matter whether you are using schemaless storage or not: as long as two different codebases need to make sense of the same data, you have a schema whether you admit it or not, and at some point those systems will need to coordinate their changes so as not to break functionality.

Fundamentally – as soon as you start going “nah, I won’t remove/rename/change the type of that old column because I have no idea who still uses it” you are in trouble. Each storage must have one service in front of it that owns it, so that it can safely manage schema migrations, and anyone wanting to access that data needs to use a well-defined API to do so. The service maintainers can thereafter be held responsible for maintaining this API in perpetuity – and easily so, since the dependency is explicit and documented. If the other service just queried the storage directly, the maintainer would be completely unaware (yes, this goes for BI teams as well).
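As a rough illustration – not a prescription – here is what the owning service might look like as an ASP.NET Core minimal API. CustomerStore and the /customers endpoint are invented for the example; the point is that this service is the only thing that ever touches the storage:

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddSingleton<CustomerStore>();
var app = builder.Build();

// The well-defined API everyone else must use. The schema behind
// CustomerStore can now migrate freely without coordinating with consumers.
app.MapGet("/customers/{id:int}", (int id, CustomerStore store) =>
    store.Find(id) is { } customer ? Results.Ok(customer) : Results.NotFound());

app.Run();

// Stand-in for whatever actually talks to the storage.
public record Customer(int Id, string Name);

public class CustomerStore
{
    public Customer? Find(int id) => id == 1 ? new Customer(1, "Ada") : null;
}
```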

Barnacles settling

If every feature request leads to functions and classes growing as new code is added like barnacles, without regular refactoring to more effective patterns, the code gradually gets harder to change. This is commonly a side effect of high turnover or outsourcing. The developers do not feel empowered to make structural changes, or perhaps have not had enough time to get acquainted with the architecture as it was once intended. Make sure that whoever maintains your legacy code is fully aware of their responsibility to refactor as they go along.

Test after

When interviewing engineers it is very common that they say they “practice TDD, but…”, meaning they test after. At least for me, the quality is obviously different if I write the tests first versus if I get into the zone, write the feature first and then try to retrofit tests afterwards. Hint: there is usually a lot less mocking if you test first. As the tests get more complex, adding new code to a class under test gets harder, and if the developer does not feel empowered to refactor first, the tests are likely not to cover the added functionality properly – so perhaps a complex integration test is modified to validate the new code, or maybe the change is tested manually…
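A contrived sketch of the difference, using xUnit and Moq – all the types are invented for the example. The first test is what retrofitting often looks like (the logic grew entangled with the repository, so the test has to mock it), while writing the test first tends to squeeze the rule out into something pure:

```csharp
using Moq;
using Xunit;

public interface IOrderRepository { decimal GetPrice(int orderId); }

public class PricingService
{
    private readonly IOrderRepository _repo;
    public PricingService(IOrderRepository repo) => _repo = repo;
    public decimal TotalWithDiscount(int orderId, decimal discount)
        => Pricing.Apply(_repo.GetPrice(orderId), discount);
}

// The pure rule a test-first session tends to produce.
public static class Pricing
{
    public static decimal Apply(decimal price, decimal discount) => price * (1 - discount);
}

public class PricingTests
{
    // Test-after style: the infrastructure is in the way, so we mock it.
    [Fact]
    public void Discount_is_applied_to_the_repository_price()
    {
        var repo = new Mock<IOrderRepository>();
        repo.Setup(r => r.GetPrice(1)).Returns(40m);

        Assert.Equal(30m, new PricingService(repo.Object).TotalWithDiscount(1, 0.25m));
    }

    // Test-first style: no mock needed, because the rule is pure.
    [Fact]
    public void Given_a_price_and_a_quarter_discount_then_a_quarter_comes_off()
    {
        Assert.Equal(30m, Pricing.Apply(40m, discount: 0.25m));
    }
}
```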

Failure to accept Conway’s law

The reason people got hyped about microservices was the idea that you could deploy individual features independently of the rest of the organisation and the rest of the code. This is lovely, as long as you do it right. You can also go too granular, but in my experience that rarely happens. The problem that does happen is that separate teams have interests in the same code and modify the same bits, and releases can’t go out without a lot of coordination. If you also have poor automated test coverage you get a manual verification burden that further slows down releases. At your earliest convenience you must spend time restructuring your code, or at least the ownership of it, so that teams fully own all aspects of the thing they are responsible for and can release code independently, with any remaining cross-team dependencies made explicit and automatically verifiable.

Casual attitude towards breaking changes

If you have a monolith providing core features to your estate and you have a publicly accessible API operation, assume it is being used by somebody. Basically, if you must change its required parameters or its output, create a new versioned endpoint or one by a different name. Does this make things less messy? No, but at least you don’t break a consumer you don’t know about. Tech leads will hope that you message around to try and identify who uses it and coordinate a good outcome, but historically that seems to be too much to ask. We are only human, after all.
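A hedged sketch of what that looks like in an ASP.NET Core minimal API – the endpoints and payloads are made up for the example:

```csharp
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// The contract consumers already depend on stays exactly as they found it...
app.MapGet("/v1/orders/{id:int}", (int id) =>
    Results.Ok(new { id, status = "shipped" }));

// ...and the changed shape lives beside it under a new version, so nothing
// you don't know about breaks while you hunt down the remaining consumers.
app.MapGet("/v2/orders/{id:guid}", (Guid id) =>
    Results.Ok(new { id, status = "shipped", carrier = "unknown" }));

app.Run();
```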

Until you have PACT tests for everything, and solid coverage, never break a public method.

Outside of support horizon

Initially it does not seem that bad to be stuck on a slightly unsupported version of a library, but as time moves on, all of a sudden you are stuck for a week with a zero-day bug that you can’t patch because three other libraries are out of date and contain breaking changes. It is much better to be ready to make changes as you go along. One breaking change at a time usually leaves you options, but when you are already exposed to a potential security breach, you have to make bad decisions due to lack of time.

Complex releases

Finally, it is worth mentioning that you want to avoid manual steps in your releases. Today there is really no excuse for making a release more complex than one button click. Ideally, abstract away configuration so that there is no file.prod.config template separate from file.uat.config – or else that prod template file is almost guaranteed to break the release, much like the grille was the only thing rusting on the Rover 400, a car that was otherwise almost completely a Honda (except for the grille).
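One way to get there – a minimal sketch, assuming the pipeline injects settings as environment variables (the variable name is invented) – is to ship one artifact everywhere and fail fast when configuration is missing, rather than templating per-environment files:

```csharp
var builder = WebApplication.CreateBuilder(args);

// The same binary runs in every environment; only the injected values differ.
// ORDERS_DB_CONNECTION is a hypothetical variable set by the deployment pipeline.
string connectionString =
    builder.Configuration["ORDERS_DB_CONNECTION"]
    ?? throw new InvalidOperationException(
        "ORDERS_DB_CONNECTION must be provided by the environment");

var app = builder.Build();
app.MapGet("/health", () => Results.Ok("healthy"));
app.Run();
```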

Stopping Princip

So how do we avoid the decline, the rot? As with shifting quality and security left, these problems are much cheaper to address the earlier you spot them, so if you find yourself in any of the situations above, address them with haste.

  1. Avoid engaging “maintenance developers”; their remit may explicitly prevent them from doing major refactoring even when necessary.
  2. Keep assigning resources to keep dependencies updated. Use dependency scanning (SCA, rather than SAST proper) to validate that your dependencies are not vulnerable.
  3. Disallow and remove integration-by-database at any cost. This is hard to fix, but worth it. This alone solves 90% of the niggling small problems you are continuously having, as you can fix your data structure to fit your current problems rather than the ones you had 15 years ago. If you cannot create a true data platform for reporting data, at least define agreed views/indexes that can act as an interface for external consumers. That way you have a layer of abstraction between external consumers and yourself, and you stay free to refactor as long as you make sure that the views still work.
  4. Make dependencies explicit. Ideally PACT tests, but if not that, at least integration tests (see the sketch below this list). This way you avoid needing shared integration environments where teams are shocked to find out that the changes they have been working on for two weeks break some other piece of software they didn’t know existed.
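For the integration-test fallback in point 4, something like this sketch works in ASP.NET Core using Microsoft.AspNetCore.Mvc.Testing. The endpoint is hypothetical, and it assumes the API project exposes its Program class to the test project (for minimal APIs, a public partial class Program {} declaration does it):

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc.Testing;
using Xunit;

public class OrdersContractTests : IClassFixture<WebApplicationFactory<Program>>
{
    private readonly HttpClient _client;

    public OrdersContractTests(WebApplicationFactory<Program> factory)
        => _client = factory.CreateClient();

    // The assertion is thin on purpose: the value is that the dependency is
    // now written down and verified on every build - no shared environment,
    // no two-week surprise.
    [Fact]
    public async Task Given_a_known_order_when_fetched_then_the_v1_contract_still_answers()
    {
        var response = await _client.GetAsync("/v1/orders/1"); // hypothetical endpoint
        response.EnsureSuccessStatusCode();
    }
}
```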

Busman’s Holiday

Rusted bolts vs a pristine manifold

I used to spend time with automotively inclined gentlemen. There were two distinct schools of the car hobby at that time. Finbilsmek, i.e. renovating a classic car or preparing a race car – sure, it eats all your money in parts, but you get to listen to music and carefully admire your new components as you fit them to your clean project car – unless it’s the weekend before the race, when the stress level is high. The other school is bruksbilsmek, i.e. fixing your daily driver. It is the night before the MOT, it’s by the side of the road, it’s with a subset of your tools in a Halfords car park. Only if you are lucky does it take place in your garage, on a car lift – and even if you are that lucky, salt and grime are constantly falling in your face, and if you fail to sort the problem it will have a massive impact on your daily life.

A similar thing exists in IT. If you are tinkering with your computer at home you have time to google bits, listen to music, type random stuff and see if it works. Worst case you just wipe it and start over. It’s enjoyable to install some weird hardware or software and try to get it going.

However, if your work laptop starts having problems, or a thing that you need to sort out for work is broken, the enjoyment goes away and there is only rage. Therefore, at least at my age, I wouldn’t build a computer for work, nor do I have any wish to maintain the operating system or mess with networking or access rights – there are pros that do that stuff and keep abreast of all the bulletins about which security holes out there have been patched. I happily let them worry about it; I just accept their vetted upgrades and make sure I restart when I’m asked to.

Baby-proofing a laptop

This is why I’m not principally against working in a baby-proofed environment, i.e. one where you as a developer do not have true admin rights, have no access to customer data and have no direct access to production. I would love that – as long as it still meant I could install everything I need to work, test all my logic locally (code, deployment, monitoring – all of it) and have all my developer tools work. Having all the networking, patching of servers, provisioning of resources and testing of patches magically taken care of by someone else is very nice indeed, allowing me to focus on delivering trustworthy code – which I’m sure sounds super boring to others.

Unfortunately, achieving such an environment – a baby-proofed one – requires a lot of engineering. We would like to be in a situation where a company onboards someone and, without any manual intervention whatsoever, they get a user account provisioned with all the correct group memberships and access, and after plugging in the laptop, setting up MFA, locking the screen and going for coffee – all necessary apps will be installed onto the laptop, ready for immediate productivity. That would require a lot of cooperation between HR, ops, dev and procurement, plus enough resources to implement and test all aspects of this – and everyone involved would already have a day job, so this would be extra.

Root of all evil

The biggest technical obstacle that makes developers special is that developers use software that needs to attach a debugger to a process, and to open ports, i.e. listen for incoming traffic/requests – which is what a web app is. An operating system thinks these are dangerous things. Generally you get to listen on some ports with high numbers, but “well-known” ports require admin access – i.e. you can’t open ports 80 and 443 without admin access, because it would be dangerous if some random code tried to play web server. Attaching a debugger is even more dangerous: you literally have access to all of the process’s memory. You could read any secrets you wanted out of there, so – yeah – not something you get to do without admin access. Opening ports on high numbers was not a problem at the time, but in some cases you still needed to attach a debugger to IIS, which required admin access.
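A small sketch of the well-known-ports rule, using plain sockets. This demonstrates the unix-like semantics, where ports below 1024 are privileged; on Windows the enforcement sits in http.sys URL ACLs rather than in the port number itself:

```csharp
using System;
using System.Net;
using System.Net.Sockets;

// Port 80 is a "well-known" port: binding it as an ordinary user fails.
try
{
    var privileged = new TcpListener(IPAddress.Loopback, 80);
    privileged.Start(); // throws SocketException (permission denied) without elevation
    privileged.Stop();
}
catch (SocketException ex)
{
    Console.WriteLine($"Port 80 refused: {ex.Message}");
}

// A high-numbered port needs no elevation at all.
var unprivileged = new TcpListener(IPAddress.Loopback, 8080);
unprivileged.Start();
Console.WriteLine("Listening on 8080 as an ordinary user.");
unprivileged.Stop();
```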

On unix-like operating systems, which were multi-user aware from the beginning, there has been a culture of creating your own user for day-to-day work and keeping an admin account called root that you only use for things the operating system thinks are serious, like writing to the /etc directory or running programs in /sbin. Later the concept of sudo arrived, where you basically give accounts the opportunity to temporarily acquire root privileges after typing in their own password again, meaning you can delegate the right to install software without permanently giving the user elevated rights or giving them a root password. Also, the need to type in the password makes it harder to abuse by trickery, but by no means is it bulletproof.

Windows came from DOS, a single-user operating system. Although the Windows NT kernel has a decent security design, the culture among Windows users was generally that you just put yourself in the Administrators group when you installed your computer, and you were “root” and life was easy. The lax security culture meant that many apps simply could not function if the user was not part of the Administrators group, so there was evidently no practical adoption of healthy practices. Windows machines were extremely susceptible to malware, and as popular as Windows XP was, something had to be done. When Windows Vista came, the most hated new feature was User Account Control (UAC), a new layer of obstinance on top of Windows security, meaning the operating system threw a popup in your face when you did something risky – like opening any port at any number, or writing files to suspicious folders, such as editing C:\Windows\System32\drivers\etc\hosts – the Windows version of /etc/hosts.

People hated UAC, and the new thing people did directly after installing was to add themselves to Administrators and switch off UAC. But, annoyingly, you couldn’t argue with the results: the spread of malware was slowed down quite dramatically. Effectively, UAC was a bolt-on sudo copy that just made you click on something to confirm. If you didn’t have access rights, it would ask you to type in credentials that did have the power to approve the action. This meant that corporations started to give you a separate admin account that only worked on your machine, but gave you enough rights to open ports or install programs. An analogue to sudo, but more cumbersome.

Windows 7 made UAC back off a bit to increase adoption, and the results continued to be impressive. However – although Microsoft built a simple web server for development, IIS Express, that didn’t require administrative access when debugging – UAC would still sometimes ask you for approval to start things like an Android emulator, an Azure Storage emulator or even an Azure Functions host, thus still requiring users to have some way of elevating, i.e. typing in admin credentials just to do work. This has to be addressed if we are to move into the glorious future where developers are fully embedded in a padded cell where we can do no harm.

Forbidden knowledge

At Netflix, among other places, they devised a way to provide an ether of configuration that apps can just absorb: the app announces who it is and receives its configuration, i.e. you remove the problem of needing to know how the production environment is set up – you just ask for things and they are provided. That way apps can be secured and configured without any knowledge of the production environment leaking out to developers.
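As a purely hypothetical sketch of the shape of the idea (the endpoint, payload and variable names are all invented – this is not Netflix’s actual API): the app states its identity and gets its settings handed back, never learning how production is laid out:

```csharp
using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Net.Http.Json;

// The app announces who it is...
var appName = Environment.GetEnvironmentVariable("APP_NAME") ?? "orders-service";

// ...and absorbs its configuration from the platform's "ether".
using var http = new HttpClient();
var settings = await http.GetFromJsonAsync<Dictionary<string, string>>(
    $"http://config.internal/apps/{appName}"); // hypothetical platform endpoint

foreach (var (key, _) in settings ?? new Dictionary<string, string>())
    Console.WriteLine($"received setting: {key}"); // values deliberately kept out of logs
```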

Containerisation lets us effectively ship a little egg of code into production, with a defined contract of what the application needs from the outside world. Combine this with a sidecar as above that handles communication between services, and you achieve the perfect state of developers being safely prevented from knowing anything concrete about how the production environment is configured, yet being able to deliver tested apps into production.

The biggest obstacle here is leaky abstractions. DAPR, for instance, promises to abstract away how things like message queues work, but it doesn’t actually deliver on that: you cannot locally test something with the Redis message broker or RabbitMQ that you intend to run on Azure Service Bus in prod. You need to be able to integration test automatically, or else it is unacceptable. The tests need to be able to run realistically in every environment.

Let me VNC onto the server

Back in the day, when VMs were commonly used for hosting websites, you sometimes had to log into a virtual server and look in eventvwr.exe to see what was actively going wrong – maybe a particular executable was eating all the memory and needed a bit of encouragement to get over itself. This type of access is of course dangerous to have, and it would now be nearly unheard of for a developer to have this type of access to production hardware even when troubleshooting; instead there are alerts that automatically destroy an instance of an app that is misbehaving, having already spun up a replacement. In the rare cases where you still need to use a VM, you install agents on it that allow people to perform certain maintenance tasks without ever logging in. Fundamentally this has been solved in the way I foresee all of this being solved: by abstracting away the problem.

Conclusion

We are closer than ever to utopia, and the level of hand-cranking required to reach nirvana is lower than ever, but there is still too much manual effort required. There is plenty of scope for disruption. A cocoon world for developers that allows for low-faff developing and testing of containerised apps, with the ability to conclusively prove that monitoring and dependency acquisition work locally before pushing the code to CI, is a minimum. This, depending on your cloud provider, is still anything from impossible to a massive PITA. According to a quick search there are new IAM solutions that look like they offer identity and app provisioning in a seamless way, so the future is on its way, somehow.

You can have nice things

I have come across a few things that are legitimately pleasant to use, so I thought I should collate them here to aid my aging memory. Dear reader, I am not attempting to copy Scott Hanselman’s tools list, I am stealing the concept.

Github Actions

Yeah, not something revolutionary I just uncovered that you have never heard of before, but still – it’s pretty great. Out of all the yet-another-yet-another-markup-language-configuration-file-to-configure-a-thing tools that exist to help you orchestrate builds, I personally find Github Actions the least weirdly magical and the easiest to live with, but then I’ve only tried CircleCI, Azure DevOps/TFS and TeamCity.

Pulumi – Infrastructure as code

Write your infrastructure code in C# using Pulumi. It supports Azure, AWS, Google Cloud and Kubernetes, but – as I’ve ranted about before – this shouldn’t be taken as a way to support multi-cloud; the object hierarchy is still very bespoke to each cloud provider. That said, you can mix and match providers in a stack – let’s say you have your DNS hosted in DNSimple but your cloud compute bits in Azure. You would otherwise be stuck doing a lot of bash scripting to make that work, but Pulumi lets you write one C# file that describes all of your infra, mostly.
You will recognise the feel of using it from Chef: basically you write code that describes the infrastructure, but the actual construction isn’t happening as the code runs – first the description is made, then the desired state is compared to the actual running state, and adjustments are made. Many of its providers are bridged from Terraform providers, and it does what it says on the tin.
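For flavour, roughly what a small stack looks like with the Azure Native provider (the resource names are placeholders; check the current Pulumi docs for exact namespaces):

```csharp
using Pulumi;
using Pulumi.AzureNative.Resources;
using Pulumi.AzureNative.Storage;

// You describe the desired state; Pulumi diffs it against what is actually
// running and applies only the difference.
return await Deployment.RunAsync(() =>
{
    var resourceGroup = new ResourceGroup("app-rg");

    var storage = new StorageAccount("appstorage", new StorageAccountArgs
    {
        ResourceGroupName = resourceGroup.Name,
        Kind = Kind.StorageV2,
        Sku = new Pulumi.AzureNative.Storage.Inputs.SkuArgs
        {
            Name = SkuName.Standard_LRS,
        },
    });
});
```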

MinVer – automagic versioning for .NET Core

At some point you will write your own build-chain hack to populate some attributes on your assembly, to stamp a brand on a binary so you can display a version on your site that you can track back to a specific commit. The simplest way of doing this, without needing to change branching strategy or write custom code, is MinVer.

It literally walks back through your commits to find your version tags, and then increments that version by the number of commits since the tagged one. It is what I dreamed would be out there when I started looking. It is genius.

A couple of gotchas: it relies – duh – on having access to the git history, so you need to remember to remove .git from your .dockerignore file, or else your dotnet publish inside docker build will fail to locate any version information. Obviously, unless you intend to release all versions of your source code in the docker image, make sure you have a staged docker build – this is the default in recent Visual Studio templates – but still. I encourage you in any case to poke around your finished docker image using docker run -it --entrypoint sh imagename:tag to check that it contains what you expect.

Also, in your GitHub Actions you will need to allow a deeper fetch depth for the build to have enough history to calculate the version number, but that is mentioned in the documentation. I already used a tag prefix ‘v’ for my versions, so I had to tell MinVer about it in my project files (the MinVerTagPrefix MSBuild property). No problems, it just worked. Very impressed.

Simple vs “Simple”

F# has two key features that make the code very compact: significant whitespace and forced compilation order.

Significant whitespace removes the need for curly braces or keywords such as begin/end. Forced compilation order means your dependencies have to be declared in code files above yours in the project file. This gives your F# projects a natural reading order that transcends individual style.

Now there is a UserVoice suggestion that the enforced compile order be removed.

I think this is a good idea. I am against project files. As soon as you have three or more developers working in the same group of files, any one file used to maintain state is bound to become a source of merge conflicts and strife. Just look at C#.

I am sure your IDE could evaluate the dependency order of your files and present them in that order for you – heck, one could probably make a CLI tool to show that same information if the navigational benefits of the current order are what is holding you back. Let us break out of IDE-centric languages and allow programs to be defined in code rather than config.

Rebootcamp

I have been saying a bunch of things, repeating what others say, mostly, without ever actually internalising what they really meant. After a week with Fred George and Tom Scott I have seen the light in some way. I have seen proof of the efficacy of pair programming, I have seen the value of fast red-green-refactor cycles, and most importantly I have learnt just how much I don’t know.

This was an Object Bootcamp developed by Fred George and Deliberate, and it basically consisted of problem solving in pairs, going over various patterns and OO design in general, pointing out various code smells to look out for, and showing how to refactor your way out of trouble. The course packed in as much as the team could take over the course of the week and is highly recommended. Our finest OO developers in the team still learned new things over the week, and the rest of us learned even more.

Where to go from here? I use this blog as a way to write down things I learn so I can reference them later. My fanbase tends to stick to my posts about NHibernate and ASP.NET MVC 3 or something else from several years ago, so I need not worry about making things fresh and interesting for the readership. The general recommended reading list that came out of this week reads as follows:

  • Design Patterns (the GoF book)
  • Refactoring (Fowler)
  • Analysis Patterns (Fowler)
  • Refactoring to Patterns (Kerievsky)

So, GoF and Refactoring – no shockers, eh? We have them in our library and I’ve even read them, though I first read some derivative book on design patterns back in the day, and obviously there are things that didn’t quite take the first time. I guess I was too young. Things make so much more sense now that you have a catalogue of past mistakes to cross-reference against the various patterns.

The thing is, what I hadn’t internalised properly is how evil getters and setters are. I had some separation of concerns in terms of separating database classes from model classes, but the classes still didn’t instantiate good objects; they were basically just bags of data, and mediator classes held the business logic, messing with other classes’ data instead of proper objects churning cleanly.

Encapsulating information in the system is crucial. It is hard to do correctly, but by timeboxing the time from red to green you force yourself to build the next simplest clean thing before you continue. There is no time for gold plating, and boy, do you veer off and try something clever, only to realise that you needed to stop and go back. Small changes. I have written this so many times before, but if you do it properly it really works. I have seen JB Rainsberger and Greg Young talk about this, and I have nodded and said sure. Testify! “That would be nice to get to do in practice” was my thinking. And then I added getters and setters to my classes. Or at least made them anemic by having a constructor with parameters and then getters, used by demigod classes. The time to make a change is yesterday, not tomorrow.

So, yes. Analysis Patterns is a hard read, said Fred. Well, then. It seems extremely interesting. I think Refactoring to Patterns will be the very next thing I read, but then I will need to take a stab at it.

I need to learn where patterns could get rid of code smells, increase encapsulation and reduce complexity.

There is a handy catalogue of refactorings that I already have a shortcut to in the Chrome toolbar. It gets a lot more clicks now. In general, though, I will not make any grand statements here, but rather come back with a post showing results.

Early Bird campaign ends in two weeks, Aug 15


We are proud of the program we have put together for Øredev 2013 and we are quite confident you will find the content interesting enough to at least consider coming to Malmö in November.

The thing is, our very lucrative early bird campaign is ending in two weeks, so if you are on the fence, this is the time to pull the trigger and make that reservation. It is easier to beg accounting for forgiveness than to ask for permission. It can’t hurt to at least ask.