The constant battle between getting paid and reaching a wide audience has come to a head for a number of formerly open source products in the .NET space, as they move towards closed source paid licensing in future versions. Is this the end of enterprise C# development as we currently know it? If not – will we be OK?
Enterprise applications retrospective
I will tell this historical recapitulation as if it were all fact, but of course these are just my recollections, and I can’t be bothered to verify anything at the moment.
When I got started in the dark ages, developing software using Microsoft technologies was not something real developers did unless forced. Microsoft developers used VB, and were primarily junior office workers that had graduated from writing Excel macros or Access forms apps. The pros used C++. Or, more truthfully C/C++ – as people mostly wrote C with the occasional class sprinkled in. The obvious downside with the C part is that humans are not diligent enough to handle manual memory management, and C++ had yet to develop all the fancy memory safety it has now.
The solution came from Sun Microsystems. They invented Java, a language that was supposed to solve everything. It offered managed memory, i.e. it took much of the responsibility for memory management away from the developer. It also did not compile down to machine code directly; it compiled down to an intermediate language that a runtime could then quickly translate into machine code. This abstraction layer made it possible to write Java once and run it on any platform. This was attractive to many vendors of complex software such as database engines, who wanted to compete in the workstation market – Serious Computer Hardware for engineers and the like – where being able to sell the same software onto multiple hardware platforms was suddenly attractive, since no single platform in that space was ever going to have a large install base.
This was an outrageous success. Companies adopted Java immediately, there were bifurcations of the market, and open source Java Development Kits came about. Oracle got into the fray. C++ stopped being the default language for professional software development, and as a JVM-based ex-colleague of mine remarked, “it became the Cobol of our time”.
Microsoft saw this and wrote their own Java implementation for Windows (J++), but of course they used the embrace-extend-extinguish playbook and played fast and loose with the specification. Sun knew what was coming, so they immediately rounded up their lawyers.
Hurt and rejected, Microsoft backed off from joining the JVM family and instead hired Turbo Pascal creator Anders Hejlsberg to create a new language that would be a bit more grown up than VB but also friendlier to beginners than C++. Basically, the brief was to rip off Java, which is evident if you look at the .NET Base Class Library today.
Now, the reason for this retrospective is to explain cultural context. Windows came from DOS, a single-user operating system, which meant that Windows application security was deeply flawed – technically at first, but culturally for far longer. Everybody runs with administrator privileges in a way they would never daily-drive as root on Linux, to the point that Microsoft had to introduce an extra annoyance layer, UAC, on top of normal Windows security, because even if they implemented an equivalent to sudo, there would be no way culturally to get people to stop granting their user membership in the Local Administrators group.
The same pipeline – BASICA in DOS and VBA in Office feeding people to VB, which fed people to C#, always with the goal of a low barrier to entry and beginner-friendly documentation – has meant that there is an enormous volume of really poor engineering practice in the .NET developer space. If you google “ASP.NET C# login screen” you will have nightmares when you read the accepted answers.
Culture in the .NET world vs Java
In the Java space, meanwhile, huge strides were made. Before .NET developers even knew what unit tests were, enterprise Java had created test suites, runners, continuous integration, object-relational mappers, and all kinds of complex distributed systems that were the backbone of the biggest organisations on the internet. Not everything was a hit – Java browser applets died a death fairly quickly – but while Java development houses were exploring domain-driven design and microservices, Microsoft’s documentation still taught beginners about three-tier applications using BizTalk, WCF and Workflow Foundation.
Perhaps because Oracle and Sun were too busy suing everybody, perhaps because of a better understanding of what an ecosystem is, or perhaps just through blind luck, a lot of products were created on the JVM and allowed to live on. Some even saw broad adoption elsewhere (Jenkins and TeamCity for build servers, just to name a couple), while .NET never really grew out of enterprise line-of-business CRUD. Microsoft developers read MSDN and possibly ventured to ComponentSource, but largely stayed away from the wider ecosystem. Just like in the days when you could never be fired for buying IBM, companies were reluctant to buy third party components.

In the Windows and .NET world, only a few third party things ever really made it. ReSharper and Red Gate’s SQL Toolbelt can sell licenses the way almost no-one else can, but with every new version of Visual Studio, Microsoft steals more features from ReSharper to include by default, in a way that I don’t think would happen in the Java space. For one thing, there is no standard Java IDE: there are a few popular options, but no anointed main IDE that you must use (if I had to attempt Java programming I’d use IntelliJ IDEA out of brand loyalty, but that’s just me), while Microsoft has always had an ironclad first-party grip over its ecosystem.
Open Source seeping into .NET
There was an Alt.NET movement of contrarians that ported some popular Java libraries to .NET to try and build grown-up software on the platform, but before .NET Core it really seemed like nobody developing on .NET had ever had to consider performance, and project after project died a slow death. Sebastien Lambla created OpenWrap to try and provide a package manager for .NET developers. Enthusiasts within Microsoft created NuGet, which basically stole the concept. German enthusiasts created Paket in order to deal more successfully with dependency graphs, and there was a lot of animosity before Paket finally stopped being actively sabotaged by Microsoft. After NuGet saw broad adoption, and .NET developers heard about the previous decade’s worth of inventions in the Java space, there was an explosion of growth. Open source got in below the radar and allowed .NET developers to taste the rainbow without going through procurement.
Microsoft developers heard about Dependency Injection. We were so far behind Java’s container wars that by the time Java developers were joking about being caffeine-driven machines that turn XML into stack traces, our configuration files – if we even used them – would most likely have been JSON, because XML had fallen out of the mainstream between them discovering overcomplicated DI container configuration and us doing the same. However, since we did have proper generics – which Java did not get until much later – we were able to use fluent interfaces to overcomplicate our DI instead of configuration files. This was another awkward phase – one we will get back to later.
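To illustrate what that looked like, here is a minimal sketch of the fluent registration style, roughly in Castle Windsor’s flavour – the IOrderRepository and SqlOrderRepository types are hypothetical, invented purely for the example:

using Castle.MicroKernel.Registration;
using Castle.Windsor;

public interface IOrderRepository { }
public class SqlOrderRepository : IOrderRepository { }

public static class ContainerSetup
{
    public static IWindsorContainer Build()
    {
        var container = new WindsorContainer();
        // The fluent grammar reads nicely in isolation, but every container invented its own dialect.
        container.Register(
            Component.For<IOrderRepository>()
                     .ImplementedBy<SqlOrderRepository>()
                     .LifestyleTransient());
        return container;
    }
}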
We got to hear about unit tests. NUnit was a port of Java’s JUnit test framework, and Microsoft included MSTest with Visual Studio, but it was so awful that NUnit stayed strong. Out of spite, Microsoft pivoted to promote xUnit, so that even if MSTest always remained a failure, at least NUnit would suffer. xUnit is good, so it has defeated the rest.
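For anyone who has not seen one, a minimal xUnit test looks something like this – the Calculator class is hypothetical, made up just for the example:

using Xunit;

public static class Calculator
{
    public static int Add(int a, int b) => a + b;
}

public class CalculatorTests
{
    [Fact]
    public void Add_ReturnsTheSumOfItsArguments()
    {
        // Arrange, act, assert in its simplest form.
        var sum = Calculator.Add(2, 3);
        Assert.Equal(5, sum);
    }
}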
Meanwhile, Sun had been destroyed, Oracle had taken stewardship of Java, C# had eclipsed Java in terms of language design, and the hippest developers had abandoned Java for Ruby, Scala, Python and JavaScript(!). Kids were changing the world using Ruby on Rails and Node.js. Microsoft were stuck with an enormous cadre of enterprise LOB app developers, and blog posts titled “I’m leaving .NET” were trending.
The Force Awakens
Microsoft had internal problems with discipline, and some of the rogue agents that created NuGet proceeded to cause further trouble. Eventually, after decades of battles with the legal department, they managed to release some code as open source from within Microsoft, which was a massive cultural shift.
These rogue agents at Microsoft had seen Ruby on Rails and community efforts like FubuMVC, and became determined to retrofit a model-view-controller paradigm onto legacy ASP.NET’s page rendering model. The improvement of Razor views over the old ASPX pages was so great that adoption was immediate, and the ASP.NET MVC bit was in practice the first thing to be made open source.
The Ruby web framework Sinatra spawned a new .NET project called NancyFX that offered extremely light-weight web applications. Its creator, Andreas Håkansson, contributed to standardising OWIN, a common interface between .NET web servers and applications, but later developments would see Microsoft step away from this standard.
In this climate, although I am hazy about the exact reasons why, some troublemakers at Microsoft decided to build a new cross-platform implementation of .NET, called .NET Core. Its first killer app was ASP.NET Core, a new high-performance web framework that was supposed to steal Node.js’s lunch. It featured native support for middleware pipelines and would eventually move towards endpoint routing, which would have consequences for community efforts.
After an enormous investment in the new frameworks, and with contributions from the general public, .NET Core became good enough to use in anger, and it was a lot faster than anything Microsoft had ever built before. Additionally, you could now run your websites on Linux and save a fortune. Microsoft were just as happy to sell you compute on Azure to run Linux. It felt like a new world. While on the legacy .NET Framework the Container Wars had settled into trench warfare and a stalemate between Castle Windsor, Ninject and Unity among others, Microsoft created peace in our time by simply building a rudimentary DI system into .NET Core, de facto killing off one of the most successful open source ecosystems in .NET.
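For what it is worth, the built-in container covers the common cases with very little ceremony. A minimal sketch, with a hypothetical IGreeter service invented for the example:

using Microsoft.Extensions.DependencyInjection;

public interface IGreeter
{
    string Greet(string name);
}

public class Greeter : IGreeter
{
    public string Greet(string name) => $"Hello, {name}";
}

public static class Program
{
    public static void Main()
    {
        // Register, build, resolve: that is more or less the whole feature set.
        var services = new ServiceCollection();
        services.AddSingleton<IGreeter, Greeter>();

        using var provider = services.BuildServiceProvider();
        var greeter = provider.GetRequiredService<IGreeter>();
        System.Console.WriteLine(greeter.Greet("world"));
    }
}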
A few years into ASP.NET Core being a thing, the rogue agents within Microsoft introduced minimal APIs which, thanks to middleware and endpoint routing, basically struck a blow to NancyFX, and also left the F# web framework Giraffe blocked for about a year in the hope that it too would support OpenAPI/Swagger API documentation.
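For reference, a minimal API application on current ASP.NET Core looks roughly like this – the endpoint and header name are invented for the example:

// Program.cs in a minimal API project (ASP.NET Core 6 or later)
var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// A piece of middleware in the pipeline...
app.Use(async (context, next) =>
{
    context.Response.Headers["X-Example"] = "minimal-api-sketch";
    await next();
});

// ...and an endpoint wired up through endpoint routing.
app.MapGet("/hello/{name}", (string name) => Results.Ok($"Hello, {name}"));

app.Run();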
After we finally caught up on the SOLID principles that Java developers had been talking about for a decade, we knew we had to separate our concerns and use all the patterns in the Gang of Four. Thankfully MediatR came about, so we could separate our web endpoints from the code that actually did the thing: map the incoming request object to a DTO, then pass that DTO to MediatR, which would pick and execute the correct handler. Nice. The mapping would most commonly be done using AutoMapper. Both libraries are loved and hated in equal measure.
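A minimal sketch of that pattern, assuming the standard MediatR and AutoMapper APIs; the order-related types and handler are hypothetical, invented purely for illustration:

using System.Threading;
using System.Threading.Tasks;
using AutoMapper;
using MediatR;

// The request model the web endpoint receives.
public record CreateOrderRequest(string Sku, int Quantity);

// The DTO/command that gets dispatched.
public class CreateOrderCommand : IRequest<int>
{
    public string Sku { get; set; } = "";
    public int Quantity { get; set; }
}

// The handler that actually does the thing.
public class CreateOrderHandler : IRequestHandler<CreateOrderCommand, int>
{
    public Task<int> Handle(CreateOrderCommand command, CancellationToken cancellationToken)
    {
        // ... persist the order and return its id ...
        return Task.FromResult(42);
    }
}

// The AutoMapper profile translating the request into the command.
public class OrderMappingProfile : Profile
{
    public OrderMappingProfile()
    {
        CreateMap<CreateOrderRequest, CreateOrderCommand>();
    }
}

// The web endpoint: map, then dispatch.
public class OrdersEndpoint
{
    private readonly IMediator _mediator;
    private readonly IMapper _mapper;

    public OrdersEndpoint(IMediator mediator, IMapper mapper)
    {
        _mediator = mediator;
        _mapper = mapper;
    }

    public async Task<int> Post(CreateOrderRequest request)
    {
        var command = _mapper.Map<CreateOrderCommand>(request);
        return await _mediator.Send(command);
    }
}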
Before DevOps became trendy, developers in the .NET space were using TeamCity to run tests on code merged into source control and would have the build server produce a deployable package once all tests were green. When it was time to release, an ops person would either deploy it to production directly or approve access to a physical machine where the developer would carefully deploy the new software. If you had a particularly Microsoft-loyal CTO, you would at this time be running TFS as a build server and use Visual Studio’s built-in task management to track issues in TFS, but most firms had a more relaxed environment with TeamCity and Jira.
When the DevOps revolution came, Octopus Deploy allowed complex deployment automation directly out of TeamCity, enabling IT departments to do Continuous Deployment if they chose to. For those of us fortunate enough to only click buttons in Octopus Deploy it felt like the future, but the complexity of keeping those things going may have been vast.
I’ve looked at Cloud, from both sides now
Microsoft Azure has gone through a bunch of iterations. Initially, the goal was to try and compete with Amazon Web Services. Microsoft offered virtual machines and a few abstractions, like Web and Worker Roles and Blob Storage. Oh, and it was called Windows Azure to begin with.
A Danish startup called AppHarbor offered a .NET version of Heroku, i.e. a cloud application platform for .NET. This too felt like the future, and AppHarbor moved to the west coast of the USA and got funding.
Microsoft realised that this was a brilliant idea and created Azure Websites, offering PaaS within Azure. This was a shot below the waterline for AppHarbor, which finally shut down on 5 December 2022. After several iterations, this is now known as Azure App Service, and is more capable than AWS Lambda, which in turn is superior to Azure Functions.
Fundamentally, Azure was not great in the beginning. It exposed several shortcomings within Windows – it was basically the first time anybody had attempted to use Windows in anger at that kind of scale – so Microsoft were shocked to discover all the problems. After a tremendous investment in Windows Server, as well as, to a certain extent, giving up and running things like software-defined networking on Linux, Azure performance has improved.
Microsoft was not satisfied and wanted control over the software development lifecycle everywhere. More investment went into TFS, wrestling it into something that could be put in the cloud; to hide its origins they renamed it Azure DevOps. You could define build and deployment pipelines as YAML files, which finally was an improvement over TeamCity. The deployment pipelines were not as good as Octopus Deploy, but they were good enough that people abandoned TeamCity, Octopus Deploy and to some extent Jira, so that code, build pipelines and tickets all went to live in TFS/Azure DevOps.
To summarise: .NET developers are inherently suspicious of software that comes from third parties. Only a select few products pass the vibe check – and once they do, they become too successful and Microsoft turns on them.
Current State of Affairs
By the end of last year, I would say your run-of-the-mill C# house would build code in Visual Studio – hopefully with ReSharper installed – and keep source code, run tests, and track tickets in Azure DevOps. The code would use MediatR to dispatch commands to handlers, and AutoMapper to translate requests from the web front-end into something the handlers understand and to format responses back out to the web endpoint. Most likely data would be stored in an Azure-hosted SQL database, possibly with Cosmos DB for non-relational storage and Redis for caching. It is also highly likely that the unit tests used Fluent Assertions and Moq for mocks, because people like those.
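A typical test in that world looks something like the following sketch – the IOrderRepository and OrderService types are hypothetical, but the Moq and Fluent Assertions calls are the everyday ones:

using FluentAssertions;
using Moq;
using Xunit;

public interface IOrderRepository
{
    string GetName(int id);
}

public class OrderService
{
    private readonly IOrderRepository _repository;
    public OrderService(IOrderRepository repository) => _repository = repository;
    public string GetOrderName(int id) => _repository.GetName(id);
}

public class OrderServiceTests
{
    [Fact]
    public void GetOrderName_ReturnsTheNameFromTheRepository()
    {
        // Moq fakes the dependency...
        var repository = new Mock<IOrderRepository>();
        repository.Setup(r => r.GetName(42)).Returns("Widget");

        var sut = new OrderService(repository.Object);

        // ...and Fluent Assertions phrases the expectation.
        sut.GetOrderName(42).Should().Be("Widget");
    }
}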
Does everybody love this situation? Well, no. Tooling has improved so vastly over the years that you can easily navigate your way around your codebase with keyboard shortcuts, or a keyboard shortcut whilst clicking on an identifier. Except that when you are using MediatR and AutoMapper it all becomes complicated. Looking at AutoMapper mapping profiles, you wonder to yourself whether it wouldn’t have been more straightforward to map the classes manually and have some unit tests to prove that the mappings still work – something like the sketch below.
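A minimal sketch of that manual alternative, reusing the hypothetical CreateOrderRequest and CreateOrderCommand types from the MediatR example above:

using Xunit;

public static class OrderMappings
{
    // A hand-written mapping: trivially navigable, no reflection, nothing hidden.
    public static CreateOrderCommand ToCommand(this CreateOrderRequest request) =>
        new CreateOrderCommand
        {
            Sku = request.Sku,
            Quantity = request.Quantity
        };
}

public class OrderMappingsTests
{
    [Fact]
    public void ToCommand_CopiesAllFields()
    {
        var command = new CreateOrderRequest("ABC-123", 3).ToCommand();

        Assert.Equal("ABC-123", command.Sku);
        Assert.Equal(3, command.Quantity);
    }
}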
Fluent syntax, as mentioned in a couple of places above, was a fad in the early 2010s. You wrote a set of interfaces with methods that in turn returned objects exposing other interfaces, and you used these interfaces to build grammars. A fluent assertion library, for instance, could offer a set of extension methods that let you build assertion expressions directly off the back of a variable, like:
var result = _sut.CallMethod(Value);
result.Should().Not().BeNull();
The problem with fluent interfaces is that the grammar is not standardised, so you would type Is.NotNull, or Is().Not().Null, or any combination of the above, and after a number of years it all blends together. If you happen to have both a mocking library and an assertion library within your namespace, your IntelliSense will suggest all kinds of extension methods when you hit the full stop, and you are never quite sure if it is a filter expression for the mocking library or a constraint for the assertion library.
The Impending Apocalypse
Above I have described the difficulty of threading the needle between getting any traction in the .NET space and not becoming so successful that Microsoft decides to erase your company from the map. Generally, the audacity of open source contributors to want access to food and shelter is severely frowned upon on the internet. How dare they, is the general sentiment. After the success of NuGet and the avalanche of modern open source tooling becoming available to regular folks working the salt mines of .NET, GitHub issues have been flooded by entitled professional developers making demands of open source maintainers without any reciprocity in the form of a support contract or source code contributions. In fairness to these entitled developers, they are pawns in a game in which they have little agency over how they spend time or money: deciding to contribute a fix to an open source library could have detrimental effects on one’s employment status, and there would be no way to get approval to buy a support contract without having a senior manager lodge a ticket with procurement and endure the ensuing political consequences. To the open source maintainer, though, it is still a nightmare, regardless of people’s motivations.
A year ago, Moq shocked the developer community by including an obfuscated piece of tracking software, and more recently Fluent Assertions, MediatR, AutoMapper and MassTransit have announced that they will move towards a paid license.
Conclusion
What does this mean for everybody? Well… it depends. For an enterprise, the procurement dance starts, so heaven knows when there will be an outcome from that. Moving away from AutoMapper and MediatR is most likely too dramatic to consider, as by necessity these things become choke points where literally everything in the app goes through mapping or dynamic dispatch – however, there are alternatives. Most likely the open source versions will be forked and maintained by others, but given the general state of .NET open source, it is probably more likely that instead of almost market-wide adoption, you will see a much more selective user base going forward. I want all programmers to be able to eat, and I wish Jimmy Bogard all the success going forward. He has had an enormous impact on the world of software development and deserves all the success he can get.
Ever since watching “8 Lines of Code” by Greg Young I have been a luddite when it comes to various forms of magic, and a proponent of manual DI and mapping. My advice would be to just rip this stuff out and inject your handler through the constructor the old-fashioned way, as sketched below. Unless you have very specific requirements, I can almost guarantee that you will find it more readable and easier to maintain. Also, like Greg says – the friction is there for a reason. If your project becomes too big to understand without magic, that’s a sign to divide your solution into smaller deployable units.
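Concretely, still using the hypothetical order types from the earlier sketches, that could look like this – the same handler, now without the MediatR interface, injected directly:

using System.Threading.Tasks;

public class CreateOrderHandler
{
    public Task<int> Handle(CreateOrderCommand command)
    {
        // ... persist the order and return its id ...
        return Task.FromResult(42);
    }
}

public class OrdersEndpoint
{
    private readonly CreateOrderHandler _handler;

    // Plain constructor injection: no dispatcher, no reflection, go-to-definition lands on the handler.
    public OrdersEndpoint(CreateOrderHandler handler) => _handler = handler;

    public Task<int> Post(CreateOrderRequest request) =>
        _handler.Handle(request.ToCommand());
}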
The march continues: teams at Microsoft usurp more and more features from the wider .NET ecosystem and fold them into future versions of .NET, C# or Visual Studio, squeezing the life out of smaller companies along the way.
I would like to see more situations like Paket and xUnit, where Microsoft have stayed their hand and allowed the thing to exist. A coexistence of multiple valid solutions would be healthier for all parties; I just do not know how to bring that about. Developers in the .NET space remain loyal to first-party tooling only, with a very small number of exceptions. I think it is necessary to build a marketplace where developers can buy or subscribe to software tools and libraries, one that also offers software bill-of-materials type information so that managers can have some control over which licenses are in play, and perhaps gives developers a budget for how much they can spend on tools. That way enterprise devs can buy the tools they need, while offering the transparency that is needed for compliance, with controls that allow an employer to limit spending whilst simultaneously allowing tool developers to feed their kids. I mean, Steam exists, and that is a marketplace on a Microsoft platform. Imagine a similar thing, but for NuGet packages.