Category Archives: C#

.NET C# CI/CD in Docker

Works on my machine-as-a-service

When building software in the modern workplace, you want to automatically test and statically analyse your code before pushing it to production. This means that rather than tens of test environments and an army of manual testers, you have a bunch of automation that runs as close as possible to the moment the code is written. Tests are run, test coverage is calculated, test results are published to the build server user interface (so that in the event that – heaven forbid – tests are broken, the developer gets as much detail as possible to resolve the problem), and static analysis of the built piece of software is performed to make sure no known problematic code has been introduced by ourselves, and that the dependencies we pull in are free from known vulnerabilities.

The classic Dockerfile added when you create an ASP.NET Core Web API project features a multi-stage build layout where an initial layer includes the full .NET SDK, and this is where the code is built and published. The next layer is based on the lightweight .NET runtime; the output directory from the build layer is copied in and the entrypoint is configured so that the website starts when you run the finished Docker image.

Even tried multi

Multistage builds were a huge deal when they were introduced. You get one Docker image that only contains the things you need; any source code is safely binned off in other layers that – sure – are cached, but don’t exist outside the local Docker host on the build agent. If you then push the finished image to a repository, none of the source will come along. In the before times you had to solve this with multiple Dockerfiles, which is quite undesirable. You want high cohesion but low coupling, and fiddling with multiple Dockerfiles when doing things like upgrading versions does not give you a premium experience and invites errors to an unnecessary degree.

Where is the evidence?

Now, when you go to Azure DevOps, GitHub Actions or CircleCI to find out what went wrong with your build, the test results are available because the test runner has produced and provided output in a format that particular build service can understand. If your test runner is not forthcoming with that information, all you will know is “computer says no” and you will have to trawl through console output – if even that – and that is not the way to improve your day.

So – what – what do we need? Well we need the formatted test output. Luckily dotnet test will give it to us if we ask it nicely.

The only problem is that those files will stay on the image that we are binning – you know multistage builds and all that – since we don’t want these files to show up in the finished supposedly slim article.

Old world Docker

When a Docker image is built, every relevant change will create a new layer, and eventually a final image is created and published that is an amalgamation of all the constituent layers. In the olden days, the legacy builder would cache all of the intermediate layers and publish a hash in the output so that you could refer back to intermediate layers should you so choose.

This seems like the perfect way of forensically finding the test result files we need. Let’s add a LABEL so that we can find the correct layer after the fact, copy the test data output and push it to the build server.

FROM mcr.microsoft.com/dotnet/aspnet:7.0-bullseye-slim AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:7.0-bullseye-slim AS build
WORKDIR /
COPY ["src/webapp/webapp.csproj", "/src/webapp/"]
COPY ["src/classlib/classlib.csproj", "/src/classlib/"]
COPY ["test/classlib.tests/classlib.tests.csproj", "/test/classlib.tests/"]
# restore for all projects
RUN dotnet restore src/webapp/webapp.csproj
RUN dotnet restore src/classlib/classlib.csproj
RUN dotnet restore test/classlib.tests/classlib.tests.csproj
COPY . .
# test
# install the report generator tool
RUN dotnet tool install dotnet-reportgenerator-globaltool --version 5.1.20 --tool-path /tools
RUN dotnet test --results-directory /testresults --logger "trx;LogFileName=test_results.xml" /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura /p:CoverletOutput=/testresults/coverage/ /test/classlib.tests/classlib.tests.csproj
LABEL test=true
# generate html reports using report generator tool
RUN /tools/reportgenerator "-reports:/testresults/coverage/coverage.cobertura.xml" "-targetdir:/testresults/coverage/reports" "-reporttypes:HTMLInline;HTMLChart"
RUN ls -la /testresults/coverage/reports
 
ARG BUILD_TYPE="Release" 
RUN dotnet publish src/webapp/webapp.csproj -c $BUILD_TYPE -o /app/publish
# Package the published code as a zip file, perhaps? Push it to a SAST?
# Bottom line is, anything you want to extract forensically from this build
# process is done in the build layer.
FROM base AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "webapp.dll"]

The way you would leverage this test output is by fishing the temporary layer out of the cache and creating a container from it, from which you can do plain file operations.

# docker images --filter "label=test=true"
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
<none>       <none>    0d90f1a9ad32   40 minutes ago   3.16GB
# export id=$(docker images --filter "label=test=true" -q | head -1)
# docker create --name testcontainer $id
# docker cp testcontainer:/testresults ./testresults
# docker rm testcontainer

All our problems are solved. Wrap this in a script and you’re done. I did, I mean they did, I stole this from another blog.

Unfortunately keeping an endless archive of temporary, orphaned layers became a performance and storage bottleneck for docker, so – sadly – the Modern Era began with some optimisations that rendered this method impossible.

The Modern Era of BuildKit

Since intermediate layers are mostly useless, just letting them fall by the wayside and focusing on the actual output was much more efficient according to the powers that be. Using multistage Dockerfiles to additionally produce test data output was not recommended or recognised as a valid use case.

So what to do? Well – there is a new command called docker buildx bake that lets you run docker build for multiple images at once, or – most importantly – build multiple targets from the same Dockerfile.

This means you can run one build all the way through to produce the final lightweight image, and also have a second run that saves the intermediary image full of test results. Obviously the Docker cache will make sure nothing is actually built twice; the second run is just about picking the layer out of the cache and making it accessible.

The Correct way of using bake is to write a bake file in HCL format:

group "default" {
  targets = [ "webapp", "webapp-test" ]
}
target "webapp" {
  output = [ "type=docker" ]
  dockerfile = "src/webapp/Dockerfile"
}
target "webapp-test" {
  output = [ "type=image" ]
  dockerfile = "src/webapp/Dockerfile"
  target = "build"
} 

If you run this with docker buildx bake -f docker-bake.hcl, you will be able to fish out the intermediary layer using the method described above.

Conclusion

So – using this mechanism you get a minimal number of Dockerfiles, you get all the build gubbins happening inside Docker – giving you freedom from whatever limitations plague your build agent – yet the bloated mess that is the build process is automagically discarded and forgotten as you march on into your bright future with a lightweight finished image.

The Power of Sample Code

What is wrong with OOP?

In the culture wars between “Object Oriented Programming” and Functional Programming, you will find proponents of OOP who argue that we are doing fine – why should we change? – and proponents of FP who list a litany of problems inherent in what we are doing today and point to the ways FP solves them. Having once attended an Object Bootcamp with Fred George, I believe the two main schools of thought are both wrong. All the problems listed by the FP peeps are real, but they are not inherent in OOP – in fact OOP addresses a few of them – it is just that we as an industry are not doing OOP.

I may feel Fred George is the Messiah, but he is not alone in his views. Greg Young has similar concerns.

Inheritance is not the Big Deal

I am old enough to remember Borland C++ ads from the 90s. They focused a lot on inheritance, and reuse through inheritance became the USP for object oriented languages.

As soon as you have written some code though, you realise inheritance is the worst, as it creates undue coupling, making changes very hard to implement.

When Borland made those ads about how the Porsche Turbo inherited from the Carrera but implemented a big fat rear wing, they had begun their foray into C++ because it offered a way to handle the substantial boilerplate involved in writing a program for Windows. It was relatively straightforward to implement the basics and create a usable abstraction on top of the raw Windows API that made the developer experience much more pleasant.

As visual designers became a thing, people wanted a way to map properties to code, so that UI components (those things implemented as objects we mentioned above) could be manipulated by a developer in design mode. “Property” setters – basically syntactic sugar disguising normal functions – allowed the visual designer to read settings from the object and replace them with what the developer types in. With this work, Borland and Microsoft were trying to catch up with InterfaceBuilder from NeXT Computer (the same thing that lives on today in the Apple macOS/iOS SDK), which had bolted a different type system on top of C, called it Objective C, and had a world-leading visual designer at the time. Anyway, I think they were in a hurry and didn’t think things through.

Approaches to deal with big programs

In a large codebase, the big problem is achieving low coupling but high cohesion. This means, you want all the code that belongs together to live together but you don’t want to have to make changes in seemingly unrelated code to modify a piece of functionality.

In large programs of old, you could call any subroutine from anywhere else, and many resources were shared, meaning that between the time you set a value in a variable and the time you read it back, some piece of code in between could have modified the value, and you had no automatic way of knowing where that access was made or how to prevent it.

In FP we use modules for scoping, meaning you group functions into modules to aid readability, but the key concept, the Big Idea, is immutability. After a value is created it exists globally, but since values are read-only once created, the drawbacks of global state go away. There is no way to change something that somebody else relies on. You can transform it into a new thing that you need, but the original value hangs around until it is no longer needed. It is harder to accidentally break other code with the changes you are making.

The Big Idea in Object oriented development is Encapsulation. You put the data with the code and manipulate abstractions. This means that if you get your abstractions right, you can change or replace these abstractions without needing to make sweeping changes in the codebase.

The original concept of object orientation relied on independent small sub-programs that communicated by message passing, implicitly imagining something like an “in tray” of messages that the object could process at its own pace and then send a response when the work was completed. However, objects in C++, Java and C# were implemented as special dynamically allocated structs to which you make function calls, i.e. they became decidedly more synchronous than they were in Smalltalk or Simula. You would recognise Erlang processes and actors as looking more like OG objects. You can also see that what made objects useful was that they shared properties we today associate with the term microservices, but on a smaller scale.

So what’s the problem, and what’s up with the title of this blog post?

Java, and C# arguably even more so, took the Big Idea and tossed it out the window. Property Get/Property Set to support novelties like graphical designers and visual components are a clear violation of encapsulation. Why are we letting objects access data that lives in other objects? The need to do that is a huge red flag that your model is incorrect. Both the bible (i.e. Refactoring, by Fowler) and the actual Bible condemn this feature envy.
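To make that concrete – every name here is invented for the example – compare pulling data out through getters and doing the maths in the calling code with letting the object that owns the data do the work:

using System.Collections.Generic;
using System.Linq;

public class OrderLine
{
    private readonly decimal _unitPrice;
    private readonly int _quantity;

    public OrderLine(decimal unitPrice, int quantity)
    {
        _unitPrice = unitPrice;
        _quantity = quantity;
    }

    public decimal Total() => _unitPrice * _quantity;
}

// Encapsulated: the order owns its lines and is told to do the work, instead of
// calling code pulling UnitPrice, Quantity and VatRate out through getters and
// doing the arithmetic itself (the feature envy variant complained about above).
public class Order
{
    private readonly List<OrderLine> _lines;
    private readonly decimal _vatRate;

    public Order(IEnumerable<OrderLine> lines, decimal vatRate)
    {
        _lines = lines.ToList();
        _vatRate = vatRate;
    }

    public decimal Total() => _lines.Sum(line => line.Total()) * (1 + _vatRate);
}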

But why did these properties survive the nineties and live on into modern day? Why have they made things worse with auto properties?

Sample code

When you learn a new language, or learn to code in general, the main threshold is getting to the point where you write idiomatic code in that language. I.e. you use familiar phrases. You indent the code in a certain way, you name things according to a certain standard and you use familiar ways to do things like open a database connection or make an HTTP request, etc., that a seasoned programmer would recognise. Unfortunately – in C# at least – these antipatterns are canon at this point, so writing properly encapsulated code would maybe cause a casual reviewer to ask WTF and be sceptical.

What is canon comes from the publicly available body of work that a beginner can reasonably access. Meaning, effectively Microsoft sets the bar when they announce features, document them and create samples.

There are some issues here. If you look at a large piece of sample code, you may notice how difficult it is to identify the key concept being demoed when logging and error handling bulk the code up in a distracting way, so brevity must be allowed to remain a priority, clearly.

At the edges, where the code starts interacting with network and storage, this type of organisation isn’t inherently despicable, so a blanket ban is perhaps not the way forward either.

How do we make it clear to new OO devs that, when they fill the empty Models folder their project template creates for them with code, they would be better off thinking in proper OO terms?

By that I mean making classes that are extremely small, using value objects, preferring private fields, avoiding properties et cetera. My suspicion is that any attempt at conveying this programming style through the medium of sample code in templates or documentation is doomed. The bulk of code necessary to not only prove the concept but to actually make it part of the vernacular would require a large number of people making quite a lot of good code public, so that new learners can assimilate the knowledge.

I think good OO code is scarce. Getting the abstractions right is just too hard, you will have compromises in various places, and all the tools tempt you with ways to stray from the narrow path of righteousness, but with modern refactoring tools you should be able to address some of the issues and continually strive to make the code better.

Incidentally, with properly sized objects you can unit test without cheating (using internal helper methods, or by using mocks), so there is scope to brighten up the tests as well.

IdS4 on .NET Core 3.1

Sometimes I write literary content with substance and longstanding impact, and sometimes I just write stuff down that I might need to remember in the future.

When migrating a .NET Core 2.2 IdentityServer4 project to .NET Core 3.1 I had a number of struggles. The biggest one was that IdentityServer threw the following error:

idsrv was not authenticated. Failure message: Unprotect ticket failed

After scouring the entire internet, all I got was responses alluding to the fact that I needed to have the same data protection key on both instances of my website. The only problem with that was that I only had one single instance. Since I was also faffing about with changing DataProtection (unwisely, but hey, I like to live dangerously in my free time), this was a very effective distraction that kept me debugging the wrong thing for ages.

After starting off from a blank .NET Core 3.1 template and adding all the cruft from my old site I finally stumbled upon the difference.

In my earlier migration attempt I had mistakenly put:

app.UseEndpoints(o =>
{
    o.MapControllers();
});

Now, my main problem was that I had made multiple changes between test runs, which is a cardinal sin I only allow myself in my free time, which is why it is so precious now. If you had only made this change and run the website, you would probably notice that it isn’t running. But since I changed a bunch of stuff at once, I had a challenge figuring out what went wrong.

In the default template the endpoint wiring was different and gave completely different results:

app.UseEndpoints(endpoints =>
{
    endpoints.MapControllerRoute(
        name: "default",
        pattern: "{controller=Home}/{action=Index}/{id?}");
});

And lo, not only did the website load with the default IdS4 development mode homepage, the unprotect errors went away(!!).

The main lesson here is: Don’t be stupid, change one thing at a time – even if you think you know what you’re doing.

…and the number of the counting shall be 3

.NET Core 3.0 is here, allegedly the penultimate stop on the roadmap before the Singularity, when they finally bin .NET Framework and unify on top of Windows XP… er… .NET 5. The news is packed with stuff about WinForms, WPF and other legacy technologies, but I’m going to stick with the webby and consoley bits, where I’ve been mostly operating since I started using .NET Core back in the 1.0 days.

Scope

As usual I will mostly just write down gotchas I have come across, so that if I run into them again I have a greater chance of not wasting as much time the second time around.

We are starting from a .NET Core 2.2 app that initially was .NET Core 2.1, so it may not have been fully upgraded in all respects between 2.1 and 2.2 if there were changes I couldn’t be bothered implementing.

Breaking changes

I followed an excellent guide to get started with references that need to leave your project file and others that need to come back in after they were exiled from the magic default Microsoft.AspNetCore.App assembly, as well as other breaking changes. It’s not that bad, and you really will enjoy the experience.

There has long been a trend among hipsters to forego the unstructured default folders in ASP.NET projects that bucket Controllers, Views and Models into separate folders, in favour of having two folders in the root: one called Features and another called Infrastructure. The Features folder contains – you guessed it – each feature, with controllers, views, viewmodels and the data model grouped together. To make this work, you need a convention added when registering the controllers – a FeatureConvention – so that the view resolver knows where to look for the views. Since AddMvc is now called AddControllersWithViews, I made that change hoping to make things look happy again, and noticed that the FeatureConvention had a squiggly. This is because the old interface IPageConvention that the FeatureConvention used to implement no longer exists. What to do? Well, I looked all over the internet and found nothing, until I finally discovered the IControllerModelConvention interface – and by literally just replacing the name of the interface, everything compiled, so the interfaces were identical.

services.AddControllersWithViews(options =>
{
    var policy = new AuthorizationPolicyBuilder()
        .RequireAuthenticatedUser()
        .Build();
    options.Conventions.Add(new FeatureConvention());
    options.Filters.Add(new AuthorizeFilter(policy));
})
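For reference, the convention itself doesn’t need to do much – a minimal sketch along these lines (the assumption that controllers live under a Features namespace segment is mine, adjust to your own layout):

using System;
using System.Linq;
using System.Reflection;
using Microsoft.AspNetCore.Mvc.ApplicationModels;

public class FeatureConvention : IControllerModelConvention
{
    public void Apply(ControllerModel controller)
    {
        // Stash the feature name on the controller so the view resolver
        // (a view location expander) can look under /Features/{feature}/.
        controller.Properties.Add("feature", GetFeatureName(controller.ControllerType));
    }

    private static string GetFeatureName(TypeInfo controllerType)
    {
        // Assumes namespaces like MyApp.Features.Invoicing.InvoiceController.
        var tokens = (controllerType.FullName ?? string.Empty).Split('.');
        return tokens
            .SkipWhile(t => !t.Equals("Features", StringComparison.Ordinal))
            .Skip(1)
            .FirstOrDefault() ?? string.Empty;
    }
}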

Controller Actions

So with .NET Core 2.2 the ActionResult<T> type came into being, but now you are starting to see squigglies around IActionResult, saying the bell tolls for untyped responses. This is not such a big deal. Often this means you get to cut away vast swathes of boilerplate where you respond with Ok() or Json() around something, instead just returning what the handler created, replacing Task<IActionResult> with Task<ActionResult<SomeExcellentDto>>. Not only have you now achieved a lot of automagic swaggering where you otherwise would have had to write attributes manually to inform the consumer what the payload looks like, you have also eliminated the need for a class of tests just ensuring that the action methods return data of the right kind.

The thing to look out for is that if your handler returns an IEnumerable<T>, due to the way ActionResult<T> works, you need to cast the enumerable to an array or a list, because otherwise the type cannot be instantiated, i.e. instead of Task<ActionResult<IEnumerable<SomeExcellentDto>>> you need to go with Task<ActionResult<SomeExcellentDto[]>> or Task<ActionResult<List<SomeExcellentDto>>>.
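Something like this, with SomeExcellentDto and the handler obviously being stand-ins:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Mvc;

public class SomeExcellentDto { public string Name { get; set; } }   // stand-in DTO

public interface IExcellentHandler                                    // stand-in handler
{
    Task<IEnumerable<SomeExcellentDto>> GetAll();
}

[ApiController]
[Route("api/[controller]")]
public class ExcellentController : ControllerBase
{
    private readonly IExcellentHandler _handler;

    public ExcellentController(IExcellentHandler handler) => _handler = handler;

    // The declared return type tells Swashbuckle what the payload looks like,
    // so no Produces attributes and no Ok() wrapping.
    [HttpGet]
    public async Task<ActionResult<SomeExcellentDto[]>> Get()
    {
        // Materialise the enumerable – ActionResult<IEnumerable<T>> cannot be instantiated.
        var result = await _handler.GetAll();
        return result.ToArray();
    }
}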

(Swash)buckle up, buttercup

The swaggering is a separate chapter – the latest versions of Swashbuckle are tightly integrated with OpenAPI, so you have to replace any filters you may have created to format your swagger document, since the old APIs are broken and replaced with similar ones from OpenAPI, and if you do any swaggering at all you have to get the latest prerelease of Swashbuckle to even be able to compile. Basically – if you are using Swashbuckle today – congratulations, you are about to start swearing.

IdentityModel crisis

The IdentityModel nuget package has been updated quite radically between versions, and if you were doing clever things in message handlers to keep track of or request tokens from the token endpoint when talking between services, you may need to update your code to do without the TokenClient that, bereft of life, has ceased to be.

The new method is to use extension methods on HttpClient, and the canonical example is providing a typed HttpClient, created on demand by the HttpClientFactory, that in turn calls the extension methods. See the documentation here. The token endpoint and the extension methods on HttpClient are covered in more detail here.
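The shape of it is roughly this – the token endpoint address, client id, secret and scope are placeholders, but RequestClientCredentialsTokenAsync and ClientCredentialsTokenRequest are the IdentityModel.Client pieces that replace the departed TokenClient:

using System;
using System.Net.Http;
using System.Threading.Tasks;
using IdentityModel.Client;

// A typed client – register it with services.AddHttpClient<TokenService>() so the
// HttpClientFactory creates it on demand.
public class TokenService
{
    private readonly HttpClient _client;

    public TokenService(HttpClient client) => _client = client;

    public async Task<string> GetAccessToken()
    {
        // The extension method on HttpClient does what TokenClient used to do.
        var response = await _client.RequestClientCredentialsTokenAsync(new ClientCredentialsTokenRequest
        {
            Address = "https://localhost:5001/connect/token",   // placeholder
            ClientId = "my-client",                             // placeholder
            ClientSecret = "secret",                            // placeholder
            Scope = "api1"                                      // placeholder
        });

        if (response.IsError) throw new InvalidOperationException(response.Error);
        return response.AccessToken;
    }
}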

Contrarian style guide for C#

Goal

I have decided to try and formulate what I want C# code to look like. Over the years I have accumulated a whole host of opinions that may or may not be complementary or maybe contradict each other. The purpose of this post is to try and make sense of all of it and maybe crystallise what my style is – so that I can tell all of you how wrong you are and what you crazy kids should be doing.

Is this all my intellectual property? Hells to the no. I have read and heard a bunch of stuff and probably stolen ideas from most people, but the most important people I have listened to or read are Fred George and Greg Young. Sadly it didn’t take.

Object oriented programming

Loads of people hate it and cite typical C# code as the reason why. Yes. That code is horrible, but it isn’t OOP.

Why was Object Oriented programming invented? It was a way to make sense of large software systems. Rather than a very long list of functions that could – and did – call any other function, perhaps modifying global state on the way, people were looking for a way to organise code so that the difficulty of making changes became predictable. This was achieved by introducing encapsulation. Private methods and fields could not be affected by external code, and external code could not depend on the internal implementation of code in a class. If implemented correctly, “change” means adding a new class and possibly deleting an old one.

Was this the only way people tried to solve this problem? No – functional programming was popularised, relying on pure functions and composition again offering a way to make changes in a predictable way.

I don’t mind you using C# to write in a functional style using composition, but if you’re creating classes – take encapsulation seriously. Encapsulation is The Thing with OOP. This means properties are evil. Think about it. Don’t put properties in the code.

But what about inheritance?

Yeah, when Borland C++ came out in 1992 the ads were full of Porsche Targas that inherited from the Carrera S but with the roof overridden to be missing (!), and similar. Oh – the code we would reuse. With inheritance.

Well. Barbara Liskov probably has some things to say about the Targa I suspect.

In short – don’t use inheritance unless the relationship between the classes is “is a” and the relation is highly unlikely to change.

Single responsibility principle

So you have heard about the single responsibility principle. When you see duplication you swoop in, add a new parameter and delete one of the functions. Boom!

Except what if those two functions weren’t doing the exact same thing? Their similarity was only fleeting. With the refactoring you have just made you have introduced a very hard coupling.

Code size

A method shouldn’t have more lines than three or four. Be liberal with the extract method refactoring. To “fit” within this metric, make guard clauses one line each. Put one empty line between the guard clauses and the meat of the method.
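Something like this, just to show the shape – two one-line guard clauses, a blank line, then the meat:

public decimal Discount(decimal subtotal, bool isMember)
{
    // Guard clauses on one line each, then the actual work.
    if (subtotal <= 0) return 0;
    if (!isMember) return 0;

    return subtotal * 0.1m;
}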

Why? What is this? Why are we counting lines now? Well code tends to attract more code as it ages and methods grow. Having a “magic number” that is The Limit helps you put in the time to refactor and thus battle the bloat before it happens.

Classes should not have more than two or three fields – as discussed earlier, there will be no properties. If you find yourself struggling to justify adding another private field, do investigate whether there isn’t a new class in there waiting to break free.

Don’t be afraid to leverage the fact that private is a class-level constraint, not an instance-level one. I.e. you can let instances of a class talk to each other, and they will have access to the private fields of their siblings. This is one of the ways in which you can do actual work without properties.
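A contrived sketch of what that looks like in practice:

using System;

public class Money
{
    private readonly decimal _amount;
    private readonly string _currency;

    public Money(decimal amount, string currency)
    {
        _amount = amount;
        _currency = currency;
    }

    public Money Add(Money other)
    {
        if (_currency != other._currency) throw new InvalidOperationException("Currency mismatch");

        // other._amount is private, but private is per class, not per instance,
        // so one Money can read another Money's fields without exposing a property.
        return new Money(_amount + other._amount, _currency);
    }
}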

Edges

Well with this whole Amish attitude to properties – how do you even deserialise json payloads or populate DTOs from the database? How do you write files?

By using the mediator pattern or writing adapters. At the edges you will find the code getting less OO and more… procedural? I.e. anaemic classes with only data and loads of public members, et cetera, and verbose code that talks to very Microsofty framework classes.

This is OK, and to some extent natural – but do not let it leak into the domain. Create domain and value objects for the actual processing without leaking any of the ugliness into the domain.

Avoid raw primitives

If you throw around a customer ID in your code – how do you represent it? As an int? How about an order ID? You know the drill – an ID cannot reasonably be added, subtracted, multiplied or divided and it probably can’t be <=0. And a customer ID and order ID should probably never be assignable to the same parameter.

Yes – take the time to do value objects. Resharper will generate equality and GetHashCode for you, so yes, it is harder than in F# and there is more required boilerplate, but it is worth it.
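A sketch of what I mean – wrap the int, validate it once, and give it value equality:

using System;

public sealed class CustomerId : IEquatable<CustomerId>
{
    private readonly int _value;

    public CustomerId(int value)
    {
        // An id can't reasonably be zero or negative, so refuse to create one.
        if (value <= 0) throw new ArgumentOutOfRangeException(nameof(value));
        _value = value;
    }

    public bool Equals(CustomerId other) => other != null && _value == other._value;
    public override bool Equals(object obj) => Equals(obj as CustomerId);
    public override int GetHashCode() => _value.GetHashCode();
    public override string ToString() => _value.ToString();
}

An OrderId gets its own type in the same vein, and the compiler then stops you from passing one where the other is expected.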

Conditionals

Where the arbitrary constraint on method size really cuts into your Microsoft sample code style is in the area of conditionals. Do you even WinMain bruv? Nested if statements or case statements – what about them huh? Huh?!?

Yes. Get rid of them.

When making small classes an if statement is almost always too big. Lift the variability into its own class – a strategy.
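For example – all names invented – instead of an if on which kind of discount applies, each variant becomes its own class and the calling code never branches:

public interface IDiscount
{
    decimal Apply(decimal subtotal);
}

public class NoDiscount : IDiscount
{
    public decimal Apply(decimal subtotal) => subtotal;
}

public class ChristmasDiscount : IDiscount
{
    public decimal Apply(decimal subtotal) => subtotal * 0.9m;
}

// The calculator is handed the right strategy (via DI or a small factory)
// and stays blissfully unaware of how many discount variants exist.
public class PriceCalculator
{
    private readonly IDiscount _discount;

    public PriceCalculator(IDiscount discount) => _discount = discount;

    public decimal Total(decimal subtotal) => _discount.Apply(subtotal);
}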

Unit tests

There are two schools of unit testing: the London School and the Chicago School. The Chicago School relies on setting up instances, calling them and asserting on the results. The London School relies on setting up a series of mocks, passing them into a class and then asserting that the mocks were called in the right order with the right values.

Of course – me being lazy – I prefer the Chicago school. With small classes you can just call the public methods and assert that the right results come out. If you feel the need to get inside the class to look at and validate internals – your class under test is too big or does too many things.
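Using the CustomerId value object from above as the victim, a Chicago-style test (xUnit here, but any framework will do) is just: new it up, call it, assert on what comes out:

using Xunit;

public class CustomerIdTests
{
    [Fact]
    public void Ids_with_the_same_value_are_equal()
    {
        // Real instances in, observable results out – no mocks anywhere.
        Assert.Equal(new CustomerId(42), new CustomerId(42));
        Assert.NotEqual(new CustomerId(42), new CustomerId(43));
    }
}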

Creating a class

When you create a class, think about its purpose. “What’s its job?” as Fred George asks. Don’t create a class whose name ends in -er or -or. Find a concept that describes what the class understands.

It’s ok to spend 15 minutes drawing on paper how you see the next bit of coding to go before you write the first test and get going.

Disclaimer

It depends. Of course it all depends. But I’ve never found myself thinking “omg these classes are too small and have no weird dependencies – I need to refactor”, so I am fairly confident in these recommendations.


Sprint 0 for old school .NET devs

When you start work on a code base, either from scratch or as you approach it as a new dev, there are a few things you should ensure are in place before you get going. Like most other things, these are easier and more natural on more mature platforms than they tend to be on .NET, where the toy app tends to be king, but it is doable. I will list some technologies I use here and may mention some competing ones for completeness. I can’t share most of the code because reasons, but if there is interest I can put together samples showing specific techniques from the ones I have used.

Introduction

The checklist of what you need to put in place if it doesn’t exist is the following:

  • Version control
  • Build server
  • Automated tests run as part of the build process
  • Automated packaging
  • Automated deployment
  • Automated monitoring
  • Automated setup of development environment

I know most of that reads as “a house must have a roof”, but there were, and still are, places where people go “We’re only a couple of guys, we can just leave the source on the NAS, it’s backed up”, so I’m just being clear here.

The last point may seem like overkill, but especially if you use the abomination that is 3rd party components, it is crucial to save time and more easily onboard new guys.

Version control

I’m going to shock you here and not require that you use Git. You might want to, because it’s becoming a skill everybody has, and GitHub is a beautiful place to keep your code, but if your workflow is centralised anyway you can legitimately stay with Subversion, as long as you take care of the basics, as in backing up the repository properly. Recent versions have less horrible diffing, so it is not that bad anymore. The most crucial aspect is that you do need a version control system with a good, powerful command line interface.

Build server

I’m going to suggest TeamCity here, because I know it well, but it has some real drawbacks. GitHub integrates nicely with TravisCI, which is a modern CI system, and Jenkins has been popular. The point is – make sure your build configuration is the same on the development machine as it is on the build server, and make sure the build configuration is version controlled with the source, ideally so that you can run all of it from the command line. This way you know you are not about to break the build before you push your changes, and you also know that you can recreate a previous version of the code without faff, because a contemporary build configuration can be found in source control alongside the source.

Automated tests run as part of the build process

The obvious bit here is to run your favourite command-line test runner on your build output to make sure all your tests run on the build server.

You can always write PowerShell scripts to make simple but high-value integration tests; it doesn’t all have to be Selenium / FitNesse, even though those are quite nice once you are over the initial hurdle – but yes, there is a cost.

Automated deployment

Many ways to deploy exist. OctopusDeploy is popular to use with TeamCity, but you should look at chef and puppet as well. They tend to be hostile to Windows, but it is now possible to use them, and they make sense – it is as if they were specifically designed for the purpose of deploying and maintaining software infrastructure.

Years ago I looked very briefly at Puppet, but more recently I have come to work with chef, and it does what it is supposed to do. I have no factual reason to rate chef higher than puppet other than that I now know it.

For chef you can look at kitchen to test your deployment scripts in transient VMs. It can use Azure VMs, VMware Workstation or Vagrant / VirtualBox, and it shortens the feedback cycle considerably.

Automated monitoring

You can use Pingdom, Monitis or Nagios, or just run a few curl/wget calls in a scheduled task and send an email if they don’t return the expected information. Either way, you need to be able to know when things aren’t working. Use the smallest possible thing you can get away with if your budget is constrained, but do use something.

Automated setup of the development environment

This may be sensitive, as developers tend to be particular about where their code lives and how their machine is organised, but having the developers just be able to run a script and end up with all their development tools set up and ready to debug is well worth it.

Some things are going to be difficult. Installing Redgate SQL Source Control doesn’t seem to be scriptable, but other than that you can:

  • Install Visual Studio
  • Install plugins
  • Use chocolatey to install source control management
  • Get the sources locally

In some cases, as your system grows and you break bits out into separate microservices, you will need this scripting to make sure you can set up the entire system to be debugged locally. Ideally you would, as an additional means of verifying your deployment method, use chef-zero or the corresponding technology from Puppet to apply your normal deployment templates when configuring the system on the development machines as well. This is another situation where clever IDE tooling actively makes things difficult for you, but any work you put in to automate here will pay huge dividends.

But what does this all mean in practice?

In short, if you are going to keep working on Windows exclusively, at least learn PowerShell. It is not that horrible. The stance among Windows users has long been anti-scripting, and with the state of CMD.exe it has been for good reason. PowerShell, though, has a lot of features that make sense, even though the syntax can be confusing at first. Learning Ruby may be more viable from a cross-platform perspective, but PowerShell is evidently more suited to Windows, and a lot of useful cmdlets for enabling Windows features et cetera are only available in PowerShell.

Desktop OS for developers

The results of the latest StackOverflow Developer Survey just came out, showing – among other interesting things – that Windows is dying as a developer OS. Not one to abandon ship any time soon, I’d still like to offer up some suggestions.

TL;DR

  • Make the command line deterministic.
  • Copying files across the network cannot be a lottery.
  • Stop rebooting UI frameworks.
  • Make F# the flagship language.

Back in the day, Microsoft – through VB and Visual C++ – overcame some of the hurdles of developing software for Windows, then effectively the only desktop OS in the enterprise. Developers, and their managers, rallied behind these products and several million kilometres of code were written over a couple of decades.

The hurdles that were overcome were related to the boilerplate needed to register window classes, create a window and respond to the basic window messages required to show the window in Windows and have the program behave as expected vis-à-vis the expectations a Windows user might have. Nowhere in VB6 samples was anybody discussing how to write tests or how, really, to write good code. In fact, sample code, simplified on purpose to showcase only one feature at a time, would not contain any distractions such as test code.

When Classic ASP was created, a lot of this philosophy came across to the web, and Microsoft managed to create something as horrible as PHP, but with fewer features, telling a bunch of people that it’s OK to be a cowboy.

When the .NET Framework was created as a response to Java, a lot of VB6 and classic ASP programmers came across, and I think Microsoft started to see what they had created. Things like Patterns & Practices came out, and the certification programmes started taking software design and testing into consideration. Sadly, however, they tended to give poor advice that was only marginally better than what was out there in the wild.

Missed the boat on civilised software development

It was a shock to the system when the ALT.NET movement came along and started to bring in things that were completely mainstream in the Java community but almost esoteric in .NET: continuous integration, unit testing, TDD, DDD. Microsoft tried to keep up by creating TFS, which apart from source code versioning had ALM tools to manage bugs and features as well as a built-in build server, but it became clear to more and more developers that Microsoft really didn’t understand the whole thing about testing first or how lean software development needs to happen.

While Apple had used their iron fist to force people to dump Mac OS for the completely different, Unix-based operating system OS X (with large bits of NextStep brought across, like the API and InterfaceBuilder) – Microsoft were considering their enterprise customers and never made a clean break with Gdi32. Longhorn was supposed to solve everything, making WPF native and super fast, obsoleting the old BitBlt malarkey and instead ushering in a brighter future.

As you are probably aware, this never happened. .NET code in the kernel was a horrible idea, and the OS division banned .NET from anything being shipped with Windows, salvaged whatever they could duct tape together – and the result of that was Vista. Yes, .NET was banned from Windows and stayed banned up until PowerShell became mainstream a long, long time later. Now, with Universal Windows Apps, a potentially viable combo of C++ code and vector UI has finally been introduced, but since it is the fifth complete UI stack reboot since Longhorn folded, it is probably too little too late, and too many previously enthusiastic Silverlight or WPF people have already fallen by the wayside. Oh, and many of the new APIs are still really hard to write tests around, and it is easy to find yourself in a situation where you need to install Visual Studio and some SDK on a build server, because a dependency relies on the Registry or the GAC rather than things that come with the source.

Automation

As Jeffrey Snover mentions in several talks, Windows wasn’t really designed with automation in mind. OLE Automation, possibly, but scripting? Nooo. Now, with more grown-up ways of developing software, automation becomes more critical. The Windows world has developed alternative ways of deploying software to end-user machines that work quite well, but for things like automated integration tests and build automation you should still be able to rely on scripting to set things up.

This is where Windows really lets the developer community down. Simple operations in Windows aren’t deterministic. For a large majority of things you call on the command line, you are the only one responsible for determining whether the command ran successfully. The program you called may very well have failed despite returning a 0 exit code. The execution might not have finished despite the process having ended, so some files may still be locked for a while – you never know. Oh, and mounting network drives is magic and often fails for no reason.

End result

Some people leave for Mac because everything just works, if you can live with bad security practices and sometimes a long delay before you get things like Java updates. Some people leave for Linux because if you script everything, you don’t really mind all those times you have to reinstall because something like a change in screen resolution or a security update killed the OS to the point where you can’t log in anymore – you just throw away the partition and rerun the scripts. Also, from a developer standpoint, everything just works, in terms of available tools and frameworks.

What to do about it

If Microsoft wants to keep making developer tools and frameworks, they need to start listening to the developers who engage whenever Microsoft open sources things. Those developers most likely have valuable input into how things are used by serious users – beyond the tutorials.

Stop spending resources duplicating things that already exist for Windows or .NET, as that strikes precisely at the enthusiasts that Microsoft needs in order to stop hemorrhaging developers.

What is .NET Core – really? Stop rewriting the same things over and over. At least solve the problems the rewrite was supposed to address before adding fluff. Also – giving people the ability to work cross-platform means people will, so you are sabotaging yourselves while building some goodwill, admittedly.

Most importantly – treat F# like Apple treats Swift. Something like: we don’t hate C# – there is a lot of legacy there – but F# is new, trendier and better. F# is far better than Swift and has been used in high-spec applications for nine years already. Still, after years of beta testing, Microsoft manages to release a JITer with broken tail call optimisation (a cornerstone of functional runtimes, as it lets you do recursion effectively). That is simply UNACCEPTABLE, and I would have publicly shamed and then fired so many managers for letting that happen. Microsoft needs to take F# seriously – ensure it gets the best possible performance, tooling and templating. It is a golden opportunity to separate professional developers from the morons you find if you google “asp.net login form” or similar.

In other words – there are many simple things Microsoft could do to turn the tide, but I’m not sure they will manage, despite the huge strides taken of late. It is also evident that developers hold a grudge for ages.

Salvation through a bad resource

Me and a colleague have been struggling for some time deploying an IIS website correctly using chef. As you may be aware, chef is used to describe the desired state of the configuration of a system through the use of resources, which know how to bring parts of the system into the desired state – if they, for some reason, should not be – during a process called convergence.

Chef has a daemon (or Service, as it is called in the civilised world) that continually ensures that the system is configured in accordance with the desired state. If the desired state changes, the system is brought into line automagically.

As usual, what works nicely and neatly in Unix-like operating systems requires volumes of eloquent code literature, or pulp fiction rather, to implement on Windows, because things are different here.

IIS websites are configured with a file called web.config. When this file changes, the website restarts (the application thread pool does, to be specific). Since the chef Windows service runs chef-client at regular intervals, it is imperative that chef-client doesn’t falsely assume that the configuration needs to be overwritten every time it runs, as that would be quite disruptive to any would-be users of the application. Now, the autostart behaviour can be disabled, but that is not the way things should have to be.

A common approach on Windows is to disable the chef service and just run the chef client manually when you know you want to deploy things, but that isn’t right either, and it takes a lot away from the basic features and the profound magic of chef. Anyway, this means we can’t keep tricking chef into believing that things have changed when they really haven’t, because that is disruptive and bad.

So, like I mentioned earlier – IIS websites are configured with a file called web.config. Since everybody who has ever encountered an IIS website is aware of that, there is no chance that an evildoer won’t know to look for the connection strings to the database in that very file. To mitigate this well-knownness, or at least make the evildoer first have to leverage a privilege escalation vulnerability, there is a built-in feature that allows an administrator to encrypt the file so that the lowly peon account that the website is executing as doesn’t have the right to read it. For obvious reasons this encryption is tied to the local machine, so you can’t just copy the file to a different machine where you happen to be admin and decrypt it. This does however mean that you have to first template the file to a temporary location and then check whether the output of the chef template, the latest and greatest of website configuration, is actually any different from what was there before.

It took us ages to figure out that what we need to do is to write our web.config template exactly as it looks once it has been decrypted by Windows, then start our proceedings by decrypting the production web.config into a temporary location. We then set up the chef template resource to try and overwrite the temporary file with new values, and if there has been a change, use notifications to trigger a ruby_block that normally doesn’t execute, but when triggered by the template resource both encrypts the updated config and copies it across to prod.

Result!

But wait… The temporary file has to be deleted. It has highly sensitive information (I would like to flatter myself) and shouldn’t even have made it to disk in its clear-text form, and now it’s still there waiting to be read by the evildoer.

Using a ruby_block resource or a file resource to delete the temporary file causes chef to record this as a change, and a change that isn’t a change is bad. Or at least misleading in this case.

Enter colleague nr 2 “just make a bad resource that doesn’t use converge_by”.

Of course! We write a resource that takes a path and deletes it using pure ruby code, but it “forgets” to tell chef that a file was deleted, so chef will update the configuration when it should but will gladly report 0 resources updated at the end of a run where nothing has changed. Beautiful!

DONE. Week-end. I’m off.

 

Getting back into WPF

These last couple of weeks I have been working with a Windows desktop app based on WPF. I hadn’t been involved with that in quite some time so there was some trepidation before I got stuck in.

I have been pairing consistently throughout, and I believe it has been very helpful for both parties, as the union of our skill sets has been quite large and varied regardless of which colleague I was working with at the time. The app has interesting object life-cycles and some issues around object creation when viewed from the standpoint of what we need to do today, although it was well suited to solving the problems it did at the time it was written.

Working closely with colleagues means that we could make fairly informed decisions whilst refactoring, and the discussions have seemed productive. I tend to feel that the code is significantly improved as we finish tasks, even though we have stayed close to the task at hand and avoided refactoring for its own sake.

Given the rebootcamp experience, I’m always looking to make smaller classes with encapsulated functionality, but I still have room for improvement. As I’m fortunate enough to have very skilled colleagues it is always useful to discuss these things in the pair.  It helps to have another pair of eyes there to figure out ways to proceed – getting things done whilst working to gradually improve the design. I haven’t felt disappointed with a piece of code I’ve helped write for quite a while.

The way we currently work is that we elaborate tasks with product and test people and write acceptance criteria together before we set out to implement the changes. This means we figure out, most of the time, how to use unit tests to prove our acceptance criteria, without having to write elaborate integration tests, keeping them fairly simple and trying to wrestle the test triangle the right way up.

All this feels like a lot of overhead for a bit of hacking, but we tend to do things only once now rather than having to go back and change something because QA or PM aren’t happy. There are some UI state changes which are difficult to test comprehensively, so we have had things hit us that were unexpected, and we did have to change the design slightly in some cases to make it more robust and testable, but that still felt under control compared with how hard UI bugs can be to track down.

Whatevs. This is where I am on my journey now. I feel like I’m learning more stuff and that I am developing. At my age that is no small thing.

 

 

 

Tip: Clearing QueryString when using RadControls and ASP.NET WebForms

I had to google myself silly, and the page I ended up finding was so hard to find that I couldn’t actually find it again to link to it properly. The below code is not my idea, so if you feel wronged, submit your link and I shall attribute it properly.

The problem is: I want to be able to navigate, or find my way back if you will, to a certain MultiPage and TabStrip that is nested inside a Telerik RadGrid. As it is a view and not a state change, I would like the querystring to handle this. However, to be nice, or obnoxious, depending on how you see it, Telerik will resubmit the same querystring when you navigate the tabs yourself, which means that after changing tabs via querystring once, you cannot navigate to a different page by clicking anymore.

Anyway: people on the Internet, especially the Telerik forums, tell you to do Request.QueryString.Clear(), but that just doesn’t work, because it is a read-only collection, and although it compiles, you will get a runtime error. So: what to do?

The post that I found simply used violence and Reflection to force the collection into being read/write, and then when Telerik reads from Request.QueryString to needlessly resubmit the exact previous query, it actually, accidentally, posts the correct, cleaned querystring. This enables my expected scenario where you can change tabs in a TabStrip/MultiPage using GET.

        private void ClearQueryStringParam(string paramName)
        {
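            // What follows is a sketch of the reflection approach described above,
            // reconstructed rather than quoted (assumes using System.Reflection and
            // System.Collections.Specialized). NameValueCollection inherits a protected
            // IsReadOnly property from NameObjectCollectionBase; flip it, remove the
            // parameter, then flip it back.
            var isReadOnly = typeof(NameObjectCollectionBase)
                .GetProperty("IsReadOnly", BindingFlags.Instance | BindingFlags.NonPublic);

            isReadOnly.SetValue(Request.QueryString, false, null);
            Request.QueryString.Remove(paramName);
            isReadOnly.SetValue(Request.QueryString, true, null);
        }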