Tag Archives: ASP.NET Core

.NET C# CI/CD in Docker

Works on my machine-as-a-service

When building software in the modern workplace, you want to automatically test and statically analyse your code before pushing it to production. Rather than tens of test environments and an army of manual testers, you have a bunch of automation that runs as close as possible to the moment the code is written. Tests are run, the proportion of code not covered by automated tests is calculated, and test results are published to the build server user interface (so that in the event that – heaven forbid – tests are broken, the developer gets as much detail as possible to resolve the problem). Static analysis of the built piece of software is performed to make sure no known problematic code has been introduced by ourselves, and the dependencies we include are verified to be free from known vulnerabilities.

The classic Dockerfile added when you create an ASP.NET Core Web API project features a multi-stage build layout where an initial stage includes the full .NET SDK, and this is where the code is built and published. The next stage is based on the lightweight ASP.NET Core runtime; the output directory from the build stage is copied in and the entrypoint is configured so that the website starts when you run the finished Docker image.

Ever tried multi-stage?

Multi-stage builds were a huge deal when they were introduced. You get one Docker image that only contains the things you need; any source code is safely binned off in other layers that – sure – are cached, but don’t exist outside the local Docker host on the build agent. If you then push the finished image to a repository, none of the source will come along. In the before times you had to solve this with multiple Dockerfiles, which is quite undesirable. You want high cohesion but low coupling, and fiddling with multiple Dockerfiles when doing things like upgrading versions does not give you a premium experience and invites errors to an unnecessary degree.

Where is the evidence?

Now, when you go to Azure DevOps, GitHub Actions or CircleCI to find out what went wrong with your build, the test results are available because the test runner has produced output in a format that the particular build service understands. If your test runner is not forthcoming with that information, all you will know is “computer says no” and you will have to trawl through console output – if that – and that is not the way to improve your day.

So – what do we need? Well, we need the formatted test output. Luckily, dotnet test will give it to us if we ask it nicely.

The only problem is that those files will stay in the build stage that we are binning – you know, multistage builds and all that – since we don’t want these files to show up in the finished, supposedly slim, article.

Old world Docker

When a Docker image is built, every relevant change will create a new layer, and eventually a final image is created and published that is an amalgamation of all constituent layers. In the olden days, the legacy builder would cache all of the intermediate layers and publish a hash in the output so that you could refer back to an intermediate layer should you so choose.

This seems like the perfect way of forensically finding the test result files we need. Let’s add a LABEL so that we can find the correct layer after the fact, copy the test data output and push it to the build server.

FROM mcr.microsoft.com/dotnet/aspnet:7.0-bullseye-slim AS base
WORKDIR /app
FROM mcr.microsoft.com/dotnet/sdk:7.0-bullseye-slim AS build
WORKDIR /
COPY ["src/webapp/webapp.csproj", "/src/webapp/"]
COPY ["src/classlib/classlib.csproj", "/src/classlib/"]
COPY ["test/classlib.tests/classlib.tests.csproj", "/test/classlib.tests/"]
# restore for all projects
RUN dotnet restore src/webapp/webapp.csproj
RUN dotnet restore src/classlib/classlib.csproj
RUN dotnet restore test/classlib.tests/classlib.tests.csproj
COPY . .
# test
# install the report generator tool
RUN dotnet tool install dotnet-reportgenerator-globaltool --version 5.1.20 --tool-path /tools
RUN dotnet test --results-directory /testresults --logger "trx;LogFileName=test_results.xml" /p:CollectCoverage=true /p:CoverletOutputFormat=cobertura /p:CoverletOutput=/testresults/coverage/ /test/classlib.tests/classlib.tests.csproj
LABEL test=true
# generate html reports using report generator tool
RUN /tools/reportgenerator "-reports:/testresults/coverage/coverage.cobertura.xml" "-targetdir:/testresults/coverage/reports" "-reporttypes:HTMLInline;HTMLChart"
RUN ls -la /testresults/coverage/reports
 
ARG BUILD_TYPE="Release" 
RUN dotnet publish src/webapp/webapp.csproj -c $BUILD_TYPE -o /app/publish
# Package the published code as a zip file, perhaps? Push it to a SAST?
# Bottom line is, anything you want to extract forensically from this build
# process is done in the build layer.
FROM base AS final
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "webapp.dll"]

The way you would leverage this test output is by fishing the temporary image out of the cache and creating a container from it, from which you can do plain file operations.

# docker images --filter "label=test=true"
REPOSITORY   TAG       IMAGE ID       CREATED          SIZE
<none>       <none>    0d90f1a9ad32   40 minutes ago   3.16GB
# export id=$(docker images --filter "label=test=true" -q | head -1)
# docker create --name testcontainer $id
# docker cp testcontainer:/testresults ./testresults
# docker rm testcontainer

All our problems are solved. Wrap this in a script and you’re done. I did, I mean they did, I stole this from another blog.

Unfortunately keeping an endless archive of temporary, orphaned layers became a performance and storage bottleneck for docker, so – sadly – the Modern Era began with some optimisations that rendered this method impossible.

The Modern Era of BuildKit

Since intermediate layers are mostly useless, letting them fall by the wayside and focusing on actual output was deemed much more efficient by the powers that be. Using multistage Dockerfiles to additionally produce test data output was not recommended or recognised as a valid use case.

So what to do? Well – there is a command called docker buildx bake that lets you run docker build for multiple images, or – most importantly – build multiple targets from the same Dockerfile.

This means you can run one build all the way through to produce the final lightweight image, and also have a second run that saves the intermediate image full of test results. Obviously the Docker cache will make sure nothing is actually built twice; the second run is just about picking that layer out of the cache and making it accessible.

The Correct way of using bake is to write a bake file in HCL format:

group "default" {
  targets = [ "webapp", "webapp-test" ]
}
target "webapp" {
  output = [ "type=docker" ]
  dockerfile = "src/webapp/Dockerfile"
}
target "webapp-test" {
  output = [ "type=image" ]
  dockerfile = "src/webapp/Dockerfile"
  target = "build"
} 

If you then run docker buildx bake -f docker-bake.hcl, you will be able to fish out the intermediate image using the method described above.

Conclusion

So – using this mechanism you get a minimal number of Dockerfiles and all the build guffins happen inside Docker, giving you freedom from whatever limitations plague your build agent, yet the bloated mess that is the build process is automagically discarded and forgotten as you march on into your bright future with a lightweight finished image.

You can have nice things

I have come across a few things that are legitimately pleasant to use, so I thought I should collate them here to aid my aging memory. Dear reader, I am not attempting to copy Scott Hanselman’s tools list, I am stealing the concept.

Github Actions

Yea, not something revolutionary I just uncovered that you never heard of before, but still. It’s pretty great. Out of all the yet-another-yet-another-markup-language-configuration-file-to-configure-a-thing tools that exist to help you orchestrate builds, I personally find GitHub Actions the least weirdly magical and the easiest to live with, but then I’ve only tried CircleCI, Azure DevOps/TFS and TeamCity.

Pulumi – Infrastructure as code

Write your infrastructure code in C# using Pulumi. It supports Azure, AWS, Google Cloud and Kubernetes, but – as I’ve ranted about before – this shouldn’t be taken as a way to support multi-cloud; the object hierarchy is still very bespoke to each cloud provider. That said, you can mix and match providers in a stack – say you have your DNS hosted in DNSimple but your cloud compute bits in Azure. You would otherwise be stuck doing a lot of bash scripting to glue that together, but Pulumi lets you write one C# program that describes all of your infra, mostly.
You will recognise the feel of using it from Chef: you write code that describes the infrastructure, but the actual construction doesn’t happen as the code runs – first the description is made, then the desired state is compared to the actual running state, and adjustments are made. Under the hood it leans heavily on Terraform providers, but it does what it says on the tin.
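
For flavour, here is a minimal sketch of what a Pulumi C# program can look like. This assumes the Pulumi.AzureNative packages and made-up resource names; it is not lifted from a real stack.

using System.Collections.Generic;
using Pulumi;
using Pulumi.AzureNative.Resources;
using Pulumi.AzureNative.Storage;
using Pulumi.AzureNative.Storage.Inputs;

// Describe the desired infrastructure; Pulumi diffs this description against the
// current state of the stack and creates/updates/deletes resources to match.
return await Deployment.RunAsync(() =>
{
    var resourceGroup = new ResourceGroup("webapp-rg");

    var storage = new StorageAccount("webappstorage", new StorageAccountArgs
    {
        ResourceGroupName = resourceGroup.Name,
        Sku = new SkuArgs { Name = SkuName.Standard_LRS },
        Kind = Kind.StorageV2,
    });

    // Stack outputs, printed after `pulumi up`
    return new Dictionary<string, object?>
    {
        ["storageAccountName"] = storage.Name,
    };
});

Running pulumi up previews the diff and applies it; the description is declarative and the engine does the comparing.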

MinVer – automagic versioning for .NET Core

At some point you will have written your own build-chain hack to populate some attributes on your assembly, stamping a brand on a binary so you can display a version on your site that you can track back to a specific commit. The simplest way of doing this, without needing to change branching strategy or write custom code, is MinVer.

It literally browses through your commit history to find the latest version tag and then derives the version from it, using the number of commits since that tag as the height. It is what I dreamed would be out there when I started looking. It is genius.
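
To show why that is handy, here is a sketch of reading the stamped version back out at runtime – the /version endpoint and minimal-API shape are my own example, not anything MinVer prescribes:

using System.Reflection;
using Microsoft.AspNetCore.Builder;

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

// MinVer writes the computed version into AssemblyInformationalVersion at build
// time, so the running site can report exactly which version was deployed.
var version = typeof(Program).Assembly
    .GetCustomAttribute<AssemblyInformationalVersionAttribute>()?
    .InformationalVersion ?? "unknown";

app.MapGet("/version", () => version);
app.Run();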

A couple of gotchas: it relies – duh – on having access to the git history, so you need to remember to remove .git from your .dockerignore file, or else your dotnet publish inside docker build will fail to locate any version information. Obviously, unless you intend to ship your whole source history inside the docker image, make sure you have a staged docker build – this is the default in recent Visual Studio templates – but still. I encourage you in any case to mount your finished docker image using docker run -it --entrypoint sh imagename:tag to have a look and check that your docker image contains what you expect.

Also, in your GitHub Actions workflow you will need to allow for a deeper fetch depth (fetch-depth: 0 on actions/checkout) so that MinVer has enough history to calculate the version number, but that is mentioned in the documentation. I already used a tag prefix ‘v’ for my versions, so I had to add that to my project files. No problems, it just worked. Very impressed.

Auto-login after signup

If you have a website that uses Open ID Connect for login, you may want to allow the user to be logged in directly after having validated their e-mail address and having created their password.

If you are using IdentityServer 4 you may be confused by the hits you get on the interwebs. I was, so I shall – mostly for my own sake – write down what is what, should I stumble upon this again.

OIDC login flow primer

There are several OpenID Connect authentication flows, depending on whether you are protecting an API, a native mobile app or a browser-based web app. Most flows basically work in such a way that you navigate to the site that you need to be logged in to access. It discovers that you aren’t logged in (most often, you don’t have a cookie set) and redirects you to its STS, IdentityServer4 in this case, and with this request it tells IdentityServer4 what site it is (client_id), which scopes it wants and how it wants to receive the tokens. IdentityServer4 will either just return the tokens (if the user was already logged in elsewhere) or get the information it needs from the end user (username, password, biometrics, whatever you want to support), and if this authentication is successful it will return some tokens, after which the original website will happily set an authentication cookie and let you in.
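
In ASP.NET Core terms, the relying party’s side of that dance is wired up roughly like this – a sketch only, with placeholder authority, client id and scope:

using Microsoft.AspNetCore.Authentication.Cookies;
using Microsoft.Extensions.DependencyInjection;

public static class AuthSetup
{
    public static void ConfigureOidc(IServiceCollection services)
    {
        // The cookie keeps the local session; "oidc" is the challenge scheme that
        // redirects the browser to IdentityServer4 when there is no session yet.
        services.AddAuthentication(options =>
        {
            options.DefaultScheme = CookieAuthenticationDefaults.AuthenticationScheme;
            options.DefaultChallengeScheme = "oidc";
        })
        .AddCookie()
        .AddOpenIdConnect("oidc", options =>
        {
            options.Authority = "https://sts.example.com"; // the IdentityServer4 instance
            options.ClientId = "webapp";                   // client_id: which site is asking
            options.ResponseType = "code";                 // how it wants to receive the tokens
            options.Scope.Add("api");                      // the scopes it wants (openid/profile are on by default)
            options.SaveTokens = true;
        });
    }
}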

The point is – you have to first go where you want; you can’t just navigate to the login screen, you need the context of having been redirected from the app you want to use for the login flow to work. As a side note, this means your end users can wreak havoc on themselves with favourites/bookmarks that capture login context which has long since expired.

Registration

You want to give users a simple on-boarding procedure: a few text boxes where they can type in an email address and password, or maybe you invite people via e-mail and let them set up their password and then become logged in. How do we make that work with the above flows?

The canonical blog post on this topic seems to be this one: https://benfoster.io/blog/identity-server-post-registration-sign-in/. Although brilliant, it is only partially helpful as it covers IdentityServer3, and the newer one is a lot different. Based on ASP.NET Core, for instance.

  1. The core idea is sound – generate a cryptographically random one-time access code (OTAC) and map it against the user after the user has been created in the registration page (in IdentityServer4).
  2. Create an anonymous endpoint in a controller in one of the apps the user will be allowed to use. In it, ascertain that you have been sent one of those codes, then Challenge the OIDC authentication flow, adding the code as an AcrValue as the request goes back to IdentityServer4.
  3. Extend the authentication system to allow these temporary codes to log you in.

To address the IdentityServer3-ness, people have tried all over the internet; here is somebody who gets it sorted: https://stackoverflow.com/questions/51457213/identity-server-4-auto-login-after-registration-not-working

Concretely you need a few things – first, the function that creates OTACs, which you can lift from Ben Foster’s blog post. A side note: do remember that if you use a fancier password hashing algorithm you have to use its dedicated validators rather than relying on hashing the same plaintext again and comparing. I.e. you need to fetch the hash from whatever storage you use and call the specific methods the library offers to verify that the plaintext matches the hash.
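
For completeness, a minimal sketch of the generation part – how you persist the code is up to you, but store it hashed and with a short expiry rather than as the raw value:

using System;
using System.Security.Cryptography;

public static class Otac
{
    // Generate a cryptographically random one-time access code that is safe to
    // put in a query string (base64url, no padding).
    public static string Generate()
    {
        var bytes = new byte[32];
        using var rng = RandomNumberGenerator.Create();
        rng.GetBytes(bytes);
        return Convert.ToBase64String(bytes)
            .TrimEnd('=')
            .Replace('+', '-')
            .Replace('/', '_');
    }
}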

After the OTAC is created, you need to redirect to a controller action in one of the protected websites, passing the OTAC along.

The next job is therefore to create the action.

        [AllowAnonymous]
        public async Task LogIn(string otac)
        {
            if (otac is null)
            {
                Response.Redirect("/Home/Index");
                return;
            }
            var properties = new AuthenticationProperties
            {
                Items = { new KeyValuePair<string, string>("otac", otac) },
                RedirectUri = Url.Action("Index", "Home", null, Request.Scheme)
            };

            await Request.HttpContext.ChallengeAsync(ClassLibrary.Middleware.AuthenticationScheme.Oidc, properties);
        } 

After stashing the OTAC in the AuthenticationProperties, it’s time to actually send the code over the wire, and to do that you need to intercept the call when the authentication middleware is about to redirect over to IdentityServer. This is done where the call to AddOpenIdConnect happens (maybe yours is in Startup.cs?), where you get to configure options, among which are some event handlers.

OnRedirectToIdentityProvider = async n =>{
    n.ProtocolMessage.RedirectUri = redirectUri;
    if ((n.ProtocolMessage.RequestType == OpenIdConnectRequestType.Authentication) && n.Properties.Items.ContainsKey("otac"))
    {
        // Trying to autologin after registration
        n.ProtocolMessage.AcrValues = n.Properties.Items["otac"];
    }
    await Task.FromResult(0);
}

After this – you need to subclass the AuthorizeInteractionResponseGenerator, get the AcrValues from the request, and – if the code checks out – log the user in and respond accordingly. Register this class on the IdentityServer builder in Startup.cs, as sketched below.
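
Registration is done on the IdentityServer builder rather than directly on the service collection – something like this, where the generator class name is my own placeholder:

// In Startup.ConfigureServices
services.AddIdentityServer()
    // ...your existing clients, resources and stores...
    .AddAuthorizeInteractionResponseGenerator<OtacInteractionResponseGenerator>();

// The generator below uses IHttpContextAccessor, so make sure it is registered
services.AddHttpContextAccessor();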

Unfortunately, I was still mystified as to how to log things in, in IdentityServer4, as I could not find a SignInManager used widely in the source code, but then I found this Stack Overflow answer:
https://stackoverflow.com/questions/56216001/login-after-signup-in-identity-server4, and it became clear that using an IHttpContextAccessor was “acceptable”.

    public override async Task<InteractionResponse> ProcessInteractionAsync(ValidatedAuthorizeRequest request, ConsentResponse consent = null)
    {
        var acrValues = request.GetAcrValues().ToList();
        var otac = acrValues.SingleOrDefault();

        if (otac != null && request.ClientId == "client")
        {
            var user = await _userStore.FindByOtac(otac, CancellationToken.None);

            if (user is object)
            {
                await _userStore.ClearOtac(user.Guid);
                var svr = new IdentityServerUser(user.SubjectId)
                {
                    AuthenticationTime = _clock.UtcNow.DateTime

                };
                var claimsPrincipal = svr.CreatePrincipal();
                request.Subject = claimsPrincipal;

                request.RemovePrompt();

                await _httpContextAccessor.HttpContext.SignInAsync(claimsPrincipal);

                return new InteractionResponse
                {
                    IsLogin = false,
                    IsConsent = false,
                };
            }
        }

        return await base.ProcessInteractionAsync(request, consent);
    }

Anyway, after ironing out the kinks the perceived inconvenience of the flow was greatly reduced. Happy coding!

Put your Swagger UI behind a login screen

I have tried to put a piece of API documentation behind interactive authentication. I have various methods that are available to users of different roles. In an attempt to be helpful I wanted to hide the API methods that you can’t access anyway. Of course when the user wants to call the methods for the purpose of trying them out, I use the well documented ways of hooking up Bearer token authentication in the Swashbuckle UI.

I thought this was a simple idea, but it seems to be a radical concept that was only used back in Framework days. After reading a bunch of almost relevant google hits, I finally went ahead and did a couple of things.

  1. Organise the pipeline so that UseAuthentication happens before the UseSwaggerUI call in the pipeline.
  2. Hook up an operation filter to tag the operations that are valid for the current user by checking the roles in the user’s ClaimsPrincipal (see the sketch after this list).
  3. Hook up a document filter to filter out the non-tagged operations, and also clean up the tags or you’ll get duplicates – although further experimentation here too can yield results.
  4. Set up the API auth as if you are building an interactive website, so the Open ID Connect middleware is the default challenge scheme, Cookie is the default scheme, and Bearer is added as an additional scheme.
  5. Add the Bearer scheme to all API controllers (or use some other policy – the point is, you need to specify that the API controllers only accept Bearer auth).
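
For steps 2 and 3, here is a sketch of what the filters could look like. The x-visible marker and the AuthorizeAttribute-based role lookup are my assumptions to make the sketch self-contained, not necessarily how you would want to wire it up:

using System;
using System.Linq;
using Microsoft.AspNetCore.Authorization;
using Microsoft.AspNetCore.Http;
using Microsoft.OpenApi.Any;
using Microsoft.OpenApi.Models;
using Swashbuckle.AspNetCore.SwaggerGen;

// Step 2: mark the operations the current user is allowed to call.
public class RoleOperationFilter : IOperationFilter
{
    private readonly IHttpContextAccessor _httpContextAccessor;

    public RoleOperationFilter(IHttpContextAccessor httpContextAccessor) =>
        _httpContextAccessor = httpContextAccessor;

    public void Apply(OpenApiOperation operation, OperationFilterContext context)
    {
        var user = _httpContextAccessor.HttpContext?.User;

        // Roles demanded by [Authorize(Roles = "...")] on the action or its controller
        var requiredRoles = context.MethodInfo.GetCustomAttributes(true)
            .Concat(context.MethodInfo.DeclaringType?.GetCustomAttributes(true) ?? Array.Empty<object>())
            .OfType<AuthorizeAttribute>()
            .Where(a => !string.IsNullOrEmpty(a.Roles))
            .SelectMany(a => a.Roles!.Split(','))
            .Select(r => r.Trim())
            .ToList();

        if (requiredRoles.Count == 0 || requiredRoles.Any(r => user?.IsInRole(r) == true))
            operation.Extensions["x-visible"] = new OpenApiBoolean(true);
    }
}

// Step 3: strip the operations that were not marked, and drop tags that end up empty.
public class RoleDocumentFilter : IDocumentFilter
{
    public void Apply(OpenApiDocument swaggerDoc, DocumentFilterContext context)
    {
        foreach (var (path, item) in swaggerDoc.Paths.ToList())
        {
            foreach (var operation in item.Operations.Where(o => !o.Value.Extensions.ContainsKey("x-visible")).ToList())
                item.Operations.Remove(operation.Key);

            if (item.Operations.Count == 0)
                swaggerDoc.Paths.Remove(path);
        }

        var usedTags = swaggerDoc.Paths.Values
            .SelectMany(p => p.Operations.Values)
            .SelectMany(o => o.Tags)
            .Select(t => t.Name)
            .ToHashSet();

        swaggerDoc.Tags = swaggerDoc.Tags?.Where(t => usedTags.Contains(t.Name)).ToList();
    }
}

Both filters are registered in AddSwaggerGen with options.OperationFilter<RoleOperationFilter>() and options.DocumentFilter<RoleDocumentFilter>(), and the operation filter needs services.AddHttpContextAccessor() to be able to see the current user.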