Tag Archives: C#

You can have nice things

I have come across a few things that are legitimately pleasant to use, so I thought I should collate them here to aid my aging memory. Dear reader, I am not attempting to copy Scott Hanselman’s tools list, I am stealing the concept.

Github Actions

Yes, not something revolutionary I just uncovered that you have never heard of before, but still. It's pretty great. Out of all the yet-another-yet-another-markup-language-configuration-file-to-configure-a-thing tools that exist to help you orchestrate builds, I personally find GitHub Actions the least weirdly magical and the easiest to live with, but then I've only tried CircleCI, Azure DevOps/TFS and TeamCity.

Pulumi – Infrastructure as code

Write your infrastructure code in C# using Pulumi. It supports Azure, AWS, Google Cloud and Kubernetes, but – as I've ranted about before – this shouldn't be taken as a way to support multi-cloud: the object hierarchy is still very bespoke to each cloud provider. That said, you can mix and match providers in a stack; let's say you have your DNS hosted in DNSimple but your cloud compute bits in Azure. You would otherwise be stuck doing a lot of bash scripting to glue that together, but Pulumi lets you write one C# program that describes all of your infra, mostly.
You will recognise the feel of using it from Chef: you write code that describes the infrastructure, but the actual construction does not happen as the code runs. First the description is made, then the desired state is compared to the actual running state, and adjustments are made. It is a thin wrapper over Terraform, but it does what it says on the tin.
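
To give a flavour of what that looks like, here is a minimal sketch along the lines of the Pulumi Azure getting-started template – the resource names are made up, and the exact provider packages (here the classic Pulumi.Azure provider) may differ from what you end up using.

using System.Threading.Tasks;
using Pulumi;
using Pulumi.Azure.Core;
using Pulumi.Azure.Storage;

class MyStack : Stack
{
    public MyStack()
    {
        // Declare the desired state; Pulumi diffs this against what is
        // actually running and creates/updates/deletes accordingly.
        var resourceGroup = new ResourceGroup("my-rg");

        var storage = new Account("mystorage", new AccountArgs
        {
            ResourceGroupName = resourceGroup.Name,
            AccountTier = "Standard",
            AccountReplicationType = "LRS",
        });
    }
}

class Program
{
    static Task<int> Main() => Deployment.RunAsync<MyStack>();
}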

MinVer – automagic versioning for .NET Core

At some point you will write your build chain hack to populate some attributes on your Assembly to stamp a brand on a binary so you can display a version on your site that you can track back to a specific commit. The simplest way of doing this, without needing to change branching strategy or write custom code, is MinVer.

It literally walks through your commit history to find your latest version tag and then increments that version by the number of commits since that tag. It is what I dreamed would be out there when I started looking. It is genius.

A couple of gotchas: It relies – duh – on having access to the git history, so you need to remember to remove .git from your .dockerignore file, or else your dotnet publish inside docker build will fail to locate any version information. Obviously, unless you intend to release all versions of your source code in the docker image, make sure you have a staged docker build – this is the default in recent Visual Studio templates – but still. I encourage you in any case to mount your finished docker image using docker run -it --entrypoint sh imagename:tag to check that it contains what you expect.

Also, in your GitHub Actions you will need to allow for a deeper fetch depth so that MinVer has enough history to calculate the version number, but that is mentioned in the documentation. I already used a tag prefix ‘v’ for my versions, so I had to add that to my project files. No problems, it just worked. Very impressed.
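
For reference, the two tweaks look roughly like this – a MinVer package reference plus the MinVerTagPrefix property in the csproj, and a full fetch depth on the checkout step in the workflow. Treat the version range and layout as assumptions rather than exact values from my setup.

<ItemGroup>
  <PackageReference Include="MinVer" Version="2.*" PrivateAssets="All" />
</ItemGroup>
<PropertyGroup>
  <!-- Tags look like v1.2.3, so tell MinVer about the prefix -->
  <MinVerTagPrefix>v</MinVerTagPrefix>
</PropertyGroup>

    - uses: actions/checkout@v2
      with:
        fetch-depth: 0 # MinVer needs the full history and the tags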

Auto-login after signup

If you have a website that uses OpenID Connect for login, you may want to allow the user to be logged in directly after having validated their e-mail address and created their password.

If you are using IdentityServer 4 you may be confused by the hits you get on the interwebs. I was, so I shall – mostly for my own sake – write down what is what, should I stumble upon this again.

OIDC login flow primer

There are several OpenID Connect authentication flows, depending on whether you are protecting an API, a mobile native app or a browser-based web app. Most flows basically work like this: you navigate to the site that you need to be logged in to access. It discovers that you aren’t logged in (most often, you don’t have the cookie set) and redirects you to its STS, IdentityServer4 in this case, and with this request it tells IdentityServer4 what site it is (client_id), the scopes it wants and how it wants to receive the tokens. IdentityServer4 will either just return the tokens (if the user was already logged in elsewhere) or get the information it needs from the end user (username, password, biometrics, whatever you want to support), and eventually, if the authentication is successful, IdentityServer4 will return some tokens and the original website will happily set its authentication cookie and let you in.

The point is – you have to first go where you want; you can’t just navigate to the login screen. You need the context of having been redirected from the app you want to use for the login flow to work. As a sidenote, this means your end users can wreak havoc unto themselves with favourites/bookmarks capturing login context that has long expired.
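
For orientation, this is roughly what the relying-party side of that dance looks like in ASP.NET Core – the client_id, the scopes and how the tokens come back are all configured where you add the OpenID Connect handler. Take it as a sketch; the authority and client id are placeholders, not values from a real system.

// In Startup.ConfigureServices of the protected web app
services.AddAuthentication(options =>
{
    options.DefaultScheme = "Cookies";
    options.DefaultChallengeScheme = "oidc";
})
.AddCookie("Cookies")
.AddOpenIdConnect("oidc", options =>
{
    options.Authority = "https://sts.example.com"; // your IdentityServer4
    options.ClientId = "client";                   // the client_id sent on redirect
    options.ResponseType = "code";                 // how the tokens come back
    // options.Scope.Add("api1");                  // extra scopes beyond the default openid/profile
    options.SaveTokens = true;
});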

Registration

You want to give users a simple on-boarding procedure, a few textboxes where they can type in email and password, or maybe invite people via e-mail and let them set up their password and then become logged in. How do we make that work with the above flows?

The canonical blog post on this topic seems to be this one: https://benfoster.io/blog/identity-server-post-registration-sign-in/. Although brilliant, it is only partially helpful, as it covers IdentityServer3, and the newer version is quite different – it is based on ASP.NET Core, for instance.

  1. The core idea is sound – generate a cryptographically random one-time access code (OTAC) and map it against the user after the user has been created in the registration page (in IdentityServer4).
  2. Create an anonymous endpoint in a controller in one of the apps the user will be allowed to use. In it, ascertain that you have been sent one of those codes, then Challenge the OIDC authentication flow, adding the code to the AcrValues of the request as it goes back to IdentityServer4.
  3. Extend the authentication system to allow these temporary codes to log you in.

To address the IdentityServer3-ness, people have tried various things all over the internet; here is somebody who gets it sorted: https://stackoverflow.com/questions/51457213/identity-server-4-auto-login-after-registration-not-working

Concretely you need a few things – first, the function that creates the OTACs, which you can lift from Ben Foster’s blog post. As a sidenote, do remember that if you use a cooler password hashing algorithm you have to use its specific validators rather than rely on applying the hash to the same plaintext and comparing; i.e., you need to fetch the hash from whatever storage you use and use the methods the library offers to verify that the hashes are equivalent.
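
If you just need a starting point, a minimal sketch of such an OTAC generator could look like the following – the length and the URL-safe encoding are my own choices for illustration, not something prescribed by Ben Foster’s post.

using System;
using System.Security.Cryptography;

public static class Otac
{
    // Generate a cryptographically random one-time access code.
    // 32 bytes and URL-safe base64 are arbitrary choices for this sketch.
    public static string Create(int byteLength = 32)
    {
        var bytes = new byte[byteLength];
        using var rng = RandomNumberGenerator.Create();
        rng.GetBytes(bytes);

        // URL-safe so the code can travel in a query string
        return Convert.ToBase64String(bytes)
            .Replace('+', '-')
            .Replace('/', '_')
            .TrimEnd('=');
    }
}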

After the OTAC is created, you need to redirect to a controller action in one of the protected websites, passing the OTAC along.

The next job is therefore to create the action.

        [AllowAnonymous]
        public async Task LogIn(string otac)
        {
            // No code supplied - just go home rather than starting a challenge
            if (otac is null)
            {
                Response.Redirect("/Home/Index");
                return;
            }

            var properties = new AuthenticationProperties
            {
                // Stash the OTAC so the OIDC middleware can pick it up later
                Items = { new KeyValuePair<string, string>("otac", otac) },
                RedirectUri = Url.Action("Index", "Home", null, Request.Scheme)
            };

            await Request.HttpContext.ChallengeAsync(ClassLibrary.Middleware.AuthenticationScheme.Oidc, properties);
        }

After storing the OTAC in the AuthenticationProperties, it’s time to actually send the code over the wire, and to do that you need to intercept the call when the authentication middleware is about to redirect over to IdentityServer. This is done where the call to AddOpenIdConnect happens (maybe yours is in Startup.cs?), where you get to configure options, among which are some event handlers.

OnRedirectToIdentityProvider = async n =>{
    n.ProtocolMessage.RedirectUri = redirectUri;
    if ((n.ProtocolMessage.RequestType == OpenIdConnectRequestType.Authentication) && n.Properties.Items.ContainsKey("otac"))
    {
        // Trying to autologin after registration
        n.ProtocolMessage.AcrValues = n.Properties.Items["otac"];
    }
    await Task.FromResult(0);
}

After this you need to override the AuthorizeInteractionResponseGenerator: get the AcrValues from the request and – if the code checks out – log the user in and respond accordingly. Register this class using services.AddAuthorizeInteractionResponseGenerator(); in Startup.cs.

Unfortunately, I was still mystified as to how to actually sign the user in within IdentityServer4, as I could not find a sign-in manager used widely in the source code, but then I found this Stack Overflow post:
https://stackoverflow.com/questions/56216001/login-after-signup-in-identity-server4, and it became clear that using an IHttpContextAccessor was “acceptable”.

    public override async Task<InteractionResponse> ProcessInteractionAsync(ValidatedAuthorizeRequest request, ConsentResponse consent = null)
    {
        var acrValues = request.GetAcrValues().ToList();
        var otac = acrValues.SingleOrDefault();

        if (otac != null && request.ClientId == "client")
        {
            var user = await _userStore.FindByOtac(otac, CancellationToken.None);

            if (user is object)
            {
                await _userStore.ClearOtac(user.Guid);
                var svr = new IdentityServerUser(user.SubjectId)
                {
                    AuthenticationTime = _clock.UtcNow.DateTime

                };
                var claimsPrincipal = svr.CreatePrincipal();
                request.Subject = claimsPrincipal;

                request.RemovePrompt();

                await _httpContextAccessor.HttpContext.SignInAsync(claimsPrincipal);

                return new InteractionResponse
                {
                    IsLogin = false,
                    IsConsent = false,
                };
            }
        }

        return await base.ProcessInteractionAsync(request, consent);
    }

Anyway, after ironing out the kinks the perceived inconvenience of the flow was greatly reduced. Happy coding!

Database Integration Testing

Testing your SQL queries is as important as testing any other piece of logic. Unless you only do trivial reads and writes, some type of logic will be implemented at least in the form of a query, and you would like to validate that logic the same as any other.

Overview

For this you need database integration tests. There are multiple strategies (in-memory databases, additional abstractions and mocks, or creating a temporary but real database, just to name a few), but in this post I will discuss running a Linux SQL Server docker image, applying all migrations to it from scratch and then running the tests on top of it.

Technology choice is beyond the scope of this text. I use .NET Core 3.1, XUnit and legacy C# because I know it already and because my F# is not idiomatic enough for me not to go on tangents and end up writing a monad tutorial instead. I have used MySQL / MariaDB before and I will never use it for anything I care about. I have tried Postgres, and I like it – it is a proper database system – but again, I am not familiar enough with it for my purposes this time. To reiterate, this post is based on using C# on .NET Core 3.1 over MSSQL Server, and the tests will be run on push using GitHub Actions.

My development machine is really trying, OK, so let us cut it some slack. Anyway, I have Windows 10 insider something, with WSL2 and Docker Desktop for WSL2 on it. I run Ubuntu 20.04 in WSL2, dist-upgraded from 18.04. I develop the code in VS2019 Community on Windows, obviously.

Problem

This is simple: when a commit is made to the part of the repository that contains DbUp SQL scripts, related production code or these tests, I want to trigger tests that verify that my SQL migrations are valid, and when SQL queries change, I want those changes verified against a real database server.

I do not like docker, especially docker-compose. It seems to me it has been designed by people that don’t know what they are on about. Statistically that cannot be the case, since there are tens of thousands of docker-compose users that do magical things, but I have wasted enough time, so like Seymour Skinner I proclaim, “no, it is the children that are wrong!”, and I thus need to find another way of running an ad hoc SQL Server.

All CI stuff and production hosting of this system is Linux based, but Visual Studio is stuck in Windows, so I need a way to be able to trigger these tests in a cross platform way.

Clues

I found an article by Jeremy D Miller that describes how to use a .NET client of the Docker API to automatically run an MSSQL database server. I made some hacky mods:

internal class SqlServerContainer : IDockerServer
{
    public SqlServerContainer() : base("microsoft/mssql-server-linux:latest", "dev-mssql")
    {
        // My production code uses some custom types that Dapper needs
        // handlers for. Registering them here seems to work
        SqlMapper.AddTypeHandler(typeof(CustomType), CustomTypeHandler.Default);
    }

    public static readonly string ConnectionString = "Data Source=127.0.0.1,1436;User Id=sa;Password=AJ!JA!J1aj!JA!J;Timeout=5";

    // Gotta wait until the database is really available
    // or you'll get oddball test failures;)
    protected override async Task<bool> isReady()
    {
        try
        {
            using (var conn =
                new SqlConnection(ConnectionString))
            {
                await conn.OpenAsync();

                return true;
            }
        }
        catch (Exception)
        {
            return false;
        }
    }

    // Watch the port mapping here to avoid port
    // contention w/ other Sql Server installations
    public override HostConfig ToHostConfig()
    {
        return new HostConfig
        {
            PortBindings = new Dictionary<string, IList<PortBinding>>
            {
                {
                    "1433/tcp",
                    new List<PortBinding>
                    {
                        new PortBinding
                        {
                            HostPort = $"1436",
                            HostIP = "127.0.0.1"
                        }

                    }
                }
            },

        };
    }

    public override Config ToConfig()
    {
        return new Config
        {
            Env = new List<string> { "ACCEPT_EULA=Y", "SA_PASSWORD=AJ!JA!J1aj!JA!J", "MSSQL_PID=Developer" }
        };
    }

    public async static Task RebuildSchema(IDatabaseSchemaEnforcer enforcer, string databaseName)
    {
        using (var conn = new SqlConnection($"{ConnectionString};Initial Catalog=master"))
        {
            await conn.ExecuteAsync($@"
                IF DB_ID('{databaseName}') IS NOT NULL
                BEGIN
                    DROP DATABASE {databaseName}
                END
            ");
        }
        await enforcer.EnsureSchema($"{ConnectionString};Initial Catalog={databaseName}");
    }
}

I then cheated by reading the documentation for DbUp and combined the SQL Server schema creation with the docker image starting code to produce the witch’s brew below.

internal class APISchemaEnforcer : IDatabaseSchemaEnforcer
{
    private readonly IMessageSink _diagnosticMessageSink;

    public APISchemaEnforcer(IMessageSink diagnosticMessageSink)
    {
        _diagnosticMessageSink = diagnosticMessageSink;
    }

    public Task EnsureSchema(string connectionString)
    {
        EnsureDatabase.For.SqlDatabase(connectionString);
        var upgrader =
            DeployChanges.To
                .SqlDatabase(connectionString)
                .WithScriptsEmbeddedInAssembly(Assembly.GetAssembly(typeof(API.DbUp.Program)))
                .JournalTo(new NullJournal())
                .LogTo(new DiagnosticSinkLogger(_diagnosticMessageSink))
                .Build();
        var result = upgrader.PerformUpgrade();
        if (!result.Successful)
        {
            // Fail loudly if a migration script breaks, rather than running tests against a half-built schema
            throw new System.InvalidOperationException("Database migration failed", result.Error);
        }
        return Task.CompletedTask;
    }
}

When DbUp runs it will output all the scripts it executes to the console, so we need to make sure this type of information actually ends up being logged, despite it being diagnostic. There are two problems here: first, we need to use an IMessageSink to write diagnostic logs from DbUp so that XUnit becomes aware of the information, and secondly we must add a configuration file to the integration test project so that XUnit chooses to print those messages to the console.

Our message sink diagnostic logger is plumbed into DbUp as you can see in the previous example, and here is the implementation:

internal class DiagnosticSinkLogger : IUpgradeLog
{
    private IMessageSink _diagnosticMessageSink;

    public DiagnosticSinkLogger(IMessageSink diagnosticMessageSink)
    {
        _diagnosticMessageSink = diagnosticMessageSink;
    }

    public void WriteError(string format, params object[] args)
    {
        var message = new DiagnosticMessage(format, args);
        _diagnosticMessageSink.OnMessage(message);
    }

    public void WriteInformation(string format, params object[] args)
    {
        var message = new DiagnosticMessage(format, args);
        _diagnosticMessageSink.OnMessage(message);
    }

    public void WriteWarning(string format, params object[] args)
    {
        var message = new DiagnosticMessage(format, args);
        _diagnosticMessageSink.OnMessage(message);
    }
}

Telling XUnit to print diagnostic information is done through a file in the root of the integration test project called xunit.runner.json, and it needs to look like this:

{
  "$schema": "https://xunit.net/schema/current/xunit.runner.schema.json",
  "diagnosticMessages": true
}

If you started out with Jeremy’s example and have followed along, applying my tiny changes, you may or may not be up and running by now. I had an additional problem – developing on Windows while running CI on Linux. I solved this with another well-judged hack:

public abstract class IntegrationFixture : IAsyncLifetime
{
    private readonly IDockerClient _client;
    private readonly SqlServerContainer _container;

    public IntegrationFixture()
    {
        _client = new DockerClientConfiguration(GetEndpoint()).CreateClient();
        _container = new SqlServerContainer();
    }

    private Uri GetEndpoint()
    {
        return RuntimeInformation.IsOSPlatform(OSPlatform.Windows)
            ? new Uri("tcp://localhost:2375")
            : new Uri("unix:///var/run/docker.sock");
    }

    public async Task DisposeAsync()
    {
        await _container.Stop(_client);
    }

    protected string GetConnectionString() => $"{SqlServerContainer.ConnectionString};Initial Catalog={DatabaseName}";
        
    protected abstract IDatabaseSchemaEnforcer SchemaEnforcer { get; }
    protected abstract string DatabaseName { get; }

    public async Task InitializeAsync()
    {
        await _container.Start(_client);
        await SqlServerContainer.RebuildSchema(SchemaEnforcer, DatabaseName);
    }

    public SqlConnection GetConnection() => new SqlConnection(GetConnectionString());
}

The point is basically: if you are executing on Linux, find the Unix socket, but if you are stuck on Windows, try TCP.

Github Action

After having a single test – to my surprise – actually pass locally, after having created the entire database, I thought it was time to think about the CI portion of this adventure. I had no idea if GitHub Actions would allow me to just pull down docker images, but I thought “probably not”. I still created the YAML, because nobody likes a coward:

# This is a basic workflow to help you get started with Actions

name: API Database tests

# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
on:
  push:
    branches: [ master ]
    paths: 
      - '.github/workflows/thisaction.yml'
      - 'test/API.DbUp.Tests/*'
      - 'src/API.DbUp/*'
      - 'src/API/*'
  pull_request:
    branches: [ master ]
    paths: 
      - '.github/workflows/thisaction.yml'
      - 'test/API.DbUp.Tests/*'
      - 'src/API.DbUp/*'
      - 'src/API/*'

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "test"
  test:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
    # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
    - uses: actions/checkout@v2

    # Runs a single command using the runner's shell
    - name: Run .NET Core CLI tests
      run: |
        echo Run tests based on docker. Bet u twenty quid this will fail
        dotnet test test/API.DbUp.Tests/API.DbUp.Tests.csproj

You can determine, based on the echo line in the workflow above, the level of surprise and elation I felt when, after I committed and pushed, GitHub chugged through, downloaded the MSSQL docker image, recreated my schema, ran the test and returned a success message. I am still in shock.

So what now?

As Jeremy discusses in his post, the problem with database integration tests is that you want to get a lot of assertions out of each database you create, because creating it is expensive. In order to do so, and to procrastinate a little, I created a nifty little piece of code that keeps track of the test data I create in each function, so that I can run tests independently of each other and clean up almost automatically using a Stack<T>.

I created little helper functions that would create domain objects when setting up tests. Each test would at the beginning create a Stack<RevertAction> and pass it into each helper function while setting up the tests, and each helper function would push a new RevertAction($"DELETE FROM ThingA WHERE id = {IDofThingAIJustCreated}") onto that stack. At the end of each test, I would invoke the Revert extension method on the stack and pass it some context so that it can access the test database and output test logging if necessary.

public class RevertAction
{
    string _sqlCommandText;

    public RevertAction(string sqlCommandText)
    {
        _sqlCommandText = sqlCommandText;
    }

    public async Task Execute(IntegrationFixture fixture, ITestOutputHelper output)
    {
        using var conn = fixture.GetConnection();
        try
        {
            await conn.ExecuteAsync(_sqlCommandText);
        }
        catch(Exception ex)
        {
            output.WriteLine($"Revert action failed: {_sqlCommandText}");
            output.WriteLine($"Exception: {ex.Message}");
            output.WriteLine($"{ex.ToString()}");
            throw;
        }

    }
}

The revert method is extremely simple:

public static class StackExtensions
{
    public static async Task Revert(this Stack<RevertAction> actions, IntegrationFixture fixture, ITestOutputHelper output)
    {
        while (actions.Any())
        {
            var action = actions.Pop();
            await action.Execute(fixture, output);
        }
    }
}
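
To make the pattern concrete, here is a rough sketch of what a test using these pieces could look like – the ApiIntegrationFixture subclass, the TestData.CreateCustomer helper and the Customer table are all made up for illustration and are not from the real code base.

using System.Collections.Generic;
using System.Threading.Tasks;
using Dapper;
using Xunit;
using Xunit.Abstractions;

public class CustomerQueryTests : IClassFixture<ApiIntegrationFixture>
{
    private readonly ApiIntegrationFixture _fixture; // assumed concrete IntegrationFixture subclass
    private readonly ITestOutputHelper _output;

    public CustomerQueryTests(ApiIntegrationFixture fixture, ITestOutputHelper output)
    {
        _fixture = fixture;
        _output = output;
    }

    [Fact]
    public async Task Finds_the_customer_it_just_created()
    {
        var revertActions = new Stack<RevertAction>();
        try
        {
            // Hypothetical helper: inserts a row and pushes the matching DELETE onto the stack
            var customerId = await TestData.CreateCustomer(_fixture, revertActions, name: "Acme");

            using var conn = _fixture.GetConnection();
            var name = await conn.QuerySingleAsync<string>(
                "SELECT Name FROM Customer WHERE Id = @Id", new { Id = customerId });

            Assert.Equal("Acme", name);
        }
        finally
        {
            // Clean up whatever the helpers created, most recent first
            await revertActions.Revert(_fixture, _output);
        }
    }
}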

So – that was it. The things I put in this blog post were the hardest for me to figure out, the rest is just a question of maintaining database integration tests, and that is very implementation specific, so I leave that up to you.

Rebootcamp

I have been saying a bunch of things, repeating what others say, mostly, but never actually internalised what they really meant. After a week with Fred George and Tom Scott I have seen the light in some way. I have seen proof of the efficacy of pair programming, I have seen the value of fast red-green-refactor cycles and most importantly I have learnt just how much I don’t know.

This was an Object Bootcamp developed by Fred George and Deliberate and basically consisted of problem solving in pairs going over various patterns and OO design in general, pointing out various code smells to look out for and how to refactor your way out of trouble. The course packed in as much as the team could take over the course of the week and is highly recommended. Our finest OO developers in the team still learned new things over the week and the rest of us learned even more.

Where to go from here? I use this blog as a way to write down things I learn so I can reference it later. My fanbase tends to stick to my posts about NHibernate and ASP.NET MVC 3 or something from several years ago, so I need not worry about making things fresh and interesting for the readership. The general recommended reading list that came out of this week includes the Gang of Four’s Design Patterns, Fowler’s Refactoring and Analysis Patterns, and Kerievsky’s Refactoring to Patterns.

So, GoF and Refactoring – no shockers, eh? We have them in our library and I’ve even read them, even though I first read some other derivative book on design patterns back in the day, but obviously there are things that didn’t quite take the first time. I guess I was too young. Things make so much more sense now when you have a catalogue of past mistakes to cross-reference against various patterns.

The thing is, what I hadn’t internalised properly is how evil getters and setters are. I had some separation of concerns in terms of separating database classes from model classes, but still the classes didn’t instantiate good objects; they were basically just bags of data, and mediator classes held the business logic, messing with other classes’ data instead of proper objects churning cleanly.

Encapsulating information in the system is crucial. It is hard to do correctly, but by timeboxing the time from red to green you force yourself to build the next simplest clean thing before you continue. There is no time for gold plating, and boy do you notice when you veer off and try something clever, only to realise that you needed to stop and go back. Small changes. I have written this so many times before, but if you do it properly it really works. I have seen JB Rainsberger and Greg Young talk about this, and I have nodded and said sure. Testify! “That would be nice to get to do in practice” was my thinking. And then I added getters and setters to my classes. Or at least made them anemic by having a constructor with parameters and then getters, used by demigod classes. The time to make a change is yesterday, not tomorrow.

So, yes. Analysis Patterns is a hard read, said Fred. Well, then. It seems extremely interesting. I think Refactoring to Patterns will be the very next thing I read, but then I will need to take a stab at it.

I need to learn where patterns could get rid of code smells, increase encapsulation and reduce complexity.

There is a handy catalogue of refactorings that I already have a shortcut to in the Chrome toolbar. It gets a lot more clicks now. In general I will not make any grand statements here, but rather come back with a post showing results.

The Actor Model

In the .NET community, the Actor model of computing is perhaps not all that familiar. In this post I will briefly describe my understanding of the possibilities it gives the “real life” enterprise LOB-application developer, without going into specifics.


Different abstraction

The Actor model is a different abstraction from the normally more machine-like multithreaded server process model. This model sees the software system as a set of Actors listening for incoming messages. Messages are sent to mailing addresses, conceptually – as in, not a hard reference to a running piece of code of any kind. An actor can only send messages to other actors that it knows the address of, and it will know only of actors it receives messages from or creates itself. Unintuitively, the communication between actors is handled through a concept known as Channels, which would imply something more direct than a mail-drop system. In common implementations today, channels are implemented as queues.

The actor reacts to an incoming message by sending a finite number of messages, creating a finite number of new actors and deciding how to handle the next incoming message. The messages contain all the information needed for this processing, so there is no global state to modify or any shared resources to wait for. Each actor has its own execution context. Whether you execute all actors in the same thread in the same process, or whether you have tens of thousands of separate machines with a multitude of threads of execution in several processes, does not matter to the actor. This leads to fantastic potential for scalability, as there is little or no negative coupling.

Real-world examples?

If you think about it more closely, the Actor model can be said to better model the organizations we are building systems for. If a process requires a certain number of steps to be taken, the steps – or rather the coworkers performing them – can be modelled as actors, and the piece of paper or file that travels from desk to desk can be modelled as a message. Just like faceless corporate drones, actors can be stateful, as is evident from the definition above.

In the Actor model, data is not encapsulated but rather transmitted as messages between actors that perform actions with that data and then send the information forward. In object oriented programming you still, conceptually, send messages, but in practice you call methods on the object; the objects are one with their data, and the only way to modify the data is by sending a message to the object and hoping the object will allow itself to be modified in the way you want.

There are large practical similarities between classic object oriented programming and the Actor model, and the Actor model frameworks implement actors as “objects” and use the OOP toolbox to implement the infrastructure that the frameworks provide, but the conceptual differences are important to remember when working with this model.

What frameworks exist for .NET?

There are two well-known Actor model frameworks for .NET, Retlang (http://code.google.com/p/retlang/) and Stact. They both provide the basics you can expect from a framework of this kind, only Retlang strays somewhat from the Actor model nomenclature and provides no remote messaging capability, focusing instead on high performance in-memory messaging. It is unclear how development proceeds with Retlang, but there is support for .NET 4.

Another framework that supports the Actor model is Stact (https://github.com/phatboyg/Stact), which uses concepts more closely linked to the specification and would be preferred by those who like conceptual clarity.

Recently, F# has introduced the concept of F# Agents, which are essentially actors. They have isolation between instances (actors) and respond asynchronously to incoming statically typed messages (http://www.developerfusion.com/article/139804/an-introduction-to-f-agents/).

We will proceed to show a brief example using F#, as it is less verbose than other ways of implementing this architecture. The examples are slightly modified from the above link and also ripped straight from the horse’s mouth, i.e. Don Syme’s post http://blogs.msdn.com/b/dsyme/archive/2010/02/15/async-and-parallel-design-patterns-in-f-part-3-agents.aspx

open Microsoft.FSharp.Control

type Agent<'T> = MailboxProcessor<'T>

let agent =
    Agent.Start(fun inbox ->
        let rec loop n = async {
            let! msg = inbox.Receive()
            printfn "now seen a total of %d messages" (n+1)
            return! loop (n+1)
        }
        loop 0)

for i in 1 .. 1000 do
    agent.Post (sprintf "Hello %d" i)

The above agent uses a built-in class called MailboxProcessor that allows you to post messages, passing in the actual processor function as an argument, “putting the fun back into functional”.

Of course this is a silly example that doesn’t make anybody happy. Let’s show something more network-related, so that you can imagine the implications.

open System.Net.Sockets

/// serve up a stream of quotes
let serveQuoteStream (client: TcpClient) = async {
    let stream = client.GetStream()
    while true do
        do! stream.AsyncWrite( "MSFT 10.38"B )
        do! Async.Sleep 1000 // sleep one second
}

 

What is the point of this then? Well, the agents are asynchronous and take no resources while waiting.  You might as well create a bunch of actors that will all lie there waiting for an incoming request and deal with them asynchronously with mindblowing efficiency. Every Actor (or instance of Agent) has its own state completely isolated from the others. See? No problems with shared mutable state. Put the foot on the accelerator and press it all the way to the floor.

Error management in Actors

Below is an example of error management using a supervisor that will take care of your vast pool of actors and deal with their failures. Again, contrived example, but remember you have all of the .NET framework at your disposal, so just imagine the supervisor doing some really clever reporting or cleanup.

open Microsoft.FSharp.Control

type Agent<'T> = MailboxProcessor<'T>

module Agent = 
   let reportErrorsTo (supervisor: Agent<exn>) (agent: Agent<_>) = 
       agent.Error.Add(fun error -> supervisor.Post error); agent

   let start (agent: Agent<_>) = agent.Start(); agent

let supervisor = 
   Agent<int * System.Exception>.Start(fun inbox ->
     async { while true do 
               let! (agentId, err) = inbox.Receive()
               printfn "an error '%s' occurred in agent %d" err.Message agentId })


let agents = 
   [ for agentId in 0 .. 10000 ->
        let agent = 
            new Agent<string> (fun inbox ->
               async { 
                    while true do 
                        let! msg = inbox.Receive()
                        if msg.Contains("agent 99") then 
                            failwith "I don't like that cookie!" }
                ) 
        agent.Error.Add(fun error -> supervisor.Post (agentId, error))
        agent.Start()
        (agentId, agent) ]

for (agentId, agent) in agents do 
   agent.Post (sprintf "message to agent %d" agentId )


Conclusion

The implication of the Actor model, and of F# Agents in particular, is that where you can use messaging (you are free to use F# Type Providers to make your various XML web service packets become F# types) you can process those messages with great efficiency and trigger all sorts of .NET functionality asynchronously. You could use F# Agents to back your WebApi services to make the code more concise and responsive. There is no end to the possibilities.

For further reading, please don’t hesitate to follow Don Syme at http://blogs.msdn.com/b/dsyme/ and the various blogs and documentation available from there.