Category Archives: General

Cutler, Star Wars, fast feedback and accuracy

As my father’s son, I was raised to see the plucky heroes at AT&T and Bell Labs, Brian Kernighan and Dennis Ritchie, as the Jedi, fighting with the light side of the Force and thus foregoing force lightning and readable documentation, as those are only available to Dark Side force users such as Bill Gates. Anders Hejlsberg must be Anakin Skywalker in my father’s cinematic universe, as Anders first created Turbo Pascal, our favourite programming language when I was little, only to turn to the Dark Side and forge C# and TypeScript – presumably using red kyber crystals.

Anyway – the Count Dooku in this tortured analogy would be Dave Cutler, who created both VMS and Windows NT and was recently interviewed on Dave Plummer’s YouTube channel for several hours.

If you ever think you are going to watch that interview, do so now, unless you are OK with spoiler or spoiler-adjacent content. Also be warned the interview is long. Very, very long.

I find it fascinating how far removed his current work on Xbox game streaming (I think? xCloud sounds like a game streaming solution) is from where he started when he left college. His first foray into computing was in simulations, and he had to bring a pile of punch cards – a thousand punch cards, I think he says – to run a simulation at IBM, because the computing power required was too big for what they had locally at the paper company where he was working.

Later he complains about current developers’ lack of attention to detail, with the host theorising that perhaps the faster inner loops today’s developers enjoy make them less likely to show the requisite skill or diligence.

Now, wait before you flood his YouTube video with angry comments – please let me agree, so that you can post angry comments here instead, thus driving engagement.

I want to say, I see where he’s coming from.

A lot of really hard problems have solutions now that didn’t exist when I started out, and weren’t even possible to conceptualise when Cutler started out. You can realistically do TDD now. If you don’t have access to a piece of hardware you need to write drivers for, you could still most likely have your employer buy enough computing power to simulate that hardware with very little effort. You can essentially make it so that you immediately know when you are breaking things as you are typing.

But also… you could save yourself from a majority of problems by taking a bit more care. Like, hey, are you stuck with legacy code with poor test coverage? Well – if that product is an API that is called by another team – just because you have Swagger documentation and the guys can reach you on Slack/Teams, don’t randomly change behaviours in an endpoint so that dependent services break. Why not just make a new endpoint for the new behaviour, as in the sketch below? And – seriously – when you write new code, why can’t at least that new bit be unit tested?
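
To make the new-endpoint point concrete, here is a minimal, hypothetical sketch (ASP.NET Core minimal APIs; the routes and payloads are invented for illustration) where the old route keeps its contract and the new behaviour gets its own route:

    using System;
    using Microsoft.AspNetCore.Builder;

    var builder = WebApplication.CreateBuilder(args);
    var app = builder.Build();

    // The existing endpoint keeps behaving exactly as dependent services expect.
    app.MapGet("/orders/{id}", (int id) => new { Id = id, Format = "legacy" });

    // The new behaviour lives on a new route instead of silently changing the old one.
    app.MapGet("/v2/orders/{id}", (int id) =>
        new { Id = id, Format = "expanded", Lines = Array.Empty<string>() });

    app.Run();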

Even if you still cannot add unit tests, if you had the mindset that you were about to drive far away to test the software live, with only you and your boss present – don’t you think you would review every single change more carefully? Really?

Even TDD avatars Feathers and Beck will acknowledge coming across code that isn’t unit tested but is still navigable and observably correct. You could achieve that without any extra build steps or frameworks. You could just pay attention; it has been done.

Now, at the same time, when we old-timers get excited about the olden days we like to brag about how many hours we worked, and I can tell you from personal experience that when you don’t sleep, accuracy is the first thing to go. So, if you are going to make changes in systems where you have no unit tests or integration tests in place to help you, make sure you give yourself a couple of extra minutes, basically.

The main reason behind this post is to say:
We shouldn’t be luddites who reject modern development practices, but also – if you are a junior dev stuck in a team that writes and maintains legacy code, you can still write defect-free code; you just have to pay more attention and go a bit slower. There is no get-out-of-jail-free card just because the builds are flaky. And if you are struggling and you MUST add unit tests in order to keep high standards for code quality, you could probably refactor your way to acceptable test coverage more easily than you think.

Our luxury as software developers – being able to know well ahead of time exactly how our code will work in production – would be a dream in most disciplines, yet surprisingly few bridges randomly collapse. People do sane things and catastrophes fail to appear.
The legacy project you got saddled with can make you go slower, but it really shouldn’t give you license to deliver new bugs. Don’t internalise the acceptance of lower standards that may become evident in your team, along the lines of “well, when X and Y happen and they refuse to Z, this is the best they’ll get”. Instead, just don’t deliver, and explain why you cannot. If you don’t offer the lower standards, the resulting bugs won’t appear in your code.

n Habits of a successful developer

I thought I was going to write one of those listicles just to get going, as I have posted nothing for half a year. Recent developments have made me take stock and figure out what certain people have taught me while working with them and what I should try to learn from them going forward. I will not cover the basics of TDD, vim vs Emacs, tabs vs spaces or any of that. I will assume you write good tests and create largely correct, well-structured code that your coworkers can understand clearly. I will just address things you can do right now – outside of the actual coding – to be the best you can be.

1. Go home at the end of the day

This isn’t a new lesson, but it is important to note again. Do not give the company any more of your time than you agreed when you signed on. You may love the company and the management, but you are not doing yourself any favours by spending too much time in the office. You may think you get more stuff done, but what I have learned is that you can get more done in 8 hours per day than most people do in 9 or 10, but it takes some hard work and focus. In the other points below I’ll try and identify some of the tricks these people use and see if I can perhaps pick up on those I’ve yet to implement myself.

Another reason I brought up this topic is that I have recently seen people fall into the trap of giving their life to the company. If you own equity and have a chance of a real upside, that may be a trade-off worth making, but if you are a lowly employee – even if you have options rather than stock – it is highly unlikely that the arrangement will pay back the time lost from seeing your family, or even just resting to be fit for fight the next day. The company will never love you back – it cannot.

Even with equity, think about it – you wouldn’t cut any slack for an employee who had worked themselves to the bone and after a while started making serious mistakes. Missed a client meeting due to oversleeping after an all-nighter in the office? Started to create more bugs than they fix? Maybe started to act bitterly towards coworkers due to the asymmetric workload this individual had voluntarily taken upon him- or herself? The problems, and even the discord being sown in the workplace, must eventually be addressed despite the employee having put in enormous hours for the company. I’m saying there is a way to overwork yourself out of a job, which probably nobody in the company wants to go through, and as an employee it is bound to be a bitter experience.

So – given the potential productivity and longevity of people who stay within their normal hours, and the fact that they get to go home and chill with the family – this is the course of action I recommend.

2. Prefer early mornings to evenings for extracurricular activities

This is something I could be better at, but starting out at 5am is extremely effective if you want to do something extracurricular such as blogging or trying out a new language or technology. It is exactly like going out for a run in that a) I haven’t done it very often, but often enough to know that b) the hard bit is just getting started, and c) it is fantastic as you do it and notice how much progress you are making.

3. Never leave a question unasked

When you hammer out the details of a piece of code about to be written, never leave a question unaddressed. Sometimes people hang back and assume that all edge cases are handled and that there are no more loose ends. Don’t be that person – ask.

  1. Do we have everybody here that we need to flesh out this story?
  2. What is the expected outcome of the feature?
  3. What is the expected input?
  4. How will we handle untrusted input?
  5. Security concerns?
  6. Usability?
  7. How do we present errors?
  8. Will the stakeholders present (yes, I was serious about question 1) accept a solution that allows us to write less code?
  9. What is the lowest level at which we can automatically test the acceptance criteria?
  10. How do we deploy the feature?
  11. How do we monitor it?
  12. Do we need to produce any additional deliverables (yes: docs, manuals)?

The biggest way you can save time and get to go home on time is by not making mistakes, and one way to not make mistakes is to know that you are building the right thing the first time. I am not advocating a Big Design Up Front, just a Right-Sized Discussion Just In Time.

4. Be the one that takes notes at design meetings

After asking the right questions, volunteer to do the dirty work, such as updating ticketing systems, to make sure none of the information you are just about to use to write code is lost on the way. Documentation is often a waste, but this bit – the details of the acceptance criteria for the feature or bit of code you are about to write – is actually useful for a while, at least until the code is in production later today or tomorrow.

5. Maintain standards in terms of tooling and infrastructure

Keep your house in order. Don’t let your development machine fall behind on patches, behind on OS versions, onto old development tools or into a state where you or a coworker cannot be immediately productive. I have struggled with this, as out of stubbornness I recently tried to run a Linux desktop in anger. I thought 2016 would be the year of the Linux desktop, and in a lot of ways it was. Debian feels very natural for a Windows user. For Chef, JavaScript, even some PowerShell, I did fine and was productive. However, since most of my work is in C# on .NET, I had to employ all kinds of other ways of running Visual Studio – on VMs, on a laptop, or wherever – which was annoying to everybody. Thankfully, once I lost patience, and thanks to my efforts in scripting, I had a new Windows environment running directly on the metal set up in hours.

Do not let broken builds stay broken. If tests are flaky, address that, either by replacing the tests with more robust ones or by removing them completely – but do put yourself in a position where any build failure is probably legitimate and is resolved immediately.

Constantly challenge the automation – do you need it? Is it over-engineered? Does it cover as much as it needs? Is it flexible or is it brittle?

6. Be helpful

Be ready to talk to anybody in the company who has questions about what you do. Pretend you have boundless energy (which, to be fair, you probably do since you now go home on time every day). That includes parts of the organisation that for historical reasons don’t have much faith in development/engineering (yes, that happens everywhere). Be there; answer stupid questions, insinuating questions and honest questions with a smile, or at least a reasonable facsimile. Try to note down any specific complaints and welcome your critics to sit in when you elaborate stories regarding their favourite topic in the future. If you are prepared to do some internal promotion you will be trusted in the rest of the company and liked by your colleagues, who probably avoid those people like the plague.

Don’t arm yourself with headphones and plug away, leaving your more junior colleagues stranded if they have any questions – the total productivity of the team isn’t helped by you being in the zone if at the same time three people are struggling with something you could have spotted right away. If you really do need to be alone to solve something, book a meeting room or something and get out of the open-plan office/team room.

If a couple of people have questions about anything you are doing, offer to do a brown bag on it, send out invites and see what the traction is. If you create a culture of curiosity and willingness to learn, the company will make money and everybody will appreciate your efforts. I have long known that these things are useful and that people find them interesting, but it is only the brown bags or lunch & learn sessions that actually get scheduled and actually happen that are beneficial; the ones you ponder quietly to yourself but never set up are worthless. Take action.

7. Interact with peers outside of your company

Now, this seems to fly in the face of the “go home at the end of the day” bit, but this benefits mostly you as a developer and only secondarily your employer. There are meetups and user groups in loads of places and you should find some and go there. This is an item where I really need to improve. Especially if you are involved in a technology that is evolving quickly, as the language Elixir has been over the last couple of years, swapping war stories to the extent your NDAs allow can be quite useful. Any new discovery you share can benefit the local community, and in turn you can have some of your own queries addressed. It is a very good way to figure out how much of an advertised technology actually gets used, and in what way, and can thus help you correctly judge which new technologies are worth looking into to solve business problems at work.

Right, so for this listicle n appears to equal seven. Do you have any more traits of successful developers you have noticed that you would recommend, or just stuff that you do that is awesome and that we all are fools if we don’t emulate right now? Feel free to share.

Salvation through a bad resource

A colleague and I have been struggling for some time to deploy an IIS website correctly using Chef. As you may be aware, Chef is used to describe the desired state of the configuration of a system through the use of resources, which know how to bring parts of the system into the desired state – if, for some reason, they should not already be there – during a process called convergence.

Chef has a daemon (or Service, as it is called in the civilised world) that continually ensures that the system is configured in accordance with the desired state. If the desired state changes, the system is brought into line automagically.

As usual, what works nicely and neatly in Unix-like operating systems requires volumes of eloquent code literature, or pulp fiction rather, to implement on Windows, because things are different here.

IIS websites are configured with a file called web.config. When this file changes, the website restarts (the application domain does, to be specific). Since the Chef Windows service runs chef-client at regular intervals, it is imperative that chef-client doesn’t falsely conclude that the configuration needs to be overwritten every time it runs, as that would be quite disruptive to any would-be users of the application. Now, the restart-on-change behaviour can be disabled, but that is not the way things should have to be.

A common approach on Windows is to disable the Chef service and just run chef-client manually when you know you want to deploy things, but that isn’t right either, and it takes a lot away from the basic features and the profound magic of Chef. Anyway, this means we can’t keep tricking Chef into believing that things have changed when they really haven’t, because that is disruptive and bad.

So, like I mentioned earlier – IIS websites are configured with a file called web.config. Since everybody who has ever encountered an IIS website is aware of that, there is no chance that an evildoer won’t know to look for the connection strings to the database in that very file. To mitigate this well-knownness, or at least make the evildoer first leverage a privilege escalation vulnerability, there is a built-in feature that allows an administrator to encrypt the file so that the lowly peon account that the website is executing as doesn’t have the right to read it. For obvious reasons this encryption is tied to the local machine, so you can’t just copy the file to a different machine where you happen to be admin to decrypt it. This does, however, mean that you have to first template the file to a temporary location and then check whether the output of the Chef template – the latest and greatest of website configuration – is actually any different from what was there before.

It took us ages to figure out that what we needed to do was to write our web.config template exactly as it looks once it has been decrypted by Windows, and then start our proceedings by decrypting the production web.config into a temporary location. We then set up the Chef template resource to try to overwrite the temporary file with new values and, if there has been a change, use notifications to trigger a ruby_block that normally doesn’t execute, but when triggered by the template resource both encrypts the updated config and copies it across to prod.

Result!

But wait… The temporary file has to be deleted. It has highly sensitive information (I would like to flatter myself) and shouldn’t even have made it to disk in its clear-text form, and now it’s still there waiting to be read by the evildoer.

Using a ruby_block resource or a file resource to delete the temporary file causes Chef to record this as a change, and a change that isn’t a change is bad. Or at least misleading in this case.

Enter colleague number two: “just make a bad resource that doesn’t use converge_by”.

Of course! We write a resource that takes a path and deletes it using pure Ruby code, but it “forgets” to tell Chef that a file was deleted, so Chef will update the configuration when it should but will gladly report 0 resources updated at the end of a run where nothing has changed. Beautiful!

DONE. Week-end. I’m off.

 

How to hire devs for a small project

Do you have a smallish bit of programming type work you need done? You are thinking of either hiring somebody or getting contractors in? There are several pitfalls that you could experience and not all of them are the fault of the people you hire. I am here to set your expectations.

Know what type of people you hire

By the way, I will use “hire” in this post to mean either employ or contract – you choose.

Historically, programming has been done by programmers, or software development engineers. These would throw code together and tell you it was finished – sometimes within the deadline and sometimes not. You would then have QA people write test specifications and test the software, producing a list of bugs that the programmers would then have to fix. If you had a small budget you would forego the QA and try to test it yourself. This is not the way to do things.

If you insist on hiring an old-fashioned programmer, make sure you also hire an old-fashioned QA because otherwise the software you are making will never work. This project of yours is still probably doomed to be over budget.

A better way would be to hire a pair of modern senior agile developers who have learned the skills involved in QA work and can design correctness into the system by clearly defining acceptance criteria and handling edge cases up front. You can advertise for an agile tester or something if no developer you find admits to being great at testing, but ideally you are looking for a pair of developers that are close to the complete package.

Yes, by the way, hire at least two developers. As The Mythical Man-Month says, you can’t double project performance by doubling the number of resources, but a pair of developers is so much more effective than a lone developer that it is definitely worth the upfront cost.

Spend time

You will need to be on hand to look at the software as it is being constructed and make decisions on what you want the product to do. Your developers may give you rough estimates (S/M/L or similar) of how much effort certain things are and suggest solutions that achieve your objectives in an effective way, but you need to be available or else development will halt. This is not a fire-and-forget thing, unless you can fully authorise somebody to be product owner for you and place them next to the developers full time.

The thing you are doing, owning the product, is crucial to the success of your venture. I will post a link below to reading material as recommended by Stephen Carratt, a product owner I work with on a daily basis, that helps you define the work in small, clearly defined chunks that your developers call User Stories.

Work incrementally

Start out with small things and add features iteratively. Finish core functionality first and always add only the most important thing next. Don’t get me wrong, do have a roadmap of what you want the piece of software to do so you know where you eventually want to end up, but avoid spending time on specifying details right away. Cheaper or better options will become apparent as you go along. You may be able to release working pre-releases to a subset of the intended audience to validate your assumptions and adapt accordingly.

Read up on working in a modern way

To help you create user stories that will help organise the work Steve recommends User Story Mapping (Patton).

Read about Lean Software Development (Mary Poppendieck) and Agile Testing (Crispin, Gregory) to know why the old way of making overly precise specifications (of either tests or code) ahead of time is wasteful and how working closely with the development team is more effective in making sure the right thing is built.

Also, to understand why your developers keep banging on about delivery automation, look at Continuous Delivery (Humble, Farley), although there are fictionalised books that illustrate all these benefits as well, such as The Phoenix Project (Kim, Behr, Spafford).

But hey – when am I done? And what about maintenance?

Exactly! Why did you develop this product? To sell it? Is it selling? To help sell something else? Is it working? Hopefully you are making money or at least building value. If you can make the product better in a cost-effective way – do so; if you can’t – stop. Purely budgetary concerns rule here, but also consider the startup costs of dismissing your first developers and then shortly afterwards getting two new ones in to evolve the product, who will first need to spend time learning the codebase from scratch before they can work at full speed.

Tech stuff for non-technical people

This is not a deeply technical post. Even less technical than my other ones. I write this inspired by a conversation on Twitter where I got asked about a few fairly advanced pieces of technical software by a clever but not-that-deeply technical person. It turned out he needed access to a website to do his human messaging thing and what he was given was some technical mumbo jumbo about git and gerrit. Questions arose. Of course. I tried to answer some of the questions on Twitter but the 140 char limit was only one of the impediments I faced in trying to provide good advice. So, I’m writing this so I can link to it in the future if I need to answer similar questions.

I myself am an ageing programmer, so I know a bunch of stuff. Some that is true, some that was true and some that ought to be true. I am trying to not lie but there may be inaccuracies because I am lazy and because reality changes while blog posts remain the same because, again – lazy.

What is Git?

Short answer: A distributed version control system.

Longer answer: When you write code you fairly quickly get fed up with keeping track of files with code in them. The whole cool_code.latest24.txt naming scheme for keeping old stuff around while you make changes quickly goes out the window, and you realise there should be a better way. There is. With a version control system, another piece of software keeps track of all the changes you have made to the bunch of files that make up your website/app/program/set of programs, and lets you compare versions, find the change that introduced a nasty bug, etc. With Git you have all that information on your local computer and can work very quickly, and if you need to store your work somewhere else – on a server, or just with a friend who has a copy of the same bunch of files (repository) – only the changes you have made get sent across to that remote repository.

There are competing systems to Git that do almost the same thing (Mercurial is also distributed) and then a bunch of systems that instead are centralised and only let you see one version at a time, where all the comparison/search/etc. happens across the network. Subversion (SVN) and CVS are common ones. You also have TFS, Perforce and a few others.

What is a build server?

Short answer: a piece of software that watches over a repository and tries to build the source code and run automated tests when source code is modified.

Longer answer:

The most popular way for a developer to dismiss a user looking for help is to say “That works on my machine”. To be fair, it is not always evident that you have forgotten to add a code file to your version control system. This causes problems for other people trying to use your changes, as the software is incomplete and probably not working as a result of the missing files. Having an automated way of checking that the repository is always in a state where you can get the latest version of the code in development, build it and expect it to work is very helpful. It may seem obvious to you, but it was not a given until fairly recently. Any decent group of developers will make sure that the build server at least has a way of deploying the product, but ideally deployment should happen automatically. This does require that the developers are diligent with their automated tests and that the deployment automation is well designed, so that rare problems can be quickly rolled back.

What is Gerrit?

Short answer: I’ll let Wikipedia answer this one as I have managed to avoid using it myself despite a few close calls. It seems to be a code review and issue tracking tool that can be used to manage and approve changes to software and integrate with the build server.

Long answer:

Before accepting changes to source code, maintainers of large systems will look at submitted changes first and see if it is correct and in keeping with the programming style that the rest of the system uses. Despite code being written “for the computer”, maybe the most important aspect of source code quality and style is that other contributors understand the code and can find their way. To keep track of feature suggestions, bug reports and the submitted bits of code that pertain to them, software systems exist that can help out here too.

I don’t know, but I would think I could get most of that functionality with a paid GitHub subscription. Yes, Gerrit is free, but unless you are technical and want to pet a server, you will have to hire somebody to set up and maintain the (virtual) machine where you are running Gerrit.

Where do I put my website?

Short answer: I don’t know, the possibilities are endless today. Don’t pay a lot of money.

Longer answer: Amazon decided, a long time ago, to sell a lot of books. They figured that their IT infrastructure had to be easy to extend, as they were counting on having to buy a ton of servers every month, so their IT stuff had to just work and automatically use the new computers without a bunch of installing, configuration and the like. They also didn’t like that the most heavy-duty servers cost an arm and a leg, and wanted to combine a stack of cheaper computers and pool those resources to get the same horsepower at a lower cost. They built a bunch of very clever programs that provided a way to create virtual servers on demand that in actual fact were running on these cheap “commodity hardware” machines, and they created a way to buy disk space as if it were electricity: you paid per stored megabyte rather than having to buy an entire hard drive, etc. This turned out to be very popular and they started selling their surplus capacity to end users, and all of a sudden you could rent computing capacity rather than making huge capital investments up front. The cloud era was here.

Sadly, there were already companies out there that owned a bunch of servers – the big-iron expensive ones, even – and sold that computing capacity to end users: your neighbourhood ISP, Internet Service Provider, or web hotel.

Predictably, the ISPs aren’t doing too well. They have niched themselves into providing “control panels” that make server administration slightly less complicated, but the actual web hosting aspect is near free with cloud providers, so for almost all your website needs, look no further than Amazon or Microsoft Azure, to name just two. Many of these providers can hook into your source control system and request a fresh copy of your website as soon as your build server says the tests have run and everything is OK. Kids today take that stuff for granted, but I still think it’s magical.

Don’t forget this important caveat: your friendly neighbourhood ISP may occasionally answer the phone. If you want to talk to a human, that is a lot easier than trying to get an MS or Amazon person on the phone who can actually help you.

UTF-8

Having grown up in a society evolved beyond the confines of 7-bit ASCII and lived through the nightmare of codepages as well as cursed the illiterate that have so little to say they can manage with 26 letters in their alphabet, I was pleased when I read Joel Spolsky’s tutorial on Unicode. It was a relief – finally somebody understood.

Years passed and I thought I knew now how to do things right. And then I had to do Windows C in anger and was lost in the jungle of wchar_t and TCHAR and didn’t know where to turn.

Finally I found this resource here:

http://utf8everywhere.org/

And the strategies outlined to deal with UTF-8 in Windows are clear:

  • Define UNICODE and _UNICODE
  • Don’t use wchar_t or TCHAR or any of the associated macros. Always assume std::string and char * are UTF-8. Call the wide Windows APIs and use Boost.Nowide or similar to widen the characters going in and narrow them coming out.
  • Never produce any text that isn’t UTF-8.

Do note, however, that the observation that Windows does not support true UTF-16 is incorrect, as this was fixed before Windows 7.

Øredev 2013 – it is drawing near

Since I last wrote about our most excellent conference Øredev, there have been a few announcements. Go check out the program for all details as it is now time for you to register if you have not already done so.

After the Build conference, the key takeaways, in my most humble opinion, were vastly improved performance and features in multi-monitor, multi-DPI support in XAML in Windows 8.1, as well as the enormous paradigm shift in ASP.NET 4.5.1, which unifies all view engines and paradigms into one ASP.NET and will help users cherry-pick the things they need to create apps that leverage everything that comprises ASP.NET: WebForms, MVC, SignalR and Web API.

To demonstrate those things from Build, Mads Kristensen, Sr. Program Manager on the Web Platforms & Tools team, will grace us with his presence and do a talk entitled What’s new in Visual Studio 2013 for Web Developers, while one of our old favourites, Tess Ferrandez will return to do a session called What is new in XAML for Windows 8.1.

We will also, as you may have read on the Internets, feature keynotes from Anna Beatrice Scott, Randall Munroe and Jens Bergensten, who have been added to the lineup.

In other words, we are pretty excited about our conference, and believe you would be too, so get started with your reservations.

Mail server and collaborative calendar

I was thinking, in light of the fact that all our data, including this blog, is being scrutinized by foreign (to me and most internet users) powers, that maybe one should try to replace Google Apps and Outlook.com, well Google Apps really. The problem is that Google Apps is pretty damn useful.

So what does one need? I have no money to spend, so it has to be free.

One needs a full-featured SMTP server and a good web interface with a simple design that also renders well on mobile, in which you can search through (and find!) email and appointments. Some would argue that you need chat and video conferencing as well, and I guess one where neither the Chinese nor the US military is also on the call would be preferable, but I can live without it.

I cannot, however, live without civilized charset support. As in, working iso-8859-1 or something proper that can display the letters of honor and might, aka åäö. It would also not seem like a grown-up solution if it wasn’t trivial to add encryption and/or signing of e-mails.

You also need shared calendars and the option of reserving common resources. Not perhaps in the home, but the very first thing you do when you use shared calendars in a company is that you start booking things, such as conference rooms, portable projectors et c.

The user catalog needs to be easily managed. You need folders/labels of some kind and filters to manage the flow of e-mail.

I also, probably, need a way to access my e-mail via IMAP.

So the question is, how much of this do I have to build? How much exists in Linux? Does it look decent or does it look Linuxy? By decent I mean Web 2.0-ish, as opposed to Linuxy or Windowsy, such as Outlook Web Access.

I’d welcome any suggestions. I am prepared to code, albeit in C#/mono, but I would love to use as much Linux OSS as is available, as I probably suck at writing this kind of stuff compared to those who have already done so.

So far I have found the following:
http://www.howtoforge.com/perfect-server-ubuntu-13.04-nginx-bind-dovecot-ispconfig-3
But instead of an ISP tool, a custom website? Perhaps Solr or Lucene indexing each user’s maildir. But what about a calendar?

Why not just install Citadel?
This requires testing.

.NET 4.5.1

To continue the last post about the //BUILD/ Conference, what else is new? What should you look at now? A few things stand out and I will go through them in order.

See Function Return values in debugger

For F# or any functional language, being able to see the return values of functions when you debug your code is pretty crucial, or else you are sort of robbed of the idea of a debugger. Of course, you do have less code and fewer bugs per user story in functional languages vs imperative languages in general, so debugging probably isn’t as important, but the fact that you had to change your code to be able to double-check return values must have been a huge pain. The point is, now in VS2013 Preview, when you step past a return statement, you will see the return value in the Autos window as well as being able to access it in the Immediate window as $ReturnValue. I know I missed it in C#; it must have been super annoying for our functional friends. The takeaways are YAY and – try F# (still near the top of my to-do list).
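
A minimal sketch of where this shows up (the method and values below are made up for illustration):

    using System;

    class Program
    {
        static int ParseAnswer(string raw)
        {
            return int.Parse(raw) * 2;
        }

        static void Main()
        {
            // Step over the call below in VS2013: the value ParseAnswer returned
            // appears in the Autos window, and can be read as $ReturnValue in the
            // Immediate window, without assigning it to a temporary variable first.
            Console.WriteLine(ParseAnswer("21"));
        }
    }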

64-bit Edit & Continue

ORLY? I will be happy to see the 64-bit dialog box go, but would have been happy to just have the debugging stop immediately when somebody tries to type in a code window. That probably indicates that the programmer knows what’s wrong and needs to make changes, not that they would like to go full VB3. To me it’s just embarrassing to see VS handle 64-bit as if it were magic. There should be no difference for a real programmer between handling 64-bit and 32-bit (beyond the obvious, but that’s a five-minute conversation among stakeholders over coffee), and the fact that the segregation remained is a bit of an embarrassment.

ADO.NET Connection Resiliency

EF/ADO.NET connection resiliency. When you use SQL Database in Windows Azure, you will notice that you only get a brief window of connectivity, so as not to waste server resources. This is brilliant and should have been the default on your vanilla SQL Server back in the nineties as well – it would have given us less sucky client/server apps back in the day – but the point is that the folks over at ADO.NET now gave us connection resiliency, which means your connection auto-resumes when you remember to access your database again. This means simpler code in your app, and the database abstraction layer also abstracts connection failures, the way $HigherPower intended.
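
As a rough sketch of what this looks like from the application side (the server name, credentials and retry values are made up; ConnectRetryCount and ConnectRetryInterval are the connection-string knobs that, as far as I can tell, .NET 4.5.1 added for this):

    using System.Data.SqlClient;

    class Program
    {
        static void Main()
        {
            // Hypothetical connection; the retry settings govern how SqlClient
            // re-establishes an idle connection that the server has dropped.
            var builder = new SqlConnectionStringBuilder
            {
                DataSource = "example.database.windows.net",
                InitialCatalog = "MyDb",
                UserID = "app_user",
                Password = "not-a-real-password",
                ConnectRetryCount = 3,     // reconnect attempts before giving up
                ConnectRetryInterval = 10  // seconds between attempts
            };

            using (var conn = new SqlConnection(builder.ConnectionString))
            {
                conn.Open(); // a dropped idle connection is resumed transparently
            }
        }
    }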

Hosting and Runtime Features

IIS App Suspension – another Azure-derived feature is that a web app is allowed to keep running but is suspended in its current state when it has been idle for a while. This feature is available in IIS 8.5, Windows Server 2012 R2 Preview.

Large Object Heap compaction – the Large Object Heap can now be compacted as part of a generation 2 GC run, i.e. a full stop-the-world GC, so use it carefully. You will need to use a separate API to invoke this feature, but if you are having problems with the Large Object Heap not being compacted, you are probably aware of that and are prepared to add a line of code.
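
That line of code looks roughly like this – a sketch using the GCSettings API that shipped with .NET 4.5.1:

    using System;
    using System.Runtime;

    class Program
    {
        static void Main()
        {
            // Ask the GC to compact the Large Object Heap on the next full,
            // blocking generation 2 collection. The setting resets to Default
            // once that collection has happened.
            GCSettings.LargeObjectHeapCompactionMode =
                GCLargeObjectHeapCompactionMode.CompactOnce;
            GC.Collect();
        }
    }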

New runtime features that will affect you without you doing anything include automatic multi-core JIT support to make app startup faster, including for ASP.NET apps, and also the fact that you will get Windows Updates of the .NET Framework with precompiled native images. In previous versions of Windows, your .NET apps would JIT mscorlib.dll etc. as you ran them whenever the dll was different from the cached binary installed with Windows, VS or a service pack.

Cadence – pleasing both camps

A huge deal is that the .NET Framework will go full Ubuntu-style versioning, with many small incremental changes and big-bang Long Term Support versions made for the enterprise. They wouldn’t call it “Ubuntu-style versioning”, so I did it for them. They have implemented this by deploying new features as NuGet packages continuously, while collating mature features and including them in big releases. This is pretty significant and a great decision.

WCF 4.0

As the tour is now concluded and we are back to business as usual, I figured it is time to post some promised source code and PowerPoint material. If you just joined us, the final Jayway seminar of the spring season was on Windows Identity Foundation and a short roundup of new features in WCF 4.0. Stefan Severin MC’d the WIF section while I did the presentation on WCF 4.0.

So what IS up with WCF 4.0? My three main points were the following:

  1. Simplified Configuration
  2. Full implementation of WS Discovery
  3. A turn-key RoutingService

My full presentation is attached and I also submit some source code, largely based on Aaron Skonnard’s excellent MSDN article, with minor modifications to show the difference between WCF 3.5 and WCF 4 in terms of configuration.
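
To give a flavour of the simplified configuration, here is a minimal, hypothetical self-hosting sketch: in WCF 4.0 the ServiceHost adds default endpoints and bindings when you have not configured any, so the config file can stay nearly empty.

    using System;
    using System.ServiceModel;

    [ServiceContract]
    public interface ICalculator
    {
        [OperationContract]
        int Add(int a, int b);
    }

    public class Calculator : ICalculator
    {
        public int Add(int a, int b) { return a + b; }
    }

    class Program
    {
        static void Main()
        {
            // No <endpoint> elements needed: WCF 4.0 adds default endpoints
            // per base address and contract when Open() is called.
            using (var host = new ServiceHost(typeof(Calculator),
                new Uri("http://localhost:8080/calculator")))
            {
                host.Open();
                foreach (var ep in host.Description.Endpoints)
                    Console.WriteLine("{0} via {1}", ep.Address, ep.Binding.Name);
                Console.ReadLine();
            }
        }
    }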

References:

Presentation

Source Code

Aaron Skonnard’s introduction to WCF 4.0