Category Archives: Programming

Techie programming type posts.

How Do I Unit Test Database Access?

If, as a .Net developer, you’re serious about making sure your code is properly tested, one of the biggest problem areas has always been database code. Even with the more widespread adoption of Object Relational Mapping (ORM) frameworks that abstract away some of the complexity of database access, unit testing code that accesses the database is still difficult.

Over the years developers have come up with various strategies for unit testing database code, and at various times I’ve seen projects that use all of them. I’ve also seen examples of large projects where several of these techniques were used in different parts of the code base.

The simplest technique – which isn’t really a technique as such – is just to test with the real database. Often this will be a special instance of the database created by the test run, into which test data is loaded. The biggest argument against this idea is that it isn’t really a unit test and should more correctly be considered an integration test. The biggest practical problem is that using the real database is pretty slow, and that often leads to compromises to allow the test suite to run in a reasonable time frame, either by reducing the number of tests or by not starting each test from a clean database configuration. Reducing the tests increases the risk that important conditions may not be properly tested, whilst not cleaning the database can lead to unexpected interactions between different tests. However, in situations where you have complex logic in stored procedures in the database, sometimes this is the only way you can test it.

If you are practising Test Driven Development, where you are running unit tests repeatedly, having a unit test suite that takes even just minutes to run is a real problem.

A step on from using the real database is to use an alternative that is faster, for example an in memory database. This idea has come to more prominence recently as Microsoft have added an in memory database provider to the latest version of their ORM, Entity Framework Core, although third-party in memory options such as Effort have been around for a while. Both the official offering and the third-party options are drop-in providers that work with the same Entity Framework code but store the data in memory instead. Purists will argue that even with an in memory provider this is still really an integration test rather than a unit test: you are merely replacing the dependent database rather than removing it. However, to a software developer it can be an attractive option compared to the effort required in stubbing, mocking or faking a full ADO.Net provider. The other criticism of this technique is that because a different type of database is being used from the live system, there is the risk of behavioural differences between the two. Having said that, since Microsoft are highlighting testing as a benefit of their new in memory provider, hopefully those will be few and far between.
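
To give a flavour of how little ceremony is involved, here is a minimal sketch of an xUnit test using the Entity Framework Core in memory provider – the OrdersContext and Order types are stand-ins for your own context and entities rather than anything from a real project:

using System.Linq;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class Order
{
    public int Id { get; set; }
    public string Customer { get; set; }
}

public class OrdersContext : DbContext
{
    public OrdersContext(DbContextOptions<OrdersContext> options) : base(options) { }
    public DbSet<Order> Orders { get; set; }
}

public class OrderQueryTests
{
    [Fact]
    public void Returns_the_seeded_order()
    {
        // Each test uses its own named in memory store so tests don't interact
        var options = new DbContextOptionsBuilder<OrdersContext>()
            .UseInMemoryDatabase("Returns_the_seeded_order")
            .Options;

        // Arrange: seed the store through one context instance
        using (var context = new OrdersContext(options))
        {
            context.Orders.Add(new Order { Id = 1, Customer = "Acme" });
            context.SaveChanges();
        }

        // Act / Assert: query through a fresh instance, as the real code would
        using (var context = new OrdersContext(options))
        {
            Assert.Equal("Acme", context.Orders.Single(o => o.Id == 1).Customer);
        }
    }
}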

Moving on from using an in memory database, the next option, at least until Entity Framework 6 came along, was to build a fake context object that could be used for testing. I’m not going to go into a detailed explanation of how to do this, as there are a number of good tutorials around, including this one from a Microsoft employee. The basic idea is that you construct a complete fake context object that gets injected into the code being tested instead of the real database context. Although you generally only need to construct the fake context once, it is comparatively a lot of code, so it is pretty obvious why developers are delighted by the in memory provider included in Entity Framework Core. If you don’t need the full abilities of the context, you do have the option of only partially implementing the fake. The main criticism of using fakes is that, again, you’re running the risk of behavioural differences. This time it is because you’re using a different type of context: under the covers your tests are using the Microsoft LINQ to Objects classes to talk to the fake object, whereas the real database code will be using LINQ to Entities. Put simply, whilst the syntax will be the same, you’re not exercising the actual database access code you will be using in the live system; you’re relying on LINQ to Objects and LINQ to Entities behaving in a similar fashion.

With the arrival of Entity Framework 6, there were some changes that made it a lot easier to use a mocking framework instead of fake objects. Microsoft have a good guide to testing using a mocking framework in their Entity Framework documentation, alongside a revised guide to using a fake object as a test double. The amount of code to fully mock a context is similar to a faked context, but if you only need part of the functionality of the context in your tests, you need only mock the parts you actually use. As with any mocked object, it’s important that your mock behaves the same as the real object you’re trying to simulate, and that can be pretty complex with an object like a database context. A particularly problematic area is the behaviour of the SaveChanges functionality, where some fairly subtle bugs can creep in – code can pass a test but not work in production if, for example, the test does no more than expect the SaveChanges method to be called.
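
As a sketch along the lines of that Microsoft guide, using Moq and xUnit against a hypothetical BloggingContext (note the DbSet property has to be virtual so the mock can intercept it):

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using Moq;
using Xunit;

public class Blog
{
    public string Name { get; set; }
}

public class BloggingContext : DbContext
{
    // Must be virtual so the mocking framework can override it
    public virtual DbSet<Blog> Blogs { get; set; }
}

public class BlogQueryTests
{
    [Fact]
    public void Returns_blogs_ordered_by_name()
    {
        var data = new List<Blog>
        {
            new Blog { Name = "BBB" },
            new Blog { Name = "AAA" },
        }.AsQueryable();

        // Wire the mocked DbSet up to the in memory list so LINQ queries work
        var mockSet = new Mock<DbSet<Blog>>();
        mockSet.As<IQueryable<Blog>>().Setup(m => m.Provider).Returns(data.Provider);
        mockSet.As<IQueryable<Blog>>().Setup(m => m.Expression).Returns(data.Expression);
        mockSet.As<IQueryable<Blog>>().Setup(m => m.ElementType).Returns(data.ElementType);
        mockSet.As<IQueryable<Blog>>().Setup(m => m.GetEnumerator()).Returns(() => data.GetEnumerator());

        var mockContext = new Mock<BloggingContext>();
        mockContext.Setup(c => c.Blogs).Returns(mockSet.Object);

        // In a real test this query would live inside the code under test
        var names = mockContext.Object.Blogs.OrderBy(b => b.Name).Select(b => b.Name).ToList();

        Assert.Equal(new[] { "AAA", "BBB" }, names);
    }
}

Note that a mock wired up like this only simulates queries, so a test that merely verifies SaveChanges was called proves very little about what would actually be written to the database.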

That takes us on to a collection of other techniques that are more about isolating the database access code to make it easier to test.

The long-standing way to do this is based around the Repository and Unit of Work patterns. There are a variety of ways you can implement these; for example, you don’t necessarily need the Unit of Work and could just use the Repository pattern alone. There is a good Microsoft tutorial on the pattern using Entity Framework 5. The basic idea is to wrap the database code in the repository, and then mock the repository for subsequent tests. The database code in the repository consists of simple create, read, update and delete (CRUD) functions. Whilst this was a common pattern before Entity Framework, and persisted with early versions of Entity Framework that were difficult to mock or fake, it has largely gone out of fashion, not least because the Entity Framework DbSet is itself an implementation of the same repository pattern. It is unnecessary to create an additional repository layer purely for mocking now that you can mock or fake DbSet itself.
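
For completeness, the shape of the pattern, and why it made testing so easy, is roughly this – all the names here are purely illustrative:

using Moq;
using Xunit;

public class Customer
{
    public int Id { get; set; }
    public string Name { get; set; }
}

// The repository wraps the database context behind simple CRUD style calls
public interface ICustomerRepository
{
    Customer GetById(int id);
}

// Business logic depends only on the abstraction...
public class CustomerGreeter
{
    private readonly ICustomerRepository _repository;

    public CustomerGreeter(ICustomerRepository repository)
    {
        _repository = repository;
    }

    public string Greet(int id)
    {
        return "Hello " + _repository.GetById(id).Name;
    }
}

public class CustomerGreeterTests
{
    [Fact]
    public void Greets_the_customer_by_name()
    {
        // ...so the test never needs a database at all
        var repository = new Mock<ICustomerRepository>();
        repository.Setup(r => r.GetById(42)).Returns(new Customer { Id = 42, Name = "Jo" });

        Assert.Equal("Hello Jo", new CustomerGreeter(repository.Object).Greet(42));
    }
}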

The other method that has been used for a long while is a traditional data access layer. The actual database code is abstracted behind a series of method calls that take parameters and return data, and those methods can be easily mocked. Rather than being generic, the code inside each of those methods is written for a particular query, and whilst that will be fairly simple database code that can be easily tested, there will be a single function for each query. There are good ways and bad ways of doing this: I have seen projects with vast library classes containing all of the queries used by the business logic – a bit of a maintenance nightmare at times. A better design, and one more in keeping with SOLID principles, is to have smaller classes more closely related to how the queries are being used, as sketched below. Either way there is a sizeable overhead in maintaining lots of query functions in a data access layer.
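
For example, rather than one giant data access class, a small query class scoped to a single feature might look something like this (ProjectsContext and the entity are invented names for the sake of the sketch):

using System.Collections.Generic;
using System.Linq;

public class StaffMember
{
    public int ProjectId { get; set; }
    public string Name { get; set; }
}

// One small, focused query class per area of the application, rather than a
// single vast library class full of every query the business logic uses.
public interface IProjectStaffQueries
{
    IReadOnlyList<StaffMember> StaffForProject(int projectId);
}

public class ProjectStaffQueries : IProjectStaffQueries
{
    private readonly ProjectsContext _context; // hypothetical EF context

    public ProjectStaffQueries(ProjectsContext context)
    {
        _context = context;
    }

    public IReadOnlyList<StaffMember> StaffForProject(int projectId)
    {
        return _context.StaffMembers
            .Where(s => s.ProjectId == projectId)
            .ToList();
    }
}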

Data access layers have also started to go out of fashion; however, some of the principles behind them can still be applied. The single responsibility principle part of SOLID can be interpreted as suggesting that even if you don’t have a formal data access layer, you shouldn’t be putting database access code in the same method as business logic. The business logic should take and return generic collections, rather than retrieving data and working directly on DbSets all in one method. You really shouldn’t have one method that queries data, manipulates it and writes it back. That application of the single responsibility principle gives the separation of concerns that makes your code easier to test. The business logic can be tested using simple unit tests, rather than complicated tests that prime an in memory database or mock, call a function and then examine database contents to see what has happened. The database access methods are in turn a lot simpler, often just retrieving data, and can easily be supported by a simple mock of the part of the database context being used – a full blown in memory database, fake or mock context isn’t needed.
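
As a rough illustration of that separation (the entity, the calculator and the BillingContext are all invented for the example):

using System.Collections.Generic;
using System.Linq;

public class Invoice
{
    public int CustomerId { get; set; }
    public decimal Total { get; set; }
    public bool Overdue { get; set; }
}

// Pure business logic: it takes and returns plain collections, so a unit test
// can exercise every edge case with an ordinary List<Invoice> and no database.
public static class InvoiceCalculator
{
    public static decimal OverdueTotal(IEnumerable<Invoice> invoices) =>
        invoices.Where(i => i.Overdue).Sum(i => i.Total);
}

// The database access side stays trivially thin: fetch the data, then delegate.
public class OverdueReportService
{
    private readonly BillingContext _context; // hypothetical EF context

    public OverdueReportService(BillingContext context)
    {
        _context = context;
    }

    public decimal OverdueTotalFor(int customerId) =>
        InvoiceCalculator.OverdueTotal(
            _context.Invoices.Where(i => i.CustomerId == customerId).ToList());
}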

In conclusion, unit testing code that accesses a database has always been difficult, and whilst some of the problems have been addressed over the years, it is still not easy. However, if you are following good design practices such as DRY and SOLID, the occasions when the only way to test something is via a database context should be fairly rare. If you find that you need to do that, it is well worth looking again at whether you have inadvertently violated the single responsibility principle. Even though the advent of the in memory database makes database context based testing easier, that doesn’t mean you should be using it everywhere. A simple unit test of a loosely coupled method will always be faster than testing a more complex method, even with an in memory database. It is well worth considering whether your design would be improved by not coupling your business logic directly to your database access code.

Can You Just Take a Look at this Legacy Code?

As a programmer there are a number of books which people will tell you are must read books for any professional – which do change over time as programming techniques evolve. However the books are fairly consistent in that they all tend to be written from the point of view of a green field system, starting from first principles, ensuring you build a maintainable system.

But is that realistic? You might be lucky and get in at the beginning of a brand new startup, or you could land a job at a consultancy where you’re always writing bespoke code, but for most programmers an awful lot of their career will be dealing with the joys of legacy code.

It may be that you come into an established company with many years of development and thousands of lines of code debt and changing technologies.

Alternatively you could be handed the thing programmers often dread: the “business developed application”. Often these are mired in corporate politics as well, with strained relations between the business area that developed the application and the IT department. Indeed, in one company I worked for there was a semi-secret development team in one part of the business, formed as a result of the IT department saying no one too many times! In most cases these business developed applications are produced by people whose strength is in understanding how the business works but who are inexperienced as developers, which often produces a double hit of problems: the business logic is usually poorly documented, and the code is also of poor quality.

Other examples I’ve come across are prototype systems that have almost accidentally ended up as critical systems, and, something that happens surprisingly often, a big company taking on responsibility for a third party product either because it doesn’t want to upgrade to a supported version or because the third party company is abandoning the product altogether.

The common factor in all of these is that you’re taking on a codebase that is less than ideal, so all these coding books that assume you’re starting from scratch aren’t overly useful. All the benefits of test driven development protecting you when you make changes are really not much good when you have incomplete or totally missing tests. It’s incredibly difficult to find your way around a badly structured code base if you’re used to textbook structures and accurate documentation.

What do you do? Edit and pray it works? Rewrite the whole system from scratch?

All of which brings me back to where I started, and the excellent Working Effectively with Legacy Code by Michael Feathers. The book starts from the entirely pragmatic position that you are going to be working on dodgy code a lot of the time, and if you don’t want to make it worse you need to get it sorted out. It is also realistic in that it gives you techniques to gradually improve the code as a business will rarely be able to spare the time and resources to totally rewrite something.

The really basic concept, around which a lot of the more complicated techniques are built, is that whilst you can’t bring all of a codebase under test immediately, you can grow islands of properly tested code within it that gradually spread out as you work on other parts of the codebase over time. To create these islands you need to separate them from the rest of the codebase, which is where a lot of the complexity comes from, but Feathers offers a variety of techniques for making those separations. The ultimate aim is that as much of your legacy codebase as possible is brought under test and conforms to modern principles like DRY and SOLID, whilst still allowing you to deliver the changes and improvements your users or customers are demanding.
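
To give a flavour, one of the simpler moves, in the spirit of the book’s Extract Interface and Parameterize Constructor techniques, looks something like this – the names are illustrative rather than from any real codebase:

// Extracting an interface creates a seam where previously the legacy class
// newed up its collaborator internally and could only be tested for real.
public interface INotificationSender
{
    void Send(string address, string message);
}

// Stand-in for the existing, hard-to-test legacy implementation
public class SmtpNotificationSender : INotificationSender
{
    public void Send(string address, string message)
    {
        /* talks to the real mail server */
    }
}

public class OverdueAccountChaser
{
    private readonly INotificationSender _sender;

    // Production code keeps calling the old constructor and gets the old behaviour...
    public OverdueAccountChaser() : this(new SmtpNotificationSender()) { }

    // ...while tests use the new one to substitute a fake, creating an island
    // of code that can be brought under test without touching the mail server.
    public OverdueAccountChaser(INotificationSender sender)
    {
        _sender = sender;
    }

    public void Chase(string address) =>
        _sender.Send(address, "Your account is overdue");
}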

I hesitate to say that any programming book is an essential read, but if like most programmers you’re faced with a chaotic legacy codebase Working Effectively with Legacy Code is a book that certainly gives you a lot of practical advice of how to make things better.

Binding a WPF Grid Column Header

This is another of the occasional posts that are primarily here to remind me how I did something, and because it might be useful to somebody else!

Over recent years all of the desktop based Windows applications I’ve developed have had user interfaces created using Windows Presentation Foundation (WPF). Having done Windows Forms extensively before that, it’s a definite step up, with great support for data binding user interface elements to data fields. As such it is pretty straightforward to produce user interfaces with a really clear separation of concerns, using a Model-View-ViewModel design.

A recent change request for one of my projects has been to provide dynamic labelling for data fields: different sorts of users store the same data in the same fields but use different terminology, which needs to be reflected in the labelling. Whilst the WPF binding support is able to cope with ordinary labels without an issue, I hit a bit of a dead end with a large data grid used in the application where the column headers were required to change. Whilst the data items in the grid can be bound without problems, the column headers just wouldn’t work.

Having failed with the standard binding, I then looked at using FindAncestor to try and point the column header at the ViewModel that was mapped to the data context of the window, again this didn’t work.

After much digging around and experimenting, I eventually came upon this question on StackOverflow which while it was about binding column visibility, demonstrated the same basic problem, that you couldn’t bind column properties. The first answer by Cameron MacFarland explains why it doesn’t work, and gives a good answer that solved the problem for me. It’s not the first question that comes up when searching for the problem on Stack Overflow, but the solution offered by Cameron is a much neater and cleaner solution than many of the other solutions offered to the similar questions.

The issue is that under WPF data grid columns are not part of either the visual or logical tree, and therefore the usual relative binding mechanisms don’t work. The solution is to use a static binding, and create a proxy class in the application code that provides what would come via the relative binding.

The proxy is declared as a resource on the grid with its Data property bound to the window’s data context, and the column header then binds through that resource with an explicit Source, rather than relying on the tree-walking mechanisms that don’t work for columns.
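
Following Cameron’s answer, the markup ends up looking something along these lines (the local namespace prefix, the Staff collection and the ColumnHeaderText property are illustrative names rather than the real ones):

<DataGrid ItemsSource="{Binding Staff}">
    <DataGrid.Resources>
        <!-- Captures the DataContext so the columns can reach the ViewModel -->
        <local:BindingProxy x:Key="proxy" Data="{Binding}" />
    </DataGrid.Resources>
    <DataGrid.Columns>
        <DataGridTextColumn Binding="{Binding Surname}"
                            Header="{Binding Data.ColumnHeaderText, Source={StaticResource proxy}}" />
    </DataGrid.Columns>
</DataGrid>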

And then you need to create this binding proxy class:

public class BindingProxy : Freezable
{
    protected override Freezable CreateInstanceCore()
    {
        return new BindingProxy();
    }

    public object Data
    {
        get { return (object)GetValue(DataProperty); }
        set { SetValue(DataProperty, value); }
    }

    // Using a DependencyProperty as the backing store for Data.
    // This enables animation, styling, binding, etc...
    public static readonly DependencyProperty DataProperty =
        DependencyProperty.Register("Data", typeof(object), 
        typeof(BindingProxy), new UIPropertyMetadata(null));
}

Although the example Cameron gives is for column visibility, once the binding proxy is set up it works just as well for other column properties such as the column header I was trying to bind.

Keeping Track of SQL Statements in an Entity Framework Code First Project

One of the more annoying omissions from the Microsoft Entity Framework is the out of the box ability to easily trace the SQL statements produced by the framework. In the past we’ve got around that using the Community Entity Framework Provider Wrappers, which include the EFTracingProvider that can log the generated SQL statements to the console or to a file.

However in our most recent project, where we are trying Code First, this runs into a bit of a problem, as the EFTracingProvider uses the ObjectContext whereas Code First is based on the newer DbContext. A trawl around the Internet will find various posts about how to get the providers to work, this post being probably the most complete example. The big issue is that they all require some degree of extra code to con the provider into working with the DbContext, and they all ran into various issues with our Code First solution, most often related to the parts where the solution generates the initial SQL Server Compact database.

However with some more digging I came across a solution in the form of the Clutch Diagnostics EntityFramework package on NuGet. The package wraps up another project, the excellent (and free) MiniProfiler, which, whilst designed for use in web projects, is adaptable enough to be used in other ways. Clutch Diagnostics EntityFramework only needs a couple of additional lines in the start-up code for the application, and then an implementation of IDBTracingListener to send the traces somewhere. There are several of these available, but for our purposes it was really easy to write our own. The big advantage is that there is no need to make any changes to the DbContext, so the code can be easily and automatically removed for releases.

For an example of how to implement IDBTracingListener, check out my answer on Stack Overflow.

A SQL Stored Procedure Parameter Sniffing Gotcha

This is another one of those occasional posts that is primarily for my own benefit to remind me of a particular problem, but that I’m posting publicly in case it could be of use to someone else.

On one of our systems we have a stored procedure to pull back all of the staff details for a particular project. Initially the screen used LINQ queries, but as anybody who has used LINQ can tell you, in certain situations the queries it produces can become quite unwieldy and slow, so in places like that we’ve swapped to using stored procedures. The stored procedure is really simple, consisting of one query that takes the two stored procedure parameters that identify the project the staff list is required for. Anyway, on our test systems the stored procedure has been running really well, returning the staff details in under a second.

However that hasn’t been the case on the live system. The same query on our biggest project has been slow. Not just slightly slow; go make a cup of coffee (including picking and grinding the coffee beans), do the Times Jumbo Crossword type slow. But when you take the query that the stored procedure uses and run it directly in a SQL Management Studio query window, it returns in under a second, and indeed the same project on our User Acceptance Test server, which is essentially an older copy of the live database, returns at a similar speed. It’s something in particular about the live server.

Not surprisingly this has caused a good deal of head scratching, but on Friday afternoon I finally solved the mystery and found what was causing the slow down thanks to this blog post.

To understand what is going on you need to remember a few things about how SQLServer works:

  • SQLServer works out how to run a query – the execution plan – differently depending on a number of factors, including how many results it thinks the query is going to produce, the indexes on the tables, how the data is arranged in the tables and how the data is arranged on the disk, to name a few.
  • For a stored procedure, SQLServer builds the execution plan only once, the first time the procedure is run, and reuses that plan every subsequent time the stored procedure is called.
  • If you use a stored procedure parameter in a query within that procedure, the query optimiser uses the values of those parameters when building the execution plan (this is called parameter sniffing); if you use local variables instead, the query optimiser creates a more generic plan.

Having asked around, most SQLServer users are aware of the query optimiser, and many are aware that SQLServer builds the query execution plan once – although they may not know exactly when – but relatively few, including a good few DBAs, will be aware of the difference in the way parameters and local variables are treated by the optimiser.

When you bear in mind that we have a mixture of different sized projects in our system, it starts to become rather obvious what has happened and why the query is running very slowly on one server but not on others. On some servers the first call of the stored procedure was for a small project, whilst on others it was a big project, and as a result the servers have created different execution plans, each favouring a particular project size. Unfortunately on the live server the query plan is totally unsuitable for the project with hundreds of staff members, hence the hideously slow performance.

All I did was change the parameters in the query to be local variables, and then set the value of those local variables to be the value of the parameters – two extra lines and a tweak of the query, and the query started returning in under a second as for all the other servers. By virtue of having a generic query plan the performance of the query is not going to be quite as good as one targeting a particular project size, but in a system where we are storing a wide variety of project sizes a generic plan is what is needed.
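
The shape of the fix, with illustrative procedure, table and parameter names rather than the real ones, is simply this:

CREATE PROCEDURE dbo.GetProjectStaff
    @ProjectId INT,
    @DepartmentId INT
AS
BEGIN
    -- Copy the parameters into local variables (the two extra lines)...
    DECLARE @LocalProjectId INT = @ProjectId;
    DECLARE @LocalDepartmentId INT = @DepartmentId;

    -- ...and use the local variables in the query, so the optimiser builds a
    -- generic plan instead of one sniffed from the first call's parameter values.
    SELECT s.StaffId, s.Surname, s.Forename
    FROM dbo.Staff AS s
    WHERE s.ProjectId = @LocalProjectId
      AND s.DepartmentId = @LocalDepartmentId;
END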

At this point, having found the problem I started looking at other stored procedures that could potentially exhibit similar problems – as a general rule I’d recommend not putting parameters directly into queries.

If you want a more detailed explanation, complete with a simple worked example of what to do, check out this SQL Garbage Collector post.

DDD8 – Apples, Boots and @blowdart

Major embarass @blowdart session! #DDD8

Today was the eighth annual(ish) gathering of four hundred of the Microsoft development community for a day of free technical training. Once again it was spectacularly over subscribed – it sold out faster than Glastonbury (all the places went within fifteen minutes) – and with no Microsoft speakers it had the usual mix of sessions, some of which perhaps you wouldn’t expect to see at Microsoft.

The day started off cold – although there was no snow it was definitely a case of scraping ice off the car, and if you were in any doubt, one look at Rachel Hawley’s footwear could tell you! Having said that, as has become traditional at these events, bacon butties to warm you up awaited those who got there early.

First off a couple of observations. For a Microsoft Developer Day, it was a very good advert for Apple! Of the five talks I attended, two were obviously running off Macs. One was about iPhone development, so using a Mac was a given, although the presentation was also given using Keynote (and was all the more slick for it), and rather than messing around with font sizes as all the PC based presenters have to do, Chris Hardy used the built-in OS X zoom gestures to quickly focus in on what he was showing. The other Mac based presentation, given by Ian Cooper, wasn’t anything related to Mac development at all, but was presented in Mac OS X, using the Mac version of PowerPoint, with a Windows development environment running in VMware. It’s not so long ago that developers would buy a Mac, largely ditch Mac OS X and stick Windows on it – it does seem that even with the advent of Windows 7 that isn’t always the case now… The other massive advert for Apple was, not surprisingly, the vast numbers of iPhones in evidence. I certainly think it would have been worth somebody doing the same as Scoble did at Le Web to get a ball park figure of how many there were. There were a good few Google Android phones around, but few if any Microsoft based phones in evidence. This was also reflected in the sessions – no talks on Windows Mobile development, but there was a talk on using MonoTouch to develop iPhone applications!

As is normal for these days, what I thought I would attend, and what I actually attended were slightly different. I initially thought I’d just take up residence in Chicago 1 for the day, but in the end I fancied a change of pace.

First up I attended a talk by Ian Cooper on Real World MVC Architectures. This was partly because I’ve just done my first ASP.Net MVC project, and I was half expecting to find I’d done it all wrong, as to a large extent I’ve put it together as felt right rather than following any explicit paradigm. To my relief it seems all the talk of proper architecture is sinking in, and the way I’ve constructed it is pretty much as was suggested, even to the point that I’ve used particular techniques without having read about them as yet in my MVC book – I understood why they were being used but didn’t recognise the ideas by name! I suspect the session might have been pitched a bit too much towards the beginner end of things for experienced MVC programmers, but for me it was certainly a good reinforcement of the techniques.

Next I slipped next door for a change of gear, and a non-technical talk by Liam Westley who was talking about how to be a small software development outfit and not go bust. To be honest, the principles Liam outlined can apply equally well to large software houses, a number of which I’ve come across who don’t get this stuff right, and even to people in a corporate environment like me as getting these sorts of things wrong will at the very least have your internal customers looking elsewhere for their software, or at the worst put you out of a job. Liam gave us a set of broad principles that any software developer should be doing as a matter of course – things like delivering properly tested software, applying proper logging (even in a corporate environment fixing a problem before the users have got round to reporting it scores serious brownie points), and understanding your users, all go to making people happy to give you their software work, and not go elsewhere.

For session number three it was a first for me, in that it was the first time I have heard Jon Skeet speak. His name will be familiar to anyone who frequents Stack Overflow and, as his reputation is testament, he sometimes seems to answer C# questions within seconds of them being asked. What is slightly more surprising is that his day job is at Google as a Java developer. Even more surprising, he fits all of that in with being a Methodist Local Preacher too – but I suspect that stands him in good stead for being able to deliver material well, as from the experience today his reputation is well deserved. The latest version of C# brings in some interesting, but quite complex, new ideas, and he managed to put them over in a way that, even with the early start on a Saturday, I pretty well followed. Having said that, whilst I liked the presentation and many of the new features, I was less than impressed by the return of the ubiquitous VB variant data type in the guise of the dynamic type. Whilst I am well aware that the way the variant and the dynamic type work is rather different, it’s much more about how it will end up being used, or more likely abused. I’m with Jon Skeet on this in that I much prefer a situation where the types can be validated at compile time. Whilst there are legitimate reasons for adding dynamic, and as an exercise in language design the implementation is very impressive, as with the variant I am quite sure it will end up being thoroughly misused and will lead to many a difficult to nail down bug.

Next up was lunch, and was the traditional scramble for a lunch bag. Unfortunately it seems that the entire occupants of the Chicago 1 side went the same way and got all the non veggie and non seafood sandwiches (I have to watch having too much of certain types of seafood with my gout) and as always it was a bit of a lucky dip as to what else you got, so I ended up with a sandwich, crisps and an apple that I wanted, and a can of diet coke and a snickers bar that I didn’t want. I know they’ve tried various things over the years, but I still think there has to be a better way than this, as it was pretty obvious looking around that not everybody wanted what was in their lunch and there was a lot going to waste.

The lunch time Grok Talks had relocated this year, and were in the atrium in building four. This certainly gave a bit more space, but did seem to make the security guards mighty jumpy – I got a stern “I’ve just seen you behaving strangely” from one for taking this picture – I just liked the look of the clear blue against the white of the building structure and was going to make some comment about the weather! The Grok Talks were marred rather by problems with the technology. For a start the speakers were badly positioned in relation to where the presenters were standing leading to endless feedback problems. The talks also took an absolute age to get started, and when they did people seemed to overrun, which as a result led to people who were further down the running order being disappointed. There were a couple of interesting talks though, and it was especially interesting watching Gary Short intensely watching somebody else demonstrate Code Rush! Looking at the response hopefully there will be a few more converts from Resharper, a jump I made many years ago!

After lunch was one of my personal interest talks. As an iPhone owner and software engineer I’ve always quite fancied giving a bit of iPhone development a go. The problem is that as well as learning a new platform and new environment, developing for the iPhone requires learning a new language, Objective-C. However Chris Hardy was demonstrating a way that I could leverage my existing C# skills using the Mono environment and an add-on to it called MonoTouch. Whilst developers still need to be able to read Objective-C to understand what is going on, and still need to learn their way around the Apple APIs, it allows them to develop entirely in familiar C#, and even brings advantages in terms of some of the extra type safety that C# provides. I have to say I was pretty impressed at the environment and what it can do. I was less impressed by the price – $399 for a personal license, which only covers you for a year of updates, with even more for a corporate license – far too much for your average hobbyist programmer to even consider. I can’t help thinking that they are missing a trick here: providing a low cost or free license for developers in return for a share of the revenues, maybe using some sort of phone home code to keep track, would certainly broaden the base of programmers using it.

My last session of the day, to be honest I would have gone to even if Barry was just reading the phone book, as this was potentially his last appearance at a Developer Day before he loses the essential qualification for being allowed to speak of not working for Microsoft, as in a scant few days he will be starting a new job working for Microsoft at one of their offices in Redmond. As always there was the classic banter with people he knew in the audience, in particular Jon Skeet who was attempting to pose increasingly difficult questions it seemed. Barry also started off by hijacking the session next door as Ben Hall, the speaker had a birthday and was foolish enough to tell somebody! What I was also expecting, and got in spades were interruptions marking his departure from the UK development scene. His book Beginning ASP.NET Security featured in several. In the first Liam Westley gave a touching and heartfelt tribute, and said how much he had been looking forward to the arrival of the book – as it was just the right size to prop up his wobbly table. In another they spoofed the winter cold adverts, suggesting that the book was good fuel to keep the elderly warm. The session finished off with a clip from his appearance many years ago on The Crystal Maze, and several of the organising team appearing in T-shirts especially prepared for the occasion. All in all it was a memorable way to finish off the day, and hopefully a memorable occasion for Barry as he heads across the Atlantic. The one question that remains is whether all the spelling mistakes in the presentation were down to Barry, or whether somebody did get at his presentation before he went on…

All in all it was an excellent day, and although I know there were a couple of sessions that had problems, the ones I attended were all excellent and well worth the spare time given up. It was great to catch up with friends from the community, previous developer days and previous jobs. Whilst it does appear that the day is very much a victim of its own success (even with local developer days around the country people still travel from far and wide to attend this one in addition to their local days), hopefully a way can be found to allow it to keep running in future years, and all credit to the organising team and the staff at Microsoft for keeping the whole day running smoothly.

Ensuring WCF Correctly Reports Errors

This is another one of my note to my future self posts, that might be useful to somebody else, so skip past if you don’t know what WCF is…

Anyway, if you’re still here, I’ve spent the past day or so trying to track down a problem in some WCF code. Essentially, whilst I have been out of the office over the last week, a change has propagated through to our development server which has caused problems with some of our existing services, specifically some code where one service needs to make a call to another service running on the same machine to finish its work. To do this it needs to pass through the Kerberos ticket that the initial service has received, and whilst up to now it has been quite happily doing this, now it has stopped and instead the credentials of the underlying Windows service are being passed.

The problem was made a lot harder to diagnose by a little WCF gotcha whereby the error that is generated is overwritten when the calling code tries to dispose of the service object. Damien McGivern has an excellent post describing the problem and giving a solution, however it didn’t quite meet our needs, as we sometimes need to specify an endpoint when creating the proxy object.

To get around the problem, I adapted Damien’s code slightly creating an extension method taking an object of type TService rather than creating the object within the method, so the method can be used as follows:

new RelationshipServiceClient().UsingService(service => ... );
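
For reference, a rough sketch of what such an extension method looks like – this is the general pattern rather than our exact code:

using System;
using System.ServiceModel;

public static class ServiceClientExtensions
{
    // Runs the given action against an already created client/proxy (so the
    // caller is free to pick the endpoint), then shuts the channel down safely.
    public static void UsingService<TService>(this TService client, Action<TService> action)
        where TService : ICommunicationObject
    {
        try
        {
            action(client);
            client.Close();
        }
        catch (CommunicationException)
        {
            // Close() would throw again on a faulted channel and hide the
            // exception we actually care about, so Abort() instead and rethrow
            client.Abort();
            throw;
        }
        catch (TimeoutException)
        {
            client.Abort();
            throw;
        }
        catch (Exception)
        {
            client.Abort();
            throw;
        }
    }
}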

Whilst it doesn’t actually solve the mystery of why our server started mishandling WCF calls, it did at least give us a bit more clue!