Category Archives: Programming

Techie programming type posts.

DDD Scotland

Last weekend was Developer Day Scotland. Much like the original Developer Days on the Microsoft campus that I’ve been along to many of, which recently relaunched with a new team running them, this was a relaunch by the Scottish Developers. As there were some interesting sessions on the agenda, and since I fancied an excuse to take the West Coast Main Line over Shap and through the Southern Uplands (something I usually only glimpse whilst driving the M6 and A74(M)), I grabbed myself a ticket and headed north.

The conference was held at the Paisley Campus of the University of the West of Scotland. The Reading Developer Days are relatively unusual in being held at a company site, but then few companies have the kind of setup Microsoft have that is suitable. Having said that, the experience of attending a DDD at a university showed up several advantages, not least that there is much more space, and in particular the main hall is large enough to take all the attendees – at Microsoft the prize giving at the end of the day ends up being done with all the attendees stood in the foyer and the organisers stood on the stairs!

This conference I was very much picking sessions to tie in with upcoming work, rather than just sessions that piqued my interest as I have done at other DDD events.

First up I kicked off with Filip W talking about Interactive Development with Roslyn.

Filip started off with a quick recap of the history of C# as a language – enough to make me feel a little old as I can remember my first experiences with the early versions of C# back with Visual Studio 2003. This was to highlight that the way developers worked with C# hasn’t changed much over the years, which is why the new Roslyn compiler is such a game changer.

He began the demos with a simple feature, dotnet watch, which re-runs a specified command whenever a source file changes. This needs the VS2017 project format, but allows a great deal of flexibility in how you work with code.
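For example, from a project directory you can have the test suite re-run on every save with something like:

dotnet watch test

(dotnet watch run does the same for the application itself.)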

From there he moved on to Edit and Continue. Edit and Continue has been around for longer than C# – it was an often-used feature of VB6 that was demanded in .Net as people moved across. It has however been problematic, as a general rule supporting a language version behind the current cutting-edge C#, and there have always been a number of limitations, in particular not being able to change lambda expressions at all. Roslyn has changed that, and it has finally caught up with the current C# 7.

For the next part of his talk Filip moved on to the C# REPL, known in VS2017 as the C# Interactive window.

The C# REPL didn’t exist before Roslyn because, with C# being a compiled language, that kind of interactive functionality just wasn’t possible. With Roslyn, Microsoft has introduced a special mode that relaxes some of the parsing rules to make interactive development possible, including the ability to use syntax that normal C# code would reject as illegal.

Interestingly, as Filip explained, each line is still compiled, which gives the interactive window some advantages over interpreted languages, including allowing developers to step back through the compilation interactively. It also integrates with the currently open solution, allowing developers to manipulate and explore the solution in more complex ways than previously.

The C# REPL exists in several forms. It can be run directly from the command line, whilst the C# Interactive window in Visual Studio is a WPF wrapper around the C# REPL that layers extra functionality on top. There is also an “Execute in Interactive” right-click menu option to immediately run the code under the cursor. The final variation is Xamarin Workbooks, which takes Markdown-format documents and uses the C# REPL to execute any code blocks they contain. Output can be sent to the iOS or Android emulators as well as run locally.
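To give a flavour, a session in the C# Interactive window (or csi from a developer command prompt) looks something like this: you can declare methods and variables on the fly, and evaluating an expression echoes its value straight back.

> int Square(int x) => x * x;
> Square(12)
144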

Filip finished off by discussing Live Unit Testing, something I’ve already been using in VS2017. This runs tests as the code is actually being written rather than waiting for it to be saved, and it does this by hooking in as a Roslyn analyser. It’s straightforward to write a custom analyser ourselves, perhaps to enforce coding standards or to guide other developers in how to use a library – indeed some third-party library developers are already including analysers to do just this.
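As a flavour of what writing one involves (this is a minimal sketch of my own, and the naming rule it enforces is invented purely for illustration), an analyser derives from DiagnosticAnalyzer and registers for the syntax nodes it cares about:

using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class ClassNameAnalyzer : DiagnosticAnalyzer
{
    // Describes the warning that will show up in the editor and build output.
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "TEAM001",
        title: "Avoid vague class names",
        messageFormat: "Class name '{0}' contains the banned word 'Manager'",
        category: "Naming",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        // Ask Roslyn to call us back for every class declaration it compiles.
        context.RegisterSyntaxNodeAction(AnalyzeClass, SyntaxKind.ClassDeclaration);
    }

    private static void AnalyzeClass(SyntaxNodeAnalysisContext context)
    {
        var declaration = (ClassDeclarationSyntax)context.Node;
        if (declaration.Identifier.Text.Contains("Manager"))
        {
            context.ReportDiagnostic(
                Diagnostic.Create(Rule, declaration.Identifier.GetLocation(), declaration.Identifier.Text));
        }
    }
}

Packaged up in a NuGet package or a Visual Studio extension, a rule like that then shows up live in the editor for anyone using the code.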

For session number two, I stayed in the main hall for Jonathan Channon talking about Writing Simpler ASP.Net Core.

Jonathan started by talking about a project he had worked on where speed had been an issue, and where they had tracked the problem down to the large number of dependencies being inserted using dependency injection. The issue was that the inversion of control mechanism used to resolve the dependencies relied on reflection.

The underlying issue is with the way we do SOLID in ASP.Net, so Jonathan used a series of examples showing how to go from a solution heavily dependent on injected dependencies and mocking frameworks for testing, to something that uses no dependency injection or mocking frameworks at all. He has the examples from the talk online in a GitHub repository.

What is perhaps most interesting about his final solution is that the technology it uses has been around since the earliest days of C#: delegates and static methods, along with his own Botwin library to simplify building an API, giving a much more functional programming style than is used in traditional ASP.Net.
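Jonathan’s actual examples are in the GitHub repository mentioned above; as a rough flavour of the general idea (this sketch is mine, with invented names), the dependencies become plain delegates passed to static methods, so a test can hand in a lambda instead of configuring a mock:

using System;

public static class GetCustomerHandler
{
    // The data access is a parameter rather than a constructor-injected interface.
    public static string Handle(int customerId, Func<int, string> loadCustomerName)
    {
        var name = loadCustomerName(customerId);
        return name == null ? "Unknown customer" : $"Hello {name}";
    }
}

// In a unit test there is no container and no mocking framework involved:
// var greeting = GetCustomerHandler.Handle(42, id => "Alice");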

Jonathan also highlighted a number of other blogs and videos. Mike Hadlow blogs on much the same technique, highlighting how much less code a functional style produces. Posts from Mark Seemann and Brian Geihsler also talk about how SOLID principles lead to a profusion of dependencies, making codebases difficult to navigate.

Given that so much software these days follows the SOLID principles, this was a challenging, different view of how systems should be developed, one of those “everything you think you know is wrong” type sessions.

The next session I attended was Paul Aikman talking about React.js and friends, which was one of my must-attend talks as I was due to start working with React.js for the first time the following week. Paul has posted his slides on his website.

Paul started by taking us through how his company eventually arrived at using React.js, starting out with WebForms augmented by jQuery, then moving through Knockout and Angular 1, before settling on and sticking with React.

He also highlighted the gradual shift from performing a lot of processing on the server side with minimal client-side functionality, to the current situation where customers expect a rich and responsive experience when interacting with websites, which means clients are now a lot fatter. He also discussed why, having started with Angular 1, his company took the decision to shift to React: it effectively came down to the significant changes between Angular 1 and 2, which meant they would have to learn what was in effect a new framework anyway, so they went for what they regarded as the best option at the time and changed to React.

He then gave a rapid overview of how React worked, which I found really useful coming to React for the first time during the following week. He highlighted that given the years of being told to separate logic and presentation with the MVC pattern, one of the biggest surprises with React is that it mixes logic and presentation together.

Paul also highlighted that React only focuses on UI logic, following the principle of doing one thing, and doing it well. There are additional libraries such as Redux and React Router that provide the additional functionality needed to build a web application.

After lunch, I decided to head along to Gary Fleming’s talk on APIs on the Scale of Decades, which was about the problems with APIs, and how developers can write an API that can evolve over time rather than locking you in to poor early decisions. Once again Gary has his talk notes online, which are well worth a look. As a side note, Gary was using an app called Deckset to run his presentation, which takes presentations written in Markdown syntax – considering the amount of time I spent reworking Markdown notes into a Keynote presentation recently, I’ve noted it down as something to look at further.

Gary’s talk was the one that prompted the most heated discussion of any I attended, both at the session and when I got back to the office. He started from the point that designing APIs is hard, but that what most developers want is an API that is both machine and human readable, changeable, testable and documented.

Gary started with a crash course in the concept of affordance, using Mario and animals in a tree as examples! His point was that, whether it is different animals using a tree in different ways or us playing the game, it is knowledge and experience that let them interact with the tree and us play the game, and APIs should be similar. He used further examples where knowledge and experience allow us to interact with something: save buttons that look like floppy disks, even though many people now have never even used a floppy disk.

Applying this to our APIs, the mechanisms for controlling them should be included in the information the API returns; you shouldn’t separate them out.

Looking at a common affordance on an API: if there is a large dataset to return we will generally page it, and there is a common set of affordances for stepping through the pages. Similarly, going back to the text adventures from the early days of computer games, there was a common set of verbs with which to interact with the game.
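As a rough illustration of that paging affordance (this is my own sketch rather than anything from Gary’s talk), a paged response can carry its own navigation links so the client doesn’t need out-of-band knowledge of how paging works:

using System.Collections.Generic;

// The links are part of the returned data, so "how do I get the next page?"
// is answered by the response itself.
public class PagedResponse<T>
{
    public IReadOnlyList<T> Items { get; set; }
    public string Self { get; set; }      // e.g. "/orders?page=3"
    public string Next { get; set; }      // e.g. "/orders?page=4", null on the last page
    public string Previous { get; set; }  // e.g. "/orders?page=2", null on the first page
}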

A good tip Gary gave for thinking about the verbs and nouns to describe these affordances was to think about how you would ask voice assistants like Alexa or Siri to do what you want to do. He also suggested that well designed affordances are effective documentation for an API – if it is clear how to use an API you don’t need extensive documentation.

Gary then moved onto the problem of changing an API.

He used the example of the Ship of Theseus. In this thought experiment a ship is repaired over a long life such that eventually every single plank of wood and component has been replaced – is it the same ship? Looking at an API through the same lens, if we are changing our API over time, is it still the same API, and when do our changes take it from version 1 to version 2?

Gary’s suggestion was that we shouldn’t be versioning our API at all. In response to the surprise from the audience, he highlighted that we cope with this every day with websites, all of which change the interface that we as users interact with. We apply our knowledge of the website and cope with the changes.

Gary then moved on to testing. His first example asked why we need brakes on a car. The obvious answer is to enable us to stop, but they also allow us to go faster. For the same reason we need tests on an API: to allow us to change it faster.

Fundamentally, if we know that an API will inevitably change, we need to plan for those changes. He suggested that we should be using Consumer Driven Contracts, where the consumers of the API give detailed expectations of how the API should behave, and these then form the basis of the tests against the API. He also highlighted the importance of using fuzzers to ensure the API responds to and handles unexpected data.

His final point provoked the most discussion. Looking back at what he had been discussing, he highlighted that JSON, which is what many APIs currently use, is limited, and suggested that it is something we use by convention rather than because it is the best tool for the job. He suggested that HTML5 was a better option as it offers a richer interface that gives greater affordance to the users of the API. There was a good deal of incredulity from members of the audience, and a similar level from our architect back at the office after the conference. Gary has subsequently said that there are limitations with using HTML5 as well, but the point was as much about getting people to question why they use JSON as it was about proposing HTML5 as the solution.

My next session was also run by Gary, as I decided to pay a visit to the community room where he was running a Lean Coffee session.

The group came up with a mixed selection of topics to discuss. First off was a topic proposed by Becca Liddle, the organiser of Women in Tech Scotland, who asked about perceptions of women in technology companies. The discussion was wide-ranging and covered a number of common issues around how women are treated both by company culture and by male colleagues, and how male-dominated tech culture can be off-putting to women and minorities. Becca had talked to a number of other women attending the conference earlier in the day and shared some horror stories of their experiences. Certainly food for thought as to how we encourage a more diverse workforce in IT. We also discussed what we were currently learning and broader issues around training, and had a discussion about the impending changes being brought by GDPR, which was in some ways a bit of a relief as it seems everybody is just as concerned about it, and nobody feels they will be ready.

Next I went along to a session on Building APIs with Azure Functions by Kevin Smith. Again this was a session I attended because as a team we’re using Azure Functions to try to break up large pieces of processing into horizontally scalable functions.

Kevin gave a good overview of the functionality available, highlighting the rapid development and simplified integrations, and also how they can be developed using Visual Studio. Kevin also has a good introduction on his website.
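To give an idea of the shape of an HTTP-triggered function (a minimal sketch of my own using the 2017-era Functions programming model, with an illustrative route and names rather than anything from Kevin’s demos):

using System.Net;
using System.Net.Http;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Azure.WebJobs.Host;

public static class CustomerApi
{
    [FunctionName("GetCustomer")]
    public static HttpResponseMessage Run(
        // The {id} route parameter is bound straight onto the int parameter below.
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "customers/{id}")] HttpRequestMessage req,
        int id,
        TraceWriter log)
    {
        log.Info($"Fetching customer {id}");

        // In a real function this would come from storage rather than being hard-coded.
        return req.CreateResponse(HttpStatusCode.OK, new { Id = id, Name = "Example Customer" });
    }
}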

He also gave some good insight into the problems, including difficulties debugging functions, and in particular Microsoft introducing breaking changes to Azure Functions. Ironically his final demo was one that failed on the day; I’m not sure whether that was down to a Microsoft breaking change!

My final talk of the day was Peter Shaw giving an Introduction to Typescript for C# Developers – once again it was a session I attended because we’re using Typescript for the upcoming work and the talk served as a good introduction.

First though, a moan: Peter refused to use the microphone in the hall on the basis that he “had a loud voice”. Now he certainly did speak loudly enough that I, with good hearing, could hear him without a problem. However, my experience of looking after the sound at church is that there may well be people in the audience who have hearing difficulties, and nine times out of ten when challenged like this they won’t feel comfortable drawing attention to themselves as being unable to hear. At church the reason we ask people to use microphones is that however loud people’s voices are, they can’t speak loudly enough to drive the induction loop that many people with hearing difficulties use, and speakers refusing to use the microphone leaves those people feeling discriminated against. Sometimes they will suffer in silence, other times they will complain to the sound crew, but almost never will they complain to the speaker, who carries on in blissful ignorance thinking they have a loud voice and everything is fine. I hate working with a microphone too, as do many other people, but they are there for a reason, so if you’re a speaker and there is a microphone, please use it!

Anyway, moan over, on to the talk. Peter started with an overview of why Typescript is important. More and more applications are moving into the browser; much as Paul Aikman highlighted in his talk on React, we’re moving from applications where most of the functionality sits in complicated server-side C# code to applications with richer client-side experiences built in Javascript. Similarly, the growing variety of Internet of Things devices often use Javascript.

For developers used to the rich type-safe world of C#, Javascript can be a bit of a shock. Typescript is a language designed by Anders Hejlsberg, the designer of C#, to open up Javascript to back-end developers used to C#.

As such the syntax is familiar to anyone who is used to C#, and makes the transition to Javascript development relatively painless.

Interestingly Peter highlighted that Typescript is more of a pre-processor than a compiler – ultimately what is produced is valid Javascript, but Typescript acts like a safety net enabling the developer to write enterprise scale Javascript applications.

There are however a number of subtle differences, driven by the differences in Javascript. For example, Typescript has union types to accommodate Javascript’s ability to change the type of a variable. Undefined and null are still usable, although the Typescript team advise against them.

There is lots of Typescript support around. Many of the most common Javascript libraries already have Typescript type definition files to allow them to be used from Typescript, and Peter referred us to Definitely Typed as a good repository of high-quality definitions.

As an introduction it was a useful talk giving me as a C# developer taking first steps into Typescript confidence that it won’t be a difficult transition.

After that we had the closing section of the Developer Day with the traditional raffle and prize giving, and as is traditional (and much to the disappointment of the kids, because an Xbox X was one of the prizes) I didn’t actually win anything in the raffle. That was no bad thing however, as I’m not quite sure how I would have got an Xbox back to Reading on the train…

How Do I Unit Test Database Access?

If as a .Net developer you’re serious about making sure your code is properly tested, one of the biggest problem areas has always been around database code. Even with the more widespread adoption of Object Relational Mapping (ORM) frameworks that abstract some of the complexity of database access, unit testing code that accesses the database is still difficult.

Over the years there have been various strategies to unit test database code that developers have come up with, and at various times I’ve seen projects that use all of them. I’ve also seen examples of large projects where you could see several of these techniques used in different parts of the code base.

The simplest technique – which isn’t really a technique as such – is just to test with the real database. Often this will be a special instance of the database created by the test run, into which test data is loaded. The biggest argument against this idea is that it isn’t really a unit test and should more correctly be considered an integration test. The practical problem is that using the real database is pretty slow, and that often leads to compromises to allow the test suite to run in a reasonable time frame, either by reducing the number of tests or by not starting each test with a clean database configuration. Reducing the tests increases the risk that important conditions may not be properly tested, whilst not cleaning the database can lead to unexpected interactions between different tests. However, in situations where you have complex logic in stored procedures in the database, sometimes this is the only way you can test it.

If you are practising Test Driven Development, where you are running unit tests repeatedly, having a unit test suite that takes even just minutes to run is a real problem.

A step on from using the real database is to use a faster alternative, for example an in-memory database. This idea has come to more prominence recently as Microsoft has added an in-memory database provider to the latest version of their current ORM, Entity Framework Core, though third-party in-memory options such as Effort have been around for a while. Both the official offering and the third-party options are drop-in providers that work with the same Entity Framework code but keep the data in memory instead. Purists will argue that even with an in-memory provider this is still really an integration test rather than a unit test: you are merely replacing the dependent database rather than removing it. However, to a software developer it can be an attractive option compared to the effort required in stubbing, mocking or faking a full ADO.Net provider. The other criticism of this technique is that because a different type of database is being used from the live system there is a risk of behavioural differences between it and the real database. Having said that, since Microsoft are highlighting testing as a benefit of their new in-memory provider, hopefully those will be few and far between.
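A minimal sketch of what swapping in the EF Core in-memory provider looks like in a test (MyContext and Customer are illustrative names, MyContext is assumed to have a constructor taking DbContextOptions, and the assertion uses xUnit):

using System.Linq;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class CustomerTests
{
    [Fact]
    public void Added_customer_can_be_read_back()
    {
        // Requires the Microsoft.EntityFrameworkCore.InMemory package;
        // the database name identifies which in-memory store to use.
        var options = new DbContextOptionsBuilder<MyContext>()
            .UseInMemoryDatabase(databaseName: "CustomerTests")
            .Options;

        using (var context = new MyContext(options))
        {
            context.Customers.Add(new Customer { Name = "Alice" });
            context.SaveChanges();
        }

        // A fresh context with the same database name sees the same data.
        using (var context = new MyContext(options))
        {
            Assert.Equal(1, context.Customers.Count());
        }
    }
}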

Moving on from using an in-memory database, the next option, at least until Entity Framework 6 came along, was to build a fake context object that could be used for testing. I’m not going to go into a detailed explanation of how to do this, as there are a number of good tutorials around, including this one from a Microsoft employee. The basic idea is that you construct a complete fake context object that gets injected into the code being tested instead of the real database context. Although you generally only need to construct the fake context once, it is comparatively a lot of code, so it is pretty obvious why developers are delighted by the in-memory provider included in Entity Framework Core. If you don’t need the full abilities of the context, you do have the option of only partially implementing the fake. The main criticism of using fakes is that again you’re running the risk of behavioural differences, this time because you’re using a different type of context: under the covers you’re using the Microsoft LINQ to Objects classes to talk to the fake object, whereas the real database code will be using the LINQ to Entities classes. Put simply, whilst the syntax will be the same, you’re not exercising the actual database access code you will be using in the live system; you’re relying on LINQ to Objects and LINQ to Entities behaving in a similar fashion.

With the arrival of Entity Framework 6, changes were made that make it a lot easier to use a mocking framework instead of fake objects. Microsoft have a good guide to testing with a mocking framework in their Entity Framework documentation, alongside a revised guide to using a fake object as a test double. The amount of code to fully mock a context is similar to a faked context, but if you only use part of the functionality of the context in your tests you need only mock the parts you use. As with any mocked object, it’s important that your mock behaves the same as the real object you’re trying to simulate, and that can be pretty complex with an object like a database context. A particularly problematic area is the behaviour of SaveChanges, where some fairly subtle bugs can creep in: code can pass a test but not work in production if, for example, the test does no more than expect the SaveChanges method to be called.
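The shape of the mocked DbSet from the Microsoft guide is roughly as follows (Blog and BloggingContext follow the names in that guide, Moq is the mocking framework, and xUnit is assumed for the test attribute); the final verification illustrates exactly the kind of SaveChanges-only assertion to be wary of:

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using Moq;
using Xunit;

public class SaveChangesTests
{
    [Fact]
    public void Beware_of_only_verifying_SaveChanges()
    {
        var data = new List<Blog> { new Blog { Name = "AAA" }, new Blog { Name = "BBB" } }.AsQueryable();

        // DbSet<T> has to be taught to behave like an IQueryable for LINQ queries to work.
        var mockSet = new Mock<DbSet<Blog>>();
        mockSet.As<IQueryable<Blog>>().Setup(m => m.Provider).Returns(data.Provider);
        mockSet.As<IQueryable<Blog>>().Setup(m => m.Expression).Returns(data.Expression);
        mockSet.As<IQueryable<Blog>>().Setup(m => m.ElementType).Returns(data.ElementType);
        mockSet.As<IQueryable<Blog>>().Setup(m => m.GetEnumerator()).Returns(() => data.GetEnumerator());

        // The Blogs property must be declared virtual on BloggingContext for Moq to override it.
        var mockContext = new Mock<BloggingContext>();
        mockContext.Setup(c => c.Blogs).Returns(mockSet.Object);

        // The code under test would use mockContext.Object here; calling SaveChanges
        // directly simply stands in for it in this sketch.
        mockContext.Object.SaveChanges();

        // Verifying only that SaveChanges was called proves nothing about what was
        // actually written, which is how those "passes the test but fails in
        // production" bugs sneak in.
        mockContext.Verify(m => m.SaveChanges(), Times.Once());
    }
}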

That takes us on to a collection of other techniques that are more about isolating the database access code to make it easier to test.

The long-standing way to do this is based around the Repository and Unit of Work patterns. There are a variety of ways you can implement these; for example you don’t necessarily need the Unit of Work and could just use the Repository pattern alone. There is a good Microsoft tutorial on the pattern using Entity Framework 5. The basic idea is to wrap the database code in the repository, and then mock the repository for subsequent tests. The database code in the repository just consists of simple create, read, update and delete (CRUD) functions. Whilst this was a common pattern before Entity Framework, and persisted with the early versions of Entity Framework that were difficult to mock or fake, it has largely gone out of fashion, not least because the Entity Framework DbSet is itself an implementation of the Repository pattern, so it is unnecessary to create an additional implementation just for mocking when you can mock or fake DbSet itself.

The other method that has been used for a long while is a traditional data access layer. The actual database code is abstracted behind a series of method calls that take the parameters and return the data, and these can be easily mocked. Rather than being generic, the code inside each of those methods is for a particular query, and whilst that will be fairly simple database code that can be easily tested, there will be a single function for each query. There are good ways and bad ways of doing this; for example I have seen projects with vast library classes containing all of the queries used by the business logic – a bit of a maintenance nightmare at times. Probably a better design, and more in keeping with SOLID principles, is to have smaller classes more closely related to how the queries are being used. Either way there is a big overhead in having lots of query functions gathered into a data access layer.

Data access layers have also started to go out of fashion, however some of the principles behind them can still be applied. The single responsibility principle, the S in SOLID, can be interpreted as suggesting that even if you don’t have a formal data access layer, you shouldn’t be putting database access code in the same method as business logic. The business logic should take and return generic collections, rather than retrieving data and working directly on DbSets all in one method; you really shouldn’t have one method that queries data, manipulates it and writes it back. That application of single responsibility then gives the separation of concerns that makes your code easier to test. The business logic can be tested using simple unit tests rather than complicated ones that prime an in-memory database or mock, call a function and then examine database contents to see what has happened. The database access methods are in turn a lot simpler, often just retrieving data, and can easily be supported by a simple mock of the part of the database context being used – a full-blown in-memory database, fake or mock context isn’t needed.
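A small sketch of that separation (the names here are purely illustrative, and Invoice and BillingContext are assumed classes): the logic works on plain collections and needs no context at all to test, while the data access method is trivial enough to cover with a simple mock or an in-memory provider.

using System;
using System.Collections.Generic;
using System.Linq;

public static class OverdueInvoiceLogic
{
    // Pure business logic: takes a plain collection, so a unit test just passes in a list.
    public static IEnumerable<Invoice> FindOverdue(IEnumerable<Invoice> invoices, DateTime today)
        => invoices.Where(i => !i.Paid && i.DueDate < today);
}

public class InvoiceQueries
{
    private readonly BillingContext context;

    public InvoiceQueries(BillingContext context) => this.context = context;

    // Trivial data access: just retrieves the data and hands back a plain list.
    public IList<Invoice> LoadInvoicesForCustomer(int customerId)
        => context.Invoices.Where(i => i.CustomerId == customerId).ToList();
}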

In conclusion, unit testing code that accesses a database has always been difficult, and whilst some of the problems have been addressed over the years, it is still not easy. However, if you are following good design practices such as DRY and SOLID, the occasions when the only way to test something is via a database context should be fairly rare, and if you find you need to do that it is well worth looking again at whether you have inadvertently violated the single responsibility principle. Even though the advent of the in-memory database makes context-based testing easier, that doesn’t mean you should use it everywhere. A simple unit test of a loosely coupled method will always be faster than testing a more complex method, even with an in-memory database, so it is well worth considering whether your design would be improved by not coupling your business logic directly to your database access code.

Can You Just Take a Look at this Legacy Code?

As a programmer there are a number of books which people will tell you are must-reads for any professional – a list which does change over time as programming techniques evolve. However the books are fairly consistent in that they all tend to be written from the point of view of a greenfield system: starting from first principles and ensuring you build a maintainable system.

But is that realistic? You might be lucky and get in at the beginning of a brand new startup, or you could land a job at a consultancy where you’re always writing bespoke code, but for most programmers an awful lot of their career will be dealing with the joys of legacy code.

It may be that you come into an established company with many years of development behind it, thousands of lines of code, accumulated technical debt and changing technologies.

Alternatively you could be handed the thing programmers often dread: the “business developed application”. Often these are mired in corporate politics as well, with strained relations between the business area that developed the application and the IT department. Indeed in one company I worked for there was a semi-secret development team in one part of the business, formed as a result of the IT department saying no one too many times! In most cases these business developed applications are produced by people whose strength is in understanding how the business works but who are inexperienced as developers, which often produces a double hit of problems: the business logic is usually poorly documented, and the code is also of poor quality.

Other examples I’ve come across are prototype systems that have almost accidentally ended up as critical systems, and, something that happens surprisingly often, a big company taking on responsibility for a third-party product, either because they don’t want to upgrade to a supported version or because the third-party company is abandoning the product altogether.

The common factor in all of these is that you’re taking on a codebase that is less than ideal, so all these coding books that assume you’re starting from scratch aren’t overly useful. All the benefits of test driven development protecting you when you make changes are really not much good when you have incomplete or totally missing tests. It’s incredibly difficult to find your way around a badly structured code base if you’re used to textbook structures and accurate documentation.

What do you do? Edit and pray it works? Rewrite the whole system from scratch?

All of which brings me back to where I started, and the excellent Working Effectively with Legacy Code by Michael Feathers. The book starts from the entirely pragmatic position that you are going to be working on dodgy code a lot of the time, and if you don’t want to make it worse you need to get it sorted out. It is also realistic in that it gives you techniques to gradually improve the code as a business will rarely be able to spare the time and resources to totally rewrite something.

The really basic concept around which a lot of the more complicated techniques are built is that whilst you can’t bring all of a codebase under test immediately, you can grow islands of properly tested code that gradually spread out as you work on other parts of the system over time. To create these islands you need to separate them from the rest of the codebase, which is where a lot of the complexity comes from, but Feathers offers a variety of techniques for making those separations. The ultimate aim is that as much as possible of your legacy codebase is brought under test and conforms to modern principles like DRY and SOLID, whilst at the same time allowing you to deliver the changes and improvements your users or customers are demanding.
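One of the simplest separation techniques in the book is “subclass and override method”: pull the call to the awkward dependency into a virtual method so a test subclass can replace it. A rough sketch of the idea (the class, the legacy call and the values are all invented for illustration):

public class InvoiceFormatter
{
    public string BuildLabel(int invoiceId)
    {
        // The seam: everything else in this method can now be tested.
        decimal total = LoadInvoiceTotal(invoiceId);
        return $"Invoice {invoiceId}: {total:C}";
    }

    // In production this goes off to the legacy database code.
    protected virtual decimal LoadInvoiceTotal(int invoiceId)
        => LegacyDatabase.GetInvoiceTotal(invoiceId); // hypothetical legacy call
}

// In the test project the seam is overridden with canned data,
// so BuildLabel can be brought under test without touching the legacy code.
public class TestableInvoiceFormatter : InvoiceFormatter
{
    protected override decimal LoadInvoiceTotal(int invoiceId) => 42.50m;
}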

I hesitate to say that any programming book is an essential read, but if like most programmers you’re faced with a chaotic legacy codebase Working Effectively with Legacy Code is a book that certainly gives you a lot of practical advice of how to make things better.

Binding a WPF Grid Column Header

This is another of the occasional posts that are primarily here to remind me how I did something, and because it might be useful to somebody else!

Over recent years all of the desktop-based Windows applications I’ve developed have had user interfaces created using Windows Presentation Foundation. Having done Windows Forms extensively previously, it’s a definite step up, with great support for data binding user interface elements to data fields. As such it is pretty straightforward to produce user interfaces with a really clear separation of concerns, using a Model-View-ViewModel design.

A recent change request for one of my projects has been to provide dynamic labelling for data fields – different sorts of users store the same data in the same fields but use different terminology, which needs to be reflected in the labelling. Whilst the WPF binding support copes with labels without an issue, I hit a bit of a dead end with a large data grid used in the application where the column headers were required to change. While the data items in the grid can be bound without problems, the column headers just wouldn’t work.

Having failed with the standard binding, I then looked at using FindAncestor to try and point the column header at the ViewModel that was mapped to the data context of the window, again this didn’t work.

After much digging around and experimenting, I eventually came upon this question on Stack Overflow which, while it was about binding column visibility, demonstrated the same basic problem: you can’t bind column properties. The first answer, by Cameron MacFarland, explains why it doesn’t work and gives a good answer that solved the problem for me. It’s not the first question that comes up when searching for the problem on Stack Overflow, but Cameron’s solution is much neater and cleaner than many of the others offered to similar questions.

The issue is that in WPF data grid columns are not part of either the visual or the logical tree, and therefore the usual relative binding mechanisms don’t work. The solution is to bind through a static resource instead, creating a proxy class in the application code that provides what would otherwise come via the relative binding.

The proxy is declared as a resource with its Data property bound to the current DataContext, and the column header is then bound through it via a StaticResource reference, roughly like this (local is the xmlns mapping for the application code, and HeaderText stands in for whatever property the ViewModel exposes):

<DataGrid.Resources>
    <local:BindingProxy x:Key="proxy" Data="{Binding}" />
</DataGrid.Resources>

<DataGridTextColumn Binding="{Binding SomeField}"
                    Header="{Binding Data.HeaderText, Source={StaticResource proxy}}" />

And then you need to create this binding proxy class:

public class BindingProxy : Freezable
{
    protected override Freezable CreateInstanceCore()
    {
        return new BindingProxy();
    }

    public object Data
    {
        get { return (object)GetValue(DataProperty); }
        set { SetValue(DataProperty, value); }
    }

    // Using a DependencyProperty as the backing store for Data.
    // This enables animation, styling, binding, etc...
    public static readonly DependencyProperty DataProperty =
        DependencyProperty.Register("Data", typeof(object), 
        typeof(BindingProxy), new UIPropertyMetadata(null));
}

Although the example Cameron gives is for column visibility, once the binding proxy is set up it works just as well for other column properties such as the column header I was trying to bind.

Keeping Track of SQL Statements in an Entity Framework Code First Project

One of the more annoying omissions from the Microsoft Entity Framework is the out-of-the-box ability to easily trace the SQL statements produced by the framework. In the past we’ve got around that using the Community Entity Framework Provider Wrappers, which include the EFTracingProvider that can log the generated SQL statements to the console or to a file.

However in our most recent project, where we are trying Code First, this runs into a bit of a problem, as the EFTracingProvider uses the ObjectContext whereas Code First is based on the newer DbContext. A trawl around the Internet finds various posts about how to get the providers to work, with this post being probably the most complete example. The big issue though is that they all require some degree of extra code to con the provider into working with the DbContext, and they all ran into various issues with our Code First solution, most often related to the parts where the solution generates the initial SQL Server Compact database.

However with some more digging I came across a solution in the form of the Clutch Diagnostics EntityFramework package on NuGet. The package wraps up another project, the excellent (and free) MiniProfiler, which, whilst designed for use in web projects, is adaptable enough to be used in other ways. Clutch Diagnostics EntityFramework only needs a couple of additional lines in the start-up code for the application, and then an implementation of IDBTracingListener to send the traces somewhere. There are several of these available, but for our purposes it was really easy to write our own. The big advantage is that there is no need to make any changes to the DbContext, so the tracing can be easily and automatically removed for releases.

For an example of how to implement IDBTracingListener, check out my answer on Stack Overflow.