Category Archives: Computers/Technology

Stuff about computers and technology in general.

DDD Scotland

Last weekend was Developer Day Scotland. Much like the original Developer Days based at the Microsoft Campus – many of which I’ve been along to, and which recently relaunched with a new team running them – this was a relaunch by the Scottish Developers. As there were some interesting sessions on the agenda, and since I fancied an excuse to take the West Coast Main Line over Shap and through the Southern Uplands – something I usually only glimpse whilst driving the M6 and A74(M) – I grabbed myself a ticket and headed north.

The conference was held at the Paisley Campus of the University of the West of Scotland. The Reading Developer Days are relatively unusual in being held at a company site, but then few companies have the kind of setup Microsoft have that is suitable. Having said that, the experience of attending a DDD at a university showed up several advantages, not least that there is much more space, and in particular the main hall is large enough to take all the attendees – at Microsoft the prize giving at the end of the day ends up being done with all the attendees stood in the foyer and the organisers stood on the stairs!

This conference I was very much picking sessions to tie in with upcoming work, rather than just sessions that piqued my interest as I have done at other DDD events.

First up I kicked off with Filip W talking about Interactive Development with Roslyn.

Filip started off with a quick recap of the history of C# as a language – enough to make me feel a little old as I can remember my first experiences with the early versions of C# back with Visual Studio 2003. This was to highlight that the way developers worked with C# hasn’t changed much over the years, which is why the new Roslyn compiler is such a game changer.

He started off with a simple feature, dotnet watch, which runs a specific command as soon as a source file changes. It needs the VS2017 project format, but allows a great deal of flexibility in how you work with code.

From there he moved on to Edit and Continue. Edit and Continue has been around for longer than C# – it was an often-used feature of VB6 that was demanded in .Net as people moved across. It has however been problematic, as a general rule of thumb tending to support a version of the language behind the current cutting edge. There have also always been a number of limitations, in particular not being able to change lambda functions at all. Roslyn has changed that, and Edit and Continue has finally caught up with the current C# 7.

For the next part of his talk Filip talked about C# REPL, what is known in VS2017 as the C# Interactive Shell.

The C# REPL didn’t exist before Roslyn, because with C# being a compiled language that kind of interactive functionality just wasn’t possible. With Roslyn, Microsoft has introduced a special mode that relaxes some of the parsing rules to make interactive development possible, including the ability to use syntax that normal C# code would reject as illegal.
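
As a hedged illustration (not one of Filip’s demos), a session in the C# Interactive window or csi.exe looks something like this – statements typed straight in, with a trailing expression echoed back, something a normal .cs file would reject:

```csharp
// A sketch of a C# Interactive session (csi.exe or the VS "C# Interactive" window).
// Each line is a separate submission; a trailing expression without a semicolon
// is compiled, evaluated and its result printed straight back.
> using System.Linq;
> var squares = Enumerable.Range(1, 5).Select(n => n * n).ToList();
> squares.Count
5
> string.Join(", ", squares)
"1, 4, 9, 16, 25"
```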

Interestingly, as Filip explained, each line is still compiled, which gives the interactive window some interesting advantages over interpreted interactive languages, allowing developers to interactively step back through compilation. It also integrates with the currently open solution, allowing developers to manipulate and explore the solution in more complex ways than previously.

C# REPL exists in several forms. It can be run directly from the command line, whilst the C# Interactive window in Visual Studio is a WPF wrapper around the C# REPL that adds extra functionality. There is also an “Execute in Interactive” right-click menu option to immediately run the code under the cursor. The final variation is Xamarin Workbooks, which uses Markdown format text and uses the C# REPL to execute any code blocks in the document. Output can also be sent to the iOS or Android emulators as well as running locally.

Filip finished off by discussing Live Unit Testing, something I’ve already been using in VS2017. This runs tests as the code is actually being written – it doesn’t wait for code to be saved. It does this by hooking in as a Roslyn analyser. It’s straightforward to write a custom analyser yourself, perhaps to enforce coding standards or to guide other developers in how to use a library – indeed some third-party library developers are already including analysers to do just this.
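
To give a flavour of what such an analyser involves, here is a minimal hedged sketch (assuming the Microsoft.CodeAnalysis NuGet packages; the rule itself is a made-up coding standard, not something from Filip’s talk):

```csharp
// A minimal Roslyn analyser: flags any class whose name ends in "Helper",
// standing in for a team coding standard you might want to enforce.
using System.Collections.Immutable;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class NoHelperClassesAnalyzer : DiagnosticAnalyzer
{
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "TEAM001",
        title: "Avoid 'Helper' classes",
        messageFormat: "Class '{0}' ends in 'Helper'; consider a more specific name",
        category: "Naming",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics
        => ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        context.EnableConcurrentExecution();
        // Inspect every named type symbol as the compiler sees it.
        context.RegisterSymbolAction(AnalyzeSymbol, SymbolKind.NamedType);
    }

    private static void AnalyzeSymbol(SymbolAnalysisContext context)
    {
        var symbol = (INamedTypeSymbol)context.Symbol;
        if (symbol.Name.EndsWith("Helper"))
        {
            context.ReportDiagnostic(
                Diagnostic.Create(Rule, symbol.Locations[0], symbol.Name));
        }
    }
}
```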

For session number two, I stayed in the main hall for Jonathan Channon talking about Writing Simpler ASP.Net Core.

Jonathan started by talking about a project he had worked on where speed had been an issue, and where they had tracked the problem down to the large numbers of dependencies being inserted using dependency injection – the inversion of control mechanism used to insert the dependencies was relying on reflection.

The issue is with the way we do SOLID in ASP.Net, so Jonathan used a series of examples showing how we can go from a solution heavily dependent on injecting dependencies and using mocking frameworks for testing, to something that uses no dependency injection or mocking frameworks at all. His examples from the talk are online in a GitHub repository.

What is perhaps most interesting about his final solution is that the technology he is using has been around since the earliest days of C#: delegates and static methods, along with his own Botwin library to simplify building an API, moving to a much more functional programming style than is used in traditional ASP.Net.
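
To give a flavour of the style (a hedged sketch of the general idea, not Jonathan’s actual code or Botwin’s API), a dependency can simply be a delegate that any static method or lambda satisfies, which makes testing a matter of passing in a different lambda:

```csharp
// Instead of injecting an IEmailSender via a container and mocking it in tests,
// the dependency is just a delegate type. Names here are illustrative.
using System;

public delegate void SendEmail(string to, string subject, string body);

public static class RegistrationHandler
{
    // Business logic takes its dependency as a plain parameter.
    public static void Register(string email, SendEmail sendEmail)
    {
        // ... persist the registration ...
        sendEmail(email, "Welcome", "Thanks for signing up");
    }
}

public static class Program
{
    public static void Main()
    {
        // Production wiring: pass the real implementation.
        RegistrationHandler.Register("user@example.com",
            (to, subject, body) => Console.WriteLine($"SMTP send to {to}: {subject}"));

        // A test would simply pass a lambda that records the call -
        // no container and no mocking framework required.
    }
}
```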

Jonathan also highlighted a number of other blogs and videos. Mike Hadlow blogs on much the same technique, highlighting how much less code a functional style produces. Posts from Mark Seemann and Brian Geihsler also talk about how SOLID principles lead to a profusion of dependencies, making codebases difficult to navigate.

Given that so much software these days follows the SOLID principles, this was a challenging different view on how systems should be developed, one of those “everything you think you know is wrong” type sessions.

The next session I attended was Paul Aikman talking about React.js and friends, which was one of my must-attend talks as I was due to start working with React.js for the first time the following week. Paul has posted his slides on his website.

Paul started by taking us through how his company eventually arrived at using React.js, starting out with Webforms augmented by jQuery, then moving through Knockout and Angular 1, before settling on and sticking with React.

He also highlighted how there has been a gradual shift from performing a lot of processing on the server side with minimal client-side functionality, to the current situation where customers expect a rich and responsive experience from websites, meaning clients are now a lot fatter. He also discussed why, having started with Angular 1, his company took the decision to shift to React: the significant changes between Angular 1 and 2 meant they would effectively have to learn a new framework anyway, so they went for what they regarded as the best option at the time and changed to React.

He then gave a rapid overview of how React worked, which I found really useful coming to React for the first time during the following week. He highlighted that given the years of being told to separate logic and presentation with the MVC pattern, one of the biggest surprises with React is that it mixes logic and presentation together.

Paul also highlighted that React only focuses on UI logic, following the principle of doing one thing, and doing it well. There are additional libraries such as Redux and React Router that provide the additional functionality needed to build a web application.

After lunch, I decided to head along to Gary Fleming’s talk on APIs on the Scale of Decades, which was about the problems with APIs, and how developers can write an API that can evolve over time rather than lock you in to poor early decisions. Once again Gary has his talk notes online, which are well worth taking a look at. As a side note, Gary was using an app called Deckset to run his presentation, which takes presentations written in Markdown syntax – considering the amount of time I spent reworking Markdown notes into a Keynote presentation recently, I’ve noted it down as something to look at further.

Gary’s talk was the one that prompted the most heated discussion of any I attended, both at the session and when I came back to the office. He started from the point that designing APIs is hard, but that what most developers want is an API that is both machine and human readable, changeable, testable and documented.

Gary started with a crash course in the concept of affordance, using Mario, and animals in a tree, as examples! His point was that in both cases – different animals using a tree in different ways, and us playing the game – it is knowledge and experience that let us interact with the tree or play the game, and APIs should be similar. He used further examples where knowledge and experience allow us to interact with something – save buttons that look like floppy disks, even though many people now have never even used a floppy disk.

Applying this to our APIs, the mechanisms for controlling them should be included in the information returned by the API; you shouldn’t separate them out.

Looking at a common affordance on an API: if there is a large dataset to return, generally we will page it, and there is a common set of affordances for stepping through the dataset. Going back to the text adventures from the early days of computer games, once again there was a common set of verbs with which to interact with the game.
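
As a rough sketch of what that paging affordance can look like in practice (illustrative types, not from Gary’s talk), the response itself can carry the links that tell a client how to step through the data:

```csharp
// The client discovers how to navigate from the links in the response,
// not from out-of-band documentation. Names here are illustrative.
using System.Collections.Generic;
using System.Text.Json;

public record Link(string Rel, string Href);

public record Page<T>(IReadOnlyList<T> Items, IReadOnlyList<Link> Links);

public static class Example
{
    public static void Main()
    {
        var page = new Page<string>(
            new[] { "order-101", "order-102" },
            new[]
            {
                new Link("self", "/orders?page=3"),
                new Link("next", "/orders?page=4"),   // the affordance for stepping forward
                new Link("prev", "/orders?page=2"),
                new Link("first", "/orders?page=1"),
            });

        System.Console.WriteLine(JsonSerializer.Serialize(page));
    }
}
```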

A good tip Gary gave for thinking about the verbs and nouns to describe these affordances was to think about how you would ask voice assistants like Alexa or Siri to do what you want to do. He also suggested that well designed affordances are effective documentation for an API – if it is clear how to use an API you don’t need extensive documentation.

Gary then moved onto the problem of changing an API.

He used the example of the Ship of Theseus. In this thought experiment a ship over a long life has ongoing repairs such that eventually every single plank of wood and component of the ship has been replaced – is it the same ship? If we use this lens on an API, if over time we are changing our API, is it the same API, when do our changes take it from version 1 to version 2?

Gary’s suggestion was that we shouldn’t be versioning our API at all. To respond to the surprise from the audience he highlighted that we cope with this every day using websites, all of which change their API that we as users interact with. We apply our knowledge of the website and cope with the changes.

Gary then moved on to testing. His first example asked the question: why do we need brakes on a car? The obvious answer is to enable us to stop, but they also allow us to go faster. For the same reason we need tests on an API – to allow us to change it faster.

Fundamentally, if we know that an API will inevitably change, we need to plan for those changes. He suggested that we should be using Consumer Driven Contracts, where the consumers of the API give detailed expectations of how the API should behave, and these then form the basis of the tests against the API. He also highlighted the importance of using fuzzers to ensure the API responds to and handles unexpected data.

His final point provoked the most discussion. Looking back at what he had been discussing, he highlighted that JSON, which is what many APIs currently use, is limited, and suggested that it is something we use by convention rather than because it is the best tool for the job. He suggested that HTML5 was a better option as it offered a richer interface that gave greater affordance to the users of the API. There was a good deal of incredulity from members of the audience, and indeed a similar level from our architect back at the office after the conference. Gary has subsequently said that there are limitations with using HTML5 too, but it was as much about getting people to question why they use JSON as it was about proposing HTML5 as the solution.

My next session was also run by Gary, as I decided to pay a visit to the community room where he was running a Lean Coffee session.

The group came up with a mixed selection of topics to discuss. First off was a topic proposed by Becca Liddle, the organiser of Women in Tech Scotland, who asked about perceptions of women in technology companies. The discussion was wide ranging and covered a number of common issues around how women are treated both by company culture and by male colleagues, and also how male-dominated tech culture can be off-putting to women and minorities. Becca had talked to a number of other women attending the conference earlier in the day and shared some horror stories of their experiences. Certainly food for thought as to how we encourage a more diverse workforce in IT. We also discussed what we were currently learning and broader issues around training, and had a discussion about the impending changes being brought by GDPR, which was in some ways a bit of a relief as it seems everybody is as concerned about it as we are, and nobody feels they will be ready.

Next I went along to a session on Building APIs with Azure Functions by Kevin Smith. Again this was a session I attended because as a team we’re using Azure Functions to try to break up large bits of processing into horizontally scalable functions.

Kevin gave a good overview of the functionality available, highlighting the rapid development and simplified integrations, and also how they can be developed using Visual Studio. Kevin also has a good introduction on his website.
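
For anyone who hasn’t seen one, a minimal HTTP-triggered function along these lines might look like the sketch below (the v2 C# class library model with illustrative names; this isn’t one of Kevin’s examples):

```csharp
// An HTTP-triggered Azure Function exposing a simple GET endpoint.
// Assumes the Microsoft.NET.Sdk.Functions package.
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class GetOrderFunction
{
    [FunctionName("GetOrder")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "orders/{id}")]
        HttpRequest req,
        string id,
        ILogger log)
    {
        log.LogInformation("Fetching order {OrderId}", id);

        // In a real function this would query storage; here we return a stub.
        return new OkObjectResult(new { id, status = "Processed" });
    }
}
```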

He also gave some good insight into the issues, including difficulties debugging functions, and in particular problems with Microsoft breaking Azure Functions. Ironically his final demo was also one that failed on the day – I’m not sure whether that was because of a Microsoft breaking change!

My final talk of the day was Peter Shaw giving an Introduction to Typescript for C# Developers – once again it was a session I attended because we’re using Typescript for the upcoming work and the talk served as a good introduction.

First though, a moan: Peter refused to use the microphone in the hall on the basis that he “had a loud voice”. Now he certainly did speak loudly enough that I, with good hearing, could hear him without a problem. However, experience looking after the sound at church is that when somebody does that there may well be people in the audience who have hearing difficulties, and nine times out of ten, when challenged like this, they won’t feel comfortable drawing attention to themselves as being unable to hear. At church the reason we ask people to use microphones is that however loud people’s voices are, they can’t speak loudly enough to drive the induction loop that many people with hearing difficulties use, and speakers refusing to use the microphone leaves those people feeling discriminated against. Sometimes they will suffer in silence, other times they will complain to the sound crew, but almost never will they complain to the speaker, who carries on in blissful ignorance thinking they have a loud voice and everything is fine. I hate working with a microphone too, so do many other people, but they are there for a reason, so if you’re a speaker and there is a microphone, please use it!

Anyway, moan over, on to the talk. Peter started with an overview of why Typescript is important. More and more applications are moving into the browser; much as Paul Aikman highlighted in his talk on React, we’re moving from applications where much of the functionality is in complicated server-side C# code to applications with richer client-side experiences built in Javascript. Similarly, the growing variety of internet of things devices often use Javascript.

For developers used to the rich type-safe world of C#, Javascript can be a bit of a shock. Typescript is a language designed by Anders Hejlsberg, the designer of C#, to open up Javascript to back end developers used to C#.

As such the syntax is familiar to anyone who is used to C#, and makes the transition to Javascript development relatively painless.

Interestingly Peter highlighted that Typescript is more of a pre-processor than a compiler – ultimately what is produced is valid Javascript, but Typescript acts like a safety net enabling the developer to write enterprise scale Javascript applications.

There are a number of subtle differences however, driven by the differences in Javascript. For example Typescript has union types, which allow for the Javascript ability to change the type of a variable. Undefined and null are still usable, however the Typescript team advise against using them.

There is lots of Typescript support around. Many of the most common Javascript libraries already have Typescript type definition files to allow them to be used from Typescript. Peter referred us to Definitely Typed as a good repository of high quality Typescript definitions.

As an introduction it was a useful talk, giving me – a C# developer taking first steps into Typescript – confidence that it won’t be a difficult transition.

After that we had the closing section of the Developer Day with the traditional raffle and prize giving, and as is traditional (and much to the disappointment of the kids because an Xbox X was one of the prizes) I didn’t actually win anything in the raffle. That was no bad thing however, as I’m not quite sure how I would have got an Xbox back to Reading on the train…

Dear Y-Cam Solutions, You’ve Lost a Customer

I’ve been a Y-Cam customer for a long while. I originally had one of their early Y-Cam Black cameras, which had a pretty technical setup and uploaded to a local FTP server, and I later added one of their newer Y-Cam Knight cameras that included a built-in Micro-SD slot but again needed an FTP server to upload to. Over the years I tried a couple of different cloud services that took pictures and video uploaded by FTP and generated alerts.

Then Y-Cam decided to change direction – including cloud storage for images as part of the deal. They didn’t give an option to migrate existing customers onto the new platform, but instead launched a version of the existing camera with new firmware that hooked up to their HomeMonitor service. After doing the maths to work out how much I’d pay in subscription fees for the existing cameras I made the switch, and later bought one of their newer Y-Cam Evo cameras which similarly hooked up to their online service. Both cameras came with seven days of cloud storage for free forever, with options to upgrade to thirty days for a monthly charge. Subsequently the company has also launched an internet connected alarm system again with a monthly fee. I didn’t really need either of these, and just carried on with the free storage option.

The older cameras have been fine, but the newer Evo was a bit of a disappointment and would quite frequently lose contact with the Y-Cam cloud servers, and Y-Cam made a total hash of launching a new iOS app, so for a long while whilst the cameras would trigger and record video they wouldn’t actually raise alerts. They’ve never managed to handle multiple users of the app properly, so whilst our tadoº heating system app will switch the heating off when the last person leaves and back on when the first person returns, whatever order we leave and come back in, the Y-Cam app can only handle locations from a single phone, leading to a whole load of unnecessary alerts. Y-Cam have also consistently refused to allow their cameras to integrate with any of the burgeoning home automation platforms such as Apple Homekit, Amazon Alexa or Google Home, or even to allow their cameras to be accessed by integration platforms like If This Then That that could let users work around the limitations. However I’ve stuck with Y-Cam, having invested in the cameras and because of the free storage.

Then this week I and all the other Y-Cam users got an e-mail from the company telling us that forever is actually ending in fourteen days, when the company will require us all to pay a monthly fee for each camera, or transfer to one of their higher cost services. There is no option to switch the cameras to using local storage – either pay them a fee or they brick our cameras, rendering them useless. The explanation in the email makes it pretty clear what has happened:

We have endeavoured to provide our cloud camera service and support without making a monthly subscription mandatory. However, it is no longer possible to continue without requiring a monthly fee to cover the cost of providing a service for Y-cam cloud camera users.

Basically their promotional material suggesting that all you need is their free service has worked rather too well, and the whole model was actually dependent on being able to up-sell users to the extended thirty day storage service, or to one of their alarms. The problem now is that rather than dropping the seven day storage for new customers and honouring the promise of seven day storage forever for the existing customers, they’ve decided to charge everybody. The result is a lot of very upset customers – search Twitter for some of the responses.

That left me with a choice: do I pay them, or switch platform? To be blunt, having been early leaders in IP cameras they’ve rather been left behind, and certainly the existing cameras don’t really perform as well as I’d like. The connectivity issues, the inability to track multiple users’ locations to deactivate the cameras and the lousy software updates were just annoying on a free service; given that experience, and the tacit admission in the e-mail that the company is in financial trouble, I don’t have much confidence that if I pay up things will get any better. If they’d actually fixed the location issues our cameras would be uploading a lot less footage to their cloud servers anyway – one of the reasons their cloud storage is costing so much is because the software is poor.

The old cameras still work fine, so I can swap back to using personal cloud storage, and having talked with colleagues who are running other cameras, yesterday I bought a Netatmo Welcome. Unlike Y-Cam, who haven’t really changed much about what their cameras do over the past decade, Netatmo have been innovating with facial recognition, so the camera will only trigger if it sees someone it doesn’t recognise. Also, rather than tie you to their cloud service, Netatmo allow you to load footage to FTP or Dropbox much as Y-Cam did in the past. Apple Homekit integration is already in beta, and they have an extensive selection of actions on If This Then That allowing you to trigger all sorts of home automation from the camera.

The camera turned up today, and is now all set up and working – it wasn’t all plain sailing though, as the automated setup struggled to connect to the Netatmo web service. After some digging around and a good deal of frustration this turned out to be because the camera uses an IPSec VPN to connect to the server. My current router is a Billion BiPAC 8800NL, which has a whole set of Application Layer Gateway options including one for IPSec that was turned on. There are a number of online discussions suggesting that the BiPAC 8800NL Application Layer Gateway IPSec option breaks the Cisco AnyConnect Secure Mobility Client VPN and that the option should be turned off, so I tried turning it off on my router and the Netatmo camera instantly started working.

So after the teething troubles I now have one Y-Cam camera replaced, and if Y-Cam don’t relent and either grandfather existing customers, or issue firmware that allows us to use alternative storage, the other will go soon too. Y-Cam is a great example of a company that had a good start in the IP camera market, but managed to squander it – if they’d innovated maybe I’d have stayed, but paying for a service that had been sold to me as free, no way. Y-Cam’s loss is a gain for Netatmo.

Sorting the Frame Rate Problem Using RasPlex

Back in January I wrote about the problems of trying to get streaming video to play back smoothly from Plex on our Apple TV, or Xbox, or Fire TV, or pretty well anything. Whilst I’d got around the problem by manually switching the Apple TV back and forth, it was still not really a satisfactory solution, and it also didn’t solve the problem with any 24fps movie content. I also found that even well established apps like Netflix suffer the same problem on the Apple TV: when we were watching The Crown, the shots with trains passing the camera had exactly the same jitter problem that was coming up on my content from Plex.

After a bit of research I found that there is only one TV streaming box that can switch frame rates for Plex playback, and that is the NVIDIA Shield, but since that retails for £170 and doesn’t do much more than the Xbox, Apple TV or Fire TV options we already have, I wasn’t too keen.

From looking through the many online discussions of the problem, it seems that people running the now deprecated Plex Home Theater had got around the problem, and people using the built-in Plex clients on smart TVs didn’t have the issue, but getting a new PC or Mac to go in the living room, or replacing our TV, wasn’t really a cheap option either.

Then I came across RasPlex which is an actively developed port of Plex Home Theater to the Raspberry Pi. Like the PC and Mac versions of Plex Home Theater it was able to switch resolution, and with the arrival of the Raspberry Pi 3, the little £33 computer is more than capable of driving 1080p video.

At this point, after my experience setting up flight tracking with a Raspberry Pi, I thought I’d be writing an explanation of setting it up, but RasPlex is really dead easy. The most fiddly bit of the whole process was getting the tiny screws that mount the Raspberry Pi 3 into the case I bought into the equally tiny holes. RasPlex provide installers for Windows, Mac and Linux that will set up the software on a suitable memory card, and then it is as simple as plugging the Raspberry Pi into a power socket and your TV and turning it on. The Raspberry Pi 3 has built-in Wifi that RasPlex detects, and whilst it takes a bit of time when first booted to cache data from your Plex server, once it is up and running it is fine.

To get the resolution changes you’ll need to dig down into the advanced video settings, because by default RasPlex will stick to whatever resolution is set for the user interface, much like the commercial streaming boxes. However once that setting was changed, whatever video I threw at it worked fine on our TV – a slight pause as the TV switched frame rate and off it went. The other nice plus was that even with our seven-year-old Panasonic TX-L32S10 we didn’t need a separate remote for the Raspberry Pi, as the TV has HDMI-CEC support so we can navigate the RasPlex user interface with the regular TV remote.

There are a couple of downsides. Firstly, unlike the Apple TV, the Raspberry Pi doesn’t have a sleep mode – the power save options on RasPlex will shut the whole Raspberry Pi down, at which point you have to cycle the power to wake it up again. Also, the Raspberry Pi didn’t seem able to drive the picture through the cheapie HDMI switcher we use to connect the increasing number of HDMI devices we have to the TV.

However even after buying the Raspberry Pi, a suitable case with heatsinks for the processors (which potentially get rather a workout), a memory card and a power supply, I still ended up with a Plex box for less than £60, and one that plays video significantly better than any of the established players by switching the TV to the correct frame rate.

That of course just leaves one final question, if a £33 box can do it, why can’t Apple, Roku, Amazon and all the rest do the same thing? Apple and Amazon especially are selling content that would benefit from a switchable box, and yet none of them do it, and instead ship boxes that make their content look rubbish.

How Do I Unit Test Database Access?

If as a .Net developer you’re serious about making sure your code is properly tested, one of the biggest problem areas has always been around database code. Even with the more widespread adoption of Object Relational Mapping (ORM) frameworks that abstract some of the complexity of database access, unit testing code that accesses the database is still difficult.

Over the years there have been various strategies to unit test database code that developers have come up with, and at various times I’ve seen projects that use all of them. I’ve also seen examples of large projects where you could see several of these techniques used in different parts of the code base.

The simplest technique – which isn’t really a technique as such – is just to test with the real database. Often this will be a special instance of the database created by the test run, into which test data is loaded. The biggest argument against this idea is that it isn’t really a unit test and should more correctly be considered an integration test. The biggest practical problem is that using the real database is pretty slow, and that often leads to compromises to allow the test suite to run in a reasonable time frame – either reducing the number of tests, or not starting each test with a clean database configuration. Reducing the tests increases the risk that important conditions may not be properly tested, whilst not cleaning the database can lead to unexpected interactions between different tests. However in situations where you have complex logic in stored procedures in the database, sometimes this is the only way you can test it.

If you are practising Test Driven Development, where you are running unit tests repeatedly, having a unit test suite that takes even just minutes to run is a real problem.

A step on from using the real database is to use an alternative that is faster, for example an in memory database. This idea has come to more prominence recently as Microsoft has added an in memory database provider to the latest version of their current ORM, Entity Framework Core, although third-party in memory options such as Effort have been around for a while. In both the official offering and the third-party options they are drop-in providers that can run the same Entity Framework code, but against an in memory store instead. Purists will argue that even using an in memory provider this is still really an integration test rather than a unit test – you are merely replacing the dependent database rather than removing it. However to a software developer it can be an attractive option compared to the effort required in stubbing, mocking or faking a full ADO.Net provider. The other criticism of this technique is that because a different type of database is being used from the live system, there is the risk of behavioural differences between it and the real database. Having said that, since Microsoft are highlighting testing as a benefit of their new in memory provider, hopefully those will be few and far between.
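
As a hedged sketch of what that looks like with the Entity Framework Core in-memory provider (assuming the Microsoft.EntityFrameworkCore.InMemory package, an illustrative Blog model and xUnit for the test):

```csharp
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class Blog
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class BloggingContext : DbContext
{
    public BloggingContext(DbContextOptions<BloggingContext> options) : base(options) { }
    public DbSet<Blog> Blogs { get; set; }
}

public class BlogQueryTests
{
    [Fact]
    public void Returns_only_matching_blogs()
    {
        // Each test uses its own named in-memory database, so tests stay isolated.
        var options = new DbContextOptionsBuilder<BloggingContext>()
            .UseInMemoryDatabase(databaseName: nameof(Returns_only_matching_blogs))
            .Options;

        using (var context = new BloggingContext(options))
        {
            context.Blogs.Add(new Blog { Name = "Tech" });
            context.Blogs.Add(new Blog { Name = "Cooking" });
            context.SaveChanges();
        }

        using (var context = new BloggingContext(options))
        {
            var result = context.Blogs.Where(b => b.Name.StartsWith("T")).ToList();
            Assert.Single(result);
        }
    }
}
```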

Moving on from using an in memory database, the next option, at least until Entity Framework 6 came along, was to build a fake context object that could be used for testing. I’m not going to go into a detailed explanation of how to do this, as there are a number of good tutorials around, including this one from a Microsoft employee. The basic idea is that you construct a complete fake context object that gets injected into the code being tested instead of the real database context. Although you generally only need to construct the fake context once, it is comparatively a lot of code, so it is pretty obvious why developers are delighted by the in memory provider included in Entity Framework Core. If you don’t need the full abilities of the context, you do have the option of only partially implementing the fake. The main criticism of using fakes is that again you’re running the risk of behavioural differences, this time because you’re using a different type of context – under the covers you’re using the Microsoft LINQ to Objects classes to talk to the fake object, whereas the real database code will be using LINQ to Entities. Put simply, whilst the syntax will be the same, you’re not exercising the actual database access code you will be using in the live system; you’re relying on LINQ to Objects and LINQ to Entities behaving in a similar fashion.

With the arrival of Entity Framework 6, there were some changes that made it a lot easier to use a mocking framework instead of fake objects. Microsoft have a good guide to testing using a mocking framework in their Entity Framework documentation, alongside a revised guide to using a fake object as a test double. The amount of code to fully mock a context is similar to a faked context, but again if you only need part of the functionality of the context in your tests, you only need to mock the parts you use. As with any mocked object it’s important that your mock behaviour matches the real object you’re trying to simulate, and that can be pretty complex with an object like a database context. A particularly problematic area is the behaviour of SaveChanges, where some fairly subtle bugs can creep in – code that passes a test but doesn’t work in production if, for example, you test by just expecting the SaveChanges method to be called.
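
A hedged sketch of that mocking approach, along the lines of Microsoft’s guide, might look like this (Blog, BloggingContext and BlogService are illustrative types; Moq and xUnit are assumed):

```csharp
using System.Data.Entity;
using Moq;
using Xunit;

public class Blog
{
    public int BlogId { get; set; }
    public string Name { get; set; }
}

public class BloggingContext : DbContext
{
    // Members must be virtual so Moq can override them.
    public virtual DbSet<Blog> Blogs { get; set; }
}

public class BlogService
{
    private readonly BloggingContext _context;
    public BlogService(BloggingContext context) => _context = context;

    public void AddBlog(string name)
    {
        _context.Blogs.Add(new Blog { Name = name });
        _context.SaveChanges();
    }
}

public class BlogServiceTests
{
    [Fact]
    public void AddBlog_adds_and_saves()
    {
        var mockSet = new Mock<DbSet<Blog>>();
        var mockContext = new Mock<BloggingContext>();
        mockContext.Setup(c => c.Blogs).Returns(mockSet.Object);

        new BlogService(mockContext.Object).AddBlog("ADO.NET Blog");

        // Beware: this only checks that the methods were called - exactly the kind
        // of test that can pass while the production code is still wrong.
        mockSet.Verify(m => m.Add(It.IsAny<Blog>()), Times.Once());
        mockContext.Verify(m => m.SaveChanges(), Times.Once());
    }
}
```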

That takes us on to a collection of other techniques that are more about isolating the database access code to make it easier to test.

The long standing way to do this is based around the Repository and Unit of Work patterns. There are a variety of ways you can implement these; for example you don’t necessarily need the Unit of Work and could just use the Repository pattern alone. There is a good Microsoft tutorial on the pattern using Entity Framework 5. The basic idea is to wrap the database code in the repository, and then mock the repository for subsequent tests, with the database code in the repository consisting of simple create, read, update and delete (CRUD) functions. Whilst this was a common pattern before Entity Framework, and persisted with early versions of Entity Framework that were difficult to mock or fake, it has largely gone out of fashion – not least because the Entity Framework DbSet is an implementation of the same repository pattern, so it is totally unnecessary to create an additional repository layer for mocking when you can just mock or fake DbSet itself.
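
For completeness, a minimal sketch of the repository style being described (purely illustrative names; as the paragraph says, mocking or faking DbSet directly now largely removes the need for it):

```csharp
// Business code depends on IBlogRepository; the EF-backed implementation is
// swapped for an in-memory fake (or a mock) in tests.
using System.Collections.Generic;

public class Blog
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public interface IBlogRepository
{
    IEnumerable<Blog> GetAll();
    void Add(Blog blog);
}

// Test double: a fake repository backed by a list - no database involved.
public class InMemoryBlogRepository : IBlogRepository
{
    private readonly List<Blog> _blogs = new List<Blog>();
    public IEnumerable<Blog> GetAll() => _blogs;
    public void Add(Blog blog) => _blogs.Add(blog);
}
```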

The other method that has been used for a long while is a traditional data access layer. The actual database code is hidden behind a series of method calls that take parameters and return data, and which can be easily mocked. Rather than being generic, the code inside each of those methods is for a particular query, and whilst that will be fairly simple database code that can be easily tested, there will be a single function for each query. There are good ways and bad ways of doing this – for example I have seen projects with vast library classes containing all of the queries used by the business logic, a bit of a maintenance nightmare at times. Probably a better design, and more in keeping with SOLID principles, is to have smaller classes more closely related to how the queries are being used. Either way there is a big overhead in having lots of query functions together in a big data access layer.

Data access layers have also started to go out of fashion, however some of the principles behind them can still be applied. The single responsibility principle part of SOLID can be interpreted as suggesting that even if you don’t have a formal data access layer, you shouldn’t be putting database access code in the same method as business logic. The business logic should take and return generic collections, rather than retrieving data and working directly on DbSets all in one method – you really shouldn’t have one method that queries data, manipulates it and writes it back. That application of the single responsibility principle gives the separation of concerns that makes your code easier to test. The business logic can be tested using simple unit tests, rather than complicated tests that prime an in memory database or mock, call a function and then examine the database contents to see what has happened. The database access methods are in turn a lot simpler, often just retrieving data, and can easily be supported by a simple mock of the part of the database context being used – a full blown in memory database, fake or mock context isn’t needed.
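
A hedged sketch of that separation (illustrative names only): the business rule below works on a plain collection, so a test just needs a List&lt;T&gt; rather than any kind of database context.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

public class Order
{
    public decimal Total { get; set; }
    public bool IsPriority { get; set; }
}

public static class OrderRules
{
    // Pure business logic working on a generic collection - no DbSet in sight.
    public static IReadOnlyList<Order> FlagPriorityOrders(IEnumerable<Order> orders)
    {
        var flagged = orders.ToList();
        foreach (var order in flagged.Where(o => o.Total > 1000m))
        {
            order.IsPriority = true;
        }
        return flagged;
    }
}

public static class Program
{
    public static void Main()
    {
        // A "test" in miniature: prime a plain list, call the rule, check the result.
        var orders = new List<Order> { new Order { Total = 250m }, new Order { Total = 1500m } };
        var result = OrderRules.FlagPriorityOrders(orders);
        Console.WriteLine(result.Count(o => o.IsPriority));   // prints 1
    }
}
```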

In conclusion, unit testing code that accesses a database has always been difficult, and whilst some of the problems have been addressed over the years, it is still not easy. However if you are following good design practices such as DRY and SOLID, the occasions when the only way to test something is via a database context should be fairly minimal. If you are finding that you need to do that, it is well worth looking again at whether you have inadvertently violated the single responsibility principle. Even though the advent of the in memory database makes context-based testing easier, that doesn’t mean you should be using it everywhere. A simple unit test of a loosely coupled method will always be faster than testing a more complex method, even with an in memory database. It is well worth considering whether your design would be improved by not coupling your business logic directly to your database access code.

Can You Just Take a Look at this Legacy Code?

As a programmer there are a number of books which people will tell you are must-reads for any professional – a list which does change over time as programming techniques evolve. However the books are fairly consistent in that they all tend to be written from the point of view of a green field system, starting from first principles and ensuring you build a maintainable system.

But is that realistic? You might be lucky and get in at the beginning of a brand new startup, or you could land a job at a consultancy where you’re always writing bespoke code, but for most programmers an awful lot of their career will be dealing with the joys of legacy code.

It may be that you come into an established company with many years of development and thousands of lines of code debt and changing technologies.

Alternatively you could be handed the thing programmers often dread: the “business developed application”. Often these are mired in corporate politics as well, with strained relations between the business area that developed the application and the IT department – indeed in one company I worked for there was a semi-secret development team in one part of the business, formed as a result of the IT department saying no one too many times! In most cases these business developed applications are produced by people whose strength is in understanding how the business works, but who are inexperienced as developers, which often produces a double hit of problems: the business logic is usually poorly documented, and the code is also of poor quality.

Other examples I’ve come across are prototype systems that have almost accidentally ended up as critical systems, and – something that happens surprisingly often – a big company taking on responsibility for a third party product, either because they don’t want to upgrade to a supported version, or because the third party company is abandoning the product altogether.

The common factor in all of these is that you’re taking on a codebase that is less than ideal, so all those coding books that assume you’re starting from scratch aren’t overly useful. All the benefits of test driven development protecting you when you make changes are really not much good when you have incomplete or totally missing tests, and it’s incredibly difficult to find your way around a badly structured code base if you’re used to textbook structures and accurate documentation.

What do you do? Edit and pray it works? Rewrite the whole system from scratch?

All of which brings me back to where I started, and the excellent Working Effectively with Legacy Code by Michael Feathers. The book starts from the entirely pragmatic position that you are going to be working on dodgy code a lot of the time, and if you don’t want to make it worse you need to get it sorted out. It is also realistic in that it gives you techniques to gradually improve the code as a business will rarely be able to spare the time and resources to totally rewrite something.

The really basic concept, around which a lot of the more complicated techniques are built, is that whilst you can’t bring all of a codebase under test immediately, you can grow islands of properly tested code within it that gradually spread out as you work on other parts of the codebase over time. To create these islands you need to separate them from the rest of the codebase, which is where a lot of the complexity comes from, but Feathers offers a variety of techniques for making those separations. The ultimate aim is that as much as possible of your legacy codebase is brought under test, and that it conforms as far as possible to modern principles like DRY and SOLID, whilst at the same time allowing you to deliver the changes and improvements your users or customers are demanding.

I hesitate to say that any programming book is an essential read, but if like most programmers you’re faced with a chaotic legacy codebase, Working Effectively with Legacy Code is a book that certainly gives you a lot of practical advice on how to make things better.

The TV Frame Game

Through another one of the numerous techie competing-standards stories (the TL;DR summary being that the NTSC TV standard was considered a bit rubbish on this side of the pond, and as a result in Europe we developed two alternative standards, PAL and SECAM), in the UK and the USA we ended up with two somewhat incompatible TV systems. In the USA they had TV pictures with a vertical resolution of 480 lines playing at a frame rate of 30 frames per second, whilst on this side of the Atlantic we were watching a higher resolution 576 line picture, but playing at 25 frames per second. The TV companies had ways of converting pictures between the two standards, and eventually we got home video recorders that could play tapes recorded in the other standard, and TVs that could cope with both – indeed these days in the UK you’ll find most DVD or BluRay players and TVs will quite happily switch between the European 50Hz standards and the North American 60Hz, whatever the standard of the material put into the machine.

When the HD standards came around there seemed to be general agreement across the world, and everybody settled on 720 lines or 1080 lines for high definition pictures and all seemed right with the world… Or maybe not…

That brings us to me watching a video last night which involved a number of shots of trains going left to right or right to left across the screen, and a really annoying judder as the trains went past. I was watching an HD video file playing back on our Apple TV through Plex. Thinking it was a problem with the Apple TV I tried it through Plex on our Xbox One – same problem – and watching the raw file on the desktop, same problem again. Looking at the file, it had come from a UK production company and was encoded in 1080p at a frame rate of 25 frames per second – a perfectly standard UK file. So I took a look at the Apple TV. Digging into the settings I had the picture standard set to Auto, and further down it said it had automatically set itself to 1080p 60Hz. There was also an option to specify which picture format to use, with a 1080p 50Hz option, so I switched that over, watched the file again, and away went the judder; switch back to Auto and the Apple TV would decide to switch to 1080p 60Hz.

The basic problem seems to be that unlike DVD players, video recorders or BluRay players, the latest generation of devices like the Apple TV or Xbox, even though many are capable of switching, automatically go for 1080p 60Hz and then behave as if the TV they’re connected to is a dumb panel that can’t cope with any other standard; as a result they try to convert video at other frame rates in software. The judder I could see on the video is the result of the Apple TV or Xbox trying to show 25 frames per second on an output that wants 30 frames per second, so on smooth movements you get judder because 20% of the frames in any one second of video are being shown twice. Knowing my TV is a European model that can cope with a 50Hz picture I can switch the Apple TV over and it works fine (not so for the Xbox incidentally), but then if I watch a North American video at 30 frames per second the Apple TV is locked at 50Hz and has much the same problem trying to show 30 frames in a period when it only has room for 25.

At this point the cinema purists are going to point out that there is another very common frame rate, which is 24 frames per second – the frame rate that most movies are made at, and many BluRays are now released at that standard because a lot of TV sets these days can cope with it. So what do the Apple TV, Xbox and other TV streamer boxes do? They try to show those 24 frames at whatever frame rate the box is currently set to, and have exactly the same problem.

Going through my digital videos I have a real mixed bag. Most of the UK stuff is 25 frames per second, some that has come off film is 24 frames per second, and US stuff is mostly 30 frames per second. Looking at home videos I have the same mixed bag, primarily because even though they’re all UK-bought devices, the cameras and phones I’ve had over the years don’t always produce UK standard video – for example iPhones using the standard camera software will consistently record in 60Hz standards; you have to resort to apps like Filmic to get the phone to record in European 50Hz standards, or even 24 frames per second if you want to work with cinema standards.

So even though the world has agreed on the size of a picture, there is still no agreement over how many of those pictures are shown per second. Most of our digital streaming boxes either only work at the US 60Hz standard (the earliest Sky Now boxes were stuck on 60Hz) or are switchable but, thanks to the software, difficult to switch – on the Apple TV you have to go rummaging in the settings, and on the Xbox you effectively have to con it into thinking your TV can only do 50Hz pictures before it will switch – with the devices doing a second rate job when your TV is quite often perfectly capable of playing things back correctly.

Having one standard is never going to work as we’ll still have vast amounts of archive content at the older frame rates, so for the moment it would really help if the digital streamer manufacturers actually started acknowledging that there are a variety of standards – even your average US consumer who doesn’t have any 50Hz content is going to notice glitching if they watch a movie. We’ve had DVD players and video recorders that could switch for years, so why has the new tech taken such a massive step backwards?

Featured image old tv stuff by Gustavo Devito

Crowdsourced Flight Tracking

Earlier this week I had to head off to the airport to pick my wife up as she came back in on a transatlantic flight. As I’ve often done before I kept an eye on the flight using Flightradar24, and the Plane Finder app on my phone and the Apple TV.

Taking a look at the Flightradar24 site, the Add Coverage link caught my eye – not that I thought there would be many people with a spare radar station sitting around! However after a bit of reading it transpired that I didn’t need one.

Whilst in the past the online flight sites have taken data from publicly available air traffic control feeds, they are now increasingly getting data picked up from the ADS-B transmitters that more and more aircraft carry. Essentially each ADS-B equipped plane broadcasts a signal which encodes the location of the plane along with other details about its flight such as heading and altitude. The big advantage of the system is improving safety in areas that lack radar coverage, but since the signals are broadcast on a frequency similar to DVB-T TV pictures, it also means that a simple USB DVB-T receiver plugged into a home computer can be used to pick up the signals from the aircraft as well.

Given the really low cost of entry, sites like FlightRadar24, PlaneFinder and FlightAware are significantly augmenting their data by offering their premium accounts for free to home users who supply data. You can use an existing PC, Mac or Linux box, but in order to get the account you need the computer to be running all the time, so a small low power computer like a Raspberry Pi is a much better option, and that is what all three sites suggest. Whilst you could spend a large amount of money on a fancy roof mounted aerial, from my reading I figured that since we are located under the approach to Heathrow airport, even the basic general purpose aerial that comes with the USB receiver would be enough to pick up a few planes. Additionally, if I could get the Raspberry Pi feeding all three sites – and there were plenty of people online saying that you could – I could get three premium accounts even for the potentially pretty small number of flights I could pick up.

So I drew up a shopping list.

First off, not having a USB DVB-T receiver, I needed something to pick up the signals. I opted for the NooElec NESDR Mini 2 USB RTL-SDR & ADS-B Receiver Set, which seemed to have pretty good reviews and which a number of people online were using – it also only cost £17.95 and included a basic aerial. I also needed a Raspberry Pi, as although we have one it’s part of the kids’ Kano, so not something I can take over. I did look at whether to get the new Raspberry Pi 3, but since I didn’t need the extra speed or wifi I saved £6 and got the older Raspberry Pi 2 instead for £25.99. I also picked up a power supply, Raspberry Pi case and memory card, and had all the bits for my DIY aircraft tracker for under £100.

Setting up the Raspberry Pi is pretty straightforward: you just need to grab a copy of the Raspbian operating system, install it onto the memory card and off you go. In fact you don’t even need a screen, keyboard and mouse if you have a bootable copy of the OS on the card, as by default it has SSH access enabled, so as long as it is plugged into a network you can access the Pi once it has booted up.

Once it was up and running, I initially opted for FlightRadar24, for no other reason than that was the site I’d initially read about feeding data to. That proved to be a bit of a mistake from the point of view of feeding all three sites simultaneously. The basic idea is that you use the software from one site as the master feed, and then hook the other two bits of feeder software up to the data feed that the master exposes. The trick seems to be to use the FlightAware software as the master feed and then tell the other two bits of software that they are talking to a receiver on port 30005 of localhost, at which point all three play nicely together. Once I’d swapped around and reinstalled the FlightAware software first, I pretty soon had all three up and running and feeding data to their respective sites, alongside giving me information locally. If you start with the FlightAware software and a brand new SD card, you can also opt for their custom build of Raspbian with PiAware preinstalled, which makes life a bit easier and is configured for optimal performance as a scanner.

There is a bit of variation between the software experiences. FlightAware is very much a command line setup, and connects to an account via username and password. Their local software is a simple page with a map of the planes and a list of their details, but they have a much better experience on their main website, with probably the most detailed statistics on the performance of your receiver of the three sites. FlightRadar24 has a slightly better process, though still command line based, and when it is up and running the local website is very basic, just showing a list of the planes; you can however change the settings from the local website and restart without resorting to the command line. Plane Finder is by far the nicest local installation, with an easy to use web based setup and a nice local website that gives you a visualisation of the planes your receiver is picking up, detailed log files, statistics on the performance of your receiver and the communication link with the internet, along with a web based settings page to reconfigure the software. Whilst all the sites give detailed step-by-step instructions that can guide a novice through, Plane Finder is by far the friendliest user experience.

So the big question is how well does it work?

I wasn’t expecting much from my little aerial sat on top of a filing cabinet in my office, but I was amazed. It’s certainly not picking up planes 200 miles away as a roof mounted aerial would, but it is picking up lots of flights within 50 miles, and one or two from a lot further away. It certainly picks up flights going in and out of Heathrow, especially when they’re coming in over Reading, along with quite a few flights over towards Gatwick and flights from further afield passing over the UK. In the forty-eight hours I’ve had it running it has picked up 2,859 distinct aircraft! If you compare my local radar view with the main site pages it’s clear I’m not getting everything, but I’m impressed by how many I’m picking up considering the cheap aerial and kit I’m using. Certainly if I wanted to spend a lot more money and get a proper roof mounted aerial I could probably track a load more.

So if you fancy a simple little project that’s not going to break the bank, and also want to contribute data to crowdsourced aircraft tracking, I can certainly recommend building a Raspberry Pi based ADS-B tracker – as a bonus you get access to all the fancy premium features on three of the main aircraft tracking sites. If you live in a populated area you’ll really only be adding resilience to the network, but if you’re in a more rural area there are definite gaps in coverage – there is a good map showing coverage on the FlightAware site. Aside from that, it’s also really quite fun if you’ve got kids to be able to point out a plane going overhead and say where it is travelling from or to.