Category Archives: Computers/Technology

Stuff about computers and technology in general.

Sorting the Frame Rate Problem Using RasPlex

Back in January I wrote about the problems of trying to get streaming video to play back smoothly from Plex on our Apple TV, Xbox, Fire TV, or pretty well anything else. Whilst I’d got around the problem by manually switching the Apple TV back and forth, it was still not really a satisfactory solution, and it didn’t solve the problem with 24fps movie content at all. I also found that even well-established apps like Netflix suffer the same problem on the Apple TV: when we were watching The Crown, the shots with trains passing the camera had exactly the same jitter that was showing up in my content from Plex.

After a bit of research I found that there is only one TV streaming box that can switch frame rates for Plex playback – the NVIDIA Shield – but since that retails for £170 and doesn’t do much more than the Xbox, Apple TV or Fire TV options we already have, I wasn’t too keen.

From looking through the many online discussions of the problem, it seems that people running the now deprecated Plex Home Theater had got around the problem, and people using the built-in Plex clients on smart TVs didn’t have the issue, but getting a new PC or Mac for the living room, or replacing our TV, wasn’t really a cheap option either.

Then I came across RasPlex, an actively developed port of Plex Home Theater to the Raspberry Pi. Like the PC and Mac versions of Plex Home Theater it is able to switch the output resolution and refresh rate, and with the arrival of the Raspberry Pi 3, the little £33 computer is more than capable of driving 1080p video.

At this point, after my experience setting up flight tracking with a Raspberry Pi, I thought I’d be writing an explanation of setting it up, but RasPlex is really dead easy. The most fiddly bit of the whole process was getting the tiny screws that mount the Raspberry Pi 3 into its case lined up with the equally tiny holes. RasPlex provide installers for Windows, Mac and Linux that will set up the software on a suitable memory card, and then it is as simple as plugging the Raspberry Pi into a power socket and your TV and turning it on. The Raspberry Pi 3 has built-in Wi-Fi that RasPlex detects, and whilst it takes a bit of time when first booted to cache data from your Plex server, once it is up and running it is fine.

To get the refresh rate changes you’ll need to dig down into the advanced video settings, because by default RasPlex will stick to whatever resolution is set for the user interface, much like the commercial streaming boxes. However, once that setting was changed, whatever video I threw at it worked fine on our TV – a slight pause as the TV switched frame rate and off it went. The other nice plus was that even with our seven year old Panasonic TX-L32S10 we didn’t need a separate remote for the Raspberry Pi: since the TV has HDMI-CEC support we can navigate the RasPlex user interface with the regular TV remote.

There are a couple of downsides. Firstly, unlike the Apple TV, the Raspberry Pi doesn’t have a sleep mode – the power save options in RasPlex will shut the whole Raspberry Pi down, at which point you have to cycle the power to wake it up again. Secondly, the Raspberry Pi didn’t seem able to drive the picture through the cheapie HDMI switcher we use to connect our increasing number of HDMI devices to the TV.

However, even after buying the Raspberry Pi, a suitable case with heatsinks for the chips that potentially get a bit of a workout, a memory card and a power supply, I still ended up with a Plex box for less than £60 – and one that plays video significantly better than any of the established players, simply by switching the TV to the correct frame rate.

That of course just leaves one final question: if a £33 box can do it, why can’t Apple, Roku, Amazon and all the rest do the same thing? Apple and Amazon especially are selling content that would benefit from a switchable box, and yet none of them do it, instead shipping boxes that make their content look rubbish.

How Do I Unit Test Database Access?

If, as a .NET developer, you’re serious about making sure your code is properly tested, one of the biggest problem areas has always been database code. Even with the widespread adoption of Object Relational Mapping (ORM) frameworks that abstract away some of the complexity of database access, unit testing code that accesses the database is still difficult.

Over the years developers have come up with various strategies to unit test database code, and at various times I’ve seen projects that use all of them. I’ve also seen large projects where several of these techniques were used in different parts of the same code base.

The simplest technique – which isn’t really a technique as such – is just to test with the real database. Often this will be a special instance of the database created by the test run, into which test data is loaded. The biggest argument against this idea is that it isn’t really a unit test and should more correctly be considered an integration test. The biggest practical problem is that using the real database is pretty slow, and that often leads to compromises to allow the test suite to run in a reasonable time frame, either by reducing the number of tests, or by not starting each test with a clean database configuration. Reducing the tests increases the risk that important conditions may not be properly tested, whilst not cleaning the database can lead to unexpected interactions between different tests. However, in situations where you have complex logic in stored procedures in the database, sometimes this is the only way you can test it.

If you are practising Test Driven Development, where you are running unit tests repeatedly, having a unit test suite that takes even just minutes to run is a real problem.

A step on from using the real database is to use an alternative that is faster, for example an in-memory database. This idea has come to more prominence recently as Microsoft has added an in-memory database provider to the latest version of their current ORM, Entity Framework Core, although third-party in-memory options such as Effort have been around for a while. Both the official offering and the third-party options are drop-in providers that work with the same Entity Framework code, but simply store the data in memory instead. Purists will argue that even with an in-memory provider this is still really an integration test rather than a unit test: you are merely replacing the dependent database rather than removing it. However, to a software developer it can be an attractive option compared to the effort required in stubbing, mocking or faking a full ADO.NET provider. The other criticism of this technique is that because a different type of database is being used from the live system, there is the risk of behavioural differences between it and the real database. Having said that, since Microsoft are highlighting testing as a benefit of their new in-memory provider, hopefully those will be few and far between.
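As an illustration of how little ceremony is involved, here’s a minimal sketch of a test using the EF Core in-memory provider – the Blog and BloggingContext types are invented for the example rather than taken from any real project:

```csharp
// Minimal sketch of a test against the EF Core in-memory provider.
// Blog and BloggingContext are illustrative names only.
using System.Linq;
using Microsoft.EntityFrameworkCore;
using Xunit;

public class Blog
{
    public int BlogId { get; set; }
    public string Name { get; set; }
}

public class BloggingContext : DbContext
{
    public BloggingContext(DbContextOptions<BloggingContext> options) : base(options) { }
    public DbSet<Blog> Blogs { get; set; }
}

public class BlogQueryTests
{
    [Fact]
    public void Returns_blogs_in_name_order()
    {
        // Each test gets its own named in-memory store, so tests don't interact.
        var options = new DbContextOptionsBuilder<BloggingContext>()
            .UseInMemoryDatabase(databaseName: "Returns_blogs_in_name_order")
            .Options;

        using (var context = new BloggingContext(options))
        {
            context.Blogs.Add(new Blog { Name = "Zebra" });
            context.Blogs.Add(new Blog { Name = "Aardvark" });
            context.SaveChanges();
        }

        using (var context = new BloggingContext(options))
        {
            var names = context.Blogs.OrderBy(b => b.Name).Select(b => b.Name).ToList();
            Assert.Equal(new[] { "Aardvark", "Zebra" }, names);
        }
    }
}
```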

Moving on from the in-memory database, the next option, at least until Entity Framework 6 came along, was to build a fake context object that could be used for testing. I’m not going to go into a detailed explanation of how to do this, as there are a number of good tutorials around, including this one from a Microsoft employee. The basic idea is that you construct a complete fake context object that gets injected into the code being tested instead of the real database context. Although you generally only need to construct the fake database context once, it is comparatively a lot of code, so it is pretty obvious why developers are delighted with the in-memory provider included in Entity Framework Core. If you don’t need the full abilities of the context, you do have the option of only partially implementing the fake. The main criticism of using fakes is that you’re again running the risk of behavioural differences, this time because you’re using a different type of context: under the covers you’re using the LINQ to Objects classes to talk to the fake object, whereas the real database code will be using LINQ to Entities. Put simply, whilst the syntax will be the same, you’re not exercising the actual database access code you will be using in the live system – you’re relying on LINQ to Objects and LINQ to Entities behaving in a similar fashion.
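To give a flavour of what’s involved, here is a cut-down, hypothetical fake of a single entity set in the pre-EF6 IDbSet style – a full fake context would expose one of these for each entity type, which is where the volume of code comes from:

```csharp
// A cut-down fake of a single entity set in the pre-EF6 IDbSet style.
// Illustrative only - a fake context would expose one of these per entity.
using System;
using System.Collections;
using System.Collections.Generic;
using System.Collections.ObjectModel;
using System.Data.Entity;
using System.Linq;
using System.Linq.Expressions;

public class FakeDbSet<T> : IDbSet<T> where T : class
{
    private readonly HashSet<T> _data = new HashSet<T>();
    private readonly IQueryable<T> _query;

    public FakeDbSet()
    {
        // Queries against the fake run via LINQ to Objects over this set,
        // which is exactly where the behavioural differences can creep in.
        _query = _data.AsQueryable();
    }

    public T Add(T entity) { _data.Add(entity); return entity; }
    public T Attach(T entity) { _data.Add(entity); return entity; }
    public T Remove(T entity) { _data.Remove(entity); return entity; }

    public T Create() { return Activator.CreateInstance<T>(); }
    public TDerived Create<TDerived>() where TDerived : class, T
    {
        return Activator.CreateInstance<TDerived>();
    }

    // A partial implementation is often enough - override Find per entity
    // type if the code under test actually needs key lookups.
    public T Find(params object[] keyValues)
    {
        throw new NotImplementedException("Override Find for key lookups.");
    }

    public ObservableCollection<T> Local
    {
        get { return new ObservableCollection<T>(_data); }
    }

    public Type ElementType { get { return _query.ElementType; } }
    public Expression Expression { get { return _query.Expression; } }
    public IQueryProvider Provider { get { return _query.Provider; } }

    public IEnumerator<T> GetEnumerator() { return _data.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return _data.GetEnumerator(); }
}
```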

With the arrival of Entity Framework 6 there were changes that made it a lot easier to use a mocking framework instead of fake objects. Microsoft have a good guide to testing with a mocking framework in their Entity Framework documentation, alongside a revised guide to using a fake object as a test double. The amount of code needed to fully mock a context is similar to a faked context, but if you only use part of the functionality of the context in your tests, you only need to mock the parts you actually use. As with any mocked object, it is important that your mock behaves the same way as the real object you’re trying to simulate, and that can be pretty complex with an object like a database context. A particularly problematic area is the behaviour of SaveChanges, where some fairly subtle bugs can creep in – code can pass a test but not work in production if, for example, the test just verifies that the SaveChanges method was called.
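The heart of the mocking approach, using Moq, looks roughly like this – again Blog and BloggingContext are stand-in names, and note that the DbSet property has to be virtual for Moq to override it:

```csharp
// A sketch of mocking an EF6 context with Moq, along the lines of the
// Microsoft testing guide. Blog and BloggingContext are illustrative names.
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using Moq;
using Xunit;

public class Blog
{
    public int BlogId { get; set; }
    public string Name { get; set; }
}

public class BloggingContext : DbContext
{
    // Must be virtual so Moq can override it.
    public virtual DbSet<Blog> Blogs { get; set; }
}

public class BlogQueryTests
{
    [Fact]
    public void Query_runs_against_the_mocked_set()
    {
        var data = new List<Blog>
        {
            new Blog { Name = "BBB" },
            new Blog { Name = "AAA" },
        }.AsQueryable();

        // Wire the mocked DbSet up so LINQ queries run against the list above.
        var mockSet = new Mock<DbSet<Blog>>();
        mockSet.As<IQueryable<Blog>>().Setup(m => m.Provider).Returns(data.Provider);
        mockSet.As<IQueryable<Blog>>().Setup(m => m.Expression).Returns(data.Expression);
        mockSet.As<IQueryable<Blog>>().Setup(m => m.ElementType).Returns(data.ElementType);
        mockSet.As<IQueryable<Blog>>().Setup(m => m.GetEnumerator()).Returns(() => data.GetEnumerator());

        var mockContext = new Mock<BloggingContext>();
        mockContext.Setup(c => c.Blogs).Returns(mockSet.Object);

        var names = mockContext.Object.Blogs
            .OrderBy(b => b.Name)
            .Select(b => b.Name)
            .ToList();

        Assert.Equal(new[] { "AAA", "BBB" }, names);

        // Writes can be checked with mockSet.Verify(m => m.Add(It.IsAny<Blog>()), ...),
        // but a passing Verify doesn't prove SaveChanges would have done the right
        // thing against a real database.
    }
}
```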

That takes us on to a collection of other techniques that are more about isolating the database access code to make it easier to test.

The long-standing way to do this is based around the Repository and Unit of Work patterns. There are a variety of ways to implement these – you don’t necessarily need the Unit of Work and could use the Repository pattern alone – and there is a good Microsoft tutorial on the pattern using Entity Framework 5. The basic idea is to wrap the database code in the repository, and then mock the repository in subsequent tests. The database code in the repository just consists of simple create, read, update and delete (CRUD) functions. Whilst this was a common pattern before Entity Framework, and persisted through early versions of Entity Framework that were difficult to mock or fake, it has largely gone out of fashion, not least because the Entity Framework DbSet is itself an implementation of the repository pattern, so it is unnecessary to create an additional repository just for mocking when you can mock or fake DbSet directly.
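For reference, the repository wrapper is usually nothing more exciting than something along these lines – an illustrative sketch of the pattern rather than the code from the Microsoft tutorial:

```csharp
// A bare-bones generic repository wrapping an EF6 context - an illustrative
// sketch of the pattern rather than the Microsoft tutorial's code.
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public interface IRepository<TEntity> where TEntity : class
{
    IEnumerable<TEntity> GetAll();
    TEntity GetById(object id);
    void Add(TEntity entity);
    void Remove(TEntity entity);
}

public class EfRepository<TEntity> : IRepository<TEntity> where TEntity : class
{
    private readonly DbSet<TEntity> _set;

    public EfRepository(DbContext context)
    {
        _set = context.Set<TEntity>();
    }

    public IEnumerable<TEntity> GetAll() { return _set.ToList(); }
    public TEntity GetById(object id) { return _set.Find(id); }
    public void Add(TEntity entity) { _set.Add(entity); }
    public void Remove(TEntity entity) { _set.Remove(entity); }
}

// Business code then depends on IRepository<TEntity>, which is trivial to mock
// in a test - but as noted above, DbSet<TEntity> already gives you much the
// same seam for free.
```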

The other method that has been used for a long while is a traditional data access layer. The actual database code is abstracted behind a series of method calls that take parameters and return data, and which can be easily mocked. Rather than being generic, the code inside each of those methods is written for a particular query, and whilst that will be fairly simple database code that can be easily tested, there will be a single function for each query. There are good ways and bad ways of doing this: I have seen projects with vast library classes containing all of the queries used by the business logic – a bit of a maintenance nightmare at times. A better design, more in keeping with SOLID principles, is to have smaller classes more closely related to how the queries are being used. Either way there is a big overhead when you end up with lots of query functions in a large data access layer.
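In its smaller, SOLID-friendly form, a data access class can be as simple as this hypothetical example – one small interface of related queries that the business logic can depend on and the tests can mock:

```csharp
// A hypothetical, narrowly-focused data access class: one small group of
// related queries behind an interface the business logic can depend on.
// Order and OrderingContext are invented names for the illustration.
using System;
using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;

public class Order
{
    public int OrderId { get; set; }
    public DateTime? ShippedDate { get; set; }
}

public class OrderingContext : DbContext
{
    public virtual DbSet<Order> Orders { get; set; }
}

public interface IOrderQueries
{
    IReadOnlyList<Order> GetUnshippedOrders();
}

public class OrderQueries : IOrderQueries
{
    private readonly OrderingContext _context;

    public OrderQueries(OrderingContext context)
    {
        _context = context;
    }

    public IReadOnlyList<Order> GetUnshippedOrders()
    {
        // Each method wraps exactly one query, kept close to the part of the
        // business logic that uses it rather than in one vast library class.
        return _context.Orders
            .Where(o => o.ShippedDate == null)
            .ToList();
    }
}
```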

Data access layers have again started to go out of fashion; however, some of the principles behind them can still be applied. The single responsibility principle part of SOLID can be read as suggesting that, even if you don’t have a formal data access layer, you shouldn’t be putting database access code in the same method as business logic. The business logic should work by taking and returning generic collections, rather than retrieving data and working directly on DbSets all in one method – you really shouldn’t have one method that queries data, manipulates it and writes it back. That application of the single responsibility principle then gives the separation of concerns that makes your code easier to test. The business logic can be tested with simple unit tests, rather than complicated tests that prime an in-memory database or mock, call a function and then examine database contents to see what has happened. The database access methods become a lot simpler too, often just retrieving data, and can easily be supported by a simple mock of the part of the database context being used – a full-blown in-memory database, or a fake or mock context, isn’t needed.
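Pulling that together, the business logic ends up looking something like this hypothetical example – no context in sight, so the test is just a plain list and an assertion:

```csharp
// Business logic written against plain collections - a hypothetical sketch.
// Because it never touches a DbContext, the test needs no database, mock
// context or in-memory provider at all.
using System;
using System.Collections.Generic;
using System.Linq;
using Xunit;

public class Order
{
    public DateTime? ShippedDate { get; set; }
    public decimal Total { get; set; }
}

public static class OrderCalculations
{
    // Takes any IEnumerable<Order>; the caller decides where the data came from.
    public static decimal OutstandingValue(IEnumerable<Order> orders)
    {
        return orders.Where(o => o.ShippedDate == null).Sum(o => o.Total);
    }
}

public class OrderCalculationsTests
{
    [Fact]
    public void Outstanding_value_only_counts_unshipped_orders()
    {
        var orders = new List<Order>
        {
            new Order { ShippedDate = null, Total = 10m },
            new Order { ShippedDate = DateTime.UtcNow, Total = 99m },
            new Order { ShippedDate = null, Total = 5m },
        };

        Assert.Equal(15m, OrderCalculations.OutstandingValue(orders));
    }
}
```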

In conclusion, unit testing code that accesses a database has always been difficult, and whilst some of the problems have been addressed over the years, it is still not easy. However, if you are following good design practices such as DRY and SOLID, the occasions when the only way to test something is via a database context should be fairly rare. If you find that you do need to do that, it is well worth looking again at whether you have inadvertently violated the single responsibility principle. Even though the advent of the in-memory database makes context-based testing easier, that doesn’t mean you should use it everywhere: a simple unit test of a loosely coupled method will always be faster than testing a more complex method, even against an in-memory database. It is well worth considering whether your design would be improved by not coupling your business logic directly to your database access code.

Can You Just Take a Look at this Legacy Code?

As a programmer there are a number of books that people will tell you are must-reads for any professional – a list that does change over time as programming techniques evolve. However, the books are fairly consistent in that they all tend to be written from the point of view of a green field system: starting from first principles and ensuring you build a maintainable system.

But is that realistic? You might be lucky and get in at the beginning of a brand new startup, or you could land a job at a consultancy where you’re always writing bespoke code, but for most programmers an awful lot of their career will be dealing with the joys of legacy code.

It may be that you come into an established company with many years of development behind it, thousands of lines of code debt, and a succession of changing technologies.

Alternatively you could be handed the thing programmers often dread: the “business developed application”. Often these are mired in corporate politics as well, with strained relations between the business area that developed the application and the IT department – indeed in one company I worked for there was a semi-secret development team in one part of the business, formed as a result of the IT department saying no one too many times! In most cases these business developed applications are produced by people whose strength is in understanding how the business works but who are inexperienced as developers, which often produces a double hit of problems: the business logic is usually poorly documented, and the code is also of poor quality.

Other examples I’ve come across are prototype systems that have almost accidentally ended up as critical systems, and – something that happens surprisingly often – a big company taking on responsibility for a third-party product, either because they don’t want to upgrade to a supported version, or because the third-party company is abandoning the product altogether.

The common factor in all of these is that you’re taking on a codebase that is less than ideal, so all these coding books that assume you’re starting from scratch aren’t overly useful. All the benefits of test driven development protecting you when you make changes are really not much good when you have incomplete or totally missing tests. It’s incredibly difficult to find your way around a badly structured code base if you’re used to textbook structures and accurate documentation.

What do you do? Edit and pray it works? Rewrite the whole system from scratch?

All of which brings me back to where I started, and the excellent Working Effectively with Legacy Code by Michael Feathers. The book starts from the entirely pragmatic position that you are going to be working on dodgy code a lot of the time, and if you don’t want to make it worse you need to get it sorted out. It is also realistic in that it gives you techniques to gradually improve the code as a business will rarely be able to spare the time and resources to totally rewrite something.

The really basic concept, around which a lot of the more complicated techniques are built, is that whilst you can’t bring all of a codebase under test immediately, you can grow islands of properly tested code that gradually spread as you work on other parts of the codebase over time. To create these islands you need to separate them from the rest of the codebase, which is where a lot of the complexity comes from, but Feathers offers a variety of techniques for making those separations. The ultimate aim is that as much as possible of your legacy codebase is brought under test and conforms to modern principles like DRY and SOLID, whilst at the same time you can still deliver the changes and improvements your users or customers are demanding.
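As a flavour of the sort of separation involved, here’s a rough sketch of my own (not an example from the book) of introducing a seam by extracting an interface around an awkward dependency, so the logic around it can be brought under test:

```csharp
// A rough illustration (my own, not from the book) of one common separation:
// extracting an interface around an awkward dependency so the logic around
// it becomes an island that can be unit tested.
using System;

// Before: the logic called a concrete legacy class directly and couldn't be
// tested without it. Extracting this interface creates a seam.
public interface IInvoiceStore
{
    decimal GetOutstandingBalance(int customerId);
}

// The original dependency now sits behind the interface, otherwise untouched.
public class LegacyInvoiceStore : IInvoiceStore
{
    public decimal GetOutstandingBalance(int customerId)
    {
        // The scary legacy call stays here, out of the way of the tests.
        throw new NotImplementedException();
    }
}

// The logic we actually want to change can now be tested by passing in a
// simple hand-rolled fake implementation of IInvoiceStore.
public class CreditChecker
{
    private readonly IInvoiceStore _invoices;

    public CreditChecker(IInvoiceStore invoices)
    {
        _invoices = invoices;
    }

    public bool CanPlaceOrder(int customerId, decimal orderValue)
    {
        // The £1000 credit limit is an invented example value.
        return _invoices.GetOutstandingBalance(customerId) + orderValue <= 1000m;
    }
}
```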

I hesitate to say that any programming book is an essential read, but if, like most programmers, you’re faced with a chaotic legacy codebase, Working Effectively with Legacy Code certainly gives you a lot of practical advice on how to make things better.

The TV Frame Game

Through another one of the numerous techie competing-standards stories (the TL;DR summary being that the NTSC TV standard was considered a bit rubbish on this side of the pond, and as a result Europe developed two alternative standards, PAL and SECAM), the UK and the USA ended up with two somewhat incompatible TV systems. In the USA they had TV pictures with a vertical resolution of 480 lines playing at 30 frames per second, whilst on this side of the Atlantic we were watching a higher resolution 576 line picture, but playing at 25 frames per second. The TV companies had ways of converting pictures between the two standards, and eventually we got home video recorders that could play tapes recorded in the other standard, and TVs that could cope with both. Indeed these days in the UK you’ll find most DVD or BluRay players and TVs will quite happily switch between the European 50Hz standards and the North American 60Hz ones, whatever the standard of the material put into the machine.

When the HD standards came around there seemed to be general agreement across the world, and everybody settled on 720 lines or 1080 lines for high definition pictures and all seemed right with the world… Or maybe not…

That brings us to me watching a video last night which involved a number of shots of trains going left to right or right to left across the screen, and a really annoying judder as the trains went past. I was watching an HD video file playing back on our Apple TV through Plex. Thinking it was a problem with the Apple TV I tried it through Plex on our Xbox One – same problem – and watching the raw file on the desktop, same problem again. Looking at the file, it had come from a UK production company and was encoded in 1080p at 25 frames per second – a perfectly standard UK file. So I took a look at the Apple TV. Digging into the settings, the picture standard was set to Auto, and further down it said it had automatically set itself to 1080p 60Hz. There was also an option to specify which picture format to use, including a 1080p 50Hz option, so I switched that over, watched the file again, and away went the judder; switch back to Auto and the Apple TV would go straight back to 1080p 60Hz.

The basic problem seems to be that, unlike DVD players, video recorders or BluRay players, the latest generation of devices like the Apple TV or Xbox automatically go for 1080p 60Hz, even though many are capable of switching, and then behave as if the TV they’re connected to is a dumb panel that can’t cope with any other standard; as a result they try to convert video at other frame rates in software. The judder I could see on the video is the result of the Apple TV or Xbox trying to show 25 frames per second on an output that wants 30 frames per second, so on smooth movements you get judder because 20% of the frames in any one second of video are being shown twice. Knowing my TV is a European model that can cope with a 50Hz picture, I can switch the Apple TV over and it works fine (not so for the Xbox, incidentally), but then if I watch a North American video at 30 frames per second the Apple TV is locked to 50Hz and has much the same problem in reverse, trying to show 30 frames in a period when it only has room for 25.
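If you want to see where that 20% figure comes from, here’s a quick back-of-the-envelope calculation (my own illustration, not anything from the boxes’ software) that maps source frames onto display refreshes and counts how often each frame gets shown:

```csharp
// Why mismatched frame rates judder: map each display refresh back to a
// source frame and count how many refreshes each frame ends up occupying.
using System;
using System.Linq;

class FrameRateJudder
{
    static void Main()
    {
        Show(sourceFps: 25, displayFps: 30); // UK material on a 60Hz-style output
        Show(sourceFps: 24, displayFps: 30); // film material on a 60Hz-style output
        Show(sourceFps: 25, displayFps: 25); // matched output - every frame shown once
    }

    static void Show(int sourceFps, int displayFps)
    {
        // For one second of video, which source frame is shown on each refresh?
        var framePerRefresh = Enumerable.Range(0, displayFps)
            .Select(refresh => refresh * sourceFps / displayFps);

        // How many refreshes does each source frame get?
        var counts = framePerRefresh.GroupBy(f => f).Select(g => g.Count());

        Console.WriteLine($"{sourceFps}fps on a {displayFps}fps output: " +
                          string.Join(" ", counts));
    }
}
```

For 25fps material on a 30fps output, five frames in every twenty-five get shown twice – the 20% that causes the judder – whilst matched rates show every frame exactly once.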

At this point the cinema purists will point out that there is another very common frame rate, which is 24 frames per second – the frame rate most movies are made at – and many BluRays are now released at that frame rate because, again, a lot of TV sets these days can cope with it. So what do the Apple TV, Xbox and other TV streamer boxes do? They try to show those 24 frames at whatever frame rate the box is currently set to, and have exactly the same problem.

Going through my digital videos I have a real mixed bag: most of the UK stuff is 25 frames per second, some that has come off film is 24 frames per second, and the US stuff is mostly 30 frames per second. Looking at home videos I have the same mixed bag, primarily because, even though they’re all UK-bought devices, the cameras and phones I’ve had over the years don’t always produce UK standard video. For example, iPhones using the standard camera software will consistently record in 60Hz standards – you have to resort to apps like Filmic to get the phone to record in European 50Hz standards, or even 24 frames per second if you want to work with cinema standards.

So even though the world has agreed on the size of a picture, there is still no agreement over how many of those pictures are shown per second. Most of our digital streaming boxes either only work at the US 60Hz standard (the earliest Sky Now boxes were stuck on 60Hz), or are switchable but, thanks to the software, difficult to switch: on the Apple TV you have to go rummaging in the settings, and on the Xbox you effectively have to con the console into thinking your TV can only do 50Hz pictures before it will switch – with the devices doing a second-rate job when your TV is quite often perfectly capable of playing things back correctly.

Having one standard is never going to work, as we’ll still have vast amounts of archive content at the older frame rates, so for the moment it would really help if the digital streamer manufacturers actually started acknowledging that there are a variety of standards – even your average US consumer who doesn’t have any 50Hz content is going to notice glitching when they watch a movie. We’ve had DVD players and video recorders that could switch for years, so why has the new tech taken such a massive step backwards?

Featured image: “old tv stuff” by Gustavo Devito.

Crowdsourced Flight Tracking

Earlier this week I had to head off to the airport to pick my wife up as she came back in on a transatlantic flight. As I’ve often done before I kept an eye on the flight using Flightradar24, and the Plane Finder app on my phone and the Apple TV.

Taking a look at the Flightradar24 site, the Add Coverage link caught my eye – I didn’t think there would be many people with a spare radar station sitting around! However, after a bit of reading it transpired that I didn’t need one.

Whilst in the past the online flight sites have taken data from publicly available air traffic control feeds, they are now increasingly using data picked up from the ADS-B transmitters that more and more aircraft carry. Essentially each ADS-B equipped plane broadcasts a signal encoding its location along with other details of its flight such as heading and altitude. The big advantage of the system is improved safety in areas that lack radar coverage, but since the signals are broadcast on a frequency similar to DVB-T TV pictures, it also means a simple USB DVB-T receiver plugged into a home computer can pick up the signals from the aircraft as well.

Given the really low cost of entry, sites like FlightRadar24, PlaneFinder and FlightAware are significantly augmenting their data by offering their premium accounts for free to home users who supply data. You can use an existing PC, Mac or Linux box, but in order to get the account you need the computer to be running all the time, so a small low-power computer like a Raspberry Pi is a much better option, and that is what all three sites suggest. Whilst you could spend a large amount of money on a fancy roof-mounted aerial, from my reading I figured that since we are located under the approach to Heathrow airport, even the basic general purpose aerial that comes with the USB receiver would be enough to pick up a few planes. Additionally, if I could get the Raspberry Pi feeding all three sites – and there were plenty of people online saying that you could – I could get three premium accounts even for the potentially pretty small number of flights I could pick up.

So I drew up a shopping list.

First off, not having a USB DVB-T receiver, I needed something to pick up the signals. I opted for the NooElec NESDR Mini 2 USB RTL-SDR & ADS-B Receiver Set, which seemed to have pretty good reviews and which a number of people online were using – it also only cost £17.95 and included a basic aerial. I also needed a Raspberry Pi as, although we have one, it’s part of the kids’ Kano so not something I can take over. I did look at getting the new Raspberry Pi 3, but since I didn’t need the extra speed or built-in wifi I saved £6 and got the older Raspberry Pi 2 instead for £25.99. I also picked up a power supply, Raspberry Pi case and memory card, and had all the bits for my DIY aircraft tracker for under £100.

Setting up the Raspberry Pi is pretty straightforward: you just grab a copy of the Raspbian operating system, install it onto the memory card and off you go. In fact you don’t even need a screen, keyboard and mouse if you have a bootable copy of the OS on the card, as by default it has SSH access enabled, so as long as it is plugged into a network you can access the Pi remotely once it has booted.

Once it was up and running, I initially opted for FlightRadar24, for no other reason than that was the site where I’d first read about feeding data. That proved to be a bit of a mistake from the point of view of feeding all three sites simultaneously. The basic idea is that you use the software from one site as the master feed that talks to the receiver, and then point the feeder software for the other two sites at its data output. The trick seems to be to use the FlightAware software as the master feed and then tell the other two bits of software that they are talking to a receiver on port 30005 of localhost – then all three play nicely together. Once I’d swapped things around and reinstalled the FlightAware software first, I pretty soon had all three up and running and feeding data to their respective sites, alongside giving me information locally. If you start with the FlightAware software and a brand new SD card, you can also opt for their custom build of Raspbian with PiAware preinstalled, which makes life a bit easier and is configured for optimal performance as a receiver.
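If you’re curious whether the master feed really is sharing its data, a quick check (my own little test, not part of any of the feeder software) is just to connect to port 30005 and read a few bytes of the raw stream the other feeders are pointed at:

```csharp
// A quick sanity check of my own: connect to the master decoder's output on
// localhost port 30005 and read some bytes - this is the same stream the
// other two feeders are configured to consume.
using System;
using System.Net.Sockets;

class Port30005Check
{
    static void Main()
    {
        using (var client = new TcpClient("localhost", 30005))
        using (var stream = client.GetStream())
        {
            var buffer = new byte[4096];
            // Blocks until an aircraft message arrives, so expect a short
            // wait if nothing is overhead.
            int read = stream.Read(buffer, 0, buffer.Length);
            Console.WriteLine($"Read {read} bytes of raw ADS-B data from localhost:30005");
        }
    }
}
```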

There is a bit of variation between the software experiences. FlightAware is very much a command line setup, and connects to an account via username and password; their local software is a simple page with a map of the planes and a list of their details, but they have a much better experience on their main website, with probably the most detailed statistics on the performance of your receiver of the three sites. FlightRadar24 has a slightly better setup process, although still command line based, and when it is up and running the local website is very basic, just showing a list of the planes – you can, however, change the settings from the local website and restart without resorting to the command line. Plane Finder is by far the nicest local installation, with an easy to use web-based setup and a nice local website that gives you a visualisation of the planes your receiver is picking up, detailed log files, statistics on the performance of your receiver and its communication link with the internet, along with a web-based settings page to reconfigure the software. Whilst all the sites give detailed step-by-step instructions that can guide a novice through, Plane Finder is the friendliest user experience.

So the big question is how well does it work?

I wasn’t expecting much from my little aerial sat on top of a filing cabinet in my office, but I was amazed. It’s certainly not picking up planes 200 miles away as a roof-mounted aerial would, but it is picking up lots of flights within 50 miles, and one or two from a lot further away. It certainly picks up flights going in and out of Heathrow, especially when they’re coming in over Reading, along with quite a few flights over towards Gatwick and flights from further afield passing over the UK. In the forty-eight hours I’ve had it running it has picked up 2,859 distinct aircraft! If you compare my local radar view with the main site pages it’s clear I’m not getting everything, but I’m impressed by how many I’m picking up considering the cheap aerial and kit I’m using. If I wanted to spend a lot more money on a proper roof-mounted aerial I could probably track a load more.

So if you fancy a simple little project that’s not going to break the bank, and want to contribute data to crowdsourced aircraft tracking, I can certainly recommend building a Raspberry Pi based ADS-B tracker – as a bonus you get access to all the fancy premium features on three of the main aircraft tracking sites. If you live in a populated area you’ll really only be adding resilience to the network, but if you’re in a more rural area there are definite gaps in coverage – there is a good map showing coverage on the FlightAware site. Aside from that, it’s also really quite fun if you’ve got kids to be able to point out a plane going overhead and say where it is travelling from or to.

Roaming an iPhone on Data Only

Since 3 relaunched their free roaming Feel at Home scheme we’ve been a little spoilt when travelling – we went on a trip to the USA and used our iPhones pretty much as we would in the UK. It was therefore a bit of a shock when, at short notice, we had to make a trip to Canada and took a look at the roaming costs over the border… Calls across the board are £1.40 a minute, and data, which I have unlimited in the UK, is £6 per MB – definitely not a Feel at Home destination…

In the past we’ve got hold of a local SIM for Canada. HolidayPhone do a Canadian SIM card, but it isn’t cheap and it wouldn’t be delivered in time for our trip. Canadian company Similicious have better prices, but on a short notice trip they also wouldn’t be able to get the SIM card to us in time, as international shipping would take about fourteen days.

The other option was to get an international SIM, but looking through the options they’re all primarily focused on voice calls and texts; you can get an international SIM with data, but that adds to the cost even more. However, looking at what we use day by day, the vast majority of the use we now make of our phones is data, not calls. Our phones are essentially handheld computers that just happen to make calls. Quite often we’re communicating through chat apps like Google Hangouts or Facebook Messenger. On the ground in Canada, apps like Citymapper and Uber would be essential for getting around. If we wanted to make voice calls we had FaceTime and FaceTime Audio to talk to other iPhone users, and apps like Skype can be used to call conventional phone numbers for fairly minimal cost. So it seemed like we could get by with a data-only SIM – but would those be any cheaper?

Having a search around, I came across the Love2Surf card on next day delivery from Amazon. It comes pre-loaded with 100MB of data and has a website that allows you to add more, so we thought we’d give it a go.

In the UK the card runs on the EE network, so we swapped out the normal 3 SIM before leaving for the airport and were able to give it a test run whilst still in the UK. On arrival in Canada it hooked up to the Rogers network and quite happily made a FaceTime call from the baggage claim hall.

The main issue we had on the trip was a hiccup when we added data to the card part way through – a technical issue at the Love2Surf end that left the card unable to connect to any network for a few hours, but it came back and we were able to carry on. There were a couple of occasions when we forgot we couldn’t make voice calls, but you’re not going to end up with a big bill from doing so, as the card is only authorised to roam for data. All in all, the little bit of inconvenience seemed worth it for the cost saving.

Comparing the prices for Canada, if you’ve got enough notice it won’t beat the cost of a local SIM from Similicious, but if you’re touring around multiple countries the Love2Surf card is certainly cheaper than buying a local SIM for each country.

Taxi

Recently I’ve been playing around with taxi apps on my phone.

For a while I’ve had Hailo on my iPhone for when I am in London – an app from a London startup that allows you to hail a black cab from your phone – but whilst it has expanded to other cities worldwide, it doesn’t work anywhere in the UK outside the capital, so day to day it’s not much use when I’m at home in the depths of Berkshire.

Since Hailo appeared, other competitors have turned up, the biggest and most notorious being Uber. The Moovit app that I’ve had on my phone for a while for keeping track of the buses and trains I use has had an Uber link-up for some time, but again that hasn’t been much use because Uber really only operated in the capital. However, I recently noticed that rather than saying no cars were available it would more frequently come up with availability, particularly when I was in Reading. Whilst the company hadn’t officially expanded to Reading, they had expanded west along the M4 from London into Slough, Windsor and Maidenhead, and what I was seeing was cars that had carried passengers out towards Maidenhead becoming available when they had dropped their passengers in Reading and were heading back.

That caused me to take a look again and see whether any of the multitude of apps were usable for someone who didn’t live in the capital.

Hailo was definitely still only working with London black cabs, but GetTaxi, an Israeli startup with a similar concept to Hailo, was now working with the black cabs that operate in Reading. The only downside is that their service doesn’t operate outside Reading, so whilst I could order a cab from Reading to home, I couldn’t request one in the reverse direction.

Uber would very occasionally offer me a ride even at home, but not often enough to rely on, so it looked like maybe there still wasn’t a viable option. However, I found another alternative thanks to a blogger I’d come across who works as a minicab driver. There is lots of interesting stuff on his site, but the main point is that he lives in Brighton and operates mainly in London. He has worked for a variety of operators, including Uber, but is now working for the large minicab operator Addison Lee – the post where he discusses why he has gone to Addison Lee and is no longer working for Uber is well worth a read, but it also highlighted their investment in technology, so I grabbed the app. I had seen Addison Lee cars operating around Reading so I knew they had expanded coverage to Berkshire – indeed since their sale to the Carlyle Group and the departure of their frequently controversial founder and CEO they are covering the whole country.

The app seems just as good as the equivalent apps from Uber, Hailo and GetTaxi, but unlike all of those it will offer me a taxi at my door, and allow me to book one at any time. Of course the on-demand service is not the five or ten minute wait you’d get in London – usually between thirty and sixty minutes at home – but for a booked airport pick-up or drop-off the rates are comparable with any of the other cab firms I’ve used over the years, and they will also offer me a home-to-work, or work-to-home, booking at a reasonable cost. The app also allows me to pay with Apple Pay or PayPal, and even retains the option to pay the driver in cash (although one of the advantages of Hailo has always been that I never carry much cash these days, certainly not enough for a reasonable length cab journey). I’m going to give it a go next time I’m booking a cab – it certainly can’t be as bad as some of the experiences we’ve had over the years.

As an experiment, having found a cab app that covered me at home, I then wondered how far their claim to cover the whole country really extended. As yet, I haven’t found anywhere in the country where it hasn’t offered me an estimate for an on-demand request or a pre-booking, as long as one end of the journey is somewhere close to their main area in the south-east of England. This, for example, is a pickup request for the big hotel in the centre of Portree on the Isle of Skye, for which the app is quoting 295 minutes – whether they’d actually turn up if you made the on-demand request is another matter, as from experimentation 295 minutes seems to be some sort of maximum and is what it quotes in a number of places I’ve tried – but the app certainly suggests it will take the booking, at an eye-watering £1770 to come back home!