Category Archives: Computers/Technology

Stuff about computers and technology in general.

DDD 2018 at Microsoft Reading

After a busy July, I’ve finally got a quiet moment to catch up with my notes from the recent Developer Developer Developer event held at Microsoft HQ in Reading.

I attended a real mix of sessions this year. First up was a mind-bending session led by Frances Tibble and Anita Ramanan, two software engineers at Microsoft, talking about Quantum Computing and the Q# language. The session was split into two parts, the first a bit of a crash course in the physics involved in Quantum Computing, with quite a bit of maths too. The interesting takeaway is that present-day quantum computers are expensive and unstable; they are particularly sensitive to external factors, so can lose state in seconds. As a result, for now we have the Quantum Development Kit, which simulates how a real quantum computer should behave.

The key difference with a quantum computer is in the bit. In classical computing a bit is either 0 or 1, but in quantum computing the equivalent, the qubit, can also sit at any point in between – taking the usual light bulb analogy for a classical bit, it’s like having a dimmer attached. I really haven’t got the space to cover all their content in detail, but they did do a version of the same talk a few days before DDD which is online on YouTube.
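To put the dimmer analogy into the standard notation (my addition, not something from the session), a qubit is usually written as a weighted superposition of the two classical values:

```latex
|\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle,
\qquad |\alpha|^2 + |\beta|^2 = 1
```

Measuring the qubit collapses it to 0 with probability |α|² or 1 with probability |β|², which is part of why being able to simulate the behaviour, as the Quantum Development Kit does, is so useful for experimenting.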

Moving on I then attended Joseph Woodward talking about Web Assembly, Blazor and the Future of Web Development.

Joseph started with a run through of the history of web development, and the perennial problem that whilst there has been a relentless move towards providing applications in a web browser, the tools to create rich applications in a web browser are really pretty limited. JavaScript, the main language of the web, has reached that position largely by historical accident, and is pretty slow. Web Assembly is the latest of a number of attempts to replace JavaScript as the language of the web, in this case providing what is effectively a low-level byte code for the web, with other languages compiling down to that byte code. At this stage it’s still very much a minimum viable product, but it does seem to show some promise, with multiple languages already able to compile into Web Assembly byte code.

For C# and other .Net languages, which compile into the intermediate language of the .Net platform, Microsoft offers Blazor, which ships a full .Net runtime built as Web Assembly byte code. This of course means that .Net intermediate language is then being interpreted on top of Web Assembly byte code, so there are plans to compile .Net code directly to Web Assembly to avoid this double layer of interpretation.

The actual coding will feel familiar to any C# programmer, with the usual dependency injection and the ability to pull in packages using NuGet. Interop with JavaScript is provided, and is necessary because Web Assembly does not provide access to the DOM.
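As a rough illustration of that familiarity – a hypothetical sketch of my own with made-up names, not code from the talk, and with the caveat that exact class and method names have shifted between Blazor previews – a component class might take a service through dependency injection and reach the browser through the JavaScript interop layer something like this:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Components;
using Microsoft.JSInterop;

// A hypothetical service resolved through the usual .Net dependency injection.
public interface IGreetingService
{
    string Greet(string name);
}

// The component logic; in a real app the markup would live in a .razor file.
public class GreetingComponent : ComponentBase
{
    [Inject] public IGreetingService Greetings { get; set; }
    [Inject] public IJSRuntime Js { get; set; }

    protected string Message = string.Empty;

    protected override void OnInitialized()
    {
        Message = Greetings.Greet("DDD");
    }

    // Web Assembly has no direct DOM access, so anything browser-specific
    // goes out through the JavaScript interop layer.
    protected async Task ShowAlertAsync()
    {
        await Js.InvokeVoidAsync("alert", Message);
    }
}
```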

It was clear from the talk that the platform is still immature – it lacks performance and has no threading or garbage collection – but it does show promise. Even if it doesn’t provide a replacement for JavaScript, it does allow us to mix and match languages, picking the one best suited to a particular task.

Next was what for many people was one of the big draws of this year’s DDD: the return of Barry Dorrans, now .NET Security Curmudgeon at Microsoft, but who before joining Microsoft and moving across the pond had been a regular speaker on security at developer events. Barry was presenting his Code Behind the Vulnerability session, variations of which he has presented for a number of years at conferences around the world. The great advantage of presenting it here is that it allowed developers whose companies don’t have the budget to send them to paid-for conferences to see this important session. Indeed Robert Hogg, CEO of Black Marble, who organise the DDD event at Microsoft, considered the subject matter so important that he told his developers in the room they’d be fired if they did anything Barry had spoken about!

The purpose of the Code Behind the Vulnerability session is basically to go through security issues that Microsoft have found in their own code, and their causes, so that other developers don’t make the same mistakes. Barry updates the session periodically as new exploits and problems come to light, so it is well worth keeping an eye out online for new versions.

Barry covered eight different security advisories. They included hash tables that could bring a system down if they received specially crafted user data (the tip here being not to use user-supplied data as keys for a hash table), exposed endpoints that allowed users to work out encrypted messages, and a number of occasions where people had turned off or misused features and opened up security holes – for example turning off signing on view state, allowing attackers to create .NET objects, or simply writing a GET API call that changes state.
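The last of those is worth a tiny illustration – a hypothetical sketch of my own rather than one of Barry’s examples. A state-changing action exposed over GET can be triggered by a crawler, a link prefetcher or a hostile image tag without the user ever intending it, so the fix is simply to use the verb that matches the semantics:

```csharp
using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/accounts")]
public class AccountsController : ControllerBase
{
    // Risky: a state-changing GET can be fired by crawlers, prefetchers or a
    // hostile <img src="/api/accounts/42/close"> without any user intent.
    [HttpGet("{id}/close")]
    public IActionResult CloseAccountViaGet(int id)
    {
        // ...imagine the account actually being closed here...
        return Ok();
    }

    // Safer: use the verb that matches the semantics, so browsers, proxies and
    // frameworks all treat the request as the state change it really is.
    [HttpPost("{id}/close")]
    public IActionResult CloseAccount(int id)
    {
        // ...state change happens here, behind authentication and anti-forgery checks...
        return NoContent();
    }
}
```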

Barry’s summary slide covers just the basics, but the whole slide deck is worth a read. His summary is:
– Sign your data, even when it is encrypted
– Don’t use regular expressions
– Don’t use BinaryFormatter
– Don’t overbind in MVC
– Use the right HTTP verb
– Validate your inputs

Barry’s session is a critical one for anybody doing .NET development – many of the mistakes he shows are easy to make, but can have catastrophic consequences.

The next session I attended was rather lighter, but was also one that had been presented at a major conference and that Dylan Beattie was bringing to DDD. You can view the keynote version of Apps, Algorithms and Abstractions: Decoding our Digital World on YouTube, and it is broadly similar.

Dylan starts off by talking about how news of his birth and a first picture made it from where he was born in Africa back to his grandparents in Oxfordshire – a process that took weeks. He then looks at technology today, where a photo can appear on the phone in your pocket and be responded to immediately. In the space of his lifetime the way we communicate has fundamentally changed. His session goes through the basic technology that underpins these changes, and is absolutely fascinating.

This was probably my favourite session of the day as it covers so many different areas of technology. It was also presented in an easy to digest way – I’ve since been able to show it to my children, and they can start to understand all sorts of technological ideas.

My final session was one I picked more because I enjoy the speaker – Gary Short talking about AI dev-ops. Gary started by looking at how the principles that brought about dev-ops can be applied to AI and machine learning work, for much the same reasons. There has always been a big disconnect between data scientists and coders. Data scientists have a very niche skillset, so in the past they would do the specialist work and then hand their carefully designed models to developers to implement. However, tools are now being produced that allow data scientists to develop and implement their models themselves, with coders simply connecting to these rather than reimplementing them.

Gary also had some useful tips. He highlighted that you can only optimise algorithms for false positives or for false negatives, not both, so it is a business decision as to which costs more. This is particularly relevant to our products at FISCAL, as we have a continual tension between reducing the number of false positives we produce whilst not missing results, i.e. producing a false negative.
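A trivial sketch of why the two pull against each other (entirely made-up numbers, nothing to do with Gary’s examples or our products): with a single score threshold, lowering the threshold flags more innocent records, while raising it lets more genuine matches slip through.

```csharp
using System;
using System.Linq;

class ThresholdTradeOff
{
    // (score, isActuallyFraud) pairs from an imaginary classifier.
    static readonly (double Score, bool Fraud)[] Results =
    {
        (0.95, true), (0.80, true), (0.70, false), (0.60, true),
        (0.40, false), (0.30, false), (0.20, true), (0.10, false),
    };

    static void Main()
    {
        foreach (var threshold in new[] { 0.25, 0.50, 0.75 })
        {
            var flagged = Results.Where(r => r.Score >= threshold).ToArray();
            int falsePositives = flagged.Count(r => !r.Fraud);
            int falseNegatives = Results.Count(r => r.Fraud && r.Score < threshold);

            Console.WriteLine($"threshold {threshold:0.00}: " +
                $"{falsePositives} false positives, {falseNegatives} false negatives");
        }
        // Moving the threshold trades one kind of error for the other;
        // which side to favour is a business decision, not a technical one.
    }
}
```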

In summary, DDD 2018 was a good day and well worth giving up a Saturday for. For many developers there isn’t the budget to go to paid conferences regularly, so it is particularly good to be able to see sessions from those conferences presented live at a free community event. For sessions like Barry’s in particular, important information about how to code securely is something all developers should be hearing, not just the ones who work for a company with a good training and conference budget!

Open Space at Pivotal Software

Yesterday I had my first Unconference experience, attending a one-day Open Space event held at the offices of Pivotal Software, just off Silicon Roundabout on Old Street in London.

Firstly it’s probably worth explaining the concept of an Unconference or Open Space. The organisers of this event described it well when they said that the Open Space came about when people observed that many of the interesting conversations at other conferences took place in the coffee breaks. So when I arrived, the organisers had arranged a number of different rooms and environments and had divided the day up into a number of time slots, but all we had was a large empty wall to fill with sessions and activities, plus sheets with the four principles and the one law that would govern the day.

Looking at the principles, the first three are about how the sessions run: basically, don’t worry about who is or isn’t there, take the session as it comes, and don’t feel you have to fill the time slot:

  • Whoever comes are the right people.
  • Whatever happens is the only thing that could have happened.
  • When it’s over, it’s over.

The next principle is “Butterflies and Bumblebees”, which describes the way some people move between sessions. The bumblebees don’t spend an entire period in one session; instead they visit many of the groups, cross-pollinating them with fresh views and ideas they have picked up elsewhere. The butterflies aren’t participating in any particular session, instead tending to their own needs, though they drift in and join from time to time. Both character types were apparent during the day.

The one and only law is the Law of Two Feet – basically if you’re not learning, or not feeling you’re gaining anything from the session, just walk away.

Wikipedia has a much more detailed overview of the technique – whilst we were using it for a technical conference, the technique could easily be applied to any subject area.

We kicked off with an ice-breaker to split us into five groups for discussing possible sessions: we had to arrange ourselves into alphabetical order based on our country of birth. That proved to be an interesting exercise for several groups – the people from the United States all put themselves under A for America; I was in a group of three for England, but there were other English people down at U for United Kingdom; and there was a lady who put herself at C for Crimea because she didn’t want to offend anyone, given that when she was born Crimea was in the Soviet Union, was then part of Ukraine for a long period, and has now been annexed by Russia.

In our groups we then talked a little about what we wanted to gain from the day, and then discussed what possible sessions we could run. There was an interesting mix. Firstly, some people had existing sessions they had done elsewhere that they were willing to run again – with hindsight I actually had one of those, as I had a copy of a Software Craftsmanship brown bag I’d done at work on my laptop; something for next time. Other people had current projects they were happy to share and talk about with other participants, or skills they were willing to pass on. Further sessions emerged from the group discussion itself: for example, in our group we had a man who worked as a biological scientist and was happy to talk about his work with scientific software. One key point is that you don’t need to be an expert to propose a session, so several people had topics they simply wanted to talk about, ranging from particular technologies or concepts, to challenging accepted principles, to hot topics they wanted to discuss.

From there, everybody with a proposed session lined up to give a quick elevator pitch and suggest a location, taking it in turns until all the sessions had been proposed. After that there was a rationalisation stage where sessions could be moved to resolve any potential clashes and balance out the schedule. Then it was into the first session.

The first session I picked was one on Unconditional Programming. The concept comes from Michael Feathers, whose superb book on legacy code sits on my desk much of the time. He coined the term in a blog post back in 2013 and has done conference talks on the same subject. It’s certainly an idea I’d come across previously, and Unconditional Programming is potentially one of the techniques Feathers discusses in his upcoming book, which was originally going to be called Brutal Refactoring: More Working Effectively with Legacy Code, but may well end up being called Unconditional Programming.

The basic concept is that in many cases the use of conditional code, such as the ubiquitous if/else construct, couples together bits of code that shouldn’t be coupled, so removing the conditional results in two separate, more easily testable pieces of code that are cleaner and easier to understand.

This provoked an interesting discussion, as the conditional is not really removed, merely moved: rather than existing as an if statement, the decision is provided by polymorphism behind a factory class.
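As a hypothetical sketch of the kind of transformation being discussed (my own illustration, not code from the session), the branch moves out of the calling code and each side of the old if/else becomes its own small, independently testable class:

```csharp
public class Order
{
    public decimal Weight { get; set; }
    public bool IsInternational { get; set; }
}

// Before: the if/else couples the two behaviours together in one method.
public static class PostageBefore
{
    public static decimal PostageFor(Order order)
    {
        if (order.IsInternational)
            return order.Weight * 2.5m;

        return order.Weight * 1.2m;
    }
}

// After: each branch is its own class, testable in isolation.
public interface IPostageCalculator
{
    decimal PostageFor(Order order);
}

public class DomesticPostage : IPostageCalculator
{
    public decimal PostageFor(Order order) => order.Weight * 1.2m;
}

public class InternationalPostage : IPostageCalculator
{
    public decimal PostageFor(Order order) => order.Weight * 2.5m;
}

// The decision hasn't vanished – it has moved behind a factory, which was
// exactly the point raised in the discussion.
public static class PostageCalculatorFactory
{
    public static IPostageCalculator For(Order order) =>
        order.IsInternational ? new InternationalPostage() : new DomesticPostage();
}
```

The pay-off is that each calculator can now be unit tested on its own, with the single remaining decision confined to the factory.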

In the discussion we also had a few proponents of functional languages – one who uses Erlang and another who uses Haskell – who agreed that conditional-free code is easier to achieve in functional languages, in particular through pattern matching. In that situation the pattern matching provides the conditional rather than an if statement.
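The discussion itself was in terms of Erlang and Haskell, but C#’s newer switch expressions give a rough flavour of the same idea – the branching is expressed declaratively as patterns rather than as an explicit if statement (again just an illustrative sketch, reusing the Order type from above):

```csharp
public static decimal PostageFor(Order order) => order switch
{
    { IsInternational: true } => order.Weight * 2.5m,   // international rate
    _                         => order.Weight * 1.2m,   // domestic rate
};
```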

It was certainly an interesting discussion, especially with a mix of developers from both imperative and functional programming backgrounds.

My next session was titled “How to produce your own Citymapper in 10 Minutes”, and was run by a developer who, as a side project, had been looking at routing algorithms, using London Underground routing as a test bed.

He started off by showing us a picture of the London Underground map, then highlighting that if you turn the stations into nodes and the lines between them into edges, you have a familiar concept in Computer Science: a directed graph. Finding the shortest route in a directed graph is a problem that was solved way back in 1956 by Edsger Dijkstra, and as such libraries to generate shortest routes are available in pretty well every programming language. He then showed us some code that used the freely available station data from the TFL API, together with Dijkstra’s algorithm, to work out the shortest route between two stations on the Underground.
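As a rough sketch of the idea – a tiny made-up fragment of the network rather than the real TFL data – Dijkstra’s algorithm over a station graph only takes a few lines of modern C# (.NET’s PriorityQueue does the heavy lifting):

```csharp
using System;
using System.Collections.Generic;

class TubeRouting
{
    // A tiny, invented fragment of the network: station -> (neighbour, minutes).
    static readonly Dictionary<string, (string To, int Minutes)[]> Graph = new()
    {
        ["Paddington"]        = new[] { ("Baker Street", 4), ("Notting Hill Gate", 5) },
        ["Baker Street"]      = new[] { ("Paddington", 4), ("Oxford Circus", 3) },
        ["Notting Hill Gate"] = new[] { ("Oxford Circus", 7) },
        ["Oxford Circus"]     = new[] { ("Baker Street", 3) },
    };

    static int ShortestTime(string from, string to)
    {
        var best = new Dictionary<string, int> { [from] = 0 };
        var queue = new PriorityQueue<string, int>();
        queue.Enqueue(from, 0);

        while (queue.TryDequeue(out var station, out var time))
        {
            if (station == to) return time;
            if (time > best[station]) continue;          // stale queue entry
            if (!Graph.TryGetValue(station, out var edges)) continue;

            foreach (var (next, minutes) in edges)
            {
                int candidate = time + minutes;
                if (!best.TryGetValue(next, out var known) || candidate < known)
                {
                    best[next] = candidate;
                    queue.Enqueue(next, candidate);
                }
            }
        }
        return -1; // unreachable
    }

    static void Main()
    {
        int minutes = ShortestTime("Paddington", "Oxford Circus");
        Console.WriteLine($"Paddington to Oxford Circus: {minutes} minutes");
    }
}
```

The real version swaps the hard-coded dictionary for the station and line data pulled from the TFL API.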

We then had a wide-ranging discussion about how it wasn’t quite that simple, looking at issues such as fast trains on lines like the Metropolitan line, and the fact that the model didn’t take account of the time to change trains. It was also highlighted that Dijkstra’s algorithm quickly slows down on larger datasets, even if you use intelligent heuristics to prune unpromising routes that head away from the destination.

We then finished off talking about better options for graph and tree searching, and got onto the A* algorithm, which can produce routes more quickly, and in particular a recent talk by Simon Peyton Jones covering the same subject area, called Getting from A to B – Route Planning on Slow Computers.

My first session after lunch was entitled “Beyond SOLID” and was proposed by a developer who wanted to challenge whether the SOLID principles were really the best way to describe Object Oriented code.

We started working through each of the principles.

Firstly, he thought that the Single Responsibility Principle was primarily about code cohesion and could equally be applied to other programming paradigms – it is just good practice rather than something specific to Object Oriented code.

Moving on to the Open/Closed Principle, he thought that whilst being open for extension and closed for modification is primarily about the pluggability of code, it is really a bit vague and doesn’t tell us much that is valuable.

The Liskov Substitution Principle he thought was probably the most difficult of the principles to understand, and whilst it gives us a better paradigm for what object oriented code should look like, it is again not specifically about object orientation.

The Interface Segregation Principle is also about cohesion of code, but this time it is more object oriented, as it is specifically talking about contracts – using interfaces rather than concrete classes.

Finally, the Dependency Inversion Principle is again more object oriented, as it is about depending on interfaces rather than instantiating concrete classes directly.
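For instance (my illustration, not the speaker’s), dependency inversion is usually shown as a high-level class depending on an abstraction it defines, with a concrete implementation supplied from outside:

```csharp
// The high-level policy depends on an abstraction...
public interface IMessageSender
{
    void Send(string to, string body);
}

public class InvoiceNotifier
{
    private readonly IMessageSender _sender;

    // ...and the concrete implementation is supplied from outside.
    public InvoiceNotifier(IMessageSender sender) => _sender = sender;

    public void NotifyPaid(string customerEmail) =>
        _sender.Send(customerEmail, "Thanks, your invoice has been paid.");
}

// A low-level detail implementing the abstraction.
public class SmtpMessageSender : IMessageSender
{
    public void Send(string to, string body) { /* SMTP call elided */ }
}
```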

We then opened the discussion, and there seemed to be consensus that the SOLID principles are more principles of good design than specifically object orientation principles. We also discussed how being able to regurgitate the SOLID principles has become almost a staple interview question, despite not indicating much beyond the interviewee’s ability to remember the acronym. It was also pointed out that SOLID is quite a mix of high-level design principles, low-level concepts, some architecture, and, in the Liskov Substitution Principle, some quite deep Computer Science.

From there we tried to look at other principles or concepts that could describe Object Oriented coding, including design patterns, tell-don’t-ask message passing, CQRS (command query responsibility segregation), keeping things small with replaceable parts, balanced abstractions, loose coupling, tight cohesion, code smells, and the classic OO principles of polymorphism, encapsulation and inheritance/abstraction.

However when we looked at those principles, very few of them exclusively applied to Object Oriented code, many of them were just good design practices.

As with the earlier discussion on Unconditional Programming, it was a good discussion about something that in many circles is regarded as a foundation of Object Oriented programming, but which, once you dig into it, certainly doesn’t cover all of Object Oriented programming and is much more about good design than anything specific to objects.

The next session I went along to had been convened by a developer and a mental health practitioner who were concerned about developer mental health and burnout, and was a wide-ranging discussion amongst a group of developers covering work cultures, work pressures, and how developers cope, or don’t cope, with them.

From the discussion there certainly seems to be some variation in how companies treat their developers. Some will take all the hours developers will give, with apparently minimal concern for their mental well-being, at times actively encouraging and rewarding the kind of long hours that lead to burnout. Others, although they are good at limiting actual working hours, encourage their developers to participate in community events and side projects in their own time, which again increases stress – several people said they are now careful to limit their community involvement to things they consider directly relevant to their work.

We also had a good discussion about practices in our teams, covering working hours and stand-up times. At one company a developer worked for, a director would actively send developers home at about 6pm. Another deliberately did not give their developers laptops, and limited remote access, to stop them working long hours. Another operated a hot-desking policy with fixed desktop computers at each desk and the developers moving around and pairing. This also highlighted that pairing forces people to work common hours – one company explicitly banned using any code that hadn’t been written in a pair.

This again was a very interesting session, highlighting the different ways different teams work, and both the good and the bad practice across those companies.

The final session was another discussion, this time on the broad topic of where software development overlaps with other disciplines.

We started from the principle that the Software Industry has tried a number of metaphors over the years for describing itself such as Engineering or Craftsmanship, but we quickly reach the limits of those metaphors.

Over the course of the hour we drew connections with at least twenty other areas and disciplines including Philosophy, Biosciences, Psychology, Linguistics, Politics, Art and Design and Sociology.

Once again, with a diverse group of people it was a good exploration of all parts of software development, drawing some parallels I’d not really thought of before.

After that the day drew to a close with a quick retrospective where we sat, talked and shared what we had learnt during the day. As a first experience of an Open Space it was really enjoyable: I turned up with no idea of what I might learn, found a wide variety of sessions, and, as the facilitators said at the beginning, had lots of interesting discussions.

Unlike other conferences I’ve been to, there was also a pretty diverse range of participants, with much closer to a 50:50 male-to-female split than I’ve seen elsewhere. Much as with DDD Scotland, where one of the most interesting sessions was the Lean Coffee discussion, we got a chance to talk through and explore ideas that you maybe wouldn’t see as a full session at a regular conference.

My thanks go to Denise and Spike at Pivotal Software for providing the venue and organisation for the event, and all the other participants for making it such an enjoyable day.

DDD Scotland

Last weekend was Developer Day Scotland. Much like the original Developer Days based at the Microsoft Campus – many of which I’ve been along to, and which have relaunched with a new team running them – this was a relaunch by the Scottish Developers. As there were some interesting sessions on the agenda, and since I fancied an excuse to take the West Coast Main Line over Shap and through the Southern Uplands – something I usually only glimpse whilst driving the M6 and A74(M) – I grabbed myself a ticket and headed north.

The conference was held at the Paisley Campus of the University of the West of Scotland. The Reading Developer Days are relatively unusual in being held at a company site, but then few companies have the kind of setup Microsoft have that is suitable. Having said that, the experience of attending a DDD at a university showed up several advantages, not least that there is much more space, and in particular the main hall is large enough to take all the attendees – at Microsoft the prize-giving at the end of the day ends up being done with the attendees stood in the foyer and the organisers stood on the stairs!

At this conference I was very much picking sessions to tie in with upcoming work, rather than just sessions that piqued my interest, as I have done at other DDD events.

First up I kicked off with Filip W talking about Interactive Development with Roslyn.

Filip started off with a quick recap of the history of C# as a language – enough to make me feel a little old, as I can remember my first experiences with the early versions of C# back with Visual Studio 2003. The point was to highlight that the way developers work with C# hasn’t changed much over the years, which is why the new Roslyn compiler is such a game changer.

He started off with a simple feature, dotnet watch, which runs a specified command as soon as a source file changes. It needs the VS2017 project format, but allows a great deal of flexibility in how you work with code.

From there he moved on to Edit and Continue. Edit and Continue has been around for longer than C# – it was an often-used feature of VB6 that was demanded in .Net as people moved across. It has however been problematic, as a rule of thumb tending to support a version of the language behind the current cutting-edge C#, and there have always been a number of limitations, in particular not being able to change lambda functions at all. Roslyn has changed that, and it has finally caught up with the current C# 7.

For the next part of his talk Filip talked about C# REPL, what is known in VS2017 as the C# Interactive Shell.

The C# REPL didn’t exist before Roslyn because, with C# being a compiled language, that kind of interactive functionality just wasn’t possible. With Roslyn, Microsoft has introduced a special mode that relaxes some of the parsing rules to make interactive development possible, including allowing syntax that normal C# code would reject as illegal.

Interestingly, as Filip explained, each line is still compiled, which gives the interactive window some interesting advantages over interpreted interactive languages, allowing developers to step back through what has been compiled. It also integrates with the currently open solution, allowing developers to manipulate and explore it in more complex ways than previously possible.

The C# REPL exists in several forms. It can be run directly from the command line, whilst the C# Interactive window in Visual Studio is a WPF wrapper around the REPL that adds extra functionality, including an “Execute in Interactive” right-click menu option to immediately run the code under the cursor. The final variation is Xamarin Workbooks, which takes Markdown-format text and uses the C# REPL to execute any code blocks in the document; output can also be sent to the iOS or Android emulators as well as running locally.
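As a flavour of what that looks like (a trivial example of my own, with an illustrative assembly name), you type statements straight into the Interactive window or the command-line REPL and the results come back immediately:

```csharp
> using System.Linq;
> var primes = Enumerable.Range(2, 50).Where(n => Enumerable.Range(2, n - 2).All(d => n % d != 0)).ToList();
> primes.Count
15
> string.Join(", ", primes.Take(5))
"2, 3, 5, 7, 11"
> // referencing an assembly from disk also works (path shortened for illustration)
> #r "MyProject.dll"
```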

Filip finished off by discussing Live Unit Testing, something I’ve already been using in VS2017. This runs tests as the code is being typed – it doesn’t even wait for the file to be saved – and it does this by hooking in as a Roslyn analyser. It’s straightforward to write a custom analyser ourselves, perhaps to enforce coding standards or to guide other developers in how to use a library – indeed some third-party library authors already ship analysers to do just this.
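The skeleton of a custom analyser looks roughly like this – a sketch of the shape of the API rather than a production example, with an invented rule that nudges people away from public fields:

```csharp
using System.Collections.Immutable;
using System.Linq;
using Microsoft.CodeAnalysis;
using Microsoft.CodeAnalysis.CSharp;
using Microsoft.CodeAnalysis.CSharp.Syntax;
using Microsoft.CodeAnalysis.Diagnostics;

[DiagnosticAnalyzer(LanguageNames.CSharp)]
public class NoPublicFieldsAnalyzer : DiagnosticAnalyzer
{
    // Invented team rule: flag public fields and suggest properties instead.
    private static readonly DiagnosticDescriptor Rule = new DiagnosticDescriptor(
        id: "TEAM001",
        title: "Avoid public fields",
        messageFormat: "Field '{0}' is public; consider a property instead",
        category: "TeamStandards",
        defaultSeverity: DiagnosticSeverity.Warning,
        isEnabledByDefault: true);

    public override ImmutableArray<DiagnosticDescriptor> SupportedDiagnostics =>
        ImmutableArray.Create(Rule);

    public override void Initialize(AnalysisContext context)
    {
        context.EnableConcurrentExecution();
        context.ConfigureGeneratedCodeAnalysis(GeneratedCodeAnalysisFlags.None);
        // Ask Roslyn to call us back for every field declaration it compiles.
        context.RegisterSyntaxNodeAction(AnalyzeField, SyntaxKind.FieldDeclaration);
    }

    private static void AnalyzeField(SyntaxNodeAnalysisContext context)
    {
        var field = (FieldDeclarationSyntax)context.Node;
        if (field.Modifiers.Any(SyntaxKind.PublicKeyword))
        {
            var name = field.Declaration.Variables.First().Identifier.Text;
            context.ReportDiagnostic(Diagnostic.Create(Rule, field.GetLocation(), name));
        }
    }
}
```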

For session number two, I stayed in the main hall for Jonathan Channon talking about Writing Simpler ASP.Net Core.

Jonathan started by talking about a project he had worked on where speed had been an issue, and where they had tracked the problem down to the large number of dependencies being wired up through dependency injection – the inversion of control container used to resolve the dependencies was relying on reflection.

The issue is with the way we do SOLID in ASP.Net, so Jonathan used a series of examples showing how we can go from a solution heavily dependent on injecting dependencies and using mocking frameworks for testing, to something that uses no dependency injection or mocking frameworks at all. He has his examples for the talk online in a GitHub repository.

What is perhaps most interesting about his final solution is that the technology he uses has been around since the earliest days of C# – delegates and static methods, along with his own Botwin library to simplify building an API – moving to a much more functional programming style than is used in traditional ASP.Net.
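I won’t try to reproduce the Botwin examples from memory, but the general shape – static functions wired together with delegates instead of container-resolved interfaces – looks something like this hypothetical sketch:

```csharp
using System;

// Instead of IOrderRepository / IEmailSender interfaces registered in a container,
// the dependencies are just function types...
public delegate Order LoadOrder(int id);
public delegate void Notify(string message);

public class Order
{
    public int Id { get; set; }
    public decimal Total { get; set; }
}

public static class OrderHandlers
{
    // ...passed straight into a static handler, which stays trivially testable:
    // in a test you hand in lambdas, no mocking framework required.
    public static string Dispatch(int orderId, LoadOrder loadOrder, Notify notify)
    {
        var order = loadOrder(orderId);
        if (order == null) return "not found";

        notify($"Order {order.Id} dispatched, value {order.Total:C}");
        return "dispatched";
    }
}

public static class Program
{
    public static void Main()
    {
        // "Production" wiring is just a matter of choosing which functions to pass in.
        var result = OrderHandlers.Dispatch(
            orderId: 42,
            loadOrder: id => new Order { Id = id, Total = 99.50m },
            notify: Console.WriteLine);

        Console.WriteLine(result);
    }
}
```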

Jonathan also highlighted a number of other blogs and videos. Mike Hadlow blogs on much the same technique, highlighting how much less code a functional style produces, and posts from Mark Seemann and Brian Geihsler also talk about how the SOLID principles lead to a profusion of dependencies that make codebases difficult to navigate.

Given that so much software these days follows the SOLID principles, this was a challenging different view on how systems should be developed, one of those “everything you think you know is wrong” type sessions.

The next session I attended was Paul Aikman talking about React.js and friends, which was one of my must-attend talks as I was due to start working with React.js for the first time the following week. Paul has posted his slides on his website.

Paul started by taking us through how his company had eventually arrived at React.js, starting out with WebForms augmented by jQuery, through Knockout and Angular 1, before settling on and sticking with React.

He also highlighted how there has been a gradual shift from performing most processing on the server side with minimal client-side functionality, to the current situation where customers expect a rich, responsive experience from websites, meaning clients are now a lot fatter. He also discussed why, having started with Angular 1, his company took the decision to shift to React: the significant changes between Angular 1 and 2 meant they would effectively have to learn a new framework anyway, so they went out to what they regarded as the best option at the time, and changed to React.

He then gave a rapid overview of how React worked, which I found really useful coming to React for the first time during the following week. He highlighted that given the years of being told to separate logic and presentation with the MVC pattern, one of the biggest surprises with React is that it mixes logic and presentation together.

Paul also highlighted that React only focuses on UI logic, following the principle of doing one thing, and doing it well. There are additional libraries such as Redux and React Router that provide the additional functionality needed to build a web application.

After lunch I decided to head along to Gary Fleming’s talk on APIs on the Scale of Decades, which was about the problems with APIs, and how developers can write an API that can evolve over time rather than locking them in to poor early decisions. Once again Gary has his talk notes online, and they are well worth a look. As a side note, Gary was using an app called Deckset to run his presentation, which takes presentations written in Markdown – considering the amount of time I spent reworking Markdown notes into a Keynote presentation recently, I’ve noted it down as something to look at further.

Gary’s talk was the one that prompted the most heated discussion of any I attended, both at the session and when I got back to the office. He started from the point that designing APIs is hard, but that what most developers want is an API that is both machine and human readable, changeable, testable and documented.

Gary started with a crash course in the concept of affordance, using Mario and animals in a tree as examples! His point was that in both cases – playing the game, or different animals using the tree in different ways – it is knowledge and experience that let us interact with the thing in front of us, and APIs should be similar. He gave further examples where knowledge and experience allow us to interact with something, such as save buttons that look like floppy disks even though many people now have never used a floppy disk.

Applying this to our APIs, the mechanisms for controlling them should be included in the information returned by the API – you shouldn’t separate them out.

Looking at a common affordance on an API: if there is a large dataset to return, generally we will page it, and there is a common set of affordances for stepping through the pages. Similarly, going back to the text adventures of the early days of computer games, there was a common set of verbs with which to interact with the game.
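As an illustrative sketch of what that means in code (my example, not Gary’s), a paged response can carry its own affordances – the links for moving through the dataset – alongside the data, so the client follows what it is given rather than constructing URLs from out-of-band documentation:

```csharp
using System.Collections.Generic;

// Hypothetical response model: the navigation affordances travel with the data.
public class PageLinks
{
    public string Self { get; set; }
    public string Next { get; set; }       // null on the last page
    public string Previous { get; set; }   // null on the first page
}

public class Page<T>
{
    public IReadOnlyList<T> Items { get; set; }
    public int PageNumber { get; set; }
    public int TotalPages { get; set; }
    public PageLinks Links { get; set; }
}

public static class Example
{
    public static Page<string> SecondPage() => new Page<string>
    {
        Items      = new[] { "item 21", "item 22", "item 23" },
        PageNumber = 2,
        TotalPages = 5,
        Links = new PageLinks
        {
            Self     = "/orders?page=2",
            Next     = "/orders?page=3",
            Previous = "/orders?page=1",
        },
    };
}
```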

A good tip Gary gave for choosing the verbs and nouns to describe these affordances was to think about how you would ask a voice assistant like Alexa or Siri to do what you want. He also suggested that well-designed affordances are effective documentation for an API – if it is clear how to use an API, you don’t need extensive documentation.

Gary then moved on to the problem of changing an API.

He used the example of the Ship of Theseus. In this thought experiment a ship is repaired over a long life until eventually every single plank and component has been replaced – is it the same ship? If we look at an API through the same lens, and we keep changing it over time, is it the same API, and when do our changes take it from version 1 to version 2?

Gary’s suggestion was that we shouldn’t be versioning our API at all. In response to the surprise from the audience, he highlighted that we cope with this every day when using websites, all of which change the API that we as users interact with; we apply our knowledge of the website and cope with the changes.

Gary then moved on to testing. His first example was to ask why we need brakes on a car. The obvious answer is to enable us to stop, but they also allow us to go faster. For the same reason we need tests on an API: to allow us to change it faster.

Fundamentally, if we know that an API will inevitably change, we need to plan for those changes. He suggested that we should be using Consumer Driven Contracts, where the consumers of the API give detailed expectations of how it should behave, and these form the basis of the tests against the API. He also highlighted the importance of using fuzzers to ensure the API responds to and handles unexpected data.

His final point provoked the most discussion. Looking back at what he had been saying, he highlighted that JSON, which is what many APIs currently use, is limited, and suggested that it is something we use by convention rather than because it is the best tool for the job. He suggested that HTML5 would be a better option as it offers a richer interface, giving greater affordance to the users of the API. There was a good deal of incredulity from members of the audience, and a similar level from our architect back at the office after the conference. Gary has subsequently said that there are limitations with using HTML5 too, but the point was as much about getting people to question why they use JSON as it was about proposing HTML5 as the solution.

My next session was also run by Gary, as I decided to pay a visit to the community room where he was running a Lean Coffee session.

The group came up with a mixed selection of topics to discuss. First off was a topic proposed by Becca Liddle, the organiser of Women in Tech Scotland, who asked about perceptions of women in technology companies. The discussion was wide-ranging and covered a number of common issues around how women are treated by company culture and by male colleagues, and also how a male-dominated tech culture can be off-putting to women and minorities. Becca had talked to a number of other women attending the conference earlier in the day and shared some horror stories of their experiences – certainly food for thought as to how we encourage a more diverse workforce in IT. We also discussed what we were currently learning and broader issues around training, and had a discussion about the impending changes being brought by GDPR, which was in some ways a bit of a relief as it seems everybody is equally concerned about it, and nobody feels they will be ready.

Next I went along to a session on Building APIs with Azure Functions by Kevin Smith. Again this was a session I attended because, as a team, we’re using Azure Functions to try and break up large bits of processing into horizontally scalable functions.

Kevin gave a good overview of the functionality available, highlighting the rapid development and simplified integrations, and also how they can be developed using Visual Studio. Kevin also has a good introduction on his website.
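For anyone who hasn’t seen one, a minimal HTTP-triggered function looks something like this – a sketch in the C# model current at the time, give or take attribute names between runtime versions:

```csharp
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;
using Microsoft.Extensions.Logging;

public static class HelloFunction
{
    // The binding attributes replace most of the plumbing: routing, auth level
    // and payload delivery are declared rather than hand-coded.
    [FunctionName("Hello")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "get", Route = "hello/{name}")] HttpRequest req,
        string name,
        ILogger log)
    {
        log.LogInformation("Saying hello to {Name}", name);
        return new OkObjectResult($"Hello, {name}");
    }
}
```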

He also gave some good insight into the downsides, including difficulties debugging functions, and in particular problems with Microsoft introducing breaking changes to the Azure Functions platform. Ironically his final demo failed on the day – I’m not sure whether that was down to a Microsoft breaking change!

My final talk of the day was Peter Shaw giving an Introduction to TypeScript for C# Developers – once again a session I attended because we’re using TypeScript for the upcoming work, and the talk served as a good introduction.

First though, a moan: Peter refused to use the microphone in the hall on the basis that he “had a loud voice”. He certainly did speak loudly enough that I, with good hearing, could hear him without a problem. However, my experience looking after the sound at church is that when somebody does this there may well be people in the audience with hearing difficulties, and nine times out of ten, when challenged like this, they won’t feel comfortable drawing attention to themselves as being unable to hear. At church the reason we ask people to use microphones is that however loud someone’s voice is, they can’t speak loudly enough to drive the induction loop that many people with hearing difficulties rely on, and a speaker refusing to use the microphone leaves those people feeling discriminated against. Sometimes they will suffer in silence, sometimes they will complain to the sound crew, but almost never will they complain to the speaker, who carries on in blissful ignorance thinking they have a loud voice and everything is fine. I hate working with a microphone too, and so do many other people, but they are there for a reason – so if you’re a speaker and there is a microphone, please use it!

Anyway, moan over, onto the talk. Peter started with an overview of why TypeScript is important. More and more applications are moving into the browser: much as Paul Aikman highlighted in his talk on React, we’re moving from applications where most of the functionality is in complicated server-side C# code to applications with richer client-side experiences written in JavaScript. Similarly, the growing variety of Internet of Things devices often run JavaScript.

For developers used to the rich, type-safe world of C#, JavaScript can be a bit of a shock. TypeScript is a language designed by Anders Hejlsberg – who also designed C# – to open up JavaScript development to back-end developers used to C#.

As such the syntax is familiar to anyone who is used to C#, and it makes the transition to JavaScript development relatively painless.

Interestingly, Peter highlighted that TypeScript is more of a pre-processor than a compiler – ultimately what is produced is plain JavaScript – but it acts like a safety net, enabling developers to write enterprise-scale JavaScript applications.

There are a number of subtle differences, however, driven by the differences in JavaScript. For example, TypeScript has union types that allow for JavaScript’s ability to change the type of a variable. Undefined and null are still usable, although the TypeScript team advise against them.

There is lots of TypeScript support around. Many of the most common JavaScript libraries already have TypeScript type definition files, allowing them to be used from TypeScript, and Peter referred us to DefinitelyTyped as a good repository of high-quality definitions.

As an introduction it was a useful talk, giving me – a C# developer taking first steps into TypeScript – confidence that it won’t be a difficult transition.

After that we had the closing section of the Developer Day with the traditional raffle and prize-giving, and as is traditional (and much to the disappointment of the kids, because an Xbox X was one of the prizes) I didn’t actually win anything. That was probably no bad thing, as I’m not quite sure how I would have got an Xbox back to Reading on the train…

Dear Y-Cam Solutions, You’ve Lost a Customer

I’ve been a Y-Cam customer for a long while. I originally had one of their early Y-Cam Black cameras, which had a pretty technical setup and uploaded to a local FTP server, and I later added one of their newer Y-Cam Knight cameras, which included a built-in Micro-SD slot but again needed an FTP server to upload to. Over the years I tried a couple of different cloud services that took pictures and video uploaded by FTP and generated alerts.

Then Y-Cam decided to change direction, including cloud storage for images as part of the deal. They didn’t give existing customers an option to migrate onto the new platform, but instead launched a version of the existing camera with new firmware that hooked up to their HomeMonitor service. After doing the maths to work out how much I’d pay in subscription fees for the existing cameras, I made the switch, and later bought one of their newer Y-Cam Evo cameras which similarly hooked up to their online service. Both cameras came with seven days of cloud storage, free forever, with options to upgrade to thirty days for a monthly charge. The company has subsequently launched an internet-connected alarm system, again with a monthly fee. I didn’t really need either of these, and just carried on with the free storage option.

The older cameras have been fine; the newer Evo was a bit of a disappointment and would quite frequently lose contact with the Y-Cam cloud servers, and Y-Cam made a total hash of launching a new iOS app, so for a long while the cameras would trigger and record video but wouldn’t actually raise alerts. They’ve never managed to handle multiple users of the app properly either: whilst our tado° heating system app will switch the heating off when the last person leaves and back on when the first person returns, whatever order we leave and come back in, the Y-Cam app can only handle location from a single phone, leading to a whole load of unnecessary alerts. Y-Cam have also consistently refused to let their cameras integrate with any of the burgeoning home automation platforms such as Apple HomeKit, Amazon Alexa or Google Home, or even to allow the cameras to be accessed from integration platforms like If This Then That, which could let users work around these limitations. Even so, I’ve stuck with Y-Cam, having invested in the cameras and because of the free storage.

Then this week I, along with all the other Y-Cam users, got an e-mail from the company telling us that forever actually ends in fourteen days, when the company will require us all to pay a monthly fee for each camera, or transfer to one of their higher-cost services. There is no option to switch the cameras to local storage – either pay the fee or they effectively brick our cameras, rendering them useless. The explanation in the e-mail makes it pretty clear what has happened:

We have endeavoured to provide our cloud camera service and support without making a monthly subscription mandatory. However, it is no longer possible to continue without requiring a monthly fee to cover the cost of providing a service for Y-cam cloud camera users.

Basically their promotional material suggesting that all you need is their free service has worked rather too well, and the whole model was actually dependent on being able to up-sell users to the extended thirty-day storage service or to one of their alarms. The problem now is that rather than dropping the seven-day storage for new customers and honouring the promise of seven-day storage forever for existing customers, they’ve decided to charge everybody, and the result is a lot of very upset customers – search Twitter for some of the responses.

That left me with a choice: do I pay them, or switch platform? To be blunt, having been early leaders in IP cameras they’ve rather been left behind, and the existing cameras don’t really perform as well as I’d like. The connectivity issues, the inability to track multiple users’ locations to deactivate the cameras, and the lousy software updates were just about tolerable on a free service, but given that experience, and the tacit admission in the e-mail that the company is in financial trouble, I have little confidence that things will get any better if I pay up. If they’d actually fixed the location issues our cameras would be uploading a lot less footage to their cloud servers anyway – one of the reasons their cloud storage is costing them so much is that the software is poor.

The old cameras still work fine, so I can swap back to using personal cloud storage, and having talked to colleagues who run other cameras, yesterday I bought a Netatmo Welcome. Unlike Y-Cam, who haven’t really changed what their cameras do over the past decade, Netatmo have been innovating with facial recognition, so the camera will only trigger if it sees someone it doesn’t recognise. Also, rather than tying you to their cloud service, Netatmo allow you to upload footage to FTP or Dropbox, much as Y-Cam did in the past. Apple HomeKit integration is already in beta, and they have an extensive selection of actions on If This Then That, allowing you to trigger all sorts of home automation from the camera.

The camera turned up today and is now all set up and working. It wasn’t all plain sailing though, as the automated setup struggled to connect to the Netatmo web service. After some digging around and a good deal of frustration, this turned out to be because the camera uses an IPSec VPN to connect to the server. My current router is a Billion BiPAC 8800NL, which has a whole set of Application Layer Gateway options, including one for IPSec, which was turned on. There are a number of online discussions suggesting that the BiPAC 8800NL’s IPSec ALG option breaks the Cisco AnyConnect VPN client and should be turned off, so I tried turning it off on my router and the Netatmo camera instantly started working.

So after the teething troubles I now have one Y-Cam camera replaced, and if Y-Cam don’t relent and either grandfather existing customers or issue firmware that allows us to use alternative storage, the other will go soon too. Y-Cam is a great example of a company that had a good start in the IP camera market but managed to squander it – if they’d innovated maybe I’d have stayed, but paying for a service that was sold to me as free? No way. Y-Cam’s loss is Netatmo’s gain.

Sorting the Frame Rate Problem Using RasPlex

Back in January I wrote about the problems of trying to get streaming video to play back smoothly from Plex on our Apple TV, or Xbox, or Fire TV, or pretty well anything. Whilst I’d got around the problem by manually switching the Apple TV back and forth, it was still not really a satisfactory solution, and it also didn’t solve the problem for 24fps movie content. I also found that even well-established apps like Netflix suffer the same problem on the Apple TV: when we were watching The Crown, the shots with trains passing the camera had exactly the same jitter problem that was coming up on my content from Plex.

After a bit of research I found that there is only one TV streaming box that can switch frame rates for Plex playback, and that is the NVIDIA Shield, but since that retails for £170 and doesn’t do much more than the Xbox, Apple TV or Fire TV options we already have, I wasn’t too keen.

From looking through the many online discussions of the problem, it seems that people running the now-deprecated Plex Home Theater had got around the problem, and people using the built-in Plex clients on smart TVs didn’t have the issue, but getting a new PC or Mac to go in the living room, or replacing our TV, wasn’t really a cheap option either.

Then I came across RasPlex which is an actively developed port of Plex Home Theater to the Raspberry Pi. Like the PC and Mac versions of Plex Home Theater it was able to switch resolution, and with the arrival of the Raspberry Pi 3, the little £33 computer is more than capable of driving 1080p video.

At this point, after my experience setting up flight tracking with a Raspberry Pi, I thought I’d be writing an explanation of how to set it all up, but RasPlex is really dead easy. The most fiddly bit of the whole process was getting the tiny screws that mount the Raspberry Pi 3 into the case I’d bought lined up with the equally tiny holes. RasPlex provide installers for Windows, Mac and Linux that set up the software on a suitable memory card, and then it is as simple as plugging the Raspberry Pi into a power socket and your TV and turning it on. The Raspberry Pi 3 has built-in WiFi that RasPlex detects, and whilst it takes a bit of time when first booted to cache data from your Plex server, once it is up and running it is fine.

To get the automatic display switching you’ll need to dig down into the advanced video settings, because by default RasPlex will stick to whatever resolution is set for the user interface, much like the commercial streaming boxes. Once that setting was changed, whatever video I threw at it worked fine on our TV – a slight pause as the TV switched frame rate and off it went. The other nice plus was that even with our seven-year-old Panasonic TX-L32S10 we didn’t need a separate remote for the Raspberry Pi: since the TV has HDMI-CEC support, we can navigate the RasPlex user interface with the regular TV remote.

There are a couple of downsides. Firstly, unlike the Apple TV, the Raspberry Pi doesn’t have a sleep mode – the power-save options in RasPlex will shut the whole Raspberry Pi down, at which point you have to cycle the power to wake it up again. Secondly, the Raspberry Pi didn’t seem able to drive the picture through the cheap HDMI switcher we use to connect our growing collection of HDMI devices to the TV.

However, even after buying the Raspberry Pi, a suitable case with heatsinks for the processors (which potentially get rather a workout), a memory card and a power supply, I still ended up with a Plex box for less than £60 – and one that plays video significantly better than any of the established players by switching the TV to the correct frame rate.

That of course just leaves one final question: if a £33 box can do it, why can’t Apple, Roku, Amazon and all the rest do the same thing? Apple and Amazon especially are selling content that would benefit from a switchable box, and yet none of them do it, instead shipping boxes that make their content look rubbish.