Something a Bit Different

We’ve just finished watching the Royal Wedding of Prince Harry and Meghan Markle, and whilst in some ways it was business as usual with all the processions and pageantry, in others this was a pretty radical departure.

In terms of the service, despite the fact that the vast majority of Church of England weddings are conducted using the Common Worship service, the Royals have invariably used the traditional Book of Common Prayer wedding service, or the Alternative Services Series One update, with an understanding and explanation of marriage that is totally at odds with what most people these days think a marriage is about.

Compare the introduction that was used at the wedding of the Duke and Duchess of Cambridge, or the Book of Common Prayer introduction, with the modern introduction that Harry and Meghan had today. The traditional introductions are primarily about children – and, in the BCP version, a “remedy against sin, and to avoid fornication” – without any mention of love, which comes first in the introduction to the modern service.

As to why the Royals always go for the traditional service, you need look no further than Prince Charles, who is patron of the Prayer Book Society – which makes it all the more remarkable that his younger son used the Common Worship service straight out of the book, even with the contemporary language Lord’s Prayer. The choice of hymns and anthems too was pretty much what many regular couples would pick, without any special commissions: a couple of well-known hymns, and anthems both of which are in the repertoire of our church choir in Finchampstead – indeed The Lord Bless You and Keep You is one that pretty well any church choir in the country will know.

Where it differed from what most couples would have, and was very different from your average Royal event, was the choice of preacher. Back in 2011, the Duke and Duchess of Cambridge had the then Bishop of London, a great friend of Prince Charles. Harry and Meghan apparently asked the advice of Justin Welby, the Archbishop of Canterbury, and he recommended Michael Curry, the Presiding Bishop of the US Episcopal Church. Bishop Curry is regarded as one of the best preachers in the Anglican Church, but is also pretty controversial with its traditionalist elements, and indeed with the current US Government. He preached what for him was a short sermon at the wedding – check out some of his other sermons on YouTube – and it was quite unlike anything we’ve seen before.

If you want to see quite how different, this is the sermon the former Bishop of London preached for the Duke and Duchess of Cambridge – I had no recollection of what he actually said when I looked back:

This is the sermon Michael Curry preached today – complete with quotes from Martin Luther King, quotes from a slave song, talk of poverty, world peace, and the power of love, all delivered in a style no English Bishop would ever have attempted, let alone at a Royal Wedding.

There has, not surprisingly, been a lot of reaction. This from radio presenter James O’Brien:

This from Labour MP David Lammy:

TV Presenter Piers Morgan:

Actor David Schneider:

And Zara’s face was a picture…

Finally we have Archbishop Justin and Presiding Bishop Michael talking about the sermon outside St Albans Abbey tonight.

Ultimately Curry had an audience of billions, and probably the only chance he’ll get to speak to an audience that large. Unlike any other Royal Wedding before, he actually preached a sermon with a message rather than the usual fairly light fluffy wedding sermon we’d expect, and certainly I doubt we’ll get anything like it at the next Royal Wedding in the autumn.

Open Space at Pivotal Software

Yesterday I had my first Unconference experience, attending a one day Open Space event held at the offices of Pivotal Software, just off Silicon Roundabout on Old Street in London.

Firstly it’s probably worth explaining the concept of an Unconference or Open Space. The organisers of this event described it well: the Open Space concept came about when people observed that many of the most interesting conversations at other conferences took place in the coffee breaks. So when I arrived, the organisers had arranged a number of different rooms and environments and divided the day into a number of time slots, but all we had was a large empty wall to fill with sessions and activities, plus sheets with the four principles and the one law that would govern the day.

Looking at the principles, the first three are about how the sessions run – basically, don’t worry about who is or isn’t there, take the session as it comes, and don’t feel you have to fill the time slot:

  • Whoever comes are the right people.
  • Whatever happens is the only thing that could have happened.
  • When it’s over it’s over.

The next principle is “Butterflies and Bumblebees”, which describes the way some people move between sessions. The Bumblebees don’t spend an entire period in one session, and instead visit many of the groups and sessions, cross pollinating between the sessions with fresh views and ideas they have maybe picked up from other sessions. Butterflies are not participating in any current session, but instead are tending to their own needs, but sometimes participate. At the Open Space both character types were apparent during the day.

The one and only law is the Law of Two Feet – basically if you’re not learning, or not feeling you’re gaining anything from the session, just walk away.

Wikipedia has a much more detailed overview of the technique – whilst we were using it for a technical conference, the technique could easily be applied to any subject area.

We kicked off with an ice breaker session to split us into five groups to discuss sessions, in which we had to arrange ourselves into alphabetical order by country of birth. That proved to be an interesting exercise for several groups: the people from the United States all put themselves under A for America; I was in a group of three for England, whilst other English people were down in a group at U for United Kingdom; and there was a lady who put herself at C for Crimea because she didn’t want to offend anyone – when she was born Crimea was in the Soviet Union, it was then part of Ukraine for a long period, and it has now been annexed by Russia.

In our groups we then talked a little about what we wanted to gain from the day, and then discussed what possible sessions we could do. There was an interesting mix. Firstly, some people had existing sessions they had done elsewhere that they were willing to run again – with hindsight I actually had one of those, in that I had a copy of a Software Craftsmanship brown bag I’d done at work on my laptop – something for next time. Other people had current projects they were happy to share and talk about with other participants, or skills they were willing to share. Other session ideas emerged from the group discussion itself – for example, in our group we had a man who worked as a Biological Scientist and was happy to talk about his work with scientific software. One key driver is that you don’t need to be an expert to propose a session, so several people had topics they wanted to talk about, ranging from particular technologies or concepts, to challenging established principles, to hot topics they simply wanted to discuss.

From there, everybody with a proposed session lined up to give a quick elevator pitch of their proposal and suggest a location, taking it in turns until all the sessions had been proposed. After that there was a rationalisation stage where sessions could be moved to resolve any potential clashes and to balance out and refine the schedule. Then it was into the first session.

The first session I picked was one on Unconditional Programming. The concept comes from Michael Feathers, whose superb book on Legacy Code sits on my desk much of the time. He came up with the term in a blog post back in 2013, and has done conference talks on the same subject. It’s certainly an idea I’d come across previously, and Unconditional Programming is potentially one of the techniques Feathers discusses in his upcoming book, which was originally going to be called Brutal Refactoring: More Working Effectively with Legacy Code, but may well end up being called Unconditional Programming.

The basic concept is that in many cases the use of conditional code, such as the ubiquitous if/else construct, actually couples together bits of code that shouldn’t be coupled. Removing the conditional results in two separate, more easily testable pieces of code that are cleaner and easier to understand.

This provoked an interesting discussion, as the conditional is not really removed, merely moved: rather than existing as an if statement, the conditional is provided by polymorphism, with a factory choosing which implementation to create.
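To make the idea concrete, here is a small illustrative sketch in Python (the payment example and names are mine, not from the session): the if/else version couples both behaviours into one function, while the unconditional version splits them into independently testable classes, leaving only a lookup in a small factory.

```python
from abc import ABC, abstractmethod

# Conditional version: the if/else couples every payment style together.
def pay_conditional(method: str, amount: float) -> str:
    if method == "card":
        return f"Charged {amount} to card"
    else:
        return f"Sent invoice for {amount}"

# Unconditional version: each branch becomes its own class.
class PaymentMethod(ABC):
    @abstractmethod
    def pay(self, amount: float) -> str: ...

class Card(PaymentMethod):
    def pay(self, amount: float) -> str:
        return f"Charged {amount} to card"

class Invoice(PaymentMethod):
    def pay(self, amount: float) -> str:
        return f"Sent invoice for {amount}"

# The only "decision" left is the dictionary lookup in this factory;
# Card and Invoice can now each be tested in isolation.
METHODS = {"card": Card, "invoice": Invoice}

def pay(method: str, amount: float) -> str:
    return METHODS[method]().pay(amount)
```

The conditional hasn’t vanished – it has moved into the factory lookup – but the two behaviours are no longer coupled.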

In the discussion we also had a few proponents of functional languages – one who uses Erlang, and another who uses Haskell – who agreed that conditional-free code is easier to achieve in functional languages, in particular through pattern matching. In that situation the pattern matching provides the conditional rather than an if statement.

It was certainly an interesting discussion, especially with a mix of developers from both imperative and functional programming backgrounds.

My next session was given the title “How to produce your own Citymapper in 10 Minutes”, run by a developer who, as a side project, had been looking at routing algorithms, using London Underground routing as a test bed.

He started off showing us a picture of the London Underground map, and then highlighting that if you change the stations into nodes, and the lines between them into edges, you have a familiar concept in Computer Science: a directed graph. He highlighted that finding the shortest route in a directed graph is a problem that was solved way back in 1956 by Edsger Dijkstra, and as such, libraries to generate shortest routes are available in pretty well every programming language. He then showed us some code that used the freely available underground station data from the TfL API, together with Dijkstra’s algorithm, to work out the shortest route between two stations on the Underground.
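A minimal version of that idea fits in a few lines of Python (the stations and journey times below are made up for illustration, not real TfL data):

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest path by Dijkstra's algorithm.
    graph maps node -> list of (neighbour, minutes) edges."""
    queue = [(0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for neighbour, weight in graph.get(node, []):
            if neighbour not in seen:
                heapq.heappush(queue, (cost + weight, neighbour, path + [neighbour]))
    return None

# Toy network: stations as nodes, made-up journey times as edge weights.
tube = {
    "Paddington": [("Baker Street", 4), ("Notting Hill Gate", 5)],
    "Baker Street": [("Oxford Circus", 3)],
    "Notting Hill Gate": [("Oxford Circus", 9)],
    "Oxford Circus": [],
}
```

Running `dijkstra(tube, "Paddington", "Oxford Circus")` picks the 7-minute route via Baker Street over the 14-minute route via Notting Hill Gate.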

We then had a wide-ranging discussion about how it wasn’t quite that simple, looking at issues such as fast trains on lines like the Metropolitan line, and the fact that the code didn’t take account of the time taken to change trains. It was also highlighted that Dijkstra’s algorithm quickly becomes slow on larger datasets, even if you use intelligent heuristics to prune unpromising routes that head away from the destination.

We then finished off talking about better options for graph and tree searching, getting onto the A* algorithm, which can produce routes more quickly, and in particular a recent talk by Simon Peyton Jones covering the same subject area, called Getting from A to B – Route Planning on Slow Computers.
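The difference between A* and Dijkstra is small enough to show in a sketch: the priority queue is ordered by cost-so-far plus a heuristic estimate of the remaining distance, which steers the search towards the goal. The grid world and Manhattan heuristic below are my own illustration, not from the session.

```python
import heapq

def a_star(start, goal, neighbours, heuristic):
    """A* search: Dijkstra, but the queue is ordered by
    cost-so-far + heuristic estimate of distance remaining."""
    queue = [(heuristic(start, goal), 0, start)]
    best = {start: 0}
    while queue:
        _, cost, node = heapq.heappop(queue)
        if node == goal:
            return cost
        for nxt, step in neighbours(node):
            new_cost = cost + step
            if new_cost < best.get(nxt, float("inf")):
                best[nxt] = new_cost
                heapq.heappush(queue, (new_cost + heuristic(nxt, goal), new_cost, nxt))
    return None

# Example world: an open grid of (x, y) points, unit-cost moves.
def grid_neighbours(p):
    x, y = p
    return [((x + 1, y), 1), ((x - 1, y), 1), ((x, y + 1), 1), ((x, y - 1), 1)]

def manhattan(a, b):
    # Admissible heuristic: never overestimates the true distance.
    return abs(a[0] - b[0]) + abs(a[1] - b[1])
```

With an admissible heuristic like this, A* still finds the optimal route, but expands far fewer nodes than Dijkstra on large graphs.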

My first session after lunch was entitled “Beyond SOLID” and was proposed by a developer who wanted to challenge whether the SOLID principles were really the best way to describe Object Oriented code.

We started working through each of the principles.

Firstly he stated that he thought the Single Responsibility Principle was primarily about code cohesion, and could equally be applied to other programming paradigms – it was just good practice rather than something specific to Object Oriented code.

Moving on to the Open/Closed Principle, he thought that whilst being open for extension and closed for modification is primarily about the pluggability of code, the principle is really a bit vague and doesn’t tell us much that is valuable.

The Liskov Substitution Principle he thought was probably one of the most difficult of the principles to understand, and whilst it gives us a better model for what object oriented code should look like, it is again not specifically about object orientation.

The Interface Segregation Principle is also about cohesion of code, but this time is more object oriented as it is specifically talking about contracts, using interfaces rather than concrete classes.

Finally, the Dependency Inversion Principle is again more object oriented, as it is about depending on interfaces rather than instantiating concrete classes directly.

We then opened the discussion, and there seemed to be consensus that the SOLID principles were more principles of good design than specifically object orientation principles. We also discussed how being able to regurgitate the SOLID principles is almost a staple interview question, but it really doesn’t indicate anything much beyond the interviewee’s ability to remember the acronym. It was also pointed out that SOLID is quite a mix of high level design principles, low level concepts, some architecture, and – in the Liskov Substitution Principle – some quite deep Computer Science.

From there we tried to look at other principles or concepts that could describe Object Oriented coding, including Design Patterns, Tell Don’t Ask message passing, CQRS (command/query separation), keeping things small with replaceable parts, balanced abstractions, loose coupling, tight cohesion, code smells, and the classic OO triad of polymorphism, encapsulation, and inheritance/abstraction.

However, when we looked at those principles, very few of them applied exclusively to Object Oriented code – many were just good design practices.

As with the earlier discussion on Unconditional Programming, it was a good discussion about something that in many circles is regarded as a foundation of Object Oriented programming, but once you dig into it, it clearly doesn’t cover all of Object Oriented programming, and is much more about good design than anything specifically Object Oriented.

The next session I went along to had been convened by a developer and a mental health practitioner who were concerned about issues of developer mental health and burnout, and was a wide ranging discussion amongst a group of developers over work cultures, work pressures and how developers cope, or don’t cope with them.

From the discussion there certainly seems to be some variation in how companies handle their developers. Some will take all the hours developers will give, with apparently minimal concern for their mental well-being, at times actively encouraging and rewarding long hours that can lead to burnout. Others, although they are good at limiting actual work, encourage their developers to participate in their own time in community events and side projects, which again increases stress – several said they are now careful to limit their participation in communities to ones they consider directly relevant to their work.

We also had a good discussion about practices in our teams, covering working hours and stand up times. At one company a developer had worked for, a director would actively send developers home at about 6pm. Another deliberately did not give their developers laptops, and limited remote access, to stop developers working long hours. Another operated a hot desking policy with fixed desktop computers at each desk, with the developers moving around and pairing. This also highlighted that pairing forces people to work common hours – one company explicitly banned using any code that hadn’t been written in a pair.

This again was a very interesting session, highlighting the different ways different teams worked, and the good and bad practice across those companies.

The final session was again a discussion, this time on the broad topic of where software development overlaps with other disciplines.

We started from the observation that the Software Industry has tried a number of metaphors over the years for describing itself, such as Engineering or Craftsmanship, but that we quickly reach the limits of those metaphors.

Over the course of the hour we drew connections with at least twenty other areas and disciplines including Philosophy, Biosciences, Psychology, Linguistics, Politics, Art and Design and Sociology.

Once again, with a diverse group of people it was a good exploration of all parts of software development, drawing some parallels I’d not really thought of before.

After that the day drew to a close with a quick retrospective, where we sat and talked and shared what we had learnt during the day. As a first experience of an Open Space it was really enjoyable: from turning up with no idea of what I might learn, there was a wide variety of sessions and, as the facilitators said at the beginning, lots of interesting discussions.

Unlike other conferences I’ve been to, there was also a pretty diverse range of participants, with much closer to a 50:50 male to female split than I’ve seen elsewhere. Much as with DDD Scotland, where one of the most interesting sessions was the Lean Coffee discussion, we got a chance to talk and explore ideas that you maybe wouldn’t see as a full session at a regular conference.

My thanks go to Denise and Spike at Pivotal Software for providing the venue and organisation for the event, and all the other participants for making it such an enjoyable day.

DDD South-West 2018

Following on from my trip to Glasgow for DDD Scotland, on Saturday 21st April I took a much shorter trip along the Great Western to Bristol for this year’s DDD South West. Here, after a couple of weeks, is my write-up of the sessions I attended. There was no need for an overnight stay, but I did get to experience an almost deserted Reading Station very early on a Saturday morning!

First off I went along to Callum Whyte’s session on how his team took a Monolithic Monster and are currently turning it into a Majestic Microservice App.

As Callum told the story of how his company picked up this project, there were lots of knowing laughs from the audience – it seems a good number of us had experienced similar, when our non-technical directors had taken on nightmare projects. In this case it was a favour for a friend, who had paid a Lithuanian outsourcing company a grand total of seven thousand pounds to produce an app for the NHS; it had been delivered with “a couple of bugs” that needed sorting before it went live in a couple of months’ time. Callum’s team opened the proverbial Pandora’s Box and discovered a horrific mess, and so far his company have burned through the equivalent of half a million pounds’ worth of time and resources getting it into a fit state. The app is currently live, and is now being improved.

When they originally took on the project, the source code they received appeared to be six months out of date and wouldn’t even build; even when they got the apparently latest code, it was still different from what was on production. There were major security bugs and holes – for example, certain bugs would reveal a complete list of all the patients registered on the application. It was also running on the free tier of Amazon Web Services, and was totally unable to scale.

It was quickly decided that there wasn’t the time or money for a complete re-write, so they made a plan for what their ideal architecture would be – a micro service architecture based on Azure – and then looked at how to get there. They also then modelled how the system actually worked, and identified the critical areas that needed to be addressed first.

In opting for a micro service architecture they effectively treated the existing monolith as just another micro service. They then used the Strangler Pattern to start to replace the monolith, creating new micro services to replace parts of the existing application.

The key aspect of the strangler pattern is that, as far as possible, you don’t work on the monolith: you create a total replacement alongside it and then connect into that new replacement, leaving the old monolith behind. Eventually you will have totally replaced the old code. The only work on the monolith is to reroute to the new micro services.
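The routing at the heart of the pattern can be sketched in a few lines (the endpoint names and handlers below are invented for illustration, not from Callum’s system): a facade sends migrated endpoints to the new services and lets everything else fall through to the legacy monolith, so callers never notice where the code now lives.

```python
def monolith_handler(path: str) -> str:
    # Stands in for the legacy application.
    return f"monolith handled {path}"

def new_orders_service(path: str) -> str:
    # Stands in for a newly extracted micro service.
    return f"orders microservice handled {path}"

# Routes migrated so far; grows as the monolith is strangled.
MIGRATED = {"/orders": new_orders_service}

def route(path: str) -> str:
    # Route by the first path segment; unmigrated paths fall
    # through to the monolith unchanged.
    prefix = "/" + path.strip("/").split("/")[0]
    handler = MIGRATED.get(prefix, monolith_handler)
    return handler(path)
```

As each new service comes online, one more entry goes into the routing table, and the monolith handles one fewer part of the application.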

Whilst the original monolith had no tests, the new code was developed with a full test suite. They made extensive use of Azure Functions, but kept them small – don’t create a new monolith with giant Azure Functions!

They also made use of Azure API Gateways, which allowed them to reroute parts of a single API either to the old monolith or to the new functions.

Callum particularly recommended Aram Koukia’s series of blogs on A Microservices Implementation Journey as a good introduction to developing micro services, and said this was something they used to onboard new developers onto their team.

For session number two I had a total change of gear, and went along to hear Dan Clarke talk about Developer Productivity.

Dan’s session was a good selection of hints and tips, reinforcing some things I do already, and pointing me towards tools to try. Even better, I later found out that for giving feedback on Dan’s talk I had won a licence for LinqPad, a tool I’ve used previously and that Dan demonstrated during the talk.

Dan started off talking about the importance of taking notes and snippets. He stressed the importance, especially for a contractor, of keeping the notes in a place that goes with you, not one which stays at your company, as they are a personal goldmine. For a long while I’ve kept a notebook at work where I write down notes during the day (and at DDD events); more recently I’ve taken to keeping a daily work journal of key events, and have already used it to refer back to work I’ve done previously. The key thing is to record anything that you think might be useful in the future.

Dan then talked about focus, recommending the use of headphones to filter out distractions – to be fair, I’ve often found headphones annoying in themselves. He also talked about the Pomodoro Technique as something he has found valuable. Dan has written his own desktop application for keeping track of his time, called Tomatoad. He also recommended Music to Flow By, a set of Pomodoro-length tracks designed to have the right tempo and style to stimulate the brain to perform at its best, but not be so interesting that you end up listening to the music rather than focusing.

Dan then moved on to some tool recommendations.

He kicked off with LinqPad, which I had first seen as a result of buying Joseph Albahari’s excellent C# in a Nutshell book, which I have updated through a number of subsequent revisions. As an aside, if you want an excellent book on C# this is my number one recommendation – whilst you could get much the same from the MSDN documentation, this book covers it in a much more readable format, and before I switched largely to e-Books, this was the book I kept to hand on my desk. Back to LinqPad: the tool has grown massively from when I originally looked at it, and it was great to win the prize of a free licence. It is already installed on my PC at work as a handy scratchpad for working with LINQ, and for querying databases.

Dan then moved on to talking about Resharper, a tool that I and lots of other developers have installed, but one which many of us fail to use to the full. Whilst Dan showed some parts of the tool I was already aware of, I was still able to pick up some good tips, and realised I was only using parts of some other features.

Dan also talked about the resurgence of the Command Line. He conceded that the Windows Command Line is still a bit rubbish, but highlighted that PowerShell is a lot better. He recommended some useful tools.

Firstly he pointed me towards posh-git, a PowerShell extension that helps when working with Git in PowerShell. The most obvious improvement is that it changes the PowerShell prompt to give a compact status summary when the current directory is within a Git repository. This again is now installed on my PC at work.

For working better with multiple console windows he recommended ConEmu, a console window that wraps any other console application, be that any of the Windows consoles like the regular command line or PowerShell, or any of the Unix variants.

For better clipboard handling he recommended Ditto, a clipboard extension that significantly extends the functionality of the standard clipboard.

Moving on, Dan recommended the use of mind mapping, a technique I’ve used quite frequently for recording subject areas when modelling systems. He also talked about how to be more organised using to-do lists and GTD techniques, and about techniques for achieving Inbox Zero, which were pretty similar to the process I’ve used for many years on my Mac at home using MailActOn and MailTags.

He finished off with a look at keeping your brain healthy, stressing the importance of not multi-tasking, and of eating and drinking to keep your brain in good shape – quite the opposite of traditional developer food, but again something I’ve been trying to stick to for a while already.

Next I moved on to a Git Deep Dive presented by James World. I’ve been using Git for a while, and I certainly agree with James’ opening point that Git is a tool that takes most people three to six months to start to get their heads around. As he said, the underlying concept of Git is beautifully simple, built around the commit. However, built around that is a truly hideous user experience, and the documentation really doesn’t help by using multiple terms for the same thing.

James showed something similar to the techniques we are using at work to smooth out the sometimes quite disjointed path developers take to reach a solution, presenting a more straightforward story in the history.

He also highlighted a number of important Git concepts that are sometimes missed.

First off he used the git hash-object command to compute the unique, immutable hash value which is at the heart of how Git operates. A key concept is that the actual filename is separate from that hash – the hash is computed from the file contents alone.
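Git’s blob hashing is simple enough to reproduce yourself, which makes the point that only the content matters: Git prepends a `blob <size>\0` header to the content and takes the SHA-1. This Python sketch mimics `git hash-object` for blobs:

```python
import hashlib

def git_blob_hash(content: bytes) -> str:
    """Reproduce `git hash-object` for a blob: SHA-1 of a
    'blob <size>\\0' header followed by the raw content.
    The filename plays no part in the hash at all."""
    header = b"blob %d\x00" % len(content)
    return hashlib.sha1(header + content).hexdigest()
```

For example, `git_blob_hash(b"test content\n")` gives the same hash that `echo 'test content' | git hash-object --stdin` prints.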

When looking at merges, James highlighted that Git is able to do an octopus merge, where it merges from multiple branches at once. He also said that we really shouldn’t try it!

Another useful tip, which explains some behaviour I’ve seen from Git, is that Git can remember how you resolved a conflict in the past and will reuse that resolution if a similar conflict comes up in future. This certainly explains some weird merge problems I’ve come across, where Git has automatically done some strange things because it has been reusing a merge it had seen me do previously.

James also demonstrated rebasing, and the use of an interactive rebase, something we use extensively at work when working with multiple branches.

He also talked about various different branching strategies and how we can use them with Git.

James finished off with a look at the libgit2sharp package on NuGet, which allows us to manipulate and analyse a Git repository from C# – this lets us do some powerful analysis of commits and commit messages, to look at our projects in useful ways.

At lunchtime there were some Grok talks, which I went along to. The first, which I came into towards the end of, was a whistle-stop tour of serverless. Following that we had an interesting demo of the power of the new Linux Subsystem in Windows 10, including demonstrating it running on a pretty low spec Microsoft Surface. The talks finished off with a look at the potentially pretty philosophical question of exactly how big a Microservice should be.

My first session after lunch was one which I had wanted to attend at DDD Scotland, but had been unable to as it clashed with another session I was interested in. It was great to get to see it here: Ismail Mayat talking about Teaching an Old Dog New Tricks.

The session had grown out of Ismail having attended training given by Uncle Bob Martin himself, someone whose evangelical zeal for clean code I have been following too.

Ismail made liberal use of quotes throughout his talk, including this gem, known as Weinberg’s Second Law:

If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilisation.

He started off with an exploration of why developers write lousy code, highlighting that awful code is most often written by reasonable developers but in awful circumstances.

Often developers are under time pressure and cut corners, planning to go back and sort out the mess, but they never do, and the mess grows and festers. He also highlighted the problems of working with the latest hipster framework or tool, and the lousy code that can result from trying to learn a new framework whilst solving a complex problem. He also highlighted that often when developers start on something they do not fully understand the problem – more often than not a developer’s understanding of the problem is growing and forming as they code the solution.

He came up with a good analogy for working with lousy code: you’re working on your house and you have to do a simple job like changing the doorbell, but in changing that doorbell the oven explodes. This is just the kind of thing we experience with difficult to maintain code.

He then went through a number of techniques that Uncle Bob covers in his books: using descriptive variable names, using nouns to name classes and verbs to name methods, limiting function length, the Single Responsibility Principle for functions, and not using booleans as control parameters on functions.

He also highlighted the importance of keeping functions pure – they shouldn’t mutate state and should have no side effects. It is important that if you repeatedly send the same inputs to a function, you repeatedly get the same outputs.
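A tiny sketch of the difference (the functions here are invented for illustration): the impure version mutates shared state, so the same input gives different outputs on successive calls, while the pure version always behaves the same way.

```python
running_total = 0

def add_impure(x: int) -> int:
    # Side effect: mutates module-level state, so the same input
    # can return different results on each call.
    global running_total
    running_total += x
    return running_total

def add_pure(total: int, x: int) -> int:
    # No side effects: same inputs always give the same output.
    return total + x
```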

Inside every large program is a small program trying to get out.

He suggested that the software industry is currently in a similar position to medicine in the Victorian era – we are making great strides with techniques and are starting to understand the problems, but we’re not there yet, and patients are still dying on the table!

He suggested that Test Driven Development is one way we are starting to apply engineering rigour to software development, and that it is an important technique because of the well-established difficulty of adding tests to an existing system after the event.

Alongside the Uncle Bob books he also recommended a couple of others that are on my read list: The Art of Unit Testing and Growing Object Oriented Software Guided by Tests.

I finished off with Joseph Woodward talking about Patterns and Practices for Building a Better Web Architecture.

The basic purpose of the talk was to look at Joseph’s experience of exploring whether it was possible to improve on Web API – and, not surprisingly given he was presenting on it, he was clear that it was.

Firstly he talked about the Command Query Responsibility Segregation pattern, more commonly known as CQRS. The pattern was first described by Greg Young, and separates the usual design of Web API controllers into two parts: commands that write, and queries that read. Commands mutate state; queries do not.

There are a number of reasons for doing this, most importantly it allows us to use different models for reading and writing. Trying to construct a single model that works for both reading and writing is complicated, and results in most services violating the single responsibility principle.

We also have different requirements for the commands that write and the queries that read – often we will be reading much more than writing – so keeping the two separate gives us more flexibility over our architecture.
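The shape of the split can be sketched quickly (the handler and message names below are invented for illustration; Greg Young’s pattern itself is language-agnostic): commands mutate state and return nothing useful, queries read state and mutate nothing, and the two sides are free to use entirely different models.

```python
class RegisterPatient:
    """Command: an intent to change state."""
    def __init__(self, name: str):
        self.name = name

class ListPatientNames:
    """Query: a request to read state."""
    pass

class PatientService:
    def __init__(self):
        self._patients = []

    def handle_command(self, command: RegisterPatient) -> None:
        # Write side: mutates state, returns nothing.
        self._patients.append(command.name)

    def handle_query(self, query: ListPatientNames) -> list:
        # Read side: no mutation, just a view of the data.
        return list(self._patients)
```

In a real system the read side would often be a separate, denormalised model optimised for queries, rather than the same list the write side maintains.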

Next he moved on to techniques for loosening the coupling between our services using the Command Dispatcher pattern and the Mediator pattern, in particular making use of MediatR, a .NET library that allows us to decouple dependencies between services in a micro service environment.

Joseph also showed a useful extension for VS Code called REST Client, which I’ve taken to using for testing APIs over REST – although in our system a lot of the APIs have Swagger, the parts built from Azure Functions in particular do not have that option, and REST Client is useful for testing those.

Joseph gave a number of good tips for how to design our Web API projects, removing business logic from controllers, and decoupling the domain from the UI framework.

He also highlighted that often we put validation logic in two places, and are restricted by the way ASP.NET applies validation as attributes on fields. He recommended using Jeremy Skinner’s FluentValidation library, which allows more complex validation rules to be constructed, and lets us encapsulate validation in validation objects derived from the AbstractValidator class. This allows us to reuse custom validation across different objects, and offers a much more flexible and reusable way to validate.

Another library he recommended was The Polly Project, which again helps us separate our system into micro services by implementing the important Circuit Breaker, Isolation and other resilience patterns in a fluent and thread safe manner. We’re not currently using these patterns in our systems at work, but we probably should be as the system grows.
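To show what the circuit breaker pattern buys you, here is a toy Python sketch of the idea (this is an illustration of the pattern, not Polly’s actual API): after a threshold of consecutive failures the circuit opens, and further calls fail fast instead of hammering the broken dependency.

```python
class CircuitOpenError(Exception):
    pass

class CircuitBreaker:
    def __init__(self, threshold: int = 3):
        self.threshold = threshold
        self.failures = 0

    def call(self, func, *args):
        # Open circuit: fail fast without touching the dependency.
        if self.failures >= self.threshold:
            raise CircuitOpenError("circuit open: failing fast")
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            raise
        self.failures = 0  # a success closes the circuit again
        return result
```

A production breaker (like Polly’s) would also reset after a timeout via a “half-open” state, letting the occasional trial call through to see if the dependency has recovered.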

Joseph finished off talking about how we structure our projects. The way Visual Studio often encourages us to arrange them is grouped by technical concern, so we put all our controllers together in one folder, all our views in another and so on. Instead Joseph recommended arranging by domain concern, so all the related objects are together, and developers only have to look in one place for everything needed for a particular feature.

Joseph also recommended a couple of videos that helped him with how to construct his controllers better: Slices Not Layers by Jimmy Bogard and Fat Controller CQRS Diet by Derek Comartin.

One final note: Joseph was the first person I’ve seen at DDD using JetBrains Rider under macOS – I’ve experimented with it, but ultimately I still use VS2017, as that is what we have at work. However, on a Mac I can actually get Rider to work, whereas the Mac version of Visual Studio has never worked for me.

So that is an overview of my day at DDD South West. As with all the other DDD events, my thanks to all the volunteers who put it together and spoke, as always I learned loads during the day, and it is great to have people in the community willing to share their expertise for free like this. With so many companies with limited training budgets and resources it is great for us as developers to be able to keep our skills updated and relevant without breaking the bank!