
DDD 2018 at Microsoft Reading

After a busy July, I've finally got a quiet moment to catch up with my notes from the recent Developer! Developer! Developer! event held at Microsoft HQ in Reading.

I attended a real mix of sessions this year. First up was a mind-bending session led by Frances Tibble and Anita Ramanan, two software engineers at Microsoft, talking about Quantum Computing and the Q# language. The session was split into two parts, the first a crash course in the physics involved in Quantum Computing, with quite a bit of maths too. The interesting takeaway is that present day quantum computers are expensive and unstable: they are particularly sensitive to external factors, so can lose state in seconds. As a result we currently have the Quantum Development Kit, which simulates how a real quantum computer should behave.

The key difference with a quantum computer is in the bit. In classical computing a bit is either 0 or 1, but in quantum computing the quantum bit, or qubit, can also be in a superposition of the two. Taking the usual light bulb analogy for a classical bit, it's like having a dimmer attached. I really haven't got the space to cover all their content in detail, but they did a version of the same talk a few days before DDD which is online on YouTube.

Moving on I then attended Joseph Woodward talking about Web Assembly, Blazor and the Future of Web Development.

Joseph started with a run through of the history of web development, and the perennial problem that whilst there has been a relentless move towards delivering applications in a web browser, the tools to create rich applications in a browser are really pretty limited. JavaScript, the main language of the web, has become so largely by historical accident, and is pretty slow. Web Assembly is the latest of a number of attempts to replace JavaScript as the language of the web, in this case providing what is effectively a low-level byte code for the web, with other languages compiling into this byte code. At this stage it's still very much a minimum viable product, but it does seem to show some promise, with multiple languages already able to compile into Web Assembly byte code.

For C# and the other .NET languages, which compile into the intermediate language of the .NET platform, Microsoft offers Blazor, which includes a full .NET runtime compiled to Web Assembly byte code. This of course means that .NET intermediate language is then being interpreted on top of Web Assembly byte code, so there are plans for ahead-of-time compilation to avoid this double layer of interpretation.

The actual coding will be familiar to any C# programmer, with the usual dependency injection and the ability to pull in code using NuGet. Interop with JavaScript is provided, and is necessary because Web Assembly does not provide access to the DOM.

It was clear from the talk that the platform is still immature: performance is limited, and there is no threading or garbage collection yet. However it does show promise. Even if it doesn't provide a replacement for JavaScript, it does allow us to mix and match languages, picking the one best suited to a particular task.

Next was what for many people was one of the big draws of this year's DDD: the return of Barry Dorrans, now .NET Security Curmudgeon at Microsoft, but who before joining Microsoft and moving across the pond had been a regular speaker on security at developer events. Barry was presenting his Code Behind the Vulnerability session, variations of which he has presented for a number of years at conferences around the world. The great advantage of presenting it here is that it allowed developers whose companies don't have the budget to send them to paid-for conferences to see this important session. Indeed Robert Hogg, CEO of Black Marble, who organise the DDD event at Microsoft, considered the subject matter so important that he told any of his developers in the room that they'd be fired if they did anything Barry had spoken about!

The purpose of the Code Behind the Vulnerability session is basically to go through security issues that Microsoft have found in their own code, and their causes, so that other developers don't make the same mistakes. Barry updates the session periodically as new exploits and problems come to light, so it is well worth keeping an eye out online for new versions.

Barry covered eight different security advisories, including hash tables that could bring a system down if they received specific user data (the tip here being not to use user-supplied data as keys for a hash table), exposed endpoints that allowed users to work out encrypted messages, and a number of occasions where people had turned off or misused features, opening security holes – for example turning off signing on view state, allowing attackers to create .NET objects, or simply writing a GET API call that changes state.

Barry's summary slide covers the basics, but the whole slide deck is worth a read. His summary is:
– Sign your data, even when it is encrypted
– Don’t use regular expressions
– Don’t use BinaryFormatter
– Don’t overbind in MVC
– Use the right HTTP verb
– Validate your inputs

Barry's session is a critical one for anybody doing .NET development: many of the mistakes he shows are easy to make, but can have catastrophic consequences.

The next session I attended was rather lighter, but was also one that had been presented at a major conference and that Dylan Beattie was bringing to DDD. You can view the keynote version of Apps, Algorithms and Abstractions: Decoding our Digital World on YouTube, and it is broadly similar.

Dylan started off by talking about how news of his birth and a first picture made it from where he was born in Africa back to his grandparents in Oxfordshire – a process that took weeks. He then looked at technology today, where a photo can appear on the phone in your pocket and you can respond immediately. In the space of his lifetime the way we communicate has fundamentally changed. His session goes through the basic technology that underpins these changes, and is absolutely fascinating.

This was probably my favourite session of the day as it covers so many different areas of technology. It was also presented in an easy-to-digest way – I've been able to show it to my children, and they can start to understand all sorts of technological ideas.

My final session was one I picked more because I enjoy the speaker – Gary Short talking about AI dev-ops. Gary started by looking at how the principles that brought about dev-ops can be applied to AI and machine learning work, for much the same reasons. There has always been a big disconnect between data scientists and coders. Data scientists have a very niche skillset, so in the past they would do the specialist work and then hand their carefully designed models to developers to implement. However, tools are now being produced that allow data scientists to develop and implement their models themselves, with coders just connecting to these rather than reimplementing them.

Gary also had some useful tips. He highlighted that you can only optimise algorithms for false positives or false negatives, not both, so it is a business decision as to which costs more. This is particularly relevant to our products at FISCAL, as we have a continual tension between reducing the number of false positives we produce whilst not missing results, i.e. a false negative.

In summary, DDD 2018 was a good day and well worth spending a Saturday on. For many developers there isn't the budget to attend paid conferences regularly, so it is particularly good to be able to see sessions from those conferences presented live at a free community conference. Sessions like Barry's in particular contain important information about how to code securely that all developers should be hearing, not just those who work for a company with a good training and conference budget!

Windows 8 sales figures point to sluggish start

Microsoft were looking to the arrival of Windows 8 to turn around the ongoing decline in PC sales, but with the first sales numbers released, it looks like the hoped-for turnaround hasn't occurred…

Since the launch of Windows 8, sales of Windows devices in the US have dropped 21 per cent compared to the same time period last year, NPD said. Notebook sales dropped 24 per cent, but desktop sales fared a bit better with a smaller 9 per cent decline.

“After just four weeks on the market, it’s still early to place blame on Windows 8 for the ongoing weakness in the PC market,” Stephen Baker, vice president of industry analysis at NPD, said in a statement. “We still have the whole holiday selling season ahead of us, but clearly Windows 8 did not prove to be the impetus for a sales turnaround some had hoped for.”

A SQL Stored Procedure Parameter Sniffing Gotcha

This is another one of those occasional posts that is primarily for my own benefit to remind me of a particular problem, but that I’m posting publicly in case it could be of use to someone else.

On one of our systems we have a stored procedure to pull back all of the staff details for a particular project. Initially the screen used LINQ queries, but as anybody who has used LINQ can tell you, in certain situations the queries it produces can become quite unwieldy and slow, so in places like that we've swapped to using stored procedures. The stored procedure in question is really simple, consisting of a single query that takes the two procedure parameters identifying the project the staff list is required for. On our test systems the stored procedure has been running really well, returning the staff details in under a second.

However that hasn't been the case on the live system. The same query on our biggest project has been slow. Not just slightly slow – go and make a cup of coffee (including picking and grinding the coffee beans), do the Times Jumbo Crossword type slow. But when you take the query the stored procedure uses and run it directly in a SQL Server Management Studio query window, it returns in under a second; indeed the same query for the same project on our User Acceptance Test server, which is essentially an older copy of the live database, returns at a similarly high speed. It's something particular to the live server.

Not surprisingly this caused a good deal of head scratching, but on Friday afternoon I finally solved the mystery and found what was causing the slowdown, thanks to this blog post.

To understand what is going on you need to remember a few things about how SQL Server works:

  • SQL Server processes queries differently depending on a number of factors, including how many results it thinks the query is going to produce, the indexes on the tables, how the data is arranged in the tables and how the data is arranged on disk, to name a few.
  • When you create a stored procedure, SQL Server builds its execution plan only once, the first time the procedure is run, and reuses that plan every subsequent time the stored procedure is called.
  • If you use a stored procedure parameter directly in a query within that procedure, the query optimiser uses the parameter values passed on that first call when building the execution plan; if you use local variables instead, the optimiser creates a more generic plan. (This is called parameter sniffing.)
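To make that last point concrete, a procedure like ours can be sketched roughly as follows – the table, column and parameter names here are hypothetical, invented for illustration, not the actual procedure from our system:

```sql
-- Hypothetical illustration (invented table and parameter names).
-- Because @CompanyId and @ProjectId are used directly in the query,
-- the optimiser "sniffs" the values passed on the very first call and
-- caches a plan tailored to that particular project's size.
CREATE PROCEDURE dbo.GetProjectStaff
    @CompanyId INT,
    @ProjectId INT
AS
BEGIN
    SELECT StaffId, StaffName, Role
    FROM dbo.ProjectStaff
    WHERE CompanyId = @CompanyId   -- parameters used directly in the query
      AND ProjectId = @ProjectId;
END
```

If the first call after the plan is built happens to be for a tiny project, every later call pays the price of a plan tuned for a tiny project.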

Having asked around, most SQL Server users are aware of the query optimiser, and many are aware that SQL Server builds the query execution plan once – although they may not know exactly when – but relatively few, including a good few DBAs, will be aware of the difference in the way parameters and local variables are treated by the optimiser.

When you bear in mind that we have a mixture of different sized projects in our system, it starts to become rather obvious what has happened and why the query is running very slowly on one server but not on others. On some servers the first call of the stored procedure was for a small project, whilst on others it was for a big project; as a result the SQL Servers have created different execution plans, each favouring a particular project size. Unfortunately on the live server the cached plan is totally unsuitable for a project with hundreds of staff members, hence the hideously slow performance.

All I did was change the query to use local variables instead of the parameters, and then set the values of those local variables to the values of the parameters – two extra lines and a tweak of the query, and the query started returning in under a second, as on all the other servers. By virtue of having a generic query plan, the performance of the query is never going to be quite as good as one targeting a particular project size, but in a system storing a wide variety of project sizes a generic plan is exactly what is needed.
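A minimal sketch of that fix, again using hypothetical names rather than the actual procedure from our system:

```sql
-- Hypothetical sketch of the fix (invented names): copy the parameters
-- into local variables and use those in the query. The optimiser cannot
-- sniff local variable values, so it builds a generic plan from average
-- statistics rather than one tuned to the first caller's project size.
CREATE PROCEDURE dbo.GetProjectStaff
    @CompanyId INT,
    @ProjectId INT
AS
BEGIN
    DECLARE @LocalCompanyId INT = @CompanyId;  -- the two extra lines
    DECLARE @LocalProjectId INT = @ProjectId;

    SELECT StaffId, StaffName, Role
    FROM dbo.ProjectStaff
    WHERE CompanyId = @LocalCompanyId
      AND ProjectId = @LocalProjectId;
END
```

On more recent versions of SQL Server the OPTIMIZE FOR UNKNOWN query hint achieves a similar generic plan without the extra variables, but the local-variable approach above is what I used here.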

At this point, having found the problem, I started looking at other stored procedures that could potentially exhibit similar problems – as a general rule I'd recommend not putting parameters directly into queries.

If you want a more detailed explanation, complete with a simple worked example of what to do, check out this SQL Garbage Collector post.