So have you ever felt like a total muppet stood up in front of an audience giving a presentation? Maybe so, but it seems that we should be taking inspiration from the Muppets – or at least from Sesame Street – when giving a presentation. Check out this post from the continually excellent Coding Horror blog.
A frequently updated, well-written and often insightful blog that I love reading. It covers such a wide range of topics that it's always fresh and enjoyable to read. There are often multiple posts per day, all of them detailed and a pleasure to read.
Personally I’m of the opinion that its existence is primarily about providing a platform for iPhone development on Windows – very much a case of Apple needing a platform, rather than any deep-seated belief that the Windows platform needs another browser.
From my point of view that is made even more clear by the lengths to which Apple have gone to make it look and operate exactly like the MacOS X version, even down to the look of the buttons and scroll bars.
The identical behaviour even extends to how the browser renders fonts and graphics. The picture above shows this blog open in Firefox and Safari on Windows. Looking at the fonts, you’ll notice that the text looks subtly different – some people regard it as more blurry – because Safari eschews the usual Windows ClearType in favour of the algorithm used by MacOS X. In theory, the MacOS X algorithm is intended to produce fonts that are as close to the original typeface design as possible, whilst ClearType fits letters to the pixel grid – better on screen – at the expense of accurately rendering the typeface. Coding Horror has a good article explaining the differences – ultimately it comes down to personal taste.
The other thing to note from the screen shot is the differences in the colour of the sunset picture at the top of the page. This is because Safari on Windows also treats graphics containing embedded colour space information differently. The sunset picture on the top of the page contains the colour space information from the original picture I took – Safari finds this and renders the graphic differently (although not necessarily correctly – ironically only the now defunct Mac Internet Explorer correctly interpreted colour spaces) resulting in the more vibrant orange hues that can be seen in Safari.
All of these duplicate features make it clear that alongside converting Safari itself, Apple has ported large amounts of MacOS X to make it all work! Hence if you compare the memory usage of Safari with other browsers on Windows you’ll find it’s using a lot more than anything else…
Finally, one irony of Safari on Windows is that whilst I don’t tend to use the browser much on MacOS X – preferring Firefox – I’m using Safari on Windows quite a lot, because the text looks way better on the machine at work…
Update: The Safari on Windows debate rolls onward. With the news that the browser has been downloaded over one million times in the forty-eight hours since release, there is an interesting article from a Microsoft employee who initially bemoans the fact that Safari does everything itself, and then, having read a posting by Joel Spolsky and the Coding Horror posting I mentioned above, realises that things are unlikely to change!
The Spolsky posting is a good read in terms of the history – the Apple philosophy is very much about wanting to make fonts look as close to the printed original as possible – Joel explains in more detail why this is important to the desktop publishing and design communities. Choice quote of the posting has to be this:
“Typically, Apple chose the stylish route, putting art above practicality, because Steve Jobs has taste, while Microsoft chose the comfortable route, the measurably pragmatic way of doing things that completely lacks in panache.”
I have a lot of respect for Jeff Atwood and his Coding Horror blog. He often has interesting and informed insights into software development, and generally knows what he is talking about.
Yesterday he posted an article under the heading “The Best Code is No Code At All” where, backed by comments from Wil Shipley, he argues for the benefits of code brevity – put simply, reducing the volume of code a developer has to read to understand how an application works. This is certainly something I agree with. He then gives an example of a simple change to improve code brevity:
if (s == String.Empty)
if (s == "")
He backs the suggestion with the following statement:
It seems obvious to me that the latter case is better because it’s just plain smaller. And yet I’m virtually guaranteed to encounter developers who will fight me, almost literally to the death, because they’re absolutely convinced that the verbosity of String.Empty is somehow friendlier to the compiler. As if I care about that. As if anyone cared about that!
This is one occasion where I significantly disagree with Jeff when it comes to development in .Net.
Firstly, using “” is error prone. Put a space between the quotes and the compiler won’t pick it up – it is still a valid string literal. It may be a minor change, but it will break your application, and is the kind of typo that is a pain to find. A typo in String.Empty, by contrast, will be picked up by the compiler, rather than surfacing in testing, or worse still on a customer site.
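To illustrate the point, here is a minimal sketch (the variable names are my own) showing how an accidental space inside the quotes compiles silently, whilst a typo in String.Empty would never get past the compiler:

```csharp
using System;

class TypoDemo
{
    static void Main()
    {
        string s = "";

        // Both comparisons compile, but only the first does what was
        // intended; the accidental space in the second is a perfectly
        // valid literal the compiler cannot flag.
        Console.WriteLine(s == "");        // True
        Console.WriteLine(s == " ");       // False - a silent bug

        // Misspell String.Empty (e.g. String.Emptyy) and you get a
        // compile-time error instead of a runtime surprise.
        Console.WriteLine(s == String.Empty);  // True
    }
}
```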
Secondly, using String.Empty is more efficient (although as any .Net programmer should be able to tell you, checking the length of the string is more efficient still). To understand why String.Empty is more efficient it helps to understand how .Net handles strings. In .Net, a string is immutable – it never changes, so for example routines that concatenate strings together to build up database queries will be repeatedly creating new strings throughout the whole process. The .Net Framework helpfully provides a StringBuilder object to use in such situations. However the same is true when considering any literal, so “” has to become a string object, and yet you already have String.Empty which is exactly that.
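The cost of immutability is easy to see in a quick sketch (the query text here is just an invented example): naive concatenation allocates a brand new string on every pass, which is exactly the situation StringBuilder exists to avoid:

```csharp
using System;
using System.Text;

class ConcatDemo
{
    static void Main()
    {
        // Naive concatenation: each += allocates an entirely new
        // string, because .NET strings are immutable.
        string query = "SELECT * FROM Orders WHERE Id IN (";
        for (int i = 0; i < 5; i++)
            query += i + ",";   // new string object on every pass

        // StringBuilder appends into an internal buffer instead,
        // so the loop allocates far less.
        var sb = new StringBuilder("SELECT * FROM Orders WHERE Id IN (");
        for (int i = 0; i < 5; i++)
            sb.Append(i).Append(',');

        // Same text either way - only the cost of building it differs.
        Console.WriteLine(sb.ToString() == query);  // True
    }
}
```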
However, .Net has another trick up its sleeve, in that it maintains something called the string intern pool. Every string literal is stored in this pool, and when a new literal is encountered that is already in the pool, the version from the pool is used instead, saving the overhead of creating a new object. Whilst that speeds up literals somewhat, the application still has to do slightly more work than if it is just handed a suitable object straight off. Using String.Empty is slightly more efficient than “” – and it’s nothing to do with being friendlier to the compiler.
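You can observe the intern pool directly with ReferenceEquals – a small sketch of my own to demonstrate the behaviour:

```csharp
using System;

class InternDemo
{
    static void Main()
    {
        // Two identical literals resolve to the same interned object.
        string a = "hello";
        string b = "hello";
        Console.WriteLine(ReferenceEquals(a, b));                 // True

        // A string built at runtime is equal by value, but is a
        // fresh object outside the pool...
        string tail = "lo";
        string c = "hel" + tail;
        Console.WriteLine(a == c);                                // True
        Console.WriteLine(ReferenceEquals(a, c));                 // False

        // ...until it is explicitly interned, at which point the
        // pooled instance comes back.
        Console.WriteLine(ReferenceEquals(a, string.Intern(c)));  // True
    }
}
```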
It’s entirely fair to argue that the performance differences are negligible in most situations, a point with which I’d agree, however the differences magnify if you are producing code with a lot of literals, and a lot of string comparisons. As an aside check out this posting from a former member of the CLR team at Microsoft where he ponders whether automatically interning the entire pool when an assembly is loaded is a good or bad idea.
Put simply, if you know you’re going to be repeatedly comparing against a string value, and performance is an issue, don’t use a string literal, as with every comparison you’ll get the overhead of the lookup in the string intern pool – and if the value is an empty string, don’t bother with either: check the string’s Length, or better still use String.IsNullOrEmpty if there is the possibility that the string is null (or Nothing in VB.Net), both of which are faster than a string comparison in that situation.
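The three ways of testing for an empty string side by side – a sketch with invented helper names, just to make the trade-offs concrete:

```csharp
using System;

class EmptyCheckDemo
{
    // Full string comparison - works, but the most expensive option.
    static bool IsEmptyByCompare(string s) => s == String.Empty;

    // Length check - cheaper, but throws NullReferenceException
    // if s is null.
    static bool IsEmptyByLength(string s) => s.Length == 0;

    // Handles null and empty in one call - the safest option.
    static bool IsEmptySafe(string s) => String.IsNullOrEmpty(s);

    static void Main()
    {
        Console.WriteLine(IsEmptyByCompare(""));   // True
        Console.WriteLine(IsEmptyByLength(""));    // True
        Console.WriteLine(IsEmptySafe(null));      // True - no exception
        Console.WriteLine(IsEmptySafe("text"));    // False
    }
}
```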
So sorry Jeff – I see your point about brevity, but I’m sticking with String.Empty!
It was quite amazing back in December to see the effect of having my blog linked to by Robert Scoble: almost immediately there was a noticeable jump in my traffic that lasted for a good week or so. Scobleizer, like Slashdot, is one of those sites where getting a mention can really boost your traffic, sometimes to unmanageable levels.
One example of this was the effect this week when Scoble posted about the Coding Horror blog. Now I’ve actually been reading Coding Horror for quite a while. Jeff Atwood writes a broad mix of articles, primarily about good programming practice, but broadening into all sorts of other areas.
For example, recent articles that have raised a smile include ‘Lotus Notes: Survival of the Unfittest’ – both my current and previous employers have used this delight for e-mail, and after all those years and many updates it’s still no better – and ‘Revisiting Edit and Continue’, which argues that the return of the ability to edit a running application in Visual Studio 2005 is actually a bad thing.
Anyway, now that the effects of the Scobleizing have died down, I’d definitely recommend that any .Net programmers go and take a look at Coding Horror.