I rambled on and on a few weeks back about how much performance matters. Although I got a lot of contrary feedback, most of it had more to do with my deliberately controversial title than with the content of the article. I had intended to post a redux, but something more concise is on my mind lately.
I can’t tell you how many times I’ve heard the age-old adage echoed inappropriately and out of context:
That immutability facilitates increased degrees of concurrency is an oft-cited dictum. But is it true? And either way, why?
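As a minimal illustration of the dictum (a hypothetical sketch in Java rather than .NET; the `Point` type is invented for this example): an immutable value can be handed to any number of threads without locks, because no thread can ever observe it mid-mutation.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Immutable: all fields final, no setters; "mutation" returns a fresh value.
final class Point {
    final int x, y;
    Point(int x, int y) { this.x = x; this.y = y; }
    Point translate(int dx, int dy) { return new Point(x + dx, y + dy); }
}

public class ImmutableSharing {
    public static void main(String[] args) throws InterruptedException {
        Point origin = new Point(0, 0);
        ExecutorService pool = Executors.newFixedThreadPool(4);
        // Many threads "translate" the same instance concurrently; no locking
        // is needed because origin can never change underneath them.
        for (int i = 0; i < 100; i++) {
            pool.execute(() -> {
                Point moved = origin.translate(1, 1); // a new object, not a write
                assert moved.x == 1 && moved.y == 1;
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        System.out.println(origin.x + "," + origin.y); // still 0,0
    }
}
```

The key property: every shared read is of state that was fully initialized before publication, so the usual read/write races simply cannot arise.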
In .NET today, readonly/initonly-ness is in the eye of the provider. Not the beholder.
Partially-constructed objects are a constant source of difficulty in object-oriented systems. These are objects whose construction has not completed in its entirety prior to another piece of code trying to use them. Such uses are often error-prone, because the object in question is likely not in a consistent state. Because this situation is comparatively rare in the wild, however, most people (safely) ignore or remain ignorant of the problem. Until one day they get bitten.
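To make the hazard concrete, here is a deliberately simplified sketch (in Java rather than .NET; `EventSource`, `Widget`, and the register-fires-immediately behavior are all invented for illustration) in which `this` escapes the constructor and a callback observes the object before its fields are assigned:

```java
// A hypothetical event source that, for illustration, invokes a listener
// immediately upon registration.
interface Listener { void onEvent(); }

class EventSource {
    void register(Listener l) { l.onEvent(); }
}

class Widget implements Listener {
    static int observed = -1;  // records what the callback saw
    private int size;

    Widget(EventSource src) {
        src.register(this); // BUG: 'this' escapes before construction finishes
        this.size = 42;     // runs only after the callback has already fired
    }

    public void onEvent() {
        observed = size;    // sees the default 0, not 42
    }
}

public class PartialConstruction {
    public static void main(String[] args) {
        new Widget(new EventSource());
        System.out.println(Widget.observed); // prints 0: a partially constructed view
    }
}
```

This single-threaded version fails deterministically; the multi-threaded variants (publishing `this` to another thread from inside a constructor) are the same bug, just intermittent.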
My article about Transactional Memory (TM) was picked up by a few news feeds recently.
We use static analysis very heavily in my project here at Microsoft, as a way of finding bugs and/or enforcing policies that would have otherwise gone unenforced. Many of the analyses we rely on are in fact minor extensions to the CLR type system, and verge on “effect typing”, an intriguing branch of type systems research that has matured significantly over the years.
Simon Peyton Jones was in town a couple weeks back to deliver a repeat of his ECOOP’09 keynote, “Classes, Jim, but not as we know them. Type classes in Haskell: what, why, and whither”, to a group of internal Microsoft language folks. It was a fantastic talk, and pulled together multiple strands of thought that I’ve been pondering lately, most notably the common thread amongst them: interface abstraction.
One of my comments in the 2nd edition of the .NET Framework design guidelines (on page 164) was that you can use extension methods as a way of getting default implementations for interface methods. We’ve actually begun using these techniques here on my team. To illustrate this trick, let’s rewind the clock and imagine we were designing new collections APIs from day one.
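The guideline itself is about C# extension methods, but the shape of the trick can be sketched in Java too: keep the interface authors must implement minimal, and layer the "default" operations on top as helpers written once against that interface (the names `MiniEnumerable` and `Enumerables` are invented for this sketch):

```java
import java.util.Iterator;
import java.util.List;

// The minimal surface implementers must provide.
interface MiniEnumerable<T> extends Iterable<T> { }

// "Default implementations" written once against the minimal interface,
// much as C# extension methods are written against IEnumerable<T>.
final class Enumerables {
    private Enumerables() { }

    static <T> int count(MiniEnumerable<T> src) {
        int n = 0;
        for (T ignored : src) n++;
        return n;
    }

    static <T> boolean any(MiniEnumerable<T> src) {
        return src.iterator().hasNext();
    }
}

public class DefaultsDemo {
    public static void main(String[] args) {
        // Implementers supply only iterator(); count/any come for free.
        MiniEnumerable<Integer> nums = () -> List.of(1, 2, 3).iterator();
        System.out.println(Enumerables.count(nums)); // 3
        System.out.println(Enumerables.any(nums));   // true
    }
}
```

What C#'s extension methods add over this is syntax: callers write `nums.Count()` rather than `Enumerables.count(nums)`, so the helpers read as if they were interface members.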
Sometimes you need to wait for something before proceeding with a computation.
Rewind the clock to mid-2004. Around this time awareness about the looming “concurrency sea change” was rapidly growing industry-wide, and indeed within Microsoft an increasing number of people – myself included – began to focus on how the .NET Framework, CLR, and Visual C++ environments could better accommodate first-class concurrent programming. Of course, our colleagues in MSR and researchers in the industry more generally had many years’ head start on us, in some cases dating back 3+ decades. It is safe to say that there was no shortage of prior art to understand and learn from.
Say you’ve got a Task<T>. Well, now what?
Well, Visual Studio 2010 Beta 2 is out on the street. It contains plenty of neat new things to keep one busy for at least a rainy Saturday. I proved this today.
Embarrassingly, I neglected to write about the oldest trick in the book in my last post: designing the producer/consumer data structure to reduce false sharing. As I’ve written about several times previously (e.g. see here), and more so in the book, false sharing can be deadly and ought to be avoided.
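A hedged sketch of the trick (in Java; the field layout is illustrative, and the `long`-field padding is a common heuristic for 64-byte cache lines rather than a guarantee): keep the producer's and consumer's hot cursors on separate cache lines so each side's writes don't keep invalidating the line the other side is reading.

```java
// Single-producer/single-consumer ring buffer with the head and tail
// cursors padded apart so they (likely) land on different cache lines.
class SpscQueue {
    private final Object[] buffer;
    private final int mask;

    private long h1, h2, h3, h4, h5, h6, h7;   // padding before head
    private volatile long head;                // consumer's cursor
    private long t1, t2, t3, t4, t5, t6, t7;   // padding between cursors
    private volatile long tail;                // producer's cursor
    private long e1, e2, e3, e4, e5, e6, e7;   // trailing padding

    SpscQueue(int capacityPow2) {              // capacity must be a power of two
        buffer = new Object[capacityPow2];
        mask = capacityPow2 - 1;
    }

    boolean offer(Object item) {               // called only by the producer
        long t = tail;
        if (t - head == buffer.length) return false; // full
        buffer[(int) (t & mask)] = item;
        tail = t + 1;                          // volatile write publishes the slot
        return true;
    }

    Object poll() {                            // called only by the consumer
        long h = head;
        if (h == tail) return null;            // empty
        Object item = buffer[(int) (h & mask)];
        head = h + 1;
        return item;
    }
}
```

Without the padding, `head` and `tail` would almost certainly share a line, and every enqueue would ping-pong that line away from the dequeuing core. (A modern JVM alternative is the JDK-internal `@Contended` annotation, which does the same spacing for you.)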
It is common for two threads to need to communicate with one another, typically to exchange some piece of information. This arises in low-level shared-memory synchronization (as in PLINQ’s asynchronous data merging), in the implementation of higher-level patterns like message passing, in inter-process communication, and in countless other situations. If only two agents partake in this arrangement, however, it is possible to implement a highly efficient exchange protocol. Although the situation is rather special, exploiting this opportunity can lead to some interesting performance benefits.
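The two-party case is special enough that the JDK ships a primitive dedicated to it, `java.util.concurrent.Exchanger`. As a rough analog of the kind of protocol described here (sketched in Java rather than .NET): each party blocks until the other arrives, and the two values swap in a single rendezvous.

```java
import java.util.concurrent.Exchanger;

public class TwoPartyExchange {
    public static void main(String[] args) throws InterruptedException {
        Exchanger<String> slot = new Exchanger<>();

        Thread worker = new Thread(() -> {
            try {
                // Blocks until main arrives; then the two values swap.
                String fromMain = slot.exchange("result-from-worker");
                System.out.println("worker got: " + fromMain);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        String fromWorker = slot.exchange("request-from-main");
        worker.join();
        System.out.println("main got: " + fromWorker); // result-from-worker
    }
}
```

Because exactly two agents participate, the implementation needs only a single slot and one compare-and-swap in the common case; no queue, no broadcast, no lock convoy.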