Simon Peyton Jones was in town a couple of weeks back to deliver a repeat of his ECOOP’09 keynote, “Classes, Jim, but not as we know them. Type classes in Haskell: what, why, and whither”, to a group of internal Microsoft language folks. It was a fantastic talk, and it pulled together multiple strains of thought that I’ve been pondering lately, most notably the common thread amongst them: interface abstraction.
One of my comments in the 2nd edition of the .NET Framework Design Guidelines (on page 164) was that you can use extension methods as a way of getting default implementations for interface methods. We’ve actually begun using this technique here on my team. To illustrate the trick, let’s rewind the clock and imagine we were designing new collections APIs from day one.
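To make the trick concrete, here is a minimal sketch; the ISequence&lt;T&gt; name and its members are invented purely for illustration. Implementers supply only an enumerator, and extension methods layer default behavior on top, much as LINQ’s standard query operators do for IEnumerable&lt;T&gt;:

    using System.Collections.Generic;

    // A hypothetical minimal collection interface: implementers write just
    // GetEnumerator and pick up the rest "for free" via extension methods.
    public interface ISequence<T>
    {
        IEnumerator<T> GetEnumerator();
    }

    public static class SequenceExtensions
    {
        // A default Count, implemented purely in terms of enumeration.
        public static int Count<T>(this ISequence<T> sequence)
        {
            int count = 0;
            IEnumerator<T> e = sequence.GetEnumerator();
            while (e.MoveNext()) count++;
            return count;
        }

        // A default Contains, likewise written once against the interface.
        public static bool Contains<T>(this ISequence<T> sequence, T value)
        {
            IEnumerator<T> e = sequence.GetEnumerator();
            while (e.MoveNext())
                if (EqualityComparer<T>.Default.Equals(e.Current, value))
                    return true;
            return false;
        }
    }

Any type implementing ISequence&lt;T&gt; gets Count and Contains without writing a line of code, and a concrete collection that can do better (say, an array-backed one with an O(1) count) can simply provide an instance method, which C# overload resolution prefers over the extension.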
Sometimes you need to wait for something before proceeding with a computation.
Rewind the clock to mid-2004. Around this time, awareness of the looming “concurrency sea change” was growing rapidly industry-wide, and indeed within Microsoft an increasing number of people – myself included – began to focus on how the .NET Framework, CLR, and Visual C++ environments could better accommodate first-class concurrent programming. Of course, our colleagues in MSR and researchers in the industry more generally had many years’ head start on us, in some cases dating back 3+ decades. It is safe to say that there was no shortage of prior art to understand and learn from.
Say you’ve got a Task<T>. Well, now what?
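As a trivial sketch of the two most common answers, blocking for the value versus attaching a continuation, both using the .NET 4 task APIs:

    using System;
    using System.Threading.Tasks;

    class Program
    {
        static void Main()
        {
            Task<int> task = Task.Factory.StartNew(() => 21 * 2);

            // Option 1: attach a continuation that runs once the task completes.
            Task done = task.ContinueWith(
                t => Console.WriteLine("Continuation saw {0}", t.Result));

            // Option 2: block the current thread until the result is available.
            Console.WriteLine("Result was {0}", task.Result);

            done.Wait(); // keep the process alive until the continuation runs
        }
    }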
Well, Visual Studio 2010 Beta 2 is out on the street. It contains plenty of neat new things to keep one busy for at least a rainy Saturday. I proved this today.
Embarrassingly, I neglected to write about the oldest trick in the book in my last post: designing the producer/consumer data structure itself to reduce false sharing. As I’ve written several times previously (e.g., see here), and at greater length in the book, false sharing can be deadly and ought to be avoided.
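For the curious, here is roughly what that looks like in code. This is a sketch only: the type names are mine, and the 64-byte cache line size is an assumption. The consumer-owned head index and the producer-owned tail index are pinned to separate cache lines, so neither thread’s writes repeatedly invalidate the line the other is reading:

    using System.Runtime.InteropServices;
    using System.Threading;

    // Head and Tail forced onto separate (assumed 64-byte) cache lines,
    // with padding before and after so neighbors don't share lines either.
    [StructLayout(LayoutKind.Explicit, Size = 3 * 64)]
    internal struct PaddedIndices
    {
        [FieldOffset(1 * 64)] internal int Head; // advanced by the consumer only
        [FieldOffset(2 * 64)] internal int Tail; // advanced by the producer only
    }

    // A toy single-producer/single-consumer bounded queue built on top.
    internal sealed class SpscQueue<T>
    {
        private readonly T[] m_items;
        private PaddedIndices m_indices;

        internal SpscQueue(int capacity) { m_items = new T[capacity]; }

        internal bool TryEnqueue(T item)
        {
            int tail = m_indices.Tail; // only we write Tail; a plain read is fine
            int next = (tail + 1) % m_items.Length;
            if (next == Thread.VolatileRead(ref m_indices.Head))
                return false; // full
            m_items[tail] = item;
            Thread.VolatileWrite(ref m_indices.Tail, next); // publish the item
            return true;
        }

        internal bool TryDequeue(out T item)
        {
            int head = m_indices.Head; // only we write Head; a plain read is fine
            if (head == Thread.VolatileRead(ref m_indices.Tail))
            {
                item = default(T);
                return false; // empty
            }
            item = m_items[head];
            Thread.VolatileWrite(ref m_indices.Head, (head + 1) % m_items.Length);
            return true;
        }
    }

Without the padding, Head and Tail would almost certainly share a cache line, and every enqueue would ping-pong that line away from the dequeuing core (and vice versa), even though the two threads never touch the same field.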
Commonly, two threads must communicate with one another to exchange some piece of information. This arises in low-level shared-memory synchronization (as in PLINQ’s asynchronous data merging), in the implementation of higher-level patterns like message passing, in inter-process communication, and in countless other situations. If only two agents partake in the arrangement, however, it is possible to implement a highly efficient exchange protocol. Although the situation is rather special, exploiting it can lead to some interesting performance benefits.
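Here is a tiny sketch of the flavor of thing this enables; it is hypothetical code, not the actual protocol from the post. Because exactly one producer and one consumer participate, a single-slot handoff needs only a volatile flag and a spin-wait, with no locks or interlocked operations on the fast path:

    using System.Threading;

    // A single-slot handoff between exactly one producer and one consumer.
    internal sealed class SingleSlot<T>
    {
        private T m_value;
        private volatile bool m_full;

        internal void Put(T value) // called by the producer only
        {
            SpinWait spin = new SpinWait();
            while (m_full) spin.SpinOnce(); // wait for the consumer to drain
            m_value = value;
            m_full = true; // volatile write publishes m_value
        }

        internal T Take() // called by the consumer only
        {
            SpinWait spin = new SpinWait();
            while (!m_full) spin.SpinOnce(); // wait for the producer
            T value = m_value;
            m_full = false; // hand the slot back
            return value;
        }
    }

The moment a third thread enters the picture this falls apart (two producers could both observe m_full == false and race on m_value), which is exactly why restricting the arrangement to two agents buys so much.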
I’ve grown convinced over the past few years that taming side effects in our programming languages is a necessary prerequisite to attaining ubiquitous parallelism nirvana. Although I am continuously exploring the ways in which we can accomplish this – ranging from shared-nothing isolation to purely functional programming, and everything in between – what I wonder most about is whether the development ecosystem at large is ready and willing for such a change.
In this blog post, I’ll demonstrate building some very simple (but nice!) synchronization abstractions: a Lock type and a standalone ConditionVariable class. And we’ll use a few new types in .NET 4.0 in the process. I had to implement a condition variable recently – the joys of developing a new operating system / platform from the ground up – and decided to put together a toy example for a blog post as I went. Warning: this is for educational purposes only.
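As a preview of the shape of the thing, here is a toy sketch in that spirit: educational only, like the post itself, and not the production code. The Lock is built on SemaphoreSlim, and the ConditionVariable parks each waiter on its own ManualResetEventSlim; both types are new in .NET 4.0:

    using System.Collections.Generic;
    using System.Threading;

    public sealed class Lock
    {
        private readonly SemaphoreSlim m_gate = new SemaphoreSlim(1, 1);
        public void Enter() { m_gate.Wait(); }
        public void Exit()  { m_gate.Release(); }
    }

    public sealed class ConditionVariable
    {
        private readonly Queue<ManualResetEventSlim> m_waiters =
            new Queue<ManualResetEventSlim>();

        // Call with theLock held: releases it, blocks until pulsed, and
        // reacquires it before returning. Enqueuing the waiter before the
        // release is what prevents a lost wakeup.
        public void Wait(Lock theLock)
        {
            ManualResetEventSlim waiter = new ManualResetEventSlim(false);
            lock (m_waiters) m_waiters.Enqueue(waiter);
            theLock.Exit();
            waiter.Wait();
            waiter.Dispose();
            theLock.Enter();
        }

        public void Pulse()
        {
            lock (m_waiters)
                if (m_waiters.Count > 0) m_waiters.Dequeue().Set();
        }

        public void PulseAll()
        {
            lock (m_waiters)
                while (m_waiters.Count > 0) m_waiters.Dequeue().Set();
        }
    }

As with any condition variable, callers should re-check their predicate in a loop around Wait, since a pulsed waiter may find the condition false again by the time it reacquires the lock.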