I’ve grown convinced over the past few years that taming side effects in our programming languages is a prerequisite to attaining ubiquitous parallelism nirvana. Although I am continuously exploring the ways in which we can accomplish this – ranging from shared-nothing isolation, to purely functional programming, and anything and everything in between – what I wonder the most about is whether the development ecosystem at large is ready for, and willing to make, such a change.
In this blog post, I’ll demonstrate building some very simple (but nice!) synchronization abstractions: a Lock type and a standalone ConditionVariable class. And we’ll use a few new types in .NET 4.0 in the process. I had to implement a condition variable recently – the joys of developing a new operating system / platform from the ground up – and decided to put together a toy example for a blog post as I went. Warning: this is for educational purposes only.
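The post's Lock and ConditionVariable are .NET types; as a rough analogue, here is a minimal sketch in Java of the same two abstractions – a CAS-based spinlock plus a standalone condition variable layered on it. The names and structure here are my own illustration, not the post's actual code, and like the original this is educational only.

```java
import java.util.concurrent.atomic.AtomicBoolean;

// Toy spinlock: a single word of state, spinning instead of blocking
// under contention, much like the post's simple Lock type.
final class SpinLock {
    private final AtomicBoolean held = new AtomicBoolean(false);
    void enter() { while (!held.compareAndSet(false, true)) Thread.onSpinWait(); }
    void exit()  { held.set(false); }
}

// Standalone condition variable usable with any SpinLock. An intrinsic
// monitor is used purely for parking; holding it across the
// exit-then-wait transition prevents lost wakeups, because pulseAll
// must acquire the same monitor before it can notify.
final class ConditionVariable {
    private final Object parkLot = new Object();

    void await(SpinLock lock) {
        synchronized (parkLot) {
            lock.exit();                    // release before sleeping
            try { parkLot.wait(); }         // park until pulsed
            catch (InterruptedException e) { Thread.currentThread().interrupt(); }
        }
        lock.enter();                       // reacquire after waking
    }

    void pulseAll() {
        synchronized (parkLot) { parkLot.notifyAll(); }
    }
}
```

Callers should always re-check their condition in a loop around `await`, since wakeups may be spurious.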
I wrote this memo over 2 1/2 years ago about what to do with concurrent exceptions in Parallel Extensions to .NET. Since Beta1 is now out, I thought posting it may provide some insight into our design decisions. I’ve made only a few slight edits (like replacing code- and type-names), but it’s mainly in original form. I still agree with much of what I wrote, although I’d definitely write it differently today. And in retrospect, I would have driven harder to get deeper runtime integration. Perhaps in the next release.
One of my many focuses lately has been developing a memory ordering model for our project here at Microsoft. There are four main questions to answer when defining such a model:
An interesting alternative to reader/writer locks is to combine pessimistic writing with optimistic reading. This borrows some ideas from transactional memory, although of course the ideas existed long before that. I was reminded of this trick by a colleague on my new team just a couple of days ago.
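Java's `StampedLock` packages exactly this combination: `tryOptimisticRead` takes no lock at all, and `validate` afterwards tells you whether a writer intervened, in which case you fall back to a pessimistic read. A sketch (the `OptimisticPoint` class is my own illustration):

```java
import java.util.concurrent.locks.StampedLock;

final class OptimisticPoint {
    private final StampedLock sl = new StampedLock();
    private double x, y;

    void move(double dx, double dy) {          // pessimistic write
        long stamp = sl.writeLock();
        try { x += dx; y += dy; } finally { sl.unlockWrite(stamp); }
    }

    double distanceFromOrigin() {              // optimistic read
        long stamp = sl.tryOptimisticRead();   // no CAS, no blocking
        double cx = x, cy = y;                 // speculative, possibly torn reads
        if (!sl.validate(stamp)) {             // did a writer intervene?
            stamp = sl.readLock();             // fall back to pessimistic read
            try { cx = x; cy = y; } finally { sl.unlockRead(stamp); }
        }
        return Math.sqrt(cx * cx + cy * cy);
    }
}
```

The payoff is that in the common, uncontended case the reader performs no writes to shared lock state at all.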
A while back, I made a big stink about what appeared to be the presence of illegal load-load reorderings in Intel’s IA32 memory model. Their documentation specifically claims this is impossible. Well, last week I was chatting with a colleague, Sebastian Burckhardt, about this disturbing fact. And it turned out he had recently written a paper that formalizes the CLR 2.0 memory model, and in fact treats this phenomenon with a great deal of rigor:
Managed code generally is not hardened against asynchronous exceptions.
Pop quiz: Can this code deadlock?
I was very harsh in my previous post about reader/writer locks.
A couple of weeks ago, I illustrated a very simple reader/writer lock that consisted of a single word and used spinning instead of blocking under contention. The reason you might use a lock with a read (aka shared) mode is fairly well known: by allowing multiple readers to enter the lock simultaneously, concurrency is improved, and with it scalability. Or so the textbook theory goes.
I frequently get asked about the C# compiler’s warning CS0420 about taking byrefs to volatile fields. For example, given a program
Reader/writer locks are commonly used when significantly more time is spent reading shared state than writing it (as is often the case), with the aim of improving scalability. The theoretical scalability wins come because the lock can be acquired in a special read-mode, which permits multiple readers to enter at once. A write-mode is also available, which offers the usual mutual exclusion with respect to all readers and writers. The idea is simple: if many readers can proceed simultaneously, the theory goes, concurrency – and hence scalability – improves.
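The pattern described above can be seen in Java's `ReentrantReadWriteLock`; here is a minimal read-mostly cache sketched around it (the `RwCache` class itself is my own illustration):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.concurrent.locks.ReentrantReadWriteLock;

final class RwCache<K, V> {
    private final ReentrantReadWriteLock rw = new ReentrantReadWriteLock();
    private final Map<K, V> map = new HashMap<>();

    V get(K key) {
        rw.readLock().lock();      // read-mode: many readers at once
        try { return map.get(key); } finally { rw.readLock().unlock(); }
    }

    void put(K key, V value) {
        rw.writeLock().lock();     // write-mode: excludes all readers and writers
        try { map.put(key, value); } finally { rw.writeLock().unlock(); }
    }
}
```

Whether this actually scales better than a plain mutex depends on how long the read sections are and how the lock itself manages its shared state under contention.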
CAS operations kill scalability.
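One way to see what this claim is about: nearly every lock-free update is a CAS retry loop like the one below, and under contention each failed `compareAndSet` forces a reread and retry while the cache line holding the variable ping-pongs between cores. A minimal sketch (the `CasCounter` class is my own illustration):

```java
import java.util.concurrent.atomic.AtomicLong;

// The CAS retry loop at the heart of most lock-free counters.
final class CasCounter {
    private final AtomicLong value = new AtomicLong();

    long increment() {
        while (true) {
            long cur = value.get();
            if (value.compareAndSet(cur, cur + 1)) {
                return cur + 1;                   // we won the race
            }
            // Lost the race: another thread published first; retry.
        }
    }

    long get() { return value.get(); }
}
```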