I wrote this memo over 2 1/2 years ago about what to do with concurrent exceptions in Parallel Extensions to .NET. Since Beta1 is now out, I thought posting it might provide some insight into our design decisions. I've made only a few slight edits (like replacing code- and type-names), but it's mainly in its original form. I still agree with much of what I wrote, although I'd definitely write it differently today. And in retrospect, I would have pushed harder for deeper runtime integration. Perhaps in the next release.
One of my many focuses lately has been developing a memory ordering model for our project here at Microsoft. There are four main questions to answer when defining such a model:
An interesting alternative to reader/writer locks is to combine pessimistic writing with optimistic reading. This borrows some ideas from transactional memory, although of course the ideas existed long before. I was reminded of this trick by a colleague on my new team just a couple of days ago.
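To make the idea concrete, here is a minimal sketch of the pessimistic-write/optimistic-read pattern (essentially a sequence lock). The original posts are about .NET, so this Java port, and all names in it (SeqLock, write, read), are my own illustration, not code from the post. Writers exclude each other with an ordinary lock and bump a version counter around their updates; readers take no lock at all, instead validating that the version did not change while they were reading, and retrying if it did.

```java
import java.util.concurrent.atomic.AtomicLong;

// Sketch of pessimistic writing + optimistic reading (a sequence lock).
// Invariant: the version is even when no write is in progress, odd while
// a writer is mutating the protected fields.
class SeqLock {
    private final AtomicLong version = new AtomicLong(0);
    private int x, y; // the data being protected

    // Writer: mutually exclusive (pessimistic). Bump the version to odd,
    // mutate, then bump it back to even so readers can validate.
    synchronized void write(int newX, int newY) {
        version.incrementAndGet(); // now odd: in-flight readers will retry
        x = newX;
        y = newY;
        version.incrementAndGet(); // back to even: writes are visible
    }

    // Reader: optimistic. Snapshot the version, read the data, then check
    // the version again; if anything changed, throw away the read and retry.
    int[] read() {
        while (true) {
            long v = version.get();
            if ((v & 1) != 0) {
                continue; // a write is in progress; spin and retry
            }
            int rx = x, ry = y;
            if (version.get() == v) {
                return new int[] { rx, ry }; // no writer intervened
            }
        }
    }
}
```

The appeal over a reader/writer lock is that readers never write shared state at all, so read-mostly workloads avoid the cache-line contention that the reader count in a conventional reader/writer lock causes.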
A while back, I made a big stink about what appeared to be the presence of illegal load-load reorderings in Intel’s IA32 memory model. They specifically claim this is impossible in their documentation. Well, last week I was chatting with a colleague, Sebastian Burckhardt, about this disturbing fact. And it turned out he had recently written a paper that formalizes the CLR 2.0 memory model, and in fact treats this phenomenon with a great deal of rigor:
The parallel computing team just shipped an early release of Axum (f.k.a. Maestro), an actor-based programming language with message passing and strong isolation.
Managed code generally is not hardened against asynchronous exceptions.
Pop quiz: Can this code deadlock?
A few weeks back I recorded a discussion with the infamous Erik Meijer and Charles from Channel9.
I was very harsh in my previous post about reader/writer locks.
A couple of weeks ago, I illustrated a very simple reader/writer lock that consisted of a single word and used spinning instead of blocking under contention. The reason you might use a lock with a read (a.k.a. shared) mode is fairly well known: by allowing multiple readers to enter the lock simultaneously, concurrency improves and therefore so does scalability. Or so the textbook theory goes.
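A single-word spinning reader/writer lock of the kind described above can be sketched as follows. The original was .NET code; this Java version is my own reconstruction of the general technique, and the names (SpinRWLock, the WRITER sentinel) are illustrative assumptions, not the post's actual identifiers. The entire lock state lives in one atomic word: a positive value counts active readers, zero means free, and a sentinel marks an exclusive writer.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Sketch of a single-word spinning reader/writer lock.
// state == 0  : lock is free
// state  > 0  : that many readers hold the lock in shared mode
// state == -1 : one writer holds the lock exclusively
class SpinRWLock {
    private static final int WRITER = -1;
    private final AtomicInteger state = new AtomicInteger(0);

    // Shared acquire: succeed whenever no writer holds the lock,
    // by CAS-incrementing the reader count.
    void enterRead() {
        while (true) {
            int s = state.get();
            if (s >= 0 && state.compareAndSet(s, s + 1)) {
                return; // joined the current group of readers
            }
            Thread.onSpinWait(); // writer active or CAS lost; spin
        }
    }

    void exitRead() {
        state.decrementAndGet(); // last reader leaves state at 0
    }

    // Exclusive acquire: only succeeds from the fully free state,
    // so it waits out both readers and other writers.
    void enterWrite() {
        while (!state.compareAndSet(0, WRITER)) {
            Thread.onSpinWait();
        }
    }

    void exitWrite() {
        state.set(0); // release: lock is free again
    }
}
```

Note that every reader's entry and exit is a write to the same shared word, which is exactly the scalability caveat the posts above go on to examine.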