Today is an extraordinarily exciting day for me. After about two years of work by several great people across the company, the first Parallel Extensions (a.k.a. Parallel FX) CTP has been posted to MSDN. Check out Soma’s blog post for an overview, and the new MSDN parallel computing dev center for more details. Keep an eye on the team’s new blog too, as we’ll be posting a lot of content there as we make progress on the library; in fact, thanks to Steve (who writes blog posts in his sleep), there’s already a bunch of reading to catch up on!
I recently described an approach to adding immutability to existing, mutability-oriented programming languages such as C#. When motivating such a feature, I alluded to the fact that immutability can make concurrent programming simpler. This seems obvious: a major difficulty in building concurrent systems today is dealing with the ever-changing state of “the world,” requiring synchronization mechanisms to control concurrent reads and writes. This synchronization leads to inefficiencies in the end product, complexity in the design process, and, if not done correctly, bugs: race conditions, deadlocks due to the lack of composability of traditional locking mechanisms, and so forth.
I’ve been asked a number of times about immutable type support in C#. Although C# doesn’t offer first-class language support the way F# does, you can get pretty far with what you already have. Nothing prevents you from creating immutable data structures today, of course; the problem is that there’s no compiler or runtime support to ensure you’ve done it right.
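To make that concrete, here’s a minimal sketch of a hand-rolled immutable type (the Point2D type and its members are mine, purely illustrative): readonly fields, no setters, and “mutation” expressed as constructing a new instance. Notice that nothing enforces the discipline; a setter added later would compile just fine, which is exactly the missing support.

```csharp
// A minimal sketch of an immutable type in today's C#. The type is
// hypothetical; the pattern is readonly fields plus no mutators.
public sealed class Point2D
{
    public readonly double X;
    public readonly double Y;

    public Point2D(double x, double y)
    {
        X = x;
        Y = y;
    }

    // "Mutation" returns a new instance rather than changing this one,
    // so a Point2D can be freely shared across threads without locks.
    public Point2D WithX(double newX)
    {
        return new Point2D(newX, Y);
    }
}
```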
There are several docs out there that describe the CLR memory model, most notably this article.
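As a small illustrative sketch of why the memory model matters (the Publisher type and its members are made up, not from the article): publishing a value through a flag is only safe if the flag is volatile. Under the weaker ECMA model, the accesses may otherwise be reordered so that a reader sees the flag set before the value.

```csharp
// A hypothetical sketch of safe publication via a volatile flag.
public class Publisher
{
    private int _value;
    private volatile bool _initialized;

    public void Publish(int value)
    {
        _value = value;       // ordinary store
        _initialized = true;  // volatile store: release semantics, so the
                              // _value store cannot move after it
    }

    public bool TryRead(out int value)
    {
        if (_initialized)     // volatile load: acquire semantics, so the
        {                     // _value load cannot move before it
            value = _value;
            return true;
        }
        value = 0;
        return false;
    }
}
```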
Lock-free algorithms are often “better” than lock-based algorithms: they avoid blocking (and with it deadlock and priority inversion), and they can offer stronger progress guarantees under contention.
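For a feel of the style, here’s a minimal sketch of the classic lock-free (Treiber) stack built on Interlocked.CompareExchange. It’s illustrative rather than production code: a failed CAS simply means another thread won the race, so we retry with fresh state instead of ever blocking.

```csharp
using System.Threading;

// A minimal sketch of a Treiber (lock-free) stack; illustrative only.
public class LockFreeStack<T>
{
    private sealed class Node
    {
        internal readonly T Value;
        internal Node Next;
        internal Node(T value) { Value = value; }
    }

    private Node _head;

    public void Push(T value)
    {
        Node node = new Node(value);
        do
        {
            // Snapshot the head and link our node in front of it.
            node.Next = _head;
            // Publish only if the head hasn't changed; else retry.
        }
        while (Interlocked.CompareExchange(ref _head, node, node.Next) != node.Next);
    }

    public bool TryPop(out T value)
    {
        Node head;
        do
        {
            head = _head;
            if (head == null)
            {
                value = default(T);
                return false; // empty: nothing to pop
            }
            // Swing the head past our snapshot; retry if we lost a race.
        }
        while (Interlocked.CompareExchange(ref _head, head.Next, head) != head);
        value = head.Value;
        return true;
    }
}
```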
Charles from Channel9 recorded a conversation with Anders and me a couple weeks back. The topic? Concurrency. More specifically, Parallel FX (PFX).
When COM came onto the scene in the early ’90s, Symmetric Multiprocessor (SMP) architectures had just been introduced into the higher end of the computer market. “Higher end” in this context basically meant server-side computing, a market in which the increase in compute power promised increased throughput for heavily loaded backend systems. Parallelism per se—that is, breaking apart large problems into smaller ones so that multiple processors can hack away to solve the larger problem more quickly—was still limited to the domain of specialty computing, such as the high-end supercomputing and high-performance computing communities. The economic incentive for Windows programmers to use multithreading was therefore mostly limited to servers. Heck, parallelism is still pretty much limited to those domains, but the economic incentives are clearly in the midst of a fundamental change.
Two articles about ParallelFX (PFX) appear in the October issue of MSDN Magazine and have been posted online.
Lock recursion is usually a bad idea. It can seem convenient (at first), but once you embark on the slippery slope of making calls from critical regions into complex ecosystems of code (usually a necessary prerequisite to lock recursion, except in some relatively simple cases), it’s easy to accidentally fall right off the edge. This topic was part of the doc I wrote previously about using concurrency inside reusable libraries. My opinions haven’t changed much since then.
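To illustrate the hazard with a hypothetical Cache type: CLR monitors are recursive, so code invoked from inside a critical region can re-enter the same lock without blocking, and observe state mid-update.

```csharp
using System;
using System.Collections.Generic;

// A hypothetical sketch of how lock recursion exposes broken invariants.
public class Cache
{
    private readonly object _sync = new object();
    private readonly List<string> _items = new List<string>();
    private int _count; // invariant: _count == _items.Count when unlocked

    public event Action<string> ItemAdded;

    public int Count
    {
        get { lock (_sync) { return _count; } }
    }

    public void Add(string item)
    {
        lock (_sync)
        {
            _items.Add(item);

            // Danger: calling out to arbitrary code while holding the lock.
            // A handler that reads Count re-acquires _sync recursively
            // (monitors are reentrant) and sees _count out of sync with
            // _items, because we haven't finished updating yet.
            Action<string> handler = ItemAdded;
            if (handler != null)
            {
                handler(item);
            }

            _count++;
        }
    }
}
```

A non-recursive lock would at least turn this bug into a visible deadlock; with recursion, the callback silently succeeds against half-updated state.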
Most managed code in the .NET Framework has not been hardened against asynchronous exceptions. This includes out-of-memory (OOM) conditions and asynchronous thread aborts, and it is entirely by design. Hardening against OOM, for example, is historically an extraordinarily difficult feat, and few systems undertake the development and QA costs needed to do so. (FWIW, the CLR VM is one such system.) Simply failing gracefully is usually hard enough. Failing gracefully is admittedly leaps and bounds easier in managed code, because allocation failures are communicated via exceptions rather than return values and are thus transitively propagated “by default.” Thread aborts are even more difficult to harden against, however, because they can originate at any instruction (with a handful of exceptions). Ensuring data invariants are protected for every single instruction is clearly just a little difficult.
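As a small, hypothetical illustration of the thread-abort problem: the abort can be injected between any two instructions, so even a lock doesn’t keep paired updates consistent, because the unwind releases the monitor and publishes the torn state.

```csharp
// A hypothetical sketch of why asynchronous thread aborts are so hard
// to harden against.
public class Account
{
    private readonly object _sync = new object();
    private decimal _balance;
    private decimal _pending; // invariant: _pending + _balance is constant
                              // across a Commit

    public void Commit(decimal amount)
    {
        lock (_sync)
        {
            _pending -= amount;
            // A ThreadAbortException raised right here leaves _pending
            // decremented but _balance not yet credited. The abort
            // unwinds the lock's implicit finally, releasing the monitor
            // and exposing the broken invariant to other threads.
            _balance += amount;
        }
    }
}
```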