We had an interesting debate at a Parallel Extensions design meeting yesterday, where I tried to convince everybody that a full fence on SpinLock exit is not a requirement. We currently offer an Exit(bool) overload that accepts a flushReleaseWrite argument. This merely changes the lock release from an ordinary store with release semantics to a full-fence interlocked write.
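To make the distinction concrete, here is a minimal sketch of a hypothetical spin lock with the two exit flavors. This is an illustration of the idea, not the actual Parallel Extensions implementation; the type name SimpleSpinLock is made up, and Volatile.Write stands in for whatever release-store mechanism the real code uses.

```csharp
using System.Threading;

// Hypothetical minimal spin lock illustrating the two exit flavors.
// (A sketch only -- not the Parallel Extensions implementation.)
class SimpleSpinLock
{
    private int _held; // 0 = free, 1 = held

    public void Enter()
    {
        // Interlocked.Exchange is a full fence, so the acquire is safe.
        while (Interlocked.Exchange(ref _held, 1) != 0)
            Thread.SpinWait(1);
    }

    public void Exit(bool flushReleaseWrite)
    {
        if (flushReleaseWrite)
        {
            // Full fence: the release becomes visible to other processors
            // immediately, at the cost of a more expensive instruction.
            Interlocked.Exchange(ref _held, 0);
        }
        else
        {
            // Release semantics only: prior writes can't move after this
            // store, but the store itself may linger in the write buffer
            // briefly -- which delays, but does not break, the release.
            Volatile.Write(ref _held, 0);
        }
    }
}
```

Either path is correct; the debate is only about whether the stronger (and pricier) full-fence variant should ever be the default.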
We sat down last week with Charles from Channel9 to discuss the new CTP. Both parts got posted today:
We just released a new CTP of Parallel Extensions to .NET: get it here.
PDC’08 is officially _on_ for October 27-30th this year: http://microsoftpdc.com/.
Counting events and doing something once a certain number have been registered is a common pattern in concurrent programming. In the olden days, COM ref counting was a clear example of this: multiple threads might share a COM object, each calling Release when done with it, and the object would be freed once the count dropped to zero. GC has alleviated a lot of that, but the problem of deciding when a shared IDisposable resource should finally be Disposed of in .NET is strikingly similar. And nowadays, things like CountdownEvent are commonly useful for orchestrating multiple workers (see MSDN Magazine), which (although not evident at first) is based on the same counting principle.
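The counting principle itself fits in a few lines. Here is a minimal sketch (the class name Countdown and its shape are mine, not CountdownEvent's actual API) that runs an action exactly once, when the last participant signals:

```csharp
using System;
using System.Threading;

// A minimal countdown sketch -- the same counting idea behind COM ref
// counts and CountdownEvent: do something once, when the count hits zero.
class Countdown
{
    private int _remaining;
    private readonly Action _onZero;

    public Countdown(int count, Action onZero)
    {
        _remaining = count;
        _onZero = onZero;
    }

    public void Signal()
    {
        // Interlocked.Decrement makes the decrement atomic, so exactly one
        // caller observes the transition to zero, even under contention.
        if (Interlocked.Decrement(ref _remaining) == 0)
            _onZero();
    }
}
```

Swap the Action for an event to wait on and you have the essence of CountdownEvent; swap it for a destructor call and you have COM ref counting.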
We take code reviews very seriously in our group. No checkin is ever made without a peer developer taking a close look. (Incubation projects are often treated differently from product work, because the loss of agility would be intolerable.) A lot of this is done over email, but if anything is unclear from just looking at the code, a face-to-face review is done. Feedback ranges from consistency (with guidelines and surrounding code), to errors or overlooked conditions, to suggestions on how to write something more clearly or comment it better; this helps keep our codebase at a consistently high quality.
I’ve mentioned before that the CLR has a central wait routine that is used for any synchronization wait in managed code. This covers WaitHandles (AutoResetEvent, ManualResetEvent, etc.), CLR Monitors (Enter, Wait), Thread.Join, any APIs that use such things, and the like. This routine even gets involved for waits that are internal to the CLR VM itself. This is primarily done so that the runtime can pump appropriately on STAs, and was later used to experiment with fiber-mode scheduling in SQL Server. Two years ago I showed how to use these capabilities to build a deadlock detection tool via the CLR’s hosting APIs. Sadly, I/O-based waits (like FileStream.Read) do not route through this routine.
A few of us got in a room two weeks ago with Charles to discuss the Task Parallel Library component of Parallel Extensions.
A long time ago, I wrote that you’d never need to write another finalizer again. I’m sorry to say, my friends, that I may have (unintentionally) lied. In my defense, the blog title where I proclaimed this fact did end with “well, almost never.”
Torn reads are possible whenever you read, without synchronization, a shared value that is misaligned or that spans more than one addressable pointer-sized region of memory. This can lead to crashes and data corruption due to bogus values being seen. If you’re not careful, torn reads can also violate type safety. If you have a static variable that points to an object of type T, and your program only ever writes references to objects of type T into it, you may still end up accessing a memory location that isn’t actually a T. How could this be?
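The classic case is a 64-bit value on a 32-bit CLR: a plain read of a long can be compiled into two 32-bit loads, and a writer racing in between them yields a value that was never written. A sketch of the defensive pattern, using Interlocked to make both sides atomic (the Counter64 wrapper is a made-up name for illustration):

```csharp
using System.Threading;

// On a 32-bit CLR, a plain load of a long may be split into two 32-bit
// loads; a concurrent writer can slip in between them, producing a "torn"
// value that was never actually stored. Interlocked.Read loads all 64
// bits atomically, and Interlocked.Add stores atomically.
class Counter64
{
    private long _value;

    public void Add(long amount)
    {
        Interlocked.Add(ref _value, amount);
    }

    public long Read()
    {
        // Atomic even on 32-bit platforms, unlike "return _value;".
        return Interlocked.Read(ref _value);
    }
}
```

Pointer-sized reference fields are read atomically by the CLR when properly aligned, which is exactly why a *misaligned* reference is the type-safety hole hinted at above: half of one reference glued to half of another is a pointer to nothing in particular.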
I’m still plugging away at my new concurrency book.
Most schedulers try to keep the number of runnable threads (within a certain context) equal to the number of processors available. Aside from fairness, there’s usually little reason to have more. In fact, having more carries memory costs, due to stacks and the working set held on them, non-pageable kernel memory, and per-thread data structures, as well as execution-time costs, due to increased pressure on the kernel scheduler, more frequent context switches, and poor locality as threads are swapped in and out of processors. In extreme cases, blocked threads can build up, only for all of them to be awoken at once and released to wreak havoc on the system, hurting scalability.
Today is an extraordinarily exciting day for me. After about two years of work by several great people across the company, the first Parallel Extensions (a.k.a. Parallel FX) CTP has been posted to MSDN. Check out Soma’s blog post for an overview, and the new MSDN parallel computing dev center for more details. Keep an eye on the team’s new blog too, as we’ll be posting a lot of content there as we make progress on the library; in fact, thanks to Steve (who writes blog posts in his sleep), there’s already a bunch of reading to catch up on!
I recently described an approach to adding immutability to existing, mutability-oriented programming languages such as C#. When motivating such a feature, I alluded to the fact that immutability can make concurrent programming simpler. This seems obvious: a major difficulty in building concurrent systems today is dealing with the ever-changing state of “the world,” requiring synchronization mechanisms to control concurrent reads and writes. This synchronization leads to inefficiencies in the end product, complexity in the design process, and, if not done correctly, bugs: race conditions, deadlocks due to the lack of composability of traditional locking mechanisms, and so forth.
I’ve been asked a number of times about support for immutable types in C#. Although C# doesn’t offer first-class language support the way F# does, you can get pretty far with what you already have in your hands. Nothing prevents you from creating immutable data structures today, of course; the problem is that there’s no compiler or runtime support to ensure you’ve done it right.
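Here is what "doing it by hand" looks like today: a small immutable type whose fields are all readonly and whose "mutators" return fresh instances. (The Point type and its WithX/WithY helpers are illustrative, not from any library.)

```csharp
// A hand-rolled immutable point: every field is readonly, and "mutators"
// return a new instance rather than changing this one. The compiler rejects
// accidental writes to the fields, but nothing verifies the type is deeply
// immutable -- that's the missing compiler/runtime support.
public sealed class Point
{
    private readonly int _x;
    private readonly int _y;

    public Point(int x, int y)
    {
        _x = x;
        _y = y;
    }

    public int X { get { return _x; } }
    public int Y { get { return _y; } }

    // "With" methods give the feel of mutation without the aliasing risk:
    // the original instance is untouched and can be shared freely.
    public Point WithX(int x) { return new Point(x, _y); }
    public Point WithY(int y) { return new Point(_x, y); }
}
```

The discipline is entirely on the author: add one settable property or one mutable reference-typed field and the guarantee silently evaporates, which is precisely the gap language support would close.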