My book, Concurrent Programming on Windows, is shaping up quite nicely. (Given that I’ve been working on it for over a year now, I suppose it had better be!) I’ve been surprised at the amazing level of anticipation and excitement from blog readers, coworkers, and Microsoft customers, and I really can’t wait for it to be finished. Thanks for your patience so far.
Whether an object can be published before it has been fully constructed is perhaps the most common .NET memory model question; it arises time and time again. In fact, it came up this week on an internal .NET alias, and again a couple of weeks ago in the Joel on Software forums.
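To make the question concrete, here is a minimal C# sketch (the Widget and PublicationExample types are hypothetical, purely for illustration). One thread constructs and publishes an object through a plain shared field; the question is whether another thread can ever observe the reference before the constructor’s writes:

```csharp
using System;
using System.Threading;

class Widget
{
    public int Value;
    public Widget() { Value = 42; }
}

class PublicationExample
{
    static Widget s_shared; // deliberately not volatile

    static void Main()
    {
        Thread consumer = new Thread(delegate()
        {
            Widget w;
            // The Sleep(0) call keeps the JIT from hoisting the read
            // out of the loop; it adds no memory ordering.
            while ((w = s_shared) == null) { Thread.Sleep(0); }

            // Under the weakest reading of the ECMA memory model, 0 is
            // a legal answer here; whether the CLR's stronger
            // implemented model rules it out is precisely the
            // recurring question.
            Console.WriteLine(w.Value);
        });
        consumer.Start();

        s_shared = new Widget(); // construct, then publish
        consumer.Join();
    }
}
```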
In response to a previous post, a reader said:
Windows Vista has a new one-time initialization feature, which I’m pretty envious of as someone who writes most of his code in C# and answers countless questions about double-checked locking in the CLR. Rather than sprinkling double-checked locking all over your code base, with the everlasting worry in the back of your mind that you’ve gotten the synchronization wrong, it’s a better idea to consolidate it in one place.
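Here is a minimal sketch of what that consolidation might look like in C#. The OnceInit class and Factory delegate are hypothetical names of my own (this predates any such helper in the BCL); the body is just the classic double-checked locking pattern, written once:

```csharp
// Hypothetical delegate type; .NET 2.0 has no built-in Func<T>.
public delegate T Factory<T>();

public static class OnceInit
{
    public static T EnsureInitialized<T>(ref T target, object syncLock, Factory<T> factory)
        where T : class
    {
        // Unsynchronized first check keeps the common (already
        // initialized) path cheap. On a very weak memory model this
        // read would itself need acquire semantics; the CLR 2.0 model's
        // release semantics on the publishing store make it safe there.
        if (target == null)
        {
            lock (syncLock)
            {
                // Second check: another thread may have won the race
                // and initialized while we waited for the lock.
                if (target == null)
                {
                    target = factory();
                }
            }
        }
        return target;
    }
}
```

A call site then reduces to something like `OnceInit.EnsureInitialized(ref s_instance, s_lock, CreateInstance)`, and the subtle synchronization lives in exactly one audited place.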
I recently read Butler Lampson’s immensely wonderful paper “Hints for Computer System Design” (HTML, PDF).
Intel and AMD processors have offered limited SIMD support, in the form of MMX and SSE, since the late 90s. Though most programmers live in a MIMD-oriented world, SIMD programming enjoyed a surge of research interest in the 80s and has remained promising ever since, albeit quietly. Vectorization is a fairly popular technique in niche markets such as the FORTRAN and supercomputing communities. Given the rise of GPGPU (see here, here, and here) and rumors floating around the microprocessor arena, this is an interesting space to watch.
I’ve opined on thread affinity several times in the past. The term “thread affinity” seems to be in vogue only internally at Microsoft, so it may help to define what it means for the rest of the world.
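As a quick taste of the concept: a Win32 mutex is thread-affine, in that it must be released by the same thread that acquired it. A minimal C# illustration (the class and field names here are mine):

```csharp
using System;
using System.Threading;

class AffinityExample
{
    static readonly Mutex s_mutex = new Mutex();

    static void Main()
    {
        s_mutex.WaitOne(); // ownership is now affine to this thread

        Thread other = new Thread(delegate()
        {
            try
            {
                // Releasing from a non-owning thread fails: the CLR
                // throws ApplicationException.
                s_mutex.ReleaseMutex();
            }
            catch (ApplicationException)
            {
                Console.WriteLine("only the owning thread may release");
            }
        });
        other.Start();
        other.Join();

        s_mutex.ReleaseMutex(); // legal: same thread that acquired it
    }
}
```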
Everybody’s probably aware of the RegisterWaitForSingleObject method: it exists in the native and CLR thread pools, and does pretty much the same thing in both. (The Vista equivalents are CreateThreadpoolWait and SetThreadpoolWait.) This feature allows you to consolidate a bunch of waits onto dedicated thread pool wait threads. Each such thread waits on up to 63 registered objects using a wait-any-style WaitForMultipleObjects. When any of the objects becomes signaled, or a timeout occurs, the wait thread wakes up and queues a callback to run on the normal thread pool work queue. It then updates timeouts, possibly removes the object from its wait set, and goes right back to waiting.
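Here is a small sketch of the managed version of the API; the native and Vista variants follow the same shape. An AutoResetEvent stands in for whatever kernel object you’d actually wait on, and the timeout and sleep values are arbitrary:

```csharp
using System;
using System.Threading;

class WaitRegistrationExample
{
    static void Main()
    {
        AutoResetEvent evt = new AutoResetEvent(false);

        // The wait thread multiplexes this wait with up to 62 others;
        // when the event signals (or the 5s timeout elapses), the
        // callback runs on an ordinary thread pool worker thread.
        RegisteredWaitHandle reg = ThreadPool.RegisterWaitForSingleObject(
            evt,
            delegate(object state, bool timedOut)
            {
                Console.WriteLine(timedOut ? "timed out" : "signaled");
            },
            null,   // opaque state passed to the callback
            5000,   // timeout, in milliseconds
            false); // executeOnlyOnce: false keeps the wait registered

        evt.Set();          // wakes the wait thread, queues the callback
        Thread.Sleep(100);  // crude: give the callback a chance to run

        reg.Unregister(null); // remove the object from the wait set
    }
}
```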
Haskell is _the_ most underappreciated yet extraordinarily significant programming language in the world. The syntax is frightening enough to scare off those with weak stomachs, but some of the most interesting and creative research in type systems and, in recent years, parallelism has come out of the Haskell community. I recently stumbled across a fascinating paper from the ACM SIGPLAN History of Programming Languages Conference (HOPL’III) from earlier this year:
Michael Suess, author of the very nice blog thinkingparallel.com, recently ran a series of interviews. He asked five parallelism experts from different domains (Erlang, MPI, OpenMP, POSIX threads, .NET threads) to answer the same set of questions: