Type classes, kinds, and higher-order polymorphism represent some of Haskell’s most distinctive and important contributions to the world of programming languages. They are all related, and began life as type classes in Wadler and Blott’s 1988 paper, How to make ad-hoc polymorphism less ad hoc. Jones later introduced the (then separate) concept of constructor classes in his 1993 paper, A system of constructor classes: overloading and implicit higher-order polymorphism. Eventually the two ideas were unified into a single, beautiful set of features (namely, type constructors and kinds) in Haskell.
A few months back, while writing my new book, I whipped together a tool to dump information about your processor layout using the GetLogicalProcessorInformation function from C#. You can find the code snippet in Chapter 5, Advanced Threads, of my book. (A developer on the Windows Core OS team, Adam Glass, had also written a similar tool in C++.) I will be posting code to the companion site for my book in the coming weeks, at which point you can easily get your hands on it.
Dan Grossman invited me to deliver a talk as part of the University of Washington’s Computer Science and Engineering Colloquia series. It was recorded and will eventually air on UWTV, but has also been posted online:
The word “architect” means different things to different people in the context of software engineering. And it varies wildly depending on the kind of organization you’re in. An architect at a medium-sized IT shop might focus on connecting disparate business systems together at a high level, without diving down into code. An architect at a startup may be more like a tech lead, checking in code like mad while also keeping the rest of the team in check. And a software architect at Microsoft can play an even more varied set of roles, because the company is so large and the diversity of its projects so great.
It’s been quite some time since I blogged about what I’ve been reading. That’s not because I haven’t been reading – au contraire! – but rather because I’ve been busy doing so. I find these posts interesting for myself, so that I can look back and see where my interests were at a particular point in time. Given the sheer number of additions, I can’t properly rate them like I have in the past. Here are the more interesting ones, those that stick out in my mind:
The October 2008 MSDN Magazine issue just went live with 5 articles on concurrency, plus the editor’s note. Four of the articles are written by members of the Parallel Computing team here at Microsoft, including one by me:
The enumeration pattern in .NET unfortunately implies some overhead that makes it difficult to compete with ordinary for loops. In fact, the difference between a foreach loop and an ordinary for loop can be substantial.
In part 2 of this series, I described a new work stealing queue data structure used for work item management. This structure allows us to push and pop elements into a thread-local work queue without heavy-handed synchronization. Moreover, it distributes a large amount of the scheduling responsibility across the threads (and hence processors). The result is that, for recursively queued work items, scalability improves and pressure on the typical thread pool bottleneck (i.e., the global lock) is alleviated.
Most programs are tangled webs of data and control dependencies. For sequential programs, this doesn’t matter much beyond constraining the legal optimizations available to a compiler. But it gets worse: imperative programs today are also full of side-effect dependencies. Unlike data and control dependence, whose semantics most compilers can identify and understand (aliasing aside), side-effect dependencies are hidden, and their meaning is entirely ad hoc. They can include scribbling to shared memory, writing to the disk, or printing to the console.
Miguel de Icaza recently blogged about the addition of Parallel Extensions to the Mono family.
The primary reason a traditional thread pool doesn’t scale is that there’s a single work queue protected by a global lock. For obvious reasons, this can easily become a bottleneck. Two things contribute heavily to whether the global lock becomes a limiting factor for a particular workload’s throughput:
This is the first part in a series I am going to do on building a custom thread pool. Not that I’m advocating you do such a thing, but I figured it could be interesting to explore the intricacies involved. We’ll start off really simple:
Here’s a slightly more formal approach to demonstrating that the CLR memory model (MM) is improperly implemented for the particular example I showed earlier.
The adjacent release/acquire problem is well known. As an example, given the program:
I just submitted the final manuscript for Concurrent Programming on Windows to Addison-Wesley.