A colleague lent me a copy of W. Daniel Hillis’s PhD thesis, The Connection Machine, which is also available in book form from The MIT Press. I only began reading it last night, but I have been continually amazed. It’s been enlightening to realize how much framing problems differently (and, in many cases, more naturally) can make programming without concurrency seem ridiculous.
Stack overflow can be catastrophic for Windows programs, and many Win32 libraries and commercial components do not respond intelligently to it. For example, at least as late as Windows XP, a Win32 CRITICAL_SECTION that has been initialized so as to never block can actually end up overflowing the stack while trying to acquire the lock, even though MSDN claims acquisition cannot fail if the spin count is high enough. A stack overflow here can lead to orphaned critical sections, deadlocks, and generally unreliable software in low-stack conditions. The Whidbey CLR now does a lot of work to probe for sufficient stack in sections of code that manipulate important resources, and we pre-commit the entire stack to ensure that overflows won’t occur due to a failure to commit individual pages in the stack. If a stack overflow ever does occur, however, it’s considered a major catastrophe, since we can’t reason about the state native code may have left behind, and therefore the default unhosted CLR will fail fast.
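One way library code can make the same kind of defensive check on the Whidbey surface area is RuntimeHelpers.ProbeForSufficientStack. Here’s a minimal sketch (the resource type and method names are hypothetical, not part of any framework):

```csharp
using System.Runtime.CompilerServices;

static class CriticalResource
{
    private static readonly object _lock = new object();
    private static int _state;

    // Hypothetical operation on shared state. The interesting part is the
    // defensive probe: ask the CLR to check that enough stack remains
    // *before* entering a region we can't safely back out of, so any
    // failure surfaces here rather than while the lock is held.
    public static void Update(int newState)
    {
        RuntimeHelpers.ProbeForSufficientStack();

        lock (_lock)
        {
            _state = newState;
        }
    }

    public static int Read()
    {
        lock (_lock)
        {
            return _state;
        }
    }
}
```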
The CLR thread pool is a very useful thing. It amortizes the cost of thread creation and deletion, which are not cheap on Windows, over the life of your process, without you having to write the hairy, complex logic to do the same thing yourself. The algorithms it uses have been tuned over three major releases of the .NET Framework now. Unfortunately, it’s still not perfect. In particular, it stutters occasionally.
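To make the contrast concrete, here is a small illustrative example: a hundred work items queued to the pool, each borrowing a pool thread rather than paying for its own thread creation and teardown.

```csharp
using System;
using System.Threading;

class PoolExample
{
    static void Main()
    {
        using (ManualResetEvent done = new ManualResetEvent(false))
        {
            int remaining = 100;

            // Each work item runs on a pool thread; the pool decides how many
            // threads to keep alive and when to retire them.
            for (int i = 0; i < 100; i++)
            {
                ThreadPool.QueueUserWorkItem(delegate(object state)
                {
                    // ... do a small unit of work here ...
                    if (Interlocked.Decrement(ref remaining) == 0)
                        done.Set();
                }, i);
            }

            done.WaitOne();
        }
    }
}
```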
As I mentioned in a recent post, Windows Vista has new built-in support for deadlock detection. At the time, I couldn’t find any publicly available documentation on this feature. Well, I just found it:
When threads are created on Windows, the caller of the CreateThread API has the option to supply stack reserve and commit sizes. If they aren’t specified (i.e. the stack size parameter is 0), Windows just uses the sizes found in the PE header of the executable. Microsoft’s linkers by and large default to a 1MB reserve and 2-page commit, although most let you override this (e.g. LINK.EXE’s /STACK:xxx[,yyy] option and VC++’s CL.EXE /F xxx). The CLR always pre-commits the entire stack for managed threads.
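For managed threads you can also override the size per thread. A small illustrative example, using the Thread constructor overload that takes a maximum stack size (and remember that, as noted above, the CLR will still commit the whole stack up front):

```csharp
using System;
using System.Threading;

class StackSizeExample
{
    static void Main()
    {
        const int stackSize = 256 * 1024; // 256KB instead of the PE header default

        // The maxStackSize argument overrides the stack size taken from the
        // executable's PE header for this one thread. For managed threads the
        // CLR commits the entire stack up front, so the cost is paid
        // immediately rather than page-by-page as the stack grows.
        Thread t = new Thread(Work, stackSize);
        t.Start();
        t.Join();
    }

    static void Work()
    {
        Console.WriteLine("Running with a reduced stack.");
    }
}
```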
There are two main reasons to use concurrency.
Windows Vista has some great new features for concurrent programming. For those of you still writing native code, it’s worth checking them out. For those writing managed code, we have a bunch of great stuff in the pipeline for the future, but unfortunately you’ll have to wait. Or convert (back) to the dark side.
Jim Johnson started a series back in January that I’m dying to see continued. It’s about writing resource managers in System.Transactions, which surprisingly turns out to be incredibly straightforward, provided you are able to implement the correct ACI[D] transactional qualities for the resource in question. Juval Lowy’s December 2005 MSDN Magazine article on volatile resource managers showed how to build what is essentially a miniature transactional memory, albeit without the syntax, the implicit and transitive qualities, or the robustness of the real thing.
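To give a flavor of how little code a volatile resource manager takes, here is a minimal, hypothetical sketch: a “transactional integer” that enlists in the ambient transaction and only publishes its new value on commit. It deliberately omits isolation and most error handling.

```csharp
using System;
using System.Transactions;

// A toy volatile resource manager: an integer whose new value only becomes
// visible if the ambient transaction commits. Hypothetical and minimal.
class TransactionalInt : IEnlistmentNotification
{
    private int _committed;
    private int _pending;

    public int Value { get { return _committed; } }

    public void Set(int value)
    {
        _pending = value;
        // Enlist in the ambient transaction so we receive the 2PC callbacks.
        Transaction.Current.EnlistVolatile(this, EnlistmentOptions.None);
    }

    public void Prepare(PreparingEnlistment e) { e.Prepared(); }
    public void Commit(Enlistment e)   { _committed = _pending; e.Done(); }
    public void Rollback(Enlistment e) { _pending = _committed; e.Done(); }
    public void InDoubt(Enlistment e)  { e.Done(); }
}

class Demo
{
    static void Main()
    {
        TransactionalInt n = new TransactionalInt();
        using (TransactionScope scope = new TransactionScope())
        {
            n.Set(42);
            scope.Complete();   // comment this out and n.Value stays 0
        }
        Console.WriteLine(n.Value); // 42
    }
}
```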
Ntdll exports an undocumented function, declared in WinNT.h:
After posting my last article on creating a lazy-allocation IAsyncResult, I received a few mails about the ordering of the completion sequence. The ordering in the article was wrong and has been updated. Thanks to those who pointed this out.
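For reference, here is a sketch of one ordering that works for a lazily allocated IAsyncResult (not necessarily the exact sequence from the article): publish the completed flag first, then signal any waiters, then invoke the callback.

```csharp
using System;
using System.Threading;

// Sketch of the completion path for a hand-rolled, lazily allocated
// IAsyncResult. The ordering matters: publish the completed state first,
// then signal waiters, then run the callback, so both pollers and waiters
// observe a completed result by the time the callback fires.
class SimpleAsyncResult : IAsyncResult
{
    private readonly AsyncCallback _callback;
    private readonly object _state;
    private ManualResetEvent _waitHandle;      // lazily allocated
    private volatile bool _isCompleted;

    public SimpleAsyncResult(AsyncCallback callback, object state)
    {
        _callback = callback;
        _state = state;
    }

    public void SetCompleted()
    {
        _isCompleted = true;                    // 1. pollers of IsCompleted see it
        ManualResetEvent h = _waitHandle;
        if (h != null) h.Set();                 // 2. wake anyone blocked on AsyncWaitHandle
        if (_callback != null) _callback(this); // 3. finally, run the user callback
    }

    public bool IsCompleted { get { return _isCompleted; } }
    public object AsyncState { get { return _state; } }
    public bool CompletedSynchronously { get { return false; } }

    public WaitHandle AsyncWaitHandle
    {
        get
        {
            if (_waitHandle == null)
            {
                // Lazily create the event, pre-signaled if we already completed.
                ManualResetEvent h = new ManualResetEvent(_isCompleted);
                if (Interlocked.CompareExchange(ref _waitHandle, h, null) != null)
                    h.Close();                  // lost the race; another thread won
            }
            return _waitHandle;
        }
    }
}
```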