User Mode Scheduling in Windows 7

Don’t use threads. Or more precisely, don’t over-use them. It’s one of the first things fledgling programmers learn after they start using threads, because threading involves a lot of overhead. In short, using more threads may improve concurrency, but it costs you overall throughput as more processing goes into simply managing the threads instead of letting them run. So programmers learn to use threads sparingly.

When normal threads run out of time, or block on something like a mutex or I/O, they hand control off to the operating system kernel. The kernel then finds a new thread to run and switches back to user mode to run it. This context switching is what User Mode Scheduling (UMS) aims to alleviate.

User Mode Scheduling can be thought of as a cross between threads and thread pools. An application creates one or more UMS scheduler threads (typically one per processor), then creates several UMS worker threads for each scheduler thread. The worker threads are the ones that run your actual code. Whenever a worker thread runs out of time, it is put on the end of its scheduler thread’s queue. If a worker thread blocks, it is put on a waiting list to be re-queued by the kernel when whatever it was waiting on finishes. The scheduler thread then takes the worker thread from the top of the queue and starts running it. As the name suggests, this happens entirely in user mode, avoiding the expensive user->kernel->user-mode transitions. Letting each thread run for exactly as long as it needs helps solve the throughput problem: work is put into managing threads only when absolutely necessary, instead of in ever smaller time slices, leaving more time to run your actual code.
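To make that concrete, here’s a minimal sketch of what a scheduler thread might look like against the Win32 UMS API (64-bit Windows 7 or later). The run queue, PumpCompletionList, and SchedulerProc names are mine, error handling and shutdown are omitted, and a real application would keep this state per scheduler thread rather than in globals:

```cpp
#define _WIN32_WINNT 0x0601
#include <windows.h>
#include <deque>

// One-scheduler sketch; a real app would keep this per scheduler thread.
static PUMS_COMPLETION_LIST g_completionList;
static std::deque<PUMS_CONTEXT> g_runQueue;

// Pull every worker the kernel has marked ready onto our own queue.
static void PumpCompletionList(DWORD timeout)
{
    PUMS_CONTEXT ready = NULL;
    if (DequeueUmsCompletionListItems(g_completionList, timeout, &ready)) {
        while (ready != NULL) {
            g_runQueue.push_back(ready);
            ready = GetNextUmsListItem(ready);
        }
    }
}

// The scheduler entry point: the kernel calls this at startup and again
// every time the running worker yields or blocks.
static VOID CALLBACK SchedulerProc(UMS_SCHEDULER_REASON reason,
                                   ULONG_PTR activationPayload,
                                   PVOID schedulerParam)
{
    for (;;) {
        while (g_runQueue.empty())
            PumpCompletionList(INFINITE); // wait for a ready worker

        PUMS_CONTEXT worker = g_runQueue.front();
        g_runQueue.pop_front();

        // Workers that ran to completion come back flagged terminated.
        BOOLEAN done = FALSE;
        QueryUmsThreadInformation(worker, UmsThreadIsTerminated,
                                  &done, sizeof(done), NULL);
        if (done) {
            DeleteUmsThreadContext(worker);
            continue;
        }

        // On success this does not return: the worker runs in user mode
        // until it yields or blocks, then SchedulerProc is re-entered.
        ExecuteUmsThread(worker);
        g_runQueue.push_back(worker); // only reached on failure (retry)
    }
}

int main()
{
    CreateUmsCompletionList(&g_completionList);

    UMS_SCHEDULER_STARTUP_INFO info = {};
    info.UmsVersion = UMS_VERSION;
    info.CompletionList = g_completionList;
    info.SchedulerProc = SchedulerProc;

    // Converts this thread into a UMS scheduler thread and invokes
    // SchedulerProc with UmsSchedulerStartup.
    EnterUmsSchedulingMode(&info);
    return 0;
}
```

The unusual part is that ExecuteUmsThread does not return on success; when the worker later yields or blocks, the kernel invokes SchedulerProc again from the top, so each activation of the scheduler picks exactly one worker to run.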

A nice side effect is that UMS threads also help alleviate the cache thrashing problems typical of heavily threaded applications. Regardless of your data sharing patterns, each thread still needs its own storage for stack space, processor context, and thread-local storage. Every time a context switch happens, some data may need to be pushed out of the caches to make room for kernel-mode code and the next thread’s data. By switching between threads less often, the cache can be put to better use for the task at hand.

If you have ever had a chance to use some of the more esoteric APIs included with Windows, you might be wondering why we need UMS threads when we have fibers, which offer similar co-operative multitasking. Fibers come with a lot of special exceptions; there are things that simply aren’t safe to do with them. Libraries that rely on thread-local storage, for instance, will likely walk all over themselves if used from within fibers. A UMS thread, on the other hand, is a full-fledged thread: it supports TLS, and there are no real special cases to keep in mind while using it.
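That "full-fledged thread" point shows up in how workers are created: a UMS worker is an ordinary thread made with CreateRemoteThreadEx plus one extra attribute binding it to a completion list. A rough sketch, reusing g_completionList from the earlier snippet (CreateUmsWorker and WorkerMain are my own names; attribute list cleanup is omitted):

```cpp
// Per-thread state behaves normally: each UMS worker owns its own TEB,
// so TLS that would break under fibers is safe here.
static __declspec(thread) int t_workDone;

static DWORD WINAPI WorkerMain(LPVOID param)
{
    t_workDone++;          // ordinary thread-local access
    UmsThreadYield(param); // voluntarily hand control back to the scheduler
    return 0;              // the worker's context comes back flagged terminated
}

static HANDLE CreateUmsWorker()
{
    PUMS_CONTEXT context = NULL;
    CreateUmsThreadContext(&context);

    // Bind the new thread to our scheduler's completion list.
    UMS_CREATE_THREAD_ATTRIBUTES umsAttrs = {};
    umsAttrs.UmsVersion = UMS_VERSION;
    umsAttrs.UmsContext = context;
    umsAttrs.UmsCompletionList = g_completionList;

    SIZE_T size = 0;
    InitializeProcThreadAttributeList(NULL, 1, 0, &size);
    LPPROC_THREAD_ATTRIBUTE_LIST attrs =
        (LPPROC_THREAD_ATTRIBUTE_LIST)HeapAlloc(GetProcessHeap(), 0, size);
    InitializeProcThreadAttributeList(attrs, 1, 0, &size);
    UpdateProcThreadAttribute(attrs, 0, PROC_THREAD_ATTRIBUTE_UMS_THREAD,
                              &umsAttrs, sizeof(umsAttrs), NULL, NULL);

    // Apart from the attribute list, this is an ordinary thread creation.
    return CreateRemoteThreadEx(GetCurrentProcess(), NULL, 0, WorkerMain,
                                NULL, 0, attrs, NULL);
}
```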

I still wouldn’t count out thread pools just yet. UMS threads are more expensive than a thread pool, and the large memory requirements of a thread still apply, so things like per-client threads in internet daemons remain out of the question if you want to be massively scalable. More likely, UMS threads will be most useful for building thread pools themselves. Most thread pools launch two or three threads per CPU to stay busy when given blocking tasks, and UMS threads will at least help keep their time slice usage optimal.

From what I understand, the team behind Microsoft’s Concurrency Runtime, to be included with Visual C++ 2010, was one of the primary forces behind UMS threads. They worked very closely with the kernel folks to find the most scalable way to enable the super-parallel code that the Concurrency Runtime will make possible.

Posted on April 23, 2009 in Coding, Microsoft, Scalability, Threading, Windows 7
