High Performance I/O on Windows
I/O completion ports were first introduced in Windows NT 4.0 as a highly scalable, multiprocessor-capable alternative to the then-typical practices of using select, WSAWaitForMultipleEvents, WSAAsyncSelect, or even running a single thread per client. Short of writing a kernel-mode driver, the best way to perform I/O on Windows is to use I/O completion ports.
A recent post on Slashdot claimed sockets have run their course, which I think is absolutely not true! The author seems to believe the Berkeley sockets API is the only way to perform socket I/O, which is nonsense. Far more modern, scalable, and high-performance APIs exist today, such as I/O completion ports on Windows, epoll on Linux, and kqueue on FreeBSD. In light of this, I thought I’d write a little about completion ports here.
The old ways of multiplexing I/O still work pretty well for scenarios with a low number of concurrent connections, but when writing a server daemon to handle thousands or even tens of thousands of concurrent clients, we need something different. In that sense the old de facto standard, the Berkeley sockets API, has run its course: the overhead of managing so many connections is simply too great, and it makes using multiple processors hard.
An I/O completion port is a multiprocessor-aware queue. You create a completion port, bind file or socket handles to it, and start asynchronous I/O operations. When they complete, either successfully or with an error, a completion packet is queued up on the completion port, which the application can dequeue from multiple threads. The completion port uses some special voodoo to make sure only a specific number of threads can run at once: if a running thread blocks in kernel mode, it will automatically start up another one.
First you need to create a completion port with CreateIoCompletionPort:
HANDLE iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0);
It’s important to note that NumberOfConcurrentThreads is not the total number of threads that can access the completion port; you can have as many as you want. It instead controls the number of threads the port will allow to run concurrently. Once this number is reached, it will block all other threads. If one of the running threads blocks for any reason in kernel mode, the completion port will automatically start up one of the waiting threads. Specifying 0 is equivalent to specifying the number of logical processors in the system.
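For example (this snippet is only an illustration of the parameter, not code from the original), creating a port with an explicit concurrency value of one thread per logical processor behaves the same as passing 0:

SYSTEM_INFO si;
GetSystemInfo(&si);

// explicitly allow one running thread per logical processor;
// this behaves the same as passing 0 for NumberOfConcurrentThreads.
HANDLE explicit_iocp = CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0,
                                              si.dwNumberOfProcessors);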
Next is creating and associating a file or socket handle, using CreateFile, WSASocket, and CreateIoCompletionPort:
#define OPERATION_KEY 1

HANDLE file = CreateFile(L"file.txt", GENERIC_READ, FILE_SHARE_READ, NULL,
                         OPEN_EXISTING, FILE_FLAG_OVERLAPPED, NULL);

SOCKET sock = WSASocket(AF_INET, SOCK_STREAM, IPPROTO_TCP, NULL, 0,
                        WSA_FLAG_OVERLAPPED);

CreateIoCompletionPort(file, iocp, OPERATION_KEY, 0);
CreateIoCompletionPort((HANDLE)sock, iocp, OPERATION_KEY, 0);
Files and sockets must be opened with the FILE_FLAG_OVERLAPPED and WSA_FLAG_OVERLAPPED flags before they are used asynchronously. Reusing CreateIoCompletionPort to associate file and socket handles is a weird design choice from Microsoft, but that’s how it’s done. The CompletionKey parameter can be anything you want; it is a value handed back to you when packets are dequeued. I define an OPERATION_KEY here to use as the CompletionKey, the significance of which I’ll get to later.
Next we have to start up some I/O operations. I’ll skip setting up the socket and go right to sending data. We start these operations using ReadFile and WSASend:
OVERLAPPED readop, sendop;
WSABUF sendwbuf;
char readbuf[256], sendbuf[256];

memset(&readop, 0, sizeof(readop));
memset(&sendop, 0, sizeof(sendop));

sendwbuf.len = sizeof(sendbuf);
sendwbuf.buf = sendbuf;

BOOL readstatus = ReadFile(file, readbuf, sizeof(readbuf), NULL, &readop);

if(!readstatus)
{
  DWORD readerr = GetLastError();

  if(readerr != ERROR_IO_PENDING)
  {
    // error reading.
  }
}

int writestatus = WSASend(sock, &sendwbuf, 1, NULL, 0, &sendop, NULL);

if(writestatus)
{
  int writeerr = WSAGetLastError();

  if(writeerr != WSA_IO_PENDING)
  {
    // error sending.
  }
}
Every I/O operation using a completion port has an OVERLAPPED structure associated with it. Windows uses this structure internally in unspecified ways, only saying that we need to zero it out before starting an operation. The OVERLAPPED structure will be given back to us when we dequeue the completion packets, and can be used to pass along our own context data.
I have left out the standard error checking up until now for brevity’s sake, but it doesn’t work quite like one might expect here, so it is important. When starting an I/O operation, an error might not really be an error. If the function succeeds, all is well; if it fails, it is important to check the error code with GetLastError or WSAGetLastError. If these return ERROR_IO_PENDING or WSA_IO_PENDING, the call has still effectively succeeded. All these codes mean is that an asynchronous operation has been started and its completion is pending, as opposed to completing immediately. A completion packet will be queued up regardless of whether the operation completes asynchronously or not.
Dequeuing packets from a completion port is handled by the GetQueuedCompletionStatus function:
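// documented prototype of GetQueuedCompletionStatus.
BOOL GetQueuedCompletionStatus(HANDLE CompletionPort,
                               LPDWORD lpNumberOfBytesTransferred,
                               PULONG_PTR lpCompletionKey,
                               LPOVERLAPPED *lpOverlapped,
                               DWORD dwMilliseconds);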
This function dequeues completion packets, consisting of the completion key we specified in CreateIoCompletionPort and the OVERLAPPED structure we gave when starting the I/O. If the I/O transferred any data, it will retrieve how many bytes were transferred, too. Again, the error handling is a bit weird on this one, having three error states.
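To make those three states concrete, here is a rough sketch of how to tell them apart (the variable names and handling here are mine, not part of the original example):

OVERLAPPED *ovl;
ULONG_PTR key;
DWORD transferred;

BOOL ret = GetQueuedCompletionStatus(iocp, &transferred, &key, &ovl, INFINITE);

if(ret)
{
  // a packet was dequeued and its I/O operation completed successfully.
}
else if(ovl != NULL)
{
  // a packet was dequeued, but the I/O operation it represents failed;
  // GetLastError gives the error for that operation.
}
else
{
  // nothing was dequeued at all: the wait timed out or the call itself failed;
  // GetLastError gives WAIT_TIMEOUT or the failure code.
}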
This can be run from as many threads as you like, even a single one. It is common practice to run a pool with twice as many threads as there are logical processors, keeping the CPUs busy via the aforementioned behavior of starting a new thread when a running one blocks.
Unless you are going for a single-threaded design, I recommend starting two threads per logical CPU. Even if your app is designed to be 100% asynchronous, you will still run into blocking when locking shared data, and you will even hit unavoidable hidden blocking I/O, such as page faults bringing paged-out memory back in. Keeping two threads per logical CPU keeps the processors busy without overloading the OS with too much context switching.
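As a minimal sketch of such a pool (the worker function, names, and thread creation details here are my own, not from the original post), the threads could be started like this:

#include <windows.h>
#include <process.h>

// hypothetical worker: loops on GetQueuedCompletionStatus until told to exit.
unsigned __stdcall worker_thread(void *arg)
{
  HANDLE iocp = (HANDLE)arg;

  // ... dequeue loop goes here ...

  (void)iocp;
  return 0;
}

void start_worker_pool(HANDLE iocp)
{
  SYSTEM_INFO si;
  GetSystemInfo(&si);

  // two threads per logical processor, as recommended above.
  DWORD count = si.dwNumberOfProcessors * 2;

  for(DWORD i = 0; i < count; ++i)
  {
    HANDLE thread = (HANDLE)_beginthreadex(NULL, 0, worker_thread, iocp, 0, NULL);

    if(thread)
      CloseHandle(thread);
  }
}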
This is all well and good, but two I/O operations were initiated: a file read and a socket send. We need a way to tell their completion packets apart. This is why we need to attach some state to the OVERLAPPED structure when we call those functions:
struct my_context
{
  OVERLAPPED ovl;
  int isread;
};

struct my_context readop, sendop;

memset(&readop.ovl, 0, sizeof(readop.ovl));
memset(&sendop.ovl, 0, sizeof(sendop.ovl));

readop.isread = 1;
sendop.isread = 0;

ReadFile(file, readbuf, sizeof(readbuf), NULL, &readop.ovl);
WSASend(sock, &sendwbuf, 1, NULL, 0, &sendop.ovl, NULL);
Now we can tell the operations apart when we dequeue them:
OVERLAPPED *ovl;
ULONG_PTR completionkey;
DWORD transferred;

GetQueuedCompletionStatus(iocp, &transferred, &completionkey, &ovl, INFINITE);

struct my_context *ctx = (struct my_context*)ovl;

if(ctx->isread)
{
  // read completed.
}
else
{
  // send completed.
}
The last important thing to know is how to queue up your own completion packets. This is useful if you want to split an operation up to be run on the thread pool, or if you want to exit a thread waiting on a call to GetQueuedCompletionStatus. To do this, we use the PostQueuedCompletionStatus function:
#define EXIT_KEY 0

struct my_context ctx;

PostQueuedCompletionStatus(iocp, 0, OPERATION_KEY, &ctx.ovl);
PostQueuedCompletionStatus(iocp, 0, EXIT_KEY, NULL);
Here we post two things onto the queue. First, we post our own structure, to be processed by our thread pool. Second, we give a new completion key: EXIT_KEY. The thread that processes this packet can test whether the completion key is EXIT_KEY to know when it needs to stop dequeuing packets and shut down.
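Putting the pieces together, a worker thread’s dequeue loop might look something like this sketch (my own composition of the snippets above, not code from the post):

for(;;)
{
  OVERLAPPED *ovl;
  ULONG_PTR key;
  DWORD transferred;

  BOOL ret = GetQueuedCompletionStatus(iocp, &transferred, &key, &ovl, INFINITE);

  if(!ret && ovl == NULL)
  {
    // the dequeue itself failed or timed out; decide whether to retry or bail.
    continue;
  }

  if(key == EXIT_KEY)
  {
    // shutdown packet posted via PostQueuedCompletionStatus; stop this thread.
    break;
  }

  struct my_context *ctx = (struct my_context*)ovl;

  // dispatch based on our own state (e.g. ctx->isread) and on whether
  // ret says the operation succeeded or failed.
}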
Other than the completion port handle, Windows does not use any of the parameters given to PostQueuedCompletionStatus. They are entirely for our use, to be dequeued with GetQueuedCompletionStatus.
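One pattern this enables (my own illustration, not something from the post) is embedding a function pointer next to the OVERLAPPED, so that posted work can be dispatched generically by the pool:

struct work_item
{
  OVERLAPPED ovl;
  void (*fn)(struct work_item *item);
};

void post_work(HANDLE iocp, struct work_item *item)
{
  // the pointer passes through the port untouched; when the worker dequeues
  // the packet, it casts the OVERLAPPED back to a work_item and calls item->fn(item).
  memset(&item->ovl, 0, sizeof(item->ovl));
  PostQueuedCompletionStatus(iocp, 0, OPERATION_KEY, &item->ovl);
}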
That’s all I have to write for now, and it should be everything one needs to get started with these high-performance APIs! I will make another post shortly detailing some good patterns for completion port usage, along with some optimization tips to ensure efficient use of these I/O APIs.
Update: this subject continued in I/O completion ports made easy.