`async` itself is quite performant. A ton of work went into this.

In general, on the server side you're concerned about `async` I/O. I'm going to ignore `async` CPU-bound methods because the `async` overhead will get lost in the noise anyway.

Asynchronous I/O will increase your memory usage per request, but it'll reduce your thread usage per request. So you end up winning (except in borderline pathological corner cases). This is true for all asynchronous I/O, including `async`.

`await` was designed around a pattern - not just the `Task` type - so if you need to squeeze out as much performance as possible, you can.

> I read an article about the performance impact on those applications since the compiler will generate a quite complex state machine for async methods.

The [article you read](http://msdn.microsoft.com/en-us/magazine/hh456402.aspx) by Stephen Toub is excellent. I also recommend the [Zen of Async video](http://channel9.msdn.com/Events/BUILD/BUILD2011/TOOL-829T) (also by Stephen Toub).

> Async programming using these keywords is so much easier, but is it as good as, say, SocketAsyncEventArgs for sockets?

First, understand that `SocketAsyncEventArgs` is more scalable because it reduces memory garbage. The simpler way to use `async` sockets will generate more memory garbage, but since `await` is pattern-based you can [define your own `async`-compatible wrappers for the `SocketAsyncEventArgs` API](http://blogs.msdn.com/b/pfxteam/archive/2011/12/15/10248293.aspx) (as seen on Stephen Toub's blog... I'm sensing a pattern here ;). This lets you squeeze out every last ounce of performance.

That said, in the long run it's usually better to design a scale-out system than to twist the code to avoid a few memory allocations, IMHO.

> Second question: Are asynchronous I/O methods like Stream.WriteAsync really asynchronous (completion ports on .NET or epoll/poll on Mono), or are these methods cheap wrappers that push a write call onto the thread pool?

I don't know about Mono. On .NET, *most* asynchronous I/O methods are based on a completion port. The `Stream` class is a notable exception: the `Stream` base class does a "cheap wrapper" by default, but it allows derived classes to override this behavior. `Stream`s that come from network communications always override this to provide truly asynchronous I/O. `Stream`s that deal with files only override this **if** the stream was constructed explicitly for asynchronous I/O.

> Third question: Besides the SynchronizationContext of a UI application, is there a way to implement some kind of single-threaded context?

ASP.NET also has a `SynchronizationContext`, so if you're using ASP.NET you're already set.

If you are doing your own socket-based server (e.g., a Win32 service), then you *could* use the `AsyncContext` type in my AsyncEx library. But it doesn't sound like this is what you'd actually want. `AsyncContext` will create a single-threaded context on the current thread.
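For reference, this is roughly what using `AsyncContext` looks like. This is a minimal sketch assuming the Nito.AsyncEx NuGet package; `MainAsync` is a placeholder for whatever top-level asynchronous work the service would do:

```csharp
// Sketch only: assumes the Nito.AsyncEx package is referenced; MainAsync is
// a placeholder for the service's top-level async work.
using System;
using System.Threading.Tasks;
using Nito.AsyncEx;

class Program
{
    static void Main(string[] args)
    {
        // AsyncContext.Run installs a single-threaded SynchronizationContext
        // on the current thread and pumps it until MainAsync (and all of its
        // continuations) have completed.
        AsyncContext.Run(() => MainAsync(args));
    }

    static async Task MainAsync(string[] args)
    {
        // Every await in here resumes on this same thread.
        await Task.Delay(TimeSpan.FromSeconds(1));
        Console.WriteLine("Done.");
    }
}
```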
But the true power of `async` for server applications comes from scaling *requests* instead of *threads*.

Consider how the ASP.NET `SynchronizationContext` works: as each *request* comes in, it grabs a thread pool thread and constructs a `SynchronizationContext` (for that *request*). When that request has asynchronous work to do, it registers with the `SynchronizationContext` and the thread running that request returns to the thread pool. Later, when the asynchronous work completes, a thread pool thread (*any* thread) is grabbed, the existing `SynchronizationContext` is installed on it, and it continues processing that request. When the request is finally completed, its `SynchronizationContext` is disposed.

The key point in that process is that while the request is `await`ing asynchronous operations, there are *no* threads dedicated to that request. Since a *request* is considerably more lightweight than a *thread*, this enables the server to scale better.

If you gave each of your requests a single-threaded `SynchronizationContext` such as `AsyncContext`, this would bind a thread to each request even when it has nothing to do. That's hardly any better than a synchronous multithreaded server.

You may find my [MSDN article on `SynchronizationContext`](http://msdn.microsoft.com/en-us/magazine/gg598924.aspx) useful if you want to tackle inventing your own `SynchronizationContext`. In that article I also cover how asynchronous methods "register" and "install" the context; this is done mostly automatically by `async void` and `await`, so you won't have to do it explicitly.
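To make that request-based scaling concrete, here's a minimal sketch of an `async` controller action. It assumes ASP.NET MVC 4 or later with `System.Net.Http` available; the controller name and target URL are made up for the example:

```csharp
using System.Net.Http;
using System.Threading.Tasks;
using System.Web.Mvc;

// Sketch only: the controller and URL are illustrative.
public class ProxyController : Controller
{
    private static readonly HttpClient Client = new HttpClient();

    public async Task<ActionResult> Index()
    {
        // At this await, the request's thread returns to the pool; the
        // ASP.NET SynchronizationContext for this request is captured.
        string body = await Client.GetStringAsync("http://example.com/");

        // The continuation resumes on some pool thread (not necessarily the
        // original one) with the same request context installed. No thread
        // was dedicated to this request while the HTTP call was pending.
        return Content(body, "text/plain");
    }
}
```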