The whole `SetSynchronizationContext` business is a red herring; it is just a mechanism for marshalling, and the work still happens in the IO Thread Pool.

What you are asking for is a way to queue and harvest [Asynchronous Procedure Calls](http://msdn.microsoft.com/en-us/library/ms681951.aspx) for all your IO work from the main thread. Many higher-level frameworks wrap this kind of functionality, the most famous one being [libevent](http://libevent.org/).

There is a great recap of the various options here: [Whats the difference between epoll, poll, threadpool?](https://stackoverflow.com/questions/4093185/whats-the-difference-between-epoll-poll-threadpool/5449827#5449827)

.NET already takes care of scaling for you by having a special "IO Thread Pool" that handles IO access when you call the `BeginXYZ` methods. This IO Thread Pool must have at least one thread per processor on the box; see [ThreadPool.SetMaxThreads](http://msdn.microsoft.com/en-us/library/system.threading.threadpool.setmaxthreads.aspx).

If a single-threaded app is a critical requirement (for some crazy reason) you could, of course, interop all of this using [DllImport](http://msdn.microsoft.com/en-us/library/system.runtime.interopservices.dllimportattribute.aspx) (see an [example here](http://www.codeproject.com/KB/cs/managediocp.aspx)). However, it would be a very [complex and risky task](http://blogs.msdn.com/b/ericeil/archive/2008/06/20/windows-i-o-threads-vs-managed-i-o-threads.aspx):

> Why don't we support APCs as a completion mechanism? APCs are really not a good general-purpose completion mechanism for user code. Managing the reentrancy introduced by APCs is nearly impossible; any time you block on a lock, for example, some arbitrary I/O completion might take over your thread. It might try to acquire locks of its own, which may introduce lock ordering problems and thus deadlock. Preventing this requires meticulous design, and the ability to make sure that someone else's code will never run during your alertable wait, and vice-versa. This greatly limits the usefulness of APCs.

So, to recap: if you want a **single threaded** managed process that does all its work using APCs and completion ports, you are going to have to hand-code it. Building it would be risky and tricky (a rough sketch of the raw plumbing is shown further down).

If you simply want **high scale** networking, you can keep using `BeginXYZ` and family and rest assured that it will perform well, since it uses APCs. You pay a minor price for marshalling work between threads and for the particular .NET implementation.

From: http://msdn.microsoft.com/en-us/magazine/cc300760.aspx

> The next step in scaling up the server is to use asynchronous I/O. Asynchronous I/O alleviates the need to create and manage threads. This leads to much simpler code and also is a more efficient I/O model. Asynchronous I/O utilizes callbacks to handle incoming data and connections, which means there are no lists to set up and scan and there is no need to create new worker threads to deal with the pending I/O.
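To make that callback model concrete, here is a minimal sketch of the classic `BeginXYZ`/`EndXYZ` pattern for a TCP listener. This is my own illustration rather than code from any of the links above; the port number, buffer size and class names are arbitrary, and error handling is omitted. The point is that the accept and receive completions run on IO thread pool threads while the main thread stays free.

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading;

class AsyncListenerSketch
{
    static void Main()
    {
        var listener = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
        listener.Bind(new IPEndPoint(IPAddress.Loopback, 9000)); // arbitrary example port
        listener.Listen(100);

        // Queue an asynchronous accept; the callback fires on an IO thread pool thread.
        listener.BeginAccept(OnAccept, listener);

        Console.WriteLine("Main thread {0} is free to do other work.", Thread.CurrentThread.ManagedThreadId);
        Console.ReadLine(); // keep the process alive for the demo
    }

    static void OnAccept(IAsyncResult ar)
    {
        var listener = (Socket)ar.AsyncState;
        Socket client = listener.EndAccept(ar);

        // Immediately queue the next accept so new connections keep being serviced.
        listener.BeginAccept(OnAccept, listener);

        var buffer = new byte[4096]; // arbitrary buffer size
        client.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive,
                            Tuple.Create(client, buffer));
    }

    static void OnReceive(IAsyncResult ar)
    {
        var state = (Tuple<Socket, byte[]>)ar.AsyncState;
        Socket client = state.Item1;
        byte[] buffer = state.Item2;

        int read = client.EndReceive(ar);
        if (read <= 0) { client.Close(); return; }

        // This completion runs on an IO thread pool thread, not the main thread.
        Console.WriteLine("Received {0} bytes on thread {1} (thread pool thread: {2})",
                          read, Thread.CurrentThread.ManagedThreadId,
                          Thread.CurrentThread.IsThreadPoolThread);

        // Keep reading from the same client.
        client.BeginReceive(buffer, 0, buffer.Length, SocketFlags.None, OnReceive, state);
    }
}
```

Hit it with a couple of connections and you will see the completions reported on thread pool threads even though you never created a thread yourself.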
An interesting side fact is that single threaded is not the fastest way to do async sockets on Windows using completion ports; see http://doc.sch130.nsc.ru/www.sysinternals.com/ntw2k/info/comport.shtml

> The goal of a server is to incur as few context switches as possible by having its threads avoid unnecessary blocking, while at the same time maximizing parallelism by using multiple threads. The ideal is for there to be a thread actively servicing a client request on every processor and for those threads not to block if there are additional requests waiting when they complete a request. For this to work correctly however, there must be a way for the application to activate another thread when one processing a client request blocks on I/O (like when it reads from a file as part of the processing).
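For the hand-coded `DllImport` route mentioned earlier, the raw plumbing looks roughly like the sketch below: create a completion port and harvest packets on one thread with `GetQueuedCompletionStatus`. This is only my illustration of the Win32 calls involved (a manually posted packet stands in for real overlapped I/O); associating actual socket or file handles and managing the `OVERLAPPED` structures is exactly the complex and risky part the blog post quoted above warns about. Note the last parameter of `CreateIoCompletionPort`, the concurrency value, which is the knob the completion-port article is talking about when it says the ideal is one actively running thread per processor.

```csharp
using System;
using System.Runtime.InteropServices;

static class CompletionPortSketch
{
    static readonly IntPtr INVALID_HANDLE_VALUE = new IntPtr(-1);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern IntPtr CreateIoCompletionPort(IntPtr fileHandle, IntPtr existingPort,
                                                UIntPtr completionKey, uint numberOfConcurrentThreads);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool PostQueuedCompletionStatus(IntPtr port, uint bytesTransferred,
                                                  UIntPtr completionKey, IntPtr overlapped);

    [DllImport("kernel32.dll", SetLastError = true)]
    static extern bool GetQueuedCompletionStatus(IntPtr port, out uint bytesTransferred,
                                                 out UIntPtr completionKey, out IntPtr overlapped,
                                                 uint timeoutMilliseconds);

    static void Main()
    {
        // A concurrency value of 1 asks the kernel to wake at most one thread at a time,
        // which is the "single threaded" shape being discussed; the article above argues
        // that one thread per processor is usually the better choice.
        IntPtr port = CreateIoCompletionPort(INVALID_HANDLE_VALUE, IntPtr.Zero, UIntPtr.Zero, 1);
        if (port == IntPtr.Zero)
            throw new InvalidOperationException("CreateIoCompletionPort failed: " + Marshal.GetLastWin32Error());

        // Simulate a completion. With real I/O the kernel posts these packets itself when an
        // overlapped read or write finishes on a handle that was associated with the port.
        PostQueuedCompletionStatus(port, 42, new UIntPtr(1), IntPtr.Zero);

        uint bytes;
        UIntPtr key;
        IntPtr overlapped;
        if (GetQueuedCompletionStatus(port, out bytes, out key, out overlapped, 1000))
            Console.WriteLine("Harvested a completion packet: {0} bytes, key {1}", bytes, key);
    }
}
```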
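Finally, if you want to see how the managed thread pool that sits behind `BeginXYZ` is sized on a given box (the `ThreadPool.SetMaxThreads` page linked above describes the limits), a quick check looks something like this; nothing in it is specific to sockets:

```csharp
using System;
using System.Threading;

class ThreadPoolInfo
{
    static void Main()
    {
        int minWorker, minIo, maxWorker, maxIo, availWorker, availIo;

        ThreadPool.GetMinThreads(out minWorker, out minIo);
        ThreadPool.GetMaxThreads(out maxWorker, out maxIo);
        ThreadPool.GetAvailableThreads(out availWorker, out availIo);

        Console.WriteLine("Processors:              {0}", Environment.ProcessorCount);
        Console.WriteLine("Min worker / IO threads: {0} / {1}", minWorker, minIo);
        Console.WriteLine("Max worker / IO threads: {0} / {1}", maxWorker, maxIo);
        Console.WriteLine("Available worker / IO:   {0} / {1}", availWorker, availIo);

        // The runtime enforces a floor on these values (the linked docs mention at least one
        // thread per processor), so very small requests may simply be rejected; the exact
        // behaviour varies by runtime version.
        bool accepted = ThreadPool.SetMaxThreads(Environment.ProcessorCount, Environment.ProcessorCount);
        Console.WriteLine("SetMaxThreads(processor count) accepted: {0}", accepted);
    }
}
```

The exact numbers vary between runtime versions, but the IO completion thread figures are the ones relevant to the `BeginXYZ` callbacks.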