> Now tasks are basically operating on a ProcessThread... so therefore we can slice 1 process thread 1000000 times for 1000000 tasks.

This is not true. A `Task` != a thread, and especially does not equate to a [ProcessThread](http://msdn.microsoft.com/en-us/library/system.diagnostics.processthread.aspx). Multiple tasks will get scheduled onto a single thread.

> Is it the TPL task scheduler that looks at the OS and determines that we have 8 virtual process threads in a multicore machine, and therefore allocates the load of 1000000 tasks across these 8 virtual process threads?

Effectively, yes. When using the default `TaskScheduler` (which you're doing above), the tasks are run on ThreadPool threads. The 1000000 tasks will not create 1000000 threads (though the pool will use more than the 8 you mention...).

That being said, data parallelism (such as looping in a giant `for` loop) is typically much better handled via [Parallel.For](http://msdn.microsoft.com/en-us/library/system.threading.tasks.parallel.for.aspx) or [Parallel.ForEach](http://msdn.microsoft.com/en-us/library/system.threading.tasks.parallel.foreach.aspx). The `Parallel` class will, internally, use a [`Partitioner<T>`](http://msdn.microsoft.com/en-us/library/dd381768.aspx) to split the work into fewer tasks, which gives better overall performance since it has far less overhead. For more details, see my post on [Partitioning in the TPL](http://reedcopsey.com/2010/01/26/parallelism-in-net-part-5-partitioning-of-work/).
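To make the scheduling behavior concrete, here is a minimal sketch (not from the original answer; the exact thread count observed will vary by machine and runtime) that queues 1000000 tiny tasks, records how many distinct ThreadPool threads actually ran them, and then shows the `Parallel.For` alternative for the same loop:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

class Program
{
    static void Main()
    {
        const int N = 1000000;

        // Instrumentation only: record every distinct managed thread ID
        // the default scheduler actually uses.
        var threadIds = new ConcurrentDictionary<int, bool>();

        var tasks = new Task[N];
        for (int i = 0; i < N; i++)
        {
            tasks[i] = Task.Run(() =>
                threadIds.TryAdd(Thread.CurrentThread.ManagedThreadId, true));
        }
        Task.WaitAll(tasks);

        // Typically prints a small number (tens of threads, not 1000000):
        // the default scheduler multiplexes all tasks onto the ThreadPool.
        Console.WriteLine("{0} tasks ran on {1} distinct threads",
            N, threadIds.Count);

        // For a giant loop, Parallel.For partitions the index range into
        // a handful of chunks, avoiding per-task scheduling overhead.
        Parallel.For(0, N, i =>
        {
            // ... per-element work ...
        });
    }
}
```

The dictionary is purely there to observe the scheduler; for real CPU-bound loops you would go straight to `Parallel.For` and let the partitioner decide how many tasks to create.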
 
