The following applies to all DB access technologies, not just ADO.NET.

Client-side processing is usually the wrong place to solve data access problems. You can achieve several orders of magnitude improvement in performance by optimizing your schema, creating proper indexes, and writing proper SQL queries.

Why transfer 1M records to a client for processing, over a limited network connection with significant latency, when a proper query could return the 2-3 records that matter?

RDBMS systems are designed to take advantage of available processors, RAM and disk arrays to perform queries as fast as possible. DB servers typically have far larger amounts of RAM and faster disk arrays than client machines.

What type of processing are you trying to do? Are you perhaps trying to analyze transactional data? In that case you should first extract the data to a reporting database or, better yet, an OLAP database. A star schema with proper indexes and precalculated analytics can be 1000x faster than an OLTP schema for analysis.

Improved SQL coding can also yield a 10x-50x improvement or more. A typical mistake by programmers not accustomed to SQL is to use cursors instead of set operations to process data. This usually leads to horrendous performance degradation, on the order of 50x or worse.

Pulling all the data to the client to process it row by row is even worse. This is essentially the same as using cursors, only the data has to travel over the wire and processing has to make do with the client's often limited memory.

The only place where asynchronous processing offers any advantage is when you want to fire off a long operation and execute code when it finishes. ADO.NET already provides [asynchronous operations](http://msdn.microsoft.com/en-us/library/zw97wx20.aspx) using the APM model (BeginExecute/EndExecute). You can use the TPL to wrap this in a task to simplify programming, but you won't get any performance improvement; a short sketch of that pattern is shown below.

It could be that your problem is not suited to database processing at all. If your algorithm requires scanning the entire dataset multiple times, it would be better to extract all the data to a suitable file format in one go and transfer it to another machine for processing.
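Here is a minimal sketch of wrapping ADO.NET's APM pair in a TPL task, assuming SQL Server via `System.Data.SqlClient`. The connection string, the `Orders` table, and the column names are placeholders, not taken from your scenario; on older .NET Framework versions the APM methods also require `Asynchronous Processing=true` in the connection string.

```csharp
using System;
using System.Data.SqlClient;
using System.Threading.Tasks;

static class AsyncQuerySketch
{
    static void Main()
    {
        // Hypothetical connection string; adjust server, database and security settings.
        const string connectionString =
            "Data Source=.;Initial Catalog=Sales;Integrated Security=true;" +
            "Asynchronous Processing=true";

        var connection = new SqlConnection(connectionString);
        connection.Open();

        // Hypothetical aggregate query: the server does the heavy lifting and
        // returns only the few rows that matter.
        var command = new SqlCommand(
            "SELECT TOP 3 CustomerId, SUM(Amount) AS Total " +
            "FROM Orders GROUP BY CustomerId ORDER BY Total DESC",
            connection);

        // Wrap the APM pair (BeginExecuteReader / EndExecuteReader) in a Task.
        Task<SqlDataReader> queryTask = Task<SqlDataReader>.Factory.FromAsync(
            command.BeginExecuteReader,
            command.EndExecuteReader,
            null);

        // Schedule follow-up work for when the query finishes; no client thread
        // is blocked while the server executes the query.
        Task continuation = queryTask.ContinueWith(t =>
        {
            using (SqlDataReader reader = t.Result)
            {
                while (reader.Read())
                {
                    Console.WriteLine("{0}: {1}", reader["CustomerId"], reader["Total"]);
                }
            }
            connection.Dispose();
        });

        continuation.Wait();
    }
}
```

The wrapping only simplifies composition and continuation handling; the query itself runs no faster than it would with a plain synchronous `ExecuteReader`.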