<p>This may be a bit old-fashioned, but you could fill an ADO.NET DataSet from the database: DataSet and DataTable form ADO.NET's disconnected layer, so the DataSet can be held in memory by the application for a set amount of time and then posted back to the database. Create the DataSet, fill it from Entity Framework, the connected ADO.NET layer, or LINQ to SQL, add new data to it as needed, and then run a final query against the database to merge the changes back.</p>

<p>On one project a while back I combined LINQ, ADO.NET, and XML serialization: I serialized data from ADO.NET to an XML file using ADO.NET's built-in XML serialization, then read it back with LINQ to XML. Similar to what you describe, the XML file was essentially the cache in file form. I kept it up to date by counting its distinct elements representing a key value in the database; if the counts were off, I updated the file, otherwise it stayed the same. This wasn't practical for large sets of millions of rows, but for small data sets I wanted to always have access to, it was convenient and pretty fast.</p>

<p>The 70-516 MS Press book on .NET 4.0 data access has a lab on caching near the end, if you can find it online. It basically targets a database, collects the changes since the last run, works off those, and merges them back at the end. That way you are always working with a smaller differential in memory while still tracking your working changes.</p>
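<p>To make the DataSet-as-cache idea concrete, here is a minimal sketch. The table name, columns, and file name are made up for illustration; in a real application you would fill the DataSet with <code>SqlDataAdapter.Fill</code> instead of adding rows by hand. It shows the three pieces described above: a disconnected DataSet held in memory, ADO.NET's built-in XML serialization (<code>WriteXml</code>/<code>ReadXml</code>), and <code>DataSet.Merge</code> matching on the primary key to fold fresh rows into the cached copy.</p>

<pre><code>using System;
using System.Data;

class DataSetCacheSketch
{
    static void Main()
    {
        // Build a small disconnected DataSet by hand; in practice you
        // would fill it from the database (e.g. SqlDataAdapter.Fill).
        var cache = new DataSet("Cache");
        var orders = cache.Tables.Add("Orders");
        orders.Columns.Add("Id", typeof(int));
        orders.Columns.Add("Status", typeof(string));
        orders.PrimaryKey = new[] { orders.Columns["Id"] };
        orders.Rows.Add(1, "Open");
        orders.Rows.Add(2, "Open");
        cache.AcceptChanges(); // mark rows as unchanged (the cached state)

        // Persist the cache with ADO.NET's built-in XML serialization.
        cache.WriteXml("orders-cache.xml", XmlWriteMode.WriteSchema);

        // Later (or in another run): read the cache back from the file.
        var reloaded = new DataSet();
        reloaded.ReadXml("orders-cache.xml", XmlReadMode.ReadSchema);

        // Simulate fresh rows arriving from the database. Merge matches
        // on the primary key, so existing rows are updated in place and
        // new rows are appended.
        var fresh = reloaded.Clone();
        fresh.Tables["Orders"].Rows.Add(2, "Shipped");
        fresh.Tables["Orders"].Rows.Add(3, "Open");
        reloaded.Merge(fresh);

        Console.WriteLine(reloaded.Tables["Orders"].Rows.Count);
    }
}
</code></pre>

<p>The same <code>Merge</code> call is what you would use to push changes back the other way: fill a second DataSet from the live database, merge, and then let a data adapter's <code>Update</code> apply the tracked changes.</p>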
 
