<p>In general, the reason for caching is that you believe you can pull the data out of memory (without it being stale) faster than you can pull it from the database. A request that finds the right data in the cache is a cache hit. If your schema has a low cache hit rate, then caching is probably hurting more than helping. If your data changes rapidly, you will have a low cache hit rate and caching will be slower than simply querying for the data.</p>

<p>The trick is to split your data between infrequently changing and frequently changing elements. Cache the infrequently changing elements and do not cache the frequently changing ones. This could even be done at the database level on a single entity by using a 1:1 relationship, where one of the tables contains the infrequently changing data and the other the frequently changing data. You said that your source data contains 10 columns that almost never change and 90 that change frequently. Build your objects around that split so that you can cache the 10 that rarely change and query for the 90 that change frequently.</p>

<blockquote> <p>I store each row in a class and the class is stored in the Server Cache via a HUGE list</p> </blockquote>

<p>From your original post, it sounds like you are not storing each instance in cache, but instead storing a list of instances in cache as a single entry. The problem is that this design is prone to multi-threading issues. When multiple threads pull the one-list-to-rule-them-all, they are all accessing the same instance in memory (assuming they are on the same server). Furthermore, as you have discovered, the <code>CacheDependency</code> will not work in this design because it will expire the entire list rather than a single item.</p>

<p>One obvious, but highly problematic, solution would be to change your design to store each instance in memory with a logical cache key of some sort and add a <code>CacheDependency</code> for each instance.
The problem is that if the number of instances is large, that will create <strong>a lot</strong> of overhead in the system verifying the currency of each instance and expiring them when necessary. If the cache items are polling the database, that will also create a lot of traffic.</p>

<p>An approach I have used to solve the problem of having a large number of database-dependent cache dependencies is to write a custom <code>ICacheItemExpiration</code> for the Caching Application Block in the Enterprise Library. This also meant I was using the Caching Block, rather than the ASP.NET cache directly, to cache my objects. In this variant, I created a class called <code>DatabaseExpirationManager</code> which kept track of which items to expire from cache. I would still add each item to the cache individually, but with this modified dependency, which simply registered the item with the <code>DatabaseExpirationManager</code>. The <code>DatabaseExpirationManager</code> would be notified of the keys that needed to be expired and would expire those items from cache. I will say, right from the start, that this solution will probably not work on rapidly changing data: <code>DatabaseExpirationManager</code> would be running constantly, holding a lock on its list of items to expire and preventing new items from being added. You would have to do some serious multi-threading analysis to ensure that you reduced contention without enabling a race condition.</p>

<p><strong>ADDITION</strong></p>

<p>Ok. First, fair warning that this will be a long post. Second, this is not even the entire library, as that would be too long.</p>

<p>Taking the wayback machine, I wrote this code in late 2005/early 2006, right as .NET 2.0 came out, and I haven't investigated whether the more recent libraries might be doing this better (almost assuredly they are). I was using the January 2005/May 2005/January 2006 libraries.
You can still get the 2006 library off CodePlex.</p>

<p>The way I came up with this solution was to look at the source of the caching system in the Enterprise Library. In short, everything fed through the <code>CacheManager</code> class. That class has three primary components (all three are in the <code>Microsoft.Practices.EnterpriseLibrary.Caching</code> namespace): <code>Cache</code>, <code>BackgroundScheduler</code>, and <code>ExpirationPollTimer</code>.</p>

<p>The <code>Cache</code> class is the EntLib's implementation of a cache. The <code>BackgroundScheduler</code> was used to scavenge the cache on a separate thread. The <code>ExpirationPollTimer</code> was a wrapper around a <code>Timer</code> class.</p>

<p>So, first off, it should be noted that the <code>Cache</code> scavenges itself based on a timer. Similarly, my solution would poll the database on a timer. The EntLib cache and the ASP.NET cache both work on the premise of the individual items having a delegate to check when the item should be expired. My solution worked on the premise of an outside entity checking when the items should be expired. The second thing to note is that whenever you start playing around with a central cache, you have to be attentive to multi-threading issues.</p>

<p>First I replaced the <code>BackgroundScheduler</code> with two classes: <code>DatabaseExpirationWorker</code> and <code>DatabaseExpirationManager</code>. <code>DatabaseExpirationManager</code> contained the important method that queried the database for changes and passed the list of changes to an event:</p>

<pre><code>private object _syncRoot = new object();
private List&lt;Guid&gt; _objectChanges = new List&lt;Guid&gt;();
public event EventHandler&lt;DatabaseExpirationEventArgs&gt; ExpirationFired;
...
public void UpdateExpirations()
{
    lock ( _syncRoot )
    {
        DataTable dt = GetExpirationsFromDb();
        List&lt;Guid&gt; keys = new List&lt;Guid&gt;();
        foreach ( DataRow dr in dt.Rows )
        {
            Guid key = (Guid)dr[0];
            keys.Add(key);
            _objectChanges.Add(key);
        }
        if ( ExpirationFired != null )
            ExpirationFired(this, new DatabaseExpirationEventArgs(keys));
    }
}
</code></pre>

<p>The <code>DatabaseExpirationEventArgs</code> class looked like so:</p>

<pre><code>public class DatabaseExpirationEventArgs : System.EventArgs
{
    public DatabaseExpirationEventArgs( List&lt;Guid&gt; expiredKeys )
    {
        _expiredKeys = expiredKeys;
    }

    private List&lt;Guid&gt; _expiredKeys;
    public List&lt;Guid&gt; ExpiredKeys
    {
        get { return _expiredKeys; }
    }
}
</code></pre>

<p>In this database, all the primary keys were Guids, which made keeping track of changes substantially simpler. Each of the save methods in the middle tier would write its PK and the current datetime into a table. Each time the system polled the database, it stored the datetime (from the database, not from the middle tier) at which it initiated the polling, and <code>GetExpirationsFromDb</code> would return all items that had changed since that time. Another method would periodically remove rows that had long since been polled. This table of changes was very narrow: a guid and a datetime (with a PK on both columns and the clustered index on the datetime, IIRC), so it could be queried very quickly. Also note that I used the Guid as the key in the Cache.</p>

<p>The <code>DatabaseExpirationWorker</code> class was nearly identical to the <code>BackgroundScheduler</code> except that its <code>DoExpirationTimeoutExpired</code> would call the <code>DatabaseExpirationManager</code>'s <code>UpdateExpirations</code> method.
Since none of the methods in <code>BackgroundScheduler</code> were <code>virtual</code>, I could not simply derive from <code>BackgroundScheduler</code> and override its methods.</p>

<p>The last thing I did was to write my own version of the EntLib's <code>CacheManager</code> that used my <code>DatabaseExpirationWorker</code> instead of the <code>BackgroundScheduler</code>, and whose indexer would check the object expiration list:</p>

<pre><code>private List&lt;Guid&gt; _objectExpirations = new List&lt;Guid&gt;();

private void OnExpirationFired( object sender, DatabaseExpirationEventArgs e )
{
    lock ( _syncRoot )
    {
        _objectExpirations = e.ExpiredKeys;
        foreach ( Guid key in _objectExpirations )
            this.RealCache.Remove(key.ToString());
    }
}

private Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager _realCache;
private Microsoft.Practices.EnterpriseLibrary.Caching.CacheManager RealCache
{
    get
    {
        lock ( _syncRoot )
        {
            if ( _realCache == null )
                _realCache = Microsoft.Practices.EnterpriseLibrary.Caching.CacheFactory.GetCacheManager();
            return _realCache;
        }
    }
}

public object this[string key]
{
    get
    {
        lock ( _syncRoot )
        {
            if ( _objectExpirations.Contains(new Guid(key)) )
                return null;
            return this.RealCache.GetData(key);
        }
    }
}
</code></pre>

<p>Again, it's been many moons since I reviewed this code, but this gives you the gist of it. Even looking through my old code, I see many places that could be cleaned up and cleared up. I also have not looked at the Caching Block in the most recent version of the EntLib, but I would imagine it has changed and improved. Keep in mind that in the system in which I built this, there were dozens of changes per second, not hundreds. So, if the data was stale for a minute or two, that was acceptable. If your solution sees thousands of changes per second, then this approach may not be feasible.</p>
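<p>The hot/cold split recommended at the top (cache the ~10 stable columns, always query the ~90 volatile ones) can be sketched in a few lines. This is a language-neutral illustration in Python, not code from the answer; all names (<code>load_cold</code>, <code>load_entity</code>, <code>query_cold</code>, <code>query_hot</code>) are mine:</p>

```python
# Sketch of splitting an entity into a cached "cold" part (rarely changes)
# and an always-queried "hot" part (changes frequently).
cold_cache = {}  # holds only the stable columns, keyed by entity key

def load_cold(key, query_cold):
    """Return the stable columns, hitting the database only on a cache miss."""
    if key not in cold_cache:
        cold_cache[key] = query_cold(key)  # one-time load, then cached
    return cold_cache[key]

def load_entity(key, query_cold, query_hot):
    """Assemble the full row: cached cold columns + freshly queried hot ones."""
    row = dict(load_cold(key, query_cold))
    row.update(query_hot(key))  # hot columns are never cached
    return row
```

<p>The cache hit rate of the cold part stays high because those columns almost never change, while the hot part never risks going stale because it is never cached.</p>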
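<p>The narrow change-log table that backs <code>GetExpirationsFromDb</code> (one row per save: key plus change time, queried by "changed since last poll") can be sketched as follows. This is a minimal illustration using Python and SQLite, not the original schema; the table and function names are my own:</p>

```python
import sqlite3
import uuid
from datetime import datetime, timezone

# Hypothetical version of the narrow change-log table described above:
# just a key and a timestamp, so it stays small and fast to query.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE entity_changes (
        entity_key TEXT NOT NULL,
        changed_at TEXT NOT NULL,
        PRIMARY KEY (entity_key, changed_at)
    )
""")

def record_change(key):
    """Called by every middle-tier save: log the PK and the current time."""
    conn.execute(
        "INSERT INTO entity_changes VALUES (?, ?)",
        (str(key), datetime.now(timezone.utc).isoformat()),
    )

def get_expirations_since(last_poll):
    """Analogue of GetExpirationsFromDb: keys changed since the last poll."""
    rows = conn.execute(
        "SELECT DISTINCT entity_key FROM entity_changes WHERE changed_at > ?",
        (last_poll.isoformat(),),
    )
    return [r[0] for r in rows]
```

<p>As in the answer, the poll timestamp should come from the database clock, not the application clock, and a cleanup job should periodically delete rows older than the last completed poll.</p>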
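<p>The overall shape of the design (a manager that polls for changed keys and raises an event, and a cache wrapper that evicts those keys) can be condensed into a small sketch. This is the pattern only, not the EntLib-based C# implementation; class and method names here are illustrative, and <code>fetch_changed_keys</code> stands in for the database poll:</p>

```python
import threading

class DatabaseExpirationManager:
    """Polls a source of changed keys and notifies subscribers (the event role)."""
    def __init__(self, fetch_changed_keys):
        self._fetch = fetch_changed_keys
        self._lock = threading.Lock()
        self._listeners = []

    def subscribe(self, listener):
        self._listeners.append(listener)

    def update_expirations(self):
        # In the real design this runs on a timer (the ExpirationPollTimer role);
        # here it is invoked manually for clarity.
        with self._lock:
            keys = list(self._fetch())
            for listener in self._listeners:
                listener(keys)

class ExpiringCache:
    """Cache wrapper: evicts any key the manager reports as changed."""
    def __init__(self, manager):
        self._data = {}
        self._lock = threading.Lock()
        manager.subscribe(self._on_expiration_fired)

    def _on_expiration_fired(self, keys):
        with self._lock:
            for key in keys:
                self._data.pop(key, None)

    def put(self, key, value):
        with self._lock:
            self._data[key] = value

    def get(self, key):
        # A miss (None) tells the caller to reload from the database.
        with self._lock:
            return self._data.get(key)
```

<p>Note that the staleness window equals the polling interval, which is why the answer stresses that this only works when a minute or two of stale data is acceptable.</p>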