<p>Seems like a two-pass map-reduce would fit the bill, somewhat akin to this example: <a href="http://cookbook.mongodb.org/patterns/unique_items_map_reduce/" rel="nofollow">http://cookbook.mongodb.org/patterns/unique_items_map_reduce/</a></p>

<p>In pass #1, group the original "name"x"action"x"date" documents by just "name" and "action", collecting the various "date" values into a "dates" array during reduce. Use a 'finalize' function to find the minimum of the collected dates.</p>

<p>Untested code:</p>

<pre><code>// phase i map function:
function () {
    emit( { "name": this.name, "action": this.action },
          { "count": 1, "dates": [ this.date ] } );
}

// phase i reduce function:
function( key, values ) {
    var result = { count: 0, dates: [ ] };
    values.forEach( function( value ) {
        result.count += value.count;
        result.dates = result.dates.concat( value.dates );
    } );
    return result;
}

// phase i finalize function:
function( key, reduced_value ) {
    var earliest = new Date( Math.min.apply( Math, reduced_value.dates ) );
    reduced_value.firstEncounteredDate = earliest;
    return reduced_value;
}
</code></pre>

<p>In pass #2, use the documents generated in pass #1 as input. For each "name"x"action" document, emit a new "name"x"action"x"date" document for each collected date, along with the now determined minimum date common to that "name"x"action" pair. Group by "name"x"action"x"date", summing up the count for each individual date during reduce.</p>

<p>Equally untested code:</p>

<pre><code>// phase ii map function
// note: the pass #1 output documents have the map-reduce output shape
// { _id: { name, action }, value: { count, dates, firstEncounteredDate } },
// and `this` is rebound inside the forEach callback, so capture it first:
function() {
    var doc = this;
    doc.value.dates.forEach( function( d ) {
        emit( { "name": doc._id.name, "action": doc._id.action, "date": d },
              { "count": 1,
                "firstEncounteredDate": doc.value.firstEncounteredDate } );
    } );
}

// phase ii reduce function:
function( key, values ) {
    // note: values[i].firstEncounteredDate are all identical, so take the first
    var result = { "count": 0,
                   "firstEncounteredDate": values[0].firstEncounteredDate };
    values.forEach( function( value ) {
        result.count += value.count;
    } );
    return result;
}
</code></pre>

<p>Pass #2 does not do a lot of heavy lifting, obviously -- it's mostly copying each document N times, one for each unique date. We could easily build a map of unique dates to their incidence counts during the reduce step of pass #1. (In fact, if we <em>don't</em> do this, there's no real point in having a "count" field in the values from pass #1.) But doing the second pass is a fairly effortless way of generating a full target collection containing the desired documents.</p>
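<p>The two passes can be sanity-checked without a MongoDB server by feeding the functions through a small in-memory driver. Everything below is a made-up sketch: <code>runMapReduce</code>, the <code>map1</code>/<code>reduce1</code>/<code>map2</code> names, and the sample <code>events</code> data are all invented for illustration, and the driver mimics only the parts of mapReduce the answer relies on (grouping by key, skipping reduce for single-value keys, optional finalize).</p>

```javascript
// Hypothetical in-memory stand-in for MongoDB's mapReduce: groups emitted
// values by key, reduces each group, applies the optional finalize function,
// and yields { _id, value } documents like a real output collection would.
function runMapReduce(docs, mapFn, reduceFn, finalizeFn) {
  var groups = {};
  globalThis.emit = function (key, value) {
    var k = JSON.stringify(key);
    (groups[k] = groups[k] || { key: key, values: [] }).values.push(value);
  };
  docs.forEach(function (doc) { mapFn.call(doc); });
  return Object.keys(groups).map(function (k) {
    var g = groups[k];
    // MongoDB skips reduce when a key has a single value; mimic that here.
    var value = g.values.length === 1 ? g.values[0] : reduceFn(g.key, g.values);
    if (finalizeFn) { value = finalizeFn(g.key, value); }
    return { _id: g.key, value: value };
  });
}

// Phase i functions, as in the answer above.
function map1() {
  emit({ name: this.name, action: this.action },
       { count: 1, dates: [this.date] });
}
function reduce1(key, values) {
  var result = { count: 0, dates: [] };
  values.forEach(function (value) {
    result.count += value.count;
    result.dates = result.dates.concat(value.dates);
  });
  return result;
}
function finalize1(key, reduced) {
  reduced.firstEncounteredDate = new Date(Math.min.apply(Math, reduced.dates));
  return reduced;
}

// Phase ii functions; input documents have the { _id, value } output shape.
function map2() {
  var doc = this; // `this` is rebound inside the forEach callback
  doc.value.dates.forEach(function (d) {
    emit({ name: doc._id.name, action: doc._id.action, date: d },
         { count: 1, firstEncounteredDate: doc.value.firstEncounteredDate });
  });
}
function reduce2(key, values) {
  // values[i].firstEncounteredDate are all identical, so take the first
  var result = { count: 0, firstEncounteredDate: values[0].firstEncounteredDate };
  values.forEach(function (value) { result.count += value.count; });
  return result;
}

// Made-up sample data: "alice" clicks twice on Jan 2 and once on Jan 1.
var events = [
  { name: "alice", action: "click", date: new Date("2012-01-02") },
  { name: "alice", action: "click", date: new Date("2012-01-01") },
  { name: "alice", action: "click", date: new Date("2012-01-02") }
];

var pass1 = runMapReduce(events, map1, reduce1, finalize1);
var pass2 = runMapReduce(pass1, map2, reduce2, null);
// pass2 holds one document per unique name/action/date combination, each
// carrying its per-date count and the pair's earliest encountered date.
console.log(JSON.stringify(pass2, null, 2));
```

<p>Against a real database the same functions would be passed to <code>db.collection.mapReduce(...)</code> twice, with the output collection of the first call serving as the input collection of the second.</p>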