With regard to publishing events from different endpoints, I think you're getting a little too caught up in the idea of an event being published from one service. That's true only in the logical sense: one event should be fully owned by one *logical* service, but *a logical service can be made up of many endpoints*.

An event is not really published from a queue. When you think of the queue an event is published from, what you're really talking about is the input queue where you send subscription requests for that event.

So you can have multiple endpoints all publishing the same event, as long as the subscription requests for that event all go to the same place.

This is common, for instance, in a bulk vs. priority scenario, where two endpoints handle the same command (and then publish the same event), except one is bulk with a long SLA and the other has a much shorter SLA: maybe it's a big customer, or the command is coming from an actual human user waiting for a response. QueueA and PriorityQueueA both process the same command and publish the same event, but QueueA handles the subscriptions, so both processes "publish from" QueueA.

That said, have you tried just letting multiple threads access the aggregate roots? You may find that, with a small number of retries, it isn't as contentious as you might think. I have some fairly contentious processes in production with NServiceBus, and although I see the occasional evidence of contention as an exception in the log, with 5 retries none of them ever progress to the error queue.

More recently, the second-level retries feature has been added, which further decreases the chance of a message ending up in the error queue.

If there is that much contention, another strategy might be to maintain an in-memory list of the aggregate roots currently being operated on, and if a message comes in that should be "locked out", call `Bus.HandleCurrentMessageLater()` to stick that message back on the end of the queue (see the sketch below).
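Here's a minimal sketch of that lock-out strategy, assuming the classic `IBus` API (NServiceBus v5 and earlier); the `AdjustInventory` command and its `InventoryItemId` property are hypothetical names, and only `Bus.HandleCurrentMessageLater()` comes from the discussion above:

```csharp
using System;
using System.Collections.Concurrent;
using NServiceBus;

// Hypothetical command that targets a single aggregate root.
public class AdjustInventory : ICommand
{
    public Guid InventoryItemId { get; set; }
}

public class AdjustInventoryHandler : IHandleMessages<AdjustInventory>
{
    // In-memory set of aggregate roots currently being operated on,
    // shared by all handler threads in this endpoint process.
    static readonly ConcurrentDictionary<Guid, byte> inFlight =
        new ConcurrentDictionary<Guid, byte>();

    public IBus Bus { get; set; } // property-injected by NServiceBus

    public void Handle(AdjustInventory message)
    {
        if (!inFlight.TryAdd(message.InventoryItemId, 0))
        {
            // Another thread holds this aggregate: put the message back
            // on the end of the input queue and move on.
            Bus.HandleCurrentMessageLater();
            return;
        }

        try
        {
            // Load the aggregate root, apply the change, publish events...
        }
        finally
        {
            byte ignored;
            inFlight.TryRemove(message.InventoryItemId, out ignored);
        }
    }
}
```

Note that the `ConcurrentDictionary` lock-out is per-process, so it only helps when all the contending threads live in the same endpoint instance.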