The activemq.prefetchSize header is only available on a SUBSCRIBE frame, not on a CONNECT, according to the ActiveMQ docs for their extended STOMP headers (http://activemq.apache.org/stomp.html). Here is the relevant info:

> **verb:** SUBSCRIBE
>
> **header:** activemq.prefetchSize
>
> **type:** int
>
> **description:** Specifies the maximum number of pending messages that will be dispatched to the client. Once this maximum is reached, no more messages are dispatched until the client acknowledges a message. Set to 1 for very fair distribution of messages across consumers where processing messages can be slow.

My reading and experience with this is that since M1 has not been ack'd (because you have client ack turned on), M1 should be the one message allowed by prefetchSize=1 on the subscription. I am surprised to hear that it didn't work, but perhaps I need to run a more detailed test. Your settings look correct for the behavior you want.

I have heard from others about flakiness in ActiveMQ dispatch, so it is possible this is a bug in the version you are using.

One suggestion I would have is to either sniff the network traffic to see if M1 is getting ack'd for some reason, or throw some puts statements into the Ruby stomp gem to watch the communication (this is what I usually end up doing when debugging STOMP problems).

If I get a chance to try this out, I'll update my answer with my own results.

One more thing to consider: it is very possible that multiple long-processing messages could be sent, and if the number of long-processing messages exceeds your number of processes, you'll be in this fix where quick-processing messages are left waiting.

I tend to have at least one dedicated process that just does quick jobs, or to put it another way, I dedicate a set number of processes to the longer jobs. Having every poller consumer process listen to both long and short jobs can end up with sub-optimal results no matter what dispatch does. Processor groups are the way to configure a consumer to listen to only a subset of destinations: http://code.google.com/p/activemessaging/wiki/Configuration

> **processor_group name, \*list_of_processors**
>
> A processor group is a way to run the poller to only execute a subset of the processors by passing the name of the group in the poller command line arguments.
>
> You specify the name of the processor as its underscored lowercase version. So if you have a FooBarProcessor and BarFooProcessor in a processor group, it would look like this:
>
> ```
> ActiveMessaging::Gateway.define do |s|
>   ...
>   s.processor_group :my_group, :foo_bar_processor, :bar_foo_processor
> end
> ```
>
> The processor group is passed into the poller like the following:
>
> ```
> ./script/poller start -- process-group=my_group
> ```
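For reference, here is a minimal sketch of the kind of subscription being discussed, using the Ruby stomp gem directly rather than ActiveMessaging. The broker address, queue name, and processing logic are assumptions for illustration; the point is the combination of client ack and `activemq.prefetchSize: 1` on the SUBSCRIBE:

```ruby
require 'stomp'

# Assumed broker location and queue name -- adjust to your setup.
client = Stomp::Client.new("", "", "localhost", 61613)

# Client ack plus a prefetch of 1: the broker should hold back further
# messages until the current one has been acknowledged.
headers = { :ack => "client", "activemq.prefetchSize" => "1" }

client.subscribe("/queue/long_jobs", headers) do |msg|
  puts "processing #{msg.body}"   # slow work happens here
  client.acknowledge(msg)         # only after this should the next message be dispatched
end

client.join
```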
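And to illustrate the dedicated-process suggestion, here is a sketch of how processors and groups might be split so quick jobs never wait behind long ones. The class, destination, and group names are made up for the example; the `subscribes_to`/`on_message` structure and the `destination`/`processor_group` configuration follow the ActiveMessaging conventions documented on the wiki linked above:

```ruby
# app/processors/quick_job_processor.rb
class QuickJobProcessor < ApplicationProcessor
  subscribes_to :quick_jobs

  def on_message(message)
    # fast work only
  end
end

# app/processors/long_job_processor.rb
class LongJobProcessor < ApplicationProcessor
  subscribes_to :long_jobs

  def on_message(message)
    # slow work only
  end
end

# config/messaging.rb
ActiveMessaging::Gateway.define do |s|
  s.destination :quick_jobs, '/queue/quick_jobs'
  s.destination :long_jobs,  '/queue/long_jobs'

  s.processor_group :quick, :quick_job_processor
  s.processor_group :long,  :long_job_processor
end
```

Each poller instance is then started against a single group, e.g. `./script/poller start -- process-group=quick` for the dedicated quick-job process, so however slowly the long jobs grind along, the quick queue always has a consumer free.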
 
