<blockquote>
  <p>Is this a correct understanding?</p>
</blockquote>

<p>A loop that continually polls something until there is a response is very wasteful. If you add a <code>sleep</code> between each poll, you reduce the CPU usage, but at the cost of reduced responsiveness for individual requests ... compared to what is achievable if you do it the <em>right</em> way.</p>

<p>Without knowing exactly what you are doing (what you are polling, and why) it is a bit difficult to say what the <em>best</em> solution is. But here are a couple of possible scenarios:</p>

<ul>
  <li><p>If your web service is waiting for a response from an external service, then the simple solution is to just do a blocking read, and configure your web server with more worker threads.</p></li>
  <li><p>On the other hand, if your web service is waiting for a computation to complete, a new thread and wait / notify ... or one of the higher-level synchronization classes ... may be the answer.</p></li>
  <li><p>If you need to handle a really large number of these blocking requests in parallel, that is going to require a lot of threads, and hence a lot of memory and other resources. In that case you need to consider a web container that breaks the one-thread-per-request constraint. The latest version of the Servlet spec allows this, as do some of the alternative (non-Servlet) architectures.</p></li>
</ul>

<hr>

<p><strong>FOLLOW UP</strong></p>

<blockquote>
  <p>... I think the issue is your point 2, that the service is simply just waiting for the computation. So, by simply threading this computation will free up resources in the service?</p>
</blockquote>

<p>If what you are describing is true, then running the computation in a different thread won't make it go much quicker. In fact, it could make it go slower.</p>

<p>The ultimate bottleneck is going to be CPU capacity, disc bandwidth and/or network bandwidth. Multi-threading will only make an individual request faster if you can effectively and efficiently use two or more processors on the same request at the same time. It will only improve your throughput to the extent that it allows some requests to run while others are waiting for external events; e.g. network responses to arrive, or file read/write operations to complete.</p>

<p>What I think you actually need to do is figure out why the computation is taking so long, and try to fix that:</p>

<ul>
  <li>Are your database queries inefficient?</li>
  <li>Are you fetching result-sets that are too large?</li>
  <li>Do you have a problem with your schemas?</li>
  <li>Poor choice of indexes?</li>
  <li>Or are you simply trying to do too much on a machine that is too small, using the wrong kind of database?</li>
</ul>

<p>There are various techniques for <em>measuring</em> the performance of an application service and a database to determine where the bottlenecks are. (Start with a Google search ...)</p>
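<p>To make the wait / notify scenario concrete, here is a minimal sketch of replacing a sleep-poll loop with a blocking wait on a <code>CountDownLatch</code> (one of the higher-level synchronization classes mentioned above). The class and method names are illustrative, and the worker thread merely stands in for a real computation:</p>

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicReference;

// Instead of polling in a sleep loop, the request thread blocks on a
// CountDownLatch until a worker thread signals that the computation is done.
// All names here are illustrative, not from the original question.
public class BlockingWaitDemo {

    static String computeWithBlockingWait() {
        AtomicReference<String> result = new AtomicReference<>();
        CountDownLatch done = new CountDownLatch(1);

        // Worker thread stands in for the real computation.
        new Thread(() -> {
            result.set("answer");
            done.countDown();          // wake the waiting request thread
        }).start();

        try {
            // Blocks without burning CPU; the timeout avoids waiting forever.
            if (!done.await(5, TimeUnit.SECONDS)) {
                throw new IllegalStateException("computation timed out");
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            throw new IllegalStateException("interrupted while waiting", e);
        }
        return result.get();
    }

    public static void main(String[] args) {
        System.out.println(computeWithBlockingWait());  // prints "answer"
    }
}
```

<p>Unlike a poll-and-sleep loop, the waiting thread consumes no CPU while blocked and wakes up immediately when the result is ready, so there is no trade-off between CPU usage and responsiveness.</p>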