<p><em>Are you considering the following points while measuring the performance of your application?</em></p> <ol> <li>Caching</li> <li>Sessionless controllers</li> <li>Async controllers</li> </ol> <h2><strong>Output caching:</strong></h2> <p><em>Perhaps the most useful feature of MVC3 (<strong>performance-wise</strong>) is output caching. The biggest <strong>performance</strong> hits occur when your application actually has to fetch data, run calculations on it and return the result. Output caching can cache these results so they can be returned directly without even touching the database. Especially when executing complex queries this can drop the load on your server significantly (in fact, you could drop the load on your server by a whopping 90% by carefully implementing caching in your web application).</em></p> <pre><code>namespace MvcApplication1.Controllers
{
    public class DataController : Controller
    {
        [OutputCache(Duration = 10)]
        public string Index()
        {
            return DateTime.Now.ToString("T");
        }
    }
}
</code></pre> <h2><strong>Sessionless controllers:</strong></h2> <p>Controllers with session state disabled provide an optimization for controllers that do not require session state. Stateless controllers are meant for situations where you do not need the concept of a session.</p> <p><em>By default the ASP.NET pipeline will not process requests belonging to the same session concurrently. It serialises them, i.e. it queues them in the order they were received, so that they are processed serially rather than in parallel. This means that if a request is in progress and another request from the same session arrives, the second is queued and only begins executing when the first has finished.</em></p> <p>Let's look at an example: a page making 3 asynchronous AJAX requests to the server, with session state enabled (note that session must actually be used, as ASP.NET is smart enough not to serialise requests if you never use session state, even if it's enabled).</p> <p><strong>jQuery</strong></p> <pre><code>$(document).ready(function () {
    // Make 3 concurrent requests to /ajaxtest/test
    for (var i = 0; i &lt; 3; i++) {
        $.post("/ajaxtest/test/" + i, function (data) {
            // Do something with data...
        }, "json");
    }
});
</code></pre> <p><strong>Controller action method</strong></p> <pre><code>public class AjaxTestController : Controller
{
    [HttpPost]
    public JsonResult Test(int? id)
    {
        Thread.Sleep(500);
        return Json(/* some object */);
    }
}
</code></pre> <p><img src="https://i.stack.imgur.com/hVLIQ.png" alt="enter image description here"></p> <p><em>You can see the effect of serialised requests in the network profile: each request takes roughly 500ms longer than the previous one, so we get no benefit from making these AJAX calls asynchronously. Let's look at the profile again with session state disabled for our AjaxTestController (using the [SessionState] attribute).</em></p> <pre><code>[SessionState(SessionStateBehavior.Disabled)]
public class AjaxTestController : Controller
{
    // ...as above
}
</code></pre> <p><img src="https://i.stack.imgur.com/2aaXT.png" alt="enter image description here"></p> <p><em>Much better! You can see the 3 requests being processed in parallel, taking a total of 500ms to complete rather than the 1500ms we saw in the first example.</em></p> <h2><strong>Async controllers:</strong></h2> <p>First, the controller begins one or more external I/O calls (e.g., SQL database calls or web service calls). Without waiting for them to complete, it releases the thread back into the ASP.NET worker thread pool so that it can deal with other requests.</p> <p>Later, when all of the external I/O calls have completed, the underlying ASP.NET platform grabs another free worker thread from the pool, reattaches it to your original HTTP context, and lets it complete handling the original request.</p> <p><img src="https://i.stack.imgur.com/GLy30.png" alt="enter image description here"></p> <h2><strong><a href="http://blog.stevensanderson.com/2010/01/25/measuring-the-performance-of-asynchronous-controllers/" rel="nofollow noreferrer">How to measure the response time under heavy traffic?</a></strong></h2> <p><a href="http://blog.stevensanderson.com/2010/01/25/measuring-the-performance-of-asynchronous-controllers/" rel="nofollow noreferrer">The content below is copied from this link, because links sometimes break. Please check the link for more details.</a></p> <p><em>To understand how asynchronous controllers respond to differing levels of traffic, and how this compares to a straightforward synchronous controller, you can create a sample MVC application with two controllers. To simulate a long-running external call, they both perform a SQL query that takes 2 seconds to complete (using the SQL command WAITFOR DELAY '00:00:02') and then return the same fixed text to the browser. One of them does it synchronously; the other asynchronously.</em></p> <p>In another example you can check a simple C# console application that simulates heavy traffic hitting a given URL. It simply requests the same URL over and over, calculating the rolling average of the last few response times. First it does so on just one thread, then gradually increases the number of concurrent threads to 150 over a 30-minute period. If you want to try running this tool against your own site, you can download the C# source code.</p> <p>The results illustrate a number of points about how asynchronous requests perform. Check out this graph of average response times versus number of concurrent requests (lower response times are better):</p> <p><img src="https://i.stack.imgur.com/ZQ4XH.png" alt="enter image description here"></p> <p><em>To understand this, first I need to tell you that I had set my ASP.NET MVC application's worker thread pool to an artificially low maximum limit of 50 worker threads. My server actually has a default max threadpool size of 200 – a more sensible limit – but the results are made clearer if I reduce it. As you can see, the synchronous and asynchronous requests performed exactly the same as long as there were enough worker threads to go around. And why shouldn't they? But once the threadpool was exhausted (&gt; 50 clients), the synchronous requests had to form a queue to be serviced. Basic queuing theory tells us that the average time spent waiting in a queue is given by the formula:</em></p> <p><img src="https://i.stack.imgur.com/Ksxij.png" alt="enter image description here"></p> <p><em>and this is exactly what we see in the graph. The queuing time grows linearly with the length of the queue. (Apologies for my indulgence in using a formula – sometimes I just can't suppress my inner mathematician. I'll get therapy if it becomes a problem.) The asynchronous requests didn't need to start queuing so soon, though. They don't need to block a worker thread while waiting, so the threadpool limit wasn't an issue. So why did they start queuing when there were more than 100 clients? It's because the ADO.NET connection pool is limited to 100 concurrent connections by default.</em></p> <p>Hope this helps.</p>
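The async-controller hand-off described above was expressed in MVC 3 through the `AsyncController` base class and its `XxxAsync`/`XxxCompleted` action pair. A minimal sketch of that pattern follows; the controller name, action names and the `FetchHeadlines` call are all hypothetical, invented for illustration only:

```csharp
using System.Threading.Tasks;
using System.Web.Mvc;

// Sketch of the MVC 3 AsyncController pattern: HeadlinesAsync starts
// the work and returns immediately; the framework invokes
// HeadlinesCompleted once OutstandingOperations reaches zero.
public class NewsController : AsyncController
{
    public void HeadlinesAsync()
    {
        AsyncManager.OutstandingOperations.Increment();

        Task.Factory.StartNew(() =>
        {
            // Hypothetical external call standing in for real I/O.
            string result = NewsService.FetchHeadlines();

            AsyncManager.Parameters["headlines"] = result;
            AsyncManager.OutstandingOperations.Decrement(); // signals completion
        });
    }

    public ActionResult HeadlinesCompleted(string headlines)
    {
        return Content(headlines);
    }
}
```

Note that `Task.Factory.StartNew` merely moves the work to another pool thread; to get the full benefit the article describes, the work inside should use a genuinely asynchronous API (for example `SqlCommand.BeginExecuteReader`) so that no thread is blocked while the I/O is pending.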
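The measurement tool described above (repeated requests, rolling average of the last few response times) can be approximated with a short console sketch. This is my own illustration under stated assumptions, not Steve Sanderson's actual source code, and a real test would also ramp up the number of concurrent threads over time:

```csharp
using System;
using System.Collections.Generic;
using System.Diagnostics;
using System.Linq;
using System.Net;

// Rolling average over the last N samples, as the tool computes
// over the last few response times.
public class RollingAverage
{
    private readonly int _window;
    private readonly Queue<double> _samples = new Queue<double>();

    public RollingAverage(int window) { _window = window; }

    public void Add(double value)
    {
        _samples.Enqueue(value);
        if (_samples.Count > _window) _samples.Dequeue();
    }

    public double Average => _samples.Average();
}

public static class LoadTester
{
    // Requests the same URL over and over, printing the rolling
    // average of the last 10 response times.
    public static void Run(string url, int requests)
    {
        var avg = new RollingAverage(10);
        using (var client = new WebClient())
        {
            for (int i = 0; i < requests; i++)
            {
                var sw = Stopwatch.StartNew();
                client.DownloadString(url);
                sw.Stop();
                avg.Add(sw.Elapsed.TotalMilliseconds);
                Console.WriteLine("avg of last 10: {0:F0} ms", avg.Average);
            }
        }
    }
}
```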