The maximum number of connections is impacted by certain limits on both the client and server sides, albeit a little differently.

**On the client side:** increase the ephemeral port range, and decrease the `tcp_fin_timeout`.

To find out the default values:

```
sysctl net.ipv4.ip_local_port_range
sysctl net.ipv4.tcp_fin_timeout
```

The ephemeral port range defines the maximum number of outbound sockets a host can create from a particular IP address. The `fin_timeout` defines the minimum time these sockets will stay in the `TIME_WAIT` state (unusable after being used once). Usual system defaults are:

- `net.ipv4.ip_local_port_range = 32768 61000`
- `net.ipv4.tcp_fin_timeout = 60`

This basically means your system cannot consistently guarantee more than `(61000 - 32768) / 60 = 470` sockets per second. If you are not happy with that, you could begin by increasing the port range. Setting the range to `15000 61000` is pretty common these days. You could further increase availability by decreasing the `fin_timeout`. If you do both, you should more readily see over 1500 outbound connections per second: `(61000 - 15000) / 30 ≈ 1533`.

**To change the values:**

```
sysctl net.ipv4.ip_local_port_range="15000 61000"
sysctl net.ipv4.tcp_fin_timeout=30
```

The above should not be read as the factors that determine how many outbound connections your system can make per second; rather, these factors affect its ability to handle concurrent connections in a sustainable manner over long periods of activity.

Default sysctl values on a typical Linux box for `tcp_tw_recycle` and `tcp_tw_reuse` would be

```
net.ipv4.tcp_tw_recycle=0
net.ipv4.tcp_tw_reuse=0
```

These do not allow a connection from a "used" socket (one in a wait state) and force the sockets to last the complete `TIME_WAIT` cycle. I recommend setting:

```
sysctl net.ipv4.tcp_tw_recycle=1
sysctl net.ipv4.tcp_tw_reuse=1
```

This allows fast cycling of sockets in the `TIME_WAIT` state and re-using them. Before making this change, make sure it does not conflict with the protocols used by the application that needs these sockets. Be aware that `tcp_tw_recycle` breaks connections from clients behind NAT and was removed from the kernel in Linux 4.12, so prefer `tcp_tw_reuse` on modern systems.

**On the server side:** the `net.core.somaxconn` value has an important role. It limits the maximum number of requests queued on a listen socket. If you are sure of your server application's capability, bump it up from the default of 128 to something like 1024. Now you can take advantage of this increase by raising the backlog argument in your application's `listen` call to an equal or higher integer.

```
sysctl net.core.somaxconn=1024
```
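As a sketch of the application-side half of that change, here is a minimal C server that passes a backlog of 1024 to `listen()`, matching the raised `somaxconn`; the port number is an arbitrary illustration, not anything prescribed above.

```c
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <sys/socket.h>

#define BACKLOG 1024  /* keep <= net.core.somaxconn; the kernel silently caps larger values */

int main(void) {
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);  /* illustrative port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) { perror("bind"); return 1; }

    /* The second argument is the listen backlog discussed above. */
    if (listen(fd, BACKLOG) < 0) { perror("listen"); return 1; }

    /* ... accept() loop would go here ... */
    close(fd);
    return 0;
}
```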
The `txqueuelen` parameter of your Ethernet cards also has a role to play. The default value is 1000, so bump it up to 5000 or even more if your system can handle it.

```
ifconfig eth0 txqueuelen 5000
echo "/sbin/ifconfig eth0 txqueuelen 5000" >> /etc/rc.local
```

Similarly, bump up the values for `net.core.netdev_max_backlog` and `net.ipv4.tcp_max_syn_backlog`. Their default values are 1000 and 1024, respectively.

```
sysctl net.core.netdev_max_backlog=2000
sysctl net.ipv4.tcp_max_syn_backlog=2048
```

Now remember to start both your client- and server-side applications with increased FD ulimits in the shell (a programmatic equivalent is sketched at the end of this answer).

Besides the above, one more popular technique used by programmers is to reduce the number of *tcp write* calls. My own preference is to use a buffer into which I push the data I wish to send to the client, and then at appropriate points I write the buffered data out to the actual socket. This technique lets me use large data packets, reduces fragmentation, and lowers my CPU utilization both in user land and at the kernel level.
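To make that buffering technique concrete, here is a minimal sketch (not the author's actual code) of a user-space write buffer that coalesces many small payloads into one `write()` call per flush; the 64 KiB capacity and the helper names are illustrative assumptions.

```c
#include <string.h>
#include <unistd.h>

#define BUF_CAP 65536  /* illustrative capacity: one large write per flush */

struct wbuf {
    int fd;            /* connected socket */
    size_t len;        /* bytes currently buffered */
    char data[BUF_CAP];
};

/* Write the whole buffer out, handling short writes. */
static int wbuf_flush(struct wbuf *b) {
    size_t off = 0;
    while (off < b->len) {
        ssize_t n = write(b->fd, b->data + off, b->len - off);
        if (n < 0) return -1;  /* real code would also handle EINTR/EAGAIN */
        off += (size_t)n;
    }
    b->len = 0;
    return 0;
}

/* Queue data, flushing first if it would not fit. */
static int wbuf_push(struct wbuf *b, const void *p, size_t n) {
    if (n > BUF_CAP) return -1;  /* oversized payloads kept out to keep the sketch simple */
    if (b->len + n > BUF_CAP && wbuf_flush(b) < 0) return -1;
    memcpy(b->data + b->len, p, n);
    b->len += n;
    return 0;
}
```

Each small `write()` pays a syscall and may emit a small TCP segment; coalescing trades a little latency for fewer syscalls and fuller packets, which is exactly the effect described above.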
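As for the FD ulimits mentioned above, the shell route is `ulimit -n <N>` before launching each process; the sketch below, with an illustrative target of 65535, shows the equivalent `setrlimit()` call from inside a program.

```c
#include <stdio.h>
#include <sys/resource.h>

/* Raise the soft RLIMIT_NOFILE (max open file descriptors), the
 * in-process equivalent of running `ulimit -n` in the shell. */
static int raise_fd_limit(rlim_t want) {
    struct rlimit rl;
    if (getrlimit(RLIMIT_NOFILE, &rl) != 0) { perror("getrlimit"); return -1; }
    if (want > rl.rlim_max) want = rl.rlim_max;  /* unprivileged processes cannot exceed the hard limit */
    rl.rlim_cur = want;
    if (setrlimit(RLIMIT_NOFILE, &rl) != 0) { perror("setrlimit"); return -1; }
    return 0;
}

int main(void) {
    if (raise_fd_limit(65535) == 0)  /* illustrative target */
        printf("fd limit raised\n");
    return 0;
}
```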