Most large companies use a variety of techniques to handle the traffic and load on their servers. Roughly speaking:

1. A load balancer sits between the entry point and the actual application servers.
2. A reverse proxy often sits between these to serve static files, pre-computed/rendered views, and other largely static assets.
3. Anycast is used at the DNS level, so that you are routed to the nearest server that handles a given URL.
4. Back pressure is employed so that only a limited number of requests feed through a single pipeline and services don't tip over (a minimal sketch follows at the end of this answer).
5. Memcached, Redis, and the like are used as short-term caches: if a result is going to be roughly the same for about 5 seconds, it can be cached in memory for faster delivery (also sketched below). Some proxies can be configured to read out of these.

If you're really interested, start reading the Netflix tech blog. Take a look at some of the open-source projects they've released, like [Hystrix](https://github.com/Netflix/Hystrix) or [Zuul](https://github.com/Netflix/zuul), and at some of their [videos](http://techblog.netflix.com/2013/12/netflix-presentation-videos-from-aws.html). They make heavy use of proxies and have built some very advanced distributed behavior into them (a short Hystrix example also follows below).

As far as a reverse proxy being a good idea, think in terms of failure. If your service calls another API directly and that API fails, your service fails with it and the failure cascades up to the end user. If the call goes through a reverse proxy instead, the proxy can be configured to (or can even automatically) detect failures and divert traffic to backup servers.

Also think in terms of load. Sometimes a single server can only handle a fraction of the traffic, so the load must be shared across many servers. This is true not just of CPU-capped but also of IO-capped resources (even if the response itself is not what causes the IO cap).

Daisy-chaining like this presents its own special little hell, but it's sometimes unavoidable. The big downside, and what makes it a really bad choice if you can avoid it at all, is the loss of deterministic behavior. Sometimes the stupidest things will bring your servers down, and by stupid I mean really, really dumb stuff that you never thought in a million years might bite you (think server clocks out of sync). You have to start using rolling deploys, take servers down manually or forcefully when they stop responding, and keep those proxy configs in good order.

HTTP/1.1 support can also be an issue: not all reverse proxies adhere to the spec, and some cover only about 50% of it. HAProxy does not do SSL. And if you're on limited hardware, a thread-based proxy can unexpectedly swamp the system with threads.

Finally, adding a proxy is one more thing that will break (not can, will). You have to monitor it just like any other piece of the platform, aggregate its logs, and run mock drills on it too.
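To make the back-pressure point (item 4) concrete, here is a minimal Java sketch: a thread pool with a bounded queue that refuses new work once it is saturated, so requests are shed back to the caller instead of piling up until the service tips over. The class name and the limits are made up for illustration; they are not from any particular library.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.RejectedExecutionException;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;

public final class BoundedRequestPool {
    // At most 8 requests in flight and 100 waiting; anything beyond that is
    // rejected immediately instead of queueing without limit.
    private final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            8, 8, 0L, TimeUnit.MILLISECONDS,
            new ArrayBlockingQueue<>(100),
            new ThreadPoolExecutor.AbortPolicy());

    /** Returns false when the pipeline is full, so the caller can shed load. */
    public boolean trySubmit(Runnable request) {
        try {
            pool.execute(request);
            return true;
        } catch (RejectedExecutionException full) {
            return false; // push back on the caller rather than tipping over
        }
    }
}
```

A caller that gets `false` back can return a 503 or retry later, which is exactly the "don't tip over" behavior described above.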
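The short-term caching idea (item 5) can be sketched in-process. A real deployment would put Memcached or Redis behind an interface like this rather than a local map; the five-second TTL and the class name are only illustrative.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public final class ShortTermCache<K, V> {
    private static final long TTL_MILLIS = 5_000; // "roughly the same result every 5 seconds"

    private static final class Entry<T> {
        final T value;
        final long storedAt;
        Entry(T value, long storedAt) { this.value = value; this.storedAt = storedAt; }
    }

    private final ConcurrentHashMap<K, Entry<V>> entries = new ConcurrentHashMap<>();

    public V get(K key, Supplier<V> compute) {
        long now = System.currentTimeMillis();
        Entry<V> cached = entries.get(key);
        if (cached != null && now - cached.storedAt < TTL_MILLIS) {
            return cached.value;                   // still fresh: skip the expensive work
        }
        V value = compute.get();                   // recompute and refresh the entry
        entries.put(key, new Entry<>(value, now));
        return value;
    }
}

// Usage: cache.get("/popular", () -> renderPopularPage());
```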
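On the failure-cascading point, the Hystrix library linked above exists precisely to stop that cascade at the caller. Below is a rough sketch of how a downstream call is typically wrapped in a `HystrixCommand` so that failures and timeouts return a fallback instead of propagating to the end user. The service URL, the `ProfileCommand` name, and the fallback value are hypothetical; check the Hystrix wiki for the authoritative details.

```java
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;
import java.net.URL;
import java.util.Scanner;

// Wraps one downstream call; repeated failures open the circuit and the
// fallback is served instead, so the outage does not cascade upward.
public class ProfileCommand extends HystrixCommand<String> {
    private final String userId;

    public ProfileCommand(String userId) {
        super(HystrixCommandGroupKey.Factory.asKey("ProfileService"));
        this.userId = userId;
    }

    @Override
    protected String run() throws Exception {
        // Direct call to the downstream API (hypothetical internal URL).
        URL url = new URL("http://profile-service.internal/users/" + userId);
        try (Scanner body = new Scanner(url.openStream(), "UTF-8")) {
            return body.useDelimiter("\\A").next();
        }
    }

    @Override
    protected String getFallback() {
        // Served when run() throws, times out, or the circuit is open.
        return "{}";
    }
}

// Usage: String profile = new ProfileCommand("42").execute();
```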
 
