Your solution will not perform well when all users request the same video. A better approach is to have all videos available on all servers and use a load-balancing server to redirect each request to the server with the lowest number of open feeds.

Note that storage back ends (RAID arrays, SAN) can deliver data at a very high rate, so you can often get away with one storage system for several video servers (i.e. one storage system per N video servers, plus one load balancer, or two if you want failover).

A good solution here is to have a "redirect" command in the protocol:

1. The client asks the load balancer (LB) for a video.
2. The LB tells the client which video server (VS) to use. This is a simple "find the VS with the lowest number of open feeds."
3. The client connects directly to that VS (to avoid all overhead).
4. The VS tells the LB its current number of open feeds (don't use an incremental approach here, to avoid synchronization issues).
5. The VS starts streaming the data to the client.
6. When a client disconnects, the VS tells the LB the new number of feeds.

[EDIT] The main reason to have clients connect directly to the video servers is network throughput. If every VS sent its data to the LB, which then passed it on to the clients, you would be limited by the speed of the LB's single (or dual) network card. With 5 video servers, you get five times the throughput by connecting directly. You can also scale the system easily when more users hammer it: add another video server, plug it into the backbone, and add one entry to the LB's list.
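To make the bookkeeping behind steps 2, 4 and 6 concrete, here is a minimal Python sketch of the LB side. The names (`LoadBalancer`, `pick_server`, `report_feed_count`) and the in-process call interface are assumptions for illustration only; in a real deployment the VS would report its count over the network (HTTP, a small RPC protocol, etc.).

```python
# Hypothetical sketch of the load balancer's bookkeeping for the redirect
# protocol described above. Class and method names are illustrative, not a
# real API; transport (HTTP, sockets) is deliberately left out.

from threading import Lock


class LoadBalancer:
    """Tracks the number of open feeds per video server (VS) and redirects
    each client to the least-loaded one."""

    def __init__(self, video_servers):
        # Every registered VS starts with zero open feeds.
        self._feeds = {vs: 0 for vs in video_servers}
        self._lock = Lock()

    def pick_server(self):
        """Step 2: return the address of the VS with the fewest open feeds.
        The client then connects to that address directly (step 3)."""
        with self._lock:
            return min(self._feeds, key=self._feeds.get)

    def report_feed_count(self, video_server, open_feeds):
        """Steps 4 and 6: a VS reports its *absolute* number of open feeds.
        Using the absolute count instead of +1/-1 deltas means a lost or
        reordered report only leaves the LB's view stale until the next
        report, rather than permanently out of sync."""
        with self._lock:
            self._feeds[video_server] = open_feeds

    def add_server(self, video_server):
        """Scaling out: plug a new VS into the backbone and register it here."""
        with self._lock:
            self._feeds.setdefault(video_server, 0)


if __name__ == "__main__":
    lb = LoadBalancer(["vs1.example:8000", "vs2.example:8000"])

    # A client asks for a video (step 1) and is told which VS to use.
    target = lb.pick_server()
    print("client redirected to", target)

    # That VS starts streaming and reports its new load (step 4).
    lb.report_feed_count(target, 1)

    # The next client is sent to the other, now less-loaded server.
    print("next client redirected to", lb.pick_server())
```

The example only demonstrates the least-loaded selection and the absolute feed-count reporting; failover between two LBs, as mentioned above, would sit on top of this.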
 
