<ul> <li><strong>Dynamic</strong> linking can <strong>reduce total resource consumption</strong> (if more than one process shares the same library (where "the same" includes the version, of course)). I believe this is the argument that drives its presence in most environments. Here "resources" includes disk space, RAM, and cache space. Of course, if your dynamic linker is insufficiently flexible there is a risk of <a href="http://en.wikipedia.org/wiki/DLL_Hell" rel="noreferrer">DLL hell</a>.</li> <li><strong>Dynamic</strong> linking means that bug fixes and upgrades to libraries <strong>propagate</strong> to improve <em>your</em> product without requiring you to ship anything.</li> <li><strong>Plugins</strong> always call for <strong>dynamic</strong> linking.</li> <li><strong>Static</strong> linking means you can know the code will run in very <strong>limited environments</strong> (early in the boot process, or in rescue mode).</li> <li><strong>Static</strong> linking can make binaries <strong>easier to distribute</strong> to diverse user environments (at the cost of shipping a larger, more resource-hungry program).</li> <li><strong>Static</strong> linking may allow slightly <strong>faster startup</strong> times, but this depends to some degree on both the size and complexity of your program <em>and</em> on the details of the OS's loading strategy.</li> </ul> <hr> <p>Some edits to include the very relevant suggestions in the comments and in other answers. I'd like to note that the way you come down on this depends a lot on the environment you plan to run in. Minimal embedded systems may not have enough resources to support dynamic linking at all. Slightly larger small systems may well support dynamic linking, because their memory is small enough to make the RAM savings from shared libraries very attractive.
Full-blown consumer PCs have, as Mark notes, enormous resources, and you can probably let the convenience issues drive your thinking on the matter.</p> <hr> <p>To address the performance and efficiency issues: <strong>it depends</strong>.</p> <p>Classically, dynamic libraries require some kind of glue layer, which often means double dispatch or an extra layer of indirection in function addressing, and can cost a little speed (but is function-call time actually a big part of your running time?).</p> <p>However, if you are running multiple processes which all call the same library a lot, you can end up saving cache lines (and thus winning on running performance) by using dynamic linking relative to static linking. (Unless modern OSes are smart enough to notice identical segments in statically linked binaries. Seems hard; does anyone know?)</p> <p>Another issue: loading time. You pay loading costs at some point. When you pay this cost depends on how the OS works as well as what linking you use. Maybe you'd rather put off paying it until you know you need it.</p> <p>Note that static-vs-dynamic linking is traditionally <em>not</em> an optimization issue, because both involve separate compilation down to object files. However, this is not required: a compiler can, in principle, "compile" "static libraries" to a digested AST form initially, and "link" them by adding those ASTs to the ones generated for the main code, thus enabling global optimization. None of the systems I use do this, so I can't comment on how well it works.</p> <p>The way to answer performance questions is <em>always</em> by testing (and to use a test environment as much like the deployment environment as possible).</p>