[This GHC Trac page](http://hackage.haskell.org/trac/ghc/wiki/Commentary/Compiler/HscMain) also explains the passes fairly well. [This page](http://hackage.haskell.org/trac/ghc/wiki/Commentary/Compiler/OptOrdering) explains the optimization ordering, though, like the majority of the Trac wiki, it is out of date.

For specifics, the best thing to do is probably to look at how a specific program is compiled. The best way to see which optimizations are being performed is to compile the program verbosely, using the `-v` flag. Taking as an example the first piece of Haskell I could find on my computer:

```
Glasgow Haskell Compiler, Version 7.4.2, stage 2 booted by GHC version 7.4.1
Using binary package database: /usr/lib/ghc-7.4.2/package.conf.d/package.cache
wired-in package ghc-prim mapped to ghc-prim-0.2.0.0-7d3c2c69a5e8257a04b2c679c40e2fa7
wired-in package integer-gmp mapped to integer-gmp-0.4.0.0-af3a28fdc4138858e0c7c5ecc2a64f43
wired-in package base mapped to base-4.5.1.0-6e4c9bdc36eeb9121f27ccbbcb62e3f3
wired-in package rts mapped to builtin_rts
wired-in package template-haskell mapped to template-haskell-2.7.0.0-2bd128e15c2d50997ec26a1eaf8b23bf
wired-in package dph-seq not found.
wired-in package dph-par not found.
Hsc static flags: -static
*** Chasing dependencies:
Chasing modules from: *SleepSort.hs
Stable obj: [Main]
Stable BCO: []
Ready for upsweep
  [NONREC
      ModSummary {
         ms_hs_date = Tue Oct 18 22:22:11 CDT 2011
         ms_mod = main:Main,
         ms_textual_imps = [import (implicit) Prelude, import Control.Monad,
                            import Control.Concurrent, import System.Environment]
         ms_srcimps = []
      }]
*** Deleting temp files:
Deleting: 
compile: input file SleepSort.hs
Created temporary directory: /tmp/ghc4784_0
*** Checking old interface for main:Main:
[1 of 1] Compiling Main             ( SleepSort.hs, SleepSort.o )
*** Parser:
*** Renamer/typechecker:
*** Desugar:
Result size of Desugar (after optimization) = 79
*** Simplifier:
Result size of Simplifier iteration=1 = 87
Result size of Simplifier iteration=2 = 93
Result size of Simplifier iteration=3 = 83
Result size of Simplifier = 83
*** Specialise:
Result size of Specialise = 83
*** Float out(FOS {Lam = Just 0, Consts = True, PAPs = False}):
Result size of Float out(FOS {Lam = Just 0, Consts = True, PAPs = False}) = 95
*** Float inwards:
Result size of Float inwards = 95
*** Simplifier:
Result size of Simplifier iteration=1 = 253
Result size of Simplifier iteration=2 = 229
Result size of Simplifier = 229
*** Simplifier:
Result size of Simplifier iteration=1 = 218
Result size of Simplifier = 218
*** Simplifier:
Result size of Simplifier iteration=1 = 283
Result size of Simplifier iteration=2 = 226
Result size of Simplifier iteration=3 = 202
Result size of Simplifier = 202
*** Demand analysis:
Result size of Demand analysis = 202
*** Worker Wrapper binds:
Result size of Worker Wrapper binds = 202
*** Simplifier:
Result size of Simplifier = 202
*** Float out(FOS {Lam = Just 0, Consts = True, PAPs = True}):
Result size of Float out(FOS {Lam = Just 0, Consts = True, PAPs = True}) = 210
*** Common sub-expression:
Result size of Common sub-expression = 210
*** Float inwards:
Result size of Float inwards = 210
*** Liberate case:
Result size of Liberate case = 210
*** Simplifier:
Result size of Simplifier iteration=1 = 206
Result size of Simplifier = 206
*** SpecConstr:
Result size of SpecConstr = 206
*** Simplifier:
Result size of Simplifier = 206
*** Tidy Core:
Result size of Tidy Core = 206
writeBinIface: 4 Names
writeBinIface: 28 dict entries
*** CorePrep:
Result size of CorePrep = 224
*** Stg2Stg:
*** CodeGen:
*** CodeOutput:
*** Assembler:
'/usr/bin/gcc' '-fno-stack-protector' '-Wl,--hash-size=31' '-Wl,--reduce-memory-overheads' '-I.' '-c' '/tmp/ghc4784_0/ghc4784_0.s' '-o' 'SleepSort.o'
Upsweep completely successful.
*** Deleting temp files:
Deleting: /tmp/ghc4784_0/ghc4784_0.c /tmp/ghc4784_0/ghc4784_0.s
Warning: deleting non-existent /tmp/ghc4784_0/ghc4784_0.c
link: linkables are ...
LinkableM (Sat Sep 29 20:21:02 CDT 2012) main:Main
   [DotO SleepSort.o]
Linking SleepSort ...
*** C Compiler:
'/usr/bin/gcc' '-fno-stack-protector' '-Wl,--hash-size=31' '-Wl,--reduce-memory-overheads' '-c' '/tmp/ghc4784_0/ghc4784_0.c' '-o' '/tmp/ghc4784_0/ghc4784_0.o' '-DTABLES_NEXT_TO_CODE' '-I/usr/lib/ghc-7.4.2/include'
*** C Compiler:
'/usr/bin/gcc' '-fno-stack-protector' '-Wl,--hash-size=31' '-Wl,--reduce-memory-overheads' '-c' '/tmp/ghc4784_0/ghc4784_0.s' '-o' '/tmp/ghc4784_0/ghc4784_1.o' '-DTABLES_NEXT_TO_CODE' '-I/usr/lib/ghc-7.4.2/include'
*** Linker:
'/usr/bin/gcc' '-fno-stack-protector' '-Wl,--hash-size=31' '-Wl,--reduce-memory-overheads' '-o' 'SleepSort' 'SleepSort.o' '-L/usr/lib/ghc-7.4.2/base-4.5.1.0' '-L/usr/lib/ghc-7.4.2/integer-gmp-0.4.0.0' '-L/usr/lib/ghc-7.4.2/ghc-prim-0.2.0.0' '-L/usr/lib/ghc-7.4.2' '/tmp/ghc4784_0/ghc4784_0.o' '/tmp/ghc4784_0/ghc4784_1.o' '-lHSbase-4.5.1.0' '-lHSinteger-gmp-0.4.0.0' '-lgmp' '-lHSghc-prim-0.2.0.0' '-lHSrts' '-lm' '-lrt' '-ldl' '-u' 'ghczmprim_GHCziTypes_Izh_static_info' '-u' 'ghczmprim_GHCziTypes_Czh_static_info' '-u' 'ghczmprim_GHCziTypes_Fzh_static_info' '-u' 'ghczmprim_GHCziTypes_Dzh_static_info' '-u' 'base_GHCziPtr_Ptr_static_info' '-u' 'base_GHCziWord_Wzh_static_info' '-u' 'base_GHCziInt_I8zh_static_info' '-u' 'base_GHCziInt_I16zh_static_info' '-u' 'base_GHCziInt_I32zh_static_info' '-u' 'base_GHCziInt_I64zh_static_info' '-u' 'base_GHCziWord_W8zh_static_info' '-u' 'base_GHCziWord_W16zh_static_info' '-u' 'base_GHCziWord_W32zh_static_info' '-u' 'base_GHCziWord_W64zh_static_info' '-u' 'base_GHCziStable_StablePtr_static_info' '-u' 'ghczmprim_GHCziTypes_Izh_con_info' '-u' 'ghczmprim_GHCziTypes_Czh_con_info' '-u' 'ghczmprim_GHCziTypes_Fzh_con_info' '-u' 'ghczmprim_GHCziTypes_Dzh_con_info' '-u' 'base_GHCziPtr_Ptr_con_info' '-u' 'base_GHCziPtr_FunPtr_con_info' '-u' 'base_GHCziStable_StablePtr_con_info' '-u' 'ghczmprim_GHCziTypes_False_closure' '-u' 'ghczmprim_GHCziTypes_True_closure' '-u' 'base_GHCziPack_unpackCString_closure' '-u' 'base_GHCziIOziException_stackOverflow_closure' '-u' 'base_GHCziIOziException_heapOverflow_closure' '-u' 'base_ControlziExceptionziBase_nonTermination_closure' '-u' 'base_GHCziIOziException_blockedIndefinitelyOnMVar_closure' '-u' 'base_GHCziIOziException_blockedIndefinitelyOnSTM_closure' '-u' 'base_ControlziExceptionziBase_nestedAtomically_closure' '-u' 'base_GHCziWeak_runFinalizzerBatch_closure' '-u' 'base_GHCziTopHandler_flushStdHandles_closure' '-u' 'base_GHCziTopHandler_runIO_closure' '-u' 'base_GHCziTopHandler_runNonIO_closure' '-u' 'base_GHCziConcziIO_ensureIOManagerIsRunning_closure' '-u' 'base_GHCziConcziSync_runSparks_closure' '-u' 'base_GHCziConcziSignal_runHandlers_closure'
link: done
*** Deleting temp files:
Deleting: /tmp/ghc4784_0/ghc4784_1.o /tmp/ghc4784_0/ghc4784_0.s /tmp/ghc4784_0/ghc4784_0.o /tmp/ghc4784_0/ghc4784_0.c
*** Deleting temp dirs:
Deleting: /tmp/ghc4784_0
```

Looking from the first `*** Simplifier:` to the last, where all the optimization phases happen, we see quite a lot.

First of all, the Simplifier runs between almost all the other phases. This makes writing many passes much easier: rather than propagating their changes by hand, many optimizations simply create rewrite rules and let the Simplifier apply them. The Simplifier itself encompasses a number of simple optimizations, including inlining and fusion. The main limitation that I know of is that GHC refuses to inline recursive functions, and that things have to be named correctly for fusion to work.
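To make the rewrite-rule point concrete, here is a minimal sketch of a user-visible `RULES` pragma of the kind the Simplifier applies. The module name `Fuse`, the function `mapU`, and the rule name are made up for illustration (the real list-fusion rules in `base` are built around `foldr`/`build`), and rules only fire when optimisation is on (`-O` or above):

```haskell
module Fuse (pipeline) where

-- A local copy of map, so the rewrite rule below is entirely ours.
mapU :: (a -> b) -> [a] -> [b]
mapU _ []       = []
mapU f (x : xs) = f x : mapU f xs
{-# NOINLINE mapU #-}  -- keep calls to mapU recognizable so the rule can match

-- A map/map fusion rule: wherever the Simplifier sees the left-hand side,
-- it rewrites it to the right-hand side, eliminating the intermediate list.
{-# RULES
"mapU/mapU" forall f g xs. mapU f (mapU g xs) = mapU (f . g) xs
  #-}

-- With -O, the two traversals below should be fused into one.
pipeline :: [Int] -> [Int]
pipeline = mapU (+ 1) . mapU (* 2)
```

This is also why naming matters for fusion: if `mapU` were inlined before the rule fired, the left-hand side would no longer be syntactically present and the rule could not match.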
Next, we see a full list of all the optimizations performed:

- **Specialise**

  The basic idea of specialization is to remove polymorphism and overloading by identifying the places where a function is called and creating versions of the function that aren't polymorphic: they are specific to the types they are called with. You can also ask the compiler to do this with the `SPECIALISE` pragma (a short sketch of the pragma follows this list). As an example, take a factorial function:

  ```haskell
  fac :: (Num a, Eq a) => a -> a
  fac 0 = 1
  fac n = n * fac (n - 1)
  ```

  As the compiler doesn't know any properties of the multiplication that is to be used, it cannot optimize this at all. If, however, it sees that the function is used on an `Int`, it can create a new version, differing only in the type:

  ```haskell
  fac_Int :: Int -> Int
  fac_Int 0 = 1
  fac_Int n = n * fac_Int (n - 1)
  ```

  Next, the rules mentioned below can fire, and you end up with something working on unboxed `Int`s, which is much faster than the original. Another way to look at specialisation is as partial application on type class dictionaries and type variables.

  The [source](http://www.haskell.org/ghc/dist/current/docs/html/libraries/ghc-7.7.20120902/src/Specialise.html) here has a load of notes in it.

- **Float out**

  EDIT: I apparently misunderstood this before. My explanation has completely changed.

  The basic idea is to move computations that shouldn't be repeated out of functions. For example, suppose we had this:

  ```haskell
  \x -> let y = expensive in x + y
  ```

  In the above lambda, every time the function is called, `y` is recomputed. A better function, which floating out produces, is

  ```haskell
  let y = expensive in \x -> x + y
  ```

  To facilitate the process, other transformations may be applied. For example, this happens:

  ```haskell
  \x -> x + f 2
  \x -> x + let f_2 = f 2 in f_2
  \x -> let f_2 = f 2 in x + f_2
  let f_2 = f 2 in \x -> x + f_2
  ```

  Again, repeated computation is saved.

  The [source](http://www.haskell.org/ghc/dist/current/docs/html/libraries/ghc-7.7.20120902/src/FloatOut.html#floatOutwards) is very readable in this case.

  At the moment, bindings between two adjacent lambdas are not floated. For example, this does not happen:

  ```haskell
  \x y -> let t = x + x in ...
  ```

  going to

  ```haskell
  \x -> let t = x + x in \y -> ...
  ```
- **Float inwards**

  Quoting the source code, the main purpose of `floatInwards` is floating into branches of a case, so that we don't allocate things, save them on the stack, and then discover that they aren't needed in the chosen branch.

  As an example, suppose we had this expression:

  ```haskell
  let x = big in
    case v of
      True  -> x + 1
      False -> 0
  ```

  If `v` evaluates to `False`, then by allocating `x`, which is presumably some big thunk, we have wasted time and space. Floating inwards fixes this, producing

  ```haskell
  case v of
    True  -> let x = big in x + 1
    False -> let x = big in 0
  ```

  which the simplifier subsequently replaces with

  ```haskell
  case v of
    True  -> big + 1
    False -> 0
  ```

  [This paper](http://research.microsoft.com/pubs/67060/float.ps.gz), although covering other topics, gives a fairly clear introduction. Note that despite their names, floating in and floating out don't get into an infinite loop, for two reasons:

  1. Float in floats lets into `case` statements, while float out deals with functions.
  2. There is a fixed order of passes, so they shouldn't alternate infinitely.
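As promised above, here is a minimal sketch of requesting specialisation by hand. The module name `FacSpec` is made up for illustration; the `SPECIALISE` pragma itself is standard GHC and, with `-O`, should produce roughly the `fac_Int` copy written out above and route `Int`-typed calls to it:

```haskell
module FacSpec (fac) where

-- The polymorphic factorial from above.
fac :: (Num a, Eq a) => a -> a
fac 0 = 1
fac n = n * fac (n - 1)

-- Ask GHC to generate a monomorphic copy of fac at Int and to rewrite
-- calls at that type to use it.
{-# SPECIALISE fac :: Int -> Int #-}
```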
- **Demand analysis**

  Demand analysis, or strictness analysis, is less of a transformation and more, as the name suggests, of an information-gathering pass. The compiler finds functions that always evaluate their arguments (or at least some of them), and passes those arguments using call-by-value instead of call-by-need. Since you get to evade the overheads of thunks, this is often much faster. Many performance problems in Haskell arise from either this pass failing or code simply not being strict enough. A simple example is the difference between using `foldr`, `foldl`, and `foldl'` to sum a list of integers: the first causes a stack overflow, the second causes a heap overflow, and the last runs fine, because of strictness (a small sketch contrasting the three folds follows this list). This is probably the easiest to understand and best documented of all of these passes. I believe that polymorphism and CPS code often defeat it.

- **Worker Wrapper binds**

  The basic idea of the worker/wrapper transformation is to do a tight loop on a simple structure, converting to and from that structure at the ends. For example, take this function, which calculates the factorial of a number:

  ```haskell
  factorial :: Int -> Int
  factorial 0 = 1
  factorial n = n * factorial (n - 1)
  ```

  Using the definition of `Int` in GHC, we have

  ```haskell
  factorial :: Int -> Int
  factorial (I# 0#) = I# 1#
  factorial (I# n#) =
      I# (n# *# case factorial (I# (n# -# 1#)) of
                  I# down# -> down#)
  ```

  Notice how the code is covered in `I#`s? We can remove them by doing this:

  ```haskell
  factorial :: Int -> Int
  factorial (I# n#) = I# (factorial# n#)

  factorial# :: Int# -> Int#
  factorial# 0# = 1#
  factorial# n# = n# *# factorial# (n# -# 1#)
  ```

  Although this specific example could also have been handled by SpecConstr, the worker/wrapper transformation is very general in the things it can do.

- **Common sub-expression**

  This is another really simple optimization that is very effective, like strictness analysis. The basic idea is that if you have two expressions that are the same, they will have the same value. For example, if `fib` is a Fibonacci number calculator, CSE will transform

  ```haskell
  fib x + fib x
  ```

  into

  ```haskell
  let fib_x = fib x in fib_x + fib_x
  ```

  which cuts the computation in half. Unfortunately, this can occasionally get in the way of other optimizations. Another problem is that the two expressions have to be in the same place and that they have to be *syntactically* the same, not merely equal in value. For example, CSE won't fire in the following code without a bunch of inlining:

  ```haskell
  x = (1 + (2 + 3)) + ((1 + 2) + 3)
  y = f x
  z = g (f x) y
  ```

  However, if you compile via LLVM, you may get some of this combined, due to its Global Value Numbering pass.

- **Liberate case**

  This seems to be a terribly documented transformation, besides the fact that it can cause code explosion. Here is a reformatted (and slightly rewritten) version of the little documentation I found:

  This module walks over `Core`, and looks for `case` on free variables. The criterion is: if there is a `case` on a free variable on the route to the recursive call, then the recursive call is replaced with an unfolding. For example, in

  ```haskell
  f = \ t -> case v of V a b -> a : f t
  ```

  the inner `f` is replaced, to make

  ```haskell
  f = \ t -> case v of V a b ->
        a : (letrec f = \ t -> case v of V a b -> a : f t
             in f) t
  ```

  Note the need for shadowing. Simplifying, we get

  ```haskell
  f = \ t -> case v of V a b ->
        a : (letrec f = \ t -> a : f t in f t)
  ```

  This is better code, because `a` is free inside the inner `letrec`, rather than needing projection from `v`. Note that this deals with *free variables*, unlike SpecConstr, which deals with *arguments* that are of known form.

  See below for more information about SpecConstr.

- **SpecConstr**

  This transforms programs like

  ```haskell
  f (Left x) y = somethingComplicated1
  f (Right x) y = somethingComplicated2
  ```

  into

  ```haskell
  f_Left x y = somethingComplicated1
  f_Right x y = somethingComplicated2

  {-# INLINE f #-}
  f (Left x) y = f_Left x y
  f (Right x) y = f_Right x y
  ```

  As an extended example, take this definition of `last`:

  ```haskell
  last [] = error "last: empty list"
  last (x:[]) = x
  last (x:x2:xs) = last (x2:xs)
  ```

  We first transform it to

  ```haskell
  last_nil = error "last: empty list"
  last_cons x [] = x
  last_cons x (x2:xs) = last (x2:xs)

  {-# INLINE last #-}
  last [] = last_nil
  last (x : xs) = last_cons x xs
  ```

  Next, the simplifier runs, and we have

  ```haskell
  last_nil = error "last: empty list"
  last_cons x [] = x
  last_cons x (x2:xs) = last_cons x2 xs

  {-# INLINE last #-}
  last [] = last_nil
  last (x : xs) = last_cons x xs
  ```

  Note that the program is now faster, as we are not repeatedly boxing and unboxing the front of the list. Also note that the inlining is crucial, as it allows the new, more efficient definitions to actually be used, as well as making recursive definitions better.

  SpecConstr is controlled by a number of heuristics. The ones mentioned in the paper are as follows:

  1. The lambdas are explicit and the arity is `a`.
  2. The right-hand side is "sufficiently small," something controlled by a flag.
  3. The function is recursive, and the specializable call is used in the right-hand side.
  4. All of the arguments to the function are present.
  5. At least one of the arguments is a constructor application.
  6. That argument is case-analysed somewhere in the function.

  However, the heuristics have almost certainly changed. In fact, the paper mentions an alternative sixth heuristic: specialise on an argument `x` only if `x` is *only* scrutinised by a `case`, and is not passed to an ordinary function, or returned as part of the result.
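As mentioned under demand analysis, here is a minimal sketch of the three folds (the names `sumR`, `sumL`, and `sumL'` are made up; `foldl'` comes from `Data.List`). The exact list length at which the lazy versions fall over depends on your RTS limits, and at `-O` the strictness analyser often rescues `foldl` on its own, which is exactly the point of the pass:

```haskell
module Main where

import Data.List (foldl')

-- (+) needs both arguments, so foldr must recurse to the end of the list
-- before it can add anything; the pending additions pile up on the stack.
sumR :: [Int] -> Int
sumR = foldr (+) 0

-- Without strictness analysis this builds one long chain of thunks
-- ((0 + 1) + 2) + ... on the heap before anything is evaluated.
sumL :: [Int] -> Int
sumL = foldl (+) 0

-- Forces the accumulator at every step, so it runs in constant space
-- whether or not the strictness analyser helps.
sumL' :: [Int] -> Int
sumL' = foldl' (+) 0

main :: IO ()
main = print (sumL' [1 .. 10000000])
```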
This was a very small file (12 lines), so it possibly didn't trigger that many optimizations (though I think it did them all). This also doesn't tell you why GHC picked those passes and why it put them in that order.
 
