    <p>Well, I think type inference <em>is</em> in Java for <em>historical</em> reasons mostly: as befits a language with strong legacy constraints, Java improvements are made cautiously &amp; incrementally (as the <a href="http://jcp.org/" rel="nofollow">JCP</a> shows, even though some <a href="http://mail.openjdk.java.net/pipermail/coin-dev/2009-February/000009.html" rel="nofollow">improvements</a> of type inference manage to <a href="http://blogs.oracle.com/darcy/entry/project_coin_final_five" rel="nofollow">go through</a>). With generics, the long-standing <a href="http://en.wikipedia.org/wiki/Generic_Java" rel="nofollow">GJ</a> implementation was thoroughly evaluated before inclusion in Java 5.</p> <blockquote> <p>Prior to the release of Java 5, there was no type inference in Java. (...) When generics (...) were introduced in Java 5, the language retained this requirement for variables, methods, and allocations. But the introduction of polymorphic methods (parameterized by type) dictated that either (i) the programmer provide the method type arguments at every polymorphic method call site or (ii) the language support the inference of method type arguments. To avoid creating an additional clerical burden for programmers, the designers of Java 5 elected to perform type inference to determine the type arguments for polymorphic method calls. (<a href="http://portal.acm.org/citation.cfm?id=1449804" rel="nofollow">source</a>)</p> </blockquote> <p>But that does not mean that there is a strong culture for pervasive type inference in Java. As per <a href="http://java.sun.com/docs/books/jls/third_edition/html/expressions.html#15.12.2.7" rel="nofollow">the spec</a>:</p> <blockquote> <p>Note also that type inference does not affect soundness in any way. If the types inferred are nonsensical, the invocation will yield a type error. The type inference algorithm should be viewed as a heuristic, designed to perform well in practice. 
If it fails to infer the desired result, explicit type parameters may be used instead.</p> </blockquote> <p><strong>I do think more type inference for Java would be a boon</strong> (Scala is already a <a href="http://www.scala-lang.org/node/127" rel="nofollow">very interesting improvement</a> in that direction). IMHO, <em>type inference makes the feedback loop with the type checker less mechanical, while being just as sound</em>, letting you write fewer types, but making you type-check just as much. Since a major benefit of types is to <strong>direct the mental process of program search</strong> ("<a href="https://www.quora.com/Why-do-programming-languages-use-type-systems/answer/Conor-McBride" rel="nofollow">letting you write within the space of well-typed programs, rather than in the space of ascii turds</a>"), this comfort in interaction with the type checker seems invaluable: you get to have a type-checker that verifies you think in well-typed terms <em>and trains you to do so</em>, rather than making you account for it on every line.</p> <p>Now, the <em>stage</em> at which type inference should happen is another question. I think wanting to have "inferencers" separate from the runtime answers legacy concerns: it avoids requiring you to have a type inference algorithm that is always backwards-compatible. But the key then becomes what your standard/major libraries look like: is the source you publish &amp; exchange with others annotated or not?</p> <p>While it's true that an annotated source can, valuably, be type-checked whatever the strength of your inference engine is, I'd still want a type inferencer in the compiler, because it's not only that I don't want to <em>write</em> <code>List&lt;CacheDecoratorFactory&gt; cacheDecoratorFactories = new ArrayList&lt;CacheDecoratorFactory&gt;();</code>, it's that I don't even want to <em>read</em> it. Nor do I want to deal with it when I refactor pre-existing source, for that matter. 
I would need a type "hider" erasing annotations before I interact with source, but if the type inference engine is <em>not</em> complete, the problem of <em>which</em> annotations to erase, and ensuring the erasure followed by type reconstruction is <em>bijective</em> becomes thorny (esp. if your inference engine doesn't return a <a href="http://en.wikipedia.org/wiki/Principal_type" rel="nofollow">principal type</a>) ... If we have to solve a thorny problem <em>anyway</em>, why not make it a good, and as-complete-as-possible type inference algorithm? My hunch is that past a certain level of quality (particularly in the generality of the returned types), the legacy concerns would start to fade.</p>
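<p>A minimal sketch of the inference forms discussed above, for the record. The Java 5 method type-argument inference is what the quoted paper describes; the "diamond" shorthand is the Project Coin improvement linked above (it shipped in Java 7), and <code>var</code> (Java 10) is a later step in the same direction. <code>CacheDecoratorFactory</code> is just the placeholder type from the verbose declaration quoted above.</p>

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

// Placeholder type reused from the answer's example declaration.
class CacheDecoratorFactory {}

public class InferenceDemo {
    static String demo() {
        // Java 5: method type arguments inferred at the call site --
        // no need to write Collections.<String>emptyList().
        List<String> none = Collections.emptyList();

        // Pre-Java 7: type arguments repeated in full on both sides.
        List<CacheDecoratorFactory> verbose =
                new ArrayList<CacheDecoratorFactory>();

        // Java 7 "diamond" (Project Coin): constructor type arguments
        // inferred from the target type of the assignment.
        List<CacheDecoratorFactory> diamond = new ArrayList<>();

        // Java 10: local-variable type inference goes further and
        // infers the variable's type from its initializer.
        var inferred = new ArrayList<CacheDecoratorFactory>();

        return none.size() + " " + verbose.size() + " "
                + diamond.size() + " " + inferred.size();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // four freshly created empty lists
    }
}
```

<p>Note that all four declarations are statically typed exactly the same way; inference only changes how much of the type the programmer has to spell out.</p>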