<p>Let's first see what should happen when we follow various rules.</p>

<p>Following the rules in the C# 4.0 spec:</p>

<ul>
<li>The set D of types to search for user-defined conversions consists of A and B.</li>
<li>The set U of applicable conversions consists of the user-defined implicit conversion from A to B, and the lifted user-defined implicit conversion from A? to B?.</li>
<li>We must now choose the unique best of those two elements of U.</li>
<li>The most specific source type is A?.</li>
<li>The most specific target type is B.</li>
<li>U does not contain any conversion from A? to B, so this is ambiguous.</li>
</ul>

<p>This should make sense. We do not know here whether the conversion should use the lifted conversion, converting from A? to B? and then from B? to B, or whether we should use the unlifted conversion, converting from A? to A and then from A to B.</p>

<hr>

<p>ASIDE:</p>

<p>Upon deeper reflection it is not clear that this is a difference which makes any difference.</p>

<p>Suppose we use the lifted conversion. If A? is non-null then we will convert from A? to A, then A to B, then B to B?, then B? back to B, which will succeed. If A? is null then we will convert from A? directly to a null B?, and then crash when unwrapping that to B.</p>

<p>Suppose we use the unlifted conversion and A? is non-null. Then we convert from A? to A, A to B, done. If A? is null then we crash when unwrapping A? to A.</p>

<p>So in this case both versions of the conversion have exactly the same action, so it doesn't really matter which we choose, and calling this an ambiguity is unfortunate. However, this does not change the fact that <strong>clearly the compiler is not following the letter of the C# 4 specification</strong>.</p>

<hr>

<p>What about the ECMA spec?</p>

<ul>
<li>The set U consists of the user-defined conversion from A to B, but not the lifted conversion, because S (which is A?) and T (which is B) are not both nullable.</li>
</ul>

<p>And now we have only one conversion to choose from, so overload resolution has an easy job of it.</p>

<p>However, this does not imply that the compiler is following the rules of the ECMA spec. <strong>In fact it is following the rules of neither spec.</strong> It is <em>closer</em> to the ECMA spec in that it does not add both operators to the candidate set, and therefore, in this simple case, chooses the only member of the candidate set. But in fact it <em>never</em> adds the lifted operator to the candidate set, <em>even when both the source and target are nullable value types</em>. Moreover, it violates the ECMA spec in numerous other ways that would show up in more complex examples:</p>

<ul>
<li><p>Lifted conversion semantics (that is, inserting a null check before calling the method and skipping it if the operand is null) are allowed on user-defined conversions from a non-nullable struct type to a nullable struct type, pointer type or reference type! That is, if you have a conversion from A to string, then you get a lifted conversion from A? to string that produces a null string if the operand is null. This rule is found nowhere in either spec.</p></li>
<li><p>According to the spec, the types that must encompass or be encompassed by each other are the type of the expression being converted (called S in the specification) and the formal parameter type of the user-defined conversion. The C# compiler actually checks for encompassment of the <em>underlying</em> type of the expression being converted if it is a nullable value type (S0 in the spec). This means that certain conversions which ought to be rejected are instead accepted.</p></li>
<li><p>According to the spec, the best target type should be determined by looking at the set of output types of the various conversions, lifted or unlifted. A user-defined conversion from A to B should be treated as having an output type of B for the purposes of finding the best output type. But if you had a cast from A to B? then the compiler would actually consider B? as the output type of the <em>unlifted</em> conversion for the purposes of determining the most specific output type!</p></li>
</ul>

<p>I could go on (and on and on...) for hours about these and numerous other bugs in user-defined conversion processing. We've barely scratched the surface here; we haven't even gotten into what happens when generics get involved. But I will spare you. The takeaway here is: you cannot narrowly parse any version of the C# specification and from it determine what will happen in a complicated user-defined conversion scenario. The compiler <em>usually does what the user expects</em>, and usually does it for the wrong reasons.</p>

<p>This is both one of the most complicated parts of the specification and the part of the specification that the compiler complies with the least, which is a bad combination. This is deeply unfortunate.</p>

<p>I made a valiant attempt to bring Roslyn into compliance with the specification, but I failed; doing so introduced far, far too many real-world breaking changes. Instead I made Roslyn copy the behavior of the original compiler, just with a much cleaner, easier-to-understand implementation.</p>
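To make the scenario concrete, here is a minimal sketch of the types under discussion (the names A and B follow the question; the struct bodies and the test harness are my own illustration). As the aside notes, both the lifted and unlifted interpretations behave identically at runtime: a non-null A? converts successfully, and a null A? throws when the nullable is unwrapped.

```csharp
using System;

struct B { }

struct A
{
    // The single user-defined conversion from the example.
    public static implicit operator B(A a) => new B();
}

static class Program
{
    public static void Main()
    {
        A? maybe = new A();
        // The C# 4.0 spec calls this cast ambiguous (lifted vs. unlifted),
        // but the compiler accepts it.
        B b = (B)maybe;

        A? none = null;
        bool threw = false;
        try
        {
            _ = (B)none; // either interpretation unwraps a null nullable here
        }
        catch (InvalidOperationException)
        {
            threw = true; // Nullable<T>.Value throws on a null nullable
        }
        if (!threw) throw new Exception("expected a throw on null A?");
        Console.WriteLine("ok");
    }
}
```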
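The first bullet's struct-to-reference-type quirk can also be demonstrated. This sketch assumes the compiler behavior the answer describes (the conversion body and harness are illustrative, not from the original):

```csharp
using System;

struct A
{
    // A user-defined conversion from a non-nullable struct to a reference type.
    public static implicit operator string(A a) => "A converted to string";
}

static class Program
{
    public static void Main()
    {
        A? none = null;
        // Neither spec sanctions lifting here, yet per the answer the compiler
        // inserts a null check: a null A? becomes a null string, not a crash.
        string s = (string)none;
        if (s != null) throw new Exception("expected a null string");
        Console.WriteLine("null A? produced a null string");
    }
}
```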
Comments:

1. What would you think of the idea of allowing type members and conversion operators to indicate via attributes what sorts of transformations should be allowable on them? Language designers seem to work hard formulating rules to discriminate between cases where a transform will make sense and those where it will generate silly code; letting programmers mark members where transforms would generate silly code (so they shouldn't compile) and those where they might seem silly but would actually work (and thus should compile) would ease the need for rules that identify all the tough cases.
2. @supercat: To be frank, I think the fact that there are representation-changing conversions at all is an indication of a design failure. User-defined conversions (and other operators) seem to exist primarily to mitigate the shortcomings of arithmetic in C, which is pretty horrid. C# improves upon a bad situation but doesn't really solve the problem: computers are supposedly good at math, and yet we give you fourteen built-in types, none of which can actually do integer arithmetic without introducing bizarre failure modes.
3. With regard to numerics, I think Java failed to consistently define what implicit or explicit conversions should mean, and .NET followed Java's lead rather than fixing the problem. If the allowable implicit promotions were [SByte|Byte]->[Int16|UInt16]->[Int32|UInt32]->[Int64|UInt64]->Decimal->Double->Single, then for any trio of types which were convertible T->U->V, one could define rounding rules such that converting a T to a U and then a V would be the same as converting a T directly to V. The rule for implicit conversion would not be...
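The remark above about bizarre failure modes in integer arithmetic is easy to make concrete. A small sketch (names and harness mine) showing three different overflow behaviors among C#'s built-in numeric types:

```csharp
using System;

static class Program
{
    public static void Main()
    {
        int max = int.MaxValue;

        // int wraps around silently by default...
        if (unchecked(max + 1) != int.MinValue) throw new Exception("no wrap?");

        // ...unless a checked context turns the same operation into an exception.
        bool threw = false;
        try { _ = checked(max + 1); }
        catch (OverflowException) { threw = true; }
        if (!threw) throw new Exception("checked overflow did not throw");

        // double neither wraps nor throws: it saturates to infinity.
        double big = double.MaxValue;
        if (!double.IsPositiveInfinity(big * 2)) throw new Exception("no infinity?");

        Console.WriteLine("ok");
    }
}
```

Three types, three different answers to "what happens on overflow" — which is the kind of inconsistency the comment is pointing at.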
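The property wished for in the last comment — that converting T->U->V should equal converting T->V directly — fails under the implicit conversions C# actually has, which is why that proposed chain puts Single after Double. A sketch (values mine) where routing through float loses information that the direct conversion keeps:

```csharp
using System;

static class Program
{
    public static void Main()
    {
        int i = 16777217;             // 2^24 + 1: exact in int and double, not in float

        double direct = i;            // int -> double preserves the value exactly
        double viaSingle = (float)i;  // int -> float rounds to 16777216, then widens

        // The two routes disagree, so today's conversions are not transitive.
        if (direct == viaSingle) throw new Exception("expected the routes to differ");
        Console.WriteLine($"{direct} vs {viaSingle}");
    }
}
```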
 
