I don't think there is a way to do this with PyUnit, and I wouldn't want to see PyUnit extended in this way.

I prefer to stick to one assertion per test function ([or more specifically, asserting one concept per test](https://stackoverflow.com/questions/2878717/multiple-asserts-in-a-unit-test/2878912#2878912)) and would rewrite `test_addition()` as four separate test functions. This gives more useful information on failure, *viz*:

```
.FF.
======================================================================
FAIL: test_addition_with_two_negatives (__main__.MathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_addition.py", line 10, in test_addition_with_two_negatives
    self.assertEqual(-1 + (-1), -1)
AssertionError: -2 != -1

======================================================================
FAIL: test_addition_with_two_positives (__main__.MathTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_addition.py", line 6, in test_addition_with_two_positives
    self.assertEqual(1 + 1, 3)  # Failure!
AssertionError: 2 != 3

----------------------------------------------------------------------
Ran 4 tests in 0.000s

FAILED (failures=2)
```

If you decide that this approach isn't for you, you may find [this answer](https://stackoverflow.com/q/1307367/78845) helpful.

## Update

It looks like you are testing two concepts with your updated question, and I would split these into two unit tests. The first is that the parameters are stored on the creation of a new object. This would have two assertions, one for `make` and one for `model`. If the first fails, then that clearly needs to be fixed; whether the second passes or fails is irrelevant at this juncture.

The second concept is more questionable: you're testing whether some default values are initialised. **Why**? It would be more useful to test these values at the point where they are actually used (and if they are not used, then why are they there?).

Both of these tests fail, and both should. When I am unit testing, I am far more interested in failure than in success, as that is where I need to concentrate.

```
FF
======================================================================
FAIL: test_creation_defaults (__main__.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_car.py", line 25, in test_creation_defaults
    self.assertEqual(self.car.wheel_count, 4)  # Failure!
AssertionError: 3 != 4

======================================================================
FAIL: test_creation_parameters (__main__.CarTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_car.py", line 20, in test_creation_parameters
    self.assertEqual(self.car.model, self.model)  # Failure!
AssertionError: 'Ford' != 'Model T'

----------------------------------------------------------------------
Ran 2 tests in 0.000s

FAILED (failures=2)
```
 
