
Continuing in Python's unittest when an assertion fails
EDIT: switched to a better example, and clarified why this is a real problem.

I'd like to write unit tests in Python that continue executing when an assertion fails, so that I can see multiple failures in a single test. For example:

```python
class Car(object):
    def __init__(self, make, model):
        self.make = make
        self.model = make  # Copy and paste error: should be model.
        self.has_seats = True
        self.wheel_count = 3  # Typo: should be 4.

class CarTest(unittest.TestCase):
    def test_init(self):
        make = "Ford"
        model = "Model T"
        car = Car(make=make, model=model)
        self.assertEqual(car.make, make)
        self.assertEqual(car.model, model)  # Failure!
        self.assertTrue(car.has_seats)
        self.assertEqual(car.wheel_count, 4)  # Failure!
```

Here, the purpose of the test is to ensure that Car's `__init__` sets its fields correctly. I could break it up into four methods (and that's often a great idea), but in this case I think it's more readable to keep it as a single method that tests a single concept ("the object is initialized correctly").

If we assume that it's best here not to break up the method, then I have a new problem: I can't see all of the errors at once. When I fix the `model` error and re-run the test, the `wheel_count` error appears. It would save me time to see both errors when I first run the test.

For comparison, Google's C++ unit testing framework [distinguishes between](https://github.com/google/googletest/blob/master/googletest/docs/primer.md#assertions) non-fatal `EXPECT_*` assertions and fatal `ASSERT_*` assertions:

> The assertions come in pairs that test the same thing but have different effects on the current function. ASSERT_* versions generate fatal failures when they fail, and abort the current function. EXPECT_* versions generate nonfatal failures, which don't abort the current function. Usually EXPECT_* are preferred, as they allow more than one failure to be reported in a test. However, you should use ASSERT_* if it doesn't make sense to continue when the assertion in question fails.

Is there a way to get `EXPECT_*`-like behavior in Python's `unittest`? If not in `unittest`, then is there another Python unit test framework that does support this behavior?

---

Incidentally, I was curious about how many real-life tests might benefit from non-fatal assertions, so I looked at some [code examples](https://searchcode.com/?q=%22import%20unittest%22%20unittest.TestCase%20self.assert%20lang:Python) (edited 2014-08-19 to use searchcode instead of Google Code Search, RIP). Out of 10 randomly selected results from the first page, all contained tests that made multiple independent assertions in the same test method. All would benefit from non-fatal assertions.
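For what it's worth, Python 3.4's `unittest` added a `subTest` context manager that seems to get close to the `EXPECT_*` behavior I'm after: a failure inside a `with self.subTest(...)` block is recorded, but the test method keeps running. A minimal sketch of how I'd expect it to apply to the `Car` example above (the subtest labels are just illustrative):

```python
import unittest

class CarTest(unittest.TestCase):
    def test_init(self):
        car = Car(make="Ford", model="Model T")
        # Each subTest reports its own failure without aborting the method,
        # so both the model and wheel_count errors should show up in one run.
        with self.subTest("model is copied from the constructor argument"):
            self.assertEqual(car.model, "Model T")
        with self.subTest("car has four wheels"):
            self.assertEqual(car.wheel_count, 4)
```

That said, this requires Python 3.4+, and it changes how the failures are reported (as separate subtests rather than one test), so I'm still interested in other approaches.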
 
