<p><strong>"every single part of our project has a meaningful test in place"</strong></p>

<p>"Part" is undefined. "Meaningful" is undefined. That's okay, however, since it gets better further on.</p>

<p><strong>"validates the correctness of every component in our system"</strong></p>

<p>"Component" is undefined. But correctness is defined, and we can assign a number of alternatives to "component". You only mention Python, so I'll assume the entire project is pure Python.</p>

<ul>
<li><p>Validates the correctness of every module.</p></li>
<li><p>Validates the correctness of every class of every module.</p></li>
<li><p>Validates the correctness of every method of every class of every module.</p></li>
</ul>

<p>You haven't asked about line-of-code coverage or logic-path coverage, which is a good thing. That way lies madness.</p>

<p><strong>"guarantees that when we change something we can spot unintentional changes to other sub-systems"</strong></p>

<p>This is regression testing, and it's a logical consequence of any unit-testing discipline.</p>

<p>Here's what you can do.</p>

<ol>
<li><p>Enumerate every module. Create a unittest file for each module that is just a <code>unittest.main()</code>. This should be quick -- a few days at most.</p></li>

<li><p>Write a nice top-level unittest script that uses a <code>TestLoader</code> to find all the unit tests in your tests directory and run them through the text test runner. At this point you'll have a lot of files -- one per module -- but no actual test cases. Getting the <code>TestLoader</code> and the top-level script to work will take a few days. It's important to have this overall harness working.</p></li>

<li><p>Prioritize your modules. A good rule is "most heavily reused". Another rule is "highest risk from failure". Another rule is "most bugs reported". This takes a few hours.</p></li>

<li><p>Start at the top of the list. Write a <code>TestCase</code> per class with no real methods or anything. Just a framework. This takes a few days at most.
Be sure the docstring for each <code>TestCase</code> positively identifies the module and class under test and the status of the test code. You can use these docstrings to determine test coverage.</p></li>
</ol>

<p>At this point you'll have two parallel tracks. You have to actually design and implement the tests. Depending on the class under test, you may have to build test databases, mock objects, and all kinds of supporting material.</p>

<ul>
<li><p>Testing Rework. Starting with your highest-priority untested module, start filling in the <code>TestCase</code>s for each class in each module.</p></li>
<li><p>New Development. For <strong>every</strong> code change, a <code>unittest.TestCase</code> must be created for the class being changed.</p></li>
</ul>

<p>The test code follows the same rules as any other code. Everything is checked in at the end of the day. It has to run -- even if the tests don't all pass.</p>

<p>Give the test script to the product manager (not the QA manager -- the actual product manager who is responsible for shipping product to customers) and make sure they run the script every day and find out why it didn't run or why tests are failing.</p>

<p>The actual running of the master test script is not a QA job -- it's everyone's job. Every manager at every level of the organization has to see the daily build-script output. <strong>All</strong> of their jobs have to depend on "all tests passed last night". Otherwise, the product manager will simply pull resources away from testing and you'll have nothing.</p>
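<p>Steps 1 and 2 can be sketched as follows. This is a minimal illustration, not code from an existing project; the <code>tests/</code> directory name and the <code>test_*.py</code> naming pattern are assumptions.</p>

```python
# runtests.py -- a possible top-level harness for step 2. The tests/
# directory name and test_*.py pattern are assumptions, not a standard.
import os
import unittest


def main():
    # Discover every test module under the tests/ directory.
    loader = unittest.TestLoader()
    suite = loader.discover(start_dir="tests", pattern="test_*.py")
    # Run the whole suite through the text test runner.
    result = unittest.TextTestRunner(verbosity=2).run(suite)
    # Exit non-zero on failure so a nightly build can gate on the result.
    raise SystemExit(0 if result.wasSuccessful() else 1)


# Guarded so the script is harmless where no tests/ directory exists yet.
if __name__ == "__main__" and os.path.isdir("tests"):
    main()
```

<p>Each per-module file from step 1 can start as an empty shell whose only body is a <code>unittest.main()</code> call behind a <code>__main__</code> guard; the harness picks it up automatically once real test cases appear.</p>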
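<p>A step-4 skeleton might look like this; the module and class names (<code>billing.Invoice</code> and so on) are invented for illustration.</p>

```python
# Step 4: one TestCase per class, with docstrings identifying the
# module, the class, and the status of the test code. All names here
# are invented for illustration.
import unittest


class TestInvoice(unittest.TestCase):
    """billing.Invoice -- status: skeleton, no tests written yet."""

    def test_placeholder(self):
        self.skipTest("skeleton only")


class TestCustomer(unittest.TestCase):
    """billing.Customer -- status: skeleton, no tests written yet."""

    def test_placeholder(self):
        self.skipTest("skeleton only")


def coverage_report(cases):
    # The docstrings double as a crude coverage report.
    return [case.__doc__ for case in cases]
```

<p>Skipped placeholders keep the suite green while still listing every class; grepping the docstrings (or calling something like <code>coverage_report</code>) shows which tests remain skeletons.</p>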
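<p>When the Testing Rework track reaches a class that needs supporting material, a mock object can stand in for it. A sketch using <code>unittest.mock</code> (the <code>OrderService</code> class and its price database are invented for illustration):</p>

```python
import unittest
from unittest import mock


class OrderService:
    """Toy class under test: it normally needs a price database."""

    def __init__(self, db):
        self.db = db

    def total(self, items):
        return sum(self.db.price_of(item) for item in items)


class TestOrderService(unittest.TestCase):
    """orders.OrderService -- status: implemented."""

    def test_total_uses_database_prices(self):
        # The mock replaces the real database, so no test database
        # has to be built for this particular class.
        db = mock.Mock()
        db.price_of.return_value = 5
        service = OrderService(db)
        self.assertEqual(service.total(["a", "b"]), 10)
        db.price_of.assert_called_with("b")
```

<p>Because this file follows the same naming convention as the others, the top-level script from step 2 will run it along with everything else.</p>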
 
