<?xml version="1.0" encoding="UTF-8"?>
<org.eclipse.epf.uma:TaskDescription xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:org.eclipse.epf.uma="http://www.eclipse.org/epf/uma/1.0.6/uma.ecore" xmlns:epf="http://www.eclipse.org/epf" epf:version="1.5.1" xmlns:rmc="http://www.ibm.com/rmc" rmc:version="7.5.1" xmi:id="_NrbRUqeqEdmKDbQuyzCoqQ" name="run_tests,_0jVEkMlgEdmt3adZL5Dmdw" guid="_NrbRUqeqEdmKDbQuyzCoqQ" changeDate="2007-12-06T02:34:58.000-0800" version="1.0.0">
<keyConsiderations><ul>
<li>
Run all tests as frequently as possible. Ideally, run all test scripts against each build deployed to the test environment. If this is impractical, run regression tests for existing functionality, and focus the test cycle on work items completed in the new build.
</li>
<li>
Even test scripts that are expected to fail provide valuable feedback. However, once a test script is passing, it should not fail against subsequent builds of the solution.
</li>
</ul></keyConsiderations>
<sections xmi:id="_xVhnwKRLEdyLP-jEVj8Kyw" name="Review work items completed in the build" guid="_xVhnwKRLEdyLP-jEVj8Kyw">
<sectionDescription>Review work items that were integrated into the build since the last test cycle. Focus on identifying any previously unimplemented or failing requirements that are now expected to meet the conditions of satisfaction.</sectionDescription>
</sections>
<sections xmi:id="_1L1yAKRLEdyLP-jEVj8Kyw" name="Select Test Scripts" guid="_1L1yAKRLEdyLP-jEVj8Kyw">
<sectionDescription><p>
Select test scripts related to work items completed in the build.
</p>
<p>
Ideally, each test cycle should execute all test scripts, but some types of tests are too time-consuming to include in each test cycle. For manual or time-intensive tests, include the test scripts that will provide the most useful feedback about the maturing solution, based on the objectives of the iteration.
</p>
<p>
Plan with test suites to simplify the process of selecting tests for each build (see <a class="elementLinkWithType" href="./../../core.tech.common.extend_supp/guidances/guidelines/test_suite_D54EEBED.html" guid="_0aDz0MlgEdmt3adZL5Dmdw">Guideline: Test Suite</a>).
</p></sectionDescription>
</sections>
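Suite-based selection can be sketched as a simple filter: run the mandatory suites every cycle, and add any script tagged with a work item completed in the build. The data model here (scripts carrying work-item ids as tags, a `regression` suite name) is an illustrative assumption, not part of the method.

```python
# Illustrative sketch: pick test scripts for a build from suite tags.
# Tag names and the script record shape are hypothetical.

def select_scripts(scripts, build_work_items, always_run=("regression",)):
    """Select scripts tagged with a completed work item or a mandatory suite."""
    selected = []
    for script in scripts:
        tags = set(script["tags"])
        if tags & set(build_work_items) or tags & set(always_run):
            selected.append(script["name"])
    return selected

scripts = [
    {"name": "login_smoke", "tags": ["regression"]},
    {"name": "export_csv", "tags": ["WI-142"]},
    {"name": "legacy_report", "tags": ["WI-007"]},
]
print(select_scripts(scripts, build_work_items=["WI-142"]))
# ['login_smoke', 'export_csv']
```

Time-intensive suites would simply be left out of `always_run` and tagged for the iterations whose objectives they serve.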
<sections xmi:id="_gV408KuSEdmhFZtkg1nakg" name="Execute Test Scripts against the build" guid="_gV408KuSEdmhFZtkg1nakg">
<sectionDescription><p>
Run the tests using the step-by-step procedure in the <a class="elementLink" href="./../../core.tech.common.extend_supp/workproducts/test_script_39A30BA2.html" guid="_0ZfMEMlgEdmt3adZL5Dmdw">Test Script</a>.
</p>
<p>
For automated test scripts, initiate the test execution. Automated test scripts should run in suites, in the correct sequence, and collect results in the Test Log.
</p>
<p>
To execute a manual test script, establish its preconditions, perform the steps while logging results in the <a class="elementLink" href="./../../core.tech.common.extend_supp/workproducts/test_log_CBA2FDF4.html" guid="_0ZlSsMlgEdmt3adZL5Dmdw">Test Log</a>, and perform any teardown steps.
</p></sectionDescription>
</sections>
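A minimal sketch of automated suite execution, assuming each script is a callable and the Test Log is a list of (name, status, detail) entries; both are illustrative stand-ins for whatever harness and log format the project actually uses.

```python
# Sketch: run automated scripts in listed order, collecting a Test Log.
# The callable-per-script model and log tuple format are assumptions.

def run_suite(suite):
    """Run scripts in sequence; record each outcome in the test log."""
    log = []
    for name, script in suite:
        try:
            script()                    # preconditions, steps, teardown inside
            log.append((name, "pass", ""))
        except AssertionError as exc:   # an expected result was not met
            log.append((name, "fail", str(exc)))
        except Exception as exc:        # could not run at all: treat as blocked
            log.append((name, "blocked", str(exc)))
    return log

def create_order():
    pass  # stands in for a real automated script that succeeds

def cancel_order():
    raise AssertionError("order status was OPEN, expected CANCELLED")

test_log = run_suite([("create_order", create_order),
                      ("cancel_order", cancel_order)])
print(test_log)
```

Keeping "fail" (expected result not met) distinct from "blocked" (script could not run) preserves the distinction the analysis step below depends on.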
<sections xmi:id="_sQaC4DO2EduqsLmIADMQ9g" name="Analyze and communicate test results" guid="_sQaC4DO2EduqsLmIADMQ9g">
<sectionDescription><p>
Post the test results in a conspicuous place that is accessible to the entire team, such as a whiteboard or wiki.
</p>
<p>
For each failing test script, analyze the Test Log to identify the cause of the failure. Begin with failing tests that you expected to begin passing against this build; these may indicate newly delivered work items that do not meet the conditions of satisfaction. Then review previously passing test scripts that are now failing; these may indicate regressions in the build.
</p>
<ul>
<li>
If a test failed because the solution does not meet the conditions of satisfaction for the test case, log the issue in the Work Items List. In the work item, clearly identify the observed behavior, the expected behavior, and the steps to reproduce the issue. Note which failing test initially discovered the issue.
</li>
<li>
If a test failed because of a change in the system (such as a user-interface change), but the implementation still meets the conditions of satisfaction in the test case, update the test script to pass with the new implementation.
</li>
<li>
If a test failed because the test script is incorrect (a false negative result), or passed when it was expected to fail (a false positive result), update the test script to correctly implement the conditions of satisfaction in the test case. If the test case for a requirement is invalid, create a change request to modify the conditions of satisfaction for the requirement.
</li>
</ul>
<p>
It is best to update test scripts as quickly and continuously as possible. If the change to a test script is trivial, update the test while analyzing the test results. If the change is a non-trivial task, submit it to the Work Items List so that it can be prioritized against other tasks.
</p></sectionDescription>
</sections>
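The triage rules above amount to a dispatch on the cause of failure. The sketch below makes that explicit; the cause labels, the work-item dictionary shape, and the change-request list are all hypothetical, chosen only to mirror the bullets in this section.

```python
# Sketch of the failure-triage rules as a dispatch on cause.
# Cause labels and record shapes are illustrative assumptions.

def triage(test_name, cause, work_items, change_requests):
    """Route one failing test to the action the analysis rules prescribe."""
    if cause == "solution_defect":
        # Solution does not meet the conditions of satisfaction.
        work_items.append({
            "title": f"Defect found by {test_name}",
            "fields": ["observed behavior", "expected behavior",
                       "steps to reproduce"],
        })
        return "log work item"
    if cause == "system_changed":
        # Implementation still satisfies the test case; script is stale.
        return "update test script"
    if cause == "script_incorrect":
        # False negative/positive: script misimplements the test case.
        return "fix test script"
    if cause == "test_case_invalid":
        # The test case itself is wrong: request a requirements change.
        change_requests.append(
            f"Revise conditions of satisfaction (found by {test_name})")
        return "create change request"
    return "investigate"
```

A trivial script fix would be applied on the spot; a non-trivial one would itself become a work item, as the closing paragraph above recommends.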
<sections xmi:id="_3t6oADO2EduqsLmIADMQ9g" name="Provide feedback to the team" guid="_3t6oADO2EduqsLmIADMQ9g">
<sectionDescription><p>
Summarize the results and provide feedback to the team about how well the build satisfies the requirements planned for the iteration. Focus on measuring progress in terms of passing tests.
</p>
<p>
Explain the results for the test cycle in the context of overall trends:
</p>
<ul>
<li>
How many tests were selected for the build, and what are their statuses (pass, fail, blocked, not run, and so on)?
</li>
<li>
How many issues were added to the Work Items List, and what are their statuses and severities?
</li>
<li>
For test scripts that were blocked or skipped, what are the main reasons (such as known issues)?
</li>
</ul></sectionDescription>
</sections>
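The status counts behind that summary are a one-line aggregation over the Test Log; the (script, status) tuple format below is an assumed shape, not a prescribed one.

```python
# Sketch: summarize one test cycle's statuses for team feedback.
# The (script, status) log format is an illustrative assumption.
from collections import Counter

def summarize(test_log):
    """Count scripts per status for the trend report."""
    counts = Counter(status for _, status in test_log)
    return {"total": sum(counts.values()), **counts}

log = [("a", "pass"), ("b", "pass"), ("c", "fail"), ("d", "blocked")]
print(summarize(log))
# {'total': 4, 'pass': 2, 'fail': 1, 'blocked': 1}
```

Posting these counts per build makes the pass-rate trend across the iteration visible at a glance.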
<purpose>To provide feedback to the team about how well a build satisfies the requirements.</purpose>
</org.eclipse.epf.uma:TaskDescription>