<?xml version="1.0" encoding="UTF-8"?>
<org.eclipse.epf.uma:TaskDescription xmi:version="2.0"
    xmlns:xmi="http://www.omg.org/XMI" xmlns:org.eclipse.epf.uma="http://www.eclipse.org/epf/uma/1.0.5/uma.ecore"
    xmlns:epf="http://www.eclipse.org/epf" epf:version="1.5.0" xmi:id="_NrbRUqeqEdmKDbQuyzCoqQ"
    name="run_tests,_0jVEkMlgEdmt3adZL5Dmdw" guid="_NrbRUqeqEdmKDbQuyzCoqQ" changeDate="2007-06-01T13:21:00.703-0700"
    version="1.0.0">
  <sections xmi:id="_xnl4oA_hEdyi96l9YvaMVA" name="Review work items completed in the build"
      guid="_xnl4oA_hEdyi96l9YvaMVA">
    <sectionDescription>Review the work items that were integrated into the build since the last test cycle. Focus on identifying any previously unimplemented or failing requirements that are now expected to meet their conditions of satisfaction.</sectionDescription>
  </sections>
  <sections xmi:id="_3Fd2kA_hEdyi96l9YvaMVA" name="Select Test Scripts" guid="_3Fd2kA_hEdyi96l9YvaMVA">
    <sectionDescription><p>
    Select the test scripts related to work items completed in the build.
</p>
<p>
    Ideally, each test cycle should execute all test scripts, but some types of tests are too time-consuming to include in
    every test cycle. For manual or time-intensive tests, include the test scripts that will provide the most useful feedback
    about the maturing solution, based on the objectives of the iteration.
</p>
<p>
    Use test suites to simplify the process of selecting tests for each build (see <a class="elementLinkWithType" href="./../../openup/guidances/guidelines/test_suite_D54EEBED.html" guid="_0aDz0MlgEdmt3adZL5Dmdw">Guideline: Test Suite</a>).
</p></sectionDescription>
  </sections>
  <sections xmi:id="_gV408KuSEdmhFZtkg1nakg" name="Execute Test Scripts against the build"
      guid="_gV408KuSEdmhFZtkg1nakg">
    <sectionDescription><p>
    Run the tests using the step-by-step procedure in the <a class="elementLink" href="./../../openup/workproducts/test_script_39A30BA2.html" guid="_0ZfMEMlgEdmt3adZL5Dmdw">Test Script</a>.
</p>
<p>
    For automated test scripts, initiate the test execution. Automated test scripts should run in suites, in the
    correct sequence, and collect their results in the Test Log.
</p>
<p>
    To execute a manual test script, establish its preconditions, perform the steps while logging results in the <a class="elementLink" href="./../../openup/workproducts/test_log_CBA2FDF4.html" guid="_0ZlSsMlgEdmt3adZL5Dmdw">Test Log</a>,
    and perform any teardown steps.
</p></sectionDescription>
  </sections>
  <sections xmi:id="_0XzAwDO2EduqsLmIADMQ9g" name="Analyze and communicate test results"
      guid="_0XzAwDO2EduqsLmIADMQ9g">
    <sectionDescription><p>
    Post the test results in a conspicuous place that is accessible to the entire team, such as a whiteboard or wiki.
</p>
<p>
    For each failing test script, analyze the Test Log to identify the cause of the failure. Begin with failing tests
    that you expected to start passing against this build, because these may indicate newly delivered work items that do not meet
    the conditions of satisfaction. Then review previously passing test scripts that are now failing, because these may indicate
    regressions in the build.
</p>
<ul>
    <li>
        If a test failed because the solution does not meet the conditions of satisfaction for the test case, log the issue
        in the Work Items List. In the work item, clearly identify the observed behavior, the expected behavior, and the steps
        to reproduce the issue. Note which failing test first discovered the issue.
    </li>
    <li>
        If a test failed because of a change in the system (such as a user-interface change), but the implementation still
        meets the conditions of satisfaction in the test case, update the test script to pass with the new implementation.
    </li>
    <li>
        If a test failed because the test script is incorrect (a false negative), or passed when it was expected to
        fail (a false positive), update the test script to correctly implement the conditions of satisfaction in the
        test case. If the test case for a requirement is invalid, perform <a class="elementLinkWithType" href="./../../openup/tasks/request_change_A048C387.html" guid="_0mwzEclgEdmt3adZL5Dmdw">Task: Request Change</a> to
        modify the conditions of satisfaction for the requirement.
    </li>
</ul>
<p>
    It is best to update test scripts as quickly and continuously as possible (see <a class="elementLinkWithType" href="./../../openup/tasks/implement_test_scripts_26F00282.html" guid="_0jO98MlgEdmt3adZL5Dmdw">Task: Implement Test Scripts</a>). If the change to a test script is trivial, update the test while analyzing the test results. If the
    change is a non-trivial task, submit it to the Work Items List so that it can be prioritized against other tasks.
</p></sectionDescription>
  </sections>
  <sections xmi:id="_i3flUBB5Edyy0ZcrPg8jlg" name="Provide feedback to the team" guid="_i3flUBB5Edyy0ZcrPg8jlg">
    <sectionDescription><p>
    Summarize the test results and provide feedback to the team about how well the build satisfies the requirements planned for the
    iteration. Focus on measuring progress in terms of passing tests.
</p>
<p>
    Explain the results for the test cycle in the context of overall trends:
</p>
<ul>
    <li>
        How many tests were selected for the build, and what are their statuses (pass, fail, blocked, not run, and so on)?
    </li>
    <li>
        How many issues were added to the Work Items List, and what are their statuses and severities?
    </li>
    <li>
        For test scripts that were blocked or skipped, what are the main reasons (such as known issues)?
    </li>
</ul></sectionDescription>
  </sections>
  <keyConsiderations><ul>
    <li>
        Run all tests as frequently as possible. Ideally, run all test scripts against each build deployed to the test
        environment. If this is impractical, run regression tests for existing functionality, and focus the test cycle
        on the work items completed in the new build.
    </li>
    <li>
        Even test scripts that are expected to fail provide valuable feedback (see <a class="elementLinkWithType" href="./../../openup/guidances/guidelines/test_first_design_21C77ADF.html" guid="_0Y6kUMlgEdmt3adZL5Dmdw">Guideline: Test-first Design</a>). However, once a test script is passing, it should not fail against subsequent builds
        of the solution.
    </li>
  </ul></keyConsiderations>
  <purpose><p>
    To provide feedback to the team about how well a <a href="./../../openup/workproducts/build_95D7D8FD.html" guid="_0YuXEMlgEdmt3adZL5Dmdw">build</a> satisfies the requirements.
</p></purpose>
</org.eclipse.epf.uma:TaskDescription>