<?xml version="1.0" encoding="UTF-8"?>
<org.eclipse.epf.uma:TaskDescription xmi:version="2.0"
xmlns:xmi="http://www.omg.org/XMI" xmlns:org.eclipse.epf.uma="http://www.eclipse.org/epf/uma/1.0.5/uma.ecore"
xmlns:rmc="http://www.ibm.com/rmc" rmc:version="7.5.0" xmlns:epf="http://www.eclipse.org/epf"
epf:version="1.5.0" xmi:id="_NrbRUqeqEdmKDbQuyzCoqQ"
name="run_tests,_0jVEkMlgEdmt3adZL5Dmdw" guid="_NrbRUqeqEdmKDbQuyzCoqQ" changeDate="2007-12-06T14:34:58.984-0800"
version="1.0.0">
<keyConsiderations>&lt;ul>&#xD;
&lt;li>&#xD;
Run all tests as frequently as possible. Ideally, run all test scripts against each build deployed to the test&#xD;
environment. If this is impractical, run regression tests for existing functionality, and&amp;nbsp;focus the test cycle&#xD;
on work items completed in the new build.&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
Even test scripts that are expected to fail provide valuable feedback. However, once a test script is passing, it&#xD;
should not fail&amp;nbsp;against subsequent builds of the solution.&#xD;
&lt;/li>&#xD;
&lt;/ul></keyConsiderations>
<sections xmi:id="_xVhnwKRLEdyLP-jEVj8Kyw" name="Review work items completed in the build"
guid="_xVhnwKRLEdyLP-jEVj8Kyw">
<sectionDescription>Review work items that were integrated into the build since the last test cycle. Focus on identifying any previously&#xD;
unimplemented or failing requirements that are now expected to meet their conditions of satisfaction.</sectionDescription>
</sections>
<sections xmi:id="_1L1yAKRLEdyLP-jEVj8Kyw" name="Select Test Scripts" guid="_1L1yAKRLEdyLP-jEVj8Kyw">
<sectionDescription>&lt;p>&#xD;
Select test scripts related to work items completed in the build.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
Ideally, each test cycle should execute all test scripts, but some types of tests are too time-consuming to include in&#xD;
each test cycle. For manual or time-intensive tests, include test scripts that will provide the most useful feedback&#xD;
about the maturing solution based on the objectives of the iteration.&#xD;
&lt;/p>&#xD;
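&lt;p>&#xD;
For example, if test scripts are tagged with the work items they cover, selection can be scripted. The following&#xD;
Python sketch is illustrative only; the data layout and names are assumptions, not part of any particular&#xD;
test-management tool.&#xD;
&lt;/p>&#xD;
&lt;pre>&#xD;
# Hypothetical sketch: pick the scripts to run for this build.&#xD;
# Regression tests always run; other scripts run when they cover&#xD;
# a work item completed in the build.&#xD;
def select_scripts(all_scripts, completed_work_items, regression_suite):&#xD;
    selected = list(regression_suite)&#xD;
    for script in all_scripts:&#xD;
        if script["work_items"].intersection(completed_work_items):&#xD;
            selected.append(script)&#xD;
    return selected&#xD;
&#xD;
scripts = [&#xD;
    {"name": "login_happy_path", "work_items": {"WI-101"}},&#xD;
    {"name": "password_reset", "work_items": {"WI-102", "WI-103"}},&#xD;
]&#xD;
print(select_scripts(scripts, {"WI-102"}, regression_suite=[]))&#xD;
&lt;/pre>&#xD;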
&lt;p>&#xD;
Organize tests into test suites to simplify selecting tests for each build (see &lt;a class=&quot;elementLinkWithType&quot; href=&quot;./../../core.tech.common.extend_supp/guidances/guidelines/test_suite_D54EEBED.html&quot; guid=&quot;_0aDz0MlgEdmt3adZL5Dmdw&quot;>Guideline: Test Suite&lt;/a>).&#xD;
&lt;/p></sectionDescription>
</sections>
<sections xmi:id="_gV408KuSEdmhFZtkg1nakg" name="Execute Test Scripts against the build"
guid="_gV408KuSEdmhFZtkg1nakg">
<sectionDescription>&lt;p>&#xD;
Run the tests using the step-by-step procedure in the &lt;a class=&quot;elementLink&quot; href=&quot;./../../core.tech.common.extend_supp/workproducts/test_script_39A30BA2.html&quot; guid=&quot;_0ZfMEMlgEdmt3adZL5Dmdw&quot;>Test Script&lt;/a>.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
For automated test scripts, initiate the test execution. Automated test scripts should run as suites in the&#xD;
correct sequence and collect their results in the Test Log.&#xD;
&lt;/p>&#xD;
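&lt;p>&#xD;
As a minimal sketch, Python's standard unittest module can run suites in a defined order and write results to a&#xD;
log file. The test case below is a placeholder, not a real test script.&#xD;
&lt;/p>&#xD;
&lt;pre>&#xD;
import unittest&#xD;
&#xD;
class SmokeTests(unittest.TestCase):&#xD;
    def test_build_starts(self):&#xD;
        self.assertTrue(True)  # placeholder check&#xD;
&#xD;
suite = unittest.TestSuite()&#xD;
suite.addTest(unittest.defaultTestLoader.loadTestsFromTestCase(SmokeTests))&#xD;
&#xD;
# Write the run's output to a simple test log.&#xD;
with open("test_log.txt", "w") as log:&#xD;
    result = unittest.TextTestRunner(stream=log, verbosity=2).run(suite)&#xD;
print("failures:", len(result.failures), "errors:", len(result.errors))&#xD;
&lt;/pre>&#xD;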
&lt;p>&#xD;
To execute a manual test script, establish its preconditions, perform the steps while logging results in the &lt;a class=&quot;elementLink&quot; href=&quot;./../../core.tech.common.extend_supp/workproducts/test_log_CBA2FDF4.html&quot; guid=&quot;_0ZlSsMlgEdmt3adZL5Dmdw&quot;>Test Log&lt;/a>, and perform any teardown steps.&#xD;
&lt;/p></sectionDescription>
</sections>
<sections xmi:id="_sQaC4DO2EduqsLmIADMQ9g" name="Analyze and communicate test results"
guid="_sQaC4DO2EduqsLmIADMQ9g">
<sectionDescription>&lt;p>&#xD;
Post the test results in a conspicuous place that is accessible to the entire team, such as a whiteboard or wiki.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
For each failing test script, analyze the Test Log to identify the cause of the test failure. Begin with failing tests&#xD;
that you expected to begin passing against this build, which may indicate newly delivered work items that do not meet&#xD;
the conditions of satisfaction. Then review previously passing test scripts that are now failing, which may indicate&#xD;
regressions in the build.&#xD;
&lt;/p>&#xD;
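&lt;p>&#xD;
One way to make this triage systematic is to compare the current results with the previous cycle's, as in this&#xD;
illustrative Python sketch (the script names and status values are assumptions):&#xD;
&lt;/p>&#xD;
&lt;pre>&#xD;
# Hypothetical sketch: separate regressions (previously passing)&#xD;
# from new failures (never passed yet).&#xD;
def triage(previous, current):&#xD;
    regressions, new_failures = [], []&#xD;
    for name, status in current.items():&#xD;
        if status == "fail":&#xD;
            if previous.get(name) == "pass":&#xD;
                regressions.append(name)&#xD;
            else:&#xD;
                new_failures.append(name)&#xD;
    return regressions, new_failures&#xD;
&#xD;
prev = {"login": "pass", "search": "fail"}&#xD;
curr = {"login": "fail", "search": "fail", "export": "fail"}&#xD;
print(triage(prev, curr))  # (['login'], ['search', 'export'])&#xD;
&lt;/pre>&#xD;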
&lt;ul>&#xD;
&lt;li>&#xD;
If a test failed because the solution does not meet the conditions of satisfaction for the test case, log the issue&#xD;
in the Work Items List. In the work item, clearly identify the observed behavior, the expected behavior, and steps&#xD;
to reproduce the issue. Note which failing test initially discovered the issue.&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
If a test failed because of a change in the system (such as a user-interface change), but the implementation still&#xD;
meets the conditions of satisfaction in the test case, update the test script to pass with the new implementation.&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
If a test failed because the test script is incorrect (a false negative result) or passed when it was expected to&#xD;
fail (a false positive result), update the test script to correctly implement the conditions of satisfaction in the&#xD;
test case. If the test case for a requirement is invalid, create a change request to modify the conditions of&#xD;
satisfaction for the requirement.&#xD;
&lt;/li>&#xD;
&lt;/ul>&#xD;
&lt;p>&#xD;
Update test scripts as promptly and continuously as possible. If the change to the test script is trivial,&#xD;
update the test while analyzing the test results. If the change is non-trivial, submit it to the Work Items List&#xD;
so it can be prioritized against other tasks.&#xD;
&lt;/p></sectionDescription>
</sections>
<sections xmi:id="_3t6oADO2EduqsLmIADMQ9g" name="Provide feedback to the team" guid="_3t6oADO2EduqsLmIADMQ9g">
<sectionDescription>&lt;p>&#xD;
Summarize and provide feedback to the team about how well the build satisfies the requirements planned for the&#xD;
iteration. Focus on measuring progress in terms of passing tests.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
Explain the results for the test cycle&amp;nbsp;in the context of overall trends:&#xD;
&lt;/p>&#xD;
&lt;ul>&#xD;
&lt;li>&#xD;
How many tests were selected for the build, and what&amp;nbsp;are their statuses (pass, fail, blocked, not run, etc.)?&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
How many issues were added to the Work Items List, and what are their statuses and severities?&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
For test scripts that were blocked or skipped, what are the main reasons (such as known issues)?&#xD;
&lt;/li>&#xD;
&lt;/ul>&#xD;
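&lt;p>&#xD;
A status summary can be tallied directly from the test log, as in this illustrative Python sketch (the status&#xD;
values and script names are assumptions):&#xD;
&lt;/p>&#xD;
&lt;pre>&#xD;
from collections import Counter&#xD;
&#xD;
# Hypothetical sketch: tally per-status counts for the cycle report.&#xD;
results = {"login": "pass", "search": "fail", "export": "blocked"}&#xD;
summary = Counter(results.values())&#xD;
print(dict(summary))  # {'pass': 1, 'fail': 1, 'blocked': 1}&#xD;
&lt;/pre></sectionDescription>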
</sections>
<purpose>To provide feedback to the team about how well a build satisfies the requirements.</purpose>
</org.eclipse.epf.uma:TaskDescription>