<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "DTD/xhtml1-strict.dtd">
<!-- VERSION rmc:7.1.0 -->
<html xmlns="http://www.w3.org/1999/xhtml">
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8"/>
<!-- START NON-TRANSLATABLE -->
<title>\openup_basic\tasks\run_tests.xmi</title>
</head>
<!-- WARNING: do not modify the generated comments in this file below this line. They are used as markers for the import process. -->
<body>
Element Name: run_tests.xmi<br/><br/>
<!-- END NON-TRANSLATABLE -->
<br/><br/><br/>
<!-- START NON-TRANSLATABLE -->
Attribute: presentationName<br/><br/>
<!-- END NON-TRANSLATABLE -->
<!-- START:presentationName,_0jVEkMlgEdmt3adZL5Dmdw CRC: 2545677411 -->Run Tests<!-- END:presentationName,_0jVEkMlgEdmt3adZL5Dmdw -->
<br/><br/><br/>
<!-- START NON-TRANSLATABLE -->
Attribute: briefDescription<br/><br/>
<!-- END NON-TRANSLATABLE -->
<!-- START:briefDescription,_0jVEkMlgEdmt3adZL5Dmdw CRC: 3595529747 -->Run the appropriate collections of tests required to evaluate product quality. Capture test results that facilitate ongoing assessment of the product.<!-- END:briefDescription,_0jVEkMlgEdmt3adZL5Dmdw -->
<br/><br/><br/>
<!-- START NON-TRANSLATABLE -->
Attribute: mainDescription<br/><br/>
<!-- END NON-TRANSLATABLE -->
<!-- START:mainDescription,_NrbRUqeqEdmKDbQuyzCoqQ CRC: 4087532239 -->Run the system test, which addresses functional and system integration tests and, potentially, user acceptance tests.<!-- END:mainDescription,_NrbRUqeqEdmKDbQuyzCoqQ -->
<br/><br/><br/>
<!-- START NON-TRANSLATABLE -->
Attribute: keyConsiderations<br/><br/>
<!-- END NON-TRANSLATABLE -->
<!-- START:keyConsiderations,_NrbRUqeqEdmKDbQuyzCoqQ CRC: 1019361853 --><ul>
<li>
Run the tests regularly: ideally, whenever the code changes; at a minimum, once a day.
</li>
<li>
It should be possible for anyone on the test team to run the test at any time.
</li>
</ul><!-- END:keyConsiderations,_NrbRUqeqEdmKDbQuyzCoqQ -->
<br/><br/><br/>
<!-- START NON-TRANSLATABLE -->
Attribute: purpose<br/><br/>
<!-- END NON-TRANSLATABLE -->
<!-- START:purpose,_NrbRUqeqEdmKDbQuyzCoqQ CRC: 1105804481 -->To execute tests and evaluate the test results.<!-- END:purpose,_NrbRUqeqEdmKDbQuyzCoqQ -->
<br/><br/><br/>
<!-- START NON-TRANSLATABLE -->
Attribute: name<br/><br/>
<!-- END NON-TRANSLATABLE -->
<!-- START:name,_fR4aQKuSEdmhFZtkg1nakg CRC: 3985836831 -->Schedule test execution<!-- END:name,_fR4aQKuSEdmhFZtkg1nakg -->
<br/><br/><br/>
<!-- START NON-TRANSLATABLE -->
Attribute: sectionDescription<br/><br/>
<!-- END NON-TRANSLATABLE -->
<!-- START:sectionDescription,_fR4aQKuSEdmhFZtkg1nakg CRC: 1420022442 --><p>
Run the system tests as often as possible. Ideally, run the tests whenever new code is checked into the
version control tool.
</p>
<p>
For larger systems, running the tests this often will be too expensive: the tests may take several hours to run, so
you will need to schedule them less frequently. If possible, however, run the tests several times a day; at a
minimum, run automated tests each night.
</p><!-- END:sectionDescription,_fR4aQKuSEdmhFZtkg1nakg -->
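The cadence described above (run on every check-in when feasible, nightly at a minimum) can be sketched as a small scheduling helper. This is an illustrative sketch only; the 02:00 slot and the `next_nightly_run` name are assumptions, not part of OpenUP or any particular build tool.

```python
from datetime import datetime, time, timedelta

def next_nightly_run(now, run_at=time(2, 0)):
    """Return the next scheduled nightly test run after `now`.

    Runs happen daily at `run_at` (02:00 here, an assumed slot),
    matching the "at a minimum, run automated tests each night"
    guidance above.
    """
    candidate = datetime.combine(now.date(), run_at)
    if candidate <= now:
        candidate += timedelta(days=1)
    return candidate

# A check-in at 14:30 falls after today's 02:00 slot, so the full
# suite is scheduled for 02:00 the next day.
checkin = datetime(2024, 5, 6, 14, 30)
print(next_nightly_run(checkin))  # 2024-05-07 02:00:00
```

In practice this decision usually lives in the build server's scheduler rather than in project code; the helper just makes the "minimum once a night" rule concrete.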
<br/><br/><br/>
<!-- START NON-TRANSLATABLE -->
Attribute: name<br/><br/>
<!-- END NON-TRANSLATABLE -->
<!-- START:name,_gV408KuSEdmhFZtkg1nakg CRC: 2579482424 -->Run the test<!-- END:name,_gV408KuSEdmhFZtkg1nakg -->
<br/><br/><br/>
<!-- START NON-TRANSLATABLE -->
Attribute: sectionDescription<br/><br/>
<!-- END NON-TRANSLATABLE -->
<!-- START:sectionDescription,_gV408KuSEdmhFZtkg1nakg CRC: 3515341533 --><p>
Run the test at the scheduled time, following the instructions in the <a class="elementLink" href="./../../openup_basic/workproducts/test_script,_0ZfMEMlgEdmt3adZL5Dmdw.html" guid="_0ZfMEMlgEdmt3adZL5Dmdw">Test Script</a>. Ideally, the script is automated.
</p>
<p>
Good practices:
</p>
<ol>
<li>
Run the test in a separate test environment.
</li>
<li>
Ensure that you run the test against the latest clean build.
</li>
<li>
The first step of the test should be to set up the test environment (ensure that the network is available, that the
test database is available and reset to a known state, and so on).
</li>
</ol><!-- END:sectionDescription,_gV408KuSEdmhFZtkg1nakg -->
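The third good practice above (set up and verify the test environment first) can be sketched as a precondition gate that runs before the test script. The check names and the `environment_ready` helper are hypothetical, shown only to illustrate the pattern.

```python
def environment_ready(checks):
    """Run each named precondition check; return (ok, failed_names)."""
    failures = [name for name, check in checks.items() if not check()]
    return (not failures, failures)

# Illustrative checks; in a real run these would ping the test host,
# reset the test database to a known state, and so on.
checks = {
    "network reachable": lambda: True,
    "database reset":    lambda: True,
}
ok, failures = environment_ready(checks)
if ok:
    print("environment ready; running test script")
else:
    print("aborting run; failed checks:", failures)
```

Gating the run this way keeps environment problems from being misreported as product failures.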
<br/><br/><br/>
<!-- START NON-TRANSLATABLE -->
Attribute: name<br/><br/>
<!-- END NON-TRANSLATABLE -->
<!-- START:name,_hfVJQKuSEdmhFZtkg1nakg CRC: 2405848560 -->Close test run<!-- END:name,_hfVJQKuSEdmhFZtkg1nakg -->
<br/><br/><br/>
<!-- START NON-TRANSLATABLE -->
Attribute: sectionDescription<br/><br/>
<!-- END NON-TRANSLATABLE -->
<!-- START:sectionDescription,_hfVJQKuSEdmhFZtkg1nakg CRC: 2769718961 -->Close the test run as the last step of running the test. To do this:
<ol>
<li>
Close the test logs and place the log files in the appropriate folder or directory.
</li>
<li>
Announce results. Send a notice to everyone involved in the project informing them of the result of the test run
and where they can find the test logs.
</li>
</ol><!-- END:sectionDescription,_hfVJQKuSEdmhFZtkg1nakg -->
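The two closing steps above (archive the logs, then announce where they are) can be sketched as one helper. The folder layout, the `.log` naming, and the `close_test_run` function are assumptions for illustration, not prescribed by OpenUP.

```python
import shutil
import tempfile
from pathlib import Path

def close_test_run(log_dir, archive_root, run_id):
    """Move the run's log files into a per-run archive folder and
    return a one-line summary suitable for announcing to the team."""
    dest = Path(archive_root) / run_id
    dest.mkdir(parents=True, exist_ok=True)
    moved = 0
    for log in Path(log_dir).glob("*.log"):
        shutil.move(str(log), dest / log.name)  # logs must be closed before moving
        moved += 1
    return f"Test run {run_id} closed: {moved} log(s) archived in {dest}"

# Demo against a throwaway directory layout.
with tempfile.TemporaryDirectory() as tmp:
    logs = Path(tmp) / "logs"
    logs.mkdir()
    (logs / "system_test.log").write_text("...")
    print(close_test_run(logs, Path(tmp) / "archive", "run-2024-05-06"))
```

The returned summary is the kind of notice the announcement step calls for: it tells the team the run is closed and where the logs now live.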
<br/><br/><br/>
<!-- START NON-TRANSLATABLE -->
Attribute: name<br/><br/>
<!-- END NON-TRANSLATABLE -->
<!-- START:name,_sQaC4DO2EduqsLmIADMQ9g CRC: 556134948 -->Examine the test log<!-- END:name,_sQaC4DO2EduqsLmIADMQ9g -->
<br/><br/><br/>
<!-- START NON-TRANSLATABLE -->
Attribute: sectionDescription<br/><br/>
<!-- END NON-TRANSLATABLE -->
<!-- START:sectionDescription,_sQaC4DO2EduqsLmIADMQ9g CRC: 309771938 --><p>
Collect and compile information from test execution logs so you can:
</p>
<ul>
<li>
Capture the high-impact and risk issues discovered in running the tests.
</li>
<li>
Identify errors in test creation, data inputs, or application integration, as well as any architectural anomalies.
</li>
<li>
Isolate the target of the test to determine failure points.
</li>
<li>
Diagnose failure symptoms and characteristics.
</li>
<li>
Assess and identify possible solutions.
</li>
</ul>
<p>
After completing these steps, verify that you have enough details to determine the impact of the results. In addition,
make sure that enough information exists to assist individuals who are performing dependent tasks.
</p><!-- END:sectionDescription,_sQaC4DO2EduqsLmIADMQ9g -->
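Compiling information from execution logs, as described above, usually starts with a simple tally of outcomes and a list of failing tests to drive the diagnosis steps. The `PASS`/`FAIL` line format below is an assumed log convention, not a format OpenUP defines.

```python
def summarize_log(lines):
    """Tally PASS/FAIL entries and collect the names of failing tests."""
    summary = {"pass": 0, "fail": 0, "failures": []}
    for line in lines:
        status, _, name = line.partition(" ")
        if status == "PASS":
            summary["pass"] += 1
        elif status == "FAIL":
            summary["fail"] += 1
            summary["failures"].append(name)
    return summary

# Assumed log excerpt: one "<STATUS> <test name>" entry per line.
log = ["PASS login_basic", "FAIL checkout_total", "PASS search_empty"]
print(summarize_log(log))
# {'pass': 2, 'fail': 1, 'failures': ['checkout_total']}
```

The `failures` list is the starting point for isolating failure points and diagnosing symptoms; real logs would also carry timestamps and stack traces worth preserving in the summary.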
<br/><br/><br/>
<!-- START NON-TRANSLATABLE -->
Attribute: name<br/><br/>
<!-- END NON-TRANSLATABLE -->
<!-- START:name,_0XzAwDO2EduqsLmIADMQ9g CRC: 47886765 -->Identify failures and propose solutions<!-- END:name,_0XzAwDO2EduqsLmIADMQ9g -->
<br/><br/><br/>
<!-- START NON-TRANSLATABLE -->
Attribute: sectionDescription<br/><br/>
<!-- END NON-TRANSLATABLE -->
<!-- START:sectionDescription,_0XzAwDO2EduqsLmIADMQ9g CRC: 3417445203 --><p>
Identify whether the test has failed, and propose a solution based on the type of test and the category of
failure. The testing approach determines which failures are identified and which solutions are candidates.
</p>
<p>
Tests that are programmer-supporting help developers write the code and build confidence in it. When identifying
failures and proposing solutions for programmer-supporting tests:
</p>
<ul>
<li>
Failures will be identified at an object or element level.
</li>
<li>
Solutions will help clarify the problem.
</li>
</ul>
<p>
Tests that are business-supporting are used to uncover prior mistakes and omissions. When identifying failures and proposing solutions for business-supporting tests:
</p>
<ul>
<li>
Failures will identify omissions in requirements.
</li>
<li>
Solutions will help to clarify expectations of the system.
</li>
</ul>
<p>
After you have this information and the steps proposed to resolve the failures, you can effectively categorize the type
of failure and the appropriate type of solution.
</p>
<p>
See <a class="elementLinkWithUserText" href="./../../openup_basic/guidances/supportingmaterials/references,_9ToeIB83Edqsvps02rpOOg.html" guid="_9ToeIB83Edqsvps02rpOOg">[MAR03]</a> for more information.
</p><!-- END:sectionDescription,_0XzAwDO2EduqsLmIADMQ9g -->
<br/><br/><br/>
<!-- START NON-TRANSLATABLE -->
Attribute: name<br/><br/>
<!-- END NON-TRANSLATABLE -->
<!-- START:name,_3t6oADO2EduqsLmIADMQ9g CRC: 512718462 -->Communicate test results<!-- END:name,_3t6oADO2EduqsLmIADMQ9g -->
<br/><br/><br/>
<!-- START NON-TRANSLATABLE -->
Attribute: sectionDescription<br/><br/>
<!-- END NON-TRANSLATABLE -->
<!-- START:sectionDescription,_3t6oADO2EduqsLmIADMQ9g CRC: 2039249076 --><p>
Communicate the test results to the team. For failed tests, this might involve adding bug reports to the <a class="elementLink" href="./../../openup_basic/workproducts/work_items_list,_rGNWsCbSEdqh1LYUOGRh2A.html" guid="_rGNWsCbSEdqh1LYUOGRh2A">Work Items List</a>.
</p>
<p>
Communicating test results can affect the perception of the effectiveness of the tests. When communicating test
results, it is important that you:
</p>
<ul>
<li>
Know the audience, so that the right information is communicated in the right way
</li>
<li>
Run tests or scenarios that are likely to uncover the high-impact and risk issues or represent actual use of the
system
</li>
</ul>
<p>
When preparing test result reports, answer the following questions:
</p>
<ul>
<li>
How many test cases exist, and what are their states (pass, fail, blocked, and so on)?
</li>
<li>
How many bug reports have been filed, and what are their states (open, assigned, ready for testing, closed,
deferred)?
</li>
<li>
What trends and patterns do you see in test case and bug report states, especially opened and closed bug reports
and passed and failed test cases?
</li>
<li>
For test cases that were blocked or skipped, why are they in this state?
</li>
<li>
Considering all test cases not yet run (and perhaps not even created yet), what key risks and areas of
functionality remain untested?
</li>
<li>
For failed test cases, what are the associated bug reports?
</li>
<li>
For bug reports ready for confirmation testing, when can your team perform the test?
</li>
</ul><!-- END:sectionDescription,_3t6oADO2EduqsLmIADMQ9g -->
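The first two report questions above reduce to counting test cases and bug reports by state. A minimal sketch, assuming test cases and bug reports are available as simple records with a `state` field (an illustrative data shape, not an OpenUP artifact):

```python
from collections import Counter

def report_counts(test_cases, bug_reports):
    """Break down test cases and bug reports by state, answering the
    first two questions of the test result report."""
    return {
        "test_cases": dict(Counter(tc["state"] for tc in test_cases)),
        "bug_reports": dict(Counter(b["state"] for b in bug_reports)),
    }

summary = report_counts(
    [{"state": "pass"}, {"state": "fail"}, {"state": "pass"}],
    [{"state": "open"}, {"state": "closed"}],
)
print(summary["test_cases"])   # {'pass': 2, 'fail': 1}
print(summary["bug_reports"])  # {'open': 1, 'closed': 1}
```

Tracking these counts run over run also supplies the trend data the third question asks for: plotting opened versus closed bug reports per run makes the pattern visible.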
</body>
</html>