<?xml version="1.0" encoding="UTF-8"?>
<org.eclipse.epf.uma:ContentDescription xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:org.eclipse.epf.uma="http://www.eclipse.org/epf/uma/1.0.3/uma.ecore" epf:version="1.0.0" xmi:id="-9gUpkUYqONF3x8UWwAO_zw" name="failure_analysis_rpt_creation,_0jhR0MlgEdmt3adZL5Dmdw" guid="-9gUpkUYqONF3x8UWwAO_zw" changeDate="2006-09-29T13:52:52.340-0700" version="1.0.0">
<mainDescription>&lt;h3&gt;
Introduction
&lt;/h3&gt;
&lt;p&gt;
During testing, you will encounter failures of different kinds related to the execution of your tests, such as code
defects, user errors, program malfunctions, and other general problems. This concept discusses some ways to conduct
failure analysis and then report your findings.
&lt;/p&gt;
&lt;h3&gt;
Failure Analysis
&lt;/h3&gt;
&lt;p&gt;
After you have run your tests, it is good practice to identify the inputs you will use to review the results of the
testing effort. Likely sources include defects uncovered during the execution of test scripts, change request metrics,
and test log details.
&lt;/p&gt;
&lt;p&gt;
Running test scripts produces errors of different kinds, such as uncovered defects, unexpected behavior, or the general
failure of a test script to run properly. One of the most important tasks when you run test scripts is to identify the
causes and effects of each failure, and in particular to differentiate failures in the system under test from those
related to the tests themselves.
&lt;/p&gt;
&lt;p&gt;
Change request metrics are useful in analyzing and correcting failures found during testing. Select metrics that will
facilitate the creation of incident reports from a collection of change requests.
&lt;/p&gt;
&lt;p&gt;
Change request metrics that you may find useful in your failure analysis include:
&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
test coverage
&lt;/li&gt;
&lt;li&gt;
priority
&lt;/li&gt;
&lt;li&gt;
impact
&lt;/li&gt;
&lt;li&gt;
defect trends
&lt;/li&gt;
&lt;li&gt;
density
&lt;/li&gt;
&lt;/ul&gt;
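&lt;p&gt;
As an example of how one of these metrics might be derived from change request data, defect density is commonly
expressed as the number of defects reported against a component divided by the component's size (for example, per
thousand lines of code). The sketch below is illustrative only; the class name and figures are hypothetical:
&lt;/p&gt;
&lt;pre&gt;
// Illustrative calculation of defect density (defects per KLOC).
public class DefectDensity {
    static double density(int defectCount, int linesOfCode) {
        return defectCount / (linesOfCode / 1000.0);
    }

    public static void main(String[] args) {
        // For example, 12 defects logged against a 4,800-line component yields 2.5 defects per KLOC.
        System.out.printf("Density: %.1f defects/KLOC%n", density(12, 4800));
    }
}
&lt;/pre&gt;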
&lt;p&gt;
Finally, one of the most critical sources for your failure analysis is the test log. Start by gathering the test log
output produced during the implementation and execution of the tests. Relevant logs might come from many sources: they
might be captured by the tools you use (both test execution and diagnostic tools), generated by custom-written routines
your team has developed, output by the target test items themselves, or recorded manually by the tester. Gather all of
the available test log sources and examine their content. Check that all of the scheduled tests executed to completion,
and that all of the needed tests were scheduled.
&lt;/p&gt;
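&lt;p&gt;
One simple consistency check is to compare the list of scheduled tests against the test identifiers that appear in the
execution logs. The sketch below assumes a hypothetical format of one test identifier per line in each file; real log
formats and tool output will differ:
&lt;/p&gt;
&lt;pre&gt;
// Illustrative check: report scheduled tests that have no completion record in the test log.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class TestLogAudit {
    public static void main(String[] args) throws IOException {
        // Hypothetical inputs: one test identifier per line in each file.
        List&lt;String&gt; scheduled = Files.readAllLines(Paths.get("scheduled-tests.txt"));
        Set&lt;String&gt; executed = new HashSet&lt;&gt;(Files.readAllLines(Paths.get("test-log.txt")));

        for (String test : scheduled) {
            if (!executed.contains(test)) {
                System.out.println("Scheduled but not executed: " + test);
            }
        }
    }
}
&lt;/pre&gt;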
&lt;h3&gt;
Self-Documenting Tests
&lt;/h3&gt;
&lt;p&gt;
For automated tests, it is a best practice for the test itself to examine its results and clearly report whether it has
passed or failed. This is the most efficient way to run tests: whole suites can be executed with each test in turn
determining its own pass or fail status, without the need for human intervention. When authoring self-documenting
tests, take extra care to ensure that the analysis of the results considers all possibilities.
&lt;/p&gt;
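&lt;p&gt;
For example, a self-documenting automated test written with a unit-testing framework such as JUnit uses an assertion to
determine the pass or fail outcome, so no human inspection of the output is needed. The class under test and the
expected values below are hypothetical:
&lt;/p&gt;
&lt;pre&gt;
// Illustrative JUnit 4 test: the assertion itself decides pass or fail.
import static org.junit.Assert.assertEquals;
import org.junit.Test;

public class OrderTotalTest {
    @Test
    public void totalIncludesTaxForSingleLineItem() {
        Order order = new Order();                  // hypothetical class under test
        order.addLineItem("widget", 2, 10.00);      // two items at 10.00 each
        // Fails the test automatically if the computed total is not 22.00 (20.00 plus 10% tax).
        assertEquals(22.00, order.totalWithTax(0.10), 0.001);
    }
}
&lt;/pre&gt;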
&lt;h3&gt;
Recording Your Findings
&lt;/h3&gt;
&lt;p&gt;
Once you have conducted your failure analysis, you may decide to formalize the results by recording your findings in a
report. Several factors go into this decision, including the level of testing formality, the complexity of the testing
effort, and the need to communicate the testing results to the entire development team. In less formal environments, it
may be sufficient to record your failure analysis in a test evaluation summary.
&lt;/p&gt;</mainDescription>
</org.eclipse.epf.uma:ContentDescription>