| <?xml version="1.0" encoding="UTF-8"?> |
| <org.eclipse.epf.uma:ContentDescription xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:org.eclipse.epf.uma="http://www.eclipse.org/epf/uma/1.0.3/uma.ecore" epf:version="1.0.0" xmi:id="-9gUpkUYqONF3x8UWwAO_zw" name="failure_analysis_rpt_creation,_0jhR0MlgEdmt3adZL5Dmdw" guid="-9gUpkUYqONF3x8UWwAO_zw" changeDate="2006-09-29T13:52:52.340-0700" version="1.0.0"> |
| <mainDescription><h3> |
| Introduction |
| </h3> |
| <p> |
| During testing, you will encounter failures related to the execution of your tests in different forms, such as code |
| defects, user errors, program malfunctions, and general problems. This&nbsp;concept discusses some ways to conduct |
| failure analysis and then to report your findings. |
| </p> |
| <h3> |
| Failure Analysis |
| </h3> |
| <p> |
After you have run your tests, it is good practice to identify the inputs you will use to review the results of the
testing effort. Likely sources include defects that occurred during the execution of test scripts, change request
metrics, and test log details.
| </p> |
| <p> |
Running test scripts produces failures of different kinds, such as uncovered defects, unexpected behavior, or the
outright failure of a test script to run. One of the most important tasks when you run test scripts is to identify
the causes and effects of each failure. In particular, it is important to differentiate failures in the system under
test from those related to the tests themselves.
| </p> |
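<p>
To make this distinction concrete, the hypothetical Java sketch below models a failure record whose origin is
classified explicitly. The type and field names are invented for illustration and are not part of any particular
test-management tool.
</p>
<pre>
// Hypothetical sketch: recording the cause and effect of a failure so that
// defects in the system under test stay distinct from problems in the tests
// themselves. All names here are invented for illustration.
public class FailureRecord {

    public enum Origin {
        SYSTEM_UNDER_TEST,   // a genuine product defect
        TEST_SCRIPT,         // the test itself is wrong or brittle
        ENVIRONMENT          // test data, configuration, or infrastructure
    }

    private final String testId;
    private final Origin origin;
    private final String cause;    // e.g., "service returned a null order id"
    private final String effect;   // e.g., "checkout script aborted at step 4"

    public FailureRecord(String testId, Origin origin, String cause, String effect) {
        this.testId = testId;
        this.origin = origin;
        this.cause = cause;
        this.effect = effect;
    }

    /** Only failures in the system under test should become change requests. */
    public boolean isProductDefect() {
        return origin == Origin.SYSTEM_UNDER_TEST;
    }
}
</pre>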
| <p> |
Change request metrics are useful for analyzing and correcting failures in the testing effort. Select metrics that
will make it straightforward to create incident reports from a collection of change requests.
| </p> |
| <p> |
Change request metrics that you may find useful in your failure analysis include the following (the sketch after this
list shows how two of them might be derived):
| </p> |
| <ul> |
| <li> |
| test coverage |
| </li> |
| <li> |
| priority |
| </li> |
| <li> |
| impact |
| </li> |
| <li> |
| defect trends |
| </li> |
| <li> |
defect density
| </li> |
| </ul> |
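<p>
As an illustration, the following sketch derives two of these metrics, defect density and a simple defect trend, from
a collection of change requests. The ChangeRequest shape and the use of KLOC (thousands of lines of code) as the size
measure are assumptions made for the example.
</p>
<pre>
import java.time.LocalDate;
import java.util.Arrays;

// Hypothetical sketch: deriving failure-analysis metrics from change requests.
public class ChangeRequestMetrics {

    // Invented shape for a change request; real tools will differ.
    record ChangeRequest(String id, LocalDate opened, boolean isDefect) {}

    /** Defect density: defects found per thousand lines of code (KLOC). */
    static double defectDensity(ChangeRequest[] requests, double kloc) {
        long defects = Arrays.stream(requests).filter(ChangeRequest::isDefect).count();
        return defects / kloc;
    }

    /** Crude defect trend: defects opened this week minus the week before. */
    static long weeklyDefectTrend(ChangeRequest[] requests, LocalDate today) {
        long thisWeek = defectsOpenedBetween(requests, today.minusDays(7), today);
        long weekBefore = defectsOpenedBetween(requests, today.minusDays(14), today.minusDays(7));
        return thisWeek - weekBefore;
    }

    private static long defectsOpenedBetween(ChangeRequest[] requests,
                                             LocalDate from, LocalDate to) {
        return Arrays.stream(requests)
                     .filter(ChangeRequest::isDefect)
                     .filter(cr -> !cr.opened().isBefore(from))
                     .filter(cr -> cr.opened().isBefore(to))
                     .count();
    }
}
</pre>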
| <p> |
Finally, one of the most critical sources for your failure analysis is the test log. Start by gathering the test log
output produced during the implementation and execution of the tests. Relevant logs might come from many sources: they
might be captured by the tools you use (both test execution and diagnostic tools), generated by custom-written routines
your team has developed, output by the target test items themselves, or recorded manually by the tester. Gather all of
the available test log sources and examine their content. Check that all of the scheduled testing executed to
completion, and that all of the needed tests have been scheduled.
| </p> |
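<p>
One way to automate that completeness check is sketched below: consolidated log entries are compared against the list
of scheduled tests, flagging tests that never ran and tests that aborted. The log entry shape is an assumption; logs
from real execution tools will need their own parsing step.
</p>
<pre>
import java.util.Arrays;

// Hypothetical sketch: auditing consolidated test logs against the schedule.
public class TestLogAudit {

    // Invented log-entry shape; real tool output needs its own parsing.
    record LogEntry(String testId, boolean ranToCompletion) {}

    /** Flags scheduled tests that never ran, and runs that did not complete. */
    static void audit(String[] scheduledTestIds, LogEntry[] logEntries) {
        for (String id : scheduledTestIds) {
            boolean executed = Arrays.stream(logEntries)
                                     .anyMatch(entry -> entry.testId().equals(id));
            if (!executed) {
                System.out.println("Scheduled but never executed: " + id);
            }
        }
        for (LogEntry entry : logEntries) {
            if (!entry.ranToCompletion()) {
                System.out.println("Did not run to completion: " + entry.testId());
            }
        }
    }
}
</pre>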
| <h3> |
| Self-Documenting Tests |
| </h3> |
| <p> |
For automated tests, it is a best practice for the test itself to examine the results and clearly report whether it
passed or failed. This is the most efficient way to run tests: whole suites can be executed with each test in turn
determining its own verdict, without the need for human intervention. When authoring self-documenting tests, take
extra care to ensure that the analysis of the results considers all possibilities.
| </p> |
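<p>
A minimal sketch of such a self-verifying test is shown below, using JUnit; the Calculator class under test is
invented for the example. Each test method states its own expected outcome, so the framework can report pass or fail
with no human inspection of the output.
</p>
<pre>
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.junit.jupiter.api.Assertions.assertThrows;

// Hypothetical sketch of a self-verifying automated test: each test compares
// actual against expected results and reports its own pass/fail verdict, so
// whole suites can run unattended.
class CalculatorTest {

    @Test
    void additionHandlesNegativeOperands() {
        Calculator calc = new Calculator();
        // The assertion is the verdict; no human inspects the output.
        assertEquals(-1, calc.add(2, -3));
    }

    @Test
    void divisionByZeroIsDetected() {
        Calculator calc = new Calculator();
        // Consider all possibilities, including error paths.
        assertThrows(ArithmeticException.class, () -> calc.divide(1, 0));
    }
}

// Minimal class under test, invented for the example.
class Calculator {
    int add(int a, int b) { return a + b; }
    int divide(int a, int b) { return a / b; }
}
</pre>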
| <h3> |
| Recording Your Findings |
| </h3> |
| <p> |
Once you have conducted your failure analysis, you may decide to formalize the results by recording your findings in a
report. Several factors go into this decision; key among them are the level of testing formality, the complexity of
the testing effort, and the need to communicate the testing results to the entire development team. In less formal
environments, it may be sufficient to record your failure analysis in a test evaluation summary.
| </p></mainDescription> |
| </org.eclipse.epf.uma:ContentDescription> |