| <?xml version="1.0" encoding="UTF-8"?> |
| <org.eclipse.epf.uma:ContentDescription xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:org.eclipse.epf.uma="http://www.eclipse.org/epf/uma/1.0.3/uma.ecore" epf:version="1.0.0" xmi:id="_qS8JcMM3EdmSIPI87WLu3g" name="failure_analisys_rpt_creation,_0jhR0MlgEdmt3adZL5Dmdw" guid="_qS8JcMM3EdmSIPI87WLu3g" changeDate="2006-09-20T15:57:59.790-0700"> |
| <mainDescription><h3> |
| Introduction |
| </h3> |
| <p> |
| During testing, you will encounter failures related to the execution of your tests&nbsp;in various forms, such as |
| code defects, user errors, program malfunctions, and general problems.&nbsp;This concept page describes some ways |
| to conduct failure analysis and then report your findings. |
| </p> |
| <h3> |
| Failure Analysis |
| </h3> |
| <p> |
| After you have run your tests, it is good practice to gather inputs for reviewing the results of the testing |
| effort.&nbsp;Some likely sources are defects that occurred during the execution of test scripts, change request |
| metrics, and&nbsp;test log details. |
| </p> |
| <p> |
| Running test scripts may result in errors of different kinds, such as uncovered defects, unexpected behavior, or |
| general failure of the test script to run properly.&nbsp;When you run test scripts, one of the most important things |
| to do is to identify the causes and effects of each failure.&nbsp;It is important to differentiate failures in the |
| system under test from those related to the tests themselves. |
| </p> |
| <p> |
| Change request metrics are useful in analyzing and correcting failures in the testing effort.&nbsp;Select metrics |
| that will facilitate the creation of incident reports from a collection of change requests.&nbsp;Change request |
| metrics that you may find useful in your failure analysis include test coverage, priority, impact, defect trends, |
| and defect density. |
| </p> |
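<p>
As a rough sketch of how two of these metrics might be derived, the following assumes change requests are available
as simple in-memory records. The field names (<code>type</code>, <code>priority</code>, <code>component</code>,
<code>opened</code>) and the per-component density measure are illustrative assumptions, not a prescribed format or
tool API:
</p>

```python
from collections import Counter
from datetime import date

# Hypothetical change-request records; field names are illustrative only.
change_requests = [
    {"id": 101, "type": "defect", "priority": "high", "component": "login", "opened": date(2006, 9, 1)},
    {"id": 102, "type": "defect", "priority": "low", "component": "login", "opened": date(2006, 9, 8)},
    {"id": 103, "type": "enhancement", "priority": "low", "component": "report", "opened": date(2006, 9, 9)},
    {"id": 104, "type": "defect", "priority": "high", "component": "report", "opened": date(2006, 9, 15)},
]

# Keep only change requests that report defects.
defects = [cr for cr in change_requests if cr["type"] == "defect"]

# Defect density: defects per component.  (A simple proxy; real density
# measures are usually normalized by size, e.g. defects per KLOC.)
density = Counter(cr["component"] for cr in defects)

# Defect trend: defects opened per ISO week, to spot rising or falling rates.
trend = Counter(cr["opened"].isocalendar()[1] for cr in defects)

print(density)
print(trend)
```

<p>
Counts like these can then be rolled up into an incident report, with priority and impact used to order the entries.
</p>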
| <p> |
| Finally, one of the most critical sources for your failure analysis is the test log.&nbsp;Start by gathering the test |
| logs output during the implementation and execution of the tests. Relevant logs might come from many sources: they |
| might be captured by the tools you use (both test execution and diagnostic tools), generated by custom-written |
| routines your team has developed, produced by the target-of-test items themselves, or recorded manually by the |
| tester. Gather all of the available test log sources and examine their content. Check that all the scheduled tests |
| ran to completion, and that all the tests that should have been scheduled actually were. |
| </p> |
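<p>
A minimal sketch of that completion check, assuming test identifiers and log statuses are available in a simple
in-memory form (the identifiers <code>TC-001</code> and so on, and the statuses <code>PASSED</code>,
<code>FAILED</code>, <code>ABORTED</code>, are illustrative and not tied to any particular tool):
</p>

```python
# The tests the plan said should run.
scheduled = {"TC-001", "TC-002", "TC-003", "TC-004"}

# Parsed test-log entries: (test id, final status recorded in the log).
log_entries = [
    ("TC-001", "PASSED"),
    ("TC-002", "FAILED"),
    ("TC-003", "ABORTED"),   # started but did not run to completion
    ("TC-005", "PASSED"),    # ran, but was never on the schedule
]

executed = {test_id for test_id, _ in log_entries}
completed = {t for t, status in log_entries if status in ("PASSED", "FAILED")}

never_run = scheduled - executed     # scheduled, but no log entry at all
incomplete = executed - completed    # started, but no terminal result logged
unscheduled = executed - scheduled   # ran without ever being scheduled
```

<p>
Each of the three result sets points to a different kind of failure: a gap in execution, a test that aborted
mid-run, or a discrepancy between the plan and what was actually run.
</p>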
| <h3> |
| Recording Your Findings |
| </h3> |
| <p> |
| Once you have conducted your failure analysis, you may decide to formalize the results by recording your findings in |
| a report.&nbsp;Several factors go into deciding whether to record your failure analysis in a report.&nbsp;Some of |
| the key factors include the level of testing formality, the complexity of the testing effort, and the need to |
| communicate the testing results to the entire development team.&nbsp;In less formal environments, it may be |
| sufficient to record your failure analysis in the form of a change request.&nbsp;In this case, it is convenient to |
| be able to cull the relevant failure analysis information from change requests and put it into&nbsp;a report format. |
| </p></mainDescription> |
| </org.eclipse.epf.uma:ContentDescription> |