<?xml version="1.0" encoding="UTF-8"?>
<org.eclipse.epf.uma:ContentDescription xmi:version="2.0"
xmlns:xmi="http://www.omg.org/XMI" xmlns:org.eclipse.epf.uma="http://www.eclipse.org/epf/uma/1.0.4/uma.ecore"
xmlns:epf="http://www.eclipse.org/epf" epf:version="1.2.0" xmi:id="-3i1jvKMUGGmAYPw4dHFbEg"
name="test-ideas_list,8.834380241450745E-306" guid="-3i1jvKMUGGmAYPw4dHFbEg" changeDate="2006-12-01T18:44:08.749-0500"
version="1.0.0">
<mainDescription>&lt;h3>&#xD;
&lt;a id=&quot;Introduction&quot; name=&quot;Introduction&quot;>Introduction&lt;/a>&#xD;
&lt;/h3>&#xD;
&lt;p>&#xD;
    Information used in designing tests is gathered from many places: design models, classifier interfaces, statecharts,&#xD;
    and the code itself. At some point, this source information must be transformed into executable tests:&#xD;
&lt;/p>&#xD;
&lt;ul>&#xD;
&lt;li>&#xD;
specific inputs given to the software under test&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
in a particular hardware and software configuration&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
initialized to a known state&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
with specific results expected&#xD;
&lt;/li>&#xD;
&lt;/ul>&#xD;
&lt;p>&#xD;
    It's possible to go directly from this source information to executable tests, but it's often useful to add an&#xD;
    intermediate step. In this step, test ideas are written into a &lt;i>Test-Ideas List&lt;/i>, which is then used to create&#xD;
    the executable tests.&#xD;
&lt;/p>&#xD;
&lt;h3>&#xD;
&lt;a id=&quot;TestIdeas&quot; name=&quot;TestIdeas&quot;>What are Test Ideas?&lt;/a>&#xD;
&lt;/h3>&#xD;
&lt;p>&#xD;
A test idea (sometimes referred to as a test requirement) is a brief statement about a test that could be performed. As&#xD;
a simple example, let's consider a function that calculates a square root and come up with some test ideas:&#xD;
&lt;/p>&#xD;
&lt;ul>&#xD;
&lt;li>&#xD;
give a number that's barely less than zero as input&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
give zero as the input&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
test a number that's a perfect square, like 4 or 16 (is the result exactly 2 or 4?)&#xD;
&lt;/li>&#xD;
&lt;/ul>&#xD;
&lt;p>&#xD;
Each of these ideas could readily be converted into an executable test with exact descriptions of inputs and expected&#xD;
results.&#xD;
&lt;/p>&#xD;
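&lt;p>&#xD;
    For illustration, here is a minimal sketch of how these ideas might become executable tests. It is written in Java&#xD;
    with JUnit-style assertions; the &lt;font size=&quot;+0&quot;>sqrt&lt;/font> function under test, the test names, and the choice of&#xD;
    exception are assumptions made for the sketch, not part of any particular library.&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;pre>&#xD;
// Hypothetical executable tests derived from the three ideas above.&#xD;
@Test public void barelyNegativeInputSignalsError() {&#xD;
    // Assumption: sqrt signals errors with IllegalArgumentException.&#xD;
    assertThrows(IllegalArgumentException.class, () -> sqrt(-0.0001));&#xD;
}&#xD;
&#xD;
@Test public void zeroInputReturnsZero() {&#xD;
    assertEquals(0.0, sqrt(0.0), 0.0);&#xD;
}&#xD;
&#xD;
@Test public void perfectSquareReturnsExactRoot() {&#xD;
    assertEquals(2.0, sqrt(4.0), 0.0);  // exactly 2, with zero tolerance&#xD;
}&#xD;
&lt;/pre>&#xD;
&lt;/blockquote>&#xD;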
&lt;p>&#xD;
There are two advantages to this less-specific intermediate form:&#xD;
&lt;/p>&#xD;
&lt;ul>&#xD;
&lt;li>&#xD;
test ideas are more reviewable and understandable than complete tests - it's easier to understand the reasoning&#xD;
behind them&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
test ideas support more powerful tests, as described later under the heading &lt;a href=&quot;#TestDesignUsingTheList&quot;>Test&#xD;
Design Using the List&lt;/a>&#xD;
&lt;/li>&#xD;
&lt;/ul>&#xD;
&lt;p>&#xD;
The square root examples all describe inputs, but test ideas can describe any of the elements of an executable test.&#xD;
For example, &quot;print to a LaserJet IIIp&quot; describes an aspect of the test environment to be used for a test, as does&#xD;
    &quot;test with database full&quot;. These latter test ideas are incomplete in themselves: print &lt;b>what&lt;/b> to the&#xD;
    printer? Do &lt;b>what&lt;/b> with that full database? They do, however, ensure that important ideas aren't forgotten -&#xD;
    ideas that will be described in more detail later in test design.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
    Test ideas are often based on fault&amp;nbsp;models: notions of which faults are plausible in software and how those faults&#xD;
    can best be uncovered. For example, consider boundaries. It's safe to assume the square root function is implemented&#xD;
    something like this:&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;pre>&#xD;
double sqrt(double x) {&#xD;
    if (x &amp;lt; 0)&#xD;
        // signal error&#xD;
    ...&#xD;
&lt;/pre>&#xD;
&lt;/blockquote>&#xD;
&lt;p>&#xD;
    It's also plausible that the &lt;font size=&quot;+0&quot;>&amp;lt;&lt;/font> will be incorrectly typed as &lt;font size=&quot;+0&quot;>&amp;lt;=&lt;/font>.&#xD;
    People often make that kind of mistake, so it's worth checking. The fault cannot be detected by giving &lt;font size=&quot;+0&quot;>x&lt;/font>&#xD;
    the value &lt;font size=&quot;+0&quot;>2&lt;/font>, because both the incorrect expression (&lt;font size=&quot;+0&quot;>x&amp;lt;=0&lt;/font>) and the&#xD;
    correct expression (&lt;font size=&quot;+0&quot;>x&amp;lt;0&lt;/font>) will take the same branch of the &lt;font size=&quot;+0&quot;>if&lt;/font>&#xD;
    statement. Similarly, giving &lt;font size=&quot;+0&quot;>x&lt;/font> the value &lt;font size=&quot;+0&quot;>-5&lt;/font> cannot find the fault.&#xD;
    The only way to find it is to give &lt;font size=&quot;+0&quot;>x&lt;/font> the value &lt;font size=&quot;+0&quot;>0&lt;/font>,&#xD;
    which justifies the second test idea.&#xD;
&lt;/p>&#xD;
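&lt;p>&#xD;
    A minimal sketch of this reasoning in Java follows; the faulty version is hypothetical, written only to show which&#xD;
    input exposes the mistyped comparison:&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;pre>&#xD;
// Correct guard: rejects only negative inputs.&#xD;
static double sqrtCorrect(double x) {&#xD;
    if (x &amp;lt; 0) throw new IllegalArgumentException(&quot;negative input&quot;);&#xD;
    return Math.sqrt(x);&#xD;
}&#xD;
&#xD;
// Hypothetical faulty guard: &amp;lt; mistyped as &amp;lt;=.&#xD;
static double sqrtFaulty(double x) {&#xD;
    if (x &amp;lt;= 0) throw new IllegalArgumentException(&quot;negative input&quot;);&#xD;
    return Math.sqrt(x);&#xD;
}&#xD;
&#xD;
// For x = 2 and x = -5 the two versions behave identically.&#xD;
// Only x = 0 distinguishes them: sqrtCorrect(0.0) returns 0.0,&#xD;
// while sqrtFaulty(0.0) throws.&#xD;
&lt;/pre>&#xD;
&lt;/blockquote>&#xD;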
&lt;p>&#xD;
In this case, the fault model is explicit. In other cases, it's implicit. For example, whenever a program manipulates a&#xD;
linked structure, it's good to test it against a circular one. It's possible that many faults could lead to a&#xD;
mishandled circular structure. For the purposes of testing, they needn't be enumerated - it suffices to know that some&#xD;
fault is likely enough that the test is worth running.&#xD;
&lt;/p>&#xD;
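&lt;p>&#xD;
    As a sketch of that implicit fault model, the test below builds a circular structure and checks that the code under&#xD;
    test detects it. The &lt;font size=&quot;+0&quot;>Node&lt;/font> class and the &lt;font size=&quot;+0&quot;>length&lt;/font> function are&#xD;
    hypothetical, and the expectation (detecting the cycle rather than looping forever) is one plausible contract among&#xD;
    several.&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;pre>&#xD;
// Hypothetical singly linked node and a function under test.&#xD;
class Node { int value; Node next; }&#xD;
&#xD;
@Test public void circularStructureIsDetected() {&#xD;
    Node a = new Node();&#xD;
    Node b = new Node();&#xD;
    a.next = b;&#xD;
    b.next = a;  // make the structure circular&#xD;
&#xD;
    // Expect the cycle to be reported, not an infinite loop.&#xD;
    assertThrows(IllegalStateException.class, () -> length(a));&#xD;
}&#xD;
&lt;/pre>&#xD;
&lt;/blockquote>&#xD;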
&lt;p>&#xD;
The following links provide information about getting test ideas from different kinds of fault models. The first two&#xD;
are explicit fault models; the last uses implicit ones.&#xD;
&lt;/p>&#xD;
&lt;ul>&#xD;
&lt;li>&#xD;
&lt;a class=&quot;elementLinkWithType&quot; href=&quot;./../../../xp/guidances/guidelines/test_ideas_for_booleans_and_boundaries.html&quot; guid=&quot;1.7150344523489172E-305&quot;>Guideline: Test Ideas for Booleans and Boundaries&lt;/a>&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
&lt;a class=&quot;elementLinkWithType&quot; href=&quot;./../../../xp/guidances/guidelines/test_ideas_for_method_calls.html&quot; guid=&quot;8.5657170364036E-306&quot;>Guideline: Test Ideas for Method Calls&lt;/a>&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
&lt;a class=&quot;elementLinkWithType&quot; href=&quot;./../../../xp/guidances/concepts/test-ideas_catalog.html&quot; guid=&quot;1.2384224477983028E-305&quot;>Concept: Test-Ideas Catalog&lt;/a>&#xD;
&lt;/li>&#xD;
&lt;/ul>&#xD;
&lt;p>&#xD;
These fault models can be applied to many different artifacts. For example, the first one describes what to do with&#xD;
Boolean expressions. Such expressions can be found in code, in guard conditions, in statecharts and sequence diagrams,&#xD;
and in natural-language descriptions of method behaviors (such as you might find in a published API).&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
Occasionally it's also helpful to have guidelines for specific artifacts. See &lt;a class=&quot;elementLinkWithType&quot; href=&quot;./../../../xp/guidances/guidelines/test_ideas_for_statechart_and_flow_diagrams.html&quot; guid=&quot;1.0347051690476123E-305&quot;>Guideline: Test Ideas for Statechart and Flow Diagrams&lt;/a>.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
A particular Test-Ideas List might contain test ideas from many fault models, and those fault models could be derived&#xD;
from more than one artifact.&#xD;
&lt;/p>&#xD;
&lt;h3>&#xD;
&lt;a id=&quot;TestDesignUsingTheList&quot; name=&quot;TestDesignUsingTheList&quot;>Test Design Using the List&lt;/a>&#xD;
&lt;/h3>&#xD;
&lt;p>&#xD;
Let's suppose you're designing tests for a method that searches for a string in a sequential collection. It can either&#xD;
obey case or ignore case in its search, and it returns the index of the first match found or -1 if no match is found.&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;pre>&#xD;
int Collection.find(String string, boolean ignoreCase);&#xD;
&lt;/pre>&#xD;
&lt;/blockquote>&#xD;
&lt;p>&#xD;
Here are some test ideas for this method:&#xD;
&lt;/p>&#xD;
&lt;ol>&#xD;
&lt;li>&#xD;
match found in the first position&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
match found in the last position&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
no match found&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
two or more matches found in the collection&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
        case is ignored; match found, but it wouldn't match if case were obeyed&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
case is obeyed; an exact match is found&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
case is obeyed; a string that would have matched if case were ignored is skipped&#xD;
&lt;/li>&#xD;
&lt;/ol>&#xD;
&lt;p>&#xD;
It would be simple to implement these seven tests, one for each test idea. However, different test ideas can be&#xD;
combined into a single test. For example, the following test &lt;i>satisfies&lt;/i> test ideas 2, 6, and 7:&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;p>&#xD;
Setup: collection initialized to [&quot;dawn&quot;, &quot;Dawn&quot;]&lt;br />&#xD;
Invocation: collection.find(&quot;Dawn&quot;, false)&lt;br />&#xD;
Expected result: return value is 1 (it would be 0 if &quot;dawn&quot; were not skipped)&#xD;
&lt;/p>&#xD;
&lt;/blockquote>&#xD;
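&lt;p>&#xD;
    Here is the same combined test, sketched as a JUnit-style test. The &lt;font size=&quot;+0&quot;>Collection&lt;/font> constructor&#xD;
    is an assumption made for the sketch; the &lt;font size=&quot;+0&quot;>find&lt;/font> method and the zero-based indexing are as&#xD;
    described above.&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;pre>&#xD;
// Satisfies test ideas 2, 6, and 7 in a single test.&#xD;
@Test public void caseSensitiveFindSkipsNearMissAndMatchesLastElement() {&#xD;
    Collection collection = new Collection(&quot;dawn&quot;, &quot;Dawn&quot;);&#xD;
    int index = collection.find(&quot;Dawn&quot;, false);  // false: obey case&#xD;
    assertEquals(1, index);  // would be 0 if &quot;dawn&quot; were not skipped&#xD;
}&#xD;
&lt;/pre>&#xD;
&lt;/blockquote>&#xD;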
&lt;p>&#xD;
    Keeping test ideas nonspecific makes them easier to combine.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
It's possible to satisfy all of the test ideas in three tests. Why would three tests that satisfy seven test ideas be&#xD;
better than seven separate tests?&#xD;
&lt;/p>&#xD;
&lt;ul>&#xD;
&lt;li>&#xD;
When you're creating a large number of simple tests, it's common to create test N+1 by copying test N and tweaking&#xD;
it just enough to satisfy the new test idea. The result, especially in more complex software, is that test N+1&#xD;
probably exercises the program in almost the same way as test N. It takes almost exactly the same path through the&#xD;
code.&lt;br />&#xD;
&lt;br />&#xD;
A smaller number of tests, each satisfying several test ideas, doesn't allow a &quot;copy and tweak&quot; approach. Each&#xD;
test will be somewhat different from the last, exercising the code in different ways and taking different&#xD;
paths.&lt;br />&#xD;
&lt;br />&#xD;
Why would that be better? If the Test-Ideas List were complete, with a test idea for every fault in the program,&#xD;
it wouldn't matter how you wrote the tests. But the list is always missing some test ideas that could find bugs. By&#xD;
having each test do very different things from the last one - by adding seemingly unneeded variety - you increase&#xD;
        the chance that one of the tests will stumble over a bug by sheer dumb luck. In effect, fewer but more complex tests&#xD;
        increase the chance that a test will satisfy a test idea you didn't know you needed.&lt;br />&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
Sometimes when you're creating more complex tests, new test ideas come to mind. That happens less often with simple&#xD;
tests, because so much of what you're doing is exactly like the last test, which dulls your mind.&#xD;
&lt;/li>&#xD;
&lt;/ul>&#xD;
&lt;p>&#xD;
    However, there are also reasons not to create complex tests.&#xD;
&lt;/p>&#xD;
&lt;ul>&#xD;
&lt;li>&#xD;
If each test satisfies a single test idea and the test for idea 2 fails, you immediately know the most likely&#xD;
cause: the program doesn't handle a match in the last position. If a test satisfies ideas 2, 6, and 7, then&#xD;
isolating the failure is harder.&lt;br />&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
Complex tests are more difficult to understand and maintain. The intent of the test is less obvious.&lt;br />&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
Complex tests are more difficult to create. Constructing a test that satisfies five test ideas often takes more&#xD;
time than constructing five tests that each satisfy one. Moreover, it's easier to make mistakes - to think you're&#xD;
satisfying all five when you're only satisfying four.&#xD;
&lt;/li>&#xD;
&lt;/ul>&#xD;
&lt;p>&#xD;
In practice, you must find a reasonable balance between complexity and simplicity. For example, the first tests you&#xD;
subject the software to (typically the smoke tests) should be simple, easy to understand and maintain, and intended to&#xD;
    catch the most obvious problems. Later tests should be more complex, but not so complex that they become unmaintainable.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
After you've finished a set of tests, it's good to check them against the characteristic test design mistakes discussed&#xD;
in &lt;a class=&quot;elementLinkWithType&quot; href=&quot;./../../../xp/guidances/concepts/developer_testing.html#TestDesignMistakes&quot; guid=&quot;4.085829182735815E-305&quot;>Concept: Developer Testing&lt;/a>.&#xD;
&lt;/p>&#xD;
&lt;h3>&#xD;
&lt;a id=&quot;UsingTestIdeasBeforeTest&quot; name=&quot;UsingTestIdeasBeforeTest&quot;>Using Test Ideas Before Testing&lt;/a>&#xD;
&lt;/h3>&#xD;
&lt;p>&#xD;
A Test-Ideas List is useful for reviews and inspections of design artifacts. For example, consider this part of a&#xD;
design model showing the association between Department and Employee classes.&#xD;
&lt;/p>&#xD;
&lt;p align=&quot;center&quot;>&#xD;
&lt;img height=&quot;45&quot; alt=&quot;&quot; src=&quot;resources/tstidslst-img1.gif&quot; width=&quot;223&quot; />&#xD;
&lt;/p>&#xD;
&lt;p class=&quot;picturetext&quot;>&#xD;
Figure 1: Association between Department and Employee Classes&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
The rules for creating test ideas from such a model would ask you to consider the case where a department has many&#xD;
employees. By walking through a design and asking &quot;what if, at this point, the department has many employees?&quot;, you&#xD;
might discover design or analysis errors. For example, you might realize that only one employee at a time can be&#xD;
transferred between departments. That might be a problem if the corporation is prone to sweeping reorganizations where&#xD;
many employees need to be transferred.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
    Such faults - cases where a possibility was overlooked - are called &lt;i>faults of omission&lt;/i>. Just as the possibility&#xD;
    was omitted from the design, the tests that would detect these faults have probably been omitted from your testing&#xD;
    effort. For example, see &lt;a class=&quot;elementLinkWithUserText&quot; href=&quot;./../../../xp/guidances/supportingmaterials/xp_and_agile_process_references.html&quot; guid=&quot;6.191633934532389E-306&quot;>[GLA81]&lt;/a>, &lt;a href=&quot;../../referenc.htm#OST84&quot;>[OST84]&lt;/a>, &lt;a href=&quot;../../referenc.htm#BAS87&quot;>[BAS87]&lt;/a>, &lt;a href=&quot;../../referenc.htm#MAR00&quot;>[MAR00]&lt;/a>, and other studies that&#xD;
    show how often faults of omission escape into deployment.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
The role of testing in design activities is discussed further in &lt;a class=&quot;elementLinkWithType&quot; href=&quot;./../../../xp/guidances/concepts/test-first_design.html&quot; guid=&quot;6.556259235358794E-306&quot;>Concept: Test-first Design&lt;/a>.&#xD;
&lt;/p>&#xD;
&lt;h3>&#xD;
&lt;a id=&quot;TestIdeasTraceability&quot; name=&quot;TestIdeasTraceability&quot;>Test Ideas and Traceability&lt;/a>&#xD;
&lt;/h3>&#xD;
&lt;p>&#xD;
Traceability is a matter of tradeoffs. Is its value worth the cost of maintaining it? This question needs to be&#xD;
considered during &lt;a href=&quot;../../activity/ac_tst_dfnasstrcnds.htm&quot;>Activity: Define Assessment and Traceability&#xD;
Needs&lt;/a>.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
When traceability is worthwhile, it's conventional to trace tests back to the artifacts that inspired them. For&#xD;
example, you might have traceability between an API and its tests. If the API changes, you know which tests to change.&#xD;
If the code (that implements the API) changes, you know which tests to run. If a test puzzles you, you can find the API&#xD;
it's intended to test.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
The Test-Ideas List adds another level of traceability. You can trace from a test to the test ideas it satisfies, and&#xD;
then from the test ideas to the original artifact.&#xD;
&lt;/p>&#xD;
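&lt;p>&#xD;
    One way to record that extra level is to tag each test with the ideas it satisfies. The annotation below is a&#xD;
    minimal sketch, not a standard JUnit feature; the idea numbers refer to entries on the Test-Ideas List.&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;pre>&#xD;
// Hypothetical annotation linking a test to Test-Ideas List entries.&#xD;
@Retention(RetentionPolicy.RUNTIME)&#xD;
@interface TestIdea { int[] value(); }&#xD;
&#xD;
@TestIdea({ 2, 6, 7 })  // traceable back to ideas 2, 6, and 7&#xD;
@Test public void caseSensitiveFindSkipsNearMissAndMatchesLastElement() {&#xD;
    // test body as in the earlier sketch&#xD;
}&#xD;
&lt;/pre>&#xD;
&lt;/blockquote></mainDescription>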
</org.eclipse.epf.uma:ContentDescription>