<?xml version="1.0" encoding="UTF-8"?>
<org.eclipse.epf.uma:ContentDescription xmi:version="2.0" xmlns:xmi="http://www.omg.org/XMI" xmlns:org.eclipse.epf.uma="http://www.eclipse.org/epf/uma/1.0.3/uma.ecore" epf:version="1.0.0" xmi:id="-3i1jvKMUGGmAYPw4dHFbEg" name="test-ideas_list,8.834380241450745E-306" guid="-3i1jvKMUGGmAYPw4dHFbEg" changeDate="2006-12-01T15:44:08.749-0800" version="1.0.0">
<mainDescription>&lt;h3&gt;
&lt;a id=&quot;Introduction&quot; name=&quot;Introduction&quot;&gt;Introduction&lt;/a&gt;
&lt;/h3&gt;
&lt;p&gt;
Information used in designing tests is gathered from many places: design models, classifier interfaces, statecharts,
and the code itself. At some point, the information in these source documents must be transformed into executable tests:
&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
specific inputs given to the software under test
&lt;/li&gt;
&lt;li&gt;
in a particular hardware and software configuration
&lt;/li&gt;
&lt;li&gt;
initialized to a known state
&lt;/li&gt;
&lt;li&gt;
with specific results expected
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
It's possible to go directly from source document information to executable tests, but it's often useful to add an
intermediate step. In this step, test ideas are written into a &lt;i&gt;Test-Ideas List&lt;/i&gt;, which is used to create
executable tests.
&lt;/p&gt;
&lt;h3&gt;
&lt;a id=&quot;TestIdeas&quot; name=&quot;TestIdeas&quot;&gt;What are Test Ideas?&lt;/a&gt;
&lt;/h3&gt;
&lt;p&gt;
A test idea (sometimes referred to as a test requirement) is a brief statement about a test that could be performed. As
a simple example, let's consider a function that calculates a square root and come up with some test ideas:
&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
give a number that's barely less than zero as input
&lt;/li&gt;
&lt;li&gt;
give zero as the input
&lt;/li&gt;
&lt;li&gt;
test a number that's a perfect square, like 4 or 16 (is the result exactly 2 or 4?)
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
Each of these ideas could readily be converted into an executable test with exact descriptions of inputs and expected
results.
&lt;/p&gt;
&lt;p&gt;
There are two advantages to this less-specific intermediate form:
&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
test ideas are more reviewable and understandable than complete tests - it's easier to understand the reasoning
behind them
&lt;/li&gt;
&lt;li&gt;
test ideas support more powerful tests, as described later under the heading &lt;a href=&quot;#TestDesignUsingTheList&quot;&gt;Test
Design Using the List&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
The square root examples all describe inputs, but test ideas can describe any of the elements of an executable test.
For example, &quot;print to a LaserJet IIIp&quot; describes an aspect of the test environment to be used for a test, as does
&quot;test with the database full&quot;. These latter test ideas are quite incomplete in themselves: print &lt;b&gt;what&lt;/b&gt; to the
printer? Do &lt;b&gt;what&lt;/b&gt; with that full database? They do, however, ensure that important ideas aren't forgotten;
the missing details are filled in later in test design.
&lt;/p&gt;
&lt;p&gt;
Test ideas are often based on fault&amp;nbsp;models: notions of which faults are plausible in software and how those faults
can best be uncovered. For example, consider boundaries. It's safe to assume the square root function is
implemented something like this:
&lt;/p&gt;
&lt;blockquote&gt;
&lt;pre&gt;
double sqrt(double x) {
    if (x &amp;lt; 0)
        // signal error
    ...
&lt;/pre&gt;
&lt;/blockquote&gt;
&lt;p&gt;
It's also plausible that the &lt;font size=&quot;+0&quot;&gt;&amp;lt;&lt;/font&gt; will be incorrectly typed as &lt;font size=&quot;+0&quot;&gt;&amp;lt;=&lt;/font&gt;.
People often make that kind of mistake, so it's worth checking. The fault cannot be detected with &lt;font
size=&quot;+0&quot;&gt;x&lt;/font&gt; having the value &lt;font size=&quot;+0&quot;&gt;2&lt;/font&gt;, because both the incorrect expression (&lt;font
size=&quot;+0&quot;&gt;x&amp;lt;=0&lt;/font&gt;) and the correct expression (&lt;font size=&quot;+0&quot;&gt;x&amp;lt;0&lt;/font&gt;) will take the same branch of the
&lt;font size=&quot;+0&quot;&gt;if&lt;/font&gt; statement. Similarly, giving &lt;font size=&quot;+0&quot;&gt;x&lt;/font&gt; the value &lt;font size=&quot;+0&quot;&gt;-5&lt;/font&gt;
cannot find the fault. The only way to find it is to give &lt;font size=&quot;+0&quot;&gt;x&lt;/font&gt; the value &lt;font size=&quot;+0&quot;&gt;0&lt;/font&gt;,
which justifies the second test idea.
&lt;/p&gt;
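&lt;p&gt;
A small sketch makes this concrete. The class below (an illustration, not part of the square root function itself)
contrasts the correct guard with its plausible mutation; run with assertions enabled, only the boundary value
&lt;font size=&quot;+0&quot;&gt;0&lt;/font&gt; tells the two apart:
&lt;/p&gt;
&lt;blockquote&gt;
    &lt;pre&gt;
// Illustrative sketch: the intended guard versus the plausible typo.
public class BoundaryFault {
    static boolean correct(double x) { return x &amp;lt; 0; }   // intended: x &amp;lt; 0
    static boolean mutated(double x) { return x &amp;lt;= 0; }  // typo: &amp;lt;= instead of &amp;lt;

    public static void main(String[] args) {
        assert correct(2.0) == mutated(2.0);    // x = 2 takes the same branch either way
        assert correct(-5.0) == mutated(-5.0);  // so does x = -5
        assert correct(0.0) != mutated(0.0);    // only x = 0 exposes the typo
        System.out.println(&quot;only the boundary value 0 distinguishes the guards&quot;);
    }
}
&lt;/pre&gt;
&lt;/blockquote&gt;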
&lt;p&gt;
In this case, the fault model is explicit. In other cases, it's implicit. For example, whenever a program manipulates a
linked structure, it's good to test it against a circular one. It's possible that many faults could lead to a
mishandled circular structure. For the purposes of testing, they needn't be enumerated - it suffices to know that some
fault is likely enough that the test is worth running.
&lt;/p&gt;
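&lt;p&gt;
As an illustration of that implicit fault model, the sketch below builds a circular structure and checks that
traversal code notices it rather than looping forever; the &lt;font size=&quot;+0&quot;&gt;Node&lt;/font&gt; type and the cycle check are
assumptions made for the example:
&lt;/p&gt;
&lt;blockquote&gt;
    &lt;pre&gt;
// Illustrative sketch: exercise linked-structure code against a cycle.
public class CircularStructureCheck {
    static final class Node { Node next; }

    // Floyd's tortoise-and-hare check: true if the chain loops back on itself.
    static boolean hasCycle(Node head) {
        Node slow = head, fast = head;
        while (fast != null &amp;amp;&amp;amp; fast.next != null) {
            slow = slow.next;
            fast = fast.next.next;
            if (slow == fast) return true;
        }
        return false;
    }

    public static void main(String[] args) {
        Node a = new Node(), b = new Node();
        a.next = b;
        b.next = a;              // make the structure circular
        assert hasCycle(a);      // the test idea: a cycle must be handled, not spun on
        b.next = null;
        assert !hasCycle(a);     // and the ordinary, acyclic case still passes
    }
}
&lt;/pre&gt;
&lt;/blockquote&gt;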
&lt;p&gt;
The following links provide information about getting test ideas from different kinds of fault models. The first two
are explicit fault models; the last uses implicit ones.
&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
&lt;a class=&quot;elementLinkWithType&quot;
href=&quot;./../../../xp/guidances/guidelines/test_ideas_for_booleans_and_boundaries,1.7150344523489172E-305.html&quot;
guid=&quot;1.7150344523489172E-305&quot;&gt;Guideline: Test Ideas for Booleans and Boundaries&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a class=&quot;elementLinkWithType&quot;
href=&quot;./../../../xp/guidances/guidelines/test_ideas_for_method_calls,8.5657170364036E-306.html&quot;
guid=&quot;8.5657170364036E-306&quot;&gt;Guideline: Test Ideas for Method Calls&lt;/a&gt;
&lt;/li&gt;
&lt;li&gt;
&lt;a class=&quot;elementLinkWithType&quot;
href=&quot;./../../../xp/guidances/concepts/test-ideas_catalog,1.2384224477983028E-305.html&quot;
guid=&quot;1.2384224477983028E-305&quot;&gt;Concept: Test-Ideas Catalog&lt;/a&gt;
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
These fault models can be applied to many different artifacts. For example, the first one describes what to do with
Boolean expressions. Such expressions can be found in code, in guard conditions, in statecharts and sequence diagrams,
and in natural-language descriptions of method behaviors (such as you might find in a published API).
&lt;/p&gt;
&lt;p&gt;
Occasionally it's also helpful to have guidelines for specific artifacts. See &lt;a class=&quot;elementLinkWithType&quot;
href=&quot;./../../../xp/guidances/guidelines/test_ideas_for_statechart_and_flow_diagrams,1.0347051690476123E-305.html&quot;
guid=&quot;1.0347051690476123E-305&quot;&gt;Guideline: Test Ideas for Statechart and Flow Diagrams&lt;/a&gt;.
&lt;/p&gt;
&lt;p&gt;
A particular Test-Ideas List might contain test ideas from many fault models, and those fault models could be derived
from more than one artifact.
&lt;/p&gt;
&lt;h3&gt;
&lt;a id=&quot;TestDesignUsingTheList&quot; name=&quot;TestDesignUsingTheList&quot;&gt;Test Design Using the List&lt;/a&gt;
&lt;/h3&gt;
&lt;p&gt;
Let's suppose you're designing tests for a method that searches for a string in a sequential collection. It can either
obey case or ignore case in its search, and it returns the index of the first match found or -1 if no match is found.
&lt;/p&gt;
&lt;blockquote&gt;
&lt;pre&gt;
int Collection.find(String string, boolean ignoreCase);
&lt;/pre&gt;
&lt;/blockquote&gt;
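&lt;p&gt;
One plausible reading of that signature is sketched below; the list-backed collection and the method body are
assumptions made for the examples that follow, not an implementation given by the source:
&lt;/p&gt;
&lt;blockquote&gt;
    &lt;pre&gt;
import java.util.List;

// A sketch of one plausible implementation; only the signature is given above.
class Collection {
    private final List&amp;lt;String&amp;gt; items;

    Collection(List&amp;lt;String&amp;gt; items) { this.items = items; }

    // Returns the index of the first match, or -1 if no match is found.
    int find(String string, boolean ignoreCase) {
        for (int i = 0; i &amp;lt; items.size(); i++) {
            String item = items.get(i);
            boolean match = ignoreCase ? item.equalsIgnoreCase(string)
                                       : item.equals(string);
            if (match) return i;
        }
        return -1;
    }
}
&lt;/pre&gt;
&lt;/blockquote&gt;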
&lt;p&gt;
Here are some test ideas for this method:
&lt;/p&gt;
&lt;ol&gt;
&lt;li&gt;
match found in the first position
&lt;/li&gt;
&lt;li&gt;
match found in the last position
&lt;/li&gt;
&lt;li&gt;
no match found
&lt;/li&gt;
&lt;li&gt;
two or more matches found in the collection
&lt;/li&gt;
&lt;li&gt;
case is ignored; match found, but it wouldn't match if case were obeyed
&lt;/li&gt;
&lt;li&gt;
case is obeyed; an exact match is found
&lt;/li&gt;
&lt;li&gt;
case is obeyed; a string that would have matched if case were ignored is skipped
&lt;/li&gt;
&lt;/ol&gt;
&lt;p&gt;
It would be simple to implement these seven tests, one for each test idea. However, different test ideas can be
combined into a single test. For example, the following test &lt;i&gt;satisfies&lt;/i&gt; test ideas 2, 6, and 7:
&lt;/p&gt;
&lt;blockquote&gt;
&lt;p&gt;
Setup: collection initialized to [&quot;dawn&quot;, &quot;Dawn&quot;]&lt;br /&gt;
Invocation: collection.find(&quot;Dawn&quot;, false)&lt;br /&gt;
Expected result: return value is 1 (it would be 0 if &quot;dawn&quot; were not skipped)
&lt;/p&gt;
&lt;/blockquote&gt;
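&lt;p&gt;
Rendered as a JUnit-style test against the &lt;font size=&quot;+0&quot;&gt;Collection&lt;/font&gt; sketch above, that combined test might
look like this:
&lt;/p&gt;
&lt;blockquote&gt;
    &lt;pre&gt;
import static org.junit.Assert.assertEquals;
import java.util.Arrays;
import org.junit.Test;

public class FindTest {
    @Test
    public void caseObeyedMatchInLastPositionSkippingNearMiss() {
        Collection collection = new Collection(Arrays.asList(&quot;dawn&quot;, &quot;Dawn&quot;));
        // false = obey case: &quot;dawn&quot; at index 0 is skipped (idea 7), the exact
        // match &quot;Dawn&quot; is found (idea 6) in the last position (idea 2).
        assertEquals(1, collection.find(&quot;Dawn&quot;, false));
    }
}
&lt;/pre&gt;
&lt;/blockquote&gt;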
&lt;p&gt;
Making test ideas nonspecific makes them easier to combine.
&lt;/p&gt;
&lt;p&gt;
It's possible to satisfy all of the test ideas in three tests. Why would three tests that satisfy seven test ideas be
better than seven separate tests?
&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
When you're creating a large number of simple tests, it's common to create test N+1 by copying test N and tweaking
it just enough to satisfy the new test idea. The result, especially in more complex software, is that test N+1
probably exercises the program in almost the same way as test N. It takes almost exactly the same path through the
code.&lt;br /&gt;
&lt;br /&gt;
A smaller number of tests, each satisfying several test ideas, doesn't allow a &quot;copy and tweak&quot; approach. Each
test will be somewhat different from the last, exercising the code in different ways and taking different
paths.&lt;br /&gt;
&lt;br /&gt;
Why would that be better? If the Test-Ideas List were complete, with a test idea for every fault in the program,
it wouldn't matter how you wrote the tests. But the list is always missing some test ideas that could find bugs. By
having each test do very different things from the last one - by adding seemingly unneeded variety - you increase
the chance that one of the tests will stumble over a bug by sheer dumb luck. In effect, fewer, more complex tests
increase the chance that a test will satisfy a test idea that you didn't know you needed.&lt;br /&gt;
&lt;/li&gt;
&lt;li&gt;
Sometimes when you're creating more complex tests, new test ideas come to mind. That happens less often with simple
tests, because so much of what you're doing is exactly like the last test, which dulls your mind.
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
However, there are reasons for not creating complex tests.
&lt;/p&gt;
&lt;ul&gt;
&lt;li&gt;
If each test satisfies a single test idea and the test for idea 2 fails, you immediately know the most likely
cause: the program doesn't handle a match in the last position. If a test satisfies ideas 2, 6, and 7, then
isolating the failure is harder.&lt;br /&gt;
&lt;/li&gt;
&lt;li&gt;
Complex tests are more difficult to understand and maintain. The intent of the test is less obvious.&lt;br /&gt;
&lt;/li&gt;
&lt;li&gt;
Complex tests are more difficult to create. Constructing a test that satisfies five test ideas often takes more
time than constructing five tests that each satisfy one. Moreover, it's easier to make mistakes - to think you're
satisfying all five when you're only satisfying four.
&lt;/li&gt;
&lt;/ul&gt;
&lt;p&gt;
In practice, you must find a reasonable balance between complexity and simplicity. For example, the first tests you
subject the software to (typically the smoke tests) should be simple, easy to understand and maintain, and intended to
catch the most obvious problems. Later tests should be more complex, but not so complex that they become unmaintainable.
&lt;/p&gt;
&lt;p&gt;
After you've finished a set of tests, it's good to check them against the characteristic test design mistakes discussed
in &lt;a class=&quot;elementLinkWithType&quot;
href=&quot;./../../../xp/guidances/concepts/developer_testing,4.085829182735815E-305.html#TestDesignMistakes&quot;
guid=&quot;4.085829182735815E-305&quot;&gt;Concept: Developer Testing&lt;/a&gt;.
&lt;/p&gt;
&lt;h3&gt;
&lt;a id=&quot;UsingTestIdeasBeforeTest&quot; name=&quot;UsingTestIdeasBeforeTest&quot;&gt;Using Test Ideas Before Testing&lt;/a&gt;
&lt;/h3&gt;
&lt;p&gt;
A Test-Ideas List is useful for reviews and inspections of design artifacts. For example, consider this part of a
design model showing the association between Department and Employee classes.
&lt;/p&gt;
&lt;p align=&quot;center&quot;&gt;
&lt;img height=&quot;45&quot; alt=&quot;&quot; src=&quot;resources/tstidslst-img1.gif&quot; width=&quot;223&quot; /&gt;
&lt;/p&gt;
&lt;p class=&quot;picturetext&quot;&gt;
Figure 1: Association between Department and Employee Classes
&lt;/p&gt;
&lt;p&gt;
The rules for creating test ideas from such a model would ask you to consider the case where a department has many
employees. By walking through a design and asking &quot;what if, at this point, the department has many employees?&quot;, you
might discover design or analysis errors. For example, you might realize that only one employee at a time can be
transferred between departments. That might be a problem if the corporation is prone to sweeping reorganizations where
many employees need to be transferred.
&lt;/p&gt;
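&lt;p&gt;
Sketched as code, the overlooked possibility might surface as an interface that only supports single transfers; the
method names here are illustrative, not taken from the model:
&lt;/p&gt;
&lt;blockquote&gt;
    &lt;pre&gt;
// Illustrative sketch of the design flaw the walkthrough can expose.
interface Employee { }

interface Department {
    // The design as reviewed: one employee moves at a time. A sweeping
    // reorganization then becomes a long loop of single transfers.
    void transfer(Employee employee, Department destination);

    // The &quot;many employees&quot; test idea suggests the bulk operation the
    // reviewed design was missing:
    void transferAll(java.util.List&amp;lt;Employee&amp;gt; employees, Department destination);
}
&lt;/pre&gt;
&lt;/blockquote&gt;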
&lt;p&gt;
Such faults, cases where a possibility was overlooked, are called &lt;i&gt;faults of omission&lt;/i&gt;. Just as the possibility
itself was overlooked, the tests that would detect these faults have probably been omitted from your testing effort.
For example, see &lt;a class=&quot;elementLinkWithUserText&quot;
href=&quot;./../../../xp/guidances/supportingmaterials/xp_and_agile_process_references,6.191633934532389E-306.html&quot;
guid=&quot;6.191633934532389E-306&quot;&gt;[GLA81]&lt;/a&gt;, &lt;a href=&quot;../../referenc.htm#OST84&quot;&gt;[OST84]&lt;/a&gt;, &lt;a
href=&quot;../../referenc.htm#BAS87&quot;&gt;[BAS87]&lt;/a&gt;, &lt;a href=&quot;../../referenc.htm#MAR00&quot;&gt;[MAR00]&lt;/a&gt;, and other studies that
show how often faults of omission escape into deployment.
&lt;/p&gt;
&lt;p&gt;
The role of testing in design activities is discussed further in &lt;a class=&quot;elementLinkWithType&quot;
href=&quot;./../../../xp/guidances/concepts/test-first_design,6.556259235358794E-306.html&quot;
guid=&quot;6.556259235358794E-306&quot;&gt;Concept: Test-first Design&lt;/a&gt;.
&lt;/p&gt;
&lt;h3&gt;
&lt;a id=&quot;TestIdeasTraceability&quot; name=&quot;TestIdeasTraceability&quot;&gt;Test Ideas and Traceability&lt;/a&gt;
&lt;/h3&gt;
&lt;p&gt;
Traceability is a matter of tradeoffs. Is its value worth the cost of maintaining it? This question needs to be
considered during &lt;a href=&quot;../../activity/ac_tst_dfnasstrcnds.htm&quot;&gt;Activity: Define Assessment and Traceability
Needs&lt;/a&gt;.
&lt;/p&gt;
&lt;p&gt;
When traceability is worthwhile, it's conventional to trace tests back to the artifacts that inspired them. For
example, you might have traceability between an API and its tests. If the API changes, you know which tests to change.
If the code (that implements the API) changes, you know which tests to run. If a test puzzles you, you can find the API
it's intended to test.
&lt;/p&gt;
&lt;p&gt;
The Test-Ideas List adds another level of traceability. You can trace from a test to the test ideas it satisfies, and
then from the test ideas to the original artifact.
&lt;/p&gt;
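&lt;p&gt;
One lightweight way to record that extra level is sketched below as a Java annotation; the annotation and the idea
numbering are illustrative assumptions, not a prescribed mechanism:
&lt;/p&gt;
&lt;blockquote&gt;
    &lt;pre&gt;
import java.lang.annotation.*;

// Sketch: tag each test with the identifiers of the test ideas it satisfies,
// so a failing test traces back to its ideas, and from there to the artifact.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface SatisfiesTestIdeas {
    int[] value();  // numbers from the Test-Ideas List
}

// Usage, for the combined find() test shown earlier:
//
//     @SatisfiesTestIdeas({ 2, 6, 7 })
//     @Test
//     public void caseObeyedMatchInLastPositionSkippingNearMiss() { ... }
&lt;/pre&gt;
&lt;/blockquote&gt;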
&lt;br /&gt;
&lt;br /&gt;</mainDescription>
</org.eclipse.epf.uma:ContentDescription>