<?xml version="1.0" encoding="UTF-8"?>
<org.eclipse.epf.uma:ContentDescription xmi:version="2.0"
xmlns:xmi="http://www.omg.org/XMI" xmlns:org.eclipse.epf.uma="http://www.eclipse.org/epf/uma/1.0.5/uma.ecore"
xmlns:epf="http://www.eclipse.org/epf" epf:version="1.5.0" xmi:id="-wuu2cNRUPlrBuaO0OdzLFg"
name=",_ByOd4O6pEduvoopEslG-4g" guid="-wuu2cNRUPlrBuaO0OdzLFg" changeDate="2007-04-20T15:29:04.687-0700"
version="1.0.0">
<mainDescription>&lt;h3>&#xD;
&lt;a id=&quot;DeveloperTestingPitfalls&quot; name=&quot;DeveloperTestingPitfalls&quot;>&lt;/a>Pitfalls Getting Started with Developer Testing&#xD;
&lt;/h3>&#xD;
&lt;p>&#xD;
Many developers who begin trying to do a substantially more thorough job of testing give up the effort shortly&#xD;
thereafter. They find that it does not seem to be yielding value. Further, some developers who begin well with&#xD;
developer testing find that they've created an unmaintainable test suite that is eventually abandoned.&#xD;
&lt;/p>&#xD;
&lt;h4>&#xD;
Establish expectations&#xD;
&lt;/h4>&#xD;
&lt;p>&#xD;
Those who find developer testing rewarding do it. Those who view it as a chore find ways to avoid it. This is simply in&#xD;
the nature of most developers in most industries, and treating it as a shameful lack of discipline hasn't historically&#xD;
been successful. Therefore, as a developer you should expect testing to be rewarding and do what it takes to make it&#xD;
rewarding.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
Ideal developer testing follows a very tight edit-test loop. You make a small change to the product, such as adding a&#xD;
new method to a class, then you immediately rerun your tests. If any test breaks, you know exactly what code is the&#xD;
cause. This easy, steady pace of development is the greatest reward of developer testing. A long debugging session&#xD;
should be exceptional.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
Because it's not unusual for a change made in one class to break something in another, you should expect to rerun not&#xD;
just the changed class's tests, but many tests. Ideally, you rerun the complete test suite for your component many&#xD;
times per hour. Every time you make a significant change, you rerun the suite, watch the results, and either proceed to&#xD;
the next change or fix the last change. Expect to spend some effort making that rapid feedback possible.&#xD;
&lt;/p>&#xD;
&lt;h4>&#xD;
Automate your tests&#xD;
&lt;/h4>&#xD;
&lt;p>&#xD;
Running tests often is not practical if tests are manual. For some components, automated tests are easy. An example&#xD;
would be an in-memory database. It communicates to its clients through an API and has no other interface to the outside&#xD;
world. Tests for it would look something like this:&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;pre>&#xD;
/* Check that a key can be added at most once. */&#xD;
// Setup&#xD;
Database db = new Database();&#xD;
db.add(&quot;key1&quot;, &quot;value1&quot;);&#xD;
// Test&#xD;
boolean result = db.add(&quot;key1&quot;, &quot;another value&quot;);&#xD;
expect(result == false);&#xD;
&lt;/pre>&#xD;
&lt;/blockquote>&#xD;
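&lt;p>&#xD;
As a concrete illustration, the pseudocode above can be fleshed out into a self-contained test. The &lt;font size=&quot;+0&quot;>Database&lt;/font> class and the &lt;font size=&quot;+0&quot;>expect&lt;/font> helper below are stand-ins invented for this sketch, not a real API:&#xD;
&lt;/p>&#xD;

```java
import java.util.HashMap;

// Minimal runnable sketch of the test above. Database and expect()
// are invented stand-ins for illustration, not a real API.
public class DatabaseTest {
    static class Database {
        private final HashMap entries = new HashMap();

        // Returns false when the key is already present.
        boolean add(String key, String value) {
            if (entries.containsKey(key)) {
                return false;
            }
            entries.put(key, value);
            return true;
        }
    }

    static void expect(boolean condition) {
        if (!condition) {
            throw new AssertionError("check failed");
        }
    }

    public static void main(String[] args) {
        // Setup
        Database db = new Database();
        db.add("key1", "value1");
        // Test: adding the same key again must be rejected.
        boolean result = db.add("key1", "another value");
        expect(result == false);
        System.out.println("ok");
    }
}
```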
&lt;p>&#xD;
The tests are different from ordinary client code in only one way: instead of believing the results of API calls, they&#xD;
check. If the API makes client code easy to write, it makes test code easy to write. If the test code is &lt;i>not&lt;/i>&#xD;
easy to write, you've received an early warning that the API could be improved. Test-first design is thus consistent&#xD;
with the iterative processes' focus on addressing important risks early.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
The more tightly connected the component is to the outside world, however, the harder it will be to test. There are two&#xD;
common cases: graphical user interfaces and back-end components.&#xD;
&lt;/p>&#xD;
&lt;h5>&#xD;
Graphical user interfaces&#xD;
&lt;/h5>&#xD;
&lt;p>&#xD;
Suppose the database in the example above receives its data via a callback from a user-interface object. The callback&#xD;
is invoked when the user fills in some text fields and pushes a button. Testing this by manually filling in the fields&#xD;
and pushing the button isn't something you want to do many times an hour. You must arrange a way to deliver the input&#xD;
under programmatic control, typically by &quot;pushing&quot; the button in code.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
Pushing the button causes some code in the component to be executed. Most likely, that code changes the state of some&#xD;
user-interface objects. So you must also arrange a way to query those objects programmatically.&#xD;
&lt;/p>&#xD;
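&lt;p>&#xD;
One way to arrange this, sketched below with an invented &lt;font size=&quot;+0&quot;>SaveForm&lt;/font> class rather than a real GUI toolkit, is to give the form a programmatic &quot;click&quot; method and to expose the widget state that tests need to query:&#xD;
&lt;/p>&#xD;

```java
// Hypothetical sketch (no real GUI toolkit): the form exposes its state
// and a programmatic click so tests can drive it without a human.
public class SaveFormTest {
    static class SaveForm {
        String keyField = "";
        String valueField = "";
        String statusLabel = "";

        // Simulates the user pressing the Save button.
        void clickSave() {
            if (keyField.isEmpty()) {
                statusLabel = "error: key required";
            } else {
                statusLabel = "saved";
            }
        }
    }

    public static void main(String[] args) {
        SaveForm form = new SaveForm();
        // Fill in the fields under programmatic control...
        form.keyField = "key1";
        form.valueField = "value1";
        // ..."push" the button in code...
        form.clickSave();
        // ...then query the UI state programmatically.
        if (!form.statusLabel.equals("saved")) {
            throw new AssertionError("unexpected status: " + form.statusLabel);
        }
        System.out.println("ok");
    }
}
```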
&lt;h5>&#xD;
Back-end components&#xD;
&lt;/h5>&#xD;
&lt;p>&#xD;
Suppose the component under test doesn't implement a database. Instead, it's a wrapper around a real, on-disk database.&#xD;
Testing against that real database might be difficult. It might be hard to install and configure. Licenses for it might&#xD;
be expensive. The database might slow down the tests enough that you're not inclined to run them often. In such cases,&#xD;
it's worthwhile to &quot;stub out&quot; the database with a simpler component that does just enough to support the tests.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
Stubs are also useful when a component that your component talks to isn't ready yet. You don't want your testing to&#xD;
wait on someone else's code.&#xD;
&lt;/p>&#xD;
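&lt;p>&#xD;
A minimal sketch of this stubbing pattern, assuming an invented &lt;font size=&quot;+0&quot;>StoreClient&lt;/font> interface between the component and the database, might look like this:&#xD;
&lt;/p>&#xD;

```java
import java.util.HashMap;

// Sketch of stubbing out a back-end store. StoreClient is a hypothetical
// interface; in production it would be implemented by a wrapper around
// the real on-disk database.
public class StubExample {
    interface StoreClient {
        void put(String key, String value);
        String get(String key);
    }

    // In-memory stub: just enough behavior to support the tests.
    static class StubStore implements StoreClient {
        private final HashMap map = new HashMap();
        public void put(String key, String value) { map.put(key, value); }
        public String get(String key) { return (String) map.get(key); }
    }

    // The component under test depends only on the interface.
    static class AuditLog {
        private final StoreClient store;
        AuditLog(StoreClient store) { this.store = store; }
        void record(String id, String event) { store.put(id, event); }
        String lookup(String id) { return store.get(id); }
    }

    public static void main(String[] args) {
        AuditLog log = new AuditLog(new StubStore());
        log.record("42", "login");
        if (!"login".equals(log.lookup("42"))) {
            throw new AssertionError("stub round-trip failed");
        }
        System.out.println("ok");
    }
}
```

&lt;p>&#xD;
Because the component depends only on the interface, the same tests can later run against the real database wrapper.&#xD;
&lt;/p>&#xD;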
&lt;h4>&#xD;
Don't write your own tools&#xD;
&lt;/h4>&#xD;
&lt;p>&#xD;
Developer testing seems pretty straightforward. You set up some objects, make a call through an API, check the result,&#xD;
and announce a test failure if the results aren't as expected. It's also convenient to have some way to group tests so&#xD;
that they can be run individually or as complete suites. Tools that support those requirements are called &lt;i>test&#xD;
frameworks&lt;/i>.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
Developer testing &lt;b>is&lt;/b> straightforward, and the requirements for test frameworks are not complicated. If, however,&#xD;
you yield to the temptation of writing your own test framework, you'll spend much more time tinkering with the&#xD;
framework than you probably expect. There are many test frameworks available, both commercial and open source, and&#xD;
there's no reason not to use one of those.&#xD;
&lt;/p>&#xD;
&lt;h4>&#xD;
Do create support code&#xD;
&lt;/h4>&#xD;
&lt;p>&#xD;
Test code tends to be repetitive. It's common to see sequences of code like this:&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;pre>&#xD;
// null name not allowed&#xD;
retval = o.createName(&quot;&quot;); &#xD;
expect(retval == null);&#xD;
// leading spaces not allowed&#xD;
retval = o.createName(&quot; l&quot;); &#xD;
expect(retval == null);&#xD;
// trailing spaces not allowed&#xD;
retval = o.createName(&quot;name &quot;); &#xD;
expect(retval == null);&#xD;
// first character may not be numeric&#xD;
retval = o.createName(&quot;5alpha&quot;); &#xD;
expect(retval == null);&#xD;
&lt;/pre>&#xD;
&lt;/blockquote>&#xD;
&lt;p>&#xD;
This code is created by copying one check, pasting it, then editing it to make another check.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
The danger here is twofold. If the interface changes, much editing will have to be done. (In more complicated cases, a&#xD;
simple global replacement won't suffice.) Also, if the code is at all complicated, the intent of the test can be lost&#xD;
amid all the text.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
When you find yourself repeating yourself, seriously consider factoring out the repetition into support code. Even&#xD;
though the code above is a simple example, it's more readable and maintainable if written like this:&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;pre>&#xD;
void expectNameRejected(MyClass o, String s) {&#xD;
Object retval = o.createName(s);&#xD;
expect(retval == null);&#xD;
}&#xD;
...&#xD;
// null name not allowed&#xD;
expectNameRejected(o, &quot;&quot;); &#xD;
// leading spaces not allowed.&#xD;
expectNameRejected(o, &quot; l&quot;); &#xD;
// trailing spaces not allowed.&#xD;
expectNameRejected(o, &quot;name &quot;); &#xD;
// first character may not be numeric.&#xD;
expectNameRejected(o, &quot;5alpha&quot;); &#xD;
&lt;/pre>&#xD;
&lt;/blockquote>&#xD;
&lt;p>&#xD;
Developers writing tests often err on the side of too much copying-and-pasting. If you suspect yourself of that&#xD;
tendency, it's useful to consciously err in the other direction. Resolve that you will strip your code of all duplicate&#xD;
text.&#xD;
&lt;/p>&#xD;
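&lt;p>&#xD;
For completeness, here is one way the factored-out check could be made runnable. The &lt;font size=&quot;+0&quot;>NameFactory&lt;/font> class and its validation rules are invented for this sketch:&#xD;
&lt;/p>&#xD;

```java
// Runnable sketch of the factored-out check. NameFactory and its rules
// are invented for illustration; the point is the shared helper.
public class NameTest {
    static class NameFactory {
        Object createName(String s) {
            if (s.isEmpty()) return null;                    // null name not allowed
            if (s.startsWith(" ")) return null;              // leading spaces not allowed
            if (s.endsWith(" ")) return null;                // trailing spaces not allowed
            if (Character.isDigit(s.charAt(0))) return null; // no leading digit
            return s;
        }
    }

    static void expectNameRejected(NameFactory o, String s) {
        if (o.createName(s) != null) {
            throw new AssertionError("should have rejected: [" + s + "]");
        }
    }

    public static void main(String[] args) {
        NameFactory o = new NameFactory();
        expectNameRejected(o, "");       // null name not allowed
        expectNameRejected(o, " l");     // leading spaces not allowed
        expectNameRejected(o, "name ");  // trailing spaces not allowed
        expectNameRejected(o, "5alpha"); // first character may not be numeric
        System.out.println("ok");
    }
}
```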
&lt;h4>&#xD;
Write the tests first&#xD;
&lt;/h4>&#xD;
&lt;p>&#xD;
Writing the tests after the code is a chore. The urge is to rush through it, to finish up and move on. Writing tests&#xD;
before the code makes testing part of a positive feedback loop. As you implement more code, you see more tests passing&#xD;
until finally all the tests pass and you're done. People who write tests first seem to be more successful, and it takes&#xD;
no more time. For more on putting tests first, see &lt;a class=&quot;elementLinkWithType&quot; href=&quot;./../../../openup/guidances/guidelines/test_first_design_21C77ADF.html&quot; guid=&quot;_0Y6kUMlgEdmt3adZL5Dmdw&quot;>Guideline: Test-first Design&lt;/a>.&#xD;
&lt;/p>&#xD;
&lt;h4>&#xD;
Keep the tests understandable&#xD;
&lt;/h4>&#xD;
&lt;p>&#xD;
You should expect that you, or someone else, will have to modify the tests later. A typical situation is that a later&#xD;
iteration calls for a change to the component's behavior. As a simple example, suppose the component once declared a&#xD;
square root method like this:&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;p>&#xD;
&lt;font size=&quot;+0&quot;>double sqrt(double x);&lt;/font>&#xD;
&lt;/p>&#xD;
&lt;/blockquote>&#xD;
&lt;p>&#xD;
In that version, a negative argument caused &lt;font size=&quot;+0&quot;>sqrt&lt;/font> to return NaN (&quot;not a number&quot; from the IEEE&#xD;
754-1985 &lt;i>Standard for Binary Floating-Point Arithmetic&lt;/i>). In the new iteration, the square root method will&#xD;
accept negative numbers and return a complex result:&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;p>&#xD;
&lt;font size=&quot;+0&quot;>Complex sqrt(double x);&lt;/font>&#xD;
&lt;/p>&#xD;
&lt;/blockquote>&#xD;
&lt;p>&#xD;
Old tests for &lt;font size=&quot;+0&quot;>sqrt&lt;/font> will have to change. That means understanding what they do, and updating them&#xD;
so that they work with the new &lt;font size=&quot;+0&quot;>sqrt&lt;/font>. When updating tests, you must take care not to destroy&#xD;
their bug-finding power. One way that sometimes happens is this:&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;pre>&#xD;
void testSQRT () {&#xD;
// Update these tests for Complex &#xD;
// when I have time -- bem&#xD;
/*&#xD;
double result = sqrt(0.0);&#xD;
...&#xD;
*/&#xD;
}&#xD;
&lt;/pre>&#xD;
&lt;/blockquote>&#xD;
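&lt;p>&#xD;
A better response is to update the old checks so they keep their bug-finding power under the new signature. In this sketch, &lt;font size=&quot;+0&quot;>Complex&lt;/font> and the new &lt;font size=&quot;+0&quot;>sqrt&lt;/font> are minimal stand-ins for the real product code:&#xD;
&lt;/p>&#xD;

```java
// Sketch of updating the old sqrt tests instead of commenting them out.
// Complex and sqrt are minimal stand-ins invented for illustration.
public class SqrtTest {
    static class Complex {
        final double re;
        final double im;
        Complex(double re, double im) { this.re = re; this.im = im; }
    }

    // New-iteration sqrt: a negative input yields an imaginary result.
    static Complex sqrt(double x) {
        if (x >= 0) {
            return new Complex(Math.sqrt(x), 0.0);
        }
        return new Complex(0.0, Math.sqrt(-x));
    }

    static void expect(boolean condition) {
        if (!condition) throw new AssertionError("check failed");
    }

    public static void main(String[] args) {
        // The old checks survive, re-expressed against the new return type...
        expect(sqrt(0.0).re == 0.0);
        expect(sqrt(4.0).re == 2.0);
        // ...and the old "negative input" case is still tested, now with a
        // defined complex result instead of NaN.
        expect(sqrt(-4.0).re == 0.0);
        expect(sqrt(-4.0).im == 2.0);
        System.out.println("ok");
    }
}
```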
&lt;p>&#xD;
Other ways are more subtle: the tests are changed so that they actually run, but they no longer test what they were&#xD;
originally intended to test. The end result, over many iterations, can be a test suite that is too weak to catch many&#xD;
bugs. This is sometimes called &quot;test suite decay&quot;. A decayed suite will be abandoned, because it's not worth the&#xD;
upkeep.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
Test suite decay is less likely in the direct tests for &lt;font size=&quot;+0&quot;>sqrt&lt;/font> than in indirect tests. There will&#xD;
be code that calls &lt;font size=&quot;+0&quot;>sqrt&lt;/font>. That code will have tests. When &lt;font size=&quot;+0&quot;>sqrt&lt;/font> changes,&#xD;
some of those tests will fail. The person who changes &lt;font size=&quot;+0&quot;>sqrt&lt;/font> will probably have to change those&#xD;
tests. Because that person is less familiar with them, and because their relationship to the change is less clear,&#xD;
they're more likely to weaken the tests in the process of making them pass.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
When you're creating support code for tests (as urged above), be careful: the support code should clarify, not obscure,&#xD;
the purpose of the tests that use it. A common complaint about object-oriented programs is that there's no one place&#xD;
where anything's done. If you look at any one method, all you discover is that it forwards its work somewhere else.&#xD;
Such a structure has advantages, but it makes it harder for new people to understand the code. Unless they make an&#xD;
effort, their changes are likely to be incorrect or to make the code even more complicated and fragile. The same is&#xD;
true of test code, except that later maintainers are even less likely to take due care. You must head off the problem&#xD;
by writing understandable tests.&#xD;
&lt;/p>&#xD;
&lt;h4>&#xD;
Match the test structure to the product structure&#xD;
&lt;/h4>&#xD;
&lt;p>&#xD;
Suppose someone has inherited your component. They need to change a part of it. They may want to examine the old tests&#xD;
to help them in their new design. They want to update the old tests before writing the code (test-first design).&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
All those good intentions will go by the wayside if they can't find the appropriate tests. What they'll do is make the&#xD;
change, see what tests fail, then fix those. That will contribute to test suite decay.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
For that reason, it's important that the test suite be well structured, and that the location of tests be predictable&#xD;
from the structure of the product. Most usually, developers arrange tests in a parallel hierarchy, with one test class&#xD;
per product class. So if someone is changing a class named &lt;font size=&quot;+0&quot;>Log&lt;/font>, they know the test class is&#xD;
&lt;font size=&quot;+0&quot;>TestLog&lt;/font>, and they know where the source file can be found.&#xD;
&lt;/p>&#xD;
&lt;h4>&#xD;
Let tests violate encapsulation&#xD;
&lt;/h4>&#xD;
&lt;p>&#xD;
You might limit your tests to interacting with your component exactly as client code does, through the same interface&#xD;
that client code uses. However, this has disadvantages. Suppose you're testing a simple class that maintains a doubly&#xD;
linked list that initially holds two elements, &lt;font size=&quot;+0&quot;>a&lt;/font> and &lt;font size=&quot;+0&quot;>z&lt;/font>.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
In particular, you're testing the &lt;font size=&quot;+0&quot;>DoublyLinkedList.insertBefore(Object existing, Object&#xD;
newObject)&lt;/font> method. In one of your tests, you want to insert a new element &lt;font size=&quot;+0&quot;>m&lt;/font> between&#xD;
&lt;font size=&quot;+0&quot;>a&lt;/font> and &lt;font size=&quot;+0&quot;>z&lt;/font>, then check that it's been inserted successfully, giving the&#xD;
three-element list &lt;font size=&quot;+0&quot;>a, m, z&lt;/font>.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
It checks the list's correctness like this:&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;pre>&#xD;
// the list is now one longer. &#xD;
expect(list.size()==3);&#xD;
// the new element is in the correct position&#xD;
expect(list.get(1)==m);&#xD;
// check that other elements are still there.&#xD;
expect(list.get(0)==a);&#xD;
expect(list.get(2)==z);&#xD;
&lt;/pre>&#xD;
&lt;/blockquote>&#xD;
&lt;p>&#xD;
That seems sufficient, but it's not. Suppose the list implementation is incorrect and backward pointers are not set&#xD;
correctly: the forward links run from &lt;font size=&quot;+0&quot;>a&lt;/font> to &lt;font size=&quot;+0&quot;>m&lt;/font> to &lt;font size=&quot;+0&quot;>z&lt;/font>, but the backward link from &lt;font size=&quot;+0&quot;>z&lt;/font> still points to &lt;font size=&quot;+0&quot;>a&lt;/font>, skipping &lt;font size=&quot;+0&quot;>m&lt;/font>.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
If &lt;font size=&quot;+0&quot;>DoublyLinkedList.get(int index)&lt;/font> traverses the list from the beginning to the end (likely),&#xD;
the test would miss this failure. If the class provides &lt;font size=&quot;+0&quot;>elementBefore&lt;/font> and &lt;font size=&quot;+0&quot;>elementAfter&lt;/font> methods, checking for such failures is straightforward:&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;pre>&#xD;
// Check that links were all updated&#xD;
expect(list.elementAfter(a)==m);&#xD;
expect(list.elementAfter(m)==z);&#xD;
expect(list.elementBefore(z)==m); //this will fail&#xD;
expect(list.elementBefore(m)==a);&#xD;
&lt;/pre>&#xD;
&lt;/blockquote>&#xD;
&lt;p>&#xD;
But what if it doesn't provide those methods? You could devise more elaborate sequences of method calls that will fail&#xD;
if the suspected defect is present. For example, this would work:&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;pre>&#xD;
// Check whether back-link from Z is correct.&#xD;
list.insertBefore(z, x);&#xD;
// If it was incorrectly not updated, X will have &#xD;
// been inserted just after A.&#xD;
expect(list.get(1)==m); &#xD;
&lt;/pre>&#xD;
&lt;/blockquote>&#xD;
&lt;p>&#xD;
But such a test is more work to create and is likely to be significantly harder to maintain. (Unless you write good&#xD;
comments, it will not be at all clear why the test is doing what it's doing.) There are two solutions:&#xD;
&lt;/p>&#xD;
&lt;ol>&#xD;
&lt;li>&#xD;
Add the &lt;font size=&quot;+0&quot;>elementBefore&lt;/font> and &lt;font size=&quot;+0&quot;>elementAfter&lt;/font> methods to the public&#xD;
interface. But that effectively exposes the implementation to everyone and makes future change more difficult.&#xD;
&lt;/li>&#xD;
&lt;li>&#xD;
Let the tests &quot;look under the hood&quot; and check pointers directly.&#xD;
&lt;/li>&#xD;
&lt;/ol>&#xD;
&lt;p>&#xD;
The latter is usually the best solution, even for a simple class like &lt;font size=&quot;+0&quot;>DoublyLinkedList&lt;/font> and&#xD;
especially for the more complex classes that occur in your products.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
Typically, tests are put in the same package as the class they test. They are given protected or friend access.&#xD;
&lt;/p>&#xD;
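&lt;p>&#xD;
The sketch below shows what &quot;looking under the hood&quot; might look like for the list example. The &lt;font size=&quot;+0&quot;>DoublyLinkedList&lt;/font> here is a minimal invented implementation; the important part is that the test reaches the &lt;font size=&quot;+0&quot;>Node&lt;/font> pointers directly instead of going only through &lt;font size=&quot;+0&quot;>get&lt;/font>:&#xD;
&lt;/p>&#xD;

```java
// Sketch of a test that "looks under the hood". DoublyLinkedList is a
// minimal invented implementation; the test inspects node pointers
// directly, which works because it lives in the same package.
public class LinkedListTest {
    static class Node {
        Object value;
        Node prev;
        Node next;
        Node(Object value) { this.value = value; }
    }

    static class DoublyLinkedList {
        Node head;

        void append(Object value) {
            Node n = new Node(value);
            if (head == null) { head = n; return; }
            Node cur = head;
            while (cur.next != null) cur = cur.next;
            cur.next = n;
            n.prev = cur;
        }

        Node find(Object value) {
            for (Node cur = head; cur != null; cur = cur.next) {
                if (cur.value == value) return cur;
            }
            return null;
        }

        void insertBefore(Object existing, Object newValue) {
            Node at = find(existing);
            Node n = new Node(newValue);
            n.next = at;
            n.prev = at.prev;
            if (at.prev == null) {
                head = n;
            } else {
                at.prev.next = n;
            }
            at.prev = n; // forgetting this line is exactly the back-pointer bug
        }
    }

    public static void main(String[] args) {
        DoublyLinkedList list = new DoublyLinkedList();
        Object a = "a"; Object m = "m"; Object z = "z";
        list.append(a);
        list.append(z);
        list.insertBefore(z, m);
        // Check the backward pointers directly rather than only through a
        // forward-traversing accessor.
        Node zn = list.find(z);
        if (zn.prev.value != m) throw new AssertionError("back-link from z is wrong");
        if (zn.prev.prev.value != a) throw new AssertionError("back-link from m is wrong");
        System.out.println("ok");
    }
}
```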
&lt;h3>&#xD;
&lt;a id=&quot;TestDesignMistakes&quot; name=&quot;TestDesignMistakes&quot;>&lt;/a>Characteristic Test Design Mistakes&#xD;
&lt;/h3>&#xD;
&lt;p>&#xD;
Each test exercises a component and checks for correct results. The design of a test (the inputs it uses and how it&#xD;
checks for correctness) can be good at revealing defects, or it can inadvertently hide them. Here are some&#xD;
characteristic test design mistakes.&#xD;
&lt;/p>&#xD;
&lt;h4>&#xD;
Failure to specify expected results in advance&#xD;
&lt;/h4>&#xD;
&lt;p>&#xD;
Suppose you're testing a component that converts XML into HTML. A temptation is to take some sample XML, run it through&#xD;
the conversion, then look at the results in a browser. If the screen looks right, you &quot;bless&quot; the HTML by saving it as&#xD;
the official expected results. Thereafter, a test compares the actual output of the conversion to the expected results.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
This is a dangerous practice. Even sophisticated computer users are used to believing what the computer does. You are&#xD;
likely to overlook mistakes in the screen appearance. (Not to mention that browsers are quite tolerant of misformatted&#xD;
HTML.) By making that incorrect HTML the official expected results, you make sure that the test can never find the&#xD;
problem.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
It's less dangerous to double-check by looking directly at the HTML, but it's still dangerous. Because the output is&#xD;
complicated, it will be easy to overlook errors. You'll find more defects if you write the expected output by hand&#xD;
first.&#xD;
&lt;/p>&#xD;
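&lt;p>&#xD;
The same principle can be shown on a much simpler converter (invented for this sketch): write the expected output by hand first, then compare, rather than blessing whatever the code happens to produce:&#xD;
&lt;/p>&#xD;

```java
// Sketch of specifying expected results in advance. toJson() is a tiny
// converter invented for illustration.
public class ConverterTest {
    // Converts a "key=value" pair into a tiny JSON object.
    static String toJson(String pair) {
        int eq = pair.indexOf('=');
        String key = pair.substring(0, eq);
        String value = pair.substring(eq + 1);
        return "{\"" + key + "\": \"" + value + "\"}";
    }

    public static void main(String[] args) {
        // Expected result written by hand, before running the converter,
        // rather than copied from the converter's own output.
        String expected = "{\"name\": \"Ada Lovelace\"}";
        String actual = toJson("name=Ada Lovelace");
        if (!expected.equals(actual)) {
            throw new AssertionError("expected " + expected + " but got " + actual);
        }
        System.out.println("ok");
    }
}
```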
&lt;h4>&#xD;
Failure to check the background&#xD;
&lt;/h4>&#xD;
&lt;p>&#xD;
Tests usually check that what should have been changed has been, but their creators often forget to check that what&#xD;
should have been left alone has been left alone. For example, suppose a program is supposed to change the first 100&#xD;
records in a file. It's a good idea to check that the 101&lt;sup>st&lt;/sup> hasn't been changed.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
In theory, you would check that nothing in the &quot;background&quot; (the entire file system, all of memory, everything&#xD;
reachable through the network) has been changed. In practice, you have to choose carefully what you can afford to&#xD;
check. But it's important to make that choice.&#xD;
&lt;/p>&#xD;
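&lt;p>&#xD;
A sketch of such a background check, using an invented record-updating method, might look like this:&#xD;
&lt;/p>&#xD;

```java
import java.util.Arrays;

// Sketch of a background check: verify not only that the first 100
// records changed, but that record 101 was left alone. zeroFirst() is
// invented for illustration.
public class BackgroundTest {
    static void zeroFirst(int[] records, int count) {
        for (int i = 0; count > i; i++) {
            records[i] = 0;
        }
    }

    public static void main(String[] args) {
        int[] records = new int[200];
        Arrays.fill(records, 7);
        zeroFirst(records, 100);
        // The changed part...
        for (int i = 0; 100 > i; i++) {
            if (records[i] != 0) throw new AssertionError("record " + i + " not updated");
        }
        // ...and the background: the 101st record (index 100) must be untouched.
        if (records[100] != 7) throw new AssertionError("record 101 was changed");
        System.out.println("ok");
    }
}
```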
&lt;h4>&#xD;
Failure to check persistence&#xD;
&lt;/h4>&#xD;
&lt;p>&#xD;
Just because the component tells you a change has been made, that doesn't mean it has actually been committed to the&#xD;
database. You need to check the database via another route.&#xD;
&lt;/p>&#xD;
&lt;h4>&#xD;
Failure to add variety&#xD;
&lt;/h4>&#xD;
&lt;p>&#xD;
A test might be designed to check the effect of three fields in a database record, but many other fields need to be&#xD;
filled in to execute the test. Testers will often use the same values over and over again for these &quot;irrelevant&quot;&#xD;
fields. For example, they'll always use the name of their lover in a text field, or 999 in a numeric field.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
The problem is that sometimes what shouldn't matter actually does. Every so often, there's a bug that depends on some&#xD;
obscure combination of unlikely inputs. If you always use the same inputs, you stand no chance of finding such bugs. If&#xD;
you persistently vary inputs, you might. Quite often, it costs almost nothing to use a number different than 999 or to&#xD;
use someone else's name. When varying the values used in tests costs almost nothing and it has some potential benefit,&#xD;
then vary. (Note: It's unwise to use names of old lovers instead of your current one if your current lover works with&#xD;
you.)&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
Here's another benefit. One plausible fault is for the program to use field &lt;i>X&lt;/i> when it should have used field&#xD;
&lt;i>Y&lt;/i>. If both fields contain &quot;Dawn&quot;, the fault can't be detected.&#xD;
&lt;/p>&#xD;
&lt;h4>&#xD;
Failure to use realistic data&#xD;
&lt;/h4>&#xD;
&lt;p>&#xD;
It's common to use made-up data in tests. That data is often unrealistically simple. For example, customer names might&#xD;
be &quot;Mickey&quot;, &quot;Snoopy&quot;, and &quot;Donald&quot;. Because that data differs from what real users enter (for example, it's&#xD;
characteristically shorter), it can miss defects real customers will see. For example, these one-word names wouldn't&#xD;
detect that the code doesn't handle names with spaces.&#xD;
&lt;/p>&#xD;
&lt;p>&#xD;
It's prudent to make a slight extra effort to use realistic data.&#xD;
&lt;/p>&#xD;
&lt;h4>&#xD;
Failure to notice that the code does nothing at all&#xD;
&lt;/h4>&#xD;
&lt;p>&#xD;
Suppose you initialize a database record to zero, run a calculation that should result in zero being stored in the&#xD;
record, then check that the record is zero. What has your test demonstrated? The calculation might not have taken place&#xD;
at all. Nothing might have been stored, and the test couldn't tell.&#xD;
&lt;/p>&#xD;
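&lt;p>&#xD;
One defense is to initialize the record to a sentinel value that the calculation could never legitimately produce, so that a skipped store is distinguishable from a stored zero. A sketch, with an invented &lt;font size=&quot;+0&quot;>Record&lt;/font> class:&#xD;
&lt;/p>&#xD;

```java
// Sketch of guarding against a do-nothing test: start the record at a
// sentinel value the calculation could never produce, so a skipped
// store is distinguishable from a stored zero. (Invented example.)
public class SentinelTest {
    static class Record {
        double balance = -999.25; // sentinel, deliberately not zero

        // The calculation under test; here it should store zero.
        void recompute() {
            balance = 0.0;
        }
    }

    public static void main(String[] args) {
        Record r = new Record();
        r.recompute();
        // If recompute() had silently done nothing, balance would still be
        // the sentinel and this check would fail.
        if (r.balance != 0.0) throw new AssertionError("calculation did not store 0");
        System.out.println("ok");
    }
}
```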
&lt;p>&#xD;
That example sounds unlikely, but the same mistake can crop up in subtler ways. For example, you might write a test&#xD;
for a complicated installer program, intended to check that all temporary files are removed after a successful&#xD;
installation. But because of the particular installer options chosen for that test, one temporary file was never&#xD;
created. Sure enough, that's the one the program forgot to remove.&#xD;
&lt;/p>&#xD;
&lt;h4>&#xD;
Failure to notice that the code does the wrong thing&#xD;
&lt;/h4>&#xD;
&lt;p>&#xD;
Sometimes a program does the right thing for the wrong reasons. As a trivial example, consider this code:&#xD;
&lt;/p>&#xD;
&lt;blockquote>&#xD;
&lt;pre>&#xD;
if (a &amp;lt; b &amp;amp;&amp;amp; c) &#xD;
return 2 * x;&#xD;
else&#xD;
return x * x;&#xD;
&lt;/pre>&#xD;
&lt;/blockquote>&#xD;
&lt;p>&#xD;
The logical expression is wrong, and you've written a test that causes it to evaluate incorrectly and take the wrong&#xD;
branch. Unfortunately, purely by coincidence, the variable &lt;font size=&quot;+0&quot;>x&lt;/font> has the value 2 in that test.&#xD;
So the result of the wrong branch is accidentally correct: the same as the result the right branch would have given.&#xD;
&lt;/p>&#xD;
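&lt;p>&#xD;
The defense here is to pick inputs for which the right branch and the plausible wrong branches give different answers. A sketch, with the branch logic reduced to an invented helper:&#xD;
&lt;/p>&#xD;

```java
// Sketch: choose inputs where the branches give different answers, so a
// coincidence like x == 2 (where 2 * x equals x * x) cannot mask the
// defect. (Invented example.)
public class BranchTest {
    static int compute(boolean takeDouble, int x) {
        if (takeDouble) {
            return 2 * x;
        }
        return x * x;
    }

    public static void main(String[] args) {
        // x == 2 is a poor test input: both branches return 4.
        // x == 3 separates them: 6 versus 9.
        if (compute(true, 3) != 6) throw new AssertionError("doubling branch wrong");
        if (compute(false, 3) != 9) throw new AssertionError("squaring branch wrong");
        System.out.println("ok");
    }
}
```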
&lt;p>&#xD;
For each expected result, you should ask if there's a plausible way in which that result could be achieved for the&#xD;
wrong reason. While it's often impossible to know, sometimes it's not.&#xD;
&lt;/p></mainDescription>
</org.eclipse.epf.uma:ContentDescription>