<?xml version="1.0" encoding="UTF-8"?>
<org.eclipse.epf.uma:ContentDescription xmi:version="2.0"
xmlns:xmi="http://www.omg.org/XMI" xmlns:org.eclipse.epf.uma="http://www.eclipse.org/epf/uma/1.0.5/uma.ecore"
xmlns:epf="http://www.eclipse.org/epf" epf:version="1.5.0" xmi:id="_vuwC4MPcEdmbOvqy4O0adg"
name="programming_automated_tests,_0j5sUMlgEdmt3adZL5Dmdw" guid="_vuwC4MPcEdmbOvqy4O0adg"
changeDate="2006-12-07T13:06:38.445-0800" version="1.0.0">
<mainDescription>&lt;h3>&#xD;
Introduction&#xD;
&lt;/h3>&#xD;
&lt;p>&#xD;
Although the programming of automated tests should contribute to the overall test effort, it usually does not make up&#xD;
the entire test effort. In fact, test efforts based on a complete-automation approach often end up spending more time&#xD;
on test automation than on testing. Before you begin developing automated test scripts, consider whether it would be&#xD;
more efficient to perform some of the testing manually. Some aspects of an application are more efficiently tested&#xD;
manually (for example, GUI testing versus data-driven testing). If you decide to program automated test scripts,&#xD;
examine which aspects of your test scripting can be automated and begin designing your scripts.&#xD;
&lt;/p>&#xD;
&lt;h3>&#xD;
Design your automated tests&#xD;
&lt;/h3>&#xD;
&lt;p>&#xD;
Without some level of design of your automated tests, introducing automation into your testing effort can lead to more&#xD;
problems than it solves. Consider developing your automated tests according to a lifecycle of their own, covering&#xD;
requirements for the automation, design, implementation, and testing of the automated tests themselves. This approach&#xD;
can be informal or formal, depending on your project needs. By designing the programming of your automated tests, you&#xD;
can avoid spending time programming the wrong tests, reworking programmed tests, and deciphering different coding&#xD;
styles in the programming of the tests.&#xD;
&lt;/p>&#xD;
&lt;h3>&#xD;
Recorded versus programmed scripts&#xD;
&lt;/h3>&#xD;
&lt;p>&#xD;
Although there are clear benefits to recorded scripts (for example, ease of creation, or the opportunity they give&#xD;
novice testers to learn a scripting language), recorded scripts also present their own problems. The disadvantages of&#xD;
playback scripts are well known: they are deceptively easy to create but very difficult to update. Problems with&#xD;
script reliability, hard-coded data values, and the need to re-record after changes to the application under test are&#xD;
well documented. On the other hand, programmed scripts can present difficulties of their own: they are difficult for&#xD;
the novice tester to create, they can require substantial time and effort to develop, and they can be difficult to&#xD;
debug. Most test tooling makes these issues less problematic by providing the tester with script support functions,&#xD;
such as ways to establish target-of-test lists, systematic ways to program verification points, pointers to datapools,&#xD;
and ways to build commands into the script (for example, sleep commands), comment the script, and document the script.&#xD;
Another major, and often overlooked, advantage of using test tooling to mitigate these risks is the ability to add to&#xD;
an existing script: making corrections to an existing script, testing new features of a test target or application&#xD;
under test, or resuming a recording after an interruption.&#xD;
&lt;/p>&#xD;
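&lt;p>&#xD;
To make this concrete, the following is a minimal sketch of a programmed functional test script. The TestScript base&#xD;
class and the datapool(), sleep(), startTarget(), window(), and verificationPoint() helpers are hypothetical stand-ins&#xD;
for the script support functions described above; substitute the actual API of your test tool.&#xD;
&lt;/p>&#xD;
&lt;pre>&#xD;
// Hypothetical programmed test script; every helper shown here stands in&#xD;
// for an equivalent support function in a real test tool.&#xD;
public class LoginTest extends TestScript {&#xD;
&#xD;
    public void run() {&#xD;
        // Pull variable data from a datapool instead of hard-coding it.&#xD;
        String user = datapool("credentials").value("username");&#xD;
        String pass = datapool("credentials").value("password");&#xD;
&#xD;
        startTarget("LoginDialog");   // entry from the target-of-test list&#xD;
        type("userField", user);&#xD;
        type("passwordField", pass);&#xD;
        click("okButton");&#xD;
&#xD;
        sleep(2000);                  // built-in sleep command&#xD;
&#xD;
        // A systematic verification point rather than an ad hoc check.&#xD;
        verificationPoint("mainWindowVisible",&#xD;
                window("MainWindow").isVisible());&#xD;
    }&#xD;
}&#xD;
&lt;/pre>&#xD;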
&lt;h3>&#xD;
Functional and performance test scripts&#xD;
&lt;/h3>&#xD;
&lt;p>&#xD;
When discussing automated test scripts, it is important to distinguish between functional and performance tests. Most&#xD;
discussions of programming automated test scripts focus on testing the functionality of an application. This is not&#xD;
inappropriate, because much automated testing does focus on functional testing. Performance test scripting, however,&#xD;
has its own unique characteristics. Performance test automation gives you the ability to programmatically set&#xD;
workloads by adding user groups to the test load, setting think-time behavior, running tests randomly or at set rates,&#xD;
and setting the duration of a run. Performance test automation also allows you to create and maintain schedules for&#xD;
your tests.&#xD;
&lt;/p>&#xD;
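&lt;p>&#xD;
As an illustration, the following sketch sets up such a workload programmatically. PerformanceSchedule, ThinkTime, and&#xD;
RunMode are hypothetical names; real performance tools expose equivalent settings through their own schedule editors&#xD;
or APIs.&#xD;
&lt;/p>&#xD;
&lt;pre>&#xD;
import java.time.Duration;&#xD;
&#xD;
// Hypothetical workload definition illustrating the settings discussed above.&#xD;
public class CheckoutWorkload {&#xD;
&#xD;
    public static void main(String[] args) {&#xD;
        PerformanceSchedule schedule = new PerformanceSchedule("nightly-load");&#xD;
&#xD;
        // Two user groups contributing to the overall test load.&#xD;
        schedule.addUserGroup("browsers", 80);   // 80 virtual users&#xD;
        schedule.addUserGroup("buyers", 20);     // 20 virtual users&#xD;
&#xD;
        // Think-time behavior and pacing.&#xD;
        schedule.setThinkTime(ThinkTime.RECORDED_WITH_VARIANCE);&#xD;
        schedule.setRunMode(RunMode.RANDOM);     // or run at a set rate&#xD;
&#xD;
        // Bound the run by duration rather than by iteration count.&#xD;
        schedule.setDuration(Duration.ofMinutes(30));&#xD;
&#xD;
        schedule.run();&#xD;
    }&#xD;
}&#xD;
&lt;/pre>&#xD;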
&lt;h3>&#xD;
Testing test scripts&#xD;
&lt;/h3>&#xD;
&lt;p>&#xD;
When testing your test scripts, keep in mind whether you are testing recorded or programmed test scripts. For recorded&#xD;
scripts, much of the debugging consists of finding errors that were introduced by changes in the test target or test&#xD;
environment. When you run a recorded test script, consider the test target of the script; some test automation tools&#xD;
capture this information as part of the test script. Debugging a recorded script consists largely of determining&#xD;
whether changes in the target have created error conditions in the script. In general, there are two main categories&#xD;
to examine here: changes in the UI and session-sensitive data (for example, date-stamped data). In most cases,&#xD;
discrepancies between recording and playback cause the errors in your recorded test scripts.&#xD;
&lt;/p>&#xD;
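&lt;p>&#xD;
A common fix for the second category is to verify against the stable part of session-sensitive output rather than the&#xD;
literal text captured at record time. The sketch below, which assumes an illustrative header format, passes for any&#xD;
date stamp instead of only the one seen during recording.&#xD;
&lt;/p>&#xD;
&lt;pre>&#xD;
import java.util.regex.Pattern;&#xD;
&#xD;
// Pattern-based check for date-stamped output; a recorded script would&#xD;
// instead compare against the literal text captured at record time and&#xD;
// fail on every later run.&#xD;
public class ReportHeaderCheck {&#xD;
&#xD;
    static final Pattern HEADER =&#xD;
            Pattern.compile("Report generated \\d{4}-\\d{2}-\\d{2}");&#xD;
&#xD;
    public static boolean headerIsValid(String actual) {&#xD;
        return HEADER.matcher(actual).matches();&#xD;
    }&#xD;
}&#xD;
&lt;/pre>&#xD;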
&lt;p>&#xD;
Testing programmed test scripts involves many of the same debugging techniques you would apply to debugging an&#xD;
application. Consider both the flow control logic and the data aspects of your script. Automated testing tools provide&#xD;
you with test script debugging IDEs as well as datapool management features that facilitate this type of testing.&#xD;
During execution of test scripts, a test that uses a datapool can replace values in the programmed test with variable&#xD;
test data that is stored in the datapool.&#xD;
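&lt;/p>&#xD;
&lt;p>&#xD;
The following is a minimal sketch, not tied to any particular tool, of the underlying idea: each row of a&#xD;
comma-separated file acts as a datapool row and supplies the variable values for one execution of the test. The file&#xD;
name and column layout are illustrative assumptions.&#xD;
&lt;/p>&#xD;
&lt;pre>&#xD;
import java.io.IOException;&#xD;
import java.nio.file.Files;&#xD;
import java.nio.file.Paths;&#xD;
import java.util.List;&#xD;
&#xD;
public class DatapoolDrivenTest {&#xD;
&#xD;
    public static void main(String[] args) throws IOException {&#xD;
        // Illustrative datapool: one user,password pair per line.&#xD;
        List&lt;String> rows = Files.readAllLines(Paths.get("credentials.csv"));&#xD;
        for (String row : rows) {&#xD;
            String[] cols = row.split(",");&#xD;
            // The datapool row replaces values that would otherwise be&#xD;
            // hard-coded into the programmed test.&#xD;
            runLoginTest(cols[0], cols[1]);&#xD;
        }&#xD;
    }&#xD;
&#xD;
    static void runLoginTest(String user, String password) {&#xD;
        System.out.printf("logging in as %s%n", user);&#xD;
        // ... drive the application under test here ...&#xD;
    }&#xD;
}&#xD;
&lt;/pre>&#xD;
&lt;p>&#xD;
A tool-provided datapool builds management features on top of this basic idea, such as typed columns and coordinated&#xD;
access when many test instances share the same pool.&#xD;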
&lt;/p></mainDescription>
</org.eclipse.epf.uma:ContentDescription>