<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN">
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1">
<title>Preferences</title>
</head>
<body>
<h1>Preferences</h1>
<p>
The tool has some preferences which may be configured by advanced users.
This must be done carefully, otherwise the tool may not be able to read any data
and/or the generated results may be misleading.
</p>
<h3><a name="eclipse_version">Eclipse version</a></h3>
<p><img src="images/preferences-eclipse-versions.png" alt="Preferences Eclipse versions"/></p>
<p>
The Eclipse version to which the performance results belong. There are
only two possible choices: the maintenance version and the current version.
</p>
<h3><a name="database">Database</a></h3>
<p><img src="images/preferences-database.png" alt="Preferences Database"/></p>
<p>
By default, the tool does not connect to any performance results database, as common
users might not have enough rights to access it. However, users who have these
rights may want to look at the database contents and update the local data files
from it. Hence, the tool can be configured to connect to a database
which may be either local or on the releng server (<code><b>minsky</b></code>).
</p><p>
Note that the folder for the local database must be the parent of the
<code>perfDb3x</code> folder, otherwise you will get an error message.
</p>
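<p>
For example, a candidate local folder can be checked before entering it in this
preference by verifying that it directly contains a <code>perfDb3x</code> sub-folder.
This is a minimal sketch in plain Java, not part of the tool itself; only the
<code>perfDb3x</code> folder name comes from this page.
</p>
<pre>
import java.io.File;

public class CheckDatabaseFolder {
    public static void main(String[] args) {
        // Pass the candidate preference value as the first argument.
        File parent = new File(args[0]);
        File perfDb = new File(parent, "perfDb3x");
        if (perfDb.isDirectory()) {
            System.out.println("OK: " + parent + " is the parent of the perfDb3x folder");
        } else {
            System.out.println("Invalid: " + parent + " does not contain a perfDb3x folder");
        }
    }
}
</pre>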
<h3><a name="status">Status</a></h3>
<p><img src="images/preferences-status.png" alt="Preferences Status"/></p>
<p>
The status preferences can be set to exclude some tests when writing
the status file (see <a href="components.html#writestatus">Write status</a>).
</p><p>
Hovering over each check-box or radio button shows a short explanation of the
corresponding preference value.
</p><p>
Here is a detailed explanation of these status preferences (a combined sketch
follows the list):
</p>
<ul>
<li><b>Values</b>: Check-box to include the values of the failing tests in the
status file. Note that this is not recommended when status files are to be
compared between builds</li>
<li><b>Error level</b>: Level of the measurement error above which the test result is
excluded from the file:
<ul>
<li>None: the test is always included in the status file whatever the error
level is</li>
<li>Invalid: the test is not included when the error level is over 100%</li>
<li>Weird: the test is not included when the error level is over 50%</li>
<li>Suspicious: the test is not included when the error level is over 25%</li>
<li>Noticeable: the test is not included when the error level is over 3%</li>
</ul>
</li>
<li><b>Small value</b>: The test is not included when a small value is detected:
<ul>
<li>Build value: the test is not included when the test value for this build
is below 100ms (<i>as the test duration is below the recommended minimum,
it does not need to be closely monitored...</i>)</li>
<li>Delta value: the test is not included when the difference between
the build and the baseline is below 100ms (<i>as the regression is below
what a normal user can detect, it may not be necessary to report it...</i>)</li>
</ul>
</li>
<li><b>Statistics</b>: Level of deviation of the test history since the first build
above which the test result is not included in the status file:
<ul>
<li>None: the test is always included in the status file whatever the deviation
is</li>
<li>Unstable: the test is not included when the deviation is over 10%</li>
<li>Erratic: the test is not included when the deviation is over 20%</li>
</ul>
</li>
<li><b>Builds to confirm</b>: The number of builds needed to confirm a regression.
As tests may have <i>natural</i> variation, several builds are often necessary
to confirm that a regression really occurs. This preference allows defining
how many consecutive builds must show a regression before a test result is
included in the status file...
</li>
</ul>
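<p>
As an illustration of how these preferences combine (the sketch announced above),
the following code filters one test result before it would be written to the status
file. All class and field names (<code>TestResult</code>, <code>errorLevel</code>, etc.)
are hypothetical and only mirror the thresholds listed above; this is not the tool's
actual code.
</p>
<pre>
// Hypothetical value holder for a single test in a single build.
class TestResult {
    double errorLevel;           // measurement error, as a ratio (0.25 == 25%)
    double historyDeviation;     // deviation of the test history since the first build
    long buildValueMs;           // value measured for this build, in milliseconds
    long deltaMs;                // difference between the build and the baseline, in milliseconds
    int consecutiveRegressions;  // number of consecutive builds showing the regression
}

// Hypothetical filter mirroring the status preferences described above.
public class StatusFilter {
    double errorThreshold = 0.25;      // e.g. "Suspicious": exclude when the error is over 25%
    double deviationThreshold = 0.10;  // e.g. "Unstable": exclude when the deviation is over 10%
    int buildsToConfirm = 3;           // consecutive builds needed to confirm a regression

    boolean includeInStatusFile(TestResult result) {
        if (result.errorLevel > errorThreshold) return false;            // error level too high
        if (result.historyDeviation > deviationThreshold) return false;  // history too unstable
        if (result.buildValueMs &lt; 100) return false;                     // small build value
        if (Math.abs(result.deltaMs) &lt; 100) return false;                // small delta value
        return result.consecutiveRegressions >= buildsToConfirm;         // regression confirmed
    }

    public static void main(String[] args) {
        TestResult result = new TestResult();
        result.errorLevel = 0.05;
        result.historyDeviation = 0.02;
        result.buildValueMs = 1500;
        result.deltaMs = -250;
        result.consecutiveRegressions = 3;
        System.out.println(new StatusFilter().includeInStatusFile(result)); // prints true
    }
}
</pre>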
<h3><a name="milestones">Milestones</a></h3>
<p><img src="images/preferences-milestones.png" alt="Preferences Milestones"/></p>
<p>
This is the list of the version milestones. Each milestone is a date string
using the <b>yyyymmddHHMM</b> format. When a new milestone is shipped, a new
date must be added to this preference to let the tool identify it in the builds
list and emphasize it...
</p>
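<p>
For instance, such a milestone date string can be produced and parsed with a standard
Java date formatter. This is a minimal sketch, not part of the tool; note that in
<code>SimpleDateFormat</code> pattern letters the format reads <code>yyyyMMddHHmm</code>
(<code>MM</code> for the month, <code>mm</code> for the minutes), and the date value used
below is purely illustrative:
</p>
<pre>
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;

public class MilestoneDate {
    // Milestone dates are stored as yyyymmddHHMM strings, e.g. "201005281700"
    // for May 28th 2010, 17:00 (an illustrative value, not an actual milestone).
    private static final SimpleDateFormat FORMAT = new SimpleDateFormat("yyyyMMddHHmm");

    public static void main(String[] args) throws ParseException {
        Date milestone = FORMAT.parse("201005281700"); // read a milestone date string
        System.out.println(FORMAT.format(milestone));  // prints 201005281700 again
    }
}
</pre>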
<h3><a name="lastbuild">Last build</a></h3>
<p><img src="images/preferences-lastbuild.png" alt="Preferences Last build"/></p>
<p>
The last build on which verifications and/or generation are to be done. When
not set (which is the default), the last build of the database is taken
into account.
</p><p>
All builds after the selected one are ignored by the tool, as if they had
no local data. Changing this value triggers a re-initialization of
the local data, which will be read again.
</p>
<h2><a name="defaultdim">Default dimension</a></h2>
<p><img src="images/preferences-default-dim.png" alt="Preferences Default dimension"/></p>
<p>
This is the dimension used to compute the deltas and perform the verifications.
Currently this is the <b>Elapsed Process Time</b> dimension.
</p><p>
<i>Note that the default dimension must belong to the <b>Results dimensions</b>
described below, hence a newly selected dimension will always be automatically
added to that list...</i>
</p>
<h2><a name="resultsdim">Results dimensions</a></h2>
<p><img src="images/preferences-results-dim.png" alt="Preferences Results dimension"/></p>
<p>
These are the dimensions displayed in the scenario data HTML pages. Currently these
are <b>Elapsed Process Time</b> and <b>CPU Time</b>. Having these dimensions
configurable makes it possible to display other dimensions and see whether their
numbers are relevant or not (e.g. <b>Used Java Heap</b>).
</p><p>
<i>Note that the default dimension described above must belong to the selected
dimensions, hence it will always be automatically added to the newly selected list...</i>
</p>
</body>
</html>