 N4JS Design Specification
This chapter may be outdated.

In order to run all tests from the command line, use Maven:

mvn clean verify

You may have to increase the memory for Maven via export MAVEN_OPTS="-Xmx2048m" (Unix) or set MAVEN_OPTS="-Xmx2048m" (Windows).

Do not run the tests via mvn clean test as this may lead to some failures.

14.1. Performance Tests

There are two kinds of performance tests:

1. Synthetic Tests: an arbitrary number of test classes is generated, and then some modifications are performed on these classes.

2. Real World Tests: tests are based on a snapshot version of our platform libraries.

14.1.1. Synthetic Performance Tests

The idea of the synthetic performance tests is to test the performance of specific functionality with a defined number of classes, specifically designed for the functionality under test.

The overall structure of a synthetic performance test is:

1. generate test classes

2. compile these classes

3. modify the test classes

4. measure incremental build time

Steps 3) and 4) can be executed in a loop. Also, step 2) can be looped (with a clean build).

The test classes are spread over clusters and projects. The following categories are used:

Cluster

A cluster is a set of projects; each project of a cluster may depend on another project of the cluster. There are no dependencies between projects of different clusters.

Project

A project is simply an N4JS project, containing packages. A project may depend on other projects.

Package

A package is a folder in a source folder of a project. A package contains classes.

Class

A class is defined in a file, usually one class per file. The file, and with it the class, is contained in a package. The class contains members.

Member

A member is either a field or a method of a class. A method may have a body, which may contain variables with references to other classes.
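For illustration, here is a hedged sketch of what a single generated test class might look like (the file layout, names, and numbering scheme are hypothetical; imports are omitted):

// file: cluster0/project1/src/package2/Class3.n4js
export public class Class3 {
	// field referencing another generated class
	field4: Class5;
	// method body with a variable referencing yet another class
	method6(): void {
		var v: Class7 = new Class7();
	}
}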

14.1.1.1. Design of Generator

The figure Performance Generator shows the classes of the performance test generator.

The package is designed as follows:

1. N4ProjectGenerator: the main control class for generation

2. TestDescriptor and subclasses: In order to keep the memory consumption of the test class generator low, no graph structure is created for the test elements. Instead, each element is uniquely named by a number; this number (actually a tuple of numbers) is stored in TestDescriptor and its subclasses. There is a descriptor for each element of the tests.

3. AbstractModifier and subclasses generate the tests. The idea is as follows:

• Modifier generates all files, with complete references and no issues (complete)

• subclasses of Modifier skip certain generation steps or add modifications, leading to issues or solving them

In order to compute the name of a class from its descriptor, as well as to retrieve a class based on an absolute number, the modifiers use utility methods provided by PerformanceTestConfiguration. Note that computing the names and numbers depends on the configuration!

14.1.1.2. Design of Performance Test Configuration and Execution

The figure Class Diagram of Performance Test Configuration and Execution shows the classes of the performance test configuration and execution.

Figure 45. Class Diagram of Performance Test Configuration and Execution

The package is designed as follows:

1. PerformanceTestConfiguration stores the test configuration. The configuration stores how many clusters, packages etc. are to be generated. It also provides methods for generating names from the descriptors mentioned above.

2. PerformanceMeter executes the test, listening for the (build) job to finish, etc.

3. AbstractGeneratingPerformanceTest: base test class that contains setup, teardown and utility methods.

4. PerformanceTest: test class containing the tests.

14.1.1.3. JUnit Configuration

We are using JUnitBenchmarks (http://labs.carrotsearch.com/junit-benchmarks.html/) to adjust plain JUnit behavior to the specific needs of the performance tests.

14.1.1.4. JUnitBenchmark Test Configuration

JUnitBenchmark test configuration is performed by annotating the test method with @BenchmarkOptions. Parameters of that annotation include:

1. warmupRounds: how many times the test will be executed without taking measurements

2. benchmarkRounds: how many times the test will be executed; the measurements taken will be used in the results report

3. callgc: call System.gc() before each test. This may slow down the tests in a significant way.

4. concurrency: specifies how many threads to use for the tests.

5. clock: specifies which clock to use.

A typical configuration for our performance tests might look like this:

@BenchmarkOptions(benchmarkRounds = 5, warmupRounds = 2)
@Test
public void test() throws Exception {
	// test...
}
14.1.1.5. JUnitBenchmark Report Configuration

By annotating the test class appropriately, JUnitBenchmarks will generate HTML reports with performance results. There are two reports that can be generated:

1. @BenchmarkMethodChart: the report will contain results for every method from one test run (but all defined benchmarkRounds)

• filePrefix defines the report file name

2. @BenchmarkHistoryChart: the report will contain the trend of results from multiple test runs (it is an aggregation of multiple instances of the @BenchmarkMethodChart report)

• filePrefix defines the report file name

• labelWith defines the label that will mark separate runs

The labelWith property can have its value propagated from the run configuration/command line. An example configuration might be @BenchmarkHistoryChart(filePrefix = "benchmark-history", labelWith = LabelType.CUSTOM_KEY).

14.1.1.6. JUnitBenchmark Run Configuration

It is possible to specify additional options for a performance test run:

1. -Djub.consumers=CONSOLE,H2: specifies where results will be written; H2 indicates that an H2 database is to be used

2. -Djub.db.file=.benchmarks: specifies the name of the H2 database file

3. -Djub.customkey=: the value of that property can be used as the label in @BenchmarkHistoryChart

14.1.1.7. JUnitBenchmark Example

Configuration example:

@BenchmarkMethodChart(filePrefix = "benchmark-method")
@BenchmarkHistoryChart(filePrefix = "benchmark-history", labelWith = LabelType.CUSTOM_KEY)
public class PerformanceTest extends AbstractGeneratingPerformanceTest {

	public PerformanceTest() {
		super("PerfTest");
	}

	@Rule
	public TestRule benchmarkRun = new BenchmarkRule();

	@Test
	@BenchmarkOptions(benchmarkRounds = 5, warmupRounds = 2)
	public void Test1() throws Exception {
		// Test...
	}

	@Test
	@BenchmarkOptions(benchmarkRounds = 5, warmupRounds = 2)
	public void Test2() throws Exception {
		// Test...
	}
}

Executing this code in Eclipse with the configuration

-Xms512m -Xmx1024m -XX:MaxPermSize=512m -Djub.consumers=CONSOLE,H2 -Djub.db.file=.benchmarks -Djub.customkey=${current_date}

will cause:

1. both tests to be executed 2 times for the warmup

2. both tests to be executed 5 times with measurements taken

3. results to be written to the console

4. results to be stored in a local H2 database file (created if it doesn't exist)

5. benchmark-method.html to be generated with the performance results of every test in that execution

6. benchmark-history.html to be generated with the performance results of every execution

7. separate test executions to be labeled in benchmark-history.html with their start time

14.1.1.8. Note on Jenkins Job

For performance tests it is important not only to get a pass/fail result in terms of being below a given threshold, but also to examine the trend of those results. We achieve this with the tooling described above. In order to keep this data independent of the build machine or the build system storage, we use a separate repository to store performance artifacts. Jenkins copies the previous test results into the workspace, runs the performance tests, then commits and pushes the combined results (current results added to the previous results) to that repository.

14.2. ECMA Tests

ECMAScript Language test262 is a test suite intended to check agreement between JavaScript implementations and ECMA-262, the ECMAScript Language Specification (currently 5.1 Edition). The test suite contains thousands of individual tests, each of which tests some specific requirements of the ECMAScript Language Specification. For more info refer to http://test262.ecmascript.org/

Uses of this suite may include:

1. grammar tests

2. validation tests

3. run-time tests

The ECMA test262 suite source code can be found here: http://hg.ecmascript.org/tests/test262

14.2.1. Grammar Tests

Based on the JS files included in the test262 suite, we generate tests that feed the provided JS code into the parser. This operation will result in

1. the parser throwing an exception, or

2. the parser producing output.

The first case indicates that parsing the provided JS code was not possible; this is considered a test error. The second case indicates that parsing the provided code was successful and will result in either

• output with no errors - the code adhered to the parser grammar, or

• output with errors - the code violated the parser grammar.

A given test must interpret those results to produce the proper test outcome.

14.2.1.1. Negative Tests

It is important to note that some of the tests are positive and some are negative. Negative test cases are marked by the authors with a @negative JSDoc-like marker; the parser tests must be aware of this to avoid both false positive and false negative results.

14.2.1.2. Test Exclusion

To exclude validation tests or run-time related tests, the implementation uses a blacklist approach to exclude some of the ECMA test262 tests from execution.

14.3. Integration Tests

Integration tests based on the stdlib and online-presence code bases can be found in bundle org.eclipse.n4js.hlc.tests in package org.eclipse.n4js.hlc.tests.integration (headless case) and in bundle org.eclipse.n4js.ui.tests in package org.eclipse.n4js.ui.tests.integration (plugin-UI tests running inside Eclipse). The headless tests also execute mangelhaft tests; the UI tests only perform compilation of the test code. More information can be found in the API documentation of the classes AbstractIntegrationTest and AbstractIntegrationPluginUITest.

14.4. Test Helpers

Test helpers contain utility classes that are reused between different test plug-ins.
14.4.1. Parameterized N4JS Tests

The Xtext JUnit test runner injects a ParserHelper into the test, which allows running the N4JS parser on a given input and obtaining information about the parsing results. In some cases we want to run this kind of test on large input data. To address this we provide two utilities, ParameterizedXtextRunner and TestCodeProvider. They allow writing data-driven parser tests.

14.4.1.1. ParameterizedXtextRunner

This JUnit runner serves two purposes:

• injecting the ParserHelper

• creating multiple test instances, one for each provided input datum

This class is based on org.eclipse.xtext.testing.XtextRunner and org.junit.runners.Parameterized.

14.4.1.2. TestCodeProvider

This class is responsible for extracting ZipEntry instances from a provided ZipFile. Additionally, it can filter out entries that match strings in a provided blacklist file. Filtering out ZipEntries assumes that the blacklist file contains the path of a ZipEntry in the ZipFile as a string on a single line. Lines starting with # in the blacklist file are ignored by TestCodeProvider.

14.4.1.3. Example of a parameterized parser test

@RunWith(XtextParameterizedRunner.class)
@InjectWith(N4JSInjectorProvider.class)
public class DataDrivenParserTestSuite {

	/** Zip archives containing test files. */
	public static final Collection<String> TEST_DATA_RESOURCES = Arrays.asList("foo.zip", "bar.zip");

	/** Blacklist of files requiring an execution engine. */
	public static final String BLACKLIST_FILENAME = "blacklist.txt";

	/** Every generated test will use a different ZipEntry as test data. */
	final ZipEntry entry;

	/** Name of the resource containing the corresponding ZipEntry. */
	final String resourceName;

	@Inject
	protected ParseHelper<Script> parserN4JS;

	Collection<String> blackList;

	static final Logger logger = Logger.getLogger("someLogger");

	public DataDrivenParserTestSuite(ZipEntry entry, String resourceName, Collection<String> blackList) {
		this.entry = entry;
		this.resourceName = resourceName;
		this.blackList = blackList;
	}

	@Rule
	public TestRule blackListHandler = new TestRule() {
		@Override
		public Statement apply(final Statement base, Description description) {
			final String entryName = entry.getName();
			if (blackList.contains(entryName)) {
				return new Statement() {
					@Override
					public void evaluate() throws Throwable {
						try {
							base.evaluate();
						} catch (AssertionError e) {
							// expected
							return;
						}
					}
				};
			} else {
				return base;
			}
		}
	};

	/**
	 * Generates the collection of ZipEntry instances that will be used as data.
	 * The provided parameter is mapped to the name of the test (taking advantage
	 * of the fact that ZipEntry.toString() is the same as entry.getName()).
	 */
	@Parameters(name = "{0}")
	public static Collection<Object[]> data() throws URISyntaxException, ZipException, IOException {
		return TestCodeProvider.getDataFromZippedRoots(TEST_DATA_RESOURCES, BLACKLIST_FILENAME);
	}

	/** Generated instances of the test will use this base implementation. */
	@Test
	public void test() throws Exception {
		assertNotNull(this.entry);
		assertNotNull(this.resourceName);
		assertNotNull(this.parserN4JS);
		// actual test code
	}
}

14.5. Issue Suppression

It can be useful to suppress certain issues before tests are run, so that test expectations don't have to consider inessential warnings. This means that the validator still returns a full list of issues, but before passing them to the testing logic, the issues are filtered.
When working with JUnit tests, the custom InjectorProvider N4JSInjectorProviderWithIssueSuppression can be used to configure them to suppress issues. The issue codes that are suppressed are globally specified by the DEFAULT_SUPPRESSED_ISSUE_CODES_FOR_TESTS constant in N4JSLanguageConstants. When working with Xpect tests, the XpectSetupFactory SuppressIssuesSetup can be used. See Xpect Issue Suppression for more details on Xpect issue suppression.

14.6. Xpect Tests

For many tests, Xpect [Xpect] is used. Xpect allows for defining tests inside the language which is the language under test. That is, it is possible to refer to a JUnit test method in a specially annotated comment, along with arguments passed to that method (typically expectations and the concrete location). Xpect comes with a couple of predefined methods which can be used there, e.g., tests for checking whether some expected error messages are actually produced. We have defined (and will probably define more) N4JS-specific test methods. In the following, we describe the most common Xpect test methods we use. Note that we do not use all types of tests shipped with Xpect. For example, AST tests (comparing the actual AST with an expected AST, using string dumps) are too hard to maintain.

An Xpect test can be ignored by inserting a ! between XPECT and the test name, e.g.

// XPECT ! errors --> 'message' at "location"

14.6.1. Xpect Test Setup

The setup is either defined in the file itself, e.g.,

/* XPECT_SETUP org.eclipse.n4js.spec.tests.N4JSSpecTest END_SETUP */

or bundle-wide for a specific language in the plugin.xml (or fragment.xml), e.g.,

<extension point="org.xpect.testSuite">
	<testSuite class="org.eclipse.n4js.spec.tests.N4JSSpecTest" fileExtension="n4js" />
</extension>

14.6.2. Xpect Issue Suppression

To configure an Xpect test class to suppress issues, you have to use the @XpectImport annotation to import the XpectSetupFactory org.eclipse.n4js.xpect.validation.suppression.SuppressIssuesSetup. Any Xpect test that is executed by this runner will work on the filtered list of issues. Similar to issue-suppressing JUnit tests, the suppressed issue codes are specified by the DEFAULT_SUPPRESSED_ISSUE_CODES_FOR_TESTS constant in N4JSLanguageConstants.

For further per-file configuration a custom XPECT_SETUP parameter can be used. This overrides the suppression configuration of an Xpect runner class for the current file.

/* XPECT_SETUP org.eclipse.n4js.tests.N4JSXpectTest
	IssueConfiguration {
		IssueCode "AST_LOCAL_VAR_UNUSED" {enabled=true}
	}
END_SETUP */

In this example the issue code AST_LOCAL_VAR_UNUSED is explicitly enabled, which means that no issue with this issue code will be suppressed.

14.6.3. Predefined Xpect Test Methods

Definition

Single line:

// XPECT errors --> 'message' at "location"

Multi line:

/* XPECT errors ---
	'message_1' at "location_1"
	...
	'message_n' at "location_n"
--- */

Description

Checks that one or more errors are issued at the given location and compares the actual messages at that location with the expected messages specified in the test. Also see noerrors below.

Definition

Single line:

// XPECT warnings --> 'message' at "location"

Multi line:

/* XPECT warnings ---
	'message_1' at "location_1"
	...
	'message_n' at "location_n"
--- */

Description

Checks that one or more warnings are issued at the given location and compares the actual messages at that location with the expected messages specified in the test.
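For illustration, a hedged single-line example of the warnings method (the concrete warning message is hypothetical and depends on the actual validator output); errors is used analogously:

// XPECT warnings --> 'The local variable tmp is never used.' at "tmp"
var tmp = 42;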
14.6.4. N4JS Specific Xpect Test Methods

There are a lot of N4JS-specific Xpect test methods available. To get all of these methods, search for references to the annotation org.xpect.runner.Xpect in the N4JS test plugins.

Definition

Single line:

// XPECT noerrors --> 'messageOrComment' at "location"

Multi line:

/* XPECT noerrors ---
	'messageOrComment_1' at "location_1"
	...
	'messageOrComment_n' at "location_n"
--- */

Provided by

NoerrorsValidationTest

Description

Checks that no error (or warning) is issued at the given location. This test is, roughly speaking, the opposite of errors. The idea behind this test is to replace comments in the code stating that an expression is assumed to be valid with an explicit test. This is in particular useful when you start working on a task in which there are (wrong) errors at a given position, or for bug reports.

Example

function foo(any o): number {
	if (o instanceof string) {
		// XPECT noerrors --> "effect system knows that o is a string" at "o"
		return o.length;
	}
	return 0;
}

is clearer and more explicit than

function foo(any o): number {
	if (o instanceof string) {
		// here should be no error:
		return o.length;
	}
	return 0;
}

Also, the noerrors version will fail with a correct description, while the second one would fail with a general error and no location. Once the feature is implemented, regressions are detected much more easily with the explicit version.

Definition

Single line:

// XPECT scope at 'location' --> [!]name_1, ..., [!]name_n [, ...]

Multi line:

/* XPECT scope at 'location' ---
	[!]name_1, ..., [!]name_n [, ...]
--- */

Provided by

PositionAwareScopingXpectTest

Description

Checks that the expected elements are actually found in the scope (or explicitly not found, when ! is used). This is a modified version of the Xpect built-in scope test, ensuring that elements which are only put into the scope when they are explicitly requested are found as well.

Example

// XPECT scope at 'this.|$data_property_b' --> a, b, $data_property_b, !newB, ...
return this.$data_property_b + "_getter";
Definition

Single line:

// XPECT scopeWithPosition at 'location' --> [!]name_1 - pos_1, ..., [!]name_n - pos_n [, ...]

Multi line:

/* XPECT scopeWithPosition at 'location' ---
	[!]name_1 - pos_1, ...,
	[!]name_n - pos_n [, ...]
--- */
Provided by

PositionAwareScopingXpectTest

Description

Checks that the expected elements are actually found in the scope (or explicitly not found, when ! is used). The concrete syntax of the position, which is usually the line number, or the line number prefixed with T if a type element is referenced, is described in EObjectDescriptionToNameWithPositionMapper.

Example
/* XPECT scopeWithPosition at foo2 ---
	b - 9,
	c - 25,
	foo - T3,
	foo2 - T9,
	...
--- */
foo2()
Definition

Single line:

//

Multi line:

Provided by

N4JSXpectTest

Description

Compares scope including resource name but not line number.

Definition

Single line:

//

Multi line:

Provided by

N4JSXpectTest

Description

Checks that a given element is bound to something identified by a (simple) qualified name. The check is designed to be as simple as possible: simply the next following expression is tested, and within that we expect a property access or a directly identifiable element. The compared name is the simple qualified name, that is, the container (type) followed by the element's name, without URIs of modules etc.

Definition

Single line:

// XPECT linkedPathname at 'location' --> pathname
Provided by

Description

Checks that an identifier is linked to a given element identified by its path name. The path name is the qualified name in which the segments are separated by '/'. This test does not use the qualified name provider, as the provider may return null for non-globally available elements. Instead, it computes the name again using reflection, joining all name properties of the target and its containers.

Example
// XPECT linkedPathname at 'foo()' --> C/foo
new C().foo();
Definition

Single line:

// XPECT type of 'location' --> type
Provided by

N4JSXpectTest

Description

Checks that the type inferred at the given location matches the expected type.

Example
// XPECT type of 'x' --> string
var x = 'hello';
// XPECT type of 'foo()' --> union{A,number}
var any y = foo();
Definition


Single line:

// XPECT expectedType at 'location' --> Type

The location (at) is optional.
Provided by

N4JSXpectTest

Description

Checks that an element/expression has a certain expected type (i.e., the Xsemantics judgment expectedTypeIn).
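A hedged example (assuming the expected type of the right-hand side of an assignment is the declared type of the variable on the left-hand side):

var s: string;
// XPECT expectedType at '"hello"' --> string
s = "hello";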

Definition

Single line:

// XPECT elementKeyword at 'myFunction' --> function
Example
interface I {
	fld: int;
	get g(): string;
	set s(p: string);
}

// XPECT elementKeyword at 'string' --> primitive
var v1: string;

// XPECT elementKeyword at 'I' --> interface
var i: I;

// XPECT elementKeyword at 'fld' --> field
i.fld;
Provided by

ElementKeywordXpectMethod

Description

Checks that an element/expression has a certain element keyword. The expected element keyword is identical to the element keyword shown when hovering the mouse over that element/expression in the N4JS IDE. This method is particularly useful for testing merged elements of a union/intersection.

Definition

Single line:

// XPECT accessModifier at 'myFunction' --> public

or

// XPECT accessModifier --> public
Example
// XPECT accessModifier --> publicInternal
export @Internal public abstract class MyClass2 {

	// XPECT accessModifier --> project
	abstract m1(): string;

	// XPECT accessModifier at 'm2' --> project
	m2(): string {
		return "";
	}
}
Provided by

AccessModifierXpectMethod

Description

Checks that an element/expression has a certain accessibility.

Definition

Single line:

//

Multi line:

Provided by

-

Description

This test should only be used during development of the compiler and not in the long run, because this kind of test is extremely difficult to maintain.

Definition

Single line:

//

Multi line:

Provided by

-

Description

The most important test for the compiler/transpiler, but also for ensuring that N4JS-internal validations and assumptions hold at runtime.

Definition

Single line:

/* XPECT lint */
Provided by

CompileAndLintTest

Description

Passes the generated code through the JSHint JavaScript linter. This includes, for instance, checking for missing semicolons and undefined variables. The whole test refers exclusively to the generated JavaScript code.
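A hedged sketch of a minimal lint test file (the XPECT_SETUP header of the enclosing test suite is omitted):

/* XPECT lint */
var greeting = "hello";
console.log(greeting);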

Definition

Single line:

/* XPECT lintFails */
Provided by

CompileAndLintTest

Description

Negation of lint. Fails on linting success. Expects linting errors.

14.6.5. FIXME Xpect modifier

A modification of the official Xpect framework allows us to use the FIXME modifier on each test. [15]

Syntax

FIXME can be applied on any test just after the XPECT keyword:

// XPECT FIXME xpectmethod ... --> ...

Tests will still be ignored if an exclamation mark (!) is put between XPECT and FIXME.
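For illustration, such an ignored test might look as follows (the test method and its arguments are placeholders):

// XPECT ! FIXME noerrors -->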

Description

Using FIXME on a test negates the result of the underlying JUnit test framework. Thus a failure will be reported as a true assertion, and an assertion that holds will be reported as a failure. This makes it possible to author valuable tests of behaviour that is not yet functional.

Example

For instance, if we encounter an error message at a certain code position although the code is perfectly right, then we have an issue. We can annotate the situation with a FIXME noerrors expectation:

// Perfectly right behaviour XPECT FIXME noerrors -->
console.log("fine example code with wrong error marker here.");

This turns the script into an Xpect test. We can integrate the test right away into our test framework and it will not break our build (even though the bug is not fixed).

When the issue is worked on, the developer starts by removing FIXME, turning this into a useful unit test.

It is crucial to understand that FIXME negates the whole assertion. Example: if one expects an error marker at a certain position using the errors directive, one must give the exact wording of the expected error message to actually get the FIXME behaviour working. To avoid strange behaviour it is useful to describe the expected error in a comment in front of the expectation and to leave the message section empty.
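A hedged sketch of this convention (the code line and the descriptive comment are placeholders):

// expected error once the missing validation is implemented: describe it here
// XPECT FIXME errors --> '' at "someExpression"
someExpression;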

14.6.6. Expectmatrix Xpect tests

Applying test-driven development begins with authoring acceptance and functional tests for the work in the current sprint. This ensures the overall code quality for the current tasks to solve. Rerunning all collected tests with each build ensures the quality of tasks solved in the past. Currently there is no real support for tasks which are not in progress but are known to be processed in the near or far future. Capturing non-trivial bug reports and turning them into reproducible failing test cases is one example.

Usually people deactivate those future-task tests in the test code by hand. This approach does not allow calculating any metrics about the code. One such metric would be: is there any reported bug that gets solved just by working on a (seemingly unrelated) scheduled task?

To achieve measurements about known problems, a special build scenario is set up. As a naming convention, all classes with names matching *Pending are assumed to be JUnit tests. In bundle org.eclipse.n4js.expectmatrix.tests two different Xpect runners are provided, each working on its own subfolder. Usual Xpect tests are organised in folder xpect-test, while all future tests are placed in folder xpect-pending. A normal Maven build processes only the standard JUnit and Xpect tests. Starting a build with profile execute-expectmatrix-pending-tests will additionally execute the Xpect tests from folder xpect-pending and, for all bundles inheriting from /tests/pom.xml, all unit tests ending in *Pending. This profile is deactivated by default.

A special Jenkins job - N4JS-IDE_nightly_bugreports_pending - is configured to run the pending tests and render an overview and history to compare issues over time. Due to internal Jenkins structures this build is always marked as failed, even though the Maven build succeeds.

Relevant additional information can be found in

14.6.7. Xpect Lint Tests

Figure 46. Xpect Lint

The test transpiles the provided N4JS resource and checks the generated code. This is achieved using the JavaScript linter JSHint.

After transpiling the provided N4JS resource, the LintXpectMethod combines the generated code with the JSHint code into a script. It calls the JSHint validation function and returns the linting result as a JSON object. The error results are displayed in the console. The script is executed using the Engine class.

For the linting process an adapted configuration for JSHint is used. To meet the needs of N4JS, the linter is configured to recognise N4JS-specific globals. Details about the error codes can be found in the JSHint repository.

The following warnings are explicitly enabled/disabled:

• W069: ['a'] is better written in dot notation. DISABLED

• W033: Missing semicolon. ENABLED

• W014: Bad line breaking before 'a'. DISABLED

• W032: Unnecessary semicolon. ENABLED

• W080: It's not necessary to initialize 'a' to 'undefined'. ENABLED

• W078: Setter is defined without getter. DISABLED

• ES6-related warnings are disabled using the 'esnext' option:

	• W117: Symbol is not defined. DISABLED

	• W104: 'yield' is only available in ES6. DISABLED

	• W117: Promise is not defined. DISABLED

	• W119: function* is only available in ES6. DISABLED

The Xpect lint test only applies if the provided resource passes the N4JS compiler.

The Xpect method lintFails can be used to create negative tests. All linting issues discovered during the development of the Xpect plugin have their own negative test to keep track of their existence.

14.7. Xpect Proposal Tests

Proposal tests are all tests which verify the existence and application of completion proposals created by content assist, quick fixes, etc.

The key attributes of a proposal (cf. ConfigurableCompletionProposal) are:

displayString

the string displayed in the proposal list

replacementString

the string (or a simple variant of it) that is to be added to the document; not necessarily the whole replacement (as this may affect several locations and even involve user interaction)

In the tests, a proposal is identified by a string contained in the displayString. If several proposals match, the test will fail (the test setup has to be rewritten or the proposal identifier made longer). The proposal identifier should be as short as possible (to make the test robust), but not too short (to keep the test readable).

The following proposal tests are defined:

contentAssist [List]

verifies proposals created by content assist

quickFix [List]

verifies proposals created by quick fixes. The cursor position is relevant; that is handled by the framework. We only create tests with cursor position - fixes applied via the problem view should work similarly, but without a final cursor position.

If no error is found at the given position, the test will fail (with an appropriate error message!). In all cases of apply, the issue must be resolved. Usually, fixing an issue may leave the file invalid, either because other issues still exist or because fixing one issue may introduce others (which may happen often, as we try to avoid consequential errors in validation). For some special cases, quickFix tests support special features, see below.

Not tested in this context: verifying the proposal description, as these tests would be rather hard to maintain and the descriptions are often computed asynchronously.

14.7.1. Validation vs. Non-Validation

We expect proposal tests to be applied to non-valid test files, and usually the file is also broken after a proposal has been applied. Thus, the test suite must not fail if the file is not valid.

14.7.2. General Proposal Test Features

14.7.2.1. Test Variables

Often, lists of proposals are similar for different tests (which define different scenarios in which the proposals should be generated). For that reason, variables can be defined in the test setup:

In the Xpect setup there is now a special Config component where specific switches are accessible. For instance, the timeout switch for content assist can be modified:

/* XPECT_SETUP org.eclipse.n4js.tests.N4JSNotValidatingXpectPluginUITest
...
Config {
content_assist_timeout = 1000
...
}
...
*/

Note: There should only be one Config component per Xpect setup.

Variables are introduced via the VarDef component. It takes a string argument as the variable name on construction. Inside the body one adds MemberList and StringList arguments. Variable definitions may appear in Config bodies or in the Xpect setup.

VarDef "objectProposals" {
...
}

Define variables with an expression: a simple selector is given with the MemberList component. This component takes three string arguments in its constructor: the first one is a type name, the second one is the feature selector (e.g. methods, fields, ...), and the third one defines the visibility.

/* XPECT_SETUP
VarDef "stringProposals" { MemberList  "String" "methods" "public" {}}
END_SETUP */

We have to define a filter later in Xtend/Java, e.g., getClassWithName(className).filterType(methods).filterVisibility(accessModifier)...

A variable is later referenced as follows:

<$variable>

Usage example:

// XPECT contentAssistList at 'a.<|>methodA' proposals --> <$stringProposals>, methodA2
a.methodA
14.7.2.2. at – Location and Selection

Tokens in expectation/setup:

• <|> cursor position

• <[> selection start → also defines cursor position

• <]> selection end

All proposal tests have to specify a location via at; the location must contain the cursor position and may contain a selection. E.g.:
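Since the section breaks off here, the following hedged example illustrates the intended shape, in the style of the contentAssistList usage above (method names are placeholders):

// XPECT contentAssist at 'a.<|>meth' proposals --> meth1, meth2
a.meth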