\clearpage
\chapter{The Epsilon Unit Testing Framework (EUnit)}
\label{chp:eunit}
EUnit is a unit testing framework specifically designed for testing model management tasks, built on EOL and the Ant workflow tasks. It provides assertions for comparing models, files and directories. Tests can be reused with different sets of models and input data, and differences between the expected and actual models can be visualized graphically. This chapter discusses the motivation behind EUnit, describes how tests are organized and written, and shows two examples of how a model-to-model transformation can be tested with EUnit. The chapter ends with a discussion of how EUnit can be extended to support other modelling and model management technologies.
\section{Motivation}
\label{sec:eunit-motiv-test-model}
Model-driven approaches are being adopted in a wide range of demanding environments, such as finance, health care or telecommunications~\cite{Guttman2006}. In this context, validation and verification is identified as one of the many challenges of model-driven software engineering (MDSE)~\cite{Chaudron2009}.
MDSE in practice involves creating models, and thereafter \textit{managing} them, via various tasks, such as model transformation, validation and merging. The validation and verification of each type of model management task has its own specific challenges. Kolovos et al.\ list testing concerns for model-to-model (M2M) and model-to-text (M2T) transformations, model validations, model comparisons and model compositions in~\cite{EUnit}. Baudry et al.\ identify three main issues when testing model transformations~\cite{Baudry2010}: the complexity of the input and output models, the immaturity of the model management environments and the large number of different transformation languages and techniques.
\subsection{Common Issues}
\label{sec:eunit-common-issues}
While each type of model management task poses its own specific challenges, some of the concerns raised by Baudry et al.\ can be generalized to all model management tasks:
\begin{itemize}
\item There is usually a large number of models to be handled. Some may be created by hand, some may be generated using hand-written programs, and some may be generated automatically following certain coverage criteria.
\item A single model or set of models may be used in several tasks. For instance, a model may be validated before performing an in-place transformation to assist the user, and later on it may be transformed to another model or merged with a different model. This requires having at least one test for each valid combination of models and sets of tasks.
\item Test oracles are more complex than in traditional unit testing~\cite{Mottu2008}: instead of checking scalar values or simple lists, entire graphs of model objects or file trees may have to be compared. In some cases, complex properties in the generated artifacts may have to be checked.
\item Models and model management tasks may use a wide range of technologies. Models may be based on Ecore~\cite{Steinberg2008}, XML files or Java object graphs, among many others. At the same time, tasks may use technologies from different platforms, such as Epsilon, oAW~\cite{oAW} or AMMA~\cite{AMMA}. Many of these technologies offer high-level tools for running and debugging the different tasks using several models. However, users wishing to do automated unit testing need to learn low-level implementation details about their modelling and model management technologies. This increases the initial cost of testing these tasks and hampers the adoption of new technologies.
\item Existing testing tools tend to focus on the testing technique itself, and lack integration with external systems. Some tools provide graphical user interfaces, but most do not generate reports which can be consumed by a continuous integration server, for instance.
\end{itemize}
\subsection{Testing with JUnit}
\label{sec:testing-with-junit}
The previous issues are easier to understand with a concrete example. This section shows how a simple ETL transformation between two EMF models would normally be tested with JUnit 4~\cite{JUnit2011}, and points out several issues stemming from JUnit's limitations as a general-purpose unit testing framework for Java programs.
For the sake of brevity, only an outline of the JUnit test suite is included. All JUnit test suites are defined as Java classes. This test suite has three methods:
\begin{enumerate}
\item The test setup method (marked with the \javaannotation{Before} JUnit annotation) loads the required models by creating and configuring instances of \class{EmfModel}. After that, it prepares the transformation by creating and configuring an instance of \class{EtlModule}, adding the input and output models to its model repository.
\item The test case itself (marked with \javaannotation{Test}) runs the ETL transformation and uses the generic comparison algorithm implemented by EMF Compare to perform the model comparison.
\item The test teardown method (marked with \javaannotation{After}) disposes of the models.
\end{enumerate}
Several issues can be identified in each part of the test suite. First, test setup is tightly bound to the technologies used: it depends on the API of the \class{EmfModel} and \class{EtlModule} classes, which are both part of Epsilon. Later refactorings in these classes may break existing tests.
The test case can only be used for a single combination of input and output models. Testing several combinations requires either repeating the same code and therefore making the suite less maintainable, or using parametric testing, which may be wasteful if not all tests need the same combinations of models.
Model comparison requires the user to manually select a model comparison engine and integrate it with the test. For comparing EMF models, EMF Compare is easy to use and readily available. However, generic model comparison engines may not be available for some modelling technologies, or may be harder to integrate.
Finally, instead of comparing the obtained and expected models, several properties could have been checked in the obtained model. However, querying models through Java code can be quite verbose.
\subsection{Selected Approach}
\label{sec:eunit-selected-approach}
Several approaches could be followed to address these issues. Our first instinct would be to extend JUnit and reuse all the tooling available for it. A custom test runner would simplify setup and teardown, and modelling platforms would integrate their technologies into it. Since Java is very verbose when querying models, the custom runner should run tests in a higher-level language, such as EOL. However, JUnit is very tightly coupled to Java, and this would impose limits on the level of integration we could obtain. For instance, errors in the model management tasks or the EOL tests could not be reported from their original source, but rather from the Java code which invoked them. Another problem with this approach is that new integration code would need to be written for each of the existing platforms.
Alternatively, we could add to the Epsilon family a new language dedicated exclusively to testing. Being based on EOL, model querying would be very concise, and with a test runner written from scratch, test execution would be very flexible. However, this would still require all platforms to write new code to integrate with it, and this code would be tightly coupled to Epsilon.
As a middle ground, we could decorate EOL to guide its execution through a new test runner, while reusing the Apache Ant~\cite{ANT} tasks already provided by several of the existing platforms, such as AMMA or Epsilon. Like Make, Ant is a tool focused on automating the execution of processes such as program builds. Unlike Make, Ant defines processes using XML \emph{buildfiles} with sets of interrelated \emph{targets}. Each target contains in turn a sequence of \emph{tasks}. Many Ant tasks and Ant-based tools already exist, and it is easy to create a new Ant task.
Among these three approaches, EUnit follows the last one. Ant tasks take care of model setup and management, and tests are written in EOL and executed by a new test runner, written from the ground up.
\section{Test Organization}
\label{sec:eunit-test-organization}
EUnit has a rich data model: test suites are organized as trees of tests, and each test is divided into several parts which can be extended by the user. This section describes how test suites and tests are organized; the next section shows how they are written.
\subsection{Test Suites}
EUnit test suites are organized as trees: inner nodes group related test cases and define \emph{data} bindings. Leaf nodes define \emph{model} bindings and run the test cases.
Data bindings repeat all test cases with different values in one or more variables, implementing parametric testing as in JUnit 4. EUnit can nest several data bindings, running all test cases once for each combination of values.
Model bindings are specific to EUnit: they allow developers to repeat a single test case with different subsets of models. Data and model bindings can be combined. One interesting approach is to set the names of the models to be used in the model binding from the data binding, as a quick way to try several test cases with the same subsets of models.
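For instance, the following sketch uses a data binding to supply the name of the model which the model binding will install as the default model. The model names \modelname{X} and \modelname{Y} and the \eol{Tree} type are hypothetical, and the sketch assumes the model binding expression can refer to the data binding variable, as described above.
\begin{lstlisting}[language=EOL,columns=fixed,caption=Sketch of a data binding driving a model binding,label=lst:eunit-data-model-sketch]
-- the data binding supplies a model name...
@data modelName
operation models() { return Sequence {"X", "Y"}; }

-- ...and the model binding installs it as the default model
$with Map {"" = modelName}
@test
operation holdsOnEveryModel() {
  assertTrue(Tree.allInstances.size() > 0);
}
\end{lstlisting}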
\begin{figure}
\centering
\input{tikz/test-organization.tikz}
\caption{Example of an EUnit test tree}
\label{fig:eunit-test-tree}
\end{figure}
Figure~\ref{fig:eunit-test-tree} shows an example of an EUnit test tree: nodes with data bindings are marked with \eol{data}, and nodes with model bindings are marked with \eol{model}. EUnit will perform a preorder traversal of this tree, running the following tests:
\begin{enumerate}
\item \test{A} with \variable{x = 1} and model X.
\item \test{A} with \variable{x = 1} and model Y.
\item \test{B} with \variable{x = 1} and both models.
\item \test{A} with \variable{x = 2} and model X.
\item \test{A} with \variable{x = 2} and model Y.
\item \test{B} with \variable{x = 2} and both models.
\end{enumerate}
Optionally, EUnit can filter tests by name, running only \test{A} or \test{B}. Similarly to JUnit, EUnit logs start and finish times for each node in the tree, so the most expensive test cases can be quickly detected. However, EUnit logs CPU time\footnote{CPU time only measures the time elapsed in the thread used by EUnit, and is more stable with varying system load in single-threaded programs.} in addition to the usual wallclock time.
Parametric testing is not to be confused with \emph{theories}~\cite{Saff2007}: both repeat a test case with different values, but results are reported quite differently. While parametric testing produces separate test cases with independent results, theories produce aggregated tests which only pass if the original test case passes for every data point. Figure~\ref{fig:parametric-vs-theories} illustrates these differences. EUnit does not support theories yet; however, they can be approximated with data bindings.
\begin{figure}
\centering
\subbottom[Parametric testing]{
\input{tikz/parametric-vs-theory.tikz}
}
\subbottom[Theories]{
\newcommand{\typea}{test}
\newcommand{\typeb}{data}
\input{tikz/parametric-vs-theory.tikz}
}
\caption{Comparison between parametric testing and theories}
\label{fig:parametric-vs-theories}
\end{figure}
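As noted above, theories can be approximated with data bindings: each data point simply becomes an independently reported test case. A minimal sketch (the property being checked is hypothetical):
\begin{lstlisting}[language=EOL,columns=fixed,caption=Sketch of a theory approximated with a data binding,label=lst:eunit-theory-sketch]
@data point
operation points() { return Sequence {-2, 0, 3}; }

@test
operation squareIsNonNegative() {
  -- a real theory would aggregate these three runs
  -- into a single result
  assertTrue(point * point >= 0);
}
\end{lstlisting}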
\subsection{Test Cases}
\label{sec:eunit-test-cases}
The execution of a test case is divided into the following steps:
\begin{enumerate}
\item Apply the data bindings of its ancestors.
\item Run the model setup sections defined by the user.
\item Apply the model bindings of this node.
\item Run the regular setup sections defined by the user.
\item Run the test case itself.
\item Run the teardown sections defined by the user.
\item Tear down the data bindings and models for this test.
\end{enumerate}
An important difference between JUnit and EUnit is that setup is split into two parts: model setup and regular setup. This split allows users to add code before and after model bindings are applied. Normally, the model setup sections will load all the models needed by the test suite, and the regular setup sections will further prepare the models selected by the model binding. Explicit teardown sections are usually not needed, as models are disposed automatically by EUnit. EUnit includes them for consistency with the xUnit frameworks.
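A minimal sketch of this split is shown below; the \eol{Tree} type is hypothetical, and the \eolannotation{@model} and \eolannotation{@setup} annotations are described in Section~\ref{sec:eunit-test-specification}:
\begin{lstlisting}[language=EOL,columns=fixed,caption=Sketch of the split between model setup and regular setup,label=lst:eunit-setup-split]
@model
operation loadAllModels() {
  -- model setup: load every model the suite may need,
  -- e.g. from the Ant buildfile or an HUTN fragment
}

@setup
operation populateBoundModel() {
  -- regular setup: runs after the model bindings are
  -- applied, so type references resolve against the
  -- models selected for the current test
  var root := new Tree;
  root.label := "root";
}
\end{lstlisting}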
Due to its focus on model management, model setup in EUnit is very flexible. Developers can combine several ways to set up models, such as model references, individual Apache Ant~\cite{ANT} tasks, Apache Ant targets or Human-Usable Text Notation (HUTN)~\cite{HUTN} fragments.
A test case may produce one of several results. \eol{SUCCESS} is obtained if all assertions passed and no exceptions were thrown. \eol{FAILURE} is obtained if an assertion failed. \eol{ERROR} is obtained if an unexpected exception was thrown while running the test. Finally, tests may be \eol{SKIPPED} by the user.
\section{Test Specification}
\label{sec:eunit-test-specification}
In the previous section, we described how test suites and test cases are organized. In this section, we will show how to write them.
As discussed before, after evaluating several approaches, we decided to combine the expressive power of EOL and the extensibility of Apache Ant. For this reason, EUnit test suites are split into two files: an Ant buildfile and an EOL script with some special-purpose annotations. The next subsections describe the contents of these two files and revisit the previous example with EUnit.
\subsection{Ant Buildfile}
EUnit uses standard Ant buildfiles: running EUnit is as simple as using its Ant task. Users may run EUnit more than once in a single Ant launch: the graphical user interface will automatically aggregate the results of all test suites.
\subsubsection{EUnit Invocations}
An example invocation of the EUnit Ant task using the most common features is shown in Listing~\ref{lst:example-eunit-ant}. In practice, users will normally use only some of these features at a time. Optional attributes are shown in brackets. Some nested elements can be repeated zero or more times (\verb!*!) or at most once (\verb!?!). Valid alternatives for an attribute are separated with \verb!|!.
\begin{lstlisting}[language=XML,columns=fixed,caption=Format of an invocation of the EUnit Ant task,label=lst:example-eunit-ant]
<epsilon.eunit src="..."
    [failOnErrors="..."]
    [package="..."]
    [toDir="..."]
    [report="yes|no"]>
  (<model ref="OldName" [as="NewName"]/>)*
  (<uses ref="x" [as="y"] />)*
  (<exports ref="z" [as="w"] />)*
  (<parameter name="myparam" value="myvalue" />)*
  (<modelTasks><!-- Zero or more Ant tasks --></modelTasks>)?
</epsilon.eunit>
\end{lstlisting}
The EUnit Ant task is based on the Epsilon abstract executable module task (see Section~\ref{sec:ExecutableModuleTask}), inheriting some useful features. The attribute \xmlattribute{src} points to the path of the EOL file, and the optional attribute \xmlattribute{failOnErrors} can be set to \eol{false} to prevent EUnit from aborting the Ant launch if a test case fails. EUnit also inherits support for importing and exporting global variables through the \xmlelement{uses} and \xmlelement{exports} elements: the original name is set in \xmlattribute{ref}, and the optional \xmlattribute{as} attribute allows for using a different name. For receiving parameters as name-value pairs, the \xmlelement{parameter} element can be used.
Model references (using the \xmlelement{model} nested element) are also inherited from the Epsilon abstract executable module task. These allow model management tasks to refer by name to models previously loaded in the Ant buildfile. However, EUnit implicitly reloads the models after each test case. This ensures that test cases are isolated from each other.
The EUnit Ant task adds several new features to customize the test result reports and perform more advanced model setup. By default, EUnit generates reports in the XML format of the Ant \xmlelement{junit} task. This format is also used by many other tools, such as the TestNG unit testing framework~\cite{TestNG}, the Jenkins continuous integration server~\cite{Jenkins} or the JUnit Eclipse plug-ins. To suppress these reports, \xmlattribute{report} must be set to \eol{no}.
By default, the XML report is generated in the same directory as the Ant buildfile, but it can be changed with the \xmlattribute{toDir} attribute. Test names in JUnit are formed from the Java package, class and method: EUnit uses the filename of the EOL script as the class name and the name of the EOL operation as the method name. By default, the package is set to the string ``default'': users are encouraged to customize it with the \xmlattribute{package} attribute.
The optional \xmlelement{modelTasks} nested element contains a sequence of Ant tasks which will be run after reloading the model references and before running the model setup sections in the EOL file. This allows users to run workflows more advanced than simply reloading model references, such as the one in Listing~\ref{lst:eunit-ex1-ant}.
\subsubsection{Helper Targets}
Ant buildfiles for EUnit may include \emph{helper targets}. These targets can be invoked using \eol{runTarget("targetName")} from anywhere in the EOL script. Helper targets are quite versatile: called from an EOL model setup section, they allow for reusing model loading fragments between different EUnit test suites. They can also be used to invoke the model management tasks under test. Listing~\ref{lst:eunit-ex1-ant} shows a helper target for an ETL transformation, and Listing~\ref{lst:eunit-atl} shows a helper target for an ATL transformation.
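For instance, a model setup section in the EOL script can delegate all model loading to a helper target, so that the same fragment can be shared across suites; the target name below is hypothetical:
\begin{lstlisting}[language=EOL,columns=fixed,caption=Sketch of a model setup section reusing a helper target,label=lst:eunit-helper-sketch]
@model
operation loadSharedModels() {
  -- reuses the model loading fragment in the buildfile
  runTarget("load-common-models");
}
\end{lstlisting}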
\subsection{EOL Script}
\label{sec:eunit-eol-script}
The Epsilon Object Language script is the second half of the EUnit test suite. EOL annotations are used to tag some of the operations as data binding definitions (\eolannotation{@data} or \eolannotation{@Data}), additional model setup sections (\eolannotation{@model}/\eolannotation{@Model}), test setup and teardown sections (\eolannotation{@setup}/\eolannotation{@Before} and \eolannotation{@teardown}/\eolannotation{@After}) and test cases (\eolannotation{@test}/\eolannotation{@Test}). Suite setup and teardown sections can also be defined with \eolannotation{@suitesetup}/\eolannotation{@BeforeClass} and \eolannotation{@suiteteardown}/\eolannotation{@AfterClass} annotations: these operations will be run before and after all tests, respectively.
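The following skeleton, with placeholder operation names and bodies, shows where each kind of annotated operation fits in the life cycle of a suite:
\begin{lstlisting}[language=EOL,columns=fixed,caption=Skeleton of an EUnit test suite,label=lst:eunit-skeleton]
@suitesetup
operation openResources() {
  -- runs once, before all tests
}

@setup
operation prepareTest() {
  -- runs before each test
}

@test
operation somethingHolds() {
  assertTrue(true);
}

@teardown
operation cleanUpTest() {
  -- runs after each test
}

@suiteteardown
operation closeResources() {
  -- runs once, after all tests
}
\end{lstlisting}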
\subsubsection{Data bindings}
Data bindings repeat all test cases with different values in some variables. To define a data binding, users must define an operation which returns a sequence of elements and is marked with \eolannotation{@data variable}. All test cases will be repeated once for each element of the returned sequence, setting the specified variable to the corresponding element. Listing~\ref{lst:eunit-databind} shows two nested data bindings and a test case which will be run four times: with \variable{x=1} and \variable{y=5}, \variable{x=1} and \variable{y=6}, \variable{x=2} and \variable{y=5}, and finally \variable{x=2} and \variable{y=6}. The example shows how \variable{x} and \variable{y} could be used by the setup section to generate an input model for the test. This can be useful if the intent of the test is to ensure that a certain property holds for a class of models, rather than for a single model.
\begin{lstlisting}[language=EOL,caption=Example of a 2-level data binding,label=lst:eunit-databind,columns=fixed]
@data x
operation firstLevel() { return 1.to(2); }

@data y
operation secondLevel() { return 5.to(6); }

@setup
operation generateModel() { -* generate model using x and y *- }

@test
operation mytest() { -* test with the generated model *- }
\end{lstlisting}
Alternatively, if both \variable{x} and \variable{y} were to use the same sets of values, we could add two \eolannotation{@data} annotations to the same operation. For instance, Listing~\ref{lst:eunit-databind-reused} shows how we could test four combinations: \variable{x=1} and \variable{y=1}, \variable{x=1} and \variable{y=2}, \variable{x=2} and \variable{y=1}, and \variable{x=2} and \variable{y=2}.
\begin{lstlisting}[language=EOL,caption=Example of reusing the same operation for several data bindings,label=lst:eunit-databind-reused,columns=fixed]
@data x
@data y
operation levels() { return 1.to(2); }

@setup
operation generateModel() { -* generate model using x and y *- }

@test
operation mytest() { -* test with the generated model *- }
\end{lstlisting}
\subsubsection{Model bindings}
Model bindings repeat a test case with different subsets of models. They can be defined by annotating a test case with \eolannotation{\$with map} or \eolannotation{\$onlyWith map} one or more times, where \eol{map} is an EOL expression that produces a \class{Map}. For each key-value pair \variable{key = value}, EUnit will rename the model named \variable{value} to \variable{key}. The difference between \eolannotation{\$with} and \eolannotation{\$onlyWith} is how they handle the models not mentioned in the \class{Map}: \eolannotation{\$with} will preserve them as is, and \eolannotation{\$onlyWith} will make them unavailable during the test. \eolannotation{\$onlyWith} is useful for tightly restricting the set of available models in a test and for avoiding ambiguous type references when handling multiple models using the same metamodel.
Listing~\ref{lst:eunit-modelbind} shows two tests which will each be run twice. The first test uses \eolannotation{\$with}, which preserves models not mentioned in the \class{Map}: the first time, model \modelname{A} will be the default model and model \modelname{B} will be the \modelname{Other} model, and the second time, model \modelname{B} will be the default model and model \modelname{A} will be the \modelname{Other} model. The second test uses two \eolannotation{\$onlyWith} annotations: on the first run, \modelname{A} will be available as \modelname{Model} and \modelname{B} will be unavailable, and on the second run, \modelname{B} will be available as \modelname{Model} and \modelname{A} will be unavailable.
\begin{lstlisting}[float=tbp, language=EOL,caption=Examples of model bindings,label=lst:eunit-modelbind,columns=fixed]
$with Map {"" = "A", "Other" = "B"}
$with Map {"" = "B", "Other" = "A"}
@test
operation mytest() {
  -* use the default and Other models, while
     keeping the rest as is *-
}

$onlyWith Map { "Model" = "A" }
$onlyWith Map { "Model" = "B" }
@test
operation mytest2() {
  -- first time: A as 'Model', B is unavailable
  -- second time: B as 'Model', A is unavailable
}
\end{lstlisting}
\subsubsection{Additional variables and built-in operations}
EUnit provides several variables and operations which are useful for testing. These are listed in Table~\ref{tab:eunit-operations}.
\begin{longtabu} {|p{6cm}|X|}
\caption{Extra operations and variables in EUnit}
\label{tab:eunit-operations}\\
\hline \textbf{Signature} & \textbf{Description} \\\hline
runTarget(name : String) & Runs the specified target of the Ant buildfile which invoked EUnit. \\\hline
exportVariable(name : String) & Exports the specified variable, to be used by another executable module (see Section~\ref{sec:ExecutableModuleTask}). \\\hline
useVariable(name : String) & Imports the specified variable, which should have been previously exported by another executable module (see Section~\ref{sec:ExecutableModuleTask}). \\\hline
loadHutn(name : String, hutn : String) & Loads an EMF model with the specified name, by parsing the second argument as an HUTN~\cite{HUTN} fragment. \\\hline
antProject : org.apache.tools.ant.Project & Global variable which refers to the Ant project being executed. This can be used to create and run Ant tasks from inside the EOL code. \\\hline
\end{longtabu}
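As an illustration of the variable sharing operations, the sketch below exports a value computed once during suite setup; a second executable module run later in the same Ant launch could then import it by calling \eol{useVariable}. The variable name and value are hypothetical.
\begin{lstlisting}[language=EOL,columns=fixed,caption=Sketch of exporting a variable from a suite setup section,label=lst:eunit-export-sketch]
@suitesetup
operation computeExpectedSize() {
  var expectedSize := 42; -- hypothetical value
  -- makes the variable visible to later modules
  exportVariable("expectedSize");
}
\end{lstlisting}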
\subsubsection{Assertions}
EUnit implements some common assertions for equality and inequality, with special versions for comparing floating-point numbers. EUnit also supports a limited form of exception testing with \eol{assertError}, which checks that the expression inside it throws an exception. Custom assertions can be defined by the user with the \eol{fail} operation, which fails a test with a custom message. The available assertions are shown in Table~\ref{tab:eunit-assertions}. Table~\ref{tab:eunit-comparator-options} lists the available option keys which can be used with the model equality assertions, by comparator.
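For instance, \eol{assertError} and a custom assertion built on \eol{fail} could be written as in the following sketch (the custom assertion is hypothetical):
\begin{lstlisting}[language=EOL,columns=fixed,caption=Sketch of exception testing and a custom assertion,label=lst:eunit-assert-sketch]
@test
operation divisionByZeroRaisesError() {
  -- passes only if evaluating the argument throws
  assertError(1 / 0);
}

-- custom assertion built on fail(), reusable across tests
operation assertStartsWithT(label : String) {
  if (not label.startsWith("t")) {
    fail("Expected '" + label + "' to start with 't'");
  }
}
\end{lstlisting}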
\newpage
\newcommand{\parameter}[1]{\newline{}\phantom{- }#1}
\begin{longtabu}{|p{6.5cm}|X|}
\caption{Assertions in EUnit}
\label{tab:eunit-assertions} \\\hline
\textbf{Signature} & \textbf{Description} \\\hline
assertEqualDirectories(\parameter{expectedPath : String},\parameter{obtainedPath : String}) & Fails the test if the contents of the directory in \variable{obtainedPath} differ from those of the directory in \variable{expectedPath}. Directory comparisons are performed on recursive snapshots of both directories.\\\hline
assertEqualFiles(\parameter{expectedPath : String},\parameter{obtainedPath : String}) & Fails the test if the contents of the file in \variable{obtainedPath} differ from those of the file in \variable{expectedPath}. File comparisons are performed on snapshots of both files.\\\hline
assertEqualModels(\parameter{[msg : String,]}\parameter{expectedModel : String},\parameter{obtainedModel : String}\parameter{[, options : Map]}) & Fails the test with the optional message \variable{msg} if the model named \variable{obtainedModel} is not equal to the model named \variable{expectedModel}. Model comparisons are performed on snapshots of the resource sets of both models. During EMF comparisons, XMI identifiers are ignored. Additional comparator-specific options can be specified through \variable{options}.\\\hline
assertEquals(\parameter{[msg : String,]}\parameter{expected : Any},\parameter{obtained : Any}) & Fails the test with the optional message \variable{msg} if the values of \variable{expected} and \variable{obtained} are not equal. \\\hline
assertEquals(\parameter{[msg : String,]}\parameter{expected : Real},\parameter{obtained : Real},\parameter{ulps : Integer}) & Fails the test with the optional message \variable{msg} if the values of \variable{expected} and \variable{obtained} differ by more than \variable{ulps} units of least precision. See \href{http://download.oracle.com/javase/6/docs/api/java/lang/Math.html\#ulp(double)}{this site} for details.\\\hline
assertError(expr : Any) & Fails the test if no exception is thrown during the evaluation of \variable{expr}.\\\hline
assertFalse(\parameter{[msg : String,]}\parameter{cond : Boolean}) & Fails the test with the optional message \variable{msg} if \variable{cond} is \eol{true}. It is a negated version of assertTrue. \\\hline
assertLineWithMatch(\parameter{[msg : String,]}\parameter{path : String},\parameter{regexp : String}) & Fails the test with the optional message \variable{msg} if the file at \variable{path} does not have a line containing a substring matching the regular expression \variable{regexp}\footnote{See \class{java.util.regex.Pattern} for details about the accepted syntax for regular expressions.}.\\\hline
assertMatchingLine(\parameter{[msg : String,]}\parameter{path : String},\parameter{regexp : String}) & Fails the test with the optional message \variable{msg} if the file at \variable{path} does not have a line that matches the regular expression \variable{regexp}\footnote{See footnote for \eol{assertLineWithMatch} for details about the syntax of the regular expressions.} from start to finish.\\\hline
assertNotEqualDirectories(\parameter{expectedPath : String},\parameter{obtainedPath : String}) & Negated version of assertEqualDirectories.\\\hline
assertNotEqualFiles(\parameter{expectedPath : String},\parameter{obtainedPath : String}) & Negated version of assertEqualFiles.\\\hline
assertNotEqualModels(\parameter{[msg : String,]}\parameter{expectedModel : String},\parameter{obtainedModel : String}) & Negated version of assertEqualModels.\\\hline
assertNotEquals(\parameter{[msg : String,]}\parameter{expected : Any},\parameter{obtained : Any}) & Negated version of assertEquals([msg : String,] expected : Any, obtained : Any). \\\hline
assertNotEquals(\parameter{[msg : String,]}\parameter{expected : Real},\parameter{obtained : Real},\parameter{ulps : Integer}) & Negated version of assertEquals([msg : String,] expected : Real, obtained : Real, ulps : Integer). \\\hline
assertTrue(\parameter{[msg : String,]}\parameter{cond : Boolean}) & Fails the test with the optional message \variable{msg} if \variable{cond} is \eol{false}.\\\hline
fail(msg : String) & Fails a test with the message \variable{msg}.\\\hline
\end{longtabu}
\begin{longtabu}{|p{7cm}|X|}
\caption{Available options by model comparator}
\label{tab:eunit-comparator-options} \\\hline
\textbf{Comparator and key} & \textbf{Usage} \\\hline
EMF, ``whitespace'' & When set to ``ignore'', differences in EString attribute values due to whitespace will be ignored. Disabled by default.\\\hline
EMF, ``ignore\-Attribute\-Value\-Changes'' & Can contain a \eol{Sequence} of
strings of the form ``package.class.attribute''. Differences in the
values for these attributes will be ignored. However, if the
attribute is set on one side and not on the other, the difference
will be reported as normal. Empty by default. \\\hline
EMF, ``unordered\-Moves'' & When set to ``ignore'', differences in the order of the elements within an unordered EReference will be ignored. Enabled by default. \\\hline
\end{longtabu}
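For instance, the sketch below compares two models while ignoring whitespace-only differences and changes to the values of one attribute; the model and attribute names are hypothetical:
\begin{lstlisting}[language=EOL,columns=fixed,caption=Sketch of a model comparison with comparator options,label=lst:eunit-options-sketch]
@test
operation modelsMatchIgnoringCosmeticChanges() {
  assertEqualModels("Expected", "Obtained", Map {
    "whitespace" = "ignore",
    "ignoreAttributeValueChanges"
      = Sequence {"graph.Node.name"}
  });
}
\end{lstlisting}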
More importantly, EUnit implements specific assertions for comparing models, files and trees of files. Model comparison is not implemented by the assertions themselves: it is an optional service implemented by some EMC drivers. Currently, EMF models will automatically use EMF Compare as their comparison engine. The rest of the EMC drivers do not support comparison yet. The main advantage of having an abstraction layer implement model comparison as a service is that the test case definition is decoupled from the concrete model comparison engine used.
Model, file and directory comparisons take a snapshot of their operands before comparing them, so EUnit can show the differences right at the moment when the comparison was performed. This is especially important when some of the models are generated on the fly by the EUnit test suite, or when a test case for code generation may overwrite the results of the previous one.
Figure~\ref{fig:screenshot-eunit} shows a screenshot of the EUnit graphical user interface. On the left, an Eclipse view shows the results of several EUnit test suites. We can see that the \eol{load-\allowbreak{}models-\allowbreak{}with-\allowbreak{}hutn} suite failed. The Compare button to the right of ``Failure Trace'' can be pressed to show the differences between the expected and obtained models, as shown on the right side. EUnit implements a pluggable architecture where \emph{difference viewers} are automatically selected based on the types of the operands. There are difference viewers for EMF models and file trees, as well as a fallback viewer which converts both operands to strings.
\begin{sidewaysfigure}
\centering
\includegraphics[width=\textwidth]{images/EUnitUI}
\caption{Screenshot of the EUnit graphical user interface}
\label{fig:screenshot-eunit}
\end{sidewaysfigure}
\section{Examples}
\label{sec:eunit-examples}
\subsection{Models and Tasks in the Buildfile}
\label{sec:eunit-exampl-balanc-ant}
After describing the basic syntax, we will show how to use EUnit to test an ETL transformation.
The Ant buildfile is shown in Listing~\ref{lst:eunit-ex1-ant}. It has two targets: \anttarget{run-tests} (lines 2--19) invokes the EUnit suite, and \anttarget{tree2graph} (lines 21--26) is a helper target which transforms model \modelname{Tree} into model \modelname{Graph} using ETL. The \xmlelement{modelTasks} nested element is used to load the input, expected output and output EMF models. \modelname{Graph} is loaded with \xmlattribute{read} set to \eol{false}: the model will be initially empty, and will be populated by the ETL transformation.
\begin{lstlisting}[language=XML, caption=Ant buildfile for EUnit with \xmlelement{modelTasks} and a helper target, label=lst:eunit-ex1-ant]
<project>
  <target name="run-tests">
    <epsilon.eunit src="test-external.eunit">
      <modelTasks>
        <epsilon.emf.loadModel name="Tree"
          modelfile="tree.model"
          metamodelfile="tree.ecore"
          read="true" store="false"/>
        <epsilon.emf.loadModel name="GraphExpected"
          modelfile="graph.model"
          metamodelfile="graph.ecore"
          read="true" store="false"/>
        <epsilon.emf.loadModel name="Graph"
          modelfile="transformed.model"
          metamodelfile="graph.ecore"
          read="false" store="false"/>
      </modelTasks>
    </epsilon.eunit>
  </target>

  <target name="tree2graph">
    <epsilon.etl src="${basedir}/resources/Tree2Graph.etl">
      <model ref="Tree"/>
      <model ref="Graph"/>
    </epsilon.etl>
  </target>
</project>
\end{lstlisting}%$
The EOL script is shown in Listing~\ref{lst:eunit-ex1-eol}: it invokes the helper target (line 3) and checks that the obtained model is equal to the expected model (line 4). Internally, EMC will perform the comparison using EMF Compare.
\begin{lstlisting}[language=EOL, caption=EOL script using \eol{runTarget} to run ETL, label=lst:eunit-ex1-eol]
@test
operation transformationWorksAsExpected() {
runTarget("tree2graph");
assertEqualModels("GraphExpected", "Graph");
}
\end{lstlisting}
\subsection{Models and Tasks in the EOL Script}
\label{sec:exampl-only-launcher}
In the previous section, the EOL file was kept very concise because the model setup and the model management tasks under test were specified in the Ant buildfile. In this section, we will instead inline the models and the tasks into the EOL script.
The Ant buildfile is shown in Listing~\ref{lst:ex2-ant}. It is now very simple: all it needs to do is run the EOL script. However, since we will parse HUTN in the EOL script, we must make sure the \class{EPackage}s of the metamodels are registered.
\begin{lstlisting}[language=XML,caption=Ant buildfile which only runs the EOL script,label=lst:ex2-ant]
<project>
  <target name="run-tests">
    <epsilon.emf.register file="tree.ecore"/>
    <epsilon.emf.register file="graph.ecore"/>
    <epsilon.eunit src="test-inlined.eunit"/>
  </target>
</project>
\end{lstlisting}
The EOL script used is shown in Listing~\ref{lst:ex2-eol}. Instead of being loaded through Ant tasks, the models are now loaded with the \eol{loadHutn} operation. The test itself is almost the same, but instead of running a helper target, it invokes an operation which creates and runs the ETL Ant task through the \eol{antProject} variable provided by EUnit, taking advantage of EOL's support for invoking Java code through reflection.
\begin{lstlisting}[language=EOL,caption=EOL script with inlined models and tasks,label=lst:ex2-eol]
@model
operation loadModels() {
  loadHutn("Tree", '@Spec {Metamodel {nsUri: "Tree"}}
    Model {
      Tree "t1" { label : "t1" }
      Tree "t2" {
        label : "t2"
        parent : Tree "t1"
      }
    }
  ');
  loadHutn("GraphExpected", '@Spec {Metamodel {nsUri: "Graph"}}
    Graph { nodes :
      Node "t1" {
        name : "t1"
        outgoing : Edge { source : Node "t1" target : Node "t2" }
      },
      Node "t2" {
        name : "t2"
      }
    }
  ');
  loadHutn("Graph", '@Spec {Metamodel {nsUri: "Graph"}}');
}

@test
operation transformationWorksAsExpected() {
  runETL();
  assertEqualModels("GraphExpected", "Graph");
}

operation runETL() {
  var etlTask := antProject.createTask("epsilon.etl");
  etlTask.setSrc(new Native('java.io.File')(
    antProject.getBaseDir(), 'resources/etl/Tree2Graph.etl'));
  etlTask.createModel().setRef("Tree");
  etlTask.createModel().setRef("Graph");
  etlTask.execute();
}
\end{lstlisting}
\section{Extending EUnit}
\label{sec:eunit-extending}
EUnit is based on the Epsilon platform, but it is designed to accommodate other technologies. In this section, we explain several strategies for adding support for other modelling and model management technologies to EUnit.
EUnit uses the Epsilon Model Connectivity abstraction layer to handle different modelling technologies. Adding support for a different modelling technology only requires implementing another driver for EMC. Depending on the modelling technology, the driver can provide optional services such as model comparison, caching or reflection. For more details, the reader is referred to Chapter~\ref{sec:Design.EMC}.
EUnit uses Ant as a workflow language: all model management tasks must be exposed through Ant tasks. It is highly recommended, however, that these Ant tasks be aware of the EMC model repository linked to the Ant project. Otherwise, users will have to shuffle models out of and back into the repository between model management tasks. As an example, a helper target for an ATL~\cite{ATL} transformation with the existing Ant tasks needs to:
\begin{enumerate}
\item Save the input model in the EMC model repository to a file, by invoking the \xmlelement{epsilon.storeModel} task.
\item Load the metamodels and the input model with \xmlelement{atl.loadModel}.
\item Run the ATL transformation with \xmlelement{atl.launch}.
\item Save the result of the ATL transformation with \xmlelement{atl.saveModel}.
\item Load it into the EMC model repository with \xmlelement{epsilon.emf.loadModel}.
\end{enumerate}
Listing~\ref{lst:eunit-atl} shows the Ant buildfile which would be required for running these steps. It shows that, while EUnit can use the existing ATL tasks as-is, the required helper target is considerably longer than the one in Listing~\ref{lst:eunit-ex1-ant}. Ideally, Ant tasks should be adapted or wrapped to use models directly from the EMC model repository.
\begin{lstlisting}[language=XML,caption=Testing an ATL model transformation with EUnit,label=lst:eunit-atl]
<project>
  <!-- ... omitted ... -->
  <target name="atl">
    <!-- Create temporary files for input and output models -->
    <tempfile property="atl.temp.srcfile" />
    <tempfile property="atl.temp.dstfile" />

    <!-- Save input model to a file -->
    <epsilon.storeModel model="Tree"
      target="${atl.temp.srcfile}" />

    <!-- Load the metamodels and the source model -->
    <atl.loadModel name="TreeMM" metamodel="MOF"
      path="metamodels/tree.ecore" />
    <atl.loadModel name="GraphMM" metamodel="MOF"
      path="metamodels/graph.ecore" />
    <atl.loadModel name="Tree" metamodel="TreeMM"
      path="${atl.temp.srcfile}" />

    <!-- Run ATL and save the model -->
    <atl.launch path="transformation/tree2graph.atl">
      <inmodel name="IN" model="Tree" />
      <outmodel name="OUT" model="Graph" metamodel="GraphMM" />
    </atl.launch>
    <atl.saveModel model="Graph" path="${atl.temp.dstfile}" />

    <!-- Load it back into the EUnit suite -->
    <epsilon.emf.loadModel name="Graph"
      modelfile="${atl.temp.dstfile}"
      metamodeluri="Graph"
      read="true" store="false" />

    <!-- Delete temporary files -->
    <delete file="${atl.temp.srcfile}" />
    <delete file="${atl.temp.dstfile}" />
  </target>
</project>
\end{lstlisting}
Another advantage of making model management tasks EMC-aware is that they can easily ``export'' their results as models, making them easier to test. For instance, the EVL Ant task allows for exporting its results as a model by setting the attribute \xmlattribute{exportAsModel} to \eol{true}, as mentioned in Section~\ref{sec:EvlTask}. This way, EOL can query the results like any regular model (see Listing~\ref{lst:eunit-evl}). This is simpler than transforming the validated model to a problem metamodel, as suggested in~\cite{Jouault2005}. The example in Listing~\ref{lst:eunit-evl} checks that a single unsatisfied constraint was reported, due to the expected rule (\eol{LabelsStartWithT}) and the expected model element.
\begin{lstlisting}[language=EOL,caption=Testing an EVL model validation with EUnit,label=lst:eunit-evl]
@test
operation valid() {
  var tree := new Tree!Tree;
  tree.label := '1n';
  runTarget('validate-tree');

  var errors := EVL!EvlUnsatisfiedConstraint.allInstances;
  assertEquals(1, errors.size);

  var error := errors.first;
  assertEquals(tree, error.instance);
  assertEquals(false, error.constraint.isCritique);
  assertEquals('LabelsStartWithT', error.constraint.name);
}
\end{lstlisting}
\section{Summary}
This chapter has presented EUnit, a unit testing framework specialized in testing model management tasks, such as model-to-model transformations, model-to-text transformations and model validations. Section~\ref{sec:eunit-motiv-test-model} presented the motivation for creating EUnit. Section~\ref{sec:eunit-test-organization} described the data model used by EUnit and the steps involved in running a test. Section~\ref{sec:eunit-test-specification} specified how tests in EUnit are written. Section~\ref{sec:eunit-examples} showed two examples that test the same ETL transformation using different EUnit constructs. Finally, Section~\ref{sec:eunit-extending} suggested how to extend EUnit to handle additional modelling and model management technologies.
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "EpsilonBook"
%%% End: