{"config":{"lang":["en"],"min_search_length":3,"prebuild_index":false,"separator":"[\\s\\-]+"},"docs":[{"location":"","text":"Eclipse Epsilon \u00b6 Epsilon is a family of Java-based scripting languages for automating common model-based software engineering tasks, such as code generation , model-to-model transformation and model validation , that work out of the box with EMF (including Xtext and Sirius ), UML, Simulink, XML and other types of models . Epsilon also includes Eclipse-based editors and debuggers, convenient reflective tools for textual modelling and model visualisation , and Apache Ant tasks. Installation \u00b6 Download the Eclipse Installer and select Epsilon, as shown below. Note that you will need a Java Runtime Environment installed on your system. More options for downloading Epsilon (update sites, Maven) are available here . Why Epsilon? \u00b6 One syntax to rule them all: All languages in Epsilon build on top of a common expression language which means that you can reuse code across your model-to-model transformations, code generators, validation constraints etc. Integrated development tools: All languages in Epsilon are supported by editors providing syntax and error highlighting, code templates, and graphical tools for configuring, running, debugging and profiling Epsilon programs. Documentation, Documentation, Documentation: More than 30 articles , 15 screencasts and 40 examples are available to help you get from novice to expert. Strong support for EMF: Epsilon supports all flavours of EMF, including reflective, generated and non-XMI (textual) models such as those specified using Xtext or EMFText-based DSLs. No EMF? No problem: While Epsilon provides strong support for EMF, it is not bound to EMF at all. In fact, support for EMF is implemented as a driver for the model connectivity layer of Epsilon. Other drivers provide support for XML, CSV, Simulink and you can even roll your own driver! No Eclipse? 
No problem either: While Epsilon provides strong support for Eclipse, we also provide standalone JARs through Maven Central that you can use to embed Epsilon in your plain Java or Android application. Mix and match: Epsilon poses no constraints on the number/type of models you can use in the same program. For example, you can write a transformation that transforms an XML-based and an EMF-based model into a Simulink model and also modifies the source EMF model. Plumbing included: You can use the ANT Epsilon tasks to compose Epsilon programs into complex workflows. Programs executed in the same workflow can share models and even pass parameters to each other. Extensible: Almost every aspect of Epsilon is extensible. You can add support for your own type of models , extend the Eclipse-based development tools, add a new method to the String type, or even implement your own model management language on top of EOL. Java is your friend: You can call methods of Java classes from all Epsilon programs to reuse code you have already written or to perform tasks that Epsilon languages do not support natively. Parallel execution: Since 2.0, Epsilon is multi-threaded, which includes first-order operations and some of the rule-based languages, making it faster than other interpreted tools. All questions answered: The Epsilon forum contains more than 7000 posts and we're proud that no question has ever gone unanswered. We're working on it: Epsilon has been an Eclipse project since 2006 and it's not going away any time soon. License \u00b6 Epsilon is licensed under the Eclipse Public License 2.0 .","title":"Home"},{"location":"#eclipse-epsilon","text":"Epsilon is a family of Java-based scripting languages for automating common model-based software engineering tasks, such as code generation , model-to-model transformation and model validation , that work out of the box with EMF (including Xtext and Sirius ), UML, Simulink, XML and other types of models . 
Epsilon also includes Eclipse-based editors and debuggers, convenient reflective tools for textual modelling and model visualisation , and Apache Ant tasks.","title":"Eclipse Epsilon"},{"location":"#installation","text":"Download the Eclipse Installer and select Epsilon, as shown below. Note that you will need a Java Runtime Environment installed on your system. More options for downloading Epsilon (update sites, Maven) are available here .","title":"Installation"},{"location":"#why-epsilon","text":"One syntax to rule them all: All languages in Epsilon build on top of a common expression language which means that you can reuse code across your model-to-model transformations, code generators, validation constraints etc. Integrated development tools: All languages in Epsilon are supported by editors providing syntax and error highlighting, code templates, and graphical tools for configuring, running, debugging and profiling Epsilon programs. Documentation, Documentation, Documentation: More than 30 articles , 15 screencasts and 40 examples are available to help you get from novice to expert. Strong support for EMF: Epsilon supports all flavours of EMF, including reflective, generated and non-XMI (textual) models such as those specified using Xtext or EMFText-based DSLs. No EMF? No problem: While Epsilon provides strong support for EMF, it is not bound to EMF at all. In fact, support for EMF is implemented as a driver for the model connectivity layer of Epsilon. Other drivers provide support for XML, CSV, Simulink and you can even roll your own driver! No Eclipse? No problem either: While Epsilon provides strong support for Eclipse, we also provide standalone JARs through Maven Central that you can use to embed Epsilon in your plain Java or Android application. Mix and match: Epsilon poses no constraints on the number/type of models you can use in the same program. 
For example, you can write a transformation that transforms an XML-based and an EMF-based model into a Simulink model and also modifies the source EMF model. Plumbing included: You can use the Epsilon Ant tasks to compose Epsilon programs into complex workflows. Programs executed in the same workflow can share models and even pass parameters to each other. Extensible: Almost every aspect of Epsilon is extensible. You can add support for your own type of models , extend the Eclipse-based development tools, add a new method to the String type, or even implement your own model management language on top of EOL. Java is your friend: You can call methods of Java classes from all Epsilon programs to reuse code you have already written or to perform tasks that Epsilon languages do not support natively. Parallel execution: Since 2.0, Epsilon is multi-threaded, which includes first-order operations and some of the rule-based languages, making it faster than other interpreted tools. All questions answered: The Epsilon forum contains more than 7000 posts and we're proud that no question has ever gone unanswered. We're working on it: Epsilon has been an Eclipse project since 2006 and it's not going away any time soon.","title":"Why Epsilon?"},{"location":"#license","text":"Epsilon is licensed under the Eclipse Public License 2.0 .","title":"License"},{"location":"examples/","text":"Examples \u00b6 Each example on this page comes in the form of an Eclipse project, which is stored under the examples directory of Epsilon's Git repository. To run an example, you need to: Clone the repository Import the project in question into your Eclipse workspace Register any Ecore metamodels in it Right-click the .launch file in it Select Run as... and click the first item in the menu that pops up Warning To avoid copying the same metamodels across different example projects, some projects reuse Ecore metamodels stored in the org.eclipse.epsilon.examples.metamodels project. 
If you are unable to run any of the examples below, please give us a shout . Epsilon Object Language \u00b6 Create an OO model with EOL : In this example we use EOL to programmatically construct a model that conforms to an object-oriented metamodel. Modify a Tree model with EOL : In this example we use EOL to programmatically modify a model that conforms to a Tree metamodel and store the modified version as a new model. Call Java code from Epsilon : In this example, we create a JFrame from EOL. The aim of this example is to show how to call Java code from within Epsilon languages. Creating custom Java tools for Epsilon : In this example, we create a custom tool for Epsilon. Building and querying plain XML documents with EOL : In this example, we use the plain XML driver of Epsilon to build and query an XML document that is not backed by an XSD/DTD. Cloning and copying XML elements across documents with EOL : In this example, we use the plain XML driver of Epsilon to clone and copy XML elements across different documents with EOL. Cloning EMF model elements with EOL : In this example, we demonstrate how the EmfTool built-in tool can be used to perform deep-copy (cloning) of EMF model elements using EOL. Profiling and caching in EOL : This example demonstrates the caching capabilities and the profiling tools provided by Epsilon. Manage XSD-backed XML files with EOL : In this example we demonstrate using EOL to query an XSD-backed XML file. Manage Matlab Simulink/Stateflow blocks from Epsilon : In this example we show how to manage Matlab Simulink/Stateflow blocks with EOL. Epsilon Transformation Language \u00b6 Transform a Tree model to a Graph model with ETL : In this example, we use ETL to transform a model that conforms to a Tree metamodel to a model that conforms to a Graph metamodel. Transform an RSS feed to an Atom feed using ETL : In this example, we use ETL and the plain XML driver to transform an RSS feed to an Atom feed. 
Experiment with the different types of transformation rule in ETL using a Flowchart-to-HTML transformation. : In this example, we show the different types of transformation rule that are provided by ETL, including plain, abstract, lazy, primary and greedy rules. We also explore rule inheritance and rules that generate more than one model element. We transform from a Flowchart model to an HTML model. Transform an OO model to a DB model with ETL : In this example, we use ETL to transform a model that conforms to an Object-Oriented metamodel to a model that conforms to the Database metamodel. Epsilon Generation Language \u00b6 Experiment with the different features of EGL using a Flowchart-to-HTML transformation. : In this example, we explore the main features of EGL by generating HTML text from an EMF model of a flowchart. We demonstrate the EGX coordination language, code formatters, preserving hand-written text with protected regions and generating a fine-grained trace model. Generating HTML pages from an XML document : In this example, we use the plain XML driver of Epsilon in the context of an EGL model-to-text transformation. Generate HTML documentation from an Ecore metamodel with EGL : In this example, we demonstrate how EGL can be used to generate HTML documentation from an Ecore metamodel. Epsilon Validation Language \u00b6 Validate an OO model with EVL : In this example, we use EVL to express constraints for models that conform to an Object-Oriented metamodel. Validate an OO model against a DB model with EVL : In this example, we use EVL to express inter-model constraints. Dijkstra's shortest path algorithm with EOL/EVL : In this example, we use EOL and EVL to implement Dijkstra's shortest path algorithm. Epsilon Merging Language \u00b6 Heterogeneous Model Merging with ECL/EML : In this example, we demonstrate merging heterogeneous models using ECL and EML. 
Epsilon Flock \u00b6 Migrate Petri net models with Epsilon Flock : In this example we demonstrate how to migrate a model in response to metamodel changes with Epsilon Flock. Epsilon Model Generation Language \u00b6 Generate PetriNet models using EMG : In this example we demonstrate how to generate PetriNet elements and how to define relations between them. Epsilon Pattern Language \u00b6 Find pattern matches in railway models using EPL : In this example we demonstrate how to find matches of the patterns in the Train Benchmark models with EPL. Combining the Epsilon Languages \u00b6 Use Epsilon in standalone Java applications : In this example, we demonstrate how Epsilon languages can be used in standalone, non-Eclipse-based Java applications. MDD-TIF complete case study : In this example, we demonstrate how different languages in Epsilon (EVL, EGL, EML, ETL and ECL) can be combined to implement more complex operations. Compare, validate and merge OO models : In this example, we use ECL to compare two OO models, then use EVL to check the identified matches for consistency and finally EML to merge them. Construct a workflow to orchestrate several Epsilon programs with Ant : In this example we demonstrate how to use the built-in Epsilon Ant tasks to define a workflow by combining several Epsilon programs. Here, we validate, transform and generate HTML from a flowchart model. Provide custom/extended tasks for the workflow : In this example we demonstrate how you can define your own Ant tasks that extend the Epsilon workflow tasks. Use model transactions in a workflow : In this example we demonstrate using the ant-contrib try/catch tasks and the Epsilon model transactions tasks to conditionally roll back changes in models modified in a workflow. Eugenia \u00b6 Implement a GMF editor with image nodes using Eugenia : In this example we use Eugenia to implement a GMF editor with images instead of shapes for nodes. 
Implement a GMF editor with end labels in connections using Eugenia : In this example we use Eugenia to implement a GMF editor with end labels in connections. Implement a flowchart GMF editor using Eugenia : In this example we use Eugenia to implement a flowchart GMF editor, and EOL to polish its appearance. EUnit \u00b6 Test EOL scripts with EUnit : In this example we show the basic structure of an EUnit test, some useful assertions for the basic types and how to test for errors and define our own assertions. Reuse EUnit tests with model and data bindings : In this example we show how the same EUnit test can be reused for several models, and how EUnit supports several levels of parametric tests. Test a model validation script with EUnit : In this example we show how a model validation script written in EVL can be tested with EUnit, using the exportAsModel attribute of the EVL workflow task. Test a model-to-text transformation with EUnit : In this example we show how a model-to-text transformation written in EGL can be tested with EUnit and HUTN. Integrate EUnit into a standard JUnit plug-in test : In this example we show how to write an EUnit/JUnit plug-in test of an ETL transformation. Even more examples \u00b6 More examples are available in the examples folder of the Git repository.","title":"Examples"},{"location":"examples/#examples","text":"Each example on this page comes in the form of an Eclipse project, which is stored under the examples directory of Epsilon's Git repository. To run an example, you need to: Clone the repository Import the project in question into your Eclipse workspace Register any Ecore metamodels in it Right-click the .launch file in it Select Run as... and click the first item in the menu that pops up Warning To avoid copying the same metamodels across different example projects, some projects reuse Ecore metamodels stored in the org.eclipse.epsilon.examples.metamodels project. 
If you are unable to run any of the examples below, please give us a shout .","title":"Examples"},{"location":"examples/#epsilon-object-language","text":"Create an OO model with EOL : In this example we use EOL to programmatically construct a model that conforms to an object-oriented metamodel. Modify a Tree model with EOL : In this example we use EOL to programmatically modify a model that conforms to a Tree metamodel and store the modified version as a new model. Call Java code from Epsilon : In this example, we create a JFrame from EOL. The aim of this example is to show how to call Java code from within Epsilon languages. Creating custom Java tools for Epsilon : In this example, we create a custom tool for Epsilon. Building and querying plain XML documents with EOL : In this example, we use the plain XML driver of Epsilon to build and query an XML document that is not backed by an XSD/DTD. Cloning and copying XML elements across documents with EOL : In this example, we use the plain XML driver of Epsilon to clone and copy XML elements across different documents with EOL. Cloning EMF model elements with EOL : In this example, we demonstrate how the EmfTool built-in tool can be used to perform deep-copy (cloning) of EMF model elements using EOL. Profiling and caching in EOL : This example demonstrates the caching capabilities and the profiling tools provided by Epsilon. Manage XSD-backed XML files with EOL : In this example we demonstrate using EOL to query an XSD-backed XML file. Manage Matlab Simulink/Stateflow blocks from Epsilon : In this example we show how to manage Matlab Simulink/Stateflow blocks with EOL.","title":"Epsilon Object Language"},{"location":"examples/#epsilon-transformation-language","text":"Transform a Tree model to a Graph model with ETL : In this example, we use ETL to transform a model that conforms to a Tree metamodel to a model that conforms to a Graph metamodel. 
Transform an RSS feed to an Atom feed using ETL : In this example, we use ETL and the plain XML driver to transform an RSS feed to an Atom feed. Experiment with the different types of transformation rule in ETL using a Flowchart-to-HTML transformation. : In this example, we show the different types of transformation rule that are provided by ETL, including plain, abstract, lazy, primary and greedy rules. We also explore rule inheritance and rules that generate more than one model element. We transform from a Flowchart model to an HTML model. Transform an OO model to a DB model with ETL : In this example, we use ETL to transform a model that conforms to an Object-Oriented metamodel to a model that conforms to the Database metamodel.","title":"Epsilon Transformation Language"},{"location":"examples/#epsilon-generation-language","text":"Experiment with the different features of EGL using a Flowchart-to-HTML transformation. : In this example, we explore the main features of EGL by generating HTML text from an EMF model of a flowchart. We demonstrate the EGX coordination language, code formatters, preserving hand-written text with protected regions and generating a fine-grained trace model. Generating HTML pages from an XML document : In this example, we use the plain XML driver of Epsilon in the context of an EGL model-to-text transformation. Generate HTML documentation from an Ecore metamodel with EGL : In this example, we demonstrate how EGL can be used to generate HTML documentation from an Ecore metamodel.","title":"Epsilon Generation Language"},{"location":"examples/#epsilon-validation-language","text":"Validate an OO model with EVL : In this example, we use EVL to express constraints for models that conform to an Object-Oriented metamodel. Validate an OO model against a DB model with EVL : In this example, we use EVL to express inter-model constraints. 
Dijkstra's shortest path algorithm with EOL/EVL : In this example, we use EOL and EVL to implement Dijkstra's shortest path algorithm.","title":"Epsilon Validation Language"},{"location":"examples/#epsilon-merging-language","text":"Heterogeneous Model Merging with ECL/EML : In this example, we demonstrate merging heterogeneous models using ECL and EML.","title":"Epsilon Merging Language"},{"location":"examples/#epsilon-flock","text":"Migrate Petri net models with Epsilon Flock : In this example we demonstrate how to migrate a model in response to metamodel changes with Epsilon Flock.","title":"Epsilon Flock"},{"location":"examples/#epsilon-model-generation-language","text":"Generate PetriNet models using EMG : In this example we demonstrate how to generate PetriNet elements and how to define relations between them.","title":"Epsilon Model Generation Language"},{"location":"examples/#epsilon-pattern-language","text":"Find pattern matches in railway models using EPL : In this example we demonstrate how to find matches of the patterns in the Train Benchmark models with EPL.","title":"Epsilon Pattern Language"},{"location":"examples/#combining-the-epsilon-languages","text":"Use Epsilon in standalone Java applications : In this example, we demonstrate how Epsilon languages can be used in standalone, non-Eclipse-based Java applications. MDD-TIF complete case study : In this example, we demonstrate how different languages in Epsilon (EVL, EGL, EML, ETL and ECL) can be combined to implement more complex operations. Compare, validate and merge OO models : In this example, we use ECL to compare two OO models, then use EVL to check the identified matches for consistency and finally EML to merge them. Construct a workflow to orchestrate several Epsilon programs with Ant : In this example we demonstrate how to use the built-in Epsilon Ant tasks to define a workflow by combining several Epsilon programs. Here, we validate, transform and generate HTML from a flowchart model. 
Provide custom/extended tasks for the workflow : In this example we demonstrate how you can define your own Ant tasks that extend the Epsilon workflow tasks. Use model transactions in a workflow : In this example we demonstrate using the ant-contrib try/catch tasks and the Epsilon model transactions tasks to conditionally roll back changes in models modified in a workflow.","title":"Combining the Epsilon Languages"},{"location":"examples/#eugenia","text":"Implement a GMF editor with image nodes using Eugenia : In this example we use Eugenia to implement a GMF editor with images instead of shapes for nodes. Implement a GMF editor with end labels in connections using Eugenia : In this example we use Eugenia to implement a GMF editor with end labels in connections. Implement a flowchart GMF editor using Eugenia : In this example we use Eugenia to implement a flowchart GMF editor, and EOL to polish its appearance.","title":"Eugenia"},{"location":"examples/#eunit","text":"Test EOL scripts with EUnit : In this example we show the basic structure of an EUnit test, some useful assertions for the basic types and how to test for errors and define our own assertions. Reuse EUnit tests with model and data bindings : In this example we show how the same EUnit test can be reused for several models, and how EUnit supports several levels of parametric tests. Test a model validation script with EUnit : In this example we show how a model validation script written in EVL can be tested with EUnit, using the exportAsModel attribute of the EVL workflow task. Test a model-to-text transformation with EUnit : In this example we show how a model-to-text transformation written in EGL can be tested with EUnit and HUTN. 
Integrate EUnit into a standard JUnit plug-in test : In this example we show how to write an EUnit/JUnit plug-in test of an ETL transformation.","title":"EUnit"},{"location":"examples/#even-more-examples","text":"More examples are available in the examples folder of the Git repository.","title":"Even more examples"},{"location":"faq/","text":"Frequently Asked Questions \u00b6 On this page we provide answers to common questions about Epsilon. If your question is not answered here, please feel free to ask in the forum . What is the relationship between Epsilon and EMF? \u00b6 Briefly, with EMF you can specify metamodels and construct models that conform to these metamodels, while with Epsilon you can process these EMF models and metamodels (e.g. validate them, transform them, generate code from them etc.). Is Epsilon a model transformation language? \u00b6 No. Epsilon is a family of languages, one of which targets model-to-model transformation (ETL). Who is using Epsilon? \u00b6 With more than 6000 posts in the Epsilon forum , it appears that quite a few people are currently using different parts of Epsilon. A list of companies and open-source projects that use Epsilon is available here . How do I get help? \u00b6 Epsilon has a dedicated forum where you can ask questions about the tools and languages it provides. Whenever possible, please use the forum instead of direct email. We're monitoring the forum very closely and there is practically no difference in terms of response time. Also, answered questions in the forum form a knowledge base, which other users can consult in case they face similar issues in the future, and an active forum is an indication of a healthy and actively maintained project (properties that the Eclipse Foundation takes very seriously). When posting messages to the forum we recommend that you use your full (or at least a realistic) name instead of a nickname (e.g. 
How do I get notified when a new version of Epsilon becomes available? \u00b6 To get notified when a new version of Epsilon becomes available you can configure Eclipse to check for updates automatically by going to Window->Preferences->Install/Update/Automatic Updates and checking the \"Automatically find new updates and notify me\" option. Can I use Epsilon in a non-Eclipse-based standalone Java application? \u00b6 Yes. There are several examples of doing just that in the examples/org.eclipse.epsilon.examples.standalone project in the Git repository. Just grab your JARs through Maven Central . How does Epsilon compare to the OMG family of languages? \u00b6 There are two main differences: First, QVT, OCL and MTL are standards while languages in Epsilon are not. While having standards is arguably a good thing , by not having to conform to standardized specifications, Epsilon provides the agility to explore interesting new features and extensions of model management languages, and contribute to advancing the state of the art in the field. Examples of such interesting and novel features in Epsilon include interactive transformation , tight Java integration , extended properties , and support for transactions . Second, Epsilon provides specialized languages for tasks that are currently not explicitly targeted by the OMG standards. Examples of such tasks include interactive in-place model transformation, model comparison, and model merging. What is the difference between E*L and language X? \u00b6 If the available documentation doesn't provide enough information for figuring this out, please feel free to ask in the Epsilon forum . Are Epsilon languages compiled or interpreted? \u00b6 All Epsilon languages are interpreted. With the exception of EGL templates which are transformed into EOL before execution, all other languages are supported by bespoke interpreters. How can I contribute to Epsilon? \u00b6 There are several ways to contribute to Epsilon. 
In the first phase you can ask questions in the forum and help with maintaining the vibrant community around Epsilon. You may also want to let other developers know about Epsilon by sharing your experiences online. If you are interested in contributing code to Epsilon, you should start by submitting bug reports, feature requests - and hopefully patches that fix/implement them. This will demonstrate your commitment and long-term interest in the project - which is required by the Eclipse Foundation in order to later on be nominated for a committer account. How do I get all children of a model element? \u00b6 Epsilon does not provide a built-in method for this but you can use EObject's eContents() method if you're working with EMF. To get all descendants of an element, something like the following should do the trick: o.asSequence().closure(x | x.eContents()) . See https://www.eclipse.org/forums/index.php/t/855628/ for more details. How do I get the container of a model element? \u00b6 Epsilon does not provide a built-in method for this but you can use EObject's eContainer() method if you're working with EMF. Where is the metamodel of ETL/EVL etc.? \u00b6 Epsilon languages do not have Ecore-based metamodels. How do I enable code-completion/assistance in the Epsilon editors? \u00b6 Epsilon does not provide support for type-aware code completion as Epsilon languages are dynamically typed. However, ctrl+space provides a list of previously typed tokens to speed up typing.","title":"Frequently asked questions"},{"location":"faq/#frequently-asked-questions","text":"On this page we provide answers to common questions about Epsilon. 
If your question is not answered here, please feel free to ask in the forum .","title":"Frequently Asked Questions"},{"location":"faq/#what-is-the-relationship-between-epsilon-and-emf","text":"Briefly, with EMF you can specify metamodels and construct models that conform to these metamodels, while with Epsilon you can process these EMF models and metamodels (e.g. validate them, transform them, generate code from them etc.).","title":"What is the relationship between Epsilon and EMF?"},{"location":"faq/#is-epsilon-a-model-transformation-language","text":"No. Epsilon is a family of languages, one of which targets model-to-model transformation (ETL).","title":"Is Epsilon a model transformation language?"},{"location":"faq/#who-is-using-epsilon","text":"With more than 6000 posts in the Epsilon forum , it appears that quite a few people are currently using different parts of Epsilon. A list of companies and open-source projects that use Epsilon is available here .","title":"Who is using Epsilon?"},{"location":"faq/#how-do-i-get-help","text":"Epsilon has a dedicated forum where you can ask questions about the tools and languages it provides. Whenever possible, please use the forum instead of direct email. We're monitoring the forum very closely and there is practically no difference in terms of response time. Also, answered questions in the forum form a knowledge base, which other users can consult in case they face similar issues in the future, and an active forum is an indication of a healthy and actively maintained project (properties that the Eclipse Foundation takes very seriously). When posting messages to the forum we recommend that you use your full (or at least a realistic) name instead of a nickname (e.g. 
\"ABC\", \"SomeGuy\") as the latter can lead to pretty awkward sentences.","title":"How do I get help?"},{"location":"faq/#how-do-i-get-notified-when-a-new-version-of-epsilon-becomes-available","text":"To get notified when a new version of Epsilon becomes available you can configure Eclipse to check for updates automatically by going to Window->Preferences->Install/Update/Automatic Updates and checking the \"Automatically find new updates and notify me\" option.","title":"How do I get notified when a new version of Epsilon becomes available?"},{"location":"faq/#can-i-use-epsilon-in-a-non-eclipse-based-standalone-java-application","text":"Yes. There are several examples of doing just that in the examples/org.eclipse.epsilon.examples.standalone project in the Git repository. Just grab your JARs through Maven Central .","title":"Can I use Epsilon in a non-Eclipse-based standalone Java application?"},{"location":"faq/#how-does-epsilon-compare-to-the-omg-family-of-languages","text":"There are two main differences: First, QVT, OCL and MTL are standards while languages in Epsilon are not. While having standards is arguably a good thing , by not having to conform to standardized specifications, Epsilon provides the agility to explore interesting new features and extensions of model management languages, and contribute to advancing the state of the art in the field. Examples of such interesting and novel features in Epsilon include interactive transformation , tight Java integration , extended properties , and support for transactions . Second, Epsilon provides specialized languages for tasks that are currently not explicitly targeted by the OMG standards. 
Examples of such tasks include interactive in-place model transformation, model comparison, and model merging.","title":"How does Epsilon compare to the OMG family of languages?"},{"location":"faq/#what-is-the-difference-between-el-and-language-x","text":"If the available documentation doesn't provide enough information for figuring this out, please feel free to ask in the Epsilon forum .","title":"What is the difference between E*L and language X?"},{"location":"faq/#are-epsilon-languages-compiled-or-interpreted","text":"All Epsilon languages are interpreted. With the exception of EGL templates which are transformed into EOL before execution, all other languages are supported by bespoke interpreters.","title":"Are Epsilon languages compiled or interpreted?"},{"location":"faq/#how-can-i-contribute-to-epsilon","text":"There are several ways to contribute to Epsilon. As a first step, you can ask questions in the forum and help with maintaining the vibrant community around Epsilon. You may also want to let other developers know about Epsilon by sharing your experiences online. If you are interested in contributing code to Epsilon, you should start by submitting bug reports, feature requests - and hopefully patches that fix/implement them. This will demonstrate your commitment and long-term interest in the project - which is required by the Eclipse Foundation in order to be nominated for a committer account later on.","title":"How can I contribute to Epsilon?"},{"location":"faq/#how-do-i-get-all-children-of-a-model-element","text":"Epsilon does not provide a built-in method for this but you can use EObject's eContents() method if you're working with EMF. To get all descendants of an element, something like the following should do the trick: o.asSequence().closure(x | x.eContents()) . 
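For reuse, the same idiom can be wrapped in a user-defined EOL operation (a sketch; the operation name getAllDescendants is hypothetical, and an EMF-backed model is assumed so that eContents() is available):

```eol
// Sketch of a reusable wrapper for the closure-based idiom above
// (assumes EMF, so eContents() is available on model elements)
operation Any getAllDescendants() : Sequence {
    // closure() applies the expression transitively, so indirect
    // children are included in the result as well
    return self.asSequence().closure(x | x.eContents());
}
```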
See https://www.eclipse.org/forums/index.php/t/855628/ for more details.","title":"How do I get all children of a model element?"},{"location":"faq/#how-do-i-get-the-container-of-a-model-element","text":"Epsilon does not provide a built-in method for this but you can use EObject's eContainer() method if you're working with EMF.","title":"How do I get the container of a model element?"},{"location":"faq/#where-is-the-metamodel-of-etlevl-etc","text":"Epsilon languages do not have Ecore-based metamodels.","title":"Where is the metamodel of ETL/EVL etc.?"},{"location":"faq/#how-do-i-enable-code-completionassistance-in-the-epsilon-editors","text":"Epsilon does not provide support for type-aware code completion as Epsilon languages are dynamically typed. However, ctrl+space provides a list of previously typed tokens to speed up typing.","title":"How do I enable code-completion/assistance in the Epsilon editors?"},{"location":"labs/","text":"Epsilon Labs \u00b6 EpsilonLabs is a satellite project of Epsilon on GitHub that hosts experimental stuff which may (or may not) end up being part of Epsilon in the future. It also hosts contributions that are incompatible with EPL and therefore cannot be hosted under eclipse.org. Warning Please be aware that the code contributed under EpsilonLabs is not part of (or in any other way formally related to) Eclipse, and has not been IP-checked by the Eclipse legal team.","title":"Epsilon Labs"},{"location":"labs/#epsilon-labs","text":"EpsilonLabs is a satellite project of Epsilon on GitHub that hosts experimental stuff which may (or may not) end up being part of Epsilon in the future. It also hosts contributions that are incompatible with EPL and therefore cannot be hosted under eclipse.org. 
Warning Please be aware that the code contributed under EpsilonLabs is not part of (or in any other way formally related to) Eclipse, and has not been IP-checked by the Eclipse legal team.","title":"Epsilon Labs"},{"location":"branding/","text":"Branding \u00b6 Below are different versions of the Epsilon logo to use in posters, presentations, demos etc. To download a copy of a logo, right-click on it and select Save Image As... in your browser. The SVG versions are infinitely scalable and the PNG versions are much larger than their thumbnails on this page. The font of the text in the logo is Lucida Grande . 
Description SVG PNG Sphere and text Sphere only Text only","title":"Branding"},{"location":"branding/#what-do-the-name-and-the-logo-mean","text":"Epsilon (pronounced \u025bps\u026al\u0252n ) stands for E xtensible P latform for S pecification of I nteroperable L anguages for M o del Ma n agement. The dark blue jigsaw pieces in the logo represent the different languages in Epsilon , while the purple pieces represent the different modelling technologies that Epsilon programs can operate on.","title":"What do the name and the logo mean?"},{"location":"branding/#license","text":"As with everything else in Epsilon, the logos are licensed under the Eclipse Public License 2.0 .","title":"License"},{"location":"doc/","text":"Documentation \u00b6 Epsilon is a family of languages and tools implemented in Java, for automating common model-based software engineering tasks. Languages \u00b6 At the core of Epsilon is the Epsilon Object Language (EOL) , a scripting language that combines the imperative style of languages like Java/JavaScript with the powerful functional model querying capabilities of OCL . On top of EOL, Epsilon provides a number of interoperable task-specific languages for tasks such as code generation, model-to-model transformation and model validation. Epsilon languages are underpinned by a model connectivity layer that shields them from the specifics of individual modeling technologies and allows them to query and modify models that conform to different technologies in a uniform way (e.g. transform an EMF model into Simulink, cross-validate an XML document and a UML model). 
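As a brief illustration of this blend of styles, the hypothetical EOL sketch below (it assumes a loaded model containing instances of a Tree type with label and children features; the type names are illustrative, not from the official documentation) combines an OCL-style query with imperative statements:

```eol
// OCL-style first-order query: collect all leaf nodes of the model
var leaves = Tree.all.select(t | t.children.isEmpty());
// Imperative-style iteration over the query result
for (leaf in leaves) {
    leaf.label.println();
}
```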
graph TD ECL[\"Model<br/>comparison<br/>(ECL)\"] Flock[\"Model<br/>migration<br/>(Flock)\"] EGL[\"Code<br/>generation<br/>(EGL)\"] EVL[\"Model<br/>validation<br/>(EVL)\"] EPL[\"Pattern<br/>Matching<br/>(EPL)\"] EML[\"Model<br/>Merging<br/>(EML)\"] ETL[\"M2M<br/>Transformation<br/>(ETL)\"] EOL[\"Epsilon Object Language (EOL)\"] ETL --> |extends|EOL EML --> |uses for matching|ECL EML --> |extends|ETL EPL --> |extends|EOL ECL --> |extends|EOL EGL --> |preprocessed into|EOL EVL --> |extends|EOL Flock --> |extends|EOL EMC[\"Epsilon Model Connectivity (EMC)\"] EMF[\"Eclipse Modeling<br/> Framework (EMF)\"] Simulink[\"MATLAB<br/>Simulink\"] Excel[\"Excel<br/>Spreadsheets\"] PTC[\"PTC Integrity<br/>Modeller\"] Dots[\"...\"] EOL -->|accesses models through| EMC EMC --- EMF EMC --- Simulink EMC --- XML EMC --- Excel EMC --- PTC EMC --- CSV EMC --- Dots classDef eol fill:#CCCCCC; class EOL eol; classDef emc fill:#AFAFAF; class EMC emc; classDef language fill:#ffffff; class ETL,Flock,EGL,EVL,ECL,EPL,EML language; classDef driver fill:#E8E8E8; class EMF,XML,CSV,Simulink,Excel,PTC,Dots driver; Task-Specific Languages \u00b6 Epsilon provides the following task-specific languages, which use EOL as their core expression language. Each task-specific language provides constructs and syntax that are tailored to the specific task it targets: Epsilon Generation Language (EGL) : A template-based model-to-text language for generating code, documentation and other textual artefacts from models. EGL supports content-destination decoupling and protected regions for mixing generated with hand-written code. EGL also provides a rule-based coordination language ( EGX ) that allows specific EGL templates to be executed for a specific model element type, with the ability to guard rule execution and specify generation target location by type/element. 
Epsilon Transformation Language (ETL) : A rule-based model-to-model transformation language that supports transforming many input models to many output models, rule inheritance, lazy and greedy rules, and the ability to query and modify both input and output models. Epsilon Validation Language (EVL) : A model validation language that supports both intra- and inter-model consistency checking, constraint dependency management and specifying fixes that users can invoke to repair identified inconsistencies. EVL is integrated with EMF/GMF and as such, EVL constraints can be evaluated from within EMF/GMF editors and generate error markers for failed constraints. Epsilon Wizard Language (EWL) : A language tailored to interactive in-place model transformations on model elements selected by the user. EWL is integrated with EMF/GMF and as such, wizards can be executed from within EMF and GMF editors. Epsilon Comparison Language (ECL) : A rule-based language for discovering correspondences (matches) between elements of models of diverse metamodels. Epsilon Merging Language (EML) : A rule-based language for merging models of diverse metamodels, after first identifying their correspondences with ECL (or otherwise). Epsilon Pattern Language (EPL) : A pattern language for matching model elements based on element relations and characteristics. Epsilon Model Generation Language (EMG) : A language for semi-automated model generation. Epsilon Flock : A rule-based transformation language for updating models in response to metamodel changes. EUnit : EUnit is a unit testing framework specialized in testing model management tasks, such as model-to-model transformations, model-to-text transformations or model validation. It is based on Epsilon, but it can be used for model technologies external to Epsilon. Tests are written by combining an EOL script and an ANT buildfile. Tools \u00b6 In addition to the languages above, Epsilon also provides several tools and utilities for working with models. 
Graphical Modelling \u00b6 Picto : Picto is an Eclipse view for visualising models via model-to-text transformation to SVG/HTML. Compared to existing graphical modelling frameworks such as Sirius and GMF, the main appeal of Picto is that model visualisation takes place in an embedded browser and therefore you can leverage any HTML/SVG/JavaScript-based technology such as D3.js, mxGraph and JointJS. Picto also provides built-in support for the powerful Graphviz and PlantUML textual syntaxes (which are transformed to SVG via the respective tools). A distinguishing feature of Picto is that it does not require running multiple Eclipse instances as the metamodels, models and visualisation transformations can all reside in the same workspace. Eugenia : Eugenia is a front-end for GMF. Its aim is to speed up the process of developing a GMF editor and lower the entrance barrier for new developers. To this end, Eugenia enables developers to generate a fully-functional GMF editor only by specifying a few high-level annotations in the Ecore metamodel. Textual Modelling \u00b6 Flexmi : Flexmi is a flexible, reflective textual syntax for creating models conforming to Ecore (EMF) metamodels. Flexmi is XML-based and offers features such as fuzzy matching of XML tags and attributes to Ecore class/feature names, support for embedding EOL expressions in models and for defining and instantiating model element templates. Human Usable Textual Notation : An implementation of the OMG standard for representing models in a human understandable format. HUTN allows models to be written using a text editor in a C-like syntax. EMF Utilities \u00b6 Exeed : Exeed is an enhanced version of the built-in EMF reflective tree-based editor that enables developers to customize the labels and icons of model elements simply by attaching a few simple annotations to the respective EClasses in the Ecore metamodel. 
Exeed also supports setting the values of references using drag-and-drop instead of using the combo boxes in the properties view. ModeLink : ModeLink is an editor consisting of 2-3 side-by-side EMF tree-based editors, and is very convenient for establishing (weaving) links between different models using drag-and-drop. Workflow \u00b6 Workflow : Epsilon provides a set of ANT tasks to enable developers to assemble complex workflows that involve both MDE and non-MDE tasks.","title":"Overview"},{"location":"doc/#documentation","text":"Epsilon is a family of languages and tools implemented in Java, for automating common model-based software engineering tasks.","title":"Documentation"},{"location":"doc/#languages","text":"At the core of Epsilon is the Epsilon Object Language (EOL) , a scripting language that combines the imperative style of languages like Java/JavaScript with the powerful functional model querying capabilities of OCL . On top of EOL, Epsilon provides a number of interoperable task-specific languages for tasks such as code generation, model-to-model transformation and model validation. Epsilon languages are underpinned by a model connectivity layer that shields them from the specifics of individual modeling technologies and allows them to query and modify models that conform to different technologies in a uniform way (e.g. transform an EMF model into Simulink, cross-validate an XML document and a UML model). 
graph TD ECL[\"Model<br/>comparison<br/>(ECL)\"] Flock[\"Model<br/>migration<br/>(Flock)\"] EGL[\"Code<br/>generation<br/>(EGL)\"] EVL[\"Model<br/>validation<br/>(EVL)\"] EPL[\"Pattern<br/>Matching<br/>(EPL)\"] EML[\"Model<br/>Merging<br/>(EML)\"] ETL[\"M2M<br/>Transformation<br/>(ETL)\"] EOL[\"Epsilon Object Language (EOL)\"] ETL --> |extends|EOL EML --> |uses for matching|ECL EML --> |extends|ETL EPL --> |extends|EOL ECL --> |extends|EOL EGL --> |preprocessed into|EOL EVL --> |extends|EOL Flock --> |extends|EOL EMC[\"Epsilon Model Connectivity (EMC)\"] EMF[\"Eclipse Modeling<br/> Framework (EMF)\"] Simulink[\"MATLAB<br/>Simulink\"] Excel[\"Excel<br/>Spreadsheets\"] PTC[\"PTC Integrity<br/>Modeller\"] Dots[\"...\"] EOL -->|accesses models through| EMC EMC --- EMF EMC --- Simulink EMC --- XML EMC --- Excel EMC --- PTC EMC --- CSV EMC --- Dots classDef eol fill:#CCCCCC; class EOL eol; classDef emc fill:#AFAFAF; class EMC emc; classDef language fill:#ffffff; class ETL,Flock,EGL,EVL,ECL,EPL,EML language; classDef driver fill:#E8E8E8; class EMF,XML,CSV,Simulink,Excel,PTC,Dots driver;","title":"Languages"},{"location":"doc/#task-specific-languages","text":"Epsilon provides the following task-specific languages, which use EOL as their core expression language. Each task-specific language provides constructs and syntax that are tailored to the specific task it targets: Epsilon Generation Language (EGL) : A template-based model-to-text language for generating code, documentation and other textual artefacts from models. EGL supports content-destination decoupling and protected regions for mixing generated with hand-written code. EGL also provides a rule-based coordination language ( EGX ) that allows specific EGL templates to be executed for a specific model element type, with the ability to guard rule execution and specify generation target location by type/element. 
Epsilon Transformation Language (ETL) : A rule-based model-to-model transformation language that supports transforming many input models to many output models, rule inheritance, lazy and greedy rules, and the ability to query and modify both input and output models. Epsilon Validation Language (EVL) : A model validation language that supports both intra- and inter-model consistency checking, constraint dependency management and specifying fixes that users can invoke to repair identified inconsistencies. EVL is integrated with EMF/GMF and as such, EVL constraints can be evaluated from within EMF/GMF editors and generate error markers for failed constraints. Epsilon Wizard Language (EWL) : A language tailored to interactive in-place model transformations on model elements selected by the user. EWL is integrated with EMF/GMF and as such, wizards can be executed from within EMF and GMF editors. Epsilon Comparison Language (ECL) : A rule-based language for discovering correspondences (matches) between elements of models of diverse metamodels. Epsilon Merging Language (EML) : A rule-based language for merging models of diverse metamodels, after first identifying their correspondences with ECL (or otherwise). Epsilon Pattern Language (EPL) : A pattern language for matching model elements based on element relations and characteristics. Epsilon Model Generation Language (EMG) : A language for semi-automated model generation. Epsilon Flock : A rule-based transformation language for updating models in response to metamodel changes. EUnit : EUnit is a unit testing framework specialized in testing model management tasks, such as model-to-model transformations, model-to-text transformations or model validation. It is based on Epsilon, but it can be used for model technologies external to Epsilon. 
Tests are written by combining an EOL script and an ANT buildfile.","title":"Task-Specific Languages"},{"location":"doc/#tools","text":"In addition to the languages above, Epsilon also provides several tools and utilities for working with models.","title":"Tools"},{"location":"doc/#graphical-modelling","text":"Picto : Picto is an Eclipse view for visualising models via model-to-text transformation to SVG/HTML. Compared to existing graphical modelling frameworks such as Sirius and GMF, the main appeal of Picto is that model visualisation takes place in an embedded browser and therefore you can leverage any HTML/SVG/JavaScript-based technology such as D3.js, mxGraph and JointJS. Picto also provides built-in support for the powerful Graphviz and PlantUML textual syntaxes (which are transformed to SVG via the respective tools). A distinguishing feature of Picto is that it does not require running multiple Eclipse instances as the metamodels, models and visualisation transformations can all reside in the same workspace. Eugenia : Eugenia is a front-end for GMF. Its aim is to speed up the process of developing a GMF editor and lower the entrance barrier for new developers. To this end, Eugenia enables developers to generate a fully-functional GMF editor only by specifying a few high-level annotations in the Ecore metamodel.","title":"Graphical Modelling"},{"location":"doc/#textual-modelling","text":"Flexmi : Flexmi is a flexible, reflective textual syntax for creating models conforming to Ecore (EMF) metamodels. Flexmi is XML-based and offers features such as fuzzy matching of XML tags and attributes to Ecore class/feature names, support for embedding EOL expressions in models and for defining and instantiating model element templates. Human Usable Textual Notation : An implementation of the OMG standard for representing models in a human understandable format. 
HUTN allows models to be written using a text editor in a C-like syntax.","title":"Textual Modelling"},{"location":"doc/#emf-utilities","text":"Exeed : Exeed is an enhanced version of the built-in EMF reflective tree-based editor that enables developers to customize the labels and icons of model elements simply by attaching a few simple annotations to the respective EClasses in the Ecore metamodel. Exeed also supports setting the values of references using drag-and-drop instead of using the combo boxes in the properties view. ModeLink : ModeLink is an editor consisting of 2-3 side-by-side EMF tree-based editors, and is very convenient for establishing (weaving) links between different models using drag-and-drop.","title":"EMF Utilities"},{"location":"doc/#workflow","text":"Workflow : Epsilon provides a set of ANT tasks to enable developers to assemble complex workflows that involve both MDE and non-MDE tasks.","title":"Workflow"},{"location":"doc/ecl/","text":"The Epsilon Comparison Language (ECL) \u00b6 Model comparison is the task of identifying matching elements between models. In general, matching elements are elements that are involved in a relationship of interest. For example, before merging homogeneous models, it is essential to identify overlapping (common) elements so that they do not appear in duplicate in the merged model. Similarly, in heterogeneous model merging, it is a prerequisite to identify the elements on which the two models will be merged. Finally, in transformation testing, matching elements are pairs consisting of elements in the input model and their generated counterparts in the output model. The aim of the Epsilon Comparison Language (ECL) is to enable users to specify comparison algorithms in a rule-based manner to identify pairs of matching elements between two models of potentially different metamodels and modelling technologies. 
In this section, the abstract and concrete syntax, as well as the execution semantics of the language, are discussed in detail. Abstract Syntax \u00b6 In ECL, comparison specifications are organized in modules ( EclModule ). As illustrated below, EclModule (indirectly) extends EolModule which means that it can contain user-defined operations and import other library modules and ECL modules. Apart from operations, an ECL module contains a set of match-rules ( MatchRule ) and a set of pre and post blocks that run before and after all comparisons, respectively. MatchRules enable users to perform comparison of model elements at a high level of abstraction. Each match-rule declares a name, and two parameters ( leftParameter and rightParameter ) that specify the types of elements it can compare. It also optionally defines a number of rules it inherits ( extends ) and if it is abstract , lazy and/or greedy . The semantics of the latter are discussed shortly. classDiagram class MatchRule { -name: String -abstract: Boolean -lazy: Boolean -unique: Boolean -greedy: Boolean -guard: ExecutableBlock<Boolean> -compare: ExecutableBlock<Boolean> -do: ExecutableBlock<Void> } class Parameter { -name: String -type: EolType } class NamedStatementBlockRule { -name: String -body: StatementBlock } EolModule <|-- ErlModule EclModule --|> ErlModule Pre --|> NamedStatementBlockRule Post --|> NamedStatementBlockRule ErlModule -- Pre: pre * ErlModule -- Post: post * EclModule -- MatchRule: rules * MatchRule -- Parameter: left MatchRule -- Parameter: right MatchRule -- MatchRule: extends * A match rule has three parts. The guard part is an EOL expression or statement block that further limits the applicability of the rule to an even narrower range of elements than that specified by the left and right parameters. The compare part is an EOL expression or statement block that is responsible for comparing a pair of elements and deciding if they match or not. 
Finally, the do part is an EOL expression or block that is executed if the compare part returns true to perform any additional actions required. Pre and post blocks are named blocks of EOL statements which as discussed in the sequel are executed before and after the match-rules have been executed respectively. Concrete Syntax \u00b6 The concrete syntax of a match-rule is displayed below. (@lazy)? (@greedy)? (@abstract)? rule <name> match <leftParameterName>:<leftParameterType> with <rightParameterName>:<rightParameterType> (extends <ruleName>(, <ruleName>)*)? { (guard (:expression)|({statementBlock}))? compare (:expression)|({statementBlock}) (do {statementBlock})? } Pre and post blocks have a simple syntax that, as shown below, consists of the identifier ( pre or post ), an optional name and the set of statements to be executed enclosed in curly braces. (pre|post) <name> { statement+ } Execution Semantics \u00b6 Rule and Block Overriding \u00b6 An ECL module can import a number of other ECL modules. In such a case, the importing ECL module inherits all the rules and pre/post blocks specified in the modules it imports (recursively). If the module specifies a rule or a pre/post block with the same name, the local rule/block overrides the imported one respectively. Comparison Outcome \u00b6 As illustrated below, the result of comparing two models with ECL is a trace ( MatchTrace ) that consists of a number of matches ( Match ). Each match holds a reference to the objects from the two models that have been compared ( left and right ), a boolean value that indicates if they have been found to be matching or not, a reference to the rule that has made the decision, and a Map ( info ) that is used to hold any additional information required by the user (accessible at runtime through the matchInfo implicit variable). During the matching process, a second, temporary, match trace is also used to detect and resolve cyclic invocation of match-rules as discussed in the sequel. 
classDiagram class Match { -left: Object -right: Object -matching: Boolean } class EclContext { -matchTrace: MatchTrace -tempMatchTrace: MatchTrace } MatchRule -- Match: rule MatchTrace -- Match: matches * EclContext --|> EolContext EclContext -- MatchTrace Map -- Match: info Rule Execution Scheduling \u00b6 Non-abstract, non-lazy match-rules are evaluated automatically by the execution engine in a top-down fashion - with respect to their order of appearance - in two passes. In the first pass, each rule is evaluated for all the pairs of instances in the two models that have a type-of relationship with the types specified by the leftParameter and rightParameter of the rule. In the second pass, each rule that is marked as greedy is executed for all pairs that have not been compared in the first pass, and which have a kind-of relationship with the types specified by the rule. In both passes, to evaluate the compare part of the rule, the guard must be satisfied. Before the compare part of a rule is executed, the compare parts of all of the rules it extends (super-rules) must be executed (recursively). Before executing the compare part of a super-rule, the engine verifies that the super-rule is actually applicable to the elements under comparison by checking for type conformance and evaluating the guard part of the super-rule. If the compare part of a rule evaluates to true, the optional do part is executed. In the do part the user can specify any actions that need to be performed for the identified matching elements, such as to populate the info map of the established match with additional information. Finally, a new match is added to the match trace that has its matching property set to the logical conjunction of the results of the evaluation of the compare parts of the rule and its super-rules. 
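As a concrete sketch of these semantics, the hypothetical rule below (it assumes OO and DB models with Class and Table types; the types and properties are illustrative, not from the official documentation) combines the guard, compare and do parts:

```ecl
rule Class2Table
    match c : OO!Class
    with  t : DB!Table {

    // guard: narrows applicability beyond the parameter types
    guard : c.isPersistent

    // compare: decides whether the pair matches
    compare : c.name = t.name

    // do: runs only if compare (and all applicable super-rules)
    // returned true; matchInfo is the built-in map of the match
    do {
        matchInfo.put(\"matchedOn\", \"name\");
    }
}
```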
The matches() built-in operation \u00b6 To refrain from performing duplicate comparisons and to de-couple match-rules from each other, ECL provides the built-in matches(opposite : Any) operation for model elements and collections. When the matches() operation is invoked on a pair of objects, it queries the main and temporary match-traces to discover if the two elements have already been matched and if so it returns the cached result of the comparison. Otherwise, it attempts to find an appropriate match rule to compare the two elements and if such a rule is found, it returns the result of the comparison, otherwise it returns false. Unlike the top-level execution scheme, the matches() operation invokes both lazy and non-lazy rules. In addition to objects, the matches() operation can also be invoked to match pairs of collections of the same type (e.g. a Sequence against a Sequence). When invoked on ordered collections (i.e. Sequence and OrderedSet ), it examines if the collections have the same size and each item of the source collection matches with the item of the same index in the target collection. Finally, when invoked on unordered collections (i.e. Bag and Set ), it examines if for each item in the source collection, there is a matching item in the target collection irrespective of its index. Users can also override the built-in matches operation using user-defined operations with the same name that loosen or strengthen the built-in semantics. Cyclic invocation of matches() \u00b6 Providing the built-in matches operation significantly simplifies comparison specifications. It also enhances decoupling between match-rules: when a rule needs to compare two elements that are outside its scope, it does not need to explicitly know/specify which other rule can compare those elements. On the other hand, it is possible - and quite common indeed - for two rules to implicitly invoke each other. 
For example, consider the match rule below that attempts to match nodes of the simple Tree metamodel. classDiagram class Tree { +label: String +parent: Tree +children: Tree[*] } Tree -- Tree rule Tree2Tree match l : T1!Tree with r : T2!Tree { compare : l.label = r.label and l.parent.matches(r.parent) and l.children.matches(r.children) } The rule specifies that for two Tree nodes ( l and r ) to match, they should have the same label, belong to matching parents and have matching children. In the absence of a dedicated mechanism for cycle detection and resolution, the rule would end up in an infinite loop. To address this problem, ECL provides a temporary match-trace which is used to detect and resolve cyclic invocations of the matches() built-in operation. As discussed above, a match is added to the primary match-trace as soon as the compare part of the rule has been executed to completion. By contrast, a temporary match (with its matching property set to true ) is added to the temporary trace before the compare part is executed. In this way, any subsequent attempts to match the two elements from invoked rules will not re-invoke the rule. Finally, when a top-level rule returns, the temporary match trace is reset. Fuzzy and Dictionary-based String Matching \u00b6 In the example above, the rule specifies that to match, two trees must - among other criteria - have the same label. However, there are cases when a less-strict approach to matching string properties of model elements is desired. For instance, when comparing two UML models originating from different organizations, it is common to encounter ontologically equivalent classes which however have different names (e.g. Client and Customer). In this case, to achieve a more sound matching, the use of a dictionary or a lexical database (e.g. WordNet) is necessary. Alternatively, fuzzy string matching algorithms can be used. 
As several such tools and algorithms have been implemented in various programming languages, it is a sensible approach to reuse them instead of re-implementing them. For example, in the listing below, a wrapper for the Simmetrics fuzzy string comparison tool is used to compare the labels of the trees using the Levenshtein algorithm. To achieve this, line 11 invokes the fuzzyMatch() operation defined in lines 16-18 which uses the simmetrics native tool (instantiated in lines 2-4) to match the two labels using their Levenshtein distance with a threshold of 0.5. pre { var simmetrics = new Native(\"org.epsilon.ecl.tools.textcomparison.simmetrics.SimMetricsTool\"); } rule FuzzyTree2Tree match l : T1!Tree with r : T2!Tree { compare : l.label.fuzzyMatch(r.label) and l.parent.matches(r.parent) and l.children.matches(r.children) } operation String fuzzyMatch(other : String) : Boolean { return simmetrics.similarity(self,other,\"Levenshtein\") > 0.5; } The Match Trace \u00b6 Users can query and modify the match trace calculated during the comparison process in the post sections of the module or export it into another application or Epsilon program. For example, in a post section, the trace can be printed to the default output stream or serialized into a model of an arbitrary metamodel. In another use case, the trace may be exported to be used in the context of a validation module that will use the identified matches to evaluate inter-model constraints, or in a merging module that will use the matches to identify the elements on which the two models will be merged.","title":"Model comparison (ECL)"},{"location":"doc/ecl/#the-epsilon-comparison-language-ecl","text":"Model comparison is the task of identifying matching elements between models. In general, matching elements are elements that are involved in a relationship of interest. 
For example, before merging homogeneous models, it is essential to identify overlapping (common) elements so that they do not appear in duplicate in the merged model. Similarly, in heterogeneous model merging, it is a prerequisite to identify the elements on which the two models will be merged. Finally, in transformation testing, matching elements are pairs consisting of elements in the input model and their generated counterparts in the output model. The aim of the Epsilon Comparison Language (ECL) is to enable users to specify comparison algorithms in a rule-based manner to identify pairs of matching elements between two models of potentially different metamodels and modelling technologies. In this section, the abstract and concrete syntax, as well as the execution semantics of the language, are discussed in detail.","title":"The Epsilon Comparison Language (ECL)"},{"location":"doc/ecl/#abstract-syntax","text":"In ECL, comparison specifications are organized in modules ( EclModule ). As illustrated below, EclModule (indirectly) extends EolModule which means that it can contain user-defined operations and import other library modules and ECL modules. Apart from operations, an ECL module contains a set of match-rules ( MatchRule ) and a set of pre and post blocks that run before and after all comparisons, respectively. MatchRules enable users to perform comparison of model elements at a high level of abstraction. Each match-rule declares a name and two parameters ( leftParameter and rightParameter ) that specify the types of elements it can compare. It also optionally defines a number of rules it inherits ( extends ) and if it is abstract , lazy and/or greedy . The semantics of the latter are discussed shortly. 
classDiagram class MatchRule { -name: String -abstract: Boolean -lazy: Boolean -unique: Boolean -greedy: Boolean -guard: ExecutableBlock<Boolean> -compare: ExecutableBlock<Boolean> -do: ExecutableBlock<Void> } class Parameter { -name: String -type: EolType } class NamedStatementBlockRule { -name: String -body: StatementBlock } EolModule <|-- ErlModule EclModule --|> ErlModule Pre --|> NamedStatementBlockRule Post --|> NamedStatementBlockRule ErlModule -- Pre: pre * ErlModule -- Post: post * EclModule -- MatchRule: rules * MatchRule -- Parameter: left MatchRule -- Parameter: right MatchRule -- MatchRule: extends * A match rule has three parts. The guard part is an EOL expression or statement block that further limits the applicability of the rule to an even narrower range of elements than that specified by the left and right parameters. The compare part is an EOL expression or statement block that is responsible for comparing a pair of elements and deciding if they match or not. Finally, the do part is an EOL expression or block that is executed if the compare part returns true to perform any additional actions required. Pre and post blocks are named blocks of EOL statements which as discussed in the sequel are executed before and after the match-rules have been executed respectively.","title":"Abstract Syntax"},{"location":"doc/ecl/#concrete-syntax","text":"The concrete syntax of a match-rule is displayed below. (@lazy)? (@greedy)? (@abstract)? rule <name> match <leftParameterName>:<leftParameterType> with <rightParameterName>:<rightParameterType> (extends <ruleName>(, <ruleName>)*)? { (guard (:expression)|({statementBlock}))? compare (:expression)|({statementBlock}) (do {statementBlock})? } Pre and post blocks have a simple syntax that, as shown below, consists of the identifier ( pre or post ), an optional name and the set of statements to be executed enclosed in curly braces. 
(pre|post) <name> { statement+ }","title":"Concrete Syntax"},{"location":"doc/ecl/#execution-semantics","text":"","title":"Execution Semantics"},{"location":"doc/ecl/#rule-and-block-overriding","text":"An ECL module can import a number of other ECL modules. In such a case, the importing ECL module inherits all the rules and pre/post blocks specified in the modules it imports (recursively). If the module specifies a rule or a pre/post block with the same name, the local rule/block overrides the imported one respectively.","title":"Rule and Block Overriding"},{"location":"doc/ecl/#comparison-outcome","text":"As illustrated below, the result of comparing two models with ECL is a trace ( MatchTrace ) that consists of a number of matches ( Match ). Each match holds a reference to the objects from the two models that have been compared ( left and right ), a boolean value that indicates if they have been found to be matching or not, a reference to the rule that has made the decision, and a Map ( info ) that is used to hold any additional information required by the user (accessible at runtime through the matchInfo implicit variable). During the matching process, a second, temporary, match trace is also used to detect and resolve cyclic invocation of match-rules as discussed in the sequel. classDiagram class Match { -left: Object -right: Object -matching: Boolean } class EclContext { -matchTrace: MatchTrace -tempMatchTrace: MatchTrace } MatchRule -- Match: rule MatchTrace -- Match: matches * EclContext --|> EolContext EclContext -- MatchTrace Map -- Match: info","title":"Comparison Outcome"},{"location":"doc/ecl/#rule-execution-scheduling","text":"Non-abstract, non-lazy match-rules are evaluated automatically by the execution engine in a top-down fashion - with respect to their order of appearance - in two passes. 
In the first pass, each rule is evaluated for all the pairs of instances in the two models that have a type-of relationship with the types specified by the leftParameter and rightParameter of the rule. In the second pass, each rule that is marked as greedy is executed for all pairs that have not been compared in the first pass, and which have a kind-of relationship with the types specified by the rule. In both passes, to evaluate the compare part of the rule, the guard must be satisfied. Before the compare part of a rule is executed, the compare parts of all of the rules it extends (super-rules) must be executed (recursively). Before executing the compare part of a super-rule, the engine verifies that the super-rule is actually applicable to the elements under comparison by checking for type conformance and evaluating the guard part of the super-rule. If the compare part of a rule evaluates to true, the optional do part is executed. In the do part the user can specify any actions that need to be performed for the identified matching elements, such as to populate the info map of the established match with additional information. Finally, a new match is added to the match trace that has its matching property set to the logical conjunction of the results of the evaluation of the compare parts of the rule and its super-rules.","title":"Rule Execution Scheduling"},{"location":"doc/ecl/#the-matches-built-in-operation","text":"To refrain from performing duplicate comparisons and to de-couple match-rules from each other, ECL provides the built-in matches(opposite : Any) operation for model elements and collections. When the matches() operation is invoked on a pair of objects, it queries the main and temporary match-traces to discover if the two elements have already been matched and if so it returns the cached result of the comparison. 
Otherwise, it attempts to find an appropriate match rule to compare the two elements and, if such a rule is found, it returns the result of the comparison; otherwise it returns false. Unlike the top-level execution scheme, the matches() operation invokes both lazy and non-lazy rules. In addition to objects, the matches operation can also be invoked to match pairs of collections of the same type (e.g. a Sequence against a Sequence). When invoked on ordered collections (i.e. Sequence and OrderedSet ), it examines if the collections have the same size and each item of the source collection matches with the item of the same index in the target collection. Finally, when invoked on unordered collections (i.e. Bag and Set ), it examines if for each item in the source collection, there is a matching item in the target collection irrespective of its index. Users can also override the built-in matches operation using user-defined operations with the same name that loosen or strengthen the built-in semantics.","title":"The matches() built-in operation"},{"location":"doc/ecl/#cyclic-invocation-of-matches","text":"Providing the built-in matches operation significantly simplifies comparison specifications. It also enhances the decoupling of match-rules from each other: when a rule needs to compare two elements that are outside its scope, it does not need to know or specify explicitly which other rule can compare those elements. On the other hand, it is possible - and indeed quite common - for two rules to implicitly invoke each other. For example, consider the match rule below that attempts to match nodes of the simple Tree metamodel. 
classDiagram class Tree { +label: String +parent: Tree +children: Tree[*] } Tree -- Tree rule Tree2Tree match l : T1!Tree with r : T2!Tree { compare : l.label = r.label and l.parent.matches(r.parent) and l.children.matches(r.children) } The rule specifies that for two Tree nodes ( l and r ) to match, they should have the same label, belong to matching parents and have matching children. In the absence of a dedicated mechanism for cycle detection and resolution, the rule would end up in an infinite loop. To address this problem, ECL provides a temporary match-trace which is used to detect and resolve cyclic invocations of the matches() built-in operation. As discussed above, a match is added to the primary match-trace as soon as the compare part of the rule has been executed to completion. By contrast, a temporary match (with its matching property set to true ) is added to the temporary trace before the compare part is executed. In this way, any subsequent attempts to match the two elements from invoked rules will not re-invoke the rule. Finally, when a top-level rule returns, the temporary match trace is reset.","title":"Cyclic invocation of matches()"},{"location":"doc/ecl/#fuzzy-and-dictionary-based-string-matching","text":"In the example above, the rule specifies that to match, two trees must - among other criteria - have the same label. However, there are cases when a less-strict approach to matching string properties of model elements is desired. For instance, when comparing two UML models originating from different organizations, it is common to encounter ontologically equivalent classes which however have different names (e.g. Client and Customer). In this case, to achieve a more sound matching, the use of a dictionary or a lexical database (e.g. WordNet) is necessary. Alternatively, fuzzy string matching algorithms can be used. 
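The temporary-trace mechanism described for the Tree2Tree rule above can be illustrated outside of Epsilon. Below is a minimal Python sketch (not Epsilon's implementation; all names are invented for this example): a primary trace caches completed comparisons, while a temporary trace optimistically records in-progress pairs so that the mutually recursive parent/children comparisons terminate.

```python
# Illustrative sketch (not the Epsilon implementation) of how a temporary
# match trace breaks the parent/children recursion of the Tree2Tree rule.

class Tree:
    def __init__(self, label, children=()):
        self.label = label
        self.parent = None              # None for the root
        self.children = list(children)
        for child in self.children:
            child.parent = self

trace = {}          # primary match trace: (id(l), id(r)) -> final verdict
temp_trace = set()  # pairs whose compare part is currently executing

def matches(left, right):
    if left is None or right is None:
        return left is right            # two missing parents match trivially
    key = (id(left), id(right))
    if key in trace:                    # cached result: no duplicate comparison
        return trace[key]
    if key in temp_trace:               # cycle detected: assume a match for now
        return True
    temp_trace.add(key)
    result = (left.label == right.label
              and matches(left.parent, right.parent)
              and len(left.children) == len(right.children)
              and all(matches(l, r)
                      for l, r in zip(left.children, right.children)))
    trace[key] = result
    temp_trace.discard(key)  # simplification: ECL resets the temporary trace per top-level rule
    return result
```

Without the temporary trace, comparing a child would re-invoke the comparison on its parent, which would re-invoke it on the child, and so on indefinitely.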
As several such tools and algorithms have been implemented in various programming languages, it is a sensible approach to reuse them instead of re-implementing them. For example, in the listing below, a wrapper for the Simmetrics fuzzy string comparison tool is used to compare the labels of the trees using the Levenshtein algorithm. To achieve this, line 11 invokes the fuzzyMatch() operation defined in lines 16-18, which uses the Simmetrics native tool (instantiated in lines 2-4) to match the two labels using their Levenshtein distance with a threshold of 0.5. pre { var simmetrics = new Native(\"org.epsilon.ecl.tools. textcomparison.simmetrics.SimMetricsTool\"); } rule FuzzyTree2Tree match l : T1!Tree with r : T2!Tree { compare : l.label.fuzzyMatch(r.label) and l.parent.matches(r.parent) and l.children.matches(r.children) } operation String fuzzyMatch(other : String) : Boolean { return simmetrics.similarity(self,other,\"Levenshtein\") > 0.5; }","title":"Fuzzy and Dictionary-based String Matching"},{"location":"doc/ecl/#the-match-trace","text":"Users can query and modify the match trace calculated during the comparison process in the post sections of the module or export it into another application or Epsilon program. For example, in a post section, the trace can be printed to the default output stream or serialized into a model of an arbitrary metamodel. In another use case, the trace may be exported to be used in the context of a validation module that will use the identified matches to evaluate inter-model constraints, or in a merging module that will use the matches to identify the elements on which the two models will be merged.","title":"The Match Trace"},{"location":"doc/egl/","text":"The Epsilon Generation Language (EGL) \u00b6 EGL is a language tailored for model-to-text transformation (M2T). EGL can be used to transform models into various types of textual artefact, including code (e.g. Java), reports (e.g. in HTML/LaTeX), images (e.g. 
using Graphviz ), formal specifications (e.g. Z notation), or even entire applications comprising code in multiple languages (e.g. HTML, Javascript and CSS). EGL is a template-based language (i.e. EGL programs resemble the text that they generate), and provides several features that simplify and support the generation of text from models, including: a sophisticated and language-independent merging engine (for preserving hand-written sections of generated text), an extensible template system (for generating text to a variety of sources, such as a file on disk, a database server, or even as a response issued by a web server), formatting algorithms (for producing generated text that is well-formatted and hence readable), and traceability mechanisms (for linking generated text with source models). Abstract Syntax \u00b6 The figure below shows the abstract syntax of EGL's core functionality. classDiagram class EglSection { +getChildren(): List +getText(): String } class EglDynamicSection { +getText(): String } class EglStaticSection { +getText(): String } class EglShortcutSection { +getText(): String } EglSection <|-- EglDynamicSection EglSection <|-- EglStaticSection EglSection <|-- EglShortcutSection Conceptually, an EGL program comprises one or more sections . The contents of static sections are emitted verbatim and appear directly in the generated text. The contents of dynamic sections are executed and are used to control the text that is generated. In its dynamic sections, EGL re-uses EOL's syntax for structuring program control flow, performing model inspection and navigation, and defining custom operations. In addition, EGL provides an EOL object, out , which is used in dynamic sections to perform operations on the generated text, such as appending and removing strings; and specifying the type of text to be generated. 
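Conceptually, splitting a template into static and dynamic sections is a small tokenization step. The sketch below (Python, with invented names; not EGL's actual parser) classifies the parts of a template using EGL's [% %] and [%= %] delimiters:

```python
import re

# Toy tokenizer for EGL-style templates: text outside [% %] is static,
# [%=...%] is a dynamic output section, and [%...%] is a dynamic (control)
# section. Function and tag names are invented for this illustration.
TOKEN = re.compile(r"\[%=?.*?%\]", re.DOTALL)

def sections(template):
    result, pos = [], 0
    for m in TOKEN.finditer(template):
        if m.start() > pos:
            result.append(("static", template[pos:m.start()]))
        body = m.group(0)
        if body.startswith("[%="):
            result.append(("dynamic-output", body[3:-2].strip()))
        else:
            result.append(("dynamic", body[2:-2].strip()))
        pos = m.end()
    if pos < len(template):
        result.append(("static", template[pos:]))
    return result
```

For example, sections("i is [%=i%]") yields a static section "i is " followed by a dynamic output section for the expression i, mirroring how an EGL engine would interleave verbatim emission with expression evaluation.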
EGL also provides syntax for defining dynamic output sections, which provide a convenient shorthand for outputting text from within dynamic sections. Similar syntax is often provided by template-based code generators. Concrete Syntax \u00b6 The concrete syntax of EGL closely resembles the style of other template-based code generation languages, such as PHP. The tag pair [% %] is used to delimit a dynamic section. Any text not enclosed in such a tag pair is contained in a static section. The listing below illustrates the use of dynamic and static sections to form a basic EGL template. [% for (i in Sequence{1..5}) { %] i is [%=i%] [% } %] Executing the EGL template above would produce the generated text below. The [%=expr%] construct (line 2) is shorthand for [% out.print(expr); %] , which appends expr to the output generated by the transformation. i is 1 i is 2 i is 3 i is 4 i is 5 Any EOL statement can be contained in the dynamic sections of an EGL template. For example, the EGL template shown below generates text from a model that conforms to a metamodel that describes an object-oriented system. [% for (c in Class.all) { %] [%=c.name%] [% } %] Comments and Markers \u00b6 Inside an EGL dynamic section, EOL's comment syntax can be used. Additionally, EGL adds syntax for comment blocks [* this is a comment *] and marker blocks [*- this is a marker *] . Marker blocks are highlighted by the EGL editor and EGL outline view in Eclipse. User-Defined Operations \u00b6 Like EOL, EGL permits users to define re-usable units of code via operations. [% c.declaration(); %] [% operation Class declaration() { %] [%=self.visibility%] class [%=self.name%] {} [% } %] In EGL, user-defined operations are defined in dynamic sections, but may mix static and dynamic sections in their bodies. Consider, for example, the EGL code in the listing above, which emits a declaration for a Java class (e.g. public class Foo {} ). Lines 2-4 declare the operation. 
Note that the start and the end of the operation's declaration (on lines 2 and 4, respectively) are contained in dynamic sections. The body of the operation (line 3), however, mixes static and dynamic output sections. Finally, note that the operation is invoked from a dynamic section (line 1). It is worth noting that any loose (i.e. not contained in other operations) dynamic or static sections below the first operation of a template will be ignored at runtime. When a user-defined operation is invoked, any static or dynamic sections contained in the body of the operation are immediately appended to the generated text. Sometimes, however, it is desirable to manipulate the text produced by an operation before it is appended to the generated text. To this end, EGL defines the @template annotation which can be applied to operations to indicate that any text generated by the operation must be returned from the operation and not appended to the generated text. For example, the EGL program in the listing above could be rewritten using a @template annotation, as demonstrated below. [%=c.declaration()%] [% @template operation Class declaration() { %] [%=self.visibility%] class [%=self.name%] {} [% } %] There is a subtle difference between the way in which standard (i.e. unannotated) operations and @template operations are invoked. Compare the first line of the two listings above. The former uses a dynamic section, because invoking the operation causes the evaluation of its body to be appended to the text generated by this program. By contrast, the latter uses a dynamic output section to append the result returned by the @template operation to the text generated by this program. In general, @template operations afford more flexibility than standard operations. For example, line 1 of the listing above could perform some manipulation of the text returned by the declaration() operation before the text is outputted. 
Therefore, @template operations provide a mechanism for re-using common pieces of a code generator, without sacrificing the flexibility to slightly alter text before it is emitted. Standard (unannotated) operations also permit re-use, but in a less flexible manner. Finally, it is worth noting that user-defined operations in EGL do not have to generate text. For example, the following listing illustrates two operations defined in an EGL program that do not generate any text. The former is a query that returns a Boolean value, while the latter alters the model, and does not return a value. [% operation Class isAnonymous() : Boolean { return self.name.isUndefined(); } operation removeOneClass() { delete Class.all.random(); } %] The OutputBuffer \u00b6 As an EGL program is executed, text is appended to a data structure termed the OutputBuffer . In every EGL program, the OutputBuffer is accessible via the out built-in variable. The OutputBuffer provides operations for appending to and removing from the buffer, and for merging generated text with existing text. For many EGL programs, interacting directly with the OutputBuffer is unnecessary. The contents of static and dynamic output sections are sent directly to the OutputBuffer , and no operation of the OutputBuffer need be invoked directly. However, in cases when generated text must be sent to the OutputBuffer from dynamic sections, or when generated text must be merged with existing text, the operations of OutputBuffer are provided in the table below. The merge engine section discusses merging generated and existing text, and presents several examples of invoking the operations of OutputBuffer . 
Signature Description chop(numberOfChars : Integer) Removes the specified number of characters from the end of the buffer print(object : Any) Appends a string representation of the specified object to the buffer println(object : Any) Appends a string representation of the specified object and a new line to the buffer println() Appends a new line to the buffer setContentType(contentType : String) Updates the content type of this template. Subsequent calls to preserve or startPreserve that do not specify a style of comment will use the style of comment defined by the specified content type. preserve(id : String, enabled : Boolean, contents : String) Appends a protected region to the buffer with the given identifier, enabled state and contents. Uses the current content type to determine how to format the start and end markers. preserve(startComment : String, endComment : String, id : String, enabled : Boolean, contents : String) Appends a protected region to the buffer with the given identifier, enabled state and contents. Uses the first two parameters as start and end markers. startPreserve(id : String, enabled : Boolean) Begins a protected region by appending the start marker for a protected region to the buffer with the given identifier and enabled state. Uses the current content type to determine how to format the start and end markers. startPreserve(startComment : String, endComment : String, id : String, enabled : Boolean) Begins a protected region by appending the start marker to the buffer with the given identifier and enabled state. Uses the first two parameters as start and end markers. stopPreserve() Ends the current protected region by appending the end marker to the buffer. This operation should be invoked only if a protected region is currently open (i.e. has been started by invoking startPreserve but not yet stopped by invoking stopPreserve ). 
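The preserve operations above exist to support merging generated text with existing text. As a rough illustration of the idea (a toy Python sketch with an invented marker format, not EGL's actual marker syntax or merge engine), regenerating a file keeps whatever the existing file holds between matching protected-region markers:

```python
import re

# Toy sketch of protected-region merging. The marker format below is invented
# for this example; EGL derives its real markers from the content type or the
# comment delimiters passed to preserve()/startPreserve().
REGION = re.compile(r"/\* preserve (\S+) \*/(.*?)/\* end \*/", re.DOTALL)

def merge(generated, existing):
    # Harvest region contents from the existing text, keyed by region id.
    preserved = {m.group(1): m.group(2) for m in REGION.finditer(existing)}

    def keep(match):
        region_id, default_body = match.group(1), match.group(2)
        body = preserved.get(region_id, default_body)
        return "/* preserve %s */%s/* end */" % (region_id, body)

    # Rewrite each region in the newly generated text with the preserved body.
    return REGION.sub(keep, generated)
```

A region present in the existing file survives regeneration; a region with no counterpart in the existing file keeps its freshly generated default contents.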
Co-ordination \u00b6 Warning The recommended way to coordinate the execution of EGL templates is using the EGX rule-based language . This section describes an imperative way to invoke EGL templates, which pre-dates EGX and should only be used as a fall-back in case the semantics of EGX are not sufficient for the task at hand. In the large, M2T transformations are used to generate text to various destinations. For example, code generators often produce files on disk, and web applications often generate text as part of the response for a resource on the web server. Text might be generated to a network socket during interprocess communication, or as a query that runs on a database. Furthermore, (parts of) a single M2T transformation might be re-used in different contexts. An M2T transformation that generates files on disk today might be re-purposed to generate the response from a web server tomorrow. Given these concerns, EGL provides a co-ordination engine that provides mechanisms for modularising M2T transformations, and for controlling the destinations to which text is generated. The EGL co-ordination engine fulfils three requirements: Reusability : the co-ordination engine allows EGL programs to be decomposed into one or more templates, which can be shared between EGL programs. Variety of destination : the co-ordination engine provides an extensible set of template types that can generate text to a variety of destinations. The next section describes the default template type, which is tailored to generate text to files on disk, while a subsequent section discusses the way in which users can define their own template types for generating text to other types of destination. Separation of concerns : the co-ordination engine ensures that the logic for controlling the text that is generated (i.e. the content) and the logic for controlling the way in which text is emitted (i.e. the destination) are kept separate. 
There is also the EGX language , which was introduced after this documentation was initially written and provides a fully-fledged rule-based execution engine for parameterising EGL templates. The Template type \u00b6 Central to the co-ordination engine is the Template type, which EGL provides in addition to the default EOL types. Via the Template type, EGL fulfils the three requirements identified above. Firstly, a Template can invoke other Templates , and hence can be shared and re-used between EGL programs. Secondly, the Template type has been implemented in an extensible manner: users can define their own types of Template that generate text to any destination (e.g. a database or a network socket), as described in the custom coordination section . Finally, the Template type provides a set of operations that are used to control the destination of generated text. Users typically define a \"driver\" template that does not generate text, but rather controls the destination of text that is generated by other templates. For example, consider the EGL program in the listing below. This template generates no text (as it contains only a single dynamic section), but is used instead to control the destination of text generated by another template. Line 1 defines a variable, t , of type Template . Note that, unlike the EOL types, instances of Template are not created with the new keyword. Instead, the TemplateFactory built-in object is used to load templates from, for example, a file system path. On line 3, the generate operation of the Template type invokes the EGL template stored in the file \"ClassNames.egl\" and emits the generated text to \"Output.txt\". [% var t : Template = TemplateFactory.load(\"ClassNames.egl\"); t.generate(\"Output.txt\"); %] In addition to generate , the Template type defines further operations for controlling the context and invocation of EGL templates. 
The following table lists all of the operations defined on Template , and a further example of their use is given in the sequel. Signature Description populate(name : String, value : Any) Makes a variable with the specified name and value available during the execution of the template. process() : String Executes the template and returns the text that is generated. generate(destination : String) Executes the template and stores the text to the specified destination. The format of the destination parameter is dictated by the type of template. For example, the default template type (which can generate files on disk) expects a file system path as the destination parameter. Returns an object representing the generated file. append(destination : String) Executes the template: if the destination exists, it will add a newline and the generated text at the end of the file. If the file does not exist, it will write the generated text to it (with no newline). Returns an object representing the generated file. setFormatter(formatter : Formatter) Changes the formatter for this template to the specified formatter. Subsequent calls to generate or process will produce text that is formatted with the specified formatter. setFormatters(formatters : Sequence(Formatter)) Changes the formatters for this template to the specified sequence of formatters. Subsequent calls to generate or process will produce text that is formatted with each of the specified formatters in turn. The TemplateFactory object \u00b6 As discussed above, instances of Template are not created with the new keyword. Instead, EGL provides a built-in object, the TemplateFactory , for this purpose. Users can customise the type of the TemplateFactory object to gain more control over the way in which text is generated. 
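The division of labour between Template and TemplateFactory can be mimicked in a few lines of Python. This is a loose sketch that uses string.Template as a stand-in for EGL execution; the method names mirror the operation tables above, but nothing here is Epsilon code and the substitution syntax ($name) is Python's, not EGL's:

```python
import os
import string

# Loose Python analogue of EGL's coordination API (illustrative only).
class Template:
    def __init__(self, source, output_root):
        self.source = source
        self.output_root = output_root
        self.variables = {}

    def populate(self, name, value):
        # Make a variable available during template execution.
        self.variables[name] = value

    def process(self):
        # Execute the template and return the generated text.
        return string.Template(self.source).substitute(self.variables)

    def generate(self, destination):
        # Execute the template and store the text at the destination,
        # resolved against the factory's output root.
        path = os.path.join(self.output_root, destination)
        with open(path, "w") as f:
            f.write(self.process())
        return path

class TemplateFactory:
    def __init__(self):
        self.template_root = "."
        self.output_root = "."

    def set_output_root(self, path):
        self.output_root = path

    def set_template_root(self, path):
        self.template_root = path

    def prepare(self, code):
        # Build a template directly from a string of template code.
        return Template(code, self.output_root)

    def load(self, path):
        # Build a template from a file under the template root.
        with open(os.path.join(self.template_root, path)) as f:
            return self.prepare(f.read())
```

A driver would then populate a variable and direct output per model element, much like the EGL driver listing discussed in the sequel.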
By default, EGL provides a TemplateFactory that exposes operations for loading templates (by loading files from disk), preparing templates (by parsing a string containing EGL code), and for controlling the file system locations from which templates are loaded and to which text is generated. The table below lists the operations provided by the built-in TemplateFactory object. Signature Description load(path : String) : Template Returns an instance of Template that can be used to execute the EGL template stored at the specified path. prepare(code : String) : Template Returns an instance of Template that can be used to execute the specified EGL code. setOutputRoot(path : String) Changes the default path that is used to resolve relative paths when generating files to disk. Subsequent calls to load and prepare will create templates that use the new path. setTemplateRoot(path : String) Changes the default path that is used to resolve relative paths when loading templates with the load operation. Subsequent calls to load will use the new path. setDefaultFormatter(formatter : Formatter) Changes the formatter for this template factory to the specified formatter. Templates that are constructed after this operation has been invoked will produce text that is, by default, formatted with the specified formatter. setDefaultFormatters(formatters : Sequence(Formatter)) Changes the formatters for this template factory to the specified sequence of formatters. Templates that are constructed after this operation has been invoked will produce text that is, by default, formatted with each of the specified formatters in turn. An Example of Co-ordination with EGL \u00b6 The operations provided by the TemplateFactory object and Template type are demonstrated by the EGL program in the listing below. 
Lines 2-3 use operations on TemplateFactory to change the paths from which templates will be loaded (line 2) and to which generated files will be created (line 3). Line 5 demonstrates the use of the prepare operation for creating a template from EGL code. When the interface template is invoked, the EGL code passed to the prepare operation will be executed. Finally, line 9 (and line 12) illustrates the way in which the populate operation can be used to pass a value to a template before invoking it. Specifically, the interface and implementation templates can use a variable called root , which is populated by the driver template before invoking them. [% TemplateFactory.setTemplateRoot(\"/usr/franz/templates\"); TemplateFactory.setOutputRoot(\"/tmp/output\"); var interface : Template = TemplateFactory.prepare(\"public interface [%=root.name%] {}\"); var implementation : Template = TemplateFactory.load(\"Class2Impl.egl\"); for (c in Class.all) { interface.populate(\"root\", c); interface.generate(\"I\" + c.name + \".java\"); implementation.populate(\"root\", c); implementation.generate(c.name + \".java\"); } %] Customising the Co-ordination Engine \u00b6 EGL provides mechanisms for customising the co-ordination engine. Specifically, users can define and use their own TemplateFactory . In many cases, users need not customise the co-ordination engine, and can write transformations using the built-in Template type and TemplateFactory object. If, however, you need more control over the co-ordination process, the discussion in this section might be helpful. Specifically, a custom TemplateFactory is typically used to achieve one or more of the following goals: Provide additional mechanisms for constructing Templates . Example: facilitate the loading of templates from a database. Enrich / change the behaviour of the built-in Template type. Example: change the way in which generated text is sent to its destination. 
Observe or instrument the transformation process by, for instance, logging calls to the operations provided by the Template type or the TemplateFactory object. Example: audit or trace the transformation process. Customisation is achieved in two stages: implementing the custom TemplateFactory (and potentially a custom Template ) in Java, and using the custom TemplateFactory . Implementing a custom TemplateFactory \u00b6 A custom TemplateFactory is a subclass of EglTemplateFactory . Typically, a custom TemplateFactory is implemented by overriding one of the methods of EglTemplateFactory . For example, the createTemplate method is overridden to specify that a custom type of Template should be created by the TemplateFactory . Likewise, the load and prepare methods can be overridden to change the location from which Template s are constructed. A custom Template is a subclass of EglTemplate or, most often, a subclass of EglPersistentTemplate . Again, customisation is typically achieved by overriding methods in the superclass, or by adding new methods. For example, to perform auditing activities whenever a template is used to generate text, the doGenerate method of EglPersistentTemplate is overridden. 
import java.io.File ; import java.net.URI ; import org.eclipse.epsilon.egl.EglFileGeneratingTemplateFactory ; import org.eclipse.epsilon.egl.EglTemplate ; import org.eclipse.epsilon.egl.EglPersistentTemplate ; import org.eclipse.epsilon.egl.exceptions.EglRuntimeException ; import org.eclipse.epsilon.egl.execute.context.IEglContext ; import org.eclipse.epsilon.egl.spec.EglTemplateSpecification ; public class CountingTemplateFactory extends EglFileGeneratingTemplateFactory { @Override protected EglTemplate createTemplate ( EglTemplateSpecification spec ) throws Exception { return new CountingTemplate ( spec , context , getOutputRootOrRoot (), outputRootPath ); } public static class CountingTemplate extends EglPersistentTemplate { public static int numberOfCallsToGenerate = 0 ; public CountingTemplate ( EglTemplateSpecification spec , IEglContext context , URI outputRoot , String outputRootPath ) throws Exception { super ( spec , context , outputRoot , outputRootPath ); } @Override protected void doGenerate ( File file , String targetName , boolean overwrite , boolean protectRegions ) throws EglRuntimeException { numberOfCallsToGenerate ++; } } } Using a custom TemplateFactory \u00b6 When invoking an EGL program, the user may select a custom TemplateFactory . For example, the EGL development tools provide an Eclipse launch configuration that provides a tab named \"Generated Text.\" On this tab, users can select a TemplateFactory (under the group called \"Type of Template Factory\"). Note that a TemplateFactory only appears on the launch configuration tab if it has been registered with EGL via an Eclipse extension. Similarly, the workflow language provided by Epsilon allows the specification of custom types of TemplateFactory via the templateFactoryType parameter. Summary \u00b6 The co-ordination engine provided by EGL facilitates the construction of modular and re-usable M2T transformations and can be used to generate text to various types of destination. 
Furthermore, the logic for specifying the contents of generated text is kept separate from the logic for specifying the destination of generated text. Merge Engine \u00b6 EGL provides language constructs that allow M2T transformations to designate regions of generated text as protected . Whenever an EGL program attempts to generate text, any protected regions that are encountered in the specified destination are preserved. Within an EGL program, protected regions are specified with the preserve(String, String, String, Boolean, String) method on the out object. The first two parameters define the comment delimiters of the target language. The other parameters provide the name, enabled state and content of the protected region, as illustrated in the listing below. [%=out.preserve(\"/*\", \"*/\", \"anId\", true, \"System.out.println(foo);\") %] A protected region declaration may have many lines, and use many EGL variables in the contents definition. To enhance readability, EGL provides two additional methods on the out object: startPreserve(String, String, String, Boolean) and stopPreserve . The listing below uses these to generate a protected region. [%=out.startPreserve(\"/*\", \"*/\", \"anId\", true)%] System.out.println(foo); [%=out.stopPreserve()%] Because an EGL template may contain many protected regions, EGL also provides a separate method to set the target language generated by the current template, setContentType(String) . By default, EGL recognises Java, HTML, Perl and EGL as valid content types. An alternative configuration file can be used to specify further content types. Following a call to setContentType , the first two arguments to the preserve and startPreserve methods can be omitted, as shown in the listing below. 
[% out.setContentType(\"Java\"); %] [%=out.preserve(\"anId\", true, \"System.out.println(foo);\")%] Because some languages define more than one style of comment delimiter, EGL allows mixed use of the styles for preserve and startPreserve methods. Once a content type has been specified, a protected region may also be declared entirely from a static section, using the syntax in the listing below. [% out.setContentType(\"Java\"); %] // protected region anId [on|off] begin System.out.println(foo); // protected region anId end When a template that defines one or more protected regions is processed by the EGL execution engine, the target output destinations are examined and the existing contents of any protected regions are preserved. If either the output generated from the template or the existing contents of the target output destination contain protected regions, a merging process is invoked. The table below shows the default behaviour of EGL's merge engine. Protected Regions in Generated Protected Regions in Existing Contents taken from On On Existing On Off Generated On Absent Generated Off On Existing Off Off Generated Off Absent Generated Absent On Neither (causes a warning) Absent Off Neither (causes a warning) Formatters \u00b6 Often the text generated by a model-to-text transformation is not formatted in a desirable manner. Text generated with a model-to-text transformation might contain extra whitespace or inconsistent indentation. This is because controlling the formatting of generated text in a model-to-text transformation language can be challenging. In a template-based model-to-text language, such as EGL, it can be difficult to know how best to format a transformation. On the one hand, the transformation must be readable and understandable, and on the other hand, the generated text must typically also be readable and understandable. Conscientious developers apply various conventions to produce readable code. 
EGL encourages template developers to prioritise the readability of templates over the readability of generated text. For formatting generated text, EGL provides an extensible set of formatters that can be invoked during a model-to-text transformation. Using a Formatter \u00b6 EGL provides several built-in formatters. Users can implement additional formatters. To use a formatter, invoke the setFormatter or setFormatters operation on an instance of the Template type. A formatter is a Java class that implements EGL's Formatter interface. From within an EGL program, formatters can be created using a Native (i.e. Java) type. The listing below demonstrates the use of a built-in formatter (XmlFormatter). [% var f = new Native(\"org.eclipse.epsilon.egl.formatter.language.XmlFormatter\"); var t = TemplateFactory.load(\"generate_some_xml.egl\"); t.setFormatter(f); t.generate(\"formatted.xml\"); %] To facilitate the re-use of a formatter with many templates, the TemplateFactory object provides the setDefaultFormatter and setDefaultFormatters operations. Templates that are loaded or prepared after a call to setDefaultFormatter or setDefaultFormatters will, by default, use the formatter(s) specified for the TemplateFactory . Note that setting the formatter on a template overwrites any formatter that may have been set on that template by the TemplateFactory . The default formatters for an EGL program can also be set when invoking the program. For example, the EGL development tools provide an Eclipse launch configuration that provides a tab named \"Generated Text.\" On this tab, users can configure one or more formatters which will be used as the default formatters for this EGL program. Note that custom formatters only appear on the launch configuration tab if they have been registered with EGL via an Eclipse extension. 
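To illustrate, the following sketch combines setDefaultFormatter with the XmlFormatter from the listing above; the template and file names are illustrative only:

```egl
[%
var f = new Native(\"org.eclipse.epsilon.egl.formatter.language.XmlFormatter\");
TemplateFactory.setDefaultFormatter(f);
// every template loaded or prepared from now on is XML-formatted by default
var t = TemplateFactory.load(\"generate_some_xml.egl\");
t.generate(\"formatted.xml\");
%]
```

Compared with calling setFormatter on each template, this sets the formatter once for the whole transformation.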
Similarly, the workflow language provided by Epsilon provides a formatter nested element that can be used to specify one or more default formatters. Implementing a Custom Formatter \u00b6 Providing a user-defined formatter involves implementing the Formatter interface (in org.eclipse.epsilon.egl.formatter ). For example, the listing below demonstrates a simple formatter that transforms all generated text to uppercase. import org.eclipse.epsilon.egl.formatter.Formatter ; public class UppercaseFormatter implements Formatter { @Override public String format ( String text ) { return text . toUpperCase (); } } The set of built-in formatters provided by EGL includes some partial implementations of the Formatter interface that can be re-used to simplify the implementation of custom formatters. For instance, the LanguageFormatter class can correct the indentation of a program written in most languages, when given a start and end regular expression. Finally, an Eclipse extension point is provided for custom formatters. Providing an extension that conforms to the custom formatter extension point allows EGL to display the custom formatter in the launch configuration tabs of the EGL development tools. Traceability \u00b6 EGL also provides a traceability API, as a debugging aid, to support auditing of the M2T transformation process, and to facilitate change propagation. This API facilitates exploration of the templates executed, files affected and protected regions processed during a transformation. The figure below shows sample output from the traceability API after execution of an EGL M2T transformation to generate Java code from an instance of an OO metamodel. The view shown is accessed via the ... menu in Eclipse. Traceability information can also be accessed programmatically, as demonstrated in the listing below. EglTemplateFactoryModuleAdapter module = new EglTemplateFactoryModuleAdapter ( new EglTemplateFactory ()); boolean parsed = module . 
parse ( new File ( \"myTemplate.egl\" )); if ( parsed && module . getParseProblems (). isEmpty ()) { module . execute (); Template base = module . getContext (). getBaseTemplate (); // traverse the template hierarchy // display data } else { // error handling }","title":"Code generation (EGL)"},{"location":"doc/egl/#the-epsilon-generation-language-egl","text":"EGL is a language tailored for model-to-text transformation (M2T). EGL can be used to transform models into various types of textual artefact, including code (e.g. Java), reports (e.g. in HTML/LaTeX), images (e.g. using Graphviz ), formal specifications (e.g. Z notation), or even entire applications comprising code in multiple languages (e.g. HTML, JavaScript and CSS). EGL is a template-based language (i.e. EGL programs resemble the text that they generate), and provides several features that simplify and support the generation of text from models, including: a sophisticated and language-independent merging engine (for preserving hand-written sections of generated text), an extensible template system (for generating text to a variety of destinations, such as a file on disk, a database server, or even as a response issued by a web server), formatting algorithms (for producing generated text that is well-formatted and hence readable), and traceability mechanisms (for linking generated text with source models).","title":"The Epsilon Generation Language (EGL)"},{"location":"doc/egl/#abstract-syntax","text":"The figure below shows the abstract syntax of EGL's core functionality. classDiagram class EglSection { +getChildren(): List +getText(): String } class EglDynamicSection { +getText(): String } class EglStaticSection { +getText(): String } class EglShortcutSection { +getText(): String } EglSection <|-- EglDynamicSection EglSection <|-- EglStaticSection EglSection <|-- EglShortcutSection Conceptually, an EGL program comprises one or more sections . 
The contents of static sections are emitted verbatim and appear directly in the generated text. The contents of dynamic sections are executed and are used to control the text that is generated. In its dynamic sections, EGL re-uses EOL's syntax for structuring program control flow, performing model inspection and navigation, and defining custom operations. In addition, EGL provides an EOL object, out , which is used in dynamic sections to perform operations on the generated text, such as appending and removing strings; and specifying the type of text to be generated. EGL also provides syntax for defining dynamic output sections, which provide a convenient shorthand for outputting text from within dynamic sections. Similar syntax is often provided by template-based code generators.","title":"Abstract Syntax"},{"location":"doc/egl/#concrete-syntax","text":"The concrete syntax of EGL closely resembles the style of other template-based code generation languages, such as PHP. The tag pair [% %] is used to delimit a dynamic section. Any text not enclosed in such a tag pair is contained in a static section. The listing below illustrates the use of dynamic and static sections to form a basic EGL template. [% for (i in Sequence{1..5}) { %] i is [%=i%] [% } %] Executing the EGL template above would produce the generated text below. The [%=expr%] construct (line 2) is shorthand for [% out.print(expr); %] , which appends expr to the output generated by the transformation. i is 1 i is 2 i is 3 i is 4 i is 5 Any EOL statement can be contained in the dynamic sections of an EGL template. For example, the EGL template shown below generates text from a model that conforms to a metamodel that describes an object-oriented system. [% for (c in Class.all) { %] [%=c.name%] [% } %]","title":"Concrete Syntax"},{"location":"doc/egl/#comments-and-markers","text":"Inside an EGL dynamic section, EOL's comment syntax can be used. 
Additionally, EGL adds syntax for comment blocks [* this is a comment *] and marker blocks [*- this is a marker *] . Marker blocks are highlighted by the EGL editor and EGL outline view in Eclipse.","title":"Comments and Markers"},{"location":"doc/egl/#user-defined-operations","text":"Like EOL, EGL permits users to define re-usable units of code via operations. [% c.declaration(); %] [% operation Class declaration() { %] [%=self.visibility%] class [%=self.name%] {} [% } %] In EGL, user-defined operations are defined in dynamic sections, but may mix static and dynamic sections in their bodies. Consider, for example, the EGL code in the listing above, which emits a declaration for a Java class (e.g. public class Foo {} ). Lines 2-4 declare the operation. Note that the start and the end of the operation's declaration (on lines 2 and 4, respectively) are contained in dynamic sections. The body of the operation (line 3), however, mixes static and dynamic output sections. Finally, note that the operation is invoked from a dynamic section (line 1). It is worth noting that any loose (i.e. not contained in other operations) dynamic or static sections below the first operation of a template will be ignored at runtime. When a user-defined operation is invoked, any static or dynamic sections contained in the body of the operation are immediately appended to the generated text. Sometimes, however, it is desirable to manipulate the text produced by an operation before it is appended to the generated text. To this end, EGL defines the @template annotation which can be applied to operations to indicate that any text generated by the operation must be returned from the operation and not appended to the generated text. For example, the EGL program in the listing above could be rewritten using a @template annotation, as demonstrated below. 
[%=c.declaration()%] [% @template operation Class declaration() { %] [%=self.visibility%] class [%=self.name%] {} [% } %] There is a subtle difference between the way in which standard (i.e. unannotated) operations and @template operations are invoked. Compare the first line of the two listings above. The former uses a dynamic section, because invoking the operation causes the evaluation of its body to be appended to the text generated by this program. By contrast, the latter uses a dynamic output section to append the result returned by the @template operation to the text generated by this program. In general, @template operations afford more flexibility than standard operations. For example, line 1 of the listing above could perform some manipulation of the text returned by the declaration() operation before the text is outputted. Therefore, @template operations provide a mechanism for re-using common pieces of a code generator, without sacrificing the flexibility to slightly alter text before it is emitted. Standard (unannotated) operations also permit re-use, but in a less flexible manner. Finally, it is worth noting that user-defined operations in EGL do not have to generate text. For example, the following listing illustrates two operations defined in an EGL program that do not generate any text. The former is a query that returns a Boolean value, while the latter alters the model, and does not return a value. [% operation Class isAnonymous() : Boolean { return self.name.isUndefined(); } operation removeOneClass() { delete Class.all.random(); } %]","title":"User-Defined Operations"},{"location":"doc/egl/#the-outputbuffer","text":"As an EGL program is executed, text is appended to a data structure termed the OutputBuffer . In every EGL program, the OutputBuffer is accessible via the out built-in variable. The OutputBuffer provides operations for appending to and removing from the buffer, and for merging generated text with existing text. 
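For example, a template can use out.print and out.chop directly to emit a comma-separated list of class names without a trailing separator (a minimal sketch; the Class type is borrowed from the earlier object-oriented examples):

```egl
[% for (c in Class.all) {
  out.print(c.name + \", \");
}
// remove the trailing \", \" left by the final iteration
out.chop(2);
%]
```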
For many EGL programs, interacting directly with the OutputBuffer is unnecessary. The contents of static and dynamic output sections are sent directly to the OutputBuffer , and no operation of the OutputBuffer need be invoked directly. However, in cases when generated text must be sent to the OutputBuffer from dynamic sections, or when generated text must be merged with existing text, the operations of OutputBuffer are provided in the table below. The merge engine section discusses merging generated and existing text, and presents several examples of invoking the operations of OutputBuffer . Signature Description chop(numberOfChars : Integer) Removes the specified number of characters from the end of the buffer print(object : Any) Appends a string representation of the specified object to the buffer println(object : Any) Appends a string representation of the specified object and a new line to the buffer println() Appends a new line to the buffer setContentType(contentType : String) Updates the content type of this template. Subsequent calls to preserve or startPreserve that do not specify a style of comment will use the style of comment defined by the specified content type. preserve(id : String, enabled : Boolean, contents : String) Appends a protected region to the buffer with the given identifier, enabled state and contents. Uses the current content type to determine how to format the start and end markers. preserve(startComment : String, endComment : String, id : String, enabled : Boolean, contents : String) Appends a protected region to the buffer with the given identifier, enabled state and contents. Uses the first two parameters as start and end markers. startPreserve(id : String, enabled : Boolean) Begins a protected region by appending the start marker for a protected region to the buffer with the given identifier and enabled state. 
Uses the current content type to determine how to format the start and end markers startPreserve(startComment : String, endComment : String, id : String, enabled : Boolean) Begins a protected region by appending the start marker to the buffer with the given identifier and enabled state. Uses the first two parameters as start and end markers. stopPreserve() Ends the current protected region by appending the end marker to the buffer. This operation should be invoked only if a protected region is currently open (i.e. has been started by invoking startPreserve but not yet stopped by invoking stopPreserve ).","title":"The OutputBuffer"},{"location":"doc/egl/#co-ordination","text":"Warning The recommended way to coordinate the execution of EGL templates is using the EGX rule-based language . This section describes an imperative way to invoke EGL templates which pre-dates EGX and should only be used as a fall-back in case the semantics of EGX are not sufficient for the task at hand. In the large, M2T transformations are used to generate text to various destinations. For example, code generators often produce files on disk, and web applications often generate text as part of the response for a resource on the web server. Text might be generated to a network socket during interprocess communication, or as a query that runs on a database. Furthermore, (parts of) a single M2T transformation might be re-used in different contexts. An M2T transformation that generates files on disk today might be re-purposed to generate the response from a web server tomorrow. Given these concerns, EGL provides a co-ordination engine that provides mechanisms for modularising M2T transformations, and for controlling the destinations to which text is generated. The EGL co-ordination engine fulfils three requirements: Reusability : the co-ordination engine allows EGL programs to be decomposed into one or more templates, which can be shared between EGL programs. 
Variety of destination : the co-ordination engine provides an extensible set of template types that can generate text to a variety of destinations. The next section describes the default template type, which is tailored to generate text to files on disk, while a subsequent section discusses the way in which users can define their own template types for generating text to other types of destination. Separation of concerns : the co-ordination engine ensures that the logic for controlling the text that is generated (i.e. the content) and the logic for controlling the way in which text is emitted (i.e. the destination) are kept separate. There is also the EGX language , which was introduced after this documentation was initially written, and provides a fully-fledged rule-based execution engine for parameterising EGL templates.","title":"Co-ordination"},{"location":"doc/egl/#the-template-type","text":"Central to the co-ordination engine is the Template type, which EGL provides in addition to the default EOL types. Via the Template type, EGL fulfils the three requirements identified above. Firstly, a Template can invoke other Templates , and hence can be shared and re-used between EGL programs. Secondly, the Template type has been implemented in an extensible manner: users can define their own types of Template that generate text to any destination (e.g. a database or a network socket), as described in the custom coordination section . Finally, the Template type provides a set of operations that are used to control the destination of generated text. Users typically define a \"driver\" template that does not generate text, but rather controls the destination of text that is generated by other templates. For example, consider the EGL program in the listing below. This template generates no text (as it contains only a single dynamic section), but is used instead to control the destination of text generated by another template. 
Line 1 defines a variable, t , of type Template . Note that, unlike the EOL types, instances of Template are not created with the new keyword. Instead, the TemplateFactory built-in object is used to load templates from, for example, a file system path. On line 3, the generate operation of the Template type invokes the EGL template stored in the file \"ClassNames.egl\" and emits the generated text to \"Output.txt\". [% var t : Template = TemplateFactory.load(\"ClassNames.egl\"); t.generate(\"Output.txt\"); %] In addition to generate , the Template type defines further operations for controlling the context and invocation of EGL templates. The following table lists all of the operations defined on Template , and a further example of their use is given in the sequel. Signature Description populate(name : String, value : Any) Makes a variable with the specified name and value available during the execution of the template. process() : String Executes the template and returns the text that is generated. generate(destination : String) Executes the template and stores the text to the specified destination. The format of the destination parameter is dictated by the type of template. For example, the default template type (which can generate files on disk) expects a file system path as the destination parameter. Returns an object representing the generated file. append(destination : String) Executes the template: if the destination exists, it will add a newline and the generated text at the end of the file. If the file does not exist, it will write the generated text to it (with no newline). Returns an object representing the generated file. setFormatter(formatter : Formatter) Changes the formatter for this template to the specified formatter. Subsequent calls to generate or process will produce text that is formatted with the specified formatter. setFormatters(formatters : Sequence(Formatter)) Changes the formatters for this template to the specified sequence of formatters. 
Subsequent calls to generate or process will produce text that is formatted with each of the specified formatters in turn.","title":"The Template type"},{"location":"doc/egl/#the-templatefactory-object","text":"As discussed above, instances of Template are not created with the new keyword. Instead, EGL provides a built-in object, the TemplateFactory , for this purpose. Users can customise the type of the TemplateFactory object to gain more control over the way in which text is generated. By default, EGL provides a TemplateFactory that exposes operations for loading templates (by loading files from disk), preparing templates (by parsing a string containing EGL code), and for controlling the file system locations from which templates are loaded and to which text is generated. The table below lists the operations provided by the built-in TemplateFactory object. Signature Description load(path : String) : Template Returns an instance of Template that can be used to execute the EGL template stored at the specified path. prepare(code : String) : Template Returns an instance of Template that can be used to execute the EGL code contained in the specified string. setOutputRoot(path : String) Changes the default path that is used to resolve relative paths when generating files to disk. Subsequent calls to load and prepare will create templates that use the new path. setTemplateRoot(path : String) Changes the default path that is used to resolve relative paths when loading templates with the load operation. Subsequent calls to load will use the new path. setDefaultFormatter(formatter : Formatter) Changes the formatter for this template factory to the specified formatter. Templates that are constructed after this operation has been invoked will produce text that is, by default, formatted with the specified formatter. 
setDefaultFormatters(formatters : Sequence(Formatter)) Changes the formatters for this template factory to the specified sequence of formatters. Templates that are constructed after this operation has been invoked will produce text that is, by default, formatted with each of the specified formatters in turn.","title":"The TemplateFactory object"},{"location":"doc/egl/#an-example-of-co-ordination-with-egl","text":"The operations provided by the TemplateFactory object and Template type are demonstrated by the EGL program in the listing below. Lines 2-3 use operations on TemplateFactory to change the paths from which templates will be loaded (line 2) and to which generated files will be created (line 3). Line 5 demonstrates the use of the prepare operation for creating a template from EGL code. When the interface template is invoked, the EGL code passed to the prepare operation will be executed. Finally, line 9 (and line 12) illustrates the way in which the populate operation can be used to pass a value to a template before invoking it. Specifically, the interface and implementation templates can use a variable called root , which is populated by the driver template before invoking them. [% TemplateFactory.setTemplateRoot(\"/usr/franz/templates\"); TemplateFactory.setOutputRoot(\"/tmp/output\"); var interface : Template = TemplateFactory.prepare(\"public interface [%=root.name%] {}\"); var implementation : Template = TemplateFactory.load(\"Class2Impl.egl\"); for (c in Class.all) { interface.populate(\"root\", c); interface.generate(\"I\" + c.name + \".java\"); implementation.populate(\"root\", c); implementation.generate(c.name + \".java\"); } %]","title":"An Example of Co-ordination with EGL"},{"location":"doc/egl/#customising-the-co-ordination-engine","text":"EGL provides mechanisms for customising the co-ordination engine. Specifically, users can define and use their own TemplateFactory . 
In many cases, users need not customise the co-ordination engine, and can write transformations using the built-in Template type and TemplateFactory object. If, however, you need more control over the co-ordination process, the discussion in this section might be helpful. Specifically, a custom TemplateFactory is typically used to achieve one or more of the following goals: Provide additional mechanisms for constructing Templates . Example: facilitate the loading of templates from a database. Enrich / change the behaviour of the built-in Template type. Example: change the way in which generated text is sent to its destination. Observe or instrument the transformation process by, for instance, logging calls to the operations provided by the Template type or the TemplateFactory object. Example: audit or trace the transformation process. Customisation is achieved in two stages: implementing the custom TemplateFactory (and potentially a custom Template ) in Java, and using the custom TemplateFactory .","title":"Customising the Co-ordination Engine"},{"location":"doc/egl/#implementing-a-custom-templatefactory","text":"A custom TemplateFactory is a subclass of EglTemplateFactory . Typically, a custom TemplateFactory is implemented by overriding one of the methods of EglTemplateFactory . For example, the createTemplate method is overridden to specify that a custom type of Template should be created by the TemplateFactory . Likewise, the load and prepare methods can be overridden to change the location from which Template s are constructed. A custom Template is a subclass of EglTemplate or, most often, a subclass of EglPersistentTemplate . Again, customisation is typically achieved by overriding methods in the superclass, or by adding new methods. For example, to perform auditing activities whenever a template is used to generate text, the doGenerate method of EglPersistentTemplate is overridden. 
import java.io.File ; import java.net.URI ; import org.eclipse.epsilon.egl.EglFileGeneratingTemplateFactory ; import org.eclipse.epsilon.egl.EglTemplate ; import org.eclipse.epsilon.egl.EglPersistentTemplate ; import org.eclipse.epsilon.egl.exceptions.EglRuntimeException ; import org.eclipse.epsilon.egl.execute.context.IEglContext ; import org.eclipse.epsilon.egl.spec.EglTemplateSpecification ; public class CountingTemplateFactory extends EglFileGeneratingTemplateFactory { @Override protected EglTemplate createTemplate ( EglTemplateSpecification spec ) throws Exception { return new CountingTemplate ( spec , context , getOutputRootOrRoot (), outputRootPath ); } public static class CountingTemplate extends EglPersistentTemplate { public static int numberOfCallsToGenerate = 0 ; public CountingTemplate ( EglTemplateSpecification spec , IEglContext context , URI outputRoot , String outputRootPath ) throws Exception { super ( spec , context , outputRoot , outputRootPath ); } @Override protected void doGenerate ( File file , String targetName , boolean overwrite , boolean protectRegions ) throws EglRuntimeException { numberOfCallsToGenerate ++; } } }","title":"Implementing a custom TemplateFactory"},{"location":"doc/egl/#using-a-custom-templatefactory","text":"When invoking an EGL program, the user may select a custom TemplateFactory . For example, the EGL development tools provide an Eclipse launch configuration that provides a tab named \"Generated Text.\" On this tab, users can select a TemplateFactory (under the group called \"Type of Template Factory\"). Note that a TemplateFactory only appears on the launch configuration tab if it has been registered with EGL via an Eclipse extension. 
Similarly, the workflow language provided by Epsilon allows the specification of custom types of TemplateFactory via the templateFactoryType parameter.","title":"Using a custom TemplateFactory"},{"location":"doc/egl/#summary","text":"The co-ordination engine provided by EGL facilitates the construction of modular and re-usable M2T transformations and can be used to generate text to various types of destination. Furthermore, the logic for specifying the contents of generated text is kept separate from the logic for specifying the destination of generated text.","title":"Summary"},{"location":"doc/egl/#merge-engine","text":"EGL provides language constructs that allow M2T transformations to designate regions of generated text as protected . Whenever an EGL program attempts to generate text, any protected regions that are encountered in the specified destination are preserved. Within an EGL program, protected regions are specified with the preserve(String, String, String, Boolean, String) method on the out keyword. The first two parameters define the comment delimiters of the target language. The other parameters provide the name, enabled state and content of the protected region, as illustrated in the listing below. [%=out.preserve(\"/*\", \"*/\", \"anId\", true, \"System.out.println(foo);\") %] A protected region declaration may have many lines, and use many EGL variables in the contents definition. To enhance readability, EGL provides two additional methods on the out keyword: startPreserve(String, String, String, Boolean) and stopPreserve . The listing below uses these to generate a protected region. [%=out.startPreserve(\"/*\", \"*/\", \"anId\", true)%] System.out.println(foo); [%=out.stopPreserve()%] Because an EGL template may contain many protected regions, EGL also provides a separate method to set the target language generated by the current template, setContentType(String) . By default, EGL recognises Java, HTML, Perl and EGL as valid content types. 
An alternative configuration file can be used to specify further content types. Following a call to setContentType , the first two arguments to the preserve and startPreserve methods can be omitted, as shown in the listing below. [% out.setContentType(\"Java\"); %] [%=out.preserve(\"anId\", true, \"System.out.println(foo);\")%] Because some languages define more than one style of comment delimiter, EGL allows mixed use of these styles in the preserve and startPreserve methods. Once a content type has been specified, a protected region may also be declared entirely from a static section, using the syntax in the listing below. [% out.setContentType(\"Java\"); %] // protected region anId [on|off] begin System.out.println(foo); // protected region anId end When a template that defines one or more protected regions is processed by the EGL execution engine, the target output destinations are examined and the existing contents of any protected regions are preserved. If either the output generated from the template or the existing contents of the target output destination contains protected regions, a merging process is invoked. The default behaviour of EGL's merge engine is as follows (Protected Regions in Generated / Protected Regions in Existing -> Contents taken from): On / On -> Existing; On / Off -> Generated; On / Absent -> Generated; Off / On -> Existing; Off / Off -> Generated; Off / Absent -> Generated; Absent / On -> Neither (causes a warning); Absent / Off -> Neither (causes a warning).","title":"Merge Engine"},{"location":"doc/egl/#formatters","text":"Often the text generated by a model-to-text transformation is not formatted in a desirable manner. Text generated with a model-to-text transformation might contain extra whitespace or inconsistent indentation. This is because controlling the formatting of generated text in a model-to-text transformation language can be challenging. In a template-based model-to-text language, such as EGL, it can be difficult to know how best to format a transformation. 
On the one hand, the transformation must be readable and understandable, and on the other hand, the generated text must typically also be readable and understandable. Conscientious developers apply various conventions to produce readable code. EGL encourages template developers to prioritise the readability of templates over the readability of generated text when writing EGL templates. For formatting generated text, EGL provides an extensible set of formatters that can be invoked during a model-to-text transformation.","title":"Formatters"},{"location":"doc/egl/#using-a-formatter","text":"EGL provides several built-in formatters. Users can implement additional formatters. To use a formatter, invoke the setFormatter or setFormatters operation on an instance of the Template type. A formatter is a Java class that implements EGL's Formatter interface. From within an EGL program, formatters can be created using a Native (i.e. Java) type. The listing below demonstrates the use of a built-in formatter (XmlFormatter). [% var f = new Native(\"org.eclipse.epsilon.egl.formatter.language.XmlFormatter\"); var t = TemplateFactory.load(\"generate_some_xml.egl\"); t.setFormatter(f); t.generate(\"formatted.xml\"); %] To facilitate the re-use of a formatter with many templates, the TemplateFactory object provides the setDefaultFormatter and setDefaultFormatters operations. Templates that are loaded or prepared after a call to setDefaultFormatter or setDefaultFormatters will, by default, use the formatter(s) specified for the TemplateFactory . Note that setting the formatter on a template overwrites any formatter that may have been set on that template by the TemplateFactory . The default formatters for an EGL program can also be set when invoking the program. 
For example, the EGL development tools provide an Eclipse launch configuration that provides a tab named \"Generated Text.\" On this tab, users can configure one or more formatters which will be used as the default formatters for this EGL program. Note that custom formatters only appear on the launch configuration tab if they have been registered with EGL via an Eclipse extension. Similarly, the workflow language provided by Epsilon provides a formatter nested element that can be used to specify one or more default formatters.","title":"Using a Formatter"},{"location":"doc/egl/#implementing-a-custom-formatter","text":"Providing a user-defined formatter involves implementing the Formatter interface (in org.eclipse.epsilon.egl.formatter ). For example, the listing below demonstrates a simple formatter that transforms all generated text to uppercase. import org.eclipse.epsilon.egl.formatter.Formatter ; public class UppercaseFormatter implements Formatter { @Override public String format ( String text ) { return text . toUpperCase (); } } The set of built-in formatters provided by EGL includes some partial implementations of the Formatter interface that can be re-used to simplify the implementation of custom formatters. For instance, the LanguageFormatter class can correct the indentation of a program written in most languages, when given a start and end regular expression. Finally, an Eclipse extension point is provided for custom formatters. Providing an extension that conforms to the custom formatter extension point allows EGL to display the custom formatter in the launch configuration tabs of the EGL development tools.","title":"Implementing a Custom Formatter"},{"location":"doc/egl/#traceability","text":"EGL also provides a traceability API, as a debugging aid, to support auditing of the M2T transformation process, and to facilitate change propagation. 
This API facilitates exploration of the templates executed, files affected and protected regions processed during a transformation. The figure below shows sample output from the traceability API after execution of an EGL M2T transformation to generate Java code from an instance of an OO metamodel. The view shown is accessed via the ... menu in Eclipse. Traceability information can also be accessed programmatically, as demonstrated in the listing below. EglTemplateFactoryModuleAdapter module = new EglTemplateFactoryModuleAdapter ( new EglTemplateFactory ()); boolean parsed = module . parse ( new File ( \"myTemplate.egl\" )); if ( parsed && module . getParseProblems (). isEmpty ()) { module . execute (); Template base = module . getContext (). getBaseTemplate (); // traverse the template hierarchy // display data } else { // error handling }","title":"Traceability"},{"location":"doc/egx/","text":"The EGL Co-Ordination Language (EGX) \u00b6 EGX is a rule-based co-ordination language designed for automating the parametrised execution of model-to-text template transformations. Although built on top of the Epsilon Generation Language (EGL), EGX can in principle work with any template-based model-to-text transformation language. The rationale for this co-ordination language comes from the need to invoke text generation templates multiple times with various parameters, usually derived from input models. To better understand EGX, it is helpful to be familiar with template-based text generation. Epsilon Generation Language \u00b6 EGL is Epsilon's model-to-text transformation language. EGL in principle is similar in purpose to server-side scripting languages like PHP (and can indeed be used for such purposes, as demonstrated in this article ). To recap, a template is a text file which has both static and dynamic regions. 
As the name implies, a static region is where text appears as-is in the output, whereas a dynamic region uses code to generate the output, often relying on data which is only available at runtime (hence, \"dynamic\"). Dynamic regions are expressed using EOL. One can think of an EGL template as a regular text file with some EOL embedded in it, or as an EOL program with the added convenience of verbatim text generation. Indeed, it is possible to use EGL without any static regions, relying on the output buffer variable to write the output text. In EGL, the output variable is called \"out\" and the markers for the start and end of dynamic regions are \"[%\" and \"%]\" respectively. For convenience, \"[%=\" outputs the string value of the expression which follows. EGL has many advanced features, such as recording traceability information, post-process formatting (to ensure consistent style in the final output) and protected regions, which allow certain parts of the text to be preserved if modified by hand, rather than being overwritten on each invocation of the template. EGL can handle merges, and also supports outputting text to any output stream. As an example, consider a simple Library metamodel (shown below). Suppose each model may have multiple Libraries, and each Library has a name, multiple Books and Authors. Similarly, each Book has one or more Authors, and each Author has multiple Books, similar to the relation between Actors and Movies in the IMDb metamodel used in previous chapters. Now suppose we have a single monolithic model and want to transform this into multiple structured files, such as web pages (HTML) or XML documents. One possible decomposition of this is to generate a page for each Library in the model. 
classDiagram class Library{ name: EString id: ELong books: Book[*] } class Book { title: EString pages: EInt ISBN: EString authors: Author[*] } class Author{ name: EString books: Book[*] } Library -- Book: books * Book -- Author: books * / authors * <?xml version=\"1.0\" encoding=\"UTF-8\"?> <library id=[%=lib.id%] name=\"[%=name%]\"> [% for (book in books) {%] <book> <title>[%=book.title%]</title> <isbn>[%=book.isbn%]</isbn> <pages>[%=book.pages.asString()%]</pages> <authors> [% for (author in book.authors) {%] <author name=\"[%=author.name%]\"/> [%}%] </authors> </book> [%}%] </library> Notice how the template refers to \"books\" (which is a collection of Book elements) without deriving them directly from the underlying model (i.e. there are no uses of allInstances). This is because the variables were provided to the template before invocation. Template Orchestration \u00b6 In the previous example, we stated that we want to invoke the template for all instances of Library in the model. To do this, we need to loop through all Library instances in the model(s), load the template, populate it with the required variables derived from the current Library instance and execute the template. However, since we want each Library's contents to be written to a distinct XML file (perhaps identified by its name or id), we also need to set the output file for each template based on the current instance. In more complex cases, we may also want to have certain rules for whether a Library should be generated at all (e.g. if it does not have a threshold number of Books), and whether we should overwrite an existing file. For example, we may decide that for Libraries with a large number of books, we do not want to overwrite the file. Furthermore, we may want to have a different naming convention for certain libraries based on their name or ID, which may be decided based on an arbitrarily complex function. 
Also, we may not want to include all of the Books in the output file, but a subset, which requires additional processing logic. We may even have different templates for libraries based on the number of Books they hold \u2013 for example, with a large Library, we may want to inline all of the properties of each Book to save disk space, rather than having the title, pages, authors etc. enumerated as children. Or we may want to omit the authors. This can be achieved by modifying the template with conditionals, but this makes the template much less readable and harder to modify, so it can be easier to have a separate template instead. All of these factors are tedious to implement manually and can be difficult to maintain and modify by domain experts using handwritten imperative code. Therefore, a more declarative way of achieving this is needed. This is precisely the purpose of EGX. Features and Execution Algorithm \u00b6 Like all of Epsilon's rule-based (ERL) languages, an EGX module consists of any number of named rules, as well as optional pre and post blocks which can be used to perform arbitrarily complex tasks using imperative code before and after the execution of rules, respectively. The execution algorithm of EGX is quite simple, since the language itself is essentially a means to parameterise a for loop. EGX adds on top of ERL only a single top-level rule construct: the GenerationRule . The execution algorithm is thus as simple as executing all of these rules, in the order they are defined in the module. The remainder of this section therefore describes the components and execution semantics of GenerationRule . Note that since variables declared in an earlier scope (executable block) within a GenerationRule are visible to later blocks, the order in which the engine executes each component block is important. Thus, we summarise each component block in execution order, which should also be the order in which they are declared by the user in the program. 
Note also that all of the component blocks of a GenerationRule are optional \u2013 that is, one can use any combination of them, including all or none. transform : A parameter (name and type), optionally followed by the collection of elements to run the rule over. The parameter name is bound to the current element, and this rule is executed for all elements in the specified collection. If the user does not specify a domain from which the elements are drawn using the in: construct, the engine will retrieve all model elements matching the type (but not subtypes) of the parameter type. To include all types and subtypes of the specified parameter, the rule must be marked with the @greedy annotation, otherwise the entire rule must be repeated for each subtype. guard : True by default. If this returns false, the GenerationRule will skip execution of the remaining blocks for the current element (or altogether if the rule has no input elements). pre : Arbitrary block of code that can be used to set up variables or perform any other pre-processing. overwrite : Whether to overwrite the target file if it already exists. True by default. merge : Whether to merge new contents with existing contents. True by default. template : The path (usually relative) and name of the template to invoke. parameters : Key-value pairs mapping variable names to values, which will be passed to the template. That is, the template will be populated with variable names (the keys) and values based on the provided Map. target : The path of the file to which the output of the template should be written. post : Arbitrary code block for post-processing. In addition to having access to all variables declared in previous blocks, a new variable called generated is also available, which is usually a reference to the generated file so the user can call any methods available on java.io.File . 
If the EGL execution engine has not been configured to output to files, or the target is omitted, then this variable will be the output of the template as a String instead. The only other noteworthy aspect of EGX's execution algorithm is that it keeps a cache of templates which have been loaded, to avoid re-parsing and re-initialising them every time. Of course, the variables for the template are reset and rebound every time, as they may be different. The purpose of the cache is only to avoid the potentially expensive process of parsing EGL templates. Parallel Execution \u00b6 Owing to its rule-based declarative nature, EGX can execute rules independently, and even if you only have a single rule, it can be invoked on a per-element basis by separate threads. You can declare a rule to be executed in parallel using the @parallel annotation, or by using the automatic parallelisation execution engine. Example Program \u00b6 Returning to our example, we can orchestrate the generation of Libraries as shown below, which demonstrates most of the features of EGX. Here we see how it is possible to screen eligible Library instances for generation, populate the template with the necessary parameters, invoke a different version of the template and direct the output to the desired file, all based on arbitrary user-defined criteria expressed declaratively using EOL. We can also compute aggregate metadata thanks to the pre and post blocks available both globally and on a per-rule basis. In this example, we simply compute the size of each file and print them once all transformations have taken place. Furthermore, we demonstrate that not all rules need to transform a specific model element: EGX can be used for convenience to invoke EGL templates with parameters, as shown by the \"AuthorsAndBooks\" rule. Here we only want to generate a single file from the Authors and Books in the model, where the logic for doing this is in a single EGL template. 
Although it wouldn't make much sense to use EGX purely for invoking single templates without parameters, the reader can perhaps appreciate that in large and complex models, there may be many different templates \u2013 e.g. one for each type \u2013 so all of the co-ordination in invoking them can be centralised to a single declarative file. EGX can thus be used as a workflow language in directing model-to-text transformations and is suitable for various use cases of almost any complexity. operation Book isValid() : Boolean { return self.isbn.isDefined() and self.isbn.length() == 13; } pre { var outDirLib : String = \"../libraries/\"; var libFileSizes = new Map; } rule Lib2XML transform lib : Library { guard : lib.name.length() > 3 and lib.books.size() > 10 pre { var eligibleBooks = lib.books.select(b | b.isValid()); var isBigLibrary = eligibleBooks.size() > 9000; } merge : isBigLibrary overwrite : not isBigLibrary template { var libTemplate = \"rel/path/to/Lib2XML\"; if (isBigLibrary) { libTemplate += \"_minified\"; } return libTemplate+\".egl\"; } parameters : Map { \"name\" = lib.name, \"id\" = lib.id, \"books\" = lib.books } target { var outFile = outDirLib + lib.name; if (isBigLibrary) { outFile += \"_compact\"; } return outFile+\".xml\"; } post { libFileSizes.put(generated.getName(), generated.length()); } } rule AuthorsAndBooks { parameters : Map { \"authors\" = Author.allInstances(), \"books\" = Book.allInstances() } template : \"AuthorsAndBooks.egl\" target : \"AllAuthorsBooks.txt\" } post { libFileSizes.println(); (\"Total: \"+libFileSizes.values().sum()).println(); }","title":"The EGL Co-Ordination Language (EGX)"},{"location":"doc/egx/#the-egl-co-ordination-language-egx","text":"EGX is a rule-based co-ordination language designed for automating the parametrised execution of model-to-text template transformations. 
Although built on top of the Epsilon Generation Language (EGL), EGX can in principle work with any template-based model-to-text transformation language. The rationale for this co-ordination language comes from the need to invoke text generation templates multiple times with various parameters, usually derived from input models. To better understand EGX, it is helpful to be familiar with template-based text generation.","title":"The EGL Co-Ordination Language (EGX)"},{"location":"doc/egx/#epsilon-generation-language","text":"EGL is Epsilon's model-to-text transformation language. EGL in principle is similar in purpose to server-side scripting languages like PHP (and can indeed be used for such purposes, as demonstrated in this article ). To recap, a template is a text file which has both static and dynamic regions. As the name implies, a static region is where text appears as-is in the output, whereas a dynamic region uses code to generate the output, often relying on data which is only available at runtime (hence, \"dynamic\"). Dynamic regions are expressed using EOL. One can think of an EGL template as a regular text file with some EOL embedded in it, or as an EOL program with the added convenience of verbatim text generation. Indeed, it is possible to use EGL without any static regions, relying on the output buffer variable to write the output text. In EGL, the output variable is called \"out\" and the markers for the start and end of dynamic regions are \"[%\" and \"%]\" respectively. For convenience, \"[%=\" outputs the string value of the expression which follows. EGL has many advanced features, such as recording traceability information, post-process formatting (to ensure consistent style in the final output) and protected regions, which allow certain parts of the text to be preserved if modified by hand, rather than being overwritten on each invocation of the template. EGL can handle merges, and also supports outputting text to any output stream. 
As an example, consider a simple Library metamodel (shown below). Suppose each model may have multiple Libraries, and each Library has a name, multiple Books and Authors. Similarly, each Book has one or more Authors, and each Author has multiple Books, similar to the relation between Actors and Movies in the IMDb metamodel used in previous chapters. Now suppose we have a single monolithic model and want to transform this into multiple structured files, such as web pages (HTML) or XML documents. One possible decomposition of this is to generate a page for each Library in the model. classDiagram class Library{ name: EString id: ELong books: Book[*] } class Book { title: EString pages: EInt ISBN: EString authors: Author[*] } class Author{ name: EString books: Book[*] } Library -- Book: books * Book -- Author: books * / authors * <?xml version=\"1.0\" encoding=\"UTF-8\"?> <library id=[%=lib.id%] name=\"[%=name%]\"> [% for (book in books) {%] <book> <title>[%=book.title%]</title> <isbn>[%=book.isbn%]</isbn> <pages>[%=book.pages.asString()%]</pages> <authors> [% for (author in book.authors) {%] <author name=\"[%=author.name%]\"/> [%}%] </authors> </book> [%}%] </library> Notice how the template refers to \"books\" (which is a collection of Book elements) without deriving them directly from the underlying model (i.e. there are no uses of allInstances). This is because the variables were provided to the template before invocation.","title":"Epsilon Generation Language"},{"location":"doc/egx/#template-orchestration","text":"In the previous example, we stated that we want to invoke the template for all instances of Library in the model. To do this, we need to loop through all Library instances in the model(s), load the template, populate it with the required variables derived from the current Library instance and execute the template. 
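That manual loop can be pictured as follows; this is a hypothetical Python illustration only (StubTemplate, populate and generate are invented names, not Epsilon's actual Java or EOL API):

```python
# Hypothetical sketch of manually orchestrating a template per element.
# StubTemplate stands in for a loaded EGL template; it is not Epsilon's API.

class StubTemplate:
    def __init__(self, path):
        self.path = path
        self.variables = {}
        self.targets = []

    def populate(self, name, value):
        # bind a variable for the next generation run
        self.variables[name] = value

    def generate(self, target):
        # record the destination; a real template would write text here
        self.targets.append(target)
        self.variables = {}  # variables are rebound for each element

def generate_all(libraries, template):
    # the loop that EGX later expresses declaratively
    for lib in libraries:
        template.populate('name', lib['name'])
        template.populate('books', lib['books'])
        template.generate(lib['name'] + '.xml')
    return template.targets
```

Invoked over two libraries named central and east, the sketch yields the targets central.xml and east.xml, i.e. one output file per model element.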
However, since we want each Library's contents to be written to a distinct XML file (perhaps identified by its name or id), we also need to set the output file for each template based on the current instance. In more complex cases, we may also want to have certain rules for whether a Library should be generated at all (e.g. if it does not have a threshold number of Books), and whether we should overwrite an existing file. For example, we may decide that for Libraries with a large number of books, we do not want to overwrite the file. Furthermore, we may want to have a different naming convention for certain libraries based on their name or ID, which may be decided based on an arbitrarily complex function. Also, we may not want to include all of the Books in the output file, but a subset, which requires additional processing logic. We may even have different templates for libraries based on the number of Books they hold \u2013 for example, with a large Library, we may want to inline all of the properties of each Book to save disk space, rather than having the title, pages, authors etc. enumerated as children. Or we may want to omit the authors. This can be achieved by modifying the template with conditionals, but this makes the template much less readable and harder to modify, so it can be easier to have a separate template instead. All of these factors are tedious to implement manually and can be difficult to maintain and modify by domain experts using handwritten imperative code. Therefore, a more declarative way of achieving this is needed. This is precisely the purpose of EGX.","title":"Template Orchestration"},{"location":"doc/egx/#features-and-execution-algorithm","text":"Like all of Epsilon's rule-based (ERL) languages, an EGX module consists of any number of named rules, as well as optional pre and post blocks which can be used to perform arbitrarily complex tasks using imperative code before and after the execution of rules, respectively. 
The execution algorithm of EGX is quite simple, since the language itself is essentially a means to parameterise a for loop. EGX adds on top of ERL only a single top-level rule construct: the GenerationRule . The execution algorithm is thus as simple as executing all of these rules, in the order they are defined in the module. The remainder of this section therefore describes the components and execution semantics of GenerationRule . Note that since variables declared in an earlier scope (executable block) within a GenerationRule are visible to later blocks, the order in which the engine executes each component block is important. Thus, we summarise each component block in execution order, which should also be the order in which they are declared by the user in the program. Note also that all of the component blocks of a GenerationRule are optional \u2013 that is, one can use any combination of them, including all or none. transform : A parameter (name and type), optionally followed by the collection of elements to run the rule over. The parameter name is bound to the current element, and this rule is executed for all elements in the specified collection. If the user does not specify a domain from which the elements are drawn using the in: construct, the engine will retrieve all model elements matching the type (but not subtypes) of the parameter type. To include all types and subtypes of the specified parameter, the rule must be marked with the @greedy annotation, otherwise the entire rule must be repeated for each subtype. guard : True by default. If this returns false, the GenerationRule will skip execution of the remaining blocks for the current element (or altogether if the rule has no input elements). pre : Arbitrary block of code that can be used to set up variables or perform any other pre-processing. overwrite : Whether to overwrite the target file if it already exists. True by default. merge : Whether to merge new contents with existing contents. True by default. 
template : The path (usually relative) and name of the template to invoke. parameters : Key-value pairs mapping variable names to values, which will be passed to the template. That is, the template will be populated with variable names (the keys) and values based on the provided Map. target : The path of the file to which the output of the template should be written. post : Arbitrary code block for post-processing. In addition to having access to all variables declared in previous blocks, a new variable called generated is also available, which is usually a reference to the generated file so the user can call any methods available on java.io.File . If the EGL execution engine has not been configured to output to files, or the target is omitted, then this variable will be the output of the template as a String instead. The only other noteworthy aspect of EGX's execution algorithm is that it keeps a cache of templates which have been loaded, to avoid re-parsing and re-initialising them every time. Of course, the variables for the template are reset and rebound every time, as they may be different. The purpose of the cache is only to avoid the potentially expensive process of parsing EGL templates.","title":"Features and Execution Algorithm"},{"location":"doc/egx/#parallel-execution","text":"Owing to its rule-based declarative nature, EGX can execute rules independently, and even if you only have a single rule, it can be invoked on a per-element basis by separate threads. You can declare a rule to be executed in parallel using the @parallel annotation, or by using the automatic parallelisation execution engine.","title":"Parallel Execution"},{"location":"doc/egx/#example-program","text":"Returning to our example, we can orchestrate the generation of Libraries as shown below, which demonstrates most of the features of EGX. 
Here we see how it is possible to screen eligible Library instances for generation, populate the template with the necessary parameters, invoke a different version of the template and direct the output to the desired file, all based on arbitrary user-defined criteria expressed declaratively using EOL. We can also compute aggregate metadata thanks to the pre and post blocks available both globally and on a per-rule basis. In this example, we simply compute the size of each file and print them once all transformations have taken place. Furthermore, we demonstrate that not all rules need to transform a specific model element: EGX can be used for convenience to invoke EGL templates with parameters, as shown by the \"AuthorsAndBooks\" rule. Here we only want to generate a single file from the Authors and Books in the model, where the logic for doing this is in a single EGL template. Although it wouldn't make much sense to use EGX purely for invoking single templates without parameters, the reader can perhaps appreciate that in large and complex models, there may be many different templates \u2013 e.g. one for each type \u2013 so all of the co-ordination in invoking them can be centralised to a single declarative file. EGX can thus be used as a workflow language in directing model-to-text transformations and is suitable for various use cases of almost any complexity. 
operation Book isValid() : Boolean { return self.isbn.isDefined() and self.isbn.length() == 13; } pre { var outDirLib : String = \"../libraries/\"; var libFileSizes = new Map; } rule Lib2XML transform lib : Library { guard : lib.name.length() > 3 and lib.books.size() > 10 pre { var eligibleBooks = lib.books.select(b | b.isValid()); var isBigLibrary = eligibleBooks.size() > 9000; } merge : isBigLibrary overwrite : not isBigLibrary template { var libTemplate = \"rel/path/to/Lib2XML\"; if (isBigLibrary) { libTemplate += \"_minified\"; } return libTemplate+\".egl\"; } parameters : Map { \"name\" = lib.name, \"id\" = lib.id, \"books\" = lib.books } target { var outFile = outDirLib + lib.name; if (isBigLibrary) { outFile += \"_compact\"; } return outFile+\".xml\"; } post { libFileSizes.put(generated.getName(), generated.length()); } } rule AuthorsAndBooks { parameters : Map { \"authors\" = Author.allInstances(), \"books\" = Book.allInstances() } template : \"AuthorsAndBooks.egl\" target : \"AllAuthorsBooks.txt\" } post { libFileSizes.println(); (\"Total: \"+libFileSizes.values().sum()).println(); }","title":"Example Program"},{"location":"doc/emc/","text":"The Epsilon Model Connectivity Layer (EMC) \u00b6 The Epsilon Model Connectivity (EMC) layer provides abstraction facilities over concrete modelling technologies such as EMF , XML , Simulink etc. and enables Epsilon programs to interact with models conforming to these technologies in a uniform manner. A graphical overview of the core classes and methods of EMC is displayed below. Tip If you are interested in examples of EMC-based drivers for Epsilon, rather than in the organisation of EMC itself, please scroll to the bottom of this page . 
classDiagram class IModel { -name: String -aliases: String[*] +load() +load(properties : StringProperties) +store() +getAllOfKind(type: String): Object[*] +isKindOf(element: Object, type: String): boolean +getAllOfType(type: String): Object[*] +isTypeOf(element: Object, type: String): boolean +createInstance(type: String): Object +deleteElement(element: Object) } class ModelRepository { +getOwningModel(modelElement: Object) +getModelByName(name: String) +dispose() } class IPropertyGetter { +invoke(object: Object, property: String) } class IPropertySetter { +invoke(object: Object, property: String, value: Object) } ModelRepository -- IModel: models * ModelGroup -- IModel: models * IModel <|-- ModelGroup IModel -- IPropertySetter: propertySetter IModel -- IPropertyGetter: propertyGetter To abstract away from diverse model representations and APIs provided by different modelling technologies, EMC defines the IModel interface. IModel provides a number of methods that enable querying and modifying the model elements it contains at a higher level of abstraction. To enable languages and tools that build atop EMC to manage multiple models simultaneously, the ModelRepository class acts as a container that offers fa\u00e7ade services. The following sections discuss these two core concepts in detail. The IModel interface \u00b6 Each model specifies a name which must be unique in the context of the model repository in which it is contained. Also, it defines a number of aliases, that is, non-unique alternate names, via which it can be accessed. The interface also defines the following services. Loading and Persistence \u00b6 The load() and load(properties : Properties) methods enable extenders to specify in a uniform way how a model is loaded into memory from the physical location in which it resides. Similarly, the store() and store(location : String) methods are used to define how the model can be persisted from memory to a permanent storage location. 
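The operations above are easy to picture in Java terms. The sketch below is a simplified, hypothetical rendering of a handful of IModel operations from the diagram \u2013 not Epsilon's actual API \u2013 backed by a toy in-memory model that, in EMC's lightweight style, uses plain strings as type names and arbitrary objects as model elements:

```java
import java.util.*;

// Illustrative sketch only: a cut-down IModel in the spirit of the
// class diagram above, not the real Epsilon interface.
interface SimpleModel {
    void load();                                  // read the model into memory
    void store();                                 // persist it again
    Collection<Object> getAllOfType(String type); // type-of lookup
    boolean isTypeOf(Object element, String type);
    Object createInstance(String type);
    void deleteElement(Object element);
}

public class EmcSketch implements SimpleModel {
    // Type names are plain strings; elements are plain objects.
    private final Map<Object, String> typeOf = new HashMap<>();

    public void load() { /* a real driver would parse a file here */ }
    public void store() { /* ...and serialise the elements here */ }

    public Collection<Object> getAllOfType(String type) {
        List<Object> result = new ArrayList<>();
        for (Map.Entry<Object, String> e : typeOf.entrySet())
            if (e.getValue().equals(type)) result.add(e.getKey());
        return result;
    }

    public boolean isTypeOf(Object element, String type) {
        return type.equals(typeOf.get(element));
    }

    public Object createInstance(String type) {
        Object element = new Object();
        typeOf.put(element, type);
        return element;
    }

    public void deleteElement(Object element) {
        typeOf.remove(element);
    }

    public static void main(String[] args) {
        EmcSketch model = new EmcSketch();
        Object book = model.createInstance("Book");
        System.out.println(model.isTypeOf(book, "Book"));      // true
        System.out.println(model.getAllOfType("Book").size()); // 1
        model.deleteElement(book);
        System.out.println(model.getAllOfType("Book").size()); // 0
    }
}
```

A real driver replaces the HashMap with a technology-specific backend (an EMF resource, a Simulink workspace, an XML document), which is exactly the indirection the sections below describe.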
Type-related Services \u00b6 The majority of metamodelling architectures support inheritance between meta-classes and therefore two types of type-conformance relationships generally appear between model elements and types. The type-of relationship appears when a model element is an instance of the type and the kind-of relationship appears when the model element is an instance of the type or any of its sub-types. Under this definition, the getAllOfType(type: String) and the getAllOfKind(type: String) operations return all the elements in the model that have a type-of and a kind-of relationship with the type in question respectively. Similarly, the isTypeOf(element: Object, type : String) and isKindOf(element: Object, type : String) return whether the element in question has a type-of or a kind-of relationship with the type respectively. The getTypeOf(element: Object) method returns the fully-qualified name of the type an element conforms to. The hasType(type: String) method returns true if the model supports a type with the specified name. To support technologies that enable users to define abstract (non-instantiable) types, the isInstantiable(type: String) method returns whether instances of the type can be created. Ownership \u00b6 The allContents() method returns all the elements that the model contains and the owns(element: Object) method returns true if the element in question belongs to the model. Creation, Deletion and Modifications \u00b6 Model elements are created and deleted using the createInstance(type: String) and deleteElement(element: Object) methods respectively. To retrieve and set the values of properties of its model elements, IModel uses its associated propertyGetter ( IPropertyGetter ) and propertySetter ( IPropertySetter ) respectively. 
Technology-specific implementations of those two interfaces are responsible for accessing and modifying the value of a property of a model element through their invoke(element: Object, property : String) and invoke(element: Object, property : String, value: Object) methods respectively. The ModelRepository class \u00b6 A model repository acts as a container for a set of models that need to be managed in the context of a task or a set of tasks. Apart from a reference to the models it contains, ModelRepository also provides the following fa\u00e7ade functionality. The getOwningModel(element: Object) method returns the model that owns a particular element. The transactionSupport property specifies an instance of the ModelRepositoryTransactionSupport class which is responsible for aggregate management of transactions by delegating calls to its startTransaction() , commitTransaction() and abortTransaction() methods, to the respective methods of instances of IModelTransactionSupport associated with models contained in the repository. The ModelGroup class \u00b6 A ModelGroup is a group of models that have a common alias. ModelGroups are calculated dynamically by the model repository based on common model aliases. That is, if two or more models share a common alias, the repository forms a new model group. Since ModelGroup implements the IModel interface, clients can use all the methods of IModel to perform aggregate operations on multiple models, such as collecting the contents of more than one model. An exception to that is the createInstance(type: String) method which cannot be defined for a group of models as it cannot be determined to which model of the group the newly created element should belong. Assumptions about the underlying modelling technologies \u00b6 The discussion provided above has demonstrated that EMC makes only minimal assumptions about the structure and the organization of the underlying modelling technologies. 
Thus, it intentionally refrains from defining classes for concepts such as model element , type and metamodel . Instead, it employs a lightweight approach that uses primitive strings for type names and objects of the target implementation platforms as model elements. There are two reasons for this decision. The primary reason is that by minimizing the assumptions about the underlying technologies EMC becomes more resistant to future changes of the implementations of the current technologies and can also embrace new technologies without changes. Another reason is that if a heavy-weight approach were used, extending the platform with support for a new modelling technology would involve providing wrapping objects for the native objects which represent model elements and types in the specific modelling technology. Experiments in the early phases of the design of EMC demonstrated that such a heavy-weight approach significantly increases the amount of memory required to represent the models in memory, degrades performance and provides few benefits in return. EMC Drivers \u00b6 Below are known drivers that implement the EMC interfaces discussed above and allow Epsilon programs to access different types of models and structured data. Eclipse Modeling Framework \u00b6 The Eclipse Modelling Framework (EMF) is one of the most robust and widely used open-source modelling frameworks, and the cornerstone of an extensive ecosystem of technologies for graphical/textual model editing, model comparison and merging etc. Being an Eclipse project, Epsilon naturally provides support for all flavours of EMF models (e.g. textual, graphical, XSD-based XML), and most of the screencasts , articles and examples in Epsilon's Git repository use EMF models. Matlab Simulink \u00b6 Epsilon also provides mature support for querying and modifying Matlab Simulink models as shown in these articles . 
XML/CSV \u00b6 For quick and dirty metamodel-less modelling, Epsilon also supports plain XML documents and CSV files . Eclipse Hawk \u00b6 Hawk is an Eclipse project that provides tools for monitoring, indexing and querying repositories (i.e. local folders, Eclipse workspaces, Git/SVN repositories) containing models. Hawk provides an EMC driver through which model indices can be queried with Epsilon languages. Other Drivers \u00b6 There are also less mature/well-documented drivers for Epsilon for tools and formats such as: Eclipse C/C++ Development tools PTC Integrity Modeller MetaEdit+ Eclipse Java Development Tools Relational Databases (JDBC) ArgoUML Connected Data Objects (CDO) NeoEMF These drivers have not had much external use historically, but if you're interested in them, please give us a shout .","title":"Model connectivity"},{"location":"doc/emc/#the-epsilon-model-connectivity-layer-emc","text":"The Epsilon Model Connectivity (EMC) layer provides abstraction facilities over concrete modelling technologies such as EMF , XML , Simulink etc. and enables Epsilon programs to interact with models conforming to these technologies in a uniform manner. A graphical overview of the core classes and methods of EMC is displayed below. Tip If you are interested in examples of EMC-based drivers for Epsilon, rather than on the organisation of EMC itself, please scroll to the bottom of this page . 
classDiagram class IModel { -name: String -aliases: String[*] +load() +load(properties : StringProperties) +store() +getAllOfKind(type: String): Object[*] +isKindOf(element: Object, type: String): boolean +getAllOfType(type: String): Object[*] +isTypeOf(element: Object, type: String): boolean +createInstance(type: String): Object +deleteElement(element: Object) } class ModelRepository { +getOwningModel(modelElement: Object) +getModelByName(name: String) +dispose() } class IPropertyGetter { +invoke(object: Object, property: String) } class IPropertySetter { +invoke(object: Object, property: String, value: Object) } ModelRepository -- IModel: models * ModelGroup -- IModel: models * IModel <|-- ModelGroup IModel -- IPropertySetter: propertySetter IModel -- IPropertyGetter: propertyGetter To abstract away from diverse model representations and APIs provided by different modelling technologies, EMC defines the IModel interface. IModel provides a number of methods that enable querying and modifying the model elements it contains at a higher level of abstraction. To enable languages and tools that build atop EMC to manage multiple models simultaneously, the ModelRepository class acts as a container that offers fa\u00e7ade services. The following sections discuss these two core concepts in detail.","title":"The Epsilon Model Connectivity Layer (EMC)"},{"location":"doc/emc/#the-imodel-interface","text":"Each model specifies a name which must be unique in the context of the model repository in which it is contained. Also, it defines a number of aliases; that is non-unique alternate names; via which it can be accessed. The interface also defines the following services.","title":"The IModel interface"},{"location":"doc/emc/#loading-and-persistence","text":"The load() and load(properties : Properties) methods enable extenders to specify in a uniform way how a model is loaded into memory from the physical location in which it resides. 
Similarly, the store() and store(location : String) methods are used to define how the model can be persisted from memory to a permanent storage location.","title":"Loading and Persistence"},{"location":"doc/emc/#type-related-services","text":"The majority of metamodelling architectures support inheritance between meta-classes and therefore two types of type-conformance relationships generally appear between model elements and types. The type-of relationship appears when a model element is an instance of the type and the kind-of relationship appears when the model element is an instance of the type or any of its sub-types. Under this definition, the getAllOfType(type: String) and the getAllOfKind(type: String) operations return all the elements in the model that have a type-of and a kind-of relationship with the type in question respectively. Similarly, the isTypeOf(element: Object, type : String) and isKindOf(element: Object, type : String) return whether the element in question has a type-of or a kind-of relationship with the type respectively. The getTypeOf(element: Object) method returns the fully-qualified name of the type an element conforms to. The hasType(type: String) method returns true if the model supports a type with the specified name. To support technologies that enable users to define abstract (non-instantiable) types, the isInstantiable(type: String) method returns if instances of the type can be created.","title":"Type-related Services"},{"location":"doc/emc/#ownership","text":"The allContents() method returns all the elements that the model contains and the owns(element: Object) method returns true if the element under question belongs to the model.","title":"Ownership"},{"location":"doc/emc/#creation-deletion-and-modifications","text":"Model elements are created and deleted using the createInstance(type: String) and deleteElement(element: Object) methods respectively. 
To retrieve and set the values of properties of its model elements, IModel uses its associated propertyGetter ( IPropertyGetter ) and propertySetter ( IPropertySetter ) respectively. Technology-specific implementations of those two interfaces are responsible for accessing and modifying the value of a property of a model element through their invoke(element: Object, property : String) and invoke(element: Object, property : String, value: Object) methods respectively.","title":"Creation, Deletion and Modifications"},{"location":"doc/emc/#the-modelrepository-class","text":"A model repository acts as a container for a set of models that need to be managed in the context of a task or a set of tasks. Apart from a reference to the models it contains, ModelRepository also provides the following fa\u00e7ade functionality. The getOwningModel(element: Object) method returns the model that owns a particular element. The transactionSupport property specifies an instance of the ModelRepositoryTransactionSupport class which is responsible for aggregate management of transactions by delegating calls to its startTransaction() , commitTransaction() and abortTransaction() methods, to the respective methods of instances of IModelTransactionSupport associated with models contained in the repository.","title":"The ModelRepository class"},{"location":"doc/emc/#the-modelgroup-class","text":"A ModelGroup is a group of models that have a common alias. ModelGroups are calculated dynamically by the model repository based on common model aliases. That is, if two or more models share a common alias, the repository forms a new model group. Since ModelGroup implements the IModel interface, clients can use all the methods of IModel to perform aggregate operations on multiple models, such as collecting the contents of more than one model. 
An exception to that is the createInstance(type: String) method which cannot be defined for a group of models as it cannot be determined in which model of the group the newly created element should belong.","title":"The ModelGroup class"},{"location":"doc/emc/#assumptions-about-the-underlying-modelling-technologies","text":"The discussion provided above has demonstrated that EMC makes only minimal assumptions about the structure and the organization of the underlying modelling technologies. Thus, it intentionally refrains from defining classes for concepts such as model element , type and metamodel . By contrast, it employs a lightweight approach that uses primitive strings for type names and objects of the target implementation platforms as model elements. There are two reasons for this decision. The primary reason is that by minimizing the assumptions about the underlying technologies EMC becomes more resistant to future changes of the implementations of the current technologies and can also embrace new technologies without changes. Another reason is that if a heavy-weight approach was used, extending the platform with support for a new modelling technology would involve providing wrapping objects for the native objects which represent model elements and types in the specific modelling technology. 
Experiments in the early phases of the design of EMC demonstrated that such a heavy-weight approach significantly increases the amount of memory required to represent the models in memory, degrades performance and provides little benefits in reward.","title":"Assumptions about the underlying modelling technologies"},{"location":"doc/emc/#emc-drivers","text":"Below are known drivers that implement the EMC interfaces discussed above and allow Epsilon programs to access different types of models and structured data.","title":"EMC Drivers"},{"location":"doc/emc/#eclipse-modeling-framework","text":"The Eclipse Modelling Framework (EMF) is one of the most robust and widely used open-source modelling frameworks, and the cornerstone of an extensive ecosystem of technologies for graphical/textual model editing, model comparison and merging etc. Being an Eclipse project, Epsilon naturally provides support for all flavours of EMF models (e.g. textual, graphical, XSD-based XML), and most of the screencasts , articles and examples in Epsilon's Git repository use EMF models.","title":"Eclipse Modeling Framework"},{"location":"doc/emc/#matlab-simulink","text":"Epsilon also provides mature support for querying and modifying Matlab Simulink models as shown in these articles .","title":"Matlab Simulink"},{"location":"doc/emc/#xmlcsv","text":"For quick and dirty metamodel-less modelling, Epsilon also supports plain XML documents and CSV files .","title":"XML/CSV"},{"location":"doc/emc/#eclipse-hawk","text":"Hawk is an Eclipse project that provides tools for monitoring, indexing and querying repositories (i.e. local folders, Eclipse workspaces, Git/SVN repositories) containing models. 
Hawk provides an EMC driver through which model indices can be queried with Epsilon languages.","title":"Eclipse Hawk"},{"location":"doc/emc/#other-drivers","text":"There are also less mature/well-documented drivers for Epsilon for tools and formats such as: Eclipse C/C++ Development tools PTC Integrity Modeller MetaEdit+ Eclipse Java Development Tools Relational Databases (JDBC) ArgoUML Connected Data Objects (CDO) NeoEMF These drivers have not had much external use historically, but if you're interested in them, please give us a shout .","title":"Other Drivers"},{"location":"doc/emg/","text":"The Epsilon Model Generation Language (EMG) \u00b6 At some point, programs written in any of the Epsilon model management languages might need to be tested in order to find defects (bugs) and assert their correctness, or benchmarked in order to assess their performance. Both testing and benchmarking activities require appropriate test data, i.e. models that conform to specific metamodels and their constraints, satisfy additional requirements or characteristics (e.g. certain size), and/or contain data and provide a structure that exercises particular aspects of the program under test. Manual assembly of test models is an error-prone, time-consuming and labour-intensive activity. Activities of this type are perfect candidates for automation. Given that it is also a model management activity, it follows that the automation can be provided by a model generation engine that can execute model generation scripts. The scripts should be written in a model generation language that allows the user to generate models that conform to specific metamodels and their arbitrarily complex constraints (e.g. constraints formulated in compound first-order OCL operations), satisfy particular characteristics, and contain specific data and exhibit particular structures. The model generation engine should exhibit characteristics such as randomness, repeatability, scalability and easy parametrization. 
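Of these characteristics, repeatability usually comes down to seeding the pseudorandom generator: the same seed must yield the same generated model on every run. A minimal illustration of the idea, using plain java.util.Random rather than EMG's actual engine:

```java
import java.util.Arrays;
import java.util.Random;

public class SeededGeneration {
    // Same seed -> same "random" sizes: this is what makes a generated
    // test model reproducible across runs and across machines.
    static int[] generate(long seed, int n) {
        Random random = new Random(seed);
        int[] sizes = new int[n];
        for (int i = 0; i < n; i++) {
            sizes[i] = random.nextInt(100); // e.g. number of elements per type
        }
        return sizes;
    }

    public static void main(String[] args) {
        int[] first = generate(42L, 5);
        int[] second = generate(42L, 5);
        System.out.println(Arrays.equals(first, second)); // true
    }
}
```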
The Epsilon Model Generation Language addresses the automated generation of complex models. Approaches to Model Generation \u00b6 The model generation approaches found in the literature provide fully-automated behaviour. In a fully-automated approach, the tool loads the metamodel (and in some cases its constraints) and generates models that conform to the metamodel (and satisfy the constraints, if constraints are supported). However, the existing solutions can generate invalid models and in the case where constraints are supported, only simple constraints are supported. The Epsilon Model Generation Language follows a semi-automated generation approach. There are three main tasks in model generation: Create instances of types in the metamodel(s). Assign values to the instance's attributes (properties typed by primitive types: String, Integer, etc.). Create links between instances to assign values to references (properties typed by complex types: other types in the metamodel). In the semi-automated approach, all of these tasks can be configured to execute statically or dynamically (with randomness). Statically, the user must specify every single aspect of the generation. Dynamically, for example, the number of instances to create of a given type can be random, or the value of a given attribute can be set to random values, or the links between elements can be created between random pairs of elements. The combination of random and static definition of the generation tasks allows the user to generate models that can satisfy complex constraints, guarantee additional characteristics and exercise particular aspects of the program under test. This chapter discusses the concrete syntax of EMG as well as its execution semantics. To aid understanding, the discussion of the syntax and the semantics of the language revolves around an exemplar generation which is developed incrementally throughout the chapter. Syntax \u00b6 The EMG language does not provide additional syntax. 
Instead, it provides a set of predefined annotations that can be added to EOL operations and EPL patterns in order to perform the model generation. The predefined EOL operation annotations are: Name Description instances Defines the number of instances to create. This annotation accepts one parameter. The parameter can be an expression that resolves to an Integer (e.g. literal, variable name, etc.) or a sequence in the form Sequence {min, max} . An integer value statically defines how many instances are to be created. A sequence defines a range that is used by the engine to generate a random number n of instances, with min <= n <= max . list Defines an identifier (listID) for a placeholder list for the elements created. This annotation accepts one parameter. The parameter is the identifier (String) that can later be used in operations that accept it as an argument in order to access the elements created by the operation. parameters If the instantiated type accepts/needs arguments for instantiation, the parameters annotation can be used to provide them. This annotation accepts one parameter. The parameter must be a Sequence that contains the desired arguments in the order expected by the constructor. All three annotations are executable and hence must be prefixed with a $ symbol when used. Further, these annotations are only evaluated on create operations. The EPL pattern annotations are: Name Description number This limits the number of times the pattern is matched, to constrain the number of links created between elements. This annotation accepts one parameter. The parameter can be an expression that resolves to an Integer (e.g. literal, variable name, etc.) or a sequence in the form Sequence {min, max} . An integer value statically defines how many times the pattern is matched. A sequence defines a range that is used by the engine to generate a random number n of matches, with min <= n <= max . 
probability This defines the probability that the body of the pattern will be executed for a matching set of elements. The effect is that not all matching elements are linked. Effectively this also limits the number of times links are created. noRepeat This forbids previously matched elements from being re-linked. The first two annotations are executable and hence must be prefixed with a $ symbol when used; the last one is a simple annotation and must be prefixed with @ . Additionally, the EMG engine provides a set of predefined operations that support generating random data, which can be used to set the attributes and references of the generated model elements, to select random elements from collections, etc. EMG predefined operations \u00b6 Signature Description nextAddTo(n : Integer, m : Integer): Sequence(Integer) Returns a sequence of n integers whose sum is equal to m. nextBoolean() Returns the next pseudorandom, uniformly distributed boolean value. nextCamelCaseWords(charSet : String, length : Integer, minWordLength : Integer) : String Generates a string of the given length formatted as CamelCase, with subwords of a minimum length of the minWordLength argument, using characters from the given charSet. nextCapitalisedWord(charSet : String, length : Integer) : String Generates a capitalised string of the given length using characters from the given charSet. nextFromCollection(c : Sequence) : Any Returns the next object from the collection, selected pseudorandomly using the uniform distribution. If the collection is empty, returns null. nextFromList(listID : String) : Any Returns the next object from the list, selected pseudorandomly using the uniform distribution. If the list is empty, returns null. The listID can either be a name defined by the \\@list annotation or a parameter name from the run configuration. In the latter case, the parameter value can be either a comma separated string or a file path. 
If it is a comma separated string, then a list is created by splitting the string; if the value is a path, then the file will be read and each line will be treated as a list element. nextFromListAsSample(listID : String) : Any Same as nextFromList, but in this case the list is treated as a sample without replacement, i.e. each call will return a unique member of the list. nextHttpURI(addPort : Boolean, addPath : Boolean, addQuery : Boolean, addFragment : Boolean) : String Generates a random URI that conforms to http:[//host[:port]][/]path [?query][#fragment]. The path, query and fragment parts are optional and will be added if the respective argument is True. nextInt() : Integer Returns the next pseudorandom, uniformly distributed integer. All 2^32 possible integer values should be produced with (approximately) equal probability. nextInt(upper : Integer) : Integer Returns a pseudorandom, uniformly distributed integer value between 0 (inclusive) and upper (exclusive). The argument must be positive. nextInt(lower: Integer, upper : Integer) : Integer Returns a pseudorandom, uniformly distributed integer value between lower and upper (endpoints included). The arguments must be positive and upper >= lower . nextReal() : Real Returns the next pseudorandom, uniformly distributed real value between 0.0 and 1.0 . nextReal(upper : Real) : Real Returns the next pseudorandom, uniformly distributed real value between 0.0 and upper (inclusive). nextReal(lower: Real, upper : Real) : Real Returns a pseudorandom, uniformly distributed real value between lower and upper (endpoints included). nextSample(c : Sequence, k : Integer) : Sequence(Any) Returns a Sequence of k objects selected randomly from the Sequence c using a uniform distribution. Sampling from c is without replacement; but if c contains identical objects, the sample may include repeats. If all elements of c are distinct, the resulting object collection represents a Simple Random Sample of size k from the elements of c . 
nextSample(listID : String, k : Integer) : Sequence(Any) Same as nextSample but the sequence is referenced by listID . The listID has the same meaning as for operation nextFromList . nextString() : String Returns the next string made up from characters of the LETTER character set, pseudorandomly selected with a uniform distribution. The length of the string is between 4 and 10 characters. nextString(length : Integer) : String Returns the next String made up from characters of the LETTER character set, pseudorandomly selected with a uniform distribution. The length of the String is equal to length . nextString(charSet : String, length : Integer) : String Returns the next String of the given length using the specified character set, pseudorandomly selected with a uniform distribution. nextURI() : String Generates a random URI that conforms to: scheme:[//[user:password]host[:port]][/]path [?query][#fragment]. The port, path, query and fragment are added randomly. The scheme is randomly selected from: http, ssh and ftp. For ssh and ftp, a user and password are randomly generated. The host is generated from a random string and uses a top-level domain. The number of paths and queries is random, between 1 and 4. nextURI(addPort : Boolean, addPath : Boolean, addQuery : Boolean, addFragment : Boolean) : String Same as nextURI, but the given arguments control what additional port, path, query and fragment information is added. nextUUID() : String Returns a type 4 (pseudo randomly generated) UUID. The UUID is generated using a cryptographically strong pseudo random number generator. nextValue() : Real Returns the next pseudorandom value, picked from the configured distribution (by default the uniform distribution is used). nextValue(d : String, p : Sequence) : Real Returns the next pseudorandom value, drawn from the provided distribution d . The parameters p are used to configure the distribution (if required). The supported distributions are: Binomial, Exponential and Uniform. 
For Binomial parameters are: numberOfTrials and probabilityOfSuccess. For Exponential the mean. For Uniform the lower and upper values (lower inclusive). setNextValueDistribution(d : String, p : Sequence) Define the distribution to use for calls to nextValue() . Parameters are the same as for nextValue(d, p). Character Sets for String operations \u00b6 For the operations that accept a character set, the supported sets are defined as follows: Name Characters ID abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ 1234567890 NUMERIC 1234567890 LETTER abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ LETTER_UPPER ABCDEFGHIJKLMNOPQRSTUVWXYZ LETTER_LOWER abcdefghijklmnopqrstuvwxyz UPPER_NUM ABCDEFGHIJKLMNOPQRSTUVWXYZ 1234567890 LOWER_NUM abcdefghijklmnopqrstuvwxyz 1234567890 ID_SYMBOL abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ 1234567890 ~{}!@#\\$%\\^&( ) _+-=[] \\|;': \\\" \\< > ? , . /\\ HEX_LOWER abcdef1234567890 HEX_UPPER ABCDEF1234567890 Creating Model Elements \u00b6 The EMG engine will search for EOL operations that follow a particular signature in order to determine what elements to create in the generated model. The signature is: create <OutputType> () { ... } . That is, the operation must be named create , the operation's context type defines the type of the created instance and no parameters should be passed. By default the create operation only creates one instance. Hence, the provided annotations can be used to tailor the behaviour of the operation. Consider the case of the PetriNet metamodel in the figure below. 
classDiagram class Element { +name: String } class Place { +outgoing: PlaceToTransArc[*] +incoming: TransToPlaceArc[*] } class PetriNet { +places: Place[*] +transitions: Transition[*] +arcs: Arc[*] } class Transition { +incoming: PlaceToTransArc[*] +outgoing: TransToPlaceArc[*] } class TransToPlaceArc { +source: Transition +target: Place } class PlaceToTransArc { +target: Transition +source: Place } Element <|-- PetriNet Element <|-- Place Transition --|> Element PetriNet *-- Arc PetriNet *-- Place PetriNet *-- Transition Arc <|-- TransToPlaceArc Arc <|-- PlaceToTransArc The code excerpt displayed below creates a PetriNet and then adds some places and transitions to it. Note that the instances annotation is executable and hence you can use absolute values, variables or expressions. The list annotation in the PetriNet creation will result in all PetriNet instances being stored in a sequence called net . The list name is then used in the Place and Transition create operations to add the places and transitions to a random ( nextFromList ) PetriNet. In this example there is only one, but we could easily create more PetriNet instances and hence have them contain a random number of Places and Transitions. The names of the elements are generated using the random string generation facilities. pre { var num_p = 10; } $instances 1 @list net operation PetriNet create() { self.name = nextCamelCaseWords(\"LETTER_LOWER\", 15, 10); } $instances num_p operation Place create() { self.name = \"P_\" + nextString(\"LETTER_LOWER\", 15); nextFromList(\"net\").places.add(self); } $instances num_p / 2 operation Transition create() { self.name = \"T_\" + nextString(\"LETTER_LOWER\", 15); nextFromList(\"net\").transitions.add(self); } Creating Model Links \u00b6 In the previous section, the places and transitions references of the PetriNet were defined during the creation of the Place and Transition elements. For more complex reference patterns, EMG leverages the use of EPL patterns. 
For example, Arcs can have complex constraints in order to determine the source and target transition/place, and there may even be separate rules for each type of Arc. The EPL pattern in the listing below creates two arcs in order to connect a source and a target Place via a Transition. The pattern matches all transitions in a given PetriNet. The pattern body selects a random Place for the source and a random Place for the target (the while loops are used to pick places that have the lowest incoming/outgoing arcs possible). The weight of the arc is generated randomly from 0 to 9 ( nextInt(10) , upper bound exclusive). The pattern has been annotated with the \@probability annotation which will effectively only use 70% of the transitions to create arcs (i.e. of all the possible PetriNet-Transition matches, the code of the pattern will only be executed with a probability of 0.70). @probability 0.7 pattern Transition net:PetriNet, tra:Transition from: net.transitions { onmatch { var size = 0; var freeSources = Place.all().select(s | s.incoming.size() == size); while (freeSources.isEmpty()) { size += 1; freeSources = Place.all().select(s | s.incoming.size() == size); } size = 0; var freeTarget = Place.all().select(s | s.outgoing.size() == size); while (freeTarget.isEmpty()) { size += 1; freeTarget = Place.all().select(s | s.outgoing.size() == size); } var source = nextFromCollection(freeSources); var target = nextFromCollection(freeTarget); var a1:Arc = new PlaceToTransArc(); a1.weight = nextInt(10); a1.source = source; net.places.add(source); a1.target = tra; net.arcs.add(a1); var a2:Arc = new TransToPlaceArc(); a2.weight = nextInt(10); a2.source = tra; a2.target = target; net.places.add(target); net.arcs.add(a2); } } Meaningful Strings \u00b6 In some scenarios having completely random Strings for some of the element fields might not be desirable. 
In this case, EMG has an embedded mechanism to facilitate the use of meaningful attribute values (not only for Strings), and we show a second approach based on additional models. Values as a parameter \u00b6 The nextFromList() operation will first look for a list with that name; if it can't find one, it will look for a parameter (from the run configuration) with that name. The value of the parameter can be either an absolute path to a file or a comma separated list of values. If it is a comma separated list of values, then the individual values will be loaded as a Collection. For example, if we added the parameter names: John, Rose, Juan, Xiang, Joe to the run configuration, the listing below shows how to use that information to define the instance attributes. $instances num_p operation Place create() { self.name = nextFromList(\"names\"); nextFromList(\"net\").places.add(self); } If it is a file path, then each line of the file will be loaded as an item of the Collection. Note that paths are distinguished from comma separated values under the assumption that paths don't contain commas. Values as a model \u00b6 A more powerful approach would be to use an existing model to serve as the source for attribute values. Given that there are several websites 1 to generate random data in the form of CSV files, we recommend the use of a CSV model to serve as an attribute value source. A CSV file with name , lastName , and email can be easily generated and loaded as a second model into the EMG script. Then, a Row of data can be picked randomly to set an element's attributes. The listing below shows this approach. $instances num_p operation Person create() { var p = nextFromCollection(dataModel.Row.all()); self.name = p.name; self.lastName = p.lastName; self.email = p.email; } Note that in this case, by using different rows for each value you can further randomize the data. https://www.mockaroo.com/, https://www.generatedata.com/, www.freedatagenerator.com/, etc. 
\u21a9","title":"Model generation (EMG)"},{"location":"doc/emg/#the-epsilon-model-generation-language-emg","text":"At some point, programs written in any of the Epsilon model management languages might need to be tested in order to find defects (bugs) and assert their correctness, or benchmarked in order to assess their performance. Both testing and benchmarking activities require appropriate test data, i.e. models that conform to specific metamodels and their constraints, satisfy additional requirements or characteristics (e.g. a certain size), and/or contain data and provide a structure that exercises particular aspects of the program under test. Manual assembly of test models is an error-prone, time-consuming and labour-intensive activity. Such activities are perfect candidates for automation. Given that it is also a model management activity, it follows that the automation can be provided by a model generation engine that can execute model generation scripts. The scripts should be written in a model generation language that allows the user to generate models that conform to specific metamodels and their arbitrarily complex constraints (e.g. constraints formulated in compound first-order OCL operations), satisfy particular characteristics, and contain specific data and exhibit particular structures. The model generation engine should exhibit characteristics such as randomness, repeatability, scalability and easy parametrization. The Epsilon Model Generation Language addresses the automated generation of complex models.","title":"The Epsilon Model Generation Language (EMG)"},{"location":"doc/emg/#approaches-to-model-generation","text":"The model generation approaches found in the literature provide fully-automated behaviour. In a fully-automated approach, the tool loads the metamodel (and in some cases its constraints) and generates models that conform to the metamodel (and satisfy the constraints, if constraints are supported). 
However, the existing solutions can generate invalid models and, where constraints are supported, can only handle simple ones. The Epsilon Model Generation Language follows a semi-automated generation approach. There are three main tasks in model generation: Create instances of types in the metamodel(s). Assign values to the instance's attributes (properties typed by primitive types: String, Integer, etc.). Create links between instances to assign values to references (properties typed by complex types: other types in the metamodel). In the semi-automated approach, all of these tasks can be configured to execute statically or dynamically (with randomness). Statically, the user must specify every single aspect of the generation. Dynamically, for example, the number of instances to create of a given type can be random, the value of a given attribute can be set to random values, or links can be created between random pairs of elements. The combination of random and static definition of the generation tasks allows the user to generate models that can satisfy complex constraints, guarantee additional characteristics and exercise particular aspects of the program under test. This chapter discusses the concrete syntax of EMG as well as its execution semantics. To aid understanding, the discussion of the syntax and the semantics of the language revolves around an exemplar generation which is developed incrementally throughout the chapter.","title":"Approaches to Model Generation"},{"location":"doc/emg/#syntax","text":"The EMG language does not provide additional syntax. Instead, it provides a set of predefined annotations that can be added to EOL operations and EPL patterns in order to perform the model generation. The predefined EOL operation annotations are: Name Description instances Defines the number of instances to create. This annotation accepts one parameter. The parameter can be an expression that resolves to an Integer (e.g. 
literal, variable name, etc.) or a sequence in the form Sequence {min, max} . An integer value statically defines how many instances are to be created. A sequence defines a range that is used by the engine to generate a random number n of instances, with min <= n <= max . list Defines an identifier (listID) for a placeholder list for the elements created. This annotation accepts one parameter. The parameter is the identifier (String) that can later be used in operations that accept it as an argument in order to access the elements created by the operation. parameters If the instantiated type accepts/needs arguments for instantiation, the parameters annotation can be used to provide them. This annotation accepts one parameter. The parameter must be a Sequence that contains the desired arguments in the order expected by the constructor. All three annotations are executable and hence must be prefixed with a $ symbol when used. Further, these annotations are only evaluated on create operations. The EPL pattern annotations are: Name Description number This limits the number of times the pattern is matched, to constrain the number of links created between elements. This annotation accepts one parameter. The parameter can be an expression that resolves to an Integer (e.g. literal, variable name, etc.) or a sequence in the form Sequence {min, max} . An integer value statically defines how many times the pattern is applied. A sequence defines a range that is used by the engine to generate a random number n of applications, with min <= n <= max . probability This defines the probability that the body of the pattern will be executed for a matching set of elements. The effect is that not all matching elements are linked. Effectively this also limits the number of times links are created. noRepeat This forbids previously matched elements from being re-linked. 
The first two annotations are executable and hence must be prefixed with a $ symbol when used, while the last one is a simple annotation and must be prefixed with @ . Additionally, the EMG engine provides a set of predefined operations that provide support for generating random data that can be used to set the attributes and references of the generated model elements, to select random elements from collections, etc.","title":"Syntax"},{"location":"doc/emg/#emg-predefined-operations","text":"Signature Description nextAddTo(n : Integer, m : Integer): Sequence(Integer) Returns a sequence of n integers whose sum is equal to m. nextBoolean() Returns the next pseudorandom, uniformly distributed boolean value. nextCamelCaseWords(charSet : String, length : Integer, minWordLength : Integer) : String Generates a string of the given length formatted as CamelCase, with subwords of a minimum length of the minWordLength argument, using characters from the given charSet. nextCapitalisedWord(charSet : String, length : Integer) : String Generates a capitalised string of the given length using characters from the given charSet. nextFromCollection(c : Sequence) : Any Returns the next object from the collection, selected pseudorandomly using the uniform distribution. If the collection is empty, returns null. nextFromList(listID : String) : Any Returns the next object from the list, selected pseudorandomly using the uniform distribution. If the list is empty, returns null. The listID can either be a name defined by the \@list annotation or a parameter name from the run configuration. In the latter case, the parameter value can be either a comma separated string or a file path. If it is a comma separated string, then a list is created by splitting the string; if the value is a path, then the file will be read and each line will be treated as a list element. 
nextFromListAsSample(listID : String) : Any Same as nextFromList, but in this case the list is treated as a sample without replacement, i.e. each call will return a unique member of the list. nextHttpURI(addPort : Boolean, addPath : Boolean, addQuery : Boolean, addFragment : Boolean) : String Generates a random URI that complies with http:[//host[:port]][/]path [?query][#fragment]. The path, query and fragment parts are optional and will be added if the respective argument is true. nextInt() : Integer Returns the next pseudorandom, uniformly distributed integer. All 2^32 possible integer values should be produced with (approximately) equal probability. nextInt(upper : Integer) : Integer Returns a pseudorandom, uniformly distributed integer value between 0 (inclusive) and upper (exclusive). The argument must be positive. nextInt(lower: Integer, upper : Integer) : Integer Returns a pseudorandom, uniformly distributed integer value between lower and upper (endpoints included). The arguments must be positive and upper >= lower . nextReal() : Real Returns the next pseudorandom, uniformly distributed real value between 0.0 and 1.0 . nextReal(upper : Real) : Real Returns the next pseudorandom, uniformly distributed real value between 0.0 and upper (inclusive). nextReal(lower: Real, upper : Real) : Real Returns a pseudorandom, uniformly distributed real value between lower and upper (endpoints included). nextSample(c : Sequence, k : Integer) : Sequence(Any) Returns a Sequence of k objects selected randomly from the Sequence c using a uniform distribution. Sampling from c is without replacement; but if c contains identical objects, the sample may include repeats. If all elements of c are distinct, the resulting object collection represents a Simple Random Sample of size k from the elements of c . nextSample(listID : String, k : Integer) : Sequence(Any) Same as nextSample but the sequence is referenced by listID . 
The listID has the same meaning as for operation nextFromList . nextString() : String Returns the next string made up from characters of the LETTER character set, pseudorandomly selected with a uniform distribution. The length of the string is between 4 and 10 characters. nextString(length : Integer) : String Returns the next String made up from characters of the LETTER character set, pseudorandomly selected with a uniform distribution. The length of the String is equal to length . nextString(charSet : String, length : Integer) : String Returns the next String of the given length using the specified character set, pseudorandomly selected with a uniform distribution. nextURI() : String Generates a random URI that complies with scheme:[//[user:password]host[:port]][/]path [?query][#fragment]. The port, path, query and fragment are added randomly. The scheme is randomly selected from: http, ssh and ftp. For ssh and ftp, a user and password are randomly generated. The host is generated from a random string and uses a top-level domain. The number of paths and queries is random, between 1 and 4. nextURI(addPort : Boolean, addPath : Boolean, addQuery : Boolean, addFragment : Boolean) : String Same as nextURI, but the given arguments control what additional port, path, query and fragment information is added. nextUUID() : String Returns a type 4 (pseudorandomly generated) UUID. The UUID is generated using a cryptographically strong pseudorandom number generator. nextValue() : Real Returns the next pseudorandom value, picked from the configured distribution (by default the uniform distribution is used). nextValue(d : String, p : Sequence) : Real Returns the next pseudorandom value from the provided distribution d . The parameters p are used to configure the distribution (if required). The supported distributions are: Binomial, Exponential and Uniform. For Binomial, the parameters are numberOfTrials and probabilityOfSuccess. For Exponential, the mean. 
For Uniform, the lower and upper values (lower inclusive). setNextValueDistribution(d : String, p : Sequence) Defines the distribution to use for calls to nextValue() . Parameters are the same as for nextValue(d, p).","title":"EMG predefined operations"},{"location":"doc/emg/#character-sets-for-string-operations","text":"For the operations that accept a character set, the supported sets are defined as follows: Name Characters ID abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ 1234567890 NUMERIC 1234567890 LETTER abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ LETTER_UPPER ABCDEFGHIJKLMNOPQRSTUVWXYZ LETTER_LOWER abcdefghijklmnopqrstuvwxyz UPPER_NUM ABCDEFGHIJKLMNOPQRSTUVWXYZ 1234567890 LOWER_NUM abcdefghijklmnopqrstuvwxyz 1234567890 ID_SYMBOL abcdefghijklmnopqrstuvwxyz ABCDEFGHIJKLMNOPQRSTUVWXYZ 1234567890 ~{}!@#\$%\^&( ) _+-=[] \|;': \" \< > ? , . /\ HEX_LOWER abcdef1234567890 HEX_UPPER ABCDEF1234567890","title":"Character Sets for String operations"},{"location":"doc/emg/#creating-model-elements","text":"The EMG engine will search for EOL operations that follow a particular signature in order to determine what elements to create in the generated model. The signature is: operation <OutputType> create() { ... } . That is, the operation must be named create , the operation's context type defines the type of the created instance and no parameters should be passed. By default the create operation only creates one instance. Hence, the provided annotations can be used to tailor the behaviour of the operation. Consider the case of the PetriNet metamodel in the figure below. 
classDiagram class Element { +name: String } class Place { +outgoing: PlaceToTransArc[*] +incoming: TransToPlaceArc[*] } class PetriNet { +places: Place[*] +transitions: Transition[*] +arcs: Arc[*] } class Transition { +incoming: PlaceToTransArc[*] +outgoing: TransToPlaceArc[*] } class TransToPlaceArc { +source: Transition +target: Place } class PlaceToTransArc { +target: Transition +source: Place } Element <|-- PetriNet Element <|-- Place Transition --|> Element PetriNet *-- Arc PetriNet *-- Place PetriNet *-- Transition Arc <|-- TransToPlaceArc Arc <|-- PlaceToTransArc The code excerpt displayed below creates a PetriNet and then adds some places and transitions to it. Note that the instances annotation is executable and hence you can use absolute values, variables or expressions. The list annotation in the PetriNet creation will result in all PetriNet instances being stored in a sequence called net . The list name is then used in the Place and Transition create operations to add the places and transitions to a random ( nextFromList ) PetriNet. In this example there is only one, but we could easily create more PetriNet instances and hence have them contain a random number of Places and Transitions. The name of the elements is generated using the random string generation facilities. pre { var num_p = 10 } $instances 1 @list net operation PetriNet create() { self.name = nextCamelCaseWords(\"LETTER_LOWER\", 15, 10); } $instances num_p operation Place create() { self.name = \"P_\" + nextString(\"LETTER_LOWER\", 15); nextFromList(\"net\").places.add(self); } $instances num_p / 2 operation Transition create() { self.name = \"T_\" + nextString(\"LETTER_LOWER\", 15); nextFromList(\"net\").transitions.add(self); }","title":"Creating Model Elements"},{"location":"doc/emg/#creating-model-links","text":"In the previous section, the places and transitions references of the PetriNet were defined during the creation of the Place and Transition elements. 
For more complex reference patterns, EMG leverages the use of EPL patterns. For example, Arcs can have complex constraints in order to determine the source and target transition/place, and there may even be separate rules for each type of Arc. The EPL pattern in the listing below creates two arcs in order to connect a source and a target Place via a Transition. The pattern matches all transitions in a given PetriNet. The pattern body selects a random Place for the source and a random Place for the target (the while loops are used to pick places that have the lowest incoming/outgoing arcs possible). The weight of the arc is generated randomly from 0 to 9 ( nextInt(10) , upper bound exclusive). The pattern has been annotated with the \@probability annotation which will effectively only use 70% of the transitions to create arcs (i.e. of all the possible PetriNet-Transition matches, the code of the pattern will only be executed with a probability of 0.70). @probability 0.7 pattern Transition net:PetriNet, tra:Transition from: net.transitions { onmatch { var size = 0; var freeSources = Place.all().select(s | s.incoming.size() == size); while (freeSources.isEmpty()) { size += 1; freeSources = Place.all().select(s | s.incoming.size() == size); } size = 0; var freeTarget = Place.all().select(s | s.outgoing.size() == size); while (freeTarget.isEmpty()) { size += 1; freeTarget = Place.all().select(s | s.outgoing.size() == size); } var source = nextFromCollection(freeSources); var target = nextFromCollection(freeTarget); var a1:Arc = new PlaceToTransArc(); a1.weight = nextInt(10); a1.source = source; net.places.add(source); a1.target = tra; net.arcs.add(a1); var a2:Arc = new TransToPlaceArc(); a2.weight = nextInt(10); a2.source = tra; a2.target = target; net.places.add(target); net.arcs.add(a2); } }","title":"Creating Model Links"},{"location":"doc/emg/#meaningful-strings","text":"In some scenarios having completely random Strings for some of the element fields might not be desirable. 
In this case, EMG has an embedded mechanism to facilitate the use of meaningful attribute values (not only for Strings), and we show a second approach based on additional models.","title":"Meaningful Strings"},{"location":"doc/emg/#values-as-a-parameter","text":"The nextFromList() operation will first look for a list with that name; if it can't find one, it will look for a parameter (from the run configuration) with that name. The value of the parameter can be either an absolute path to a file or a comma separated list of values. If it is a comma separated list of values, then the individual values will be loaded as a Collection. For example, if we added the parameter names: John, Rose, Juan, Xiang, Joe to the run configuration, the listing below shows how to use that information to define the instance attributes. $instances num_p operation Place create() { self.name = nextFromList(\"names\"); nextFromList(\"net\").places.add(self); } If it is a file path, then each line of the file will be loaded as an item of the Collection. Note that paths are distinguished from comma separated values under the assumption that paths don't contain commas.","title":"Values as a parameter"},{"location":"doc/emg/#values-as-a-model","text":"A more powerful approach would be to use an existing model to serve as the source for attribute values. Given that there are several websites 1 to generate random data in the form of CSV files, we recommend the use of a CSV model to serve as an attribute value source. A CSV file with name , lastName , and email can be easily generated and loaded as a second model into the EMG script. Then, a Row of data can be picked randomly to set an element's attributes. The listing below shows this approach. 
$instances num_p operation Person create() { var p = nextFromCollection(dataModel.Row.all()); self.name = p.name; self.lastName = p.lastName; self.email = p.email; } Note that in this case, by using different rows for each value you can further randomize the data. https://www.mockaroo.com/, https://www.generatedata.com/, www.freedatagenerator.com/, etc. \u21a9","title":"Values as a model"},{"location":"doc/eml/","text":"The Epsilon Merging Language (EML) \u00b6 The aim of EML is to contribute model merging capabilities to Epsilon. More specifically, EML can be used to merge an arbitrary number of input models of potentially diverse metamodels and modelling technologies. This section provides a discussion on the abstract and concrete syntax of EML, as well as its execution semantics. It also provides two examples of merging homogeneous and heterogeneous models. Abstract Syntax \u00b6 In EML, merging specifications are organized in modules ( EmlModule ). As displayed below, EmlModule inherits from EtlModule . classDiagram class MergeRule { -name: String -abstract: Boolean -lazy: Boolean -primary: Boolean -greedy: Boolean -guard: ExecutableBlock<Boolean> -compare: ExecutableBlock<Boolean> -do: ExecutableBlock<Void> } class Parameter { -name: String -type: EolType } class NamedStatementBlockRule { -name: String -body: StatementBlock } EolModule <|-- ErlModule EtlModule <|-- EmlModule Pre --|> NamedStatementBlockRule Post --|> NamedStatementBlockRule EtlModule <|-- ErlModule ErlModule -- Pre: pre * ErlModule -- Post: post * EmlModule -- MergeRule: rules * MergeRule -- Parameter: left MergeRule -- Parameter: right MergeRule -- Parameter: target MergeRule -- MergeRule: extends * By extending EtlModule , an EML module can contain a number of transformation rules and user-defined operations. An EML module can also contain one or more merge rules as well as a set of pre and post named EOL statement blocks. 
As usual, pre and post blocks will be run before and after all rules, respectively. Each merge rule defines a name, a left, a right, and one or more target parameters. It can also extend one or more other merge rules and be defined as having one or more of the following properties: abstract, greedy, lazy and primary. Concrete Syntax \u00b6 The listing below demonstrates the concrete syntax of EML merge-rules. (@abstract)? (@lazy)? (@primary)? (@greedy)? rule <name> merge <leftParameter> with <rightParameter> into (<targetParameter>(, <targetParameter>)*)? (extends <ruleName>(, <ruleName>)*)? { statementBlock } Pre and post blocks have a simple syntax that consists of the identifier ( pre or post ), an optional name and the set of statements to be executed enclosed in curly braces. (pre|post) <name> { statement+ } Execution Semantics \u00b6 Rule and Block Overriding \u00b6 An EML module can import a number of other EML and ETL modules. In this case, the importing EML module inherits all the rules and pre/post blocks specified in the modules it imports (recursively). If the module specifies a rule or a pre/post block with the same name, the local rule/block overrides the imported one respectively. Rule Scheduling \u00b6 When an EML module is executed, the pre blocks are executed in the order in which they have been defined. Following that, for each match of the established matchTrace the applicable non-abstract, non-lazy merge rules are executed. When all matches have been merged, the transformation rules of the module are executed on all applicable elements - that have not been merged - in the models. Finally, after all rules have been applied, the post blocks of the module are executed. Rule Applicability \u00b6 By default, for a merge-rule to apply to a match , the left and right elements of the match must have a type-of relationship with the leftParameter and rightParameter of the rule respectively. 
This can be relaxed to a kind-of relationship by specifying that the merge rule is greedy (using the \\@greedy annotation in terms of concrete syntax). Source Elements Resolution \u00b6 As with model transformation, in model merging it is often required to resolve the counterparts of an element of a source model into the target models. In EML, this is achieved by overloading the semantics of the equivalents() and equivalent() operations defined by ETL. In EML, in addition to inspecting the transformation trace and invoking any applicable transformation rules, the equivalents() operation also examines the mergeTrace (displayed in the figure below) that stores the results of the application of merge-rules and invokes any applicable (both lazy and non-lazy) rules. Similarly to ETL, the order of the results of the equivalents() operation respects the order of the (merge or transform) rules that have produced them. An exception to that occurs if one of the rules has been declared as primary, in which case its results are prepended to the list of elements returned by equivalent. classDiagram class Merge { -left: Object -right: Object -targets: Object[*] } EtlContext <|-- EmlContext EmlContext -- MatchTrace: matchTrace MergeTrace -- EmlContext: mergeTrace MergeTrace -- Merge: merges * Merge -- MergeRule Homogeneous Model Merging Example \u00b6 In this scenario, two models conforming to the Graph metamodel need to be merged. The first step is to compare the two graphs using the ECL module below. rule MatchNodes match l : Left!Node with r : Right!Node { compare : l.label = r.label } rule MatchEdges match l : Left!Edge with r : Right!Edge { compare : l.source.matches(r.source) and l.target.matches(r.target) } rule MatchGraphs match l : Left!Graph with r : Right!Graph { compare : true } The MatchNodes rule in line 1 defines that two nodes match if they have the same label. 
The MatchEdges rule in line 8 specifies that two edges match if both their source and target nodes match (regardless of whether the labels of the edges match or not, as it is assumed that there cannot be two distinct edges between the same nodes). Finally, since only one instance of Graph is expected to be in each model, the MatchGraphs rule in line 16 returns true for any pair of Graphs. Having established the necessary correspondences between matching elements of the two models, the EML specification below performs the merge. import \"Graphs.etl\"; rule MergeGraphs merge l : Left!Graph with r : Right!Graph into t : Target!Graph { t.label = l.label + \" and \" + r.label; } @abstract rule MergeGraphElements merge l : Left!GraphElement with r : Right!GraphElement into t : Target!GraphElement { t.graph ::= l.graph; } rule MergeNodes merge l : Left!Node with r : Right!Node into t : Target!Node extends MergeGraphElements { t.label = \"c_\" + l.label; } rule MergeEdges merge l : Left!Edge with r : Right!Edge into t : Target!Edge extends MergeGraphElements { t.source ::= l.source; t.target ::= l.target; } In line 3, the MergeGraphs merge rule specifies that two matching Graphs ( l and r ) are to be merged into one Graph t in the target model whose label is the concatenation of the labels of the two input graphs separated using 'and'. The MergeNodes rule in line 22 specifies that two matching Nodes are merged into a single Node in the target model. The label of the merged node is derived by concatenating the static string c_ (for common) with the label of the source Node from the left model. Similarly, the MergeEdges rule specifies that two matching Edges are merged into a single Edge in the target model. The source and target nodes of the merged Edge are set to the equivalents (::=) of the source and target nodes of the edge from the left model. 
To reduce duplication, the MergeNodes and MergeEdges rules extend the abstract MergeGraphElements rule specified in line 13, which assigns the graph property of the graph element to the equivalent of the left graph. The rules displayed above address only the matching elements of the two models. To also copy the elements for which no equivalent has been found in the opposite model, the EML module imports the ETL module below. rule TransformGraph transform s : Source!Graph to t : Target!Graph { t.label = s.label; } @abstract rule TransformGraphElement transform s : Source!GraphElement to t : Target!GraphElement { t.graph ::= s.graph; } rule TransformNode transform s : Source!Node to t : Target!Node extends TransformGraphElement { t.label = s.graph.label + \"_\" + s.label; } rule TransformEdge transform s : Source!Edge to t : Target!Edge extends TransformGraphElement { t.source ::= s.source; t.target ::= s.target; } The rules of the ETL module apply to model elements of both the Left and the Right model as both have been aliased as Source. Of special interest is the TransformNode rule in line 17, which specifies that non-matching nodes in the two input models will be transformed into nodes in the target model, the labels of which concatenate the label of their input graph with their original label. Executing the ECL and EML modules on the exemplar models displayed in the following two figures creates the target model of the final figure. graph LR n1 --> n2 n1 --> n3 n3 --> n5 n2 --> n4 Left model graph LR n1 --> n8 n1 --> n6 n8 --> n6 n6 --> n3 Right model graph LR c_n1 --> g1_n2 g1_n2 --> c_n4 c_n1 --> g2_n8 g2_n8 --> g2_n6 c_n1 --> g2_n6 c_n1 --> c_n3 c_n3 --> g1_n5 g2_n6 --> c_n3 Merged model","title":"Model merging (EML)"},{"location":"doc/eml/#the-epsilon-merging-language-eml","text":"The aim of EML is to contribute model merging capabilities to Epsilon. 
More specifically, EML can be used to merge an arbitrary number of input models of potentially diverse metamodels and modelling technologies. This section provides a discussion on the abstract and concrete syntax of EML, as well as its execution semantics. It also provides two examples of merging homogeneous and heterogeneous models.","title":"The Epsilon Merging Language (EML)"},{"location":"doc/eml/#abstract-syntax","text":"In EML, merging specifications are organized in modules ( EmlModule ). As displayed below, EmlModule inherits from EtlModule . classDiagram class MergeRule { -name: String -abstract: Boolean -lazy: Boolean -primary: Boolean -greedy: Boolean -guard: ExecutableBlock<Boolean> -compare: ExecutableBlock<Boolean> -do: ExecutableBlock<Void> } class Parameter { -name: String -type: EolType } class NamedStatementBlockRule { -name: String -body: StatementBlock } EolModule <|-- ErlModule EtlModule <|-- EmlModule Pre --|> NamedStatementBlockRule Post --|> NamedStatementBlockRule EtlModule <|-- ErlModule ErlModule -- Pre: pre * ErlModule -- Post: post * EmlModule -- MergeRule: rules * MergeRule -- Parameter: left MergeRule -- Parameter: right MergeRule -- Parameter: target MergeRule -- MergeRule: extends * By extending EtlModule , an EML module can contain a number of transformation rules and user-defined operations. An EML module can also contain one or more merge rules as well as a set of pre and post named EOL statement blocks. As usual, pre and post blocks will be run before and after all rules, respectively. Each merge rule defines a name, a left, a right, and one or more target parameters. It can also extend one or more other merge rules and be defined as having one or more of the following properties: abstract, greedy, lazy and primary.","title":"Abstract Syntax"},{"location":"doc/eml/#concrete-syntax","text":"The listing below demonstrates the concrete syntax of EML merge-rules. (@abstract)? (@lazy)? (@primary)? (@greedy)? 
rule <name> merge <leftParameter> with <rightParameter> into (<targetParameter>(, <targetParameter>)*)? (extends <ruleName>(, <ruleName>)*)? { statementBlock } Pre and post blocks have a simple syntax that consists of the identifier ( pre or post ), an optional name and the set of statements to be executed enclosed in curly braces. (pre|post) <name> { statement+ }","title":"Concrete Syntax"},{"location":"doc/eml/#execution-semantics","text":"","title":"Execution Semantics"},{"location":"doc/eml/#rule-and-block-overriding","text":"An EML module can import a number of other EML and ETL modules. In this case, the importing EML module inherits all the rules and pre/post blocks specified in the modules it imports (recursively). If the module specifies a rule or a pre/post block with the same name, the local rule/block overrides the imported one respectively.","title":"Rule and Block Overriding"},{"location":"doc/eml/#rule-scheduling","text":"When an EML module is executed, the pre blocks are executed in the order in which they have been defined. Following that, for each match of the established matchTrace the applicable non-abstract, non-lazy merge rules are executed. When all matches have been merged, the transformation rules of the module are executed on all applicable elements - that have not been merged - in the models. Finally, after all rules have been applied, the post blocks of the module are executed.","title":"Rule Scheduling"},{"location":"doc/eml/#rule-applicability","text":"By default, for a merge-rule to apply to a match , the left and right elements of the match must have a type-of relationship with the leftParameter and rightParameter of the rule respectively. 
This can be relaxed to a kind-of relationship by specifying that the merge rule is greedy (using the @greedy annotation in terms of concrete syntax).","title":"Rule Applicability"},{"location":"doc/eml/#source-elements-resolution","text":"As with model transformation, in model merging it is often necessary to resolve the counterparts of an element of a source model in the target models. In EML, this is achieved by overloading the semantics of the equivalents() and equivalent() operations defined by ETL. In EML, in addition to inspecting the transformation trace and invoking any applicable transformation rules, the equivalents() operation also examines the mergeTrace (displayed in the figure below), which stores the results of the application of merge-rules, and invokes any applicable (both lazy and non-lazy) rules. Similarly to ETL, the order of the results of the equivalents() operation respects the order of the (merge or transform) rules that have produced them. An exception occurs if one of the rules has been declared as primary, in which case its results are prepended to the list of elements returned by equivalents(). classDiagram class Merge { -left: Object -right: Object -targets: Object[*] } EtlContext <|-- EmlContext EmlContext -- MatchTrace: matchTrace MergeTrace -- EmlContext: mergeTrace MergeTrace -- Merge: merges * Merge -- MergeRule","title":"Source Elements Resolution"},{"location":"doc/eml/#homogeneous-model-merging-example","text":"In this scenario, two models conforming to the Graph metamodel need to be merged. The first step is to compare the two graphs using the ECL module below. 
rule MatchNodes match l : Left!Node with r : Right!Node { compare : l.label = r.label } rule MatchEdges match l : Left!Edge with r : Right!Edge { compare : l.source.matches(r.source) and l.target.matches(r.target) } rule MatchGraphs match l : Left!Graph with r : Right!Graph { compare : true } The MatchNodes rule in line 1 defines that two nodes match if they have the same label. The MatchEdges rule in line 8 specifies that two edges match if both their source and target nodes match (regardless of whether the labels of the edges match or not, as it is assumed that there cannot be two distinct edges between the same nodes). Finally, since only one instance of Graph is expected to be in each model, the MatchGraphs rule in line 16 returns true for any pair of Graphs. Having established the necessary correspondences between matching elements of the two models, the EML specification below performs the merge. import \"Graphs.etl\"; rule MergeGraphs merge l : Left!Graph with r : Right!Graph into t : Target!Graph { t.label = l.label + \" and \" + r.label; } @abstract rule MergeGraphElements merge l : Left!GraphElement with r : Right!GraphElement into t : Target!GraphElement { t.graph ::= l.graph; } rule MergeNodes merge l : Left!Node with r : Right!Node into t : Target!Node extends MergeGraphElements { t.label = \"c_\" + l.label; } rule MergeEdges merge l : Left!Edge with r : Right!Edge into t : Target!Edge extends MergeGraphElements { t.source ::= l.source; t.target ::= l.target; } In line 3, the MergeGraphs merge rule specifies that two matching Graphs ( l and r ) are to be merged into one Graph t in the target model, whose label is the concatenation of the labels of the two input graphs separated by 'and'. The MergeNodes rule in line 22 specifies that two matching Nodes are merged into a single Node in the target model. The label of the merged node is derived by concatenating the static string c_ (for common) with the label of the source Node from the left model. 
Similarly, the MergeEdges rule specifies that two matching Edges are merged into a single Edge in the target model. The source and target nodes of the merged Edge are set to the equivalents (::=) of the source and target nodes of the edge from the left model. To reduce duplication, the MergeNodes and MergeEdges rules extend the abstract MergeGraphElements rule specified in line 13, which assigns the graph property of the merged graph element to the equivalent of the left element's graph. The rules displayed above address only the matching elements of the two models. To also copy the elements for which no equivalent has been found in the opposite model, the EML module imports the ETL module below. rule TransformGraph transform s : Source!Graph to t : Target!Graph { t.label = s.label; } @abstract rule TransformGraphElement transform s : Source!GraphElement to t : Target!GraphElement { t.graph ::= s.graph; } rule TransformNode transform s : Source!Node to t : Target!Node extends TransformGraphElement { t.label = s.graph.label + \"_\" + s.label; } rule TransformEdge transform s : Source!Edge to t : Target!Edge extends TransformGraphElement { t.source ::= s.source; t.target ::= s.target; } The rules of the ETL module apply to model elements of both the Left and the Right model, as both have been aliased as Source. Of special interest is the TransformNode rule in line 17, which specifies that non-matching nodes in the two input models are transformed into nodes in the target model whose labels concatenate the label of their containing graph with the label of their counterparts in the input models. Executing the ECL and EML modules on the exemplar models displayed in the following two figures creates the target model of the final figure. 
graph LR n1 --> n2 n1 --> n3 n3 --> n5 n2 --> n4 Left model graph LR n1 --> n8 n1 --> n6 n8 --> n6 n6 --> n3 Right model graph LR c_n1 --> g1_n2 g1_n2 --> c_n4 c_n1 --> g2_n8 g2_n8 --> g2_n6 c_n1 --> g2_n6 c_n1 --> c_n3 c_n3 --> g1_n5 g2_n6 --> c_n3 Merged model","title":"Homogeneous Model Merging Example"},{"location":"doc/eol/","text":"The Epsilon Object Language (EOL) \u00b6 The primary aim of EOL is to provide a reusable set of common model management facilities, atop which task-specific languages can be implemented. However, EOL can also be used as a general-purpose standalone model management language for automating tasks that do not fall into the patterns targeted by task-specific languages. This section presents the syntax and semantics of the language using a combination of abstract syntax diagrams, concrete syntax examples and informal discussion. Module Organization \u00b6 In this section the syntax of EOL is presented in a top-down manner. EOL programs are organized in modules . Each module defines a body and a number of operations . The body is a block of statements that are evaluated when the module is executed 1 . Each operation defines the kind of objects on which it is applicable ( context ), a name , a set of parameters and optionally a return type . Modules can also import other modules using import statements and access their operations, as shown in the listing below. // file imported.eol operation hello() { \"Hello world!\".println(); } // file importer.eol // We can use relative/absolute paths or platform:/ URIs import \"imported.eol\"; hello(); // main body // ... more operations could be placed here ... 
classDiagram class EolModule { +main:StatementBlock } class ImportStatement { +imported:EolModule } class Operation { +name: String +context: EolType +parameters: Parameter[*] +returnType: EolType } class ExecutableAnnotation { +expression: Expression } class SimpleAnnotation { +values: String[*] } EolModule -- ImportStatement: * EolModule -- Operation: operations * Operation -- Annotation: annotations * Operation -- StatementBlock: body EolModule -- StatementBlock: main StatementBlock -- Statement: statements * Annotation <|-- ExecutableAnnotation Annotation <|-- SimpleAnnotation User-Defined Operations \u00b6 In mainstream object oriented languages such as Java and C++, operations are defined inside classes and can be invoked on instances of those classes. EOL on the other hand is not object-oriented in the sense that it does not define classes itself, but nevertheless needs to manage objects of types defined externally to it (e.g. in metamodels). By defining the context-type of an operation explicitly, the operation can be called on instances of the type as if it was natively defined by the type. For example, consider the code excerpts displayed in the listings below. In the first listing, the operations add1 and add2 are defined in the context of the built-in Integer type, which is specified before their names. Therefore, they can be invoked in line 1 using the 1.add1().add2() expression: the context (the integer 1 ) will be assigned to the special variable self . On the other hand, in the second listing where no context is defined, they have to be invoked in a nested manner which follows an in-to-out direction instead of the left to right direction used by the former excerpt. As complex model queries often involve invoking multiple properties and operations, this technique is particularly beneficial to the overall readability of the code. 
1.add1().add2().println(); operation Integer add1() : Integer { return self + 1; } operation Integer add2() : Integer { return self + 2; } add2(add1(1)).println(); operation add1(base : Integer) : Integer { return base + 1; } operation add2(base : Integer) : Integer { return base + 2; } EOL supports polymorphic operations using a runtime dispatch mechanism. Multiple operations with the same name and parameters can be defined, each defining a distinct context type. For example, in the listing below, the statement in line 1 invokes the test operation defined in line 4, while the statement in line 2 invokes the test operation defined in line 8. \"1\".test(); 1.test(); operation String test() { (self + \" is a string\").println(); } operation Integer test() { (self + \" is an integer\").println(); } Annotations \u00b6 EOL supports two types of annotations: simple and executable. A simple annotation specifies a name and a set of String values while an executable annotation specifies a name and an expression. The concrete syntaxes of simple and executable annotations are displayed in the listing below. // Simple annotation @name value(,value) // Executable annotation $name expression Several examples for simple annotations are shown in the listing below. Examples for executable annotations will be given in the following sections. @colors red @colors red, blue @colors red, blue, green In stand-alone EOL, annotations are supported only in the context of operations; however, as discussed in the sequel, task-specific languages also make use of annotations in their constructs, each with task-specific semantics. EOL operations support three particular annotations: the pre and post executable annotations for specifying pre and post-conditions, and the cached simple annotation, which are discussed below. 
Pre/post conditions in user-defined operations \u00b6 A number of pre and post executable annotations can be attached to EOL operations to specify the pre- and post-conditions of the operation. When an operation is invoked, before its body is evaluated, the expressions of the pre annotations are evaluated. If all of them return true , the body of the operation is executed, otherwise, an error is raised. Similarly, once the body of the operation has been executed, the expressions of the post annotations of the operation are executed to ensure that the operation has had the desired effects. Pre and post annotations can access all the variables in the parent scope, as well as the parameters of the operation and the object on which the operation is invoked (through the self variable). Moreover, in post annotations, the returned value of the operation is accessible through the built-in _result variable. An example of using pre and post conditions in EOL appears below. 1.add(2); 1.add(-1); $pre i > 0 $post _result > self operation Integer add(i : Integer) : Integer { return self + i; } In line 4 the add operation defines a pre-condition stating that the parameter i must be a positive number. In line 5, the operation defines that result of the operation ( _result ) must be greater than the number on which it was invoked ( self ). Thus, when executed in the context of the statement in line 1 the operation succeeds, while when executed in the context of the statement in line 2, the pre-condition is not satisfied and an error is raised. Operation Result Caching \u00b6 EOL supports caching the results of parameter-less operations using the @cached simple annotation. In the following example, the Fibonacci number of a given Integer is calculated using the fibonacci recursive operation displayed in the listing below. 
Since the fibonacci operation is declared as cached , it is only executed once for each distinct Integer and subsequent calls on the same target return the cached result. Therefore, when invoked in line 1, the body of the operation is called 16 times. By contrast, if no @cached annotation was specified, the body of the operation would be called recursively 1973 times. This feature is particularly useful for performing queries on large models and caching their results without needing to introduce explicit variables that store the cached results. It is worth noting that caching works by reference , which means that all clients of a cached method for a given context will receive the same returned object. As such, if the first client modifies the returned object in some way (e.g. sets a property in the case of an object or adds an element in the case of the collection), subsequent clients of the method for the same context will receive the modified object/collection. 15.fibonacci().println(); @cached operation Integer fibonacci() : Integer { if (self = 1 or self = 0) { return 1; } else { return (self-1).fibonacci() + (self-2).fibonacci(); } } Types \u00b6 As is the case for most programming languages, EOL defines a built-in system of types, illustrated in the figure below. The Any type, inspired by the OclAny type of OCL, is the basis of all types in EOL including Collection types. classDiagram class ModelElementType { -model: String -type: String } class Native { -implementation: String } ModelElementType --|> Any Any <|-- Native Any <|-- Collection Any <|-- Map Collection <|-- Bag Collection <|-- Set Collection <|-- OrderedSet Collection <|-- Sequence PrimitiveType --|> Any PrimitiveType <|-- Integer PrimitiveType <|-- String PrimitiveType <|-- Boolean PrimitiveType <|-- Real The operations supported by instances of the Any type are outlined in the table below 2 . 
Signature Description asBag() : Bag Returns a new Bag containing the object asBoolean() : Boolean Returns a Boolean based on the string representation of the object. If the string representation is not of an acceptable format, an error is raised asInteger() : Integer Returns an Integer based on the string representation of the object. If the string representation is not of an acceptable format, an error is raised asOrderedSet() : OrderedSet Returns a new OrderedSet containing the object asReal() : Real Returns a Real based on the string representation of the object. If the string representation is not of an acceptable format, an error is raised asDouble() : Double Returns a Java Double based on the string representation of the object. If the string representation is not of an acceptable format, an error is raised asFloat() : Float Returns a Java Float based on the string representation of the object. If the string representation is not of an acceptable format, an error is raised asSequence() : Sequence Returns a new Sequence containing the object asSet() : Set Returns a new Set containing the object asString() : String Returns a string representation of the object err([prefix : String]) : Any Prints a string representation of the object on which it is invoked to the error stream prefixed with the optional prefix string and returns the object on which it was invoked. In this way, the err operation can be used for debugging purposes in a non-invasive manner errln([prefix : String]) : Any Has the same effects as the err operation but also produces a new line in the output stream. format([pattern : String]) : String Uses the provided pattern to form a String representation of the object on which the method is invoked. The pattern argument must conform to the format string syntax defined by Java 3 . 
hasProperty(name : String) : Boolean Returns true if the object has a property with the specified name or false otherwise ifUndefined(alt : Any) : Any If the object is undefined, it returns alt else it returns the object isDefined() : Boolean Returns true if the object is defined and false otherwise isKindOf(type : Type) : Boolean Returns true if the object is of the given type or one of its subtypes and false otherwise isTypeOf(type : Type) : Boolean Returns true if the object is of the given type and false otherwise isUndefined() : Boolean Returns true if the object is undefined and false otherwise owningModel() : Model Returns the model that contains this object or an undefined value otherwise print([prefix : String]) : Any Prints a string representation of the object on which it is invoked to the regular output stream, prefixed with the optional prefix string and returns the object on which it was invoked. In this way, the print operation can be used for debugging purposes in a non-invasive manner println([prefix : String]) : Any Has the same effects as the print operation but also produces a new line in the output stream. type() : Type Returns the type of the object. Primitive Types \u00b6 EOL provides four primitive types: String, Integer, Real and Boolean. The String type represents a finite sequence of characters and supports the following operations which can be invoked on its instances. 
Signature Description characterAt(index : Integer) : String Returns the character in the specified index concat(str : String) : String Returns a concatenated form of the string with the str parameter endsWith(str : String) : Boolean Returns true iff the string ends with str escapeXml() : String Returns a new string with escaped XML-reserved characters firstToLowerCase() : String Returns a new string the first character of which has been converted to lower case ftlc() : String Alias for firstToLowerCase() firstToUpperCase() : String Returns a new string, the first character of which has been converted to upper case ftuc : String Alias for firstToUpperCase() isInteger() : Boolean Returns true iff the string is an integer isReal() : Boolean Returns true iff the string is a real number isSubstringOf(str : String) : Boolean Returns true iff the string the operation is invoked on is a substring of str length() : Integer Returns the number of characters in the string matches(reg : String) : Boolean Returns true if there are occurrences of the regular expression reg in the string pad(length : Integer, padding : String, right : Boolean) : String Pads the string up to the specified length with specified padding (e.g. 
\"foo\".pad(5, \"*\", true) returns \"foo**\" ) replace(source : String, target : String) : String Returns a new string in which all instances of source have been replaced with instances of target split(reg : String) : Sequence(String) Splits the string using as a delimiter the provided regular expression, reg , and returns a sequence containing the parts startsWith(str : String) : Boolean Returns true iff the string starts with str substring(index : Integer) : String Returns a sub-string of the string starting from the specified index and extending to the end of the original string substring(startIndex : Integer, endIndex : Integer) : String Returns a sub-string of the string starting from the specified startIndex and ending at endIndex toCharSequence() : Sequence(String) Returns a sequence containing all the characters of the string toLowerCase() : String Returns a new string where all the characters have been converted to lower case toUpperCase() : String Returns a new string where all the characters have been converted to upper case trim() : String Returns a trimmed copy of the string The Real type represents real numbers and provides the following operations. Signature Description abs() : Real Returns the absolute value of the real ceiling() : Integer Returns the nearest Integer that is greater than the real floor() : Integer Returns the nearest Integer that is less than the real log() : Real Returns the natural logarithm of the real log10() : Real Returns the 10-based logarithm of the real max(other : Real) : Real Returns the maximum of the two reals min(other : Real) : Real Returns the minimum of the two reals pow(exponent : Real) : Real Returns the real to the power of exponent round() : Integer Rounds the real to the nearest Integer The Integer type represents natural numbers and negatives and extends the Real primitive type. 
It also defines the following operations: Signature Description iota(end : Integer, step : Integer) : Sequence(Integer) Returns a sequence of integers up to end using the specified step (e.g. 1.iota(10,2) returns Sequence{1,3,5,7,9}) mod(divisor : Integer) : Integer Returns the remainder of dividing the integer by the divisor to(other : Integer) : Sequence(Integer) Returns a sequence of integers (e.g. 1.to(5) returns Sequence{1,2,3,4,5}) toBinary() : String Returns the binary representation of the integer (e.g. 6.toBinary() returns \"110\") toHex() : String Returns the hexadecimal representation of the integer (e.g. 42.toHex() returns \"2a\") Finally, the Boolean type represents true/false states and provides no additional operations beyond those provided by the base Any type. Collections and Maps \u00b6 EOL provides four types of collections and a Map type. The Bag type represents non-unique, unordered collections and implements the java.util.Collection interface; the Sequence type represents non-unique, ordered collections and implements the java.util.List interface; the Set type represents unique, unordered collections and implements the java.util.Set interface; and the OrderedSet type represents unique, ordered collections. Since version 2.0, there are also two concurrent collection types, which can safely be modified from multiple threads. These are ConcurrentBag and ConcurrentSet , which are thread-safe variants of the Bag and Set types respectively. All collection types inherit from the abstract Collection type. Apart from simple operations, EOL also supports logic operations on collections. The following operations (along with any operations declared on the java.util.Collection interface) apply to all types of collections: Signature Description add(item : Any) : Boolean Adds an item to the collection. If the collection is a set, addition of duplicate items has no effect. 
Returns true if the collection increased in size: this is always the case for bags and sequences, and for sets and ordered sets it is true if the element was not part of the collection before. addAll(col : Collection) : Boolean Adds all the items of the col argument to the collection. If the collection is a set, it only adds items that do not already exist in the collection. Returns true if this collection changed as a result of the call asBag() Returns a Bag that contains the same elements as the collection. asOrderedSet() Returns a duplicate-free OrderedSet that contains the same elements as the collection. asSequence() Returns a Sequence that contains the same elements as the collection. asSet() Returns a duplicate-free Set that contains the same elements as the collection. clear() Empties the collection clone() : Collection Returns a new collection of the same type containing the same items with the original collection concat() : String Returns the string created by converting each element of the collection to a string concat(separator : String) : String Returns the string created by converting each element of the collection to a string, using the given argument as a separator count(item : Any) : Integer Returns the number of times the item exists in the collection excludes(item : Any) : Boolean Returns true if the collection excludes the item excludesAll(col : Collection) : Boolean Returns true if the collection excludes all the items of collection col excluding(item : Any) : Collection Returns a new collection that excludes the item -- unlike the remove() operation that removes the item from the collection itself excludingAll(col : Collection) : Collection Returns a new collection that excludes all the elements of the col collection flatten() : Collection Recursively flattens all items that are of collection type and returns a new collection where no item is a collection itself includes(item : Any) : Boolean Returns true if the collection includes the item 
includesAll(col : Collection) : Boolean Returns true if the collection includes all the items of collection col including(item : Any) : Collection Returns a new collection that also contains the item -- unlike the add() operation that adds the item to the collection itself includingAll(col : Collection) : Collection Returns a new collection that is a union of the two collections. The type of the returned collection (i.e. Bag, Sequence, Set, OrderedSet) is same as the type of the collection on which the operation is invoked isEmpty() : Boolean Returns true if the collection does not contain any elements and false otherwise min() : Real Returns the minimum of all reals/integers in the collection, or 0 if it is empty min(default : Real) : Real Returns the minimum of all reals/integers in the collection, or the default value if it is empty max() : Real Returns the maximum of all reals/integers in the collection, or 0 if it is empty max(default : Real) : Real Returns the maximum of all reals/integers in the collection, or the default value if it is empty notEmpty() : Boolean Returns true if the collection contains any elements and false otherwise powerset() : Set Returns the set of all subsets of the collection product() : Real Returns the product of all reals/integers in the collection random() : Any Returns a random item from the collection remove(item : Any) : Boolean Removes an item from the collection. Returns true if the collection contained the specified element removeAll(col : Collection) : Boolean Removes all the items of col from the collection. Returns true if the collection changed as a result of the call size() : Integer Returns the number of items the collection contains sum() : Real Returns the sum of all reals/integers in the collection The following operations apply to ordered collection types (i.e. 
Sequence and OrderedSet): Signature Description at(index : Integer) : Any Returns the item of the collection at the specified index first() : Any Returns the first item of the collection fourth() : Any Returns the fourth item of the collection indexOf(item : Any) : Integer Returns the index of the item in the collection or -1 if it does not exist invert() : Collection Returns an inverted copy of the collection last() : Any Returns the last item of the collection removeAt(index : Integer) : Any Removes and returns the item at the specified index. second() : Any Returns the second item of the collection third() : Any Returns the third item of the collection Also, EOL collections support the following first-order operations. Apart from aggregate and closure , all of these operations have a parallel variant which can take advantage of multiple cores to improve performance. All computations contained in these operations are assumed to be free from side-effects (i.e. do not mutate global variables). Aside from the following built-in first-order operations which are evaluated eagerly, all Collection types in the Java implementation of EOL support Streams. This allows for chains of queries and transformations on collections to be evaluated more efficiently. A stream can be obtained by calling the stream() method on the collection. The API is defined by the Java standard library 4 . 
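A few of these first-order operations, notably select, collect and the transitive closure, can be approximated in Python for illustration. The helper names and list-based collections below are assumptions of this sketch, not part of EOL or its runtime:

```python
# Rough Python analogues of some EOL first-order operations,
# using plain lists in place of EOL collections.

def select(collection, condition):
    # EOL: collection.select(it | condition)
    return [item for item in collection if condition(item)]

def collect(collection, expression):
    # EOL: collection.collect(it | expression)
    return [expression(item) for item in collection]

def closure(collection, expression):
    # EOL: collection.closure(it | expression) -- collects the
    # transitive closure of the expression, e.g. all descendants.
    result, pending = [], list(collection)
    while pending:
        item = pending.pop(0)
        for reached in expression(item):
            if reached not in result:
                result.append(reached)
                pending.append(reached)
    return result

numbers = [1, 2, 3, 4]
assert select(numbers, lambda n: n > 2) == [3, 4]
assert collect(numbers, lambda n: n * 10) == [10, 20, 30, 40]

# Transitive closure over a child relation: t.closure(it | it.children)
children = {1: [2, 3], 2: [4], 3: [], 4: []}
assert closure([1], lambda n: children[n]) == [2, 3, 4]
```

As with EOL's closure example on trees, starting from a root and following the child relation yields all descendants without the root itself.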
Signature Description atLeastNMatch(iterator : Type | condition, n : Integer) : Boolean Returns true if there are n or more items in the collection that satisfy the condition atMostNMatch(iterator : Type | condition, n : Integer) : Boolean Returns true if there are n or fewer items in the collection that satisfy the condition aggregate(iterator : Type | keyExpression, valueExpression) : Map Returns a map containing key-value pairs produced by evaluating the key and value expressions on each item of the collection that is of the specified type closure(iterator : Type | expression) : Collection Returns a collection containing the results of evaluating the transitive closure of the results produced by the expression on each item of the collection that is of the specified type. For example, if t is a tree model element, t.closure(it|it.children) will return all its descendants collect(iterator : Type | expression) : Collection Returns a collection containing the results of evaluating the expression on each item of the collection that is of the specified type count(iterator : Type | condition) : Integer Returns the number of elements in the collection that satisfy the condition exists(iterator : Type | condition) : Boolean Returns true if there exists at least one item in the collection that satisfies the condition forAll(iterator : Type | condition) : Boolean Returns true if all items in the collection satisfy the condition nMatch(iterator : Type | condition, n : Integer) : Boolean Returns true if there are exactly n items in the collection that satisfy the condition none(iterator : Type | condition) : Boolean Returns true if there are no items in the collection that satisfy the condition one(iterator : Type | condition) : Boolean Returns true if there exists exactly one item in the collection that satisfies the condition reject(iterator : Type | condition) : Collection Returns a sub-collection containing only items of the specified type that do not satisfy the 
condition rejectOne(iterator : Type | condition) : Collection Returns a sub-collection containing all elements except the first element that satisfies the condition select(iterator : Type | condition) : Collection Returns a sub-collection containing only items of the specified type that satisfy the condition selectByKind(Type) : Collection Returns a sub-collection containing only items of the specified type and its subtypes selectByType(Type) : Collection Returns a sub-collection containing only items of exactly the specified type, excluding subtypes selectOne(iterator : Type | condition) : Any Returns any element that satisfies the condition sortBy(iterator: Type | expression) : Collection Returns a copy of the collection sorted by the results of evaluating the expression on each item of the collection that conforms to the iterator type. The expression should return either an Integer, a String or an object that is an instance of Comparable. The ordering is calculated as follows: for integers, smaller to greater; for Strings, as defined by the compareTo method of Java strings; for Comparable objects, according to the semantics of the type's compareTo method implementation. The Map type (which implements the java.util.Map interface) represents a Set of key-value pairs in which the keys are unique. Since version 2.0, there is also a thread-safe ConcurrentMap type, which implements the java.util.concurrent.ConcurrentMap interface. The following operations are supported: Signature Description clear() Clears the map containsKey(key : Any) : Boolean Returns true if the map contains the specified key containsValue(value : Any) : Boolean Returns true if this map maps one or more keys to the specified value. get(key : Any) : Any Returns the value for the specified key isEmpty() : Boolean Returns true if the map contains no key-value mappings. keySet() : Set Returns the keys of the map put(key : Any, value : Any) Adds the key-value pair to the map. 
If the map already contains the same key, the value is overwritten putAll(map : Map) Copies all of the mappings from the specified map to this map. remove(key : Any) : Any Removes the mapping for the specified key from this map if present. Returns the previous value associated with key. size() : Integer Returns the number of key-value mappings in this map. values() : Bag Returns the values of the map Tuples \u00b6 Since version 2.2, EOL supports Tuples, which can be used to compose arbitrary data structures on-the-fly. A Tuple in EOL behaves like a Map<String, Object> , except that the values of the map can be accessed using literal property call expressions. There are three ways to instantiate a tuple, as shown below. // After construction var alice = new Tuple; alice.name = \"Alice\"; alice.age = 32; // During construction var bob = new Tuple(name = \"Bob\", age = 28); // Map Literal var charlie = Tuple{\"name\" = \"Charlie\", \"age\" = 36}; If a non-existent property on a Tuple is accessed, an exception is thrown. var p = new Tuple(name = \"Alice\", age = 32); p.name.substring(0, 3); // \"Ali\" p.age; // 32 p.occupation.isDefined(); // false p.occupation.toUpperCase(); // Property 'occupation' not found Native Types \u00b6 As discussed earlier, while the purpose of EOL is to provide significant expressive power to enable users to manage models at a high level of abstraction, it is not intended to be a general-purpose programming language. Therefore, there may be cases where users need to implement some functionality that is either not efficiently supported by the EOL runtime (e.g. complex mathematical computations) or that EOL does not support at all (e.g. developing user interfaces, accessing databases). To overcome this problem, EOL enables users to create objects of the underlying programming environment by using native types. A native type specifies an implementation property that indicates the unique identifier for an underlying platform type. 
For instance, in a Java implementation of EOL the user can instantiate and use a Java class via its class identifier. Thus, the EOL excerpt in the listing below creates a Java window (Swing JFrame) and uses its methods to change its title and dimensions and make it visible. var frame = new Native(\"javax.swing.JFrame\"); frame.title = \"Opened with EOL\"; frame.setBounds(100,100,300,200); frame.visible = true; To pass arguments to the constructor of a native type, a parameter list must be added, such as that in the listing below. var file = new Native(\"java.io.File\")(\"myfile.txt\"); file.absolutePath.println(); Static types can also be referenced in EOL and stored in a variable for convenience, as shown below. var Collectors = Native(\"java.util.stream.Collectors\"); Model Element Types \u00b6 A model element type represents a meta-level classifier for model elements. Epsilon intentionally refrains from defining more details about the meaning of a model element type, to be able to support diverse modelling technologies where a type has different semantics. For instance, an Ecore EClass, an XSD complex type and a Java class can all be regarded as model element types according to the implementation of the underlying modelling framework. Info As EOL is decoupled from modelling technologies (e.g. EMF, Simulink), through Epsilon's Model Connectivity Layer , we refrain from referring to specific modelling technologies in this section as much as possible. When multiple models are involved, the name of the model is required in addition to the name of the type to resolve a particular type, since different models may contain elements of homonymous but different model element types. If a model defines more than one type with the same name (e.g. in different packages), a fully qualified type name must be provided. In terms of concrete syntax, inspired by ATL , the ! character is used to separate the name of the type from the name of the model it is defined in. 
For instance, Ma!A represents the type A of model Ma . Also, to support modelling technologies that provide hierarchical grouping of types (e.g. using packages), the :: notation is used to separate packages from classes. A model element type supports the following operations: Signature Description all() : Set Alias for allOfKind() (for syntax-compactness purposes) allInstances() : Set Alias for allOfKind() (for compatibility with OCL) allOfKind() : Set Returns all the elements in the model that are instances either of the type itself or of one of its subtypes allOfType() : Set Returns all the elements in the model that are instances of the type createInstance() : Any Creates an instance of the type in the model. The same can be achieved using the new operator (see below) isInstantiable() : Boolean Returns true if the type is instantiable (i.e. non-abstract) As an example of the concrete syntax, the listing below retrieves all the instances of the Class type (including instances of its subtypes) defined in the Core package of the UML 1.4 metamodel that are contained in the model named UML14. UML14!Core::Foundation::Class.allInstances(); Creating and Deleting Model Elements \u00b6 EOL provides the new and delete operators for creating and deleting model elements as shown below. The new operator is an alias for the createInstance() method above, and can also be used to create instances of primitive and native types (i.e. Java classes). var t : new Tree; // Creates a new instance of type Tree var p : new Source!Person; // Creates a new Person in model Source delete t; // Deletes the element created in line 1 Expressions \u00b6 Literal Values \u00b6 EOL provides special syntax constructs to create instances of each of the built-in types: Integer literals are defined by using one or more decimal digits (such as 42 or 999 ). Optionally, long integers (with the same precision as a Java Long ) can be produced by adding an \"l\" suffix, such as 42l . 
Real literals are defined by: Adding a decimal separator and non-empty fractional part to the integer part, such as 42.0 or 3.14 . Please note that .2 and 2. are not valid literals. Adding a floating point suffix: \"f\" and \"F\" denote single precision, and \"d\" and \"D\" denote double precision. For example, 2f or 3D . Adding an exponent, such as 2e+1 (equal to 2e1 ) or 2e-1 . Using any combination of the above options. String literals are sequences of characters delimited by single ( 'hi' ) or double ( \"hi\" ) quotes. Quotes inside the string can be escaped by using a backslash, such as in 'A\\'s' or \"A\\\"s\" . Literal backslashes need to be escaped as well, such as in 'A\\\\B' . Special escape sequences are also provided: \\n for a newline, \\t for a horizontal tab and \\r for a carriage return, among others. Boolean literals use the true reserved keyword for the true Boolean value, and the false reserved keyword for the false Boolean value. Sequence and most other collections (except Map s) also have literals. Their format is T {e} , where T is the name of the type and e are zero or more elements, separated by commas. For instance, Sequence{} is the empty sequence, and Set {1, 2, 3} is the set of numbers between 1 and 3. Map literals are similar to the sequential collection literals, but their elements are of the form key = value . For instance, Map{\"a\" = 1, \"b\" = 2} is a map which has two keys, \"a\" and \"b\", which map to the integer values 1 and 2, respectively. Please note that, when defining an element such as 1 = 2 = 3 , the key would be 1 and the value would be the result of evaluating 2 = 3 (false). If you would like to use the result of the expression 1 = 2 as the key, you will need to enclose it in parentheses, such as in (1 = 2) = 3 . Feature Navigation \u00b6 Since EOL needs to manage models defined using object oriented modelling technologies, it provides expressions to navigate properties and invoke simple and declarative operations on objects. 
In terms of concrete syntax, . is used as a uniform operator to access a property of an object and to invoke an operation on it. The -> operator, which is used in OCL to invoke first-order logic operations on sets, has also been preserved for syntax compatibility reasons. In EOL, every operation can be invoked using either the . or the -> operator, with slightly different semantics to enable overriding the built-in operations. If the . operator is used, precedence is given to the user-defined operations, otherwise precedence is given to the built-in operations. For instance, the Any type defines a println() method that prints the string representation of an object to the standard output stream. In the listing below, the user has defined another parameterless println() operation in the context of Any. Therefore, the call to println() in line 1 will be dispatched to the user-defined println() operation defined in line 3. In its body, the operation uses the -> operator to invoke the built-in println() operation (line 4). \"Something\".println(); operation Any println() : Any { (\"Printing : \" + self)->println(); } Navigating to the parent/children of model elements EOL does not provide a technology-independent way of navigating to the parent/children of a model element. If you need to do this, you should use any methods provided by the underlying modelling platform. For example, as all elements of EMF models are instances of the EObject Java class, the me.eContainer() and me.eContents() method calls in EMF return the parent and children of element me respectively. Escaping Reserved Keywords \u00b6 Due to the variable nature of (meta-)models and the various domain-specific languages of Epsilon (including EOL itself), feature navigation calls may clash with reserved keywords, leading to a parsing error. Back-ticks can be used to escape such keywords. 
For example, if a model element contains a feature called operation , then this can be navigated as shown in the listing below. var op = modelElement.`operation`; Arithmetical and Comparison Operators \u00b6 EOL provides common operators for performing arithmetical computations and comparisons, illustrated in the following two tables respectively. Operator Description + Adds reals/integers and concatenates strings - Subtracts reals/integers - (unary). Returns the negative of a real/integer * Multiplies reals/integers / Divides reals/integers += Adds the r-value to the l-value -= Subtracts the r-value from the l-value *= Multiplies the l-value by the r-value /= Divides the l-value by the r-value ++ Increments the integer by one -- Decrements the integer by one Operator Description = Returns true if the left hand side equals the right hand side. In the case of primitive types (String, Boolean, Integer, Real) the operator compares the values; in the case of objects it returns true if the two expressions evaluate to the same object == Same as = <> Is the logical negation of the (=) operator != Same as <> > For reals/integers returns true if the left hand side is greater than the right hand side number < For reals/integers returns true if the left hand side is less than the right hand side number >= For reals/integers returns true if the left hand side is greater than or equal to the right hand side number <= For reals/integers returns true if the left hand side is less than or equal to the right hand side number Logical Operators \u00b6 EOL provides common operators for performing logical computations illustrated in the table below. Logical operations apply only to instances of the Boolean primitive type. Operator Precedence All logical operators in EOL have the same priority. This is in contrast to other languages like Java where e.g. and has a higher priority than or . 
As a result, while true || true && false returns true in Java, the equivalent true or true and false expression in EOL returns false . Default priorities can be overridden using brackets ( true or (true and false) in this case). Operator Description and Returns the logical conjunction of the two expressions or Returns the logical disjunction of the two expressions not Returns the logical negation of the expression implies Returns the logical implication of the two expressions (see below) xor Returns true if only one of the involved expressions evaluates to true and false otherwise The truth table for the implies logical operator is below. Left Right Result true true true true false false false true true false false true Ternary Operator \u00b6 As of version 2.0, EOL has a ternary operator which is a concise way of using if/else as an expression. The semantics and syntax are similar to Java, but can be used anywhere as an expression, not only in variable assignments or return statements. The listing below shows some examples of this 5 . Note that it is also possible to use the else keyword in place of the colon for separating the true and false expressions for greater clarity. As one would expect, the branches are evaluated lazily: only one of the branches is executed and returned as the result of the expression depending on the value of the Boolean expression before the question mark. var result = 2+2==4 ? \"Yes\" else \"No\"; return ((result == \"Yes\" ? 1 : 0) * 2).mod(2) == 0; Safe Navigation and Elvis Operator \u00b6 As of version 2.1, EOL supports safe null navigation ?. , which makes it more concise to chain feature call expressions without resorting to defensive null / isDefined() checks. In the following example, the variable result will be null , and the program won't crash since the safe navigation operator is used. 
var a = null; var result = a?.someProperty?.anotherProperty; The null-coalescing \"Elvis operator\" ( ?: ) can also be used to simplify null-check ternary expressions, as shown in the example below. var a = null; var b = \"result\"; var c = a != null ? a : b; var d = a ?: b; assert(c == d); As with the ternary operator, the Elvis operator can also be used anywhere an expression is expected, not just in assignments. As of Epsilon 2.2, there is also the ?= shortcut assignment operator. This is useful for reassigning a variable if it is null. In other words, a ?= b is equivalent to if (a == null) a = b; . var a = null; var b = \"result\"; a ?= b; assert(a == b); Enumerations \u00b6 EOL provides the # operator for accessing enumeration literals. For example, the VisibilityEnum#vk_public expression returns the value of the literal vk_public of the VisibilityEnum enumeration. For EMF metamodels, VisibilityEnum#vk_public.instance can also be used. Statements \u00b6 Variable Declaration Statement \u00b6 A variable declaration statement declares the name and (optionally) the type and initial value of a variable in an EOL program. If no type is explicitly declared, the variable is assumed to be of type Any . For variables of primitive type, declaration automatically creates an instance of the type with the default values presented in the table below. For non-primitive types, the user has to explicitly assign the value of the variable either by using the new keyword or by providing an initial value expression. If neither is done, the value of the variable is undefined. Variables in EOL are strongly-typed. Therefore, a variable can only be assigned values that conform to its type (or a sub-type of it). Type Default value Integer 0 Boolean false String \"\" Real 0.0 Scope \u00b6 The scope of variables in EOL is generally limited to the block of statements where they are defined, including any nested blocks. 
Nevertheless, as discussed in the sequel, there are cases in task-specific languages that build atop EOL where the scope of variables is expanded to other non-nested blocks as well. EOL also allows variable shadowing; that is, to define a variable with the same name in a nested block that overrides a variable defined in an outer block. The listing below provides an example of declaring and using variables. Line 1 defines a variable named i of type Integer and assigns it an initial value of 5 . Line 2 defines a variable named c of type Class (from model Uml) and creates a new instance of the type in the model (by using the new keyword). The commented-out assignment statement of line 3 would raise a runtime error since it would attempt to assign a String value to an Integer variable. The condition of line 4 returns true since the c variable has been initialized before. Line 5 defines a new variable also named i that is of type String and which overrides the Integer variable declared in line 1. Therefore, the assignment statement of line 6 is legitimate as it assigns a string value to a variable of type String. Finally, as the program has exited the scope of the if statement, the assignment statement of line 7 is also legitimate as it refers to the i variable defined in line 1. var i : Integer = 5; var c : new Uml!Class; //i = \"somevalue\"; if (c.isDefined()) { var i : String; i = \"somevalue\"; } i = 3; Assignment Statement \u00b6 The assignment statement is used to update the values of variables and properties of native objects and model elements. Variable Assignment \u00b6 When the left hand side of an assignment statement is a variable, the value of the variable is updated to the object to which the right hand side evaluates. If the type of the right hand side is not compatible (kind-of relationship) with the type of the variable, the assignment is illegal and a runtime error is raised. 
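For instance, a minimal sketch of this conformance rule (relying on Integer conforming to Real in the EOL type system):

```eol
var i : Integer = 5;
var r : Real = i;       // legal: Integer conforms to Real
// var s : String = i;  // illegal: would raise a runtime error
```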
Assignment to objects of primitive types is performed by value while assignment to instances of non-primitive types is performed by reference. For example, in the listing below, in line 1 the variable a is set to a new Class in the Uml model. In line 2, a new untyped variable b is declared and assigned the value of a. In line 3, the name of the class is updated to Customer and thus, line 4 prints Customer to the standard output stream. var a : new Uml!Class; var b = a; a.name = \"Customer\"; b.name.println(); On the other hand, in the listing below, in line 1 the String variable a is declared. In line 2 an untyped variable b is declared. In line 3, the value of a is changed to Customer (which is an instance of the primitive String type). This has no effect on b and thus line 4 prints an empty string to the standard output stream. var a : String; var b = a; a = \"Customer\"; b.println(); Native Object Property Assignment \u00b6 When the left hand side of the assignment is a property of a native object, deciding on the legality and providing the semantics of the assignment is delegated to the execution engine. For example, in a Java-based execution engine, given that x is a native object, the statement x.y = a may be interpreted as x.setY(a) or, if x is an instance of a map, as x.put(\"y\",a) . By contrast, in a C# implementation, it can be interpreted as x.y = a since the language natively supports properties in classes. Model Element Property Assignment \u00b6 When the left hand side of the assignment is a property of a model element, the model that owns the particular model element (accessible using the ModelRepository.getOwningModel() operation) is responsible for implementing the semantics of the assignment using its associated property setter. 
For example, if x is a model element, the statement x.y = a may be interpreted using the Java code of the first listing below if x belongs to an EMF-based model or using the Java code of the second listing if it belongs to an MDR-based model. EStructuralFeature feature = x.eClass().getEStructuralFeature(\"y\"); x.eSet(feature, a); StructuralFeature feature = findStructuralFeature(x.refClass(), \"y\"); x.refSetValue(feature, a); Special Assignment Statement \u00b6 In task-specific languages, an assignment operator with task-specific semantics is often required. Therefore, EOL provides an additional assignment operator. In standalone EOL, the operator has the same semantics as the primary assignment operator discussed above; however, task-specific languages can redefine its semantics to implement custom assignment behaviour. For example, consider the simple model-to-model transformation of the listing below where a simple object oriented model is transformed to a simple database model using an ETL transformation. rule Class2Table transform c : OO!Class to t : DB!Table { t.name = c.name; } rule Attribute2Column transform a : OO!Attribute to c : DB!Column { c.name = a.name; //c.owningTable = a.owningClass; c.owningTable ::= a.owningClass; } The Class2Table rule transforms a Class of the OO model into a Table in the DB model and sets the name of the table to be the same as the name of the class. Rule Attribute2Column transforms an Attribute from the OO model into a Column in the DB model. Besides setting its name (line 12), it also needs to define that the column belongs to the table which corresponds to the class that defines the source attribute. The commented-out assignment statement of line 13 cannot be used for this purpose since it would illegally attempt to assign the owningTable feature of the column to a model element of an inappropriate type ( OO!Class ). 
However, the special assignment operator in ETL has language-specific semantics , and thus in line 14 it assigns to the owningTable feature not the class that owns the attribute but its corresponding table (calculated using the Class2Table rule) in the DB model. If Statement \u00b6 As in most programming languages, an if statement consists of a condition, a block of statements that is executed if the condition is satisfied and (optionally) a block of statements that is executed otherwise. As an example, in the listing below, if variable a holds a value that is greater than 0, the statement of line 3 is executed, otherwise the statement of line 5 is executed. if (a > 0) { \"A is greater than 0\".println(); } else { \"A is less than or equal to 0\".println(); } Switch Statement \u00b6 A switch statement consists of an expression and a set of cases, and can be used to implement multi-branching. Unlike Java/C, switch in EOL doesn't by default fall through to the next case after a successful one. Therefore, it is not necessary to add a break statement after each case. To enable falling through to all subsequent cases, you can use the continue statement. Also, unlike Java/C, the switch expression can return anything (not only integers). As an example, when executed, the code in the listing below prints 2 while the code in the following listing prints 2,3,default . var i = \"2\"; switch (i) { case \"1\" : \"1\".println(); case \"2\" : \"2\".println(); case \"3\" : \"3\".println(); default : \"default\".println(); } var i = \"2\"; switch (i) { case \"1\" : \"1\".println(); case \"2\" : \"2\".println(); continue; case \"3\" : \"3\".println(); default : \"default\".println(); } While Statement \u00b6 A while statement consists of a condition and a block of statements which are executed as long as the condition is satisfied. For example, in the listing below, the body of the while statement is executed 5 times, printing the numbers 0 to 4 to the output console. 
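Such a loop can be sketched as follows:

```eol
var i : Integer = 0;
while (i < 5) {
    i.println(); // prints 0 to 4, one number per line
    i = i + 1;
}
```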
Inside the body of a while statement, the built-in read-only loopCount integer variable holds the number of times the innermost loop has been executed so far (including the current iteration). Right after entering the loop for the first time and before running the first statement in its body, loopCount is set to 1, and it is incremented after each following iteration. var i : Integer = 0; while (i < 5) { // both lines print the same thing i.println(); (loopCount - 1).println(); // increment the counter i = i+1; } For Statement \u00b6 In EOL, for statements are used to iterate over the contents of collections. A for statement defines a typed iterator and an iterated collection as well as a block of statements that is executed for every item in the collection that has a kind-of relationship with the type defined by the iterator. As with the majority of programming languages, modifying a collection while iterating it raises a runtime error. To avoid this situation, users can use the clone() built-in operation of the Collection type. var col : Sequence = Sequence{\"a\", 1, 2, 2.5, \"b\"}; for (r : Real in col) { r.print(); if (hasMore){\",\".print();} } Inside the body of a for statement, two built-in read-only variables are visible: the loopCount integer variable and the hasMore boolean variable. hasMore is used to determine if there are more items in the collection for which the loop body will be executed. For example, in the listing above, the heterogeneous Sequence col is defined, containing two strings ( a and b ), two integers ( 1 , 2 ) and one real ( 2.5 ). The for loop of line 2 only iterates through the items of the collection that are of kind Real and therefore prints 1,2,2.5 to the standard output stream. Break, BreakAll and Continue Statements \u00b6 To exit from for and while loops on demand, EOL provides the break and breakAll statements. The break statement exits the innermost loop while the breakAll statement exits all outer loops as well. 
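For instance, a small hypothetical sketch of break:

```eol
for (i in Sequence{1..10}) {
    if (i > 3) { break; } // exits the innermost (and only) loop
    i.println(); // prints 1, 2 and 3
}
```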
On the other hand, to skip the current iteration of a loop and proceed with the next one, EOL provides the continue statement. For example, the program in the listing below prints 2,1 3,1 to the standard output stream. for (i in Sequence{1..3}) { if (i = 1) {continue;} for (j in Sequence{1..4}) { if (j = 2) {break;} if (j = 3) {breakAll;} (i + \",\" + j).println(); } } Throw Statement \u00b6 EOL provides the throw statement for throwing a value as a Java exception. This is especially useful when invoking EOL scripts from Java code: by catching and processing the exception, the Java code may be able to automatically handle the problem without requiring user input. Any value can be thrown, as shown in the listing below where we throw a number and a string. throw 42; throw \"Error!\"; Transaction Statement \u00b6 The underlying EMC layer provides support for transactions in models. To utilize this feature, EOL provides the transaction statement. A transaction statement (optionally) defines the models that participate in the transaction. If no models are defined, it is assumed that all the models that are accessible from the enclosing program participate. When the statement is executed, a transaction is started on each participating model. If no errors are raised during the execution of the contained statements, any changes made to model elements are committed. On the other hand, if an error is raised, the transaction is rolled back and any changes made to the models in the context of the transaction are undone. The user can also use the abort statement to explicitly exit a transaction and roll back any changes done in its context. In the listing below, an example of using this feature in a simulation problem is illustrated. 
var system = System.allInstances.first(); for (i in Sequence {1..100}) { transaction { var failedProcessors : Set; while (failedProcessors.size() < 10) { failedProcessors.add(system.processors.random()); } for (processor in failedProcessors) { processor.failed = true; processor.moveTasksElsewhere(); } system.evaluateAvailability(); abort; } } In this problem, a system consists of a number of processors. A processor manages some tasks and can fail at any time. The EOL program in the listing above performs 100 simulation steps, in every one of which 10 random processors from the model (lines 7-11) are marked as failed by setting their failed property to true (line 14). Then, the tasks that the failed processors manage are moved to other processors (line 15). Finally, the availability of the system in this state is evaluated. After a simulation step, the state of the model has been drastically changed since processors have failed and tasks have been relocated. To be able to restore the model to its original state after every simulation step, each step is executed in the context of a transaction which is explicitly aborted (line 20) after evaluating the availability of the system. Therefore, after each simulation step, the model is restored to its original state for the next step to be executed. Extended Properties \u00b6 Quite often, during a model management operation it is necessary to associate model elements with information that is not supported by the metamodel they conform to. For instance, the EOL program in the listing below calculates the depth of each Tree element in a model that conforms to the Tree metamodel displayed below. classDiagram class Tree { +label: String +parent: Tree +children: Tree[*] } Tree -- Tree As the Tree metamodel doesn't support a depth property in the Tree metaclass, each Tree has to be associated with its calculated depth using the depths map defined in line 1. 
Another approach would be to extend the Tree metamodel to support the desired depth property; however, applying this technique every time an additional property is needed for some model management operation would quickly pollute the metamodel with properties of secondary importance. var depths = new Map; for (n in Tree.allInstances.select(t|not t.parent.isDefined())) { n.setDepth(0); } for (n in Tree.allInstances) { (n.label + \" \" + depths.get(n)).println(); } operation Tree setDepth(depth : Integer) { depths.put(self,depth); for (c in self.children) { c.setDepth(depth + 1); } } To simplify the code required in such cases, EOL provides the concept of extended properties . In terms of concrete syntax, an extended property is a normal property, the name of which starts with the tilde character ( ~ ). With regard to its execution semantics, the first time the value of an extended property of an object is assigned, the property is created and associated with the object. Then, the property can be accessed as a normal property. If an extended property is accessed before it is assigned, it returns null . The listing below demonstrates using a ~depth extended property to eliminate the need for the depths map used in the listing above. for (n in Tree.allInstances.select(t|not t.parent.isDefined())) { n.setDepth(0); } for (n in Tree.allInstances) { (n.label + \" \" + n.~depth).println(); } operation Tree setDepth(depth : Integer) { self.~depth = depth; for (c in self.children) { c.setDepth(depth + 1); } } Context-Independent User Input \u00b6 A common assumption in model management languages is that model management tasks are only executed in a batch manner without human intervention. However, as demonstrated in the sequel, it is often useful for the user to provide feedback that can precisely drive the execution of a model management operation. 
Model management operations can be executed in a number of runtime environments in each of which a different user-input method is more appropriate. For instance when executed in the context of an IDE (such as Eclipse) visual dialogs are preferable, while when executed in the context of a server or from within an ANT workflow, a command-line user input interface is deemed more suitable. To abstract away from the different runtime environments and enable the user to specify user interaction statements uniformly and regardless of the runtime context, EOL provides the IUserInput interface that can be realized in different ways according to the execution environment and attached to the runtime context via the IEolContext.setUserInput(IUserInput userInput) method. The IUserInput specifies the methods presented in the table below. Signature Description inform(message : String) Displays the specified message to the user confirm(message : String, [default : Boolean]) : Boolean Prompts the user to confirm if the condition described by the message holds prompt(message : String, [default : String]) : String Prompts the user for a string in response to the message promptInteger(message : String, [default : Integer]) : Integer Prompts the user for an Integer promptReal(message : String, [default : Real]) : Real Prompts the user for a Real choose(message : String, options : Sequence, [default : Any]) : Any Prompts the user to select one of the options chooseMany(message : String, options : Sequence, [default : Sequence]) : Sequence Prompts the user to select one or more of the options As displayed above, all the methods of the IUserInput interface accept a default parameter. The purpose of this parameter is dual. 
First, it enables the designer of the model management program to prompt the user with the most likely value as a default choice; second, it enables a concrete implementation of the interface ( UnattendedExecutionUserInput ) that returns the default values without prompting the user at all and can thus be used for unattended execution of interactive Epsilon programs. The figures below demonstrate the interfaces through which input is requested from the user when the example statement System.user.promptInteger(\"Please enter a number\", 1); is executed using an Eclipse-based and a command-line-based IUserInput implementation respectively. User-input facilities have been found to be particularly useful across model management tasks. Such facilities are essential for performing operations on live models, such as model validation and model refactoring, but can also be useful in model comparison, where marginal matching decisions can be delegated to the user, and in model transformation, where the user can interactively specify the elements that will be transformed into corresponding elements in the target model. Although the EOL parser permits loose statements (e.g. not contained in operations) between/after operations, these are ignored at runtime. \u21a9 Parameters within square brackets are optional \u21a9 http://download.oracle.com/javase/8/docs/api/java/util/Formatter.html#syntax \u21a9 https://docs.oracle.com/javase/8/docs/api/java/util/stream/Stream.html \u21a9 For further examples of the ternary operator, see https://git.eclipse.org/c/epsilon/org.eclipse.epsilon.git/tree/tests/org.eclipse.epsilon.eol.engine.test.acceptance/src/org/eclipse/epsilon/eol/engine/test/acceptance/TernaryTests.eol \u21a9","title":"Object language (EOL)"},{"location":"doc/eol/#the-epsilon-object-language-eol","text":"The primary aim of EOL is to provide a reusable set of common model management facilities, atop which task-specific languages can be implemented.
However, EOL can also be used as a general-purpose standalone model management language for automating tasks that do not fall into the patterns targeted by task-specific languages. This section presents the syntax and semantics of the language using a combination of abstract syntax diagrams, concrete syntax examples and informal discussion.","title":"The Epsilon Object Language (EOL)"},{"location":"doc/eol/#module-organization","text":"In this section, the syntax of EOL is presented in a top-down manner. EOL programs are organized in modules . Each module defines a body and a number of operations . The body is a block of statements that are evaluated when the module is executed 1 . Each operation defines the kind of objects on which it is applicable ( context ), a name , a set of parameters and optionally a return type . Modules can also import other modules using import statements and access their operations, as shown in the listing below. // file imported.eol operation hello() { \"Hello world!\".println(); } // file importer.eol // We can use relative/absolute paths or platform:/ URIs import \"imported.eol\"; hello(); // main body // ... more operations could be placed here ...
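Imported operations can also be context operations, so an imported module can extend the types usable in the importing module. A small sketch in the same style (file names hypothetical):

```eol
// file string-utils.eol (hypothetical)
operation String shout() : String {
    return self.toUpperCase() + \"!\";
}

// file main.eol (hypothetical)
import \"string-utils.eol\";
\"hello\".shout().println(); // prints HELLO!
```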
classDiagram class EolModule { +main:StatementBlock } class ImportStatement { +imported:EolModule } class Operation { +name: String +context: EolType +parameters: Parameter[*] +returnType: EolType } class ExecutableAnnotation { +expression: Expression } class SimpleAnnotation { +values: String[*] } EolModule -- ImportStatement: * EolModule -- Operation: operations * Operation -- Annotation: annotations * Operation -- StatementBlock: body EolModule -- StatementBlock: main StatementBlock -- Statement: statements * Annotation <|-- ExecutableAnnotation Annotation <|-- SimpleAnnotation","title":"Module Organization"},{"location":"doc/eol/#user-defined-operations","text":"In mainstream object oriented languages such as Java and C++, operations are defined inside classes and can be invoked on instances of those classes. EOL on the other hand is not object-oriented in the sense that it does not define classes itself, but nevertheless needs to manage objects of types defined externally to it (e.g. in metamodels). By defining the context-type of an operation explicitly, the operation can be called on instances of the type as if it was natively defined by the type. For example, consider the code excerpts displayed in the listings below. In the first listing, the operations add1 and add2 are defined in the context of the built-in Integer type, which is specified before their names. Therefore, they can be invoked in line 1 using the 1.add1().add2() expression: the context (the integer 1 ) will be assigned to the special variable self . On the other hand, in the second listing where no context is defined, they have to be invoked in a nested manner which follows an in-to-out direction instead of the left to right direction used by the former excerpt. As complex model queries often involve invoking multiple properties and operations, this technique is particularly beneficial to the overall readability of the code. 
1.add1().add2().println(); operation Integer add1() : Integer { return self + 1; } operation Integer add2() : Integer { return self + 2; } add2(add1(1)).println(); operation add1(base : Integer) : Integer { return base + 1; } operation add2(base : Integer) : Integer { return base + 2; } EOL supports polymorphic operations using a runtime dispatch mechanism. Multiple operations with the same name and parameters can be defined, each defining a distinct context type. For example, in the listing below, the statement in line 1 invokes the test operation defined in line 4, while the statement in line 2 invokes the test operation defined in line 8. \"1\".test(); 1.test(); operation String test() { (self + \" is a string\").println(); } operation Integer test() { (self + \" is an integer\").println(); }","title":"User-Defined Operations"},{"location":"doc/eol/#annotations","text":"EOL supports two types of annotations: simple and executable. A simple annotation specifies a name and a set of String values, while an executable annotation specifies a name and an expression. The concrete syntaxes of simple and executable annotations are displayed in the listing below. // Simple annotation @name value(,value) // Executable annotation $name expression Several examples for simple annotations are shown in the listing below. Examples for executable annotations will be given in the following sections. @colors red @colors red, blue @colors red, blue, green In stand-alone EOL, annotations are supported only in the context of operations; however, as discussed in the sequel, task-specific languages also make use of annotations in their constructs, each with task-specific semantics.
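To illustrate placement, a sketch of a simple annotation attached to a user-defined operation (the annotation name and values are illustrative and carry no built-in semantics in stand-alone EOL):

```eol
// A simple annotation decorating an operation (name/values illustrative)
@colors red, blue
operation describe() : String {
    return \"an annotated operation\";
}
```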
EOL operations support three particular annotations: the pre and post executable annotations for specifying pre- and post-conditions, and the cached simple annotation, which are discussed below.","title":"Annotations"},{"location":"doc/eol/#prepost-conditions-in-user-defined-operations","text":"A number of pre and post executable annotations can be attached to EOL operations to specify the pre- and post-conditions of the operation. When an operation is invoked, before its body is evaluated, the expressions of the pre annotations are evaluated. If all of them return true , the body of the operation is executed; otherwise, an error is raised. Similarly, once the body of the operation has been executed, the expressions of the post annotations of the operation are executed to ensure that the operation has had the desired effects. Pre and post annotations can access all the variables in the parent scope, as well as the parameters of the operation and the object on which the operation is invoked (through the self variable). Moreover, in post annotations, the returned value of the operation is accessible through the built-in _result variable. An example of using pre and post conditions in EOL appears below. 1.add(2); 1.add(-1); $pre i > 0 $post _result > self operation Integer add(i : Integer) : Integer { return self + i; } In line 4, the add operation defines a pre-condition stating that the parameter i must be a positive number. In line 5, the operation defines that the result of the operation ( _result ) must be greater than the number on which it was invoked ( self ).
Thus, when executed in the context of the statement in line 1 the operation succeeds, while when executed in the context of the statement in line 2, the pre-condition is not satisfied and an error is raised.","title":"Pre/post conditions in user-defined operations"},{"location":"doc/eol/#operation-result-caching","text":"EOL supports caching the results of parameter-less operations using the @cached simple annotation. In the following example, the Fibonacci number of a given Integer is calculated using the fibonacci recursive operation displayed in the listing below. Since the fibonacci operation is declared as cached , it is only executed once for each distinct Integer and subsequent calls on the same target return the cached result. Therefore, when invoked in line 1, the body of the operation is called 16 times. By contrast, if no @cached annotation was specified, the body of the operation would be called recursively 1973 times. This feature is particularly useful for performing queries on large models and caching their results without needing to introduce explicit variables that store the cached results. It is worth noting that caching works by reference , which means that all clients of a cached method for a given context will receive the same returned object. As such, if the first client modifies the returned object in some way (e.g. sets a property in the case of an object or adds an element in the case of the collection), subsequent clients of the method for the same context will receive the modified object/collection. 15.fibonacci().println(); @cached operation Integer fibonacci() : Integer { if (self = 1 or self = 0) { return 1; } else { return (self-1).fibonacci() + (self-2).fibonacci(); } }","title":"Operation Result Caching"},{"location":"doc/eol/#types","text":"As is the case for most programming languages, EOL defines a built-in system of types, illustrated in the figure below. 
The Any type, inspired by the OclAny type of OCL, is the basis of all types in EOL including Collection types. classDiagram class ModelElementType { -model: String -type: String } class Native { -implementation: String } ModelElementType --|> Any Any <|-- Native Any <|-- Collection Any <|-- Map Collection <|-- Bag Collection <|-- Set Collection <|-- OrderedSet Collection <|-- Sequence PrimitiveType --|> Any PrimitiveType <|-- Integer PrimitiveType <|-- String PrimitiveType <|-- Boolean PrimitiveType <|-- Real The operations supported by instances of the Any type are outlined in the table below 2 . Signature Description asBag() : Bag Returns a new Bag containing the object asBoolean() : Boolean Returns a Boolean based on the string representation of the object. If the string representation is not of an acceptable format, an error is raised asInteger() : Integer Returns an Integer based on the string representation of the object. If the string representation is not of an acceptable format, an error is raised asOrderedSet() : OrderedSet Returns a new OrderedSet containing the object asReal() : Real Returns a Real based on the string representation of the object. If the string representation is not of an acceptable format, an error is raised asDouble() : Double Returns a Java Double based on the string representation of the object. If the string representation is not of an acceptable format, an error is raised asFloat() : Float Returns a Java Float based on the string representation of the object. If the string representation is not of an acceptable format, an error is raised asSequence() : Sequence Returns a new Sequence containing the object asSet() : Set Returns a new Set containing the object asString() : String Returns a string representation of the object err([prefix : String]) : Any Prints a string representation of the object on which it is invoked to the error stream prefixed with the optional prefix string and returns the object on which it was invoked. 
In this way, the err operation can be used for debugging purposes in a non-invasive manner errln([prefix : String]) : Any Has the same effects as the err operation but also produces a new line in the output stream. format([pattern : String]) : String Uses the provided pattern to form a String representation of the object on which the method is invoked. The pattern argument must conform to the format string syntax defined by Java 3 . hasProperty(name : String) : Boolean Returns true if the object has a property with the specified name or false otherwise ifUndefined(alt : Any) : Any If the object is undefined, it returns alt else it returns the object isDefined() : Boolean Returns true if the object is defined and false otherwise isKindOf(type : Type) : Boolean Returns true if the object is of the given type or one of its subtypes and false otherwise isTypeOf(type : Type) : Boolean Returns true if the object is of the given type and false otherwise isUndefined() : Boolean Returns true if the object is undefined and false otherwise owningModel() : Model Returns the model that contains this object or an undefined value otherwise print([prefix : String]) : Any Prints a string representation of the object on which it is invoked to the regular output stream, prefixed with the optional prefix string and returns the object on which it was invoked. In this way, the print operation can be used for debugging purposes in a non-invasive manner println([prefix : String]) : Any Has the same effects as the print operation but also produces a new line in the output stream. type() : Type Returns the type of the object.","title":"Types"},{"location":"doc/eol/#primitive-types","text":"EOL provides four primitive types: String, Integer, Real and Boolean. The String type represents a finite sequence of characters and supports the following operations which can be invoked on its instances. 
Signature Description characterAt(index : Integer) : String Returns the character at the specified index concat(str : String) : String Returns a concatenated form of the string with the str parameter endsWith(str : String) : Boolean Returns true iff the string ends with str escapeXml() : String Returns a new string with escaped XML-reserved characters firstToLowerCase() : String Returns a new string, the first character of which has been converted to lower case ftlc() : String Alias for firstToLowerCase() firstToUpperCase() : String Returns a new string, the first character of which has been converted to upper case ftuc() : String Alias for firstToUpperCase() isInteger() : Boolean Returns true iff the string is an integer isReal() : Boolean Returns true iff the string is a real number isSubstringOf(str : String) : Boolean Returns true iff the string the operation is invoked on is a substring of str length() : Integer Returns the number of characters in the string matches(reg : String) : Boolean Returns true if there are occurrences of the regular expression reg in the string pad(length : Integer, padding : String, right : Boolean) : String Pads the string up to the specified length with specified padding (e.g.
\"foo\".pad(5, \"*\", true) returns \"foo**\" ) replace(source : String, target : String) : String Returns a new string in which all instances of source have been replaced with instances of target split(reg : String) : Sequence(String) Splits the string using as a delimiter the provided regular expression, reg , and returns a sequence containing the parts startsWith(str : String) : Boolean Returns true iff the string starts with str substring(index : Integer) : String Returns a sub-string of the string starting from the specified index and extending to the end of the original string substring(startIndex : Integer, endIndex : Integer) : String Returns a sub-string of the string starting from the specified startIndex and ending at endIndex toCharSequence() : Sequence(String) Returns a sequence containing all the characters of the string toLowerCase() : String Returns a new string where all the characters have been converted to lower case toUpperCase() : String Returns a new string where all the characters have been converted to upper case trim() : String Returns a trimmed copy of the string The Real type represents real numbers and provides the following operations. Signature Description abs() : Real Returns the absolute value of the real ceiling() : Integer Returns the nearest Integer that is greater than the real floor() : Integer Returns the nearest Integer that is less than the real log() : Real Returns the natural logarithm of the real log10() : Real Returns the 10-based logarithm of the real max(other : Real) : Real Returns the maximum of the two reals min(other : Real) : Real Returns the minimum of the two reals pow(exponent : Real) : Real Returns the real to the power of exponent round() : Integer Rounds the real to the nearest Integer The Integer type represents natural numbers and negatives and extends the Real primitive type. 
It also defines the following operations: Signature Description iota(end : Integer, step : Integer) : Sequence(Integer) Returns a sequence of integers up to end using the specified step (e.g. 1.iota(10,2) returns Sequence{1,3,5,7,9}) mod(divisor : Integer) : Integer Returns the remainder of dividing the integer by the divisor to(other : Integer) : Sequence(Integer) Returns a sequence of integers (e.g. 1.to(5) returns Sequence{1,2,3,4,5}) toBinary() : String Returns the binary representation of the integer (e.g. 6.toBinary() returns \"110\") toHex() : String Returns the hexadecimal representation of the integer (e.g. 42.toHex() returns \"2a\") Finally, the Boolean type represents true/false states and provides no additional operations to those provided by the base Any type.","title":"Primitive Types"},{"location":"doc/eol/#collections-and-maps","text":"EOL provides four types of collections and a Map type. The Bag type represents non-unique, unordered collections and implements the java.util.Collection interface; the Sequence type represents non-unique, ordered collections and implements the java.util.List interface; the Set type represents unique, unordered collections and implements the java.util.Set interface; and the OrderedSet type represents unique, ordered collections. Since version 2.0, there are also two concurrent collection types, which can safely be modified from multiple threads. These are ConcurrentBag and ConcurrentSet , which are thread-safe variants of the Bag and Set types respectively. All collection types inherit from the abstract Collection type. Apart from simple operations, EOL also supports logic operations on collections. The following operations (along with any operations declared on the java.util.Collection interface) apply to all types of collections: Signature Description add(item : Any) : Boolean Adds an item to the collection. If the collection is a set, addition of duplicate items has no effect.
Returns true if the collection increased in size: this is always the case for bags and sequences, and for sets and ordered sets it is true if the element was not part of the collection before. addAll(col : Collection) : Boolean Adds all the items of the col argument to the collection. If the collection is a set, it only adds items that do not already exist in the collection. Returns true if this collection changed as a result of the call asBag() Returns a Bag that contains the same elements as the collection. asOrderedSet() Returns a duplicate-free OrderedSet that contains the same elements as the collection. asSequence() Returns a Sequence that contains the same elements as the collection. asSet() Returns a duplicate-free Set that contains the same elements as the collection. clear() Empties the collection clone() : Collection Returns a new collection of the same type containing the same items with the original collection concat() : String Returns the string created by converting each element of the collection to a string concat(separator : String) : String Returns the string created by converting each element of the collection to a string, using the given argument as a separator count(item : Any) : Integer Returns the number of times the item exists in the collection excludes(item : Any) : Boolean Returns true if the collection excludes the item excludesAll(col : Collection) : Boolean Returns true if the collection excludes all the items of collection col excluding(item : Any) : Collection Returns a new collection that excludes the item -- unlike the remove() operation that removes the item from the collection itself excludingAll(col : Collection) : Collection Returns a new collection that excludes all the elements of the col collection flatten() : Collection Recursively flattens all items that are of collection type and returns a new collection where no item is a collection itself includes(item : Any) : Boolean Returns true if the collection includes the item 
includesAll(col : Collection) : Boolean Returns true if the collection includes all the items of collection col including(item : Any) : Collection Returns a new collection that also contains the item -- unlike the add() operation that adds the item to the collection itself includingAll(col : Collection) : Collection Returns a new collection that is a union of the two collections. The type of the returned collection (i.e. Bag, Sequence, Set, OrderedSet) is same as the type of the collection on which the operation is invoked isEmpty() : Boolean Returns true if the collection does not contain any elements and false otherwise min() : Real Returns the minimum of all reals/integers in the collection, or 0 if it is empty min(default : Real) : Real Returns the minimum of all reals/integers in the collection, or the default value if it is empty max() : Real Returns the maximum of all reals/integers in the collection, or 0 if it is empty max(default : Real) : Real Returns the maximum of all reals/integers in the collection, or the default value if it is empty notEmpty() : Boolean Returns true if the collection contains any elements and false otherwise powerset() : Set Returns the set of all subsets of the collection product() : Real Returns the product of all reals/integers in the collection random() : Any Returns a random item from the collection remove(item : Any) : Boolean Removes an item from the collection. Returns true if the collection contained the specified element removeAll(col : Collection) : Boolean Removes all the items of col from the collection. Returns true if the collection changed as a result of the call size() : Integer Returns the number of items the collection contains sum() : Real Returns the sum of all reals/integers in the collection The following operations apply to ordered collection types (i.e. 
Sequence and OrderedSet): Signature Description at(index : Integer) : Any Returns the item of the collection at the specified index first() : Any Returns the first item of the collection fourth() : Any Returns the fourth item of the collection indexOf(item : Any) : Integer Returns the index of the item in the collection or -1 if it does not exist invert() : Collection Returns an inverted copy of the collection last() : Any Returns the last item of the collection removeAt(index : Integer) : Any Removes and returns the item at the specified index. second() : Any Returns the second item of the collection third() : Any Returns the third item of the collection Also, EOL collections support the following first-order operations. Apart from aggregate and closure , all of these operations have a parallel variant which can take advantage of multiple cores to improve performance. All computations contained in these operations are assumed to be free from side-effects (i.e. do not mutate global variables). Aside from the following built-in first-order operations which are evaluated eagerly, all Collection types in the Java implementation of EOL support Streams. This allows for chains of queries and transformations on collections to be evaluated more efficiently. A stream can be obtained by calling the stream() method on the collection. The API is defined by the Java standard library 4 . 
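As a sketch of the stream-based style (assuming, as in recent Epsilon versions, that EOL lambdas can be passed where Java functional interfaces are expected):

```eol
// Chaining stream operations on an EOL Sequence (sketch)
var Collectors = Native(\"java.util.stream.Collectors\");
var squaresOfEvens = Sequence{1..10}.stream()
    .filter(n | n.mod(2) = 0)  // keep even numbers
    .map(n | n * n)            // square each one
    .collect(Collectors.toList());
squaresOfEvens.println();
```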
Signature Description atLeastNMatch(iterator : Type | condition, n : Integer) : Boolean Returns true if there are n or more items in the collection that satisfy the condition atMostNMatch(iterator : Type | condition, n : Integer) : Boolean Returns true if there are n or fewer items in the collection that satisfy the condition aggregate(iterator : Type | keyExpression, valueExpression) : Map Returns a map containing key-value pairs produced by evaluating the key and value expressions on each item of the collection that is of the specified type closure(iterator : Type | expression) : Collection Returns a collection containing the results of evaluating the transitive closure of the results produced by the expression on each item of the collection that is of the specified type. For example, if t is a tree model element, t.closure(it|it.children) will return all its descendants collect(iterator : Type | expression) : Collection Returns a collection containing the results of evaluating the expression on each item of the collection that is of the specified type count(iterator : Type | condition) : Integer Returns the number of elements in the collection that satisfy the condition exists(iterator : Type | condition) : Boolean Returns true if there exists at least one item in the collection that satisfies the condition forAll(iterator : Type | condition) : Boolean Returns true if all items in the collection satisfy the condition nMatch(iterator : Type | condition, n : Integer) : Boolean Returns true if there are exactly n items in the collection that satisfy the condition none(iterator : Type | condition) : Boolean Returns true if there are no items in the collection that satisfy the condition one(iterator : Type | condition) : Boolean Returns true if there exists exactly one item in the collection that satisfies the condition reject(iterator : Type | condition) : Collection Returns a sub-collection containing only items of the specified type that do not satisfy the 
condition rejectOne(iterator : Type | condition) : Collection Returns a sub-collection containing all elements except the first element which does not satisfy the condition select(iterator : Type | condition) : Collection Returns a sub-collection containing only items of the specified type that satisfy the condition selectByKind(Type) : Collection Returns a sub-collection containing only items of the specified type and subtypes selectByType(Type) : Collection Returns a sub-collection containing only items of exactly the specified type (excluding subtypes) selectOne(iterator : Type | condition) : Any Returns any element that satisfies the condition sortBy(iterator: Type | expression) : Collection Returns a copy of the collection sorted by the results of evaluating the expression on each item of the collection that conforms to the iterator type. The expression should return either an Integer, a String or an object that is an instance of Comparable. The ordering is calculated as follows: for integers, smaller to greater; for Strings, as defined by the compareTo method of Java strings; for Comparable objects, according to the semantics of the type's compareTo method implementation. The Map type (which implements the java.util.Map interface) represents a set of key-value pairs in which the keys are unique. Since version 2.0, there is also a thread-safe ConcurrentMap type, which implements the java.util.concurrent.ConcurrentMap interface. The following operations are supported: Signature Description clear() Clears the map containsKey(key : Any) : Boolean Returns true if the map contains the specified key containsValue(value : Any) : Boolean Returns true if this map maps one or more keys to the specified value. get(key : Any) : Any Returns the value for the specified key isEmpty() : Boolean Returns true if the map contains no key-value mappings. keySet() : Set Returns the keys of the map put(key : Any, value : Any) Adds the key-value pair to the map.
If the map already contains the same key, the value is overwritten putAll(map : Map) Copies all of the mappings from the specified map to this map. remove(key : Any) : Any Removes the mapping for the specified key from this map if present. Returns the previous value associated with key. size() : Integer Returns the number of key-value mappings in this map. values() : Bag Returns the values of the map","title":"Collections and Maps"},{"location":"doc/eol/#tuples","text":"Since version 2.2, EOL supports Tuples, which can be used to compose arbitrary data structures on-the-fly. A Tuple in EOL behaves like a Map<String, Object> , except that the values of the map can be accessed using literal property call expressions. There are three ways to instantiate a tuple, as shown below. // After construction var alice = new Tuple; alice.name = \"Alice\"; alice.age = 32; // During construction var bob = new Tuple(name = \"Bob\", age = 28); // Map Literal var charlie = Tuple{\"name\" = \"Charlie\", \"age\" = 36}; If a non-existent property on a Tuple is accessed, an exception is thrown. var p = new Tuple(name = \"Alice\", age = 32); p.name.substring(0, 3); // \"Ali\" p.age; // 32 p.occupation.isDefined(); // false p.occupation.toUpperCase(); // Property 'occupation' not found","title":"Tuples"},{"location":"doc/eol/#native-types","text":"As discussed earlier, while the purpose of EOL is to provide significant expressive power to enable users to manage models at a high level of abstraction, it is not intended to be a general-purpose programming language. Therefore, there may be cases where users need to implement some functionality that is either not efficiently supported by the EOL runtime (e.g. complex mathematical computations) or that EOL does not support at all (e.g. developing user interfaces, accessing databases). To overcome this problem, EOL enables users to create objects of the underlying programming environment by using native types. 
A native type specifies an implementation property that indicates the unique identifier for an underlying platform type. For instance, in a Java implementation of EOL the user can instantiate and use a Java class via its class identifier. Thus, the EOL excerpt in the listing below creates a Java window (Swing JFrame) and uses its methods to change its title and dimensions and make it visible. var frame = new Native(\"javax.swing.JFrame\"); frame.title = \"Opened with EOL\"; frame.setBounds(100,100,300,200); frame.visible = true; To pass arguments to the constructor of a native type, a parameter list must be added, such as that in the listing below. var file = new Native(\"java.io.File\")(\"myfile.txt\"); file.absolutePath.println(); Static types can also be referenced in EOL and stored in a variable for convenience, as shown below. var Collectors = Native(\"java.util.stream.Collectors\");","title":"Native Types"},{"location":"doc/eol/#model-element-types","text":"A model element type represents a meta-level classifier for model elements. Epsilon intentionally refrains from defining more details about the meaning of a model element type, to be able to support diverse modelling technologies where a type has different semantics. For instance an Ecore EClass, an XSD complex type and a Java class can all be regarded as model element types according to the implementation of the underlying modelling framework. Info As EOL is decoupled from modelling technologies (e.g. EMF, Simulink), through Epsilon's Model Connectivity Layer , we refrain from referring to specific modelling technologies in this section as much as possible. In case of multiple models, as well as the name of the type, the name of the model is also required to resolve a particular type since different models may contain elements of homonymous but different model element types. In case a model defines more than one type with the same name (e.g. 
in different packages), a fully qualified type name must be provided. In terms of concrete syntax, inspired by ATL , the ! character is used to separate the name of the type from the name of the model it is defined in. For instance, Ma!A represents the type A of model Ma . Also, to support modelling technologies that provide hierarchical grouping of types (e.g. using packages) the :: notation is used to separate packages from classes. A model element type supports the following operations: Signature Description all() : Set Alias for allOfKind() (for syntax-compactness purposes) allInstances() : Set Alias for allOfKind() (for compatibility with OCL) allOfKind() : Set Returns all the elements in the model that are instances either of the type itself or of one of its subtypes allOfType() : Set Returns all the elements in the model that are instances of the type createInstance() : Any Creates an instance of the type in the model. The same can be achieved using the new operator (see below) isInstantiable() : Boolean Returns true if the type is instantiable (i.e. non-abstract) As an example of the concrete syntax, the listing below retrieves all the instances of the Class type (including instances of its subtypes) defined in the Core package of the UML 1.4 metamodel that are contained in the model named UML14. UML14!Core::Foundation::Class.allInstances();","title":"Model Element Types"},{"location":"doc/eol/#creating-and-deleting-model-elements","text":"EOL provides the new and delete operators for creating and deleting model elements as shown below. The new operator is an alias for the createInstance() method above, and can also be used to create instances of primitive and native types (i.e. Java classes). 
var t : new Tree; // Creates a new instance of type Tree var p : new Source!Person; // Creates a new Person in model Source delete t; // Deletes the element created in line 1","title":"Creating and Deleting Model Elements"},{"location":"doc/eol/#expressions","text":"","title":"Expressions"},{"location":"doc/eol/#literal-values","text":"EOL provides special syntax constructs to create instances of each of the built-in types: Integer literals are defined by using one or more decimal digits (such as 42 or 999 ). Optionally, long integers (with the same precision as a Java Long ) can be produced by adding a \"l\" suffix, such as 42l . Real literals are defined by: Adding a decimal separator and non-empty fractional part to the integer part, such as 42.0 or 3.14 . Please note that .2 and 2. are not valid literals. Adding a floating point suffix: \"f\" and \"F\" denote single precision, and \"d\" and \"D\" denote double precision. For example, 2f or 3D . Adding an exponent, such as 2e+1 (equal to 2e1 ) or 2e-1 . Using any combination of the above options. String literals are sequences of characters delimited by single ( 'hi' ) or double ( \"hi\" ) quotes. Quotes inside the string can be escaped by using a backslash, such as in 'A\\'s' or \"A\\\"s\" . Literal backslashes need to be escaped as well, such as in 'A\\\\B' . Special escape sequences are also provided: \\n for a newline, \\t for a horizontal tab and \\r for a carriage return, among others. Boolean literals use the true reserved keyword for the true Boolean value, and false reserved keyword for the false Boolean value. Sequence and most other collections (except Map s) also have literals. Their format is T {e} , where T is the name of the type and e are zero or more elements, separated by commas. For instance, Sequence{} is the empty sequence, and Set {1, 2, 3} is the set of numbers between 1 and 3. Map literals are similar to the sequential collection literals, but their elements are of the form key = value . 
For instance, Map{\"a\" = 1, \"b\" = 2} is a map which has two keys, \"a\" and \"b\", which map to the integer values 1 and 2, respectively. Please note that, when defining an element such as 1 = 2 = 3 , the key would be 1 and the value would be the result of evaluating 2 = 3 (false). If you would like to use the result of the expression 1 = 2 as a key, you will need to enclose it in parentheses, such as in (1 = 2) = 3 .","title":"Literal Values"},{"location":"doc/eol/#feature-navigation","text":"Since EOL needs to manage models defined using object-oriented modelling technologies, it provides expressions to navigate properties and invoke simple and declarative operations on objects. In terms of concrete syntax, . is used as a uniform operator to access a property of an object and to invoke an operation on it. The -> operator, which is used in OCL to invoke first-order logic operations on sets, has also been preserved for syntax compatibility reasons. In EOL, every operation can be invoked using either the . or the -> operator, with slightly different semantics to enable overriding the built-in operations. If the . operator is used, precedence is given to the user-defined operations, otherwise precedence is given to the built-in operations. For instance, the Any type defines a println() method that prints the string representation of an object to the standard output stream. In the listing below, the user has defined another parameterless println() operation in the context of Any. Therefore the call to println() in line 1 will be dispatched to the user-defined println() operation defined in line 3. In its body the operation uses the -> operator to invoke the built-in println() operation (line 4). \"Something\".println(); operation Any println() : Any { (\"Printing : \" + self)->println(); } Navigating to the parent/children of model elements EOL does not provide a technology-independent way of navigating to the parent/children of a model element. 
If you need to do this, you should use any methods provided by the underlying modelling platform. For example, as all elements of EMF models are instances of the EObject Java class, the me.eContainer() and me.eContents() method calls in EMF return the parent and children of element me respectively.","title":"Feature Navigation"},{"location":"doc/eol/#escaping-reserved-keywords","text":"Due to the variable nature of (meta-)models and the various domain-specific languages of Epsilon (including EOL itself), feature navigation calls may clash with reserved keywords, leading to a parsing error. Back-ticks can be used to escape such keywords. For example, if a model element contains a feature called operation , then this can be navigated as shown in the listing below. var op = modelElement.`operation`;","title":"Escaping Reserved Keywords"},{"location":"doc/eol/#arithmetical-and-comparison-operators","text":"EOL provides common operators for performing arithmetical computations and comparisons illustrated in the following two tables respectively. Operator Description + Adds reals/integers and concatenates strings - Subtracts reals/integers - (unary). Returns the negative of a real/integer * Multiplies reals/integers / Divides reals/integers += Adds the r-value to the l-value -= Subtracts the r-value from the l-value *= Multiplies the l-value by the r-value /= Divides the l-value by the r-value ++ Increments the integer by one -- Decrements the integer by one Operator Description = Returns true if the left hand side equals the right hand side. 
In the case of primitive types (String, Boolean, Integer, Real) the operator compares the values; in the case of objects it returns true if the two expressions evaluate to the same object == Same as = <> Is the logical negation of the (=) operator != Same as <> > For reals/integers returns true if the left hand side is greater than the right hand side number < For reals/integers returns true if the left hand side is less than the right hand side number >= For reals/integers returns true if the left hand side is greater than or equal to the right hand side number <= For reals/integers returns true if the left hand side is less than or equal to the right hand side number","title":"Arithmetical and Comparison Operators"},{"location":"doc/eol/#logical-operators","text":"EOL provides common operators for performing logical computations, illustrated in the table below. Logical operations apply only to instances of the Boolean primitive type. Operator Precedence All logical operators in EOL have the same priority. This is in contrast to other languages like Java, where e.g. and has a higher priority than or . As a result, while true || true && false returns true in Java, the equivalent true or true and false expression in EOL returns false . Default priorities can be overridden using brackets ( true or (true and false) in this case). Operator Description and Returns the logical conjunction of the two expressions or Returns the logical disjunction of the two expressions not Returns the logical negation of the expression implies Returns the logical implication of the two expressions (see below) xor Returns true if only one of the involved expressions evaluates to true and false otherwise The truth table for the implies logical operator is below. 
Left Right Result true true true true false false false true true false false true","title":"Logical Operators"},{"location":"doc/eol/#ternary-operator","text":"As of version 2.0, EOL has a ternary operator, which is a concise way of using if/else as an expression. The semantics and syntax are similar to Java, but it can be used anywhere as an expression, not only in variable assignments or return statements. The listing below shows some examples of this 5 . Note that it is also possible to use the else keyword in place of the colon for separating the true and false expressions for greater clarity. As one would expect, the branches are evaluated lazily: only one of the branches is executed and returned as the result of the expression, depending on the value of the Boolean expression before the question mark. var result = 2+2==4 ? \"Yes\" else \"No\"; return ((result == \"Yes\" ? 1 : 0) * 2 == 2).mod(2) == 0;","title":"Ternary Operator"},{"location":"doc/eol/#safe-navigation-and-elvis-operator","text":"As of version 2.1, EOL supports safe null navigation ?. , which makes it more concise to chain feature call expressions without resorting to defensive null / isDefined() checks. In the following example, the variable result will be null , and the program won't crash since the safe navigation operator is used. var a = null; var result = a?.someProperty?.anotherProperty; The \"Elvis operator\" ?: can also be used to simplify null check ternary expressions, as shown in the example below. var a = null; var b = \"result\"; var c = a != null ? a : b; var d = a ?: b; assert(c == d); As with the ternary operator, the Elvis operator can also be used anywhere an expression is expected, not just in assignments. As of Epsilon 2.2, there is also the ?= shortcut assignment operator. This is useful for reassigning a variable if it is null. In other words, a ?= b is equivalent to if (a == null) a = b; . 
var a = null; var b = \"result\"; a ?= b; assert(a == b);","title":"Safe Navigation and Elvis Operator"},{"location":"doc/eol/#enumerations","text":"EOL provides the # operator for accessing enumeration literals. For example, the VisibilityEnum#vk_public expression returns the value of the literal vk_public of the VisibilityEnum enumeration. For EMF metamodels, VisibilityEnum#vk_public.instance can also be used.","title":"Enumerations"},{"location":"doc/eol/#statements","text":"","title":"Statements"},{"location":"doc/eol/#variable-declaration-statement","text":"A variable declaration statement declares the name and (optionally) the type and initial value of a variable in an EOL program. If no type is explicitly declared, the variable is assumed to be of type Any . For variables of primitive type, declaration automatically creates an instance of the type with the default values presented in the table below. For non-primitive types the user has to explicitly assign the value of the variable either by using the new keyword or by providing an initial value expression. If neither is done the value of the variable is undefined. Variables in EOL are strongly-typed. Therefore a variable can only be assigned values that conform to its type (or a sub-type of it). Type Default value Integer 0 Boolean false String \"\" Real 0.0","title":"Variable Declaration Statement"},{"location":"doc/eol/#scope","text":"The scope of variables in EOL is generally limited to the block of statements where they are defined, including any nested blocks. Nevertheless, as discussed in the sequel, there are cases in task-specific languages that build atop EOL where the scope of variables is expanded to other non-nested blocks as well. EOL also allows variable shadowing; that is to define a variable with the same name in a nested block that overrides a variable defined in an outer block. The listing below provides an example of declaring and using variables. 
Line 1 defines a variable named i of type Integer and assigns it an initial value of 5 . Line 2 defines a variable named c of type Class (from model Uml) and creates a new instance of the type in the model (by using the new keyword). The commented-out assignment statement of line 3 would raise a runtime error since it would attempt to assign a String value to an Integer variable. The condition of line 4 returns true since the c variable has been initialized before. Line 5 defines a new variable also named i that is of type String and which overrides the Integer variable declared in line 1. Therefore the assignment statement of line 6 is legitimate as it assigns a string value to a variable of type String. Finally, as the program has exited the scope of the if statement, the assignment statement of line 7 is also legitimate as it refers to the i variable defined in line 1. var i : Integer = 5; var c : new Uml!Class; //i = \"somevalue\"; if (c.isDefined()) { var i : String; i = \"somevalue\"; } i = 3;","title":"Scope"},{"location":"doc/eol/#assignment-statement","text":"The assignment statement is used to update the values of variables and properties of native objects and model elements.","title":"Assignment Statement"},{"location":"doc/eol/#variable-assignment","text":"When the left hand side of an assignment statement is a variable, the value of the variable is updated to the object to which the right hand side evaluates. If the type of the right hand side is not compatible (kind-of relationship) with the type of the variable, the assignment is illegal and a runtime error is raised. Assignment to objects of primitive types is performed by value while assignment to instances of non-primitive values is performed by reference. For example, in the listing below, in line 1 the value of the a variable is set to a new Class in the Uml model. In line 2, a new untyped variable b is declared and the value of a is assigned to it. 
In line 3 the name of the class is updated to Customer and thus, line 4 prints Customer to the standard output stream. var a : new Uml!Class; var b = a; a.name = \"Customer\"; b.name.println(); On the other hand, in the listing below, in line 1 the a String variable is declared. In line 2 an untyped variable b is declared. In line 3, the value of a is changed to Customer (which is an instance of the primitive String type). This has no effect on b and thus line 4 prints an empty string to the standard output stream. var a : String; var b = a; a = \"Customer\"; b.println();","title":"Variable Assignment"},{"location":"doc/eol/#native-object-property-assignment","text":"When the left hand side of the assignment is a property of a native object, deciding on the legality and providing the semantics of the assignment is delegated to the execution engine. For example, in a Java-based execution engine, given that x is a native object, the statement x.y = a may be interpreted as x.setY(a) or if x is an instance of a map x.put(\"y\",a) . By contrast, in a C# implementation, it can be interpreted as x.y = a since the language natively supports properties in classes.","title":"Native Object Property Assignment"},{"location":"doc/eol/#model-element-property-assignment","text":"When the left hand side of the assignment is a property of a model element, the model that owns the particular model element (accessible using the ModelRepository.getOwningModel() operation) is responsible for implementing the semantics of the assignment using its associated propertyGetter . For example, if x is a model element, the statement x.y = a may be interpreted using the Java code of the first listing below if x belongs to an EMF-based model or using the Java code of the second listing if it belongs to an MDR-based model. EStructuralFeature feature = x . eClass (). getEStructuralFeature ( \"y\" ); x . eSet ( feature , a ); StructuralFeature feature = findStructuralFeature ( x . 
refClass (), \"y\" ); x . refSetValue ( feature , a );","title":"Model Element Property Assignment"},{"location":"doc/eol/#special-assignment-statement","text":"In task-specific languages, an assignment operator with task-specific semantics is often required. Therefore, EOL provides an additional assignment operator. In standalone EOL, the operator has the same semantics as the primary assignment operator discussed above; however, task-specific languages can redefine its semantics to implement custom assignment behaviour. For example, consider the simple model-to-model transformation of the listing below, where a simple object-oriented model is transformed to a simple database model using an ETL transformation. rule Class2Table transform c : OO!Class to t : DB!Table { t.name = c.name; } rule Attribute2Column transform a : OO!Attribute to c : DB!Column { c.name = a.name; //c.owningTable = a.owningClass; c.owningTable ::= a.owningClass; } The Class2Table rule transforms a Class of the OO model into a Table in the DB model and sets the name of the table to be the same as the name of the class. Rule Attribute2Column transforms an Attribute from the OO model into a Column in the DB model. Besides setting its name (line 12), it also needs to define that the column belongs to the table which corresponds to the class that defines the source attribute. The commented-out assignment statement of line 13 cannot be used for this purpose since it would illegally attempt to assign the owningTable feature of the column to a model element of an inappropriate type ( OO!Class ). 
However, the special assignment operator in ETL has language-specific semantics , and thus in line 14 it assigns to the owningTable feature not the class that owns the attribute but its corresponding table (calculated using the Class2Table rule) in the DB model.","title":"Special Assignment Statement"},{"location":"doc/eol/#if-statement","text":"As in most programming languages, an if statement consists of a condition, a block of statements that is executed if the condition is satisfied and (optionally) a block of statements that is executed otherwise. As an example, in the listing below, if variable a holds a value that is greater than 0 the statement of line 3 is executed, otherwise the statement of line 5 is executed. if (a > 0) { \"A is greater than 0\".println(); } else { \"A is less than or equal to 0\".println(); }","title":"If Statement"},{"location":"doc/eol/#switch-statement","text":"A switch statement consists of an expression and a set of cases, and can be used to implement multi-branching. Unlike Java/C, switch in EOL doesn't fall through to the next case after a successful one by default. Therefore, it is not necessary to add a break statement after each case. To enable falling through to all subsequent cases you can use the continue statement. Also, unlike Java/C, the switch expression can be of any type (not only integers). As an example, when executed, the code in the listing below prints 2 while the code in the following listing prints 2,3,default . 
var i = \"2\"; switch (i) { case \"1\" : \"1\".println(); case \"2\" : \"2\".println(); case \"3\" : \"3\".println(); default : \"default\".println(); } var i = \"2\"; switch (i) { case \"1\" : \"1\".println(); case \"2\" : \"2\".println(); continue; case \"3\" : \"3\".println(); default : \"default\".println(); }","title":"Switch Statement"},{"location":"doc/eol/#while-statement","text":"A while statement consists of a condition and a block of statements which are executed as long as the condition is satisfied. For example, in the listing below, the body of the while statement is executed 5 times printing the numbers 0 to 4 to the output console. Inside the body of a while statement, the built-in read-only loopCount integer variable holds the number of times the innermost loop has been executed so far (including the current iteration). Right after entering the loop for the first time and before running the first statement in its body, loopCount is set to 1, and it is incremented after each following iteration. var i : Integer = 0; while (i < 5) { // both lines print the same thing i.println(); (loopCount - 1).println(); // increment the counter i = i+1; }","title":"While Statement"},{"location":"doc/eol/#for-statement","text":"In EOL, for statements are used to iterate the contents of collections. A for statement defines a typed iterator and an iterated collection as well as a block of statements that is executed for every item in the collection that has a kind-of relationship with the type defined by the iterator. As with the majority of programming languages, modifying a collection while iterating it raises a runtime error. To avoid this situation, users can use the clone() built-in operation of the Collection type. 
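To illustrate the clone()-based workaround described above, the following sketch (with illustrative collection contents) iterates over a copy of a collection so that elements can safely be removed from the original during the loop:

```
var numbers = Sequence{1, 2, 3, 4, 5};

// Iterating numbers directly while removing from it would raise a
// runtime error, so we iterate over a clone() of the collection instead
for (n in numbers.clone()) {
  if (n.mod(2) == 0) {
    numbers.remove(n);
  }
}

numbers.println(); // the remaining (odd) numbers
```

Since clone() produces a shallow copy, the loop body still manipulates the original numbers sequence, while the iteration order is fixed by the copy taken before any removal.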
var col : Sequence = Sequence{\"a\", 1, 2, 2.5, \"b\"}; for (r : Real in col) { r.print(); if (hasMore){\",\".print();} } Inside the body of a for statement, two built-in read-only variables are visible: the loopCount integer variable and the hasMore boolean variable. hasMore is used to determine if there are more items in the collection for which the loop will be executed. For example, in the listing above, the heterogeneous Sequence col is defined that contains two strings ( a and b ), two integers ( 1 , 2 ) and one real ( 2.5 ). The for loop of line 2 only iterates through the items of the collection that are of kind Real and therefore prints 1,2,2.5 to the standard output stream.","title":"For Statement"},{"location":"doc/eol/#break-breakall-and-continue-statements","text":"To exit from for and while loops on demand, EOL provides the break and breakAll statements. The break statement exits the innermost loop while the breakAll statement exits all outer loops as well. On the other hand, to skip a particular iteration and proceed with the next one, EOL provides the continue statement. For example, the program in the listing below prints 2,1 3,1 to the standard output stream. for (i in Sequence{1..3}) { if (i = 1) {continue;} for (j in Sequence{1..4}) { if (j = 2) {break;} if (j = 3) {breakAll;} (i + \",\" + j).println(); } }","title":"Break, BreakAll and Continue Statements"},{"location":"doc/eol/#throw-statement","text":"EOL provides the throw statement for throwing a value as a Java exception. This is especially useful when invoking EOL scripts from Java code: by catching and processing the exception, the Java code may be able to automatically handle the problem without requiring user input. Any value can be thrown, as shown in the listing below where we throw a number and a string. throw 42; throw \"Error!\";","title":"Throw Statement"},{"location":"doc/eol/#transaction-statement","text":"The underlying EMC layer provides support for transactions in models. 
To utilize this feature EOL provides the transaction statement. A transaction statement (optionally) defines the models that participate in the transaction. If no models are defined, it is assumed that all the models that are accessible from the enclosing program participate. When the statement is executed, a transaction is started on each participating model. If no errors are raised during the execution of the contained statements, any changes made to model elements are committed. On the other hand, if an error is raised the transaction is rolled back and any changes made to the models in the context of the transaction are undone. The user can also use the abort statement to explicitly exit a transaction and roll-back any changes done in its context. In the listing below, an example of using this feature in a simulation problem is illustrated. var system : System.allInstances.first(); for (i in Sequence {1..100}) { transaction { var failedProcessors : Set; while (failedProcessors.size() < 10) { failedProcessors.add(system.processors.random()); } for (processor in failedProcessors) { processor.failed = true; processor.moveTasksElsewhere(); } system.evaluateAvailability(); abort; } } In this problem, a system consists of a number of processors. A processor manages some tasks and can fail at any time. The EOL program in the listing above performs 100 simulation steps, in every one of which 10 random processors from the model (lines 7-11) are marked as failed by setting their failed property to true (line 14). Then, the tasks that the failed processors manage are moved to other processors (line 15). Finally the availability of the system in this state is evaluated. After a simulation step, the state of the model has been drastically changed since processors have failed and tasks have been relocated. 
To be able to restore the model to its original state after every simulation step, each step is executed in the context of a transaction which is explicitly aborted (line 20) after evaluating the availability of the system. Therefore after each simulation step the model is restored to its original state for the next step to be executed.","title":"Transaction Statement"},{"location":"doc/eol/#extended-properties","text":"Quite often, during a model management operation it is necessary to associate model elements with information that is not supported by the metamodel they conform to. For instance, the EOL program in the listing below calculates the depth of each Tree element in a model that conforms to the Tree metamodel displayed below. classDiagram class Tree { +label: String +parent: Tree +children: Tree[*] } Tree -- Tree As the Tree metamodel doesn't support a depth property in the Tree metaclass, each Tree has to be associated with its calculated depth using the depths map defined in line 1. Another approach would be to extend the Tree metamodel to support the desired depth property; however, applying this technique every time an additional property is needed for some model management operation would quickly pollute the metamodel with properties of secondary importance. var depths = new Map; for (n in Tree.allInstances.select(t|not t.parent.isDefined())) { n.setDepth(0); } for (n in Tree.allInstances) { (n.name + \" \" + depths.get(n)).println(); } operation Tree setDepth(depth : Integer) { depths.put(self,depth); for (c in self.children) { c.setDepth(depth + 1); } } To simplify the code required in such cases, EOL provides the concept of extended properties . In terms of concrete syntax, an extended property is a normal property, the name of which starts with the tilde character ( ~ ). With regards to its execution semantics, the first time the value of an extended property of an object is assigned, the property is created and associated with the object. 
Then, the property can be accessed as a normal property. If an extended property is accessed before it is assigned, it returns null . The listing below demonstrates using a ~depth extended property to eliminate the need for using the depths map in the listing that follows it. for (n in Tree.allInstances.select(t|not t.parent.isDefined())) { n.setDepth(0); } for (n in Tree.allInstances) { (n.name + \" \" + n.~depth).println(); } operation Tree setDepth(depth : Integer) { self.~depth = depth; for (c in self.children) { c.setDepth(depth + 1); } }","title":"Extended Properties"},{"location":"doc/eol/#context-independent-user-input","text":"A common assumption in model management languages is that model management tasks are only executed in a batch-manner without human intervention. However, as demonstrated in the sequel, it is often useful for the user to provide feedback that can precisely drive the execution of a model management operation. Model management operations can be executed in a number of runtime environments in each of which a different user-input method is more appropriate. For instance when executed in the context of an IDE (such as Eclipse) visual dialogs are preferable, while when executed in the context of a server or from within an ANT workflow, a command-line user input interface is deemed more suitable. To abstract away from the different runtime environments and enable the user to specify user interaction statements uniformly and regardless of the runtime context, EOL provides the IUserInput interface that can be realized in different ways according to the execution environment and attached to the runtime context via the IEolContext.setUserInput(IUserInput userInput) method. The IUserInput specifies the methods presented in the table below. 
Signature Description inform(message : String) Displays the specified message to the user confirm(message : String, [default : Boolean]) : Boolean Prompts the user to confirm if the condition described by the message holds prompt(message : String, [default : String]) : String Prompts the user for a string in response to the message promptInteger(message : String, [default : Integer]) : Integer Prompts the user for an Integer promptReal(message : String, [default : Real]) : Real Prompts the user for a Real choose(message : String, options : Sequence, [default : Any]) : Any Prompts the user to select one of the options chooseMany(message : String, options : Sequence, [default : Sequence]) : Sequence Prompts the user to select one or more of the options As displayed above, all the methods of the IUserInput interface accept a default parameter. The purpose of this parameter is twofold. First, it enables the designer of the model management program to prompt the user with the most likely value as a default choice. Second, it enables a concrete implementation of the interface ( UnattendedExecutionUserInput ) which returns the default values without prompting the user at all and thus can be used for unattended execution of interactive Epsilon programs. The figures below demonstrate the interfaces through which input is requested from the user when the exemplar System.user.promptInteger(\"Please enter a number\", 1); statement is executed using an Eclipse-based and a command-line-based IUserInput implementation respectively. User-input facilities have been found to be particularly useful in all model management tasks. 
Such facilities are essential for performing operations on live models such as model validation and model refactoring, but can also be useful in model comparison, where marginal matching decisions can be delegated to the user, and in model transformation, where the user can interactively specify the elements that will be transformed into corresponding elements in the target model. Although the EOL parser permits loose statements (e.g. not contained in operations) between/after operations, these are ignored at runtime. \u21a9 Parameters within square brackets are optional \u21a9 http://download.oracle.com/javase/8/docs/api/java/util/Formatter.html#syntax \u21a9 https://docs.oracle.com/javase/8/docs/api/java/util/stream/Stream.html \u21a9 For further examples of the ternary operator, see https://git.eclipse.org/c/epsilon/org.eclipse.epsilon.git/tree/tests/org.eclipse.epsilon.eol.engine.test.acceptance/src/org/eclipse/epsilon/eol/engine/test/acceptance/TernaryTests.eol \u21a9","title":"Context-Independent User Input"},{"location":"doc/epl/","text":"The Epsilon Pattern Language (EPL) \u00b6 The aim of EPL is to contribute pattern matching capabilities to Epsilon. This chapter discusses the abstract and concrete syntax of EPL as well as its execution semantics. To aid understanding, the discussion of the syntax and the semantics of the language revolves around an exemplar pattern which is developed incrementally throughout the chapter. The exemplar pattern is matched against models extracted from Java source code using tooling provided by the MoDisco project. MoDisco is an Eclipse project that provides a fine-grained Ecore-based metamodel of the Java language as well as tooling for extracting models that conform to this Java metamodel from Java source code. A simplified view of the relevant part of the MoDisco Java metamodel used in this running example is presented below.
The aim of the pattern developed here (which we will call PublicField ) is to identify quartets of <ClassDeclaration, FieldDeclaration, MethodDeclaration, MethodDeclaration> , each representing a field of a Java class for which appropriately named accessor/getter (getX/isX) and mutator/setter (setX) methods are defined by the class. classDiagram class ClassDeclaration { +name: String +bodyDeclarations: BodyDeclaration[*] } class BodyDeclaration { +name: String +modifiers: Modifier[*] } class VariableDeclarationFragment { +name: String } class FieldDeclaration { +fragments: VariableDeclarationFragment[*] +type: TypeAccess } class MethodDeclaration { +returnType: TypeAccess } class Modifier { +visibility: VisibilityKind } class VisibilityKind { #none #public #protected #private } ClassDeclaration -- BodyDeclaration: bodyDeclarations * BodyDeclaration -- Modifier: modifiers * Modifier -- VisibilityKind: visibility BodyDeclaration <|-- FieldDeclaration MethodDeclaration --|> BodyDeclaration FieldDeclaration -- VariableDeclarationFragment: fragments * FieldDeclaration -- TypeAccess: type MethodDeclaration -- TypeAccess: returnType Syntax \u00b6 The syntax of EPL is an extension of the syntax of the EOL language , which is the core language of Epsilon. As such, any references to expression and statement block in this chapter refer to EOL expressions and blocks of EOL statements respectively. It is also worth noting that EOL expressions and statements can produce side-effects on models, and therefore, it is the responsibility of the developer to decide which expressions used in the context of EPL patterns should be side-effect free and which not. As illustrated in the figure below, EPL patterns are organised in modules . Each module contains a number of named patterns and optionally, pre and post statement blocks that are executed before and after the pattern matching process, and helper EOL operations.
EPL modules can import other EPL and EOL modules to facilitate reuse and modularity. classDiagram class EplModule { -iterative: Boolean -maxLoops: Integer } class Pattern { -name: String -match: ExecutableBlock<Boolean> -onMatch: ExecutableBlock<Void> -noMatch: ExecutableBlock<Void> -do: ExecutableBlock<Void> } class Role { -names: String[1..*] -negative: Boolean -type: EolType -guard: ExecutableBlock<Boolean> -active: ExecutableBlock<Boolean> -optional: ExecutableBlock<Boolean> } class Cardinality { -lowerBound: Integer -upperBound: Integer } EolModule <|-- ErlModule ErlModule <|-- EplModule Pre --|> NamedStatementBlockRule Post --|> NamedStatementBlockRule ErlModule -- Pre: pre * ErlModule -- Post: post * EplModule -- Pattern: patterns * Pattern -- Role: roles * Role -- Domain: domain Domain <|-- StaticDomain Domain <|-- DynamicDomain Role -- Cardinality: cardinality In its simplest form a pattern consists of a number of named and typed roles and a match condition. For example, in lines 2-3, the PublicField pattern below defines four roles ( class , field , setter and getter ). The match condition of the pattern specifies that for a quartet to be a valid match, the field, setter and getter must all belong to the class (lines 5-7), and that the setter and getter methods must be appropriately named 1 .
pattern PublicField class : ClassDeclaration, field : FieldDeclaration, setter : MethodDeclaration, getter : MethodDeclaration { match : class.bodyDeclarations.includes(field) and class.bodyDeclarations.includes(setter) and class.bodyDeclarations.includes(getter) and setter.name = \"set\" + field.getName() and (getter.name = \"get\" + field.getName() or getter.name = \"is\" + field.getName()) } @cached operation FieldDeclaration getName() { return self.fragments.at(0).name.firstToUpperCase(); } The implementation of the PublicField pattern above is fully functional but not particularly efficient as the match condition needs to be evaluated #ClassDeclaration * #FieldDeclaration * #MethodDeclaration^2 times. To enable pattern developers to reduce the search space, each role in an EPL pattern can specify a domain which is an EOL expression that returns a collection of model elements from which the role will draw values. There are two types of domains in EPL: static domains which are computed once for all applications of the pattern, and which are not dependent on the bindings of other roles of the pattern (denoted using the in keyword in terms of the concrete syntax), and dynamic domains which are recomputed every time the candidate values of the role are iterated, and which are dependent on the bindings of other roles (denoted using the from keyword). Beyond a domain, each role can also specify a guard expression that further prunes unnecessary evaluations of the match condition. Using dynamic domains and guards, the PublicField pattern can be expressed in a more efficient way, as illustrated below. To further illustrate the difference between dynamic and static domains, changing from to in in line 4 would trigger a runtime exception as the domain would become static and therefore not able to access bindings of other roles (i.e. class ).
pattern PublicField class : ClassDeclaration, field : FieldDeclaration from: class.bodyDeclarations, setter : MethodDeclaration from: class.bodyDeclarations guard: setter.name = \"set\" + field.getName(), getter : MethodDeclaration from: class.bodyDeclarations guard : (getter.name = \"get\" + field.getName() or getter.name = \"is\" + field.getName()) { } The implementation above is significantly more efficient than the previous implementation but can still be improved by further reducing the number of name comparisons of candidate setter and getter methods. To achieve this we can employ memoisation: we create a hash map of method names and methods once before pattern matching (line 2), and use it to identify candidate setters and getters (lines 9 and 12-13). pre { var methodMap = MethodDeclaration.all.mapBy(m|m.name); } pattern PublicField class : ClassDeclaration, field : FieldDeclaration from: class.bodyDeclarations, setter : MethodDeclaration from: getMethods(\"set\" + field.getName()) guard: setter.abstractTypeDeclaration = class, getter : MethodDeclaration from: getMethods(\"get\" + field.getName()) .includingAll(getMethods(\"is\" + field.getName())) guard: getter.abstractTypeDeclaration = class { } operation getMethods(name : String) : Sequence(MethodDeclaration) { var methods = methodMap.get(name); if (methods.isDefined()) return methods; else return new Sequence; } The sections below discuss the remainder of the syntax of EPL. Negative Roles \u00b6 Pattern roles can be negated using the no keyword. For instance, by adding the no keyword before the setter role in line 8 of the listing above, the pattern will match fields that have getters but no setters (i.e. read-only fields). Optional and Active Roles \u00b6 Pattern roles can be designated as optional using the optional EOL expression. For example, adding optional: true to the setter role would also match all fields that only have a getter.
By adding optional: true to the setter role and optional: setter.isDefined() to the getter role, the pattern would match fields that have at least a setter or a getter. Roles can be completely deactivated depending on the bindings of other roles through the active construct. For example, if the pattern developer prefers to specify separate roles for getX and isX getters, with a preference over getX getters, the pattern can be formulated as illustrated in the listing below so that if a getX getter is found, no attempt is even made to match an isX getter. pattern PublicField class : ClassDeclaration, field : FieldDeclaration ..., setter : MethodDeclaration ..., getGetter : MethodDeclaration ..., isGetter: MethodDeclaration ... active: getGetter.isUndefined() { } Role Cardinality \u00b6 The cardinality of a role (lower and upper bound) can be defined in square brackets following the type of the role. Roles that have a cardinality with an upper bound > 1 are bound to the subset of elements from the domain of the role which also satisfy the guard, if the size of that subset is within the bounds of the role's cardinality. The listing below demonstrates the ClassAndPrivateFields pattern that detects instances of classes and all their private fields. If the cardinality of the field role in line 3 was [1..3] instead of [*], the pattern would only detect classes that own 1 to 3 private fields. 
pattern ClassAndPrivateFields class : ClassDeclaration, field : FieldDeclaration[*] from: class.bodyDeclarations guard: field.getVisibility() = VisibilityKind#private { onmatch { var message : String; message = class.name + \" matches\"; message.println(); } do { // More actions here } nomatch : (class.name + \" does not match\").println() } operation FieldDeclaration getVisibility() { if (self.modifier.isDefined()) { return self.modifier.visibility; } else { return null; } } Execution Semantics \u00b6 When an EPL module is executed, all of its pre statement blocks are first executed in order to define and initialise any global variables needed (e.g. the methodMap variable in the listing above) or to print diagnostic messages to the user. Subsequently, patterns are executed in the order in which they appear. For each pattern, all combinations that conform to the type and constraints of the roles of the pattern are iterated, and the validity of each combination is evaluated in the match statement block of the pattern. In the absence of a match block, every combination that satisfies the constraints of the roles of the pattern is accepted as a valid instance of the pattern. Immediately after every successful match, the optional onmatch statement block of the pattern is invoked (see lines 7-11 of the listing above) and, after every unsuccessful matching attempt for combinations that nonetheless satisfy the constraints specified by the roles of the pattern, the optional nomatch statement block of the pattern (line 17) is executed. When matching of all patterns is complete, the do part (line 13) of each successful match is executed. In the do part, developers can modify the involved models (e.g. to perform in-place transformation), without the risk of concurrent list modification errors (which can occur if elements are created/deleted during pattern matching).
After pattern matching has been completed, the post statement blocks of the module are executed in order to perform any necessary finalisation actions. An EPL module can be executed in a one-off or iterative mode. In the one-off mode, patterns are only evaluated once, while in the iterative mode, the process is repeated until no more matches have been found or until the maximum number of iterations (specified by the developer) has been reached. The iterative mode is particularly suitable for patterns that perform reduction of the models they are evaluated against. Pattern Matching Output \u00b6 The output of the execution of an EPL module on a set of models is a collection of matches encapsulated in a PatternMatchModel , as illustrated in the figure below. As PatternMatchModel implements the IModel EMC interface, its instances can be accessed from other programs expressed in languages of the Epsilon family. classDiagram class Match { +bindings: Map<String, Object> } PatternMatchModel --|> IModel PatternMatchModel -- Pattern: patterns * PatternMatchModel -- Match: matches * A PatternMatchModel introduces one model element type for each pattern and one type for each field of each pattern (the names of these types are derived by concatenating the name of the pattern with a camel-case version of the name of the field). Instances of the former are the matches of the pattern while instances of the latter are elements that have been matched in this particular role. For example, after executing the EPL module above, the produced PatternMatchModel contains 5 types: PublicField , instances of which are all the identified matches of the PublicField pattern, PublicFieldClass , instances of which are all the classes in the input model which have been matched to the class role in instances of the PublicField pattern, and similarly PublicFieldField , PublicFieldSetter and PublicFieldGetter .
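To illustrate (a hedged sketch: it assumes the pattern match model has been registered under the name Patterns, and accesses role bindings as properties of match instances, in the same style as the EVL constraint shown in the interoperability discussion):

```eol
// Iterate all matches of the PublicField pattern
for (m in Patterns!PublicField.all) {
    (m.class.name + "." + m.field.fragments.at(0).name
        + " has a getter and a setter").println();
}
// Elements matched in a specific role are also exposed as a type,
// e.g. all classes bound to the class role:
var matchedClasses = Patterns!PublicFieldClass.all;
```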
Interoperability with Other Model Management Tasks \u00b6 As a PatternMatchModel is an instance of IModel , after its computation it can be manipulated by other Epsilon programs. For example, the listing below demonstrates running the EPL module and passing its output to the EVL constraints that follow and, if validation is successful, to an ETL transformation where it is used to guide the generation of a UML model. In lines 4-7, the Java model is loaded and is assigned the name Java . Then, in line 9, the Java model is passed on to publicfield.epl for pattern matching. The result of pattern matching, which is an instance of the PatternMatchModel class (and therefore also an instance of IModel ) is exported to the global context under the name Patterns . Then, in line 13, both the Patterns and the Java models are passed on to the EVL model validation task which performs validation of the identified pattern matches. <project default= \"main\" > <target name= \"main\" > <epsilon.emf.loadModel name= \"Java\" modelfile= \"org.eclipse.epsilon.eol.engine_java.xmi\" metamodeluri= \"...MoDisco/Java/0.2.incubation/java\" read= \"true\" store= \"false\" /> <epsilon.epl src= \"publicfield.epl\" exportAs= \"Patterns\" > <model ref= \"Java\" /> </epsilon.epl> <epsilon.evl src= \"constraints.evl\" > <model ref= \"Patterns\" /> <model ref= \"Java\" /> </epsilon.evl> <epsilon.etl src= \"java2uml.etl\" > <model ref= \"Patterns\" /> <model ref= \"Java\" /> </epsilon.etl> </target> </project> Line 1 below defines a set of constraints that will be applied to instances of the PublicField type from the Patterns model. As discussed above, these are all matched instances of the PublicField pattern. Line 4 specifies the condition that needs to be satisfied by instances of the pattern. Notice the self.getter and self.field expressions which return the MethodDeclaration and FieldDeclaration bound to the instance of the pattern.
Then, line 5 defines the message that should be produced for instances of PublicField that do not satisfy this constraint. context Patterns!PublicField { guard: self.field.type.isDefined() constraint GetterAndFieldSameType { check : self.getter.returnType.type = self.field.type.type message : \"The getter of \" + self.class.name + \".\" + self.field.fragments.at(0).name + \" does not have the same type as the field itself\" } } If validation is successful, both the Java and the Patterns model are passed on to an ETL transformation that transforms the Java model to a UML model, a fragment of which is presented below. The transformation encodes <field, setter, getter> triplets in the Java model as public properties in the UML model. As such, in line 6 of the transformation, the Patterns model is used to check whether the field ( s ) has been matched under the PublicField pattern, and if so, the next line ignores the field's declared visibility and sets the visibility of the respective UML property to public . rule FieldDeclaration2Property transform s: Java!FieldDeclaration to t: Uml!Property { t.name = s.getName(); if (s.instanceOf(Patterns!PublicFieldField)) { t.visibility = Uml!VisibilityKind#public; } else { t.visibility = s.toUmlVisibility(); } ... } As Epsilon provides ANT tasks for all its languages, the same technique can be used to pass the result of pattern matching on to model-to-text transformations, as well as model comparison and model merging programs. To keep the running example simple and concise, the pattern does not check aspects such as matching/compatible parameter/return types in the field, setter and getter but the reader should easily be able to envision how this would be supported through additional clauses in the match condition. \u21a9","title":"Pattern matching (EPL)"},{"location":"doc/epl/#the-epsilon-pattern-language-epl","text":"The aim of EPL is to contribute pattern matching capabilities to Epsilon.
This chapter discusses the abstract and concrete syntax of EPL as well as its execution semantics. To aid understanding, the discussion of the syntax and the semantics of the language revolves around an exemplar pattern which is developed incrementally throughout the chapter. The exemplar pattern is matched against models extracted from Java source code using tooling provided by the MoDisco project. MoDisco is an Eclipse project that provides a fine-grained Ecore-based metamodel of the Java language as well as tooling for extracting models that conform to this Java metamodel from Java source code. A simplified view of the relevant part of the MoDisco Java metamodel used in this running example is presented below. The aim of the pattern developed here (which we will call PublicField ) is to identify quartets of <ClassDeclaration, FieldDeclaration, MethodDeclaration, MethodDeclaration> , each representing a field of a Java class for which appropriately named accessor/getter (getX/isX) and mutator/setter (setX) methods are defined by the class. 
classDiagram class ClassDeclaration { +name: String +bodyDeclarations: BodyDeclaration[*] } class BodyDeclaration { +name: String +modifiers: Modifier[*] } class VariableDeclarationFragment { +name: String } class FieldDeclaration { +fragments: VariableDeclarationFragment[*] +type: TypeAccess } class MethodDeclaration { +returnType: TypeAccess } class Modifier { +visibility: VisibilityKind } class VisibilityKind { #none #public #protected #private } ClassDeclaration -- BodyDeclaration: bodyDeclarations * BodyDeclaration -- Modifier: modifiers * Modifier -- VisibilityKind: visibility BodyDeclaration <|-- FieldDeclaration MethodDeclaration --|> BodyDeclaration FieldDeclaration -- VariableDeclarationFragment: fragments * FieldDeclaration -- TypeAccess: type MethodDeclaration -- TypeAccess: returnType","title":"The Epsilon Pattern Language (EPL)"},{"location":"doc/epl/#syntax","text":"The syntax of EPL is an extension of the syntax of the EOL language , which is the core language of Epsilon. As such, any references to expression and statement block in this chapter refer to EOL expressions and blocks of EOL statements respectively. It is also worth noting that EOL expressions and statements can produce side-effects on models, and therefore, it is the responsibility of the developer to decide which expressions used in the context of EPL patterns should be side-effect free and which not. As illustrated in the figure below, EPL patterns are organised in modules . Each module contains a number of named patterns and optionally, pre and post statement blocks that are executed before and after the pattern matching process, and helper EOL operations. EPL modules can import other EPL and EOL modules to facilitate reuse and modularity.
classDiagram class EplModule { -iterative: Boolean -maxLoops: Integer } class Pattern { -name: String -match: ExecutableBlock<Boolean> -onMatch: ExecutableBlock<Void> -noMatch: ExecutableBlock<Void> -do: ExecutableBlock<Void> } class Role { -names: String[1..*] -negative: Boolean -type: EolType -guard: ExecutableBlock<Boolean> -active: ExecutableBlock<Boolean> -optional: ExecutableBlock<Boolean> } class Cardinality { -lowerBound: Integer -upperBound: Integer } EolModule <|-- ErlModule ErlModule <|-- EplModule Pre --|> NamedStatementBlockRule Post --|> NamedStatementBlockRule ErlModule -- Pre: pre * ErlModule -- Post: post * EplModule -- Pattern: patterns * Pattern -- Role: roles * Role -- Domain: domain Domain <|-- StaticDomain Domain <|-- DynamicDomain Role -- Cardinality: cardinality In its simplest form a pattern consists of a number of named and typed roles and a match condition. For example, in lines 2-3, the PublicField pattern below defines four roles ( class , field , setter and getter ). The match condition of the pattern specifies that for a quartet to be a valid match, the field, setter and getter must all belong to the class (lines 5-7), and that the setter and getter methods must be appropriately named 1 . pattern PublicField class : ClassDeclaration, field : FieldDeclaration, setter : MethodDeclaration, getter : MethodDeclaration { match : class.bodyDeclarations.includes(field) and class.bodyDeclarations.includes(setter) and class.bodyDeclarations.includes(getter) and setter.name = \"set\" + field.getName() and (getter.name = \"get\" + field.getName() or getter.name = \"is\" + field.getName()) } @cached operation FieldDeclaration getName() { return self.fragments.at(0).name.firstToUpperCase(); } The implementation of the PublicField pattern above is fully functional but not particularly efficient as the match condition needs to be evaluated #ClassDeclaration * #FieldDeclaration * #MethodDeclaration^2 times.
To enable pattern developers to reduce the search space, each role in an EPL pattern can specify a domain which is an EOL expression that returns a collection of model elements from which the role will draw values. There are two types of domains in EPL: static domains which are computed once for all applications of the pattern, and which are not dependent on the bindings of other roles of the pattern (denoted using the in keyword in terms of the concrete syntax), and dynamic domains which are recomputed every time the candidate values of the role are iterated, and which are dependent on the bindings of other roles (denoted using the from keyword). Beyond a domain, each role can also specify a guard expression that further prunes unnecessary evaluations of the match condition. Using dynamic domains and guards, the PublicField pattern can be expressed in a more efficient way, as illustrated below. To further illustrate the difference between dynamic and static domains, changing from to in in line 4 would trigger a runtime exception as the domain would become static and therefore not able to access bindings of other roles (i.e. class ). pattern PublicField class : ClassDeclaration, field : FieldDeclaration from: class.bodyDeclarations, setter : MethodDeclaration from: class.bodyDeclarations guard: setter.name = \"set\" + field.getName(), getter : MethodDeclaration from: class.bodyDeclarations guard : (getter.name = \"get\" + field.getName() or getter.name = \"is\" + field.getName()) { } The implementation above is significantly more efficient than the previous implementation but can still be improved by further reducing the number of name comparisons of candidate setter and getter methods. To achieve this we can employ memoisation: we create a hash map of method names and methods once before pattern matching (line 2), and use it to identify candidate setters and getters (lines 9 and 12-13). 
pre { var methodMap = MethodDeclaration.all.mapBy(m|m.name); } pattern PublicField class : ClassDeclaration, field : FieldDeclaration from: class.bodyDeclarations, setter : MethodDeclaration from: getMethods(\"set\" + field.getName()) guard: setter.abstractTypeDeclaration = class, getter : MethodDeclaration from: getMethods(\"get\" + field.getName()) .includingAll(getMethods(\"is\" + field.getName())) guard: getter.abstractTypeDeclaration = class { } operation getMethods(name : String) : Sequence(MethodDeclaration) { var methods = methodMap.get(name); if (methods.isDefined()) return methods; else return new Sequence; } The sections below discuss the remainder of the syntax of EPL.","title":"Syntax"},{"location":"doc/epl/#negative-roles","text":"Pattern roles can be negated using the no keyword. For instance, by adding the no keyword before the setter role in line 8 of the listing above, the pattern will match fields that have getters but no setters (i.e. read-only fields).","title":"Negative Roles"},{"location":"doc/epl/#optional-and-active-roles","text":"Pattern roles can be designated as optional using the optional EOL expression. For example, adding optional: true to the setter role would also match all fields that only have a getter. By adding optional: true to the setter role and optional: setter.isDefined() to the getter role, the pattern would match fields that have at least a setter or a getter. Roles can be completely deactivated depending on the bindings of other roles through the active construct. For example, if the pattern developer prefers to specify separate roles for getX and isX getters, with a preference over getX getters, the pattern can be formulated as illustrated in the listing below so that if a getX getter is found, no attempt is even made to match an isX getter. pattern PublicField class : ClassDeclaration, field : FieldDeclaration ..., setter : MethodDeclaration ..., getGetter : MethodDeclaration ..., isGetter: MethodDeclaration ...
active: getGetter.isUndefined() { }","title":"Optional and Active Roles"},{"location":"doc/epl/#role-cardinality","text":"The cardinality of a role (lower and upper bound) can be defined in square brackets following the type of the role. Roles that have a cardinality with an upper bound > 1 are bound to the subset of elements from the domain of the role which also satisfy the guard, if the size of that subset is within the bounds of the role's cardinality. The listing below demonstrates the ClassAndPrivateFields pattern that detects instances of classes and all their private fields. If the cardinality of the field role in line 3 was [1..3] instead of [*], the pattern would only detect classes that own 1 to 3 private fields. pattern ClassAndPrivateFields class : ClassDeclaration, field : FieldDeclaration[*] from: class.bodyDeclarations guard: field.getVisibility() = VisibilityKind#private { onmatch { var message : String; message = class.name + \" matches\"; message.println(); } do { // More actions here } nomatch : (class.name + \" does not match\").println() } operation FieldDeclaration getVisibility() { if (self.modifier.isDefined()) { return self.modifier.visibility; } else { return null; } }","title":"Role Cardinality"},{"location":"doc/epl/#execution-semantics","text":"When an EPL module is executed, all of its pre statement blocks are first executed in order to define and initialise any global variables needed (e.g. the methodMap variable in the listing above) or to print diagnostic messages to the user. Subsequently, patterns are executed in the order in which they appear. For each pattern, all combinations that conform to the type and constraints of the roles of the pattern are iterated, and the validity of each combination is evaluated in the match statement block of the pattern. In the absence of a match block, every combination that satisfies the constraints of the roles of the pattern is accepted as a valid instance of the pattern.
Immediately after every successful match, the optional onmatch statement block of the pattern is invoked (see lines 7-11 of the listing above) and, after every unsuccessful matching attempt for combinations that nonetheless satisfy the constraints specified by the roles of the pattern, the optional nomatch statement block of the pattern (line 17) is executed. When matching of all patterns is complete, the do part (line 13) of each successful match is executed. In the do part, developers can modify the involved models (e.g. to perform in-place transformation), without the risk of concurrent list modification errors (which can occur if elements are created/deleted during pattern matching). After pattern matching has been completed, the post statement blocks of the module are executed in order to perform any necessary finalisation actions. An EPL module can be executed in a one-off or iterative mode. In the one-off mode, patterns are only evaluated once, while in the iterative mode, the process is repeated until no more matches have been found or until the maximum number of iterations (specified by the developer) has been reached. The iterative mode is particularly suitable for patterns that perform reduction of the models they are evaluated against.","title":"Execution Semantics"},{"location":"doc/epl/#pattern-matching-output","text":"The output of the execution of an EPL module on a set of models is a collection of matches encapsulated in a PatternMatchModel , as illustrated in the figure below. As PatternMatchModel implements the IModel EMC interface, its instances can be accessed from other programs expressed in languages of the Epsilon family.
classDiagram class Match { +bindings: Map<String, Object> } PatternMatchModel --|> IModel PatternMatchModel -- Pattern: patterns * PatternMatchModel -- Match: matches * A PatternMatchModel introduces one model element type for each pattern and one type for each field of each pattern (the names of these types are derived by concatenating the name of the pattern with a camel-case version of the name of the field). Instances of the former are the matches of the pattern while instances of the latter are elements that have been matched in this particular role. For example, after executing the EPL module above, the produced PatternMatchModel contains 5 types: PublicField , instances of which are all the identified matches of the PublicField pattern, PublicFieldClass , instances of which are all the classes in the input model which have been matched to the class role in instances of the PublicField pattern, and similarly PublicFieldField , PublicFieldSetter and PublicFieldGetter .","title":"Pattern Matching Output"},{"location":"doc/epl/#interoperability-with-other-model-management-tasks","text":"As a PatternMatchModel is an instance of IModel , after its computation it can be manipulated by other Epsilon programs. For example, the listing below demonstrates running the EPL module and passing its output to the EVL constraints that follow and, if validation is successful, to an ETL transformation where it is used to guide the generation of a UML model. In lines 4-7, the Java model is loaded and is assigned the name Java . Then, in line 9, the Java model is passed on to publicfield.epl for pattern matching. The result of pattern matching, which is an instance of the PatternMatchModel class (and therefore also an instance of IModel ) is exported to the global context under the name Patterns . Then, in line 13, both the Patterns and the Java models are passed on to the EVL model validation task which performs validation of the identified pattern matches.
<project default= \"main\" > <target name= \"main\" > <epsilon.emf.loadModel name= \"Java\" modelfile= \"org.eclipse.epsilon.eol.engine_java.xmi\" metamodeluri= \"...MoDisco/Java/0.2.incubation/java\" read= \"true\" store= \"false\" /> <epsilon.epl src= \"publicfield.epl\" exportAs= \"Patterns\" > <model ref= \"Java\" /> </epsilon.epl> <epsilon.evl src= \"constraints.evl\" > <model ref= \"Patterns\" /> <model ref= \"Java\" /> </epsilon.evl> <epsilon.etl src= \"java2uml.etl\" > <model ref= \"Patterns\" /> <model ref= \"Java\" /> </epsilon.etl> </target> </project> Line 1 below defines a set of constraints that will be applied to instances of the PublicField type from the Patterns model. As discussed above, these are all matched instances of the PublicField pattern. Line 4 specifies the condition that needs to be satisfied by instances of the pattern. Notice the self.getter and self.field expressions, which return the MethodDeclaration and FieldDeclaration bound to the instance of the pattern. Then, line 5 defines the message that should be produced for instances of PublicField that do not satisfy this constraint. context Patterns!PublicField { guard: self.field.type.isDefined() constraint GetterAndFieldSameType { check : self.getter.returnType.type = self.field.type.type message : \"The getter of \" + self.class.name + \".\" + self.field.fragments.at(0).name + \" does not have the same type as the field itself\" } } If validation is successful, both the Java and the Patterns models are passed on to an ETL transformation that transforms the Java model to a UML model, a fragment of which is presented below. The transformation encodes <field, setter, getter> triplets in the Java model as public properties in the UML model. 
As such, in line 6 of the transformation, the Patterns model is used to check whether field s has been matched under the PublicField pattern, and if so, the next line ignores the field's declared visibility and sets the visibility of the respective UML property to public . rule FieldDeclaration2Property transform s: Java!FieldDeclaration to t: Uml!Property { t.name = s.getName(); if (s.instanceOf(Patterns!PublicFieldField)) { t.visibility = Uml!VisibilityKind#public; } else { t.visibility = s.toUmlVisibility(); } ... } As Epsilon provides Ant tasks for all its languages, the same technique can be used to pass the result of pattern matching on to model-to-text transformations, as well as model comparison and model merging programs. To keep the running example simple and concise, the pattern does not check aspects such as matching/compatible parameter/return types in the field, setter and getter, but the reader should easily be able to envision how this would be supported through additional clauses in the match condition. \u21a9","title":"Interoperability with Other Model Management Tasks"},{"location":"doc/etl/","text":"The Epsilon Transformation Language (ETL) \u00b6 The aim of ETL is to contribute model-to-model transformation capabilities to Epsilon. More specifically, ETL can be used to transform an arbitrary number of input models into an arbitrary number of output models of different modelling languages and technologies at a high level of abstraction. Abstract Syntax \u00b6 As illustrated in the figure below, ETL transformations are organized in modules ( EtlModule ). A module can contain a number of transformation rules ( TransformRule ). Each rule has a unique name (in the context of the module) and also specifies one source and many target parameters. A transformation rule can also extend a number of other transformation rules and be declared as abstract , primary and/or lazy 1 . 
To limit its applicability to a subset of elements that conform to the type of the source parameter, a rule can optionally define a guard which is either an EOL expression or a block of EOL statements. Finally, each rule defines a block of EOL statements ( body ) where the logic for populating the property values of the target model elements is specified. Besides transformation rules, an ETL module can also optionally contain a number of pre and post named blocks of EOL statements which, as discussed later, are executed before and after the transformation rules respectively. These should not be confused with the pre-/post-condition annotations available for EOL user-defined operations. classDiagram class TransformRule { -name: String -abstract: Boolean -lazy: Boolean -primary: Boolean -greedy: Boolean -type: EolModelElementType -guard: ExecutableBlock<Boolean> -body: ExecutableBlock<Void> } class Parameter { -name: String -type: EolType } class NamedStatementBlockRule { -name: String -body: StatementBlock } EolModule <|-- ErlModule EtlModule --|> ErlModule Pre --|> NamedStatementBlockRule Post --|> NamedStatementBlockRule ErlModule -- Pre: pre * ErlModule -- Post: post * EtlModule -- TransformRule: rules * TransformRule -- Parameter: source TransformRule -- Parameter: targets * TransformRule -- TransformRule: extends * Concrete Syntax \u00b6 The concrete syntax of a transformation rule is displayed in the listing below. The optional abstract , lazy and primary attributes of the rule are specified using respective annotations. The name of the rule follows the rule keyword and the source and target parameters are defined after the transform and to keywords. Also, the rule can define an optional comma-separated list of rules it extends after the extends keyword. 
Inside the curly braces ({}), the rule can optionally specify its guard either as an EOL expression following a colon (:) (for simple guards) or as a block of statements in curly braces (for more complex guards). Finally, the body of the rule is specified as a sequence of EOL statements. (@abstract)? (@lazy)? (@primary)? rule <name> transform <sourceParameterName>:<sourceParameterType> to <targetParameterName>:<targetParameterType> (,<targetParameterName>:<targetParameterType>)* (extends <ruleName> (, <ruleName>)*)? { (guard (:expression)|({statementBlock}))? statement+ } Pre and post blocks have a simple syntax that, as presented in the listing below, consists of the identifier ( pre or post ), an optional name and the set of statements to be executed enclosed in curly braces. (pre|post) <name> { statement+ } Execution Semantics \u00b6 Rule and Block Overriding \u00b6 Similarly to EOL, an ETL module can import a number of other ETL modules. In this case, the importing ETL module inherits all the rules and pre/post blocks specified in the modules it imports (recursively). If the module specifies a rule or a pre/post block with the same name, the local rule/block overrides the imported one respectively. Rule Execution Scheduling \u00b6 When an ETL module is executed, the pre blocks of the module are executed first in the order in which they have been specified. Following that, each non-abstract and non-lazy rule is executed for all the elements on which it is applicable. To be applicable on a particular element, the element must have a type-of relationship with the type defined in the rule's sourceParameter (or a kind-of relationship if the rule is annotated as @greedy ) and must also satisfy the guard of the rule (and all the rules it extends). 
When a rule is executed on an applicable element, the target elements are initially created by instantiating the targetParameters of the rule, and then their contents are populated using the EOL statements of the body of the rule. Finally, when all rules have been executed, the post blocks of the module are executed in the order in which they have been declared. Source Elements Resolution \u00b6 Resolving target elements that have been (or can be) transformed from source elements by other rules is a frequent task in the body of a transformation rule. To automate this task and reduce coupling between rules, ETL contributes the equivalents() and equivalent() built-in operations that automatically resolve source elements to their transformed counterparts in the target models. When the equivalents() operation is applied on a single source element (as opposed to a collection of them), it inspects the established transformation trace (displayed in the figure below) and invokes the applicable rules (if necessary) to calculate the counterparts of the element in the target model. When applied to a collection, it returns a Bag containing Bag s that in turn contain the counterparts of the source elements contained in the collection. The equivalents() operation can also be invoked with an arbitrary number of rule names as parameters to invoke and return only the equivalents created by specific rules. Unlike the main execution scheduling scheme discussed above, the equivalents() operation invokes both lazy and non-lazy rules. It is worth noting that lazy rules are computationally expensive and should be used with caution as they can significantly degrade the performance of the overall transformation. With regard to the ordering of the results of the equivalents() operation, the returned elements appear in the respective order of the rules that have created them. 
An exception to this occurs when one of the rules is declared as primary , in which case its results precede the results of all other rules. classDiagram class Transformation { -source: Object -targets: Object[*] } class ITransformationStrategy { +transformModels(context : EtlContext) } EolContext <|-- EtlContext EtlContext -- TransformationTrace EtlContext -- ITransformationStrategy: strategy TransformationTrace -- Transformation: transformations * Transformation -- TransformRule: rule ETL also provides the convenient equivalent() operation which, when applied to a single element, returns only the first element of the respective result that would have been returned by the equivalents() operation discussed above. Also, when applied to a collection the equivalent() operation returns a flattened collection (as opposed to the result of equivalents() which is a Bag of Bag s in this case). As with the equivalents() operation, the equivalent() operation can also be invoked with or without parameters. The semantics of the equivalent() operation is further illustrated through a simple example. In this example, we need to transform a model that conforms to the Tree metamodel displayed below into a model that conforms to the Graph metamodel, also displayed below. classDiagram class Node { +label: String +incoming: Edge[*] +outgoing: Edge[*] } class Edge { +source: Node +target: Node } class Tree { +name: String +parent: Tree +children: Tree[*] } Tree -- Tree Node -- Edge Edge -- Node More specifically, we need to transform each Tree element to a Node , and an Edge that connects it with the Node that is equivalent to the tree's parent . This is achieved using the rule below. 
rule Tree2Node transform t : Tree!Tree to n : Graph!Node { n.label = t.label; if (t.parent.isDefined()) { var edge = new Graph!Edge; edge.source = n; edge.target = t.parent.equivalent(); } } In lines 1-3, the Tree2Node rule specifies that it can transform elements of the Tree type in the Tree model into elements of the Node type in the Graph model. In line 5, it specifies that the label of the created Node should be the same as the label of the source Tree. If the parent of the source Tree is defined (line 7), the rule creates a new Edge (line 8) and sets its source property to the created Node (line 9) and its target property to the equivalent Node of the source Tree 's parent (line 10). Overriding the semantics of the EOL Special Assignment Operator \u00b6 As discussed above, resolving the equivalent(s) of source model elements in the target model is a recurring task in model transformation. Furthermore, in most cases resolving the equivalent of a model element is immediately followed by assigning/adding the obtained target model elements to the value(s) of a property of another target model element. For example, in line 10 of the listing above, the equivalent obtained is immediately assigned to the target property of the generated Edge . To make transformation specifications more readable, ETL overrides the semantics of the SpecialAssignmentStatement ( ::= in terms of concrete syntax), to set its left-hand side, not to the element its right-hand side evaluates to, but to its equivalent as calculated using the equivalent() operation discussed above. Using this feature, line 10 of the Tree2Node rule can be rewritten as shown below. edge.target ::= t.parent; Interactive Transformations \u00b6 Using the user interaction facilities of EOL, an ETL transformation can become interactive by prompting the user for input during its execution. 
For example, in the listing below, we modify the Tree2Node rule by adding a guard part that uses the user-input facilities of EOL (more specifically the UserInput.confirm(String,Boolean) operation) to enable the user to select manually at runtime which of the Tree elements need to be transformed to respective Node elements in the target model and which do not. rule Tree2Node transform t : Tree!Tree to n : Graph!Node { guard : UserInput.confirm (\"Transform tree \" + t.label + \"?\", true) n.label = t.label; var target : Graph!Node ::= t.parent; if (target.isDefined()) { var edge = new Graph!Edge; edge.source = n; edge.target = target; } } The concept of lazy rules was first introduced in ATL \u21a9","title":"Model transformation (ETL)"},{"location":"doc/etl/#the-epsilon-transformation-language-etl","text":"The aim of ETL is to contribute model-to-model transformation capabilities to Epsilon. More specifically, ETL can be used to transform an arbitrary number of input models into an arbitrary number of output models of different modelling languages and technologies at a high level of abstraction.","title":"The Epsilon Transformation Language (ETL)"},{"location":"doc/etl/#abstract-syntax","text":"As illustrated in the figure below, ETL transformations are organized in modules ( EtlModule ). A module can contain a number of transformation rules ( TransformRule ). Each rule has a unique name (in the context of the module) and also specifies one source and many target parameters. A transformation rule can also extend a number of other transformation rules and be declared as abstract , primary and/or lazy 1 . To limit its applicability to a subset of elements that conform to the type of the source parameter, a rule can optionally define a guard which is either an EOL expression or a block of EOL statements. Finally, each rule defines a block of EOL statements ( body ) where the logic for populating the property values of the target model elements is specified. 
Besides transformation rules, an ETL module can also optionally contain a number of pre and post named blocks of EOL statements which, as discussed later, are executed before and after the transformation rules respectively. These should not be confused with the pre-/post-condition annotations available for EOL user-defined operations. classDiagram class TransformRule { -name: String -abstract: Boolean -lazy: Boolean -primary: Boolean -greedy: Boolean -type: EolModelElementType -guard: ExecutableBlock<Boolean> -body: ExecutableBlock<Void> } class Parameter { -name: String -type: EolType } class NamedStatementBlockRule { -name: String -body: StatementBlock } EolModule <|-- ErlModule EtlModule --|> ErlModule Pre --|> NamedStatementBlockRule Post --|> NamedStatementBlockRule ErlModule -- Pre: pre * ErlModule -- Post: post * EtlModule -- TransformRule: rules * TransformRule -- Parameter: source TransformRule -- Parameter: targets * TransformRule -- TransformRule: extends *","title":"Abstract Syntax"},{"location":"doc/etl/#concrete-syntax","text":"The concrete syntax of a transformation rule is displayed in the listing below. The optional abstract , lazy and primary attributes of the rule are specified using respective annotations. The name of the rule follows the rule keyword and the source and target parameters are defined after the transform and to keywords. Also, the rule can define an optional comma-separated list of rules it extends after the extends keyword. Inside the curly braces ({}), the rule can optionally specify its guard either as an EOL expression following a colon (:) (for simple guards) or as a block of statements in curly braces (for more complex guards). Finally, the body of the rule is specified as a sequence of EOL statements. (@abstract)? (@lazy)? (@primary)? 
rule <name> transform <sourceParameterName>:<sourceParameterType> to <targetParameterName>:<targetParameterType> (,<targetParameterName>:<targetParameterType>)* (extends <ruleName> (, <ruleName>)*)? { (guard (:expression)|({statementBlock}))? statement+ } Pre and post blocks have a simple syntax that, as presented in the listing below, consists of the identifier ( pre or post ), an optional name and the set of statements to be executed enclosed in curly braces. (pre|post) <name> { statement+ }","title":"Concrete Syntax"},{"location":"doc/etl/#execution-semantics","text":"","title":"Execution Semantics"},{"location":"doc/etl/#rule-and-block-overriding","text":"Similarly to EOL, an ETL module can import a number of other ETL modules. In this case, the importing ETL module inherits all the rules and pre/post blocks specified in the modules it imports (recursively). If the module specifies a rule or a pre/post block with the same name, the local rule/block overrides the imported one respectively.","title":"Rule and Block Overriding"},{"location":"doc/etl/#rule-execution-scheduling","text":"When an ETL module is executed, the pre blocks of the module are executed first in the order in which they have been specified. Following that, each non-abstract and non-lazy rule is executed for all the elements on which it is applicable. To be applicable on a particular element, the element must have a type-of relationship with the type defined in the rule's sourceParameter (or a kind-of relationship if the rule is annotated as @greedy ) and must also satisfy the guard of the rule (and all the rules it extends). When a rule is executed on an applicable element, the target elements are initially created by instantiating the targetParameters of the rule, and then their contents are populated using the EOL statements of the body of the rule. 
Finally, when all rules have been executed, the post blocks of the module are executed in the order in which they have been declared.","title":"Rule Execution Scheduling"},{"location":"doc/etl/#source-elements-resolution","text":"Resolving target elements that have been (or can be) transformed from source elements by other rules is a frequent task in the body of a transformation rule. To automate this task and reduce coupling between rules, ETL contributes the equivalents() and equivalent() built-in operations that automatically resolve source elements to their transformed counterparts in the target models. When the equivalents() operation is applied on a single source element (as opposed to a collection of them), it inspects the established transformation trace (displayed in the figure below) and invokes the applicable rules (if necessary) to calculate the counterparts of the element in the target model. When applied to a collection, it returns a Bag containing Bag s that in turn contain the counterparts of the source elements contained in the collection. The equivalents() operation can also be invoked with an arbitrary number of rule names as parameters to invoke and return only the equivalents created by specific rules. Unlike the main execution scheduling scheme discussed above, the equivalents() operation invokes both lazy and non-lazy rules. It is worth noting that lazy rules are computationally expensive and should be used with caution as they can significantly degrade the performance of the overall transformation. With regard to the ordering of the results of the equivalents() operation, the returned elements appear in the respective order of the rules that have created them. An exception to this occurs when one of the rules is declared as primary , in which case its results precede the results of all other rules. 
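As a hedged sketch of the rule-name parameters mentioned above (the rule name Tree2Node is borrowed from the example later in this section; the variable names are illustrative), restricting resolution to the equivalents produced by one specific rule could look like: var nodes = t.equivalents(\"Tree2Node\"); whereas t.equivalents() without arguments would consider all applicable rules. 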
classDiagram class Transformation { -source: Object -targets: Object[*] } class ITransformationStrategy { +transformModels(context : EtlContext) } EolContext <|-- EtlContext EtlContext -- TransformationTrace EtlContext -- ITransformationStrategy: strategy TransformationTrace -- Transformation: transformations * Transformation -- TransformRule: rule ETL also provides the convenient equivalent() operation which, when applied to a single element, returns only the first element of the respective result that would have been returned by the equivalents() operation discussed above. Also, when applied to a collection, the equivalent() operation returns a flattened collection (as opposed to the result of equivalents() which is a Bag of Bag s in this case). As with the equivalents() operation, the equivalent() operation can also be invoked with or without parameters. The semantics of the equivalent() operation is further illustrated through a simple example. In this example, we need to transform a model that conforms to the Tree metamodel displayed below into a model that conforms to the Graph metamodel, also displayed below. classDiagram class Node { +label: String +incoming: Edge[*] +outgoing: Edge[*] } class Edge { +source: Node +target: Node } class Tree { +name: String +parent: Tree +children: Tree[*] } Tree -- Tree Node -- Edge Edge -- Node More specifically, we need to transform each Tree element to a Node , and an Edge that connects it with the Node that is equivalent to the tree's parent . This is achieved using the rule below. rule Tree2Node transform t : Tree!Tree to n : Graph!Node { n.label = t.label; if (t.parent.isDefined()) { var edge = new Graph!Edge; edge.source = n; edge.target = t.parent.equivalent(); } } In lines 1-3, the Tree2Node rule specifies that it can transform elements of the Tree type in the Tree model into elements of the Node type in the Graph model. 
In line 5, it specifies that the label of the created Node should be the same as the label of the source Tree. If the parent of the source Tree is defined (line 7), the rule creates a new Edge (line 8) and sets its source property to the created Node (line 9) and its target property to the equivalent Node of the source Tree 's parent (line 10).","title":"Source Elements Resolution"},{"location":"doc/etl/#overriding-the-semantics-of-the-eol-special-assignment-operator","text":"As discussed above, resolving the equivalent(s) of source model elements in the target model is a recurring task in model transformation. Furthermore, in most cases resolving the equivalent of a model element is immediately followed by assigning/adding the obtained target model elements to the value(s) of a property of another target model element. For example, in line 10 of the listing above, the equivalent obtained is immediately assigned to the target property of the generated Edge . To make transformation specifications more readable, ETL overrides the semantics of the SpecialAssignmentStatement ( ::= in terms of concrete syntax), to set its left-hand side, not to the element its right-hand side evaluates to, but to its equivalent as calculated using the equivalent() operation discussed above. Using this feature, line 10 of the Tree2Node rule can be rewritten as shown below. edge.target ::= t.parent;","title":"Overriding the semantics of the EOL Special Assignment Operator"},{"location":"doc/etl/#interactive-transformations","text":"Using the user interaction facilities of EOL, an ETL transformation can become interactive by prompting the user for input during its execution. 
For example, in the listing below, we modify the Tree2Node rule by adding a guard part that uses the user-input facilities of EOL (more specifically the UserInput.confirm(String,Boolean) operation) to enable the user to select manually at runtime which of the Tree elements need to be transformed to respective Node elements in the target model and which do not. rule Tree2Node transform t : Tree!Tree to n : Graph!Node { guard : UserInput.confirm (\"Transform tree \" + t.label + \"?\", true) n.label = t.label; var target : Graph!Node ::= t.parent; if (target.isDefined()) { var edge = new Graph!Edge; edge.source = n; edge.target = target; } } The concept of lazy rules was first introduced in ATL \u21a9","title":"Interactive Transformations"},{"location":"doc/eunit/","text":"The Epsilon Unit Testing Framework (EUnit) \u00b6 EUnit is a unit testing framework specifically designed to test model management tasks, based on EOL and the Ant workflow tasks. It provides assertions for comparing models, files and directories. Tests can be reused with different sets of models and input data, and differences between the expected and actual models can be graphically visualized. This chapter describes how tests are organized and written and shows two examples of how a model-to-model transformation can be tested with EUnit. It ends with a discussion of how EUnit can be extended to support other modelling and model management technologies. Common Issues \u00b6 While each type of model management task does have specific complexity, below is a list of common concerns: There is usually a large number of models to be handled. Some may be created by hand, some may be generated using hand-written programs, and some may be generated automatically following certain coverage criteria. A single model or set of models may be used in several tasks. 
For instance, a model may be validated before performing an in-place transformation to assist the user, and later on it may be transformed to another model or merged with a different model. This requires having at least one test for each valid combination of models and sets of tasks. Test oracles are more complex than in traditional unit testing: instead of checking scalar values or simple lists, entire graphs of model objects or file trees may have to be compared. In some cases, complex properties in the generated artifacts may have to be checked. Models and model management tasks may use a wide range of technologies. Models may be based on Ecore, XML files or Java object graphs, among many others. At the same time, tasks may use technologies from different platforms, such as Epsilon or AMMA. Many of these technologies offer high-level tools for running and debugging the different tasks using several models. However, users wishing to do automated unit testing need to learn low-level implementation details about their modelling and model management technologies. This increases the initial cost of testing these tasks and hampers the adoption of new technologies. Existing testing tools tend to focus on the testing technique itself, and lack integration with external systems. Some tools provide graphical user interfaces, but most do not generate reports which can be consumed by a continuous integration server, for instance. Testing with JUnit \u00b6 The previous issues are easier to understand with a concrete example. This section shows how a simple ETL transformation between two EMF models would normally be tested using JUnit 4, and points out several issues due to JUnit's limitations as a general-purpose unit testing framework for Java programs. For the sake of brevity, only an outline of the JUnit test suite is included. All JUnit test suites are defined as Java classes. 
This test suite has three methods: The test setup method (marked with the @Before JUnit annotation) loads the required models by creating and configuring instances of . After that, it prepares the transformation by creating and configuring an instance of , adding the input and output models to its model repository. The test case itself (marked with @Test ) runs the ETL transformation and uses the generic comparison algorithm implemented by EMF Compare to perform the model comparison. The test teardown method (marked with @After ) disposes of the models. Several issues can be identified in each part of the test suite. First, test setup is tightly bound to the technologies used: it depends on the API of the and classes, which are both part of Epsilon. Later refactorings in these classes may break existing tests. The test case can only be used for a single combination of input and output models. Testing several combinations requires either repeating the same code and therefore making the suite less maintainable, or using parametric testing, which may be wasteful if not all tests need the same combinations of models. Model comparison requires the user to manually select a model comparison engine and integrate it with the test. For comparing EMF models, EMF Compare is easy to use and readily available. However, generic model comparison engines may not be available for some modelling technologies, or may be harder to integrate. Finally, instead of comparing the obtained and expected models, several properties could have been checked in the obtained model. However, querying models through Java code can be quite verbose. Selected Approach \u00b6 Several approaches could be followed to address these issues. Our first instinct would be to extend JUnit and reuse all the tooling available for it. A custom test runner would simplify setup and teardown, and modelling platforms would integrate their technologies into it. 
Since Java is very verbose when querying models, the custom runner should run tests in a higher-level language, such as EOL. However, JUnit is very tightly coupled to Java, and this would impose limits on the level of integration we could obtain. For instance, errors in the model management tasks or the EOL tests could not be reported from their original source, but rather from the Java code which invoked them. Another problem with this approach is that new integration code would need to be written for each of the existing platforms. Alternatively, we could add a new language exclusively dedicated to testing to the Epsilon family. Being based on EOL, model querying would be very concise, and with a test runner written from scratch, test execution would be very flexible. However, this would still require all platforms to write new code to integrate with it, and this code would be tightly coupled to Epsilon. As a middle ground, we could decorate EOL to guide its execution through a new test runner, while reusing the Apache Ant tasks already provided by several of the existing platforms, such as AMMA or Epsilon. Like Make, Ant is a tool focused on automating the execution of processes such as program builds. Unlike Make, Ant defines processes using XML buildfiles with sets of interrelated targets . Each target contains in turn a sequence of tasks . Many Ant tasks and Ant-based tools already exist, and it is easy to create a new Ant task. Among these three approaches, EUnit follows the last one. Ant tasks take care of model setup and management, and tests are written in EOL and executed by a new test runner, written from the ground up. Test Organization \u00b6 EUnit has a rich data model: test suites are organized as trees of tests, and each test is divided into many parts which can be extended by the user. This section is dedicated to describing how test suites and tests are organized. The next section indicates how they are written. 
Test Suites \u00b6 EUnit test suites are organized as trees: inner nodes group related test cases and define data bindings. Leaf nodes define model bindings and run the test cases. Data bindings repeat all test cases with different values in one or more variables. They can implement parametric testing, as in JUnit 4. EUnit can nest several data bindings, running all test cases once for each combination. Model bindings are specific to EUnit: they allow developers to repeat a single test case with different subsets of models. Data and model bindings can be combined. One interesting approach is to set the names of the models to be used in the model binding from the data binding, as a quick way to try several test cases with the same subsets of models. The figure below shows an example of an EUnit test tree: nodes with data bindings are marked with data , and nodes with model bindings are marked with model . graph TD data1[data<br/>x=1] data2[data<br/>x=2] testa1[test A] testb1[test B] testa2[test A] testb2[test B] modelx1[model X] modely1[model Y] modelx2[model X] modely2[model Y] root --> data1 root --> data2 data1 --> testa1 data1 --> testb1 data2 --> testa2 data2 --> testb2 testa1 --> modelx1 testa1 --> modely1 testa2 --> modelx2 testa2 --> modely2 EUnit will perform a preorder traversal of this tree, running the following tests: A with x = 1 and model X. A with x = 1 and model Y. B with x = 1 and both models. A with x = 2 and model X. A with x = 2 and model Y. B with x = 2 and both models. Optionally, EUnit can filter tests by name, running only A or B . Similarly to JUnit, EUnit logs start and finish times for each node in the tree, so the most expensive test cases can be quickly detected. However, EUnit logs CPU time 1 in addition to the usual wallclock time. Parametric testing is not to be confused with theories : both repeat a test case with different values, but results are reported quite differently. 
While parametric testing produces separate test cases with independent results, theories produce aggregated tests which only pass if the original test case passes for every data point. The figures below illustrate these differences. EUnit does not support theories yet: however, they can be approximated with data bindings. graph TD data1[data 1] data2[data 2] testa1[test 1] testb1[test 2] testa2[test 1] testb2[test 2] root --> data1 root --> data2 data1 --> testa1 data1 --> testb1 data2 --> testa2 data2 --> testb2 Parametric Testing graph TD data1[test 1] data2[test 2] testa1[data 1] testb1[data 2] testa2[data 1] testb2[data 2] root --> data1 root --> data2 data1 --> testa1 data1 --> testb1 data2 --> testa2 data2 --> testb2 Theories Test Cases \u00b6 The execution of a test case is divided into the following steps: Apply the data bindings of its ancestors. Run the model setup sections defined by the user. Apply the model bindings of this node. Run the regular setup sections defined by the user. Run the test case itself. Run the teardown sections defined by the user. Tear down the data bindings and models for this test. An important difference between JUnit and EUnit is that setup is split into two parts: model setup and regular setup. This split allows users to add code before and after model bindings are applied. Normally, the model setup sections will load all the models needed by the test suite, and the regular setup sections will further prepare the models selected by the model binding. Explicit teardown sections are usually not needed, as models are disposed automatically by EUnit. EUnit includes them for consistency with the xUnit frameworks. Due to its focus on model management, model setup in EUnit is very flexible. Developers can combine several ways to set up models, such as model references, individual Apache Ant tasks, Apache Ant targets or Human-Usable Text Notation (HUTN) fragments. A test case may produce one among several results. 
SUCCESS is obtained if all assertions passed and no exceptions were thrown. FAILURE is obtained if an assertion failed. ERROR is obtained if an unexpected exception was thrown while running the test. Finally, tests may be SKIPPED by the user. Test Specification \u00b6 In the previous section, we described how test suites and test cases are organized. In this section, we will show how to write them. As discussed before, after evaluating several approaches, we decided to combine the expressive power of EOL and the extensibility of Apache Ant. For this reason, EUnit test suites are split into two files: an Ant buildfile and an EOL script with some special-purpose annotations. The next subsections describe the contents of these two files and revisit the previous example with EUnit. Ant Buildfile \u00b6 EUnit uses standard Ant buildfiles: running EUnit is as simple as using its Ant task. Users may run EUnit more than once in a single Ant launch: the graphical user interface will automatically aggregate the results of all test suites. EUnit Invocations \u00b6 An example invocation of the EUnit Ant task using the most common features is shown in the listing below. Users will normally only use some of these features at a time, though. Optional attributes have been listed between brackets. Some nested elements can be repeated 0+ times ( * ) or 0-1 times ( ? ). Valid alternatives for an attribute are separated with | . <epsilon.eunit src= \"...\" [ failOnErrors= \"...\" ] [ package= \"..\" ] [ toDir= \"...\" ] [ report= \"yes|no\" ] > ( <model ref= \"OldName\" [ as= \"NewName\" ] /> )* ( <uses ref= \"x\" [ as= \"y\" ] /> )* ( <exports ref= \"z\" [ as= \"w\" ] /> )* ( <parameter name= \"myparam\" value= \"myvalue\" /> )* ( <modelTasks> <!-- Zero or more Ant tasks --> </modelTasks> )? </epsilon.eunit> The EUnit Ant task is based on the Epsilon abstract executable module task, inheriting some useful features. 
The attribute src points to the path of the EOL file, and the optional attribute failOnErrors can be set to false to prevent EUnit from aborting the Ant launch if a test case fails. EUnit also inherits support for importing and exporting global variables through the <uses> and <exports> elements: the original name is set in ref , and the optional as attribute allows for using a different name. For receiving parameters as name-value pairs, the <parameter> element can be used. Model references (using the <model> nested element) are also inherited from the Epsilon abstract executable module task. These allow model management tasks to refer by name to models previously loaded in the Ant buildfile. However, EUnit implicitly reloads the models after each test case. This ensures that test cases are isolated from each other. The EUnit Ant task adds several new features to customize the test result reports and perform more advanced model setup. By default, EUnit generates reports in the XML format of the Ant <junit> task. This format is also used by many other tools, such as the TestNG unit testing framework, the Jenkins continuous integration server or the JUnit Eclipse plug-ins. To suppress these reports, report must be set to no. By default, the XML report is generated in the same directory as the Ant buildfile, but it can be changed with the toDir attribute. Test names in JUnit are formed from the Java package, class and method: EUnit uses the filename of the EOL script as the class and the name of the EOL operation as the method. By default, the package is set to the string \"default\": users are encouraged to customize it with the package attribute. The optional <modelTasks> nested element contains a sequence of Ant tasks which will be run after reloading the model references and before running the model setup sections in the EOL file. This allows users to run workflows more advanced than simply reloading model references. 
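Putting these attributes together, a filled-in invocation could look as follows. This is only an illustrative sketch: the file names, model names, variable names and parameter values are made up, and only the elements documented above are used.

```xml
<epsilon.eunit src= \"suite.eol\" package= \"my.project.tests\" toDir= \"${basedir}/reports\" >
  <!-- Rename a previously loaded model for the duration of the suite -->
  <model ref= \"Tree\" as= \"Input\" />
  <!-- Import a global variable exported by an earlier Epsilon task -->
  <uses ref= \"config\" />
  <!-- Export a variable for later tasks, under a different name -->
  <exports ref= \"result\" as= \"lastResult\" />
  <!-- Pass a name-value parameter to the EOL script -->
  <parameter name= \"iterations\" value= \"10\" />
  <!-- Extra model setup, run after model references are reloaded -->
  <modelTasks>
    <epsilon.emf.loadModel name= \"Expected\" modelfile= \"expected.model\" metamodelfile= \"tree.ecore\" read= \"true\" store= \"false\" />
  </modelTasks>
</epsilon.eunit>
```

Most suites will only need a few of these nested elements at a time; the point is that they can be freely combined within a single invocation. 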
Helper Targets \u00b6 Ant buildfiles for EUnit may include helper targets . These targets can be invoked using the runTarget operation from anywhere in the EOL script. Helper targets are quite versatile: called from an EOL model setup section, they allow for reusing model loading fragments between different EUnit test suites. They can also be used to invoke the model management tasks under test. EOL script \u00b6 The Epsilon Object Language script is the second half of the EUnit test suite. EOL annotations are used to tag some of the operations as data binding definitions ( @data or @Data ), additional model setup sections ( @model / @Model ), test setup and teardown sections ( @setup / @Before and @teardown / @After ) and test cases ( @test / @Test ). Suite setup and teardown sections can also be defined with @suitesetup / @BeforeClass and @suiteteardown / @AfterClass annotations: these operations will be run before and after all tests, respectively. Data bindings \u00b6 Data bindings repeat all test cases with different values in some variables. To define a data binding, users must define an operation which returns a sequence of elements and is marked with @data variable. All test cases will be repeated once for each element of the returned sequence, setting the specified variable to the corresponding element. The listing below shows two nested data bindings and a test case which will be run four times: with x=1 and y=5, x=1 and y=6, x=2 and y=5 and finally x=2 and y=6. The example shows how x and y could be used by the setup section to generate an input model for the test. This can be useful if the intent of the test is ensuring that a certain property holds in a class of models, rather than a single model. 
@data x operation firstLevel() { return 1.to(2); } @data y operation secondLevel() { return 5.to(6); } @setup operation generateModel() { /* generate model using x and y */ } @test operation mytest() { /* test with the generated model */ } Alternatively, if both x and y were to use the same sets of values, we could add two @data annotations to the same operation. For instance, the listing below shows how we could test with 4 cases: x=1 and y=1, x=1 and y=2, x=2 and y=1 and x=2 and y=2. @data x @data y operation levels() { return 1.to(2); } @setup operation generateModel() { /* generate model using x and y */ } @test operation mytest() { /* test with the generated model */ } Model bindings \u00b6 Model bindings repeat a test case with different subsets of models. They can be defined by annotating a test case with $with map or $onlyWith map one or more times, where map is an EOL expression that produces a MAP . For each key-value pair key = value , EUnit will rename the model named value to key . The difference between $with and $onlyWith is how they handle the models not mentioned in the MAP : $with will preserve them as is, and $onlyWith will make them unavailable during the test. $onlyWith is useful for tightly restricting the set of available models in a test and for avoiding ambiguous type references when handling multiple models using the same metamodel. The listing below shows two tests which will each be run twice. The first test uses $with , which preserves models not mentioned in the MAP: the first time, model \"A\" will be the default model and model \"B\" will be the \"Other\" model, and the second time, model \"B\" will be the default model and model \"A\" will be the \"Other\" model. The second test uses two $onlyWith annotations: on the first run, only \"A\" will be available as \"Model\" and \"B\" will be unavailable, and on the second run, only \"B\" will be available as \"Model\" and \"A\" will be unavailable. 
$with Map {\"\" = \"A\", \"Other\" = \"B\"} $with Map {\"\" = \"B\", \"Other\" = \"A\"} @test operation mytest() { /* use the default and Other models, while keeping the rest as is */ } $onlyWith Map { \"Model\" = \"A\" } $onlyWith Map { \"Model\" = \"B\" } @test operation mytest2() { // first time: A as 'Model', B is unavailable // second time: B as 'Model', A is unavailable } Additional variables and built-in operations \u00b6 EUnit provides several variables and operations which are useful for testing. These are listed in the table below. Signature Description runTarget(name : String) Runs the specified target of the Ant buildfile which invoked EUnit. exportVariable(name : String) Exports the specified variable, to be used by another executable module. useVariable(name : String) Imports the specified variable, which should have been previously exported by another executable module. loadHutn(name : String, hutn : String) Loads an EMF model with the specified name, by parsing the second argument as an HUTN fragment. antProject : org.apache.tools.ant.Project Global variable which refers to the Ant project being executed. This can be used to create and run Ant tasks from inside the EOL code. Assertions \u00b6 EUnit implements some common assertions for equality and inequality, with special versions for comparing floating-point numbers. EUnit also supports a limited form of exception testing with assertError , which checks that the expression inside it throws an exception. Custom assertions can be defined by the user with the fail operation, which fails a test with a custom message. The available assertions are shown in the table below. Signature Description assertEqualDirectories(expectedPath : String,obtainedPath : String) Fails the test if the contents of the directory in expectedPath differ from those of the directory in obtainedPath . Directory comparisons are performed on recursive snapshots of both directories. 
assertEqualFiles(expectedPath : String,obtainedPath : String) Fails the test if the contents of the file in expectedPath differ from those of the file in obtainedPath . File comparisons are performed on snapshots of both files. assertEqualModels([msg : String,]expectedModel : String,obtainedModel : String[, options : Map]) Fails the test with the optional message msg if the model named expectedModel is not equal to the model named obtainedModel . Model comparisons are performed on snapshots of the resource sets of both models. During EMF comparisons, XMI identifiers are ignored. Additional comparator-specific options can be specified through options . assertEquals([msg : String,]expected : Any,obtained : Any) Fails the test with the optional message msg if the values of expected and obtained are not equal. assertEquals([msg : String,]expected : Real,obtained : Real,ulps : Integer) Fails the test with the optional message msg if the values of expected and obtained differ in more than ulps units of least precision. See this site for details. assertError(expr : Any) Fails the test if no exception is thrown during the evaluation of expr . assertFalse([msg : String,]cond : Boolean) Fails the test with the optional message msg if cond is true . It is a negated version of assertTrue. assertLineWithMatch([msg : String,]path : String,regexp : String) Fails the test with the optional message msg if the file at path does not have a line containing a substring matching the regular expression regexp 2 . assertMatchingLine([msg : String,]path : String,regexp : String) Fails the test with the optional message msg if the file at path does not have a line that matches the regular expression regexp 3 from start to finish. assertNotEqualDirectories(expectedPath : String,obtainedPath : String) Negated version of assertEqualDirectories. assertNotEqualFiles(expectedPath : String,obtainedPath : String) Negated version of assertEqualFiles. assertNotEqualModels([msg : String,]expectedModel : String,obtainedModel : String) Negated version of assertEqualModels. 
assertNotEquals([msg : String,]expected : Any,obtained : Any) Negated version of assertEquals([msg : String,] expected : Any, obtained : Any). assertNotEquals([msg : String,]expected : Real,obtained : Real,ulps : Integer) Negated version of assertEquals([msg : String,] expected : Real, obtained : Real, ulps : Integer). assertTrue([msg : String,]cond : Boolean) Fails the test with the optional message msg if cond is false . fail(msg : String) Fails a test with the message msg . The table below lists the available option keys which can be used with the model equality assertions, by comparator. Comparator and Key Usage EMF, \"whitespace\" When set to \"ignore\", differences in EString attribute values due to whitespace will be ignored. Disabled by default. EMF, \"ignoreAttributeValueChanges\" Can contain a list of strings of the form \"package.class.attribute\". Differences in the values for these attributes will be ignored. However, if the attribute is set on one side and not on the other, the difference will be reported as normal. Empty by default. EMF, \"unorderedMoves\" When set to \"ignore\", differences in the order of the elements within an unordered EReference will be ignored. Enabled by default. More importantly, EUnit implements specific assertions for comparing models, files and trees of files. Model comparison is not implemented by the assertions themselves: it is an optional service implemented by some EMC drivers. Currently, EMF models will automatically use EMF Compare as their comparison engine. The rest of the EMC drivers do not support comparison yet. The main advantage of having an abstraction layer implement model comparison as a service is that the test case definition is decoupled from the concrete model comparison engine used. Model, file and directory comparisons take a snapshot of their operands before comparing them, so EUnit can show the differences right at the moment when the comparison was performed. 
This is especially important when some of the models are generated on the fly by the EUnit test suite, or when a test case for code generation may overwrite the results of the previous one. The following figure shows a screenshot of the EUnit graphical user interface. On the left, an Eclipse view shows the results of several EUnit test suites. We can see that the load-models-with-hutn suite failed. The Compare button to the right of \"Failure Trace\" can be pressed to show the differences between the expected and obtained models, as shown on the right side. EUnit implements a pluggable architecture where difference viewers are automatically selected based on the types of the operands. There are difference viewers for EMF models, file trees and a fallback viewer which converts both operands to strings. Examples \u00b6 Models and Tasks in the Buildfile \u00b6 After describing the basic syntax, we will show how to use EUnit to test an ETL transformation. The Ant buildfile is shown in the listing below. It has two targets: run-tests invokes the EUnit suite, and tree2graph is a helper target which transforms model Tree into model Graph using ETL. The <modelTasks> nested element is used to load the input, expected output and output EMF models. Graph is loaded with read set to false : the model will be initially empty, and will be populated by the ETL transformation. 
<project> <target name= \"run-tests\" > <epsilon.eunit src= \"test-external.eunit\" > <modelTasks> <epsilon.emf.loadModel name= \"Tree\" modelfile= \"tree.model\" metamodelfile= \"tree.ecore\" read= \"true\" store= \"false\" /> <epsilon.emf.loadModel name= \"GraphExpected\" modelfile= \"graph.model\" metamodelfile= \"graph.ecore\" read= \"true\" store= \"false\" /> <epsilon.emf.loadModel name= \"Graph\" modelfile= \"transformed.model\" metamodelfile= \"graph.ecore\" read= \"false\" store= \"false\" /> </modelTasks> </epsilon.eunit> </target> <target name= \"tree2graph\" > <epsilon.etl src= \"${basedir}/resources/Tree2Graph.etl\" > <model ref= \"Tree\" /> <model ref= \"Graph\" /> </epsilon.etl> </target> </project> The EOL script is shown in the listing below: it invokes the helper target and checks that the obtained model is equal to the expected model. Internally, EMC will perform the comparison using EMF Compare. @test operation transformationWorksAsExpected() { runTarget(\"tree2graph\"); assertEqualModels(\"GraphExpected\", \"Graph\"); } Models and Tasks in the EOL Script \u00b6 In the previous section, the EOL file is kept very concise because the model setup and model management tasks under test were specified in the Ant buildfile. In this section, we will inline the models and the tasks into the EOL script instead. The Ant buildfile is shown in the listing below. It is now very simple: all it needs to do is run the EOL script. However, since we will parse HUTN in the EOL script, we must make sure the nsURIs of the metamodels are registered. <project> <target name= \"run-tests\" > <epsilon.emf.register file= \"tree.ecore\" /> <epsilon.emf.register file= \"graph.ecore\" /> <epsilon.eunit src= \"test-inlined.eunit\" /> </target> </project> The EOL script used is shown below. Instead of loading models through the Ant tasks, the loadHutn operation has been used to load the models. 
The test itself is almost the same, but instead of running a helper target, it invokes an operation which creates and runs the ETL Ant task through the antProject variable provided by EUnit, taking advantage of the support in EOL for invoking Java code through reflection. @model operation loadModels() { loadHutn(\"Tree\", '@Spec {Metamodel {nsUri: \"Tree\" }} Model { Tree \"t1\" { label : \"t1\" } Tree \"t2\" { label : \"t2\" parent : Tree \"t1\" } } '); loadHutn(\"GraphExpected\", '@Spec {Metamodel {nsUri: \"Graph\"}} Graph { nodes : Node \"t1\" { name : \"t1\" outgoing : Edge { source : Node \"t1\" target : Node \"t2\" } }, Node \"t2\" { name : \"t2\" } } '); loadHutn(\"Graph\", '@Spec {Metamodel {nsUri: \"Graph\"}}'); } @test operation transformationWorksAsExpected() { runETL(); assertEqualModels(\"GraphExpected\", \"Graph\"); } operation runETL() { var etlTask := antProject.createTask(\"epsilon.etl\"); etlTask.setSrc(new Native('java.io.File')( antProject.getBaseDir(), 'resources/etl/Tree2Graph.etl')); etlTask.createModel().setRef(\"Tree\"); etlTask.createModel().setRef(\"Graph\"); etlTask.execute(); } Extending EUnit \u00b6 EUnit is based on the Epsilon platform, but it is designed to accommodate other technologies. In this section we will explain several strategies to add support for these technologies to EUnit. EUnit uses the Epsilon Model Connectivity abstraction layer to handle different modelling technologies. Adding support for a different modelling technology only requires implementing another driver for EMC. Depending on the modelling technology, the driver can provide optional services such as model comparison, caching or reflection. EUnit uses Ant as a workflow language: all model management tasks must be exposed through Ant tasks. It is highly encouraged, however, that the Ant task is aware of the EMC model repository linked to the Ant project. 
Otherwise, users will have to shuffle the models out from and back into the repository between model management tasks. As an example, a helper target for an ATL transformation with the existing Ant tasks needs to: Save the input model in the EMC model repository to a file, by invoking the <epsilon.storeModel> task. Load the metamodels and the input model with <atl.loadModel> . Run the ATL transformation with <atl.launch> . Save the result of the ATL transformation with <atl.saveModel> . Load it into the EMC model repository with <epsilon.emf.loadModel> . The listing below shows the Ant buildfile which would be required for running these steps, showing that while EUnit can use the existing ATL tasks as-is, the required helper target is considerably longer than the one shown above. Ideally, Ant tasks should be adapted or wrapped to use models directly from the EMC model repository. <project> <!-- ... omitted ... --> <target name= \"atl\" > <!-- Create temporary files for input and output models --> <tempfile property= \"atl.temp.srcfile\" /> <tempfile property= \"atl.temp.dstfile\" /> <!-- Save input model to a file --> <epsilon.storeModel model= \"Tree\" target= \"${atl.temp.srcfile}\" /> <!-- Load the metamodels and the source model --> <atl.loadModel name= \"TreeMM\" metamodel= \"MOF\" path= \"metamodels/tree.ecore\" /> <atl.loadModel name= \"GraphMM\" metamodel= \"MOF\" path= \"metamodels/graph.ecore\" /> <atl.loadModel name= \"Tree\" metamodel= \"TreeMM\" path= \"${atl.temp.srcfile}\" /> <!-- Run ATL and save the model --> <atl.launch path= \"transformation/tree2graph.atl\" > <inmodel name= \"IN\" model= \"Tree\" /> <outmodel name= \"OUT\" model= \"Graph\" metamodel= \"GraphMM\" /> </atl.launch> <atl.saveModel model= \"Graph\" path= \"${atl.temp.dstfile}\" /> <!-- Load it back into the EUnit suite --> <epsilon.emf.loadModel name= \"Graph\" modelfile= \"${atl.temp.dstfile}\" metamodeluri= \"Graph\" read= \"true\" store= \"false\" /> <!-- Delete temporary files --> <delete 
file= \"${atl.temp.srcfile}\" /> <delete file= \"${atl.temp.dstfile}\" /> </target> </project> Another advantage in making model management tasks EMC-aware is that they can easily \u201cexport\u201d their results as models, making them easier to test. For instance, the EVL Ant task allows for exporting its results as a model by setting the attribute exportAsModel to true . This way, EOL can query the results as any regular model. This is simpler than transforming the validated model to a problem metamodel. The example in the listing below checks that a single warning was produced due to the expected rule ( LabelsStartWithT ) and the expected model element. @test operation valid() { var tree := new Tree!Tree; tree.label := '1n'; runTarget('validate-tree'); var errors := EVL!EvlUnsatisfiedConstraint.allInstances; assertEquals(1, errors.size); var error := errors.first; assertEquals(tree, error.instance); assertEquals(false, error.constraint.isCritique); assertEquals('LabelsStartWithT', error.constraint.name); } CPU time only measures the time elapsed in the thread used by EUnit, and is more stable with varying system load in single-threaded programs. \u21a9 See java.util.regex.Pattern for details about the accepted syntax for regular expressions. \u21a9 See the footnote for assertLineWithMatch for details about the syntax of the regular expressions. \u21a9","title":"Unit testing (EUnit)"},{"location":"doc/eunit/#the-epsilon-unit-testing-framework-eunit","text":"EUnit is a unit testing framework specifically designed to test model management tasks, based on EOL and the Ant workflow tasks. It provides assertions for comparing models, files and directories. Tests can be reused with different sets of models and input data, and differences between the expected and actual models can be graphically visualized. This chapter describes how tests are organized and written and shows two examples of how a model-to-model transformation can be tested with EUnit. 
This chapter ends with a discussion of how EUnit can be extended to support other modelling and model management technologies.","title":"The Epsilon Unit Testing Framework (EUnit)"},{"location":"doc/eunit/#common-issues","text":"While each type of model management task does have specific complexity, below is a list of common concerns: There is usually a large number of models to be handled. Some may be created by hand, some may be generated using hand-written programs, and some may be generated automatically following certain coverage criteria. A single model or set of models may be used in several tasks. For instance, a model may be validated before performing an in-place transformation to assist the user, and later on it may be transformed to another model or merged with a different model. This requires having at least one test for each valid combination of models and sets of tasks. Test oracles are more complex than in traditional unit testing: instead of checking scalar values or simple lists, entire graphs of model objects or file trees may have to be compared. In some cases, complex properties in the generated artifacts may have to be checked. Models and model management tasks may use a wide range of technologies. Models may be based on Ecore, XML files or Java object graphs, among many others. At the same time, tasks may use technologies from different platforms, such as Epsilon, or AMMA. Many of these technologies offer high-level tools for running and debugging the different tasks using several models. However, users wishing to do automated unit testing need to learn low-level implementation details about their modelling and model management technologies. This increases the initial cost of testing these tasks and hampers the adoption of new technologies. Existing testing tools tend to focus on the testing technique itself, and lack integration with external systems. 
Some tools provide graphical user interfaces, but most do not generate reports which can be consumed by a continuous integration server, for instance.","title":"Common Issues"},{"location":"doc/eunit/#testing-with-junit","text":"The previous issues are easier to understand with a concrete example. This section shows how a simple transformation between two EMF models, written in ETL, would normally be tested using JUnit 4, and points out several issues due to JUnit's limitations as a general-purpose unit testing framework for Java programs. For the sake of brevity, only an outline of the JUnit test suite is included. All JUnit test suites are defined as Java classes. This test suite has three methods: The test setup method (marked with the @Before JUnit annotation) loads the required models by creating and configuring instances of EmfModel. After that, it prepares the transformation by creating and configuring an instance of EtlModule, adding the input and output models to its model repository. The test case itself (marked with @Test ) runs the ETL transformation and uses the generic comparison algorithm implemented by EMF Compare to perform the model comparison. The test teardown method (marked with @After ) disposes of the models. Several issues can be identified in each part of the test suite. First, test setup is tightly bound to the technologies used: it depends on the API of the EmfModel and EtlModule classes, which are both part of Epsilon. Later refactorings in these classes may break existing tests. The test case can only be used for a single combination of input and output models. Testing several combinations requires either repeating the same code and therefore making the suite less maintainable, or using parametric testing, which may be wasteful if not all tests need the same combinations of models. Model comparison requires the user to manually select a model comparison engine and integrate it with the test. For comparing EMF models, EMF Compare is easy to use and readily available. 
However, generic model comparison engines may not be available for some modelling technologies, or may be harder to integrate. Finally, instead of comparing the obtained and expected models, several properties could have been checked in the obtained model. However, querying models through Java code can be quite verbose.","title":"Testing with JUnit"},{"location":"doc/eunit/#selected-approach","text":"Several approaches could be followed to address these issues. Our first instinct would be to extend JUnit and reuse all the tooling available for it. A custom test runner would simplify setup and teardown, and modelling platforms would integrate their technologies into it. Since Java is very verbose when querying models, the custom runner should run tests in a higher-level language, such as EOL. However, JUnit is very tightly coupled to Java, and this would impose limits on the level of integration we could obtain. For instance, errors in the model management tasks or the EOL tests could not be reported from their original source, but rather from the Java code which invoked them. Another problem with this approach is that new integration code would need to be written for each of the existing platforms. Alternatively, we could add a new language exclusively dedicated to testing to the Epsilon family. Being based on EOL, model querying would be very concise, and with a test runner written from scratch, test execution would be very flexible. However, this would still require all platforms to write new code to integrate with it, and this code would be tightly coupled to Epsilon. As a middle ground, we could decorate EOL to guide its execution through a new test runner, while reusing the Apache Ant tasks already provided by several of the existing platforms, such as AMMA or Epsilon. Like Make, Ant is a tool focused on automating the execution of processes such as program builds. Unlike Make, Ant defines processes using XML buildfiles with sets of interrelated targets . 
Each target contains in turn a sequence of tasks . Many Ant tasks and Ant-based tools already exist, and it is easy to create a new Ant task. Among these three approaches, EUnit follows the last one. Ant tasks take care of model setup and management, and tests are written in EOL and executed by a new test runner, written from the ground up.","title":"Selected Approach"},{"location":"doc/eunit/#test-organization","text":"EUnit has a rich data model: test suites are organized as trees of tests, and each test is divided into many parts which can be extended by the user. This section is dedicated to describing how test suites and tests are organized. The next section indicates how they are written.","title":"Test Organization"},{"location":"doc/eunit/#test-suites","text":"EUnit test suites are organized as trees: inner nodes group related test cases and define data bindings. Leaf nodes define model bindings and run the test cases. Data bindings repeat all test cases with different values in one or more variables. They can implement parametric testing, as in JUnit 4. EUnit can nest several data bindings, running all test cases once for each combination. Model bindings are specific to EUnit: they allow developers to repeat a single test case with different subsets of models. Data and model bindings can be combined. One interesting approach is to set the names of the models to be used in the model binding from the data binding, as a quick way to try several test cases with the same subsets of models. The figure below shows an example of an EUnit test tree: nodes with data bindings are marked with data , and nodes with model bindings are marked with model . 
graph TD data1[data<br/>x=1] data2[data<br/>x=2] testa1[test A] testb1[test B] testa2[test A] testb2[test B] modelx1[model X] modely1[model Y] modelx2[model X] modely2[model Y] root --> data1 root --> data2 data1 --> testa1 data1 --> testb1 data2 --> testa2 data2 --> testb2 testa1 --> modelx1 testa1 --> modely1 testa2 --> modelx2 testa2 --> modely2 EUnit will perform a preorder traversal of this tree, running the following tests: A with x = 1 and model X. A with x = 1 and model Y. B with x = 1 and both models. A with x = 2 and model X. A with x = 2 and model Y. B with x = 2 and both models. Optionally, EUnit can filter tests by name, running only A or B . Similarly to JUnit, EUnit logs start and finish times for each node in the tree, so the most expensive test cases can be quickly detected. However, EUnit logs CPU time 1 in addition to the usual wallclock time. Parametric testing is not to be confused with theories : both repeat a test case with different values, but results are reported quite differently. While parametric testing produces separate test cases with independent results, theories produce aggregated tests which only pass if the original test case passes for every data point. The figures below illustrate these differences. EUnit does not support theories yet: however, they can be approximated with data bindings. graph TD data1[data 1] data2[data 2] testa1[test 1] testb1[test 2] testa2[test 1] testb2[test 2] root --> data1 root --> data2 data1 --> testa1 data1 --> testb1 data2 --> testa2 data2 --> testb2 Parametric Testing graph TD data1[test 1] data2[test 2] testa1[data 1] testb1[data 2] testa2[data 1] testb2[data 2] root --> data1 root --> data2 data1 --> testa1 data1 --> testb1 data2 --> testa2 data2 --> testb2 Theories","title":"Test Suites"},{"location":"doc/eunit/#test-cases","text":"The execution of a test case is divided into the following steps: Apply the data bindings of its ancestors. Run the model setup sections defined by the user. 
Apply the model bindings of this node. Run the regular setup sections defined by the user. Run the test case itself. Run the teardown sections defined by the user. Tear down the data bindings and models for this test. An important difference between JUnit and EUnit is that setup is split into two parts: model setup and regular setup. This split allows users to add code before and after model bindings are applied. Normally, the model setup sections will load all the models needed by the test suite, and the regular setup sections will further prepare the models selected by the model binding. Explicit teardown sections are usually not needed, as models are disposed automatically by EUnit. EUnit includes them for consistency with the xUnit frameworks. Due to its focus on model management, model setup in EUnit is very flexible. Developers can combine several ways to set up models, such as model references, individual Apache Ant tasks, Apache Ant targets or Human-Usable Text Notation (HUTN) fragments. A test case may produce one among several results. SUCCESS is obtained if all assertions passed and no exceptions were thrown. FAILURE is obtained if an assertion failed. ERROR is obtained if an unexpected exception was thrown while running the test. Finally, tests may be SKIPPED by the user.","title":"Test Cases"},{"location":"doc/eunit/#test-specification","text":"In the previous section, we described how test suites and test cases are organized. In this section, we will show how to write them. As discussed before, after evaluating several approaches, we decided to combine the expressive power of EOL and the extensibility of Apache Ant. For this reason, EUnit test suites are split into two files: an Ant buildfile and an EOL script with some special-purpose annotations. 
The next subsections describe the contents of these two files and revisit the previous example with EUnit.","title":"Test Specification"},{"location":"doc/eunit/#ant-buildfile","text":"EUnit uses standard Ant buildfiles: running EUnit is as simple as using its Ant task. Users may run EUnit more than once in a single Ant launch: the graphical user interface will automatically aggregate the results of all test suites.","title":"Ant Buildfile"},{"location":"doc/eunit/#eunit-invocations","text":"An example invocation of the EUnit Ant task using the most common features is shown in the listing below. Users will normally only use some of these features at a time, though. Optional attributes have been listed between brackets. Some nested elements can be repeated 0+ times ( * ) or 0-1 times ( ? ). Valid alternatives for an attribute are separated with | . <epsilon.eunit src= \"...\" [ failOnErrors= \"...\" ] [ package= \"..\" ] [ toDir= \"...\" ] [ report= \"yes|no\" ] > ( <model ref= \"OldName\" [ as= \"NewName\" ] /> )* ( <uses ref= \"x\" [ as= \"y\" ] /> )* ( <exports ref= \"z\" [ as= \"w\" ] /> )* ( <parameter name= \"myparam\" value= \"myvalue\" /> )* ( <modelTasks> <!-- Zero or more Ant tasks --> </modelTasks> )? </epsilon.eunit> The EUnit Ant task is based on the Epsilon abstract executable module task, inheriting some useful features. The attribute src points to the path of the EOL file, and the optional attribute failOnErrors can be set to false to prevent EUnit from aborting the Ant launch if a test case fails. EUnit also inherits support for importing and exporting global variables through the <uses> and <exports> elements: the original name is set in ref , and the optional as attribute allows for using a different name. For receiving parameters as name-value pairs, the <parameter> element can be used. Model references (using the <model> nested element) are also inherited from the Epsilon abstract executable module task. 
These allow model management tasks to refer by name to models previously loaded in the Ant buildfile. However, EUnit implicitly reloads the models after each test case. This ensures that test cases are isolated from each other. The EUnit Ant task adds several new features to customize the test result reports and perform more advanced model setup. By default, EUnit generates reports in the XML format of the Ant <junit> task. This format is also used by many other tools, such as the TestNG unit testing framework, the Jenkins continuous integration server or the JUnit Eclipse plug-ins. To suppress these reports, report must be set to no. By default, the XML report is generated in the same directory as the Ant buildfile, but it can be changed with the toDir attribute. Test names in JUnit are formed by its Java package, class and method: EUnit uses the filename of the EOL script as the class and the name of the EOL operation as the method. By default, the package is set to the string \"default\": users are encouraged to customize it with the package attribute. The optional <modelTasks> nested element contains a sequence of Ant tasks which will be run after reloading the model references and before running the model setup sections in the EOL file. This allows users to run workflows more advanced than simply reloading model references.","title":"EUnit Invocations"},{"location":"doc/eunit/#helper-targets","text":"Ant buildfiles for EUnit may include helper targets . These targets can be invoked using the runTarget operation from anywhere in the EOL script. Helper targets are quite versatile: called from an EOL model setup section, they allow for reusing model loading fragments between different EUnit test suites. They can also be used to invoke the model management tasks under test.","title":"Helper Targets"},{"location":"doc/eunit/#eol-script","text":"The Epsilon Object Language script is the second half of the EUnit test suite. 
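A minimal sketch of an EOL model setup section that delegates model loading to a helper target in the Ant buildfile (the load-models target name is hypothetical, chosen only for illustration):

```
@model
operation loadModels() {
  // invoke a helper target defined in the Ant buildfile;
  // 'load-models' is a hypothetical target name
  runTarget(\"load-models\");
}
```
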
EOL annotations are used to tag some of the operations as data binding definitions ( @data or @Data ), additional model setup sections ( @model / @Model ), test setup and teardown sections ( @setup / @Before and @teardown / @After ) and test cases ( @test / @Test ). Suite setup and teardown sections can also be defined with @suitesetup / @BeforeClass and @suiteteardown / @AfterClass annotations: these operations will be run before and after all tests, respectively.","title":"EOL script"},{"location":"doc/eunit/#data-bindings","text":"Data bindings repeat all test cases with different values in some variables. To define a data binding, users must define an operation which returns a sequence of elements and is marked with @data variable. All test cases will be repeated once for each element of the returned sequence, setting the specified variable to the corresponding element. The listing below shows two nested data bindings and a test case which will be run four times: with x=1 and y=5, x=1 and y=6, x=2 and y=5 and finally x=2 and y=6. The example shows how x and y could be used by the setup section to generate an input model for the test. This can be useful if the intent of the test is ensuring that a certain property holds in a class of models, rather than a single model. @data x operation firstLevel() { return 1.to(2); } @data y operation secondLevel() { return 5.to(6); } @setup operation generateModel() { /* generate model using x and y */ } @test operation mytest() { /* test with the generated model */ } Alternatively, if both x and y were to use the same sets of values, we could add two @data annotations to the same operation. For instance, the listing below shows how we could test with 4 cases: x=1 and y=1, x=1 and y=2, x=2 and y=1 and x=2 and y=2. 
@data x @data y operation levels() { return 1.to(2); } @setup operation generateModel() { /* generate model using x and y */ } @test operation mytest() { /* test with the generated model */ }","title":"Data bindings"},{"location":"doc/eunit/#model-bindings","text":"Model bindings repeat a test case with different subsets of models. They can be defined by annotating a test case with $with map or $onlyWith map one or more times, where map is an EOL expression that produces a Map . For each key-value pair key = value , EUnit will rename the model named value to key . The difference between $with and $onlyWith is how they handle the models not mentioned in the Map : $with will preserve them as is, and $onlyWith will make them unavailable during the test. $onlyWith is useful for tightly restricting the set of available models in a test and for avoiding ambiguous type references when handling multiple models using the same metamodel. The listing below shows two tests which will each be run twice. The first test uses $with , which preserves models not mentioned in the Map: the first time, model \"A\" will be the default model and model \"B\" will be the \"Other\" model, and the second time, model \"B\" will be the default model and model \"A\" will be the \"Other\" model. The second test uses two $onlyWith annotations: on the first run, \"A\" will be available as \"Model\" and \"B\" will be unavailable, and on the second run, only \"B\" will be available as \"Model\" and \"A\" will be unavailable. 
$with Map {\"\" = \"A\", \"Other\" = \"B\"} $with Map {\"\" = \"B\", \"Other\" = \"A\"} @test operation mytest() { /* use the default and Other models, while keeping the rest as is */ } $onlyWith Map { \"Model\" = \"A\" } $onlyWith Map { \"Model\" = \"B\" } @test operation mytest2() { // first time: A as 'Model', B is unavailable // second time: B as 'Model', A is unavailable }","title":"Model bindings"},{"location":"doc/eunit/#additional-variables-and-built-in-operations","text":"EUnit provides several variables and operations which are useful for testing. These are listed in the table below. Signature Description runTarget(name : String) Runs the specified target of the Ant buildfile which invoked EUnit. exportVariable(name : String) Exports the specified variable, to be used by another executable module. useVariable(name : String) Imports the specified variable, which should have been previously exported by another executable module. loadHutn(name : String, hutn : String) Loads an EMF model with the specified name, by parsing the second argument as an HUTN fragment. antProject : org.apache.tools.ant.Project Global variable which refers to the Ant project being executed. This can be used to create and run Ant tasks from inside the EOL code.","title":"Additional variables and built-in operations"},{"location":"doc/eunit/#assertions","text":"EUnit implements some common assertions for equality and inequality, with special versions for comparing floating-point numbers. EUnit also supports a limited form of exception testing with assertError , which checks that the expression inside it throws an exception. Custom assertions can be defined by the user with the fail operation, which fails a test with a custom message. The available assertions are shown in the table below. Signature Description assertEqualDirectories(expectedPath : String,obtainedPath : String) Fails the test if the contents of the directory in obtainedPath differ from those of the directory in expectedPath . 
Directory comparisons are performed on recursive snapshots of both directories. assertEqualFiles(expectedPath : String,obtainedPath : String) Fails the test if the contents of the file in obtainedPath differ from those of the file in expectedPath . File comparisons are performed on snapshots of both files. assertEqualModels([msg : String,]expectedModel : String,obtainedModel : String[, options : Map]) Fails the test with the optional message if the model named obtainedModel is not equal to the model named expectedModel . Model comparisons are performed on snapshots of the resource sets of both models. During EMF comparisons, XMI identifiers are ignored. Additional comparator-specific options can be specified through options . assertEquals([msg : String,]expected : Any,obtained : Any) Fails the test with the optional message if the values of expected and obtained are not equal. assertEquals([msg : String,]expected : Real,obtained : Real,ulps : Integer) Fails the test with the optional message if the values of expected and obtained differ in more than ulps units of least precision. See this site for details. assertError(expr : Any) Fails the test if no exception is thrown during the evaluation of expr . assertFalse([msg : String,]cond : Boolean) Fails the test with the optional message if cond is true . It is a negated version of assertTrue. assertLineWithMatch([msg : String,]path : String,regexp : String) Fails the test with the optional message if the file at path does not have a line containing a substring matching the regular expression regexp 2 . assertMatchingLine([msg : String,]path : String,regexp : String) Fails the test with the optional message if the file at path does not have a line that matches the regular expression regexp 3 from start to finish. assertNotEqualDirectories(expectedPath : String,obtainedPath : String) Negated version of assertEqualDirectories. assertNotEqualFiles(expectedPath : String,obtainedPath : String) Negated version of assertEqualFiles. assertNotEqualModels([msg : String,]expectedModel : String,obtainedModel : String) Negated version of assertEqualModels. 
assertNotEquals([msg : String,]expected : Any,obtained : Any) Negated version of assertEquals([msg : String,] expected : Any, obtained : Any). assertNotEquals([msg : String,]expected : Real,obtained : Real,ulps : Integer) Negated version of assertEquals([msg : String,] expected : Real, obtained : Real, ulps : Integer). assertTrue([msg : String,]cond : Boolean) Fails the test with the optional message if cond is false . fail(msg : String) Fails a test with the message msg . The table below lists the available option keys which can be used with the model equality assertions, by comparator. Comparator and Key Usage EMF, \"whitespace\" When set to \"ignore\", differences in EString attribute values due to whitespace will be ignored. Disabled by default. EMF, \"ignoreAttributeValueChanges\" Can contain a list of strings of the form \"package.class.attribute\". Differences in the values for these attributes will be ignored. However, if the attribute is set on one side and not on the other, the difference will be reported as normal. Empty by default. EMF, \"unorderedMoves\" When set to \"ignore\", differences in the order of the elements within an unordered EReference will be ignored. Enabled by default. More importantly, EUnit implements specific assertions for comparing models, files and trees of files. Model comparison is not implemented by the assertions themselves: it is an optional service implemented by some EMC drivers. Currently, EMF models will automatically use EMF Compare as their comparison engine. The rest of the EMC drivers do not support comparison yet. The main advantage of having an abstraction layer implement model comparison as a service is that the test case definition is decoupled from the concrete model comparison engine used. Model, file and directory comparisons take a snapshot of their operands before comparing them, so EUnit can show the differences right at the moment when the comparison was performed. 
This is especially important when some of the models are generated on the fly by the EUnit test suite, or when a test case for code generation may overwrite the results of the previous one. The following figure shows a screenshot of the EUnit graphical user interface. On the left, an Eclipse view shows the results of several EUnit test suites. We can see that the load-models-with-hutn suite failed. The Compare button to the right of \"Failure Trace\" can be pressed to show the differences between the expected and obtained models, as shown on the right side. EUnit implements a pluggable architecture where difference viewers are automatically selected based on the types of the operands. There are difference viewers for EMF models, file trees and a fallback viewer which converts both operands to strings.","title":"Assertions"},{"location":"doc/eunit/#examples","text":"","title":"Examples"},{"location":"doc/eunit/#models-and-tasks-in-the-buildfile","text":"After describing the basic syntax, we will show how to use EUnit to test an ETL transformation. The Ant buildfile is shown in the listing below. It has two targets: run-tests invokes the EUnit suite, and tree2graph is a helper target which transforms model Tree into model Graph using ETL. The <modelTasks> nested element is used to load the input, expected output and output EMF models. Graph is loaded with read set to false : the model will be initially empty, and will be populated by the ETL transformation. 
<project> <target name= \"run-tests\" > <epsilon.eunit src= \"test-external.eunit\" > <modelTasks> <epsilon.emf.loadModel name= \"Tree\" modelfile= \"tree.model\" metamodelfile= \"tree.ecore\" read= \"true\" store= \"false\" /> <epsilon.emf.loadModel name= \"GraphExpected\" modelfile= \"graph.model\" metamodelfile= \"graph.ecore\" read= \"true\" store= \"false\" /> <epsilon.emf.loadModel name= \"Graph\" modelfile= \"transformed.model\" metamodelfile= \"graph.ecore\" read= \"false\" store= \"false\" /> </modelTasks> </epsilon.eunit> </target> <target name= \"tree2graph\" > <epsilon.etl src= \"${basedir}/resources/Tree2Graph.etl\" > <model ref= \"Tree\" /> <model ref= \"Graph\" /> </epsilon.etl> </target> </project> The EOL script is shown in the listing below: it invokes the helper target and checks that the obtained model is equal to the expected model. Internally, EMC will perform the comparison using EMF Compare. @test operation transformationWorksAsExpected() { runTarget(\"tree2graph\"); assertEqualModels(\"GraphExpected\", \"Graph\"); }","title":"Models and Tasks in the Buildfile"},{"location":"doc/eunit/#models-and-tasks-in-the-eol-script","text":"In the previous section, the EOL file is kept very concise because the model setup and model management tasks under test were specified in the Ant buildfile. In this section, we will inline the models and the tasks into the EOL script instead. The Ant buildfile is shown in the listing below. It is now very simple: all it needs to do is run the EOL script. However, since we will parse HUTN in the EOL script, we must make sure the nsURIs of the metamodels are registered. <project> <target name= \"run-tests\" > <epsilon.emf.register file= \"tree.ecore\" /> <epsilon.emf.register file= \"graph.ecore\" /> <epsilon.eunit src= \"test-inlined.eunit\" /> </target> </project> The EOL script used is shown below. Instead of loading models through the Ant tasks, the loadHutn operation has been used to load the models. 
The test itself is almost the same, but instead of running a helper target, it invokes an operation which creates and runs the ETL Ant task through the antProject variable provided by EUnit, taking advantage of the support in EOL for invoking Java code through reflection. @model operation loadModels() { loadHutn(\"Tree\", '@Spec {Metamodel {nsUri: \"Tree\" }} Model { Tree \"t1\" { label : \"t1\" } Tree \"t2\" { label : \"t2\" parent : Tree \"t1\" } } '); loadHutn(\"GraphExpected\", '@Spec {Metamodel {nsUri: \"Graph\"}} Graph { nodes : Node \"t1\" { name : \"t1\" outgoing : Edge { source : Node \"t1\" target : Node \"t2\" } }, Node \"t2\" { name : \"t2\" } } '); loadHutn(\"Graph\", '@Spec {Metamodel {nsUri: \"Graph\"}}'); } @test operation transformationWorksAsExpected() { runETL(); assertEqualModels(\"GraphExpected\", \"Graph\"); } operation runETL() { var etlTask := antProject.createTask(\"epsilon.etl\"); etlTask.setSrc(new Native('java.io.File')( antProject.getBaseDir(), 'resources/etl/Tree2Graph.etl')); etlTask.createModel().setRef(\"Tree\"); etlTask.createModel().setRef(\"Graph\"); etlTask.execute(); }","title":"Models and Tasks in the EOL Script"},{"location":"doc/eunit/#extending-eunit","text":"EUnit is based on the Epsilon platform, but it is designed to accommodate other technologies. In this section we will explain several strategies to add support for these technologies to EUnit. EUnit uses the Epsilon Model Connectivity abstraction layer to handle different modelling technologies. Adding support for a different modelling technology only requires implementing another driver for EMC. Depending on the modelling technology, the driver can provide optional services such as model comparison, caching or reflection. EUnit uses Ant as a workflow language: all model management tasks must be exposed through Ant tasks. It is highly encouraged, however, that the Ant task is aware of the EMC model repository linked to the Ant project. 
Otherwise, users will have to shuffle the models out from and back into the repository between model management tasks. As an example, a helper target for an ATL transformation with the existing Ant tasks needs to: Save the input model in the EMC model repository to a file, by invoking the <epsilon.storeModel> task. Load the metamodels and the input model with <atl.loadModel> . Run the ATL transformation with <atl.launch> . Save the result of the ATL transformation with <atl.saveModel> . Load it into the EMC model repository with <epsilon.emf.loadModel> . The listing below shows the Ant buildfile which would be required for running these steps, showing that while EUnit can use the existing ATL tasks as-is, the required helper target is considerably longer than the one shown above. Ideally, Ant tasks should be adapted or wrapped to use models directly from the EMC model repository. <project> <!-- ... omitted ... --> <target name= \"atl\" > <!-- Create temporary files for input and output models --> <tempfile property= \"atl.temp.srcfile\" /> <tempfile property= \"atl.temp.dstfile\" /> <!-- Save input model to a file --> <epsilon.storeModel model= \"Tree\" target= \"${atl.temp.srcfile}\" /> <!-- Load the metamodels and the source model --> <atl.loadModel name= \"TreeMM\" metamodel= \"MOF\" path= \"metamodels/tree.ecore\" /> <atl.loadModel name= \"GraphMM\" metamodel= \"MOF\" path= \"metamodels/graph.ecore\" /> <atl.loadModel name= \"Tree\" metamodel= \"TreeMM\" path= \"${atl.temp.srcfile}\" /> <!-- Run ATL and save the model --> <atl.launch path= \"transformation/tree2graph.atl\" > <inmodel name= \"IN\" model= \"Tree\" /> <outmodel name= \"OUT\" model= \"Graph\" metamodel= \"GraphMM\" /> </atl.launch> <atl.saveModel model= \"Graph\" path= \"${atl.temp.dstfile}\" /> <!-- Load it back into the EUnit suite --> <epsilon.emf.loadModel name= \"Graph\" modelfile= \"${atl.temp.dstfile}\" metamodeluri= \"Graph\" read= \"true\" store= \"false\" /> <!-- Delete temporary files --> <delete 
file= \"${atl.temp.srcfile}\" /> <delete file= \"${atl.temp.dstfile}\" /> </target> </project> Another advantage in making model management tasks EMC-aware is that they can easily \u201cexport\u201d their results as models, making them easier to test. For instance, the EVL Ant task allows for exporting its results as a model by setting the attribute exportAsModel to true . This way, EOL can query the results as any regular model. This is simpler than transforming the validated model to a problem metamodel. The example in the listing below checks that a single unsatisfied constraint was produced due to the expected rule ( LabelsStartWithT ) and the expected model element. @test operation valid() { var tree := new Tree!Tree; tree.label := '1n'; runTarget('validate-tree'); var errors := EVL!EvlUnsatisfiedConstraint.allInstances; assertEquals(1, errors.size); var error := errors.first; assertEquals(tree, error.instance); assertEquals(false, error.constraint.isCritique); assertEquals('LabelsStartWithT', error.constraint.name); } CPU time only measures the time elapsed in the thread used by EUnit, and is more stable with varying system load in single-threaded programs. \u21a9 See java.util.regex.Pattern for details about the accepted syntax for regular expressions. \u21a9 See the footnote for assertLineWithMatch for details about the syntax of the regular expressions. \u21a9","title":"Extending EUnit"},{"location":"doc/evl/","text":"The Epsilon Validation Language (EVL) \u00b6 The aim of EVL is to contribute model validation capabilities to Epsilon. More specifically, EVL can be used to specify and evaluate constraints on models of arbitrary metamodels and modelling technologies. Abstract Syntax \u00b6 In EVL, validation specifications are organized in modules ( EvlModule ). As illustrated in the figure below, EvlModule (indirectly) extends EolModule which means that it can contain user-defined operations and import other EOL library modules and EVL modules. 
Apart from operations, an EVL module also contains a set of constraints grouped by the context they apply to, and, by extending ErlModule , a number of pre and post blocks. classDiagram class Constraint { -name: String -guard: ExecutableBlock<Boolean> -check: ExecutableBlock<Boolean> -message: ExecutableBlock<String> -isCritique: boolean } class ConstraintContext { -type: EolModelElementType -guard: ExecutableBlock<Boolean> } class NamedStatementBlockRule { -name: String -body: StatementBlock } class Fix { -guard: ExecutableBlock<Boolean> -title: ExecutableBlock<String> -body: ExecutableBlock<Void> } EolModule <|-- ErlModule EvlModule --|> ErlModule Pre --|> NamedStatementBlockRule Post --|> NamedStatementBlockRule ErlModule -- Pre: pre * ErlModule -- Post: post * EvlModule -- ConstraintContext: contexts * ConstraintContext -- Constraint: constraints * Constraint -- Fix: fixes * Context \u00b6 A context specifies the kind of instances on which the contained constraints will be evaluated. Each context can optionally define a guard which limits its applicability to a narrower subset of instances of its specified type. Thus, if the guard fails for a specific instance of the type, none of its contained constraints are evaluated. Constraint \u00b6 As with OCL, each EVL constraint defines a name and a body ( check ). However, it can optionally also define a guard which further limits its applicability to a subset of the instances of the type defined by the embracing context . Each constraint can optionally define a message as an ExecutableBlock that should return a String providing a description of the reason(s) for which the constraint has failed on a particular element. A constraint can also optionally define a number of fixes . Finally, as displayed in the figure above, constraint is an abstract class that is used as a super-class for the specific types Constraint and Critique . Guard \u00b6 Guards are used to limit the applicability of constraints. 
This can be achieved at two levels. At the Context level it limits the applicability of all constraints of the context and at the Constraint level it limits the applicability of a specific constraint. Fix \u00b6 A fix defines a title using an ExecutableBlock instead of a static String to allow users to specify context-aware titles (e.g. Rename class customer to Customer instead of a generic Convert first letter to upper-case ). Moreover, the do (body) part is a statement block where the fixing functionality can be defined using EOL. The developer is responsible for ensuring that the actions contained in the fix actually repair the identified inconsistency. Critique \u00b6 Critiques are constraints that are used to capture non-critical issues that do not invalidate the model, but should nevertheless be addressed by the user to enhance the quality of the model. Pre and Post \u00b6 An EVL module can define a number of named pre and post blocks that contain EOL statements which are executed before and after evaluating the constraints respectively. These should not be confused with the pre-/post-condition annotations available for EOL user-defined operations. Concrete Syntax \u00b6 The following listing demonstrates the concrete syntax of the context , constraint and fix abstract syntax constructs discussed above. (@lazy)? context <name> { (guard (:expression)|({statementBlock}))? (constraint)* } (@lazy)? (constraint|critique) <name> { (guard (:expression)|({statementBlock}))? (check (:expression)|({statementBlock}))? (message (:expression)|({statementBlock}))? (fix)* } fix { (guard (:expression)|({statementBlock}))? (title (:expression)|({statementBlock})) do { statementBlock } } Pre and post blocks have a simple syntax that, as presented in the listing below, consists of the identifier ( pre or post ), an optional name and the set of statements to be executed enclosed in curly braces. 
(pre|post) <name> { statement+ } Execution Semantics \u00b6 Having discussed the abstract and concrete syntaxes of EVL, this section provides an informal discussion of the execution semantics of the language. The execution of an EVL module is separated into four phases: Phase 1 \u00b6 Before any constraint is evaluated, the pre blocks of the module are executed in the order in which they have been specified. Phase 2 \u00b6 For each non-lazy context with at least one non-lazy constraint, the instances of the meta-class it defines are collected. For each instance, the guard of the context is evaluated. If the guard is satisfied, then for each non-lazy constraint contained in the context the constraint's guard is also evaluated. If the guard of the constraint is satisfied, the body of the constraint is evaluated. In case the body evaluates to false , the message part of the rule is evaluated and the produced message is added along with the instance, the constraint and the available fixes to the ValidationTrace . The execution order of an EVL module follows a top-down depth-first scheme that respects the order in which the contexts and constraints appear in the module. However, the execution order can change in case one of the satisfies , satisfiesOne , satisfiesAll built-in operations, discussed in detail in the sequel, is called. Phase 3 \u00b6 In this phase, the validation trace is examined for unsatisfied constraints and the user is presented with the message each one has produced. The user can then select one or more of the available fixes to be executed. Execution of fixes is performed in a transactional manner using the respective facilities provided by the model connectivity framework. This is to prevent runtime errors raised during the execution of a fix from compromising the validated model by leaving it in an inconsistent state. 
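A minimal sketch of a constraint with a fix that could be offered during this phase, assuming a hypothetical Class type with a name attribute (the type and naming rule are illustrative only):

```
context Class {
  constraint NameStartsWithUpper {
    // hypothetical rule: class names should be capitalised
    check : self.name = self.name.firstToUpperCase()
    message : \"Class \" + self.name + \" should start with an upper-case letter\"
    fix {
      title : \"Capitalise \" + self.name
      do { self.name = self.name.firstToUpperCase(); }
    }
  }
}
```
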
Phase 4 \u00b6 When the user has performed all the necessary fixes or chooses to end Phase 3 explicitly, the post section of the module is executed. There, the user can perform tasks such as serializing the validation trace or producing a summary of the validation process results. Capturing Dependencies between Constraints \u00b6 It is often the case that constraints conceptually depend on each other. To allow users to capture such dependencies, EVL provides the satisfies(constraint : String) : Boolean , satisfiesAll(constraints : Sequence(String)) : Boolean and satisfiesOne(constraints : Sequence(String)) : Boolean built-in operations. Using these operations, a constraint can specify in its guard other constraints which need to be satisfied for it to be meaningful to evaluate. When one of these operations is invoked, if the required constraints (either lazy or non-lazy) have been evaluated for the instances on which the operation is invoked, the engine will return their cached results; otherwise it will evaluate them and return their results. Example \u00b6 The following is an EVL program demonstrating some of the language features, which validates models conforming to the Movies metamodel shown below. Execution begins from the pre block, which simply computes the average number of actors per Movie and stores it into a global variable, which can be accessed at any point. The ValidActors constraint checks that for every instance of Movie which has more than the average number of actors, all of the actors have valid names. This is achieved through a dependency on the HasValidName invariant declared in the context of the Person type. This constraint is marked as lazy, which means it is only executed when invoked by satisfies , thus avoiding unnecessary or duplicate invocations. The HasValidName constraint makes use of a helper operation ( isPlain() ) on Strings.
Once all Movie instances have been checked, the execution engine then proceeds to validate all Person instances; the Person context defines only one non-lazy constraint, ValidMovieYears . This checks that none of the movies the actor has played in was released before the year the actor was born. Finally, the post block is executed, which in this case simply prints some basic information about the model. classDiagram class Movie { -title: String -rating: Double -year: Int } class Person { -name: String -birthYear: Int } Movie -- Person: movies * / persons * pre { var numMovies = Movie.all.size(); var numActors = Person.all.size(); var apm = numActors / numMovies; } context Movie { constraint ValidActors { guard : self.persons.size() > apm check : self.persons.forAll(p | p.satisfies(\"HasValidName\")) } } context Person { @lazy constraint HasValidName { check : self.name.isPlain() } constraint ValidMovieYears { check : self.movies.forAll(m | m.year + 1 > self.birthYear) } } operation String isPlain() : Boolean { return self.matches(\"[A-Za-z\\\\s]+\"); } post { (\"Actors per Movie=\"+apm).println(); (\"# Movies=\"+numMovies).println(); (\"# Actors=\"+numActors).println(); }","title":"Model validation (EVL)"},{"location":"doc/evl/#the-epsilon-validation-language-evl","text":"The aim of EVL is to contribute model validation capabilities to Epsilon. More specifically, EVL can be used to specify and evaluate constraints on models of arbitrary metamodels and modelling technologies.","title":"The Epsilon Validation Language (EVL)"},{"location":"doc/evl/#abstract-syntax","text":"In EVL, validation specifications are organized in modules ( EvlModule ). As illustrated in the figure below, EvlModule (indirectly) extends EolModule which means that it can contain user-defined operations and import other EOL library modules and EVL modules.
Apart from operations, an EVL module also contains a set of constraints grouped by the context they apply to, and, by extending ErlModule , a number of pre and post blocks. classDiagram class Constraint { -name: String -guard: ExecutableBlock<Boolean> -check: ExecutableBlock<Boolean> -message: ExecutableBlock<String> -isCritique: boolean } class ConstraintContext { -type: EolModelElementType -guard: ExecutableBlock<Boolean> } class NamedStatementBlockRule { -name: String -body: StatementBlock } class Fix { -guard: ExecutableBlock<Boolean> -title: ExecutableBlock<String> -body: ExecutableBlock<Void> } EolModule <|-- ErlModule EvlModule --|> ErlModule Pre --|> NamedStatementBlockRule Post --|> NamedStatementBlockRule ErlModule -- Pre: pre * ErlModule -- Post: post * EvlModule -- ConstraintContext: contexts * ConstraintContext -- Constraint: constraints * Constraint -- Fix: fixes *","title":"Abstract Syntax"},{"location":"doc/evl/#context","text":"A context specifies the kind of instances on which the contained constraints will be evaluated. Each context can optionally define a guard which limits its applicability to a narrower subset of instances of its specified type. Thus, if the guard fails for a specific instance of the type, none of its contained constraints are evaluated.","title":"Context"},{"location":"doc/evl/#constraint","text":"As with OCL, each EVL constraint defines a name and a body ( check ). However, it can optionally also define a guard which further limits its applicability to a subset of the instances of the type defined by the embracing context . Each constraint can optionally define a message as an ExecutableBlock that should return a String providing a description of the reason(s) for which the constraint has failed on a particular element. A constraint can also optionally define a number of fixes . 
Finally, as displayed in the figure above, constraint is an abstract class that is used as a super-class for the specific types Constraint and Critique .","title":"Constraint"},{"location":"doc/evl/#guard","text":"Guards are used to limit the applicability of constraints. This can be achieved at two levels. At the Context level it limits the applicability of all constraints of the context and at the Constraint level it limits the applicability of a specific constraint.","title":"Guard"},{"location":"doc/evl/#fix","text":"A fix defines a title using an ExecutableBlock instead of a static String to allow users to specify context-aware titles (e.g. Rename class customer to Customer instead of a generic Convert first letter to upper-case ). Moreover, the do (body) part is a statement block where the fixing functionality can be defined using EOL. The developer is responsible for ensuring that the actions contained in the fix actually repair the identified inconsistency.","title":"Fix"},{"location":"doc/evl/#critique","text":"Critiques are constraints that are used to capture non-critical issues that do not invalidate the model, but should nevertheless be addressed by the user to enhance the quality of the model.","title":"Critique"},{"location":"doc/evl/#pre-and-post","text":"An EVL module can define a number of named pre and post blocks that contain EOL statements which are executed before and after evaluating the constraints respectively. These should not be confused with the pre-/post-condition annotations available for EOL user-defined operations.","title":"Pre and Post"},{"location":"doc/evl/#concrete-syntax","text":"The following listing demonstrates the concrete syntax of the context , constraint and fix abstract syntax constructs discussed above. (@lazy)? context <name> { (guard (:expression)|({statementBlock}))? (constraint)* } (@lazy)? (constraint|critique) <name> { (guard (:expression)|({statementBlock}))? (check (:expression)|({statementBlock}))?
(message (:expression)|({statementBlock}))? (fix)* } fix { (guard (:expression)|({statementBlock}))? (title (:expression)|({statementBlock})) do { statementBlock } } Pre and post blocks have a simple syntax that, as presented in the listing below, consists of the identifier ( pre or post ), an optional name and the set of statements to be executed enclosed in curly braces. (pre|post) <name> { statement+ }","title":"Concrete Syntax"},{"location":"doc/evl/#execution-semantics","text":"Having discussed the abstract and concrete syntaxes of EVL, this section provides an informal discussion of the execution semantics of the language. The execution of an EVL module is separated into four phases:","title":"Execution Semantics"},{"location":"doc/evl/#phase-1","text":"Before any constraint is evaluated, the pre blocks of the module are executed in the order in which they have been specified.","title":"Phase 1"},{"location":"doc/evl/#phase-2","text":"For each non-lazy context with at least one non-lazy constraint, the instances of the meta-class it defines are collected. For each instance, the guard of the context is evaluated. If the guard is satisfied, then for each non-lazy constraint contained in the context the constraint's guard is also evaluated. If the guard of the constraint is satisfied, the body of the constraint is evaluated. In case the body evaluates to false , the message part of the rule is evaluated and the produced message is added along with the instance, the constraint and the available fixes to the ValidationTrace . The execution order of an EVL module follows a top-down depth-first scheme that respects the order in which the contexts and constraints appear in the module. 
However, the execution order can change in case one of the satisfies , satisfiesOne , satisfiesAll built-in operations, discussed in detail in the sequel, is called.","title":"Phase 2"},{"location":"doc/evl/#phase-3","text":"In this phase, the validation trace is examined for unsatisfied constraints and the user is presented with the message each one has produced. The user can then select one or more of the available fixes to be executed. Execution of fixes is performed in a transactional manner using the respective facilities provided by the model connectivity framework. This is to prevent runtime errors raised during the execution of a fix from compromising the validated model by leaving it in an inconsistent state.","title":"Phase 3"},{"location":"doc/evl/#phase-4","text":"When the user has performed all the necessary fixes or chooses to end Phase 3 explicitly, the post section of the module is executed. There, the user can perform tasks such as serializing the validation trace or producing a summary of the validation process results.","title":"Phase 4"},{"location":"doc/evl/#capturing-dependencies-between-constraints","text":"It is often the case that constraints conceptually depend on each other. To allow users to capture such dependencies, EVL provides the satisfies(constraint : String) : Boolean , satisfiesAll(constraints : Sequence(String)) : Boolean and satisfiesOne(constraints : Sequence(String)) : Boolean built-in operations. Using these operations, a constraint can specify in its guard other constraints which need to be satisfied for it to be meaningful to evaluate.
When one of these operations is invoked, if the required constraints (either lazy or non-lazy) have been evaluated for the instances on which the operation is invoked, the engine will return their cached results; otherwise it will evaluate them and return their results.","title":"Capturing Dependencies between Constraints"},{"location":"doc/evl/#example","text":"The following is an EVL program demonstrating some of the language features, which validates models conforming to the Movies metamodel shown below. Execution begins from the pre block, which simply computes the average number of actors per Movie and stores it into a global variable, which can be accessed at any point. The ValidActors constraint checks that for every instance of Movie which has more than the average number of actors, all of the actors have valid names. This is achieved through a dependency on the HasValidName invariant declared in the context of the Person type. This constraint is marked as lazy, which means it is only executed when invoked by satisfies , thus avoiding unnecessary or duplicate invocations. The HasValidName constraint makes use of a helper operation ( isPlain() ) on Strings. Once all Movie instances have been checked, the execution engine then proceeds to validate all Person instances; the Person context defines only one non-lazy constraint, ValidMovieYears . This checks that none of the movies the actor has played in was released before the year the actor was born. Finally, the post block is executed, which in this case simply prints some basic information about the model.
classDiagram class Movie { -title: String -rating: Double -year: Int } class Person { -name: String -birthYear: Int } Movie -- Person: movies * / persons * pre { var numMovies = Movie.all.size(); var numActors = Person.all.size(); var apm = numActors / numMovies; } context Movie { constraint ValidActors { guard : self.persons.size() > apm check : self.persons.forAll(p | p.satisfies(\"HasValidName\")) } } context Person { @lazy constraint HasValidName { check : self.name.isPlain() } constraint ValidMovieYears { check : self.movies.forAll(m | m.year + 1 > self.birthYear) } } operation String isPlain() : Boolean { return self.matches(\"[A-Za-z\\\\s]+\"); } post { (\"Actors per Movie=\"+apm).println(); (\"# Movies=\"+numMovies).println(); (\"# Actors=\"+numActors).println(); }","title":"Example"},{"location":"doc/ewl/","text":"The Epsilon Wizard Language (EWL) \u00b6 There are two types of model-to-model transformations: mapping and update transformations. Mapping transformations typically transform a source model into a set of target models expressed in (potentially) different modelling languages by creating zero or more model elements in the target models for each model element of the source model. By contrast, update transformations perform in-place modifications in the source model itself. They can be further classified into two subcategories: transformations in the small and in the large. Update transformations in the large apply to sets of model elements calculated using well-defined rules in a batch manner. An example of this category of transformations is a transformation that automatically adds accessor and mutator operations for all attributes in a UML model. On the other hand, update transformations in the small are applied in a user-driven manner on model elements that have been explicitly selected by the user. 
An example of this kind of transformation is a transformation that renames a user-specified UML class and all its incoming associations consistently. In Epsilon, mapping transformations can be specified using ETL , and update transformations in the large can be implemented either using the model modification features of EOL or using an ETL transformation in which the source and target models are the same model. By contrast, update transformations in the small cannot be effectively addressed by any of the languages presented so far. The following section discusses the importance of update transformations in the small and motivates the definition of a task-specific language (Epsilon Wizard Language (EWL)) that provides tailored and effective support for defining and executing update transformations on models of diverse metamodels. Motivation \u00b6 Constructing and refactoring models is undoubtedly a mentally intensive process. However, during modelling, recurring patterns of model update activities typically appear. As an example, when renaming a class in a UML class diagram, the user also needs to manually update the names of association ends that link to the renamed class. Thus, when renaming a class from Chapter to Section , all association ends that point to the class and are named chapter or chapters should also be renamed to section and sections respectively. As another example, when a modeller needs to refactor a UML class into a singleton [@Larman], they need to go through a number of well-defined, but trivial, steps such as attaching a stereotype ( <<singleton>> ), defining a static instance attribute and adding a static getInstance() method that returns the unique instance of the singleton. It is generally accepted that performing repetitive tasks manually is both counter-productive and error-prone. On the other hand, failing to complete such tasks correctly and precisely compromises the consistency, and thus the quality, of the models.
In Model Driven Engineering, this is particularly important since models are increasingly used to automatically produce (parts of) working systems. Automating the Construction and Refactoring Process \u00b6 Contemporary modelling tools provide built-in transformations ( wizards ) for automating common repetitive tasks. However, according to the architecture of the designed system and the specific problem domain, additional repetitive tasks typically appear, which cannot be addressed by the pre-conceived built-in wizards of a modelling tool. To address the automation problem in its general case, users must be able to easily define update transformations (wizards) that are tailored to their specific needs. To an extent, this can be achieved via the extensible architecture that state-of-the-art modelling tools often provide which enables users to add functionality to the tool via scripts or application code using the implementation language of the tool. Nevertheless, the majority of modelling tools provide an API through which they expose an edited model, which requires significant effort to learn and use. Also, since each API is proprietary, such scripts and extensions are not portable to other tools. Finally, API scripting languages and third-generation languages such as Java and C++ are not particularly suitable for model navigation and modification. Furthermore, existing languages for mapping transformations, such as QVT, ATL and ETL, cannot be used as-is for this purpose, because these languages have been designed to operate in a batch manner without human involvement in the process. By contrast, as discussed above, the task of constructing and refactoring models is inherently user-driven. Update Transformations in the Small \u00b6 Update transformations are actions that automatically create, update or delete model elements based on a selection of existing elements in the model and information obtained otherwise (e.g. 
through user input), in a user-driven fashion. In this section, such actions are referred to as wizards instead of rules to reduce confusion between them and rules of mapping transformation languages. In the following sections, the desirable characteristics of wizards are elaborated informally. Structure of Wizards \u00b6 In its simplest form, a wizard only needs to define the actions it will perform when it is applied to a selection of model elements. The structure of such a wizard that transforms a UML class into a singleton is shown using pseudo-code in the listing below. do : attach the singleton stereotype create the instance attribute create the getInstance method Since not all wizards apply to all types of elements in the model, each wizard needs to specify the types of elements to which it applies. For example, the wizard of the listing above, which automatically transforms a class into a singleton, applies only when the selected model element is a class. The simplest approach to ensuring that the wizard will only be applied on classes is to enclose its body in an if condition as shown in the listing below. do : if (selected element is a class) { attach the singleton stereotype create the instance attribute create the getInstance method } A more modular approach is to separate this condition from the body of the wizard. This is shown in the listing below where the condition of the wizard is specified as a separate guard stating that the wizard applies only to elements of type Class. The latter is preferable since it enables filtering out wizards that are not applicable to the current selection of elements by evaluating only their guard parts and rejecting those that return false . Thus, at any time, the user can be provided with only the wizards that are applicable to the current selection of elements. Filtering out irrelevant wizards reduces confusion and enhances usability, particularly as the list of specified wizards grows. 
guard : selected element is a class do : attach the singleton stereotype create the instance attribute create the getInstance method To enhance usability, a wizard also needs to define a short human-readable description of its functionality. To achieve this, another field named title has been added. There are two options for defining the title of a wizard: the first is to use a static string and the second to use a dynamic expression. The latter is preferable since it enables definition of context-aware titles. guard : selected element is a class title : Convert class <class-name> into a singleton do : attach the singleton stereotype create the instance attribute create the getInstance method Capabilities of Wizards \u00b6 The guard and title parts of a wizard need to be expressed using a language that provides model querying and navigation facilities. Moreover, the do part also requires model modification capabilities to implement the transformation. To achieve complex transformations, it is essential that the user can provide additional information. For instance, to implement a wizard that addresses the class renaming scenario, the information provided by the selected class does not suffice; the user must also provide the new name of the class. Therefore, EWL must also provide mechanisms for capturing user input. Abstract Syntax \u00b6 Since EWL is built atop Epsilon, its abstract and concrete syntax need only define the concepts that are relevant to the task it addresses; they can reuse lower-level constructs from EOL. A graphical overview of the abstract syntax of the language is provided in the figure below. The basic concept of the EWL abstract syntax is a Wizard . A wizard defines a name , a guard part, a title part and a do part. Wizards are organized in Modules . The name of a wizard acts as an identifier and must be unique in the context of a module. The guard and title parts of a wizard are of type ExpressionOrStatementBlock , inherited from EOL.
An ExpressionOrStatementBlock is either a single EOL expression or a block of EOL statements that include one or more return statements. This construct allows users to express simple declarative calculations as single expressions and complex calculations as blocks of imperative statements. Finally, the do part of the wizard is a block of EOL statements that specify the effects of the wizard when applied to a compatible selection of model elements. Concrete Syntax \u00b6 The following listing presents the concrete syntax of EWL wizards. wizard <name> { (guard (:expression)|({statementBlock}))? (title (:expression)|({statementBlock}))? do { statementBlock } } Execution Semantics \u00b6 The process of executing EWL wizards is inherently user-driven and as such it depends on the environment in which they are used. In general, each time the selection of model elements changes (i.e. the user selects or deselects a model element in the modelling tool), the guards of all wizards are evaluated. If the guard of a wizard is satisfied, the title part is also evaluated and the wizard is added to a list of applicable wizards. Then, the user can select a wizard and execute its do part to perform the intended transformation. In EWL, variables defined and initialized in the guard part of the wizard can be accessed both by the title and the do parts. In this way, results of calculations performed in the guard part can be re-used, instead of re-calculated in the subsequent parts. The practicality of this approach is discussed in more detail in the examples that follow. Also, the execution of the do part of each wizard is performed in a transactional mode by exploiting the transaction capabilities of the underlying model connectivity framework, so that possible logical errors in the do part of a wizard do not leave the edited model in an inconsistent state. Examples \u00b6 This section presents three concrete examples of EWL wizards for refactoring UML 1.4 models. 
The aim of this section is not to provide complete implementations that address all the sub-cases of each scenario but to provide enhanced understanding of the concrete syntax, the features and the capabilities of EWL to the reader. Moreover, it should be stressed again that although the examples in this section are based on UML models, by building on Epsilon, EWL can be used to capture wizards for diverse modelling languages and technologies. Converting a Class into a Singleton \u00b6 The singleton pattern is applied when there is a class for which only one instance can exist at a time. In terms of UML, a singleton is a class stereotyped with the <<singleton>> stereotype, and it defines a static attribute named instance which holds the value of the unique instance. It also defines a static getInstance() operation that returns that unique instance. Wizard ClassToSingleton , presented below, simplifies the process of converting a class into a singleton by adding the proper stereotype, attribute and operation to it automatically. 
wizard ClassToSingleton { // The wizard applies when a class is selected guard : self.isTypeOf(Class) title : \"Convert \" + self.name + \" to a singleton\" do { // Create the getInstance() operation var gi : new Operation; gi.owner = self; gi.name = \"getInstance\"; gi.visibility = VisibilityKind#vk_public; gi.ownerScope = ScopeKind#sk_classifier; // Create the return parameter of the operation var ret : new Parameter; ret.type = self; ret.kind = ParameterDirectionKind#pdk_return; gi.parameter = Sequence{ret}; // Create the instance field var ins : new Attribute; ins.name = \"instance\"; ins.type = self; ins.visibility = VisibilityKind#vk_private; ins.ownerScope = ScopeKind#sk_classifier; ins.owner = self; // Attach the <<singleton>> stereotype self.attachStereotype(\"singleton\"); } } // Attaches a stereotype with the specified name // to the Model Element on which it is invoked operation ModelElement attachStereotype(name : String) { var stereotype : Stereotype; // Try to find an existing stereotype with this name stereotype = Stereotype.allInstances.selectOne(s|s.name = name); // If there is no existing stereotype // with that name, create one if (not stereotype.isDefined()){ stereotype = Stereotype.createInstance(); stereotype.name = name; stereotype.namespace = self.namespace; } // Attach the stereotype to the model element self.stereotype.add(stereotype); } The guard part of the wizard specifies that it is only applicable when the selection is a single UML class. The title part specifies a context-aware title that informs the user of the functionality of the wizard and the do part implements the functionality by adding the getInstance operation (lines 10-14), the instance attribute (lines 23-28) and the <<singleton>> stereotype (line 31). The stereotype is added via a call to the attachStereotype() operation. 
Attaching a stereotype is a very common action when refactoring UML models, particularly where UML profiles are involved, and therefore, to avoid duplication, a reusable operation has been specified that checks for an existing stereotype, creates it if it does not already exist, and attaches it to the model element on which it is invoked. An extended version of this wizard could also check for existing association ends that link to the class and for which the upper-bound of their multiplicity is greater than one and either disallow the wizard from executing on such classes (in the guard part) or update the upper-bound of their multiplicities to one (in the do part). However, the aim of this section is not to implement complete wizards that address all sub-cases but to provide a better understanding of the concrete syntax and the features of EWL. This principle also applies to the examples presented in the sequel. Renaming a Class \u00b6 The most widely used convention for naming attributes and association ends of a given class is to use a lower-case version of the name of the class as the name of the attribute or the association end. For instance, the two ends of a one-to-many association that links classes Book and Chapter are most likely to be named book and chapters respectively. When renaming a class (e.g. from Chapter to Section ) the user must then manually traverse the model to find all attributes and association ends of this type and update their names (i.e. from chapter or bookChapter to section and bookSection respectively). This can be a daunting process especially in the context of large models. Wizard RenameClass presented in the listing below automates this process.
wizard RenameClass { // The wizard applies when a Class is selected guard : self.isKindOf(Class) title : \"Rename class \" + self.name do { var newName : String; // Prompt the user for the new name of the class newName = UserInput.prompt(\"New name for class \" + self.name); if (newName.isDefined()) { var affectedElements : Sequence; // Collect the AssociationEnds and Attributes // that are affected by the rename affectedElements.addAll( AssociationEnd.allInstances.select(ae|ae.participant=self)); affectedElements.addAll( Attribute.allInstances.select(a|a.type = self)); var oldNameToLower : String; oldNameToLower = self.name.firstToLowerCase(); var newNameToLower : String; newNameToLower = newName.firstToLowerCase(); // Update the names of the affected AssociationEnds // and Attributes for (ae in affectedElements) { ae.replaceInName(oldNameToLower, newNameToLower); ae.replaceInName(self.name, newName); } self.name = newName; } } } // Renames the ModelElement on which it is invoked operation ModelElement replaceInName (oldString : String, newString : String) { if (oldString.isSubstringOf(self.name)) { // Calculate the new name var newName : String; newName = self.name.replace(oldString, newString); // Prompt the user for confirmation of the rename if (UserInput.confirm (\"Rename \" + self.name + \" to \" + newName + \"?\")) { // Perform the rename self.name = newName; } } } As with the ClassToSingleton wizard, the guard part of RenameClass specifies that the wizard is applicable only when the selection is a simple class and the title provides a context-aware description of the functionality of the wizard. The information provided by the selected class itself does not suffice in the case of renaming since the new name of the class is not specified anywhere in the existing model. In EWL, and in all languages that build on EOL, user input can be obtained using the built-in UserInput facility. 
Thus, in line 12 the user is prompted for the new name of the class using the UserInput.prompt() operation. Then, all the association ends and attributes that refer to the class are collected in the affectedElements sequence (lines 14-21). Using the replaceInName operation (lines 31 and 32), the name of each one is examined for a substring of the upper-case or the lower-case version of the old name of the class. In case the check returns true, the user is prompted to confirm (line 48) that the feature needs to be renamed. This further highlights the importance of user input for implementing update transformations with fine-grained user control. Moving Model Elements into a Different Package \u00b6 A common refactoring when modelling in UML is to move model elements, particularly Classes, between different packages. When moving a pair of classes from one package to another, the associations that connect them must also be moved to the target package. To automate this process, the listing below presents the MoveToPackage wizard. wizard MoveToPackage { // The wizard applies when a Collection of // elements, including at least one Package // is selected guard { var moveTo : Package; if (self.isKindOf(Collection)) { moveTo = self.select(e|e.isKindOf(Package)).last(); } return moveTo.isDefined(); } title : \"Move \" + (self.size()-1) + \" elements to \" + moveTo.name do { // Move the selected Model Elements to the // target package for (me in self.excluding(moveTo)) { me.namespace = moveTo; } // Move the Associations connecting any // selected Classes to the target package for (a in Association.allInstances) { if (a.connection.forAll(c|self.includes(c.participant))){ a.namespace = moveTo; } } } } The wizard applies when more than one element is selected and at least one of the elements is a Package . If more than one package is selected, the last one is considered as the target package to which the rest of the selected elements will be moved. 
This is specified in the guard part of the wizard. To reduce user confusion in identifying the package to which the elements will be moved, the name of the target package appears in the title of the wizard. This example shows the importance of the decision to express the title as a dynamically calculated expression (as opposed to a static string). It is worth noting that in the title part of the wizard (line 14), the moveTo variable declared in the guard (line 7) is referenced. Through experimenting with a number of wizards, it has been noticed that in complex wizards repeated calculations need to be performed in the guard , title and do parts of the wizard. To eliminate this duplication, the scope of variables defined in the guard part has been extended so that they are also accessible from the title and do part of the wizard.","title":"The Epsilon Wizard Language (EWL)"},{"location":"doc/ewl/#the-epsilon-wizard-language-ewl","text":"There are two types of model-to-model transformations: mapping and update transformations. Mapping transformations typically transform a source model into a set of target models expressed in (potentially) different modelling languages by creating zero or more model elements in the target models for each model element of the source model. By contrast, update transformations perform in-place modifications in the source model itself. They can be further classified into two subcategories: transformations in the small and in the large. Update transformations in the large apply to sets of model elements calculated using well-defined rules in a batch manner. An example of this category of transformations is a transformation that automatically adds accessor and mutator operations for all attributes in a UML model. On the other hand, update transformations in the small are applied in a user-driven manner on model elements that have been explicitly selected by the user. 
An example of this kind of transformation is a transformation that renames a user-specified UML class and all its incoming associations consistently. In Epsilon, mapping transformations can be specified using ETL , and update transformations in the large can be implemented either using the model modification features of EOL or using an ETL transformation in which the source and target models are the same model. By contrast, update transformations in the small cannot be effectively addressed by any of the languages presented so far. The following section discusses the importance of update transformations in the small and motivates the definition of a task-specific language (Epsilon Wizard Language (EWL)) that provides tailored and effective support for defining and executing update transformations on models of diverse metamodels.\",\"title\":\"The Epsilon Wizard Language (EWL)\"},{\"location\":\"doc/ewl/#motivation\",\"text\":\"Constructing and refactoring models is undoubtedly a mentally intensive process. However, during modelling, recurring patterns of model update activities typically appear. As an example, when renaming a class in a UML class diagram, the user also needs to manually update the names of association ends that link to the renamed class. Thus, when renaming a class from Chapter to Section , all association ends that point to the class and are named chapter or chapters should also be renamed to section and sections respectively. As another example, when a modeller needs to refactor a UML class into a singleton [@Larman], they need to go through a number of well-defined, but trivial, steps such as attaching a stereotype ( <<singleton>> ), defining a static instance attribute and adding a static getInstance() method that returns the unique instance of the singleton. It is generally accepted that performing repetitive tasks manually is both counter-productive and error-prone. 
On the other hand, failing to complete such tasks correctly and precisely compromises the consistency, and thus the quality, of the models. In Model Driven Engineering, this is particularly important since models are increasingly used to automatically produce (parts of) working systems.","title":"Motivation"},{"location":"doc/ewl/#automating-the-construction-and-refactoring-process","text":"Contemporary modelling tools provide built-in transformations ( wizards ) for automating common repetitive tasks. However, according to the architecture of the designed system and the specific problem domain, additional repetitive tasks typically appear, which cannot be addressed by the pre-conceived built-in wizards of a modelling tool. To address the automation problem in its general case, users must be able to easily define update transformations (wizards) that are tailored to their specific needs. To an extent, this can be achieved via the extensible architecture that state-of-the-art modelling tools often provide which enables users to add functionality to the tool via scripts or application code using the implementation language of the tool. Nevertheless, the majority of modelling tools provide an API through which they expose an edited model, which requires significant effort to learn and use. Also, since each API is proprietary, such scripts and extensions are not portable to other tools. Finally, API scripting languages and third-generation languages such as Java and C++ are not particularly suitable for model navigation and modification. Furthermore, existing languages for mapping transformations, such as QVT, ATL and ETL, cannot be used as-is for this purpose, because these languages have been designed to operate in a batch manner without human involvement in the process. 
By contrast, as discussed above, the task of constructing and refactoring models is inherently user-driven.","title":"Automating the Construction and Refactoring Process"},{"location":"doc/ewl/#update-transformations-in-the-small","text":"Update transformations are actions that automatically create, update or delete model elements based on a selection of existing elements in the model and information obtained otherwise (e.g. through user input), in a user-driven fashion. In this section, such actions are referred to as wizards instead of rules to reduce confusion between them and rules of mapping transformation languages. In the following sections, the desirable characteristics of wizards are elaborated informally.","title":"Update Transformations in the Small"},{"location":"doc/ewl/#structure-of-wizards","text":"In its simplest form, a wizard only needs to define the actions it will perform when it is applied to a selection of model elements. The structure of such a wizard that transforms a UML class into a singleton is shown using pseudo-code in the listing below. do : attach the singleton stereotype create the instance attribute create the getInstance method Since not all wizards apply to all types of elements in the model, each wizard needs to specify the types of elements to which it applies. For example, the wizard of the listing above, which automatically transforms a class into a singleton, applies only when the selected model element is a class. The simplest approach to ensuring that the wizard will only be applied on classes is to enclose its body in an if condition as shown in the listing below. do : if (selected element is a class) { attach the singleton stereotype create the instance attribute create the getInstance method } A more modular approach is to separate this condition from the body of the wizard. 
This is shown in the listing below where the condition of the wizard is specified as a separate guard stating that the wizard applies only to elements of type Class. The latter is preferable since it enables filtering out wizards that are not applicable to the current selection of elements by evaluating only their guard parts and rejecting those that return false . Thus, at any time, the user can be provided with only the wizards that are applicable to the current selection of elements. Filtering out irrelevant wizards reduces confusion and enhances usability, particularly as the list of specified wizards grows. guard : selected element is a class do : attach the singleton stereotype create the instance attribute create the getInstance method To enhance usability, a wizard also needs to define a short human-readable description of its functionality. To achieve this, another field named title has been added. There are two options for defining the title of a wizard: the first is to use a static string and the second to use a dynamic expression. The latter is preferable since it enables definition of context-aware titles. guard : selected element is a class title : Convert class <class-name> into a singleton do : attach the singleton stereotype create the instance attribute create the getInstance method","title":"Structure of Wizards"},{"location":"doc/ewl/#capabilities-of-wizards","text":"The guard and title parts of a wizard need to be expressed using a language that provides model querying and navigation facilities. Moreover, the do part also requires model modification capabilities to implement the transformation. To achieve complex transformations, it is essential that the user can provide additional information. For instance, to implement a wizard that addresses the class renaming scenario, the information provided by the selected class does not suffice; the user must also provide the new name of the class. 
Therefore, EWL must also provide mechanisms for capturing user input.\",\"title\":\"Capabilities of Wizards\"},{\"location\":\"doc/ewl/#abstract-syntax\",\"text\":\"Since EWL is built atop Epsilon, its abstract and concrete syntax need only define the concepts that are relevant to the task it addresses; they can reuse lower-level constructs from EOL. A graphical overview of the abstract syntax of the language is provided in the figure below. The basic concept of the EWL abstract syntax is a Wizard . A wizard defines a name , a guard part, a title part and a do part. Wizards are organized in Modules . The name of a wizard acts as an identifier and must be unique in the context of a module. The guard and title parts of a wizard are of type ExpressionOrStatementBlock , inherited from EOL. An ExpressionOrStatementBlock is either a single EOL expression or a block of EOL statements that include one or more return statements. This construct allows users to express simple declarative calculations as single expressions and complex calculations as blocks of imperative statements. Finally, the do part of the wizard is a block of EOL statements that specify the effects of the wizard when applied to a compatible selection of model elements.\",\"title\":\"Abstract Syntax\"},{\"location\":\"doc/ewl/#concrete-syntax\",\"text\":\"The following listing presents the concrete syntax of EWL wizards. wizard <name> { (guard (:expression)|({statementBlock}))? (title (:expression)|({statementBlock}))? do { statementBlock } }\",\"title\":\"Concrete Syntax\"},{\"location\":\"doc/ewl/#execution-semantics\",\"text\":\"The process of executing EWL wizards is inherently user-driven and as such it depends on the environment in which they are used. In general, each time the selection of model elements changes (i.e. the user selects or deselects a model element in the modelling tool), the guards of all wizards are evaluated. 
If the guard of a wizard is satisfied, the title part is also evaluated and the wizard is added to a list of applicable wizards. Then, the user can select a wizard and execute its do part to perform the intended transformation. In EWL, variables defined and initialized in the guard part of the wizard can be accessed both by the title and the do parts. In this way, results of calculations performed in the guard part can be re-used, instead of re-calculated in the subsequent parts. The practicality of this approach is discussed in more detail in the examples that follow. Also, the execution of the do part of each wizard is performed in a transactional mode by exploiting the transaction capabilities of the underlying model connectivity framework, so that possible logical errors in the do part of a wizard do not leave the edited model in an inconsistent state.","title":"Execution Semantics"},{"location":"doc/ewl/#examples","text":"This section presents three concrete examples of EWL wizards for refactoring UML 1.4 models. The aim of this section is not to provide complete implementations that address all the sub-cases of each scenario but to provide enhanced understanding of the concrete syntax, the features and the capabilities of EWL to the reader. Moreover, it should be stressed again that although the examples in this section are based on UML models, by building on Epsilon, EWL can be used to capture wizards for diverse modelling languages and technologies.","title":"Examples"},{"location":"doc/ewl/#converting-a-class-into-a-singleton","text":"The singleton pattern is applied when there is a class for which only one instance can exist at a time. In terms of UML, a singleton is a class stereotyped with the <<singleton>> stereotype, and it defines a static attribute named instance which holds the value of the unique instance. It also defines a static getInstance() operation that returns that unique instance. 
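In implementation terms, the UML singleton structure just described (a static instance attribute plus a static getInstance() operation that returns the unique instance) corresponds to the classic pattern. A minimal sketch in Java, using an illustrative class name: 

```java
// Illustrative only: the code-level equivalent of the UML singleton
// structure described above (the <<singleton>> stereotype itself is
// a UML-specific annotation with no direct code counterpart).
class Registry {

    // Static attribute holding the unique instance.
    private static Registry instance;

    // Private constructor prevents external instantiation.
    private Registry() {}

    // Static operation returning the unique instance,
    // creating it lazily on first access.
    public static Registry getInstance() {
        if (instance == null) {
            instance = new Registry();
        }
        return instance;
    }
}
```

The private constructor and static accessor are exactly the elements (instance attribute, getInstance() operation) that the ClassToSingleton wizard adds to the UML model automatically. 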
Wizard ClassToSingleton , presented below, simplifies the process of converting a class into a singleton by adding the proper stereotype, attribute and operation to it automatically. wizard ClassToSingleton { // The wizard applies when a class is selected guard : self.isTypeOf(Class) title : \"Convert \" + self.name + \" to a singleton\" do { // Create the getInstance() operation var gi : new Operation; gi.owner = self; gi.name = \"getInstance\"; gi.visibility = VisibilityKind#vk_public; gi.ownerScope = ScopeKind#sk_classifier; // Create the return parameter of the operation var ret : new Parameter; ret.type = self; ret.kind = ParameterDirectionKind#pdk_return; gi.parameter = Sequence{ret}; // Create the instance field var ins : new Attribute; ins.name = \"instance\"; ins.type = self; ins.visibility = VisibilityKind#vk_private; ins.ownerScope = ScopeKind#sk_classifier; ins.owner = self; // Attach the <<singleton>> stereotype self.attachStereotype(\"singleton\"); } } // Attaches a stereotype with the specified name // to the Model Element on which it is invoked operation ModelElement attachStereotype(name : String) { var stereotype : Stereotype; // Try to find an existing stereotype with this name stereotype = Stereotype.allInstances.selectOne(s|s.name = name); // If there is no existing stereotype // with that name, create one if (not stereotype.isDefined()){ stereotype = Stereotype.createInstance(); stereotype.name = name; stereotype.namespace = self.namespace; } // Attach the stereotype to the model element self.stereotype.add(stereotype); } The guard part of the wizard specifies that it is only applicable when the selection is a single UML class. The title part specifies a context-aware title that informs the user of the functionality of the wizard and the do part implements the functionality by adding the getInstance operation (lines 10-14), the instance attribute (lines 23-28) and the <<singleton>> stereotype (line 31). 
The stereotype is added via a call to the attachStereotype() operation. Attaching a stereotype is a very common action when refactoring UML models, particularly where UML profiles are involved, and therefore, to avoid duplication, a reusable operation has been specified that checks for an existing stereotype, creates it if it does not already exist, and attaches it to the model element on which it is invoked. An extended version of this wizard could also check for existing association ends that link to the class and for which the upper-bound of their multiplicity is greater than one and either disallow the wizard from executing on such classes (in the guard part) or update the upper-bound of their multiplicities to one (in the do part). However, the aim of this section is not to implement complete wizards that address all sub-cases but to provide a better understanding of the concrete syntax and the features of EWL. This principle also applies to the examples presented in the sequel.\",\"title\":\"Converting a Class into a Singleton\"},{\"location\":\"doc/ewl/#renaming-a-class\",\"text\":\"The most widely used convention for naming attributes and association ends of a given class is to use a lower-case version of the name of the class as the name of the attribute or the association end. For instance, the two ends of a one-to-many association that links classes Book and Chapter are most likely to be named book and chapters respectively. When renaming a class (e.g. from Chapter to Section ) the user must then manually traverse the model to find all attributes and association ends of this type and update their names (i.e. from chapter or bookChapter to section or bookSection respectively). This can be a daunting process especially in the context of large models. Wizard RenameClass presented in the listing below automates this process. 
wizard RenameClass { // The wizard applies when a Class is selected guard : self.isKindOf(Class) title : \"Rename class \" + self.name do { var newName : String; // Prompt the user for the new name of the class newName = UserInput.prompt(\"New name for class \" + self.name); if (newName.isDefined()) { var affectedElements : Sequence; // Collect the AssociationEnds and Attributes // that are affected by the rename affectedElements.addAll( AssociationEnd.allInstances.select(ae|ae.participant=self)); affectedElements.addAll( Attribute.allInstances.select(a|a.type = self)); var oldNameToLower : String; oldNameToLower = self.name.firstToLowerCase(); var newNameToLower : String; newNameToLower = newName.firstToLowerCase(); // Update the names of the affected AssociationEnds // and Attributes for (ae in affectedElements) { ae.replaceInName(oldNameToLower, newNameToLower); ae.replaceInName(self.name, newName); } self.name = newName; } } } // Renames the ModelElement on which it is invoked operation ModelElement replaceInName (oldString : String, newString : String) { if (oldString.isSubstringOf(self.name)) { // Calculate the new name var newName : String; newName = self.name.replace(oldString, newString); // Prompt the user for confirmation of the rename if (UserInput.confirm (\"Rename \" + self.name + \" to \" + newName + \"?\")) { // Perform the rename self.name = newName; } } } As with the ClassToSingleton wizard, the guard part of RenameClass specifies that the wizard is applicable only when the selection is a simple class and the title provides a context-aware description of the functionality of the wizard. The information provided by the selected class itself does not suffice in the case of renaming since the new name of the class is not specified anywhere in the existing model. In EWL, and in all languages that build on EOL, user input can be obtained using the built-in UserInput facility. 
Thus, in line 12 the user is prompted for the new name of the class using the UserInput.prompt() operation. Then, all the association ends and attributes that refer to the class are collected in the affectedElements sequence (lines 14-21). Using the replaceInName operation (lines 31 and 32), the name of each one is examined for a substring of the upper-case or the lower-case version of the old name of the class. In case the check returns true, the user is prompted to confirm (line 48) that the feature needs to be renamed. This further highlights the importance of user input for implementing update transformations with fine-grained user control.","title":"Renaming a Class"},{"location":"doc/ewl/#moving-model-elements-into-a-different-package","text":"A common refactoring when modelling in UML is to move model elements, particularly Classes, between different packages. When moving a pair of classes from one package to another, the associations that connect them must also be moved to the target package. To automate this process, the listing below presents the MoveToPackage wizard. wizard MoveToPackage { // The wizard applies when a Collection of // elements, including at least one Package // is selected guard { var moveTo : Package; if (self.isKindOf(Collection)) { moveTo = self.select(e|e.isKindOf(Package)).last(); } return moveTo.isDefined(); } title : \"Move \" + (self.size()-1) + \" elements to \" + moveTo.name do { // Move the selected Model Elements to the // target package for (me in self.excluding(moveTo)) { me.namespace = moveTo; } // Move the Associations connecting any // selected Classes to the target package for (a in Association.allInstances) { if (a.connection.forAll(c|self.includes(c.participant))){ a.namespace = moveTo; } } } } The wizard applies when more than one element is selected and at least one of the elements is a Package . 
If more than one package is selected, the last one is considered as the target package to which the rest of the selected elements will be moved. This is specified in the guard part of the wizard. To reduce user confusion in identifying the package to which the elements will be moved, the name of the target package appears in the title of the wizard. This example shows the importance of the decision to express the title as a dynamically calculated expression (as opposed to a static string). It is worth noting that in the title part of the wizard (line 14), the moveTo variable declared in the guard (line 7) is referenced. Through experimenting with a number of wizards, it has been noticed that in complex wizards repeated calculations need to be performed in the guard , title and do parts of the wizard. To eliminate this duplication, the scope of variables defined in the guard part has been extended so that they are also accessible from the title and do part of the wizard.","title":"Moving Model Elements into a Different Package"},{"location":"doc/flock/","text":"Epsilon Flock for Model Migration \u00b6 The aim of Epsilon Flock is to contribute model migration capabilities to Epsilon. Model migration is the process of updating models in response to metamodel changes. This section discusses the motivation for implementing Flock, introduces its syntax and execution semantics, and demonstrates the use of Flock with an example. Flock can be used to update models to a new version of their metamodel, or even to move from one modelling technology to another (e.g., from XML to EMF). To illustrate the challenges of model migration, we use the example of metamodel evolution below. In the top figure, a Component comprises other Component s, Connector s and Port s. A Connector joins two Port s. Connector s are unidirectional, and hence define to and from references to Port . 
The original metamodel allows a Connector to start and end at the same Port , and the metamodel was evolved to prevent this, as shown in the bottom figure. Port was made abstract, and split into two subtypes, InputPort and OutputPort . The references between Connector and (the subtypes of) Port were renamed for consistency with the names of the subtypes. classDiagram class Component { +subcomponents: Component[*] +connectors: Connector[*] +ports: Port[*] } class Port { +name: String +outgoing: Connector +incoming: Connector } class Connector { +name: String +from: Port +to: Port } Component *-- Connector: connectors * Component *-- Component Component *-- Port: ports * Connector -- Port: from Port -- Connector: to classDiagram class Component { +subcomponents: Component[*] +connectors: Connector[*] +ports: Port[*] } class Port { +name: String +outgoing: Connector +incoming: Connector } class Connector { +name: String +in: InputPort +out: OutputPort } class InputPort { +connector: Connector } class OutputPort { +connector: Connector } Component *-- Connector: connectors * Component *-- Component Component *-- Port: ports * InputPort --|> Port: in OutputPort --|> Port: out Connector -- InputPort Connector -- OutputPort Some models that conform to the original metamodel do not conform to the evolved metamodel. Specifically, models might not conform to the evolved metamodel because: They contain instances of Port , which is an abstract class in the evolved metamodel. They contain instances of Connector that specify values for the features to and from , which are not defined for the Connector type in the evolved metamodel. They contain instances of Connector that do not specify a value for the in and out features, which are mandatory for the Connector type in the evolved metamodel. Model migration can be achieved with a general-purpose model-to-model transformation using a language such as ETL. 
However, this typically involves writing a large amount of repetitive and redundant code. Flock reduces the amount of repetitive and redundant code needed to specify model migration by automatically copying from the original to the migrated model all of the model elements that conform to the evolved metamodel as described below. classDiagram class GuardedConstruct { -guard: ExecutableBlock<Boolean> } class Deletion { -originalType: String -strict: Boolean -cascade: Boolean } class Retyping { -originalType: String -strict: Boolean -evolvedType: String } class PackageRetyping { -originalType: String -evolvedType: String } class PackageDeletion { -originalType: String } class MigrateRule { -originalType: String -strict: Boolean -ignoredFeatures: String[*] -body: ExecutableBlock<Void> } FlockModule -- TypeMappingConstruct: typeMappings * Deletion --|> TypeMappingConstruct TypeMappingConstruct <|-- Retyping TypeMappingConstruct <|-- PackageDeletion TypeMappingConstruct <|-- PackageRetyping MigrateRule --|> GuardedConstruct GuardedConstruct <|-- TypeMappingConstruct FlockModule -- MigrateRule: rules * EolModule <|-- ErlModule ErlModule <|-- FlockModule Pre --|> NamedStatementBlockRule Post --|> NamedStatementBlockRule ErlModule -- Pre: pre * ErlModule -- Post: post * Abstract Syntax \u00b6 As illustrated in the figure above, Flock migration strategies are organised into individual modules ( FlockModule ). Flock modules inherit from EOL language constructs for specifying user-defined operations and for importing other (EOL and Flock) modules. Like the other rule-based languages of Epsilon, Flock modules may comprise any number of pre (post) blocks, which are executed before (after) all other constructs. Flock modules comprise any number of type mappings ( TypeMapping ) and rules ( Rule ). Type mappings operate on metamodel types ( Retyping and Deletion ) or on metamodel packages ( PackageRetyping and PackageDeletion ). 
Type mappings are applied to a type in the original metamodel ( originalType ) or to a package in the original metamodel ( originalPackage ). Additionally, Retyping s apply to an evolved metamodel type ( evolvedType ) or package ( evolvedPackage ). Each rule has an original metamodel type ( originalType ), a body comprising a block of EOL statements, and zero or more ignoredFeatures . Type mappings and rules can optionally specify a guard , which is either an EOL statement or a block of EOL statements. Type mappings that operate on metamodel types and rules can be marked as strict . Concrete Syntax \u00b6 The listing below demonstrates the concrete syntax of the Flock language constructs. All of the constructs begin with keyword(s) ( retype , retype package , delete , delete package or migrate ), followed by the original metamodel type or package. Additionally, type mappings that operate on metamodel types and rules can be annotated with the strict modifier. The delete construct can be annotated with a cascade modifier. All constructs can have guards, which are specified using the when keyword. Migrate rules can specify a list of features that conservative copy will ignore ( ignoring ), and a body containing a sequence of at least one EOL statement. Note that a migrate rule must have a list of ignored features, or a body, or both. (@strict)? retype <originalType> to <evolvedType> (when (:<eolExpression>)|({<eolStatement>+}))? retype package <originalPackage> to <evolvedPackage> (when (:<eolExpression>)|({<eolStatement>+}))? (@strict)? (@cascade)? delete <originalType> (when (:<eolExpression>)|({<eolStatement>+}))? delete package <originalPackage> (when (:<eolExpression>)|({<eolStatement>+}))? (@strict)? migrate <originalType> (ignoring <featureList>)? (when (:<eolExpression>)|({<eolStatement>+}))? 
{ <eolStatement>+ } Pre and post blocks have a simple syntax that, as presented below, consists of the identifier ( pre or post ), an optional name and the set of statements to be executed enclosed in curly braces. (pre|post) <name> { statement+ } Execution Semantics \u00b6 The execution semantics of a Flock module are now described. Note that the Epsilon Model Connectivity (EMC) layer, which Flock uses to access and manipulate models, supports a range of modelling technologies, and identifies types by name. Consequently, the term type is used to mean \"the name of an element of a metamodel\" in the following discussion. For example, Component , Connector and InputPort are three of the types defined in the evolved metamodel. Execution of a Flock module occurs in six phases: Any pre blocks are executed. Type mapping constructs (retypings and deletions) are processed to identify the way in which original and evolved metamodel types are to be related. Migrate rules are inspected to build sets of ignored properties. The information determined in steps 2 and 3 is used as input to a copying algorithm, which creates an (equivalent) element in the migrated model for each element of the original model, and copies values from original to equivalent model elements. Migrate rules are executed on each pair of original and (equivalent) migrated model elements. Any post blocks are executed. In phases 2-5, language constructs are executed only when they are applicable . The applicability of the Flock language constructs (retyping, deletion or migrate rule) is determined from their type and guard. For a language construct c to be applicable to an original model element o , o must instantiate either the original type of c or one of the subtypes of the original type of c ; and o must satisfy the guard of c . For language constructs that have been annotated as strict, type-checking is more restrictive: o must instantiate the original type of c (and not one of its subtypes). 
In other words, the applicability of strict constructs is determined with EOL's isTypeOf operation and the applicability of non-strict constructs is determined with EOL's isKindOf operation. For language constructs that have been annotated with cascade, type-checking is less restrictive: o must be contained in another model element (either directly or indirectly) to which the construct is applicable. Similarly, for language constructs that operate on packages (i.e. package retyping and package deletions), type-checking is less restrictive: o must be contained in a package with the same name as the original package of c . Phases 2-4 of execution implement a copying algorithm which has been termed conservative copy and is discussed thoroughly elsewhere . Essentially, conservative copy will do the following for each element of the original model, o : Do nothing when o instantiates a type that cannot be instantiated in the evolved metamodel (e.g., because the type of o is now abstract or no longer exists). Example: instances of Port in the original metamodel are not copied because Port has become abstract. Fully copy o to produce m in the migrated model when o instantiates a type that has not been affected at all by metamodel evolution. Example: instances of Component in the original metamodel are fully copied because neither Component nor any of its features have been changed. Partially copy o to produce m in the migrated model when o instantiates a type with one or more features that have been affected by metamodel evolution. Example: instances of Connector in the original metamodel are partially copied because the from and to features have been renamed. Note that in a partial copy only the features that have not been affected by metamodel evolution are copied (e.g., the name s of Connector s). In phase 5, migrate rules are applied. 
These rules specify the problem-specific migration logic and might, for example, create migrated model elements for original model elements that were skipped or partially copied by the copying algorithm described above. The Flock engine makes available two variables ( original and migrated ) for use in the body of any migration rule. These variables are used to refer to the particular elements of the original and migrated models to which the rule is currently being applied. In addition, Flock defines an equivalent() operation that can be called on any original model element and returns the equivalent migrated model element (or null ). The equivalent() operation is used to access elements of the migrated model that cannot be accessed via the migrated variable due to metamodel evolution. Flock rules often contain statements of the form: original.x.equivalent() where x is a feature that has been removed from the evolved metamodel. Finally, we should consider the order in which Flock schedules language constructs: a construct that appears earlier (higher) in the source file has priority. This is important because only one type mapping (retyping or deletion) is applied per original model element, and because this implies that migrate rules are applied from top-to-bottom. This ordering is consistent with the other languages of the Epsilon platform. Example \u00b6 Flock is now demonstrated using the example of model migration introduced above. Recall that the metamodel evolution involves splitting the Port type to form the InputPort and OutputPort types. Below is a high-level design for migrating models from the original to the evolved metamodel. For every instance, p, of Port in the original model: If there exists in the original model a Connector , c, that specifies p as the value for its from feature: Create a new instance, i , of InputPort in the migrated model. Set c as the connector of i . Add i to the ports reference of the Component that contains p .
If there exists in the original model a Connector , c, that specifies p as the value for its to feature: Create a new instance, i , of OutputPort in the migrated model. Set c as the connector of i . Add i to the ports reference of the Component that contains p . And nothing else changes. The Flock migration strategy that implements this design is shown below. Three type mapping constructs (on lines 1-4) are used to control the way in which instances of Port are migrated. For example, line 3 specifies that instances of Port that are referenced via the from feature of a Connector are retyped, becoming InputPort s. Instances of Connector are migrated using the rule on lines 6-9, which specifies the way in which the from and to features have evolved to form the in and out features. delete Port when: not (original.isInput() xor original.isOutput()) retype Port to InputPort when: original.isInput() retype Port to OutputPort when: original.isOutput() migrate Connector { migrated.`in` = original.from.equivalent(); migrated.out = original.`to`.equivalent(); } operation Original!Port isInput() : Boolean { return Original!Connector.all.exists(c|c.from == self); } operation Original!Port isOutput() : Boolean { return Original!Connector.all.exists(c|c.`to` == self); } Note that metamodel elements that have not been affected by the metamodel evolution, such as Component s, are migrated automatically. Explicit copying code would be needed to achieve this with a general-purpose model-to-model transformation language. Limitations and Scope \u00b6 Although Flock has been shown to be much more concise than general-purpose model-to-model transformation languages for specifying model migration, Flock does not provide some of the features commonly available in general-purpose model-to-model transformation languages. This section discusses the limitations of Flock and its intended scope with respect to other tools for model migration.
Limitations \u00b6 Firstly, Flock does not support rule inheritance, and re-use of migration logic is instead achieved by exploiting the inheritance hierarchy of the original metamodel. The form of re-use provided by Flock is less general than rule inheritance, but has proved sufficient for existing use-cases. Secondly, Flock does not provide language constructs for controlling the order in which rules are scheduled (other than the ordering of the rules in the program file). ATL, for example, includes constructs that allow users to specify that rules are scheduled explicitly (lazy rules) or in a memoised manner (unique rules). We anticipate that scheduling constructs might be necessary for larger migration strategies, but have not yet encountered situations in which they have been required. Thirdly, Flock is tailored for applying migration to a single original and a single migrated model. Although further models can be accessed by a Flock migration strategy, they cannot be used as the source or target of the conservative copy algorithm. By contrast, some general-purpose model transformation languages can access and manipulate any number of models. Finally, Flock has been tailored to the model migration problem. In other words, we believe that Flock is well-suited to specifying model transformations between two metamodels that are very similar. For metamodel evolution in which the original metamodel undergoes significant and large-scale revision, a general-purpose transformation language might be more suitable than Flock for specifying model migration. Scope \u00b6 Flock is typically used as a manual specification approach in which model migration strategies are written by hand. As such, we believe that Flock provides a flexible and concise way to specify migration, and is a foundation for further tools that seek to automate the metamodel evolution and model migration processes.
There are approaches to model migration that encompass both the metamodel evolution and model migration processes, seeking to automatically derive model migration strategies (e.g., Edapt ). These approaches provide more automation but at the cost of flexibility: for example, you might be restricted to using a tool-specific editor to perform model migration, or to using only EMF. A more thorough discussion of the design decisions and execution semantics of Flock can be found in a SoSyM journal article . Flock has been compared with other model migration tools and languages in a MoDELS paper .","title":"Model migration (Flock)"},{"location":"doc/flock/#epsilon-flock-for-model-migration","text":"The aim of Epsilon Flock is to contribute model migration capabilities to Epsilon. Model migration is the process of updating models in response to metamodel changes. This section discusses the motivation for implementing Flock, introduces its syntax and execution semantics, and demonstrates the use of Flock with an example. Flock can be used to update models to a new version of their metamodel, or even to move from one modelling technology to another (e.g., from XML to EMF). To illustrate the challenges of model migration, we use the example of metamodel evolution below. In the top figure, a Component comprises other Component s, Connector s and Port s. A Connector joins two Port s. Connector s are unidirectional, and hence define to and from references to Port . The original metamodel allows a Connector to start and end at the same Port , and the metamodel was evolved to prevent this, as shown in the bottom figure. Port was made abstract, and split into two subtypes, InputPort and OutputPort . The references between Connector and (the subtypes of) Port were renamed for consistency with the names of the subtypes.
classDiagram class Component { +subcomponents: Component[*] +connectors: Connector[*] +ports: Port[*] } class Port { +name: String +outgoing: Connector +incoming: Connector } class Connector { +name: String +from: Port +to: Port } Component *-- Connector: connectors * Component *-- Component Component *-- Port: ports * Connector -- Port: from Port -- Connector: to classDiagram class Component { +subcomponents: Component[*] +connectors: Connector[*] +ports: Port[*] } class Port { +name: String +outgoing: Connector +incoming: Connector } class Connector { +name: String +in: InputPort +out: OutputPort } class InputPort { +connector: Connector } class OutputPort { +connector: Connector } Component *-- Connector: connectors * Component *-- Component Component *-- Port: ports * InputPort --|> Port: in OutputPort --|> Port: out Connector -- InputPort Connector -- OutputPort Some models that conform to the original metamodel do not conform to the evolved metamodel. Specifically, models might not conform to the evolved metamodel because: They contain instances of Port , which is an abstract class in the evolved metamodel. They contain instances of Connector that specify values for the features to and from , which are not defined for the Connector type in the evolved metamodel. They contain instances of Connector that do not specify a value for the in and out features, which are mandatory for the Connector type in the evolved metamodel. Model migration can be achieved with a general-purpose model-to-model transformation using a language such as ETL. However, this typically involves writing a large amount of repetitive and redundant code. Flock reduces the amount of repetitive and redundant code needed to specify model migration by automatically copying from the original to the migrated model all of the model elements that conform to the evolved metamodel, as described below.
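To give a sense of the repetitive code that this saves, the following is a hypothetical sketch of the kind of ETL rule that would otherwise be needed just to copy unchanged Component elements (the model names and the exact set of copied features are illustrative assumptions, not taken from a real strategy):

```
// Hypothetical ETL rule: copies each Component verbatim from the
// original model to the migrated model; the ::= operator assigns
// the transformed equivalents of the referenced source elements.
// Flock's conservative copy makes rules like this unnecessary.
rule CopyComponent
    transform o : Original!Component
    to m : Migrated!Component {
    m.subcomponents ::= o.subcomponents;
    m.connectors ::= o.connectors;
    m.ports ::= o.ports;
}
```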
classDiagram class GuardedConstruct { -guard: ExecutableBlock<Boolean> } class Deletion { -originalType: String -strict: Boolean -cascade: Boolean } class Retyping { -originalType: String -strict: Boolean -evolvedType: String } class PackageRetyping { -originalType: String -evolvedType: String } class PackageDeletion { -originalType: String } class MigrateRule { -originalType: String -strict: Boolean -ignoredFeatures: String[*] -body: ExecutableBlock<Void> } FlockModule -- TypeMappingConstruct: typeMappings * Deletion --|> TypeMappingConstruct TypeMappingConstruct <|-- Retyping TypeMappingConstruct <|-- PackageDeletion TypeMappingConstruct <|-- PackageRetyping MigrateRule --|> GuardedConstruct GuardedConstruct <|-- TypeMappingConstruct FlockModule -- MigrateRule: rules * EolModule <|-- ErlModule ErlModule <|-- FlockModule Pre --|> NamedStatementBlockRule Post --|> NamedStatementBlockRule ErlModule -- Pre: pre * ErlModule -- Post: post *","title":"Epsilon Flock for Model Migration"},{"location":"doc/flock/#abstract-syntax","text":"As illustrated in the figure above, Flock migration strategies are organised into individual modules ( FlockModule ). Flock modules inherit from EOL language constructs for specifying user-defined operations and for importing other (EOL and Flock) modules. Like the other rule-based languages of Epsilon, Flock modules may comprise any number of pre (post) blocks, which are executed before (after) all other constructs. Flock modules comprise any number of type mappings ( TypeMapping ) and rules ( Rule ). Type mappings operate on metamodel types ( Retyping and Deletion ) or on metamodel packages ( PackageRetyping and PackageDeletion ). Type mappings are applied to a type in the original metamodel ( originalType ) or to a package in the original metamodel ( originalPackage ). Additionally, Retyping s apply to an evolved metamodel type ( evolvedType ) or package ( evolvedPackage ).
Each rule has an original metamodel type ( originalType ), a body comprising a block of EOL statements, and zero or more ignoredFeatures . Type mappings and rules can optionally specify a guard , which is either an EOL statement or a block of EOL statements. Type mappings that operate on metamodel types and rules can be marked as strict .","title":"Abstract Syntax"},{"location":"doc/flock/#concrete-syntax","text":"The listing below demonstrates the concrete syntax of the Flock language constructs. All of the constructs begin with keyword(s) ( retype , retype package , delete , delete package or migrate ), followed by the original metamodel type or package. Additionally, type mappings that operate on metamodel types and rules can be annotated with the strict modifier. The delete construct can be annotated with a cascade modifier. All constructs can have guards, which are specified using the when keyword. Migrate rules can specify a list of features that conservative copy will ignore ( ignoring ), and a body containing a sequence of at least one EOL statement. Note that a migrate rule must have a list of ignored features, or a body, or both. (@strict)? retype <originalType> to <evolvedType> (when (:<eolExpression>)|({<eolStatement>+}))? retype package <originalPackage> to <evolvedPackage> (when (:<eolExpression>)|({<eolStatement>+}))? (@strict)? (@cascade)? delete <originalType> (when (:<eolExpression>)|({<eolStatement>+}))? delete package <originalPackage> (when (:<eolExpression>)|({<eolStatement>+}))? (@strict)? migrate <originalType> (ignoring <featureList>)? (when (:<eolExpression>)|({<eolStatement>+}))? { <eolStatement>+ } Pre and post blocks have a simple syntax that, as presented below, consists of the identifier ( pre or post ), an optional name and the set of statements to be executed enclosed in curly braces.
(pre|post) <name> { statement+ }","title":"Concrete Syntax"},{"location":"doc/flock/#execution-semantics","text":"The execution semantics of a Flock module are now described. Note that the Epsilon Model Connectivity (EMC) layer, which Flock uses to access and manipulate models, supports a range of modelling technologies, and identifies types by name. Consequently, the term type is used to mean \"the name of an element of a metamodel\" in the following discussion. For example, Component , Connector and InputPort are three of the types defined in the evolved metamodel. Execution of a Flock module occurs in six phases: Any pre blocks are executed. Type mapping constructs (retypings and deletions) are processed to identify the way in which original and evolved metamodel types are to be related. Migrate rules are inspected to build sets of ignored properties. The information determined in steps 2 and 3 is used as input to a copying algorithm, which creates an (equivalent) element in the migrated model for each element of the original model, and copies values from original to equivalent model elements. Migrate rules are executed on each pair of original and (equivalent) migrated model elements. Any post blocks are executed. In phases 2-5, language constructs are executed only when they are applicable . The applicability of the Flock language constructs (retyping, deletion or migrate rule) is determined from their type and guard. For a language construct c to be applicable to an original model element o , o must instantiate either the original type of c or one of the subtypes of the original type of c ; and o must satisfy the guard of c . For language constructs that have been annotated as strict, type-checking is more restrictive: o must instantiate the original type of c (and not one of its subtypes).
In other words, the applicability of strict constructs is determined with EOL's isTypeOf operation and the applicability of non-strict constructs is determined with EOL's isKindOf operation. For language constructs that have been annotated with cascade, type-checking is less restrictive: o must be contained in another model element (either directly or indirectly) to which the construct is applicable. Similarly, for language constructs that operate on packages (i.e. package retypings and package deletions), type-checking is less restrictive: o must be contained in a package with the same name as the original package of c . Phases 2-4 of execution implement a copying algorithm which has been termed conservative copy and is discussed thoroughly elsewhere . Essentially, conservative copy will do the following for each element of the original model, o : Do nothing when o instantiates a type that cannot be instantiated in the evolved metamodel (e.g., because the type of o is now abstract or no longer exists). Example: instances of Port in the original metamodel are not copied because Port has become abstract. Fully copy o to produce m in the migrated model when o instantiates a type that has not been affected at all by metamodel evolution. Example: instances of Component in the original metamodel are fully copied because neither Component nor any of its features have been changed. Partially copy o to produce m in the migrated model when o instantiates a type with one or more features that have been affected by metamodel evolution. Example: instances of Connector in the original metamodel are partially copied because the from and to features have been renamed. Note that in a partial copy only the features that have not been affected by metamodel evolution are copied (e.g., the name s of Connector s). In phase 5, migrate rules are applied.
These rules specify the problem-specific migration logic and might, for example, create migrated model elements for original model elements that were skipped or partially copied by the copying algorithm described above. The Flock engine makes available two variables ( original and migrated ) for use in the body of any migration rule. These variables are used to refer to the particular elements of the original and migrated models to which the rule is currently being applied. In addition, Flock defines an equivalent() operation that can be called on any original model element and returns the equivalent migrated model element (or null ). The equivalent() operation is used to access elements of the migrated model that cannot be accessed via the migrated variable due to metamodel evolution. Flock rules often contain statements of the form: original.x.equivalent() where x is a feature that has been removed from the evolved metamodel. Finally, we should consider the order in which Flock schedules language constructs: a construct that appears earlier (higher) in the source file has priority. This is important because only one type mapping (retyping or deletion) is applied per original model element, and because this implies that migrate rules are applied from top-to-bottom. This ordering is consistent with the other languages of the Epsilon platform.","title":"Execution Semantics"},{"location":"doc/flock/#example","text":"Flock is now demonstrated using the example of model migration introduced above. Recall that the metamodel evolution involves splitting the Port type to form the InputPort and OutputPort types. Below is a high-level design for migrating models from the original to the evolved metamodel. For every instance, p, of Port in the original model: If there exists in the original model a Connector , c, that specifies p as the value for its from feature: Create a new instance, i , of InputPort in the migrated model. Set c as the connector of i .
Add i to the ports reference of the Component that contains p . If there exists in the original model a Connector , c, that specifies p as the value for its to feature: Create a new instance, i , of OutputPort in the migrated model. Set c as the connector of i . Add i to the ports reference of the Component that contains p . And nothing else changes. The Flock migration strategy that implements this design is shown below. Three type mapping constructs (on lines 1-4) are used to control the way in which instances of Port are migrated. For example, line 3 specifies that instances of Port that are referenced via the from feature of a Connector are retyped, becoming InputPort s. Instances of Connector are migrated using the rule on lines 6-9, which specifies the way in which the from and to features have evolved to form the in and out features. delete Port when: not (original.isInput() xor original.isOutput()) retype Port to InputPort when: original.isInput() retype Port to OutputPort when: original.isOutput() migrate Connector { migrated.`in` = original.from.equivalent(); migrated.out = original.`to`.equivalent(); } operation Original!Port isInput() : Boolean { return Original!Connector.all.exists(c|c.from == self); } operation Original!Port isOutput() : Boolean { return Original!Connector.all.exists(c|c.`to` == self); } Note that metamodel elements that have not been affected by the metamodel evolution, such as Component s, are migrated automatically. Explicit copying code would be needed to achieve this with a general-purpose model-to-model transformation language.","title":"Example"},{"location":"doc/flock/#limitations-and-scope","text":"Although Flock has been shown to be much more concise than general-purpose model-to-model transformation languages for specifying model migration, Flock does not provide some of the features commonly available in general-purpose model-to-model transformation languages.
This section discusses the limitations of Flock and its intended scope with respect to other tools for model migration.","title":"Limitations and Scope"},{"location":"doc/flock/#limitations","text":"Firstly, Flock does not support rule inheritance, and re-use of migration logic is instead achieved by exploiting the inheritance hierarchy of the original metamodel. The form of re-use provided by Flock is less general than rule inheritance, but has proved sufficient for existing use-cases. Secondly, Flock does not provide language constructs for controlling the order in which rules are scheduled (other than the ordering of the rules in the program file). ATL, for example, includes constructs that allow users to specify that rules are scheduled explicitly (lazy rules) or in a memoised manner (unique rules). We anticipate that scheduling constructs might be necessary for larger migration strategies, but have not yet encountered situations in which they have been required. Thirdly, Flock is tailored for applying migration to a single original and a single migrated model. Although further models can be accessed by a Flock migration strategy, they cannot be used as the source or target of the conservative copy algorithm. By contrast, some general-purpose model transformation languages can access and manipulate any number of models. Finally, Flock has been tailored to the model migration problem. In other words, we believe that Flock is well-suited to specifying model transformations between two metamodels that are very similar. For metamodel evolution in which the original metamodel undergoes significant and large-scale revision, a general-purpose transformation language might be more suitable than Flock for specifying model migration.","title":"Limitations"},{"location":"doc/flock/#scope","text":"Flock is typically used as a manual specification approach in which model migration strategies are written by hand.
As such, we believe that Flock provides a flexible and concise way to specify migration, and is a foundation for further tools that seek to automate the metamodel evolution and model migration processes. There are approaches to model migration that encompass both the metamodel evolution and model migration processes, seeking to automatically derive model migration strategies (e.g., Edapt ). These approaches provide more automation but at the cost of flexibility: for example, you might be restricted to using a tool-specific editor to perform model migration, or to using only EMF. A more thorough discussion of the design decisions and execution semantics of Flock can be found in a SoSyM journal article . Flock has been compared with other model migration tools and languages in a MoDELS paper .","title":"Scope"},{"location":"doc/hutn/","text":"Human Usable Textual Notation \u00b6 HUTN is an OMG standard for storing models in a human understandable format. In a sense it is a human-oriented alternative to XMI; it has a C-like style which uses curly braces instead of the verbose XML start and end-element tags. Epsilon provides an implementation of HUTN which has been realized using ETL for model-to-model transformation, EGL for generating model-to-text transformations, and EVL for checking the consistency of HUTN models.
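To give a flavour of the notation, a HUTN document describes each object with its type name, an optional identifier and a brace-delimited block of feature values. The following is a hypothetical sketch only; the metamodel, identifiers and values are illustrative assumptions, not taken from the OMG specification:

```
Family \"TheSmiths\" {
    address: \"12 Main Street\"
    members:
        Person \"John\" { age: 42 },
        Person \"Jane\" { age: 40 }
}
```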
Features \u00b6 Write models using a text editor Generic-syntax: no need to specify parser Error markers highlighting inconsistencies Resilient to metamodel changes Built-in HUTN->XMI and XMI->HUTN transformations Automated builder (HUTN->XMI) Examples \u00b6 Article: Using the Human-Usable Textual Notation (HUTN) in Epsilon Article: Using HUTN for T2M transformation Article: New in HUTN 0.7.1 Article: Managing Inconsistent Models with HUTN Reference \u00b6 The OMG provides a complete specification of the HUTN syntax.","title":"HUTN"},{"location":"doc/hutn/#human-usable-textual-notation","text":"HUTN is an OMG standard for storing models in a human understandable format. In a sense it is a human-oriented alternative to XMI; it has a C-like style which uses curly braces instead of the verbose XML start and end-element tags. Epsilon provides an implementation of HUTN which has been realized using ETL for model-to-model transformation, EGL for generating model-to-text transformations, and EVL for checking the consistency of HUTN models.","title":"Human Usable Textual Notation"},{"location":"doc/hutn/#features","text":"Write models using a text editor Generic-syntax: no need to specify parser Error markers highlighting inconsistencies Resilient to metamodel changes Built-in HUTN->XMI and XMI->HUTN transformations Automated builder (HUTN->XMI)","title":"Features"},{"location":"doc/hutn/#examples","text":"Article: Using the Human-Usable Textual Notation (HUTN) in Epsilon Article: Using HUTN for T2M transformation Article: New in HUTN 0.7.1 Article: Managing Inconsistent Models with HUTN","title":"Examples"},{"location":"doc/hutn/#reference","text":"The OMG provides a complete specification of the HUTN syntax.","title":"Reference"},{"location":"doc/pinset/","text":"Dataset Extraction (Pinset) \u00b6 The Pinset language offers specific syntax constructs to extract table-like datasets from models .
The main objective of Pinset is to facilitate the analysis of model data via conventional data mining and machine learning techniques, which impose a tabular input format. In addition, tables can be useful as an extra viewpoint when creating model visualisations. Model example \u00b6 As a running example, we use a course model, which contains the enrolled students along with their grades. All models and Pinset scripts shown in this documentation can be found in an example project in the Epsilon repository. All Pinset scripts query the following metamodel: classDiagram class Course { name: String } class Student { ID: String name: String isRemote: Boolean } class ContactDetails { email: String phone: String } class EvaluationItem { name: String percentage: int } class Grade { points: int } Course *--> Student: students * Course *--> EvaluationItem: items * Student *--> ContactDetails: contact Student *--> Grade: grades * Grade --> EvaluationItem: item As for the data shown as a result of the Pinset scripts, we use the following Flexmi model, which conforms to the metamodel above: <?nsuri grades?> <course name= \"Model-Driven Engineering\" > <item name= \"Lab 1\" perc= \"15\" /> <item name= \"Lab 2\" perc= \"15\" /> <item name= \"Partial Test\" perc= \"20\" /> <item name= \"Final Exam\" perc= \"50\" /> <student id= \"S1\" name= \"Alice\" > <contact email= \"alice@university.com\" phone= \"+44 101\" /> <grade item= \"Lab 1\" points= \"60\" /> <grade item= \"Lab 2\" points= \"90\" /> <grade item= \"Partial Test\" points= \"80\" /> <grade item= \"Final Exam\" points= \"85\" /> </student> <student id= \"S2\" name= \"Bob\" remote= \"true\" > <contact email= \"bob@university.com\" phone= \"+44 654\" /> <grade item= \"Lab 1\" points= \"60\" /> <grade item= \"Final Exam\" points= \"100\" /> </student> <student id= \"S3\" name= \"Charlie\" remote= \"true\" > <contact email= \"charlie@university.com\" phone= \"+44 333\" /> <grade item= \"Lab 1\" points= \"50\" /> <grade item=
\"Lab 2\" points= \"35\" /> <grade item= \"Partial Test\" points= \"20\" /> </student> <student id= \"S4\" name= \"Dana\" > <contact email= \"dana@university.com\" /> <grade item= \"Lab 1\" points= \"100\" /> <grade item= \"Lab 2\" points= \"90\" /> <grade item= \"Partial Test\" points= \"70\" /> <grade item= \"Final Exam\" points= \"95\" /> </student> </course> Overview \u00b6 This first Pinset example defines a dataset from students data, containing some basic information such as name and student ID, contact details, the number of completed evaluation items, and the final grade for the course: dataset studentsSummary over s : Student { column id: s.ID column name: s.name column phone: s.contact.phone column items_completed: s.grades.size column final_grade : s.getFinalGrade() column course_outcome { if (final_grade < 50) { return \"fail\"; } else if (final_grade < 70) { return \"good\"; } else if (final_grade < 90) { return \"notable\"; } else { return \"excellent\"; } } } @cached operation Student getFinalGrade() { return self.grades .collect(g | g.points * g.item.percentage) .sum() / 100; } From that Pinset script, the following dataset is generated: id name phone items_completed final_grade course_outcome S1 Alice +44 101 4 81 notable S2 Bob +44 654 2 59 good S3 Charlie +44 333 3 16 fail S4 Dana 4 90 excellent As the above example shows, Pinset offers a rule-based syntax to declare datasets. These rules are specified as a set of column generators that capture data from instances of a type included in an input model. That type is defined as a parameter, after the over keyword. In the example, the chosen type is Student , which by default means that each Student instance of the input model will be used to populate a row of the output dataset. Pinset offers different column generators. This first example uses the column one, which is composed of the name of the column header and an EOL expression to calculate the cell value over the row element. 
Other common EOL constructs are also available in Pinset scripts. For instance, an EOL block can be used for those column calculations that might be better organised in an imperative set of statements, such as the course_outcome column that shows the final course result in a textual format as used in the Spanish education system. In addition, external operations can be invoked in the column expressions, such as the getFinalGrade() operation used in the example. As a final remark on the column generator, values of previously calculated columns of an element can be used in subsequent definitions. For instance, the course_outcome column uses the final_grade value. After this overview, the next sections describe extra column generators, as well as other functionalities offered by Pinset for an easier specification of dataset extraction. Properties accessors \u00b6 As a way to facilitate the definition of columns that simply hold element properties, Pinset offers some column generators to access these properties: dataset studentsContact over s : Student { properties [ID as StudentId, name] reference contact[email, phone] } The previous dataset rule results in: StudentId name contact_email contact_phone S1 Alice alice@university.com +44 101 S2 Bob bob@university.com +44 654 S3 Charlie charlie@university.com +44 333 S4 Dana dana@university.com Precisely, Pinset offers two property accessors: the properties generator can be used to generate columns for attributes of the selected type (e.g. ID and name in the example), while the references one allows getting attributes from single references (i.e. upper bound of 1) of the type, such as contact . When using the properties accessor, the name of the attribute is used as column name, while for the references accessor a combination of the name of the reference with the name of the attribute is used (e.g. contact_phone ). This default behaviour can be altered by using the as keyword. These accessors also offer null safety.
If any attribute or the traversed reference is null, Pinset automatically inserts a blank value in the cell. Row filtering \u00b6 By default, all elements of the selected type are processed into rows. As this might sometimes not be desired, Pinset offers some ways to filter out rows from the resulting dataset: dataset remoteStudents over s : Student { guard: s.isRemote properties[ID, name] } dataset finalExamAssistants over s : Student from : Student.all.select(s | s.grades.exists(g | g.item.name == \"Final Exam\")) { properties[ID, name] } These dataset rules show the two ways to perform filtering in Pinset: The remoteStudents dataset uses a guard to limit the processed students to the remote ones (based on their boolean attribute). Any element not meeting the guard requirements is excluded from the dataset generation step. The finalExamAssistants dataset uses a from expression to include only those students that took the final exam of the course. A from expression must return a collection of elements of the selected type to be used for the dataset generation. Therefore, this expression can be used for row filtering, and for other purposes such as performance improvements (i.e. calculating a collection once and using it for multiple dataset generations). If necessary, both filtering mechanisms can be used simultaneously. For instance, if we combine the guard and from expressions shown above, we would obtain a dataset with the remote students that took the final exam of the course. Multiple columns: grid \u00b6 In some cases, we might want to generate a set of columns that are calculated using the same expression, just by changing the parameter(s) of that expression. 
In the course example, this happens when generating a table including the detailed grades of the students for all the evaluated items of the course, such as the following: ID name Lab_1 Lab_2 Partial_Test Final_Exam final_grade S1 Alice 60 90 80 85 81 S2 Bob 60 100 59 S3 Charlie 50 35 20 16 S4 Dana 100 90 70 95 90 Defining this table with the column generator would quickly become very verbose and tedious, as we would need to use one expression for each evaluated item of the course. Also, using that strategy would tie the Pinset script to the specific course, as the script would include the names of the grades that are being represented as columns. Any new item added to future editions of the course, or any new course we might want to support, would require updating the Pinset script or creating a new one. To prevent this, Pinset offers the grid generator, which allows the batch definition of similar columns. A grid has three components: keys : determine the elements to use as seeds or parameters of each column. header : used to create the name or header of the column, based on the value of each individual key . body : used to calculate the value of each cell of the column. Generally, both the row element and the grid key intervene here. This generator is used in the following dataset rule, which generates the grades table depicted above: dataset studentGrades over s : Student { properties[ID, name] grid { keys: EvaluationItem.all header: key.name body: s.grades.selectOne(g | g.item == key)?.points } column final_grade : s.getFinalGrade() } In that grid generator, the course evaluation items are used as keys , which means that each of these items is evaluated over the header and body expressions to generate a new column. The header of the columns uses the item name, and the body is calculated by looking for a grade of the student for the evaluation item. The body uses the ?. 
safe null navigation operator in case the student does not have a grade for a certain item. Typeless dataset rules \u00b6 The from expression presented above to filter rows during the generation can also be used to define datasets where the row elements are not instances coming from an input model. This can be useful for performing data aggregations, or for generating synthetic tables starting from a custom collection of values. The following dataset rule generates a basic table using a sequence of numbers as row elements and different column generators: dataset numbers over n from : 1.to(5) { column number : n column squared : n * n grid { keys: 2.to(5) header: \"times_\" + key body: n * key } } number squared times_2 times_3 times_4 times_5 1 1 2 3 4 5 2 4 4 6 8 10 3 9 6 9 12 15 4 16 8 12 16 20 5 25 10 15 20 25 Nested column generators \u00b6 When a certain intermediate value has to be used in several column calculations, Pinset offers a nested, composite column generator. This generator is defined by a from expression that calculates a value, followed by a block containing column generators that can use that value: dataset gradesDetails over g : Grade { properties[points] reference item[name] from student : g.eContainer { column id : student.ID column final_grade : student.getFinalGrade() column grade_lowerthan_final : g.points < final_grade } } The rule above generates a dataset with one row per grade in the course. The rule includes a from expression, which obtains the student that obtained the grade through the containment reference. Then, it is used to obtain the student id and final grade, and an extra column that determines whether a grade contributed negatively to the final grade of the student, by checking if it has fewer points than the final grade. 
The names of the nested column generators are prefixed with the name given to the object calculated by the from expression: points item_name student_id student_final_grade student_grade_lowerthan_final 60 Lab 1 S1 81 true 90 Lab 2 S1 81 false 80 Partial Test S1 81 true 85 Final Exam S1 81 false 60 Lab 1 S2 59 false 100 Final Exam S2 59 false 50 Lab 1 S3 16 false 35 Lab 2 S3 16 false 20 Partial Test S3 16 false 100 Lab 1 S4 90 false 90 Lab 2 S4 90 false 70 Partial Test S4 90 true 95 Final Exam S4 90 false Column post-processing \u00b6 Pinset offers some column post-processing operations that are frequently used to prepare a dataset for analysis. These operations are invoked by annotating the column generators. dataset studentGradesPostProcessed over s : Student { properties[ID] @fillNulls 0 grid { keys: EvaluationItem.all header: key.name body: s.grades.selectOne(g | g.item == key)?.points } column final_grade : s.getFinalGrade() @normalize 100 column final_grade_normalized : final_grade } ID Lab_1 Lab_2 Partial_Test Final_Exam final_grade final_grade_normalized S1 60 90 80 85 81 0.81 S2 60 0 0 100 59 0.59 S3 50 35 20 0 16 0.16 S4 100 90 70 95 90 0.9 Fill nulls \u00b6 It is possible to @fillNulls with a custom value, or with a commonly used derived value, such as the mean or the mode of the column values. By annotating the grid in the detailed grades example, we can fill with zeros those cells where a student did not take an evaluation item. Normalisation \u00b6 We can @normalize data columns into the [0,1] interval (useful when applying distance-based algorithms to numeric columns in different scales). A value can be provided to the annotation to perform the normalisation. If no value is given, the maximum value encountered in the column is used instead. The dataset rule above contains a column with the normalised final grade of the course. 
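The effect of the two post-processing annotations can be sketched in plain Python (hypothetical helper names, for illustration only; the sample values come from the post-processed grades table):

```python
# Hypothetical sketch of the @fillNulls and @normalize post-processing steps.
def fill_nulls(column, value):
    # Replace missing cells (None) with the given fill value, as @fillNulls does.
    return [value if cell is None else cell for cell in column]

def normalize(column, maximum=None):
    # Scale values into [0, 1]; like @normalize, fall back to the column
    # maximum when no explicit value is supplied.
    if maximum is None:
        maximum = max(column)
    return [cell / maximum for cell in column]

final_exam = [85, 100, None, 95]         # Charlie has no Final Exam grade
print(fill_nulls(final_exam, 0))         # [85, 100, 0, 95]
print(normalize([81, 59, 16, 90], 100))  # [0.81, 0.59, 0.16, 0.9]
```

The second call reproduces the final_grade_normalized column of the example, where 100 is passed to the annotation as the normalisation value.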
Coming soon \u00b6 An integration of Pinset with Picto to ease the creation of advanced table visualisations inside the Eclipse IDE is on the way.","title":"Dataset extraction (Pinset)"},{"location":"doc/pinset/#dataset-extraction-pinset","text":"The Pinset language offers specific syntax constructs to extract table-like datasets from models . The main objective of Pinset is to facilitate the analysis of models data via conventional data mining and machine learning techniques, which impose a tabular input format. In addition, tables can be useful as an extra viewpoint when creating model visualisations.","title":"Dataset Extraction (Pinset)"},{"location":"doc/pinset/#model-example","text":"We use as running example a course model, which contains the enrolled students along with their grades. All models and Pinset scripts shown in this documentation can be found in an example project in the Epsilon repository. All Pinset scripts query the following metamodel: classDiagram class Course { name: String } class Student { ID: String name: String isRemote: Boolean } class ContactDetails { email: String phone: String } class EvaluationItem { name: String percentage: int } class Grade { points: int } Course *--> Student: students * Course *--> EvaluationItem: items * Student *--> ContactDetails: contact Student *--> Grade: grades * Grade --> EvaluationItem: item As for the data shown as a result of the Pinset scripts, we use the following Flexmi model, which conforms to the metamodel above: <?nsuri grades?> <course name= \"Model-Driven Engineering\" > <item name= \"Lab 1\" perc= \"15\" /> <item name= \"Lab 2\" perc= \"15\" /> <item name= \"Partial Test\" perc= \"20\" /> <item name= \"Final Exam\" perc= \"50\" /> <student id= \"S1\" name= \"Alice\" > <contact email= \"alice@university.com\" phone= \"+44 101\" /> <grade item= \"Lab 1\" points= \"60\" /> <grade item= \"Lab 2\" points= \"90\" /> <grade item= \"Partial Test\" points= \"80\" /> <grade item= \"Final Exam\" points= 
\"85\" /> </student> <student id= \"S2\" name= \"Bob\" remote= \"true\" > <contact email= \"bob@university.com\" phone= \"+44 654\" /> <grade item= \"Lab 1\" points= \"60\" /> <grade item= \"Final Exam\" points= \"100\" /> </student> <student id= \"S3\" name= \"Charlie\" remote= \"true\" > <contact email= \"charlie@university.com\" phone= \"+44 333\" /> <grade item= \"Lab 1\" points= \"50\" /> <grade item= \"Lab 2\" points= \"35\" /> <grade item= \"Partial Test\" points= \"20\" /> </student> <student id= \"S4\" name= \"Dana\" > <contact email= \"dana@university.com\" /> <grade item= \"Lab 1\" points= \"100\" /> <grade item= \"Lab 2\" points= \"90\" /> <grade item= \"Partial Test\" points= \"70\" /> <grade item= \"Final Exam\" points= \"95\" /> </student> </course>","title":"Model example"},{"location":"doc/pinset/#overview","text":"This first Pinset example defines a dataset from students data, containing some basic information such as name and student ID, contact details, the number of completed evaluation items, and the final grade for the course: dataset studentsSummary over s : Student { column id: s.ID column name: s.name column phone: s.contact.phone column items_completed: s.grades.size column final_grade : s.getFinalGrade() column course_outcome { if (final_grade < 50) { return \"fail\"; } else if (final_grade < 70) { return \"good\"; } else if (final_grade < 90) { return \"notable\"; } else { return \"excellent\"; } } } @cached operation Student getFinalGrade() { return self.grades .collect(g | g.points * g.item.percentage) .sum() / 100; } From that Pinset script, the following dataset is generated: id name phone items_completed final_grade course_outcome S1 Alice +44 101 4 81 notable S2 Bob +44 654 2 59 good S3 Charlie +44 333 3 16 fail S4 Dana 4 90 excellent As the above example shows, Pinset offers a rule-based syntax to declare datasets. 
These rules are specified as a set of column generators that capture data from instances of a type included in an input model. That type is defined as a parameter, after the over keyword. In the example, the chosen type is Student , which by default means that each Student instance of the input model will be used to populate a row of the output dataset. Pinset offers different column generators. This first example uses the column one, which is composed of the name of the column header and an EOL expression to calculate the cell value over the row element. Other common EOL constructs are also available in Pinset scripts. For instance, an EOL block can be used for those column calculations that might be better organised as an imperative set of statements, such as the course_outcome column that shows the final course result in a textual format as used in the Spanish education system. In addition, external operations can be invoked in the column expressions, such as the getFinalGrade() operation used in the example. As a final note on the column generator, values of previously calculated columns of an element can be used in subsequent definitions. 
For instance, the course_outcome column uses the final_grade value computed for the same row. After this overview, the next sections describe extra column generators, as well as other functionalities offered by Pinset for easier dataset extraction specification.","title":"Overview"},{"location":"doc/pinset/#properties-accessors","text":"As a way to facilitate the definition of columns that simply hold element properties, Pinset offers some column generators to access these properties: dataset studentsContact over s : Student { properties [ID as StudentId, name] reference contact[email, phone] } The previous dataset rule results in: StudentId name contact_email contact_phone S1 Alice alice@university.com +44 101 S2 Bob bob@university.com +44 654 S3 Charlie charlie@university.com +44 333 S4 Dana dana@university.com More precisely, Pinset offers two property accessors: the properties generator can be used to generate columns for attributes of the selected type (e.g. ID and name in the example), while the references one allows getting attributes from single references (i.e. upper bound of 1) of the type, such as contact . When using the properties accessor, the name of the attribute is used as the column name, while for the references accessor a combination of the name of the reference with the name of the attribute is used (e.g. contact_phone ). This default behaviour can be altered by using the as keyword. These accessors also offer null safety. If any attribute or the traversed reference is null, Pinset automatically inserts a blank value in the cell.","title":"Properties accessors"},{"location":"doc/pinset/#row-filtering","text":"By default, all elements of the selected type are processed into rows. 
As this might sometimes not be desired, Pinset offers some ways to filter out rows from the resulting dataset: dataset remoteStudents over s : Student { guard: s.isRemote properties[ID, name] } dataset finalExamAssistants over s : Student from : Student.all.select(s | s.grades.exists(g | g.item.name == \"Final Exam\")) { properties[ID, name] } These dataset rules show the two ways to perform filtering in Pinset: The remoteStudents dataset uses a guard to limit the processed students to the remote ones (based on their boolean attribute). Any element not meeting the guard requirements is excluded from the dataset generation step. The finalExamAssistants dataset uses a from expression to include only those students that took the final exam of the course. A from expression must return a collection of elements of the selected type to be used for the dataset generation. Therefore, this expression can be used for row filtering, and for other purposes such as performance improvements (i.e. calculating a collection once and using it for multiple dataset generations). If necessary, both filtering mechanisms can be used simultaneously. For instance, if we combine the guard and from expressions shown above, we would obtain a dataset with the remote students that took the final exam of the course.","title":"Row filtering"},{"location":"doc/pinset/#multiple-columns-grid","text":"In some cases, we might want to generate a set of columns that are calculated using the same expression, just by changing the parameter(s) of that expression. 
In the course example, this happens when generating a table including the detailed grades of the students for all the evaluated items of the course, such as the following: ID name Lab_1 Lab_2 Partial_Test Final_Exam final_grade S1 Alice 60 90 80 85 81 S2 Bob 60 100 59 S3 Charlie 50 35 20 16 S4 Dana 100 90 70 95 90 Defining this table with the column generator would quickly become very verbose and tedious, as we would need to use one expression for each evaluated item of the course. Also, using that strategy would tie the Pinset script to the specific course, as the script would include the names of the grades that are being represented as columns. Any new item added to future editions of the course, or any new course we might want to support, would require updating the Pinset script or creating a new one. To prevent this, Pinset offers the grid generator, which allows the batch definition of similar columns. A grid has three components: keys : determine the elements to use as seeds or parameters of each column. header : used to create the name or header of the column, based on the value of each individual key . body : used to calculate the value of each cell of the column. Generally, both the row element and the grid key intervene here. This generator is used in the following dataset rule, which generates the grades table depicted above: dataset studentGrades over s : Student { properties[ID, name] grid { keys: EvaluationItem.all header: key.name body: s.grades.selectOne(g | g.item == key)?.points } column final_grade : s.getFinalGrade() } In that grid generator, the course evaluation items are used as keys , which means that each of these items is evaluated over the header and body expressions to generate a new column. The header of the columns uses the item name, and the body is calculated by looking for a grade of the student for the evaluation item. The body uses the ?. 
safe null navigation operator in case the student does not have a grade for a certain item.","title":"Multiple columns: grid"},{"location":"doc/pinset/#typeless-dataset-rules","text":"The from expression presented above to filter rows during the generation can also be used to define datasets where the row elements are not instances coming from an input model. This can be useful for performing data aggregations, or for generating synthetic tables starting from a custom collection of values. The following dataset rule generates a basic table using a sequence of numbers as row elements and different column generators: dataset numbers over n from : 1.to(5) { column number : n column squared : n * n grid { keys: 2.to(5) header: \"times_\" + key body: n * key } } number squared times_2 times_3 times_4 times_5 1 1 2 3 4 5 2 4 4 6 8 10 3 9 6 9 12 15 4 16 8 12 16 20 5 25 10 15 20 25","title":"Typeless dataset rules"},{"location":"doc/pinset/#nested-column-generators","text":"When a certain intermediate value has to be used in several column calculations, Pinset offers a nested, composite column generator. This generator is defined by a from expression that calculates a value, followed by a block containing column generators that can use that value: dataset gradesDetails over g : Grade { properties[points] reference item[name] from student : g.eContainer { column id : student.ID column final_grade : student.getFinalGrade() column grade_lowerthan_final : g.points < final_grade } } The rule above generates a dataset with one row per grade in the course. The rule includes a from expression, which obtains the student that obtained the grade through the containment reference. Then, it is used to obtain the student id and final grade, and an extra column that determines whether a grade contributed negatively to the final grade of the student, by checking if it has fewer points than the final grade. 
The names of the nested column generators are prefixed with the name given to the object calculated by the from expression: points item_name student_id student_final_grade student_grade_lowerthan_final 60 Lab 1 S1 81 true 90 Lab 2 S1 81 false 80 Partial Test S1 81 true 85 Final Exam S1 81 false 60 Lab 1 S2 59 false 100 Final Exam S2 59 false 50 Lab 1 S3 16 false 35 Lab 2 S3 16 false 20 Partial Test S3 16 false 100 Lab 1 S4 90 false 90 Lab 2 S4 90 false 70 Partial Test S4 90 true 95 Final Exam S4 90 false","title":"Nested column generators"},{"location":"doc/pinset/#column-post-processing","text":"Pinset offers some column post-processing operations that are frequently used to prepare a dataset for analysis. These operations are invoked by annotating the column generators. dataset studentGradesPostProcessed over s : Student { properties[ID] @fillNulls 0 grid { keys: EvaluationItem.all header: key.name body: s.grades.selectOne(g | g.item == key)?.points } column final_grade : s.getFinalGrade() @normalize 100 column final_grade_normalized : final_grade } ID Lab_1 Lab_2 Partial_Test Final_Exam final_grade final_grade_normalized S1 60 90 80 85 81 0.81 S2 60 0 0 100 59 0.59 S3 50 35 20 0 16 0.16 S4 100 90 70 95 90 0.9","title":"Column post-processing"},{"location":"doc/pinset/#fill-nulls","text":"It is possible to @fillNulls with a custom value, or with a commonly used derived value, such as the mean or the mode of the column values. By annotating the grid in the detailed grades example, we can fill with zeros those cells where a student did not take an evaluation item.","title":"Fill nulls"},{"location":"doc/pinset/#normalisation","text":"We can @normalize data columns into the [0,1] interval (useful when applying distance-based algorithms to numeric columns in different scales). A value can be provided to the annotation to perform the normalisation. 
If no value is given, the maximum value encountered in the column is used instead. The dataset rule above contains a column with the normalised final grade of the course.","title":"Normalisation"},{"location":"doc/pinset/#coming-soon","text":"An integration of Pinset with Picto to ease the creation of advanced table visualisations inside the Eclipse IDE is on the way.","title":"Coming soon"},{"location":"doc/workflow/","text":"Orchestration Workflow \u00b6 In practice, model management tasks are seldom carried out in isolation; instead, they are often combined together to form complex workflows. Therefore, in addition to task-specific languages for individual tasks, Epsilon provides an orchestration mechanism for composing tasks into automated build processes. Motivation \u00b6 As a motivating example, an exemplar workflow that consists of both model management tasks (1-4, 6) and mainstream software development tasks (5, 7) is displayed below. Load a UML model Validate it Transform it into a Database Schema model Generate Java code from the UML model Compile the Java code Generate SQL code from the Database model Deploy the SQL code in a Database Management System (DBMS) In the above workflow, if the validation step (2) fails, the entire process should be aborted and the identified errors should be reported to the user. This example demonstrates that to be of practical use, a task orchestration framework needs to be able to coordinate both model management and mainstream development tasks and provide mechanisms for establishing dependencies between different tasks. This page discusses such a framework for orchestrating modular model management tasks implemented using languages of the Epsilon platform. As the problem of task coordination is common in software development, many technical solutions have been already proposed and are widely used by software practitioners. 
In this context, designing a new general-purpose workflow management solution was deemed inappropriate. Therefore, the task orchestration solution discussed here has been designed as an extension to the robust and widely used ANT framework. A brief overview of ANT as well as a discussion on the choice to design the orchestration workflow of Epsilon atop it is provided below. The ANT Tool \u00b6 ANT, named so because it is a little thing that can be used to build big things , is a robust and widely-used framework for composing automated workflows from small reusable activities. The most important advantages of ANT, compared to traditional build tools such as gnumake , are that it is platform independent and easily extensible. Platform independence is achieved by building atop Java, and extensibility is realized through a lightweight binding mechanism that enables developers to contribute custom tasks using well-defined interfaces and extension points. This section provides a brief discussion of the structure and concrete syntax of ANT workflows, as well as the extensibility mechanisms that ANT provides to enable users to contribute custom tasks. Structure \u00b6 In ANT, each workflow is captured as a project . A simplified illustration of the structure of an ANT project is displayed in the figure below. Each ANT project consists of a number of targets . The one specified as the default is executed automatically when the project is executed. Each target contains a number of tasks and depends on other targets that must be executed before it. An ANT task is responsible for a distinct activity and can either succeed or fail. Exemplar activities implemented by ANT tasks include file system management, compiler invocation, version management and remote artefact deployment. 
classDiagram class Project { -targets: Target[*] -default: Target -properties: Property[*] } class Task { -typeName: String -name: String -attributes: Attribute[*] } class Attribute { -name: String -value: String } class Target { -name: String -tasks: Task[*] -depends: Target[*] } class HashMap { +put(key: String, object: Object) +get(key: String): Object } Project -- Property: properties * Project -- Target: targets * Target -- Project: default Property --|> Task Task -- Attribute: attributes * Task -- Target: tasks * Target -- Target: depends * Project -- HashMap: references * Concrete Syntax \u00b6 In terms of concrete syntax, ANT provides an XML-based syntax. In the listing below, an exemplar ANT project that compiles a set of Java files is illustrated. The project contains one target ( main ) which is also set to be the default target. The main target contains one javac task that specifies attributes such as srcdir , destdir and classpath , which define that the Java compiler will compile a set of Java files contained in the src directory into classes that should be placed in the build directory using dependencies.jar as an external library. <project default= \"main\" > <target name= \"main\" > <javac srcdir= \"${src}\" destdir= \"${build}\" classpath= \"dependencies.jar\" debug= \"on\" source= \"1.4\" /> </target> </project> Extending ANT \u00b6 Binding between the XML tags that describe the tasks and the actual implementations of the tasks is achieved through a light-weight mechanism at two levels. First, the tag (in the example above, javac ) is resolved to a Java class that extends the org.apache.tools.ant.Task abstract class (in the case of javac , the class is org.apache.tools.ant.taskdefs.Javac ) via a configuration file. Then, the attributes of the tasks (e.g. srcdir ) are set using the reflective features that Java provides. Finally, the execute() method of the task is invoked to perform the actual job. 
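The two-level binding mechanism just described can be sketched in Python (a hypothetical analogue for illustration only; Echo, REGISTRY and run_task are invented names, with the registry standing in for ANT's task configuration file):

```python
# Hypothetical analogue of ANT task binding: resolve a tag to a class,
# set attributes reflectively, then invoke execute().
class Task:
    def execute(self):
        raise NotImplementedError

class Echo(Task):
    # A toy task with one configurable attribute.
    def __init__(self):
        self.message = ''
    def execute(self):
        return self.message

REGISTRY = {'echo': Echo}  # stands in for the tag-to-class configuration file

def run_task(tag, attributes):
    task = REGISTRY[tag]()                # level 1: tag resolved to a class
    for name, value in attributes.items():
        setattr(task, name, value)        # level 2: reflective attribute setting
    return task.execute()                 # perform the actual job

print(run_task('echo', {'message': 'hello'}))  # hello
```

In real ANT the same steps happen in Java: the parser looks the tag up, calls the matching setter for each XML attribute, and finally calls execute().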
ANT also supports more advanced features including nested XML elements and filesets ; however, providing a complete discussion is beyond the scope of this page. Integration Challenges \u00b6 A simple approach to extending ANT with support for model management tasks would be to implement one standalone task for each language in Epsilon. However, such an approach demonstrates a number of integration and performance shortcomings which are discussed below. Since models are typically serialized in the file system, before a task is executed, the models it needs to access/modify must be parsed and loaded in memory. In the absence of a more elaborate framework, each model management task would have to take responsibility for loading and storing the models it operates on. Also, in most workflows, more than one task operates on the same models sequentially, and needlessly loading/storing the same models many times in the context of the same workflow is an expensive operation both time and memory-wise, particularly as the size of models increases. Another weakness of this primitive approach is limited inter-task communication. In the absence of a communication framework that allows model management tasks to exchange information with each other, it is often the case that many tasks end up performing the same (potentially expensive) queries on models. By contrast, an inter-task communication framework would enable time- and resource-intensive calculations to be performed once and their results to be communicated to all interested subsequent tasks. Having discussed ANT, Epsilon and the challenges their integration poses, the following sections present the design of a solution that enables developers to invoke model management tasks in the context of ANT workflows. 
The solution consists of a core framework that addresses the challenges discussed above, a set of specific tasks, each of which implements a distinct model management activity, and a set of tasks that enable developers to initiate and manage transactions on models using the respective facilities provided by Epsilon's model connectivity layer . Framework Design and Core Tasks \u00b6 The role of the core framework, illustrated below, is to provide model loading and storing facilities as well as runtime communication facilities to the individual model management tasks that build atop it. This section provides a detailed discussion of the components it consists of. classDiagram class Task { -name: String -type: String } class VariableNestedElement { -ref: String -as: String -optional: String } class EpsilonTask { -profile: Boolean +getProjectRepository(): ModelRepository +getProjectContext(): IEolContext } class ExecutableModuleTask { -src: String -code: String -models: ModelNestedElement[*] -exports: ExportNestedElement[*] -uses: UsesNestedElement[*] } class ModelNestedElement { -ref: String -as: String -optional: String } Task <|-- EpsilonTask EpsilonTask <|-- ExecutableModuleTask ExecutableModuleTask *-- ModelNestedElement: models * ExecutableModuleTask *-- UsesNestedElement: uses * ExecutableModuleTask *-- ExportsNestedElement: exports * ExportsNestedElement --|> VariableNestedElement UsesNestedElement --|> VariableNestedElement classDiagram class LoadModelTask { -name: String -type: String -aliases: String -parameters: ParameterNestedElement[*] } class ParameterNestedElement { -name: String -value: String -file: String } class StoreModelTask { -model: String -target: String } class DisposeModelTask { -model: String } class StartTransactionTask { -name: String -models: String } class CommitTransactionTask { -name: String } class RollbackTransactionTask { -name: String } EpsilonTask <|-- CommitTransactionTask EpsilonTask <|-- StartTransactionTask 
RollbackTransactionTask --|> EpsilonTask EpsilonTask <|-- LoadModelTask StoreModelTask --|> EpsilonTask DisposeModelTask --|> EpsilonTask DisposeModelsTask --|> EpsilonTask LoadModelTask *-- ParameterNestedElement: parameters * The EpsilonTask task \u00b6 An ANT task can access the project in which it is contained by invoking the Task.getProject() method. To facilitate sharing of arbitrary information between tasks, ANT projects provide two convenience methods, namely addReference(String key, Object ref) and getReference(String key) : Object . The former is used to add key-value pairs, which are then accessible using the latter from other tasks of the project. To avoid loading models multiple times and to enable on-the-fly management of models from different Epsilon modules without needing to store and re-load the models after each task, a reference to a project-wide model repository has been added to the current ANT project using the addReference method discussed above. In this way, all the subclasses of the abstract EpsilonTask can invoke the getProjectRepository() method to access the project model repository. Also, to support a variable sharing mechanism that enables inter-task communication, the same technique has been employed; a shared context, accessible by all Epsilon tasks via the getProjectContext() method, has been added. Through this mechanism, model management tasks can export variables to the project context (e.g. traces or lists containing results of expensive queries) which other tasks can then reuse. EpsilonTask also specifies a profile attribute that defines if the execution of the task must be profiled using the profiling features provided by Epsilon. Profiling is a particularly important aspect of workflow execution, especially where model management languages are involved. 
The main reason is that model management languages tend to provide convenient features which can however be computationally expensive (such as the allInstances() EOL built-in feature that returns all the instances of a specific metaclass in the model) and when used more often than really needed, can significantly degrade the overall performance. The workflow leverages the model-transaction services provided by the model connectivity framework of Epsilon by providing three tasks for managing transactions in the context of workflows. Model Loading Tasks \u00b6 The LoadModelTask (epsilon.loadModel) loads a model from an arbitrary location (e.g. file-system, database) and adds it to the project repository so that subsequent Epsilon tasks can query or modify it. Since Epsilon supports many modelling technologies (e.g. EMF, MDR, XML), the LoadModelTask defines only three generic attributes. The name attribute specifies the name of the model in the project repository. The type attribute specifies the modelling technology with which the model is captured and is used to resolve the technology-specific model loading functionality. Finally, the aliases attribute defines a comma-separated list of alternative names by which the model can be accessed in the model repository. The rest of the information needed to load a model is implementation-specific and is therefore provided through parameter nested elements, each one defining a pair of name - value attributes. As an example, a task for loading an EMF model that has a file-based ECore metamodel is displayed below. <epsilon.loadModel name= \"Tree1\" type= \"EMF\" > <parameter name= \"modelFile\" value= \"TreeInstance.ecore\" /> <parameter name= \"metamodelFile\" value= \"Tree.ecore\" /> <parameter name= \"isMetamodelFileBased\" value= \"true\" /> <parameter name= \"readOnLoad\" value= \"true\" /> </epsilon.loadModel> LoadEmfModelTask is a specialised version of LoadModelTask only for EMF models. 
While the type attribute is no longer available, the task still supports the name and aliases attributes. In addition, some of the values which had to be provided through parameter nested elements can now be set using regular attributes, such as modelFile , modelUri , metamodelFile (which implicitly indicates that the metamodel is file-based), metamodelUri , reuseUnmodifiedMetamodelFile (which can be set to \"false\" to avoid reusing file-based metamodels that have not been modified since the last time they were loaded), read (equivalent to readOnLoad ) and store (equivalent to storeOnDisposal ). The listing below shows the equivalent fragment required to produce the same result as in the listing above. <epsilon.emf.loadModel name= \"Tree1\" modelFile= \"TreeInstance.ecore\" metamodelFile= \"Tree.ecore\" /> Model Storing Task \u00b6 The StoreModelTask (epsilon.storeModel) is used to store a model residing in the project repository. The StoreModelTask defines three attributes: name (required): name of the model to be stored. targetUri (optional): URI where the model will be stored (e.g. \"file:/path/to/destination\"). target (optional): file path where the model will be stored (e.g. \"file.xmi\"). targetUri takes precedence over target . If neither is defined, then the model is stored in the location from which it was originally loaded. Model Disposal Tasks \u00b6 When a model is no longer required by tasks of the workflow, it can be disposed using the epsilon.disposeModel task. The task provides the model attribute that defines the name of the model to be disposed. Also, the attribute-less epsilon.disposeModels task is provided that disposes all the models in the project model repository. This task is typically invoked when the model management part of the workflow has finished. The StartTransaction Task \u00b6 The epsilon.startTransaction task defines a name attribute that identifies the transaction. 
It also optionally defines a comma-separated list of model names ( models ) that the transaction will manage. If the models attribute is not specified, the transaction involves all the models contained in the common project model repository. The CommitTransaction and RollbackTransaction Tasks \u00b6 The epsilon.commitTransaction and epsilon.rollbackTransaction tasks define a name attribute through which the transaction to be committed/rolled-back is located in the project's active transactions. If several active transactions with the same name exist, the most recent one is selected. The example below demonstrates an exemplar usage of the epsilon.startTransaction and epsilon.rollbackTransaction tasks. In this example, two empty models Tree1 and Tree2 are loaded in lines 1,2. Then, the EOL task of line 4 queries the models and prints the number of instances of the Tree metaclass in each one of them (which is 0 for both). Then, in line 13, a transaction named T1 is started on model Tree1. The EOL task of line 15 creates a new instance of Tree in both Tree1 and Tree2 and prints the number of instances of Tree in the two models (which is 1 for both models). Then, in line 26, the T1 transaction is rolled back and any changes done in its context to model Tree1 (but not Tree2) are undone. Therefore, the EOL task of line 28, which prints the number of instances of Tree in both models, prints 0 for Tree1 but 1 for Tree2. <epsilon.loadModel name= \"Tree1\" type= \"EMF\" > ... </epsilon.loadModel> <epsilon.loadModel name= \"Tree2\" type= \"EMF\" > ... 
</epsilon.loadModel> <epsilon.eol> <![CDATA[ Tree1!Tree.allInstances.size().println(); // prints 0 Tree2!Tree.allInstances.size().println(); // prints 0 ]]> <model ref= \"Tree1\" /> <model ref= \"Tree2\" /> </epsilon.eol> <epsilon.startTransaction name= \"T1\" models= \"Tree1\" /> <epsilon.eol> <![CDATA[ var t1 : new Tree1!Tree; Tree1!Tree.allInstances.size().println(); // prints 1 var t2 : new Tree2!Tree; Tree2!Tree.allInstances.size().println(); // prints 1 ]]> <model ref= \"Tree1\" /> <model ref= \"Tree2\" /> </epsilon.eol> <epsilon.rollbackTransaction name= \"T1\" /> <epsilon.eol> <![CDATA[ Tree1!Tree.allInstances.size().println(); // prints 0 Tree2!Tree.allInstances.size().println(); // prints 1 ]]> <model ref= \"Tree1\" /> <model ref= \"Tree2\" /> </epsilon.eol> classDiagram class ExecutableModuleTask { -src: String } class EmlTask { -useMatchTrace: String -exportTransformationTrace: String -exportMergeTrace: String } class EtlTask { -exportTransformationTrace: String } class EglTask { -target: String } class EclTask { -exportMatchTrace: String -useMatchTrace: String } class EvlTask { -failOnErrors: Boolean -failOnWarnings: Boolean -exportConstraintTrace: String } ExecutableModuleTask <|-- EclTask ExecutableModuleTask <|-- EvlTask ExecutableModuleTask <|-- EglTask EmlTask --|> ExecutableModuleTask EtlTask --|> ExecutableModuleTask EolTask --|> ExecutableModuleTask The Abstract Executable Module Task \u00b6 This task is the base of all the model management tasks presented in the following section. Its aim is to encapsulate the commonalities of Epsilon tasks in order to reduce duplication among them. As already discussed, in Epsilon, specifications of model management tasks are organized in executable modules. While modules can be stored anywhere, in the case of the workflow it is assumed that they are either stored as separate files in the file-system or they are provided inline within the workflow. 
Thus, this abstract task defines an src attribute that specifies the path of the source file in which the Epsilon module is stored, but also supports inline specification of the source of the module. The two alternatives are demonstrated in the listings below. <project default= \"main\" > <target name= \"main\" > <epsilon.eol src= \"HelloWorld.eol\" /> </target> </project> <project default= \"main\" > <target name= \"main\" > <epsilon.eol> <![CDATA[ \"Hello world\".println(); ]]> </epsilon.eol> </target> </project> Optionally, users can enable debugging for the module to be run by setting the debug attribute to true . An example is shown below. If the module reaches a breakpoint, users will be able to run the code step by step and inspect the stack trace and its variables. <project default= \"main\" > <target name= \"main\" > <epsilon.eol src= \"HelloWorld.eol\" debug= \"true\" /> </target> </project> The task also defines the following nested elements: 0..n model nested elements \u00b6 Through the model nested elements, each task can define which of the models loaded in the project repository it needs to access. Each model element defines three attributes. The ref attribute specifies the name of the model that the task needs to access, the as attribute defines the name by which the model will be accessible in the context of the task, and the aliases attribute defines a comma-delimited sequence of aliases for the model in the context of the task. 0..n parameter nested elements \u00b6 The parameter nested elements enable users to communicate String parameters to tasks. Each parameter element defines a name and a value attribute. Before executing the module, each parameter element is transformed into a String variable with the respective name and value which is then made accessible to the module. 
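As an illustrative sketch of this mechanism (the greeting parameter name and its value are hypothetical, not taken from the documentation above), a String parameter could be passed to an inline EOL module like so, combining a parameter nested element with inline source in the same way the examples above combine model nested elements with inline source:

```xml
<project default="main">
    <target name="main">
        <epsilon.eol>
            <!-- hypothetical parameter: becomes a String variable named "greeting"
                 in the module's context before execution -->
            <parameter name="greeting" value="Hello from the workflow"/>
            <![CDATA[ greeting.println(); ]]>
        </epsilon.eol>
    </target>
</project>
```

Running this target should print the value of greeting to the console, since the parameter is turned into a String variable before the module executes.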
0..n exports nested elements \u00b6 To facilitate low-level integration between different Epsilon tasks, each task can export a number of variables to the project context, so that subsequent tasks can access them later. Each export nested element defines three attributes. The ref attribute specifies the name of the variable to be exported, the as string attribute defines the name by which the variable is stored in the project context and the optional boolean attribute specifies whether the variable is mandatory. If optional is set to false and the module does not specify such a variable, an ANT BuildException is raised. 0..n uses nested elements \u00b6 The uses nested elements enable tasks to import variables exported by previous Epsilon tasks. Each use element supports three attributes. The ref attribute specifies the name of the variable to be used. If there is no variable with this name in the project context, the ANT project properties are queried. This enables Epsilon modules to access ANT parameters (e.g. provided using command-line arguments). The as attribute specifies the name by which the variable is accessible in the context of the task. Finally, the optional boolean attribute specifies whether the variable must exist in the project context. To better illustrate the runtime communication mechanism, a minimal example is provided below. In the first listing, Exporter.eol defines a String variable named x and assigns a value to it. The workflow below specifies that after executing Exporter.eol , it must export a variable named x with the new name y to the project context. Finally, it defines that before executing User.eol , it must query the project context for a variable named y and in case this is available, add the variable to the module's context and then execute it. Thus, the result of executing the workflow is Some string printed in the output console. 
// Exporter.eol var x : String = \"Some string\"; // User.eol z.println(); <epsilon.eol src= \"Exporter.eol\" > <exports ref= \"x\" as= \"y\" /> </epsilon.eol> <epsilon.eol src= \"User.eol\" > <uses ref= \"y\" as= \"z\" /> </epsilon.eol> Model Management Tasks \u00b6 Having discussed the core framework, this section presents the model management tasks that have been implemented atop it, using languages of the Epsilon platform. Generic Model Management Task \u00b6 The epsilon.eol task executes an EOL module, defined using the src attribute on the models that are specified using the model nested elements. Model Validation Task \u00b6 The epsilon.evl task executes an EVL module, defined using the src attribute on the models that are specified using the model nested elements. In addition to the attributes defined by the ExecutableModuleTask, this task also provides the following attributes: failOnErrors : Errors are the results of unsatisfied constraints. Setting the value of this attribute to true (default is false ) causes a BuildException to be raised if one or more errors are identified during the validation process. failOnWarnings : Similarly to errors, warnings are the results of unsatisfied critiques. Setting the value of this attribute to true (default is also false ) causes a BuildException to be raised if one or more warnings are identified during the validation process. exportConstraintTrace : This attribute enables developers to export the internal constraint trace constructed during model validation to the project context so that it can be later accessed by other tasks - which could for example attempt to automatically repair the identified inconsistencies. exportAsModel : Setting the value of this attribute to true (default is false ) causes EVL to export the results of the validation as a new model in the project repository, named \"EVL\". This model contains all the UnsatisfiedConstraint instances found by EVL. 
These instances contain several useful attributes: constraint points to the Constraint with the definition of the constraint and instance points to the model element which did not satisfy the constraint. From the Constraint , isCritique can be used to check if it is a critique or not, and name contains the name of the constraint. Model-to-Model Transformation Task \u00b6 The epsilon.etl task executes an ETL module, defined using the src attribute to transform between the models that are specified using the model nested elements. In addition to the attributes defined by the ExecutableModuleTask, this task also provides the exportTransformationTrace attribute that enables the developer to export the internal transformation trace to the project context. In this way this trace can be reused by subsequent tasks; for example another task can serialize it in the form of a separate traceability model. Model Comparison Task \u00b6 The epsilon.ecl task executes an ECL module, defined using the src attribute to establish matches between elements of the models that are specified using the model nested elements. In addition to the attributes defined by the ExecutableModuleTask, this task also provides the exportMatchTrace attribute that enables users to export the match-trace calculated during the comparison to the project context so that subsequent tasks can reuse it. For example, as discussed in the sequel, an EML model merging task can use it as a means of identifying correspondences on which to perform merging. In another example, the match-trace can be stored by a subsequent EOL task in the form of a stand-alone weaving model. Model Merging Task \u00b6 The epsilon.eml task executes an EML module, defined using the src attribute on the models that are specified using the model nested elements. 
In addition to the attributes defined by the ExecutableModuleTask, this task also provides the following attributes: useMatchTrace : To merge a set of models, an EML module needs an established match-trace between elements of the models. The useMatchTrace attribute enables the EML task to use a match-trace exported by a preceding ECL task (using its exportMatchTrace attribute). exportMergeTrace, exportTransformationTrace : Similarly to ETL, through these attributes an EML task can export the internal traces calculated during merging for subsequent tasks to use. Model-to-Text Transformation Task \u00b6 To support model-to-text transformations, the EglTask (epsilon.egl) task is provided that executes an Epsilon Generation Language (EGL) module. In addition to the attributes defined by ExecutableModuleTask , EglTask also defines the following attributes: target : Defines a file in which all of the generated text will be stored. templateFactoryType : Defines the Java class that will be instantiated to provide a TemplateFactory for the EGL program. The specified class must be on the classpath and must subtype EglTemplateFactory . EglTask may nest any number of formatter elements. The formatter nested element has the following attributes: implementation (required) : Defines the Java class that will be instantiated to provide a Formatter for the EGL program. The specified class must be on the classpath and must subtype Formatter . Model Migration Task \u00b6 To support model migration, FlockTask (epsilon.flock) is provided for executing an Epsilon Flock module. In addition to the attributes defined by ExecutableModuleTask , FlockTask also defines the following mandatory attributes: originalModel : Specifies which of the currently loaded models should be used as the source of the model migration. migratedModel : Specifies which of the currently loaded models should be used as the target of the model migration. 
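A minimal sketch of how the Flock task might be invoked follows; the model names Old and New and the file Migrate.mig are hypothetical, chosen only to illustrate the two mandatory attributes:

```xml
<!-- load the source and target models (details elided, as in the loadModel examples above) -->
<epsilon.loadModel name="Old" type="EMF"> ... </epsilon.loadModel>
<epsilon.loadModel name="New" type="EMF"> ... </epsilon.loadModel>
<!-- migrate the contents of Old into New using the strategy in Migrate.mig -->
<epsilon.flock src="Migrate.mig" originalModel="Old" migratedModel="New"/>
```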
Pattern Matching Task \u00b6 The epsilon.epl task executes an EPL module, defined using the src attribute to perform pattern matching on the models that are specified using the model nested elements. In addition to the attributes defined by the ExecutableModuleTask, this task also provides the following attributes. repeatWhileMatches : A boolean specifying whether the pattern matching process should continue to execute for as long as matches are found. maxLoops : An integer specifying the maximum number of pattern matching iterations. exportAs : The name under which the computed pattern match model should be made available to other Epsilon tasks of the workflow. Java Class Static Method Execution Task \u00b6 The epsilon.java.executeStaticMethod task executes a parameter-less static method, defined using the method attribute, of a Java class, defined using the javaClass attribute. This task can be useful for setting up the infrastructure of Xtext-based languages.","title":"Workflow (Ant tasks)"},{"location":"doc/workflow/#orchestration-workflow","text":"In practice, model management tasks are seldom carried out in isolation; instead, they are often combined together to form complex workflows. Therefore, in addition to task-specific languages for individual tasks, Epsilon provides an orchestration mechanism for composing tasks into automated build processes.","title":"Orchestration Workflow"},{"location":"doc/workflow/#motivation","text":"As a motivating example, an exemplar workflow that consists of both model management tasks (1-4, 6) and mainstream software development tasks (5, 7) is displayed below. 
Load a UML model Validate it Transform it into a Database Schema model Generate Java code from the UML model Compile the Java code Generate SQL code from the Database model Deploy the SQL code in a Database Management System (DBMS) In the above workflow, if the validation step (2) fails, the entire process should be aborted and the identified errors should be reported to the user. This example demonstrates that to be of practical use, a task orchestration framework needs to be able to coordinate both model management and mainstream development tasks and provide mechanisms for establishing dependencies between different tasks. This page discusses such a framework for orchestrating modular model management tasks implemented using languages of the Epsilon platform. As the problem of task coordination is common in software development, many technical solutions have already been proposed and are widely used by software practitioners. In this context, designing a new general-purpose workflow management solution was deemed inappropriate. Therefore, the task orchestration solution discussed here has been designed as an extension to the robust and widely used ANT framework. A brief overview of ANT as well as a discussion on the choice to design the orchestration workflow of Epsilon atop it is provided below.","title":"Motivation"},{"location":"doc/workflow/#the-ant-tool","text":"ANT, named so because it is a little thing that can be used to build big things , is a robust and widely-used framework for composing automated workflows from small reusable activities. The most important advantages of ANT, compared to traditional build tools such as gnumake , are that it is platform independent and easily extensible. Platform independence is achieved by building atop Java, and extensibility is realized through a lightweight binding mechanism that enables developers to contribute custom tasks using well defined interfaces and extension points. 
This section provides a brief discussion of the structure and concrete syntax of ANT workflows, as well as the extensibility mechanisms that ANT provides to enable users to contribute custom tasks.","title":"The ANT Tool"},{"location":"doc/workflow/#structure","text":"In ANT, each workflow is captured as a project . A simplified illustration of the structure of an ANT project is displayed in the figure below. Each ANT project consists of a number of targets . The one specified as the default is executed automatically when the project is executed. Each target contains a number of tasks and depends on other targets that must be executed before it. An ANT task is responsible for a distinct activity and can either succeed or fail. Exemplar activities implemented by ANT tasks include file system management, compiler invocation, version management and remote artefact deployment. classDiagram class Project { -targets: Target[*] -default: Target -properties: Property[*] } class Task { -typeName: String -name: String -attributes: Attribute[*] } class Attribute { -name: String -value: String } class Target { -name: String -tasks: Task[*] -depends: Target[*] } class HashMap { +put(key: String, object: Object) +get(key: String): Object } Project -- Property: properties * Project -- Target: targets * Target -- Project: default Property --|> Task Task -- Attribute: attributes * Task -- Target: tasks * Target -- Target: depends * Project -- HashMap: references *","title":"Structure"},{"location":"doc/workflow/#concrete-syntax","text":"In terms of concrete syntax, ANT provides an XML-based syntax. In the listing below, an exemplar ANT project that compiles a set of Java files is illustrated. The project contains one target ( main ) which is also set to be the default target. 
The main target contains one javac task that specifies attributes such as srcdir , destdir and classpath , which define that the Java compiler will compile a set of Java files contained in the src directory into classes that should be placed in the build directory using dependencies.jar as an external library. <project default= \"main\" > <target name= \"main\" > <javac srcdir= \"${src}\" destdir= \"${build}\" classpath= \"dependencies.jar\" debug= \"on\" source= \"1.4\" /> </target> </project>","title":"Concrete Syntax"},{"location":"doc/workflow/#extending-ant","text":"Binding between the XML tags that describe the tasks and the actual implementations of the tasks is achieved through a light-weight mechanism at two levels. First, the tag (in the example above, javac ) is resolved to a Java class that extends the org.apache.tools.ant.Task abstract class (in the case of javac , the class is org.apache.tools.ant.taskdefs.Javac ) via a configuration file. Then, the attributes of the tasks (e.g. srcdir ) are set using the reflective features that Java provides. Finally, the execute() method of the task is invoked to perform the actual job. ANT also supports more advanced features including nested XML elements and filesets , however providing a complete discussion is beyond the scope of this page.","title":"Extending ANT"},{"location":"doc/workflow/#integration-challenges","text":"A simple approach to extending ANT with support for model management tasks would be to implement one standalone task for each language in Epsilon. However, such an approach demonstrates a number of integration and performance shortcomings which are discussed below. Since models are typically serialized in the file system, before a task is executed, the models it needs to access/modify must be parsed and loaded in memory. In the absence of a more elaborate framework, each model management task would have to take responsibility for loading and storing the models it operates on. 
Also, in most workflows, more than one task operates on the same models sequentially, and needlessly loading/storing the same models many times in the context of the same workflow is an expensive operation both time and memory-wise, particularly as the size of models increases. Another weakness of this primitive approach is limited inter-task communication. In the absence of a communication framework that allows model management tasks to exchange information with each other, it is often the case that many tasks end up performing the same (potentially expensive) queries on models. By contrast, an inter-task communication framework would enable time and resource intensive calculations to be performed once and their results to be communicated to all interested subsequent tasks. Having discussed ANT, Epsilon and the challenges their integration poses, the following sections present the design of a solution that enables developers to invoke model management tasks in the context of ANT workflows. The solution consists of a core framework that addresses the challenges discussed above, a set of specific tasks, each of which implements a distinct model management activity, and a set of tasks that enable developers to initiate and manage transactions on models using the respective facilities provided by Epsilon's model connectivity layer .","title":"Integration Challenges"},{"location":"doc/workflow/#framework-design-and-core-tasks","text":"The role of the core framework, illustrated below, is to provide model loading and storing facilities as well as runtime communication facilities to the individual model management tasks that build atop it. This section provides a detailed discussion of the components it consists of. 
classDiagram class Task { -name: String -type: String } class VariableNestedElement { -ref: String -as: String -optional: String } class EpsilonTask { -profile: Boolean +getProjectRepository(): ModelRepository +getProjectContext(): IEolContext } class ExecutableModuleTask { -src: String -code: String -models: ModelNestedElement[*] -exports: ExportNestedElement[*] -uses: UsesNestedElement[*] } class ModelNestedElement { -ref: String -as: String -optional: String } Task <|-- EpsilonTask EpsilonTask <|-- ExecutableModuleTask ExecutableModuleTask *-- ModelNestedElement: models * ExecutableModuleTask *-- UsesNestedElement: uses * ExecutableModuleTask *-- ExportsNestedElement: exports * ExportsNestedElement --|> VariableNestedElement UsesNestedElement --|> VariableNestedElement classDiagram class LoadModelTask { -name: String -type: String -aliases: String -parameters: ParameterNestedElement[*] } class ParameterNestedElement { -name: String -value: String -file: String } class StoreModelTask { -model: String -target: String } class DisposeModelTask { -model: String } class StartTransactionTask { -name: String -models: String } class CommitTransactionTask { -name: String } class RollbackTransactionTask { -name: String } EpsilonTask <|-- CommitTransactionTask EpsilonTask <|-- StartTransactionTask RollbackTransactionTask --|> EpsilonTask EpsilonTask <|-- LoadModelTask StoreModelTask --|> EpsilonTask DisposeModelTask --|> EpsilonTask DisposeModelsTask --|> EpsilonTask LoadModelTask *-- ParameterNestedElement: parameters *","title":"Framework Design and Core Tasks"},{"location":"doc/workflow/#the-epsilontask-task","text":"An ANT task can access the project in which it is contained by invoking the Task.getProject() method. To facilitate sharing of arbitrary information between tasks, ANT projects provide two convenience methods, namely addReference(String key, Object ref) and getReference(String key) : Object . 
The former is used to add key-value pairs, which are then accessible using the latter from other tasks of the project. To avoid loading models multiple times and to enable on-the-fly management of models from different Epsilon modules without needing to store and re-load the models after each task, a reference to a project-wide model repository has been added to the current ANT project using the addReference method discussed above. In this way, all the subclasses of the abstract EpsilonTask can invoke the getProjectRepository() method to access the project model repository. Also, to support a variable sharing mechanism that enables inter-task communication, the same technique has been employed; a shared context, accessible by all Epsilon tasks via the getProjectContext() method, has been added. Through this mechanism, model management tasks can export variables to the project context (e.g. traces or lists containing results of expensive queries) which other tasks can then reuse. EpsilonTask also specifies a profile attribute that defines if the execution of the task must be profiled using the profiling features provided by Epsilon. Profiling is a particularly important aspect of workflow execution, especially where model management languages are involved. The main reason is that model management languages tend to provide convenient features which can however be computationally expensive (such as the allInstances() EOL built-in feature that returns all the instances of a specific metaclass in the model) and when used more often than really needed, can significantly degrade the overall performance. The workflow leverages the model-transaction services provided by the model connectivity framework of Epsilon by providing three tasks for managing transactions in the context of workflows.","title":"The EpsilonTask task"},{"location":"doc/workflow/#model-loading-tasks","text":"The LoadModelTask (epsilon.loadModel) loads a model from an arbitrary location (e.g. 
file-system, database) and adds it to the project repository so that subsequent Epsilon tasks can query or modify it. Since Epsilon supports many modelling technologies (e.g. EMF, MDR, XML), the LoadModelTask defines only three generic attributes. The name attribute specifies the name of the model in the project repository. The type attribute specifies the modelling technology with which the model is captured and is used to resolve the technology-specific model loading functionality. Finally, the aliases attribute defines a comma-separated list of alternative names by which the model can be accessed in the model repository. The rest of the information needed to load a model is implementation-specific and is therefore provided through parameter nested elements, each one defining a pair of name - value attributes. As an example, a task for loading an EMF model that has a file-based ECore metamodel is displayed below. <epsilon.loadModel name= \"Tree1\" type= \"EMF\" > <parameter name= \"modelFile\" value= \"TreeInstance.ecore\" /> <parameter name= \"metamodelFile\" value= \"Tree.ecore\" /> <parameter name= \"isMetamodelFileBased\" value= \"true\" /> <parameter name= \"readOnLoad\" value= \"true\" /> </epsilon.loadModel> LoadEmfModelTask is a specialised version of LoadModelTask only for EMF models. While the type attribute is no longer available, the task still supports the name and aliases attributes. In addition, some of the values which had to be provided through parameter nested elements can now be set using regular attributes, such as modelFile , modelUri , metamodelFile (which implicitly indicates that the metamodel is file-based), metamodelUri , reuseUnmodifiedMetamodelFile (which can be set to \"false\" to avoid reusing file-based metamodels that have not been modified since the last time they were loaded), read (equivalent to readOnLoad ) and store (equivalent to storeOnDisposal ). 
The listing below shows the equivalent fragment required to produce the same result as in the listing above. <epsilon.emf.loadModel name= \"Tree1\" modelFile= \"TreeInstance.ecore\" metamodelFile= \"Tree.ecore\" />","title":"Model Loading Tasks"},{"location":"doc/workflow/#model-storing-task","text":"The StoreModelTask (epsilon.storeModel) is used to store a model residing in the project repository. The StoreModelTask defines three attributes: name (required): name of the model to be stored. targetUri (optional): URI where the model will be stored (e.g. \"file:/path/to/destination\"). target (optional): file path where the model will be stored (e.g. \"file.xmi\"). targetUri takes precedence over target . If neither is defined, then the model is stored in the location from which it was originally loaded.","title":"Model Storing Task"},{"location":"doc/workflow/#model-disposal-tasks","text":"When a model is no longer required by tasks of the workflow, it can be disposed using the epsilon.disposeModel task. The task provides the model attribute that defines the name of the model to be disposed. Also, the attribute-less epsilon.disposeModels task is provided that disposes all the models in the project model repository. This task is typically invoked when the model management part of the workflow has finished.","title":"Model Disposal Tasks"},{"location":"doc/workflow/#the-starttransaction-task","text":"The epsilon.startTransaction task defines a name attribute that identifies the transaction. It also optionally defines a comma-separated list of model names ( models ) that the transaction will manage. 
If the models attribute is not specified, the transaction involves all the models contained in the common project model repository.","title":"The StartTransaction Task"},{"location":"doc/workflow/#the-committransaction-and-rollbacktransaction-tasks","text":"The epsilon.commitTransaction and epsilon.rollbackTransaction tasks define a name attribute through which the transaction to be committed/rolled-back is located in the project's active transactions. If several active transactions with the same name exist, the most recent one is selected. The example below demonstrates the use of the epsilon.startTransaction and epsilon.rollbackTransaction tasks. In this example, two empty models Tree1 and Tree2 are loaded in lines 1-2. Then, the EOL task of line 4 queries the models and prints the number of instances of the Tree metaclass in each one of them (which is 0 for both). Then, in line 13, a transaction named T1 is started on model Tree1. The EOL task of line 15 creates a new instance of Tree in both Tree1 and Tree2 and prints the number of instances of Tree in the two models (which is 1 for both models). Then, in line 26, the T1 transaction is rolled back and any changes done in its context to model Tree1 (but not Tree2) are undone. Therefore, the EOL task of line 28, which prints the number of instances of Tree in both models, prints 0 for Tree1 but 1 for Tree2. <epsilon.loadModel name= \"Tree1\" type= \"EMF\" > ... </epsilon.loadModel> <epsilon.loadModel name= \"Tree2\" type= \"EMF\" > ... 
</epsilon.loadModel> <epsilon.eol> <![CDATA[ Tree1!Tree.allInstances.size().println(); // prints 0 Tree2!Tree.allInstances.size().println(); // prints 0 ]]> <model ref= \"Tree1\" /> <model ref= \"Tree2\" /> </epsilon.eol> <epsilon.startTransaction name= \"T1\" models= \"Tree1\" /> <epsilon.eol> <![CDATA[ var t1 : new Tree1!Tree; Tree1!Tree.allInstances.size().println(); // prints 1 var t2 : new Tree2!Tree; Tree2!Tree.allInstances.size().println(); // prints 1 ]]> <model ref= \"Tree1\" /> <model ref= \"Tree2\" /> </epsilon.eol> <epsilon.rollbackTransaction name= \"T1\" /> <epsilon.eol> <![CDATA[ Tree1!Tree.allInstances.size().println(); // prints 0 Tree2!Tree.allInstances.size().println(); // prints 1 ]]> <model ref= \"Tree1\" /> <model ref= \"Tree2\" /> </epsilon.eol> classDiagram class ExecutableModuleTask { -src: String } class EmlTask { -useMatchTrace: String -exportTransformationTrace: String -exportMergeTrace: String } class EtlTask { -exportTransformationTrace: String } class EglTask { -target: String } class EclTask { -exportMatchTrace: String -useMatchTrace: String } class EvlTask { -failOnErrors: Boolean -failOnWarnings: Boolean -exportConstraintTrace: String } ExecutableModuleTask <|-- EclTask ExecutableModuleTask <|-- EvlTask ExecutableModuleTask <|-- EglTask EmlTask --|> ExecutableModuleTask EtlTask --|> ExecutableModuleTask EolTask --|> ExecutableModuleTask","title":"The CommitTransaction and RollbackTransaction Tasks"},{"location":"doc/workflow/#the-abstract-executable-module-task","text":"This task is the base of all the model management tasks presented in the following section. Its aim is to encapsulate the commonalities of Epsilon tasks in order to reduce duplication among them. As already discussed, in Epsilon, specifications of model management tasks are organized in executable modules. 
While modules can be stored anywhere, in the case of the workflow it is assumed that they are either stored as separate files in the file-system or they are provided inline within the workflow. Thus, this abstract task defines a src attribute that specifies the path of the source file in which the Epsilon module is stored, but also supports inline specification of the source of the module. The two alternatives are demonstrated in the listings below. <project default= \"main\" > <target name= \"main\" > <epsilon.eol src= \"HelloWorld.eol\" /> </target> </project> <project default= \"main\" > <target name= \"main\" > <epsilon.eol> <![CDATA[ \"Hello world\".println(); ]]> </epsilon.eol> </target> </project> Optionally, users can enable debugging for the module to be run by setting the debug attribute to true . An example is shown below. If the module reaches a breakpoint, users will be able to run the code step by step and inspect the stack trace and its variables. <project default= \"main\" > <target name= \"main\" > <epsilon.eol src= \"HelloWorld.eol\" debug= \"true\" /> </target> </project> The task also defines the following nested elements:","title":"The Abstract Executable Module Task"},{"location":"doc/workflow/#0n-model-nested-elements","text":"Through the model nested elements, each task can define which of the models loaded in the project repository it needs to access. Each model element defines three attributes. The ref attribute specifies the name of the model that the task needs to access, the as attribute defines the name by which the model will be accessible in the context of the task, and the aliases attribute defines a comma-delimited sequence of aliases for the model in the context of the task.","title":"0..n model nested elements"},{"location":"doc/workflow/#0n-parameter-nested-elements","text":"The parameter nested elements enable users to communicate String parameters to tasks. Each parameter element defines a name and a value attribute. 
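As an illustrative sketch (the script name and the parameter's name and value are hypothetical, chosen only for this example): <epsilon.eol src= \"Greet.eol\" > <parameter name= \"greeting\" value= \"Hello\" /> </epsilon.eol> 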
Before executing the module, each parameter element is transformed into a String variable with the respective name and value, which is then made accessible to the module.","title":"0..n parameter nested elements"},{"location":"doc/workflow/#0n-exports-nested-elements","text":"To facilitate low-level integration between different Epsilon tasks, each task can export a number of variables to the project context, so that subsequent tasks can access them later. Each export nested element defines three attributes. The ref attribute specifies the name of the variable to be exported, the as attribute defines the name by which the variable is stored in the project context, and the boolean optional attribute specifies whether the variable may be absent. If optional is set to false and the module does not specify such a variable, an ANT BuildException is raised.","title":"0..n exports nested elements"},{"location":"doc/workflow/#0n-uses-nested-elements","text":"The uses nested elements enable tasks to import variables exported by previous Epsilon tasks. Each use element supports three attributes. The ref attribute specifies the name of the variable to be used. If there is no variable with this name in the project context, the ANT project properties are queried. This enables Epsilon modules to access ANT parameters (e.g. provided using command-line arguments). The as attribute specifies the name by which the variable is accessible in the context of the task. Finally, the boolean optional attribute specifies whether the variable must exist in the project context. To better illustrate the runtime communication mechanism, a minimal example is provided below. In the first listing, Exporter.eol defines a String variable named x and assigns a value to it. The workflow below specifies that after executing Exporter.eol , it must export a variable named x with the new name y to the project context. 
Finally, it defines that before executing User.eol , it must query the project context for a variable named y and, if it is available, add the variable to the module's context and then execute it. Thus, the result of executing the workflow is Some string printed in the output console. // Exporter.eol var x : String = \"Some string\"; // User.eol z.println(); <epsilon.eol src= \"Exporter.eol\" > <exports ref= \"x\" as= \"y\" /> </epsilon.eol> <epsilon.eol src= \"User.eol\" > <uses ref= \"y\" as= \"z\" /> </epsilon.eol>","title":"0..n uses nested elements"},{"location":"doc/workflow/#model-management-tasks","text":"Having discussed the core framework, this section presents the model management tasks that have been implemented atop it, using languages of the Epsilon platform.","title":"Model Management Tasks"},{"location":"doc/workflow/#generic-model-management-task","text":"The epsilon.eol task executes an EOL module, defined using the src attribute, on the models that are specified using the model nested elements.","title":"Generic Model Management Task"},{"location":"doc/workflow/#model-validation-task","text":"The epsilon.evl task executes an EVL module, defined using the src attribute, on the models that are specified using the model nested elements. In addition to the attributes defined by the ExecutableModuleTask, this task also provides the following attributes: failOnErrors : Errors are the results of unsatisfied constraints. Setting the value of this attribute to true (default is false ) causes a BuildException to be raised if one or more errors are identified during the validation process. failOnWarnings : Similarly to errors, warnings are the results of unsatisfied critiques. Setting the value of this attribute to true (default is also false ) causes a BuildException to be raised if one or more warnings are identified during the validation process. 
exportConstraintTrace : This attribute enables developers to export the internal constraint trace constructed during model validation to the project context so that it can be later accessed by other tasks - which could, for example, attempt to automatically repair the identified inconsistencies. exportAsModel : Setting the value of this attribute to true (default is false ) causes EVL to export the results of the validation as a new model in the project repository, named \"EVL\". This model contains all the UnsatisfiedConstraint instances found by EVL. These instances contain several useful attributes: constraint points to the Constraint with the definition of the constraint and instance points to the model element which did not satisfy the constraint. From the Constraint , isCritique can be used to check if it is a critique or not, and name contains the name of the constraint.","title":"Model Validation Task"},{"location":"doc/workflow/#model-to-model-transformation-task","text":"The epsilon.etl task executes an ETL module, defined using the src attribute, to transform between the models that are specified using the model nested elements. In addition to the attributes defined by the ExecutableModuleTask, this task also provides the exportTransformationTrace attribute that enables the developer to export the internal transformation trace to the project context. In this way, the trace can be reused by subsequent tasks; for example, another task can serialize it in the form of a separate traceability model.","title":"Model-to-Model Transformation Task"},{"location":"doc/workflow/#model-comparison-task","text":"The epsilon.ecl task executes an ECL module, defined using the src attribute, to establish matches between elements of the models that are specified using the model nested elements. 
In addition to the attributes defined by the ExecutableModuleTask, this task also provides the exportMatchTrace attribute that enables users to export the match-trace calculated during the comparison to the project context so that subsequent tasks can reuse it. For example, as discussed in the sequel, an EML model merging task can use it as a means of identifying correspondences on which to perform merging. In another example, the match-trace can be stored by a subsequent EOL task in the form of a stand-alone weaving model.","title":"Model Comparison Task"},{"location":"doc/workflow/#model-merging-task","text":"The epsilon.eml task executes an EML module, defined using the src attribute, on the models that are specified using the model nested elements. In addition to the attributes defined by the ExecutableModuleTask, this task also provides the following attributes: useMatchTrace : To merge a set of models, an EML module needs an established match-trace between elements of the models. The useMatchTrace attribute enables the EML task to use a match-trace exported by a preceding ECL task (using its exportMatchTrace attribute). exportMergeTrace, exportTransformationTrace : Similarly to ETL, through these attributes an EML task can export the internal traces calculated during merging for subsequent tasks to use.","title":"Model Merging Task"},{"location":"doc/workflow/#model-to-text-transformation-task","text":"To support model-to-text transformations, the EglTask (epsilon.egl) is provided, which executes an Epsilon Generation Language (EGL) module. In addition to the attributes defined by ExecutableModuleTask , EglTask also defines the following attributes: target : Defines a file in which all of the generated text will be stored. templateFactoryType : Defines the Java class that will be instantiated to provide a TemplateFactory for the EGL program. The specified class must be on the classpath and must subtype EglTemplateFactory . 
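As an illustrative sketch (the template, output file and model names are hypothetical), an EglTask invocation might look as follows: <epsilon.egl src= \"Main.egl\" target= \"Output.txt\" > <model ref= \"MyModel\" /> </epsilon.egl> 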
EglTask may nest any number of formatter elements. The formatter nested element has the following attributes: implementation (required) : Defines the Java class that will be instantiated to provide a Formatter for the EGL program. The specified class must be on the classpath and must subtype Formatter .","title":"Model-to-Text Transformation Task"},{"location":"doc/workflow/#model-migration-task","text":"To support model migration, FlockTask (epsilon.flock) is provided for executing an Epsilon Flock module. In addition to the attributes defined by ExecutableModuleTask , FlockTask also defines the following mandatory attributes: originalModel : Specifies which of the currently loaded models should be used as the source of the model migration. migratedModel : Specifies which of the currently loaded models should be used as the target of the model migration.","title":"Model Migration Task"},{"location":"doc/workflow/#pattern-matching-task","text":"The epsilon.epl task executes an EPL module, defined using the src attribute to perform pattern matching on the models that are specified using the model nested elements. In addition to the attributes defined by the ExecutableModuleTask, this task also provides the following attributes. repeatWhileMatches : A boolean specifying whether the pattern matching process should continue to execute for as long as matches are found. maxLoops : An integer specifying the maximum number of pattern matching iterations. exportAs : The name under which the computed pattern match model should be made available to other Epsilon tasks of the workflow.","title":"Pattern Matching Task"},{"location":"doc/workflow/#java-class-static-method-execution-task","text":"The epsilon.java.executeStaticMethod task executes a parameter-less static method, defined using the method attribute, of a Java class, defined using the javaClass attribute. 
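For instance, assuming a hypothetical setup class, the task could be invoked as follows: <epsilon.java.executeStaticMethod javaClass= \"org.example.MyLanguageStandaloneSetup\" method= \"doSetup\" /> 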
This task can be useful for setting up the infrastructure of Xtext-based languages.","title":"Java Class Static Method Execution Task"},{"location":"doc/articles/","text":"Articles \u00b6 This page contains an index of articles presenting a range of tools and languages in Epsilon. Should you find that an article contains errors or is inconsistent with the current release of Epsilon, please let us know . Epsilon Object Language \u00b6 EOL syntax updates : This article summarizes changes in the EOL concrete syntax over time. Extended Properties : This article demonstrates the extended properties mechanism in EOL (and by inheritance, in all languages in Epsilon). Call Java from Epsilon : This article demonstrates how to create Java objects, access their properties and call their methods from Epsilon languages. Call Java functional interfaces from Epsilon : This article demonstrates how to call native methods which take functions as their parameter, using lambdas and streams directly from Epsilon using EOL syntax. Profiling Epsilon Programs : This article demonstrates how to profile Epsilon programs using the platform's built-in profiling tools. Epsilon Validation Language \u00b6 EVL-GMF Integration : This article demonstrates evaluating EVL constraints from within a GMF-based editor. Parallel Execution : This article explains how to use the parallel module implementations for EOL and rule-based languages like EVL. Epsilon Generation Language \u00b6 Code Generation Tutorial with EGL : This article demonstrates using EGL templates to generate HTML files from an XML document. Using template operations in EGL : This article demonstrates template operations for writing re-usable code in EGL (the model-to-text language of Epsilon). EGL as a server-side language : This article demonstrates using EGL (the model-to-text language of Epsilon) in Tomcat to produce HTML pages from EMF models on the fly. 
Co-ordinating EGL templates with EGX : This article demonstrates how to parameterize EGL templates and execute them multiple times to produce multiple files. Re-using EGL templates : This article demonstrates how to invoke other EGL templates and direct their output to the calling EGL template. Epsilon Transformation Language \u00b6 XML to EMF Transformation : This article shows how to transform an XML document into an EMF model using the Epsilon Transformation Language and Epsilon's XML driver . Epsilon and EMF models \u00b6 Emfatic language reference : Emfatic is a language designed to represent EMF Ecore models in a textual form. This article details the syntax of Emfatic and the mapping between Emfatic declarations and the corresponding Ecore constructs. Reflective EMF tutorial : This tutorial demonstrates how to create an EMF Ecore metamodel and a sample model that conforms to it reflectively (i.e. without generating any code). Epsilon and EMF : Frequently-asked questions related to querying and modifying EMF-based models with Epsilon. The EMF EPackage Registry View : This article demonstrates the EPackage Registry view which allows developers to inspect the contents of the registered EMF EPackages. Exeed annotation reference : This article lists the annotations you can use on your metamodels to customize the look of the Exeed model editor. Inspecting EMF models with Exeed : This article demonstrates how you can use Exeed to inspect the structure of your EMF models. Working with custom EMF resources : This article demonstrates how you can work with custom EMF resources in Epsilon. Parsing XML documents as EMF models with Flexmi : This article demonstrates how you can use Flexmi to parse XML documents in a fuzzy manner as instances of Ecore metamodels. Modularity Mechanisms in Flexmi : This article demonstrates how you can break down Flexmi models over multiple files and use templates to capture complex reusable structures in your models. 
Epsilon and Simulink models \u00b6 Scripting Simulink models using Epsilon : In this article we demonstrate how you can query and modify Simulink models in Epsilon. Managing Matlab Simulink/Stateflow models from Epsilon : This tutorial shows you how to manipulate Simulink and Stateflow blocks from within Epsilon. Epsilon and other types of models \u00b6 Scripting XML documents using Epsilon : In this article we demonstrate how you can create, query and modify plain standalone XML documents (i.e. no XSD/DTD needed) in Epsilon programs using the PlainXML driver. Scripting CSV files using Epsilon : This article demonstrates how you can query CSV files with Epsilon programs using the CSV driver. Scripting BibTeX files using Epsilon : In this article we demonstrate how you can query a list of references stored in BibTeX files with Epsilon programs using the BibTeX driver. Eugenia \u00b6 Fundamentals \u00b6 Eugenia GMF Tutorial : This article provides a guide to using Eugenia for developing GMF editors, as well as its complete list of features and supported annotations. Customizing an editor generated with Eugenia : This article demonstrates Eugenia's polishing transformations, which can be used to customize GMF editors in a systematic and reproducible way. Applying source code patches to an editor generated with Eugenia : This article demonstrates Eugenia's patch generation and application functionality, which can be used to customize the Java source code generated by GMF in a systematic and reproducible way. Eugenia: Automated Invocation with Ant : This article demonstrates how to run Eugenia from Ant, and some of the additional features offered through the Ant task. Recipes \u00b6 Eugenia: Nodes with images instead of shapes : This article shows how to create nodes in your GMF editor that are represented with images (png, jpg etc.) instead of the standard GMF shapes (rectangle, ellipse etc.) 
Eugenia: Nodes with images defined at run-time : This article addresses the case where the end-user needs to set an image for each node at runtime. Eugenia: Nodes with a centred layout : This article shows how to create nodes in your GMF editor whose contents are centred both vertically and horizontally. Eugenia: Phantom nodes in GMF editors : This article demonstrates how to define GMF phantom nodes in Eugenia. Picto \u00b6 Visualising Models with Picto : Picto is an Eclipse view for visualising models via model-to-text transformation to SVG/HTML. The article introduces Picto and shows the tool in action. Human-Usable Textual Notation \u00b6 Using the Human-Usable Textual Notation (HUTN) in Epsilon : This article demonstrates how to specify models using a textual notation. Customising Epsilon HUTN documents with configuration : This article demonstrates how to customise Epsilon HUTN documents with a configuration model. Compliance of Epsilon HUTN to the OMG HUTN Standard : This article summarises the similarities and differences between the Epsilon HUTN implementation and the OMG HUTN standard. Teaching Material \u00b6 MDE Exercises : This article provides a number of exercises which enable you to test your knowledge on MDE, EMF and Epsilon. Technical Support \u00b6 Constructing a helpful minimal example : From time to time, you may run into a problem when using Epsilon or find a bug. This article describes how to construct a minimal example that we can use to reproduce the problem on our machine. Extending Epsilon \u00b6 Developing a new Epsilon Language : This article demonstrates how to develop a new language on top of Epsilon. Developing a new EMC Driver : This article demonstrates how to develop a new driver for Epsilon's Model Connectivity layer (EMC). Installation \u00b6 Working with Epsilon 1.x : This article contains instructions for installing legacy versions of Epsilon prior to 2.0. 
Setting up Eclipse for Epsilon development : This article explains how to easily set up and configure an Eclipse IDE for contributing to Epsilon. Epsilon Developers \u00b6 Running Epsilon from source : This article demonstrates how to run Epsilon from source in your machine. Call for User Stories : This is a kind request to all Epsilon Users. Manage the Epsilon website locally : This article demonstrates how to manage the Epsilon website in your machine. Epsilon development principles : These are the guiding principles used by the developers of Epsilon. Resolved bugs : This article discusses the different types of resolved bugs in Epsilon. Managing the target platform : This article outlines how to manage the target platform that Epsilon is built against. Adding new plugins : This article outlines the process of adding new plugins to the main Epsilon repository. Preparing the macOS distribution : This article outlines the process of signing the Eclipse macOS distribution. Forking Epsilon as a non-committer with Git : This article shows how to branch Epsilon into a different remote repository whilst still getting updates from the main project. Publishing to the EpsilonLabs Updatesite : This article outlines the process for publishing a plugin (EMC driver/language/tool) from the EpsilonLabs Github organisation to the EpsilonLabs updatesite. Releasing a new version of Epsilon : This article lists all the tasks required for releasing a version of Epsilon. Releasing a new version to Maven Central : This article outlines how to release a new version of the Epsilon standalone artifacts to Maven Central.","title":"Articles"},{"location":"doc/articles/#articles","text":"This page contains an index of articles presenting a range of tools and languages in Epsilon. 
Should you find that an article contains errors or is inconsistent with the current release of Epsilon, please let us know .","title":"Articles"},{"location":"doc/articles/#epsilon-object-language","text":"EOL syntax updates : This article summarizes changes in the EOL concrete syntax over time. Extended Properties : This article demonstrates the extended properties mechanism in EOL (and by inheritance, in all languages in Epsilon). Call Java from Epsilon : This article demonstrates how to create Java objects, access their properties and call their methods from Epsilon languages. Call Java functional interfaces from Epsilon : This article demonstrates how to call native methods which take functions as their parameter, using lambdas and streams directly from Epsilon using EOL syntax. Profiling Epsilon Programs : This article demonstrates how to profile Epsilon programs using the platform's built-in profiling tools.","title":"Epsilon Object Language"},{"location":"doc/articles/#epsilon-validation-language","text":"EVL-GMF Integration : This article demonstrates evaluating EVL constraints from within a GMF-based editor. Parallel Execution : This article explains how to use the parallel module implementations for EOL and rule-based languages like EVL.","title":"Epsilon Validation Language"},{"location":"doc/articles/#epsilon-generation-language","text":"Code Generation Tutorial with EGL : This article demonstrates using EGL templates to generate HTML files from an XML document. Using template operations in EGL : This article demonstrates template operations for writing re-usable code in EGL (the model-to-text language of Epsilon). EGL as a server-side language : This article demonstrates using EGL (the model-to-text language of Epsilon) in Tomcat to produce HTML pages from EMF models on the fly. Co-ordinating EGL templates with EGX : This article demonstrates how to parameterize EGL templates and execute them multiple times to produce multiple files. 
Re-using EGL templates : This article demonstrates how to invoke other EGL templates and direct their output to the calling EGL template.","title":"Epsilon Generation Language"},{"location":"doc/articles/#epsilon-transformation-language","text":"XML to EMF Transformation : This article shows how to transform an XML document into an EMF model using the Epsilon Transformation Language and Epsilon's XML driver .","title":"Epsilon Transformation Language"},{"location":"doc/articles/#epsilon-and-emf-models","text":"Emfatic language reference : Emfatic is a language designed to represent EMF Ecore models in a textual form. This article details the syntax of Emfatic and the mapping between Emfatic declarations and the corresponding Ecore constructs. Reflective EMF tutorial : This tutorial demonstrates how to create an EMF Ecore metamodel and a sample model that conforms to it reflectively (i.e. without generating any code). Epsilon and EMF : Frequently-asked questions related to querying and modifying EMF-based models with Epsilon. The EMF EPackage Registry View : This article demonstrates the EPackage Registry view which allows developers to inspect the contents of the registered EMF EPackages. Exeed annotation reference : This article lists the annotations you can use on your metamodels to customize the look of the Exeed model editor. Inspecting EMF models with Exeed : This article demonstrates how you can use Exeed to inspect the structure of your EMF models. Working with custom EMF resources : This article demonstrates how you can work with custom EMF resources in Epsilon. Parsing XML documents as EMF models with Flexmi : This article demonstrates how you can use Flexmi to parse XML documents in a fuzzy manner as instances of Ecore metamodels. 
Modularity Mechanisms in Flexmi : This article demonstrates how you can break down Flexmi models over multiple files and use templates to capture complex reusable structures in your models.","title":"Epsilon and EMF models"},{"location":"doc/articles/#epsilon-and-simulink-models","text":"Scripting Simulink models using Epsilon : In this article we demonstrate how you can query and modify Simulink models in Epsilon. Managing Matlab Simulink/Stateflow models from Epsilon : This tutorial shows you how to manipulate Simulink and Stateflow blocks from within Epsilon.","title":"Epsilon and Simulink models"},{"location":"doc/articles/#epsilon-and-other-types-of-models","text":"Scripting XML documents using Epsilon : In this article we demonstrate how you can create, query and modify plain standalone XML documents (i.e. no XSD/DTD needed) in Epsilon programs using the PlainXML driver. Scripting CSV files using Epsilon : This article demonstrates how you can query CSV files with Epsilon programs using the CSV driver. Scripting BibTeX files using Epsilon : In this article we demonstrate how you can query a list of references stored in BibTeX files with Epsilon programs using the BibTeX driver.","title":"Epsilon and other types of models"},{"location":"doc/articles/#eugenia","text":"","title":"Eugenia"},{"location":"doc/articles/#fundamentals","text":"Eugenia GMF Tutorial : This article provides a guide to using Eugenia for developing GMF editors, as well as its complete list of features and supported annotations. Customizing an editor generated with Eugenia : This article demonstrates Eugenia's polishing transformations, which can be used to customize GMF editors in a systematic and reproducible way. Applying source code patches to an editor generated with Eugenia : This article demonstrates Eugenia's patch generation and application functionality, which can be used to customize the Java source code generated by GMF in a systematic and reproducible way. 
Eugenia: Automated Invocation with Ant : This article demonstrates how to run Eugenia from Ant, and some of the additional features offered through the Ant task.","title":"Fundamentals"},{"location":"doc/articles/#recipes","text":"Eugenia: Nodes with images instead of shapes : This article shows how to create nodes in your GMF editor that are represented with images (png, jpg etc.) instead of the standard GMF shapes (rectangle, ellipse etc.) Eugenia: Nodes with images defined at run-time : This article addresses the case where the end-user needs to set an image for each node at runtime. Eugenia: Nodes with a centred layout : This article shows how to create nodes in your GMF editor whose contents are centred both vertically and horizontally. Eugenia: Phantom nodes in GMF editors : This article demonstrates how to define GMF phantom nodes in Eugenia.","title":"Recipes"},{"location":"doc/articles/#picto","text":"Visualising Models with Picto : Picto is an Eclipse view for visualising models via model-to-text transformation to SVG/HTML. The article introduces Picto and shows the tool in action.","title":"Picto"},{"location":"doc/articles/#human-usable-textual-notation","text":"Using the Human-Usable Textual Notation (HUTN) in Epsilon : This article demonstrates how to specify models using a textual notation. Customising Epsilon HUTN documents with configuration : This article demonstrates how to customise Epsilon HUTN documents with a configuration model. 
Compliance of Epsilon HUTN to the OMG HUTN Standard : This article summarises the similarities and differences between the Epsilon HUTN implementation and the OMG HUTN standard.","title":"Human-Usable Textual Notation"},{"location":"doc/articles/#teaching-material","text":"MDE Exercises : This article provides a number of exercises which enable you to test your knowledge on MDE, EMF and Epsilon.","title":"Teaching Material"},{"location":"doc/articles/#technical-support","text":"Constructing a helpful minimal example : From time to time, you may run into a problem when using Epsilon or find a bug. This article describes how to construct a minimal example that we can use to reproduce the problem on our machine.","title":"Technical Support"},{"location":"doc/articles/#extending-epsilon","text":"Developing a new Epsilon Language : This article demonstrates how to develop a new language on top of Epsilon. Developing a new EMC Driver : This article demonstrates how to develop a new driver for Epsilon's Model Connectivity layer (EMC).","title":"Extending Epsilon"},{"location":"doc/articles/#installation","text":"Working with Epsilon 1.x : This article contains instructions for installing legacy versions of Epsilon prior to 2.0. Setting up Eclipse for Epsilon development : This article explains how to easily set up and configure an Eclipse IDE for contributing to Epsilon.","title":"Installation"},{"location":"doc/articles/#epsilon-developers","text":"Running Epsilon from source : This article demonstrates how to run Epsilon from source in your machine. Call for User Stories : This is a kind request to all Epsilon Users. Manage the Epsilon website locally : This article demonstrates how to manage the Epsilon website in your machine. Epsilon development principles : These are the guiding principles used by the developers of Epsilon. Resolved bugs : This article discusses the different types of resolved bugs in Epsilon. 
Managing the target platform : This article outlines how to manage the target platform that Epsilon is built against. Adding new plugins : This article outlines the process of adding new plugins to the main Epsilon repository. Preparing the macOS distribution : This article outlines the process of signing the Eclipse macOS distribution. Forking Epsilon as a non-committer with Git : This article shows how to branch Epsilon into a different remote repository whilst still getting updates from the main project. Publishing to the EpsilonLabs Updatesite : This article outlines the process for publishing a plugin (EMC driver/language/tool) from the EpsilonLabs Github organisation to the EpsilonLabs updatesite. Releasing a new version of Epsilon : This article lists all the tasks required for releasing a version of Epsilon. Releasing a new version to Maven Central : This article outlines how to release a new version of the Epsilon standalone artifacts to Maven Central.","title":"Epsilon Developers"},{"location":"doc/articles/xml-to-emf/","text":"XML to EMF Transformation with ETL \u00b6 This example shows how to transform an XML document into an EMF model using the Epsilon Transformation Language and Epsilon's XML driver . We start with our source XML file ( tree.xml ), which is shown below: <?xml version=\"1.0\"?> <tree name= \"t1\" > <tree name= \"t2\" /> <tree name= \"t3\" > <tree name= \"t4\" /> </tree> </tree> The Ecore metamodel (expressed in Emfatic ) to which our target EMF model will conform is shown below: package tree; class Tree { attr String label; ref Tree#children parent; val Tree[*]#parent children; } Finally, our ETL transformation ( xml2emf.etl ) is in the listing below: rule XmlTree2EmfTree transform s : Xml!t_tree to t : Emf!Tree { t.label = s.a_name; t.children ::= s.c_tree; } The transformation consists of one rule which transforms every tree element in the XML document ( Xml!t_tree ) into an instance of the Tree class of our Ecore metamodel above. 
The rule sets the label of the latter to the name of the former, and the children of the latter, to the equivalent model elements produced by the tree child elements of the former. To run the transformation: Right-click on tree.emf or tree.ecore and select Register EPackages Right-click on xml2emf.launch and select Run As -> xml2emf Once the transformation has executed you can open tree.model to inspect the EMF model it has produced with the reflective tree-based editor. The complete source code of the example is available here .","title":"XML to EMF Transformation with ETL"},{"location":"doc/articles/xml-to-emf/#xml-to-emf-transformation-with-etl","text":"This example shows how to transform an XML document into an EMF model using the Epsilon Transformation Language and Epsilon's XML driver . We start with our source XML file ( tree.xml ), which is shown below: <?xml version=\"1.0\"?> <tree name= \"t1\" > <tree name= \"t2\" /> <tree name= \"t3\" > <tree name= \"t4\" /> </tree> </tree> The Ecore metamodel (expressed in Emfatic ) to which our target EMF model will conform is shown below: package tree; class Tree { attr String label; ref Tree#children parent; val Tree[*]#parent children; } Finally, our ETL transformation ( xml2emf.etl ) is in the listing below: rule XmlTree2EmfTree transform s : Xml!t_tree to t : Emf!Tree { t.label = s.a_name; t.children ::= s.c_tree; } The transformation consists of one rule which transforms every tree element in the XML document ( Xml!t_tree ) into an instance of the Tree class of our Ecore metamodel above. The rule sets the label of the latter to the name of the former, and the children of the latter, to the equivalent model elements produced by the tree child elements of the former. 
To run the transformation: Right-click on tree.emf or tree.ecore and select Register EPackages Right-click on xml2emf.launch and select Run As -> xml2emf Once the transformation has executed you can open tree.model to inspect the EMF model it has produced with the reflective tree-based editor. The complete source code of the example is available here .","title":"XML to EMF Transformation with ETL"},{"location":"doc/articles/adding-new-plugins/","text":"Adding new plugins \u00b6 This article outlines the process of adding new plugins to the main Epsilon repository. Move them to the Epsilon repository. Plugins, features, tests and examples should be placed under the respective directories in the repository. Add pom.xml files similar to the ones we already have for each plugin, but changing the <artifactId> to the Eclipse plugin name. If you want its tests to be run from Hudson as plug-in tests, add them to the EpsilonHudsonPluggedInTestSuite in org.eclipse.epsilon.test . Define a feature for the new plugins (feature project in features/, as usual, but with its own POM) and add it to the site.xml in the org.eclipse.epsilon.updatesite.interim project. Change the plugins/pom.xml , tests/pom.xml and features/pom.xml so they mention the new projects in their <modules> section. If you want a specific standalone JAR for this, you'll need to update the jarmodel.xml , rerun the jarmodel2mvn.launch launch config, and then mention the new Maven assembly descriptor in the org.eclipse.epsilon.standalone/pom.xml file. There's a readme.txt file in that folder that explains the process. Update org.eclipse.epsilon/standalone/org.eclipse.epsilon.standalone/pom.xml with the details of the new plugins.","title":"Adding new plugins"},{"location":"doc/articles/adding-new-plugins/#adding-new-plugins","text":"This article outlines the process of adding new plugins to the main Epsilon repository. Move them to the Epsilon repository. 
Plugins, features, tests and examples should be placed under the respective directories in the repository. Add pom.xml files similar to the ones we already have for each plugin, but changing the <artifactId> to the Eclipse plugin name. If you want its tests to be run from Hudson as plug-in tests, add them to the EpsilonHudsonPluggedInTestSuite in org.eclipse.epsilon.test . Define a feature for the new plugins (feature project in features/, as usual, but with its own POM) and add it to the site.xml in the org.eclipse.epsilon.updatesite.interim project. Change the plugins/pom.xml , tests/pom.xml and features/pom.xml so they mention the new projects in their <modules> section. If you want a specific standalone JAR for this, you'll need to update the jarmodel.xml , rerun the jarmodel2mvn.launch launch config, and then mention the new Maven assembly descriptor in the org.eclipse.epsilon.standalone/pom.xml file. There's a readme.txt file in that folder that explains the process. Update org.eclipse.epsilon/standalone/org.eclipse.epsilon.standalone/pom.xml with the details of the new plugins.","title":"Adding new plugins"},{"location":"doc/articles/bibtex/","text":"Scripting BibTeX files using Epsilon \u00b6 In this article we demonstrate how you can query lists of references stored in BibTeX files in Epsilon programs using the BibTeX EMC driver. All the examples in this article demonstrate using EOL to script BibTeX files. However, it's worth stressing that BibTeX files are supported throughout Epsilon. Therefore, you can use Epsilon to (cross-)validate, transform (to other models - XML or EMF-based -, or to text), compare and merge your BibTeX files. Querying a BibTeX file \u00b6 We use the following eclipse.bib as a base for demonstrating the EOL syntax for querying BibTeX files. @book { steinberg09emf , author = {Steinberg, D. and Budinsky, F. and Paternostro, M. 
and Merks, E.} , title = {{EMF}: {E}clipse {M}odeling {F}ramework} , year = {2008} , publisher = {Addison-Wesley Professional} , address = {Boston, Massachusetts} } @inproceedings { gronback06gmf , author = {Gronback, R.} , title = {Introduction to the {Eclipse Graphical Modeling Framework}} , booktitle = {Proc. EclipseCon} , year = {2006} , address = {Santa Clara, California} } @article { brooks86nosilverbullet , author = {Brooks Jr., F.P.} , title = {No Silver Bullet - Essence and Accidents of Software Engineering} , journal = {IEEE Computer} , volume = {20} , number = {4} , year = {1987} , pages = {10-19} , } How can I access all publications? \u00b6 Presuming that we have specified the name MyPubs when loading the BibTeX file as a model, the allContents method can be used to access all of the entries in the BibTeX file: // Get all publications var publications = MyPubs.allContents(); How can I access a publication? \u00b6 Publications (entries) in a BibTeX file can be accessed by type: // Get all @book elements var books = Book.all; // Get a random book var b = Book.all.random(); Note that the BibTeX driver recognises only those types defined in your BibTeX file. For example, attempting to call Phdthesis.all will result in an error for the BibTeX file shown above, as that BibTeX file contains no @phdthesis entries. How can I access and change the properties of a particular publication? \u00b6 Properties are accessed via the dot notation: // Get a random book var b = Book.all.random(); // Get the title of the random book var t = b.title; // Get the Amazon rating of the random book var a = b.amazonRating; Note that the empty string is returned when accessing a property that does not exist (such as the amazonRating property in the example above). 
Properties can be changed using an assignment statement: // Get a random book var b = Book.all.random(); // Set the title of the random book b.title = \"On the Criteria To Be Used in Decomposing Systems into Modules\" Note that the current version of the BibTeX driver does not support saving changes to disk. Any changes made to properties are volatile (and persist only for the duration of the Epsilon program's execution). Adding a BibTeX file to your launch configuration \u00b6 To add a BibTeX file to your Epsilon launch configuration, you need to select \"Show all model types\" and then choose \"BibTeX model\" from the list of available model types. Then you can configure the details of your BibTeX (name, file etc.) in the screen that pops up. Unsupported features \u00b6 The current version of the BibTeX driver for Epsilon is not yet a complete implementation. In particular, the following features are not yet supported: Storing changes to BibTeX models to disk. Deleting entries from a BibTeX file. Please file an enhancement request on the Epsilon bugzilla if you require -- or can provide a patch for -- these features.","title":"Scripting BibTeX files using Epsilon"},{"location":"doc/articles/bibtex/#scripting-bibtex-files-using-epsilon","text":"In this article we demonstrate how you can query lists of references stored in BibTeX files in Epsilon programs using the BibTeX EMC driver. All the examples in this article demonstrate using EOL to script BibTeX files. However, it's worth stressing that BibTeX files are supported throughout Epsilon. Therefore, you can use Epsilon to (cross-)validate, transform (to other models - XML or EMF-based -, or to text), compare and merge your BibTeX files.","title":"Scripting BibTeX files using Epsilon"},{"location":"doc/articles/bibtex/#querying-a-bibtex-file","text":"We use the following eclipse.bib as a base for demonstrating the EOL syntax for querying BibTeX files. @book { steinberg09emf , author = {Steinberg, D. 
and Budinsky, F. and Paternostro, M. and Merks, E.} , title = {{EMF}: {E}clipse {M}odeling {F}ramework} , year = {2008} , publisher = {Addison-Wesley Professional} , address = {Boston, Massachusetts} } @inproceedings { gronback06gmf , author = {Gronback, R.} , title = {Introduction to the {Eclipse Graphical Modeling Framework}} , booktitle = {Proc. EclipseCon} , year = {2006} , address = {Santa Clara, California} } @article { brooks86nosilverbullet , author = {Brooks Jr., F.P.} , title = {No Silver Bullet - Essence and Accidents of Software Engineering} , journal = {IEEE Computer} , volume = {20} , number = {4} , year = {1987} , pages = {10-19} , }","title":"Querying a BibTeX file"},{"location":"doc/articles/bibtex/#how-can-i-access-all-publications","text":"Presuming that we have specified the name MyPubs when loading the BibTeX file as a model, the allContents method can be used to access all of the entries in the BibTeX file: // Get all publications var publications = MyPubs.allContents();","title":"How can I access all publications?"},{"location":"doc/articles/bibtex/#how-can-i-access-a-publication","text":"Publications (entries) in a BibTeX file can be accessed by type: // Get all @book elements var books = Book.all; // Get a random book var b = Book.all.random(); Note that the BibTeX driver recognises only those types defined in your BibTeX file. 
For example, attempting to call Phdthesis.all will result in an error for the BibTeX file shown above, as that BibTeX file contains no @phdthesis entries.","title":"How can I access a publication?"},{"location":"doc/articles/bibtex/#how-can-i-access-and-change-the-properties-of-a-particular-publication","text":"Properties are accessed via the dot notation: // Get a random book var b = Book.all.random(); // Get the title of the random book var t = b.title; // Get the Amazon rating of the random book var a = b.amazonRating; Note that the empty string is returned when accessing a property that does not exist (such as the amazonRating property in the example above). Properties can be changed using an assignment statement: // Get a random book var b = Book.all.random(); // Set the title of the random book b.title = \"On the Criteria To Be Used in Decomposing Systems into Modules\" Note that the current version of the BibTeX driver does not support saving changes to disk. Any changes made to properties are volatile (and persist only for the duration of the Epsilon program's execution).","title":"How can I access and change the properties of a particular publication?"},{"location":"doc/articles/bibtex/#adding-a-bibtex-file-to-your-launch-configuration","text":"To add a BibTeX file to your Epsilon launch configuration, you need to select \"Show all model types\" and then choose \"BibTeX model\" from the list of available model types. Then you can configure the details of your BibTeX (name, file etc.) in the screen that pops up.","title":"Adding a BibTeX file to your launch configuration"},{"location":"doc/articles/bibtex/#unsupported-features","text":"The current version of the BibTeX driver for Epsilon is not yet a complete implementation. In particular, the following features are not yet supported: Storing changes to BibTeX models to disk. Deleting entries from a BibTeX file. 
Please file an enhancement request on the Epsilon bugzilla if you require -- or can provide a patch for -- these features.","title":"Unsupported features"},{"location":"doc/articles/call-for-user-stories/","text":"Call for User Stories \u00b6 Over the last few years we've been delighted to see the Epsilon community grow and expand. We'd like to take the opportunity to thank you all for your feedback and contributions, and if it's not too much of a hassle, we'd like to ask for your help one more time. Epsilon is developed and maintained by members of staff at the University of York (UK) and University of Cadiz (Spain). In the context of the UK Research Excellence Framework 2014, we (the York people ) need to prepare a portfolio that demonstrates the impact of our research (for some definition of impact ). In this direction it'd be really appreciated if you spare a few minutes to write a few sentences on what you're using Epsilon for in your company/research group and why it's cool, and share them with us at epsilon.devs@gmail.com . All responses, no matter how short or seemingly trivial , would be very helpful for us, and will be rewarded accordingly next time we meet. Of course, no user story will be made publicly available without your explicit consent. In case you'd like an example, we recently received the following statement from Jendrik Johannes, a founder of DevBoost . Our thanks to Jendrik for his statement and for kindly allowing us to use it here. \"We used Eugenia in a project where we developed a graphical editor for a client as an extension for their existing tool for modeling wind farms. The client already used a model as the basis for the tool and thus it was a matter of minutes to generate a prototype of the editor with Eugenia. This gave us the possibility to discuss the clients requirements directly on a working prototype which later on also served as the basis for the actual implementation. 
Using Eugenia, we implemented the prototype within a week - a task that usually takes a month.\" [Jendrik Johannes, founder of DevBoost ]","title":"Call for User Stories"},{"location":"doc/articles/call-for-user-stories/#call-for-user-stories","text":"Over the last few years we've been delighted to see the Epsilon community grow and expand. We'd like to take the opportunity to thank you all for your feedback and contributions, and if it's not too much of a hassle, we'd like to ask for your help one more time. Epsilon is developed and maintained by members of staff at the University of York (UK) and University of Cadiz (Spain). In the context of the UK Research Excellence Framework 2014, we (the York people ) need to prepare a portfolio that demonstrates the impact of our research (for some definition of impact ). In this direction it'd be really appreciated if you spare a few minutes to write a few sentences on what you're using Epsilon for in your company/research group and why it's cool, and share them with us at epsilon.devs@gmail.com . All responses, no matter how short or seemingly trivial , would be very helpful for us, and will be rewarded accordingly next time we meet. Of course, no user story will be made publicly available without your explicit consent. In case you'd like an example, we recently received the following statement from Jendrik Johannes, a founder of DevBoost . Our thanks to Jendrik for his statement and for kindly allowing us to use it here. \"We used Eugenia in a project where we developed a graphical editor for a client as an extension for their existing tool for modeling wind farms. The client already used a model as the basis for the tool and thus it was a matter of minutes to generate a prototype of the editor with Eugenia. This gave us the possibility to discuss the clients requirements directly on a working prototype which later on also served as the basis for the actual implementation. 
Using Eugenia, we implemented the prototype within a week - a task that usually takes a month.\" [Jendrik Johannes, founder of DevBoost ]","title":"Call for User Stories"},{"location":"doc/articles/call-java-from-epsilon/","text":"Call Java from Epsilon \u00b6 Model management languages such as those provided by Epsilon are by design not general purpose languages. Therefore, there are features that such languages do not support inherently (mainly because such features are typically not needed in the context of model management). However, there are cases where a feature that is not built-in may be necessary for a specific task. To address such issues and enable developers to implement non-standard functionality, Epsilon supports the Tool concept. A tool is a normal Java class that (optionally) conforms to a specific interface ( org.eclipse.epsilon.eol.tools.ITool ) and which can be instantiated and accessed from the context of an EOL (or any other EOL-based language such as EML, ETL, EVL etc) program. After instantiation, EOL can be used to invoke methods and access properties of the object. In this article we show how to create and declare a new tool ( org.eclipse.epsilon.examples.tools.SampleTool ), and then use it from an EOL program. Create the tool \u00b6 The first step is to create a new plugin project named org.eclipse.epsilon.examples.tools . Then create a class named SampleTool with the following content. package org.eclipse.epsilon.examples.tools ; public class SampleTool { protected String name ; public void setName ( String name ) { this . 
name = name ; } public String getName () { return name ; } public String sayHello () { return \"Hello \" + name ; } } Declare the tool \u00b6 Add org.eclipse.epsilon.common.dt to the dependencies of your plugin Create an extension to the org.eclipse.epsilon.common.dt.tool extension point Set the class to org.eclipse.epsilon.examples.tools.SampleTool Set the name to SampleTool Add org.eclipse.epsilon.examples.tools to the exported packages list in the Runtime tab Invoke the tool \u00b6 To invoke the tool you have two options: You can either run a new Eclipse instance, or export the plugin and place it in the dropins folder of your installation. Then you can invoke the tool using the following EOL program. var sampleTool = new Native(\"org.eclipse.epsilon.examples.tools.SampleTool\"); sampleTool.name = \"George\"; sampleTool.sayHello().println(); // Prints Hello George Standalone setup \u00b6 To use tools contributed via extensions in a standalone Java setup within Eclipse you'll need to add the following line of code. context . getNativeTypeDelegates (). add ( new ExtensionPointToolNativeTypeDelegate ()); You can get the source code of this example here .","title":"Call Java from Epsilon"},{"location":"doc/articles/call-java-from-epsilon/#call-java-from-epsilon","text":"Model management languages such as those provided by Epsilon are by design not general purpose languages. Therefore, there are features that such languages do not support inherently (mainly because such features are typically not needed in the context of model management). However, there are cases where a feature that is not built-in may be necessary for a specific task. To address such issues and enable developers to implement non-standard functionality, Epsilon supports the Tool concept. 
A tool is a normal Java class that (optionally) conforms to a specific interface ( org.eclipse.epsilon.eol.tools.ITool ) and which can be instantiated and accessed from the context of an EOL (or any other EOL-based language such as EML, ETL, EVL etc) program. After instantiation, EOL can be used to invoke methods and access properties of the object. In this article we show how to create and declare a new tool ( org.eclipse.epsilon.examples.tools.SampleTool ), and then use it from an EOL program.","title":"Call Java from Epsilon"},{"location":"doc/articles/call-java-from-epsilon/#create-the-tool","text":"The first step is to create a new plugin project named org.eclipse.epsilon.examples.tools . Then create a class named SampleTool with the following content. package org.eclipse.epsilon.examples.tools ; public class SampleTool { protected String name ; public void setName ( String name ) { this . name = name ; } public String getName () { return name ; } public String sayHello () { return \"Hello \" + name ; } }","title":"Create the tool"},{"location":"doc/articles/call-java-from-epsilon/#declare-the-tool","text":"Add org.eclipse.epsilon.common.dt to the dependencies of your plugin Create an extension to the org.eclipse.epsilon.common.dt.tool extension point Set the class to org.eclipse.epsilon.examples.tools.SampleTool Set the name to SampleTool Add org.eclipse.epsilon.examples.tools to the exported packages list in the Runtime tab","title":"Declare the tool"},{"location":"doc/articles/call-java-from-epsilon/#invoke-the-tool","text":"To invoke the tool you have two options: You can either run a new Eclipse instance, or export the plugin and place it in the dropins folder of your installation. Then you can invoke the tool using the following EOL program. 
var sampleTool = new Native(\"org.eclipse.epsilon.examples.tools.SampleTool\"); sampleTool.name = \"George\"; sampleTool.sayHello().println(); // Prints Hello George","title":"Invoke the tool"},{"location":"doc/articles/call-java-from-epsilon/#standalone-setup","text":"To use tools contributed via extensions in a standalone Java setup within Eclipse you'll need to add the following line of code. context . getNativeTypeDelegates (). add ( new ExtensionPointToolNativeTypeDelegate ()); You can get the source code of this example here .","title":"Standalone setup"},{"location":"doc/articles/code-generation-tutorial-egl/","text":"Code Generation Tutorial with EGL \u00b6 EGL is a template-based language that can be used to generate code (or any other kind of text) from different types of models supported by Epsilon (e.g. EMF, UML, XML). This example demonstrates using EGL to generate HTML code from the XML document below. <library> <book title= \"EMF Eclipse Modeling Framework\" pages= \"744\" public= \"true\" > <id> EMFBook </id> <author> Dave Steinberg </author> <author> Frank Budinsky </author> <author> Marcelo Paternostro </author> <author> Ed Merks </author> <published> 2009 </published> </book> <book title= \"Eclipse Modeling Project: A Domain-Specific Language (DSL) Toolkit\" pages= \"736\" public= \"true\" > <id> EMPBook </id> <author> Richard Gronback </author> <published> 2009 </published> </book> <book title= \"Official Eclipse 3.0 FAQs\" pages= \"432\" public= \"false\" > <id> Eclipse3FAQs </id> <author> John Arthorne </author> <author> Chris Laffra </author> <published> 2004 </published> </book> </library> More specifically, we will generate one HTML file for each <book> element that has a public attribute set to true . Below is an EGL template ( book2page.egl ) that can generate an HTML file from a single <book> element. For more details on using EGL's expression language to navigate and query XML documents, please refer to this article . 
<h1>Book [%=index%]: [%=book.a_title%]</h1> <h2>Authors</h2> <ul> [%for (author in book.c_author) { %] <li>[%=author.text%] [%}%] </ul> The template above can generate one HTML file from one <book> element. To run this template against all <book> elements anywhere in the XML document, and generate appropriately-named HTML files, we need to use an EGX co-ordination program such as the one illustrated below ( main.egx ). The Book2Page rule of the EGX program will transform every <book> element ( t_book ) that satisfies the declared guard (has a public attribute set to true ), into a target file, using the specified template ( book2page.egl ). In addition, the EGX program specifies a Library2Page rule, that generates an HTML (index) file for each <library> element in the document. rule Book2Page transform book : t_book { // We only want to generate pages // for books that have their public // attribute set to true guard : book.b_public parameters { // These parameters will be made available // to the invoked template as variables var params : new Map; params.put(\"index\", t_book.all.indexOf(book) + 1); return params; } // The EGL template to be invoked template : \"book2page.egl\" // Output file target : \"gen/\" + book.e_id.text + \".html\" } rule Library2Page transform library : t_library { template : \"library2page.egl\" target : \"gen/index.html\" } For completeness, the source code of the library2page.egl template appears below. <h1>Books</h1> <ul> [%for (book in library.c_book.select(b|b.b_public)) { %] <li><a href=\"[%=book.e_id.text%].html\">[%=book.a_title%]</a> [%}%] </ul> Running the Code Generator from Eclipse \u00b6 Screenshots of the Eclipse run configuration appear below. The complete source for this example is available here . Running the Code Generator from Java \u00b6 The following snippet demonstrates using Epsilon's Java API to parse the XML document and execute the EGX program. 
The complete source for this example is available here (please read lib/readme.txt for instructions on how to obtain the missing Epsilon JAR). import java.io.File ; import org.eclipse.epsilon.egl.EglFileGeneratingTemplateFactory ; import org.eclipse.epsilon.egl.EgxModule ; import org.eclipse.epsilon.emc.plainxml.PlainXmlModel ; public class App { public static void main ( String [] args ) throws Exception { // Parse main.egx EgxModule module = new EgxModule ( new EglFileGeneratingTemplateFactory ()); module . parse ( new File ( \"main.egx\" ). getAbsoluteFile ()); if (! module . getParseProblems (). isEmpty ()) { System . out . println ( \"Syntax errors found. Exiting.\" ); return ; } // Load the XML document PlainXmlModel model = new PlainXmlModel (); model . setFile ( new File ( \"library.xml\" )); model . setName ( \"L\" ); model . load (); // Make the document visible to the EGX program module . getContext (). getModelRepository (). addModel ( model ); // ... and execute module . execute (); } }","title":"Code Generation Tutorial with EGL"},{"location":"doc/articles/code-generation-tutorial-egl/#code-generation-tutorial-with-egl","text":"EGL is a template-based language that can be used to generate code (or any other kind of text) from different types of models supported by Epsilon (e.g. EMF, UML, XML). This example demonstrates using EGL to generate HTML code from the XML document below. 
<library> <book title= \"EMF Eclipse Modeling Framework\" pages= \"744\" public= \"true\" > <id> EMFBook </id> <author> Dave Steinberg </author> <author> Frank Budinsky </author> <author> Marcelo Paternostro </author> <author> Ed Merks </author> <published> 2009 </published> </book> <book title= \"Eclipse Modeling Project: A Domain-Specific Language (DSL) Toolkit\" pages= \"736\" public= \"true\" > <id> EMPBook </id> <author> Richard Gronback </author> <published> 2009 </published> </book> <book title= \"Official Eclipse 3.0 FAQs\" pages= \"432\" public= \"false\" > <id> Eclipse3FAQs </id> <author> John Arthorne </author> <author> Chris Laffra </author> <published> 2004 </published> </book> </library> More specifically, we will generate one HTML file for each <book> element that has a public attribute set to true . Below is an EGL template ( book2page.egl ) that can generate an HTML file from a single <book> element. For more details on using EGL's expression language to navigate and query XML documents, please refer to this article . <h1>Book [%=index%]: [%=book.a_title%]</h1> <h2>Authors</h2> <ul> [%for (author in book.c_author) { %] <li>[%=author.text%] [%}%] </ul> The template above can generate one HTML file from one <book> element. To run this template against all <book> elements anywhere in the XML document, and generate appropriately-named HTML files, we need to use an EGX co-ordination program such as the one illustrated below ( main.egx ). The Book2Page rule of the EGX program will transform every <book> element ( t_book ) that satisfies the declared guard (has a public attribute set to true ), into a target file, using the specified template ( book2page.egl ). In addition, the EGX program specifies a Library2Page rule, that generates an HTML (index) file for each <library> element in the document. 
rule Book2Page transform book : t_book { // We only want to generate pages // for books that have their public // attribute set to true guard : book.b_public parameters { // These parameters will be made available // to the invoked template as variables var params : new Map; params.put(\"index\", t_book.all.indexOf(book) + 1); return params; } // The EGL template to be invoked template : \"book2page.egl\" // Output file target : \"gen/\" + book.e_id.text + \".html\" } rule Library2Page transform library : t_library { template : \"library2page.egl\" target : \"gen/index.html\" } For completeness, the source code of the library2page.egl template appears below. <h1>Books</h1> <ul> [%for (book in library.c_book.select(b|b.b_public)) { %] <li><a href=\"[%=book.e_id.text%].html\">[%=book.a_title%]</a> [%}%] </ul>","title":"Code Generation Tutorial with EGL"},{"location":"doc/articles/code-generation-tutorial-egl/#running-the-code-generator-from-eclipse","text":"Screenshots of the Eclipse run configuration appear below. The complete source for this example is available here .","title":"Running the Code Generator from Eclipse"},{"location":"doc/articles/code-generation-tutorial-egl/#running-the-code-generator-from-java","text":"The following snippet demonstrates using Epsilon's Java API to parse the XML document and execute the EGX program. The complete source for this example is available here (please read lib/readme.txt for instructions on how to obtain the missing Epsilon JAR). import java.io.File ; import org.eclipse.epsilon.egl.EglFileGeneratingTemplateFactory ; import org.eclipse.epsilon.egl.EgxModule ; import org.eclipse.epsilon.emc.plainxml.PlainXmlModel ; public class App { public static void main ( String [] args ) throws Exception { // Parse main.egx EgxModule module = new EgxModule ( new EglFileGeneratingTemplateFactory ()); module . parse ( new File ( \"main.egx\" ). getAbsoluteFile ()); if (! module . getParseProblems (). isEmpty ()) { System . out . 
println ( \"Syntax errors found. Exiting.\" ); return ; } // Load the XML document PlainXmlModel model = new PlainXmlModel (); model . setFile ( new File ( \"library.xml\" )); model . setName ( \"L\" ); model . load (); // Make the document visible to the EGX program module . getContext (). getModelRepository (). addModel ( model ); // ... and execute module . execute (); } }","title":"Running the Code Generator from Java"},{"location":"doc/articles/csv-emc/","text":"Scripting CSV documents using Epsilon \u00b6 In this article we demonstrate how you can create, query and modify CSV documents in Epsilon programs using the CSV driver. The examples in this article demonstrate using EOL and ETL to script CSV documents. However, it's worth stressing that CSV documents are supported throughout Epsilon. Therefore, you can use Epsilon to (cross-)validate, transform (to other models - XML or EMF-based -, or totext), compare and merge your CSV documents. Note: This article is consistent with Epsilon versions 1.5+. The CSV Model Configuration Dialog \u00b6 To add a CSV document to your Epsilon launch configuration you first need to click on \"Show all model types\" in order to display the CSV Model type. From there you can select \"CSV Model\" from the list of available model types. Then you can configure the details of your document (name, file etc.) in the screen that pops up. You need to provide a name for the model and select the CSV file using the \"Browse Workspace...\" button. The CSV section allows you to define specific behaviour for the CSV model. The Field Separator allows you to select a different separator than comma.... yes, they are called comma-separated files, but sometimes a colon, or a semi-colon, or other char is used as a field separator. Now you can tell the model loader which one too use. By default it is a comma. The Quote Character allows you to select the character used for quotes. 
Quotes are used when a column value contains the field separator to avoid erroneous input. The Known Headers tells the loader that the first row of your file contains headers. Headers can later be used to access fields of a row. The Varargs Header tells the loader that the last column/field of the file can span multiple columns. This is not the \"standard\" (did you know that RFC 4180 describes CSV file standards?), but in some cases it can be useful. Finally, the Id Field allows you to optionally select one of the fields as an id for your model elements. When using Known Headers , this should be the name of one of the fields. If not, it should be the index (integer) of the field. Next we show how the different options can be used when working with CSV models. Querying a CSV document \u00b6 All elements in the CSV model are of type Row , that is, all model access has to be done using that type. Header-less CSV Model \u00b6 Consider the following NoHeaders.csv input. 604-78-8459,Ricoriki,Dwyr,rdwyr0@parallels.com,Male,VP Quality Control,2558058636921002,Horror 272-41-1349,Norry,Halpin,nhalpin1@slashdot.org,Female,Legal Assistant,,Drama 844-07-0023,Matteo,Macer,mmacer2@sogou.com,Male,Tax Accountant,3542981651057648,Horror 429-41-4964,Kattie,Fysh,kfysh3@angelfire.com,Female,Senior Financial Analyst,,Comedy 378-90-9530,Link,Proffitt,lproffitt4@cloudflare.com,Male,Paralegal,,Drama 811-26-0387,Rafferty,Sobieski,rsobieski5@usatoday.com,Male,Physical Therapy Assistant,5602242765074843,Horror 386-53-1139,Ernestine,Kringe,ekringe6@gov.uk,Female,Software Consultant,3531096662484096,Drama 850-05-5333,Flossy,Mobberley,fmobberley7@msn.com,Female,Chief Design Engineer,3558038696922012,Romance 605-52-9809,Tull,Ingerith,tingerith8@surveymonkey.com,Male,VP Quality Control,,Drama 580-79-7291,Derry,Laurisch,dlaurisch9@taobao.com,Male,Software Test Engineer I,,War 676-89-8860,Cosetta,Vlasov,cvlasova@livejournal.com,Female,Nurse Practicioner,,Thriller 
748-10-2370,Lissa,Stanger,lstangerb@tmall.com,Female,Analyst Programmer,,Thriller 164-18-3409,Giffie,Boards,gboardsc@gmpg.org,Male,Graphic Designer,3575314620284632,Comedy 212-06-7778,Rabbi,Varran,rvarrand@jugem.jp,Male,GIS Technical Architect,3551249058791476,Horror 628-02-3617,Olvan,Alabone,oalabonee@archive.org,Male,Help Desk Technician,,Thriller 318-48-3006,Constantino,Eyckelbeck,ceyckelbeckf@histats.com,Male,Recruiter,564182300132483644,War 122-74-6759,Nickolas,Collard,ncollardg@dot.gov,Male,Web Designer IV,,Drama 309-57-3090,Chere,Hurry,churryh@huffingtonpost.com,Female,Tax Accountant,,Mystery 833-32-9040,Mattie,Hamon,mhamoni@auda.org.au,Male,Structural Engineer,,Drama 101-82-2564,Hew,Goble,hgoblej@ocn.ne.jp,Male,VP Accounting,,Comedy Since there are no headers, we need to access the information via the general field attribute and index (0 based): // Get all Rows elements var people = Row.all; // Get a random person var p = people.random(); // Check the gender of p (field 4) // Prints 'Male' or 'Female' p.field.at(4).println(); // Get the emails (field 3) of people that like Horror movies (field 7) so we can let them know a new movie is out. // Prints 'Sequence {rdwyr0@parallels.com, mmacer2@sogou.com, rsobieski5@usatoday.com, rvarrand@jugem.jp}' people.select(p | p.field.at(7) == 'Horror').collect(p | p.field.at(3)).println(); Header-full CSV Model \u00b6 Consider that we add headers to the previous CSV model ( Headers.csv ) id,first_name,last_name,email,gender,job,credit_card,movies 604-78-8459,Ricoriki,Dwyr,rdwyr0@parallels.com,Male,VP Quality Control,2558058636921002,Horror 272-41-1349,Norry,Halpin,nhalpin1@slashdot.org,Female,Legal Assistant,,Drama ... 
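Conceptually, the Known Headers option amounts to mapping column names to 0-based indices via the first row, so that a name like email resolves to a column position. The following is a rough plain-Java sketch of that idea (an illustration only, not Epsilon's actual implementation; the class and method names are hypothetical):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical illustration of header resolution: map each header name
// from the first row to its 0-based column index, then look fields up by name.
public class HeaderIndex {

    public static Map<String, Integer> index(String headerLine) {
        Map<String, Integer> byName = new HashMap<>();
        String[] names = headerLine.split(",");
        for (int i = 0; i < names.length; i++) {
            byName.put(names[i], i);
        }
        return byName;
    }

    public static void main(String[] args) {
        Map<String, Integer> idx =
            index("id,first_name,last_name,email,gender,job,credit_card,movies");
        String[] row = ("604-78-8459,Ricoriki,Dwyr,rdwyr0@parallels.com,Male,"
            + "VP Quality Control,2558058636921002,Horror").split(",");
        // Accessing p.email is then conceptually row[idx.get("email")]
        System.out.println(row[idx.get("email")]);
    }
}
```

With such a mapping in place, a name-based access like p.email and an index-based access like p.field.at(3) refer to the same column.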
We can query the same information as before, but this time we can use the field names defined by the header: // Get all Rows elements var people = Row.all; // Get a random person var p = people.random(); // Check the gender of p // Prints 'Male' or 'Female' p.gender.println(); // Get the emails of people that like Horror movies so we can let them know a new movie is out. // Prints 'Sequence {rdwyr0@parallels.com, mmacer2@sogou.com, rsobieski5@usatoday.com, rvarrand@jugem.jp}' people.select(p | p.movies == 'Horror').collect(p | p.email).println(); // Get all males and females that like Thrillers and set up dates // Prints // Olvan and Cosetta is a match made in heaven! // Olvan and Lissa is a match made in heaven! var mt = people.select(p | p.movies == 'Thriller' and p.gender == 'Male'); var ft = people.select(p | p.movies == 'Thriller' and p.gender == 'Female'); for (m in mt) { for (f in ft) { (m.first_name + \" and \" + f.first_name + \" is a match made in heaven!\").println(); } } Header-full with Varargs CSV Model \u00b6 Last, we have a CSV model with some vararg information. It is the same as before, but in this case people are allowed to have multiple movies. We have also added a quote field that shows the quote character in action. id,first_name,last_name,email,gender,job,credit_card,quote,movies 604-78-8459,Ricoriki,Dwyr,rdwyr0@parallels.com,Male,VP Quality Control,,Duis at velit eu est congue elementum.,Horror 272-41-1349,Norry,Halpin,nhalpin1@slashdot.org,Female,Legal Assistant,,Aenean sit amet justo. Morbi ut odio.,Drama,Film-Noir,Thriller 844-07-0023,Matteo,Macer,mmacer2@sogou.com,Male,Tax Accountant,3542981651057648,In hac habitasse platea dictumst.,Horror,Mystery,Thriller 429-41-4964,Kattie,Fysh,kfysh3@angelfire.com,Female,Senior Financial Analyst,,Suspendisse potenti. In eleifend quam a odio.,Comedy 378-90-9530,Link,Proffitt,lproffitt4@cloudflare.com,Male,Paralegal,,Suspendisse accumsan tortor quis turpis. 
Sed ante.,Drama 811-26-0387,Rafferty,Sobieski,rsobieski5@usatoday.com,Male,Physical Therapy Assistant,5602242765074843,\"Nulla neque libero, convallis eget, eleifend luctus, ultricies eu, nibh. Quisque id justo sit amet sapien dignissim vestibulum.\",Horror 386-53-1139,Ernestine,Kringe,ekringe6@gov.uk,Female,Software Consultant,3531096662484096,Nulla justo. Aliquam quis turpis eget elit sodales scelerisque.,Drama 850-05-5333,Flossy,Mobberley,fmobberley7@msn.com,Female,Chief Design Engineer,3558038696922012,Nulla tempus.,Comedy,Romance 605-52-9809,Tull,Ingerith,tingerith8@surveymonkey.com,Male,VP Quality Control,,\"Morbi vestibulum, velit id pretium iaculis, diam erat fermentum justo, nec condimentum neque sapien placerat ante. Nulla justo.\",Drama 580-79-7291,Derry,Laurisch,dlaurisch9@taobao.com,Male,Software Test Engineer I,,Praesent blandit lacinia erat. Vestibulum sed magna at nunc commodo placerat.,Drama,War 676-89-8860,Cosetta,Vlasov,cvlasova@livejournal.com,Female,Nurse Practicioner,,In hac habitasse platea dictumst.,Crime,Film-Noir,Thriller 748-10-2370,Lissa,Stanger,lstangerb@tmall.com,Female,Analyst Programmer,,Pellentesque at nulla.,Action,Adventure,Thriller 164-18-3409,Giffie,Boards,gboardsc@gmpg.org,Male,Graphic Designer,3575314620284632,\"Morbi vel lectus in quam fringilla rhoncus. Mauris enim leo, rhoncus sed, vestibulum sit amet, cursus id, turpis.\",Comedy 212-06-7778,Rabbi,Varran,rvarrand@jugem.jp,Male,GIS Technical Architect,3551249058791476,Suspendisse potenti.,Horror 628-02-3617,Olvan,Alabone,oalabonee@archive.org,Male,Help Desk Technician,,Pellentesque viverra pede ac diam. Cras pellentesque volutpat dui.,Action,Adventure,Sci-Fi,Thriller 318-48-3006,Constantino,Eyckelbeck,ceyckelbeckf@histats.com,Male,Recruiter,564182300132483644,In hac habitasse platea dictumst. Maecenas ut massa quis augue luctus tincidunt.,War 122-74-6759,Nickolas,Collard,ncollardg@dot.gov,Male,Web Designer IV,,Praesent blandit lacinia erat. 
Vestibulum sed magna at nunc commodo placerat.,Drama 309-57-3090,Chere,Hurry,churryh@huffingtonpost.com,Female,Tax Accountant,,\"In tempor, turpis nec euismod scelerisque, quam turpis adipiscing lorem, vitae mattis nibh ligula nec sem.\",Drama,Fantasy,Mystery 833-32-9040,Mattie,Hamon,mhamoni@auda.org.au,Male,Structural Engineer,,Duis at velit eu est congue elementum. In hac habitasse platea dictumst.,Drama 101-82-2564,Hew,Goble,hgoblej@ocn.ne.jp,Male,VP Accounting,,Etiam pretium iaculis justo.,Comedy // Get all Rows elements var people = Row.all; // Random thoughts for (p in people) { if (p.gender == \"Female\" and p.movies.includes(\"Thriller\")) { (p.first_name + \" screams '\" + p.quote + \"' when watching a Thriller. She is afraid of being a \" + p.job + \".\").println(); } else if (p.gender == \"Male\" and p.movies.includes(\"Drama\")) { (p.first_name + \" sighs, but blames '\" + p.quote + \"' for the tear in his eye. Being a \" + p.job + \" will never be the same.\").println(); } } // Output //Norry screams 'Aenean sit amet justo. Morbi ut odio.' when watching a Thriller. She is afraid of being a Legal Assistant. //Link sighs, but blames 'Suspendisse accumsan tortor quis turpis. Sed ante.' for the tear in his eye. Being a Paralegal will never be the same. //Tull sighs, but blames 'Morbi vestibulum, velit id pretium iaculis, diam erat fermentum justo, nec condimentum neque sapien placerat ante. Nulla justo.' for the tear in his eye. Being a VP Quality Control will never be the same. //Derry sighs, but blames 'Praesent blandit lacinia erat. Vestibulum sed magna at nunc commodo placerat.' for the tear in his eye. Being a Software Test Engineer I will never be the same. //Cosetta screams 'In hac habitasse platea dictumst.' when watching a Thriller. She is afraid of being a Nurse Practicioner. //Lissa screams 'Pellentesque at nulla.' when watching a Thriller. She is afraid of being a Analyst Programmer. //Nickolas sighs, but blames 'Praesent blandit lacinia erat. 
Vestibulum sed magna at nunc commodo placerat.' for the tear in his eye. Being a Web Designer IV will never be the same. //Mattie sighs, but blames 'Duis at velit eu est congue elementum. In hac habitasse platea dictumst.' for the tear in his eye. Being a Structural Engineer will never be the same. Querying/modifying CSV documents in EOL \u00b6 The CSV driver supports direct query and modification of attribute values: // Get all Rows elements var people = Row.all; // Get a random person var p = people.random(); p.first_name.println(); // Change the first name p.first_name = \"Maria Antonieta\"; p.first_name.println(); How do I create an element? \u00b6 You can use the new operator for this, and remember that all CSV elements are rows! New Rows will be added at the end of the file when persisting the changes. // Check how many entries are in the model // Prints '20' Row.all.size().println(); // Create a new Row element var b = new Row; // Check again // Prints '21' Row.all.size().println(); Loading a CSV document in your ANT buildfile \u00b6 The following ANT build file demonstrates how you can use ANT to load/store and process CSV documents with Epsilon. <project default= \"main\" > <target name= \"main\" > <epsilon.csv.loadModel name= \"people\" file= \"people.csv\" read= \"true\" store= \"false\" knownHeaders= \"true\" /> <epsilon.eol src= \"my.eol\" > <model ref= \"people\" /> </epsilon.eol> </target> </project> Loading a CSV document through Java code \u00b6 The following excerpt demonstrates using CSV models through Epsilon's Java API. EolModule module = new EolModule (); module . parse ( new File ( \"...\" )); CsvModel model = new CsvModel (); model . setName ( \"M\" ); model . setFile ( new File ( \"...\" )); char fieldSeparator = ',' ; model . setFieldSeparator ( fieldSeparator ); model . setKnownHeaders ( false ); model . setVarargsHeaders ( false ); module . getContext (). getModelRepository (). addModel ( model ); module . getContext (). 
setModule ( module ); module . execute ();","title":"Running the Code Generator from Java"},{"location":"doc/articles/csv-emc/#scripting-csv-documents-using-epsilon","text":"In this article we demonstrate how you can create, query and modify CSV documents in Epsilon programs using the CSV driver. The examples in this article demonstrate using EOL and ETL to script CSV documents. However, it's worth stressing that CSV documents are supported throughout Epsilon. Therefore, you can use Epsilon to (cross-)validate, transform (to other models - XML or EMF-based - or to text), compare and merge your CSV documents. Note: This article is consistent with Epsilon versions 1.5+.","title":"Scripting CSV documents using Epsilon"},{"location":"doc/articles/csv-emc/#the-csv-model-configuration-dialog","text":"To add a CSV document to your Epsilon launch configuration you first need to click on \"Show all model types\" in order to display the CSV Model type. From there you can select \"CSV Model\" from the list of available model types. Then you can configure the details of your document (name, file etc.) in the screen that pops up. You need to provide a name for the model and select the CSV file using the \"Browse Workspace...\" button. The CSV section allows you to define specific behaviour for the CSV model. The Field Separator allows you to select a separator other than the comma... yes, they are called comma-separated files, but sometimes a colon, a semi-colon, or another character is used as a field separator. Now you can tell the model loader which one to use. By default it is a comma. The Quote Character allows you to select the character used for quotes. Quotes are used when a column value contains the field separator to avoid erroneous input. The Known Headers tells the loader that the first row of your file contains headers. Headers can later be used to access fields of a row. 
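To see why the Quote Character matters, consider how a record is split into fields: a naive split on the separator breaks any field that contains it. The sketch below (plain Java, independent of the CSV driver's real implementation) splits one record in the spirit of RFC 4180, treating separators inside quotes as data:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative quote-aware field splitting: a separator only ends a field
// when we are outside a quoted region. Not Epsilon's actual parser.
public class QuotedSplit {

    public static List<String> split(String record, char sep, char quote) {
        List<String> fields = new ArrayList<>();
        StringBuilder current = new StringBuilder();
        boolean inQuotes = false;
        for (char c : record.toCharArray()) {
            if (c == quote) {
                inQuotes = !inQuotes;              // toggle quoted state
            } else if (c == sep && !inQuotes) {
                fields.add(current.toString());    // separator outside quotes ends a field
                current.setLength(0);
            } else {
                current.append(c);
            }
        }
        fields.add(current.toString());            // last field
        return fields;
    }

    public static void main(String[] args) {
        // The comma inside the quoted value stays part of the field
        System.out.println(split("a,\"b,c\",d", ',', '"'));
    }
}
```

A naive record.split(",") would yield four fields here; the quote-aware version correctly yields three.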
The Varargs Header tells the loader that the last column/field of the file can span multiple columns. This is not the \"standard\" (did you know that RFC 4180 describes CSV file standards?), but in some cases it can be useful. Finally, the Id Field allows you to optionally select one of the fields as an id for your model elements. When using Known Headers , this should be the name of one of the fields. If not, it should be the index (integer) of the field. Next we show how the different options can be used when working with CSV models.","title":"The CSV Model Configuration Dialog"},{"location":"doc/articles/csv-emc/#querying-a-csv-document","text":"All elements in the CSV model are of type Row , that is, all model access has to be done using that type.","title":"Querying a CSV document"},{"location":"doc/articles/csv-emc/#header-less-csv-model","text":"Consider the following NoHeaders.csv input. 604-78-8459,Ricoriki,Dwyr,rdwyr0@parallels.com,Male,VP Quality Control,2558058636921002,Horror 272-41-1349,Norry,Halpin,nhalpin1@slashdot.org,Female,Legal Assistant,,Drama 844-07-0023,Matteo,Macer,mmacer2@sogou.com,Male,Tax Accountant,3542981651057648,Horror 429-41-4964,Kattie,Fysh,kfysh3@angelfire.com,Female,Senior Financial Analyst,,Comedy 378-90-9530,Link,Proffitt,lproffitt4@cloudflare.com,Male,Paralegal,,Drama 811-26-0387,Rafferty,Sobieski,rsobieski5@usatoday.com,Male,Physical Therapy Assistant,5602242765074843,Horror 386-53-1139,Ernestine,Kringe,ekringe6@gov.uk,Female,Software Consultant,3531096662484096,Drama 850-05-5333,Flossy,Mobberley,fmobberley7@msn.com,Female,Chief Design Engineer,3558038696922012,Romance 605-52-9809,Tull,Ingerith,tingerith8@surveymonkey.com,Male,VP Quality Control,,Drama 580-79-7291,Derry,Laurisch,dlaurisch9@taobao.com,Male,Software Test Engineer I,,War 676-89-8860,Cosetta,Vlasov,cvlasova@livejournal.com,Female,Nurse Practicioner,,Thriller 748-10-2370,Lissa,Stanger,lstangerb@tmall.com,Female,Analyst Programmer,,Thriller 
164-18-3409,Giffie,Boards,gboardsc@gmpg.org,Male,Graphic Designer,3575314620284632,Comedy 212-06-7778,Rabbi,Varran,rvarrand@jugem.jp,Male,GIS Technical Architect,3551249058791476,Horror 628-02-3617,Olvan,Alabone,oalabonee@archive.org,Male,Help Desk Technician,,Thriller 318-48-3006,Constantino,Eyckelbeck,ceyckelbeckf@histats.com,Male,Recruiter,564182300132483644,War 122-74-6759,Nickolas,Collard,ncollardg@dot.gov,Male,Web Designer IV,,Drama 309-57-3090,Chere,Hurry,churryh@huffingtonpost.com,Female,Tax Accountant,,Mystery 833-32-9040,Mattie,Hamon,mhamoni@auda.org.au,Male,Structural Engineer,,Drama 101-82-2564,Hew,Goble,hgoblej@ocn.ne.jp,Male,VP Accounting,,Comedy Since there are no headers, we need to access the information via the general field attribute and index (0 based): // Get all Rows elements var people = Row.all; // Get a random person var p = people.random(); // Check the gender of p (field 4) // Prints 'Male' or 'Female' p.field.at(4).println(); // Get the emails (field 3) of people that like Horror movies (field 7) so we can let them know a new movie is out. // Prints 'Sequence {rdwyr0@parallels.com, mmacer2@sogou.com, rsobieski5@usatoday.com, rvarrand@jugem.jp}' people.select(p | p.field.at(7) == 'Horror').collect(p | p.field.at(3)).println();","title":"Header-less CSV Model"},{"location":"doc/articles/csv-emc/#header-full-csv-model","text":"Consider that we add headers to the previous CSV model ( Headers.csv ) id,first_name,last_name,email,gender,job,credit_card,movies 604-78-8459,Ricoriki,Dwyr,rdwyr0@parallels.com,Male,VP Quality Control,2558058636921002,Horror 272-41-1349,Norry,Halpin,nhalpin1@slashdot.org,Female,Legal Assistant,,Drama ... 
We can query the same information as before, but this time we can use the field names defined by the header: // Get all Rows elements var people = Row.all; // Get a random person var p = people.random(); // Check the gender of p // Prints 'Male' or 'Female' p.gender.println(); // Get the emails of people that like Horror movies so we can let them know a new movie is out. // Prints 'Sequence {rdwyr0@parallels.com, mmacer2@sogou.com, rsobieski5@usatoday.com, rvarrand@jugem.jp}' people.select(p | p.movies == 'Horror').collect(p | p.email).println(); // Get all males and females that like Thrillers and set up dates // Prints // Olvan and Cosetta is a match made in heaven! // Olvan and Lissa is a match made in heaven! var mt = people.select(p | p.movies == 'Thriller' and p.gender == 'Male'); var ft = people.select(p | p.movies == 'Thriller' and p.gender == 'Female'); for (m in mt) { for (f in ft) { (m.first_name + \" and \" + f.first_name + \" is a match made in heaven!\").println(); } }","title":"Header-full CSV Model"},{"location":"doc/articles/csv-emc/#header-full-with-varargs-csv-model","text":"Last, we have a CSV model with some vararg information. It is the same as before, but in this case people are allowed to have multiple movies. We have also added a quote field that shows the quote character in action. id,first_name,last_name,email,gender,job,credit_card,quote,movies 604-78-8459,Ricoriki,Dwyr,rdwyr0@parallels.com,Male,VP Quality Control,,Duis at velit eu est congue elementum.,Horror 272-41-1349,Norry,Halpin,nhalpin1@slashdot.org,Female,Legal Assistant,,Aenean sit amet justo. Morbi ut odio.,Drama,Film-Noir,Thriller 844-07-0023,Matteo,Macer,mmacer2@sogou.com,Male,Tax Accountant,3542981651057648,In hac habitasse platea dictumst.,Horror,Mystery,Thriller 429-41-4964,Kattie,Fysh,kfysh3@angelfire.com,Female,Senior Financial Analyst,,Suspendisse potenti. 
In eleifend quam a odio.,Comedy 378-90-9530,Link,Proffitt,lproffitt4@cloudflare.com,Male,Paralegal,,Suspendisse accumsan tortor quis turpis. Sed ante.,Drama 811-26-0387,Rafferty,Sobieski,rsobieski5@usatoday.com,Male,Physical Therapy Assistant,5602242765074843,\"Nulla neque libero, convallis eget, eleifend luctus, ultricies eu, nibh. Quisque id justo sit amet sapien dignissim vestibulum.\",Horror 386-53-1139,Ernestine,Kringe,ekringe6@gov.uk,Female,Software Consultant,3531096662484096,Nulla justo. Aliquam quis turpis eget elit sodales scelerisque.,Drama 850-05-5333,Flossy,Mobberley,fmobberley7@msn.com,Female,Chief Design Engineer,3558038696922012,Nulla tempus.,Comedy,Romance 605-52-9809,Tull,Ingerith,tingerith8@surveymonkey.com,Male,VP Quality Control,,\"Morbi vestibulum, velit id pretium iaculis, diam erat fermentum justo, nec condimentum neque sapien placerat ante. Nulla justo.\",Drama 580-79-7291,Derry,Laurisch,dlaurisch9@taobao.com,Male,Software Test Engineer I,,Praesent blandit lacinia erat. Vestibulum sed magna at nunc commodo placerat.,Drama,War 676-89-8860,Cosetta,Vlasov,cvlasova@livejournal.com,Female,Nurse Practicioner,,In hac habitasse platea dictumst.,Crime,Film-Noir,Thriller 748-10-2370,Lissa,Stanger,lstangerb@tmall.com,Female,Analyst Programmer,,Pellentesque at nulla.,Action,Adventure,Thriller 164-18-3409,Giffie,Boards,gboardsc@gmpg.org,Male,Graphic Designer,3575314620284632,\"Morbi vel lectus in quam fringilla rhoncus. Mauris enim leo, rhoncus sed, vestibulum sit amet, cursus id, turpis.\",Comedy 212-06-7778,Rabbi,Varran,rvarrand@jugem.jp,Male,GIS Technical Architect,3551249058791476,Suspendisse potenti.,Horror 628-02-3617,Olvan,Alabone,oalabonee@archive.org,Male,Help Desk Technician,,Pellentesque viverra pede ac diam. Cras pellentesque volutpat dui.,Action,Adventure,Sci-Fi,Thriller 318-48-3006,Constantino,Eyckelbeck,ceyckelbeckf@histats.com,Male,Recruiter,564182300132483644,In hac habitasse platea dictumst. 
Maecenas ut massa quis augue luctus tincidunt.,War 122-74-6759,Nickolas,Collard,ncollardg@dot.gov,Male,Web Designer IV,,Praesent blandit lacinia erat. Vestibulum sed magna at nunc commodo placerat.,Drama 309-57-3090,Chere,Hurry,churryh@huffingtonpost.com,Female,Tax Accountant,,\"In tempor, turpis nec euismod scelerisque, quam turpis adipiscing lorem, vitae mattis nibh ligula nec sem.\",Drama,Fantasy,Mystery 833-32-9040,Mattie,Hamon,mhamoni@auda.org.au,Male,Structural Engineer,,Duis at velit eu est congue elementum. In hac habitasse platea dictumst.,Drama 101-82-2564,Hew,Goble,hgoblej@ocn.ne.jp,Male,VP Accounting,,Etiam pretium iaculis justo.,Comedy // Get all Rows elements var people = Row.all; // Random thoughts for (p in people) { if (p.gender == \"Female\" and p.movies.includes(\"Thriller\")) { (p.first_name + \" screams '\" + p.quote + \"' when watching a Thriller. She is afraid of being a \" + p.job + \".\").println(); } else if (p.gender == \"Male\" and p.movies.includes(\"Drama\")) { (p.first_name + \" sighs, but blames '\" + p.quote + \"' for the tear in his eye. Being a \" + p.job + \" will never be the same.\").println(); } } // Output //Norry screams 'Aenean sit amet justo. Morbi ut odio.' when watching a Thriller. She is afraid of being a Legal Assistant. //Link sighs, but blames 'Suspendisse accumsan tortor quis turpis. Sed ante.' for the tear in his eye. Being a Paralegal will never be the same. //Tull sighs, but blames 'Morbi vestibulum, velit id pretium iaculis, diam erat fermentum justo, nec condimentum neque sapien placerat ante. Nulla justo.' for the tear in his eye. Being a VP Quality Control will never be the same. //Derry sighs, but blames 'Praesent blandit lacinia erat. Vestibulum sed magna at nunc commodo placerat.' for the tear in his eye. Being a Software Test Engineer I will never be the same. //Cosetta screams 'In hac habitasse platea dictumst.' when watching a Thriller. She is afraid of being a Nurse Practicioner. 
//Lissa screams 'Pellentesque at nulla.' when watching a Thriller. She is afraid of being a Analyst Programmer. //Nickolas sighs, but blames 'Praesent blandit lacinia erat. Vestibulum sed magna at nunc commodo placerat.' for the tear in his eye. Being a Web Designer IV will never be the same. //Mattie sighs, but blames 'Duis at velit eu est congue elementum. In hac habitasse platea dictumst.' for the tear in his eye. Being a Structural Engineer will never be the same.","title":"Header-full with Varargs CSV Model"},{"location":"doc/articles/csv-emc/#queryingmodifying-csv-documents-in-eol","text":"The CSV driver supports direct query and modification of attribute values: // Get all Rows elements var people = Row.all; // Get a random person var p = people.random(); p.first_name.println(); // Change the first name p.first_name = \"Maria Antonieta\"; p.first_name.println();","title":"Querying/modifying CSV documents in EOL"},{"location":"doc/articles/csv-emc/#how-do-i-create-an-element","text":"You can use the new operator for this, and remember that all CSV elements are rows! New Rows will be added at the end of the file when persisting the changes. // Check how many entries are in the model // Prints '20' Row.all.size().println(); // Create a new Row element var b = new Row; // Check again // Prints '21' Row.all.size().println();","title":"How do I create an element?"},{"location":"doc/articles/csv-emc/#loading-an-csv-document-in-your-ant-buildfile","text":"The following ANT build file demonstrates how you can use ANT to load/store and process CSV documents with Epsilon. 
<project default= \"main\" > <target name= \"main\" > <epsilon.csv.loadModel name= \"people\" file= \"people.csv\" read= \"true\" store= \"false\" , knownHeaders= \"true\" /> </epsilon.csv.loadModel> <epsilon.eol src= \"my.eol\" > <model ref= \"people\" /> </epsilon.eol> </target> </project>","title":"Loading an CSV document in your ANT buildfile"},{"location":"doc/articles/csv-emc/#loading-an-csv-document-through-java-code","text":"The following excerpt demonstrates using CSV models using Epsilon\\'s Java API. EolModule module = new EolModule (); module . parse ( new File ( \"...\" )); CsvModel model = new CsvModel (); model . setName ( \"M\" ); model . setFile ( new File ( \"...\" )); char fieldSeparator = ',' ; model . setFieldSeparator ( fieldSeparator ); model . setKnownHeaders ( false ); model . setVarargsHeaders ( false ); module . getContext (). getModelRepository (). addModel ( model ); module . getContext (). setModule ( module ); module . execute ();","title":"Loading an CSV document through Java code"},{"location":"doc/articles/dev-setup/","text":"Eclipse Setup for Epsilon Developers \u00b6 If you are a contributor to Epsilon (or you want to build on top of it), and don't already have Eclipse installed or the repository cloned, you can easily set this up automatically in a few clicks. Head to the Downloads page , download the installer for your platform and launch it. Then switch to Advanced Mode. Select \"Eclipse IDE for Java Developers\" in the Product page and then Next. On the Projects page, look for Epsilon and select it, then Next. You can customise variables to suit, such as where Eclipse will be installed and the protocol for cloning the repositories. The defaults should be fine. Keep going with Next and then Finish. If all went to plan, then you should have a local copy of the main Epsilon repository and the website , with projects imported into Eclipse. You may need to wait for setup tasks to finish when first launching Eclipse. 
This can also be manually triggered from the Help -> Perform Setup Tasks menu in Eclipse. If you encounter any issues, please let us know via the mailing list or forum .","title":"Eclipse Setup for Epsilon Developers"},{"location":"doc/articles/dev-setup/#eclipse-setup-for-epsilon-developers","text":"If you are a contributor to Epsilon (or you want to build on top of it), and don't already have Eclipse installed or the repository cloned, you can easily set this up automatically in a few clicks. Head to the Downloads page , download the installer for your platform and launch it. Then switch to Advanced Mode. Select \"Eclipse IDE for Java Developers\" in the Product page and then Next. On the Projects page, look for Epsilon and select it, then Next. You can customise variables to suit, such as where Eclipse will be installed and the protocol for cloning the repositories. The defaults should be fine. Keep going with Next and then Finish. If all went to plan, then you should have a local copy of the main Epsilon repository and the website , with projects imported into Eclipse. You may need to wait for setup tasks to finish when first launching Eclipse. This can also be manually triggered from the Help -> Perform Setup Tasks menu in Eclipse. If you encounter any issues, please let us know via the mailing list or forum .","title":"Eclipse Setup for Epsilon Developers"},{"location":"doc/articles/developing-a-new-emc-driver/","text":"Developing a new EMC Driver \u00b6 The following deck of slides demonstrates the implementation of a new \"driver\" for Epsilon's Model Connectivity layer that allows all Epsilon languages to interact with CSV files. 
The complete source-code is located in the Epsilon Git repository (see details in the slides).","title":"Developing a new EMC Driver"},{"location":"doc/articles/developing-a-new-emc-driver/#developing-a-new-emc-driver","text":"The following deck of slides demonstrates the implementation of a new \"driver\" for Epsilon's Model Connectivity layer that allows all Epsilon languages to interact with CSV files. The complete source-code is located in the Epsilon Git repository (see details in the slides).","title":"Developing a new EMC Driver"},{"location":"doc/articles/developing-a-new-language/","text":"Developing a new Epsilon Language \u00b6 The following decks of slides demonstrate the implementation of two minimal model management languages (and their supporting Eclipse-based development tools) on top of Epsilon: one using annotations (TestLang - with fewer than 200 lines of code), and one using grammar extension (EDL - with fewer than 300 lines of code). All the source-code for the two languages is located in the Epsilon Git repository (see details in the slides). Annotations: TestLang \u00b6 Grammar Extension: Epsilon Demo Language (EDL) \u00b6","title":"Developing a new Epsilon Language"},{"location":"doc/articles/developing-a-new-language/#developing-a-new-epsilon-language","text":"The following decks of slides demonstrate the implementation of two minimal model management languages (and their supporting Eclipse-based development tools) on top of Epsilon: one using annotations (TestLang - with fewer than 200 lines of code), and one using grammar extension (EDL - with fewer than 300 lines of code). 
All the source-code for the two languages is located in the Epsilon Git repository (see details in the slides).","title":"Developing a new Epsilon Language"},{"location":"doc/articles/developing-a-new-language/#annotations-testlang","text":"","title":"Annotations: TestLang"},{"location":"doc/articles/developing-a-new-language/#grammar-extension-epsilon-demo-language-edl","text":"","title":"Grammar Extension: Epsilon Demo Language (EDL)"},{"location":"doc/articles/development-principles/","text":"Epsilon Development Principles \u00b6 This article describes the guiding principles that the committers of Epsilon follow. In-keeping with agile development principles, we don't use a strict/heavy-weight development process. Each member of the development team is free to use quite different approaches to software development. However, we aim to follow the following principles to ensure that there is a basic level of consistency across the Epsilon platform and its development. General \u00b6 Be mindful of different use cases : design, implementation and evolution of the platform respects that Epsilon can be used in different environments (from Eclipse or stand-alone) and on different operating systems (Windows, Linux, Mac OS); and that Epsilon programs can be invoked in different manners (Eclipse launch configurations, Ant tasks, programmatically). Maintain backwards-compatibility : the APIs exposed by Epsilon should be stable. Changes should not break client code. We use deprecation to warn users that an API has changed, and might be changed in a breaking manner in a future version of Epsilon. Source code \u00b6 Collectively own the code : all of the code is owned by the entire team, and anybody can make changes anywhere. Often, we work together on changes to the core of the platform, or to languages that a particular committer has developed initially (e.g., we might work closely with Antonio on a change to EUnit, because Antonio has done most of the recent work on EUnit). 
Collaborate on design : although we rarely practice \"live\" pair programming, we do share patches and discuss important design decisions internally. Adhere to code conventions : we do not place opening brackets on their own line. Testing \u00b6 Favour automated testing : to provide some assurance that we are shipping working code, we include automated tests along with feature code. Favour testing over testing-first : although we appreciate the benefits of test-first and test-driven development, we do not always develop tests first, often preferring peer review to make design decisions. Everyone uses the same testing frameworks : currently we favour JUnit 4 and Mockito for testing and mocking, respectively. Older code might still use other libraries (e.g. JUnit 3 and JMock), and we aim to replace these when we encounter them. Bug/Feature Tracking \u00b6 Trace changes using Bugzilla : we use Bugzilla to document and discuss design and implementation changes. We often raise our own bugs. We use bug numbers in commit messages to maintain trace links between the code and discussions about the code. Adhere to Bugzilla conventions : we follow a small set of Bugzilla conventions . Source Code Management \u00b6 Describe commits with meaningful messages : to ensure that the history of the code can be understood by every member of the team, we endeavour to make our commit messages understandable and traceable. Metadata is often included in commit messages, for example: \"[EOL] Fixes bug #123456, which prevented the creation of widgets.\" Avoid large commits : to ensure that the history of the code can be understood by every member of the team, we favour breaking large commits into smaller consecutive commits. Technical Support \u00b6 No forum post goes unanswered : to maintain and foster the community around Epsilon, we answer every question on the user forum. 
Encourage users to produce minimal examples : if we need to reproduce a user's issue, we will often ask for a minimal example to aid in debugging. We have found this to be effective because it allows us to focus most of our time on fixing issues, and because users sometimes discover the solution to their issue while producing the minimal example.","title":"Epsilon Development Principles"},{"location":"doc/articles/development-principles/#epsilon-development-principles","text":"This article describes the guiding principles that the committers of Epsilon follow. In keeping with agile development principles, we don't use a strict/heavy-weight development process. Each member of the development team is free to use quite different approaches to software development. However, we aim to adhere to the following principles to ensure that there is a basic level of consistency across the Epsilon platform and its development.","title":"Epsilon Development Principles"},{"location":"doc/articles/development-principles/#general","text":"Be mindful of different use cases : design, implementation and evolution of the platform respects that Epsilon can be used in different environments (from Eclipse or stand-alone) and on different operating systems (Windows, Linux, Mac OS); and that Epsilon programs can be invoked in different manners (Eclipse launch configurations, Ant tasks, programmatically). Maintain backwards-compatibility : the APIs exposed by Epsilon should be stable. Changes should not break client code. We use deprecation to warn users that an API has changed, and might be changed in a breaking manner in a future version of Epsilon.","title":"General"},{"location":"doc/articles/development-principles/#source-code","text":"Collectively own the code : all of the code is owned by the entire team, and anybody can make changes anywhere. 
Often, we work together on changes to the core of the platform, or to languages that a particular committer has developed initially (e.g., we might work closely with Antonio on a change to EUnit, because Antonio has done most of the recent work on EUnit). Collaborate on design : although we rarely practice \"live\" pair programming, we do share patches and discuss important design decisions internally. Adhere to code conventions : we do not place opening brackets on their own line.","title":"Source code"},{"location":"doc/articles/development-principles/#testing","text":"Favour automated testing : to provide some assurance that we are shipping working code, we include automated tests along with feature code. Favour testing over testing-first : although we appreciate the benefits of test-first and test-driven development, we do not always develop tests first, often preferring peer review to make design decisions. Everyone uses the same testing frameworks : currently we favour JUnit 4 and Mockito for testing and mocking, respectively. Older code might still use other libraries (e.g. JUnit 3 and JMock), and we aim to replace these when we encounter them.","title":"Testing"},{"location":"doc/articles/development-principles/#bugfeature-tracking","text":"Trace changes using Bugzilla : we use Bugzilla to document and discuss design and implementation changes. We often raise our own bugs. We use bug numbers in commit messages to maintain trace links between the code and discussions about the code. Adhere to Bugzilla conventions : we follow a small set of Bugzilla conventions .","title":"Bug/Feature Tracking"},{"location":"doc/articles/development-principles/#source-code-management","text":"Describe commits with meaningful messages : to ensure that the history of the code can be understood by every member of the team, we endeavour to make our commit messages understandable and traceable. 
Metadata is often included in commit messages, for example: \"[EOL] Fixes bug #123456, which prevented the creation of widgets.\" Avoid large commits : to ensure that the history of the code can be understood by every member of the team, we favour breaking large commits into smaller consecutive commits.","title":"Source Code Management"},{"location":"doc/articles/development-principles/#technical-support","text":"No forum post goes unanswered : to maintain and foster the community around Epsilon, we answer every question on the user forum. Encourage users to produce minimal examples : if we need to reproduce a user's issue, we will often ask for a minimal example to aid in debugging. We have found this to be effective because it allows us to focus most of our time on fixing issues, and because users sometimes discover the solution to their issue while producing the minimal example.","title":"Technical Support"},{"location":"doc/articles/egl-invoke-egl/","text":"Re-using EGL templates \u00b6 Sometimes it may be handy to send the output of one EGL template into another EGL template. This is a great idea because it will make your templates more modular and cohesive, and lead to less code overall. For example, suppose you've been generating an XML file for each Book in your model. Hence, you have a Book2XML.egl template with the following contents: <book> <title>[%=title%]</title> <isbn>[%=isbn%]</isbn> <pages>[%=pages.asString()%]</pages> <authors> [% for (author in authors) {%] <author name=\"[%=author.name%]\"/> [%}%] </authors> </book> Suppose that now you also want to generate a single XML for each Library, where a Library is a collection of Books. 
Instead of duplicating the code in Book2XML.egl, you can re-use it by calling it from Library2XML.egl, like so: <library id=[%=lib.id%] name=\"[%=lib.name%]\"> [% for (book in lib.books) { var bookTemplate : Template = TemplateFactory.load(\"/path/to/Book2XML.egl\"); bookTemplate.populate(\"book\", book); bookTemplate.populate(\"title\", book.title); bookTemplate.populate(\"isbn\", book.isbn); bookTemplate.populate(\"pages\", book.pages); bookTemplate.populate(\"authors\", book.authors); %] [%=bookTemplate.process()%] [%}%] As with EGX, you can pass parameters to the invoked template using the \"populate\" operation, where the first parameter is the variable name (that the invoked template will see) and the second parameter is the value.","title":"Re-using EGL templates"},{"location":"doc/articles/egl-invoke-egl/#re-using-egl-templates","text":"Sometimes it may be handy to send the output of one EGL template into another EGL template. This is a great idea because it will make your templates more modular and cohesive, and lead to less code overall. For example, suppose you've been generating an XML file for each Book in your model. Hence, you have a Book2XML.egl template with the following contents: <book> <title>[%=title%]</title> <isbn>[%=isbn%]</isbn> <pages>[%=pages.asString()%]</pages> <authors> [% for (author in authors) {%] <author name=\"[%=author.name%]\"/> [%}%] </authors> </book> Suppose that now you also want to generate a single XML for each Library, where a Library is a collection of Books. 
Instead of duplicating the code in Book2XML.egl, you can re-use it by calling it from Library2XML.egl, like so: <library id=[%=lib.id%] name=\"[%=lib.name%]\"> [% for (book in lib.books) { var bookTemplate : Template = TemplateFactory.load(\"/path/to/Book2XML.egl\"); bookTemplate.populate(\"book\", book); bookTemplate.populate(\"title\", book.title); bookTemplate.populate(\"isbn\", book.isbn); bookTemplate.populate(\"pages\", book.pages); bookTemplate.populate(\"authors\", book.authors); %] [%=bookTemplate.process()%] [%}%] As with EGX, you can pass parameters to the invoked template using the \"populate\" operation, where the first parameter is the variable name (that the invoked template will see) and the second parameter is the value.","title":"Re-using EGL templates"},{"location":"doc/articles/egl-server-side/","text":"Using EGL as a server-side scripting language in Tomcat \u00b6 The original purpose of EGL was to enable batch generation of source code and other textual artefacts from EMF models. However, since there is no hard binding between the language and the file system, it is also possible to use EGL in other contexts. In this article, we demonstrate using EGL as a server-side scripting language in Tomcat, to produce web pages from EMF models on the fly. Setup \u00b6 Download a fresh copy of Tomcat 6.0 here and extract it Download egl-servlet-full.zip Extract all .jar files from the zip into the lib folder of Tomcat Open web.xml in the conf directory of Tomcat and add the following snippet <servlet> <servlet-name> egl </servlet-name> <servlet-class> org.eclipse.epsilon.egl.servlet.EglServlet </servlet-class> <load-on-startup> 1 </load-on-startup> </servlet> <servlet-mapping> <servlet-name> egl </servlet-name> <url-pattern> *.egl </url-pattern> </servlet-mapping> Make sure that there is an environment variable called JRE_HOME and it's pointing to your JRE installation directory (the root, not the bin ). 
In Windows, you can create this variable from System Properties\u2192Advanced\u2192Environment Variables\u2192System Variables\u2192New... Create a Hello World web application \u00b6 To create a hello world web application and test your installation, you need to go through the following steps: Go to the webapps folder and create a new directory named helloworld Inside helloworld , create a new file called index.egl and add to it the following code [%=\"Hello World\"%] Start Tomcat using bin/startup.bat (or startup.sh in Linux/MacOS) Open your browser and go to http://localhost:8080/helloworld/index.egl A web-page with the text Hello World should appear. If not, please make sure you've followed all the steps above and if it still doesn't work, please drop by the forum and we'll be happy to help. Accessing parameters from the URL \u00b6 To access parameters from the URL (or fields of a submitted form) you can use the request.getParameter('parameter-name') method. For example, by modifying the source code of index.egl to the following [%=\"Hello \"+request.getParameter(\"visitor\")%] and navigating to http://localhost:8080/helloworld/index.egl?visitor=John , you should get a page reading Hello John as a result. Other built-in objects \u00b6 EGL provides the following built-in objects which (should) function exactly like they do in JSP request response config application session You may want to have a look here for a tutorial that explains their functionality. Caching \u00b6 EGL provides the built-in cache object to facilitate two types of caching. Page caching can be used to ensure that repeated requests to the same URL do not result in the execution of EGL templates. Fragment caching can be used to share the text generated by a template between requests for different URLs. 
For example, the following code is used to ensure that repeated requests for pages matching the regular expression index.* are served from the page cache: [% cache.pages(\"index.*\"); %] The page cache can be expired programmatically, as shown below, or by restarting the Tomcat server. [% cache.expirePages(\"index.*\"); %] In addition to page caching, EGL supports fragment caching which allows the contents of a sub-template to be cached. For example, the following code processes sidebar.egl only the first time that the template is executed: [% var sidebarTemplate = TemplateFactory.load(\"Sidebar.egl\"); %] [%=cache.fragment(sidebarTemplate) %] Note that the fragment method should be used in a dynamic output section. Like pages, fragments can be expired programmatically (or by restarting the Tomcat server): [% cache.expireFragment(sidebarTemplate); %] A simple caching strategy is to populate the page and fragment caches from your main EGL templates, and to provide a ClearCache.egl template in a sub-directory that only administrators can access. Loading EMF models in EGL pages \u00b6 The main motivation for turning EGL into a server-side scripting language is its ability to work well with EMF models. EGL provides the modelManager built-in object to let you load EMF models that reside in the web application. To experiment with modelManager , download the Graph.ecore and Ecore.ecore models and place them in your helloworld directory. 
Then, change your index.egl to look like this [% modelManager.registerMetamodel(\"Ecore.ecore\"); modelManager.loadModel(\"Sample\", \"Graph.ecore\", \"http://www.eclipse.org/emf/2002/Ecore\"); %] The metamodel has [%=EClass.all.size()%] classes Refresh the page in your browser and it should now read: The metamodel has 3 classes The Model Manager \u00b6 The modelManager built-in object provides the following methods: registerMetamodel(file : String) : Registers the file (should be an Ecore metamodel) in EPackage.Registry.INSTANCE loadModel(name : String, modelFile : String, metamodelURI : String) : Loads the model stored in modelFile using the registered metamodel metamodelURI . loadModelByFile(name : String, modelFile : String, metamodelFile : String) : Loads the model stored in modelFile using the metamodel in metamodelFile . loadModel(name : String, aliases : String, modelFile : String, metamodel : String, expand : Boolean, metamodelIsFilebased : Boolean) : Provides more parameters for loading models. uncacheModel(modelFile : String) : Removes the modelFile from the cache (next call to loadModel() will actually reload it) clear() : Clears cached models and metamodels Sharing models between templates \u00b6 Currently, each model is only loaded once (the first time the loadModel() or loadModelByFile() is called). If multiple pages need to access the same model, add the model loading logic in an operation in a separate models.eol file: operation loadModels() { modelManager.registerMetamodel(\"Ecore.ecore\"); modelManager.loadModel(\"Sample\", \"Graph.ecore\", \"http://www.eclipse.org/emf/2002/Ecore\"); } and then import and call it from each one of your pages: [% import \"models.eol\"; loadModels(); %] // Page code here Running EGL on Google App Engine \u00b6 By default App Engine will treat EGL files as static content and serve their source code instead of executing them. 
To work around this, add the following snippet under the root element of the appengine-web.xml configuration file of your App Engine application. <static-files> <exclude path= \"*.egl\" /> </static-files> Working with big models \u00b6 If you encounter a Java OutOfMemoryError while querying a big model you'll need to start Tomcat with more memory than the default 256 MB. To do this, go to bin/catalina.bat (on Windows -- if you're on Linux you should modify catalina.sh accordingly) and change line set JAVA_OPTS=%JAVA_OPTS% %LOGGING_MANAGER% to set JAVA_OPTS=%JAVA_OPTS% %LOGGING_MANAGER% -Xms1024m -Xmx1024m -XX:MaxPermSize=128m If you keep getting out of memory errors, you may find PSI Probe useful for figuring out what's going wrong.","title":"Using EGL as a server-side scripting language in Tomcat"},{"location":"doc/articles/egl-server-side/#using-egl-as-a-server-side-scripting-language-in-tomcat","text":"The original purpose of EGL was to enable batch generation of source code and other textual artefacts from EMF models. However, since there is no hard binding between the language and the file system, it is also possible to use EGL in other contexts. 
In this article, we demonstrate using EGL as a server-side scripting language in Tomcat, to produce web pages from EMF models on the fly.","title":"Using EGL as a server-side scripting language in Tomcat"},{"location":"doc/articles/egl-server-side/#setup","text":"Download a fresh copy of Tomcat 6.0 here and extract it Download egl-servlet-full.zip Extract all .jar files from the zip into the lib folder of Tomcat Open web.xml in the conf directory of Tomcat and add the following snippet <servlet> <servlet-name> egl </servlet-name> <servlet-class> org.eclipse.epsilon.egl.servlet.EglServlet </servlet-class> <load-on-startup> 1 </load-on-startup> </servlet> <servlet-mapping> <servlet-name> egl </servlet-name> <url-pattern> *.egl </url-pattern> </servlet-mapping> Make sure that there is an environment variable called JRE_HOME and it's pointing to your JRE installation directory (the root, not the bin ). In Windows, you can create this variable from System Properties\u2192Advanced\u2192Environment Variables\u2192System Variables\u2192New...","title":"Setup"},{"location":"doc/articles/egl-server-side/#create-a-hello-world-web-application","text":"To create a hello world web application and test your installation, you need to go through the following steps: Go to the webapps folder and create a new directory named helloworld Inside helloworld , create a new file called index.egl and add to it the following code [%=\"Hello World\"%] Start Tomcat using bin/startup.bat (or startup.sh in Linux/MacOS) Open your browser and go to http://localhost:8080/helloworld/index.egl A web-page with the text Hello World should appear. 
If not, please make sure you've followed all the steps above and if it still doesn't work, please drop by the forum and we'll be happy to help.","title":"Create a Hello World web application"},{"location":"doc/articles/egl-server-side/#accessing-parameters-from-the-url","text":"To access parameters from the URL (or fields of a submitted form) you can use the request.getParameter('parameter-name') method. For example, by modifying the source code of index.egl to the following [%=\"Hello \"+request.getParameter(\"visitor\")%] and navigating to http://localhost:8080/helloworld/index.egl?visitor=John , you should get a page reading Hello John as a result.","title":"Accessing parameters from the URL"},{"location":"doc/articles/egl-server-side/#other-built-in-objects","text":"EGL provides the following built-in objects which (should) function exactly like they do in JSP request response config application session You may want to have a look here for a tutorial that explains their functionality.","title":"Other built-in objects"},{"location":"doc/articles/egl-server-side/#caching","text":"EGL provides the built-in cache object to facilitate two types of caching. Page caching can be used to ensure that repeated requests to the same URL do not result in the execution of EGL templates. Fragment caching can be used to share the text generated by a template between requests for different URLs. For example, the following code is used to ensure that repeated requests for pages matching the regular expression index.* are served from the page cache: [% cache.pages(\"index.*\"); %] The page cache can be expired programmatically, as shown below, or by restarting the Tomcat server. [% cache.expirePages(\"index.*\"); %] In addition to page caching, EGL supports fragment caching which allows the contents of a sub-template to be cached. 
For example, the following code processes sidebar.egl only the first time that the template is executed: [% var sidebarTemplate = TemplateFactory.load(\"Sidebar.egl\"); %] [%=cache.fragment(sidebarTemplate) %] Note that the fragment method should be used in a dynamic output section. Like pages, fragments can be expired programmatically (or by restarting the Tomcat server): [% cache.expireFragment(sidebarTemplate); %] A simple caching strategy is to populate the page and fragment caches from your main EGL templates, and to provide a ClearCache.egl template in a sub-directory that only administrators can access.","title":"Caching"},{"location":"doc/articles/egl-server-side/#loading-emf-models-in-egl-pages","text":"The main motivation for turning EGL into a server-side scripting language is its ability to work well with EMF models. EGL provides the modelManager built-in object to let you load EMF models that reside in the web application. To experiment with modelManager , download the Graph.ecore and Ecore.ecore models and place them in your helloworld directory. Then, change your index.egl to look like this [% modelManager.registerMetamodel(\"Ecore.ecore\"); modelManager.loadModel(\"Sample\", \"Graph.ecore\", \"http://www.eclipse.org/emf/2002/Ecore\"); %] The metamodel has [%=EClass.all.size()%] classes Refresh the page in your browser and it should now read: The metamodel has 3 classes","title":"Loading EMF models in EGL pages"},{"location":"doc/articles/egl-server-side/#the-model-manager","text":"The modelManager built-in object provides the following methods: registerMetamodel(file : String) : Registers the file (should be an Ecore metamodel) in EPackage.Registry.INSTANCE loadModel(name : String, modelFile : String, metamodelURI : String) : Loads the model stored in modelFile using the registered metamodel metamodelURI . 
loadModelByFile(name : String, modelFile : String, metamodelFile : String) : Loads the model stored in modelFile using the metamodel in metamodelFile . loadModel(name : String, aliases : String, modelFile : String, metamodel : String, expand : Boolean, metamodelIsFilebased : Boolean) : Provides more parameters for loading models. uncacheModel(modelFile : String) : Removes the modelFile from the cache (next call to loadModel() will actually reload it) clear() : Clears cached models and metamodels","title":"The Model Manager"},{"location":"doc/articles/egl-server-side/#sharing-models-between-templates","text":"Currently, each model is only loaded once (the first time the loadModel() or loadModelByFile() is called). If multiple pages need to access the same model, add the model loading logic in an operation in a separate models.eol file: operation loadModels() { modelManager.registerMetamodel(\"Ecore.ecore\"); modelManager.loadModel(\"Sample\", \"Graph.ecore\", \"http://www.eclipse.org/emf/2002/Ecore\"); } and then import and call it from each one of your pages: [% import \"models.eol\"; loadModels(); %] // Page code here","title":"Sharing models between templates"},{"location":"doc/articles/egl-server-side/#running-egl-on-google-app-engine","text":"By default App Engine will treat EGL files as static content and serve their source code instead of executing them. To work around this, add the following snippet under the root element of the appengine-web.xml configuration file of your App Engine application. <static-files> <exclude path= \"*.egl\" /> </static-files>","title":"Running EGL on Google App Engine"},{"location":"doc/articles/egl-server-side/#working-with-big-models","text":"If you encounter a Java OutOfMemoryError while querying a big model you'll need to start Tomcat with more memory than the default 256 MB. 
To do this, go to bin/catalina.bat (on Windows -- if you're on Linux you should modify catalina.sh accordingly) and change line set JAVA_OPTS=%JAVA_OPTS% %LOGGING_MANAGER% to set JAVA_OPTS=%JAVA_OPTS% %LOGGING_MANAGER% -Xms1024m -Xmx1024m -XX:MaxPermSize=128m If you keep getting out of memory errors, you may find PSI Probe useful for figuring out what's going wrong.","title":"Working with big models"},{"location":"doc/articles/egl-template-operations/","text":"Using template operations in EGL \u00b6 Template operations provide a way to re-use small fragments of EGL code. This article shows how to write EGL template operations and discusses when you might want to use them. Suppose we are writing a code generator for plain-old Java objects, and we have the following EGL code (which assumes the presence of a class object): class [%=class.name%] { [% for (feature in class.features) { %] /** * Gets the value of [%=feature.firstToLowerCase()%] */ public [%=feature.type%] get[%=feature%]() { return [%=feature.firstToLowerCase()%]; } /** * Sets the value of [%=feature.firstToLowerCase()%] */ public void set[%=feature%]([%=feature.type%] [%=feature.firstToLowerCase()%]) { this.[%=feature.firstToLowerCase()%] = [%=feature.firstToLowerCase()%]; } [% } %] } While the above code will work, it has a couple of drawbacks. Firstly, the code to generate getters and setters cannot be re-used in other templates. Secondly, the template is arguably hard to read - the purpose of the loop's body is not immediately clear. 
Using EGL template operations, the above code becomes: class [%=class.name%] { [% for (feature in class.features) { %] [%=feature.getter()%] [%=feature.setter()%] [% } %] } [% @template operation Feature getter() { %] /** * Gets the value of [%=self.firstToLowerCase()%] */ public [%=self.type%] get[%=self%]() { return [%=self.firstToLowerCase()%]; } [% } %] [% @template operation Feature setter() { %] /** * Sets the value of [%=self.firstToLowerCase()%] */ public void set[%=self%]([%=self.type%] [%=self.firstToLowerCase()%]) { this.[%=self.firstToLowerCase()%] = [%=self.firstToLowerCase()%]; } [% } %] Notice that, in the body of the loop, we call the template operations, getter and setter , to generate the getter and setter methods for each feature. This makes the loop arguably easier to read, and the getter and setter operations can be re-used in other templates. Template operations are annotated with @template and can mix dynamic and static sections, just like the main part of an EGL template. Operations are defined on metamodel types (Feature in the code above), and may be called on any model element that instantiates that type. In the body of an operation, the keyword self is used to refer to the model element on which the operation has been called. Common issues \u00b6 Issue: my template operation produces no output. Resolution: ensure that the call to the template operation is placed in a dynamic output section (e.g. [%=thing.op()%] ) rather than in a plain dynamic section (e.g. [% thing.op(); %] ). Template operations return a value, which must then be emitted to the main template using a dynamic output section. Thanks to Mark Tippetts for reporting this issue via the Epsilon forum .","title":"Using template operations in EGL"},{"location":"doc/articles/egl-template-operations/#using-template-operations-in-egl","text":"Template operations provide a way to re-use small fragments of EGL code. 
This article shows how to write EGL template operations and discusses when you might want to use them. Suppose we are writing a code generator for plain-old Java objects, and we have the following EGL code (which assumes the presence of a class object): class [%=class.name%] { [% for (feature in class.features) { %] /** * Gets the value of [%=feature.firstToLowerCase()%] */ public [%=feature.type%] get[%=feature%]() { return [%=feature.firstToLowerCase()%]; } /** * Sets the value of [%=feature.firstToLowerCase()%] */ public void set[%=feature%]([%=feature.type%] [%=feature.firstToLowerCase()%]) { this.[%=feature.firstToLowerCase()%] = [%=feature.firstToLowerCase()%]; } [% } %] } While the above code will work, it has a couple of drawbacks. Firstly, the code to generate getters and setters cannot be re-used in other templates. Secondly, the template is arguably hard to read - the purpose of the loop's body is not immediately clear. Using EGL template operations, the above code becomes: class [%=class.name%] { [% for (feature in class.features) { %] [%=feature.getter()%] [%=feature.setter()%] [% } %] } [% @template operation Feature getter() { %] /** * Gets the value of [%=self.firstToLowerCase()%] */ public [%=self.type%] get[%=self%]() { return [%=self.firstToLowerCase()%]; } [% } %] [% @template operation Feature setter() { %] /** * Sets the value of [%=self.firstToLowerCase()%] */ public void set[%=self%]([%=self.type%] [%=self.firstToLowerCase()%]) { this.[%=self.firstToLowerCase()%] = [%=self.firstToLowerCase()%]; } [% } %] Notice that, in the body of the loop, we call the template operations, getter and setter , to generate the getter and setter methods for each feature. This makes the loop arguably easier to read, and the getter and setter operations can be re-used in other templates. Template operations are annotated with @template and can mix dynamic and static sections, just like the main part of an EGL template. 
Operations are defined on metamodel types (Feature in the code above), and may be called on any model element that instantiates that type. In the body of an operation, the keyword self is used to refer to the model element on which the operation has been called.","title":"Using template operations in EGL"},{"location":"doc/articles/egl-template-operations/#common-issues","text":"Issue: my template operation produces no output. Resolution: ensure that the call to the template operation is placed in a dynamic output section (e.g. [%=thing.op()%] ) rather than in a plain dynamic section (e.g. [% thing.op(); %] ). Template operations return a value, which must then be emitted to the main template using a dynamic output section. Thanks to Mark Tippetts for reporting this issue via the Epsilon forum .","title":"Common issues"},{"location":"doc/articles/egx-parameters/","text":"Co-ordinating EGL template execution with EGX \u00b6 Suppose you're using Epsilon to make a compiler for a domain-specific language (DSL). Specifically, for every Library in the DSL, you want to generate a separate XML file with all of the properties of the Library and its Books. With EGX, you can parameterize your EGL templates to achieve this, like so: pre { var outDirLib : String = \"../libraries/\"; var extension : String = \".xml\"; var specialBook : String = \"Art of War\"; var bigLibThreshold : Integer = 9000; } rule Libraries transform lib : Library { parameters : Map { \"library\" = lib, \"name\" = lib.name, \"books\" = lib.books, \"hasSpecialBook\" = lib.books.exists(book | book.title == specialBook), \"isBigLibrary\" = lib.books.size() > bigLibThreshold } template: \"/path/to/Lib2XML.egl\" target: outDirLib+lib.name+extension } In this example, the Lib2XML EGL template will be invoked for every Library instance in the model, and the output will be written to the file specified in the \"target\". 
The Lib2XML template will receive all of the parameters defined in the parameters block of the rule. This block is a Map from variable name (that the EGL template will use to refer to it) to variable value. For reference, the Lib2XML template is shown below. Note There is no limit on the number of rules you can declare in an EGX program. <?xml version=\"1.0\" encoding=\"UTF-8\"?> <library id=[%=lib.id%] name=\"[%=name%]\" isBigLibrary=\"[%=isBigLibrary.asString()%]\"> [% for (book in books) {%] <book> <title>[%=book.title%]</title> <isbn>[%=book.isbn%]</isbn> <pages>[%=book.pages.asString()%]</pages> <authors> [% for (author in book.authors) {%] <author name=\"[%=author.name%]\"/> [%}%] </authors> </book> [%}%] </library>","title":"Co-ordinating EGL template execution with EGX"},{"location":"doc/articles/egx-parameters/#co-ordinating-egl-template-execution-with-egx","text":"Suppose you're using Epsilon to make a compiler for a domain-specific language (DSL). Specifically, for every Library in the DSL, you want to generate a separate XML file with all of the properties of the Library and its Books. With EGX, you can parameterize your EGL templates to achieve this, like so: pre { var outDirLib : String = \"../libraries/\"; var extension : String = \".xml\"; var specialBook : String = \"Art of War\"; var bigLibThreshold : Integer = 9000; } rule Libraries transform lib : Library { parameters : Map { \"library\" = lib, \"name\" = lib.name, \"books\" = lib.books, \"hasSpecialBook\" = lib.books.exists(book | book.title == specialBook), \"isBigLibrary\" = lib.books.size() > bigLibThreshold } template: \"/path/to/Lib2XML.egl\" target: outDirLib+lib.name+extension } In this example, the Lib2XML EGL template will be invoked for every Library instance in the model, and the output will be written to the file specified in the \"target\". 
The Lib2XML template will receive all of the parameters declared in the parameters block of the rule. This block is a mapping from variable name (which the EGL template will use to refer to it) to variable value. For reference, the Lib2XML template is shown below. Note There is no limit on the number of rules you can declare in an EGX program. <?xml version=\"1.0\" encoding=\"UTF-8\"?> <library id=\"[%=library.id%]\" name=\"[%=name%]\" isBigLibrary=\"[%=isBigLibrary.asString()%]\"> [% for (book in books) {%] <book> <title>[%=book.title%]</title> <isbn>[%=book.isbn%]</isbn> <pages>[%=book.pages.asString()%]</pages> <authors> [% for (author in book.authors) {%] <author name=\"[%=author.name%]\"/> [%}%] </authors> </book> [%}%] </library>","title":"Co-ordinating EGL template execution with EGX"},{"location":"doc/articles/eol-syntax-updates/","text":"EOL Syntax Updates \u00b6 The following is a brief description of changes to the Epsilon Object Language's syntax in each release. 2.2 \u00b6 Tuple type. Similar to a Map with String keys, but its properties can be accessed like a regular object (using the . operator). Assign if null operator ?= as a convenient shorthand for a = a ?: b , that is: a = a <> null ? a : b . 2.1 \u00b6 Elvis operator as a convenient shorthand to use an alternative value if an expression is null. a ?: b is a concise way of writing a <> null ? a else b . Null-safe navigation operator to allow for easy chaining of feature calls without resorting to null checks. For example, null?.getClass()?.getName() will return null without crashing. != can be used as an alias for <> (i.e. \"not equals\"). 2.0 \u00b6 Ternary expressions, which can be used almost anywhere, not just in assignments or returns. Syntax and semantics are identical to Java, but you can also use the else keyword in place of the : if you prefer. Native lambda expressions. You can use first-order operation syntax or JavaScript-style => for invoking functional interfaces. 
Removed old-style OCL comments ( -* and -- ). -- can be used to decrement integers. Thread-safe collection types: ConcurrentBag , ConcurrentMap and ConcurrentSet . 1.4 \u00b6 Added support for postfix increment operator (i.e. i++ ) and for composite assignment statements (i.e. a +=1; a -= 2; a *= 3; a /= 4; ) 0.9.1 \u00b6 Added support for externally defined variables. Support for Map literal expressions (e.g. Map {key1 = value1, k2 = v2} ) 0.8.8 \u00b6 In 0.8.8 we extended the syntax of EOL so that it looks and feels a bit more like Java. As the majority of the Eclipse/EMF audience are Java programmers, this will hopefully make their (and our) lives a bit easier. Of course, all these changes also affect all languages built on top of EOL. More specifically, we have introduced: double quotes ( \" \" ) for string literals, backticks (` `) for reserved words, Java-like comments ( // and /**/ ), == as a comparison operator, = as an assignment operator (in 0.8.7) All these changes (except for the double quotes which have now been replaced by ` `) are non-breaking: the old syntax ( '' for strings, = for comparison and := for assignment ) still works. Below is an example demonstrating the new syntax: /* This is a multi line comment */ // This is a single line comment var i = 1; if (i == 1) { \"Hello World\".println(); } i = 2; // Assigns the value 2 to i var `variable with spaces` = 3; `variable with spaces`.println(); // Prints 3 If you have suggestions for further Java-ifications of the EOL syntax, please post your comments to the Epsilon forum or add them to bug 292403 .","title":"EOL Syntax Updates"},{"location":"doc/articles/eol-syntax-updates/#eol-syntax-updates","text":"The following is a brief description of changes to the Epsilon Object Language's syntax in each release.","title":"EOL Syntax Updates"},{"location":"doc/articles/eol-syntax-updates/#22","text":"Tuple type. 
Similar to a Map with String keys, but its properties can be accessed like a regular object (using the . operator). Assign if null operator ?= as a convenient shorthand for a = a ?: b , that is: a = a <> null ? a : b .","title":"2.2"},{"location":"doc/articles/eol-syntax-updates/#21","text":"Elvis operator as a convenient shorthand to use an alternative value if an expression is null. a ?: b is a concise way of writing a <> null ? a else b . Null-safe navigation operator to allow for easy chaining of feature calls without resorting to null checks. For example, null?.getClass()?.getName() will return null without crashing. != can be used as an alias for <> (i.e. \"not equals\").","title":"2.1"},{"location":"doc/articles/eol-syntax-updates/#20","text":"Ternary expressions, which can be used almost anywhere, not just in assignments or returns. Syntax and semantics are identical to Java, but you can also use the else keyword in place of the : if you prefer. Native lambda expressions. You can use first-order operation syntax or JavaScript-style => for invoking functional interfaces. Removed old-style OCL comments ( -* and -- ). -- can be used to decrement integers. Thread-safe collection types: ConcurrentBag , ConcurrentMap and ConcurrentSet .","title":"2.0"},{"location":"doc/articles/eol-syntax-updates/#14","text":"Added support for postfix increment operator (i.e. i++ ) and for composite assignment statements (i.e. a +=1; a -= 2; a *= 3; a /= 4; )","title":"1.4"},{"location":"doc/articles/eol-syntax-updates/#091","text":"Added support for externally defined variables. Support for Map literal expressions (e.g. Map {key1 = value1, k2 = v2} )","title":"0.9.1"},{"location":"doc/articles/eol-syntax-updates/#088","text":"In 0.8.8 we extended the syntax of EOL so that it looks and feels a bit more like Java. As the majority of the Eclipse/EMF audience are Java programmers, this will hopefully make their (and our) lives a bit easier. 
Of course, all these changes also affect all languages built on top of EOL. More specifically, we have introduced: double quotes ( \" \" ) for string literals, backticks (` `) for reserved words, Java-like comments ( // and /**/ ), == as a comparison operator, = as an assignment operator (in 0.8.7) All these changes (except for the double quotes which have now been replaced by ` `) are non-breaking: the old syntax ( '' for strings, = for comparison and := for assignment ) still works. Below is an example demonstrating the new syntax: /* This is a multi line comment */ // This is a single line comment var i = 1; if (i == 1) { \"Hello World\".println(); } i = 2; // Assigns the value 2 to i var `variable with spaces` = 3; `variable with spaces`.println(); // Prints 3 If you have suggestions for further Java-ifications of the EOL syntax, please post your comments to the Epsilon forum or add them to bug 292403 .","title":"0.8.8"},{"location":"doc/articles/epackage-registry-view/","text":"The EMF EPackage Registry View \u00b6 The EPackage registry ( EPackage.Registry.INSTANCE ) contains references to all registered Ecore EPackages in EMF. To visualise the contents of the registry, we have implemented the following EPackage Registry view. Using this view, one can browse through the EClasses contained in each registered EPackage, discover the super/sub types of each EClass, and navigate through its features and operations. The view provides options to show/hide derived features, operations, inherited features and opposite references, supports quick navigation from a feature to its type (double-click), and integrates with the Properties view. To make this view visible go to Window->Show view->Other... and select EPackage Registry under the Epsilon category. The view is populated and refreshed on demand. As such, when it first appears it is empty. 
To populate it with the registered EPackages, you need to click the Refresh button on the top right.","title":"The EMF EPackage Registry View"},{"location":"doc/articles/epackage-registry-view/#the-emf-epackage-registry-view","text":"The EPackage registry ( EPackage.Registry.INSTANCE ) contains references to all registered Ecore EPackages in EMF. To visualise the contents of the registry, we have implemented the following EPackage Registry view. Using this view, one can browse through the EClasses contained in each registered EPackage, discover the super/sub types of each EClass, and navigate through its features and operations. The view provides options to show/hide derived features, operations, inherited features and opposite references, supports quick navigation from a feature to its type (double-click), and integrates with the Properties view. To make this view visible go to Window->Show view->Other... and select EPackage Registry under the Epsilon category. The view is populated and refreshed on demand. As such, when it first appears it is empty. To populate it with the registered EPackages, you need to click the Refresh button on the top right.","title":"The EMF EPackage Registry View"},{"location":"doc/articles/epsilon-1.x/","text":"Working with versions of Epsilon prior to 2.0 \u00b6 In the old days before we embraced advancements in Eclipse provisioning technology (P2), to use Epsilon one needed to download an Eclipse distribution and manually install the pre-requisite plugins and features required to work with Epsilon. Pre-packaged distributions \u00b6 If you wish to use an older version of Epsilon, the easiest and most compatible way is to download one of the ready-made distribution bundles from the archives , since they contain the selected version of Epsilon and all its mandatory and optional dependencies. You will only need a Java Runtime Environment. 
Navigate to the directory with the desired version, and download the archive file appropriate for your platform and unzip it. If you are using Windows, please extract the download close to the root of a drive (e.g. C:) as the maximum path length on Windows may not exceed 255 characters by default. From a Modeling Distribution \u00b6 For a more up-to-date IDE, we recommend that users install the Eclipse Modeling Tools distribution and install Epsilon along with its (optional) dependencies (these are mainly for working with Eugenia) by adding the following list of update sites through Help \u2192 Install New Software... : Epsilon : https://download.eclipse.org/epsilon/updates/1.5 (substitute 1.5 for the desired version) Emfatic : https://download.eclipse.org/emfatic/update GMF Tooling : https://download.eclipse.org/modeling/gmp/gmf-tooling/updates/releases QVTo : https://download.eclipse.org/mmt/qvto/updates/releases/3.9.1","title":"Working with versions of Epsilon prior to 2.0"},{"location":"doc/articles/epsilon-1.x/#working-with-versions-of-epsilon-prior-to-20","text":"In the old days before we embraced advancements in Eclipse provisioning technology (P2), to use Epsilon one needed to download an Eclipse distribution and manually install the pre-requisite plugins and features required to work with Epsilon.","title":"Working with versions of Epsilon prior to 2.0"},{"location":"doc/articles/epsilon-1.x/#pre-packaged-distributions","text":"If you wish to use an older version of Epsilon, the easiest and most compatible way is to download one of the ready-made distribution bundles from the archives , since they contain the selected version of Epsilon and all its mandatory and optional dependencies. You will only need a Java Runtime Environment. Navigate to the directory with the desired version, and download the archive file appropriate for your platform and unzip it. 
If you are using Windows, please extract the download close to the root of a drive (e.g. C:) as the maximum path length on Windows may not exceed 255 characters by default.","title":"Pre-packaged distributions"},{"location":"doc/articles/epsilon-1.x/#from-a-modeling-distribution","text":"For a more up-to-date IDE, we recommend that users install the Eclipse Modeling Tools distribution and install Epsilon along with its (optional) dependencies (these are mainly for working with Eugenia) by adding the following list of update sites through Help \u2192 Install New Software... : Epsilon : https://download.eclipse.org/epsilon/updates/1.5 (substitute 1.5 for the desired version) Emfatic : https://download.eclipse.org/emfatic/update GMF Tooling : https://download.eclipse.org/modeling/gmp/gmf-tooling/updates/releases QVTo : https://download.eclipse.org/mmt/qvto/updates/releases/3.9.1","title":"From a Modeling Distribution"},{"location":"doc/articles/epsilon-emf/","text":"Epsilon and EMF \u00b6 Below are some frequently-asked questions related to querying and modifying EMF-based models with Epsilon. What is the difference between containment and non-containment references in EMF? \u00b6 Briefly, a model element can belong to at most one containment reference at a time. Containment references also demonstrate a cascade-delete behaviour. For example, consider the following Ecore metamodel (captured in Emfatic). package cars; class Person { ref Person[*] friends; //non-containment reference val Car[*] cars; // containment reference } class Car { } Now consider the following EOL code which demonstrates the similarities/differences of containment and non-containment references. 
// Set up a few model elements to play with var c1 = new Car; var c2 = new Car; var p1 = new Person; var p2 = new Person; var p3 = new Person; // p1's car is c1 and p2's car is c2 p1.cars.add(c1); p2.cars.add(c2); // p3 is a friend of both p1 and p2 p1.friends.add(p3); p2.friends.add(p3); p1.friends.println(); // prints {p3} p2.friends.println(); // prints {p3} //add c2 to p1's cars p1.cars.add(c2); p1.cars.println(); // prints {c1, c2} // The following statement prints an empty set! // As discussed above, model elements can belong to at // most 1 containment reference. As such, by adding c2 to // the cars of p1, EMF removes it from the cars of p2 p2.cars.println(); // Delete p1 from the model delete p1; Person.all.println(); // prints {p2, p3} // The following statement prints an empty set! // As discussed above, containment references demonstrate // a cascade-delete behaviour. As such, when we deleted p1, // all the model elements contained in its cars containment reference // were also deleted from the model. Note how the friends of p1 (p2 and p3) // were not deleted from the model, since they were referenced through a // non-containment reference (friends) Car.all.println(); How can I get all children of a model element? \u00b6 Epsilon does not provide a built-in method for this but you can use EObject's eContents() method if you're working with EMF. To get all descendants of an element, something like the following should do the trick: o.asSequence().closure(x | x.eContents()) . See https://www.eclipse.org/forums/index.php/t/855628/ for more details. How can I get the container of a model element? \u00b6 Epsilon does not provide a built-in method for this but you can use EObject's eContainer() method if you're working with EMF. How can I use an existing EMF Resource in Epsilon? \u00b6 To use an existing EMF Resource in your Epsilon program, you should wrap it as an InMemoryEmfModel first. How can I use custom load/save options for my EMF model? 
\u00b6 You need to un-tick the \"Read on load\"/\"Store on disposal\" options in your model configuration dialog and use the underlying EMF resource's load/save methods directly from your EOL code. For example, to turn off the OPTION_DEFER_IDREF_RESOLUTION option, which is on by default in Epsilon's EMF driver and has been reported to slow down loading of models that use \"id\" attributes , you can use the following EOL statement. M.resource.load(Map{\"DEFER_IDREF_RESOLUTION\" = false});","title":"Epsilon and EMF"},{"location":"doc/articles/epsilon-emf/#epsilon-and-emf","text":"Below are some frequently-asked questions related to querying and modifying EMF-based models with Epsilon.","title":"Epsilon and EMF"},{"location":"doc/articles/epsilon-emf/#what-is-the-difference-between-containment-and-non-containment-references-in-emf","text":"Briefly, a model element can belong to at most one containment reference at a time. Containment references also demonstrate a cascade-delete behaviour. For example, consider the following Ecore metamodel (captured in Emfatic). package cars; class Person { ref Person[*] friends; //non-containment reference val Car[*] cars; // containment reference } class Car { } Now consider the following EOL code which demonstrates the similarities/differences of containment and non-containment references. // Set up a few model elements to play with var c1 = new Car; var c2 = new Car; var p1 = new Person; var p2 = new Person; var p3 = new Person; // p1's car is c1 and p2's car is c2 p1.cars.add(c1); p2.cars.add(c2); // p3 is a friend of both p1 and p2 p1.friends.add(p3); p2.friends.add(p3); p1.friends.println(); // prints {p3} p2.friends.println(); // prints {p3} //add c2 to p1's cars p1.cars.add(c2); p1.cars.println(); // prints {c1, c2} // The following statement prints an empty set! // As discussed above, model elements can belong to at // most 1 containment reference. 
As such, by adding c2 to // the cars of p1, EMF removes it from the cars of p2 p2.cars.println(); // Delete p1 from the model delete p1; Person.all.println(); // prints {p2, p3} // The following statement prints an empty set! // As discussed above, containment references demonstrate // a cascade-delete behaviour. As such, when we deleted p1, // all the model elements contained in its cars containment reference // were also deleted from the model. Note how the friends of p1 (p2 and p3) // were not deleted from the model, since they were referenced through a // non-containment reference (friends) Car.all.println();","title":"What is the difference between containment and non-containment references in EMF?"},{"location":"doc/articles/epsilon-emf/#how-can-i-get-all-children-of-a-model-element","text":"Epsilon does not provide a built-in method for this but you can use EObject's eContents() method if you're working with EMF. To get all descendants of an element, something like the following should do the trick: o.asSequence().closure(x | x.eContents()) . 
See https://www.eclipse.org/forums/index.php/t/855628/ for more details.","title":"How can I get all children of a model element?"},{"location":"doc/articles/epsilon-emf/#how-can-i-get-the-container-of-a-model-element","text":"Epsilon does not provide a built-in method for this but you can use EObject's eContainer() method if you're working with EMF.","title":"How can I get the container of a model element?"},{"location":"doc/articles/epsilon-emf/#how-can-i-use-an-existing-emf-resource-in-epsilon","text":"To use an existing EMF Resource in your Epsilon program, you should wrap it as an InMemoryEmfModel first.","title":"How can I use an existing EMF Resource in Epsilon?"},{"location":"doc/articles/epsilon-emf/#how-can-i-use-custom-loadsave-options-for-my-emf-model","text":"You need to un-tick the \"Read on load\"/\"Store on disposal\" options in your model configuration dialog and use the underlying EMF resource's load/save methods directly from your EOL code. For example, to turn off the OPTION_DEFER_IDREF_RESOLUTION option, which is on by default in Epsilon's EMF driver and has been reported to slow down loading of models that use \"id\" attributes , you can use the following EOL statement. 
M.resource.load(Map{\"DEFER_IDREF_RESOLUTION\" = false});","title":"How can I use custom load/save options for my EMF model?"},{"location":"doc/articles/eugenia-affixed-nodes/","text":"Eugenia: Affixed Nodes in GMF \u00b6 From the following annotated Ecore metamodel (in Emfatic) @namespace(uri=\"components\", prefix=\"components\") package components; @gmf.diagram class ComponentDiagram { val Component[*] components; val Connector[*] connectors; } abstract class NamedElement { attr String name; } @gmf.node(label=\"name\") class Component extends NamedElement { @gmf.affixed val Port[*] ports; } @gmf.node(figure=\"rectangle\", size=\"20,20\", label=\"name\", label.placement=\"external\", label.icon=\"false\") class Port extends NamedElement { } @gmf.link(source=\"source\", target=\"target\", label=\"name\", target.decoration=\"arrow\") class Connector extends NamedElement { ref Port source; ref Port target; } Eugenia can automatically generate this GMF editor:","title":"Eugenia: Affixed Nodes in GMF"},{"location":"doc/articles/eugenia-affixed-nodes/#eugenia-affixed-nodes-in-gmf","text":"From the following annotated Ecore metamodel (in Emfatic) @namespace(uri=\"components\", prefix=\"components\") package components; @gmf.diagram class ComponentDiagram { val Component[*] components; val Connector[*] connectors; } abstract class NamedElement { attr String name; } @gmf.node(label=\"name\") class Component extends NamedElement { @gmf.affixed val Port[*] ports; } @gmf.node(figure=\"rectangle\", size=\"20,20\", label=\"name\", label.placement=\"external\", label.icon=\"false\") class Port extends NamedElement { } @gmf.link(source=\"source\", target=\"target\", label=\"name\", target.decoration=\"arrow\") class Connector extends NamedElement { ref Port source; ref Port target; } Eugenia can automatically generate this GMF editor:","title":"Eugenia: Affixed Nodes in GMF"},{"location":"doc/articles/eugenia-ant/","text":"Eugenia: Automated Invocation with Ant \u00b6 Eugenia 
can be called from Ant, using the <epsilon.eugenia> Ant task. This way, the creation of the GMF editors can be easily automated by using a standard Ant Builder. Additionally, the Ant task has several features which are not currently available through the regular graphical user interface. In this article, we will show how to invoke the Eugenia Ant task and offer some recommendations on how to adopt it. Basic usage \u00b6 The Eugenia Ant task only requires specifying the source Emfatic description or Ecore model through the src attribute: <!-- Generate myfile.ecore from myfile.emf and then proceed --> <epsilon.eugenia src= \"myfile.emf\" /> <!-- Start directly from the Ecore model --> <epsilon.eugenia src= \"myfile.ecore\" /> Warning The Eugenia Ant task requires that the Ant buildfile is run from the same JRE as the workspace. Please check the Workflow documentation for instructions on how to do it. Limiting the steps to be run \u00b6 Normally, Eugenia runs all these steps: Clean the models from the previous run (the clean step) If src is an Emfatic source file (with the .emf extension), generate the Ecore model from it ( ecore ) Generate the EMF GenModel from the Ecore model and polish it with Ecore2GenModel.eol if available ( genmodel ) Generate the GmfGraph, GmfTool and GmfMap models and polish them with Ecore2GMF.eol if available ( gmf ) Generate the GmfGen model and polish it with FixGMFGen.eol if available ( gmfgen ) Generate the EMF code from the EMF GenModel model ( emfcode ) Generate the GMF code from the GMFGen model ( gmfcode ) Optionally, the Ant task can run only some of these steps. The firstStep attribute can be used to specify the first step to be run, and the lastStep can be used to specify the last step to be run. 
For example: <!-- Skips the clean, ecore and genmodel steps --> <epsilon.eugenia src= \"myfile.ecore\" firstStep= \"gmf\" /> <!-- Does not run the emfcode and gmfcode steps --> <epsilon.eugenia src= \"myfile.emf\" lastStep= \"gmfgen\" /> <!-- Only runs the gmf and gmfgen steps --> <epsilon.eugenia src= \"myfile.ecore\" firstStep= \"gmf\" lastStep= \"gmfgen\" /> Using extra models for polishing \u00b6 Additional models to be used in a polishing transformation can be specified through the <model> nested element. <model> has three attributes: ref (mandatory) is the name with which the model was loaded into the model repository of the Ant project, using the Epsilon model loading Ant tasks. as (optional) is the name to be used for the model inside the polishing transformation. step (mandatory) is the identifier of the Eugenia step to which we will add the model reference. As an example, consider the following fragment: <epsilon.emf.loadModel name= \"Labels\" modelfile= \"my.model\" metamodeluri= \"mymetamodelURI\" read= \"true\" store= \"false\" /> <epsilon.eugenia src= \"myfile.emf\" > <model ref= \"Labels\" step= \"gmf\" /> </epsilon.eugenia> This example will make the Labels model available to the Ecore2GMF.eol polishing transformation, which is part of the gmf step.","title":"Eugenia: Automated Invocation with Ant"},{"location":"doc/articles/eugenia-ant/#eugenia-automated-invocation-with-ant","text":"Eugenia can be called from Ant, using the <epsilon.eugenia> Ant task. This way, the creation of the GMF editors can be easily automated by using a standard Ant Builder. Additionally, the Ant task has several features which are not currently available through the regular graphical user interface. 
In this article, we will show how to invoke the Eugenia Ant task and offer some recommendations on how to adopt it.","title":"Eugenia: Automated Invocation with Ant"},{"location":"doc/articles/eugenia-ant/#basic-usage","text":"The Eugenia Ant task only requires specifying the source Emfatic description or Ecore model through the src attribute: <!-- Generate myfile.ecore from myfile.emf and then proceed --> <epsilon.eugenia src= \"myfile.emf\" /> <!-- Start directly from the Ecore model --> <epsilon.eugenia src= \"myfile.ecore\" /> Warning The Eugenia Ant task requires that the Ant buildfile is run from the same JRE as the workspace. Please check the Workflow documentation for instructions on how to do it.","title":"Basic usage"},{"location":"doc/articles/eugenia-ant/#limiting-the-steps-to-be-run","text":"Normally, Eugenia runs all these steps: Clean the models from the previous run (the clean step) If src is an Emfatic source file (with the .emf extension), generate the Ecore model from it ( ecore ) Generate the EMF GenModel from the Ecore model and polish it with Ecore2GenModel.eol if available ( genmodel ) Generate the GmfGraph, GmfTool and GmfMap models and polish them with Ecore2GMF.eol if available ( gmf ) Generate the GmfGen model and polish it with FixGMFGen.eol if available ( gmfgen ) Generate the EMF code from the EMF GenModel model ( emfcode ) Generate the GMF code from the GMFGen model ( gmfcode ) Optionally, the Ant task can run only some of these steps. The firstStep attribute can be used to specify the first step to be run, and the lastStep can be used to specify the last step to be run. 
For example: <!-- Skips the clean, ecore and genmodel steps --> <epsilon.eugenia src= \"myfile.ecore\" firstStep= \"gmf\" /> <!-- Does not run the emfcode and gmfcode steps --> <epsilon.eugenia src= \"myfile.emf\" lastStep= \"gmfgen\" /> <!-- Only runs the gmf and gmfgen steps --> <epsilon.eugenia src= \"myfile.ecore\" firstStep= \"gmf\" lastStep= \"gmfgen\" />","title":"Limiting the steps to be run"},{"location":"doc/articles/eugenia-ant/#using-extra-models-for-polishing","text":"Additional models to be used in a polishing transformation can be specified through the <model> nested element. <model> has three attributes: ref (mandatory) is the name with which the model was loaded into the model repository of the Ant project, using the Epsilon model loading Ant tasks. as (optional) is the name to be used for the model inside the polishing transformation. step (mandatory) is the identifier of the Eugenia step to which we will add the model reference. As an example, consider the following fragment: <epsilon.emf.loadModel name= \"Labels\" modelfile= \"my.model\" metamodeluri= \"mymetamodelURI\" read= \"true\" store= \"false\" /> <epsilon.eugenia src= \"myfile.emf\" > <model ref= \"Labels\" step= \"gmf\" /> </epsilon.eugenia> This example will make the Labels model available to the Ecore2GMF.eol polishing transformation, which is part of the gmf step.","title":"Using extra models for polishing"},{"location":"doc/articles/eugenia-nodes-with-centred-layout/","text":"Eugenia: Nodes with centred content \u00b6 This recipe shows how to create nodes in your GMF editor whose contents are centred both horizontally and vertically. 
The resulting editor will produce nodes like this: We'll start with the following metamodel and Eugenia annotations: @namespace(uri=\"www.eclipse.org/epsilon/examples/widgets\", prefix=\"w\") package widgets; @gmf.diagram class System { val Widget[*] widgets; } @gmf.node(label=\"name\", label.icon=\"false\") class Widget { attr String[1] name; } In this case, we only have one child node (the label for the node). We need to add a polishing transformation to our project (described in more detail in this article ) to use a grid layout and specify the appropriate layout data for the label. In a file named Ecore2GMF.eol, place the following code: var shape = findShape('WidgetFigure'); shape.layout = new GmfGraph!GridLayout; var label = shape.children.first; label.layoutData = new GmfGraph!GridLayoutData; label.layoutData.grabExcessVerticalSpace = true; label.layoutData.grabExcessHorizontalSpace = true; operation findShape(name : String) { return GmfGraph!Shape.all.selectOne(s|s.name = name); } If we have multiple child nodes, we may want to use a custom layout manager instead to achieve the centring. The polishing transformation will have to add the custom layout to our widget figure, and the Ecore2GMF.eol file will now look like this: findShape('WidgetFigure').layout = createCentredLayout(); operation findShape(name : String) { return GmfGraph!Shape.all.selectOne(s|s.name = name); } operation createCentredLayout() : GmfGraph!CustomLayout { var layout = new GmfGraph!CustomLayout; layout.qualifiedClassName = 'widgets.custom.layouts.CentredLayout'; return layout; } Notice that the layout specifies a qualified class name of widgets.custom.layouts.CentredLayout . We must create a class with that name, which implements the LayoutManager of draw2d. We'll use this exemplar implementation of widgets.custom.layouts.CentredLayout and place it in a widgets.custom plug-in project. 
We must add a dependency for the widgets.custom plugin project to the widgets.diagram project generated by GMF. For more details, please check the org.eclipse.epsilon.eugenia.examples.centred example projects at the Epsilon Git repository.","title":"Eugenia: Nodes with centred content"},{"location":"doc/articles/eugenia-nodes-with-centred-layout/#eugenia-nodes-with-centred-content","text":"This recipe shows how to create nodes in your GMF editor whose contents are centred both horizontally and vertically. The resulting editor will produce nodes like this: We'll start with the following metamodel and Eugenia annotations: @namespace(uri=\"www.eclipse.org/epsilon/examples/widgets\", prefix=\"w\") package widgets; @gmf.diagram class System { val Widget[*] widgets; } @gmf.node(label=\"name\", label.icon=\"false\") class Widget { attr String[1] name; } In this case, we only have one child node (the label for the node). We need to add a polishing transformation to our project (described in more detail in this article ) to use a grid layout and specify the appropriate layout data for the label. In a file named Ecore2GMF.eol, place the following code: var shape = findShape('WidgetFigure'); shape.layout = new GmfGraph!GridLayout; var label = shape.children.first; label.layoutData = new GmfGraph!GridLayoutData; label.layoutData.grabExcessVerticalSpace = true; label.layoutData.grabExcessHorizontalSpace = true; operation findShape(name : String) { return GmfGraph!Shape.all.selectOne(s|s.name = name); } If we have multiple child nodes, we may want to use a custom layout manager instead to achieve the centring. 
The polishing transformation will have to add the custom layout to our widget figure, and the Ecore2GMF.eol file will now look like this: findShape('WidgetFigure').layout = createCentredLayout(); operation findShape(name : String) { return GmfGraph!Shape.all.selectOne(s|s.name = name); } operation createCentredLayout() : GmfGraph!CustomLayout { var layout = new GmfGraph!CustomLayout; layout.qualifiedClassName = 'widgets.custom.layouts.CentredLayout'; return layout; } Notice that the layout specifies a qualified class name of widgets.custom.layouts.CentredLayout . We must create a class with that name, which implements the LayoutManager of draw2d. We'll use this exemplar implementation of widgets.custom.layouts.CentredLayout and place it in a widgets.custom plug-in project. We must add a dependency for the widgets.custom plugin project to the widgets.diagram project generated by GMF. For more details, please check the org.eclipse.epsilon.eugenia.examples.centred example projects at the Epsilon Git repository.","title":"Eugenia: Nodes with centred content"},{"location":"doc/articles/eugenia-nodes-with-images/","text":"Eugenia: Nodes with images instead of shapes \u00b6 This recipe shows how to create nodes in your GMF editor that are represented with images (png, jpg etc.) instead of the standard GMF shapes (rectangle, ellipse etc.). 
We'll use the simple friends metamodel as demonstration: @namespace(uri=\"friends\", prefix=\"\") package friends; @gmf.diagram class World { val Person[*] people; } @gmf.node(figure=\"figures.PersonFigure\", label.icon=\"false\", label=\"name\", label.placement=\"external\") class Person { attr String name; @gmf.link(width=\"2\", color=\"0,255,0\", source.decoration=\"arrow\", target.decoration=\"arrow\", style=\"dash\") ref Person[*] friendOf; @gmf.link(width=\"2\", color=\"255,0,0\", source.decoration=\"arrow\", target.decoration=\"arrow\", style=\"dash\") ref Person[*] enemyOf; } We define a custom figure for Person ( figure=\"figures.PersonFigure\" ) and also specify that the label should be placed externally to the node ( label.placement=\"external\" ). Once we have generated our diagram code we need to go and define the figures.PersonFigure class. An example of a png image-based implementation is available below: package figures ; import org.eclipse.draw2d.ImageFigure ; import activator.PluginActivator ; /** * @generated */ public class PersonFigure extends ImageFigure { public PersonFigure () { super ( PluginActivator . imageDescriptorFromPlugin ( PluginActivator . ID , \"images/Person.png\" ). createImage (), 0 ); } } The PluginActivator extends AbstractUIPlugin, which provides methods for loading images from within our plug-in: package activator ; import org.eclipse.core.runtime.Plugin ; import org.eclipse.ui.plugin.AbstractUIPlugin ; import org.osgi.framework.BundleContext ; public class PluginActivator extends AbstractUIPlugin { public static final String ID = \"friends.figures\" ; //$NON-NLS-1$ private static PluginActivator ourInstance ; public PluginActivator () {} public void start ( BundleContext context ) throws Exception { super . start ( context ); ourInstance = this ; } public void stop ( BundleContext context ) throws Exception { ourInstance = null ; super .
stop ( context ); } public static PluginActivator getDefault () { return ourInstance ; } } The result looks like this: For more details, please check the full example .","title":"Eugenia: Nodes with images instead of shapes"},{"location":"doc/articles/eugenia-nodes-with-images/#eugenia-nodes-with-images-instead-of-shapes","text":"This recipe shows how to create nodes in your GMF editor that are represented with images (png, jpg etc.) instead of the standard GMF shapes (rectangle, ellipse etc.). We'll use the simple friends metamodel as demonstration: @namespace(uri=\"friends\", prefix=\"\") package friends; @gmf.diagram class World { val Person[*] people; } @gmf.node(figure=\"figures.PersonFigure\", label.icon=\"false\", label=\"name\", label.placement=\"external\") class Person { attr String name; @gmf.link(width=\"2\", color=\"0,255,0\", source.decoration=\"arrow\", target.decoration=\"arrow\", style=\"dash\") ref Person[*] friendOf; @gmf.link(width=\"2\", color=\"255,0,0\", source.decoration=\"arrow\", target.decoration=\"arrow\", style=\"dash\") ref Person[*] enemyOf; } We define a custom figure for Person ( figure=\"figures.PersonFigure\" ) and also specify that the label should be placed externally to the node ( label.placement=\"external\" ). Once we have generated our diagram code we need to go and define the figures.PersonFigure class. An example of a png image-based implementation is available below: package figures ; import org.eclipse.draw2d.ImageFigure ; import activator.PluginActivator ; /** * @generated */ public class PersonFigure extends ImageFigure { public PersonFigure () { super ( PluginActivator . imageDescriptorFromPlugin ( PluginActivator . ID , \"images/Person.png\" ).
createImage (), 0 ); } } The PluginActivator extends AbstractUIPlugin, which provides methods for loading images from within our plug-in: package activator ; import org.eclipse.core.runtime.Plugin ; import org.eclipse.ui.plugin.AbstractUIPlugin ; import org.osgi.framework.BundleContext ; public class PluginActivator extends AbstractUIPlugin { public static final String ID = \"friends.figures\" ; //$NON-NLS-1$ private static PluginActivator ourInstance ; public PluginActivator () {} public void start ( BundleContext context ) throws Exception { super . start ( context ); ourInstance = this ; } public void stop ( BundleContext context ) throws Exception { ourInstance = null ; super . stop ( context ); } public static PluginActivator getDefault () { return ourInstance ; } } The result looks like this: For more details, please check the full example .","title":"Eugenia: Nodes with images instead of shapes"},{"location":"doc/articles/eugenia-nodes-with-runtime-images/","text":"Eugenia: Nodes with images defined at run-time \u00b6 This recipe addresses the case where the end-user needs to set an image for each node at runtime (based on Thomas Beyer's solution presented in the GMF newsgroup). For our example, we'll use the Component class. Create an attribute to store the image path \u00b6 First we need to create an imagePath attribute that will store the path of the image for each component. Set the figure of Component to a custom ComponentFigure \u00b6 The next step is to set the figure of Component in Eugenia to a custom figure. After those two steps, our definition of Component looks like this: @gmf.node(label=\"name\", figure=\"ccdl.diagram.figures.ComponentFigure\", label.placement=\"external\") class Component { attr String name; attr String imagePath; } Once we generate the diagram code, we'll get an error because ComponentFigure has not been found. 
We need to create the ComponentFigure class and set its code to the following: import java.io.File ; import java.util.HashMap ; import org.eclipse.core.resources.IFile ; import org.eclipse.core.resources.ResourcesPlugin ; import org.eclipse.core.runtime.FileLocator ; import org.eclipse.core.runtime.Path ; import org.eclipse.core.runtime.Platform ; import org.eclipse.draw2d.ImageFigure ; import org.eclipse.jface.resource.ImageDescriptor ; import org.eclipse.swt.graphics.Image ; import ccdl.diagram.part.CcdlDiagramEditorPlugin ; public class ComponentFigure extends ImageFigure { static Image unspecified = null ; public ComponentFigure () { if ( unspecified == null ) { unspecified = ImageDescriptor . createFromURL ( FileLocator . find ( Platform . getBundle ( CcdlDiagramEditorPlugin . ID ), new Path ( \"icons/ComponentDefault.png\" ), new HashMap ())) . createImage (); } } public static Image createImage ( String imagePath ) { try { IFile res =( IFile ) ResourcesPlugin . getWorkspace (). getRoot (). findMember ( new Path ( imagePath )); File file = new File ( res . getRawLocation (). toOSString ()); return ImageDescriptor . createFromURL ( file . toURI (). toURL ()). createImage (); } catch ( Exception ex ) { return unspecified ; } } public void setImagePath ( String imagePath ) { try { if ( getImage ()!= null && getImage () != unspecified ) { getImage (). dispose (); } this . setImage ( createImage ( imagePath )); } catch ( Exception ex ) { ex . printStackTrace (); } } } Create the image path property descriptor \u00b6 The next step is to create the property descriptor for the image path so that we can eventually get a nice browse button in the properties view. To do this we need to create a new class named ComponentImagePathPropertyDescriptor . 
import org.eclipse.emf.ecore.EAttribute ; import org.eclipse.emf.edit.provider.IItemPropertyDescriptor ; import org.eclipse.gmf.runtime.emf.ui.properties.descriptors.EMFCompositeSourcePropertyDescriptor ; import org.eclipse.jface.viewers.CellEditor ; import org.eclipse.swt.widgets.Composite ; public class ComponentImagePathPropertyDescriptor extends EMFCompositeSourcePropertyDescriptor { public ComponentImagePathPropertyDescriptor ( Object object , IItemPropertyDescriptor itemPropertyDescriptor , String category ) { super ( object , itemPropertyDescriptor , category ); } protected CellEditor doCreateEditor ( Composite composite ) { try { if ((( EAttribute ) getFeature ()). getName (). equals ( \"imagePath\" )) { return new ComponentImagePathCellEditor ( composite ); } } catch ( Exception ex ){} return super . doCreateEditor ( composite ); } } Create the image path property cell editor \u00b6 import org.eclipse.core.resources.IFile ; import org.eclipse.core.resources.IResource ; import org.eclipse.core.resources.ResourcesPlugin ; import org.eclipse.jface.viewers.DialogCellEditor ; import org.eclipse.jface.window.Window ; import org.eclipse.swt.widgets.Composite ; import org.eclipse.swt.widgets.Control ; import org.eclipse.ui.dialogs.ResourceListSelectionDialog ; public class ComponentImagePathCellEditor extends DialogCellEditor { public ComponentImagePathCellEditor ( Composite parent ) { super ( parent ); } protected Object openDialogBox ( Control cellEditorWindow ) { ResourceListSelectionDialog elementSelector = new ResourceListSelectionDialog ( cellEditorWindow . getShell (), ResourcesPlugin . getWorkspace (). getRoot (), IResource . DEPTH_INFINITE | IResource . FILE ); elementSelector . setTitle ( \"Image\" ); elementSelector . setMessage ( \"Please select an image\" ); elementSelector . open (); if ( elementSelector . getReturnCode () == Window . OK ){ IFile f = ( IFile ) elementSelector . getResult ()[ 0 ]; return f . getFullPath (). 
toString (); } else { return null ; } } } Update the XXXPropertySection under xxx.diagram.sheet \u00b6 Update the getPropertySource method as follows: public IPropertySource getPropertySource ( Object object ) { if ( object instanceof IPropertySource ) { return ( IPropertySource ) object ; } AdapterFactory af = getAdapterFactory ( object ); if ( af != null ) { IItemPropertySource ips = ( IItemPropertySource ) af . adapt ( object , IItemPropertySource . class ); if ( ips != null ) { if ( object instanceof Component ) { return new PropertySource ( object , ips ) { protected IPropertyDescriptor createPropertyDescriptor ( IItemPropertyDescriptor itemPropertyDescriptor ) { EStructuralFeature feature = ( EStructuralFeature ) itemPropertyDescriptor . getFeature ( object ); if ( feature . getName (). equalsIgnoreCase ( \"imagePath\" )) { return new ComponentImagePathPropertyDescriptor ( object , itemPropertyDescriptor , \"Misc\" ); } else { return new EMFCompositeSourcePropertyDescriptor ( object , itemPropertyDescriptor , \"Misc\" ); } } }; } //return new PropertySource(object, ips); return new EMFCompositePropertySource ( object , ips , \"Misc\" ); } } if ( object instanceof IAdaptable ) { return ( IPropertySource ) (( IAdaptable ) object ) . getAdapter ( IPropertySource . class ); } return null ; } Modify the edit part \u00b6 Modify the handleNotificationEvent method so that the figure is updated every time the value of imagePath changes protected void handleNotificationEvent ( Notification event ) { if ( event . getNotifier () == getModel () && EcorePackage . eINSTANCE . getEModelElement_EAnnotations () . equals ( event . getFeature ())) { handleMajorSemanticChange (); } else { if ( event . getFeature () instanceof EAttribute ) { EAttribute eAttribute = ( EAttribute ) event . getFeature (); if ( eAttribute . getName (). equalsIgnoreCase ( \"imagePath\" )) { ComponentFigure figure = ( ComponentFigure ) this . getPrimaryShape (); figure . setImagePath ( event . 
getNewStringValue ()); } } super . handleNotificationEvent ( event ); } } Modify the createNodeShape method so that the figure is initialized from the existing imagePath the first time. protected IFigure createNodeShape () { primaryShape = new ComponentFigure (); Component component = ( Component ) (( Node ) getNotationView ()). getElement (); (( ComponentFigure ) primaryShape ). setImagePath ( component . getImagePath ()); return primaryShape ; }","title":"Eugenia: Nodes with images defined at run-time"},{"location":"doc/articles/eugenia-nodes-with-runtime-images/#eugenia-nodes-with-images-defined-at-run-time","text":"This recipe addresses the case where the end-user needs to set an image for each node at runtime (based on Thomas Beyer's solution presented in the GMF newsgroup). For our example, we'll use the Component class.","title":"Eugenia: Nodes with images defined at run-time"},{"location":"doc/articles/eugenia-nodes-with-runtime-images/#create-an-attribute-to-store-the-image-path","text":"First we need to create an imagePath attribute that will store the path of the image for each component.","title":"Create an attribute to store the image path"},{"location":"doc/articles/eugenia-nodes-with-runtime-images/#set-the-figure-of-component-to-a-custom-componentfigure","text":"The next step is to set the figure of Component in Eugenia to a custom figure. After those two steps, our definition of Component looks like this: @gmf.node(label=\"name\", figure=\"ccdl.diagram.figures.ComponentFigure\", label.placement=\"external\") class Component { attr String name; attr String imagePath; } Once we generate the diagram code, we'll get an error because ComponentFigure has not been found. 
We need to create the ComponentFigure class and set its code to the following: import java.io.File ; import java.util.HashMap ; import org.eclipse.core.resources.IFile ; import org.eclipse.core.resources.ResourcesPlugin ; import org.eclipse.core.runtime.FileLocator ; import org.eclipse.core.runtime.Path ; import org.eclipse.core.runtime.Platform ; import org.eclipse.draw2d.ImageFigure ; import org.eclipse.jface.resource.ImageDescriptor ; import org.eclipse.swt.graphics.Image ; import ccdl.diagram.part.CcdlDiagramEditorPlugin ; public class ComponentFigure extends ImageFigure { static Image unspecified = null ; public ComponentFigure () { if ( unspecified == null ) { unspecified = ImageDescriptor . createFromURL ( FileLocator . find ( Platform . getBundle ( CcdlDiagramEditorPlugin . ID ), new Path ( \"icons/ComponentDefault.png\" ), new HashMap ())) . createImage (); } } public static Image createImage ( String imagePath ) { try { IFile res =( IFile ) ResourcesPlugin . getWorkspace (). getRoot (). findMember ( new Path ( imagePath )); File file = new File ( res . getRawLocation (). toOSString ()); return ImageDescriptor . createFromURL ( file . toURI (). toURL ()). createImage (); } catch ( Exception ex ) { return unspecified ; } } public void setImagePath ( String imagePath ) { try { if ( getImage ()!= null && getImage () != unspecified ) { getImage (). dispose (); } this . setImage ( createImage ( imagePath )); } catch ( Exception ex ) { ex . printStackTrace (); } } }","title":"Set the figure of Component to a custom ComponentFigure"},{"location":"doc/articles/eugenia-nodes-with-runtime-images/#create-the-image-path-property-descriptor","text":"The next step is to create the property descriptor for the image path so that we can eventually get a nice browse button in the properties view. To do this we need to create a new class named ComponentImagePathPropertyDescriptor . 
import org.eclipse.emf.ecore.EAttribute ; import org.eclipse.emf.edit.provider.IItemPropertyDescriptor ; import org.eclipse.gmf.runtime.emf.ui.properties.descriptors.EMFCompositeSourcePropertyDescriptor ; import org.eclipse.jface.viewers.CellEditor ; import org.eclipse.swt.widgets.Composite ; public class ComponentImagePathPropertyDescriptor extends EMFCompositeSourcePropertyDescriptor { public ComponentImagePathPropertyDescriptor ( Object object , IItemPropertyDescriptor itemPropertyDescriptor , String category ) { super ( object , itemPropertyDescriptor , category ); } protected CellEditor doCreateEditor ( Composite composite ) { try { if ((( EAttribute ) getFeature ()). getName (). equals ( \"imagePath\" )) { return new ComponentImagePathCellEditor ( composite ); } } catch ( Exception ex ){} return super . doCreateEditor ( composite ); } }","title":"Create the image path property descriptor"},{"location":"doc/articles/eugenia-nodes-with-runtime-images/#create-the-image-path-property-cell-editor","text":"import org.eclipse.core.resources.IFile ; import org.eclipse.core.resources.IResource ; import org.eclipse.core.resources.ResourcesPlugin ; import org.eclipse.jface.viewers.DialogCellEditor ; import org.eclipse.jface.window.Window ; import org.eclipse.swt.widgets.Composite ; import org.eclipse.swt.widgets.Control ; import org.eclipse.ui.dialogs.ResourceListSelectionDialog ; public class ComponentImagePathCellEditor extends DialogCellEditor { public ComponentImagePathCellEditor ( Composite parent ) { super ( parent ); } protected Object openDialogBox ( Control cellEditorWindow ) { ResourceListSelectionDialog elementSelector = new ResourceListSelectionDialog ( cellEditorWindow . getShell (), ResourcesPlugin . getWorkspace (). getRoot (), IResource . DEPTH_INFINITE | IResource . FILE ); elementSelector . setTitle ( \"Image\" ); elementSelector . setMessage ( \"Please select an image\" ); elementSelector . open (); if ( elementSelector . getReturnCode () == Window . 
OK ){ IFile f = ( IFile ) elementSelector . getResult ()[ 0 ]; return f . getFullPath (). toString (); } else { return null ; } } }","title":"Create the image path property cell editor"},{"location":"doc/articles/eugenia-nodes-with-runtime-images/#update-the-xxxpropertysection-under-xxxdiagramsheet","text":"Update the getPropertySource method as follows: public IPropertySource getPropertySource ( Object object ) { if ( object instanceof IPropertySource ) { return ( IPropertySource ) object ; } AdapterFactory af = getAdapterFactory ( object ); if ( af != null ) { IItemPropertySource ips = ( IItemPropertySource ) af . adapt ( object , IItemPropertySource . class ); if ( ips != null ) { if ( object instanceof Component ) { return new PropertySource ( object , ips ) { protected IPropertyDescriptor createPropertyDescriptor ( IItemPropertyDescriptor itemPropertyDescriptor ) { EStructuralFeature feature = ( EStructuralFeature ) itemPropertyDescriptor . getFeature ( object ); if ( feature . getName (). equalsIgnoreCase ( \"imagePath\" )) { return new ComponentImagePathPropertyDescriptor ( object , itemPropertyDescriptor , \"Misc\" ); } else { return new EMFCompositeSourcePropertyDescriptor ( object , itemPropertyDescriptor , \"Misc\" ); } } }; } //return new PropertySource(object, ips); return new EMFCompositePropertySource ( object , ips , \"Misc\" ); } } if ( object instanceof IAdaptable ) { return ( IPropertySource ) (( IAdaptable ) object ) . getAdapter ( IPropertySource . class ); } return null ; }","title":"Update the XXXPropertySection under xxx.diagram.sheet"},{"location":"doc/articles/eugenia-nodes-with-runtime-images/#modify-the-edit-part","text":"Modify the handleNotificationEvent method so that the figure is updated every time the value of imagePath changes protected void handleNotificationEvent ( Notification event ) { if ( event . getNotifier () == getModel () && EcorePackage . eINSTANCE . getEModelElement_EAnnotations () . equals ( event . 
getFeature ())) { handleMajorSemanticChange (); } else { if ( event . getFeature () instanceof EAttribute ) { EAttribute eAttribute = ( EAttribute ) event . getFeature (); if ( eAttribute . getName (). equalsIgnoreCase ( \"imagePath\" )) { ComponentFigure figure = ( ComponentFigure ) this . getPrimaryShape (); figure . setImagePath ( event . getNewStringValue ()); } } super . handleNotificationEvent ( event ); } } Modify the createNodeShape method so that the figure is initialized from the existing imagePath the first time. protected IFigure createNodeShape () { primaryShape = new ComponentFigure (); Component component = ( Component ) (( Node ) getNotationView ()). getElement (); (( ComponentFigure ) primaryShape ). setImagePath ( component . getImagePath ()); return primaryShape ; }","title":"Modify the edit part"},{"location":"doc/articles/eugenia-patching/","text":"Customizing the Java source code generated by Eugenia \u00b6 Occasionally, the Java source code generated by GMF to implement your graphical editor is not quite what you want, and it's not possible to polish the GMF models to incorporate your desired changes. Essentially, you'd like to change the code generation templates used by GMF. In this situation, you have two options. The first option is to use GMF dynamic templates , which requires some knowledge of Xpand (the code generation language used by GMF) and can often involve hunting around in the GMF code generator for the right place to make your changes. Alternatively, you can use Eugenia's patch generation and application functionality (described below). Running example \u00b6 The remainder of this article demonstrates how to customize the source code for a generated GMF editor to change the size of the margins used for external labels. 
As shown below, the patched version of the GMF editor positions labels closer to their nodes: Note that the models used by GMF to generate our editor don't provide a way to control the size of the margins, so we can't use a polishing transformation. Automatically patching the source code of a generated GMF editor \u00b6 After generating the GMF code for your editor, Eugenia will search for a patches directory in the same project as your Emfatic source. If the patches directory is found, Eugenia will apply to your workspace any .patch file found in that directory. Creating and applying patches with Eugenia \u00b6 Create .patch files using Eclipse's Team functionality: Make your desired changes to the generated Java source code by hand. Right-click the project containing your changes, and select Team\u2192Create Patch... Select Clipboard and click Finish Create a patches directory in the project containing your Emfatic source. Create a new file (e.g. patches/MyChanges.patch ), paste your patch into the new file and save it. The next time that you run EuGEnia, your .patch file will be automatically applied to the generated Java source code. You can also apply or remove all of your patches by right-clicking your patches directory and selecting Eugenia\u2192Apply patches or Eugenia\u2192Remove applied patches. In our running example, we devise the patch below to fix the margins of externally placed labels for the State model element type. 
We save the patch into patches/FixExternalLabelMarginsForState.patch diff --git org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/StateEditPart.java org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/StateEditPart.java index d0684d6..f162365 100644 --- org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/StateEditPart.java +++ org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/StateEditPart.java @@ -143,7 +143,7 @@ if (borderItemEditPart instanceof StateNameEditPart) { BorderItemLocator locator = new BorderItemLocator(getMainFigure(), PositionConstants.SOUTH); - locator.setBorderItemOffset(new Dimension(-20, -20)); + locator.setBorderItemOffset(new Dimension(-5, -5)); borderItemContainer.add(borderItemEditPart.getFigure(), locator); } else { super.addBorderItem(borderItemContainer, borderItemEditPart); Generating patches with Eugenia \u00b6 It is possible to generate .patch files as part of the Eugenia code generation process. This allows you to include in .patch files information from your source metamodel, or from the GMF models generated by Eugenia. Generating .patch files is particularly useful when you want to apply the same type of change in several places in the Java source code for your GMF editor: Create a file named GeneratePatches.egx in the same directory as your Emfatic source code. In the GeneratePatches.egx file, write a transformation rule for each element of the ECore or GMF models for which you want to generate a .patch file: Create one or more EGL templates for use by your GeneratePatches.egx file. Each EGL template is essentially a parameterised .patch file.
The next time that you run EuGEnia, your GeneratePatches.egx file will be automatically invoked to generate one or more .patch files, which will then be automatically applied to the generated Java source code. You can also test your GeneratePatches.egx file, by right-clicking it and selecting Eugenia\u2192Generate patches. In our running example, we can generalise our State patch (above) such that it is applied to any element in our metamodel that has an external label. First, we create a GeneratePatches.egx file that produces a .patch file for every EClass in our ECore file that is annotated with label.placement set to external : // Imports the EClass#getLabelPlacement() operation from Eugenia import \"platform:/plugin/org.eclipse.epsilon.eugenia/transformations/ECoreUtil.eol\"; rule FixExternalLabelMargins // apply this rule to all EClasses where... transform c : ECore!EClass { // ... the EClass is annotated with @gmf.node(label.placement=\"external\") guard: c.getLabelPlacement() == \"external\" // invoke the following EGL template on the EClass template : \"FixExternalLabelMargin.egl\" // make the source directory and name of the node available to the template parameters : Map{ \"srcDir\" = getSourceDirectory(), \"node\" = c.name } // and save the generated text to the following .patch file target : \"FixExternalLabelMarginsFor\" + c.name + \".patch\" } // Determine source directory from GMF Gen model @cache operation getSourceDirectory() { var genEditor = GmfGen!GenEditorGenerator.all.first; return genEditor.pluginDirectory.substring(1) + \"/\" + genEditor.packageNamePrefix.replace(\"\\\\.\", \"/\"); } We'll also need to provide a parameterised version of our State patch, saving it as an EGL template at FixExternalLabelMargin.egl : diff --git [%=srcDir%]/edit/parts/[%=node%]EditPart.java [%=srcDir%]/edit/parts/[%=node%]EditPart.java index d0684d6..f162365 100644 --- [%=srcDir%]/edit/parts/[%=node%]EditPart.java +++ [%=srcDir%]/edit/parts/[%=node%]EditPart.java 
@@ -143,7 +143,7 @@ if (borderItemEditPart instanceof [%=node%]NameEditPart) { BorderItemLocator locator = new BorderItemLocator(getMainFigure(), PositionConstants.SOUTH); - locator.setBorderItemOffset(new Dimension(-20, -20)); + locator.setBorderItemOffset(new Dimension(-5, -5)); borderItemContainer.add(borderItemEditPart.getFigure(), locator); } else { super.addBorderItem(borderItemContainer, borderItemEditPart); Note that the above template uses the srcDir and node variables made available by our EGX transformation rule. The next time that Eugenia is invoked, a .patch file is generated and applied for every EClass in our ECore file that has an externally-placed label: FAQ \u00b6 Should my patches produce @generated NOT annotations? \u00b6 No, because this can cause subsequent invocations of Eugenia and the GMF code generator to fail -- the GMF code generator will attempt to preserve code marked as @generated NOT and your .patch files will likely not apply cleanly to the code that has been preserved. The code that is applied via .patch files is generated code and should be treated as such. One or more of my patches couldn't be applied. What should I do? \u00b6 Firstly, check to ensure that Eclipse can apply your patch via the Team\u2192Apply patch... menu item. If not, you'll need to fix your .patch file. Secondly, ensure that the order in which your patches are being applied is not causing problems. By default Eugenia orders patches alphabetically by filename: a.patch will be applied before z.patch I'm using git-svn and my patch files can't be applied by Eugenia or by Eclipse's Team\u2192Apply patch... menu item. What should I do? \u00b6 You should edit the headers of any patch file generated by git-svn and remove the dummy a and b folders. 
For example: diff --git a/org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/EndStateEditPart.java b/org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/EndStateEditPart.java index 65e2685..109b568 100644 --- a/org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/EndStateEditPart.java +++ b/org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/EndStateEditPart.java @@ -152,6 +152,8 @@ ... becomes: diff --git org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/EndStateEditPart.java org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/EndStateEditPart.java index 65e2685..109b568 100644 --- org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/EndStateEditPart.java +++ org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/EndStateEditPart.java @@ -152,6 +152,8 @@ ...","title":"Customizing the Java source code generated by Eugenia"},{"location":"doc/articles/eugenia-patching/#customizing-the-java-source-code-generated-by-eugenia","text":"Occasionally, the Java source code generated by GMF to implement your graphical editor is not quite what you want, and it's not possible to polish the GMF models to incorporate your desired changes. Essentially, you'd like to change the code generation templates used by GMF. In this situation, you have two options. The first option is to use GMF dynamic templates , which requires some knowledge of Xpand (the code generation language used by GMF) and can often involve hunting around in the GMF code generator for the right place to make your changes.
Alternatively, you can use Eugenia's patch generation and application functionality (described below).","title":"Customizing the Java source code generated by Eugenia"},{"location":"doc/articles/eugenia-patching/#running-example","text":"The remainder of this article demonstrates how to customize the source code for a generated GMF editor to change the size of the margins used for external labels. As shown below, the patched version of the GMF editor positions labels closer to their nodes: Note that the models used by GMF to generate our editor don't provide a way to control the size of the margins, so we can't use a polishing transformation.","title":"Running example"},{"location":"doc/articles/eugenia-patching/#automatically-patching-the-source-code-of-a-generated-gmf-editor","text":"After generating the GMF code for your editor, Eugenia will search for a patches directory in the same project as your Emfatic source. If the patches directory is found, Eugenia will apply to your workspace any .patch file found in that directory.","title":"Automatically patching the source code of a generated GMF editor"},{"location":"doc/articles/eugenia-patching/#creating-and-applying-patches-with-eugenia","text":"Create .patch files using Eclipse's Team functionality: Make your desired changes to the generated Java source code by hand. Right-click the project containing your changes, and select Team\u2192Create Patch... Select Clipboard and click Finish Create a patches directory in the project containing your Emfatic source. Create a new file (e.g. patches/MyChanges.patch ), paste your patch into the new file and save it. The next time that you run EuGEnia, your .patch file will be automatically applied to the generated Java source code. You can also apply or remove all of your patches by right-clicking your patches directory and selecting Eugenia\u2192Apply patches or Eugenia\u2192Remove applied patches. 
In our running example, we devise the patch below to fix the margins of externally placed labels for the State model element type. We save the patch into patches/FixExternalLabelMarginsForState.patch diff --git org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/StateEditPart.java org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/StateEditPart.java index d0684d6..f162365 100644 --- org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/StateEditPart.java +++ org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/StateEditPart.java @@ -143,7 +143,7 @@ if (borderItemEditPart instanceof StateNameEditPart) { BorderItemLocator locator = new BorderItemLocator(getMainFigure(), PositionConstants.SOUTH); - locator.setBorderItemOffset(new Dimension(-20, -20)); + locator.setBorderItemOffset(new Dimension(-5, -5)); borderItemContainer.add(borderItemEditPart.getFigure(), locator); } else { super.addBorderItem(borderItemContainer, borderItemEditPart);","title":"Creating and applying patches with Eugenia"},{"location":"doc/articles/eugenia-patching/#generating-patches-with-eugenia","text":"It is possible to generate .patch files as part of the Eugenia code generation process. This allows you to include in .patch files information from your source metamodel, or from the GMF models generated by Eugenia. Generating .patch files is particularly useful when you want to apply the same type of change in several places in the Java source code for your GMF editor: Create a file named GeneratePatches.egx in the same directory as your Emfatic source code. In the GeneratePatches.egx file, write a transformation rule for each element of the ECore or GMF models for which you want to generate a .patch file: Create one or more EGL templates for use by your GeneratePatches.egx file.
Each EGL template is essentially a parameterised .patch file. The next time that you run Eugenia, your GeneratePatches.egx file will be automatically invoked to generate one or more .patch files, which will then be automatically applied to the generated Java source code. You can also test your GeneratePatches.egx file by right-clicking it and selecting Eugenia\u2192Generate patches. In our running example, we can generalise our State patch (above) such that it is applied to any element in our metamodel that has an external label. First, we create a GeneratePatches.egx file that produces a .patch file for every EClass in our ECore file that is annotated with label.placement set to external : // Imports the EClass#getLabelPlacement() operation from Eugenia import \"platform:/plugin/org.eclipse.epsilon.eugenia/transformations/ECoreUtil.eol\"; rule FixExternalLabelMargins // apply this rule to all EClasses where... transform c : ECore!EClass { // ... the EClass is annotated with @gmf.node(label.placement=\"external\") guard: c.getLabelPlacement() == \"external\" // invoke the following EGL template on the EClass template : \"FixExternalLabelMargin.egl\" // make the source directory and name of the node available to the template parameters : Map{ \"srcDir\" = getSourceDirectory(), \"node\" = c.name } // and save the generated text to the following .patch file target : \"FixExternalLabelMarginsFor\" + c.name + \".patch\" } // Determine source directory from GMF Gen model @cache operation getSourceDirectory() { var genEditor = GmfGen!GenEditorGenerator.all.first; return genEditor.pluginDirectory.substring(1) + \"/\" + genEditor.packageNamePrefix.replace(\"\\\\.\", \"/\"); } We'll also need to provide a parameterised version of our State patch, saving it as an EGL template at FixExternalLabelMargin.egl : diff --git [%=srcDir%]/edit/parts/[%=node%]EditPart.java [%=srcDir%]/edit/parts/[%=node%]EditPart.java index d0684d6..f162365 100644 --- 
[%=srcDir%]/edit/parts/[%=node%]EditPart.java +++ [%=srcDir%]/edit/parts/[%=node%]EditPart.java @@ -143,7 +143,7 @@ if (borderItemEditPart instanceof [%=node%]NameEditPart) { BorderItemLocator locator = new BorderItemLocator(getMainFigure(), PositionConstants.SOUTH); - locator.setBorderItemOffset(new Dimension(-20, -20)); + locator.setBorderItemOffset(new Dimension(-5, -5)); borderItemContainer.add(borderItemEditPart.getFigure(), locator); } else { super.addBorderItem(borderItemContainer, borderItemEditPart); Note that the above template uses the srcDir and node variables made available by our EGX transformation rule. The next time that Eugenia is invoked, a .patch file is generated and applied for every EClass in our ECore file that has an externally-placed label:","title":"Generating patches with Eugenia"},{"location":"doc/articles/eugenia-patching/#faq","text":"","title":"FAQ"},{"location":"doc/articles/eugenia-patching/#should-my-patches-produce-generated-not-annotations","text":"No, because this can cause subsequent invocations of Eugenia and the GMF code generator to fail -- the GMF code generator will attempt to preserve code marked as @generated NOT and your .patch files will likely not apply cleanly to the code that has been preserved. The code that is applied via .patch files is generated code and should be treated as such.","title":"Should my patches produce @generated NOT annotations?"},{"location":"doc/articles/eugenia-patching/#one-or-more-of-my-patches-couldnt-be-applied-what-should-i-do","text":"Firstly, check to ensure that Eclipse can apply your patch via the Team\u2192Apply patch... menu item. If not, you'll need to fix your .patch file. Secondly, ensure that the order in which your patches are being applied is not causing problems. By default Eugenia orders patches alphabetically by filename: a.patch will be applied before z.patch","title":"One or more of my patches couldn't be applied. 
What should I do?"},{"location":"doc/articles/eugenia-patching/#im-using-git-svn-and-my-patch-files-cant-be-applied-by-eugenia-or-by-eclipses-teamapply-patch-menu-item-what-should-i-do","text":"You should edit the headers of any patch file generated by git-svn and remove the dummy a and b folders. For example: diff --git a/org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/EndStateEditPart.java b/org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/EndStateEditPart.java index 65e2685..109b568 100644 --- a/org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/EndStateEditPart.java +++ b/org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/EndStateEditPart.java @@ -152,6 +152,8 @@ ... becomes: diff --git org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/EndStateEditPart.java org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/EndStateEditPart.java index 65e2685..109b568 100644 --- org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/EndStateEditPart.java +++ org.eclipse.epsilon.eugenia.examples.executablestatemachine.graphical.diagram/src/esm/diagram/edit/parts/EndStateEditPart.java @@ -152,6 +152,8 @@ ...","title":"I'm using git-svn and my patch files can't be applied by Eugenia or by Eclipse's Team\u2192Apply patch... menu item. What should I do?"},{"location":"doc/articles/eugenia-phantom-nodes/","text":"Eugenia: Phantom nodes in GMF editors \u00b6 Containment references in Ecore metamodels are usually depicted in GMF as spatial containment (e.g. in the sense that a class is contained inside the figure of a package). 
However, it is sometimes necessary to represent containment references using links instead. To achieve this, GMF provides the notion of phantom nodes. Eugenia provides first-class support for phantom nodes in GMF using the phantom annotation detail. The following listing provides such an example: @namespace(uri=\"phantom\", prefix=\"phantom\") package phantom; @gmf.diagram class Model extends NamedElement { val Group[*] groups; } class NamedElement { attr String name; } @gmf.node(label=\"name\") class Group extends NamedElement { @gmf.link(label=\"member\") val Member[*] members; } @gmf.node(label=\"name\", phantom=\"true\") class Member extends NamedElement { } In this example, a Model contains many groups and a Group contains many members. To represent the Group.members containment reference as a normal link, we set the phantom detail of the gmf.node annotation of Member to true and add a gmf.link annotation to Group.members. The result looks like this:","title":"Eugenia: Phantom nodes in GMF editors"},{"location":"doc/articles/eugenia-phantom-nodes/#eugenia-phantom-nodes-in-gmf-editors","text":"Containment references in Ecore metamodels are usually depicted in GMF as spatial containment (e.g. in the sense that a class is contained inside the figure of a package). However, it is sometimes necessary to represent containment references using links instead. To achieve this, GMF provides the notion of phantom nodes. Eugenia provides first-class support for phantom nodes in GMF using the phantom annotation detail. 
The following listing provides such an example: @namespace(uri=\"phantom\", prefix=\"phantom\") package phantom; @gmf.diagram class Model extends NamedElement { val Group[*] groups; } class NamedElement { attr String name; } @gmf.node(label=\"name\") class Group extends NamedElement { @gmf.link(label=\"member\") val Member[*] members; } @gmf.node(label=\"name\", phantom=\"true\") class Member extends NamedElement { } In this example, a Model contains many groups and a Group contains many members. To represent the Group.members containment reference as a normal link, we set the phantom detail of the gmf.node annotation of Member to true and add a gmf.link annotation to Group.members. The result looks like this:","title":"Eugenia: Phantom nodes in GMF editors"},{"location":"doc/articles/eugenia-polishing/","text":"Customizing a GMF editor generated by Eugenia \u00b6 So now you have created the first version of your GMF editor with Eugenia and it looks almost like what you want - just a few tweaks and you are there. As Eugenia doesn't support all the features of GMF (otherwise it would be just as complex), you may find that the tweaks you want to make are not supported by the annotations provided by Eugenia and therefore you need to change one or more of the generated .gmfgraph, .gmfmap and .gmftool models manually. If you decide to do this, you won't be able to use Eugenia any more for your editor because it will overwrite your changes. We've come across this situation many times and decided to do something about it. Since merging such complex models sensibly is virtually impossible, we've implemented support for user-defined transformations that complement the built-in transformations provided by Eugenia. Let's go straight to an example. 
We have the following classdiagram metamodel: @namespace(uri=\"classdiagram\", prefix=\"classdiagram\") package classdiagram; @gmf.diagram class Model { val Clazz[*] classes; } @gmf.node(label=\"name\", figure=\"rectangle\") class Clazz { attr String name; @gmf.compartment(layout=\"list\", collapsible=\"false\") val Attribute[*] attributes; } @gmf.node(label=\"name,type\", figure=\"rectangle\", label.icon=\"false\", label.pattern=\"{0}:{1}\") class Attribute { attr String name; attr String type; } and we follow the standard Eugenia procedure to generate a GMF editor from it. The editor looks like this: which is almost what we want. What we really want is something like this: To get this, we need to customize the classdiagram.gmfgraph model like this so that we can get this: To perform these changes automatically every time Eugenia is executed on classdiagram.ecore , we can create a new EOL transformation called ECore2GMF.eol and place it in the same folder with classdiagram.ecore . Eugenia will then pick it up and execute it after the built-in transformation every time we invoke Generate GMF tool, graph and map models action. In our case, the ECore2GMF.eol customization transformation looks like this: // Find the compartment figure var clazzAttributesCompartmentFigure = GmfGraph!Rectangle.all. selectOne(r|r.name = 'ClazzAttributesCompartmentFigure'); // ... and add a stack layout to it clazzAttributesCompartmentFigure.layout = new GmfGraph!StackLayout; // Find the attribute figure var attributeFigure = GmfGraph!Rectangle.all. selectOne(r|r.name = 'AttributeFigure'); // ... delete its border delete attributeFigure.border; // ... set its outline to false attributeFigure.outline = false; // ... 
and add a preferred size to it var preferredSize = new GmfGraph!Dimension; preferredSize.dx = 100; preferredSize.dy = 16; attributeFigure.preferredSize = preferredSize; Similarly, if we needed to customize the logic behind the Synchronize GMF Gen model action, we'd need to define a FixGMFGen.eol transformation next to classdiagram.ecore . What models can I access from the ECore2GMF.eol and FixGMFGen.eol transformations? \u00b6 In the Ecore2GenModel.eol transformation and the later FixGenModel.eol transformation you can access the ECore metamodel (named Ecore ) and the EMF GenModel model (named GenModel ). You can run Ecore2GenModel.eol or FixGenModel.eol manually by right-clicking on the .ecore file and selecting \"Generate EMF GenModel\" or \"Synchronize EMF GenModel\", respectively. In the ECore2GMF.eol transformation you can access the ECore metamodel (named ECore ), the tool model (named GmfTool ), the graph model (named GmfGraph ) and the map model (named GmfMap ). You can regenerate the GMF models and run ECore2GMF.eol manually by right-clicking on the .ecore file and selecting \"Generate GMF tool, graph and map models\". In the FixGMFGen.eol transformation you can access the ECore metamodel (named ECore ), and the generator model (named GmfGen ). You can run FixGMFGen.eol manually by right-clicking on the .gmfgen model (which should have been created previously from the .gmfmap using standard GMF tools) and selecting \"Synchronize GMFGen\". How do I customize the generated code? \u00b6 GMF generates code in two steps: During the GmfMap \u2192 GmfGen transformation: small fragments are embedded into the GmfGen model, using GMF figure templates. From the GmfGen model: the embedded bits are dumped to certain files, and additional code is generated using the rest of the GMF templates. 
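The transformations above access their models by name, using the model!Type convention. As a minimal sketch (the printed messages and the exact properties read are illustrative), a hypothetical FixGMFGen.eol fragment could read values that the built-in transformation has already populated:

```eol
// Hypothetical FixGMFGen.eol fragment: the generator model is
// available under the name GmfGen, the metamodel under ECore
var genEditor = GmfGen!GenEditorGenerator.all.first;
("Generating into: " + genEditor.pluginDirectory).println();

// The ECore metamodel can be queried in the same way
for (c in ECore!EClass.all) {
    ("Found EClass: " + c.name).println();
}
```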
To use your own GMF figure templates, you need to place them under a folder called templates-gmfgraph , which should be a sibling of the folder where your .emf or .ecore files are stored. If it exists, Eugenia will use its templates for the GmfMap \u2192 GmfGen transformation. To customize the code generated from the GmfGen model, you will need to use Eugenia's patch generation and application functionality or GMF dynamic templates . Getting assistance in writing these transformations \u00b6 You'll most probably find Exeed and the EPackage Registry view to be useful for writing such transformations.","title":"Customizing a GMF editor generated by Eugenia"},{"location":"doc/articles/eugenia-polishing/#customizing-a-gmf-editor-generated-by-eugenia","text":"So now you have created the first version of your GMF editor with Eugenia and it looks almost like what you want - just a few tweaks and you are there. As Eugenia doesn't support all the features of GMF (otherwise it would be just as complex), you may find that the tweaks you want to make are not supported by the annotations provided by Eugenia and therefore you need to change one or more of the generated .gmfgraph, .gmfmap and .gmftool models manually. If you decide to do this, you won't be able to use Eugenia any more for your editor because it will overwrite your changes. We've come across this situation many times and decided to do something about it. Since merging such complex models sensibly is virtually impossible, we've implemented support for user-defined transformations that complement the built-in transformations provided by Eugenia. Let's go straight to an example. 
We have the following classdiagram metamodel: @namespace(uri=\"classdiagram\", prefix=\"classdiagram\") package classdiagram; @gmf.diagram class Model { val Clazz[*] classes; } @gmf.node(label=\"name\", figure=\"rectangle\") class Clazz { attr String name; @gmf.compartment(layout=\"list\", collapsible=\"false\") val Attribute[*] attributes; } @gmf.node(label=\"name,type\", figure=\"rectangle\", label.icon=\"false\", label.pattern=\"{0}:{1}\") class Attribute { attr String name; attr String type; } and we follow the standard Eugenia procedure to generate a GMF editor from it. The editor looks like this: which is almost what we want. What we really want is something like this: To get this, we need to customize the classdiagram.gmfgraph model like this so that we can get this: To perform these changes automatically every time Eugenia is executed on classdiagram.ecore , we can create a new EOL transformation called ECore2GMF.eol and place it in the same folder with classdiagram.ecore . Eugenia will then pick it up and execute it after the built-in transformation every time we invoke Generate GMF tool, graph and map models action. In our case, the ECore2GMF.eol customization transformation looks like this: // Find the compartment figure var clazzAttributesCompartmentFigure = GmfGraph!Rectangle.all. selectOne(r|r.name = 'ClazzAttributesCompartmentFigure'); // ... and add a stack layout to it clazzAttributesCompartmentFigure.layout = new GmfGraph!StackLayout; // Find the attribute figure var attributeFigure = GmfGraph!Rectangle.all. selectOne(r|r.name = 'AttributeFigure'); // ... delete its border delete attributeFigure.border; // ... set its outline to false attributeFigure.outline = false; // ... 
and add a preferred size to it var preferredSize = new GmfGraph!Dimension; preferredSize.dx = 100; preferredSize.dy = 16; attributeFigure.preferredSize = preferredSize; Similarly, if we needed to customize the logic behind the Synchronize GMF Gen model action, we'd need to define a FixGMFGen.eol transformation next to classdiagram.ecore .","title":"Customizing a GMF editor generated by Eugenia"},{"location":"doc/articles/eugenia-polishing/#what-models-can-i-access-from-the-ecore2gmfeol-and-fixgmfgeneol-transformations","text":"In the Ecore2GenModel.eol transformation and the later FixGenModel.eol transformation you can access the ECore metamodel (named Ecore ) and the EMF GenModel model (named GenModel ). You can run Ecore2GenModel.eol or FixGenModel.eol manually by right-clicking on the .ecore file and selecting \"Generate EMF GenModel\" or \"Synchronize EMF GenModel\", respectively. In the ECore2GMF.eol transformation you can access the ECore metamodel (named ECore ), the tool model (named GmfTool ), the graph model (named GmfGraph ) and the map model (named GmfMap ). You can regenerate the GMF models and run ECore2GMF.eol manually by right-clicking on the .ecore file and selecting \"Generate GMF tool, graph and map models\". In the FixGMFGen.eol transformation you can access the ECore metamodel (named ECore ), and the generator model (named GmfGen ). You can run FixGMFGen.eol manually by right-clicking on the .gmfgen model (which should have been created previously from the .gmfmap using standard GMF tools) and selecting \"Synchronize GMFGen\".","title":"What models can I access from the ECore2GMF.eol and FixGMFGen.eol transformations?"},{"location":"doc/articles/eugenia-polishing/#how-do-i-customize-the-generated-code","text":"GMF generates code in two steps: During the GmfMap \u2192 GmfGen transformation: small fragments are embedded into the GmfGen model, using GMF figure templates. 
From the GmfGen model: the embedded bits are dumped to certain files, and additional code is generated using the rest of the GMF templates. To use your own GMF figure templates, you need to place them under a folder called templates-gmfgraph , which should be a sibling of the folder where your .emf or .ecore files are stored. If it exists, Eugenia will use its templates for the GmfMap \u2192 GmfGen transformation. To customize the code generated from the GmfGen model, you will need to use Eugenia's patch generation and application functionality or GMF dynamic templates .","title":"How do I customize the generated code?"},{"location":"doc/articles/eugenia-polishing/#getting-assistance-in-writing-these-transformations","text":"You'll most probably find Exeed and the EPackage Registry view to be useful for writing such transformations.","title":"Getting assistance in writing these transformations"},{"location":"doc/articles/evl-gmf-integration/","text":"Live validation and quick-fixes in GMF-based editors with EVL \u00b6 In this tutorial , we demonstrated how Eugenia can be used to easily implement a GMF-based editor for a small FileSystem DSL. Now, we demonstrate how the Epsilon Validation Language can be used to easily contribute validation/quick fixes to our GMF editor. (Note: this applies to any GMF-based editor - not only to editors constructed with Eugenia) Warning If you have not implemented your editor using Eugenia, before you start please make sure that you have turned on validation in your .gmfgen model. The flags you need to set to true are the Validation Enabled and Validation Decorators in the Gen Diagram . Step 1: Create the integration plugin \u00b6 In the first step we create the integration plugin that will host our constraints and extensions. 
We name it org.eclipse.epsilon.eugenia.examples.filesystem.validation Step 2: Set the dependencies \u00b6 We switch to the dependencies tab of MANIFEST.MF and add org.eclipse.ui.ide and org.eclipse.epsilon.evl.emf.validation to the list of dependencies. Step 3: Write the constraints \u00b6 We create a new .evl file in the plugin. In our case we've created it under validation/filesystem.evl (make sure you switch to the Build tab to verify that the .evl file is included in your binary build). In our example we define the following constraints: context File { constraint HasName { check : self.name.isDefined() message : 'Unnamed ' + self.eClass().name + ' not allowed' } } context Folder { critique NameStartsWithCapital { guard : self.satisfies('HasName') check : self.name.firstToUpperCase() = self.name message : 'Folder ' + self.name + ' should start with an upper-case letter' fix { title : 'Rename to ' + self.name.firstToUpperCase() do { self.name := self.name.firstToUpperCase(); } } } } context Sync { constraint MustLinkSame { check : self.source.eClass() = self.target.eClass() message : 'Cannot synchronize a ' + self.source.eClass().name + ' with a ' + self.target.eClass().name fix { title : 'Synchronize with another ' + self.source.eClass().name do { var target := UserInput.choose('Select target', _Model.getAllOfType(self.source.eClass().name)); if (target.isDefined()) self.target := target; } } } } We have defined three constraints: The first ( HasName ) requires that each file has a non-empty name. The second one ( NameStartsWithCapital ) requires that every folder starts with a capital letter. Unlike the HasName , this is declared as a critique which means that if it is not satisfied by an element, this will be shown as a warning (instead of an error) on the editor. In the guard of this constraint we check that the element satisfies the HasName constraint first (it wouldn't make sense to check this for an empty-named file). 
If the critique is not satisfied, a warning is generated and the user is presented with the option to invoke the fix which automatically renames the folder. The third one ( MustLinkSame ) requires that a sync synchronizes two things of the same type: i.e. a folder with a folder, a file with a file etc. If this fails, it generates an error and the user can invoke the fix to repair it. In the fix, the user is prompted to select one of the elements of the same type as the source of the sync to serve as the target. Step 4: Bind the constraints to the editor \u00b6 Having written the constraints, the next step is to bind them to the GMF editor. To do this, we switch to the Extensions tab of MANIFEST.MF and add the org.eclipse.epsilon.evl.emf.validation extension. Then we right-click it and add a new constraintBinding . In the namespaceURI field of the extension we set the value to filesystem and in the constraints field we select the validation/filesystem.evl EVL file we created in Step 3. Next, we add the org.eclipse.ui.ide.markerResolution extension and below it we create two markerResolutionGenerator entries with the following details: class : org.eclipse.epsilon.evl.emf.validation.EvlMarkerResolutionGenerator markerType : org.eclipse.epsilon.eugenia.examples.filesystem.diagram.diagnostic and class : org.eclipse.epsilon.evl.emf.validation.EvlMarkerResolutionGenerator markerType : org.eclipse.emf.ecore.diagnostic Step 5: Ready to go! \u00b6 The next step is to run a new Eclipse instance and create a new filesystem diagram that looks like this: To validate this we go to the Diagram menu and select Validate (depending on your version of Eclipse, the Validate option may be located under the Edit menu instead). The editor now looks like this: There are two problems with our model: The sync between picture.bmp and backup is invalid as it syncs a file with a folder. 
As a result the MustLinkSame constraint has failed and the sync has been visually annotated with a red circle that shows this. Similarly, the NameStartsWithCapital constraint has failed for the backup folder (it should start with an upper-case letter) and this is indicated with a red triangle on the folder. The generated errors/warnings also appear in the Problems view: Double-clicking on an error/warning in this view brings us to the respective editor and highlights the failing element. What is more important, however, is that for constraints for which we have defined fixes (e.g. the MustLinkSame and NameStartsWithCapital ), we can also apply the fixes using this view. To do this we need to right-click a problem that has quick fixes (indicated by a small lamp on the bottom right) and select Quick Fix . Doing this for the \"Folder backup should start with an upper-case letter\" warning brings up the following dialog: Clicking Finish invokes the behaviour of the fix which renames the folder from backup to Backup (and resolves the problem). The change is also reflected to the diagram automatically due to the GMF MVC architecture. It is worth mentioning that any changes done during a quick fix can be undone/redone using the respective options from the Edit menu (or simply using Ctrl-Z , Ctrl-Y ). Also, if an error occurs in the middle of a fix block, all changes to the model done in the block are automatically rolled back. Troubleshooting/Known issues \u00b6 While errors/warnings are persisted across sessions, quick-fixes are not. Therefore, if you run validation and re-start Eclipse, in the new Eclipse instance the problems will still appear in the editor/problems view but quick-fixes will not be available until you run validation again. 
Recipes \u00b6 If you need validation to be performed whenever your diagram is saved, add the following line in the doSaveDocument(IProgressMonitor monitor, Object element, IDocument document, boolean overwrite) method of your XXXDocumentProvider class (located in the .diagram.part package) in your diagram plugin. ValidateAction.runValidation((View) document.getContent());","title":"Live validation and quick-fixes in GMF-based editors with EVL"},{"location":"doc/articles/evl-gmf-integration/#live-validation-and-quick-fixes-in-gmf-based-editors-with-evl","text":"In this tutorial , we demonstrated how Eugenia can be used to easily implement a GMF-based editor for a small FileSystem DSL. Now, we demonstrate how the Epsilon Validation Language can be used to easily contribute validation/quick fixes to our GMF editor. (Note: this applies to any GMF-based editor - not only to editors constructed with Eugenia) Warning If you have not implemented your editor using Eugenia, before you start please make sure that you have turned on validation in your .gmfgen model. The flags you need to set to true are the Validation Enabled and Validation Decorators in the Gen Diagram .","title":"Live validation and quick-fixes in GMF-based editors with EVL"},{"location":"doc/articles/evl-gmf-integration/#step-1-create-the-integration-plugin","text":"In the first step we create the integration plugin that will host our constraints and extensions. We name it org.eclipse.epsilon.eugenia.examples.filesystem.validation","title":"Step 1: Create the integration plugin"},{"location":"doc/articles/evl-gmf-integration/#step-2-set-the-dependencies","text":"We switch to the dependencies tab of MANIFEST.MF and add org.eclipse.ui.ide and org.eclipse.epsilon.evl.emf.validation to the list of dependencies.","title":"Step 2: Set the dependencies"},{"location":"doc/articles/evl-gmf-integration/#step-3-write-the-constraints","text":"We create a new .evl file in the plugin. 
In our case we've created it under validation/filesystem.evl (make sure you switch to the Build tab to verify that the .evl file is included in your binary build). In our example we define the following constraints: context File { constraint HasName { check : self.name.isDefined() message : 'Unnamed ' + self.eClass().name + ' not allowed' } } context Folder { critique NameStartsWithCapital { guard : self.satisfies('HasName') check : self.name.firstToUpperCase() = self.name message : 'Folder ' + self.name + ' should start with an upper-case letter' fix { title : 'Rename to ' + self.name.firstToUpperCase() do { self.name := self.name.firstToUpperCase(); } } } } context Sync { constraint MustLinkSame { check : self.source.eClass() = self.target.eClass() message : 'Cannot synchronize a ' + self.source.eClass().name + ' with a ' + self.target.eClass().name fix { title : 'Synchronize with another ' + self.source.eClass().name do { var target := UserInput.choose('Select target', _Model.getAllOfType(self.source.eClass().name)); if (target.isDefined()) self.target := target; } } } } We have defined three constraints: The first ( HasName ) requires that each file has a non-empty name. The second one ( NameStartsWithCapital ) requires that every folder starts with a capital letter. Unlike the HasName , this is declared as a critique which means that if it is not satisfied by an element, this will be shown as a warning (instead of an error) on the editor. In the guard of this constraint we check that the element satisfies the HasName constraint first (it wouldn't make sense to check this for an empty-named file). If the critique is not satisfied, a warning is generated and the user is presented with the option to invoke the fix which automatically renames the folder. The third one ( MustLinkSame ) requires that a sync synchronizes two things of the same type: i.e. a folder with a folder, a file with a file etc. 
If this fails, it generates an error and the user can invoke the fix to repair it. In the fix, the user is prompted to select one of the elements of the same type as the source of the sync to serve as the target.","title":"Step 3: Write the constraints"},{"location":"doc/articles/evl-gmf-integration/#step-4-bind-the-constraints-to-the-editor","text":"Having written the constraints, the next step is to bind them to the GMF editor. To do this, we switch to the Extensions tab of MANIFEST.MF and add the org.eclipse.epsilon.evl.emf.validation extension. Then we right-click it and add a new constraintBinding . In the namespaceURI field of the extension we set the value to filesystem and in the constraints field we select the validation/filesystem.evl EVL file we created in Step 3. Next, we add the org.eclipse.ui.ide.markerResolution extension and below it we create two markerResolutionGenerator entries with the following details: class : org.eclipse.epsilon.evl.emf.validation.EvlMarkerResolutionGenerator markerType : org.eclipse.epsilon.eugenia.examples.filesystem.diagram.diagnostic and class : org.eclipse.epsilon.evl.emf.validation.EvlMarkerResolutionGenerator markerType : org.eclipse.emf.ecore.diagnostic","title":"Step 4: Bind the constraints to the editor"},{"location":"doc/articles/evl-gmf-integration/#step-5-ready-to-go","text":"The next step is to run a new Eclipse instance and create a new filesystem diagram that looks like this: To validate this we go to the Diagram menu and select Validate (depending on your version of Eclipse, the Validate option may be located under the Edit menu instead). The editor now looks like this: There are two problems with our model: The sync between picture.bmp and backup is invalid as it syncs a file with a folder. As a result the MustLinkSame constraint has failed and the sync has been visually annotated with a red circle that shows this. 
Similarly, the NameStartsWithCapital constraint has failed for the backup folder (it should start with an upper-case letter) and this is indicated with a red triangle on the folder. The generated errors/warnings also appear in the Problems view: Double-clicking on an error/warning in this view brings us to the respective editor and highlights the failing element. What is more important, however, is that for constraints for which we have defined fixes (e.g. the MustLinkSame and NameStartsWithCapital ), we can also apply the fixes using this view. To do this we need to right-click a problem that has quick fixes (indicated by a small lamp on the bottom right) and select Quick Fix . Doing this for the \"Folder backup should start with an upper-case letter\" warning brings up the following dialog: Clicking Finish invokes the behaviour of the fix which renames the folder from backup to Backup (and resolves the problem). The change is also reflected to the diagram automatically due to the GMF MVC architecture. It is worth mentioning that any changes done during a quick fix can be undone/redone using the respective options from the Edit menu (or simply using Ctrl-Z , Ctrl-Y ). Also, if an error occurs in the middle of a fix block, all changes to the model done in the block are automatically rolled back.","title":"Step 5: Ready to go!"},{"location":"doc/articles/evl-gmf-integration/#troubleshootingknown-issues","text":"While errors/warnings are persisted across sessions, quick-fixes are not. Therefore, if you run validation and re-start Eclipse, in the new Eclipse instance the problems will still appear in the editor/problems view but quick-fixes will not be available until you run validation again. 
We are currently working on a fix for this.","title":"Troubleshooting/Known issues"},{"location":"doc/articles/evl-gmf-integration/#recipes","text":"If you need validation to be performed whenever your diagram is saved, add the following line in the doSaveDocument(IProgressMonitor monitor, Object element, IDocument document, boolean overwrite) method of your XXXDocumentProvider class (located in the .diagram.part package) in your diagram plugin. ValidateAction.runValidation((View) document.getContent());","title":"Recipes"},{"location":"doc/articles/exercises/","text":"MDE Exercises \u00b6 This article provides a number of exercises you can use to test your knowledge on MDE, EMF and Epsilon. Exercise 1: Metamodelling with Ecore \u00b6 Write Ecore metamodels (using Emfatic or the graphical Ecore editor) for the following scenarios, and create instances of these metamodels using the reflective EMF tree editor : All school rooms have a buzzer triggered by a central clock to signal the end of the school day. Political parties, such as the Labour Party, the Conservative party, and the Liberal Democrat party, have both voters and supporters. An undirected graph consists of a set of vertices and a set of edges. Edges connect pairs of vertices. A football league has a set of teams, where each team has a manager and a set of players. A player is a forward, defender, or goalkeeper. The manager cannot be a player. A student is awarded a prize. Each prize is donated by at least one sponsor, e.g., IBM. A prize may be jointly awarded. Each student must write a letter thanking the sponsors of their prize Exercise 2: Constructing models programmatically using EOL \u00b6 In the previous exercise, you created sample models conforming to your metamodels using the reflective EMF tree editor. In this exercise, you should create the same models, but this time programmatically using EOL .
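As a hint for Exercise 2, EOL creates model elements with the new keyword and populates features with ordinary property syntax. The sketch below is illustrative only: it assumes a hypothetical metamodel for the undirected-graph scenario of Exercise 1, with assumed class names Graph, Vertex and Edge and assumed containment features vertices and edges (your own Exercise 1 metamodel may use different names).

```eol
// Illustrative sketch: Graph, Vertex, Edge and their features are assumed
// names from a possible Exercise 1 metamodel, not prescribed by this article.
var g = new Graph;
var v1 = new Vertex;
var v2 = new Vertex;
var e = new Edge;
// an undirected edge connects a pair of vertices
e.vertices.add(v1);
e.vertices.add(v2);
g.vertices.add(v1);
g.vertices.add(v2);
g.edges.add(e);
```

Running such a program against an empty model and saving the model should reproduce the instance you previously built by hand in the reflective tree editor.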
Exercise 3: Introducing EOL operations \u00b6 The Office Management System (OMS) is used to manage the rooms available to a company. It keeps track of who is assigned to occupy a room, along with their position in the company. It facilitates providing newly hired employees with offices, and assists employees who are to move from one office to another. Employees have positions, an office (offices are never shared), and know when they started work at the company and when they ended their employment. The OMS keeps track of all employees and rooms. Rooms are either occupied or unoccupied. With the OMS, it is possible to: hire a new employee and assign them to a room fire an employee and remove them from their office move an employee from one room to another, unoccupied room calculate the set of rooms that are unoccupied (useful for planning) With this scenario in mind, you need to do the following: Write an Ecore metamodel for the system above Write the body of the following EOL operations that implement 1-4 above: operation Employee hire() { ... } operation Employee fire() { ... } operation Employee move(to:Room) { ... } operation Company getFreeRooms() : Sequence(Room) { ... } Exercise 4: Model validation with EVL \u00b6 Construct the Ecore metamodel above and create a sample model that conforms to it using the reflective EMF tree editor. Write the following EVL constraints and evaluate them on your sample model: In the context of class Student, write a constraint stating that a student takes up to 6 modules In the context of class Grade, write a constraint stating that the mark must always be non-negative. In the context of Module, write a constraint stating that every student must have a unique name.
In the context of Student, write a constraint that states that the grades for the modules taken by a student must be identical to the grades that the student knows about directly Exercise 5: Model transformation with ETL \u00b6 Write an ETL transformation that transforms models conforming to the metamodel of Exercise 4 to models conforming to the metamodel below. Exercise 6: Text generation with EGL \u00b6 Write an EGL transformation that reads a model conforming to the metamodel of exercise 4 and produces a text file containing the names of all students and the total marks each student has obtained so far. Exercise 7: Multiple file generation with EGL \u00b6 Write an EGL transformation that reads a model conforming to the metamodel of exercise 5 and generates one file per transcript. Each output file should be named after the student with a .txt suffix (e.g. John Doe.txt) and it should contain a list of all the modules and marks of the student. Exercise 8: Using ANT to implement an ETL-EGL workflow \u00b6 Use the ANT tasks provided by Epsilon to create an ANT workflow that invokes the ETL transformation of Exercise 5 and then passes the produced model to the EGL transformation of Exercise 7, which in turn generates a set of transcript files. Exercise 9: Constructing graphical editors \u00b6 Create GMF editors for the metamodels you have written in the exercises above using Eugenia.","title":"MDE Exercises"},{"location":"doc/articles/exercises/#mde-exercises","text":"This article provides a number of exercises you can use to test your knowledge on MDE, EMF and Epsilon.","title":"MDE Exercises"},{"location":"doc/articles/exercises/#exercise-1-metamodelling-with-ecore","text":"Write Ecore metamodels (using Emfatic or the graphical Ecore editor) for the following scenarios, and create instances of these metamodels using the reflective EMF tree editor : All school rooms have a buzzer triggered by a central clock to signal the end of the school day. 
Political parties, such as the Labour Party, the Conservative party, and the Liberal Democrat party, have both voters and supporters. An undirected graph consists of a set of vertices and a set of edges. Edges connect pairs of vertices. A football league has a set of teams, where each team has a manager and a set of players. A player is a forward, defender, or goalkeeper. The manager cannot be a player. A student is awarded a prize. Each prize is donated by at least one sponsor, e.g., IBM. A prize may be jointly awarded. Each student must write a letter thanking the sponsors of their prize","title":"Exercise 1: Metamodelling with Ecore"},{"location":"doc/articles/exercises/#exercise-2-constructing-models-programmatically-using-eol","text":"In the previous exercise, you created sample models conforming to your metamodels using the reflective EMF tree editor. In this exercise, you should create the same models, but this time programmatically using EOL .","title":"Exercise 2: Constructing models programmatically using EOL"},{"location":"doc/articles/exercises/#exercise-3-introducing-eol-operations","text":"The Office Management System (OMS) is used to manage the rooms available to a company. It keeps track of who is assigned to occupy a room, along with their position in the company. It facilitates providing newly hired employees with offices, and assists employees who are to move from one office to another. Employees have positions, an office (offices are never shared), and know when they started work at the company and when they ended their employment. The OMS keeps track of all employees and rooms. 
Rooms are either occupied or unoccupied. With the OMS, it is possible to: hire a new employee and assign them to a room fire an employee and remove them from their office move an employee from one room to another, unoccupied room calculate the set of rooms that are unoccupied (useful for planning) With this scenario in mind, you need to do the following: Write an Ecore metamodel for the system above Write the body of the following EOL operations that implement 1-4 above: operation Employee hire() { ... } operation Employee fire() { ... } operation Employee move(to:Room) { ... } operation Company getFreeRooms() : Sequence(Room) { ... }","title":"Exercise 3: Introducing EOL operations"},{"location":"doc/articles/exercises/#exercise-4-model-validation-with-evl","text":"Construct the Ecore metamodel above and create a sample model that conforms to it using the reflective EMF tree editor. Write the following EVL constraints and evaluate them on your sample model: In the context of class Student, write a constraint stating that a student takes up to 6 modules In the context of class Grade, write a constraint stating that the mark must always be non-negative. In the context of Module, write a constraint stating that every student must have a unique name.
In the context of Student, write a constraint that states that the grades for the modules taken by a student must be identical to the grades that the student knows about directly","title":"Exercise 4: Model validation with EVL"},{"location":"doc/articles/exercises/#exercise-5-model-transformation-with-etl","text":"Write an ETL transformation that transforms models conforming to the metamodel of Exercise 4 to models conforming to the metamodel below.","title":"Exercise 5: Model transformation with ETL"},{"location":"doc/articles/exercises/#exercise-6-text-generation-with-egl","text":"Write an EGL transformation that reads a model conforming to the metamodel of exercise 4 and produces a text file containing the names of all students and the total marks each student has obtained so far.","title":"Exercise 6: Text generation with EGL"},{"location":"doc/articles/exercises/#exercise-7-multiple-file-generation-with-egl","text":"Write an EGL transformation that reads a model conforming to the metamodel of exercise 5 and generates one file per transcript. Each output file should be named after the student with a .txt suffix (e.g. 
John Doe.txt) and it should contain a list of all the modules and marks of the student.","title":"Exercise 7: Multiple file generation with EGL"},{"location":"doc/articles/exercises/#exercise-8-using-ant-to-implement-an-etl-egl-workflow","text":"Use the ANT tasks provided by Epsilon to create an ANT workflow that invokes the ETL transformation of Exercise 5 and then passes the produced model to the EGL transformation of Exercise 7, which in turn generates a set of transcript files.","title":"Exercise 8: Using ANT to implement an ETL-EGL workflow"},{"location":"doc/articles/exercises/#exercise-9-constructing-graphical-editors","text":"Create GMF editors for the metamodels you have written in the exercises above using Eugenia.","title":"Exercise 9: Constructing graphical editors"},{"location":"doc/articles/extended-properties/","text":"Extended Properties \u00b6 This article demonstrates the extended properties mechanism in EOL (and by inheritance, in all languages in Epsilon). We present the rationale and semantics of extended properties using the following simple metamodel (in Emfatic): package SimpleTree; class Tree { attr String name; ref Tree#children parent; val Tree[*]#parent children; } Now, what we want to do is to traverse a model that conforms to this metamodel and calculate and print the depth of each Tree in it. We can do this using this simple EOL program: var depths = new Map; for (n in Tree.allInstances.select(t|not t.parent.isDefined())) { n.setDepth(0); } for (n in Tree.allInstances) { (n.name + \" \" + depths.get(n)).println(); } operation Tree setDepth(depth : Integer) { depths.put(self,depth); for (c in self.children) { c.setDepth(depth + 1); } } Because the Tree EClass doesn't have a depth property, we have to use the depths Map to store the calculated depth of each Tree . 
Another solution would be to add a depth property to the Tree EClass so that its instances can store such information; but following this approach will soon pollute our metamodel with information of secondary importance. We've often come across similar situations where we needed to attach some kind of information (that is not supported by the metamodel) to particular model elements during model management operations (validation, transformation etc.). Until now, we've been using Maps to achieve this (similarly to what we've done above). However, now, EOL (and all languages built atop it) support non-invasive extended properties which provide a more elegant solution to this recurring problem. An extended property is a property that starts with the ~ character. Its semantics are quite straightforward: the first time a value is assigned to an extended property of an object (e.g. x.~a := b; ), the property is created and associated to the object and the value is assigned to it. Similarly, x.~a returns the value of the property or undefined if the property has not been set on the particular object yet. Using extended properties, we can rewrite the above code (without needing to use a Map ) as follows: for (n in Tree.allInstances.select(t|not t.parent.isDefined())) { n.setDepth(0); } for (n in Tree.allInstances) { (n.name + \" \" + n.~depth).println(); } operation Tree setDepth(depth : Integer) { self.~depth = depth; for (c in self.children) { c.setDepth(depth + 1); } }","title":"Extended Properties"},{"location":"doc/articles/extended-properties/#extended-properties","text":"This article demonstrates the extended properties mechanism in EOL (and by inheritance, in all languages in Epsilon). 
We present the rationale and semantics of extended properties using the following simple metamodel (in Emfatic): package SimpleTree; class Tree { attr String name; ref Tree#children parent; val Tree[*]#parent children; } Now, what we want to do is to traverse a model that conforms to this metamodel and calculate and print the depth of each Tree in it. We can do this using this simple EOL program: var depths = new Map; for (n in Tree.allInstances.select(t|not t.parent.isDefined())) { n.setDepth(0); } for (n in Tree.allInstances) { (n.name + \" \" + depths.get(n)).println(); } operation Tree setDepth(depth : Integer) { depths.put(self,depth); for (c in self.children) { c.setDepth(depth + 1); } } Because the Tree EClass doesn't have a depth property, we have to use the depths Map to store the calculated depth of each Tree . Another solution would be to add a depth property to the Tree EClass so that its instances can store such information; but following this approach will soon pollute our metamodel with information of secondary importance. We've often come across similar situations where we needed to attach some kind of information (that is not supported by the metamodel) to particular model elements during model management operations (validation, transformation etc.). Until now, we've been using Maps to achieve this (similarly to what we've done above). However, now, EOL (and all languages built atop it) support non-invasive extended properties which provide a more elegant solution to this recurring problem. An extended property is a property that starts with the ~ character. Its semantics are quite straightforward: the first time a value is assigned to an extended property of an object (e.g. x.~a := b; ), the property is created and associated to the object and the value is assigned to it. Similarly, x.~a returns the value of the property or undefined if the property has not been set on the particular object yet. 
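The semantics just described can be made concrete with a minimal sketch (the ~visited property name below is purely illustrative, and the snippet assumes a model with at least two Tree instances):

```eol
var t = Tree.allInstances.first();
t.~visited = true;     // the first assignment creates ~visited for t
t.~visited.println();  // the stored value can now be read back from t
// on an element where ~visited has never been assigned, it is undefined:
Tree.allInstances.second().~visited.isDefined().println();
```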
Using extended properties, we can rewrite the above code (without needing to use a Map ) as follows: for (n in Tree.allInstances.select(t|not t.parent.isDefined())) { n.setDepth(0); } for (n in Tree.allInstances) { (n.name + \" \" + n.~depth).println(); } operation Tree setDepth(depth : Integer) { self.~depth = depth; for (c in self.children) { c.setDepth(depth + 1); } }","title":"Extended Properties"},{"location":"doc/articles/git-fork-epsilon/","text":"Forking Epsilon as a non-committer with Git \u00b6 This article demonstrates how you can extend Epsilon and continue to receive source updates from the main development branch whilst also being able to commit changes to your own repository using the git version control system. The idea is to clone the Epsilon repository, make a branch and then set the remote for that branch to the master branch of your private repository. This allows you to maintain the history of Epsilon and so later on your changes can be merged into the main development branch if and when you gain committer privileges. For the rest of this article, we shall refer to the main Epsilon project as \"origin\". Specifically, we are referring to the \"master\" branch of Epsilon. For the extension project, we shall call it \"fork\". Specifically, \"forkbranch\" refers to the branch name of the extension project, whilst \"forkrepo\" refers to the repository name of the extension project. We will assume that the \"master\" branch of forkrepo is used to host the forkbranch for simplicity. If you have already set up your repository and have some content on it, you should back up all of your work before proceeding, as the following steps involve resetting your repository. The steps are as follows: Create a new folder and cd to it: git init Create a blank text file in that folder git add . git commit -m \"Reset repo\" git remote add origin <forkrepo url> git push -u --force origin master Delete the folder you created.
Your repository should now be completely clean. Here are the main steps: Clone the main project repository: git clone git://git.eclipse.org/gitroot/epsilon/org.eclipse.epsilon.git Create and switch to a new branch: git checkout -b forkbranch Add the remote repository for the branch: git remote add forkrepo <fork url> Set the remote repository you're going to upload to: git branch --set-upstream-to=forkrepo/master Confirm that the following outputs \"forkrepo/master\": git rev-parse --abbrev-ref --symbolic-full-name @{u} Set the default push branch to be the same as the tracking (in this case, it will be master): git config push.default upstream If you have already initialized your fork repository, you need to get the files first: git fetch forkrepo master This is the crucial step, as it allows you to merge your fork repository with the main project's repository: git pull --allow-unrelated-histories Add the files from the main project to your commit: git commit -m \"Original, unmodified files\" This will upload all the files to your fork repository: git push --repo=forkrepo Confirm that the following outputs \"On branch forkbranch Your branch is up-to-date with 'forkrepo/master'. nothing to commit, working tree clean\": git status Set the default push to your fork repository: git config remote.pushDefault forkrepo For more information on using Git, please refer to the documentation .","title":"Forking Epsilon as a non-committer with Git"},{"location":"doc/articles/git-fork-epsilon/#forking-epsilon-as-a-non-committer-with-git","text":"This article demonstrates how you can extend Epsilon and continue to receive source updates from the main development branch whilst also being able to commit changes to your own repository using the git version control system. The idea is to clone the Epsilon repository, make a branch and then set the remote for that branch to the master branch of your private repository.
This allows you to maintain the history of Epsilon and so later on your changes can be merged into the main development branch if and when you gain committer privileges. For the rest of this article, we shall refer to the main Epsilon project as \"origin\". Specifically, we are referring to the \"master\" branch of Epsilon. For the extension project, we shall call it \"fork\". Specifically, \"forkbranch\" refers to the branch name of the extension project, whilst \"forkrepo\" refers to the repository name of the extension project. We will assume that the \"master\" branch of forkrepo is used to host the forkbranch for simplicity. If you have already set up your repository and have some content on it, you should back up all of your work before proceeding, as the following steps involve resetting your repository. The steps are as follows: Create a new folder and cd to it: git init Create a blank text file in that folder git add . git commit -m \"Reset repo\" git remote add origin <forkrepo url> git push -u --force origin master Delete the folder you created. Your repository should now be completely clean.
Here are the main steps: Clone the main project repository: git clone git://git.eclipse.org/gitroot/epsilon/org.eclipse.epsilon.git Create and switch to a new branch: git checkout -b forkbranch Add the remote repository for the branch: git remote add forkrepo <fork url> Set the remote repository you're going to upload to: git branch --set-upstream-to=forkrepo/master Confirm that the following outputs \"forkrepo/master\": git rev-parse --abbrev-ref --symbolic-full-name @{u} Set the default push branch to be the same as the tracking (in this case, it will be master): git config push.default upstream If you have already initialized your fork repository, you need to get the files first: git fetch forkrepo master This is the crucial step, as it allows you to merge your fork repository with the main project's repository: git pull --allow-unrelated-histories Add the files from the main project to your commit: git commit -m \"Original, unmodified files\" This will upload all the files to your fork repository: git push --repo=forkrepo Confirm that the following outputs \"On branch forkbranch Your branch is up-to-date with 'forkrepo/master'. nothing to commit, working tree clean\": git status Set the default push to your fork repository: git config remote.pushDefault forkrepo For more information on using Git, please refer to the documentation .","title":"Forking Epsilon as a non-committer with Git"},{"location":"doc/articles/hutn-basic/","text":"Using the Human-Usable Textual Notation (HUTN) in Epsilon \u00b6 In this article we demonstrate how you can use a textual notation to create models using the Human-Usable Textual Notation (HUTN) implementation provided by Epsilon. Please note that, currently, HUTN works only with EMF, and cannot be used to create models for other modelling technologies, such as MDR or plain XML. Getting started \u00b6 To create a model with HUTN, we first need to define our metamodel.
In this example, we'll use the Families metamodel shown below: Once we have created our metamodel and registered it with Epsilon, we create a new HUTN document by clicking File\u2192New\u2192Other... and selecting HUTN File . The metamodel nsuri field should contain the namespace URI of our metamodel: families Epsilon will initialise a HUTN file for our metamodel (as shown below). We can now specify and then generate our model. @Spec { metamodel \"families\" { nsUri: \"families\" } } families { // Place your model element specifications here } HUTN Syntax \u00b6 We now briefly describe the HUTN syntax. We can specify an instance of Family using the following HUTN: Family { name: \"The Smiths\" lotteryNumbers: 10, 24, 26, 32, 45, 49 } Note that multi-valued features can be specified using a comma separated list. Containment references \u00b6 Containment references are specified by nesting model element definitions. For example, the following HUTN specifies two members, John and Jane of the Smiths: Family { name: \"The Smiths\" lotteryNumbers: 10, 24, 26, 32, 45, 49 members: Person { name: \"John Smith\" }, Person { name: \"Jane Smith\" } } Non-containment references \u00b6 Non-containment references are specified using a HUTN identifier, which is the string appearing in double-quotes as part of an object's declaration. Below, the second Family has the identifier \"bloggs.\" In the following HUTN, the first family references the second family, using the familyFriends reference: Family { familyFriends: Family \"bloggs\" } Family \"bloggs\" {} Cross-model references \u00b6 References to model elements stored in another file are specified using a URI fragment: Family { familyFriends: Family \"../families/AnotherNeighbourhood.model#/1/\" familyFriends: Family \"../families/AnotherNeighbourhood.model#_swAAYJX5Ed2TbbKclPHPaA\" } URI fragments can have either a relative (XPath-like) syntax, or use a unique identifier.
For example, the first reference above uses a relative syntax to refer to the second (index of 1) Family in the AnotherNeighbourhood.model file. For more information on URI fragments, see the relevant section here . Shortcuts \u00b6 There are some syntactic shortcuts in HUTN, which we now demonstrate. Objects do not have to specify a body, and can instead be terminated with a semi-colon: Family {} // is equivalent to: Family; Although boolean-valued attributes can be specified using true or false values, they can also be specified as a prefix on the model element definition: Family { nuclear: false migrant: true } // is equivalent to: ~nuclear migrant Family; Non-containment references can be specified using association blocks or even with an infix notation: Family { familyFriends: Family \"bloggs\" } Family \"bloggs\"; // is equivalent to the following association block Family \"smiths\"; Family \"bloggs\"; familyFriends { \"smiths\" \"bloggs\" // More familyFriends can be specified here } // is equivalent to the following infix notation: Family \"smiths\"; Family \"bloggs\"; Family \"smiths\" familyFriends Family \"bloggs\"; Generating a model from HUTN \u00b6 When we have finished specifying our HUTN, we can generate a corresponding model. Right-click the HUTN document and select HUTN\u2192Generate Model , as shown below Epsilon can automatically generate a model whenever you change your HUTN file. Right-click your project and select HUTN\u2192Enable HUTN Project Nature . This is illustrated in the following screenshot: Additional resources \u00b6 http://www.omg.org/spec/HUTN/ : The OMG HUTN specification. 
http://dx.doi.org/10.1007/978-3-540-87875-9_18 : Our MoDELS/UML 2008 paper on the HUTN implementation provided by Epsilon.","title":"Using the Human-Usable Textual Notation (HUTN) in Epsilon"},{"location":"doc/articles/hutn-basic/#using-the-human-usable-textual-notation-hutn-in-epsilon","text":"In this article we demonstrate how you can use a textual notation to create models using the Human-Usable Textual Notation (HUTN) implementation provided by Epsilon. Please note that, currently, HUTN works only with EMF, and cannot be used to create models for other modelling technologies, such as MDR or plain XML.","title":"Using the Human-Usable Textual Notation (HUTN) in Epsilon"},{"location":"doc/articles/hutn-basic/#getting-started","text":"To create a model with HUTN, we first need to define our metamodel. In this example, we'll use the Families metamodel shown below: Once we have created our metamodel and registered it with Epsilon, we create a new HUTN document by clicking File\u2192New\u2192Other... and selecting HUTN File . The metamodel nsuri field should contain the namespace URI of our metamodel: families Epsilon will initialise a HUTN file for our metamodel (as shown below). We can now specify and then generate our model. @Spec { metamodel \"families\" { nsUri: \"families\" } } families { // Place your model element specifications here }","title":"Getting started"},{"location":"doc/articles/hutn-basic/#hutn-syntax","text":"We now briefly describe the HUTN syntax. We can specify an instance of Family using the following HUTN: Family { name: \"The Smiths\" lotteryNumbers: 10, 24, 26, 32, 45, 49 } Note that multi-valued features can be specified using a comma separated list.","title":"HUTN Syntax"},{"location":"doc/articles/hutn-basic/#containment-references","text":"Containment references are specified by nesting model element definitions.
For example, the following HUTN specifies two members, John and Jane of the Smiths: Family { name: \"The Smiths\" lotteryNumbers: 10, 24, 26, 32, 45, 49 members: Person { name: \"John Smith\" }, Person { name: \"Jane Smith\" } }","title":"Containment references"},{"location":"doc/articles/hutn-basic/#non-containment-references","text":"Non-containment references are specified using a HUTN identifier, which is the string appearing in double-quotes as part of an object's declaration. Below, the second Family has the identifier \"bloggs.\" In the following HUTN, the first family references the second family, using the familyFriends reference: Family { familyFriends: Family \"bloggs\" } Family \"bloggs\" {}","title":"Non-containment references"},{"location":"doc/articles/hutn-basic/#cross-model-references","text":"References to model elements stored in another file are specified using a URI fragment: Family { familyFriends: Family \"../families/AnotherNeighbourhood.model#/1/\" familyFriends: Family \"../families/AnotherNeighbourhood.model#_swAAYJX5Ed2TbbKclPHPaA\" } URI fragments can have either a relative (XPath-like) syntax, or use a unique identifier. For example, the first reference above uses a relative syntax to refer to the second (index of 1) Family in the AnotherNeighbourhood.model file. For more information on URI fragments, see the relevant section here .","title":"Cross-model references"},{"location":"doc/articles/hutn-basic/#shortcuts","text":"There are some syntactic shortcuts in HUTN, which we now demonstrate.
Objects do not have to specify a body, and can instead be terminated with a semi-colon: Family {} // is equivalent to: Family; Although boolean-valued attributes can be specified using true or false values, they can also be specified as a prefix on the model element definition: Family { nuclear: false migrant: true } // is equivalent to: ~nuclear migrant Family; Non-containment references can be specified using association blocks or even with an infix notation: Family { familyFriends: Family \"bloggs\" } Family \"bloggs\"; // is equivalent to the following association block Family \"smiths\"; Family \"bloggs\"; familyFriends { \"smiths\" \"bloggs\" // More familyFriends can be specified here } // is equivalent to the following infix notation: Family \"smiths\"; Family \"bloggs\"; Family \"smiths\" familyFriends Family \"bloggs\";","title":"Shortcuts"},{"location":"doc/articles/hutn-basic/#generating-a-model-from-hutn","text":"When we have finished specifying our HUTN, we can generate a corresponding model. Right-click the HUTN document and select HUTN\u2192Generate Model , as shown below Epsilon can automatically generate a model whenever you change your HUTN file. Right-click your project and select HUTN\u2192Enable HUTN Project Nature . This is illustrated in the following screenshot:","title":"Generating a model from HUTN"},{"location":"doc/articles/hutn-basic/#additional-resources","text":"http://www.omg.org/spec/HUTN/ : The OMG HUTN specification. http://dx.doi.org/10.1007/978-3-540-87875-9_18 : Our MoDELS/UML 2008 paper on the HUTN implementation provided by Epsilon.","title":"Additional resources"},{"location":"doc/articles/hutn-compliance/","text":"Compliance of Epsilon HUTN to the OMG Standard \u00b6 Epsilon HUTN is an implementation of the OMG HUTN standard . Epsilon HUTN implements most of the OMG standard, but there are some differences between the two. 
This article summarises the similarities and differences between Epsilon HUTN and the OMG HUTN standard. Feature Section of the OMG HUTN Standard Supported in Epsilon HUTN? Details of support in Epsilon HUTN Packages Section 6.2 Yes Classes Section 6.3 Partial Epsilon HUTN provides an extra syntactic shortcut. Not yet supported: parametric attributes and enumeration adjectives. Attributes Section 6.4 Yes Epsilon HUTN corrects a mistake in the HUTN standard. References Sections 6.5 and 6.8 Yes Limitation: Epsilon HUTN only allows absolute references for non-containment features. Classifier-Level Attributes Section 6.6 Yes Data values Section 6.7 Yes Epsilon HUTN supports Ecore (EMF) types, rather than MOF types. Inline configuration Section 6.9 No A configuration model is used instead. Configuration rules Section 5 Partial Currently supported: IdentifierConfig and DefaultValueConfig rules. Extra Object Shorthand \u00b6 Epsilon HUTN allows classes with no body to be terminated with a semi-colon rather than with a pair of empty brackets, for example the following are equivalent in Epsilon HUTN: Family \"The Smiths\" {} Family \"The Smiths\"; This form appears in Figure 6.5 of the HUTN specification, but oddly is not supported in the grammar provided by the HUTN specification. Parametric Attributes \u00b6 The HUTN specification allows classes to be instantiated with arguments, for example: Coordinate (3.5, 7.3) {} The above code assumes that configuration rules have been specified for the parameters of Coordinate. Epsilon HUTN does not currently support this form. Instead, the following code can be used: Coordinate { x: 3.5 y: 7.3 } Enumeration Adjectives \u00b6 The HUTN specification allows objects to be prefixed with enumeration values as adjectives, for example: poodle Dog {} The above code assumes that configuration rules have been specified to configure Dog to accept a feature, \"breed,\" as an enumeration adjective.
Epsilon HUTN does not currently support this form. Instead, the following code can be used: Dog { breed: poodle } Potential error in the OMG HUTN Specification \u00b6 Section 6.4 of the OMG HUTN specification appears to contain an error. Grammar rule [20] implies that AttributeName is optional in specifying a KeywordAttribute. However, the semantics of an empty KeywordAttribute or a single tilde as a KeywordAttribute are not defined. Epsilon HUTN ensures that an AttributeName is specified for every KeywordAttribute. Absolute References \u00b6 The HUTN specification allows relative referencing for non-containment references. For example: ShapePackage \"triangles\" { Polygon \"my_triangle\" { Coordinate (3.6, 7.3) {} Coordinate (5.2, 7.6) {} Coordinate (9.4, 13) {} } } ShapePackage \"lines\" { Polygon \"my_line\" { Coordinate (4.6, 78.3) {} Coordinate (10.4, 1.5) {} } Diagram \"my_diagram\" { shapes: \"//triangles/my_triangle\", \"/my_line\" } } The Diagram object references two Polygons: \"my_triangle\" and \"my_line\". The first is referenced with respect to the root of the document (\"//\"), while the second is referenced with respect to the current package (\"/\"). Epsilon HUTN does not support relative referencing, and all references are resolved with respect to the diagram root. The \"//\" prefix is omitted: Diagram \"my_diagram\" { shapes: \"my_triangle\", \"my_line\" }","title":"Compliance of Epsilon HUTN to the OMG Standard"},{"location":"doc/articles/hutn-compliance/#compliance-of-epsilon-hutn-to-the-omg-standard","text":"Epsilon HUTN is an implementation of the OMG HUTN standard . Epsilon HUTN implements most of the OMG standard, but there are some differences between the two. This article summarises the similarities and differences between Epsilon HUTN and the OMG HUTN standard. Feature Section of the OMG HUTN Standard Supported in Epsilon HUTN? 
Details of support in Epsilon HUTN Packages Section 6.2 Yes Classes Section 6.3 Partial Epsilon HUTN provides an extra syntactic shortcut. Not yet supported: parametric attributes and enumeration adjectives. Attributes Section 6.4 Yes Epsilon HUTN corrects a mistake in the HUTN standard. References Sections 6.5 and 6.8 Yes Limitation: Epsilon HUTN only allows absolute references for non-containment features. Classifier-Level Attributes Section 6.6 Yes Data values Section 6.7 Yes Epsilon HUTN supports Ecore (EMF) types, rather than MOF types. Inline configuration Section 6.9 No A configuration model is used instead. Configuration rules Section 5 Partial Currently supported: IdentifierConfig and DefaultValueConfig rules.","title":"Compliance of Epsilon HUTN to the OMG Standard"},{"location":"doc/articles/hutn-compliance/#extra-object-shorthand","text":"Epsilon HUTN allows classes with no body to be terminated with a semi-colon rather than with a pair of empty brackets, for example the following are equivalent in Epsilon HUTN: Family \"The Smiths\" {} Family \"The Smiths\"; This form appears in Figure 6.5 of the HUTN specification, but oddly is not supported in the grammar provided by the HUTN specification.","title":"Extra Object Shorthand"},{"location":"doc/articles/hutn-compliance/#parametric-attributes","text":"The HUTN specification allows classes to be instantiated with arguments, for example: Coordinate (3.5, 7.3) {} The above code assumes that configuration rules have been specified for the parameters of Coordinate. Epsilon HUTN does not currently support this form. 
Instead, the following code can be used: Coordinate { x: 3.5 y: 7.3 }","title":"Parametric Attributes"},{"location":"doc/articles/hutn-compliance/#enumeration-adjectives","text":"The HUTN specification allows objects to be prefixed with enumeration values as adjectives, for example: poodle Dog {} The above code assumes that configuration rules have been specified to configure Dog to accept a feature, \"breed,\" as an enumeration adjective. Epsilon HUTN does not currently support this form. Instead, the following code can be used: Dog { breed: poodle }","title":"Enumeration Adjectives"},{"location":"doc/articles/hutn-compliance/#potential-error-in-the-omg-hutn-specification","text":"Section 6.4 of the OMG HUTN specification appears to contain an error. Grammar rule [20] implies that AttributeName is optional in specifying a KeywordAttribute. However, the semantics of an empty KeywordAttribute or a single tilde as a KeywordAttribute are not defined. Epsilon HUTN ensures that an AttributeName is specified for every KeywordAttribute.","title":"Potential error in the OMG HUTN Specification"},{"location":"doc/articles/hutn-compliance/#absolute-references","text":"The HUTN specification allows relative referencing for non-containment references. For example: ShapePackage \"triangles\" { Polygon \"my_triangle\" { Coordinate (3.6, 7.3) {} Coordinate (5.2, 7.6) {} Coordinate (9.4, 13) {} } } ShapePackage \"lines\" { Polygon \"my_line\" { Coordinate (4.6, 78.3) {} Coordinate (10.4, 1.5) {} } Diagram \"my_diagram\" { shapes: \"//triangles/my_triangle\", \"/my_line\" } } The Diagram object references two Polygons: \"my_triangle\" and \"my_line\". The first is referenced with respect to the root of the document (\"//\"), while the second is referenced with respect to the current package (\"/\"). Epsilon HUTN does not support relative referencing, and all references are resolved with respect to the diagram root. 
The \"//\" prefix is omitted: Diagram \"my_diagram\" { shapes: \"my_triangle\", \"my_line\" }","title":"Absolute References"},{"location":"doc/articles/hutn-configuration/","text":"Customising Epsilon HUTN documents with configuration \u00b6 In this article we demonstrate how you can use the configuration features of Epsilon HUTN to customise your HUTN documents. For an introduction to modelling with HUTN, please refer to this article . Getting started \u00b6 Throughout this article, we'll use the following metamodel: Suppose we've already constructed a Families model using the following HUTN source: @Spec { metamodel \"families\" { nsUri: \"families\" } } families { Family { name: \"The Smiths\" familyFriends: Family \"does\", Family \"bloggs\" } Family \"does\" { name: \"The Does\" migrant: true } Family \"bloggs\" { name: \"The Bloggs\" migrant: true } } There is some duplication in the HUTN document above. Firstly, the identifiers used to reference Family objects are very similar to the families' names. Secondly, the migrant property is set to true in two of the three Families. A HUTN configuration model can be used to customise the document and reduce the duplication. A HUTN configuration model comprises rules, which customise the HUTN document. The remainder of this article describes how to create and use a configuration model to specify default values for properties and inferred values for identifiers. Creating a HUTN configuration model \u00b6 To create a HUTN configuration model, select File\u2192New\u2192Other\u2192Epsilon\u2192EMF Model . Specify a filename ending in .model, select the HUTN config metamodel URI and select Configuration as the root element. The dialogue should then look like this: After opening the resulting configuration model, new rules can be added. 
Right-click the configuration element, select New Child\u2192Rules Default Value Rule and New Child\u2192Rules Identifier Rule to create two rules: Default value rules are used to specify a value that will be used when the HUTN source document does not specify a value for a feature. Right-click the newly created default value rule and select Show Properties View . Specify Family as the classifier, migrant as the attribute and true as the value: Identifier rules are used to specify an attribute that will be used to identify model elements in the HUTN source document. Right-click the identifier rule and select Show Properties View . Specify Family as the classifier, and name as the attribute: Using a HUTN configuration model \u00b6 To make use of the configuration model, the preamble of the HUTN document must be changed to the following: @Spec { metamodel \"families\" { nsUri: \"families\" configFile: \"FamiliesConfig.model\" } } Note the extra line that references the configuration model. The value of the configFile attribute is a path relative to the HUTN document. The body of the HUTN document shown at the start of the article can now be rewritten as follows: families { Family { name: \"The Smiths\" familyFriends: Family \"The Does\", Family \"The Bloggs\" migrant: false } Family \"The Does\" {} Family \"The Bloggs\" {} } The identifiers specified for the last two families also specify the value of their name attribute, and so there's no need to explicitly set the name attribute in the body of the Family element. Conversely, the first Family specifies a name (The Smiths), and no identifier. A reference to the first family can use The Smiths as an identifier. Notice also that the migrant attribute values have been removed from The Does and The Bloggs, as the default value is now true. However, The Smiths must now explicitly state that its migrant value should be false. 
Additional resources \u00b6 Article: Using HUTN in Epsilon","title":"Customising Epsilon HUTN documents with configuration"},{"location":"doc/articles/hutn-configuration/#customising-epsilon-hutn-documents-with-configuration","text":"In this article we demonstrate how you can use the configuration features of Epsilon HUTN to customise your HUTN documents. For an introduction to modelling with HUTN, please refer to this article .","title":"Customising Epsilon HUTN documents with configuration"},{"location":"doc/articles/hutn-configuration/#getting-started","text":"Throughout this article, we'll use the following metamodel: Suppose we've already constructed a Families model using the following HUTN source: @Spec { metamodel \"families\" { nsUri: \"families\" } } families { Family { name: \"The Smiths\" familyFriends: Family \"does\", Family \"bloggs\" } Family \"does\" { name: \"The Does\" migrant: true } Family \"bloggs\" { name: \"The Bloggs\" migrant: true } } There is some duplication in the HUTN document above. Firstly, the identifiers used to reference Family objects are very similar to the families' names. Secondly, the migrant property is set to true in two of the three Families. A HUTN configuration model can be used to customise the document and reduce the duplication. A HUTN configuration model comprises rules, which customise the HUTN document. The remainder of this article describes how to create and use a configuration model to specify default values for properties and inferred values for identifiers.","title":"Getting started"},{"location":"doc/articles/hutn-configuration/#creating-a-hutn-configuration-model","text":"To create a HUTN configuration model, select File\u2192New\u2192Other\u2192Epsilon\u2192EMF Model . Specify a filename ending in .model, select the HUTN config metamodel URI and select Configuration as the root element. The dialogue should then look like this: After opening the resulting configuration model, new rules can be added. 
Right-click the configuration element, select New Child\u2192Rules Default Value Rule and New Child\u2192Rules Identifier Rule to create two rules: Default value rules are used to specify a value that will be used when the HUTN source document does not specify a value for a feature. Right-click the newly created default value rule and select Show Properties View . Specify Family as the classifier, migrant as the attribute and true as the value: Identifier rules are used to specify an attribute that will be used to identify model elements in the HUTN source document. Right-click the identifier rule and select Show Properties View . Specify Family as the classifier, and name as the attribute:","title":"Creating a HUTN configuration model"},{"location":"doc/articles/hutn-configuration/#using-a-hutn-configuration-model","text":"To make use of the configuration model, the preamble of the HUTN document must be changed to the following: @Spec { metamodel \"families\" { nsUri: \"families\" configFile: \"FamiliesConfig.model\" } } Note the extra line that references the configuration model. The value of the configFile attribute is a path relative to the HUTN document. The body of the HUTN document shown at the start of the article can now be rewritten as follows: families { Family { name: \"The Smiths\" familyFriends: Family \"The Does\", Family \"The Bloggs\" migrant: false } Family \"The Does\" {} Family \"The Bloggs\" {} } The identifiers specified for the last two families also specify the value of their name attribute, and so there's no need to explicitly set the name attribute in the body of the Family element. Conversely, the first Family specifies a name (The Smiths), and no identifier. A reference to the first family can use The Smiths as an identifier. Notice also that the migrant attribute values have been removed from The Does and The Bloggs, as the default value is now true. 
However, The Smiths must now explicitly state that its migrant value should be false.","title":"Using a HUTN configuration model"},{"location":"doc/articles/hutn-configuration/#additional-resources","text":"Article: Using HUTN in Epsilon","title":"Additional resources"},{"location":"doc/articles/in-memory-emf-model/","text":"Working with custom EMF resources \u00b6 Epsilon's default EMF driver ( EmfModel ) provides little support for customising the underlying EMF resource loading/persistence process (e.g. using custom resource factories, passing parameters to the resource's load/save methods etc.). If you're invoking an Epsilon program from Java and you need more flexibility in this respect, you can use InMemoryEmfModel instead, which is essentially a wrapper for a pre-loaded EMF resource. A skeleton example follows. Resource resource = ...; InMemoryEmfModel model = new InMemoryEmfModel ( resource ); model . setName ( \"M\" ); EolModule module = new EolModule (); module . parse (...); module . getContext (). getModelRepository (). addModel ( model ); module . execute (); resource . save (...);","title":"Working with custom EMF resources"},{"location":"doc/articles/in-memory-emf-model/#working-with-custom-emf-resources","text":"Epsilon's default EMF driver ( EmfModel ) provides little support for customising the underlying EMF resource loading/persistence process (e.g. using custom resource factories, passing parameters to the resource's load/save methods etc.). If you're invoking an Epsilon program from Java and you need more flexibility in this respect, you can use InMemoryEmfModel instead, which is essentially a wrapper for a pre-loaded EMF resource. A skeleton example follows. Resource resource = ...; InMemoryEmfModel model = new InMemoryEmfModel ( resource ); model . setName ( \"M\" ); EolModule module = new EolModule (); module . parse (...); module . getContext (). getModelRepository (). addModel ( model ); module . execute (); resource . 
save (...);","title":"Working with custom EMF resources"},{"location":"doc/articles/inspect-models-exeed/","text":"Inspecting EMF models with Exeed \u00b6 Exeed is an extended version of the built-in EMF reflective editor that enables customisation of labels and icons by adding annotations to Ecore metamodels. Another feature it provides is the ability to display structural information about the elements of an EMF model. To see the types of all elements in the editor tree as well as the feature in which each element is contained, open your EMF model with Exeed and click Exeed->Show Structural Info. By doing this, the structural information of each element appears next to its label. For example, selecting this option for a GMF .gmfgraph model will make it look like this: The red-underlined text shows the type of the element (FigureGallery), the blue-underlined text shows the feature in which it is contained (figures), and the green-underlined text shows the EClass that owns the containing feature (Canvas). So next time you need to open an EMF model with a text editor to inspect its structure by reading the underlying XMI, you may want to consider giving Exeed a try instead.","title":"Inspecting EMF models with Exeed"},{"location":"doc/articles/inspect-models-exeed/#inspecting-emf-models-with-exeed","text":"Exeed is an extended version of the built-in EMF reflective editor that enables customisation of labels and icons by adding annotations to Ecore metamodels. Another feature it provides is the ability to display structural information about the elements of an EMF model. To see the types of all elements in the editor tree as well as the feature in which each element is contained, open your EMF model with Exeed and click Exeed->Show Structural Info. By doing this, the structural information of each element appears next to its label. 
For example, selecting this option for a GMF .gmfgraph model will make it look like this: The red-underlined text shows the type of the element (FigureGallery), the blue-underlined text shows the feature in which it is contained (figures), and the green-underlined text shows the EClass that owns the containing feature (Canvas). So next time you need to open an EMF model with a text editor to inspect its structure by reading the underlying XMI, you may want to consider giving Exeed a try instead.","title":"Inspecting EMF models with Exeed"},{"location":"doc/articles/labsupdatesite/","text":"Publishing your project to the Epsilon Labs Update Site \u00b6 In this article we explain the steps required to publish your Epsilon-related project in the Epsilon Labs update site. General Recommendations \u00b6 As part of the process you will configure your project to be under continuous integration (CI), which is automatically triggered when you push changes to the master branch of your project's git repository. For this reason it is recommended that you create a develop branch in which you make frequent commits/pushes and only merge changes to the master branch when you want to release a new version (you might be interested in GitFlow ). Creating Feature Plugins \u00b6 In the Eclipse world, a feature is a group of one or more plugins that offer a specific functionality within Eclipse. For example, the Epsilon Core feature groups all the plugins that provide support for the core Epsilon languages (EOL, ETL, EGL, etc.) and drivers (CSV, XML, Bibtext, etc.). In order to publish your project you need to create feature plugins. As a minimum you would need to provide two features: one for the base functionality and another for the developer tools. The developer tools are the plugins that provide UI contributions (menus, launchers, etc.). 
For example, the JDBC project provides these two features (developer tools plugins and features should use the dt suffix): org.eclipse.epsilon.emc.jdbc.mysql.feature org.eclipse.epsilon.emc.jdbc.mysql.feature.dt Feature Information \u00b6 NOTE : Correctly fill the feature information. This information is displayed within the Install New Software tool and is therefore the first point of contact between your project and the user. Feature Description Optional URL: Leave blank Text: Meaningful information about the plugins Copyright Notice Optional URL: Leave blank Text: Copyright (c) 2008 The University of York. All rights reserved. Contributors: License Agreement Use the appropriate license agreement. This depends on the libraries you are using. Sites to visit Any important sites of interest (e.g. Epsilon's website) Group your project's plugins \u00b6 Add each of your project plugins to the relevant feature. Remember that your dt plugins should go in your dt (development tools) feature. Create a site.xml \u00b6 An update site contains information about the features and plugins that can be installed from it. In order for the EpsilonLabs update site to know what features/plugins you provide, you must add this information to a site.xml file. You can find a template here or in the EpsilonLabs update site repository (template folder). In a nutshell, site.xml lists the features of your project and provides a category (a logical grouping of features) for your project. Set up CI \u00b6 Go to CircleCI and log in using your GitHub credentials (for simple configuration of the project). Add your project to CircleCI \u00b6 In the top left corner select the epsilonlabs organization. 
Click on Add Project Click on Setup Project In Language select Maven (Java) Skip the CircleCI configuration (we will show you this next) Click on Start Building Set up EpsilonLabs build Trigger \u00b6 Open the epsilonlabs CircleCI project Go to settings Go to API Permissions Copy the token value of the TRIGGER_TOKEN Go to your project Go to settings Go to Environment Variables Add variable: Name : TRIGGER_BUILD, Value : Paste the TRIGGER_TOKEN value Configure CircleCI for your project \u00b6 Create a .circleci folder in the root of your project Create a new config.yml file Use the template provided ( here or in the EpsilonLabs updatesite repository ) and make sure you add a store_artifacts entry for each plugin and feature JAR. Note : The path information points to the target folder which will be populated by Maven (see next). Use Maven + Tycho to build your project \u00b6 We will use a pom-less configuration to build your project with Maven and Tycho. Create a POM for your project. If you divide your projects into plugins, features, tests folders (btw, you should), you need to create a parent pom, and then a pom for each folder. A pom-less build avoids having a pom for each project, but still needs the structural ones. Use the provided template(s), change the artifact id and add your plugins and features to the modules section. The templates are here or in the EpsilonLabs updatesite repository. To enable the pom-less build, copy the extensions.xml file (also available in the repository) to a .mvn folder in your project. Local maven build \u00b6 Install Maven and build your project to test that your poms are correct. You should also make sure that any tests you have coded are executed as part of the Maven build. mvn clean install Check that the required JARs have been created in the target folder. Test your CI build \u00b6 Merge your repository changes (new files) to your master branch and push to GitHub. This should trigger a new build. 
Go to CircleCI and verify that your build completed without errors. CircleCI will simply execute a Maven build, so if your local build succeeded, the CircleCI build should too. Additionally check that all the JARs have been stored as artifacts. In your project's build information page click on the Artifacts tab: Add your project to the EpsilonLabs update site \u00b6 Fork the EpsilonLabs update site repository and add your project in two places: Add a new entry (line) to the projects.txt file. The line should have your project name (the name of the project in GitHub) and the target platform information. In most cases this can be Any . If you only support a specific platform you should provide the correct values. Add a new local repository to the update site's root pom (the project name should match the entry in the projects file): <repository> <id> epsilonlabs-{project} </id> <url> file:///${main.basedir}/repository/{project} </url> <layout> p2 </layout> </repository> Make a pull request to the main EpsilonLabs update site project. After your request has been merged you should see your project in the update site. Additional resources \u00b6 Eclipse p2 publisher","title":"Publishing your project to the Epsilon Labs Update Site"},{"location":"doc/articles/labsupdatesite/#publishing-your-project-to-the-epsilon-labs-update-site","text":"In this article we explain the steps required to publish your Epsilon-related project in the Epsilon Labs update site.","title":"Publishing your project to the Epsilon Labs Update Site"},{"location":"doc/articles/labsupdatesite/#general-recommendations","text":"As part of the process you will configure your project to be under continuous integration (CI), which is automatically triggered when you push changes to the master branch of your project's git repository. 
For this reason it is recommended that you create a develop branch in which you make frequent commits/pushes and only merge changes to the master branch when you want to release a new version (you might be interested in GitFlow ).","title":"General Recommendations"},{"location":"doc/articles/labsupdatesite/#creating-feature-plugins","text":"In the Eclipse world, a feature is a group of one or more plugins that offer a specific functionality within Eclipse. For example, the Epsilon Core feature groups all the plugins that provide support for the core Epsilon languages (EOL, ETL, EGL, etc.) and drivers (CSV, XML, Bibtext, etc.). In order to publish your project you need to create feature plugins. As a minimum you would need to provide two features: one for the base functionality and another for the developer tools. The developer tools are the plugins that provide UI contributions (menus, launchers, etc.). For example, the JDBC project provides these two features (developer tools plugins and features should use the dt suffix): org.eclipse.epsilon.emc.jdbc.mysql.feature org.eclipse.epsilon.emc.jdbc.mysql.feature.dt","title":"Creating Feature Plugins"},{"location":"doc/articles/labsupdatesite/#feature-information","text":"NOTE : Correctly fill the feature information. This information is displayed within the Install New Software tool and is therefore the first point of contact between your project and the user. Feature Description Optional URL: Leave blank Text: Meaningful information about the plugins Copyright Notice Optional URL: Leave blank Text: Copyright (c) 2008 The University of York. All rights reserved. Contributors: License Agreement Use the appropriate license agreement. This depends on the libraries you are using. Sites to visit Any important sites of interest (e.g. Epsilon's website)","title":"Feature Information"},{"location":"doc/articles/labsupdatesite/#group-your-projects-plugins","text":"Add each of your project plugins to the relevant feature. 
Remember that your dt plugins should go in your dt (development tools) feature.","title":"Group your project's plugins"},{"location":"doc/articles/labsupdatesite/#create-a-sitexml","text":"An update site contains information about the features and plugins that can be installed from it. In order for the EpsilonLabs update site to know what features/plugins you provide, you must add this information to a site.xml file. You can find a template here or in the EpsilonLabs update site repository (template folder). In a nutshell, site.xml lists the features of your project and provides a category (a logical grouping of features) for your project.","title":"Create a site.xml"},{"location":"doc/articles/labsupdatesite/#set-up-ci","text":"Go to CircleCI and log in using your GitHub credentials (for simple configuration of the project).","title":"Set up CI"},{"location":"doc/articles/labsupdatesite/#add-your-project-to-circleci","text":"In the top left corner select the epsilonlabs organization. Click on Add Project Click on Setup Project In Language select Maven (Java) Skip the CircleCI configuration (we will show you this next) Click on Start Building","title":"Add your project to CircleCI"},{"location":"doc/articles/labsupdatesite/#set-up-epsilonlabs-build-trigger","text":"Open the epsilonlabs CircleCI project Go to settings Go to API Permissions Copy the token value of the TRIGGER_TOKEN Go to your project Go to settings Go to Environment Variables Add variable: Name : TRIGGER_BUILD, Value : Paste the TRIGGER_TOKEN value","title":"Set up EpsilonLabs build Trigger"},{"location":"doc/articles/labsupdatesite/#configure-circleci-for-your-project","text":"Create a .circleci folder in the root of your project Create a new config.yml file Use the template provided ( here or in the EpsilonLabs updatesite repository ) and make sure you add a store_artifacts entry for each plugin and feature JAR. 
Note : The path information points to the target folder which will be populated by Maven (see next).","title":"Configure CircleCI for your project"},{"location":"doc/articles/labsupdatesite/#use-maven-tycho-to-build-your-project","text":"We will use a pom-less configuration to build your project with Maven and Tycho. Create a POM for your project. If you divide your projects into plugins, features, tests folders (btw, you should), you need to create a parent pom, and then a pom for each folder. A pom-less build avoids having a pom for each project, but still needs the structural ones. Use the provided template(s), change the artifact id and add your plugins and features to the modules section. The templates are here or in the EpsilonLabs updatesite repository. To enable the pom-less build, copy the extensions.xml file (also available in the repository) to a .mvn folder in your project.","title":"Use Maven + Tycho to build your project"},{"location":"doc/articles/labsupdatesite/#local-maven-build","text":"Install Maven and build your project to test that your poms are correct. You should also make sure that any tests you have coded are executed as part of the Maven build. mvn clean install Check that the required JARs have been created in the target folder.","title":"Local maven build"},{"location":"doc/articles/labsupdatesite/#test-your-ci-build","text":"Merge your repository changes (new files) to your master branch and push to GitHub. This should trigger a new build. Go to CircleCI and verify that your build completed without errors. CircleCI will simply execute a Maven build, so if your local build succeeded, the CircleCI build should too. Additionally check that all the JARs have been stored as artifacts. 
In your project's build information page click on the Artifacts tab:","title":"Test your CI build"},{"location":"doc/articles/labsupdatesite/#add-your-project-to-the-epsilonlabs-update-site","text":"Fork the EpsilonLabs update site repository and add your project in two places: Add a new entry (line) to the projects.txt file. The line should have your project name (the name of the project in GitHub) and the target platform information. In most cases this can be Any . If you only support a specific platform you should provide the correct values. Add a new local repository to the update site's root pom (the project name should match the entry in the projects file): <repository> <id> epsilonlabs-{project} </id> <url> file:///${main.basedir}/repository/{project} </url> <layout> p2 </layout> </repository> Make a pull request to the main EpsilonLabs update site project. After your request has been merged you should see your project in the update site.","title":"Add your project to the EpsilonLabs update site"},{"location":"doc/articles/labsupdatesite/#additional-resources","text":"Eclipse p2 publisher","title":"Additional resources"},{"location":"doc/articles/lambda-expressions/","text":"Native lambda expressions \u00b6 Whilst EOL has many useful declarative operations built in, some applications and developers may benefit from using alternative implementations, such as the Java Streams API . Epsilon now allows you to invoke functional interfaces using EOL first-order operation syntax. Provided that the method being invoked takes one or more functional interface s as a parameter and the correct number of parameters are supplied to each interface, this integration should work seamlessly as with regular first-order operation call expressions. 
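For readers more familiar with plain Java, the IntStream pipeline that the EOL snippets below delegate to can be written directly as follows. This is an illustrative sketch, independent of Epsilon; the class name is invented for the example.

```java
import java.util.OptionalInt;
import java.util.stream.IntStream;

public class StreamSketch {
    public static void main(String[] args) {
        // Same pipeline as the EOL example: range, filter, findFirst.
        OptionalInt optional = IntStream.range(0, 16)
                .filter(i -> i / 4 >= 2) // first match is i = 8
                .findFirst();

        // orElse always evaluates its argument, even when a value is present;
        // orElseGet(Supplier) only evaluates the supplier when the value is absent.
        int result = optional.orElse(64 / 4);
        System.out.println(result); // prints 8, since a value is present
    }
}
```

Note that `orElse` eagerly computes its fallback, which is why the EOL example contrasts it with the lazily-evaluated `orElseGet`.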
For lambda expressions which do not require a parameter, you can omit the parameter, or use null or _ in its place, like so: var optional = Native(\"java.util.stream.IntStream\") .range(0, 16) .filter(i | i / 4 >= 2) .findFirst(); optional.orElse(64/4); // No lambda - literal value always calculated even if not present. optional.orElseGet(null | someIntensiveCalculation()); // Evaluation only occurs if no value is present. optional.orElseThrow(| new Native(\"org.eclipse.epsilon.eol.exceptions.EolRuntimeException\")); Here is an example of how one could use Java Streams and the equivalent approach using EOL (i.e. without native delegation): var Collectors = Native(\"java.util.stream.Collectors\"); var testData = Sequence{-1024..1024}; var positiveOddsSquaredEol = testData .select(i | i >= 0 and i.mod(2) > 0) .collect(i | i * i) .asSet(); var positiveOddsSquaredJava = testData.stream() .filter(i | i >= 0 and i.mod(2) > 0) .map(i | i * i) .collect(Collectors.toSet()); assertEquals(positiveOddsSquaredEol, positiveOddsSquaredJava); One benefit of using Streams is lazy evaluation, which allows you to chain a series of operations without executing the entire pipeline on all elements. This can be more efficient since streams are not materialised in intermediate operations, unlike EOL first-order operations, which always return a collection and are thus evaluated eagerly. As with built-in EOL operations, Streams also support parallel execution, although this must be explicitly specified with the .parallel() property on the stream. Currently EOL does not support operations which require a simple variable and non-functional interface as a parameter, such as the iterate operation. To work around this, you can assign lambda expressions to variables, deriving them by calling a built-in operation to obtain the desired type. 
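In plain Java, such variables are simply instances of the java.util.function interfaces; the following sketch (illustrative only, not Epsilon API) mirrors the EOL helper calls shown below. The class name is invented for the example.

```java
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

public class LambdaVariables {
    public static void main(String[] args) {
        // Rough Java counterparts of EOL's unary(...), predicate(...) and func(...) helpers.
        UnaryOperator<Integer> doubler = i -> i * 2;
        Predicate<Integer> isEvenTester = i -> i % 2 == 0;
        Function<String, Integer> hasher = String::hashCode;

        System.out.println(doubler.apply(8));         // 16
        System.out.println(isEvenTester.test(3));     // false
        System.out.println(hasher.apply("a string")); // -1007761232
    }
}
```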
// UnaryOperator var doubler = unary(i | i * 2); assertEquals(16, doubler.apply(8)); // Predicate var isEvenTester = predicate(i | i.mod(2) == 0); assertFalse(isEvenTester.test(3)); // Function var hasher = func(x | x.hashCode()); assertEquals(-1007761232, hasher.apply(\"a string\")); // Consumer var printer = consumer(x | x.println()); printer.accept(\"Testing...\"); // Supplier var threadSafeCollectionMaker = supplier( | new Native(\"java.util.concurrent.ConcurrentLinkedDeque\")); var deque = threadSafeCollectionMaker.get(); // Runnable var sayHi = runnable( | \"Hello, World!\".println()); sayHi.run(); Streams vs EOL cheat sheet \u00b6 Aside from the fact that streams are lazy and Epsilon operations are eager, there is some inevitable overlap in their functionality. This section provides an equivalence mapping from Epsilon to Java Streams to help you migrate from one to the other. select => filter collect => map forAll => allMatch exists => anyMatch none => noneMatch nMatch => No efficient short-circuiting equivalent, but the result can be achieved using filter followed by .count() == n count => count one => Same as nMatch with n = 1 selectOne => filter followed by .findAny() / .findFirst() , then .orElse(null) if null is the desired result when no element matches reject => Same as select with a negated predicate sortBy => sorted mapBy => .collect(Collectors.groupingBy) aggregate => .collect(Collectors.toMap) In addition, non-first-order operations on Epsilon collection types can be simulated as follows for streams: flatten => .flatMap(c | c.stream()) -- please note that flatten is recursive whilst flatMap is not sum => .filter(e | e.isInteger()).mapToInt(i | i).sum() -- replace Int/Integer with appropriate type (Long, Double etc.) 
min / max => Same as sum but replace the last call with min or max as required product => Same as sum but replace the last call with .reduce(i1, i2 | i1 * i2).getAsLong() -- replace Long with appropriate type asBag => .collect(Collectors.toCollection(| new Bag)) asSequence / asSet / asOrderedSet => Same as asBag but replace Bag with desired type Please note that streams are one-shot and the pipeline cannot be re-used once a terminal operation is invoked (see the API for details).","title":"Native lambda expressions"},{"location":"doc/articles/lambda-expressions/#native-lambda-expressions","text":"Whilst EOL has many useful declarative operations built in, some applications and developers may benefit from using alternative implementations, such as the Java Streams API . Epsilon now allows you to invoke functional interfaces using EOL first-order operation syntax. Provided that the method being invoked takes one or more functional interfaces as a parameter and the correct number of parameters is supplied to each interface, this integration should work seamlessly as with regular first-order operation call expressions. For lambda expressions which do not require a parameter, you can either omit the parameter, or use null or _ in its place, like so: var optional = Native(\"java.util.stream.IntStream\") .range(0, 16) .filter(i | i / 4 >= 2) .findFirst(); optional.orElse(64/4); // No lambda - literal value always calculated even if not present. optional.orElseGet(null | someIntensiveCalculation()); // Evaluation only occurs if no value is present. optional.orElseThrow(| new Native(\"org.eclipse.epsilon.eol.exceptions.EolRuntimeException\")); Here is an example of how one could use Java Streams and the equivalent approach using EOL (i.e. 
without native delegation): var Collectors = Native(\"java.util.stream.Collectors\"); var testData = Sequence{-1024..1024}; var positiveOddsSquaredEol = testData .select(i | i >= 0 and i.mod(2) > 0) .collect(i | i * i) .asSet(); var positiveOddsSquaredJava = testData.stream() .filter(i | i >= 0 and i.mod(2) > 0) .map(i | i * i) .collect(Collectors.toSet()); assertEquals(positiveOddsSquaredEol, positiveOddsSquaredJava); One benefit of using Streams is lazy evaluation, which allows you to chain a series of operations without executing the entire pipeline on all elements. This can be more efficient since streams are not materialised in intermediate operations, unlike EOL first-order operations which always return a collection and are thus evaluated eagerly. As with built-in EOL operations, Streams also support parallel execution, although this must be explicitly specified with the .parallel() method on the stream. Currently EOL does not support operations which require a simple variable and non-functional interface as a parameter, such as the iterate operation. To work around this, you can assign lambda expressions to variables, deriving them by calling a built-in operation to obtain the desired type. 
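The built-in operations shown next (unary, predicate, func and friends) produce instances of the standard java.util.function interfaces. For reference, here is a minimal plain-Java sketch of those interfaces; the class name and literal values are illustrative:

```java
import java.util.function.Function;
import java.util.function.Predicate;
import java.util.function.UnaryOperator;

public class FunctionalInterfaceSketch {
    // UnaryOperator<T>: one argument, result of the same type
    static final UnaryOperator<Integer> DOUBLER = i -> i * 2;
    // Predicate<T>: one argument, boolean result
    static final Predicate<Integer> IS_EVEN = i -> i % 2 == 0;
    // Function<T, R>: one argument, result of any type
    static final Function<Integer, Integer> SQUARER = x -> x * x;

    public static void main(String[] args) {
        System.out.println(DOUBLER.apply(8));  // 16
        System.out.println(IS_EVEN.test(3));   // false
        System.out.println(SQUARER.apply(12)); // 144
    }
}
```

Each EOL built-in simply wraps an EOL lambda in the corresponding interface so it can be stored in a variable and invoked later.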
// UnaryOperator var doubler = unary(i | i * 2); assertEquals(16, doubler.apply(8)); // Predicate var isEvenTester = predicate(i | i.mod(2) == 0); assertFalse(isEvenTester.test(3)); // Function var hasher = func(x | x.hashCode()); assertEquals(-1007761232, hasher.apply(\"a string\")); // Consumer var printer = consumer(x | x.println()); printer.accept(\"Testing...\"); // Supplier var threadSafeCollectionMaker = supplier( | new Native(\"java.util.concurrent.ConcurrentLinkedDeque\")); var deque = threadSafeCollectionMaker.get(); // Runnable var sayHi = runnable( | \"Hello, World!\".println()); sayHi.run();","title":"Native lambda expressions"},{"location":"doc/articles/lambda-expressions/#streams-vs-eol-cheat-sheet","text":"Aside from the fact that streams are lazy and Epsilon operations are eager, there is some inevitable overlap in their functionality. This section provides an equivalence mapping from Epsilon to Java Streams to help you migrate from one to the other. select => filter collect => map forAll => allMatch exists => anyMatch none => noneMatch nMatch => No efficient short-circuiting equivalent, but the result can be achieved using filter followed by .count() == n count => count one => Same as nMatch with n = 1 selectOne => filter followed by .findAny() / .findFirst() , then .orElse(null) if null is the desired result when no element matches reject => Same as select with a negated predicate sortBy => sorted mapBy => .collect(Collectors.groupingBy) aggregate => .collect(Collectors.toMap) In addition, non-first-order operations on Epsilon collection types can be simulated as follows for streams: flatten => .flatMap(c | c.stream()) -- please note that flatten is recursive whilst flatMap is not sum => .filter(e | e.isInteger()).mapToInt(i | i).sum() -- replace Int/Integer with appropriate type (Long, Double etc.) 
min / max => Same as sum but replace the last call with min or max as required product => Same as sum but replace the last call with .reduce(i1, i2 | i1 * i2).getAsLong() -- replace Long with appropriate type asBag => .collect(Collectors.toCollection(| new Bag)) asSequence / asSet / asOrderedSet => Same as asBag but replace Bag with desired type Please note that streams are one-shot and the pipeline cannot be re-used once a terminal operation is invoked (see the API for details).","title":"Streams vs EOL cheat sheet"},{"location":"doc/articles/manage-the-epsilon-website-locally/","text":"Manage the Epsilon web site locally \u00b6 This article provides a step-by-step guide for obtaining a local copy of the Epsilon website. The website is managed using the mkdocs library. The content is organised in different Markdown files, from which a static website can be generated. Setting up your environment \u00b6 Clone the Git repository at ssh://user_id@git.eclipse.org:29418/www.eclipse.org/epsilon.git if you are a project committer, or at git://git.eclipse.org/gitroot/www.eclipse.org/epsilon.git if not. Download and install virtualenv . Navigate to the mkdocs folder, and run ./serve.sh from a terminal. The first time this command is run, a Python virtual environment will be created under the mkdocs/env directory. After the environment is ready (and on subsequent calls to ./serve.sh ), a local web server containing the Epsilon website will be running at http://localhost:8000 . Real-time modification of the website \u00b6 All the Markdown sources of the website are contained in the mkdocs folder. After running the ./serve.sh command, we can alter these sources, and the changes will be reflected automatically in the local website. This is very useful to get quick feedback on our changes, as we do not have to regenerate the website each time we make a modification. 
To shut down the local web server at any time, hit CTRL + C on the terminal you used to launch it in the first place. Building the static site \u00b6 Once you're happy with the changes you've made to the Markdown sources, you can re-generate the static website. To do so, run ./build.sh and wait for it to finish. Updating the website contents \u00b6 As a convention for project committers, introducing a change in the website is usually separated into two commits: the first one contains any changes to the Markdown sources, while the second one includes the result of building the static site again as described in the previous section. If you are not a committer, but you find any typos or parts of the website that do not work as they should, thanks for letting us know ! Finding broken links \u00b6 wget and grep can be used to find broken links in the Epsilon website. First, run the website locally by executing the ./serve.sh command as described above. Then, we will traverse the website using wget with this command: wget -e robots=off --spider -r --no-parent -o wget_errors.txt http://localhost:8000 We have used these options: -e robots=off makes wget ignore robots.txt . This is OK in this case, as we're running the spider on our own local server. --spider prevents wget from downloading page requisites that do not contain links -r makes wget traverse through links --no-parent prevents wget from leaving /gmt/epsilon/ -o wget_errors.txt collects all messages in the wget_errors.txt file Once it's done, we can simply search for the word \"404\" in the log, with: grep -B2 -w 404 wget_errors.txt We will get a list of all the URLs which reported 404 (Not Found) HTTP error codes.","title":"Manage the Epsilon web site locally"},{"location":"doc/articles/manage-the-epsilon-website-locally/#manage-the-epsilon-web-site-locally","text":"This article provides a step-by-step guide for obtaining a local copy of the Epsilon website. The website is managed using the mkdocs library. 
The content is organised in different Markdown files, from which a static website can be generated.","title":"Manage the Epsilon web site locally"},{"location":"doc/articles/manage-the-epsilon-website-locally/#setting-up-your-environment","text":"Clone the Git repository at ssh://user_id@git.eclipse.org:29418/www.eclipse.org/epsilon.git if you are a project committer, or at git://git.eclipse.org/gitroot/www.eclipse.org/epsilon.git if not. Download and install virtualenv . Navigate to the mkdocs folder, and run ./serve.sh from a terminal. The first time this command is run, a Python virtual environment will be created under the mkdocs/env directory. After the environment is ready (and on subsequent calls to ./serve.sh ), a local web server containing the Epsilon website will be running at http://localhost:8000 .","title":"Setting up your environment"},{"location":"doc/articles/manage-the-epsilon-website-locally/#real-time-modification-of-the-website","text":"All the Markdown sources of the website are contained in the mkdocs folder. After running the ./serve.sh command, we can alter these sources, and the changes will be reflected automatically in the local website. This is very useful to get quick feedback on our changes, as we do not have to regenerate the website each time we make a modification. To shut down the local web server at any time, hit CTRL + C on the terminal you used to launch it in the first place.","title":"Real-time modification of the website"},{"location":"doc/articles/manage-the-epsilon-website-locally/#building-the-static-site","text":"Once you're happy with the changes you've made to the Markdown sources, you can re-generate the static website. 
To do so, run ./build.sh and wait for it to finish.","title":"Building the static site"},{"location":"doc/articles/manage-the-epsilon-website-locally/#updating-the-website-contents","text":"As a convention for project committers, introducing a change in the website is usually separated into two commits: the first one contains any changes to the Markdown sources, while the second one includes the result of building the static site again as described in the previous section. If you are not a committer, but you find any typos or parts of the website that do not work as they should, thanks for letting us know !","title":"Updating the website contents"},{"location":"doc/articles/manage-the-epsilon-website-locally/#finding-broken-links","text":"wget and grep can be used to find broken links in the Epsilon website. First, run the website locally by executing the ./serve.sh command as described above. Then, we will traverse the website using wget with this command: wget -e robots=off --spider -r --no-parent -o wget_errors.txt http://localhost:8000 We have used these options: -e robots=off makes wget ignore robots.txt . This is OK in this case, as we're running the spider on our own local server. --spider prevents wget from downloading page requisites that do not contain links -r makes wget traverse through links --no-parent prevents wget from leaving /gmt/epsilon/ -o wget_errors.txt collects all messages in the wget_errors.txt file Once it's done, we can simply search for the word \"404\" in the log, with: grep -B2 -w 404 wget_errors.txt We will get a list of all the URLs which reported 404 (Not Found) HTTP error codes.","title":"Finding broken links"},{"location":"doc/articles/maven-release/","text":"Releasing Epsilon to Maven Central \u00b6 This article describes the overall process required to release a new stable version of Epsilon to Maven Central. There are a few steps involved, some of which are outside our control. 
The guide will describe the steps that we do control, and point you to the relevant resources for the others. Preparation \u00b6 The first step is to gain deploy rights to our org.eclipse.epsilon groupId in the Sonatype OSS Nexus repository. To do this, please register at the Sonatype JIRA and give your JIRA username to the Epsilon release engineer(s), so we may file a ticket to have deploy rights granted to you. Testing the Plain Maven build \u00b6 Our plain Maven artifacts are built through a parallel hierarchy of pom-plain.xml files, starting from the root of the Epsilon repository. To do a plain Maven compilation + test build from scratch, simply run this: mvn -f pom-plain.xml clean test Keep in mind that plain Maven builds do not run unit tests, as we already run those in the Tycho build. Make sure that all tests pass in the Tycho build first. Double check the dependencies in the various pom-plain.xml files, especially those related to external libraries. Check the project metadata in the pom-plain.xml file, which lists the current developers, SCM URLs, and other details. Preparing a Maven release branch \u00b6 Once the new stable version of Epsilon has been tagged, create a Maven release branch with: git checkout -b maven-RELEASE RELEASE-TAG Set the version in the pom-plain.xml files: mvn -f pom-plain.xml versions:set Enter the version number of the release, and create a commit for it: git add ... git commit -m \"Set plain Maven versions to RELEASE\" Push the commit to Jenkins: git push If you need to make any other tweaks for the Maven release, you may want to try them here first rather than pollute master . Once the release is out, you may want to cherry-pick those tweaks back into master . Release to Maven Central \u00b6 The Jenkins build will automatically sign the plain Maven JARs and create a new staging repository in the OSSRH Sonatype Nexus server. 
It will also attempt to \"close\" it to modification, which will trigger the Maven Central validation rules. If one of these rules fails, the repository will be left open: the violations will be recorded in the Jenkins build logs, and you can try to manually close the repository and see those checks applied once more. As a precaution, we require all staging repositories to be manually checked before we release them to Maven Central. Once the Jenkins build passes, log into Sonatype OSS with your JIRA credentials and check the \"Staging Repositories\" section. Search for \"epsilon\" and you should be able to see the newly created staging repository. Select the repository and check in the \"Contents\" tab that everything is in order. If you are not happy with it, you can drop the repository, add more commits to the Maven release branch, and retry the upload. If you are happy with the contents, click on \"Release\" and enter an appropriate message in the \"Reason\" field (usually, \"Stable release RELEASE of Eclipse Epsilon\" suffices). After about an hour or so, the staging repository will disappear, and after a few hours the contents of the repository should be available from Maven Central . This may take up to a day, so be patient!","title":"Releasing Epsilon to Maven Central"},{"location":"doc/articles/maven-release/#releasing-epsilon-to-maven-central","text":"This article describes the overall process required to release a new stable version of Epsilon to Maven Central. There are a few steps involved, some of which are outside our control. The guide will describe the steps that we do control, and point you to the relevant resources for the others.","title":"Releasing Epsilon to Maven Central"},{"location":"doc/articles/maven-release/#preparation","text":"The first step is to gain deploy rights to our org.eclipse.epsilon groupId in the Sonatype OSS Nexus repository. 
To do this, please register at the Sonatype JIRA and give your JIRA username to the Epsilon release engineer(s), so we may file a ticket to have deploy rights granted to you.","title":"Preparation"},{"location":"doc/articles/maven-release/#testing-the-plain-maven-build","text":"Our plain Maven artifacts are built through a parallel hierarchy of pom-plain.xml files, starting from the root of the Epsilon repository. To do a plain Maven compilation + test build from scratch, simply run this: mvn -f pom-plain.xml clean test Keep in mind that plain Maven builds do not run unit tests, as we already run those in the Tycho build. Make sure that all tests pass in the Tycho build first. Double check the dependencies in the various pom-plain.xml files, especially those related to external libraries. Check the project metadata in the pom-plain.xml file, which lists the current developers, SCM URLs, and other details.","title":"Testing the Plain Maven build"},{"location":"doc/articles/maven-release/#preparing-a-maven-release-branch","text":"Once the new stable version of Epsilon has been tagged, create a Maven release branch with: git checkout -b maven-RELEASE RELEASE-TAG Set the version in the pom-plain.xml files: mvn -f pom-plain.xml versions:set Enter the version number of the release, and create a commit for it: git add ... git commit -m \"Set plain Maven versions to RELEASE\" Push the commit to Jenkins: git push If you need to make any other tweaks for the Maven release, you may want to try them here first rather than pollute master . Once the release is out, you may want to cherry-pick those tweaks back into master .","title":"Preparing a Maven release branch"},{"location":"doc/articles/maven-release/#release-to-maven-central","text":"The Jenkins build will automatically sign the plain Maven JARs and create a new staging repository in the OSSRH Sonatype Nexus server. 
It will also attempt to \"close\" it to modification, which will trigger the Maven Central validation rules. If one of these rules fails, the repository will be left open: the violations will be recorded in the Jenkins build logs, and you can try to manually close the repository and see those checks applied once more. As a precaution, we require all staging repositories to be manually checked before we release them to Maven Central. Once the Jenkins build passes, log into Sonatype OSS with your JIRA credentials and check the \"Staging Repositories\" section. Search for \"epsilon\" and you should be able to see the newly created staging repository. Select the repository and check in the \"Contents\" tab that everything is in order. If you are not happy with it, you can drop the repository, add more commits to the Maven release branch, and retry the upload. If you are happy with the contents, click on \"Release\" and enter an appropriate message in the \"Reason\" field (usually, \"Stable release RELEASE of Eclipse Epsilon\" suffices). After about an hour or so, the staging repository will disappear, and after a few hours the contents of the repository should be available from Maven Central . This may take up to a day, so be patient!","title":"Release to Maven Central"},{"location":"doc/articles/minimal-examples/","text":"Constructing a helpful minimal example \u00b6 From time to time, you may run into a problem when using Epsilon or find a bug. In these instances, we're happy to provide technical support and we endeavour to ensure that no question on our forum goes unanswered. We often ask users to supply a minimal example that we can use to reproduce the problem on our machine. A high-quality example often allows us to send a much quicker and more accurate response. This article describes how to put together a useful example. Please include the following: The version of Epsilon that you're running. 
Instructions for reproducing the problem A minimal version of all of the artefacts needed to reproduce the problem: models, metamodels (e.g. .ecore files), Epsilon programs (e.g. .eol, .evl, .etl, .egl files) Where applicable, Eclipse launch configurations or Ant build files for your Epsilon programs. An Eclipse project containing the minimal artefacts (and launch configurations or Ant build files). Please refrain from including files and folders that are not part of an Eclipse project as it is not always clear what we are expected to do with them. The remainder of this article contains hints and tips for each of the above. Once you have a minimal example, please attach it to a message in the forum or email it to us. Finding the version of Epsilon \u00b6 When developing and maintaining Epsilon, we often work on several versions of Epsilon at once: we maintain separate interim and stable versions, and we often use separate development branches for experimental features. Consequently, we need to ensure that we're running the same version of Epsilon as you in order to reproduce your problem. To identify which version of Epsilon you have: Click Help\u2192About Eclipse (on Mac OS X click Eclipse\u2192About Eclipse ). Click the Installation Details button Depending on how Epsilon has been installed, its version number may appear on the list of Installed Software : If not, click Plug-ins . Sort the list by the Plug-in id column by clicking the column title. Locate the row with org.eclipse.epsilon.eol.engine as its plug-in id, as shown below. Instructions for reproducing the problem \u00b6 When reproducing your problem requires more than one or two steps, a short set of instructions is a great help for us. Please try to provide a list of steps that we can follow to reproduce the problem. For example: Open Example.model, and add a new Node with name \"foo\". Run the Foo2Bar.etl transformation with the supplied launch configuration. Open Example.model. 
Note that the Node that you added has not changed: it has not been transformed! The Node named \"foo\" should now be named \"bar\". A minimal version \u00b6 Often, Epsilon users are manipulating large models with many thousands of elements, or executing Epsilon programs with many hundreds of lines of code. When investigating a problem or fixing a bug, it is extremely helpful for us to receive a minimal project that focuses exactly on the problem that you are encountering. In particular, please provide: A small number of models, metamodels and Epsilon programs (ideally 1 of each). Small models and metamodels (ideally with very few model elements). Small programs (ideally containing only the code required to reproduce the problem). Tip Although it can take a little extra time for you to produce a minimal example, we really appreciate it. A minimal example allows us to spend more time fixing the problem and providing advice, and much less time trying to reproduce the problem on our computer. Also, based on our experience, messages that provide a minimal example tend to get answered much faster. On the other hand, examples which indicate little/no effort from the reporter's side to narrow down the problem (e.g. complete code dumps) tend to be pushed back to the end of the queue and can take significantly longer to investigate. In some cases, building a minimal example is a great way to troubleshoot the problem that you're experiencing, and you may even find a solution to the problem while doing so. Epsilon launch configurations \u00b6 When launching an Epsilon program from within Eclipse, it is common to produce a launch configuration, which defines the models on which an Epsilon program is executed. By default, Eclipse does not store these launch configurations in your workspace and hence they are not included in projects that are exported from your workspace. To store an existing launch configuration in your workspace: Click Run\u2192Run Configurations . 
Select the Epsilon program for which you wish to store a launch configuration from the left-hand pane. Select the Common tab. By default, under Save as the Local option is selected. Click Shared file and then Browse . Select the project that contains the Epsilon program from the dialogue box, and then click Ok , as shown below. Click Apply . Close the Run Configurations dialogue box. Eclipse will create a new .launch file in your project, which contains all of the information needed to launch your Epsilon program, as shown below. Exporting an Eclipse project from your workspace \u00b6 Once you have created a project containing a minimal example (and launch configurations or Ant scripts), you can create an archive file which can be emailed to us: Right-click your Project Click Export... Under the General category, select Archive File and click Next . Ensure that the project(s) that you wish to export are checked in the left-hand pane. Supply a file name in the To archive file text box. Click Finish . Please email the resulting archive file to us.","title":"Constructing a helpful minimal example"},{"location":"doc/articles/minimal-examples/#constructing-a-helpful-minimal-example","text":"From time to time, you may run into a problem when using Epsilon or find a bug. In these instances, we're happy to provide technical support and we endeavour to ensure that no question on our forum goes unanswered. We often ask users to supply a minimal example that we can use to reproduce the problem on our machine. A high-quality example often allows us to send a much quicker and more accurate response. This article describes how to put together a useful example. Please include the following: The version of Epsilon that you're running. Instructions for reproducing the problem A minimal version of all of the artefacts needed to reproduce the problem: models, metamodels (e.g. .ecore files), Epsilon programs (e.g. 
.eol, .evl, .etl, .egl files) Where applicable, Eclipse launch configurations or Ant build files for your Epsilon programs. An Eclipse project containing the minimal artefacts (and launch configurations or Ant build files). Please refrain from including files and folders that are not part of an Eclipse project as it is not always clear what we are expected to do with them. The remainder of this article contains hints and tips for each of the above. Once you have a minimal example, please attach it to a message in the forum or email it to us.","title":"Constructing a helpful minimal example"},{"location":"doc/articles/minimal-examples/#finding-the-version-of-epsilon","text":"When developing and maintaining Epsilon, we often work on several versions of Epsilon at once: we maintain separate interim and stable versions, and we often use separate development branches for experimental features. Consequently, we need to ensure that we're running the same version of Epsilon as you in order to reproduce your problem. To identify which version of Epsilon you have: Click Help\u2192About Eclipse (on Mac OS X click Eclipse\u2192About Eclipse ). Click the Installation Details button Depending on how Epsilon has been installed, its version number may appear on the list of Installed Software : If not, click Plug-ins . Sort the list by the Plug-in id column by clicking the column title. Locate the row with org.eclipse.epsilon.eol.engine as its plug-in id, as shown below.","title":"Finding the version of Epsilon"},{"location":"doc/articles/minimal-examples/#instructions-for-reproducing-the-problem","text":"When reproducing your problem requires more than one or two steps, a short set of instructions is a great help for us. Please try to provide a list of steps that we can follow to reproduce the problem. For example: Open Example.model, and add a new Node with name \"foo\". Run the Foo2Bar.etl transformation with the supplied launch configuration. Open Example.model. 
Note that the Node that you added has not changed: it has not been transformed! The Node named \"foo\" should now be named \"bar\".","title":"Instructions for reproducing the problem"},{"location":"doc/articles/minimal-examples/#a-minimal-version","text":"Often, Epsilon users are manipulating large models with many thousands of elements, or executing Epsilon programs with many hundreds of lines of code. When investigating a problem or fixing a bug, it is extremely helpful for us to receive a minimal project that focuses exactly on the problem that you are encountering. In particular, please provide: A small number of models, metamodels and Epsilon programs (ideally 1 of each). Small models and metamodels (ideally with very few model elements). Small programs (ideally containing only the code required to reproduce the problem). Tip Although it can take a little extra time for you to produce a minimal example, we really appreciate it. A minimal example allows us to spend more time fixing the problem and providing advice, and much less time trying to reproduce the problem on our computer. Also, based on our experience, messages that provide a minimal example tend to get answered much faster. On the other hand, examples which indicate little/no effort from the reporter's side to narrow down the problem (e.g. complete code dumps) tend to be pushed back to the end of the queue and can take significantly longer to investigate. In some cases, building a minimal example is a great way to troubleshoot the problem that you're experiencing, and you may even find a solution to the problem while doing so.","title":"A minimal version"},{"location":"doc/articles/minimal-examples/#epsilon-launch-configurations","text":"When launching an Epsilon program from within Eclipse, it is common to produce a launch configuration, which defines the models on which an Epsilon program is executed. 
By default, Eclipse does not store these launch configurations in your workspace and hence they are not included in projects that are exported from your workspace. To store an existing launch configuration in your workspace: Click Run\u2192Run Configurations . Select the Epsilon program for which you wish to store a launch configuration from the left-hand pane. Select the Common tab. By default, under Save as the Local option is selected. Click Shared file and then Browse . Select the project that contains the Epsilon program from the dialogue box, and then click Ok , as shown below. Click Apply . Close the Run Configurations dialogue box. Eclipse will create a new .launch file in your project, which contains all of the information needed to launch your Epsilon program, as shown below.","title":"Epsilon launch configurations"},{"location":"doc/articles/minimal-examples/#exporting-an-eclipse-project-from-your-workspace","text":"Once you have created a project containing a minimal example (and launch configurations or Ant scripts), you can create an archive file which can be emailed to us: Right-click your Project Click Export... Under the General category, select Archive File and click Next . Ensure that the project(s) that you wish to export are checked in the left-hand pane. Supply a file name in the To archive file text box. Click Finish . Please email the resulting archive file to us.","title":"Exporting an Eclipse project from your workspace"},{"location":"doc/articles/modular-flexmi/","text":"Modularity Mechanisms in Flexmi \u00b6","title":"Modularity Mechanisms in Flexmi"},{"location":"doc/articles/modular-flexmi/#modularity-mechanisms-in-flexmi","text":"","title":"Modularity Mechanisms in Flexmi"},{"location":"doc/articles/parallel-execution/","text":"Multi-threaded execution of Epsilon programs \u00b6 Some of Epsilon's languages support parallel execution, which can leverage multiple hardware threads to improve performance. 
To enable this, head to the Advanced tab and select a parallel implementation. Where there are multiple implementations, prefer the \"Elements\" or \"Atom\" ones. An \"Atom\" is a tuple of a module element and model element, so for example a \"ContextAtom\" in EVL is a context-element pair - that is, the granularity of parallelisation will be at the model element level (one job for every model element). Note that the modelling technology must also be able to handle concurrent query operations. Most modelling technologies will likely be supported for read-only model management tasks such as validation and code generation; however, some that rely on external tools (e.g. Simulink) cannot handle concurrent operations. In any case, since most models support caching, the cache must be set up to support concurrency. You should ensure that the appropriate concurrency support option is checked in the model configuration. Note that when choosing a parallel implementation, first-order operations such as select , exists etc. will also be parallelised automatically where appropriate. This applies in particular to the parallel EOL implementation. Annotation-based parallelism \u00b6 In cases where an \"Annotation-based\" implementation is available, you can choose which rules are parallelised with the @parallel annotation. For example in EVL: context ModelElementType { @parallel constraint Invariant { check { // ... } } } If further control is required, you can also choose whether a rule will be executed in parallel on a per-element basis using an executable annotation. This allows you to write a Boolean EOL expression to determine whether a given model element should be executed in parallel for the annotated rule. You can access the model element in the annotation with self as usual, as well as any operations or variables in scope. Any rules not annotated will be executed sequentially. 
pre { var parallelThreshold = 9001; } context ModelElementType { $parallel self.children.size() > parallelThreshold; constraint Invariant { check { // ... } } } Limitations \u00b6 Currently, Epsilon does not support assignment of extended properties when executing in parallel. Parallel operations also cannot be nested.","title":"Multi-threaded execution of Epsilon programs"},{"location":"doc/articles/parallel-execution/#multi-threaded-execution-of-epsilon-programs","text":"Some of Epsilon's languages support parallel execution, which can leverage multiple hardware threads to improve performance. To enable this, head to the Advanced tab and select a parallel implementation. Where there are multiple implementations, prefer the \"Elements\" or \"Atom\" ones. An \"Atom\" is a tuple of a module element and model element, so for example a \"ContextAtom\" in EVL is a context-element pair - that is, the granularity of parallelisation will be at the model element level (one job for every model element). Note that the modelling technology must also be able to handle concurrent query operations. Most modelling technologies will likely be supported for read-only model management tasks such as validation and code generation; however, some that rely on external tools (e.g. Simulink) cannot handle concurrent operations. In any case, since most models support caching, the cache must be set up to support concurrency. You should ensure that the appropriate concurrency support option is checked in the model configuration. Note that when choosing a parallel implementation, first-order operations such as select , exists etc. will also be parallelised automatically where appropriate. 
This applies in particular to the parallel EOL implementation.","title":"Multi-threaded execution of Epsilon programs"},{"location":"doc/articles/parallel-execution/#annotation-based-parallelism","text":"In cases where an \"Annotation-based\" implementation is available, you can choose which rules are parallelised with the @parallel annotation. For example in EVL: context ModelElementType { @parallel constraint Invariant { check { /